Search results: 156 found, 100 shown.
Publication date: 04-01-2018

CLUSTER BASED PHOTO NAVIGATION

Number: US20180005036A1

The technology relates to navigating imagery that is organized into clusters based on common patterns exhibited when imagery is captured. For example, a set of captured images which satisfy a predetermined pattern may be determined. The images in the set of captured images may be grouped into one or more clusters according to the predetermined pattern. A request to display a first cluster of the one or more clusters may be received and, in response, a first captured image from the requested first cluster may be selected. The selected first captured image may then be displayed.

1. A method for organizing and navigating image clusters comprising: accessing, by one or more processing devices, a set of captured images; detecting, by the one or more processing devices, whether images within the set of captured images satisfy a predetermined pattern; and grouping, by the one or more processing devices, the images in the set of captured images into one or more clusters according to the detected predetermined pattern; receiving, by the one or more processing devices, a request to display a first cluster of the one or more clusters of captured images; selecting, by the one or more processing devices in response to the request, a first captured image from the first cluster to display; and providing the first captured image from the first cluster for display.

2. The method of claim 1, wherein the predetermined pattern is one of a panoramic pattern, an orbit pattern, and a translation pattern.

3. The method of claim 1, further comprising: determining, by the one or more processing devices, from the images within the first cluster, a set of neighboring captured images that are within a predetermined proximity to the first captured image; assigning, by the one or more processing devices, one or more neighboring images of the first captured image from the set of neighboring captured images; and providing, by the one or more processing devices in response to a click ...
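The pattern-based grouping described above can be sketched in code. Nothing below comes from the patent itself: the capture-log format (position plus heading per shot), the thresholds, and the panoramic/translation heuristics are all illustrative assumptions.

```python
def detect_pattern(records, pos_tol=0.5, sweep_min=90.0):
    """Classify a burst of capture records as 'panoramic' or 'translation'.

    records: list of (x, y, heading_deg) tuples -- a hypothetical capture log.
    A panoramic burst keeps the camera nearly stationary while the heading
    sweeps; a translation burst moves the camera under a steady heading.
    """
    xs = [r[0] for r in records]
    ys = [r[1] for r in records]
    headings = [r[2] for r in records]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))  # positional extent
    sweep = max(headings) - min(headings)               # heading extent
    if spread <= pos_tol and sweep >= sweep_min:
        return "panoramic"
    if spread > pos_tol and sweep < sweep_min:
        return "translation"
    return "unclassified"


def group_into_clusters(bursts):
    """Group bursts of images into clusters keyed by the detected pattern."""
    clusters = {}
    for burst in bursts:
        clusters.setdefault(detect_pattern(burst), []).append(burst)
    return clusters
```

A stationary 360-degree sweep then classifies as panoramic, while a straight walk with a fixed heading classifies as translation.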

Publication date: 07-01-2016

FAILURE DETECTION APPARATUS AND FAILURE DETECTION PROGRAM

Number: US20160007018A1
Author: Ooi Takashi

In a failure detecting apparatus, an acquiring unit acquires a plurality of images captured by a plurality of imaging devices whose exposures are capable of being individually controlled. The plurality of images include an overlapped region that represents a region where images are overlapped. The region extracting unit extracts a plurality of overlapped regions from the plurality of images acquired by the acquiring unit. The feature extracting unit extracts image features from the plurality of overlapped regions extracted by the region extracting unit. Further, the comparing unit compares the image features between the plurality of overlapped regions, and the similarity determining unit determines whether or not the image features are similar based on a result of the comparison by the comparing unit. The failure determining unit determines a failure in the imaging device when the similarity determining unit determines the image features are not similar.

1. A failure detecting apparatus that detects failure of an imaging device, comprising: acquiring means for acquiring a plurality of images captured by a plurality of imaging devices in which exposures thereof are capable of being individually controlled, the plurality of images including an overlapped region that represents a region where images are overlapped; region extracting means for extracting a plurality of overlapped regions from the plurality of images acquired by the acquiring means; feature extracting means for extracting features of image from the plurality of overlapped regions extracted by the region extracting means; comparing means for comparing the features of image between the plurality of overlapped regions; similarity determining means for determining whether or not the features of image are similar based on a result of comparing by the comparing means; and failure determining means for determining a failure in the imaging device when the similarity determining means determines the ...
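A minimal sketch of this comparison pipeline, assuming toy grayscale regions as nested lists and a histogram-intersection similarity measure; the patent does not specify the feature type or the similarity metric, so both are illustrative choices.

```python
def histogram(region, bins=8):
    """Normalized intensity histogram of a region (rows of 0-255 pixels)."""
    counts = [0] * bins
    total = 0
    for row in region:
        for v in row:
            counts[min(v * bins // 256, bins - 1)] += 1
            total += 1
    return [c / total for c in counts]


def similar(h1, h2, threshold=0.25):
    """Histogram-intersection distance; similar if distance is small."""
    distance = 1.0 - sum(min(a, b) for a, b in zip(h1, h2))
    return distance < threshold


def detect_failure(region_a, region_b):
    """Flag a camera failure when the two overlapped regions disagree."""
    return not similar(histogram(region_a), histogram(region_b))
```

Two healthy cameras imaging the same overlap should produce near-identical histograms; a dead sensor (e.g. all-black output) produces a very different one and is flagged.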

Publication date: 20-01-2022

GENERATING AN IMAGE OF THE SURROUNDINGS OF AN ARTICULATED VEHICLE

Number: US20220019815A1

Systems and methods for generating an image of the surroundings of an articulated vehicle are provided. According to an aspect of the invention, a processor determines a relative position between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle; receives a first image from a first camera arranged on the first vehicle and a second image from a second camera arranged on the second vehicle; and combines the first image and the second image based on the relative position between the first vehicle and the second vehicle to generate a combined image of surroundings of the articulated vehicle.

1. A method comprising: determining, by a processor, an angle between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle as the first and second vehicles rotate laterally relative to each other around a point at which the first and second vehicles are connected to each other, based on a relative position between a first camera arranged on the first vehicle and a second camera arranged on the second vehicle, the first and second cameras being located on a same side of the articulated vehicle; receiving, by the processor, a first image from the first camera arranged on the first vehicle and a second image from the second camera arranged on the second vehicle, the first and second images being obtained from the same side of the articulated vehicle; and combining, by the processor, the first image and the second image based on the relative position between the first camera and the second camera to generate a combined image of surroundings of the articulated vehicle on the same side thereof, wherein the first image and the second image are combined by rotating the first image and the second image with respect to each other, based on the angle between the first vehicle and the second vehicle.

2. The method according to claim 1, wherein the angle is measured by an angular sensor arranged on the articulated vehicle. ...

Publication date: 15-01-2015

Moving-Object Position/Attitude Estimation Apparatus and Moving-Object Position/Attitude Estimation Method

Number: US20150015702A1
Assignee: Nissan Motor Co Ltd

A moving-object position/attitude estimation apparatus includes: an image-capturing unit configured to acquire a captured image; a comparative image acquiring unit configured to acquire a comparative image viewed from a predetermined position at a predetermined attitude angle; a likelihood setting unit configured to compare the captured image with the comparative image, to assign a high attitude angle likelihood to the comparative image, and to assign a high position likelihood to the comparative image; and a moving-object position/attitude estimation unit configured to estimate the attitude angle of the moving object based on the attitude angle of the comparative image assigned the high attitude angle likelihood and to estimate the position of the moving object based on the position of the comparative image assigned the high position likelihood.

Publication date: 19-01-2017

IMAGE PRODUCTION FROM VIDEO

Number: US20170017855A1
Assignee: GOOGLE INC.

Implementations generally relate to producing a still image from a video or series of continuous frames. In some implementations, a method includes receiving the frames that a capture device shot while moving in at least two dimensions. The method further includes analyzing the frames to determine changes of positions of objects in at least two of the frames due to movement of the objects in the scene relative to changes of positions of objects due to the movement of the capture device during the shoot time. The method further includes determining, based at least in part on the variability of the objects, one or more target subjects which the capture device captures during the shoot time. One or more still images are generated from the plurality of frames having at least a portion of the target subject.

1. A computer-implemented method to generate one or more still images, the method comprising: receiving a video of a scene captured by a capture device during a shoot time, wherein the capture device is moved in at least two dimensions during the shoot time; analyzing a plurality of frames of the video to determine variability of objects in at least two of the plurality of frames due to movement of the objects in the scene relative to movement of the capture device during the shoot time; determining, based at least in part on the variability of the objects, one or more target subjects which the capture device captures during at least a portion of the shoot time; and generating one or more still images based on the plurality of frames of the video having at least a portion of the target subject.

2. The method of claim 1, wherein the video is automatically captured by the capture device upon determination of a threshold initiation movement of the capture device prior to capture of the video, wherein the threshold initiation movement includes two or more movement characteristics.

3. The method of claim 2, wherein the two or more movement characteristics ...

Publication date: 21-01-2016

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20160019705A1
Author: KONDO Tetsujiro

An image processing apparatus includes: a digital processing unit that performs digital processing on accepted one or at least two images, thereby acquiring one or at least two processed images; a physical property information acquiring unit that acquires physical property information, which is information relating to a physical property that has been lost compared with one or more physical properties of a target contained in the images, in the one or at least two processed images; a physical property processing unit that performs physical property processing, which is processing for adding a physical property corresponding to the physical property information, using the one or at least two processed images; and an output unit that outputs a processed image subjected to the physical property processing. Accordingly, it is possible to reproduce lost physical properties of a target expressed in an image, to the fullest extent possible.

Publication date: 17-01-2019

Character/graphics recognition device, character/graphics recognition method, and character/graphics recognition program

Number: US20190019049A1

The controller applies a lighting pattern to the illumination unit and controls a timing to capture the image by the imaging unit, the lighting pattern being a combination of turning on and off of the plurality of illumination lamps.

Publication date: 31-01-2019

METHOD AND APPARATUS FOR COMBINING DATA TO CONSTRUCT A FLOOR PLAN

Number: US20190035099A1

Provided is a method and apparatus for combining perceived depths to construct a floor plan using cameras, such as depth cameras. The camera(s) perceive depths from the camera(s) to objects within a first field of view. The camera(s) are rotated to observe a second field of view partly overlapping the first field of view. The camera(s) perceive depths from the camera(s) to objects within the second field of view. The depths from the first and second fields of view are compared to find the area of overlap between the two fields of view. The depths from the two fields of view are then merged at the area of overlap to create a segment of a floor plan. The method is repeated wherein depths are perceived within consecutively overlapping fields of view and are combined to construct a floor plan of the environment as the camera is rotated.

Publication date: 31-01-2019

METHOD AND APPARATUS FOR COMBINING DATA TO CONSTRUCT A FLOOR PLAN

Number: US20190035100A1

Provided is a process, including obtaining, with a robot, raw pixel intensity values of a first image and raw pixel intensity values of a second image, wherein the first image and the second image are taken from different positions; determining, with one or more processors, an overlapping area of a field of view of the first image and of a field of view of the second image by comparing the raw pixel intensity values of the first image to the raw pixel intensity values of the second image; spatially aligning, with one or more processors, values based on sensor readings of the robot based on the overlapping area; and inferring, with one or more processors, features of a working environment of the robot based on the spatially aligned sensor readings.

1. One or more tangible, non-transitory, machine-readable media storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, with a robot, raw pixel intensity values of a first image and raw pixel intensity values of a second image, wherein the first image and the second image are taken from different positions; determining, with one or more processors, an overlapping area of a field of view of the first image and of a field of view of the second image by comparing the raw pixel intensity values of the first image to the raw pixel intensity values of the second image; spatially aligning, with one or more processors, values based on sensor readings of the robot based on the overlapping area; and inferring, with one or more processors, features of a working environment of the robot based on the spatially aligned sensor readings.

2. The one or more media of claim 1, wherein: determining the overlapping area is performed by one or more processors of the robot; spatially aligning sensor readings of the robot is performed by one or more processors of the robot; or inferring features of the working environment of the robot is performed by one or more processors of the robot.

3. The one ...
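The overlap search by raw pixel intensity comparison can be illustrated in one dimension: slide one image past the other and keep the offset where the intensities agree best. The sum-of-squared-differences cost and the row-based toy data are assumptions for this sketch, not the patent's method.

```python
def find_overlap(row_a, row_b):
    """Find the offset where the tail of row_a best matches the head of
    row_b, by comparing raw pixel intensities (normalized sum of squared
    differences). 1-D pixel rows stand in for full images."""
    best_offset, best_cost = None, float("inf")
    for offset in range(1, len(row_a)):        # candidate overlap start in row_a
        overlap = len(row_a) - offset
        if overlap > len(row_b):
            continue
        cost = sum((row_a[offset + i] - row_b[i]) ** 2 for i in range(overlap))
        cost /= overlap                        # normalize by overlap size
        if cost < best_cost:
            best_cost, best_offset = cost, offset
    return best_offset


def merge(row_a, row_b, offset):
    """Stitch the two rows together at the detected overlap."""
    return row_a[:offset] + row_b
```

With real images the same idea runs over 2-D windows, and the merged readings would then feed the spatial-alignment step of the claims.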

Publication date: 31-01-2019

System and method for constructing document image from snapshots taken by image sensor panel

Number: US20190037099A1
Assignee: Bidirectional Display Inc

In one aspect, the present disclosure provides an electronic device having a light source, a two-dimensional photosensor, the photosensor and the light source being stacked on top of each other, and a non-transitory computer readable memory. In one example, the mobile electronic device is configured to: capture two or more frames using the photosensor while light is emitted from the light source, identify common features in neighboring frames of said two or more frames, combine said two or more frames into an image based on the common features, such that the common features are spatially collocated in the image, and record the image to the memory.

Publication date: 08-02-2018

METHOD FOR DETERMINING THE OFFSET BETWEEN THE CENTRAL AND OPTICAL AXES OF AN ENDOSCOPE

Number: US20180040139A1

Disclosed is a method for determining the offset or misalignment between the central or rotational axis and the optical axis of a rigid endoscope or a similar imaging device including a rigid body having an outer casing cylindrically-shaped in the direction of the optical axis, or including at least one segment having a rigid end with such a casing. The method includes taking a plurality of images with a field of view limited by a contour, the positioning of which relative to the central axis is, for each image, physically defined and specific, a relative angular rotation between the contour and the endoscope taking place between two successive images, and determining a point or a pixel in the successively acquired images whose position remains unchanged, the point corresponding to the projection in the image plane of the central or rotational axis of the rigid body of the endoscope.

1. Procedure for determining the offset or misalignment between the central or rotating axis and the optical axis of a rigid endoscope or similar camera device, consisting of a rigid body with a cylindrical external casing, profiled in the direction of the optical axis, or consisting of at least one rigid segment end with such a casing, the procedure comprising taking a series of shots, using a camera or similar sensory device that is part of the endoscope or similar device, with a field of vision restricted by a contour that is polygonal, circular, or elliptical in shape, whose positioning in relation to the central or rotating axis (Δ) for each shot is physically defined and specific, with a relative angular rotation between the contour and the endoscope or similar device intervening between two successive shots, and determining a point or a pixel (PI, CΔ) in the images acquired successively whose position remains unchanged between the various shots, this point or pixel (PI, CΔ) corresponding to the projection in the ...

Publication date: 15-02-2018

HIGH SECURITY KEY SCANNING SYSTEM

Number: US20180046881A1

A high security key scanning system and method is provided. The scanning system may comprise a sensing device configured to determine information and characteristics of a master high security key, and a digital logic to analyze the information and characteristics of the master key. The sensing device may be configured to capture information about the geometry of features cut into the surface of the master key. The logic may analyze the information related to that geometry and compare it to known characteristics of that style of high security key in order to determine the data needed to replicate the features on a new high security key blank. The system may be configured to capture the surface geometry using a camera or other imaging device. The system may utilize object coating techniques, illumination techniques, filtering techniques, image processing techniques, and feature extraction techniques to capture the desired features.

1. A high security key scanning system comprising: an imaging device configured to capture at least one image of a portion of a first side of a blade of a high security master key; one or more light sources positioned to direct light along a light path towards the first side of said blade of the high security master key at the imaging position; a mirror positioned towards the imaging position to align an optical path with said imaging device, said optical path traverses through said light path of said one or more light sources; wherein said at least one image reveals surface features formed into a face of at least a portion of said blade; and a logic configured to analyze the captured image to determine characteristics of said surface features.

2. The key scanning system of claim 1, wherein said high security master key is a sidewinder key.

3. The key scanning system of claim 1, wherein the plurality of light sources are controlled to be individually turned on and off to direct light onto the surface of the high security master key.

4. The key ...

Publication date: 13-02-2020

METHOD AND SYSTEM FOR DETECTING A RAISED OBJECT LOCATED WITHIN A PARKING AREA

Number: US20200050865A1

A method for detecting a raised object located within a parking area, using at least two video cameras, which are spatially distributed inside of the parking area, and whose respective visual ranges overlap in an overlapping region; the method including the following steps: recording respective video images of the overlapping region, using the video cameras; analyzing the recorded images, in order to detect a raised object in the recorded video images; the analyzing being carried out exclusively by at least one of the video cameras, inside of the video camera(s). A corresponding system, a parking area and a computer program are also described.

1-15. (canceled)

16. A method for detecting a raised object located within a parking area, using at least two video cameras, which are spatially distributed inside of the parking area, and whose respective visual ranges overlap in an overlapping region, the method comprising: recording specific video images of the overlapping region, using the video cameras; analyzing the recorded video images to detect a raised object in the recorded video images; wherein the analyzing is carried out exclusively by at least one of the video cameras, inside of the at least one of the video cameras.

17. The method as recited in claim 16, wherein the analyzing is carried out with the aid of a plurality of the video cameras, each of the plurality of video cameras analyzing the recorded video images independently of each other.

18. The method as recited in claim 16, wherein a plurality of the video cameras are spatially distributed within the parking area, and at least two of the video cameras of the plurality of video cameras are selected as the video cameras to be used, whose respective visual ranges overlap in the overlapping region.

19. The method as recited in claim 18, wherein the analyzing of the recorded video images is carried out with the aid of one or more of the selected video cameras, inside of ...

Publication date: 14-02-2019

METHODS AND APPARATUS TO CAPTURE PHOTOGRAPHS USING MOBILE DEVICES

Number: US20190052801A1

Methods and apparatus to capture photographs using mobile devices are disclosed. An example apparatus includes a photograph capturing controller to capture a photograph with a mobile device. The apparatus further includes a blurriness analyzer to determine a probability of blurriness of the photograph. The example apparatus also includes a photograph capturing interface to prompt a user to capture a new photograph when the probability of blurriness exceeds a blurriness threshold.

1. An apparatus comprising: a photograph capturing controller to capture a photograph with a mobile device; a blurriness analyzer to determine a probability of blurriness of the photograph; and a photograph capturing interface to prompt a user to capture a new photograph when the probability of blurriness exceeds a blurriness threshold.

2. The apparatus of claim 1, wherein the blurriness analyzer is to determine the probability of blurriness by: applying an edge detection filter to the photograph; identifying pixels having a pixel value above a pixel value threshold; estimating a variance of pixel values corresponding to the identified pixels; and calculating the probability of blurriness based on the estimated variance.

3. The apparatus of claim 2, wherein the variance of the pixel values is a first variance of a plurality of variances of the pixel values, the first variance associated with a first area of a plurality of areas of the photograph, the blurriness analyzer to estimate the first variance based on the pixels identified within the first area.

4. The apparatus of claim 3, wherein the blurriness analyzer is to calculate the probability of blurriness by applying a logistic model to the plurality of variances of the pixel values corresponding to the plurality of areas of the photograph.

5. The apparatus of claim 1, wherein the blurriness analyzer is to: divide the photograph into separate areas; and determine the probability of blurriness based on an analysis of a subset of ...
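Claim 2 spells out the blurriness steps concretely enough to sketch: edge filter, keep strong responses, take their variance, feed a logistic model. The one-dimensional gradient filter and the logistic coefficients below are illustrative stand-ins; the patent does not fix them.

```python
import math


def blur_probability(gray, value_threshold=10, k=0.05, midpoint=200.0):
    """Estimate a blurriness probability for a grayscale image (list of
    pixel rows) following the claimed steps. Low variance among strong
    edge responses is treated as evidence of blur."""
    # 1. Edge detection: horizontal gradient magnitude as a minimal filter.
    responses = []
    for row in gray:
        for x in range(1, len(row)):
            responses.append(abs(row[x] - row[x - 1]))
    # 2. Keep pixels whose response exceeds the threshold.
    strong = [r for r in responses if r > value_threshold]
    if not strong:
        return 1.0                     # no edges at all: almost surely blurry
    # 3. Variance of the strong responses.
    mean = sum(strong) / len(strong)
    variance = sum((r - mean) ** 2 for r in strong) / len(strong)
    # 4. Logistic model: low edge variance -> high blur probability.
    return 1.0 / (1.0 + math.exp(k * (variance - midpoint)))
```

An interface following claim 1 would re-prompt the user whenever this probability exceeds its blurriness threshold.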

Publication date: 21-02-2019

IMAGE IDENTIFICATION APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Number: US20190057253A1
Author: TATSUMI Daisuke
Assignee: FUJI XEROX CO., LTD.

An image identification apparatus includes an extraction unit, an exclusion unit, and an identification unit. The extraction unit extracts lines from an image. The exclusion unit excludes from the objects to be identified a boundary delimiting the entire area of the image among the extracted lines. The identification unit identifies as an object multiple lines that are among the extracted lines and that are not excluded by the exclusion unit if the multiple lines are connected to each other.

1. An image identification apparatus comprising: an extraction unit that extracts lines from an image; an exclusion unit that excludes from objects to be identified a boundary delimiting an entire area of the image among the extracted lines; and an identification unit that identifies as an object a plurality of lines that are among the extracted lines and that are not excluded by the exclusion unit if the plurality of lines are connected to each other.

2. The image identification apparatus according to claim 1, wherein the exclusion unit excludes, as a region that corresponds to the boundary delimiting the entire area of the image, a region that is largest among regions that are included in the image and in which pixels of the extracted lines are aligned continuously.

3. The image identification apparatus according to claim 1, wherein the exclusion unit excludes a portion of the boundary delimiting the entire area of the image, the portion being located in a region that is not included in a region that overlaps a table region extracted from the image.

4. The image identification apparatus according to claim 2, wherein the exclusion unit excludes a portion of the boundary delimiting the entire area of the image, the portion being located in a region that is not included in a region that overlaps a table region extracted from the image.

5. The image identification apparatus according to claim 1, wherein the exclusion unit excludes as the boundary delimiting the entire area ...

Publication date: 01-03-2018

PARALLAX MINIMIZATION STITCHING METHOD AND APPARATUS USING CONTROL POINTS IN OVERLAPPING REGION

Number: US20180060682A1

Provided is a parallax minimization stitching method and apparatus using control points in an overlapping region. A parallax minimization stitching method may include defining a plurality of control points in an overlapping region of a first image and a second image received from a plurality of cameras, performing a first geometric correction by applying a homography to the control points, defining a plurality of patches based on the control points, and performing a second geometric correction by mapping the patches.

1. A parallax minimization stitching method, the method comprising: defining a plurality of control points in an overlapping region of a first image and a second image received from a plurality of cameras; performing a first geometric correction by applying a homography to the control points; defining a plurality of patches based on the control points; and performing a second geometric correction by mapping the patches.

2. The method of claim 1, wherein the performing of the second geometric correction comprises mapping the patches based on a cost function.

3. The method of claim 2, wherein the cost function is one of a correlation function, a mean squared error (MSE) function, a fast structure similarity (FSSIM) function, and a peak signal-to-noise ratio (PSNR) function.

4. The method of claim 1, further comprising: converting coordinates of the control points on which the first geometric correction is performed based on depth information.

5. The method of claim 4, wherein the converting of the coordinates comprises calculating the depth information based on at least one of distances and directions of matched feature points among a plurality of feature points included in the overlapping region, a distance and a direction of a control point included in the overlapping region, alignment information of at least one object, feature similarity index (FSIM) information of the overlapping region, or ...
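The first geometric correction, applying a 3x3 homography to control points, can be shown directly with NumPy. The translation homography below is just a convenient test case; a real stitcher would estimate H from matched features.

```python
import numpy as np


def apply_homography(H, points):
    """Map control points through a 3x3 homography H.

    points: (N, 2) array of (x, y) control points in one image's plane.
    Returns the corrected (N, 2) points in the other image's plane.
    """
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T                               # projective transform
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian


def translation_homography(tx, ty):
    """A pure-translation homography: shifts every point by (tx, ty)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])
```

The second geometric correction of the claims would then warp the per-control-point patches so that a cost function (MSE, PSNR, etc.) over the overlap is minimized.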

Publication date: 04-03-2021

ELECTRONIC DEVICE AND METHOD FOR RECOGNIZING CHARACTERS

Number: US20210064864A1

An electronic device according to an embodiment disclosed in the present document may comprise: an imaging device for generating image data; a communication circuit; at least one processor operatively connected to the imaging device and the communication circuit; and a memory operatively connected to the processor for storing a command.

1. An electronic device comprising: an imaging device configured to generate image data; a communication circuit; at least one processor operatively connected to the imaging device and the communication circuit; and a memory operatively connected to the processor to store instructions, wherein the instructions cause, when executed, the processor to: receive first image data including a first image from the imaging device; transmit the first image data to a first server through the communication circuit; receive first text data, including a first text recognized from the first image data, from the first server through the communication circuit; receive second image data, including a second image including a part of the first image, from the imaging device; and transmit second text data, including at least a part of the first text data, and a part of the second image data, not all of the second image data, to the first server through the communication circuit.

2. The electronic device of claim 1, wherein the instructions cause, when executed, the processor to determine the part of the second image data based at least partially on at least one of the first image, the second image and the first text data.

3. The electronic device of claim 2, wherein the part of the second image data includes no data related at least partially to the first image.

4. The electronic device of claim 1, wherein the first text data has a javascript object notation (JSON) format.

5. The electronic device of claim 4, wherein the first text data includes data about at least one coordinates related to the first text in the ...

Publication date: 28-02-2019

RECONSTRUCTING DOCUMENT FROM SERIES OF DOCUMENT IMAGES

Number: US20190065880A1

Systems and methods for reconstructing a document from a series of document images. An example method comprises: receiving a plurality of image frames, wherein each image frame of the plurality of image frames contains at least a part of an image of an original document; identifying a plurality of visual features in the plurality of image frames; performing spatial alignment of the plurality of image frames based on matching the identified visual features; splitting each of the plurality of image frames into a plurality of image fragments; identifying one or more text-depicting image fragments among the plurality of image fragments; associating each identified text-depicting image fragment with an image frame in which that image fragment has an optimal value of a pre-defined quality metric among values of the quality metric for that image fragment in the plurality of image frames; and producing a reconstructed image frame by blending image fragments from the associated image frames. 1. A method , comprising:receiving, by a computer system, a plurality of image frames, wherein each image frame of the plurality of image frames contains at least a part of an image of an original document;identifying a plurality of visual features in the plurality of image frames;performing spatial alignment of the plurality of image frames based on matching the identified visual features;splitting each of the plurality of image frames into a plurality of image fragments;identifying one or more text-depicting image fragments among the plurality of image fragments;associating each identified text-depicting image fragment with an image frame in which that image fragment has an optimal value of a pre-defined quality metric among values of the quality metric for that image fragment in the plurality of image frames; andproducing a reconstructed image frame by blending image fragments from the associated image frames.2. 
The method of claim 1 , further comprising:performing optical character ...
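The fragment-selection rule described above is an argmax of a quality metric per fragment position across frames. A minimal pure-Python sketch; the variance-of-Laplacian sharpness metric, the fixed 2×2 fragment grid, and hard cut-and-paste in place of blending are illustrative assumptions, not taken from the claims:

```python
def laplacian_variance(patch):
    """Sharpness proxy: variance of a 4-neighbour Laplacian over a 2-D grid."""
    h, w = len(patch), len(patch[0])
    vals = [4 * patch[y][x] - patch[y - 1][x] - patch[y + 1][x]
            - patch[y][x - 1] - patch[y][x + 1]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def split(frame, rows, cols):
    """Split a 2-D list into rows*cols equally sized fragments."""
    h, w = len(frame) // rows, len(frame[0]) // cols
    return [[[row[c * w:(c + 1) * w] for row in frame[r * h:(r + 1) * h]]
             for c in range(cols)] for r in range(rows)]

def reconstruct(frames, rows=2, cols=2):
    """For each fragment position, keep the fragment from the frame where it
    scores best on the quality metric, then reassemble the mosaic."""
    grids = [split(f, rows, cols) for f in frames]
    best = [[max(range(len(frames)),
                 key=lambda i: laplacian_variance(grids[i][r][c]))
             for c in range(cols)] for r in range(rows)]
    out = []
    for r in range(rows):
        for y in range(len(grids[0][r][0])):
            line = []
            for c in range(cols):
                line += grids[best[r][c]][r][c][y]
            out.append(line)
    return out, best
```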

More details
14-03-2019 publication date

TRACKING AND/OR ANALYZING FACILITY-RELATED ACTIVITIES

Number: US20190080274A1
Assignee:

A device may receive video of a facility from an image capture system. The video may show an individual within the facility, an object within the facility, or an activity being performed within the facility. The device may process the video using a technique to identify the individual within the facility, the object within the facility, or the activity being performed within the facility. The device may track the individual, the object, or the activity through the facility to facilitate an analysis of the individual, the object, or the activity. The device may perform the analysis of the individual, the object, or the activity using information related to tracking the individual, the object, or the activity. The device may perform an action related to the individual, the object, or the activity based on a result of the analysis. The action may positively impact operations of the facility. 1. A device, comprising: one or more processors to: receive video of a facility from an image capture system, the video showing at least one of an individual within the facility, an object within the facility, or an activity being performed within the facility; process the video using a technique to identify the individual within the facility, the object within the facility, or the activity being performed within the facility; track the individual, the object, or the activity through the facility to facilitate an analysis of the individual, the object, or the activity; perform the analysis of the individual, the object, or the activity using information related to tracking the individual, the object, or the activity; and perform an action related to the individual, the object, or the activity based on a result of the analysis, the action to positively impact operations of the facility. 2.
The device of claim 1 , where the one or more processors are further to:map information identifying the individual, the object, or the activity to a map of the facility after tracking the ...

More details
12-03-2020 publication date

METHOD FOR RECONSTRUCTING AN IMPRINT IMAGE FROM IMAGE PORTIONS

Number: US20200082149A1
Authors: GIRARD Fantin, NIAF Emilie
Assignee:

A method for reconstructing an imprint image, from a set of image portions, includes the steps of: extracting, from each image portion, a set of local points of interest and, for each point of local interest, calculating a descriptor vector that characterizes said point of local interest; for each pair of two image portions, evaluating a local interest points association score representative of a probability that the two image portions are contiguous on the imprint image; assembling the image portions of a best pair to form an assembled fragment; repeating the above steps by replacing each time, in the set of image portions, the two image portions of the best pair, until all the association scores of the remaining pairs are less than or equal to a predetermined threshold, and producing an assembly map of the image portions; merging the image portions to reproduce the imprint image. 1. A method for reconstructing at least one imprint image , representative of a papillary imprint , from a set of image portions acquired using at least one sensor , the reconstruction method comprising the steps , carried out by at least one electrical processing unit , of:extracting, from each image portion of the set of image portions, a set of local points of interest, said set of local points of interest characterizing the image portion and, for each point of local interest, calculating a descriptor vector that characterizes said point of local interest;for each pair of two image portions, evaluating from the sets of local interest points and the descriptor vectors of said two image portions an association score representative of a probability that the two image portions are contiguous on the imprint image;assembling the image portions of a best pair, with the highest association score, to form an assembled fragment;repeating the above steps by replacing each time, in the set of image portions, the two image portions of the best pair by the assembled fragment, until all the ...
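The greedy pairwise assembly loop in this abstract can be sketched independently of the descriptor machinery. In the sketch below, `score` is a caller-supplied stand-in for the local-interest-point association score, and assembled fragments are represented as tuples of original portion indices:

```python
def assemble(portions, score, threshold=0.0):
    """Greedy assembly: repeatedly merge the pair of (possibly already
    assembled) fragments with the highest association score, until no
    remaining pair scores above the threshold. Returns the assembled
    fragments, each as a tuple of original portion indices."""
    frags = [(i,) for i in range(len(portions))]
    while len(frags) > 1:
        best_pair, best_s = None, threshold
        for a in range(len(frags)):
            for b in range(a + 1, len(frags)):
                s = score(frags[a], frags[b])
                if s > best_s:
                    best_pair, best_s = (a, b), s
        if best_pair is None:   # all remaining scores <= threshold: stop
            break
        a, b = best_pair
        merged = frags[a] + frags[b]
        frags = [f for i, f in enumerate(frags) if i not in (a, b)]
        frags.append(merged)
    return frags
```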

More details
29-03-2018 publication date

Automatic Medical Image Retrieval

Number: US20180089840A1
Assignee:

A framework for automatic retrieval of medical images. In accordance with one aspect, the framework detects patches in a query image volume that contain at least a portion of an anatomical region of interest by using a first trained classifier. The framework determines disease probabilities by applying a second trained classifier to the detected patches, and selects, from the patches, a sub-set of informative patches with disease probabilities above a pre-determined threshold value. For a given patch from the sub-set of informative patches, the framework retrieves, from a database, patches that are most similar to the given patch. Image volumes associated with the retrieved patches are then retrieved from the database. A report based on the retrieved image volumes may then be generated and presented. 1. A system for image retrieval, comprising: a non-transitory memory device for storing computer readable program code; and a processor device in communication with the memory device, the processor being operative with the computer readable program code to perform steps including: constructing a database by using first and second trained classifiers, receiving a query image volume, detecting patches that contain at least a portion of a lung by applying the first trained classifier to the query image volume, determining lung disease probabilities by applying the second trained classifier to the detected patches, selecting, from the patches, a sub-set of informative patches with the lung disease probabilities above a pre-determined threshold value, for a given patch from the sub-set of informative patches, retrieving, from the database, one or more patches that are most similar to the given patch, retrieving, from the database, image volumes associated with the retrieved patches, and generating and presenting a report based on the retrieved image volumes. 2.
The system of wherein the processor is operative with the computer readable program code to ...
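The retrieval pipeline in the abstract is a filter-then-nearest-neighbour chain. A hedged sketch, where `detect`, `disease_prob` and the squared-Euclidean `distance` are placeholder stand-ins for the two trained classifiers and the similarity measure:

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors (a stand-in)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve_similar_volumes(query_patches, detect, disease_prob,
                             database, threshold=0.5, k=3):
    """Keep patches the detector accepts, filter to 'informative' patches
    whose disease probability exceeds the threshold, then pull the k nearest
    database patches per informative patch and return their volumes."""
    region = [p for p in query_patches if detect(p)]
    informative = [p for p in region if disease_prob(p) > threshold]
    volumes = []
    for p in informative:
        nearest = sorted(database, key=lambda e: distance(p, e["feature"]))[:k]
        volumes.extend(e["volume"] for e in nearest)
    return informative, volumes
```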

More details
25-03-2021 publication date

CALIBRATION OF A SURROUND VIEW CAMERA SYSTEM

Number: US20210092354A1
Assignee:

A method for automatic generation of calibration parameters for a surround view (SV) camera system is provided that includes capturing a video stream from each camera comprised in the SV camera system, wherein each video stream captures two calibration charts in a field of view of the camera generating the video stream; displaying the video streams in a calibration screen on a display device coupled to the SV camera system, wherein a bounding box is overlaid on each calibration chart, detecting feature points of the calibration charts, displaying the video streams in the calibration screen with the bounding box overlaid on each calibration chart and detected feature points overlaid on respective calibration charts, computing calibration parameters based on the feature points and platform dependent parameters comprising data regarding size and placement of the calibration charts, and storing the calibration parameters in the SV camera system. 1. A method comprising: capturing a plurality of video streams, wherein each of the plurality of video streams includes an image, wherein the image of each of the plurality of video streams includes a plurality of calibration charts, and wherein each of the plurality of video streams is from a respective one of a plurality of cameras; associating each of the plurality of calibration charts in the image of each of the plurality of video streams with a respective bounding box; in the image of each of the plurality of video streams, aligning each of the plurality of calibration charts within the respective bounding box; in the image of each of the plurality of video streams, detecting feature points within the respective bounding box for each of the plurality of calibration charts; generating a set of matrices based on the detected feature points and locations of the detected feature points; displaying the image of each of the plurality of video streams on a display, wherein the image of each of the plurality of video streams
...

More details
19-03-2020 publication date

IMAGING PROCESSING DEVICE, IMAGING SYSTEM AND IMAGING APPARATUS INCLUDING THE SAME, AND IMAGE PROCESSING METHOD

Number: US20200092474A1

A video display method for a video display device is provided. The video display method comprises: acquiring a plurality of captured images from each of a plurality of cameras for each frame; generating a plurality of correction images from the plurality of captured images by performing parallax correction in each of the plurality of captured images for each frame, the parallax correction being performed between images captured by adjacent cameras; and compositing the plurality of correction images to generate a 360° panoramic composite image for each frame. An amount of the parallax correction of each captured image in each frame is limited so that an amount of change from a previous amount of the parallax correction in a previous frame is within a range defined by a predetermined limitation value. 1. A video display method for a video display device, the video display method comprising: acquiring a plurality of captured images from each of a plurality of cameras for each frame; generating a plurality of correction images from the plurality of captured images by performing parallax correction in each of the plurality of captured images for each frame, the parallax correction being performed between images captured by adjacent cameras; and compositing the plurality of correction images to generate a 360° panoramic composite image for each frame, wherein an amount of the parallax correction of each captured image in each frame is limited so that an amount of change from a previous amount of the parallax correction in a previous frame is within a range defined by a predetermined limitation value. 2. The video display method of claim 1, wherein in a case where the amount of change from the previous amount of the parallax correction is within the range defined by the predetermined limitation value, a correction image is generated based on the amount of parallax correction, and a correction image is generated by using a value obtained by adding or subtracting the ...

More details
12-05-2022 publication date

IMAGE STITCHING APPARATUS, IMAGE PROCESSING CHIP AND IMAGE STITCHING METHOD

Number: US20220147752A1
Assignee:

An image stitching apparatus, an image processing chip and an image stitching method are provided. The image stitching apparatus includes a motion detection circuit, a determination circuit and a stitching circuit. The motion detection circuit performs motion detection on an overlapping area between a first image and a second image that are to undergo stitching to obtain a motion area having a moving object in the overlapping area. The determination circuit calculates a target stitching line using a constraint of avoiding the motion area. The stitching circuit stitches the first image and the second image according to the target stitching line to obtain a stitched image. 1. An image stitching apparatus , comprising:a motion detection circuit, performing motion detection on an overlapping area between a first image and a second image to obtain a motion area having a moving object in the overlapping area;a determination circuit, calculating a target stitching line in the overlapping area using a constraint of avoiding the motion area; anda stitching circuit, stitching the first image and the second image according to the target stitching line to obtain a stitched image.2. The image stitching apparatus according to claim 1 , further comprising:a difference calculation circuit, calculating at least one difference matrix between the first image and the second image with respect to the overlapping area;wherein, the determination circuit calculates the target stitching line using the constraint of avoiding the motion area according to the at least one difference matrix.3. The image stitching apparatus according to claim 2 , wherein the difference calculation circuit calculates a plurality of difference matrices between the first image and the second image with respect to the overlapping area according to a plurality of different difference calculation methods.4. The image stitching apparatus according to claim 2 , wherein the difference calculation circuit comprises:a ...

More details
19-04-2018 publication date

IMAGE PRODUCTION FROM VIDEO

Number: US20180107888A1
Assignee: Google LLC

Implementations generally relate to producing a still image from a video or series of continuous frames. In some implementations, a method includes receiving the frames that a capture device shot while moving in at least two dimensions. The method further includes analyzing the frames to determine changes of positions of objects in at least two of the frames due to movement of the objects in the scene relative to changes of positions of objects due to the movement of the capture device during the shoot time. The method further includes determining, based at least in part on the variability of the objects, one or more target subjects which the capture device captures during the shoot time. One or more still images are generated from the plurality of frames having at least a portion of the target subject. 1. A computer-implemented method comprising:detecting a change in two or more initiation movement characteristics of a capture device, wherein the two or more initiation movement characteristics include at least one of: a change in temperature and an increase in ambient light;determining a combination of at least two of the two or more initiation movement characteristics meets an initiation threshold indicating preparation to acquire images of a subject by the capture device;activating automatic capture of a plurality of images by the capture device in response to determining the combination meets an initiation threshold; andselecting at least one image of the plurality of images depicting at least a portion of the subject.2. The method of claim 1 , wherein detecting the change in the two or more initiation movement characteristics occurs as the capture device moves from a first environment to a second environment.3. The method of claim 2 , wherein the first environment includes a receptacle and the capture device moves from the receptacle to the second environment.4. 
The method of claim 2 , wherein the change of temperature includes a decrease in ambient temperature ...
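The initiation test in claim 1 combines several characteristic changes against a threshold before activating automatic capture. A toy sketch; the weights and the two-characteristic minimum are illustrative readings of the claim, not specified values:

```python
def should_activate(characteristics, weights, threshold):
    """Combine detected initiation-characteristic changes into a single
    score and trigger automatic capture when it meets the threshold.
    `characteristics` maps a name (e.g. 'temperature') to whether a
    change was detected; `weights` are assumed per-characteristic scores."""
    changed = [name for name, delta in characteristics.items() if delta]
    if len(changed) < 2:          # claim requires two or more characteristics
        return False
    score = sum(weights.get(name, 0.0) for name in changed)
    return score >= threshold
```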

More details
29-04-2021 publication date

RECOGNITION FOR OVERLAPPED PATTERNS

Number: US20210124907A1
Assignee:

In an approach, data of a plurality of points is sampled in a target area, wherein the data of each point of the plurality of points comprises position information and a height value. A first area of a target area is determined, wherein the height value of each point of the plurality of points in the first area complies with a first range. A second area of the target area is determined, wherein the height value of each point of the plurality of points in the second area complies with a second range. A third area of the target area is determined, wherein the height value of each point of the plurality of points in the third area complies with a third range. A first pattern is generated, wherein the first pattern is a combination of the first area and the third area. 1. A computer-implemented method comprising:obtaining, by one or more processors, data of a plurality of points sampled in a target area, wherein the data of each point of the plurality of points comprises position information and a height value, and wherein the position information indicates a position of a respective point in a reference plane of the target area and the height value indicates a vertical distance of the respective point to the reference plane;determining, by one or more processors, a first area of the target area, wherein the height value of each point of the plurality of points in the first area complies with a first range;determining, by one or more processors, a second area of the target area, wherein the height value of each point of the plurality of points in the second area complies with a second range;determining, by one or more processors, a third area of the target area, wherein the height value of each point of the plurality of points in the third area complies with a third range; andgenerating, by one or more processors, a first pattern that is a combination of the first area and the third area.2. The computer-implemented method of claim 1 , wherein the third range is ...

More details
11-04-2019 publication date

COMPARING EXTRACTED CARD DATA USING CONTINUOUS SCANNING

Number: US20190108415A1
Assignee:

Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. Threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud. 1-20. (canceled) 21. A computer-implemented method to provide extraction results by comparing extracted data from multiple images to identify matched extraction results, comprising: performing, by one or more computing devices, an optical character recognition algorithm on a first image of a plurality of images obtained from a digital scan of an object, the digital scan being a continuous digital scan; performing, by the one or more computing devices, the optical character recognition algorithm on a second image of the plurality of images obtained from the digital scan of the object; comparing, by the one or more computing devices, results from the performance of the optical character recognition on the first image with results from the performance of the optical character recognition on the second image; determining, by the one or more computing devices, if a configured threshold number of the results for the first image matches the results of the second image based on the comparison of the results of the performance of the optical character recognition algorithm on the ...

More details
26-04-2018 publication date

DEVICES, SYSTEMS, AND METHODS FOR ANOMALY DETECTION

Number: US20180114092A1
Assignee:

Devices, systems, and methods obtain a first image, obtain a second image, calculate respective distances between a histogram from a patch in the first image to respective histograms from patches in the second image, and identify a patch in the second image that is most similar to the patch in the first image based on the respective distances. 1. A device comprising: one or more processors; and one or more computer-readable media that are coupled to the one or more processors and that include instructions for: obtaining a first image, obtaining a second image, calculating respective distances between a histogram from a patch in the first image to respective histograms from patches in the second image, and identifying a patch in the second image that is most similar to the patch in the first image based on the respective distances. 2. The device of claim 1, wherein the patches in the second image include a central patch and other patches that are offset from the central patch. 3. The device of claim 2, wherein the central patch has an x, y position in the second image that is identical to an x, y position of the patch in the first image. 4. The device of claim 1, wherein the patches in the second image partially overlap each other. 5. The device of claim 4, wherein each patch in the first image and the second image is composed of two or more respective feature patches. 6. The device of claim 5, wherein each feature patch partially overlaps another feature patch. 7. The device of claim 1, wherein each distance of the respective distances indicates a cost of changing the histogram from the patch in the first image to a respective one of the histograms from the patches in the second image. 8. The device of claim 7, wherein calculating each of the respective distances includes calculating a no-cost shift between the histogram from the patch in the first image to a respective one of the histograms from the patches in the second image. 9. The ...

More details
09-04-2020 publication date

COMBINED INFORMATION FOR OBJECT DETECTION AND AVOIDANCE

Number: US20200108946A1
Assignee:

Described is an imaging component for use by an unmanned aerial vehicle ("UAV") for object detection. As described, the imaging component includes one or more cameras that are configured to obtain images of a scene using visible light that are converted into a depth map (e.g., stereo image) and one or more other cameras that are configured to form images, or thermograms, of the scene using infrared radiation ("IR"). The depth information and thermal information are combined to form a representation of the scene based on both depth and thermal information. 1-20. (canceled) 21. A method, comprising: receiving, from a first camera coupled to an aerial vehicle and having a first orientation, first image data of a scene using visible light; receiving, from a second camera coupled to the aerial vehicle and having the first orientation, second image data of the scene using visible light; receiving, from a sensor coupled to the aerial vehicle and having the first orientation, sensor data representative of the scene; and processing the first image data, the second image data, and the sensor data to produce a combined information representative of the scene. 22. The method of claim 21, wherein the combined information includes a horizontal dimension, a vertical dimension, and at least one of a depth dimension or a thermal dimension. 23. The method of claim 21, further comprising: detecting, based at least in part on the combined information, an object; determining, based at least in part on the first image data, the second image data, and the sensor data, whether the object is an object to avoid; and sending instructions to alter a navigation of the aerial vehicle in response to determining that the object is an object to avoid. 24. The method of claim 21, wherein the sensor is at least one of an infrared sensor or an ultrasonic sensor. 25. The method of claim 21, wherein a first field of view of the first camera, a second field of view of the ...

More details
13-05-2021 publication date

USER FEEDBACK FOR REAL-TIME CHECKING AND IMPROVING QUALITY OF SCANNED IMAGE

Number: US20210144353A1
Assignee: ML Netherlands C.V.

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as a real-time feedback. 1-21. (canceled) 22. A method of forming a representation of a three-dimensional (3D) environment, the method comprising: acquiring a plurality of image frames with a portable electronic device comprising a user interface; sequentially incorporating image frames of the plurality of image frames into the representation of the 3D environment; concurrently with sequentially incorporating the image frames into the representation of the 3D environment, analyzing a portion of the representation of the 3D environment to determine a quality of depiction of the 3D environment in the portion of the representation of the 3D environment; computing, based at least in part on the determined quality, a parameter of the portable electronic device; and generating, based at least in part on the computed parameter, feedback on the user interface, wherein the feedback comprises an indication to a user of the portable electronic device. 23. The method of claim 22, wherein the representation of the 3D environment comprises a composite image. 24.
The method of claim 22 , wherein the representation of the ...

More details
03-05-2018 publication date

METHOD OF TAKING A PICTURE WITHOUT GLARE

Number: US20180121746A1
Assignee: Engineering Innovation, Inc.

A glare reducing optical recognition system that recognizes alphanumeric text wherein the system includes a first light emitter that emits light in a first direction and a second light emitter that emits light in a second direction different from the first direction. The system includes an image capturing device that captures a first image of alphanumeric text illuminated by the first light emitter emitting light in the first direction, and a second image of the alphanumeric text illuminated by the second light emitter emitting light in the second direction. The system includes an image processor that constructs a glare reduced image by comparing sections of the first image with corresponding sections of the second image and selecting the section with the least luminosity to populate the corresponding section of the glare reduced image. The system may include a character recognition processor, a label producing apparatus, and/or a conveyance system. 1. A glare reducing optical recognition system for recognizing alphanumeric text, comprising: a first light emitter configured and arranged to emit light in a first direction; a second light emitter configured and arranged to emit light in a second direction different from the first direction; an image capturing device configured to capture a first image of alphanumeric text illuminated by the first light emitter emitting light in the first direction, and a second image of the alphanumeric text illuminated by the second light emitter emitting light in the second direction; an image processor configured to construct a glare reduced image by comparing sections of the first image with corresponding sections of the second image and selecting the section with the least luminosity to populate the corresponding section of the glare reduced image; and a character recognition processor configured to automatically perform optical character recognition on alphanumeric text contained within the glare reduced image produced by
said ...
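The glare-reduction rule is concrete enough to sketch directly: per section, keep the pixels from whichever image is darker there, since glare shows up as high luminosity. The section size and the sum-of-pixels luminosity measure are assumptions:

```python
def section_luminosity(img, y0, x0, size):
    """Total luminosity of a size x size section with top-left (y0, x0)."""
    return sum(img[y][x] for y in range(y0, y0 + size)
                         for x in range(x0, x0 + size))

def glare_reduced_image(img_a, img_b, size):
    """Build the glare-reduced image section by section: copy each section
    from whichever source image has the lower total luminosity there."""
    h, w = len(img_a), len(img_a[0])
    out = [[0] * w for _ in range(h)]
    for y0 in range(0, h, size):
        for x0 in range(0, w, size):
            src = (img_a if section_luminosity(img_a, y0, x0, size)
                   <= section_luminosity(img_b, y0, x0, size) else img_b)
            for y in range(y0, y0 + size):
                for x in range(x0, x0 + size):
                    out[y][x] = src[y][x]
    return out
```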

More details
16-04-2020 publication date

Method and Apparatus for Object Status Detection

Number: US20200118063A1
Assignee:

A method of object status detection for objects supported by a shelf, from shelf image data, includes: obtaining a plurality of images of a shelf, each image including an indication of a gap on the shelf between the objects; registering the images to a common frame of reference; identifying a subset of the gaps having overlapping locations in the common frame of reference; generating a consolidated gap indication from the subset; obtaining reference data including (i) identifiers for the objects and (ii) prescribed locations for the objects within the common frame of reference; based on a comparison of the consolidated gap indication with the reference data, selecting a target object identifier from the reference data; and generating and presenting a status notification for the target product identifier. 1. A method by an imaging controller of object status detection for objects supported by a shelf, from shelf image data, the method comprising: obtaining, at an image pre-processor of the imaging controller, a plurality of images of a shelf, each image including an indication of a gap on the shelf between the objects; registering, by the image pre-processor, the images to a common frame of reference; identifying, by the image pre-processor, a subset of the gaps having overlapping locations in the common frame of reference; generating, by the image pre-processor, a consolidated gap indication from the subset; obtaining, by a comparator of the imaging controller, reference data including (i) identifiers for the objects and (ii) prescribed locations for the objects within the common frame of reference; based on a comparison of the consolidated gap indication with the reference data, selecting, by the comparator, a target object identifier from the reference data; and generating and presenting, by a notifier of the imaging controller, a status notification for the target product identifier. 2.
The method of claim 1 , wherein the selecting comprises selecting a target object ...
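Consolidating overlapping gap detections is interval merging in the common frame of reference; comparing against the reference data then flags objects whose prescribed location falls inside a consolidated gap. One-dimensional shelf intervals are a simplification of the 2-D coordinates:

```python
def consolidate_gaps(gaps):
    """Merge gap detections (1-D intervals in the common frame of
    reference) whose locations overlap, yielding consolidated gaps."""
    merged = []
    for start, end in sorted(gaps):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(m) for m in merged]

def flag_missing(consolidated, reference):
    """Compare consolidated gaps with prescribed object locations and
    report identifiers of objects whose location lies inside a gap."""
    return [obj for obj, (lo, hi) in reference.items()
            if any(lo >= s and hi <= e for s, e in consolidated)]
```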

More details
17-05-2018 publication date

METHODS AND APPARATUS TO CAPTURE PHOTOGRAPHS USING MOBILE DEVICES

Number: US20180139381A1
Assignee:

Methods and apparatus to capture photographs using mobile devices are disclosed. An example apparatus includes a photograph capturing controller to capture a photograph with a mobile device. The example apparatus further includes a perspective analyzer, implemented by the mobile device, to analyze the photograph to determine a probability of perspective being present in the photograph. The example apparatus also includes a photograph capturing interface to prompt a user to capture a new photograph when the probability of perspective exceeds a threshold. 1. An apparatus comprising: a photograph capturing controller to capture a photograph with a mobile device; a perspective analyzer, implemented by the mobile device, to analyze the photograph to determine a probability of perspective being present in the photograph; and a photograph capturing interface to prompt a user to capture a new photograph when the probability of perspective exceeds a threshold. 2. The apparatus of claim 1, wherein the perspective analyzer is to determine the probability of perspective by: applying an edge detection filter to the photograph to determine edge pixels corresponding to edges in the photograph identified by the edge detection filter; evaluating potential edge lines passing through a first one of the edge pixels; determining a first one of the potential edge lines that passes through more of the edge pixels than other ones of the potential edge lines; and calculating the probability of perspective based on an angle of the first one of the potential edge lines. 3. The apparatus of claim 2, wherein a quantity of the edge pixels through which the potential edge lines pass is normalized by a length of respective ones of the potential edge lines, the length of the respective ones of the potential edge lines defined by a perimeter of the photograph. 4.
The apparatus of claim 2, wherein the perspective analyzer is to calculate a second one of the potential edge lines for a second one ...
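The edge-line scoring described in claims 2-3 can be illustrated with a small sketch. This is not the patented implementation: the function name, the 5° angle sweep, the collinearity tolerance, and the mapping from line tilt to a probability are all assumptions made for the example.

```python
import math

def perspective_probability(edge_pixels, anchor, angles_deg=range(0, 180, 5), tol=0.5):
    """Score candidate lines through `anchor` by how many edge pixels lie on
    them, then map the best line's tilt away from vertical/horizontal to a
    rough probability of perspective distortion in [0, 1]."""
    best_angle, best_count = 0, -1
    for a in angles_deg:
        theta = math.radians(a)
        # Count edge pixels whose perpendicular distance to the candidate line
        # through `anchor` with direction (cos(theta), sin(theta)) is within tol.
        count = sum(
            1 for (x, y) in edge_pixels
            if abs(-(x - anchor[0]) * math.sin(theta)
                   + (y - anchor[1]) * math.cos(theta)) <= tol
        )
        if count > best_count:
            best_angle, best_count = a, count
    # An edge photographed head-on sits near 0 deg or 90 deg; tilt hints at perspective.
    tilt = min(abs(best_angle - 0), abs(best_angle - 90), abs(best_angle - 180))
    return best_angle, tilt / 45.0
```

A perfectly vertical edge yields probability 0, while a 45° dominant line yields the maximum; a fuller analyzer would also normalize counts by each line's length inside the photograph, as claim 3 describes.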

More details
10-06-2021 publication date

GUIDED BATCHING

Number: US20210172757A1
Assignee: BLUE VISION LABS UK LIMITED

The present invention provides a method of generating a robust global map using a plurality of limited field-of-view cameras to capture an environment. 1. A computer-implemented method comprising:determining, by a computing system, subsets of image data associated with an area, wherein the subsets of image data are based on a set of images captured at the area;determining, by the computing system, a first group of the subsets of image data having image properties that are separated by a first distance that exceeds a similarity distance threshold to each other;determining, by the computing system, a second group of the subsets of image data having image properties that are separated by a second distance that is within the similarity distance threshold to each other;excluding, by the computing system, the first group of the subsets of image data from the determined subsets of image data based on the first distance exceeding the similarity distance threshold; andgenerating, by the computing system, a map portion associated with the area based on the excluded first group of the subsets of image data and the second group of the subsets of image data.2. The method of claim 1 , further comprising:generating, by the computing system, a graph of the subsets of image data, wherein nodes in the graph correspond to the subsets of image data, edges in the graph connect nodes that are within the similarity distance threshold, and the edges are weighted based on the image properties for the subsets of image data corresponding to the nodes connected by the edges.3. The method of claim 2 , further comprising:partitioning, by the computing system, the graph into subgraphs based on a measure of dissimilarity between the image data of the subgraphs and a measure of similarity between the image data within the subgraphs.4. 
The method of claim 3, wherein the partitioning the graph into subgraphs comprises performing a first level graph cut based on the edges that are weighted below a ...
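A minimal sketch of the similarity-graph grouping described above, under the assumption that each subset's image properties collapse to a single scalar so that "distance" is a plain absolute difference; the function and its signature are illustrative, not from the patent.

```python
from itertools import combinations

def group_by_similarity(features, threshold):
    """Connect subsets whose feature distance is within `threshold`, then
    return connected components (groups safe to batch into one map portion)."""
    n = len(features)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        # Stand-in for a distance between image properties of two subsets.
        if abs(features[i] - features[j]) <= threshold:
            adj[i].add(j)
            adj[j].add(i)
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        # Depth-first traversal collects one connected component.
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        groups.append(sorted(comp))
    return groups
```

Components whose members exceed the similarity distance threshold to everything else end up as singleton groups, mirroring the claim's exclusion of dissimilar subsets.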

More details
30-04-2020 publication date

METHOD, APPARATUS, AND SYSTEM FOR DETERMINING A GROUND CONTROL POINT FROM IMAGE DATA USING MACHINE LEARNING

Number: US20200134311A1
Assignee:

An approach is provided for determining a ground control point from image data using machine learning. The approach, for example, involves selecting a feature based on determining that the feature meets one or more properties for classification as a machine learnable feature. The approach also involves retrieving a plurality of ground truth images depicting the feature. The plurality of ground truth images is labeled with known pixel location data of the feature as respectively depicted in each of the plurality of ground truth images. The approach further involves training a machine learning model using the plurality of ground truth images to identify predicted pixel location data of the ground control point as depicted in an input image. 1. A computer-implemented method for determining a ground control point from image data comprising:selecting a feature as the ground control point based on determining that the feature meets one or more properties for classification as a machine learnable feature;retrieving a plurality of ground truth images depicting the feature, wherein the plurality of ground truth images is labeled with known pixel location data of the feature as respectively depicted in each of the plurality of ground truth images; andtraining a machine learning model using the plurality of ground truth images to identify predicted pixel location data of the ground control point as depicted in an input image.2. The method of claim 1, wherein the feature is selected based on determining that the one or more properties indicate that the feature is uniquely identifiable from among other features.3. The method of claim 1, wherein the feature is selected based on determining that the one or more properties indicate that the feature has a spatial sparsity that meets a sparsity criterion.4. 
The method of claim 1, wherein the feature is selected based on determining that the one or more properties indicate that the feature is applicable to a plurality of ...
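As a toy stand-in for the training step, the sketch below fits a one-dimensional linear model mapping an image feature to a known pixel coordinate by gradient descent. A real ground-control-point detector would be a deep network over whole images; every name and hyperparameter here is an assumption for illustration.

```python
def train_pixel_regressor(samples, lr=0.05, epochs=2000):
    """Learn u = w * x + b from (feature, labeled pixel coordinate) pairs,
    mimicking training on ground-truth images labeled with known pixel
    locations, shrunk to one dimension for illustration."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        # Mean-squared-error gradients over the labeled training set.
        gw = sum((w * x + b - u) * x for x, u in samples) / n
        gb = sum((w * x + b - u) for x, u in samples) / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Given labels generated by u = 2x + 1, the fit recovers slope 2 and intercept 1, analogous to the model learning to predict the ground control point's pixel location in a new input image.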

More details
31-05-2018 publication date

IMAGE PROCESSING METHOD AND IMAGE SYSTEM FOR TRANSPORTATION

Number: US20180150939A1
Assignee:

An image processing method is adapted to process images captured by at least two cameras in an image system. In an embodiment, the image processing method comprises: matching two corresponding feature points for two images, respectively, to become a feature point set; selecting at least five most suitable feature point sets, by using an iterative algorithm; calculating a most suitable radial distortion homography between the two images, according to the at least five most suitable feature point sets; and fusing the images captured by the at least two cameras at each of the timing sequences, by using the most suitable radial distortion homography. 1. An image processing method for transportation adapted to process images captured by at least two cameras in an image system, comprising:matching two corresponding feature points for two images, respectively, to become a feature point set;selecting at least five most suitable feature point sets, by using an iterative algorithm;calculating a most suitable radial distortion homography between the two images, according to the at least five most suitable feature point sets; andfusing the images captured by the at least two cameras at each of the timing sequences, by using the most suitable radial distortion homography.2. The image processing method according to claim 1, wherein before matching the two corresponding feature points for the two images, respectively, to become the feature point set, the image processing method further comprises:detecting a plurality of feature points of the two images.3. The image processing method according to claim 1, wherein after fusing the images captured by the at least two cameras at each of the timing sequences, by using the most suitable radial distortion homography, the image processing method further comprises:storing the most suitable radial distortion homography.4. The image processing method according to claim 1, further comprising:before ...
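The "iterative algorithm" for picking the most suitable feature point sets is commonly a RANSAC-style loop. Below is a generic, hedged sketch: the `fit`/`error` callbacks, sample size, iteration count, and tolerance are placeholders, and the demo fits a line rather than a radial distortion homography.

```python
import random

def ransac_select(matches, fit, error, n_samples=5, iters=200, inlier_tol=2.0, seed=0):
    """Generic RANSAC loop: repeatedly fit a model (e.g. a radial-distortion
    homography) to `n_samples` random correspondences and keep the model with
    the most inliers. `fit` and `error` are caller-supplied."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        sample = rng.sample(matches, n_samples)
        model = fit(sample)
        # Inliers are matches the candidate model explains within tolerance.
        inliers = sum(1 for m in matches if error(model, m) <= inlier_tol)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

For the patented method, `n_samples` would be five (the minimum for its radial distortion homography) and `fit` would solve that model from the sampled feature point sets.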

More details
22-09-2022 publication date

SYSTEM AND METHOD FOR TRACKING OCCLUDED OBJECTS

Number: US20220300748A1
Assignee: Toyota Research Institute, Inc.

A method for tracking an object performed by an object tracking system includes encoding locations of visible objects in an environment captured in a current frame of a sequence of frames. The method also includes generating a representation of a current state of the environment based on an aggregation of the encoded locations and an encoded location of each object visible in one or more frames of the sequence of frames occurring prior to the current frame. The method further includes predicting a location of an object occluded in the current frame based on a comparison of object centers decoded from the representation of the current state to object centers saved from each prior representation associated with a different respective frame of the sequence of frames occurring prior to the current frame. The method still further includes adjusting a behavior of an autonomous agent in response to identifying the location of the occluded object. 1. A method for tracking occluded objects performed by an object tracking system , comprising:encoding locations of visible objects in an environment captured in a current frame of a sequence of frames;generating a representation of a current state of the environment based on an aggregation of the encoded locations and an encoded location of each object visible in one or more frames of the sequence of frames occurring prior to the current frame;predicting a location of an object occluded in the current frame based on a comparison of object centers decoded from the representation of the current state to object centers saved from each prior representation associated with a different respective frame of the sequence of frames occurring prior to the current frame; andadjusting a behavior of an autonomous agent in response to identifying the location of the occluded object.2. 
The method of claim 1, further comprising decoding, from the generated representation of the current frame, a location in the environment of ...
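The comparison of decoded object centers against centers saved from prior frames can be approximated by greedy nearest-neighbor association; this sketch is an assumption-laden simplification (the names, the distance gate, and the result format are invented for the example).

```python
def match_occluded(saved_centers, decoded_centers, max_dist=30.0):
    """For each track center saved from prior frames, find the nearest center
    decoded from the current state; tracks with no nearby decoded center are
    treated as occluded and keep their last known location."""
    results = {}
    for track_id, (sx, sy) in saved_centers.items():
        best, best_d = None, float("inf")
        for (cx, cy) in decoded_centers:
            d = ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = (cx, cy), d
        if best is not None and best_d <= max_dist:
            results[track_id] = {"center": best, "occluded": False}
        else:
            # No visible evidence this frame: predict the object at its
            # last known center and flag it occluded.
            results[track_id] = {"center": (sx, sy), "occluded": True}
    return results
```

An agent's planner could then treat flagged tracks as still-present obstacles, which is the behavioral adjustment the claim describes.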

More details
16-06-2016 publication date

Insurance Asset Verification and Claims Processing System

Number: US20160171622A1
Assignee:

A user interface system for providing insurance including: a media capture unit including a camera; a controller in communication with the media capture unit; and a memory in communication with the controller; wherein the memory includes first asset information previously captured by an asset verification process in which a user captures first visual media information including at least one visual media information including an asset shown from a perspective, wherein the memory additionally includes an asset verification software application that, when executed by the controller, causes the controller to: in response to receiving a notification of a loss regarding the asset, prompt the user to capture second asset information using a media prompt including directions for capturing at least one image including a view of the asset from the perspective; and transmit the second asset information to an underwriting server. 1. A user interface system for providing insurance comprising:a media capture unit including a camera;a controller in communication with the media capture unit; anda memory in communication with the controller;wherein the memory includes first asset information previously captured by an asset verification process in which a user captures first visual media information including at least one visual media information including an asset shown from a perspective; andwherein the memory additionally includes an asset verification software application that, when executed by the controller, causes the controller to: in response to receiving a notification of a loss regarding the asset, prompt the user to capture second asset information using a media prompt including directions for capturing at least one image including a view of the asset from the perspective; and transmit the second asset information to an underwriting server.2. The system of claim 1, wherein the asset comprises a residential dwelling.3. 
The system of claim 1, wherein the media capture ...

More details
14-06-2018 publication date

Generating an Image of the Surroundings of an Articulated Vehicle

Number: US20180165524A1
Assignee:

Systems and methods for generating an image of the surroundings of an articulated vehicle are provided. According to an aspect of the invention, a processor determines a relative position between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle; receives a first image from a first camera arranged on the first vehicle and a second image from a second camera arranged on the second vehicle; and combines the first image and the second image based on the relative position between the first vehicle and the second vehicle to generate a combined image of surroundings of the articulated vehicle. 1. A method comprising:determining, by a processor, an angle between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle as the first and second vehicles rotate laterally relative to each other around a point at which the first and second vehicles are connected to each other, based on a relative position between a first camera arranged on the first vehicle and a second camera arranged on the second vehicle, the first and second cameras being located on a same side of the articulated vehicle;receiving, by the processor, a first image from the first camera arranged on the first vehicle and a second image from the second camera arranged on the second vehicle, the first and second images being obtained from the same side of the articulated vehicle; andcombining, by the processor, the first image and the second image based on the relative position between the first camera and the second camera to generate a combined image of surroundings of the articulated vehicle on the same side thereof, wherein the first image and the second image are combined by rotating the first image and the second image with respect to each other, based on the angle between the first vehicle and the second vehicle.2. 
The method according to claim 1, wherein the angle is measured by an angular sensor arranged on the articulated vehicle. ...
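Combining the two images "by rotating ... based on the angle between the first vehicle and the second vehicle" reduces, for individual points, to a rotation about the articulation point. A small sketch; the function name and pivot convention are assumptions, and a full implementation would resample whole image rasters rather than point lists.

```python
import math

def rotate_about(points, pivot, angle_deg):
    """Rotate 2-D points about `pivot` by the articulation angle, so that the
    trailer camera's image aligns with the tractor camera's frame before
    the two images are combined."""
    t = math.radians(angle_deg)
    px, py = pivot
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        out.append((px + dx * math.cos(t) - dy * math.sin(t),
                    py + dx * math.sin(t) + dy * math.cos(t)))
    return out
```

With the hitch at the origin and a 90° articulation, a point one unit ahead maps to one unit to the side, as expected.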

More details
21-05-2020 publication date

TRAINING USING TRACKING OF HEAD MOUNTED DISPLAY

Number: US20200160746A1
Assignee:

Embodiments can use a model for training according to embodiments of the present disclosure. In some embodiments, a model can be created from actual video. The model can be a spherical video. In this manner, users can be immersed in real situations, and thus the user can get more experience than the user otherwise would have had. Various technical features can be provided for enhancing such a system, e.g., synchronization of pointers on two screens, camera rigs with extended view to allow the camera rig to be placed further from a location of the players, analytics for rating users and controlling playback of a next play (action interval), and for allowing a user to feel translation while in a model. 1. A method for measuring movement of users in a model, the method comprising performing, by a computer system:storing, in a memory of the computer system, one or more files providing a visual scene, the memory communicably coupled with one or more processors of the computer system;for each of a plurality of calibration users: receiving tracking information from one or more sensors of a head-mounted display, the tracking information providing an orientation of the head-mounted display; determining a portion of a respective visual scene that is being viewed by a calibration user based on the tracking information; providing the portion of the respective visual scene from the computer system to the head-mounted display for displaying on a display screen of the head-mounted display; storing the tracking information at a plurality of times over playback of the respective visual scene to obtain playback orientation information of the calibration user; determining a statistical distribution of values of the orientation from the playback orientation information of the calibration user; and determining a calibration statistical value for one or more statistical parameters of the statistical distribution; andfor each of the one or more statistical parameters, ...

More details
23-06-2016 publication date

SYSTEMS AND METHODS FOR COMBINING MULTIPLE FRAMES TO PRODUCE MEDIA CONTENT WITH SIMULATED EXPOSURE EFFECTS

Number: US20160180559A1
Assignee:

Systems, methods, and non-transitory computer-readable media can capture media content including an original set of frames. A plurality of subsets of frames can be identified, based on a subset selection input, out of the original set of frames. An orientation-based image stabilization process can be applied to each subset in the plurality of subsets of frames to produce a plurality of stabilized subsets of frames. Multiple frames within each stabilized subset in the plurality of stabilized subsets of frames can be combined to produce a plurality of combined frames. Each stabilized subset of frames can be utilized to produce a respective combined frame in the plurality of combined frames. A time-lapse media content item can be provided based on the plurality of combined frames. 1. A computer-implemented method comprising:capturing, by a computing system, media content including an original set of frames;identifying, by the computing system, based on a subset selection input, a plurality of subsets of frames out of the original set of frames;applying, by the computing system, an orientation-based image stabilization process to each subset in the plurality of subsets of frames to produce a plurality of stabilized subsets of frames;combining, by the computing system, multiple frames within each stabilized subset in the plurality of stabilized subsets of frames to produce a plurality of combined frames, wherein each stabilized subset of frames is utilized to produce a respective combined frame in the plurality of combined frames, and wherein combining the multiple frames within each stabilized subset to produce the plurality of combined frames is based on pixel blending a respective entirety of each frame in the multiple frames; andproviding, by the computing system, a time-lapse media content item based on the plurality of combined frames.2. 
The computer-implemented method of claim 1, wherein the applying of the orientation-based image stabilization process to each ...
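Pixel blending an entire stabilized subset into one combined frame can be as simple as a per-pixel average, which is what produces the simulated long-exposure effect. A sketch with nested lists standing in for grayscale image buffers (a production path would use vectorized array math instead).

```python
def blend_frames(frames):
    """Average-blend aligned frames pixel-wise to simulate a long exposure;
    each frame is a 2-D list of grayscale values of identical shape."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    # Each output pixel is the mean of that pixel across all frames in the subset.
    return [[sum(f[r][c] for f in frames) / n for c in range(w)] for r in range(h)]
```

Running this over each stabilized subset yields one combined frame per subset, and the sequence of combined frames forms the time-lapse media content item.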

More details
21-06-2018 publication date

DRUG INSPECTION DEVICE, DRUG INSPECTION METHOD, AND PROGRAM

Number: US20180174292A1
Author: TAKAMORI Tetsuya
Assignee: FUJIFILM Corporation

A drug inspection device includes an image acquisition unit that acquires a plurality of captured images obtained by imaging a bundle of drug sheets bundled in a state where at least some thereof overlap each other, the image acquisition unit acquiring the captured images including at least some of respective drug sheets with respect to all the drug sheets; a drug classification specifying unit that specifies a drug classification from the captured images; an outer edge information extraction unit that extracts information of an outer edge of each of the drug sheets from the captured images; a number-of-sheets counting unit that counts the number of drug sheets based on the information; an outermost layer sheet specifying unit that specifies an outermost layer sheet present on an outermost surface portion of the bundle; and a first drug counting unit that counts the number of drugs in the outermost layer sheet. 1. A drug inspection device comprising:an image acquisition unit that acquires a plurality of captured images obtained by imaging a bundle of drug sheets bundled in a state where at least some thereof overlap each other, the image acquisition unit acquiring the plurality of captured images including at least some of respective drug sheets with respect to all the drug sheets constituting the bundle of drug sheets;a drug classification specifying unit that specifies a drug classification from at least one of the plurality of captured images;an outer edge information extraction unit that extracts information of an outer edge of each of the drug sheets constituting the bundle of drug sheets from the plurality of captured images;a number-of-sheets counting unit that counts the number of drug sheets on the basis of the information of the outer edge of the drug sheet;an outermost layer sheet specifying unit that specifies a drug sheet piece present on an outermost surface portion of the bundle of drug sheets, as an outermost layer sheet, on the basis of the ...

More details
08-07-2021 publication date

SYSTEMS AND METHODS FOR MATCHING AUDIO AND IMAGE INFORMATION

Number: US20210209362A1
Assignee: OrCam Technologies Ltd.

System and methods for processing audio signals are disclosed. In one implementation, a system may comprise a wearable camera configured to capture images from an environment of a user; a microphone configured to capture sounds from the environment of the user; and a processor. The processor may be configured to receive at least one image of the plurality of images, the at least one image comprising a plurality of image portions associated with corresponding image portion timestamps; receive at least one audio signal representative of the sounds captured by the at least one microphone; identify an audio timestamp associated with a portion of the audio signal; identify an image portion from among the plurality of image portions, the image portion having an image portion timestamp associated with the audio timestamp; and analyze the image portion to identify a voice originating from an object represented in the image. 1. A system for processing audio signals, the system comprising:a wearable camera configured to capture a plurality of images from an environment of a user;at least one microphone configured to capture sounds from the environment of the user; andat least one processor programmed to: receive at least one image of the plurality of images, the at least one image comprising a plurality of image portions associated with corresponding image portion timestamps; receive at least one audio signal representative of the sounds captured by the at least one microphone; identify an audio timestamp associated with a portion of the audio signal; identify an image portion from among the plurality of image portions, the image portion having an image portion timestamp associated with the audio timestamp; and analyze the image portion to identify a voice originating from an object represented in the image.2. 
The system of claim 1, wherein identifying the image portion comprises determining the audio timestamp is later than the image portion timestamp ...
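Associating an audio timestamp with an image portion timestamp, as the claims describe, can be sketched as picking the latest portion captured at or before the audio event; the at-or-before rule and the function name are assumptions for the example.

```python
def match_image_portion(portion_timestamps, audio_ts):
    """Return the index of the image portion whose timestamp is closest to,
    and not later than, the audio timestamp (None if every portion is later)."""
    candidates = [(i, t) for i, t in enumerate(portion_timestamps) if t <= audio_ts]
    if not candidates:
        return None
    # Latest qualifying portion: the one the sound most plausibly overlaps.
    return max(candidates, key=lambda it: it[1])[0]
```

The selected portion would then be the one analyzed for a speaking face or other sound-emitting object.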

More details
28-06-2018 publication date

TRANSITION BETWEEN BINOCULAR AND MONOCULAR VIEWS

Number: US20180182175A1
Assignee:

An image processing system is designed to generate a canvas view that transitions between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for left and right views of a user. To transition between binocular and monocular views, the image processing system first warps the top/bottom images onto corresponding synthetic side images to generate warped top/bottom images, which realizes the transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for left and right eye views. The image processing system creates the canvas view, which transitions between binocular and monocular views in terms of image shape and color, based on the blended images. 1. A method comprising:identifying a horizontal angle and vertical angle of view for a portion of a display for an eye, the horizontal angle and vertical angle representing a portion of a canvas for view by the eye;identifying an overlapping portion of a top image and an image corresponding to a view for the eye;determining an optical flow; andapplying the optical flow to the top image based on the vertical angle.2. The method of claim 1, wherein the optical flow is applied as a function of the vertical angle, and wherein the function for applying the optical flow applies no optical flow at a vertical angle where the overlapping portion begins adjacent to the top image, and applies a full optical flow at a pivotal row in the overlapping portion.3. The method of claim 2, further comprising:for each of a left eye image and a right eye image: morphing the color of the top image with the corresponding image between the pivotal row and a side-only portion of the corresponding image where the overlapping portion is adjacent to the side-only portion.4. 
...

More details
04-06-2020 publication date

IMAGE RECOGNITION COMBINED WITH PERSONAL ASSISTANTS FOR ITEM RECOVERY

Number: US20200175302A1
Assignee:

According to one embodiment, a method, computer system, and computer program product for using a virtual assistant and electronic devices to find lost objects is provided. The present invention may include identifying one or more candidate items corresponding with one or more user-identified lost items within captured images such as real-time or archived camera feeds within the area where the item was lost; identifying secondary items with a spatial relationship to the candidate items; and communicating, to a user, location information associated with the one or more candidate items by reference to the secondary items based on the identifying. 1. A processor-implemented method for utilizing electronic devices to identify one or more lost items, the method comprising:identifying one or more candidate items corresponding with one or more user-identified lost items within one or more captured images; andcommunicating, to a user, location information associated with the one or more candidate items based on the identifying.2. The method of claim 1, further comprising: identifying the one or more candidate items within archived images in response to determining that the one or more candidate items are not visible to one or more sensors.3. The method of claim 1, further comprising:identifying at least one secondary item associated with a candidate item.4. The method of claim 3, wherein the communicating the location information associated with the one or more candidate items is conducted by reference to one or more secondary items.5. The method of claim 1, wherein the identifying is performed on images captured by one or more sensors in one or more locations corresponding with location data associated with one or more users.6. The method of claim 1, wherein one or more exemplary candidate items are captured in a reference image by one or more sensors.7. The method of claim 1, wherein communicating the location information is performed by a virtual assistant.8. A computer system for utilizing ...

More details
16-07-2015 publication date

EMULATION OF REPRODUCTION OF MASKS CORRECTED BY LOCAL DENSITY VARIATIONS

Number: US20150198798A1
Assignee:

A method is provided for emulating the imaging of a scanner mask pattern to expose wafers via a mask inspection microscope, in which the mask was corrected by introducing scattering centers. The method includes determining a correlation between the first values of at least one characteristic of aerial images of the mask pattern as produced by a mask inspection microscope and the second values of the at least one characteristic of aerial images of the mask pattern as produced by a scanner, recording a first aerial image of the mask pattern with the mask inspection microscope, determining the first values of the at least one characteristic from the first aerial image, and determining the second values of the at least one characteristic of the first aerial image, using the correlation. A mask inspection microscope is also provided for emulating the imaging of a mask pattern of a scanner to expose wafers, in which the mask was corrected by introducing scattering centers. 1. A method for emulating the imaging of a scanner mask pattern to expose wafers via a mask inspection microscope , in which the mask was corrected by introducing scattering centers , the method comprising:determining a correlation between first values of at least one characteristic of aerial images of a mask pattern produced by a mask inspection microscope and second values of the at least one characteristic of aerial images of the mask pattern produced by a scanner;recording a first aerial image of a mask pattern using the mask inspection microscope;determining the first values of the at least one characteristic from the first aerial image; anddetermining the second values of the at least one characteristic of the first aerial image, using the correlation.2. The method of in which the illumination field of the scanner is greater than the illumination field of the mask inspection microscope.4. 
The method of in which the characteristic of aerial images of the mask pattern as produced by the scanner is ...

More details
12-07-2018 publication date

COMBINED INFORMATION FOR OBJECT DETECTION AND AVOIDANCE

Number: US20180194489A1
Assignee:

Described is an imaging component for use by an unmanned aerial vehicle ("UAV") for object detection. As described, the imaging component includes one or more cameras that are configured to obtain images of a scene using visible light that are converted into a depth map (e.g., stereo image) and one or more other cameras that are configured to form images, or thermograms, of the scene using infrared radiation ("IR"). The depth information and thermal information are combined to form a representation of the scene based on both depth and thermal information. 1.-20. (canceled) 21. An unmanned aerial vehicle ("UAV"), comprising:a frame;a plurality of propulsion mechanisms to aerially lift and navigate the UAV;an imaging component, including: a first camera coupled to the frame and having a first orientation, wherein the first camera is configured to form a first image data of a scene using visible light; a second camera coupled to the frame and having the first orientation, wherein the second camera is configured to form a second image data of the scene using visible light; and a sensor coupled to the frame and having the first orientation, wherein the sensor is configured to form a sensor data representative of the scene; anda processing component, configured to at least: receive the first image data, the second image data, and the sensor data; and process the first image data, the second image data, and the sensor data to produce a combined information representative of the scene.22. The UAV of claim 21, wherein the combined information includes a horizontal dimension, a vertical dimension, and at least one of a depth dimension or a thermal dimension.23. 
The UAV of claim 21, further comprising: determine, based at least in part on the first image data, the second image data, and the ...; and send instructions to a UAV control system to alter a navigation of the UAV in response to a determination that the object is an object to avoid.

More details
14-07-2016 publication date

PERIPHERY MONITORING DEVICE FOR WORK MACHINE

Number: US20160205319A1

A periphery monitoring device for a work machine includes imaging devices that capture an image of the surroundings of the work machine. An overhead view image is generated of the surroundings of the work machine based upon upper view-point images of the imaging devices. When generating an overhead view image of an overlap region of first and second upper view-point images relating to the images captured by the first and second imaging devices, the overhead view image generating unit, based upon a height of a virtual monitoring target, sets at least one of a first region in which the first upper view-point image is displayed and a second region in which the second upper view-point image is displayed, and also sets a third region in which a composite display image based upon the first and second upper view-point images is displayed. 1. A periphery monitoring device for a work machine, comprising:a plurality of imaging devices that each captures an image of surroundings of the work machine;an overhead view image generating unit that converts the image captured by each of the plurality of imaging devices into an upper view-point image, and generates an overhead view image of the surroundings of the work machine based upon upper view-point images; anda display unit that displays the overhead view image in an overhead view image display region, wherein:when generating an overhead view image of an overlap region of a first upper view-point image relating to the image captured by a first imaging device included in the plurality of imaging devices and a second upper view-point image relating to the image captured by a second imaging device included in the plurality of imaging devices;the overhead view image generating unit, based upon a height of a virtual monitoring target, sets at least one of a first region in which the first upper view-point image is displayed and a second region in which the second upper view-point image is displayed, and also sets a third region in
...

Publication date: 25-06-2020

SYSTEM AND METHOD FOR PERSPECTIVE PRESERVING STITCHING AND SUMMARIZING VIEWS

Number: US20200202599A1
Assignee:

A method and system of stitching a plurality of image views of a scene, including grouping matched points of interest in a plurality of groups, and determining a similarity transformation with the smallest rotation angle for each grouping of the matched points. The method further includes generating virtual matching points on the non-overlapping area of the plurality of image views and generating virtual matching points on the overlapping area for each of the plurality of image views. 1. A method of stitching a plurality of image views of a scene, the method comprising: grouping matched points of interest in a plurality of groups; and determining a similarity transformation with the smallest rotation angle for each grouping of the matched points. 2. The method according to claim 1, further comprising: generating virtual matching points on the non-overlapping area of the plurality of image views; generating virtual matching points on the overlapping area for each of the plurality of image views; and calculating piecewise projective transformations for the plurality of image views. 3. The method according to claim 1, further comprising deriving the matching points of interest from points of interest representations, wherein the points of interest representations comprise translation-invariant representations of edge orientations. 4. The method according to claim 3, wherein the points of interest representations comprise scale invariant feature transform (SIFT) points. 5. The method according to claim 1, wherein the method is stored in a non-transitory computer-readable medium and executed by a processor, and wherein the match points are derived from a plurality of views of a scene that are remotely captured from an aerial view. 6.
The method according to claim 1, wherein each group of the plurality of matched points is used to calculate an individual similarity transformation, and then the rotation angles corresponding to the transformations are examined and the one with the smallest rotation angle is ...
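The claimed selection step (fit one similarity transformation per group of matched points, then keep the transform whose rotation angle is smallest in magnitude) can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation: the Umeyama/Procrustes estimator and all function names are assumptions.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a 2-D similarity (scale s, rotation R, translation t)
    mapping src points onto dst points via the Umeyama/Procrustes method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(dc.T @ sc / len(src))
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1.0                      # keep a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / sc.var(axis=0).sum()
    return s, R, mu_d - s * R @ mu_s

def pick_smallest_rotation(groups):
    """For each (src, dst) group of matched points, fit a similarity
    transform and return the one with the smallest rotation magnitude."""
    best = None
    for src, dst in groups:
        s, R, t = similarity_transform(np.asarray(src, float),
                                       np.asarray(dst, float))
        angle = np.arctan2(R[1, 0], R[0, 0])
        if best is None or abs(angle) < abs(best[0]):
            best = (angle, s, R, t)
    return best
```

On noiseless similarity-related point sets this recovers the group's scale and rotation exactly, so the smallest-angle group wins deterministically.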

Publication date: 16-08-2018

CAMERA AND SPECIMEN ALIGNMENT TO FACILITATE LARGE AREA IMAGING IN MICROSCOPY

Number: US20180231752A1
Assignee:

A microscope system and method allow for a desired x′-direction scanning along a specimen to be angularly offset from an x-direction of the XY translation stage, and rotates an image sensor associated with the microscope to place the pixel rows of the image sensor substantially parallel to the desired x′-direction. The angle of offset of the x′-direction relative to the x-direction is determined and the XY translation stage is employed to move the specimen relative to the image sensor to different positions along the desired x′-direction without a substantial shift of the image sensor relative to the specimen in a y′-direction, the y′-direction being orthogonal to the x′ direction of the specimen. The movement is based on the angle of offset. 1. A microscopy method for imaging a specimen along a desired x′-direction of the specimen , the specimen being placed on an XY translation stage and movable by the XY translation stage so as to have portion thereof placed within the field of view of an image sensor , wherein the XY translation stage is movable in an x-direction and a y-direction to move the specimen relative to the image sensor , the image sensor having a multitude of pixels arranged to define pixel rows and pixel columns , the desired x′-direction of the specimen being angularly offset from the x-direction of the XY translation stage so as to define a slope and angle of offset relative thereto , the image sensor viewing only a discrete segment of the specimen at a time , the method comprising the steps of:rotating the image sensor such that the pixel rows are substantially parallel with the desired x′-direction of the specimen;determining the angle of offset of the desired x′-direction as compared to the x-direction of the XY translation stage;establishing a first position for the specimen relative to the image sensor as rotated in said step of rotating, said first position placing at least a portion of the specimen within the field of view of the image ...
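The claimed stage motion can be expressed directly: once the angle of offset between the desired x'-direction and the stage's x-axis is known, each capture position moves both stage axes together so the specimen does not drift in the orthogonal y'-direction. A minimal sketch (function name and coordinate conventions are assumptions):

```python
import math

def stage_positions(x0, y0, step, angle, n):
    """Stage (x, y) coordinates for n fields of view spaced `step` apart
    along the offset x'-direction. Moving x and y together by
    step*cos(angle) and step*sin(angle) keeps the y'-coordinate of the
    specimen constant between captures."""
    return [(x0 + i * step * math.cos(angle),
             y0 + i * step * math.sin(angle)) for i in range(n)]
```

For any position in the list, the y'-component (-x*sin(angle) + y*cos(angle)) stays at its initial value, which is exactly the "no substantial shift in y'" condition.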

Publication date: 23-08-2018

SATELLITE WITH MACHINE VISION

Number: US20180239982A1
Assignee: ELWHA LLC

In one embodiment, a satellite configured for machine vision includes, but is not limited to, at least one imager; one or more computer readable media bearing one or more program instructions; and at least one computer processor configured by the one or more program instructions to perform operations including at least: obtaining imagery using the at least one imager of the satellite; determining at least one interpretation of the imagery by analyzing at least one aspect of the imagery; and executing at least one operation based on the at least one interpretation of the imagery. 1. A computer process executed by at least one computer processor of at least one satellite for providing machine vision, the computer process comprising: obtaining imagery using at least one imager of the at least one satellite; determining at least one interpretation of the imagery by analyzing at least one aspect of the imagery; and executing at least one operation based on the at least one interpretation of the imagery. 2-48. (canceled) 49. A satellite configured for machine vision, the satellite comprising: at least one imager; one or more computer readable media bearing one or more program instructions; and at least one computer processor configured by the one or more program instructions to perform operations including at least: obtaining imagery using the at least one imager of the satellite; determining at least one interpretation of the imagery by analyzing at least one aspect of the imagery; and executing at least one operation based on the at least one interpretation of the imagery. 50. The satellite of claim 49, wherein the obtaining imagery using the at least one imager of the satellite comprises: obtaining raw ultra-high resolution pre-transmitted imagery using the at least one imager of the satellite. 51. The satellite of claim 49, wherein the obtaining imagery using the at least one imager of the satellite comprises: obtaining imagery using a plurality of imagers of the ...

Publication date: 23-07-2020

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Number: US20200236279A1
Assignee: RICOH COMPANY, LTD.

An information processing system includes a server apparatus including one or more processors configured to analyze a plurality of wide angle images acquired through photographing, and transmit a result image indicating a result of analyzing to an information processing terminal; and the information processing terminal communicatable with the server apparatus and including one or more processors configured to receive the result image, and display the result image on a display. 1. An information processing system comprising a server apparatus and an information processing terminal, wherein (i) the server apparatus including one or more processors configured to analyze a plurality of wide angle images acquired through photographing, and transmit a result image indicating a result of analyzing to an information processing terminal; and (ii) the information processing terminal communicatable with the server apparatus and including one or more processors configured to receive the result image, and display the result image on a display. 2. The information processing system according to claim 1, wherein (i) the one or more processors of the server apparatus are further configured to identify wide angle images used as a base to derive an analysis result indicated by the result image as base images, and transmit the base images to the information processing terminal; and (ii) the one or more processors of the information processing terminal are further configured to receive the base images, and display the wide angle image as the base image on the display. 3. The information processing system according to claim 2, wherein (i) the one or more processors of the server apparatus are further configured to set initial image information indicating a wide angle image to be first displayed on the display of the information processing terminal from among the wide angle images included in the base images, and transmit the base images and the initial image information to the information processing terminal; and ...

Publication date: 06-09-2018

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER READABLE MEDIUM

Number: US20180253823A1
Assignee: Mitsubishi Electric Corporation

The present invention is provided with a boundary calculation unit () to calculate a boundary position () being a basis for dividing a common area into a side of the first imaging device () and a side of the second imaging device (), a selection unit () to select a bird's-eye view image wherein distortion in an image of a three-dimensional object is less as a selected image (), out of the first bird's-eye view image () and the second bird's-eye view image () based on the boundary position () and a position of the three-dimensional object, and an image generation unit () to generate an area image () based on an image other than the common area in the first bird's-eye view image (), an image other than the common area in the second bird's-eye view image (), and an image of the common area included in the selected image (). 19-. (canceled)10. An image processing device comprising:processing circuitry to:calculate, by using first device information including position information of a first imaging device to take an image of a first area including a common area wherein a three-dimensional object is placed, and second device information including position information of a second imaging device to take an image of a second area including the common area, a boundary position being a basis for dividing the common area into a side of the first imaging device and a side of the second imaging device;select, based on the boundary position and a position of the three-dimensional object, a bird's-eye view image wherein distortion in an image of the three-dimensional object is less as a selected image, out of a first bird's-eye view image, which is an image of the first area being switched a viewpoint after having been taken by the first imaging device, wherein an image of the three-dimensional object is distorted, and of a second bird's-eye view image, which is an image of the second area being switched a viewpoint after having been taken by the second imaging device, wherein an 
...

Publication date: 06-09-2018

IMAGING APPARATUS, IMAGE PROCESSING DEVICE, IMAGING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Number: US20180255232A1
Assignee: OLYMPUS CORPORATION

An imaging apparatus includes: an imaging unit configured to continuously capture images to sequentially generate image data; a combining unit configured to combine a plurality of sets of the image data generated by the imaging unit to generate composite image data; a display unit configured to display a composite image corresponding to the composite image data generated by the combining unit; an operating unit configured to receive an operation for the image data to be left in the composite image selected from among a plurality of sets of the image data combined into the composite image displayed by the display unit; and a control unit configured to cause the combining unit to combine at least two sets of the image data selected in accordance with the operation of the operating unit to generate a new set of the composite image data. 1. An imaging apparatus comprising:an imaging unit configured to continuously capture images to sequentially generate image data;a combining unit configured to combine a plurality of sets of the image data generated by the imaging unit to generate composite image data;a display unit configured to display a composite image corresponding to the composite image data generated by the combining unit;an operating unit configured to receive an operation for the image data to be left in the composite image selected from among a plurality of sets of the image data combined into the composite image displayed by the display unit; anda control unit configured to cause the combining unit to combine at least two sets of the image data selected in accordance with the operation of the operating unit to generate a new set of the composite image data.2. 
The imaging apparatus according to claim 1, further comprising a display control unit configured to cause the display unit to display a last image overlaid on the composite image, whenever the imaging unit generates the image data, the last image corresponding to a last set of the image ...
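A minimal sketch of the recombination step: the user selects a subset of the already-captured frames, and the combining unit produces a new composite from just those frames. The patent leaves the combining operation abstract; a per-pixel maximum ("lighten" blend, typical of live-composite shooting) is used here purely as a stand-in.

```python
import numpy as np

def recombine(frames, selected):
    """Combine only the user-selected frames into a new composite image.
    The per-pixel maximum here is a stand-in for the patent's
    unspecified combining unit."""
    stack = np.stack([frames[i] for i in selected])
    return stack.max(axis=0)
```

Deselecting a frame and calling `recombine` again is exactly the "generate a new set of the composite image data" operation from the claim.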

Publication date: 20-09-2018

COMPACT BIOMETRIC ACQUISITION SYSTEM AND METHOD

Number: US20180268216A1
Assignee:

A method of determining the identity of a subject while the subject is walking or being transported in an essentially straight direction is disclosed, the two dimensional profile of the subject walking or being transported along forming a three dimensional swept volume, without requiring the subject to change direction to avoid any part of the system, comprising acquiring data related to one or more biometrics of the subject with the camera(s), processing the acquired biometrics data, and determining if the acquired biometric data match corresponding biometric data stored in the system, positioning camera(s) and strobed or scanned infrared illuminator(s) above, next to, or below the swept volume. A system for carrying out the method is also disclosed. 1-20. (canceled) 21. A system for acquisition of iris and facial biometric data from a subject, the system comprising: at least one light source configured to illuminate a subject with light pulses from a first spectrum and a second spectrum of a plurality of discrete light spectra, and illuminate the subject with light pulses from a second spectrum of the plurality of discrete light spectra, the second spectrum different from the first spectrum; a sensor configured to acquire, at a first rate of acquisition, a set of iris data from the subject for biometric matching, using the light pulses from the first spectrum and the second spectrum, and acquire, at a second rate of acquisition that is lower than the first rate of acquisition and interleaved with the acquisitions at the first rate of acquisition, a set of biometric data from a face of the subject using the light pulses from the second spectrum; and at least one processor configured to perform biometric matching using data from the set of iris data, and liveness detection. 22.
The system of claim 21, wherein the sensor is further configured to acquire, at a third rate of acquisition interleaved with the acquisitions of the set of iris data and ...
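The interleaving described in claim 21 (face captures at a lower rate than iris captures, slotted between them) amounts to a frame schedule like the following sketch; the fixed modulo pattern and the function name are assumptions, not the patent's method.

```python
def acquisition_schedule(n_slots, face_every):
    """Build an interleaved capture schedule: face frames occupy every
    `face_every`-th slot (the lower rate), iris frames fill the rest."""
    return ["face" if i % face_every == 0 else "iris" for i in range(n_slots)]
```

With `face_every=3`, one in three sensor slots is a face capture, so the iris acquisition rate is twice the face acquisition rate.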

Publication date: 11-11-2021

Virtual Parallax to Create Three-Dimensional Appearance

Number: US20210349609A1
Assignee: Apple Inc.

In some implementations, a computing device can simulate a virtual parallax to create three dimensional effects. For example, the computing device can obtain an image captured at a particular location. The captured two-dimensional image can be applied as texture to a three-dimensional model of the capture location. To give the two-dimensional image a three-dimensional look and feel, the computing device can simulate moving the camera used to capture the two-dimensional image to different locations around the image capture location to generate different perspectives of the textured three-dimensional model as if captured by multiple different cameras. Thus, a virtual parallax can be introduced into the generated imagery for the capture location. When presented to the user on a display of the computing device, the generated imagery may have a three-dimensional look and feel even though generated from a single two-dimensional image. 1. A method , comprising:obtaining, by a computing device, a first image and a second image;for each first pixel in the first image, determining, by the computing device, a corresponding second pixel in the second image;obtaining, by the computing device, pixel quality scores for each first pixel and corresponding second pixel;for one or more first pixels, comparing, by the computing device, pixel quality scores for the one or more first pixels to pixel quality scores for one or more corresponding second pixels;for each of the one or more first pixels, selecting, by the computing device, between the first pixel and the corresponding second pixel based on the comparison; andgenerating, by the computing device, a composite image based on selected pixels from the first image and the second image.2. 
The method as recited in claim 1 , further comprising:capturing, by the computing device, a first image capture point view at a first image capture point; andgenerating, by the computing device, the first image from a perspective of a portion of a ...
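The per-pixel selection in claim 1 (compare quality scores, keep the better pixel, composite the winners) maps naturally onto array operations. A hedged numpy sketch; how the quality scores themselves are computed is outside this fragment:

```python
import numpy as np

def composite_by_quality(img1, img2, q1, q2):
    """For each pixel, keep the candidate from whichever image has the
    higher per-pixel quality score; q1/q2 are (H, W) score maps and
    img1/img2 are (H, W, C) images."""
    mask = (q1 >= q2)[..., None]  # add a channel axis to broadcast over RGB
    return np.where(mask, img1, img2)
```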

Publication date: 18-11-2021

DIFFERENTIATION-BASED TRAFFIC LIGHT DETECTION

Number: US20210357668A1
Author: Zhu Fan
Assignee:

A method, apparatus, and system for determining a state of an upcoming traffic light is disclosed. At an autonomous driving vehicle (ADV), an upcoming traffic light ahead in a direction of travel is detected. A relative position of the ADV to the traffic light is determined based on a three-dimensional (D) position of the traffic light and a position of the ADV. A first image whose content includes the traffic light is captured. A second image of the traffic light is obtained, which comprises cropping the first image and preserving only a first sub-region of the first image that corresponds to the traffic light. One or more third images of the traffic light are retrieved from a precompiled image library based on the relative position of the ADV to the traffic light. A state of the traffic light is determined based on the one or more third images. 1. A computer-implemented method for operating an autonomous driving vehicle , the method comprising:in response to detecting, at an autonomous driving vehicle (ADV), an upcoming traffic light ahead in a direction of travel, capturing a first image of the traffic light;retrieving, at the ADV, one or more second images associated with the traffic light from a precompiled image library based on a relative position of the ADV to the traffic light, wherein the relative position of the ADV to the traffic light is determined based on a position of the traffic light and a position of the ADV;determining, at the ADV, a state of the traffic light based on matching of the first image and the one or more second images;planning a trajectory for the ADV based at least in part on the determined state of the traffic light; andgenerating control signals to drive the ADV based on the planned trajectory.2. The method of claim 1 , wherein the state of the traffic light comprises one of: a green state claim 1 , a yellow state claim 1 , or a red state.3. The method of claim 1 , wherein determining the state of the traffic light based on ...
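The cropping step (preserve only the first sub-region of the first image that corresponds to the traffic light) is a simple array slice once the light's 3-D position has been projected into the image. In this sketch the bounding box is assumed to be given:

```python
import numpy as np

def crop_to_light(frame, box):
    """Preserve only the sub-region of the captured frame that
    corresponds to the traffic light. `box` is (x, y, w, h) in pixels;
    in the described system it would come from projecting the light's
    known 3-D position into the camera image."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]
```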

Publication date: 29-08-2019

IMAGE PRODUCTION FROM VIDEO

Number: US20190266428A1
Assignee: Google LLC

Implementations generally relate to producing a still image from a video or series of continuous frames. In some implementations, a method includes receiving the frames that a capture device shot while moving in at least two dimensions. The method further includes analyzing the frames to determine changes of positions of objects in at least two of the frames due to movement of the objects in the scene relative to changes of positions of objects due to the movement of the capture device during the shoot time. The method further includes determining, based at least in part on the variability of the objects, one or more target subjects which the capture device captures during the shoot time. One or more still images are generated from the plurality of frames having at least a portion of the target subject. 1. A computer-implemented method to generate one or more still images , the method comprising:receiving, at a computing device, a video that includes a plurality of sequential frames captured by a capture device during a shoot time, wherein at least two frames of the plurality of sequential frames each include one or more target subjects, and wherein the capture device is moved in at least two dimensions during the shoot time;identifying, by one or more processors of the computing device, a defective frame of the at least two frames of the plurality of sequential frames in which the one or more target subjects is detected as having an image defect, wherein the defective frame is a first frame in the plurality of sequential frames to depict at least a portion of the one or more target subjects with the image defect;identifying, by the one or more processors of the computing device, a reference frame of the at least two frames of the plurality of sequential frames that is immediately prior to the defective frame in the plurality of the sequential frames, wherein the reference frame includes the one or more target subjects being free of the image defect;overlapping, by 
...
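The frame-selection logic (find the first defective frame in the sequence, then take the immediately prior frame as the defect-free reference) can be sketched as below; the defect detector itself is assumed to be supplied.

```python
def pick_reference(frames, is_defective):
    """Return (defective_frame, reference_frame): the first frame in
    which the target subject shows the image defect, and the frame
    immediately prior to it, which is free of the defect."""
    for i, frame in enumerate(frames):
        if is_defective(frame):
            return frame, (frames[i - 1] if i > 0 else None)
    return None, None
```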

Publication date: 29-08-2019

METHOD AND SYSTEM FOR BACKGROUND REMOVAL FROM DOCUMENTS

Number: US20190266433A1
Author: FOROUGHI Homa
Assignee: INTUIT INC.

The invention relates to a method for background removal from documents. The method includes obtaining an image of a document, performing a clustering operation on the image to obtain a plurality of image segments, and performing, for each image segment, a foreground/background classification to determine whether the image segment includes foreground. The method further includes obtaining an augmented image by combining the image segments that include foreground, and obtaining a background-treated image by cropping the image of the document, based on the foreground in the augmented image. 1. A method for background removal from documents , comprising:obtaining an image of a document;performing a clustering operation on the image to obtain a plurality of image segments;performing, for each image segment, a foreground/background classification to determine whether the image segment comprises foreground;obtaining an augmented image by combining the image segments comprising foreground; andobtaining a background-treated image by cropping the image of the document, based on the foreground in the augmented image.2. The method of claim 1 , further comprising converting the image of the document to Lab color space claim 1 , wherein the clustering operation is performed using ab channels of the Lab color space.3. The method of claim 1 ,wherein performing the clustering operation comprises generating k image segments for k clusters, andwherein k represents the number of major color components identified in a color histogram of the image of the document.4. The method of claim 1 , wherein the clustering operation is performed using a K-means algorithm.5. 
The method of claim 1 , wherein performing the foreground/background classification comprises:selecting a plurality of random patches of pixels in the image segment,classifying each of the selected random patches as either foreground or background, andbased on the classification of the selected random patches, classifying the ...
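The clustering operation from claims 1-4 (k clusters over the ab channels of the Lab-converted image, via K-means) can be sketched with a small K-means loop. Farthest-first seeding is my own choice for determinism; it is not specified by the patent:

```python
import numpy as np

def kmeans_segments(ab, k, iters=20):
    """Cluster per-pixel (a, b) chroma values of a Lab image into k
    segments: a minimal K-means with farthest-first initialisation,
    standing in for the patent's clustering operation."""
    h, w, _ = ab.shape
    pts = ab.reshape(-1, 2).astype(float)
    centers = [pts[0]]                     # deterministic seeding
    while len(centers) < k:
        d2 = np.min([((pts - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(pts[d2.argmax()])   # farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        d2 = ((pts[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

Each returned label map region is one image segment; the foreground/background classifier from claim 1 would then run per segment.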

Publication date: 20-08-2020

CLASSIFICATION OF POLYPS USING LEARNED IMAGE ANALYSIS

Number: US20200265275A1
Assignee:

Computational techniques are applied to video images of polyps to extract features and patterns from different perspectives of a polyp. The extracted features and patterns are synthesized using registration techniques to remove artifacts and noise, thereby generating improved images for the polyp. The generated images of each polyp can be used for training and testing purposes, where a machine learning system separates two types of polyps. 1. A system for classifying polyps, the system comprising: a polyp image database comprising, for a plurality of polyps, images of the plurality of polyps taken from different perspectives; a polyp imaging engine, the polyp imaging engine configured to compile, for at least one of the plurality of polyps, images of at least one of the plurality of polyps taken from different perspectives, generate, from the compiled images of the at least one polyp, a new polyp image, the new polyp image having fewer reflection artifacts and occlusions than the compiled images, and compute, based on the generated new polyp image, a polyp surface model; and a learning engine, the learning engine configured to apply linear subspace learning techniques and nonlinear subspace learning techniques to identify discriminate features in the polyp surface model, and, based on the identified discriminate features, classify the polyp as adenomatous or hyperplastic. 2. The system of where at least one of the images of the plurality of polyps taken from different perspectives comprises images generated using Narrow Band Imaging (NBI). 3. The system of where generating the images of the plurality of polyps taken from different perspectives using NBI comprises: generating a first image of at least one of the plurality of polyps using a light source emitting at or about 415 nanometers, and generating a second image of the at least one of the plurality of polyps using a light source emitting at or about 540 nanometers. 4.
The system of where at least one of ...

Publication date: 06-10-2016

COMPARING EXTRACTED CARD DATA USING CONTINUOUS SCANNING

Number: US20160292527A1
Assignee:

Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. Threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud. 1. A computer-implemented method to compare extracted card data , comprising:performing, by the one or more computing devices, an optical character recognition algorithm on each of a plurality of images obtained from a digital scan of a card;determining, by the one or more computing devices, if a configured threshold of results for each of the plurality of images match each other based on a comparison of the results of the performance of the optical character recognition algorithm on each of the plurality of images; andverifying, by the one or more computing devices, the results as card data when at least the configured threshold number of results for each of the plurality of images match each other is reached.2. The method of claim 1 , further comprising:accessing, by the one or more computing devices, at least one additional digital image of the card when the one or more computing devices determines that at least the configured threshold number of results for each of the plurality of images match each other is not reached;performing, by the one or more computing ...
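The threshold check in claim 1 (accept the extracted card data only when enough of the per-image OCR results agree) can be sketched as a vote over the per-frame results; the exact comparison rule is an assumption.

```python
from collections import Counter

def verify_extraction(frame_results, threshold):
    """Accept OCR-extracted card data only when at least `threshold` of
    the per-frame results agree with each other; otherwise return None
    to signal that additional frames should be processed."""
    value, votes = Counter(frame_results).most_common(1)[0]
    return value if votes >= threshold else None
```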

Publication date: 04-10-2018

DISPLAY APPARATUS, DISPLAY METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM

Number: US20180285626A1
Assignee:

A display apparatus includes: a display unit that displays a first image; an obtaining unit that obtains a recognition result of recognizing a foodstuff serving as a cooking target on the basis of a photographed image obtained from a camera; a determination unit that determines whether a recognized foodstuff indicated by the recognition result is present in a superposable area where the first image can be superposed on the foodstuff; and a display control unit that causes the display unit to: (i) superpose the first image on the foodstuff when the determination unit determines that the foodstuff is present in the superposable area; and (ii) superpose the first image on a second image when the determination unit determines that the foodstuff is not present in the superposable area. 1. A display apparatus comprising:a display unit that displays a first image for assisting a cooker with a cooking action;an obtaining unit that obtains a recognition result of recognizing a foodstuff serving as a cooking target on the basis of a photographed image obtained from a camera;a determination unit that determines whether a recognized foodstuff indicated by the recognition result is present in a superposable area where the first image can be superposed on the recognized foodstuff, out of a displayable area of the display unit; anda display control unit that causes the display unit to:(i) superpose the first image on the recognized foodstuff when the determination unit determines that the recognized foodstuff is present in the superposable area; and(ii) display a second image generated on the basis of the recognized foodstuff included in the photographed image obtained from the camera and superpose the first image on the second image when the determination unit determines that the recognized foodstuff is not present in the superposable area.2. The display apparatus according to claim 1 , wherein the display unit includes a projector.3. 
The display apparatus according to claim 1 , ...
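The determination unit's test (is the recognized foodstuff inside the superposable area?) reduces, in the simplest case, to a bounding-box containment check. A sketch under that assumption:

```python
def in_superposable_area(food, area):
    """True when the recognized foodstuff's bounding box (x, y, w, h)
    lies entirely inside the superposable area (x, y, w, h) of the
    display; box representation is an assumption of this sketch."""
    fx, fy, fw, fh = food
    ax, ay, aw, ah = area
    return (ax <= fx and ay <= fy and
            fx + fw <= ax + aw and fy + fh <= ay + ah)
```

When this returns False, the apparatus falls back to branch (ii): render a second image of the foodstuff and superpose the first image on that instead.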

Publication date: 04-10-2018

OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND RECORDING MEDIUM

Number: US20180285672A1
Assignee: CASIO COMPUTER CO., LTD.

The present invention is to reduce the time required to detect an object after completion of the rotation of a head or a body of a robot. A robot includes a camera , and a control unit which determines an overlapping area between a first image captured with the camera at first timing and a second image captured with the camera at second timing later than the first timing to detect an object included in an area of the second image other than the determined overlapping area. 1. An object detection device comprising:a determination section which determines an overlapping area between a first image captured by an imaging unit at a first timing, and a second image captured by the imaging unit at a second timing later than the first timing; andan object detection section which detects an object included in an area of the second image other than the overlapping area determined by the determination section.2. The object detection device according to claim 1 , further comprisinga working part which changes an imaging direction of the imaging unit,wherein the imaging unit is controlled to perform imaging plural times while the imaging direction is being changed by the action of the working part to change the imaging direction of the imaging unit.3. The object detection device according to claim 2 , wherein the imaging unit is controlled to perform imaging plural times at predetermined time intervals while the imaging direction is being changed by the action of the working part to change the imaging direction of the imaging unit.4. The object detection device according to claim 2 , wherein the imaging unit is controlled to perform next imaging at timing of completion of detection processing by the object detection section while the imaging direction is being changed by the operation of the working part to change the imaging direction of the imaging unit.5. The object detection device according to claim 2 , further comprising:a sound detection unit which detects an ambient ...
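The time saving comes from running detection only on the part of the second image outside the overlap with the first. For a pure horizontal pan the non-overlap region is a vertical strip whose width equals the pixel shift; this simplification (pure translation with a known shift) is an assumption of the sketch:

```python
def new_region_after_pan(width, shift_px):
    """Column range (start, end) of the second image that lies outside
    the overlap with the first image after the head pans right by
    `shift_px` pixels; only this strip needs fresh object detection."""
    shift_px = max(0, min(shift_px, width))
    return (width - shift_px, width)
```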

Publication date: 03-09-2020

Measurement apparatus, measurement method, system, storage medium, and information processing apparatus

Number: US20200278197A1
Author: Takumi Tokimitsu
Assignee: Canon Inc

The present invention provides a measurement apparatus including a processing unit configured to perform a process of obtaining three-dimensional information regarding an object based on a first image obtained by a first image capturing unit and a second image obtained by a second image capturing unit, wherein the processing unit corrects, based on a model representing a measurement error and using first three-dimensional measurement values obtained from data of the first image and data of the second image corresponding to an overlap region captured by both of the first image capturing unit and the second image capturing unit, a measurement error of a second three-dimensional measurement value obtained from data of one of the first image and the second image corresponding to a non-overlap region captured by the one of the first image capturing unit and the second image capturing unit.
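The correction idea above can be sketched as fitting an error model in the overlap region, where trusted two-view measurements exist alongside single-view ones, and applying that model in the non-overlap region. A simple affine depth-error model fitted by least squares stands in here for the patent's unspecified model; all names are illustrative:

```python
import numpy as np

def fit_error_model(z_single, z_stereo):
    """In the overlap region, fit z_stereo ≈ a * z_single + b, where
    z_stereo are the trusted two-view values and z_single the values
    from one image capturing unit alone."""
    a, b = np.polyfit(z_single, z_stereo, deg=1)
    return a, b

def correct(z_single, model):
    """Apply the fitted model to single-view values from the non-overlap region."""
    a, b = model
    return a * np.asarray(z_single) + b
```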

Publication date: 12-10-2017

TRANSITION BETWEEN BINOCULAR AND MONOCULAR VIEWS

Number: US20170294045A1
Assignee:

An image processing system is designed to generate a canvas view that has smooth transition between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for the left and right views of a user. To realize smooth transition between binocular views and monocular views, the image processing system first warps top/bottom images onto corresponding synthetic side images to generate warped top/bottom images, which realizes the smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. The image processing system creates the canvas view, which has smooth transition between binocular views and monocular views in terms of image shape and color, based on the blended images. 1. A method comprising: receiving a top image of a scene; receiving a first and second synthetic image of the scene, the first and second synthetic image separately corresponding to a left eye view and a right eye view of a user; and, for each of the first and second synthetic images: identifying an overlapping portion of the top image of the scene with the synthetic image; within the overlapping portion, determining an optical flow from the top view of the scene to the synthetic image; generating a warped top image by blending the top image onto the synthetic image by applying the optical flow, wherein the optical flow is applied as a function of a vertical angle; and generating a canvas view for the corresponding eye view of the user by combining the top image of the scene, the synthetic image, and the warped top image. 2.
The method of claim 1 , wherein the function for applying optical flow applies no optical flow at a vertical angle where the overlapping portion begins adjacent to the top portion claim ...
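Claim 1's "optical flow is applied as a function of a vertical angle" suggests a weight that is zero where the overlap with the side image begins and full at the top of the view. A linear ramp is one plausible choice for that function; the patent text does not fix the functional form:

```python
def flow_weight(vertical_angle, overlap_start, top_angle):
    """Fraction of the optical flow to apply at a given vertical angle:
    0 where the overlapping portion begins, 1 at the top pole,
    linearly interpolated in between (clamped to [0, 1])."""
    t = (vertical_angle - overlap_start) / (top_angle - overlap_start)
    return min(1.0, max(0.0, t))
```

Ramping the flow down to zero at the overlap boundary is what makes the warped top image agree exactly with the synthetic side image there, giving the claimed smooth shape transition.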

Publication date: 19-10-2017

System and method for perspective preserving stitching and summarizing views

Number: US20170301119A1
Assignee: International Business Machines Corp

A method and system of stitching a plurality of image views of a scene, including grouping matched points of interest in a plurality of groups, and determining a similarity transformation with smallest rotation angle for each grouping of the matched points. The method further includes generating virtual matching points on non-overlapping area of the plurality of image views and generating virtual matching points on overlapping area for each of the plurality of image views.
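Estimating a similarity transformation for a grouping of matched points is a standard Procrustes problem. The SVD-based sketch below (the textbook Umeyama method, not code from the patent) returns the rotation angle, so groupings can be compared to pick the one with the smallest rotation:

```python
import numpy as np

def similarity_from_matches(src, dst):
    """Least-squares 2-D similarity transform dst ≈ scale * R @ src + t.
    Returns (scale, rotation_angle_radians, translation)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d          # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A)      # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - scale * R @ mu_s
    angle = np.arctan2(R[1, 0], R[0, 0])
    return scale, angle, t
```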

Publication date: 25-10-2018

CAMERA AND SPECIMEN ALIGNMENT TO FACILITATE LARGE AREA IMAGING IN MICROSCOPY

Number: US20180307016A1
Assignee:

A microscope system and method allow for a desired x′-direction scanning along a specimen to be angularly offset from an x-direction of the XY translation stage, and rotates an image sensor associated with the microscope to place the pixel rows of the image sensor substantially parallel to the desired x′-direction. The angle of offset of the x′-direction relative to the x-direction is determined and the XY translation stage is employed to move the specimen relative to the image sensor to different positions along the desired x′-direction without a substantial shift of the image sensor relative to the specimen in a y′-direction, the y′-direction being orthogonal to the x′-direction of the specimen. The movement is based on the angle of offset. 1. A microscopy method for imaging a specimen comprising: rotating an image sensor having pixel rows and pixel columns, about its center axis, relative to a specimen on an XY translation stage that is movable in an x direction and a y direction, wherein rotating the image sensor comprises: identifying an axis-defining feature on the specimen running in an x′-direction; and aligning the pixel rows, using computer vision, substantially parallel to the axis-defining feature on the specimen. 2. The method of claim 1, further comprising aligning the pixel rows substantially parallel to the x direction of the XY translation stage before rotating the image sensor. 3. The method of claim 1, wherein the axis-defining feature has a detectable shape running in the x′-direction, and the aligning the pixel rows uses computer vision to align the pixel rows substantially parallel to the detectable shape. 4.
A microscope system comprising:a microscope;an image sensor, having pixel rows and pixel columns and rotatable about its center axis, configured to record image data;an XY translation stage that is movable in an X direction and a Y direction; rotate an image sensor having pixel rows and pixel columns relative to a specimen;', ...

Publication date: 19-11-2015

COMPOSITION MODELING FOR PHOTO RETRIEVAL THROUGH GEOMETRIC IMAGE SEGMENTATION

Number: US20150332117A1
Assignee:

A composition model is developed based on the image segmentation and the vanishing point of the scene. By integrating both photometric and geometric cues, better segmentation is provided. These cues are used directly to detect the dominant vanishing point in an image without extracting any line segments. Based on the composition model, a novel image retrieval system is developed which can retrieve images with compositions similar to the query image from a collection of images and provide feedback to photographers. 1. A method of analyzing a photographic image, comprising the steps of: receiving an image at a computer processor operative to perform the following steps: determining a vanishing point in the image; providing an over-segmentation which generates regions and boundaries between two of the adjacent regions, wherein a plurality of the regions is characterized by photometric and geometric cues; defining a weight for a boundary between two of the adjacent regions as a function of the photometric and geometric cues between the two of the adjacent regions; executing a hierarchical image segmentation process on the image to obtain an image segmentation map that partitions the image into photometrically and geometrically consistent regions; and modeling the composition of the image based upon the image segmentation map and the vanishing point. 2. The method of claim 1, including the steps of: storing in a memory a plurality of images with image composition previously determined by using the model; and retrieving exemplar images from the plurality of images in the memory, the exemplar images having a similar composition to the image. 3. The method of claim 2, wherein the retrieving step uses a similarity measure to compare the composition of two images. 4. The method of claim 3, wherein the similarity measure is computed based on the image segmentation maps and the vanishing points of the images. 5. The method of claim 2, including the step of using the ...
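For contrast with the segment-free detection claimed here, the textbook way to locate a vanishing point does use line segments: represent each line in homogeneous coordinates and intersect lines with a cross product. This sketch shows the classical baseline the patent departs from, not the patent's own method:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line (a, b, c) through two 2-D points: ax + by + c = 0."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    x, y, w = np.cross(line1, line2)
    return x / w, y / w
```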

Publication date: 10-11-2016

DYNAMIC UPDATING OF A COMPOSITE IMAGE

Number: US20160328827A1
Assignee: Dacuda AG

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Further, operating conditions may be selected, automatically or based on instructions provided to a user, to reduce motion blur. Techniques, including relocalization, allow user-selected regions of the composite image to be changed. 1. A method of forming a composite image from a plurality of image frames of a scene acquired using a portable electronic device associated with a user interface, the method comprising: sequentially processing image frames of the plurality of image frames by, for a processed image frame, incorporating the processed image frame into a representation of the composite image; receiving user input indicating a region of the composite image; and replacing a portion of the representation of the composite image based on at least one additional image frame of the plurality of image frames. 2. The method of claim 1, further comprising: detecting a stop condition; and stopping the sequential processing of the plurality of image frames based on the detected stop condition. 3. The method of claim 2, wherein the stop condition comprises a detected output of an inertial sensor on the portable electronic device. 4.
The method of claim 1 , further comprising:detecting a resume condition; andspatially correlating an image frame of the plurality of image frames to the representation of the composite ...
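The claimed flow, incrementally incorporating frames and then replacing a user-selected region from a later frame, can be mocked up with arrays standing in for already-registered frames (registration and the point-cloud projection are omitted; the class and names are this sketch's, not the patent's):

```python
import numpy as np

class Composite:
    def __init__(self, shape):
        self.acc = np.zeros(shape)
        self.n = 0

    def incorporate(self, frame):
        """Sequentially blend an (already registered) frame by running average."""
        self.n += 1
        self.acc += (frame - self.acc) / self.n

    def replace_region(self, frame, mask):
        """Overwrite the user-indicated region from one additional frame."""
        self.acc[mask] = frame[mask]
```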

Publication date: 09-11-2017

Overlay measurement method, device, and display device

Number: US20170322021A1
Assignee: Hitachi High Technologies Corp

To address the problem in which when measuring the overlay of patterns formed on upper and lower layers of a semiconductor pattern by comparing a reference image and measurement image obtained through imaging by an SEM, the contrast of the SEM image of the pattern of the lower layer is low relative to that of the SEM image of the pattern of the upper layer and alignment state verification is difficult even if the reference image and measurement image are superposed on the basis of measurement results, the present invention determines the amount of positional displacement of patterns of an object of overlay measurement from a reference image and measurement image obtained through imaging by an SEM, carries out differential processing on the reference image and measurement image, aligns the reference image and measurement image that have been subjected to differential processing on the basis of the positional displacement amount determined previously, expresses the gradation values of the aligned differential reference image and differential measurement image as brightnesses of colors that differ for each image, superposes the images, and displays the superposed images along with the determined positional displacement amount.
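The display step, differentiate both images, align them by the measured displacement, and show each in its own colour, can be sketched as follows. Gradient magnitude stands in for the unspecified differential processing, and the channel assignment is illustrative:

```python
import numpy as np

def gradient_magnitude(img):
    """Simple differential processing: per-pixel gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def overlay(reference, measurement, dy, dx):
    """Shift the measurement image by the measured displacement and superpose:
    reference edges in the red channel, measurement edges in green."""
    ref = gradient_magnitude(reference)
    mes = np.roll(gradient_magnitude(measurement), (dy, dx), axis=(0, 1))
    rgb = np.zeros(ref.shape + (3,))
    rgb[..., 0] = ref / (ref.max() or 1.0)
    rgb[..., 1] = mes / (mes.max() or 1.0)
    return rgb
```

Where the two gradient images coincide after alignment, the overlay appears yellow; residual red or green fringes make any remaining misalignment visible even when the lower-layer SEM contrast is poor.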

Publication date: 15-11-2018

COMPACT BIOMETRIC ACQUISITION SYSTEM AND METHOD

Number: US20180330161A1
Assignee: Eyelock LLC

A method of determining the identity of a subject while the subject is walking or being transported in an essentially straight direction is disclosed, the two dimensional profile of the subject walking or being transported along forming a three dimensional swept volume, without requiring the subject to change direction to avoid any part of the system, comprising acquiring data related to one or more biometrics of the subject with the camera(s), processing the acquired biometrics data, and determining if the acquired biometric data match corresponding biometric data stored in the system, positioning camera(s) and strobed or scanned infrared illuminator(s) above, next to, or below the swept volume. A system for carrying out the method is also disclosed. 120-. (canceled)21. A system for acquisition of biometric data from a subject in a transport or vehicle , the system comprising: illuminate a subject with light pulses from a first spectrum and a second spectrum of a plurality of discrete light spectra, and', 'illuminate the subject with light pulses from a second spectrum of the plurality of discrete light spectra, the second spectrum different from the first spectrum;, 'at least one light source configured to acquire, at a first rate of acquisition, a set of iris data from the subject for biometric matching, using the light pulses from the first spectrum and the second spectrum, and', 'acquire at a second rate of acquisition that is lower than the first rate of acquisition and interleaved with the acquisitions at the first rate of acquisition, a set of biometric data from other than an iris of the subject using the light pulses from the second spectrum; and, 'a sensor configured to'}at least one processor configured to perform biometric matching using data from the set of iris data, and liveness detection.22. 
The system of claim 21 , wherein the sensor is further configured to acquire claim 21 , at a third rate of acquisition interleaved with the acquisitions of the ...

Publication date: 15-10-2020

CAMERA AND SPECIMEN ALIGNMENT TO FACILITATE LARGE AREA IMAGING IN MICROSCOPY

Number: US20200326519A1
Assignee:

A microscope system and method allow for a desired x′-direction scanning along a specimen to be angularly offset from an x-direction of the XY translation stage, and rotates an image sensor associated with the microscope to place the pixel rows of the image sensor substantially parallel to the desired x′-direction. The angle of offset of the x′-direction relative to the x-direction is determined and the XY translation stage is employed to move the specimen relative to the image sensor to different positions along the desired x′-direction without a substantial shift of the image sensor relative to the specimen in a y′-direction, the y′-direction being orthogonal to the x′ direction of the specimen. The movement is based on the angle of offset. 1. A method for imaging a specimen along a desired x′-direction of the specimen , the method comprising:rotating an image sensor such that pixel rows of the image sensor are substantially parallel with the desired x′-direction of the specimen, the specimen being angularly offset from an x direction of an XY translation stage on which the specimen is positioned, wherein the XY translation stage is movable in an x direction and a y direction relative to the image sensor;determining the angle of offset of the desired x′-direction as compared to the x-direction of the XY translation stage; andmoving the specimen, using the XY translation stage, to one or more positions, along the desired x′ direction.2. The method of claim 1 , wherein the determining the angle of offset comprises:measuring, relative to the x-direction and y-direction of the XY translation stage, an x distance and y distance between a first focal feature and a second focal feature on the specimen aligned along and defining the desired x′-direction.3. 
The method of claim 2 , wherein the measuring the x distance and y distance comprises:placing the first focal feature so as to overlap with one or more target pixels of the image sensor,moving the specimen to place the ...
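Claim 2 reduces to trigonometry: the offset angle follows from the x and y distances between the two focal features, and each step of length d along x′ decomposes into stage moves (d·cos θ, d·sin θ), so the specimen advances along x′ with no y′ drift. A sketch with illustrative names:

```python
import math

def offset_angle(dx, dy):
    """Angle of the specimen's x'-axis relative to the stage x-axis,
    from the stage-frame distances between two focal features on x'."""
    return math.atan2(dy, dx)

def stage_step(distance, theta):
    """Stage (x, y) move that advances `distance` along x' with no y' drift."""
    return distance * math.cos(theta), distance * math.sin(theta)
```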

Publication date: 22-11-2018

CAMERA AND SPECIMEN ALIGNMENT TO FACILITATE LARGE AREA IMAGING IN MICROSCOPY

Number: US20180335614A1
Assignee:

A microscope system and method allow for a desired x′-direction scanning along a specimen to be angularly offset from an x-direction of the XY translation stage, and rotates an image sensor associated with the microscope to place the pixel rows of the image sensor substantially parallel to the desired x′-direction. The angle of offset of the x′-direction relative to the x-direction is determined and the XY translation stage is employed to move the specimen relative to the image sensor to different positions along the desired x′-direction without a substantial shift of the image sensor relative to the specimen in a y′-direction, the y′-direction being orthogonal to the x′ direction of the specimen. The movement is based on the angle of offset. 1. A method for imaging a specimen along a desired x′-direction of the specimen , the method comprising:rotating an image sensor such that pixel rows of the image sensor are substantially parallel with the desired x′-direction of the specimen, the specimen being angularly offset from an x direction of an XY translation stage on which the specimen is positioned, wherein the XY translation stage is movable in an x direction and a y direction relative to the image sensor;determining the angle of offset of the desired x′-direction as compared to the x-direction of the XY translation stage; andmoving the specimen, using the XY translation stage, to one or more positions, along the desired x′ direction.2. The method of claim 1 , wherein the determining the angle of offset comprises:measuring, relative to the x-direction and y-direction of the XY translation stage, an x distance and y distance between a first focal feature and a second focal feature on the specimen aligned along and defining the desired x′-direction.3. 
The method of claim 2 , wherein the measuring the x distance and y distance comprises:placing the first focal feature so as to overlap with one or more target pixels of the image sensor,moving the specimen to place the ...

Publication date: 24-10-2019

METHOD AND SYSTEM FOR DETECTING AN ELEVATED OBJECT SITUATED WITHIN A PARKING FACILITY

Number: US20190325225A1
Assignee:

A method for detecting an elevated object situated within a parking facility, using at least two video cameras that are spatially distributed within the parking facility and whose visual ranges overlap in an overlap area. The method encompasses the following: recording particular video images of the overlap area with the aid of the video cameras; analyzing the recorded video images in order to detect an elevated object in the recorded video images, and ascertaining, based on the recorded video images, whether in the detection of an elevated object the detected elevated object is real. A corresponding system, a parking facility, and a computer program are also provided. 112-. (canceled)13. A method for detecting an elevated object situated within a parking facility , using at least two video cameras that are spatially distributed within the parking facility and whose visual ranges overlap in an overlap area , the method comprising:a) recording particular video images of the overlap area using the video cameras;b) analyzing the recorded video images to detect an elevated object in the recorded video images; andc) ascertaining, based on the recorded video images, whether in the detection of an elevated object the detected elevated object is real.14. The method as recited in claim 13 , wherein step c) includes ascertaining an object speed claim 13 , the ascertained object speed being compared to a predetermined object speed threshold value claim 13 , and based on the comparison claim 13 , determining whether the detected elevated object is real.15. The method as recited in claim 13 , wherein step c) includes ascertaining a movement of the detected elevated object claim 13 , it being ascertained whether the movement of the detected elevated object is plausible claim 13 , and based on the plausibility check claim 13 , determining whether the detected elevated object is real.16. 
The method as recited in claim 13 , wherein step c) includes classifying the detected elevated ...

Publication date: 07-11-2019

USER FEEDBACK FOR REAL-TIME CHECKING AND IMPROVING QUALITY OF SCANNED IMAGE

Number: US20190342533A1
Assignee: ML Netherlands C.V.

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as a real-time feedback. 1. A method of forming a composite image , the method comprising:acquiring a plurality of image frames with a portable electronic device comprising a user interface;sequentially incorporating image frames of the plurality of image frames into a representation of the composite image;determining a quality of depiction of a scene in a portion of the representation of the composite image;computing, based at least in part on the determined quality, a position parameter of the portable electronic device; andgenerating feedback on the user interface, wherein the feedback comprises an indication to a user to adjust positioning of the portable electronic device.2. The method of claim 1 , further comprising sequentially processing image frames of the plurality of image frames prior to incorporating an image frame in the representation of the composite image frame.3. The method of claim 2 , wherein sequentially processing image frames of the plurality of image frames comprises in real-time correcting for warping of the image represented in the image frame.4. The method of claim 2 , ...
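One concrete way to "determine a quality of depiction" per region, which the abstract leaves unspecified, is the common variance-of-Laplacian sharpness score; low-scoring tiles would then trigger feedback to re-scan from a better position. A sketch under that assumption (tile size and threshold are arbitrary choices here):

```python
import numpy as np

def sharpness(tile):
    """Variance of a 4-neighbour Laplacian: low values suggest blur."""
    t = tile.astype(float)
    lap = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
           np.roll(t, 1, 1) + np.roll(t, -1, 1) - 4 * t)
    return lap.var()

def needs_rescan(composite, tile=16, threshold=1e-3):
    """Return (row, col) indices of tiles whose depiction quality is too low."""
    h, w = composite.shape
    bad = []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            if sharpness(composite[i:i + tile, j:j + tile]) < threshold:
                bad.append((i // tile, j // tile))
    return bad
```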

Publication date: 14-12-2017

SYSTEMS AND METHODS FOR COMBINING MULTIPLE FRAMES TO PRODUCE MEDIA CONTENT WITH SIMULATED EXPOSURE EFFECTS

Number: US20170359517A1
Assignee:

Systems, methods, and non-transitory computer-readable media can capture media content including an original set of frames. A plurality of subsets of frames can be identified, based on a subset selection input, out of the original set of frames. An orientation-based image stabilization process can be applied to each subset in the plurality of subsets of frames to produce a plurality of stabilized subsets of frames. Multiple frames within each stabilized subset in the plurality of stabilized subsets of frames can be combined to produce a plurality of combined frames. Each stabilized subset of frames can be utilized to produce a respective combined frame in the plurality of combined frames. A time-lapse media content item can be provided based on the plurality of combined frames. 1. A computer-implemented method comprising:applying, by a computing system, an orientation-based image stabilization process to each subset in a plurality of subsets of frames of an original set of frames to produce a plurality of stabilized subsets of frames, wherein the applying the orientation-based image stabilization process comprises minimizing a rate of rotation between successive frames within each subset;combining, by the computing system, multiple frames within each stabilized subset in the plurality of stabilized subsets of frames to produce a plurality of combined frames, wherein each stabilized subset of frames is utilized to produce a respective combined frame in the plurality of combined frames; andgenerating, by the computing system, a time-lapse media content item based on the plurality of combined frames.2. 
The computer-implemented method of claim 1 , wherein the applying of the orientation-based image stabilization process to each subset in the plurality of subsets of frames to produce the plurality of stabilized subsets of frames further comprises:acquiring timestamps for multiple frames within each subset in the plurality of subsets of frames;acquiring camera orientation ...
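"Minimizing a rate of rotation between successive frames" can be illustrated by smoothing the measured camera orientations and counter-rotating each frame by the residual. A moving average stands in here for whatever smoother the method actually uses; the window size is arbitrary:

```python
import numpy as np

def stabilizing_corrections(angles, window=3):
    """Per-frame counter-rotation (degrees) that moves each measured camera
    orientation onto a moving-average path, reducing frame-to-frame rotation."""
    angles = np.asarray(angles, float)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(angles, pad, mode="edge")
    smooth = np.convolve(padded, kernel, mode="valid")
    return smooth - angles
```

Applying the returned correction to each frame before combining the subset keeps the long-exposure-style blend from smearing due to hand rotation.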

Publication date: 14-11-2019

THREE-DIMENSIONAL FINGER VEIN RECOGNITION METHOD AND SYSTEM

Number: US20190347461A1
Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY

A three-dimensional finger vein recognition method and system, comprising the following steps: three cameras taking finger vein images from three angles to obtain three images; constructing a three-dimensional finger model according to finger contour lines; mapping two-dimensional image textures photographed by the three cameras into the three-dimensional finger model, respectively performing different processes on an overlapping region and a non-overlapping region; obtaining a three-dimensional finger vein image; and finally, performing feature extraction and matching on the three-dimensional finger vein image, to complete recognition. The method can acquire a better finger vein recognition effect, and has a higher robustness for a plurality of postures, such as finger rotation and inclination. 1. A three-dimensional finger vein recognition method , characterised by comprising the following steps:S1, photographing, by three cameras, from three evenly spaced angles to obtain three images;S2, constructing a three-dimensional finger model according to the finger edges:a sectional view of a finger is considered approximately as a circle S, a three-dimensional finger is divided into several sections at equal distances, a contour of each section is calculated, and the finger is modelled approximately by using a plurality of circles that have different radii and are located in different positions; and all the approximated circles are then connected in an axial direction of the finger to obtain an approximate three-dimensional finger model;S3, after the three-dimensional finger model has been constructed, next, mapping a two-dimensional image texture photographed by the three cameras into the three-dimensional finger model, wherein an overlapping portion and a nonoverlapping portion exist between every two of the images photographed by the three cameras, an overlapping region needs to be determined first, and the overlapping region and a nonoverlapping region are then ...
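Step S2, approximating the finger by circles of varying radius and centre stacked along the axis, is easy to mock up as a point cloud; the sampling density and the unit z-spacing between sections are arbitrary choices of this sketch:

```python
import numpy as np

def finger_model(radii, centers, points_per_circle=36):
    """Stack one circle per cross-section along the finger axis (z).
    radii: (n,) section radii; centers: (n, 2) section centres (x, y)."""
    t = np.linspace(0.0, 2.0 * np.pi, points_per_circle, endpoint=False)
    sections = []
    for z, (r, (cx, cy)) in enumerate(zip(radii, centers)):
        xs = cx + r * np.cos(t)
        ys = cy + r * np.sin(t)
        zs = np.full_like(xs, float(z))
        sections.append(np.column_stack([xs, ys, zs]))
    return np.vstack(sections)
```

The texture-mapping step S3 would then project each camera's image onto the nearest arc of this surface, treating overlapping and non-overlapping arcs differently as the claim describes.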

Publication date: 14-11-2019

AUTOMATIC DETERMINATION AND CALIBRATION FOR SPATIAL RELATIONSHIP BETWEEN MULTIPLE CAMERAS

Number: US20190347822A1
Assignee:

Aspects of the present disclosure relate to systems and methods for determining or calibrating for a spatial relationship for multiple cameras. An example device may include one or more processors. The example device may also include a memory coupled to the one or more processors and including instructions that, when executed by the one or more processors, cause the device to receive a plurality of corresponding images of scenes from multiple cameras during normal operation, accumulate a plurality of keypoints in the scenes from the plurality of corresponding images, measure a disparity for each keypoint of the plurality of keypoints, exclude one or more keypoints with a disparity greater than a threshold, and determine, from the plurality of remaining keypoints, a yaw for a camera of the multiple cameras. 1. A device, comprising: one or more processors; and a memory coupled to the one or more processors and including instructions that, when executed by the one or more processors, cause the device to perform operations comprising: receiving a plurality of corresponding images of scenes from multiple cameras during normal operation; accumulating a plurality of keypoints in the scenes from the plurality of corresponding images; measuring a disparity for each keypoint of the plurality of keypoints; excluding one or more keypoints with a disparity greater than a threshold; and determining, from the plurality of remaining keypoints, a yaw for at least one of the multiple cameras. 2.
The device of claim 1 , further comprising a first camera and a second camera with overlapping fields of view claim 1 , wherein the instructions cause the device to perform operations further comprising:capturing by the first camera a plurality of images of the plurality of corresponding images; andcapturing by the second camera a corresponding image for each of the plurality of images;wherein determining the yaw comprises determining the yaw of the second camera relative to the ...
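The yaw estimate rests on a geometric fact: for very distant keypoints the depth-induced disparity vanishes, so any residual horizontal disparity is approximately the yaw misalignment times the focal length. Keypoints with disparity above the threshold are near objects and are excluded first. A sketch (threshold and focal-length values are illustrative):

```python
import math

def estimate_yaw(disparities_px, focal_px, threshold_px=5.0):
    """Yaw (radians) of the second camera, from residual disparities of far
    keypoints; keypoints with |disparity| > threshold are treated as near
    objects and excluded, as in the claimed method."""
    far = [d for d in disparities_px if abs(d) <= threshold_px]
    if not far:
        raise ValueError("no distant keypoints left after exclusion")
    mean_d = sum(far) / len(far)
    return math.atan2(mean_d, focal_px)
```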

Publication date: 14-11-2019

APPARATUS AND ASSOCIATED METHODS FOR VIRTUAL REALITY SCENE CAPTURE

Number: US20190347863A1
Author: SANGUINETTI Alejandro
Assignee:

A virtual reality visual indicator apparatus comprising a virtual reality image capture device comprising a plurality of cameras configured to capture a respective plurality of images of a scene, the respective plurality of images of the scene configured to be connected at stitching regions to provide a virtual reality image of the scene; and a visual indicator provider configured to transmit, into the scene, a visual indicator at a location of at least one stitching region prior to capture of the respective plurality of images of the scene and provide no visual indicator during capture of the respective plurality of images. 115-. (canceled)16. A visual indicator provider device comprising:a processor configured to:communicably couple the visual indicator provider device to a virtual reality image capture device including a plurality of cameras for capturing an image of a scene; and,transmit, into the scene, a visual indicator to indicate a location of at least one stitching region prior to the capture of a respective plurality of images by the respective plurality of cameras; and, to stop transmitting the visual indicator into the scene during capture of the respective plurality of images by the respective plurality of cameras.17. The visual indicator provider device of claim 16 , wherein the processor is further configured to transmit claim 16 , into the scene claim 16 , a visual indicator comprising one or more of:a line indicating a boundary between adjacent captured images, wherein adjacent cameras of the plurality of cameras are configured to capture the adjacent captured images meeting at the boundary; oran area indicating an overlap region between adjacent captured images, wherein adjacent cameras of the plurality of cameras are configured to capture the overlap region between the adjacent captured images.18. The visual indicator provider device of claim 16 , wherein the processor is further configured to transmit claim 16 , into the scene claim 16 , a ...

Publication date: 21-12-2017

Augmented Reality Occlusion

Number: US20170365100A1
Author: Walton David
Assignee:

A method for generating an augmented reality image from first and second images, wherein at least a portion of at least one of the first and the second image is captured from a real scene, the method comprising: identifying a confidence region in which a confident determination as to which of the first and second image to render in that region of the augmented reality image can be made; identifying an uncertainty region in which it is uncertain as to which of the first and second image to render in that region of the augmented reality image; determining at least one blending factor value in the uncertainty region based upon a similarity between a first colour value in the uncertainty region and a second colour value in the confidence region; and generating an augmented reality image by combining, in the uncertainty region, the first and second images using the at least one blending factor value. 1. A method for generating an augmented reality image from first and second images, wherein at least a portion of at least one of the first and the second image is captured from a real scene, the method comprising: identifying a confidence region in which a confident determination as to which of the first and second image to render in that region of the augmented reality image can be made; identifying an uncertainty region in which it is uncertain as to which of the first and second image to render in that region of the augmented reality image; determining at least one blending factor value in the uncertainty region based upon a similarity between a first colour value in the uncertainty region and at least one second colour value in the confidence region; and generating an augmented reality image by combining, in the uncertainty region, the first and second images using the at least one blending factor value. 2. The method according to claim 1, wherein the first image and the second image each have associated therewith a plurality of colour values and a corresponding plurality ...
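The blending-factor determination can be illustrated with a small sketch. The squared-RGB-distance similarity measure and the per-channel alpha blend below are assumptions for illustration; the claims leave the exact similarity function open:

```python
# Illustrative sketch of the blending step (hypothetical helper names;
# the patent does not fix a particular colour-similarity measure).

def colour_similarity(c1, c2):
    """Similarity in [0, 1] derived from squared RGB distance (assumed)."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    max_d2 = 3 * 255 ** 2  # largest possible squared distance
    return 1.0 - d2 / max_d2

def blending_factor(uncertain_colour, confident_colour):
    """Alpha for the uncertainty region: a colour similar to the region
    confidently assigned to one image leans toward that image."""
    return colour_similarity(uncertain_colour, confident_colour)

def blend(first_colour, second_colour, alpha):
    """Combine the first and second images' colours with the factor."""
    return tuple(
        round(alpha * f + (1.0 - alpha) * s)
        for f, s in zip(first_colour, second_colour)
    )
```

With alpha near 1 the uncertainty-region pixel follows the first image; with alpha near 0 it follows the second.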

20-12-2018 publication date

Generating Prescription Records from a Prescription Label on a Medication Package

Number: US20180366218A1
Assignee:

The system captures portions of a label on a package in a set of images, reconstructs the label based on the set of images, identifies text in the label, determines associations of identified text and types of information, and stores the set of images, the reconstructed label, the identified text in the label, and the determined associations as, for example, a batch in a review queue. During a review process, the batch is reviewed and a structured prescription record is determined for the batch, which is further used by the system and the user of the system associated with the batch to provide various features to the user. 1. A method comprising: receiving, by a medication management system from each of various client devices, one or more images of a prescription label on a medication package associated with a user operating the client device, each image taken of the prescription label by one of the client devices; for images determined to be of a same prescription label, storing the images as a batch of images for the prescription label; associating each of the images or each batch of images with a user identifier for a user who provided the one or more images to the medication management system via one of the client devices; receiving, by the medication management system, a confirmation of review of the prescription label in the one or more images associated with each user identifier; storing, by the medication management system, the confirmation for each prescription label in association with the user identifier of the user who provided the one or more images; and generating, by the medication management system, a prescription record for each prescription label based on the stored confirmation, each prescription record including information about a prescription identified by the prescription label on the medication package. 2. The method of claim 1, further comprising: storing each of the images in a review queue; and ordering the review queue based on a queuing algorithm ...

12-11-2020 publication date

METHODS AND APPARATUS TO CAPTURE PHOTOGRAPHS USING MOBILE DEVICES

Number: US20200358948A1
Assignee:

Methods and apparatus to capture photographs using mobile devices are disclosed. An example apparatus includes a photograph capturing controller to capture a first photograph. The apparatus further includes a blurriness analyzer to determine a probability of blurriness of the first photograph. The probability of blurriness is based on an analysis of a portion of the first photograph, the portion excluding a region of the first photograph associated with an auto-focus operation. The example apparatus also includes a photograph capturing interface to prompt a user to capture a new photograph to replace the first photograph when the probability of blurriness exceeds a blurriness threshold. 1. An apparatus comprising: a photograph capturing controller to capture a first photograph; a blurriness analyzer to determine a probability of blurriness of the first photograph, the probability of blurriness based on an analysis of a portion of the first photograph, the portion excluding a region of the first photograph associated with an auto-focus operation; and a photograph capturing interface to prompt a user to capture a new photograph to replace the first photograph when the probability of blurriness exceeds a blurriness threshold. 2. The apparatus of claim 1, wherein the blurriness analyzer is to determine the probability of blurriness by: applying an edge detection filter to the first photograph; identifying pixels having a pixel value above a pixel value threshold; estimating a variance of pixel values corresponding to the identified pixels; and calculating the probability of blurriness based on the estimated variance. 3. The apparatus of claim 2, wherein the variance of the pixel values is a first variance of a plurality of variances of the pixel values, the first variance associated with a first area of multiple areas within the portion of the first photograph, the blurriness analyzer to estimate the first variance based on the pixels identified within the ...
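The edge-filter-and-variance pipeline of claim 2 can be sketched in a few lines of plain Python. The 3x3 Laplacian kernel, the thresholds, and the mapping from variance to a probability are illustrative assumptions; the claims do not fix any of them:

```python
# Hedged sketch of the blurriness-scoring pipeline (assumed filter and
# variance-to-probability mapping; not the patented implementation).

def laplacian_response(img):
    """Apply a 3x3 Laplacian edge-detection filter to a 2D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = abs(
                4 * img[y][x]
                - img[y - 1][x] - img[y + 1][x]
                - img[y][x - 1] - img[y][x + 1]
            )
    return out

def blurriness_probability(img, pixel_value_threshold=10.0):
    """Low variance among strong edge responses suggests a blurry photo."""
    edges = laplacian_response(img)
    strong = [v for row in edges for v in row if v > pixel_value_threshold]
    if not strong:
        return 1.0  # no strong edges at all: very likely blurred
    mean = sum(strong) / len(strong)
    var = sum((v - mean) ** 2 for v in strong) / len(strong)
    # Map variance to (0, 1]: higher edge variance reads as sharper.
    return 1.0 / (1.0 + var)
```

On this scoring, an image with no strong edge responses at all is treated as maximally likely to be blurred, the case that would trigger the recapture prompt.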

28-11-2019 publication date

DYNAMIC UPDATING OF A COMPOSITE IMAGE

Number: US20190362469A1
Assignee: ML Netherlands C.V.

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Further, operating conditions may be selected, automatically or based on instructions provided to a user, to reduce motion blur. Techniques, including relocalization, allow user-selected regions of the composite image to be changed. 1. A method of forming a composite representation from a plurality of image frames of a scene acquired using a portable electronic device associated with a user interface, the method comprising: sequentially processing image frames of the plurality of image frames by, for a processed image frame, incorporating information derived from the processed image frame into the composite representation; receiving user input indicating a region of the composite representation; and replacing a portion of the composite representation based on at least one additional image frame of the plurality of image frames, wherein: sequentially processing the image frames of the plurality of image frames comprises tracking a position of the portable electronic device with respect to a spatial coordinate system associated with the composite representation. 2. The method of claim 1, further comprising: detecting a stop condition; and stopping the sequential processing of the plurality of image frames based on the detected stop condition. 3. The method of claim 2, wherein the stop condition comprises a ...

03-12-2020 publication date

Dynamic Street Scene Overlay

Number: US20200379629A1
Assignee: Apple Inc.

In some implementations, a computing device can present a dynamic street scene overlay when presenting a map view on a display of the computing device. The dynamic street scene overlay can be presented such that a user can clearly view both the dynamic street scene and the map view. The dynamic street scene can be dynamically adjusted in response to the user manipulating the map view to a different location. The dynamic street scene can be presented such that the objects in the images of the dynamic street scene have a three-dimensional look and feel. The dynamic street scene can be presented such that the dynamic street scene does not prevent the user from viewing and interacting with the map view. 1. A method comprising: presenting, by a computing device, a graphical user interface having a first portion and a second portion distinct from the first portion; presenting, by the computing device, a map view in the first portion of the graphical user interface, the map view including a first map of a first geographical area and a location indicator associated with a first geographic location within the first geographical area, the location indicator having a fixed position within the first portion of the graphical user interface and located over the first geographic location on the first map; presenting, by the computing device, a dynamic street scene overlay in the second portion of the graphical user interface, the dynamic street scene overlay including a first street level image of a first object from a perspective of the first geographic location; receiving, by the computing device, a first user input to the map view; in response to receiving the first user input, moving, by the computing device, the first map under the location indicator while the location indicator remains at the fixed position within the first portion of the graphical user interface, where the moving causes the computing device to present a second map of a second geographical area within the first ...

03-12-2020 publication date

METHOD FOR OBSERVING THE SURFACE OF THE EARTH AND DEVICE FOR IMPLEMENTING SAME

Number: US20200380283A1
Author: GEORGY Pierre-Luc
Assignee:

A method for acquiring images of the surface of the Earth is disclosed, comprising installing an aerial platform in a quasi-stationary position, equipped with a first image acquisition system with a large field of view and a second, high-resolution image acquisition system. The method includes implementing successive observation cycles, each one including the acquisition of an image of a zone of interest by the first system, the partitioning of the image thus acquired into mesh units which each correspond to a sector of the zone of interest, the analysis of the image in order to detect the potential presence of unwanted marks, and the acquisition of an image by the second system for the mesh units for which no unwanted marks have been detected. Observation cycles are thereby implemented until images of the entire zone of interest have been acquired by the second system. 1. A method for acquiring images of the surface of the Earth, comprising: placing of a first aerial or space platform in a stationary position above said surface of the Earth or moving at a speed less than 200 km/h above said surface of the Earth, said first platform comprising a first image acquisition system with a field of view covering a zone, called zone of interest, of said surface of the Earth; placing of a second aerial or space platform in a stationary position above said surface of the Earth or moving at a speed less than 200 km/h above said surface of the Earth, said second platform comprising a second image acquisition system with a narrower field of view and of better resolution than the first image acquisition system, the field of view of said second image acquisition system being orientable such that the field of regard of said second image acquisition system covers said zone of interest; (a) acquiring an image of said zone of interest by said first image acquisition system; (b) partitioning of the image thus acquired, called preliminary image, in mesh units each corresponding to a sector of said ...

17-12-2020 publication date

WEARABLE KEY DEVICE AND ELECTRONIC KEY SYSTEM

Number: US20200391696A1
Assignee:

A wearable key device is to be used while being worn on a predetermined position of a body and includes a ring communication module, an imaging device, and a ring controller. The ring communication module is configured to wirelessly communicate with an authentication device provided to a predetermined protection object. The imaging device is configured to capture an image of the predetermined position of the body. The ring controller is configured to acquire wearer information that is biometric information of a wearer who wears the wearable key device based on the image captured by the imaging device. The wearer information is used for determining whether the wearer is an authorized user. 1. A wearable key device to be used while being worn on a predetermined position of a body, the wearable key device comprising: a ring communication module configured to wirelessly communicate with an authentication device provided to a predetermined protection object; an imaging device configured to capture an image of the predetermined position of the body; a user information storage storing user information that is biometric information of an authorized user; and a ring controller configured to: cooperate with the ring communication module and transmit, to the authentication device, authentication information that is information for certifying that the wearable key device is a key of the protection object; acquire wearer information that is biometric information of a wearer who wears the wearable key device based on the image captured by the imaging device; and compare the wearer information and the user information stored by the user information storage to determine whether the wearer is the authorized user, wherein the ring controller is further configured to transmit the authentication information in response to a request from the authentication device when determining that the wearer is the authorized user, and not to transmit the authentication information when not ...
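The ring controller's decision rule (transmit the authentication information only on a biometric match, otherwise stay silent) can be sketched as follows; `matches` is a toy stand-in for a real biometric comparison, which the claims do not specify:

```python
# Sketch of the wearable-key decision flow (hypothetical matching
# function; the patent leaves the biometric comparison method open).

def matches(wearer_info, user_info, threshold=0.9):
    """Toy similarity check standing in for real biometric matching."""
    same = sum(1 for a, b in zip(wearer_info, user_info) if a == b)
    return same / max(len(user_info), 1) >= threshold

def respond_to_auth_request(wearer_info, stored_user_info, auth_token):
    """Transmit the authentication token only when the current wearer
    matches the enrolled authorized user; otherwise send nothing."""
    if matches(wearer_info, stored_user_info):
        return auth_token
    return None
```

The key property is the asymmetric failure mode: on a mismatch the device does not transmit at all, rather than transmitting a rejection.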

31-12-2020 publication date

CONSISTENTLY EDITING LIGHT FIELD DATA

Number: US20200410635A1
Assignee:

The invention describes a method for applying a geometric warp to the light field capture of a 3D scene, consisting of several views of the scene taken from different viewpoints. The warp is specified as a set of (source point, target point) positional constraints on a subset of the views. These positional constraints are propagated to all the views and a warped image is generated for each view, in such a way that these warped images are geometrically consistent in 3D across the views. 2. The method of claim 1, wherein said plurality of calibrated 2D images of the 3D scene is further depicted from a set of corresponding matrices of projection of the 3D scene for each of the 2D views. 3. The method of claim 1, wherein it comprises a prior step of inputting the at least one initial set of positional constraint parameters. 4. The method of claim 1, wherein it comprises determining, for each of the positional constraint parameters associated with the at least two reference views, a line in 3D space that projects on the 2D source location, and a line in 3D space that projects on the 2D target location, and determining said 3D source location and said 3D target location from said lines. 5. The method of claim 4, wherein each line is represented in Plücker coordinates as a pair of 3D vectors noted (d, m), and wherein determining the 3D source location (P), in the 3D scene, of which the 2D source location (p) is the projection into the corresponding view (V), comprises solving the system of equations formed by the initial set of positional constraints (p, q), in the least squares sense: P̂ᵢ = argmin over Pᵢ of Σⱼ ‖Pᵢ ∧ dᵢʲ − mᵢʲ‖². 6. The method of claim 1, wherein warping implements a moving least square algorithm. 7. The method of claim 1, wherein warping implements a bounded ...
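The least-squares recovery in claim 5 can be made concrete: a point P lies on a Plücker line (d, m) exactly when P × d = m, so the 3D source location is the P minimizing Σⱼ ‖P × dⱼ − mⱼ‖², a 3x3 linear least-squares problem. A pure-Python sketch under those assumptions (hypothetical function names; the ∧ in the claim is the cross product here):

```python
# Sketch: least-squares 3D point from several Plücker lines, via the
# 3x3 normal equations (illustrative, not the patented solver).

def cross_matrix(d):
    """Matrix A such that A @ P == P x d for direction d."""
    d1, d2, d3 = d
    return [[0.0, d3, -d2],
            [-d3, 0.0, d1],
            [d2, -d1, 0.0]]

def solve3(M, b):
    """Solve a 3x3 linear system with Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    out = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = b[r]
        out.append(det(Mc) / D)
    return out

def triangulate(lines):
    """Point P minimizing sum ||P x d_j - m_j||^2 over lines [(d, m), ...]."""
    AtA = [[0.0] * 3 for _ in range(3)]
    Atb = [0.0] * 3
    for d, m in lines:
        A = cross_matrix(d)
        for i in range(3):
            for j in range(3):
                AtA[i][j] += sum(A[k][i] * A[k][j] for k in range(3))
            Atb[i] += sum(A[k][i] * m[k] for k in range(3))
    return solve3(AtA, Atb)
```

For two lines that actually intersect, the minimizer is their intersection point; for noisy, skew lines it is the point closest to all of them in the least-squares sense.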

21-09-2021 publication date

Dynamic street scene overlay

Number: US11126336B2
Assignee: Apple Inc

In some implementations, a computing device can present a dynamic street scene overlay when presenting a map view on a display of the computing device. The dynamic street scene overlay can be presented such that a user can clearly view both the dynamic street scene and the map view. The dynamic street scene can be dynamically adjusted in response to the user manipulating the map view to a different location. The dynamic street scene can be presented such that the objects in the images of the dynamic street scene have a three-dimensional look and feel. The dynamic street scene can be presented such that the dynamic street scene does not prevent the user from viewing and interacting with the map view.

22-01-2019 publication date

Vehicular image processing apparatus and vehicular image processing system

Number: US10183621B2
Author: Norifumi HODOHARA
Assignee: Mitsubishi Electric Corp

A vehicular image processing system and a vehicular image processing apparatus are provided which combine an image from a photographing device for detection of an object forward of a vehicle and an image from a photographing device for parking assistance, thereby enabling detection, in a synthesized image, of an obstacle whose entirety cannot be captured by either photographing device alone because the object is located in a region where detection is difficult in the conventional art, for example at a short distance from the own vehicle.

06-12-2016 publication date

Method and apparatus for image reconstruction

Number: US9516242B2
Assignee: CARL ZEISS AG

An apparatus for image reconstruction comprises an optical system and a control means for controlling the optical system, which control means is configured to control the optical system, for the capture of a plurality of single images, in such a manner that at least one parameter of the optical system is different upon capture of at least two single images. The apparatus comprises a processing device for digitally reconstructing an image in dependence on the plurality of single images and in dependence on information about optical transfer functions of the optical system upon capture of the plurality of single images.

20-11-2018 publication date

Compact biometric acquisition system and method

Number: US10133926B2
Assignee: Eyelock LLC

A method of determining the identity of a subject while the subject is walking or being transported in an essentially straight direction is disclosed, the two-dimensional profile of the subject walking or being transported along forming a three-dimensional swept volume, without requiring the subject to change direction to avoid any part of the system. The method comprises positioning camera(s) and strobed or scanned infrared illuminator(s) above, next to, or below the swept volume, acquiring data related to one or more biometrics of the subject with the camera(s), processing the acquired biometric data, and determining if the acquired biometric data match corresponding biometric data stored in the system. A system for carrying out the method is also disclosed.

17-09-2019 publication date

Image processing device, image processing method and computer readable medium

Number: US10417743B2
Assignee: Mitsubishi Electric Corp

The present invention is provided with a boundary calculation unit (110) to calculate a boundary position (180) serving as a basis for dividing a common area into a side of the first imaging device (210) and a side of the second imaging device (220), a selection unit (130) to select, as a selected image (330), the bird's-eye view image in which distortion in an image of a three-dimensional object is smaller, out of the first bird's-eye view image (311) and the second bird's-eye view image (321), based on the boundary position (180) and a position of the three-dimensional object, and an image generation unit (140) to generate an area image (340) based on an image other than the common area in the first bird's-eye view image (311), an image other than the common area in the second bird's-eye view image (321), and an image of the common area included in the selected image (330).

20-06-2017 publication date

Trainable versatile monitoring device and system of devices

Number: US9684834B1
Assignee: Surround IO Corp

A machine system includes monitor devices, each having a camera, distributed over a physical area; layout logic that forms images from the cameras of the monitor devices into a scene layout for the area; user interface logic that receives training signals from sensors directed to a person physically present in the area and correlates those signals to subareas of the layout; and analytical logic that analyzes the layout and training signals to ascertain subareas of the area at which the monitor devices should focus machine sensor and processing resources.

07-10-2020 publication date

Apparatus for Verifying Counterfeit and Falsification by Using Different Patterns and Driving Method Thereof

Number: KR102163122B1
Author: 김정남, 박행운
Assignee: 주식회사 더코더

The present invention relates to a device for verifying forgery and alteration using a heterogeneous pattern, and to a driving method of the device. According to an embodiment of the present invention, the device comprises: an interface unit which receives scan data of a discrimination target including a first pattern which can be identified by the naked eye and a second pattern which cannot be identified by the naked eye; and a control unit which determines duplication or forgery and alteration according to whether the first pattern and the second pattern are recognized, based on an analysis result of the received scan data.

12-05-2020 publication date

Tracking and/or analyzing facility-related activities

Number: US10650340B2
Assignee: Accenture Global Solutions Ltd

A device may receive video of a facility from an image capture system. The video may show an individual within the facility, an object within the facility, or an activity being performed within the facility. The device may process the video using a technique to identify the individual within the facility, the object within the facility, or the activity being performed within the facility. The device may track the individual, the object, or the activity through the facility to facilitate an analysis of the individual, the object, or the activity. The device may perform the analysis of the individual, the object, or the activity using information related to tracking the individual, the object, or the activity. The device may perform an action related to the individual, the object, or the activity based on a result of the analysis. The action may positively impact operations of the facility.

11-05-2021 publication date

Image identification apparatus and non-transitory computer readable medium

Number: US11003902B2
Author: Daisuke Tatsumi
Assignee: Fuji Xerox Co Ltd

An image identification apparatus includes an extraction unit, an exclusion unit, and an identification unit. The extraction unit extracts lines from an image. The exclusion unit excludes, from the objects to be identified, a boundary delimiting the entire area of the image among the extracted lines. The identification unit identifies as an object multiple lines that are among the extracted lines and that are not excluded by the exclusion unit, if the multiple lines are connected to each other.

17-05-2022 publication date

Methods and apparatus to capture photographs using mobile devices

Number: US11336819B2
Assignee: Nielsen Co US LLC

Methods and apparatus to capture photographs using mobile devices are disclosed. An example apparatus includes a photograph capturing controller to capture a first photograph. The apparatus further includes a blurriness analyzer to determine a probability of blurriness of the first photograph. The probability of blurriness is based on an analysis of a portion of the first photograph, the portion excluding a region of the first photograph associated with an auto-focus operation. The example apparatus also includes a photograph capturing interface to prompt a user to capture a new photograph to replace the first photograph when the probability of blurriness exceeds a blurriness threshold.

15-11-2016 publication date

Method for product recognition from multiple images

Number: US9495606B2
Assignee: Ricoh Co Ltd

A method for product recognition from multiple images includes: producing a plurality of recognition results for a plurality of input images; stitching the plurality of input images into a single stitched image; merging the plurality of recognition results using information from stitching the plurality of input images to generate a merged recognition result; and outputting the merged recognition result. The disclosure also includes systems for implementing the method.
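One way stitching information can drive the merge step: map each per-image detection into stitched-panorama coordinates using its image's stitch offset, then collapse detections of the same product that land close together. The data shapes, pure-translation offset model, and nearest-detection rule below are illustrative assumptions, not the patented implementation:

```python
# Hedged sketch of merging per-image recognition results via stitch
# offsets (assumed data shapes; a real stitcher yields full homographies).

def to_stitched_coords(detection, image_offset):
    """Map a detection's (x, y) from its source image into the stitched
    image using that image's stitch offset."""
    ox, oy = image_offset
    return (detection['x'] + ox, detection['y'] + oy)

def merge_recognition_results(results_per_image, offsets, dist_eps=20.0):
    """Collapse same-label detections that land close together in
    stitched coordinates, keeping the higher-confidence one."""
    merged = []
    for detections, offset in zip(results_per_image, offsets):
        for det in detections:
            x, y = to_stitched_coords(det, offset)
            for kept in merged:
                same_label = kept['label'] == det['label']
                close = (abs(kept['x'] - x) <= dist_eps
                         and abs(kept['y'] - y) <= dist_eps)
                if same_label and close:
                    if det['score'] > kept['score']:
                        kept.update({'x': x, 'y': y, 'score': det['score']})
                    break
            else:
                merged.append({'label': det['label'], 'x': x, 'y': y,
                               'score': det['score']})
    return merged
```

A product visible in two overlapping shelf photos thus yields one merged detection instead of a double count.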

22-01-2020 publication date

Cluster based photo navigation

Number: KR102056417B1
Assignee: Google LLC

This technique relates to navigating imagery that is organized into clusters based on common patterns exhibited when the imagery is captured. For example, a set of captured images that satisfy a predetermined pattern can be determined. Images in the set of captured images may be grouped into one or more clusters according to the predetermined pattern. A request to display a first one of the one or more clusters may be received, and in response, a first captured image from the requested first cluster may be selected. The selected first captured image can then be displayed.
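The grouping step can be sketched with simple capture-metadata heuristics: rotating in place suggests a panoramic pattern, moving with a fixed heading suggests translation, and doing both suggests an orbit. The metadata fields and thresholds below are assumptions for illustration; the claims do not specify how a pattern is detected:

```python
# Minimal sketch of grouping a capture sequence into pattern clusters
# (hypothetical heuristics and thresholds).

def classify_pair(img_a, img_b, pos_eps=1.0, heading_eps=5.0):
    """Label the relationship between two consecutive captures.
    Each image is a dict with 'pos' (x, y) and 'heading' in degrees."""
    moved = (abs(img_a['pos'][0] - img_b['pos'][0]) > pos_eps
             or abs(img_a['pos'][1] - img_b['pos'][1]) > pos_eps)
    turned = abs(img_a['heading'] - img_b['heading']) > heading_eps
    if turned and not moved:
        return 'panoramic'    # rotating in place
    if moved and not turned:
        return 'translation'  # moving along a line
    if moved and turned:
        return 'orbit'        # circling a subject
    return 'static'

def cluster_by_pattern(images):
    """Group a capture sequence into runs that share one pattern."""
    clusters = []
    for prev, cur in zip(images, images[1:]):
        label = classify_pair(prev, cur)
        if clusters and clusters[-1]['pattern'] == label:
            clusters[-1]['images'].append(cur)
        else:
            clusters.append({'pattern': label, 'images': [prev, cur]})
    return clusters
```

A display request for a cluster would then pick, say, the first image of the matching run as the entry point for navigation.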

07-08-2018 publication date

Periphery monitoring device for work machine

Number: US10044933B2
Assignee: HITACHI CONSTRUCTION MACHINERY CO LTD

A periphery monitoring device for a work machine includes imaging devices that capture an image of the surroundings of the work machine. An overhead view image of the surroundings of the work machine is generated based upon upper view-point images from the imaging devices. When generating an overhead view image of an overlap region of first and second upper view-point images relating to the images captured by the first and second imaging devices, the overhead view image generating unit, based upon a height of a virtual monitoring target, sets at least one of a first region in which the first upper view-point image is displayed and a second region in which the second upper view-point image is displayed, and also sets a third region in which a composite display image based upon the first and second upper view-point images is displayed.
