Total found: 3389. Showing 100.
Publication date: 09-05-2013

METHOD FOR EXTRACTING RANDOM SIGNATURES FROM A MATERIAL ELEMENT AND METHOD FOR GENERATING A DECOMPOSITION BASE TO IMPLEMENT THE EXTRACTION METHOD

Number: US20130114903A1
Assignee:

The present invention concerns a method for extracting a random signature from a subject material element.

1. Method for generating a decomposition base which can be used to extract a random signature from a subject material element, comprising the following steps: generating N acquisition vectors of structural characteristics of at least one region of at least one material element separate from the subject material element and/or of the subject material element itself; analysing all the acquisition vectors using statistical methods to obtain the decomposition base, formed of decomposition vectors, which enables the representation of each acquisition vector in the form of an image vector of which each component corresponds to the contribution of a decomposition vector in the acquisition vector; analysing at least part of the decomposition vectors to identify that or those decomposition vectors, called common or certain-contribution decomposition vectors, which will be at the origin of components that are highly deterministic and/or common to all image vectors obtained using the decomposition base; saving the decomposition base; and optionally saving a reading mask which, in the decomposition base, gives the position of any decomposition vectors at the origin of deterministic components and/or the position of decomposition vectors at the origin of random components.

2. Method for generating a decomposition base as in claim 1, characterized in that analysis of the decomposition vectors comprises the following steps: projecting each acquisition vector onto the decomposition base to obtain an image vector of which each component corresponds to the contribution of a decomposition vector in the acquisition vector; analysing at least part of the image vectors to identify that or those components which are highly deterministic and/or common to all image vectors, the deterministic components corresponding to decomposition vectors in the decomposition base called common or certain-contribution ...
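The decomposition-base construction above is closely related to principal component analysis: acquisition vectors are analysed statistically to obtain decomposition vectors, and each acquisition vector is then represented by its projection coefficients (the "image vector"). A minimal sketch of that idea, using plain SVD-based PCA as an assumed statistical method (the patent does not prescribe a specific one):

```python
import numpy as np

def build_decomposition_base(acquisitions: np.ndarray):
    """Build a decomposition base from N acquisition vectors (rows).

    Returns the base (rows = decomposition vectors) and the mean
    vector. PCA via SVD is an illustrative choice, not the patented
    method.
    """
    mean = acquisitions.mean(axis=0)
    centered = acquisitions - mean
    # Rows of vt are orthonormal decomposition vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt, mean

def project(vector: np.ndarray, base: np.ndarray, mean: np.ndarray):
    """Image vector: contribution of each decomposition vector."""
    return base @ (vector - mean)

# Components with low variance across all image vectors behave
# "deterministically"; high-variance components carry the random part.
rng = np.random.default_rng(0)
acq = rng.normal(size=(20, 8))
base, mean = build_decomposition_base(acq)
images = np.array([project(v, base, mean) for v in acq])
variances = images.var(axis=0)
random_components = np.argsort(variances)[::-1]   # most random first
```

A reading mask, in this sketch, would simply record which component indices fall on the "deterministic" (low-variance) side of a chosen threshold.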

Publication date: 06-06-2013

Pose Estimation

Number: US20130142436A1
Author: SHIBA Hisashi
Assignee: NEC Corporation

In a pose estimation for estimating the pose of an object of pose estimation with respect to a reference surface that serves as a reference for estimating a pose, a data processing device: extracts pose parameters from a binarized image; identifies a combination of pose parameters for which the number of cross surfaces of parameter surfaces that accord with surface parameter formulas, which are numerical formulas for expressing a reference surface, is a maximum; finds a slope weighting for each of cross pixels, which are pixels on each candidate surface and which are pixels within a prescribed range, that is identified based on the angles of the tangent plane at the cross pixel and based on planes formed by each of the axes of parameter space; and identifies the significant candidate surface for which a number, which is the sum of slope weightings, is a maximum, as the actual surface that is the reference surface that actually exists in the image. 1. A pose estimation system provided with a first data processing device , a second processing device and a data process switching device ,said first data processing device comprising:a binarization unit for dividing an image received as input into a candidate region that is a candidate for a reference surface that serves as a reference for estimating pose and a background region that is a region other than the candidate region;a surface parameter formula expression unit for extracting pose parameters that indicate pose of an object of pose estimation with respect to a reference surface that appears in said image that was received as input and, based on a combination of values obtained by implementing a transform by a prescribed function upon, of the pose parameters that were extracted, parameters that indicate the direction in which the object of pose estimation is directed and pose parameters other than the parameters that indicate direction, finding surface parameter formulas that are numerical formulas that express ...
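The claim's idea of voting in a parameter space and taking the combination with the maximum number of crossing parameter surfaces is, in spirit, a Hough-style accumulation. A simplified 2D illustration (a plain line-detecting Hough transform, which is an analogy, not the patented 3D pose method):

```python
import numpy as np

def hough_line_votes(binary: np.ndarray, n_theta=180, n_rho=64):
    """Vote in (theta, rho) parameter space for each foreground pixel.

    The (theta, rho) cell crossed by the most pixel curves plays the
    role of the claim's maximal parameter-surface intersection: it is
    taken as the line that actually exists in the image.
    """
    ys, xs = np.nonzero(binary)
    diag = np.hypot(*binary.shape)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = ((rhos + diag) / (2 * diag) * (n_rho - 1)).round().astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc, thetas

img = np.zeros((32, 32), dtype=bool)
img[16, :] = True                  # horizontal line y = 16
acc, thetas = hough_line_votes(img)
t, r = np.unravel_index(acc.argmax(), acc.shape)
```

The slope weighting in the claim would further scale each vote by the local tangent-plane angle; that refinement is omitted here.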

Publication date: 06-06-2013

Pose Estimation

Number: US20130142437A1
Author: SHIBA Hisashi
Assignee: NEC Corporation

In a pose estimation for estimating the pose of an object of pose estimation with respect to a reference surface that serves as a reference for estimating a pose, a data processing device: extracts pose parameters from a binarized image; identifies a combination of pose parameters for which the number of cross surfaces of parameter surfaces that accord with surface parameter formulas, which are numerical formulas for expressing a reference surface, is a maximum; finds a slope weighting for each of cross pixels, which are pixels on each candidate surface and which are pixels within a prescribed range, that is identified based on the angles of the tangent plane at the cross pixel and based on planes formed by each of the axes of parameter space; and identifies the significant candidate surface for which a number, which is the sum of slope weightings, is a maximum, as the actual surface that is the reference surface that actually exists in the image. 1. A pose estimation method including a data processing switching process for , based on the results of comparing a parameter space calculation amount , which is a calculation amount indicating the volume of arithmetic processing carried out by using a first pose estimation method to identify an actual surface that is said reference surface , and an image space calculation amount , which is a calculation amount indicating the volume of arithmetic processing carried out by using a second pose estimation method to identify an actual surface , selecting one of the pose estimation methods , among the first pose estimation method and the second pose estimation methods and using the selected pose estimation method to identify said actual surface:the first pose estimation method for, based on an image that is received as input, estimating the pose of an object of pose estimation with respect to a reference surface that serves as a reference for estimating the pose, said pose estimation method comprising:a binarization process 
for ...

Publication date: 27-06-2013

VIDEO DETECTION SYSTEM AND METHODS

Number: US20130163864A1
Author: Cavet Rene
Assignee: IPHARRO MEDIA GMBH

A video detection system and method compares a queried video segment to one or more stored video samples. Each of the queried video segments and stored video samples can be represented by respective digital image sets. A first and second comparison comprises comparing a set of low and high resolution temporal and spatial statistical moments in a COLOR9 space, and eliminating file digital image sets that do not match the queried digital image set. A third comparison generates a set of matching files by comparing a set of wavelet transform coefficients in a COLOR9 space. RGB bit-wise registration and comparison of one or more subframes of specific frames in the queried digital image set to a corresponding set of matching file subframes determines queried subframe changes. In the event of a change in a queried subframe, the changed subframe is added to the set of matching file subframes.

1.-18. (canceled)

19. A video archiving method, comprising: (a) encoding a video; (b) importing the video to a set of file digital images; (c) generating a set of video detection data from the set of file digital images; (d) generating a set of video analysis data from the set of file digital images; (e) generating a set of metadata from the set of file digital images; (f) generating a set of manual annotation data based on the set of file digital images; (g) generating a set of video indexing data from (c)-(f); and (h) archiving the video and video indexing data.

20. The method of claim 19, wherein the encoding comprises converting the video to an RGB color space.

21. The method of claim 19, wherein the generating a set of video detection data comprises extracting a first feature data set, a second feature data set, and a third feature data set.

22. The method of claim 21, wherein the first feature data set comprises a first function of a set of two-dimensional statistical moments in a COLOR9 space.

23. The method of claim 21, wherein the second feature data set comprises ...
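The first elimination stages compare clips by coarse statistical moments before any expensive frame-level matching. A toy version of that idea, using per-channel means and variances as a stand-in for the patent's COLOR9-space temporal and spatial moments (the COLOR9 conversion itself is not reproduced here):

```python
import numpy as np

def moment_signature(frames: np.ndarray) -> np.ndarray:
    """Statistical-moment signature of a clip; frames is (T, H, W, C).

    Per-channel temporal mean/variance of the spatial means, plus the
    overall per-channel variance.
    """
    spatial_mean = frames.mean(axis=(1, 2))          # (T, C)
    return np.concatenate([
        spatial_mean.mean(axis=0),                   # temporal mean
        spatial_mean.var(axis=0),                    # temporal variance
        frames.var(axis=(0, 1, 2)),                  # spatial variance
    ])

def candidate_matches(query, files, threshold):
    """Elimination stage: keep only files whose moment signature is
    within `threshold` of the query's."""
    qs = moment_signature(query)
    return [i for i, f in enumerate(files)
            if np.linalg.norm(moment_signature(f) - qs) < threshold]

rng = np.random.default_rng(1)
query = rng.random((8, 16, 16, 3))
same, different = query.copy(), rng.random((8, 16, 16, 3)) + 5.0
matches = candidate_matches(query, [same, different], threshold=1.0)
```

The surviving candidates would then proceed to the wavelet-coefficient and subframe comparisons described in the abstract.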

Publication date: 04-07-2013

LEARNING APPARATUS, A LEARNING SYSTEM, LEARNING METHOD AND A LEARNING PROGRAM FOR OBJECT DISCRIMINATION

Number: US20130170739A1
Author: Hosoi Toshinori
Assignee: NEC Corporation

A learning apparatus in the present invention includes a weak discriminator generation unit that generates a weak discriminator which calculates a discrimination score of an instance of a target based on a feature and a bag label, a weak discrimination unit which calculates the discrimination score based on the generated weak discriminator, an instance probability calculation unit that calculates an instance probability of the target instance based on the calculated discrimination score, a bag probability calculation unit that calculates a probability that no fewer than two positive instances are included in the bag based on the calculated instance probability, and a likelihood calculation unit which calculates a likelihood representing the plausibility of the bag probability based on the bag label.

1. A learning apparatus comprising: a weak discriminator generation unit which generates a weak discriminator which calculates a discrimination score that shows whether a target instance is a positive instance, based on the feature extracted from a plurality of bags and a bag label which is information on whether each bag is a positive bag or a negative bag; a weak discrimination unit which calculates the discrimination score based on the weak discriminator generated by the weak discriminator generation unit; an instance probability calculation unit which calculates a probability (instance probability) that the target instance is an instance (positive instance) of the correct target object, based on the discrimination score calculated by the weak discrimination unit; a bag probability calculation unit which calculates a probability (bag probability) that no fewer than two positive instances are included in the bag, based on an instance probability calculated by the instance probability calculation unit; and a likelihood calculation unit which calculates the likelihood which expresses the plausibility of the bag probability calculated by the bag probability calculation unit, based on ...
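The core probabilistic step, the bag probability that at least two instances are positive, has a closed form if instances are treated as independent given their probabilities (an assumption made here for illustration; the sigmoid link from score to probability is likewise assumed, not taken from the patent):

```python
import numpy as np

def instance_probability(score, scale=1.0):
    """Map a discrimination score to an instance probability
    (sigmoid link; an illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-scale * score))

def bag_probability(p):
    """P(no fewer than two positive instances in the bag)
    = 1 - P(none positive) - P(exactly one positive)."""
    p = np.asarray(p, dtype=float)
    p_none = np.prod(1 - p)
    p_one = sum(p[i] * np.prod(np.delete(1 - p, i)) for i in range(len(p)))
    return 1.0 - p_none - p_one

def log_likelihood(bag_probs, bag_labels):
    """Bernoulli log-likelihood of bag labels given bag probabilities,
    the 'plausibility' the likelihood calculation unit maximizes."""
    q = np.clip(np.asarray(bag_probs, dtype=float), 1e-12, 1 - 1e-12)
    y = np.asarray(bag_labels, dtype=float)
    return float(np.sum(y * np.log(q) + (1 - y) * np.log(1 - q)))
```

For a two-instance bag with instance probabilities 0.5 each, the bag probability is 0.25, the chance both are positive.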

Publication date: 08-08-2013

Surface shape measurement method, surface shape measurement apparatus, non-transitory computer-readable storage medium, optical element, and method of manufacturing optical element

Number: US20130202215A1
Assignee: Canon Inc

A surface shape measurement method that divides a surface shape of an object (107) into a plurality of partial regions (201, 202, 203, 204) to obtain partial region data and that stitches the partial region data to measure the surface shape of the object. The method includes the steps of calculating the sensitivity of an error generated by a relative movement between the object and a sensor (110) for each of the partial regions, dividing the surface shape of the object into the plurality of partial regions to obtain the partial region data, calculating an amount corresponding to the error using the sensitivity, correcting the partial region data using the amount corresponding to the error, and stitching the corrected partial region data to calculate the surface shape of the object.
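The correct-then-stitch flow can be sketched with the simplest possible error model: a constant (piston) offset per sub-aperture, estimated against the already-stitched overlap and removed before merging. This is a toy stand-in for the sensitivity-based error terms in the method, which also cover tilt and other motion-induced errors:

```python
import numpy as np

def stitch(regions, offsets, shape):
    """Stitch corrected partial height maps into one surface.

    regions: list of 2D arrays; offsets: (row, col) placement of each
    region; overlapping samples are averaged after piston correction.
    """
    total = np.zeros(shape)
    count = np.zeros(shape)
    for region, (r, c) in zip(regions, offsets):
        h, w = region.shape
        sub_total = total[r:r + h, c:c + w]
        sub_count = count[r:r + h, c:c + w]
        seen = sub_count > 0
        if seen.any():
            current = sub_total[seen] / sub_count[seen]
            piston = (current - region[seen]).mean()  # offset vs overlap
        else:
            piston = 0.0
        sub_total += region + piston
        sub_count += 1
    return total / np.maximum(count, 1)

# A tilted plane measured as two overlapping sub-apertures, the second
# with a +0.7 piston error; stitching recovers the full plane.
truth = np.tile(np.arange(8.0), (8, 1))
regions = [truth[:, :5].copy(), truth[:, 3:].copy() + 0.7]
surface = stitch(regions, [(0, 0), (0, 3)], (8, 8))
```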

Publication date: 15-08-2013

SYSTEM AND METHOD FOR SHAPE MEASUREMENTS ON THICK MPR IMAGES

Number: US20130208989A1
Author: Yang Lining
Assignee: Siemens Corporation

A method for measuring shapes in thick multi-planar reformatted (MPR) digital images, including identifying a shape in a digital MPR image, scan-converting points corresponding to the identified shape on a starting plane of an MPR slab in an image volume from which the MPR was obtained to generate a plurality of starting points for the identified shape, calculating an end point in the MPR slab corresponding to each starting point, propagating a ray from each starting point to each corresponding end point, accumulating samples along each ray, and computing a desired measurement value from the accumulated samples after reaching the end point for all rays. 1. A method for measuring shapes in thick multi-planar reformatted (MPR) digital images , comprising the steps of:identifying a shape in a digital MPR image;scan-converting points corresponding to the identified shape on a starting plane of an MPR slab in an image volume from which the MPR was obtained to generate a plurality of starting points for the identified shape;calculating an end point in the MPR slab corresponding to each starting point;propagating a ray from each starting point to each corresponding end point;accumulating samples along each ray; andcomputing a desired measurement value from the accumulated samples after reaching the end point for all rays.2. The method of claim 1 , wherein samples are accumulated by performing tri-linear interpolation at each sample point.3. The method of claim 1 , wherein samples are accumulated by performing nearest-neighbor interpolation at each sample point.4. The method of claim 1 , wherein accumulating samples includes averaging the sample values.5. The method of claim 1 , wherein accumulating samples comprises saving a maximum of the accumulated samples.6. 
The method of claim 1 , wherein calculating an end point corresponding to each starting point comprises extending a normal of the MPR slab from each starting point until an end plane of the MPR slab is reached in ...
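The ray-accumulation core of the method, propagating a ray from each starting point along the slab normal, sampling, and reducing the samples to one value, can be sketched directly. Nearest-neighbor sampling is used below (one of the interpolation options named in the claims); "mean" gives an average projection, "max" a maximum-intensity projection:

```python
import numpy as np

def accumulate_rays(volume, starts, direction, n_samples, mode="mean"):
    """Accumulate samples along parallel rays through a thick MPR slab.

    starts: (K, 3) voxel coordinates on the starting plane; direction:
    the slab normal (unit voxel steps). Returns one measurement value
    per ray.
    """
    direction = np.asarray(direction, dtype=float)
    steps = np.arange(n_samples)[:, None] * direction
    out = []
    for start in np.asarray(starts, dtype=float):
        pts = np.rint(start + steps).astype(int)      # nearest neighbor
        pts = np.clip(pts, 0, np.array(volume.shape) - 1)
        samples = volume[pts[:, 0], pts[:, 1], pts[:, 2]]
        out.append(samples.mean() if mode == "mean" else samples.max())
    return np.array(out)

# Toy volume whose value equals the z index: a ray along z accumulates
# the samples [0, 1, 2, 3].
vol = np.broadcast_to(np.arange(4.0)[:, None, None], (4, 3, 3)).copy()
means = accumulate_rays(vol, [[0, 1, 1]], (1, 0, 0), 4, mode="mean")
maxes = accumulate_rays(vol, [[0, 1, 1]], (1, 0, 0), 4, mode="max")
```

Tri-linear interpolation, the claims' other option, would replace the `np.rint` rounding with a weighted blend of the eight surrounding voxels.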

Publication date: 29-08-2013

METHODS AND SYSTEMS FOR ENHANCING DATA

Number: US20130223753A1
Author: SORNBORGER ANDREW T.
Assignee:

Methods and systems for data analysis using covarying data. Eigenvalues and eigenvectors of one or more lagged covariance matrices of data obtained over time may be generated and used to enhance the data. 1. A computer-implemented method for use in analysis of data comprising:providing a dataset representative of data obtained over time;generating a plurality of eigenvalues and a plurality of eigenvectors of at least one lagged covariance matrix for the dataset in the time domain; andreconstructing an enhanced dataset using the plurality of eigenvalues and eigenvectors and the dataset.2. The method of claim 1 , wherein the at least one lagged covariance matrix for the dataset comprises a plurality of lagged covariance matrices for the dataset.3. The method of claim 1 , wherein generating the plurality of eigenvalues and the plurality of eigenvectors of at least one lagged covariance matrix for the dataset in the time domain comprises:generating at least one shift matrix for the dataset in the time domain; andgenerating the plurality of eigenvalues and the plurality of eigenvectors based on the at least one shift matrix.4. The method of claim 3 , wherein generating the plurality of eigenvalues and the plurality of eigenvectors based on the at least one shift matrix comprises generating the plurality of eigenvalues and the plurality of eigenvectors based on the at least one shift matrix using fast Fourier transforms.5. The method of claim 1 , wherein generating the plurality of eigenvalues and the plurality of eigenvectors of at least one lagged covariance matrix for the dataset in the time domain comprises:generating at least one lagged covariance matrix for the dataset in the time domain; andgenerating the plurality of eigenvalues and the plurality of eigenvectors based on the at least one lagged covariance matrix.6. 
The method of claim 5 , wherein generating the plurality of eigenvalues and the plurality of eigenvectors of at least one lagged covariance matrix for ...
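The eigendecomposition-and-reconstruct pipeline of claim 5 can be sketched in a few lines. Symmetrizing the lagged covariance matrix (so its eigenvalues are real) is an assumption made here; the claims leave such conventions open:

```python
import numpy as np

def lagged_covariance(data, lag):
    """Symmetrized lagged covariance matrix of (channels, T) data."""
    x = data - data.mean(axis=1, keepdims=True)
    a, b = x[:, :x.shape[1] - lag], x[:, lag:]
    c = a @ b.T / a.shape[1]
    return (c + c.T) / 2.0

def enhance(data, lag, keep):
    """Reconstruct an enhanced dataset from the top-`keep`
    eigenvectors of the lagged covariance matrix."""
    vals, vecs = np.linalg.eigh(lagged_covariance(data, lag))
    top = vecs[:, np.argsort(vals)[::-1][:keep]]   # leading eigenvectors
    return top @ (top.T @ data)                    # project, reconstruct

# One strong oscillatory channel plus one weak noise channel: the
# rank-1 reconstruction retains the oscillation and suppresses noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 400)
data = np.vstack([np.sin(t), 0.01 * rng.normal(size=t.size)])
enhanced = enhance(data, lag=1, keep=1)
```

Using several lags, as in claim 2, would combine the eigenstructure of a stack of such matrices rather than a single one.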

Publication date: 05-12-2013

High-Accuracy Centered Fractional Fourier Transform Matrix for Optical Imaging and Other Applications

Number: US20130322777A1
Author: Ludwig Lester F.
Assignee:

Methods for numerically generating a centered discrete fractional Fourier transform matrix on a computer, the centered discrete fractional Fourier transform matrix of size N by N where N is an odd integer. Centering is obtained by simple barrel-roll operations on eigenvectors. High accuracy is obtained by numerically calculating the eigenvectors of the discrete fractional Fourier transform matrix from a closed-form mathematical formula and then iteratively performing a Gram-Schmidt orthogonalization procedure until a resulting set of improved-orthogonal eigenvectors is produced that is sufficiently orthogonal.

1. A method for numerically generating a centered discrete fractional Fourier transform matrix on a computer, the centered discrete fractional Fourier transform matrix of size N by N where N is an odd integer, the method comprising: numerically calculating the N eigenvectors of an N by N discrete fractional Fourier transform matrix from a closed-form mathematical formula, the calculation performed on a computer; performing a barrel-shift operation on each of the N eigenvectors to produce N shifted eigenvectors; performing a Gram-Schmidt orthogonalization procedure on the N shifted eigenvectors to produce a first set of improved-orthogonal shifted eigenvectors; testing the resulting first set of improved-orthogonal shifted eigenvectors for mutual orthogonality, performed on the computer; if the first set of improved-orthogonal shifted eigenvectors does not possess enough mutual orthogonality, applying another Gram-Schmidt orthogonalization procedure on the first set of improved-orthogonal shifted eigenvectors to produce a second set of improved-orthogonal shifted eigenvectors; and testing the resulting second set of improved-orthogonal shifted eigenvectors for mutual orthogonality; wherein if the first set of improved-orthogonal shifted eigenvectors does not possess enough mutual orthogonality, applying another ...
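The test-and-repeat orthogonalization loop and the barrel-roll centering step are generic enough to sketch on their own (the closed-form eigenvector formula for the discrete fractional Fourier transform itself is not reproduced; random vectors stand in for it):

```python
import numpy as np

def gram_schmidt_pass(q):
    """One classical Gram-Schmidt pass over the columns of q."""
    q = q.astype(float).copy()
    for j in range(q.shape[1]):
        for k in range(j):
            q[:, j] -= (q[:, k] @ q[:, j]) * q[:, k]
        q[:, j] /= np.linalg.norm(q[:, j])
    return q

def orthogonalize(vectors, tol=1e-12, max_passes=10):
    """Repeat Gram-Schmidt until the set is sufficiently orthogonal,
    mirroring the claim's test-and-repeat loop."""
    q = np.asarray(vectors, dtype=float)
    for _ in range(max_passes):
        q = gram_schmidt_pass(q)
        err = np.abs(q.T @ q - np.eye(q.shape[1])).max()
        if err < tol:                     # "sufficiently orthogonal"
            break
    return q

def barrel_roll(vector, shift):
    """Circular (barrel) shift used to center eigenvectors."""
    return np.roll(vector, shift)
```

Iterated Gram-Schmidt is a standard remedy for the loss of orthogonality that a single classical pass suffers in floating-point arithmetic, which is why the claim builds the repeat step in.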

Publication date: 23-01-2014

Redundant aspect ratio decoding of Devanagari characters

Number: US20140023275A1
Assignee: Qualcomm Inc

An electronic device and method receive a block sliced from a rectangular portion of an image of a scene of real world captured by a camera and use a property of the block to operate one of multiple optical character recognition (OCR) decoders. In an illustrative aspect, a first OCR decoder is configured to recognize characters whose property satisfies the test based on a first limit, the first limit being obtained by reducing a predetermined limit by an overlap amount. In this illustrative aspect, a second OCR decoder is configured to recognize characters whose property does not satisfy the test based on a second limit, the second limit being obtained by increasing the predetermined limit by the overlap amount. When the property of the block satisfies the test, the first OCR decoder is operated and alternatively the second OCR decoder is operated, resulting in candidates for a character being identified.
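The redundancy in the title comes from widening each decoder's limit by the shared overlap amount, so blocks whose aspect ratio falls near the predetermined boundary can be handled by either decoder. A schematic sketch of that routing (threshold values and the three-way return are illustrative placeholders, not from the patent):

```python
def decoder_limits(predetermined_limit, overlap):
    """First limit = predetermined limit reduced by the overlap;
    second limit = predetermined limit increased by it."""
    return predetermined_limit - overlap, predetermined_limit + overlap

def route_block(aspect_ratio, predetermined_limit, overlap):
    """Pick an OCR decoder for a sliced block by its aspect-ratio test."""
    first_limit, second_limit = decoder_limits(predetermined_limit, overlap)
    if aspect_ratio <= first_limit:
        return "first"     # clearly within the first decoder's range
    if aspect_ratio >= second_limit:
        return "second"    # clearly within the second decoder's range
    return "either"        # overlap band: redundantly decodable
```

The overlap band guards against misrouting blocks whose measured aspect ratio is noisy, at the cost of training each decoder on slightly more character classes.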

Publication date: 03-01-2019

METHOD FOR AUTOMATING TRANSFER OF PLANTS WITHIN AN AGRICULTURAL FACILITY

Number: US20190000019A1
Assignee:

One variation of a method for automating transfer of plants within an agricultural facility includes: dispatching a loader to autonomously deliver a first module—defining a first array of plant slots at a first density and loaded with a first set of plants at a first growth stage—from a first grow location within an agricultural facility to a transfer station within the agricultural facility; dispatching the loader to autonomously deliver a second module—defining a second array of plant slots at a second density less than the first density and empty of plants—to the transfer station; recording a module-level optical scan of the first module; extracting a viability parameter of the first set of plants from features detected in the module-level optical scan; and if the viability parameter falls outside of a target viability range, rejecting transfer of the first set of plants from the first module. 1. A method for automating transfer of plants within an agricultural facility , the method comprising:dispatching a loader to autonomously deliver a first module from a first grow location within an agricultural facility to a transfer station within the agricultural facility, the first module defining a first array of plant slots at a first density and loaded with a first set of plants at a first growth stage;dispatching the loader to autonomously deliver a second module to the transfer station, the second module defining a second array of plant slots at a second density less than the first density and empty of plants;recording a module-level optical scan of the first module;extracting a viability parameter of the first set of plants from features detected in the module-level optical scan;in response to the viability parameter falling outside of a target viability range, rejecting transfer of the first set of plants from the first module; and triggering a robotic manipulator at the transfer station to sequentially transfer a first subset of the first set of plants from the 
...

Publication date: 03-01-2019

METHOD AND SYSTEM FOR IMAGE PROCESSING TO DETERMINE BLOOD FLOW

Number: US20190000554A1
Author: Taylor Charles A.
Assignee:

Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model.

1.-184. (canceled)

185. A method for processing images to determine cardiovascular information, comprising the steps of: receiving image data including a plurality of coronary arteries originating from an aorta; processing the image data to generate three-dimensional shape models of the coronary arteries; simulating a blood flow for the generated three-dimensional shape models of the coronary arteries; and determining a fractional flow reserve (FFR) of the coronary arteries based on a blood flow simulation result, wherein in the step of simulating the blood flow, a computational fluid dynamics model is applied to the three-dimensional shape models of the coronary arteries, a lumped parameter model is combined with the computational fluid dynamics model, and a simplified coronary artery circulation model including coronary arteries, capillaries of the coronary arteries and coronary veins is used as the lumped parameter model.

186. The method of claim 185, wherein, when simulating the blood flow and applying the computational fluid dynamics model to the three-dimensional shape models of the coronary arteries, an aorta blood pressure pattern is used as an inlet boundary condition.

187. The method of claim 185, wherein simulating the blood flow comprises determining lengths of centerlines of the three- ...
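The final quantity, FFR, is the ratio of mean distal coronary pressure to mean aortic pressure. A toy lumped-parameter element shows the arithmetic (a single linear stenosis resistance; purely illustrative, since the patented pipeline couples full CFD with the lumped model to obtain these pressures):

```python
def fractional_flow_reserve(p_aortic, flow, stenosis_resistance):
    """FFR from a toy lumped-parameter element.

    Distal pressure = aortic pressure minus the drop across a stenosis
    modeled as a linear resistance (pressure drop = flow * resistance).
    All parameter values below are hypothetical.
    """
    p_distal = p_aortic - flow * stenosis_resistance
    return p_distal / p_aortic
```

For example, an aortic pressure of 100 units, a flow of 2, and a resistance of 10 give a distal pressure of 80 and hence an FFR of 0.8.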

Publication date: 02-01-2020

AUTONOMOUS MONITORING ROBOT SYSTEMS

Number: US20200001475A1
Assignee:

An autonomous mobile robot includes a chassis, a drive supporting the chassis above a floor surface in a home and configured to move the chassis across the floor surface, a variable height member being coupled to the chassis and being vertically extendible, a camera supported by the variable height member, and a controller. The controller is configured to operate the drive to navigate the robot to locations within the home and to adjust a height of the variable height member upon reaching a first of the locations. The controller is also configured to, while the variable height member is at the adjusted height, operate the camera to capture digital imagery of the home at the first of the locations.

1.-20. (canceled)

21. A method comprising: receiving, by a user computing device in communication with a communication network, imagery captured by an autonomous mobile robot in an environment, the imagery representing at least a portion of an electronic device in the environment, and the autonomous mobile robot and the electronic device being in communication with the communication network; receiving, by the user computing device, data indicative of a status of the electronic device; and presenting, by the user computing device, a representation of the imagery captured by the autonomous mobile robot and an indicator of a status of the electronic device, the indicator being overlaid on the representation of the imagery.

22. The method of claim 21, further comprising transmitting, to the autonomous mobile robot, one or more commands to cause the autonomous mobile robot to be moved to a user-selected location within the environment, wherein the autonomous mobile robot at the user-selected location captures the imagery representing at least the portion of the electronic device.

23.
The method of claim 22 , wherein transmitting the one or more commands to cause the autonomous mobile robot to be moved to the user-selected location within the environment ...

Publication date: 05-01-2017

SHEET SIZE SPECIFICATION SYSTEM, SHEET SIZE SPECIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM STORING SHEET SIZE SPECIFICATION PROGRAM, AND IMAGE FORMING DEVICE

Number: US20170001821A1
Assignee:

A sheet size specification system for specifying size of a sheet on a sheet stacking tray of an image forming device, by using an image capture unit of a portable terminal. The system includes an analyzer and a size specifier. The analyzer analyzes image data captured by the image capture unit, the image data indicating the sheet stacking tray and the sheet thereon. The size specifier acquires, from the analyzed image data, first position data indicating positions of predefined feature points of the sheet stacking tray and second position data indicating positions of an outline of the sheet, and specifies size of the sheet on the sheet stacking tray based on the first position data and the second position data. 1. A sheet size specification system for specifying size of a sheet on a sheet stacking tray of an image forming device by using an image capture unit of a portable terminal , the system comprising:an analyzer that analyzes image data captured by the image capture unit, the image data indicating the sheet stacking tray and the sheet thereon; anda size specifier that acquires, from the analyzed image data, first position data indicating positions of predefined feature points of the sheet stacking tray and second position data indicating positions of an outline of the sheet, and specifies size of the sheet on the sheet stacking tray based on the first position data and the second position data.2. The sheet size specification system of claim 1 , whereinthe image forming device has a storage that stores, in association with the sheet stacking tray, sheet size data specified by the size specifier.3. The sheet size specification system of claim 1 , whereinthe analyzer and the size specifier are included in the portable terminal.4. 
The sheet size specification system of claim 1 , whereinthe analyzer and the size specifier are included in the image forming device, andthe portable terminal transmits image data of the sheet stacking tray to the image forming device and ...
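The size specifier's job, turning tray feature-point positions and a sheet outline into physical dimensions, reduces to establishing a pixel scale from points a known distance apart. A minimal sketch under the assumption of a perspective-corrected (fronto-parallel) image; the two-point calibration and all names are illustrative, not from the patent:

```python
import math

def sheet_size_mm(tray_marks_px, tray_mark_distance_mm, sheet_corners_px):
    """Estimate sheet size from a photo of the stacking tray.

    tray_marks_px: two predefined tray feature points (x, y) that are
    tray_mark_distance_mm apart, giving the pixel scale;
    sheet_corners_px: outline corner points of the sheet.
    """
    (x0, y0), (x1, y1) = tray_marks_px
    px_per_mm = math.hypot(x1 - x0, y1 - y0) / tray_mark_distance_mm
    xs = [p[0] for p in sheet_corners_px]
    ys = [p[1] for p in sheet_corners_px]
    return ((max(xs) - min(xs)) / px_per_mm,
            (max(ys) - min(ys)) / px_per_mm)

# Tray marks 300 px / 150 mm apart -> 2 px per mm; a 420 x 594 px
# outline then measures 210 x 297 mm (A4).
width, height = sheet_size_mm([(0, 0), (300, 0)], 150.0,
                              [(10, 10), (430, 10), (430, 604), (10, 604)])
```

A real implementation would first rectify the image using all of the tray's predefined feature points, then snap the measured dimensions to the nearest standard sheet size.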

Publication date: 06-01-2022

LIGHT-EMITTING DEVICE, OPTICAL DEVICE, AND INFORMATION PROCESSING DEVICE

Number: US20220006268A1
Assignee: FUJIFILM Business Innovation Corp.

A light-emitting device includes: a first light-emitting element array that includes plural first light-emitting elements arranged at a first interval; a second light-emitting element array that includes plural second light-emitting elements arranged at a second interval wider than the first interval, second light-emitting element array being configured to output a light output larger than a light output of the first light-emitting element array, and being configured to be driven independently from the first light-emitting element array; and a light diffusion member provided on an emission path of the second light-emitting element array. 1. A light-emitting device comprising:a first light-emitting element array that includes a plurality of first light-emitting elements arranged at a first interval;a second light-emitting element array that includes a plurality of second light-emitting elements arranged at a second interval wider than the first interval, second light-emitting element array being configured to output a light output larger than a light output of the first light-emitting element array, and being configured to be driven independently from the first light-emitting element array; anda light diffusion member provided on an emission path of the second light-emitting element array.2. The light-emitting device according to claim 1 , whereina spread angle of light emitted from the first light-emitting elements is smaller than a spread angle of light emitted from the second light-emitting elements toward the light diffusion member.3. The light-emitting device according to claim 1 , whereinthe first light-emitting elements include a laser element that emits single mode light.4. The light-emitting device according to claim 2 , whereinthe first light-emitting elements include a laser element that emits single mode light.5. The light-emitting device according to claim 3 , whereinthe first light-emitting elements include a vertical cavity surface emitting laser ...

Publication date: 01-01-2015

OBJECT DETECTION DEVICE AND OBJECT DETECTION METHOD

Number: US20150003743A1
Author: NOSAKA Kenichiro
Assignee: Panasonic Corporation

An object detection device includes: a binary difference image generation unit for generating a binary difference image C by binarizing a difference value between a background image B, which is an image as a reference for the absence of a detection target object in the detection area, and a detection target image F which is an image as a detection target to detect a detection target object in the detection area; a binary second derivative image generation unit for generating a binary second derivative image D by binarizing second derivatives of the detection target image F or of a smoothed image F′, obtained by smoothing the detection target image F; and an object detection unit for detecting the detection target object based on a logical product of the binary difference image C and the binary second derivative image D. 1. An object detection device comprising:a binary difference image generation unit for generating a binary difference image by binarizing a difference value, with a predetermined threshold for the difference value, between: a background image which is an image showing a temperature distribution in a detection area and which is an image as a reference for the absence of a detection target object in the detection area; and a detection target image which is an image showing a temperature distribution in the detection area and which is an image as a detection target to detect a detection target object in the detection area;a binary second derivative image generation unit for generating a binary second derivative image by binarizing second derivatives of the detection target image or of a smoothed image, obtained by smoothing the detection target image, with a predetermined threshold for the derivative; andan object detection unit for detecting the detection target object based on a logical product of the binary difference image and the binary second derivative image.2. 
The object detection device according to claim 1 ,which detects the detection target ...
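The two-mask scheme above (binarized difference image C, binarized second-derivative image D, object mask = logical product of C and D) can be sketched in NumPy; the thresholds and the 4-neighbour Laplacian are illustrative assumptions, not values from the patent:

```python
import numpy as np

def detect_object(background, target, diff_thresh=2.0, lap_thresh=0.5):
    """Binary difference mask ANDed with a binary second-derivative mask."""
    # Binary difference image C: where the target departs from the background.
    c = np.abs(target - background) > diff_thresh
    # Second derivative via a discrete 4-neighbour Laplacian of the target.
    lap = (np.roll(target, 1, 0) + np.roll(target, -1, 0) +
           np.roll(target, 1, 1) + np.roll(target, -1, 1) - 4 * target)
    d = np.abs(lap) > lap_thresh
    # Object mask: logical product of C and D.
    return c & d

# Toy 5x5 "thermal" frames: flat background, one warm pixel in the target.
bg = np.zeros((5, 5))
tg = bg.copy()
tg[2, 2] = 10.0
mask = detect_object(bg, tg)
```

ANDing the two masks keeps only pixels that both differ from the background and sit on a sharp temperature edge, which suppresses slow background drift.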

Publication date: 13-01-2022

SYSTEM AND METHOD OF DETERMINING AN ACCURATE ENHANCED LUND AND BROWDER CHART AND TOTAL BODY SURFACE AREA BURN SCORE

Number: US20220008001A1
Assignee:

A system and method of generating an enhanced Lund and Browder chart and total body surface area burn score is described herein. In some embodiments, a plurality of images is obtained of a patient using a camera system. The images may be taken by aligning the patient's body with pose templates presented on a display of the camera system. The non-skin portions of the images may be removed, and skin analysis performed on the skin portion to determine burn location, coverage, and depth. Further, landmarks may be detected in the images to morph and align the images with the pose templates to obtain standard poses. The plurality of images may be combined and presented in two-dimensional and three-dimensional models with labels and the total surface area burn score. 1. A method of creating an enhanced Lund and Browder chart and determining a total body surface area burn score for a patient with burns, the method comprising the steps of: obtaining, by at least one camera, a plurality of images of the patient with burns, wherein the patient is positioned based on at least one pose template presented to a user via a display associated with the at least one camera; recognizing patient body landmarks in the plurality of images; recognizing patient skin regions in the plurality of images; recognizing a background in the plurality of images; recognizing distractors in the plurality of images; recognizing burn locations on the patient in the plurality of images; combining the plurality of images to create the enhanced Lund and Browder chart; and determining the total body surface area burn score based at least in part on the enhanced Lund and Browder chart. 2. The method of claim 1, wherein the enhanced Lund and Browder chart is a two-dimensional model of the patient including burns, wherein the enhanced Lund and Browder chart is two-dimensional and customized to a size and a shape of the patient, wherein the enhanced Lund and Browder chart is a combined image from the plurality of ...
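The total body surface area score reduces to a weighted sum: each body region contributes its chart percentage scaled by the fraction of that region found burned. A minimal sketch with hypothetical region weights (a real Lund and Browder chart varies the weights with patient age):

```python
# Hypothetical region weights; a real Lund and Browder chart adjusts
# these percentages for the patient's age.
REGION_PCT = {"head": 7.0, "anterior_trunk": 13.0, "left_arm": 4.0}

def tbsa_score(burn_fraction_by_region, region_pct=REGION_PCT):
    """Total body surface area burned: each region contributes its chart
    percentage scaled by the fraction of that region found burned."""
    return sum(region_pct[r] * f for r, f in burn_fraction_by_region.items())

# Half of the head and the whole left arm burned.
score = tbsa_score({"head": 0.5, "left_arm": 1.0})
```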

Publication date: 05-01-2017

Robust Eye Tracking for Scanning Laser Ophthalmoscope

Number: US20170004344A1
Assignee: Canon Inc, UNIVERSITY OF ROCHESTER

A system, apparatus, and method of obtaining an image of a fundus. Acquiring a reference image of the fundus at a first point in time. Acquiring a target image of the fundus at a second point in time. The target imaging area may overlap with the reference imaging area. An area of the target imaging area may be less than an area of the reference imaging area. Estimating movement of the fundus may be based upon at least the target image and the reference image. Acquiring a narrow field image of the fundus. An area of the narrow field imaging area may be less than the area of the target imaging area. A position of the narrow imaging area on the fundus may be adjusted based on the estimated movement of the fundus.
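Movement estimation between a reference fundus image and a later target image is commonly done by locating the peak of a cross-correlation. A minimal FFT phase-correlation sketch (equal image sizes and circular shifts are simplifying assumptions, not the patent's setup):

```python
import numpy as np

def estimate_shift(reference, target):
    """Estimate the (rows, cols) translation mapping `reference` onto
    `target` via FFT phase correlation."""
    cross = np.conj(np.fft.fft2(reference)) * np.fft.fft2(target)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    # Wrap the peak location into a signed shift range.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
tgt = np.roll(ref, (3, -2), axis=(0, 1))  # simulated eye motion
shift = estimate_shift(ref, tgt)
```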

Publication date: 05-01-2017

SYSTEM AND METHOD OF BIOMETRIC ENROLLMENT AND VERIFICATION

Number: US20170004350A1
Author: CLAUSEN SIGMUND
Assignee: IDEX ASA

A system and method for biometric enrollment and verification compares a test biometric image (e.g., of a fingerprint) with each of a plurality of reference biometric images of one or more enrolled users. Verification of a user as an enrolled user is based on the cumulative amount of overlap between the test image and the reference images. The reference images are defined during an enrollment process by comparing a plurality of sample images, identifying overlapping data in each of the images, computing one or more quality measures, and storing at least a portion of the sample images. The enrollment process is deemed complete when each quality measure meets or exceeds an associated threshold. 1. A biometric identification method comprising storing a plurality of reference biometric images of an organic tissue of a user in a reference database, wherein each of the reference biometric images has a predefined image size and at least partially overlaps at least one of the other reference biometric images, and wherein all of the reference biometric images arranged with their overlapping portions aligned have an area greater than the predefined image size. 2. The method of claim 1, wherein storing the reference biometric images comprises: providing a plurality of sample biometric images of the predefined image size from the user; comparing each of the sample biometric images with the other sample biometric images to identify overlapping data in the sample biometric images; computing an amount of unique, non-overlapping data in the sample biometric images; computing an amount of unique data relative to the predefined image size; arranging the plurality of biometric images with their overlapping portions aligned and computing the area of a bounding border encompassing the arranged biometric images relative to the predefined image size; and storing at least a portion of the plurality of sample biometric images as a plurality of reference biometric images in the reference ...

Publication date: 05-01-2017

Display Device, Vehicle Controller, Transmitter, And Travelling Assistance System

Number: US20170004366A1
Author: Nakata Tsuneo
Assignee:

A display device includes: an information acquisition unit communicating with an outside to acquire absence region information identifying an absence region in which an obstacle is presumed to be absent; and a display unit displaying the absence region, which is acquired by the information acquisition unit, in a state of superimposing the absence region on a map. A vehicle controller includes: an information acquisition unit communicating with an outside to acquire absence region information identifying an absence region in which an obstacle is presumed to be absent; and a vehicle control unit performing vehicle control based on the absence region. A transmitter includes: a sensor detecting an obstacle; an information creation unit creating absence region information identifying an absence region based on a result detected by the sensor; and a transmission unit transmitting the absence region information. In addition, a travelling assistance system includes the display device and the transmitter. 1. A display device comprising:an information acquisition unit that communicates with an outside to acquire absence region information identifying an absence region in which an obstacle is presumed to be absent; anda display unit that displays the absence region, which is acquired by the information acquisition unit, in a state of superimposing the absence region on a map.2. The display device according to claim 1 , wherein:the absence region information includes a positional accuracy of the absence region; andthe display unit selects and displays an area, where a probability of the area being the absence region is equal to or higher than a threshold value, in the absence region based on the positional accuracy.3. 
The display device according to claim 1 , further comprising:a prediction unit that predicts the absence region subsequent to a moment at which the absence region information is created, based on a positional change of the absence region as time elapses,wherein ...

Publication date: 05-01-2017

Hypotheses line mapping and verification for 3D maps

Number: US20170004377A1
Author: Kiyoung Kim, Youngmin Park
Assignee: Qualcomm Inc

Disclosed are a device, apparatus, and method for performing line mapping. A three-dimensional (3D) map that includes at least a first and a second 3D line corresponding to an aspect of a real world environment may be obtained. One or more images of the aspect may also be obtained and hypothesis 3D lines may be determined. The hypothesis 3D lines may be verified with the one or more images and the 3D map may be updated. The determination and verification of the hypothesis 3D lines may include creating a plane in 3D space and using co-planarity or orthogonality assumptions.

Publication date: 05-01-2017

ON-ROAD STEREO VISUAL ODOMETRY WITHOUT EXPLICIT POSE DETERMINATIONS

Number: US20170004379A1
Assignee:

A method determines motion between first and second coordinate systems by first extracting first and second sets of keypoints from first and second images acquired of a scene by a camera arranged on a moving object. First and second poses are determined from the first and second sets of keypoints. A score for each possible motion between the first and the second poses is determined using a scoring function and a pose-transition graph constructed from training data where each node in the pose-transition graph represents a relative pose and each edge represents a motion between two consecutive relative poses. Then, based on the score, a best motion is selected as the motion between the first and second coordinate systems. 1. A method for determining a motion between a first coordinate system and a second coordinate system, comprising steps of: extracting a first set of keypoints from a first image acquired of a scene by a camera arranged on a moving object; extracting a second set of keypoints from a second image acquired of the scene by the camera; determining first and second poses from the first and second sets of keypoints, respectively; determining a score for each possible motion between the first and the second poses using a scoring function and a pose-transition graph constructed from training data where each node in the pose-transition graph represents a relative pose and each edge represents a motion between two consecutive relative poses; and selecting, based on the score, a best motion as the motion between the first and second coordinate systems, wherein the steps are performed in a processor. 2. The method of claim 1, wherein the keypoints are obtained using Speeded Up Robust Features (SURF). 3. The method of claim 1, wherein the pose-transition graph is constructed using training data obtained from a video sequence acquired by the camera. 4. The method of claim 1, wherein the poses represented in the pose-transition graph are determined using an ...
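The pose-transition graph amounts to transition statistics over quantized relative poses: the score of a candidate motion is how often that transition was observed in training. A toy sketch with hypothetical pose labels and counts (the patent does not specify this scoring function):

```python
from collections import Counter

# Hypothetical pose-transition graph learned from training drives:
# edge weights count observed transitions between consecutive relative poses.
edges = Counter({("forward", "forward"): 90,
                 ("forward", "slight_left"): 8,
                 ("forward", "hard_left"): 2})

def motion_score(prev_pose, candidate_pose, graph=edges):
    """Frequency of the transition in training data, normalized over
    all transitions leaving prev_pose."""
    total = sum(c for (a, _), c in graph.items() if a == prev_pose)
    return graph[(prev_pose, candidate_pose)] / total if total else 0.0

# Pick the best motion among the candidates given the previous pose.
best = max(["forward", "slight_left", "hard_left"],
           key=lambda p: motion_score("forward", p))
```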

Publication date: 04-01-2018

Method and system for ink data generation, ink data rendering, ink data manipulation and ink data communication

Number: US20180004407A1
Assignee: Wacom Co Ltd

A method implemented by a transmission device to communicate with multiple reception devices that respectively share a drawing area with the transmission device is provided. The transmission device transmits to the multiple reception devices vector-data ink data representative of traces of input operation detected by an input sensor of the transmission device. The method includes: (a) an ink data generation step of generating fragmented data of a stroke object, wherein the stroke object contains multiple point objects to represent a trace formed by a pointer, the fragmented data being generated per defined unit T, and generating a drawing style object; (b) a message formation step of generating messages including the drawing style object and the fragmented data; and (c) a transmission step of transmitting the messages.

Publication date: 07-01-2016

Tracking using multilevel representations

Number: US20160004909A1

Tracking a target object in frames of video data, including: receiving a first tracking position associated with the target object in a first frame of a video sequence; identifying, for a second frame of the video sequence, a plurality of representation levels and at least one node for each representation level; determining, by a processor, a second tracking position in the second frame by estimating motion of the target object between the first frame and the second frame; determining, at each representation level by the processor, a value for each node based on a conditional property of the node in the second frame; and adjusting, by the processor, the second tracking position based on the values determined for each of the nodes and interactions between at least some of the nodes at different representation levels.

Publication date: 07-01-2016

Optical detection apparatus and methods

Number: US20160004923A1
Assignee: Brain Corp

An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., fixed-focal length wide aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to a relative motion between the object and the lens) the object representation on the image sensor plane may be in-focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out of focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
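One common image-contrast parameter for this kind of focus-based detection is the variance of a discrete Laplacian; the kernel and the threshold below are illustrative assumptions:

```python
import numpy as np

def contrast_parameter(img):
    """Variance of a 4-neighbour Laplacian: in-focus objects inside the
    depth of field produce a higher value than out-of-focus ones."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def object_detected(img, threshold=0.1):
    """Detection indication: contrast criterion met."""
    return contrast_parameter(img) > threshold

sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0  # crisp in-focus point object
blurred = np.full((8, 8), 1.0 / 64)          # same energy, fully defocused
```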

Publication date: 07-01-2016

SYSTEM FOR ACCURATE 3D MODELING OF GEMSTONES

Number: US20160004926A1
Assignee:

A computerized system, kit and method for producing an accurate 3D-Model of a gemstone by obtaining an original 3D-model of an external surface of the gemstone; imaging at least one selected junction with only portions of its associated facets and edges disposed adjacent the junction, the location of the junction being determined based on information obtained at least partially by using the original 3D model; analyzing results of the imaging to obtain information regarding details of the gemstone at the junction; and using the information for producing an accurate 3D-model of said external surface of the gemstone, which is more accurate than the original 3-D model. 1-50. (canceled) 51. A method for producing a 3D-Model of an external surface of a gemstone, said method comprising: a) taking a plurality of images of the gemstone and using the plurality of images for generating an original 3D-model of an external surface of said gemstone including facets, edges bounding said facets, and junctions each constituting an area of meeting of at least three said edges associated with at least two facets; b) using the original 3D model generated in step a) to obtain information, based on which the location of one or more selected junctions is determined, and subsequently imaging an area of each such selected junction with only portions of associated facets thereof and edges disposed adjacent said selected junction, said imaging being performed under illumination conditions different from those at which said plurality of images were taken and providing such contrast between adjacent facets as to allow an edge therebetween to be distinguished; c) analyzing results of said imaging to obtain information regarding the area imaged in step b); and d) using the information obtained in step c) for producing an improved 3D-model of said external surface of the gemstone, which is more accurate than the original 3-D model. 52.
The method according to claim 51 , wherein the gemstone has a planned cut ...

Publication date: 07-01-2016

AUTHENTICITY DETERMINATION SYSTEM, FEATURE POINT REGISTRATION APPARATUS AND METHOD OF CONTROLLING OPERATION OF SAME, AND MATCHING DETERMINATION APPARATUS AND METHOD OF CONTROLLING OPERATION OF SAME

Number: US20160004934A1
Assignee:

A feature point is a point at which a correlation value is greater than a threshold value, wherein the correlation value is calculated between a template and partial image within an area that is one portion of each genuine tablet image. With regard to a cross-check image, which represents a tablet the authenticity of which is to be verified, a correlation value is calculated between a partial image within an area that is one portion of the cross-check image and the template image, and multiple feature points of the cross-check image at which the calculated correlation value is greater than a predetermined threshold value are extracted. The degree of similarity between the cross-check image and the genuine tablet image is calculated using a geometric characteristic of the extracted multiple feature points and a geometric characteristic of the stored multiple feature points of the genuine tablet image. 1. An authenticity determination system comprising a genuine product feature point registration apparatus and a matching determination apparatus , said genuine product feature point registration apparatus including:a first correlation value calculation device for calculating a correlation value between a partial image within a genuine product image and a template image;a genuine product feature point extraction device for extracting multiple feature points of the genuine product image where the correlation value calculated by said first correlation value calculation device is equal to or greater than a first threshold value; anda genuine product identification data storage device for storing genuine product identification data that includes the multiple feature points of the genuine product image extracted by said genuine product feature point extraction device, andsaid matching determination apparatus including:a second correlation value calculation device for calculating a correlation value between a partial image within an authenticity verification product image and 
...

Publication date: 07-01-2016

Display Management for High Dynamic Range Video

Number: US20160005153A1

A display management processor receives an input image with enhanced dynamic range to be displayed on a target display which has a different dynamic range than a reference display. The input image is first transformed into a perceptually-quantized (PQ) color space. A non-linear mapping function generates a tone-mapped intensity image in response to the characteristics of the source and target display and a measure of the intensity of the PQ image. After a detail-preservation step which may generate a filtered tone-mapped intensity image, an image-adaptive intensity and saturation adjustment step generates an intensity adjustment factor and a saturation adjustment factor as functions of the measure of intensity and saturation of the PQ image, which together with the filtered tone-mapped intensity image are used to generate the output image. Examples of the functions to compute the intensity and saturation adjustment factors are provided. 1. A method for the display management of images with a processor, the method comprising: accessing an input image (102) in a first color space with a first dynamic range; applying a color transformation step (110) to the input image to determine a first output image (112) in a perceptually-quantized (PQ) color space, wherein the color transformation from the first color space to the PQ color space is based at least in part on applying a non-linear perceptual quantizer function to a function of the input image; generating an intensity component (324) of the first output image (112); in response to characteristics of a target display, applying a non-linear tone-mapping function (320) to the intensity (I_O) component of the first output image (112) to determine a tone-mapped intensity image (322) for the target display; applying a detail preservation function (125) to generate a filtered tone-mapped intensity image (127) in response to the ...
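The non-linear perceptual quantizer referred to here is commonly the SMPTE ST 2084 (PQ) curve; a sketch of its forward encoding with the standard's constants:

```python
# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(y):
    """Map linear luminance y (normalized so 1.0 = 10000 cd/m^2)
    to a perceptually uniform code value in [0, 1]."""
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

code = pq_encode(100 / 10000)  # 100 cd/m^2, a typical SDR reference white
```

The curve spends roughly half of its code range below 100 cd/m^2, which is what makes it "perceptually" uniform compared with a gamma curve.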

Publication date: 07-01-2016

SYSTEMS AND METHODS FOR APPEARANCE MAPPING FOR COMPOSITING OVERLAY GRAPHICS

Number: US20160005201A1

Systems and methods for overlaying a second image/video data onto a first image/video data are described herein. The first image/video data may be intended to be rendered on a display with certain characteristics—e.g., HDR, EDR, VDR or UHD capabilities. The second image/video data may comprise graphics, closed captioning, text, advertisement—or any data that may be desired to be overlaid and/or composited onto the first image/video data. The second image/video data may be appearance mapped according to the image statistics and/or characteristics of the first image/video data. In addition, such appearance mapping may be made according to the characteristics of the display that the composite data is to be rendered. Such appearance mapping is desired to render a composite data that is visually pleasing to a viewer, rendered upon a desired display. 1. A method for overlaying a second image over a first image, the method comprising: receiving said first image and said second image, said first image differing in dynamic range and size than said second image; receiving first metadata regarding said first image; receiving second metadata regarding said second image; performing appearance mapping of said second image to determine an adjusted second image, said adjusted second image differing in dynamic range than said second image, according to said first metadata and said second metadata; and forming a composite image overlaying said adjusted second image onto at least a portion of said first image. 2. The method of claim 1, wherein said first image comprises one of a group, said group comprising: HDR image/video, EDR image/video and VDR image/video. 3. The method of claim 1, wherein receiving first metadata regarding said first image further comprises: receiving image statistics regarding said first image. 4. The method of claim 3, wherein said image statistics comprises one of a group, said group comprising: average luminance, minimum luminance, mean luminance ...

Publication date: 07-01-2016

SYSTEMS AND METHODS FOR VISUALIZING ELONGATED STRUCTURES AND DETECTING BRANCHES THEREIN

Number: US20160005212A1
Assignee:

Computer implemented methods are disclosed for acquiring, using a processor, digital data of a portion of an elongate object, and identifying, using a processor, a centerline connecting a plurality of points within the portion of the elongate object. The methods also may include defining a first half-plane along the centerline, traversing a predetermined angular distance in a clockwise or counter clockwise direction from the first half-plane to a second half-plane to define an angular wedge, and calculating, using a processor, a view of the angular wedge between the first half-plane and the second half-plane and generating an electronic view of the angular wedge. 1. A computer-implemented method for visualizing elongate objects , the method comprising:acquiring, using a processor, digital data of a portion of an elongate object;identifying, using a processor, a centerline connecting a plurality of points within the portion of the elongate object;defining a first half-plane along the centerline;traversing a predetermined angular distance in a clockwise or counter clockwise direction from the first half-plane to a second half-plane to define an angular wedge;calculating, using a processor, a view of the angular wedge between the first half-plane and the second half-plane; andgenerating an electronic view of the angular wedge.2. The method of claim 1 , further comprising claim 1 , repeating the steps of traversing and calculating for one or more additional angular wedges of the portion of the elongate object.3. The method of claim 2 , further comprising claim 2 , aligning views of two opposing angular wedges next to each other.4. 
The method of claim 1 , wherein generating the half-plane comprises:defining an origin direction for each of the plurality of points of the centerline;calculating, using a processor, a plurality of vectors, each originating from one of the plurality of points toward the origin direction; andcombining the plurality of vectors to generate the ...

Publication date: 07-01-2021

METHOD TO GENERATE A SLAP/FINGERS FOREGROUND MASK

Number: US20210004559A1
Author: Ding Yi, WANG Anne Jinsong
Assignee:

The present invention relates to a method to generate a slap/fingers foreground mask to be used for subsequent image processing of fingerprints on an image acquired using a contactless fingerprint reader having at least a flash light, said method comprising the following steps: 1. A method to generate a slap/fingers foreground mask to be used for subsequent image processing of fingerprints on an image acquired using a contactless fingerprint reader having at least a flash light , said method comprising the following steps:acquisition of two images of the slap/fingers in a contactless position in vicinity of the reader, one image taken with flash light on and one image taken without flash light,calculation of a difference map between the image acquired with flash light and the image acquired without flash light,calculation of an adaptive binarization threshold for each pixel of the image, the threshold for each pixel being the corresponding value in the difference map, to which is subtracted this corresponding value multiplied by a corresponding flashlight compensation factor value determined in a flashlight compensation factor map using an image of a non-reflective blank target acquired with flash light and to which is added this corresponding value multiplied by a corresponding background enhancement factor value determined in a background enhancement factor map using the image acquired without flash light,binarization of the difference map by attributing a first value to pixels where the adaptive binarization threshold value is higher than the corresponding value in the difference map and a second value to pixels where the adaptive binarization threshold value is lower than the corresponding value in the difference map, the binarized image being the slap/fingers foreground mask.2. The method according to claim 1 , further comprising a step of noise removal in the binarized image.3. The method according to claim 1 , wherein the flashlight compensation factor is ...
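One reading of the adaptive threshold in claim 1: per pixel, threshold = diff - diff*comp + diff*bg, so (for positive diff) a pixel is foreground exactly when its flashlight-compensation factor exceeds its background-enhancement factor. A sketch with hand-made factor maps standing in for the calibration images:

```python
import numpy as np

def foreground_mask(flash, no_flash, comp_map, bg_map):
    """Per-pixel adaptive binarization of the flash/no-flash difference
    map; the factor maps would come from a blank-target calibration image
    and from the no-flash image in practice (here they are just given)."""
    diff = flash.astype(float) - no_flash.astype(float)
    thresh = diff - diff * comp_map + diff * bg_map
    # Foreground (1) where the difference exceeds its adaptive threshold.
    return (diff > thresh).astype(np.uint8)

flash = np.array([[10.0, 10.0]])
no_flash = np.zeros((1, 2))
comp_map = np.array([[0.5, 0.1]])  # strong flash response on the finger
bg_map = np.array([[0.1, 0.3]])    # brighter ambient background pixel
mask = foreground_mask(flash, no_flash, comp_map, bg_map)
```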

Publication date: 07-01-2021

METHOD OF TRACKING A PLURALITY OF OBJECTS

Number: US20210004564A1
Assignee:

Method of tracking a plurality of objects comprising: focusing an image of the objects on an imaging element using an optical system; capturing the image of the objects using an imaging element comprising a plurality of pixels; measuring at least one characteristic of the objects from the captured image using an image processor; wherein the field of view is set to be the widest field of view for which the image processor is able to measure the at least one characteristic. 1. Method of tracking a plurality of objects comprising: focusing an image of the objects on an imaging element using an optical system; capturing the image of the objects using an imaging element comprising a plurality of pixels; measuring at least one characteristic of the objects from the captured image using an image processor; wherein the field of view is set to be the widest field of view for which the image processor is able to measure the at least one characteristic. 2. The method of claim 1, wherein the at least one characteristic includes one or more of: bends per minute, bending amplitude, length, translational speed and paralysis rate. 3. The method of any preceding claim, wherein the field of view is determined based on the average size of the objects and a predefined resolution required for the image processor to measure the at least one characteristic. 4. The method of claim 3, wherein the characteristic is length and/or translational speed and the predefined resolution is the resolution at which the objects are still detectable by the image processor. 5. The method of claim 3, wherein the characteristic is bending amplitude and/or bends per minute and the bending is calculated based on an eccentricity of the objects, and the predefined resolution is the resolution at which the total error in the characteristic is at a minimum. 6.
The method of claim 5 , wherein the error minimum is determined based on the error on eccentricity caused by pixelisation ...
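The field-of-view choice in claim 3 is simple arithmetic under a linear imaging model: an object of average size s seen across a field of view F on a sensor of P pixels covers P*s/F pixels, so the widest usable field of view is P*s divided by the minimum pixel count the processor needs. A sketch (the names and the linear model are assumptions, not the patent's notation):

```python
def widest_field_of_view(sensor_pixels, avg_object_size_mm, min_pixels):
    """Widest field of view (mm across the frame) at which an object of
    average size still covers at least `min_pixels` pixels, i.e. the
    image processor can still measure its characteristics."""
    # pixels on object = sensor_pixels * object_size / field_of_view
    return sensor_pixels * avg_object_size_mm / min_pixels

fov = widest_field_of_view(sensor_pixels=2048,
                           avg_object_size_mm=1.0,
                           min_pixels=32)
```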

Publication date: 02-01-2020

Systems and Methods for Modeling Symmetry Planes and Principal Orientation from 3D Segments

Number: US20200004901A1
Author: Esteban Jose Luis
Assignee: Geomni, Inc.

A system and method for automatically modeling symmetry planes and principal orientations from three dimensional (“3D”) segments. The system comprises receiving a set of 3D segments representing a structure from the input source, wherein the set of 3D segments comprises one or more segment pairs. The system then generates symmetry plane data by calculating a symmetry plane for each of the one or more segment pairs. Next, the system accumulates the symmetry plane data in a Hough space. Lastly, the system constructs one or more Hough space symmetry planes from the symmetry plane data and calculates a principal orientation of the structure. 1. A system for automatically modeling symmetry planes and principal orientations from three dimensional (“3D”) segments , comprising:a processor in communication with an input source; and receive a set of 3D segments representing a structure from the input source, wherein the set of 3D segments comprises one or more segment pairs;', 'generate symmetry plane data by calculating a symmetry plane for each of the one or more segment pairs;', 'accumulate the symmetry plane data in a Hough space; and', 'construct one or more Hough space symmetry planes from the symmetry plane data and calculate a principal orientation of the structure., 'computer system code executed by the processor, the computer system code causing the processor to2. The system of claim 1 , wherein the computer system code causes the processor to:select a first segment pair from the one or more segment pairs;determine whether the first segment pair is a parallel pair or a crossing pair; andwhen the first segment pair is a parallel pair, project a first point from a first line segment of the first segment pair onto a second line segment of the first segment pair to obtain a second point, calculate a normal vector and a reference point, and construct a symmetry plane between the first line segment and the second line segment using the reference point and the normal ...
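For a parallel segment pair, the symmetry-plane construction in claim 2 can be sketched directly: project a point of the first segment onto the second segment's line, take the midpoint as the reference point and the projection direction as the plane normal (the crossing-pair case and the Hough accumulation are omitted):

```python
import numpy as np

def symmetry_plane_parallel(p1, d, q1):
    """Symmetry plane of a parallel segment pair: p1 lies on the first
    segment, q1 on the second, d is their shared direction. Returns the
    unit plane normal and a reference point on the plane."""
    d = d / np.linalg.norm(d)
    p2 = q1 + np.dot(p1 - q1, d) * d  # projection of p1 onto second line
    normal = p2 - p1
    ref = (p1 + p2) / 2               # reference point on the plane
    return normal / np.linalg.norm(normal), ref

# Two parallel horizontal segments, 2 units apart along y.
n, ref = symmetry_plane_parallel(np.array([0.0, 0.0, 0.0]),
                                 np.array([1.0, 0.0, 0.0]),
                                 np.array([0.0, 2.0, 0.0]))
```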

Publication date: 07-01-2021

CONVOLUTIONAL NEURAL NETWORK ON PROGRAMMABLE TWO DIMENSIONAL IMAGE PROCESSOR

Number: US20210004633A1
Assignee:

A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The two-dimensional shift register provides local respective register space for the execution lanes. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register. The executing of the convolutional neural network also includes performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept within the two dimensional register for different stencils within the image data; and, effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array. 1. A method, comprising: loading different respective portions of an image having a plurality of image planes comprising pixel values into each two-dimensional shift register array of each of a plurality of stencil processors; loading, into each two-dimensional shift register array of each of the plurality of stencil processors, respective coefficient sets of a plurality of coefficient sets that correspond to a respective portion of the image loaded into each two-dimensional shift-register array; and concurrently multiplying within the execution lanes respective pixel values and coefficient values of the coefficient sets loaded into the stencil processor to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept ...
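The multiply/shift/accumulate loop in the abstract can be emulated in NumPy: for each coefficient, the whole plane is shifted so the needed neighbour lines up with every execution lane, multiplied by that scalar coefficient, and accumulated (circular shifts stand in for the shift register's halo handling):

```python
import numpy as np

def conv2d_by_shifts(plane, coeffs):
    """2D convolution expressed as the stencil processor performs it:
    shift the plane, multiply by one scalar coefficient, accumulate."""
    k = coeffs.shape[0] // 2
    acc = np.zeros_like(plane, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            # Align the (dy, dx) neighbour with every lane at once.
            shifted = np.roll(plane, (-dy, -dx), axis=(0, 1))
            acc += coeffs[dy + k, dx + k] * shifted
    return acc

plane = np.zeros((5, 5)); plane[2, 2] = 1.0  # unit impulse
box = np.ones((3, 3)) / 9.0                  # 3x3 box filter
out = conv2d_by_shifts(plane, box)
```

Every lane computes its own stencil in parallel with the same instruction stream, which is why the shift-then-multiply formulation maps naturally onto the execution-lane array.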

Publication date: 07-01-2021

Determining an item that has confirmed characteristics

Number: US20210004634A1
Assignee: eBay Inc

In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items are retrieved. A set of characteristics is determined, the set of characteristics being predicted to match with the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches with the object depicted in the image is received from the client device.

Publication date: 04-01-2018

SPECULAR LIGHT SHADOW REMOVAL FOR IMAGE DE-NOISING

Number: US20180005023A1
Author: Kounavis Michael
Assignee: Intel Corporation

Specular light shadow removal is described for use in image de-noising. In one example a method includes placing a window on an image, determining a cumulative distribution function for the window, determining a destination histogram for the window, determining a cumulative distribution function for the destination histogram, replacing the intensity of a pixel with the smallest index for which the histogram distribution for the pixel is greater than the window distribution, repeating determining a cumulative distribution function, a destination histogram, and a cumulative distribution function and replacing the intensity for a plurality of windows of the image, and de-noising the image after repeating by applying a median filter to the image. 1. A method comprising: placing a window on an image; determining a cumulative distribution function for the window; determining a destination histogram for the window; determining a cumulative distribution function for the destination histogram; replacing the intensity of a pixel with the smallest index for which the histogram distribution for the pixel is greater than the window distribution; repeating determining a cumulative distribution function, a destination histogram, and a cumulative distribution function and replacing the intensity for a plurality of windows of the image; and de-noising the image after repeating by applying a median filter to the image. 2. The method of claim 1, wherein the histogram is linear. 3. The method of claim 2, wherein the linear histogram has a negative slope for a lower half of possible input values and a positive slope for a higher half of possible input values. 4. The method of claim 1, wherein the destination histogram has a neighborhood around the pixel. 5. The method of claim 1, wherein de-noising comprises applying a non-linear function to the image.
6. The method of claim 1, wherein de-noising comprises generating a pilot signal before replacing the intensity of the pixel and applying the ...
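The windowed histogram-specification step described above (replace each intensity with the smallest index at which the destination CDF reaches the window CDF) can be sketched in NumPy. This is an illustrative reading of the abstract, not the patented implementation; the V-shaped destination histogram follows claim 3, and the window contents are synthetic.

```python
import numpy as np

def match_histogram(window, dest_hist):
    """Remap window intensities so their distribution follows dest_hist.

    For each intensity v, the new value is the smallest index i for which
    the destination CDF reaches the window CDF at v (classic histogram
    specification, as described in the abstract above).
    """
    levels = len(dest_hist)
    src_hist, _ = np.histogram(window, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / window.size
    dest_cdf = np.cumsum(dest_hist) / np.sum(dest_hist)
    # lookup[v] = smallest i with dest_cdf[i] >= src_cdf[v]
    lookup = np.searchsorted(dest_cdf, src_cdf)
    return lookup[window]

# V-shaped destination histogram: negative slope over the lower half of
# the intensity range, positive slope over the upper half (cf. claim 3).
levels = 256
half = levels // 2
dest = np.concatenate([np.arange(half, 0, -1), np.arange(1, half + 1)])

rng = np.random.default_rng(0)
window = rng.integers(100, 156, size=(32, 32))  # narrow mid-range band
remapped = match_histogram(window, dest)
```

The remapping stretches the narrow mid-range band of the synthetic window across the full intensity range dictated by the destination histogram.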

Publication date: 04-01-2018

METHOD FOR AUTOMATICALLY GENERATING PLANOGRAMS OF SHELVING STRUCTURES WITHIN A STORE

Number: US20180005035A1
Assignee:

One variation of a method for automatically generating a planogram for a store includes: dispatching a robotic system to autonomously navigate within the store during a mapping routine; accessing a floor map of the floor space generated by the robotic system from map data collected during the mapping routine; identifying a shelving structure within the map of the floor space; defining a first set of waypoints along an aisle facing the shelving structure; dispatching the robotic system to navigate to and to capture optical data at the set of waypoints during an imaging routine; receiving a set of images generated from optical data recorded by the robotic system during the imaging routine; identifying products and positions of products in the set of images; and generating a planogram of the shelving segment based on products and positions of products identified in the set of images. 1. A method for automatically generating a planogram assigning products to shelving structures within a store, the method comprising: dispatching a robotic system to autonomously collect map data of a floor space within the store during a first mapping routine; initializing the planogram of the store, the planogram representing locations of a set of shelving structures within the store based on map data recorded by the robotic system; dispatching the robotic system to record optical data at a first waypoint proximal a first shelving structure, in the set of shelving structures, during a first imaging routine; accessing a first image comprising optical data recorded by the robotic system while occupying the first waypoint; detecting a first shelf at a first vertical position in the first image; detecting a first object in a first lateral position over the first shelf in the first image; identifying the first object as a unit of a first product based on features extracted from a first region of the first image representing the first object; projecting the first vertical position of the first shelf
...

Publication date: 04-01-2018

METHOD AND APPARATUS FOR GENERATING AN INITIAL SUPERPIXEL LABEL MAP FOR AN IMAGE

Number: US20180005039A1
Assignee:

A method and an apparatus for generating an initial superpixel label map for a current image from an image sequence are described. The apparatus includes a feature detector that determines features in the current image. A feature tracker then tracks the determined features back into a previous image. Based on the tracked features a transformer transforms a superpixel label map associated to the previous image into an initial superpixel label map for the current image. 1. A method for generating an initial superpixel label map for a current image from an image sequence, the method comprising: determining features in the current image; tracking the determined features back into a previous image; and transforming a superpixel label map associated to the previous image into an initial superpixel label map for the current image based on the tracked features, the method further comprising adding features at the borders of the current image. 2. The method according to claim 1, further comprising generating meshes consisting of triangles for the current image and the previous image from the determined features. 3. The method according to claim 2, further comprising determining for each triangle in the current image a transformation matrix of an affine transformation for transforming the triangle into a corresponding triangle in the previous image. 4. The method according to claim 3, further comprising transforming coordinates of each pixel in the current image into transformed coordinates in the previous image using the determined transformation matrices. 5. The method according to claim 4, further comprising initializing the superpixel label map for the current image at each pixel position with a label of the label map associated to the previous image at the corresponding transformed pixel position. 6. The method according to claim 4, further comprising clipping the transformed coordinates to a nearest valid pixel position. 7. (canceled)
8. The method according to claim 1, ...
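The per-triangle affine transform described in claims 3 and 4 (three tracked feature correspondences exactly determine the six affine coefficients) can be illustrated with a small solver. The triangle coordinates below are hypothetical; the tracked motion is a pure translation for clarity.

```python
import numpy as np

def affine_from_triangle(src_pts, dst_pts):
    """Solve the 2x3 affine matrix mapping three src points to dst points.

    Each correspondence yields two linear equations, so three
    non-collinear points determine the six coefficients exactly.
    """
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (u, v)) in enumerate(zip(src_pts, dst_pts)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    coeffs = np.linalg.solve(A, b)
    return coeffs.reshape(2, 3)

def transform_points(M, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return homog @ M.T

# Hypothetical triangle in the current frame and its tracked position in
# the previous frame (translation by (+5, -3) for illustration).
src = [(0, 0), (10, 0), (0, 10)]
dst = [(5, -3), (15, -3), (5, 7)]
M = affine_from_triangle(src, dst)
mapped = transform_points(M, [(2, 2)])
```

Each pixel inside the triangle is mapped through `M` to its position in the previous frame, where the previous label map is sampled (claim 5).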

Publication date: 04-01-2018

ROAD RECOGNITION APPARATUS

Number: US20180005073A1
Assignee:

In a road recognition apparatus mounted in a vehicle, a shape change point detector is configured to detect a shape change point along each of lane lines of an own lane. A turn-off lane determiner is configured to, if the shape change point is detected, determine whether or not a shape changing lane line that is one of the lane lines including the shape change point constitutes a border of a turn-off lane branching off from the own lane. A road recognizer is configured to, when the shape change point has been detected, use only feature points of the left and right lane lines of the own lane located within a distance from the own vehicle to the shape change point to recognize the shape of the own lane, before a result of determination by the turn-off lane determiner is produced. 1. A road recognition apparatus mounted in a vehicle, comprising: a lane line recognizer configured to extract feature points from an image captured by a vehicle-mounted camera, and based on the extracted feature points, recognize lane lines that demarcate a lane of a road in which the vehicle is traveling, which lane being referred to as an own lane; a shape change point detector configured to detect a shape change point along each of lane lines of the own lane, at which a shape of the lane line changes; a turn-off lane determiner configured to, if the shape change point is detected, determine whether or not a shape changing lane line that is one of the lane lines of the own lane including the shape change point constitutes a border of a turn-off lane branching off from the own lane; and a road recognizer configured to, when the shape change point has been detected, use only feature points of the left and right lane lines of the own lane located within a distance from the own vehicle to the shape change point to recognize a shape of the own lane, before a result of determination by the turn-off lane determiner is produced. 2. The apparatus according to claim 1, wherein the turn-off lane ...

Publication date: 04-01-2018

Convolutional Neural Network On Programmable Two Dimensional Image Processor

Number: US20180005074A1
Assignee:

A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The two-dimensional shift register provides local respective register space for the execution lanes. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register. The executing of the convolutional neural network also includes performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept within the two dimensional register for different stencils within the image data; and, effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array. 1. 
A method, comprising: executing a convolutional neural network layer on an image ...: a) loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register; and b) performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept within the two dimensional register for different stencils within the image data; and effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array.
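The shift-and-accumulate convolution of step b) can be emulated on a plain array: each coefficient contributes one array-wide partial product, and shifting the image plane plays the role of shifting content within the two-dimensional shift register. This is a sketch of the computation pattern, not the patented processor.

```python
import numpy as np

def conv2d_by_shifts(image, coeffs):
    """2D 'valid' convolution computed the shift-register way: for each
    coefficient, shift the whole image plane, multiply by the scalar
    coefficient (partial products), and accumulate per stencil position.
    """
    kh, kw = coeffs.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    acc = np.zeros((oh, ow))
    for dy in range(kh):
        for dx in range(kw):
            # Shifting aligns pixel (y+dy, x+dx) with output position (y, x),
            # so every execution lane sees the right operand concurrently.
            shifted = image[dy:dy + oh, dx:dx + ow]
            acc += coeffs[dy, dx] * shifted  # partial-product accumulation
    return acc

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3))  # box filter: each output is a 3x3 stencil sum
out = conv2d_by_shifts(img, kernel)
```

Only kh*kw shift-multiply-accumulate passes are needed regardless of image size, which is what makes the scheme attractive on a lane array.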

Publication date: 04-01-2018

CONVOLUTIONAL NEURAL NETWORK ON PROGRAMMABLE TWO DIMENSIONAL IMAGE PROCESSOR

Number: US20180005075A1
Assignee:

A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register. The executing of the convolutional neural network also includes performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept within the two dimensional register for different stencils within the image data; and, effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array. 1. 
A processor comprising: a two-dimensional shift-register array; and a two-dimensional array of processing elements, wherein each shift register of the shift-register array is dedicated to one of the processing elements in the two-dimensional array of processing elements, and wherein the processor is configured to execute instructions to perform a stencil function, including: performing, by each processing element in the two-dimensional array of processing elements, a multiplication using (i) a first coefficient of the stencil function and (ii) a respective pixel value stored in a shift register dedicated to the processing element; and performing, by each processing element in the two-dimensional array of processing elements, an addition of (i) a result of the multiplication with (ii) a respective current convolution sum for the processing element to update the current convolution sum for the processing element, on each of one or more pixel values ...

Publication date: 02-01-2020

PHOTO IMAGE PROVIDING DEVICE AND PHOTO IMAGE PROVIDING METHOD

Number: US20200005100A1
Author: KIM Sungsik
Assignee: LG ELECTRONICS INC.

A photo image providing method includes learning an artificial neural network repeatedly to obtain user preference image quality information corresponding to a candidate photo image selected from a plurality of candidate photo images, and when obtaining a photo image from a camera, adjusting an image quality of the obtained photo image based on the obtained user preference image quality information. 1. A photo image providing method comprising: learning an artificial neural network repeatedly to obtain user preference image quality information corresponding to a candidate photo image selected from a plurality of candidate photo images; and when obtaining a photo image from a camera, adjusting an image quality of the obtained photo image based on the obtained user preference image quality information. 2. The method of claim 1, wherein the learning of the artificial neural network repeatedly comprises: obtaining photographic environment information related to the obtained photo image from among a plurality of photographic environment information when obtaining a photo image from the camera; controlling to display a plurality of candidate photo images based on the obtained photographic environment information; and learning the artificial neural network to obtain user preference image quality information corresponding to at least one candidate photo image selected from the plurality of candidate photo images.
3. The method of claim 1, further comprising: when a first photo image is obtained from the camera, controlling to display a plurality of candidate photo images, the plurality of candidate photo images being preset as a plurality of candidate photo images of photographic environment information related to the first photo image; learning to obtain first user preference image quality information corresponding to a candidate photo image selected from the plurality of preset candidate photo images; when a second photo image is obtained from the camera, controlling to display a ...

Publication date: 02-01-2020

Cylindrical Panorama

Number: US20200005508A1
Author: Hu Shane Ching-Feng
Assignee:

A method for generating a panoramic image is disclosed. The method comprises simultaneously capturing images from multiple camera sensors aligned horizontally along an arc and having an overlapping field of view; performing a cylindrical projection to project the captured images from the multiple camera sensors to cylindrical images; and aligning overlapping regions of the cylindrical images corresponding to the overlapping field of view based on an absolute difference of luminance, wherein the cylindrical projection is performed by adjusting a radius for the cylindrical projection, wherein the radius is adjusted based on a scale factor and wherein the scale factor is calculated based on a rigid transform and wherein the scale factor is iteratively calculated for two sensors from the multiple camera sensors. 1. A method for generating a panoramic image, comprising: capturing images simultaneously from each of multiple camera sensors aligned horizontally along an arc and having an overlapping field of view; performing a cylindrical projection to project each of the captured images from the multiple camera sensors to cylindrical images; and aligning overlapping regions of the cylindrical images corresponding to the overlapping field of view based on absolute difference of luminance, wherein the cylindrical projection is performed by adjusting a radius for the cylindrical projection, wherein the radius is adjusted based on a scale factor and wherein the scale factor is calculated based on a rigid transform, and wherein the scale factor is iteratively calculated for two sensors from the multiple camera sensors. 2. The method of claim 1, wherein aligning the overlapping regions and adjusting the radius is performed as an integrated step. 3. The method of claim 2, wherein said integrated step is part of an iterated calibration process.
4. The method of claim 3, wherein a correction for lens distortion and the cylindrical projection is combined as a single reverse address ...
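The cylindrical projection underlying the method can be sketched with the usual pinhole-to-cylinder mapping. The focal length and radius below are hypothetical values, not from the patent; with r equal to f, points on the optical axis stay fixed and the horizontal coordinate becomes arc length along the cylinder.

```python
import math

def to_cylinder(x, y, f, r):
    """Project a pixel at (x, y) in a pinhole image (principal point at
    the origin, focal length f) onto a cylinder of radius r.
    """
    theta = math.atan2(x, f)      # azimuth of the viewing ray
    h = y / math.hypot(x, f)      # height per unit of radius
    return r * theta, r * h

# Hypothetical focal length / radius of 500 px.
u0, v0 = to_cylinder(0.0, 0.0, 500.0, 500.0)    # optical axis is fixed
u1, v1 = to_cylinder(500.0, 0.0, 500.0, 500.0)  # 45 deg off-axis point
```

Adjusting `r` by a scale factor, as the claims describe, uniformly stretches the projected images so that the overlapping regions of neighboring sensors can be brought into registration.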

Publication date: 03-01-2019

TEMPERATURE COMPENSATION FOR STRUCTURED LIGHT DEPTH IMAGING SYSTEM

Number: US20190005664A1
Assignee:

Disclosed are an apparatus and a method of compensating temperature shifts of a structured light pattern for a depth imaging system. In some embodiments, a depth imaging device includes a light source, an imaging sensor and a processor. The light source emits light corresponding to a pattern. A temperature drift of the light source can cause a shift of the pattern. The imaging sensor receives the light reflected by environment in front of the depth imaging device and generates a depth map including a plurality of pixel values corresponding to depths of the environment relative to the depth imaging device. The processor estimates the shift of the pattern based on a polynomial model depending on the temperature drift of the light source. The processor further adjusts the depth map based on the shift of the pattern. 1. A depth imaging device, comprising: an imaging sensor configured to: receive light as reflected by an environment of the depth imaging device; and generate, based on the light, a depth map including a plurality of pixel values corresponding to depths of the environment relative to the depth imaging device; a temperature sensor configured to measure a temperature drift from a reference temperature of one or more of a light source, an optical component of the depth imaging device, or the environment of the depth imaging device; and a processor configured to: estimate a shift of a pattern of the light based on the temperature drift; and adjust the depth map based on the shift of the pattern. 2. The depth imaging device of claim 1, wherein the pattern is a speckle pattern corresponding to a reference image including a plurality of dots, each of the dots of the plurality of dots having known coordinates in the reference image. 3. The depth imaging device of claim 1, wherein the shift of the pattern is estimated by using a polynomial model depending on the temperature drift.
4. The depth imaging device of claim 3, wherein the polynomial model is ...
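The polynomial shift model of claim 3 can be illustrated as follows. The coefficients and the linear pixel-shift-to-depth factor are hypothetical calibration values introduced for this sketch, not values from the patent.

```python
def pattern_shift(delta_t, coeffs):
    """Evaluate the pattern shift (in pixels) as a polynomial in the
    temperature drift delta_t: c0 + c1*dT + c2*dT**2 + ...
    """
    return sum(c * delta_t ** k for k, c in enumerate(coeffs))

def correct_depth_map(depth_map, shift, px_to_depth):
    """Subtract the depth error implied by a horizontal pattern shift;
    px_to_depth is an assumed linear disparity-to-depth factor.
    """
    return [[d - shift * px_to_depth for d in row] for row in depth_map]

coeffs = [0.0, 0.02, 0.001]          # hypothetical 2nd-order model
shift = pattern_shift(10.0, coeffs)  # 10 degrees above the reference
corrected = correct_depth_map([[1.0, 1.2]], shift, px_to_depth=0.5)
```

A low-order polynomial like this is cheap to evaluate per frame, which is presumably why the patent models the thermal drift this way rather than re-calibrating the reference pattern.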

Publication date: 03-01-2019

THREE-DIMENSIONAL IMAGING USING FREQUENCY DOMAIN-BASED PROCESSING

Number: US20190005671A1
Assignee:

A brightness image of a scene is converted into a corresponding frequency domain image and it is determined whether a threshold condition is satisfied for each of one or more regions of interest in the frequency domain image, the threshold condition being that the number of frequencies in the region of interest is at least as high as a threshold value. The results of the determination can be used to facilitate selection of an appropriate block matching algorithm for deriving disparity or other distance data and/or to control adjustment of an illumination source that generates structured light for the scene. 1. An imaging system comprising: an illumination source operable to generate structured light with which to illuminate a scene; a depth camera sensitive to light generated by the illumination source and operable to detect optical signals reflected by one or more objects in the scene, the depth camera being further operable to convert the detected optical signals to corresponding electrical signals representing a brightness image of the scene; and one or more processor units operable collectively to receive the electrical signals from the depth camera and operable collectively to: transform the brightness image into a corresponding frequency domain image; determine whether a threshold condition is satisfied for each of one or more regions of interest in the frequency domain image, the threshold condition being that the number of frequencies in the region of interest is at least as high as a threshold value; and generate a control signal to adjust an optical power of the illumination source if it is determined that the threshold condition is satisfied for fewer than a predetermined minimum number of the one or more regions of interest. 2. The imaging system of claim 1, wherein the threshold condition is that the number of frequencies, which have an amplitude at least as high as a threshold amplitude, is at least as high as the threshold value. 3. The ...
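The threshold condition of claim 2 (count the frequency bins in a region of interest whose amplitude reaches a threshold) can be sketched with a 2D FFT. The grating image, region bounds, and thresholds below are illustrative, not from the patent.

```python
import numpy as np

def roi_passes(brightness, roi, amp_threshold, count_threshold):
    """Check the claim-2 condition for one region of interest: transform
    to the frequency domain and require at least count_threshold bins
    whose amplitude is at least amp_threshold.
    """
    spectrum = np.fft.fft2(brightness)
    y0, y1, x0, x1 = roi
    region = np.abs(spectrum[y0:y1, x0:x1])
    return int(np.count_nonzero(region >= amp_threshold)) >= count_threshold

# A pure horizontal grating concentrates its energy in two bins on the
# zero-vertical-frequency row, so a ROI away from that row fails the
# condition while a ROI covering that row passes it.
n = 32
x = np.arange(n)
grating = np.cos(2 * np.pi * 8 * x / n)[None, :].repeat(n, axis=0)
fails = roi_passes(grating, (1, 5, 1, 5), amp_threshold=1.0, count_threshold=2)
passes = roi_passes(grating, (0, 1, 0, n), amp_threshold=1.0, count_threshold=2)
```

A structured-light pattern that is too dim produces few strong frequency bins, which is exactly the case the control signal in claim 1 reacts to.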

Publication date: 03-01-2019

Alert volume normalization in a video surveillance system

Number: US20190005806A1
Assignee: Omni AI Inc

Techniques are disclosed for normalizing and publishing alerts using a behavioral recognition-based video surveillance system configured with an alert normalization module. Certain embodiments allow a user of the behavioral recognition system to provide the normalization module with a set of relative weights for alert types and a maximum publication value. Using these values, the normalization module evaluates an alert and determines whether its rareness value exceeds a threshold. Upon determining that the alert exceeds the threshold, the module normalizes and publishes the alert.
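The weighting-and-threshold logic described above can be sketched as follows. The abstract does not fix a concrete formula, so the alert types, weights, threshold, and the clamp-to-maximum rule here are all hypothetical.

```python
def publish_value(rareness, weight, max_publication_value, threshold):
    """Weight an alert's rareness by its type weight; suppress it (return
    None) unless the weighted score exceeds the threshold, otherwise clamp
    the published value to the user-configured maximum.
    """
    score = rareness * weight
    if score <= threshold:
        return None
    return min(score, max_publication_value)

# Hypothetical configuration: relative weights per alert type.
weights = {"loitering": 1.0, "wrong_way": 2.0}
suppressed = publish_value(40, weights["loitering"], 100, 50)  # 40 <= 50
published = publish_value(40, weights["wrong_way"], 100, 50)   # 80 > 50
```

Because the weights are relative, an operator can make rare-but-uninteresting alert types quieter without retraining the behavioral model.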

Publication date: 05-01-2017

TECHNOLOGIES FOR PAN TILT UNIT CALIBRATION

Number: US20170006209A1
Assignee:

Technologies for calibrating a pan tilt unit with a robot include a robot controller to move a camera of the pan tilt unit about a first rotational axis of the pan tilt unit to at least three different first axis positions. The robot controller records a first set of positions of a monitored component of the robot in a frame of reference of the robot and a position of the camera in a frame of reference of the pan tilt unit during a period in which the monitored component is within a field of view of the camera for each of the at least three different first axis positions. Further, the robot controller moves the camera about a second rotational axis of the pan tilt unit to at least three different second axis positions and records a second set of positions of the monitored component in the frame of reference of the robot and a position of the camera in the frame of reference of the pan tilt unit during a period in which the monitored component is within a field of view of the camera for each of the at least three different second axis positions. Further, the robot controller determines a transformation from the frame of reference of the robot to the frame of reference of the pan tilt unit based on the first set of recorded positions and the second set of recorded positions. 1. 
A robot system for calibrating a pan tilt unit, the robot system comprising: a robot; an arm control circuitry configured to operate an articulating arm and a robot tool of the robot; a pan tilt unit control circuitry to (i) move a camera of the pan tilt unit about a first rotational axis of the pan tilt unit to at least three different first axis positions and (ii) move the camera about a second rotational axis of the pan tilt unit to at least three different second axis positions; a position recording circuitry to (i) record a first set of positions of the robot tool in a frame of reference of the robot and a position of the camera in a frame of reference of the pan tilt unit during a period in ...

Publication date: 05-01-2017

Image stitching in a multi-camera array

Number: US20170006219A1
Assignee: GoPro Inc

Images captured by multi-camera arrays with overlap regions can be stitched together using image stitching operations. An image stitching operation can be selected for use in stitching images based on a number of factors. An image stitching operation can be selected based on a view window location of a user viewing the images to be stitched together. An image stitching operation can also be selected based on a type, priority, or depth of image features located within an overlap region. Finally, an image stitching operation can be selected based on a likelihood that a particular image stitching operation will produce visible artifacts. Once a stitching operation is selected, the images corresponding to the overlap region can be stitched using the stitching operation, and the stitched image can be stored for subsequent access.

Publication date: 05-01-2017

Image stitching in a multi-camera array

Number: US20170006220A1
Assignee: GoPro Inc

Images captured by multi-camera arrays with overlap regions can be stitched together using image stitching operations. An image stitching operation can be selected for use in stitching images based on a number of factors. An image stitching operation can be selected based on a view window location of a user viewing the images to be stitched together. An image stitching operation can also be selected based on a type, priority, or depth of image features located within an overlap region. Finally, an image stitching operation can be selected based on a likelihood that a particular image stitching operation will produce visible artifacts. Once a stitching operation is selected, the images corresponding to the overlap region can be stitched using the stitching operation, and the stitched image can be stored for subsequent access.

Publication date: 14-01-2016

METHOD AND SYSTEM FOR PATIENT-SPECIFIC MODELING OF BLOOD FLOW

Number: US20160007945A1
Author: Taylor Charles A.
Assignee:

Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model. 1-184. (canceled) 185. A method for vascular assessment comprising: receiving a plurality of 2D angiographic images of a portion of a vasculature of a subject, and processing said images to produce a stenotic model over the vasculature, said stenotic model having measurements of the vasculature at one or more locations along vessels of the vasculature; obtaining a flow characteristic of the stenotic model; and calculating an index indicative of vascular function, based, at least in part, on the flow characteristic in the stenotic model. 186. The method according to claim 185, wherein said measurements of the vasculature are at one or more locations along a centerline of at least one branch of the vasculature. 187. The method according to claim 185, wherein said flow characteristic of said stenotic model comprises resistance to fluid flow. 188. The method according to claim 187, further comprising identifying in said first stenotic model a stenosed vessel and a downstream portion of said stenosed vessel, and calculating said resistance to fluid flow in said downstream portion; wherein said index is calculated based on a volume of said downstream portion, and on a contribution of said stenosed vessel to said resistance to fluid flow. 189. The method according to claim 185, wherein said flow characteristic of said stenotic model ...
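The "resistance to fluid flow" characteristic of claim 187 can be illustrated with the idealized Poiseuille model for a cylindrical vessel segment. This is a textbook stand-in, not the patent's solver; the viscosity and geometry values are hypothetical. Note the fourth-power dependence on radius, which is why a stenosis dominates the segment resistance.

```python
import math

def poiseuille_resistance(mu, length, radius):
    """Resistance of an idealized cylindrical vessel segment to laminar
    flow: R = 8*mu*L / (pi * r**4) (Poiseuille's law).
    """
    return 8.0 * mu * length / (math.pi * radius ** 4)

def series_resistance(segments):
    """Segments along one vessel add like resistors in series."""
    return sum(segments)

# Hypothetical values: blood viscosity ~3.5e-3 Pa*s, 2 cm segments,
# a healthy 2 mm radius versus a stenosed 1 mm radius.
mu = 3.5e-3
healthy = poiseuille_resistance(mu, 0.02, 0.002)
stenosed = poiseuille_resistance(mu, 0.02, 0.001)
ratio = stenosed / healthy  # halving the radius multiplies R by 16
total = series_resistance([healthy, stenosed])
```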

Publication date: 14-01-2016

Method and system for reducing localized artifacts in imaging data

Number: US20160007948A1

A method and system for reducing localized artifacts in imaging data, such as motion artifacts and bone streak artifacts, are provided. The method includes segmenting the imaging data to identify one or more suspect regions in the imaging data near which localized artifacts are expected to occur, defining an artifact-containing region of interest in the imaging data around each suspect region, and applying a local bias field within the artifact-containing regions to correct for the localized artifacts.

Publication date: 20-01-2022

GENERATING AN IMAGE OF THE SURROUNDINGS OF AN ARTICULATED VEHICLE

Number: US20220019815A1
Assignee:

Systems and methods for generating an image of the surroundings of an articulated vehicle are provided. According to an aspect of the invention, a processor determines a relative position between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle; receives a first image from a first camera arranged on the first vehicle and a second image from a second camera arranged on the second vehicle; and combines the first image and the second image based on the relative position between the first vehicle and the second vehicle to generate a combined image of surroundings of the articulated vehicle. 1. A method comprising: determining, by a processor, an angle between a first vehicle of an articulated vehicle and a second vehicle of the articulated vehicle as the first and second vehicles rotate laterally relative to each other around a point at which the first and second vehicles are connected to each other, based on a relative position between a first camera arranged on the first vehicle and a second camera arranged on the second vehicle, the first and second cameras being located on a same side of the articulated vehicle; receiving, by the processor, a first image from the first camera arranged on the first vehicle and a second image from the second camera arranged on the second vehicle, the first and second images being obtained from the same side of the articulated vehicle; and combining, by the processor, the first image and the second image based on the relative position between the first camera and the second camera to generate a combined image of surroundings of the articulated vehicle on the same side thereof, wherein the first image and the second image are combined by rotating the first image and the second image with respect to each other, based on the angle between the first vehicle and the second vehicle.
2. The method according to claim 1, wherein the angle is measured by an angular sensor arranged on the articulated vehicle. ...
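Combining the two images by a rotation derived from the articulation angle can be sketched as follows. The camera and hitch-point coordinates are hypothetical; the angle is inferred from the relative camera positions about the hitch, as in claim 1.

```python
import math

def rotation_2d(angle_rad):
    """2x2 rotation matrix for rotating image coordinates by angle_rad."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, -s], [s, c]]

def articulation_angle(cam1_xy, cam2_xy, hitch_xy):
    """Angle between the two vehicle bodies, inferred from the positions
    of the two same-side cameras relative to the hitch point (a
    simplified stand-in for the relative-position step in the claim).
    """
    a1 = math.atan2(cam1_xy[1] - hitch_xy[1], cam1_xy[0] - hitch_xy[0])
    a2 = math.atan2(cam2_xy[1] - hitch_xy[1], cam2_xy[0] - hitch_xy[0])
    return a2 - a1

# Tractor camera straight ahead of the hitch; trailer camera swung 30 deg.
trailer_cam = (math.cos(math.radians(30)), math.sin(math.radians(30)))
angle = articulation_angle((1.0, 0.0), trailer_cam, (0.0, 0.0))
R = rotation_2d(-angle)  # rotate the trailer image back into alignment
```

Applying `R` to the trailer-side image coordinates before blending keeps the two side views consistent as the vehicle bends around the hitch.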

Publication date: 14-01-2016

TOOTH MOVEMENT MEASUREMENT BY AUTOMATIC IMPRESSION MATCHING

Number: US20160008096A1
Assignee:

The present invention relates to systems and methods for detecting deviations from an orthodontic treatment plan. One method includes receiving a tracking model, performing a matching step between individual teeth in a plan model and the tracking model, comparing the tracking model with the plan model, and detecting one or more positional differences. 1. A system for performing an alignment between digital models of a patient's teeth for improved detection of deviations from an orthodontic treatment plan, the system comprising: at least one processor; and memory comprising instructions that, when executed by the at least one processor, cause the system to: obtain a first digital model of the patient's teeth in a first arrangement and a second digital model of the patient's teeth in a second arrangement; detect, in each of the first and second digital models, a partial region beyond one or more tooth crowns based on a polygon formed from points on one or more teeth; and perform an alignment between the first and second digital models using the partial regions such that one or more stationary elements of each of the first and second digital models are aligned with one another. 2. The system of claim 1, wherein the instructions, when executed by the at least one processor, further cause the system to detect one or more positional differences between the first and second arrangements of the patient's teeth. 3. The system of claim 1, wherein the first arrangement comprises an actual arrangement of the patient's teeth after the orthodontic treatment plan has begun for the patient and the second arrangement comprises a pre-determined planned arrangement of the patient's teeth. 4. The system of claim 3, wherein the first digital model comprises a non-segmented model of the patient's teeth in the actual arrangement and the second digital model comprises a previously segmented model of the patient's teeth. 5. The system of claim 1, wherein the ...
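Aligning two digital tooth models over stationary partial regions is at heart a rigid registration problem. One standard building block for this, assuming point correspondences between the two models are already established (which the patent's matching step would have to supply), is the Kabsch least-squares fit; this is a generic sketch, not the patented procedure:

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch: least-squares rotation R and translation t so that
    source @ R.T + t best matches target (rows are corresponding points)."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    H = (source - sc).T @ (target - tc)      # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = tc - R @ sc
    return R, t
```

With noisy correspondences the same fit is typically iterated inside an ICP loop; the closed form above is the inner step.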

Publication date: 14-01-2016

System And Method Of Material Handling Using One Or More Imaging Devices On The Transferring Vehicle And On The Receiving Vehicle To Control The Material Distribution Into The Storage Portion Of The Receiving Vehicle

Number: US20160009509A1
Assignee: Deere & Company

First imaging device collects first image data, whereas second imaging device collects second image data of a storage portion. A container identification module identifies a container perimeter of the storage portion in at least one of the collected first image data and the collected second image data. A spout identification module is adapted to identify a spout of the transferring vehicle in the collected image data. An image data evaluator determines whether to use the first image data, the second image data, or both based on an evaluation of the intensity of pixel data or ambient light conditions. An alignment module is adapted to determine the relative position of the spout and the container perimeter and to generate command data to the propelled portion to steer the storage portion in cooperative alignment such that the spout is aligned within a central zone or a target zone of the container perimeter. 1. A method for facilitating the transfer of material from a transferring vehicle having a material distribution end to a receiving vehicle having a bin to the store transferred material , the method comprising the steps of:a. identifying and locating the bin;b. detecting a representation of the fill level or volumetric distribution of the material in the bin;c. aligning the material distribution end over a current target area of the bin requiring the material;d. determining subsequent target areas of the bin that require material based on the representation of the fill level or volumetric distribution of the material in the bin;e. transferring the material from the transferring vehicle to the current target area of the bin of the receiving vehicle;f. detecting when the current target area of the bin is filled with the material;g. repeating steps c-f until the subsequent target areas of the bin are filled; andh. terminating the transfer of the material from the transferring vehicle to the receiving vehicle.2. The method according to claim 1 , wherein the ...
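Steps c through g of the claim above form a simple control loop: pick a bin zone that still needs material, transfer into it until it is detected as filled, and repeat until every zone is full. A toy sketch of that loop over a one-dimensional fill profile (the zone granularity and the fixed transfer step are assumptions of the sketch):

```python
def fill_bin(fill_levels, capacity, transfer_step):
    """Greedy fill loop: repeatedly aim at the least-filled zone and
    transfer material until every zone reaches capacity.
    Assumes transfer_step > 0."""
    levels = list(fill_levels)
    while min(levels) < capacity:
        target = levels.index(min(levels))  # steps c-d: choose target zone
        # step e: transfer; step f: cap detects when the zone is filled
        levels[target] = min(capacity, levels[target] + transfer_step)
    return levels
```

The real system steers the receiving vehicle so the spout aligns with the chosen zone; here the "steering" is just the index selection.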

Publication date: 14-01-2016

INDUSTRIAL VEHICLES WITH OVERHEAD LIGHT BASED LOCALIZATION

Number: US20160011595A1
Assignee: CROWN EQUIPMENT LIMITED

According to the embodiments described herein, a method for environmental based localization may include capturing an input image of a ceiling comprising a plurality of skylights. Features can be extracted from the input image. The features can be grouped into a plurality of feature groups such that each of the feature groups is associated with one of the skylights. Line segments can be extracted from the features of each feature group, automatically, with one or more processors executing a feature extraction algorithm on each feature group separately. At least two selected lines of the line segments of each feature group can be selected. A centerline for each of the feature groups can be determined based at least in part upon the two selected lines. The centerline of each of the feature groups can be associated with one of the skylights. 1. An industrial vehicle comprising a camera, a steering apparatus, a throttle, wheels, and one or more processors, wherein the steering apparatus controls the orientation of at least one of the wheels; the throttle controls a traveling speed of the industrial vehicle; the camera is communicatively coupled to the one or more processors; the camera captures an input image of ceiling lights of the ceiling of the warehouse; and the one or more processors: associate raw features of the ceiling lights of the input image with one or more feature groups, execute a Hough transform to transform the raw features of the one or more feature groups into line segments associated with the one or more feature groups, determine a convex hull of the raw features of the one or more feature groups, compare the line segments of the one or more feature groups and the convex hull in Hough space, and discard the line segments of the one or more feature groups that are outside of a threshold of similarity to the convex hull of the raw features of the one or more feature groups, whereby a preferred set of lines is selected for the one or more feature groups from the line ...
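The claim's Hough transform step maps raw feature points into a (rho, theta) accumulator in which collinear points vote into the same cell. A minimal point-voting accumulator under the usual rho = x·cos(theta) + y·sin(theta) parameterisation (the bin counts are illustrative choices, not values from the patent):

```python
import numpy as np

def hough_accumulate(points, n_theta=180, n_rho=100):
    """Vote each (x, y) feature point into a (rho, theta) accumulator."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.hypot(np.abs(pts[:, 0]).max(), np.abs(pts[:, 1]).max()) + 1e-9
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-rho_max, rho_max] onto bins 0..n_rho-1
        idx = np.round((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas
```

Peaks in the accumulator correspond to candidate lines; the patent then filters these candidates against the convex hull of the raw features in the same (rho, theta) space.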

Publication date: 12-01-2017

SUPERVISED FACIAL RECOGNITION SYSTEM AND METHOD

Number: US20170011257A1

A computer executed method for supervised facial recognition comprising the operations of preprocessing, feature extraction and recognition. Preprocessing may comprise dividing received face images into several subimages, converting the different face image (or subimage) dimensions into a common dimension and/or converting the datatypes of all of the face images (or subimages) into an appropriate datatype. In feature extraction, 2D DMWT is used to extract information from the face images. Application of the 2D DMWT may be followed by FastICA. FastICA, or, in cases where FastICA is not used, 2D DMWT, may be followed by application of the l-norm and/or eigendecomposition to obtain discriminating and independent features. The resulting independent features are fed into the recognition phase, which may use a neural network, to identify an unknown face image. 1. A computer executed method for facial recognition comprising:receiving a face image;performing preprocessing on the face image;applying a 2D DMWT to the preprocessed face image to obtain a resultant image matrix for the face image, the resultant image matrix having a plurality of subimages;converting each of the subimages into a vector;combining the vectors for each of the subimages to create a feature matrix;applying 2D FastICA to the feature matrix to obtain a plurality of independent subimages;converting the plurality of independent subimages into two-dimensional form;determining a resultant feature vector using the plurality of two-dimensional independent subimages; andperforming recognition of the resultant feature vector.2. The method of claim 1 , wherein the operation of performing preprocessing on the face image comprises:converting an image dimension of the face image to a common dimension; andconverting the face image from a first datatype to a second datatype.3. The method of claim 2 , wherein the common dimension is of size N×N and wherein N is of the power two.4. 
The method of claim 2, wherein the ...

Publication date: 14-01-2016

SYSTEMS AND METHODS OF EYE TRACKING CALIBRATION

Number: US20160011658A1
Assignee:

An image of a user's eyes and/or face, captured by a camera on the computing device or on a device coupled to the computing device, may be analyzed using computer-vision algorithms, such as eye tracking and gaze detection algorithms, to determine the location of the user's eyes and estimate the gaze information associated with the user. A user calibration process may be conducted to calculate calibration parameters associated with the user. These calibration parameters may be taken into account to accurately determine the location of the user's eyes and estimate the location on the display at which the user is looking. The calibration process may include determining a plane on which the user's eyes converge and relating that plane to a plane of a screen on which calibration targets are displayed. 1. A method comprising:displaying an object on a display of a computing device in communication with an eye tracking device, the object being associated with a calculation of calibration parameters relating to a calibration of a calculation of gaze information of a user of the computing device, the gaze information indicating information about where the user is looking;while the object is displayed, receiving, from the eye tracking device, an image of at least one eye of the user;determining eye information associated with the user, the eye information being based on the image and relating to eye features associated with the at least one eye of the user; andcalculating one or more of the calibration parameters and one or more geometry parameters based on the eye information, the one or more geometry parameters indicating information associated with the display relative to the eye tracking device.2. The method of claim 1 , further comprising:displaying a second object on the display of the computing device;while the second object is displayed, receiving, from the eye tracking device, a second image of the at least one eye of the user;determining second eye information ...

Publication date: 14-01-2016

SYSTEMS AND METHODS FOR MANIPULATION OF OBJECTS

Number: US20160011750A1
Assignee:

Systems and methods for manipulating an object include a display for displaying an object where the object has a geometric shape and is arranged in a first orientation of the geometric shape. The display also displays at least a second orientation of the geometric shape in proximity to the object. The system includes a user interface for receiving a user input to select the second orientation of the geometric shape. A processor, in communication with the display and user interface, determines one or more possible orientations of the object including the second orientation and arranges the orientation of the geometric shape of the object to match the selected second orientation. 1.-44. (canceled) 45. A system for manipulating a first object in relation to one or more other objects comprising: a computer; displaying the one or more other objects on a display such that each of the one or more other objects has a geometric shape, is arranged in an orientation of the geometric shape, and is non-overlapping with respect to any other objects; displaying the first object in a first position on the display, the first object having a geometric shape and being arranged in a first orientation of the geometric shape; determining a single best destination position and orientation candidate of the object from among one or more possible destination position and orientation candidates based on at least one of a selected portion of the display, the geometric shape of the object, the geometric shape of one or more subsequently available objects, one or more possible destination position and orientation candidates, the height to width ratio of an object or the other objects, the number and position of any empty cells next to the other objects, any gaps next to the other objects, and any empty cells or gaps subsequent to positioning and orienting the object; wherein the one or more possible destination position and orientation candidates are next to at least one of the one or more ...

Publication date: 12-01-2017

METHOD AND APPARATUS FOR MEASURING AN ULTRASONIC IMAGE

Number: US20170011515A1
Assignee:

The present invention relates to a method and apparatus for measuring an ultrasonic image. The method comprises: a measuring template loading step: loading a measuring template according to a received instruction; and a measuring template displaying step: displaying a selected measuring template at a designated position on the ultrasonic image. 1. A method for measuring an ultrasonic image , comprising:a measuring template loading step: loading a measuring template according to a received instruction; anda measuring template displaying step: displaying a selected measuring template at a designated position on said ultrasonic image.2. The method according to claim 1 , wherein said ultrasonic image is a live ultrasonic image.3. The method according to claim 1 , further comprising: a measuring template generating step: generating said measuring template according to clinical experience data of a measured object on the ultrasonic image where said measuring template is applied and storing said measuring template.4. The method according to claim 3 , wherein said measuring template generating step further comprises:generating a length measuring template according to said clinical experience data;generating an area measuring template according to said clinical experience data; andgenerating an angle measuring template according to said clinical experience data.5. The method according to claim 4 , wherein said length measuring template contains straight-line segments which contain length information and midpoint position information.6. The method according to claim 4 , wherein said area measuring template contains an enclosed plane figure.7. The method according to claim 6 , wherein said area measuring template contains at least one circle.8. The method according to claim 6 , wherein said area measuring template contains a plurality of concentric circles.9. 
The method according to claim 1, further comprising: a measuring template moving step: shifting and/or rotating said ...

Publication date: 14-01-2016

METHOD FOR ACCURATELY GEOLOCATING AN IMAGE SENSOR INSTALLED ON BOARD AN AIRCRAFT

Number: US20160012289A1
Author: PETIT Jean-Marie
Assignee: THALES

A method for geolocating an image sensor having an LoS and installed on board an aircraft. The geographical position of the sensor and the orientation of its LoS being approximate, it comprises: a step of creating an opportune landmark comprising the following substeps: an operator locating, on a screen for displaying acquired images, a stationary element on the ground, the axis of a telemeter being indicated in these images by means of a reticle the direction of which represents the LoS; the operator moving the LoS in order to place the reticle on this stationary element; tracking of this stationary element; estimating the approximate geographical position of this stationary element; searching in a terrain DB for the location corresponding to a zone centered on the stationary element; displaying an image of the terrain of this location, the operator locating the stationary element; and the operator pointing to this stationary element in the displayed terrain image, the geographical coordinates pointed to being retrieved from the terrain DB, this stationary element becoming an opportune landmark; and the sensor moving relative to the landmark, a step of accurately locating the sensor, from the geographical coordinates of this landmark and using a Kalman filter supplied with a plurality of measurements of the distance between the sensor and the landmark and with a plurality of measurements of the orientation of the LoS of the sensor toward the landmark, there being one orientation measurement for each telemetry measurement, simultaneously allowing the orientation of the LoS to be accurately estimated. 1. A method for geolocating an image sensor having a line of sight and installed on board an aircraft , characterized in that the geographical position of the sensor and the orientation of its a line of sight being approximate , it comprises:a step of creating at least one opportune landmark comprising the following substeps:an operator locating, on a screen for ...
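The accurate-location step above fuses many range (telemetry) and line-of-sight orientation measurements with a Kalman filter. The full filter is multi-state and, with range/bearing measurements, likely nonlinear; the sketch below shows only the scalar gain/update recursion for repeated noisy measurements of one fixed quantity, such as the range to the opportune landmark (all parameter values are assumptions):

```python
def kalman_scalar(measurements, meas_var, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a stationary state: fuse repeated noisy
    measurements of a fixed quantity into one estimate with variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + meas_var)  # Kalman gain: trust in the new measurement
        x = x + k * (z - x)     # update the estimate toward the measurement
        p = (1.0 - k) * p       # the variance shrinks with each measurement
    return x, p
```

With n equally weighted measurements and a diffuse prior, this recursion converges to the sample mean with variance about meas_var / n, which is why accumulating many telemetry shots tightens the sensor's position estimate.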

Publication date: 14-01-2016

LANE BOUNDARY LINE RECOGNITION DEVICE AND COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM OF RECOGNIZING LANE BOUNDARY LINES ON ROADWAY

Number: US20160012298A1
Assignee:

An in-vehicle camera obtains image frames of a scene surrounding an own vehicle on a roadway. An extracting section in a lane boundary line recognition device extracts white line candidates from the image frames. The white line candidates indicate a degree of probability of white lines on an own vehicle lane on the roadway and a white line of a branch road which branches from the roadway. A branch judgment section calculates a likelihood of the white line as the white line of the branch road, and judges whether or not the white line candidate is the white line of the branch road based on the calculated likelihood. The branch judgment section decreases the calculated likelihood when a recognizable distance of the lane boundary line candidate monotonically decreases in a predetermined number of the image frames. 1. A lane boundary line recognition device comprising:a detection section capable of detecting lane boundary line candidates on a roadway on which an own vehicle drives on the basis of image frames of a surrounding area of the own vehicle on the roadway, captured by an in-vehicle camera mounted on the own vehicle; anda branch judgment section capable of calculating a likelihood which indicates a degree of whether each of the lane boundary line candidates detected by the detection section is a lane boundary line of a branch road, the branch road branching from the roadway, and the branch judgment section judging whether or not the lane boundary line candidate detected by the detection section is the lane boundary line of the branch road on the basis of the calculated likelihood, the branch judgment section increasing the likelihood of the lane boundary line candidate when a recognizable distance of the lane boundary line candidate monotonically decreases in a predetermined number of the image frames, where the recognizable distance indicates a distance to a farthest recognizable end point of the lane boundary line candidate.2. 
The lane boundary line recognition ...
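The trigger in this entry is detecting that a candidate's recognizable distance monotonically decreases over a run of frames, and then adjusting its branch-road likelihood. A sketch of that check, with the adjustment direction following claim 1 (increase) and the step size as an assumption of the sketch:

```python
def monotonically_decreases(distances):
    """True when the recognizable distance strictly shrinks frame over frame."""
    return len(distances) >= 2 and all(
        b < a for a, b in zip(distances, distances[1:]))

def update_branch_likelihood(likelihood, recent_distances, step=0.2):
    """Raise the branch-road likelihood (capped at 1.0) when the candidate's
    recognizable distance monotonically decreases over the recent frames."""
    if monotonically_decreases(recent_distances):
        likelihood = min(1.0, likelihood + step)
    return likelihood
```

The intuition: a branch line drifts away from the host lane, so its farthest recognizable end point creeps closer frame by frame, unlike a true lane boundary.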

Publication date: 14-01-2016

LANE BOUNDARY LINE RECOGNITION DEVICE AND COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM OF RECOGNIZING LANE BOUNDARY LINES ON ROADWAY

Number: US20160012299A1
Assignee:

A lane boundary line recognition device detects lane boundary line candidates of a roadway from images captured by an in-vehicle camera, judges that the lane boundary line candidate is a lane boundary line of a branch road, and calculates a curvature of the lane boundary line candidate, and recognizes the lane boundary line based on the calculated curvature. The device removes the lane boundary line candidate, which has been judged as the lane boundary line of the branch road, is removed from a group of the lane boundary line candidates, and calculates the curvature of the lane boundary line candidate based on an estimated rate of change of the curvature. The device uses a past curvature calculated predetermined-number of images before when the lane boundary line candidate is the lane boundary line of the branch road, and resets the estimated rate of change of the curvature to zero. 1. A lane boundary line recognition device comprising:a detection section capable of detecting lane boundary line candidates of a roadway on the basis of frame images of the roadway around an own vehicle transmitted from an in-vehicle camera;a branch judgment section capable of judging whether the lane boundary line candidate detected by the detection section corresponds to a lane boundary line of a branch road; anda recognition section capable of calculating feature values comprising a curvature of the lane boundary line candidate detected by the detection section, and recognizing the lane boundary line on the basis of the calculated feature values, the recognition section comprising:a removing section capable of removing the lane boundary line candidate, which has been judged to correspond to the lane boundary line of the branch road by the branch judgement section, is removed from the lane boundary line candidates;a curvature calculation section capable of calculating a curvature of the lane boundary line candidate on the basis of an estimated rate of change of the curvature of the 
...

Publication date: 14-01-2016

LANE BOUNDARY LINE RECOGNITION DEVICE

Number: US20160012300A1
Assignee:

In a lane boundary line recognition device, an extraction unit extracts lane boundary line candidates from image acquired by an in-vehicle camera. A position estimation unit estimates a position of each lane boundary line based on drive lane information containing a number of drive lanes on a roadway and a width of each drive lane when (a) and (b) are satisfied, (a) when an own vehicle drives on an own vehicle lane specified by the drive lane specifying unit, and (b) when the lane boundary line candidate corresponds to lane boundary lines of the own vehicle lane. A likelihood calculation unit increases a likelihood of the lane boundary line candidate when a distance between a position of the lane boundary line candidate and an estimated position of the lane boundary line candidate obtained by the drive lane boundary line position estimation unit is within a predetermined range. 1. A lane boundary line recognition device comprising:an image acquiring unit capable of acquiring surrounding images of a roadway on which an own vehicle drives;a drive lane boundary line candidate extraction unit capable of extracting lane boundary line candidates from the images acquired by the image acquiring unit;a likelihood calculation unit capable of calculating a likelihood of each of the lane boundary line candidates;a drive lane boundary line recognition unit capable of recognizing, as a lane boundary line, the lane boundary line candidate having the likelihood of not less than a predetermined threshold value;a selection unit capable of selecting a predetermined number of the lane boundary line candidates having the likelihood of not less than the predetermined threshold value;a drive lane information acquiring unit capable of obtaining drive lane information containing a number of drive lanes on the roadway on which the own vehicle drives, and a width of each of the drive lanes;a drive lane specifying unit capable of correlating the image with the drive lane information, and ...

Publication date: 14-01-2016

IMAGE PROCESSING APPARATUS AND METHOD FOR DETECTING PARTIALLY VISIBLE OBJECT APPROACHING FROM SIDE USING EQUI-HEIGHT PERIPHERAL MOSAICKING IMAGE, AND DRIVING ASSISTANCE SYSTEM EMPLOYING THE SAME

Number: US20160012303A1
Author: Jung Soon Ki, Park Min Woo
Assignee:

Provided are an image processing apparatus and method, and a driving assistance system employing the same. The image processing apparatus includes a peripheral region extracting unit extracting, from an image, peripheral regions corresponding to a size of a target object determined in advance, a modifying unit modifying the peripheral regions to allow a viewpoint for the peripheral regions to be changed, a mosaicking image creating unit creating a mosaicking image by stitching the modified peripheral regions together, and an object detection unit detecting an object including a part of the target object from the mosaicking image. 1. An image processing device comprising:a peripheral region extracting unit extracting, from an image, peripheral regions corresponding to a size of a target object determined in advance;a modifying unit modifying the peripheral regions to allow a viewpoint for the peripheral regions to be changed;a mosaicking image creating unit creating a mosaicking image by stitching the modified peripheral regions together; andan object detection unit detecting an object including a part of the target object from the mosaicking image.2. The image processing device of claim 1 , wherein the peripheral region extracting unit comprises claim 1 ,a vanishing point detection unit detecting a vanishing point from the image;a region size calculating unit designating a horizontal line at every preset interval in a vertical direction from the vanishing point on the image and calculating a region height and a region width corresponding to a height and a width of the target object at each of the horizontal lines; anda region cutting unit cutting regions having a height of the region height from each of the horizontal lines and a width of the region width from sides of the image.3. The image processing device of claim 2 , wherein the region cutting unit cuts a left region having a height of the region height from each of the horizontal lines and a width of the ...
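The region height and width computed at each horizontal line follow from perspective: under a pinhole-camera, flat-ground assumption, the apparent size of an object of fixed real size grows linearly with the image distance below the vanishing point. A sketch of that scaling rule (the linear model and the reference-size calibration are assumptions of the sketch):

```python
def region_size_at(line_y, vanish_y, ref_y, ref_height, ref_width):
    """Scale a reference object size to another horizontal line:
    apparent size is proportional to distance below the vanishing point.
    ref_height/ref_width are the target-object size at row ref_y."""
    scale = (line_y - vanish_y) / (ref_y - vanish_y)
    return ref_height * scale, ref_width * scale
```

Cutting a crop of this height and width at each horizontal line then yields peripheral regions matched to the expected target-object size, as the extracting unit requires.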

Publication date: 14-01-2016

ROOM INFORMATION INFERRING APPARATUS, ROOM INFORMATION INFERRING METHOD, AND AIR CONDITIONING APPARATUS

Number: US20160012309A1
Assignee: Omron Corporation

A room information inferring apparatus that infers information regarding a room has an imaging unit that captures an image of a room that is to be subjected to inferring, a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room, a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times, and an inferring unit that infers information regarding the room based on the presence map. 1. A room information inferring apparatus that infers information regarding a room , comprising:an imaging unit that captures an image of a room that is to be subjected to inferring;a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room;a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times; andan inferring unit that infers information regarding the room based on the presence map.2. The room information inferring apparatus according to claim 1 , wherein the person detector detects a face claim 1 , a head claim 1 , or an upper body of the person in the image claim 1 , and acquires the position of the person in the room based on a position and a size of the face claim 1 , the head claim 1 , or the upper body in the image.3. The room information inferring apparatus according to claim 1 , wherein the inferring unit infers a shape of the room based on the presence map.4. The room information inferring apparatus according to claim 3 , wherein the inferring unit infers that a polygon circumscribed around the distribution of detection points in the presence map is the shape of the room.5. 
The room information inferring apparatus according to claim 4, wherein the inferring unit infers the shape ...
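A natural reading of the "polygon circumscribed around the distribution of detection points" is the convex hull of the accumulated presence-map points, the smallest convex polygon containing them all. A self-contained sketch using Andrew's monotone-chain algorithm (the choice of hull algorithm is an assumption, not stated in the patent):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D detection points.
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Feeding the hull the (x, y) positions of detected persons over many frames yields a polygon that the inferring unit could take as an estimate of the walkable room shape.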

Publication date: 14-01-2016

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR BREAST DENSITY CLASSIFICATION USING FISHER DISCRIMINATION

Number: US20160012316A1
Author: GHOUTI LAHOUARI
Assignee:

A method for content-based image retrieval for the classification of breast density from mammographic imagery is described. The breast density is characterized through the Fisher linear discriminants (FLD) extracted from the Principal Component Analysis (PCA). Unlike PCA, the FLD provides a very discriminative representation of the mammographic images in terms of the breast density. Various exemplary methods, systems and computer program products are also disclosed. 1. A system for classifying breast density from mammographic imagery using content-based image retrieval , the system comprising:circuitry configured tostore a mammogram image database;pre-process one or more digital mammogram images of a patient to remove noise and enhance contrast;segment the one or more digital mammogram images to produce one or more extracted regions of interest and save the one or more extracted regions of interest;group the one or more saved extracted regions of interest to produce a large mammogram image;decompose the large mammogram image by principal component analysis (PCA) in the mammogram image database; andclassify the large mammogram image according to breast density with Fisher Linear Discriminant (FLD) in the mammogram image database.2. The system of claim 1 , wherein the circuitry is configured to decompose a covariance matrix of the mammogram image database with Formula II:{'br': None, 'i': E', =[UDV, 'sub': db', 'db, 'sup': T', 'T, '[MammoMammo]; and'} {'br': None, 'i': 'U', 'sup': 'T', 'Proj=Ω;'}, 'project the large mammogram image into a feature space with Formula IIIwherein:{'sub': 'db', 'Mammorepresents the mammogram image database;'}U and V represent the left and right eigenvectors, respectively, associated with the eigenvalues stored in the diagonal matrix D;Ω and Proj represent the original and projected large mammogram image, respectively.3. 
The system of claim 1, wherein the circuitry is configured to store the mammogram image database such that the mammogram ...
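The FLD step projects the PCA features onto directions that maximize between-class scatter relative to within-class scatter, which is what makes the representation discriminative for breast density. A two-class sketch of the classic Fisher direction w ∝ Sw⁻¹(m₁ − m₀); the small regularisation term is an assumption added for numerical safety:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher linear discriminant: the unit direction that best
    separates the class means relative to the pooled within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-9 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting a new mammogram's PCA feature vector onto w and thresholding gives a simple density classifier; the patent's multi-class setting generalises this to several discriminant directions.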

Publication date: 11-01-2018

ADAPTIVE QUANTIZATION METHOD FOR IRIS IMAGE ENCODING

Number: US20180012071A1
Assignee:

A user recognition method that uses an iris is provided. The user recognition method includes generating a first mask for blocking a non-iris object area of an iris image, generating a converted iris image, in which the non-iris object area is blocked according to the first mask, generating a second mask for additionally blocking an inconsistent area, in which quantization results of the converted iris image are inconsistent, by adaptively transforming the first mask according to features of the converted iris image, obtaining an iris code by quantizing pixels included in the iris image, obtaining a converted iris code, in which portions corresponding to the non-iris object area and the inconsistent area are blocked, by applying the second mask to the iris code, and recognizing a user by matching a reference iris code, stored by the user in advance, to the converted iris code. 1. A user recognition method using an iris , the user recognition method comprising:generating a first mask for blocking a non-iris object area of an iris image;generating a converted iris image, in which the non-iris object area is blocked according to the first mask;generating a second mask for additionally blocking an inconsistent area, in which quantization results of the converted iris image are inconsistent, by adaptively transforming the first mask according to features of the converted iris image;obtaining an iris code by quantizing pixels included in the iris image;obtaining a converted iris code, in which portions corresponding to the non-iris object area and the inconsistent area are blocked, by applying the second mask to the iris code; andrecognizing a user by matching a reference iris code, stored by the user in advance, to the converted iris code.2. 
The user recognition method of claim 1 , further comprising:obtaining an eye image from a face image;segmenting an iris image expressed according to polar coordinates from the eye image; andnormalizing the iris image to be expressed ...
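The masking-and-matching pipeline above lends itself to a compact sketch: quantize the normalized iris strip into a binary code, then compare codes with a Hamming distance that ignores every bit blocked by the mask. The one-bit quantizer, array shapes, and function names below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def masked_hamming(code_a, code_b, mask):
    """Fractional Hamming distance over unmasked (mask == 1) bits only."""
    valid = mask.astype(bool)
    n = valid.sum()
    if n == 0:
        return 1.0  # nothing to compare: treat as maximally distant
    return float(np.count_nonzero(code_a[valid] != code_b[valid])) / n

def quantize(strip):
    """Toy 1-bit quantizer: sign of the pixel value around the mean
    (a stand-in for the patent's adaptive quantization)."""
    return (strip > strip.mean()).astype(np.uint8)

rng = np.random.default_rng(0)
strip = rng.random((8, 32))          # normalized iris strip (toy data)
code = quantize(strip)
mask = np.ones_like(code)
mask[:, :4] = 0                      # block a non-iris area (e.g. eyelid)

assert masked_hamming(code, code, mask) == 0.0   # same eye matches itself
flipped = code.copy()
flipped[:, :4] ^= 1                  # corrupt only the masked-out region
assert masked_hamming(code, flipped, mask) == 0.0  # masked bits are ignored
```

Because the inconsistent-area bits are excluded on both sides, noise confined to masked regions cannot raise the distance between a probe and the stored reference code.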

11-01-2018 publication date

Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code

Number: US20180012085A1
Assignee:

The present invention includes computer vision based driver assistance devices, systems, methods and associated computer executable code (hereinafter collectively referred to as: “ADAS”). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by different dimensions of fields of view. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess mental conditions/alertness of the driver and directions of driver gaze. These may be used to modify ADAS operation/thresholds. 1. A system for computer vision based driver assistance , said system comprising:an adjustable camera having an angle of view of 75 degrees or greater and adapted to be adjustably mounted to a vehicle, such that the orientation of said camera in relation to the vehicle is adjustable;one or more fixed cameras, having an angle of view of 70 degrees or less and adapted to be mounted to a vehicle, such that the orientation of said camera in relation to the vehicle is fixed;first processing circuitry communicatively coupled to said adjustable and fixed cameras and adapted to process images captured by said adjustable and fixed cameras to identify hazardous situations relating to the vehicle.2. The system according to claim 1 , wherein said one or more fixed cameras comprise at least two cameras having an angle of view of 70 degrees or less.3. The system according to claim 2 , wherein said at least two cameras capture stereo images of an area in front of the vehicle and said first processing circuitry is further adapted to derive depth information ...

14-01-2016 publication date

Systems and Methods for Ultrasound Imaging

Number: US20160012582A1
Assignee: Rivanna Medical LLC

Techniques for processing ultrasound data. The techniques include using at least one computer hardware processor to perform obtaining ultrasound data generated based, at least in part, on one or more ultrasound signals from an imaged region of a subject; calculating shadow intensity data corresponding to the ultrasound data; generating, based at least in part on the shadow intensity data and at least one bone separation parameter, an indication of bone presence in the imaged region, generating, based at least in part on the shadow intensity data and at least one tissue separation parameter different from the at least one bone separation parameter, an indication of tissue presence in the imaged region; and generating an ultrasound image of the subject at least in part by combining the indication of bone presence and the indication of tissue presence.

14-01-2016 publication date

VIEW CLASSIFICATION-BASED MODEL INITIALIZATION

Number: US20160012596A1
Assignee:

An image processing apparatus and related method. The apparatus (PP) comprises an input port (IN), a classifier (CLS) and an output port (OUT). The input port is capable of receiving an image of an object acquired at a field of view (FoV) by an imager (USP). The image records a pose of the object corresponding to the imager's field of view (FoV). The classifier (CLA) is configured to use a geometric model of the object to determine, from a collection of pre-defined candidate poses, the pose of the object as recorded in the image. The output port (OUT) is configured to output pose parameters descriptive of the determined pose. 1. An image processing apparatus , comprising:an input port (IN) for receiving an image (3DV) of an object (HT) acquired at a field of view (FoV) by an imager (USP), the image recording a pose of the object corresponding to the imager's field of view (FoV);a classifier configured to use a geometric model (MOD) of the object (HT) to determine, from a collection of pre-defined poses, the pose of the object as recorded in the image;an output port (OUT) configured to output a pose parameter descriptive of the determined pose,characterized in that the classifier uses a generalized Hough transform, GHT, to determine the object's pose, each of the pre-defined poses associated with a point in the Hough parameter space of the GHT.2. An image processing apparatus of claim 1 , further comprising:a segmenter configured to use the pose parameters as initialization information to segment the image for the object at the estimated pose.3. (canceled)4. An image processing apparatus of claim 1 , wherein the GHT is based on a plurality of separate Hough accumulators claim 1 , each dedicated to a different one of the pre-defined poses.5. 
An image processing apparatus of claim 1 , comprising an identifier (ID) configured to identify at least one landmark in the image (3DV) claim 1 , each pre-defined pose associated with a transformation claim 1 , the classifier ( ...
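The idea of one Hough accumulator per pre-defined pose can be illustrated with a toy voting scheme: each candidate pose keeps a counter that is incremented for every observed point explained by that pose's template, and the pose with the most votes wins. This is a deliberately simplified stand-in for a full generalized Hough transform; the templates, tolerance, and names are assumptions:

```python
import numpy as np

def classify_pose(points, templates, tol=0.1):
    """One accumulator per candidate pose: each is incremented by the
    number of observed points lying within tol of a template point.
    Returns (index of winning pose, list of vote counts)."""
    scores = []
    for tmpl in templates:
        d = np.linalg.norm(points[:, None, :] - tmpl[None, :, :], axis=2)
        scores.append(int((d.min(axis=1) < tol).sum()))
    return int(np.argmax(scores)), scores

base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy landmark model

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return base @ np.array([[c, -s], [s, c]]).T

templates = [rot(0.0), rot(np.pi / 2), rot(np.pi)]   # three candidate poses
observed = rot(np.pi / 2) + 0.01                     # noisy 90-degree pose
best, scores = classify_pose(observed, templates)
assert best == 1                                     # 90-degree pose wins
```

The winning index plays the role of the pose parameters handed to the segmenter as initialization.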

14-01-2016 publication date

GRAPH DISPLAY APPARATUS, ITS OPERATION METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN GRAPH DISPLAY PROGRAM

Number: US20160012620A1
Author: KANADA Shoji
Assignee:

When line graphs are displayed on coordinates having a horizontal axis as a time axis and a vertical axis as an axis representing examination values, a line graph is generated in such a manner that data points representing examination data are connected to each other by a line in a case where a time interval between examinations temporally next to each other is less than a maximum line-connection interval for an examination item, and in such a manner that data points representing examination data are not connected to each other in a case where the time interval between examinations temporally next to each other exceeds the maximum line-connection interval for the examination item. Plural line graphs overlapping with each other are displayed on the coordinates. 1. A graph display apparatus that displays , on coordinates having a horizontal axis and a vertical axis , one of which is a time axis and the other one is an axis of examination values , line graphs connecting data points representing examination data about a patient by lines in order of time of examination for a plurality of examination items , the apparatus comprising:a maximum line-connection interval determination unit that determines, for each of the plurality of examination items, a maximum line-connection interval that is a longest time interval between examinations temporally next to each other to connect data points representing examination data for the each of the plurality of examination items to each other;a line graph generation unit that generates a line graph for each of the plurality of examination items in such a manner that data points representing examination data are connected to each other in a case where a time interval between examinations temporally next to each other is less than or equal to the maximum line-connection interval for the each of the plurality of examination items, and in such a manner that data points representing examination data are not connected to each other in a 
...
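The connection rule above, joining consecutive data points only when the time gap between examinations is within the maximum line-connection interval, can be sketched as a segmentation of a time-ordered series into polyline pieces (names and the 30-day threshold are illustrative assumptions):

```python
from datetime import date

def connect_segments(points, max_gap_days):
    """Split a time-ordered series into polyline segments: consecutive
    points are joined only when their time gap is within max_gap_days."""
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if (cur[0] - prev[0]).days <= max_gap_days:
            current.append(cur)       # gap small enough: extend the line
        else:
            segments.append(current)  # gap too large: break the line here
            current = [cur]
    segments.append(current)
    return segments

exams = [(date(2024, 1, 1), 5.0), (date(2024, 1, 20), 5.4),
         (date(2024, 6, 1), 6.1), (date(2024, 6, 10), 6.0)]
segs = connect_segments(exams, max_gap_days=30)
assert len(segs) == 2                 # the Jan-to-Jun gap breaks the line
assert [len(s) for s in segs] == [2, 2]
```

Each returned segment is drawn as one connected polyline; single-point segments render as isolated markers.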

14-01-2016 publication date

CT SYSTEM FOR SECURITY CHECK AND METHOD THEREOF

Number: US20160012647A1
Assignee:

A CT system for security check and a method thereof are provided. The method includes: reading inspection data of an inspected object; inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, which is obtained from the inspection data; receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; and providing a feedback of the 3D inspection image including at least one 3D FTI in response to the selection. With the above solution, it is convenient for a user to rapidly mark a suspected object in the CT image, and provides a feedback of whether a FTI is included. 1. A method in a Computed Tomography (CT) system for security check , comprising steps of:reading inspection data of an inspected object;inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, wherein the 3D inspection image is obtained from the inspection data;receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; andproviding a feedback of the 3D inspection image including at least one 3D FTI in response to the selection.2. The method according to claim 1 , wherein the step of receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a 2D inspection image including a 2D FTI corresponding to the 3D FTI comprises:receiving coordinate positions of a part of the 3D inspection image or the 2D ...

11-01-2018 publication date

METHOD OF DETERMINING IMAGE QUALITY IN DIGITAL PATHOLOGY SYSTEM

Number: US20180012352A1
Author: PHAM Duy Hien
Assignee:

Disclosed is an image quality evaluation method for a digital pathology system according to the present invention. The image quality evaluation method includes receiving a digital slide image by an image quality evaluation unit; dividing the digital slide image into a plurality of blocks by the image quality evaluation unit; analyzing the plurality of blocks to extract a foreground; calculating a blur for the extracted foreground; calculating brightness distortion for the extracted foreground; calculating contrast distortion for the extracted foreground; and evaluating the overall quality of the digital slide image using the blur, the brightness distortion, and the contrast distortion by the image quality evaluation unit. 1. An image quality evaluation method for a digital pathology system , the image quality evaluation method comprising:receiving a digital slide image by an image quality evaluation unit;dividing the digital slide image into a plurality of blocks by the image quality evaluation unit;analyzing the plurality of blocks to extract a foreground;calculating a blur for the extracted foreground;calculating brightness distortion for the extracted foreground;calculating contrast distortion for the extracted foreground;evaluating the overall quality of the digital slide image using the blur, the brightness distortion, and the contrast distortion by the image quality evaluation unit.3. The image quality evaluation method of claim 2 , wherein the calculating of brightness distortion comprises: determining that a pixel of the foreground has a brightness of 0 when the pixel is absolute black and determining that the pixel has a brightness of 1 when the pixel is absolute white; and calculating the brightness distortion by averaging brightness values of all pixels of the foreground.4. 
The image quality evaluation method of claim 3 , wherein the calculating of contrast distortion comprises: converting the foreground into gray; calculating a cumulative histogram; and ...
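The brightness-distortion step described above, mapping absolute black to 0 and absolute white to 1 and averaging over the extracted foreground, reduces to a short computation (the uint8 input convention is an assumption for illustration):

```python
import numpy as np

def brightness_distortion(foreground):
    """Mean brightness of the foreground pixels, with 0 = absolute black
    and 1 = absolute white (pixels given as uint8 in 0..255)."""
    return float(foreground.astype(np.float64).mean() / 255.0)

dark = np.zeros((4, 4), dtype=np.uint8)         # absolute black
light = np.full((4, 4), 255, dtype=np.uint8)    # absolute white
mixed = np.array([[0, 255], [255, 0]], dtype=np.uint8)

assert brightness_distortion(dark) == 0.0
assert brightness_distortion(light) == 1.0
assert brightness_distortion(mixed) == 0.5
```

The blur and contrast measures would feed into the overall score alongside this value.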

10-01-2019 publication date

Feature density object classification, systems and methods

Number: US20190012562A1
Assignee: NANT HOLDINGS IP LLC

A system capable of determining which recognition algorithms should be applied to regions of interest within digital representations is presented. A preprocessing module utilizes one or more feature identification algorithms to determine regions of interest based on feature density. The preprocessing modules leverages the feature density signature for each region to determine which of a plurality of diverse recognition modules should operate on the region of interest. A specific embodiment that focuses on structured documents is also presented. Further, the disclosed approach can be enhanced by addition of an object classifier that classifies types of objects found in the regions of interest.

10-01-2019 publication date

Image segmentation for object modeling

Number: US20190012563A1
Assignee: Snap Inc

Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by accessing a set of images within a video stream, identifying an object of interest within one or more images of the set of images, and detecting a region of interest within the one or more images. The systems, devices, and method identify a first set of median pixels in a first portion of the object of interest and a second set of median pixels in a second portion of the object of interest. The systems, devices, and methods determine a polyline approximating the first and second sets of median pixels and generate a model for the polyline.

14-01-2021 publication date

METHOD AND SYSTEM FOR 3D CORNEA POSITION ESTIMATION

Number: US20210012105A1
Assignee: Tobii AB

There is provided a method, system, and non-transitory computer-readable storage medium for performing three-dimensional, 3D, position estimation for the cornea center of an eye of a user, using a remote eye tracking system, wherein the position estimation is reliable and robust also when the cornea center moves over time in relation to an imaging device associated with the eye tracking system. This is accomplished by generating, using, and optionally also updating, a cornea movement filter, CMF, in the cornea center position estimation. 1) A method for performing three-dimensional, 3D, position estimation for the cornea center of an eye of a user, using a remote eye tracking system, when the cornea center moves over time in relation to an imaging device associated with the eye tracking system, the method comprising: generating, using processing circuitry associated with the eye tracking system, a cornea movement filter, CMF, comprising an estimated initial 3D position and an estimated initial 3D velocity of the cornea center of the eye at a first time instance; predicting, using the processing circuitry: a first two-dimensional, 2D, glint position in an image captured at a second time instance by applying the cornea movement filter, CMF, wherein the predicted first glint position represents a position where a first glint is predicted to be generated by a first illuminator associated with the eye tracking system; and a second 2D glint position in an image captured at the second time instance by applying the cornea movement filter, CMF, wherein the predicted second glint position represents a position where a glint is predicted to be generated by a second illuminator associated with the eye tracking system; and identifying at least one first candidate glint in a first image captured by the imaging device at the second time instance, wherein the first image comprises at least part of the cornea of the eye and at least one glint generated by the first illuminator;
...
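At its core, a movement filter of this kind carries a position and a velocity forward in time and uses the prediction to gate candidate glints. A minimal constant-velocity sketch, with all numbers, units, and names invented for illustration (the patent's CMF may be considerably richer):

```python
import numpy as np

def predict_position(p0, v0, dt):
    """Constant-velocity prediction of the cornea center:
    p(t) = p0 + v0 * dt."""
    return p0 + v0 * dt

def pick_candidate(predicted_2d, candidates_2d):
    """Choose the candidate glint closest to the predicted 2D position."""
    d = np.linalg.norm(candidates_2d - predicted_2d, axis=1)
    return int(np.argmin(d))

p0 = np.array([10.0, 20.0, 600.0])   # assumed initial 3D position (mm)
v0 = np.array([1.0, 0.0, -2.0])      # assumed 3D velocity (mm per frame)
pred = predict_position(p0, v0, dt=1.0)
assert np.allclose(pred, [11.0, 20.0, 598.0])

# Gating among candidate glints in the image plane (projected positions
# are assumed values, not derived from a real camera model):
cands = np.array([[110.0, 205.0], [300.0, 40.0]])
assert pick_candidate(np.array([112.0, 204.0]), cands) == 0
```

In a full tracker the chosen glint would then feed back into updating the filter's position and velocity estimates.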

14-01-2021 publication date

IMAGE COLLATING DEVICE

Number: US20210012167A1
Author: Takahashi Toru
Assignee: NEC Corporation

An image collating device that collates a first image and a second image includes a frequency characteristic acquiring unit, a frequency characteristic synthesizing unit, and a determining unit. The frequency characteristic acquiring unit acquires a frequency characteristic of the first image and a frequency characteristic of the second image. The frequency characteristic synthesizing unit generates a synthesized frequency characteristic by synthesizing the frequency characteristic of the first image and the frequency characteristic of the second image. The determining unit calculates a score indicating a degree to which the synthesized frequency characteristic is a wave having a single period, and collates the first image and the second image based on the score. 1. An image collating device comprising:a memory containing program instructions; anda processor coupled to the memory, wherein the processor is configured to execute the program instructions to:perform acquisition of a frequency characteristic of a first image and a frequency characteristic of a second image;perform generation of a synthesized frequency characteristic by synthesizing the frequency characteristic of the first image and the frequency characteristic of the second image; andcalculate a score indicating a degree to which the synthesized frequency characteristic is a wave having a single period, and perform collation of the first image and the second image based on the score.2. The image collating device according to claim 1 , wherein in the generation of the synthesized frequency characteristic claim 1 , a normalized cross power spectrum of the frequency characteristic of the first image and the frequency characteristic of the second image is calculated as the synthesized frequency characteristic.3. The image collating device according to claim 1 , wherein in the collation claim 1 , a score indicating a degree to which the synthesized frequency characteristic is a complex sine wave having a ...
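The collation scheme in this entry is essentially phase correlation: for two images that differ only by a translation, the normalized cross power spectrum is a complex sine wave with a single period, so its inverse transform is a single sharp peak, and the peak's sharpness can serve as the score. A minimal NumPy sketch under that reading (the function name and score normalization are illustrative, not from the patent):

```python
import numpy as np

def collation_score(img_a, img_b):
    """Score how well img_b is a pure translation of img_a, via the
    peak sharpness of the phase-only correlation surface."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12      # normalized cross power spectrum
    corr = np.abs(np.fft.ifft2(cross))  # ideal single-period wave -> delta
    return corr.max() / corr.sum()      # ~1.0 for an ideal single peak

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
other = rng.random((64, 64))

assert collation_score(img, shifted) > 0.5   # same pattern, just shifted
assert collation_score(img, other) < 0.1     # unrelated images score low
```

Thresholding this score decides whether the two images collate, which matches the claim's "degree to which the synthesized frequency characteristic is a wave having a single period."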

09-01-2020 publication date

METHOD FOR 2D FEATURE TRACKING BY CASCADED MACHINE LEARNING AND VISUAL TRACKING

Number: US20200012882A1
Assignee: SONY CORPORATION

A method for 2D feature tracking by cascaded machine learning and visual tracking comprises: applying a machine learning technique (MLT) that accepts as a first MLT input first and second 2D images, the MLT operating on the images to provide initial estimates of a start point for a feature in the first image and a displacement of the feature in the second image relative to the first image; applying a visual tracking technique (VT) that accepts as a first VT input the initial estimates of the start point and the displacement, and that accepts as a second VT input the two 2D images, processing the first and second inputs to provide refined estimates of the start point and the displacement; and displaying the refined estimates in an output image. 1. A method for 2D feature tracking by cascaded machine learning and visual tracking , the method comprising:applying a machine learning technique (MLT) that accepts as a first MLT input first and second 2D images, the MLT operating on the images to provide initial estimates of a start point for a feature in the first 2D image and a displacement of the feature in the second 2D image relative to the first image;applying a visual tracking technique (VT) that accepts as a first VT input the initial estimates of the start point and the displacement, and that accepts as a second VT input the first and second 2D images, processing the first and second inputs to provide refined estimates of the start point and the displacement; anddisplaying the refined estimates in an output image.2. The method of further comprising claim 1 , before applying the MLT:extracting the first and second images as frames from a camera or video stream; andtemporarily storing the extracted first and second images in first and second image buffers.3. 
The method of claim 2 , further comprising claim 2 , before applying the MLT:applying a 2D feature extraction technique to the first 2D image to identify the feature; andproviding information on the identified ...
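The cascade of a coarse learned estimate followed by visual-tracking refinement can be illustrated with a small sum-of-squared-differences search around the initial displacement; the exhaustive window search stands in for the patent's visual tracker, and all names, patch sizes, and the synthetic images are assumptions:

```python
import numpy as np

def refine_displacement(img1, img2, start, init_disp, patch=5, search=3):
    """Refine an initial displacement estimate by exhaustive SSD search
    in a small window around it (stand-in for the visual tracker)."""
    y, x = start
    r = patch // 2
    tmpl = img1[y - r:y + r + 1, x - r:x + r + 1]
    best, best_err = init_disp, np.inf
    for dy in range(init_disp[0] - search, init_disp[0] + search + 1):
        for dx in range(init_disp[1] - search, init_disp[1] + search + 1):
            cand = img2[y + dy - r:y + dy + r + 1,
                        x + dx - r:x + dx + r + 1]
            err = ((cand - tmpl) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(2)
img1 = rng.random((40, 40))
img2 = np.roll(img1, (4, -2), axis=(0, 1))   # true displacement (4, -2)
coarse = (3, -1)                             # imperfect "ML" estimate
assert refine_displacement(img1, img2, (20, 20), coarse) == (4, -2)
```

The coarse estimate keeps the search window small, which is the point of the cascade: the learned stage supplies a good start point, the tracker supplies sub-window accuracy.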

09-01-2020 publication date

PACKAGE ANALYSIS DEVICES AND SYSTEMS

Number: US20200013011A1
Assignee:

Disclosed herein are systems and methods for analyzing one or more package. In an embodiment, disclosed is a method comprising determining, by a sensor component of a package analysis device comprising a processor that executes computer executable components stored in a memory and the memory that stores computer executable components, a presence of a package and a set of dimensions of the package and extracting, by an image capturing component of the package analysis device, a set of image data from a package label based on a determined presence of the package. 1. A computer-implemented method comprising:determining, by a camera portion of a package analysis device comprising a processor, a distance between the package analysis device and a top surface of the package, wherein the distance is represented by distance data;capturing, by the camera portion of the package analysis device, a set of first images of the top surface of the package representing a set of boundaries of the top surface of the package and a second image of a label of the package;determining, by the package analysis device, dimensions of the top surface of the package based on an analysis of the set of first images;receiving, by the package analysis device, label data based on an optical character analysis of the second image by a label recognition device;generating, by the package analysis device, identifier data corresponding to the package; andtransmitting, by the package analysis device, the identifier data, the dimensions of the top surface of the package, the distance data and the label data to a support server device.2. The method of claim 1 , wherein the camera portion employs a depth sensor to determine the distance based on a detected change in depth between the package analysis device and a surface configured to support the package.3. 
The method of claim 1 , wherein the camera portion employs an industrial camera capture mechanism configured to generate a set of image data based on a ...

09-01-2020 publication date

APPARATUS AND METHOD FOR ANALYZING CEPHALOMETRIC IMAGE

Number: US20200013162A1
Assignee:

Disclosed herein are an apparatus and method for analyzing a cephalometric image. The apparatus for analyzing a cephalometric image includes a control unit configured to extract a landmark point on a cephalometric image and to generate an analysis image, and memory configured to store the generated analysis image. 1. An apparatus for analyzing a cephalometric image , the apparatus comprising:a control unit configured to extract a landmark point on a cephalometric image, and to generate an analysis image; andmemory configured to store the generated analysis image.2. The apparatus of claim 1 , wherein the control unit is further configured to acquire a learning image in order to perform learning based on the learning image claim 1 , to extract the landmark point claim 1 , and to then generate the analysis image.3. The apparatus of claim 1 , wherein the control unit is further configured to identify a landmark point by analyzing the cephalometric image claim 1 , and to extract the landmark point by verifying the identified landmark point.4. The apparatus of claim 1 , wherein the control unit is further configured to identify a first point by performing geometric computation for the cephalometric image claim 1 , and to extract the landmark point by verifying the first point based on a second point determined through machine learning for the cephalometric image.5. The apparatus of claim 1 , wherein the control unit is further configured to identify a first point by performing machine learning for the cephalometric image claim 1 , and to extract the landmark point by verifying the first point based on a second point determined through geometric computation for the cephalometric image.6. The apparatus of claim 1 , wherein the control unit is further configured to set an area of interest on the cephalometric image claim 1 , and to identify a first point by performing machine learning for the cephalometric image within the area of interest.7. The apparatus of claim 6 , ...

15-01-2015 publication date

IMAGE GENERATION DEVICE, CAMERA DEVICE, IMAGE DISPLAY DEVICE, AND IMAGE GENERATION METHOD

Number: US20150015738A1
Author: Kuwada Junya
Assignee: Panasonic Corporation

A camera device is provided with: an imaging unit for generating an area image obtained by shooting an area from above; and a display image generation unit for generating a display image of a target moving in the area using a clip image which is clipped from the area image. In this case, a rotation angle of a current frame is calculated on the basis of the rotation angle of the previous frame and a reference angle of the current frame. As a result, a rapid change in an orientation of the target displayed in the display image can be suppressed. 1. An image generation device for generating a clip image from a wide-angle image , comprising:a reference angle determination unit for obtaining a reference angle of a clip area in the wide-angle image;a rotation angle storage unit for storing a rotation angle of a previous clip image;a rotation angle calculation unit for obtaining the rotation angle with respect to the clip area on the basis of a previous rotation angle and the reference angle; andan image clip unit for generating the clip image with respect to the clip area on the basis of the rotation angle obtained by the rotation angle calculation unit, whereinthe rotation angle calculation unit executes control so that a change amount of the rotation angle of the clip area does not exceed a predetermined angle.2. The image generation device according to claim 1 , whereinthe change amount of the rotation angle of the clip area is a difference between the previous rotation angle and the reference angle.3. The image generation device according to claim 1 , whereinthe rotation angle calculation unit makes the predetermined angle the change amount of the rotation angle of the clip area when the change amount of the rotation angle of the clip area exceeds the predetermined angle.4. The image generation device according to claim 1 , further comprising:a reference position determination unit for determining a reference position of a clip area in a wide-angle image.5. 
The image ...
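The control rule "the change amount of the rotation angle of the clip area does not exceed a predetermined angle" reduces to clamping the per-frame delta between the previous rotation angle and the current reference angle, as in this sketch (function name and degree units are assumptions):

```python
def next_rotation(prev_angle, reference_angle, max_step):
    """Clamp the per-frame change of the clip rotation so the target's
    on-screen orientation never jumps by more than max_step degrees."""
    delta = reference_angle - prev_angle
    if abs(delta) > max_step:
        delta = max_step if delta > 0 else -max_step
    return prev_angle + delta

assert next_rotation(10.0, 12.0, max_step=5.0) == 12.0  # small change: follow
assert next_rotation(10.0, 40.0, max_step=5.0) == 15.0  # large change: clamp
assert next_rotation(10.0, -40.0, max_step=5.0) == 5.0  # clamp both ways
```

Repeating this per frame makes the clip rotation converge smoothly toward the reference angle instead of snapping, which is exactly the "rapid change suppression" the abstract describes.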

14-01-2016 publication date

AUTO-FOCUSING SYSTEM AND METHOD

Number: US20160014326A1
Assignee: HANWHA TECHWIN CO., LTD.

An auto-focusing method includes dividing an image into a plurality of blocks, performing discrete cosine transform (DCT) on image data of the plurality of blocks to output a plurality of DCT blocks each comprising DCT coefficients, generating a DCT mask for selecting DCT coefficients corresponding to a selected frequency range in a DCT block among the plurality of DCT blocks, and calculating a focus value of the image by applying the generated DCT mask to the DCT block. 1. An auto-focusing method comprising: dividing an image into a plurality of blocks; performing discrete cosine transform (DCT) on image data of the plurality of blocks to output a plurality of DCT blocks each comprising DCT coefficients; generating a DCT mask for selecting DCT coefficients corresponding to a selected frequency range in a DCT block among the plurality of DCT blocks; and calculating a focus value of the image by applying the DCT mask to the DCT block. 2. The auto-focusing method of claim 1, wherein the DCT block comprises a plurality of alternating current (AC) components divided into at least one low frequency component, at least one medium frequency component, and at least one high frequency component, and wherein the generating the DCT mask comprises generating at least one of: a first DCT mask for selecting DCT coefficients corresponding to the low frequency component and a first medium frequency component adjacent to the low frequency component; and a second DCT mask for selecting DCT coefficients corresponding to the high frequency component and a second medium frequency component adjacent to the high frequency component. 3.
The auto-focusing method of claim 2 , wherein the calculating the focus value comprises:calculating a first focus value of the image by applying the first DCT mask to the DCT block;calculating a second focus value of the image by applying the second DCT mask to the DCT block; andselecting one of the first focus value and the second ...
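The per-block DCT masking can be sketched as follows. The hand-rolled orthonormal DCT, the 8x8 block size, and the u + v >= 8 high-frequency mask are illustrative choices, not the patent's; the point is that a sharper image leaves more energy in the masked (high-frequency) coefficients:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (no SciPy dependency)."""
    N = block.shape[0]
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def focus_value(image, mask, block=8):
    """Sum of masked |DCT coefficients| over all block x block tiles."""
    h, w = image.shape
    total = 0.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dct2(image[y:y + block, x:x + block])
            total += np.abs(coeffs[mask]).sum()
    return total

# High-frequency mask: keep coefficients with u + v >= 8 in an 8x8 block.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
hf_mask = (u + v) >= 8

sharp = np.indices((32, 32)).sum(axis=0) % 2 * 1.0   # checkerboard
blurred = np.full((32, 32), 0.5)                     # featureless
assert focus_value(sharp, hf_mask) > focus_value(blurred, hf_mask)
```

An auto-focus loop would move the lens to maximize this focus value, optionally switching between the low- and high-frequency masks as the claims describe.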

14-01-2016 publication date

IMAGE PROCESSING DEVICE, ENDOSCOPE APPARATUS, INFORMATION STORAGE DEVICE, AND IMAGE PROCESSING METHOD

Number: US20160014328A1
Author: Rokutanda Etsuko
Assignee: OLYMPUS CORPORATION

An image processing device includes an image acquisition section that acquires a captured image that includes an image of the object, a distance information acquisition section that acquires distance information based on the distance from an imaging section to the object when the imaging section captured the captured image, an in-focus determination section that determines whether or not the object is in focus within a pixel or an area within the captured image based on the distance information, a classification section that performs a classification process that classifies the structure of the object, and controls the target of the classification process corresponding to the results of the determination as to whether or not the object is in focus within the pixel or the area, and an enhancement processing section that performs an enhancement process on the captured image based on the results of the classification process. 1. An image processing device comprising:an image acquisition section that acquires a captured image that includes an image of an object;a distance information acquisition section that acquires distance information based on a distance from an imaging section to the object when the imaging section captured the captured image;an in-focus determination section that determines whether or not the object is in focus within a pixel or an area within the captured image based on the distance information;a classification section that performs a classification process that classifies a structure of the object, and controls a target of the classification process corresponding to results of the determination as to whether or not the object is in focus within the pixel or the area; andan enhancement processing section that performs an enhancement process on the captured image based on results of the classification process.2. 
The image processing device as defined in claim 1 ,the classification section outputting a classification result that corresponds to an ...

14-01-2016 publication date

INTEGRATED PRESENTATION OF SECONDARY CONTENT

Number: US20160014473A1
Assignee:

Apparatuses, methods and storage medium associated with content distribution and consumption are disclosed herein. In embodiments, an apparatus may include a decoder and a presentation engine. The decoder may be configured to receive and decode a primary content. The presentation engine may be configured to process and present decoded primary content. Processing of the decoded primary content may include identification of a feature in a frame of the primary content, and integration of a secondary content with the feature. Presentation of the decoded primary content may include presentation of the decoded primary content with the secondary content integrated with the feature of the frame. Other embodiments may be described and/or claimed. 1. An apparatus for consuming content , comprising:a decoder configured to receive and decode a primary content; anda presentation engine coupled to the decoder, and configured to process and present decoded primary content, wherein process of the decoded primary content includes identification of a feature in a frame of the primary content, and integration of a secondary content with the feature, and wherein presentation of the decoded primary content includes presentation of the decoded primary content with the secondary content integrated with the feature of the frame.2. The apparatus of claim 1 , wherein the primary content is streamed to the apparatus claim 1 , and the decoder is configured to receive and decode streamed primary content.3. The apparatus of claim 1 , wherein the decoder or the presentation engine is further configured to receive the secondary content or identification or description of the feature claim 1 , wherein the secondary content or the identification or description of the feature is provided to the apparatus separate from the primary content.4. The apparatus of claim 1 , wherein the presentation engine comprises a camera tracker module configured to retrieve a position or a pose of the camera for the ...

Publication date: 21-01-2016

METHOD AND SYSTEM FOR AUTOMATICALLY DETERMINING VALUES OF THE INTRINSIC PARAMETERS AND EXTRINSIC PARAMETERS OF A CAMERA PLACED AT THE EDGE OF A ROADWAY

Number: US20160018212A1
Assignee:

A method for determining values of the intrinsic and extrinsic parameters of a camera placed at the edge of a roadway, wherein the method includes: a step of detecting a vehicle passing in front of the camera; a step of determining, from at least one 2D image taken by the camera of the detected vehicle and at least one predetermined 3D vehicle model, the intrinsic and extrinsic parameters of the camera with respect to the reference frame of the predetermined 3D vehicle model or models, so that a projection of said or one of said predetermined 3D vehicle models corresponds to said or one of the 2D images actually taken by said camera. Also disclosed is a method for determining at least one physical quantity related to the positioning of said camera with respect to said roadway, as well as systems designed to implement these methods. Finally, computer programs for implementing said methods are provided.
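Fitting the projection of a 3D vehicle model to a 2D image presupposes a pinhole projection model; the sketch below shows that projection and the reprojection error a calibration routine would minimize over the intrinsic and extrinsic parameters. The function names and the simplified distortion-free intrinsic matrix are assumptions, not the patented procedure.

```python
import numpy as np

def project_points(K, R, t, points_3d):
    """Project 3D model points into the image with intrinsics K and
    extrinsics (R, t): x ~ K (R X + t), followed by perspective division."""
    cam = points_3d @ R.T + t          # (N, 3) points in camera coordinates
    pix = cam @ K.T                    # apply the intrinsic matrix
    return pix[:, :2] / pix[:, 2:3]    # divide by depth

def reprojection_error(K, R, t, points_3d, observed_2d):
    """Mean Euclidean distance between projected model points and the
    observed 2D points -- the quantity a calibration would minimize."""
    proj = project_points(K, R, t, points_3d)
    return float(np.mean(np.linalg.norm(proj - observed_2d, axis=1)))
```

With identity extrinsics, a point on the optical axis lands exactly on the principal point, which gives a quick sanity check of the conventions.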

Publication date: 19-01-2017

Method and Apparatus for Fingerprint Recognition and Authentication

Number: US20170017825A1
Assignee:

According to one embodiment of the present invention, provided is a method for fingerprint recognition and authentication, comprising: a step for sequentially acquiring a plurality of slice images including a fingerprint pattern via a fingerprint sensor; a step for dividing the slice images into block images; a step for converting spatial domain information of the block images into frequency domain information, and eliminating information of a high-frequency component by using a low-pass filter; and a step for forming a fingerprint pattern template by matching the block images, from which the information of the high-frequency component has been eliminated. 1. A method for fingerprint recognition and authentication, comprising: sequentially acquiring a plurality of slice images including a fingerprint pattern via a fingerprint sensor; dividing the slice images into block images; converting spatial domain information of the block images into frequency domain information, and eliminating information of a high-frequency component by using a low-pass filter; and forming a fingerprint pattern template by matching the block images, from which the information of the high-frequency component has been eliminated. 2. The method according to claim 1, further comprising: matching a ridge flow direction at the fingerprint pattern template with a ridge flow direction at a pre-registered basic fingerprint pattern template; and comparing the fingerprint pattern template with the basic fingerprint pattern template to perform authentication. 3. The method of claim 2, wherein matching the ridge flow directions further comprises: rotating the fingerprint pattern template by the unmatched angle when the ridge flow directions do not match at corresponding points between the templates. 4. The method of claim 1, further comprising: performing a hashing for the block images from which the information of the high-frequency component has been eliminated. 5. An apparatus for fingerprint ...
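The frequency-domain step of the method, transforming a block image, suppressing high-frequency components with a low-pass filter, and transforming back, can be sketched with NumPy's FFT. The circular cut-off and the name `lowpass_block` are illustrative assumptions; the patent does not specify the filter shape.

```python
import numpy as np

def lowpass_block(block, keep_radius):
    """Convert a block image to the frequency domain, zero out the
    high-frequency components outside `keep_radius`, and convert back."""
    f = np.fft.fftshift(np.fft.fft2(block))   # DC moved to the center
    h, w = block.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    f[dist > keep_radius] = 0                 # eliminate high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Matching the filtered block images (the template-forming step) would then operate on these smoothed blocks, which is exactly why the filter is applied first: it removes sensor noise that would otherwise dominate the match score.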

Publication date: 19-01-2017

SYSTEM AND METHOD FOR GENERATING AND EMPLOYING SHORT LENGTH IRIS CODES

Number: US20170017843A1
Assignee:

A system and method for generating compact iris representations based on a database of iris images includes providing full-length iris codes for iris images in a database, where the full-length iris code includes a plurality of portions corresponding to circumferential rings in an associated iris image. Genuine and imposter score distributions are computed for the full-length iris codes, and code portions are identified that have a contribution that provides separation between imposter and genuine distributions relative to a threshold. A correlation between remaining code portions is measured. A subset of code portions having low correlations within the subset is generated to produce a compact iris representation. 1. A computer readable storage medium comprising a computer readable program for generating compact iris representations based on a database of iris images , wherein the computer readable program when executed on a computer causes the computer to perform the steps of:computing genuine and imposter score distributions for full-length iris codes for iris images in a database, where the full-length iris codes include a plurality of portions corresponding to circumferential rings in an associated iris image;identifying and retaining code portions that have a contribution that provides separation between imposter and genuine distributions relative to a threshold;measuring a correlation between remaining code portions; andgenerating a subset of the remaining code portions having low correlations within the subset to produce a compact iris representation.2. The computer readable storage medium as recited in claim 1 , further comprising determining parameters of a full length iris code to compact iris code transform by generating an all-aligned-pairs set.3. 
The computer readable storage medium as recited in claim 2 , wherein identifying code portions includes computing a Hamming distance for all rows in an aligned code where a minimum Hamming distance is used as a ...
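Two ingredients of this abstract can be sketched directly: the fractional Hamming distance that underlies the genuine and imposter score distributions, and a greedy selection of code portions (rows corresponding to circumferential rings) whose mutual correlation stays low. The greedy strategy and the correlation threshold are assumptions, not the patented selection procedure.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    return float(np.mean(code_a != code_b))

def select_low_correlation_rows(code_rows, max_abs_corr=0.5):
    """Greedily keep code portions (rows) whose absolute correlation with
    every already-kept row stays low -- the idea behind compacting a
    full-length iris code into a short-length one."""
    kept = [0]
    for i in range(1, len(code_rows)):
        corrs = [abs(np.corrcoef(code_rows[i], code_rows[j])[0, 1])
                 for j in kept]
        if max(corrs) <= max_abs_corr:
            kept.append(i)
    return kept
```

Dropping highly correlated rows shrinks the code while barely moving the genuine/imposter separation, since correlated rings contribute little independent information.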

Publication date: 21-01-2016

AUTOMATED OBSCURITY FOR PERVASIVE IMAGING

Number: US20160019415A1
Assignee:

Methods for obfuscating an image of a subject in a captured media are disclosed. For example, a method receives a communication from an endpoint device of a subject indicating that the image of the subject is to be obfuscated in a captured media. The communication may include a feature set associated with the subject, where the feature set contains facial features of the subject and motion information associated with the subject. The method then detects the image of the subject in the captured media. For example, the image of the subject is detected by matching the facial features of the subject to the image of the subject in the captured media and matching the motion information associated with the subject to a trajectory of the image of the subject in the captured media. The method then obfuscates the image of the subject in the captured media. 1. A method for obfuscating an image of a subject in a captured media , comprising:receiving, by a processor, a communication from an endpoint device of the subject indicating that the image of the subject is to be obfuscated in the captured media, wherein the communication includes a feature set associated with the subject, wherein the feature set comprises facial features of the subject and motion information associated with the subject; matching the facial features of the subject to the image of the subject in the captured media; and', 'matching the motion information associated with the subject to a trajectory of the image of the subject in the captured media; and, 'detecting, by the processor, the image of the subject in the captured media, wherein the image of the subject is detected byobfuscating, by the processor, the image of the subject in the captured media when the image of the subject is detected in the captured media.2. The method of claim 1 , wherein the receiving comprises receiving the feature set as a set of quantized vectors.3. 
The method of claim 2 , wherein the receiving comprises receiving the motion ...
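Once the subject's image has been detected and localized, the obfuscation itself can be as simple as pixelating the matched bounding box. A minimal sketch, assuming box coordinates `(r0, r1, c0, c1)` in row/column order and a tile-averaging scheme (the patent leaves the obfuscation method open):

```python
import numpy as np

def obfuscate_region(image, box, block=4):
    """Obfuscate a matched subject by pixelating its bounding box:
    each block x block tile is replaced by its mean value."""
    r0, r1, c0, c1 = box
    out = image.copy()
    for r in range(r0, r1, block):
        for c in range(c0, c1, block):
            tile = out[r:min(r + block, r1), c:min(c + block, c1)]
            tile[...] = int(tile.mean())   # flatten the tile to its mean
    return out
```

In the described system this would run per frame, with the box updated from the matched trajectory so the obfuscation follows the subject's motion.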

Publication date: 21-01-2016

CONTENT PLAYBACK SYSTEM, SERVER, MOBILE TERMINAL, CONTENT PLAYBACK METHOD, AND RECORDING MEDIUM

Number: US20160019425A1
Author: YAMAJI Kei
Assignee:

Selected image data or specific information thereon is stored in association with moving image data as a management marker of a selected image. The selected image data is selected from among still image data extracted from the moving image data. When an output image of the selected image is captured, image analysis is performed on the captured image data to acquire a management marker of the captured image. From among the management markers of selected images stored in the storage, the management marker of the selected image corresponding to the management marker of the captured image is specified. Digest moving image data is generated by picking out the part of the moving image data associated with the specified management marker. Control is performed so that the digest moving image is played back and displayed on the display section.
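Generating digest moving image data by picking out the part of the moving image associated with the specified marker can be sketched as a windowed slice around the frame the marker points to. The window size and the marker-to-frame index structure are assumptions for illustration only:

```python
def build_digest(frames, marker_index, marker, window=2):
    """Pick out the part of the moving image around the frame that the
    matched management marker points to (a symmetric window of frames)."""
    center = marker_index[marker]
    start = max(0, center - window)
    return frames[start:center + window + 1]
```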

Publication date: 21-01-2016

A METHOD AND X-RAY SYSTEM FOR COMPUTER AIDED DETECTION OF STRUCTURES IN X-RAY IMAGES

Number: US20160019432A1
Assignee:

The present invention relates to X-ray imaging technology as well as image post-processing. Particularly, the present invention relates to a method for computer aided detection of structures in X-ray images as well as an X-ray system. A computer aided detection algorithm visibly determines tissue structures in X-ray image information and subsequently matches the shape of a determined tissue structure with a library of known tissue structures for characterizing the type of determined tissue structure. The determination of a tissue structure, and thus the characterization of its type, may be enhanced by also employing spectral information, in particular energy information of the acquired X-ray image. Accordingly, a method for computer aided detection of structures in X-ray images is provided, comprising the steps of obtaining spectral X-ray image information of an object, wherein the spectral X-ray image information constitutes at least one X-ray image, and detecting a tissue structure of interest in the X-ray image by employing a computer aided detection algorithm, wherein detecting a tissue structure of interest in the X-ray image comprises the computer aided detection algorithm being adapted to evaluate the X-ray image for tissue structure shape and compare the tissue structure shape with a plurality of pre-determined tissue structure shapes, and wherein the computer aided detection algorithm is adapted to evaluate spectral information of the X-ray image for detecting the tissue structure of interest.

Publication date: 21-01-2016

Image processing system, client, image processing method, and recording medium

Number: US20160019433A1
Author: Masaki Saito
Assignee: Fujifilm Corp

An image processing system shares image processing on an image through sharing between a server and a client. The image processing system calculates a degree of interest of the user in the image based on operation information indicating information regarding an operation performed by a user, and information regarding the image, determines whether the degree of interest is equal to or greater than a first threshold value, and performs control so that the image processing is performed in the client on the image in which the degree of interest is determined to be equal to or greater than the first threshold value, and the image processing is performed in the server on the image in which the degree of interest is determined to be smaller than the first threshold value.
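The first-threshold rule described here, processing high-interest images on the client and the rest on the server, reduces to a simple routing function. A sketch under the assumption that the degrees of interest have already been computed from the user's operation information:

```python
def route_processing(images, interest_scores, threshold):
    """Assign each image to the client (degree of interest >= threshold)
    or the server (degree of interest < threshold)."""
    client, server = [], []
    for img, score in zip(images, interest_scores):
        (client if score >= threshold else server).append(img)
    return client, server
```

The point of the split is latency: images the user is actively interested in are processed locally for responsiveness, while the rest are offloaded.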

Publication date: 21-01-2016

IMAGE PROCESSING APPARATUS, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN IMAGE PROCESSING PROGRAM, AND OPERATION METHOD OF IMAGE PROCESSING APPARATUS

Number: US20160019435A1
Author: KITAMURA Yoshiro
Assignee: FUJIFILM Corporation

When binary labeling is performed, an outline specification unit specifies a first outline present toward a target region and a second outline present toward a non-target region, both of which have shapes similar to an outline of the target region. A voxel selection unit selects N voxels constituting all of the first outline and the second outline. The energy setting unit sets the N-order energy for the case in which all of the voxels of the first outline belong to the target region and all of the voxels of the second outline belong to the non-target region to be smaller than the N-order energy for the case in which that condition is not satisfied. Labeling is then performed by minimizing the energy.

Publication date: 21-01-2016

STEREO MATCHING APPARATUS AND METHOD THROUGH LEARNING OF UNARY CONFIDENCE AND PAIRWISE CONFIDENCE

Number: US20160019437A1
Author: CHANG Hyun Sung, Choi Ouk
Assignee:

A stereo matching apparatus and method through learning a unary confidence and a pairwise confidence are provided. The stereo matching method may include learning a pairwise confidence representing a relationship between a current pixel and a neighboring pixel, determining a cost function of stereo matching based on the pairwise confidence, and performing stereo matching between a left image and a right image at a minimum cost using the cost function.
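At the core of any such cost function is a per-pixel matching cost minimized over disparities. The sketch below does plain absolute-difference matching on one scanline pair; in the described method, learned unary and pairwise confidences would reweight this cost rather than use it raw, so treat the function as a baseline, not the patented matcher.

```python
import numpy as np

def match_scanline(left, right, max_disp):
    """Minimal stereo matcher for one scanline pair: for each left pixel,
    pick the disparity whose absolute-difference cost is lowest."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        costs = []
        for d in range(max_disp + 1):
            if x - d < 0:
                costs.append(np.inf)           # disparity leaves the image
            else:
                costs.append(abs(float(left[x]) - float(right[x - d])))
        disp[x] = int(np.argmin(costs))
    return disp
```

A pairwise confidence term would penalize disparity jumps between neighboring pixels, smoothing the raw per-pixel minima computed here.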

Publication date: 21-01-2016

Transaction Authorization Employing Drag-And-Drop of a Security-Token-Encoded Image

Number: US20160019538A1
Author: Arif Adeel
Assignee: KOOBECAFE, LLC

In one embodiment, a computer-implemented electronic commerce transaction method is provided. The computer receives original image data from a user device, associates a security token with the user, embeds the security token into the original image data to generate modified image data, and provides the modified image data to the user device. To authorize a financial transaction that uses personal data of the user, the computer subsequently receives the modified image data from the user device, extracts the security token from the modified image data, and validates the user and/or the user device.
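One illustrative way to embed a security token into image data is least-significant-bit coding. The abstract does not specify the embedding scheme, so this is only a stand-in that shows the embed/extract round trip the authorization flow relies on:

```python
import numpy as np

def embed_token(pixels, token_bits):
    """Embed token bits into the least-significant bits of the first
    len(token_bits) pixels, producing the 'modified image data'."""
    out = pixels.copy()
    for i, bit in enumerate(token_bits):
        out.flat[i] = (out.flat[i] & 0xFE) | bit   # overwrite the LSB
    return out

def extract_token(pixels, n_bits):
    """Recover the embedded security token from the modified image data."""
    return [int(pixels.flat[i] & 1) for i in range(n_bits)]
```

Because only the lowest bit of each pixel changes, the modified image is visually indistinguishable from the original, yet the server can validate the token on the way back.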

Publication date: 19-01-2017

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Number: US20170018060A1
Author: Hamano Hideyuki
Assignee:

In order to effectively accomplish blur restoration in a short time, an image processing apparatus includes a blur restoration unit configured to perform blur restoration processing on image data according to an object distance and a blur restoration distance correction unit configured to correct a blur restoration distance that represents an object distance at which the blur restoration processing is performed by the blur restoration unit. The blur restoration distance correction unit is configured to set an interval of the blur restoration distance according to a difference between a reference object distance and another object distance. 1. An image processing apparatus having a generation unit which generates a restored image in which blur restoration has been performed with respect to an object of an indicated object distance from an image captured by an image capturing unit , by performing image processing to image data , the image processing apparatus comprising:one or more processors; a control unit configured to control the generation unit and the object distance at which the blur restoration is performed in the restored image generated by the generation unit;', 'an indication unit configured to indicate the control unit to generate a newly restored image in which blur restoration has been performed and the object distance at which the blur restoration is performed has been corrected; and', 'an acquiring unit configured to acquire information about an object distance of an object;, 'a memory storing instructions which, when the instructions are executed by the one or more processors, cause the image processing apparatus to function aswherein, in a case where a difference between an object distance at which blur restoration has been previously performed in the restored image generated by the generation unit and the object distance of the object acquired by the acquiring unit is smaller than a predetermined value, the control unit controls the object distance 
...

Publication date: 19-01-2017

GUIDED INSPECTION OF AN INSTALLED COMPONENT USING A HANDHELD INSPECTION DEVICE

Number: US20170018067A1
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC

A method for inspecting an installed component includes receiving an identity and selected location of the component as an input signal via a handheld inspection device having a controller, digital camera, and display screen, and collecting a dynamic pixel image of the selected location in real-time using the digital camera. The method includes displaying the image in real time via the display screen, projecting virtual guidance lines onto the image corresponding to edges of the installed component, and identifying the component via the controller when the image is aligned with the projected acquisition lines. A predetermined area of the installed component is identified after identifying the installed component, a predetermined feature dimension is measured within the identified predetermined area, and an output signal is generated with a status indicative of whether the measured feature dimension falls within a calibrated range. 1. A method for inspecting an installed component comprising:receiving an identity and selected location of the installed component as an input signal via a handheld inspection device having a controller, a digital camera, and a display screen;collecting a dynamic pixel image of the selected location in real-time using the digital camera;displaying the collected dynamic pixel image in real time via the display screen;projecting a set of virtual acquisition guidance lines onto the displayed dynamic pixel image via the controller, wherein the projected acquisition guidance lines correspond to edges of the installed component within the selected location;identifying the installed component in the selected location via the controller when the displayed dynamic pixel image is aligned with the projected acquisition lines;identifying a predetermined area of the installed component via the controller after identifying the installed component;measuring, via the controller using image processing instructions, a predetermined feature dimension of 
the ...

Publication date: 19-01-2017

Digital Rock Physics-Based Trend Determination and Usage for Upscaling

Number: US20170018073A1
Assignee: INGRAIN, INC.

An example method includes acquiring two-dimensional (2D) or three-dimensional (3D) digital images of a rock sample. The method also includes selecting a subsample within the digital images. The method also includes deriving a trend or petrophysical property for the subsample. The method also includes applying the trend or petrophysical property to a larger-scale portion of the digital images. 1. A method that comprises:acquiring two-dimensional (2D) or three-dimensional (3D) digital images of a rock sample;selecting a subsample within the digital images;deriving a trend or petrophysical property for the subsample; andapplying the trend or petrophysical property to a larger-scale portion of the digital images.2. The method of claim 1 , wherein selecting the subsample comprises identifying a fully-resolved entity within the digital images and selecting the fully-resolved entity as the subsample.3. The method of claim 2 , further comprising performing a statistical analysis to identify the fully-resolved entity.4. The method of claim 2 , further comprising performing image-processing to identify the fully-resolved entity.5. The method of claim 1 , wherein selecting the subsample comprises identifying an unresolved entity within the digital images claim 1 , obtaining a higher-resolution image of the unresolved entity claim 1 , identifying a fully-resolved entity within the higher-resolution image claim 1 , and selecting the fully-resolved entity as the subsample.6. The method of claim 1 , further comprising relating a property value of the larger-scale sample to the trend or petrophysical property.7. The method of claim 1 , further comprising deriving a trend or petrophysical property for each of a plurality of subsamples claim 1 , and applying an aggregation of the trends or petrophysical properties to a larger-scale portion of the digital images.8. 
The method of claim 1 , wherein deriving a trend or petrophysical property comprises deriving a multi-modal distribution ...

Publication date: 19-01-2017

Method and apparatus for planning Computer-Aided Diagnosis

Number: US20170018076A1
Assignee: Delineo Diagnostics, Inc.

The invention provides a method and apparatus for classifying a region of interest in imaging data, the method comprising: 1. Method for classifying a region of interest in imaging data , the method comprising:calculating a feature vector for at least one region of interest in the imaging data, said feature vector including features of a first modality;projecting the feature vector for the at least one region of interest in the imaging data using a decision function to generate a classification, wherein the decision function is based on classified feature vectors including features of a first modality and features of at least a second modality;estimating the confidence of the classification if the feature vector is enhanced with features of the second modality.2. The method according to claim 1 , further comprising determining an outcome value based on outcome values associated with the classified feature vectors.3. The method according to claim 1 , further comprising marginalizing the decision function for features of the second modality.4. The method according to claim 3 , further comprising marginalizing the decision function for features of a third modality.5. The method according to claim 4 , further comprising selecting one of the second modality and the third modality.6. The method according to claim 4 , wherein the decision function has been trained using the respective set of classified feature vectors to project a feature vector to a classification.7. The method according to claim 6 , wherein the decision function is based on one of a support vector machine (SVM) claim 6 , a decision tree claim 6 , or a boosted stump.8. The method according to claim 7 , wherein the imaging data represents a human lung or a human breast.9. The method according to claim 8 , wherein the imaging data is a computer tomography (CT) image.10. 
The method according to claim 9 , wherein the first claim 9 , second claim 9 , or third modality is characterized by a type of contrasting ...

Publication date: 19-01-2017

DATA VISUALIZATION SYSTEM AND METHOD

Number: US20170018102A1
Author: Cardno Andrew John
Assignee:

A data visualization system comprising: a data retrieval module arranged to retrieve data from a data storage module in communication with the data visualization system, wherein the retrieved data includes data sets for representation in a tree map; a tree map generation module arranged to generate a tree map based on the retrieved data, wherein the tree map generation module is further arranged to: i) sort the retrieved data sets according to the size of the data sets; ii) define an area for generating multiple rectangles, each rectangle representing one of the data sets, wherein the area is defined to allow the data sets to be spatially arranged within the area; iii) accumulate data points for data within the data sets to generate a rectangle that has dimensions that fall within pre-defined parameters; iv) generate a rectangle for each data set; and v) orientate the rectangle such that its orientation is only changed if the rectangle does not fit in the available area. 1. A data visualization system including:a data retrieval module arranged to retrieve data from a data storage module in communication with the data visualization system, wherein the retrieved data includes data sets for representation in a tree map;a tree map generation module arranged to generate a tree map based on the retrieved data, wherein the tree map generation module is further arranged to:i) sort the retrieved data sets according to the size of the data sets;ii) define an area for generating multiple rectangles, each rectangle representing one of the data sets, wherein the area is defined to allow the data sets to be spatially arranged within the area;iii) generate a rectangle for each data set; andiv) orientate the rectangle while maintaining the area of the rectangle such that its orientation is only changed if the rectangle does not fit in the available area.2. 
The system of claim 1 , wherein the tree map generation module is further arranged to determine a total number of data points ...
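Steps i) and ii) of the tree map generation, sorting the data sets by size and carving the defined area into rectangles proportional to them, can be sketched with a basic slice layout. This is the simplest treemap variant, assumed here for illustration; the claimed module additionally re-orientates rectangles that do not fit in the available area.

```python
def slice_treemap(sizes, width, height):
    """Slice layout: sort data sets by size (largest first) and split the
    area into vertical strips whose areas are proportional to the sizes.
    Each rectangle is returned as (x, y, w, h)."""
    total = float(sum(sizes))
    rects, x = [], 0.0
    for s in sorted(sizes, reverse=True):
        w = width * s / total
        rects.append((x, 0.0, w, float(height)))
        x += w
    return rects
```

Sorting first means the largest data sets get the leftmost, widest strips, which is what makes the relative sizes readable at a glance.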

Publication date: 21-01-2016

Image processing system with artifact suppression mechanism and method of operation thereof

Number: US20160019677A1
Assignee: Sony Corp

An image processing system, and a method of operation thereof, includes: a local patch ternarization module for receiving an input image, for calculating a mean value of a local patch of pixels in the input image, and for calculating ternary values for the pixels based on the mean value; and an artifact removal module, coupled to the local patch ternarization module, for removing a residue artifact based on the ternary values and for generating an output image with the residue artifact removed for sending to an image signal processing hardware.
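The local patch ternarization step, computing the mean of a patch and mapping each pixel to one of three values relative to it, might look like the following; the margin around the mean is an assumed parameter, since the abstract does not state how "close to the mean" is decided.

```python
import numpy as np

def ternarize_patch(patch, margin=2):
    """Ternarize a local patch: pixels within `margin` of the patch mean
    map to 0, brighter pixels to +1, darker pixels to -1."""
    mean = patch.mean()
    tern = np.zeros(patch.shape, dtype=int)
    tern[patch > mean + margin] = 1
    tern[patch < mean - margin] = -1
    return tern
```

Residue artifacts then show up as isolated nonzero ternary values that disagree with their neighborhood, which is what the artifact removal module can key on.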
