Total found: 948. Displayed: 100.
01-01-2015 publication date

Method of operating a radiographic inspection system with a modular conveyor chain

Number: US20150003583A1
Assignee: Mettler Toledo Safeline Ltd

A method of operating a radiographic inspection system is designed for a radiographic inspection system in which a conveyor chain with identical modular chain segments transports the articles being inspected. The method encompasses a calibration mode and an inspection mode of the radiographic inspection system. In the calibration mode, calibration data characterizing the radiographic inspection system with the empty conveyor chain are generated and stored as a template image. In the inspection mode, raw images ( 50 ) of the articles ( 3 ) under inspection with the background ( 41 ) of the conveyor chain are acquired and arithmetically merged with the template image. The method results in a clear output image ( 51 ) of the articles under inspection being obtained without the interfering background of the conveyor chain.
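A minimal Python/numpy sketch of the calibrate-then-merge idea described above: a template image of the empty conveyor chain is recorded in calibration mode and later divided out of each raw inspection image so the periodic chain background cancels. The averaging, the ratio-based merge and all names below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def calibrate(empty_chain_frames):
    """Calibration mode: average frames of the empty conveyor chain
    into a template image of the static background."""
    return np.mean(np.stack(empty_chain_frames), axis=0)

def inspect(raw_image, template, eps=1e-6):
    """Inspection mode: merge the raw image with the template so the
    chain background cancels out (flat-field style ratio)."""
    return raw_image / (template + eps)   # chain pixels -> ~1.0, article stands out

# toy example: a periodic "chain" pattern plus one attenuating article
rng = np.random.default_rng(0)
template = 0.8 + 0.2 * np.sin(np.linspace(0, 20, 64))[None, :] * np.ones((64, 64))
article = np.ones((64, 64)); article[20:40, 20:40] = 0.4
raw = template * article + 0.01 * rng.standard_normal((64, 64))
out = inspect(raw, calibrate([template]))
print(round(float(out[32, 32]), 2), round(float(out[5, 5]), 2))   # ~0.4 (article), ~1.0 (background)
```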

01-01-2015 publication date

Background detection as an optimization for gesture recognition

Number: US20150003727A1
Assignee:

Methods and systems are provided allowing for background identification and gesture recognition in video images. A computer-implemented image processing method includes: receiving, using at least one processing circuit, a plurality of image frames of a video; constructing, using at feast one processing circuit, a plurality of statistical models of the plurality of image frames at a plurality of pixel granularity levels; constructing, using at least one processing circuit, a plurality of probabilistic models of an input image frame at a plurality of channel granularity levels based on the plurality of statistical models; merging at least some of the plurality of probabilistic models based on a weighted average to form a single probability image; determining background pixels, based on a probability threshold value, from the single probability image; and determining whether the plurality of image frames, when examined in a particular sequence, conveys a gesture by the object. 1. (canceled)2. A computer-implemented method , comprising:receiving multiple images; generating, for each of multiple pixel granularity levels, multiple statistical models,', 'generating, for each of a plurality of channels and based at least in part on the multiple statistical models, a compact probability model;', 'aggregating the compact probability models to generate a single probability model,', 'classifying each pixel as a foreground pixel or a background pixel based on a respective probability indicated for the pixel in the single probability model, and', 'generating a foreground component that includes the pixels that are classified as foreground pixels; and, 'for each imageperforming a gesture recognition process using the respective foreground components of each of the multiple images.3. The computer-implemented method of claim 2 , further comprising generating claim 2 , for each of the statistical models and for each of the plurality of channels claim 2 , a per-channel probabilistic ...
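A rough numpy sketch of the pipeline this abstract outlines, reduced to a single channel for brevity: per-pixel statistics of recent frames are built at two pixel-granularity levels, converted to background probabilities, merged by a weighted average into a single probability image, and thresholded into a background mask. The Gaussian likelihood, the block sizes, the weights and the threshold are assumptions.

```python
import numpy as np

def pool(img, k):
    """Average-pool an H x W image over k x k blocks (k = 1 is a no-op)."""
    if k == 1:
        return img
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def background_mask(history, current, levels=(1, 4), weights=(0.5, 0.5), thresh=0.3):
    """history: list of H x W frames, current: H x W frame (H, W divisible by
    every block size). Returns True where a pixel is classified as background."""
    stack = np.stack(history).astype(float)
    merged = np.zeros(current.shape)
    for k, wgt in zip(levels, weights):
        mean, std = pool(stack.mean(axis=0), k), pool(stack.std(axis=0), k) + 1e-6
        prob = np.exp(-0.5 * ((pool(current.astype(float), k) - mean) / std) ** 2)
        merged += wgt * np.kron(prob, np.ones((k, k)))   # back to full resolution
    return merged > thresh                               # single probability image -> mask

# usage: static scene, then a frame with a bright square (a moving hand, say)
rng = np.random.default_rng(1)
hist = [100 + rng.normal(0, 2, (64, 64)) for _ in range(10)]
cur = 100 + rng.normal(0, 2, (64, 64)); cur[10:20, 10:20] += 60
mask = background_mask(hist, cur)
print(mask[10:20, 10:20].any(), round(float(mask.mean()), 2))  # square is foreground; most pixels are background
```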

01-01-2015 publication date

ARTICLE ESTIMATING SYSTEM, ARTICLE ESTIMATING METHOD, AND ARTICLE ESTIMATING PROGRAM

Number: US20150003729A1
Author: HAYASHI Yasuyuki
Assignee: RAKUTEN, INC.

A server includes an extraction unit , an analysis unit , a first estimating unit , an information acquisition unit and a second estimating unit . The extraction unit extracts an image area for each article. The analysis unit analyzes the image area to acquire analysis information. The first estimating unit narrows down candidates estimated to correspond to the article in the image area based on the analysis information. When the candidates were able to be narrowed down, the information acquisition unit acquires additional information additional information of a reference article. The second estimating unit attempts a narrowing process based on the additional information of the reference article in addition to the analysis information, for the image area including a spine, which is an image area in which candidates were unable to be narrowed down. 19-. (canceled)10. An article estimating system configured to be able to acquire identification information for identifying an article and additional information for the article from a storage which stores the identification information and the additional information in association with each other for each of a plurality of articles each having a cover and a spine , the article estimating system comprising:at least one memory operable to store program code;at least one processor operable to read said program code and operate as instructed by said program code, said program code including:image acquisition code which acquires an image including the plurality of articles;extraction code which extracts, for each article, an image area showing the article from the acquired image;analysis code which analyzes the plurality of extracted image areas to acquire analysis information;first estimating code which attempts a process of narrowing down candidates of identification information estimated to correspond to the article in the image area among identification information of the plurality of articles stored in the storage to a ...

07-01-2016 publication date

MICROSCOPY SLIDE SCANNER WITH VARIABLE MAGNIFICATION

Number: US20160004062A1
Author: Dixon Arthur Edward
Assignee:

An instrument and a method of scanning a large microscope specimen move the specimen relative to a detector array during scanning by a scanner. Magnification of the instrument is adjustable using a zoom tube lens over a continuous range of magnification to enable scans of the specimen to be taken over a range of resolutions without varying the infinity corrected objective. Scans of the specimen can be taken over a range of resolutions with the same infinity corrected objective. 1. An instrument for scanning a large microscope specimen, the instrument comprising a detector array that is part of an optical train to focus light from the specimen onto the detector array, the specimen being movable relative to the detector array, the optical train having an infinity corrected objective, the specimen being mounted on a support and moving relative to the detector array during scanning by a scanner, the instrument having a magnification that is adjustable using a zoom tube lens over a continuous range of magnification to enable scans of the specimen to be taken over a range of resolutions with the same infinity corrected objective. 2.-25. (canceled) 26. A method for scanning a large microscope specimen using an instrument having a detector array that is part of an optical train to focus light from the specimen onto the detector array, the optical train having an infinity corrected objective, the method comprising moving the specimen relative to the detector array during scanning by a scanner, adjusting the magnification over a range using a zoom tube lens to enable scans of the specimen to be taken over a range of resolutions with the same infinity corrected ...

07-01-2016 publication date

Comparing Users Handwriting for Detecting and Remediating Unauthorized Shared Access

Number: US20160004422A1
Assignee: Clareity Security LLC

A method of using handwriting input on a touch screen device to verify the identity of a user. The user writes a profile word in an input space provided on the touch screen. Features of the handwriting are captured and sent to a server, which stores the data in a data record associated with the authorized user. When a user subsequently writes a challenge word, the handwriting features of the challenge word are compared to the authorized user's handwriting data record and given a rating of similarity. If the rating is within a prescribed range, the user's identity is verified as being the authorized user and permitted to access a given asset. If not, the user's identity is not verified and that user may be denied access to the asset or other action taken. This biometric feature of authentication may be used alone or in a multi-factor authentication environment.

07-01-2016 publication date

SYSTEM AND METHOD FOR ROBUST MOTION DETECTION

Number: US20160004912A1
Author: Varghese Gijesh
Assignee:

Method and system for detecting objects of interest in a camera monitored area are disclosed. Statistical analysis of block feature data, particularly Sobel edge and spatial high frequency responses is used to model the background of the scene and to segregate foreground objects from the background. This technique provides a robust motion detection scheme prone to catching genuine motions and immune against false alarms. 1. A method for detecting motion in a sequence of video frames captured from a scene , each frame comprising a plurality of pixels grouped in a plurality of image blocks , said method comprising:(a) receiving pixel data and block feature data for each of the plurality of blocks of a current frame and a previous frame, the block feature data being at least one of Sobel edge and spatial high frequency response values for each pixel averaged over the block;(b) classifying the blocks as one of background, strong foreground, and weak foreground based on temporal profile of the block feature data;(c) producing an initial list of rectangles that enclose a plurality of connected foreground block, wherein each rectangle is assigned with a strength score and a frame by frame tracking count;(d) identifying the rectangles as one of: (i) new, (ii) persistent and (iii) recurring based on their strength score and tracking count;(e) validating the new rectangles by comparing their constituent block data with that of corresponding collocated blocks from the previous frame; and(f) producing a final list of rectangles comprising validated new, recurring and persistent rectangles.2. The method of claim 1 , wherein distribution of the at least one feature data claim 1 , monitored for a period of time claim 1 , is represented in a histogram.3. The method of claim 2 , wherein a block is classified as background if the normalized distance of the block feature value from the mean of the histogram is smaller than a first threshold claim 2 , or as foreground if larger.4. The ...
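The block-classification step can be sketched roughly as follows: per-block Sobel edge energy is tracked over time, and a block whose current value sits far from the mean of its own temporal profile (here measured in standard deviations, as a stand-in for the histogram-based normalized distance in the claims) is labeled weak or strong foreground. Block size and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

BLOCK = 8

def block_edge_feature(frame):
    """Mean Sobel edge magnitude per BLOCK x BLOCK block."""
    f = frame.astype(float)
    mag = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    h, w = (s // BLOCK * BLOCK for s in mag.shape)
    return mag[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).mean(axis=(1, 3))

def classify_blocks(history_feats, current_feat, t_weak=2.0, t_strong=4.0):
    """history_feats: T x Hb x Wb past block features.
    Returns 0 = background, 1 = weak foreground, 2 = strong foreground."""
    mean = history_feats.mean(axis=0)
    std = history_feats.std(axis=0) + 1e-6
    dist = np.abs(current_feat - mean) / std     # distance from the temporal profile
    labels = np.zeros_like(dist, dtype=int)
    labels[dist >= t_weak] = 1
    labels[dist >= t_strong] = 2
    return labels

# usage: ten quiet frames, then a frame with a strongly textured patch
rng = np.random.default_rng(2)
frames = [rng.normal(100, 1, (64, 64)) for _ in range(10)]
hist = np.stack([block_edge_feature(f) for f in frames])
new = frames[-1].copy(); new[8:24, 8:24] = rng.normal(100, 40, (16, 16))
print(classify_blocks(hist, block_edge_feature(new)).max())   # 2: the patch is strong foreground
```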

05-01-2017 publication date

METHOD AND APPARATUS FOR SEGMENTING OBJECT IN IMAGE

Number: US20170004628A1
Assignee:

A method and apparatus are provided for segmenting an object in a first image. The method includes obtaining the first image including the object; receiving a first input signal including first information about a first position in the first image; selecting at least one pixel included in the first image, based on the first information about the first position; generating a second image by dividing the first image into several areas, using the selected at least one pixel; and segmenting the object in the first image by using the first image and the second image. 1. A method of segmenting an object in a first image , the method comprising:obtaining the first image including the object;receiving a first input signal including first information about a first position in the first image;selecting at least one pixel included in the first image, based on the first information about the first position;generating a second image by dividing the first image into several areas, using the selected at least one pixel; andsegmenting the object in the first image by using the first image and the second image.2. The method of claim 1 , wherein segmenting the object comprises segmenting the object in the first image based on color information of pixels included in the first image and information of several areas included in the second image.3. The method of claim 2 , further comprising updating information about the segmented object based on color information and information about the several areas that are updated using information about the segmented object claim 2 ,wherein updating is performed a predetermined number of times.4. The method of claim 2 , wherein segmenting the object in the first image based on the color information comprises:generating a foreground model and a background model based on the color information and the information about the several areas;constructing a graph of an energy function of the pixels included in the first image by combining a data term ...

07-01-2016 publication date

SYSTEM AND METHOD FOR ROBUST MOTION DETECTION

Number: US20160004929A1
Author: Varghese Gijesh
Assignee: GEO SEMICONDUCTOR INC.

Method and system for detecting objects of interest in a camera monitored area are disclosed. Statistical analysis of block feature data, particularly Sobel edge and spatial high frequency responses is used to model the background of the scene and to segregate foreground objects from the background. This technique provides a robust motion detection scheme prone to catching genuine motions and immune against false alarms. 1. A method for detecting salient objects in a sequence of video frames captured from a scene , each frame comprising a plurality of pixels grouped in a plurality of image blocks , said method comprising:(a) receiving pixel data and block feature data for each of the plurality of image blocks, the block feature data being at least one of Sobel edge and spatial high frequency response values for each pixel averaged over the block;(b) classifying the blocks as background or foreground in a current frame based on temporal profile of the block feature data;(c) identifying objects as a plurality of blobs, wherein each blob comprises a plurality of connected foreground blocks in the current frame;(d) grouping the objects as one of: (i) new, (ii) persistent and (iii) recurring based on the number of foreground blocks and tracking count of the blobs;(e) discarding the recurring objects.2. The method of claim 1 , wherein distribution of the at least one feature data monitored for a period of time claim 1 , is represented in a histogram.3. The method of claim 2 , wherein a block is classified as background if the normalized distance of the block feature value from the mean of the histogram is smaller than a threshold or as foreground if larger.4. The method of claim 1 , wherein a background map is generated for the current frame using the background blocks.5. The method of claim 4 , where background of the scene is dynamically modeled using the background map over a plurality of frames.6. The method of claim 5 , wherein the background model is used for inter- ...

07-01-2016 publication date

VOLUME DATA ANALYSIS SYSTEM AND METHOD THEREFOR

Number: US20160005167A1
Assignee: Hitachi, Ltd.

A controller has a function that: poygonizes and converts three-dimensional volume data, which is generated by a modality, into polygon data; divides this polygon data into a plurality of clusters; calculates an L2 norm vector of spherical harmonics as a feature vector with respect to each of the clusters based on the polygon data constituting each cluster; identifies whether each cluster is a target or not, based on each calculated feature vector and learning data; and displays an image of a cluster identified as the target at least on a screen. 1. A volume data analysis system comprising:a modality for receiving a measured signal acquired by scanning a measuring object and generating three-dimensional volume data;an input device for inputting information in response to manipulation;a controller for processing the three-dimensional volume data generated by the modality and the information input from the input device; anda display device for displaying a processing result of the controller;wherein the controller includes:a cluster generator for polygonizing and converting the three-dimensional volume data generated by the modality into polygon data and dividing the converted polygon data into a plurality of clusters;a feature vector calculation unit for calculating an L2 norm vector of spherical harmonics as a feature vector with respect to each of the clusters based on the polygon data constituting the each cluster;an identification unit for identifying whether each cluster is a target or not, based on each feature vector calculated by the feature vector calculation unit and learning data acquired by machine learning by using training data; andan image generator for generating an image of a cluster identified as the target by the identification unit among each of the clusters generated by the cluster generator, from the polygon data constituting the relevant each cluster and having the display device display at least the generated image.2. The volume data analysis ...

07-01-2016 publication date

IMAGE PROCESSING OF IMAGES THAT INCLUDE MARKER IMAGES

Number: US20160005178A1
Assignee: Varian Medical Systems, Inc.

A method, includes: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image using a processor; and providing a signal for stopping a procedure if the presence of the object is identified. An image processing apparatus, includes: a processor configured for: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image; and providing a signal for stopping a procedure if the presence of the object is identified. A computer product having a non-transitory medium storing instructions, an execution of which causes an image processing method to be performed, the method includes: obtaining an image, the image having marker images and a background image; identifying presence of an object in the background image; and providing a signal for stopping a procedure if the presence of the object is identified. 1. An image processing method , comprising:obtaining an image, the image having marker images and a background image;identifying presence of an object in the background image using a processor; andproviding a signal for stopping a procedure if the presence of the object is identified.2. The method of claim 1 , wherein the act of identifying the presence of the object in the background comprises:dividing the image into a plurality of image portions arranged in a matrix; anddetermines a mean or median value of pixel values in each of the image portions.3. The method of claim 2 , wherein the act of identifying the presence of the object in the background further comprises determining a histogram using the determined mean or median values.4. The method of claim 3 , wherein the act of identifying the presence of the object further comprises determining if any of the mean or median values exceeds a peak value of the histogram by more than a specified threshold.5. The method of claim 2 , further comprising setting a size for one or more ...
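A small numpy sketch of the detection logic spelled out in the claims excerpt: the image is divided into a matrix of portions, a median is taken per portion, a histogram of those medians is built, and the procedure is flagged for stopping if any portion's median exceeds the histogram peak by more than a threshold. Grid size, bin count and threshold are assumptions.

```python
import numpy as np

def object_in_background(image, grid=(8, 8), threshold=30.0):
    """Return True if any image portion's median deviates from the
    dominant (histogram peak) intensity by more than `threshold`."""
    gh, gw = grid
    h, w = image.shape
    portions = image[:h // gh * gh, :w // gw * gw].reshape(gh, h // gh, gw, w // gw)
    medians = np.median(portions, axis=(1, 3))            # one value per portion
    counts, edges = np.histogram(medians, bins=16)
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return bool(np.any(medians > peak + threshold))

# usage: uniform background, then an unexpected dense object spanning four portions
img = np.full((128, 128), 100.0)
img[64:96, 64:96] = 160.0
print(object_in_background(img))   # True -> provide the signal for stopping the procedure
```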

07-01-2016 publication date

Method, system and software module for foreground extraction

Number: US20160005182A1
Author: ASHANI Zvika
Assignee:

A method is provided, suitable for use in extraction of foreground objects from image stream. The method comprises: providing input image data of a region of interest, providing background model of said region of interest, and utilizing said background model for processing each image of the input image data. The processing comprises: determining a background gradient map for pixels in said background model and an image gradient map for pixels in the image; defining a predetermined number of one or more segments in said image and corresponding one or more segments in the background model; determining, for each image segment, an edge density factor is a first relation between the image and background gradient maps for said segment; and calculating foreground detection threshold based on said certain relation, thereby enabling use of said foreground detection threshold for classifying each pixel in the segment as being a foreground or background pixel. 1. A method for use in extraction of foreground objects in an image stream , the method comprising:providing input image data of a region of interest;providing a background model of said region of interest;utilizing said background model and processing each image in said input image data, said processing comprising:determining a background gradient map for pixels in said background model and an image gradient map for pixels in the image;defining a predetermined number of one or more segments in said image and corresponding one or more segments in the background model;for each image segment, determining an edge density factor being a first relation between the image and background gradient maps for said segment, and calculating a foreground detection threshold based on said certain relation, thereby enabling use of said foreground detection threshold for classifying each pixel in the segment as being a foreground or background pixel.2. The method of claim 1 , wherein determining said edge density factor comprises ...
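Roughly, the per-segment edge density factor compares the gradient content of the live image with that of the background model and adapts the foreground-detection threshold accordingly. In the sketch below, busier segments get a higher threshold; the direction and form of that scaling, and the simple absolute-difference classifier, are assumptions rather than the patented rule.

```python
import numpy as np

def foreground_mask(image, background, grid=(4, 4), base_thresh=25.0):
    """Classify pixels as foreground where |image - background| exceeds a
    per-segment threshold scaled by that segment's edge density factor."""
    img, bg = image.astype(float), background.astype(float)
    img_grad = np.hypot(*np.gradient(img))     # image gradient map
    bg_grad = np.hypot(*np.gradient(bg))       # background gradient map
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(grid[0]):
        for j in range(grid[1]):
            r = slice(i * h // grid[0], (i + 1) * h // grid[0])
            c = slice(j * w // grid[1], (j + 1) * w // grid[1])
            # edge density factor: relation between image and background gradient maps
            edf = (img_grad[r, c].mean() + 1e-6) / (bg_grad[r, c].mean() + 1e-6)
            mask[r, c] = np.abs(img[r, c] - bg[r, c]) > base_thresh * edf
    return mask
```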

07-01-2016 publication date

INFORMATION PROCESSING APPARATUS

Number: US20160005202A1
Assignee:

In the case where a table region is erroneously recognized, the edition of a character is facilitated. An information processing apparatus selects one selectable region from an image. According to the change of the position of the region selected by a selecting unit, a region that is included in the region before the position change and that is not included in the region after the position change is set to a new selectable region. 1. An information processing apparatus comprising:a selecting unit configured to select one selectable region from an image;a changing unit configured to change position of the region selected by the selecting unit; anda setting unit configured to set, according to the change of the position by the changing unit, a region that is included in the region before the change of the position and that is not included in the region after the change of the position to a new selectable region.2. The information processing apparatus according to claim 1 ,wherein a character string is included in the region before the change of the position, andthe information processing apparatus further comprises an allocating unit configured to divide the character string according to the change of the position by the changing unit and to allocate the divided character strings to the region after the change of the position and the new selectable region.3. An information processing apparatus comprising:a selecting unit configured to select two regions;a generating unit configured to couple the two regions selected by the selecting unit to generate a new region including the two regions;a coupling unit configured to couple character strings included in the two regions to generate a new character string; andan allocating unit configured to allocate the new character string generated by the coupling unit to the new region generated by the generating unit.4. The information processing apparatus according to claim 2 ,wherein the allocating unit allocates the character ...

07-01-2016 publication date

DYNAMIC 3D LUNG MAP VIEW FOR TOOL NAVIGATION INSIDE THE LUNG

Number: US20160005220A1
Assignee:

A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model. 1. A method for implementing a dynamic three-dimensional (3D) lung map view for navigating a prove inside a patient's lungs , the method comprising:loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images;inserting the probe into a patient's airways, the probe including a location sensor in operative communication with the navigation system;registering a sensed location of the probe with the planned pathway;selecting a target in the navigation plan;presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe;navigating the probe through the airways of the patient's lungs toward the target;iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe; andupdating the presented view by removing at least a part of an object forming part of the 3D model.2. The method according to claim 1 , wherein iteratively adjusting the presented view of the 3D model includes zooming in when the probe approaches the ...

05-01-2017 publication date

Controller for a Working Vehicle

Number: US20170006261A1
Assignee:

A controller configured to receive image data representative of the surroundings of a working vehicle. The image data comprises a plurality of portion data. The controller determines one or more features associated with each of the plurality of portion data. For each of the plurality of portion data, the controller applies a label-attribution-algorithm to attribute one of a plurality of predefined labels to the portion data in question based on: (i) features determined for the portion data in question; and (ii) features determined for proximate portion data, which is portion data that is proximate to the portion data in question. The labels are representative of objects. The controller provides a representation of the attributed labels and their position in the received image data. 1. A controller configured to:receive image data representative of the surroundings of a working vehicle, the image data comprising a plurality of portion data;determine one or more features associated with each of the plurality of portion data; (i) features determined for the each of the plurality of portion data; and', '(ii) features determined for proximate portion data, which is portion data that is proximate to the each of the plurality of portion data; and, 'for each of the plurality of portion data, apply a label-attribution-algorithm to attribute one of a plurality of predefined labels to the each of the plurality of portion data based onprovide a representation and position of the labels attributed to the plurality of portion data,wherein the labels are representative of objects.2. The controller of claim 1 , wherein the working vehicle is an agricultural vehicle claim 1 , a construction vehicle claim 1 , or an off-road vehicle.3. The controller of claim 1 , wherein the received image data comprises video data claim 1 , and wherein the controller is further configured to provide as an output claim 1 , in real-time claim 1 , a representation of the plurality of predefined labels ...

14-01-2016 publication date

Method and system for reducing localized artifacts in imaging data

Number: US20160007948A1

A method and system for reducing localized artifacts in imaging data, such as motion artifacts and bone streak artifacts, are provided. The method includes segmenting the imaging data to identify one or more suspect regions in the imaging data near which localized artifacts are expected to occur, defining an artifact-containing region of interest in the imaging data around each suspect region, and applying a local bias field within the artifact-containing regions to correct for the localized artifacts.

08-01-2015 publication date

REAL TIME PROCESSING OF VIDEO FRAMES

Number: US20150010209A1
Assignee:

A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame is analyzed. Each frame includes a two-dimensional array of pixels. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. A determination is made that a persistence requirement, a non-persistence duration requirement, and a persistence duration requirement have been satisfied. 1. A method for real time processing of a sequence of video frames , said method comprising:analyzing, by a processor of a computer system, a current frame in the sequence and at least one frame in the sequence occurring prior to the current frame, each frame comprising a two-dimensional array of pixels and a frame-dependent color intensity at each pixel, said array of pixels in each frame being a totality of pixels in each frame in the sequence of video frames received in synchronization with a recording of the video frames in real time, said analyzing comprising performing a background subtraction on the at least one frame, said performing the background subtraction determining a background image and also determining a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame, said static region mask identifying each pixel in the static region upon the static region mask being superimposed on the current frame, said background image comprising the array of pixels and a background model of the at least one frame ...

08-01-2015 publication date

REAL TIME PROCESSING OF VIDEO FRAMES

Number: US20150010211A1
Assignee:

A method and system for real time processing of a sequence of video frames. A current frame in the sequence and at least one frame in the sequence occurring prior to the current frame is analyzed. The sequence of video frames is received in synchronization with a recording of the video frames in real time. The analyzing includes performing a background subtraction on the at least one frame, which determines a background image and a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame, which includes executing a mixture of 3 to 5 Gaussians algorithm coupled together in a linear combination by Gaussian weight coefficients to generate the background model, a foreground image, and the static region. The static region mask identifies each pixel in the static region upon the static region mask being superimposed on the current frame. 1. A method for real time processing of a sequence of video frames , said method comprising:analyzing, by a processor of a computer system, a current frame in the sequence and at least one frame in the sequence occurring prior to the current frame, each frame comprising a two-dimensional array of pixels and a frame-dependent color intensity at each pixel, said array of pixels in each frame being a totality of pixels in each frame in the sequence of video frames received in synchronization with a recording of the video frames in real time, said analyzing comprising performing a background subtraction on the at least one frame, said performing the background subtraction determining a background image and also determining a static region mask associated with a static region consisting of a contiguous distribution of pixels in the current frame, said static region mask identifying each pixel in the static region upon the static region mask being superimposed on the current frame, said background image comprising the array of pixels and a background model of the at least one ...
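A heavily simplified, single-pixel sketch of the "mixture of 3 to 5 Gaussians" background model mentioned above, in the spirit of the classic Stauffer-Grimson formulation; the learning rate, matching rule and background test are assumptions.

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture of K Gaussians over grayscale intensity."""
    def __init__(self, k=3, lr=0.05):
        self.w = np.full(k, 1.0 / k)            # Gaussian weight coefficients
        self.mu = np.linspace(0.0, 255.0, k)
        self.var = np.full(k, 225.0)
        self.lr = lr

    def update(self, x):
        """Fold intensity x into the model; return True if x looks like background."""
        d2 = (x - self.mu) ** 2 / self.var
        match = int(np.argmin(d2))
        is_match = d2[match] < 2.5 ** 2          # within 2.5 sigma of the closest mode
        self.w *= (1 - self.lr)
        if is_match:
            self.w[match] += self.lr
            self.mu[match] += self.lr * (x - self.mu[match])
            self.var[match] += self.lr * ((x - self.mu[match]) ** 2 - self.var[match])
        else:                                    # no mode fits: replace the weakest one
            weakest = int(np.argmin(self.w))
            self.mu[weakest], self.var[weakest], self.w[weakest] = x, 225.0, self.lr
        self.w /= self.w.sum()
        return is_match and self.w[match] >= sorted(self.w)[-2]   # matched a dominant mode

# usage: a pixel that is mostly ~100, then probed with background and foreground values
rng = np.random.default_rng(3)
model = PixelMoG()
for _ in range(200):
    model.update(100 + rng.normal(0, 3))
print(model.update(101), model.update(200))      # True (background), False (foreground)
```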

08-01-2015 publication date

Region-Growing Algorithm

Number: US20150010227A1
Assignee: COVIDIEN LP

A region growing algorithm for controlling leakage is presented including a processor configured to select a starting point for segmentation of data, initiate a propagation process by designating adjacent voxels around the starting point, determine whether any new voxels are segmented, count and analyze the segmented new voxels to determine leakage levels, and identify and record segmented new voxels from a previous iteration when the leakage levels exceed a predetermined threshold. The processor is further configured to perform labeling of the segmented new voxels of the previous iteration, select the segmented new voxels from the previous iteration when the leakage levels fall below the predetermined threshold, and create a voxel list based on acceptable segmented voxels found in the previous iteration.
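A compact sketch of leakage-controlled region growing on a 2-D array (the patent targets voxel volumes; 2-D keeps the example short): grow by an intensity tolerance, count the newly added elements per iteration, and fall back to the previous iteration's segmentation when the count spikes above a leakage limit. The spike criterion, 4-connectivity and tolerance are assumptions.

```python
import numpy as np

def grow_with_leakage_control(img, seed, tol=10.0, leak_limit=30):
    """Grow a region from `seed` over pixels within `tol` of the seed value.
    If one iteration adds more than `leak_limit` new pixels, treat it as
    leakage and return the segmentation from the previous iteration."""
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    allowed = np.abs(img - float(img[seed])) <= tol
    while True:
        prev = region.copy()
        front = np.zeros_like(region)            # 4-connected neighbours of the region
        front[1:, :] |= region[:-1, :]; front[:-1, :] |= region[1:, :]
        front[:, 1:] |= region[:, :-1]; front[:, :-1] |= region[:, 1:]
        new = front & ~region & allowed
        n_new = int(new.sum())
        if n_new == 0:
            return region                        # converged without leakage
        if n_new > leak_limit:
            return prev                          # leakage detected: keep the previous iteration
        region |= new

# usage: a small blob connected through a thin bridge to a large bright area
img = np.zeros((64, 64)); img[10:20, 10:20] = 100      # target blob (100 px)
img[14:16, 20:40] = 100; img[5:60, 40:60] = 100        # bridge + large region (leak path)
seg = grow_with_leakage_control(img, (15, 15))
print(int(seg.sum()))   # far below the 1240 bright pixels: growth rolled back at the leak
```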

12-01-2017 publication date

NEARSIGHTED CAMERA OBJECT DETECTION

Number: US20170011275A1
Author: Barton Scott E.
Assignee:

A system and process of nearsighted (myopia) camera object detection involves detecting the objects through edge detection and outlining or thickening them with a heavy border. Thickening may include making the object bold in the case of text characters. The bold characters are then much more apparent and heavier weighted than the background. Thresholding operations are then applied (usually multiple times) to the grayscale image to remove all but the darkest foreground objects in the background, resulting in a nearsighted (myopic) image. Additional processes may be applied to the nearsighted image, such as morphological closing, contour tracing and bounding of the objects or characters. The bound objects or characters can then be averaged to provide repositioning feedback for the camera user. Processed images can then be captured and subjected to OCR to extract relevant information from the image. 1. A method of generating, during acquisition via a camera of a foreground document, a plurality of pre-processed images of the foreground document, the pre-processed images being used to optimize capture of the foreground document for optical character recognition, the method comprising: obtaining a plurality of source images, including a first source image and a second source image, continuously acquired via the camera of a computing device, each of the obtained plurality of source images containing characters associated with the foreground document, wherein the first source image is acquired by the camera at a first capture position, and wherein the second source image is acquired by the camera at a second capture position, wherein the first capture position is different from the second capture position; for each of the plurality of obtained source images, pre-processing a given obtained source image to generate, by a processor of the computing device, a pre-processed image of the given obtained source image so as to emphasize the characters associated with the foreground ...
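The described pre-processing chain (edge detection, thickening into heavy strokes, repeated thresholding, morphological closing, connected components and bounding) can be approximated with numpy and scipy.ndimage as below; kernel sizes, thresholds and the mean-based threshold rule are assumptions, not the actual implementation.

```python
import numpy as np
from scipy import ndimage

def myopic_preprocess(gray, edge_thresh=30.0, close_size=5, passes=2):
    """gray: 2-D image, dark text on a light page. Returns bounding boxes of
    the dark foreground objects that survive the 'nearsighted' filtering."""
    g = gray.astype(float)
    # 1. edge detection, then "thickening" the edges into heavy dark strokes
    edges = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1)) > edge_thresh
    work = np.where(ndimage.binary_dilation(edges, iterations=2), 0.0, g)
    # 2. repeated thresholding keeps only the darkest foreground (the myopic image)
    for _ in range(passes):
        work = np.where(work < work.mean(), 0.0, 255.0)
    fg = work == 0.0
    # 3. morphological closing, connected components, bounding boxes
    fg = ndimage.binary_closing(fg, structure=np.ones((close_size, close_size)))
    labels, _ = ndimage.label(fg)
    return ndimage.find_objects(labels)          # list of (row_slice, col_slice) boxes

# usage: a light page with two dark "characters"
page = np.full((100, 200), 230.0)
page[30:50, 20:40] = 20.0
page[30:50, 120:150] = 20.0
print(len(myopic_preprocess(page)))              # one bounding box per dark object -> 2
```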

12-01-2017 publication date

Three-dimensional cavitation quantitative imaging method for microsecond-resolution cavitation spatial-temporal distribution

Number: US20170011508A1
Assignee:

A three-dimensional cavitation quantitative imaging method for a microsecond-resolution cavitation spatial-temporal distribution includes steps of: after each wide beam detection, moving an array transducer by one unit; waiting until the cavitation nuclei distribution backs to an initial state, then detecting the cavitation by the wide beam detection with same cavitation energy incitation, so as to obtain a spatial series of two-dimensional cavitation raw radio frequency data corresponding to different placing positions of the array transducer; then changing the cavitation energy source duration, time delays between energy source incitation and the wide beam transmitted by the array transducer, and time delays between the pulsating pump and energy source incitation, so as to obtain a temporal series of two-dimensional cavitation raw radio frequency data; and then obtaining a microsecond-resolution three-dimensional cavitation spatial-temporal distribution image and a cavitation micro bubble concentration quantitative Nakagami parametric image. 1. A three-dimensional cavitation quantitative imaging method for a microsecond-resolution cavitation spatial-temporal distribution , comprising steps of: using a wide beam to detect cavitation activity for two-dimensional cavitation raw radio frequency data obtained; after cavitation detection by each wide beam , moving an array transducer for the wide beam detection by one unit perpendicular to a placing direction of the array transducer; waiting until a cavitation nuclei distribution returns to an original state thereof , then detecting the cavitation by the wide beam detection with same cavitation energy incitation , so as to obtain a spatial series of two-dimensional cavitation raw radio frequency data with the array transducer placed at different unit positions; and then obtaining a three-dimensional cavitation image and a cavitation micro bubble concentration quantitative three-dimensional image by combining wide beam ...

12-01-2017 publication date

Method for estimating an amount of analyte in a fluid

Number: US20170011517A1

The invention is a method for estimating the amount of analyte in a fluid sample, and in particular in a bodily fluid. The sample is mixed with a reagent able to form a color indicator in the presence of the analyte. The sample is then illuminated by a light beam produced by a light source; an image sensor forms an image of the beam transmitted by the sample, from which image a concentration of the analyte in the fluid is estimated. The method is intended to be implemented in compact analyzing systems. One targeted application is the determination of the glucose concentration in blood.

12-01-2017 publication date

METHOD FOR CONTROLLING TRACKING USING A COLOR MODEL, CORRESPONDING APPARATUS AND NON-TRANSITORY PROGRAM STORAGE DEVICE

Number: US20170011528A1
Assignee:

A method for controlling tracking using a color model is disclosed. The method includes obtaining a window in a second frame of a video image corresponding to a window in a first frame of the video image using a tracking algorithm in a tracking mode, wherein each pixel in the video image has at least one color component. The method further includes defining a background area around the window in the first frame, assigning a pixel confidence value for each pixel in the second frame according to a color model, assigning a window confidence value for the window in the second frame according to the pixel confidence values of pixels in the window in the second frame, if the window confidence value is greater than a first confidence threshold, selecting the tracking mode, and if the window confidence value is not greater than the first confidence threshold, selecting a mode different from the tracking mode. 1. A method for controlling tracking using a color model , the method comprising:obtaining a window in a second frame of a video image corresponding to a window in a first frame of the video image using a tracking algorithm in a tracking mode, wherein each pixel in the video image has at least one color component;computing a foreground color model for each one of at least two groups of pixels in the window in the first frame, each foreground color model comprising at least one color component computed from values of the color components of pixels in the corresponding group;computing a background color model for each one of at least two groups of pixels in a background area around the window in the first frame, each background color model comprising at least one color component computed from values of the color components of pixels in the corresponding group;determining a foreground color distance between each pixel in the window in the second frame and each of the foreground color models according to color component values and determining a foreground minimum color ...
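A small numpy sketch of the confidence computation: each pixel of the candidate window is scored by whether its nearest foreground color model is closer than its nearest background color model, and the window confidence (mean pixel confidence) is compared against a threshold to decide whether to stay in tracking mode. Building each color model as the mean color of a brightness-sorted pixel group is an assumption.

```python
import numpy as np

def color_models(pixels, n_groups=2):
    """Split N x 3 pixels into n_groups by brightness; return each group's mean color."""
    order = np.argsort(pixels.sum(axis=1))
    return np.stack([g.mean(axis=0) for g in np.array_split(pixels[order], n_groups)])

def window_confidence(window_px, fg_models, bg_models):
    """window_px: N x 3 pixels of the candidate window in the new frame."""
    d_fg = np.linalg.norm(window_px[:, None, :] - fg_models[None], axis=2).min(axis=1)
    d_bg = np.linalg.norm(window_px[:, None, :] - bg_models[None], axis=2).min(axis=1)
    return float((d_fg < d_bg).mean())           # mean of per-pixel confidence values

# usage: reddish target tracked over a greenish background
rng = np.random.default_rng(4)
fg = color_models(rng.normal((200, 60, 60), 10, (500, 3)))
bg = color_models(rng.normal((60, 180, 60), 10, (500, 3)))
conf = window_confidence(rng.normal((195, 65, 65), 10, (400, 3)), fg, bg)
print(conf, "keep tracking mode" if conf > 0.5 else "switch mode (e.g. re-detect)")
```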

12-01-2017 publication date

VIDEO ANALYSIS SYSTEM

Number: US20170011529A1
Author: URASHITA Keiichi
Assignee: NEC Corporation

A video analysis system includes: a video data acquiring means that acquires video data; a moving object detecting means that detects a moving object from video data acquired by the video data acquiring means, by using a moving object detection parameter, which is a parameter for detecting a moving object; an environment information collecting means that collects environment information representing an external environment of a place where the video data acquiring means is installed; and a parameter changing means that changes the moving object detection parameter used when the moving object detecting means detects a moving object, on the basis of the environment information collected by the environment information collecting means. 121.-. (canceled)22. A video analysis device comprising at least one processor configured to:acquire video data from an external device, the video data having been acquired by the external device;detect a moving object from video data by using a moving object detection parameter, the video data having been acquired by the processor, the moving object detection parameter being a parameter for detecting a moving object;collect environment information representing an external environment of a place where the external device having acquired the video data is installed; andchange the moving object detection parameter on a basis of the environment information collected by the processor, the moving object detection parameter being used when the processor detects the moving object.23. The video analysis device according to claim 22 , wherein:the processor is configured to detect the moving object by obtaining a difference of image data extracted from the video data and includes a sensitivity threshold as one of moving object detection parameters, the sensitivity threshold being a predetermined threshold to become a criterion for recognizing a disparity between image data of a previous frame and image data of a current frame; andthe processor is ...

12-01-2017 publication date

METHOD AND APPARATUS FOR EXTENDED PHASE CORRECTION IN PHASE SENSITIVE MAGNETIC RESONANCE IMAGING

Number: US20170011536A1
Author: MA Jingfei

Methods, apparatuses, systems, and software for extended phase correction in phase sensitive Magnetic Resonance Imaging. A magnetic resonance image or images may be loaded into a memory. Two vector images A and B associated with the loaded image or images may be calculated either explicitly or implicitly so that a vector orientation by one of the two vector images at a pixel is substantially determined by a background or error phase at the pixel, and the vector orientation at the pixel by the other vector image is substantially different from that determined by the background or error phase at the pixel. A sequenced region growing phase correction algorithm may be applied to the vector images A and B to construct a new vector image V so that a vector orientation of V at each pixel is substantially determined by the background or error phase at the pixel. A phase corrected magnetic resonance image or images may be generated using the vector image V, and the phase corrected magnetic resonance image or images may be displayed or archived. 1. A computerized method for generating a phase corrected magnetic resonance image or images comprising:(a) acquiring a magnetic resonance image or images containing background or error phase information;(b) calculating two vector images A and B using the acquired image or images so that a vector orientation by one of the two vector images at a pixel is substantially determined by the background or error phase at the pixel, and the vector orientation at the pixel by the other vector image is substantially different from that determined by the background or error phase at the pixel; (i) selecting an initial seed pixel or pixels and assigning either A or B of the initial seed pixel or pixels as a value of V for the initial seed pixel or pixels;', '(ii) selecting a secondary seed pixel and selecting either A or B of the secondary seed pixel as a value of V for the secondary seed pixel based on whether A or B of the secondary seed pixel ...

14-01-2016 publication date

INFORMATION PROCESSING APPARATUS RECOGNIZING CERTAIN OBJECT IN CAPTURED IMAGE, AND METHOD FOR CONTROLLING THE SAME

Number: US20160012599A1
Author: Kuboyama Hideo
Assignee:

An information processing apparatus includes an image obtaining unit configured to obtain an input image, an extraction unit configured to extract from the input image one or more regions corresponding to one or more objects included in a foreground of the operation surface in accordance with the reflected positional information and positional information of the operation surface in the space, a region specifying unit configured to specify an isolation region, which is not in contact with a boundary line which defines a predetermined closed region in the input image, from among the one or more regions extracted by the extraction unit, and a recognition unit configured to recognize an adjacency state of a predetermined instruction object relative to the operation surface in accordance with the positional information reflected from the portion corresponding to the isolation region as specified by the region specifying unit. 1. An information processing apparatus comprising:an image obtaining unit configured to obtain an input image on which positional information in a space including an operation surface as a portion of a background is reflected;an extraction unit configured to extract one or more regions corresponding to one or more objects included in a foreground of the operation surface from the input image in accordance with the positional information reflected on the input image obtained by the image obtaining unit and positional information of the operation surface in the space;a region specifying unit configured to specify an isolation region which is not in contact with a boundary line which defines a predetermined closed region in the input image from among the one or more regions extracted by the extraction unit; anda recognition unit configured to recognizes an adjacency state of a predetermined instruction object relative to the operation surface in accordance with the positional information reflected on the isolation region in the input image in a ...

14-01-2016 publication date

IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, PROGRAM, STORAGE MEDIUM, PRODUCTION APPARATUS, AND METHOD OF PRODUCING ASSEMBLY

Number: US20160012600A1
Author: Kitajima Hiroshi
Assignee:

A tentative local score between a point in a feature image in a template image and a point, in a target object image, at a position corresponding to the point in the feature image is calculated, and a determination is performed as to whether the tentative local score is smaller than 0. In a case where the tentative local score is greater than or equal to 0, the tentative local score is employed as a local score. In a case where the tentative local score is smaller than 0, the tentative local score is multiplied by a coefficient and the result is employed as a degree of local similarity. 1. An image processing method for performing image processing by an image processing apparatus using a first pyramid including a plurality of template images having different first resolutions and hierarchized in layers according to the first resolutions , a second pyramid including a plurality of target object images having different second resolutions from each other but equal to the respective first resolutions of the template images in the first pyramid and hierarchized in layers according to the second resolutions such that an image similar to a feature image included in one of the template images in the first pyramid is searched for from one of the target object images in the second pyramid by evaluating a degree of layer-to-layer similarity between the first and second pyramids in an order of resolution from the lowest to highest , the method comprising:calculating a degree of local similarity between a point in the feature image and a corresponding point in the target object on a point-by-point basis for each of all points in the feature image; andcalculating the degree of the similarity between the feature image and the target object image by determining the sum of the calculated degrees of local similarity and normalizing the sum,the calculating the degree of local similarity includingcalculating a tentative degree of local similarity between a point in the feature image ...
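The local-score rule of the abstract is straightforward to state in code: a tentative per-point score (here a product of signed feature values, an assumed stand-in for the actual correlation term) is kept as-is when non-negative and multiplied by a coefficient when negative, and the degree of similarity is the normalized sum of the local scores. The coefficient value is an assumption.

```python
import numpy as np

def similarity(template_feat, target_feat, neg_coeff=3.0):
    """template_feat, target_feat: same-shape arrays of signed per-point features.
    Negative tentative local scores are multiplied by `neg_coeff` before summation."""
    tentative = template_feat * target_feat                      # tentative local score
    local = np.where(tentative >= 0, tentative, neg_coeff * tentative)
    norm = np.linalg.norm(template_feat) * np.linalg.norm(target_feat) + 1e-12
    return local.sum() / norm                                    # normalized similarity

t = np.array([1.0, -1.0, 0.5, 0.0])
print(similarity(t, t), similarity(t, -t))   # 1.0 for a perfect match, strongly negative for a mismatch
```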

14-01-2016 publication date

AUTOMATIC BACKGROUND REGION SELECTION FOR LESION DELINEATION IN MEDICAL IMAGES

Number: US20160012604A1
Assignee: SIEMENS MEDICAL SOLUTIONS USA, INC.

In a method and apparatus for automatic background region selection for lesion segmentation in medical images, a patient medical image dataset is loaded into a computer and an image region is delineated to obtain a segmentation containing a lesion. A background region is created from the segmentation representing the patient organ. 1. A method for automatic background region selection for lesion segmentation in medical image data , comprising:loading a medical image data set of a patient into a computer;in said computer, automatically identifying and delineating an image region that contains a lesion, within said medical image data set;in said computer, automatically identifying a background region within said image region;in said computer, automatically calculating a segmentation of said lesion by comparison of said lesion with said background region; andfrom said computer, causing the segmented lesion to be visually displayed at a display screen in communication with said computer.2. A method as claimed in comprising identifying said background region by:automatically identifying regions of said lesion in said image region, and identifying potential non-background regions; andautomatically identifying said background region by subtracting, from said image region, said regions identified as said lesion and as potential non-background regions.3. A method as claimed in comprising providing said computer with co-registered functional and anatomical data sets claim 1 , respectively obtained with multiple imaging modalities claim 1 , as said medical image data set.4. A method as claimed in comprising delineating said image region by executing said algorithm on said anatomical data set claim 3 , and transferring a segmentation of said lesion from said anatomical data set to said functional data set claim 3 , and performing the identification of said background region using said functional data set.5. A method as claimed in wherein said medical image data set is a ...

14-01-2016 publication date

MULTI-CUE OBJECT DETECTION AND ANALYSIS

Number: US20160012606A1
Assignee:

Foreground objects of interest are distinguished from a background model by dividing a region of interest of a video data image into a grid array of individual cells. Each of the cells are labeled as foreground if accumulated edge energy within the cell meets an edge energy threshold, or if color intensities for different colors within each cell differ by a color intensity differential threshold, or as a function of combinations of said determinations. 1. A computer-implemented method for distinguishing foreground objects of interest from a background model , the method comprising executing on a processing unit the steps of:dividing a region of interest of a video data image into a grid array of a plurality of individual cells;acquiring frame image data for each of the cells;determining a first background indication for each of the cells that have determined color intensities that do not exceed others of the determined color intensities for the cell by a color intensity differential threshold;determining a first foreground indication for each of the cells that have one of the determined color intensities greater than another of the determined color intensities for that cell by the color intensity differential threshold;determining a second background indication for each of the cells that have an accumulated energy of edges detected within the cells that less than an edge energy threshold;determining a second foreground indication for each of the cells that have an accumulated energy of edges detected within the cells that meets or exceeds the edge energy threshold;labelling as foreground or background each of the cells in response to applying a combination rule to the foreground indications and the background indications for the cells; andusing the frame image data from the cells labeled as foreground to define a foreground object.2. The method of claim 1 , further comprising:integrating computer-readable program code into a computer system comprising the processing ...
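A sketch of the cell-labeling rule: a cell is labeled foreground if its accumulated edge energy meets a threshold or if one color channel's mean intensity exceeds another's by a differential threshold. The cell size, both thresholds and the OR combination rule are assumptions consistent with, but not taken from, the claims.

```python
import numpy as np

def label_cells(frame_rgb, cell=16, edge_thresh=500.0, color_diff_thresh=40.0):
    """frame_rgb: H x W x 3 float image. Returns a boolean grid, True = foreground cell."""
    h, w, _ = frame_rgb.shape
    gh, gw = h // cell, w // cell
    dy, dx = np.gradient(frame_rgb.mean(axis=2))
    edge_energy = np.hypot(dy, dx)
    labels = np.zeros((gh, gw), dtype=bool)
    for i in range(gh):
        for j in range(gw):
            r = slice(i * cell, (i + 1) * cell)
            c = slice(j * cell, (j + 1) * cell)
            edge_fg = edge_energy[r, c].sum() >= edge_thresh           # cue 1: edge energy
            means = frame_rgb[r, c].reshape(-1, 3).mean(axis=0)
            color_fg = means.max() - means.min() >= color_diff_thresh  # cue 2: color differential
            labels[i, j] = bool(edge_fg or color_fg)                   # combination rule
    return labels
```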

14-01-2016 publication date

Image processing device and region extraction method

Number: US20160012614A1
Author: Yoshihiro Goto
Assignee: Hitachi Medical Corp

To provide an image processing device, a region extraction method, and an image processing method capable of extracting a target region based on minute variations in a concentration value that exist locally and clearly displaying the extracted target region, the image processing device extracts a blood vessel region A from an image to extract a region where a CT value is smaller than an average concentration value of the blood vessel region A as a soft plaque region B. For unextracted soft plaque, a pixel pair is set in a difference region between the region A and the region B, and for each pixel Pj between the pixel pairs, whether or not the pixel value is even smaller than a value slightly smaller than the CT value of the pixel pair is determined. Hence, a portion where a pixel value slightly varies locally is extracted as soft plaque.

14-01-2016 publication date

CT SYSTEM FOR SECURITY CHECK AND METHOD THEREOF

Number: US20160012647A1
Assignee:

A CT system for security check and a method thereof are provided. The method includes: reading inspection data of an inspected object; inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, which is obtained from the inspection data; receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; and providing a feedback of the 3D inspection image including at least one 3D FTI in response to the selection. With the above solution, it is convenient for a user to rapidly mark a suspected object in the CT image, and provides a feedback of whether a FTI is included. 1. A method in a Computed Tomography (CT) system for security check , comprising steps of:reading inspection data of an inspected object;inserting at least one three-dimensional (3D) Fictional Threat Image (FTI) into a 3D inspection image of the inspected object, wherein the 3D inspection image is obtained from the inspection data;receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a two-dimensional (2D) inspection image including a 2D FTI corresponding to the 3D FTI, wherein the 2D inspection image is obtained from the 3D inspection image or is obtained from the inspection data; andproviding a feedback of the 3D inspection image including at least one 3D FTI in response to the selection.2. The method according to claim 1 , wherein the step of receiving a selection of at least one region in the 3D inspection image including the 3D FTI or at least one region in a 2D inspection image including a 2D FTI corresponding to the 3D FTI comprises:receiving coordinate positions of a part of the 3D inspection image or the 2D ...

14-01-2016 publication date

Image Binarization

Number: US20160014300A1
Assignee: Lexmark International Inc

Systems and methods convert to binary an input image whose pixels define text and background. Thresholds are determined by which pixels in the input image and in a corresponding blurred image will be classified as either binary black or binary white. The thresholds are derived from groups of neighboring pixels from which the pixels corresponding to the background have been separated out. Pixels of the input image that are classified as binary black, and whose corresponding pixels in the blurred image are also classified as binary black relative to their thresholds, are set to black in the binary image; all other pixels are set to white. Techniques for devising the thresholds, blurring the image, grouping pixels, and statistical analysis typify the embodiments.
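A minimal sketch of the "agree in both images" rule follows. Otsu thresholds and a small Gaussian blur are assumptions standing in for the background-derived statistics described above.

# Sketch of the two-image agreement binarization (assumptions: Otsu thresholds,
# 7x7 Gaussian blur for the companion image).
import cv2
import numpy as np

def binarize(gray):
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    t_img, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    t_blur, _ = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # a pixel becomes black only if both the input and its blurred counterpart
    # fall below their respective thresholds; everything else is set white
    black = (gray < t_img) & (blurred < t_blur)
    out = np.full(gray.shape, 255, dtype=np.uint8)
    out[black] = 0
    return out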

14-01-2016 publication date

IMAGE PROCESSING SYSTEM OF BACKGROUND REMOVAL AND WHITE/BLACK POINT COMPENSATION

Number: US20160014301A1
Assignee: CSR Imaging US, LP

Embodiments are directed towards identifying background pixels in a scanned image so that they can be removed from the image. A histogram of each color channel for an initially scanned portion of the image may be determined. The histogram may represent a frequency distribution of pixels in the scanned portion across each color value for each color channel. A white point tracking profile may be determined based on the histogram. The white point tracking profile may identify a range of color values for each channel that are statistically related to a mode color value of a corresponding color channel. When pixels in the scanned image are determined to have a color profile within the white point tracking profile, then those pixels may be modified to a predetermined color profile, such as a maximum color value for each channel. The modified pixels may then he removed from the scanned image. 1. A method for removing a background in a scanned image , comprising:determining a histogram of each color channel for at least an initially scanned portion of the image, wherein the histogram represents a frequency distribution of pixels in the initially scanned portion across each color value for each color channel;determining a white point tracking profile based on the histogram, wherein the white point tracking profile identifies a range of color values for each channel that are statistically related to a mode color value of a corresponding color channel;when at least a portion of the pixels in the scanned image are determined to have a color profile within the white point tracking profile, modifying the at least portion of pixels to a predetermined color profile; andremoving the modified pixels from the scanned image.2. The method of claim 1 , further comprising:determining black point tracking based on each color channel's lowest color value in at least the initially scanned portion of the image; andexpanding each color channel's color spectrum by normalizing each color channel ...
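The white-point tracking step can be sketched as follows. The fixed +/-delta window around each channel's histogram mode in an initially scanned strip is an illustrative assumption standing in for the "statistically related" range of the description.

# Sketch of white-point tracking on an initial strip of a scan (assumption:
# a fixed +/-delta window around each channel's mode defines the profile).
import numpy as np

def remove_background(rgb, strip_rows=64, delta=12):
    strip = rgb[:strip_rows].reshape(-1, 3)
    modes = np.array([np.bincount(strip[:, c], minlength=256).argmax() for c in range(3)])
    lo = np.clip(modes - delta, 0, 255)
    hi = np.clip(modes + delta, 0, 255)
    in_profile = np.all((rgb >= lo) & (rgb <= hi), axis=2)   # white-point profile hit
    out = rgb.copy()
    out[in_profile] = 255                                    # push background to paper white
    return out, in_profile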

15-01-2015 publication date

SYSTEMS AND METHODS FOR NOTE CONTENT EXTRACTION AND MANAGEMENT BY SEGMENTING NOTES

Number: US20150016716A1
Assignee:

Techniques for creating and manipulating software notes representative of physical notes are described. A note management system comprises a sensor configured to capture an image data of a physical note, wherein the note is separated into one or more segments using marks, wherein each of the segments is marked by at least one of the marks. The note management system further comprises a note recognition module coupled to the sensor, the note recognition module configured to receive the captured image data and identify the marks on the note, and a note extraction module configured to determine general boundaries of the one or more segments within the captured image data based on the identified marks and extract content using the general boundaries, the content comprises content pieces, each of the content pieces corresponding to one of the one or more segments of the note. 1. A method of extracting content from a note using a computer system having one or more processors and memories , comprising:capturing, by a sensor, image data comprising a visual representation of a note, wherein the note comprises a physical note having one or more segments, each of the segments having one or more marks affixed thereon;identifying, by the one or more processors, the marks on the note;based on the identified marks, determining, by the one or more processors, general boundaries within the image data for the one or more segments; andextracting, by the one or more processors and from the image data, content comprising a set of content pieces using the general boundaries, each of the content pieces corresponding to a different one of the one or more segments of the note.2. The method of claim 1 , wherein the marks comprise at least one of lines claim 1 , arrows claim 1 , star-shaped marks claim 1 , elbow-shaped marks claim 1 , rectangular marks claim 1 , circular marks claim 1 , ellipse-shaped marks claim 1 , polygon-shaped marks claim 1 , and geometric-shaped marks.3. The method of ...

15-01-2015 publication date

Opacity Measurement Using A Global Pixel Set

Number: US20150016717A1
Assignee: Microsoft Technology Licensing LLC

A computing device is described herein that is configured to select a pixel pair including a foreground pixel of an image and a background pixel of the image from a global set of pixels based at least on spatial distances from an unknown pixel and color distances from the unknown pixel. The computing device is further configured to determine an opacity measure for the unknown pixel based at least on the selected pixel pair.

15-01-2015 publication date

Method for determining the extent of a foreground object in an image

Number: US20150016724A1
Author: Noam Levy
Assignee: Qualcomm Technologies Inc

Embodiments are directed towards determining within a digital camera whether a pixel belongs to a foreground or background segment within a given image by evaluating a ratio of derivative and deviation metrics in an area around each pixel in the image, or ratios of derivative metrics across a plurality of images. For each pixel within the image, a block of pixels are examined to determine an aggregate relative derivative (ARD) in the block. The ARD is compared to a threshold value to determine whether the pixel is to be assigned in the foreground segment or the background segment. In one embodiment, a single image is used to determine the ARD and the pixel segmentation for that image. Multiple images may also be used to obtain ratios of a numerator of the ARD, useable to determine an extent of the foreground.
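A rough sketch of an aggregate-relative-derivative style test is shown below. The choice of Sobel gradients, a box filter for aggregation, local standard deviation as the deviation metric, and the threshold value are all assumptions, not the patented metrics.

# Sketch of an ARD-style test (assumptions: Sobel gradients, box-filter
# aggregation, local standard deviation as the deviation metric).
import numpy as np
from scipy import ndimage

def ard_foreground(gray, block=9, thresh=1.5):
    gray = gray.astype(np.float32)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    deriv = np.hypot(gx, gy)
    agg_deriv = ndimage.uniform_filter(deriv, size=block)       # aggregated derivative
    local_mean = ndimage.uniform_filter(gray, size=block)
    local_var = ndimage.uniform_filter(gray * gray, size=block) - local_mean ** 2
    deviation = np.sqrt(np.clip(local_var, 0, None)) + 1e-6
    ard = agg_deriv / deviation                                  # relative derivative
    return ard >= thresh   # True = sharp (foreground), False = blurred background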

15-01-2015 publication date

DESTRUCTIVE AND VISUAL MEASUREMENT AUTOMATION SYSTEM FOR WEB THICKNESS OF MICRODRILLS AND METHOD THEREOF

Number: US20150017879A1
Author: Chang Wen-Tung, LU Yu-Yun
Assignee: NATIONAL TAIWAN OCEAN UNIVERSITY

An improved destructive and visual measurement automation system and a method for measuring a web thickness of a microdrill are provided. When a dual-axis motion platform module moves the microdrill to a first position, a reflection module reflects a first image in a first direction toward a second direction. A vision module receives the reflected first image in the second direction and outputs the received first image to a computer. According to the first image, the computer performs a positioning procedure and a grinding procedure to drive a drill grinding module to grind the microdrill to a sectional position to be measured of the microdrill. When the microdrill is moved to a second position, the vision module outputs a second image to the computer. According to the second image, the computer performs an image computing procedure to obtain the web thickness at the sectional position to be measured. 1. An improved destructive and visual measurement automation system for measuring a web thickness of a microdrill , comprising:a computer;a dual-axis motion platform module, electrically connected to the computer and configured to hold the microdrill and be controlled by the computer to move the microdrill;a drill grinding module, electrically connected to the computer and configured to grind the microdrill to a sectional position to be measured to form an axial cross-section of the microdrill when the dual-axis motion platform module moves the microdrill to a grinding position;a reflection module, configured to reflect a first image, which presents a drill tip of the microdrill and the drill grinding module, in a first direction toward a second direction when the dual-axis motion platform module moves the microdrill to a first position at which the microdrill does not contact with the drill grinding module; anda vision module, electrically connected to the computer, wherein when the vision module acquires the reflected first image in the second direction and outputs ...

19-01-2017 publication date

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20170017830A1
Author: HANAI YUYA
Assignee:

[Object] To provide an information processing device, an information processing method, and a program that can give a user a stronger impression that a real world is enhanced by using an AR technique. 1. An information processing device , comprising:a recognition unit configured to recognize an object included in a real space so as to distinguish the object from a background on the basis of three-dimensional data of the real space in order to generate a virtual object image obtained by changing a state of the object.2. The information processing device according to claim 1 , further comprising:an image generation unit configured to generate the object image of the object recognized by the recognition unit.3. The information processing device according to claim 2 , further comprising:a change object generation unit configured to generate a virtual change object obtained by changing the state of the object; anda display control unit configured to control a display unit so that the display unit displays the object image generated by the image generation unit on a surface of the change object.4. The information processing device according to claim 3 ,wherein the image generation unit generates, on the basis of a portion corresponding to an exposed surface of the object in a captured image obtained by capturing an image of the real space, a second surface image obtained by estimating a surface of the object hidden in the captured image, andwherein the display control unit displays the object image in which the second surface image is attached to a region of the change object which is newly exposed due to the change.5. The information processing device according to claim 3 ,wherein the display control unit displays the object image obtained by attaching an image of a target object exposed in a through image obtained by capturing an image of the real space in real time to a corresponding region of the change object.6. The information processing device according to claim 3 ...

21-01-2016 publication date

SYSTEMS AND METHODS FOR PEOPLE COUNTING IN SEQUENTIAL IMAGES

Number: US20160019698A1
Assignee:

Methods for counting persons in images and system therefrom are provided. The method can include obtaining image data for multiple sequential images of a physical area acquired by a camera. The method can also include, based on the image data, generating a background mask for at least one image from the multiple images, where the background mask indicating pixels identified as corresponding to non-moving regions and pixels identified as corresponding to moving regions in the at least one image meeting an exclusion criteria. The method additionally includes, based on the background mask, generating a foreground mask for the at least one image identifying pixels in the image associated with persons and computing an estimate of a number of persons in the physical area based at least on the number of the foreground pixels and pre-defined relationship between a number of pixels and a number of persons for the camera. 1. A method , comprising:obtaining image data for multiple sequential images of a physical area acquired by a camera;based on the image data, generating a background mask for at least one image from the multiple images, the background mask indicating pixels from the image data for the at least one image identified as corresponding to non-moving regions in the at least one image and pixels in the at least one image identified as corresponding to moving regions in the at least one image meeting an exclusion criteria;based on the background mask, generating a foreground mask for the at least one image identifying pixels in the image associated with persons; andcomputing an estimate of a number of persons in the physical area based at least on the number of the foreground pixels and pre-defined relationship between a number of pixels and a number of persons for the camera.2. The method of claim 1 , further comprising:determining locations of persons in the physical area based on the foreground pixels.3. The method of claim 2 , wherein the determining comprises: ...
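The pixel-count estimate can be sketched as below, assuming OpenCV's MOG2 subtractor as the moving/non-moving split and a pre-calibrated pixels-per-person constant for the camera; both are stand-ins for the masks and relationship described above.

# Sketch of the pixel-count people estimate (assumptions: MOG2 as the
# background/foreground split, an illustrative pixels-per-person calibration).
import cv2
import numpy as np

PIXELS_PER_PERSON = 4000.0   # assumed calibration value for this camera view

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)

def estimate_count(frame_bgr):
    fg = subtractor.apply(frame_bgr)      # 255 = moving, 127 = shadow, 0 = background
    fg = cv2.medianBlur(fg, 5)
    foreground_pixels = int(np.count_nonzero(fg == 255))
    return foreground_pixels / PIXELS_PER_PERSON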

19-01-2017 publication date

IMMERSIVE TELECONFERENCING WITH TRANSLUCENT VIDEO STREAM

Number: US20170019627A1
Assignee:

An immersive video teleconferencing system may include a transparent display and at least one image sensor operably coupled to the transparent display. The at least one image sensor may be multiple cameras included on a rear side of the transparent display, or a depth camera operably coupled to the transparent display. Depth data may be extracted from the images collected by the at least one image sensor, and an image of a predetermined subject may be segmented from a background of the collected images based on the depth data. The image of the segmented predetermined subject may also be scaled based on the depth data. The image of the scaled segmented predetermined subject may be transmitted to a remote transparent display at a remote location, and displayed on the remote transparent display such that a background surrounding the displayed image of the remote location is visible through the transparent display, so that the predetermined subject appears to be physically located at the remote location. 1. A method , comprising:establishing a connection between a first video teleconferencing device at a first location to a second video teleconferencing device at a second location to initiate a video teleconferencing session, the second location being different from the first location;synchronizing operation of a first transparent display at the first location and at least one first image sensor at the first location, the at least one first image sensor being operably coupled to the first transparent display;capturing images at the first location using the at least one first image sensor;generating a scaled image of a subject at the first location based on the images captured at the first location by the at least one first image sensor; andtransmitting the generated scaled image of the subject at the first location to the second video teleconferencing device at the second location for display on a second transparent display of the second video teleconferencing system at ...

22-01-2015 publication date

IMAGE PROCESSING APPARATUS, COMPUTER-READABLE MEDIUM STORING AN IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD

Number: US20150023554A1
Author: Kita Koji
Assignee: NK WORKS CO., LTD.

The image processing apparatus for detecting a moving object in a moving image includes a background generation unit configured to generate a background image of the moving image while updating the background image over time. The background generation unit includes a model derivation unit configured to derive a mixed distribution model having one or more distribution models for each pixel of interest, and a background value derivation unit configured to derive one or more background pixel values respectively corresponding to the one or more distribution models. The model derivation unit is configured to generate a new distribution model from pixel values of a plurality of pixels within a local region containing the pixel of interest in a first frame, and update the existing distribution model using a pixel value of the pixel of interest in a second frame that is different from the first frame. 1. An image processing apparatus configured to detect a moving object in a moving image , the apparatus comprising:a background generation unit configured to generate a background image of the moving image while updating the background image over time; anda moving object detection unit configured to detect the moving object in the moving image over time based on the background image, a model derivation unit configured to derive a mixed distribution model for each pixel of interest, the mixed distribution model having one or more distribution models depending on a situation; and', 'a background value derivation unit configured to derive one or more background pixel values respectively corresponding to the one or more distribution models based on the mixed distribution model for each pixel of interest, and, 'the background generation unit including generate a new distribution model from pixel values of a plurality of pixels within a local region containing the pixel of interest in a first frame that is contained in the moving image, and', 'update the existing distribution model ...
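For illustration, a single-Gaussian per-pixel background model with a fixed learning rate is sketched below. It shows the adaptively updated background idea but simplifies away the mixture model and the local-region initialization described in the claims.

# Simplified per-pixel Gaussian background model (assumptions: single Gaussian
# instead of a mixture, fixed learning rate, grayscale frames).
import numpy as np

class RunningBackground:
    def __init__(self, first_frame, alpha=0.02, k=2.5):
        f = first_frame.astype(np.float32)
        self.mean = f.copy()
        self.var = np.full_like(f, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(np.float32)
        diff = f - self.mean
        moving = (diff * diff) > (self.k ** 2) * self.var    # outside k sigma: moving object
        upd = ~moving
        # update the background model only where the scene looks static
        self.mean[upd] += self.alpha * diff[upd]
        self.var[upd] += self.alpha * (diff[upd] ** 2 - self.var[upd])
        return moving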

22-01-2015 publication date

PERSON CLOTHING FEATURE EXTRACTION DEVICE, PERSON SEARCH DEVICE, AND PROCESSING METHOD THEREOF

Number: US20150023596A1
Assignee: NEC Corporation

A person's region is detected from input video of a surveillance camera; a person's direction in the person's region is determined; the separability of person's clothes is determined to generate clothing segment separation information; furthermore, clothing features representing visual features of person's clothes in the person's region are extracted in consideration of the person's direction and the clothing segment separation information. The person's direction is determined based on a person's face direction, person's motion, and clothing symmetry. The clothing segment separation information is generated based on analysis information regarding a geometrical shape of the person's region and visual segment information representing person's clothing segments which are visible based on the person's region and background prior information. A person is searched out based on a result of matching between a clothing query text, representing a type and a color of person's clothes, and the extracted person's clothing features. 1. A person clothing feature extraction device comprising:a person region detection part that detects a person's region from input video;a person direction determination part that determines a person's direction in the person's region;a clothing segment separation part that determines a separability of person's clothes in the person's region so as to produce clothing segment separation information reflecting an automatic separability of clothing segments and a separable manner how clothing segments are separated;a clothing feature extraction part, considering the person's direction and the clothing segment separation information, which extracts clothing features representing visual features of person's clothes in the person's region with respect to each clothing segment when clothing segments are automatically separable but which extracts clothing features without separating them when clothing features are not automatically separable; anda clothing ...

26-01-2017 publication date

POINTING INTERACTION METHOD, APPARATUS, AND SYSTEM

Number: US20170024015A1
Assignee:

Embodiments of the present invention provide a pointing interaction method, apparatus, and system. The method includes: obtaining a hand image and an arm image; determining spatial coordinates of a fingertip according to the hand image, and determining spatial coordinates of an arm key portion according to the arm image; and performing converged calculation on the spatial coordinates of the fingertip and the spatial coordinates of the arm key portion, to determine two-dimensional coordinates, on a display screen, of an intersection point between fingertip pointing and the display screen. Therefore, the pointing interaction apparatus can implement high-precision pointing only by using the spatial coordinates of the fingertip and the spatial coordinates of the arm key portion, and the pointing has good realtimeness. 1. A pointing interaction method , comprising:obtaining a hand image and an arm image;determining spatial coordinates of a fingertip according to the hand image, and determining spatial coordinates of an arm key portion according to the arm image; andperforming convergence calculation on the spatial coordinates of the fingertip and the spatial coordinates of the arm key portion, to determine two-dimensional coordinates, on a display screen, of an intersection point between a fingertip pointing and the display screen.2. The method according to claim 1 , wherein the obtaining a hand image and an arm image comprises:obtaining a depth map acquired by a depth camera; andextracting the hand image and the arm image in the depth map according to a specified threshold of the depth map.3. The method according to claim 2 , after the obtaining a depth map acquired by a depth camera claim 2 , further comprising:performing denoising processing on the depth map.4. The method according to claim 1 , wherein the determining spatial coordinates of a fingertip according to the hand image comprises:extracting a hand contour according to the hand image; andif the fingertip is ...
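The converged calculation can be sketched as a ray-plane intersection, assuming the display screen is the plane z = 0, the pointing ray runs from the arm key portion through the fingertip, and coordinates are given in screen millimetres.

# Sketch of the pointing geometry (assumptions: screen is the plane z = 0,
# ray runs from the arm key point through the fingertip).
import numpy as np

def screen_intersection(fingertip_xyz, arm_key_xyz):
    p_tip = np.asarray(fingertip_xyz, dtype=float)
    p_arm = np.asarray(arm_key_xyz, dtype=float)
    direction = p_tip - p_arm                    # pointing direction
    if abs(direction[2]) < 1e-9:
        return None                              # ray parallel to the screen plane
    t = -p_tip[2] / direction[2]                 # solve p_tip + t * direction, z = 0
    if t < 0:
        return None                              # pointing away from the screen
    hit = p_tip + t * direction
    return hit[0], hit[1]                        # 2D coordinates on the display plane

# Example: fingertip 40 cm in front of the screen, elbow further back and lower
print(screen_intersection((100.0, 200.0, 400.0), (80.0, 120.0, 650.0)))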

26-01-2017 publication date

ITERATIVE RECOGNITION-GUIDED THRESHOLDING AND DATA EXTRACTION

Number: US20170024629A1
Assignee:

Techniques for improved binarization and extraction of information from digital image data are disclosed in accordance with various embodiments. The inventive concepts include independently binarizing portions of the image data on the basis of individual features, e.g. per connected component, and using multiple different binarization thresholds to obtain the best possible binarization result for each portion of the image data independently binarized. Determining the quality of each binarization result may be based on attempted recognition and/or extraction of information therefrom. Independently binarized portions may be assembled into a contiguous result. In one embodiment, a method includes: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest using different binarization thresholds; and extracting data from some or all of the plurality of binarized images. Corresponding systems and computer program products are also disclosed. 1. A computer-implemented method , comprising:identifying a region of interest within a digital image;generating a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; andextracting data from some or all of the plurality of binarized images.2. The computer-implemented method as recited in claim 1 , wherein the region of interest comprises a plurality of connected components; and one of the plurality of connected components; and', 'one of the plurality of binarization thresholds., 'wherein each of the plurality of binarized images corresponds to a different combination of3. The computer-implemented method as recited in claim 1 , wherein the region of interest comprises a plurality of connected components; andwherein extracting the data is performed on a per-component basis for at least some of the plurality of connected components.4 ...
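A sketch of the recognition-guided selection loop is shown below. The recognition_score callable is a hypothetical stand-in for an OCR confidence call, and the candidate threshold list is illustrative.

# Sketch of recognition-guided thresholding: the binarization whose recognition
# score is best wins. recognition_score is a hypothetical OCR-confidence hook.
import numpy as np

def best_binarization(region_gray, thresholds, recognition_score):
    best = None
    for t in thresholds:
        candidate = (region_gray > t).astype(np.uint8) * 255
        score = recognition_score(candidate)          # e.g. OCR confidence in [0, 1]
        if best is None or score > best[0]:
            best = (score, t, candidate)
    return best   # (score, threshold, binarized image) for the winning threshold

# Example with a trivial scoring stub that prefers a balanced black/white mix
img = (np.random.rand(32, 32) * 255).astype(np.uint8)
stub = lambda b: 1.0 - abs(np.mean(b == 0) - 0.5)
print(best_binarization(img, thresholds=range(64, 224, 32), recognition_score=stub)[1])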

28-01-2016 publication date

PAPER SHEET OR PRESENTATION BOARD SUCH AS WHITE BOARD WITH MARKERS FOR ASSISTING PROCESSING BY DIGITAL CAMERAS

Number: US20160026892A1
Author: HANSSON Olof
Assignee: WHITELINES AB

A surface object comprises a surface with a boundary, the surface having a background color. At least one optical marker is provided on the surface for assisting of image processing for improving appearance of an image of the surface with the boundary. At least one optical marker includes at least one color which is lighter or darker than the background color and the color different ΔE between the background color and the at least one color of the optical marker being between ΔE=2 and ΔE=18. 119-. (canceled)20. A surface object , comprising:a surface with a boundary, said surface having a background color;at least one optical marker on the surface for assisting of image processing for improving appearance of an image of the surface with the boundary; andsaid at least one optical marker including at least one color which is lighter or darker than the background color and a color difference ΔE between the background color and the at least one color of the optical marker being between ΔE=2 and ΔE=18.21. The surface object of comprising one of a sheet of paper claim 20 , a markerboard claim 20 , a presentation board claim 20 , a screen claim 20 , or a projected image.22. The surface object of wherein at least two of said optical markers are placed on the surface in a non-repetitive way.23. The surface object of wherein said the at least one optical marker at least one color has a lighter color than the background color.24. The surface object of wherein said background color is substantially white and the at least one optical market at least one color is a darker color.25. The surface object of wherein at least two of said optical markers are provided in a range to provide information about a type of said surface.26. The surface object of wherein at least two of said optical markers are provided in a range to provide information about a location in which to store the image.27. The surface object according to further comprising a pattern for assisting writing or drawing ...
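Checking whether a marker colour falls in the claimed ΔE band can be sketched as below, assuming the CIE76 ΔE formula (the description does not fix a particular ΔE variant) and sRGB inputs.

# Sketch of the Delta E band check (assumptions: CIE76 formula, 0-255 sRGB inputs).
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def marker_in_band(background_rgb, marker_rgb, low=2.0, high=18.0):
    # rgb2lab expects floats in [0, 1]
    bg = rgb2lab(np.array([[background_rgb]], dtype=float) / 255.0)
    mk = rgb2lab(np.array([[marker_rgb]], dtype=float) / 255.0)
    de = float(deltaE_cie76(bg, mk)[0, 0])
    return low <= de <= high, de

# Example: near-white page background vs. a slightly darker grey marker
print(marker_in_band((250, 250, 250), (238, 238, 238)))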

26-01-2017 publication date

VEHICLE-MOUNTED DISPLAY DEVICE, METHOD FOR CONTROLLING VEHICLE-MOUNTED DISPLAY DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM RECORDING PROGRAM

Number: US20170024861A1
Assignee:

A vehicle-mounted display device includes a background identifier, a background processor, and a display unit. The background identifier specifies the background of a camera image captured by a camera mounted in the vehicle based on the vanishing point in the camera image. The background processor performs background processing to reduce the clarity of the background specified by the background identifier. The display unit displays the camera image background-processed by the background processor. 1. A vehicle-mounted display device comprising:a background identifier which specifies a background of a camera image based on a vanishing point in the camera image, the camera image being captured by a camera mounted in a vehicle;a background processor which performs background processing to reduce clarity of the background specified by the background identifier; anda display unit which displays the camera image background-processed by the background processor.2. The vehicle-mounted display device according to claim 1 ,wherein the background identifier specifies, as the background, an edge existing on a straight line passing through the vanishing point and having a slope agreeing with a slope of the straight line passing through the vanishing point.3. The vehicle-mounted display device according to claim 1 ,wherein a first pixel exists on a straight line passing through a vanishing point in a first camera image captured at a first timing,a second pixel exists at a position shifted from a position of the first pixel existing on the straight line passing through the vanishing point in a second camera image captured later than the first timing to the vanishing point in the second camera image, andthe background identifier specifies, as the background, the second pixel having a correlation of a given value or more with the first pixel.4. The vehicle-mounted display device according to claim 3 ,wherein the second pixel is one of a plurality of second pixels, and the background ...

26-01-2017 publication date

METHOD AND APPARATUS FOR DETECTING ABNORMAL SITUATION

Number: US20170024874A1
Assignee: RICOH COMPANY, LTD.

A method and an apparatus for detecting an abnormal situation are disclosed. The method includes recognizing whether a detection target exists in a captured image; generating, based on the captured image, a three-dimensional point cloud of the detection target in the captured image, when the detection target exists; obtaining, based on the generated three-dimensional point cloud, one or more current posture features of the detection target; and determining, based on the current posture features and one or more predetermined posture feature standards, whether the abnormal situation exists, the posture feature standards being previously determined based on one or more common features when the detection target performs a plurality of abnormal actions. 1. A method for detecting an abnormal situation , the method comprising:recognizing whether a detection target exists in a captured image;generating, based on the captured image, a three-dimensional point cloud of the detection target in the captured image, when the detection target exists;obtaining, based on the generated three-dimensional point cloud, one or more current posture features of the detection target; anddetermining, based on the current posture features and one or more predetermined posture feature standards, whether the abnormal situation exists, the posture feature standards being previously determined based on one or more common features when the detection target performs a plurality of abnormal actions.2. The method for detecting an abnormal situation according to claim 1 ,wherein the current posture features include at least one of current volume of a circumscribed cube, a current center position, current projection mapping in three adjacent views, and current symmetry of top-view projection mapping of the detection target, which are obtained based on the generated three-dimensional point cloud,wherein the posture feature standards include at least one of standards for volume of a circumscribed cube, a ...

26-01-2017 publication date

DETERMINING DIMENSION OF TARGET OBJECT IN AN IMAGE USING REFERENCE OBJECT

Number: US20170024898A1
Assignee:

Systems and methods for determining dimensions of an object using a digital image. In particular, systems and methods for determining an actual dimension of a target object using a digital image of that object along with a reference object are disclosed. The digital image may be of a mirrored reflection of the reference object and the target object. 117-. (canceled)18. A method for determining separation between two regions in an image captured by a camera , the method comprising:identifying a digital picture captured by a camera of a mobile phone, wherein the digital picture includes an image of a user's body part and an image of the mobile phone;identifying a location of a first digital marker placed by the user on the digital picture;identifying a boundary of the body part image using the location of the first digital marker;computing a first distance between the identified boundary of the body part image and another boundary of the body part image;computing a second distance between boundaries of the mobile phone image;identifying a known physical dimension of the mobile phone;determining a scaling factor using the known physical dimension of the mobile phone and the second distance;determining a physical dimension of the user's body part by applying the scaling factor to the first distance.19. The method of claim 18 , wherein the method comprises:identifying a position coordinate of an initial location of the first digital marker placed by the user on the digital picture;identifying a location of an estimated end point of one section of the body part imageidentifying a position coordinate of the location of the estimated end point;determining a difference between the position coordinate of the initial location of the first digital marker and the position coordinate of the location of the estimated end point; andafter determining that the difference exceeds a permitted difference, instructing the user to move the first digital marker from the initial location, ...
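The scaling step reduces to simple arithmetic, sketched below with an assumed handset width standing in for the known physical dimension of the reference object.

# Sketch of the reference-object scaling step (assumption: 70.9 mm handset
# width used purely as an example of the known physical dimension).
def physical_dimension(body_part_pixels, phone_pixels, phone_known_mm=70.9):
    scale = phone_known_mm / float(phone_pixels)      # millimetres per pixel
    return body_part_pixels * scale

# Example: wrist spans 180 px, phone spans 420 px in the same mirrored photo
print(round(physical_dimension(180, 420), 1), "mm")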

28-01-2016 publication date

DETERMINING COORDINATES FOR AN AREA OF INTEREST ON A SPECIMEN

Number: US20160027164A1
Assignee:

Methods and systems for determining coordinates for an area of interest on a specimen are provided. One system includes one or more computer subsystems configured for, for an area of interest on a specimen being inspected, identifying one or more targets located closest to the area of interest. The computer subsystem(s) are also configured for aligning one or more images for the one or more targets to a reference for the specimen. The image(s) for the target(s) and an image for the area of interest are acquired by an inspection subsystem during inspection of the specimen. The computer subsystem(s) are further configured for determining an offset between the image(s) for the target(s) and the reference based on results of the aligning and determining modified coordinates of the area of interest based on the offset and coordinates of the area of interest reported by the inspection subsystem. 1. A system configured to determine coordinates for an area of interest on a specimen , comprising:an inspection subsystem comprising at least an energy source and a detector, wherein the inspection subsystem is configured to scan energy generated by the energy source over specimens while the detector detects energy from the specimens and generates images responsive to the detected energy; and for an area of interest on another specimen being inspected, identifying one or more targets located closest to the area of interest;', 'determining an offset between the one or more images for the one or more targets and the reference based on results of said aligning; and', 'aligning one or more images for the one or more targets to a reference for the other specimen, wherein the one or more images for the one or more targets and an image for the area of interest are acquired by the inspection subsystem during inspection of the other specimen, 'determining modified coordinates of the area of interest based on the offset and coordinates of the area of interest reported by the inspection ...

28-01-2016 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM

Number: US20160027183A1
Author: OHI Akinori
Assignee:

To specify a contour to be detected even when there are a plurality of candidates of the contour on the periphery of a photographing target, an image processing apparatus comprises: a determining unit which detects a plurality of candidate points being the candidates of the contour of a subject based on distance image information of the subject in an image, and determines an inspection-target area in the image based on the detected candidate points; and a specifying unit which detects line segments existing in the inspection-target area determined by the determining unit, based on luminance information of the inspection-target area, and specifies the line segment being the contour of the subject based on the candidate point from the detected line segments. 1. An image processing apparatus comprising:a determining unit configured to detect a plurality of candidate points being candidates of a contour of a subject based on distance image information of the subject in an image, and to determine an inspection-target area in the image based on the detected candidate points; anda specifying unit configured to detect line segments existing in the inspection-target area determined by the determining unit, based on luminance information of the inspection-target area, and to specify the line segment being the contour of the subject based on the candidate point from the detected line segments.2. The image processing apparatus according to claim 1 , further comprising:a first obtaining unit configured to obtain the image photographed from a photographing unit; anda second obtaining unit configured to obtain the measured distance image information from a distance measuring unit, whereinthe determining unit detects the candidate points being the candidates of the contour of the subject based on the distance image information, obtained by the second obtaining unit, of the subject in the image obtained by the first obtaining unit, and determines the inspection-target area in the ...

28-01-2016 publication date

TECHNIQUES FOR IMAGE SEGMENTATION

Number: US20160027187A1
Assignee: Xiaomi Inc.

Techniques for image segmentation can include receiving image data of an image including a background and a face of a person in a foreground, and determining a respective a priori probability of a head-shoulder foreground pixel appearing per pixel of the plurality of pixels, according to a positioning result of a plurality of exterior contour points of the face. The techniques can also include selecting foreground and background pixels of the plurality of pixels, according to at least the a priori probabilities, and determining respective color likelihood probabilities of the foreground and the background, according to color feature vectors of the selected pixels. The techniques can also include determining respective posteriori probabilities of at least part of the foreground and at least part of the background, according to the a priori probabilities and the respective color likelihood probabilities. The techniques can also include performing segmentation on the plurality of pixels, according to the respective posteriori probabilities. 1. A method for segmenting an image , comprising:receiving image data of an image including a background and a face of a person in a foreground, the image data including data representative of a plurality of pixels of the image and a positioning result of a plurality of exterior contour points of the face;determining a respective a priori probability of a foreground pixel appearing per pixel of the plurality of pixels, according to the positioning result of the plurality of exterior contour points of the face;selecting foreground pixels and background pixels of the plurality of pixels, according to the a priori probabilities, a foreground probability threshold, and a background probability threshold;determining a first color likelihood probability of the foreground and a second color likelihood probability of the background, according to color feature vectors of the selected foreground pixels and the selected background pixels; ...
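The posterior step can be sketched as a per-pixel application of Bayes' rule, assuming the prior map and the two colour-likelihood maps have already been computed; the 0.5 cut for the final segmentation is an assumption.

# Sketch of the posterior combination only (assumptions: precomputed per-pixel
# prior and colour likelihoods, 0.5 decision threshold).
import numpy as np

def posterior_segmentation(prior_fg, like_fg, like_bg, cut=0.5):
    # prior_fg: P(foreground) per pixel; like_fg / like_bg: colour likelihoods per pixel
    post_fg = prior_fg * like_fg
    post_bg = (1.0 - prior_fg) * like_bg
    posterior = post_fg / np.clip(post_fg + post_bg, 1e-12, None)
    return posterior >= cut       # True = head-shoulder foreground

# Example with toy per-pixel values
prior = np.array([[0.8, 0.2], [0.6, 0.1]])
lf = np.array([[0.9, 0.3], [0.5, 0.2]])
lb = np.array([[0.1, 0.7], [0.4, 0.9]])
print(posterior_segmentation(prior, lf, lb))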

28-01-2016 publication date

DISPLAYING METHOD, ANIMATION IMAGE GENERATING METHOD, AND ELECTRONIC DEVICE CONFIGURED TO EXECUTE THE SAME

Number: US20160027202A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method for of playing an animation image, the method including: obtaining a plurality of images; displaying a first image of the plurality of images; detecting a first event as a trigger to play the animation image for a first object of the first image; and playing the animation image for the first object using the plurality of images. 1. A method of playing an animation image , the method comprising:obtaining a plurality of images;displaying a first image of the plurality of images;detecting a first event as a trigger to play the animation image for a first object of the first image; andplaying the animation image for the first object using the plurality of images.2. The method of claim 1 , further comprising maintaining a still display of objects within the first image other than the first object.3. The method of claim 1 , further comprising segmenting the first object in each of the plurality of images.4. The method of claim 3 , wherein the segmenting comprises:obtaining a respective depth map for each of the plurality of images; andsegmenting the first object in each of the plurality of images based on the respective depth maps.5. The method of claim 3 , wherein the segmenting comprises:obtaining respective image information for each of the plurality of images; andsegmenting the first object in each of the plurality of images based on the respective image information.6. The method of claim 1 , wherein the detecting comprises detecting a touch on an area corresponding to the first object of the displayed first image or detecting a user's view corresponding to the area corresponding to the first object of the displayed first image.7. The method of claim 1 , wherein the detecting comprises detecting an object of the first image having a movement exceeding a threshold of the first image.8. The method of claim 1 , wherein the detecting comprises detecting at least one of a sound as the first event claim 1 , motion information as the first event claim 1 , and an ...

26-01-2017 publication date

Event Detection System

Number: US20170024986A1
Author: Austin Thomas Robert
Assignee:

A method and apparatus for detecting an occurrence of an event of interest. The apparatus comprises a surveillance system, a detector, and an analyzer. The surveillance system monitors subjects within an environment to generate monitoring data. The detector detects a number of indicator instances exhibited by at least a portion of the subjects using the monitoring data. Each of the number of indicator instances is an instance of a corresponding event indicator in a set of pre-selected event indicators. The analyzer evaluates the number of indicator instances and additional information to determine whether at least a portion of the number of indicator instances meets an event detection threshold, thereby indicating a detection of an occurrence of an event of interest. 1. An apparatus comprising:a surveillance system that monitors subjects within an environment to generate monitoring data;a detector that analyzes the monitoring data to identify a number of indicator instances exhibited by at least a portion of the subjects, wherein each of the number of indicator instances is an instance of a corresponding event indicator in a set of pre-selected event indicators; andan analyzer that evaluates the number of indicator instances identified to determine whether at least a portion of the number of indicator instances meets an event detection threshold, thereby indicating a detection of an occurrence of an event of interest.2. The apparatus of claim 1 , wherein a determination that the at least a portion of the number of indicator instances meets the event detection threshold enables a triggering of an alarm system such that the alarm system is triggered without significant delay after the occurrence of the event of interest.3. The apparatus of further comprising:a data store that stores additional information that the analyzer uses in evaluating whether the at least a portion of the number of indicator instances meets the event detection threshold.4. The apparatus of ...

25-01-2018 publication date

SYSTEM AND METHOD FOR SEGMENTING MEDICAL IMAGE

Number: US20180025512A1
Author: Feng Tao, LI Hongdi, ZHU Wentao

A method for segmenting a medical image is disclosed. The method includes acquiring MR image and PET data during a scan of the object, acquiring an air/bone ambiguous region in the MR image, the air/bone ambiguous region including air voxels and bone voxels undistinguished from each other. The method also includes assigning attenuation coefficients to the voxels of the plurality of regions and generating an attenuation map. The method further includes iteratively reconstructing the PET data and the attenuation map to generate a PET image and an estimated attenuation map. The method further includes reassigning attenuation coefficients to the voxels of the air/bone ambiguous region based on the estimated attenuation map, and distinguishing the bone voxels and air voxels in the air/bone ambiguous region. 1. A method for segmenting a medical image , comprising:acquiring MR data during a scan of an object using an MR scanner;reconstructing an MR image using the MR data, wherein the MR image comprises a plurality of voxels;acquiring PET data during a scan of the object using a PET scanner;segmenting the MR image into a plurality of regions which includes at least an air/bone ambiguous region, wherein the air/bone ambiguous region including air voxels and bone voxel undistinguished from each other;assigning attenuation coefficients to the voxels of the plurality of regions and generating an attenuation map, wherein the voxels of the air/bone ambiguous region are assigned specific attenuation coefficients;iteratively reconstructing the PET data and the attenuation map to generate a PET image and an estimated attenuation map;reassigning attenuation coefficients to the voxels of the air/bone ambiguous region based on the estimated attenuation map; anddistinguishing the bone voxels and the air voxels in the air/bone ambiguous region.2. The method of claim 1 , wherein the plurality of regions of the MR image comprises a soft-tissue region claim 1 , the soft-tissue region ...

28-01-2016 publication date

Image processing apparatus that sets moving image area, image pickup apparatus, and method of controlling image processing apparatus

Number: US20160028954A1
Author: Takahiro Abe
Assignee: Canon Inc

An image processing apparatus that properly sets a moving image area according to an image. An image processor generates a combined image, by setting one of a plurality of images as a basic image and disposing a moving image generated from the plurality of images in a moving image area which is designated in part of the basic image. The image processor sets a still area in part of the basic image. A system controller causes the basic image to be displayed on a display section. A user performs a touch operation on the display section displaying the basic image, whereby the moving image area is designated. When the moving image area is designated in a manner overlapping the still area, the image processor performs image combining by deleting the overlapping area from the moving image area.

28-01-2016 publication date

METHOD OF REPLACING OBJECTS IN A VIDEO STREAM AND COMPUTER PROGRAM

Number: US20160028968A1
Author: AFFATICATI Jean-Luc
Assignee:

The invention relates to a method for replacing objects in a video stream. A stereoscopic view of the field is created. It serves to measure the distance from the camera and to determine the foreground, background and occluding objects. The stereoscopic view can be provided by a 3D camera or it can be constructed using the signal coming from a single camera or more. The texture of the objects to be replaced can be static or dynamic. The method does not require any particular equipment to track the camera position and it can be used for live content as well as archived material. The invention takes advantage of the source material to be replaced in the particular case when the object to be replaced is filled electronically. 1. A method for replacing objects in a video stream comprising: 'analyzing the one or more images to extract the camera pose parameters, the camera pose parameters at least including x, y, and z axis coordinates and direction of the camera;', 'receiving one or more images from at least one camera'}creating a stereoscopic view using a depth table for objects viewed by the camera, wherein the depth table defines a distance along the z-axis from a camera lens to each object in a field of view of the camera, the depth table comprising a plurality of pixels having z values, wherein pixels are grouped into objects based on the z values;identifying a foreground object that occludes a background object using the stereoscopic view and the depth table;detecting foreground object contours;creating an occlusion mask using the foreground object contours;calculating a replacement image using the camera pose parameters; and applying the occlusion mask to the replacement image.2. The method according to claim 1 , wherein the stereoscopic view is created based on images received from at least two cameras.3. The method according to claim 1 , wherein extracting the camera pose parameters includes:detecting if a cut with between a current image and a previous image ...
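Building the occlusion mask from the depth table can be sketched as below, assuming a known depth for the surface being replaced and a small safety margin; both values are illustrative.

# Sketch of an occlusion mask from a per-pixel depth table (assumptions: fixed
# depth of the replaced surface, 15 cm safety margin).
import numpy as np

def composite_replacement(frame, replacement, depth, surface_depth_m, margin_m=0.15):
    occluding = depth < (surface_depth_m - margin_m)    # foreground objects in front
    out = replacement.copy()
    out[occluding] = frame[occluding]                   # occluders stay on top of the texture
    return out, occluding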

29-01-2015 publication date

System and Method for Creating a Virtual Backdrop

Number: US20150029216A1
Assignee:

Some implementations may provide a method for generating a portrait of a subject for an identification document, the method including: receiving a photo image of the subject, the photo image including the subject's face in a foreground against an arbitrary background; determining the arbitrary background of the photo image based on the photo image alone and without user intervention; masking the determined background from the photo image; and subsequently generating the portrait of the subject for the identification document of the subject, the portrait based on the photo image with the determined background masked.

24-01-2019 publication date

Multi-Angle Product Imaging Device

Number: US20190026585A1
Assignee: Conduent Business Services LLC

A system for acquiring multi-angle images of a product includes a workstation having a working surface for placing a product, a camera supporting member having a vertical axis, and an image capturing device movably attached to the camera supporting member so that it may move along the vertical axis of the camera supporting member. The system captures and analyzes a digital image of a product to detect the vertical center of the product, and adjusts the position of the image capturing device along the vertical axis so that the vertical center of the product is proximate to the vertical center of the image. The system may also have a turntable and additionally rotate the turntable at multiple capturing angles and capture one or more additional digital images of the product at various capturing angles and store the one or more additional images in a product database.

02-02-2017 publication date

COMPUTER-VISION BASED SECURITY SYSTEM USING A DEPTH CAMERA

Number: US20170032192A1
Assignee:

A method for securing an environment. The method includes obtaining a two-dimensional (2D) representation of a three-dimensional (3D) environment. The 2D representation includes a 2D frame of pixels encoding depth values of the 3D environment. The method further includes identifying a set of foreground pixels in the 2D representation, defining a foreground object based on the set of foreground pixels. The method also includes classifying the foreground object, and taking an action based on the classification of the foreground object. 121.-. (canceled)22. A method for configuring a depth-sensing monitoring system , comprising:obtaining, from a depth-sensing camera, a first two-dimensional (2D) representation of a three-dimensional (3D) environment, wherein the first 2D representation comprises a 2D frame of pixels encoding depth values of the 3D environment;obtaining, from a video camera, a second 2D representation of the three-dimensional (3D) environment, wherein the second 2D representation comprises a 2D frame of pixels encoding one selected from a group consisting of color values and brightness values of the 3D environment;identifying, in the first 2D representation, regions that are within a tracking range of the depth-sensing camera; anddisplaying, in the second 2D representation, the regions that are within the tracking range of the depth-sensing camera.23. The method of claim 22 ,wherein the depth-sensing camera and the video camera are co-aligned; andwherein there is a correspondence between the first 2D representation and the second 2D representation.24. The method of further comprising: 'wherein regions that are outside the tracking range of the depth sensing camera are displayed in a second format, different from a first format used for displaying the regions that are within the tracking range of the depth sensing camera.', 'displaying, in the second 2D representation, regions that are outside the tracking range of the depth sensing camera,'}25. The ...

02-02-2017 publication date

VIDEO MONITORING METHOD, VIDEO MONITORING SYSTEM AND COMPUTER PROGRAM PRODUCT

Number: US20170032194A1
Author: LI CHAO, SHANG Zeyuan, Yu Gang
Assignee:

The present disclosure relates to a video monitoring method based on a depth video, a video monitoring system and a computer program product. The video monitoring method comprises: obtaining video data collected by a video collecting apparatus; determining an object as a monitoring target based on the video data; and extracting feature information of the object, wherein the video data is video data containing depth information. 1. A video monitoring method comprising:obtaining video data collected by a video collecting apparatus;determining an object as a monitoring target based on the video data; andextracting feature information of the object,wherein the video data is video data containing depth information.2. The video monitoring method according to claim 1 , further comprising:configuring the video collecting apparatus and determining coordinate parameters of the video collecting apparatus.3. The video monitoring method according to claim 2 , wherein determining coordinate parameters of the video collecting apparatus comprise:selecting multiple reference points on a predetermined reference plane;determining transformation relationship between a camera coordinate system of the video collecting apparatus and a world coordinate system based on coordinate information of the multiple reference points; anddetermining the coordinate parameters of the video collecting apparatus based on the transformation relationship.4. The video monitoring method according to claim 1 , wherein determining an object as a monitoring target based on the video data comprises:determining background information in the video data;determining foreground information in each frame of the video data based on the background information;obtaining edge profile information of a foreground area corresponding to the foreground information; anddetermining the object based on the edge profile information.5. The video monitoring method according to claim 4 , wherein determining the object based on the ...

02-02-2017 publication date

ABANDONED OBJECT DETECTION APPARATUS AND METHOD AND SYSTEM

Number: US20170032514A1
Author: Zhang Nan
Assignee: FUJITSU LIMITED

An abandoned object detection apparatus and method and a system where the apparatus includes: a detecting unit configured to match each pixel of an acquired current frame with its background model, mark unmatched pixels, taken as foreground pixels, on a foreground mask, add 1 to a foreground counter to which each foreground pixel corresponds, and update the background model; a marking unit configured to, for each foreground pixel, mark a point corresponding to the foreground pixel on an abandon mask when a value of the foreground counter to which the foreground pixel corresponds is greater than a second threshold value; and a mask processing unit configured to, for each point on the abandon mask, process the abandon mask according to its background model and buffer background or foreground mask. The buffer background of the abandoned object is provided in this application, hence, when the abandoned object leaves and how long it stays may be judged, and interference of occlusion and ghost may also be avoided, thereby solving a problem of illegal road occupation identification. 1. An abandoned object detection apparatus , comprising:a detecting unit configured to match each pixel of an acquired current frame with a corresponding background model, mark unmatched pixels, taken as foreground pixels, on a foreground mask, add 1 to a foreground counter to which each foreground pixel corresponds, and update the background model;a marking unit configured to, for each foreground pixel, mark a point corresponding to the foreground pixel on an abandon mask when a value of the foreground counter to which the foreground pixel corresponds is greater than a threshold value; anda mask processing unit configured to, for each point on the abandon mask, process the abandon mask according to a corresponding background model and one of a buffer background and the foreground mask.2. The apparatus according to claim 1 , wherein the apparatus further comprises:an image processing unit ...
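The foreground-counter logic can be sketched as below, with OpenCV's MOG2 subtractor standing in for the per-pixel background models and an illustrative persistence threshold.

# Sketch of the per-pixel foreground counter and abandon mask (assumptions:
# MOG2 as the background model, ~5 s persistence threshold at 30 fps).
import cv2
import numpy as np

class AbandonDetector:
    def __init__(self, stay_frames=150):
        self.sub = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
        self.stay_frames = stay_frames
        self.counter = None

    def update(self, frame_bgr):
        fg = self.sub.apply(frame_bgr) == 255
        if self.counter is None:
            self.counter = np.zeros(fg.shape, dtype=np.int32)
        self.counter[fg] += 1                       # per-pixel foreground counter
        self.counter[~fg] = 0                       # reset once the background matches again
        return self.counter > self.stay_frames      # abandon mask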

02-02-2017 publication date

SYSTEMS AND METHODS FOR AUTOMATED SEGMENTATION OF INDIVIDUAL SKELETAL BONES IN 3D ANATOMICAL IMAGES

Number: US20170032518A1
Author: Behrooz Ali, Kask Peet
Assignee:

Presented herein, in certain embodiments, are approaches for robust bone splitting and segmentation in the context of small animal imaging, for example, microCT imaging. In certain embodiments, a method for calculating and applying single and hybrid second-derivative splitting filters to gray-scale images and binary bone masks is described. These filters can accurately identify the split lines/planes of the bones even for low-resolution data, and hence accurately morphologically disconnect the individual bones. The split bones can then be used as seeds in region growing techniques such as marker-controlled watershed segmentation. With this approach, the bones can be segmented with much higher robustness and accuracy compared to prior art methods. 1. A method of performing image segmentation to automatically differentiate individual bones in an image of a skeleton or partial skeleton of a subject , the method comprising:receiving, by a processor of a computing device, an image of a subject;applying, by the processor, one or more second derivative splitting filters to the image to produce a split bone mask for the image;determining, by the processor, a plurality of split binary components of the split bone mask by performing one or more morphological processing operations;optionally, quantifying, by the processor, a volume of each split binary component and eliminating one or more components having unacceptably small volume; andperforming, by the processor, a region growing operation using the split bone mask components as seeds, thereby producing a segmentation map differentiating individual bones in the image.2. The method of claim 1 , wherein the one or more second derivative splitting filters comprises at least one member selected from the group consisting of a LoG (Laplacian of Gaussian) claim 1 , a HEH (highest Hessian eigenvalue claim 1 , with preliminary Gaussian filtering) claim 1 , and a LEH (lowest Hessian eigenvalue claim 1 , with preliminary Gaussian ...
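A speculative sketch of the splitting idea using only an off-the-shelf Laplacian-of-Gaussian filter from scipy: voxels with a strong LoG response are treated as split lines/planes, removed from the binary bone mask, and the remaining components are labelled as seeds for a later region-growing or watershed step. The sigma, threshold, and sign convention are illustrative, not the patented filters:

import numpy as np
from scipy import ndimage as ndi

def split_bone_seeds(volume, bone_mask, sigma=1.5, log_thresh=0.0):
    """Disconnect touching bones with a Laplacian-of-Gaussian splitting filter.

    volume:    3D gray-scale image (e.g. microCT), float array.
    bone_mask: 3D boolean mask of all bone voxels.
    Returns (seeds, n): a labelled image of split components and their count;
    the seeds would feed a region-growing / watershed step."""
    log = ndi.gaussian_laplace(np.asarray(volume, dtype=float), sigma=sigma)
    # for bright bone on a dark background, narrow gaps and joints tend to
    # give a positive LoG response (illustrative sign convention / threshold)
    split_planes = log > log_thresh
    split_mask = bone_mask & ~split_planes          # remove split lines/planes
    seeds, n = ndi.label(split_mask)                # morphologically disconnected bones
    return seeds, n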

02-02-2017 publication date

IMAGE PROCESSING DEVICE, METHOD FOR OPERATING THE SAME, AND ENDOSCOPE SYSTEM

Number: US20170032539A1
Author: Kuramoto Masayuki
Assignee: FUJIFILM Corporation

First RGB image signals are subjected to an input process. First color information is obtained from the first RGB image signals. Second color information is obtained by a selective expansion processing in which color ranges except a first color range are moved in a feature space formed by the first color information, the first color information that represents each object in a body cavity being distributed in the each color range. The second color information is converted to second RGB signals. A red display signal, a green display signal and a blue display signal are obtained by applying a pseudo-color display process to the second RGB signals. 1. An image processing device comprising:an input processor configured to perform an input process of a first red signal, a first green signal and a first blue signal;a color information obtaining processor configured to obtain first color information from the first red signal, the first green signal and the first blue signal;an expansion processor configured to obtain second color information by a selective expansion processing in which color ranges except a first color range are moved in a feature space formed by the first color information, the first color information that represents each object in a body cavity being distributed in the each color range;an RGB signal converter configured to convert the second color information to a second red signal, a second green signal and a second blue signal; anda pseudo-color display processor configured to obtain a red display signal, a green display signal and a blue display signal by applying a pseudo-color display process to the second red signal, the second green signal and the second blue signal.2. The image processing device according to claim 1 , wherein the pseudo-color display process is a first color tone converting process to convert the second blue signal to the blue display signal and the green display signal and convert the second green signal to the red display ...

02-02-2017 publication date

TECHNIQUE FOR MORE EFFICIENTLY DISPLAYING TEXT IN VIRTUAL IMAGE GENERATION SYSTEM

Number: US20170032575A1
Assignee: Magic Leap, Inc.

A virtual image generation system and method of operating same is provided. An end user is allowed to visualize the object of interest in a three-dimensional scene. A text region is spatially associated with the object of interest. A textual message that identifies at least one characteristic of the object of interest is generated. A textual message is streamed within the text region. 1. A method of operating a virtual image generation system , the method comprising:allowing an end user to visualize a three-dimensional scene;spatially associating a text region within a field of view of the user;generating a textual message; andstreaming the textual message within the text region.2. The method of claim 1 , wherein streaming the textual message within the text region comprises displaying the textual message only one word at a time.3. The method of claim 1 , wherein streaming the textual message within the text region comprises displaying the textual message at least two words at a time while emphasizing only one of the at least two displayed words.4. The method of claim 1 , further comprising sensing a gestural command from the end user claim 1 , wherein streaming the textual message is controlled by the gestural command.5. The method of claim 1 , further comprising allowing the end user to visualize an object of interest in the three-dimensional scene claim 1 , wherein the text region is spatially associated with the object of interest claim 1 , and the textual image identifies at least one characteristic of the object of interest.6. The method of claim 5 , wherein the object of interest is an actual object.7. The method of claim 6 , wherein allowing the end user to visualize the actual object comprises allowing the end user to visualize directly light from the actual object.8. The method of claim 5 , wherein the object of interest is movable claim 5 , and wherein spatially associating the text region with the object of interest comprises linking the text region with ...

04-02-2016 publication date

IMAGE PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING AN IMAGE PROCESSING PROGRAM

Number: US20160034753A1
Author: Harada Hiroyuki
Assignee:

In an image processing apparatus, a character recognizing unit identifies a character image in a document image. A font matching unit determines a character code and a font type corresponding to the identified character image. A fore-and-background setting unit sets the document image as a background image and sets a standard character image based on the determined character code and the determined font type. A background image correcting unit (a) deletes a deletion area in the background image, the deletion area taking a same position as the character image or the standard character image, (b) interpolates a differential area between the character image and the standard character image in a specific neighborhood area that contacts with the deletion area on the basis of the background image, and (c) interpolates the deletion area on the basis of the back ground image. 1. An image processing apparatus , comprising:a character recognizing unit configured to identify a character image in a document image;a font matching unit configured to determine a character code and a font type corresponding to the identified character image;a fore-and-background setting unit configured to set the document image as a background image and set a standard character image based on the determined character code and the determined font type as a foreground image; anda background image correcting unit configured to (a) delete a deletion area in the background image, the deletion area taking a same position as the character image or the standard character image, (b) interpolate a differential area between the character image and the standard character image in a specific neighborhood area that contacts with the deletion area on the basis of the background image, and (c) interpolate the deletion area on the basis of the back ground image.2. The image processing apparatus according to claim 1 , wherein:(c1) if the deletion area is an open area that is partially surrounded with an object ...

04-02-2016 publication date

ABNORMALITY DETECTION APPARATUS, ABNORMALITY DETECTION METHOD, AND RECORDING MEDIUM STORING ABNORMALITY DETECTION PROGRAM

Number: US20160034784A1
Assignee: RICOH COMPANY, LTD.

An abnormality detection apparatus, an abnormality detection method, and an abnormality detection program are provided. Each of the abnormality detection apparatus, an abnormality detection method, and an abnormality detection program extracts a target image to be monitored and a reference image, respectively, from target video to be monitored, detects an abnormality based on a difference between the target image to be monitored and the reference image, and displays an image indicating a difference between the target image to be monitored and the reference image on a monitor. Moreover, an abnormality detection system is provided including the abnormality detection apparatus, a video that captures an image of a target to be monitored, and a monitor. 1. An abnormality detection apparatus comprising:a still image extraction unit configured to extract a target image to be monitored and a reference image, respectively, from target video to be monitored;an abnormality detector configured to detect an abnormality based on a difference between the target image to be monitored and the reference image; anda display controller configured to display an image indicating a difference between the target image to be monitored and the reference image on a monitor.2. The abnormality detection apparatus according to claim 1 , whereinthe still image extraction unit extracts a frame where a difference with a temporally neighboring frame in the target video to be monitored is smaller than a first threshold as a still image.3. The abnormality detection apparatus according to claim 2 , whereinwhen the still image includes a plurality of continuous still images, the still image extraction unit specifies a still image interval that includes the continuous still images, andthe abnormality detector performs abnormality detection based on a difference between the target image to be monitored and the reference image that belong to the specified still image interval.4. The abnormality detection ...
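The still-image test of claim 2 (keep a frame whose difference with a temporally neighboring frame is smaller than a first threshold) reduces to a frame difference; a short numpy sketch with an assumed (T, H, W) grayscale video and an illustrative threshold:

import numpy as np

def extract_still_frame_indices(frames, first_threshold=2.0):
    """Indices of frames whose mean absolute difference with the previous
    frame is below first_threshold, i.e. candidate still images."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    # diffs[i] compares frame i+1 with frame i, so shift the indices by one
    return np.flatnonzero(diffs < first_threshold) + 1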

04-02-2016 publication date

METHOD FOR DETECTING AND QUANTIFYING CEREBRAL INFARCT

Number: US20160035085A1
Assignee:

A method for detecting a cerebral infarct includes receiving an image of a brain of a subject from a magnetic resonance imaging scanner, wherein the image has a plurality of voxels, and each of the voxels has a voxel intensity. Then, the voxel intensities are normalized, wherein the normalized voxel intensities have a distribution peak, and the normalized voxel intensity of the distribution peak is I. A threshold is determined, which is the I+ a value. Voxel having the normalized voxel intensity larger than the threshold is selected, wherein the selected voxel is the cerebral infarct. A method for quantifying the cerebral infarct is also provided. 1. A method for detecting a cerebral infarct , comprising:receiving an image of a brain of a subject from a magnetic resonance imaging scanner, wherein the image has a plurality of voxels, and each of the voxels has a voxel intensity;{'sub': 'peak', 'normalizing the voxel intensities to make the voxel intensities disperse in a standard range, wherein the normalized voxel intensities have a distribution peak, and the normalized voxel intensity of the distribution peak is I;'}{'sub': peak', 'peak, 'determining a threshold, which is the I+ a value, wherein the value is a difference value between a minimum normalized voxel intensity of the cerebral infarct diagnosed by a semi-automatic segmentation method and the I; and'}selecting voxel having the normalized voxel intensity larger than the threshold, wherein the selected voxel is the cerebral infarct.2. The method of claim 1 , wherein receiving the image of the brain of the subject from the magnetic resonance imaging scanner comprises determining a brain mask of the subject in the image.3. The method of wherein the brain mask comprises an inner surface and an outer surface of a skull of the subject.4. The method of claim 1 , wherein the image is obtained by diffusion-weighted imaging (DWI).5. The method of claim 1 , wherein the standard range is (0 claim 1 , 1).6. The method ...
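The detection rule itself (normalize intensities into a standard range, locate the distribution peak I_peak, and keep voxels above I_peak plus a calibration value) takes only a few lines of numpy; the brain mask and the offset value are assumed to be given:

import numpy as np

def detect_infarct_voxels(image, brain_mask, value):
    """Select candidate infarct voxels in a diffusion-weighted volume.

    image:      3D array of voxel intensities.
    brain_mask: boolean mask of brain voxels.
    value:      offset added to the distribution peak I_peak (assumed to be
                calibrated beforehand, e.g. against semi-automatic segmentations)."""
    voxels = image[brain_mask].astype(float)
    # normalize intensities into the standard range (0, 1)
    norm = (voxels - voxels.min()) / (voxels.max() - voxels.min())
    hist, edges = np.histogram(norm, bins=256)
    i_peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])   # distribution peak
    threshold = i_peak + value
    selected = np.zeros_like(brain_mask, dtype=bool)
    selected[brain_mask] = norm > threshold
    return selected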

04-02-2016 publication date

VESSEL SEGMENTATION

Number: US20160035103A1
Assignee:

The present invention relates to vessel segmentation. In order to provide an improved way of providing segmentation information with reduced X-ray dose, an X-ray image processing device () is provided that comprises an interface unit (), and a data processing unit (). The interface unit is configured to provide a sequence of time series angiographic 2D images of a vascular structure obtained after a contrast agent injection. The data processing unit is configured to determine an arrival time index of a predetermined characteristic related to the contrast agent injection for each of a plurality of determined pixels along the time series, and to compute a connectivity index for each of the plurality of the determined pixels based on the arrival time index. The data processing unit is configured to generate segmentation data of the vascular structure from the plurality of the determined pixels, wherein the segmentation data is based on the connectivity index of the pixels. The data processing unit is configured to provide the segmentation data for further purposes. 1. An X-ray image processing device , comprising:an interface unit; anda data processing unit;wherein the interface unit is configured to provide a sequence of time series angiographic 2D images of a vascular structure obtained after a contrast agent injection;wherein the data processing unit is configured to determine an arrival time index of a predetermined characteristic related to the contrast agent injection for each of a plurality of determined pixels along the time series; to compute a connectivity index for each of the plurality of the determined pixels based on the arrival time index; to generate segmentation data of the vascular structure from the plurality of the determined pixels, wherein the segmentation data is based on the connectivity index of the pixels; and to provide the segmentation data for further purposes.2. X-ray image processing device according to claim 1 , wherein the data ...

04-02-2016 publication date

MOVEMENT INDICATION

Number: US20160035105A1
Assignee:

A method, system and apparatus for image capture, analysis and transmission are provided. A link aggregation method involves identifying controller network ports to a source connected to the same subnetwork; producing packets associating corresponding controller network ports selected by the source CPU for substantially uniform selection; and transmitting the packets to their corresponding network ports. An image analysis method involves producing by a camera an indication whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing at the controller the image data and the indication in association therewith. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof. 1. An image analysis system comprising:a central processing unit; capture a plurality of images;', 'generate an indication of whether a first region of a first image of the plurality of images differs by a threshold from a corresponding second region of a second image of the plurality of images, wherein the threshold difference indicates a likelihood that motion has occurred; and, 'a camera communicatively coupled to the central processing unit, and in conjunction with the central processing unit, configured toa transceiver communicatively coupled to the central processing unit and the camera, configured to transmit, to a server, image data associated with at least one of the plurality of images and the generated indication of whether the first region differs by the threshold from the second region.2. The system of claim 1 , wherein the camera is configured to capture the plurality of images sequentially and digitize the plurality of images to produce a ...
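The camera-side indication described here, whether a region of the current image differs by a threshold extent from the corresponding region of a reference image, can be sketched as a block-wise comparison; the block size and threshold below are illustrative assumptions:

import numpy as np

def motion_indications(current, reference, block=16, threshold=10.0):
    """Boolean grid: True where a block of `current` differs from the same
    block of `reference` by more than `threshold` (mean absolute difference),
    i.e. where motion is likely to have occurred."""
    h, w = current.shape
    gh, gw = h // block, w // block
    cur = current[:gh * block, :gw * block].astype(float)
    ref = reference[:gh * block, :gw * block].astype(float)
    diff = np.abs(cur - ref)
    blocks = diff.reshape(gh, block, gw, block).mean(axis=(1, 3))
    return blocks > threshold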

01-02-2018 publication date

MULTI-ANGLE PRODUCT IMAGING DEVICE

Number: US20180032832A1
Assignee:

A system for acquiring multi-angle images of a product includes a workstation having a working surface for placing a product, a camera supporting member having a vertical axis, and an image capturing device movably attached to the camera supporting member so that it may move along the vertical axis of the camera supporting member. The system captures and analyzes a digital image of a product to detect the vertical center of the product, and adjusts the position of the image capturing device along the vertical axis so that the vertical center of the product is proximate to the vertical center of the image. The system may also have a turntable and additionally rotate the turntable at multiple capturing angles and capture one or more additional digital images of the product at various capturing angles and store the one or more additional images in a product database. 1. A method of acquiring multi-angle images of a product , comprising:placing a product on a workstation comprising at least one background wall and a camera supporting member having a vertical axis, wherein the camera supporting member is configured to movably attach an image capturing device so that the image capturing device may be moved along the vertical axis of the camera supporting member;capturing, by the image capturing device, a first image of the product;analyzing, by a computing device, the captured first image and detecting a vertical center of the product;determining, by the computing device, if the vertical center of the product is within a proximate distance to the vertical center of the first image;upon determining that the vertical center of the product is not within the proximate distance to the vertical center of the first image, causing, by the computing device, the image capturing device to move along the vertical axis a location based on the distance between the vertical center of the product and the vertical center of the first image;capturing, by the image capturing device, one or ...
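One simple way to realize the vertical-center check described above, assuming the product appears darker than a bright background wall; the background level and tolerance are illustrative:

import numpy as np

def vertical_center_offset(image, background_level=245, tolerance=10):
    """Return (offset_pixels, needs_move) for a grayscale product shot.

    Pixels darker than background_level are treated as product; the product's
    vertical center is the mean row index of those pixels and is compared
    with the vertical center of the frame."""
    rows, _ = np.nonzero(image < background_level)
    if rows.size == 0:
        return 0.0, False                       # no product detected
    product_center = rows.mean()
    frame_center = (image.shape[0] - 1) / 2.0
    offset = product_center - frame_center
    return offset, abs(offset) > tolerance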

01-02-2018 publication date

METHODS AND SYSTEMS OF PERFORMING ADAPTIVE MORPHOLOGY OPERATIONS IN VIDEO ANALYTICS

Number: US20180033152A1
Assignee:

Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive morphology operations. A first erosion function can be performed on a foreground mask of a video frame, including setting one or more foreground pixels of the frame to one or more background pixels. A temporary foreground mask can be generated based on the first erosion function being performed on the foreground mask. One or more connected components can be generated for the frame by performing connected component analysis to connect one or more neighboring foreground pixels. A complexity of the frame (or of the foreground mask of the frame) can be determined by comparing a number of the one or more connected components to a threshold number. A second erosion function can be performed on the temporary foreground mask when the number of the one or more connected components is higher than the threshold number. The one or more connected components can be output for blob processing when the number of the one or more connected components is lower than the threshold number. 1. A method of performing content-adaptive morphology operations , the method comprising:performing a first erosion function on a foreground mask of a frame, the first erosion function setting one or more foreground pixels of the foreground mask to one or more background pixels;determining a complexity of the foreground mask; anddetermining whether to perform one or more additional erosion functions for the frame based on the determined complexity of the foreground mask.2. The method of claim 1 , further comprising:generating one or more connected components by performing connected component analysis on foreground pixels of the foreground mask to connect one or more neighboring foreground pixels; andwherein the complexity of the foreground mask is determined by comparing a number of the one or more connected components to a threshold number.3. The method of ...
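The content-adaptive rule (erode once, count connected components, and erode again only when the foreground mask looks complex) maps directly onto scipy.ndimage; the threshold number is illustrative:

from scipy import ndimage as ndi

def adaptive_erosion(foreground_mask, threshold_number=50):
    """Erode a foreground mask once, then erode again only when the frame
    looks complex (many connected components after the first erosion)."""
    temp_mask = ndi.binary_erosion(foreground_mask)      # first erosion
    _, n_components = ndi.label(temp_mask)               # connected component analysis
    if n_components > threshold_number:                  # complex frame
        temp_mask = ndi.binary_erosion(temp_mask)        # second erosion
    return temp_mask, n_components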

04-02-2016 publication date

IMAGE SEGMENTATION FOR A LIVE CAMERA FEED

Number: US20160037087A1
Author: PRICE Brian
Assignee: ADOBE SYSTEMS INCORPORATED

Techniques are disclosed for segmenting an image frame of a live camera feed. A biasing scheme can be used to initially localize pixels within the image that are likely to contain the object being segmented. An optimization algorithm for an energy optimization function, such as a graph cut algorithm, can be used with a non-localized neighborhood graph structure and the initial location bias for localizing pixels in the image frame representing the object. Subsequently, a matting algorithm can be used to define a pixel mask surrounding at least a portion of the object boundary. The bias and the pixel mask can be continuously updated and refined as the image frame changes with the live camera feed. 1. A computer-implemented digital image processing method comprising:receiving pixel data representing a current image frame;calculating a bias term representing a weighting of each pixel in the current image frame towards one of a foreground bias region and a background bias region;segmenting, by a processor using an energy optimization function, the current image frame into a foreground segment and a background segment based on the pixel data and the bias term; andgenerating a pixel mask corresponding to pixels in at least one of the foreground segment and the background segment.2. The method of claim 1 , further comprising:displaying, via a graphical user interface, the current image frame and the pixel mask such that the pixel mask is superimposed over the respective pixels in the image frame.3. The method of claim 1 , wherein the segmentation is further based on a first feature of a first pixel in the foreground bias region and a second feature of a second pixel in the background bias region claim 1 , and wherein a third pixel in the current image frame having the first feature is weighted towards the foreground segment and a fourth pixel in the current image frame having the second feature is weighted towards the background segment.4. The method of claim 1 , wherein ...
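The graph cut itself is beyond a short example, but the location bias that weights pixels toward the foreground can be approximated with a distance transform from the previous frame's mask; this is one interpretation of such a bias term, not the patented formulation:

import numpy as np
from scipy import ndimage as ndi

def location_bias(prev_mask, falloff=30.0):
    """Per-pixel bias in [0, 1] toward the foreground, decaying with distance
    from the previous frame's foreground mask. Such a bias could enter the
    energy function as a unary (data) term before the graph cut is solved."""
    prev_mask = prev_mask.astype(bool)
    dist = ndi.distance_transform_edt(~prev_mask)   # distance to the old foreground
    return np.exp(-dist / falloff)                  # 1.0 inside the old mask, decays outside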

09-02-2017 publication date

CHARACTERIZING DISEASE AND TREATMENT RESPONSE WITH QUANTITATIVE VESSEL TORTUOSITY RADIOMICS

Number: US20170035381A1
Assignee:

Methods, apparatus, and other embodiments associated with classifying a region of tissue using quantified vessel tortuosity are described. One example apparatus includes an image acquisition logic that acquires an image of a region of tissue demonstrating cancerous pathology, a delineation logic that distinguishes nodule tissue within the image from the background of the image, a perinodular zone logic that defines a perinodular zone based on the nodule, a feature extraction logic that extracts a set of features from the image including a set of tortuosity features, a probability logic that computes a probability that the nodule is benign, and a classification logic that classifies the nodule tissue based, at least in part, on the set of features or the probability. A prognosis or treatment plan may be provided based on the classification of the image. 1. A non-transitory computer-readable storage device storing computer executable instructions that when executed by a computer control the computer to perform a method for characterizing a nodule in a region of tissue , the method comprising:accessing an image of a region of tissue demonstrating cancerous pathology;segmenting a lung region from surrounding anatomy in the region of tissue;segmenting a nodule from the lung region by defining a nodule boundary;defining a perinodular zone in the image based, at least in part, on the nodule boundary;generating a three dimensional (3D) segmented vasculature by segmenting a vessel from the perinodular zone;identifying a center line of the 3D segmented vasculature;extracting a set of perinodular tortuosity features based, at least in part, on the center line;computing a probability that the nodule is benign based, at least in part, on the set of perinodular tortuosity features; andcontrolling a computer aided diagnosis (CADx) system to generate a classification of the nodule based, at least in part, on the set of perinodular tortuosity features, or the probability.2. The non- ...

11-02-2016 publication date

DETERMINING A RESIDUAL MODE IMAGE FROM A DUAL ENERGY IMAGE

Number: US20160038112A1
Assignee: KONINKLIJKE PHILIPS N.V.

A digital image () comprises pixels with intensities relating to different energy levels. A method for processing the digital image () comprises the steps of: receiving first image data () and second image data () of the digital image (), the first image data () encoding a first energy level and the second image data () encoding a second energy level; determining a regression model () from the first image data () and the second image data (), the regression model () establishing a correlation between intensities of pixels of the first image data () with intensities of pixels of the second image data (); and calculating residual mode image data () from the first image data () and the second image data (), such that a pixel of the residual mode image data () has an intensity based on the difference of an intensity of the second image data () at the pixel and a correlated intensity of the pixel of the first image data (), the correlated intensity determinate by applying the regression model to the intensity of pixel of the first image data (). 1. A method for processing a digital image comprising pixels with intensities relating to different energy levels , the method comprising the steps of:receiving first image data and second image data of the digital image, the first image data encoding a first energy level and the second image data encoding a second energy level;determining a regression model from the first image data and the second image data, the regression model establishing a correlation between intensities of pixels of the first image data with intensities of pixels of the second image data;calculating residual mode image data from the first image data and the second image data, such that a pixel of the residual mode image data has an intensity based on the difference of an intensity of the second image data at the pixel and a correlated intensity of the pixel of the first image data, the correlated intensity being determined by applying the regression model ...
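Assuming a linear regression model for illustration, the residual mode image is simply the second-energy image minus the intensity predicted from the first-energy image:

import numpy as np

def residual_mode_image(low_energy, high_energy):
    """Residual image: high-energy intensity minus the value predicted from
    the low-energy intensity by a per-image linear regression model."""
    x = low_energy.astype(float).ravel()
    y = high_energy.astype(float).ravel()
    slope, intercept = np.polyfit(x, y, deg=1)        # regression model
    predicted = slope * low_energy + intercept        # correlated intensity
    return high_energy.astype(float) - predicted      # residual mode image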

09-02-2017 publication date

Image Capture and Identification System and Process

Number: US20170036113A1
Assignee:

A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. 1. A method of presenting interactive content , comprising:obtaining, by a mobile device, a digital representation of an environment the digital representation comprising at least one of location data, position data and orientation data associated with the mobile device;recognizing, by at least one of a server and the mobile device, at least one object from the digital representation as a target object considered to be part of a game based at least in part on the at least one of the location data, position data and orientation data of the digital representation;identifying, by at least one of a server and the mobile device, content information associated with the target object based on a location of the target object;accessing, by at least one of the server and the mobile device, the content information via an information address associated with the target object; andexecuting, by the mobile device, an interactive application that incorporates the content information based on the position and orientation of the target object relative to the mobile device.2. The method of claim 1 , wherein executing the interactive application that incorporates the content information further comprises visually presenting claim 1 , by the mobile device claim 1 , the content information based on the position and orientation of the target object relative to the mobile device.3. The method of claim 2 , wherein the content information further comprises graphical content information associated with the target object and presenting the content information further comprises displaying claim 2 , by the mobile device claim 2 , the graphical content information based on the position and orientation of the mobile device relative to the target object ...

09-02-2017 publication date

Image Capture and Identification System and Process

Number: US20170036114A1
Assignee:

A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object. 1. A method of interactive content management , comprising:obtaining, by a first mobile device, a digital representation of an environment, the digital representation comprising location data associated with the first mobile device;recognizing, by at least one of a server and the first mobile device, an object from the digital representation as a target object considered to be part of a game based at least in part on the location data,identifying, by at least one of a server and the first mobile device, content information associated with the target object based on a location of the target object:,accessing, by at least one of the server and the first mobile device, the content information via an information address associated with the target object; andvisually presenting, by the first mobile device, the content information within the game based on the location data associated with the first mobile device and a position of the target object relative to the first mobile device.2. The method of claim 1 , wherein visually presenting the content information further comprises visually presenting claim 1 , by the first mobile device claim 1 , the content information based on a proximity of the first mobile device to the target object.3. The method of claim 2 , wherein the content information comprises at least one of graphical content information claim 2 , animation content information claim 2 , video content information and text content information.4. The method of claim 1 , further comprising enabling an interaction with the target object via the first mobile device based on the location data associated with the first mobile device and a position and orientation of the first mobile device relative to the target object.5. ...

11-02-2016 publication date

Systems, Methods, and Apparatuses for Measuring Deformation of a Surface

Number: US20160040984A1
Author: Byrne Richard Baxter
Assignee:

The present invention regards a method for measuring displacement of a surface at a region of interest when the region of interest is exposed to a load. The method includes the steps of (1) evenly illuminating the surface; (2) by means of a camera capturing a first set of images comprising a first image of the surface, applying a load to the surface at the region of interest, and capturing a second image of the surface; and (3) transmitting the first and second image to a processing module of a computer, wherein the processing module: (a) includes data relating to the image capture, such as the spatial position and field of view of the camera relative to the surface when the images were captured; (b) generates a global perspective transform from selected regions out of the displacement area (c) performs a global image registration between the two images using perspective transform to align the images; (d) computes vertical pixel shift and horizontal pixel shift between the first image and the second image for the region of interest; and (e) computes displacement of the region of interest between the images, in length units. The images are captured by the camera at an image camera position relative to the surface. In some embodiments two cameras are used, each capturing a single image from the same image camera position; in some embodiments multiple sets of images are captured by multiple cameras, from different perspectives. 1. A method for measuring displacement of a surface at a region of interest when the region of interest is exposed to a load , the method comprising the steps of:a. evenly illuminating the surface;b. by means of a camera capturing a first set of images comprising a first image of the surface, applying a load to the surface at the region of interest, and capturing a second image of the surface; i. comprises data relating to the image capture, including the spatial position and field of view of the camera relative to the surface when the images ...

09-02-2017 publication date

VIDEO MONITORING METHOD, VIDEO MONITORING APPARATUS AND VIDEO MONITORING SYSTEM

Number: US20170039431A1
Author: HE Qizheng, LI CHAO, Yin Qi, Yu Gang
Assignee:

The present disclosure relates to a video monitoring method and a video monitoring system based on a depth video. The video monitoring method comprises: obtaining video data collected by a video collecting module; determining an object as a monitored target based on pre-set scene information and the video data; extracting characteristic information of the object; and determining predictive information of the object based on the characteristic information, wherein the video data comprises video data including the depth information. 1. A video monitoring method , comprising:obtaining video data collected by a video collecting module;determining an object as a monitored target based on pre-set scene information and the video data;extracting characteristic information of the object; anddetermining predictive information of the object based on the characteristic information, wherein the video data comprises video data including the depth information.2. The video monitoring method according to claim 1 , further comprising:configuring the video collecting module and determining coordinate parameters of the video collecting module, selecting multiple reference points on a predetermined reference plane;', 'determining a transformation relationship of a camera coordinate system of the video collecting module and a world coordinate system based on coordinate information of the multiple reference points; and', 'determining the coordinate parameters of the video collecting module based on the transformation relationship., 'wherein determining coordinate parameters of the video collecting module comprises3. The video monitoring method according to claim 2 , wherein the pre-set scene information comprises background depth information of a background region of a monitored scene claim 2 , and determining an object as a monitored target based on preset scene information and the video data comprises:obtaining a depth information difference between current depth information of each ...
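The calibration step described in the claims, determining the transformation between the camera coordinate system and the world coordinate system from reference points on a reference plane, can be sketched as a least-squares fit; an affine model and given point correspondences are assumptions made for this example:

import numpy as np

def fit_camera_to_world(camera_pts, world_pts):
    """Least-squares affine transform mapping camera coordinates of the
    reference points to world coordinates, world = A @ camera + t.

    camera_pts, world_pts: (N, 3) arrays of corresponding points, N >= 4.
    Returns (A, t)."""
    n = camera_pts.shape[0]
    homogeneous = np.hstack([camera_pts, np.ones((n, 1))])    # (N, 4)
    # solve homogeneous @ M = world_pts for M (4 x 3) in the least-squares sense
    m, *_ = np.linalg.lstsq(homogeneous, world_pts, rcond=None)
    return m[:3].T, m[3]                                       # A is 3x3, t has length 3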

09-02-2017 publication date

COMPUTER-VISION BASED SECURITY SYSTEM USING A DEPTH CAMERA

Number: US20170039455A1
Assignee:

A method for securing an environment. The method includes obtaining a two-dimensional (2D) representation of a three-dimensional (3D) environment. The 2D representation includes a 2D frame of pixels encoding depth values of the 3D environment. The method further includes identifying a set of foreground pixels in the 2D representation, defining a foreground object based on the set of foreground pixels. The method also includes classifying the foreground object, and taking an action based on the classification of the foreground object. 18.-. (canceled)9. A method for securing an environment , comprising:receiving a two-dimensional (2D) representation of a three-dimensional (3D) environment, wherein the 2D representation is a 2D frame of pixels encoding depth values of the 3D environment, wherein the 2D representation comprises a foreground object, and wherein a background has been removed from the 2D representation;classifying the foreground object; andtaking an action based on the classification of the foreground object.10. The method of claim 9 , wherein classifying the foreground object comprises using a camera-specific classifier.11. The method of claim 10 , further comprising:prior to classifying the foreground object using the camera-specific classifier, training the camera-specific classifier using data samples that are specific to a field of view of a camera with which the camera-specific classifier is associated and data samples that do not include the field of view of the camera.12. The method of claim 9 , wherein classifying the foreground object comprises:associating the foreground object with a category; andclassifying the foreground object as one selected from a group consisting of a threat and a non-threat based, at least in part, on the category.13. The method of claim 9 , wherein classifying the foreground object comprises:making a first determination, by a classifier, that the classification of the foreground object is unknown;based on the first ...

11-02-2016 publication date

System and method for increasing the bit depth of images

Number: US20160042498A1
Author: Andrew Ian Russell
Assignee: Google LLC

A method for processing an image having a first bit depth includes performing two or more iterations of a bit depth enhancement operation that increases the bit depth of the image to a second bit depth that is higher than the first bit depth. The bit depth enhancement operation includes dividing the image into a plurality of areas, performing an edge detection operation to identify one or more areas from the plurality of areas that do not contain edge features, and applying a blur to the one or more areas from the plurality of areas that do not contain edge features. In a first iteration of the bit depth enhancement operation, the plurality of areas includes a first number of areas, and the number of areas included in the plurality of areas decreases with each subsequent iteration of the bit depth enhancement operation.
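A single-iteration sketch of the operation described above: divide the image into areas, test each area for edge features, and blur only the edge-free areas so that banding is smoothed without softening edges. The Sobel-based edge test, area size, and Gaussian blur are stand-ins chosen here, not the method's specifics:

import numpy as np
from scipy import ndimage as ndi

def enhance_bit_depth_once(image, area=32, edge_thresh=8.0, blur_sigma=4.0):
    """One bit-depth-enhancement pass: blur only areas without edge features.

    image: 2D array already shifted to the higher bit depth (e.g. 8-bit values
    multiplied up to the 10-bit range); returns the partially smoothed image."""
    out = image.astype(float)
    grad = np.hypot(ndi.sobel(out, axis=0), ndi.sobel(out, axis=1))
    blurred = ndi.gaussian_filter(out, sigma=blur_sigma)
    h, w = out.shape
    for r in range(0, h, area):
        for c in range(0, w, area):
            block = (slice(r, r + area), slice(c, c + area))
            if grad[block].max() < edge_thresh:      # no edge features in this area
                out[block] = blurred[block]          # smooth banding away
    return out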

11-02-2016 publication date

Integration of intra-oral imagery and volumetric imagery

Number: US20160042509A1
Assignee: Ormco Corp

Systems and methods are described for identifying a sub-gingival surface of a tooth in volumetric imagery data. Shape data is received from a surface scanner and volumetric imagery data is received from a volumetric imaging device. The shape data of the super-gingival portion of a first tooth is registered with the volumetric imagery data of the super-gingival portion of the first tooth to obtain a registration result. At least one criterion is then determined for detecting a surface of the first tooth in the volumetric imagery data of the super-gingival or the sub-gingival portion using the registration result. The surface of the sub-gingival portion of the first tooth is detected in the volumetric imagery data using the at least one criterion.

11-02-2016 publication date

METHOD AND APPARATUS FOR DETERMINING A SEQUENCE OF TRANSITIONS

Number: US20160042528A1
Assignee:

An apparatus and a method of determining a sequence of transitions for a varying state of a system, wherein the system is described by a finite number n of states, and wherein a transition from a current state to a next state causes a cost in dependence of a distance that is dependent on a previous state, the current state, and the next state. The method comprises: combining each two consecutive states to generate super states, wherein the cost for a transition from a current super state to a next super state only depends on the current super state and the next super state; in an iterative process, applying a dynamic programming algorithm to the super states in order to determine a minimum accumulated cost for each varying super state and to determine a preceding super state that led to the minimum accumulated cost; and after a final iteration, determining a final super state with the minimum accumulated cost and retrieving the sequence of the preceding super states leading to the final super state with the minimum accumulated cost. 1. A method for determining a sequence of optimal states for a varying state of a system describing a varying margin line in a sequence of images , the margin line being divided into a plurality of segments , wherein for each segment an optimal state out of a finite number of n states is to be determined , each state describing a profile across the margin line , and wherein a transition from a current state in a current segment to a next state in a next segment causes a cost in dependence of a distance that is dependent on a previous state in a preceding segment , the current state , and the next state , the method comprising:combining the states of each two consecutive segments along the margin line into super states; anddetermining an optimal state for each segment by applying a dynamic programming algorithm to the sequence of super states.2. The method according to claim 1 , wherein the dynamic programming algorithm is accelerated ...
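The super-state construction turns a cost that depends on three consecutive states into an ordinary first-order dynamic program: a super state is a pair (previous state, current state), and a transition (a, b) -> (b, c) costs cost(a, b, c). A compact Python sketch with a user-supplied cost function; the interface is an assumption, and the run time is O(n^3) per segment transition:

import itertools

def best_state_sequence(n_states, n_segments, cost):
    """Minimum-cost state sequence when a transition's cost depends on the
    previous, current, and next state (cost: callable(prev, cur, nxt)).

    Super states are pairs (prev, cur); a transition (a, b) -> (b, c) costs
    cost(a, b, c). Assumes n_segments >= 2. Returns a list of n_segments states."""
    supers = list(itertools.product(range(n_states), repeat=2))
    acc = {s: 0.0 for s in supers}        # minimum accumulated cost per super state
    back = []                             # backpointers, one dict per transition
    for _ in range(n_segments - 2):       # the first super state already covers two segments
        new_acc, pointers = {}, {}
        for b, c in supers:
            a_best = min(range(n_states), key=lambda a: acc[(a, b)] + cost(a, b, c))
            new_acc[(b, c)] = acc[(a_best, b)] + cost(a_best, b, c)
            pointers[(b, c)] = (a_best, b)
        acc, back = new_acc, back + [pointers]
    final = min(acc, key=acc.get)         # final super state with minimum accumulated cost
    seq = list(final)
    for pointers in reversed(back):       # retrieve the preceding super states
        seq.insert(0, pointers[tuple(seq[:2])][0])
    return seq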

09-02-2017 publication date

METHOD AND APPARATUS FOR REMOVING CHARACTER BACKGROUND FROM COLORED IMAGE

Number: US20170039724A1
Assignee: Glory Ltd.

Provided is a method for removing character background in a color image that obtains an image for printing evaluation by removing a background design of a character from the color image of a printed object on which the character has been printed. The method includes separating a color input image into a character part and a background part, calculating a discriminant function for separating pixels of the character part and pixels of the background part based on pixel values, and generating a background-removed image by removing the background part from the input image by using the discriminant function. Moreover, an installation adjustment method of a line camera including adjusting, based on a signal acquired by capturing an installation adjustment chart fixed to the inspection drum, an installation position of the line camera that acquires an image of a large-size printed object arranged on an inspection drum, is executed by using an installation adjustment chart wherein a plurality of patterns formed by white background and black vertical lines are arranged by shifting in a vertical direction so that the vertical lines continue horizontally only in a predetermined rectangular region that is elongated in a scan line direction of the line camera. 1. A method for removing character background from a color image in order to obtain an image for printing evaluation by removing a background design of a character from the color image of a printed object on which the character has been printed , comprising:separating a color input image into a character part and a background part;calculating a discriminant function for separating pixels of the character part and pixels of the background part based on pixel values; andgenerating the background-removed image by removing the background part from an input image by using the discriminant function.2. The method for removing character background in a color image as claimed in claim 1 , wherein the separating includesidentifying ...
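The discriminant step described above (separate character pixels from background pixels based on pixel values) can be illustrated with Fisher's linear discriminant on RGB values; this is a generic stand-in, not the discriminant function of the application:

import numpy as np

def fisher_discriminant(char_pixels, background_pixels):
    """Fisher linear discriminant for RGB pixel values.

    char_pixels, background_pixels: (N, 3) arrays of training pixels taken
    from the character part and the background part.
    Returns score(pixels): positive scores lean toward 'character'."""
    mu_c, mu_b = char_pixels.mean(axis=0), background_pixels.mean(axis=0)
    scatter = np.cov(char_pixels.T) + np.cov(background_pixels.T)   # pooled scatter
    w = np.linalg.solve(scatter + 1e-6 * np.eye(3), mu_c - mu_b)
    bias = -0.5 * w @ (mu_c + mu_b)
    # pixels scoring <= 0 would be treated as background and removed (e.g. set to white)
    return lambda pixels: pixels.astype(float) @ w + bias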

09-02-2017 publication date

DECISION SUPPORT FOR DISEASE CHARACTERIZATION AND TREATMENT RESPONSE WITH DISEASE AND PERI-DISEASE RADIOMICS

Number: US20170039737A1
Assignee:

Methods, apparatus, and other embodiments associated with classifying a region of tissue using textural analysis are described. One example apparatus includes an image acquisition logic that acquires an image of a region of tissue demonstrating cancerous pathology, a delineation logic that distinguishes nodule tissue within the image from the background of the image, a perinodular zone logic that defines a perinodular zone based on the nodule, a feature extraction logic that extracts a set of features from the image, a probability logic that computes a probability that the nodule is benign or that the nodule will respond to a treatment, and a classification logic that classifies the nodule tissue based, at least in part, on the set of features or the probability. A prognosis or treatment plan may be provided based on the classification of the image. 1. A non-transitory computer-readable storage device storing computer executable instructions that when executed by a computer control the computer to perform a method for characterizing a nodule in a region of tissue , the method comprising:accessing an image of a region of tissue demonstrating cancerous pathology;segmenting a nodule in the image by extracting a nodule boundary from the image;defining a perinodular region in the image;generating a set of perinodular texture features;computing a probability that the nodule is benign based, at least in part, on the set of perinodular texture features; andcontrolling a computer aided diagnosis (CADx) system to generate a classification of the nodule based, at least in part, on the set of perinodular texture features, or the probability that the nodule is benign.2. The non-transitory computer-readable storage device of claim 1 , where accessing the image of the region of tissue comprises accessing a computed tomography (CT) image of a region of lung tissue claim 1 , where the CT image is a no-contrast chest CT image.3. The non-transitory computer-readable storage device of ...

12-02-2015 publication date

IMAGE PROCESSING DEVICE, CONTROL METHOD OF IMAGE PROCESSING DEVICE AND PROGRAM

Number: US20150043786A1
Author: Ohki Mitsuharu
Assignee: SONY CORPORATION

A dynamic image is generated. An image processing device includes a moving object acquisition unit, a moving direction acquisition unit, a rear region detection unit, and a smoothing processing unit. The moving object acquisition unit acquires a region of a moving object in a target image which is at least one image among a plurality of images which are temporally consecutive. The moving direction acquisition unit acquires a moving direction of the moving object. The rear region detection unit detects a region of a rear portion with respect to the moving direction in the region of the moving object, as a rear region. The rear region processing unit performs a predetermined image process on the rear region.

12-02-2015 publication date

IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR

Number: US20150043813A1
Assignee:

An image processing apparatus according to the present invention comprises: an acquisition unit that acquires a statistical value of pixel values for each divided region; a determination unit that compares for each divided region the statistical value of the divided region acquired by the acquisition unit with a first threshold and determines whether the divided region is as color region or a monochrome region; and a re-determination unit that compares a statistical value of an adjacent divided region with a second threshold, by which a divided region is more likely determined as a color region than by the first threshold, for each adjacent divided region, and re-determines whether the adjacent divided region is a color region or a monochrome region. 1. An image processing apparatus that divides an input image into a color region and a monochrome region ,the apparatus comprising:an acquisition unit that acquires a statistical value of pixel values for each divided region obtained by dividing the input image;a division unit that divides the input image into a color region and a monochrome region on the basis of the statistical value for each divided region acquired by the acquisition unit; anda movement unit that moves a boundary between the color region and the monochrome region, which are divided by the division unit, so that the boundary passes inside a boundary proximity region, which is a region separated from the boundary by a predetermined distance toward a monochrome region, when a brightness value of the boundary proximity region is lower than a predetermined value.2. The image processing apparatus according to claim 1 , wherein the boundary proximity region is a region in a divided region adjacent to the boundary.3. The image processing apparatus according to claim 1 , wherein the boundary proximity region is a small divided region separated by the predetermined distance from the boundary toward a monochrome region claim 1 , from among a plurality of small ...

12-02-2015 publication date

AREA DESIGNATING METHOD AND AREA DESIGNATING DEVICE

Number: US20150043820A1
Assignee:

An area designating method of allowing, when area segmentation processing of dividing a target image into a foreground and a background is performed, a user to designate an area, as a part of the target image, as an area to be the foreground or the background, the method including a subarea setting step of setting at least one subarea larger than a pixel, in the target image, a display step of displaying a designating image, on which a boundary of the subarea is drawn on the target image, on a display device, and a designating step of accepting input from an input device to allow the user to select the area to be the foreground or the background, from the at least one subarea on the designating image. 1. An area designating method of allowing , when area segmentation processing of dividing a target image into a foreground and a background is performed , a user to designate an area , as a part of the target image , as an area to be the foreground or the background , the method comprising:a subarea setting step of setting at least one subarea larger than a pixel, in the target image;a display step of displaying a designating image, on which a boundary of the subarea is drawn on the target image, on a display device; anda designating step of accepting input from an input device to allow the user to select the area to be the foreground or the background, from the at least one subarea on the designating image.2. The area designating method according to claim 1 , wherein the subarea setting step comprises a segmentation step of segmenting the target image into a predetermined pattern to form a plurality of the subareas.3. The area designating method according to claim 2 ,wherein the subarea setting step further comprises an extraction step of extracting a part of the subareas from the plurality of subareas formed in the segmentation step, andwherein in the display step, only the subarea extracted in the extraction step is drawn in the designating image.4. The area ...

12-02-2015 publication date

IMAGE PROCESSING METHOD

Number: US20150043821A1
Author: Hamada Toshiki
Assignee:

An image processing method for increasing the stereoscopic effect of an image is provided. Difference mask data including an object area and a background area is created from image data, and the center coordinate of the object area is calculated. Then, a gradation pattern is selected in accordance with the average brightness value of the object area and applied to the background area, whereby a gradation mask data is created. After that, the image data of the background area is converted into image data based on the gradation mask data, so that the stereoscopic effect of the image is increased. 1calculating a difference between continuous frames of the image data, thereby creating binary difference mask data including an object area and a background area;calculating center coordinate data of the object area;calculating an average brightness value of the object area and applying a gradation pattern based on the average brightness value to the background area, thereby creating gradation mask data; andconverting the image data into image data based on the gradation mask data.. An image processing method for processing image data having a plurality of frames, comprising the steps of: 1. Field of the InventionThe present invention relates to an image processing method. The present invention also relates to an image processing program.2. Description of the Related ArtA variety of display devices have come onto the market, ranging from large-size display devices such as television receivers to small-size display devices such as cellular phones. From now on, the display devices will be expected to have higher added values, and development has been advanced. In recent years, display devices capable of displaying stereoscopic images have been actively developed to provide more realistic images.Many of the display devices capable of displaying stereoscopic images utilize binocular parallax. The method utilizing binocular parallax mostly uses, for example, special glasses for ...
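A compact sketch of the pipeline described above: a binary difference mask between consecutive frames, the center coordinate and average brightness of the object area, and a vertical gradation applied only to the background; the particular gradation pattern chosen here is purely illustrative:

import numpy as np

def apply_background_gradation(prev_frame, frame, diff_thresh=15.0):
    """Apply a vertical gradation to the background of `frame`, chosen from
    the average brightness of the object area found by frame differencing.
    Returns (processed_frame, object_center)."""
    frame = np.asarray(frame, dtype=float)
    prev_frame = np.asarray(prev_frame, dtype=float)
    object_mask = np.abs(frame - prev_frame) > diff_thresh     # binary difference mask
    if not object_mask.any():
        return frame, None
    rows, cols = np.nonzero(object_mask)
    center = (rows.mean(), cols.mean())                        # center coordinate of object area
    avg_brightness = frame[object_mask].mean()
    h, w = frame.shape
    # pick a vertical gradation pattern from the object's average brightness
    ramp = np.linspace(0.5, 1.0, h) if avg_brightness > 127 else np.linspace(1.0, 0.5, h)
    gradation = ramp[:, None] * frame                          # broadcast down the columns
    out = frame.copy()
    out[~object_mask] = gradation[~object_mask]                # background pixels only
    return out, center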

18-02-2016 publication date

DISPLAYING INFORMATION RELATING TO A DESIGNATED MARKER

Number: US20160048732A1
Assignee:

A method and system for displaying information relating to a designated marker is provided. An image including the designated marker is acquired. The designated marker is extracted from the acquired image. A type of the designated marker is identified from the extracted marker. The identified type of the designated marker is communicated to a server, and in response, marker information identified from the type of the designated marker is obtained from the server. The marker information relates to the designated marker and identifies at least two other markers. Relative positional information of the device in relation to the extracted marker is determined. A displayed informational image includes the designated marker and at least one other marker of the at least two other markers, which are displayed in accordance with a determined relative position between the designated marker and each marker of the at least one other marker. 114-. (canceled)15. A method for displaying information relating to a designated marker , said method comprising:acquiring, by an image acquiring unit of an endpoint device that comprises a display unit, an image including the designated marker;said endpoint device extracting the designated marker from the acquired image;said endpoint device identifying a type of the designated marker from the extracted marker;said endpoint device communicating the identified type of the designated marker to a server external to the endpoint device and in response, obtaining, from the external server, marker information relating to the designated marker, wherein the obtained marker information comprises a reference size and shape of the designated marker and an identification of a plurality of other markers related to the designated marker, wherein the marker information in the external server is based on the identified type of the designated marker;said endpoint device ascertaining a size and shape of the extracted marker from analysis of the extracted ...

18-02-2016 publication date

Method and System for Recognizing User Activity Type

Number: US20160048738A1
Author: He Xiuqiang, ZHANG Gong
Assignee:

The present invention discloses a method and system for recognizing a user activity type, where the method includes: collecting an image of a location in which a user is located; extracting, from the image, characteristic data of an environment in which the user is located and characteristic data of the user; and obtaining, by recognition, an activity type of the user by using an image recognition model related to an activity type or an image library related to an activity type and the characteristic data.

1. A method for recognizing a user activity type, the method comprising: collecting an image of a location in which a user is located; extracting environmental characteristic data that is characteristic of an environment in which the user is located and user characteristic data of the user from the image; and obtaining by recognition a user activity type by using an image recognition model or an image library related to an activity type and the environmental characteristic data and user characteristic data.

2. The method according to claim 1, wherein obtaining the user activity type comprises using the image recognition model related to the activity type.

3. The method according to claim 1, wherein obtaining the user activity type comprises using an image library related to an activity type.

4. The method according to claim 1, wherein the extracting step comprises extracting the environmental characteristic data and the user characteristic data from the image by using an image object recognition method; and wherein the obtaining step comprises matching the environmental characteristic data and user characteristic data by using a rule that is pre-learned by using an activity type rule model method or an activity type machine learning method, so as to obtain the user activity type.

5. The method according to claim 1, wherein: the environmental characteristic data and user characteristic data is a hash sketch value; the extracting step comprises extracting a hash sketch ...

Publication date: 18-02-2016

COMPUTER-AIDED DIAGNOSIS APPARATUS AND METHOD

Number: US20160048972A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A computer-aided diagnosis (CAD) apparatus and method. The CAD apparatus includes an area divider configured to divide a current image frame into a first area and a second area based on location of a region of interest (ROI) detected in a previous image frame. The CAD apparatus further includes a functional processor configured to perform different functions of the CAD apparatus for the first area and the second area.

1. A computer-aided diagnosis (CAD) apparatus comprising: an area divider configured to divide a current image frame into a first area and a second area based on location of a region of interest (ROI) detected in a previous image frame; and a functional processor configured to perform different functions of the CAD apparatus for the first area and the second area.

2. The CAD apparatus of claim 1, wherein the first area includes an area that extends radially from a same point as a center point of the ROI in the previous image frame and the second area is an outlying area in the current image frame.

3. The CAD apparatus of claim 2, wherein a radial extension of the first area is determined based on at least one of the following previously collected lesion data factors: a distribution of a specific lesion in the current image frame, the specific lesion being similar to a lesion within the ROI in the previous image frame, a length of the specific lesion, changes in an area of the specific lesion, and a degree of change in shape of the specific lesion.

4. The CAD apparatus of claim 1, wherein the functional processor comprises: a first functional module configured to perform on the first area at least one of an ROI check, lesion segmentation, and lesion classification; and a second functional module configured to perform ROI detection on the second area.

5. The CAD apparatus of claim 4, wherein the first functional module is configured to, for the ROI check, extract feature values from the first area ...
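
A minimal sketch of the area-division idea, assuming a circular first area around the previous ROI center and caller-supplied track_roi / detect_roi functions (both hypothetical); it is not Samsung's implementation, only an illustration of running different CAD functions on the two areas.

```python
# Hedged sketch (assumed geometry): dividing the current frame into a first
# area that extends radially from the previous ROI center and an outlying
# second area, then running different CAD functions on each area.
import numpy as np

def divide_areas(frame_shape, prev_roi_center, radius):
    """Return boolean masks (first_area, second_area) for an H x W frame."""
    h, w = frame_shape
    cy, cx = prev_roi_center
    yy, xx = np.mgrid[0:h, 0:w]
    first_area = np.hypot(yy - cy, xx - cx) <= radius   # around the previous ROI
    return first_area, ~first_area                      # outlying area

def process_frame(frame, prev_roi_center, radius, track_roi, detect_roi):
    """track_roi / detect_roi are caller-supplied functions (assumed API)."""
    first_area, second_area = divide_areas(frame.shape, prev_roi_center, radius)
    # First area: ROI check / segmentation / classification on the tracked lesion.
    tracked = track_roi(frame, first_area)
    # Second area: fresh ROI detection only, to catch newly appearing lesions.
    new_rois = detect_roi(frame, second_area)
    return tracked, new_rois
```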

Publication date: 18-02-2016

Method and Device for Detecting Face, and Non-Transitory Computer-Readable Recording Medium for Executing the Method

Number: US20160048977A1
Author: Ryu Woo Ju
Assignee: Intel Corporation

In the present disclosure, a plurality of frames of input images sequentially received for a predetermined time interval is obtained, and a face detecting operation is performed on a first frame if a full detecting mode is implemented. If a face is detected from a specific region of the first frame during the face detecting operation, a face tracking mode is implemented, a second frame is divided to produce the divided input image portions of the second frame, and the face tracking operation is performed on a surrounding region of the specific region of the divided input image portions of the second frame that corresponds to the specific region in the first frame. If the face is not detected in the face tracking mode, a partial detecting mode is implemented, and the face detecting operation is performed on image portions resized on divided input image portions of a third frame to which a specific region of the third frame corresponding to the specific region of the first frame belongs.

1.-50. (canceled)

51. A face detection device, comprising: an image dividing unit to divide an input image to generate one or more divided input image portions, the input image including a frame from among a plurality of frames of input images sequentially received during a predetermined time period; a mode change unit to transmit one or more divided input image portions of a first frame in parallel when a full detection mode signal is generated for the first frame, transmit a divided input image portion including a specific region in a second frame corresponding to a specific region of the first frame at which a face is detected when a face tracking mode signal is generated for the second frame temporally succeeding the first frame, and transmit a divided input image portion including a specific region in a third frame corresponding to the specific region of the first frame in which the face is detected when a partial detection mode signal is generated for the third frame temporally ...
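
The mode transitions described above can be summarized as a small state machine. The sketch below is an assumed simplification, not the patented device: detect_faces and track_face are hypothetical caller-supplied functions, and the fallback order (tracking, then partial detection, then full detection) follows the abstract.

```python
# Hedged sketch (simplified control flow): switching between full detection,
# face tracking and partial detection modes across successive frames.
def run_modes(frames, detect_faces, track_face):
    mode, last_region = "full", None
    for frame in frames:
        if mode == "full":
            # Full detecting mode: run face detection over the whole frame.
            found = detect_faces(frame, search_region=None)
        elif mode == "track":
            # Face tracking mode: search only a surrounding region of the
            # portion where the face was found in the previous frame.
            found = track_face(frame, around=last_region)
        else:  # "partial"
            # Partial detecting mode: re-run detection, but only on the divided
            # image portion containing the last known face region.
            found = detect_faces(frame, search_region=last_region)

        if found:
            last_region, mode = found, "track"
        elif mode == "track":
            mode = "partial"                   # tracking lost the face
        else:
            mode, last_region = "full", None   # fall back to full detection
        yield mode, last_region
```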

Publication date: 16-02-2017

Systems and Methods for Behavior Detection Using 3D Tracking and Machine Learning

Number: US20170046567A1
Assignee: California Institute of Technology

Systems and methods for performing behavioral detection using three-dimensional tracking and machine learning in accordance with various embodiments of the invention are disclosed. One embodiment of the invention involves a classification application that directs a microprocessor to: identify at least a primary subject interacting with a secondary subject within a sequence of frames of image data including depth information; determine poses of the subjects; extract a set of parameters describing the poses and movement of at least the primary and secondary subjects; and detect a social behavior performed by at least the primary subject and involving at least the second subject using a classifier trained to discriminate between a plurality of social behaviors based upon the set of parameters describing poses and movement.

1. A behavioral classification system, comprising: a microprocessor; and memory containing a classification application; wherein the classification application directs the microprocessor to: identify at least a primary subject interacting with a secondary subject within a sequence of frames of image data comprising depth information; determine poses for at least the primary subject and the secondary subject within a plurality of frames from the sequence of frames of image data; extract a set of parameters describing the poses and movement of at least the primary and secondary subjects from the plurality of frames from the sequence of frames of image data; and detect a social behavior performed by at least the primary subject and involving at least the second subject using a classifier trained to discriminate between a plurality of social behaviors based upon the set of parameters describing poses and movement of a plurality of subjects extracted from a plurality of frames of image data comprising depth information.

2. The behavioral classification system of claim 1, wherein the classifier is trained to discriminate between a plurality of ...

Publication date: 16-02-2017

Image Capture and Identification System and Process

Number: US20170046570A1
Assignee:

A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.

1. A method of enabling object recognition for a game, comprising: storing, in a database, at least one digital representation of an object, the at least one digital representation including location information of the object and being of a modality corresponding to a mobile device sensor of at least one mobile device; indexing, by at least one of a server and the at least one mobile device, the object in the database based on the at least one digital representation of the object, such that the object can be identified as a target of interest from a digital representation of the environment that includes location information of the at least one mobile device, received from the mobile device sensor; associating, by at least one of the server and the at least one mobile device, the object with content information associated with a game via an information address, such that the at least one mobile device can access the content information based on the identified object as the target of interest; and enabling the at least one mobile device to incorporate the content information into the game.

2. The method of claim 1, the indexing further comprising indexing the object based on salient parameters derived from the at least one digital representation.

3. The method of claim 1, wherein the content information comprises at least one of graphical content information, animation content information, video content information and text content information.

4. The method of claim 3, wherein incorporating the content information into the game further comprises visually presenting, by the at least one mobile device, the content information based on at least one of a position and orientation of the at least one ...

Publication date: 16-02-2017

Self-optimized object detection using online detector selection

Number: US20170046587A1

Embodiments are directed to an object detection system having at least one processor circuit configured to receive a series of image regions and apply to each image region in the series a detector, which is configured to determine a presence of a predetermined object in the image region. The object detection system performs a method of selecting and applying the detector from among a plurality of foreground detectors and a plurality of background detectors in a repeated pattern that includes sequentially selecting a selected one of the plurality of foreground detectors; sequentially applying the selected one of the plurality of foreground detectors to one of the series of image regions until all of the plurality of foreground detectors have been applied; selecting a selected one of the plurality of background detectors; and applying the selected one of the plurality of background detectors to one of the series of image regions.
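
A hedged sketch of the repeated selection pattern described above: every foreground detector is applied to one image region in turn, then one background detector takes the next region, and the cycle restarts. The detector pools are assumed to be plain Python callables; this illustrates only the scheduling, not the patented system.

```python
# Hedged sketch of the foreground/background detector interleaving.
# Assumes both detector pools are non-empty callables taking an image region.
from itertools import cycle

def schedule_detectors(image_regions, foreground_detectors, background_detectors):
    bg_cycle = cycle(background_detectors)   # background detectors are rotated
    fg_index = 0
    results = []
    for region in image_regions:
        if fg_index < len(foreground_detectors):
            detector = foreground_detectors[fg_index]   # next foreground detector
            fg_index += 1
        else:
            detector = next(bg_cycle)                   # one background detector
            fg_index = 0                                # restart the foreground pass
        results.append((detector.__name__, detector(region)))
    return results
```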

Publication date: 15-02-2018

Methods and systems of performing blob filtering in video analytics

Number: US20180046858A1
Assignee: Qualcomm Inc

Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive blob filtering. A number of blobs generated for a video frame is determined. A size of a first blob from the blobs is determined, the first blob including pixels of at least a portion of a first foreground object in the video frame. The first blob is filtered from the plurality of blobs when the size of the first blob is less than a size threshold. The size threshold is determined based on the number of the plurality of blobs generated for the video frame.
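
A minimal sketch of content-adaptive blob filtering as described above. The specific threshold values and the blob representation (objects with a size attribute) are assumptions for illustration; the only point carried over from the description is that the size threshold is derived from the number of blobs found in the frame.

```python
# Hedged sketch (thresholds assumed): the minimum-size threshold used to drop
# small blobs is itself chosen from the number of blobs in the frame.
def size_threshold(num_blobs, base=100, crowded=400):
    """More blobs in a frame suggests a busier or noisier scene, so use a
    larger minimum blob size (assumed, illustrative breakpoints)."""
    if num_blobs <= 10:
        return base
    if num_blobs <= 40:
        return base * 2
    return crowded

def filter_blobs(blobs):
    """blobs: list of objects with a .size attribute in pixels (assumed shape)."""
    threshold = size_threshold(len(blobs))
    return [b for b in blobs if b.size >= threshold]
```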

Publication date: 15-02-2018

IMAGE-PROCESSING APPARATUS, IMAGE-PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT

Number: US20180046876A1
Assignee:

According to the present disclosure, an image-processing apparatus identifies, for each gradation value, a connected component of pixels of not less than or not more than the gradation value neighboring and connected to each other in an input image, thereby generating hierarchical structure data of a hierarchical structure including the connected component. Based on the hierarchical structure data, it extracts a connected component satisfying character likelihood as a character-like region, acquires a threshold value of binarization used exclusively for the character-like region, and acquires a corrected region where the character-like region is binarized. It then acquires a background region where a gradation value of a pixel included in a region of the input image other than the corrected region is changed to a gradation value for a background, and acquires binary image data of a binary image composed of the corrected region and the background region.

1. An image-processing apparatus comprising: a hierarchical structure generating unit that identifies for each gradation value a connected component of pixels of not less than or not more than the gradation value neighboring and connected to each other in an input image, and generates hierarchical structure data of a hierarchical structure including the connected component; a region extracting unit that determines based on the hierarchical structure data whether the connected component satisfies a feature of character likelihood, and extracts the connected component satisfying the feature of character likelihood as a character-like region; a correcting unit that acquires based on a maximum gradation value and a minimum gradation value of pixels included in the character-like region a threshold of binarization used exclusively for the character-like region, and acquires based on the threshold of binarization a corrected region where the character-like region is binarized; and an image acquiring unit that acquires a background region where a ...
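
The per-region binarization step can be sketched directly from the claims: each character-like region gets its own threshold derived from its maximum and minimum gradation values, and everything outside the corrected regions is set to a background value. The midpoint threshold and the mask-based region representation are assumptions; the hierarchical component extraction itself is not reproduced here.

```python
# Hedged sketch (simplified stand-in, not the patented hierarchy): binarizing
# each character-like region with a threshold taken from that region's extreme
# gray levels, then flattening everything else to a background value.
import numpy as np

def binarize_with_regions(image, character_regions, background_value=255):
    """image: 2-D uint8 array; character_regions: list of boolean masks."""
    out = np.full_like(image, background_value)       # background region
    for mask in character_regions:
        region_pixels = image[mask]
        # Threshold used exclusively for this region: midpoint of its maximum
        # and minimum gradation values (an assumed, simple choice).
        threshold = (int(region_pixels.max()) + int(region_pixels.min())) // 2
        out[mask] = np.where(region_pixels <= threshold, 0, 255)
    return out
```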

Publication date: 16-02-2017

Reconstruction with Object Detection for Images Captured from a Capsule Camera

Number: US20170046825A1
Assignee:

A method of processing images captured using a capsule camera is disclosed. According to one embodiment, two images designated as a reference image and a float image are received, where the float image corresponds to a captured capsule image and the reference image corresponds to a previously composite image or another captured capsule image prior to the float image. Automatic segmentation is applied to the float image and the reference image to detect any non-GI (non-gastrointestinal) region. The non-GI regions are excluded from the match measure between the reference image and a deformed float image during the registration process. The two images are stitched together by rendering the two images at the common coordinate. In another embodiment, large areas of non-GI regions are removed directly from the input image, and the remaining portions are stitched together to form a new image without performing image registration.

1. A method of processing images of human gastrointestinal (GI) tract captured using a capsule camera, the method comprising: receiving two images designated as a reference image and a float image, wherein the float image corresponds to a current capsule image and the reference image corresponds to a previously composite image or another captured capsule image prior to the float image; automatically segmenting the float image into one or more first native GI regions if any native GI sample is detected for the float image and one or more first non-GI regions if any non-GI sample is detected for the float image; automatically segmenting the reference image into one or more second native GI regions if any native GI sample is detected for the reference image and one or more second non-GI regions if any non-GI sample is detected for the reference image; registering the float image with respect to the reference image by optimizing a match measure between the reference image and a deformed float image with said one or more first non-GI regions and said one or more ...

Publication date: 16-02-2017

METHOD AND APPARATUS FOR PROCESSING BLOCK TO BE PROCESSED OF URINE SEDIMENT IMAGE

Number: US20170046838A1
Assignee: SIEMENS HEALTHCARE DIAGNOSTICS INC.

The present invention provides a method and apparatus for processing a block to be processed of a urine sediment image. The method comprises: dividing a block to be processed into a plurality of grids; calculating an n-dimensional local feature vector of each grid of the plurality of grids, where n is a positive integer; in the block to be processed, merging at least two adjacent grids of the plurality of grids into an intermediate block; calculating an intermediate block merging feature vector of the intermediate block; according to a predetermined combination rule, combining the intermediate block merging feature vectors obtained for different intermediate blocks of the block to be processed into a general combination feature vector of the block to be processed; and by way of taking the general combination feature vector as a feature in a feature set of block processing, processing the block to be processed.

1. A method for processing a block to be processed of a urine sediment image, comprising: dividing a block to be processed into a plurality of grids according to a predetermined division rule; calculating an n-dimensional local feature vector of each grid of the plurality of grids, where n is a positive integer; in the block to be processed, according to a predetermined merging rule, merging at least two adjacent grids of the plurality of grids into an intermediate block; according to the n-dimensional local feature vectors of the grids contained in the intermediate block, calculating an intermediate block merging feature vector of the intermediate block; according to a predetermined combination rule, combining the intermediate block merging feature vectors obtained for different intermediate blocks of the block to be processed into a general combination feature vector of the block to be processed; and by way of taking the general combination feature vector as a feature in a feature set of block processing, processing the block to be processed.

2. The method as ...
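
A hedged sketch of the grid and intermediate-block feature construction, assuming a 4x4 grid, intensity histograms as the n-dimensional local feature, 2x2 merging by summation, and concatenation as the combination rule. All of these concrete choices are assumptions for illustration; only the overall structure (grids, local vectors, merged intermediate blocks, one combined vector) follows the description.

```python
# Hedged sketch: local feature vectors per grid, intermediate-block merging
# vectors, and a general combination feature vector for the whole block.
import numpy as np

def grid_features(block, grid_rows=4, grid_cols=4, n_bins=8):
    """block: 2-D uint8 array. Returns an array of shape (rows, cols, n_bins)."""
    h, w = block.shape
    gh, gw = h // grid_rows, w // grid_cols
    feats = np.empty((grid_rows, grid_cols, n_bins))
    for r in range(grid_rows):
        for c in range(grid_cols):
            cell = block[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            # n-dimensional local feature vector of the grid: here an
            # intensity histogram (an assumed stand-in for the real feature).
            feats[r, c], _ = np.histogram(cell, bins=n_bins, range=(0, 256))
    return feats

def combined_feature(block):
    feats = grid_features(block)
    merged = []
    # Merge each 2 x 2 neighbourhood of grids into an intermediate block and
    # sum their local vectors to get the intermediate-block merging vector.
    for r in range(0, feats.shape[0], 2):
        for c in range(0, feats.shape[1], 2):
            merged.append(feats[r:r + 2, c:c + 2].sum(axis=(0, 1)))
    # Combination rule: concatenate all intermediate-block merging vectors.
    return np.concatenate(merged)
```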

Publication date: 16-02-2017

Segmentation of Magnetic Resonance Imaging Data

Number: US20170046849A1
Assignee:

There is described herein an image segmentation technique using an iterative process. A contour, which begins as a single point that expands into a hollow shape, is iteratively deformed into a defined structure. As the contour is deformed, the constraints applied to points along the contour to dictate its rate of change and direction of change are modified dynamically. The constraints may be modified after one or more iterations, at each point along the contour, in accordance with newly measured or determined data.

1. A computer-implemented method for segmenting magnetic resonance imaging (MRI) data, the method comprising: determining an initial position on an image for a given structure; converting the initial position into an initial contour within the given structure; and iteratively deforming the initial contour to expand into a shape matching the given structure by dynamically applying a set of constraints locally to each point along the initial contour and updating the set of constraints after one or more iterations.

2. The method of claim 1, wherein determining the initial position, converting the initial position into the initial contour, and iteratively deforming the initial contour are performed for a plurality of images for the given structure, the plurality of images processed at least one of sequentially and in parallel.

3. The method of claim 1, wherein iteratively deforming the initial contour comprises performing a first contour definition of the given structure along a first direction followed by a second contour definition of the given structure along a second direction, and merging the first and second contour definitions such that data from the second contour definition complements data from the first contour definition.

4. The method of claim 3, further comprising obtaining a points cloud representative of the first contour definition, converting the points cloud into a mesh model, ...
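
A loose sketch of the iterative deformation: a seed point becomes a small closed contour that is pushed outward along its normals, with each point's step size re-derived from local image evidence at every iteration. The gradient-based constraint and the fixed outward normals are assumptions standing in for the method's actual constraint set.

```python
# Hedged sketch (plain active-contour-style update, not the described method's
# actual constraints): grow a contour from a seed and re-weight each point's
# step locally from the image after every iteration.
import numpy as np

def segment(image, seed, n_points=64, iterations=200, base_step=1.0):
    """image: 2-D array; seed: (row, col) inside the structure."""
    h, w = image.shape
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    # Initial contour: a tiny circle around the seed point.
    pts = np.stack([seed[0] + 2 * np.sin(angles),
                    seed[1] + 2 * np.cos(angles)], axis=1)
    normals = np.stack([np.sin(angles), np.cos(angles)], axis=1)

    gy, gx = np.gradient(image.astype(float))
    for _ in range(iterations):
        ys = np.clip(pts[:, 0].astype(int), 0, h - 1)
        xs = np.clip(pts[:, 1].astype(int), 0, w - 1)
        # Local constraint per point, updated each iteration: the step shrinks
        # where the gradient magnitude is high (assumed boundary evidence).
        edge = np.hypot(gy[ys, xs], gx[ys, xs])
        step = base_step / (1.0 + edge)
        pts = pts + normals * step[:, None]      # expand along outward normals
    return pts
```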

Publication date: 18-02-2016

Skew detection

Number: US20160050338A1
Assignee: Hewlett Packard Development Co LP

Presented is a skew detection apparatus. In one form, the apparatus estimates skew based on the locations of a set of foreground content pixels or a set of edge pixels that are nearest to the side of an image of a document. In another form, the apparatus includes a skew estimation unit adapted to estimate skew based on the orientation of foreground or background content in the interior of a document. In another form, the apparatus estimates skew using segments of an image of a document. Also presented is a document image processing apparatus including the skew detection apparatus.
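
One of the described forms, estimating skew from the foreground pixels nearest to a side of the document image, can be sketched as a line fit. The least-squares fit and the choice of the top side are assumptions for illustration, not the apparatus's actual estimator.

```python
# Hedged sketch (least-squares fit assumed): the topmost content pixel in each
# column is located and a straight line is fitted through those locations;
# the line's slope gives the estimated skew angle.
import numpy as np

def estimate_skew_degrees(binary_doc):
    """binary_doc: 2-D bool array, True = foreground content pixel."""
    cols = np.arange(binary_doc.shape[1])
    has_content = binary_doc.any(axis=0)
    # Row index of the first (topmost) foreground pixel in each column.
    top_rows = np.argmax(binary_doc, axis=0)
    x, y = cols[has_content], top_rows[has_content]
    if x.size < 2:
        return 0.0
    slope, _ = np.polyfit(x, y, 1)        # fit y = slope * x + intercept
    return float(np.degrees(np.arctan(slope)))
```

As a usage note, estimate_skew_degrees(img > 128) on a grayscale page scan returns a signed angle that could then be fed to a deskewing rotation.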
