Total found: 5793. Displayed: 200.
11-09-2023 publication date

DEVICE AND METHOD FOR APPLYING TOPICAL COMPOSITIONS WITH GUIDANCE BY PROJECTED FIDUCIAL MARKERS

Number: RU2803227C2

The group of inventions relates to medicine, specifically to applying a composition to a surface to be treated. A device and a method are proposed in which the device may include an optical radiation source that projects fiducial markers onto the skin; a detector that acquires image data corresponding to an image of the skin region marked with the fiducial markers; an applicator that applies the composition to a site within the skin region; and a processing unit. The processing unit acquires the image data, analyzes it to determine the morphology of the skin region from the fiducial markers captured within the image, identifies, based on the morphology, a zone within the image corresponding to the site targeted by the applicator unit, analyzes the image data to determine whether the identified zone corresponds to a skin artifact, and directs the applicator unit to selectively apply the composition to the site if a skin artifact is detected in the identified zone. The group ...

13-05-2020 publication date

GRAIN QUALITY MONITORING

Number: RU2720867C2

The invention relates to agriculture, in particular to improving grain harvesting. A method of controlling the operation of a harvesting machine comprises capturing an image of the grain mass carried by the harvester; applying a feature extractor to the image to determine a feature of the grain mass in the image; for each of a plurality of different sample locations in the image, determining a classification score based on the grain-mass feature at that sample location to represent a classification of the material at that location; outputting a signal indicative of the quality of the grain mass in the image based on the aggregate of the classification scores at the different sample locations; and automatically adjusting the operating settings of the harvester based on the signal as the harvester traverses the field and collects the grain mass. The control method is implemented by a computer-readable medium carrying a program for operating the harvester. The proposed control method ...

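The per-location scoring and aggregation step lends itself to a compact illustration. The Python sketch below is not the patented implementation: the feature extractor (local patch variance), the score mapping and the fan-speed adjustment rule are all assumptions chosen only for the example.

```python
# Illustrative sketch: score image patches at sample locations, aggregate the scores
# into a quality signal, and derive a setting change from that signal.
import numpy as np

def patch_scores(image: np.ndarray, patch: int = 32) -> np.ndarray:
    """Return one classification score per sample location (here: per patch)."""
    h, w = image.shape
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            window = image[y:y + patch, x:x + patch]
            feature = window.var()                 # stand-in feature extractor
            scores.append(1.0 / (1.0 + feature))   # higher score = cleaner material
    return np.array(scores)

def quality_signal(scores: np.ndarray) -> float:
    """Aggregate per-location scores into a single quality value in [0, 1]."""
    return float(scores.mean())

def adjust_setting(current_fan_speed: float, quality: float,
                   target: float = 0.8, gain: float = 100.0) -> float:
    """Proportional adjustment of an operating setting based on the quality signal."""
    return current_fan_speed + gain * (target - quality)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((256, 256))                 # placeholder camera frame
    q = quality_signal(patch_scores(frame))
    print("quality:", round(q, 3), "new fan speed:", round(adjust_setting(900.0, q), 1))
```
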
11-11-2022 publication date

DEVICE AND METHOD FOR RECOGNIZING AGRICULTURAL FIELD BOUNDARIES USING DEEP LEARNING ON EARTH REMOTE SENSING DATA

Number: RU2783296C1

The present technical solution relates to the field of computing. The technical result is increased accuracy in determining the boundary of an agricultural field. The technical result is achieved through steps in which: a multispectral image is obtained; a segmentation mask for the arable-land class, a segmentation mask for the field-boundary class and a mask of distances from the field centre to its nearest boundary are determined for the image; a topological mask indicating the presence of arable land in the multispectral image is formed; seed points are determined from the values of the distance mask; connected clusters of seed points are labelled with individual identifiers; the pixels of the multispectral image are assigned to the clusters defined above, growing out from the connected clusters of seed points using the values of the distance mask; regions separately connected under a single identifier are obtained ...

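The seed-and-grow step of this pipeline can be illustrated with standard tools. The sketch below is an approximation under stated assumptions (the seed threshold, and the use of scikit-image's watershed as the growing mechanism), not the claimed device: seeds are taken where the centre-to-boundary distance is large, connected seed clusters are labelled, and each label is grown back towards the field boundary within the arable-land mask.

```python
# Minimal seed-and-grow sketch on a distance-to-boundary mask.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def delineate_fields(distance_map: np.ndarray, arable_mask: np.ndarray,
                     seed_quantile: float = 0.7) -> np.ndarray:
    """Return a label image with one integer id per recovered field region."""
    # 1. Seed points: pixels far from any boundary (high distance values).
    seeds = distance_map > np.quantile(distance_map[arable_mask], seed_quantile)
    seeds &= arable_mask
    # 2. Connected seed clusters get individual identifiers.
    markers, _ = ndi.label(seeds)
    # 3. Grow the clusters back towards the boundaries using the distance values.
    return watershed(-distance_map, markers=markers, mask=arable_mask)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask = np.ones((128, 128), dtype=bool)
    # Synthetic "distance to nearest boundary" for two fields split at column 64.
    dist = np.minimum(np.arange(128)[None, :] % 64, 63 - np.arange(128)[None, :] % 64)
    dist = dist.astype(float) + rng.normal(0, 0.1, (128, 128))
    print(np.unique(delineate_fields(dist, mask)))   # two field identifiers
```
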
09-02-2021 publication date

METHOD FOR INTERACTIVE SEGMENTATION OF AN OBJECT IN AN IMAGE AND ELECTRONIC COMPUTING DEVICE FOR IMPLEMENTING IT

Number: RU2742701C1

The invention relates to the fields of computer vision and computer graphics using neural networks and machine learning for interactive segmentation of objects in images, and in particular to a method for interactive segmentation of an object in an image and an electronic computing device for implementing this method. The technical result consists in providing segmentation of one or more objects in an image selected by the user in an interactive mode. The technical result is achieved by implementing a scheme for refining features by means of backward passes (f-BRS), which solves the optimization problem with respect to auxiliary variables instead of the network inputs and requires a forward and a backward pass only through a small part of the network (i.e. the last few layers). A set of auxiliary parameters that are invariant to position in the image is introduced for the optimization. Optimization with respect to these parameters leads to an effect similar to that in ...

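The core trick, adjusting position-invariant auxiliary parameters by backpropagating only through the last layers, can be sketched as follows. This toy PyTorch example uses a dummy backbone output and a one-layer head; it illustrates the f-BRS idea as summarised above and is not the patented code.

```python
# Toy f-BRS-style refinement: optimise a per-channel scale and bias inserted after
# the (frozen) backbone so the prediction agrees with the user's click labels.
import torch

def refine_with_clicks(features, head, clicks, n_steps=20, lr=0.1):
    """features: (1, C, H, W) frozen backbone output; clicks: list of (y, x, label)."""
    C = features.shape[1]
    scale = torch.ones(1, C, 1, 1, requires_grad=True)   # auxiliary parameters
    bias = torch.zeros(1, C, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([scale, bias], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        logits = head(features * scale + bias)            # forward through head only
        loss = sum(torch.nn.functional.binary_cross_entropy_with_logits(
                       logits[0, 0, y, x], torch.tensor(float(label)))
                   for y, x, label in clicks)
        loss.backward()                                   # gradients stop at features
        opt.step()
    return torch.sigmoid(head(features * scale + bias)).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    backbone_features = torch.randn(1, 8, 32, 32)         # pretend backbone output
    head = torch.nn.Conv2d(8, 1, kernel_size=1)           # pretend segmentation head
    clicks = [(10, 10, 1), (25, 25, 0)]                   # positive / negative clicks
    mask = refine_with_clicks(backbone_features, head, clicks)
    print(mask.shape, float(mask[0, 0, 10, 10]), float(mask[0, 0, 25, 25]))
```
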
31-05-2021 publication date

METHOD FOR DETECTION AND AUTOMATIC TRACKING OF TARGET-DESIGNATION OBJECTS BY THE ELECTRO-OPTICAL SYSTEM OF AN UNMANNED AERIAL VEHICLE

Number: RU2748763C1

The invention relates to digital image processing, in particular to detecting target-designation objects in images obtained from the electro-optical systems of unmanned aerial vehicles (UAVs) and to their automatic tracking. The technical result is increased informativeness of the images for identifying and automatically tracking objects under anomalous conditions. A method for detecting and automatically tracking objects is proposed in which the distance of each region of the designated object from the designation point is analyzed, together with whether the designation point belongs to the found region. Small regions are merged; the designation region is divided into several rectangular regions whose centre lies at the designation point. A mathematical morphological description of the designated object is also carried out; regions that do not contain the designation point and whose area is smaller than the minimum object area are excluded; and initial training of the object model representation and training of the corresponding ...

04-04-1996 publication date

Object shape identification system

Number: DE0004443728C1

The shape identification system detects the contour of the object and has the detected contour represented as a number of points or pixels, converted into a contour function independent of the position of the object, by transformation of the point or pixel values. The obtained contour function is compared with a reference contour function to allow the object shape to be identified. Pref. the curvature of the contour is determined for the transformation, by determining the angles between each 2 contour points or pixels from a given source point.

16-08-2017 publication date

Image reconstruction system and method

Number: GB0002547360A

A method and system for image reconstruction are provided. A projection image of a projection object may be obtained (420). A processed projection image may be generated based on the projection image through one or more pre-process operations (430). A reconstructed image including an artifact may be reconstructed based on the processed projection image (440). The artifact may be a detector edge artifact, a projection object edge artifact, and a serrated artifact. The detector edge artifact, the projection object edge artifact, and the serrated artifact may be removed from the reconstructed image (450).

08-12-2021 publication date

Image data pre-processing for neural networks

Number: GB2585232B
Assignee: APICAL LTD, Apical Ltd.

18-05-2022 publication date

Identifying background features using LIDAR

Number: GB0002601024A
Author: TOLGA OZASLAN [US]

A method and vehicle 100 mounted system comprising at least one LIDAR 123 device configured to detected reflected radiation from objects proximate to the vehicle in the environment 190 and generate point cloud information. A processor communicatively coupled to the LIDAR device and configured to receive the LIDAR point cloud data, model the point cloud information as a sphere, based on the sphere identifying faces corresponding to respective clusters of points of the received LIDAR point cloud information, generating a graph data structure that includes vertices corresponding to the respective faces of the identified faces, connecting vertices of the graph data structure based on adjacency of the underlying points and characteristics of the faces. Based on the graph data, identifying subgraphs each including connected vertices and based on analysis of characteristics of the faces in the subgraphs identifying subgraphs that correspond to a background feature of the environment scanned by ...

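A loose illustration of the face/graph construction described above: points are binned onto a spherical grid, occupied cells act as faces characterised by their mean range, angularly adjacent cells with similar range are connected, and connected components are reported. The binning resolution, range tolerance and the treatment of large components as background are assumptions made for this sketch, not the claimed method.

```python
# Simplified spherical-grid / graph-component sketch over a LIDAR point cloud.
import numpy as np
from collections import defaultdict

def spherical_components(points, az_bins=72, el_bins=16, range_tol=0.5):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.digitize(np.arctan2(y, x), np.linspace(-np.pi, np.pi, az_bins)) - 1
    el = np.digitize(np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1)),
                     np.linspace(-np.pi / 2, np.pi / 2, el_bins)) - 1
    # One "face" per occupied cell, characterised by its mean range.
    cells = defaultdict(list)
    for i in range(len(points)):
        cells[(az[i], el[i])].append(r[i])
    face_range = {c: float(np.mean(v)) for c, v in cells.items()}
    # Union-find over faces; connect angular neighbours with similar range.
    parent = {c: c for c in face_range}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    def union(a, b):
        parent[find(a)] = find(b)
    for (a, e), r_mean in face_range.items():
        for nb in ((a + 1, e), (a, e + 1)):
            if nb in face_range and abs(face_range[nb] - r_mean) < range_tol:
                union((a, e), nb)
    comps = defaultdict(list)
    for c in face_range:
        comps[find(c)].append(c)
    return list(comps.values())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ground = np.column_stack([rng.uniform(-20, 20, 2000),
                              rng.uniform(-20, 20, 2000),
                              np.full(2000, -1.5)])   # flat "background" surface
    comps = spherical_components(ground)
    print("components:", len(comps), "largest:", max(map(len, comps)))
```
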
13-05-2021 publication date

Methods and systems for ocular imaging, diagnosis and prognosis

Number: AU2021202217A1

Abstract: Embodiments of the invention involve combining data representative of the eye obtained from multiple modalities into a virtual model of the eye. The multiple modalities indicate anatomical, physiological, and/or functional features of the eye. The data from different modalities is registered in order to combine the data into the virtual model. Further embodiments involve analysing eye data, for example in the form of the virtual model, using neural networks to obtain insights about medical conditions of the eye, for example the diagnosis or prognosis of conditions, and/or predicting how the eye will respond to certain treatments. WO 2020/055272 PCT/NZ2019/050121 [drawing axes: x = inferior/superior, y = temporal/nasal] ...

23-04-2020 publication date

SYSTEM FOR CONTROLLING AN EMULSIFICATION PROCESS

Number: CA3116835A1

A system (1A) and method (IB) for controlling an emulsification process including the steps of acquiring (9) images (3) such as micrographs (2) of an emulsification process at preset intervals between a start and an end of the emulsification process; detecting (10) selected droplet characteristics such as size and count using image segmentation such as a histogram-based technique (5); analysing (11) the measured droplet characteristics (6); comparing (12) the measured droplet characteristics with a desired droplet characteristic specification(S); and terminating the emulsification process when said desired droplet characteristic is achieved.

09-04-2020 publication date

METHOD FOR AUTOMATIC SHAPE QUANTIFICATION OF AN OPTIC NERVE HEAD

Number: CA3114482A1

The invention relates to a method and a computer program for automatic shape quantification of an optic nerve head from three-dimensional image data (1) acquired with optical coherence tomography, comprising the steps of: a) Providing (100) three-dimensional image data (1) of the retina, the image data comprising at least a portion of the optic nerve head, wherein the image data comprises pixels with associated pixel values; b) In the three-dimensional image data (1) identifying (200, 300) anatomic portions of the optic nerve head, the anatomic portions comprising a retinal pigment epithelium (RPE) portion (3) and an inner limiting membrane (ILM) portion (2); c) Determining an RPE polygon mesh (30) for a lower boundary of the retinal pigment epithelium portion (3), wherein the RPE polygon mesh (30) extends along the lower boundary of the retinal pigment epithelium portion (3); d) Determining an ILM polygon mesh (20) for the inner limiting membrane portion (2), wherein the ILM polygon mesh ...

24-07-2020 publication date

Storage battery leakage detection method based on neural network and thermal image graph

Number: CN0111445462A

23-06-2020 publication date

CT image segmentation system based on attention convolution neural network

Number: CN0111325751A

28-02-2020 publication date

Image collaborative segmentation method based on minimum fuzzy divergence

Number: CN0110853064A

17-03-2020 publication date

U-type network for fundus image vessel segmentation

Number: CN0110889859A

29-12-2017 publication date

Cotton identification and positioning method based on binocular camera

Number: CN0107527367A
Author: LIU XIANDA, ZHANG GUO

27-12-2019 publication date

Image blurring region positioning method based on edge point frequency-domain and spatial-domain characteristics

Number: CN0110619647A

28-02-2020 publication date

Piloted driving vehicle training method based on virtual environment and deep double-Q network

Number: CN0110850877A

16-11-2018 publication date

Automatic segmentation method and device for MRI images

Number: CN0108830326A

01-05-2020 publication date

Method for identifying a missing assembly-nut fault on a rail wagon cross beam

Number: CN0111091541A

17-01-2020 publication date

High-voltage circuit breaker rapid overhaul method based on big data technology

Number: CN0110703075A

24-04-2018 publication date

Target image marking device, method and apparatus

Number: CN0104094315B

30-04-2019 publication date

A turbine blade temperature field image processing method

Number: CN0109697723A

24-05-2019 publication date

Urban waterlogging ponding depth information extraction method based on video data

Number: CN0109801327A

02-10-2020 publication date

Complex background power line extraction method based on digital image features

Number: CN0111739042A

15-12-2020 publication date

Number: CN0112085703A

25-12-2018 publication date

An image screening processing method applied to a tilt sensor

Number: CN0109087320A
Author: GONG XIAOLIN

09-08-2019 publication date

Pedestrian profile tracking method fusing RGBD multi-modal information

Number: CN0110111351A

22-02-2017 publication date

Object segmentation method for images with non-uniform severe motion degradation

Number: CN0106447681A

21-12-2018 publication date

Rail surface defect detection method based on deep learning

Number: CN0109064462A

14-08-2020 publication date

Wheat ear detection and counting method based on the deep learning point supervision idea

Number: CN0110766690B

05-11-2019 publication date

Solid waste object segmentation method for images with degraded visual characteristics

Number: CN0107527350B

16-03-2018 publication date

Image segmentation method, device and apparatus

Number: CN0104156947B

06-07-2018 publication date

A red image detection method

Number: CN0103914849B

27-10-2020 publication date

Automatic segmentation method and system for the hierarchical structure of esophageal endoscopic OCT images

Number: CN0108765388B

02-05-2023 publication date

Method and system for acquiring medical image segmentation model, electronic equipment and medium

Number: CN116051579A

The invention provides a medical image segmentation model acquisition method and system, a vascular medical image segmentation method, electronic equipment and a medium. The medical image segmentation model acquisition method comprises the following steps: acquiring a first training sample of a medical image segmentation model, wherein the first training sample comprises a first medical training image and a first label image corresponding to the first medical training image; shape constraint information of a target area in the first medical training image is acquired; and according to the first medical training image and the shape constraint information of the target area, training the initialized first neural network model until a first preset training ending condition is satisfied, and obtaining the medical image segmentation model. According to the invention, the segmentation efficiency and the segmentation precision of the medical image and the stability of the segmentation result can ...

14-07-2023 publication date

Personalized target spot selection method oriented to noninvasive nerve regulation technology

Number: CN116433967A

The invention relates to a personalized target spot selection method for a noninvasive nerve regulation technology, which comprises the following steps: preprocessing fMRI data in nuclear magnetic resonance scanning data of a current patient to obtain fMRI brain image feature data, inputting a pre-trained subtype classification model, and obtaining a subtype tag to which the current patient belongs and all feature voxels of the subtype tag; t1 weighted magnetic resonance imaging data of sMRI data in the magnetic resonance imaging scanning data of the current patient is preprocessed, and a skull outer contour line and a transformation matrix of fMRI data are obtained; coordinate transformation is carried out on the feature voxels, the distance between the voxels on each skull outer contour line and the feature voxels is calculated, response feature voxels are marked, and the number of the response feature voxels is counted; and sorting according to the number, and finally selecting voxels ...

28-04-2023 publication date

Remote sensing image instance segmentation method and device

Number: CN116030080A
Author: LIU SIQI

The invention provides a remote sensing image instance segmentation method and device. The method comprises the following steps: acquiring a target remote sensing image; inputting the target remote sensing image into the image instance segmentation model to obtain an image instance segmentation result output by the image instance segmentation model; wherein the image instance segmentation model is obtained by training according to a remote sensing training image, an instance segmentation vector truth value corresponding to the remote sensing training image and an instance category label, and the instance segmentation vector truth value is obtained by performing contour point extraction on the remote sensing training image based on a slope difference. Through the image instance segmentation model, the segmentation precision is remarkably improved, the model parameter quantity is greatly reduced, the segmentation operation complexity is reduced, and the method is friendly to small targets ...

04-04-2023 publication date

Osteoarthritis-oriented three-dimensional cartilage cell indentation image segmentation method

Number: CN115908456A
Author: XU HAO

The invention discloses a cartilage cell indentation SR-PCT three-dimensional image segmentation method for osteoarthritis. The implementation steps are as follows: (1) carrying out image segmentation of cell pits (including cartilage cell pits and subchondral bone cell pits) by utilizing a marker-controlled watershed segmentation algorithm based on monogenic-signal local phase characteristics, together with an inverse operation and connected-domain analysis method; and (2) performing image segmentation of the calcified cartilage in the calcified bone tissue by using the nnU-Net deep learning method, and using the calcified cartilage ...

04-04-2023 publication date

Visual identification-measurement-positioning integrated method for mushroom picking robot

Number: CN115909029A
Author: LU WEI, ZOU MINGXUAN

The invention discloses a mushroom picking robot visual identification-measurement-positioning integrated method, which is characterized in that the mushroom picking robot visual identification-measurement-positioning requirements can be integrated, target mushroom identification is realized through a YOLO v5-TL mushroom identification model, and the mushroom picking robot visual identification efficiency can be improved; an adhesion judgment module is used for extracting the edge contour of a single mushroom and outputting an edge point pixel coordinate matrix and a central point pixel coordinate of the single mushroom, so that the problem of adhered mushroom identification can be solved; and the world coordinates of the central point of the single mushroom and the diameter of the single mushroom are output through the coordinate conversion module and the size measurement module, so that the size measurement and positioning of the target mushroom are realized.

22-02-2018 publication date

TEACHING TOY KIT AND CIRCUIT ELEMENT AND ELECTRIC WIRE IDENTIFICATION METHOD THEREOF

Number: WO2018032631A1

A teaching toy kit and a circuit element and electric wire identification method thereof. The teaching toy kit comprises a bottom plate (1), circuit elements (2), and electric wires (3). The bottom plate (1) is placed on a plane, and the circuit elements (2) and the electric wires (3) are placed on the bottom plate (1). The method comprises: placing the circuit elements (2) and the electric wires (3) on the bottom plate (1) of a game; installing a game program in a tablet computer; by means of a camera of the tablet computer, acquiring images of the circuit elements (2) and the electric wires (3) placed on the bottom plate (1); identifying the circuit elements (2) and the electric wires (3) according to predefined colors, contour information, and color code information; allowing a child to connect the circuit elements (2) and the electric wires(3); and determining whether a connected circuit is correct. The present invention enhances the imagination of a child, makes the game more fun, ...

05-03-2020 publication date

METHOD AND DEVICE FOR POINT CLOUD DATA PARTITIONING, STORAGE MEDIUM, AND ELECTRONIC DEVICE

Number: WO2020043041A1
Author: ZENG, Chao

Disclosed in the present application are a method and device for point cloud data partitioning, a storage medium, and an electronic device. The method comprises: acquiring target point cloud data, wherein the target point cloud data is data obtained by using a laser beam to scan target objects surrounding a vehicle; clustering the target point cloud data to obtain a plurality of first data sets, wherein feature points represented by the point cloud data comprised in each first data set is fit on the same partitioning segment, and the feature points are points on the target objects; merging the plurality of first data sets according to the distance between the plurality of partitioning segments to obtain second data sets, wherein the second data sets comprise at least one first data set. The present application solves the technical problem of inefficient point cloud partitioning in the related art.

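The two-stage cluster-then-merge scheme can be sketched with off-the-shelf clustering. The example below uses DBSCAN as a stand-in for the first clustering step and a simple minimum point-to-point distance rule for merging clusters whose segments lie close to each other; both choices are assumptions for illustration, not the disclosed algorithm.

```python
# Cluster scan points into "first data sets", then merge nearby clusters into
# "second data sets".
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial.distance import cdist

def partition_points(points: np.ndarray, eps=0.5, merge_dist=1.0):
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(points)   # first data sets
    clusters = [points[labels == k] for k in sorted(set(labels)) if k != -1]
    # Merge clusters whose minimum point-to-point distance is below merge_dist.
    merged_into = list(range(len(clusters)))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if cdist(clusters[i], clusters[j]).min() < merge_dist:
                root = merged_into[i]
                merged_into = [root if m == merged_into[j] else m for m in merged_into]
    groups = {}
    for idx, root in enumerate(merged_into):
        groups.setdefault(root, []).append(clusters[idx])
    return [np.vstack(g) for g in groups.values()]                # second data sets

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    wall_a = np.column_stack([np.linspace(0, 4, 40), np.zeros(40)]) + rng.normal(0, 0.02, (40, 2))
    wall_b = np.column_stack([np.linspace(4.8, 8, 40), np.zeros(40)]) + rng.normal(0, 0.02, (40, 2))
    car = np.column_stack([np.linspace(2, 3, 20), np.full(20, 5.0)]) + rng.normal(0, 0.02, (20, 2))
    sets = partition_points(np.vstack([wall_a, wall_b, car]))
    print("second data sets:", len(sets), [len(s) for s in sets])
```
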
26-11-2020 publication date

METHODS AND SYSTEMS FOR MEASURING THE TEXTURE OF CARPET

Number: WO2020237069A1

Methods and systems are disclosed for analyzing one or more images of a textile to determine a presence or absence of defects. In one example, an image of at least a portion of a textile may be obtained and compared to a reference image of a reference textile. Based on the comparison, one or more areas indicative of a height variation between the textile and the reference textile may be determined. An action may be performed based on the one or more areas indicative of the height variation.

15-11-2018 publication date

METHOD OF SEGMENTATION OF A THREE-DIMENSIONAL IMAGE FOR GENERATING A MODEL OF A MYOCARDIAL WALL FOR THE DETECTION OF AT LEAST ONE SINGULAR ZONE OF ELECTRICAL CIRCULATION

Number: WO2018206796A1

The method of segmentation of a three-dimensional image for generating a model of a myocardial wall comprises: • Recording (ACQ) a three-dimensional image of a wall of the myocardium, said wall delimiting at least one cavity of the heart; • Segmenting (SEG_1) a continuous part of the wall (13) into at least a first volume (V1) having a thickness less than a first predefined thickness threshold (S1) of between 0 and 5 mm and a second volume (V2) of a continuous part of the wall (13) having a thickness greater than the first threshold (S1); • Generating a model of the wall (MOD_P) of the myocardium, where the continuous part of the wall of the myocardium is modelled (MOD_VOL) according to at least two volumes (V1, V2) that continue each other.

28-04-2022 publication date

DETECTING ANATOMICAL ABNORMALITIES BY SEGMENTATION RESULTS WITH AND WITHOUT SHAPE PRIORS

Number: WO2022084074A1

A system and related method for image processing. The system comprises an input (IN) interface for receiving two segmentation maps for an input image. The two segmentation maps (11,12) obtained by respective segmentors, a first segmentor (SEG1) and a second segmentor (SEG2). The first segmentor (SEG1) implements a shape-prior-based segmentation algorithm. The second segmentor (SEG2) implements a segmentation algorithm that is not based on a shape-prior, or at least the second segmentor (SEG2) accounts for one or more shape priors at a lower weight as compared to the first segmentor (SEG1). A differentiator (DIF) configured to ascertain a difference between the two segmentation maps. The system may allow detection of abnormalities.

12-08-2021 publication date

FLUID FLOW RATE DETERMINATIONS USING VELOCITY VECTOR MAPS

Number: US20210244291A1

A method of determining volume flow rate of a bodily fluid in a biological conduit includes determining a cross-sectional area of a biological conduit using a velocity vector map representing moving entities or moving fluid portions in a bodily fluid flowing within the biological conduit, calculating an average speed of the moving entities or moving fluid portions in the bodily fluid flowing across the determined cross-sectional area of the biological conduit, and calculating volume flow rate of the bodily fluid in the biological conduit from the determined cross-sectional area and the calculated instantaneous or average speed. A series of velocity vector maps may be collected over time so as to generate a flow rate profile representing flow rate as a function of time.

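The final computation reduces to multiplying the derived cross-sectional area by the average speed, Q = A x v. A minimal numeric illustration with assumed example values (circular cross-section, invented diameter and speed, not values from the patent):

```python
# Volume flow rate from cross-sectional area and average speed.
import math

diameter_mm = 6.0                      # assumed vessel diameter
area_m2 = math.pi * (diameter_mm / 2 / 1000) ** 2
avg_speed_m_s = 0.25                   # assumed mean speed from the velocity vector map
q_m3_s = area_m2 * avg_speed_m_s
print(f"Q = {q_m3_s * 1e6 * 60:.1f} mL/min")
```
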
12-07-2018 publication date

INTERACTIVE IMAGE SEGMENTING APPARATUS AND METHOD

Number: US20180197292A1
Author: Won Sik KIM

An interactive image segmenting apparatus and method are provided. The image segmenting apparatus and corresponding method include a boundary detector, a condition generator, and a boundary modifier. The boundary detector is configured to detect a boundary from an image using an image segmentation process. The feedback receiver is configured to receive information about the detected boundary. The condition generator is configured to generate a constraint for the image segmentation process based on the information. The boundary modifier is configured to modify the detected boundary by applying the generated constraint to the image segmentation process.

19-10-2017 publication date

ANALYZING AORTIC VALVE CALCIFICATION

Number: US20170301096A1

A system and a method are provided for analyzing an image of an aortic valve structure to enable assessment of aortic valve calcifications. The system comprises an image interface for obtaining an image of an aortic valve structure, the aortic valve structure comprising aortic valve leaflets and an aortic bulbus. The system further comprises a segmentation subsystem for segmenting the aortic valve structure in the image to obtain a segmentation of the aortic valve structure. The system further comprises an identification subsystem for identifying a calcification on the aortic valve leaflets by analyzing the image of the aortic valve structure. The system further comprises an analysis subsystem configured for determining a centerline of the aortic bulbus by analyzing the segmentation of the aortic valve structure, and for projecting the calcification from the centerline of the aortic bulbus onto the aortic bulbus, thereby obtaining a projection indicating a location of the calcification ...

23-06-2020 publication date

System and method for image segmentation, bone model generation and modification, and surgical planning

Number: US0010687856B2

A computer-implemented method of preoperatively planning a surgical procedure on a knee of a patient including determining femoral condyle vectors and tibial plateau vectors based on image data of the knee, the femoral condyle vectors and the tibial plateau vectors corresponding to motion vectors of the femoral condyles and the tibial plateau as they move relative to each other. The method may also include modifying a bone model representative of at least one of the femur and the tibia into a modified bone model based on the femoral condyle vectors and the tibial plateau vectors. And the method may further include determining coordinate locations for a resection of the modified bone model.

12-06-2018 publication date

Image processing method and image processing apparatus

Number: US0009996762B2

An image processing apparatus performs an image recognition process, such as pattern matching or contour detection, on image data supplied from an image pickup device, and stores history data of the image recognition process in an external storage apparatus. In this case, an extraction image is extracted from an extraction region determined in accordance with the image recognition process performed on the input image data and is stored in the external storage device as history data. Furthermore, the history data logged in the external storage device may include a compressed image that is obtained by compressing, using lossy compression, the entire image data subjected to the image processing performed by the image processing apparatus.

01-09-2020 publication date

Beautifying freeform drawings using arc and circle center snapping

Number: US0010762674B2
Assignee: Adobe Inc.

Embodiments of the present invention are directed to beautifying freeform input paths in accordance with paths existing in the drawing (i.e., resolved paths). In some embodiments of the present invention, freeform input paths of a curved format can be modified or replaced to more precisely illustrate a path desired by a user. As such, a user can provide a freeform input path that resembles a path of interest by the user, but is not as precise as desired. Based on existing paths in the electronic drawing, a path suggestion(s) can be generated to rectify, modify, or replace the input path with a more precise path. In some cases, a user can then select a desired path suggestion, and the selected path then replaces the initially provided freeform input path.

13-10-2020 publication date

Systems and methods for recognizing symbols in images

Number: US0010803337B2

A computer-implemented method comprises generating a description of a character symbol from a binarized image; comparing a template for the character symbol with the description of the character symbol based on a reference description, wherein the template comprises a grid of cells, a set of local features which may be present in the grid of cells, the reference description specifying which member of the set of local features should be present or absent in the grid of cells, and a threshold of an accepted deviation with the description of the character symbol; assigning a penalty value to the description of the character symbol via a cost function when a discrepancy exists based on the comparing; selecting the template as a match candidate for the character symbol when the penalty value is below the threshold; recognizing the character symbol based on the selecting.

17-10-2019 publication date

PARALLELISM IN DISPARITY MAP GENERATION

Number: US20190318494A1

Input images are partitioned into non-overlapping segments perpendicular to a disparity dimension of the input images. Each segment includes a contiguous region of pixels spanning from a first edge to a second edge of the image, with the two edges parallel to the disparity dimension. In some aspects, contiguous input image segments are assigned in a “round robin” manner to a set of sub-images. Each pair of input images generates a corresponding pair of sub-image sets. Semi-global matching processes are then performed on pairs of corresponding sub-images generated from each input image. The SGM processes may be run in parallel, reducing an elapsed time to generate respective disparity sub-maps. The disparity sub-maps are then combined to provide a single disparity map of equivalent size to the original two input images.

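The partitioning and recombination logic, independent of the actual semi-global matching, can be sketched as follows. The strip width, the thread pool and the trivial stand-in matcher are assumptions for illustration only and do not reproduce the SGM processing described above.

```python
# Round-robin strip partitioning of an image pair, parallel matching of the
# resulting sub-image pairs, and re-interleaving into a single disparity map.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def split_round_robin(img, n_sub, strip=8):
    strips = [img[:, c:c + strip] for c in range(0, img.shape[1], strip)]
    return [np.hstack(strips[i::n_sub]) for i in range(n_sub)]

def dummy_matcher(left_sub, right_sub):
    return np.abs(left_sub.astype(float) - right_sub.astype(float))  # placeholder

def parallel_disparity(left, right, n_sub=4, strip=8):
    left_subs = split_round_robin(left, n_sub, strip)
    right_subs = split_round_robin(right, n_sub, strip)
    with ThreadPoolExecutor(max_workers=n_sub) as pool:
        sub_maps = list(pool.map(dummy_matcher, left_subs, right_subs))
    # Re-interleave the strips of each disparity sub-map into the full map.
    out = np.empty_like(left, dtype=float)
    for i, sub in enumerate(sub_maps):
        for k in range(sub.shape[1] // strip):
            col = (k * n_sub + i) * strip
            out[:, col:col + strip] = sub[:, k * strip:(k + 1) * strip]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    L = rng.integers(0, 255, (64, 128))
    R = rng.integers(0, 255, (64, 128))
    print(parallel_disparity(L, R).shape)
```
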
05-12-2019 publication date

METHOD FOR DETECTING RAISED PAVEMENT MARKERS, COMPUTER PROGRAM PRODUCT AND CAMERA SYSTEM FOR A VEHICLE

Number: US2019370563A1

A method is disclosed for detecting raised pavement markers in an environment of a vehicle by a camera system. The method includes capturing at least one first image of at least one first part of the environment by at least one first camera of the camera system and analyzing the at least one first image and determining whether at least one first pavement marker is present in the environment in dependency of a result of the analysis of the at least one first image. The method further includes capturing at least one second image of at least one second part of the environment by at least one second camera of the camera system and analyzing the at least one second image and determining whether the at least one first or at least one second pavement marker is present in the environment in dependency of a result of the analysis.

13-11-2018 publication date

Trailer type identification system

Number: US0010127459B2

A trailer type identification system is provided herein. The system includes an imaging device for capturing images of a trailer connected to a vehicle, and a controller for analyzing the captured images. The controller identifies vehicle and trailer contours, predicts a trailer type based on detection of a connection between the identified vehicle and trailer contours, and validates the prediction if the identified trailer contour exhibits motion during a vehicle turn event.

04-12-2018 publication date

Interactive segmentation

Number: US0010147185B2

A method for three-dimensional interactive segmentation, including: receiving a three-dimensional medical image of an interior volume of a patient's body; automatically performing three dimensional segmentation on the three dimensional medical image to detect and define a region of interest, wherein the performing of the three dimensional segmentation comprises automatically determining a boundary defining the region of interest; receiving from a user spatial information indicating one or more regions of disagreement in the three-dimensional medical image with respect to the determined boundary; and updating the three dimensional segmentation of the three dimensional medical image based on the spatial information received from the user, wherein the updating comprises updating the determined boundary based on the spatial information to redefine the area of interest.

01-07-2021 publication date

METHOD FOR GENERATING ROOF OUTLINES FROM LATERAL IMAGES

Number: US20210201524A1

A computer system generates an outline of a roof of a structure based on a set of lateral images depicting the structure. For each image in the set of lateral images, one or more rooflines corresponding to the roof of the structure are determined. The computer system determines how the rooflines connect to one another. Based on the determination, the rooflines are connected to generate an outline of the roof.

09-07-2020 publication date

SYSTEMS AND METHODS FOR MOBILE IMAGE CAPTURE AND PROCESSING

Number: US20200219202A1

In several embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. The techniques include capturing image data depicting a document; defining a plurality of candidate edge points within the image data; and defining four sides of a tetragon based on at least some of the plurality of candidate edge points; wherein each side of the tetragon corresponds to a different side of the document; wherein an area of the tetragon comprises at least a threshold percentage of a total area of the digital image; and wherein the tetragon bounds the digital representation of the document.

10-04-2007 publication date

Knowledge based computer aided diagnosis

Number: US0007203354B2

A method of extracting computer graphical objects including at least one vessel structure from a data volume of a portion of an anatomy, the method comprising the steps of: utilizing knowledge based image processing to locate centerlines and utilizing an active surface technique to extract the outer surface of said vessel structure; and storing co-ordinate information of said outer surface for subsequent display.

17-03-2020 publication date

Information processing apparatus, information processing method, and storage medium

Number: US0010593044B2

An information processing apparatus includes a depth image acquisition unit configured to acquire a depth image from a measurement apparatus that has measured a distance to an object, an image acquisition unit configured to acquire a captured image from an image capturing apparatus that has captured an image of the object, and an estimation unit configured to estimate a shape of the object based on the depth image and the captured image. The estimation unit acquires information about a contour of the object from the captured image, corrects the information about the contour based on the depth image, and estimates the shape of the object based on the corrected information about the contour.

25-06-2019 publication date

Extrapolating speed limits within road graphs

Number: US0010332389B2

In one embodiment, a speed limit application associates speed limits with road segments based on a road graph. In operation, the speed limit application selects a source road segment that meets a target road segment at an intersection based on the road graph. The source road segment is associated with a speed limit. Subsequently, the speed limit application determines a confidence value associated with extrapolating the first speed limit to the target road segment based on the first road graph. The speed limit application then determines that the confidence value indicates that a confidence in the extrapolation satisfies a minimum confidence requirement. Consequently, the speed limit application generates an attribute that associates the first speed limit with the target road segment. Finally, the speed limit application causes a navigation-related operation to be performed based on the attribute.

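A toy sketch of the extrapolation-with-confidence idea follows. The graph representation and the confidence heuristic are invented purely for illustration and are not the patented method.

```python
# Propagate a known speed limit from a source segment to an adjacent target segment
# when a confidence score derived from the road graph meets a minimum requirement.
ROAD_GRAPH = {
    # segment: (road_class, lanes, known_speed_limit_or_None, neighbours)
    "A": ("residential", 1, 30, ["B"]),
    "B": ("residential", 1, None, ["A", "C"]),
    "C": ("arterial", 2, None, ["B"]),
}

def extrapolation_confidence(src, dst):
    s_class, s_lanes, _, _ = ROAD_GRAPH[src]
    d_class, d_lanes, _, _ = ROAD_GRAPH[dst]
    conf = 1.0
    if s_class != d_class:
        conf *= 0.3                      # different road class: low confidence
    if s_lanes != d_lanes:
        conf *= 0.7
    return conf

def extrapolate(min_confidence=0.6):
    limits = {seg: info[2] for seg, info in ROAD_GRAPH.items()}
    for src, (_, _, limit, neighbours) in ROAD_GRAPH.items():
        if limit is None:
            continue
        for dst in neighbours:
            if limits[dst] is None and extrapolation_confidence(src, dst) >= min_confidence:
                limits[dst] = limit      # attribute associating the limit with dst
    return limits

print(extrapolate())   # {'A': 30, 'B': 30, 'C': None}
```
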
01-08-2019 publication date

COMPUTER AIDED DIAGNOSIS SYSTEM FOR CLASSIFYING KIDNEYS

Number: US20190237186A1

A computer aided diagnostic system and automated method to classify a kidney utilizes medical image data and clinical biomarkers in evaluation of kidney function pre- and post-transplantation. The system receives image data from a medical scan that includes image data of a kidney, then segments kidney image data from other image data of the medical scan. The kidney is then classified by analyzing at least one feature determined from the kidney image data and the at least one clinical biomarker.

12-11-2020 publication date

SYSTEM AND METHOD FOR ADAPTIVE POSITIONING OF A SUBJECT FOR CAPTURING A THERMAL IMAGE

Number: US20200352452A1

A method for determining a view angle of a thermal image from a user and generating a suggestion to enable the user for adaptive positioning of a subject for capturing the thermal image is provided. The method includes (i) receiving a thermal image of a body of a subject, (ii) automatically determining a view angle of the thermal image from a user using a view angle estimator, (iii) determining an angular adjustment to be made to a view position of the thermal imaging camera or a position of the subject by comparing the determined view angle with a required view angle as per thermal imaging protocol when the thermal image does not meet the required view angle and (iv) generating instructions to the user for adjusting the view position of the thermal imaging camera for capturing a new thermal image at the required view angle as per thermal imaging protocol.

08-08-2023 publication date

Identifying retinal layer boundaries

Number: US0011717155B2
Author: Yali Jia, Yukun Guo
Assignee: Oregon Health & Science University

Methods for automatically identifying retinal boundaries from a reflectance image are disclosed. An example of the method includes identifying a reflectance image of the retina of a subject; generating a gradient map of the reflectance image, the gradient map representing dark-to-light or light-to-dark reflectance differentials between adjacent pixel pairs in the reflectance image; generating a guidance point array corresponding to a retinal layer boundary depicted in the reflectance image using the gradient map; generating multiple candidate paths estimating the retinal layer boundary in the reflectance image by performing a guided bidirectional graph search on the reflectance image using the guidance point array; and identifying the retinal layer boundary by merging two or more of the multiple candidate paths.

22-08-2023 publication date

System and method for generating computerized models of structures using geometry extraction and reconstruction techniques

Number: US0011734468B2
Assignee: Xactware Solutions, Inc.

Described in detail herein are systems and methods for generating computerized models of structures using geometry extraction and reconstruction techniques. The system includes a computing device coupled to an input device. The input device obtains raw data scanned by a sensor. The computing device is programmed to execute a data fusion process to fuse the raw data, and a geometry extraction process is performed on the fused data to extract features such as walls, floors, ceilings, roof planes, etc. Large- and small-scale features of the structure are reconstructed using the extracted features. The large- and small-scale features are reconstructed by the system into a floor plan (contour) and/or a polyhedron corresponding to the structure. The system can also process exterior features of the structure to automatically identify condition and areas of roof damage.

06-04-2023 publication date

Object Recognition Method and Object Recognition Device

Number: US20230106443A1

An object recognition method including: acquiring a group of points of a plurality of positions of objects in surroundings of an own vehicle ; generating a captured image of surroundings of the own vehicle; grouping points in the group of points into a group of object candidate points; extracting, from among object candidate points, included in the group of object candidate points, a position at which change in distance from the own vehicle between adjacent object candidate points increases from a value equal to or less than a threshold value to greater than the threshold value as a boundary position candidate; extracting a partial region in which a person is detected in the captured image; and when, in the captured image, a position of the boundary position candidate coincides with a boundary position of the partial region in the predetermined direction, recognizing that a pedestrian exists in the partial region.

14-09-2023 publication date

METHOD AND APPARATUS FOR INSPECTING BATTERY TAB AND STORAGE MEDIUM

Number: US20230289948A1

The present application discloses a method for inspecting a battery tab, the method including: obtaining a sectional view of a plurality of layers of tabs of a battery; identifying and analyzing the sectional view to obtain a plurality of connected domains, where each connected domain includes one tab or a plurality of tabs that are bonded with each other; determining, based on positions and a number of intersection points of tab bonding in each connected domain, a number of layers of tabs corresponding to the connected domain; calculating a total number of layers of the plurality of layers of tabs in the sectional view based on the number of layers of tabs corresponding to the connected domain; and determining, based on the total number of layers of tabs and a preset real number of layers of tabs, whether the plurality of layers of tabs are folded.

05-05-2022 publication date

Refining Lesion Contours with Combined Active Contour and Inpainting

Number: US20220138956A1

A mechanism is provided in a data processing system for refining lesion contours with combined active contour and inpainting. The mechanism receives an initial segmented medical image having organ tissue including a set of object contours and a contour to be refined. The mechanism inpaints object voxels inside all contours of the set. The mechanism calculates an updated contour around the contour to be refined based on the in-painted object voxels to form an updated segmented medical image. The mechanism determines whether the updated segmented medical image is improved compared to the initial segmented medical image. The mechanism keeps the updated segmented medical image responsive to the updated segmented medical image being improved.

24-05-2022 publication date

Systems and methods for adaptive enhancement of vascular imaging

Number: US0011341633B2

An ultrasound system (100) includes an ultrasound transducer, a processing circuit (210, 300), and a display. The ultrasound transducer is configured to detect ultrasound information regarding a patient and output the ultrasound information as an ultrasound data sample. The processing circuit (210, 300) is configured to segment the ultrasound data sample into a binary image including at least one first region and at least one second region, obtain a first location of a first vascular feature of the binary image based on a boundary between the at least one first region and the at least one second region, and modify the binary image based on the first location of the first vascular feature. The first vascular feature is associated with an intima media thickness. The display is configured to display the modified image.

14-11-2023 publication date

Devices, systems, and methods for medical imaging

Number: US0011816832B2
Author: Qiulin Tang, Jian Zhou, Zhou Yu
Assignee: CANON MEDICAL SYSTEMS CORPORATION

Devices, systems, and methods obtain scan data that were generated by scanning a scanned region, wherein the scan data include groups of scan data that were captured at respective angles; generate partial reconstructions of at least a part of the scanned region, wherein each partial reconstruction of the partial reconstructions is generated based on a respective one or more groups of the groups of scan data, and wherein a collective scanning range of the respective one or more groups is less than the angular scanning range; input the partial reconstructions into a machine-learning model, which generates one or more motion-compensated reconstructions of the at least part of the scanned region based on the partial reconstructions; calculate a respective edge entropy of each of the one or more motion-compensated reconstructions of the at least part of the scanned region; and adjust the machine-learning model based on the respective edge entropies.

28-09-2023 publication date

SKETCH-PROCESSING

Number: US20230306162A1
Author: Éloi MEHR, Ariane JOURDAN
Assignee: DASSAULT SYSTEMES

A computer-implemented method for sketch-processing. The method including obtaining one or more input sketches and determining one or more output sketches from the one or more input sketches. Each output sketch is closed and manifold. The determining of the one or more output sketches includes constructing a set of manifold sketches including each manifold input sketch. The constructing of the set of manifold sketches includes, for each respective non-manifold input sketch, determining two or more respective manifold sketches based on at least one intra-sketch intersection of the respective non-manifold input sketch. The determining of the one or more output sketches includes combining each pair of manifold sketches of the constructed set that share at least two intersections, to form one or more closed and manifold sketches. The method forms an improved solution for sketch-processing.

07-04-2021 publication date

VISUAL CAMERA-BASED METHOD FOR IDENTIFYING EDGE OF SELF-SHADOWING OBJECT, DEVICE, AND VEHICLE

Number: EP3800575A1
Author: MIYAHARA, Shunji

The present disclosure provides a method and device for edge identification of a self-shadowing object based on a visual camera and a vehicle. The method includes: in a process of travelling of a vehicle, collecting image data of an object to be identified, and performing differentiation processing; according to a preset threshold, performing three-value processing to the image that has been differentiation-processed, to acquire a three-value image including positive-direction boundary pixels and negative-direction boundary pixels; according to the positive-direction boundary pixels and the negative-direction boundary pixels, acquiring a positive-direction straight linear segment and a negative-direction straight linear segment that represent boundary trends of the object to be identified; if the straight linear segments create a target object that satisfy a predetermined condition, determining the straight linear segments to be peripheral boundaries of the object to be identified; or else ...

15-11-2017 publication date

Scale estimation for object segmentation in a medical image

Number: GB0002529813B
Author: YUTA NAKANO, Yuta Nakano
Assignee: CANON KK, Canon Kabushiki Kaisha

15-05-2001 publication date

PROCEDURE FOR THE EVALUATION OF SLAUGHTERED ANIMAL HALVES BY OPTICAL IMAGE PROCESSING

Number: AT0000200953T

14-10-2021 publication date

FEATURE EXTRACTION FROM MOBILE LIDAR AND IMAGERY DATA

Number: AU2020202249A1

Abstract: Processes for automatically identifying road surfaces and related features such as roadside poles, trees, road dividers and walls from mobile LiDAR point cloud data. The processes use corresponding image data to improve feature identification. [Figure 1, flow chart: preprocessing and filtering of the point cloud; removal of the road part from the filtered cloud; pole detection (Euclidean clustering, identification of cylinder-type, road-divider, vertical-wall and median objects, removal of false poles by comparing tilt angle, transfer of image-processing pole detections to the point cloud); tree detection (feature extraction from every segment, machine learning)] ...

15-03-2018 publication date

SMART AND AUTOMATIC SELECTION OF SAME SHAPED OBJECTS IN IMAGES

Number: AU2017204531A1

Abstract: A method for automatically selecting similar objects for modification within images stored in a computing device, comprising: receiving, by a processor of a computing device, input via an input device of the computing device specifying a first selection of a first object in an image displayed on a user interface of the computing device; generating, by the processor, a first set of feature descriptors that describe a shape formed by edges of the first object; identifying, by the processor, a plurality of other edges in the remaining portion of the image and generating a second set of feature descriptors describing shapes formed by the other edges; determining, by the processor, one or more edge objects in the image that are similar in shape to the first object. [Figure 900, flow chart: receive input specifying a selection of a first object displayed in an image; decide whether to identify other similar objects; generate object feature descriptors describing the shape formed by the edges of the first object; identify ...]

20-02-2020 publication date

A method of target tracking algorithms applied in MIS

Number: AU2020100055A4
Assignee: Qian Wang

In order to solve the shortcomings of existing target tracking algorithms applied in MIS, this invention does some optimization based on TLST model and proposes a novel target tracking algorithm combined with KCF and region growing algorithm. Compared with the traditional TLST, our optimized multi-model TLST reduces the complexity of computation and has a better performance on dealing with singular point. Then, through combining KCF and region growing algorithm, the rigid surgical tool can be detected more accurately. Besides, this invention proposes a novel feature extraction which can be optionally used to do further examination.

24-12-2020 publication date

A Single Tree Crown Segmentation Algorithm Based on Super-pixels and Topological Features in Aerial Images

Number: AU2020103026A4
Assignee: Alder IP Pty Ltd

Abstract The invention discloses an individual tree crown(ITC) segmentation algorithm based on aerial images using super-pixel and topological features, which comprises the following steps: performing Simple Linear Iterative Clustering (SLIC) superpixel segmentation on original aerial images, and simultaneously acquiring the coronal boundaries of the images via holistically-nested edge detection (HED) network; Calculating three similarity measurement indexes between two adjacent superpixels, i.e., the difference between RGB average values of two adjacent superpixels, the number of intersecting pixels of two adjacent superpixels, and the number of intersecting boundary pixels obtained from HED network, by which the similarity weights between two adjacent superpixels are constructed; Conducting a superpixel neighborhood connected graph based on the center point of each superpixel, and the minimum spanning tree(MST) is extracted from the connected graph to generate the connected tree of aerial ...

27-09-2019 publication date

Multi-target segmentation method for uneven illumination image

Number: CN0110288618A

18-12-2018 publication date

A method and system for pathological cell segmentation in cervical cell pathology slices

Number: CN0109035269A

23-11-2018 publication date

Image calibration method for bubble offset measurement of bubble tube

Number: CN0108876860A

21-06-2019 publication date

Target object identification method and device

Number: CN0109919954A
Author: TONG YUNFEI, ZHAO WEI

15-03-2017 publication date

FCM image segmentation method and system

Number: CN0106504260A
Author: HOU LILI, ZHU PINPIN

21-02-2020 publication date

View field region segmentation method and device for texture-free scene video

Number: CN0110826446A
Author: ZHANG RUI, ZHANG XUELIAN

03-11-2020 publication date

Mobile phone battery dimension measuring method based on machine vision

Number: CN0111879241A

20-09-2019 publication date

Semantic image segmentation method based on deep learning

Number: CN0110264483A

03-11-2010 publication date

Image reference-free quality evaluation method and system based on gradient profile

Number: CN0101877127A

The invention discloses image reference-free quality evaluation method and system based on a gradient profile. The image reference-free quality evaluation system comprises a gradient profile extraction device, a blurring effect evaluation device, a ringing effect evaluation device and an integrated evaluation device, wherein the gradient profile extraction device is used for detecting input image edge points and extracting the gradient profile according to the edge points; the blurring effect evaluation device is used for measuring the blurring effect of images according to the gradient profile; the ringing effect evaluation device is used for measuring the ringing effect of the images according to the gradient profile; and the integrated evaluation device is used for fusing the blurring effect measurement and the ringing effect measurement to acquire a quality evaluation reference value of input images. Based on the invention, the quality evaluation can be carried out on various types ...

Подробнее
13-12-2019 дата публикации

Image segmentation method based on depth perception

Номер: CN0110570436A
Автор:
Принадлежит:

Подробнее
30-10-2020 дата публикации

Method for measuring abdominal circumference based on CT image

Номер: CN0111862072A
Автор:
Принадлежит:

Подробнее
23-10-2018 дата публикации

A method and apparatus for delimiting the boundary of an image

Номер: CN0105069782B
Автор:
Принадлежит:

Подробнее
24-07-2020 дата публикации

MCAASPP neural network fundus image optic disc segmentation model based on attention mechanism

Номер: CN0110610480B
Автор:
Принадлежит:

Подробнее
11-10-2019 дата публикации

Simulated fog image generation method based on depth prior

Номер: CN0106709901B
Автор:
Принадлежит:

Подробнее
24-07-2020 дата публикации

Recording dose data from a medicament injection device using optical character recognition technology

Номер: CN0107073226B
Автор:
Принадлежит:

Подробнее
16-06-2017 дата публикации

A single static image depth estimation method and device

Номер: CN0104537637B
Автор:
Принадлежит:

Подробнее
02-02-2012 дата публикации

System and method for interactive live-mesh segmentation

Номер: US20120026168A1
Принадлежит: KONINKLIJKE PHILIPS ELECTRONICS NV

A system and method for segmenting an anatomical structure. The system and method initiate a segmentation algorithm that produces a surface mesh of the anatomical structure from a series of volumetric images, the surface mesh being formed of a plurality of polygons with vertices and edges; assign a spring to each of the edges and a mass point to each of the vertices of the surface mesh; display a 2D reformatted view including a 2D view of the surface mesh and the anatomical structure; add pull springs to the surface mesh, the pull springs being added based upon a selected point on a surface of the surface mesh; and move a portion of the surface mesh via an interactive point.

Подробнее
24-01-2013 дата публикации

Head recognition method

Номер: US20130022262A1
Принадлежит: Softkinetic Software SA

Described herein is a method for recognising a human head in a source image. The method comprises detecting a contour of at least part of a human body in the source image, calculating a depth of the human body in the source image. From the source image, a major radius size and a minor radius size of an ellipse corresponding to a human head at the depth is calculated, and, for at least several of a set of pixels of the detected contour, generating in an accumulator array at least one segment of an ellipse centred on the position of the contour pixel and having the major and minor radius sizes. Positions of local intensity maxima in the accumulator array are selected as corresponding to positions of the human head candidates in the source image.
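
The voting step described above resembles a Hough-style accumulator: every contour pixel casts an ellipse of the depth-dependent radii into an accumulator array, and local maxima become head-centre candidates. The sketch below illustrates that idea under the assumption that the contour pixels and the radii (in pixels) are already known; it uses scikit-image and is not the patented implementation.

```python
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.feature import peak_local_max

def head_candidates(contour_pixels, image_shape, major_px, minor_px, num_peaks=3):
    """contour_pixels: iterable of (row, col); radii in pixels for the measured body depth."""
    acc = np.zeros(image_shape, dtype=np.float32)
    for r, c in contour_pixels:
        rr, cc = ellipse_perimeter(int(r), int(c), int(major_px), int(minor_px),
                                   shape=image_shape)   # ellipse centred on the contour pixel
        acc[rr, cc] += 1.0                               # vote into the accumulator array
    # Local intensity maxima of the accumulator are the head-centre candidates.
    return peak_local_max(acc, min_distance=int(minor_px), num_peaks=num_peaks)
```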

Подробнее
09-05-2013 дата публикации

INTERACTIVE IMAGE ANALYSIS

Номер: US20130117712A1
Принадлежит: KONINKLIJKE PHILIPS ELECTRONICS N.V.

A system for interactive image analysis is disclosed, comprising an image visualization subsystem for visualizing an image. An indicated position determiner is arranged for determining an indicated position of a pointing device with respect to the image. A result determiner is arranged for determining a result of a local image processing of the image at the indicated position. A display subsystem displays either at least part of the result of the local image processing or a visible mark, based on the image processing result. The result of the local image processing is indicative of the presence or absence of an object at or near the indicated position, and the display subsystem is arranged for displaying the visible mark in the absence of such an object at or near the indicated position. 1. A system for interactive image analysis, comprising: an image visualization subsystem for visualizing an image; a position input for enabling a user to indicate a position with respect to the image, to obtain an indicated position; a result determiner for determining a result of a local image processing of the image at the indicated position; a decider for deciding whether to display a mark, based on the result of the local image processing, to obtain a decision; and a display subsystem arranged for displaying a visible mark in response to the decision, wherein the visible mark is indicative of a right owner with respect to the system. 2. The system according to claim 1, wherein the result of the local image processing is indicative of the absence of an object at or near the indicated position, and wherein the decider is arranged for deciding to display the visible mark in the absence of such an object at or near the indicated position. 3. The system according to claim 1, wherein the result of the local image processing is indicative of the presence of an object at or near the indicated position, and ...

Подробнее
16-05-2013 дата публикации

Edge-Based Approach for Interactive Image Editing

Номер: US20130121593A1
Автор: Jue Wang, Shulin Yang
Принадлежит: Adobe Systems Inc

A method, system, and computer-readable storage medium are disclosed for aligning user scribbles to edges in an image. A plurality of edges in the image may be determined. User input comprising a scribble may be received, wherein the scribble comprises a freeform line overlaid on the image. The scribble may be automatically aligned to one or more of the edges in the image.

Подробнее
15-08-2013 дата публикации

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM

Номер: US20130208980A1
Автор: ONO Satoru
Принадлежит: SEIKO EPSON CORPORATION

An image processing device includes a pattern type determination unit in which a predetermined pattern is determined corresponding to color information of an image; a region setting unit in which a predetermined region in the image is set; an image data generation unit in which image data including the image and the pattern are included; and a size correction unit in which a size of the pattern is changed corresponding to a size of the region, or a position correction unit in which a positional relationship between a characteristic portion from which a type of the pattern included in the pattern which is determined in the pattern type determination unit can be discriminated and the region which is set in the region setting unit is corrected. 1. An image processing device comprising:a pattern type determination unit in which a predetermined pattern is determined corresponding to color information of an image;a region setting unit in which a predetermined region in the image is set;an image data generation unit in which image data including the image and the pattern are included; anda size correction unit in which a size of the pattern corresponding to a size of the region is changed, or a position correction unit in which a positional relationship between a characteristic portion from which a type of the pattern included in the pattern which is determined in the pattern type determination unit can be determinated and the region which is set in the region setting unit is corrected,wherein, when the position correction unit is included, an applying position of the characteristic portion in the region can be corrected toward a center of gravity of the region by the position correction unit.2. The image processing device according to claim 1 ,wherein, when the region is a minimum unit or more in which the region can express the characteristic portion, a correction of an application position of the characteristic portion is performed.3. The image processing device ...

Подробнее
29-08-2013 дата публикации

COMPUTING DEVICE AND METHOD OF DETERMINING BORDER POINTS FOR MEASURING IMAGES OF OBJECTS

Номер: US20130223761A1
Принадлежит:

In a method of determining border points for measuring an image of an object using a computing device, grayscale values of pixel points in an image being measured are acquired, and definition values of the pixel points are computed according to the grayscale values. A line which intersects with the image being measured is constructed, and the definition values of the pixel point values in the lines are obtained. A location range of a border point of the image being measured is determined according to the definition values of the pixel point values in the line, and the border point is selected from the location range. A border line of the image being measured is fitted using the border points. 1. A computerized method of determining border points for measuring an image of an object , the method being executed by at least one processor of a computing device and comprising:(a) acquiring grayscale values of pixel points in the image being measured by the at least one processor;(b) computing definition values of the pixel points according to the grayscale values by the at least one processor;(c) constructing a line which intersects with the image being measured, and obtaining the definition values of the pixel point values in the lines by the at least one processor;(d) determining a location range of a border point of the image being measured using the definition values of the pixel point values in the line by the at least one processor;(e) determining the border point from the location range by the at least one processor;(f) repeating step (c) to (e) until all border points of the image being measured has been determined by the at least one processor; and(g) fitting a border line of the image being measured using the border points, and outputting the border line by the at least one processor.2. The method according to claim 1 , wherein the definition value of a pixel point P in the image being measured is computed by:constructing a rectangle, which is centered at the ...
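
The sketch below illustrates the general flow under simplifying assumptions: the "definition value" of a pixel is approximated here by the gradient magnitude along a horizontal scan line, the strongest response on each line is taken as the border point, and a straight border line is fitted through those points. The function name and the gradient-based definition value are assumptions for illustration, not the patented computation.

```python
import numpy as np

def fit_border_line(gray, rows):
    """gray: 2D grayscale array; rows: row indices of the scan lines to use."""
    xs, ys = [], []
    for r in rows:
        profile = gray[r, :].astype(float)
        definition = np.abs(np.gradient(profile))   # stand-in definition value per pixel on the line
        c = int(np.argmax(definition))              # border point = strongest definition response
        ys.append(r)
        xs.append(c)
    a, b = np.polyfit(ys, xs, deg=1)                # fit border line x = a*y + b (least squares)
    return a, b, list(zip(ys, xs))
```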

Подробнее
17-10-2013 дата публикации

Method and Systems for Measuring Interpupillary Distance

Номер: US20130271726A1
Принадлежит: Cyber Imaging Systems Inc

The proposed innovation provides methods and systems for measuring the interpupillary distance. The proposed innovation provides a fitting pad ( 102 ) having two detection points ( 104 and 106 ). The fitting pad is placed on the forehead of the user and an image is captured. The image is uploaded and pupil distance calculator software locates the fitting detection points and calculates the distance in pixels of the left and right X, Y coordinates. The software creates an image scale by dividing the pixel counts between the detection points. The software automatically locates the X, Y coordinates between the center of the left and right pupils and calculates the distance in pixels. The resulting pixel distance divided by the image scale provides the interpupillary distance in millimeter. In embodiments, segment height is calculated based upon an image imported by the user and the combined scaled images of the user and the frame.
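
The scaling arithmetic described above is simple enough to show as a tiny worked example. All pixel coordinates below are hypothetical, and the 50 mm spacing of the two fitting-pad detection points is an assumed value, not taken from the patent.

```python
import math

def pixel_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

KNOWN_PAD_MM = 50.0                                # assumed spacing of the two detection points
pad_left, pad_right = (400, 310), (550, 310)       # detected pad points (pixels)
pupil_left, pupil_right = (382, 520), (568, 520)   # detected pupil centres (pixels)

scale = pixel_distance(pad_left, pad_right) / KNOWN_PAD_MM   # pixels per millimetre
ipd_mm = pixel_distance(pupil_left, pupil_right) / scale
print(f"interpupillary distance = {ipd_mm:.1f} mm")          # 62.0 mm with these numbers
```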

Подробнее
14-11-2013 дата публикации

Three-Dimensional Shape Measurement Method and Three-Dimensional Shape Measurement Device

Номер: US20130301909A1
Автор: Sato Kunihiro
Принадлежит: University of Hyogo

This three-dimensional shape measurement method comprises: a projection step for projecting an interference fringe pattern (F) having a single spatial frequency (fi) onto an object surface; a recording step for recording the pattern (F) as a digital hologram; and a measurement step for generating a plurality of reconstructed images having different focal distances from the hologram, and deriving the distance to each point on the object surface by applying a focusing method to the pattern (F) on each of the reconstructed images. The measurement step extracts the component of the single spatial frequency (fi) corresponding to the pattern (F) from each of the reconstructed images by spatial frequency filtering, upon applying the focusing method, and makes it possible to achieve a highly accurate measurement in which the adverse effect of speckles is reduced and the advantage of a free-focus image reconstruction with holography is used effectively. 113-. (canceled)14. A method for measuring a three-dimensional shape of an object surface using a digital hologram for recording an interference fringe pattern projected onto the object surface , comprising the steps of:a projection step for projecting an interference fringe pattern (F) having a single spatial frequency (fi) onto an object surface;a recording step for recording the interference fringe pattern (F) projected on the object surface by the projection step as a hologram using a photo detector; anda measurement step for generating a plurality of reconstructed images having different focal distances from the hologram recorded by the recording step, and deriving the distance to each point on the object surface by applying a focusing method to the interference fringe pattern (F) in each of the reconstructed images, whereinthe measurement step comprises an interference fringe pattern extraction step for extracting the component of the single spatial frequency (fi) corresponding to the interference fringe pattern from ...

Подробнее
10-04-2014 дата публикации

TOUCH AND MOTION DETECTION USING SURFACE MAP, OBJECT SHADOW AND A SINGLE CAMERA

Номер: US20140098224A1
Автор: ZHANG Wei

The present invention provides an optical method and a system for obtaining positional and/or motional information of an object with respect to a reference surface, including detecting if the object touches the reference surface, by using a projector and one camera. A surface map is used for mapping a location on the reference surface and a corresponding location in a camera-captured image having a view of the reference surface. In particular, a camera-observed shadow length, i.e. a length of the object's shadow observable by the camera, estimated by using the surface map, is used to compute the object's height above the reference surface (a Z coordinate). Whether or not the object touches the reference surface is also obtainable. After an XY coordinate is estimated, a 3D coordinate of the object is obtained. By computing a time sequence of 3D coordinates, the motional information, such as velocity and acceleration, is obtainable. 1. An optical method for a system comprising a projector and a camera to obtain positional or motional information of an object with respect to a reference surface , the object having a pre-determined reference peripheral point , the method comprising:obtaining a surface profile of the reference surface, and a surface map configured to map any point on an image captured by the camera to a corresponding physical location on the reference surface;at a time instant after the object is identified to be present, initiating a positional-information obtaining process; andarranging the projector and the camera with a positional configuration such that when the object not touching the reference surface is illuminated by the projector, a part of the object's shadow formed on the reference surface along a topographical surface line is observable by the camera, and such that a length of the aforesaid part of the shadow, regarded as a camera-observed shadow length, is usable for uniquely determining the object's height above the reference surface in ...

Подробнее
05-01-2017 дата публикации

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

Номер: US20170004626A1
Автор: KOBAYASHI Satomi
Принадлежит: OLYMPUS CORPORATION

An image processing device includes a processor including hardware. The processor is configured to: detect an area not suitable for an observation inside each image included in an image group in which images are acquired in time sequence; calculate an importance of the image for each of the images included in the image group based on the area not suitable for the observation inside the image; integrate the importance in order of time sequence; and determine whether the integrated value exceeds a threshold value. 1. An image processing device comprising: a processor comprising hardware, wherein the processor is configured to: detect an area not suitable for an observation inside each image included in an image group in which images are acquired in time sequence; calculate an importance of the image for each of the images included in the image group based on the area not suitable for the observation inside the image; integrate the importance in order of time sequence; and determine whether the integrated value exceeds a threshold value. 2. The image processing device according to claim 1, wherein the processor is further configured to set the image having the importance integrated at the last time when the integrated value exceeds the threshold value as a boundary of dividing the image group into a plurality of selection ranges if the processor determines that the integrated value exceeds the threshold value. 3. The image processing device according to claim 2, wherein the processor is further configured to select a representative image from the images included in each of the selection ranges. 4. The image processing device according to claim 1, wherein the area not suitable for the observation is an area other than an area suitable for the observation. 5. The image processing device according to claim 4, wherein the area not suitable for the observation is at least one of an area in which residue or bubbles are reflected, an area in which a deep part of a lumen ...
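
The "integrate importance until a threshold is exceeded" rule that splits the time-ordered image group into selection ranges can be sketched in a few lines. The importance values and threshold below are illustrative stand-ins, not values from the patent.

```python
def split_into_ranges(importances, threshold):
    """Return indices of images that become boundaries of selection ranges."""
    boundaries, acc = [], 0.0
    for index, importance in enumerate(importances):
        acc += importance                  # integrate importance in time order
        if acc > threshold:
            boundaries.append(index)       # this image closes the current selection range
            acc = 0.0                      # start integrating the next range
    return boundaries

print(split_into_ranges([0.2, 0.1, 0.9, 0.3, 0.8, 0.4], threshold=1.0))  # -> [2, 4]
```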

Подробнее
04-01-2018 дата публикации

Pattern Matching Using Edge-Driven Dissected Rectangles

Номер: US20180004888A1
Принадлежит:

Aspects of the disclosed technology relate to techniques of pattern matching. Matching rectangles in a layout design that match rectangle members of a search pattern are identified based on edge operations. The rectangle members comprise an origin rectangle member and one or more reference rectangle members. Grid element identification values are attached to the matching rectangles. The matching rectangles that match the one or more reference rectangle members in neighborhoods of the matching rectangles that match the origin rectangle member are then analyzed. The neighborhoods are determined based on the grid element identification values. Based on the analysis, matching patterns in the layout design that match the search pattern are determined. 1. One or more computer-readable media storing computer-executable instructions for causing one or more processors to perform a method , the method comprising:identifying matching rectangles in a layout design that match rectangle members of a search pattern based on edge operations and a first set of criteria, the rectangle members comprising an origin rectangle member and one or more reference rectangle members;attaching grid element identification values to the matching rectangles, the grid element identification values being associated with a regular grid that divides the layout design into a number of equal rectangle regions (grid elements);determining matching patterns in the layout design that match the search pattern based on a second set of criteria, the determining comprising analyzing the matching rectangles that match the one or more reference rectangle members in neighborhoods of the matching rectangles that match the origin rectangle member, wherein the neighborhoods are determined based on the grid element identification values; andstoring the matching patterns.2. The one or more computer-readable media recited in claim 1 , wherein the edge operations are edge-based operations of a design rule checking (DRC) ...

Подробнее
13-01-2022 дата публикации

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM STORING INFORMATION PROCESSING PROGRAM

Номер: US20220012906A1
Автор: SHOJI Tetsuya, YANO Masao
Принадлежит: TOYOTA JIDOSHA KABUSHIKI KAISHA

An information processing device acquires a material formation image representing a formation of a material, the material formation image being obtained by imaging the material. The information processing device generates a Fourier transform result representing a power spectrum by applying a Fourier transform to the acquired material formation image. The information processing device, on the basis of the Fourier transform result of the material formation image, employs an expectation-maximization algorithm to generate a size distribution of structures forming the material. 1. An information processing device , comprising:a memory; anda processor connected to the memory, the processor being configured to:acquire a material formation image representing a formation of a material, the material formation image being obtained by imaging the material;generate a Fourier transform result representing a power spectrum by applying a Fourier transform to the acquired material formation image; and,on the basis of the Fourier transform result of the material formation image, employ an expectation-maximization algorithm to generate a size distribution of structures forming the material.2. The information processing device according to claim 1 , wherein the processor is further configured to:on the basis of the acquired material formation image, generate a plurality of contrast images comprising a plurality of material formation images with different levels of contrast;generate a Fourier transform result for each of the plurality of contrast images; and,on the basis of the respective Fourier transform results of the plurality of contrast images, employ the expectation-maximization algorithm to generate the size distribution of the structures forming the material.3. An information processing method claim 1 , in which a processor executes processing comprising:acquiring a material formation image representing a formation of a material, the material formation image being obtained by ...
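
The sketch below is a loose stand-in for the described pipeline: take the 2D FFT power spectrum of a micrograph, sample spatial frequencies in proportion to their spectral power, convert frequency to an approximate structure size, and fit a Gaussian mixture with the EM algorithm (scikit-learn's GaussianMixture) as the size-distribution estimate. Every step here is an illustrative simplification under stated assumptions, not the patented procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def size_distribution(image, n_components=2):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2   # power spectrum
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)                     # radial spatial frequency
    mask = radius > 0
    freqs = radius[mask].ravel()
    weights = spectrum[mask].ravel()
    probs = weights / weights.sum()
    # Sample frequencies weighted by power, then map frequency -> rough period length in pixels.
    sample = np.random.default_rng(0).choice(freqs, size=5000, p=probs)
    sizes = (min(h, w) / sample).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(sizes)  # EM step
    return gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel()
```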

Подробнее
04-01-2018 дата публикации

DETERMINING THE POSITION OF AN OBJECT IN A SCENE

Номер: US20180005049A1
Автор: DODD Luke, Hawkins Paul
Принадлежит:

A method of determining the position of an object in a scene, comprising: receiving captured images of the scene, each image being captured from a different field of view of the scene, wherein a portion of the scene with a volume comprises a detectable object, the volume is divided into volume portions, and each volume portion is within the captured field of view of at least two of the captured images so that an image of each volume portion appears in the at least two of the captured images; detecting, for each volume portion in each of the captured images within which an image of that volume portion appears, whether or not an image of one of the detectable objects in the scene is positioned within a distance of the position of the image of that volume portion, a correspondence between the images of the detectable objects detected in the at least two of the images is established, the correspondence indicating that the images of the detectable objects detected in the at least two of the images correspond to a single detectable object in the scene, and the position in the scene of that volume portion is established as a position in the scene of the single detectable object. 1. A method of determining the position of an object in a scene , comprising:receiving a plurality of captured images of the scene, each respective one of the plurality of images being captured from a different field of view of the scene, wherein a predetermined portion of the scene with a predetermined volume comprises a plurality of detectable objects, the predetermined volume is divided into a plurality of volume portions, and each volume portion is within the captured field of view of at least two of the captured images so that an image of each volume portion appears in the at least two of the captured images;detecting, for each volume portion in each of the captured images within which an image of that volume portion appears, whether or not an image of one of the detectable objects in the ...

Подробнее
02-01-2020 дата публикации

DISPLAY CONTROL SYSTEM AND RECORDING MEDIUM

Номер: US20200005099A1
Принадлежит:

There is provided a display control system including a plurality of display units, an imaging unit configured to capture a subject, a predictor configured to predict an action of the subject according to a captured image captured by the imaging unit, a guide image generator configured to generate a guide image that guides the subject according to a prediction result from the predictor, and a display controller configured to, on the basis of the prediction result from the predictor, select a display unit capable of displaying an image at a position corresponding to the subject from the plurality of display units, and to control the selected display unit to display the guide image at the position corresponding to the subject. 1. (canceled)2. A system comprising:an action history information acquirer for acquiring information of user motion around a table;a guide image generator for generating one or more images suggesting an action of the user based on the acquired information; anda display controller for controlling display of the one or more images on the table.3. The system according to claim 2 , further comprising a predictor for generating one or more predicted actions of the user according the information of user motion.4. The system according to claim 3 , wherein the guide image generator generates the one or more images based on claim 3 , at least claim 3 , one of the predicted actions.5. The system according to claim 2 , further comprising a learning unit for learning one or more patterns of items placed on the table claim 2 , and wherein the guide image generator generates the one or more images based on claim 2 , at least claim 2 , one of the patterns.6. The system according to claim 5 , wherein the one or more patterns comprises a pattern of dishes.7. The system according to claim 5 , wherein the one or more patterns comprises a pattern of cutlery.8. The system according to claim 2 , further comprising one or more imaging units for generating the ...

Подробнее
03-01-2019 дата публикации

Methods, Software, and Apparatus for Porous Material or Medium Characterization, Flow Simulation and Design

Номер: US20190005172A1
Принадлежит:

Methods, software, and apparatuses for accurate and computationally fast and efficient topologic and geometric characterization of porous material or medium, flow characterization of porous material or medium, and porous material or medium design are described. 1. A method for topological and geometric characterization of a three-dimensional porous material or medium , said method comprising:receiving an image of a three-dimensional porous material or medium configured to be disposed in a fluid flow-path, said porous material or medium comprising a void space;extracting a medial surface of the void space of the porous material or medium;utilizing the extracted medial surface of the void space of the porous material or medium as a representative geometry of the porous material or medium; andcalculating the pore size distribution of the void space of the porous material or medium based on the extracted medial surface of the void space.2. The method of claim 1 , wherein said image of the three-dimensional porous material or medium comprises a voxelized binary image of the porous material or medium.3. The method of claim 2 , wherein said extracted medial surface is a voxel-wide medial surface.4. The method of claim 3 , wherein a pore of the extracted medial surface is defined as a voxel-wide square prism claim 3 , wherein the voxel-wide square prism is centered at each voxel of the medial surface and perpendicular to the medial surface at said voxel.5. The method of claim 4 , wherein calculating the pore size distribution of the void space of the porous material or medium based on the extracted medial surface of the void space comprises assigning a value r to each voxel on the medial surface claim 4 , wherein r is equal to the distance between the medial surface voxel and a closest solid boundary claim 4 , and wherein the pore size distribution of the porous material or medium is the distribution of r values on the medial surface voxels.6. A method for characterizing ...
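
The pore-size-distribution idea in claim 1 (assign to each medial-surface voxel the distance r to the nearest solid, then take the distribution of r values) can be sketched with off-the-shelf tools. Here skeletonization stands in for the patented medial-surface extraction and the Euclidean distance transform supplies r; scikit-image and SciPy are assumed, and the synthetic cavity is only a usage example.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def pore_size_distribution(void_mask):
    """void_mask: 3D boolean array, True where the pore (void) space is."""
    medial = skeletonize(void_mask)                     # approximate medial surface/axis (3D input)
    dist_to_solid = distance_transform_edt(void_mask)   # distance of each void voxel to the solid
    r_values = dist_to_solid[medial]                    # r at every medial voxel
    return r_values                                     # histogram these for the distribution

# Usage example on a synthetic void: a spherical cavity of radius 12 voxels.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
void = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 12 ** 2
print(round(float(pore_size_distribution(void).mean()), 2))
```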

Подробнее
04-01-2018 дата публикации

APPARATUS AND METHOD FOR LARGE FIELD-OF-VIEW MEASUREMENTS OF GEOMETRIC DISTORTION AND SPATIAL UNIFORMITY OF SIGNALS ACQUIRED IN IMAGING SYSTEMS

Номер: US20180005401A1
Принадлежит: THE PHANTOM LABORATORY, INCORPORATED

An apparatus and method for imaging quality assessment of an imaging system employs an aggregate phantom and a processor for imaging analysis. The aggregate phantom includes a plurality of self-contained sections configured to be moved independently and re-assembled in the imaging system. Each section includes fiducial features of known relative location. The processor: quantitatively determines location of the fiducial features within an image of the aggregate phantom; compares the determined location within the image to the known relative location of the fiducial features to produce a distortion field; and distinguishes between actual geometric distortion of the imaging system and rigid-body transformations of sections of the aggregate phantom, in the distortion field. For extended fields-of-view, the aggregate phantom may be repositioned, and sets of images combined to determine a distortion field of the extended image. A method employing virtual features for measuring spatial uniformity of an acquired signal is also provided. 1. Apparatus for image quality assessment of an imaging system, comprising: an aggregate phantom having a plurality of self-contained sections configured to be moved independently and re-assembled in the imaging system, each section including fiducial features of known relative location, and a processor for image analysis configured for: quantitatively determining location of the fiducial features within an image, produced by the imaging system, of the aggregate phantom; comparing the determined location within the image to the known relative location of the fiducial features to produce a distortion field; and distinguishing between actual geometric distortion of the imaging system and rigid-body transformations of sections of the aggregate phantom, in the distortion field. 2. The apparatus of claim 1, wherein the distinguishing comprises: identifying and quantifying displacement components attributable to the rigid-body ...
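
One generic way to separate rigid-body motion of a phantom section from residual geometric distortion is a least-squares rigid fit (Kabsch/SVD) between the known and measured fiducial positions; whatever the rigid fit cannot explain is left as the distortion field. The sketch below shows that generic decomposition for a single section and is not the patented analysis.

```python
import numpy as np

def distortion_field(known_xyz, measured_xyz):
    """known_xyz, measured_xyz: (N, 3) arrays of matched fiducial coordinates (e.g. mm)."""
    A = known_xyz - known_xyz.mean(axis=0)        # centred known layout of the section
    B = measured_xyz - measured_xyz.mean(axis=0)  # centred measured layout
    U, _, Vt = np.linalg.svd(A.T @ B)             # Kabsch: H = A^T B
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # best rigid rotation, known -> measured
    residual = B - A @ R.T                        # what rigid motion cannot explain
    return residual                               # per-fiducial geometric distortion vectors
```

Translation is absorbed by the centering step, so only the rotation needs to be estimated explicitly.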

Подробнее
04-01-2018 дата публикации

Methods and Systems for Processing Plenoptic images

Номер: US20180005402A1
Принадлежит:

Methods and systems are disclosed for deriving quantitative measurements of an imaged material using plenoptic imaging. In one or more embodiments, image data is generated by a plenoptic camera having a filter configured to transmit a plurality of different spectra in different regions of the filter. A set of plenoptic image data is produced by determining respective sets of pixels in the image data corresponding to the different regions of the filter and determining light intensities of the plurality of different spectra for respective super-pixel groups of the pixels in the image data. One or more additional quantitative measurements of an imaged material are then derived from a comparison of the determined light intensities of two or more of the plurality of different spectra. 1. An apparatus , comprising: determining respective sets of pixels in the image data corresponding to the different regions of the filter; and', 'determining intensities of the light with the plurality of different, characteristics for respective super-pixel groups of the pixels in the image data; and, 'a first processing circuit configured to, in response to receiving image data from a plenoptic camera having a filter configured to transmit light with a plurality of different characteristics in respective regions of the filter, produce a set of plenoptic image data bya second processing circuit coupled to the first processing circuit and configured to derive one or more additional quantitative measurements of an imaged material or media from a comparison of the determined light intensities for one, or more of the plurality of different characteristics.2. The apparatus of claim 1 , wherein the second processing circuit is further configured to claim 1 , in deriving the one or more additional quantitative measurements claim 1 , performing geometric analysis of an imaged material depicted by the pixels based on angular resolution of the different regions of the filter.3. The apparatus of ...

Подробнее
02-01-2020 дата публикации

Cylindrical Panorama

Номер: US20200005508A1
Автор: Hu Shane Ching-Feng
Принадлежит:

A method for generating a panoramic image is disclosed. The method comprises simultaneously capturing images from multiple camera sensors aligned horizontally along an arc and having an overlapping field of view; performing a cylindrical projection to project the captured images from the multiple camera sensors to a cylindrical images; and aligning overlapping regions of the cylindrical images corresponding to the overlapping field of view based on an absolute difference of luminance, wherein the cylindrical projection is performed by adjusting a radius for the cylindrical projection, wherein the radius is adjusted based on a scale factor and wherein the scale factor is calculated based on a rigid transform and wherein the scale factor is iteratively calculated for two sensors from the multiple camera sensors. 1. A method for generating a panoramic image , comprising:capturing images simultaneously from each of multiple camera sensors aligned horizontally along an arc and having an overlapping field of view;performing a cylindrical projection to project each of the captured images from the multiple camera sensors to cylindrical images; andaligning overlapping regions of the cylindrical images corresponding to the overlapping field of view based on absolute difference of luminance, wherein the cylindrical projection is performed by adjusting a radius for the cylindrical projection, wherein the radius is adjusted based on a scale factor and wherein the scale factor is calculated based on a rigid transform, and wherein the scale factor is iteratively calculated for two sensors from the multiple camera sensors.2. The method of claim 1 , wherein aligning the overlapping regions and adjusting the radius is performed as an integrated step.3. The method of claim 2 , wherein said integrated step is part of an iterated calibration process.4. The method of claim 3 , wherein a correction for lens distortion and the cylindrical projection is combined as a single reverse address ...
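
The forward cylindrical mapping with an adjustable radius can be written compactly; the formulation below is the textbook approximation (angle around the cylinder axis plus normalised height), with the radius r playing the role the abstract's scale factor rescales. Names and the example numbers are illustrative, not the patented calibration procedure.

```python
import numpy as np

def to_cylindrical(x, y, cx, cy, f, r):
    """(x, y): pixel coords; (cx, cy): principal point; f: focal length; r: cylinder radius (pixels)."""
    theta = np.arctan2(x - cx, f)                 # angle around the cylinder axis
    h = (y - cy) / np.hypot(x - cx, f)            # normalised height on the cylinder
    return r * theta + cx, r * h + cy             # cylindrical image coordinates

# Usage example: project the corner pixels of a 1920x1080 image with f = r = 1000 px.
xs, ys = np.array([0.0, 1919.0]), np.array([0.0, 1079.0])
print(to_cylindrical(xs, ys, cx=960.0, cy=540.0, f=1000.0, r=1000.0))
```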

Подробнее
03-01-2019 дата публикации

COMMERCIAL PRODUCT SIZE DETERMINATION DEVICE AND COMMERCIAL PRODUCT SIZE DETERMINATION METHOD

Номер: US20190005672A1
Принадлежит:

An image acquisition unit that acquires a photographic image showing a hand on which a card of a known size is placed. A card detection unit detects a size of the card in the photographic image. A finger joint measurement unit measures a size of the joint of the finger for the ring in the photographic image. A finger size estimation unit estimates an actual size of the joint of the finger for the ring from a measured value of the size of the joint of the finger of the ring in the photographic image, based on a ratio between the known size of the card and the size of the card in the photographic image detected. A ring size determination unit determines a size of the ring based on an estimated actual size of the joint of the finger for the ring. 1. A commercial product size determination device comprising:an image acquisition unit that acquires a photographic image showing a commercial product of a known size and a body portion that should wear the commercial product;a detection unit that detects a size of the product in the photographic image;a measurement unit that measures a size of the body portion in the photographic image; andan estimation unit that estimates an actual size of the body portion from a measured value of the size of the body portion in the photographic image, based on a ratio between the known size of the product and the size of the product in the photographic image.2. The commercial size determination device according to claim 1 , whereinthe detection unit detects the size of the product in the photographic image by performing projective transform that transforms a shape of the product in the photographic image into an original shape, andthe measurement unit subjects the photographic image to the projective transform before measuring the size of the body portion in the photographic image.3. The commercial product size determination device according to claim 1 , further comprising:a determination unit that determines a size of the commercial ...
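
The ratio-based estimate reduces to the same scaling arithmetic as other reference-object methods, shown here as a tiny worked example. The pixel measurements are hypothetical; the 85.60 mm card width is the standard ID-1 (bank card) width assumed as the known size.

```python
import math

CARD_WIDTH_MM = 85.60          # assumed known size: standard ID-1 card width
card_width_px = 640.0          # detected card width in the photo (hypothetical)
joint_width_px = 132.0         # measured finger-joint width in the photo (hypothetical)

scale = card_width_px / CARD_WIDTH_MM           # pixels per millimetre
joint_diameter_mm = joint_width_px / scale      # about 17.7 mm with these numbers
circumference_mm = math.pi * joint_diameter_mm  # ring inner circumference used to pick the size
print(f"diameter = {joint_diameter_mm:.1f} mm, circumference = {circumference_mm:.1f} mm")
```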

Подробнее
27-01-2022 дата публикации

APPARATUS AND METHOD FOR DETECTING FOG ON ROAD

Номер: US20220028118A1
Принадлежит:

According to an embodiment, a device for detecting fog on a road comprises an imaging device installed to capture a two-way road and capturing a fog on the two-way road, a network configuring device provided under the imaging device and transmitting an image captured by the imaging device, a fog monitoring device receiving the image from the network configuring device, analyzing the image to thereby detect the fog, and outputting an alert per predetermined crisis level, and a display device displaying the alert output from the fog monitoring device and transmitting the alert via a wired or wireless network. 1. A device for detecting fog on a road , the device comprising:an imaging device installed to capture a two-way road and capturing a fog on the two-way road;a network configuring device provided under the imaging device and transmitting an image captured by the imaging device;a fog monitoring device receiving the image from the network configuring device, analyzing the image to thereby detect the fog, and outputting an alert per predetermined crisis level; anda display device displaying the alert output from the fog monitoring device and transmitting the alert via a wired or wireless network.2. The device of claim 1 , wherein the imaging device includes:a camera capturing the two-way road;a bracket provided under the camera and adjusting a direction thereof using a bolt and a nut;a median strip guardrail fixing base formed in a double-winged structure to be mounted on a median strip guardrail without damaging the median strip guardrail and fixed to, or removed from, the median strip guardrail using a bolt and a nut; anda supporting pole connecting the bracket with the median strip guardrail fixing base and adjusting a height thereof using at least one bolt.3. The device of claim 1 , wherein the fog monitoring device includes:an image receiver installed within a predetermined distance from a site where the imaging device is installed and storing the image from ...

Подробнее
12-01-2017 дата публикации

METHOD AND DEVICE FOR DETECTING DEFECTS IN A PRESSING TEST OF A TOUCH SCREEN

Номер: US20170011504A1
Принадлежит:

The present invention provides a method and a device for detecting defects in a pressing test of a touch screen. The method for detecting defects in a pressing test of a touch screen includes: Step S1: acquiring, after each test point of the touch screen is tested, an image of a test region in which the test point is located, when performing the pressing test; Step S2: identifying a quantity of abnormal points in the acquired image; and Step S3: comparing the identified quantity of abnormal points with a quantity of abnormal points allowed in the pressing test, and determining that the touch screen has defects if the identified quantity of abnormal points exceeds the quantity of abnormal points allowed in the pressing test. The above method for detecting defects can check instantly and accurately whether defects arise on the touch screen during the pressing test. 1. A method for detecting defects in a pressing test of a touch screen, comprising: Step S1: acquiring, after each test point of the touch screen is tested, an image of a test region in which the test point is located, when performing the pressing test; Step S2: identifying a quantity of abnormal points in the acquired image; and Step S3: comparing the identified quantity of abnormal points with a quantity of abnormal points allowed in the pressing test, and determining that the touch screen has defects if the identified quantity of abnormal points exceeds the quantity of abnormal points allowed in the pressing test. 2. The method for detecting defects in a pressing test of a touch screen according to claim 1, wherein Step S3 further comprises: determining a defect level of the touch screen based on a value by which the identified quantity of abnormal points exceeds the quantity of abnormal points allowed in the pressing test. 3. The method for detecting defects in a pressing test of a touch screen according to claim 1, wherein Step S2 further comprises: identifying a size of ...

Подробнее
14-01-2016 дата публикации

LANE BOUNDARY LINE RECOGNITION DEVICE AND COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM OF RECOGNIZING LANE BOUNDARY LINES ON ROADWAY

Номер: US20160012298A1
Принадлежит:

An in-vehicle camera obtains image frames of a scene surrounding an own vehicle on a roadway. An extracting section in a lane boundary line recognition device extracts white line candidates from the image frames. The white line candidates indicate a degree of probability of white lines on an own vehicle lane on the roadway and a white line of a branch road which branches from the roadway. A branch judgment section calculates a likelihood of the white line as the white line of the branch road, and judges whether or not the white line candidate is the white line of the branch road based on the calculated likelihood. The branch judgment section decreases the calculated likelihood when a recognizable distance of the lane boundary line candidate monotonically decreases in a predetermined number of the image frames. 1. A lane boundary line recognition device comprising:a detection section capable of detecting lane boundary line candidates on a roadway on which an own vehicle drives on the basis of image frames of a surrounding area of the own vehicle on the roadway, captured by an in-vehicle camera mounted on the own vehicle; anda branch judgment section capable of calculating a likelihood which indicates a degree of whether each of the lane boundary line candidates detected by the detection section is a lane boundary line of a branch road, the branch road branching from the roadway, and the branch judgment section judging whether or not the lane boundary line candidate detected by the detection section is the lane boundary line of the branch road on the basis of the calculated likelihood, the branch judgment section increasing the likelihood of the lane boundary line candidate when a recognizable distance of the lane boundary line candidate monotonically decreases in a predetermined number of the image frames, where the recognizable distance indicates a distance to a farthest recognizable end point of the lane boundary line candidate.2. The lane boundary line recognition ...

Подробнее
14-01-2016 дата публикации

LANE BOUNDARY LINE RECOGNITION DEVICE AND COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM OF RECOGNIZING LANE BOUNDARY LINES ON ROADWAY

Номер: US20160012299A1
Принадлежит:

A lane boundary line recognition device detects lane boundary line candidates of a roadway from images captured by an in-vehicle camera, judges that the lane boundary line candidate is a lane boundary line of a branch road, and calculates a curvature of the lane boundary line candidate, and recognizes the lane boundary line based on the calculated curvature. The device removes the lane boundary line candidate, which has been judged as the lane boundary line of the branch road, is removed from a group of the lane boundary line candidates, and calculates the curvature of the lane boundary line candidate based on an estimated rate of change of the curvature. The device uses a past curvature calculated predetermined-number of images before when the lane boundary line candidate is the lane boundary line of the branch road, and resets the estimated rate of change of the curvature to zero. 1. A lane boundary line recognition device comprising:a detection section capable of detecting lane boundary line candidates of a roadway on the basis of frame images of the roadway around an own vehicle transmitted from an in-vehicle camera;a branch judgment section capable of judging whether the lane boundary line candidate detected by the detection section corresponds to a lane boundary line of a branch road; anda recognition section capable of calculating feature values comprising a curvature of the lane boundary line candidate detected by the detection section, and recognizing the lane boundary line on the basis of the calculated feature values, the recognition section comprising:a removing section capable of removing the lane boundary line candidate, which has been judged to correspond to the lane boundary line of the branch road by the branch judgement section, is removed from the lane boundary line candidates;a curvature calculation section capable of calculating a curvature of the lane boundary line candidate on the basis of an estimated rate of change of the curvature of the ...

Подробнее
14-01-2016 дата публикации

LANE BOUNDARY LINE RECOGNITION DEVICE

Номер: US20160012300A1
Принадлежит:

In a lane boundary line recognition device, an extraction unit extracts lane boundary line candidates from image acquired by an in-vehicle camera. A position estimation unit estimates a position of each lane boundary line based on drive lane information containing a number of drive lanes on a roadway and a width of each drive lane when (a) and (b) are satisfied, (a) when an own vehicle drives on an own vehicle lane specified by the drive lane specifying unit, and (b) when the lane boundary line candidate corresponds to lane boundary lines of the own vehicle lane. A likelihood calculation unit increases a likelihood of the lane boundary line candidate when a distance between a position of the lane boundary line candidate and an estimated position of the lane boundary line candidate obtained by the drive lane boundary line position estimation unit is within a predetermined range. 1. A lane boundary line recognition device comprising:an image acquiring unit capable of acquiring surrounding images of a roadway on which an own vehicle drives;a drive lane boundary line candidate extraction unit capable of extracting lane boundary line candidates from the images acquired by the image acquiring unit;a likelihood calculation unit capable of calculating a likelihood of each of the lane boundary line candidates;a drive lane boundary line recognition unit capable of recognizing, as a lane boundary line, the lane boundary line candidate having the likelihood of not less than a predetermined threshold value;a selection unit capable of selecting a predetermined number of the lane boundary line candidates having the likelihood of not less than the predetermined threshold value;a drive lane information acquiring unit capable of obtaining drive lane information containing a number of drive lanes on the roadway on which the own vehicle drives, and a width of each of the drive lanes;a drive lane specifying unit capable of correlating the image with the drive lane information, and ...

Подробнее
14-01-2016 дата публикации

ROOM INFORMATION INFERRING APPARATUS, ROOM INFORMATION INFERRING METHOD, AND AIR CONDITIONING APPARATUS

Номер: US20160012309A1
Принадлежит: Omron Corporation

A room information inferring apparatus that infers information regarding a room has an imaging unit that captures an image of a room that is to be subjected to inferring, a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room, a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times, and an inferring unit that infers information regarding the room based on the presence map. 1. A room information inferring apparatus that infers information regarding a room , comprising:an imaging unit that captures an image of a room that is to be subjected to inferring;a person detector that detects a person in an image captured by the imaging unit, and acquires a position of the person in the room;a presence map generator that generates a presence map indicating a distribution of detection points corresponding to persons detected in a plurality of images captured at different times; andan inferring unit that infers information regarding the room based on the presence map.2. The room information inferring apparatus according to claim 1 , wherein the person detector detects a face claim 1 , a head claim 1 , or an upper body of the person in the image claim 1 , and acquires the position of the person in the room based on a position and a size of the face claim 1 , the head claim 1 , or the upper body in the image.3. The room information inferring apparatus according to claim 1 , wherein the inferring unit infers a shape of the room based on the presence map.4. The room information inferring apparatus according to claim 3 , wherein the inferring unit infers that a polygon circumscribed around the distribution of detection points in the presence map is the shape of the room.5. The room information inferring apparatus according to claim 4 , wherein the inferring unit infers the shape ...

Подробнее
11-01-2018 дата публикации

THREE-DIMENSIONAL MAPPING SYSTEM

Номер: US20180012364A1
Автор: Mullins Brian
Принадлежит:

A survey application generates a survey of components associated with a three-dimensional model of an object. The survey application receives video feeds, location information, and orientation information from wearable devices in proximity to the object. The three-dimensional model of the object is generated based on the video feeds, sensor data, location information, and orientation information received from the wearable devices. Analytics is performed from the video feeds to identify a manipulation on the object. The three-dimensional model of the object is updated based on the manipulation on the object. A dynamic status related to the manipulation on the object is generated with respect to reference data related the object. A survey of components associated with the three-dimensional model of the object is generated. 1. A server comprising:a storage device storing instructions; and receive video feeds, location information, and orientation information from a plurality of wearable devices;', 'generate a three-dimensional model of the object based on the video feeds, the location information, and the orientation information received from the plurality of wearable devices;', 'perform analytics from the video feeds to identify a manipulation on the object, to update the three-dimensional model of the object based on the manipulation on the object, and to generate a dynamic status of the object based on the manipulation on the object with respect to reference data related to the object;', 'keep an inventory of components of the object; and', 'generate a history of manipulations of the components., 'a hardware processor communicatively coupled to the storage device and configured by the instructions to2. The server of claim 1 , wherein the manipulation of the object comprises a modification of an existing component on the object claim 1 , an addition of a new component to the object claim 1 , or a removal of an existing component on the object.3. The server of claim 1 ...

Подробнее
14-01-2021 дата публикации

Image Evaluation and Dynamic Cropping System

Номер: US20210012132A1
Принадлежит:

Systems for image evaluation and dynamic cropping are provided. In some examples, a system, may receive an instrument or image of an instrument. Identifying information may be extracted from the instrument or image of the instrument. Based on the extracted identifying information, a check/check image profile may be retrieved. In some examples, expected size and/or shape data may be extracted from the check/check image profile. The extracted expected size and/or shape data may be compared to size and/or shape data from the received instrument or image of the instrument to identify any anomalies (e.g., to determine whether the expected size and/or shape data matches the size and/or shape data of the received instrument or image of the instrument. If the expected size and/or shape data does not match size and/or shape data from the received instrument or image of the instrument, the instrument or image of the instrument may be programmatically modified and a modified image of the instrument may be generated. 1. A computing platform , comprising:at least one processor;a communication interface communicatively coupled to the at least one processor; and receive an image of a document;', 'extract, from the received image of the document, identifying information;', 'retrieve, based on the extracted identifying information, a document profile;', 'extract, from the document profile, expected data of the document;', 'compare data of the document in the received image of the document to the extracted expected data of the document;', 'determine, based on the comparing, whether an anomaly exists between the data of the document in the received image and the extracted expected data of the document;', 'responsive to determining that an anomaly does not exist, evaluate validity of the document based on the image of the document; and', programmatically modify, based on one or more machine learning datasets, the received image of the document;', 'generate a modified image of the ...

Подробнее
14-01-2021 дата публикации

INFORMATION PROCESSING DEVICE AND RECOGNITION SUPPORT METHOD

Номер: US20210012139A1
Автор: IKEDA Hiroo
Принадлежит:

In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device comprises a detection unit and an environment acquisition unit. The detection unit detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone. 1-13. (canceled) 14. An information processing device comprising: at least one processor configured to: detect a marker from an image; acquire information representing an accuracy with which an image of the marker is recognized at the target area where the marker is disposed; and, based on the information, control a display device including a display screen such that a second image is displayed superimposedly on a part of the image, the second image corresponding to the information, wherein the second image is changed based on the information. 15. The information processing device according to claim 14, wherein the marker is disposed at an arbitrary place within a target area to be recognized. 16. The information processing device according to claim 14, wherein the at least one processor acquires the information based on image information on the marker itself described in the detected marker. 17. The information processing device according to claim 14, wherein the marker includes a grid pattern comprising black grids and white grids. 18. A recognition support ...

Подробнее
12-01-2017 дата публикации

SYSTEM AND METHOD FOR FOCUSING IMAGING DEVICES

Номер: US20170013187A1
Принадлежит:

A system and method for automatically focusing imaging devices on an imaging set employs at least one tracker and two or more tracking markers, each tracking marker having an identification means and a tracking pattern. The tracking markers are configured for attaching to the imaging devices and to corresponding subjects to be imaged. A tracker gathers image information of the imaging set and provides it to a controller, which compares the image information to predetermined stored information about the tracking patterns of the various tracking markers. The tracking markers are identified and their three-dimensional positions determined. The distances between the imaging devices and the subjects are determined and the distances between the imaging devices and the subjects are calculated. This provides the focus setting information for communication to the imaging devices. The tracking patterns may have no rotational symmetry, allowing the orientation of subjects to be determined. 1. A system for automatically adjusting a focus setting of at least one imaging device on an imaging set , the system comprising:two or more tracking markers, each tracking marker comprising an identification means and a tracking pattern, the two or more tracking markers configured for attaching to the at least one imaging device and to corresponding one or more subjects to be imaged on the imaging set by the at least one imaging device;a first tracker disposed proximate the imaging set to gather first image information of the imaging set, the first tracker having a field of view including the at least one imaging device and the one or more subjects;a controller configured for receiving the first image information from the first tracker and for communicating to the at least one imaging device control signals for the adjusting of focus settings for the at least one imaging device based on distances between the at least one imaging device and the one or more subjects; anda database comprising ...

More
14-01-2021 publication date

IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Number: US20210012530A1
Author: ZHENG Congyao
Assignee:

A 2D image comprising at least one target object is obtained. First 2D coordinate of a first key point and second 2D coordinate of a second key point are obtained from the 2D image. The first key point is an imaging point of a first part of the target object in the 2D image, and the second key point is an imaging point of a second part of the target object in the 2D image. Relative coordinate is determined based on the first 2D coordinate and the second 2D coordinate. The relative coordinate is used for characterizing a relative position between the first part and the second part. The relative coordinate is projected into a virtual three-dimensional space and 3D coordinate corresponding to the relative coordinate is obtained. The 3D coordinate is used for controlling coordinate conversion of the target object on a controlled device. 1. An image processing method , comprising:obtaining a two-dimensional (2D) image comprising at least one target object;obtaining first 2D coordinate of a first key point and second 2D coordinate of a second key point from the 2D image, wherein the first key point is an imaging point of a first part of the target object in the 2D image, and the second key point is an imaging point of a second part of the target object in the 2D image;determining relative coordinate based on the first 2D coordinate and the second 2D coordinate, wherein the relative coordinate is used for characterizing a relative position between the first part and the second part;projecting the relative coordinate into a virtual three-dimensional (3D) space and obtaining 3D coordinate corresponding to the relative coordinate, wherein the 3D coordinate is used for controlling coordinate conversion of the target object on a controlled device.3. The method according to claim 2 , wherein mapping the first 2D coordinate into the second 2D coordinate system to obtain the third 2D coordinate comprises:determining, according to the first 2D coordinate system and the second 2D ...
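
A minimal sketch of the relative-coordinate and projection steps, assuming the relative offset is normalized by the image size and placed on a plane at a fixed depth in the virtual 3D space; this convention and the names are assumptions, not the claimed method itself.

import numpy as np

def relative_coordinate(first_kp_2d, second_kp_2d):
    """Offset of the first key point (e.g. a hand) relative to the second key
    point (e.g. the torso) in the 2D image."""
    return np.asarray(first_kp_2d, dtype=float) - np.asarray(second_kp_2d, dtype=float)

def project_to_virtual_3d(rel_xy, image_size, virtual_depth=1.0):
    """Normalize the relative 2D coordinate to roughly [-1, 1] and place it on a
    plane at `virtual_depth`; the resulting 3D coordinate can then drive the
    coordinate conversion of the target object on the controlled device."""
    width, height = image_size
    x = 2.0 * rel_xy[0] / width
    y = 2.0 * rel_xy[1] / height
    return np.array([x * virtual_depth, y * virtual_depth, virtual_depth])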

More
09-01-2020 publication date

METHOD FOR QUANTIFICATION OF UNCERTAINTY OF CONTOURS IN MANUAL & AUTO SEGMENTING ALGORITHMS

Number: US20200013171A1
Author: RANJAN Uma Satya
Assignee:

A system quantifies uncertainty in contours. The system includes at least one processor programmed to receive an image including an object of interest (OOI). Further, a band of uncertainty delineating a region in the received image is received. The region includes the boundary of the OOI. The boundary is delineated in the region using iterative filtering of the region, and a metric of uncertainty of the delineation is determined for the region. 1. A system for quantification of uncertainty of contours, said system comprising: at least one processor programmed to: receive an image including an object of interest (OOI); receive a band of uncertainty delineating a region in the received image, the region including the boundary of the OOI; delineate the boundary in the region using iterative filtering of the region; and determine at least one metric of uncertainty of the delineation for the region. 2. The system according to claim 1, wherein the band of uncertainty includes an inner contour and an outer contour, the inner contour within the outer contour. 3. The system according to claim 1, wherein the processor is further programmed to: display a contour representing the delineated boundary of the OOI, the contour color coded according to metric of uncertainty. 4. The system according to claim 1, wherein the processor is further programmed to, for each of at least one of a plurality of sub-regions defining the region: iteratively filter the sub-region until the boundary in the sub-region can be delineated with a confidence level exceeding a predetermined level; and delineate the boundary in the filtered sub-region. 5. The system according to claim 4, wherein the processor is further programmed to, for each of the at least one of ..., determine the metric of uncertainty of the delineation for the sub-region, the metric of uncertainty based on the number of iterations.
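
A minimal sketch of the iterative-filtering loop of claim 4, assuming edge distinctness (the ratio of the strongest to the mean gradient magnitude inside the band) as the confidence measure; both the measure and the names are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def delineation_uncertainty(region, confidence_threshold=5.0, max_iterations=20, sigma=1.0):
    """Smooth the region until its boundary stands out from the residual noise;
    the number of iterations used is returned as the metric of uncertainty."""
    filtered = np.asarray(region, dtype=float)
    confidence = 0.0
    for iteration in range(1, max_iterations + 1):
        filtered = gaussian_filter(filtered, sigma=sigma)
        gy, gx = np.gradient(filtered)
        edge_strength = np.hypot(gx, gy)
        confidence = edge_strength.max() / (edge_strength.mean() + 1e-9)
        if confidence >= confidence_threshold:
            break
    return iteration, confidence  # more iterations -> a more uncertain delineation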

More
09-01-2020 publication date

METHOD AND APPARATUS FOR OBTAINING SAMPLED POSITIONS OF TEXTURING OPERATIONS

Number: US20200013174A1
Assignee:

Methods and apparatuses are disclosed for reporting texture footprint information. A texture footprint identifies the portion of a texture that will be utilized in rendering a pixel in a scene. The disclosed methods and apparatuses advantageously improve system efficiency in decoupled shading systems by first identifying which texels in a given texture map are needed for subsequently rendering a scene. Therefore, the number of texels that are generated and stored may be reduced to include the identified texels. Texels that are not identified need not be rendered and/or stored. 1. A method for obtaining a bitmap identifying texel locations corresponding to a pixel , comprising:obtaining a plurality of texture map coordinates in texture space;identifying a plurality of texel locations based on the plurality of texture map coordinates, wherein each texel location identifies a location of a texel in texture space that is covered by a projection of a pixel into the texture space;generating a bitmap representing the plurality of texel locations; andstoring the bitmap, wherein the bitmap is applicable for shading the pixel.2. The method of claim 1 , further comprising:obtaining gradient information for each of the plurality of texture map coordinates; andutilizing the gradient information in identifying the plurality of texel locations.3. The method of claim 1 , further comprising:obtaining a texture map level of detail parameter; andutilizing the texture map level of detail parameter in identifying the plurality of texel locations.4. The method of claim 1 , further comprising:obtaining a resolution specification indicating a number of texel locations to be represented for a dimension of the bitmap; andutilizing the resolution specification in generating the bitmap.5. The method of claim 1 , further comprising:generating a coarsening factor, wherein the coarsening factor indicates the number of texel locations represented per bit of the bitmap along a dimension of the ...

More
09-01-2020 publication date

HEIGHT CALCULATION SYSTEM, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

Number: US20200013180A1
Assignee: FUJI XEROX CO., LTD.

A height calculation system includes a capturing section that captures an image, a detection section that detects a specific body part of a human in the image captured by the capturing section, and a calculation section that calculates the height of the human based on the size of the part when the one part detected by the detection section overlaps with a specific area present in the image. 1. A height calculation system comprising:a capturing section that captures an image;a detection section that detects a specific body part of a human in the image captured by the capturing section; anda calculation section that calculates a height of the human based on a size of the part when the part detected by the detection section overlaps with a specific area present in the image.2. The height calculation system according to claim 1 , further comprising:an acquiring section that acquires part information related to the size of the part when the specific body part detected by the detection section overlaps with the specific area; anda determination section that determines a size to be associated with a predetermined height based on the part information acquired by the acquiring section,wherein the calculation section calculates the height of the person based on a relationship between the size of the part when the one part overlaps with the specific area and the size associated with the predetermined height.3. The height calculation system according to claim 2 , further comprising:an estimation section that estimates an attribute of the person related to the part information,wherein the determination section determines each size to be associated with a height determined for each attribute, andthe calculation section calculates the height of the person based on a relationship between the size of the part when the one part overlaps with the specific area and a size that is the closest to the size of the one part among the sizes associated with the height for each attribute.4. ...
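
A minimal sketch of the proportional height calculation, assuming the detected body part's pixel size is compared with a stored size that was associated with a known (predetermined) height while the part overlapped the same specific area; names and units are illustrative.

def estimate_height(part_size_px, reference_part_size_px, reference_height_cm):
    """Scale the reference height by the ratio of the detected part size to the
    part size previously associated with that height in the same image area."""
    if reference_part_size_px <= 0:
        raise ValueError("reference part size must be positive")
    return reference_height_cm * (part_size_px / reference_part_size_px)

# e.g. estimate_height(60, 50, 170.0) -> 204.0 (a part 20 % larger than the reference)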

More
14-01-2016 publication date

IMAGE PROCESSING DEVICE, ENDOSCOPE APPARATUS, INFORMATION STORAGE DEVICE, AND IMAGE PROCESSING METHOD

Number: US20160014328A1
Author: Rokutanda Etsuko
Assignee: OLYMPUS CORPORATION

An image processing device includes an image acquisition section that acquires a captured image that includes an image of the object, a distance information acquisition section that acquires distance information based on the distance from an imaging section to the object when the imaging section captured the captured image, an in-focus determination section that determines whether or not the object is in focus within a pixel or an area within the captured image based on the distance information, a classification section that performs a classification process that classifies the structure of the object, and controls the target of the classification process corresponding to the results of the determination as to whether or not the object is in focus within the pixel or the area, and an enhancement processing section that performs an enhancement process on the captured image based on the results of the classification process. 1. An image processing device comprising:an image acquisition section that acquires a captured image that includes an image of an object;a distance information acquisition section that acquires distance information based on a distance from an imaging section to the object when the imaging section captured the captured image;an in-focus determination section that determines whether or not the object is in focus within a pixel or an area within the captured image based on the distance information;a classification section that performs a classification process that classifies a structure of the object, and controls a target of the classification process corresponding to results of the determination as to whether or not the object is in focus within the pixel or the area; andan enhancement processing section that performs an enhancement process on the captured image based on results of the classification process.2. The image processing device as defined in claim 1 ,the classification section outputting a classification result that corresponds to an ...

More
15-01-2015 publication date

IMAGING APPARATUS FOR IMAGING AN OBJECT

Number: US20150016704A1
Assignee:

The invention relates to an imaging apparatus for imaging an object. A geometric relation determination unit determines a geometric relation between first and second images of the object, wherein a marker determination unit determines corresponding marker locations in the first and second images and marker appearances based on the geometric relation such that the marker appearances of a first marker to be located at a first location in the first image and of a second marker to be located at a second corresponding location in the second image are indicative of the geometric relation. The images with the markers at the respective corresponding locations are shown on a display unit. Since the marker appearances are indicative of the geometric relation between the images, a comparative reviewing of the images can be facilitated, in particular, if they correspond to different viewing geometries. 1. An imaging apparatus for imaging an object, the imaging apparatus comprising: a first image providing unit for providing a first image of the object, a second image providing unit for providing a second image of the object, a geometric relation determination unit for determining a geometric relation between the first image and the second image, a marker determination unit for determining corresponding marker locations in the first and second images and marker appearances based on the geometric relation such that a first location in the first image and a second location in the second image show the same part of the object and such that the marker appearances of a first marker to be located at the first location and of a second marker to be located at the second location are indicative of the geometric relation between the first image and the second image, a display ...

More
17-01-2019 publication date

CARDIAC FUNCTION MEASUREMENT DEVICE, CARDIAC FUNCTION MEASUREMENT METHOD, AND CARDIAC FUNCTION MEASURING PROGRAM

Number: US20190014991A1
Author: Maki Shin, UTSUGIDA Tomoki
Assignee: TERUMO KABUSHIKI KAISHA

A cardiac function measuring apparatus, a cardiac function measuring method, and a cardiac function measuring program capable of monitoring cardiac functions are disclosed. The cardiac function measuring apparatus for measuring data for evaluating cardiac functions includes an irradiation unit for irradiating a jugular with light, an imaging unit configured to acquire image data of the jugular, and a vein discriminating part configured to discriminate a shape of the jugular vein in the acquired image data and to calculate a shape complexity level indicating complexity in the shape of the jugular. 1. A cardiac function measuring apparatus configured to measure data for evaluating cardiac functions , comprising:an irradiation unit configured to irradiate a jugular with light;an imaging unit configured to acquire image data of the jugular; anda vein discriminating unit configured to discriminate a shape of a jugular vein in the acquired image data and to calculate a shape complexity level indicating complexity in the shape of the jugular vein.2. The cardiac function measuring apparatus according to claim 1 , further comprising:a memory unit capable of storing a plurality of shape complexity levels of the jugular vein; andwherein cardiac function measuring apparatus is configured to compare the plurality of shape complexity levels acquired at different times.3. The cardiac function measuring apparatus according to claim 1 , wherein the vein discriminating unit is configured to calculate a line in conformance with the shape of the discriminated jugular vein claim 1 , calculate a regression curve of the line claim 1 , and determine the shape complexity level as a number of inflection points of the regression curve.4. The cardiac function measuring apparatus according to claim 1 , further comprising:a display unit configured to display a result when a presence of an abnormality in the cardiac functions is determined.5. The cardiac function measuring apparatus according to ...
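
A minimal sketch of the inflection-point count of claim 3, assuming the discriminated vein centerline is available as sampled (x, y) points and that a polynomial of fixed degree serves as the regression curve; the degree is an arbitrary assumption.

import numpy as np

def shape_complexity(xs, ys, degree=7):
    """Fit a regression polynomial to the vein centerline and count its
    inflection points as sign changes of the second derivative."""
    coeffs = np.polyfit(xs, ys, degree)
    second_derivative = np.polyder(coeffs, 2)
    samples = np.polyval(second_derivative, np.linspace(min(xs), max(xs), 512))
    signs = np.sign(samples)
    signs = signs[signs != 0]                    # ignore exact zeros
    return int(np.sum(signs[:-1] != signs[1:]))  # number of sign changes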

More
19-01-2017 publication date

DATA VISUALIZATION SYSTEM AND METHOD

Number: US20170018102A1
Author: Cardno Andrew John
Assignee:

A data visualization system comprising: a data retrieval module arranged to retrieve data from a data storage module in communication with the data visualization system, wherein the retrieved data includes data sets for representation in a tree map; a tree map generation module arranged to generate a tree map based on the retrieved data, wherein the tree map generation module is further arranged to: i) sort the retrieved data sets according to the size of the data sets; ii) define an area for generating multiple rectangles, each rectangle representing one of the data sets, wherein the area is defined to allow the data sets to be spatially arranged within the area; iii) accumulate data points for data within the data sets to generate a rectangle that has dimensions that fall within pre-defined parameters; iv) generate a rectangle for each data set; and v) orientate the rectangle such that its orientation is only changed if the rectangle does not fit in the available area. 1. A data visualization system including:a data retrieval module arranged to retrieve data from a data storage module in communication with the data visualization system, wherein the retrieved data includes data sets for representation in a tree map;a tree map generation module arranged to generate a tree map based on the retrieved data, wherein the tree map generation module is further arranged to:i) sort the retrieved data sets according to the size of the data sets;ii) define an area for generating multiple rectangles, each rectangle representing one of the data sets, wherein the area is defined to allow the data sets to be spatially arranged within the area;iii) generate a rectangle for each data set; andiv) orientate the rectangle while maintaining the area of the rectangle such that its orientation is only changed if the rectangle does not fit in the available area.2. The system of claim 1 , wherein the tree map generation module is further arranged to determine a total number of data points ...

More
03-02-2022 publication date

SCREEN CODING METHODS AND SYSTEMS BASED ON MASS CENTER COINCIDENCE

Number: US20220036594A1
Assignee:

A screen coding method and system based on mass center coincidence. The screen coding method based on mass center coincidence includes: constructing a plurality of coding unit models composed of a combination of a plurality of geometric figures with coincident mass centers, where vertices of the geometric figures do not coincide; and filling in data information to each vertex of the coding unit models according to a method of data information arrangement of a plurality of data combinations to generate a coding unit so as to implement different data lengths of the same coding unit. As such, a data length of the coding unit can be controlled, so that when more data needs to be coded, the overall size of the coding unit does not need to be changed, which greatly improves coding efficiency. 1. A screen coding method based on mass center coincidence, comprising: constructing a plurality of coding unit models comprising a combination of a plurality of geometric figures with coincident mass centers, wherein vertices of the geometric figures do not coincide; and filling in data information to each vertex of the coding unit models according to a method of data information arrangement of a plurality of data combinations to generate a coding unit so as to implement different data lengths of the same coding unit; wherein, during the constructing, the coding unit models are each implemented as a combination of two equilateral polygons with coincident mass centers, and the method of data information arrangement of the plurality of data combinations comprises: using vertices of the two equilateral polygons as a first change element; using the number of coded data points as a second change element; and performing arrangement and combination according to the first change element and the second change element to determine the method of data information arrangement. 2. The screen coding method based on mass center coincidence according to claim 1, wherein the method of data ...

More
18-01-2018 publication date

SYSTEM AND METHOD FOR OBJECT COUNTING AND TRACKING

Number: US20180018788A1
Author: Olmstead Bryan L.
Assignee:

Disclosed systems and methods for detecting and tracking a quantity of items in a particular location by optical means. The system includes an imager having a field of view directed over a region of interest where the items to be tracked are located, the imager being operable to acquire images of the items. The system further includes a controller in operative communication with the imager, where the controller acquires depth data from the images and determines volume measurements based on the depth data. Based on the determined volume measurements, the system is capable of counting and tracking the items present in the region of interest using optical means to avoid relying on barcodes or other identifier information affixed to the items. 1. A detection system for tracking items located in a region of interest, the system comprising: an imager having a field of view directed onto the region of interest, the imager operable to acquire a first image at a first time and a second image at a second time of the region of interest; and a controller in operative communication with the imager, the controller operable to: acquire a first set of depth data for a first quantity of items in the region of interest based on the first image; determine a baseline volume measurement for the first quantity of items based on the first set of depth data from the first image; acquire a second set of depth data for a second quantity of items in the region of interest based on the second image; determine a current volume measurement for the second quantity of items based on the second set of depth data from the second image; and tally a number of the second quantity of items present at the region of interest based on the current volume measurement and the baseline volume measurement. 2. The detection system of claim 1, wherein the controller is further operable to determine a variance volume measurement based on the baseline volume measurement and the current volume ...
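
A minimal sketch of the volume-based tally, assuming each depth image gives the per-pixel distance from the imager to the top of the pile, that the empty-surface depth and the pixel footprint are known from calibration, and that a single item has a known nominal volume; all names are illustrative.

import numpy as np

def pile_volume(depth_map, empty_depth, pixel_area_m2):
    """Integrate the height of material above the empty reference surface."""
    heights = np.clip(empty_depth - np.asarray(depth_map, dtype=float), 0.0, None)
    return float(heights.sum() * pixel_area_m2)  # cubic metres

def tally_items(current_depth, baseline_depth, empty_depth, pixel_area_m2, item_volume_m3):
    """Change in item count inferred from the change in measured pile volume."""
    current = pile_volume(current_depth, empty_depth, pixel_area_m2)
    baseline = pile_volume(baseline_depth, empty_depth, pixel_area_m2)
    return round((current - baseline) / item_volume_m3)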

More
28-01-2016 publication date

SYSTEM AND METHOD FOR MANAGING A SUPPLY OF BREAST MILK

Number: US20160022886A1
Author: Bauer Ryan
Assignee:

A system is disclosed for managing a supply of breast milk. In one form the system includes a codified container for receiving expressed breast milk. A computing device receives an image of the expressed milk in the codified container. The codification allows for software to recognize the size and type of the container, as well as scale and orientation, to translate the image into an accurate volume. The milk data is then processed and analyzed to produce feedback regarding the pumping session, such as logs, charts, or reminders. In other embodiments, nipple positioning may be analyzed as well. 1. (canceled)2. A system for monitoring and analyzing milk collection comprising:a milk collection device configured to enable breastpumping data collection through interaction with an imaging component of a computing device and a custom software application.3. The system of claim 2 , the computing device selected from the group consisting of a smart phone claim 2 , a tablet claim 2 , a breastpump and a digital camera claim 2 , the computing device including a camera capable of taking at least one of an image and a video.4. The system of claim 2 , the computing device developing at least one of real-time performance feedback and troubleshooting.5. The system of claim 2 , the computing device configured to execute the custom software application claim 2 , the software application configured to recognize a codification element associated with the milk collection device from an image generated by the imaging component to determine milk volume within the milk collection device.6. The system of claim 2 , the computing device configured to scan and capture an image of the milk collection device via the imaging component.7. The system of claim 2 , the computing device configured to record a video (or series of images) of the collection container via the imaging component.8. The system of claim 2 , the computing device configured to execute the software and to recognize the size and/ ...

More
16-01-2020 publication date

Automatic Focusing Method and Apparatus Based on Region of Interest

Number: US20200021747A1
Assignee:

An automatic focusing method and apparatus comprise the following steps: acquiring a target image that has been divided into blocks; acquiring the definition of each block, respectively; acquiring normalized central coordinates and a normalized size of a region of interest on the target image; respectively calculating a full width at half maximum coefficient in the horizontal direction and the vertical direction according to the normalized size; calculating a weight value of each block using a two-dimensional discrete Gaussian function according to the normalized central coordinates and the full width at half maximum coefficient; calculating a normalized overall definition of the target image according to the weight value and definition of each block; and focusing according to the normalized overall definition. The method and apparatus can automatically calculate a mask of the region of interest, thereby avoiding the occupying of storage space required when storing ROI mask data. 1. An automatic focusing method based on a region of interest , comprising the following steps:acquiring a target image that has been divided into blocks;acquiring the definition of each block, respectively;acquiring normalized central coordinates and a normalized size of a region of interest on the target image;respectively calculating a full width at half maximum coefficient in the horizontal direction and the vertical direction according to the normalized size;calculating a weight value of each block using a two-dimensional discrete Gaussian function according to the normalized central coordinates and the full width at half maximum coefficient;calculating a normalized overall definition of the target image according to the weight value and definition of each block; andfocusing according to the normalized overall definition.2. The method according to claim 1 , wherein claim 1 , the step of respectively calculating a full width at half maximum coefficient in the horizontal direction and ...
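
A minimal sketch of the block-weighting scheme, reading the normalized ROI width and height as the full width at half maximum (FWHM) of the Gaussian in each direction, with sigma = FWHM / (2·sqrt(2·ln 2)); that reading, and the names, are assumptions.

import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # sigma = FWHM * 0.4247...

def block_weights(grid_shape, roi_center, roi_size):
    """2D Gaussian weight for each block of the divided image, centred on the
    normalized ROI centre, with the ROI width/height used as the FWHM."""
    rows, cols = grid_shape
    cx, cy = roi_center                       # normalized ROI centre in [0, 1]
    sx = max(roi_size[0] * FWHM_TO_SIGMA, 1e-6)
    sy = max(roi_size[1] * FWHM_TO_SIGMA, 1e-6)
    xs = (np.arange(cols) + 0.5) / cols       # normalized block centres
    ys = (np.arange(rows) + 0.5) / rows
    gx = np.exp(-((xs - cx) ** 2) / (2.0 * sx ** 2))
    gy = np.exp(-((ys - cy) ** 2) / (2.0 * sy ** 2))
    return np.outer(gy, gx)

def overall_definition(block_definition, weights):
    """Weight-normalized overall definition used as the focus score."""
    return float((np.asarray(block_definition) * weights).sum() / weights.sum())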

More
26-01-2017 publication date

Image processing method

Number: US20170024626A1
Author: Yasushi Inaba
Assignee: Canon Imaging Systems Inc

An image processing method for a picture of a participant, photographed in an event, such as a marathon race, increases the accuracy of recognition of a race bib number by performing image processing on a detected race bib area, and associates the recognized race bib number with a person included in the picture. This image processing method detects a person from an input image, estimates an area in which a race bib exists based on a face position of the detected person, detects an area including a race bib number from the estimated area, performs image processing on the detected area to thereby perform character recognition of the race bib number from an image subjected to image processing, and associates the result of character recognition with the input image.

More
28-01-2016 publication date

METHOD AND SYSTEM FOR OBJECT DETECTION WITH MULTI-SCALE SINGLE PASS SLIDING WINDOW HOG LINEAR SVM CLASSIFIERS

Number: US20160026898A1
Assignee:

The invention provides methods and systems for reliably detecting objects in a received video stream from a camera. Objects are selected and a bound around selected objects is calculated and displayed. Bounded objects can be tracked. Bounding is performed by using Histogram of Oriented Gradients and linear Support Vector Machine classifiers. 1. A method for reliably detecting an object in a video frame comprising:predetermining one or more trained object classifiers based on one or more samples of predetermined size;receiving a video stream from a camera;selecting an object within at least one frame of said video stream;determining a bound of said object based on said predetermined trained object classifiers; anddetecting said object based on said bound.2. The method of claim 1 , wherein said objects are vehicles.3. The method of claim 1 , wherein said object classifiers are linear histogram of oriented gradients classifiers claim 1 , each based on histogram of oriented gradients feature vectors.4. The method of further comprising determining said bound of said object based on multi-scale single pass sliding window histogram of oriented gradients linear support vector machine classifiers.5. The method of claim 4 , further comprising predetermining a calibration based on said trained object classifiers claim 4 , and performing said multi-scale single pass sliding window based on said calibration.6. The method of claim 5 , wherein said object classifiers are trained for the same object or object category for a plurality of grid sizes claim 5 , and said object classifiers are trained with positive and negative histogram of oriented gradients feature vector samples extracted from a plurality of predetermined video image samples.7. The method of claim 6 , wherein said calibration further comprises determining said histogram of oriented gradients feature vectors by: dividing said at least one frame into a grid of cells; calculating a fixed size histogram of oriented ...
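
A minimal sketch of HOG feature extraction with a linear SVM and a single-pass sliding window at one scale, using scikit-image and scikit-learn; the 64x128 window, step size and regularization constant are placeholder assumptions, and multi-scale detection would simply repeat the pass on resized copies of the frame.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WINDOW = (64, 128)  # (width, height); all training patches must share this size

def train_classifier(positive_patches, negative_patches):
    """Train a linear SVM on HOG feature vectors of fixed-size grayscale samples."""
    X = [hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
         for p in positive_patches + negative_patches]
    y = [1] * len(positive_patches) + [0] * len(negative_patches)
    return LinearSVC(C=0.01).fit(X, y)

def detect(frame_gray, clf, step=16, threshold=0.5):
    """Slide the window over one scale and keep positions with a high SVM score."""
    w, h = WINDOW
    detections = []
    for top in range(0, frame_gray.shape[0] - h + 1, step):
        for left in range(0, frame_gray.shape[1] - w + 1, step):
            feat = hog(frame_gray[top:top + h, left:left + w],
                       orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            score = clf.decision_function([feat])[0]
            if score > threshold:
                detections.append((left, top, w, h, float(score)))
    return detections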

More
10-02-2022 publication date

FACE AND INNER CANTHI DETECTION FOR THERMOGRAPHIC BODY TEMPERATURE MEASUREMENT

Number: US20220042851A1
Assignee:

One example temperature sensing device includes an electronic processor configured to receive a thermal image of a person captured by a thermal camera. The electronic processor is configured to determine a first temperature and a first location of a first hotspot on the person. The electronic processor is configured to determine a second location of a second hotspot on the person based on the second location being approximately symmetrical with respect to the first location about an axis, and the second hotspot having a second temperature that is approximately equal to the first temperature. The electronic processor is configured to determine a distance between the first location of the first hotspot and the second location of the second hotspot. In response to determining that the distance is within the predetermined range of distances, the electronic processor is configured to generate and output an estimated temperature of the person. 1. A temperature sensing device comprising: an output device configured to provide an output; a thermal camera configured to capture a thermal image of a person; and an electronic processor configured to: receive the thermal image of the person from the thermal camera, determine a first temperature and a first location of a first hotspot on the person and included in the thermal image, determine a second location of a second hotspot on the person and included in the thermal image based on the second location being approximately symmetrical with respect to the first location about an axis, and the second hotspot having a second temperature that is approximately equal to the first temperature, determine a distance between the first location of the first hotspot and the second location of the second hotspot, determine whether the distance between the first location and the second location is within a predetermined range of distances, and, in response to determining that the distance is within the predetermined range of distances, generate an estimated ...
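
A minimal sketch of the symmetric-hotspot check, assuming the face is roughly centred so that the axis of symmetry is the vertical centre line of the thermal image; the tolerance and pixel-distance range are placeholder assumptions.

import numpy as np

def estimate_body_temperature(thermal, temp_tolerance=0.5, distance_range=(8, 40)):
    """Find the hottest pixel, look for a matching hotspot mirrored about the
    vertical centre line (the assumed axis), and accept the pair as the inner
    canthi only if the temperatures agree and the spacing is plausible."""
    thermal = np.asarray(thermal, dtype=float)
    first = np.unravel_index(np.argmax(thermal), thermal.shape)  # (row, col)
    t_first = thermal[first]

    mirror_col = thermal.shape[1] - 1 - first[1]
    second = (int(np.argmax(thermal[:, mirror_col])), mirror_col)
    t_second = thermal[second]

    distance = float(np.hypot(first[0] - second[0], first[1] - second[1]))
    if abs(t_first - t_second) <= temp_tolerance and distance_range[0] <= distance <= distance_range[1]:
        return float((t_first + t_second) / 2.0)  # estimated temperature to output
    return None                                   # pair rejected, no estimate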

More
26-01-2017 publication date

DETERMINING DIMENSION OF TARGET OBJECT IN AN IMAGE USING REFERENCE OBJECT

Number: US20170024898A1
Assignee:

Systems and methods for determining dimensions of an object using a digital image. In particular, systems and methods for determining an actual dimension of a target object using a digital image of that object along with a reference object are disclosed. The digital image may be of a mirrored reflection of the reference object and the target object. 117-. (canceled)18. A method for determining separation between two regions in an image captured by a camera , the method comprising:identifying a digital picture captured by a camera of a mobile phone, wherein the digital picture includes an image of a user's body part and an image of the mobile phone;identifying a location of a first digital marker placed by the user on the digital picture;identifying a boundary of the body part image using the location of the first digital marker;computing a first distance between the identified boundary of the body part image and another boundary of the body part image;computing a second distance between boundaries of the mobile phone image;identifying a known physical dimension of the mobile phone;determining a scaling factor using the known physical dimension of the mobile phone and the second distance;determining a physical dimension of the user's body part by applying the scaling factor to the first distance.19. The method of claim 18 , wherein the method comprises:identifying a position coordinate of an initial location of the first digital marker placed by the user on the digital picture;identifying a location of an estimated end point of one section of the body part imageidentifying a position coordinate of the location of the estimated end point;determining a difference between the position coordinate of the initial location of the first digital marker and the position coordinate of the location of the estimated end point; andafter determining that the difference exceeds a permitted difference, instructing the user to move the first digital marker from the initial location, ...
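
A minimal sketch of the scaling-factor arithmetic, assuming the pixel length of the phone and of the body part have already been measured between the identified boundaries in the same picture; the numbers in the usage line are made up.

def body_part_dimension(part_length_px, phone_length_px, phone_length_mm):
    """Physical size of the body part, using the phone of known physical length
    that appears in the same picture as the reference object."""
    scale_mm_per_px = phone_length_mm / phone_length_px
    return part_length_px * scale_mm_per_px

# e.g. body_part_dimension(420, 300, 150.0) -> 210.0 mm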

More
26-01-2017 publication date

Association Methods and Association Devices

Number: US20170024902A1
Author: XU Ran
Assignee:

This application provides an association method and device, and relates to the field of communications. The method comprises: obtaining image data in a visual field area of an imaging device; dividing the visual field area into multiple visual field subareas; obtaining first attribute information of an object in any one of the multiple visual field subareas by means of beam scanning; and establishing a correspondence between the first attribute information of the object in the visual field subarea and image data corresponding to the visual field subarea. By means of the association method and device, a high-accuracy correspondence between the object in the visual field area of the imaging device and the first attribute information of the object can be established, which is beneficial to presenting a user with the attribute information of the corresponding object in a more accurate and intuitive way. 1. A method , comprising:obtaining, by an imaging device comprising a processor, image data in a visual field area of the imaging device;dividing the visual field area into multiple visual field subareas;obtaining first attribute information of an object in any one of the multiple visual field subareas by beam scanning; andestablishing a correspondence between the first attribute information of the object and the image data corresponding to the visual field subarea.2. The method of claim 1 , wherein the obtaining the first attribute information comprises:scanning the visual field subarea by using a directional beam; andreceiving the first attribute information fed back by the object in the visual field subarea according to the scanning using the directional beam.3. The method of claim 2 , wherein the first attribute information is received from a network device of a wireless network.4. The method of claim 3 , wherein the network device of the wireless network comprises: a radio frequency identification (RFID) device of an RFID network.5. The method of claim 3 , wherein ...

More
25-01-2018 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20180025250A1
Author: Chen Bin, FURUKAWA Daisuke
Assignee:

An image processing apparatus is configured to extract an object region from an image. The image processing apparatus includes: a setting unit configured to set a plurality of reference points in the image; an obtaining unit configured to obtain a contour of the object region corresponding to each of the plurality of reference points as an initial extraction result based on a characteristic of the object region; and an extraction unit configured to extract the object region from the image based on an integration result obtained by integrating values of pixels in a plurality of initial extraction results. 1. An image processing apparatus configured to extract an object region from an image , comprising:a setting unit configured to set a plurality of reference points in the image;an obtaining unit configured to obtain a contour of the object region corresponding to each of the plurality of reference points as an initial extraction result based on a characteristic of the object region; andan extraction unit configured to extract the object region from the image based on an integration result obtained by integrating values of pixels in a plurality of initial extraction results.2. The apparatus according to claim 1 , wherein the extraction unit generates a likelihood map representing a likelihood that each pixel in the integration result is included in the object region claim 1 , and extracts the object region from the image based on the likelihood map.3. The apparatus according to claim 2 , wherein the extraction unit extracts the object region from the image based on the likelihood of each pixel in the likelihood map and the characteristic of the object region.4. The apparatus according to claim 2 , wherein the characteristic of the object region includes a characteristic of an intensity value change representing that an intensity value changes between an inside and an outside of the object region claim 2 , and a characteristic of a contour shape of the object region.5 ...

More
25-01-2018 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND, NON-TRANSITORY COMPUTER READABLE MEDIUM

Number: US20180025548A1
Assignee:

An image processing apparatus of the present invention includes an image obtaining unit obtaining first and second three-dimensional images, a deformation information obtaining unit obtaining deformation between two images, a cross-sectional image generating unit generating first and second cross-sectional images, a target position obtaining unit obtaining a target position in the first cross-sectional image, a corresponding position obtaining unit obtaining a corresponding position in the second three-dimensional image which corresponds to the target position on the basis of the deformation information. 1. An image processing apparatus , comprising:an image obtaining unit configured to obtain a first three-dimensional image and a second three-dimensional image different from the first three-dimensional image;a deformation information obtaining unit configured to obtain deformation information representing deformation between the first and second three-dimensional images;a cross-sectional image generating unit configured to generate a first cross-sectional image from the first three-dimensional image and generates a second cross-sectional image from the second three-dimensional image;a target position obtaining unit configured to obtain a target position in the first cross-sectional image;a corresponding position obtaining unit configured to obtain, on the basis of the deformation information, a corresponding position in the second three-dimensional image which corresponds to the target position; anda display controlling unit configured to control display of the first and second cross-sectional images on a displaying unit,wherein the cross-sectional image generating unit is configured to generate a cross-sectional image including the corresponding position as the second cross-sectional image.2. The image processing apparatus according to claim 1 , wherein the display controlling unit is configured to display the first and second cross-sectional images such that ...

More
28-01-2021 publication date

METHOD AND APPARATUS FOR MULTI-FACE TRACKING OF A FACE EFFECT, AND ELECTRONIC DEVICE

Number: US20210027046A1
Author: Lin Xin, Liu Gao
Assignee:

Disclosed is a method and apparatus for multi-face tracking of a face effect, and a computer readable storage medium. The method for multi-face tracking of a face effect comprises steps of: selecting a face effect in response to an effect selection command; selecting a face tracking type of the face effect in response to a face tracking type selection command; generating a face tracking sequence based on the face tracking type; recognizing a face image captured by an image sensor; superimposing the face effect on at least one of the face images according to the face tracking sequence. In the embodiment of the invention, for the face that needs to be tracked as specified by the effect, the number of faces of superimposed faces for the face effect, the superimposition order, and the display duration of the face effect can be arbitrarily set, and different effects can be superimposed on multiple faces, so as to improve the user experience. 1. A method for outputting data , comprising:obtaining a set of human-face key point data, wherein the human-face key point data characterizes a position of a key point of a human face in a target human-face image;determining human-eye feature data for characterizing a shape feature of a human eye, based on the set of the human-face key point data; andinputting the human-eye feature data into a human-eye size recognition model obtained by pre-training to obtain a degree value for characterizing a size of the human eye, and outputting the degree value, wherein the human-eye size recognition model characterizes a correspondence between human-eye feature data and a degree value.2. The method according to claim 1 , wherein the obtaining the set of the human-face key point data comprises:obtaining the target human-face image; andinputting the target human-face image into a human-face key point extraction model obtained by pre-training to obtain the set of the human-face key point data, wherein the human-face key point extraction model ...

More
02-02-2017 publication date

Measurement Target Measuring Program, Measurement Target Measuring Method, And Magnifying Observation Device

Number: US20170030706A1
Assignee: KEYENCE CORPORATION

Provided are a measurement target measuring method, and a magnifying observation device which make it possible to readily and intuitively recognize a deviation between actual height image data and CAD data concerning a specific portion of a measurement target. A CAD height data generation unit generates a plurality of pieces of CAD height data based on basic CAD data. A reference height data selection unit selects reference height data from the plurality of pieces of CAD height data. A reference appearance image data acquisition unit acquires a reference appearance image corresponding to the reference height data. A target image display unit displays a target image based on texture image data or actual height image data, and a reference image display unit displays a reference image based on the reference appearance image data or the reference height data. 1. A measurement target measuring method , comprising:acquiring three-dimensional CAD data representing a measurement target;acquiring actual height image data that includes as height information a distance from a reference position to each part on the surface of the measurement target in one direction;generating a plurality of pieces of CAD height data that each include distances from a reference position to respective parts on the surface of the measurement target in a plurality of directions on the basis of the CAD data;selecting, from the plurality of pieces of CAD height data, CAD height data with the highest matching degree with respect to the actual height image data, as reference height data;displaying as a target image a first image based on the actual height image data or a second image corresponding to the first image, and displaying as a reference image a third image based on the reference height data or a fourth image corresponding to the third image;performing alignment of the target image and the reference image as first alignment by pattern matching;specifying a measurement place for the measurement ...

More
02-02-2017 publication date

Image Inspection Device, Image Inspection Method And Image Inspection Program

Number: US20170032177A1
Assignee: KEYENCE CORPORATION

Provided is an image inspection device, an image inspection method and an image inspection program which are capable of easily and accurately inspecting a shape of an inspection target. In a setting mode, positioning image data of a setting target placed on a stage is registered. In an inspection mode, a positioning image is displayed on a display part based on the positioning image data. An image for positioning of the inspection target placed on the stage is displayed in the display part. Thereafter, image data for alignment of the inspection target is acquired, and then aligned to image data for alignment of the setting target. A size in a height direction of a measurement target place of the inspection target is measured based on the aligned height image data, to determine Pass/Fail of the inspection target. The determined determination result is displayed on the display unit. 2. The image inspection device according to claim 1 , wherein the first registration unit registers a plurality of pieces of positioning image data respectively showing a plurality of portions of the setting target claim 1 , or a plurality of pieces of positioning image data with mutually different magnifications claim 1 , in the setting mode.3. The image inspection device according to claim 2 , further comprising:a first operation unit that is operated for sequentially displaying on the display unit a plurality of positioning images registered by the first registration unit,wherein the first display command unit gives a command to the display unit to sequentially display positioning images based on the plurality of pieces of positioning image data registered by the first registration unit on the basis of operation of the first operation unit in the inspection mode.4. The image inspection device according to claim 1 ,wherein the control unit further includesa matching degree calculation unit for calculating a matching degree between the positioning image data registered by the first ...

More
02-02-2017 publication date

IMAGE DISPLAY APPARATUS AND IMAGE DISPLAY METHOD

Number: US20170032493A1
Assignee:

An image display apparatus for displaying an image containing a plurality of objects includes a setting unit configured to set a display magnification and a display position according to an attribute of a display target object when a first display mode for displaying each object included in the image is specified, and a display control unit configured to perform control to display on a screen the image containing the display target object based on the display magnification and the display position set by the setting unit. 1. An image display apparatus for displaying an image containing a plurality of objects , the image display apparatus comprising:a setting unit configured to set one of the plurality of objects as a display target object and set a magnification according to at least a size of the display target object; anda display control unit configured to perform control in such a manner that, based on the magnification set by the setting unit, zooming in to at least a part of the image is performed and the display target object is displayed on a screen,wherein, in a case where an instruction to display on the screen an object, which is among the plurality of objects and is other than the display target object being displayed on the screen, is received from a user, an object to be displayed next is determined from among the plurality of objects without reception of designation from the user.2. The image display apparatus according to claim 1 , wherein the display control unit performs control in such a manner that claim 1 , based on the magnification set by the setting unit claim 1 , zooming in to at least the part of the image is performed so that the display target object appears on the screen.3. The image display apparatus according to claim 1 , wherein claim 1 , in a case where the instruction is received from the user claim 1 , the display control unit performs control in such a manner that claim 1 , based on a magnification to be set according to at least ...

More
02-02-2017 publication date

INVENTORY, GROWTH, AND RISK PREDICTION USING IMAGE PROCESSING

Number: US20170032509A1
Assignee: Accenture Global Services Limited

According to examples, inventory, growth, and risk prediction using image processing may include receiving a plurality of images captured by a vehicle during movement of the vehicle along a vehicle path. The images may include a plurality of objects. The images may be pre-processed for feature extraction. A plurality of features of the objects may be extracted from the pre-processed images by using a combination of computer vision techniques. A parameter related to the objects may be determined from the extracted features. A spatial density model may be generated, based on the determined parameter and the extracted features, to provide a visual indication of density of distribution of the objects related to a portion of the images, and/or to provide an alert corresponding to the objects related to the portion of the images. 1. An inventory, growth, and risk prediction using image processing system comprising: an image pre-processor, executed by at least one hardware processor, to receive a plurality of images captured by a vehicle during movement of the vehicle along a vehicle path, where the plurality of images include a plurality of objects, and pre-process the plurality of images for feature extraction from the plurality of images; a feature extractor, executed by the at least one hardware processor, to extract a plurality of features of the plurality of objects from the plurality of pre-processed images by using a combination of computer vision techniques; an object level parameter generator, executed by the at least one hardware processor, to determine at least one parameter related to the plurality of objects from the plurality of extracted features; and a partition level output ... to provide a visual indication of density of distribution of the plurality of objects related to a portion of at least one of the plurality of images, and an alert corresponding to the plurality of objects related to the portion of the at least one of the plurality of images.

More
04-02-2016 publication date

METHOD AND DEVICE FOR DETECTING INTEREST POINTS IN IMAGE

Number: US20160034780A1
Assignee:

The present invention provides a method and a device for detecting interest points in an image. The method includes: acquiring an original input image; performing down-sampling processing on the original input image, so as to obtain a plurality of sampling images with different resolutions; dividing each sampling image into a plurality of small image blocks; performing filtering processing on the plurality of small image blocks in each sampling image in sequence by using Laplacian-of-Gaussian filters, so as to obtain filtered images of the plurality of small image blocks in each sampling image; and acquiring interest points in an image in filtered images of the plurality of small image blocks in each sampling image. The present invention is used for solving the problems of more memory consumption and a low detection speed in the prior art. 1. A method for detecting interest points in an image , comprising:acquiring an original input image;performing down-sampling processing on the original input image, so as to obtain a plurality of sampling images with different resolutions;dividing each sampling image into a plurality of small image blocks;performing filtering processing on the plurality of small image blocks in each sampling image in sequence by using Laplacian-of-Gaussian filters, so as to obtain filtered images of the plurality of small image blocks in each sampling image; andacquiring interest points in an image in each sampling image, according to the filtered images of the plurality of small image blocks in each sampling image.2. The method according to claim 1 , wherein claim 1 , the dividing each sampling image into the plurality of small image blocks claim 1 , comprises:dividing each sampling image into a plurality of small square image blocks having a width of X and a height of Y, wherein, both X and Y are positive integers, if the small image block at the boundary of the sampling image has a width less than X or a height less than Y, then filling the ...
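
A minimal sketch of the tile-by-tile Laplacian-of-Gaussian filtering, processing one small block at a time so that only that block needs to be held in memory; the naive down-sampling, the single sigma and the threshold are simplifying assumptions, and filtering each tile independently ignores responses that straddle tile borders.

import numpy as np
from scipy.ndimage import gaussian_laplace

def downsample(image, factor):
    """Very simple down-sampling by keeping every `factor`-th pixel."""
    return np.asarray(image)[::factor, ::factor]

def interest_points(image, tile=64, sigma=2.0, threshold=0.02):
    """Filter the image block by block with a Laplacian-of-Gaussian and keep
    pixels whose absolute response exceeds the threshold."""
    img = np.asarray(image, dtype=float) / 255.0
    points = []
    for top in range(0, img.shape[0], tile):
        for left in range(0, img.shape[1], tile):
            block = img[top:top + tile, left:left + tile]
            response = gaussian_laplace(block, sigma=sigma)
            ys, xs = np.where(np.abs(response) > threshold)
            points.extend((top + y, left + x) for y, x in zip(ys, xs))
    return points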

More
04-02-2016 publication date

Method for Accurately Determining the Position and Orientation of Each of a Plurality of Identical Recognition Target Objects in a Search Target Image

Number: US20160034781A1

Embodiments of the invention relate to detecting the number, position, and orientation of objects when a plurality of recognition target objects are present in a search target image. Dictionary image data is provided, including a recognition target pattern, a plurality of feature points of the recognition target pattern, and an offset (O_x, O_y) from the coordinates at the center of the image to the coordinates of the feature point. The sizes (R_t) and directions (θ_t) of feature vectors for the coordinates (T_x, T_y) of a plurality of feature points in the target image are also provided. The coordinates (F_x, F_y) of a virtual center point in the target image are derived. The number of additional virtual center points within a radius of the coordinates (F_x, F_y) is counted. Presence of a recognition target object is recognized near the virtual center point coordinates of the search target image. 1. A computer implemented method comprising: providing dictionary image data including a recognition target pattern, a plurality of feature points of the recognition target pattern including a size (R_m) and direction (θ_m) of a feature vector, and an offset (O_x, O_y) from coordinates at a center of a target image to coordinates of a feature point; providing a size (R_t) and direction (θ_t) of the feature vector for coordinates (T_x, T_y) of a plurality of feature points in the target image; calculating coordinates (F_x, F_y) of a virtual center point in the target image derived from T_x, T_y, O_x, O_y, R_m, R_t, θ_m, and θ_t; counting a number of additional virtual center points within a predetermined radius (r) of the coordinates (F_x, F_y) of the virtual center point; and storing the coordinates (F_x, F_y) of the virtual center point and the number of counted virtual center points. 2. The method of claim 1, further comprising repeating the counting and storing on all feature points in the target ...
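
A minimal sketch of the virtual-centre calculation of claim 1, under the usual generalized-Hough convention that the dictionary offset is scaled by R_t / R_m and rotated by (θ_t − θ_m) before being applied at the feature location; the sign convention is an assumption.

import numpy as np

def virtual_center(T, O, R_m, R_t, theta_m, theta_t):
    """One feature point's vote (F_x, F_y) for the object centre. T is the
    feature location in the target image, O the dictionary offset from the
    pattern centre to the feature point; angles are in radians."""
    scale = R_t / R_m
    dtheta = theta_t - theta_m
    c, s = np.cos(dtheta), np.sin(dtheta)
    rotation = np.array([[c, -s], [s, c]])
    return np.asarray(T, dtype=float) - scale * rotation.dot(np.asarray(O, dtype=float))

def count_nearby(centers, F, radius):
    """Number of other virtual centre points within `radius` of the vote F,
    assuming F itself is included in `centers` (hence the minus one)."""
    centers = np.asarray(centers, dtype=float)
    d = np.linalg.norm(centers - np.asarray(F, dtype=float), axis=1)
    return int(np.sum(d <= radius)) - 1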

More
04-02-2016 publication date

DETECTING SPECIFIED IMAGE IDENTIFIERS ON OBJECTS

Number: US20160034783A1
Assignee:

Embodiments of the present application relate to a method, apparatus, and system for detecting a specified image identifier. The method includes retrieving a target image to be detected from a predetermined area, binarizing the target image to be detected to obtain a target binary image to be detected, calibrating connected domains of the target binary image to be detected, successively retrieving image features of candidate connected domains, and comparing the image features corresponding to the candidate connected domains to image features of a standard specified identifier image, wherein the candidate connected domains are determined based at least in part on the calibration of the connected domains, and determining a candidate connected domain as the location of the specified identifier image based at least in part on the comparison of the image features corresponding to the candidate connected domains to image features of the standard specified identifier image. 1. A method , comprising:retrieving a target image to be detected from a predetermined area;binarizing the target image to be detected to obtain a target binary image to be detected, wherein the target binary image to be detected corresponds to a binary image of the target image to be detected and a negative image of the binary image;calibrating a plurality of connected domains of the target binary image to be detected;retrieving a set of one or more image features of a plurality of candidate connected domains, and comparing the image features corresponding to the plurality of candidate connected domains to image features of a standard specified identifier image, wherein the candidate connected domains are determined based at least in part on the calibration of the plurality of connected domains; anddetermining a candidate connected domain among the plurality of candidate connected domains as a location of the specified identifier image based at least in part on the comparison of the image features ...
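
A minimal sketch of the candidate-region step, binarizing in both polarities and labelling connected domains with SciPy; the area filter, the feature callback and the distance test are placeholder assumptions standing in for the patent's image-feature comparison.

import numpy as np
from scipy.ndimage import label, find_objects

def candidate_regions(gray, threshold=128, min_area=50):
    """Label connected domains of the binary image and of its negative, and
    return bounding slices of components large enough to hold the identifier."""
    gray = np.asarray(gray)
    candidates = []
    for binary in (gray >= threshold, gray < threshold):  # image and its negative
        labeled, _count = label(binary)
        for sl in find_objects(labeled):
            if sl is not None and binary[sl].sum() >= min_area:
                candidates.append(sl)
    return candidates

def best_match(gray, candidates, reference_features, feature_fn, max_distance):
    """Pick the candidate whose features are closest to the standard identifier."""
    scored = [(float(np.linalg.norm(feature_fn(gray[sl]) - reference_features)), sl)
              for sl in candidates]
    scored = [item for item in scored if item[0] <= max_distance]
    return min(scored, key=lambda item: item[0])[1] if scored else None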

Подробнее
05-02-2015 дата публикации

Method for Measuring Microphysical Characteristics of Natural Precipitation using Particle Image Velocimetry

Номер: US20150035944A1

A method and video sensor for precipitation microphysical feature measurement based on particle image velocimetry. The CCD camera is placed facing towards the light source, which forms a three-dimensional sampling space. As the precipitation particles fall through the sampling space, double-exposure images of the precipitation particles illuminated by the pulse light source are recorded by the CCD camera. Combined with the telecentric imaging system, the time between the two exposures is adaptive and can be adjusted according to the velocity of the precipitation particles. The size and shape can be obtained from the images of the particles; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; the drop size distribution and velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. This invention provides a method for measuring the shape, size, velocity, and other microphysical characteristics of various precipitation particles.
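The velocity step reduces to displacement divided by the pulse interval, scaled by the (telecentric, hence constant) image scale. A small sketch; all numbers are purely illustrative.

```python
import numpy as np

def particle_velocity(pos_first, pos_second, exposure_interval_s, mm_per_pixel):
    """Fall velocity of one particle from its two exposures in a double-exposure frame.

    pos_first, pos_second : (x, y) centroids of the same particle, in pixels
    exposure_interval_s   : time between the two light pulses, in seconds
    mm_per_pixel          : scale of the telecentric imaging system
    """
    displacement_px = np.hypot(pos_second[0] - pos_first[0],
                               pos_second[1] - pos_first[1])
    displacement_m = displacement_px * mm_per_pixel / 1000.0
    return displacement_m / exposure_interval_s        # metres per second

# Example: a drop moves 24 px between pulses 2 ms apart at 0.1 mm/px -> 1.2 m/s
v = particle_velocity((120, 40), (120, 64), 2e-3, 0.1)
```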

Подробнее
17-02-2022 дата публикации

TRAINING DATA GENERATION METHOD AND TRAINING DATA GENERATION DEVICE

Номер: US20220051055A1
Автор: Sakuma Shogo

A training data generation method includes: obtaining a camera image, a labeled image generated by adding annotation information to the camera image, and an object image showing an object to be detected by a learning model; identifying a specific region corresponding to the object based on the labeled image; and compositing the object image in the specific region on each of the camera image and the annotated image. 1. A training data generation method , comprising:obtaining a camera image, an annotated image generated by adding annotation information to the camera image, and an object image showing an object to be detected by a learning model;identifying a specific region corresponding to the object based on the annotated image; andcompositing the object image in the specific region on each of the camera image and the annotated image.2. The training data generation method according to claim 1 , further comprising:calculating a center coordinate of the specific region based on the annotated image, whereinthe object image is composited to overlap the center coordinate on each of the camera image and the annotated image.3. The training data generation method according to claim 1 , further comprising:calculating an orientation of the specific region based on the annotated image, whereinthe object image is composited in an orientation corresponding to the orientation of the specific region.4. The training data generation method according to claim 1 , further comprising:obtaining a size of the specific region based on the annotated image, whereinthe object image is scaled to a size smaller than or equal to the size of the specific region, and is composited.5. The training data generation method according to claim 1 , further comprising:calculating a total number of specific regions corresponding to the object based on the annotated image, the specific regions each being the specific region;calculating combinations of compositing the object image in one or more of the ...
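A minimal sketch of the compositing step described above, assuming the specific region is available as a boolean mask derived from the annotated image and that the (already scaled) object patch fits inside the frame; the bounds handling and label convention are assumptions, not the patent's.

```python
import numpy as np

def composite_at_center(camera_img, annotated_img, region_mask, object_img, object_label):
    """Paste an object patch at the centre of a specific region on both images.

    camera_img    : H x W x 3 camera image
    annotated_img : H x W label image aligned with the camera image
    region_mask   : boolean H x W mask of the specific region
    object_img    : h x w x 3 object patch, assumed already scaled to fit the region
    object_label  : label value written into the annotated image for the pasted object
    """
    ys, xs = np.nonzero(region_mask)
    cy, cx = int(ys.mean()), int(xs.mean())            # centre coordinate of the region
    h, w = object_img.shape[:2]
    top, left = cy - h // 2, cx - w // 2               # patch assumed to stay in bounds
    camera_img[top:top + h, left:left + w] = object_img
    annotated_img[top:top + h, left:left + w] = object_label
    return camera_img, annotated_img
```

Because the same paste is applied to both arrays, the camera image and its annotation stay consistent, which is the point of generating training data this way.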

Подробнее
01-05-2014 дата публикации

COMPOSITION DETERMINATION DEVICE, COMPOSITION DETERMINATION METHOD, AND PROGRAM

Номер: US20140119601A1
Автор: Yoshizumi Shingo
Принадлежит: SONY CORPORATION

A composition determination device includes: a subject detection unit configured to detect a subject in an image based on acquired image data; an actual subject size detection unit configured to detect the actual size which can be viewed as being equivalent to actual measurements, for each subject detected by the subject detection unit; a subject distinguishing unit configured to distinguish relevant subjects from subjects detected by the subject detection unit, based on determination regarding whether or not the actual size detected by the actual subject size detection unit is an appropriate value corresponding to a relevant subject; and a composition determination unit configured to determine a composition with only relevant subjects, distinguished by the subject distinguishing unit, as objects. 1. (canceled) 2. An image processing apparatus comprising: circuitry configured to detect a subject in an image based on acquired image data, and to determine a predetermined attribute relating to the difference in size of each detected subject, in actual measurements; detect a subject distance for each subject detected by the circuitry; detect an in-image size which is the size of the subject in said image, for each subject detected by the circuitry; and detect an actual size using at least said subject distance and said in-image size. 3. The image processing apparatus according to claim 2, the circuitry being further configured to distinguish relevant subjects from subjects detected by the circuitry, based on a determination regarding whether or not the actual size detected by the circuitry is an appropriate value corresponding to a relevant subject after having been corrected using a coefficient corresponding to said determined attribute. 4. The image processing apparatus according to claim 2, the circuitry being further configured to determine a composition with subjects, distinguished by the circuitry, as objects. 5. The image processing apparatus according to ...
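Claim 2's "actual size from subject distance and in-image size" can be approximated with the standard pinhole relation. A sketch under that assumption; the pixel pitch, lens, expected head size and the example numbers are illustrative, not from the patent.

```python
def actual_size_m(in_image_size_px, subject_distance_m, focal_length_mm, pixel_pitch_um):
    """Approximate real-world subject size via the pinhole relation
    actual_size / distance = size_on_sensor / focal_length."""
    size_on_sensor_mm = in_image_size_px * pixel_pitch_um / 1000.0
    return subject_distance_m * size_on_sensor_mm / focal_length_mm

def is_relevant(actual_size, expected=0.24, tolerance=0.08):
    """Crude relevance test: keep subjects whose actual size is plausible for a head."""
    return abs(actual_size - expected) <= tolerance

# Example: a 3000 px tall subject on a 2 um sensor, 2 m away, 50 mm lens -> 0.24 m
size = actual_size_m(3000, 2.0, 50.0, 2.0)
relevant = is_relevant(size)
```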

Подробнее
30-01-2020 дата публикации

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM TO ESTIMATE REFLECTION CHARACTERISTIC OF OBJECT

Номер: US20200034652A1
Автор: Inoshita Chika
Принадлежит:

An image processing apparatus includes first and second acquisition units, a determination unit, and an estimation unit. The first acquisition unit is configured to acquire shape information of an object. The second acquisition unit is configured to acquire a plurality of pieces of image data. The determination unit is configured to determine a pixel position corresponding to a position at which an orientation of a surface is the same as or similar to an orientation of a surface at a position of interest on the object, as a pixel position for estimating a reflection characteristic of the object at the position of interest. The estimation unit is configured to estimate the reflection characteristic of the object at the position of interest by using a pixel value at the pixel position determined by the determination unit. 1. An image processing apparatus comprising:a first acquisition unit configured to acquire shape information indicating a shape of a surface of an object;a second acquisition unit configured to acquire a plurality of pieces of image data acquired by imaging the object under a plurality of geometric conditions;a determination unit configured to determine a pixel position corresponding to a position at which an orientation of a surface is the same as or similar to an orientation of a surface at a position of interest on the object, as a pixel position for estimating a reflection characteristic of the object at the position of interest, in a plurality of images indicated by the plurality of pieces of image data, based on the shape information; andan estimation unit configured to estimate the reflection characteristic of the object at the position of interest by using a pixel value at the pixel position determined by the determination unit.2. The image processing apparatus according to claim 1 , wherein the estimation unit estimates the reflection characteristic of the object by fitting a reflection model to the pixel value determined by the ...
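A small sketch of the determination step: given per-pixel surface normals computed from the shape information, collect the pixel positions whose orientation is the same as or similar to the orientation at the position of interest. The normal-map input and the 5-degree tolerance are assumptions for illustration.

```python
import numpy as np

def similar_orientation_pixels(normals, poi, angle_deg=5.0):
    """Pixel positions whose surface orientation is close to that at a position of interest.

    normals   : H x W x 3 unit surface normals derived from the shape information
    poi       : (row, col) position of interest on the object
    angle_deg : tolerance between normals, in degrees (illustrative default)
    """
    n_ref = normals[poi]                                # normal at the position of interest
    cos_tol = np.cos(np.deg2rad(angle_deg))
    cos_sim = np.einsum('ijk,k->ij', normals, n_ref)    # dot product with the reference normal
    rows, cols = np.nonzero(cos_sim >= cos_tol)
    return list(zip(rows.tolist(), cols.tolist()))      # candidate pixels for reflectance fitting
```

The pixel values at the returned positions, taken across the images captured under the different geometric conditions, would then feed the reflection-model fitting of claim 2.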

Подробнее
04-02-2021 дата публикации

METHOD FOR PRESERVING PERCEPTUAL CONSTANCY OF OBJECTS IN IMAGES

Номер: US20210035262A1
Принадлежит: FoVo Technology Limited

A method of modifying a 2D image representing a 3D scene in order to preserve perceptual constancy of objects in the scene the method including the steps: processing an image of a 3D scene to generate an unmodified view of the 3D scene and one or more 3D objects within the scene; selecting one or more objects from within the scene; determining a modified view of the one or more objects; comparing the modified view of the one or more objects with the unmodified view of the one or more objects; interpolating one or more stages between the unmodified view and modified view of the one or more objects; selecting an interpolation for the one or more objects; generating a new 3D scene with the selected interpolated one or more objects; and, projecting and rendering the new 3D scene into a 2D image onto a display. 1. A method of modifying a 2D image representing a 3D scene in order to preserve perceptual constancy of objects in the 3D scene , the method comprising:processing, at a processor, the 2D image of the 3D scene to generate an unmodified view of the 3D scene and of one or more objects within the 3D scene;selecting, at the processor, the one or more objects from within the 3D scene;determining, at the processor, a modified view of the one or more objects;comparing, at the processor, the modified view of the one or more objects with the unmodified view of the one or more objects;interpolating, at the processor, one or more stages between the unmodified view and modified view of the one or more objects, resulting in one or more interpolated stages;selecting, at the processor, a particular interpolated stage for the one or more objects from the one or more interpolated stages;generating, at the processor, a new view of the 3D scene with the selected particular interpolated stage for the one or more objects; and,projecting and rendering, at the processor, the new view of the 3D scene into the 2D image onto a display.2. The method of claim 1 , wherein the unmodified view ...

Подробнее
09-02-2017 дата публикации

OBJECT INGESTION THROUGH CANONICAL SHAPES, SYSTEMS AND METHODS

Номер: US20170039442A1
Принадлежит: NANT HOLDINGS IP, LLC

An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time. 1.-24. (canceled) 25. An object ingestion device comprising: an image sensor; a non-transitory computer readable memory storing object ingestion software instructions; and obtain a digital representation of a scene including an image of a target object captured by the image sensor and a location; determine a context associated with the scene based, at least in part, on the digital representation and the location; identify contextually relevant shape objects in a shape database based on the context; derive a set of edges from the image of the target object; select at least one target shape object from the contextually relevant shape objects based on the set of edges; generate a target object model from the at least one target shape object and portions of the image data associated with the set of edges; create a set of key frame bundles from the target object model as a function of recognition algorithm descriptors and points of view associated with the at least one target shape object; and send the set of key frame bundles to an ...

Подробнее
11-02-2016 дата публикации

DETECTION AND TRACKING OF ITEM FEATURES

Номер: US20160042517A1
Автор: Hunt Shawn
Принадлежит:

Technologies are generally described for detection and tracking of item features. In some examples, features of an object may be initially found through detection of one or more edges and one or more corners of the object from a first perspective. In addition, determination may be made whether the detected edges and corners of the object are also detectable from one or more other perspectives. In response to an affirmative determination, the detected edges and corners of the object may be marked as features to be tracked, for example, in subsequent frames of a camera feed. The perspectives may correspond to distributed locations in a substantially umbrella-shaped formation centered over the object. In other examples, lighting conditions of an environment where the object is being tracked may be programmatically controlled. 1. A method to implement detection and tracking of item features, the method comprising: detecting one or more edges and one or more corners of an item from a first perspective; determining whether the one or more edges and the one or more corners of the item are detectable from at least a second perspective; in response to a determination that the one or more edges and the one or more corners of the item are also detectable from at least the second perspective, marking the one or more edges and the one or more corners of the item as features to be tracked; and tracking the item through at least two captured image frames based on a correlation of the one or more edges and the one or more corners. 2. The method of claim 1, further comprising: determining whether the one or more edges and the one or more corners of the item are detectable from a plurality of perspectives; and selecting the plurality of perspectives from randomly distributed locations in a substantially umbrella-shaped formation centered over the item. 3. (canceled) 4. The method of claim 1, wherein detecting the one or more edges and the one or more corners of the item comprises: detecting the one ...
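A sketch of the cross-perspective verification step (corner half only) with OpenCV, assuming the mapping between the two viewpoints is known, for instance from the geometry of the umbrella-shaped camera formation; the detector parameters and the 5-pixel reprojection gate are illustrative assumptions.

```python
import cv2
import numpy as np

def verified_corners(view_a, view_b, homography_ab, max_corners=50, reproj_px=5.0):
    """Keep only corners of an item that are detectable from two perspectives.

    view_a, view_b : grayscale images of the item from two viewpoints
    homography_ab  : 3x3 matrix mapping view_a pixels to view_b pixels (assumed known)
    """
    corners_a = cv2.goodFeaturesToTrack(view_a, max_corners, 0.01, 10)
    corners_b = cv2.goodFeaturesToTrack(view_b, max_corners, 0.01, 10)
    if corners_a is None or corners_b is None:
        return np.empty((0, 2), np.float32)
    projected = cv2.perspectiveTransform(corners_a, homography_ab).reshape(-1, 2)
    corners_b = corners_b.reshape(-1, 2)
    kept = []
    for pt_a, pt_proj in zip(corners_a.reshape(-1, 2), projected):
        # a corner is marked as a trackable feature only if some corner in the
        # second view lies within a few pixels of its predicted position
        if np.min(np.linalg.norm(corners_b - pt_proj, axis=1)) <= reproj_px:
            kept.append(pt_a)
    return np.array(kept, np.float32)
```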

Подробнее
11-02-2016 дата публикации

METHOD AND APPARATUS FOR DETERMINING A SEQUENCE OF TRANSITIONS

Номер: US20160042528A1
Принадлежит:

An apparatus and a method of determining a sequence of transitions for a varying state of a system, wherein the system is described by a finite number n of states, and wherein a transition from a current state to a next state causes a cost in dependence of a distance that is dependent on a previous state, the current state, and the next state. The method comprises: combining each two consecutive states to generate super states, wherein the cost for a transition from a current super state to a next super state only depends on the current super state and the next super state; in an iterative process, applying a dynamic programming algorithm to the super states in order to determine a minimum accumulated cost for each varying super state and to determine a preceding super state that led to the minimum accumulated cost; and after a final iteration, determining a final super state with the minimum accumulated cost and retrieving the sequence of the preceding super states leading to the final super state with the minimum accumulated cost. 1. A method for determining a sequence of optimal states for a varying state of a system describing a varying margin line in a sequence of images , the margin line being divided into a plurality of segments , wherein for each segment an optimal state out of a finite number of n states is to be determined , each state describing a profile across the margin line , and wherein a transition from a current state in a current segment to a next state in a next segment causes a cost in dependence of a distance that is dependent on a previous state in a preceding segment , the current state , and the next state , the method comprising:combining the states of each two consecutive segments along the margin line into super states; anddetermining an optimal state for each segment by applying a dynamic programming algorithm to the sequence of super states.2. The method according to claim 1 , wherein the dynamic programming algorithm is accelerated ...
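A deliberately naive sketch of the super-state dynamic programme described above: pairs of consecutive states become super states, so a cost that originally depended on three consecutive states depends only on the current and next super state, and a Viterbi-style pass with back-pointers recovers the minimum-cost sequence. The pair_cost callable and the dense loops are illustrative simplifications, kept simple rather than optimized.

```python
import numpy as np

def best_sequence(n_states, n_segments, pair_cost):
    """Minimum-cost state sequence via dynamic programming over super states.

    pair_cost(seg, prev_state, cur_state, next_state) returns the transition cost,
    which in the original problem depends on three consecutive states.
    """
    S = [(a, b) for a in range(n_states) for b in range(n_states)]   # super states
    acc = np.zeros(len(S))                        # accumulated cost per super state
    back = []                                     # back-pointers for retrieval
    for seg in range(1, n_segments - 1):
        new_acc = np.full(len(S), np.inf)
        ptr = np.zeros(len(S), dtype=int)
        for j, (cur, nxt) in enumerate(S):
            for i, (prev, cur2) in enumerate(S):
                if cur2 != cur:                   # consecutive super states must overlap
                    continue
                c = acc[i] + pair_cost(seg, prev, cur, nxt)
                if c < new_acc[j]:
                    new_acc[j], ptr[j] = c, i
        acc, back = new_acc, back + [ptr]
    # final super state with minimum accumulated cost, then walk the pointers back
    j = int(np.argmin(acc))
    seq = [S[j]]
    for ptr in reversed(back):
        j = int(ptr[j])
        seq.append(S[j])
    seq.reverse()
    states = [seq[0][0]] + [pair[1] for pair in seq]   # unfold super states into states
    return states, float(acc.min())
```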

Подробнее
11-02-2016 дата публикации

AUGMENTED REALITY WITH GRAPHICS RENDERING CONTROLLED BY MOBILE DEVICE POSITION

Номер: US20160042569A1
Автор: Evans Dave, Wyld Andrew
Принадлежит:

Systems and methods are provided for rendering graphics in augmented reality software based on the movement of a device in relation to a target object, in order to produce more desired rendering effects. An augmented reality graphic can be both scaled and shifted laterally compared to the target based on a position of the device, and can then be cropped to match the target. Scaling and shifting related to movement parallel to the target can be performed using a first (parallel) function, and scaling and shifting related to movement toward and away from the target can be performed using a second (perpendicular) function. Both functions can be chosen to ensure that an edge of the augmented image is not passed over so as to provide blank space. 1. A method for augmenting a video feed displayed on a mobile device comprising one or more processors , the method comprising performing by the mobile device:receiving a live video feed taken by a camera that is communicably coupled to at least one processor of the mobile device, the live video feed including a target object;receiving a graphic corresponding to the target object, the graphic having boundaries;determining a magnitude and a direction of a lateral offset vector of the mobile device, the lateral offset vector being a measurement of the distance from the center of the target object to the mobile device in a direction parallel to the plane of the target object;determining, based on the magnitude of the lateral offset vector, a shift magnitude;determining, based on the direction of the lateral offset vector, a shift direction, wherein the shift direction is opposite the direction of the lateral offset vector;determining a display position, wherein the display position is a position on the graphic, and wherein the display position is separated from a center point on the graphic by the shift magnitude in the shift direction;determining a portion of the graphic to be displayed, wherein the portion is located at the ...

Подробнее
09-02-2017 дата публикации

METHOD AND SYSTEM OF PLANAR SURFACE DETECTION FOR IMAGE PROCESSING

Номер: US20170039731A1
Принадлежит:

A system, article, and method of planar surface detection for image processing. 1. A computer-implemented method of planar surface detection for image processing, comprising: obtaining depth image data having three dimensional coordinates for multiple pixels wherein each pixel forms a point in a depth image; selecting sample points in the depth image; generating a plane hypothesis for each one of multiple individual sample points by using multiple points; and performing voting to determine which pixels have content on the image that is likely to exist on at least one of the plane hypotheses. 2. The method of claim 1, wherein selecting the sample points comprises selecting sample points that are spaced from each other on the image. 3. The method of claim 1, wherein selecting the sample points comprises selecting sample points with uniform spacing from sample point to sample point. 4. The method of claim 1, wherein selecting the sample points comprises selecting sample points that are spaced to form a horizontal and vertical array throughout an entire image. 5. The method of claim 1, wherein the sample points are approximately 5 to 25 pixels apart from each other. 6. The method of claim 1, wherein the spacing of the sample points from each other is selected based, at least in part, on the minimum depth a camera providing the image can sense. 7. The method of claim 1, wherein the spacing of the sample points from each other is selected based, at least in part, on the average depth detected in the image. 8. The method of claim 1, wherein the spacing of the sample points from each other is selected based, at least in part, on the focal length of the camera forming the image. 9. The method of claim 1, wherein the spacing of the sample points from each other is selected based on which application is to use the plane hypothesis to augment the image. 10. The method of claim 1, comprising determining the plane hypotheses without testing a range of possible values for the parameters (a, b, c ...
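A sketch of the hypothesis-and-vote core: each sparse sample point spans a plane with two nearby points, and every depth pixel votes for the hypotheses it lies close to. The neighbour offsets and the 1 cm inlier tolerance are illustrative assumptions.

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    """Plane (unit normal n, offset d) through three 3-D points, so that n . x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None                                    # degenerate (collinear) sample
    n = n / norm
    return n, -float(np.dot(n, p0))

def vote_planes(points, sample_idx, neighbour_offsets, inlier_tol=0.01):
    """Generate one plane hypothesis per sample point and let every pixel vote.

    points            : H x W x 3 array of 3-D coordinates from the depth image
    sample_idx        : list of (row, col) sample points on a sparse grid
    neighbour_offsets : two (dr, dc) offsets used with each sample to span a plane
    """
    H, W, _ = points.shape
    flat = points.reshape(-1, 3)
    (dr1, dc1), (dr2, dc2) = neighbour_offsets
    results = []
    for r, c in sample_idx:
        if not (0 <= r + dr1 < H and 0 <= c + dc1 < W and
                0 <= r + dr2 < H and 0 <= c + dc2 < W):
            continue
        hyp = plane_from_points(points[r, c], points[r + dr1, c + dc1],
                                points[r + dr2, c + dc2])
        if hyp is None:
            continue
        n, d = hyp
        dist = np.abs(flat @ n + d).reshape(H, W)      # point-to-plane distance
        results.append(((n, d), dist <= inlier_tol))   # pixels voting for this plane
    return results
```

Hypotheses whose vote maps cover many pixels correspond to detected planar surfaces; the plane parameters come directly from the sampled points rather than from a search over a range of parameter values, in the spirit of claim 10.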

Подробнее
24-02-2022 дата публикации

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR EXTRACTING AN IRRADIATION FIELD OF A RADIOGRAPH

Номер: US20220058423A1
Автор: Kobayashi Tsuyoshi
Принадлежит:

An image processing apparatus configured to extract an irradiation field from an image obtained through radiation imaging, comprises: an inference unit configured to obtain an irradiation field candidate in the image based on inference processing; a contour extracting unit configured to extract a contour of the irradiation field based on contour extraction processing performed on the irradiation field candidate; and a field extracting unit configured to extract the irradiation field based on the contour. 1. An image processing apparatus configured to extract an irradiation field from an image obtained through radiation imaging , comprising:an inference unit configured to obtain an irradiation field candidate in the image based on inference processing;a contour extracting unit configured to extract a contour of the irradiation field based on contour extraction processing performed on the irradiation field candidate; anda field extracting unit configured to extract the irradiation field based on the contour.221.-. (canceled) This application is a continuation of U.S. patent application Ser. No. 16/534,512, filed on Aug. 7, 2019, which claims the benefit of and priority to Japanese Patent Application No. 2018-152722, filed on Aug. 14, 2018, each of which is hereby incorporated by reference herein in their entirety.The present invention relates to an image processing apparatus, an image processing method, and a storage medium for extracting an irradiation field of a radiograph.In recent years, radiation imaging apparatuses have been widely used in medical settings, and radiographs are obtained as digital signals, subjected to image processing, and then displayed by a display apparatus and used to make a diagnosis.In radiation imaging, irradiation of a region other than a field of interest (hereinafter referred to as an “irradiation field”) that is necessary for diagnosis is usually prevented by narrowing down the irradiation field using a collimator in order to suppress ...

Подробнее
18-02-2021 дата публикации

METHOD AND SYSTEM FOR PRECISELY POSITIONING COLLAPSED AREA OF HIGH SLOPE

Номер: US20210048523A1
Принадлежит:

The present disclosure provides a method and a system for precise location of a high slope collapse area. Firstly, the slope images in a long time series are obtained, the slope images in the long time series are composed into a two-dimensional slope deformation graph, and an area with the maximum deformation in the two-dimensional slope deformation graph is selected as a deformation area. Then, the deformation area is segmented by straight line, and the deformation region obtained by straight line segmentation is displayed in overlapping way in the slope images of long time series, and the region corresponding to the connecting line with the largest change range is selected as the monitoring line area from the overlapping image. Finally, the monitoring points are selected from the monitoring line area to determine the location of the high slope collapse area 1. A method for precise location of a high slope collapse area , wherein the precise location method comprises the following steps:obtaining slope images in a long time series by using a ground-based synthetic aperture radar;composing the slope images in the long time series into a two-dimensional slope deformation graph by using a false color location method;selecting an area with the maximum deformation in the two-dimensional slope deformation graph as a deformed area;conducting line partitioning on a deformed area in a slope image at the first time point in the slope images in the long time series, and setting multiple reference points on each straight line;conducting line partitioning on deformed areas in slope images in a long time series, comprising slope images in all time points after the first time point, of the slope images in the long time series through line connection based on the multiple reference points of each straight line, to obtain a corresponding connecting line of each straight line in a slope image in each time series that is after a first time series;displaying deformed areas, obtained ...

Подробнее
06-02-2020 дата публикации

METHOD AND DEVICE FOR DETECTING OBJECT STACKING STATE AND INTELLIGENT SHELF

Номер: US20200043192A1
Принадлежит:

A method and a device for detecting an object stacking state, and an intelligent shelf are disclosed. The method comprises capturing a color image and a depth image that are aligned with each other above a reference plane in which the object is located, identifying the object and an area occupied by the object in the color image, converting the depth image into a height map relative to the reference plane, determining a reference height of the object based on the height map and the area occupied by the object, acquiring an actual height of the object based on the identified object, and comparing the reference height of the object with the actual height of the object and judging a stacking state of the object based on a result of the comparing. 1. A method for detecting a stacking state of an object , comprising:capturing a color image and a depth image that are aligned with each other above a reference plane in which the object is located;identifying the object and an area occupied by the object in the color image;converting the depth image into a height map relative to the reference plane;determining a reference height of the object based on the height map and the area occupied by the object;acquiring an actual height of the object based on the object that was identified; andcomparing the reference height of the object with the actual height of the object and judging the stacking state of the object based on a result of the comparing.2. The method according to claim 1 , wherein the capturing the color image and the depth image that are aligned with each other above the reference plane in which the object is located comprises at least one of:capturing the depth image by a passive ranging; orcapturing the depth image by an active ranging.3. The method according to claim 2 , wherein the capturing the depth image by the passive ranging comprises:capturing the depth image by a binocular distance measurement.4. The method according to claim 1 , wherein the converting the ...
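A minimal sketch of the height comparison, assuming a camera looking down at the reference plane so that height is simply the plane depth minus the measured depth; the median aggregation and the 1 cm tolerance are illustrative choices.

```python
import numpy as np

def is_stacked(depth_map, object_mask, plane_depth, actual_height, tol=0.01):
    """Judge whether an identified object is stacked on top of another one.

    depth_map     : H x W depths in the same units as plane_depth and actual_height
    object_mask   : boolean H x W area occupied by the object in the aligned colour image
    plane_depth   : depth of the reference plane (shelf surface) from the camera
    actual_height : known height of the identified object type
    """
    height_map = plane_depth - depth_map                        # height above the plane
    reference_height = float(np.median(height_map[object_mask]))  # robust top height
    # if the measured top sits clearly higher than one object, something is underneath
    return reference_height > actual_height + tol, reference_height
```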

Подробнее
15-02-2018 дата публикации

ELEMENT PROVIDED WITH PORTION FOR POSITION DETERMINATION AND MEASURING METHOD

Номер: US20180045603A1
Принадлежит:

A method for measuring a position of a target surface provided with portions for position determination thereon, wherein a diffuse reflectance of the target surface is 0.1% or less, and a diffuse reflectance of the portions for position determination is 5% or more, and wherein the target surface is configured such that a tangential plane at any point on the target surface where each of the portions for position determination is installed forms an arbitrary angle between 15 degrees and 75 degrees inclusive with a certain direction, the method including the steps of illuminating the target surface with parallel light in the certain direction; determining positions of border lines of the plural portions for position determination from an image of the target surface; and determining the position of the target surface from the positions of the border lines of the plural portions for position determination. 1. A method for measuring a position of a target surface provided with portions for position determination thereon , wherein a diffuse reflectance of the target surface is 0.1% or less , and a diffuse reflectance of the portions for position determination is 5% or more , andwherein the target surface is configured such that a normal to a tangential plane at any point on the target surface where each of the portions for position determination is installed forms an arbitrary angle between 15 degrees and 75 degrees inclusive with a certain direction,the method including the steps of:illuminating the target surface with parallel light in the certain direction;determining positions of border lines of the plural portions for position determination from an image of the target surface; anddetermining the position of the target surface from the positions of the border lines of the plural portions for position determination.2. A method according to claim 1 , wherein the target surface is a surface of an element provided with a first plane and a second plane forming an angle ...

Подробнее
18-02-2021 дата публикации

PANORAMIC IMAGE CONSTRUCTION BASED ON IMAGES CAPTURED BY ROTATING IMAGER

Номер: US20210049738A1
Принадлежит:

Techniques are disclosed for panoramic image construction based on images captured by rotating imagers. In one example, a method includes receiving a first sequence of images associated with a scene and captured during continuous rotation of an image sensor. Each image of the first sequence has a portion that overlaps with another image of the first sequence. The method further includes generating a first panoramic image. The generating includes processing a second sequence of images based on a point-spread function to mitigate blur associated with the continuous rotation to obtain a deblurred sequence of images, and processing the deblurred sequence based on a noise power spectral density to obtain a denoised sequence of images. The point-spread function is associated with the image sensor's rotation speed. The second sequence is based on the first sequence. The first panoramic image is based on the denoised sequence. 1. A method , comprising:receiving a first sequence of images captured during continuous rotation of an image sensor, wherein the first sequence is associated with a scene, and wherein each image of the first sequence has a portion that overlaps with another image of the first sequence; and processing a second sequence of images based on a first point-spread function (PSF) to mitigate blur associated with the continuous rotation of the image sensor to obtain a deblurred sequence of images, wherein the first PSF is associated with a rotation speed of the image sensor, and wherein the second sequence is based on the first sequence; and', 'processing the deblurred sequence of images based on at least one noise power spectral density (PSD) to obtain a denoised sequence of images, wherein the first panoramic image is based on the denoised sequence of images., 'generating a first panoramic image, wherein the generating comprises2. The method of claim 1 , further comprising:determining shear in the first panoramic image, wherein the shear is based at least ...
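The deblur-then-denoise steps can be approximated by a single frequency-domain Wiener filter, which uses both the rotation-speed-dependent PSF and a noise-to-signal power ratio; the abstract treats the two stages separately, so this combined filter is only a stand-in sketch.

```python
import numpy as np

def wiener_restore(image, psf, noise_psd, signal_psd=1.0):
    """Wiener filtering: deblur with a motion PSF and suppress noise in one pass.

    image     : 2-D frame from the rotating imager
    psf       : point-spread function for the current rotation speed, same shape as
                the image, with its peak wrapped to index (0, 0)
    noise_psd : scalar or 2-D noise power spectral density estimate
    """
    H = np.fft.fft2(psf)                       # transfer function of the motion blur
    G = np.fft.fft2(image)
    nsr = noise_psd / signal_psd               # noise-to-signal power ratio
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener deconvolution filter
    return np.real(np.fft.ifft2(W * G))
```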

Подробнее
18-02-2021 дата публикации

METHOD AND SYSTEM OF ANTENNA MEASUREMENT FOR MOBILE COMMUNICATION BASE STATION

Номер: US20210049782A1
Принадлежит: WUYI UNIVERSITY

A method and system for mobile communication base station antenna measurement is disclosed. The method comprises steps of: acquiring a set of images containing antennas of a base station; processing the set of images with a model based on instance segmentation network, and generating visualized images corresponding to the set of images of antennas; calculating, from the visualized images, the quantity of antennas of the base station and separating data for each antenna; measuring parameters of each antenna by data fitting. The system comprises a processor and a memory storing program instructions thereon, the program instructions executable by the processor to cause the system to perform the steps of the method. 1. A method for mobile communication base station antenna measurement , comprising:acquiring a set of images containing antennas of a base station;processing the set of images with a model based on instance segmentation network, and generating visualized images corresponding to the set of images of antennas;calculating, from the visualized images, quantity of antennas of the base station and separating data for each antenna;measuring parameters of each antenna by data fitting.2. The method of claim 1 , wherein the acquiring of the set of images comprises:capturing a video data of the antennas by at least one UAV flying around a base station;framing the video data into the set of images, at a reduced frame rate to the video data.3. The method of claim 1 , wherein the generating of visualized images comprises:detecting all antennas of the base station in the set of images;segmenting each antenna with individual antenna mask.4. The method of claim 3 , wherein the calculating and separating comprises: utilizing pixel coordinates and a threshold to measure the quantity of antennas and separate data for each antenna.5. The method of claim 4 , wherein the measuring of parameters comprises:performing data fitting on at least one of following: antenna down-tilt angle ...

Подробнее
18-02-2016 дата публикации

ASSEMBLY COMPRISING A RADAR AND AN IMAGING ELEMENT

Номер: US20160048975A9
Принадлежит: TRACKMAN A/S

An assembly comprising a radar and a camera for both deriving data relating to a golf ball and a golf club at launch, radar data relating to the ball and club being illustrated in an image provided by the camera. The data illustrated may be trajectories of the ball/club/club head, directions and/or angles, such as an angle of a face of the golf club striking the ball, the lie angle of the club head or the like. An assembly of this type may also be used for defining an angle or direction in the image and rotating e.g. an image of the golfer to have the determined direction or angle coincide with a predetermined angle/direction in order to be able to compare different images. 1. An assembly comprising a radar and an imaging device both adapted to provide information relating to a plurality of objects, the assembly further comprising a controller adapted to: receive radar data from the radar and an image from the imaging device; determine, from the radar data, movement and/or position data of objects imaged by the imaging device and positioned in a field of view of the radar, the movement/position data describing positions, directions, trajectories or planes of movement of the objects; and provide data relating to the movement/position data in the image, the data relating to the movement/position illustrating the positions/trajectories/planes/directions in the image. 2. An assembly according to claim 1, wherein the controller is adapted to identify, from the radar data and/or the image, a position of impact of two of the objects and illustrate: for a first of the objects at least one of: a trajectory, a 3D launch vector and spin after impact; and for a second of the objects at least one of trajectory, direction/angle of movement and 3D impact vector at impact. 3. An assembly according to claim 2, adapted to provide information relating to an impact between a golf club and a golf ball, the golf ball being the first of the objects and the golf club ...

Подробнее
15-02-2018 дата публикации

METHODS AND SYSTEMS FOR ENHANCING USER LIVENESS DETECTION

Номер: US20180046852A1
Автор: Ionita Mircea
Принадлежит:

A method for enhancing user liveness detection is provided that includes calculating, by a computing device, parameters for each frame in a video of captured face biometric data. Each parameter results from movement of at least one of the computing device and the biometric data during capture of the biometric data. The method also includes creating a signal for each parameter and calculating a similarity score. The similarity score indicates the similarity between the signals. Moreover, the method includes determining the user is live when the similarity score is at least equal to a threshold score. 1. A method for enhancing user liveness detection comprising: calculating, by a computing device, parameters for frames included in a video of captured face biometric data, each parameter resulting from movement of at least one of the computing device and the biometric data during capture of the biometric data; creating a signal for each parameter; calculating a similarity score, the similarity score indicating the similarity between the signals; and determining the user is live when the similarity score is at least equal to a threshold score. 2. A method for enhancing user liveness detection in accordance with claim 1, said calculating parameters step further comprising calculating, for each frame, an angle of light illuminating the biometric data. 3. A method for enhancing user liveness detection in accordance with claim 2, said calculating parameters step further comprising calculating a second angle for each frame, the second angle being between a plane defined by a front face of the computing device and a vertical axis. 4. A method for enhancing user liveness detection in accordance with claim 1, said calculating parameters step further comprising: calculating a perpendicular distance for each point of interest within a field of view of the computing device; and calculating an angle for each frame, the angle being between a plane defined by a ...
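The abstract does not specify how the similarity score between the per-frame parameter signals is computed; a Pearson correlation is one natural stand-in. A sketch, with the 0.7 threshold purely illustrative:

```python
import numpy as np

def liveness_similarity(signal_a, signal_b, threshold=0.7):
    """Compare two per-frame parameter signals and decide liveness.

    signal_a, signal_b : 1-D arrays with one value per video frame, e.g. device tilt
                         angle and illumination angle on the face
    Returns (similarity score in [-1, 1], live decision).
    """
    a = (signal_a - np.mean(signal_a)) / (np.std(signal_a) + 1e-9)
    b = (signal_b - np.mean(signal_b)) / (np.std(signal_b) + 1e-9)
    score = float(np.mean(a * b))          # Pearson correlation of the two signals
    return score, score >= threshold
```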

Подробнее
16-02-2017 дата публикации

METHOD FOR DETECTING HORIZONTAL AND GRAVITY DIRECTIONS OF AN IMAGE

Номер: US20170046855A1
Принадлежит:

The disclosure relates to a method for detection of the horizontal and gravity directions of an image, the method comprising: selecting equidistant sampling points in an image at an interval of the radius of the sampling circle of an attention focus detector; placing the center of the sampling circle of the attention focus detector on each of the sampling points, and using the attention focus detector to acquire attention focus coordinates and the corresponding significant orientation angle, and all the attention focus coordinates and the corresponding significant orientation angles constitute a set Ω; using an orientation perceptron to determine a local orientation angle and a weight at the attention focus according to the gray image information, and generating a local orientation function; obtaining a sum of each of the local orientation functions as an image direction function; obtaining a function M(β), and further obtaining the horizontal and gravity identification angles. 2. The method for detecting the horizontal and gravity directions of an image according to claim 1, characterized in that, in said step S, the diameter of the sampling circle of the attention focus detector is 0.06 times the short side length of the image. The present invention relates to the field of image processing, in particular to a method for detecting the horizontal and gravity directions of an image. Detection of the horizontal and gravity directions of an image can be used in vehicle rollover warning, image tilt detection and so on, wherein the image tilt detection can be used in such applications as automatic scanning of images and image correction. In the field of vehicle control, rollover prevention is an important aspect. The existing vision-based methods usually employ specific reference objects or are based on prior knowledge of the known environments, so they are suitable for highly structured road environments, but these methods lack universality and adaptability ...

Подробнее
15-02-2018 дата публикации

ADAPTIVE BOUNDING BOX MERGE METHOD IN BLOB ANALYSIS FOR VIDEO ANALYTICS

Номер: US20180047193A1
Принадлежит:

Provided are methods, apparatuses, and computer-readable medium for content-adaptive bounding box merging. A system using content-adaptive bounding box merging can adapt its merging criteria according to the objects typically present in a scene. When two bounding boxes overlap, the content-adaptive merge engine can consider the overlap ratio, and compare the merged bounding box against a minimum object size. The minimum object size can be adapted to the size of the blobs detected in the scene. When two bounding boxes do not overlap, the system can consider the horizontal and vertical distances between the bounding boxes. The system can further compare the distances against content-adaptive thresholds. Using a content-adaptive bounding box merge engine, a video content analysis system may be able to more accurately merge (or not merge) bounding boxes and their associated blobs. 1. A method for merging bounding boxes , comprising:determining a candidate merged bounding box for a first bounding box and a second bounding box, wherein the first bounding box is associated with a first blob, wherein the first blob includes pixels of at least a portion of a first foreground object in a video frame, wherein the second bounding box is associated with a second blob, wherein the second blob includes pixels of at least a portion of a second foreground object in the video frame, and wherein the candidate merged bounding box includes the first blob and the second blob;determining a size of the candidate merged bounding box;comparing the size of the candidate merged bounding box against a size threshold; anddetermining to merge the first bounding box and the second bounding box based on the size of the candidate merged bounding box being less than the size threshold.2. The method of claim 1 , further comprising:determining that the first bounding box and the second bounding box have an intersecting region and a non-intersecting region;determining a ratio between an area of the non- ...
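Claim 1 reduces to forming the candidate merged box and accepting it only if it stays under a content-adaptive size threshold. A minimal sketch; the box convention mirrors the claim's size test, everything else is illustrative.

```python
def maybe_merge(box_a, box_b, size_threshold):
    """Decide whether to merge two blob bounding boxes using the size test of claim 1.

    Boxes are (x_min, y_min, x_max, y_max); size_threshold is the content-adaptive
    bound (e.g. derived from the minimum object size seen in the scene) that the
    candidate merged box must stay under.
    Returns the merged box, or None when the boxes should stay separate.
    """
    merged = (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
              max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
    merged_area = (merged[2] - merged[0]) * (merged[3] - merged[1])
    return merged if merged_area < size_threshold else None
```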

Подробнее
22-02-2018 дата публикации

ULTRASONIC DIAGNOSIS OF CARDIAC PERFORMANCE BY SINGLE DEGREE OF FREEDOM CHAMBER SEGMENTATION

Номер: US20180049718A1
Принадлежит:

An ultrasonic diagnostic imaging system has a user control by which a user positions the user's selection of a heart chamber border in relation to two machine-drawn heart chamber tracings. The user's border is positioned by a single degree of freedom control which positions the border as a function of a single user-determined value. This overcomes the vagaries of machine-drawn borders and their mixed acceptance by clinicians, who can now create repeatably-drawn borders and exchange the control value for use by others to obtain the same results. 2. (canceled)3. The ultrasonic diagnostic imaging system of claim 1 , wherein the border detection processor is arranged to identify in the cardiac image data an endocardium or a myocardium-blood pool interface as the inner boundary claim 1 , andan epicardium or an interface between the trabeculaeted myocardium and the compacted myocardium as the outer boundary.4. The ultrasonic diagnostic imaging system of claim 1 , wherein the user control further comprises a slider claim 1 , a knob claim 1 , a switch claim 1 , a trackball claim 1 , a rocker control claim 1 , toggle buttons claim 1 , a list box claim 1 , or a numerical entry box.5. The ultrasonic diagnostic imaging system of claim 4 , wherein the user control further comprises a softkey control or a physical control.6. (canceled)7. The ultrasonic diagnostic imaging system of claim 1 , wherein the source of cardiac image data further comprises a memory device containing two-dimensional or three-dimensional cardiac images.8. The ultrasonic diagnostic imaging system of claim 7 , wherein the source of cardiac image data is adapted to provide the border detection processor with two-dimensional or three-dimensional cardiac images including a view of a left ventricle.9. The ultrasonic diagnostic imaging system of claim 1 , wherein the border detection processor further comprises a semi-automatic heart boundary image processor.10. The ultrasonic diagnostic imaging system of claim 9 ...

Подробнее
03-03-2022 дата публикации

DYNAMIC MEASUREMENT OPTIMIZATION BASED ON IMAGE QUALITY

Номер: US20220067954A1
Принадлежит:

A method for sizing of an object to be used by a user based upon a user image, including: receiving a user image; determining user features from the user image using a first machine learning model; calculating a set of image quality variables based upon the features from the first machine learning model and user image parameters; determining an accuracy rating based upon the set of image quality variables; determining if the accuracy of the user image is acceptable; determining ruleset adjustments using a second machine learning model when the accuracy of the user image is unacceptable; adjusting a default ruleset based upon the ruleset adjustments; and determining an object size by applying the adjusted ruleset to user features. 1. A method for sizing of an object to be used by a user based upon a user image, comprising: receiving a user image; determining features from the user image using a first machine learning model; calculating a set of image quality variables based upon the features from the first machine learning model and user image parameters; determining an accuracy rating based upon the set of image quality variables; determining if the accuracy of the user image is acceptable; determining ruleset adjustments using a second machine learning model when the accuracy of the user image is unacceptable; adjusting a default ruleset based upon the ruleset adjustments; and determining an object size by applying the adjusted ruleset to user features. 2. The method of claim 1, wherein determining user features from the user image includes features from the user's face, user's foot, user's hand, and/or user's joint. 3. The method of claim 1, wherein a set of image quality variables includes one of face-to-scene ratios, unconstrained pose, pixel density, aspect ratio, inter-feature distances, and image angle. 4. The method of claim 1, further comprising determining the object size by applying the default ...

Подробнее
25-02-2016 дата публикации

DETERMINING DISTANCE BETWEEN AN OBJECT AND A CAPTURE DEVICE BASED ON CAPTURED IMAGE DATA

Номер: US20160055395A1
Принадлежит: KOFAX, INC.

In various embodiments, methods, systems, and computer program products for determining distance between an object and a capture device are disclosed. The distance determination techniques are based on image data captured by the capture device, where the image data represent the object. These techniques improve the function of capture devices such as mobile phones by enabling determination of distance using a single lens capture device, and based on intrinsic parameters of the capture device, such as focal length and scaling factor(s), in preferred approaches. In some approaches, the distance estimation may be based in part on a priori knowledge regarding size of the object represented in the image data. Distance determination may be based on a homography transform and/or reference image data representing the object, a same type or similar type of object, in more approaches. 1. A method, comprising: determining a distance between an object and a capture device based on image data captured by the capture device, wherein the image data represent the object. 2. The method as recited in claim 1, wherein determining the distance comprises estimating a normalized homography matrix H̃. 3. The method as recited in claim 1, comprising determining a capture device focal length using an API call to the capture device. 4. The method of claim 1, wherein the determining is based at least in part on a size of the object. 5. The method as recited in claim 1, wherein the determining is not based on a size of the object. 6. The method as recited in claim 1, wherein the determining is based on a translation vector of the object relative to the capture device. 7. The method as recited in claim 1, wherein the determining is based at least in part on one or more intrinsic capture device parameters. 8. The method as recited in claim 7, wherein the determining is based on a capture device intrinsic parameter matrix A. 10. The method as recited in claim 7, comprising ...
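A sketch of the simplest variant, the one that uses a priori knowledge of the object size (claim 4) together with the intrinsic parameters of claim 7; the homography-based variant is not shown. The example numbers are illustrative; 85.6 mm is the ISO/IEC 7810 ID-1 card width.

```python
def object_distance_m(real_size_m, size_px, focal_length_mm, pixel_pitch_um):
    """Distance from camera to object via the pinhole model and a known object size.

    real_size_m     : a priori physical size of the object (e.g. the long edge of an
                      ID-1 card, about 0.0856 m)
    size_px         : the same edge measured in the captured image, in pixels
    focal_length_mm : capture device focal length (an intrinsic parameter)
    pixel_pitch_um  : sensor pixel pitch (the per-axis scaling factor)
    """
    focal_length_px = focal_length_mm * 1000.0 / pixel_pitch_um
    return real_size_m * focal_length_px / size_px

# Example: an 856-pixel card edge, 4.2 mm lens, 1.4 um pixels -> about 0.3 m
d = object_distance_m(0.0856, 856, 4.2, 1.4)
```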

Подробнее
25-02-2021 дата публикации

VIAL CONTENT DETECTION USING ILLUMINATED BACKGROUND PATTERN

Номер: US20210056660A1
Автор: Bower Kevin D., CHEN MO
Принадлежит: Novanta Corporation

A machine vision system that uses an imager to capture an optical image of a target object that may contain a liquid. The target object is illuminated by an illumination source positioned oppositely from the imager and a predetermined pattern is positioned between the illumination source and the target object so that the imager will capture optical images of the background pattern through any liquid positioned in the target object. A processor is programmed to analyze captured images to detect any distortions of the pattern that are attributable to the presence of a liquid in the target object. 1. A machine vision system, comprising: an imager for capturing an optical image of a target object; an illumination source positioned oppositely from the imager; a predetermined pattern positioned between the illumination source and the target object; and a processor programmed to evaluate the optical image to determine whether there are any distortions of the predetermined pattern attributable to a liquid positioned in the target object. 2. The system of claim 1, wherein the processor is programmed to determine whether there are any distortions of the predetermined pattern based upon any refraction of light passing through the predetermined pattern and the target object. 3. The system of claim 2, further comprising a plurality of the target objects and wherein the processor is programmed to evaluate the optical image to determine whether there are any distortions of the predetermined pattern in each of the plurality of the target objects. 4. The system of claim 3, wherein the processor is programmed to determine whether there are any distortions of the predetermined pattern using a fast Fourier transform. 5. The system of claim 3, wherein the processor is programmed to determine whether there are any distortions of the predetermined pattern based on whether there is a change in color. 6. The system of claim 3, wherein the pattern comprises a grid. 7. The system of claim 6, wherein the illumination source ...

Подробнее
22-02-2018 дата публикации

Methods, Systems and Apparatus for Segmenting and Dimensioning Objects

Номер: US20180053305A1
Принадлежит:

Methods, systems, and apparatus for segmenting and dimensioning objects are disclosed. An example method disclosed herein includes determining a first sensor of a plurality of sensors toward which a vehicle is moving based on image data generated by the plurality of sensors; designating the first sensor as a reference sensor; combining the image data from the plurality of sensors to generate combined image data representative of the vehicle and an object carried by the vehicle, the combining based on the reference sensor; generating a plurality of clusters based on the combined image data; and identifying a first one of the clusters nearest the reference sensor as the object. 1. A method comprising: determining, using a logic circuit, a first sensor of a plurality of sensors toward which a vehicle is moving based on image data generated by the plurality of sensors, the image data representative of the vehicle and an object carried by the vehicle; designating the first sensor as a reference sensor; combining, using the logic circuit, the image data from the plurality of sensors to generate combined image data representative of the vehicle and the object carried by the vehicle, the combining based on the reference sensor; generating, using the logic circuit, a plurality of clusters based on the combined image data; and identifying, using the logic circuit, a first one of the clusters nearest the reference sensor as the object. 2. A method as defined in claim 1, further comprising segmenting the first one of the clusters from a second one of the clusters by removing the second one of the clusters from the combined image data. 3. A method as defined in claim 2, further comprising dimensioning the first one of the clusters. 4. A method as defined in claim 1, further comprising: identifying a first structure of the vehicle nearer to the reference sensor than other structures of the vehicle; and removing points in the combined image data corresponding to the first structure of the ...

Подробнее
22-02-2018 дата публикации

Spatial Alignment of Inertial Measurement Unit Captured Golf Swing and 3D Human Model For Golf Swing Analysis Using IR Reflective Marker

Номер: US20180053308A1
Принадлежит:

A method for spatial alignment of golf-club inertial measurement data and a three-dimensional human model for golf club swing analysis is provided. The method includes capturing inertial measurement data through an inertial measurement unit (IMU), and sending the inertial measurement data from the IMU to a computing device. The computing device is configured to determine a three-dimensional trajectory in IMU coordinate space, determine in human model coordinate space a three-dimensional trajectory of an infrared marker in a video with the video having depth or depth information, determine a transformation matrix from human model coordinate space to IMU coordinate space, perform spatial alignment of the three-dimensional trajectory and a three-dimensional human model based on the video having depth or depth information, using the transformation matrix, and overlay a projected trajectory onto the three-dimensional human model. 1. A method for spatial alignment of golf-club inertial measurement data and a three-dimensional human model for golf club swing analysis , comprising:capturing inertial measurement data of a golf club swing through an inertial measurement unit (IMU); andsending the inertial measurement data of the golf club swing from the inertial measurement unit to a computing device, so that the computing device determines a three-dimensional trajectory of the golf club swing in a coordinate space of the IMU, determines in human model coordinate space a three-dimensional trajectory of an infrared marker in a video of the golf club swing with the video having depth or depth information, determines a transformation matrix from the human model coordinate space to the IMU coordinate space, performs spatial alignment of the three-dimensional trajectory of the golf club swing and a three-dimensional human model based on the video having depth or depth information, using the transformation matrix, and overlays a projected golf club trajectory onto the three- ...

Подробнее
22-02-2018 дата публикации

Spatial Alignment of M-Tracer and 3-D Human Model For Golf Swing Analysis Using Skeleton

Номер: US20180053309A1
Принадлежит:

A method for spatial alignment of golf-club inertial measurement data and a three-dimensional human skeleton model for golf club swing analysis are provided. The method includes capturing inertial measurement data of a golf club swing through an inertial measurement unit (IMU), and sending the inertial measurement data from the inertial measurement unit to a computing device. The computing device is configured to determine a three-dimensional trajectory in IMU coordinate space, determine in human model coordinate space a three-dimensional trajectory of a plurality of human skeleton points in a video with the video having depth or depth information, determine a transformation matrix from human model coordinate space to IMU coordinate space, and calculate an arm-golf club angle that is based on the inertial measurement data, the transformation matrix, and the three-dimensional trajectory of the plurality of human skeleton points. 1. A method for spatial alignment of golf-club inertial measurement data and a three-dimensional human skeleton model for golf club swing analysis , comprising:capturing inertial measurement data of a golf club swing through an inertial measurement unit (IMU); andsending the inertial measurement data of the golf club swing from the inertial measurement unit to a computing device, so that the computing device determines a three-dimensional trajectory of the golf club swing in a coordinate space of the IMU, determines in human model coordinate space a three-dimensional trajectory of a plurality of human skeleton points in a video of the golf club swing with the video having depth or depth information, determines a transformation matrix from the human model coordinate space to the IMU coordinate space, and calculates an arm-golf club angle that is based on the inertial measurement data of the golf club swing, the transformation matrix, and the three-dimensional trajectory of the plurality of human skeleton points.2. The method of claim 1 , ...

22-02-2018 publication date

METHODS AND SYSTEMS FOR WIREFRAMES OF A STRUCTURE OR ELEMENT OF INTEREST AND WIREFRAMES GENERATED THEREFROM

Number: US20180053347A1
Assignee:

The disclosure relates to systems and processes for generating verified wireframes corresponding to at least part of a structure or element of interest can be generated from 2D images, 3D representations (e.g., a point cloud), or a combination thereof. The wireframe can include one or more features that correspond to a structural aspect of the structure or element of interest. The verification can comprise projecting or overlaying the generated wireframe over selected 2D images and/or a point cloud that incorporates the one or more features. The wireframe can be adjusted by a user and/or a computer to align the 2D images and/or 3D representations thereto, thereby generating a verified wireframe including at least a portion of the structure or element of interest. The verified wireframes can be used to generate wireframe models, measurement information, reports, construction estimates or the like. 1. A method of generating a verified wireframe of a structure or element of interest comprising: 1. is derived from a plurality of overlapping 2D images of the structure or element of interest, wherein the 2D images are generated from a passive image capture device, and incorporates one or more of the structural aspects; and', '2. comprises one or more features that correspond to the one or more structural aspects; and, 'the unverified wireframe, 'a. generating, by a computer or a user, an unverified wireframe corresponding to at least part of a structure or element of interest comprising one or more structural aspects of interest, wherein i. one or more 2D images selected from the plurality of overlapping 2D images, wherein each of the selected 2D images incorporates at least some of the one or more structural aspects; or', 'ii. a point cloud derived from the plurality of overlapping 2D images, wherein the point cloud incorporates at least some of the one or more structural aspects; and, 'b. projecting the unverified wireframe over either or both of indicating, by either ...
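The verification step, projecting the wireframe over a selected 2D image, reduces to a standard pinhole projection of the wireframe vertices; the sketch below assumes known camera intrinsics K and pose (R, t) for the selected image (how these are recovered is not detailed in the abstract):

```python
import numpy as np

def project_wireframe(vertices, edges, K, R, t):
    """Project 3D wireframe vertices (N x 3) into pixel coordinates and return
    one 2D segment per wireframe edge, ready to be drawn over the photo for
    visual verification and manual adjustment."""
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3 x 4 camera matrix
    pts_h = np.hstack([vertices, np.ones((len(vertices), 1))])
    uvw = (P @ pts_h.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                      # perspective divide
    return [(uv[i], uv[j]) for i, j in edges]
```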

23-02-2017 publication date

LIVENESS DETECTION APPARATUS AND LIVENESS DETECTION METHOD

Number: US20170053174A1
Author: FAN Haoqiang, JIA Kai, Yin Qi
Assignee:

A liveness detection apparatus and a liveness detection method are provided. The liveness detection apparatus may comprise: a specific exhibiting device, for exhibiting a specific identification content; an image acquiring device, for acquiring image data of a target object to be recognized during the exhibition of the identification content; a processor, for determining whether there is a reflective region corresponding to the identification content in the acquired image data, determining a regional feature of the reflective region when there is the reflective region, to obtain a determination result, and recognizing whether the target object is a living body based on the determination result. 1. A liveness detection apparatus , comprising:a specific exhibiting device, for exhibiting a specific identification content;an image acquiring device, for acquiring image data of a target object to be recognized during the exhibition of the identification content;a processor, for determining whether there is a reflective region corresponding to the identification content in the acquired image data, determining a regional feature of the reflective region when there is the reflective region, to obtain a determination result, and recognizing whether the target object is a living body based on the determination result.2. The liveness detection apparatus according to claim 1 , wherein claim 1 , the specific exhibiting device is used for exhibiting at least one of a title bar claim 1 , a tool bar and a background region of the liveness detection apparatus as the identification content.3. The liveness detection apparatus according to claim 1 , wherein claim 1 , the specific exhibiting device includes:a sequence generator, for randomly generating a reference sequence; anda display, for applying the reference sequence to the identification content, to adjust a display effect of the identification content.4. The liveness detection apparatus according to claim 3 , wherein claim 3 , ...
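One plausible realisation of the reflective-region test (a sketch only, not the authors' implementation) is to correlate the brightness of a candidate reflection region across frames with the random reference sequence that modulated the displayed identification content; a strong correlation indicates a genuine specular reflection of the screen:

```python
import numpy as np

def reflection_correlation(frames, region, reference_sequence):
    """frames: list of grayscale images; region: (y0, y1, x0, x1) candidate
    reflective area; reference_sequence: randomly generated display values,
    one per frame. Returns a score near 1.0 when the region brightness
    follows the displayed sequence."""
    y0, y1, x0, x1 = region
    obs = np.array([f[y0:y1, x0:x1].mean() for f in frames], float)
    ref = np.asarray(reference_sequence, float)
    obs = (obs - obs.mean()) / (obs.std() + 1e-9)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    return float(np.dot(obs, ref) / len(ref))
```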

23-02-2017 publication date

EDGE GUIDED INTERPOLATION AND SHARPENING

Number: US20170053380A1
Author: McNally Scott
Assignee:

Techniques, methods, and systems for image processing may be provided. The image processing may be provided for upsampling and interpolating images. The upsampling and interpolating may include interpolating the image through at least an edge weight and a spatial weight. In various embodiments, the edge weight and/or the spatial weight may be calculated with a kernel. The kernel may be a kernel with a two dimensional (2D) distribution such as a Gaussian kernel, a Laplacian kernel, or another such statistically based kernel. The image processing may also include refining the upsampled and interpolated image through a refinement weight calculation and/or through back projection. 1. A method comprising:receiving an image, the image comprising a plurality of pixels with each pixel including a pixel value;selecting a pixel to be processed;determining an edge weight and a spatial weight associated with the selected pixel, wherein at least the spatial weight is determined with a spatial kernel; andprocessing the image with at least the edge weight and the spatial weight.2. The method of claim 1 , wherein the spatial kernel is a Gaussian based kernel and/or is based on a two dimensional (2D) distribution.3. The method of claim 1 , wherein the selected pixel is an upsampled pixel claim 1 , the upsampled pixel is the location of a pixel in the upsampled image claim 1 , the processing the image comprises interpolating the image with at least the edge weight and the spatial weight and the method further comprises upsampling the image.4. The method of claim 3 , wherein the determining the edge weight comprises:applying an edge weight kernel to a neighborhood set of the pixels to determine weighted pixel edge values for the pixels of the neighborhood set;determining an edge indicator value using at least the weighted pixel edge values; anddetermining the edge weight using at least the edge indicator value.5. The method of claim 3 , wherein the spatial kernel comprises an N×N ...
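The per-pixel weighting can be illustrated with a joint spatial/edge kernel in the spirit of the abstract (a Gaussian spatial kernel times an intensity-difference edge term); the kernel size and sigmas below are placeholders:

```python
import numpy as np

def gaussian_spatial_kernel(n, sigma):
    """n x n spatial kernel drawn from a 2D Gaussian distribution."""
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def interpolate_pixel(neighborhood, sigma_spatial=1.0, sigma_edge=10.0):
    """Weighted average of an n x n neighborhood around the pixel being
    interpolated: spatial weight from the Gaussian kernel, edge weight
    decaying with intensity difference from the centre so edges stay sharp."""
    neighborhood = np.asarray(neighborhood, float)
    n = neighborhood.shape[0]
    center = neighborhood[n // 2, n // 2]
    w = gaussian_spatial_kernel(n, sigma_spatial) * \
        np.exp(-((neighborhood - center) ** 2) / (2 * sigma_edge ** 2))
    return float((w * neighborhood).sum() / w.sum())
```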

23-02-2017 publication date

METHODS AND SYSTEMS FOR PROGRAMATICALLY IDENTIFYING SHAPES IN GRAPHICAL ARTIFACTS

Number: US20170053420A1
Assignee: HONEYWELL INTERNATIONAL INC.

Methods and systems are provided for processing a graphical artifact. In one embodiment, a method includes: receiving, by a processor, a graphical artifact having at least one unknown graphical element; determining, by the processor, graphical features of the unknown graphical element; computing, by the processor, a plurality of similarity scores based on the features of the unknown graphical element and features of a plurality of known graphical elements; and storing data associated with the unknown graphical element with data associated with a known graphical element based on the plurality of similarity scores. 1. A method of processing a graphical artifact , comprising:receiving, by a processor, a graphical artifact having at least one unknown graphical element;determining, by the processor, graphical features of the unknown graphical element;computing, by the processor, a plurality of similarity scores based on the features of the unknown graphical element and features of a plurality of known graphical elements; andstoring data associated with the unknown graphical element with data associated with a known graphical element based on the plurality of similarity scores.2. The method of claim 1 , wherein the graphical features include a straight line feature.3. The method of claim 1 , wherein the graphical features include a curved line feature.4. The method of claim 1 , wherein the graphical features include a vertex feature.5. The method of claim 1 , wherein the computing the similarity score is based on a hamming distance.6. The method of claim 1 , further comprising arranging the plurality of similarity score in ascending order and wherein the storing the data is based on a first similarity score of the ascending order.7. The method of claim 1 , further comprising presenting the plurality of similarity scores to a user; and receiving a confirmation of a selected similarity scored based on the presentation.8. A system for processing a graphical artifact claim 1 ...
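Since the claims mention a Hamming-distance-based similarity score, a toy version of the matching step might look as follows (the 8-bit feature descriptors and symbol names are invented for illustration):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary feature strings."""
    return sum(x != y for x, y in zip(a, b))

def rank_known_elements(unknown_bits, library):
    """Similarity scores in ascending Hamming distance; the unknown graphical
    element is stored with the best-ranked known element (or confirmed by a
    user, as in the dependent claims)."""
    return sorted(library.items(), key=lambda kv: hamming(unknown_bits, kv[1]))

# Hypothetical descriptors: bits for straight-line, curve and vertex features
library = {"valve": "11010010", "pump": "10110100", "sensor": "01100011"}
print(rank_known_elements("11010110", library)[0][0])   # -> 'valve'
```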

23-02-2017 publication date

ANNOTATION LINE DETERMINING UNIT, ANNOTATION LINE REMOVING UNIT, MEDICAL DISPLAY, AND METHOD THEREFOR

Number: US20170053421A1
Author: Chen Haifeng
Assignee: EIZO Corporation

To detect annotation lines in medical image data. Horizontal annotation pixel determination means obtains the color component value difference between each pixel of a predetermined number of connected adjacent pixels in a first direction of the target pixel and an adjacent pixel thereof. If the total number of pixels having color component value differences, of the predetermined number of pixels is equal to or smaller than a first threshold, the horizontal annotation pixel determination means determines that the target pixel is an annotation pixel. If annotation pixels are successive in the horizontal direction in a predetermined number, horizontal annotation line determination means determines that the annotation pixels form an annotation line. The same applies to the vertical direction. The determined annotation lines are provided to border detection means. 1. A device for determining annotation lines added to regions in which medical images are displayed , comprising:annotation pixel determination means configured to obtain color component value differences between each pixel of a predetermined number of connected adjacent pixels in a first direction of a target pixel and an adjacent pixel thereof and, to determine that the target pixel is an annotation pixel if a total number of pixels having color component value differences, of the predetermined number of pixels is a first threshold or less; andannotation line determination means configured to, if the annotation pixel is successive in the first direction in a number equal to or greater than a second threshold, determine that a line formed by the successive annotation pixels is an annotation line.2. The annotation line determination device of claim 1 , wherein color component values of the pixels are represented by UV values claim 1 , andif UV values of the target pixel are not gray, the annotation pixel determination means determines that the target pixel is an annotation pixel.3. The annotation line ...
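A compact sketch of the horizontal pass (the vertical pass is symmetric); the run length and thresholds are placeholders for the "predetermined number" and the first/second thresholds in the claims:

```python
import numpy as np

def is_annotation_pixel(row, x, run=8, diff_thresh=2, count_thresh=1):
    """A target pixel counts as an annotation pixel when, among `run` connected
    neighbours in the line direction, at most `count_thresh` adjacent pairs
    differ by more than `diff_thresh` in the colour component value."""
    seg = row[x:x + run + 1].astype(int)
    return int((np.abs(np.diff(seg)) > diff_thresh).sum()) <= count_thresh

def horizontal_annotation_lines(channel, run=8, min_len=40):
    """Group successive annotation pixels per row; runs at least `min_len`
    pixels long are reported as horizontal annotation lines (y, x_start, x_end)."""
    lines, (h, w) = [], channel.shape
    for y in range(h):
        start = None
        for x in range(w - run):
            if is_annotation_pixel(channel[y], x, run):
                start = x if start is None else start
            elif start is not None:
                if x - start >= min_len:
                    lines.append((y, start, x))
                start = None
        if start is not None and (w - run) - start >= min_len:
            lines.append((y, start, w - run))
    return lines
```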

23-02-2017 publication date

Systems and Methods for Generating Compressed Light Field Representation Data using Captured Light Fields, Array Geometry, and Parallax Information

Number: US20170054901A1
Assignee: Pelican Imaging Corporation

Systems and methods for the generating compressed light field representation data using captured light fields in accordance embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory connected configured to store an image processing application, wherein the image processing application configures the processor to obtain image data, wherein the image data includes a set of images including a reference image and at least one alternate view image, generate a depth map based on the image data, determine at least one prediction image based on the reference image and the depth map, compute prediction error data based on the at least one prediction image and the at least one alternate view image, and generate compressed light field representation data based on the reference image, the prediction error data, and the depth map. 1a processor; anda memory connected to the processor and configured to store an image processing application; [ the image data comprises a set of images comprising a reference image and at least one alternate view image; and', 'each image in the set of images comprises a set of pixels;, 'obtain image data, wherein, 'generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image;', 'determine at least one prediction image based on the reference image and the depth map, where the prediction images correspond to at least one alternate view image;', 'compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image; and', 'generate compressed light field representation data based on the reference image, ...
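The prediction-error idea can be illustrated with a toy depth-based warp of the reference image toward an alternate view (a real array camera would use calibrated geometry rather than this simple horizontal-shift model):

```python
import numpy as np

def predict_alternate_view(reference, depth, baseline_px):
    """Warp a grayscale reference image toward an alternate viewpoint using a
    per-pixel disparity derived from the depth map (toy horizontal-shift model)."""
    h, w = reference.shape
    disparity = (baseline_px / np.maximum(depth, 1e-6)).astype(int)
    cols = np.clip(np.arange(w) - disparity, 0, w - 1)   # source column per pixel
    prediction = np.empty_like(reference)
    for y in range(h):
        prediction[y] = reference[y, cols[y]]
    return prediction

def compress_light_field(reference, alternate, depth, baseline_px):
    """Store the reference image, the depth map and the (typically sparse)
    prediction error instead of every captured view."""
    error = alternate.astype(int) - predict_alternate_view(reference, depth, baseline_px).astype(int)
    return {"reference": reference, "depth": depth, "prediction_error": error}
```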

02-03-2017 publication date

Inclined Super-GEO Orbit for Improved Space-Surveillance

Number: US20170057661A1
Assignee:

Systems, methods, and apparatus for space surveillance are disclosed herein. In one or more embodiments, the disclosed method involves scanning, by at least one sensor on at least one satellite in inclined super-geostationary earth orbit (super-GEO), a raster scan over a field of regard (FOR). In one or more embodiments, the scanning is at a variable rate, which is dependent upon a target dwell time for detecting a target of interest. In at least one embodiment, the target dwell time is a function of a range from at least one sensor to the target of interest and a function of a solar phase angle. In some embodiments, the axis of inclination of the inclined super-GEO is a function of the solar phase angle. 1. A method for space surveillance, the method comprising: scanning, by at least one sensor on at least one satellite in inclined super-geostationary earth orbit (super-GEO), a raster scan over a field of regard (FOR), wherein the scanning is at a variable rate, which is dependent upon a target dwell time for detecting a target of interest, wherein the target dwell time is a function of a characteristic brightness of the target. 2. The method of claim 1, wherein an axis of inclination of the inclined super-GEO is chosen to minimize performance degradations due to earth exclusions. 3. The method of claim 1, wherein the target dwell time is further a function of a range from the at least one sensor to the target of interest and a function of a solar phase angle. 4. The method of claim 1, wherein the raster scan comprises at least one sweep. 5. The method of claim 4, wherein the at least one sweep is a continuous sweep. 6. The method of claim 1, wherein the field of regard (FOR) of the at least one sensor is a function of a geometry between a sun and the at least one satellite and a function of an angle that the at least one satellite is pointing. 7. The method of claim 1, wherein the method further comprises, during the scanning, ...
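The abstract does not give the dwell-time law, so the sketch below simply assumes an inverse-square dependence on sensor-to-target range and a Lambertian solar-phase factor, with placeholder constants, to show how a variable raster scan rate can be driven by the required dwell time:

```python
import math

def dwell_time_s(range_km, phase_angle_deg, ref_dwell_s=0.05, ref_range_km=10_000.0):
    """Illustrative dwell-time law: integration time grows with the square of
    the sensor-to-target range and with a diffuse-sphere phase factor that
    makes targets dimmer at large solar phase angles."""
    a = math.radians(phase_angle_deg)
    phase_factor = max((math.sin(a) + (math.pi - a) * math.cos(a)) / math.pi, 1e-3)
    return ref_dwell_s * (range_km / ref_range_km) ** 2 / phase_factor

def scan_rate_deg_per_s(field_of_view_deg, range_km, phase_angle_deg):
    """Variable raster scan rate: advance one field of view per dwell time."""
    return field_of_view_deg / dwell_time_s(range_km, phase_angle_deg)
```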

02-03-2017 publication date

Adaptive Scan Rate Space Surveillance Sensor for Super-GEO Orbits

Number: US20170057662A1
Assignee:

Systems, methods, and apparatus for space surveillance are disclosed herein. In one or more embodiments, the disclosed method involves scanning, by at least one sensor on at least one satellite in super-geostationary earth orbit (super-GEO), a raster scan over a field of regard (FOR). In one or more embodiments, the scanning is at a variable rate, which is dependent upon a target dwell time for detecting a target of interest. In at least one embodiment, the target dwell time is a function of a characteristic brightness of the target. 1. A method for space surveillance , the method comprising:scanning, by at least one sensor on at least one satellite in super-geostationary earth orbit (super-GEO), a raster scan over a field of regard (FOR),wherein the scanning is at a variable rate, which is dependent upon a target dwell time for detecting a target of interest,wherein the target dwell time is a function of a characteristic brightness of the target.2. The method of claim 1 , wherein the target dwell time is further a function of a range from the at least one sensor to the target of interest and a function of a solar phase angle.3. The method of claim 1 , wherein the raster scan comprises at least one sweep.4. The method of claim 3 , wherein the at least one sweep is a continuous sweep.5. The method of claim 1 , wherein the field of regard (FOR) of the at least one sensor is a function of a geometry between a sun and the at least one satellite and a function of an angle that the at least one satellite is pointing.6. The method of claim 1 , wherein the method further comprises claim 1 , during the scanning claim 1 , collecting claim 1 , by the at least one sensor claim 1 , image frames over time.7. The method of claim 6 , wherein the image frames overlap.8. The method of claim 1 , wherein super-GEO is an orbit has a radius that is larger than a geostationary earth orbit (GEO) radius for a majority of a duration of an orbital cycle.9. The method of claim 1 , wherein a ...
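For the brightness-dependent variant claimed here, one illustrative scaling ties the dwell time to the target's visual magnitude (each magnitude step is a factor of 10^0.4 in flux); the reference values are placeholders:

```python
def dwell_time_from_brightness(visual_magnitude, ref_dwell_s=0.05, ref_magnitude=12.0):
    """Dimmer targets (larger magnitude) need proportionally more integration
    time to reach the same signal-to-noise ratio."""
    return ref_dwell_s * 10 ** (0.4 * (visual_magnitude - ref_magnitude))
```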

21-02-2019 publication date

NAVIGATION SYSTEM

Number: US20190056738A1
Assignee:

A navigation-system for use on an automated vehicle includes a perception-sensor and a controller. The perception-sensor detects objects present proximate to a host-vehicle and detects a gradient of an area proximate to the host-vehicle. The controller is in communication with the perception-sensor. The controller is configured to control the host-vehicle. The controller determines a free-space defined as off of a roadway traveled by the host-vehicle, and drives the host-vehicle through the free-space when the gradient of the free-space is less than a slope-threshold and the objects can be traversed. 1. A navigation-system for use on an automated vehicle , said system comprising:a perception-sensor that detects objects present proximate to a host-vehicle and detects a gradient of an area proximate to the host-vehicle; anda controller in communication with the perception-sensor, said controller configured to control the host-vehicle, wherein the controller determines a free-space defined as off of a roadway traveled by the host-vehicle, and drives the host-vehicle through the free-space when the gradient of the free-space is less than a slope-threshold and the objects can be traversed.2. The system in accordance with claim 1 , wherein the controller distinguishes between the objects that are a barrier and the objects that are grass based on the perception-sensor.3. The system in accordance with claim 2 , wherein the controller further determines a height of the grass.4. The system in accordance with claim 1 , wherein the slope-threshold is determined based on a dynamic-model of the host-vehicle.5. The system in accordance with claim 1 , wherein the controller further determines a path to drive the host-vehicle from the roadway through the free-space and return to the roadway.6. The system in accordance with claim 1 , wherein the system further includes an alert-device in communication with the controller claim 1 , wherein the alert-device notifies an operator of the ...
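The drive/no-drive decision sketched in the claims (slope below a threshold and only traversable objects such as short grass) can be written as a small predicate; the object encoding and the 0.25 m grass-height limit are assumptions:

```python
def can_traverse_free_space(gradient_deg, objects, slope_threshold_deg=10.0,
                            max_grass_height_m=0.25):
    """Off-road free space is drivable when its gradient is below the slope
    threshold and every detected object can be traversed (no barriers, grass
    below an assumed height limit)."""
    if gradient_deg >= slope_threshold_deg:
        return False
    for obj in objects:                     # obj: {"type": ..., "height_m": ...}
        if obj["type"] == "barrier":
            return False
        if obj["type"] == "grass" and obj["height_m"] > max_grass_height_m:
            return False
    return True
```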

21-02-2019 publication date

Planar visualization of anatomical structures

Number: US20190057541A1
Assignee: Siemens Healthcare GmbH

A method, for two-dimensional mapping of anatomical structures of a patient, includes acquiring three-dimensional image data of anatomical structures of a patient; adapting a virtual network structure to a spatial course of the anatomical structures; defining a user-defined map projection for projection of two-dimensional pixel positions of an image to be output onto a geometric figure around a center of the anatomical structures for which mapping onto a two-dimensional space is defined; ascertaining points of intersection of radially extending half lines assigned to the two-dimensional pixel positions of the image to be output with the virtual network structure; and ascertaining the image to be output based upon image intensity values assigned to the points of intersection ascertained. A method for two-dimensional mapping of the tree-like elongated structure of the patient; a method for simultaneous mapping of a tree-like elongated structure; and corresponding apparatuses are also described. 1. A method for two-dimensional mapping of anatomical structures of a patient , comprising:acquiring three-dimensional image data of anatomical structures of a patient;adapting a virtual network structure to a spatial course of the anatomical structures;defining a user-defined map projection for projection of two-dimensional pixel positions of an image to be output onto a geometric figure around a center of the anatomical structures for which mapping onto a two-dimensional space is defined;ascertaining points of intersection of radially extending half lines assigned to the two-dimensional pixel positions of the image to be output with the virtual network structure; andascertaining the image to be output based upon image intensity values assigned to the points of intersection ascertained.2. The method of claim 1 , wherein the anatomical structures have a hollow structure.3. The method of claim 2 , wherein the hollow structure includes a hollow organ with blood vessel structures. ...
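As one concrete choice of user-defined map projection, an equirectangular mapping sends every output pixel to a radial half line from the centre of the structure; the mesh-intersection and intensity-sampling steps are left as callbacks because the abstract does not fix them:

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Equirectangular projection: output pixel (u, v) maps to a unit direction
    (azimuth/elevation) of a radial half line from the structure centre."""
    azimuth = 2 * np.pi * (u + 0.5) / width - np.pi
    elevation = np.pi * (v + 0.5) / height - np.pi / 2
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def render_map(width, height, center, intersect_mesh, sample_intensity):
    """intersect_mesh(center, direction) -> intersection point with the fitted
    network structure; sample_intensity(point) -> image value. Both callbacks
    are placeholders for the steps named in the abstract."""
    out = np.zeros((height, width))
    for v in range(height):
        for u in range(width):
            point = intersect_mesh(center, pixel_to_ray(u, v, width, height))
            out[v, u] = sample_intensity(point)
    return out
```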

02-03-2017 publication date

METHOD FOR DETERMINING A SPATIAL DISPLACEMENT VECTOR FIELD

Number: US20170059307A1
Author: Wieneke Bernhard
Assignee:

An inherent pattern image of a test object is recorded when the test object is illuminated with uniform illumination light, and a projection pattern image is recorded when the test object is illuminated with a spatially modulated projection pattern. A planar displacement vector field is calculated from the inherent pattern image, and a shape is calculated from the projection pattern image. An image of the first type is recorded at a time t, and images of the second type are recorded at times t− and t+ before and after the time t. A representation of the test object at the test time t is estimated by averaging. A spatial displacement vector field is based on the calculated representation of the test object of the first image type and the representation of the test object estimated from the images of the second type. 1. A method for determining a spatial displacement vector field of a test object, comprising: recording at least one inherent pattern image while the test object is illuminated with uniform illumination light; recording at least one projection pattern image while the test object is illuminated with a spatially modulated projection pattern projected onto said test object; calculating a planar displacement vector field of the test object from the inherent pattern image, when the inherent pattern image is recorded, as a representation of the test object assigned to the inherent pattern image; calculating a shape of the test object from the projection pattern image, when the projection pattern image is recorded, as a representation of the test object assigned to the projection pattern image; recording ...
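A minimal sketch of the temporal-fusion step, assuming the two image types are combined as in-plane displacement plus out-of-plane shape change (the averaging rule follows the abstract; the array layout and combination rule are illustrative):

```python
import numpy as np

def estimate_shape_at_t(shape_before, shape_after):
    """Estimate the surface shape at the test time t by averaging the two
    projection-pattern recordings taken just before and just after t."""
    return 0.5 * (np.asarray(shape_before) + np.asarray(shape_after))

def spatial_displacement_field(planar_dx, planar_dy, shape_t1, shape_t2):
    """Stack the in-plane displacement components measured from the inherent
    pattern with the out-of-plane change of the estimated shapes, giving a
    three-component displacement vector per surface point."""
    dz = np.asarray(shape_t2) - np.asarray(shape_t1)
    return np.dstack([planar_dx, planar_dy, dz])
```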

02-03-2017 publication date

SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING CAD FIELD SKETCHES AND AUTOMATIC VALIDATION THEREOF

Number: US20170059317A1
Assignee: DATUMATE LTD.

A method for automatically validating measurements of a field survey including providing, on a field computing device, a two-dimensional image of a field to be surveyed, providing actual coordinates of at least two field reference points, each corresponding to an image reference point on the two-dimensional image, employing the field computing device to outline, on the two-dimensional image, features of interest of the field, employing the field computing device to manually select, on the outline, a plurality of image measuring points, for each image measuring point, identifying a corresponding field measuring point, measuring the actual coordinates of each field measuring point, thereby obtaining actual coordinates thereof, and responsive to the obtaining, automatically ascertaining for each image measuring point and corresponding field measuring point, whether there is a discrepancy between the location of the image measuring point on the two-dimensional image and the actual coordinates of the corresponding field measuring point. 1. A method for automatically validating measurements of a field survey , said method comprising:providing, on a field computing device, a two-dimensional image of a field to be surveyed;providing actual coordinates of at least two field reference points in said field, each of said at least two field reference points corresponding to an image reference point on said two-dimensional image;employing said field computing device to outline, on said two-dimensional image, features of interest of said field to be surveyed;employing said field computing device to manually select, on said outline, a plurality of image measuring points;for each image measuring point of said plurality of image measuring points, identifying, in said field, a corresponding field measuring point;measuring, in said field, the actual coordinates of each said field measuring point, thereby obtaining actual coordinates of each said field measuring point; andresponsive to ...
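With two reference points known both on the image and in the field, a 2D similarity transform is fully determined; the sketch below uses it to predict field coordinates for each image measuring point and flags discrepancies above an assumed tolerance:

```python
import numpy as np

def similarity_from_two_points(img_pts, field_pts):
    """2D similarity transform (scale, rotation, translation) fixed by the two
    reference points present both on the 2D image and in the field."""
    (p0, p1), (q0, q1) = np.asarray(img_pts, float), np.asarray(field_pts, float)
    a, b = p1 - p0, q1 - q0
    scale = np.linalg.norm(b) / np.linalg.norm(a)
    ang = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
    R = scale * np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    return R, q0 - R @ p0

def flag_discrepancies(R, t, image_points, field_points, tol_m=0.05):
    """Report measuring points whose image-derived position differs from the
    surveyed field coordinates by more than a tolerance (5 cm assumed here)."""
    predicted = np.asarray(image_points, float) @ R.T + t
    errors = np.linalg.norm(predicted - np.asarray(field_points, float), axis=1)
    return [(i, float(e)) for i, e in enumerate(errors) if e > tol_m]
```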

03-03-2016 publication date

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM

Number: US20160063340A1
Assignee:

According to one embodiment, an information processing apparatus includes an image acquisition module, an elevation-angle acquisition module, a character deformation specification module, a character detection dictionary storage, a character detection dictionary selector and a character detector. The elevation-angle acquisition module is configured to acquire an elevation angle of a photographic device assumed when the photographic device has obtained an acquired image. The character deformation specification module is configured to specify how an appearance of the character in the acquired image is deformed, based on the acquired elevation angle. 1. An information processing apparatus comprising:an image acquisition module configured to acquire an image obtained by photographing a sheet surface printed with a character;an elevation-angle acquisition module configured to acquire an elevation angle of a photographic device assumed when the photographic device has obtained the acquired image;a character deformation specification module configured to specify how an appearance of the character in the acquired image is deformed, based on the acquired elevation angle;a character detection dictionary storage configured to store a plurality of character detection dictionaries associated with variously deformed appearances of the character;a character detection dictionary selector configured to select, after the character deformation specification module specifies how the appearance of the character is deformed, one of the character detection dictionaries from the character detection dictionary storage, the one character detection dictionary being associated with the specified appearance of the character; anda character detector configured to execute character detection processing on the acquired image to detect an area of the character in the acquired image, using the selected character detection dictionary.23. The apparatus of , wherein the character deformation ...
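The dictionary-selection step can be illustrated as a nearest-bin lookup over detectors trained for different perspective deformations; the elevation bins and detector names below are invented for the example:

```python
def select_detection_dictionary(elevation_deg, dictionaries):
    """Pick the character-detection dictionary trained for the deformation
    closest to the camera's elevation angle. `dictionaries` maps a nominal
    elevation (degrees) to a detector object or identifier."""
    nominal = min(dictionaries, key=lambda e: abs(e - elevation_deg))
    return dictionaries[nominal]

# Hypothetical detectors trained on characters rendered at 0, 30 and 60 degrees
detectors = {0: "dict_frontal", 30: "dict_oblique", 60: "dict_steep"}
print(select_detection_dictionary(41.0, detectors))   # -> dict_oblique
```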

01-03-2018 publication date

Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition

Number: US20180060652A1
Assignee:

A method and apparatus for deep learning based fine-grained body part recognition in medical imaging data is disclosed. A paired convolutional neural network (P-CNN) for slice ordering is trained based on unlabeled training medical image volumes. A convolutional neural network (CNN) for fine-grained body part recognition is trained by fine-tuning learned weights of the trained P-CNN for slice ordering. The CNN for fine-grained body part recognition is trained to calculate, for an input transversal slice of a medical imaging volume, a normalized height score indicating a normalized height of the input transversal slice in the human body. 1. A method for deep learning based fine-grained body part recognition in medical imaging data , comprising:training a paired convolutional neural network (P-CNN) for slice ordering based on unlabeled training medical image volumes; andtraining a convolutional neural network (CNN) for fine-grained body part recognition by fine-tuning learned weights of the trained P-CNN for slice ordering.2. The method of claim 1 , wherein training a paired convolutional neural network (P-CNN) for slice ordering based on unlabeled training medical image volumes comprises:randomly sampling transversal slice pairs from the unlabeled medical image training volumes, wherein each transversal slice pair is randomly sampled from the same training volume; andtraining the P-CNN to predict a relative order of a pair of transversal slices of a medical imaging volume based on the randomly sampled transversal slice pairs, wherein the P-CNN includes two identical sub-networks for a first plurality of layers, each to extract feature from a respective slice of the pair of transversal slices, and global final layers to fuse outputs of the sub-networks and calculate a binary classification result regarding the relative order of the pair of transversal slices.3. The method of claim 2 , wherein the CNN for fine-grained body part recognition includes a first plurality of ...
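A minimal PyTorch sketch of the two-stage idea, with a toy encoder standing in for the real architecture: the paired network learns slice ordering from labels that come for free from slice indices, and its encoder weights are then reused to regress a normalized height score for a single slice.

```python
import torch
import torch.nn as nn

class SliceEncoder(nn.Module):
    """Toy shared sub-network that embeds one transversal slice."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.features(x)            # (N, 32)

class PairedCNN(nn.Module):
    """Two identical encoders plus a fused head predicting which of the two
    slices lies higher in the body; trained on unlabeled slice pairs sampled
    from the same volume."""
    def __init__(self):
        super().__init__()
        self.encoder = SliceEncoder()
        self.head = nn.Linear(64, 2)       # binary relative-order classifier

    def forward(self, a, b):
        return self.head(torch.cat([self.encoder(a), self.encoder(b)], dim=1))

class BodyPartRegressor(nn.Module):
    """Fine-tuning stage: reuse the pretrained encoder and regress a
    normalized height score in [0, 1] for a single input slice."""
    def __init__(self, pretrained_encoder):
        super().__init__()
        self.encoder = pretrained_encoder
        self.head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.encoder(x))
```

Pretraining would use a cross-entropy loss on the relative-order labels; fine-tuning would use, for example, an L1 or L2 loss on the normalized height.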

20-02-2020 publication date

SUBSTANCE PREPARATION EVALUATION SYSTEM

Number: US20200057880A1
Assignee: Beckman Coulter, Inc.

Automatic substance preparation and evaluation systems and methods are provided for preparing and evaluating a fluidic substance, such as e.g. a sample with bodily fluid, in a container and/or in a dispense tip. The systems and methods can detect volumes, evaluate integrities, and check particle concentrations in the container and/or the dispense tip. 1. A method of evaluating a fluidic substance in a container , the method comprising:capturing, using an image capture device, an image of at least a portion of the container;obtaining, using at least one computing device, a plurality of color parameters of at least a portion of the image; andgenerating a sample classification result for the fluidic substance contained in the container based on the plurality of color parameters;wherein the sample classification result is representative of a concentration of at least one interferent in the fluidic substance.2. The method of claim 1 , wherein obtaining a plurality of color parameters includes:generating a histogram for at least a portion of the image, the histogram comprising a plurality of color channels; andobtaining a plurality of mean values for the plurality of color channels, wherein the plurality of color parameters includes the plurality of mean values for the plurality of color channels.3. The method of any of and claim 1 , wherein obtaining a plurality of color parameters includes:generating a histogram for at least a portion of the image, the histogram comprising a plurality of color channels;obtaining a plurality of Riemann sums for the plurality of color channels;wherein the plurality of color parameters includes the plurality of Riemann sums for the plurality of color channels.4. The method according to any of the preceding claims claim 1 , wherein obtaining a plurality of color parameters includes:generating a histogram for at least a portion of the image, the histogram comprising a plurality of color channels;obtaining a plurality of modes for the ...
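A toy version of the colour-parameter classification (the channel-ratio rule and its threshold are placeholders, not values from the publication):

```python
import numpy as np

def channel_means(image_rgb):
    """Mean value of each colour channel over the imaged portion of the sample."""
    return {c: float(image_rgb[..., i].mean()) for i, c in enumerate("RGB")}

def classify_interferent(image_rgb, hemolysis_ratio=1.4):
    """Illustrative rule: a strongly red-shifted serum colour (high R/G mean
    ratio) is flagged as possible hemolysis."""
    m = channel_means(image_rgb)
    ratio = m["R"] / max(m["G"], 1e-6)
    return "possible hemolysis" if ratio > hemolysis_ratio else "no interferent detected"
```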

20-02-2020 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20200057907A1
Author: Kobayashi Tsuyoshi
Assignee:

An image processing apparatus configured to extract an irradiation field from an image obtained through radiation imaging, comprises: an inference unit configured to obtain an irradiation field candidate in the image based on inference processing; a contour extracting unit configured to extract a contour of the irradiation field based on contour extraction processing performed on the irradiation field candidate; and a field extracting unit configured to extract the irradiation field based on the contour. 1. An image processing apparatus configured to extract an irradiation field from an image obtained through radiation imaging , comprising:an inference unit configured to obtain an irradiation field candidate in the image based on inference processing;a contour extracting unit configured to extract a contour of the irradiation field based on contour extraction processing performed on the irradiation field candidate; anda field extracting unit configured to extract the irradiation field based on the contour.2. The image processing apparatus according to claim 1 ,wherein the inference unit obtains, as the irradiation field candidate, a probability map that indicates a probability of being an irradiation field or a probability of not being an irradiation field for each pixel of the image.3. The image processing apparatus according to claim 2 ,wherein the contour extracting unit extracts the contour of the irradiation field from an image that is obtained from the probability map and includes an edge that indicates a boundary between the irradiation field and a collimator region.4. The image processing apparatus according to claim 2 ,wherein the contour extracting unit performs, on the irradiation field candidate, contour extraction processing for extracting the contour based on a shape of a collimator.5. The image processing apparatus according to claim 4 ,wherein the contour extracting unit changes the contour extraction processing according to the shape.6. The image ...
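A simplified version of the probability-map-to-field step, replacing the collimator-shape-aware contour extraction with the tightest axis-aligned rectangle around the thresholded candidate region:

```python
import numpy as np

def extract_irradiation_field(probability_map, prob_thresh=0.5):
    """Turn the per-pixel irradiation-field probability map into a field mask,
    then take the bounding rectangle of the candidate region as the contour
    (a stand-in for the rectangular-collimator case)."""
    candidate = probability_map >= prob_thresh
    ys, xs = np.nonzero(candidate)
    if ys.size == 0:
        return candidate, None
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    field = np.zeros_like(candidate)
    field[y0:y1 + 1, x0:x1 + 1] = True
    return field, (y0, x0, y1, x1)
```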

01-03-2018 publication date

Automated Cephalometric Analysis Using Machine Learning

Number: US20180061054A1
Assignee:

A system and method are described for automating the analysis of cephalometric x-rays. Included in the analysis is a method for automatic anatomical landmark localization based on convolutional neural networks. In an aspect, the system and method employ a deep database of images and/or prior image analysis results so as to improve the outcome from the present automated landmark detection scheme. 1. A method for automated processing a cephalometric image , in a processor-based machine using machine learning steps , so as to provide results for treatment planning and follow-up , comprising:receiving a cephalometric image from a user;pre-processing said received cephalometric image, in a processor-based machine, to determine whether the quality of the cephalometric image meets a pre-determined image quality criterion;if said pre-determined image quality criterion is met, carrying out a sequence of automated localization of anatomical landmark points of interest on said cephalometric image, in said processor-based machine, based on machine learning from previously-analyzed cephalometric images;generation of a cephalometric report including analyses results based on said landmarks;formatting said cephalometric report into a user-readable format; andproviding said cephalometric report in said user-readable format to said user.2. The method of claim 1 , further comprising generating a two-dimensional cephalometric X-ray image of a subject prior to receiving said image from the user.3. The method of claim 1 , receiving said cephalometric image comprising receiving the cephalometric image over a cloud-based communication network.4. The method of claim 1 , further comprising authenticating said user with said processor-based machine claim 1 , over a communication network claim 1 , wherein said processor-based machine comprises a server with a user interface.5. The method of claim 1 , said automated localization step comprising using a convolutional neural network (CNN) ...
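Assuming the CNN emits one heatmap per anatomical landmark (a common design, though the abstract does not commit to it), landmark localization reduces to a per-channel peak search with a confidence cut-off for the report:

```python
import numpy as np

def landmarks_from_heatmaps(heatmaps, min_peak=0.3):
    """Convert per-landmark CNN heatmaps (L x H x W) into (x, y) pixel
    coordinates by taking each channel's peak; low-confidence peaks are
    returned as None so the cephalometric report can flag them for review."""
    points = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((int(x), int(y)) if hm[y, x] >= min_peak else None)
    return points
```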
