Total found: 4865. Displayed: 200.
03-04-2018 дата публикации

DEVICE AND METHOD FOR OBTAINING VITAL SIGN INFORMATION OF A LIVING BEING

Номер: RU2649529C2

The group of inventions relates to a device and a method for obtaining vital sign information of a living being. The device includes a detection unit for receiving light in at least one wavelength interval reflected from at least a region of interest of the living being, and for generating an input signal from the received light; a processing unit for processing the input signal and extracting vital sign information of said living being from said input signal by means of remote photoplethysmography; and an illumination unit for illuminating at least said region of interest during illumination intervals, wherein said light during said illumination intervals is dominant over the ambient light at least in the wavelength range in which the detection unit receives light, and is thus optimized for extracting vital sign information from the input signal generated during ...

Подробнее
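The abstract above relies on remote photoplethysmography: small intensity variations of skin pixels carry the pulse signal. As a minimal illustration of that general idea only (not the patented device, which additionally controls the illumination intervals), the sketch below averages an assumed region of interest over a stack of frames and reads the pulse rate from the dominant spectral peak; the frame array, ROI coordinates and frame rate are assumed inputs.

```python
import numpy as np

def estimate_pulse_bpm(frames, roi, fps):
    """Toy rPPG estimate: mean ROI intensity per frame -> dominant frequency.

    frames: (T, H, W) array of single-channel frames (assumed input)
    roi:    (y0, y1, x0, x1) region of interest covering skin (assumed input)
    fps:    frame rate of the sequence
    """
    y0, y1, x0, x1 = roi
    signal = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))   # one sample per frame
    signal = signal - signal.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # limit to a plausible human pulse band, here 0.7-4 Hz (42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```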
20-08-2002 дата публикации

METHOD AND DEVICE FOR IMAGE CONVERSION

Номер: RU2187904C1

The invention relates to the field of biometrics and can be used for converting, acquiring, processing and analyzing electronic images of living biological objects. The technical result is the ability to detect objects (above all living ones) that perform slight periodic oscillations, practically invisible to the eye, and to acquire and analyze images of such objects against a background of both stationary and moving objects. The technical result is achieved by acquiring successive image frames of the object, computing the inter-frame image difference and, during image processing, accumulating the sum of differences from at least two selected successive image frames, wherein the image conversion device is implemented as a CMOS image sensor containing a photosensitive multi-element converter and a means for processing the inter-frame difference image that performs the operation of accumulating the sum of differences of the image frames obtained by the photosensitive ...

Подробнее
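The abstract above detects barely visible periodic oscillations by accumulating a sum of inter-frame differences over at least two consecutive frames. A hedged sketch of that accumulation step, assuming the frames are already available as a grayscale array, could look like this:

```python
import numpy as np

def accumulated_motion_map(frames, num_diffs=8):
    """Accumulate absolute inter-frame differences over a short frame window.

    Small periodic oscillations that are invisible in any single difference
    image build up in the accumulated sum. `frames` is an assumed (T, H, W)
    array of grayscale frames; `num_diffs` is how many consecutive
    differences to sum (the abstract requires at least two).
    """
    frames = frames.astype(np.float32)
    acc = np.zeros(frames.shape[1:], dtype=np.float32)
    for i in range(1, min(num_diffs + 1, len(frames))):
        acc += np.abs(frames[i] - frames[i - 1])   # inter-frame difference
    return acc   # high values mark slowly oscillating or moving regions
```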
27-10-2016 дата публикации

METHOD FOR DETECTING A MOVING OBJECT

Номер: RU2015111679A
Принадлежит:

... 1. A method for detecting a moving object that is an object of observation from images obtained at constant intervals, based on an evaluation value obtained from pixel values at the same pixel positions overlapping one another in the images, while successively shifting the images in accordance with content corresponding to the assumed motion of the moving object that is the object of observation during the constant intervals, the method comprising: a step of calculating an average value for a limited number of pixels, in which an average value is calculated using pixel values equal to or smaller than a threshold value in order to distinguish the moving object that is the object of observation from a light-emitting element whose brightness is higher than that of the moving object, the pixel values being pixel values at the same pixel positions in the images; and an evaluation step, in which pixels at the same ...

Подробнее
10-09-2015 дата публикации

MOVING BODY DETECTION DEVICE AND MOVING BODY DETECTION SYSTEM

Номер: RU2014107925A
Принадлежит:

... 1. A moving body detection device for detecting a moving body in the vicinity of a vehicle, the moving body detection device being characterized in that it comprises: image capturing means installed on board the vehicle for capturing an image of the area behind the vehicle; image conversion means for converting the viewpoint of the captured image obtained by the image capturing means into a bird's-eye view image; and difference waveform generation means for positionally aligning, in the bird's-eye view, the positions of the bird's-eye view images obtained at different moments in time by the image capturing means, counting the number of pixels exhibiting a predetermined difference in the difference image of the positionally aligned bird's-eye view images and creating a frequency distribution, and thereby generating waveform information ...

Подробнее
20-07-2015 дата публикации

DEVICE AND METHOD FOR IDENTIFYING A MOVING IMAGE AREA

Номер: RU2013154986A
Принадлежит:

... 1. A method for determining a rectangular area of a moving image displayed in a portion of a display area having pixels arranged therein in row and column directions, the method including: a moving unit block evaluation step, in which the display area is divided into unit blocks, each including a predetermined number of pixels, and it is evaluated whether each unit block is a moving unit block containing motion; a moving column block determination step, in which a set of unit blocks contained in a column and including one of the uppermost unit blocks among said unit blocks is set as a column block, and, if a column block includes at least one moving unit block, said column block is determined to be a moving column block; a moving row block determination step, in which a set of unit blocks contained ...

Подробнее
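The claim above flags moving unit blocks and then derives a rectangular moving-image area from the moving column and row blocks. The sketch below illustrates the same block-wise idea in a simplified form (one difference image, a single bounding rectangle); the block size and threshold are assumptions.

```python
import numpy as np

def moving_region_rectangle(prev, curr, block=16, thresh=8.0):
    """Find a bounding rectangle of moving blocks between two frames.

    The frame is divided into `block` x `block` unit blocks, a block is
    marked "moving" when its mean absolute difference exceeds `thresh`,
    and the rectangle spanning all moving blocks is returned.
    `prev` and `curr` are assumed (H, W) grayscale frames.
    """
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    h, w = diff.shape
    rows, cols = h // block, w // block
    blocks = diff[:rows * block, :cols * block].reshape(rows, block, cols, block)
    moving = blocks.mean(axis=(1, 3)) > thresh          # (rows, cols) boolean map
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    # rectangle in pixel coordinates covering every moving block
    return (ys.min() * block, xs.min() * block,
            (ys.max() + 1) * block, (xs.max() + 1) * block)
```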
10-09-2014 дата публикации

METHOD AND DEVICE FOR DETECTING MOVING OBJECTS IN A VIDEO IMAGE SEQUENCE

Номер: RU2013109127A
Принадлежит:

... 1. A method for detecting moving objects in a video image sequence, comprising the steps of: a) determining characteristic feature points in an image of a pair of consecutive images of the video image sequence; b) determining a mathematical transformation for mapping one of the two images of the image pair onto the other of the two images of the image pair, using the characteristic feature points determined in step a); c) determining a difference image as the difference between the images of the image pair transformed relative to one another by means of the transformation determined in step b); d) determining characteristic image change points in the difference image determined in step c); e) determining object points from the characteristic image change points determined in step d); characterized in that step d) includes the following steps: d1) specifying a threshold value of image change and determining change points ...

Подробнее
27-08-2008 дата публикации

IMAGE FORMING DEVICE, IMAGE PROCESSING METHOD AND INTEGRATED CIRCUIT

Номер: RU2007106899A
Принадлежит:

... 1. An image forming device comprising an integrated circuit formed by stacking a plurality of semiconductor circuits on top of one another, wherein the upper semiconductor circuit of the integrated circuit is provided with an image pickup device whose pixels are arranged in a matrix and are controlled by XY address control so as to transfer the image signals provided by the pixels to a lower semiconductor circuit arranged beneath the upper semiconductor circuit, and the lower semiconductor circuit is provided with a motion detection circuit, which processes the image signals provided by the image pickup device and obtains motion information for individual pixels, and a motion processing circuit, which processes the motion information for individual pixels and provides the processing results. 2. The image forming device according to claim 1, wherein the process performed by the motion processing circuit ...

Подробнее
18-01-2018 дата публикации

Method for displaying a first structure of a body region by means of digital subtraction angiography, evaluation device and angiography system

Номер: DE102015224806B4

Method for displaying a first structure (9) of a body region (8) by means of digital subtraction angiography, comprising the following steps: a) receiving at least one fill image (I2, I3) of the body region (8), generated by means of an angiography device (2), which depicts a second structure (10) of the body region (8) and the first structure (9) at a first contrast agent concentration in the first structure (9) (V1), wherein an empty image (I1), generated by means of the angiography device (2) without a contrast agent in the first structure (9), is received; b) determining a mask image (M) of the body region (8) depicting the second structure (10) (V4), wherein the mask image (M) is determined on the basis of the empty image (I1) and the at least one fill image (I2, I3), and wherein, for determining the mask image (M), the empty image (I1) and the at least one fill image (I2, I3) are temporally averaged by means of a weighting function dependent on the empty image (I1); c) determining ...

Подробнее
17-09-2020 дата публикации

PHOTOMETRIC STEREO OBJECT DETECTION FOR ITEMS LEFT BEHIND IN AN AUTONOMOUS VEHICLE

Номер: DE102020106524A1
Принадлежит:

This disclosure provides photometric stereo object detection for items left behind in a vehicle. Items forgotten by a user leaving an autonomous vehicle are automatically detected by capturing images, including a plurality of differently illuminated images of a target area of a passenger cabin of the vehicle. A plurality of normal vectors are determined for repeated pixels representing the target area in a normal extractor on the basis of the images. A normal-driven map is stored in a first array in response to the plurality of normal vectors. A baseline map is stored in a second array, composed of baseline images of the target area in a nominally clean state. Differences between the normal-driven map and the baseline map that indicate an object not present in the clean state are detected in a comparison unit ...

Подробнее
03-03-2005 дата публикации

Background picture determination method e.g. for traffic surveillance, involves determining background picture and from time T1 free areas of picture are determined

Номер: DE0010334136A1
Принадлежит:

The method involves determining a background picture and from time T1 free areas of a picture are determined. For the free areas the background picture (H1) is provided, with a first mask. At time T1, the covered areas are masked and a second mask is provided at a second time. By comparison of the masks, free areas are determined, and the background picture is extended by the background of the freed areas. An independent claim is included for a device.

Подробнее
26-02-1998 дата публикации

Motion vector selection for real-time motion estimation in moving image sequence

Номер: DE0019633581C1
Принадлежит: SIEMENS AG, SIEMENS AG, 80333 MUENCHEN, DE

The method includes the step of forming differences between a current image section (Sf (k, l) ) and a preceding image section (Sf-1 (k+i, l+j) ) for a number of pixel vectors (k, l) of the image sections and for a number of movement vectors (i, j). The absolute value of a respective difference is associated with a quantised difference (T (k, l, i, j) ), corresponding to a stepped quantisation characteristic with exponentially rising level width and exponentially rising level height. All quantised difference values or all squared quantised difference values are added to a sum value (LPDC (i,j) ) to form a motion vector for each of the number of pixel vectors. The most probable motion vector is determined, by selecting the motion vector with the lowest sum value.

Подробнее
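The abstract above chooses, per block, the motion vector whose sum of quantised absolute differences is lowest. Below is a rough sketch of that selection rule; the patent's stepped quantisation characteristic with exponentially rising level width and height is only approximated here by log2-style binning, and the block size and search range are assumptions.

```python
import numpy as np

def best_motion_vector(prev, curr, top, left, block=16, search=7):
    """Pick the motion vector with the lowest sum of quantised differences.

    `prev`/`curr` are assumed (H, W) grayscale frames; the block at
    (top, left) in `curr` is matched against displaced blocks in `prev`
    within +/- `search` pixels.
    """
    cur_block = curr[top:top + block, left:left + block].astype(np.float32)
    best, best_cost = (0, 0), np.inf
    for i in range(-search, search + 1):        # candidate vertical offsets
        for j in range(-search, search + 1):    # candidate horizontal offsets
            y, x = top + i, left + j
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            diff = np.abs(prev[y:y + block, x:x + block].astype(np.float32) - cur_block)
            quantised = np.floor(np.log2(1.0 + diff))   # coarse, exponentially spaced levels
            cost = quantised.sum()
            if cost < best_cost:
                best_cost, best = cost, (i, j)
    return best, best_cost
```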
07-04-2005 дата публикации

Method for the full-field determination of deformation states in microscopically dimensioned specimen regions, and use of the method

Номер: DE0019614896B4

Method for the full-field determination of deformation states in microscopically dimensioned specimen regions, using digitized images as two-dimensional image matrices with discrete pixel values assigned to a gray-value scale, comprising the following steps: a. generating a first digitized image (B1) of the specimen region (A) in a first state, b. generating a second digitized image (B2) of the specimen region (A) in a second state, which differs from the first state by deformation of the specimen, c. determining the displacement vector (V) of the local deformation by comparing the first digitized image (B1) with the second digitized image (B2) in such a way that reference matrices (Si) are taken from one digitized image serving as the reference image, whose gray-value contents (g) are compared, within a search region (SB) assigned to each reference matrix (Si), with the gray-value contents (g) of comparison matrices ...

Подробнее
06-02-2020 дата публикации

HIGH-RESOLUTION VIRTUAL WHEEL SPEED SENSOR

Номер: DE102019112873A1
Принадлежит:

A method for generating high-resolution virtual wheel speed sensor data includes simultaneously collecting wheel speed sensor (WSS) data from a plurality of wheel speed sensors, each of which senses the rotation of one of a plurality of wheels of a motor vehicle. From at least one camera mounted in the motor vehicle, a camera image of the vehicle surroundings is generated. An optical flow program is applied to discretize the camera image into pixels. A plurality of distance intervals are superimposed on the discretized camera image, each representing a traveled vehicle distance that defines a resolution of each of the plurality of wheel speed sensors. A probability distribution function is created that predicts a traveled distance for the next WSS output.

Подробнее
10-12-1987 дата публикации

CIRCUIT FOR DETECTING PICTURE MOTION IN INTERLACED TELEVISION SIGNAL

Номер: DE0003467281D1
Принадлежит: HITACHI LTD, HITACHI, LTD.

Подробнее
02-07-2009 дата публикации

VIDEO IMAGE TRACK MONITORING SYSTEM

Номер: DE602007001145D1
Принадлежит: SCHOLZ SVEN, SCHOLZ, SVEN

Подробнее
10-07-2013 дата публикации

Image processing

Номер: GB0201309489D0
Автор:
Принадлежит:

Подробнее
16-03-2005 дата публикации

Optical device for controlling a screen cursor

Номер: GB2405925A
Принадлежит:

An apparatus, such as a mouse, for controlling a screen cursor for an electronic device (10) having a display screen includes a light source (2) for illuminating an imaging surface (6), thereby generating reflected images. An optical motion sensor generates digital images from the reflected images at a first rate. The motion sensor (16) is configured to generate movement data based on the digital images. The movement data is indicative of relative motion between the imaging surface and the apparatus. The motion sensor is configured to modify the first frame rate to one of a plurality of alternative frame rates based on a current relative velocity between the imaging surface and the apparatus. In an alternative embodiment the apparatus is configured to enter a low power mode after a period of inactivity, wherein the generated digital images have fewer pixels than in the full power mode. In yet another embodiment, the motion sensor generates a plurality of pairs of digital images with an intra-pair ...

Подробнее
26-09-2001 дата публикации

Analysis of portal image for radiotherapy treatment

Номер: GB2360684A
Принадлежит:

The analysis involves a check on the apparatus setup and a check on the positioning of the patient. To check the apparatus setup a reference radiation field image is compared with a sample radiation field image to determine a difference image. At least the sample image is filtered to remove low frequency variations. A region of interest in the reference and sample images is selected and correlated with each other to determine relative displacements of the reference and sample image. The patient positioning is checked by determining relative displacements between reference image and sample image. The invention can be used for positioning of patient for radiation treatment of cancer to determine if radiation beam is correctly set up and if patient is positioned correctly, analysis of portal image is derived during first treatment session or using digitally reconstructed radiograph.

Подробнее
21-01-2009 дата публикации

IMAGE PROCESSING

Номер: GB0000822953D0
Автор:
Принадлежит:

Подробнее
26-08-2020 дата публикации

Tracking device and tracking program

Номер: GB0002581715A
Принадлежит:

Provided is a tracking device in which a graph generation unit (21) uses each of a plurality of objects detected from a plurality of frames constituting image data as nodes to provide an edge connecting objects between two consecutive frames and generate a tracking graph. A vector calculation unit (22) calculates a speed vector for an object detected from a target frame on the basis of the correspondence between the object detected from the target frame and an object detected from the frame preceding the target frame. A cost calculation unit (23) uses the speed vector as a basis to calculate the cost of an edge connecting objects between the target frame in the tracking graph and the next frame after the target frame. A correspondence identification unit (24) uses the tracking graph and the cost as a basis to identify the correspondence between the objects between the target frame and the next frame.

Подробнее
16-05-2012 дата публикации

Video Surveillance System that Detects Changes by Comparing a Current Image with a Reference Image

Номер: GB0002485390A
Принадлежит:

Video processing apparatus comprises an image buffer 40 to store data relating to images of a video signal captured over a reference period relative to a current image; a comparator 100 to compare the current image with a reference image of the video signal captured within the reference period, so as to detect image features in the current image which represent image changes with respect to a corresponding position in the reference image; a detector 110 to detect whether a detected image feature in the current image has remained at substantially the same image position over a group of images comprising at least a threshold number of images of the video signal preceding the current image and a display arrangement to display the current image with an indication of any such image features and an indication of the duration of the presence of the image feature in respect of that image position.

Подробнее
31-07-2013 дата публикации

Detecting Movement of an Object Between a First and a Second Image

Номер: GB0002498720A
Принадлежит:

Movement of an object between first and second images is detected. A global distortion model (e.g. first order polynomial function modeling rotation, scale and translation) is determined based on the association of one or more points of interest in the first image with corresponding point(s) of interest in the second. Transformation (e.g. backward warping process) is performed on the second image (S22) to align it with the first based on the distortion model. A first observed residual between the first image and the transformed second image is computed (S23) and a predicted residual is computed (S26) based on a quantitative estimation of the variation of aliasing between pixels of the transformed second image (S24) and corresponding pixels of the first image, and on a quantitative evaluation of the level of aliasing within the first image based on the frequency of content detail of pixels of the first image (S25). The observed residual is compared with the predicted residual to identify ...

Подробнее
12-09-2012 дата публикации

Detecting moving vehicles

Номер: GB0201213604D0
Автор:
Принадлежит:

Подробнее
03-10-2018 дата публикации

A system and method for assessing the interior of an autonomous vehicle

Номер: GB0002561062A
Принадлежит:

A vehicle comprises an interior Infrared (IR) camera and optionally a visible light camera. Images of an interior of the vehicle are captured using the cameras both before 302 and after 308 a passenger rides in the vehicle. The IR images from before and after are subtracted to obtain a difference image 310. Pixels above a threshold intensity may be clustered 312. Clusters having an above-threshold size are determined to be anomalies 314. Portions of images from the visible light camera corresponding to the anomalies are sent to a dispatcher 318, who may then clear the vehicle to pick up another passenger 320 or proceed to a cleaning station. Anomalies may be identified based on a combination of the IR images and visible light images. The system is particularly intended for use in an autonomous vehicle, where anomalies such as a spilled liquid can be detected remotely prior to the vehicle picking up the next occupant.

Подробнее
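The abstract above subtracts the before/after IR images, keeps pixels above a threshold intensity, clusters them, and treats clusters above a size threshold as anomalies. A compact sketch of that pipeline, assuming co-registered images and illustrative threshold values, is shown below.

```python
import numpy as np
from scipy import ndimage

def find_anomalies(before_ir, after_ir, intensity_thresh=25, min_cluster_px=40):
    """Difference the before/after IR images and return large change clusters.

    before_ir/after_ir: assumed co-registered (H, W) IR images; both
    thresholds are illustrative. Returns (y0, x0, y1, x1) boxes of clusters
    whose pixel count exceeds min_cluster_px.
    """
    diff = np.abs(after_ir.astype(np.int32) - before_ir.astype(np.int32))
    mask = diff > intensity_thresh                 # pixels above threshold intensity
    labels, n = ndimage.label(mask)                # cluster the remaining pixels
    boxes = []
    for i in range(1, n + 1):
        cluster = labels == i
        if cluster.sum() >= min_cluster_px:        # cluster size above threshold
            ys, xs = np.nonzero(cluster)
            boxes.append((ys.min(), xs.min(), ys.max() + 1, xs.max() + 1))
    return boxes                                   # candidate anomalies (left-behind items)
```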
23-02-1983 дата публикации

METHOD OF AND APPARATUS FOR MOVEMENT PORTRAYAL WITH A RASTER E.G. TELEVISION DISPLAY

Номер: GB0002022357B
Автор:
Принадлежит: BRITISH BROADCASTING CORP

Подробнее
03-01-2018 дата публикации

Detection of lane-splitting motorcycles

Номер: GB0201719003D0
Автор:
Принадлежит:

Подробнее
17-06-2020 дата публикации

Tracking device and tracking program

Номер: GB0202006426D0
Автор:
Принадлежит:

Подробнее
23-04-1980 дата публикации

Television standards conversion

Номер: GB0002031687A
Принадлежит:

A video standards converter in which digital persistence is effected by including first and second picture stores (22). First and second coefficient blocks (300, 302) are connected to the first and second stores respectively and the outputs of the blocks added in an adder (301). Coefficients are selected for use in each block to determine the portion of information passed on to the adder. These coefficients may be varied cyclically and/or in dependence on any picture movement to provide variation in the degree of persistence applied. ...

Подробнее
27-01-2016 дата публикации

Image processing systems and methods

Номер: GB0201521896D0
Автор:
Принадлежит:

Подробнее
08-08-2018 дата публикации

Unattended object monitoring device, unattended object monitoring system equipped with same, and unattended object monitoring method

Номер: GB0201810066D0
Автор:
Принадлежит:

Подробнее
30-04-2006 дата публикации

Enhanced Video based surveillance system.

Номер: AP2006003571A0
Автор: COX GREGORY, ANDREW COLIN
Принадлежит:

Подробнее
30-04-2006 дата публикации

Enhanced Video based surveillance system.

Номер: AP0200603571D0
Автор: COX GREGORY, ANDREW COLIN
Принадлежит:

Подробнее
30-04-2006 дата публикации

Enhanced Video based surveillance system.

Номер: AP0200603571A0
Автор: COX GREGORY, ANDREW COLIN
Принадлежит:

Подробнее
15-07-2007 дата публикации

DETECTION OF DIAGRAM OVERLAYS

Номер: AT0000365354T
Принадлежит:

Подробнее
15-03-2010 дата публикации

EXTRACTOR FOR VISUAL BACKGROUND

Номер: AT0000458231T
Принадлежит:

Подробнее
15-01-2012 дата публикации

PROCEDURE AND SYSTEM FOR THE DETECTION OF A BUILDING DEFORMATION

Номер: AT0000541184T
Принадлежит:

Подробнее
15-08-2008 дата публикации

SYSTEM AND PROCEDURE FOR THE RECOGNITION OF FOREIGN BODIES

Номер: AT0000404964T
Принадлежит:

Подробнее
15-09-2009 дата публикации

METHOD AND ARRANGEMENT FOR DETECTING CHANGES IN A RECORDED SCENE

Номер: AT0000506412B1
Автор:
Принадлежит:

Подробнее
15-02-2011 дата публикации

PREPROCESSING OF GAME VIDEO SEQUENCES FOR TRANSMISSION OVER MOBILE NETWORKS

Номер: AT0000508595B1
Автор:
Принадлежит:

A method and a system for preprocessing game video sequences comprising frames and including a ball or puck as movable game object, for transmission of the video sequences in compressed form; in an initial search (12), frames are searched for the game object on the basis of comparisons of the frames with stored game object features; then, respective frames are compared with preceding frames, to decide on the basis of differences between consecutive frames whether a scene change (14b) has occurred or not, and in the case of a scene change, an initial search is started again; otherwise, tracking of the game object (18) is carried out by determining the positions of the game object in respective frames; at least for one frame, a dominant game playfield color is detected and is replaced by a unitary replacement color so that a playfield representation essentially consists of points of the same color; and the presence, size and/or shape of the detected game object is determined, to possibly ...

Подробнее
15-02-2013 дата публикации

METHOD FOR CAPTURING AND/OR EVALUATING MOTION SEQUENCES

Номер: AT0000506051B1
Автор:
Принадлежит:

The invention relates to a method for detecting and/or evaluating sequences of movements using a digital display optical receiving device. According to said method, characteristics of one or more pixels of the digital camera, such as, for example, the coordinates on the image plane, the colour, the intensity and/or the contrast of adjacent pixels, are used for detection and/or evaluation. Said method is characterised in that the coordinates of the pixel are compared to each other at time intervals and if there is a variation from the set value, they are counted and/or evaluated as an event and/or if the subsequent position of the pixel(s) is/are unmodified, they are counted and/or evaluated as an event with a longer duration.

Подробнее
07-05-2020 дата публикации

System and method for aerial video traffic analysis

Номер: AU2018345330A1
Принадлежит: FB Rice Pty Ltd

A system and method for aerial video traffic analysis are disclosed. A particular embodiment is configured to: receive a captured video image sequence from an unmanned aerial vehicle (UAV); clip the video image sequence by removing unnecessary images; stabilize the video image sequence by choosing a reference image and adjusting other images to the reference image; extract a background image of the video image sequence for vehicle segmentation; perform vehicle segmentation to identify vehicles in the video image sequence on a pixel by pixel basis; determine a centroid, heading, and rectangular shape of each identified vehicle; perform vehicle tracking to detect a same identified vehicle in multiple image frames of the video image sequence; and produce output and visualization of the video image sequence including a combination of the background image and the images of each identified vehicle.

Подробнее
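The pipeline above extracts a background image from the stabilized aerial sequence and then segments vehicles pixel by pixel. One common way to realize those two steps (an assumption for illustration, not necessarily the patented implementation) is a per-pixel temporal median followed by thresholded background subtraction:

```python
import numpy as np

def extract_background(stabilized_frames):
    """Per-pixel temporal median of already-aligned frames as the background.

    `stabilized_frames` is an assumed (T, H, W) or (T, H, W, C) array.
    """
    return np.median(stabilized_frames, axis=0)

def segment_vehicles(frame, background, thresh=30):
    """Pixel-wise foreground mask: large deviation from the background."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    if diff.ndim == 3:                 # collapse colour channels if present
        diff = diff.max(axis=2)
    return diff > thresh               # True where a vehicle (or other mover) is likely
```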
11-06-2020 дата публикации

Golf ball tracking system

Номер: AU2018404876A1
Принадлежит: Halfords IP

A ball tracking system is disclosed which includes a display, sensors, a launch monitor and a processor that receives data from the sensors and launch monitors and outputs a rendering to the display. Specifically the sensors are positioned to detect a plurality of observed ball flight paths, each in the plurality originating from a different ball strike at a different location. The sensors field of view is correlated to three-dimensional space. The launch monitor is positioned to detect one of the ball strikes, and measures the launch parameters of that ball strike. The processor performs several processing steps to match the ball strike detected by the launch monitor to the ball flight paths observed by the sensors, and creates a rendering using both the predicted and observed data.

Подробнее
16-09-2004 дата публикации

System or method for selecting classifier attribute types

Номер: AU2004200298A1
Принадлежит:

Подробнее
28-04-2016 дата публикации

Virtual golf simulation apparatus and sensing device and method used for same

Номер: AU2012231931B2
Принадлежит:

The purpose of the present invention is to provide a device for a virtual golf simulation, a sensing device and method used in same, wherein a virtual golf simulation device can present an image of a simulated trajectory of a ball, so as to sense a ball hit with a golf club by a user using a virtual golf simulation, by acquiring an image of the ball hit and extracting a trajectory of motion for the ball found in the image, and more specifically, by analyzing a two-dimensional trajectory of ball candidates deemed to be a ball on an image obtained from a camera, and thus accurately and quickly calculate physical properties on a fast-moving ball, even when a slow low-resolution camera is used.

Подробнее
21-01-2016 дата публикации

Operations monitoring in an area

Номер: AU2014265298A1
Принадлежит:

An assembly for monitoring an area is provided. The assembly can include two or more cameras sensitive to radiation of distinct wavelength ranges. The fields of view of the cameras can be substantially co-registered at the area to be monitored. The assembly can include a computer system which can process the image data to monitor the area. The computer system can be configured to identify relevant objects present in the area, update tracking information for the relevant objects, and evaluate whether an alert condition is present using the tracking information.

Подробнее
19-01-2017 дата публикации

Infrared projection billiard entertainment system and implementation method thereof

Номер: AU2015283463A1
Принадлежит: Madderns Patent & Trade Mark Attorneys

An infrared projection billiard entertainment system and an implementation method thereof. The system comprises a billiard table (8), an image capture device (7), a projection device (6), a computer (3), a hollow billiard illuminating lamp (4), an infrared light supplementary lamp (5), a projection hanger (1) and an image capture device hanger (2). The computer (3) controls the image capture device (7) to switch to a visible light filter disc and capture the movement image information of billiard balls (9) on the billiard table (8) so as to obtain the information of positions, movement tracks and movement results of the billiard balls (9) in the image information. According to the movement tracks of the billiard balls (9) and/or the movement results of the billiard balls (9), the computer (3) controls the projection device (6) to play corresponding special-effect images on the billiard table (8) and/or the computer (3) plays corresponding sound effects.

Подробнее
01-05-2014 дата публикации

Immortal background modes

Номер: AU2011201582B2
Принадлежит:

IMMORTAL BACKGROUND MODES Disclosed herein are a method and system for updating a visual element model (250) of a scene model (230) associated with a scene captured in an image sequence. The visual element model (250) includes a set of mode models (260, 270) for a visual element (240) corresponding to a location (220) of the scene. The method identifies a first mode model from the set of mode models (260, 270) for the visual element model as a candidate deletion mode model. The method then removes the identified candidate deletion mode model from the set of mode models (260, 270) for the visual element model (250), to update the visual element model (250) for the video sequence, if one of the following two conditions is satisfied: (a) a first temporal attribute associated with the candidate deletion mode model does not satisfy a first threshold (T); or (b) the first temporal attribute associated with the candidate deletion mode model satisfies said first threshold and a second temporal ...

Подробнее
20-08-2001 дата публикации

System and method of facilities and operations monitoring and remote management support

Номер: AU0003693201A
Принадлежит:

Подробнее
23-09-2014 дата публикации

SYSTEMS AND METHODS FOR TISSUE IMAGING

Номер: CA0002683805C

The present invention provides systems and methods for monitoring tissue regions. In particular, the present invention provides systems and methods for detecting changes in tissue regions over a period of time. In some embodiments, the systems and methods of the present invention are used to evaluate the effectiveness of a particular treatment of a tissue region. In some embodiments, the systems and methods employ functional diffusion map algorithms for imaging changes in tissue regions over time and/or in response to therapeutic interventions.

Подробнее
28-07-2011 дата публикации

A METHOD, DEVICE AND SYSTEM FOR DETERMINING THE PRESENCE OF VOLATILE ORGANIC COMPOUNDS (VOC) IN VIDEO

Номер: CA0002787303A1
Принадлежит:

A video based method to detect volatile organic compounds (VOC) leaking out of components used in chemical processes in petrochemical refineries. A leaking VOC plume from a damaged component has distinctive properties that can be detected in real time by an analysis of images from a combination of infrared and optical cameras. Particular VOC vapors have unique absorption bands, which allow these vapors to be detected and distinguished. A method of comparative analysis of images from a suitable combination of cameras, each covering a range in the IR or visible spectrum, is described. VOC vapors also cause the edges present in image frames to lose their sharpness, leading to a decrease in the high frequency content of the image. Analysis of image sequence frequency data from visible and infrared cameras enables detection of VOC plumes. Analysis techniques using adaptive background subtraction, sub-band analysis, threshold adaptation, and Markov modeling are described.

Подробнее
13-03-2018 дата публикации

HIGH DYNAMIC RANGE IMAGE GENERATION AND RENDERING

Номер: CA0002786456C
Автор: SUN, SHIJUN, SUN SHIJUN

Techniques and tools for high dynamic range (HDR) image rendering and generation. An HDR image generating system performs motion analysis on a set of lower dynamic range (LDR) images and derives relative exposure levels for the images based on information obtained in the motion analysis. These relative exposure levels are used when integrating the LDR images to form an HDR image. An HDR image rendering system tone maps sample values in an HDR image to a respective lower dynamic range value, and calculates local contrast values. Residual signals are derived based on local contrast, and sample values for an LDR image are calculated based on the tone-mapped sample values and the residual signals. User preference information can be used during various stages of HDR image generation or rendering.

Подробнее
18-07-1989 дата публикации

Номер: CA1257690C
Автор:
Принадлежит:

Подробнее
19-05-1987 дата публикации

MOTION DETECTING CIRCUIT UTILIZING INTER-FRAME DIFFERENCE SIGNALS OF SUCCESSIVE FIELDS

Номер: CA0001222049A1
Автор: ACHIHA MASAHIKO
Принадлежит:

Подробнее
21-11-2013 дата публикации

DIVIDED-APERTURE INFRA-RED SPECTRAL IMAGING SYSTEM FOR CHEMICAL DETECTION

Номер: CA0003088289A1
Принадлежит: MACRAE & CO.

Подробнее
11-03-2021 дата публикации

NIR MOTION DETECTION SYSTEM AND METHOD

Номер: CA3088627A1
Принадлежит:

A motion sensor for detecting the motion of humans is provided. The motion sensor contains a near infrared (NIR) low resolution image sensor that captures image frames in the near infrared spectrum and a sensor that detects the amount of visible light. In addition, a processor is connected to the visible light sensor and the NIR image sensor. The processor is configured to receive the amount of visible light from the visible light sensor and the images from the NIR low resolution image sensor. The processor is further configured to compare the image frames to detect motion; the sensitivity of the detection of motion is determined by the amount of visible light detected by the visible light sensor. The output has two or more modes based on the detection of motion by the processor.

Подробнее
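The sensor described above compares NIR frames to detect motion and lets the visible-light reading set the sensitivity. The sketch below shows one plausible way to couple the two signals; the scaling rule and thresholds are illustrative assumptions rather than the patented calibration.

```python
import numpy as np

def nir_motion_detected(prev_frame, curr_frame, ambient_light, base_thresh=12.0):
    """Frame-difference motion test whose sensitivity follows ambient light.

    `prev_frame`/`curr_frame` are assumed low-resolution NIR frames (H, W);
    `ambient_light` is a normalized 0..1 reading from the visible-light
    sensor. Brighter scenes raise the threshold, i.e. lower the sensitivity.
    """
    thresh = base_thresh * (1.0 + 2.0 * ambient_light)   # assumed scaling rule
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    changed_fraction = (diff > thresh).mean()
    return changed_fraction > 0.02                       # assumed changed-area criterion
```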
22-01-2018 дата публикации

METHOD OF TRACKING ONE OR MORE MOBILE OBJECTS IN A SITE AND A SYSTEM EMPLOYING SAME

Номер: CA0002974031A1
Принадлежит:

A mobile object tracking system has one or more imaging devices for capturing images of a site, one or more reference wireless devices in wireless communication with one or more mobile wireless devices (MWDs) via one or more wireless signals, and one or more received signal strength models (RSSMs) of the site for the wireless signals. Each MWD is associated with a mobile object and movable therewith. The system tracks the mobile objects by combining the captured images, the received signal strength (RSS) observables of the wireless signals, and the RSSMs. The system may calibrate the RSSMs at an initial stage and during mobile object tracking.

Подробнее
05-08-2010 дата публикации

PROCESSING OF REMOTELY ACQUIRED IMAGING DATA INCLUDING MOVING OBJECTS

Номер: CA0002749474A1
Принадлежит:

A system for processing remotely acquired imagery includes a storage element (316) for receiving first and second sets of imagery data associated metadata defining a first image of a panchromatic image type and a second image of a multi-spectral image type (404). The system also includes a processing element (302) communicatively coupled to the storage element and configured for obtaining a mapping between pixels in the first image and the second image based on the associated metadata (406). The processing element is further configured for generating a third set of imagery data defining a third image of a panchromatic type based on the second set of imagery data (408). The processing element is also configured for generating an alternate mapping for the first and second images based on comparing areas of pixels in the first and third images that are non-corresponding according to the mapping function (426).

Подробнее
08-11-2012 дата публикации

SYSTEMS AND METHODS FOR AUTOMATIC DETECTION AND TESTING OF IMAGES FOR CLINICAL RELEVANCE

Номер: CA0002831377A1
Принадлежит:

Disclosed herein are systems and methods for automatic detection of clinical relevance of images of an anatomical situation. The method includes comparing a first image and a second image and determining whether a difference between the first and second images is at least one of a local type difference and a global type difference. The local type difference is a local difference of the first image and the second image and the global type difference is a global difference between the first image and the second image. The second image is determined as having a clinical relevance if it is determined that the difference between the first image and the second image comprises a local type difference.

Подробнее
07-02-1978 дата публикации

METHOD AND APPARATUS FOR PRODUCING A COMPOSITE STILL PICTURE OF A MOVING OBJECT IN SUCCESSIVE POSITIONS

Номер: CA0001025995A1
Принадлежит:

Подробнее
09-01-2014 дата публикации

SYSTEMS AND METHODS OF CAMERA-BASED BODY-MOTION TRACKING

Номер: CA0002875815A1
Принадлежит: GOWLING LAFLEUR HENDERSON LLP

Systems and methods for camera-based fingertip tracking are disclosed. One such method includes identifying at least one location of a fingertip in at least one of the video frames, and mapping the location to a user input based on the location of the fingertip relative to a virtual user input device.

Подробнее
27-07-2006 дата публикации

DEVICES AND METHODS FOR IDENTIFYING AND MONITORING CHANGES OF A SUSPECT AREA ON A PATIENT

Номер: CA0002856932A1
Принадлежит:

A method of comparing at least two images, each image capturing a suspect area, the method comprising: identifying a reference item in the at least two images; measuring an attribute of the reference item in a first image; transforming a second image based on the measured attribute of the reference item in the first image using a transformation algorithm performed by a computing system, wherein a reference item in the second image is transformed to correspond with an orientation and size of the reference item in the first image; measuring an attribute of the suspect areas in both images; and comparing the respective measured attributes of the respective suspect areas.

Подробнее
15-12-2005 дата публикации

Automatic detection of a person or object in an access control system uses image processing of digital video camera images

Номер: CH0000695123A5

A security access system provides automatic detection of a person or an object in an entry area [1]. Images are obtained against a coloured background by a digital video camera [3] and are transmitted to an image analysis system [5]. Identification of the person or object is made and data transmitted to an access control computer and alarm system [6].

Подробнее
26-04-2019 дата публикации

Range hood capable of identifying harmful substances in oil smoke

Номер: CN0109681937A
Принадлежит:

Подробнее
28-02-2020 дата публикации

Landmark ship identity tracing method based on video and AIS information fusion

Номер: CN0110852985A
Принадлежит:

Подробнее
11-11-2009 дата публикации

Method and device for detecting static targets

Номер: CN0101576952A
Принадлежит:

The invention provides a method and a device for detecting static targets. The method comprises the following steps: step one, dividing a monitoring area into a plurality of monitoring area blocks and obtaining the image blocks corresponding to each monitoring area block in each video frame; step two, calculating the characteristic values of the image blocks; step three, compiling real-time statistics of the characteristic values of the image blocks corresponding to each monitoring area block to obtain a statistical result for each monitoring area block; and step four, according to the statistical result, initializing the background images of the monitoring area blocks, detecting static targets in the video images, or updating the background images. The invention can analyze monitored scenes in real time based on the statistics, is not easily disturbed by noise, and can handle the situation in which the static target is occluded ...

Подробнее
14-12-2018 дата публикации

3D path detection system

Номер: CN0109001721A
Принадлежит:

Подробнее
10-07-2009 дата публикации

PROCESS OF DETECTION Of EVENTS BY VIDEOSURVEILLANCE

Номер: FR0002872326B1
Принадлежит: FOXSTREAM

Подробнее
19-04-2019 дата публикации

METHOD FOR PROCESSING IMAGES FOR THE SUPPRESSION OF BRIGHT AREAS

Номер: FR0003065560B1
Принадлежит:

Подробнее
06-05-2016 дата публикации

MOTION ESTIMATION OF AN IMAGE

Номер: FR0003001073B1
Принадлежит: SAGEM DEFENSE SECURITE

Подробнее
05-04-2002 дата публикации

Industrial site/public place/traffic route remote surveillance system having current image reference image compared with block transformation measuring spatial activity providing digital intrusion signal/reference comparing.

Номер: FR0002814895A1
Принадлежит:

The invention relates to an intrusion detection method for a remote surveillance system using at least one video camera providing a video image, characterized in that, for a resulting image obtained by subtracting pixel by pixel the values of a current image and a reference image, it applies a block transform in at least one characteristic zone of the image comprising N blocks, followed by the calculation of an average spatial activity of the n blocks of said zone, denoted Az, which constitutes a digital intrusion detection signal.

Подробнее
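The (translated) abstract above applies a block transform to the current-minus-reference difference image and uses the mean spatial activity of the blocks, Az, as the intrusion signal. The sketch below follows that outline with an 8x8 DCT per block and the mean absolute AC coefficient as the activity measure; the block size, the exact activity definition, and any decision threshold are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def block_activity(current, reference, block=8):
    """Mean spatial activity of DCT blocks of the difference image."""
    diff = current.astype(np.float32) - reference.astype(np.float32)
    h, w = diff.shape
    rows, cols = h // block, w // block
    diff = diff[:rows * block, :cols * block]
    activity = 0.0
    for y in range(0, rows * block, block):
        for x in range(0, cols * block, block):
            tile = diff[y:y + block, x:x + block]
            coeffs = dct(dct(tile, axis=0, norm='ortho'), axis=1, norm='ortho')
            coeffs[0, 0] = 0.0                     # drop the DC term
            activity += np.abs(coeffs).mean()      # AC energy of this block
    return activity / (rows * cols)                # average over the N blocks

# A simple intrusion decision could compare this activity value, Az,
# against a threshold calibrated on empty-scene footage (assumption).
```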
28-11-2017 дата публикации

Image processing apparatus and image processing method

Номер: KR0101802146B1
Принадлежит: Canon Kabushiki Kaisha

... The image processing apparatus comprises a video input unit; a region division unit that divides the image acquired by the video input unit into a plurality of regions, each containing pixels of similar attributes; a feature extraction unit that extracts features from each of the divided regions; a background model storage unit that stores in advance a background model generated from background features; and a feature comparison unit that compares the extracted features with the features in the background model and determines, for each of the regions, whether or not the region is background.

Подробнее
28-05-2018 дата публикации

DEVICE AND METHOD FOR DETECTING MODULATION OF LOCAL REGION IN IMAGE

Номер: KR101861708B1

The present invention relates to a device and a method for detecting modulation, which detect modulation of a local region in an image. More specifically, the present invention provides a device and a method for detecting modulation which detect a difference of the local region in a plurality of images having different sizes and directionalities and determine whether the image is modulated. The device for detecting modulation comprises an image input part, a feature point detecting part, a corresponding relationship setting part, a corresponding relationship selecting part, an image correcting part, an image segmenting part, a similarity calculating part, and a modulation determining part.

Подробнее
13-01-2005 дата публикации

METHOD AND DEVICE FOR DETERMINING THE POSITION OF AN OBJECT WITHIN A GIVEN AREA

Номер: KR0100465608B1
Автор:
Принадлежит:

Подробнее
11-03-2019 дата публикации

Номер: KR0101955506B1
Автор:
Принадлежит:

Подробнее
13-10-2015 дата публикации

OBJECT COUNTER AND METHOD FOR COUNTING OBJECTS

Номер: KR0101556693B1
Принадлежит: Axis AB

... The present invention relates to an object counter and a method for counting objects. The method comprises capturing images corresponding to moving images of a predetermined counting view, detecting a motion region in the moving images of the predetermined counting view, calculating a motion region velocity value indicating the movement speed of the motion region, repeatedly defining a contribution zone based on a predetermined counting boundary, the motion region velocity value and a contribution time interval, repeatedly retrieving and recording a sub-area value representing the size of the area of the motion region contained in the defined contribution zone, generating a total area value by adding a plurality of recorded sub-area values, and estimating the number of objects that have passed the counting boundary by dividing the total area value by a reference object area value, wherein the contribution time interval corresponds to the time interval between two consecutive operations of retrieving a sub-area value.

Подробнее
16-02-2015 дата публикации

Номер: KR1020150017370A
Автор:
Принадлежит:

Подробнее
19-02-2019 дата публикации

Method and apparatus for detecting noise in satellite images and restoring the images

Номер: KR1020190016722A
Автор: 김태정, 안도섭, 정일구
Принадлежит:

... An image restoration method is disclosed. An image restoration method according to an embodiment of the present invention may include: acquiring images of the same object using a plurality of different channels; identifying, among the multispectral images, a noise image containing noise; determining, among the multispectral images, reference images required for restoring the noise image; detecting a noise region of the noise image using the relationship between the noise image and the reference images; and restoring the detected noise region using pixels of the reference images.

Подробнее
04-03-2015 дата публикации

Номер: KR1020150022076A
Автор:
Принадлежит:

Подробнее
30-10-2017 дата публикации

DISPLAY APPARATUS, DISPLAY CONTROL METHOD, DISPLAY CONTROL PROGRAM, AND DISPLAY SYSTEM

Номер: SG11201707278SA
Принадлежит:

Подробнее
27-04-2018 дата публикации

IMAGE-PROCESSING DEVICE

Номер: SG11201801781RA
Принадлежит:

Подробнее
09-06-2020 дата публикации

Use of a reference image to detect a road obstacle

Номер: US0010678259B1
Принадлежит: Waymo LLC, WAYMO LLC

Methods and systems for use of a reference image to detect a road obstacle are described. A computing device configured to control a vehicle, may be configured to receive, from an image-capture device, an image of a road on which the vehicle is travelling. The computing device may be configured to compare the image to a reference image; and identify a difference between the image and the reference image. Further, the computing device may be configured to determine a level of confidence for identification of the difference. Based on the difference and the level of confidence, the computing device may be configured to modify a control strategy associated with a driving behavior of the vehicle; and control the vehicle based on the modified control strategy.

Подробнее
30-04-2019 дата публикации

Methods and systems for detection of artifacts in a video after error concealment

Номер: US0010275894B2

A method and system for detection of artifacts in a video after application of an error concealment strategy by a decoder is disclosed. An absolute difference image is determined by subtraction of a current image and a previously decoded image. A threshold marked buffer is determined to replace the pixel values of the absolute difference image with a first pixel value or a second pixel value, based on comparison of pixel values with a first predefined threshold. A candidate region is determined by determining a pair of edges of the threshold marked buffer having length above a second predefined threshold, distance between them above a third predefined threshold, and pixel values between them in the absolute difference image, less than a fourth predefined threshold. Validation of candidate region is based on comparison of characteristics of the candidate region with characteristics of the current image and/or previously decoded images.

Подробнее
13-09-2012 дата публикации

SECURITY SYSTEM AND METHOD

Номер: US20120229630A1
Принадлежит: HON HAI PRECISION INDUSTRY CO., LTD.

A computing system displays current real-time images of an area monitored by a camera. The computing system includes a motion detection unit. The motion detection unit determines the number of varied pixels in the real-time image compared with a previous image, and determines the ratio of the number of varied pixels to the total number of pixels in the real-time image. If the ratio is greater than a predefined number, the computing system increments an abnormal pixel count by one. If the abnormal pixel count is greater than a maximum abnormal pixel number, the computing system starts an alarm device connected to the computing system.

Подробнее
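The monitoring logic above is a simple counter: compute the fraction of varied pixels per frame, count frames where it exceeds a ratio limit, and raise the alarm once the count passes a maximum. A compact stateful sketch, with all three limits chosen as illustrative assumptions, follows.

```python
import numpy as np

class AbnormalPixelMonitor:
    """Counts frames whose fraction of changed pixels exceeds a ratio limit.

    Mirrors the general idea of the abstract above: compare each real-time
    frame with the previous one and alarm once too many "abnormal" frames
    have accumulated. The three limits below are illustrative assumptions.
    """

    def __init__(self, ratio_limit=0.10, max_abnormal_count=5, pixel_thresh=20):
        self.ratio_limit = ratio_limit
        self.max_abnormal_count = max_abnormal_count
        self.pixel_thresh = pixel_thresh
        self.abnormal_count = 0
        self.prev = None

    def update(self, frame):
        """Returns True when the alarm condition is reached."""
        frame = frame.astype(np.int32)
        if self.prev is not None:
            varied = np.abs(frame - self.prev) > self.pixel_thresh
            ratio = varied.mean()                  # varied pixels / total pixels
            if ratio > self.ratio_limit:
                self.abnormal_count += 1           # increment the abnormal count
        self.prev = frame
        return self.abnormal_count > self.max_abnormal_count
```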
31-07-2007 дата публикации

Mobile body surrounding surveillance apparatus, mobile body surrounding surveillance method, control program, and readable recording medium

Номер: US0007250593B2

A mobile body surrounding surveillance apparatus comprises an image capturing section for capturing an image of a surrounding of a mobile body, a setting section for setting a landscape, band-like particular region parallel to a frame image with respect to image data captured by the image capturing section, an extraction section for taking image data of the particular region every one or more frame images captured in time series by the image capturing section, and extracting movement vector information based on the image data of the particular region, and a detection section for detecting another mobile body present in the surrounding of the mobile body based on the movement vector information.

Подробнее
11-09-2012 дата публикации

Inter-mode region-of-interest video object segmentation

Номер: US0008265392B2

The disclosure is directed to techniques for automatic segmentation of a region-of-interest (ROI) video object from a video sequence. ROI object segmentation enables selected ROI or foreground objects of a video sequence that may be of interest to a viewer to be extracted from non-ROI or background areas of the video sequence. Examples of a ROI object are a human face or a head and shoulder area of a human body. The disclosed techniques include a hybrid technique that combines ROI feature detection, region segmentation, and background subtraction. In this way, the disclosed techniques may provide accurate foreground object generation and low-complexity extraction of the foreground object from the video sequence. A ROI object segmentation system may implement the techniques described herein. In addition, ROI object segmentation may be useful in a wide range of multimedia applications that utilize video sequences, such as video telephony applications and video surveillance applications.

Подробнее
23-10-2012 дата публикации

Video image monitoring system

Номер: US0008294765B2

This is a video image monitoring system which can effectively detect a mobile object appearing in a captured video image even if a background image and other camera condition change continuously. The video image monitoring system comprises: a video-image-capturing section 100 for putting out image data based on a video image signal obtained by using a camera 10; a mobile-object-candidate-area-detecting section 101 for extracting a candidate area of a mobile object from the image data; and a mobile-object-detecting section 102 for determining whether the candidate area is the mobile object. The mobile-object-candidate-area-detecting section 101 quantizes a brightness gradient direction of the image data, and calculates a spatio-temporal histogram which represents the frequency of a direction code appearing in a predetermined spatio-temporal space. After that, the mobile-object-candidate-area-detecting section 101 calculates a statistical spatio-temporal space evaluation value of the spatio-temporal ...

Подробнее
18-10-2018 дата публикации

METHOD AND SYSTEM FOR DETERMINING THE VELOCITY OF A MOVING FLUID SURFACE

Номер: US20180299478A1
Принадлежит:

A method for determining the velocity of a moving fluid surface, which comprises the following steps S1 to S5: S1) taking a sequence of images of the moving fluid surface by at least one camera; S2) comparing a first image from the sequence with a second image from the sequence in order to distinguish moving patterns of the fluid surface from non-moving parts and to obtain a first processed image (im_1f) comprising the moving patterns; S3) comparing a third image from the sequence with a fourth image from the sequence in order to distinguish moving patterns of the fluid surface from non-moving parts and to obtain a second processed image (im_2f) comprising the moving patterns; S4) comparing the first and second processed images in order to determine the spatial displacements of the moving patterns; and S5) determining from the spatial displacements the velocity.

Подробнее
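Step S4 above determines the spatial displacement of the moving patterns between the two processed images, and S5 converts it to a velocity. Phase correlation is one standard way to estimate such a displacement (an assumption here, not necessarily the method of the application); the sketch also assumes the frame interval and the ground sampling distance are known.

```python
import numpy as np

def surface_velocity(im_1f, im_2f, dt, metres_per_pixel):
    """Estimate velocity from the displacement of patterns between two images.

    Plain phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum gives the integer-pixel shift between `im_1f` and
    `im_2f` (assumed same-sized float arrays), converted to a speed using
    the frame interval `dt` and the scale `metres_per_pixel`.
    """
    F1 = np.fft.fft2(im_1f)
    F2 = np.fft.fft2(im_2f)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12                 # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak coordinates to signed shifts
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    speed = np.hypot(dx, dy) * metres_per_pixel / dt
    return (dx, dy), speed
```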
14-06-2012 дата публикации

System and method for measuring flight information of a spherical object with high-speed stereo camera

Номер: US20120148099A1

Disclosed is a method for automatically extracting centroids and features of a spherical object required to measure a flight speed, a flight direction, a rotation speed, and a rotation axis of the spherical object in a system for measuring flight information of the spherical object with a high-speed stereo camera. In order to automatically extract the centroids and the features of the spherical object, the present invention may automatically extract the centroid and the feature by detecting only the pixels of the foreground image including the spherical object, generated by excluding the motionless background image from each camera image, sorting the interconnected pixels among the pixels of the detected foreground image into independent pixel clusters, and then using, in the extraction of the centroid and the feature, only the one pixel cluster having a size similar to the actual spherical object.

Подробнее
14-06-2012 дата публикации

Method and system for automatic object detection and subsequent object tracking in accordance with the object shape

Номер: US20120148103A1

A method and system for automatic object detection and subsequent object tracking in accordance with the object shape in digital video systems having at least one camera for recording and transmitting video sequences. In accordance with the method and system, an object detection algorithm based on a Gaussian mixture model and expanded object tracking based on Mean-Shift are combined with each other in object detection. The object detection is expanded in accordance with a model of the background by improved removal of shadows, the binary mask generated in this way is used to create an asymmetric filter core, and then the actual algorithm for the shape-adaptive object tracking, expanded by a segmentation step for adapting the shape, is initialized, and therefore a determination at least of the object shape or object contour or the orientation of the object in space is made possible.

Подробнее
05-07-2012 дата публикации

Rain detection apparatus and method

Номер: US20120169877A1
Принадлежит: TRW Ltd

A rain detection apparatus includes a camera that views a surface and a processor that captures an image from the camera. The processor generates a signal indicative of rain on the surface from information contained in the captured image and optionally drives a surface cleaning apparatus in response thereto. The apparatus captures images focused at a plurality of distances. The processor includes an edge detector that detects edges visible in the captured image and a difference structure that calculates the difference between the number of edges visible between differing images. The edge detector disregards edges close to areas of light larger than the largest raindrop that is desired or expected to be detected. The apparatus optionally includes a backlight, and the difference in numbers of edges between frames with and without the backlight illuminated are used to distinguish between background features and rain on the surface.

Подробнее
12-07-2012 дата публикации

Motion detection using depth images

Номер: US20120177254A1
Принадлежит: Microsoft Corp

A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application.
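A rough sketch of the reference-image and motion-image idea, assuming a simple exponential moving average for the reference and an arbitrary depth-change threshold; neither value comes from the publication.

```python
import numpy as np

class DepthMotionDetector:
    """Keeps a moving-average reference depth image and emits a motion image
    for every new depth frame (a rough sketch of the idea in the abstract)."""

    def __init__(self, alpha=0.05, thresh_mm=40):
        self.alpha = alpha          # weight of the newest frame in the average
        self.thresh_mm = thresh_mm  # depth change regarded as motion
        self.reference = None

    def update(self, depth):
        depth = depth.astype(float)
        if self.reference is None:
            self.reference = depth.copy()
        # motion image: where the new depth departs from the reference
        motion = np.abs(depth - self.reference) > self.thresh_mm
        # the new depth image is also used to update the reference
        self.reference = (1 - self.alpha) * self.reference + self.alpha * depth
        return motion

# synthetic depth stream: flat wall at 2000 mm, a person steps in at 1200 mm
det = DepthMotionDetector()
wall = np.full((60, 80), 2000.0)
for _ in range(20):
    det.update(wall)
scene = wall.copy()
scene[20:50, 30:45] = 1200.0
motion = det.update(scene)
print("moving pixels grouped for tracking:", int(motion.sum()))  # 30*15 = 450
```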

Подробнее
04-10-2012 дата публикации

Image processing apparatus, image processing method, and recording medium capable of identifying subject motion

Номер: US20120249593A1
Автор: Kouichi Nakagome
Принадлежит: Casio Computer Co Ltd

An image capturing apparatus 1 includes an image obtaining section 51, a difference image generating section 54, an enhanced image generating section 55, a Hough transform section 562, and a position identifying section 153. The image obtaining section 51 obtains a plurality of image data where subject motion is captured continuously. The difference image generating section 54 generates difference image data between a plurality of image data temporally adjacent to each other, from the plurality of image data obtained by the image obtaining section 51. The enhanced image generating section 55 generates image data for identifying the subject motion, from the difference image data generated by the difference image generating section 54. The position identifying section 153 identifies a change point of the subject motion, based on the image data generated by the enhanced image generating section 55.

Подробнее
20-12-2012 дата публикации

Motion Detection Method, Program and Gaming System

Номер: US20120322551A1
Принадлежит: Omnimotion Technology Ltd

This invention relates to a method of processing an image, specifically an image taken from a web camera. The processed image is thereafter preferably used as an input to a game. The image is simplified to a point whereby a very limited number of region bounded boxes are provided to a game environment and these region bounded boxes are used to determine the intended user input. By implementing this method, the amount of processing required is decreased and the speed at which the game may be rendered is increased thereby providing a richer game experience for the player. Furthermore, the method of processing the image is practically universally applicable and can be used with a wide range of web cameras thereby obviating the need for additional specialist equipment to be purchased and allowing the games to be web based.

Подробнее
18-04-2013 дата публикации

Three-frame difference moving target acquisition system and method for target track identification

Номер: US20130094694A1
Принадлежит: Raytheon Co

Embodiments of a target-tracking system and method of determining an initial target track in a high-clutter environment are generally described herein. The target-tracking system may register image information of first and second warped images with image information of a reference image. Pixels of the warped images may be offset based on the outputs of the registration to align each warped image with the reference image. A three-frame difference calculation may be performed on the offset images and the reference image to generate a three-frame difference output image. Clutter suppression may be performed on the three-frame difference image to generate a clutter-suppressed output image for use in target-track identification. The clutter suppression may include performing a gradient operation on a background image to remove any gradient objects.
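The three-frame difference itself (applied after the registration and offsetting described above) is a short computation; below is a sketch with an assumed threshold and synthetic, already-aligned frames.

```python
import numpy as np

def three_frame_difference(prev_warped, reference, next_warped, thresh=20):
    """Pixels that changed both from the previous and to the next frame;
    this suppresses the 'ghost' left behind by simple two-frame differencing."""
    d1 = np.abs(reference.astype(float) - prev_warped.astype(float)) > thresh
    d2 = np.abs(next_warped.astype(float) - reference.astype(float)) > thresh
    return d1 & d2

# synthetic target moving left to right over a static cluttered background
rng = np.random.default_rng(2)
bg = rng.integers(40, 60, (100, 100)).astype(np.uint8)
frames = []
for k in range(3):
    f = bg.copy()
    f[45:55, 20 + 10 * k:30 + 10 * k] = 255    # 10x10 target, 10 px per frame
    frames.append(f)

mask = three_frame_difference(*frames)
ys, xs = np.nonzero(mask)
print("target columns:", xs.min(), "-", xs.max())   # 30 - 39: middle frame only
```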

Подробнее
23-05-2013 дата публикации

Real-Time Player Detection From A Single Calibrated Camera

Номер: US20130128034A1
Автор: G. Peter K. Carr
Принадлежит: Disney Enterprises Inc

A method for detecting the location of objects from a calibrated camera involves receiving an image capturing an object on a surface from a first vantage point; generating an occupancy map corresponding to the surface; filtering the occupancy map using a spatially varying kernel specific to the object shape and the first vantage point, resulting in a filtered occupancy map; and estimating the ground location of the object based on the filtered occupancy map.

Подробнее
23-05-2013 дата публикации

Motion detection using depth images

Номер: US20130129155A1
Принадлежит: Microsoft Corp

A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application.

Подробнее
04-07-2013 дата публикации

Diagnosing method of golf swing

Номер: US20130172094A1

A camera 10 photographs a golf player, and the golf club, as the player swings the club to hit a golf ball. Image data is obtained by the photographing. A calculating part 16 extracts a plurality of frames from the image data and determines, from those frames, a check frame in which the golf player is in a predetermined posture. The calculating part 16 then determines a contour of the golf player from the check frame and assesses the swing from that contour. In assessing the swing, an extreme value constituting the contour is determined, a feature point is determined from the extreme value, and the swing is diagnosed using the feature point.

Подробнее
31-10-2013 дата публикации

Method, device and system for determining the presence of volatile organic compounds (voc) in video

Номер: US20130286213A1
Принадлежит: Delacom Detection Systems LLC

A video based method to detect volatile organic compounds (VOC) leaking out of components used in chemical processes in petrochemical refineries. A leaking VOC plume from a damaged component has distinctive properties that can be detected in real time by an analysis of images from a combination of infrared and optical cameras. Particular VOC vapors have unique absorption bands, which allow these vapors to be detected and distinguished. A method of comparative analysis of images from a suitable combination of cameras, each covering a range in the IR or visible spectrum, is described. VOC vapors also cause the edges present in image frames to lose their sharpness, leading to a decrease in the high frequency content of the image. Analysis of image sequence frequency data from visible and infrared cameras enables detection of VOC plumes. Analysis techniques using adaptive background subtraction, sub-band analysis, threshold adaptation, and Markov modeling are described.
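The loss of edge sharpness can be monitored with a per-frame high-frequency statistic; the Laplacian-energy proxy and the drop ratio below are assumptions used only to illustrate the idea, not details from the publication.

```python
import numpy as np

def high_freq_energy(frame):
    """Mean squared discrete Laplacian: a cheap proxy for the high-frequency
    content that a VOC plume suppresses by softening edges."""
    f = frame.astype(float)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return float(np.mean(lap ** 2))

def plume_suspected(reference_frame, current_frame, drop_ratio=0.5):
    """Flag a frame whose edge energy fell well below the clean reference."""
    return high_freq_energy(current_frame) < drop_ratio * high_freq_energy(reference_frame)

# synthetic check: a sharp checkerboard vs. the same scene blurred by a "plume"
tile = np.kron(np.indices((16, 16)).sum(axis=0) % 2, np.ones((8, 8))) * 200.0
blurred = tile.copy()
for _ in range(4):                      # crude box blur standing in for the haze
    blurred = (blurred +
               np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0) +
               np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0
print(plume_suspected(tile, blurred))   # True: high-frequency content dropped
```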

Подробнее
13-03-2014 дата публикации

Devices and Methods for Augmented Reality Applications

Номер: US20140071241A1
Автор: Ning Bi, Ruiduo Yang
Принадлежит: Qualcomm Inc

In a particular embodiment, a method includes evaluating, at a mobile device, a first area of pixels to generate a first result. The method further includes evaluating, at the mobile device, a second area of pixels to generate a second result. Based on comparing a threshold with a difference between the first result and the second result, a determination is made that the second area of pixels corresponds to a background portion of a scene or a foreground portion of the scene.

Подробнее
07-01-2021 дата публикации

FLUORESCENCE BASED FLOW IMAGING AND MEASUREMENTS

Номер: US20210000352A1
Принадлежит:

Fluorescence based tracking of a light-emitting marker in a bodily fluid stream is conducted by: providing a light-emitting marker into a fluid stream; establishing field of view monitoring by placement of a sensor, such as a high speed camera, at a region of interest; recording image data of light emitted by the marker at the region of interest; determining time characteristics of the light output of the marker traversing the field of view; and calculating flow characteristics based on the time characteristics. Furthermore, generating a velocity vector map may be conducted using a cross correlation technique, leading and falling edge considerations, subtraction, and/or thresholding. 1. A system for fluorescence based tracking of a light-emitting marker in a bodily fluid stream, the system comprising: a delivery apparatus configured to provide a light-emitting marker into the bodily fluid stream; a camera configured to monitor a region of interest traversed by the bodily fluid stream; and a computing device configured to: record motion video data generated by the camera; determine time characteristics of the recorded data; and calculate flow characteristics based on the time characteristics. 2. The system according to claim 1, wherein the computing device is further configured to: divide the motion video data into kernels; identify which of the kernels receive some portion of the light-emitting marker using an intensity threshold; compute, for each identified kernel, an intensity signal data set comprising information of mean light intensity versus time; perform smoothing on each intensity signal data set; and calculate a lag time between the intensity signal data sets of neighboring identified kernels using cross-correlation. 3. The system according to claim 1, wherein the computing device is further configured to: using a spatial resolution and the lag time, calculate velocity vectors; sum the velocity vectors of neighboring kernels to create a resultant velocity ...
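The kernel / intensity-signal / lag-time chain of claims 2 and 3 can be sketched on synthetic signals; the smoothing window, kernel spacing and frame rate below are assumed calibration values, not figures from the application.

```python
import numpy as np

def lag_frames(sig_a, sig_b):
    """Lag (in frames) at which sig_b best matches sig_a, via cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

def smooth(sig, win=5):
    return np.convolve(sig, np.ones(win) / win, mode="same")

# synthetic mean-intensity signals of two neighbouring kernels: the marker
# bolus reaches the downstream kernel 6 frames later
t = np.arange(200)
upstream   = np.exp(-0.5 * ((t - 80) / 8.0) ** 2)
downstream = np.exp(-0.5 * ((t - 86) / 8.0) ** 2)

lag = lag_frames(smooth(upstream), smooth(downstream))
kernel_spacing_mm, frame_rate_hz = 0.5, 500.0         # assumed calibration
velocity = kernel_spacing_mm * frame_rate_hz / lag    # mm/s
print(f"lag = {lag} frames, flow speed ≈ {velocity:.0f} mm/s")
```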

Подробнее
03-01-2019 дата публикации

Multi-human tracking system and method with single kinect for supporting mobile virtual reality application

Номер: US20190004597A1
Принадлежит: Shandong University

The invention discloses a multi-human tracking system and method with single Kinect for supporting mobile virtual reality applications. The system can complete the real-time tracking of users occluded in different degrees with a single Kinect capture device to ensure smooth and immersive experience of players. The method utilizes the principle that the user's shadow is not occluded when the user is occluded under certain lighting conditions, and converts the calculation of the motion of the occluded user into a problem of solving the movement of the user's shadow, and can accurately detect the position of each user, rather than just predicting the user's position, thereby actually realizing tracking.

Подробнее
07-01-2021 дата публикации

IDENTIFICATION AND CLASSIFICATION OF TRAFFIC CONFLICTS

Номер: US20210004607A1
Принадлежит:

A practical method and system for transportation agencies (federal, state, and local) to monitor and assess the safety of their roadway networks in real time based on traffic conflict events such that corrective actions can be proactively undertaken to keep their roadway systems safe for travelling public. The method and system also provides a tool for evaluating the performance of autonomous vehicle/self-driving car technologies with respect to safety and efficiency. 1. A device comprising one or more processors and memory storing instructions that , when executed by the one or more processors , cause the device to:receive, from at least one device, first data associated with a roadway;generate, based at least on the first data, second data indicative of the roadway, where the second data comprises a transformation of the first data;determine, based at least on the second data, movement of an object relative to the roadway; generating, based at least on the parameter of the at least one device, a three-dimensional model of at least a portion of one or more of the roadway or object; and', 'determining, using the three-dimensional model, the dimensions of the object;, 'determine, based at least on a parameter of the at least one device, dimensions of the object, wherein the determining the dimensions of the object comprisesdetermine, using the dimensions of the object, a timing of the movement of the object; anddetermine, based at least on the timing of the movement of the object, a conflict associated with the roadway.2. The device of claim 1 , wherein the first data comprises a three-dimensional representation of the roadway and the second data comprises a two-dimensional representation of the roadway.3. The device of claim 1 , where generating the second data comprises transforming the first data to a ground coordinate system using a plurality of ground reference points.4. The device of claim 1 , wherein the parameter of the at least one device comprises at least ...

Подробнее
13-01-2022 дата публикации

METHOD AND SYSTEM FOR AUGMENTED IMAGING USING MULTISPECTRAL INFORMATION

Номер: US20220012874A1
Принадлежит:

Disclosed herein is a method of generating augmented images of tissue of a patient, wherein each augmented image associates at least one tissue parameter with a region or pixel of the image of the tissue, said method comprising the following steps: obtaining one or more multispectral images of said tissue, and applying a machine learning based regressor or classifier, or an out of distribution (OoD) detection algorithm for determining information about the closeness of the multispectral image or parts of said multispectral image to a given training data set, or a change detection algorithm to at least a part of said one or more multispectral images, or an image derived from said multispectral image, or to a time sequence of multispectral images, parts of multiple images or images derived therefrom, to thereby derive one or more tissue parameters associated with image regions or pixels of the corresponding multispectral image. 1. A method of generating augmented images of tissue of a patient , wherein each augmented image associates at least one tissue parameter with a region or pixel of the image of the tissue , said method comprising the following steps: a machine learning based regressor or classifier, or', 'an out of distribution (OoD) detection algorithm for determining information about the closeness of the multispectral image or parts of said multispectral image to a given training data set, or', 'a change detection algorithm, 'obtaining one or more multispectral images of said tissue, and applying'}to at least a part of said one or more multispectral images, or an image derived from said multispectral image, or to a time sequence of multispectral images, parts of multiple images or images derived therefrom, to thereby derive one or more tissue parameters associated with image regions or pixels of the corresponding multispectral image.240.-. (canceled)41. The method of claim 1 , further comprising applying out of distribution (OoD) detection and applying said ...

Подробнее
02-01-2020 дата публикации

Automatic Crop Health Change Detection and Alerting System

Номер: US20200005038A1
Принадлежит: Farmers Edge Inc

A method and system for crop health change monitoring is provided. The method includes acquiring a companion image of a crop growing within a field at a first point in time, acquiring a master image of the crop growing within the field at a second point in time, and computing, using a processor, vegetation indices using the master image and the companion image, determining, using the processor, regions of change within the master image using the vegetation indices and generating an alert indicative of a change in crop condition of the crop growing within the field, and communicating the alert indicative of the change in crop condition over a network to a computing device configured to receive the alert.
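One common way to realise the vegetation-index comparison is an NDVI difference between the companion (earlier) and master (later) acquisitions; the sketch below uses synthetic red/NIR bands and assumed change and area thresholds, not values from the publication.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + eps)

def crop_change_alert(master, companion, drop_thresh=0.15, area_frac=0.02):
    """Flag regions where NDVI dropped between the two acquisition dates and
    raise an alert if the changed area exceeds a fraction of the field."""
    delta = ndvi(*master) - ndvi(*companion)
    changed = delta < -drop_thresh              # vegetation condition worsened
    alert = changed.mean() > area_frac
    return changed, alert

# synthetic field: healthy everywhere earlier, a stressed patch later
shape = (100, 100)
companion = (np.full(shape, 60.0), np.full(shape, 200.0))    # (red, NIR), healthy
red2 = np.full(shape, 60.0); nir2 = np.full(shape, 200.0)
red2[40:70, 40:70], nir2[40:70, 40:70] = 120.0, 140.0        # stressed patch
master = (red2, nir2)

changed, alert = crop_change_alert(master, companion)
print("changed pixels:", int(changed.sum()), "alert:", alert)  # 900, True
```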

Подробнее
04-01-2018 дата публикации

Atlas-Based Determination of Tumor Growth Direction

Номер: US20180005378A1
Принадлежит:

The invention relates to a method for determining the spatial development of tumor tissue, by acquiring patient medical image data describing sequences of patient medical images of tumors in parts of patient bodies, wherein the patient medical images of each sequence have been taken at subsequent points in time and each sequence has been taken for a different patient; determining, by additively fusing subsequent patient medical images of each sequence to one another, patient spatial development data describing the spatial development of a tumor in each patient body; acquiring atlas data describing an atlas representation of the parts of patient bodies; determining, based on the atlas data and the patient development data, development probability data describing a probability for a spatial development of a tumor. 1.-15. (canceled) 16. A method for determining the spatial development of tumor tissue, executed by one or more processors, comprising: acquiring, by one or more of the processors, patient medical image data describing sequences of patient medical images of tumors in parts of patient bodies, wherein the patient medical images of each sequence have been taken at subsequent points in time and each sequence has been taken for a different patient; determining, by one or more of the processors, by additively fusing subsequent patient medical images of each sequence to one another, patient spatial development data describing the spatial development of a tumor in each patient body; acquiring, by one or more of the processors, atlas data describing an atlas representation of the parts of patient bodies; determining, by one or more of the processors, based on the atlas data and the patient development data, development probability data describing a probability for a spatial development of a tumor. 17. The method according to claim 16, wherein the development probability data is determined, by one or more of the processors, based on transforming the ...

Подробнее
02-01-2020 дата публикации

Image processing device, image processing method, and recording medium

Номер: US20200005492A1
Автор: Kyota Higa
Принадлежит: NEC Corp

A state of a display rack is determined more accurately. An image processing device includes a detection unit configured to detect a change area related to a display rack from a captured image in which an image of the display rack is captured, and a classification unit configured to classify a change related to the display rack in the change area, based on a previously learned model of the change related to the display rack or distance information indicating an image captured before an image capturing time of the captured image.

Подробнее
04-01-2018 дата публикации

DISPLAY APPARATUS, DISPLAY CONTROL METHOD, AND DISPLAY SYSTEM

Номер: US20180005555A1
Принадлежит: RICOH COMPANY, LTD.

A display apparatus includes an image acquisition unit, an image extraction unit, a registration unit, a display control unit, a coordinate generation unit, and a motion detection unit. The coordinate generation unit generates, based on a detection result of a detection unit configured to detect the position of an object in a three-dimensional space, coordinates of the object in a screen. The motion detection unit detects a motion of the object based the coordinates in the screen generated by the coordinate generation unit. The display control unit displays a first image on the screen. When the motion is detected by the motion detection unit, the display control unit further displays a second image on the screen based on coordinates corresponding to the detected motion, and changes the display of the first image. 1. A display apparatus , comprising:an image acquisition unit configured to acquire an image including a drawing region drawn by a user;an image extraction unit configured to extract, from the acquired image, a first image being an image in the drawing region;a registration unit configured to register attribute information indicating attributes that is set with respect to the extracted first image and is used for controlling of moving the first image on a screen;a display control unit configured to control display on the screen;a coordinate generation unit configured to generate, based on a detection result of a detection unit configured to detect a position of an object in a three-dimensional space, coordinates of the object in the screen; anda motion detection unit configured to detect a motion of the object based on the coordinates, whereinthe display control unit is configured to further display, when the motion is detected by the motion detection unit, a second image on the screen based on the coordinates corresponding to the detected motion, and change the display of the first image to which attribute information of a certain attribute among the ...

Подробнее
14-01-2021 дата публикации

MONITORING A TRANSVERSE POSITION OF A CONVEYOR BELT AND ITS MATERIAL LOAD BY DIGITAL IMAGE ANALYSIS

Номер: US20210009359A1
Принадлежит:

A method, system and computer program product are provided for monitoring a transverse position of a conveyor belt or its material load. A processor receives a digital video or digital images capturing movement of the conveyor belt and the material load. The processor segments the images or frames into a group of contiguous pixels representative of the conveyor belt, and the material load, such as by moving object detection, using background segmentation and threshold processing, pixel intensity-based segmentation or image texture-based segmentation. The processor determines a pixel coordinate of the group of contiguous pixels, which is indicative of the transverse position of the conveyor belt or the material load. The processor generates an alarm or report or a data signal, which depends, directly or indirectly, on the determined pixel coordinates. 1. A method for monitoring a transverse position of a conveyor belt , the method implemented by a processor and comprising the steps of:(a) receiving at least one digital image of at least a portion of the conveyor belt; (i) segmenting the digital image into a group of contiguous pixels representative of at least the portion of the conveyor belt; and', '(ii) determining a pixel coordinate indicative of the transverse position of the conveyor belt, based on the group of contiguous pixels; and, '(b) for each digital image(c) generating either an alarm or report that is audible or visible to a human, or a data signal, wherein generation of the alarm, the report or the data signal, or a characteristic of the generated alarm, report or data signal depends, directly or indirectly, on the determined pixel coordinates.2. The method of claim 1 , wherein the receiving step (a) comprises receiving a time sequenced succession of digital images or digital video frames claim 1 , and the segmenting step (b)(i) comprises performing moving object detection on the frame to differentiate the pixels representative of at least the portion ...

Подробнее
27-01-2022 дата публикации

METHODS AND SYSTEMS FOR GENERATING AN ANIMATION CONTROL RIG

Номер: US20220028144A1
Принадлежит: Weta Digital Limited

An aspect provides a computer-implemented method for training controls for an animation control rig using a neural network. The method comprises receiving a combination of training data, wherein the combination of training data includes a first set of training data derived from image capture poses of a physical object and a second set of training data derived from posing an animation version of the physical object; receiving a set of coarse error data derived from a first training process of the combination of training data, the first training process comprising passing at least some of the combination of training data to the neural network as a sequence of training steps; determining a rate of change of training errors associated with the coarse error data; in response to detecting a rate of change of training errors that crosses a rate of change threshold, during a second processing of the combination of training data, varying a learning rate over time based on a slope of the rate of change of training errors. 1. A computer-implemented method for training a computer system to use motion capture data to animate an animation control rig describing a model in a plurality of poses , the method comprising:receiving training data, the training data including motion capture data associated with a target animation sequence;selecting parameters;generating a test animation sequence from the training data by using a matching system configured with the selected parameters;determining an error value by comparing the test animation sequence to the target animation sequence and accumulating positional differences of corresponding components;generating a set of error values by repeating the above selecting, generating and determining, wherein each iteration of selecting parameters varies at least one parameter by a step amount;analyzing the set of error values to identify a group of two or more error values that decrease more than a predetermined rate threshold;identifying ...

Подробнее
10-01-2019 дата публикации

Imaging System for Counting and Sizing Particles in Fluid-Filled Vessels

Номер: US20190011688A1
Принадлежит:

A system is described to facilitate the characterization of particles within a fluid contained in a vessel using an illumination system that directs source light through each vessel. One or more optical elements may be implemented to refract the source light and to illuminate the entire volume of the vessel. As the refracted source light passes through the vessel and interacts with particles suspended in the fluid, scattered light is produced and directed to an imager, while the refracted source light is diverted away from the imager to prevent the source light from drowning out the scattered light. The system can therefore advantageously utilize an imager with a large depth of field to accurately image the entire volume of fluid at the same time, facilitating the determination of the number and size of particles suspended in the fluid. 1. A system comprising:a light source configured to generate source light that is incoherent; refract the source light to produce refracted source light; and', 'direct the refracted source light through the fluid contained in the vessel to produce scattered light as a result of an interaction between the refracted source light and particles suspended in the fluid,, 'an optical element that is separate from the light source and is disposed between the light source and a vessel containing a fluid that substantially occupies three dimensions, the optical element being configured towherein the optical element is configured to refract the source light such that, once the refracted source light has passed through the fluid contained within the vessel, the refracted source light does not impinge upon an imager that is configured to acquire images using the scattered light, andwherein the light source and the imager are positioned such that, but for the optical element refracting the source light, the source light would impinge upon the imager; andone or more processors configured to perform image analysis on the acquired images to determine ...

Подробнее
11-01-2018 дата публикации

MOVING OBJECT DETECTION DEVICE, IMAGE PROCESSING DEVICE, MOVING OBJECT DETECTION METHOD, AND INTEGRATED CIRCUIT

Номер: US20180012368A1
Принадлежит:

A moving object detection device includes: an image capturing unit with which a vehicle is equipped, and which is configured to obtain a captured image by capturing a view in a travel direction of the vehicle; a calculation unit configured to calculate, for each of first regions which are unit regions of the captured image, a first motion vector indicating movement of an image in the first region; an estimation unit configured to estimate, for each of one or more second regions which are unit regions each including first regions, a second motion vector using first motion vectors, the second motion vector indicating movement of a stationary object which has occurred in the captured image due to the vehicle traveling; and a detection unit configured to detect a moving object present in the travel direction, based on a difference between a first motion vector and a second motion vector. 1. A moving object detection device comprising:an image capturing unit with which a vehicle is equipped, and which is configured to obtain a captured image by capturing a view in a travel direction of the vehicle;a calculation unit configured to calculate, for each of first regions which are unit regions of the captured image, a first motion vector indicating movement of an image in the first region;an estimation unit configured to estimate, for each of one or more second regions which are unit regions each including the first regions, a second motion vector using first motion vectors of the first regions included in the second region, the second motion vector indicating movement of a stationary object which has occurred in the captured image due to the vehicle traveling; anda detection unit configured to detect a moving object present in the travel direction, based on a difference between one of the first motion vectors and one of the one or more second motion vectors.2. The moving object detection device according to claim 1 , whereinfor each of the one or more second regions, the ...

Подробнее
10-01-2019 дата публикации

Enhanced Contrast for Object Detection and Characterization By Optical Imaging Based on Differences Between Images

Номер: US20190012564A1
Автор: HOLZ David S., YANG Hua
Принадлежит: Leap Motion, Inc.

Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. 1. A method of capturing and analyzing an image , the method comprising: operate the at least one camera to capture a sequence of images including a first image captured at a time when the at least one light source is illuminating a field of view;', 'identify pixels corresponding to an object of interest rather than to a background;', 'based on the identified pixels, construct a 3D model of the object of interest, including a position and shape of the object of interest; and', 'distinguish between (i) foreground image components corresponding to objects located within a proximal zone of the field of view, the proximal zone extending from the at least one camera and having a depth relative thereto of at least twice an expected maximum distance between the objects corresponding to the foreground image components and the at least one camera, and (ii) background image components corresponding to objects located within a distal zone of the field of view, the distal zone being located, relative to the at least one camera, beyond the proximal zone., 'utilizing an image analyzer coupled to at least one camera and at least one light source to2. The method of claim 1 , wherein the proximal zone has a depth of at least four times the expected maximum distance.3. The method of claim 1 , wherein the at least one light source is a diffuse emitter.4. The method of claim 3 , wherein the at least one light source is an infrared light-emitting diode and the at least one camera is an infrared-sensitive ...

Подробнее
14-01-2021 дата публикации

TRACKING OF HANDHELD SPORTING IMPLEMENTS USING COMPUTER VISION

Номер: US20210012098A1
Автор: Painter James G.
Принадлежит: SportsMEDIA Technology Corporation

A path and/or orientation of object approaching an athlete is tracked using two or more cameras. At least two sets of images of the object are obtained using at least two different cameras having different positions. Motion regions within images are identified, and candidate locations in 2D space of the object are identified within the motion region(s). Based thereon, a probable location in 3D space of the identifiable portion is identified, for each of a plurality of instants during which the object was approaching. A piecewise 3D trajectory of at least the identifiable portion of the object is approximated from the probable locations in 3D space of the object for multiple instants during which the object was approaching the athlete. A graphical representation of the 3D trajectory of the object is incorporated into at least one of the sets of images. 1. A method for tracking a sporting implement during a sporting event , comprising:at least one processor constructed and configured for receiving at least two sets of images of the sporting implement;the at least one processor identifying at least one motion region in the at least two sets of images; andthe at least one processor identifying a first location of the sporting implement within the at least one motion region and a second location of the sporting implement based on the first location.2. The method of claim 1 , wherein a probable location in 3D space for an identifiable portion of the sporting implement is identified for each of a plurality of instants comprising a timespan that the sporting implement was in motion.3. The method of claim 2 , wherein the probable location in 3D space for each of the plurality of instants is converted back into 2D space and superimposed on one or more images of the sporting implement.4. The method of claim 2 , wherein a 3D trajectory of the sporting implement is approximated based on the probable location in 3D space for each of the plurality of instants comprising the ...

Подробнее
10-01-2019 дата публикации

DETECTED OBJECT TRACKER FOR A VIDEO ANALYTICS SYSTEM

Номер: US20190012761A1
Принадлежит: Omni AI, Inc.

Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allow other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying what pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel. 1761-. (canceled)62. A computer-implemented method for tracking foreground objects depicted in a video scene , the method comprising:receiving via at least one processor, a background/foreground BG/FG segmentation of a current video frame from a plurality of video frames of the video scene, the current video frame including at least one appearance value for each of a plurality of pixels and wherein the BG/FG segmentation classifies at least one pixel in the plurality of the pixels as depicting at least one of scene foreground or scene background;for each region of pixels in the current frame classified as depicting the video scene foreground, determining via the at least one processor, an ellipse to bound that region;comparing via the at least one processor, a geometry of at least a first one of the ellipses in the current frame with a geometry of a second ellipse in at least a prior frame from the plurality of video frames;classifying via the at least one processor, the first ellipse as corresponding to a first known foreground object tracked in at least the prior frame based at least in part on the comparison;extending via the at least one processor, a ...

Подробнее
10-01-2019 дата публикации

MOVEMENT MONITORING SYSTEM

Номер: US20190012794A1
Принадлежит: WISCONSIN ALUMNI RESEARCH FOUNDATION

A monitoring system may include an input port, an output port, and a controller in communication with the input port and the output port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a predetermined task. Based on the dimensions and/or other parameters identified or extracted from the video during the predetermined task, the controller may output via the output port assessment information. 1. A monitoring system comprising: an input port for receiving video; an output port; and a controller in communication with the input port and the output port, the controller configured to: identify a subject within a frame of video relative to a background within the frame; determine when the subject in the video is performing a task; identify a height dimension and a width dimension of the subject in one or more frames of the video during the task; and output via the output port position assessment information relative to the subject during the task based on the height dimension and the width dimension for the subject in one or more frames of the video during the task. 2. The monitoring system of claim 1, further comprising: an image capturing device adapted to capture video of the subject and provide the video to the controller via the input port. 3. The monitoring system of claim 1, wherein the controller is configured to: determine extreme-most pixels in two dimensions of the subject to identify the height dimension and the width dimension based on the identified subject; and identify a ...
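The height- and width-dimension extraction of claims 1 and 3 reduces to taking the extreme-most pixels of the segmented subject; below is a sketch with a synthetic silhouette, an assumed background-difference threshold and a deliberately crude posture rule that only stands in for the real assessment.

```python
import numpy as np

def subject_dimensions(frame, background, thresh=30):
    """Height and width (in pixels) of the subject segmented against the
    background, taken from its extreme-most pixels in both dimensions."""
    mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (ys.max() - ys.min() + 1), (xs.max() - xs.min() + 1)

def posture_flag(height, width, ratio_limit=1.2):
    """Very rough proxy for a bent/lifting posture: the silhouette becomes
    nearly as wide as it is tall (the real system scores this over the task)."""
    return (height / width) < ratio_limit

bg = np.full((240, 320), 50, dtype=np.uint8)
standing = bg.copy(); standing[60:220, 150:190] = 200     # tall, narrow
bent     = bg.copy(); bent[140:220, 120:230] = 200        # short, wide

for name, frame in [("standing", standing), ("bent", bent)]:
    h, w = subject_dimensions(frame, bg)
    print(name, "h x w =", h, "x", w, "flagged:", posture_flag(h, w))
```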

Подробнее
10-01-2019 дата публикации

AUTOMATIC DETECTION OF AN ARTIFACT IN PATIENT IMAGE DATA

Номер: US20190012805A1
Принадлежит:

A medical data processing method and system determines the position of an artifact in patient image data describing a set of tomographic slice images of an anatomical structure of a patient. The images are described by color Values. Color value difference data describing differences in color values for image elements in adjacent slice images is determined. At least one of positive or negative difference data, describing a subset of the differences and consisting of differences having a positive or negative value are determined. Smoothed difference data describing a smoothing of the differences contained in the positive or negative difference data are determined and, based on the positive or negative difference data and the smoothed difference data, artifact position data is determined describing the position of an artifact in the patient image data. 1. A method for determining the position of an artifact in patient image data , the method comprising:acquiring at an input of a medical data processing system comprising a memory device, a processor, and the input, patient image data, the patient image data describing a set of tomographic slice images of an anatomical structure of an associated patient, wherein the tomographic slice images of the patient image data are described by color values;determining by the processor of the medical data processing system, based on the patient image data and for each of a plurality of pairs of adjacent ones of the tomographic slice images, a corresponding plurality of color value difference data sets describing differences in color values of image elements between the adjacent ones of the tomographic slice images, each selected color value difference data set being determined as differences by subtracting, for a corresponding selected pair of adjacent tomographic slice images, a color value of an element of a first one of the selected pair of adjacent tomographic slice images from a color value of an element of a second one of the ...

Подробнее
09-01-2020 дата публикации

Systems and methods to improve data clustering using a meta-clustering model

Номер: US20200012886A1
Принадлежит: Capital One Services LLC

Systems and methods for clustering data are disclosed. For example, a system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving data from a client device and generating preliminary clustered data based on the received data, using a plurality of embedding network layers. The operations may include generating a data map based on the preliminary clustered data using a meta-clustering model. The operations may include determining a number of clusters based on the data map using the meta-clustering model and generating final clustered data based on the number of clusters using the meta-clustering model. The operations may include and transmitting the final clustered data to the client device.

Подробнее
09-01-2020 дата публикации

Systems and methods for hyperparameter tuning

Номер: US20200012935A1
Принадлежит: Capital One Services LLC

A model optimizer is disclosed for managing training of models with automatic hyperparameter tuning. The model optimizer can perform a process including multiple steps. The steps can include receiving a model generation request, retrieving from a model storage a stored model and a stored hyperparameter value for the stored model, and provisioning computing resources with the stored model according to the stored hyperparameter value to generate a first trained model. The steps can further include provisioning the computing resources with the stored model according to a new hyperparameter value to generate a second trained model, determining a satisfaction of a termination condition, storing the second trained model and the new hyperparameter value in the model storage, and providing the second trained model in response to the model generation request.

Подробнее
09-01-2020 дата публикации

Systems and methods to identify neural network brittleness based on sample data and seed generation

Номер: US20200012937A1
Принадлежит: Capital One Services LLC

Systems and methods for determining neural network brittleness are disclosed. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving a modeling request comprising a preliminary model and a dataset. The operations may include determining a preliminary brittleness score of the preliminary model. The operations may include identifying a reference model and determining a reference brittleness score of the reference model. The operations may include comparing the preliminary brittleness score to the reference brittleness score and generating a preferred model based on the comparison. The operations may include providing the preferred model.

Подробнее
09-01-2020 дата публикации

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

Номер: US20200013169A1
Автор: Higa Kyota
Принадлежит: NEC Corporation

A state of a display rack is evaluated accurately. An image processing device includes a detection unit configured to detect a change area related to a display rack from a captured image in which an image of the display rack is captured, a classification unit configured to classify a change related to the display rack in the change area, and an evaluation unit configured to evaluate a display state of goods, based on a classification result. 1. An image processing device comprising a processor configured to:detect a change area related to a display rack from a captured image in which an image of the display rack is captured;classify a change related to the display rack in the change area; andevaluate a display state of goods, based on a classification result.2. The image processing device according to claim 1 , whereinthe processor calculates an amount of display of the goods, based on the classification result, information about the change area, and monitored area information indicating a target area where the display state of the goods is monitored in the captured image.3. The image processing device according to claim 2 , whereinthe processor evaluates the display state of the goods, based on a transition of the amount of display.4. The image processing device according to claim 1 , the processor further configured tooutput information about the display state of the goods to an output device, based on an evaluation result.5. The image processing device according to claim 1 , whereinthe processor classifies the change related to the display rack in the change area, based on a previously learned model of the change related to the display rack or distance information indicating an image captured before an image capturing time of the captured image.6. The image processing device according to claim 5 , whereinthe captured image is a color image,the processor detects the change area by comparing the captured image with background information indicating the image ...

Подробнее
18-01-2018 дата публикации

Dynamic analysis apparatus

Номер: US20180014802A1
Принадлежит: KONICA MINOLTA INC

A dynamic analysis apparatus includes: an obtainment unit configured to set a region of interest in dynamic images obtained by photographing a dynamic state by irradiation of a check target part with radial rays, and obtain movement information on movement of the region of interest; a determination unit configured to determine an emphasis level of a pixel signal value of an attentional pixel corresponding to a pixel in the region of interest on the basis of the movement information of the region of interest obtained by the obtainment unit; and a correction unit configured to correct the pixel signal value of the attentional pixel of the dynamic images or analysis result images generated by analyzing the dynamic images, on the basis of the emphasis level determined by the determination unit.

Подробнее
18-01-2018 дата публикации

Detecting periodic patterns and aperture problems for motion estimation

Номер: US20180018777A1
Автор: Idan Ram, Omry Sendik
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A method of evaluating motion estimation between a pair of digitized images includes receiving a distance map between a source block in a source image and all the blocks in a search area in a target image, scanning each column of the distance map, and saving indices of a minimum distance value for each column, scanning each row of the distance map, and saving indices of a minimum distance value for each row, locating candidate lines that pass through at least some local minima points that correspond to locations in the distance map of the minimum distance value in each of the columns or the minimum distance value in each of the rows determining a confidence level for each candidate line that passes through at least some of the local minima points, and selecting those candidate lines whose confidence level is greater than a predetermined threshold value.

Подробнее
22-01-2015 дата публикации

Method and apparatus for detecting interfacing region in depth image

Номер: US20150022441A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An apparatus for detecting an interfacing region in a depth image detects the interfacing region based on a depth of a first region and a depth of a second region which is an external region of the first region in a depth image.

Подробнее
16-01-2020 дата публикации

Image-processing method for removing light zones

Номер: US20200020114A1

An image-processing method for filtering light pollution appearing in a video image stream acquired by a video camera. The method includes, for a current image of the video image stream, the steps of subtracting the background represented in the current image in order to obtain the foreground of the current image, determining a brightening matrix, determining a compensating matrix by restricting the values of the pixels of the determined brightening matrix, segmenting the determined brightening matrix, determining a mask from the segmented brightening matrix, applying the mask to the determined compensating matrix in order to obtain a filtering matrix, and filtering the foreground of the current image by applying the filtering matrix in order to decrease the zones of light pollution in the images of the image stream.
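A minimal NumPy sketch of the brightening / compensating / mask pipeline described above; how the matrices are combined at the end (subtracting the masked compensation from the foreground) and the clipping and segmentation values are assumptions, not details taken from the application.

```python
import numpy as np

def filter_light_zones(frame, background, clip_max=80, seg_thresh=40):
    """Attenuate zones of light pollution in the foreground of one frame."""
    frame_f, bg_f = frame.astype(float), background.astype(float)
    foreground = frame_f - bg_f                        # background subtraction
    brightening = np.maximum(foreground, 0.0)          # how much brighter than bg
    compensating = np.clip(brightening, 0.0, clip_max) # restrict pixel values
    mask = brightening > seg_thresh                    # segment the bright zones
    filtering = np.where(mask, compensating, 0.0)      # apply mask
    return foreground - filtering                      # filtered foreground

# synthetic frame: background plus a moving object and a glaring headlight zone
bg = np.full((120, 160), 40.0)
frame = bg.copy()
frame[50:70, 30:50] += 25        # legitimate moving object (kept)
frame[20:60, 100:150] += 150     # light-pollution zone (attenuated)

filtered = filter_light_zones(frame, bg)
print("object residual:", filtered[60, 40])       # 25.0 -> preserved
print("light-zone residual:", filtered[30, 120])  # 70.0 -> reduced from 150
```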

Подробнее
21-01-2021 дата публикации

Digital Video Fingerprinting Using Motion Segmentation

Номер: US20210020171A1
Принадлежит:

Methods of processing video are presented to generate signatures for motion segmented regions over two or more frames. Two frames are differenced using an adaptive threshold to generate a two-frame difference image. The adaptive threshold is based on a motion histogram analysis which may vary according to motion history data. Also, a count of pixels is determined in image regions of the motion adapted two-frame difference image, which identifies when the count is not within a threshold range so that the motion adaptive threshold can be modified. A motion history image is created from the two-frame difference image. The motion history image is segmented to generate one or more motion segmented regions and a descriptor and a signature are generated for a selected motion segmented region. 1. A system comprising: a memory that stores instructions; and one or more processors configured by the instructions to perform operations comprising: differencing two frames using an adaptive threshold to generate a two-frame difference image; creating a motion history image from the two-frame difference image; segmenting the motion history image to generate one or more motion segmented regions; and generating a descriptor and a fingerprint for a selected motion segmented region. 2. The system of claim 1, wherein the two frames are a first frame and a second frame immediately following in sequence from the first frame. 3. The system of claim 1, wherein the two frames are a first frame and a third frame skipping an intermediary second frame, wherein the second frame and third frame are in sequence from the first frame. 4. The system of further comprising: tracking previously detected segments in previously segmented motion history images that are not included among the one or more motion segmented regions. 5. The system of further comprising: adaptively modifying a threshold when a pixel count in a detected image region of the differenced frames is outside defined limits. 6. The system ...
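A sketch of the adaptive-threshold two-frame difference and the motion history image update; the mean-plus-k·σ rule below merely stands in for the patent's histogram analysis, and the decay, timestamps and final "descriptor" are simplified assumptions.

```python
import numpy as np

def adaptive_threshold(diff, k=3.0):
    """Stand-in for the histogram-based adaptation: pixels are 'moving' when
    they exceed the frame's own difference statistics by a margin."""
    return diff.mean() + k * diff.std()

def update_mhi(mhi, moving, timestamp, duration=10):
    """Motion history image: moving pixels take the current timestamp and
    entries older than 'duration' frames are cleared."""
    mhi = np.where(moving, float(timestamp), mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

rng = np.random.default_rng(3)
h, w = 120, 160
static = rng.integers(0, 30, (h, w)).astype(float)
mhi = np.zeros((h, w))
prev = static + rng.normal(0, 2, (h, w))
for t in range(1, 16):
    frame = static + rng.normal(0, 2, (h, w))      # scene + sensor noise
    frame[40:60, 5 * t:5 * t + 20] = 250.0         # object sweeping rightwards
    diff = np.abs(frame - prev)
    moving = diff > adaptive_threshold(diff)
    mhi = update_mhi(mhi, moving, t)
    prev = frame

# a crude stand-in for the segmentation/descriptor step: bounding box and age
ys, xs = np.nonzero(mhi > 0)
print("motion segment bbox:", xs.min(), ys.min(), xs.max(), ys.max(),
      "newest timestamp:", mhi.max())
```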

Подробнее
16-01-2020 дата публикации

Method and apparatus for applying deep learning techniques in video coding, restoration and video quality analysis (vqa)

Номер: US20200021865A1
Принадлежит: FastVDO LLC

Video quality analysis may be used in many multimedia transmission and communication applications, such as encoder optimization, stream selection, and/or video reconstruction. An objective VQA metric that accurately reflects the quality of processed video relative to a source unprocessed video may take into account both spatial measures and temporal, motion-based measures when evaluating the processed video. Temporal measures may include differential motion metrics indicating a difference between a frame difference of a plurality of frames of the processed video relative to that of a corresponding plurality of frames of the source video. In addition, neural networks and deep learning techniques can be used to develop additional improved VQA metrics that take into account both spatial and temporal aspects of the processed and unprocessed videos.

Подробнее
26-01-2017 дата публикации

Image sensing apparatus, object detecting method thereof and non-transitory computer readable recording medium

Номер: US20170024631A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An apparatus and a method of detecting an object of the apparatus are provided. The apparatus includes a sensing part configured to photograph an image, a storage configured to store a background image frame, and a controller configured to obtain a first difference image and a second difference image from the photographed image and to determine an existence and a position of an object using the first and the second difference images, wherein the first difference image is an image indicating a difference between a currently photographed image frame and a previously photographed image frame, and the second difference image is an image indicating a difference between the currently photographed image frame and the background image frame stored in the storage.
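One simple way to combine the two difference images is to require the object to differ from the stored background frame and to use the frame-to-frame difference as a motion cue; the threshold and the combination rule below are assumptions used for illustration only.

```python
import numpy as np

def detect_object(current, previous, background, thresh=30):
    """Combine the frame-to-frame and frame-to-background difference images:
    the background difference indicates existence and position of the object,
    while the temporal difference tells whether it has just moved into place."""
    cur = current.astype(float)
    diff_temporal   = np.abs(cur - previous.astype(float))   > thresh
    diff_background = np.abs(cur - background.astype(float)) > thresh
    present = diff_background                       # object vs. stored background
    moving  = diff_temporal & diff_background       # object that is also moving
    ys, xs = np.nonzero(present)
    if ys.size == 0:
        return None
    centre = (int(ys.mean()), int(xs.mean()))
    return centre, bool(moving.any())

bg = np.full((120, 160), 40, dtype=np.uint8)
prev = bg.copy()
cur = bg.copy(); cur[30:60, 80:110] = 200           # object appears this frame

print(detect_object(cur, prev, bg))   # ((44, 94), True): present and moving
```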

Подробнее
25-01-2018 дата публикации

METHOD OF TRACKING ONE OR MORE MOBILE OBJECTS IN A SITE AND A SYSTEM EMPLOYING SAME

Номер: US20180025500A1
Принадлежит:

A mobile object tracking system has one or more imaging devices for capturing images of a site, one or more reference wireless devices in wireless communication with one or more mobile wireless devices (MWDs) via one or more wireless signals, and one or more received signal strength models (RSSMs) of the site for the wireless signals. Each MWD is associated with a mobile object and movable therewith. The system tracks the mobile objects by combining the captured images, the received signal strength (RSS) observables of the wireless signals, and the RSSMs. The system may calibrate the RSSMs at an initial stage and during mobile object tracking. 1. A system for tracking at least one mobile object in a site , the system comprising:one or more imaging devices each capturing images of at least a portion of the site;one or more reference wireless devices;one or more mobile wireless devices (MWDs) in wireless communication with the one or more reference wireless devices via one or more wireless signals, each of the one or more MWDs being associated with one of the at least one mobile object, and movable therewith; andat least one processing structure functionally coupled to the one or more imaging devices, the one or more reference wireless devices, and the one or more MWDs, the at least one processing structure acting for:maintaining one or more received signal strength models (RSSMs) of the site for the one or more wireless signals;obtaining received signal strength (RSS) observables of the one or more wireless signals; andtracking the at least one mobile object by combining the captured images, the obtained RSS observables, and the one or more RSSMs.2. The system of wherein each of the one or more RSSMs is any one or a combination of a parametric RSSM and a nonparametric RSSM claim 1 , and wherein each nonparametric RSSM comprises a radio map of the site for one of the one or more wireless signals.3. The system of wherein said tracking the at least one mobile object ...

Подробнее
10-02-2022 дата публикации

BALL TRAJECTORY TRACKING

Номер: US20220044423A1
Принадлежит: PLAYSIGHT INTERACTIVE LTD.

A method of ball trajectory tracking, the method comprising computer executable steps of: receiving a plurality of training frames, each one of the training frames showing a trajectory of a ball as a series of one or more elements, using the received training frames, training a first neuronal network to locate a trajectory of a ball in a frame, receiving a second frame, and using the first neuronal network, locating a trajectory of a ball in the second frame, the trajectory being shown in the second frame as a series of images of the ball having the located trajectory. 1. A method of ball trajectory tracking , the method comprising computer executable steps of:receiving a plurality of training frames, each one of the training frames showing a trajectory of a ball as a series of one or more elements;using the received training frames, training a first neuronal network to locate a trajectory of a ball in a frame;receiving a second frame; andusing the first neuronal network, locating a trajectory of a ball in the second frame, the trajectory being shown in the second frame as a series of images of the ball having the located trajectory,the method further comprising computer executable steps of:receiving a video sequence capturing movement of a ball during a sport event in a series of video frames;calculating a plurality of difference-frames, each difference-frame being calculated over a respective group of at least two of the video frames of the received video sequence; andcombining at least two of the calculated difference-frames, to form a composite frame representing a trajectory taken by the ball in the movement as a series of images of the ball as captured in the received video sequence, the composite frame being one of the group consisting of the training frames and the second frame.2. The method of claim 1 , wherein at least one of the elements represents a respective position of the ball along the trajectory.3. The method of claim 1 , further comprising ...

Подробнее
23-01-2020 дата публикации

SYSTEMS TO TRACK A MOVING SPORTS OBJECT

Номер: US20200025907A1
Автор: Johnson Henri
Принадлежит:

Systems, methods and computer-readable media are provided for tracking a moving sports object. In one example, a method of tracking a moving sports object includes calibrating a perspective of an image of a camera to a perspective of a Doppler radar for simultaneous tracking of the moving sports object, and tracking the moving sports object simultaneously with the camera and Doppler radar. The method may further comprise removing offsets or minimizing differences between simultaneous camera measurements and Doppler radar measurements of the moving sports object. The method may also include combining a camera measurement of an angular position of the moving sports object with a simultaneous Doppler measurement of a radial distance, speed or other measurement of the moving sports object. 1. A method of tracking a moving sports object , the method including:arranging two sensors to track the moving sports object simultaneously, wherein one sensor is a Doppler radar and the other sensor is a camera:calibrating a perspective of an image of the camera to a perspective of the Doppler radar for simultaneous tracking of the moving sports object;tracking the moving sports object simultaneously with the camera and Doppler radar; andinterpolating a three-dimensional trajectory of the moving sports object's motion based at least in part on a failure to receive usable measurements from either the camera or the radar during a period that the moving sports object is being tracked.2. The method according to claim 1 , further comprising:Removing offsets or minimizing differences between simultaneous camera measurements and Doppler radar measurements of the moving sports object3. The method according to claim 1 , further comprising:combining a camera measurement of an angular position of the moving sports object with a simultaneous Doppler measurement of a radial distance, speed or other measurement of the moving sports object.4. The method according to claim 3 , further comprising: ...
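
A minimal sketch (Python) of combining a camera's angular measurement with a Doppler radar's radial distance, assuming both sensors are already calibrated to a common origin and axes; the angle convention and example numbers are illustrative only.

import math

def to_cartesian(azimuth_deg, elevation_deg, radial_distance_m):
    # Camera supplies the angles, radar supplies the radial distance.
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radial_distance_m * math.cos(el) * math.sin(az)   # lateral
    y = radial_distance_m * math.cos(el) * math.cos(az)   # downrange
    z = radial_distance_m * math.sin(el)                  # height
    return x, y, z

print(to_cartesian(azimuth_deg=5.0, elevation_deg=12.0, radial_distance_m=40.0))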

Подробнее
24-01-2019 дата публикации

IMAGE-PROCESSING DEVICE, IMAGE-PROCESSING METHOD, AND RECORDING MEDIUM

Номер: US20190026906A1
Автор: Kawai Ryo
Принадлежит: NEC Corporation

In order to produce a discriminator that has higher discrimination ability, this image-processing device is provided with a synthesis unit for synthesizing a background image and an object image the hue and/or brightness of which at least partially resembles at least a portion of the background image, a generation unit for generating a difference image between the synthesized image and the background image, and a machine learning unit for performing machine learning using the generated difference image as learning data. 1. An image-processing device comprising:at least one memory storing instructions; andat least one processor configured to execute the instructions to:generate a difference image between a background image and a synthesized image wherein the synthesized image is generated by synthesizing the background image and an object image having at least one portion close in at least one of hue, saturation and brightness to at least one portion of the background image; andperform machine learning using the difference image as learning data.2. The image-processing device according to claim 1 ,wherein the at least one processor is further configured to add noise to the synthesized image, andgenerate a difference image between the background image and the synthesized image to which the noise has been added.3. The image-processing device according to claim 2 , wherein the noise is at least one of a pseudo-shadow claim 2 , impulse noise claim 2 , and Gaussian noise.4. The image-processing device according to claim 3 , wherein the at least one processor is further configured to adds the pseudo-shadow as noise to the synthesized image by transforming the object image claim 3 , inferring a shadow part segment of an object in the synthesized image by using the synthesized image and the transformed object image claim 3 , and altering a luminance of the shadow part segment of the synthesized image in relation to the synthesized image.5. The image-processing device ...

Подробнее
24-01-2019 дата публикации

A BUILDING MANAGEMENT SYSTEM USING OBJECT DETECTION AND TRACKING IN A LARGE SPACE WITH A LOW RESOLUTION SENSOR

Номер: US20190026908A1
Принадлежит:

A method of operating an object detection and tracking system includes the step of estimating () a current background of a current frame of sensor data generated by a sensor based on a previous frame of sensor data by a computer-based processor. The method further includes estimating () a foreground of the current frame of sensor data by comparing the current frame of sensor data to the current background, and detecting () an object using a sensor-specific object model. 1. A method of operating an object detection and tracking system comprising:estimating a current background of a current frame of sensor data generated by a sensor and based on a previous frame of sensor data by a computer-based processor;estimating a foreground of the current frame of sensor data by comparing the current frame of sensor data to the current background; anddetecting an object using a sensor-specific object model.2. The method set forth in claim 1 , wherein the sensor is an absolute intensity sensor utilizing a chopper.3. The method set forth in further comprising:tracking the object via a Bayesian Estimator, and wherein the sensor-specific object model is a chopped-data object model.4. The method set forth in claim 3 , wherein the chopped-data object model is a Gaussian Mixture object model.5. The method set forth in claim 3 , wherein the chopped-data object model is parameterized at least in-part by perspective data.6. The method set forth in claim 3 , wherein the chopped-data object model is learned by discriminative dictionary learning.7. The method set forth in claim 3 , wherein the Bayesian Estimator is a Kalman Filter.8. The method set forth in claim 3 , wherein the Bayesian Estimator is a Particle Filter.9. The method set forth in claim 1 , wherein the sensor is a relative intensity sensor that does not utilize a chopper.10. The method set forth in further comprising:tracking the object utilizing a Bayesian Estimator, and wherein the object is detected via a sensor-specific ...
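
An illustrative sketch (Python/NumPy) of the three steps in the abstract: a background estimated from previous frames, a foreground obtained by comparing the current frame with that background, and a detection test. The running-average update, thresholds and blob-size test are placeholders; the patent's sensor-specific object model (e.g. a Gaussian Mixture or dictionary-learned model) is not reproduced here.

import numpy as np

def update_background(background, frame, alpha=0.05):
    # Blend the current frame into the background estimated from previous frames.
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=20.0):
    return np.abs(frame - background) > threshold

def detect_blob(mask, min_pixels=5):
    # Stand-in for a sensor-specific object model: accept sufficiently large foreground.
    return np.count_nonzero(mask) >= min_pixels

bg = np.zeros((16, 16))
frame = bg.copy()
frame[5:8, 5:8] = 80.0            # a warm object enters the scene
mask = foreground_mask(bg, frame)
print(detect_blob(mask), update_background(bg, frame)[6, 6])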

Подробнее
28-01-2021 дата публикации

SYSTEMS AND VISUALIZATION INTERFACES FOR ORBITAL PATHS AND PATH PARAMETERS OF SPACE OBJECTS

Номер: US20210026524A1
Принадлежит:

A display system can be configured to receive, via a user interface, a first identifier associated with a first space object and determine a first maneuver of the first space object. The first maneuver can include a perturbation of the path of the first space object. Based on the first identifier and the first maneuver, the display system can identify one or more path parameters associated with a path of the first space object and generate a display interface. The display interface can include a longitude-time graph having a longitude axis spanning from a lower-longitude limit to an upper-longitude limit and a time axis spanning from the lower-time limit to the upper-time limit and an indication of the one or more path parameters. 1. A system for determining and displaying path parameters of one or more space objects , the system comprising:a space object data interface configured to receive a plurality of identifiers associated with one or more space objects;a non-transitory computer readable storage storing machine-executable instructions configured to cause the system to determine and display path parameters of one or more space objects; and receive, via a user interface, a first identifier associated with a first space object;', 'determine a first maneuver of the first space object, the first maneuver comprising a perturbation of the path of the first space object;', 'based on the first identifier and the first maneuver, identify one or more path parameters associated with a path of the first space object; and', a longitude-time graph comprising a longitude axis spanning from a lower-longitude limit to an upper-longitude limit and a time axis spanning from the lower-time limit to the upper-time limit; and', 'an indication of the one or more path parameters., 'generate a display interface comprising], 'a hardware processor in communication with the computer-readable storage, wherein the instructions, when executed by the hardware processor, are configured to ...

Подробнее
17-02-2022 дата публикации

Device, method and storage medium

Номер: US20220051415A1
Автор: Atsushi Wada, Osamu Kojima
Принадлежит: Yokogawa Electric Corp

There is provided a device including: a first storage unit configured to store, when an object moves between separate image capturing areas which are captured by a plurality of surveillance cameras, a plurality of movement histories of the object between image data respectively captured by the surveillance cameras; an identification unit configured to identify, among the plurality of surveillance cameras, one surveillance camera that has captured a target object to track, and an image capturing time, according to an operation of an operator; and an estimation unit configured to estimate at least one other surveillance camera that is different from the one surveillance camera and that captures the target object, among the plurality of surveillance cameras, and an estimated time when the other surveillance camera captures the target object, based on the movement history and an identification result obtained by the identification unit.
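
A minimal sketch (Python) of the estimation step: stored movement histories between camera pairs are used to guess which other camera will capture the identified target, and at what time. The history format (per-pair transit times) and the scoring rule are assumptions, not the patent's data model.

from collections import defaultdict

# (from_camera, to_camera) -> list of observed transit times in seconds
movement_histories = defaultdict(list)
movement_histories[("cam_A", "cam_B")] += [30.0, 34.0, 32.0]
movement_histories[("cam_A", "cam_C")] += [75.0]

def estimate_next_camera(identified_camera, capture_time):
    best = None
    for (src, dst), transits in movement_histories.items():
        if src != identified_camera or not transits:
            continue
        score = len(transits)                      # favour frequently used routes
        eta = capture_time + sum(transits) / len(transits)
        if best is None or score > best[0]:
            best = (score, dst, eta)
    return (best[1], best[2]) if best else None

print(estimate_next_camera("cam_A", capture_time=1000.0))   # ('cam_B', ~1032.0)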

Подробнее
17-02-2022 дата публикации

Sanitization Analysis Devices, Systems, and Methods

Номер: US20220051547A1
Автор: Skinner Sam Michael
Принадлежит: Hygenius, Inc.

Systems, devices, methods, and software of the present invention provide for sanitization monitoring of hands, other body parts, and objects. The systems and devices include a detector to provide images of the object within its detection range, and at least one processor to receive the images from the detector, determine areas of the image corresponding to sanitized areas of the object from unsanitized areas of the object, calculate a percentage of sanitized areas to the total area corresponding to the sanitized and unsanitized areas, and report at least the percentage of sanitized area. In various embodiments, users sanitize their hands with fluorescing hand sanitizer and/or a fluorescing germ-proxy agent with soap and water and the sanitized and unsanitized areas are determined based on the amount of fluorescing material remaining on the hands after application. 1. A hand sanitization monitoring system comprising:a chamber;at least one illumination device positioned within the chamber to illuminate a hand detection area within the chamber;a detector positioned within the chamber to detect and provide images of hands within the hand detection area;at least one processor and a memory positioned proximate the chamber, the processor to receive the images from the detector, determine areas of the image corresponding to sanitized areas of the hands from unsanitized areas of the hands, calculate a score using at least a percentage of the sanitized areas to a total area corresponding to the sanitized area plus the unsanitized area, and provide at least the score; anda display positioned proximate the chamber to display at least the score provided by the processor.2. The system of claim 1 , where the at least one illumination device is at least one UV light; and the detector is a camera capturing visual images of the hands.3. The system of claim 1 , further comprising at least one of a hand sanitizer dispenser containing fluorescing hand sanitizer and a ...
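
A minimal sketch (Python/NumPy) of the scoring step: given a fluorescence image and a hand mask, the share of hand pixels above a brightness threshold is reported as the sanitized percentage. The threshold, mask and image are placeholders; segmentation of the hands themselves is assumed to be done elsewhere.

import numpy as np

def sanitization_score(fluorescence, hand_mask, sanitized_threshold=120):
    hand_pixels = fluorescence[hand_mask]
    if hand_pixels.size == 0:
        return 0.0
    sanitized = np.count_nonzero(hand_pixels >= sanitized_threshold)
    return 100.0 * sanitized / hand_pixels.size   # percent of hand area sanitized

img = np.zeros((10, 10), np.uint8)
mask = np.zeros((10, 10), bool)
mask[2:8, 2:8] = True          # 36 hand pixels
img[2:8, 2:5] = 200            # half of them fluoresce strongly
print(sanitization_score(img, mask))   # 50.0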

Подробнее
31-01-2019 дата публикации

SURVEILLANCE METHOD AND COMPUTING DEVICE USING THE SAME

Номер: US20190035092A1
Принадлежит:

A computing device is able to detect one or more motion events based on two consecutive images, such as a first image and a second image. In the detection process, the computing device assigns identifiers to difference blocks retrieved from a plurality of first blocks of the first image, then defines a scanning window and moves the scanning window on a preset route over the first image. A new identical identifier is assigned for difference blocks within a current image subarea which falls into the scanning window. After a scanning period is completed, the computing device determines the happening of a motion event according to sufficient pixel similarities found in one of new identifiers. 1. A computing device comprising:at least one processor; retrieving a plurality of difference blocks from a plurality of first blocks of a first image by comparing the first image with a second image;', 'assigning identifiers to the difference blocks, wherein adjacent difference blocks are assigned with an identical identifier;', 'defining a scanning window and moving the scanning window on a preset route over the first image, reassigning a new identical identifier to difference blocks within a current image subarea which is falling into the scanning window, wherein the new identical identifier is selected from current identifiers of the difference blocks within the current image subarea according a preset rule;', 'selecting a target identifier associating with a target object from the new identifiers and determining whether the amount of difference blocks associating with the target identifier exceeds a first preset value; and', 'outputting a motion event of the target object upon the condition that the amount of difference blocks associating with the target identifier exceeds the first preset value., 'a non-transitory storage system coupled to the at least one processor and configured to store one or more programs to be executed by the at least one processor, the one or more ...
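
A loose illustrative sketch (Python/NumPy) of the block-based approach: two frames are compared block by block, nearby difference blocks are grouped with a scanning window, and a motion event is reported when one group is large enough. The block size, window size, thresholds and the simplified grouping (no identifier reassignment) are assumptions that depart from the claimed procedure.

import numpy as np

def difference_blocks(img1, img2, block=8, diff_threshold=15.0):
    # Return coordinates (block_row, block_col) of blocks that differ between frames.
    h, w = img1.shape
    blocks = set()
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            d = np.mean(np.abs(img1[by:by+block, bx:bx+block].astype(float) -
                               img2[by:by+block, bx:bx+block].astype(float)))
            if d > diff_threshold:
                blocks.add((by // block, bx // block))
    return blocks

def motion_event(blocks, window=3, min_blocks=4):
    # Slide a window over block coordinates; fire if any window holds enough blocks.
    for (r, c) in blocks:
        group = [b for b in blocks if r <= b[0] < r + window and c <= b[1] < c + window]
        if len(group) >= min_blocks:
            return True
    return False

a = np.zeros((64, 64), np.uint8)
b = a.copy()
b[8:32, 8:32] = 200                            # a moving object covers several blocks
print(motion_event(difference_blocks(a, b)))   # True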

Подробнее
30-01-2020 дата публикации

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Номер: US20200034631A1
Автор: KAWANO Atsushi
Принадлежит:

An image processing apparatus includes, an input unit configured to input images, a first detection unit configured to detect a first area based on features in the images input by the input unit, a second detection unit configured to detect a second area based on variations between the images input by the input unit, a generation unit configured to generate a background image by using a result of the first area detection by the first detection unit and a result of the second area detection by the second detection unit, and a processing unit configured to perform image processing for reducing a visibility of a specific region identified through a comparison between a processing target image acquired after a generation of the background image by the generation unit and the background image. 1. An image processing apparatus comprising:an acquiring unit configured to acquire an image;a first detection unit configured to perform first detection for detecting a first area in the image acquired by the acquiring unit;a second detection unit configured to perform second detection, which is different from the first detection, for detecting a second area in the image acquired by the acquiring unit;a generation unit configured to perform first generation process of generating a background image by using a result of the first detection by the first detection unit and a result of the second detection by the second detection unit and perform second generation process of generating a background image by using the result of the first detection by the first detection unit and not using the result of the second detection by the second detection unit, wherein the generation unit performs the first generation process or the second generation process according to predetermined condition; anda processing unit configured to perform image processing for concealing a specific region identified through a comparison between a target image acquired by the acquiring unit and the background image ...

Подробнее
04-02-2021 дата публикации

ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR

Номер: US20210035309A1
Принадлежит:

An electronic device and a control method therefor are disclosed. A method for controlling an electronic device according to the present invention comprises the steps of: receiving a current frame; determining a region, within the current frame, where there is a movement, on the basis of a prior frame and the current frame; inputting the current frame into an artificial intelligence learning model on the basis of the region where there is the movement, to obtain information relating to at least one object included in the current frame; and determining the object included in the region where there is the movement, by using the obtained information relating to the at least one object. Therefore, electronic device can rapidly determine an object included in a frame configuring a captured image. 1. A control method of an electronic device , the method comprising:receiving a current frame;identifying an area with movement in the current frame based on a previous frame and the current frame;obtaining information on at least one object comprised in the current frame by inputting the current frame to an artificial intelligence model based on the area with movement; andidentifying an object comprised in the area with movement by using the obtained information on the at least one object.2. The method of claim 1 , wherein the identifying the area with movement further comprises:comparing a pixel value of the previous frame and a pixel value of the current frame; andbased on the comparison, identifying an area in which a difference in pixel value exceeds a pre-set threshold value as the area with movement.3. The method of claim 2 , wherein the identifying the area with movement further comprises:storing a coordinate value on an area identified as the area with movement.4. The method of claim 2 , wherein the obtaining comprises inputting the current frame to the artificial intelligence learning model to read an area in which the difference in pixel value is less than or equal to ...
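
A minimal sketch (Python/NumPy) of the first step: the area with movement is found from the pixel-wise difference between the previous and current frame, and only that crop would be handed to a recognition model. The threshold is arbitrary and the model call is a placeholder for the artificial intelligence model referred to above.

import numpy as np

def movement_bbox(prev_frame, cur_frame, threshold=25):
    # Pixels whose value changed by more than the threshold define the moving area.
    moved = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > threshold
    ys, xs = np.nonzero(moved)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

prev = np.zeros((120, 160), np.uint8)
cur = prev.copy()
cur[40:60, 70:100] = 180
bbox = movement_bbox(prev, cur)
if bbox is not None:
    y0, y1, x0, x1 = bbox
    crop = cur[y0:y1, x0:x1]         # only this crop would be passed to the model
    print(bbox, crop.shape)          # (40, 60, 70, 100) (20, 30)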

Подробнее
04-02-2021 дата публикации

OPERATION DETECTION DEVICE AND OPERATION DETECTION METHOD

Номер: US20210035311A1
Автор: MOCHIZUKI Takayoshi
Принадлежит: MURAKAMI CORPORATION

An operation detection device according to an embodiment is an operation detection device that detects an object approaching an operation unit. The operation detection device includes: a sensor that detects a distance from the object as a plurality of pixels; and an object detection unit that detects the object. The object detection unit specifies a first pixel corresponding to the distance that is the shortest among the plurality of pixels, scans a plurality of second pixels located around the first pixel, and detects the object when the number of second pixels for which a difference between a distance corresponding to each of the second pixels and the shortest distance is equal to or less than a predetermined value, among the plurality of second pixels, is equal to or greater than a predetermined number. 1. An operation detection device for detecting an object approaching an operation unit , comprising:a sensor that detects a distance from the object as a plurality of pixels; andan object detection unit that detects the object,wherein the object detection unit specifies a first pixel corresponding to the distance that is the shortest among the plurality of pixels, scans a plurality of second pixels located around the first pixel, and detects the object when the number of second pixels for which a difference between a distance corresponding to each of the second pixels and the shortest distance is equal to or less than a predetermined value, among the plurality of second pixels, is equal to or greater than a predetermined number.2. The operation detection device according to claim 1 , further comprising:a determination unit that determines whether or not an operation on the operation unit by the object detected by the object detection unit has been performed.3. The operation detection device according to claim 1 ,wherein the operation unit is displayed as a virtual image.4. An operation detection method for detecting an object approaching an operation unit using a ...
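
A minimal sketch (Python/NumPy) of the neighbour test described above: the closest depth pixel is found, surrounding pixels are scanned, and an object is detected when enough of them lie within a tolerance of that minimum distance. The radius, tolerance and count are placeholder values.

import numpy as np

def object_detected(depth, radius=2, tolerance=30.0, min_neighbors=6):
    # depth: 2-D array of distances reported by the sensor (e.g. millimetres).
    min_idx = np.unravel_index(np.argmin(depth), depth.shape)
    d_min = depth[min_idx]
    r0, c0 = min_idx
    count = 0
    for r in range(max(0, r0 - radius), min(depth.shape[0], r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(depth.shape[1], c0 + radius + 1)):
            if (r, c) != (r0, c0) and abs(depth[r, c] - d_min) <= tolerance:
                count += 1
    return count >= min_neighbors

scene = np.full((10, 10), 900.0)
scene[4:7, 4:7] = 410.0             # a fingertip-sized surface approaching the sensor
print(object_detected(scene))       # True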

Подробнее
04-02-2021 дата публикации

SYSTEM AND METHOD OF CORRELATING MOUTH IMAGES TO INPUT COMMANDS

Номер: US20210035586A1
Принадлежит:

A system for automated speech recognition utilizes computer memory, a processor executing imaging software and audio processing software, and a camera transmitting images of a physical source of speech input. Audio processing software includes an audio data stream of audio samples derived from at least one speech input. At least one timer is configured to transmit elapsed time values as measured in response to respective triggers received by the timer. The audio processing software is configured to assert and de-assert the timer triggers to measure respective audio sample times and interim period times between the audio samples. The audio processing software is further configured to compare the interim period times with a command spacing time value corresponding to an expected interim time value between commands, thereby determining if the speech input is command data or non-command data. 1. A system for monitoring an area within a vehicle comprising:computer memory;a processor executing imaging software and audio processing software;an imaging device transmitting to said imaging software a plurality of frames of pixel data from an image acquired from a field of view within the vehicle and associated with the imaging device;a speech input device transmitting to said audio processing software an audio data stream of audio samples derived from at least one speech input;wherein the processor is configured to identify a source of the audio data stream from the frames of pixel data and the audio samples.2. (canceled)3. (canceled)4. (canceled)5. (canceled)6. A system according to claim 1 , further comprising command processing software configured to (i) track valid audio samples in a time domain claim 1 , (ii) discard invalid audio samples and (iii) track interim periods in said time domain claim 1 , wherein said command processing software also tracks said frames of pixel data in said time domain and utilizes said processor and said computer memory to group claim 1 , in ...

Подробнее
08-02-2018 дата публикации

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD

Номер: US20180039860A1
Принадлежит: KABUSHIKI KAISHA TOSHIBA

An image processing method according to an embodiment includes an image acquisition unit, a calculation unit, a region acquisition unit and an estimation unit. The image acquisition unit acquires a target image. The calculation unit calculates a density distribution of targets included in the target image. The estimation unit estimates the density distribution in a first region in the target image based on the density distribution in a surrounding region of the first region in the target image. 1. An image processing apparatus comprising:a memory; and an image acquisition unit that acquires a target image;', 'a calculation unit that calculates a density distribution of targets included in the target image;', 'and', 'an estimation unit that estimates the density distribution in a first region in the target image based on the density distribution in a surrounding region of the first region in the target image., 'processing circuitry configured to operate as2. The image processing apparatus according to claim 1 , wherein the estimation unit estimates the density distribution in the first region by performing polynomial interpolation of the density distribution in the surrounding region in the target image.3. The image processing apparatus according to claim 1 , wherein the estimation unit estimates the density distribution in the first region using an average value of densities represented by the density distribution in the surrounding region in the target image.4. The image processing apparatus according to claim 1 , wherein the estimation unit estimates a density distribution in the first region in the target image from the density distribution in the surrounding region in the target image using a function representing a regression plane or a regression curve that approximates a density distribution in the target image based on densities in areas included in the surrounding region in the target image.5. The image processing apparatus according to claim 1 , wherein ...

Подробнее
07-02-2019 дата публикации

AUTOMATED OR ASSISTED UMPIRING OF BASEBALL GAME USING COMPUTER VISION

Номер: US20190038952A1
Принадлежит: SportsMEDIA Technology Corporation

Methods and systems for use in automating or assisting umpiring of a baseball or softball game are described herein. A location of a strike zone is determined based on video images of a batter standing next to home plate captured by a camera. Locations of a ball traveling towards the batter, and locations of the bat being held by the batter, are autonomously tracked using computer vision based on video images captured by at least two cameras having different positions. Additionally, there are autonomous determinations of whether a location of the ball intersects with the strike zone, and whether the batter made a genuine attempt to swing the bat at the ball, and based one at least one of these determinations, there is an autonomous determination of whether a “strike” or a “ball” occurred. Additionally, an indication of whether a “strike” or a “ball” occurred is autonomously output. 1. A method for automating or assisting umpiring of a baseball or softball game comprising:receiving images of a batter, images of a bat held by the batter, and images of a ball traveling towards the batter from at least two cameras;determining a location of a strike zone based on the images of the batter;adding a strike zone graphic to the location of the strike zone on the images of the batter;autonomously tracking locations of the ball traveling towards the batter based on the images of the ball traveling towards the batter using transformations associated with the at least two cameras to determine the locations of the ball traveling towards the batter in three-dimensional (3D) space;adding an animated trail representing the locations of the ball on the images of the batter;autonomously tracking locations of the bat based on the images of the bat using transformations associated with the at least two cameras to determine locations of the bat in 3D space;autonomously determining whether a location of the bat in 3D space at a point in time is the same as a location of the ball in 3D ...

Подробнее
08-02-2018 дата публикации

Image Processing Systems and Methods

Номер: US20180040111A1
Принадлежит: Light Blue Optics Ltd.

We describe a method of capturing writing or drawing on a whiteboard. The method comprises: inputting camera data for a succession of image frames, wherein the camera data is from a camera directed towards the whiteboard and the image frames comprise successive images of the whiteboard from the camera; and user filter processing data from said image frames to remove parts of said image frames corresponding to parts of a user or user pen writing or drawing on said whiteboard. The user-filter processing comprises filtering to distinguish between motion of the user/user pen parts in the image frames and writing/drawing image information in the image frames which appears or changes during said writing or drawing but which is thereafter substantially unchanging. The method outputs writing/drawing data from the user filter-processing, this defining captured writing or drawing from the whiteboard. 1. A method of capturing writing or drawing on a whiteboard , the method comprising:inputting camera data for a succession of image frames, wherein said camera data is from a camera directed towards said whiteboard and said image frames comprise successive images of said whiteboard from said camera;user filter processing data from said image frames to remove parts of said image frames corresponding to parts of a user or user pen writing or drawing on said whiteboard;wherein said user filter processing comprises filtering to distinguish between motion of said user/user pen parts in said image frames and writing/drawing image information in said image frames which appears or changes during said writing or drawing but which is thereafter substantially unchanging; andoutputting writing/drawing data from said user filter processing, wherein said writing/drawing data defines captured writing or drawing on said whiteboard.2. The method of claim 1 , wherein said user filter processing comprises subdividing said image frames into blocks for blockwise processing claim 1 , said blockwise ...
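
An illustrative sketch (Python/NumPy) of a persistence-style user filter in the spirit of the description above: a pixel is committed to the captured drawing only after it has remained unchanged for several frames, so a moving hand or pen is ignored while writing that stays on the board is kept. The tolerance and hold time are assumptions, and the blockwise processing of the actual method is omitted.

import numpy as np

def update_capture(capture, stable_count, prev_frame, cur_frame, tol=8, hold_frames=10):
    changed = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > tol
    stable_count = np.where(changed, 0, stable_count + 1)      # reset where moving
    settled = stable_count >= hold_frames                      # unchanged long enough
    capture = np.where(settled, cur_frame, capture)            # commit settled pixels
    return capture, stable_count

h, w = 64, 64
capture = np.full((h, w), 255, np.uint8)         # blank white board
stable = np.zeros((h, w), int)
prev = np.full((h, w), 255, np.uint8)
frames = [prev.copy() for _ in range(15)]
for f in frames:
    f[30:32, 10:40] = 0                          # a stroke that stays on the board
for f in frames:
    capture, stable = update_capture(capture, stable, prev, f)
    prev = f
print(int(capture[30, 20]))                      # 0: the stroke was captured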

Подробнее
08-02-2018 дата публикации

TRACKING OBJECTS BETWEEN IMAGES

Номер: US20180040134A1
Принадлежит:

Systems and methods track one or more points between images. A point for tracking may be selected, at least in part, on a determination of how discriminable the point is relative to other points in a region containing the point. A point of an image being tracked may be located in another image by matching a patch containing the point with another patch of the other image. A search for a matching patch may be focused in a region that is determined based at least in part on an estimate of movement of the point between images. Points may be tracked across multiple images. If an ability to track one or more points is lost, information about the points being tracked may be used to relocate the points in another image. 1. A computing device comprising:a camera;one or more processors; receive first image data captured using the camera;', 'select a first tracking point within an area of the first image data;', 'compare a first patch of pixels surrounding the first tracking point within the area with a second patch of pixels within the area to determine a similarity score;', 'determine a distinctiveness of the first tracking point based at least in part the similarity score;', 'receive second image data captured using the camera; and', 'determine a second tracking point in the second image data corresponding to the first tracking point., 'a memory device including instructions that, when executed by the one or more processors, cause the computing device to2. The computing device of claim 1 , wherein the instructions claim 1 , when executed claim 1 , further cause the computing device to:determine respective measures of similarity between the first patch of pixels and a plurality of other patches of pixels within the area; anddetermine the distinctiveness of the first tracking point based at least in part on the measures of similarity.3. The computing device of claim 1 , wherein the instructions claim 1 , when executed claim 1 , further cause the computing device to:compare ...

Подробнее
06-02-2020 дата публикации

High resolution virtual wheel speed sensor

Номер: US20200041304A1
Принадлежит: GM GLOBAL TECHNOLOGY OPERATIONS LLC

A method for producing high resolution virtual wheel speed sensor data includes simultaneously collecting wheel speed sensor (WSS) data from multiple wheel speed sensors, each sensing rotation of one of multiple wheels of an automobile vehicle. A camera image of the vehicle environment is generated from at least one camera mounted in the automobile vehicle. An optical flow program is applied to discretize the camera image into pixels. Multiple distance intervals are overlaid onto the discretized camera image, each representing a vehicle distance traveled that defines the resolution of each of the multiple wheel speed sensors. A probability distribution function is created to predict the distance traveled for the next WSS output.

Подробнее
07-02-2019 дата публикации

OBJECT DISPLACEMENT DETECTION METHOD FOR DETECTING OBJECT DISPLACEMENT BY MEANS OF DIFFERENCE IMAGE DOTS

Номер: US20190043206A1
Автор: HO Yi-Chen, Wu Po-Fu
Принадлежит:

An object displacement detection method includes capturing n images of an object for obtaining n sets of image dots, where the object corresponds to an iset of image dots in an iimage of the n images; performing (n−1) difference calculations using the n sets of image dots to obtain (n−1) sets of difference image dots, where a jset of difference image dots of the (n−1) sets of difference image dots is generated by performing a jdifference calculation of the (n−1) difference calculations using a (j+1)set of image dots and a jset of image dots of the n sets of the image dots; and determining the object has displaced when a sum of numbers of the (n−1) sets of difference image dots reaches a first threshold. 1. An object displacement detection method comprising:{'sub': th', 'th, 'capturing n images of an object for obtaining n sets of image dots, wherein the object corresponds to an iset of image dots in an iimage of the n images;'}{'sub': th', 'th', 'th', 'th, 'performing (n−1) difference calculations using the n sets of image dots to obtain (n−1) sets of difference image dots, wherein a jset of difference image dots of the (n−1) sets of difference image dots is generated by performing a jdifference calculation of the (n−1) difference calculations using a (j+1)set of image dots and a jset of image dots of the n sets of the image dots; and'}determining the object has displaced when a sum of numbers of the (n−1) sets of difference image dots reaches a first threshold;wherein i, n and j are positive integers, i≤n, and j+1≤n.2. The method of claim 1 , wherein the n images are captured during a first time interval claim 1 , and the method further comprises:{'sub': th', 'th, 'capturing m images during a second time interval following the first time interval for obtaining m sets of image dots, wherein the object corresponds to a pset of image dots in a pimage of the m images;'}{'sub': th', 'th', 'th', 'th, 'performing (m−1) difference calculations using the m sets of image ...
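
A minimal sketch (Python/NumPy) of the counting rule: (n−1) difference calculations are performed over n captured images, the difference image dots are summed, and displacement is reported when the sum reaches a threshold. Dot extraction is reduced here to a simple per-pixel change test with placeholder thresholds.

import numpy as np

def displacement_detected(frames, pixel_threshold=10, first_threshold=50):
    total_difference_dots = 0
    for j in range(len(frames) - 1):
        # j-th difference calculation between the (j+1)-th and j-th sets of image dots
        diff = np.abs(frames[j + 1].astype(int) - frames[j].astype(int))
        total_difference_dots += int(np.count_nonzero(diff > pixel_threshold))
    return total_difference_dots >= first_threshold

frames = [np.zeros((32, 32), np.uint8) for _ in range(3)]
frames[1][10:16, 10:16] = 120      # the object shifts between captures
frames[2][12:18, 12:18] = 120
print(displacement_detected(frames))   # True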

Подробнее
18-02-2021 дата публикации

Automated honeypot creation within a network

Номер: US20210049054A1
Принадлежит: Capital One Services LLC

Systems and methods for managing Application Programming Interfaces (APIs) are disclosed. Systems may involve automatically generating a honeypot. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving, from a client device, a call to an API node and classifying the call as unauthorized. The operation may include sending the call to a node-imitating model associated with the API node and receiving, from the node-imitating model, synthetic node output data. The operations may include sending a notification based on the synthetic node output data to the client device.

Подробнее
18-02-2016 дата публикации

Three-Dimensional Hand Tracking Using Depth Sequences

Номер: US20160048726A1
Автор: Ang Li, FENG Tang, Xiaojin Shi
Принадлежит: Apple Inc

In the field of Human-computer interaction (HCI), i.e., the study of the interfaces between people (i.e., users) and computers, understanding the intentions and desires of how the user wishes to interact with the computer is a very important problem. The ability to understand human gestures, and, in particular, hand gestures, as they relate to HCI, is a very important aspect in understanding the intentions and desires of the user in a wide variety of applications. In this disclosure, a novel system and method for three-dimensional hand tracking using depth sequences is described. Some of the major contributions of the hand tracking system described herein include: 1.) a robust hand detector that is invariant to scene background changes; 2.) a bi-directional tracking algorithm that prevents detected hands from always drifting closer to the front of the scene (i.e., forward along the z-axis of the scene); and 3.) various hand verification heuristics.

Подробнее
18-02-2021 дата публикации

FEATURE EXTRACTION METHOD, COMPARISON SYSTEM, AND STORAGE MEDIUM

Номер: US20210049777A1
Автор: Kawai Ryo
Принадлежит: NEC Corporation

The feature extraction device according to one aspect of the present disclosure comprises: a reliability determination unit that determines a degree of reliability with respect to a second region, which is a region that has been extracted as a foreground region of an image and is within a first region that has been extracted from the image as a partial region containing a recognition subject, said degree of reliability indicating the likelihood of being the recognition subject; a feature determination unit that, on the basis of the degree of reliability, uses a first feature which is a feature extracted from the first region and a second feature which is a feature extracted from the second region to determine a feature of the recognition subject; and an output unit that outputs information indicating the determined feature of the recognition subject. 1. A comparison system comprising:a memory; andat least one processor coupled to the memory,the at least one processor performing operations to:determine a degree of reliability indicating a likelihood of being a recognition target, with respect to a second region being a region extracted as a foreground region of an image, the second region being a region within a first region, the first region being a region extracted from the image as a partial region including the recognition target;determine a feature of the recognition target, based on the degree of reliability, by using a first feature being a feature extracted from the first region and a second feature being a feature extracted from the second region; andoutput information indicating the feature of the recognition target determined.2. The comparison system according to claim 1 , wherein the at least one processor further performs operation to:determine the feature of the recognition target by a feature determination method in which the second feature is greatly reflected as the degree of reliability increases, and the first feature is greatly reflected as the ...
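
A minimal sketch (Python/NumPy) of the feature determination step: the feature from the whole detected region and the feature from its foreground region are blended according to a reliability value, so the foreground feature dominates when reliability is high. The linear blend is an assumption consistent with the dependent claim, not necessarily the exact method.

import numpy as np

def combine_features(first_feature, second_feature, reliability):
    # reliability in [0, 1]: 1.0 trusts the foreground-region feature entirely.
    w = float(np.clip(reliability, 0.0, 1.0))
    return w * np.asarray(second_feature) + (1.0 - w) * np.asarray(first_feature)

whole_box_hist = np.array([0.2, 0.5, 0.3])
foreground_hist = np.array([0.1, 0.8, 0.1])
print(combine_features(whole_box_hist, foreground_hist, reliability=0.75))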

Подробнее
18-02-2016 дата публикации

Statistical Noise Analysis for Motion Detection

Номер: US20160048974A1
Принадлежит: Lenovo Singapore Pte Ltd

An approach is provided to detecting motion using statistical noise analysis. In the approach, reference statistics are calculated that relate to one or more noise characteristics that correspond to pixels in a first set of video images of an area being monitored. Current noise characteristics are received that correspond to the same pixels in a second set of video images of the area being monitored, with the first set of video images being captured before the second set of video images. Motion is detected in the area being monitored by comparing the reference statistics to the current noise characteristics.
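
An illustrative sketch (Python/NumPy) of the idea: per-pixel noise statistics are computed from reference frames of the monitored area, and motion is reported when the current frames deviate from those statistics by more than a few standard deviations. The z-score test, thresholds and synthetic data are assumptions.

import numpy as np

def reference_statistics(reference_frames):
    stack = np.stack([f.astype(float) for f in reference_frames])
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def motion_detected(current_frames, ref_mean, ref_std, z_threshold=4.0, min_pixels=20):
    cur_mean = np.stack([f.astype(float) for f in current_frames]).mean(axis=0)
    z = np.abs(cur_mean - ref_mean) / ref_std
    return np.count_nonzero(z > z_threshold) >= min_pixels

rng = np.random.default_rng(0)
ref = [rng.normal(100, 2, (40, 40)) for _ in range(10)]
mean, std = reference_statistics(ref)
cur = [rng.normal(100, 2, (40, 40)) for _ in range(3)]
cur[1][10:20, 10:20] += 40          # an intruder brightens a patch
print(motion_detected(cur, mean, std))   # True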

Подробнее
15-02-2018 дата публикации

Linear-Based Eulerian Motion Modulation

Номер: US20180047160A1
Принадлежит: Quanta Computer, Inc.

In an embodiment, a method converts two images to a transform representation in a transform domain. For each spatial position, the method examines coefficients representing a neighborhood of the spatial position that is spatially the same across each of the two images. The method calculates a first vector in the transform domain based on first coefficients representing the spatial position, the first vector representing change from a first to second image of the two images describing deformation. The method modifies the first vector to create a second vector in the transform domain representing amplified movement at the spatial position between the first and second images. The method calculates second coefficients based on the second vector of the transform domain. From the second coefficients, the method generates an output image showing motion amplified according to the second vector for each spatial position between the first and second images. 1. A method of amplifying temporal variation in at least two images , the method comprising:converting at least two images to a transform representation in a transform domain;for each particular spatial position within the at least two images, examining a plurality of coefficient values representing a neighborhood of the spatial position, the neighborhood of the spatial position being spatially the same across each of the at least two images;calculating a first vector in the transform domain based on the plurality of coefficient values representing the particular spatial position, the first vector representing change from a first image to a second image of the at least two images describing deformation;modifying the first vector to create a second vector in the transform domain representing amplified movement at the particular spatial position between the first and second images;calculating a second plurality of coefficients based on the second vector of the transform domain; andfrom the second plurality of coefficients, ...
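
A heavily simplified, pixel-domain analogue (Python/NumPy) of the amplification idea: the change between two frames is scaled and added back so that small temporal variations become visible. The patent operates on coefficient vectors in a transform domain; this sketch does not, and is only meant to convey the effect of modifying the change vector.

import numpy as np

def amplify_variation(first, second, alpha=10.0):
    first = first.astype(float)
    second = second.astype(float)
    amplified = second + alpha * (second - first)       # exaggerate the change
    return np.clip(amplified, 0, 255).astype(np.uint8)

a = np.full((4, 4), 100, np.uint8)
b = a.copy()
b[1, 1] = 103                          # a barely visible 3-level change
print(amplify_variation(a, b)[1, 1])   # 133: the change is now obvious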

Подробнее
14-02-2019 дата публикации

TRACKING OF HANDHELD SPORTING IMPLEMENTS USING COMPUTER VISION

Номер: US20190050636A1
Автор: Painter James G.
Принадлежит: SportsMEDIA Technology Corporation

A path and/or orientation of object approaching an athlete is tracked using two or more cameras. At least two sets of images of the object are obtained using at least two different cameras having different positions. Motion regions within images are identified, and candidate locations in 2D space of the object are identified within the motion region(s). Based thereon, a probable location in 3D space of the identifiable portion is identified, for each of a plurality of instants during which the object was approaching. A piecewise 3D trajectory of at least the identifiable portion of the object is approximated from the probable locations in 3D space of the object for multiple instants during which the object was approaching the athlete. A graphical representation of the 3D trajectory of the object is incorporated into at least one of the sets of images. 1. A method for tracking a sporting implement during a sporting event , comprising:providing at least two cameras constructed and configured for network communication with at least one processor, wherein the at least two cameras are located at different positions in a sporting event facility;the at least two cameras capturing at least two different sets of images of the sporting implement;the at least one processor receiving the at least two different sets of images of the sporting implement from the at least two cameras;the at least one processor identifying at least one motion region in the at least two different sets of images;the at least one processor identifying a first location of at least one portion of the sporting implement within the at least one motion region; andthe at least one processor identifying a second location of the at least one portion of the sporting implement based on the first location.2. The method of claim 1 , wherein the first location comprises at least one candidate location claim 1 , and wherein the second location comprises at least one probable location.3. The method of claim 2 , ...

Подробнее
03-03-2022 дата публикации

Method and apparatus for controlling a lighting fixture based on motion detection and related lighting fixture

Номер: US20220070987A1
Принадлежит: Himax Imaging Ltd

A method of controlling a lighting fixture based on motion detection includes: receiving a plurality of captured image frames; obtaining a plurality of resampled image frames by resampling the captured image frames according to regional characteristics of the resampled image frames; dynamically adjusting a sensitivity for motion detection according to the regional characteristics of the resampled image frames; performing motion detection on the resampled image frames according to the sensitivity; and controlling the lighting fixture according to a result of the motion detection.

Подробнее
22-02-2018 дата публикации

Systems and Methods of Detecting Motion

Номер: US20180053313A1
Автор: David M. Smith
Принадлежит: Individual

Motion is detected within a defined proximity of a vehicle or fixed location equipped with a recording system by correlating frame-to-frame changes in the video streams of two or more cameras with converging views.

Подробнее
20-02-2020 дата публикации

METHOD OF NEEDLE LOCALIZATION VIA PARTIAL COMPUTERIZED TOMOGRAPHIC SCANNING AND SYSTEM THEREOF

Номер: US20200054295A1
Автор: JOSKOWICZ Leo, MEDAN Guy
Принадлежит:

There is provided a method of locating a tip of a metallic instrument inserted in a body, utilizing a baseline sinogram comprising projections in N exposure directions and derived from a prior computerized tomography (CT) scanning of the body, the method comprising: performing three-dimensional Radon space registration of a sparse repeat sinogram to the baseline sinogram, the repeat CT scanning having the metallic instrument inserted into the body, the metallic instrument having an attached marker located at a known distance from the instrument tip; subtracting the baseline sinogram from the repeat sinogram in accordance with the registration parameters to obtain projection difference images; and using the projection difference images and the known distance of the attached marker from the metallic instrument tip to determine a three-dimensional location of the metallic instrument tip. 1. A computer-implemented method of locating a tip of a metallic instrument inserted in a body , wherein the method utilizes a baseline sinogram derived from a prior computerized tomography (CT) scanning of the body , and wherein the baseline sinogram comprises projections in N exposure directions , the method comprising: 'wherein the sparse repeat sinogram is derived from a repeat CT scanning of the body and comprises projections in n exposure directions, n being substantially less than N, and wherein the repeat CT scanning is provided with the metallic instrument inserted into the body, the metallic instrument having an attached marker located at a known distance from the instrument tip;', 'a) performing, by a computer, three-dimensional Radon space registration of a sparse repeat sinogram to the baseline sinogram, thereby giving rise to registration parameters,'}b) subtracting, by the computer, the baseline sinogram from the repeat sinogram in accordance with the registration parameters to obtain projection difference images, wherein each of the projection difference images is ...

Подробнее
10-03-2022 дата публикации

SYSTEMS AND METHODS FOR REPLACING SENSITIVE DATA

Номер: US20220075670A1
Принадлежит: Capital One Services, LLC

A model optimizer is disclosed for managing training of models with automatic hyperparameter tuning. The model optimizer can perform a process including multiple steps. The steps can include receiving a model generation request, retrieving from a model storage a stored model and a stored hyperparameter value for the stored model, and provisioning computing resources with the stored model according to the stored hyperparameter value to generate a first trained model. The steps can further include provisioning the computing resources with the stored model according to a new hyperparameter value to generate a second trained model, determining a satisfaction of a termination condition, storing the second trained model and the new hyperparameter value in the model storage, and providing the second trained model in response to the model generation request. 120-. (canceled)21. A system comprising:at least one processor; and receiving actual data having at least one sensitive data portion;', 'determining a class associated with the at least one sensitive data portion;', 'accessing a synthetic data generation model trained using a data space having data of the class;', 'generating, using the synthetic data generation model, at least one synthetic data portion; and', 'replacing the at least one sensitive data portion with the at least one synthetic data portion., 'at least one non-transitory memory storing instructions that, when executed by the at least one processor, cause the system to perform operations comprising22. The system of claim 21 , wherein the synthetic data generation model is trained to generate synthetic data satisfying a similarity criterion.23. The system of claim 22 , wherein the similarity criterion is based on at least one of a statistical correlation score claim 22 , a data similarity score claim 22 , or a data quality score.24. The system of claim 22 , wherein the synthetic data generation model is a generative adversarial network (GAN).25. The system ...

Подробнее
21-02-2019 дата публикации

LEARNING RIGIDITY OF DYNAMIC SCENES FOR THREE-DIMENSIONAL SCENE FLOW ESTIMATION

Номер: US20190057509A1
Принадлежит:

A neural network model receives color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space. Motion of objects in the image sequence results from a combination of a dynamic camera orientation and motion or a change in the shape of an object in the 3D space. The neural network model generates two components that are used to produce a 3D motion field representing the dynamic (non-rigid) part of the scene. The two components are information identifying dynamic and static portions of each image and the camera orientation. The dynamic portions of each image contain motion in the 3D space that is independent of the camera orientation. In other words, the motion in the 3D space (estimated 3D scene flow data) is separated from the motion of the camera. 1. A computer-implemented method , comprising:receiving color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space including a first image and a second image, wherein the first image is captured from a first viewpoint and the second image is captured from a second viewpoint; andprocessing the color data by layers of a neural network model to generate segmentation data indicating a portion of the second image where a first object changes position or shape relative the first object in the first image.2. The computer-implemented method of claim 1 , further comprising processing the color data by the layers of the neural network model to produce a pose of the second viewpoint claim 1 , the pose including a position and orientation in the 3D space.3. The computer-implemented method of claim 2 , further comprising:warping the pose to generate 2D viewpoint motion flow data for the second image; andsubtracting the 2D viewpoint motion flow data from two-dimensional optical flow data for the sequence of images to produce estimated projected 3D scene flow data for the second image.4. The computer-implemented method of claim 2 , further comprising refining ...
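
A minimal sketch (Python/NumPy) of the flow decomposition described above: the 2D motion induced by the camera is subtracted from the observed optical flow, and the residual is attributed to independently moving (non-rigid) scene parts. The flows here are hand-made toy arrays; estimating them is the job of the neural network model.

import numpy as np

def residual_flow(optical_flow, camera_flow, threshold=1.0):
    # optical_flow, camera_flow: HxWx2 arrays of per-pixel (dx, dy) in pixels.
    residual = optical_flow - camera_flow
    dynamic_mask = np.linalg.norm(residual, axis=2) > threshold
    return residual, dynamic_mask

flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0                  # camera pans: everything appears to move right
flow[1, 1] = (6.0, 0.0)             # one pixel belongs to an independently moving object
cam = np.zeros((4, 4, 2))
cam[..., 0] = 2.0
res, mask = residual_flow(flow, cam)
print(mask.sum(), res[1, 1])        # 1 [4. 0.]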

Подробнее
20-02-2020 дата публикации

Methods and systems for using artificial intelligence to evaluate, correct, and monitor user attentiveness

Номер: US20200057487A1
Принадлежит: Telelingo D/b/a Dreyev

In an aspect, a system for using artificial intelligence to evaluate, correct, and monitor user attentiveness includes a forward-facing camera, the forward-facing camera configured to capture a video feed of a field of vision on a digital screen, at least a user alert mechanism configured to output a directional alert to a user, a processing unit in communication with the forward-facing camera and the at least a user alert mechanism, a screen location to spatial location map operating on the processing unit, and a motion detection analyzer operating on the processing unit, the motion detection analyzer designed and configured to detect, on the digital screen, a rapid parameter change, determine a screen location on the digital screen of the rapid parameter change, retrieve, from the screen location to spatial location map, a spatial location based on the screen location, and generate, using the spatial location, the directional alert.

Подробнее
04-03-2021 дата публикации

LIVE CELL VISUALIZATION AND ANALYSIS

Номер: US20210065362A1
Принадлежит:

Systems and methods are provided for automatically imaging and analyzing cell samples in an incubator. An actuated microscope operates to generate images of samples within wells of a sample container across days, weeks, or months. A plurality of images is generated for each scan of a particular well, and the images within such a scan are used to image and analyze metabolically active cells in the well. This analysis includes generating a “range image” by subtracting the minimum intensity value, across the scan, for each pixel from the maximum intensity value. This range image thus emphasizes cells or portions of cells that exhibit changes in activity over a scan period (e.g., neurons, myocytes, cardiomyocytes) while de-emphasizing regions that exhibit consistently high intensities when imaged (e.g., regions exhibiting a great deal of autofluorescence unrelated to cell activity). 1. A method comprising:capturing a movie of a cell culture vessel; andgenerating a static range image from the movie, wherein the range image is composed of pixels representing the minimum fluorescence intensity subtracted from the maximum fluorescence intensity at each pixel location over a complete scan period.2. The method of claim 1 , further comprising:defining objects by segmenting the range image.3. The method of claim 2 , further comprising:from the objects, deriving average object mean intensities from the complete scan period.4. The method of any of claims 2 , further comprising:from the objects, deriving a pairwise correlation analysis of all object traces over the complete scan period.5. The method of claim 2 , further comprising:from the objects, deriving a mean of all objects mean burst duration from the complete scan period.6. The method of claim 2 , further comprising:from the objects, deriving a strength of each burst of all objects from the complete scan period.7. The method of claim 6 , further comprising:deriving an overall burst strength metric as a mean of all objects ...
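
A minimal sketch (Python/NumPy) of the range image itself: for every pixel, the minimum intensity over a scan is subtracted from the maximum, which highlights locations whose intensity changes (active cells) and suppresses constantly bright regions. The toy frames and sizes are placeholders.

import numpy as np

def range_image(scan_frames):
    stack = np.stack([f.astype(float) for f in scan_frames])
    return stack.max(axis=0) - stack.min(axis=0)

frames = [np.zeros((6, 6)) for _ in range(5)]
for t, f in enumerate(frames):
    f[2, 2] = 50 + 40 * (t % 2)     # a flickering (active) cell
    f[4, 4] = 200                   # constant autofluorescence
r = range_image(frames)
print(r[2, 2], r[4, 4])             # 40.0 0.0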

Подробнее
04-03-2021 дата публикации

SECURITY CAMERA AND MOTION DETECTING METHOD FOR SECURITY CAMERA

Номер: US20210065385A1
Принадлежит:

A security camera with a motion detection function, which comprises: an image sensor, configured to capture original images; a variation level computation circuit, configured to compute image variation levels of the original images; a long term computation circuit, configured to calculate a first average level for the image variation levels corresponding to M of the original images; a short term computation circuit, configured to calculate a second average level for the image variation levels corresponding to N of the original images, wherein M>N; and a motion determining circuit, configured to determine whether a motion of an object appears in a detection range of the image sensor according to a relation between the first average level and the second average level. By such security camera, the interference caused by noise or small object can be avoided. Accordingly, the motion detection of the security camera can be more accurate. 1. A security camera with a motion detection function , comprising:an image sensor, configured to capture a plurality of original images;a variation level computation circuit, configured to compute image variation levels of the original images;a long term computation circuit, configured to calculate a first average level for the image variation levels corresponding to M of the original images;a short term computation circuit, configured to calculate a second average level for the image variation levels corresponding to N of the original images, wherein M>N; anda motion determining circuit, configured to determine whether a motion of an object appears in a detection range of the image sensor according to a relation between the first average level and the second average level.2. The security camera of claim 1 , wherein the motion determining circuit determines whether the motion of the object occurs in the detection range of the image sensor according to a relation between a motion threshold and a difference between the first average level ...
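
A minimal sketch (Python) of the long-term/short-term comparison: running averages of an image-variation level over M and N samples (M>N) are maintained, and motion is reported when the short-term average exceeds the long-term average by a margin, which suppresses brief noise spikes. M, N and the margin are assumptions.

from collections import deque

class MotionDeterminer:
    def __init__(self, m=30, n=5, margin=8.0):
        self.long_term = deque(maxlen=m)    # last M variation levels
        self.short_term = deque(maxlen=n)   # last N variation levels, N < M
        self.margin = margin

    def update(self, variation_level):
        self.long_term.append(variation_level)
        self.short_term.append(variation_level)
        long_avg = sum(self.long_term) / len(self.long_term)
        short_avg = sum(self.short_term) / len(self.short_term)
        return short_avg > long_avg + self.margin   # sustained rise => motion

det = MotionDeterminer()
levels = [2.0] * 30 + [40.0] * 5             # quiet scene, then an object moves
print([det.update(v) for v in levels][-1])   # True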

Подробнее
01-03-2018 дата публикации

APPARATUS AND METHOD FOR DETECTING OBJECT AUTOMATICALLY AND ESTIMATING DEPTH INFORMATION OF IMAGE CAPTURED BY IMAGING DEVICE HAVING MULTIPLE COLOR-FILTER APERTURE

Номер: US20180063511A1

Disclosed are an apparatus and a method for detecting an object automatically and estimating depth information of an image captured by an imaging device having a multiple color-filter aperture. A background generation unit detects a movement from a current image frame among a plurality of continuous image frames captured by an MCA camera to generate a background image frame corresponding to the current image frame. An object detection unit detects an object region included in the current image frame based on differentiation between a plurality of color channels of the current image frame and a plurality of color channels of the background image frame. According to an embodiment of the present invention, it is possible to automatically detect an object by a repetitively updated background image frame and to accurately estimate object information by separately detecting an object for each color channel by considering a property of the MCA camera. 1. A depth information estimation apparatus comprising: a color shift vector calculation unit configured to calculate a color shift vector indicating a degree of color channel shift in an edge region extracted from color channels of an input image captured by an imaging device having different color filters installed in a plurality of openings formed in an aperture; and a depth map estimation unit configured to estimate a sparse depth map for the edge region by using a value of the estimated color shift vector, and interpolate depth information on a remaining region other than the edge region of the input image based on the sparse depth map to estimate a full depth map for the input image. 2. The apparatus of claim 1, wherein the depth map estimation unit estimates the full depth map from the sparse depth map as expressed in Equation A below: $d = (L + \lambda A)^{-1} \lambda \hat{d}$ [Equation A], where d is a full depth map, L is a matting Laplacian matrix, A is a ...
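A toy numerical sketch of the interpolation step in Equation A, under simplifying assumptions: a plain 4-neighbour grid Laplacian stands in for the matting Laplacian L, A is taken as a diagonal indicator of pixels with known sparse (edge-region) depth, and d̂ is zero where unknown, so λd̂ equals λAd̂. The grid size and names are illustrative.

```python
import numpy as np

def full_depth_from_sparse(d_hat: np.ndarray, known: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Solve (L + lam*A) d = lam*d_hat on a small H x W grid (dense solve for clarity)."""
    h, w = d_hat.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    L = np.zeros((n, n))
    for dy, dx in ((0, 1), (1, 0)):           # 4-neighbour grid Laplacian (stand-in for the matting Laplacian)
        a = idx[: h - dy, : w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        L[a, b] -= 1.0
        L[b, a] -= 1.0
        L[a, a] += 1.0
        L[b, b] += 1.0
    A = np.diag(known.ravel().astype(float))   # A_ii = 1 where sparse depth is available
    d = np.linalg.solve(L + lam * A, lam * d_hat.ravel())  # d_hat is zero where unknown
    return d.reshape(h, w)

# Sparse depth known only along two "edges"; the solve fills in the remaining region smoothly.
d_hat = np.zeros((6, 6)); d_hat[:, 0] = 1.0; d_hat[:, -1] = 3.0
known = np.zeros((6, 6), bool); known[:, 0] = known[:, -1] = True
print(full_depth_from_sparse(d_hat, known).round(2))
```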

Подробнее
04-03-2021 дата публикации

SYSTEMS AND METHODS FOR RECONSTRUCTING FRAMES

Номер: US20210067801A1
Принадлежит: Disney Enterprises, Inc.

Systems and methods are disclosed for reconstructing a frame. A computer-implemented method may use a computer system that includes non-transient electronic storage, a graphical user interface, and one or more physical computer processors. The computer-implemented method may include: obtaining one or more reference frames from non-transient electronic storage, generating one or more displacement maps based on the one or more reference frames and a target frame with the physical computer processor, generating one or more warped frames based on the one or more reference frames and the one or more displacement maps with the physical computer processor, obtaining a conditioned reconstruction model from the non-transient electronic storage, and generating one or more blending coefficients and one or more reconstructed displacement maps by applying the one or more displacement maps, the one or more warped frames, and a target frame to the conditioned reconstruction model with the physical computer processor. 1. A computer-implemented method comprising:generating, using an optical flow model, one or more displacement maps based on one or more reference frames and a target frame;generating one or more warped frames based on the one or more reference frames and the one or more displacement maps;generating a conditioned reconstruction model by training an initial reconstruction model using training content and one or more reconstruction parameters, wherein the training content comprising a training target frame and one or more training reference frames, and wherein the conditioned reconstruction model optimizes for the one or more reconstruction parameters; andgenerating, using the conditioned reconstruction model, one or more blending coefficients and one or more reconstructed displacement maps based on the one or more displacement maps, the one or more warped frames, and the target frame.2. The computer-implemented method of claim 1 , further comprisinggenerating a ...

Подробнее
17-03-2022 дата публикации

DATASET CONNECTOR AND CRAWLER TO IDENTIFY DATA LINEAGE AND SEGMENT DATA

Номер: US20220083402A1
Принадлежит: Capital One Services, LLC

Systems and methods for connecting datasets are disclosed. For example, a system may include a memory unit storing instructions and a processor configured to execute the instructions to perform operations. The operations may include receiving a plurality of datasets and a request to identify a cluster of connected datasets among the received plurality of datasets. The operations may include selecting a dataset. In some embodiments, the operations include identifying a data schema of the selected dataset and determining a statistical metric of the selected dataset. The operations may include identifying foreign key scores. The operations may include generating a plurality of edges between the datasets based on the foreign key scores, the data schema, and the statistical metric. The operations may include segmenting and returning datasets based on the plurality of edges. 1. A dataset connector system comprising:one or more memory units storing instructions; and{'claim-text': ['receiving, by the dataset connector system, a plurality of datasets;', 'receiving, by the dataset connector system, a request to identify a cluster of connected datasets among the received plurality of datasets;', 'selecting, by the dataset connector system, a dataset from among the received plurality of datasets;', 'identifying, by a data profiling model, a data schema of the selected dataset;', 'determining, by the data profiling model, a statistical metric of the elected dataset;', 'identifying, by the data profiling model, a plurality of candidate foreign keys of the selected dataset;', 'determining, by a data mapping model, respective foreign key scores for individual ones of the plurality of candidate foreign keys;', 'generating, by the data mapping model, a plurality of edges between the selected dataset and the received plurality of datasets based on the foreign key scores, the data schema, and the statistical metric;', 'segmenting, by a data classification model, a cluster of connected ...

Подробнее
28-02-2019 дата публикации

APPARATUS AND METHODS FOR VIDEO ALARM VERIFICATION

Номер: US20190066472A1
Принадлежит: CHECKVIDEO LLC

A method for verification of alarms is disclosed. The method involves receiving an alarm signal trigger associated with an alarm signal, receiving video data from a premise associated with the alarm signal, rapidly analyzing the video data to test for the existence a significant event, and when a significant event exists, sending a representation of a segment of interest of the video data, the segment of interest being associated with the significant event, to a user. 1. A non-transitory processor-readable medium storing code representing instructions to be executed by a processor , the code comprising code to cause the processor to:receive from a sensor an indication of an event at a monitored premise having a predetermined zone;analyze image data associated with the monitored premise (1) based on a user-defined profile to identify an object associated with the event, and (2) to identify the predetermined zone in which the object is located; andsend to a recipient an indication of an alarm event when the zone from which the object is located does not meet a zone authorization criterion assigned to the identity of the object and defined by the user-defined profile.2. The non-transitory processor-readable medium of claim 1 , the code further comprising code to cause the processor to identify a time associated with the event claim 1 ,the code to cause the processor to send includes code to cause the processor to send to the recipient the indication of the alarm event when the time associated with the event does not meet a time authorization criterion assigned to the identity of the object and defined by the user-defined profile.3. The non-transitory processor-readable medium of claim 1 , wherein the code to cause the processor to analyze the image data includes code to cause the processor to analyze the image data in response to the indication of the event.4. The non-transitory processor-readable medium of claim 1 , wherein the object is a first object claim 1 , the ...

Подробнее
12-03-2015 дата публикации

Dynamic diagnosis support information generation system

Номер: US20150073257A1
Принадлежит: KONICA MINOLTA INC

A dynamic diagnosis support information generation system includes: a radiation generator capable of irradiating pulsed radiation; a radiation detector which is provided with a plurality of detecting elements arranged in two dimensions, detects the pulsed radiation irradiated from the radiation generator at each of the plurality of detecting elements and generates frame images successively; and an analysis section which calculates and outputs a feature value relating to a dynamic image of a subject based on a plurality of frame images generated by radiographing the subject by using the radiation generator and the radiation detector, wherein the analysis section calculates the feature value relating to the dynamic image of the subject by associating with one another the pixels representing outputs of a detecting element at the same position in the radiation detector across the plurality of frame images.

Подробнее
27-02-2020 дата публикации

DATA MODEL GENERATION USING GENERATIVE ADVERSARIAL NETWORKS

Номер: US20200065221A1
Принадлежит: Capital One Services, LLC

Methods for generating data models using a generative adversarial network can begin by receiving a data model generation request by a model optimizer from an interface. The model optimizer can provision computing resources with a data model. As a further step, a synthetic dataset for training the data model can be generated using a generative network of a generative adversarial network, the generative network trained to generate output data differing at least a predetermined amount from a reference dataset according to a similarity metric. The computing resources can train the data model using the synthetic dataset. The model optimizer can evaluate performance criteria of the data model and, based on the evaluation of the performance criteria of the data model, store the data model and metadata of the data model in a model storage. The data model can then be used to process production data. 120-. (canceled)21. A method for generating data models , comprising:receiving, by a model optimizer from an interface, a data model generation request;provisioning, by the model optimizer, computing resources with a data model; identifying a first point and a second point in the sample space,', 'generating a first representative point and a second representative point in the code space using the first point, the second point, and an encoder network corresponding to the decoder network,', 'determining a vector connecting the first representative point and the second representative point,', 'generating an extreme point in the code space by sampling the code space along an extension of the vector beyond the second representative point, and', 'converting the extreme point in the code space into the sample space using the decoder network;, 'generating, by a dataset generator, a synthetic dataset for training the data model using a generative network, wherein the generative network comprises a decoder network configured to generate decoder output data in a sample space having a first ...
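A minimal sketch of the code-space extrapolation step (first point, second point, connecting vector, extreme point beyond the second representative point), with trivial linear maps standing in for the trained encoder and decoder networks; everything here is illustrative, not the patent's actual networks.

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(8, 3))          # stand-in "encoder" weights (sample space -> code space)
W_dec = np.linalg.pinv(W_enc)            # stand-in "decoder" weights (code space -> sample space)

def encoder(x: np.ndarray) -> np.ndarray:
    return x @ W_enc                     # placeholder for the trained encoder network

def decoder(z: np.ndarray) -> np.ndarray:
    return z @ W_dec                     # placeholder for the trained decoder network

x1, x2 = rng.normal(size=8), rng.normal(size=8)   # first and second point in the sample space
z1, z2 = encoder(x1), encoder(x2)                 # first and second representative point in the code space
v = z2 - z1                                       # vector connecting the representative points
z_extreme = z2 + 1.5 * v                          # sample the code space beyond the second point
x_extreme = decoder(z_extreme)                    # convert the extreme point back into the sample space
print(x_extreme.shape)
```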

Подробнее
27-02-2020 дата публикации

AUTONOMOUS CAMERA-TO-CAMERA CHANGE DETECTION SYSTEM

Номер: US20200065977A1
Автор: Duran Melvin G.
Принадлежит:

Embodiments disclosed herein are directed to an autonomous camera-to-camera scene change detection system whereby a first camera controls a second camera without human input. More specifically, a first camera having a field of view may receive and process an image. Based on the processed image, the first camera sends instructions to a second camera to focus in on an area of interest or a target identified in the processed image. 120.-. (canceled)21. A camera-to-camera control system , comprising:a first camera having a fixed field of view, the first camera being trained to distinguish between an anticipated change in the fixed field of view and an unanticipated change in the fixed field of view; and instructions for tilting the second camera;', 'instructions for panning the second camera;', 'instructions for zooming the second camera; or', 'instructions for tracking movement of the object of interest as the object of interest moves through the fixed field of view., 'receive camera control instructions from the first camera in response to the first camera detecting an unanticipated change caused by an object of interest in the fixed field of view, the camera control instructions comprising one or more of, 'a second camera communicatively coupled to the first camera and having a second field of view that is at least partially contained within, and moveable within, the fixed field of view, the second camera adapted to22. The camera-to-camera control system of claim 21 , wherein the camera control instructions further comprise instructions for capturing one or more images of the object of interest as the object of interest moves through the fixed field of view.23. The camera-to-camera control system of claim 21 , wherein the fixed field of view is divided into at least a first zone and a second zone.24. The camera-to-camera control system of claim 23 , wherein a size of at least one of the first zone or the second zone is automatically determined.25. The camera-to- ...

Подробнее
27-02-2020 дата публикации

Eccentricity maps

Номер: US20200065980A1
Принадлежит: FORD GLOBAL TECHNOLOGIES LLC

A computing system can determine moving objects in a sequence of images by recursively calculating red-green-blue (RGB) eccentricity over a video data stream. A vehicle can be operated based on the determined moving objects. The video data stream can be acquired by a color video sensor included in the vehicle or a traffic infrastructure system.
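A sketch of a per-pixel recursive eccentricity map, using a standard recursive mean/variance formulation from the eccentricity literature; the patent's exact update (for example a finite-window or forgetting-factor variant) may differ.

```python
import numpy as np

def eccentricity_maps(frames, eps=1e-6):
    """Yield a per-pixel eccentricity map for each RGB frame, updated recursively."""
    mean = None
    var = None
    for k, frame in enumerate(frames, start=1):
        x = frame.astype(np.float64)
        if k == 1:
            mean, var = x.copy(), np.zeros(x.shape[:2])
            yield np.zeros(x.shape[:2])
            continue
        mean = (k - 1) / k * mean + x / k                 # recursive per-pixel RGB mean
        d2 = np.sum((x - mean) ** 2, axis=-1)             # squared colour distance to the running mean
        var = (k - 1) / k * var + d2 / (k - 1)            # recursive per-pixel variance
        yield 1.0 / k + d2 / (k * var + eps)              # high values flag moving pixels

frames = np.random.default_rng(2).integers(0, 255, size=(10, 48, 64, 3))
print(list(eccentricity_maps(frames))[-1].shape)
```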

Подробнее
27-02-2020 дата публикации

MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD

Номер: US20200065981A1
Принадлежит:

Provided are an inexpensive and safe moving object detection apparatus and moving object detection method that enable accurate detection of a moving object from an image sequence captured by a monocular camera at a high speed. A representative configuration of the moving object detection apparatus according to the present invention is provided with a horizon line detection unit that detects a horizon line in a frame image, an edge image generation unit that generates an edge image from a frame image, and a moving object detection unit that sets a detection box on a moving object, and the edge image generation unit extracts an edge image below a horizon line detected by the horizon line detection unit, and the moving object detection unit generates a foreground by combining the difference between the edge image below the horizon line and a background image of the edge image with the difference between a gray scale image and a background image of the gray scale image. 1. A moving object detection apparatus comprising:an input unit for inputting an image sequence;a frame acquisition unit that continuously extracts a plurality of frame images from an image sequence;a horizon line detection unit that detects a horizon line in a frame image;an edge image generation unit that generates an edge image from a frame image; anda moving object detection unit that sets a detection box on a moving object,wherein the edge image generation unit extracts an edge image below the horizon line detected by the horizon line detection unit, andextracts a difference between the edge image below the horizon line and a background image of the edge image using a background subtraction method, andthe moving object detection unit converts a frame image into a gray scale image and extracts a difference between the gray scale image and a background image of the gray scale image using the background subtraction method, andgenerates a foreground by combining the difference between the edge image ...
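A compact sketch combining grayscale background subtraction over the whole frame with edge background subtraction restricted to the region below the horizon line, assuming precomputed background images and a simple gradient-magnitude edge operator; the thresholds and horizon row are illustrative.

```python
import numpy as np

def edge_image(gray: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge image (a simple stand-in for the patent's edge extraction)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy)

def foreground_mask(gray, bg_gray, bg_edge, horizon_row, thr_gray=25.0, thr_edge=15.0):
    """Combine whole-frame grayscale background subtraction with edge background
    subtraction applied only below the detected horizon line."""
    fg_gray = np.abs(gray.astype(np.float64) - bg_gray) > thr_gray
    fg_edge = np.zeros_like(fg_gray)
    fg_edge[horizon_row:] = np.abs(edge_image(gray) - bg_edge)[horizon_row:] > thr_edge
    return fg_gray | fg_edge          # combined foreground used to place detection boxes

# Background models would normally be maintained with a running average; the data here is illustrative.
gray = np.random.default_rng(3).integers(0, 255, size=(120, 160)).astype(np.float64)
bg_gray = np.full_like(gray, gray.mean())
mask = foreground_mask(gray, bg_gray, edge_image(bg_gray), horizon_row=60)
print(mask.shape)
```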

Подробнее
24-03-2022 дата публикации

Ultrasound Speckle Decorrelation Estimation of Lung Motion and Ventilation

Номер: US20220087647A1
Принадлежит:

A method of estimating lung motion includes collecting multiple ultrasound image data captured at one or more locations of a sample region of tissue. The method further includes comparing the multiple ultrasound image data and determining temporal correlation coefficients between each of the multiple ultrasound image data. The method still further includes displaying an image of the sample region of the tissue with the temporal correlation coefficients identified, thereby indicating lung motion. In further methods, the determined temporal correlation coefficients are used to determine an amount of decorrelation, which can be used to determine strain of the tissue over the sample region and to calculate lung displacements and lung shape changes representing ventilation. 1. A method of estimating lung motion , the method comprising:collecting multiple ultrasound image data captured at one or more locations of a sample region of tissue;comparing the multiple ultrasound image data and determining temporal correlation coefficients between each of the multiple ultrasound image data; anddisplaying an image of the sample region of the tissue with the temporal correlation coefficients identified thereby indicating lung motion.2. The method of claim 1 , further comprising:collecting the multiple ultrasound image data at a plurality of locations of the sample region; andcomparing the multiple ultrasound image data and determining temporal correlation coefficients for each of the plurality of locations of the sample region.3. The method of claim 1 , further comprising:identifying a surface of the sample region based on the determined temporal correlation coefficients.4. The method of claim 1 , further comprising:identifying an internal structure of the sample region based on the determined temporal correlation coefficients.5. The method of claim 1 , wherein collecting the multiple ultrasound image data captured at the one or more locations of the sample region of tissue ...
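A sketch of per-window temporal correlation coefficients between two ultrasound frames, using the Pearson correlation over small windows; low coefficients (strong speckle decorrelation) would indicate motion. The window size and random test data are illustrative.

```python
import numpy as np

def temporal_correlation_map(frame_a: np.ndarray, frame_b: np.ndarray, win: int = 8) -> np.ndarray:
    """Pearson correlation coefficient between corresponding windows of two ultrasound frames."""
    h, w = frame_a.shape
    corr = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            a = frame_a[i*win:(i+1)*win, j*win:(j+1)*win].astype(np.float64).ravel()
            b = frame_b[i*win:(i+1)*win, j*win:(j+1)*win].astype(np.float64).ravel()
            a -= a.mean(); b -= b.mean()
            corr[i, j] = (a @ b) / (np.sqrt((a @ a) * (b @ b)) + 1e-12)
    return corr

decorrelation = 1.0 - temporal_correlation_map(np.random.rand(64, 64), np.random.rand(64, 64))
print(decorrelation.shape)  # one decorrelation value per window; could feed strain/displacement estimates
```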

Подробнее
11-03-2021 дата публикации

Image processing method and image processing circuit

Номер: US20210073952A1
Автор: SAN GUANGYU
Принадлежит:

An image processing method and an image processing circuit are provided. The method and circuit are applied to motion estimation. The method includes the steps of: performing low-pass filtering on a first image and a second image, wherein the first image is part of a first frame, the second image is part of a second frame, and the first frame is different from the second frame; calculating a first characteristic value of the first image and calculating a second characteristic value of the second image; calculating a sum of absolute difference (SAD) between the first image and the second image; blending the difference between the first characteristic value and the second characteristic value and the SAD to generate a blended result; and estimating a motion vector between the first image and the second image according to the blended result. 1. A circuit , comprising:a memory configured to store a plurality of pixel data of at least one part of a first frame and a plurality of pixel data of at least one part of a second frame, the first frame being different from the second frame; and performing low-pass filtering on a first image and a second image, the first image being a part of the first frame and the second image being a part of the second frame;', 'calculating a first characteristic value of the first image and a second characteristic value of the second image;', 'calculating a sum of absolute differences (SAD) between the first image and the second image;', 'calculating a difference between the first characteristic value and the second characteristic value;', 'blending the difference and the SAD to generate a blended result; and', 'estimating a motion vector between the first image and the second image according to the blended result., 'a processor coupled to the memory and configured to perform following steps2. The circuit of claim 1 , wherein the processor further performs following steps to calculate the first characteristic value and the second ...
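A sketch of the blended-cost motion estimation, assuming a box low-pass filter and using mean intensity as the characteristic value (the claims leave its exact definition open); the blending weight and search range are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def motion_vector(block_a, frame_b, top, left, search=4, alpha=0.5):
    """Estimate the motion vector of block_a (taken from frame A at (top, left)) within frame B."""
    bh, bw = block_a.shape
    a = uniform_filter(block_a.astype(np.float64), size=3)    # low-pass filtered first image
    ca = a.mean()                                             # first characteristic value (assumed: mean)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > frame_b.shape[0] or x + bw > frame_b.shape[1]:
                continue
            b = uniform_filter(frame_b[y:y+bh, x:x+bw].astype(np.float64), size=3)
            cb = b.mean()                                     # second characteristic value
            sad = np.abs(a - b).sum()                         # sum of absolute differences
            cost = alpha * sad + (1 - alpha) * abs(ca - cb) * a.size  # blended result (weighting assumed)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

frame_a = np.random.default_rng(4).random((64, 64))
frame_b = np.roll(frame_a, shift=(2, 1), axis=(0, 1))         # second frame: content shifted by (2, 1)
print(motion_vector(frame_a[16:24, 16:24], frame_b, 16, 16))  # -> (2, 1)
```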

Подробнее
11-03-2021 дата публикации

Motion Image Integration Method and Motion Image Integration System Capable of Merging Motion Object Images

Номер: US20210074002A1
Принадлежит: Realtek Semiconductor Corp

A motion image integration method includes acquiring a raw image, detecting a first motion region image and a second motion region image by using a motion detector according to the raw image, merging the first motion region image with the second motion region image for generating a motion object image according to a relative position between the first motion region image and the second motion region image, and cropping the raw image to generate a sub-image corresponding to the motion object image according to the motion object image. A range of the motion object image is greater than or equal to a total range of the first motion region image and the second motion region image. Shapes of the first motion region image, the second motion region image, and the motion object image are polygonal shapes.
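A small sketch of merging two motion-region boxes into a motion-object box based on their relative position and then cropping the raw image; the gap threshold and the rectangular (x, y, w, h) box format are assumptions.

```python
import numpy as np

def merge_boxes(box1, box2, gap=10):
    """Merge two motion-region boxes (x, y, w, h) into one motion-object box when their
    relative position is close enough; otherwise keep them separate (return None)."""
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    dx = max(0, max(x1, x2) - min(x1 + w1, x2 + w2))   # horizontal separation (0 if overlapping)
    dy = max(0, max(y1, y2) - min(y1 + h1, y2 + h2))   # vertical separation
    if dx > gap or dy > gap:
        return None
    x, y = min(x1, x2), min(y1, y2)
    return (x, y, max(x1 + w1, x2 + w2) - x, max(y1 + h1, y2 + h2) - y)  # covers at least both regions

def crop_sub_image(raw: np.ndarray, box) -> np.ndarray:
    """Crop the raw image to the merged motion-object box."""
    x, y, w, h = box
    return raw[y:y + h, x:x + w]

raw = np.zeros((240, 320), np.uint8)
merged = merge_boxes((50, 60, 30, 40), (85, 70, 20, 20))
print(merged, crop_sub_image(raw, merged).shape)   # (50, 60, 55, 40) (40, 55)
```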

Подробнее
05-06-2014 дата публикации

Image synthesis device and image synthesis method

Номер: US20140152876A1
Принадлежит: Omron Corp

An image synthesis device and a corresponding method, including first and second illumination light sources configured to illuminate an object to be detected, a light source controller configured to alternately turn on the light sources, a photographing section configured to photograph the object while the first and second illumination light sources are turned on to generate first and second images, respectively, a storage section configured to store first and second reference images generated by photographing a range of the photographing section where the illumination light sources are turned on and where the object is not present, a difference image generation section configured to generate a first difference image based on a difference between the first image and first reference image and a second difference image based on a difference between the second image and second reference image, and a synthesis section configured to synthesize the first and second difference images to generate a synthetic image.
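A minimal sketch of the difference-image synthesis, assuming stored reference images for each illumination condition and taking the per-pixel maximum as the synthesis rule (one simple choice; the abstract does not prescribe it).

```python
import numpy as np

def synthesize(img1, ref1, img2, ref2):
    """First/second difference images remove the stored object-free references; their
    per-pixel maximum is used here as one simple synthesis rule."""
    d1 = np.abs(img1.astype(np.int16) - ref1.astype(np.int16))   # first difference image
    d2 = np.abs(img2.astype(np.int16) - ref2.astype(np.int16))   # second difference image
    return np.maximum(d1, d2).astype(np.uint8)                   # synthetic image

rng = np.random.default_rng(6)
ref1 = rng.integers(0, 50, (32, 32), dtype=np.uint8)             # reference under light source 1
ref2 = rng.integers(0, 50, (32, 32), dtype=np.uint8)             # reference under light source 2
img1 = ref1.copy(); img1[10:14, 10:14] += 120                    # object seen under light source 1
img2 = ref2.copy(); img2[10:14, 12:16] += 90                     # object seen under light source 2
print(synthesize(img1, ref1, img2, ref2).max())                  # 120
```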

Подробнее
15-03-2018 дата публикации

DISTANCE IMAGE ACQUISITION APPARATUS AND DISTANCE IMAGE ACQUISITION METHOD

Номер: US20180073873A1
Принадлежит: FUJIFILM Corporation

A distance image acquisition apparatus includes a projection unit which performs a plurality of times of light emission with a plurality of light emission intensities to project a first pattern of structured light onto a subject within a distance measurement region, an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, images the subject in synchronization with each of the plurality of times of light emission, and generates a plurality of captured images corresponding to the plurality of light emission intensities, a normalization unit which normalizes a plurality of captured images with coefficients corresponding to the plurality of light emission intensities to acquire a plurality of normalized images, a discrimination unit which compares a plurality of normalized images and discriminates the first pattern projected from the projection unit, and a distance image acquisition unit. 1. A distance image acquisition apparatus comprising:a projection unit which performs a plurality of times of light emission with a plurality of light emission intensities to project a first pattern of structured light distributed in a two-dimensional manner with respect to a subject within a distance measurement region;an imaging unit which is provided in parallel with and apart from the projection unit by a baseline length, images the subject in synchronization with each of the plurality of times of light emission and generates a plurality of captured images including the first pattern reflected from the subject and corresponding to the plurality of light emission intensities;a normalization unit which normalizes the plurality of captured images with coefficients corresponding to the plurality of light emission intensities to acquire a plurality of normalized images;a discrimination unit which compares the plurality of normalized images and discriminates the first pattern projected from the projection unit; anda distance image ...
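A sketch of the normalization-and-comparison idea: each captured image is divided by its light-emission-intensity coefficient, and only pixels that stay consistent across the normalized images (light that scaled with the host's own emission) are kept as the first pattern. The tolerances and synthetic patterns are illustrative.

```python
import numpy as np

def discriminate_pattern(captured, intensities, agree_tol=10.0, detect_thr=30.0):
    """Divide each captured image by its emission-intensity coefficient and keep only pixels
    that are consistent across the normalized images, i.e. light that scaled with the host's
    own emission rather than with another apparatus or the ambient scene."""
    stack = np.stack([img.astype(np.float64) / c for img, c in zip(captured, intensities)])
    agree = np.ptp(stack, axis=0) < agree_tol     # consistent across light-emission levels
    bright = stack.mean(axis=0) > detect_thr      # actually contains projected light
    return agree & bright                         # discriminated first pattern

own = np.zeros((32, 32)); own[8:10, 8:10] = 1.0        # host's projected dots
other = np.zeros((32, 32)); other[20:22, 20:22] = 1.0  # dots projected by another apparatus
intensities = [1.0, 2.0, 4.0]
captured = [100.0 * own * c + 100.0 * other for c in intensities]
mask = discriminate_pattern(captured, intensities)
print(mask[8:10, 8:10].all(), mask[20:22, 20:22].any())  # True False
```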

Подробнее
18-03-2021 дата публикации

FEATURE-BASED JOINT RANGE OF MOTION CAPTURING SYSTEM AND RELATED METHODS

Номер: US20210076985A1
Автор: Leszko Filip
Принадлежит:

Feature-based joint range of motion (ROM) capture systems and methods can provide increased quality and quantity of data to assess joint ROM without requiring a patient to seek professional assistance or equipment. ROM systems disclosed herein can include a first pattern coupled to a first portion of patient anatomy on a first side of a joint and a second pattern coupled to a second portion of patient anatomy on a second side of the joint opposite the first side. At least one image containing the first and second patterns and the joint can be captured with an image capture device. The first and second patterns can be detected in the at least one image using a feature-based image recognition algorithm. Based on the detected patterns, at least one ROM metric of the associated joint, e.g., a maximum flexion angle, a maximum extension angle, or a range-of-motion, can be calculated. 1. A system for measuring joint range of motion , comprising:a first pattern configured to be coupled to a first portion of anatomy of a patient on a first side of a joint;a second pattern configured to be coupled to a second portion of anatomy of the patient on a second side of the joint opposite the first side;an image sensor configured to capture at least one image containing the joint, the first pattern, and the second pattern; anda processor configured to, for one or more of the at least one images, recognize the first pattern and the second pattern, calculate axes of the first and second portion of anatomy to which the first and second patterns are coupled, and calculate an angle between the axes;wherein the processor is further configured to calculate at least one range of motion metric based on the calculated angle between the axes in the at least one image.2. The system of claim 1 , wherein the range of motion metric is a full range of motion and the at least one image includes a first image in which the joint is at maximum extension and a second image in which the joint is at ...
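A small sketch of the angle computation once the two patterns have been detected: each pattern yields two keypoints defining an anatomical axis, and the joint angle is the angle between the axes. The keypoint coordinates here are made up.

```python
import numpy as np

def axis_angle_deg(p1, p2) -> float:
    """Orientation (degrees) of the anatomical axis through two detected pattern keypoints."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    return float(np.degrees(np.arctan2(v[1], v[0])))

def joint_angle_deg(first_pattern_pts, second_pattern_pts) -> float:
    """Angle between the axes defined by the first and second detected patterns."""
    diff = abs(axis_angle_deg(*first_pattern_pts) - axis_angle_deg(*second_pattern_pts)) % 360.0
    return min(diff, 360.0 - diff)

# Hypothetical pixel keypoints for the patterns on either side of a knee in one image.
angle = joint_angle_deg(((100, 400), (300, 400)), ((300, 400), (420, 250)))
print(round(angle, 1))
# Over a series of images, a range-of-motion metric is simply max(angles) - min(angles).
```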

Подробнее
17-03-2016 дата публикации

Method and apparatus for counting person

Номер: US20160078323A1
Принадлежит: SAMSUNG ELECTRONICS CO LTD

A counting method and apparatus are provided. The method and/or apparatus includes generating a regression tree by inputting information about a moving object contained in a plurality of images, in response to a new image being input, inputting information about a moving object contained in the new input image to the regression tree, and determining the number of people contained in the new image based on a result value of the regression tree.

Подробнее
24-03-2022 дата публикации

METHOD, SYSTEM, AND COMPUTER-READABLE MEDIUM FOR STYLIZING VIDEO FRAMES

Номер: US20220092728A1
Автор: Hsiao Jenhao
Принадлежит:

In an embodiment, a method includes receiving first and second images of a video sequence, wherein the first and second images are consecutive image frames; applying a style network model to the first and second images to generate first and second stylized images in a style of a style image, respectively; applying a loss network model to the first and second images, the first and second stylized images, and the style image to generate a loss function; determining a set of weights for the style network model based on the generated loss function; and stylizing the video frames using the style network model. The method can mitigate flicker artifacts between the stylized consecutive frames. 1. A method for stylizing video frames , comprising:receiving a first image and a second image of a video sequence, wherein the first image and the second image are consecutive image frames;applying a style network model associated with a style image to the first image and the second image to generate a first stylized image and a second stylized image in a style of the style image, respectively;applying a loss network model to the first image, the second image, the first stylized image, the second stylized image, and the style image to generate a loss function;determining a set of weights for the style network model based on the generated loss function; andstylizing, by at least one processor, the video frames by applying the style network model with the determined set of weights to the video frames.2. The method according to claim 1 , wherein the style network model comprises a first style network and a second style network claim 1 , and applying the style network model to the first image and the second image comprises:applying the first style network to the first image to generate the first stylized image in the style of the style image; andapplying the second style network to the second image to generate the second stylized image in the style of the style image.3. The method ...

Подробнее
24-03-2022 дата публикации

METHODS AND SYSTEMS OF TRACKING VELOCITY

Номер: US20220092794A1
Автор: VanSickle David
Принадлежит:

Systems and methods for determining a velocity of a fluid or an object are described. Systems and methods include receiving image data of the fluid or the object, the image data comprising a plurality of frames. Each frame comprises an array of pixel values. Systems and methods include creating a frame difference by subtracting an array of pixel values for a first frame of the image data from an array of pixel values for a second frame of the image data. Systems and methods include measuring a difference between a location of the object in the first frame of the image data and the second frame of the image data. Systems and methods include creating a correlation matrix based on the measured difference. Systems and methods include using the frame difference and the correlation matrix to automatically determine the velocity of the fluid or the object. 1. A computer-implemented method for determining a velocity of a fluid or an object , the method comprising performing operations as follows on a processor of a computer:receiving image data of the fluid or the object, the image data comprising a plurality of frames, wherein each frame comprises an array of pixel values; subtracting an array of pixel values for a first frame of the image data from an array of pixel values for a second frame of the image data, or', 'subtracting the array of pixel values for the second frame of the image data from the array of pixel values for the first frame of the image data;, 'creating an image using frame differencing byutilizing the image created with frame differencing to determine a location of the object or one or more objects within the fluid; 'based on the distance traveled by the object or the one or more objects within the fluid, automatically determine the velocity of the fluid or the object.', 'utilizing a correlation or a convolution to determine a distance traveled by the object or the one or more objects within the fluid between two or more frames; and'}220-. (canceled) ...
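A sketch of frame differencing followed by FFT-based cross-correlation to recover the per-frame displacement and hence velocity; the pixel scale, frame rate, and synthetic moving object are illustrative.

```python
import numpy as np

def frame_difference(frame1, frame2):
    """Subtract the pixel arrays of two frames to isolate what moved between them."""
    return frame2.astype(np.float64) - frame1.astype(np.float64)

def displacement_by_correlation(diff1, diff2):
    """Shift (dy, dx) of diff2 relative to diff1 via the peak of an FFT cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(diff1)) * np.fft.fft2(diff2)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2: dy -= h        # map cyclic shifts to signed displacements
    if dx > w // 2: dx -= w
    return int(dy), int(dx)

def velocity(frames, fps, metres_per_pixel):
    """Speed of the tracked object from two consecutive frame differences."""
    d1 = frame_difference(frames[0], frames[1])
    d2 = frame_difference(frames[1], frames[2])
    dy, dx = displacement_by_correlation(d1, d2)
    return np.hypot(dy, dx) * metres_per_pixel * fps

base = np.zeros((64, 64)); base[30:34, 10:14] = 255.0                # bright object
frames = [np.roll(base, shift=4 * i, axis=1) for i in range(3)]      # moves 4 px per frame
print(velocity(frames, fps=30, metres_per_pixel=0.01))               # 4 px * 0.01 m/px * 30 fps = 1.2
```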

Подробнее
05-03-2020 дата публикации

Rgbd sensing based object detection system and method thereof

Номер: US20200074228A1
Принадлежит: ROBERT BOSCH GMBH

An RGBD sensing based system for detecting, tracking, classifying, and reporting objects in real time is illustrated; it includes a processor, computer readable media, and a communication interface communicatively coupled to each other via a system bus. An object detection module integrated into the system detects and tracks any objects that move within its field of view.

Подробнее
05-03-2020 дата публикации

Parcel Theft Deterrence for A/V Recording and Communication Devices

Номер: US20200074824A1
Принадлежит: Amazon Technologies Inc

Parcel theft deterrence for audio/video (A/V) recording and communication devices, such as video doorbells and security cameras. When an A/V recording and communication device captures image data that includes a parcel, a parcel boundary may be created for monitoring the parcel within. In various embodiments, when the parcel is removed from the parcel boundary, a user alert may be generated to notify a user of a client device associated with the A/V recording and communication device that the parcel has been removed.

Подробнее
18-03-2021 дата публикации

SYSTEMS AND METHODS FOR SYNTHETIC DATA GENERATION FOR TIME-SERIES DATA USING DATA SEGMENTS

Номер: US20210081261A1
Принадлежит: Capital One Services, LLC

Systems and methods for generating synthetic data are disclosed. For example, a system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving a dataset including time-series data. The operations may include generating a plurality of data segments based on the dataset, determining respective segment parameters of the data segments, and determining respective distribution measures of the data segments. The operations may include training a parameter model to generate synthetic segment parameters. Training the parameter model may be based on the segment parameters. The operations may include training a distribution model to generate synthetic data segments. Training the distribution model may be based on the distribution measures and the segment parameters. The operations may include generating a synthetic dataset using the parameter model and the distribution model and storing the synthetic dataset. 120-. (canceled)21. A system for generating synthetic data , comprising:one or more memory units storing instructions; and receiving a request to generate a synthetic time-series dataset, the request including a request dataset;', 'determining a profile of the request dataset;', 'accessing a distribution model based on the determined profile of the request dataset, the distribution model having been trained to generate synthetic data segments based on distribution measures and segment parameters of actual time series data; and', 'generating, according to the distribution model, a synthetic time-series dataset., 'one or more processors that execute the instructions to perform operations comprising22. The system of claim 21 , wherein:the operations further comprise generating synthetic segment parameters using a parameter model; and generating synthetic data segments according to the distribution model; and', 'combining the synthetic data segments ...

Подробнее
18-03-2021 дата публикации

Classification using multiframe analysis

Номер: US20210081686A1
Принадлежит: Lytx Inc

A system for video analysis includes an interface and a processor. The interface is configured to receive a trigger indication. The processor is configured to determine a time sequence set of video frames associated with the trigger indication; determine a decision based at least in part on an analysis of the time sequence set of video frames; and indicate the decision.

Подробнее
18-03-2021 дата публикации

Information processing apparatus and non-transitory computer readable medium

Номер: US20210081696A1
Принадлежит: Fuji Xerox Co Ltd

An information processing apparatus includes a processor configured to: acquire an amount of color change, the amount of color change being an amount of color change caused by processing performed on image data, the amount of color change being acquired for each area with a color change within the image data; and extract an interest region from the image data, the interest region being an area in the image data where the amount of color change is greater than in other areas.
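A minimal sketch of extracting an interest region as the area whose amount of colour change (between the original and processed image data) exceeds that of other areas; the block size and the above-the-mean criterion are assumptions.

```python
import numpy as np

def interest_region(original, processed, block=16):
    """Per-area amount of colour change caused by processing; the interest region is the set of
    areas whose change is greater than in the other areas (here: above the mean change)."""
    diff = np.abs(processed.astype(np.float64) - original.astype(np.float64)).sum(axis=-1)
    h, w = diff.shape
    blocks = diff[:h - h % block, :w - w % block].reshape(h // block, block, w // block, block)
    change = blocks.mean(axis=(1, 3))        # amount of colour change per area
    return change > change.mean()            # mask of areas extracted as the interest region

original = np.zeros((64, 64, 3))
processed = original.copy(); processed[16:32, 16:32, 0] += 80.0    # processing shifted red in one area
print(interest_region(original, processed).astype(int))
```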

Подробнее
22-03-2018 дата публикации

Distance image acquisition apparatus and distance image acquisition method

Номер: US20180080761A1
Принадлежит: Fujifilm Corp

Disclosed are a distance image acquisition apparatus and a distance image acquisition method capable of acquiring a distance image with satisfactory accuracy based on a first pattern projected from a host apparatus even in a case where patterns of structured light having the same shape are projected simultaneously from the host apparatus and another apparatus. A distance image acquisition apparatus ( 10 ) includes a projection unit ( 12 ) which projects a pattern of structured light, a light modulation unit ( 22 ) which modulates a switching timing of projection and non-projection of the pattern with a code, an imaging unit ( 14 ) which is provided in parallel with and apart from the projection unit ( 12 ) by a baseline length, performs imaging in synchronization with a projection period and a non-projection period of the pattern, and generates a first captured image captured in the projection period of the pattern and a second captured image captured in the non-projection period of the pattern, a differential image generation unit ( 20 D) which generates a differential image of the first captured image and the second captured image, a pattern extraction unit ( 20 A) which extracts the pattern from the differential image, and a distance image acquisition unit ( 20 B) which acquires a distance image indicating a distance of a subject within a distance measurement region based on the pattern extracted by the pattern extraction unit ( 20 A).
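A sketch of the code-synchronized differencing idea: frames captured while the host's code says "project" are averaged, the average of the "non-projection" frames is subtracted, and light not modulated with that code cancels. The code, ambient level, and threshold are illustrative.

```python
import numpy as np

def pattern_from_code(frames, code, thr=20.0):
    """Average the frames captured during projection periods minus the average of the frames
    captured during non-projection periods, as dictated by the host's on/off code; light that
    is not modulated with this code (ambient light, other projectors) cancels out."""
    frames = np.asarray(frames, np.float64)
    on = np.asarray(code, bool)
    differential = frames[on].mean(axis=0) - frames[~on].mean(axis=0)   # differential image
    return differential > thr                                           # extracted first pattern

pattern = np.zeros((32, 32)); pattern[5, 5] = 200.0
code = [1, 0, 1, 1, 0, 1, 0, 0]                                  # host's projection/non-projection code
frames = [40.0 + pattern * bit for bit in code]                  # constant ambient level of 40
print(pattern_from_code(frames, code)[5, 5])                     # True
```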

Подробнее
14-03-2019 дата публикации

Virtualization of Tangible Interface Objects

Номер: US20190080173A1
Принадлежит: Tangible Play Inc

An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to detect motion in the activity scene based on the processing and, responsive to detecting the motion, process the video stream to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects.

Подробнее
22-03-2018 дата публикации

BACKGROUND FOREGROUND MODEL WITH DYNAMIC ABSORPTION WINDOW AND INCREMENTAL UPDATE FOR BACKGROUND MODEL THRESHOLDS

Номер: US20180082442A1
Принадлежит: Omni AI, Inc.

Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds. 1. A computer-implemented method for absorbing elements of scene foreground into a background model associated with a scene depicted in a sequence of video frames captured by a video camera , the method comprising:receiving image data for a current video frame, wherein the image data classifies each pixel in the current video frame as depicting either foreground or background; andfor each pixel in the current video frame classified as depicting scene foreground, updating corresponding pixel data in the background model based on one or more color channel values of the pixel in the current video frame and an absorption factor.245-. (canceled) This application is a continuation of International Patent Application PCT/US15/58071, filed on Oct. 29, 2015, which in turn claims priority to and benefit of: (1) U.S. patent application Ser. No. 14/526,879, filed on Oct. 29, 2014 (now U.S. Pat. No. 9,460,522), and (2) U.S. patent application Ser. No. 14/526,815, filed on Oct. 29, 2014 (now U.S. Pat. No. 9,471,844); the entirety of each of the aforementioned applications is hereby ...
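A minimal sketch of absorbing foreground pixels into the background model with an absorption factor, so persistently static "foreground" gradually becomes background; the factor value and array layout are assumptions.

```python
import numpy as np

def absorb_foreground(background, frame, fg_mask, absorption=0.05):
    """Move background-model pixels currently classified as foreground a small step toward the
    observed colour, so stationary 'foreground' is gradually absorbed into the background."""
    bg = background.astype(np.float64).copy()
    bg[fg_mask] += absorption * (frame.astype(np.float64)[fg_mask] - bg[fg_mask])
    return bg
```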

Подробнее
14-03-2019 дата публикации

System and method for image guided tracking to enhance radiation therapy

Номер: US20190080442A1

This invention provides a system and method that allows the utilization of computer vision system techniques and processes, such as multi-layer separation and contrast mapping, to enhance the detectability of an imaged tumor, opening the door to real-time tumor tracking and/or modulation of a treatment radiation beam so as to maximize the radiation dosage applied to the tumor itself while minimizing the dosage received by surrounding tissues. The techniques and processes also permit more accurate assessment of the level of radiation dosage delivered to the tumor. An image processor receives the image data from the detector as a plurality of image frames, and performs contrast stretching on the image frames to resolve features. A motion analysis module compares static and dynamic features in the contrast-stretched image frames to derive layers of features. The image frames are output as enhanced image frames. The output can be used to guide the beam.
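A small sketch of two of the mentioned ingredients, contrast stretching and a simple static/dynamic layer separation via a temporal median, under the assumption of a frame stack held as a NumPy array; the percentile choices are illustrative.

```python
import numpy as np

def contrast_stretch(frame, low_pct=2.0, high_pct=98.0):
    """Percentile-based contrast stretching so low-contrast tumour features become resolvable."""
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    return np.clip((frame.astype(np.float64) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def static_layer(frames):
    """Temporal median of the frame stack as a simple static layer; subtracting it from each
    contrast-stretched frame leaves the dynamic (moving) features for tracking."""
    return np.median(np.asarray(frames, np.float64), axis=0)
```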

Подробнее