Total found: 932. Displayed: 193.
Publication date: 17-09-2018

Number: RU2016135936A3
Author:
Assignee:

Publication date: 10-12-2016

EXTERNAL VIEWING AND/OR GUN AIMING SYSTEM FOR MILITARY LAND VEHICLES AND NAVY SHIPS

Number: RU2015118275A
Assignee: SELEX ES S.p.A.

1. An external viewing and/or gun aiming system (1), intended to be installed on board a military land vehicle and/or navy ship, comprising: two sensors (11, 12) configured to capture video streams containing images of the same scene outside the military vehicle and/or navy ship, each sensor (11, 12) being configured to capture its video stream in a respective region of the spectrum; an electronic processing unit (13), connected to the two sensors (11, 12) so as to receive the two captured video streams, and configured to insert into the images of each received video stream a respective aiming reticle indicating the orientation of the sensor that captured said video stream, thereby generating a respective pre-processed video stream, and to process the two pre-processed video streams; and a user interface (14), connected to the electronic processing unit (13) so as to receive the processed video streams, and comprising a screen (15) configured to display a video stream received from said electronic processing unit (13); characterized in that the electronic processing unit (13) is configured to process the two pre-processed video streams by means of: image-enhancement functionality, thereby generating two first enhanced video streams; and picture-in-picture functionality, whereby ... [Front-page data: RUSSIAN FEDERATION, (11) RU 2015 118 275 A; (51) IPC F41G 3/16 (2006.01); Federal Service for Intellectual Property, (12) application for invention; (21)(22) application: 2015118275, 16.10.2013; (71) applicant: SELEX ES S.p.A. (IL); (30) convention priority: 16.10.2012 IT TO2012A000907; (85) date of PCT national-phase entry ...]

Publication date: 17-06-2010

Method for the entropy-based determination of object edge curves

Number: DE102009009572B3
Assignee: EADS DEUTSCHLAND GMBH

Method for determining object edge curves in a recorded image of a multispectral camera, comprising the following steps: a. converting the recorded image into a digital false-colour image; b. in the false-colour image, each pixel is assigned a hue value from the HSV colour space, the hue value corresponding to a colour angle H on a specified colour circle; c. classifying the individual pixels as object pixels and background pixels, those pixels being defined as object pixels whose hue value lies within a specified value range; d. determining an entropy profile (Fig. 5) by means of a sliding evaluation window, the mixing entropy S of object pixels and background pixels being calculated for each pixel according to S = -k[n_O ln(n_O/(n_O+n_B)) + n_B ln(n_B/(n_O+n_B))], where n_O denotes the number of object pixels within the evaluation window, n_B the number of background pixels within the evaluation window, and k a proportionality factor, wherein ...
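The sliding-window entropy of step d can be illustrated numerically. Below is a minimal pure-Python sketch, assuming a 1-D row of already-classified labels (1 = object pixel, 0 = background pixel) and k = 1; the function names are mine, not from the patent:

```python
import math

def mixing_entropy(n_obj, n_bg, k=1.0):
    # S = -k [ n_o ln(n_o/(n_o+n_b)) + n_b ln(n_b/(n_o+n_b)) ];
    # zero when the window is pure object or pure background
    n = n_obj + n_bg
    s = 0.0
    for m in (n_obj, n_bg):
        if m > 0:
            s += m * math.log(m / n)
    return -k * s

def entropy_profile(labels, window=5):
    # slide the evaluation window over the row; the entropy peaks where
    # the window straddles an object/background boundary
    half = window // 2
    prof = []
    for i in range(half, len(labels) - half):
        w = labels[i - half:i + half + 1]
        n_obj = sum(w)
        prof.append(mixing_entropy(n_obj, window - n_obj))
    return prof

profile = entropy_profile([0, 0, 0, 0, 1, 1, 1, 1])  # hard edge in the middle
```

The patent's remaining step (differentiation and extreme-value assessment of the profile) would then locate the edge at the profile maximum.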

Publication date: 11-03-2015

Combination of narrow and wide view images

Number: GB0002507690B

Publication date: 22-01-2014

Method and apparatus for matching a model and an image of the same scene

Number: GB0002504051A
Assignee:

A method and apparatus for matching a model and an image reproducing the same scene, by determining the transformation required between a frame of reference attached to said model and a frame of reference attached to said image, comprises: means (2,3) for defining primitives and measurement points on the image and the model; means (5) for defining proximity lists formed from measurement points; and means (7, 10, 12) for determining possible transformation hypotheses, for filtering these hypotheses and for determining the required transformation. As described, such matching is used in a navigational or landing aid for an aeroplane or missile using synthetic aperture radar.

Publication date: 15-05-2019

Apparatus and method for defining and interacting with regions of an operational area

Number: GB0002568361A
Assignee:

A display apparatus and method for displaying an operational area to an operative of a host platform, said operational area being defined within an external real-world environment relative to said host platform, the apparatus comprising a viewing device 12 configured to provide to said operative, in use, a three-dimensional view of said external real-world environment, a screen, a user input 33 configured to receive user input data 35 representative of a specified target or region in respect of which an operation is to be performed, said user input data including data representative of the location within said external real-world environment of said specified target or region and data representative of said operation to be performed in respect thereof. A processor 32 is configured to use said user input data to generate or obtain three-dimensional image data representative of a geometric volume based, at least, on said operation to be performed and display one or more images depicting said ...

Publication date: 11-12-2013

Optical sighting

Number: GB0201318990D0
Author:
Assignee:

Publication date: 05-02-2003

DEVICE FOR MONITORING THE SURROUNDING SPACE IN REAL TIME, FOR DETECTING AND TRACKING POSSIBLE ENEMY MISSILES

Number: GB0009026116D0
Author:
Assignee:

Publication date: 25-10-2017

An apparatus and method for displaying an operational area

Number: GB0201714571D0
Author:
Assignee:

Publication date: 12-04-2018

Haptic augmented reality device for facilitating actions on a flying boom

Number: AU2016328885A1
Assignee: Walker IP

The invention relates to a haptic augmented reality device for facilitating actions to control a boom in full flight, allowing the boom to be controlled with a single hand and without any prior training in the manoeuvres concerned. In addition, the system not only provides information on how the operation is being performed, but also provides assistance, indicating to the boomer when a particular movement is completely prohibited (making the movement impossible) or is not recommended (making it necessary to apply more force to perform the movement).

Publication date: 03-02-2005

SPECTRAL TRACKING

Number: CA0002533130A1
Author: SHAPIRA, RUTH
Assignee:

An aircraft (120) has a spectral imager (122) along with a panchromatic imaging mechanism (22) in order to image the scene. A processor (124) implements the algorithms for spectral image processing in which a first spectral image and a second spectral image are processed and in which a spectral reference window is hypercorrelated with the second spectral image.

Publication date: 10-04-2012

COMPENSATION FOR OVERFLIGHT VELOCITY WHEN STABILIZING AN AIRBORNE CAMERA

Number: CA0002513514C
Assignee: INSITU, INC.

A method and system for maintaining the line of sight of an airborne camera fixed on a target by compensating for overflight velocity of the aircraft. The compensation system automatically commands an angular velocity of the line of sight to maintain the camera pointing at the target being overflown. This angular velocity of the line of sight is computed based upon the aircraft overflight velocity and upon a vector from the aircraft to the target. This automatic compensation for aircraft overflight velocity causes the line of sight to remain fixed upon the target. The compensation system drives a gimbal system upon which the camera is mounted to perform this compensation automatically.
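The commanded line-of-sight rate described above reduces to a cross-product formula. A hedged sketch, assuming a stationary target and that r and v are expressed in the same coordinate frame; the relation ω = (v × r)/|r|² is the standard kinematic result, not quoted from the patent:

```python
def los_rate(r, v):
    # Commanded LOS angular velocity (rad/s) that keeps the camera on a fixed
    # target: omega = (v x r) / |r|^2, with r the aircraft-to-target vector (m)
    # and v the aircraft overflight velocity (m/s). Only the velocity
    # component perpendicular to the line of sight produces apparent rotation.
    rx, ry, rz = r
    vx, vy, vz = v
    cross = (vy * rz - vz * ry, vz * rx - vx * rz, vx * ry - vy * rx)
    r2 = rx * rx + ry * ry + rz * rz
    return tuple(c / r2 for c in cross)

# overflying a target 1000 m directly below at 50 m/s -> 0.05 rad/s pitch rate
omega = los_rate((0.0, 0.0, -1000.0), (50.0, 0.0, 0.0))
```

The gimbal controller would track this commanded rate; velocity along the line of sight contributes nothing, as the second assertion-style case below confirms.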

Publication date: 15-04-2009

METHOD OF OBJECT RECOGNITION IN IMAGE DATA USING COMBINED EDGE MAGNITUDE AND EDGE DIRECTION ANALYSIS TECHNIQUES

Number: CA0002640931A1
Assignee:

Methods for detecting areas of interest in an image using combined edge magnitude and edge direction analysis techniques are presented. One embodiment features using thermal imaging data to detect hotspots in maritime settings that may be potential targets for tracking or weapons systems. The edge magnitude and edge direction data are derived from the intensity image and then combined with the intensity image and analyzed morphologically to remove noise and background elements. The combined image data is then selectively filtered to remove horizontal non-target elements and then analyzed further against target size information to determine which detected and analyzed hotspots are valid targets. Another embodiment features receiving as input an intensity image along with its associated edge magnitude and edge direction images, which have both been created by a means outside the detection method. Yet another embodiment features a detection method that does not selectively filter out horizontal ...

Publication date: 26-04-2016

METHOD AND SYSTEM FOR COORDINATING BETWEEN IMAGE SENSORS

Number: CA0002914188C

Method and system for coordinating between separate image sensors imaging a mutual area of interest at different imaging perspectives. A target point is designated on a first image acquired by a first image sensor. Feature points are defined and characterized on the first image and transmitted over a data communication link to a second image sensor. The target point is identified in a second image acquired by the second image sensor using an iterative convergence operation. The first iteration involves locating feature points in the second image corresponding to the defined first image feature points. Subsequent iterations involve locating feature points in a subregion of the second image corresponding to decreasing subsets of first image feature points, the subregion defined by the feature point cluster located in the previous iteration. When a termination condition is reached, the remaining cluster of located feature points is established to represent the target point.

Publication date: 22-01-2015

A METHOD FOR REDUCING BLUR OF TDI-CCD CAMERA IMAGES

Number: CA0002918511A1
Assignee:

The present invention belongs to the field of image processing, and particularly relates to determining the blur parameters of aerial remote-sensing images, and to removing that blur, for a TDI-CCD camera. The method comprises the following specific steps: establishing an image coordinate system, reading an area-array image, constructing a similarity matching criterion, and resolving the offsets to acquire homologous points, so as to obtain a digital image in which the influence of jitter is reduced. The computing process is relatively simple and precise, and the processing effect is good.

Publication date: 29-07-2011

IMAGE PROCESSOR

Number: FR0002933796B1
Assignee: MBDA UK LIMITED

Publication date: 14-12-2012

METHOD AND DEVICE FOR AUTOMATICALLY DETERMINING HEIGHT CONTOURS OF THE RELIEF OF A GEOGRAPHICAL AREA

Number: FR0002976386A1
Author: LECUELLE JEREMY
Assignee: MBDA FRANCE

... - The device (1) comprises means (3) for receiving an image of the geographical area in question; means (8) for removing from this image the background surface of the relief, which represents the slow variations of the altitudes of said geographical area; means (9) for performing a thresholding so as to form a binary image containing only relief whose heights exceed a threshold value; means (11) for extracting from this binary image raw contours of the relief heights; means (13) for simplifying said raw contours so as to obtain a set of polygons representing said height contours of the relief of said geographical area; and means (19) for transmitting this set of polygons to user means (22).

Publication date: 14-08-2015

METHOD FOR DETECTING AND TRACKING TARGETS

Number: FR0003017481A1
Assignee:

A method for detecting and tracking targets in a series of successive images comprises limiting the number of plots that are subject to simultaneous tracking or that give rise to false tracks. The operation of the tracking module is thus improved without it being necessary to raise the plot-detection threshold. The detection threshold can even be reduced, so that the detection range is increased and the tracking of each target is more continuous, without the false-alarm probability itself being increased.

Publication date: 15-01-2010

Image processing apparatus for homing head i.e. infra-red homing head on self-guided missile, has subtraction module subtracting intensity function from output intensity function to provide non-polarized target and background data

Number: FR0002933796A1
Assignee: MBDA UK LIMITED

High-speed evaluation of high-order polynomials in several dimensions requires complex computing resources. This invention comprises a computing apparatus (9, 10) that allows such polynomials to be computed efficiently and very quickly with a relatively simple computing resource. The computing resource consists entirely of adders (9) and latch circuits (10); there are no multipliers. The invention has particular application in removing low-frequency image distortion due to kinetic heating in missiles fitted with infrared imaging.
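An adder-only datapath can evaluate a polynomial on a uniformly spaced grid via forward differencing: after a one-time setup, each successive value costs only d additions. The sketch below shows that classic scheme as my own illustration of how an adder/latch pipeline can avoid multipliers; the abstract above does not disclose the patent's exact algorithm:

```python
def forward_difference_eval(poly, x0, step, count):
    # Evaluate polynomial p(x) (coefficients highest-first) at x0, x0+step, ...
    # After the setup table is built, each new sample needs only d additions,
    # matching a multiplier-free adder/latch datapath.
    d = len(poly) - 1

    def p(x):                      # Horner evaluation, used only during setup
        acc = 0.0
        for c in poly:
            acc = acc * x + c
        return acc

    # build the forward-difference table from the first d+1 samples
    diffs = [p(x0 + i * step) for i in range(d + 1)]
    for level in range(1, d + 1):
        for i in range(d, level - 1, -1):
            diffs[i] -= diffs[i - 1]

    out = []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(d):         # additions only: propagate differences
            diffs[i] += diffs[i + 1]
    return out

squares = forward_difference_eval([1.0, 0.0, 0.0], 0.0, 1.0, 5)  # x^2 on 0..4
```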

Publication date: 22-01-2000

DEVICE FOR REAL-TIME SURVEILLANCE OF THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0031242571B1
Author:
Assignee:

Publication date: 12-07-2000

DEVICE FOR REAL-TIME SURVEILLANCE OF THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0030283801B1
Author:
Assignee:

Publication date: 07-08-2000

DEVICE FOR REAL-TIME SURVEILLANCE OF THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0030932239B1
Author:
Assignee:

Publication date: 09-01-2000

DEVICE FOR REAL-TIME SURVEILLANCE OF THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0031048479B1
Author:
Assignee:

Publication date: 22-01-2000

DEVICE FOR REAL-TIME SURVEILLANCE OF THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0038658288B1
Author:
Assignee:

Publication date: 19-05-2011

PASSIVE ELECTRO-OPTICAL TRACKER

Number: WO2011059530A2
Assignee:

A passive electro-optical tracker uses a two-band IR intensity ratio to discriminate high-speed projectiles and obtain a time-varying speed estimate from their time- varying temperature, as well as determining the trajectory back to the source of fire. In an omnidirectional system a hemispheric imager with an MWIR spectrum splitter forms two CCD images of the environment. Various methods are given to determine the azimuth and range of a projectile, both for clear atmospheric conditions and for nonhomogeneous atmospheric conditions. One approach uses the relative intensity of the image of the projectile on the pixels of a CCD camera to determine the azimuthal angle of trajectory with respect to the ground, and its range. A second uses a least squares optimization over multiple frames based on a triangle representation of the smeared image to yield a real-time trajectory estimate.

Publication date: 15-01-2015

METHOD FOR UPDATING TARGET TRACKING WINDOW SIZE

Number: WO2015004501A1
Author: AKAGUNDUZ, Erdem
Assignee:

The present invention relates to the field of image processing and methodologies to determine a target tracking window and update size of the window at every new image. An objective of the present invention is to provide a methodology to determine a target tracking window and update size of the window at every new image, using the initial (estimated) target window and based on the principle of standard deviation value changes of pixels.
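The abstract states only the principle (window size driven by changes in the standard deviation of the window's pixels), so the update rule below is a hypothetical illustration; the thresholds, step sizes and limits are my own placeholders:

```python
import statistics

def window_std(pixels):
    # standard deviation of the intensities inside the current tracking window
    return statistics.pstdev(pixels)

def update_window_size(size, prev_std, cur_std,
                       rel_tol=0.1, step=2, min_size=8, max_size=256):
    # hypothetical rule: a significant relative change of the window's
    # std-dev between frames triggers a size update (placeholder constants)
    if prev_std > 0:
        change = (cur_std - prev_std) / prev_std
        if change > rel_tol:
            size = min(size + step, max_size)
        elif change < -rel_tol:
            size = max(size - step, min_size)
    return size
```

At each new image the tracker would recompute the std-dev inside the current window and feed the previous and current values to `update_window_size`.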

Publication date: 09-04-2009

OBJECT DETECTION INCORPORATING BACKGROUND CLUTTER REMOVAL

Number: WO000002009045578A3
Assignee:

A method for optically detecting an object (16) within a field of view (14), where the field of view contains background clutter tending to obscure visibility of the object includes optically tracking (102) said object such that said object is motion stabilized against said background clutter, during said optical tracking, obtaining (104) a plurality of frames of said field of view, using (110) said plurality of frames to perform a frame-to-frame analysis of variances in intensities of pixels within said frames, and using (116) said variances in intensities to discern said object.
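One plausible reading of the variance analysis: because optical tracking motion-stabilizes the object, its pixels show low temporal variance, while the clutter sweeping past behind it shows high variance. A minimal pure-Python sketch under that reading (the threshold is an assumption):

```python
def temporal_variance(frames):
    # per-pixel variance over a stack of equally sized frames (lists of lists)
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames]
            mean = sum(vals) / n
            out[y][x] = sum((v - mean) ** 2 for v in vals) / n
    return out

def discern_object(frames, thresh):
    # stabilized object pixels vary little frame to frame; background clutter
    # moving through the field of view varies a lot
    var = temporal_variance(frames)
    return [[v <= thresh for v in row] for row in var]
```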

Publication date: 08-01-2004

Video surveillance with speckle imaging

Number: US20040005098A1

A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.
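Full speckle processing reconstructs the image from the statistics of many short exposures; as a toy stand-in, the shift-and-add step below aligns each frame on its brightest speckle before averaging, which removes the random image-motion (tilt) component of turbulence. This is an illustrative simplification, not the patent's algorithm:

```python
def shift_and_add(frames):
    # align each short exposure on its brightest pixel, then average;
    # removes random frame-to-frame image motion, not higher-order blur
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for f in frames:
        by, bx = max(((y, x) for y in range(h) for x in range(w)),
                     key=lambda p: f[p[0]][p[1]])
        dy, dx = h // 2 - by, w // 2 - bx   # shift that centres the bright spot
        for y in range(h):
            for x in range(w):
                sy, sx = y - dy, x - dx
                if 0 <= sy < h and 0 <= sx < w:
                    acc[y][x] += f[sy][sx]
    return [[v / len(frames) for v in row] for row in acc]
```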

Publication date: 01-12-1992

Confirmed boundary pattern matching

Number: US0005168530A1
Assignee: Raytheon Company

A method of aligning two images of the same scene by matching features in a first image to features in a second image is disclosed. The method comprises identifying edges of objects in the first image using two different processes. The edges identified using both processes are compared and combined into one image representing confirmed edges which are readily identified in other images of the same scene. A template is then formed from the confirmed edges which is matched to a subregion of the second image.
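The confirmed-edge idea can be sketched in one dimension: run two different edge operators, keep only the pixels both agree on, and use that confirmed template to locate the same content in a second image. The operator choices below (first difference and second difference) are my own stand-ins for the patent's two unspecified processes:

```python
def edges_gradient(row, t=5):
    # process 1: first-difference magnitude as a simple edge detector
    return [i + 1 < len(row) and abs(row[i + 1] - row[i]) > t
            for i in range(len(row))]

def edges_laplacian(row, t=5):
    # process 2: second-difference magnitude, a deliberately different operator
    return [0 < i < len(row) - 1 and abs(row[i - 1] - 2 * row[i] + row[i + 1]) > t
            for i in range(len(row))]

def confirm(a, b):
    # keep only edges found by BOTH processes: these are stable enough
    # to reappear in other images of the same scene
    return [x and y for x, y in zip(a, b)]

def match_template(confirmed, image_edges):
    # slide the confirmed-edge template over the second image's edge map
    # and return the offset with the greatest overlap
    best_off, best_score = 0, -1
    m = len(confirmed)
    for off in range(len(image_edges) - m + 1):
        score = sum(1 for i in range(m) if confirmed[i] and image_edges[off + i])
        if score > best_score:
            best_off, best_score = off, score
    return best_off, best_score
```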

Publication date: 11-08-1998

Plume or combustion detection by time sequence differentiation

Number: US0005793889A1
Author: Bushman, Boyd B.
Assignee: Lockheed Corporation

A method and system of image modulation detection of an aircraft exhaust plume by time sequence differentiation is provided. The method comprises the steps of forming two sequential images of the field of view in which an exhaust plume to be detected is located, and forming a differential image from the sequential images showing components of the aircraft's exhaust plume that are modulating at a rate greater than the frame rate of the detection system. The nonmodulating components such as the sky, hills, and even the missile body are eliminated from the differential image. Only the plume remains and only the plume is detected; therefore there are no false alarms. Each image is formed by a plurality of pixels, wherein each pixel images a portion of the field of view. A value is assigned to each pixel in each of the sequential images that corresponds to one or more characteristics of the pixel. The value of each pixel in one sequential image is subtracted from the value of the corresponding ...
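The core differencing step is simple to sketch, assuming two already-registered sequential frames and an arbitrary modulation threshold (both the frame representation and the threshold are my placeholders):

```python
def differential_image(frame_a, frame_b):
    # subtract corresponding pixels of two sequential frames: static content
    # (sky, hills, missile body) cancels, the flickering plume survives
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def plume_pixels(frame_a, frame_b, thresh):
    # pixels whose frame-to-frame change exceeds the threshold are treated
    # as modulating plume components
    return [[v > thresh for v in row]
            for row in differential_image(frame_a, frame_b)]
```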

Publication date: 20-09-2018

Method and System of Image-Based Change Detection

Number: US20180268239A1
Assignee:

A method for collecting and processing remotely sensed imagery in order to achieve precise spatial co-registration (e.g., matched alignment) between multi-temporal image sets is presented. Such precise alignment or spatial co-registration of imagery can be used for change detection, image fusion, and temporal analysis/modeling. Further, images collected in this manner may be further processed in such a way that image frames or line arrays from corresponding photo stations are matched, co-aligned and if desired merged into a single image and/or subjected to the same processing sequence. A second methodology for automated detection of moving objects within a scene using a time series of remotely sensed imagery is also presented. Specialized image collection and preprocessing procedures are utilized to obtain precise spatial co-registration (image registration) between multitemporal image frame sets. In addition, specialized change detection techniques are employed in order to automate the detection of moving objects. Claims 1-20 (canceled).
21. A method of image-based change detection, comprising: (a) planning image capture of an airborne or satellite image at location loc=1 by obtaining georeferencing coordinates and altitude information using a tool selected from the group consisting of: (i) a GPS data tool for logging and digitally archiving flight line and sensor station coordinates for each image acquisition, (ii) a flight planning software tool integrated with digital coordinates of flight line and sensor stations from previous image dates/times or with digital coordinates from original flight plans, (iii) using either an in-flight, heads-up display tool that enables a pilot to maintain flight line course and altitude based on GPS coordinates or an autopilot that enables an unmanned aircraft system to maintain flight line course and altitude based on GPS coordinates, and (iv) using an automated tool based on digitally archived coordinates and in-flight GPS for automatic ...

Publication date: 04-07-2019

SYSTEMS, METHODS, AND DEVICES FOR UNMANNED VEHICLE DETECTION

Number: US2019208112A1
Assignee:

Systems, methods, and apparatus for detecting UAVs in an RF environment are disclosed. An apparatus is constructed and configured for network communication with at least one camera. The at least one camera captures images of the RF environment and transmits video data to the apparatus. The apparatus receives RF data and generates FFT data based on the RF data, identifies at least one signal based on a first derivative and a second derivative of the FFT data, measures a direction from which the at least one signal is transmitted, analyzes the video data. The apparatus then identifies at least one UAV to which the at least one signal is related based on the analyzed video data, the RF data, and the direction from which the at least one signal is transmitted, and controls the at least one camera based on the analyzed video data.
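One plausible reading of "identifies at least one signal based on a first derivative and a second derivative of the FFT data" is an edge test on the discrete differences of the power spectrum. The sketch below uses that reading with placeholder thresholds; it is not the patent's disclosed procedure:

```python
def detect_signal_bins(power_db, slope_t, curv_t):
    # flag FFT bins where both the slope (1st difference) and the curvature
    # (2nd difference) are large, i.e. a sharp rising edge above the noise floor
    first = [power_db[i + 1] - power_db[i] for i in range(len(power_db) - 1)]
    second = [first[i + 1] - first[i] for i in range(len(first) - 1)]
    return [i for i in range(len(second))
            if first[i] > slope_t and abs(second[i]) > curv_t]
```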

Publication date: 27-06-2019

CLUTTERED BACKGROUND REMOVAL FROM IMAGERY FOR OBJECT DETECTION

Number: US20190197724A1
Assignee:

Embodiments herein describe tracking the location and orientation of a target in a digital image. In an embodiment, this tracking can be used to control navigation for a vehicle. In an embodiment, a digital image captured by a visual sensor is received. A first array including a plurality of binary values related to the pixel velocity of a first plurality of pixels in the digital image as compared to corresponding pixels in a first one or more prior digital images can be generated. A second array including a plurality of values related to the standard deviation of pixel intensity of the first plurality of pixels in the digital image as compared to corresponding pixels in a second one or more prior digital images can be further generated. A plurality of thresholds relating to the values in the second array can be determined. A plurality of target pixels and a plurality of background pixels can be identified in the digital image, based on the first array, the second array, and the ...

Publication date: 07-11-2023

Firearm system that tracks points of aim of a firearm

Number: US0011808547B2
Author: Philip Scott Lyren
Assignee: Philip Scott Lyren

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm.

Publication date: 04-08-2010

Method of object recognition in image data using combined edge magnitude and edge direction analysis techniques

Number: EP2051207A3
Author: Sefcik, Jason A.
Assignee:

Methods, programs, and computer systems for detecting areas of interest in an image using combined edge magnitude and edge direction analysis techniques are presented. One embodiment features using thermal imaging data to detect hotspots in maritime settings that may be potential targets for tracking or weapons systems. The edge magnitude and edge direction data are derived from the intensity image and then combined with the intensity image and analyzed morphologically to remove noise and background elements. The combined image data is then selectively filtered to remove horizontal non-target elements and then analyzed further against target size information to determine which detected and analyzed hotspots are valid targets. Another embodiment features receiving as input an intensity image along with its associated edge magnitude and edge direction images, which have both been created by a means outside the detection method. Yet another embodiment features a detection method that does ...

Publication date: 29-04-2019

METHOD FOR DETECTING AND CLASSIFYING SCENE EVENTS

Number: RU2686566C2
Assignee: THALES (FR)

FIELD: data processing.
SUBSTANCE: the invention relates to the detection and classification of scene events. The result is achieved by means of a single-frame imaging system equipped with a VisNIR detector in the 0.6–1.1 µm wavelength range and a SWIR detector. A step of obtaining synchronized successive two-dimensional VisNIR and SWIR images is carried out, together with a step of displaying the VisNIR images and a step of processing said images, which consists in comparing the SWIR images to determine, for each pixel, the change in illumination from one SWIR image to another and the peak value of said SWIR illumination.
EFFECT: reliable recognition of scene events.
7 cl, 7 dwg
[Front-page data: RUSSIAN FEDERATION, (11) RU 2686566 C2; (51) IPC G06K 9/00, G01J 1/02, G01J 5/00, G01J 5/60 (2006.01); (52) CPC F41G 3/147, G01J 1/0219, G01J 1/0242 (2013.01); (21)(22) application: 2016135936, 05.02.2015; (24) patent term start date: 05.02.2015; (72) inventors: MIDAVAINE Thierry (FR), GARIN Olivier (FR); (73) proprietor: THALES (FR); (30) convention priority: 07.02.2014 FR 1400350; (43) application published: 13.03.2018, Bull. No. 8; (45) published: 29.04.2019, Bull. No. 13; (85) PCT national-phase entry: 07.09.2016; (86) PCT application: EP 2015/052430 (05.02.2015); (87) PCT publication: WO 2015/118075 (13.08.2015); (56) documents cited in the search report: RU 2475853 C2; US 7778006 B2; RU 2494566 C2; WO 2012/034963 A1; US 2013/0266210 A1; US 6178141 B1; US 8547441 B2; US 8642961 B2; correspondence address: 129090, Moscow, ul. B. Spasskaya 25, bld. 3, Gorodissky & Partners; (54) METHOD FOR DETECTING AND ...]

Publication date: 27-06-2013

METHOD FOR EXCLUDING THE COCKPIT MASK, AND CORRESPONDING HELMET-MOUNTED DISPLAY SYSTEM

Number: RU2011152600A
Assignee:

... 1. A method for excluding the cockpit mask in a display device mounted on a helmet worn by a pilot, the helmet comprising a device for detecting the position of the pilot's head, said pilot being located in the cockpit of an aircraft which includes at least one pillar (M) located in the pilot's field of view, the display device comprising: a first binocular image-sensor unit (C) capable of operating at low light levels and of delivering a first intensified image and a second intensified image, each of said intensified images containing an image of said pillar superimposed on an image of the outside landscape; a second binocular helmet-mounted display (HMD) unit arranged to present the first intensified image and the second intensified image to the pilot; and a computer with graphical image processing; characterized in that the method for excluding the cockpit mask ...

Publication date: 06-11-2013

Collaborative sighting

Number: GB0002501684A
Assignee:

Collaborative sighting is performed using plural optical sighting apparatus that each comprises a camera unit and a display device. Geometric calibration of the camera units with respect to the imaged scene is performed by detecting features within captured images (Figure 6, S61), generating descriptors from respective patches of the image at the features (Figure 6, S62), detecting corresponding descriptors from different images S4, and deriving the geometric calibration from the positions in the respective images of the corresponding descriptors S5. A target location in the image captured by a first optical sighting apparatus is designated and a corresponding location relative to the image captured by the second optical sighting apparatus is identified from the geometric calibration S6. The display device of the second optical sighting apparatus indicates where the corresponding location lies relative to the displayed image S7. The descriptors may be generated using techniques such as ...

Publication date: 19-09-2018

Personal electronic target vision system, device and method

Number: GB0201812910D0
Author:
Assignee:

Publication date: 18-12-2013

Method and apparatus for tracking an object

Number: GB0201313682D0
Author:
Assignee:

Publication date: 14-11-2012

System and method for geospatial partitioning of a geographical region

Number: GB0201217396D0
Author:
Assignee:

Publication date: 14-11-2012

Determination of position from images and associated camera positions

Number: GB0201217395D0
Author:
Assignee:

Publication date: 25-10-2017

Apparatus and method for defining and interacting with regions of an operational area

Number: GB0201714573D0
Author:
Assignee:

Publication date: 08-08-2012

Surveillance process and apparatus

Number: GB0201211465D0
Author:
Assignee:

Publication date: 04-07-2012

Range and Velocity Estimation in Video Data Using Anthropometric Measures

Number: GB0002486980A
Assignee:

Object (e.g. omega-shaped human head and shoulders) range and velocity in image data (e.g. video) is calculated. An overlay, e.g. moving target predicted location, is displayed using the calculation, for instance on a firearm scope. The range calculation includes detecting an object contour (fig. 1) from the image data, forming a template from the image data based on the contour (fig. 2), and calculating a range to the object using pixel resolution and object dimension statistics (fig. 5). A three-dimensional velocity of the object is determined by calculating radial and angular (image plane) velocity components. The radial component is calculated by determining the object range in two or more image frames using the contour detection method above, determining a time differential between the image frames, and calculating the radial velocity as a function of the object range in the image frames and the time differential between the image frames. The angular component is calculated using spatial-temporal ...
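The range step follows the pinhole camera model: range = f · W / w, where f is the focal length in pixels, W a statistical object dimension, and w the measured width in pixels; the radial velocity then comes from the range in two frames and the time between them. A sketch with hypothetical numbers (the anthropometric constant and all parameter names are placeholders, not taken from the patent):

```python
HEAD_SHOULDER_WIDTH_M = 0.465   # hypothetical anthropometric mean, not from the patent

def range_from_pixels(width_px, focal_px, true_width_m=HEAD_SHOULDER_WIDTH_M):
    # pinhole model: range = f * W / w
    return focal_px * true_width_m / width_px

def radial_velocity(width_px_1, width_px_2, dt, focal_px,
                    true_width_m=HEAD_SHOULDER_WIDTH_M):
    # radial speed from the estimated range in two frames and the
    # time differential between them (negative = closing)
    r1 = range_from_pixels(width_px_1, focal_px, true_width_m)
    r2 = range_from_pixels(width_px_2, focal_px, true_width_m)
    return (r2 - r1) / dt
```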

More details
30-06-2010 publication date

System and method for image registration

Number: GB0201008104D0
Author:
Assignee:

More details
15-04-2012 publication date

METHOD, COMPUTER PROGRAM AND DEVICE FOR DETERMINING THE RISK OF A MID-AIR COLLISION

Number: AT0000551674T
Assignee:

More details
08-09-2011 publication date

Method for the entropy-based determination of object edge curves

Number: AU2010217093A1
Assignee:

The invention relates to a method for determining object edge curves in a recorded image of a multispectral camera, comprising the following steps: a. converting the recorded image into a digital false color image; b. assigning each pixel in the false color image a hue value H from the HSV color space, wherein the hue value corresponds to a color angle H on a specified color circle; c. classifying the individual pixels as object pixels and background pixels, wherein those pixels are defined as object pixels which have a hue value within a specified value range; d. determining an entropy profile, wherein for each pixel the mixing entropy S of the object pixels and background pixels is calculated according to (formula), where n indicates the number of the object pixels, n the number of the background pixels, and k* a proportionality factor; a differentiation and extreme-value assessment of the determined entropy profile are then carried out.

More details
25-11-2004 publication date

COMPENSATION FOR OVERFLIGHT VELOCITY WHEN STABILIZING AN AIRBORNE CAMERA

Number: CA0002513514A1
Assignee:

A method and system for maintaining the line of sight of an airborne camera fixed on a target by compensating for overflight velocity of the aircraft. The compensation system automatically commands an angular velocity of the line of sight to maintain the camera pointing at the target being overflown. This angular velocity of the line of sight is computed based upon the aircraft overflight velocity and upon a vector from the aircraft to the target. This automatic compensation for aircraft overflight velocity causes the line of sight to remain fixed upon the target. The compensation system drives a gimbal system upon which the camera is mounted to perform this compensation automatically.
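The commanded angular velocity described above is a function of the overflight velocity and the aircraft-to-target vector; for a fixed target it is the cross product of that vector with the velocity, divided by the squared range (sign depends on the chosen frame convention). A generic sketch under that assumption, not the patent's implementation:

```python
def los_rate(r, v):
    """Line-of-sight angular velocity (rad/s) to hold a camera on a fixed target.

    r: vector from aircraft to target (m); v: aircraft velocity (m/s).
    omega = (r x v) / |r|^2, so |omega| = v / range for a perpendicular pass.
    """
    rx, ry, rz = r
    vx, vy, vz = v
    cross = (ry * vz - rz * vy, rz * vx - rx * vz, rx * vy - ry * vx)
    r2 = rx * rx + ry * ry + rz * rz
    return tuple(c / r2 for c in cross)

# Overflying a target 1000 m below at 100 m/s: the LOS must rotate at 0.1 rad/s.
omega = los_rate((0.0, 0.0, 1000.0), (100.0, 0.0, 0.0))
```

The compensation system would feed this rate to the gimbal controller each update cycle.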

More details
15-09-2016 publication date

OPTICAL SENSOR FOR RANGE FINDING AND WIND SENSING MEASUREMENTS

Number: CA0002976375A1
Assignee:

Techniques are disclosed for providing an optical sensor that can be used for wind sensing and an optical scope. The optical sensor can include a photodiode, an electrical switch, a trans-impedance amplifier (TIA), and a capacitive trans-impedance amplifier (CTIA), enabling the optical sensor to perform both wind-sensing and range-finding functions. Some embodiments may include some or all of these components in an application-specific integrated circuit (ASIC), depending on desired functionality.

More details
01-08-2006 publication date

METHOD AND DEVICE FOR DETERMINING THE CONTOURS OF VALLEYS IN A GIVEN GEOGRAPHICAL AREA, AND APPLICATIONS

Number: CA0002231029C

The present invention concerns a method and a device for determining the contours of valleys in a given geographical area. According to the invention, the device (1) comprises: means (2) for determining the hydrographic network of the geographical area, presented on a first matrix which contains, at each point likely to form part of a watercourse, a characteristic code designating that watercourse; means (3) for forming a second matrix which contains, at each point, the altitude of the corresponding part of the geographical area; and means (4, 5, 6, 7) for determining, from the first and second matrices, the contours of the valleys.

More details
28-12-2012 publication date

METHOD FOR REMOVING A COCKPIT MASK AND ASSOCIATED HELMET-MOUNTED DISPLAY SYSTEM

Number: FR0002969792B1
Assignee: THALES

More details
08-09-2017 publication date

METHOD OF DETECTING AND CLASSIFYING EVENTS OF A SCENE

Number: FR0003017480B1
Assignee: THALES

More details
15-01-2016 publication date

IMAGE PROCESSING METHOD, IN PARTICULAR FOR NIGHT VISION SYSTEMS, AND ASSOCIATED SYSTEM

Number: FR0003015090B1
Author: GROSSETETE MATTHIEU
Assignee: THALES

More details
26-02-2016 publication date

METHOD FOR DETECTING AND TRACKING TARGETS

Number: FR0003017481B1
Assignee: SAGEM DEFENSE SECURITE

More details
24-05-2019 publication date

METHOD OF DETECTING MOVING GROUND TARGETS IN A VIDEO STREAM ACQUIRED BY AN AIRBORNE CAMERA

Number: FR0003047103B1
Assignee: THALES SA

This method (100) for detecting moving ground targets in a video stream acquired by an airborne digital camera is characterised in that it comprises the steps of: processing (20 - 40) a plurality of successive frames so as to stabilise them as if they had been acquired by a fixed camera; and comparing (50 - 60) two processed frames, separated in time from one another, so as to identify the zones of pixels that move from one frame to the other, these moving pixel zones constituting detected targets.

More details
19-04-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0032920852B1
Author:
Assignee:

More details
16-12-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0030799773B1
Author:
Assignee:

More details
14-06-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0037520388B1
Author:
Assignee:

More details
03-06-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0034395524B1
Author:
Assignee:

More details
21-03-2013 publication date

METHODS AND SYSTEMS FOR DETERMINING AN ENHANCED RANK ORDER VALUE OF A DATA SET

Number: WO2013039595A1
Author: WILLIAMS, Darin S.
Assignee:

The value of a median or other rank of interest in a dataset is efficiently determined. Each active bit of the dataset is serially processed to compute one bit of the output value from each bit of the input dataset. If any sample in the dataset has an active bit that differs from the determined output value for that bit, then that sample can be marked as no longer in consideration. After an active bit has been processed, the data for that bit may be discarded or subsequently ignored. These techniques allow the rank value to be efficiently determined using pipelined logic in a configurable gate array (CGA). Further implementations may be enhanced to compute clipped means, to identify "next highest" or "next lowest" values, to reduce quantization errors through less-significant-bit interpolation, to simultaneously process multiple values in a common pipeline, or for any other purpose.
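The bit-serial rank computation described above can be prototyped in software, even though the patent targets pipelined gate-array logic: scan from the most significant bit down, fix one output bit per pass, and drop from consideration the samples whose bits disagree with the output so far. A hypothetical sketch for unsigned integers:

```python
def bitwise_rank(samples, rank, nbits=8):
    """Value of the rank-th smallest sample (rank 0 = minimum), one output bit per pass."""
    active = list(samples)              # samples still consistent with the fixed bits
    result = 0
    for bit in reversed(range(nbits)):  # MSB first
        mask = 1 << bit
        zeros = [s for s in active if not (s & mask)]
        ones = [s for s in active if s & mask]
        if rank < len(zeros):           # the answer has a 0 in this bit position
            active = zeros
        else:                           # it has a 1; skip over the smaller zeros group
            result |= mask
            rank -= len(zeros)
            active = ones
    return result

data = [5, 3, 8, 1, 9]
median = bitwise_rank(data, len(data) // 2)   # 5
```

Each pass touches one bit plane of every still-active sample, which is what makes the hardware pipeline in the patent attractive.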

More details
11-03-2010 publication date

IMAGE PROCESSING DEVICE USING SELECTIVE NEIGHBORING VOXEL REMOVAL AND RELATED METHODS

Number: WO2010027795A1
Assignee:

An image processing device includes a memory, and a controller. The controller cooperates with the memory for determining N nearest neighbors for each voxel among a plurality thereof, and determines a respective distance between each voxel and its N nearest neighboring voxels. The controller also cooperates with the memory for selectively removing each given voxel if a respective distance to an Mth nearest neighboring voxel is greater than a first threshold, and with M being less than or equal to N. Optionally, the controller may also cooperate with the memory for selectively removing each other given voxel if a respective distance to an Lth nearest neighboring voxel is less than a second threshold, with the second threshold being less than the first threshold and with L being less than M.
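Under the first rule above, a voxel survives only if its M-th nearest neighbour lies within the first threshold. A brute-force sketch (hypothetical names, O(n²) with no spatial index, which a real implementation would replace with a k-d tree):

```python
import math

def filter_voxels(points, m, max_dist):
    """Keep a point only if its m-th nearest neighbour is within max_dist."""
    kept = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        if len(dists) >= m and dists[m - 1] <= max_dist:
            kept.append(p)
    return kept

# A tight pair plus one far outlier: with m = 1, the isolated outlier is removed.
pts = [(0, 0, 0), (0, 0, 1), (50, 50, 50)]
dense = filter_voxels(pts, 1, 5.0)
```

The optional second rule (removing voxels whose L-th neighbour is *too close*) would be the mirror-image test with the smaller threshold.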

More details
31-01-2012 publication date

Method and apparatus for segmenting small structures in images

Number: US000RE43152E1
Assignee: The Johns Hopkins University

A method for segmenting a small feature in a multidimensional digital array of intensity values in a data processor computes an edge metric along each ray of a plurality of multidimensional rays originating at a local intensity extreme (local maximum or minimum). A multidimensional point corresponding to a maximum edge metric on each said ray is identified as a ray edge point. Every point on each ray from the local extreme to the ray edge point is labeled as part of the small object. Further points on the feature are grown by labeling an unlabeled point if the unlabeled point is adjacent to a labeled point, and the unlabeled point has a more extreme intensity than the labeled point, and the unlabeled point is closer than the labeled point to the local extreme. The resulting segmentation is quick, and identifies boundaries of small features analogous to boundaries identified by human analysts, and does not require statistical parameterizations or thresholds manually determined by a user.
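The ray-based labelling step can be illustrated in one dimension, where a "ray" from a local maximum is just the run of samples to its left or right and the edge metric is the drop between neighbours. This is an illustrative reduction, not the patent's multidimensional implementation, and the helper name is assumed:

```python
def segment_from_peak(values, peak):
    """Label samples outward from a local maximum up to the strongest edge on each ray."""
    labels = {peak}
    for step in (-1, 1):                     # the two 1-D rays from the peak
        ray = []
        i = peak + step
        while 0 <= i < len(values):
            ray.append(i)
            i += step
        if not ray:
            continue
        drops = [values[j - step] - values[j] for j in ray]   # edge metric per point
        edge = drops.index(max(drops))       # ray edge point = maximum edge metric
        labels.update(ray[: edge + 1])       # label peak..edge inclusive
    return sorted(labels)

profile = [1, 2, 8, 9, 7, 2, 1, 1]
region = segment_from_peak(profile, 3)       # indices of the bright blob
```

The subsequent growing rule (label a neighbour if it is dimmer than, and closer to the extreme than, an already-labelled point) would extend this set without any user-tuned threshold, as the abstract notes.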

More details
29-01-2013 publication date

Video processing system providing enhanced tracking features for moving objects outside of a viewable window and related methods

Number: US0008363109B2

A video processing system may include a display and a video processor coupled to the display. The video processor may be configured to display a georeferenced video feed on the display defining a viewable area, determine actual geospatial location data for a selected moving object within the viewable area, and generate estimated geospatial location data along a predicted path for the moving object when the moving object is no longer within the viewable area and based upon the actual geospatial location data. The video processor may be further configured to define a successively expanding search area for the moving object when the moving object is no longer within the viewable window and based upon the estimated geospatial location data, and search within the successively expanding search area for the moving object when the successively expanding search area is within the viewable area.
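The successively expanding search area can be modelled as a circle around the dead-reckoned position whose radius grows with the time since the object left the window. The quadratic growth term and all names below are illustrative assumptions, not the patent's formulation:

```python
def search_area(last_pos, velocity, t_lost, base_radius, max_accel_error):
    """Predicted centre and growing radius of the search region.

    Centre: dead-reckoned position from the last actual geospatial fix.
    Radius: grows with the position uncertainty accumulated since the fix
    (here, worst-case unmodelled acceleration).
    """
    cx = last_pos[0] + velocity[0] * t_lost
    cy = last_pos[1] + velocity[1] * t_lost
    radius = base_radius + 0.5 * max_accel_error * t_lost ** 2
    return (cx, cy), radius

# Object last seen at (10, 20) m moving +x at 2 m/s, lost for 3 s:
centre, r = search_area((10.0, 20.0), (2.0, 0.0), 3.0, 5.0, 1.0)
```

Searching resumes only for the part of this circle that re-enters the viewable area, matching the abstract's final step.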

More details
05-09-2019 publication date

METHOD FOR ASSISTING THE LOCATION OF A TARGET AND OBSERVATION DEVICE ENABLING THE IMPLEMENTATION OF THIS METHOD

Number: US20190272649A1
Assignee: NEXTER SYSTEMS

A method for assisting the location of a target for a first user equipped with an observation device includes an augmented reality observation device associated with a first user reference frame. According to this method, a reference platform associated with a master reference frame is positioned on the terrain, the reference platform is observed from at least one camera worn by the first user, the geometry of the observed platform is compared with a numerical model of same, and the orientation and location of the first user reference frame are deduced with respect to the master reference frame. It is then possible to display, on an augmented reality observation device, at least one virtual reticle locating the target. 1-9. (canceled) 10. A method for assisting the location of a target for a first user equipped with an observation device comprising an augmented reality observation means secured to a support plate and coupled to a computing means and associated with communication means, wherein, a reference platform being positioned on the field and having a spatial orientation reference frame called master reference frame, and a first user spatial orientation reference frame called first user reference frame being defined by design of the observation device and associated with the observation device of the first user, the method comprises the following steps: sending the first user, from the platform or by a second user and via the communication means, the coordinates of the target in the master reference frame; observing, by the first user, the reference platform from at least one camera that is secured to the plate carried by the first user and that supports the augmented reality observation means; comparing the geometry of the platform thus observed to a digital model of the platform, which has been placed in a memory of the computing means, to deduce therefrom the orientation and the location of the first user reference frame relative to the master reference ...

More details
11-05-1993 publication date

SYSTEM AND METHOD FOR RANKING AND EXTRACTING SALIENT CONTOURS FOR TARGET RECOGNITION

Number: US5210799A
Author:
Assignee:

More details
28-11-2017 publication date

System, method, and computer program product for indicating hostile fire

Number: US0009830695B2
Assignee: Lockheed Martin Corporation

Systems, methods, and computer program products for identifying hostile fire. A characteristic of a fired projectile is detected using an optical system and the projectile's travel path in relation to a vehicle is determined. If the determined travel path of the projectile is within a predetermined distance from the vehicle, it is determined that the projectile is hostile towards the vehicle and a warning is output.

More details
29-08-2023 publication date

Aerial video based point, distance, and velocity real-time measurement system

Number: US0011740080B2
Author: Gary A. Lunt

A method of determining geo-reference data for a portion of a measurement area includes providing a monitoring assembly comprising a ground station, providing an imaging assembly comprising an imaging device with a lens operably coupled to an aerial device, hovering the aerial device over a measurement area, capturing at least one image of the measurement area within the imaging device, transmitting the at least one image to the ground station using a data transmitting assembly, and scaling the at least one image to determine the geo-reference data for the portion of the measurement area by calculating a size of a field-of-view (FOV) of the lens based on a distance between the imaging device and the measurement area.
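The scaling step above reduces to the pinhole relation: the ground footprint of the frame is 2·d·tan(FOV/2), and dividing by the image width in pixels gives the ground distance per pixel. A generic sketch with illustrative parameter names:

```python
import math

def ground_scale(distance_m, hfov_deg, image_width_px):
    """Footprint width and metres-per-pixel for a camera hovering at distance_m.

    footprint = 2 * d * tan(FOV / 2); scale = footprint / image width in pixels.
    """
    footprint_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return footprint_m, footprint_m / image_width_px

# 100 m altitude, 60-degree horizontal FOV, 1920 px wide frame:
width_m, m_per_px = ground_scale(100.0, 60.0, 1920)   # ~115.5 m, ~0.060 m/px
```

With the per-pixel scale known, any pixel distance measured in the hovering frame converts directly to metres on the ground.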

More details
26-09-2012 publication date

Collaborative navigation using conditional updates

Number: EP2503287A2
Assignee:

A method for collaborative navigation between two or more platforms is provided. The method comprises establishing a communication link between a first platform and a second platform, making a sensor measurement from the first platform, updating state and covariance elements of the first platform, and transmitting the updated state and covariance elements from the first platform to the second platform. A conditional update is performed on the second platform to compute a new estimate of state and covariance elements on the second platform, which takes into account the measurement from the first platform. The method further comprises making a sensor measurement from the second platform, updating state and covariance elements of the second platform, and transmitting the updated state and covariance elements from the second platform to the first platform. A conditional update is performed on the first platform to compute a new estimate of state and covariance elements on the first platform ...

More details
27-06-2015 publication date

DETECTION AND TRACKING OF TARGETS IN A SEQUENCE OF IMAGES

Number: RU2013156825A
Assignee:

... 1. A method for detecting and tracking targets in a series of optronic images of a single field of view, comprising the steps of: /1/ predefining image clutter levels as a function of false-alarm probability values and assigning track-initiation sequences to the clutter levels, such that at least two of the clutter levels correspond to different track-initiation sequences; /2/ dividing each image (100) into adjacent portions (10), then, for each image portion: /2-1/ establishing a detection criterion and computing a false-alarm probability value for that image portion; /2-2/ assigning one of the clutter levels to that image portion on the basis of the false-alarm probability value; /2-3/ detecting blobs within the image portion using the detection criterion, then associating with each detected blob the track-initiation sequence of the clutter level assigned ...

More details
08-04-2015 publication date

Method and apparatus for tracking an object

Number: GB0002518948A
Assignee:

In a method of tracking an object, a plurality of images of a target object is obtained and a super-resolved image of the target object is calculated from the plurality of images, which may include averaging all regions of interest in the plurality of images that have the same phase and/or extracting frame portions from the images and storing said portions in a stack. A further image of the target object is then obtained and correlated with the super-resolved image in order to identify the location of the target object in the further image. The correlation may be in the spatial or frequency domain, and prior to any correlation the super-resolved image may be de-resolved.

More details
12-12-2018 publication date

Collaborative sighting

Number: GB0002501684B
Assignee: 2d3 Limited

More details
15-05-2019 publication date

Apparatus and method for displaying an operational area

Number: GB0002568362A
Assignee:

An operational area is displayed to an operative of a host platform, such as a pilot in an aircraft. The operational area is defined within an external real-world environment relative to said host platform. A viewing device, such as a helmet mounted display is configured to provide a three-dimensional view of said external real-world environment, and has a screen, and a processor comprising an input for receiving real-time first data representative of a specified target and its location within the external real-world environment and configured to receive or obtain second data representative of at least one characteristic of the target. The processor also uses the data to calculate a geometric volume representative of a region of influence of the target relative to the real-world external environment and/or said host platform, generate three-dimensional image data representative of said geometric volume, and display a three dimensional model on the screen. The three-dimensional model is ...

More details
13-06-2012 publication date

Collaborative sighting

Number: GB0201207480D0
Author:
Assignee:

More details
15-02-2012 publication date

SELECTIVE NEIGHBORING-VOXEL REMOVAL USING AN IMAGE PROCESSING DEVICE AND RELATED METHOD

Number: AT0000543158T
Assignee:

More details
03-02-2005 publication date

Spectral tracking

Number: AU2004259389A1
Author: RUTH SHAPIRA
Assignee:

More details
02-05-2013 publication date

Image processing

Number: AU2011311387A1
Assignee:

Apparatus and method for processing a sequence of images of a scene, the method comprising: tracking a region of interest (14) in the sequence of images (e.g. using a Self Adaptive Discriminant filter); selecting a particular image (12) in the sequence; selecting a set of images from the sequence, the set comprising one or more images that precede the particular image (12) in the sequence of images; and determining a value indicative of the level of change between the region of interest (14) in the particular image (12) and the regions of interest (14) in the images in the set (e.g. using a Change Detection Process).

More details
15-04-2021 publication date

Adversary Distillation for one-shot attacks on 3D target tracking

Number: AU2021100474A4
Assignee:

Abstract. Considering the vulnerability of existing deep models in the adversarial scenario, the robustness of 3D target tracking is not guaranteed. In this invention, we present an efficient generation-based adversarial attack, termed Adversary Distillation Network (AD-Net), which is able to distract a victim tracker in a single shot. The method is composed of two stages. In the first stage, a devised tracker identifies the template and the search area. In the second stage, a generative network is designed to distill the template and generate an adversarial example that distracts the tracker. The second stage consists of two modules that formulate a Bernoulli distribution to filter points by the fused feature and thereby generate adversarial examples. This invention can distract the deep tracker effectively, exposing the defects of deep trackers.

More details
25-06-2015 publication date

METHOD, SYSTEM AND COMPUTER PROGRAM FOR DECISION SUPPORT

Number: CA0002933954A1
Assignee:

Embodiments of the invention provide a decision support method for use by an operator surrounded by adverse entities in a battlefield environment, the method comprising generating a layered representation (200) of the physical environment surrounding the operator from sensor information by mapping the spherical physical environment (100) of the operator into a geometrical representation suitable for display on a screen, said representation being segmented into a plurality of layers (201, 202, 203, 204, 205) having respective sizes, each layer being associated with a respective category of tactical actions. The representation further including visual elements (230) representing adverse entities in the surrounding physical environment of the operator, each visual element being represented so as to be superposed with a given layer.

More details
02-05-2013 publication date

IDENTIFICATION AND ANALYSIS OF AIRCRAFT LANDING SITES

Number: CA0002853546A1
Assignee:

A method and apparatus for identifying and analysing an aircraft landing site during flight is provided. The method includes the steps of using image capture means such as an infrared camera to capture in-flight images of the ground in the region of a possible landing site, using a database of computer modelled images of possible aircraft landing sites mapped to a global co-ordinate reference frame to compare the in-flight images with a modelled image of the possible landing site and optimising correspondence between the two images to obtain an in-flight image optimally mapped to the global co-ordinate reference frame. Thereafter, the landing site which corresponds to the optimally mapped in-flight image is analysed to ascertain the presence of any obstructions such as life forms, vehicles or newly erected buildings thereon.

More details
30-11-2012 publication date

DETECTION AND TRACKING OF TARGETS IN A SERIES OF IMAGES

Number: FR0002975807A1
Author: MALTESE DOMINIQUE
Assignee: SAGEM DEFENSE SECURITE

A method for detecting and tracking targets in a series of optronic images comprises dividing each image into image portions and assigning a track-initiation sequence to each image portion. The track-initiation sequence is selected according to a clutter level determined for each image portion. This sequence is then used to test blobs detected in the image portion for their presence in several successive images. An improved trade-off between false-track probability and mean track-initiation time is thereby obtained.

More details
26-04-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0039455338B1
Author:
Assignee:

More details
01-07-2000 publication date

REAL-TIME SURVEILLANCE DEVICE FOR THE SURROUNDING SPACE, FOR DETECTING AND TRACKING POSSIBLE ENEMY CRAFT

Number: FR0035751081B1
Author:
Assignee:

More details
15-03-2016 publication date

METHOD FOR THE ENTROPY-BASED DETERMINATION OF OBJECT EDGE CURVES

Number: BR0PI1008595A2
Assignee:

More details
22-04-2010 publication date

EMITTER TRACKING SYSTEM

Number: WO2010044927A1
Assignee:

An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images.

More details
03-02-2005 publication date

SPECTRAL TRACKING

Number: WO2005010547A2
Author: SHAPIRA, Ruth
Assignee:

A method of tracking a target. The method includes the steps of acquiring a first spectral image of a scene that includes the target, designating a spectral reference window, in the first spectral image, that includes a respective plurality of pixel vectors, acquiring a second spectral image, of the scene, that includes a respective plurality of pixel vectors, and hypercorrelating the spectral reference window with the second spectral image, thereby obtaining a hypercorrelation function, a maximum of the hypercorrelation function then corresponding to a location of the target in the scene.

More details
08-10-2009 publication date

METHODS FOR TRANSFERRING POINTS OF INTEREST BETWEEN IMAGES WITH NON-PARALLEL VIEWING DIRECTIONS

Number: WO2009122316A3
Author: ROTEM, Efrat, ADIV, Gilad
Assignee:

A method for identifying within a working image of a scene a point of interest designated within a designation image of a scene, the designation image being taken along a first viewing direction and the working image being taken along a second viewing direction, the second viewing direction being significantly non-parallel to the first viewing direction, the method comprising the steps of: obtaining a designation image of the scene; obtaining a working image of the scene; correlating said designation image and said working image with each other, directly or by correlating each of said designation image and said working image with a common reference image, so as to derive an interrelation between said designation image and said working image; and employing said interrelation between said designation image and said working image to derive a location within said working image of a point of interest designated within said designation image, characterized in that the method further comprises ...

More details
05-05-2011 publication date

METHODS AND SYSTEMS FOR PROCESSING DATA USING NON-LINEAR SLOPE COMPENSATION

Number: WO2011053394A1
Author: WILLIAMS, Darin S.
Assignee:

Systems and devices for processing image or other data using non-linear methods to compensate for localized slopes are described. In one implementation, the slope of the sample values in an image or other dataset is estimated in one or more directions using a non-linear filter, such as a median filter. The values of at least some of the samples of interest are compensated using the estimated slope values to remove the effects of the slope. The compensated values may then be processed to determine if the target is present in the samples of interest, or for any other purpose.
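A 1-D sketch of the idea under stated assumptions: estimate the local slope with a median filter over first differences (so a point target's spike does not bias the estimate, unlike a mean), then subtract the accumulated trend. The window size and all names are illustrative, not the patent's implementation:

```python
import statistics

def slope_compensate(samples, window=5):
    """Remove a locally linear background trend using a robust median slope estimate."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]   # first differences
    half = window // 2
    out = [0.0]                      # first sample: no accumulated trend yet
    trend = 0.0
    for i in range(len(diffs)):
        lo, hi = max(0, i - half), min(len(diffs), i + half + 1)
        trend += statistics.median(diffs[lo:hi])            # robust local slope
        out.append(samples[i + 1] - samples[0] - trend)
    return out

samples = [2 * i for i in range(10)]
samples[5] += 10                     # point target riding on a linear ramp
flat = slope_compensate(samples)     # ramp removed, spike preserved
```

After compensation, a simple threshold test on the residual suffices to decide whether a target is present, which is the downstream use the abstract describes.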

More details
29-11-2012 publication date

Automatic Device Alignment Mechanism

Number: US20120300082A1
Assignee: JOHNS HOPKINS UNIVERSITY

An alignment suite includes first and second targeting devices and an optical coupler. The first targeting device is configured to perform a positional determination regarding a downrange target. The first targeting device includes an image processor. The second targeting device is configured to perform a targeting function relative to the downrange target and is affixable to the first targeting device. The optical coupler enables the image processor to capture an image of a reference object at the second targeting device responsive to the first and second targeting devices being affixed together. The image processor employs processing circuitry that determines pose information indicative of an alignment relationship between the first and second targeting devices relative to the downrange target based on the image captured.

More details
18-04-2013 publication date

Three-frame difference moving target acquisition system and method for target track identification

Number: US20130094694A1
Assignee: Raytheon Co

Embodiments of a target-tracking system and method of determining an initial target track in a high-clutter environment are generally described herein. The target-tracking system may register image information of first and second warped images with image information of a reference image. Pixels of the warped images may be offset based on the outputs of the registration to align each warped images with the reference image. A three-frame difference calculation may be performed on the offset images and the reference image to generate a three-frame difference output image. Clutter suppression may be performed on the three-frame difference image to generate a clutter-suppressed output image for use in target-track identification. The clutter suppression may include performing a gradient operation on a background image to remove any gradient objects.
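After the two warped frames have been registered and offset to align with the reference, the three-frame difference itself is commonly computed as the pixelwise minimum (a logical AND) of the two frame differences. A toy grayscale sketch under that assumption, on already-registered frames:

```python
def three_frame_difference(prev, ref, nxt):
    """Pixelwise min of |ref - prev| and |ref - nxt| over equally sized 2-D frames.

    A moving target differs from BOTH neighbouring frames, so it survives the
    minimum; residue revealed by only one of the two differences is suppressed.
    """
    return [
        [min(abs(r - p), abs(r - n)) for p, r, n in zip(prow, rrow, nrow)]
        for prow, rrow, nrow in zip(prev, ref, nxt)
    ]

# A bright target moves one pixel to the right across three registered frames:
f0 = [[0, 9, 0, 0]]
f1 = [[0, 0, 9, 0]]
f2 = [[0, 0, 0, 9]]
motion = three_frame_difference(f0, f1, f2)   # target lights up only at its
                                              # reference-frame position
```

The clutter-suppression stage in the abstract (a gradient operation on the background image) would then run on this output before track initiation.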

More details
05-09-2013 publication date

Foliage penetration based on 4D LIDAR datasets

Number: US20130230206A1
Assignee: Exelis Inc

A method for detecting terrain through foliage includes the steps of: receiving point cloud data in a three-dimensional (3D) space from an airborne platform, in which the point cloud data includes foliage that obscures the terrain; reformatting the point cloud data from the 3D space into a one-dimensional (1D) space to form a 1D signal; and decomposing the 1D signal using a wavelet transform (WT) to form a decomposed WT signal. The decomposed WT signal is reconstructed to form a low-pass filtered profile. The method classifies the low-pass filtered profile as terrain. The terrain includes a natural terrain or a ground profile.

More details
12-01-2017 publication date

LASER BORE SIGHTING

Number: US20170010070A1
Assignee:

Techniques are disclosed for laser-based bore sighting, enabling wind sensing to be performed on captured images of the laser spot. Techniques can include image averaging, background subtraction, and filtering to help ensure that the Gaussian laser spot is detected in the image. Embodiments may include defining a bounding region and altering the operation of a camera such that the camera does not provide pixel data from pixel sensors corresponding to pixels outside the bounding region in subsequent image captures. Embodiments may additionally or alternatively include extracting two stereoscopic images from a single image capture. 1. A method of laser-based bore sighting, the method comprising: obtaining, from a visible-light camera, a visible-light image of a scene; obtaining, from a second camera, a plurality of images comprising a first set of images and a second set of images, wherein the first set of images comprises images of the scene taken over a first period of time in which the laser spot is not present in the scene, and the second set of images comprises images of the scene taken over a second period of time in which the laser spot is present in the scene; combining the images in the first set of images to create a first composite image; combining the images in the second set of images to create a second composite image; creating a difference image by determining a difference in pixel values of pixels in the first composite image from pixel values of pixels in the second composite image; creating a filtered difference image by applying a Gaussian filter to the difference image; comparing one or more pixel values of one or more pixels of the filtered difference image with a threshold value to determine the location of the laser spot within the filtered difference image; mapping the location of the laser spot within the filtered difference image to the visible-light image; and displaying the location of the laser spot in the visible-light image of the ...
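The detection chain described above (average the laser-on and laser-off sets, subtract, smooth, threshold) can be sketched on 1-D toy data. The 3-tap (1, 2, 1)/4 kernel stands in for the Gaussian filter, and all names are illustrative assumptions:

```python
def average(frames):
    """Mean of several equally sized 1-D frames (the composite image)."""
    return [sum(px) / len(frames) for px in zip(*frames)]

def find_spot(laser_on, laser_off, threshold):
    """Locate the laser spot: background-subtract, smooth, then threshold."""
    on, off = average(laser_on), average(laser_off)
    diff = [a - b for a, b in zip(on, off)]            # difference image
    n = len(diff)
    # crude (1, 2, 1)/4 smoothing in place of the Gaussian filter
    smooth = [
        (diff[max(i - 1, 0)] + 2 * diff[i] + diff[min(i + 1, n - 1)]) / 4
        for i in range(n)
    ]
    peak = max(range(n), key=smooth.__getitem__)       # brightest filtered pixel
    return peak if smooth[peak] >= threshold else None

off_frames = [[10, 10, 10, 10, 10]] * 4   # ambient background only
on_frames = [[10, 10, 40, 10, 10]] * 4    # same scene with the laser spot
spot = find_spot(on_frames, off_frames, 5)   # index 2
```

Averaging each set before subtracting suppresses sensor noise, which is why the abstract combines the frames into composites rather than differencing single exposures.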

26-01-2017 publication date

DEPLOYABLE AIRBORNE SENSOR ARRAY SYSTEM AND METHOD OF USE

Number: US20170024854A1
Author: Humfeld Keith Daniel
Assignee:

A deployable airborne sensor array system and method of use are provided herein. The system includes a tether configured to be coupled to and deployed from an aircraft and a plurality of airborne vehicles coupled to the tether. Each of the plurality of airborne vehicles includes different lift characteristics to form a three-dimensional (3D) array of airborne vehicles. Each airborne vehicle includes a sensing device configured to generate sensor data associated with a target. The system also includes a computing device configured to process the sensor data received from each of said plurality of airborne vehicles and generate an image of the target based on the sensor data. 1. A deployable airborne sensor array system comprising: a tether configured to be coupled to and deployed from an aircraft; a plurality of airborne vehicles coupled to said tether, each of said plurality of airborne vehicles having different lift characteristics to form a three-dimensional (3D) array of airborne vehicles, each airborne vehicle comprising a sensor device configured to generate sensor data associated with a target; and a computing device configured to: process the sensor data received from each of said plurality of airborne vehicles; and generate an image of the target based on the sensor data. 2. The system of claim 1, wherein said tether comprises a tether network having a plurality of tethers coupling together one or more of said plurality of airborne vehicles. 3.
The system of claim 1, wherein said different lift characteristics include unbalanced wings on at least first and second airborne vehicles that cause the first and second airborne vehicles to respectively glide to the left and to the right of the aircraft, and further include a positive lift profile and a negative lift profile on at least third and fourth airborne vehicles that cause the third and fourth airborne vehicles to respectively glide above and below the aircraft, such that the plurality of airborne ...

02-02-2017 publication date

Method of calibrating a helmet and a system therefor

Number: US20170032529A1

A system and method for calibrating a helmet with a plurality of optical markers thereon is provided. The system includes a memory, a camera and a mechanical actuator to which the helmet is connected during the calibration process so that it is movable relative to the camera. A processor is connected to the camera and the mechanical actuator and is programmed to control the mechanical actuator to move the helmet relative to the camera through a plurality of discrete points on a calibration target pattern and at each of the discrete points, control the camera to take a digital image. For each of the images, the processor determines the position of at least one of the plurality of markers in the image and uses the position of the at least one marker in the image together with the position of the mechanical actuator to calibrate the helmet.
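As a rough illustration of the final step above — combining a marker's position in each image with the mechanical actuator's position — the sketch below fits a linear (affine) map from an assumed two-axis actuator pose to a single marker's pixel position by least squares. This is a toy stand-in: the patent's calibration involves the full helmet and marker geometry, and `fit_marker_model`/`predict_pixel` are hypothetical helpers.

```python
import numpy as np

def fit_marker_model(actuator_poses, marker_pixels):
    """Least-squares affine fit from actuator pose (pan, tilt) to the
    observed marker pixel position (u, v) -- a toy stand-in for the
    full helmet calibration described in the abstract."""
    poses = np.asarray(actuator_poses, dtype=float)
    pixels = np.asarray(marker_pixels, dtype=float)
    design = np.hstack([poses, np.ones((len(poses), 1))])  # affine term
    coeffs, *_ = np.linalg.lstsq(design, pixels, rcond=None)
    return coeffs  # shape (3, 2): maps [pan, tilt, 1] -> [u, v]

def predict_pixel(coeffs, pose):
    """Predict where the marker should appear for a given actuator pose."""
    return np.array([*pose, 1.0]) @ coeffs
```

With poses sampled at the discrete points of the calibration target pattern, the residuals of such a fit would flag mis-placed markers.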

04-02-2021 publication date

APPARATUS AND METHOD FOR DISPLAYING AN OPERATIONAL AREA

Number: US20210035365A1
Assignee: BAE SYSTEMS plc

An apparatus and method for displaying an operational area to an operative of a host platform, said operational area being defined within an external real-world environment relative to said host platform, the apparatus comprising a viewing device configured to provide to said operative, in use, a three-dimensional view of said external real-world environment, a display generating device for creating images at the viewing device, and a processor comprising an input for receiving real-time first data representative of a specified target and its location within said external real-world environment and configured to receive or obtain second data representative of at least one characteristic of said specified target, the processor being further configured to: use said first and second data to calculate a geometric volume representative of a region of influence of said specified target relative to said real-world external environment and/or said host platform, generate three-dimensional image data representative of said geometric volume, and display a three-dimensional model, depicting said geometric volume and created using said three-dimensional image data, on said display generating device for creating images at the viewing device, the apparatus being configured to project or blend said three-dimensional model within said view of said external real-world environment at the relative location therein of said specified target. 1: A display apparatus that is able to display an operational area to an operative of a host platform, said operational area being defined within an external real-world environment relative to said host platform, the apparatus comprising: a viewing device configured to provide to said operative, in use, a three-dimensional view of said external real-world environment; a display generating device for creating images at the viewing device; and a processor configured to: use said first and second data to calculate a geometric volume representative of a region of ...

09-02-2017 publication date

SYSTEM AND METHOD FOR REAL-TIME OVERLAY OF MAP FEATURES ONTO A VIDEO FEED

Number: US20170039765A1
Assignee:

A method is provided for augmenting a video feed obtained by a camera of an aerial vehicle to a user interface. The method can include obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle; obtaining supplemental data based on the sequence of video images and the sensor metadata; correcting an error in the sensor metadata using a reconstruction error minimization technique; creating a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images; overlaying the supplemental information onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model; creating a video stream of a virtual representation of the scene from the perspective of the camera based on the overlaying; and providing the video stream to a UI to be rendered onto a display. 1. A method for providing an augmented video feed obtained by a camera of a manned or unmanned aerial vehicle (“UAV”) to a user interface (“UI”), the method comprising: obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle; obtaining supplemental data based on the sequence of video images and the sensor metadata; correcting, by a processor, an error in the sensor metadata using a reconstruction error minimization technique; creating, by the processor, a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images; overlaying the supplemental information onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model; creating a video stream of a virtual representation of the scene from the perspective of the camera based on the overlaying; and providing the video stream to a UI to be rendered onto a display. 2.
The method of claim 1 , wherein the ...
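Rendering geo-registered data "from a 3D perspective that matches a corrected camera model" ultimately reduces to projecting world points through that camera model. Below is a minimal pinhole-projection sketch, assuming a rotation matrix `R`, a camera position, and intrinsics `fx, fy, cx, cy`; the patent's corrected camera model is richer, and `project_point` is a hypothetical helper.

```python
import numpy as np

def project_point(world_pt, cam_pos, R, fx, fy, cx, cy):
    """Project a 3-D world point into pixel coordinates with a simple
    pinhole camera model (illustrative stand-in for the corrected
    camera model in the abstract)."""
    p_cam = R @ (np.asarray(world_pt, float) - np.asarray(cam_pos, float))
    if p_cam[2] <= 0:
        return None  # point is behind the camera; nothing to overlay
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v)
```

Each map feature projected this way lands at the pixel where its overlay should be drawn before the frame is streamed to the UI.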

09-02-2017 publication date

NETWORKED LOW-BANDWIDTH TERMINALS FOR TRANSMISSION OF IMAGERY

Number: US20170041472A1
Author: Steadman Robert Lloyd
Assignee:

A system includes nodes deployable across an area and self-forming a mobile ad-hoc network. The nodes include (1) imaging circuitry for capturing an image of a local sub-area based on a triggering event, (2) image-transfer circuitry for partitioning a captured image into sub-images or image segments and transmitting them to other nodes, (3) image-transmission circuitry for transmitting a sub-image from another node on an uplink to a relay such as a satellite. The relay (1) receives transmissions of respective sub-images, in parallel on independent channels, from the nodes via respective uplinks, and (2) retransmits the sub-images to the remote location via a downlink. A central control station at the remote location (1) receives the sub-images from the relay via the downlink, (2) re-creates the captured image by combining the received sub-images, and (3) utilizes the re-created image in a monitoring or control operation of the central control station. 1. A system , comprising:a set of nodes deployable across an area, the nodes being configured and operative to self-form a mobile ad-hoc network for communication thereamong, the set including nodes having (1) imaging circuitry by which a node captures an image of a respective local sub-area based on a triggering event, (2) image-transfer circuitry by which a node partitions a captured image into a plurality of sub-images and transmits the sub-images to respective other nodes, (3) image-transmission circuitry by which a node transmits on a respective uplink a respective sub-image received from another node;a relay configured and operative, with multiple channels, to (1) receive transmissions of respective sub-images, in parallel on independent channels, from the nodes via respective uplinks, and (2) retransmit the sub-images to the remote location via a downlink; anda control station at the remote location, the control station being configured and operative to (1) receive the sub-images from the relay via the 
downlink, ...
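The partition-and-recombine path described here (nodes split a capture into sub-images; the central control station re-creates the image by combining them) can be illustrated with plain grid tiling. `partition_image` and `reassemble` are hypothetical names, and transport over the ad-hoc network and relay is omitted.

```python
def partition_image(image, rows, cols):
    """Split a 2-D pixel grid (list of lists) into rows*cols sub-images,
    keyed by tile coordinate, as a capturing node might before handing
    tiles to peer nodes for uplink."""
    h, w = len(image), len(image[0])
    th, tw = h // rows, w // cols
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            tiles[(r, c)] = [row[c * tw:(c + 1) * tw]
                             for row in image[r * th:(r + 1) * th]]
    return tiles

def reassemble(tiles, rows, cols):
    """Re-create the captured image at the control station by combining
    the received sub-images."""
    out = []
    for r in range(rows):
        for i in range(len(tiles[(r, 0)])):
            out.append(sum((tiles[(r, c)][i] for c in range(cols)), []))
    return out
```

Because each tile travels on an independent uplink channel, the station can reassemble as tiles arrive rather than waiting for one large transfer.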

07-02-2019 publication date

BROAD AREA GEOSPATIAL OBJECT DETECTION USING AUTOGENERATED DEEP LEARNING MODELS

Number: US20190043217A1
Assignee:

A system for automated geospatial image analysis comprising a deep learning model that receives orthorectified geospatial images, pre-labeled to demarcate objects of interest. The training module presents marked geospatial images and a second set of unmarked, optimized, training geospatial images to a convolutional neural network. This process may be repeated so that an image analysis software module can detect multiple object types or categories. The image analysis software module receives orthorectified geospatial images from one or more geospatial image caches. Using a multi-scale sliding window submodule, the image analysis software scans geospatial images, detects objects present and geospatially locates them. 1. A system for broad area geospatial object detection using auto-generated deep learning models, comprising: a deep learning model training software module stored in a memory of and operating on a processor of a computing device; and an image analysis software module stored in the memory of and operating on the processor of the computing device; wherein the deep learning model training software module: receives training data comprising a plurality of orthorectified geospatial images with a plurality of objects present therein, at least a first subset of the plurality of objects being labeled and a second subset of objects being unlabeled; classifies the training data into a plurality of categories; applies one or more image modification steps to the training data; and generates an object classification model from the training data using a deep learning method comprising separate processing of the first and second subsets of the training data through a convolutional neural network system; and wherein the image analysis software module: receives orthorectified geospatial imagery; applies a plurality of image modifications to the unanalyzed orthorectified geospatial imagery; discards images unsuitable for analysis; uses the object classification model to automatically identify and label all ...
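The multi-scale sliding window submodule can be pictured as a generator of crop rectangles scanned over the image at several scales; the classifier is then applied to each crop. A sketch with an assumed square window and stride — the patent does not fix these parameters, and `sliding_windows` is a hypothetical name:

```python
def sliding_windows(width, height, win, stride, scales=(1.0, 0.5)):
    """Yield (x, y, w, h) crop rectangles at several scales, mimicking a
    multi-scale sliding-window scan of a geospatial image. Window and
    stride shrink with the scale so coverage stays proportional."""
    for s in scales:
        w = h = max(1, int(win * s))
        step = max(1, int(stride * s))
        for y in range(0, height - h + 1, step):
            for x in range(0, width - w + 1, step):
                yield (x, y, w, h)
```

Each yielded rectangle would be cropped from the orthorectified image and passed to the object classification model; detections at the smaller scales catch objects the full-size window would miss.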

15-02-2018 publication date

Automated Change Detection for Synthetic Aperture Sonar

Number: US20180046880A1
Assignee:

Methods and systems detect changes occurring over time between synthetic aperture sonar (SAS) images. A processor performs coarse navigational alignment, fine-scale co-registration and local co-registration between current image data and historical image data. Local co-registration includes obtaining correlation peaks for large neighborhood non-overlapping patches. Relative patch translations are estimated and parameterized into error vectors. Interpolation functions formed from the vectors re-map the current image onto the same grid as the historical image and the complex correlation coefficient between images is calculated. The resulting interferogram is decomposed into surge and sway functions used to define the argument of a phase function, which is multiplied by the current image to remove the effects of surge and sway on the interferogram. Based on the aforementioned computations, a canonical correlation analysis is performed to detect scene changes between the historical and new SAS images. 1. A method of detecting changes between a current image of an area and a historical image of said area, wherein said current image is taken at a time later than said historical image, said method comprising: relating pixel locations of said current image and said historical image; performing fine-scale co-registration of said current image and said historical image; optimizing inter-scene phase coherence of said current image and said historical image; performing local co-registration of said current image and said historical image based on said optimizing; and performing a canonical correlation analysis based on said local co-registration. 2. The method of claim 1, further comprising, prior to relating pixel locations, retrieving said historical image from a database based on said current image and said historical image having corresponding geographical locations. 3.
The method of claim 2 , wherein performing fine-scale co-registration further comprises applying a ...
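The "complex correlation coefficient between images" used to form the interferogram is the standard sample coherence. A sketch assuming two co-registered single-look complex images held as NumPy arrays (`complex_correlation` is a hypothetical helper; per-patch windowing is omitted):

```python
import numpy as np

def complex_correlation(img_a, img_b):
    """Sample complex correlation (coherence) between two co-registered
    complex images. |gamma| near 1 indicates an unchanged scene; its
    phase carries the interferometric (surge/sway) component."""
    a = np.asarray(img_a).ravel()
    b = np.asarray(img_b).ravel()
    num = np.sum(a * np.conj(b))
    den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return num / den
```

A drop in |gamma| over a local patch is the kind of signal the subsequent canonical correlation analysis uses to flag a scene change, while a uniform phase offset is what the surge/sway phase function removes.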

15-02-2018 publication date

TARGET MONITORING SYSTEM AND TARGET MONITORING METHOD

Number: US20180047174A1
Author: Murata Koichi
Assignee:

When the position of a target moving in a monitoring region is intermittently detected, the position of the target is predicted with higher precision. The prediction precision of the position of the target can be improved by calculating an existence probability distribution of the target and by accumulating the data of the intermittent detection by use of Bayes' theorem. 1. A target monitoring system comprising: a storage unit; a processing unit; and a display unit, wherein the storage unit stores a physical model of a target, a non-physical model of the target, and map data, wherein the physical model shows physical constraints of the target, and the non-physical model shows a behavior pattern of the target, wherein, when n is an optional natural number equal to or more than 2, the processing unit executes existence probability distribution calculation processing of calculating an existence probability distribution P_D(t_n) of the target at a time t_n based on data received from an external sensor, wherein the processing unit executes diffusion existence probability distribution calculation processing of calculating a diffusion existence probability distribution P_M(t_n) of the target at the time t_n based on an integration target distribution P(t_{n−1}) of the target at a time t_{n−1} previous to the time t_n and the physical model of the target, wherein the processing unit executes reliability calculation processing of calculating a reliability p_0(t_n) of the existence probability distribution P_D(t_n) based on at least one of a kind of the external sensor and environment around the external sensor, and wherein the processing unit calculates the integration target distribution P(t_n) of the target at the time t_n based on the following equation:

P(t_n) = p_0(t_n) × P_D(t_n) + (1 − p_0(t_n)) × P_M(t_n)   (1)

...
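Equation (1) — blending the sensor-derived distribution P_D with the model-propagated distribution P_M according to the reliability p_0 — can be applied directly over a discretized spatial grid. In the sketch below the final renormalization is an added assumption (equation (1) itself is just the weighted sum), and `integrate_distribution` is a hypothetical name:

```python
def integrate_distribution(p0, P_D, P_M):
    """Equation (1): fuse the sensor distribution P_D with the
    model-propagated (diffusion) distribution P_M using reliability p0.
    Renormalizing so the grid sums to 1 is an added step, not part of
    the claim text."""
    fused = [p0 * d + (1.0 - p0) * m for d, m in zip(P_D, P_M)]
    total = sum(fused)
    return [f / total for f in fused]
```

With p0 near 1 (a trusted sensor) the update follows the detection; with p0 near 0 it falls back on the physical/behavioral model, which is how the intermittent detections accumulate over time.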

22-02-2018 publication date

Image Target Relative Position Determining Method, Device, and System Thereof

Number: US20180053321A1
Author: Xiao Jingjing
Assignee:

An image target tracking method, device, and system thereof are provided in the present disclosure. The image target relative position determining method includes the following steps: obtaining a target initial position, and performing a sparse sampling according to the target initial position; dividing sampling points into foreground sampling points and background sampling points; clustering adjacent foreground sampling points according to a spatial distribution of the foreground sampling points in order to obtain a clustering result containing a plurality of clusters; and performing a robust estimation according to the clustering result in order to determine a relative position between a target and a camouflage interference in an image. Throughout the process, a multi-feature cascade clustering is completed by using sparse sampling, sampling point division, and adjacent foreground sampling point clustering; a robust estimation is performed in order to accurately predict a relative position between a target and a camouflage interference. 1. An image target relative position determining method , comprising:obtaining a target initial position, and performing a sparse sampling according to the target initial position;dividing sampling points into foreground sampling points and background sampling points;clustering adjacent foreground sampling points according to a spatial distribution of the foreground sampling points in order to obtain a clustering result containing a plurality of clusters; andperforming a robust estimation according to the clustering result in order to determine a relative position between a target and a camouflage interference in an image.2. 
The image target relative position determining method according to claim 1 , wherein the step of performing a robust estimation according to the clustering result in order to determine a relative position between a target and a camouflage interference in an image comprises:performing a cluster dense sampling ...
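The step of "clustering adjacent foreground sampling points according to a spatial distribution" can be illustrated with a greedy single-link clustering over 2-D points. The radius criterion and the `cluster_points` helper are assumptions, since the abstract does not specify the adjacency rule:

```python
def cluster_points(points, radius):
    """Greedy single-link clustering: a foreground sampling point joins
    (and merges) every existing cluster that has a member within
    `radius` of it; otherwise it starts a new cluster."""
    clusters = []
    r2 = radius * radius
    for p in points:
        hits = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r2 for q in c)]
        if not hits:
            clusters.append([p])
        else:
            merged = hits[0]
            merged.append(p)
            for c in hits[1:]:        # p bridges several clusters: merge them
                merged.extend(c)
                clusters.remove(c)
    return clusters
```

The resulting clusters are the candidate regions over which the robust estimation would then separate the true target from camouflage interference.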

23-02-2017 publication date

COLLABORATIVE SIGHTING

Number: US20170054975A1
Assignee: 2D3 Limited

Collaborative sighting is performed using plural optical sighting apparatuses that each comprise a camera unit and a display device. Geometric calibration of the camera units with respect to the imaged scene is performed by detecting features within captured images, generating descriptors from respective patches of the image at the features, detecting corresponding descriptors from different images, and deriving the geometric calibration from the positions in the respective images of the corresponding descriptors. A target location in the image captured by a first optical sighting apparatus is designated and a corresponding location relative to the image captured by a second optical sighting apparatus is identified from the geometric calibration. The display device of the second optical sighting apparatus indicates where the corresponding location lies relative to the displayed image. 1. detecting features within images captured by the camera units of the optical sighting apparatuses; generating descriptors in respect of each image from respective patches of the image at the position of each detected feature; detecting corresponding descriptors generated from different images; deriving the geometric calibration of the camera units with respect to the scene from the positions in the respective images of the features corresponding to the detected corresponding descriptors; in respect of at least one target location in the image captured by the camera unit of a first one of the optical sighting apparatuses, identifying, from the derived geometric calibration of the camera units, a corresponding location relative to the image captured by the camera unit of a second one of the optical sighting apparatuses that corresponds to a target location in the scene that itself corresponds to the target location in the image captured by the camera unit of the first one of the optical sighting apparatuses; and indicating on the display device of the second optical sighting
...
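The "detecting corresponding descriptors generated from different images" step is, in essence, nearest-neighbour descriptor matching. A sketch over toy descriptor vectors using squared Euclidean distance and a ratio test — the ratio test is an added assumption, not stated in the abstract, and `match_descriptors` is a hypothetical name:

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its nearest neighbour in
    desc_b; keep the match only if the best distance beats the
    second-best by the ratio test (squared distances, so ratio**2)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((sum((x - y) ** 2 for x, y in zip(da, db)), j)
                       for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] <= (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The pixel positions of the surviving matched pairs are exactly the correspondences from which the geometric calibration between the two camera units is derived.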

02-03-2017 publication date

Method for focusing a high-energy beam on a reference point on the surface of a flying object in flight

Number: US20170059282A1
Author: Wolfgang Schlosser
Assignee: MBDA Deutschland GmbH

A method for focusing a beam of a high energy radiation source on a reference point on the surface of a flying object, comprising: recording a number of consecutive two-dimensional images of the flying object; determining the trajectory of the flight path; simultaneously determining the line of sight angle between the image acquisition device and the position of the flying object; calculating a three-dimensional model of the flying object; displaying the currently acquired two-dimensional image; marking the reference point on the displayed two-dimensional image of the flying object; calculating the three-dimensional reference point on the surface of the flying object; and focusing the beam of the high energy radiation source on the three-dimensional reference point.

15-03-2018 publication date

HIGH KINETIC ENERGY ABSORPTION WITH LOW BACK FACE DEFORMATION BALLISTIC COMPOSITES

Number: US20180075613A1
Assignee: HONEYWELL INTERNATIONAL INC.

Viscoelastic, lightweight composite armor that is resistant to backface deformation, and a method for evaluating the effectiveness of composite armor in resisting backface deformation. The index of retraction of a composite is determined by evaluating the degree of composite retraction at the site of impact of a projectile after movement of the projectile is stopped. The degree of retraction indicates the ability of the composite to resist backface deformation. 1. A process for determining an index of retraction of a flat composite panel, the process comprising: a) providing a flat composite panel comprising a consolidated plurality of fibrous plies, each of said fibrous plies comprising a plurality of fibers; said panel having a front surface and a rear surface; b) firing a projectile at the front surface of said panel whereby the projectile impacts the panel at an impact site, and wherein the impact of the projectile causes a deflection of the panel at said impact site, whereby a depression is formed in the front surface of the panel and a corresponding rear protrusion is formed extending from the rear surface of the panel; wherein the flat composite panel stops the movement of said projectile, and wherein the rear protrusion partially retracts after the projectile is stopped, resulting in a transient deflection after the projectile is stopped but prior to the beginning of said retraction, and resulting in a permanent deflection after said partial retraction ends; c) measuring the distance of said transient deflection from said rear surface and measuring the distance of said permanent deflection from said rear surface; and d) determining said index of retraction by determining the percent of retraction of said composite panel when comparing the transient deflection distance to the permanent deflection distance. 2. The process of claim 1, wherein step b) is recorded with a video camera and the distance of said transient deflection from said rear surface and the distance of ...

12-03-2020 publication date

COLLABORATIVE SIGHTING

Number: US20200084440A1
Assignee:

A method includes generating calibration data by geometrically calibrating first image data from a first camera unit relative to second image data from a second camera unit based on first descriptor data and second descriptor data. The first descriptor data is based on the first image data. The second descriptor data is based on the second image data. The calibration data is generated based on first position data corresponding to the first camera unit and second position data corresponding to the second camera unit. The method includes identifying, based on the calibration data, a target location relative to the first image data. The method further includes generating an output image that includes the first image data and an indication of where the target location is relative to a scene depicted in the first image data. 1. A method comprising:generating, at a processor, calibration data by geometrically calibrating first image data from a first camera unit relative to second image data from a second camera unit based on first descriptor data and second descriptor data, wherein the first descriptor data is based on the first image data, wherein the second descriptor data is based on the second image data, and wherein the calibration data is generated based on first position data corresponding to the first camera unit and second position data corresponding to the second camera unit;identifying, based on the calibration data, a target location relative to the first image data; andgenerating an output image, wherein the output image includes the first image data and an indication of where the target location is relative to a scene depicted in the first image data.2. The method of claim 1 , wherein the calibration data is generated in real time.3. The method of claim 1 , further comprising receiving the second image data from the second camera unit via a communication network.4. 
The method of claim 1 , wherein geometrically calibrating comprises:identifying a plurality ...

02-04-2015 publication date

Exterior hybrid photo mapping

Number: US20150094952A1
Assignee: Qualcomm Inc

Embodiments disclosed pertain to the use of user equipment (UE) for the generation of a 3D exterior envelope of a structure based on captured images and a measurement set associated with each captured image. In some embodiments, a sequence of exterior images of a structure is captured and a corresponding measurement set comprising Inertial Measurement Unit (IMU) measurements, wireless measurements (including Global Navigation Satellite (GNSS) measurements) and/or other non-wireless sensor measurements may be obtained concurrently. A closed-loop trajectory of the UE in global coordinates may be determined and a 3D structural envelope of the structure may be obtained based on the closed loop trajectory and feature points in a subset of images selected from the sequence of exterior images of the structure.

31-03-2016 publication date

ORIENTATION INVARIANT OBJECT IDENTIFICATION USING MODEL-BASED IMAGE PROCESSING

Number: US20160093097A1
Assignee:

A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-Dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. The system also includes the generation of shadows and allows the user to manipulate the sun angle to approximate the lighting conditions of the test range in the provided video. 1. A method of identifying an object, comprising the steps of: a) storing geometric information about a plurality of candidate objects; b) imaging a target object to be identified with a camera; c) rendering stored geometric information to a simulated image utilizing pose and environment information; d) comparing the image of the target object to the stored geometric information; e) rotating and moving the image of the target object in three-dimensional space; f) repeating steps c), d) and e) to determine the best match or matches between the target object and the candidate objects. 2. The method of claim 1, wherein: the target object is imaged at a distance; and it is assumed that the target object is positioned on relatively flat ground and that the camera roll angle stays near zero. 3.
The method of claim 1, wherein: the target object is imaged at ...
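The three-dimensional search described above — moving the synthetic target through distance, heading, and pitch and comparing renders against the truth chip — can be sketched as a brute-force grid search. Here `render` stands in for the graphics renderer, and sum-of-squared-differences stands in for the unspecified matching score; both are assumptions.

```python
import numpy as np

def best_pose(target_img, render, distances, headings, pitches):
    """Exhaustive search over (distance, heading, pitch) -- the three
    motion dimensions the method keeps after its flat-ground and
    zero-roll assumptions -- scoring each synthetic render against the
    target chip by sum of squared differences."""
    best, best_err = None, float("inf")
    for d in distances:
        for h in headings:
            for p in pitches:
                err = float(np.sum((render(d, h, p) - target_img) ** 2))
                if err < best_err:
                    best, best_err = (d, h, p), err
    return best
```

Repeating the search per candidate object and keeping the lowest score is the loop the claim's steps c)–f) describe.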

05-05-2022 publication date

STITCHED IMAGE

Number: US20220139077A1
Assignee:

Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images.

19-03-2020 publication date

CLUTTERED BACKGROUND REMOVAL FROM IMAGERY FOR OBJECT DETECTION

Number: US20200090364A1
Assignee:

Embodiments herein describe tracking the location and orientation of a target in a digital image. In an embodiment, this tracking can be used to control navigation for a vehicle. In an embodiment, a digital image captured by a visual sensor is received. A first array including a plurality of binary values related to the pixel velocity of a first plurality of pixels in the digital image as compared to corresponding pixels in a first one or more prior digital images can be generated. A second array including a plurality of values related to the standard deviation of pixel intensity of the first plurality of pixels in the digital image as compared to corresponding pixels in a second one or more prior digital images can be further generated. A plurality of thresholds relating to the values in the second array can be determined. A plurality of target pixels and a plurality of background pixels can be identified in the digital image, based on the first array, the second array, and the plurality of thresholds. A binary image related to the digital image can be generated based on the identified plurality of target pixels and the identified plurality of background pixels, and at least one of a location and an orientation of the target in the digital image can be identified based on the binary image. In an embodiment, a command can be transmitted to a navigation system for a vehicle, to assist in navigating the vehicle toward the target, based on the identified at least one of a location and an orientation of the target. 1. A method for controlling navigation of a vehicle by tracking a location and orientation of a target in a digital image, the method comprising: receiving a digital image captured by a visual sensor; generating a first array comprising a plurality of binary values related to a pixel velocity of a first plurality of pixels in the digital image as compared to corresponding pixels in a first one or more prior digital images; generating a second array ...
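A much-simplified reading of the two-array test — a binary motion (pixel-velocity) array combined with a per-pixel intensity standard deviation compared against a threshold — might look like this. A single global threshold is assumed where the claim allows a plurality, and `segment_target` is a hypothetical helper.

```python
import numpy as np

def segment_target(motion_mask, intensity_std, std_threshold):
    """Build the binary image: a pixel is labeled target where the
    motion array flags it AND its temporal intensity std-dev exceeds
    the threshold; everything else is background."""
    motion = np.asarray(motion_mask, dtype=bool)   # first array (binary)
    std = np.asarray(intensity_std, dtype=float)   # second array
    return motion & (std > std_threshold)          # True = target pixel
```

The location and orientation of the target would then be estimated from this binary image (e.g., from the centroid and principal axis of the True pixels) before issuing a navigation command.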

12-05-2022 publication date

Detecting Target Objects in a 3D Space

Number: US20220150417A1
Assignee:

Search points in a search space may be projected onto images from cameras imaging different parts of the search space. Subimages, corresponding to the projected search points, may be selected and processed to determine if a target object has been detected. Based on subimages in which target objects are detected, as well as orientation data from cameras capturing images from which the subimages were selected, positions of the target objects in the search space may be determined. 1. A method comprising: determining, by one or more computing devices, a plurality of search point spatial positions in a three-dimensional search space; determining, based on the search point spatial positions and based on orientation data for a plurality of cameras imaging at least portions of the search space, a plurality of search point image positions, wherein each of the search point image positions comprises a location in an image from a camera, of the plurality of cameras, to which a search point spatial position, of the plurality of search point spatial positions, is projected; selecting, from images in which the search point image positions are located, subimages corresponding to the search point image positions; determining, from the subimages, a plurality of subimages comprising a target object image; determining, based on the plurality of subimages comprising a target object image, a position of a target object in the search space; assigning a subset of a plurality of search points to the target object; determining, based on the position of the target object, search point spatial positions in the search space for a first set of the search points of the subset; determining, without regard to the position of the target object, search point spatial positions in the search space for remaining search points of the subset; and outputting information indicating a subsequent position of the target object. 2.
The method of claim 1 , wherein each of the subimages corresponds to a search point image ...

More details
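The core step of the abstract above — projecting search-point spatial positions into per-camera image positions — is a standard pinhole projection. A minimal sketch (the intrinsics `K` and pose `(R, t)` are illustrative assumptions, not values from the patent):

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D search point X (world frame) to pixel coordinates
    for a camera with intrinsics K and pose (R, t)."""
    Xc = R @ X + t                 # world frame -> camera frame
    if Xc[2] <= 0:                 # behind the camera: not imaged
        return None
    uvw = K @ Xc
    return uvw[:2] / uvw[2]        # perspective divide -> (u, v)

# With identity pose and unit focal length, (1, 2, 4) projects to (0.25, 0.5).
uv = project_point(np.eye(3), np.eye(3), np.zeros(3), np.array([1.0, 2.0, 4.0]))
```

A subimage centered on `uv` would then be cropped and passed to the detector; points projecting behind a camera are simply skipped for that camera.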
21-04-2016 publication date

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR INDICATING HOSTILE FIRE

Number: US20160112655A1
Assignee:

Systems, methods, and computer program products for identifying hostile fire. A characteristic of a fired projectile is detected using an optical system and the projectile's travel path in relation to a vehicle is determined. If the determined travel path of the projectile is within a predetermined distance from the vehicle, it is determined that the projectile is hostile towards the vehicle and a warning is output. 1. A system that is operative during the day and at night to determine whether a position is an intended target of a fired unguided energetic projectile , the system comprising:an infrared (IR) camera mountable at the position, said IR camera having a field of view (FOV) and a predetermined sensitivity sufficient to capture a heat signature of the fired unguided energetic projectile, the heat signature including at least one of a firing component generated upon firing of the projectile and a friction component generated by friction as the projectile travels through the troposphere, pixels of said IR camera being operative to capture a portion of a trail of energy associated with the heat signature of the fired projectile; andan image processor operative to receive signals from said IR camera corresponding to the captured portion of the trail of energy, said image processor being operative to post-process in near real time the signals from said IR camera and to make a determination as to whether the position was the intended target of the fired projectile by analyzing calculated vertical and horizontal miss distances of the fired projectile, the vertical and horizontal miss distances being calculated based on a product of vertical and horizontal pixel trails, respectively, in the captured portion of the trail of energy as functions of time and projectile velocity.2. 
The system according to claim 1 , comprising an alert system operatively coupled to said image processor to generate timely audible and visible indications that the position is the intended ...

More details
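The miss-distance computation described above can be sketched with a small-angle approximation: the pixel miss times the camera's per-pixel angle (IFOV) times range gives a linear miss distance. The IFOV, range, and threshold values below are assumed for illustration:

```python
import math

def miss_distances(vert_px, horiz_px, ifov_rad, range_m):
    """Small-angle estimate: pixel miss times per-pixel angle (IFOV)
    times range gives the linear miss distance in metres."""
    return vert_px * ifov_rad * range_m, horiz_px * ifov_rad * range_m

def is_hostile(vert_px, horiz_px, ifov_rad, range_m, threshold_m):
    """Flag the shot as aimed at the position if the radial miss
    distance falls inside the warning threshold."""
    v, h = miss_distances(vert_px, horiz_px, ifov_rad, range_m)
    return math.hypot(v, h) <= threshold_m
```

A 10-by-20 pixel miss at 100 m with a 1 mrad IFOV is a 1 m by 2 m miss, hostile under a 5 m threshold but not under a 1 m one.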
29-04-2021 publication date

Estimation of Atmospheric Turbulence Parameters using Differential Motion of Extended Features in Time-lapse Imagery

Number: US20210125348A1
Assignee: US Air Force

A system and method provide improved remote turbulence measurement. The system includes an image capturing device that captures frames of images of a target. A controller of the system tracks motion of one or more features of the images between frames using a pattern recognition algorithm. The controller computes subpixel motion based on results of the pattern recognition algorithm and computes differential motion or tilts between pairs of features. The controller computes differential tilt variances from every set of frames. The controller computes theoretical weighting functions for differential tilt variances. The controller determines weights to linearly combine weighting functions such that the combined weighting function closely resembles a desired weighting function that corresponds to the turbulence parameter of interest. The controller linearly combines the differential tilt variances using the determined weights to obtain the turbulence parameter of interest.

More details
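Finding weights so that a linear combination of theoretical weighting functions resembles the desired one, as described above, is at heart a least-squares problem. A toy sketch with made-up numbers:

```python
import numpy as np

# Columns: theoretical weighting functions sampled at a few path positions
# (illustrative values, not from the patent).
W = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
desired = np.array([0.5, 0.5, 0.5])   # weighting function for the parameter of interest

# Weights that best combine the columns of W into `desired`.
weights, *_ = np.linalg.lstsq(W, desired, rcond=None)

# The turbulence parameter is then the same linear combination of the
# measured differential tilt variances.
tilt_variances = np.array([2.0, 4.0])
turbulence_param = weights @ tilt_variances
```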
27-04-2017 publication date

METHOD, SYSTEM AND PROCESSOR FOR INSTANTLY RECOGNIZING AND POSITIONING AN OBJECT

Number: US20170116496A1
Author: FUNG Chuck, Fung Wai Tong
Assignee:

The invention provides a method for instantly recognizing and positioning an object, comprising steps of a) wirelessly searching a wireless identification of the object; b) capturing a plurality of images of the object for each image capture; c) determining a 2D center coordinate (x, y) of the object based on a center coordinate (x, y) of the wireless identification of the object; d) transforming the captured images of the object to acquire a 3D pattern of the object, and comparing the 3D pattern of the object with 3D patterns pre-stored; and e) if the 3D pattern of the object matches with a pre-stored 3D pattern, calculating and obtaining a 3D center coordinate (x, y, z) of the object to recognize and position the object. The invention also provides a system and a processor enabling the method, and use of the system. 1-39. (canceled) 40. A method for time correction of a timepiece, comprising applying a system for instantly recognizing and positioning an object for detecting the presence of time offset from a correct time in the timepiece, and activating a processing device of the timepiece for time correction of the timepiece in response to the presence of the time offset, wherein the system for instantly recognizing and positioning an object comprises a camera and a processor, wherein: the camera is configured to communicate with the processor and capture a plurality of images of the object by rotating a predetermined angle for each image capture; and the processor is configured to: wirelessly search a wireless identification of the object; receive the captured images of the object from the camera; and recognize the time offset based on the captured images of the object to activate the time correction of the timepiece. 41.
The method according to claim 40, wherein the processor is configured to: determine a 2D center coordinate (x, y) of the object based on a center coordinate (x_w, y_w) of the wireless identification of the object by point to ...

More details
05-05-2016 publication date

System and method for processing of tactical information in combat vehicles

Number: US20160123757A1
Assignee: BAE Systems Hagglunds AB

The invention pertains to a system, a method, a computer program and a combat vehicle with an integrated system, for processing of tactical information. The system comprises at least one sensor for registration of at least one image sequence displaying at least a portion of the surroundings of the combat vehicle. The system further comprises a navigation module arranged to register a current position of the vehicle. The system further comprises a tactical data module for storage of tactical information and an information processing unit arranged to process said image sequence to superimpose the tactical information onto said image sequence.

More details
05-05-2016 publication date

Method and system for coordinating between image sensors

Number: US20160125267A1
Author: Benny GOLDMAN, Ido BERGMAN
Assignee: Elbit Systems Land and C4I Ltd

Method and system for coordinating between separate image sensors imaging a mutual area of interest at different imaging perspectives. A target point is designated on a first image acquired by a first image sensor. Feature points are defined and characterized on the first image and transmitted over a data communication link to a second image sensor. The target point is identified in a second image acquired by the second image sensor using an iterative convergence operation. The first iteration involves locating feature points in the second image corresponding to the defined first image feature points. Subsequent iterations involve locating feature points in a subregion of the second image corresponding to decreasing subsets of first image feature points, the subregion defined by the feature point cluster located in the previous iteration. When a termination condition is reached, the remaining cluster of located feature points is established to represent the target point.

More details
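The iterative convergence operation described above can be caricatured as repeatedly shrinking the located feature-point set toward a consistent cluster; the centroid/outlier rule below is an assumed stand-in for the patent's correspondence search:

```python
import numpy as np

def converge_target(points, tol=1.0, min_points=3):
    """Drop the feature point farthest from the cluster centroid until
    the cluster is tight; the final centroid stands in for the target."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > min_points:
        centroid = pts.mean(axis=0)
        dists = np.linalg.norm(pts - centroid, axis=1)
        if dists.max() <= tol:        # termination condition reached
            break
        pts = np.delete(pts, dists.argmax(), axis=0)
    return pts.mean(axis=0)

# Points clustered near (5, 5) plus one gross mismatch.
target = converge_target([(5, 5), (5.2, 4.9), (4.8, 5.1), (40, 40)])
```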
14-05-2015 publication date

METHOD AND SYSTEM FOR INTEGRATED OPTICAL SYSTEMS

Number: US20150130950A1
Assignee:

A method of operating optical systems includes forming a stitched image of a field of regard using a first optical device. The stitched image of the field of regard comprises a plurality of sub-images associated with a first field of view. The method also includes receiving an image of a second field of view from a second optical device and determining a location of the image of the second field of view in the stitched image. The method further includes communicating an indicator to the second optical device. The indicator is related to the location of the image of the second field of view in the stitched image. 1. A method of operating optical systems , the method comprising:forming a stitched image of a field of regard using a first optical device, wherein the stitched image of the field of regard comprises a plurality of sub-images associated with a first field of view;receiving an image of a second field of view from a second optical device;determining a location of the image of the second field of view in the stitched image; and communicating an indicator to the second optical device, the indicator being related to the location of the image of the second field of view in the stitched image.2. The method of further comprising updating the indicator as a separation between the second field of view and the first field of view decreases.3. The method of further comprising receiving additional images of the second field of view and updating the indicator to direct the second optical system to overlap the second field of view with the first field of view.4. The method of wherein the first field of view is larger than the second field of view.5. The method of wherein the first optical device comprises a video imaging device.6. The method of wherein the second optical device comprises a still imaging device.7. The method of wherein determining a location of the image of the second field of view comprises correlating the image of the second field of view with a portion of the ...

More details
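Determining where the second device's image falls inside the stitched image is essentially template matching by correlation, as the last claim suggests. A brute-force normalized cross-correlation sketch (a real system would use an FFT-based or pyramid search):

```python
import numpy as np

def locate(template, image):
    """Return the (row, col) in `image` where `template` correlates best."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((t ** 2).sum() * (wz ** 2).sum())
            score = (t * wz).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Embed a small patch in a larger mosaic and find it again.
rng = np.random.default_rng(0)
mosaic = rng.random((30, 30))
patch = mosaic[12:18, 7:13].copy()
pos = locate(patch, mosaic)
```

The returned location is what the indicator communicated to the second device would encode.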
16-04-2020 publication date

LAYERED TACTICAL INFORMATION SYSTEM AND METHOD

Number: US20200116848A1
Assignee: GOODRICH CORPORATION

A layered tactical information system and a method of sharing tactical information between multiple layers are provided. The method performed by a processor of each layer of the multiple layers includes searching for relevant data available to the processor of the layer in response to at least one of a layer request from the processor of another layer of the multiple layers and a directive received by a processor of a first layer of the multiple layers. The directive specifies an end state, one or more target types, a time window, a geographic location area, and a first layer requirement of at least one layer requirement to be resolved by the processor of the first layer. The method performed by the processor further includes evaluating whether a layer requirement of the at least one layer requirement for the layer is satisfied based on at least one of (a) any found relevant data that was found by the search and (b) layer data obtained from at least one of the other layers. When the layer requirement for the layer is not satisfied, the method performed by the processor further includes performing at least one of (a) transmitting, by the processor of the layer to the processor of another layer of the multiple layers, a layer request to gather further information and (b) generating layer data and providing the layer data generated to the processor of at least one other layer. 1. 
A layered tactical information system, the system comprising multiple layers, each layer comprising: a memory configured to store instructions; and a processor configured to: search for relevant data available to the processor of the layer in response to at least one of a layer request from the processor of another layer of the multiple layers and a directive received by a processor of a first layer of the multiple layers, the directive specifying an end state, one or more target types, a time window, a geographic location area, and a first layer requirement of at least one layer requirement to be resolved by the processor ...

More details
27-05-2021 publication date

Firearm System that Tracks Points of Aim of a Firearm

Number: US20210156647A1
Author: Lyren Philip Scott
Assignee:

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm. 1-20. (canceled) 21. A head mounted display , comprising:a receiver that communicates with electronics of a rifle having a point of aim and being held in hands of a user wearing the head mounted display; anda display that simultaneously displays the point of aim of the rifle held in the hands of the user and an augmented reality (AR) image of the rifle that moves in real-time with movements of the rifle held in the hands of the user.22. The head mounted display of claim 21 , wherein the display simultaneously displays the point of aim of the rifle along with an AR image of the hands of the user holding the AR image of the rifle.23. The head mounted display of claim 21 , wherein the display displays an AR image of a trajectory of a bullet fired from the rifle.24. The head mounted display of further comprising:a natural language user interface; anda processor that commences an event tracking system in response to receiving a verbal command from the user at the natural language user interface.25. The head mounted display of further comprising:one or more sensors that sense a gesture-based command from the user; anda camera that commences to record video in response to the one or more sensors sensing the gesture-based command from the user.26. The head mounted display of further comprising:a global positioning system (GPS) locator that determines a GPS location of the user wearing the head mounted display, wherein the display displays an AR image of a topographical map generated from aerial photogrammetry of the GPS location.27.
The head mounted display of claim 21 , wherein the display displays an AR image that shows a path between different points of aim of the rifle ...

More details
16-04-2020 publication date

Broad area geospatial object detection using autogenerated deep learning models

Number: US20200118292A1
Assignee: DigitalGlobe Inc

A system for automated geospatial image analysis comprising a deep learning model that receives orthorectified geospatial images, pre-labeled to demarcate objects of interest. The module presents marked geospatial images and a second set of unmarked, optimized, training geospatial images to a convolutional neural network. This process may be repeated so that an image analysis software module can detect multiple object types or categories. The image analysis software module receives orthorectified geospatial images from one or more geospatial image caches. Using a multi-scale sliding window submodule, image analysis software scans geospatial images, detects objects present and geospatially locates them.

More details
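The multi-scale sliding window submodule can be sketched as a generator of candidate boxes; the window sizes and stride fraction below are illustrative:

```python
def sliding_windows(width, height, sizes, stride_frac=0.5):
    """Yield (x, y, size) candidate boxes at several scales; each box
    would be cropped and scored by the trained convolutional network."""
    for size in sizes:
        stride = max(1, int(size * stride_frac))
        for y in range(0, height - size + 1, stride):
            for x in range(0, width - size + 1, stride):
                yield x, y, size

# Two scales over a tiny 8x8 image: nine 4-pixel boxes and one 8-pixel box.
boxes = list(sliding_windows(8, 8, sizes=(4, 8)))
```

Detections from overlapping boxes would then be merged (e.g. by non-maximum suppression) and geolocated via the image's orthorectification metadata.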
10-05-2018 publication date

Systems, Methods, Apparatuses, and Devices for Identifying, Tracking, and Managing Unmanned Aerial Vehicles

Number: US20180129881A1
Assignee:

Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows for a detected UAV to be tracked, which may allow for the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs. 1. A method for identifying unmanned aerial vehicles (UAVs) in a particular air space via the use of one or more video sensors, comprising the steps of: receiving a video frame from a video feed of the particular air space, wherein the video feed was captured by a particular video sensor proximate to the particular air space; identifying at least one region of interest (ROI) in the video frame, the at least one ROI comprising an image of an object that may be a UAV flying within the particular air space; and performing an object classification process with respect to the at least one ROI to determine whether the object in the image is a UAV, the object classification process comprising the steps of: extracting image data from the image of the at least one ROI; comparing the extracted image data to prior image data of objects known to be UAVs to determine a probability that the object in the image is a UAV; and upon determination that the probability that the object in the image is a UAV exceeds a predetermined threshold, denoting the object in the image as a UAV. 2.
The method of claim 1 , further comprising the step ...

More details
10-05-2018 publication date

Systems, Methods, Apparatuses, and Devices for Identifying, Tracking, and Managing Unmanned Aerial Vehicles

Number: US20180129882A1
Assignee:

Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows for a detected UAV to be tracked, which may allow for the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs. 1. A method for identifying an aerial object type corresponding to an aerial object in a particular air space , comprising the steps of:receiving a first video frame and a second video frame from a video feed of the particular air space, wherein the video feed is captured by a particular video sensor proximate to the particular air space and wherein the first video frame is captured earlier in time than the second video frame;identifying a first region of interest (ROI) in the first video frame and a second ROI in the second video frame, the first ROI and the second ROI each comprising an image of an object flying within the particular air space;comparing a determined size of the first ROI to a determined size of the second ROI to determine a delta size parameter between the first ROI and the second ROI;determining whether the delta size parameter is within a predetermined size threshold expected for a particular object type;upon determination that the delta size parameter is within the predetermined size threshold, comparing a determined center position of the first ROI to a determined center position of the second 
ROI ...

More details
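The delta size and center-position comparison between two ROIs can be sketched directly; the ROI format `(x, y, width, height)` is an assumption:

```python
def roi_deltas(roi_a, roi_b):
    """Relative size change and centre shift between two ROIs, each
    given as (x, y, width, height) in pixel coordinates."""
    (xa, ya, wa, ha), (xb, yb, wb, hb) = roi_a, roi_b
    delta_size = abs(wb * hb - wa * ha) / float(wa * ha)
    ca = (xa + wa / 2.0, ya + ha / 2.0)
    cb = (xb + wb / 2.0, yb + hb / 2.0)
    delta_pos = ((cb[0] - ca[0]) ** 2 + (cb[1] - ca[1]) ** 2) ** 0.5
    return delta_size, delta_pos

# A slowly growing, slowly drifting ROI is consistent with an approaching UAV.
ds, dp = roi_deltas((100, 100, 20, 20), (103, 104, 22, 22))
```

Each delta would then be checked against the predetermined thresholds for the candidate object type, as the claim describes.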
10-05-2018 publication date

Systems, Methods, Apparatuses, and Devices for Identifying and Tracking Unmanned Aerial Vehicles via a Plurality of Sensors

Number: US20180129884A1
Assignee:

Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows for a detected UAV to be tracked, which may allow for the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs. 1-26. (canceled) 27. A method for identifying unmanned aerial vehicles (UAVs) in a particular air space , comprising the steps of:receiving video data from a particular video sensor proximate to the particular air space, the video data including at least one image of an object that may be a UAV flying within the particular air space;analyzing the video data to determine a first confidence measure that the object in the at least one image comprises a UAV;receiving radio frequency (RF) signal data from a particular RF sensor proximate to the particular air space, the RF signal data including data indicating a possible presence of a UAV within the particular air space;analyzing the RF signal data to determine a second confidence measure that the RF signal data corresponds to a UAV;aggregating the first confidence measure and the second confidence measure into a combined confidence measure indicating a possible presence of a UAV in the particular air space; andupon determination that the combined confidence measure exceeds a predetermined threshold value, storing an indication in a database that a UAV was identified in the
...

More details
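One plausible way to aggregate the video and RF confidence measures, as the claim describes, is a noisy-OR combination (the fusion rule and the threshold are assumptions, not specified by the abstract):

```python
def fuse_confidences(*confidences):
    """Noisy-OR combination: the fused confidence is the probability
    that at least one sensor's detection is genuine, assuming the
    sensors provide independent evidence."""
    miss = 1.0
    for c in confidences:
        miss *= (1.0 - c)
    return 1.0 - miss

combined = fuse_confidences(0.6, 0.5)   # video and RF measures
uav_detected = combined > 0.75          # assumed decision threshold
```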
23-04-2020 publication date

Augmented reality system and method of displaying an augmented reality image

Number: US20200126265A1
Assignee: AUGMENTI AS

An augmented reality system includes a global navigation satellite system module adapted to output position data, an orientation measurement module adapted to output orientation data, an augmented reality module, and at least one AR-client having a camera and a display. The augmented reality module is adapted to determine a position and orientation of the camera of the at least one AR-client based on the position data and orientation data, calculate screen positions of at least one AR object based on the position and orientation of the camera of the at least one AR-client to create at least one AR-overlay, and transmit the at least one AR-overlay to the at least one AR-client. The AR-client is adapted to merge the at least one AR-overlay with a picture received from the camera of the at least one AR-client to provide an AR-image, and to display the AR-image on the display.

More details
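Merging an AR-overlay with the camera picture, the final step above, is a per-pixel alpha blend. A minimal sketch on single-channel images (real overlays would be RGBA):

```python
import numpy as np

def merge_overlay(frame, overlay, alpha):
    """Alpha-blend an AR overlay onto a camera frame; `alpha` is 1.0
    where the overlay drew an AR object and 0.0 elsewhere."""
    return frame * (1.0 - alpha) + overlay * alpha

frame = np.full((2, 2), 100.0)           # camera picture
overlay = np.full((2, 2), 255.0)         # rendered AR objects
alpha = np.array([[0.0, 1.0],
                  [0.0, 0.0]])           # only one pixel is covered
ar_image = merge_overlay(frame, overlay, alpha)
```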
09-05-2019 publication date

Firearm System that Tracks Points of Aim of a Firearm

Number: US20190137218A1
Author: Lyren Philip Scott
Assignee:

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm. 1-20. (canceled) 21. A method executed by a firearm system , comprising:determining, with the firearm system, events that include tracking a path between different points of aim of a handgun as the handgun moves between the different points of aim while at a location, determining a compass direction for one or more of the different points of aim, and determining a duration of time that the handgun is pointed at the different points of aim;transmitting, from the handgun and to a computer, event data that includes the path between the different points of aim of the handgun as the handgun moves between the different points of aim while at the location, the compass direction for the one or more of the different points of aim, and the duration of time that the handgun is pointed at the different points of aim;reconstructing, at the computer and from the event data, the events that include the path between the different points of aim of the handgun as the handgun moves between the different points of aim while at the location, the compass direction for the one or more of the different points of aim, and the duration of time that the handgun is pointed at the different points of aim; anddisplaying, at the computer, a reconstruction of the events that includes showing an image of the location, the path between the different points of aim of the handgun with the image of the location, the compass direction for the one or more of the different points of aim, and the duration of time that the handgun is pointed at the different points of aim.22.
The method of claim 21 , further comprising:displaying, at the computer, the path as one or more lines on a display of the computer ...

More details
30-04-2020 publication date

Obscuration Map Generation

Number: US20200134854A1
Assignee:

A camera is arranged on a transmitter or receiver mount configured to provide a transmitter or receiver with a field of regard. Image data of the field of regard is captured by the camera. A location of an obscuration within the field of regard is determined from the image data. A map of obscurations within the field of regard is generated based upon the image data and the location of the obscuration within the field of regard. 1. A method comprising: arranging a camera on a transmitter or receiver mount of a vehicle, wherein the transmitter or receiver mount is configured to provide a transmitter or receiver with a field of regard; capturing, via the camera, image data of the field of regard; determining a location of an obscuration caused by a portion of the vehicle within the field of regard from the image data; and generating a map of obscurations within the field of regard based upon the image data and the location of the obscuration within the field of regard, wherein determining the location of the obscuration comprises: generating an image from the image data; presenting the image to a user via a user display device; and receiving a user input comprising an indication of a coordinate of the obscuration within the image. 2. The method of claim 1 , wherein the transmitter or receiver comprises a directional infrared countermeasure transmitter or receiver. 3. The method of claim 1 , wherein determining the location of an obscuration within the field of regard comprises determining an angular location of the obscuration relative to the camera and a distance from the camera. 4. The method of claim 1 , wherein arranging the camera comprises arranging a plurality of cameras on the transmitter or receiver mount; wherein capturing image data of the field of regard comprises: capturing image data from each of the plurality of cameras, and generating three-dimensional data from the image data from each of the plurality of cameras; and wherein ...

More details
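A sketch of the obscuration-map step: the user-indicated pixel coordinates are marked in a boolean map, and each pixel can be mapped to an angular location in the field of regard (the linear pixel-to-angle model is an assumption):

```python
import numpy as np

def pixel_to_angle(x, y, width, height, hfov_deg, vfov_deg):
    """Map a pixel to (azimuth, elevation) offsets from the camera
    boresight, assuming a linear pixel-to-angle relationship."""
    az = (x / (width - 1) - 0.5) * hfov_deg
    el = (0.5 - y / (height - 1)) * vfov_deg
    return az, el

def obscuration_map(shape, marked_pixels):
    """Boolean map of the field of regard: True where the user marked
    an obscuring part of the vehicle in the captured image."""
    m = np.zeros(shape, dtype=bool)
    for x, y in marked_pixels:
        m[y, x] = True
    return m

# The centre pixel of a 641x481 image sits on the boresight.
az, el = pixel_to_angle(320, 240, 641, 481, hfov_deg=40.0, vfov_deg=30.0)
mask = obscuration_map((481, 641), [(10, 20), (11, 20)])
```

The transmitter or receiver could then consult the map to avoid pointing through obscured angles.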
25-05-2017 publication date

Firearm System that Tracks Points of Aim of a Firearm

Number: US20170146319A1
Author: Philip Scott Lyren
Assignee: Individual

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm.

More details
16-05-2019 publication date

REMOTE WEAPON CONTROL DEVICE AND METHOD FOR TARGETING AND SHOOTING MULTIPLE OBJECTS

Number: US20190145738A1
Author: CHAE Hee Seo
Assignee: HANWHA LAND SYSTEMS CO., LTD.

A remote weapon control device for controlling a weapon having a photographing device is provided. The remote weapon control device includes: a communication interface configured to receive an image captured by the photographing device; an object extractor configured to extract objects from the image; a target extractor configured to extract targets from the objects; a shooting order determinator configured to determine an order of the targets for shooting; and a control signal generator configured to generate a shooting control signal for controlling the weapon to shoot the targets in the determined order. 1. A remote weapon control device for controlling a weapon having a photographing device , the remote weapon control device comprising:a communication interface configured to receive an image captured by the photographing device;an object extractor configured to extract objects from the image;a target extractor configured to extract targets from the objects;a shooting order determinator configured to determine an order of the targets for shooting; anda control signal generator configured to generate a shooting control signal for controlling the weapon to shoot the targets in the determined order.2. The remote weapon control device of claim 1 , further comprising an input interface configured to receive claim 1 , from a user claim 1 , a target selection command for selecting the targets from the extracted objects.3. The remote weapon control device of claim 2 , wherein the target selection command comprises an area division command for dividing the image into a plurality of areas and an area selection command for selecting at least one of the areas claim 2 , andwherein, in response to the area division command and the area selection command being input through the input interface, the target extractor extracts objects located in a selected area as the targets.4. 
The remote weapon control device of claim 2 , wherein the target selection command comprises a select ...

More details
01-06-2017 publication date

Orientation invariant object identification using model-based image processing

Number: US20170154459A1
Assignee: Individual

A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images. The system also includes the generation of shadows and allows the user to manipulate the sun angle to approximate the lighting conditions of the test range in the provided video.

More details
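The three-dimensional search space (distance, heading, pitch) described above can be swept exhaustively; `toy_score` below is a hypothetical stand-in for the synthetic-image-versus-truth-chip comparison:

```python
import itertools

def best_pose(score, distances, headings, pitches):
    """Exhaustively search the three free pose dimensions, scoring a
    synthetic render against the truth chip at each candidate pose."""
    return max(itertools.product(distances, headings, pitches),
               key=lambda pose: score(*pose))

# Toy score peaked at (100 m, 45 deg, 5 deg); a real system would
# compare rendered and observed images here.
def toy_score(d, h, p):
    return -((d - 100) ** 2 + (h - 45) ** 2 + (p - 5) ** 2)

pose = best_pose(toy_score, range(80, 121, 10), range(0, 91, 15), range(0, 11, 5))
```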
14-06-2018 publication date

MACHINE VISION ENABLED SWARM GUIDANCE TECHNOLOGY

Number: US20180164820A1
Assignee:

A system and method for controlling a swarm of UAVs that are stored on and released from an airborne platform, fly to and destroy a target, where the UAVs download target information from the airborne platform before being released therefrom, do not communicate with each other or the airborne platform while in flight, and do not depend on the presence of GPS. Each UAV includes a vision sensor that provides image data, a navigation module that receives the image data and causes the UAV to navigate and fly towards the target, and a target destruction module that receives the image data and causes the UAV to destroy the target. 1. A target swarm system comprising:an airborne platform; anda plurality of unmanned aerial vehicles (UAVs) configured to be launched from the platform, each UAV including a vision sensor providing image data after the UAV is launched from the platform, a navigation module receiving the image data and causing the UAV to fly towards a target, a target destruction module receiving the image data and causing the UAV to engage the target, and munitions to destroy the target.2. The system according to wherein the UAVs download target information from the airborne platform before being launched therefrom.3. The system according to wherein the airborne platform includes a weapon sensor model that identifies the vision sensor on each UAV and provides target type and state and environment information to each UAV claim 1 , and a reduced dimension feature extraction model that constructs grids and a target contour dome over the target.4.
The system according to claim 1, wherein the navigation module includes a sensing layer having weapons navigation sensors and weapons databases that identify terrain and landmarks around the target, a processing layer that provides sensor processing and environment feature detection, feature correlation and tracking, geo-registration and sensor resource management, a measurement and abstraction ...

21-05-2020 publication date

VARIABLE MEASURING OBJECT DEPENDENT CAMERA SETUP AND CALIBRATION THEREOF

Number: US20200160555A1
Assignee:

A method and a system for determining a 6-DOF-pose of an object in space use at least one marker attached to the object and a plurality of cameras. Perspectives of the cameras are directed to a common measuring space in which the object is positioned. At least one camera from the plurality of cameras is movable such that the movable camera can be adjusted with respect to the object. At least one of the cameras captures an image of said marker attached to the object. Based on the at least one captured image, spatial parameters representing the 3D-position or the 6-DOF-pose of the marker and, consequently, the 6-DOF-pose of the object in space can be determined. 1. A method for determining a 6 degrees of freedom pose (6-DOF-pose) of an object in a space, said 6-DOF-pose defining a three-dimensional position (3D-position) and a three-dimensional orientation (3D-orientation) of the object in the space, the method comprising: arranging a plurality of cameras at individual 6-DOF-poses in the space, each of the plurality of cameras having at an individual 6-DOF-pose an individual perspective into the space, individual perspectives of the plurality of cameras together defining a common measuring space (CMS), and at least one of the plurality of cameras being movable in the space; attaching a marker assembly to the object; positioning the object with the marker assembly within the CMS; adjusting the CMS with respect to the object by moving the at least one of the plurality of cameras being movable in the space, thereby changing the individual perspective of the at least one of the plurality of cameras being movable in the space; capturing at least one image of said marker assembly attached to the object by the plurality of cameras; determining spatial parameters of the marker assembly based on the at least one image; determining a 6-DOF-pose of the object in the space based on the spatial parameters of the marker assembly, and the object being a measuring device including a 3D ...
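Determining a marker's 3D position from two camera views can be illustrated with classic two-ray midpoint triangulation: given each camera centre and its viewing ray toward the marker, the estimate is the midpoint between the closest points of the two rays. This is a generic textbook method assumed here for illustration; the patent does not prescribe this exact algorithm.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(o1, d1, o2, d2):
    # Midpoint triangulation: the point halfway between the closest
    # points on the rays o1 + s*d1 and o2 + t*d2 (camera centre + ray).
    r = [q - p for p, q in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only if the rays are parallel
    s = (c * e - b * f) / denom
    t = (b * e - a * f) / denom
    p1 = [o + s * d for o, d in zip(o1, d1)]
    p2 = [o + t * d for o, d in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras looking at a marker at (1, 2, 3).
p = triangulate([0, 0, 0], [1, 2, 3], [10, 0, 0], [-9, 2, 3])
```

With more than two cameras the same least-squares idea extends to all rays at once, which is what makes extra movable cameras useful.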

02-07-2015 publication date

System and Method for Urban Mapping and Positioning

Number: US20150185025A1
Assignee: Individual

UMAPS is a multifaceted system that can be robot-mounted, human-worn, or canine-carried. UMAPS produces real-time, 3D mapping and localization for the user as they move throughout a GPS-denied environment (e.g. buildings, caves, or tunnels). An Operator Control Unit (OCU) displays information collected by UMAPS: 2D floorplans; 3D texture-enriched surfaces of the structure's interior; and the location of the users within that structure. UMAPS has an open architecture that allows it to function with any OCU. UMAPS has three distinct subsystems: obstacle maps for robot mobility, mapping, and positioning.

28-05-2020 publication date

Firearm System that Tracks Points of Aim of a Firearm

Number: US20200166307A1
Author: Lyren Philip Scott
Assignee:

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm. 1-20. (canceled) 21. A firearm system, comprising: electronics in a firearm of a user that collect data of a path between different points of aim of the firearm as the firearm moves between the different points of aim at a location; and one or more processors that receive the data from the electronics and build an image of the path between the different points of aim of the firearm and provide the image of the path and an image of the location to a display, wherein the electronics in the firearm include a compass that determines a direction of the different points of aim of the firearm, and a clock that determines a duration of time that the firearm is pointed at the different points of aim, wherein the firearm is a handgun or a rifle. 22. The firearm system of claim 21, further comprising: a wearable electronic device (WED) worn on a head of the user that displays an augmented reality (AR) image of the path between the different points of aim of the firearm. 23. The firearm system of claim 21, further comprising: a wearable electronic device (WED) worn on the head of the user that displays an augmented reality (AR) image of a point of impact where a bullet would impact if the bullet were fired from the firearm. 24. The firearm system of claim 21, further comprising: a wearable electronic device (WED) worn on the head of the user that displays an augmented reality (AR) image at a location where a bullet fired from the firearm impacted. 25.
The firearm system of claim 21, further comprising: a wearable electronic device (WED) worn on the head of the user that displays an augmented reality (AR) image at a point of impact of a bullet fired from the firearm, wherein clicking or activating the AR image causes the WED ...
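A sketch of how per-point-of-aim dwell durations could be derived from the compass-and-clock data the claims describe. The sample format, the 2-degree grouping tolerance, and the function itself are illustrative assumptions, not the patent's method.

```python
def dwell_times(samples, tolerance=2.0):
    # samples: time-ordered (time_seconds, bearing_degrees) pairs.
    # Consecutive bearings within `tolerance` degrees count as the same
    # point of aim; returns (bearing, dwell_seconds) per point of aim.
    segments = []                        # each: [start_time, bearing, end_time]
    for t, bearing in samples:
        if segments and abs(bearing - segments[-1][1]) <= tolerance:
            segments[-1][2] = t          # still on the same point of aim
        else:
            segments.append([t, bearing, t])
    return [(bearing, end - start) for start, bearing, end in segments]

# Muzzle dwells near 90 deg for 2 s, then swings to 180 deg for 2 s.
path = dwell_times([(0.0, 90), (1.0, 90.5), (2.0, 91), (2.5, 180), (4.5, 180)])
```

The resulting (bearing, duration) list is exactly the kind of event data a remote computer could replay as a path overlaid on an image of the location.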

22-06-2017 publication date

Determination of position from images and associated camera positions

Number: US20170178358A1
Assignee: 2d3 Ltd

An apparatus includes an interface configured to receive image data and position data. The image data is associated with a plurality of images of a scene including an object. The position data is associated with positions of a camera that captured the plurality of images. The apparatus further includes a processor configured to identify a corresponding camera position for a first image of the plurality of images and to output an indication of a global position of the object based on first image data corresponding to the first image and based on the corresponding camera position.

28-06-2018 publication date

OBJECT DETECTION SYSTEM

Number: US20180181790A1
Author: Li Bing
Assignee:

An airborne mine countermeasure system includes a processor coupled to a memory having stored therein software instructions that, when executed by the processor, cause the processor to perform a series of image processing operations. The operations include obtaining input image data from an external image sensor, and extracting a sequence of 2-D slices from the input image data. The operations also include performing a 3-D connected region analysis on the sequence of 2-D slices, and extracting 3-D invariant features in the image data. The operations further include performing coarse filtering, performing fine recognition and outputting an image processing result having an indication of the presence of any mines within the input image data. 1-20. (canceled) 21. An object detection system, comprising: an image sensor configured to obtain image data in real-time; a processor for executing modules; and a memory containing the modules, the modules including: a live data processing system configured to process the image data in real-time; an invariant feature extraction module configured to extract 3-D invariant features of at least one object in the image data; and a classifier configured to compare the 3-D invariant features of the at least one object in the image data to a data store of trained 3-D invariant features of known objects, wherein the classifier is further configured to output an indication of how close the 3-D invariant features of the at least one object in the image data match the trained 3-D invariant features. 22. The object detection system of claim 21, further comprising: an object training module configured to extract training 3-D invariant features of the known objects in training data, and store the training 3-D invariant features as the trained 3-D invariant features and the known objects in the data store. 23.
The object detection system of claim 21, wherein the classifier is further configured to perform coarse filtering based on the 3-D ...
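The classifier's coarse-filtering step (compare an extracted 3-D invariant feature vector against a store of known objects and report how closely each matches) can be sketched with plain Euclidean distance. The feature vectors, threshold, and closeness measure here are illustrative assumptions, not values from the patent.

```python
import math

def coarse_filter(features, store, threshold=0.5):
    # Compare one extracted feature vector against every known object in
    # the store; keep objects whose Euclidean distance is under the
    # threshold and report closeness = 1 / (1 + distance), best first.
    hits = []
    for name, known in store.items():
        dist = math.dist(features, known)
        if dist < threshold:
            hits.append((name, 1.0 / (1.0 + dist)))
    return sorted(hits, key=lambda hit: -hit[1])

# Hypothetical trained feature store and one extracted feature vector.
store = {"mine_a": [0.9, 0.1, 0.4], "mine_b": [0.2, 0.8, 0.5],
         "rock": [0.1, 0.1, 0.1]}
matches = coarse_filter([0.88, 0.12, 0.42], store)
```

Fine recognition would then run only on the survivors of this cheap pass, which is the point of coarse filtering.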

15-07-2021 publication date

AUGMENTED REALITY SYSTEM AND METHOD OF DISPLAYING AN AUGMENTED REALITY IMAGE

Number: US20210217210A1
Assignee: AUGMENTI AS

An augmented reality system includes a global navigation satellite system module adapted to output position data, an orientation measurement module adapted to output orientation data, an augmented reality module, and at least one AR-client having a camera and a display. The augmented reality module is adapted to determine a position and orientation of the camera of the at least one AR-client based on the position data and orientation data, calculate screen positions of at least one AR object based on the position and orientation of the camera of the at least one AR-client to create at least one AR-overlay, and transmit the at least one AR-overlay to the at least one AR-client; and the AR-client is adapted to merge the at least one AR-overlay with a picture received from the camera of the at least one AR-client to provide an AR-image, and display the AR-image on the display. 1. A system, comprising: a camera configured to capture an image surrounding the camera; global navigation satellite system circuitry configured to obtain position data of a device; inertial measurement circuitry configured to obtain orientation data of the device; and processing circuitry configured to: identify a heading of the device based on the position data and the orientation data, analyze the image and identify an earth fixed feature in the image, track movement of the identified earth fixed feature, adjust the identified heading of the device by compensating for the movement of the earth fixed feature, and output the adjusted heading of the device. 2. The system according to claim 1, wherein the device is an augmented reality (AR) client. 3. The system according to claim 2, wherein the processing circuitry is further configured to calculate a screen position of an AR object based on the position and the orientation of the AR client to create an AR overlay. 4.
The system according to claim 3, wherein the processing circuitry is further configured to transmit the AR overlay to ...
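A toy sketch of the heading-correction idea in claim 1: an earth-fixed feature should not move between frames, so its apparent horizontal shift is read as heading drift and subtracted from the reported heading. The pixel-to-degree factor is an assumed camera calibration, not something the patent specifies.

```python
def adjust_heading(reported_heading, feature_x_prev, feature_x_curr,
                   degrees_per_pixel=0.05):
    # Apparent motion of the fixed feature (pixels) maps to heading
    # drift (degrees); compensate and keep the result in [0, 360).
    drift = (feature_x_curr - feature_x_prev) * degrees_per_pixel
    return (reported_heading - drift) % 360.0

# Feature drifted 20 px to the right -> heading drifted 1 degree.
h = adjust_heading(90.0, 400, 420)
```

This is why tracking a landmark stabilises an IMU-derived heading: the landmark supplies the absolute reference the inertial sensors lack.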

05-07-2018 publication date

COLLABORATIVE MULTI SENSOR SYSTEM FOR SITE EXPLOITATION

Number: US20180190014A1
Assignee:

A system communicates with a plurality of units. Each of the plurality of units includes a two dimensional camera, a three dimensional camera, a multispectral camera, and/or an inertial measurement unit. Each of the plurality of units is associated with a person, a vehicle, or a robot, and each of the units collects data relating to an environment. The system receives the data relating to the environment from the plurality of units, and uses the data from each of the plurality of units to estimate the positions of the units and to track the positions of the units. The system enables the plurality of units to communicate with each other regarding the collection of the data relating to the environment, commingles and analyzes the data from the plurality of units, and uses the commingled and analyzed data to build a three-dimensional map of the environment. 1. A system comprising: a computer processor and a computer storage device configured to: communicate with a plurality of units, each of the plurality of units comprising one or more of a two dimensional camera, a three dimensional camera, a multispectral camera, a sensor suite, and an inertial measurement unit, wherein each of the plurality of units is associated with a person, a vehicle, or a robot and each of the units is operable to collect data relating to an environment; receive the data relating to the environment from the plurality of units; use the data from each of the plurality of units to estimate the positions of the units and to track the positions of the units; permit the plurality of units to communicate with each other regarding the collection of the data relating to the environment; commingle and analyze the data from the plurality of units; and use the commingled and analyzed data to build a three-dimensional map of the environment. 2. The system of claim 1, comprising using the commingled and analyzed data to create a virtual environment. 3. The system of claim 1, wherein the ...

11-06-2020 publication date

SUPPRESSION OF IRON SIGHT BLOOMING IN INFRARED WEAPON SIGHTS

Number: US20200186682A1

A system for suppressing sight blooming in an infrared sight includes determining an n×n grid size; creating a grid of n×n averages; calculating a mean; determining if a heat source is detected; detecting a heat source radius; calculating the average of the outside boxes; estimating the average of the inside boxes; setting the source to zero; smoothing the boxes; subtracting the mean of the outside from the mean of the inside; feeding back the result into history; up-sampling the offset; subtracting the offsets; and displaying the image. 1. A device for suppressing blooming for an infrared imager, comprising: an infrared imager; an infrared image processing module; and a processor of said infrared image processing module configured to perform: at said infrared image processing module, receiving infrared scene imagery from said infrared imager; in said infrared image processing module, processing said infrared scene to detect blooming; in said infrared image processing module, processing said infrared scene to suppress blooming; and in an infrared display, displaying said infrared scene with blooming suppressed. 2.
The device for suppressing blooming for an infrared imager of claim 1, comprising: in said processor of said infrared image processing module, determining a size of a grid of n×n based on a format of a sensor of said infrared imager; creating a grid of n×n averages over an area of interest for a contiguous heat source proximate and in fixed relation to said infrared imager, whereby said region is down-sampled; calculating a mean of outer boxes of said n×n grid; determining if said contiguous heat source proximate and in fixed relation to said infrared imager is detected based on said mean of said outer boxes of said n×n grid; detecting, in a pattern moving out from a center, a radius of a signature of said contiguous heat source; calculating an average of all boxes outside of said radius; using said average as an estimate for an average within said contiguous heat source signature; smoothing boxes ...
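The grid-based steps in the claims (down-sample the frame into n×n box averages, compare border boxes against centre boxes, treat the excess as bloom) can be sketched for a single frame as follows. The history feedback and up-sampling steps are omitted, and the image values are synthetic.

```python
def box_averages(image, n):
    # Down-sample the image (rows of pixel values) into an n x n grid of
    # box means, as in the "grid of n x n averages" step.
    h, w = len(image), len(image[0])
    bh, bw = h // n, w // n
    return [[sum(image[r][c]
                 for r in range(i * bh, (i + 1) * bh)
                 for c in range(j * bw, (j + 1) * bw)) / (bh * bw)
             for j in range(n)] for i in range(n)]

def bloom_offset(grid):
    # Bloom estimate: how far the interior boxes exceed the border boxes.
    n = len(grid)
    outer = [grid[i][j] for i in range(n) for j in range(n)
             if i in (0, n - 1) or j in (0, n - 1)]
    inner = [grid[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
    return sum(inner) / len(inner) - sum(outer) / len(outer)

# 8x8 synthetic frame: background level 10, blooming 4x4 centre at 50.
img = [[50 if 2 <= r < 6 and 2 <= c < 6 else 10 for c in range(8)]
       for r in range(8)]
offset = bloom_offset(box_averages(img, 4))
```

Subtracting this offset from the centre region (after up-sampling the coarse grid back to sensor resolution) is what flattens the bloom around the iron sight.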

28-07-2016 publication date

SYSTEMS AND METHODS FOR MAPPING SENSOR FEEDBACK ONTO VIRTUAL REPRESENTATIONS OF DETECTION SURFACES

Number: US20160217578A1
Assignee: RED LOTUS TECHNOLOGIES, INC.

Systems and methods for mapping sensor feedback onto virtual representations of detection surfaces are disclosed herein. A system configured in accordance with an embodiment of the present technology can, for example, record and process feedback from a sensing device (e.g., a metal detector), record and process user inputs from a user input device (e.g., user-determined locations of disturbances in the soil surface), determine the 3D position, orientation, and motion of the sensing device with respect to a detection surface (e.g., a region of land being surveyed for landmines), and visually integrate captured and computed information to support decision-making (e.g., overlay a feedback intensity map on an image of the ground surface). In various embodiments, the system can also determine the 3D position, orientation, and motion of the sensing device with respect to the earth's absolute coordinate frame, and/or record and process information about the detection surface. 1. A method in a computing system of mapping onto a virtual representation of a detection surface feedback from an above-surface mobile detector of objects below the detection surface, the method comprising: receiving data characterizing a position and motion of the mobile detector from one or more of inertial sensors, a GPS receiver, ultrasound transducers, and optical sensors associated with the mobile detector; determining, by the computing system and based on the received data, a pose of the mobile detector; receiving information characterizing the detection surface from one or more imaging sensors associated with the mobile detector; generating, by the computing system, a virtual representation of the detection surface based on the determined pose of the mobile detector and the received information characterizing the detection surface; capturing feedback from the mobile detector regarding detection of an object below the detection surface at a certain time; identifying a detected object location based on ...

13-08-2015 publication date

AIRBORNE MINE COUNTERMEASURES

Number: US20150227807A1
Author: Li Bing
Assignee: LOCKHEED MARTIN CORPORATION

An airborne mine countermeasure system includes a processor coupled to a memory having stored therein software instructions that, when executed by the processor, cause the processor to perform a series of image processing operations. The operations include obtaining input image data from an external image sensor, and extracting a sequence of 2-D slices from the input image data. The operations also include performing a 3-D connected region analysis on the sequence of 2-D slices, and extracting 3-D invariant features in the image data. The operations further include performing coarse filtering, performing fine recognition and outputting an image processing result having an indication of the presence of any mines within the input image data. 1. An airborne underwater mine countermeasure system adapted to operate onboard an aircraft, the airborne underwater mine countermeasure system comprising: a processor onboard the aircraft, the processor coupled to a memory onboard the ..., configured to perform operations including: obtaining input image data of a body of water from an external image sensor; extracting a sequence of 2-D slices from the input image data by applying diffusion equations to generate the sequence of 2-D slices; performing a 3-D connected region analysis on the sequence of 2-D slices, the connected region analysis including analyzing volumetric pixel elements; computing 3-D invariant features in the image data; performing coarse filtering based on the 3-D invariant features, the coarse filtering including: comparing 3-D invariant features in the image data to a store of invariant features associated with known objects, providing a list of found objects in the image data based on the store, and for each of the found objects, providing an indication of how close features of the found object match features in the store; performing fine recognition; and outputting an image processing result having an indication of the presence of any underwater mines within the input image data.
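The 3-D connected region analysis over a sequence of 2-D slices is, in essence, connected-component labelling on a voxel volume. A minimal 6-connected flood-fill sketch (not the patent's diffusion-based pipeline):

```python
from collections import deque

def connected_regions(volume):
    # volume[z][y][x] in {0, 1}: a stack of binary 2-D slices treated as
    # one voxel volume. Flood-fills 6-connected regions of "on" voxels
    # and returns their sizes, largest first.
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen, sizes = set(), []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z][y][x] or (z, y, x) in seen:
                    continue
                queue, size = deque([(z, y, x)]), 0
                seen.add((z, y, x))
                while queue:
                    cz, cy, cx = queue.popleft()
                    size += 1
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        z2, y2, x2 = cz + dz, cy + dy, cx + dx
                        if (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                                and volume[z2][y2][x2]
                                and (z2, y2, x2) not in seen):
                            seen.add((z2, y2, x2))
                            queue.append((z2, y2, x2))
                sizes.append(size)
    return sorted(sizes, reverse=True)

# Two 2x2 slices: one blob spans both slices, one voxel stands alone.
slices = [[[1, 0], [0, 0]], [[1, 0], [0, 1]]]
sizes = connected_regions(slices)
```

Regions that persist across slices become the candidate volumes on which the 3-D invariant features are computed.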

20-08-2015 publication date

EXTERNAL VISION AND/OR WEAPON AIMING SYSTEM FOR MILITARY LAND VEHICLES AND MILITARY NAVAL UNITS

Number: US20150235400A1
Assignee: Selex ES S.p.A.

The invention regards an external vision and/or weapon aiming system for a military land vehicle and/or a military naval unit. The system comprises: two sensors configured to capture video streams of a same external scene, each in a respective spectral band; an electronic processing unit configured to insert a respective aiming reticle in the images of each captured video stream, thereby generating a corresponding pre-processed video stream, and to process the two pre-processed video streams; and a user interface configured to display a video stream received from the electronic processing unit. The system is characterized in that the electronic processing unit is configured to process the two pre-processed video streams by means of image enhancement and picture-in-picture functionalities, thereby generating first and second enhanced video streams. 1. An external vision and/or weapon aiming system designed to be installed on board a military land vehicle and/or a military naval unit, and comprising: two sensors configured to capture video streams comprising images of a same scene outside the military vehicle or the military naval unit, each sensor being configured to capture a respective video stream in a respective spectral band; an electronic processing unit connected to the two sensors to grab the two captured video streams and configured to: insert a respective aiming reticle in the images of each grabbed video stream that indicates the aiming of the respective sensor that has captured said video stream, thereby generating a corresponding pre-processed video stream, and process the two pre-processed video streams; and a user interface connected to the electronic processing unit to receive the processed video streams and comprising a screen configured to display a video stream received from said electronic processing unit; wherein the electronic processing unit is configured to process the two pre-processed video streams by means of: an image ...

10-08-2017 publication date

STITCHED IMAGE

Number: US20170228903A1
Assignee:

Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images. 1-13. (canceled) 14. A system, comprising: an access component configured to access a stitched image of a location and an offline image of the location; and an alignment component configured to align the stitched image and the offline image, where the stitched image is a real-time image and the offline image is a non-real-time image. 15. The system of claim 14, where the stitched image is a compound image of segment images and where the segment images are of a lower resolution than a resolution of the compound image and where the offline image is a map of an area taken aerially and where the segment images are images of the area taken from a machine-launched projectile. 16. The system of claim 14, comprising: a comparison component configured to make a comparison between the stitched image against the offline image; an analysis component configured to perform an analysis on a result of the comparison; and a feature component configured to find a common feature set among the stitched image and the offline image through employment of a result of the analysis, where the alignment component uses the common feature set to align the stitched image and the offline image. 17-20. (canceled) 21. The system of claim 14, where the alignment component is configured to combine the stitched image and the offline image to form an aligned image and where the aligned image is presented on a display. 22.
The system of claim 21, where the ...
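A toy sketch of why a stitched composite out-resolves its segments: pasting tiles at known offsets yields a canvas covering more pixels than any one tile. Real stitching also needs registration (recovering the offsets) and seam blending, both omitted here.

```python
def stitch(tiles, canvas_h, canvas_w):
    # tiles: (row_offset, col_offset, tile) triples, each tile given as
    # rows of pixel values; offsets are assumed already registered.
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for r0, c0, tile in tiles:
        for r, row in enumerate(tile):
            for c, pix in enumerate(row):
                canvas[r0 + r][c0 + c] = pix
    return canvas

# Two 2x2 segment images combine into a 2x4 composite.
a = [[1, 1], [1, 1]]
b = [[2, 2], [2, 2]]
mosaic = stitch([(0, 0, a), (0, 2, b)], 2, 4)
```

The composite can then be compared against an offline aerial map to recover the position of the imaged area, as the claims describe.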

10-08-2017 publication date

STITCHED IMAGE

Number: US20170228904A1
Assignee:

Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images. 1-16. (canceled) 17. A system, comprising: a processor; and a non-transitory computer-readable medium configured to store computer-executable instructions that when executed by the processor cause the processor to perform a method, the method comprising: comparing a combination image, which is a real-time image of a vicinity, against a non-real-time image of the vicinity to produce a comparison result, where the combination image results from stitching a set of sub-combination images together; identifying a position of the vicinity through use of the comparison result; and causing an information set that is indicative of the position to be disclosed on an interface. 18. The system of claim 17, where the information set that is indicative of the position is displayed concurrently with the combination image on the interface. 19. The system of claim 17, where the set of sub-combination images are captured by a launched projectile. 20. The system of claim 17, where the information set is a command message set and where an individual command of the command message set causes a battle command action to occur, when selected, at a locality that is indicated by the location of the combination image. 21. The system of claim 17, where the interface discloses the combination image concurrently with the non-real-time image and where the combination image is displayed on top of the non-real-time image. 22 ...

16-08-2018 publication date

Firearm System that Tracks Points of Aim of a Firearm

Number: US20180231353A1
Author: Lyren Philip Scott
Assignee:

A firearm system includes a firearm and a computer. Electronics in the firearm determine data that includes a pathway between different points of aim of the firearm as the firearm moves. The computer receives this data and builds an image of the pathway between the different points of aim of the firearm. 1-20. (canceled) 21. A method executed by a firearm system, comprising: determining, with electronics in a handgun, events that include tracking a continuous path between different points of aim of the handgun as the handgun moves between the different points of aim while at a location, determining a compass direction of the different points of aim, and determining a duration of time that the handgun is pointed at the compass direction; wirelessly transmitting, from the handgun and to a remote computer, event data that includes the continuous path between the different points of aim of the handgun as the handgun moves between the different points of aim while at the location, the compass direction of the different points of aim, and the duration of time that the handgun is pointed at the compass direction; reconstructing, at the remote computer and from the event data, the events that include the continuous path between the different points of aim of the handgun as the handgun moves between the different points of aim while at the location, the compass direction of the different points of aim, and the duration of time that the handgun is pointed at the compass direction; and displaying, at the remote computer, a reconstruction of the events that includes showing an image of the location, an image of the handgun, the continuous path between the different points of aim of the handgun on the image of the location, the compass direction of the different points of aim, and the duration of time that the handgun is pointed at the compass direction. 22.
The method of claim 21, further comprising: displaying, at the remote computer, the continuous path as one or more lines on a ...

17-08-2017 publication date

DETECTING AND LOCATING BRIGHT LIGHT SOURCES FROM MOVING AIRCRAFT

Number: US20170237947A1
Assignee:

A method and system for a light source detection system, comprising an aircraft carrying at least one camera. The system includes a database for storing information about the aircraft's motion, direction, and position and ground location information, such as the onboard navigation system database or a remotely accessible database. The system also includes a processor that accesses the database and is connected to the camera. The processor uses image analysis and processing techniques to determine the ground location corresponding to the light source from an image of that light captured by the camera. It determines the path traveled by that light and estimates its location as being a pre-selected distance vertically above the ground along the path traveled by the light to the aircraft when the image of the light was captured. 1. A light source detection system, comprising: an aircraft; at least one camera carried by said aircraft; a database, said database storing motion, direction, and position information of said aircraft and ground location information; and a processor having access to said database and in electronic communication with said at least one camera, said processor configured to determine a ground location corresponding to a source of light when said at least one camera records an image of said light from said source. 2. The light source detection system as recited in claim 1, wherein said processor is configured to perform a spectral analysis on light in said image. 3. The light source detection system as recited in claim 1, wherein said processor is configured to perform a spectral analysis to identify light from a laser in said image. 4. The light source detection system as recited in claim 1, wherein said processor is configured to perform a spectral analysis to identify light from a muzzle flash in said image. 5. The light source detection system as recited in claim 1, wherein said at least one camera has a filter. 6. The light source detection system ...
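The geometric estimate the abstract describes can be sketched as a ray-ground intersection: back-project the light's arrival direction from the aircraft, intersect the ray with the ground, and place the source a pre-selected height above that ground point. Flat ground at z = 0 and the sample numbers are assumptions for illustration.

```python
def estimate_source(aircraft_pos, ray_dir, source_height=1.5):
    # aircraft_pos = (x, y, altitude); ray_dir points from the aircraft
    # back along the light's path toward the source (dz < 0, descending).
    x, y, z = aircraft_pos
    dx, dy, dz = ray_dir
    t = -z / dz                 # parameter where the ray reaches z = 0
    return (x + t * dx, y + t * dy, source_height)

# Aircraft at 1000 m altitude; light arrived from ahead and below.
src = estimate_source((0.0, 0.0, 1000.0), (0.5, 0.0, -1.0))
```

Replacing the flat plane with terrain data from the navigation database refines the ground intersection without changing the structure of the computation.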

26-08-2021 publication date

METHOD FOR GUIDING A MISSILE, MISSILE CONTROLLER AND MISSILE

Number: US20210262762A1
Assignee:

A missile controller guides a missile along a flight path to a stationary or moving target object. The missile controller has at least one side-looking sensor, configured to record surroundings data and having a field of view aligned transverse to the longitudinal axis of the missile, and a control unit having a reception unit for receiving target object data regarding the target object, the target object data containing position data, orientation data and/or speed data of the target object. The control unit is configured to set the orientation of the missile during the guidance at least partly based on the received target object data such that the target object is located in the field of view of the side-looking sensor at least in sections of a final guidance phase. 1. A missile controller for guiding a missile along a flight path to a target object, the missile controller comprising: at least one side-looking sensor, configured to record surroundings data, and having a field of view aligned transverse to a longitudinal axis of the missile; and a controller having a reception unit receiving target object data regarding the target object, the target object data containing position data, orientation data and/or speed data of the target object, said controller configured to set an orientation of the missile during a guidance at least partly based on the target object data received such that the target object is located in the field of view of said at least one side-looking sensor at least in sections of a final guidance phase. 2. The missile controller according to claim 1, wherein said controller is configured, based on the target object data received, to set the orientation of the missile by actively controlling a pitch, yaw and/or roll angle of the missile such that the target object is located, at least in sections of the guidance, within an azimuthal angle range covered by the field of view with respect to a polar axis ...

03-09-2015 publication date

METHOD FOR IDENTIFYING AND POSITIONING BUILDING USING OUTLINE REGION RESTRAINT OF MOUNTAIN

Number: US20150248579A1
Assignee:

A method for identifying and positioning a building using mountain-based outline region restraint, including steps of: (1) obtaining a real-time image, detecting a mountain-based outline of the real-time image, and extending the mountain-based outline, thereby obtaining a mountain-based outline restraint region; (2) conducting morphological enhancement and background suppression on the image in the mountain-based outline restraint region; (3) conducting recursive segmentation in the mountain-based outline restraint region, thereby transforming an image obtained in step (2) into a binary image; (4) extracting local regions of interest of a target building in the mountain-based outline restraint region according to the binary image; and (5) directly identifying and positioning the target building in the local regions of interest. 1. A method for identifying and positioning a building using a mountain-based outline region restraint, comprising the steps of: (1) obtaining a real-time image, detecting a mountain-based outline of said real-time image, and extending said mountain-based outline, thereby obtaining a mountain-based outline restraint region; (2) conducting morphological enhancement and background suppression on said image in said mountain-based outline restraint region; (3) conducting recursive segmentation in said mountain-based outline restraint region, thereby transforming an image obtained in step (2) into a binary image; (4) extracting local regions of interest of a target building in said mountain-based outline restraint region according to said binary image; and (5) directly identifying and positioning said target building in said local regions of interest. 2.
The method of claim 1 , wherein step (1) further comprises sub-steps of:(1-1) obtaining said real-time image, and conducting binary segmentation thereon via a maximum variance between clusters method;(1-2) processing said binary-segmented image via a morphological dilation and erosion scheme after conducting ...
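Sub-step (1-1) binarizes the real-time image with the "maximum variance between clusters" method, better known as Otsu's method: pick the threshold that maximizes the between-class variance w0·w1·(m0−m1)² of the two pixel populations. A minimal numpy sketch, not the patent's implementation; the function name and the synthetic test image are illustrative:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                      # weight of the dark class
        if w0 == 0:
            continue
        w1 = total - w0                    # weight of the bright class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Binary segmentation of a synthetic two-mode image: dark left, bright right
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)
```

With a clean bimodal histogram the threshold falls between the two modes and the binary image separates the halves exactly.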

10-09-2015 publication date

MACHINE VISION SYSTEM FOR IDENTIFYING AND SORTING PROJECTILES AND OTHER OBJECTS

Number: US20150254828A1
Assignee:

A machine vision system for automatically identifying and inspecting objects is disclosed, including composable vision-based recognition modules and a decision algorithm to perform the final determination on object type and quality. This vision system has been used to develop a Projectile Identification System and an Automated Tactical Ammunition Classification System. The technology can be used to create numerous other inspection and automated identification systems. 1. A method of inspecting and sorting munitions , ordnance or other manufactured parts , comprising the steps of:consecutively conveying parts along a path which extends through a plurality of inspection stations including a circumference vision station;facilitating movement of the parts through the circumference vision station such that all sides of each part are at least temporarily visually unobstructed;illuminating an exterior side surface of each part when the part is visually unobstructed to generate a plurality of side images of the parts;processing the side images of the part to identify physical part characteristics or defects; andsorting the parts in accordance with the part characteristics or defects.2. The method of claim 1 , including the steps of:directing parts identified as having an unacceptable defect to a defective part area; anddirecting parts not identified as having an unacceptable defect to an acceptable part area.3. The method of claim 1 , including the step of generating at least two opposing side images of a part.4. The method of claim 1 , including the step of constructing a profile of a part based upon the multiple side images.5. The method of claim 1 , including the steps of:storing a plurality of object templates;processing the side images to generate length and profile information; anddetermining part type by comparing the length and profile information to the stored templates.6. 
The method of claim 1 , including the steps of:storing information relating to one or more ...

15-09-2016 publication date

LASER SPOT FINDING

Number: US20160267679A1
Assignee: Cubic Corporation

Techniques are disclosed for determining the location of a laser spot in an image, enabling wind sensing to be performed on captured images of the laser spot. Techniques can include image averaging, background subtraction, and filtering to help ensure that the Gaussian laser spot is detected in the image. Embodiments may include defining a bounding region and altering the operation of a camera such that the camera does not provide pixel data from pixel sensors corresponding to pixels outside the bounding region in subsequent image captures. Embodiments may additionally or alternatively include extracting two stereoscopic images from a single image capture. 1. A method of determining a location of a laser spot in an image, the method comprising: obtaining, from a camera, a plurality of images comprising a first set of images and a second set of images, wherein the first set of images comprises images of a scene taken over a first period of time in which the laser spot is not present in the scene, and the second set of images comprises images of the scene taken over a second period of time in which the laser spot is present in the scene; combining the images in the first set of images to create a first composite image; combining the images in the second set of images to create a second composite image; creating a difference image by determining a difference in pixel values of pixels in the first composite image from pixel values of pixels in the second composite image; creating a filtered difference image by applying a Gaussian filter to the difference image; and comparing one or more pixel values of one or more pixels of the filtered difference image with a threshold value to determine the location of the laser spot within the filtered difference image. 2.
The method of determining the location of a laser spot within an image of claim 1 , further comprising finding a center location of the laser spot claim 1 , representative of the location of the laser spot within ...
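Claim 1's pipeline (composite averaging, background subtraction, Gaussian filtering, threshold comparison) can be sketched in a few lines of numpy. This is an illustrative reconstruction, not Cubic's implementation; the frame counts, kernel size, and threshold value are arbitrary:

```python
import numpy as np

def composite(stack):
    """Average a set of frames to suppress temporal noise."""
    return np.mean(stack, axis=0)

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def filtered_difference(bg_frames, spot_frames, k):
    """Difference of the two composites, then a 'same'-size 2-D convolution."""
    diff = composite(spot_frames) - composite(bg_frames)
    pad = k.shape[0] // 2
    p = np.pad(diff, pad)
    out = np.zeros_like(diff)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

rng = np.random.default_rng(0)
bg = rng.normal(10, 1, size=(4, 32, 32))            # spot-free frames
spot = bg + 0.5 * rng.normal(0, 1, size=(4, 32, 32))
spot[:, 16, 16] += 50                               # laser spot at (16, 16)
f = filtered_difference(bg, spot, gaussian_kernel())
loc = np.unravel_index(np.argmax(f), f.shape)
detected = f.max() > 3.0   # threshold comparison (value illustrative)
```

Background subtraction cancels the static scene, so the smoothed difference image peaks at the Gaussian spot's center.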

01-10-2015 publication date

Method for Image Processing and Method that Can be Performed Therewith For the Automatic Detection of Objects, Observation Device and Method for High-Precision Tracking of the Course Followed by Launched Rockets over Large Distances

Number: US20150281572A1
Assignee:

A method for image processing involves collecting image data of a scene as electromagnetic radiation and processing the image data to improve the signal-to-noise ratio of the image data. The image processing involves dividing a raw image that contains the image data into lines and columns, to create a raster image, superimposing a central raster filter element of a raster filter having an odd number of lines and an odd number of columns onto a raster image element, determining the brightness values of each of the raster image elements covered by the raster filter, wherein except for the central raster filter element, every other raster filter element has an individual light-reducing property, and adding up the brightness values to produce a total brightness value, and assigning this total brightness value to the raster image element covered by the central raster filter element. These image processing steps are repeated for all remaining raster image elements to produce a result image having the same resolution as the raw image from the total brightness values of the raster image elements. 1-11. (canceled) 12.
A method for image processing, comprising: a) collecting image data of a scene as electromagnetic radiation using an optical device; b1) creating a raster image by dividing a raw image that contains the image data into lines and columns; b2) superimposing a central raster filter element of a raster filter having an odd number of lines and an odd number of columns onto a raster image element; b3) determining brightness values of each of the raster image elements covered by the raster filter, wherein except for the central raster filter element, every other raster filter element has an individual light-reducing property; b4) producing a total brightness value by summing the brightness values determined in step b3), and assigning the total brightness value to the raster image element covered by the central raster filter element; b5) repeating steps b2) ...
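Steps b1)-b5) amount to a sliding weighted sum: an odd-sized mask, whose off-center weights model the light-reducing filter elements, is centered on each raster element and the weighted brightness total is written back to that element. A sketch under that reading; the weight values below are illustrative, not from the patent:

```python
import numpy as np

def raster_filter_image(raw, weights):
    """Slide an odd-sized weight mask over the raster image; each output
    pixel is the weighted sum of the covered elements (steps b2-b5)."""
    assert weights.shape[0] % 2 == 1 and weights.shape[1] % 2 == 1
    ph, pw = weights.shape[0] // 2, weights.shape[1] // 2
    padded = np.pad(raw, ((ph, ph), (pw, pw)))
    out = np.empty(raw.shape, dtype=float)
    for i in range(raw.shape[0]):
        for j in range(raw.shape[1]):
            window = padded[i:i + weights.shape[0], j:j + weights.shape[1]]
            out[i, j] = np.sum(window * weights)
    return out

# Center element passes light unattenuated (weight 1); neighbours are
# light-reduced (weights < 1).
w = np.full((3, 3), 0.25)
w[1, 1] = 1.0
raw = np.zeros((5, 5))
raw[2, 2] = 8.0                      # a single bright raster element
result = raster_filter_image(raw, w)
```

The result image has the same resolution as the raw image, matching the claim; a point source is spread into its neighbours in proportion to their light-reducing weights.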

29-09-2016 publication date

Object detection system

Number: US20160283775A1
Author: Bing Li
Assignee: Lockheed Martin Corp

An airborne mine countermeasure system includes a processor coupled to a memory having stored therein software instructions that, when executed by the processor, cause the processor to perform a series of image processing operations. The operations include obtaining input image data from an external image sensor, and extracting a sequence of 2-D slices from the input image data. The operations also include performing a 3-D connected region analysis on the sequence of 2-D slices, and extracting 3-D invariant features in the image data. The operations further include performing coarse filtering, performing fine recognition, and outputting an image processing result having an indication of the presence of any mines within the input image data.
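A 3-D connected region analysis on a stack of 2-D slices groups detections that persist across adjacent slices into single volumetric regions. A minimal 6-connected flood-fill labelling in pure numpy, offered as an illustration of the step rather than Lockheed Martin's implementation:

```python
import numpy as np
from collections import deque

def label_3d(volume):
    """6-connected 3-D region labelling over a stack of 2-D binary slices."""
    labels = np.zeros(volume.shape, dtype=int)
    current = 0
    for idx in np.argwhere(volume):
        if labels[tuple(idx)]:
            continue                      # already assigned to a region
        current += 1
        labels[tuple(idx)] = current
        q = deque([tuple(idx)])
        while q:                          # breadth-first flood fill
            z, y, x = q.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[k] < volume.shape[k] for k in range(3)) \
                        and volume[n] and not labels[n]:
                    labels[n] = current
                    q.append(n)
    return labels, current

# A detection persisting across two slices merges into one 3-D region;
# an isolated return in another slice becomes a second region.
slices = np.zeros((3, 4, 4), dtype=bool)
slices[0, 1, 1] = slices[1, 1, 1] = True
slices[2, 3, 3] = True
labels, n_regions = label_3d(slices)
```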

29-08-2019 publication date

SYSTEMS, METHODS, APPARATUSES, AND DEVICES FOR IDENTIFYING, TRACKING, AND MANAGING UNMANNED AERIAL VEHICLES

Number: US20190266410A1
Assignee:

Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows for a detected UAV to be tracked, which may allow for the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs. 1-31. (canceled) 32. A method for identifying unmanned aerial vehicles (UAVs), comprising the steps of: receiving a plurality of pixels corresponding to a frame captured via an image capturing device configured to monitor a particular air space; identifying at least one region of interest (ROI) in the frame, the at least one ROI comprising an image of an object that may be a UAV flying within the particular air space, wherein the at least one ROI comprises a subset of the plurality of pixels, and wherein the at least one ROI includes a first ROI inter-frame position and a first ROI velocity; comparing the first ROI inter-frame position and the first ROI velocity of the at least one ROI to a plurality of stored ROI inter-frame positions and a plurality of stored ROI velocities of a plurality of stored ROIs to determine if the at least one ROI substantially matches any of the plurality of stored ROIs, wherein the plurality of stored ROIs are associated with substantially reoccurring objects in the particular air space; extracting image data from the image of the at least one ROI; and comparing the extracted image data
...

27-08-2020 publication date

IDENTIFYING, TRACKING, AND DISRUPTING UNMANNED AERIAL VEHICLES

Number: US20200272827A1
Assignee:

Systems, methods, and apparatus for identifying, tracking, and disrupting UAVs are described herein. Sensor data can be received from one or more portable countermeasure devices or sensors. The sensor data can relate to an object detected proximate to a particular airspace. The system can analyze the sensor data relating to the object to determine a location of the object and determine that the object is flying within the particular airspace based at least in part on location data. A portable countermeasure device can be identified that corresponds to the location of the object. The system can transmit information about the object to the identified portable countermeasure device. The portable countermeasure device can transmit additional data relating to the object to the system. 1. A method comprising:receiving, via at least one computing device, sensor data relating to an object detected proximate to a particular airspace from a plurality of sensors;analyzing, via the at least one computing device, the sensor data relating to the object to determine a location of the object;determining, via the at least one computing device, that the object is flying within the particular airspace based at least in part on the location;identifying, via the at least one computing device, at least one portable countermeasure device based at least in part on the location of the object;transmitting, via the at least one computing device, information about the object to the at least one portable countermeasure device; andreceiving, via the at least one computing device, additional data relating to the object from the portable countermeasure device.2. The method of claim 1 , wherein the additional data is received as a data stream over a communication network from the at least one portable countermeasure device.3. The method of claim 1 , wherein the particular airspace is proximate to at least one of the plurality of sensors.4. 
The method of claim 1 , wherein the plurality of sensors ...

13-10-2016 publication date

METHOD, SYSTEM AND COMPUTER PROGRAM FOR DECISION SUPPORT

Number: US20160300504A1
Assignee:

A decision support method for use by an operator surrounded by adverse entities in a battlefield environment comprises generating a layered representation of the physical environment surrounding the operator from sensor information by mapping the spherical physical environment of the operator into a geometrical representation suitable for display on a screen, the representation being segmented into a plurality of layers having respective sizes, each layer being associated with a respective category of tactical actions. The representation further comprises visual elements representing adverse entities in the surrounding physical environment of the operator, each visual element being represented so as to be superposed with a given layer. 1. A decision support method for use by an operator surrounded by adverse entities in a battlefield environment , the method comprising generating a layered representation of the physical environment surrounding the operator from sensor information by mapping the spherical physical environment of the operator into a geometrical representation suitable for display on a screen , said representation being segmented into a plurality of layers having respective sizes , each layer being associated with a respective category of tactical actions , said representation further including visual elements representing adverse entities in the surrounding physical environment of the operator , each visual element being represented so as to be superposed with a given layer.2. The method of claim 1 , wherein it comprises associating at least one geographical parameter with each layer claim 1 , said parameters comprising the distance range delimiting the layer claim 1 , said distance range being associated with a begin range and an end range.3. The method of claim 2 , wherein the distance range associated with each layer is predefined and static claim 2 , or dynamically defined depending on predefined criteria.4. The method of claim 1 , wherein it ...

19-10-2017 publication date

AUTOMATED MULTIPLE TARGET DETECTION AND TRACKING SYSTEM

Number: US20170300759A1
Assignee:

For automated detection and tracking of multiple targets, an apparatus, method, and program product are disclosed. The apparatus includes a camera that captures video data and a processor that compensates for camera motion in the video data, processes the compensated video data to remove noise and spurious returns, detects one or more targets within the processed video data, and identifies target information for each target in the processed video data. 1. A method for automated detection and tracking of multiple targets , comprising:receiving video data;compensating for platform motion in the video data;removing noise and spurious returns from the video data;detecting one or more targets within the video data; andidentifying target information for each target in the video data.2. The method of further comprising:receiving position information for a vantage point;determining whether current location matches the vantage point;receiving camera information and target information in response to the current location matching the vantage point; andidentifying a set of objects of interest from among the detected one or more targets in the video data based on the target information.3. The method of claim 2 , further comprising handing over target tracking from a first camera platform to a second camera platform in response to the second camera platform arriving at the vantage point claim 2 , wherein the second camera platform receives the camera information and target information from the first camera platform.4. The method of claim 2 , wherein receiving video data comprises controlling a camera based on the camera information to acquire the video data.5. The method of claim 1 , wherein compensating for platform motion comprises applying a geometric transformation to frames of the video data.6. 
The method of claim 5 , wherein the geometric transformation is based on movement data captured by an inertial measurement unit (IMU) coupled to a camera platform claim 5 , wherein ...
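Claim 5 compensates platform motion by applying a geometric transformation to each frame, with the motion estimated from IMU data (claim 6). In the simplest case the transformation is a pure pixel translation, and compensation means applying its inverse. A hedged numpy sketch; the function name, the integer-shift model, and the example values are assumptions, not the patent's method:

```python
import numpy as np

def compensate_translation(frame, dy, dx):
    """Undo an estimated platform translation of (dy, dx) pixels so that
    consecutive frames align to a common reference; out-of-frame source
    pixels are left at zero."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = frame[sy, sx]
    return out

# A feature imaged at (5, 6) after the platform drifted by (2, 3) pixels
frame = np.zeros((10, 10))
frame[5, 6] = 1.0
stabilized = compensate_translation(frame, dy=2, dx=3)
```

After compensation the feature sits at its motion-free position (3, 3), so frame-to-frame differencing no longer sees the platform's own motion as a moving target.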

29-10-2015 publication date

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR INDICATING HOSTILE FIRE

Number: US20150310627A1
Assignee:

A network for indicating and communicating detection of hostile fire, and systems, methods, and computer program products thereof. Hostile fire is optically detected and identified at a first vehicle and such identification is transmitted from the first vehicle to one or more other vehicles in the network. Data regarding hostile fire directed at the first vehicle can be stored at one or more of the other vehicles and even retransmitted to other vehicles or base stations. 1-20. (canceled) 21. A system for detecting hostile fire, the system comprising: a first system located at a first position, the first system comprising: an infrared (IR) camera having a field of view (FOV) and a sensitivity sufficient to capture a heat signature of a fired unguided energetic projectile, the heat signature being an image pixel trail representing a portion of a trail of energy of the projectile; and an image processor configured to: receive pixel data from the IR camera, the pixel data corresponding to the captured portion of the trail of energy; calculate, based on the pixel data, a vertical velocity and a vertical acceleration of the pixel data and a horizontal velocity and a horizontal acceleration of the pixel data; calculate an estimated vertical miss distance of the fired projectile based on the vertical velocity and vertical acceleration of the pixel data; calculate an estimated horizontal miss distance of the fired projectile based on the horizontal velocity and horizontal acceleration of the pixel data; and make a determination as to the likelihood of whether the first position will be hit by the fired projectile by analyzing calculated estimated vertical and horizontal miss distances of the fired projectile. 22. The system of claim 21, wherein the IR camera is mounted on a moving object, the object being a person or vehicle, and wherein calculating the vertical and horizontal velocities is further based on the velocity of the object. 23.
The system ...
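The claimed image processor derives vertical and horizontal pixel velocity and acceleration from the captured trail and extrapolates miss distances from them; under a constant-acceleration model that is p(t) = p + v·t + a·t²/2 per axis. A hedged sketch: the finite-difference scheme, the time-to-arrival parameter, and the sample trail are assumptions for illustration, not the patent's calibration:

```python
import numpy as np

def estimated_miss(positions, dt, t_to_arrival):
    """Extrapolate the pixel trail to the projectile's arrival time using
    constant-acceleration kinematics, per (vertical, horizontal) axis."""
    p = np.asarray(positions, dtype=float)      # (N, 2) trail samples
    v = (p[-1] - p[-2]) / dt                    # latest pixel velocity
    v_prev = (p[-2] - p[-3]) / dt
    a = (v - v_prev) / dt                       # pixel acceleration
    return p[-1] + v * t_to_arrival + 0.5 * a * t_to_arrival**2

# A trail moving at constant pixel velocity (zero acceleration)
trail = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
miss = estimated_miss(trail, dt=1.0, t_to_arrival=3.0)
```

The resulting (vertical, horizontal) offsets would then be compared against calibrated miss-distance thresholds to decide whether the position is the intended target.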

18-10-2018 publication date

IMPROVEMENTS IN AND RELATING TO MISSILE TARGETING

Number: US20180299228A1
Author: NAFTEL Andrew James
Assignee: MBDA UK LIMITED

A method of targeting a missile. A plurality of images of a target, taken from a plurality of viewpoints, are received. Features in the images characteristic of the target are identified. Data representing the characteristic features are provided to the missile to enable the missile to identify, using the characteristic features, the target in images of the environment of the missile obtained from an imager included in the missile. 1. A method of targeting a missile, the method comprising: receiving a plurality of images of a target taken from a plurality of viewpoints; identifying in the images features characteristic of the target; providing data representing the characteristic features to the missile to enable the missile to identify, using the characteristic features, the target in images of the environment of the missile obtained from an imager included in the missile. 2. A method as claimed in claim 1, wherein the plurality of viewpoints are overlapping viewpoints. 3. A method as claimed in claim 1, wherein the characteristic features are regions of the target which in the image of the target provide a change in contrast greater than a selected threshold value. 4. A method as claimed in claim 1, wherein the characteristic features are identified using a scale-invariant feature transform algorithm. 5. A method as claimed in claim 1, wherein the identification of the characteristic features includes the step of generating rescaled versions of at least one of the images of the target. 6. A method as claimed in claim 5, wherein the identification of the characteristic features includes the step of smoothing the rescaled image versions. 7. A method as claimed in claim 6, wherein the identification of the characteristic features includes the step of calculating difference images between the smoothed, rescaled image versions. 8.
A method as claimed in claim 7 , wherein the identification of the characteristic features includes the step of finding extrema in the ...
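Claims 5-8 describe the difference-of-Gaussians (DoG) stage of scale-invariant feature detection: smooth the image at several scales, subtract adjacent smoothed versions, and find extrema in the resulting stack. A numpy-only sketch of that stage; the sigma ladder and test image are illustrative, and this is a generic SIFT-style DoG, not MBDA's implementation:

```python
import numpy as np

def gauss_smooth(img, sigma):
    """Separable Gaussian smoothing, kernel truncated at 3*sigma."""
    r = max(1, int(3 * sigma))
    ax = np.arange(-r, r + 1)
    k = np.exp(-ax**2 / (2 * sigma**2))
    k /= k.sum()
    p = np.pad(img, ((r, r), (0, 0)), mode='edge')       # vertical pass
    img = sum(k[i] * p[i:i + img.shape[0], :] for i in range(2 * r + 1))
    p = np.pad(img, ((0, 0), (r, r)), mode='edge')       # horizontal pass
    return sum(k[i] * p[:, i:i + img.shape[1]] for i in range(2 * r + 1))

def dog_extrema(img, sigmas=(1.0, 1.6, 2.6)):
    """Difference-of-Gaussian images and the location of the strongest
    absolute response across scales."""
    smoothed = [gauss_smooth(img, s) for s in sigmas]
    dogs = [b - a for a, b in zip(smoothed, smoothed[1:])]
    stack = np.abs(np.stack(dogs))
    return dogs, np.unravel_index(np.argmax(stack), stack.shape)

img = np.zeros((21, 21))
img[10, 10] = 10.0                 # an isolated blob-like point feature
dogs, (scale, row, col) = dog_extrema(img)
```

For an isolated blob the DoG response is extremal at the blob's center, which is what makes the detected features stable across viewpoint and scale changes.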

03-11-2016 publication date

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR INDICATING HOSTILE FIRE

Number: US20160321798A1
Assignee:

Systems, methods, and computer program products for identifying hostile fire. A characteristic of a fired projectile is detected using an optical system and the projectile's travel path in relation to a vehicle is determined. If the determined travel path of the projectile is within a predetermined distance from the vehicle, it is determined that the projectile is hostile towards the vehicle and a warning is output. 1. A system that is operative during the day and at night to determine whether a position is an intended target of a fired unguided energetic projectile , the system comprising:an infrared (IR) camera having a field of view (FOV) and a predetermined sensitivity sufficient to capture a heat signature of the fired unguided energetic projectile, pixels of said IR camera being operative to capture a portion of a trail of energy associated with the heat signature of the fired projectile; andan image processor operative to receive signals from said IR camera corresponding to the captured portion of the trail of energy, said image processor being operative to process the signals from said IR camera and to make a determination as to whether the position was the intended target of the fired projectile by analyzing a calculated miss distance of the fired projectile.2. The system according to claim 1 ,further comprising an alert system operatively coupled to said image processor to generate timely audible and visible indications that the position is the intended target of the fired projectile based on the determination by said image processor that the position was the intended target of the fired projectile.3. The system according to claim 1 ,wherein said IR camera is calibrated so as to set one or more miss distance thresholds indicative of hostile fire based on known velocities or velocity ranges of select known-to-be hostile projectiles and corresponding distance ranges of said projectiles.4. The system according to claim 1 , wherein the position is one of a ...

19-11-2015 publication date

Demosaicking System and Method for Color array Based Multi-Spectral Sensors

Number: US20150332434A1
Author: Kanaev Andrey V.

A computer-implemented demosaicking system and method that can receive an image (or many images that represent individual frames of a video) at a demosaicking processor from a multi-spectral band camera. The image can include four or more band images that each correspond to an unique spectral band obtained by the multi-spectral band camera. A clustering module can perform spectral clustering of the four or more band images to identify multiple clusters. For each of the plurality of clusters, a weights module can determine a cluster weight by computing correlations between each of the unique spectral bands in each cluster. A super-resolution module can perform super-resolution for each of the unique spectral bands by utilizing the cluster weights from the weights module. The super-resolution module can iteratively apply the super-resolution for each of the unique spectral bands and a value for each unique spectral band can be updated after each iteration.
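The weights module "determines a cluster weight by computing correlations between each of the unique spectral bands in each cluster." A minimal sketch of that correlation computation in numpy; the function name and synthetic bands are illustrative, and the full method additionally feeds these weights into iterative super-resolution, which is not shown:

```python
import numpy as np

def cluster_weights(bands):
    """Pairwise correlation matrix between spectral band images; bands
    that co-vary strongly receive high mutual weights."""
    flat = np.stack([np.asarray(b).ravel() for b in bands])
    return np.corrcoef(flat)

rng = np.random.default_rng(2)
b1 = rng.normal(size=(8, 8))
b2 = 2 * b1 + 1                    # band perfectly correlated with b1
b3 = rng.normal(size=(8, 8))       # independent band
w = cluster_weights([b1, b2, b3])
```

Strongly correlated bands (here b1 and b2) would be clustered together so their shared structure can guide the missing-pixel reconstruction of each.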

01-10-2020 publication date

SYSTEMS AND METHODS FOR MULTI-SIGNATURE COUNTERMEASURE TESTING

Number: US20200311950A1
Author: Ordway Matthew R.
Assignee:

Systems, methods, and computer-readable media are provided for an autonomous and fully automated method of validating multi-signature decoys that are configured to release infrared flares at multiple points after launch. In one aspect, a method includes capturing, using at least one image capturing device, raw image data of a launched decoy, the decoy having one or more segments configured to be released after launch and automatically processing the raw image data to (1) identify a release point for each of the one or more segments and (2) identify an infrared signature associated with each release point. The method further includes generating a visual display of the release point(s) of the one or more segments and the infrared signature(s) originating from the release point(s). 1. A method comprising: capturing, using at least one image capturing device, raw image data of a launched decoy, the decoy having one or more segments configured to be released after launch; automatically processing the raw image data to (1) identify a release point for each of the one or more segments and (2) identify an infrared signature associated with each release point; and generating a visual display of the release point(s) of the one or more segments and the infrared signature(s) originating from the release point(s). 2. The method of claim 1, wherein electronically processing the raw image data comprises: generating a radiance map sequence of the launched decoy based on the raw image data; filtering the radiance map to remove background noise to yield a filtered radiance map; and determining, using the filtered radiance map, a trajectory along which the launched decoy travels, wherein each release point is identified within a band around the trajectory. 3.
The method of claim 2 , wherein generating the radiance map comprises:for each of a plurality of integration times, generating a corresponding radiation map based on a corresponding portion of the raw image data; andforming the ...

17-11-2016 publication date

METHOD FOR DETECTING AND CLASSIFYING EVENTS OF A SCENE

Number: US20160335504A1
Assignee:

The subject of the invention is a method for detecting and classifying events of a scene by means of a single-pupil imaging system equipped with a VisNIR detector in the 0.6 μm-1.1 μm band and with an SWIR detector, which comprises steps of acquiring synchronized successive 2D VisNIR and SWIR images, of displaying the VisNIR images, and of processing these images, which consists in: comparing the SWIR images so as to determine, for each pixel, the variation in illumination from one SWIR image to another and the peak value of these SWIR illuminations; if this variation in SWIR illumination is greater than a threshold, then an event associated with said pixel is detected and: its date, its temporal shape and its duration are determined; in the VisNIR images, the coordinates are determined of the corresponding pixel for which: the variation in the illumination from one VisNIR image to another and the peak value of these VisNIR illuminations are calculated, and these variations in SWIR and VisNIR illumination and their peak values are compared so as to estimate a temperature of the event; the distance of the corresponding point of the scene is estimated so as to calculate the intensity of the event on the basis of the SWIR and VisNIR illuminations and on the basis of this distance; the total energy of the event is estimated on the basis of its temporal shape and of its intensity; the event is classified as a function of its duration, its temperature, its intensity and its energy; the previous steps are repeated for another pixel of the SWIR images. 1.
A method for detecting and classifying events of firing threats type of a scene by means of a single-pupil imaging system mounted on a mobile platform and equipped with several detectors , including a detector in the 0.6 μm-1.1 μm wavelength band termed the VisNIR detector and a detector in the 0.9 μm-1.7 μm wavelength band termed the SWIR detector , associated with a processing unit , which comprises a step of acquiring successive ...

24-11-2016 publication date

Target Locating Device and Methods

Number: US20160341543A1
Assignee:

A target location device has a video camera, a range finder, a self location module and an inertial measurement unit. The device provides a video display including video output of the camera and an object marker such as a cross-hair overlaid on the video output for aiming the camera on a reference point or on a target in the video output. The range finder determines a distance from the device to the target. The self location module identifies the geographic location of the device. The inertial measurement unit includes at least one gyro for providing outputs corresponding to the location of the reference point and the target. Video stabilization may be used to allow accurate placement of the object marker on the target or reference point. Automatic range finder firing and gyroscope error compensation are also provided. 1. A device comprising: a gyroscope; and a controller coupled to the gyroscope and configured to: receive a reference point gyroscope output from the gyroscope corresponding to a position of a reference point; receive a target gyroscope output from the gyroscope corresponding to a position of a target; receive an updated reference point gyroscope output from the gyroscope corresponding to the position of the reference point; calculate a gyroscope error based on the reference point gyroscope output and the updated reference point gyroscope output; and reset the gyroscope based on the calculated gyroscope error to compensate for drift. 2. The device of claim 1, wherein the controller is further configured to calculate a geographic position of the target based at least in part on the calculated gyroscope error. 3. The device of claim 1, further comprising: a video camera; a display for providing a video output of the video camera and a cursor overlaid on the video output for aiming the video camera on the reference point and on the target in the video output by positioning the cursor on the reference point and the target, respectively; a range ...
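The drift-compensation idea in claim 1 is simple arithmetic: re-sight the same fixed reference point, take the difference between the two gyroscope readings as the accumulated error, and subtract it from subsequent readings (or reset the gyro by it). A sketch; the function names, the scalar one-axis model, and the example values are illustrative, not from the patent:

```python
def gyroscope_error(ref_initial, ref_updated):
    """Claimed error term: difference between the original and updated
    sightings of the same fixed reference point (pure drift)."""
    return ref_updated - ref_initial

def corrected_target_bearing(target_reading, error):
    """Remove the accumulated drift from a target gyroscope reading."""
    return target_reading - error

# Reference point re-sighted after the gyro drifted +0.5 deg
err = gyroscope_error(30.0, 30.5)
rate = err / 10.0                  # drift rate over a 10 s interval
bearing = corrected_target_bearing(75.5, err)
```

Because the reference point has not moved, any change in its reading is attributable to drift, which also supports claim 2's drift-corrected target geolocation.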

24-11-2016 publication date

Aerial video based point, distance, and velocity real-time measurement system

Number: US20160344981A1
Author: Gary A. Lunt
Assignee: US Department of Navy

A method of determining geo-reference data for a portion of a measurement area includes providing a monitoring assembly comprising a ground station, providing an imaging assembly comprising an imaging device with a lens operably coupled to an aerial device, hovering the aerial device over a measurement area, capturing at least one image of the measurement area within the imaging device, transmitting the at least one image to the ground station using a data transmitting assembly, and scaling the at least one image to determine the geo-reference data for the portion of the measurement area by calculating a size of a field-of-view (FOV) of the lens based on a distance between the imaging device and the measurement area.
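The scaling step described above can be sketched directly: the ground footprint of the lens follows from the FOV angle and the hover distance, and dividing by the image width gives the metres-per-pixel scale. A minimal Python example, assuming a symmetric FOV and a nadir-pointing camera (function names are illustrative):

```python
import math

def footprint_size_m(fov_deg, distance_m):
    """Ground size covered by the lens field of view when the imaging
    device hovers at distance_m above the measurement area."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

def ground_resolution_m_per_px(fov_deg, distance_m, image_width_px):
    """Scale factor converting pixel measurements in the captured image
    into metres on the ground (the geo-reference scaling)."""
    return footprint_size_m(fov_deg, distance_m) / image_width_px
```

For example, a 90-degree FOV at 100 m covers a 200 m footprint; at 1000 px across, each pixel spans 0.2 m on the ground.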

17-12-2015 publication date

EMITTER TRACKING SYSTEM

Number: US20150363646A1
Assignee: FLIR SYSTEMS, INC.

An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images. 1. An emitter tracking system, comprising: a signal detection module configured to detect an emitter signal generated by an emitter; an imaging module configured to capture images of the emitter upon receiving an activation signal; and a processor configured to receive the emitter signal from the signal detection module, analyze the emitter signal, and transmit the activation signal to the imaging module only if the emitter signal includes a predetermined signature. 2. The system of claim 1, wherein the processor is configured to synchronize the imaging module with the emitter signal. 3. The system of claim 1, wherein the processor is configured to cause the imaging module to capture sequential images of the emitter as the emitter alternates between an emissive state and a non-emissive state. 4. The system of claim 1, wherein the processor is configured to construct a first subtracted image by electronically subtracting a first image of the emitter in a non-emissive state from a second image of the emitter in an emissive state. 5. The system of claim 4, wherein the processor is configured to construct a second subtracted image by electronically subtracting a third image of the emitter in a non-emissive state from a fourth image of the emitter in an emissive state, and to construct a combined subtracted image by electronically adding the first and second subtracted images. 6. The system of claim 1, wherein the signal detection module includes a filter configured to filter out electromagnetic radiation having wavelengths outside a ...
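The image-subtraction scheme in claims 4 and 5 cancels static background and accumulates the flashing emitter across frame pairs. A NumPy sketch, where widening uint8 frames to int32 before subtracting is an implementation assumption to avoid wrap-around:

```python
import numpy as np

def subtracted_image(emissive, non_emissive):
    """Subtract a non-emissive frame from an emissive frame so that static
    background cancels and the flashing emitter remains (claim 4)."""
    return emissive.astype(np.int32) - non_emissive.astype(np.int32)

def combined_subtracted_image(frame_pairs):
    """Electronically add several subtracted images to raise the
    signal-to-noise ratio further (claim 5)."""
    return sum(subtracted_image(on, off) for on, off in frame_pairs)
```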

15-12-2016 publication date

SYSTEM AND METHOD FOR VISUAL CORRELATION OF DIGITAL IMAGES

Number: US20160364876A1
Assignee:

A system and method for determining if a first image and a second image are correlated images includes partitioning a first image and a second image into a plurality of corresponding pixel partitions, calculating an average luminance value for each of the plurality of pixel partitions, determining if each of the plurality of pixel partitions of the first image is correlated with each of the corresponding plurality of pixel partitions of the second image, calculating a percentage of correlated pixel partitions of the first image and the corresponding plurality of pixel partitions of the second image, and determining that the first image and the second image are correlated images if the percentage of correlated pixel partitions exceeds a predetermined pixel partition correlation threshold. The objective metric of the present invention determines whether two static rendered images are correlated enough to be undetectable by a human observer. 1. A method for determining if a first image and a second image are correlated images, the method comprising: partitioning a first image, comprising a plurality of pixels, into a plurality of corresponding pixel partitions, each of the plurality of pixel partitions comprising a predetermined number of pixels; partitioning a second image, comprising a plurality of pixels, into a plurality of corresponding pixel partitions, each of the plurality of pixel partitions comprising a predetermined number of pixels; calculating an average luminance value for each of the plurality of pixel partitions of the first image; calculating an average luminance value for each of the plurality of pixel partitions of the second image; determining if each of the plurality of pixel partitions of the first image is correlated with each of the corresponding plurality of pixel partitions of the second image; calculating a percentage of correlated pixel partitions of the plurality of pixel partitions of the first image and the corresponding plurality of pixel ...
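The partition-and-compare metric can be sketched in a few lines of NumPy. The tolerance and threshold values below are illustrative assumptions; the patent leaves them as predetermined parameters:

```python
import numpy as np

def partition_means(img, p):
    """Average luminance of each p-by-p pixel partition (edge remainders
    are cropped for simplicity)."""
    h, w = img.shape
    img = img[:h - h % p, :w - w % p].astype(float)
    return img.reshape(h // p, p, w // p, p).mean(axis=(1, 3))

def images_correlated(img1, img2, p=8, lum_tol=2.0, frac_threshold=0.95):
    """Two images count as correlated when the fraction of partitions whose
    average luminance differs by at most lum_tol exceeds frac_threshold."""
    m1, m2 = partition_means(img1, p), partition_means(img2, p)
    frac = float(np.mean(np.abs(m1 - m2) <= lum_tol))
    return frac >= frac_threshold
```

Averaging over partitions rather than comparing raw pixels is what makes the metric tolerant of imperceptible per-pixel noise while still catching visible luminance shifts.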

12-11-2020 publication date

SYSTEMS, METHODS, APPARATUSES, AND DEVICES FOR IDENTIFYING, TRACKING, AND MANAGING UNMANNED AERIAL VEHICLES

Number: US20200356783A1
Assignee:

Systems, methods, and apparatus for identifying and tracking UAVs including a plurality of sensors operatively connected over a network to a configuration of software and/or hardware. Generally, the plurality of sensors monitors a particular environment and transmits the sensor data to the configuration of software and/or hardware. The data from each individual sensor can be directed towards a process configured to best determine if a UAV is present or approaching the monitored environment. The system generally allows a detected UAV to be tracked, which may allow the system or a user of the system to predict how the UAV will continue to behave over time. The sensor information as well as the results generated from the systems and methods may be stored in one or more databases in order to improve the continued identifying and tracking of UAVs. 1. A method for identifying unmanned aerial vehicles (UAVs) in a particular air space via the use of one or more video sensors, comprising the steps of: receiving a video frame from a video feed of the particular air space, wherein the video feed was captured by a particular video sensor proximate to the particular air space; identifying at least one region of interest (ROI) in the video frame, the at least one ROI comprising an image of an object that may be a UAV flying within the particular air space; performing an object classification process with respect to the at least one ROI to determine whether the object in the image is a UAV, the object classification process comprising the steps of: extracting image data from the image of the at least one ROI; comparing the extracted image data to prior image data of objects known to be UAVs to determine a probability that the object in the image is a UAV; and upon determination that the probability that the object in the image is a UAV exceeds a predetermined threshold, denoting the object in the image as a UAV. This application is a continuation of U.S.
application ...
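The claimed classification step (extract ROI features, compare against prior known-UAV data, threshold a probability) can be illustrated with a toy nearest-neighbour stand-in; the patent does not specify the actual classifier, and both the pseudo-probability mapping and the threshold here are assumptions:

```python
import numpy as np

def classify_roi(roi_features, known_uav_features, prob_threshold=0.8):
    """Toy stand-in for the claimed classification: compare extracted ROI
    features against prior known-UAV features and map the best match
    distance to a pseudo-probability; denote a UAV when it exceeds the
    predetermined threshold."""
    dists = [np.linalg.norm(np.asarray(roi_features) - np.asarray(k))
             for k in known_uav_features]
    prob = 1.0 / (1.0 + min(dists))
    return prob > prob_threshold, prob
```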

28-12-2011 publication date

Method for assessing the horizontal speed of a drone, particularly of a drone capable of hovering on automatic pilot

Number: EP2400460A1
Author: Thomas Derbanne
Assignee: PARROT SA

The method involves periodically updating a multi-resolution, image-pyramid representation that models the captured scene at different, successively decreasing resolutions. A texturing parameter representative of the level of micro-contrasts in the captured scene is obtained, together with an approximation of the drone's (10) horizontal translation speed. A battery of predetermined criteria is applied to the texturing parameter and to the speed approximation; when the criteria are met, the optical-flow algorithm is switched to a corner-detector algorithm for estimating the differential movement of the scene from one image to the next.
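The switching decision can be sketched as a simple criteria check. The threshold values below are illustrative placeholders, not the patent's actual criteria battery; the idea is that corner tracking pays off only when the scene is textured enough and the approximate speed is low:

```python
def choose_estimator(texture_level, speed_estimate,
                     texture_min=0.15, speed_max=0.25):
    """Simplified battery of criteria: switch to corner-detector tracking
    when the scene shows enough micro-contrast and the approximate speed
    is low; otherwise stay with the optical-flow algorithm."""
    if texture_level >= texture_min and speed_estimate <= speed_max:
        return "corner_detector"
    return "optical_flow"
```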

13-12-2017 publication date

External view and/or sighting system of equipment for military land vehicles and ships of naval forces

Number: RU2638509C2
Assignee: Selex ES S.p.A.

The invention relates to an external view and/or weapon sighting system intended to be installed on board a military land vehicle and/or a naval ship. The system comprises two sensors configured to capture video streams of the same scene, each in a respective spectral band; an electronic processing unit configured to insert a corresponding aiming reticle into the images of each captured video stream, thereby generating a corresponding pre-processed video stream, and to process the two pre-processed video streams; and a user interface configured to display the video stream received from the electronic processing unit. The system is characterized in that the electronic processing unit processes the two pre-processed video streams by means of image-enhancement and picture-in-picture functionality, thereby generating first and second enhanced video streams. 5 independent and 5 dependent claims, 4 figures.
...

20-05-2020 publication date

Variable measurement device-dependent camera setup and calibration thereof

Number: DE102018129143A1

A method (30) and a system (10) for determining a 6-DOF pose of an object (12) in space use at least one marker (14) attached to the object (12) and a plurality of cameras (16). The viewing angles of the cameras (16) are directed towards a common measuring space in which the object (12) is positioned. At least one camera (16a) of the cameras (16) is movable, so that the movable camera (16a) can be adjusted with respect to the object. At least one of the cameras (16) captures an image of the marker (14) attached to the object (12). Based on the at least one captured image, spatial parameters that represent the 3D position or the 6-DOF pose of the marker (14), and consequently the 6-DOF pose of the object (12) in space, can be determined.

17-06-2021 publication date

Variable measurement object-dependent camera structure and calibration thereof

Number: DE102018129143B4

A method for determining a 6-DOF pose of an object (12) in space, the 6-DOF pose defining a 3D position and a 3D orientation of the object (12) in space, the method comprising the following steps: arranging (32) a plurality of cameras (16) with individual 6-DOF poses in space, each camera (16) having an individual viewing angle, the individual viewing angles being directed towards a common measuring space (CMS), at least one of the cameras (16a) being movable; attaching (35) a marker formation (14) to the object (12); positioning (36) the object (12) with the marker formation (14) within the CMS; adjusting (38) the CMS with respect to the object (12) by changing the 6-DOF pose of at least one movable camera (16a) in space; capturing (42) a respective image of the marker formation (14) attached to the object (12) by at least one of the cameras (16); determining (44) spatial parameters of the marker formation (14) based on the captured image; and determining (46) a 6-DOF pose of the object (12) in space based on the spatial parameters of the marker formation (14), wherein the object (12) is a measuring device (68) having a 3D sensor (70) for measuring a measurement object (72).
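Once the marker formation's points have been located in 3D, recovering the object's 6-DOF pose reduces to a rigid alignment between the marker coordinates known in the object frame and their measured positions. A generic Kabsch/SVD pose solver sketches this final step; it is a standard technique, not the patent's specific procedure:

```python
import numpy as np

def pose_from_markers(marker_points_object, marker_points_measured):
    """Rigid 6-DOF pose (R, t) such that measured ≈ R @ object + t,
    computed with the Kabsch/SVD method on matched marker points."""
    P = np.asarray(marker_points_object, float)
    Q = np.asarray(marker_points_measured, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```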

28-04-2022 publication date

Remote Controlled Weapon System in Moving Platform and Tracking Method of Moving Target thereof

Number: KR20220052765A
Assignee: Hanwha Defense Co., Ltd.

According to one embodiment of the invention, a remote weapon system mounted on a moving platform comprises: a first attitude calculation unit that computes a first pixel shift corresponding to the change in camera attitude during the time interval between a first image and a second image input after the first image; a second attitude calculation unit that computes a second pixel shift corresponding to the control command that changes the camera attitude in order to align the moving target detected in the second image with the aiming point; and an ROI control unit that computes a third pixel shift, corresponding to camera vibration, based on the first and second pixel shifts, and predicts the position of the region of interest set on the target in the second image based on the third pixel shift.
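The abstract decomposes inter-frame pixel motion into attitude-induced, command-induced, and vibration components. One plausible reading, sketched below, is that the vibration component is whatever part of the measured motion is explained neither by the attitude change nor by the aiming command; this interpretation is an assumption, since the abstract only says the third shift is computed "based on" the first two:

```python
def vibration_shift(measured_shift_px, attitude_shift_px, command_shift_px):
    """Residual pixel motion attributed to camera vibration: measured
    inter-frame motion minus the attitude-induced and command-induced
    components (all 2D pixel shifts)."""
    return (measured_shift_px[0] - attitude_shift_px[0] - command_shift_px[0],
            measured_shift_px[1] - attitude_shift_px[1] - command_shift_px[1])

def predict_roi(roi_xy, vibration_shift_px):
    """Shift the region of interest by the vibration estimate so the
    tracker keeps the moving target inside the ROI."""
    return (roi_xy[0] + vibration_shift_px[0],
            roi_xy[1] + vibration_shift_px[1])
```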

19-06-2008 publication date

Stereo-Based Visual Odometry Method and System

Number: US20080144925A1
Assignee: Sarnoff Corp

A method for estimating pose from a sequence of images, which includes the steps of detecting at least three feature points in both the left image and right image of a first pair of stereo images at a first point in time; matching the at least three feature points in the left image to the at least three feature points in the right image to obtain at least three two-dimensional feature correspondences; calculating the three-dimensional coordinates of the at least three two-dimensional feature correspondences to obtain at least three three-dimensional reference feature points; tracking the at least three feature points in one of the left image and right image of a second pair of stereo images at a second point in time different from the first point in time to obtain at least three two-dimensional reference feature points; and calculating a pose based on the at least three three-dimensional reference feature points and its corresponding two-dimensional reference feature points in the stereo images. The pose is found by minimizing projection residuals of a set of three-dimensional reference feature points in an image plane.
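The step of calculating 3D coordinates from 2D feature correspondences can be sketched for a rectified stereo pair, where depth follows from disparity and the point is back-projected through the pinhole model. The pose-minimization stage is omitted, and the camera parameters below are illustrative placeholders:

```python
def triangulate_rectified(u_left, v_left, u_right, f_px, baseline_m, cx, cy):
    """Recover the 3D coordinates of a matched feature from a rectified
    stereo pair: depth from disparity, then pinhole back-projection.
    (u_left, v_left)/(u_right, ...) are the matched pixel coordinates,
    f_px the focal length in pixels, baseline_m the stereo baseline."""
    disparity = u_left - u_right          # pixels; positive for a rectified pair
    z = f_px * baseline_m / disparity     # depth along the optical axis
    x = (u_left - cx) * z / f_px
    y = (v_left - cy) * z / f_px
    return x, y, z
```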

29-06-2020 publication date

Method of artificial intelligence target learning and target identification for next generation naval ship using digital twin

Number: KR102127657B1
Author: Kim Beom-jun
Assignee: Hanwha Systems Co., Ltd.

Disclosed is a method of artificial intelligence (AI) target learning and target identification for a next-generation naval ship using a digital twin. The method comprises: a step in which a target image input module receives a target image; a step in which a target three-dimensional rendering module applies a simulated sea environment to the received target image to perform three-dimensional rendering; and a step in which the target three-dimensional rendering module outputs the rendered three-dimensional target image. By applying various simulated sea environments to a handful of images of a target, the method builds a database of three-dimensional target images, making it easy to procure the thousands of images under varied sea conditions required for AI-based recognition of a target naval ship. The three-dimensional target images are then used for AI-based learning, so that targets in sensor-system imagery are recognized accurately and enemy targets identified automatically, enabling a commanding officer to assess and respond quickly.

14-10-2010 publication date

Delay Compensated Feature Target System

Number: US20100259614A1
Author: Ji Chen
Assignee: Honeywell International Inc

Apparatus and methods are provided for acquiring a target. A vehicle generates video frames that are sent to a control. The vehicle stores a subset of the frames. The vehicle may receive a lock message from the control that identifies a feature. Based on the information in the lock message, the vehicle may find a stored “fast-forward” frame with the feature, locate the feature on the fast-forward frame, determine a trajectory of the feature, and then determine a current location of the feature in a current frame. Once the vehicle has determined the current location of the feature, the vehicle may send a target acquired message to the control. An estimate of communication latency between the control and the vehicle may be determined. Then, the fast-forward frame may be determined based on a time of arrival of the lock message and the latency.
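The "fast-forward" frame lookup described above (pick the stored frame whose capture time is the lock message's arrival time minus the estimated latency) can be sketched as a timestamp search; the nearest-neighbour choice between adjacent frames is an assumption:

```python
import bisect

def select_fast_forward_frame(frame_timestamps, lock_arrival_time,
                              latency_estimate):
    """Index of the stored frame the control was most likely viewing when
    it sent the lock message: capture time ~ arrival time minus the
    estimated one-way communication latency. frame_timestamps must be
    sorted ascending."""
    capture_time = lock_arrival_time - latency_estimate
    i = bisect.bisect_left(frame_timestamps, capture_time)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    # choose the nearer of the two neighbouring frames
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    return i if after - capture_time < capture_time - before else i - 1
```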

11-05-2018 publication date

Small target detection method based on morphological filtering and SVD

Number: CN105469428B
Author: Wang Min
Assignee: Hohai University (HHU)

The invention discloses a small, dim target detection method based on morphological filtering and SVD. Step 1: perform background suppression and noise removal with a morphological-filtering target-enhancement algorithm to obtain a pre-processed image sequence. Step 2: read in the resulting sequence of N_max images and estimate the number of frames, obtaining the number N that needs to be processed. Step 3: merge the N+1 images into two-dimensional data, compute its autocorrelation matrix, and apply SVD to that matrix. Step 4: select suitable eigenvectors to reconstruct the image sequence, obtaining a new feature image sequence. Step 5: apply threshold segmentation to the reconstructed sequence to separate the positions of the dim small targets in the original images from the background. Step 6: correct each image in the sequence. Step 7: substitute N for N_max and repeat steps 2-7. The method effectively combines morphological filtering with singular value decomposition to detect dim small targets in video; computation time is short, detection efficiency is high, and accuracy and robustness are good.
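Steps 3-5 (SVD of the stacked frames, reconstruction from selected components, thresholding) can be approximated with a plain low-rank background removal in NumPy. This is a simplified stand-in for the autocorrelation-matrix formulation, and the morphological pre-filtering of step 1 is omitted; `rank=1` is an illustrative choice:

```python
import numpy as np

def svd_background_suppression(frames, rank=1):
    """Stack frames as columns, zero the dominant singular components
    (the slowly varying background), and return the residual frames in
    which small moving targets stand out for thresholding."""
    shape = frames[0].shape
    X = np.stack([f.ravel().astype(float) for f in frames], axis=1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[:rank] = 0.0                      # remove the background subspace
    residual = (U * s) @ Vt
    return [residual[:, i].reshape(shape) for i in range(len(frames))]
```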

28-12-2011 publication date

Method for assessing the horizontal speed of a drone, particularly of a drone capable of hovering on automatic pilot

Number: CN102298070A
Author: T. Derbanne
Assignee: PARROT SA

The invention relates to a method for estimating the horizontal speed of a drone, in particular a drone capable of hovering under autopilot. The method operates by estimating the differential motion of the scene captured by a vertically oriented camera. The estimation involves periodically and continuously updating a multi-resolution, image-pyramid representation that models a given captured image of the scene at different, successively decreasing resolutions. For each newly captured image, an iterative optical-flow algorithm is applied to this representation. The method further obtains, from the data produced by the optical-flow algorithm, at least one texturing parameter representative of the level of micro-contrast in the captured scene, together with an approximation of the speed, and then applies a battery of predetermined criteria to these parameters. If the criteria are satisfied, the system switches from the optical-flow algorithm to a corner-detector algorithm.

16-08-2016 publication date

Method and apparatus for establishing a north reference for inertial measurement units using scene correlation

Number: US9418430B2

A scene correlation-based target system and related methods are provided. A reference image depicts a remotely-positioned object having identifiable characteristics, wherein a reference directional vector is established relative to the reference image. A target image of a general vicinity of the remotely-positioned object has an unknown directional vector, the target image having at least a portion of the identifiable characteristics. An inertial measuring unit has a scene correlation system, wherein the scene correlation system matches the portion of the identifiable characteristics of the target image with the identifiable characteristics of the reference image, wherein a slew angle between the reference image and the target image is calculated. A target image directional vector is derived from the calculated slew angle and the reference directional vector.
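Deriving the target image's directional vector from the reference vector and the slew angle is simple geometry once scene correlation has produced a pixel offset between the matched images. A sketch under a small-angle assumption (one pixel subtends fov/width degrees); the function names are illustrative:

```python
def slew_angle_deg(pixel_offset_px, fov_deg, image_width_px):
    """Slew angle between reference and target images: the horizontal
    pixel offset found by scene correlation, scaled by the angular size
    of one pixel (small-angle approximation)."""
    return pixel_offset_px * fov_deg / image_width_px

def target_azimuth_deg(reference_azimuth_deg, pixel_offset_px,
                       fov_deg, image_width_px):
    """Directional vector of the target image: the known reference
    direction plus the slew angle recovered by scene correlation."""
    return (reference_azimuth_deg
            + slew_angle_deg(pixel_offset_px, fov_deg, image_width_px)) % 360.0
```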

23-12-2011 publication date

METHOD FOR EVALUATING THE HORIZONTAL SPEED OF A DRONE, IN PARTICULAR A DRONE SUITABLE FOR AUTOPILOT STATIONARY FLIGHT

Number: FR2961601A1
Author: Thomas Derbanne
Assignee: PARROT SA

The method operates by estimating the differential displacement of the scene captured by a vertically aimed camera. The estimation provides for the periodic and continuous updating of a pyramid-of-images multi-resolution representation, modeling the same captured image of the scene at different, successively decreasing resolutions. For each new captured image, an iterative optical-flow algorithm is applied to this representation. The method also provides for obtaining, from the data produced by the optical-flow algorithm (106), at least one texturing parameter representative of the level of the micro-contrasts of the captured scene, as well as an approximation of the speed; a battery of predetermined criteria (108) is then applied to these parameters. If this battery of criteria is satisfied, the system switches (110) from the optical-flow algorithm to a corner-detector algorithm.

14-08-2015 publication date

METHOD FOR DETECTING AND CLASSIFYING EVENTS OF A SCENE

Number: FR3017480A1
Assignee: Thales SA

The invention relates to a method for detecting and classifying events of a scene by means of a single-pupil imaging system equipped with a VisPIR detector in the 0.6 µm-1.1 µm band and a SWIR detector. The method comprises steps of acquiring successive synchronized VisPIR and SWIR 2D images, displaying the VisPIR images, and processing these images, which consists in: comparing the SWIR images to determine, for each pixel, the variation in irradiance from one SWIR image to the next and the peak value of these SWIR irradiances; if this SWIR irradiance variation exceeds a threshold, an event associated with that pixel is detected, and: its date, temporal shape and duration are determined; the coordinates of the corresponding pixel are determined in the VisPIR images, for which the variation in irradiance from one VisPIR image to the next and the peak value of these VisPIR irradiances are computed, and the SWIR and VisPIR irradiance variations and their peak values are compared to estimate a temperature of the event; the distance of the corresponding point of the scene is estimated in order to compute the intensity of the event from the SWIR and VisPIR irradiances and from this distance; the total energy of the event is estimated from its temporal shape and its intensity; the event is classified according to its duration, temperature, intensity and energy; and the preceding steps are repeated for another pixel of the SWIR images.

28-01-2022 publication date

Method and device for predicting the state of a moving target

Number: FR3112873A1
Assignee: Thales SA

Embodiments of the invention provide a method for predicting the state of a moving target (101), implemented in a surveillance system (100) comprising an optronic sight (104) and a projectile-firing device (105) configured to fire one or more projectiles according to a given firing sequence. The prediction method comprises the steps of: receiving a plurality of raw measurements; selecting from the raw measurements one or more valid raw measurements according to the firing sequence and to one or more error parameters; estimating, from the valid measurements and from the firing sequence, the current kinematics of the moving target (101) and a likely dynamic movement model; determining a space-time rendezvous point; and calculating a plurality of firing parameters making it possible to intercept the moving target (101). Figure for the abstract: Fig. 1A
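For a constant-velocity target and a projectile of fixed speed, the space-time rendezvous point reduces to a quadratic in time. A minimal 2D sketch, ignoring ballistics and drag (a deliberate simplification of the patent's firing-parameter computation):

```python
import math

def intercept_time(target_pos, target_vel, projectile_speed):
    """Earliest positive t solving |p + v*t| = s*t, i.e. the projectile
    (speed s, fired from the origin) meets the constant-velocity target."""
    px, py = target_pos
    vx, vy = target_vel
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                      # target speed equals projectile speed
        return -c / b if b < 0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                          # no intercept possible
    roots = [(-b - math.sqrt(disc)) / (2.0 * a),
             (-b + math.sqrt(disc)) / (2.0 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None

def rendezvous_point(target_pos, target_vel, projectile_speed):
    """Space-time rendezvous (x, y, t), or None when unreachable."""
    t = intercept_time(target_pos, target_vel, projectile_speed)
    if t is None:
        return None
    return (target_pos[0] + target_vel[0] * t,
            target_pos[1] + target_vel[1] * t, t)
```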

29-06-2012 publication date

METHOD OF SUPPRESSING A COCKPIT MASK AND ASSOCIATED HELMET VISUALIZATION SYSTEM

Number: FR2969792A1
Assignee: Thales SA

The general field of the invention relates to binocular helmet-mounted display devices worn by aircraft pilots. In night use, one of the disadvantages of this type of device is that the cockpit uprights introduce large visual masks into the field of the optical sensors. The method according to the invention removes these masks from the images presented to the pilot by graphically processing the binocular images. It relies on the fact that, owing to parallax, the uprights (M) occupy different positions in the two images of the right and left sensors (CBNL). Comparing the two images makes it possible to locate these masks, then to remove them, and finally to replace them with parts of images of the external landscape.

01-01-2021 publication date

PASSIVE TELEMETRY METHOD AND DEVICE BY IMAGE PROCESSING

Number: FR3097975A1
Author: Paul DORBESSAN
Assignee: MBDA France SAS

Method and device for passive telemetry by image processing. The device (1), intended to estimate the distance between an observer (2) and a target (4) using at least one image generated by a digital image generator (6) from the position (P1) of the observer (2), comprises a detection and identification unit configured to detect and identify a target (4) in the image from at least a predetermined list of known targets; an orientation-estimation unit configured to determine the orientation of the target identified in the image; and a distance-estimation unit configured to measure on the image the length of at least one characteristic segment of the target and to compute the distance from said measured length and from the actual length of the segment on the target, taking into account the orientation of the target and the spatial resolution of the digital image generator (6). Figure for the abstract: Fig 2
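The ranging computation can be sketched under a small-angle assumption: the segment's apparent angular size is its pixel length times the instantaneous field of view (the angular size of one pixel, i.e. the generator's spatial resolution), and the known real length is foreshortened by the target's orientation. The cosine foreshortening model is an illustrative assumption:

```python
import math

def estimate_range_m(real_length_m, apparent_length_px, ifov_rad,
                     orientation_deg=0.0):
    """Passive range estimate: projected real length of the characteristic
    segment divided by its apparent angular size. ifov_rad is the angular
    size of one pixel; orientation_deg is the target's aspect angle
    (0 = segment seen broadside)."""
    projected_length_m = real_length_m * math.cos(math.radians(orientation_deg))
    return projected_length_m / (apparent_length_px * ifov_rad)
```

For instance, a 10 m segment spanning 100 px at 0.1 mrad per pixel puts the target at 1000 m; seen at a 60-degree aspect, the same measurement implies 500 m.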

14-07-2017 publication date

METHOD FOR VISUALIZING A MULTISPECTRAL IMAGE

Number: FR3037428B1
Assignee: Sagem Defense Securite SA

16-12-2016 publication date

METHOD FOR VISUALIZING A MULTISPECTRAL IMAGE

Number: FR3037428A1
Assignee: Sagem Defense Securite SA

A method for viewing a multispectral image uses several target areas (T1, T2, T3, T4) selected in an analysis window (FA), and a Fisher projection is determined separately for each target area in order to optimize a contrast of the multispectral image in the analysis window. Contrast-enhanced elementary images are then obtained for all the target areas in the same analysis window; a synthesis image is then constructed from the elementary images and displayed on an image display unit. The invention improves the visualization of the multispectral image, including for particular configurations of the scene content captured in the multispectral image.
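A Fisher projection for contrast enhancement is the classical linear discriminant between target and background spectra: project each multispectral pixel onto w ∝ Sw⁻¹(m_t − m_b), where Sw is the within-class scatter. A NumPy sketch of that per-target-area computation (the regularization term is an implementation assumption to keep the solve well-posed):

```python
import numpy as np

def fisher_projection(target_pixels, background_pixels, reg=1e-6):
    """Fisher linear discriminant between target and background spectra
    (rows = pixels, columns = spectral bands): w ~ Sw^{-1} (m_t - m_b),
    returned unit-normalized."""
    m_t = target_pixels.mean(axis=0)
    m_b = background_pixels.mean(axis=0)
    s_w = (np.cov(target_pixels, rowvar=False)
           + np.cov(background_pixels, rowvar=False))
    w = np.linalg.solve(s_w + reg * np.eye(s_w.shape[0]), m_t - m_b)
    return w / np.linalg.norm(w)

def enhanced_band(multispectral_pixels, w):
    """Project each multispectral pixel onto w, yielding a single
    contrast-enhanced elementary image band."""
    return multispectral_pixels @ w
```

Computing one w per target area, as the abstract describes, lets each elementary image maximize the contrast of its own target against the local background before the synthesis image is assembled.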
