Total found: 1722. Displayed: 197.
29-09-2017 publication date

METHOD, DEVICE AND SERVER FOR DETERMINING AN IMAGE SHOOTING SCENE

Number: RU2631994C1
Assignee: XIAOMI INC. (CN)

The invention relates to determining the shooting scene of an image. The technical result is improved accuracy of image classification. In the method, the gallery of a user terminal is obtained; the images are identified and labelled; a training sample set is obtained; each of a plurality of training image sequences is input; the feature coefficients between the hidden-node levels of the initial shooting-scene determination model are trained; a test sample set is obtained; the test images are identified; the classification accuracy of the shooting-scene determination model is determined; if the classification accuracy is below a preset threshold, the following are performed: updating the training sample set; training, according to the updated training sample set, the feature coefficients between the corresponding hidden-node levels of the shooting-scene determination model; iterating the update of the shooting-scene determination model; performing ...
20-07-2009 publication date

METHOD, SYSTEM, DIGITAL CAMERA AND ASIC PROVIDING GEOMETRIC IMAGE TRANSFORMATION BASED ON TEXT LINE SEARCHING

Number: RU2007149518A

... 1. A method for the geometric transformation of a deformed image containing text by searching for text lines in the image, the method comprising the steps of: a) performing an initial analysis to estimate whether the image contains enough text-like structures to carry out the transformation; b) identifying connected image elements that likely form characters or words, and examining the likely characters or words to identify the direction of each one, reflecting the direction of the text lines, guide lines or similar elements that constitute the text-line direction at each of the corresponding positions in the image containing each of the identified connected image elements forming characters or words; c) combining the identified directions of neighbouring identified connected image elements, thereby identifying the text lines, guide lines or similar elements constituting ...
12-12-2003 publication date

Method for transmission of information by means of a camera

Number: AU2003254539A8
04-01-2018 publication date

Edge-aware bilateral image processing

Number: AU2016349518A1

Example embodiments may allow for efficient, edge-preserving filtering, upsampling, or other processing of image data with respect to a reference image. A cost-minimization problem to generate an output image from the input array is mapped onto regularly-spaced vertices in a multidimensional vertex space. This mapping is based on an association between pixels of the reference image and the vertices, and between elements of the input array and the pixels of the reference image. The problem is then solved to determine vertex disparity values for each of the vertices. Pixels of the output image can be determined based on the determined vertex disparity values for the respective one or more vertices associated with each of the pixels. This fast, efficient image processing method can be used to enable edge-preserving image upsampling, image colorization, semantic segmentation of image contents, image filtering or de-noising, or other applications.
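The pixel-to-vertex mapping in the abstract above can be illustrated by a much-simplified bilateral grid: pixels are splatted onto a coarse lattice indexed by position and reference intensity, values are averaged per vertex, and each output pixel is sliced back from its vertex. This is a minimal sketch under that reading, not the patent's actual solver; the function and parameter names are illustrative, and hard nearest-vertex assignment replaces the patent's cost minimization.

```python
import numpy as np

def bilateral_grid_filter(image, reference, spatial_sigma=4, range_sigma=0.1):
    """Edge-aware filtering sketch: splat values onto a coarse grid of
    vertices indexed by (y, x, reference intensity), average per vertex,
    then slice each pixel's value back from its vertex.
    Assumes `reference` holds non-negative intensities."""
    h, w = reference.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Vertex coordinates: quantized spatial and range dimensions.
    vy = (ys // spatial_sigma).ravel()
    vx = (xs // spatial_sigma).ravel()
    vr = (reference / range_sigma).astype(int).ravel()
    dims = (vy.max() + 1, vx.max() + 1, vr.max() + 1)
    # Flatten the 3-D vertex coordinates to single vertex ids.
    ids = np.ravel_multi_index((vy, vx, vr), dims)
    # Splat: accumulate values and counts per vertex.
    sums = np.bincount(ids, weights=image.ravel(), minlength=np.prod(dims))
    cnts = np.bincount(ids, minlength=np.prod(dims))
    vertex_vals = np.where(cnts > 0, sums / np.maximum(cnts, 1), 0.0)
    # Slice: read each pixel's value back from its vertex.
    return vertex_vals[ids].reshape(h, w)
```

Because pixels on opposite sides of an intensity edge land on different range vertices, averaging never mixes across the edge, which is the edge-preserving property the abstract describes.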
14-12-1982 publication date

DATA MANIPULATION APPARATUS FOR CONVERTING RASTER-SCANNED DATA TO A LOWER RESOLUTION

Number: CA1137619A

Data manipulation apparatus is described for converting raster-scanned data, received for example from a scanner at a first picture element (pel) resolution, to a second, lower pel resolution for display, for example on a CRT terminal. The apparatus includes a scale-changing means which functions to replace selected sub-groups of pels in the input image by single pels at its output. The significance of each single pel reflects the presence or absence of a pel representing part of an image object in the associated sub-group of pels. The number of pels in the selected sub-groups is determined by the degree of compression required to convert to the lower pel resolution. Prior to the scale change, the apparatus modifies the input data in order to minimize merging of adjacent image objects as a result of the scale change and thereby improve the legibility of the output image at the lower resolution. The scanned data is first supplied to a data-sensitive thinner which detects narrow gaps between ...
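The scale change described above, where an output pel is set whenever any pel in its source sub-group is set, amounts to an OR-reduction over blocks. A minimal sketch (block size and function name are assumptions; the patent's pre-thinning step is omitted):

```python
def reduce_resolution(bitmap, block_h, block_w):
    """Scale change by OR-combining: each block_h x block_w block of pels
    collapses to one output pel, set if any pel in the block is set."""
    h, w = len(bitmap), len(bitmap[0])
    out = []
    for by in range(0, h, block_h):
        row = []
        for bx in range(0, w, block_w):
            # Gather the sub-group of pels covered by this output pel.
            block = [bitmap[y][x]
                     for y in range(by, min(by + block_h, h))
                     for x in range(bx, min(bx + block_w, w))]
            row.append(1 if any(block) else 0)
        out.append(row)
    return out
```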
09-03-1982 publication date

MAGNETIC INK CHARACTER RECOGNITION WAVEFORM ANALYZER

Number: CA0001119725A1
Author: KAO CHARLES T

28-03-1996 publication date

BIOLOGICAL ANALYSIS SYSTEM SELF-CALIBRATION APPARATUS

Number: CA0002200457A1

Reference information (15) for a biological slide (11) is obtained. The reference information (15) normalizes the measured object features (19). Calibrated feature measurement (21), not based on absolute measurements, self-adjusts to match the situation of each slide (11), where each slide (11) is characterized by the reference information (15). The reference may differ from slide to slide because of preparation variations. The calibrated features (21) will not carry the inter-slide variations. In addition, the reference information (15) provides a good indication of the slide condition, such as dark stained, air dried, etc., which can be used as slide features (21) for specimen classification. No alteration of the current practice of specimen preparation is required. The many slide-context-dependent features (21) improve the classification accuracy of the objects in a specimen (11).

29-08-1986 publication date

STRUCTURE RECOGNITION DEVICE AND METHOD FOR ITS OPERATION

Number: CH0000657489A5
Assignee: VIEW ENGINEERING, INC.
04-12-2020 publication date

METHOD AND SYSTEM FOR SECURE BIOMETRIC SIGNAL RECOGNITION

Number: FR0003088752B1

20-11-1970 publication date

Number: FR0002031766A5
10-09-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0031395487B1

04-01-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0033333461B1

19-05-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0039822993B1

28-09-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0032344826B1
24-02-2017 publication date

IMAGE IDENTIFICATION SYSTEM AND METHOD

Number: KR0101710050B1
Author: Cui Ming, Tian Wensheng
Assignee: Cui Ming, Tian Wensheng

... The present invention concerns a code, together with an image identification system and method and a search system and method based on that code. The entire conception of the invention rests on the code, which comprises an identification figure and one item of identification data corresponding to it. The identification figure comprises a true-colour image, a two-dimensional code, colours superimposed on the two-dimensional code, and an ID number, and the true-colour image, the two-dimensional code, the superimposed colours and the ID number share the same or corresponding index. The data corresponding to the code is stored on a server; when identifying or searching, the code or the identification figure is scanned, and after image identification processing alone the corresponding data can be retrieved and returned to the mobile terminal. The identification accuracy of the invention is higher and its range of application wider, so that it can serve a variety of commercial uses.
16-08-2008 publication date

Image processing method

Number: TW0200834466A

The invention relates to an image processing method to generate, from a source image, an image of reduced size whose ratio between the width and the height is equal to a predetermined value, called the reduced ratio (RR). It comprises the following steps: selecting (100) a rectangular image part in the source image, and extracting (140) the rectangular image part to generate the reduced image. According to an essential characteristic of the method, if the ratio between the width (PL) and height (PH) of the rectangular image part, called the first ratio (RF), is not equal to the reduced ratio (RR), the width (PL) or the height (PH) of the rectangular image part is modified (130) before the extraction step (140), according to values of perceptual interest associated with each pixel of the source image, in such a manner that the ratio between the width and the height of the modified rectangular image part, called the second ratio, is equal to the reduced ratio (RR).
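Setting aside the perceptual-interest weighting that steers where the rectangle grows, the aspect-ratio correction itself reduces to enlarging one dimension until RF matches RR. A minimal sketch (function name and rounding are assumptions):

```python
def adjust_crop(pl, ph, rr):
    """Grow one dimension of a PL x PH crop so its width/height ratio
    equals the target reduced ratio RR (the patent instead distributes
    this growth according to per-pixel perceptual interest)."""
    rf = pl / ph                    # first ratio RF
    if abs(rf - rr) < 1e-9:
        return pl, ph               # already matches the reduced ratio
    if rf < rr:
        return round(ph * rr), ph   # too narrow: widen
    return pl, round(pl / rr)       # too wide: heighten
```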
08-12-2002 publication date

Method and device for extracting information from a target area within a two-dimensional graphical object in an image

Number: SE0000102021L

09-07-2009 publication date

METHOD, SYSTEM, AND COMPUTER PROGRAM FOR IDENTIFICATION AND SHARING OF DIGITAL IMAGES WITH FACE SIGNATURES

Number: WO2009082814A8

The present invention solves the problem of automatically recognizing multiple known faces in photos or videos on a local computer storage device (on a home computer). It further allows for sophisticated organization and presentation of the photos or videos based on the graphical selection of known faces (by selecting thumbnail images of people). It also solves the problem of sharing or distributing photos or videos in an automated fashion between 'friends' who are also using the same software that enables the invention. It further solves the problem of allowing a user of the invention to review the results of the automatic face detection, eye detection, and face recognition methods and to correct any errors resulting from the automated process.
27-07-2006 publication date

Method for digital recording, storage and/or transmission of information by means of a camera provided on a communication terminal

Number: US20060164517A1
Author: Martin Lefebure
Assignee: Real Eyes 3D

The invention relates to a method for selection of a digitising zone by a camera (CN), correction of the projection distortion, resolution enhancement, and then binarisation, comprising the following operating steps: generation of a closed contour (DC) within or around the document to be processed (O), produced manually or printed; presentation of the document (O) in front of the camera (CN) at an angle such that said contour is entirely visible within the image present on the visualisation screen (AF); recording the image and searching for the contour within it; calculation of the projection distortions (block CC); extraction and fusion of the image contents; and generation of the final image.

11-05-2006 publication date

System and method of enabling a cellular/wireless device with imaging capabilities to decode printed alphanumeric characters

Number: US20060098874A1
Author: Zvi Lev
Assignee: DSPV, LTD.

A system and method for decoding printed alphanumeric characters from images or video sequences captured by a wireless device, including the pre-processing of the image or video sequence to optimize processing in all subsequent steps, the searching of one or more grayscale images for key alphanumeric characters on a range of scales, the comparing of the values on the range of scales to a plurality of templates in order to determine the characteristics of the alphanumeric characters, the performing of additional comparisons to a plurality of templates to determine character lines, line edges, and line orientation, the processing of information from prior operations to determine the corrected scale and orientation of each line, the recognizing of the identity of each alphanumeric character in a string of such characters, and the decoding of the entire character string in digitized alphanumeric format.
26-05-1998 publication date

Apparatus and method for nonlinear normalization of image

Number: US0005757979A

A method for nonlinear normalization of an image, which performs pre-processing for computing the correlation between an unknown pattern and a reference pattern. A local spatial density function ρ(Xi, Yj) (i = 1..I, j = 1..J) is calculated from a two-dimensional pattern f(Xi, Yj), which is obtained by sampling the unknown pattern at a sampling interval γ. The spatial density function ρ(Xi, Yj) is obtained as the product of the reciprocals of the line pitches in both the X and Y directions. An x-direction cumulative function hx(Xi) and a y-direction cumulative function hy(Yj) are computed by successively adding the spatial density function ρ(Xi, Yj). New sampling points (Xi, Yj) are computed in such a fashion that the new sampling intervals (δi, εj), defined as the intervals between two adjacent new sampling points, satisfy the condition that the product of the cumulative function hx(Xi) and δi takes a first fixed value, and the product of the cumulative ...
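Read in one dimension, the resampling condition above says that equal amounts of cumulative density should fall between adjacent new sampling points, i.e. the cumulative function is inverted at equally spaced targets. A minimal 1-D sketch under that reading (names are illustrative; the patent applies this along both X and Y):

```python
import numpy as np

def nonlinear_sample_points(density, n_new):
    """Density-equalizing resampling: place n_new sampling points so that
    equal cumulative density lies between adjacent points."""
    # Cumulative density h with h[0] = 0, one entry per original interval edge.
    cum = np.concatenate([[0.0], np.cumsum(density)])
    # Equally spaced targets in cumulative-density space.
    targets = np.linspace(0.0, cum[-1], n_new)
    # Invert the (monotonic) cumulative function by linear interpolation.
    return np.interp(targets, cum, np.arange(len(cum), dtype=float))
```

With a uniform density the new points come out evenly spaced; where the density is high (dense strokes), points crowd together, which is the nonlinear stretching the patent exploits.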
29-03-2005 publication date

Method and apparatus for resolving perspective distortion in a document image and for calculating line sums in images

Number: US0006873732B2
Assignee: Xerox Corporation

Perspective distortion is estimated in a digital document image by detecting perspective pencils in two directions, one parallel to the text lines and the other parallel to the boundaries of formatted text columns. The pencils are detected by analyzing directional statistical characteristics of the image. To detect a pencil, a first statistical line transform is applied to transform the image into line space, and a second statistical score transform is applied to transform the image into pencil space. A dominant peak in pencil space identifies the perspective pencil. In addition, a computationally efficient line-summing technique is used for effecting sums of pixels along inclined target lines (or curves) through an image. The technique includes pre-generating partial sums and summing along step segments of a target line using the partial sums.
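The partial-sum idea can be sketched with per-row prefix sums: an inclined line is decomposed into one horizontal step segment per row, and each segment's sum is read in O(1) from the precomputed sums. This is a simplified reading of the technique; the function names and the rounding scheme are assumptions.

```python
import numpy as np

def row_prefix_sums(img):
    """Pre-generate per-row partial sums: P[y, x] = sum of img[y, :x]."""
    zeros = np.zeros((img.shape[0], 1))
    return np.concatenate([zeros, np.cumsum(img, axis=1)], axis=1)

def inclined_line_sum(prefix, x0, slope):
    """Sum pixels along the line x = x0 + slope * y by summing one
    horizontal step segment per row via the row prefix sums."""
    h, w1 = prefix.shape
    total = 0.0
    for y in range(h):
        xa = int(round(x0 + slope * y))
        xb = int(round(x0 + slope * (y + 1)))
        lo, hi = min(xa, xb), max(xa, xb)
        hi = max(hi, lo + 1)            # at least one pixel per row
        lo, hi = max(0, lo), min(w1 - 1, hi)
        total += prefix[y, hi] - prefix[y, lo]
    return total
```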
24-10-2019 publication date

SYSTEMS AND METHODS FOR DETECTING OBJECTS IN IMAGES

Number: US20190325605A1
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.

A method configured to be implemented on at least one image processing device for detecting objects in images includes obtaining an image including an object. The method also includes generating one or more feature vectors related to the image based on a first convolutional neural network, wherein the one or more feature vectors include a plurality of parameters. The method further includes determining the position of the object based on at least one of the plurality of parameters. The method still further includes determining a category associated with the object based on at least one of the plurality of parameters.

06-02-2014 publication date

CHARACTER RECOGNITION METHOD, CHARACTER RECOGNITION APPARATUS AND FINANCIAL APPARATUS

Number: US20140037181A1
Assignee: LG CNS CO., LTD.

A character recognition method for recognizing a character of a medium is provided. A character image of an individual character from a medium is acquired, and the character image is read out step by step to determine the character according to a hierarchical structure in which a set of predetermined characters is hierarchically classified into a plurality of groups composed of main groups and subgroups.

05-07-2007 publication date

Handwriting recognition training and synthesis

Number: US2007154094A1

Methods and systems for converting text into natural personal handwriting are provided. One aspect relates to training a computer to recognize a user's handwriting style. In one embodiment, the computer receives handwriting samples of at least one character written by the user, such as the character being provided as the beginning, middle, or ending character among a plurality of other characters. Further embodiments allow for increased personalization of the handwriting. Another aspect relates to systems and methods for displaying a representation of a computer user's handwriting. In one embodiment, the handwriting comprises variant shapes of letters, a personalized connection style between letters, and connection parts that look pressure-sensitive. In another embodiment, characters are adjusted, such as by cutting portions of a character, to create a more realistic recreation and synthesis of the handwriting.
13-06-2019 publication date

PROCESSING POINT CLOUDS OF VEHICLE SENSORS HAVING VARIABLE SCAN LINE DISTRIBUTIONS USING TWO-DIMENSIONAL INTERPOLATION AND DISTANCE THRESHOLDING

Number: US20190179027A1

A method for processing point clouds having variable spatial distributions of scan lines includes receiving a point cloud frame generated by a sensor configured to sense a vehicle environment. Each of the points in the frame has associated two-dimensional coordinates and an associated parameter value. The method also includes generating a normalized point cloud frame by adding interpolated points not present in the received frame, at least by, for each interpolated point, identifying one or more neighboring points having associated two-dimensional coordinates that are within a threshold distance of the two-dimensional coordinates for the interpolated point, and calculating an estimated parameter value of the interpolated point using, for each of the identified neighboring points, the distance between the two-dimensional coordinates and the parameter value associated with the identified neighboring point. The method also includes generating, using the normalized point cloud frame, signals descriptive ...

25-07-2019 publication date

LOSSY LAYER COMPRESSION FOR DYNAMIC SCALING OF DEEP NEURAL NETWORK PROCESSING

Number: US20190228284A1

An apparatus operating a neural network is configured to compress one or more of the activations or weights in one or more layers of the neural network. The activations and/or weights may be compressed based on a compression ratio or a system event. The system event may be a bandwidth condition, a power condition, a debug condition, a thermal condition or the like. The apparatus may operate the neural network to compute an inference based on the compressed activations or the compressed weights.

02-09-2021 publication date

SYSTEMS AND METHODS FOR IMAGE PREPROCESSING

Number: US20210271847A1

A method and apparatus of a device that classifies an image is described. In an exemplary embodiment, the device segments the image into a region of interest, which includes information useful for classification, and a background region by applying a first convolutional neural network. In addition, the device tiles the region of interest into a set of tiles. For each tile, the device extracts a feature vector of that tile by applying a second convolutional neural network, where the features of the feature vectors represent local descriptors of the tile. Furthermore, the device processes the extracted feature vectors of the set of tiles to classify the image.
11-09-2003 publication date

Systems and methods for automatic scale selection in real-time imaging

Number: US2003169942A1

A system and method for automatic scale selection in real-time image and video processing and computer vision applications. In one aspect, a non-parametric variable-bandwidth mean shift technique, based on adaptive estimation of a normalized density gradient, is used for detecting one or more modes in the underlying data and clustering the underlying data. In another aspect, a data-driven bandwidth (or scale) selection technique is provided for the variable-bandwidth mean shift method, which estimates for each data point the covariance matrix that is the most stable across a plurality of scales. The methods can be used for detecting modes and clustering data of various types, such as image data, video data, speech data, handwriting data, etc.
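For intuition, fixed-bandwidth mean shift, the simpler cousin of the variable-bandwidth method claimed above, iteratively moves every point to the kernel-weighted average of the data until it settles on a density mode. A 1-D sketch with illustrative names (the patent's contribution is precisely the adaptive, per-point bandwidth that this sketch omits):

```python
import numpy as np

def mean_shift_modes(points, bandwidth, iters=50):
    """Fixed-bandwidth mean shift: repeatedly replace each estimate by
    the Gaussian-weighted average of the data, then report the distinct
    converged positions as the detected modes."""
    x = points.astype(float)
    for _ in range(iters):
        # Pairwise kernel weights between current estimates and the data.
        d = x[:, None] - points[None, :]
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        x = (w @ points) / w.sum(axis=1)
    # Merge estimates that converged to the same mode (coarse rounding).
    return np.unique(np.round(x, 1))
```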
20-10-2010 publication date

VISUAL DEVICE, INTERLOCKING COUNTER, AND IMAGE SENSOR

Number: EP1378862B1
Author: AJIOKA, Yoshiaki
Assignee: Ecchandes Inc.

30-01-2008 publication date

Number: JP0004041496B2

24-12-2008 publication date

Number: JP0004202146B2
20-01-2011 publication date

IDENTIFICATION AND CLASSIFICATION OF VIRUS PARTICLES IN TEXTURED ELECTRON MICROGRAPHS

Number: RU2409855C2

The invention relates to a method for identifying and characterizing structures in electron micrographs. The technical result is improved identification quality and reduced time for analysing virus particles with an electron microscope. The method comprises: selecting structures in a first image, the structures having a first shape type deformed in a first direction; transforming the selected structures into a second shape type different from the first; using the transformed structures of the second shape type to form reference images; identifying a new structure in a second image, the new structure having the first shape type; deforming the structure of the second shape type in each reference image in the first direction; determining which of the reference images is the preferred reference image that best matches the new structure; and deforming a series of reference images so that they acquire ...

12-10-2018 publication date

MODEL INITIALIZATION BASED ON VIEW CLASSIFICATION

Number: RU2669680C2

The invention relates to the field of image processing. The technical result is improved accuracy of segmentation results by determining the placement parameters of an object captured in an image. The image processing device comprises: an input port (IN) for receiving an image (3DV) of an object (HT) acquired in a field of view (FoV) by an imager (USP), the image capturing a placement of the object corresponding to the imager's field of view (FoV); a classifier configured to use a geometric model (MOD) of the object (HT) to determine, from a set of predefined placements, the placement of the object captured in the image; and an output port (OUT) configured to output placement parameters describing the found placement, wherein the classifier uses a generalized Hough transform (GHT) to determine the placement of the object, and each of the predefined ...
28-04-2018 publication date

DATA PROCESSING FOR SUPER-RESOLUTION

Number: RU2652722C1

The invention relates to the field of digital image and video processing. The technical result is improved image and video quality without loss of image data. The super-resolution data processing device comprises: an offset estimation unit configured to receive a set of low-resolution images of one scene, obtain sets of pixel offsets for the low-resolution images for all pixels corresponding to the same fragments in the set of low-resolution images, obtain sets of integer pixel offsets by computing the integer pixel offset for each pixel offset, and obtain sets of fractional pixel offsets by computing the fractional pixel offset for each pixel offset; a filter bank configured to store sets of filters; a filter selection unit; and a high-resolution image acquisition unit configured to obtain a high-resolution image in RGB format. 2 independent and 16 dependent claims, ...
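The decomposition of each estimated displacement into an integer pixel offset (applied by indexing) and a fractional offset (served by picking an interpolation filter from the bank) can be sketched as below. The quarter-pel bank size, the rounding of the fraction to a filter index, and all names are assumptions, not details from the patent.

```python
import math

def decompose_shifts(shifts, bank_size=4):
    """Split each pixel displacement into an integer shift and the index
    of a fractional-shift interpolation filter in a bank of `bank_size`
    filters (assumed quarter-pel spacing)."""
    out = []
    for s in shifts:
        i = math.floor(s)                    # integer pixel offset
        frac = s - i                         # fractional part in [0, 1)
        filt = round(frac * bank_size) % bank_size
        out.append((i, filt))
    return out
```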
04-02-1970 publication date

Apparatus for Recognising a Pattern

Number: GB0001180290A

... 1,180,290. Character recognition. TOKYO SHIBAURA ELECTRIC CO. Ltd. 26 Oct., 1967 [31 Oct., 1966 (3)], No. 48752/67. Heading G4R. In pattern recognition apparatus, pattern signals are quantized and then divided into a number of channels, primary and secondary pattern characteristics for each channel being derived, the secondary from the primary, the pattern being identified from the combination of the secondary characteristics and tertiary characteristics derived from a connection relationship between the secondary characteristics of adjacent channels. Character signals from a raster scan of a character, after quantization, are provided from 501 in Fig. 2, to be thinned and reduced at gates 55 and OR gate 59 respectively, gates 55 looking at successive 3 x 4 portions of the raster in shift register 53 and OR gate 59 looking at successive 3-bit L portions of the raster in shift register 57. Reduction thus halves the number of rows in the raster. The output from Fig. 2 at 500 is fed to a shift ...

16-12-2020 publication date

Writing recognition using wearable pressure sensing device

Number: GB0202017218D0

10-06-2003 publication date

LINEAR IMAGER RESCALING METHOD

Number: AU2002343522A1

12-06-2001 publication date

Data acquisition system, artificial eye, vision device, image sensor and associated device

Number: AU0001552101A
24-07-2014 publication date

Method and system for fast and robust identification of specific products in images

Number: AU2011269050B2

Identification of objects in images. All images are scanned for key-points and a descriptor is computed for each region. A large number of descriptor examples are clustered into a vocabulary of visual words. An inverted file structure is extended to support clustering of matches in the pose space. It has a hit list for every visual word, which stores all occurrences of the word in all reference images. Every hit stores an identifier of the reference image where the key-point was detected, together with its scale and orientation. Recognition starts by assigning key-points from the query image to the closest visual words. Then every pairing of a key-point and one of the hits from the list casts a vote into a pose accumulator corresponding to the reference image where the hit was found. Every key-point/hit pair predicts a specific orientation and scale of the model represented by the reference image.
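The inverted-file voting described above can be sketched with a scale-only pose accumulator: each query key-point assigned to a visual word casts one vote per hit, binned by reference image and a coarse relative-scale bin. A minimal sketch (data layout and names are assumptions; orientation binning is omitted):

```python
from collections import defaultdict

def vote_for_models(query_keypoints, hit_lists):
    """Pose-space voting: query_keypoints is a list of (visual word id,
    scale); hit_lists maps each word id to its hits (reference image id,
    scale). Returns the (reference id, scale bin) with the most votes."""
    accumulator = defaultdict(int)
    for word, q_scale in query_keypoints:
        for ref_id, r_scale in hit_lists.get(word, []):
            # Coarse pose hypothesis: relative scale between query and model.
            scale_bin = round(q_scale / r_scale)
            accumulator[(ref_id, scale_bin)] += 1
    # Best hypothesis: the bin that accumulated the most votes.
    return max(accumulator.items(), key=lambda kv: kv[1])
```

Binning votes by pose, rather than only by reference image, is what lets geometrically inconsistent matches cancel out instead of piling up on the wrong model.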
07-12-2006 publication date

METHOD, SYSTEM, DIGITAL CAMERA AND ASIC FOR GEOMETRIC IMAGE TRANSFORMATION BASED ON TEXT LINE SEARCHING

Number: CA0002610214A1

The present invention provides a method, system and/or digital camera providing a geometrical transformation of deformed images of documents comprising text, by text line tracking, resulting in an image comprising parallel text lines. The transformed image is provided as an input to an OCR program running either in a computer system or in a processing element comprised in said digital camera.

14-01-1986 publication date

VIDEO NORMALIZATION FOR HAND PRINT RECOGNITION

Number: CA1199408A

In the machine recognition of hand-printed characters it is desirable that the characters be constrained to a uniform size. This may be accomplished by taking a relatively unconstrained hand-printed character and, through simple logical processes involving eliminating and/or combining portions of the unconstrained character, obtaining a constrained character of a specified size. To obtain this specified character, the unconstrained characters are stored in a video line buffer as a plurality of rows of binary bits. When a row character is located in the video line buffer, a process of vertical and horizontal consolidation is initiated. In the vertical consolidation, which is performed first, predetermined rows of the bits of the row character are selectively combined with other rows of stored bits, or selected without change, to reduce the height of the character to a predetermined desired height. Unselected rows are ignored. As the rows of bits are obtained during vertical consolidation, horizontal ...
10-09-2019 publication date

ACCELERATED LIGHT FIELD DISPLAY

Number: CA0003018205C
Assignee: QUALCOMM INCORPORATED

This disclosure describes methods, techniques, devices, and apparatuses for graphics and display processing for light field projection displays. In some examples, this disclosure describes a projection display system capable of rendering and displaying multiple annotations at the same time. An annotation is any information (e.g., texts, signs, directions, logos, phone numbers, etc.) that may be displayed. In one example, this disclosure proposes techniques for rendering and displaying multiple annotations at the same time at multiple different focal lengths.

25-08-2017 publication date

Character recognition method, character recognition device and financial equipment

Number: CN0103577820B

07-06-2019 publication date

ROTARY TOOL

Number: FR0003063373B1
14-10-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0035885700B1

03-10-2000 publication date

METHOD USING MULTI-RESOLUTION IMAGES FOR THE OPTICAL RECOGNITION OF POSTAL ITEMS

Number: FR0035601968B1
22-01-2019 publication date

ACCELERATED LIGHT FIELD DISPLAY

Number: KR0101940971B1
Assignee: QUALCOMM Incorporated

... This disclosure describes methods, techniques, devices and apparatuses for graphics and display processing for light field projection displays. In some examples, the disclosure describes a projection display system capable of rendering and displaying multiple annotations at the same time. An annotation is any information that may be displayed (e.g., text, signs, directions, logos, phone numbers, etc.). In one example, the disclosure proposes techniques for rendering and displaying multiple annotations at the same time at multiple different focal lengths.

22-01-2019 publication date

TOOTH SOUNDNESS DETERMINATION SUPPORT DEVICE AND TOOTH SOUNDNESS DETERMINATION SUPPORT SYSTEM

Number: KR1020190007451A
Author: Kambara, Masaki

... Provided is a tooth soundness determination support device that assists a dentist in judging the soundness of teeth. The device (20) comprises: a grayscale conversion unit (21) that converts a fluorescence image, captured from a tooth irradiated with excitation light, into a grayscale image; a grayscale value acquisition unit (22) that obtains the grayscale values of a reference point and of a plurality of evaluation points in the tooth image within the grayscale image; a normalization unit (23) that normalizes the grayscale values of the evaluation points by the grayscale value of the reference point; and a determination data generation unit (24) that generates tooth soundness determination data visually presenting the grayscale value of the reference point and the normalized grayscale values of the evaluation points.

16-01-2019 publication date

IMAGE PATCH NORMALIZATION METHOD AND SYSTEM

Number: KR1020190005516A

... The present invention relates to an image patch normalization method and system. The method comprises: extracting an image patch containing a recognition target from an input image; normalizing the extracted image patch to a predetermined size on the basis of distance information about the recognition target; and transmitting the normalized image patch to a recognizer for detecting the recognition target. By normalizing the image patch extracted from the input image to a predetermined size based on the target's distance information, the recognition target can be detected accurately with a single recognizer, and the detection time can be reduced dramatically.
16-10-2020 дата публикации

Nuclear image processing method

Номер: TW0202038252A
Принадлежит:

A nuclear image processing method is disclosed. The method includes the following steps: inputting a normalized standard space nuclear image; selecting a voxel of the normalized standard space nuclear image and collecting the values of the neighbor voxels to form a voxel value set; conducting a data augmentation algorithm to generate a voxel distribution function; calculating an expected value of the distribution and calculating a first standard variation of the potion over the expected value and a second standard variation of the potion lower than the expected value; repeating the above steps to calculate the expected value, the first standard variation and the second standard variation of the necessary voxels, so as to form an image standardization template set including expected value template, first standard variation template and the second standard variation template.

Подробнее
12-09-1996 дата публикации

PROCESS FOR IMPROVING THE RECOGNITION OF TYPED TEXT WITH FIXED CHARACTER SPACING

Номер: WO1996027847A1
Принадлежит:

The invention concerns a process for improving the recognition of typed text with fixed character spacing, wherein the blackened segments (S) from a uniformly printed area of a document, for which a character geometry characteristic of the font in question can be given, are provided with circumscribing rectangles. The geometry data of the resultant rectangles are determined. From the geometry data of the rectangles which match the characteristic character geometry within a predetermined permissible margin, ideal character expectation zones are calculated for the rectangles whose geometry data are outside the predetermined permissible margin. Only the blackened segments lying within the ideal character expectation zones, or the rectangles whose geometry data correspond to the characteristic character geometry within the predetermined permissible margin, are considered for recognition purposes.
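The filtering step can be illustrated as follows: rectangles whose width and height fall within a tolerance of the font's characteristic character size are accepted, and, because the character spacing is fixed, ideal expectation zones can be projected at pitch offsets from accepted rectangles. The function names and the single-step pitch projection are illustrative assumptions, not the patented process:

```python
def matches_geometry(box, char_w, char_h, tol):
    # Accept an (x, y, w, h) rectangle whose size matches the characteristic
    # character geometry within a relative tolerance
    _, _, w, h = box
    return abs(w - char_w) <= tol * char_w and abs(h - char_h) <= tol * char_h

def expected_zones(matching_boxes, pitch):
    # Project an ideal character expectation zone one fixed pitch to the
    # right of each accepted rectangle (simplified single-step projection)
    return [(x + pitch, y, w, h) for (x, y, w, h) in matching_boxes]
```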

Подробнее
31-08-2021 дата публикации

Methods and systems for providing interface components for respiratory therapy

Номер: US0011103664B2
Принадлежит: ResMed Pty Ltd, RESMED PTY LTD

Systems and methods permit generation of a digital scan of a user's face such as for obtaining of a patient respiratory mask, or component(s) thereof, based on the digital scan. The method may include: receiving video data comprising a plurality of video frames of the user's face taken from a plurality of angles relative to the user's face, generating a three-dimensional representation of a surface of the user's face based on the plurality of video frames, receiving scale estimation data associated with the received video data, the scale estimation data indicative of a relative size of the user's face, and scaling the digital three-dimensional representation of the user's face based on the scale estimation data. In some aspects, the scale estimation data may be derived from motion information collected by the same device that collects the scan of the user's face.
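Scaling the three-dimensional representation amounts to multiplying every vertex by the ratio between a known real-world distance and the same distance measured in the unscaled model. A sketch assuming interpupillary distance as the scale estimation datum (the patent derives scale from motion information; IPD here is only an illustrative stand-in):

```python
def scale_mesh(vertices, measured_dist, true_dist_mm):
    # Scale a unitless (x, y, z) point cloud so a measured landmark distance
    # matches its known real-world value (in millimetres)
    s = true_dist_mm / measured_dist
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]
```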

Подробнее
08-04-2021 дата публикации

METHOD AND APPARATUS FOR GENERATING LEARNING DATA REQUIRED TO LEARN ANIMATION CHARACTERS BASED ON DEEP LEARNING

Номер: US20210103721A1
Принадлежит:

Disclosed are a learning data generation method and apparatus needed to learn animation characters on the basis of deep learning. The learning data generation method needed to learn animation characters on the basis of deep learning may include collecting various images from an external source using wired/wireless communication, acquiring character images from the collected images using a character detection module, clustering the acquired character images, selecting learning data from among the clustered images, and inputting the selected learning data to an artificial neural network for character recognition. 1. A learning data generation method needed to learn animation characters on the basis of deep learning, the learning data generation method comprising: collecting various images from an external source using wired/wireless communication; acquiring character images from the collected images using a character detection module; clustering the acquired character images; selecting learning data from among the clustered images; and inputting the selected learning data to an artificial neural network for character recognition. 2. The learning data generation method of claim 1, wherein the collecting of the various images comprises: collecting a video from the external source using the wired/wireless communication; and extracting frames from the collected video at preset time intervals. 3. The learning data generation method of claim 1, further comprising, after the collecting of the various images, training the character detection module using the collected images. 4. The learning data generation method of claim 3, wherein the training of the character detection module comprises: labeling the collected images to generate labels corresponding to the respective images; and inputting the generated labels and the collected images to a preset character detection model to train the character detection model. 5. The learning data generation method of claim 4, ...

Подробнее
01-06-2017 дата публикации

ENHANCING THE LEGIBILITY OF IMAGES USING MONOCHROMATIC LIGHT SOURCES

Номер: US20170154410A1
Принадлежит: Georgetown University

A system and method are described for enhancing readability of document images by operating on each document individually. Monochromatic light sources operating at different wavelengths of light can be used to obtain greyscale images. The greyscale images can then be used in any desired image enhancement algorithm. In one example algorithm, an automated method removes image background noise and improves sharpness of the scripts and characters using edge detection and local color contrast computation.
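One simple form of the background-removal idea: a pixel is kept as script only when it is darker than its local neighbourhood by a contrast margin, which suppresses smooth background while preserving high-contrast strokes. A pure-Python sketch; the window size and margin are illustrative parameters, not the patent's algorithm:

```python
def remove_background(gray, window=3, contrast=20):
    # Keep a pixel as ink (0) only when it is darker than the local
    # neighbourhood mean by `contrast`; everything else becomes background (255)
    h, w = len(gray), len(gray[0])
    out = [[255] * w for _ in range(h)]
    r = window // 2
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            mean = sum(gray[j][i] for j in ys for i in xs) / (len(ys) * len(xs))
            if gray[y][x] < mean - contrast:
                out[y][x] = 0
    return out
```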

Подробнее
23-03-1993 дата публикации

CHARACTER RECOGNITION APPARATUS

Номер: US5197107A
Автор:
Принадлежит:

Подробнее
13-12-2018 дата публикации

SYSTEMS AND METHODS FOR REDUCING UNWANTED REFLECTIONS IN DISPLAY SYSTEMS INCORPORATING AN UNDER DISPLAY BIOMETRIC SENSOR

Номер: US20180357462A1
Принадлежит:

An optical sensor system includes a display substrate, display pixel circuitry including a plurality of light emitting display elements or pixels disposed over the display substrate, a first circular polarizer disposed over the display substrate and the display pixel circuitry, and a transparent cover sheet disposed over the first circular polarizer. A top surface of the transparent cover sheet provides a sensing surface for an object such as a finger. The optical sensor system also includes a sensor layer disposed below the display substrate, the sensor layer having a plurality of photosensors, and a second circular polarizer disposed between the sensor layer and the display substrate.

Подробнее
04-11-2014 дата публикации

System and method for digital image signal compression using intrinsic images

Номер: US0008879849B2

In a first exemplary embodiment of the present invention, an automated, computerized method is provided for processing an image. According to a feature of the present invention, the method comprises the steps of providing an image file depicting an image, in a computer memory, generating an intrinsic image corresponding to the image, and compressing the intrinsic image to provide a compressed intrinsic image.

Подробнее
05-12-2019 дата публикации

PROCESSING METHOD FOR CHARACTER STROKE AND RELATED DEVICE

Номер: US2019371277A1
Принадлежит:

A processing method for character stroke and related device are provided. The method comprises: obtaining handwriting information of a first handwriting point and handwriting information of a second handwriting point in a character stroke, the handwriting information comprising coordinate information; determining a display effect related to the first handwriting point according to the handwriting information of the first handwriting point and the handwriting information of the second handwriting point; rendering the display effect related to the first handwriting point within a display range of the first handwriting point. The display manner of the character stroke can be enriched through above manner, thereby improving the user experience.

Подробнее
14-10-2004 дата публикации

Image retrieval

Номер: US2004202385A1
Автор:
Принадлежит:

A method for searching an image database includes capturing an image of a photograph and a background, determining a boundary of the photograph in the image, cropping the photograph from the image, correcting the perspective of the photograph, compensating colors of the photograph, and matching the photograph with an image in the image database.

Подробнее
20-02-2011 дата публикации

METHOD, SYSTEM, DIGITAL CAMERA AND ASIC PROVIDING GEOMETRIC IMAGE TRANSFORMATION BASED ON TEXT LINE SEARCHING

Номер: RU2412482C2
Принадлежит: ЛУМЕКС АС (NO)

The invention relates to means for geometric transformation of distorted images of documents containing text. The technical result is increased reliability of text recognition. The method and system provide text line tracing that yields an image containing parallel text lines. The transformed image is used as input to optical character recognition software. Identified directions of neighboring identified connected image elements are composed, thereby identifying text lines, guide lines, or similar elements constituting the direction of text lines over all or part of the image area; transformation points related to the composed text line directions over the image area are identified. 4 independent and 40 dependent claims, 6 drawings.

Подробнее
20-11-2009 дата публикации

IDENTIFICATION AND CLASSIFICATION OF VIRUS PARTICLES IN TEXTURED ELECTRON MICROGRAPHS

Номер: RU2008113161A
Принадлежит:

... 1. A method for identifying and characterizing structures in electron micrographs, comprising: selecting structures in a first image (110), the structures having a first shape type deformed in a first direction; transforming the selected structures into a second shape type different from the first shape type; using the transformed structures of the second shape type to form reference images; identifying a new structure in a second image (112), the new structure having the first shape type; deforming the structure of the second shape type in each reference image in the first direction; determining which of the reference images is the preferred reference image that best matches the new structure; and deforming a number of reference images so that they take the shape of the new structure, which has an elliptical shape, and examining each deformed reference image to verify that ...

Подробнее
20-11-2015 дата публикации

SYSTEM AND METHOD FOR DIGITAL IMAGE SIGNAL COMPRESSION USING INTRINSIC IMAGES

Номер: RU2014118769A
Принадлежит:

... 1. An automated, computerized method for processing an image, comprising the steps of: providing an image file depicting an image in a computer memory; generating an intrinsic image corresponding to the image; and compressing the intrinsic image to provide a compressed intrinsic image. 2. The method of claim 1, including the additional step of transmitting the compressed intrinsic image to a remote device. 3. The method of claim 1, including the additional step of storing the compressed intrinsic image in memory. 4. The method of claim 1, wherein the intrinsic image includes a set of intrinsic images. 5. The method of claim 4, wherein the set of intrinsic images includes a material image and an illumination image. 6. An automated, computerized method for processing an image, comprising the step of receiving a compressed intrinsic image. 7. The method of claim 6, including the additional step of decompressing the compressed intrinsic image. 8. The method of claim ...

Подробнее
02-12-2010 дата публикации

VISUAL DEVICE, INTERLOCKING COUNTER AND IMAGE SENSOR

Номер: DE0060238041D1
Принадлежит: ECCHANDES INC, ECCHANDES INC.

Подробнее
13-10-1977 дата публикации

VIDEO DATA COMPRESSION IN THE SCANNING OF DOCUMENTS

Номер: DE0002336180B2
Автор:
Принадлежит:

Подробнее
10-05-2017 дата публикации

Callibration method

Номер: GB0201704847D0
Автор:
Принадлежит:

Подробнее
17-12-1980 дата публикации

COMPRESSION AND EXPANSION OF SCANNED IMAGES

Номер: GB0001581546A
Автор:
Принадлежит:

Подробнее
25-01-2018 дата публикации

Method and system for transforming spectral images

Номер: AU2016290604A1
Принадлежит: Spruson & Ferguson

The invention pertains to a method for transforming a set of spectral images, the method comprising: dividing the images in said set into identically arranged areas; for each of said areas, calculating a predetermined characteristic across said set of images; and, for each of said images, normalizing intensity values in each of said areas as a function of said predetermined characteristic of said area. The invention also pertains to a corresponding computer program product and a corresponding image processing system.
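The per-area normalization can be sketched as follows, using the block mean across the whole set of spectral images as the predetermined characteristic (the claim leaves the characteristic open; the mean is one natural, illustrative choice):

```python
def normalize_areas(images, block):
    # Divide every image's pixels in each block by that block's mean
    # intensity computed across the whole set of spectral images
    h, w = len(images[0]), len(images[0][0])
    out = [[[0.0] * w for _ in range(h)] for _ in images]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x) for y in range(by, min(by + block, h))
                            for x in range(bx, min(bx + block, w))]
            mean = sum(img[y][x] for img in images for (y, x) in cells)
            mean /= len(cells) * len(images)
            for i, img in enumerate(images):
                for (y, x) in cells:
                    out[i][y][x] = img[y][x] / mean if mean else 0.0
    return out
```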

Подробнее
26-06-1980 дата публикации

PATTERN RECOGNITION APPARATUS

Номер: AU0005016279A
Принадлежит:

Подробнее
09-09-2021 дата публикации

Identification of individuals in a digital file using media analysis techniques

Номер: AU2018324122B2
Принадлежит:

This description describes a system for identifying individuals within a digital file. The system accesses a digital file describing the movement of unidentified individuals and detects a face for an unidentified individual at a plurality of locations in the video. The system divides the digital file into a set of segments and detects a face of an unidentified individual by applying a detection algorithm to each segment. For each detected face, the system applies a recognition algorithm to extract feature vectors representative of the identity of the detected faces which are stored in computer memory. The system applies a recognition algorithm to query the extracted feature vectors for target individuals by matching unidentified individuals to target individuals, determining a confidence level describing the likelihood that the match is correct, and generating a report to be presented to a user of the system.
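The querying step above (matching extracted feature vectors against target individuals and attaching a confidence level) can be sketched with cosine similarity as the matching score; the threshold value and the choice of cosine similarity are illustrative assumptions, not the patented system:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_faces(detected, targets, threshold=0.8):
    # Match each detected-face vector to its best target; the similarity
    # doubles as the confidence level included in the report
    report = []
    for i, d in enumerate(detected):
        name, conf = max(((n, cosine(d, t)) for n, t in targets.items()),
                         key=lambda p: p[1])
        if conf >= threshold:
            report.append((i, name, conf))
    return report
```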

Подробнее
22-02-1977 дата публикации

VIDEO COMPACTION FOR PRINTED TEXT

Номер: CA0001005916A1
Автор: NAGY GEORGE, WELCH PETER D
Принадлежит:

Подробнее
19-10-2019 дата публикации

SYSTEM AND METHOD FOR TESTING ELECTRONIC VISUAL USER INTERFACE OUTPUTS

Номер: CA0003039607A1
Принадлежит: HINTON, JAMES W.

A system and method are provided for testing electronic visual user interface outputs. The method includes obtaining a baseline set of one or more screen shots of a user interface, the user interface comprising one or more elements; generating an updated set of one or more screen shots of the user interface, the updated set comprising one or more changes to the user interface; comparing the baseline set to the updated set to generate a differential set of one or more images illustrating differences in how at least one of the user interface elements is rendered. The comparing includes, for each screen shot: identifying coordinates of edges of a content portion of interest relative to an entire content captured in that screen shot; cropping the content portion using the coordinates from the baseline and updated versions of the entire content captured in that screen shot to obtain content portions of a same size for comparison; and performing a spatial unit-by-unit comparison of the same size ...
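The crop-then-compare step reduces to slicing the content portion of interest out of both versions and comparing unit by unit. A minimal sketch over list-of-lists "screenshots" (exact pixel equality as the comparison is a simplification):

```python
def crop(img, x0, y0, x1, y1):
    # Cut the content portion of interest out of a screenshot
    return [row[x0:x1] for row in img[y0:y1]]

def diff_map(baseline, updated):
    # Unit-by-unit comparison of two same-size crops; 1 marks a difference
    assert len(baseline) == len(updated) and len(baseline[0]) == len(updated[0])
    return [[0 if a == b else 1 for a, b in zip(rb, ru)]
            for rb, ru in zip(baseline, updated)]
```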

Подробнее
30-12-2017 дата публикации

SCRAP SORTING SYSTEM

Номер: CA0002971622A1
Принадлежит:

A system and a method of sorting scrap particles includes imaging a moving conveyor containing scrap particles using a vision system to create an image. A computer analyzes the image as a matrix of cells, identifies cells in the matrix containing a particle, and calculates a color input for the particle from a color model by determining color components for each cell associated with the particle. A light beam is directed to the particle on the conveyor downstream of the vision system, and at least one emitted band of light from the particle is isolated and detected at a selected frequency band to provide spectral data for the particle. The computer generates a data vector for the particle containing the color input and the spectral data, and classifies the particle into one of at least two classifications of a material as a function of the vector.
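The data-vector construction described above (a color input computed from the cells associated with a particle, concatenated with the spectral data) can be sketched as follows, with mean RGB standing in for the color model as an illustrative simplification:

```python
def color_input(cells):
    # Average the R, G, B components over the cells associated with one particle
    n = len(cells)
    return tuple(sum(c[k] for c in cells) / n for k in range(3))

def data_vector(color, spectral):
    # Concatenate the color input and the spectral-band data for the classifier
    return list(color) + list(spectral)
```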

Подробнее
22-11-2007 дата публикации

METHOD AND APPARATUS FOR PROVIDING THREAT IMAGE PROJECTION (TIP) IN A LUGGAGE SCREENING SYSTEM, AND LUGGAGE SCREENING SYSTEM IMPLEMENTING SAME

Номер: CA0002708600A1
Принадлежит:

A method and apparatus for testing luggage screening equipment operators is provided. A sequence of images of contents of luggage items derived from a device that scans the luggage items with penetrating radiation is received. The image generation device is characterized by introducing a certain distortion into these images of contents of luggage items. A display device is caused to display images derived from this sequence of images. Occasionally, the display device is caused to show a simulated threat in a luggage item by displaying a combined image derived based at least in part on an image in the sequence of images and a distorted threat image. The distorted threat image was derived by applying a distortion insertion process to a reference threat image, wherein the distortion insertion process tends to approximate the certain distortion introduced in the images of contents of luggage items by the device that scans the luggage items with penetrating radiation.

Подробнее
10-05-1991 дата публикации

METHODS AND DEVICES FOR AUTOMATICALLY IDENTIFYING WRITTEN WORDS AND MACHINES FOR POST-LABELING OF CHECKS OR ENVELOPES

Номер: FR0002595842B1
Автор:
Принадлежит:

Подробнее
31-08-2018 дата публикации

ROTARY TOOL

Номер: FR0003063373A1
Принадлежит: SAFRAN IDENTITY & SECURITY

An image processing system (I1, ... In), comprising a main neural network (2) and, upstream of it, a preprocessing module (3) comprising several neural networks (R1, ... Rn) working in parallel to process several source images of the same object, and configured to generate, by fusing the outputs of these networks, a representation (D) of the object that improves the performance of the main neural network, the training of the neural networks of the preprocessing module (3) being carried out at least partly simultaneously with that of the main neural network (2).

Подробнее
29-11-1968 дата публикации

Apparatus for the identification of a network

Номер: FR0001547790A
Автор:
Принадлежит:

Подробнее
12-08-1994 дата публикации

Apparatus enabling observation of a distant detailed feature from which a beam of radiation is received

Номер: FR0002701321A1
Принадлежит:

An optical system (4) whose aperture is too small to resolve two closely spaced sources (3A, 3B) located on a distant target (3) is designed to produce an image large enough to spread over a number of sensors (5) of an array. The shape and orientation of the image depend on the number and orientations of the sources, so the output signal of the sensor array can be used to identify certain types of source.

Подробнее
04-05-1973 дата публикации

Номер: FR0002153729A5
Автор:
Принадлежит:

Подробнее
06-03-2000 дата публикации

PROCESS USING MULTI-RESOLUTION IMAGES FOR OPTICAL RECOGNITION OF POSTAL ITEMS

Номер: FR0030313102B1
Принадлежит:

Подробнее
09-07-2000 дата публикации

PROCESS USING MULTI-RESOLUTION IMAGES FOR OPTICAL RECOGNITION OF POSTAL ITEMS

Номер: FR0038593500B1
Принадлежит:

Подробнее
14-09-2000 дата публикации

PROCESS USING MULTI-RESOLUTION IMAGES FOR OPTICAL RECOGNITION OF POSTAL ITEMS

Номер: FR0033673149B1
Принадлежит:

Подробнее
27-06-2013 дата публикации

Systems and Methods for Processing of Coverings Such as Leather Hides and Fabrics for Furniture and Other Products

Номер: US20130163826A1
Принадлежит: Vision Automation LLC

Methods and systems for processing coverings such as leather hides and fabrics are provided. A system can include a worktable having a surface on which a covering is placeable. An imaging device can be positionable relative to the worktable. The imaging device can be configured to obtain an image of the covering on the surface of the worktable. A projector can be positionable relative to worktable. The projector can be configured to project an image onto the surface of the worktable and the covering on the surface of the worktable. A controller can be in communication with the imaging device and projector. The controller can be configured to correct images taken by the imaging device. The controller can also be configured to correct the images projected onto the surface of the worktable and the covering thereon. The controller can be configured to permit the showing of virtual markings on the covering placed on the surface of the worktable through an image projected thereon by the projector. The covering can then be marked or cut along the virtual markings.

Подробнее
07-01-2021 дата публикации

METHOD AND APPARATUS FOR PREDICTING FACE BEAUTY GRADE, AND STORAGE MEDIUM

Номер: US20210004570A1
Принадлежит: WUYI UNIVERSITY

A method for predicting a face beauty grade includes the following steps of: acquiring a beautiful face image of a face beauty database, preprocessing the beautiful face image, and extracting a beauty feature vector of the beautiful face image, the preprocessing unifying data of the beautiful face image; recognizing continuous features of samples of the same type in a feature space by using a bionic pattern recognition model, and classifying the beauty feature vector to obtain a face beauty grade prediction model; and collecting a face image to be recognized, and inputting the face image to be recognized into the face beauty grade prediction model to predict a face beauty grade and obtain the beauty grade of the face image to be recognized. 1. A method for predicting a face beauty grade, comprising the following steps of: acquiring a beautiful face image from a face beauty database, preprocessing the beautiful face image, and extracting a beauty feature vector of the beautiful face image; classifying the beauty feature vector by using a bionic pattern recognition model to obtain a trained face beauty grade prediction model; and collecting a face image to be recognized, inputting the face image to be recognized into the face beauty grade prediction model to predict a face beauty grade and obtain the beauty grade of the face image to be recognized. 2. The method of claim 1, wherein the step of acquiring the beautiful face image of the face beauty database, preprocessing the beautiful face image, and extracting the beauty feature vector of the beautiful face image further comprises steps of: acquiring the beautiful face image of the face beauty database, and extracting a beautiful face key point of the beautiful face image by using a neural network; preprocessing the beautiful face image according to the beautiful face key point to obtain a normalized standard beautiful face image; and processing the standard beautiful face image by using a width learning ...

Подробнее
02-01-2020 дата публикации

DEEP LEARNING-BASED AUTOMATIC GESTURE RECOGNITION METHOD AND SYSTEM

Номер: US20200005086A1
Принадлежит: KOREA ELECTRONICS TECHNOLOGY INSTITUTE

Deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed without being influenced by an environment and a condition, even while using less training data. 1. A training method, comprising: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. 2. The training method of claim 1, wherein the contours overlap one another. 3. The training method of claim 1, wherein the pieces of contour information are pieces of information regarding feature points extracted to derive the contours. 4. The training method of claim 3, wherein the pieces of information regarding the feature points comprise pieces of coordinate information of the feature points. 5. The training method of claim 4, wherein the generating the training data comprises normalizing the pieces of contour information through arithmetic operations using a mean and a standard deviation of the pieces of coordinate information of the pieces of contour information forming each contour. 6. The training method of claim 1, wherein the generating the training data comprises generating the training data by adding pieces of reliability information of the pieces of contour information to the normalized pieces of coordinate information. 7. The training method of claim 1, further comprising: extracting feature data from each of regions comprising the contours; and adding the extracted feature data to the generated training data, wherein ...
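The contour normalization described in claim 5 (arithmetic operations using the mean and standard deviation of the keypoint coordinates of each contour) can be sketched as follows; treating x and y jointly in a single standard deviation is an illustrative choice:

```python
import math

def normalize_contour(points):
    # Shift (x, y) keypoints to zero mean and divide by the standard
    # deviation, making the contour translation- and scale-invariant
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sd = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for (x, y) in points) / n)
    if sd == 0:
        return [(0.0, 0.0)] * n
    return [((x - mx) / sd, (y - my) / sd) for (x, y) in points]
```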

Подробнее
04-01-2018 дата публикации

Electronic apparatus and method for controlling the electronic apparatus

Номер: US20180005625A1
Автор: Jeong-Ho Han
Принадлежит: SAMSUNG ELECTRONICS CO LTD

An electronic apparatus is disclosed. The electronic apparatus includes an input unit configured to receive a user input, a storage configured to store a recognition model for recognizing the user input, a sensor configured to sense a surrounding circumstance of the electronic apparatus, and a processor configured to control to recognize the received user input based on the stored recognition model and to perform an operation corresponding to the recognized user input, and update the stored recognition model in response to determining that the performed operation is caused by a misrecognition based on a user input recognized after performing the operation and the sensed surrounding circumstance.

Подробнее
20-01-2022 дата публикации

Neural Network-based Optical Character Recognition

Номер: US20220019832A1
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural network-based optical character recognition. An embodiment of the system may generate a set of bounding boxes based on reshaped image portions that correspond to image data of a source image. The system may merge any intersecting bounding boxes into a merged bounding box to generate a set of merged bounding boxes indicative of image data portions that likely portray one or more words. Each merged bounding box may be fed by the system into a neural network to identify one or more words of the source image represented in the respective merged bounding box. The one or more identified words may be displayed by the system according to a standardized font and a confidence score. 1. A method, comprising: generating a set of bounding boxes based on reshaped image data portions that correspond to image data of a first source image; merging, according to one or more human judgement heuristics, any intersecting bounding boxes into a merged bounding box to generate a first set of merged bounding boxes indicative of first image data portions that likely portray one or more words in the first source image; executing a comparison between the one or more of the merged bounding boxes that correspond to the first source image and one or more ground truth merged bounding boxes; determining a quality of the one or more human judgement heuristics based on generating a matching score resulting from the comparison; and determining an update to the one or more human judgement heuristics. 2. The method of claim 1, wherein the one or more ground truth merged bounding boxes each correspond to a ground truth word from one or more documents in a ground truth corpus. 3. The method of claim 2, wherein each ground truth merged bounding box comprises a bounding box defined by an individual. 4. The method of claim 2, wherein executing a comparison between the one or more of the merged bounding ...
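The merging step (collapsing any intersecting boxes into one) can be sketched with axis-aligned (x0, y0, x1, y1) boxes; iterating until a fixed point is one simple way to honour chains of intersections, and is an illustrative choice rather than the patented heuristics:

```python
def intersects(a, b):
    # Axis-aligned overlap test for (x0, y0, x1, y1) boxes
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def merge_boxes(boxes):
    # Repeatedly replace any intersecting pair with its bounding union
    # until no two boxes intersect
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if intersects(boxes[i], boxes[j]):
                    a, b = boxes[i], boxes[j]
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```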

Подробнее
14-01-2021 дата публикации

INFORMATION PROCESSING DEVICE AND RECOGNITION SUPPORT METHOD

Номер: US20210012139A1
Автор: IKEDA Hiroo
Принадлежит:

In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device comprises a detection unit and an environment acquisition unit. The detection unit detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone. 1-13. (canceled) 14. An information processing device comprising: at least one processor configured to: detect a marker from an image; acquire information representing an accuracy of recognition as an image of the marker recognized at the target area where the marker is disposed; and based on the information, control a display device including a display screen such that a second image is displayed superimposed on a part of the image, the second image corresponding to the information, wherein the second image is changed based on the information. 15. The information processing device according to claim 14, wherein the marker is disposed at an arbitrary place within a target area to be recognized. 16. The information processing device according to claim 14, wherein the at least one processor acquires the information based on image information on the marker itself described in the detected marker. 17. The information processing device according to claim 14, wherein the marker includes a grid pattern comprising black grids and white grids. 18.
A recognition support ...

Подробнее
14-01-2021 дата публикации

Dynamic audiovisual segment padding for machine learning

Номер: US20210012809A1
Принадлежит: International Business Machines Corp

Techniques for padding audiovisual clips (for example, audiovisual clips of sporting events) for the purpose of causing the clip to have a predetermined duration so that the padded clip can be evaluated for viewer interest by a machine learning (ML) algorithm. The unpadded clip is padded with audiovisual segment(s) that will cause the padded clip to have a level of viewer interest that it would have if the unpadded clip had been longer. In some embodiments the padded segments are synthetic images generated by a generative adversarial network such that the synthetic images would have the same level of viewer interest (as adjudged by an ML algorithm) as if the unpadded clip had been shot to be longer.

Подробнее
03-02-2022 дата публикации

TARGET OBJECT IDENTIFICATION METHOD AND APPARATUS

Номер: US20220036141A1
Автор: TIAN Maoqing, WU Jin, YI Shuai
Принадлежит:

Methods, devices, systems, and apparatus for target object identification are provided. In one aspect, a method includes: performing classification on a to-be-identified target object in a target image to determine a prediction category of the to-be-identified target object, determining whether the prediction category is correct according to a hidden layer feature for the to-be-identified target object, and outputting prompt information in response to the prediction category being incorrect. 1. A method of target object identification, the method comprising: performing classification on a to-be-identified target object in a target image to determine a prediction category of the to-be-identified target object; determining whether the prediction category is correct according to a hidden layer feature for the to-be-identified target object; and outputting prompt information in response to determining that the prediction category is incorrect. 2. The method according to claim 1, further comprising: in response to determining that the prediction category is correct, determining the prediction category as a final category of the to-be-identified target object; and outputting the final category of the to-be-identified target object. 3. The method according to claim 1, wherein determining whether the prediction category is correct according to the hidden layer feature of the to-be-identified target object comprises: inputting the hidden layer feature for the to-be-identified target object into an authenticity identification model corresponding to the prediction category, such that the authenticity identification model outputs a probability value, wherein the authenticity identification model corresponding to the prediction category reflects a distribution of hidden layer features for target objects belonging to the prediction category, and the probability value represents a probability that a final category of the to-be-identified target object is the prediction category; ...

18-01-2018 publication date

GENERATING PIXEL MAPS FROM NON-IMAGE DATA AND DIFFERENCE METRICS FOR PIXEL MAPS

Number: US20180018517A1
Author: XU Ying, Zhong Hao
Assignee:

Systems and methods for scalable comparisons between two pixel maps are provided. In an embodiment, an agricultural intelligence computer system generates pixel maps from non-image data by transforming a plurality of values and location values into pixel values and pixel locations. The non-image data may include data relating to a particular agricultural field, such as nutrient content in the soil, pH values, soil moisture, elevation, temperature, and/or measured crop yields. The agricultural intelligence computer system converts each pixel map into a vector of values. The agricultural intelligence computer system also generates a matrix of metric coefficients where each value in the matrix of metric coefficients is computed using a spatial distance between two pixel locations in one of the pixel maps. Using the vectors of values and the matrix of metric coefficients, the agricultural intelligence computer system generates a difference metric identifying a difference between the two pixel maps. In an embodiment, the difference metric is normalized so that the difference metric is scalable to pixel maps of different sizes. The difference metric may then be used to select particular images that best match a measured yield, identify relationships between field values and measured crop yields, identify and/or select management zones, investigate management practices, and/or strengthen agronomic models of predicted yield. 1.
A computing device comprising:a memory;one or more processors communicatively coupled to the memory;one or more instructions stored in the memory, executed by the one or more processors, and configured to cause the one or more processors to perform:obtaining a first pixel map for a predicted agronomic yield of a particular field wherein each pixel of the first pixel map represents an agronomic yield of a crop at a physical location within the particular field;obtaining a second pixel map for a measured agronomic yield of the particular field wherein ...
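A minimal sketch of this quadratic-form comparison in pure Python. The exponential decay of the metric coefficients with spatial distance, and normalization by the number of pixels, are assumptions: the abstract only states that coefficients are computed from a spatial distance between two pixel locations and that the metric is normalized to be scalable across map sizes.

```python
import math

def metric_coefficients(width, height):
    """Matrix of metric coefficients: each entry decays with the spatial
    distance between two pixel locations (assumed exponential decay)."""
    coords = [(x, y) for y in range(height) for x in range(width)]
    return [[math.exp(-math.dist(p, q)) for q in coords] for p in coords]

def difference_metric(map_a, map_b, coeffs):
    """Quadratic-form difference between two flattened pixel maps,
    normalized by the pixel count so it scales across map sizes."""
    d = [a - b for a, b in zip(map_a, map_b)]
    total = sum(d[i] * coeffs[i][j] * d[j]
                for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(max(total, 0.0)) / len(d)

coeffs = metric_coefficients(2, 2)
print(difference_metric([1, 2, 3, 4], [1, 2, 3, 4], coeffs))  # identical maps -> 0.0
```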

21-01-2021 publication date

IMAGE PROCESSING METHOD AND DEVICE, AND STORAGE MEDIUM

Number: US20210019560A1
Assignee:

The present disclosure relates to an image processing method and device, an electronic apparatus and a storage medium. The method comprises: performing feature extraction on an image to be processed to obtain a first feature map of the image to be processed; splitting the first feature map into a plurality of first sub-feature maps according to dimension information of the first feature map and a preset splitting rule, wherein the dimension information of the first feature map comprises dimensions of the first feature map and size of each dimension; performing normalization on the plurality of first sub-feature maps respectively to obtain a plurality of second sub-feature maps; and splicing the plurality of second sub-feature maps to obtain a second feature map of the image to be processed. Embodiments of the present disclosure can reduce the statistical errors during normalization of a complete feature map. 1. An image processing method , wherein the method comprises:performing feature extraction on an image to be processed to obtain a first feature map of the image to be processed;splitting the first feature map into a plurality of first sub-feature maps according to dimension information of the first feature map and a preset splitting rule, the dimension information of the first feature map comprising dimensions of the first feature map and size of each dimension;performing normalization on the plurality of first sub-feature maps respectively to obtain a plurality of second sub-feature maps; andsplicing the plurality of second sub-feature maps to obtain a second feature map of the image to be processed.2. The method according to claim 1 , wherein splitting the first feature map into the plurality of first sub-feature maps according to the dimension information of the first feature map and the preset splitting rule comprises:according to sizes of spatial dimensions of the first feature map and the preset splitting rule, splitting the first feature map in the ...
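The split-normalize-splice pipeline can be illustrated on a 1-D feature map (a deliberately simplified stand-in: the actual method splits a multi-dimensional feature map along its spatial dimensions according to the preset splitting rule):

```python
def normalize(values):
    """Zero-mean, unit-variance normalization of one sub-feature map."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / (var + 1e-5) ** 0.5 for v in values]

def split_normalize_splice(feature_map, split):
    """Split a 1-D feature map into `split` equal first sub-feature maps,
    normalize each separately, then splice the results back in order."""
    size = len(feature_map) // split
    out = []
    for k in range(split):
        out.extend(normalize(feature_map[k * size:(k + 1) * size]))
    return out

# Two regions with very different statistics are normalized independently,
# avoiding the statistical error of normalizing the whole map at once.
fm = [1.0, 2.0, 3.0, 4.0, 100.0, 200.0, 300.0, 400.0]
result = split_normalize_splice(fm, 2)
print(len(result))  # 8
```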

26-01-2017 publication date

METHOD AND DEVICE FOR ADAPTIVE SPATIAL-DOMAIN VIDEO DENOISING

Number: US20170024860A1
Assignee:

The embodiments of the present invention provide a method for adaptive spatial-domain video denoising, including: acquiring the pixel value of each pixel at the same positions of a current frame and a previous adjacent frame thereof so as to calculate the noise intensity of the current pixel; and acquiring the pixel values of adjacent pixels in the up, down, left and right sides of the current pixel in a current frame respectively, calculating the denoising weights of the current pixel and the adjacent pixels in the up, down, left and right sides according to the noise intensity, the pixel value of the current pixel and the pixel values of the adjacent pixels in the up, down, left and right sides, and using a value acquired through weighted average to replace the pixel value of the current pixel so as to maximally preserve frame details while implementing the adaptive spatial-domain denoising of the current pixel. 1. A method for adaptive spatial-domain video denoising, comprising: acquiring the pixel values of all the pixels at the same positions of a current frame and a previous adjacent frame thereof respectively and normalizing the pixel values acquired; calculating the noise intensity of a current pixel according to the pixel value of the current pixel in the current frame and the pixel value of the pixel in the previous adjacent frame at the same position with the current pixel after normalizing; acquiring the pixel values of adjacent pixels in the up, down, left and right sides of the current pixel in the current frame respectively; and performing adaptive spatial-domain denoising on the current pixel according to the noise intensity, the pixel value of the current pixel and the pixel values of the adjacent pixels in the up, down, left and right sides. 2.
The method for adaptive spatial-domain video denoising according to claim 1, wherein calculating the noise intensity of the current pixel further comprises: using a formula L(i, j) = (m ...
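A toy per-pixel version of the described scheme, assuming pixel values normalized to [0, 1]. The specific weighting formula below is an illustrative assumption, since the claim's formula is truncated in this listing:

```python
def denoise_pixel(cur, prev, up, down, left, right):
    """Adaptive spatial-domain denoising of one pixel (values in [0, 1]).

    Noise intensity is taken as the difference between the current frame
    and the previous adjacent frame at the same position; the noisier the
    pixel, the more weight its four neighbours receive.
    """
    noise = abs(cur - prev)            # noise intensity estimate
    w_center = 1.0 - noise             # trust the pixel less when noisy
    w_side = noise / 4.0               # spread remaining weight over 4 neighbours
    return w_center * cur + w_side * (up + down + left + right)

# A temporally stable pixel is left untouched; a jumping pixel is smoothed.
print(denoise_pixel(0.5, 0.5, 0.4, 0.6, 0.4, 0.6))  # 0.5
print(denoise_pixel(1.0, 0.5, 0.5, 0.5, 0.5, 0.5))  # 0.75
```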

25-01-2018 publication date

METHOD AND IMAGE PROCESSING APPARATUS FOR IMAGE-BASED OBJECT FEATURE DESCRIPTION

Number: US20180025239A1
Assignee: Tamkang University

A method and an image processing apparatus for image-based object feature description are provided. In the method, an object of interest in an input image is detected and a centroid and a direction angle of the object of interest are calculated. Next, a contour of the object of interest is recognized and a distance and a relative angle of each pixel on the contour to the centroid are calculated, in which the relative angle of each pixel is calibrated by using the direction angle. Then, a 360-degree range centered on the centroid is equally divided into multiple angle intervals and the pixels on the contour are separated into multiple groups according to a range covered by each angle interval. Afterwards, a maximum among the distances of the pixels in each group is obtained and used as a feature value of the group. Finally, the feature values of the groups are normalized and collected to form a feature vector that serves as a feature descriptor of the object of interest. 1. A method for image-based object feature description , adapted for an electronic apparatus to describe an object feature in an input image , the method comprising:detecting an object of interest in the input image, and calculating a centroid and a direction angle of the object of interest;recognizing a contour of the object of interest, and calculating, among a plurality of pixels on the contour, a distance and a relative angle of each of the pixels to the centroid;calibrating the calculated relative angle of each of the pixels by using the direction angle;dividing a 360-degree range centered on the centroid equally into a plurality of angle intervals, and separating the pixels on the contour into a plurality of groups according to a range covered by each of the angle intervals;obtaining a maximum among the distances of the pixels in each of the groups as a feature value of the group; andnormalizing and collecting the feature values of the groups to form a feature vector that serves as a feature 
...
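The steps above map almost directly to code. This sketch assumes the centroid of the contour pixels approximates the object centroid and uses a zero direction angle by default:

```python
import math

def shape_descriptor(contour, bins=8, direction_angle=0.0):
    """Rotation-calibrated contour feature descriptor.

    contour: list of (x, y) pixels on the object contour.
    """
    cx = sum(p[0] for p in contour) / len(contour)   # centroid x
    cy = sum(p[1] for p in contour) / len(contour)   # centroid y
    features = [0.0] * bins
    for x, y in contour:
        dist = math.hypot(x - cx, y - cy)
        angle = (math.degrees(math.atan2(y - cy, x - cx))
                 - direction_angle) % 360.0          # calibrate by direction angle
        k = int(angle // (360.0 / bins))             # angle-interval group index
        features[k] = max(features[k], dist)         # max distance per group
    peak = max(features) or 1.0
    return [f / peak for f in features]              # normalized feature vector

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(shape_descriptor(square, bins=4))  # [1.0, 1.0, 1.0, 1.0]
```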

10-02-2022 publication date

SYSTEMS AND METHODS FOR 3D IMAGE DISTIFICATION

Number: US20220044069A1
Assignee:

Systems and methods are described for Distification of 3D imagery. A computing device may obtain a three dimensional (3D) image that defines a 3D point cloud used to generate a two dimensional (2D) image matrix. The 2D image matrix may include 2D matrix point(s), where each 2D matrix point can be associated with a horizontal coordinate and a vertical coordinate. The computing device can generate an output feature vector that includes at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image. The 3D point in the 3D point cloud is mapped to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the at least one 2D matrix point of the 2D image matrix. The output feature vector is input into a predictive model. 1. A computing device configured to Distify 3D imagery, the computing device comprising one or more processors configured to: obtain a three dimensional (3D) image, wherein the 3D image defines a 3D point cloud; generate a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generate an output feature vector as a data structure that includes at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image, wherein the 3D point in the 3D point cloud is mapped to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the at least one 2D matrix point of the 2D image matrix, and wherein the output feature vector is input into a predictive model. 2. The computing device of claim 1, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image. 3.
The computing device of claim 2 , wherein the one or more items of ...
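One plausible reading of the 3D-to-2D mapping, sketched under the assumption of a simple orthographic projection of each point's x/y coordinates onto the grid (the abstract does not commit to a particular projection):

```python
def distify(point_cloud, width, height):
    """Map each 3D point to a (column, row) cell of a 2D image matrix and
    emit (col, row, x, y, z) output feature vectors."""
    xs = [p[0] for p in point_cloud]
    ys = [p[1] for p in point_cloud]

    def scale(v, lo, hi, n):
        # Scale a coordinate into grid index range [0, n-1].
        return min(int((v - lo) / ((hi - lo) or 1.0) * (n - 1)), n - 1)

    vectors = []
    for x, y, z in point_cloud:
        col = scale(x, min(xs), max(xs), width)
        row = scale(y, min(ys), max(ys), height)
        vectors.append((col, row, x, y, z))  # 2D matrix point paired with 3D point
    return vectors

cloud = [(0.0, 0.0, 1.0), (1.0, 1.0, 2.0)]
print(distify(cloud, 4, 4))  # [(0, 0, 0.0, 0.0, 1.0), (3, 3, 1.0, 1.0, 2.0)]
```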

23-01-2020 publication date

SYSTEM, PORTABLE TERMINAL DEVICE, SERVER, PROGRAM, AND METHOD FOR VIEWING CONFIRMATION

Number: US20200026921A1
Author: Kurabayashi Shuichi
Assignee: CYGAMES, INC.

The system is for confirming that a user of a portable terminal device has viewed posted material in a plurality of places by visiting one of the posted places, the system including the portable terminal device and a server, the device including a portable-terminal control unit, a portable-terminal communication unit, an image capturing unit, a portable-terminal storage unit, and a position-information obtaining unit, the server including a server control unit, a server communication unit, and a server storage unit that stores authenticated images about the posted material in the individual posted places in association with position information of the posted places, wherein the portable-terminal control unit sends a viewing confirmation request including a viewed image, the normalization information, and the portable-terminal position information to the server by using the portable-terminal communication unit, and the server control unit determines whether the viewed image is valid on the basis of the request. 1. A system used to confirm that a user of a portable terminal device has viewed posted material posted in a plurality of places by visiting one of the posted places , the system being characterized by comprising the portable terminal device and a server , the portable terminal device including a portable-terminal control unit , a portable-terminal communication unit , an image capturing unit , a portable-terminal storage unit , and a position-information obtaining unit , and the server including a server control unit , a server communication unit , and a server storage unit that stores authenticated images about the posted material in the individual posted places in association with position information of the posted places , and characterized in that: obtains portable-terminal position information by using the position-information obtaining unit;', 'compares an image of a space captured by the image capturing unit with a reference-posted-material image ...

23-01-2020 publication date

TEXT LINE NORMALIZATION SYSTEMS AND METHODS

Number: US20200026947A1
Assignee: LEVERTON GmbH

A method for estimating text heights of text line images includes estimating a text height with a sequence recognizer. The method further includes normalizing a vertical dimension and/or position of text within a text line image based on the text height. The method may also further include calculating a feature of the text line image. In some examples, the sequence recognizer estimates the text height with a machine learning model. 1. A computer-implemented method comprising:(a) receiving a first text line image associated with a first line of text contained within a document image;(b) estimating a first text height of the first text line image with a sequence recognizer; and(c) normalizing the first text line image based on the first text height.2. The method of claim 1 , further comprising:calculating a first feature of the first text line image,wherein the sequence recognizer estimates the first text height using the first feature.3. The method of claim 2 , further comprising:calculating a second feature of a second text line image associated with a second line of text contained within the document image,wherein the sequence recognizer estimates the first text height using the first feature and the second feature.4. The method of claim 2 , wherein the first feature includes a feature chosen from the group consisting of a sum of pixels at a plurality of vertical positions within the first text line image claim 2 , a gradient of the sum of pixels at a plurality of vertical positions within the first text line image claim 2 , a statistical moment of a gray value distribution at a plurality of vertical positions of the first text line image claim 2 , and combinations thereof.5. The method of claim 3 , wherein the second feature includes a second text height of the second text line image.6. 
The method of claim 1 , further comprising:calculating a third feature of a plurality of text line images associated with a plurality of lines of text contained within the document ...
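A crude stand-in for the pipeline: estimating text height from per-row ink sums (one of the features named in claim 4) and normalizing the vertical dimension by nearest-neighbour resampling. The patent's sequence recognizer uses a machine learning model for the estimate; this heuristic is only illustrative:

```python
def estimate_text_height(line_image):
    """Estimate text height as the span of rows containing ink pixels
    (a heuristic stand-in for the sequence recognizer's estimate)."""
    row_sums = [sum(row) for row in line_image]
    ink_rows = [i for i, s in enumerate(row_sums) if s > 0]
    return (ink_rows[-1] - ink_rows[0] + 1) if ink_rows else 0

def normalize_line(line_image, target_height):
    """Rescale a binary text line image vertically to target_height
    using nearest-neighbour sampling."""
    h = len(line_image)
    return [line_image[min(int(r * h / target_height), h - 1)]
            for r in range(target_height)]

line = [[0, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 0]]
print(estimate_text_height(line))    # rows 1..2 contain ink -> 2
print(len(normalize_line(line, 8)))  # 8
```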

23-01-2020 publication date

METHOD AND SYSTEM OF EXTRACTION OF IMPERVIOUS SURFACE OF REMOTE SENSING IMAGE

Number: US20200026953A1
Author: SHAO Zhenfeng, Wang Lei
Assignee:

A method of extraction of an impervious surface of a remote sensing image. The method includes: 1) obtaining a remote sensing image of a target region, performing normalization for image data, and dividing the normalized target region image into a sample image and a test image; 2) extracting an image feature of each sample image by constructing a deep convolutional network for feature extraction of the remote sensing image; 3) performing pixel-by-pixel category prediction for each sample image respectively; 4) constructing a loss function by using an error between a prediction value and a true value of the sample image and performing update training for network parameters of the deep convolutional network and network parameters relating to the category prediction; and 5) extracting an image feature from the test image through the deep convolutional network based on the training result obtained in 4). 1. A method , comprising:1) obtaining a remote sensing image of a target region, performing normalization for image data, and dividing the normalized target region image into a sample image and a test image;2) extracting an image feature of each sample image by constructing a deep convolutional network for feature extraction of the remote sensing image, wherein the deep convolutional network is formed by a plurality of convolution layers, pooling layers and corresponding unpooling layers and deconvolution layers;3) performing pixel-by-pixel category prediction for each sample image respectively by using the image feature obtained by extraction;4) constructing a loss function by using an error between a prediction value and a true value of the sample image and performing update training for network parameters of the deep convolutional network and network parameters relating to the category prediction; and5) extracting an image feature from the test image through the deep convolutional network based on a training result obtained in 4), performing the pixel-by-pixel ...

28-01-2021 publication date

Optical neural network unit and optical neural network configuration

Number: US20210027154A1
Author: Eyal Cohen, Zeev Zalevsky
Assignee: BAR ILAN UNIVERSITY

An artificial neuron unit and neural network for processing of input light are described. The artificial neuron unit comprises a modal mixing unit, such as multimode optical fiber, configured for receiving input light and applying selected mixing to light components of two or more modes within the input light and for providing exit light, and a filtering unit configured for applying preselected filter onto said exit light for selecting one or more modes of the exit light thereby providing output light of the artificial neuron unit.

02-02-2017 publication date

METHOD, APPARATUS AND COMPUTER-READABLE MEDIUM FOR IMAGE SCENE DETERMINATION

Number: US20170032189A1
Assignee: Xiaomi Inc.

The present disclosure relates to a method, apparatus and computer-readable medium for image scene determination. Aspects of the disclosure provide a method for image scene determination. The method includes receiving an image to be processed from a gallery associated with a user account, applying an image scene determination model to the image to determine a scene to which the image corresponds, and marking the image with the scene. The method facilitates classification of images in a gallery according to scenes and allows a user to view images according to the scenes, so as to improve the user experience of the gallery. 1. A method for image scene determination, comprising: receiving an image to be processed from a gallery associated with a user account; applying an image scene determination model to the image to determine a scene to which the image corresponds; and marking the image with the scene. 2. The method of claim 1, further comprising: receiving a training sample set, the training sample set including training images respectively corresponding to scenes; initializing a training model with multiple layers according to a neural network, each layer including neuron nodes with feature coefficients between the neuron nodes; and training the feature coefficients between the neuron nodes in each layer of the training model using the training images to determine a trained model for image scene determination. 3. The method of claim 2, further comprising: receiving a test sample set, the test sample set including test images respectively corresponding to the scenes; applying the trained model to each of the test images to obtain scene classification results of the respective test images; and determining a classification accuracy of the trained model according to the scene classification results of the respective test images. 4. The method of claim 3, wherein when the classification accuracy is less than a predefined threshold, the method comprises: ...

04-02-2016 publication date

Methods and Apparatus for Quantifying Inflammation

Number: US20160035091A1
Author: Kubassova Olga
Assignee: Image Analysis Limited

A computer-implemented method and apparatus for quantifying inflammation in tissue or anatomy. The method includes analysing Dynamic Contrast Enhanced MRI data. The analysis comprises determining a value quantifying inflammation in the tissue. The value is a continuous score value and small changes in the inflammation result in a change in the determined value. 2. A method according to claim 1 , wherein the step of analysing comprises determining the first value based on a continuous function being applied to the image data.3. A method according to claim 2 , wherein the first value quantifies the volume of inflammation.4. A method of claim 3 , wherein the step of analysing the image data further comprises determining the first value by quantifying the volume of inflammatory activity in the tissue or anatomy.5. A method of claim 4 , wherein the step of quantifying the volume of inflammatory activity comprises quantifying based on a first continuous function being applied to the image data.6. A method according to claim 1 , wherein the first value quantifies the aggressiveness of inflammatory activity.7. A method of claim 6 , wherein the step of analysing the image data further comprises determining the first value by quantifying the aggressiveness of inflammatory activity in the tissue or anatomy.8. A method of claim 7 , wherein the step of quantifying the aggressiveness comprises quantifying based on a second continuous function being applied to the image data.9. A method according to claim 1 , wherein each image is a magnetic resonance image (MRI).10. A method according to claim 9 , wherein the image is a plurality of temporal magnetic resonance images.11. A method according to claim 1 , wherein each image is a computed axial tomography image or an ultrasound image.12. A method according to claim 1 , wherein the tissue has been exposed to a contrast agent.13. A method according to claim 12 , wherein analysing the data comprises analysing a temporal pattern of ...

09-02-2017 publication date

COMPUTERIZED METHOD AND APPARATUS FOR DETERMINING OBSTRUCTED PORTIONS OF DISPLAYED DIGITAL CONTENT

Number: US20170039443A1
Author: WANG Yongpan, ZHENG Qi
Assignee:

Disclosed are systems and methods for improving interactions with and between computers in content communicating, rendering, generating, hosting and/or providing systems supported by or configured with computing devices, servers and/or platforms. The systems interact to improve the quality of data used in processing interactions between or among processors in such systems for determining obscured portions of displayed digital content. The disclosed method and apparatus involve acquiring and recording coordinates of each pixel in a digital image, and marking the pixels located at a boundary of the image as boundary pixels. The pixels located at a first region block are extracted and marked as obstruction pixels. An obstructed cutting space area corresponding to each pixel is determined based on positional relations of each pixel in the image. An image obstruction score is calculated based on the cutting space areas and utilized for rendering the pixels of the image. 116-. (canceled)17. A method comprising:acquiring, by a computing device over a network, a digital image comprising content associated with a product;analyzing, via the computing device, the digital image, said analysis comprising identifying each pixel in the digital image and recording coordinates of each pixel, said analysis further comprising marking, based on said coordinates the pixels located at a boundary of the digital image as boundary pixels;extracting, via the computing device based on said analysis, a first region block of pixels in the digital image, said extraction comprising marking the pixels located at the first region block as obstructed pixels;calculating, via the computing device, an obstructed cutting space area corresponding to each pixel in the digital image, said calculation based on determined positional relations of each pixel in the digital image, the marked boundary pixels and the marked obstructed pixels; andcalculating, via the computing device, a digital image obstructed 
...

24-02-2022 publication date

SEMANTIC INPUT SAMPLING FOR EXPLANATION (SISE) OF CONVOLUTIONAL NEURAL NETWORKS

Number: US20220058431A1
Assignee:

Embodiments of the present disclosure relate to generating explanation maps for explaining convolutional neural networks through attribution-based input sampling and block-wise feature aggregation. An example of a disclosed method for generating an explanation map for a convolutional neural network (CNN) includes obtaining an input image resulting in an output determination of the CNN, selecting a plurality of feature maps extracted from a plurality of pooling layers of the CNN, generating a plurality of attribution masks based on the plurality of feature maps, applying the generated attribution masks to the input image to obtain a plurality of visualization maps, and generating an explanation map of the output determination of the CNN based on the plurality of visualization maps. 1. A method for outputting an explanation map for an output determination of a convolutional neural network (CNN) based on an input image , the method comprising:extracting a plurality of sets of feature maps from a corresponding plurality of pooling layers of the CNN;obtaining a plurality of attribution masks based on subsets of the plurality of sets of feature maps;applying the plurality of attribution masks to copies of the input image to obtain a plurality of perturbed input images;obtaining a plurality of visualization maps based on confidence scores by inputting the plurality of perturbed copies of the input image to the CNN; andoutputting an explanation map of the output determination of the CNN based on the plurality of visualization maps.2. The method of claim 1 , further comprising identifying the most deterministic feature maps with regard to the input image of each of the plurality of sets of feature maps as a corresponding subset of feature maps.3. 
The method of claim 2, wherein identifying the most deterministic feature maps comprises calculating an average gradient of the model's confidence score with respect to the input image for each feature map, and wherein a ...
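The mask-perturb-aggregate loop can be sketched with 2-D lists and a stub scoring function standing in for the CNN's confidence score; the max-normalization of feature maps into attribution masks is a simplifying assumption:

```python
def sise_explanation(image, feature_maps, score_fn):
    """Aggregate attribution masks into an explanation map (simplified).

    Each feature map is normalized into an attribution mask, the mask
    perturbs the input image, and the confidence score on the perturbed
    image (score_fn, a stand-in for the CNN) weights that mask's
    contribution to the final explanation map.
    """
    h, w = len(image), len(image[0])
    explanation = [[0.0] * w for _ in range(h)]
    for fmap in feature_maps:
        peak = max(max(row) for row in fmap) or 1.0
        mask = [[v / peak for v in row] for row in fmap]       # attribution mask
        perturbed = [[image[i][j] * mask[i][j] for j in range(w)]
                     for i in range(h)]                        # perturbed input
        weight = score_fn(perturbed)                           # confidence score
        for i in range(h):
            for j in range(w):
                explanation[i][j] += weight * mask[i][j]
    return explanation

image = [[1.0, 0.0], [0.0, 1.0]]
fmaps = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
score = lambda img: sum(sum(row) for row in img)   # toy "CNN" confidence
out = sise_explanation(image, fmaps, score)
print(out)  # [[1.0, 0.0], [0.0, 1.0]]
```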

06-02-2020 publication date

APPARATUS AND METHOD FOR IDENTIFYING OBJECT

Number: US20200042831A1
Assignee: LG ELECTRONICS INC.

An artificial intelligence based object identifying apparatus and a method thereof which are capable of easily identifying a type of an object in an image using a small size learning model are disclosed. According to an embodiment of the present disclosure, an object identifying apparatus configured to identify an object from an image includes a receiver configured to receive the image, an image modifier configured to modify the received image by predetermined methods to generate a plurality of modified images, and an object determinator configured to apply the plurality of modified images to a neural network trained to identify an object from the image to obtain a plurality of identification results and determine a type of an object in the received image based on the plurality of identification results. 1. An object identifying apparatus configured to identify an object from an image , the apparatus comprising:a memory; andone or more processors configured to execute instructions stored in the memory, wherein the one or more processors are configured to:receive the image;modify the received image by predetermined methods to generate a plurality of modified images; andapply the plurality of generated modified images to a neural network trained to identify an object from the image to obtain a plurality of identification results and determine a type of an object in the received image based on the plurality of identification results.2. The object identifying apparatus according to claim 1 , wherein modifying the received image comprises at least one of:differently rotating the received image to generate the plurality of modified images; differently removing a noise in the received image to generate a plurality of modified images; differently adjusting a brightness of the received image to generate a plurality of modified images; and differently adjusting a size of the received image to generate a plurality of modified images.3. The object identifying apparatus ...
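The modify-classify-vote scheme described here amounts to test-time augmentation with majority voting. A toy sketch with stub modifiers and a stub classifier (both are illustrative assumptions standing in for the image transforms and trained neural network):

```python
from collections import Counter

def identify_object(image, modifiers, classify):
    """Apply several image modifications, classify each modified image,
    and determine the object type from the plurality of results."""
    results = [classify(m(image)) for m in modifiers]
    return Counter(results).most_common(1)[0][0]   # majority vote

# Toy setup: "images" are lists of numbers; the stub classifier calls
# anything with a positive sum a "dog", otherwise a "cat".
modifiers = [
    lambda img: img,                        # identity
    lambda img: [v * 1.1 for v in img],     # brightness up
    lambda img: [v * 0.9 for v in img],     # brightness down
    lambda img: list(reversed(img)),        # rotation stand-in
]
classify = lambda img: "dog" if sum(img) > 0 else "cat"
print(identify_object([1.0, -0.2, 0.4], modifiers, classify))  # dog
```

Voting over modified copies makes the final decision less sensitive to any single distortion of the input, which is the stated goal of using a small model with multiple identification results.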

18-02-2021 publication date

METHOD AND DEVICE FOR IMAGE PROCESSING, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Number: US20210049403A1

A method for image processing, an electronic device, and a storage medium are provided. The method includes the following. For each processing method in a preset set of processing methods, a first feature parameter and a second feature parameter are determined according to image data to-be-processed, where the preset set includes at least two processing methods selected from whitening methods and/or normalization methods, and the image data to-be-processed includes at least one image data. A first weighted average of the first feature parameters is determined according to a weight coefficient of each first feature parameter, and a second weighted average of the second feature parameters is determined according to a weight coefficient of each second feature parameter. The image data to-be-processed is whitened according to the first weighted average and the second weighted average. 1. A method for image processing , the method comprising:for each processing method in a preset set of processing methods, determining a first feature parameter and a second feature parameter according to image data to-be-processed, wherein the preset set comprises at least two processing methods selected from whitening methods and/or normalization methods, and wherein the image data to-be-processed comprises at least one image data;determining a first weighted average of the first feature parameters according to a weight coefficient of each first feature parameter, and determining a second weighted average of the second feature parameters according to a weight coefficient of each second feature parameter; andwhitening the image data to-be-processed according to the first weighted average and the second weighted average.2. The method of claim 1 , wherein the first feature parameter is an average vector claim 1 , and wherein the second feature parameter is a covariance matrix.3. The method of claim 1 , wherein whitening the image data to-be-processed is executed by a neural network.4. 
The ...
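A simplified sketch of weighting statistics from several estimation methods and whitening with the weighted averages. For brevity it uses per-channel (diagonal) statistics, whereas the claims use an average vector and a full covariance matrix:

```python
def channel_stats(data):
    """Per-channel mean and variance over one batch of image data
    (one of the preset set's estimation methods)."""
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    vars_ = [sum((x - m) ** 2 for x in col) / n
             for col, m in zip(zip(*data), means)]
    return means, vars_

def weighted_whiten(data, stats_list, weights):
    """Whiten image data using a weighted average of first and second
    feature parameters from several processing methods."""
    k = len(stats_list[0][0])
    mean = [sum(w * s[0][c] for w, s in zip(weights, stats_list))
            for c in range(k)]                       # first weighted average
    var = [sum(w * s[1][c] for w, s in zip(weights, stats_list))
           for c in range(k)]                        # second weighted average
    return [[(x - m) / (v + 1e-5) ** 0.5
             for x, m, v in zip(row, mean, var)] for row in data]

data = [[1.0, 10.0], [3.0, 30.0]]
stats = channel_stats(data)
out = weighted_whiten(data, [stats, stats], [0.5, 0.5])
print([round(v, 3) for v in out[0]])  # [-1.0, -1.0]
```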

18-02-2016 publication date

Identification method, identification system, matching device, and program

Number: US20160048730A1
Author: Rui Ishiyama
Assignee: NEC Corp

The present invention addresses the problem of acquiring information regarding a component or product and identifying said component or product easily and inexpensively. The present invention has an image-feature storage, an extracting means, an acquiring means, and an identifying means. The image-feature storage stores image features of texture patterns formed on components or products. The extracting means extracts an n-dimensional-symbol image and a texture-pattern image from a taken image containing at least the following: an n-dimensional symbol (n being a natural number) that represents information regarding a component, a product, or a product comprising said component; and a texture pattern formed on said component or product. The acquiring means acquires, from the extracted n-dimensional-symbol image, the aforementioned information regarding the component or product. The identifying means identifies the component, product, or component-comprising product by matching image features of the extracted texture-pattern image against the image features stored by the image-feature storage.

More details
18-02-2021 publication date

MOVING OBJECT IDENTIFICATION FROM A VIDEO STREAM

Number: US20210049770A1
Assignee:

A video stream moving object identifier takes a series of video frames as input, reduces the scale of the video frames, then performs pixel analysis on the sequential video frames to identify moving objects. Once moving objects are identified, the moving objects are resized according to input rules for a neural network object classifier to make the resized objects the correct size to be input to the neural network object classifier. The moving objects are then sent to a neural network object classifier, which processes the objects and returns an identification of the moving objects. The neural network object classifier can operate using one or more whitelists and one or more blacklists. 1. An apparatus comprising:at least one processor;a memory coupled to the at least one processor;a video stream residing in the memory, wherein the video stream comprises a plurality of frames; anda video stream moving object identifier that processes the plurality of frames in the video stream by performing pixel analysis of sequential frames to identify a moving object, resizes the moving object according to input rules for a neural network object classifier, sends the resized moving object to the neural network object classifier, and receives from the neural network object classifier an identification of the resized moving object.2. The apparatus of wherein the input rules for the neural network object classifier specify pixel size for objects input to the neural network object classifier.3. The apparatus of wherein claim 1 , prior to performing the pixel analysis claim 1 , the video stream moving object identifier reduces scale of the plurality of sequential frames.4. The apparatus of wherein the video stream moving object identifier reduces scale of the plurality of sequential frames by at least half5. The apparatus of wherein the neural network object classifier processes a plurality of input objects in parallel.6. The apparatus of wherein the resizing of the moving object ...
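The pipeline this abstract describes, reduce scale and then run pixel analysis on sequential frames, can be sketched as follows; the function names and the 2x2 block-averaging downscale are illustrative choices, not taken from the patent.

```python
def downscale(frame, factor=2):
    """Reduce scale by averaging factor x factor blocks of pixels."""
    return [[sum(frame[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(len(frame[0]) // factor)]
            for y in range(len(frame) // factor)]

def moving_pixels(prev, curr, threshold=10):
    """Pixel analysis of sequential frames: coordinates whose value changed."""
    return [(y, x)
            for y, row in enumerate(curr)
            for x, v in enumerate(row)
            if abs(v - prev[y][x]) > threshold]

prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
for y in range(2, 6):
    for x in range(2, 6):
        curr[y][x] = 200          # a bright object enters the scene

moving = moving_pixels(downscale(prev), downscale(curr))
```

A bounding box around the changed pixels would then be resized to the pixel size specified by the classifier's input rules before being sent to the neural network.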

More details
14-02-2019 publication date

MACHINE LEARNING IN AGRICULTURAL PLANTING, GROWING, AND HARVESTING CONTEXTS

Number: US20190050948A1
Assignee:

A crop prediction system performs various machine learning operations to predict crop production and to identify a set of farming operations that, if performed, optimize crop production. The crop prediction system uses crop prediction models trained using various machine learning operations based on geographic and agronomic information. Responsive to receiving a request from a grower, the crop prediction system can access information representation of a portion of land corresponding to the request, such as the location of the land and corresponding weather conditions and soil composition. The crop prediction system applies one or more crop prediction models to the access information to predict a crop production and identify an optimized set of farming operations for the grower to perform. 1. A method for optimizing crop productivity comprising:accessing crop growth information describing, for each of a plurality of plots of land, 1) characteristics of the plot of land, 2) a type of crop planted on the plot of land, 3) characteristics of farming operations performed for the planted crop, and 4) a corresponding crop productivity;normalizing the crop growth information by formatting similar portions of the crop growth information into a unified format and a unified scale;storing the normalized crop growth information in a columnar database;training a crop prediction engine by applying one or more one machine learning operations to the stored normalized crop growth information, the crop prediction engine mapping, for a particular crop type, a combination of one or more characteristics of the plot of land and characteristics of farming operations performed for the planted crop to an expected corresponding crop productivity;in response to receiving a request from a grower to optimize crop productivity for a first type of crop and a first portion of land on which the first crop is to be planted, the request identifying a first set of farming operations to be performed by 
...
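Formatting similar portions of the crop growth information onto a unified scale, as the normalization step requires, is often done with min-max scaling; a minimal sketch (the rainfall column is an invented example, not from the patent):

```python
def minmax_normalize(column):
    """Rescale a numeric column to the unified range [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

rainfall_mm = [120.0, 300.0, 210.0]   # hypothetical per-plot weather field
normalized = minmax_normalize(rainfall_mm)
```

After every field is on the same scale, rows from different plots become directly comparable inputs for the crop prediction engine's training step.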

More details
13-02-2020 publication date

SYSTEM AND METHOD FOR AUTOMATIC DETECTION AND VERIFICATION OF OPTICAL CHARACTER RECOGNITION DATA

Number: US20200050848A1
Assignee:

Systems and methods for automatically verifying optical character recognition (OCR) detected text of a native electronic document having an image layer comprising a matrix of pixels and a text layer comprising a sequence of characters. The method includes determining a location of OCR-detected text in the text layer of the native electronic document based on a pixel-based coordinate location of the OCR-detected text in the image layer of the native electronic document. The method also includes applying the location of the OCR-detected text to the text layer of the native electronic document to detect text in the text layer corresponding to the OCR-detected text. The method also includes rendering only the detected text in the text layer as an output when the OCR-detected text does not match the detected text in the text layer, to improve accuracy of the output text. 1. A method for automatically verifying optical character recognition (OCR) detected text of a native electronic document having an image layer comprising a matrix of pixels and a text layer comprising a sequence of characters , the method comprising:determining a location of OCR-detected text in the text layer of the native electronic document based on a pixel-based coordinate location of the OCR-detected text in the image layer of the native electronic document;applying the location of the OCR-detected text to the text layer of the native electronic document to detect text in the text layer corresponding to the OCR-detected text; andrendering only the detected text in the text layer as an output when the OCR-detected text does not match the detected text in the text layer, to improve accuracy of the output text.2. The method of claim 1 , further comprising applying normalization processing to the detected text in the text layer to generate normalized text-layer text claim 1 , andwherein the rendered detected text in the text layer is normalized text-layer text.3. 
The method of claim 1, wherein the OCR ...
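The core verification step, mapping the OCR result's pixel-based coordinates into the text layer and rendering the text-layer version on mismatch, might look like this; the per-character bounding-box representation of the text layer is an assumption made for illustration:

```python
def verify_ocr(ocr_text, ocr_bbox, text_layer):
    """Collect text-layer characters whose boxes overlap the OCR-detected
    text's pixel bbox; prefer the text-layer text when they disagree."""
    def overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
    detected = "".join(ch for bbox, ch in text_layer if overlaps(bbox, ocr_bbox))
    return detected if detected != ocr_text else ocr_text

# Hypothetical text layer: one (bbox, character) span per glyph
text_layer = [((i * 10, 0, (i + 1) * 10, 10), ch)
              for i, ch in enumerate("Invoice")]
result = verify_ocr("lnvoice", (0, 0, 70, 10), text_layer)  # OCR misread I as l
```

Because the text layer of a native electronic document is authoritative, the mismatch resolves in its favor, which is the accuracy improvement the claim targets.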

More details
10-03-2022 publication date

DEVICE AND METHOD FOR TRAINING A NORMALIZING FLOW

Number: US20220076044A1
Assignee:

A computer-implemented method for training a normalizing flow. The normalizing flow predicts a first density value based on a first input image. The first density value characterizes a likelihood of the first input image to occur. The first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow. The intermediate output is determined based on a plurality of weights of the first convolutional layer. The method for training includes: determining a second input image; determining an output, wherein the output is determined by providing the second input image to the normalizing flow and providing an output of the normalizing flow as output; determining a second density value based on the output tensor and on the plurality of weights; determining a natural gradient of the plurality of weights with respect to the second density value; adapting the weights according to the natural gradient. 1. A computer-implemented method for training a normalizing flow , wherein the normalizing flow is configured to predict a first density value based on a first input image , wherein the first density value characterizes a likelihood of the first input image to occur , wherein the first density value is predicted based on an intermediate output of a first convolutional layer of the normalizing flow , and wherein the intermediate output is determined based on a plurality of weights of the first convolutional layer , the method for training comprising the following steps:determining a second input image;determining an output tensor, wherein the output is determined by providing the second input image to the normalizing flow and providing an output of the normalizing flow as the output tensor;determining a second density value based on the output tensor and on the plurality of weights;determining a natural gradient of the plurality of weights with respect to the second density value; andadapting the plurality of weights according to 
...
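The density a normalizing flow predicts comes from the change-of-variables formula. A one-dimensional affine flow makes the idea concrete; note that this sketch trains with an ordinary finite-difference gradient, whereas the patent adapts the weights with the natural gradient, and the flow here is scalar rather than convolutional.

```python
import math

def log_density(x, scale, shift):
    """Change of variables: log p(x) = log N(z; 0, 1) + log|dz/dx|,
    with z = scale * x + shift and a standard-normal base density."""
    z = scale * x + shift
    return -0.5 * (z * z + math.log(2 * math.pi)) + math.log(abs(scale))

data = [4.8, 5.0, 5.2]                 # toy "second input images"

def nll(params):
    s, b = params
    return -sum(log_density(x, s, b) for x in data) / len(data)

params = [1.0, 0.0]
start = nll(params)
eps, lr = 1e-6, 0.005
for _ in range(2000):
    base = nll(params)
    grad = [(nll([params[0] + eps, params[1]]) - base) / eps,
            (nll([params[0], params[1] + eps]) - base) / eps]
    params = [p - lr * g for p, g in zip(params, grad)]
```

Maximizing the density (minimizing negative log-likelihood) pulls the flow toward mapping the data to the base distribution; the natural gradient of the patent reaches the same optimum with better-conditioned steps.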

More details
10-03-2022 publication date

RETINOPATHY RECOGNITION SYSTEM

Number: US20220076420A1
Assignee:

Some embodiments of the disclosure provide a diabetic retinopathy recognition system (S) based on fundus image. According to an embodiment, the system includes an image acquisition apparatus () configured to collect fundus images. The fundus images include target fundus images and reference fundus images taken from a person. The system further includes an automatic recognition apparatus () configured to process the fundus images from the image acquisition apparatus by using a deep learning method. The automatic recognition apparatus automatically determines whether a fundus image has a lesion and outputs the diagnostic result. According to another embodiment, the diabetic retinopathy recognition system (S) utilizes a deep learning method to automatically determine the fundus images and output the diagnostic result. 1. A retinopathy recognition system based on fundus image , comprising:an image acquisition apparatus configured to collect fundus images, the fundus images comprising a target fundus image and a reference fundus image taken from a person; andan automatic recognition apparatus configured to process the fundus images from the image acquisition apparatus using a deep learning method, determine automatically whether a fundus image has a lesion, and output a diagnostic result; the target fundus image and the reference fundus image are respectively used as independent input information; and', 'the target fundus image and the reference fundus image are fundus images of different eyes respectively., 'wherein2. The retinopathy recognition system of claim 1 , wherein the automatic recognition apparatus comprises:a pre-processing module configured to separately pre-process the target fundus image and the reference fundus image;a first neural network module configured to generate a first advanced feature set from the target fundus image;a second neural network module configured to generate a second advanced feature set from the reference fundus image;a feature ...

More details
01-03-2018 publication date

FRAME AGGREGATION NETWORK FOR SCALABLE VIDEO FACE RECOGNITION

Number: US20180060698A1
Assignee:

In a video frame processing system, a feature extractor generates, based on a plurality of data sets corresponding to a plurality of frames of a video, a plurality of feature sets, respective ones of the feature sets including features extracted from respective ones of the data sets. A first stage of the feature aggregator generates a kernel for a second stage of the feature aggregator. The kernel is adapted to content of the feature sets so as to emphasize desirable ones of the feature sets and deemphasize undesirable ones of the feature sets. In the second stage of the feature aggregator the kernel generated by the first stage is applied to the plurality of feature sets to generate a plurality of significances corresponding to the plurality of feature sets. The feature sets are weighted based on corresponding significances and weighted feature sets are aggregated to generate an aggregated feature set. 1. A video frame processing system comprising: receive a plurality of data sets, wherein respective ones of the data sets correspond to respective frames of a video, and', 'generate a plurality of feature sets corresponding to the plurality of data sets, wherein respective ones of the feature sets include corresponding features extracted from respective ones of the data sets; and, 'a feature extractor configured to'} generate, in the first stage based on the plurality of feature sets, a kernel for the second stage, wherein the kernel is adapted to content of the plurality of feature sets so as to emphasize ones of the feature sets and deemphasize other ones of the feature sets, and', applying, to the plurality of feature sets, the kernel to generate a plurality of significances corresponding to the plurality of feature sets,', 'weighing respective ones of the feature sets based on corresponding significances of the plurality of significances to generate a plurality of weighted feature sets, and', 'aggregating the plurality of weighted feature sets to generate the ...
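The two-stage aggregation can be sketched as content-adaptive attention: a kernel derived from the feature sets scores each set, the scores become significances via softmax, and a weighted sum pools the sets. Deriving the kernel by averaging the sets is a simplification standing in for whatever the first stage actually learns.

```python
import math

def aggregate(feature_sets, kernel):
    """Stage 2: score each per-frame feature set with the kernel, turn the
    scores into significances (softmax), and pool by weighted sum."""
    scores = [sum(k * f for k, f in zip(kernel, fs)) for fs in feature_sets]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # significances
    dim = len(feature_sets[0])
    pooled = [sum(w * fs[i] for w, fs in zip(weights, feature_sets))
              for i in range(dim)]
    return pooled, weights

frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]   # last frame is an outlier
# Stage 1 (simplified): adapt the kernel to content by averaging the sets
kernel = [sum(fs[i] for fs in frames) / len(frames) for i in range(2)]
pooled, weights = aggregate(frames, kernel)
```

Frames that agree with the bulk of the video get high significance; the outlier frame is deemphasized, which is exactly the emphasize/deemphasize behavior described above.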

More details
28-02-2019 publication date

MARINE INTRUSION DETECTION SYSTEM AND METHOD

Number: US20190065859A1
Assignee: HITACHI KOKUSAI ELECTRIC INC.

A system detects a candidate for an object which intrudes based on images of a visible light camera and a far-infrared camera that monitor the sea, further derives a size, a velocity, an intrusion direction, and linearity, and identifies the object to some extent. Also, the system distinguishes between a boat, a human, and a floating matter based on the luminances or the like in a far-infrared image. In addition, the system observes the periodicity of normal waves on the sea surface at a location without any object by performing the Fourier transform on the image. The accuracy of identification of an object is improved based on the correlation between the motion of waves in a normal state and the motion of the object. 18-. (canceled)9. A marine intrusion detection system comprising:a sea surface situation acquisition unit that, based on an input image from a video source, automatically estimates attributes of waves including an amplitude and a period of waves on a water surface which is a background of the input image;a difference method based detection unit that generates a reference image from the input image, and detects from the input image a pixel for which a value changes at a higher speed than in the reference image;an under-silhouette-situation detection unit that detects a dark region as an object of interest from the input image which shows a background having a substantially saturated luminance and an object having a substantially dark luminance;a characteristic quantity based detection unit that extracts an image characteristic quantity from the input image, and when the image characteristic quantity matching an object type learned through machine learning in advance is found, outputs the object type;a tracking unit that attaches a label to a region of interest detected by the difference method based detection unit, the under-silhouette-situation detection unit, and the characteristic quantity based detection unit, associates the region of interest with 
...
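Observing the periodicity of normal waves with a Fourier transform can be illustrated with a naive DFT over a one-dimensional luminance trace sampled at a fixed sea-surface location (the sampling setup is an assumption, not specified by the patent).

```python
import math

def dominant_period(signal):
    """Naive DFT; return the period (in samples) of the strongest
    non-DC frequency component."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k

# Luminance at one sea-surface pixel over 64 frames: an 8-frame swell
waves = [math.sin(2 * math.pi * t / 8) for t in range(64)]
```

A region whose motion departs from this baseline period is then more likely to be an intruding object than wave motion, which is the correlation the system exploits.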

More details
28-02-2019 publication date

Training method and device of neural network for medical image processing, and medical image processing method and device

Number: US20190065884A1
Authors: Lvwei WANG, Yingying Li
Assignee: BOE Technology Group Co Ltd

The present disclosure provides a training method and device of a neural network for medical image processing, a medical image processing method and device, and an electronic apparatus for medical image processing based on a neural network. The training method includes performing a pre-processing process on an original image to obtain a pre-processed image, performing a data-augmenting process on the pre-processed image to obtain an augmented image retaining a pathological feature, the augmented image including at least one image with first resolution and at least one image with second resolution being higher than the first resolution, and training the neural network by selecting the image with first resolution and a part-cropping image from the image with second resolution as training samples.

More details
28-02-2019 publication date

DETECTION OF NEAR RECTANGULAR CELLS

Number: US20190065887A1
Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.

A method for image processing is provided. The method includes: generating a skeleton graph associated with the table and comprising a plurality of edges; identifying, in the skeleton graph, a corner vertex, a starting vertex adjacent to the corner vertex, and an ending vertex adjacent to the corner vertex; determining a set of route options for the starting vertex that includes a first set of vertices adjacent to the starting vertex; selecting a candidate vertex from the first set of vertices as a first vertex; determining a set of route options for a second vertex comprising a second set of vertices adjacent to the second vertex; determining the second set of vertices comprises the ending vertex; and generating a route for a cell in the table and comprising the corner vertex, the starting vertex, the first vertex, the second vertex, and the ending vertex. 1. A method for processing an image comprising a table , comprising:processing the image to generate a skeleton graph associated with the table in the image, wherein the skeleton graph comprises a plurality of edges;identifying, in the skeleton graph, a corner vertex, a starting vertex adjacent to the corner vertex, and an ending vertex adjacent to the corner vertex;calculating a travel direction from the corner vertex to the starting vertex; a first set of vertices adjacent to the starting vertex in the skeleton graph;', 'a set of travel directions from the starting vertex to the first set of vertices; and', 'a set of turn costs for the first set of vertices based on the set of travel directions and a perpendicular of the travel direction from the corner vertex to the starting vertex;, 'determining a set of route options for the starting vertex comprisingselecting a candidate vertex from the first set of vertices as a first vertex based on the set of turn costs;determining a set of route options for a second vertex comprising a second set of vertices adjacent to the second vertex in the skeleton graph; ...
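The turn costs computed against "a perpendicular of the travel direction" can be sketched with unit vectors; choosing the clockwise perpendicular (so a route hugs a cell boundary) is my assumption about orientation, not a detail stated in the abstract.

```python
import math

def turn_cost(travel_dir, next_dir):
    """Cost of heading toward next_dir when the preferred heading is the
    clockwise perpendicular of the current travel direction."""
    preferred = (travel_dir[1], -travel_dir[0])   # clockwise perpendicular
    diff = (math.atan2(next_dir[1], next_dir[0])
            - math.atan2(preferred[1], preferred[0]))
    # wrap the angular difference into [0, pi]
    return abs(math.atan2(math.sin(diff), math.cos(diff)))
```

Selecting, at each vertex, the adjacent vertex with the minimum turn cost traces a tight loop around one near-rectangular cell back to the ending vertex.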

More details
28-02-2019 publication date

Systems and methods for obtaining insurance offers using mobile image capture

Number: US20190066224A1
Author: Mike Strange
Assignee: Mitek Systems Inc

Systems and methods for using a mobile device to submit an application for an insurance policy using images of documents captured by the mobile device are provided herein. The information is then used by an insurance company to generate a quote which is then displayed to the user on the mobile device. A user captures images of one or more documents containing information needed to complete an insurance application, after which the information on the documents is extracted and sent to the insurance company where a quote for the insurance policy can be developed. The quote can then be transmitted back to the user. Applications on the mobile device are configured to capture images of the documents needed for an insurance application, such as a driver's license, insurance information card or a vehicle identification number (VIN). The images are then processed to extract the information needed for the insurance application.

More details
28-02-2019 publication date

ENHANCING THE LEGIBILITY OF IMAGES USING MONOCHROMATIC LIGHT SOURCES

Number: US20190066273A1
Assignee: GEORGETOWN UNIVERSITY

A system and method are described for enhancing readability of document images by operating on each document individually. Monochromatic light sources operating at different wavelengths of light can be used to obtain greyscale images. The greyscale images can then be used in any desired image enhancement algorithm. In one example algorithm, an automated method removes image background noise and improves sharpness of the scripts and characters using edge detection and local color contrast computation. 1. A method of enhancing imaging of a document , comprising:capturing a first image of the document using monochromatic light at a first wavelength, wherein the first image includes content including one or more of the following: typed text or hand written text;capturing one or more additional images of the document using monochromatic light at one or more different wavelengths than the first wavelength; andcombining the first image and the one or more additional images by subtraction to remove texture from document and to extract the content.2. The method of claim 1 , wherein the content includes the typed text claim 1 , handwritten text in multiple languages claim 1 , or photographs.3. The method of claim 1 , further including:generating a greyscale image based on the combining;correcting an erroneous plateau comprising computing a mean and standard deviation of original pixel colors of the first image in an island portion of the erroneous plateau;computing a mean and standard deviation of the original pixel colors in a border portion of the erroneous plateau;performing a statistical test on the erroneous plateau to determine if the island portion of the erroneous plateau is part of the border portion; andif the island portion is a part of the border portion, correcting the erroneous plateau so the island portion and the border portion are a same color value.4. 
The method of claim 1, further including generating a greyscale image, wherein the greyscale image ...
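Combining two monochromatic captures by subtraction can be sketched on toy greyscale matrices: paper texture that reflects similarly at both wavelengths cancels, while ink that absorbs at only one wavelength survives as content.

```python
def combine_by_subtraction(img_a, img_b):
    """Subtract two greyscale captures taken at different wavelengths;
    shared texture cancels, wavelength-dependent content remains."""
    return [[max(0, a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

texture = [[100, 100], [100, 100]]
img_a = [row[:] for row in texture]   # wavelength A: ink reflects like paper
img_b = [row[:] for row in texture]
img_b[0][0] = 30                      # wavelength B: ink absorbs at this pixel
extracted = combine_by_subtraction(img_a, img_b)
```

The nonzero residue marks the script or characters, ready for the edge-detection and contrast steps of the enhancement algorithm.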

More details
27-02-2020 publication date

AUTOMATIC SUGGESTION TO SHARE IMAGES

Number: US20200065613A1
Assignee: Google LLC

Some implementations can include a computer-implemented method and/or system for automatic suggestion to share images. The method can include identifying a plurality of images associated with a user and detecting one or more entities in the plurality of images. The method can also include constructing an aggregate feature vector for the plurality of images based on the one or more entities in the plurality of images and determining that the aggregate feature vector matches a first cluster. The method can further include, in response to determining that the aggregate feature vector matches the first cluster, providing a suggestion to the user for an image composition based on the plurality of images. 1identifying a plurality of images associated with a user;detecting one or more entities in the plurality of images;constructing an aggregate feature vector for the plurality of images based on the one or more entities in the plurality of images;determining that the aggregate feature vector matches a first cluster; andin response to determining that the aggregate feature vector matches the first cluster, providing a suggestion to the user for an image composition based on the plurality of images.. A computer-implemented method comprising: This application is a continuation of U.S. patent application Ser. No. 15/352,537, filed Nov. 15, 2016 and titled AUTOMATIC SUGGESTION TO SHARE IMAGES, the entire disclosure of which is incorporated herein by reference.The proliferation of digital image capture devices, such as digital cameras and phones with built-in cameras, permits users to capture a large number of digital images. Users may often remember to share images taken during significant events, e.g., a wedding or graduation. However, users may not remember to share images taken during times other than significant events. Sharing such images may be useful to the users and/or to recipients of the user's shared images.The background description provided herein is for the ...
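Constructing an aggregate feature vector and matching it against a cluster can be sketched with mean pooling and a nearest-centroid lookup; the cluster names and per-image entity scores below are invented for illustration.

```python
def aggregate_vector(entity_vectors):
    """Mean-pool per-image entity vectors into one aggregate feature vector."""
    dim = len(entity_vectors[0])
    n = len(entity_vectors)
    return [sum(v[i] for v in entity_vectors) / n for i in range(dim)]

def nearest_cluster(vec, centroids):
    """Match the aggregate vector to the closest cluster centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: dist2(vec, centroids[name]))

centroids = {"wedding": [1.0, 0.0], "hike": [0.0, 1.0]}   # illustrative clusters
images = [[0.9, 0.1], [1.0, 0.0]]     # detected-entity scores per image
agg = aggregate_vector(images)
match = nearest_cluster(agg, centroids)
```

When the aggregate vector lands close enough to a cluster like "wedding", the implementation would surface a sharing suggestion for that batch of images.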

More details
27-02-2020 publication date

Image Classification System For Resizing Images To Maintain Aspect Ratio Information

Number: US20200065618A1
Author: ZHANG Wei
Assignee:

In an example, an image classification system is disclosed. The image classification system modifies an image having a first height and a first width to be input to a convolutional neural network for image classification. The image classification system includes an image resizing module that is configured to resize the image so that the resized image comprises a second height and a second width. An aspect ratio of the resized image corresponds to an aspect ratio of the image having the first height and the first width. The image classification system also includes an alignment module that is configured to modify pixels of a feature map corresponding to the resized image based upon a comparison of a desired feature map size and an actual feature map size. 1. An image classification system for modifying an image having a first height and a first width to be input to a convolutional neural network for image classification , the image classification system comprising:an image resizing module that is configured to resize the image so that the resized image comprises a second height and a second width, wherein an aspect ratio of the resized image corresponds to an aspect ratio of the image having the first height and the first width; andan alignment module that is configured to modify pixels of a feature map corresponding to the resized image based upon a comparison of a desired feature map size and an actual feature map size.2. The image classification system as recited in claim 1 , wherein the second width is equal to int(a*√{square root over (r)}) claim 1 , where int is an integer operation claim 1 , ais at least one of a desired number of pixels of a width of the resized image or a desired number of pixels of a height of the resized image and r is the aspect ratio of the image having the first height and the first width.4. 
The image classification system as recited in claim 1, wherein the alignment module is further configured to at least one of remove the pixels ...
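The resizing formula above, width = int(a*√r), pairs naturally with height = int(a/√r), so the output keeps the source aspect ratio r while holding roughly a² pixels (the height formula is inferred from the width formula, not quoted from the claims).

```python
import math

def resized_dims(a, r):
    """Target (width, height): width = int(a * sqrt(r)),
    height = int(a / sqrt(r)), where r = source width / source height
    and a sets the pixel budget (output has ~a*a pixels)."""
    return int(a * math.sqrt(r)), int(a / math.sqrt(r))

w, h = resized_dims(224, 400 / 300)   # a 4:3 source image, a = 224
```

Because the integer truncation can leave the feature map a pixel or two off the size the network expects, an alignment step then adds or removes pixels to reconcile the actual and desired feature map sizes.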

More details
11-03-2021 publication date

SYSTEMS AND METHODS FOR MOBILE IMAGE CAPTURE AND PROCESSING OF DOCUMENTS

Number: US20210073786A1
Assignee:

Techniques for processing images of documents captured using a mobile device are provided. The images can include different sides of a document from a mobile device for an authenticated transaction. In an example implementation, a method incudes inspecting the images to detect a feature associated with a first side of the document. In response to determining an image is the first side of the document, a type of content is selected to be analyze on the image of the first side and one or more of regions of interests (ROIs) are identified on the image of the first side that are known to include the selected type of content. A process can include receiving a sub-image of the image of the first side from the preprocessing unit, and performing content detection test on the sub-image. 1. A method comprising using at least one hardware processor to:receive at least one image of a check;identify at least one region of interest in the at least one image of the check;determine whether or not the at least one region of interest contains a specific type of content;when the at least one region of interest does contain the specific type of content, initiate a check deposit process; and,when the at least one region of interest does not contain the specific type of content, alert a user without initiating the check deposit process.2. The method of claim 1 , wherein the at least one region of interest comprises an endorsement area claim 1 , and wherein the specific type of content comprises an endorsement.3. The method of claim 2 , further comprising using the at least one hardware processor to claim 2 , when the at least one region of interest does contain the endorsement claim 2 , determine a type of endorsement in the endorsement area.4. 
The method of claim 2, further comprising using the at least one hardware processor to, when the at least one region of interest does contain the endorsement, compare the endorsement to a reference endorsement stored on a server ...

More details
07-03-2019 publication date

IDENTIFICATION OF INDIVIDUALS IN A DIGITAL FILE USING MEDIA ANALYSIS TECHNIQUES

Number: US20190073520A1
Assignee:

This description describes a system for identifying individuals within a digital file. The system accesses a digital file describing the movement of unidentified individuals and detects a face for an unidentified individual at a plurality of locations in the video. The system divides the digital file into a set of segments and detects a face of an unidentified individual by applying a detection algorithm to each segment. For each detected face, the system applies a recognition algorithm to extract feature vectors representative of the identity of the detected faces which are stored in computer memory. The system applies a recognition algorithm to query the extracted feature vectors for target individuals by matching unidentified individuals to target individuals, determining a confidence level describing the likelihood that the match is correct, and generating a report to be presented to a user of the system. 1. A method for identifying individuals within a video , the method comprising:accessing, from computer memory, a video describing the movement of one or more unidentified individuals over a period of time and comprising one or more frames;dividing the video into a set of segments, wherein each segment describes a part of a frame of the video;adjusting, for each segment, a pixel resolution of the segment to a detection resolution such that a detection algorithm detects a face of one more unidentified individuals within the segment, wherein at the detection resolution a size of the face in the segment increases relative to the size of the face in the frame;responsive to the detection algorithm detecting a face, adjusting, for each segment, the pixel resolution of the segment from the detection resolution to a recognition resolution such that a recognition resolution matches the face of the unidentified individual to a target individual;determining, for each match, a confidence level describing the accuracy of the match between the unidentified individual and 
the ...

More details
15-03-2018 publication date

Innovative anti-bullying approach using emotional and behavioral information from mixed worlds

Number: US20180075293A1

Examples of the disclosure provide for calibrating a virtual reality environment based on data input in response to initial calibration prompts to provide a customized detection phase for a behavior analysis session. User interaction data are received during the customized detection phase and is dynamically pushed through a trained machine learning component to generate a dynamic behavior vector for the behavior analysis session, the dynamic behavior vector updating during the customized detection phase. The virtual reality environment is dynamically modified during the customized detection phase using the dynamic behavior vector.

More details
07-03-2019 publication date

System and method for urine analysis and personal health monitoring

Number: US20190073763A1
Authors: Chun S. Li, Richard Y. Li
Assignee: Individual

A system for urine analysis and personal health monitoring includes a testing box, including a testing chamber, a camera aperture, and a test slot, a color pattern chart, including a bar code, a base color calibration strip, a test result area, and a test lookup area; a diagnostic analysis server, and a diagnostic analysis device, including a processor, a memory, an input output, a diagnostic manager, a camera manager, an image analyzer, a camera calibrator, and a camera; such that the diagnostic analysis device captures an original image of the testing chamber interior with a test strip inserted, and analyzes a diagnostic portion of the original image and calculates a test result. Also disclosed is a method for diagnostic analysis, including installing color pattern chart, depositing sample, inserting test strip, capturing image, performing color calibration, reading barcode, extracting test strip image, performing color conversion, and calculating test result.
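Color calibration against the base color calibration strip can be sketched as per-channel rescaling: a patch of known color is measured, and the ratio between its true and measured values corrects every other pixel. This is a simple von Kries-style model; the patent's actual calibration procedure may differ.

```python
def calibrate(pixel, measured_ref, true_ref=(255, 255, 255)):
    """Per-channel rescaling from the calibration strip: a patch known to be
    true_ref was photographed as measured_ref under the ambient light."""
    return tuple(min(255, round(p * t / m))
                 for p, m, t in zip(pixel, measured_ref, true_ref))

# A white patch photographed as (200, 210, 190) under ambient light; the
# same lighting distorted a test-pad reading of (100, 105, 95).
corrected = calibrate((100, 105, 95), (200, 210, 190))
```

After calibration the channels agree, so the test-pad color can be compared reliably against the test lookup area when calculating the result.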

24-03-2022 publication date

INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Number: US20220092761A1
Assignee: FUJIFILM Business Innovation Corp.

An information processing apparatus includes: a memory storing, in an associated form, part identification information and information on at least one drawing; and a processor configured to receive an image of a drawing, read part identification information from the image, retrieve from the memory the information on the at least one drawing associated with the part identification information read from the image, extract a feature from the image, evaluate a quality of the received drawing by inputting the extracted information on the at least one drawing and the feature to artificial intelligence that has learned to evaluate drawings through machine learning, and display evaluation results of the received drawing on a display. 1. An information processing apparatus comprising:a memory storing, in an associated form, part identification information and information on at least one drawing; anda processor configured to:receive an image of a drawing,read part identification information from the image,retrieve from the memory the information on the at least one drawing associated with the part identification information read from the image,extract a feature from the image,evaluate a quality of the received drawing by inputting the extracted information on the at least one drawing and the feature to artificial intelligence that has learned to evaluate drawings through machine learning, anddisplay evaluation results of the received drawing on a display.2. The information processing apparatus according to claim 1, wherein the memory stores, as the information on the at least one drawing, information indicating the quality of the at least one drawing.3. The information processing apparatus according to claim 2, wherein the memory stores, as the information indicating the quality of the at least one drawing, at least one piece of information selected from a group including a version number of the at least one drawing, ...

21-03-2019 publication date

PROCESSING METHOD OF A 3D POINT CLOUD

Number: US20190086546A1

Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; transforming 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space function of the intensity data of each point, a height 2D space function of an elevation data of each point, and a distance 2D space function of a distance data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space. 1. A method of processing a three-dimensional point cloud, comprising:obtaining a 3D point cloud from a predetermined view point of a depth sensor;extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point;transforming 3D coordinates and intensity data into at least three two-dimensional (2D) spaces, utilizing:an intensity 2D space function of the intensity data of each point;a height 2D space function of an elevation data of each point; anda distance 2D space function of a distance data between each point of the 3D point cloud and the view point; anddefining a single multi-channel 2D space.2. The method of claim 1, wherein the transforming step further includes:detecting background points; andsetting a predetermined default value for detected background points.3. The method of claim 1, further including a training phase that further includes before the transforming step:supplying tags data classifying objects in 1:N classes; andlabelling each extracted point data as belonging to objects of 1:N classes according to the supplied tags data.4. The method of claim 1, wherein the transforming step includes a normalizing step of at least one of the 2D spaces, which includes: ...
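The three 2D spaces in this claim (intensity, height, distance) amount to rendering the cloud into one multi-channel image. The sketch below uses a toy orthographic grid projection with the view point at the origin; the projection and cell size are assumptions, since the claim does not fix a projection.

```python
import math

def cloud_to_channels(points, width, height, cell=0.5):
    """Render (x, y, z, intensity) points into intensity, height, and
    distance channels of a single multi-channel 2D image.

    The view point is assumed at the origin; distance is Euclidean.
    """
    blank = lambda: [[0.0] * width for _ in range(height)]
    intensity, elevation, distance = blank(), blank(), blank()
    for x, y, z, i in points:
        u, v = int(x / cell), int(y / cell)   # toy orthographic projection
        if 0 <= u < width and 0 <= v < height:
            intensity[v][u] = i
            elevation[v][u] = z
            distance[v][u] = math.sqrt(x * x + y * y + z * z)
    return intensity, elevation, distance
```

Background cells keep the default value 0.0, mirroring claim 2's "predetermined default value for detected background points".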

29-03-2018 publication date

Training Image-Recognition Systems Using a Joint Embedding Model on Online Social Networks

Number: US20180089541A1

In one embodiment, a method includes identifying a shared visual concept in visual-media items based on shared visual features in images of the visual-media items; extracting, for each of the visual-media items, n-grams from communications associated with the visual-media item; generating, in a d-dimensional space, an embedding for each of the visual-media items at a location based on the visual concepts included in the visual-media item; generating, in the d-dimensional space, an embedding for each of the extracted n-grams at a location based on a frequency of occurrence of the n-gram in the communications associated with the visual-media items; and associating, with the shared visual concept, the extracted n-grams that have embeddings within a threshold area of the embeddings for the identified visual-media items. 1. A method comprising , by one or more computing systems:identifying a shared visual concept in two or more visual-media items, wherein each visual-media item comprises one or more images, each image comprising one or more visual features, and wherein each visual-media item comprises one or more visual concepts, the shared visual concept being identified based on one or more shared visual features in the respective images of the visual-media items;extracting, for each of the visual-media items, one or more n-grams from one or more communications associated with the visual-media item;generating, in a d-dimensional space, an embedding for each of the visual-media items, wherein a location of the embedding for the visual-media item is based on the one or more visual concepts included in the visual-media item;generating, in the d-dimensional space, an embedding for each of the extracted n-grams, wherein a location of the embedding for the n-gram is based on a frequency of occurrence of the n-gram in the communications associated with the visual-media items; andassociating, with the shared visual concept, one or more of the extracted n-grams that have ...
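The final association step, attaching to the shared visual concept those n-grams whose embeddings fall within a threshold area of the visual-media embeddings, can be sketched with plain Euclidean distance in the d-dimensional space. The distance measure and threshold are assumptions; the patent only requires "within a threshold area".

```python
import math

def associate_ngrams(concept_item_embs, ngram_embs, threshold):
    """Return the n-grams whose embedding lies within `threshold` of
    any embedding of a visual-media item sharing the visual concept.

    concept_item_embs: list of d-dimensional item embeddings;
    ngram_embs: dict n-gram -> d-dimensional embedding.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(
        ng for ng, e in ngram_embs.items()
        if any(dist(e, item) <= threshold for item in concept_item_embs)
    )
```

The n-gram embeddings themselves would come from the frequency-of-occurrence placement described in the abstract.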

29-03-2018 publication date

Training Image-Recognition Systems Based on Search Queries on Online Social Networks

Number: US20180089542A1

In one embodiment, a method includes receiving a plurality of search queries comprising n-grams; identifying a subset of the plurality of search queries as being queries for visual-media items based on one or more n-grams of the search query being associated with visual-media content; calculating, for each of the n-grams of the search queries of the subset, a popularity-score based on a count of the search queries in the subset that include the n-gram; determining popular n-grams, wherein each of the popular n-grams is an n-gram of the search queries of the subset of search queries having a popularity-score greater than a threshold popularity-score; and selecting one or more of the popular n-grams for training a visual-concept recognition system, wherein each of the popular n-grams is selected based on whether it is associated with a visual concept. 1. A method comprising , by one or more computing systems:receiving, from a plurality of client systems of a plurality of users, a plurality of search queries, each search query comprising one or more n-grams;identifying a subset of search queries from the plurality of search queries as being queries for visual-media items, each of the search queries in the subset of search queries being identified based on one or more n-grams of the search query being associated with visual-media content;calculating, for each of the n-grams of the search queries of the subset of search queries, a popularity-score based on a count of the search queries in the subset of search queries that include the n-gram;determining one or more popular n-grams, wherein each of the popular n-grams is an n-gram of the search queries of the subset of search queries having a popularity-score greater than a threshold popularity-score; andselecting one or more of the popular n-grams for training a visual-concept recognition system, wherein each of the popular n-grams is selected based on whether it is associated with one or more visual concepts.2. The ...
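The popularity-score defined here is a document-frequency count over the subset of visual-media queries: each query contributes at most once per n-gram, and n-grams whose score exceeds a threshold are kept. A minimal sketch (the threshold value in the usage below is illustrative):

```python
def popularity_scores(visual_queries):
    # visual_queries: one list of n-grams per visual-media search query
    scores = {}
    for query in visual_queries:
        for ngram in set(query):          # count each query once per n-gram
            scores[ngram] = scores.get(ngram, 0) + 1
    return scores

def popular_ngrams(visual_queries, threshold):
    # keep n-grams whose popularity-score exceeds the threshold
    return {ng for ng, s in popularity_scores(visual_queries).items() if s > threshold}
```

For example, `popular_ngrams([["cat", "photo"], ["cat", "video"], ["dog"]], 1)` keeps only `"cat"`, which appears in two queries.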

21-03-2019 publication date

DATA ACQUIRING APPARATUS, PRINTING APPARATUS, AND GENUINENESS DISCRIMINATING APPARATUS

Number: US20190087678A1
Author: ITO Kensuke
Assignee: FUJI XEROX CO., LTD.

A data acquiring apparatus includes: an acquirer that acquires feature data, as registration data, from an image including an object to be registered, the feature data representing a feature that is distributed in a region of a predetermined size determined based on a position defined by an external shape of the object and a position of printing information that has been printed on the object; and a memory that stores the registration data acquired by the acquirer as data for determining identity of the object. 1.-13. (canceled) 14. A data acquiring apparatus comprising:an acquirer that acquires feature data, as registration data, from an image including an object to be registered, the feature data representing a feature that is distributed in a region of a predetermined size determined based on a position defined by an external shape of the object and a position of printing information that has been printed on the object; anda memory that stores the registration data acquired by the acquirer as data for determining identity of the object.15. A printing apparatus comprising:a printer that performs printing on an object to be registered;an acquirer that acquires feature data, as registration data, from a captured image of the object on which printing has been performed by the printer, the feature data representing a feature distributed in a region of a predetermined size determined based on a position defined by an external shape of the object and a position of printing information that has been printed on the object; anda memory that stores the registration data acquired by the acquirer, as data for determining identity of the object.16.
A genuineness discriminating apparatus comprising:an imager that captures an image including an object to be discriminated;an acquirer that acquires feature data, as collation data, from the image captured by the imager, the feature data representing a feature that is distributed in a region of a predetermined size determined based on ...

21-03-2019 publication date

IMAGE PROCESSING APPARATUS THAT IDENTIFIES CHARACTER PIXEL IN TARGET IMAGE USING FIRST AND SECOND CANDIDATE CHARACTER PIXELS

Number: US20190087679A1
Author: YAMADA Ryuji

In an image processing apparatus, a controller is configured to perform: acquiring target image data representing a target image including a plurality of pixels; determining a plurality of first candidate character pixels from among the plurality of pixels, determination of the plurality of first candidate character pixels being made for each of the plurality of pixels; setting a plurality of object regions in the target image; determining a plurality of second candidate character pixels from among the plurality of pixels, determination of the plurality of second candidate character pixels being made for each of the plurality of object regions according to a first determination condition; and identifying a character pixel from among the plurality of pixels, the character pixel being included in both the plurality of first candidate character pixels and the plurality of second candidate character pixels. 1. An image processing apparatus comprising a controller configured to perform:(a) acquiring target image data representing a target image, the target image including a plurality of pixels;(b) determining a plurality of first candidate character pixels from among the plurality of pixels, each of the plurality of first candidate character pixels being a candidate to be a character pixel, determination of the plurality of first candidate character pixels being made for each of the plurality of pixels;(c) setting a plurality of object regions in the target image, each of the plurality of object regions including a plurality of object pixels;(d) determining a plurality of second candidate character pixels from among the plurality of pixels, each of the plurality of second candidate character pixels being a candidate to be the character pixel, determination of the plurality of second candidate character pixels being made for each of the plurality of object regions according to a first determination condition; and(e) identifying the character pixel from among the 
plurality ...

30-03-2017 publication date

METHOD AND SYSTEM OF LOW-COMPLEXITY HISTOGRAM OF GRADIENTS GENERATION FOR IMAGE PROCESSING

Number: US20170091575A1

Techniques for a system, article, and method of low-complexity histogram of gradients generation for image processing may include histogram of gradients generation for image processing including the following operations: obtaining image data including horizontal and vertical gradient components of individual pixels of an image; associating the horizontal and vertical gradient components of the same pixel with one of a plurality of angular channels depending on the values of the horizontal and vertical gradient components; determining a gradient magnitude and a gradient orientation of individual angular channels after the horizontal and vertical gradient components are assigned to the channels; and generating a histogram of gradients by using the gradient direction and gradient magnitude of the angular channels. 1. A computer-implemented method of histogram of gradients generation for image processing comprising:obtaining image data comprising horizontal and vertical gradient components of individual pixels of an image;associating the horizontal and vertical gradient components of the same pixel with one of a plurality of angular channels depending on the values of the horizontal and vertical gradient components;determining a gradient magnitude and a gradient orientation of individual angular channels after the horizontal and vertical gradient components are assigned to the channels wherein the gradient orientation of individual angular channels is determined using an arctan operation without first performing an arctan operation on the individual pixels; andgenerating a histogram of gradients by using the gradient direction and gradient magnitude of the angular channels.2. The method of wherein the histograms of gradients each having a gradient distribution and are formed for individual cells of about 8×8 pixels of the image.3. The method of wherein associating individual pixels with one of a plurality of angular channels comprises comparing at least one of the ...
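The low-complexity trick claimed above is to bin each pixel's (gx, gy) into an angular channel by comparing the component ratio against precomputed boundary tangents, accumulate the components per channel, and only then take one arctan per channel instead of one per pixel. A minimal sketch; the channel layout over (-π/2, π/2] and the four-channel default are assumptions:

```python
import math

def hog_cell(gradients, n_channels=4):
    """Histogram of gradients for one cell without per-pixel arctan.

    gradients: (gx, gy) pairs, one per pixel of the cell. Channels
    split the orientation range (-pi/2, pi/2] evenly; the boundary
    tangents are computed once, so binning needs no trigonometry.
    """
    step = math.pi / n_channels
    bounds = [math.tan(-math.pi / 2 + step * k) for k in range(1, n_channels)]
    sums = [[0.0, 0.0] for _ in range(n_channels)]
    for gx, gy in gradients:
        if gx < 0:                        # orientation is sign-insensitive
            gx, gy = -gx, -gy
        ratio = gy / gx if gx else (math.inf if gy >= 0 else -math.inf)
        channel = n_channels - 1
        for k, t in enumerate(bounds):
            if ratio <= t:
                channel = k
                break
        sums[channel][0] += gx
        sums[channel][1] += gy
    # one arctan and one magnitude per channel, not per pixel
    return [(math.atan2(sy, sx), math.hypot(sx, sy)) for sx, sy in sums]
```

In a full pipeline this would run per cell of about 8×8 pixels, as claim 2 describes.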

21-03-2019 publication date

IDENTIFYING CONSUMER PRODUCTS IN IMAGES

Number: US20190089931A1
Author: Booth Robert Reed

Systems and methods identify consumer products in images. Known consumer products are captured as grayscale or color images. They are converted to binary at varying thresholds. Connected components in the binary images identify image features according to pixels of a predetermined size, shape, solidity, aspect ratio, and the like. The image features are stored and searched for amongst image features similarly extracted from unknown images of consumer products. Identifying correspondence between the features of the images lends itself to identifying or not known consumer products. 1. A method for identifying consumer products in images having pixels , comprising:receiving first images of known consumer products;determining at least three connected components in the pixels of the first images, including identifying a pixel centroid for each of the at least three connected components; anddevising triangles between the pixel centroids for comparison to connected components in second images of unknown consumer products.2. The method of claim 1 , further including standardizing to a common pixel size said each of the at least three connected components.3. The method of claim 1 , further including binarizing each of the pixels of the first images according to multiple threshold scales to obtain multiple binarized images.4. The method of claim 1 , further including determining at least three connected components in the pixels of the second images claim 1 , including identifying a pixel centroid for each of the at least three connected components in the second images;devising second triangles between the pixel centroids for each of the at least three connected components of the second images; andcomparing the triangles and second triangles for similarity to identify or not possible matches of the known consumer products within the second images, thus identifying or not known consumer products.5. 
The method of claim 1 , further including determining line lengths between each ...
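Comparing triangles devised between connected-component centroids can be made invariant to rotation and vertex labeling by sorting the side lengths into a signature. The sorted-sides signature and tolerance below are one reasonable choice for illustration, not necessarily the claimed comparison:

```python
import math

def centroid(pixels):
    # pixel centroid of one connected component; pixels are (x, y) pairs
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def triangle_signature(c1, c2, c3):
    # sorted side lengths: invariant to rotation and vertex labeling
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return tuple(sorted((d(c1, c2), d(c2, c3), d(c3, c1))))

def similar(sig_a, sig_b, tol=1e-3):
    # triangles match when corresponding sorted sides agree within tol
    return all(abs(a - b) <= tol for a, b in zip(sig_a, sig_b))
```

Signatures from the known-product images can then be searched for among signatures extracted from unknown images.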

06-04-2017 publication date

ANALYSIS OF IMAGE CONTENT WITH ASSOCIATED MANIPULATION OF EXPRESSION PRESENTATION

Number: US20170098122A1
Assignee: AFFECTIVA, INC.

Image content is analyzed in order to present an associated representation expression. Images of one or more individuals are obtained and the processors are used to identify the faces of the one or more individuals in the images. Facial features are extracted from the identified faces and facial landmark detection is performed. Classifiers are used to map the facial landmarks to various emotional content. The identified facial landmarks are translated into a representative icon, where the translation is based on classifiers. A set of emoji can be imported and the representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face. The selected emoji can be static, animated, or cartoon representations of emotion. The individuals can share the selected emoji through insertion into email, texts, and social sharing websites. 1. A computer-implemented method for image analysis comprising:obtaining an image of an individual;identifying a face of the individual;classifying the face to determine facial content using a plurality of image classifiers wherein the classifying includes generating confidence values for a plurality of action units for the face; andtranslating the facial content into a representative icon wherein the translating the facial content includes summing the confidence values for the plurality of action units.2. The method of wherein the summing includes a weighted summation of the confidence values.3. The method of wherein the summing includes negative weights.4. The method of further comprising performing alignment on the face that was identified.5. The method of further comprising performing normalization on the face that was identified.6. The method of wherein the performing normalization includes resizing the face.7. (canceled)8. The method of further comprising determining regions within the face of the individual.9. The method of further comprising performing a statistical mapping for the ...
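The translation step above, a weighted summation (possibly with negative weights) of action-unit confidence values followed by selection of a representative icon, can be sketched as below. The action-unit names and weights are made-up illustrations, not the classifiers described in the patent.

```python
def select_icon(au_confidence, icon_weights):
    """Pick the icon with the largest weighted sum of action-unit
    confidences. Negative weights let an AU argue against an icon."""
    def score(weights):
        return sum(w * au_confidence.get(au, 0.0) for au, w in weights.items())
    return max(icon_weights, key=lambda icon: score(icon_weights[icon]))

# Hypothetical weights: AU12 (lip corner puller) supports a smiley,
# AU4 (brow lowerer) counts against it, and vice versa.
WEIGHTS = {
    "smiley": {"AU12": 1.0, "AU4": -0.5},
    "frowny": {"AU4": 1.0, "AU12": -0.5},
}
```

The selected icon could then stand in for an emoji chosen from an imported set.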

28-03-2019 publication date

METHOD AND SYSTEM FOR IMAGE CONTENT RECOGNITION

Number: US20190095753A1
Author: Mor Noam, WOLF Lior
Assignee: Ramot at Tel-Aviv University Ltd.

A method of recognizing image content, comprises applying to the image a neural network which comprises an input layer for receiving the image, a plurality of hidden layers for processing the image, and an output layer for generating output pertaining to an estimated image content based on outputs of the hidden layers. The method further comprises applying to an output of at least one of the hidden layers a neural network branch, which is independent of the neural network and which has an output layer for generating output pertaining to an estimated error level of the estimate. A combined output indicative of the estimated image content and the estimated error level is generated. 1. A method of recognizing image content , comprising:applying a neural network to the image, said neural network comprising an input layer for receiving the image, a plurality of hidden layers for processing the image, and an output layer for generating output pertaining to an estimated image content based on outputs of said hidden layers;applying a neural network branch to an output of at least one of said hidden layers, said neural network branch being independent of said neural network and having an output layer for generating output pertaining to an estimated error level of said estimate; andgenerating a combined output indicative of the estimated image content and the estimated error level.2. The method according to claim 1 , wherein said neural network branch comprises at least one recurrent layer generating a plurality of output values.3. The method of claim 2 , wherein said at least one recurrent neural layer is a Long Short Term Memory (LSTM) layer.4. The method of claim 3 , wherein said LSTM layer is a bi-directional layer.5. The method according to claim 2 , further comprising summing or averaging said plurality of output values or projections thereof claim 2 , thereby providing said estimated error level.6. 
The method according to claim 1 , wherein said neural network comprises ...

12-04-2018 publication date

GENERATING PIXEL MAPS FROM NON-IMAGE DATA AND DIFFERENCE METRICS FOR PIXEL MAPS

Number: US20180101728A1
Author: XU Ying, Zhong Hao

Systems and methods for scalable comparisons between two pixel maps are provided. In an embodiment, an agricultural intelligence computer system generates pixel maps from non-image data by transforming a plurality of values and location values into pixel values and pixel locations. The non-image data may include data relating to a particular agricultural field, such as nutrient content in the soil, pH values, soil moisture, elevation, temperature, and/or measured crop yields. The agricultural intelligence computer system converts each pixel map into a vector of values. The agricultural intelligence computer system also generates a matrix of metric coefficients where each value in the matrix of metric coefficients is computed using a spatial distance between two pixel locations in one of the pixel maps. Using the vectors of values and the matrix of metric coefficients, the agricultural intelligence computer system generates a difference metric identifying a difference between the two pixel maps. In an embodiment, the difference metric is normalized so that the difference metric is scalable to pixel maps of different sizes. The difference metric may then be used to select particular images that best match a measured yield, identify relationships between field values and measured crop yields, identify and/or select management zones, investigate management practices, and/or strengthen agronomic models of predicted yield. 1.
A computing device comprising:one or more processors;a memory storing instructions which, when executed by the one or more processors, cause the one or more processors to cause performance of:obtaining a first pixel map for a first physical property at a plurality of locations in a particular region;obtaining a second pixel map for a second physical property at the plurality of locations in a particular region, wherein the second pixel map is an equal size as the first pixel map;generating, from the first pixel map, a first vector of values;generating ...
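With each pixel map flattened to a vector and a matrix of metric coefficients built from spatial distances between pixel locations, the difference metric can be written as the quadratic form (u - v)^T A (u - v). The Gaussian decay used for A below is an illustrative assumption; the patent only requires that each coefficient be computed from a spatial distance.

```python
import math

def metric_coefficients(locations, sigma=1.0):
    # A[i][j] decays with the spatial distance between pixel i and pixel j
    return [
        [math.exp(-((xi - xj) ** 2 + (yi - yj) ** 2) / (2 * sigma ** 2))
         for (xj, yj) in locations]
        for (xi, yi) in locations
    ]

def difference_metric(u, v, coeffs):
    # quadratic form (u - v)^T A (u - v) over the vectors of pixel values
    d = [a - b for a, b in zip(u, v)]
    n = len(d)
    return sum(d[i] * coeffs[i][j] * d[j] for i in range(n) for j in range(n))
```

Because A couples nearby pixels, two maps that differ by a small spatial shift score closer than two maps with the same per-pixel differences scattered at random.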

12-04-2018 publication date

IMAGE PROCESSOR, DETECTION APPARATUS, LEARNING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM STORAGE MEDIUM

Number: US20180101741A1
Author: ARAI YUKO, Arata Koji

An image processor includes an image converter. The image converter transforms data of an image that is photographed with a camera for photographing a seat, based on a transformation parameter that is calculated in accordance with a camera-position at which the camera is disposed. The image converter outputs the thus-transformed data of the image. The transformation parameter is a parameter for transforming the data of the image such that an appearance of the seat depicted in the image is approximated to a predetermined appearance of the seat. 1. An image processor comprising an image converter configured to transform data of an image photographed with a camera for photographing a seat , based on a transformation parameter which is calculated in accordance with a camera-position at which the camera is disposed , the image converter further configured to output the transformed data of the image ,wherein the transformation parameter is a parameter for transforming the data of the image such that an appearance of the seat depicted in the image is approximated to a predetermined appearance of the seat.2. The image processor according to claim 1 , further comprising a transformation-parameter memory configured to store the transformation parameter claim 1 ,wherein the image converter acquires the transformation parameter from the transformation-parameter memory.3. The image processor according to claim 1 , further comprising a transformation-parameter receiver configured to acquire the transformation parameter from an outside claim 1 ,wherein the image converter acquires the transformation parameter from the transformation-parameter receiver.4. The image processor according to claim 1 , wherein the transformation parameter is a parameter for transforming the data of the image such that a coordinate of a predetermined point on the seat depicted in the image matches a predetermined coordinate.5. The image processor according to claim 1 , wherein the transformation ...

12-04-2018 publication date

SYSTEMS AND METHODS FOR MOBILE IMAGE CAPTURE AND PROCESSING OF DOCUMENTS

Number: US20180101836A1

Techniques for processing images of documents captured using a mobile device are provided. The images can include different sides of a document from a mobile device for an authenticated transaction. In an example implementation, a method includes inspecting the images to detect a feature associated with a first side of the document. In response to determining an image is the first side of the document, a type of content is selected to be analyzed on the image of the first side and one or more regions of interest (ROIs) are identified on the image of the first side that are known to include the selected type of content. A process can include receiving a sub-image of the image of the first side from the preprocessing unit, and performing a content detection test on the sub-image. 1. A mobile document image processing system, comprising:a preprocessing unit which is configured to:receive images of different sides of a document from a mobile device for an authenticated transaction;inspect the images to detect a feature associated with a first side of the document;in response to determining an image is the first side of the document, select a type of content to be analyzed on the image of the first side;identify one or more regions of interest (ROIs) on the image of the first side that are known to include the selected type of content; andtransmit a sub-image of the image of the first side to a testing unit,wherein the sub-image is an extracted portion smaller than the image of the first side including at least one of the identified ROIs; andthe testing unit which is configured to:receive the sub-image of the image of the first side from the preprocessing unit; andperform at least one content detection test on the sub-image which includes at least one of the identified ROIs to test the quality of the selected type of content in the image of the first side; andgenerate a message to notify the mobile device when the selected type of
...
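Transmitting "a sub-image ... an extracted portion smaller than the image" for an identified ROI is a plain crop of the first-side image. A minimal sketch where the ROI is a (top, left, height, width) tuple, a representation chosen here for illustration:

```python
def extract_roi(image, roi):
    """Crop one region of interest from a row-major image.

    image: list of pixel rows; roi: (top, left, height, width).
    Returns the sub-image to hand to the testing unit.
    """
    top, left, height, width = roi
    return [row[left:left + width] for row in image[top:top + height]]
```

The testing unit would then run its content detection test on the returned rows only, never on the full image.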

26-03-2020 publication date

SYSTEMS AND METHODS FOR MOBILE AUTOMATED CLEARING HOUSE ENROLLMENT

Number: US20200097930A1

Systems and methods for mobile enrollment in automated clearing house (ACH) transactions using mobile-captured images of financial documents are provided. Applications running on a mobile device provide for the capture and processing of images of documents needed for enrollment in an ACH transaction, such as a blank check, remittance statement and driver's license. Data from the mobile-captured images that is needed for enrolling in ACH transactions is extracted from the processed images, such as a user's name, address, bank account number and bank routing number. The user can edit the extracted data, select the type of document that is being captured, authorize the creation of an ACH transaction and select an originator of the ACH transaction. The extracted data and originator information is transmitted to a remote server along with the user's authorization so the ACH transaction can be setup between the originator's and receiver's bank accounts. 1. A computer-readable medium comprising instructions which, when executed by a computer, perform a process of mobile enrollment in an automated clearing house (ACH) transaction, comprising:receiving an identity of at least one originator for the ACH transaction;receiving an image of a document captured by a mobile device of a receiver of the ACH transaction;correcting at least one aspect of the image to create a corrected image;executing one or more image quality assurance tests on the corrected image to assess the quality of the corrected image; andextracting ACH enrollment data from the corrected image that is needed to enroll a user in the ACH transaction between the originator and the receiver. This application is a continuation of U.S. patent application Ser. No. 13/526,532, filed on Jun. 19, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 12/906,036, filed on Oct. 15, 2010, now U.S. Pat. No. 8,577,118, which is a continuation-in-part of U.S.
patent application Ser ...

08-04-2021 publication date

MULTI-MODAL DETECTION ENGINE OF SENTIMENT AND DEMOGRAPHIC CHARACTERISTICS FOR SOCIAL MEDIA VIDEOS

Number: US20210103762A1

A system and method for determining a sentiment, a gender and an age group of a subject in a video while the video is being played back. The video is separated into visual data and audio data, the video data is passed to a video processing pipeline and the audio data is passed to both an acoustic processing pipeline and a textual processing pipeline. The system and method performs, in parallel, a video feature extraction process in the video processing pipeline, an acoustic feature extraction process in the acoustic processing pipeline, and a textual feature extraction process in the textual processing pipeline. The system and method combines a resulting visual feature vector, acoustic feature vector, and a textual feature vector into a single feature vector, and determines the sentiment, the gender and the age group of the subject by applying the single feature vector to a machine learning model. 1. A system determining a sentiment , a gender and an age group of a subject in a video , the system comprising:a video playback device;a display device; anda computer system having circuitry,the circuitry configured towhile the video is being played back by the video playback device on the display device,separate the video into visual data and audio data,pass the video data to a video processing pipeline and pass the audio data to both an acoustic processing pipeline and a textual processing pipeline,perform, in parallel, a video feature extraction process in the video processing pipeline to obtain a visual feature vector, an acoustic feature extraction process in the acoustic processing pipeline to obtain an acoustic feature vector, and a textual feature extraction process in the textual processing pipeline to obtain a textual feature vector,combine the visual feature vector, the acoustic feature vector, and the textual feature vector into a single feature vector, anddetermine the sentiment, the gender and the age group of the subject by applying the single feature ...
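Running the three extraction pipelines in parallel and concatenating their vectors into a single feature vector can be sketched with a thread pool. The stub extractors below are placeholders standing in for the real visual, acoustic, and textual pipelines; their outputs are arbitrary toy values.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub extractors standing in for the three real pipelines (assumptions).
def visual_features(frames):
    return [float(len(frames))]          # e.g. pooled frame embedding

def acoustic_features(audio):
    return [sum(audio) / len(audio)]     # e.g. mean acoustic energy

def textual_features(audio):
    return [1.0]                         # e.g. sentiment of transcript

def extract_single_vector(frames, audio):
    # the three pipelines run in parallel; their vectors are concatenated
    with ThreadPoolExecutor(max_workers=3) as pool:
        fv = pool.submit(visual_features, frames)
        fa = pool.submit(acoustic_features, audio)
        ft = pool.submit(textual_features, audio)
        return fv.result() + fa.result() + ft.result()
```

The concatenated vector would then be fed to the machine learning model that predicts sentiment, gender, and age group.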

Publication date: 29-04-2021

AUTOMATIC RULER DETECTION

Number: US20210124978A1
Assignee: MorphoTrak, LLC

In some implementations, a method includes: receiving, from the camera, a sample image that includes a fingerprint and a mensuration reference device, where the sample image is associated with a resolution; identifying (i) a plurality of edge candidate groups within the sample image, and (ii) a set of regularity characteristics associated with each of the plurality of edge candidate groups; determining that the associated set of regularity characteristics indicates the mensuration reference device; identifying a ruler candidate group, from each of the plurality of edge candidate groups, based at least on determining that the associated set of regularity characteristics indicates the mensuration reference device; computing a scale associated with the sample image based at least on extracting a set of ruler marks from the identified ruler candidate group; and generating, based at least on the scale associated with the sample image, a scaled image. 1. A method comprising:identifying a set of regularity characteristics of an edge candidate group within an orientation map of an image that includes a fingerprint and a mensuration reference device;determining that the set of regularity characteristics match a set of reference regularity characteristics associated with the mensuration reference device;predicting a set of ruler markers from the edge candidate group based on determining that the set of regularity characteristics match a set of reference regularity characteristics of a mensuration reference device; andproviding the set of ruler marks for output.2. The method of claim 1 , further comprising:computing an orientation histogram based on the orientation map;identifying a plurality of matching orientations within an interval centered at a peak of the orientation histogram; andgenerating a plurality of edge candidate groups using edge pixels corresponding to the identified plurality of matching orientations.3. 
The method of claim 2 , wherein the interval centered at ...
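The scale-computation step — deriving pixels-per-unit from extracted ruler marks — can be sketched like this. The mark coordinates and the 1 mm spacing are illustrative assumptions:

```python
import numpy as np

def pixels_per_mm(mark_positions, mark_spacing_mm=1.0):
    """Estimate image scale from detected ruler marks: the median gap
    between consecutive marks, divided by their physical spacing."""
    gaps = np.diff(np.sort(np.asarray(mark_positions, dtype=float)))
    return float(np.median(gaps)) / mark_spacing_mm

# Hypothetical x-coordinates of ruler marks spaced 1 mm apart.
marks = [10.0, 20.1, 29.9, 40.0, 50.2]
scale = pixels_per_mm(marks)  # about 10.1 pixels per millimetre
```

The median gap makes the estimate robust to a few missed or spurious mark detections, which is presumably why a regularity test precedes it in the claimed method.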

Publication date: 11-04-2019

ARTIFICIAL INTELLIGENCE BASED IMAGE DATA PROCESSING METHOD AND IMAGE SENSOR

Number: US20190108410A1
Author: Zhang Guangbin
Assignee:

An image data processing method includes receiving first frame image data at a first resolution, reducing a resolution of the first frame image data to a second resolution, performing image recognition on the first frame image data to determine one or more regions of interest (ROI) and a priority level of each of the ROIs; receiving second frame image data, and extracting portions of the second frame image data corresponding to the one or more ROIs. The method further includes modifying a resolution of the portions of the second frame image data corresponding to the ROIs based on the priority level of the ROIs, reducing a resolution of the received second frame image data to the second resolution, and combining the resolution-modified portions of the second frame image data corresponding to the ROIs with the second frame image data at the second resolution to generate output frame image data. 1. An image data processing method comprising:receiving, from an image sensor, first frame image data of a first frame at a first resolution;reducing a resolution of the first frame image data to a second resolution;performing an artificial intelligence (AI) based image recognition on the first frame image data at the second resolution to determine one or more regions of interest (ROI) and a priority level of each of the one or more ROIs;receiving, from the image sensor, second frame image data of a second frame subsequent to the first frame at the first resolution;extracting portions of the second frame image data corresponding to the one or more ROIs;modifying a resolution of the portions of the second frame image data corresponding to the one or more ROIs based on the priority level of each of the one or more ROIs; andcombining the resolution-modified portions of the second frame image data corresponding to the one or more ROIs with the first frame image data at the second resolution to generate output frame image data.2. 
The method of claim 1 , further comprising modifying ...
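The core resolution-mixing idea — a whole frame reduced to a second, lower resolution plus ROI portions kept sharper — can be sketched with a naive block-average downscale. The frame contents, factor, and ROI coordinates are illustrative assumptions:

```python
import numpy as np

def downscale(img, factor):
    """Block-average downscale by an integer factor (a stand-in for
    the resolution reduction applied to the full frame)."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

frame = np.arange(64, dtype=float).reshape(8, 8)
low_res = downscale(frame, 2)   # whole frame at the second, lower resolution
roi = frame[2:6, 2:6]           # an ROI portion kept at the first resolution
```

In the claimed method the ROI portions would additionally be resolution-modified according to their priority levels before being combined with the low-resolution frame.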

Publication date: 11-04-2019

IMAGE PROCESSING METHOD AND PROCESSING DEVICE

Number: US20190108411A1
Author: Liu Ruitao, Liu Yu
Assignee:

A method including normalizing an original image into an intermediate image, the intermediate image including multiple local blocks; calculating image feature data of the local blocks; calculating weight distribution data corresponding to the local blocks in the intermediate image according to the image feature data, the weight distribution data representing a degree of possibility that the local blocks include part or all of an object; and determining a location area of the object in the original image based on the weight distribution data obtained by calculation. By using the technical solutions in this present disclosure, an object in an image is localized rapidly and efficiently, and a subject area is determined, thereby saving a large amount of work for manually labeling images. 1. A method comprising:normalizing an original image into an intermediate image, the intermediate image including multiple local blocks;calculating respective image feature data of a respective local block of the multiple local blocks;calculating respective weight distribution data corresponding to the respective local block according to the respective image feature data, the respective weight distribution data representing a degree of possibility that the respective local block includes part or all of an object; anddetermining a location area of the object in the original image based on the respective weight distribution data.2. The method of claim 1 , wherein the calculating the respective weight distribution data corresponding to the respective local block includes:processing the respective image feature data by using an attention model.3. The method of claim 2 , further comprises training the attention model by using user search behavior data.4. The method of claim 3 , wherein the training the attention model includes:acquiring training data, the training data including a search text and a clicked image related to a click behavior that occurs based on the search text;calculating ...
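The localization-from-block-weights idea can be sketched as follows: score each local block, keep the high-weight blocks, and take their union as the object's location area. Block variance stands in for the learned attention weight here, which is an assumption for illustration only:

```python
import numpy as np

def localize(img, block=4, thresh=0.5):
    """Score each local block (variance as a stand-in for the attention
    weight) and return the union of high-weight blocks as
    (top, left, bottom, right) pixel coordinates."""
    h, w = img.shape
    scores = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            scores[i, j] = img[i*block:(i+1)*block, j*block:(j+1)*block].var()
    scores /= scores.max() + 1e-9
    ys, xs = np.nonzero(scores >= thresh)
    return (ys.min()*block, xs.min()*block, (ys.max()+1)*block, (xs.max()+1)*block)

img = np.zeros((16, 16))
img[4:8, 4:8] = np.arange(16.0).reshape(4, 4)  # textured "object" region
box = localize(img)  # -> (4, 4, 8, 8)
```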

Publication date: 18-04-2019

DENTAL HEALTH ASSESSMENT ASSISTING APPARATUS AND DENTAL HEALTH ASSESSMENT ASSISTING SYSTEM

Number: US20190110690A1
Author: KAMBARA Masaki
Assignee:

The dental health assessment assisting apparatus includes a gray scale converter configured to convert a fluorescence image obtained by imaging fluorescence of a tooth irradiated with excitation light into a gray scale image, a gray scale value acquiring unit configured to acquire gray scale values of a reference point and a plurality of evaluation points in an image of the tooth in the gray scale image, a normalization unit configured to normalize the gray scale values of the plurality of evaluation points by the gray scale value of the reference point, and a dental health assessment data generator configured to generate dental health assessment data visually representing the gray scale value of the reference point and the gray scale values of the plurality of evaluation points which have been normalized. 1. A dental health assessment assisting apparatus comprising:a gray scale converter configured to convert a fluorescence image obtained by imaging fluorescence of a tooth irradiated with excitation light into a gray scale image;a gray scale value acquiring unit configured to acquire gray scale values of a reference point and a plurality of evaluation points in an image of the tooth in the gray scale image; anda dental health assessment data generator configured to generate dental health assessment data visually representing the gray scale value of the reference point and the gray scale values of the plurality of evaluation points.2. The dental health assessment assisting apparatus according to claim 1 , further comprising:a normalization unit configured to normalize the gray scale values of the plurality of evaluation points by the gray scale value of the reference point, whereinthe dental health assessment data generator is configured to visually represent the gray scale values of the plurality of evaluation points normalized by the normalization unit in the dental health assessment data.3. 
The dental health assessment assisting apparatus according to claim 2 , ...
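The normalization step — dividing each evaluation point's gray-scale value by the reference point's value — reduces to a few lines. The pixel values and point coordinates below are illustrative assumptions:

```python
import numpy as np

def normalize_evaluation_points(gray, reference, evaluations):
    """Divide each evaluation point's gray value by the reference point's value."""
    ref = float(gray[reference])
    return [float(gray[pt]) / ref for pt in evaluations]

gray = np.array([[200.0, 100.0],
                 [ 50.0,  25.0]])
values = normalize_evaluation_points(gray, (0, 0), [(0, 1), (1, 0), (1, 1)])
# values == [0.5, 0.25, 0.125]
```

Normalizing by a reference point on the same tooth makes the assessment data comparable across fluorescence images taken under different exposure conditions.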

Publication date: 09-06-2022

METHOD AND SYSTEM FOR REAL TIME OBJECT DETECTION

Number: US20220180107A1
Assignee:

The present disclosure relates to a method for real-time object detection, the method comprising: capturing an image in vicinity of a vehicle; feeding the captured image to a deep fully convolution neural network; extracting one or more relevant features from the captured image; classifying the extracted features using one or more branches to identify different size of objects; predicting objects present in the image based on a predetermined confidence threshold; and marking the predicted objects in the image. 1. A method for real-time object detection for a host vehicle , the method comprising:capturing an image in vicinity of the host vehicle;feeding the captured image to a deep fully convolution neural network;extracting one or more relevant features from the captured image;classifying the extracted features using one or more branches to identify different size of objects;predicting objects present in the image based on a predetermined confidence threshold;marking the predicted objects in the image; andplotting the marked image on a display.2. The method of claim 1 , wherein the captured image is a Ground Truth (GT) image marked using a Bounding Box annotation tool.3. The method of claim 1 , further comprising reshaping the captured images into a predetermined compatible size claim 1 , while still maintaining the aspect ratio of the objects present in the image which in turn is fed to the deep fully convolution neural network.4. The method of claim 1 , wherein each branch of the deep fully convolutional neural network comprises a different receptive field corresponding to the size of the object.5. The method of claim 1 , wherein classifying includes routing the object having a smaller size early off for the prediction in the deep fully convolution neural network.6. 
The method of claim 1 , wherein the deep fully convolution neural network comprises advanced down sampling technique and down sampling-convolution-receptive block (DCR) technique claim 1 , wherein the ...
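The prediction step — keeping only objects whose confidence meets the predetermined threshold before marking them — can be sketched as a simple filter. The prediction dictionaries and the 0.5 threshold are illustrative assumptions:

```python
def filter_predictions(predictions, confidence_threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [p for p in predictions if p["confidence"] >= confidence_threshold]

# Hypothetical detections: (left, top, right, bottom) boxes with confidences.
predictions = [
    {"box": (12, 30, 80, 96), "confidence": 0.92},
    {"box": (40, 40, 52, 50), "confidence": 0.31},
]
kept = filter_predictions(predictions)  # only the 0.92 detection survives
```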

Publication date: 09-04-2020

IDENTIFYING A LOCAL COORDINATE SYSTEM FOR GESTURE RECOGNITION

Number: US20200110929A1
Author: LERNER Alon, Madmony Maoz
Assignee: Intel Corporation

Identifying a local coordinate system is described for gesture recognition. In one example, a method includes receiving a gesture from a user across a horizontal axis at a depth camera, determining a horizontal vector for the user based on the received user gesture, determining a vertical vector; and determining a rotation matrix to convert positions of user gestures received by the camera to a frame of reference of the user. 120-. (canceled)21. A computer program product including one or more non-transitory machine-readable mediums encoding with instructions that when executed by one or more processors cause a process to be carried out for aligning a first coordinate system of a user with a second coordinate system of a depth camera to facilitate user gesture recognition , the process comprising:determining, based on a first gesture of the user, a horizontal vector that forms a horizontal axis of the first coordinate system, the first gesture of the user captured by the depth camera;determining a vertical vector that forms a vertical axis of the first coordinate system of the user;determining a rotation matrix, based on the horizontal vector and the vertical vector;transforming, using the rotation matrix, a second gesture of the user to a frame of reference of the user, the second gesture captured by the depth camera; andinterpreting, subsequent to the transformation, the second gesture.22. The computer program product of claim 21 , the process further comprising:requesting the user to perform one or more calibration gestures,wherein the first gesture of the user is a calibration gesture captured in response to the request to perform the one or more calibration gestures.23. The computer program product of claim 21 , wherein determining the horizontal vector comprises:determining a leftmost endpoint and a rightmost endpoint defining a path of the first gesture; anddetermining the horizontal vector, based on the leftmost endpoint and the rightmost endpoint.24. 
The ...
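Building a rotation matrix from a horizontal vector (derived from the calibration gesture) and a vertical vector is standard vector algebra. A minimal sketch, with example vectors chosen for illustration:

```python
import numpy as np

def user_rotation_matrix(horizontal, vertical):
    """Build an orthonormal rotation matrix from a user's horizontal
    gesture direction and a vertical direction (e.g. gravity)."""
    x = horizontal / np.linalg.norm(horizontal)
    z = np.cross(x, vertical)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows: the user-frame axes in camera coordinates

R = user_rotation_matrix(np.array([2.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
# With axis-aligned inputs, R is the identity.
```

Multiplying camera-space gesture positions by this matrix transforms them into the user's frame of reference, as the claims describe.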

Publication date: 13-05-2021

Bone Age Assessment And Height Prediction Model, System Thereof And Prediction Method Thereof

Number: US20210142477A1
Assignee: China Medical University Hospital

The present disclosure provides a bone age assessment and height prediction system including an image capturing unit and a non-transitory machine-readable medium. The image capturing unit is for obtaining target x-ray image data of a subject. The non-transitory machine-readable medium is for storing a program that, when executed by a processing unit, assesses the development of the bones of a hand and the bone age of the subject, and predicts the adult height of the subject. Therefore, the bone age assessment and height prediction system of the present disclosure can effectively improve the accuracy and sensitivity of the bone age assessment and the height prediction, and the time for assessing the bone age and predicting the height can be further shortened.

Publication date: 04-05-2017

CASCADED NEURAL NETWORK WITH SCALE DEPENDENT POOLING FOR OBJECT DETECTION

Number: US20170124409A1
Assignee:

A computer-implemented method for training a convolutional neural network (CNN) is presented. The method includes receiving regions of interest from an image, generating one or more convolutional layers from the image, each of the one or more convolutional layers having at least one convolutional feature within a region of interest, applying at least one cascaded rejection classifier to the regions of interest to generate a subset of the regions of interest, and applying scale dependent pooling to convolutional features within the subset to determine a likelihood of an object category. 1. A computer-implemented method for training a convolutional neural network (CNN) , the method comprising:receiving regions of interest from an image;generating one or more convolutional layers from the image, each of the one or more convolutional layers having at least one convolutional feature within a region of interest;applying at least one cascaded rejection classifier to the regions of interest to generate a subset of the regions of interest; andapplying scale dependent pooling to convolutional features within the subset to determine a likelihood of an object category.2. The method of claim 1 , wherein the at least one cascaded rejection classifier rejects non-object proposals at each convolutional layer.3. The method of claim 1 , wherein the at least one cascaded rejection classifier eliminates negative bounding boxes claim 1 , the negative bounding boxes including non-conforming convolutional features.4. The method of claim 1 , wherein generating the one or more convolutional layers from the image is performed once to avoid redundant feature extraction.5. The method of claim 1 , wherein the convolutional features in early convolutional layers are representative of weak classifiers.6. 
The method of claim 1 , wherein the scale dependent pooling determines a scale of each object proposal within each convolutional layer and pools the features from a corresponding convolutional ...
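The cascaded-rejection idea — each stage prunes non-object proposals so later, more expensive stages see fewer candidates — can be sketched generically. The stage predicates and proposal fields are illustrative assumptions, not the patent's actual classifiers:

```python
def cascade_reject(proposals, stages):
    """Run region proposals through a rejection cascade; each stage
    drops proposals, so later stages process fewer candidates."""
    for stage in stages:
        proposals = [p for p in proposals if stage(p)]
    return proposals

stages = [
    lambda p: p["score"] > 0.2,   # early, cheap rejection
    lambda p: p["area"] > 50,     # later, stricter test
]
proposals = [{"score": 0.9, "area": 100},
             {"score": 0.1, "area": 200},
             {"score": 0.8, "area": 10}]
kept = cascade_reject(proposals, stages)  # only the first proposal remains
```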

Publication date: 04-05-2017

UNIVERSAL CORRESPONDENCE NETWORK

Number: US20170124711A1
Assignee:

A computer-implemented method for training a convolutional neural network (CNN) is presented. The method includes extracting coordinates of corresponding points in the first and second locations, identifying positive points in the first and second locations, identifying negative points in the first and second locations, training features that correspond to positive points of the first and second locations to move closer to each other, and training features that correspond to negative points in the first and second locations to move away from each other. 1. A computer-implemented method for training a convolutional neural network (CNN) , the method comprising:extracting coordinates of corresponding points in first and second locations;identifying positive points in the first and second locations;identifying negative points in the first and second locations;training features that correspond to positive points of the first and second locations to move closer to each other; andtraining features that correspond to negative points in the first and second locations to move away from each other.2. The method of claim 1 , wherein the CNN has a fully convolutional spatial transformer for normalizing patches to handle rotation and scaling.3. The method of claim 2 , wherein the convolutional spatial transformer applies spatial transformations to lower layer activations.4. The method of claim 1 , wherein a contrastive loss layer encodes distances between the features of the first and second locations.5. The method of claim 1 , wherein a contrastive loss layer is trained with hard negative mining and by reusing activations in overlapping regions.6. The method of claim 5 , wherein hard negative pairs are mined that violate constraints.7. 
A system for training a convolutional neural network (CNN) claim 5 , the system comprising:a memory; and extract coordinates of corresponding points in the first and second locations;', 'identify positive points in the first and second locations ...
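The training objective described — pulling features of positive (corresponding) points together and pushing features of negative points apart — is the standard contrastive loss. A minimal numeric sketch, with toy feature vectors chosen for illustration:

```python
import numpy as np

def contrastive_loss(f1, f2, is_positive, margin=1.0):
    """Positive pairs incur squared distance (pulled together);
    negative pairs incur loss only while inside the margin (pushed apart)."""
    d = np.linalg.norm(f1 - f2)
    return d ** 2 if is_positive else max(0.0, margin - d) ** 2

a = np.array([0.0, 0.0])
b = np.array([0.1, 0.0])
pos_loss = contrastive_loss(a, b, True)    # small: the features are already close
neg_loss = contrastive_loss(a, b, False)   # large: the pair is well inside the margin
```

Hard negative mining, as the claims mention, would select negative pairs whose loss is largest under this objective.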

Publication date: 25-08-2022

CROSS-VIEW IMAGE OPTIMIZING METHOD, APPARATUS, COMPUTER EQUIPMENT, AND READABLE STORAGE MEDIUM

Number: US20220272312A1
Author: Chen Shihai, Zhu Yingying
Assignee:

Disclosed is a cross-view image optimizing method and apparatus, and a computer equipment and a readable storage medium. The method includes: acquiring a sample image and a pre-trained cross-view image generating model; generating an multi-dimensional cross-view image of the sample image by a multi-dimensional feature extracting module of the first generator to obtain dimension features and cross-view initial images at multiple dimensions; obtaining a multi-dimensional feature map with corresponding dimension features by the second generator; inputting the multi-dimensional feature map to a multi-channel attention module of the second generator for feature extraction and calculating a feature weight of each attention channel, obtaining attention feature images, attention images and feature weights in a preset number of the attention channels; and weighting and summing the attention images and the attention feature images of all the channels according to the feature weights, and obtaining a cross-view target image. 1. A cross-view image optimizing method , comprising:acquiring a sample image and a pre-trained cross-view image generating model, the cross-view image generating model including a first generator and a second generator;generating multi-dimensional cross-view images of the sample image by a multi-dimensional feature extracting module of the first generator to obtain dimension features and cross-view initial images of multiple dimensions;normalizing the dimension features by a residual module of the second generator, and then obtaining optimized features by residual processing based on the cross-view initial images; and performing down-sample processing and up-sample processing on the optimized features followed by splicing to obtain a multi-dimensional feature map;inputting the multi-dimensional feature map to a multi-channel attention module of the second generator for feature extraction and calculating a feature weight of each attention channel, ...
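The final step — weighting and summing the per-channel attention outputs into one target image — can be sketched as a normalized weighted sum. The channel images and weights are illustrative assumptions:

```python
import numpy as np

def merge_attention_channels(channel_images, channel_weights):
    """Weighted sum of per-channel attention outputs into one target image."""
    w = np.asarray(channel_weights, dtype=float)
    w = w / w.sum()                       # normalize the channel weights
    return np.tensordot(w, np.stack(channel_images), axes=1)

channels = [np.full((2, 2), 1.0), np.full((2, 2), 3.0)]
target = merge_attention_channels(channels, [1.0, 1.0])  # every pixel is 2.0
```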

Publication date: 11-05-2017

METHOD FOR UPSCALING AN IMAGE AND APPARATUS FOR UPSCALING AN IMAGE

Number: US20170132759A1
Assignee:

Image super-resolution (SR) generally enhances the resolution of images. One of SR's main challenges is discovering mappings between low-resolution (LR) and high-resolution (HR) image patches. The invention learns patch upscaling projection matrices from a training set of images. Input images are divided into overlapping patches, which are normalized and transformed to a defined orientation. Different transformations can be recognized and dealt with by using a simple 2D-projection. The transformed patches are clustered, and cluster-specific upscaling projection matrices and corresponding cluster centroids determined during training are applied to obtain upscaled patches. The upscaled patches are assembled into an upscaled image. 1. A method for upscaling an input image , comprisingdividing the input image into overlapping patches;normalizing the patches and transposing and/or flipping at least some of the normalized patches to obtain transposed and/or flipped normalized patches that according to predefined orientation characteristics all have the same orientation, wherein transposed and/or flipped normalized patches are obtained; andfor each transposed and/or flipped normalized patch,determining a nearest neighbor patch among centroid patches of a plurality of trained clusters, and determining an upscaling projection matrix associated with the determined nearest neighbor patch;applying the determined upscaling projection matrix to the respective current transposed and/or flipped projected normalized patch, wherein a transposed and/or flipped upscaled normalized patch is obtained;applying inverse transposing and/or inverse flipping and de-normalizing to the upscaled normalized patch, according to said transposing and/or flipping and normalizing of the respective patch, wherein upscaled patches are obtained; andassembling the upscaled patches to obtain an upscaled image, wherein the upscaled patches overlap.2.
The method according to claim 1 , wherein said normalizing the ...
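The per-patch upscaling step — find the nearest trained cluster centroid, then apply that cluster's projection matrix — can be sketched as follows. The centroids and projection matrices are random stand-ins for the trained quantities, and the 2x2-to-4x4 patch sizes are assumptions for illustration:

```python
import numpy as np

def upscale_patch(patch, centroids, projections):
    """Find the nearest cluster centroid for a (normalized) patch and
    apply that cluster's learned upscaling projection matrix."""
    v = patch.ravel()
    k = int(np.argmin([np.linalg.norm(v - c) for c in centroids]))
    return projections[k] @ v

rng = np.random.default_rng(0)
centroids = [rng.random(4), rng.random(4)]                # stand-in trained centroids
projections = [rng.random((16, 4)), rng.random((16, 4))]  # 2x2 -> 4x4 projections
out = upscale_patch(rng.random((2, 2)), centroids, projections)
```

In the full method the upscaled patch would then be inverse-transposed/flipped and de-normalized before being blended into the output image.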

Publication date: 11-05-2017

Edge-Aware Bilateral Image Processing

Number: US20170132769A1
Assignee:

Example embodiments may allow for the efficient, edge-preserving filtering, upsampling, or other processing of image data with respect to a reference image. A cost-minimization problem to generate an output image from the input array is mapped onto regularly-spaced vertices in a multidimensional vertex space. This mapping is based on an association between pixels of the reference image and the vertices, and between elements of the input array and the pixels of the reference image. The problem is then solved to determine vertex disparity values for each of the vertices. Pixels of the output image can be determined based on determined vertex disparity values for respective one or more vertices associated with each of the pixels. This fast, efficient image processing method can be used to enable edge-preserving image upsampling, image colorization, semantic segmentation of image contents, image filtering or de-noising, or other applications. 1. A method comprising:obtaining, by a computing system, (1) a reference image that was captured by a camera device and (2) a target array, wherein the reference image comprises a plurality of pixels that have respective pixel locations in the reference image and respective color variables, wherein the target array comprises target values, and wherein the target values respectively correspond to the pixels of the reference image;associating, by the computing system, the pixels of the reference image with respective vertices in a vertex space, wherein the vertex space comprises two spatial dimensions and a color-space dimension, wherein the association between the pixels of the reference image and the respective vertices is defined by an association matrix, wherein the association matrix comprises a plurality of values of which fewer than half are non-zero;associating, by the computing system, the target values with the respective vertices with which the pixels of the reference image corresponding to the target values are
associated ...
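The pixel-to-vertex association can be illustrated with the "splat" half of bilateral-space processing: values from many pixels are averaged onto their associated regularly-spaced vertices before the per-vertex problem is solved. This sketch simplifies the vertex space to 2-D and uses made-up pixel values and associations:

```python
import numpy as np

def splat(values, vertex_ids, grid_shape):
    """Average per-pixel values onto their associated vertices
    (a simplified 2-D version of the bilateral-space splat step)."""
    total = np.zeros(grid_shape)
    count = np.zeros(grid_shape)
    for v, idx in zip(values, vertex_ids):
        total[idx] += v
        count[idx] += 1
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

pixel_values = [1.0, 3.0, 5.0]
vertices = [(0, 0), (0, 0), (1, 1)]   # pixel-to-vertex association
grid = splat(pixel_values, vertices, (2, 2))
```

Because many pixels share each vertex, solving the cost-minimization problem over vertices is far cheaper than solving it over pixels, which is the efficiency the abstract claims.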

Publication date: 23-04-2020

METHOD OF EXTRACTING FEATURES FROM IMAGE, METHOD OF MATCHING IMAGES USING THE SAME AND METHOD OF PROCESSING IMAGES USING THE SAME

Number: US20200125884A1
Author: SUNG Chang-Hun
Assignee:

In a method of extracting features from an image, a plurality of initial key points are estimated based on an input image. A plurality of descriptors are generated based on a downscaled image that is generated by downscaling the input image. A plurality of feature points are obtained by matching the plurality of initial key points with the plurality of descriptors, respectively. 1. A method of extracting features from an image , the method comprising:estimating a plurality of initial key points based on an input image;generating a plurality of descriptors based on a downscaled image that is generated by downscaling the input image; andobtaining a plurality of feature points by matching the plurality of initial key points with the plurality of descriptors, respectively.2. The method as claimed in claim 1 , wherein generating the plurality of descriptors includes:generating the downscaled image by downscaling the input image using a scaling factor;calculating a plurality of downscaled key points included in the downscaled image based on the scaling factor and the plurality of initial key points, the plurality of downscaled key points corresponding to the plurality of initial key points, respectively; andcalculating the plurality of descriptors for the downscaled image based on a plurality of neighboring points that are adjacent to the plurality of downscaled key points in the downscaled image.3. The method as claimed in claim 2 , wherein an intensity difference between adjacent pixels increases when the input image is converted into the downscaled image.4. The method as claimed in claim 2 , wherein the input image is an 1-channel image.5. The method as claimed in claim 1 , further comprising:converting the input image into an 1-channel image when the input image is not the 1-channel image.6. The method as claimed in claim 5 , wherein converting the input image into the 1-channel image includes:receiving a raw image as the input image from an image pickup device; ...
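The coordinate bookkeeping behind claim 2 — mapping key points estimated on the input image onto the downscaled image using the scaling factor — reduces to a division per coordinate. The key points and factor below are illustrative assumptions:

```python
def downscale_key_points(key_points, scaling_factor):
    """Map key points found on the input image to the downscaled image
    by dividing their coordinates by the scaling factor."""
    return [(x / scaling_factor, y / scaling_factor) for x, y in key_points]

key_points = [(100.0, 40.0), (10.0, 8.0)]
small = downscale_key_points(key_points, 2.0)  # [(50.0, 20.0), (5.0, 4.0)]
```

Descriptors are then computed around these downscaled key points and matched back to the original-resolution key points to form the feature points.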

Publication date: 03-06-2021

Image generation method and computing device

Number: US20210166058A1
Assignee: Ping An Technology Shenzhen Co Ltd

An image generation method and a computing device using the method include creating an image database with a plurality of original images, and obtaining a plurality of first outline images of an object by detecting an outline of the object in each of the original images. Numerous first feature matrices are obtained by calculating a feature matrix of each of the first outline images. A second feature matrix of a second outline image input by a user is calculated. A target feature matrix is selected from the plurality of first feature matrices, the target feature matrix having a minimum difference from the second feature matrix. A target image corresponding to the target feature matrix is retrieved from the image database and displayed. The method and device allow detection of an object outline in an image input by users and the generation of an image with the detected outline.
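The selection step — picking the stored feature matrix with the minimum difference from the user's feature matrix — can be sketched as a nearest-neighbor search under the Frobenius norm (the norm choice and the toy matrices are assumptions for illustration):

```python
import numpy as np

def best_match(query_matrix, database_matrices):
    """Index of the stored feature matrix with the minimum difference
    (Frobenius norm) from the query feature matrix."""
    diffs = [np.linalg.norm(query_matrix - m) for m in database_matrices]
    return int(np.argmin(diffs))

database = [np.eye(2), np.zeros((2, 2))]       # hypothetical stored feature matrices
query = np.array([[0.9, 0.0], [0.0, 1.1]])
idx = best_match(query, database)              # 0: the query is closest to np.eye(2)
```

The matched index would then be used to look up and display the corresponding target image from the image database.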

Publication date: 30-04-2020

UNSUPERVISED CLASSIFICATION OF ENCOUNTERING SCENARIOS USING CONNECTED VEHICLE DATASETS

Number: US20200133269A1
Assignee:

The present disclosure provides a method in a data processing system that includes at least one processor and at least one memory. The at least one memory includes instructions executed by the at least one processor to implement a driving encounter recognition system. The method includes receiving information, from one or more sensors coupled to a first vehicle, determining first trajectory information associated with the first vehicle and second trajectory information associated with a second vehicle, extracting a feature vector, providing the feature vector to a trained classifier, the classifier trained using unsupervised learning based on a plurality of feature vectors, and receiving, from the trained classifier, a classification of the current driving encounter in order to facilitate the first vehicle to perform a maneuver based on the current driving encounter. 1. A method in a data processing system comprising at least one processor and at least one memory , the at least one memory comprising instructions executed by the at least one processor to implement a driving encounter recognition system , the method comprising:receiving information from one or more sensors coupled to a first vehicle;determining, based on the information received from the one or more sensors coupled to the first vehicle, first trajectory information associated with the first vehicle and second trajectory information associated with a second vehicle;extracting, based on a current driving encounter comprising the first trajectory information and the second trajectory information, a feature vector;providing the feature vector to a trained classifier, wherein the classifier was trained using unsupervised learning based on a plurality of feature vectors corresponding to driving encounters; andreceiving, from the trained classifier, a classification of the current driving encounter in order to facilitate the first vehicle to perform a maneuver based on the current driving
encounter.2. ...
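The feature-extraction step — turning the pair of trajectories into a fixed-length vector for the unsupervised classifier — can be sketched with simple separation statistics. These particular features (mean, min, max separation) and the sample trajectories are assumptions for illustration, not the patent's actual feature design:

```python
import numpy as np

def encounter_feature_vector(trajectory_a, trajectory_b):
    """Toy feature vector for a driving encounter: the mean, minimum,
    and maximum separation between two time-aligned trajectories."""
    d = np.linalg.norm(np.asarray(trajectory_a) - np.asarray(trajectory_b), axis=1)
    return np.array([d.mean(), d.min(), d.max()])

ego = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
other = [(0.0, 3.0), (1.0, 4.0), (2.0, 5.0)]
features = encounter_feature_vector(ego, other)  # [4.0, 3.0, 5.0]
```

An unsupervised learner (for example, a clustering algorithm) would then group such vectors into encounter classes without labeled data.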

Publication date: 30-04-2020

IMAGE RETRIEVAL METHODS AND APPARATUSES, DEVICES, AND READABLE STORAGE MEDIA

Number: US20200133974A1
Author: KUANG Zhanghui, ZHANG Wei
Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.

An image retrieval method includes: respectively performing a dimension reduction operation on convolutional layer features of an image to be retrieved to obtain dimension-reduced features; clustering the dimension-reduced features to obtain a plurality of clustering features; performing feature fusion on the plurality of clustering features to obtain a global feature; and retrieving, on the basis of the global feature, the image to be retrieved from a database. 1. An image retrieval method , comprising:respectively performing a dimension reduction operation on convolutional layer features of an image to be retrieved to obtain dimension-reduced features, a dimension of each dimension-reduced feature being smaller than a dimension of a respective one of the convolutional layer features;clustering the dimension-reduced features to obtain a plurality of clustering features;performing feature fusion on the plurality of clustering features to obtain a global feature; andretrieving, on the basis of the global feature, the image to be retrieved from a database.2. The image retrieval method according to claim 1 , further comprising: before the respectively performing a dimension reduction operation on convolutional layer features of an image to be retrieved claim 1 ,inputting the image to be retrieved to a convolutional neural network to obtain the convolutional layer features.3. The image retrieval method according to claim 2 , wherein each of the convolutional layer features represents a feature of a corresponding pixel area in the image to be retrieved.4. The image retrieval method according to claim 1 , wherein the clustering the dimension-reduced features to obtain a plurality of clustering features comprises:clustering the dimension-reduced features on the basis of distances among the dimension-reduced features to obtain a plurality of feature clustering centers; andrespectively sampling, for each of the plurality of feature clustering centers, a maximum value of ...
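The fusion step — combining the clustering features into one global feature — can be sketched with an element-wise maximum, echoing the claim's sampling of a maximum value per clustering center. The cluster vectors and the max-pooling choice are illustrative assumptions:

```python
import numpy as np

def fuse_cluster_features(cluster_features):
    """Fuse per-cluster features into one global descriptor by taking
    the element-wise maximum across clusters."""
    return np.max(np.stack(cluster_features), axis=0)

clusters = [np.array([1.0, 0.2, 0.0]),
            np.array([0.5, 0.9, 0.3])]
global_feature = fuse_cluster_features(clusters)  # [1.0, 0.9, 0.3]
```

Retrieval would then rank database images by the distance between their global features and that of the query image.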

09-05-2019 publication date

USER AUTHENTICATION SYSTEMS AND METHODS

Number: US20190138708A1
Assignee:

Data processing systems and methods for authenticating users and for generating user authentication indications are disclosed. In one embodiment, a data processing system for authenticating a user, comprises: a computer processor and a data storage device, the data storage device storing instructions operative by the processor to: receive a user indication identifying a user; receive an authentication indication for the user, the authentication indication comprising a sequence of word-gesture pair indications, each word-gesture pair indication comprising a word indication and a gesture indication; look up a stored authentication indication for the user; compare the received authentication indication with the stored authentication indication; and generate an authentication result indication indicating the result of the comparison. 1. A data processing system for authenticating a user , the data processing system comprising: receive a user indication identifying a user;', 'receive an authentication indication for the user, the authentication indication comprising a sequence of word-gesture pair indications, each word-gesture pair indication comprising a word indication and a gesture indication;', 'look up a stored authentication indication for the user;', 'compare the received authentication indication with the stored authentication indication; and', 'generate an authentication result indication indicating the result of the comparison., 'a computer processor and a data storage device, the data storage device storing instructions operative by the processor to2. The data processing system according to claim 1 , wherein the gesture indications comprise images of the user or a part of the user or a hand of the user.3. The data processing system according to claim 1 , wherein the sequence of word-gesture pair indications comprises a first word-gesture pair indication and a second word-gesture pair indication claim 1 , the first word-gesture pair indication comprising a ...
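The comparison step of the claim can be sketched as follows. Here a gesture indication is simplified to a string label (in the patent it may be an image of the user's hand), and the function name is an assumption:

```python
def authenticate(received, stored):
    """Accept only when the received sequence of (word, gesture) pairs
    matches the stored authentication indication pair-for-pair, in order."""
    return len(received) == len(stored) and all(
        r == s for r, s in zip(received, stored))
```

Note that the order of the word-gesture pairs matters: the same pairs presented in a different sequence fail the comparison.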

09-05-2019 publication date

AUTOMATIC RULER DETECTION

Number: US20190138840A1
Assignee: MorphoTrak, LLC

In some implementations, a method includes: receiving, from the camera, a sample image that includes a fingerprint and a mensuration reference device, where the sample image is associated with a resolution; identifying (i) a plurality of edge candidate groups within the sample image, and (ii) a set of regularity characteristics associated with each of the plurality of edge candidate groups; determining that the associated set of regularity characteristics indicates the mensuration reference device; identifying a ruler candidate group, from each of the plurality of edge candidate groups, based at least on determining that the associated set of regularity characteristics indicates the mensuration reference device; computing a scale associated with the sample image based at least on extracting a set of ruler marks from the identified ruler candidate group; and generating, based at least on the scale associated with the sample image, a scaled image. 1. A method performed by one or more computers , the method comprising:obtaining, by one or more computers, data indicating (i) an orientation map generated for an image that includes a fingerprint and a mensuration reference device, (ii) a plurality of edge pixels within the image, wherein each edge pixel included in the plurality of edge pixels is associated with a gradient value representing a respective change in pixel intensity of a corresponding pixel in the image with respect to a neighboring pixel along a coordinate axis of the image;identifying, by the one or more computers and within a spatial domain of the orientation map, a plurality of edge candidate groups, wherein each of the plurality of edge candidate groups (i) include two or more edge pixels that have respective orientations satisfying a threshold similarity and (ii) represent regions of the image that are predicted to be occupied by a mensuration reference device;determining, by the one or more computers, that a set of regularity characteristics for a ...
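The scale-computation step ("computing a scale ... based at least on extracting a set of ruler marks") can be illustrated with a toy example: given detected ruler-mark positions in pixels, the pixels-per-millimetre scale follows from the spacing between adjacent marks. The assumed 1 mm physical mark spacing and the function names are illustrative:

```python
from statistics import median

def image_scale(mark_positions_px, mark_spacing_mm=1.0):
    """Pixels-per-mm from detected ruler mark positions.
    The median gap is robust to one missed or spurious mark."""
    xs = sorted(mark_positions_px)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return median(gaps) / mark_spacing_mm

def rescale_length(length_px, scale_px_per_mm):
    # Convert a pixel measurement (e.g. a fingerprint dimension) to mm.
    return length_px / scale_px_per_mm
```

With the scale known, the sample image can be resampled to a target resolution, which is the "generating a scaled image" step of the claim.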

09-05-2019 publication date

DEFECT CLASSIFICATION APPARATUS AND DEFECT CLASSIFICATION METHOD

Number: US20190139210A1
Assignee: HITACHI HIGH-TECHNOLOGIES CORPORATION

Provided is a defect classification apparatus classifying images of defects of a sample included in images obtained by capturing the sample, the apparatus including an image storage unit for storing the images of the sample acquired by an external image acquisition unit, a defect class storage unit for storing types of defects included in the images of the sample, an image processing unit for extracting images of defects from the images from the sample, processing the extracted images of defects and generating a plurality of defect images, a classifier learning unit for learning a defect classifier using the images of defects of the sample extracted by the image processing unit and data of the plurality of generated defect images, and a defect classification unit for processing the images of the sample by using the classifier learned by the classifier learning unit, to classify the images of defects of the sample. 1. A defect classification apparatus comprising:an image storage unit for storing images of a sample;a defect class storage unit for storing types of defects included in the images of the sample;an image processing unit for processing the images of the sample and generating a plurality of images; anda classifier learning unit for learning a defect classifier using the images of the sample and the plurality of images, whereinthe image processing unit performs any of a rotation process, a horizontal inversion process or a class unchangeable deformation process, which is performed while the type of a defect image is unchanged or performs a combination thereof to the images of the sample, and generates the plurality of images, andthe channel information accompanying the plurality of generated images is renewed according to the rotation process or the inversion process.2. (canceled)3. 
A defect classification apparatus comprising:an image storage unit for storing images of a sample;a defect class storage unit for storing types of defects included in the images ...
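The class-preserving augmentations named in the claim (rotation and horizontal inversion, which leave the defect type unchanged) can be sketched for a 2-D image stored as a list of rows; the function names are assumptions:

```python
def rot90(img):
    # Rotate a 2-D image (list of rows) 90 degrees clockwise.
    return [list(col) for col in zip(*img[::-1])]

def hflip(img):
    # Horizontal inversion (mirror each row).
    return [row[::-1] for row in img]

def augment(img):
    """Generate the 8 class-preserving variants: 4 rotations,
    each with and without a horizontal flip."""
    out = []
    cur = img
    for _ in range(4):          # 0, 90, 180, 270 degrees
        out.append(cur)
        out.append(hflip(cur))  # each rotation plus its mirror
        cur = rot90(cur)
    return out
```

For an asymmetric defect image this yields eight distinct training samples from one labelled image, which is the point of the classifier-learning step.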

10-06-2021 publication date

SYSTEM AND METHOD FOR HYPERSPECTRAL IMAGE PROCESSING TO IDENTIFY OBJECT

Number: US20210174490A1
Assignee:

A system includes a memory and at least one processor to acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the object, select a region of interest in the hyperspectral image, the region of interest comprising at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, and compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object. 1a memory; and acquire a hyperspectral image of an object by an imaging device, the hyperspectral image of the object comprising a three-dimensional set of images of the object, each image in the set of images representing the object in a wavelength range of the electromagnetic spectrum;', 'normalize the hyperspectral image of the object;', 'select a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images;', 'extract spectral features from the region of interest in the hyperspectral image;', 'compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the object and determine a value for at least one quality parameter for the object; and', 'identify the object based on the spectral features., 'at least one processor to. A system comprising: This application is a continuation of U.S. patent application Ser. No. 15/977,085, filed May 11, 2018, which is related to and claims priority under 35 U.S.C. § 119(e) to U.S. Patent Application No. 62/521,950, filed Jun. 19, 2017, entitled “SYSTEM AND METHOD FOR CLASSIFYING QUALITY PARAMETERS OF AVOCADOS DURING SUPPLY CHAIN ...
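The normalize → region-of-interest → spectral-feature → compare chain can be sketched for a hyperspectral cube stored as a list of band images. Min-max normalization, the mean-spectrum feature, and nearest-neighbour comparison are illustrative stand-ins (the patent does not fix these choices), as are the function names:

```python
def normalize(cube):
    """Scale the whole cube to [0, 1] (assumes the cube is not constant)."""
    flat = [v for band in cube for row in band for v in row]
    lo, hi = min(flat), max(flat)
    return [[[(v - lo) / (hi - lo) for v in row] for row in band]
            for band in cube]

def roi_spectrum(cube, rows, cols):
    """Spectral feature: mean value of the region of interest in each band."""
    spec = []
    for band in cube:
        vals = [band[r][c] for r in rows for c in cols]
        spec.append(sum(vals) / len(vals))
    return spec

def classify(spectrum, training):
    """Nearest-neighbour comparison against labelled training spectra."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda label: dist(spectrum, training[label]))
```

The same comparison can return a quality-parameter value instead of a label by attaching values to the training spectra.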

10-06-2021 publication date

SYSTEM AND METHOD FOR HYPERSPECTRAL IMAGE PROCESSING TO IDENTIFY FOREIGN OBJECT

Number: US20210174495A1
Assignee:

A system includes a memory and at least one processor to acquire a hyperspectral image of a food object by an imaging device, the hyperspectral image of the food object comprising a three-dimensional set of images of the food object, each image in the set of images representing the food object in a wavelength range of the electromagnetic spectrum, normalize the hyperspectral image of the food object, select a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images, extract spectral features from the region of interest in the hyperspectral image, and compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the food object and determine that the hyperspectral image indicates a foreign object. 1a memory; and acquire a hyperspectral image of a food object by an imaging device, the hyperspectral image of the food object comprising a three-dimensional set of images of the food object, each image in the set of images representing the food object in a wavelength range of the electromagnetic spectrum;', 'normalize the hyperspectral image of the food object;', 'select a region of interest in the hyperspectral image, the region of interest comprising a subset of at least one image in the set of images;', 'extract spectral features from the region of interest in the hyperspectral image; and', 'compare the spectral features from the region of interest with a plurality of images in a training set to determine particular characteristics of the food object and determine that the hyperspectral image indicates a foreign object., 'at least one processor to. A system comprising: This application is a continuation of U.S. patent application Ser. No. 15/977,099, filed May 11, 2018, which is related to and claims priority under 35 U.S.C. § 119(e) to U.S. Patent Application No. 62/521,997, filed Jun. 
19, 2017, entitled “SYSTEM ...

25-05-2017 publication date

IDENTIFYING CONSUMER PRODUCTS IN IMAGES

Number: US20170147900A1
Author: Booth Robert Reed
Assignee:

Systems and methods identify consumer products in images. Known consumer products are captured as grayscale or color images. They are converted to binary at varying thresholds. Connected components in the binary images identify image features according to pixels of a predetermined size, shape, solidity, aspect ratio, and the like. The image features are stored and searched for amongst image features similarly extracted from unknown images of consumer products. Identifying correspondence between the features of the images lends itself to identifying or not known consumer products. 1. In a grayscale or color image having pluralities of pixels corresponding to a consumer product , a method of normalizing image features thereof , comprising:converting to binary the pixels of the grayscale or color image according to three different thresholds to obtain three binary images having pluralities of binary pixels;identifying connected components in the binary pixels of each of the three binary images, each of the connected components being a subset of the binary pixels of the binary images;filtering the connected components in each of the three binary images to exclude from a set of image features of the binary images the connected components not meeting certain predefined limits; andin the set of image features, standardizing a size of each of the image features to be equal to others of the image features.2. The method of claim 1 , wherein the standardizing the size further includes making said each of the image features into a 30×30 pixel region.3. The method of claim 1 , further including determining a centroid of said each of the image features in the three binary images.4. The method of claim 3 , further including determining a pixel distance away from the centroid for each pixel of said each of the image features.5. The method of claim 4 , further including determining an average pixel distance away from said centroid for said each of the image features.6. 
The method of ...
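The binarize → connected-components → centroid steps of this method can be sketched with a breadth-first flood fill (the standardization to a 30×30 region is omitted; function names are assumptions):

```python
from collections import deque

def binarize(gray, threshold):
    # One of the varying thresholds applied to the grayscale image.
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def connected_components(binary):
    """4-connected components of foreground pixels; each component is one
    candidate image feature, later filtered by size/shape/solidity."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def centroid(comp):
    ys = [p[0] for p in comp]
    xs = [p[1] for p in comp]
    return (sum(ys) / len(ys), sum(xs) / len(xs))
```

Running this at three different thresholds, as the claim requires, yields three binary images and three candidate feature sets per input image.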

31-05-2018 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20180150712A1
Author: Aoki Takahiro
Assignee: FUJITSU LIMITED

An image processing apparatus approximates a three-dimensional shape of a target object, acquired from an image of the target object, by a quadric surface, applies a plane expansion process that expands the quadric surface into an expansion plane, to data of the image, computes corresponding points before and after the plane expansion process, and generates normalized data of the image from the corresponding points that are computed. The corresponding points are computed based on a point on a straight line of the expansion plane in contact with a point on the quadric surface and corresponding to a reference point on the target object, and an intersection point of an imaginary straight line, passing through the point on the straight line and extending in a direction of a first normal vector with respect to the straight line, and a quadratic curve of the quadric surface. 1. An image processing apparatus comprising:a storage configured to store a program; and approximating a three-dimensional shape of a target object, acquired from an image of the target object, by a quadric surface,', 'applying a plane expansion process that expands the quadric surface into an expansion plane, to data of the image,', 'computing corresponding points before and after the plane expansion process, and', 'generating normalized data of the image from the corresponding points that are computed,, 'a processor configured to execute the program and perform a process including'}wherein the computing the corresponding points computes the corresponding points, based on a point on a straight line of the expansion plane in contact with a point on the quadric surface and corresponding to a reference point on the target object, and an intersection point of a first imaginary straight line, passing through the point on the straight line and extending in a direction of a first normal vector with respect to the straight line, and a quadratic curve of the quadric surface.2. The image processing ...

31-05-2018 publication date

A method and a device for extracting local features of a three-dimensional point cloud

Number: US20180150714A1
Assignee:

A method and a device for extracting local features of a 3D point cloud are disclosed. Angle information and the concavo-convex information about a feature point to be extracted and a point of an adjacent body element are calculated based on a local reference system corresponding to the points of each body element. The feature relation between the two points can be calculated accurately. The property of invariance in translation and rotation is possessed. Since concavo-convex information about a local point cloud is contained during extraction, the inaccurate extraction caused by ignoring concavo-convex ambiguity in previous 3D local feature description is resolved. During normalization processing, exponential normalization processing and second-normal-form normalization are adopted, which solves the problem of inaccurate similarity calculation caused by a circumstance that a few elements in a vector are too large or too small during feature extraction, thus improving accuracy of extracted three-dimensional local features. 1. A method for extracting local features of a 3D point cloud , comprising:calculating angle information about a local feature point to be extracted and points of each body element in a pre-set point cloud sphere;calculating concavo-convex information about a curved surface between the local feature point to be extracted and the points of each body element respectively, wherein the pre-set point cloud sphere contains various body elements, and the body element is adjacent to the local feature point to be extracted;computing histogram statistics according to the angle information and the concavo-convex information;generating a histogram corresponding to each body element;connecting various histograms corresponding to each body element in the pre-set point cloud sphere on a one-to-one basis, to obtain an extracted vector; andperforming exponential normalization processing and second-normal-form normalization processing on the extracted vector.2. 
The ...
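The two normalization steps named in the method (exponential normalization followed by second-norm normalization) can be sketched directly; the exponent value 0.5 is an assumed choice, not specified by the source:

```python
import math

def exp_normalize(vec, alpha=0.5):
    """Exponential normalization: raising each element to a power < 1 damps
    over-large elements and lifts very small ones, which is the stated fix
    for inaccurate similarity when a few elements dominate."""
    return [math.copysign(abs(v) ** alpha, v) for v in vec]

def l2_normalize(vec):
    """Second-norm (L2) normalization to unit length."""
    n = math.sqrt(sum(v * v for v in vec))
    return [v / n for v in vec] if n else vec
```

For example, a raw ratio of 10000:1 between the largest and smallest element is compressed to 100:1 before the unit-length scaling, so no single bin dominates a cosine-style similarity.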

01-06-2017 publication date

IMAGE PROCESSING DEVICE, IMAGING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

Number: US20170154236A1
Assignee: SONY CORPORATION

To accurately determine whether there is a sharp change in a frame. 1. An image processing device comprising:a histogram generating unit configured to generate a previous histogram showing a distribution of pixel values in a previous frame that is generated before a predetermined frame and a current histogram showing a distribution of pixel values in the predetermined frame;a normalizing unit configured to perform normalization to match variations of the pixel values of the previous histogram and the current histogram; anda similarity determining unit configured to acquire a degree of similarity of shapes of the previous histogram and the current histogram after the normalization and determine whether the degree of similarity is greater than a predetermined similarity determining threshold value.2. The image processing device according to claim 1 ,wherein the similarity determining unit includesa similarity degree acquiring unit configured to acquire the degree of similarity from the previous histogram and the current histogram after the normalization,a moment difference calculating unit configured to obtain 3rd- or higher-order moments of the previous histogram and the current histogram after the normalization and calculate a difference between the moments as a moment difference,a similarity determining threshold value setting unit configured to set a value according to the moment difference as the similarity determining threshold value, anda comparing unit configured to compare the acquired degree of similarity with the set similarity determining threshold value and determine whether the degree of similarity is greater than the predetermined similarity determining threshold value.3. The image processing device according to claim 2 ,wherein the moment includes skewness.4. The image processing device according to claim 2 ,wherein the moment includes kurtosis.5. 
The image processing device according to claim 1 ,wherein the previous frame and the current frame each ...
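The normalize → similarity → moment-adjusted-threshold decision can be sketched as follows. Histogram intersection as the similarity measure and the base threshold and penalty values are illustrative assumptions; the claim only requires a similarity degree, a 3rd- or higher-order moment difference, and a threshold set from that difference:

```python
def normalize_hist(hist):
    # Normalization step: match the scales of the two histograms.
    total = sum(hist)
    return [h / total for h in hist]

def moment(hist, order):
    """Central moment of a normalized histogram (bin index as the variable);
    order 3 relates to skewness, order 4 to kurtosis."""
    mean = sum(i * p for i, p in enumerate(hist))
    return sum(p * (i - mean) ** order for i, p in enumerate(hist))

def scene_change(prev_hist, cur_hist, base_threshold=0.9, penalty=0.05):
    """True when the frames' histograms are too dissimilar, with the
    threshold adjusted by the 3rd-order moment difference."""
    p, c = normalize_hist(prev_hist), normalize_hist(cur_hist)
    similarity = sum(min(a, b) for a, b in zip(p, c))  # histogram intersection
    moment_diff = abs(moment(p, 3) - moment(c, 3))
    threshold = base_threshold + penalty * moment_diff
    return similarity <= threshold
```

Identical histograms exceed any reasonable threshold (no sharp change), while disjoint histograms fall below it.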

09-06-2016 publication date

METHOD AND SYSTEM FOR OCR-FREE VEHICLE IDENTIFICATION NUMBER LOCALIZATION

Number: US20160162761A1
Assignee:

Methods and systems for localizing numbers and characters in captured images. A side image of a vehicle captured by one or more cameras can be preprocessed to determine a region of interest. A confidence value of series of windows within regions of interest of different sizes and aspect ratios containing a structure of interest can be calculated. Highest confidence candidate regions can then be identified with respect to the regions of interest and at least one region adjacent to the highest confidence candidate regions. An OCR operation can then be performed in the adjacent region. An identifier can then be returned from the adjacent region in order to localize numbers and characters in the side image of the vehicle. 1. A method for localizing numbers and characters in captured images , said method comprising:preprocessing a side image of a vehicle captured by at least one camera to determine at least one region of interest;calculating a confidence of a plurality of windows within regions of interest of different sizes and aspect ratios containing a structure of interest;identifying highest confidence candidate regions with respect to said regions of interest and at least one region adjacent to said highest confidence candidate regions;performing an optical character recognition in said at least one adjacent region; andreturning an identifier from said at least one adjacent region in order to localize numbers and characters in said side image of said vehicle.2. The method of wherein said highest confidence candidate regions are identified with nonmaximal suppression.3. The method of wherein a window size of said at least one adjacent region is determined by a window size of at least one candidate region among said highest confidence candidate regions.4. The method of wherein said confidence is determined with a classifier.5. The method of wherein a size and aspect ratio of said plurality of windows spans an expected size and aspect ratio of said structure of ...
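The nonmaximal-suppression step used to pick the highest-confidence candidate regions can be sketched with axis-aligned boxes; the IoU measure and the 0.5 overlap cutoff are conventional assumptions:

```python
def iou(a, b):
    # Intersection-over-union of boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(boxes, scores, overlap=0.5):
    """Nonmaximal suppression: keep only the highest-confidence window
    among heavily overlapping candidate windows."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= overlap for j in keep):
            keep.append(i)
    return keep
```

The surviving windows are the candidate regions whose adjacent regions are then passed to OCR to read out the identifier.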

07-06-2018 publication date

METHOD, SYSTEM AND COMPUTER PROGRAM FOR IDENTIFICATION AND SHARING OF DIGITAL IMAGES WITH FACE SIGNATURES

Number: US20180157900A1
Assignee:

Methods and systems are provided for sharing a digital image depicting one or more faces. The method may include linking a plurality of computer terminals to a computer network, each computer terminal associated with an individual; receiving a digital image at at least one of the computer terminals; executing a face recognition routine on the digital image, the face recognition routine detecting at least one face in the digital image, each detected face corresponding to a person, the face recognition routine recognizing at least one of the persons as being one of the individuals; and for each individual recognized in the digital image by the face recognition routine, initiating dissemination of the digital image to the computer terminal associated with respective individual whose face is recognized in the digital image. 1. A method for recognizing one or more faces in a digital image , the method comprising:enabling a remote web browser, desktop application, or a mobile device to access a proxy server; receiving a digital image;', 'executing a face detection routine on the digital image involving: generating one or more face coordinates corresponding to one or more candidate regions for one or more faces;', 'executing a face recognition routine involving, for each of the one or more candidate regions for the one or more faces, generating a face signature using one or more projection images defined by the face coordinates; and comparing the face signature with one or more known face signatures to determine a distance value for each comparison, computing an aggregation of the distance values, determining a best match between the face signature and the known face signatures using the aggregated distance values and comparing the best match to a similarity threshold, the best match determining an identity corresponding to at least one of the one or more faces; and', 'making available results of the face recognition routine to the web browser, the desktop application or 
...
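The signature-matching step (compare a face signature with known signatures, aggregate distance values, accept the best match only if it passes a similarity threshold) can be sketched as follows; the mean-absolute-difference distance and the function name are assumptions:

```python
def best_match(signature, known, similarity_threshold):
    """Return the identity whose stored signature is closest to the query
    signature, or None when even the best match misses the threshold."""
    def dist(a, b):
        # Aggregated distance: mean absolute difference over components.
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    scored = {name: dist(signature, sig) for name, sig in known.items()}
    name = min(scored, key=scored.get)
    return name if scored[name] <= similarity_threshold else None
```

A recognized identity then drives the dissemination step: the image is sent to the terminal associated with that individual.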

07-06-2018 publication date

Target detection method and apparatus

Number: US20180157938A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A method of detecting a target includes generating an image pyramid based on an image on which a detection is to be performed; classifying candidate areas in the image pyramid using a cascade neural network; and determining a target area corresponding to a target included in the image based on the plurality of candidate areas, wherein the cascade neural network includes a plurality of neural networks, and at least one neural network among the neural networks includes parallel sub-neural networks.
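The image-pyramid construction that feeds the cascade can be sketched with 2×2 average pooling; the pooling choice and function names are assumptions (the cascade itself is not reproduced here):

```python
def downscale(img):
    # Halve each dimension by 2x2 average pooling (even-sized remainder only).
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def image_pyramid(img, min_size=1):
    """Pyramid of progressively halved images; scanning every level with a
    fixed-size window covers targets of different sizes."""
    levels = [img]
    while (len(levels[-1]) // 2 >= min_size
           and len(levels[-1][0]) // 2 >= min_size):
        levels.append(downscale(levels[-1]))
    return levels
```

Candidate areas found at a coarse level correspond to large targets in the original image; fine levels catch small ones.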

14-05-2020 publication date

DETERMINING ASSOCIATIONS BETWEEN OBJECTS AND PERSONS USING MACHINE LEARNING MODELS

Number: US20200151489A1
Assignee:

In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations. 1. A method comprising:determining, from an image, one or more persons associated with an object; determining an overlap region of the image corresponding to an overlap in the image between an object region of the object and a person region of the person;', 'applying a mask to portions of the image not included in the overlap region to generate a masked image;', 'applying data representative of the masked image to a neural network trained to predict confidences for associations between objects and persons; and', 'computing, using the neural network and based at least in part on the data, a confidence for an association between the object and the person; and, 'for each person of the one or more persons, performing operations comprisingbased on the confidence for each person of the one or more persons, associating the object to the person of the one or more persons having a highest associated confidence.2. The method of claim 1 , wherein the determining the one or more persons associated with an object further comprises:generating an association region for the object; anddetermining that the one or more persons or one or more bounding shapes corresponding to the one or more persons at least partially overlap with the association region.3. The method of claim 2 , wherein the ...
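The overlap-and-mask preprocessing from the claim (intersect the object region with the person region, then zero everything outside it) can be sketched for a single-channel image; the zero fill value and function names are assumptions:

```python
def overlap_box(obj, person):
    # Boxes as (x1, y1, x2, y2); returns their intersection or None.
    x1, y1 = max(obj[0], person[0]), max(obj[1], person[1])
    x2, y2 = min(obj[2], person[2]), min(obj[3], person[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def mask_outside(img, box):
    """Zero every pixel outside the overlap region, producing the masked
    image that focuses the network on the object-person overlap."""
    x1, y1, x2, y2 = box
    return [[v if (y1 <= y < y2 and x1 <= x < x2) else 0
             for x, v in enumerate(row)] for y, row in enumerate(img)]
```

One masked image is produced per candidate person; the network scores each, and the object is associated with the highest-confidence person.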

14-05-2020 publication date

Systems and methods for mobile image capture and processing of documents

Number: US20200151703A1
Assignee: Mitek Systems Inc

Techniques for processing images of documents captured using a mobile device are provided. The images can include different sides of a document from a mobile device for an authenticated transaction. In an example implementation, a method includes inspecting the images to detect a feature associated with a first side of the document. In response to determining an image is the first side of the document, a type of content is selected to be analyzed on the image of the first side and one or more regions of interest (ROIs) are identified on the image of the first side that are known to include the selected type of content. A process can include receiving a sub-image of the image of the first side from the preprocessing unit, and performing a content detection test on the sub-image.

14-05-2020 publication date

Methods and apparatus for label compensation during specimen characterization

Number: US20200151878A1
Assignee: Siemens Healthcare Diagnostics Inc

A method of characterizing a serum and plasma portion of a specimen in regions occluded by one or more labels. The characterization method may be used to provide input to an HILN (H, I, and/or L, or N) detection method. The characterization method includes capturing one or more images of a labeled specimen container including a serum or plasma portion from multiple viewpoints, processing the one or more images to provide segmentation data including identification of a label-containing region, determining a closest label match of the label-containing region to a reference label configuration selected from a reference label configuration database, and generating a combined representation based on the segmentation information and the closest label match. Using the combined representation allows for compensation of the light blocking effects of the label-containing region. Quality check modules and testing apparatus adapted to carry out the method are described, as are other aspects.

08-06-2017 publication date

VIDEO STABILIZATION USING CONTENT-AWARE CAMERA MOTION ESTIMATION

Number: US20170163892A1
Assignee: Intel Corporation

Video stabilization is described using content-aware camera motion estimation. In some versions a luminance target frame and a luminance source frame of a sequence of video frames of a scene are received. Motion is extracted from the received luminance target and source frames and the motion is represented as a motion vector field and weights. The weights are divided into a first set of zeros weights for motion in the motion vector field that is near zero motion and a second set of peak weights for motion in the motion field that is not near zero. The zeros weights are compared to a threshold to determine whether there is motion in the scene and if the zeros weights exceed the threshold then selecting a zero motion motion model. A frame of the video sequence is adjusted corresponding to the target frame based on the selected motion model. 1. A method comprising:receiving a luminance target frame and a luminance source frame of a sequence of video frames of a scene;extracting motion from the received luminance target and source frames and representing the motion as a motion vector field and weights;dividing the weights into a first set of zeros weights for motion in the motion vector field that is near zero motion and a second set of peak weights for motion in the motion field that is not near zero;comparing the zeros weights to a threshold to determine whether there is motion in the scene and if the zeros weights exceed the threshold then selecting a zero motion motion model; andadjusting a frame of the video sequence corresponding to the target frame based on the selected motion model.2. The method of claim 1 , further comprising normalizing the luminance of the target frame to the luminance of the source frame.3. The method of claim 1 , further comprising determining whether the motion vector field is unreliable and claim 1 , if the motion vector field is unreliable claim 1 , then selecting an identity matrix motion model.4. The method of claim 1 , further ...
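The motion-model decision from the claim (split weights into near-zero "zeros" weights and "peak" weights, then pick the zero-motion model when the zeros weights exceed a threshold) can be sketched as follows; the epsilon and threshold values are illustrative assumptions:

```python
def select_motion_model(motion_vectors, weights,
                        zero_eps=0.5, zeros_threshold=0.6):
    """Return 'zero_motion' when the weight mass on near-zero motion
    vectors dominates, otherwise fall back to an estimated motion model."""
    zeros = sum(w for (dx, dy), w in zip(motion_vectors, weights)
                if abs(dx) <= zero_eps and abs(dy) <= zero_eps)
    total = sum(weights)
    if total and zeros / total > zeros_threshold:
        return "zero_motion"
    return "estimated_motion"
```

Choosing the zero-motion model for a mostly static scene prevents the stabilizer from warping the frame to chase moving foreground content.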

16-06-2016 publication date

UNSTRUCTURED ROAD BOUNDARY DETECTION

Number: US20160171314A1
Author: SHAO Xisheng

A method for detecting unstructured road boundary is provided. The method may include: obtaining a color image; selecting a candidate road region within the color image according to a road model; identifying a seed pixel from the candidate road region; obtaining a brightness threshold and a color threshold, where the brightness threshold and the color threshold are determined according to brightness distances and color distances from pixels in the candidate road region to the seed pixel; and performing road segmentation by determining whether the pixels in the candidate road region belong to a road region based on the brightness threshold and the color threshold. The amount of computation can be reduced greatly by using the improved unstructured road boundary detection method. 1. A method for detecting an unstructured road boundary, comprising: obtaining a color image; selecting a candidate road region within the color image according to a road model; identifying a seed pixel from the candidate road region; determining a brightness threshold and a color threshold according to brightness distances and color distances from pixels in the candidate road region to the seed pixel; and determining whether the pixels in the candidate road region belong to a road region based on the brightness threshold and the color threshold. 2. The method according to claim 1, further comprising preprocessing the color image by: sampling the color image using a sampling frame that has a predetermined height and a predetermined width; calculating norms of vectors, which respectively correspond to pixels in the sampling frame, where each of the vectors represents a line pointing from an original point to a point corresponding to a pixel in a color space; identifying a first predetermined number of vectors by filtering out a first number of vectors having maximum norms of vectors and a second number of vectors having minimum norms of vectors; obtaining a weighted average ...
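The seed-based segmentation step can be illustrated with a small sketch. The scalar brightness/color representation, the helper name, and absolute-difference distances are assumptions; the patent only states that pixels are compared to the seed under two thresholds.

```python
# Sketch: classify candidate-region pixels as road when both their brightness
# and color distances to the seed pixel fall under the learned thresholds.

def segment_road(pixels, seed, brightness_thr, color_thr):
    """pixels: dict (x, y) -> (brightness, color); seed: (brightness, color)."""
    sb, sc = seed
    road = set()
    for pos, (b, c) in pixels.items():
        if abs(b - sb) <= brightness_thr and abs(c - sc) <= color_thr:
            road.add(pos)
    return road
```

In the patent's flow the two thresholds would themselves be derived from the distribution of distances inside the candidate road region.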

Publication date: 21-05-2020

DIABETIC RETINOPATHY RECOGNITION SYSTEM BASED ON FUNDUS IMAGE

Number: US20200160521A1

Some embodiments of the disclosure provide a diabetic retinopathy recognition system (S) based on fundus image. According to an embodiment, the system includes an image acquisition apparatus configured to collect fundus images. The fundus images include target fundus images and reference fundus images taken from a person. The system further includes an automatic recognition apparatus configured to process the fundus images from the image acquisition apparatus by using a deep learning method. The automatic recognition apparatus automatically determines whether a fundus image has a lesion and outputs the diagnostic result. According to another embodiment, the diabetic retinopathy recognition system (S) utilizes a deep learning method to automatically determine the fundus images and output the diagnostic result. 1. A diabetic retinopathy recognition system, comprising: an image acquisition apparatus configured to collect fundus images, the fundus images comprise target fundus images and reference fundus images taken from a person; and an automatic recognition apparatus configured to process the fundus images from the image acquisition apparatus by using a deep learning method, and automatically determine whether a fundus image has a lesion and output a diagnostic result. 2. The diabetic retinopathy recognition system of claim 1, further comprising an output apparatus that outputs an analysis report according to the diagnostic result. 3. The diabetic retinopathy recognition system of claim 1, wherein the image acquisition apparatus is a handheld fundus camera. 4. The diabetic retinopathy recognition system of claim 1, wherein the automatic recognition apparatus is arranged in a cloud server, and the image acquisition apparatus interacts with the automatic recognition apparatus based on network communication. 5. The diabetic retinopathy recognition system of claim 1, wherein the automatic recognition apparatus comprises: a pre-processing module configured to ...

Publication date: 06-06-2019

METHODS AND SYSTEMS FOR PROVIDING INTERFACE COMPONENTS FOR RESPIRATORY THERAPY

Number: US20190167934A1
Assignee: RESMED LIMITED

Systems and methods permit generation of a digital scan of a user's face, such as for obtaining a patient respiratory mask, or component(s) thereof, based on the digital scan. The method may include: receiving video data comprising a plurality of video frames of the user's face taken from a plurality of angles relative to the user's face; generating a three-dimensional representation of a surface of the user's face based on the plurality of video frames; receiving scale estimation data associated with the received video data, the scale estimation data indicative of a relative size of the user's face; and scaling the digital three-dimensional representation of the user's face based on the scale estimation data. In some aspects, the scale estimation data may be derived from motion information collected by the same device that collects the scan of the user's face. 1. An apparatus for acquiring data for generation of a three-dimensional facial scan of a user for obtaining a patient respiratory interface component, the apparatus comprising: an image sensor and lens for capturing two-dimensional image data of the user's face; a motion sensor configured to sense movement data of at least one of a movement of the apparatus and a change in orientation of the apparatus; a processor configured to receive image data from the image sensor and to receive movement data from the motion sensor, to generate a video file based on the image data received from the image sensor, and to generate a data file based on the movement data received from the motion sensor, and to associate each of the video file and data file with one another; and a transmitter, coupled with the processor, to transmit the associated video and data files to a system comprising a surface engine at a remote destination for generation of a three-dimensional representation of the user's face based on the received image data and the received movement data. 2. A system for obtaining a patient respiratory interface ...
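The final scaling step is simple to illustrate: once a unitless 3D surface and a scale estimate (here, a metres-per-unit factor assumed to come from the device's motion data) are available, every vertex is multiplied by that factor. The function name and vertex format are illustrative assumptions.

```python
# Sketch: apply an estimated real-world scale factor to a reconstructed
# face surface given as a list of (x, y, z) vertices in arbitrary units.

def scale_mesh(vertices, scale_factor):
    return [(x * scale_factor, y * scale_factor, z * scale_factor)
            for (x, y, z) in vertices]
```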

Publication date: 01-07-2021

System and Method of Hand Gesture Detection

Number: US20210201661A1

A method includes: identifying, using a first image processing process, one or more first regions of interest (ROI), the first image processing process configured to identify first ROIs corresponding to a predefined portion of a respective human user in an input image; providing a downsized copy of a respective first ROI identified in the input image as input for a second image processing process, the second image processing process configured to identify a predefined feature of a respective human user and to determine a respective control gesture of a plurality of predefined control gestures corresponding to the identified predefined feature; and in accordance with a determination that a first control gesture is identified in the respective first ROI identified in the input image, and that the first control gesture meets preset criteria, performing a control operation in accordance with the first control gesture. 1. A method comprising: identifying, using a first image processing process, one or more first regions of interest (ROI) in a first input image, wherein the first image processing process is configured to identify first ROIs corresponding to a predefined portion of a respective human user in an input image; providing a downsized copy of a respective first ROI identified in the first input image as input for a second image processing process, wherein the second image processing process is configured to identify one or more predefined features of a respective human user and to determine a respective control gesture of a plurality of predefined control gestures corresponding to the identified one or more predefined features; and in accordance with a determination that a first control gesture is identified in the respective first ROI identified in the first input image, and that the first control gesture meets preset first criteria associated with a respective machine, triggering a control operation at the respective machine in accordance with the first
...
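The two-stage pipeline above can be sketched as a small driver function. The detector and classifier are passed in as stand-in callables, and the confidence criterion and gesture-to-action mapping are assumptions used only for illustration.

```python
# Sketch: stage 1 finds person ROIs, stage 2 classifies a gesture from a
# downsized ROI copy, and a control operation fires only when the gesture
# meets the preset criteria (here: a minimum confidence, an assumption).

def detect_and_control(image, find_rois, classify_gesture, gesture_actions,
                       min_confidence=0.8):
    actions = []
    for roi in find_rois(image):                  # first image processing process
        small = roi.get("downsized", roi)         # downsized copy of the ROI
        gesture, conf = classify_gesture(small)   # second image processing process
        if gesture in gesture_actions and conf >= min_confidence:
            actions.append(gesture_actions[gesture])
    return actions
```

Downsizing the ROI before the second stage is what keeps the gesture classifier's input small regardless of the full image resolution.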

Publication date: 21-06-2018

Method and system for transforming spectral images

Number: US20180172515A1
Assignee: VITO NV

A method for transforming a set of spectral images, the method including: dividing the images in the set in identically arranged areas; for each of the areas, calculating a predetermined characteristic across the set of images; and, for each of the images, normalizing intensity values in each of the areas in function of the predetermined characteristic of the area. Additionally, a corresponding computer program product and a corresponding image processing system.
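The three steps above (divide into identically arranged areas, compute a per-area characteristic across the set, normalize each area by it) can be sketched directly. The grid size and the choice of the mean as the "predetermined characteristic" are assumptions; the patent leaves the characteristic open.

```python
# Sketch: per-area normalization of a set of equally sized grayscale images,
# using the mean over the whole set as the area characteristic (assumption).

def normalize_set(images, rows, cols):
    """images: list of 2D lists of equal size; returns normalized copies."""
    h, w = len(images[0]), len(images[0][0])
    bh, bw = h // rows, w // cols
    out = [[[0.0] * w for _ in range(h)] for _ in images]
    for r in range(rows):
        for c in range(cols):
            ys = range(r * bh, (r + 1) * bh)
            xs = range(c * bw, (c + 1) * bw)
            # characteristic of this area, calculated across the set of images
            vals = [img[y][x] for img in images for y in ys for x in xs]
            mean = sum(vals) / len(vals)
            for i, img in enumerate(images):
                for y in ys:
                    for x in xs:
                        out[i][y][x] = img[y][x] / mean if mean else 0.0
    return out
```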

Publication date: 21-06-2018

PEDESTRIAN DETECTION NEURAL NETWORKS

Number: US20180173971A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating object detection predictions from a neural network. In some implementations, an input characterizing a first region of an environment is obtained. The input includes a projected laser image generated from a three-dimensional laser sensor reading of the first region, a camera image patch generated from a camera image of the first region, and a feature vector of features characterizing the first region. The input is processed using a high precision object detection neural network to generate a respective object score for each object category in a first set of one or more object categories. Each object score represents a respective likelihood that an object belonging to the object category is located in the first region of the environment. 1. A method comprising: obtaining an input characterizing a first region of an environment, the input comprising: (i) a projected laser image generated from a three-dimensional laser sensor reading of the first region; (ii) a camera image patch generated from a camera image of the first region; and (iii) a feature vector of features characterizing the first region; and processing the input ..., the processing comprising: processing the projected laser image through a laser sub-neural network to generate an alternative representation of the projected laser image; processing the camera image patch through a camera sub-neural network to generate an alternative representation of the camera image patch; processing the feature vector through a feature sub-neural network to generate an alternative representation of the feature vector; and processing the alternative representation of the projected laser image, the alternative representation of the camera image patch, and the alternative representation of the feature vector through a combining sub-neural network to generate the respective object score for each of the one or more object categories.
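The three-branch-plus-combiner structure can be shown schematically. The tiny linear "sub-networks" below are stand-ins; the patent does not specify layer shapes, so all weights and dimensions here are assumptions.

```python
# Structural sketch: three sub-networks each map their input to an alternative
# representation; a combining sub-network maps the concatenation to one score
# per object category. Linear maps stand in for the real sub-networks.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def detect(laser_img, cam_patch, feat_vec, params):
    a = [dot(w, laser_img) for w in params["laser"]]    # laser sub-network
    b = [dot(w, cam_patch) for w in params["camera"]]   # camera sub-network
    c = [dot(w, feat_vec) for w in params["feature"]]   # feature sub-network
    combined = a + b + c                                # concatenated representations
    return [dot(w, combined) for w in params["combine"]]  # combining sub-network
```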

Publication date: 28-05-2020

AUTOMATED METHODS AND SYSTEMS FOR DETECTING CELLS IN STAINED SPECIMEN IMAGES

Number: US20200167584A1

A system and a method for unveiling poorly visible or lightly colored nuclei in an input image are disclosed. An input image is fed to a color deconvolution module for deconvolution into two color channels that are processed separately before being combined. The input image is deconvolved into two separate images: a stain image and a counter stain image. A complement of the stain image is generated in order to clearly reflect the locations of the poorly visible or light-colored nuclei. The complement image and the counter stain image are optionally normalized and then combined and segmented, to generate an output image with clearly defined nuclei. Alternatively, the complement of the stain image and the counter stain image can optionally be normalized, and then segmented prior to being combined to generate the output image. 1. A method comprising: receiving an image depicting a specimen; deconvolving the image into at least a first information channel and a second information channel; generating a stain image along the first information channel; generating a complement of the stain image; generating a counter stain image along the second information channel; combining the complement of the stain image and the counter stain image to generate a combined image; processing the combined image using a segmentation operation to detect a plurality of image objects in the combined image, each of the plurality of image objects corresponding to a cell nucleus of the specimen; and outputting the combined image with the detected plurality of image objects. 2. The method of claim 1, wherein the first information channel includes a first color channel. 3. The method of claim 2, wherein the second information channel includes a second color channel, wherein the second color channel is different from the first color channel. 4. The method of claim 1, wherein the first information channel indicates an immunohistochemistry (IHC) stain. 5. The method of claim 1, wherein the second ...
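The complement-and-combine step is easy to sketch for 8-bit channel images. Treating the two deconvolved channels as given, the complement makes faint nuclei bright; the pixel-wise max used to merge the two channels is an assumption, since the patent does not fix the combining rule.

```python
# Sketch: complement the stain channel so lightly colored nuclei become
# bright, then merge with the counter-stain channel pixel-wise (max rule
# is an assumption). Images are 2D lists of 8-bit values.

def complement(img):
    return [[255 - v for v in row] for row in img]

def combine(stain_img, counter_img):
    comp = complement(stain_img)
    return [[max(a, b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(comp, counter_img)]
```

A segmentation operation would then run on the combined image to pick out the nuclei.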

Publication date: 28-05-2020

MAPPER COMPONENT FOR A NEURO-LINGUISTIC BEHAVIOR RECOGNITION SYSTEM

Number: US20200167679A1
Assignee: Omni AI, Inc.

Techniques are disclosed for generating a sequence of symbols based on input data for a neuro-linguistic model. The model may be used by a behavior recognition system to analyze the input data. A mapper component of a neuro-linguistic module in the behavior recognition system receives one or more normalized vectors generated from the input data. The mapper component generates one or more clusters based on a statistical distribution of the normalized vectors. The mapper component evaluates statistics and identifies statistically relevant clusters. The mapper component assigns a distinct symbol to each of the identified clusters. 1. A method, comprising: receiving a normalized vector of feature values generated from input data, each feature value from the normalized vector of feature values being calculated based on a feature from a plurality of features; for each feature value in the normalized vector of feature values: evaluating a distribution of a plurality of clusters in a cluster space corresponding to the feature from the plurality of features that is associated with the feature value, mapping the feature value to a single cluster from the plurality of clusters based on the distribution, updating the distribution of the plurality of clusters based on the mapping to produce an updated distribution, and determining, based on the updated distribution, whether or not to merge the plurality of clusters with a further cluster from the cluster space; determining a symbol for a statistically significant cluster from the plurality of clusters; and transmitting the symbol to a behavior recognition system. 2. The method of claim 1, wherein the normalized vector of feature values is a first normalized vector of feature values, the input data is first input data, and the updated distribution is a first updated distribution, the method further comprising: receiving a second normalized vector of feature values generated from second input data ...
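A toy version of the mapper idea for a single feature dimension can be sketched as online clustering followed by symbol assignment. The cluster radius, the minimum-membership criterion for statistical relevance, and the letter-based symbols are all assumptions.

```python
# Toy sketch: map each incoming feature value to the nearest cluster
# (creating one if none is close enough), then assign a distinct symbol to
# every statistically relevant cluster (enough members; threshold assumed).

def map_to_symbols(values, radius=1.0, min_count=3):
    clusters = []  # each: {"mean": running mean, "n": member count}
    for v in values:
        best = min(clusters, key=lambda c: abs(c["mean"] - v), default=None)
        if best is not None and abs(best["mean"] - v) <= radius:
            best["n"] += 1
            best["mean"] += (v - best["mean"]) / best["n"]  # running-mean update
        else:
            clusters.append({"mean": v, "n": 1})
    relevant = [c for c in clusters if c["n"] >= min_count]
    return {chr(ord("A") + i): c["mean"] for i, c in enumerate(relevant)}
```

In the full system one such cluster space exists per feature, and the emitted symbols feed the downstream neuro-linguistic model.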

Publication date: 08-07-2021

Method and apparatus for recognizing wearing state of safety belt

Number: US20210209385A1

A method and an apparatus for recognizing a wearing state of a safety belt are disclosed. The method includes: obtaining an image by monitoring a vehicle; performing face recognition on the image to obtain a face region; determining a target region from the image based on a size and a position of the face region; and recognizing a wearing state of a safety belt based on an image feature of the target region.
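The step of deriving the target region from the detected face region can be sketched as simple box geometry. The offsets used here (belt region starting just below the face, roughly three face-sizes tall and wide) are illustrative assumptions, not the patent's exact geometry.

```python
# Sketch: derive the chest/safety-belt target region from a face bounding
# box (x, y, w, h = top-left corner + size), clipped to the image bounds.
# The scale factors are assumptions for illustration.

def target_region(face_box, img_w, img_h):
    x, y, w, h = face_box
    tx = max(0, x - w)              # widen to roughly shoulder width
    ty = min(img_h, y + h)          # start just below the chin
    tw = min(img_w - tx, 3 * w)
    th = min(img_h - ty, 3 * h)
    return (tx, ty, tw, th)
```

A belt/no-belt classifier would then run on the image feature of this region only, rather than on the whole frame.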

Publication date: 08-07-2021

ROBUST FRAME SIZE ERROR DETECTION AND RECOVERY MECHANISM TO MINIMIZE FRAME LOSS FOR CAMERA INPUT SUB-SYSTEMS

Number: US20210209390A1

An image data frame is received from an external source. An error concealment operation is performed on the received image data frame in response to determining that a first frame size of the received image data frame is erroneous. The first frame size of the image data frame is determined to be erroneous based on at least one frame synchronization signal associated with the image data frame. An image processing operation is performed on the received image data frame on which the error concealment operation has been performed, thereby enabling an image processing module to perform the image processing operation without entering into a deadlock state and thereby preventing a host processor from having to execute hardware resets of deadlocked modules. 1. An image processing device, comprising: an image data receiver that is configured to receive an image data frame from an external source; an error handling module that is communicatively coupled to receive the image data frame from the image data receiver and that is configured to perform an error concealment operation on the received image data frame in response to determining that a first frame size of the image data frame at the error handling module is erroneous, wherein the error handling module determines that the first frame size of the image data frame is erroneous based on at least one frame synchronization signal associated with the image data frame; and an image processing unit that is communicatively coupled to receive, from the error handling module, the image data frame on which the error concealment operation has been performed, wherein the image processing unit is configured to perform an image processing operation on the received image data frame. 2. The image processing device according to claim 1, wherein the error handling module is further configured to: discard data of a remainder of the image data frame, being received by the error handling module from the image data receiver, in response to ...
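The concealment idea can be sketched as a line-counting check with pad/crop recovery. The function name and the zero-fill pad/crop policy are assumptions based on the abstract; the point is only that downstream processing always receives a full-size frame and so cannot stall on a short one.

```python
# Sketch: if the number of received scan lines (as implied by the frame/line
# synchronization signals) differs from the expected frame size, conceal the
# error by cropping or zero-padding to the expected geometry.

def conceal_if_erroneous(lines, expected_lines, line_width, fill=0):
    """lines: list of received scan lines (lists of pixel values) for one frame.
    Returns (fixed_frame, erroneous_flag)."""
    erroneous = len(lines) != expected_lines
    fixed = [ln[:line_width] + [fill] * max(0, line_width - len(ln))
             for ln in lines[:expected_lines]]
    while len(fixed) < expected_lines:
        fixed.append([fill] * line_width)   # pad missing lines with fill value
    return fixed, erroneous
```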
