Total found: 17403. Displayed: 100.

Publication date: 26-01-2012

Image processing apparatus, image processing method, and program

Number: US20120020528A1
Author: Hideshi Yamada
Assignee: Sony Corp

An image processing apparatus includes an obtaining unit obtaining an image including a closed curve input which encloses an object in an input image, a generation unit generating a distance image having pixel values of individual pixels corresponding to distances from the input closed curve in accordance with a shape of the curve, a calculation unit calculating an input-image energy of the input image including a distance energy changed based on the distances of the pixels or a likelihood energy changed based on likelihoods of the pixels based on color distribution models of an object region and a non-object region in the distance image and a color energy changed in accordance with color differences between adjacent pixels in the distance image, and a generation unit generating a mask image by minimizing the input-image energy and assigning an attribute representing the object region or an attribute representing the non-object region.

Publication date: 26-01-2012

Variable kernel size image matting

Number: US20120020554A1
Author: Jian Sun, Kaiming He
Assignee: Microsoft Corp

Image matting is performed on an image having a specified foreground region, a background region and an unknown region by selecting a kernel size based on a size of the unknown region. The matting processing is performed using the selected kernel size to provide an alpha matte that distinguishes a foreground portion from a background portion in the unknown region. Further, in some implementations, a trimap of the image may be segmented and matting processing may be performed on each segment using a kernel size appropriate for that segment.
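
A minimal sketch of the kernel-size selection step described above, assuming the trimap marks foreground as 1, background as 0 and the unknown band with any other value; the proportionality constant and radius limits are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: selecting a matting kernel radius from the size of the unknown
# region of a trimap. The trimap convention (1 = foreground, 0 = background,
# anything else = unknown), the 5% scale factor and the radius limits are
# illustrative assumptions, not values taken from the patent.
import numpy as np

def select_kernel_radius(trimap: np.ndarray, scale: float = 0.05,
                         r_min: int = 3, r_max: int = 61) -> int:
    """Return a kernel radius proportional to the extent of the unknown band."""
    unknown = (trimap != 0) & (trimap != 1)
    if not unknown.any():
        return r_min
    ys, xs = np.nonzero(unknown)
    # Use the larger side of the unknown region's bounding box as its "size".
    extent = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    return int(np.clip(round(scale * extent), r_min, r_max))
```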

Publication date: 26-01-2012

Interactive image matting

Number: US20120023456A1
Assignee: Microsoft Corp

A user interface enables interactive image matting to be performed on an image. The user interface may provide results including an alpha matte as feedback in real time. The user interface may provide interactive tools for selecting a portion of the image, and an unknown region for alpha matte processing may be automatically generated adjacent to the selected region. The user may interactively refine the alpha matte as desired to obtain a satisfactory result.

Publication date: 16-02-2012

Live coherent image selection

Number: US20120039535A1
Assignee: Adobe Systems Inc

Methods, systems, and apparatus, including computer program products, featuring receiving user input defining a sample of pixels from an image, the image being defined by a raster of pixels. While receiving the user input, the following actions are performed one or more times: pixels are coherently classified in the raster of pixels as being foreground or background based on the sample of pixels; and a rendering of the image is updated on a display to depict classified foreground pixels and background pixels as the sample is being defined.

Publication date: 01-03-2012

System for background subtraction with 3d camera

Number: US20120051631A1
Assignee: University of Illinois

A system for background image subtraction includes a computing device coupled with a 3D video camera, a processor of the device programmed to receive a video feed from the camera containing images of one or more subject that include depth information. The processor, for an image: segments pixels and corresponding depth information into three different regions including foreground (FG), background (BG), and unclear (UC); categorizes UC pixels as FG or BG using a function that considers the color and background history (BGH) information associated with the UC pixels and the color and BGH information associated with pixels near the UC pixels; examines the pixels marked as FG and applies temporal and spatial filters to smooth boundaries of the FG regions; constructs a new image by overlaying the FG regions on top of a new background; displays a video feed of the new image in a display device; and continually maintains the BGH.
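
A hedged sketch of one frame of the segmentation step described above, assuming depth is given in metres and that a running color average serves as the background history (BGH); the depth thresholds, color tolerance and update rate are illustrative assumptions, and the temporal/spatial smoothing and compositing steps are omitted.

```python
# Hedged sketch of one frame of depth-guided background subtraction: pixels are
# split into foreground (FG), background (BG) and unclear (UC) by depth, UC
# pixels are resolved by color distance to a background-history image, and the
# history is updated. All thresholds and the update rate are assumptions.
import numpy as np

FG, BG, UC = 2, 0, 1

def segment_frame(color, depth, bg_history, near=1.2, far=2.0, color_tol=30.0):
    """color: (H, W, 3) uint8; depth: (H, W) metres; bg_history: (H, W, 3) float."""
    labels = np.full(depth.shape, UC, dtype=np.uint8)
    labels[depth < near] = FG                      # close to the camera
    labels[depth > far] = BG                       # beyond the far plane
    dist = np.linalg.norm(color.astype(np.float32) - bg_history, axis=-1)
    unclear = labels == UC
    labels[unclear & (dist < color_tol)] = BG      # looks like remembered background
    labels[unclear & (dist >= color_tol)] = FG
    bg_mask = labels == BG                         # keep maintaining the history
    bg_history[bg_mask] = 0.9 * bg_history[bg_mask] + 0.1 * color[bg_mask]
    return labels, bg_history
```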

Publication date: 12-04-2012

Method and computing device in a system for motion detection

Number: US20120089949A1

A computing device in a system for motion detection comprises an image processing device to determine a motion of an object of interest, and a graphical user interface (GUI) module to drive a virtual role based on the motion determined by the image processing device. The image processing device comprises a foreground extracting module to extract a foreground image from each of a first image of the object of interest taken by a first camera and a second image of the object of interest taken by a second camera, a feature point detecting module to detect feature points in the foreground image, a depth calculating module to calculate the depth of each of the feature points based on disparity images associated with the each feature point, the depth calculating module and the feature point detecting module identifying a three-dimensional (3D) position of each of the feature points, and a motion matching module to identify vectors associated with the 3D positions of the feature points and determine a motion of the object of interest based on the vectors.

Publication date: 19-04-2012

Extraction Of A Color Palette Model From An Image Of A Document

Number: US20120092359A1
Assignee: Hewlett Packard Development Co LP

A system and method are provided for determining a color palette model from an image of a document. Pixel values of the image of the document are clustered to provide image clusters. Color layers of the image are determined, each color layer corresponding to an image cluster. Aspects of the color palette model can be determined using the color layers. Aspects of the color palette model include a foreground-background color pair for a content block in the document and a background-area color of the document.

Publication date: 21-06-2012

Method of separating front view and background and apparatus

Number: US20120155756A1
Assignee: SAMSUNG ELECTRONICS CO LTD

A method of initially estimating a front view portion of a photographed image and separating the photographed image into a front view and a background without user interaction and apparatus performing the method are provided. The method of separating a front view and a background of an image includes dividing one or more pixels included in a photographed image into pixel groups according to color similarity between the pixels, estimating the position of the front view in the image divided into the pixel groups, and separating the front view and the background based on the estimated position of the front view. The method automatically separates the front view and the background of the image without a user input.

Publication date: 28-06-2012

Image processing apparatus, image processing method, and recording medium

Number: US20120163712A1
Author: Mitsuyasu Nakajima
Assignee: Casio Computer Co Ltd

Disclosed is an image processing apparatus including: an obtaining section which obtains a subject existing image in which a subject and a background exist; a first specification section which specifies a plurality of image regions in the subject existing image; a comparison section which compares a representative color of each of the image regions with a predetermined color; a generation section which generates an extraction-use background color based on a comparison result of the comparison section; and a second specification section which specifies a subject constituent region constituting the subject and a background constituent region constituting the background in the subject existing image based on the extraction-use background color.

Publication date: 19-07-2012

Vascular roadmapping

Number: US20120183189A1
Assignee: KONINKLIJKE PHILIPS ELECTRONICS NV

Cardiac roadmapping consists in correctly overlaying a vessel map sequence derived from an angiogram acquisition onto a fluoroscopy sequence used during a PTCA intervention. This enhanced fluoroscopy sequence, however, suffers from several drawbacks, such as breathing motion, a high noise level and, most of all, a suboptimal contrast-enhanced mask due to segmentation defects. This invention proposes to reverse the process and to locally overlay the intervention device, as seen in fluoroscopy, onto an optimal contrast-enhanced image of a corresponding cycle. This drastically reduces or suppresses the breathing motion, provides the high image quality standard of angiograms, and avoids segmentation defects. This proposal could lead to a brand new navigation practice in PCI procedures.

Publication date: 27-09-2012

Image processing apparatus, image processing method, recording medium, and program

Number: US20120243737A1
Author: Kaname Ogawa
Assignee: Sony Corp

An image processing apparatus includes: a calculating unit that calculates an evaluation value, which is expressed as a sum of confidence degrees obtained by mixing, at a predetermined mixing ratio, a matching degree of a first feature quantity and a matching degree of a second feature quantity between a target image containing an object to be tracked and a comparison image which is an image of a comparison region compared to the target image of a first frame, when the mixing ratio is varied and obtaining the mixing ratio when the evaluation value is maximum; and a detecting unit that detects an image corresponding to the target image of a second frame based on the confidence degrees in which the mixing ratio is set when the evaluation value is the maximum.
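
A hedged sketch of the mixing-ratio search described above: each candidate ratio mixes the two matching degrees into confidence degrees, an evaluation value is formed from them, and the ratio giving the maximum evaluation value is kept. Taking the best comparison region per frame before summing is an assumption of this sketch; the abstract only says the evaluation value is a sum of confidence degrees.

```python
# Hedged sketch of the mixing-ratio search: for each candidate ratio, the two
# matching degrees are mixed into confidence degrees and an evaluation value is
# formed; the ratio with the maximum evaluation value is kept. Taking the best
# comparison region per frame before summing is an assumption of this sketch.
import numpy as np

def best_mixing_ratio(m1: np.ndarray, m2: np.ndarray, steps: int = 101):
    """m1, m2: (frames, regions) matching degrees of the two feature quantities."""
    best_ratio, best_eval = 0.0, -np.inf
    for alpha in np.linspace(0.0, 1.0, steps):
        confidences = alpha * m1 + (1.0 - alpha) * m2   # mixed confidence degrees
        evaluation = confidences.max(axis=1).sum()      # best region per frame, summed
        if evaluation > best_eval:
            best_eval, best_ratio = evaluation, float(alpha)
    return best_ratio, best_eval
```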

Publication date: 08-11-2012

Method of image processing and associated apparatus

Number: US20120281905A1
Author: Kun-Nan Cheng
Assignee: MStar Semiconductor Inc Taiwan

A method of image processing is provided for separating an image object from a captured or provided image according to a three-dimensional (3D) depth and generating a synthesized image from the image portions identified and selectively modified in the process. The method retrieves or determines a corresponding three-dimensional (3D) depth for each portion of an image, and enables capturing a selective portion of the image as an image object according to the 3D depth of each portion of the image, so as to synthesize the image object with other image objects by selective processing and superimposing of the image objects to provide synthesized imagery.

Publication date: 20-12-2012

Motion Detection Method, Program and Gaming System

Number: US20120322551A1
Assignee: Omnimotion Technology Ltd

This invention relates to a method of processing an image, specifically an image taken from a web camera. The processed image is thereafter preferably used as an input to a game. The image is simplified to a point whereby a very limited number of region bounded boxes are provided to a game environment and these region bounded boxes are used to determine the intended user input. By implementing this method, the amount of processing required is decreased and the speed at which the game may be rendered is increased thereby providing a richer game experience for the player. Furthermore, the method of processing the image is practically universally applicable and can be used with a wide range of web cameras thereby obviating the need for additional specialist equipment to be purchased and allowing the games to be web based.

Publication date: 14-02-2013

Method and apparatus for generating and playing animated message

Number: US20130038613A1
Assignee: SAMSUNG ELECTRONICS CO LTD

Methods and apparatus are provided for generating an animated message. Input objects in an image of the animated message are recognized, and input information, including information about an input time and input coordinates for the input objects, is extracted. Playback information, including information about a playback order of the input objects, is set. The image is displayed in a predetermined handwriting region of the animated message. An encoding region, which is allocated in a predetermined portion of the animated message and in which the input information and the playback information are stored, is divided into blocks having a predetermined size. Display information of the encoding region is generated by mapping the input information and the playback information to the blocks in the encoding region. An animated message including the predetermined handwriting region and the encoding region is generated. The generated animated message is transmitted.

Publication date: 21-02-2013

Image segmentation of organs and anatomical structures

Number: US20130044930A1
Assignee: Individual

A system and method to conduct image segmentation by imaging target morphological shapes evolving from one 2-dimension (2-D) image slice to one or more nearby neighboring 2-D images taken from a 3-dimension (3-D) image. One area defined by a user as a target on an image slice can be found in a corresponding area on a nearby neighboring image slice by using a deformation field generated with deformable image registration procedure between these two image slices. It allows the user to distinguish target and background areas with the same or similar image intensities.

Publication date: 14-03-2013

Image compression and decompression for image matting

Number: US20130064465A1
Author: Siu-Kei Tin
Assignee: Canon Inc

Encoding image data and mask information to be used for matte images and for image and video matting. Image data and mask information for pixels of the image data in a first representation domain are accessed. The mask information defines background pixels and foreground pixels. The image data in the first representation domain is transformed to a second representation domain. Mask information in the second representation domain is determined by using the mask information in the first representation domain. The image data in the second representation domain is masked by setting image data to zero for background pixels as defined by the determined mask information in the second representation domain. The masked image data in the second representation domain is encoded. Decoding the encoded image data by accessing the encoded image data, decoding the masked image data in the second representation domain, and transforming the masked image data in the second representation domain to the first representation domain to obtain the decoded image data.

Publication date: 18-04-2013

Region segmented image data creating system and feature extracting system for histopathological images

Number: US20130094733A1

A region segmented image data creating system for histopathological images is provided. The region segmented image data creating system is capable of creating the region segmented image data required to generate a region segmented image. A first bi-level image data creating section 12 creates first bi-level image data, in which nucleus regions can be discriminated from other regions, from histopathological image data. A second bi-level image data creating section 14 creates second bi-level image data, in which background regions can be discriminated from other regions, from the histopathological image data. A three-level image data creating section 15 clarifies cytoplasm regions by computing a negative logical addition of the first bi-level image data and the second bi-level image data, and creates three-level image data as the region segmented image data.
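
A minimal sketch of the three-level map construction described above, reading the "negative logical addition" as a NOR of the two bi-level images; the fixed grey-level thresholds used to form the bi-level images are placeholders, since the patent does not spell out how sections 12 and 14 binarize the data.

```python
# Hedged sketch of the three-level map: one bi-level image marks nuclei, another
# marks background, and cytoplasm is taken where neither is set (a NOR). The
# fixed grey-level thresholds are placeholders for the unspecified binarization.
import numpy as np

NUCLEUS, CYTOPLASM, BACKGROUND = 2, 1, 0

def three_level_map(gray: np.ndarray, nucleus_thresh=80, background_thresh=200):
    nuclei = gray < nucleus_thresh            # first bi-level image (dark nuclei)
    background = gray > background_thresh     # second bi-level image (bright background)
    cytoplasm = ~(nuclei | background)        # "negative logical addition" (NOR)
    out = np.full(gray.shape, BACKGROUND, dtype=np.uint8)
    out[cytoplasm] = CYTOPLASM
    out[nuclei] = NUCLEUS
    return out
```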

Publication date: 30-05-2013

Foreground subject detection

Number: US20130136358A1
Assignee: Microsoft Corp

Classifying pixels in a digital image includes receiving a primary image from one or more image sensors. The primary image includes a plurality of primary pixels. A depth image from one or more depth sensors is also received. The depth image includes a plurality of depth pixels, each depth pixel registered to one or more primary pixels. The depth image and the primary image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject.

Publication date: 11-07-2013

Silhouette correction method and system and silhouette extraction method and system

Number: US20130177239A1

A method and system for correcting a silhouette of a person extracted from an image by labeling each pixel as the person or the background are disclosed. The pixels in a target region are corrected by: a step in which, by the use of pixels in the target region labeled as the person, a person histogram is created; a step in which, by the use of pixels in the target region labeled as the background, a background histogram is created; a step in which, for each pixel in the target region, by the use of the person histogram, the background histogram and color data of each pixel in the target region, first connective costs of the pixel to the color data of the person and the background are calculated; and a step in which, for each pixel in the target region, a second connective cost of the pixel is calculated.
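
A hedged sketch of the histogram step and the first connective costs described above, assuming RGB histograms with a fixed bin count and negative log-likelihood costs; the second connective cost (between neighbouring pixels) is omitted.

```python
# Hedged sketch of the histogram step and first connective costs: RGB histograms
# are built from target-region pixels already labelled person or background, and
# each pixel is scored by negative log-likelihood under both histograms. The bin
# count and the -log form of the costs are assumptions.
import numpy as np

def color_histogram(pixels: np.ndarray, bins: int = 8) -> np.ndarray:
    """pixels: (N, 3) uint8 RGB values -> normalized bins x bins x bins histogram."""
    idx = (pixels // (256 // bins)).astype(int)
    hist = np.zeros((bins, bins, bins), dtype=np.float64)
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return hist / max(hist.sum(), 1.0)

def first_connective_costs(pixel_rgb, person_hist, background_hist, bins: int = 8):
    i = tuple(int(c) // (256 // bins) for c in pixel_rgb)
    eps = 1e-6                                  # avoid log(0) for unseen colors
    return -np.log(person_hist[i] + eps), -np.log(background_hist[i] + eps)
```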

Publication date: 18-07-2013

Systems and methods for mobile image capture and processing

Number: US20130182959A1
Assignee: Kofax Inc

In various embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. Myriad features enable and/or facilitate processing of such digital images using a mobile device that would otherwise be technically impossible or impractical, and furthermore address unique challenges presented by images captured using a camera rather than a traditional flat-bed scanner, paper-feed scanner or multifunction peripheral.

Publication date: 25-07-2013

Image processing apparatus, image processing method, and program

Number: US20130188826A1
Author: Katsuaki NISHINO
Assignee: Sony Corp

Provided is an image processing apparatus including a moving object detection unit configured to detect a moving object which is an image different from a background in a current image, a temporary pause determination unit configured to determine whether the moving object is paused for a predetermined time period or more, a reliability processing unit configured to calculate non-moving object reliability for a pixel of the current image using the current image and a temporarily paused image including a temporarily paused object serving as the moving object which is paused for a predetermined time period or more, the non-moving object reliability representing likelihood of being a non-moving object which is an image different from the background that does not change for a predetermined time period or more, and a non-moving object detection unit configured to detect the non-moving object from the current image based on the non-moving object reliability.

Publication date: 01-08-2013

Executable virtual objects associated with real objects

Number: US20130194164A1
Assignee: Individual

Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

Publication date: 22-08-2013

Image segmentation using reduced foreground training data

Number: US20130216127A1
Assignee: Microsoft Corp

Methods of image segmentation using reduced foreground training data are described. In an embodiment, the foreground and background training data for use in segmentation of an image is determined by optimization of a modified energy function. The modified energy function is the energy function used in image segmentation with an additional term comprising a scalar value. The optimization is performed for different values of the scalar to produce multiple initial segmentations and one of these segmentations is selected based on pre-defined criteria. The training data is then used in segmenting the image. In other embodiments further methods are described: one places an ellipse inside the user-defined bounding box to define the background training data and another uses a comparison of properties of neighboring image elements, where one is outside the user-defined bounding box, to reduce the foreground training data.

Publication date: 12-09-2013

Image processing device, image processing method, and image processing program

Number: US20130236098A1
Assignee: Omron Corp

An image processing device has an image input part to which a frame image of an imaging area taken with an infrared camera is input, a background model storage part in which a background model is stored with respect to each pixel of the frame image input to the image input part, a frequency of a pixel value of the pixel being modeled in the background model, a background difference image generator that determines whether each pixel of the frame image input to the image input part is a foreground pixel or a background pixel using the background model of the pixel, which is stored in the background model storage part, and generates a background difference image, and a threshold setting part.
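
A hedged sketch of a per-pixel value-frequency background model in the spirit of the description above, for 8-bit infrared frames; the quantization into 32 levels and the rarity threshold are illustrative assumptions, and the threshold setting part is not modelled here.

```python
# Hedged sketch of a per-pixel value-frequency background model for 8-bit
# infrared frames: each pixel keeps a histogram of the values it has shown, and
# a pixel whose current value has rarely been seen is marked as foreground in
# the background difference image. The 32-level quantization and the rarity
# threshold are assumptions; the threshold setting part is not modelled.
import numpy as np

class FrequencyBackgroundModel:
    def __init__(self, shape, levels: int = 32):
        self.levels = levels
        self.counts = np.zeros(shape + (levels,), dtype=np.float32)  # per-pixel histograms

    def update_and_subtract(self, frame: np.ndarray, fg_thresh: float = 0.05):
        """frame: (H, W) uint8 -> (H, W) uint8 background difference image (1 = foreground)."""
        q = (frame.astype(np.int32) * self.levels) // 256
        rows, cols = np.indices(frame.shape)
        total = self.counts.sum(axis=-1) + 1e-6
        freq = self.counts[rows, cols, q] / total        # how often this value occurred here
        foreground = (freq < fg_thresh).astype(np.uint8)
        self.counts[rows, cols, q] += 1.0                # update the background model
        return foreground
```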

Publication date: 19-09-2013

Interactive 3-d examination of root fractures

Number: US20130243276A1
Assignee: Individual

A method for 3-D interactive examination of a subject tooth, executed at least in part by a computer, obtains volume image data containing at least the subject tooth and background content adjacent to the subject tooth and displays a first image from the volume data that shows at least the subject tooth and the background content. A portion of the background content in the first image is identified according to a first operator instruction. Tooth content for at least the subject tooth in the first image is identified according to a second operator instruction. At least the subject tooth is segmented from within the volume data according to the first and second operator instructions. The segmented subject tooth is then displayed.

Publication date: 19-09-2013

Image processing apparatus

Number: US20130243280A1
Author: Wataru Takahashi
Assignee: Shimadzu Corp

An image processing apparatus is operative to obtain an image with a high visual recognition property. A linear structural object incorporated into the original image is distinguished by two methods. A first method produces an evaluation image (P3) that evaluates whether each pixel is a linear structural object in the original image. A second method produces a difference image (P6) incorporating the linear structural object by obtaining the difference between the linear structural object incorporated into the original image and the portion other than the linear structural object. Since the linear structural object is extracted from the original image (P0), which holds the contrasting density of the original image, based on the two images related to the linear structural object produced by these different methods, the apparatus provides an image having a high visual recognition property.

Publication date: 26-09-2013

Laser projection system and method

Number: US20130250094A1
Author: Kurt D. Rueb
Assignee: Virtek Vision International ULC

A laser projection system for projecting an image on a workpiece includes a photogrammetry assembly and a laser projector, each communicating with a computer. The photogrammetry assembly includes a first camera for scanning the workpiece, and the laser projector projects a laser image to arbitrary locations. Light is conveyed from the direction of the workpiece to the photogrammetry assembly. The photogrammetry assembly signals the coordinates of the light conveyed toward it to the computer, the computer being programmable for determining a geometric location of the laser image. The computer establishes a geometric correlation between the photogrammetry assembly, the laser projector, and the workpiece for realigning the laser image to a corrected geometric location relative to the workpiece.

Publication date: 10-10-2013

Image Capture and Identification System and Process

Number: US20130265435A1
Assignee: NANT HOLDINGS IP LLC

A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.

Publication date: 10-10-2013

Method for the filtering of target object images in a robot system

Number: US20130266205A1
Author: Harri Valpola
Assignee: ZenRobotics Oy

The invention relates to a method and system for recognizing physical objects. In the method an object is gripped with a gripper, which is attached to a robot arm or mounted separately. Using an image sensor, a plurality of source images of an area comprising the object is captured while the object is moved with the robot arm. The camera is configured to move along the gripper, attached to the gripper or otherwise able to monitor the movement of the gripper. Moving image elements are extracted from the plurality of source images by computing a variance image from the source images and forming a filtering image from the variance image. A result image is obtained by using the filtering image as a bitmask. The result image is used for classifying the gripped object.
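
A hedged sketch of the variance-image and bitmask steps described above; the variance threshold is an assumption, and whether the mask or its complement selects the gripped object depends on whether the camera moves with the gripper, which the sketch does not decide.

```python
# Hedged sketch of the variance-image and bitmask steps: stack grayscale frames
# captured while the arm moves, compute the per-pixel variance, and threshold it
# into a filtering bitmask. The threshold is an assumption, and whether the mask
# or its complement selects the gripped object depends on the camera setup.
import numpy as np

def variance_bitmask(frames, var_thresh: float = 200.0) -> np.ndarray:
    """frames: iterable of equally sized grayscale images -> boolean filtering image."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    variance = stack.var(axis=0)          # variance image over the source images
    return variance > var_thresh          # filtering image used as a bitmask

def apply_bitmask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    result = image.copy()
    result[~mask] = 0                     # suppress pixels outside the mask
    return result
```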

Publication date: 10-10-2013

Region growing method for depth map/color image

Number: US20130266223A1
Author: Tao Zhang, Yu-Pao Tsai
Assignee: Mediatek Singapore Pte Ltd

An exemplary region growing method includes at least the following steps: selecting a seed point of a current frame as an initial growing point of a region in the current frame; determining a background confidence value at a neighboring pixel around the seed point; and utilizing a processing unit for checking if the neighboring pixel is allowed to be included in the region according to at least the background confidence value.
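
A minimal sketch of the growing loop described above, using 4-connected neighbours of a depth map and a precomputed background-confidence map; the depth tolerance and confidence cut-off are illustrative assumptions, as is how the confidence map itself is produced.

```python
# Hedged sketch of the growing loop: starting from a seed in the depth map,
# 4-connected neighbours are added when their depth is close to the pixel they
# grow from and their background confidence is low. The tolerance, the cut-off
# and the provenance of the confidence map are assumptions.
import numpy as np
from collections import deque

def grow_region(depth, bg_confidence, seed, depth_tol=10.0, conf_max=0.5):
    """depth, bg_confidence: (H, W) arrays; seed: (row, col) tuple."""
    h, w = depth.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                close = abs(float(depth[ny, nx]) - float(depth[y, x])) <= depth_tol
                if close and bg_confidence[ny, nx] <= conf_max:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region
```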

Publication date: 10-10-2013

Region specification method, region specification apparatus, recording medium, server, and system

Number: US20130266224A1
Author: Akira Hamada
Assignee: Casio Computer Co Ltd

A region specification method is provided for specifying a subject region including a subject from a subject existing image in which a background and the subject exist. The region specification method includes: calculating an image boundary pixel number of an image boundary of the subject existing image constituting an edge part of each of divided regions into which the subject existing image is divided by a borderline defined on the subject existing image; specifying, from the divided regions, a reference region having a pixel percentage equal to or more than a predetermined percentage, the pixel percentage indicating the calculated image boundary pixel number in a total pixel number of the edge part; and specifying the subject region from the divided regions of the subject existing image by taking the specified reference region as a reference.

Publication date: 24-10-2013

Anomalous railway component detection

Number: US20130279743A1
Assignee: International Business Machines Corp

A method and system for inspecting railway components. The method includes receiving a stream of images containing railway components, detecting a railway component in each image, generating a plurality of feature vectors for each railway component image, measuring the dissimilarity between the railway component and a set of railway components detected in preceding images, in a sliding window, based on the feature vectors.

Publication date: 24-10-2013

Image Capture and Identification System and Process

Number: US20130279754A1
Assignee: NANT HOLDINGS IP LLC

A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.

Publication date: 31-10-2013

Foreground subject detection

Number: US20130287257A1
Assignee: Microsoft Corp

Classifying pixels in a digital image includes receiving a primary image from a primary image sensor. The primary image includes a plurality of primary pixels. Depth information from a depth sensor is also received. The depth information and the primary image are cooperatively used to identify whether a primary pixel images a foreground subject or a background subject.

Publication date: 12-12-2013

Video segmentation method

Number: US20130329987A1
Author: Minglun Gong
Assignee: Genesis Group Inc

A system and method implemented as a software tool for foreground segmentation of video sequences in real-time, which uses two Competing 1-class Support Vector Machines (C-1SVMs) operating to separately identify background and foreground. A globalized, weighted optimizer may resolve unknown or boundary conditions following convergence of the C-1SVMs. The objective of foreground segmentation is to extract the desired foreground object from live input videos, with fuzzy boundaries captured by freely moving cameras. The present disclosure proposes the method of training and maintaining two competing classifiers, based on Competing 1-class Support Vector Machines (C-1SVMs), at each pixel location, which model local color distributions for foreground and background, respectively. By introducing novel acceleration techniques and exploiting the parallel structure of the algorithm (including reweighing and max-pooling of data), real-time processing speed is achieved for VGA-sized videos.

Publication date: 16-01-2014

Detecting and responding to an out-of-focus camera in a video analytics system

Number: US20140015984A1
Assignee: Behavioral Recognition Systems Inc

Techniques are disclosed for detecting an out-of-focus camera in a video analytics system. In one embodiment, a preprocessor component performs a pyramid image decomposition on a video frame captured by a camera. The preprocessor further determines sharp edge areas, candidate blurry edge areas, and actual blurry edge areas, in each level of the pyramid image decomposition. Based on the sharp edge areas, the candidate blurry edge areas, and actual blurry edge areas, the preprocessor determines a sharpness value and a blurriness value which indicate the overall sharpness and blurriness of the video frame, respectively. Based on the sharpness value and the blurriness value, the preprocessor further determines whether the video frame is out-of-focus and whether to send the video frame to components of a computer vision engine and/or a machine learning engine.

Publication date: 13-03-2014

Devices and Methods for Augmented Reality Applications

Number: US20140071241A1
Author: Ning Bi, Ruiduo Yang
Assignee: Qualcomm Inc

In a particular embodiment, a method includes evaluating, at a mobile device, a first area of pixels to generate a first result. The method further includes evaluating, at the mobile device, a second area of pixels to generate a second result. Based on comparing a threshold with a difference between the first result and the second result, a determination is made that the second area of pixels corresponds to a background portion of a scene or a foreground portion of the scene.
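
A minimal sketch of the comparison described above, summarizing each pixel area by its mean intensity and treating the first area as a known background reference; both the use of the mean and the direction of the decision are assumptions, since the abstract only says the determination is based on comparing the difference with a threshold.

```python
# Hedged sketch of the two-area comparison: each area of pixels is summarized by
# its mean intensity, and the second area is called background when the two
# summaries differ by less than the threshold. Using the mean, and treating the
# first area as a background reference, are assumptions of this sketch.
import numpy as np

def classify_second_area(area1: np.ndarray, area2: np.ndarray, threshold: float = 12.0) -> str:
    first_result = float(area1.mean())
    second_result = float(area2.mean())
    return "background" if abs(first_result - second_result) < threshold else "foreground"
```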

Publication date: 02-01-2020

IMAGING METHOD AND SYSTEM

Number: US20200000421A1
Author: XU Tianyi

A system includes a storage device storing a set of instructions and at least one processor in communication with the storage device, wherein when executing the instructions, the at least one processor is configured to cause the system to determine a first scan area on a scanning object. The system may also acquire raw data generated by scanning the first scan area on the scanning object and generate a positioning image based on the raw data. The system may also generate a pixel value distribution curve based on the positioning image, and determine a second scan area on the scanning object based on the pixel value distribution curve. The system may also scan the second scan area on the scanning object.

1.-32. (canceled)

33. A system, comprising: at least one storage device storing a set of instructions; and at least one processor in communication with the storage device, wherein when executing the set of instructions, the at least one processor is configured to perform operations including: acquiring a positioning image of an object; generating a pixel value distribution curve based on the positioning image; determining, based on the pixel value distribution curve, a scan area on the object; and causing the system to scan the scan area on the object.

34. The system of claim 33, wherein the positioning image is generated based on raw data obtained by scanning the object.

35. The system of claim 33, wherein the positioning image includes pixels arranged in multiple rows, and to generate a pixel value distribution curve based on the positioning image, the at least one processor is configured to perform the operations including: for each row of pixels, determining a sum of pixel values of the pixels in the row; and generating the pixel value distribution curve based on the multiple sums and their respective rows.

36. The system of claim 35, wherein the multiple rows are along a direction perpendicular to a long axis direction of the object.

37. The ...
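
A minimal sketch of the row-sum curve of claims 33-35 above; the rule that turns the curve into a scan area (rows above a fraction of the peak) is an assumption, since the claims only say the area is determined based on the curve.

```python
# Hedged sketch of claims 33-35: sum pixel values per row of the positioning
# image to obtain the distribution curve, then take the band of rows where the
# curve exceeds a fraction of its peak as the scan area. The fraction rule is an
# assumption; the claims only say the area is determined from the curve.
import numpy as np

def scan_area_from_positioning_image(image: np.ndarray, frac: float = 0.2):
    """image: (rows, cols) array -> (first_row, last_row) of the scan area, or None."""
    curve = image.astype(np.float64).sum(axis=1)       # one sum per row of pixels
    if curve.max() <= 0:
        return None
    active = np.nonzero(curve > frac * curve.max())[0]
    return int(active[0]), int(active[-1])
```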

Publication date: 06-01-2022

METHOD OF PROCESSING PICTURE, COMPUTING DEVICE, AND COMPUTER-PROGRAM PRODUCT

Number: US20220005151A1

A method is provided. The method includes: obtaining a picture to be processed, where the picture to be processed includes a plurality of pixels, and the plurality of pixels comprise first pixels for forming an image and second pixels for forming an image background; rotating the picture to be processed, where for each rotation angle, an intermediate picture is obtained; selecting at least two pictures from the picture to be processed and several intermediate pictures for calculating an area of a bounding box surrounding the image respectively; and removing second pixels outside the bounding box in a picture with the smallest area of bounding box to obtain a processed picture. 1. A method , comprising:obtaining a picture to be processed, wherein the picture to be processed comprises a plurality of pixels, and the plurality of pixels comprise first pixels for forming an image and second pixels for forming an image background;rotating the picture to be processed, wherein for each rotation angle, an intermediate picture is obtained;selecting at least two pictures from the picture to be processed and several intermediate pictures for calculating an area of a bounding box surrounding the image respectively; andremoving second pixels outside the bounding box in a picture with the smallest area of bounding box to obtain a processed picture.2. The method of claim 1 , wherein the rotating the picture to be processed comprises rotating the picture to be processed at a predetermined interval angle within a predetermined angle range claim 1 , andwherein the selecting at least two pictures from the picture to be processed and several intermediate pictures comprises selecting all pictures from the picture to be processed and the several intermediate pictures.3. The method of claim 1 , wherein coordinates of the pixels in a first direction are first coordinates claim 1 , and coordinates in a second direction are second coordinates claim 1 , wherein the first direction is ...
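
A hedged sketch of the bounding-box search described above: rather than rotating the whole picture, it rotates only the coordinates of the first (image) pixels at each candidate angle and measures the axis-aligned bounding-box area, keeping the smallest. Zero-valued background pixels, the angle grid, and the omission of the final cropping step are assumptions.

```python
# Hedged sketch of the bounding-box search: instead of rotating the whole
# picture, only the coordinates of the first (image) pixels are rotated at each
# candidate angle and the axis-aligned bounding-box area is measured; the angle
# with the smallest area wins. Zero-valued background pixels, the angle grid and
# the omission of the final cropping step are assumptions.
import numpy as np

def best_rotation_angle(picture: np.ndarray, step_deg: float = 5.0, max_deg: float = 90.0):
    ys, xs = np.nonzero(picture)                       # first pixels (the image itself)
    if ys.size == 0:
        return 0.0, 0.0
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    best_angle, best_area = 0.0, np.inf
    for angle in np.arange(0.0, max_deg + 1e-9, step_deg):
        t = np.deg2rad(angle)
        rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        p = pts @ rot.T
        area = np.ptp(p[:, 0]) * np.ptp(p[:, 1])       # bounding-box area at this angle
        if area < best_area:
            best_area, best_angle = float(area), float(angle)
    return best_angle, best_area
```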

Publication date: 01-01-2015

Method of operating a radiographic inspection system with a modular conveyor chain

Number: US20150003583A1
Assignee: Mettler Toledo Safeline Ltd

A method of operating a radiographic inspection system is designed for a radiographic inspection system in which a conveyor chain with identical modular chain segments transports the articles being inspected. The method encompasses a calibration mode and an inspection mode of the radiographic inspection system. In the calibration mode, calibration data characterizing the radiographic inspection system with the empty conveyor chain are generated and stored as a template image. In the inspection mode, raw images (50) of the articles (3) under inspection with the background (41) of the conveyor chain are acquired and arithmetically merged with the template image. The method results in a clear output image (51) of the articles under inspection being obtained without the interfering background of the conveyor chain.

Publication date: 01-01-2015

Depth constrained superpixel-based depth map refinement

Number: US20150003725A1
Author: Ernest Yiu Cheong Wan
Assignee: Canon Inc

A method of forming a refined depth map D_R of an image I using a binary depth map D_I of the image, said method comprising segmenting (315) the image into a superpixel image S_REP, defining (330) a foreground and a background in the superpixel image S_REP to form a superpixel depth map D_S, intersecting (450) the respective foreground and background of the superpixel depth map D_S with the binary depth map D_I determined independently of the superpixel image S_REP to define a trimap T consisting of a foreground region, a background region and an unknown region, and forming the refined binary depth map D_R of the image from the trimap T by reclassifying (355, 365) the pixels in the unknown region as either foreground or background based on a comparison (510) of the pixel values in the unknown region with pixel values in at least one of the other trimap regions.

Publication date: 01-01-2015

Image Capture and Identification System and Process

Number: US20150003747A1
Assignee: NANT HOLDINGS IP LLC

A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.

Publication date: 07-01-2021

METHOD TO GENERATE A SLAP/FINGERS FOREGROUND MASK

Number: US20210004559A1
Author: Ding Yi, WANG Anne Jinsong

The present invention relates to a method to generate a slap/fingers foreground mask to be used for subsequent image processing of fingerprints on an image acquired using a contactless fingerprint reader having at least a flash light, said method comprising the following steps: 1. A method to generate a slap/fingers foreground mask to be used for subsequent image processing of fingerprints on an image acquired using a contactless fingerprint reader having at least a flash light , said method comprising the following steps:acquisition of two images of the slap/fingers in a contactless position in vicinity of the reader, one image taken with flash light on and one image taken without flash light,calculation of a difference map between the image acquired with flash light and the image acquired without flash light,calculation of an adaptive binarization threshold for each pixel of the image, the threshold for each pixel being the corresponding value in the difference map, to which is subtracted this corresponding value multiplied by a corresponding flashlight compensation factor value determined in a flashlight compensation factor map using an image of a non-reflective blank target acquired with flash light and to which is added this corresponding value multiplied by a corresponding background enhancement factor value determined in a background enhancement factor map using the image acquired without flash light,binarization of the difference map by attributing a first value to pixels where the adaptive binarization threshold value is higher than the corresponding value in the difference map and a second value to pixels where the adaptive binarization threshold value is lower than the corresponding value in the difference map, the binarized image being the slap/fingers foreground mask.2. The method according to claim 1 , further comprising a step of noise removal in the binarized image.3. The method according to claim 1 , wherein the flashlight compensation factor is ...
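
A minimal sketch of the difference-map binarization formula as read from claim 1 above, with D the flash/no-flash difference map, C the flashlight-compensation factor map and B the background-enhancement factor map; mapping the "first value" to 255 (foreground) and the "second value" to 0 is an assumption, and the noise-removal step is omitted.

```python
# Hedged sketch of the adaptive binarization in claim 1: the per-pixel threshold
# is T = D - D*C + D*B, with D the flash/no-flash difference map, C the
# flashlight-compensation factor map and B the background-enhancement factor
# map; pixels where T exceeds D get the "first value". Mapping the first value
# to 255 (foreground) and the second to 0 is an assumption.
import numpy as np

def slap_foreground_mask(img_flash, img_no_flash, comp_factor, bg_enhance):
    """All inputs are (H, W) arrays; the factor maps are precomputed per the patent."""
    d = img_flash.astype(np.float64) - img_no_flash.astype(np.float64)
    threshold = d - d * comp_factor + d * bg_enhance   # adaptive per-pixel threshold
    return np.where(threshold > d, 255, 0).astype(np.uint8)
```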

Publication date: 07-01-2021

METHOD, DEVICE, AND MEDIUM FOR PROCESSING IMAGE

Number: US20210004569A1

The present disclosure relates to a method, a device and a medium for making up a face. The method for making up the face of the present disclosure includes: obtaining a first face image; determining facial key-points by detecting the first face image; generating a second face image by applying makeup to a face in the first face image based on the facial key-points; determining a first face region by segmenting the first face image, wherein the first face region is a face region that is not shielded in the first face image; and generating a final face makeup image with makeup based on the first face region and the second face image. 1. A method for processing an image , comprising:obtaining a first face image,determining facial key-points by detecting the first face image;generating a second face image by applying makeup to a face in the first face image based on the facial key-points,determining a first face region by segmenting the first face image, wherein the first face region is a face region that is not shielded in the first face image; andgenerating a third face image based on the first face region and the second face image.2. The method according to claim 1 , wherein said generating the third face image comprises:determining a shielded region and a non-shielded region in a makeup region in the second face image based on the first face region; andgenerating the third face image, by remaining the makeup of the non-shielded region and removing the makeup of the shielded region.3. The method according to claim 2 , further comprising:determining a first overlapping region where the makeup region overlaps with the first face region, and remaining the makeup of the first overlapping region; anddetermining a second overlapping region where the makeup region does not overlap with the first face region, and removing makeup in the second overlapping region.4. The method according to claim 1 , wherein said determining the first face region comprises:determining the ...

Publication date: 07-01-2021

LICENSE PLATE IDENTIFICATION METHOD AND SYSTEM THEREOF

Number: US20210004627A1

A license plate identification method is provided, including steps of: obtaining a to-be-processed image including all characters on a license plate; extracting several feature maps corresponding to character features of the to-be-processed image through a feature map extraction module; for each of the characters, extracting a block and a coordinate according to the feature maps through a character identification model based on a neural network; and obtaining a license plate identification result according to the respective blocks and the respective coordinates of the characters. 1. A license plate identification method , comprising steps of:obtaining a to-be-processed image comprising all of characters on a license plate;extracting a plurality of feature maps comprising character features of the to-be-processed image through a feature map extraction module;for each of the characters, extracting a block and a coordinate according to the feature maps through a character identification model based on a neural network; andobtaining a license plate identification result according to the respective blocks and the respective coordinates of the characters.2. The license plate identification method as claimed in claim 1 , further comprising following steps of:receiving a raw image;extracting a historical background image through a foreground and background subtraction module;comparing the raw image with the historical background image to determine an amount of image change; anddetermining whether the amount of image change is greater than a predetermined value,wherein when the amount of image change is greater than the predetermined value, the to-be-processed image comprising all of the characters is generated.3. (canceled)4. The license plate identification method as claimed in claim 1 , further comprising steps of:receiving a raw image;obtaining a vehicle front image or a vehicle rear image from the raw image through a vehicle front image capturing module or a vehicle ...

Publication date: 13-01-2022

IMAGE PROCESSING METHOD AND RELATED DEVICE

Number: US20220012851A1

This application discloses an image processing method. The method includes: obtaining an image including a target portrait, where the image includes a foreground and a background, and the target portrait corresponds to the foreground; determining a target hairline from the image, where the target hairline includes a part in which a hairline of the target portrait overlaps with the background; and performing blur processing on the image to obtain a target image, where the target hairline is blurred to a smaller degree than the background is blurred, and the target portrait is blurred to a smaller degree than the background is blurred. This image processing method may more highlight a detail of a portrait, so that details, for example a hairline, scattered in a background are not considered as the background for blurring, which improves an experience of portrait photography. 1. An image processing method , wherein the method comprises:obtaining a color image comprising a target portrait, wherein the color image comprises a foreground and a background, and the target portrait corresponds to the foreground;determining a target hairline from the color image, wherein the target hairline comprises a part in which a hairline of the target portrait overlaps with the background; andperforming blur processing on the color image to obtain a target image, wherein the target hairline is blurred to a smaller degree than the background is blurred, and the target portrait is blurred to a smaller degree than the background is blurred.2. The method according to claim 1 , before the obtaining a color image comprising a target portrait claim 1 , further comprising:receiving a target focus instruction, wherein the target focus instruction can focus on the target portrait; orentering a target photographing mode based on an instruction, wherein the target photographing mode can automatically focus on the target portrait.3. The method according to claim 1 , wherein the performing blur ...

Publication date: 04-01-2018

VIDEO MONITORING METHOD AND VIDEO MONITORING DEVICE

Number: US20180005047A1

This application provides a video monitoring method and device. The video monitoring method includes: obtaining video data; inputting at least one frame in the video data into a first neural network to determine object amount information of each pixel dot in the at least one frame; and executing at least one of the following operations by using a second neural network: performing a smoothing operation based on the object amount information in the at least one frame to rectify the object amount information; determining object density information of each pixel dot in the at least one frame based on scene information and the object amount information; predicting object density information of each pixel dot in a to-be-predicted frame next to the at least one frame based on the scene information, the object amount information, and association information between the at least one frame and the to-be-predicted frame.

1. A video monitoring method, comprising: obtaining video data acquired by a video data acquiring module; inputting at least one frame in the video data into a first neural network that is trained in advance, so as to determine object amount information of each pixel dot in the at least one frame; and executing at least one of the following operations by using a second neural network: performing a smoothing operation based on the object amount information in the at least one frame so as to rectify the object amount information; determining object density information of each pixel dot in the at least one frame based on scene information of the acquisition scene for the video data and the object amount information in the at least one frame; predicting object density information of each pixel dot in a to-be-predicted frame next to the at least one frame based on the scene information of the acquisition scene for the video data, the object amount information in the at least one frame, and association information between the at least one frame and the to-be-predicted frame. ...

Publication date: 02-01-2020

LIVING BODY DETECTION METHOD AND SYSTEM, COMPUTER-READABLE STORAGE MEDIUM

Number: US20200005061A1

A living body detection method and system, and a computer-readable storage medium are disclosed. The living body detection method includes: acquiring a video including an object to be detected; extracting at least two images to be detected from the video, and determining optical flow information according to the at least two images to be detected; dividing each image to be detected into a foreground image and a background image according to the optical flow information; using a classifier to perform category judgment on the foreground image and the background image to obtain a category distribution of the foreground image and a category distribution of the background image; and obtaining a probability that the object to be detected is a living body according to the category distribution of the foreground image and the category distribution of the background image. 1. A living body detection method , comprising:acquiring a video including an object to be detected;extracting at least two images to be detected from the video, and determining optical flow information according to the at least two images to be detected;dividing each image to be detected into a foreground image and a background image according to the optical flow information;using a classifier to perform category judgment on the foreground image and the background image to obtain a category distribution of the foreground image and a category distribution of the background image; andobtaining a probability that the object to be detected is a living body according to the category distribution of the foreground image and the category distribution of the background image.2. The method according to claim 1 , wherein the classifier comprises a neural network model; andusing the classifier to perform category judgment on the foreground image and the background image to obtain the category distribution of the foreground image and the category distribution of the background image comprises:inputting the foreground ...

Publication date: 02-01-2020

SYSTEM AND METHOD FOR USING IMAGES FOR AUTOMATIC VISUAL INSPECTION WITH MACHINE LEARNING

Number: US20200005422A1

A system and method for using images for automatic visual inspection with machine learning are disclosed. A particular embodiment includes an inspection system to: train a machine learning system to detect defects in an object based on training with a set of training images including images of defective and non-defective objects; enable a user to use a camera to capture a plurality of images of an object being inspected at different poses of the object; and detect defects in the object being inspected based on the plurality of images of the object being inspected and the trained machine learning system. 1. A system comprising:a data processor and a camera; and use a trained machine learning system to detect defects in an object based on training with a set of training images including images of defective and non-defective objects;', 'enable a user to use the camera to capture a plurality of images of an object being inspected at different poses of the object; and', 'detect defects in the object being inspected based on the plurality of images of the object being inspected and the trained machine learning system., 'an inspection system, executable by the data processor, to2. The system of being further configured to cause the inspection system to generate visual inspection information from the plurality of images of the object claim 1 , the visual inspection information including information corresponding to defects detected in the object being inspected.3. The system of wherein the visual inspection information further including inspection pass or fail information.4. The system of being further configured to cause the inspection system to provide the visual inspection information to a user of a user platform.5. The system of wherein the camera is a device of a type from the group consisting of: a commodity camera claim 1 , a camera in a mobile phone claim 1 , a camera in a mobile phone attachment claim 1 , a fixed-lens rangefinder camera claim 1 , a digital single- ...

Publication date: 02-01-2020

SYSTEMS AND METHODS FOR IMAGE DATA PROCESSING

Number: US20200005439A1
Assignee: Capital One Services, LLC

Systems and methods for processing image data representing a document to remove deformations contained in the document are disclosed. A system may include one or more memory devices storing instructions and one or more processors configured to execute the instructions. The instructions may instruct the system to provide, to a machine learning system, a training dataset representing a plurality of documents containing a plurality of training deformations. The instructions may also instruct the system to use the machine learning system to process image data representing a target document containing a target document deformation. The machine learning system may generate restored image data representing the target document with the target document deformation removed. The instructions may further instruct the system to provide the restored image data to at least one of a graphical user interface, an image storage device, or a computer vision system.

1.-20. (canceled)

21. A system, comprising: one or more memory devices storing instructions; and one or more processors configured to execute the instructions to perform operations comprising: generating training image data representing a plurality of training documents; generating transformation image data representing the training documents with training deformations; providing, to a machine learning system, the training image data and the transformation image data; training the machine learning system, using the training image data and the transformation image data, to process image data representing a document containing a document deformation; providing, to the machine learning system, image data representing a target document containing a target document deformation; and generating, using the machine learning system, restored image data representing the target document with the target document deformation corrected.

22. The system of claim 21, wherein: the training image data further comprises data ...

02-01-2020 дата публикации

Image processing device, image processing method, and recording medium

Number: US20200005492A1
Author: Kyota Higa
Assignee: NEC Corp

A state of a display rack is determined more accurately. An image processing device includes a detection unit configured to detect a change area related to a display rack from a captured image in which an image of the display rack is captured, and a classification unit configured to classify a change related to the display rack in the change area, based on a previously learned model of the change related to the display rack or distance information indicating an image captured before an image capturing time of the captured image.

03-01-2019 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING SYSTEM

Number: US20190005613A1
Assignee:

A coordinate transformation matrix generation unit generates a coordinate transformation matrix corresponding to an image range obtained from a position and a direction of a viewpoint in a virtual space and the viewpoint. A scale transformation adjustment unit performs a scale transformation corresponding to a change of an image range with respect to an actual image by using a scale transformation by an optical zoom of an image pickup unit that generates the actual image, and generates a coordinate transformation matrix including the scale transformation. An image synthesis unit uses the coordinate transformation matrix generated by the coordinate transformation matrix generation unit to perform coordinate transformation of a virtual image, uses the coordinate transformation matrix generated by the scale transformation adjustment unit to perform coordinate transformation of the actual image, and synthesizes the virtual image and the actual image after the coordinate transformations. An image pickup control unit outputs a control signal corresponding to the scale transformation by the optical zoom to the image pickup unit, thereby causing the actual image that has been subjected to the scale transformation by the optical zoom to be generated. It is possible to maintain a resolution of the actual image in the synthesis image of the actual image and the virtual image to be good. 1. An image processing apparatus , comprising:a scale transformation adjustment unit that performs a scale transformation in coordinate transformation to draw an actual image to be synthesized with a virtual image in a virtual space by using a scale transformation by an optical zoom of an image pickup unit that generates the actual image.2. The image processing apparatus according to claim 1 , whereinthe scale transformation adjustment unit performs the scale transformation in the coordinate transformation by using the scale transformation by the optical zoom and a scale transformation not by ...

03-01-2019 publication date

SYSTEMS AND METHODS FOR VOLUMETRIC SEGMENTATION OF STRUCTURES IN PLANAR MEDICAL IMAGES

Number: US20190005649A1
Assignee:

Methods and systems for volumetric segmentation of structures in planar medical images. One example method includes displaying a first planar medical image. The method further includes receiving a user input indicating a line segment in the first planar medical image. The method also includes determining an inclusion region using the line segment. The inclusion region consists of a portion of the structure. The method further includes determining a containment region using the line segment. The containment region includes the structure. The method also includes determining a background region using the line segment. The background region excludes the structure. The method further includes determining a three dimensional (3D) contour of the structure using the inclusion region, the containment region, and the background region. The method also includes determining a long axis of the structure using the 3D contour. The method further includes outputting a dimension of the long axis. 1. A method for volumetric segmentation of a structure in a plurality of planar medical images , the method comprising:receiving, at an electronic processor, the plurality of planar medical images, wherein the plurality of planar medical images form a three dimensional (3D) volume including the structure;displaying, on a display, a first planar medical image from the plurality of planar medical images;receiving, with a user interface, a user input indicating a line segment in the first planar medical image;determining, with the electronic processor, an inclusion region of the 3D volume using the line segment, wherein the inclusion region consists of a portion of the structure;determining, with the electronic processor, a containment region of the 3D volume using the line segment, wherein the containment region includes the structure;determining, with the electronic processor, a background region of the 3D volume using the line segment, wherein the background region excludes the structure; ...

03-01-2019 publication date

BACKGROUND MODELLING OF SPORT VIDEOS

Number: US20190005652A1
Author: PHAM QUANG TUAN
Assignee:

A method of classifying foreground and background in an image of a video by determining a pitch colour model of the image, the pitch colour model comprising a pitch colour, colour shades of the pitch colour, and the pitch colour and the colour shades under different shades of shadow. Then determining a pitch mask based on a pitch segmentation of the image of the video, determining a pitch background model based on the pitch mask and the pitch colour model. The method may continue by classifying each of the elements of the pitch mask as background if a colour of the element of the pitch mask matches the pitch colour model and updating the pitch background model and the pitch colour model using the colours of the elements that have been classified to match the pitch colour model. 1. A method of classifying foreground and background in an image of a video , said method comprising:determining a pitch colour model of the image, the pitch colour model comprising a pitch colour, colour shades of the pitch colour, and the pitch colour and the colour shades under different shades of shadow;determining a pitch mask based on a pitch segmentation of the image of the video;determining a pitch background model based on the pitch mask and the pitch colour model;classifying each of the elements of the pitch mask as background if a colour of the element of the pitch mask matches the pitch colour model; andupdating the pitch background model and the pitch colour model using the colours of the elements that have been classified to match the pitch colour model.2. The method according to claim 1 , further comprising:classifying each of the elements of the pitch mask as either foreground or background using a multi-mode scene modelling if the element of the pitch mask does not match the pitch colour model; andupdating the multi-mode scene modelling.3. The method according to claim 1 , wherein the pitch colour model allows a chromatic shift between the colours under different illumination ...
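
A minimal sketch of the single-colour pitch model, assuming OpenCV and NumPy: pixels whose hue stays near a running pitch hue are labelled background and then folded back into the model. The tolerances, the learning rate, and the neglect of hue wrap-around are illustrative simplifications rather than the claimed method.

```python
# Minimal sketch of a single-colour "pitch" background model in HSV space.
# Thresholds and the running-average update are illustrative assumptions;
# hue wrap-around at 180 is ignored for brevity.
import cv2
import numpy as np

def classify_and_update(frame_bgr, pitch_hue, hue_tol=12, sat_min=60, lr=0.02):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0].astype(np.int16)
    s = hsv[..., 1]
    # Background = colour close to the pitch hue; value is left free so the same
    # hue under different shades of shadow still matches.
    background = (np.abs(h - pitch_hue) <= hue_tol) & (s >= sat_min)
    if background.any():
        pitch_hue = (1 - lr) * pitch_hue + lr * float(h[background].mean())
    return background, pitch_hue   # foreground is simply ~background
```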

03-01-2019 publication date

METHOD AND APPARATUS FOR EXTRACTING FOREGROUND

Number: US20190005653A1
Assignee: SAMSUNG SDS CO., LTD.

A method includes acquiring, by a device, encoded image data corresponding to an original image. The method includes decoding, by the device, the encoded image data. The method included acquiring, by the device, a foreground extraction target frame and an encoding parameter associated with an encoding process of the original image based on decoding the encoded image data. The method includes extracting, by the device, a first candidate foreground associated with the foreground extraction target frame based on the encoding parameter. The method includes extracting, by the device, a second candidate foreground associated with the foreground extraction target frame based on a preset image processing algorithm. The method includes determining, by the device, a final foreground associated with the foreground extraction target frame based on the first candidate foreground and the second candidate foreground. 1. A method of image processing , the method comprising:acquiring encoded image data corresponding to an original image;decoding the encoded image data;acquiring a foreground extraction target frame and an encoding parameter associated with an encoding process of the original image, based on decoding the encoded image data;extracting a first candidate foreground associated with the foreground extraction target frame based on the encoding parameter;extracting a second candidate foreground associated with the foreground extraction target frame based on an image processing algorithm; anddetermining a final foreground associated with the foreground extraction target frame based on the first candidate foreground and the second candidate foreground.2. The method of claim 1 , wherein the encoding parameter includes at least one of a motion vector claim 1 , a discrete cosine transform (DCT) coefficient claim 1 , or partition information associated with a number and size of prediction blocks.3. The method of claim 1 , wherein the extracting of the first candidate foreground ...
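
A rough sketch of fusing the two candidates, assuming OpenCV and NumPy: one candidate comes from decoded motion-vector magnitudes, the other from a stock background subtractor, and the final mask keeps pixels supported by both. The threshold, the MOG2 choice, and the AND fusion rule are illustrative assumptions.

```python
# Minimal sketch: fuse a codec-side candidate (motion-vector magnitude) with a
# pixel-side candidate (background subtraction). Thresholds are illustrative.
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def extract_foreground(frame_gray, motion_vectors, mv_thresh=1.5):
    # Candidate 1: blocks whose decoded motion vectors are large.
    # motion_vectors is assumed to be an (H_blocks, W_blocks, 2) array.
    mv_mag = np.linalg.norm(motion_vectors, axis=-1)
    cand1 = cv2.resize((mv_mag > mv_thresh).astype(np.uint8),
                       frame_gray.shape[::-1], interpolation=cv2.INTER_NEAREST)
    # Candidate 2: classical background subtraction on the decoded pixels.
    cand2 = (bg_subtractor.apply(frame_gray) > 0).astype(np.uint8)
    # Final foreground: keep pixels supported by both candidates.
    return cand1 & cand2
```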

07-01-2016 publication date

Virtual mask for use in autotracking video camera images

Number: US20160006991A1

A surveillance camera system includes a camera that acquires images and that has an adjustable field of view. A processing device is operably coupled to the camera. The processing device allows a user to define a virtual mask within the acquired images. The processing device also tracks a moving object of interest in the acquired images with a reduced level of regard for areas of the acquired images that are within the virtual mask.

07-01-2021 publication date

SYSTEMS AND METHODS FOR IMPLEMENTING PERSONAL CAMERA THAT ADAPTS TO ITS SURROUNDINGS, BOTH CO-LOCATED AND REMOTE

Number: US20210006731A1
Assignee:

A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real time video conference stream containing the video of the user; detecting and separating the background in the received real time video conference stream from the user; and replacing the separated background with a background received from a system of a second user or with a pre-recorded background. 1. A computerized system comprising a processing unit and a memory , the system operating in connection with a real-time video conference stream containing a video of a user , wherein the memory embodies a set of computer-executable instructions , which cause the computerized system to perform a method comprising:a. receiving the real time video conference stream containing the video of the user;b. finding and separating a reflection in the received real time video conference stream; andc. modifying the separated reflection in the received real time video conference stream.2. The system of claim 1 , wherein the modifying the separated reflection comprises darkening the separated reflection.3. The system of claim 1 , wherein the separated reflection is eye glasses of the user.4. The system of claim 1 , wherein the modifying the separated reflection comprises replacing the separated reflection with a new reflection using a video conference stream of a second user. This application is a Divisional Application of U.S. application Ser. No. 16/214,041 filed Dec. 8, 2018, the contents of which are incorporated herein by reference.The disclosed embodiments relate in general to smart camera systems and, more specifically, to systems and methods for implementing personal “chameleon” smart camera that adapts to its surroundings, both co-located and remote.As ...

07-01-2021 publication date

SYSTEMS AND METHODS FOR IMPLEMENTING PERSONAL CAMERA THAT ADAPTS TO ITS SURROUNDINGS, BOTH CO-LOCATED AND REMOTE

Number: US20210006732A1
Assignee:

A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions, which cause the computerized system to perform a method involving: receiving the real time video conference stream containing the video of the user; detecting the background in the received real time video conference stream from the user; and matching the first background and a second background associated with the second user. 1. A computerized system comprising a processing unit and a memory , the system operating in connection with a real-time video conference stream between at least a first user and a second user , the real-time video conference stream containing a first video of the first user and a first background and a second video of the second user , wherein the memory embodies a set of computer-executable instructions , which cause the computerized system to perform a method comprising:a. receiving the real time video conference stream containing the first video of first the user and the first background;b. detecting the first background in the received real time video conference stream from the first user; andc. matching the first background and a second background associated with the second user.2. The system of claim 1 , wherein the second background is a pre-recorded background retrieved from a database.3. The system of claim 1 , wherein the second background is automatically generated.4. The system of claim 1 , wherein the second background is recorded by the second user in response to a prompt by the system of the second user.5. The system of claim 1 , wherein the detecting and separating the background in the real time video conference stream comprises detecting a face of the first user.6. The system of claim 5 , wherein the face of the first user is detected by identifying a plurality of feature points in ...

08-01-2015 publication date

Photographing Method, Photo Management Method and Device

Number: US20150010239A1
Assignee: Huawei Technologies Co Ltd

A photographing method that includes: acquiring to-be-photographed first content; after determining a first subject with which a user is concerned in the first content, acquiring an image composition relationship between a second subject in the first content and the first subject, where the second subject is another background subject in the first content except the first subject; matching the image composition relationship between the second subject and the first subject with a preset image composition template to obtain a matching evaluation degree, and providing an image composition adjustment suggestion on the first content for the user according to the matching evaluation degree and the image composition template, where the adjustment suggestion is a tip on how to adjust the image composition relationship in the first content so that the image composition relationship completely matches the preset image composition template.

10-01-2019 publication date

Image Capture and Identification System and Process

Number: US20190008684A1
Assignee:

A digital image depicting a digital representation of a scene is captured by an image sensor of a vehicle. An identification system recognizes a real-world object from the digital image as a target object based on derived image characteristics and identifies object information about the target object based on the recognition. The identification provides the object information to the vehicle data system of the vehicle so that the vehicle data system can execute a control function of the vehicle based on the received object information. 1. A method for vehicle-based object recognition , comprising:obtaining, by an identification system, image data captured by an image sensor of a vehicle, the image data containing a digital representation of a real-world object within a scene;deriving, by the identification system, image characteristics of the real-world object from the digital representation of the real-world object in the image data;recognizing, by the identification system, the real-world object as a target object based on the derived image characteristics;identifying, by the identification system, object information about the target object based on the recognition;providing, by the identification system and to a vehicle data system of the vehicle, the object information; andexecuting, by the vehicle data system, a control function of the vehicle based on the object information.2. The method of claim 1 , wherein the control function comprises at least one of guidance claim 1 , navigation or maneuvering of the vehicle relative to the real-world object.3. The method of claim 1 , wherein the control function comprises planning claim 1 , by the vehicle data system claim 1 , a trajectory relative to the real-world object.4. The method of claim 1 , wherein the object information comprises at least one of location or orientation of the real-world object relative to the vehicle.5. The method of claim 1 , wherein the real-world object is a street sign.6. The method of claim ...

09-01-2020 publication date

IMAGE CAPTURE AND IDENTIFICATION SYSTEM AND PROCESS

Number: US20200008978A1
Assignee:

A computing platform that analyzes a captured video stream to identify a document depicted in the video stream, validates identification information corresponding to the document to display an information address associated with the document, and that initiates a transaction based on the validation of the identification information associated with the document. 1. A method of conducting a financial transaction , the method comprising:analyzing, via at least one computing device processor, a video stream;identifying, via the at least one computing device processor, a document in the video stream;validating, via the at least one computing device processor, identification information pertinent to the document based on the video stream;displaying, via the at least one computing device processor, an information address where the information address is related to the document; andinitiating, via the at least one computing device processor and based on validation of the identification information, a financial transaction related to the document.2. The method of claim 1 , further comprising capturing claim 1 , by a mobile device claim 1 , the video stream.3. The method of claim 1 , wherein identifying the document comprises automatically capturing an image of the document from the video stream.4. The method of claim 3 , wherein identifying the document includes recognizing and decoding symbols according symbol type based on location in the image.5. The method of claim 1 , further comprising displaying a visual indicator with the video stream.6. The method of claim 1 , wherein the information address is associated with the financial transaction.7. The method of claim 1 , wherein the information address is associated with a bank account.8. The method of claim 1 , wherein the document is related to an individual who is a user of a mobile device.9. The method of claim 1 , further comprising allowing a user to perform ongoing interactions related to the financial transaction.10. ...

27-01-2022 publication date

INFORMATION PROCESSING APPARATUS, IMAGE GENERATION METHOD, CONTROL METHOD, AND STORAGE MEDIUM

Number: US20220030215A1
Assignee:

An information processing apparatus for a system generates a virtual viewpoint image based on image data obtained by performing imaging from a plurality of directions using a plurality of cameras. The information processing apparatus includes an obtaining unit configured to obtain a foreground image based on an object region including a predetermined object in a captured image for generating a virtual viewpoint image and a background image based on a region different from the object region in the captured image, wherein the obtained foreground image and the obtained background image having different frame rates, and an output unit configured to output the foreground image and the background image which are obtained by the obtaining unit and which are associated with each other. 1. (canceled)2. An information processing apparatus comprising:one or more memories storing instructions; andone or more processors executing the instructions to:transmit information for specifying data which corresponds to a time of a virtual viewpoint image to be generated and is used to generate the virtual viewpoint image to an apparatus which controls output of a plurality of items of data which is used to generate a virtual viewpoint image and corresponds to a plurality of times;obtain the data which corresponds to the time specified based on the transmitted information and is used to generate the virtual viewpoint image; andgenerate the virtual viewpoint image corresponding to the time in accordance with the obtained data.3. The information processing apparatus according to claim 2 , wherein the obtained data includes a foreground image which includes an object and corresponds to the time specified based on the information and a background image which does not include the object and corresponds to the time specified based on the information.4. The information processing apparatus according to claim 3 , wherein the background image corresponds to a time which is closest to the time ...

10-01-2019 publication date

DISPLAY APPARATUS AND DISPLAY METHOD

Number: US20190012529A1
Author: WANG Zifeng
Assignee: BOE Technology Group Co., Ltd.

A display apparatus and a display device are provided. The display apparatus includes: a first image acquisition device, configured to acquire a target image of a target region in the case that a human body is in the target region; an image processing device, configured to identify a body physical feature of the human body according to a human body image in the target image; an image generating device, configured to generate, according to the body physical feature, a virtual human body image corresponding to the human body and conforming to a target age; and a display device, configured to display the virtual human body image. A region displaying the virtual human body image is a virtual human body display region. 1. A display apparatus , comprising:a first image acquisition device, configured to acquire a target image of a target region in the case that a human body is in the target region;an image processing device, configured to identify a body physical feature of the human body according to a human body image in the target image;an image generating device, configured to generate, according to the body physical feature, a virtual human body image corresponding to the human body and conforming to a target age; anda display device, configured to display the virtual human body image,wherein, a region displaying the virtual human body image is a virtual human body display region.2. The display apparatus according to claim 1 , wherein claim 1 , the image processing device is further configured to identify claim 1 , according to the target image claim 1 , a region occupied by the human body image in the target image; the region occupied by the human body image corresponds to a human body corresponding region of the display device; and the virtual human body display region is located in the human body corresponding region.3. The display apparatus according to claim 2 , wherein claim 2 , the virtual human body display region changes in real time according to change of ...

14-01-2021 publication date

Object Recognition System with Invisible or Nearly Invisible Lighting

Number: US20210012070A1
Assignee:

A barcode reader is provided. The barcode reader includes a first image acquisition assembly having a first imager assembly configured to capture infrared (IR) light and an IR illumination assembly configured to emit IR light over at least a portion of a first field of view (FOV) of the first imager assembly so as to illuminate targets within the first FOV. The barcode reader further includes a second image acquisition assembly having a second imager assembly configured to capture visible light and a visible-light illumination assembly configured to emit visible light over at least a portion of a second FOV of the second imager assembly so as to illuminate targets within the second FOV. 1. A bi-optic barcode reader comprising:a housing having a platter and an upright tower, the platter having a generally horizontal window and the upright tower having a generally upright window;a first image acquisition assembly positioned at least partially within the housing, the first image acquisition assembly having an infrared (IR) illumination assembly and a first imager assembly, the first imager assembly having a first field of view (FOV) and being configured to capture IR light, the first FOV being directed out of the housing through the generally upright window, the IR illumination assembly being configured to emit IR light over at least a portion of the first FOV so as to illuminate targets within the first FOV; anda second image acquisition assembly positioned at least partially within the housing, the second image acquisition assembly having a visible-light illumination assembly and a second imager assembly, the second imager assembly having a second FOV and being configured to capture visible light, the second FOV being directed out of the housing through the generally horizontal window, the visible-light illumination assembly being configured to emit visible light over at least a portion of the second FOV so as to illuminate targets within the second FOV.2. The bi- ...

10-01-2019 publication date

Enhanced Contrast for Object Detection and Characterization By Optical Imaging Based on Differences Between Images

Number: US20190012564A1
Author: HOLZ David S., YANG Hua
Assignee: Leap Motion, Inc.

Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels. 1. A method of capturing and analyzing an image , the method comprising: operate the at least one camera to capture a sequence of images including a first image captured at a time when the at least one light source is illuminating a field of view;', 'identify pixels corresponding to an object of interest rather than to a background;', 'based on the identified pixels, construct a 3D model of the object of interest, including a position and shape of the object of interest; and', 'distinguish between (i) foreground image components corresponding to objects located within a proximal zone of the field of view, the proximal zone extending from the at least one camera and having a depth relative thereto of at least twice an expected maximum distance between the objects corresponding to the foreground image components and the at least one camera, and (ii) background image components corresponding to objects located within a distal zone of the field of view, the distal zone being located, relative to the at least one camera, beyond the proximal zone., 'utilizing an image analyzer coupled to at least one camera and at least one light source to2. The method of claim 1 , wherein the proximal zone has a depth of at least four times the expected maximum distance.3. The method of claim 1 , wherein the at least one light source is a diffuse emitter.4. The method of claim 3 , wherein the at least one light source is an infrared light-emitting diode and the at least one camera is an infrared-sensitive ...
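
One simple way to exploit the falloff is sketched below under the assumption that a lit and an unlit frame of the same scene are available: pixels whose brightness jumps when the near-camera illuminator fires are treated as object (proximal-zone) pixels. The ratio test and its threshold are illustrative, not the claimed procedure.

```python
# Minimal sketch: separate near-field object pixels from background by exploiting
# the 1/r^2 falloff of a light source placed next to the camera. The threshold is
# an illustrative assumption.
import numpy as np

def object_mask(lit_frame, unlit_frame, ratio_thresh=3.0, eps=1.0):
    """Both frames are grayscale arrays; the lit frame is captured while the
    near-camera illuminator is on."""
    gain = (lit_frame.astype(np.float32) + eps) / (unlit_frame.astype(np.float32) + eps)
    # Pixels whose brightness rises sharply when the illuminator fires are close
    # to the camera (proximal zone) and are treated as object pixels.
    return gain > ratio_thresh
```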

10-01-2019 publication date

AUTOMATED VISUAL INSPECTION SYSTEM

Number: US20190012777A1
Assignee:

An example apparatus for measuring a feature of a tested component may include a lighting device, an imaging device, and a computing device. The computing device may receive, from the imaging device, a plurality of images the tested component in a plurality of states. The computing device may segment each image to isolate target areas from background areas. The computing device may measure a plurality of lengths of the target areas and compare corresponding lengths of two or more of the images. 1. An apparatus for measuring a feature of a tested component comprising:a lighting device configured output light to illuminate at least a portion of the tested component;an imaging device; and receive, from the imaging device, a first image of the portion of the tested component in a first state;', 'segment the first image to isolate a first target area of the image from background areas of the first image;', 'measure a plurality of first lengths of at least one portion of the first target area;', 'receive, from the imaging device, a second image of the portion of the tested component in a second, different state;', 'segment the second image to isolate a second target area of the second image from background areas of the second image;', 'measure a plurality of second lengths of at least one portion of the second target area, wherein a respective first length of the plurality of first lengths corresponds to a respective second length of the plurality of second lengths; and', 'compare each respective first length of the plurality of first lengths to the corresponding second length of the plurality of second lengths., 'a computing device configured to2. The apparatus of claim 1 , wherein the computing device is further configured to determine whether a difference between each respective first length of the plurality of first lengths and the corresponding second length of the plurality of second lengths is within a predetermined tolerance.3. The apparatus of claim 1 , wherein ...
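
A minimal sketch of the measure-and-compare step, assuming OpenCV and NumPy and that both states are imaged at the same resolution: the target is isolated by a global threshold, its per-row extent is measured in each state, and the change is checked against a pixel tolerance. Threshold and tolerance values are illustrative.

```python
# Minimal sketch: isolate the illuminated target by thresholding, measure its
# per-row extent in two states, and check the change against a tolerance.
import cv2
import numpy as np

def row_lengths(image_gray, thresh=128):
    _, mask = cv2.threshold(image_gray, thresh, 255, cv2.THRESH_BINARY)
    return (mask > 0).sum(axis=1)              # target width per image row

def within_tolerance(img_state_a, img_state_b, tol_px=3):
    # Assumes both state images have the same height.
    la, lb = row_lengths(img_state_a), row_lengths(img_state_b)
    return bool(np.all(np.abs(la.astype(int) - lb.astype(int)) <= tol_px))
```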

10-01-2019 publication date

METHOD FOR THE GRAPHICS PROCESSING OF IMAGES

Number: US20190012791A1
Assignee: Laoviland Experience

A method for graphic image processing from existing source image files forming a database from which an image is extracted for said graphic processing operations. The method includes the step of implementing n (n≥1) saliency processing operations in order to form n saliency cards CSi, with i=1 to n; 2. The method for graphic image processing according to claim 1 , wherein at least one saliency criterion underlying a saliency processing is inverted.3. The method for graphic image processing according to claim 1 , wherein the threshold set for the first thresholding is varying.4. The method for graphical image processing according to claim 1 , wherein a second thresholding is performed claim 1 , the threshold of which is at a level close to the threshold selected for the first thresholding claim 1 , a differentiation of the white pixels resulting from the two thresholds leading to a set of pixels permitting the calculation of a contouring of the white spots resulting from the first thresholding.5. The method for graphic image processing according to claim 1 , wherein at the end of the vectorization phase claim 1 , a color inversion of the black and white spots is implemented.6. The method for graphic image processing according to claim 1 , wherein at the end of the vectorization phase claim 1 , the black background is removed and remains transparent.7. The method for graphic image processing according to claim 1 , wherein the filling of each group is performed by selecting a plurality of modes including at least an image mode claim 1 , a color mode claim 1 , a hue mode and a black and white mode.8. The method for graphic image processing according to claim 7 , wherein the image mode consists in comprises filling a given group by the image portion resulting from the source file claim 7 , which has the same shape claim 7 , the same surface area as the group and corresponds to the location of this group in the source image.9. The method for graphic image processing ...

10-01-2019 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

Number: US20190012793A1
Author: Ito Kan
Assignee:

Anonymization processing for protecting privacy and personal information can be appropriately performed based on a detection state of a moving object within an imaging range. Processing corresponding to one of a first mode for anonymizing an area of a human body based on a fixed background image and a second mode for anonymizing the area of the human body based on a basic background image is performed on an area of a detected moving object based on a detection result of a moving object detection unit. 1. An image processing apparatus comprising:an image capturing unit configured to acquire a captured image;a holding unit configured to hold a first background image;a moving object detection unit configured to detect a moving object in the captured image;an updating unit configured to generate a second background image based on a detection result of the moving object detection unit; andan anonymization unit configured to perform, based on the detection result of the moving object detection unit, processing on an area of the moving object detected by the moving object detection unit, the processing corresponding to one of a first mode for anonymizing the area of the moving object based on the first background image and a second mode for anonymizing the area of the moving object based on the second background image.2. The image processing apparatus according to claim 1 , further comprising a generation unit configured to generate a mask image corresponding to the area of the moving object claim 1 , wherein claim 1 , the anonymization unit is configured to combine claim 1 , based on the number of moving objects detected by the moving object detection unit claim 1 , the mask image with the first background image in the first mode and the mask image with the second background image in the second mode.3. The image processing apparatus according to claim 2 , wherein the anonymization unit is configured to combine the mask image with the first background image in the first ...
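
A minimal sketch of the mask-based compositing, assuming NumPy and colour frames: moving-object pixels are replaced by background pixels, and non-moving pixels are blended into a running background estimate. Which background image is used stands in for the two modes; the learning rate is an illustrative assumption.

```python
# Minimal sketch of mask-based anonymization: pixels flagged as moving objects are
# replaced by background pixels. Which background is supplied (a fixed one or a
# running average) stands in for the two modes; the update rate is illustrative.
import numpy as np

def update_background(background, frame, moving_mask, lr=0.05):
    """Blend non-moving pixels into the running background estimate (float32)."""
    blend = (1 - lr) * background + lr * frame.astype(np.float32)
    return np.where(moving_mask[..., None], background, blend)

def anonymize(frame, background, moving_mask):
    # Replace moving-object pixels with the corresponding background pixels.
    return np.where(moving_mask[..., None], background.astype(frame.dtype), frame)
```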

10-01-2019 publication date

MOVEMENT MONITORING SYSTEM

Number: US20190012794A1
Assignee: WISCONSIN ALUMNI RESEARCH FOUNDATION

A monitoring system may include an input port, an output port, and a controller in communication with the input port and the output port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a predetermined task. Based on the dimensions and/or other parameters identified or extracted from the video during the predetermined task, the controller may output via the output port assessment information. 1. A monitoring system comprising:an input port for receiving video;an output port; identify a subject within a frame of video relative to a background within the frame;', 'determine when the subject in the video is performing a task;', 'identify a height dimension and a width dimension of the subject in one or more frames of the video during the task; and', 'output via the output port position assessment information relative to the subject during the task based on the height dimension and the width dimension for the subject in one or more frames of the video during the task., 'a controller in communication with the input port and the output port, the controller configured to2. The monitoring system of claim 1 , further comprising:an image capturing device adapted to capture video of the subject and provide the video to the controller via the input port.3. The monitoring system of claim 1 , wherein the controller is configured to:determine extreme-most pixels in two dimensions of the subject to identify the height dimension and the width dimension based on the identified subject; andidentify a ...
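
A minimal sketch of the dimension step, assuming NumPy and a binary foreground mask: the extreme-most foreground pixels give the subject's height and width, and their ratio serves as a naive posture cue. The ratio rule and threshold are illustrative, not the system's assessment logic.

```python
# Minimal sketch: height/width of the segmented subject from the extreme-most
# foreground pixels, plus a naive posture cue from their ratio.
import numpy as np

def subject_dimensions(foreground_mask):
    ys, xs = np.nonzero(foreground_mask)
    if ys.size == 0:
        return None
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return height, width

def looks_bent_over(height, width, ratio_thresh=1.2):
    # A standing subject is usually much taller than wide; a low ratio can flag
    # a lifting/bending posture during the monitored task.
    return (height / max(width, 1)) < ratio_thresh
```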

10-01-2019 publication date

REAL TIME MULTI-OBJECT TRACKING APPARATUS AND METHOD USING GLOBAL MOTION

Number: US20190012800A1
Author: Lee Ji Won, MOON Sung Won

Provided are a real time multi-object tracking apparatus and method which use global motion, including separating a background and multiple objects from a detected image, recognizing the multiple objects separated from the background; calculating global motion information of the recognized multiple objects, which is information oriented by the multiple objects, and correcting the recognized multiple objects using the calculated global motion information and tracking the multiple objects. 1. A method of tracking multiple objects in real time using global motion , the method comprising:separating a background and multiple objects from a currently detected image;recognizing the multiple objects separated from the background;calculating global motion information of the recognized multiple objects, which is information oriented by the multiple objects; andcorrecting the recognized multiple objects using the calculated global motion information, and tracking the multiple objects.2. The method of claim 1 , wherein the separating of the background and the multiple objects comprises:separating a background and an object of a current frame from each other on the basis of a likelihood function;separating a background and an object of the current frame from each other using dynamic edge;determining whether the object separated using the dynamic edge is included in the background separated on the basis of the likelihood function; andseparating, when it is determined in the determination that the object separated using the dynamic edge is included in the background separated on the basis of the likelihood function, a region determined to be the object using the dynamic edge from the background separated on the basis of the likelihood function.3. The method of claim 1 , wherein the recognizing of the multiple objects comprises:setting a support window region using minimum coordinates and maximum coordinates of a connected component (connected component analysis: CCA) within a ...
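
A rough sketch of the global-motion correction, assuming OpenCV and NumPy: camera-induced motion is estimated as the median of dense optical flow and subtracted from each object's apparent displacement before the tracks are updated. The Farneback parameters are the usual textbook values, chosen for illustration.

```python
# Minimal sketch: estimate global motion as the median of dense optical flow and
# remove it from per-object displacements before updating the tracks.
import cv2
import numpy as np

def global_motion(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.median(flow.reshape(-1, 2), axis=0)      # (dx, dy) of the camera

def corrected_displacement(prev_pos, curr_pos, cam_motion):
    # Object motion in the scene = apparent motion minus camera-induced motion.
    return (np.asarray(curr_pos) - np.asarray(prev_pos)) - cam_motion
```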

14-01-2021 publication date

APPARATUS AND METHOD FOR GENERATING IMAGE

Number: US20210012503A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

An image generating apparatus includes: a display outputting an image; a memory storing one or more instructions; and a processor. The processor is configured to execute the one or more instructions to detect an object in an image including a plurality of frames, provide a plurality of candidate boundaries for masking the detected object, identify an optimal boundary by assessing the provided plurality of candidate boundaries, and generate a partial moving image with the object moving by using the optimal boundary. 1. An image generating apparatus comprising:a display configured to output an image;a memory configured to store one or more instructions; and detect an object in an image including a plurality of frames;', 'provide a plurality of candidate boundaries for masking the detected object;', 'identify an optimal boundary by assessing the plurality of candidate boundaries; and', 'generate a partial moving image with the object moving by using the optimal boundary., 'a processor configured to execute the one or more instructions to2. The image generating apparatus of claim 1 , wherein the processor is further configured to execute the one or more instructions to:mask the object in one of the plurality of frames by using the identified optimal boundary; andgenerate the partial moving image with the object moving by using the one of the plurality of frames in which the object is masked and the plurality of frames.3. The image generating apparatus of claim 1 , wherein the processor is further configured to execute the one or more instructions to provide the plurality of candidate boundaries for masking the object detected in the image by using a first artificial intelligence (AI) model.4. The image generating apparatus of claim 3 , wherein the first AI model includes a plurality of segmentation AI models claim 3 , andthe processor is further configured to execute the one or more instructions to provide the plurality of candidate boundaries by using the plurality of ...

14-01-2021 publication date

IDENTIFYING TARGETS WITHIN IMAGES

Number: US20210012508A1
Assignee:

Methods of detecting and/or identifying an artificial target within an image are provided. These methods comprise: applying to a region of the image a primary classification algorithm for performing a feature extraction of the image region, the primary classification algorithm being based on a spectral profile defined by one or more spectral signatures with one or more features in at least part of the infrared spectrum; obtaining a relation between the extracted features of the image region and the spectral profile; verifying whether a level of confidence of the obtained relation between the extracted features and the spectral profile is higher than a first predetermined confirmation level; and, in case of positive (or true) result of said verification, determining that the image region corresponds to artificial target to be detected, thereby obtaining a confirmed artificial target. Systems and computer programs are also provided that are suitable for performing said methods. 2. The method according to claim 1 , wherein the primary classification algorithm is also based on a spatial profile.3. (canceled)4. The method according to claim 1 , wherein the spectral profile defined by one or more spectral signatures with one or more features in at least part of the infrared spectrum is extracted from the image.5. (canceled)6. The method according to claim 1 , further comprising:obtaining predefined spectral data;applying to the region of the image the primary classification algorithm so that the spectral profile is further extracted from at least one obtained predefined spectral datum.78.-. (canceled)9. The method according to claim 1 , wherein the predefined spectral training data includes data of a material artificially attributed to the target claim 1 , said material modifying the original spectral features of the artificial target.10. The method according to claim 1 , wherein the material modifying the original spectral features of the artificial target comprises a ...

14-01-2021 publication date

Image processing method and computer-readable recording medium having recorded thereon image processing program

Number: US20210012509A1
Author: Hiroki Fujimoto
Assignee: Screen Holdings Co Ltd

An image processing method that includes obtaining an original image including a cultured cell image with a background image, dividing the original image into blocks, each composed of a predetermined number of pixels, and obtaining a spatial frequency component of an image in each block for each block, and classifying each block as the one belonging to a cell cluster corresponding to the cell or the one belonging to other than the cell cluster in a two-dimensional feature amount space composed of a first feature amount which is a total of intensities of low frequency components having a frequency equal to or lower than a predetermined frequency and a second feature amount which is a total of intensities of high frequency components having a higher frequency than the low frequency component, and segmenting the original image into an area occupied by the blocks classified as the cell cluster and another area.
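
A minimal sketch of the block-wise frequency features, assuming OpenCV and NumPy: each block's DCT is split into a low-frequency sum (the first feature amount) and a high-frequency sum (the second), and a simple rule stands in for the classification in that two-dimensional space. Block size, cutoff, and thresholds are illustrative.

```python
# Minimal sketch of the block-wise frequency features: per block, the first
# feature sums low-frequency DCT magnitudes and the second sums the remaining
# high-frequency magnitudes. Block size, cutoff and thresholds are illustrative.
import cv2
import numpy as np

def block_features(gray, block=16, cutoff=4):
    h, w = gray.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            d = np.abs(cv2.dct(gray[y:y + block, x:x + block].astype(np.float32)))
            low = d[:cutoff, :cutoff].sum()          # first feature amount
            high = d.sum() - low                     # second feature amount
            feats.append((y, x, low, high))
    return feats

def is_cell_block(low, high, low_max=2000.0, high_min=150.0):
    # Cell clusters tend to carry more texture (high-frequency energy) than the
    # smooth culture-medium background; this simple rule is only illustrative.
    return high >= high_min and low <= low_max
```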

14-01-2021 publication date

Depth Image Processing Method and Apparatus, and Electronic Device

Number: US20210012516A1
Author: KANG Jian
Assignee:

The present disclosure provides a depth image processing method and apparatus, and an electronic device. The method includes: acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor; determining a scene type according to the first image and the second image; and performing a filtering process on the first image according to the scene type. 1. A method for depth image processing , comprising:acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor;determining a scene type according to the first image and the second image; andperforming a filtering process on the first image according to the scene type.2. The method according to claim 1 , wherein determining the scene type according to the first image and the second image claim 1 , comprises:identifying a region of interest from the second image;determining a depth and a confidence coefficient of the depth corresponding to each pixel unit in the region of interest according to the first image; anddetermining the scene type according to the depth and the confidence coefficient of the depth corresponding to each pixel unit in the region of interest.3. The method according to claim 2 , wherein determining the scene type according to the depth and the confidence coefficient of the depth corresponding to each pixel unit in the region of interest claim 2 , comprises:performing statistical analysis on the depths corresponding to respective pixel units in the region of interest to obtain a depth distribution, and performing statistical analysis on the confidence coefficients to obtain a confidence coefficient distribution; anddetermining the scene type according to the depth distribution and the confidence coefficient distribution;wherein the depth distribution is configured to indicate a proportion of pixel units in each depth interval, and the confidence coefficient distribution is configured to indicate a proportion of pixel units in each ...
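
A rough sketch of the idea, assuming OpenCV and NumPy: depth and confidence statistics over the region of interest yield a scene label, which then selects how strongly the depth map is smoothed. The label set, cut-off values, and the Gaussian filter choice are illustrative assumptions, not the claimed filtering.

```python
# Minimal sketch: decide a scene label from the depth and confidence statistics of
# the region of interest, then pick a filter strength for the depth map.
import cv2
import numpy as np

def scene_type(depth_roi, conf_roi, near_mm=500, far_mm=3000, conf_min=0.6):
    valid = conf_roi >= conf_min
    if valid.mean() < 0.3:
        return "low_confidence"                 # e.g. strong sunlight or glass
    d = depth_roi[valid]
    if np.median(d) < near_mm:
        return "near"
    return "far" if np.median(d) > far_mm else "mid"

def filter_depth(depth, label):
    # Heavier smoothing where the sensor is assumed to be less reliable.
    sigma = {"low_confidence": 4.0, "far": 3.0, "mid": 2.0, "near": 1.0}[label]
    return cv2.GaussianBlur(depth.astype(np.float32), (0, 0), sigma)
```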

14-01-2021 publication date

VIDEO LIGHTING USING DEPTH AND VIRTUAL LIGHTS

Number: US20210012560A1
Author: Cower Dillon, ZHOU Guangyu
Assignee: Google LLC

Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data. 1. A computer-implemented method to relight a video , the method comprising:receiving a plurality of frames of the video, wherein each frame includes depth data and color data for a plurality of pixels;segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel;setting depth value of each background pixel to a fixed depth value;applying a Gaussian filter to smooth depth values of the plurality of pixels;calculating surface normals based on the depth values of the plurality of pixels;creating a three-dimensional (3D) mesh based on the depth values of the plurality of pixels and the surface normals; andrendering a relighted frame by adding a virtual light based on the 3D mesh and the color data.2. The computer-implemented method of claim 1 , wherein segmenting the frame comprises:generating a segmentation mask based on a depth range, wherein each pixel with depth value within the depth range is classified as the foreground pixel and each pixel with depth value outside the depth range is classified as the background pixel;performing a morphological opening process to remove ...
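
A minimal sketch of the listed steps, assuming OpenCV and NumPy: background depth is clamped to a fixed value, the depth map is Gaussian-smoothed, surface normals are taken from depth gradients, and a single Lambertian virtual light modulates the colour frame. Constants and the shading model are illustrative simplifications.

```python
# Minimal sketch of the relighting steps: background pixels get a fixed depth, the
# depth map is smoothed, surface normals come from depth gradients, and a single
# Lambertian virtual light modulates the colour frame. Constants are illustrative.
import cv2
import numpy as np

def relight(color, depth, fg_mask, bg_depth=4000.0, light_dir=(0.3, -0.5, 0.8),
            strength=0.6):
    d = np.where(fg_mask, depth, bg_depth).astype(np.float32)
    d = cv2.GaussianBlur(d, (0, 0), 3.0)
    # Normals from the smoothed depth gradients (camera-space approximation).
    dzdx = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)
    n = np.dstack((-dzdx, -dzdy, np.ones_like(d)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light_dir, np.float32)
    l /= np.linalg.norm(l)
    shade = np.clip(n @ l, 0.0, 1.0)                   # Lambertian term per pixel
    out = color.astype(np.float32) * (1.0 + strength * shade[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)
```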

09-01-2020 publication date

Systems and methods to improve data clustering using a meta-clustering model

Number: US20200012886A1
Assignee: Capital One Services LLC

Systems and methods for clustering data are disclosed. For example, a system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving data from a client device and generating preliminary clustered data based on the received data, using a plurality of embedding network layers. The operations may include generating a data map based on the preliminary clustered data using a meta-clustering model. The operations may include determining a number of clusters based on the data map using the meta-clustering model and generating final clustered data based on the number of clusters using the meta-clustering model. The operations may include and transmitting the final clustered data to the client device.

09-01-2020 publication date

Systems and methods for hyperparameter tuning

Number: US20200012935A1
Assignee: Capital One Services LLC

A model optimizer is disclosed for managing training of models with automatic hyperparameter tuning. The model optimizer can perform a process including multiple steps. The steps can include receiving a model generation request, retrieving from a model storage a stored model and a stored hyperparameter value for the stored model, and provisioning computing resources with the stored model according to the stored hyperparameter value to generate a first trained model. The steps can further include provisioning the computing resources with the stored model according to a new hyperparameter value to generate a second trained model, determining a satisfaction of a termination condition, storing the second trained model and the new hyperparameter value in the model storage, and providing the second trained model in response to the model generation request.

09-01-2020 publication date

Systems and methods to identify neural network brittleness based on sample data and seed generation

Number: US20200012937A1
Assignee: Capital One Services LLC

Systems and methods for determining neural network brittleness are disclosed. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving a modeling request comprising a preliminary model and a dataset. The operations may include determining a preliminary brittleness score of the preliminary model. The operations may include identifying a reference model and determining a reference brittleness score of the reference model. The operations may include comparing the preliminary brittleness score to the reference brittleness score and generating a preferred model based on the comparison. The operations may include providing the preferred model.

09-01-2020 publication date

INSPECTION SUPPORT APPARATUS AND INSPECTION SUPPORT METHOD

Number: US20200013179A1
Assignee:

An inspection support apparatus according to an aspect includes a memory and a processor connected to the memory, and the processor is configured to obtain three-dimensional data of the structure that includes a plurality of planes formed by the frame; from the three-dimensional data, detect a plurality of first planes that include at least three points of the three-dimensional data; for each of the detected first planes, compute, from the three-dimensional data, a number of three-dimensional points at a distance that is equal to or shorter than a prescribed distance from the first plane; and according to the computed number of three-dimensional points, identify, from the detected plurality of first planes, a second plane positioned on a forefront of the frame of the structure. 1. An inspection support apparatus that supports an inspection related to a frame of building and civil-engineering structures , comprising:a memory; anda processor connected to the memory, wherein obtain three-dimensional data of the structure that includes a plurality of planes formed by the frame;', 'from the three-dimensional data, detect a plurality of first planes that include at least three points of the three-dimensional data;', 'for each of the detected first planes, compute, from the three-dimensional data, a number of three-dimensional points at a distance that is equal to or shorter than a prescribed distance from the first plane; and', 'according to the computed number of three-dimensional points, identify, from the detected plurality of first planes, a second plane positioned on a forefront of the frame of the structure., 'the processor is configured to'}2. The inspection support apparatus according to claim 1 , whereinthe processor identifies, as the second plane, a first plane having the largest computed number of three-dimensional points in the detected plurality of first planes.3. The inspection support apparatus according to claim 1 , whereinthe three-dimensional data ...
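
The plane scoring can be illustrated with a RANSAC-style sketch, assuming NumPy: candidate planes are fitted to three sampled points, each is scored by the number of cloud points within the distance threshold, and the best-supported plane is kept. Iteration count and threshold are illustrative; selecting the forefront plane among candidates is omitted.

```python
# Minimal sketch: fit candidate planes to the point cloud by sampling three points,
# score each by the number of points within a distance threshold, and keep the
# best-supported one. Iteration count and threshold are illustrative.
import numpy as np

def best_plane(points, dist_thresh=10.0, iters=500, rng=None):
    """points: (N, 3) array; returns (normal, d, inlier_count) for n.x + d = 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    best = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        count = int((np.abs(points @ n + d) <= dist_thresh).sum())
        if best is None or count > best[2]:
            best = (n, d, count)
    return best
```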

09-01-2020 publication date

CREATING MULTI-DIMENSIONAL OBJECT REPRESENTATIONS

Number: US20200013219A1
Assignee:

Objects can be rendered in three-dimensions and viewed and manipulated in an augmented reality environment. Background images are subtracted from object images from multiple viewpoints to provide baseline representations of the object. Morphological operations can be used to remove errors caused by misalignment of an object image and background image. Using two different contrast thresholds, pixels can be identified that can be said at two different confidence levels to be object pixels. An edge detection algorithm can be used to determine object contours. Low confidence pixels can be associated with the object if they can be connected to high confidence pixels without crossing an object contour. Segmentation masks can be created from high confidence pixels and properly associated low confidence pixels. Segmentation masks can be used to create a three-dimensional representation of the object. 1. A computer-implemented method comprising:under the control of one or more computer systems configured with executable instructions,capturing a background image for each of a plurality of cameras, a background image portraying a background;capturing a plurality of object images, including at least one object image for each of the plurality of cameras, an object image portraying a viewpoint of an object against the background;creating a difference image by subtracting the background image of the viewpoint from the at least one object image of the viewpoint;determining high confidence pixels, the high confidence pixels being pixels that exceed a first threshold contrast with background image;determining low confidence pixels, the low confidence pixels being pixels that exceed a second threshold contrast with the background image, the second threshold contrast being lower than the first threshold contrast;determining pixels associated with the object, including high confidence pixels and a subset of low confidence pixels; andcreating a plurality of segmentation masks ...
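
A minimal sketch of the two-threshold idea, assuming OpenCV and NumPy: the background-subtracted difference is opened to remove misalignment specks, and weakly differing (low-confidence) pixels are kept only when their connected component also contains strongly differing (high-confidence) pixels. Thresholds and the kernel are illustrative, and the contour-crossing test is replaced by plain connectivity.

```python
# Minimal sketch of the two-threshold idea: pixels that differ strongly from the
# background are kept outright; weakly differing pixels are kept only if their
# connected component also contains a strongly differing pixel.
import cv2
import numpy as np

def segmentation_mask(object_img, background_img, t_high=60, t_low=25):
    diff = cv2.absdiff(object_img, background_img)
    if diff.ndim == 3:
        diff = diff.max(axis=2)
    kernel = np.ones((3, 3), np.uint8)
    diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)   # drop misalignment specks
    high = diff >= t_high
    low = diff >= t_low
    n_labels, labels = cv2.connectedComponents(low.astype(np.uint8))
    keep = np.zeros(n_labels, bool)
    keep[np.unique(labels[high])] = True                     # components touching "high"
    keep[0] = False                                          # background label
    return keep[labels]
```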

11-01-2018 publication date

Modification of post-viewing parameters for digital images using image region or feature information

Number: US20180013950A1
Assignee: Fotonation Ireland Ltd

A method of generating one or more new digital images using an original digitally-acquired image including a selected image feature includes identifying within a digital image acquisition device one or more groups of pixels that correspond to the selected image feature based on information from one or more preview images. A portion of the original image is selected that includes the one or more groups of pixels. The technique includes automatically generating values of pixels of one or more new images based on the selected portion in a manner which includes the selected image feature within the one or more new images.

15-01-2015 publication date

Opacity Measurement Using A Global Pixel Set

Номер: US20150016717A1
Принадлежит: Microsoft Technology Licensing LLC

A computing device is described herein that is configured to select a pixel pair including a foreground pixel of an image and a background pixel of the image from a global set of pixels based at least on spatial distances from an unknown pixel and color distances from the unknown pixel. The computing device is further configured to determine an opacity measure for the unknown pixel based at least on the selected pixel pair.
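
A small sketch, in numpy, of the opacity estimate once a pixel pair has been chosen: project the unknown pixel's colour onto the line between the foreground and background colours. The pair-selection cost shown (a weighted sum of spatial and colour distances, with assumed weights) only illustrates the idea of combining both distances and is not the patent's exact criterion.

import numpy as np

def alpha_from_pair(c_unknown, c_fg, c_bg):
    # Project the unknown colour onto the foreground-background colour line.
    fb = c_fg - c_bg
    alpha = np.dot(c_unknown - c_bg, fb) / (np.dot(fb, fb) + 1e-8)
    return float(np.clip(alpha, 0.0, 1.0))

def pick_pair(p_unknown, c_unknown, fg_samples, bg_samples, w_spatial=1.0, w_color=1.0):
    # fg_samples / bg_samples: lists of (position, colour) drawn from the global pixel set.
    def cost(pos, col):
        return (w_spatial * np.linalg.norm(pos - p_unknown)
                + w_color * np.linalg.norm(col - c_unknown))
    fg = min(fg_samples, key=lambda s: cost(*s))
    bg = min(bg_samples, key=lambda s: cost(*s))
    return fg, bg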

Publication date: 15-01-2015

Method for determining the extent of a foreground object in an image

Number: US20150016724A1
Author: Noam Levy
Assignee: Qualcomm Technologies Inc

Embodiments are directed towards determining within a digital camera whether a pixel belongs to a foreground or background segment within a given image by evaluating a ratio of derivative and deviation metrics in an area around each pixel in the image, or ratios of derivative metrics across a plurality of images. For each pixel within the image, a block of pixels are examined to determine an aggregate relative derivative (ARD) in the block. The ARD is compared to a threshold value to determine whether the pixel is to be assigned in the foreground segment or the background segment. In one embodiment, a single image is used to determine the ARD and the pixel segmentation for that image. Multiple images may also be used to obtain ratios of a numerator of the ARD, useable to determine an extent of the foreground.
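
A single-image sketch of the ratio described above, assuming numpy and scipy; the block size and threshold are illustrative. For each pixel, an aggregate derivative over the surrounding block is divided by the local deviation, and the pixel is assigned to the foreground when the ratio exceeds the threshold.

import numpy as np
from scipy.ndimage import uniform_filter

def foreground_mask(image, block=7, threshold=0.5):
    img = image.astype(float)
    gy, gx = np.gradient(img)                      # simple finite-difference derivatives
    grad = np.hypot(gx, gy)

    # Block-wise aggregates around every pixel.
    mean = uniform_filter(img, block)
    mean_sq = uniform_filter(img ** 2, block)
    deviation = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-8))
    agg_grad = uniform_filter(grad, block)

    ard = agg_grad / deviation                     # aggregate relative derivative
    return ard > threshold                         # True where the pixel is treated as foreground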

Publication date: 16-01-2020

IMAGE CAPTURE AND IDENTIFICATION SYSTEM AND PROCESS

Number: US20200016003A1
Assignee:

An image-based transaction system includes a mobile device with an image sensor that is programmed to capture, via the image sensor, a video stream of a scene. The mobile device identifies a document using image characteristics from the video stream and acquires an image of at least a part of the document, and then identifies symbols in the image based on locations within the image of the document. The symbols can include alphanumeric symbols. The mobile device processes the symbols according to their type to obtain an address related to the document and the symbols and initiates a transaction associated with the identified document. 1. An image-based transaction system: digitally capturing a video stream of a scene via the image sensor;', 'identifying a document using image characteristics from the digitally captured video stream;', 'automatically acquiring an image of at least part of the document in the scene;', 'identifying symbols, including alphanumeric symbols, in the image based on locations within the image of the document;', 'processing the symbols according to their symbol type;', 'obtaining an address related to the identified document and the processed symbols; and', 'initiating a transaction associated with the identified document via a server., 'a mobile device having an image sensor, wherein the mobile device, when software in the mobile device is executed, is caused to execute operations comprising2. The system of claim 1 , further comprising the server.3. The system of claim 1 , wherein the transaction comprises an on-line transaction.4. The system of claim 1 , wherein the transaction is with an account.5. The system of claim 4 , wherein the transaction is with a bank account.6. The system of claim 4 , wherein the transaction is with at least one of the following types of accounts: an account liked to a user claim 4 , an account linked to the mobile device claim 4 , or a credit card account.7. The system of claim 1 , wherein the document identifies ...

Publication date: 03-02-2022

REAL-TIME GESTURE RECOGNITION METHOD AND APPARATUS

Number: US20220036050A1
Assignee:

Disclosed are methods, apparatus and systems for real-time gesture recognition. One exemplary method for the real-time identification of a gesture communicated by a subject includes receiving, by a first thread of the one or more multi-threaded processors, a first set of image frames associated with the gesture, the first set of image frames captured during a first time interval, performing, by the first thread, pose estimation on each frame of the first set of image frames including eliminating background information from each frame to obtain one or more areas of interest, storing information representative of the one or more areas of interest in a shared memory accessible to the one or more multi-threaded processors, and performing, by a second thread of the one or more multi-threaded processors, a gesture recognition operation on a second set of image frames associated with the gesture. 1. A method for real-time recognition , using one or more multi-threaded processors , of a gesture communicated by a subject , the method comprising:receiving, by a first thread of the one or more multi-threaded processors, a first set of image frames associated with the gesture, the first set of image frames captured during a first time interval;performing, by the first thread, pose estimation on each frame of the first set of image frames including eliminating background information from each frame to obtain one or more areas of interest;storing information representative of the one or more areas of interest in a shared memory accessible to the one or more multi-threaded processors; andperforming, by a second thread of the one or more multi-threaded processors, a gesture recognition operation on a second set of image frames associated with the gesture, the second set of image frames captured during a second time interval that is different from the first time interval, using a first processor of the one or more multi-threaded processors that implements a first three-dimensional ...
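
A toy sketch of the two-thread arrangement using Python's standard library: the first thread performs pose estimation on frames from the first interval and writes areas of interest into shared storage, while the second thread consumes them and runs gesture recognition. estimate_pose and classify_gesture are placeholder callables, and a queue stands in for the shared memory.

import queue
import threading

shared_rois = queue.Queue()   # stands in for the shared memory of areas of interest

def pose_worker(frames, estimate_pose):
    # First thread: strip background, keep only areas of interest per frame.
    for frame in frames:
        shared_rois.put(estimate_pose(frame))
    shared_rois.put(None)     # sentinel: no more frames in this interval

def gesture_worker(classify_gesture, results):
    # Second thread: run gesture recognition on the stored areas of interest.
    batch = []
    while (roi := shared_rois.get()) is not None:
        batch.append(roi)
    results.append(classify_gesture(batch))

def recognize(frames, estimate_pose, classify_gesture):
    results = []
    t1 = threading.Thread(target=pose_worker, args=(frames, estimate_pose))
    t2 = threading.Thread(target=gesture_worker, args=(classify_gesture, results))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results[0]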

Publication date: 19-01-2017

Technique for measuring overlay between layers of a multilayer structure

Number: US20170018066A1
Assignee: Applied Materials Israel Ltd

A method for determining overlay between layers of a multilayer structure may include obtaining a given image representing the multilayer structure, obtaining expected images for layers of the multilayer structure, providing a combined expected image of the multilayer structure as a combination of the expected images of said layers, performing registration of the given image against the combined expected image, and providing segmentation of the given image, thereby producing a segmented image, and maps of the layers of said multilayered structure. The method may further include determining overlay between any two selected layers of the multilayer structure by processing the maps of the two selected layers together with the expected images of said two selected layers.
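
One simple way, sketched in numpy, to express an overlay value once registration and segmentation have produced per-layer maps aligned with the expected images: compare the centroid shift of each selected layer's map against its expected image, and take the difference of the two shifts. The abstract does not specify the actual overlay computation, so this is only an illustration.

import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def overlay(map_a, expected_a, map_b, expected_b):
    # Shift of each measured layer map relative to its expected (design) image,
    # then the overlay is the difference of the two shifts.
    shift_a = centroid(map_a) - centroid(expected_a)
    shift_b = centroid(map_b) - centroid(expected_b)
    return shift_a - shift_b   # (dy, dx) overlay between the two selected layers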

Publication date: 19-01-2017

Surroundings monitoring system for working machine

Number: US20170018070A1
Assignee: HITACHI CONSTRUCTION MACHINERY CO LTD

A surroundings monitoring system for a working machine prevents the machine's own shadow from influencing the detection of objects existing around the working machine. The system includes a monocular camera that picks up an image of the surroundings of the working machine. A characteristic pattern extraction unit extracts characteristic patterns in the picked-up image based on a characteristic amount of the image. A shadow profile extraction unit extracts the profile of a region that can be regarded as a shadow of the working machine in the image, based on the characteristic amount of the image. An object detection unit then detects an obstacle existing around the working machine based on the remaining characteristic patterns, obtained by excluding from the characteristic patterns those positioned on the profile extracted by the shadow profile extraction unit.
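
A minimal sketch, assuming numpy/scipy and that feature positions and the extracted shadow profile are available as arrays, of the exclusion step: characteristic patterns lying on (or near) the shadow profile are dropped before object detection.

import numpy as np
from scipy.ndimage import binary_dilation

def filter_shadow_features(keypoints, shadow_profile, margin=2):
    # keypoints: (N, 2) integer array of (row, col) feature positions.
    # shadow_profile: boolean image marking the extracted shadow contour.
    widened = binary_dilation(shadow_profile, iterations=margin)
    keep = np.array([not widened[r, c] for r, c in keypoints])
    return keypoints[keep]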

Publication date: 03-02-2022

Method for image segmentation, method for training image segmentation model

Number: US20220036561A1
Assignee: Infervision Medical Technology Co Ltd

The method for image segmentation includes: acquiring, according to an image to be segmented including a background, a mediastinum, an artery and a vein, a first segmentation result of the mediastinum, the artery, the vein and the background in a mediastinum region of the image to be segmented; acquiring, according to the image to be segmented, a second segmentation result of a blood vessel and the background in an epitaxial region of the image to be segmented; and acquiring, according to the first segmentation result and the second segmentation result, a segmentation result of the mediastinum, the artery, the vein and the background of the image to be segmented, so that the segmentation accuracy and the segmentation efficiency of the artery and the vein may be improved.
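
A small sketch of the final merging step using numpy label maps; the class codes (0 background, 1 mediastinum, 2 artery, 3 vein, 4 undifferentiated vessel) are assumptions. Inside the mediastinum region the four-class result is kept, while vessels detected in the epitaxial region receive a generic vessel label pending artery/vein assignment.

import numpy as np

def merge_results(first_seg, second_seg, mediastinum_roi):
    merged = np.zeros_like(first_seg)
    # Inside the mediastinum region: trust the four-class result.
    merged[mediastinum_roi] = first_seg[mediastinum_roi]
    # Outside it (the epitaxial region): keep detected vessels with a generic label
    # until they can be relabelled as artery or vein, e.g. by connectivity to the
    # vessels already classified in the first result.
    outside = ~mediastinum_roi
    merged[outside] = np.where(second_seg[outside] > 0, 4, 0)
    return merged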

Publication date: 03-02-2022

DISPLAY RESPONSIVE COMMUNICATION SYSTEM AND METHOD

Number: US20220036613A1
Assignee:

A multimedia communication system and computer-implemented method for transmitting auxiliary display content to an end-user communication device to be rendered on a display device with a special effect to emphasize an image included in the auxiliary display content, comprising a processor and a transmitter. The processor can be arranged to analyze image data included in an auxiliary display content, detect an object image or a background image in the auxiliary display content based on the analysis of the image data, determine a special effect based on the analysis of the image data, and apply the special effect to the auxiliary display content to modify display properties for the auxiliary display content such that the object image is emphasized or pops-out. The transmitter can be arranged to send the auxiliary display content with modified display properties to an end-user communication device. The special effect can comprise a non-customization special effect, a simple foreground special effect or a selective foreground special effect. 1. A multimedia communication system for transmitting auxiliary display content to an end-user communication device to be rendered on a display device with a special effect to emphasize an image included in the auxiliary display content , the multimedia communication system comprising: analyze image data included in an auxiliary display content;', 'detect at least a foreground image and a background image in the auxiliary display content based on the analysis of the image data;', 'determine a special effect to emphasize the foreground image based on the analysis of the image data, wherein the special effect comprises a predetermined shape with a plurality of portions overlapping with the foreground image; and', 'apply the special effect to the auxiliary display content to modify display properties for the auxiliary display content by adding the predetermined shape such that at least one of the plurality of portions overlapping with ...

Publication date: 17-01-2019

EFFICIENT CONTOURS AND GATING

Number: US20190017921A1
Assignee:

Methods and systems for efficient contour and gating in flow cytometry are provided. Event data is compressed to reduce the number of points needed to represent polygon contours for the event data. Selection of a level within the contour then causes the generation of a gate. This allows limited resource devices, such as touchscreen wireless devices, to render and gate flow cytometry data in a resource efficient manner. 1. A computer-implemented method of polygon mesh reduction for flow cytometry events , the method comprising: receiving a requested density level for presenting data for flow cytometry events;', 'generating an initial contour diagram, wherein the initial contour diagram is defined by a plurality of polygons, said plurality of polygons representing regions corresponding to respective density levels for the data in two dimensions, and wherein each polygon is defined as at least a portion of one of a plurality of tiles that divide the initial contour diagram in two dimensions;', 'for a given tile included in the plurality of tiles of the initial contour diagram corresponding to the requested density level, identifying an adjacent tile to the given tile at the requested density level that defines a first polygon matching a second polygon defined by the given tile based at least in part on a tile code for the given tile, wherein the tile code comprises a set of elements, each element encoding a density level for a point defining the second polygon;', 'when the adjacent tile is identified, combining the first polygon and the second polygon into a larger polygon, thereby reducing the number of polygons to form a reduced contour diagram defined by less data than the data for the initial contour diagram; and', 'causing display of the reduced contour diagram for the requested density level., 'under control of one or more processors,'}2. The computer-implemented method of claim 1 , wherein generating the initial contour diagram comprises:receiving a set of data; ...

Publication date: 18-01-2018

Methods and Systems for Image Data Processing

Number: US20180018498A1
Author: Roth Wayne D.
Assignee:

Methods, storage mediums, and systems for image data processing are provided. Embodiments for the methods, storage mediums, and systems include configurations to perform one or more of the following steps: background signal measurement, particle identification using classification dye emission and cluster rejection, inter-image alignment, inter-image particle correlation, fluorescence integration of reporter emission, and image plane normalization. 17-. (canceled)8. A system , comprising: selecting a first set of one or more optical filters corresponding to a first wavelength band;', 'illuminating the particles through the first set of optical filters;', 'selecting a second set of one or more optical filters corresponding to a second wavelength band; and', 'illuminating the particles through the second set of optical filters; and, 'an imaging subsystem configured to image, at different wavelength bands, particles disposed within the imaging subsystem, wherein the imaging comprises store data acquired for multiple images of the particles, and wherein particular images of the multiple images include spots corresponding to the particles, wherein a first image of the particular images corresponds to the first set of optical filters, wherein a second image of the particular images corresponds to the second set of optical filters;', 'create a first composite image of the multiple images, wherein the first composite image includes first composite spots corresponding to the particles, the first composite spots having a first amount of misalignment from the spots in the particular images; and', 'modify coordinates of at least one of the multiple images such that a second composite image based on the modified coordinates includes second composite spots having a second, smaller amount of misalignment from the spots in the multiple images., 'a data processing subsystem configured to9. The system of claim 8 , wherein the first set of optical filters corresponds to a plurality of ...

Publication date: 18-01-2018

PARKING MANAGEMENT SYSTEM AND METHOD

Number: US20180018870A1
Assignee:

Apparatuses, methods and storage media associated with parking management are disclosed herein. In embodiments, a system may include a plurality of sensors disposed around an expanse of space to collect occupancy data of the expanse of space; and a parking management unit disposed in or adjourning the expanse of space to manage parking of vehicles in the expanse of space, based at least in part on the occupancy data collected by the plurality of sensors. The expanse of space may be a linear expanse of roadway space adjacent to a sidewalk, or an aerial expanse of surface space of a parking lot or a floor of a parking structure. Parking spaces within the expanse of space may be fixed or variably sized/typed. Other embodiments may be disclosed or claimed. 1. A system for managing parking of vehicles , comprising:a plurality of sensors disposed around an expanse of space to collect occupancy data of the expanse of space; anda parking management unit disposed at or adjoining the expanse of space, and communicatively coupled with the plurality of sensors, to manage parking of vehicles in the expanse of space, based at least in part on the occupancy data collected by the plurality of sensors.2. The system of claim 1 , wherein the expanse of space is a linear expanse of roadway space adjacent to a sidewalk claim 1 , or an aerial expanse of surface space of a parking lot or a floor of a parking structure; and wherein the parking management unit is dedicated to manage parking of vehicles within the linear expanse of roadway space adjacent to the sidewalk claim 1 , or the aerial expanse of surface space of a parking lot or a floor of a parking structure.3. The system of claim 1 , wherein the plurality of sensors comprise a plurality of cameras disposed around the expanse of space to capture images of the expanse of space; wherein the parking management unit is to determine availability of parking spaces within the expanse of space claim 1 , based at least in part on the images ...

Publication date: 21-01-2021

Systems and methods for predicting B1+ maps from magnetic resonance calibration images

Number: US20210018583A1
Assignee: GE Precision Healthcare LLC

Methods and systems are provided for predicting B1+ field maps from magnetic resonance calibration images using deep neural networks. In an exemplary embodiment, a method for magnetic resonance imaging comprises acquiring a magnetic resonance (MR) calibration image of an anatomical region, mapping the MR calibration image to a transmit field map (B1+ field map) with a trained deep neural network, acquiring a diagnostic MR image of the anatomical region, and correcting inhomogeneities of the transmit field in the diagnostic MR image with the B1+ field map. Further, methods and systems are provided for collecting and processing training data, as well as utilizing the training data to train a deep learning network to predict B1+ field maps from MR calibration images.
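
A minimal sketch of applying a predicted B1+ map, assuming the trained network is available as a callable model and that the correction is a simple division by the normalized transmit field; the patent's actual correction may differ.

import numpy as np

def correct_inhomogeneity(calibration_img, diagnostic_img, model, eps=1e-6):
    # Predict the transmit field from the calibration image with the trained network.
    b1_map = model(calibration_img)                 # same shape as the images
    b1_map = b1_map / (np.mean(b1_map) + eps)       # normalize around unit transmit field
    # First-order correction: divide out the spatial transmit-field variation.
    return diagnostic_img / np.clip(b1_map, eps, None)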

Publication date: 17-01-2019

METHODS OF YIELD ASSESSMENT WITH CROP PHOTOMETRY

Number: US20190019281A1
Assignee: PIONEER HI-BRED INTERNATIONAL, INC.

A method of evaluating one or more kernels of an ear of maize using digital imagery that includes acquiring a digital image of the one or more kernels of the ear of maize without the use of spatial reference points, processing the digital image to estimate at least one physical property of the one or more kernels of the ear of maize from the digital image, and evaluating the at least one kernel of maize using the estimate of the at least one physical property of the at least one kernel of maize. The method includes using one or more such digital images to estimate yield on a plant, management zone, field, county and country level. 1. A method of predicting yield of a corn crop , comprising:Obtaining one or more images of an ear of corn without the use of spatial reference points,Determining the number of kernels per ear, andCalculating total yield based on the image.2. The method of claim 1 , wherein an image of the corn ear is positioned within a target window on the viewing screen of an imaging device.3. The method of claim 2 , wherein the one or more images are still images from a video feed of the corn ear.4. The method of claim 2 , wherein the images are selected based on having a corn ear to target window surface area ratio between 70% to 90%.5. The method of claim 4 , wherein the corn kernels on the corn ear are segmented and counted.6. The method of claim 1 , wherein the number of kernels/ear is the median number of kernels from a set of still images.7. The method of claim 6 , wherein the image is a two dimensional image and the total number of kernels/ear is obtained by multiplying the number of kernels visible in the image by a calibration factor within the range of 2.25 to 2.50.8. The method of claim 1 , wherein total yield is calculated based on an estimate of the ears/acre claim 1 , which ears/acre estimate is based on the predicted test weight claim 1 , yield or number of ears per plant of the variety based on one or more of variety differences claim 1 ...
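
A small arithmetic sketch of the yield estimate implied by claims 7 and 8: kernels visible in a two-dimensional image are multiplied by a calibration factor in the 2.25 to 2.50 range to estimate kernels per ear, then scaled to an acre. The ears-per-acre and kernels-per-bushel constants are assumptions for illustration only.

def estimate_yield(visible_kernels, calibration_factor=2.375,
                   ears_per_acre=30000, kernels_per_bushel=90000):
    """Rough bushels-per-acre estimate from a single 2D ear image."""
    kernels_per_ear = visible_kernels * calibration_factor   # claim 7: factor in 2.25-2.50
    total_kernels = kernels_per_ear * ears_per_acre
    return total_kernels / kernels_per_bushel

# Example: 280 visible kernels gives roughly 220 bu/acre with the assumed constants.
print(round(estimate_yield(280), 1))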

Publication date: 17-01-2019

System, method and computer-accessible medium for texture analysis of hepatopancreatobiliary diseases

Number: US20190019300A1
Assignee: Memorial Sloan Kettering Cancer Center

An exemplary system, method and computer-accessible medium for determining the pixel variation of a tissue(s) in an image(s) can be provided, which can include, for example, receiving first imaging information related to the image(s), segmenting a region(s) of interest from the image(s), generating second imaging information by subtracting a structure(s) from the region(s) of interest, and determining the pixel variation based on the second imaging information. The tissue(s) can include a liver and/or a pancreas. A treatment characteristic(s) can be determined based on the pixel variation, which can include (i) a sufficiency of the tissue(s), (ii) a response to chemotherapy by the tissue(s), (iii) a recurrence of cancer in the tissue(s), or (iv) a measure of a genomic expression of the tissue(s).
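
A minimal numpy sketch of the described pipeline: restrict the image to the segmented region of interest, remove the excluded structures, and report pixel-variation statistics over the remaining tissue pixels. The mask names are illustrative.

import numpy as np

def pixel_variation(image, roi_mask, structure_mask):
    # Keep only region-of-interest pixels that are not part of excluded structures.
    tissue = image[roi_mask & ~structure_mask]
    return {
        "std": float(np.std(tissue)),          # simple pixel-variation measure
        "iqr": float(np.subtract(*np.percentile(tissue, [75, 25]))),
        "n_pixels": int(tissue.size),
    }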

Publication date: 17-01-2019

Predictive Information for Free Space Gesture Control and Communication

Number: US20190019332A1
Assignee: Leap Motion, Inc.

Free space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, i.e., humans and/or animals and/or machines). Predictive entities can be driven using motion information captured using image information or the equivalents. Predictive information can be improved applying techniques for correlating with information from observations. 129.-. (canceled)30. A method of capturing gestural motion of a control object in a three-dimensional (3D) sensory space , the method including:determining observation information characterizing a surface of a control object from at least one image of a gestural motion of the control object in a three-dimensional (3D) sensory space;constructing a 3D model to represent the control object by fitting one or more 3D subcomponents to the surface characterized; and determining an error indication between a point on the surface characterized and a corresponding point on at least one of the 3D subcomponents; and', 'responsive to the error indication adjusting the 3D model., 'improving representation of the gestural motion by the 3D model, including31. The method of claim 30 , wherein determining the error indication further includes determining whether the point on the surface and the corresponding point on the at least one of the 3D subcomponents are within a threshold distance.32. The method of claim 30 , wherein determining the error indication further includes:pairing points on the surface with points on axes of the 3D subcomponents, wherein surface points lie on vectors that are normal to the axes; anddetermining a reduced root mean squared deviation (RMSD) of distances between paired points.33. The method of claim 30 , wherein determining the error indication further includes:pairing points on the surface with points on the 3D subcomponents, wherein normal vectors to the points are parallel to each ...
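
Claim 32 describes the error indication as a root-mean-square deviation of distances between paired surface and model points. A small numpy sketch of that measure, assuming the pairing has already been established:

import numpy as np

def rmsd_error(surface_points, model_points):
    # surface_points and model_points are (N, 3) arrays of already-paired points.
    deviations = np.linalg.norm(surface_points - model_points, axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

def needs_adjustment(surface_points, model_points, threshold=0.01):
    # The 3D model is adjusted when the error indication exceeds a tolerance.
    return rmsd_error(surface_points, model_points) > threshold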

Publication date: 03-02-2022

Projection apparatus and projection method

Number: US20220038668A1
Assignee: Optoma Corp

A projection apparatus and a projection method are provided. The projection apparatus includes a projection device, an image capturing device, and a processing device. The image capturing device is configured to obtain an environmental image. The processing device is coupled to the projection device and the image capturing device, and is configured to analyze the environmental image to provide at least one effective projection region. The processing device selects one of the effective projection regions as a target projection region. The projection device is configured to project a projection image to the target projection region.

Publication date: 18-01-2018

GENERATING A DAY/NIGHT IMAGE

Number: US20180020124A1

According to one example, there is provided a method of generating a day/night image on a media. The method comprises obtaining an image to be printed as a front-to-back image, printing the obtained image on a first side of the media, and processing the obtained image by flipping the image, applying a degree of edge removal, and applying a degree of blur. The method further comprises printing the processed image on a reverse side of the media, such that the first printed image and the printed modified image are substantially aligned with one another. 1. A method of generating a day/night image on a media comprising: obtaining an image to be printed as a front-to-back image; printing the obtained image on a first side of the media; processing the obtained image by flipping the image, applying a degree of edge removal, and applying a degree of blur; and printing the processed image on a reverse side of the media, such that the first printed image and printed modified image are substantially aligned with one another. 2. The method of claim 1, further comprising applying a degree of edge removal within a range of about 1 to 5 mm. 3. The method of claim 1, further comprising applying a degree of blurring within a range of about 1 to 5 mm. 4. The method of claim 1, further comprising: determining an expected misalignment characteristic of the printing images. 5. The method of claim 4, wherein applying a degree of edge removal comprises applying a degree of edge removal based on the determined misalignment characteristic. 6. The method of claim 4, wherein applying a degree of blur comprises applying a degree of blur based on the determined misalignment characteristic. 7. The method of claim 4, wherein determining the expected misalignment characteristic comprises one or more of: determining a type of media; and determining a type of printing system. 8. A printing system comprising: a print engine to print on a media; and obtain an image to be printed; control the print engine to print ...
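
A sketch of the back-side image preparation with Pillow: mirror the image, blank a border (edge removal), and blur, using the about 1 to 5 mm ranges from the claims converted to pixels at an assumed print resolution.

from PIL import Image, ImageFilter, ImageOps

def back_side_image(front, dpi=300, edge_mm=3, blur_mm=2):
    px = lambda mm: int(round(mm / 25.4 * dpi))      # millimetres to pixels

    # Flip horizontally so the reverse-side print lines up with the front.
    img = ImageOps.mirror(front)

    # Edge removal: blank a border so front/back misalignment is not visible.
    e = px(edge_mm)
    canvas = Image.new(img.mode, img.size, "white")
    canvas.paste(img.crop((e, e, img.width - e, img.height - e)), (e, e))

    # Blur to further hide residual misalignment.
    return canvas.filter(ImageFilter.GaussianBlur(radius=px(blur_mm)))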

Publication date: 18-01-2018

METHOD AND APPARATUS FOR PROVIDING VIDEO CONFERENCING

Number: US20180020189A1
Assignee: AT&T Intellectual Property I, L.P.

A system that incorporates teachings of the subject disclosure may include, for example, capturing images that are associated with a video conference communication session, obtaining a video conference policy associated with the video conference communication session, applying object pattern recognition to the images to detect an object in the images, and retrieve first replacement image content according to the video conference policy. The images can be adjusted by replacing a first portion of the images other than the detected object with the first replacement image content to generate first adjusted video content. The first adjusted video content can be provided to the first recipient communication device via the video conference communication session. Non-adjusted video content can be provided according to the video conference policy to the second recipient communication device via the video conference communication session. Other embodiments are disclosed. 1. A server , comprising:a memory that stores computer instructions; and receiving images captured by a source communication device associated with a video conference communication session established among video conference participant devices comprising the source communication device, a first recipient communication device and a second recipient communication device;', 'obtaining a video conference policy associated with the video conference communication session, wherein the video conference policy comprises a first presentation policy to be applied to the first recipient communication device and a second presentation policy to be applied to the second recipient communication device, and wherein the first presentation policy and the second presentation policy differ from each other;', 'applying facial pattern recognition to the images to detect a facial object in the images;', 'retrieving first replacement image content and background content according to the video conference policy, wherein the first ...

Publication date: 21-01-2021

DETECTION OF FRAUDULENTLY GENERATED AND PHOTOCOPIED CREDENTIAL DOCUMENTS

Number: US20210019519A1
Assignee: Idemia Identity & Security USA LLC

A method for detecting images of fraudulently generated or photocopied secure credential documents using texture analysis includes receiving, by one or more processors, an image of a secure credential document from a computer device. The one or more processors segment the image of the secure credential document into multiple regions. For each region of the multiple regions, the one or more processors extract local high-resolution texture features from the image of the secure credential document. The one or more processors generate a score based on the local high-resolution texture features using a machine learning model. The score is indicative of a likelihood that the image of the secure credential document is fraudulently generated or photocopied. The one or more processors transmit a message to a display device indicating that the image of the secure credential document is fraudulently generated or photocopied. 1. A method comprising:receiving, by one or more processors, an image of a secure credential document from a computer device;segmenting, by the one or more processors, the image of the secure credential document into a plurality of regions;for each region of the plurality of regions, extracting, by the one or more processors, local high-resolution texture features from the image of the secure credential document;generating, by the one or more processors, a score based on the local high-resolution texture features using a machine learning model, the score indicative of a likelihood that the image of the secure credential document is fraudulently generated or photocopied; andtransmitting, by the one or more processors, a message to a display device indicating that the image of the secure credential document is fraudulently generated or photocopied.2. The method of claim 1 , further comprising removing claim 1 , by the one or more processors claim 1 , a background of the image of the secure credential document.3. The method of claim 2 , wherein the background ...
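
A rough sketch of the scoring pipeline: split a grayscale document image into a grid of regions, compute simple local texture features per region (variance and gradient energy here, as stand-ins for the features the model was actually trained on), and pass the pooled feature vector to a trained classifier assumed to expose scikit-learn's predict_proba.

import numpy as np

def region_texture_features(region):
    g = region.astype(float)
    gy, gx = np.gradient(g)
    return [g.var(), np.mean(gx ** 2 + gy ** 2)]   # simple texture descriptors

def fraud_score(document_img, model, grid=(4, 4)):
    # document_img: 2D grayscale array of the secure credential document.
    h, w = document_img.shape[:2]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = document_img[i * h // grid[0]:(i + 1) * h // grid[0],
                                  j * w // grid[1]:(j + 1) * w // grid[1]]
            feats.extend(region_texture_features(region))
    # Probability that the image is fraudulently generated or photocopied.
    return float(model.predict_proba([feats])[0, 1])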
