Found: 1162 total. Showing 100.

Publication date: 01-01-2015

METHOD AND APPARATUS FOR DETECTING SUSPICIOUS ACTIVITY USING VIDEO ANALYSIS

Number: US20150002675A1
Assignee:

A system detects a transaction outcome by obtaining video data associated with a transaction area and analyzing the video data to obtain at least one video transaction parameter concerning transactions associated with the transaction area. The video transaction parameter can be a video count of items indicated in the video data as detected by an automated item detection algorithm applied to the video data. The system obtains at least one expected transaction parameter concerning an expected transaction that occurs in the transaction area, such as a scan count of items scanned at a point of sale terminal. The system automatically compares the video transaction parameter(s) to the expected transaction parameter(s) to identify a transaction outcome that may indicate fraudulent activity such as sweethearting in a retail environment.

1. A method of detecting suspicious activity, the method comprising: receiving real-time video data originating from at least one video camera that monitors a transaction area; receiving real-time transaction data from a transaction terminal, the real-time transaction data tracking events associated with a transaction occurring in the transaction area; analyzing the real-time video data to track, in substantially real time with respect to the at least one video camera capturing images of the transaction area, items present in the transaction area that are involved in the transaction; and comparing the video analysis of the tracked items in the real-time video data to the real-time transaction data to identify a particular event that is captured by the received real-time video data but that does not have a corresponding log in the real-time transaction data indicating that the event is part of the transaction.

2. The method as in claim 1, wherein analyzing the real-time video data further comprises: analyzing images in a first region of interest in the real-time video data; analyzing images in a second region of interest in the real-time video data; ...
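The core comparison — a video-derived item count against the POS scan count — can be sketched as follows. The function names and the three-way outcome labels are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch (not the patent's implementation): flag transactions
# where the number of items tracked in video diverges from the number of
# items logged by the point-of-sale terminal. All names are hypothetical.

def transaction_outcome(video_item_count: int, scan_count: int) -> str:
    """Compare a video-derived item count against the POS scan count."""
    if video_item_count > scan_count:
        # Items passed through the transaction area without a scan log:
        # a possible "sweethearting" event.
        return "suspicious"
    if video_item_count < scan_count:
        # More scans than items seen on video: possible video miss.
        return "undercount"
    return "clean"

def unlogged_events(video_events, transaction_log):
    """Return video-tracked item events with no matching POS log entry."""
    logged = set(transaction_log)
    return [e for e in video_events if e not in logged]
```

The second function mirrors claim 1's comparison step: an event captured on video with no corresponding transaction-log entry is exactly what gets flagged.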

Publication date: 06-01-2022

DISTURBANCE DETECTION IN VIDEO COMMUNICATIONS

Number: US20220006976A1
Assignee:

Embodiments disclosed herein provide systems, methods, and computer-readable media for detecting disturbances in a media stream from a participant on a communication. In a particular embodiment, a method provides receiving biometric information indicating a motion of the participant and determining that the motion indicates a visual disturbance in a video component of the first media stream. The method further provides identifying the visual disturbance in the video component of the first media stream and removing the visual disturbance from the video component of the first media stream.

1. A method for detecting disturbances in a first media stream captured of a participant on a communication, the method comprising: receiving the first media stream, comprising an audio component and a video component; receiving biometric information captured from the participant by one or more biometric sensors; identifying one or more disturbances in the first media stream based on the audio component, the video component, and the biometric information; and providing feedback about the one or more disturbances to the participant.

2. The method of claim 1, comprising: determining a disturbance score for the participant based on the one or more disturbances.

3. The method of claim 2, wherein the feedback indicates how the participant can improve the disturbance score.

4. The method of claim 2, wherein determining the disturbance score is based on a number of the one or more disturbances and a frequency of the one or more disturbances.

5. The method of claim 2, wherein determining the disturbance score comprises: updating the disturbance score as each of the one or more disturbances is identified during the communication.

6. The method of claim 2, wherein the participant is one of a group of agents of a contact center, and wherein the disturbance score is for the group of agents.

7. The method of claim 1, wherein the feedback is provided during the communication.

8. The method of claim 1, ...
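Claims 2-5 describe a disturbance score driven by the number and frequency of disturbances, updated as each one is identified. A minimal sketch, assuming a simple count-plus-frequency formula (the patent does not specify one):

```python
# Hypothetical sketch of the disturbance-score idea: update a per-participant
# score as each disturbance is identified, weighting both how many
# disturbances occurred and how often. The formula is an assumption.

class DisturbanceScore:
    def __init__(self):
        self.timestamps = []

    def record(self, t: float) -> float:
        """Record a disturbance at time t (seconds); return the updated score."""
        self.timestamps.append(t)
        count = len(self.timestamps)
        span = max(self.timestamps) - min(self.timestamps)
        # Events per second over the observed span; a lone event counts as 1.
        frequency = count / span if span > 0 else float(count)
        # Higher score = worse: combines the count and the rate.
        return count + frequency
```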

Publication date: 02-01-2020

Method and Apparatus for Multi-Dimensional Content Search and Video Identification

Number: US20200004779A1
Assignee:

A multi-dimensional database, indexes, and operations on the multi-dimensional database are described, which include video search applications and other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval of the discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of the pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrences of similar images in the database. During a sequence query, correlation scores are calculated for single frames, frame sequences, and video clips, or for other objects or structures.

1. A computer-implemented method for storing information associated with videos in a reference database using hash values as traversal indexes, the computer-implemented method comprising, for each of multiple video sequences: obtaining, by a processor, data associated with the video sequence; determining, by the processor, a multi-dimensional vector signature of a region of a frame of the video sequence; determining, by the processor, a hash value based on the multi-dimensional vector signature; and storing the data associated with the video sequence at a leaf node of a plurality of leaf nodes, wherein the leaf node is addressable by the hash value.

2. The computer-implemented method of claim 1, wherein the region comprises multiple sectors, and wherein the multi-dimensional vector signature represents each sector.

3. The computer-implemented method of claim 2, wherein determining the multi-dimensional vector signature comprises comparing features within each sector to a threshold value to generate a value for the sector.

4. The computer-implemented method of claim 2, wherein the region is a rectangular ...
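The hash-as-traversal-index idea in claims 1-3 can be illustrated with a toy sketch: per-sector features are thresholded into a bit signature (claim 3), the signature is packed into a hash value, and the hash addresses a leaf node. The signature shape and hash packing here are assumptions for illustration only:

```python
# Toy illustration of hash values as traversal indexes (assumed shapes,
# not the patent's actual signature or hash).

def sector_signature(sectors, threshold=128):
    """Compare one feature value per sector to a threshold -> bit vector."""
    return tuple(1 if s >= threshold else 0 for s in sectors)

def leaf_hash(signature) -> int:
    """Pack the multi-dimensional bit signature into a stable hash value."""
    h = 0
    for bit in signature:
        h = (h << 1) | bit
    return h

def store(db: dict, sectors, video_id: str) -> int:
    """Store video data at the leaf node addressed by the hash value."""
    key = leaf_hash(sector_signature(sectors))
    db.setdefault(key, []).append(video_id)  # leaf node = bucket list
    return key
```

At query time the same signature-then-hash path traverses directly to the candidate leaf, so lookup cost does not grow with database size.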

Publication date: 02-01-2020

Method and Apparatus for Multi-Dimensional Content Search and Video Identification

Number: US20200004780A1
Assignee:

A multi-dimensional database, indexes, and operations on the multi-dimensional database are described, which include video search applications and other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval of the discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of the pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrences of similar images in the database. During a sequence query, correlation scores are calculated for single frames, frame sequences, and video clips, or for other objects or structures.

1. A computer-implemented method for storing information associated with a video sequence in a reference database, the computer-implemented method comprising: obtaining, by a processor, data associated with the video sequence; and, for each frame of a set of frames of the video sequence: determining, by the processor, respective global features of a global region of interest; determining, by the processor, respective local features of a respective keypoint within the global region of interest, wherein, for multiple frames of the set of frames of the video sequence, the respective keypoints correspond to different respective locations within the global region of interest; generating, by the processor, a respective signature using both the global features for the frame and the local features for the frame; determining, by the processor, a respective hash value for the frame based on the signature for the frame; and storing, by the processor, the data associated with the video sequence in the reference database in association with the hash value for the frame.

2. The computer-implemented method of claim 1, wherein ...

Publication date: 02-01-2020

Method and Apparatus for Multi-Dimensional Content Search and Video Identification

Number: US20200004781A1
Assignee:

A multi-dimensional database, indexes, and operations on the multi-dimensional database are described, which include video search applications and other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval of the discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of the pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrences of similar images in the database. During a sequence query, correlation scores are calculated for single frames, frame sequences, and video clips, or for other objects or structures.

1. A computer-implemented method comprising: obtaining, by a processor, a first query index and a second query index that are derived from different respective features of a frame of a query video; determining, by the processor, that a distance measure between the first query index and a candidate database index of a reference database satisfies a threshold condition, wherein the candidate database index corresponds to a frame of an original video; determining, by the processor, a correlation score for the frame of the query video and the frame of the original video based on a comparison of the second query index and an additional candidate database index corresponding to the frame of the original video; based at least on the correlation score, determining, by the processor, a video sequence likelihood indicative of a confidence of match between the query video and the original video; and, based on the video sequence likelihood, providing, by the processor, a results list that includes a name of the original video.

2. The computer-implemented method of claim 1, wherein the second query index corresponds to a texture signature of a ...

Publication date: 02-01-2020

Method and Apparatus for Multi-Dimensional Content Search and Video Identification

Number: US20200004782A1
Assignee: Gracenote Inc

A multi-dimensional database, indexes, and operations on the multi-dimensional database are described, which include video search applications and other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval of the discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of the pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrences of similar images in the database. During a sequence query, correlation scores are calculated for single frames, frame sequences, and video clips, or for other objects or structures.

Publication date: 07-01-2016

Method and System for Processing Motion Event Notifications

Number: US20160005281A1
Assignee: Google LLC

The disclosed embodiments include a system for processing motion events. The system obtains a video stream from a camera, the video stream corresponding to a field of view of the camera and obtains identification of a spatial zone, the spatial zone corresponding to at least a portion of the field of view of the camera. For each motion event detected in the video stream: (1) the system determines whether the motion event involves the spatial zone; and (2), in accordance with a determination that the motion event involves the spatial zone, the system suppresses a first user notification for the motion event.
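The zone test can be sketched as a simple bounding-box intersection; the patent does not specify the geometry, so the axis-aligned overlap check below is an assumption:

```python
# Sketch of zone-based notification suppression: a motion event "involves"
# a spatial zone if its bounding box overlaps the zone, in which case the
# first user notification is suppressed. Box format is an assumption.

def overlaps(a, b):
    """Axis-aligned boxes as (x1, y1, x2, y2); True if interiors intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def should_notify(event_box, zone_box) -> bool:
    """Suppress the notification when the motion event involves the zone."""
    return not overlaps(event_box, zone_box)
```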

Publication date: 07-01-2021

ASSESSING VIDEO STREAM QUALITY

Number: US20210004600A1
Assignee:

The present invention extends to methods, systems, and computer program products for assessing video stream quality. The quality of a video stream is classified into one of a plurality of quality classifications, including a low quality classification and at least one other, higher quality classification. A plurality of video quality thresholds are accessed. If any one of the plurality of video quality thresholds is not satisfied, the video stream is classified as low quality. The characteristics of a video stream frame are computed to satisfy each of the plurality of quality thresholds. A video stream technical score is computed from content of the frame based on, and subsequent to, satisfaction of the plurality of quality thresholds. The video stream is classified as a specified quality, from among the plurality of quality classifications, based on the video stream technical score.

1. A method comprising: accessing a frame from a video stream; and classifying the quality of the video stream into one of a plurality of quality classifications, the plurality of quality classifications including a low quality classification and at least one other classification indicative of increased quality relative to the low quality classification, including: accessing a plurality of video quality thresholds, any one of the plurality of video quality thresholds, if not satisfied, expressly indicating that the video stream is to be classified as a low quality video stream; computing that the characteristics of the frame satisfy each of the plurality of quality thresholds; computing a video stream technical score from content of the frame based on, and subsequent to, satisfaction of the plurality of quality thresholds; and classifying the video stream as a specified quality, from among the plurality of quality classifications, based on the video stream technical score.

2. The method of claim 1, wherein classifying the video stream as a specified quality comprises classifying the video ...
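The two-stage logic — hard thresholds gate a score-based classification — can be sketched as follows. The metric names, score formula, and class bands are illustrative assumptions:

```python
# Sketch of the two-stage classification: any failed threshold expressly
# forces "low"; only after every threshold passes is a technical score
# computed and used to pick among the higher classes.

def classify(frame_metrics: dict, thresholds: dict) -> str:
    # Stage 1: any unsatisfied threshold -> low quality, no score computed.
    for name, minimum in thresholds.items():
        if frame_metrics.get(name, 0.0) < minimum:
            return "low"
    # Stage 2: technical score from frame content, subsequent to the gate.
    score = sum(frame_metrics[name] for name in thresholds) / len(thresholds)
    return "high" if score >= 0.8 else "medium"
```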

Publication date: 04-01-2018

MONITORING

Number: US20180005024A1
Assignee: NOKIA TECHNOLOGIES OY

A method comprising: automatically processing recorded first sensor data from a scene to recognise automatically a first user input from user action in the scene; in response to recognition of the first user input, automatically entering a learning state to enable: automatic processing of the first sensor data from the scene to capture an ad-hoc sequence of spatial events in the scene subsequent to the first user input, and automatic processing of subsequently recorded second sensor data from the scene, different to the first sensor data of the scene, to recognise automatically a sequence of spatial events in the subsequently recorded second sensor data corresponding to the captured sequence of spatial events.

1. A method comprising: automatically processing recorded first sensor data from a scene to recognise automatically a first user input from user action in the scene; and, in response to recognition of the first user input, automatically entering a learning state to enable: automatic processing of the first sensor data from the scene to capture an ad-hoc sequence of spatial events from the scene subsequent to the first user input, and automatic processing of subsequently recorded second sensor data from the scene, different to the first sensor data of the scene, to recognise automatically a sequence of spatial events in the subsequently recorded second sensor data corresponding to the captured sequence of spatial events.

2. The method as claimed in claim 1, wherein the first sensor data records a gesture user input in the first video.

3. The method as claimed in claim 1, wherein the first sensor data from the scene comprises at least a first video of the scene, the method comprising: automatically processing the recorded first video of the scene to recognise automatically the first user input from user movement in the scene.

4. The method as claimed in claim 1, wherein the first user input is a time-evolving, scene-independent sequence defined by motion of a ...

Publication date: 02-01-2020

LEFT OBJECT DETECTING SYSTEM

Number: US20200005044A1
Author: Nakamura Kohta
Assignee:

A system according to an embodiment includes an analyzing device that includes a first database to store image analysis information identifying a person and an object, and determines that the object has been left behind by using the image analysis information and performing image analysis on video footage captured by cameras installed in locations, associating the identified person with an object carried by the person, and comparing, at timings, the video footage captured by the cameras; and a communication device that includes a second database to store, in association with each other, usage information and ID information of each of users, and transmit, when the analyzing device has determined that the object has been left behind, an alert to the user or a predetermined destination of notification associated with the object left behind.

1. A left object detecting system, comprising: an image analyzing device that includes a first database configured to store at least image analysis information identifying a person and an object in an image, and determines that the object has been left behind by using the image analysis information and performing image analysis on video footage captured by cameras installed in a plurality of locations, associating the identified person with an object carried by the person, and comparing, at a plurality of timings, the video footage captured by the cameras; and a communication device that includes a second database configured to store, in association with each other, usage information and ID information of each of a plurality of users, and transmit, when the image analyzing device has determined that the object has been left behind, an alert to the user or a predetermined destination of notification associated with the object left behind.

2. The left object detecting system according to claim 1, wherein the image analyzing device further comprises: a video footage information gathering unit configured to gather a plurality of the ...
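The left-behind determination — associating a person with a carried object and comparing footage at multiple timings — might be sketched like this. The observation format and the rule that an object is "left" once its associated person has departed are assumptions:

```python
# Hypothetical sketch of the person-object association and left-object test.
# Each timing is an observation dict: {"persons": set of person IDs,
# "objects": {object ID: associated person ID}}.

def left_objects(earlier: dict, later: dict) -> list:
    """Objects still visible at the later timing whose associated person
    was present earlier but is gone now -> considered left behind."""
    left = []
    for obj, owner in later["objects"].items():
        if owner not in later["persons"] and owner in earlier["persons"]:
            left.append(obj)
    return left
```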

Publication date: 02-01-2020

METHOD AND SYSTEM OF EVENT-DRIVEN OBJECT SEGMENTATION FOR IMAGE PROCESSING

Number: US20200005468A1
Assignee: Intel Corporation

Methods, systems, and articles herein are directed to event-driven object segmentation to track events rather than tracking all pixel locations in an image.

1. A computer-implemented method of event-driven object segmentation for image processing, comprising: obtaining clusters of events indicating motion of image content between frames of at least one video sequence and at individual pixel locations; forming cluster groups depending, at least in part, on the position of the clusters relative to each other on a grid of pixel locations forming the frames, and without tracking all pixel locations forming the frames; generating regions-of-interest comprising using the cluster groups; and providing the regions-of-interest to applications associated with object segmentation.

2. The method of claim 1, wherein each event indicates a change in image data at a pixel location that meets a criterion deemed to indicate sufficient motion of image content.

3. The method of claim 1, wherein the clusters are formed by listing an anchor pixel location, a timestamp, and a size of the cluster, without listing all pixel locations on an image and without listing all pixels in the cluster.

4. The method of claim 1, wherein forming cluster groups comprises listing clusters in an order of anchor coordinates of the clusters on a reverse mapping table.

5. The method of claim 4, wherein the reverse mapping table lists an anchor location and a size of the cluster without listing any more parameters of the cluster.

6. The method of claim 1, wherein forming cluster groups comprises determining whether neighbor clusters adjacent to a current cluster meet a criterion.

7. The method of claim 1, comprising placing a cluster group on a patch array, and generating representative pixel values that indicate the number of events near a current pixel.

8. The method of claim 7, wherein at least some of the representative pixel values factor two or more adjacent clusters in the cluster group.

9. The method of claim 1, wherein forming cluster groups comprises using ...
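Grouping clusters by their relative positions on the pixel grid, using only anchor coordinates rather than all pixel locations, can be sketched with a small union-find. The adjacency radius is an illustrative assumption:

```python
# Sketch of grouping event clusters by grid adjacency without tracking all
# pixel locations: each cluster is represented only by its anchor
# coordinate; anchors within `radius` of each other merge into one group.

def group_clusters(anchors, radius=1):
    """anchors: list of (x, y) cluster anchors -> list of anchor groups."""
    parent = list(range(len(anchors)))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(anchors):
        for j, (xj, yj) in enumerate(anchors[:i]):
            if abs(xi - xj) <= radius and abs(yi - yj) <= radius:
                parent[find(i)] = find(j)  # merge adjacent clusters

    groups = {}
    for i in range(len(anchors)):
        groups.setdefault(find(i), []).append(anchors[i])
    return list(groups.values())
```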

Publication date: 02-01-2020

OPTIMIZED NEURAL NETWORK STRUCTURE

Number: US20200005482A1
Assignee:

A method for performing real-time recognition of objects includes receiving an input video stream from a camera, pre-processing a current frame of the input video stream using one or more pre-processing layers of a neural network structure, detecting if there is an object in the current pre-processed frame using an auxiliary branch of the neural network structure, recognizing one or more objects in the current pre-processed frame using a primary branch of the neural network structure if an object is detected in the current pre-processed frame, and displaying the one or more recognized objects of the current frame in one or more bounding boxes.

1. A method for performing real-time recognition of objects, the method comprising: receiving an input video stream from a camera; pre-processing a current frame of the input video stream using one or more pre-processing layers of a neural network structure; detecting if there is an object in the current pre-processed frame using an auxiliary branch of the neural network structure; recognizing one or more objects in the current pre-processed frame using a primary branch of the neural network structure, if an object is detected in the current pre-processed frame; and displaying the one or more recognized objects of the current frame in one or more bounding boxes.

2. The method of claim 1, wherein the camera is selected from at least one of: a traffic camera, a home doorbell camera, a body camera for soldiers or law enforcement, and a camera on an unmanned aerial vehicle (UAV).

3. The method of further comprising pre-processing a next frame if an object is not detected in the current pre-processed frame.

4. The method of claim 1, wherein the pre-processing of the current frame of the input video stream comprises detecting primitive features and aggregating detected features into primitive shapes or parts of shapes.

5. The method of claim 1, wherein the auxiliary branch generates a binary output and provides ...
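The auxiliary/primary branch split is essentially a cheap binary gate in front of an expensive recognizer. A sketch with stand-in stub functions (not a real network):

```python
# Sketch of the auxiliary-branch gate: a cheap binary "is there an object?"
# branch decides whether the expensive primary recognition branch runs at
# all. The branch callables here are hypothetical stubs.

def process_frame(frame, preprocess, aux_branch, primary_branch):
    """Return recognized objects for the frame, or [] when the auxiliary
    branch sees nothing (the primary branch is then skipped entirely)."""
    features = preprocess(frame)          # shared pre-processing layers
    if not aux_branch(features):          # binary output: object present?
        return []                         # move on to the next frame
    return primary_branch(features)       # full recognition + boxes
```

On mostly empty streams (e.g. a doorbell camera at night) this saves the primary branch's cost on almost every frame.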

Publication date: 04-01-2018

SITUATION IDENTIFICATION METHOD, SITUATION IDENTIFICATION DEVICE, AND STORAGE MEDIUM

Number: US20180005510A1
Assignee: FUJITSU LIMITED

A situation identification method includes acquiring a plurality of images; identifying, for each of the plurality of images, a first area including a bed area where a place to sleep appears in an image, and a second area where an area in a predetermined range around the place to sleep appears in the image; detecting a state of a subject to be monitored for each of the plurality of images based on a result of detection of a head area indicating an area of a head of the subject in the first area and a result of detection of a living object in the second area; when the state of the subject changes from a first state to a second state, identifying a situation of the subject based on a combination of the first state and the second state; and outputting information that indicates the identified situation.

1. A situation identification method executed by a processor included in a situation identification device, the situation identification method comprising: acquiring a plurality of images; identifying, for each of the plurality of images, a first area including a bed area where a place to sleep appears in an image, and a second area where an area in a predetermined range around the place to sleep appears in the image; detecting a state of a subject to be monitored for each of the plurality of images based on a result of detection of a head area indicating an area of a head of the subject to be monitored in the first area and a result of detection of a living object in the second area; when the state of the subject to be monitored changes from a first state to a second state, identifying a situation of the subject to be monitored based on a combination of the first state and the second state; and outputting information that indicates the identified situation.

2. The situation identification method according to claim 1, wherein the result of detection of the head area indicates presence or absence of the head area in the first area, and the result of detection of the living ...

Publication date: 03-01-2019

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION

Number: US20190005333A1
Assignee:

Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated including one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving an indication of an event of interest within video from a user during the capture of the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities.

1. A method for identifying video scenes, the method comprising: accessing a video of an activity, the activity including an event at a moment within the video; obtaining an identification of a type of the activity; obtaining an identification of a type of the event; and identifying a scene of the video for the event, the scene including a length, wherein: the length is a first length based on the type of the activity being of a first activity type and the type of the event being of a first event type; the length is a second length based on the type of the activity being of the first activity type and the type of the event being of a second event type; the length is a third length based on the type of the activity being of a second activity type and the type of the event being of the first event type; and the length is a fourth length based on the type of the activity being of the second activity type and the type of the event being of the second event type, wherein: the first length is different from the second length, the third length, and the fourth length; the second length is different from the third length and the fourth length; and the third length is different from the fourth length; and ...
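The four-way length rule in claim 1 reduces to a lookup keyed on (activity type, event type). A sketch with hypothetical activity names and lengths (in seconds) chosen to satisfy the claim's distinctness constraints:

```python
# Hypothetical mapping: scene length depends jointly on activity type and
# event type, with all four lengths pairwise distinct per the claim.

SCENE_LENGTHS = {
    ("surfing", "jump"):   8.0,  # first activity type,  first event type
    ("surfing", "crash"):  5.0,  # first activity type,  second event type
    ("skiing",  "jump"):  12.0,  # second activity type, first event type
    ("skiing",  "crash"):  6.0,  # second activity type, second event type
}

def scene_length(activity: str, event: str) -> float:
    """Length of the scene identified for an event of this kind."""
    return SCENE_LENGTHS[(activity, event)]
```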

Publication date: 05-01-2017

COGNITIVE RECORDING AND SHARING

Number: US20170006214A1
Assignee:

A system, method, and computer program product for cognitive recording and sharing of live events. The system includes: a sensing and transmitting device that can sense the biometric signatures of an individual; a processing unit that analyses the sensed signal and initiates a set of actions; a recording device or the like to record the event; and a networked sharing device configured to subsequently share recorded event content. The system further identifies individuals' pre-cognitive inputs and additional external and internal factor input signals that are precursors to cognitive affirmation of an emotional response. These inputs will be identified, correlated, and used in training the system for subsequent identification and correlation between input factors and resulting emotional state. External factors may include: recognized emotional states, biometric inputs, and/or precognition inputs of other individuals in proximity to the subject individual. Other factors may include an individual's context.

1. An apparatus for cognitive recording and sharing of live events, comprising: a processing unit; a recording device to record a live event; one or more sensors, each configured for obtaining biometric signal data from an individual; and a transmitting device for communicating the one or more biometric signals for receipt at the processing unit, the processing unit configured to: obtain a biometric signature of the individual based on received biometric signal data; obtain a signal representing one or more of: a recognized emotional state of, a biometric signature of, and a determined precognition input of one or more other individuals in proximity to the individual; determine an individual's emotional state based on the signature in combination with the obtained signals of said one or more other individuals; and record the live event by said recording device in response to said determined emotional state.

2. The apparatus as claimed in claim 1, configured as ...

Publication date: 07-01-2021

Method and system for synchronizing procedure videos for comparative learning

Number: US20210006752A1
Assignee: Verb Surgical Inc

Embodiments described herein provide various examples of synchronizing the playback of a recorded video of a surgical procedure with a live video feed of a user performing the surgical procedure. In one aspect, a system can simultaneously receive a recorded video of a surgical procedure and a live video feed of a user performing the surgical procedure in a training session. More specifically, the recorded video is shown to the user as a training reference, and the surgical procedure includes a set of surgical tasks. The system next simultaneously monitors the playback of a current surgical task in the set of surgical tasks in the recorded video and the live video feed depicting the user performing the current surgical task. Next, the system detects that the end of the current surgical task has been reached during the playback of the recorded video. In response to determining that the user has not completed the current surgical task in the live video feed, the system pauses the playback of the recorded video while awaiting the user to complete the current surgical task.

Publication date: 27-01-2022

Video processing for embedded information card localization and content extraction

Number: US20220027631A1
Assignee: STATS LLC

Metadata for one or more highlights of a video stream may be extracted from one or more card images embedded in the video stream. The highlights may be segments of the video stream, such as a broadcast of a sporting event, that are of particular interest. According to one method, video frames of the video stream are stored. One or more information cards embedded in a decoded video frame may be detected by analyzing one or more predetermined video frame regions. Image segmentation, edge detection, and/or closed contour identification may then be performed on identified video frame region(s). Further processing may include obtaining a minimum rectangular perimeter area enclosing all remaining segments, which may then be further processed to determine precise boundaries of information card(s). The card image(s) may be analyzed to obtain metadata, which may be stored in association with at least one of the video frames.
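One concrete step named in the abstract — the minimum rectangular perimeter enclosing all remaining segments — is straightforward to sketch, assuming segments are given as bounding boxes:

```python
# Sketch of one step of information-card localization: the smallest
# axis-aligned rectangle enclosing every remaining segment after image
# segmentation. Segments as (x1, y1, x2, y2) boxes is an assumption.

def min_enclosing_rect(segments):
    """Minimum rectangular perimeter area enclosing all segment boxes."""
    x1 = min(s[0] for s in segments)
    y1 = min(s[1] for s in segments)
    x2 = max(s[2] for s in segments)
    y2 = max(s[3] for s in segments)
    return (x1, y1, x2, y2)
```

That rectangle is then refined (per the abstract) to find the precise card boundaries before text extraction.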

Publication date: 12-01-2017

REMOTELY CONTROLLED ROBOTIC SENSOR BALL

Number: US20170010607A1
Author: Barlas Omar

A remotely controlled robotic sensor ball and method of operation thereof. The robotic sensor ball includes an outer shell forming a ball, control circuitry positioned within the outer shell, a camera operably connected to the control circuitry, a propulsion system inside the outer shell, and one or more connectors. The control circuitry includes at least one processor, memory, and a wireless communication interface. The camera is configured to generate video signals of a view exterior to the outer shell. The propulsion system configured to cause the outer shell to rotate in response to instructions received via the wireless communication interface. The one or more connectors are configured to operably connect one or more sensors to the control circuitry. The one or more sensors are connectable in a modular manner. 1. An apparatus comprising:an outer shell forming a ball;control circuitry positioned within the outer shell, the control circuitry comprising at least one processor, memory, and a wireless communication interface;a camera operably connected to the control circuitry, the camera configured to generate video signals of a view exterior to the outer shell;a propulsion system inside the outer shell, the propulsion system configured to cause the outer shell to rotate in response to instructions received via the wireless communication interface; andone or more connectors configured to operably connect one or more sensors to the control circuitry, the one or more sensors connectable in a modular manner.2. The apparatus of claim 1 , further comprising:at least one motion sensor positioned within the outer shell and operably connected to the control circuitry via at least one of the one or more connectors,wherein the control circuitry is configured to modify operation of the camera based on an output of the at least one motion sensor.3. The apparatus of claim 2 , further comprising:a housing within the outer shell, the housing containing at least the control ...

Publication date: 14-01-2016

METHOD AND APPARATUS FOR IDENTIFYING SALIENT EVENTS BY ANALYZING SALIENT VIDEO SEGMENTS IDENTIFIED BY SENSOR INFORMATION

Number: US20160012293A1

A method, apparatus and computer program product are provided to identify one or more salient events from an analysis of one or more images in an efficient and accurate manner. In this regard, the method, apparatus and computer program product may limit the visual analysis of the images to only a subset of the images that are determined to be potentially relevant based upon sensor information provided by one or more sensors carried by the image capturing device. In the context of a method, one or more images that are captured by an image capturing device are identified to be a salient video segment based upon sensor information provided by one or more sensors carried by the image capturing device. The method also includes identifying one or more salient events based upon an analysis of the one or more images of the salient video segment. 1. A method comprising:identifying, with a processor, one or more images captured by an image capturing device to be a salient video segment based upon sensor information provided by one or more sensors carried by the image capturing device; andidentifying one or more salient events based upon an analysis of the one or more images of the salient video segment.2. A method according to further comprising:receiving an indication of a salient orientation; anddetermining an orientation of the image capturing device based upon the sensor information, andwherein the method identifies one or more images to be a salient video segment based upon a relationship of the orientation of the image capturing device to the salient orientation.3. A method according to wherein receiving the indication of the salient orientation comprises receiving user input indicative of the salient orientation.4. A method according to wherein identifying one or more images to be the salient video segment comprises identifying one or more images to be the salient video segment based upon satisfaction of a predefined angular distance threshold between the salient ...
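The angular-distance test from claim 4 can be sketched directly: frames qualify as part of a salient segment when the device orientation reported by the sensors is within a predefined angular threshold of the salient orientation. Degree units and function names are illustrative assumptions.

```python
# Identify salient frames by comparing sensor-reported orientation
# against a salient orientation, within an angular threshold.

def angular_distance(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def salient_frames(orientations, salient_orientation, threshold_deg):
    """Return indices of frames whose orientation is within the threshold."""
    return [i for i, o in enumerate(orientations)
            if angular_distance(o, salient_orientation) <= threshold_deg]
```

Only the frames returned here would then undergo the (more expensive) visual analysis for salient events.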

Publication date: 03-02-2022

STROKE DETECTION AND MITIGATION

Number: US20220031162A1

A method and system for detecting a possible stroke in a person through the analysis of voice data and image data regarding the gait of the user, facial features and routines, and corroborating any anomalies in one set of data against anomalies in another set of data for a related time frame. 1. A system for detecting a stroke in a user and mitigating the impact of a stroke, comprising a video camera for monitoring any two or more of a user's gait, a user's facial features, and a user's routines (collectively referred to herein as image parameters), a microphone for monitoring a user's voice parameters, wherein the video camera and microphone are collectively referred to as sensors, a processor, and a memory configured with machine-readable code defining an algorithm for analyzing image data from the video camera and voice data from the microphone to identify anomalies in any of the image parameters or voice parameters indicative of a possible stroke, and for validating any anomaly in the data from the video camera or microphone by comparing said anomaly with any anomaly detected in any of the other parameters, to define a stroke event. 2. The system of claim 1, wherein the anomalies in the user's gait include one or more of: tumbling, instability, wobbling, and problems with coordination. 3. The system of claim 1, wherein anomalies in the user's facial features include facial muscle weakness or partial paralysis/drooping of parts of the user's face. 4. The system of claim 1, wherein anomalies in the voice data of the user include one or more of: difficulty speaking, slurred speech, speech loss, and the absence of a response or non-sensical response when prompted via the speaker. 5. The system of claim 1, further comprising a communications system for notifying one or more predefined persons in the event of a stroke event. 6. The system of claim 1, further comprising a storage medium for storing speech ...
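The corroboration rule, where an anomaly in one modality only defines a stroke event if another modality shows an anomaly in a related time frame, can be sketched as a time-window match. Timestamps are in seconds, and the 30-second window is an assumed parameter.

```python
# Cross-validate anomalies from one sensor stream (e.g. voice) against
# anomalies from another (e.g. gait or facial features) that occur
# within a related time frame.

def corroborated_events(anomalies_a, anomalies_b, window=30.0):
    """Return timestamps from anomalies_a that have a matching anomaly
    in anomalies_b within +/- window seconds."""
    return [t for t in anomalies_a
            if any(abs(t - u) <= window for u in anomalies_b)]
```

Only corroborated timestamps would be escalated as stroke events, reducing false alarms from any single sensor.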

Publication date: 11-01-2018

Methods and Systems for Detecting Persons in a Smart Home Environment

Number: US20180012077A1

The various implementations described herein include methods, devices, and systems for detecting motion and persons. In one aspect, a method is performed at a smart home system that includes a video camera, a server system, and a client device. The video camera captures video and audio, and wirelessly communicates, via the server system, the captured data to the client device. The server system: (1) receives and stores the captured data from the video camera; (2) determines whether an event has occurred, including detected motion; (3) in accordance with a determination that the event has occurred, identifies video and audio corresponding to the event; and (4) classifies the event. The client device receives information indicative of the identified events, displays a user interface for reviewing the video and audio stored by the remote server system, and displays the at least one classification for the event. 1. A smart home system , comprising: an image sensor having a field of view and being configured to capture video within the field of view;', 'a microphone configured to capture audio within proximity of the video camera; and', 'a wireless transceiver configured to wirelessly communicate, via a remote server system, the captured video and audio to a remote client device;, 'a video camera for use in a smart home environment, the video camera including receive and store the captured video and audio from the video camera;', 'determine whether an event has occurred, including determining whether motion is detected in the received video;', 'in accordance with a determination that the event has occurred, identify video and audio corresponding to the event; and', 'classify the event into at least one of a plurality of classifications, the classifications including motion detection and person detection; and, 'the remote server system including processors and memory storing first programs executable by the processors, the first programs including a first application ...

Publication date: 10-01-2019

USER-MACHINE INTERACTION METHOD AND SYSTEM BASED ON FEEDBACK SIGNALS

Number: US20190011992A1
Authors: Zhang Junfeng, ZHAO Lili

A user-machine interaction method and apparatus are disclosed. According to certain embodiments, the method may include obtaining image data. The method may also include analyzing the image data by the machine to detect occurrence of events. The method may also include generating a first signal indicating detection of a first event. The method may further include performing an operation upon detection of a first occurrence of a second event after generation of the first signal. 1. A method for machine processing user commands , comprising:obtaining image data;analyzing the image data by the machine to detect occurrence of events;generating a first signal indicating detection of a first event; andperforming an operation upon detection of a first occurrence of a second event after generation of the first signal.2. The method of claim 1 , wherein analyzing the image data comprises:performing a comparison of the image data to reference data representing the first and second events; anddetecting the first and second events based on the comparison.3. The method of claim 1 , further comprising:upon detection of the first occurrence of the second event after the first signal is generated, generating a second signal indicating detection of the second event.4. The method of claim 1 , further comprising:performing the operation only upon detection of the second event within a predetermined amount of time after generation of the first signal.5. The method of claim 1 , wherein detection of a first occurrence of a second event after generation of the first signal comprises:detecting the first occurrence of a second event after generation of the first signal, based on a value of a monitoring tag.6. The method of claim 5 , further comprising:upon detection of the first event, setting the monitoring tag to a first value.7. The method of claim 6 , further comprising:when neither the first event nor the second event is detected within a predetermined amount of time after the first ...
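The two-stage trigger of claims 4-7, where an operation fires only when the second event is detected within a predetermined time after the first event set a monitoring tag, can be sketched as a small state machine. Event names, timestamps and the timeout value are illustrative assumptions.

```python
# Two-event trigger: the first event sets a monitoring tag; the
# operation fires only if the second event arrives before the tag
# expires.

class TwoEventTrigger:
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.tag_time = None       # monitoring tag: when event 1 fired

    def observe(self, event, t):
        """Feed one detected event; return True when the operation fires."""
        if event == "first":
            self.tag_time = t      # set the monitoring tag
            return False
        if event == "second" and self.tag_time is not None:
            if t - self.tag_time <= self.timeout:
                self.tag_time = None
                return True
            self.tag_time = None   # tag expired: reset without firing
        return False
```

This mirrors the claimed behavior that a second event alone, or one arriving too late, does not trigger the operation.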

Publication date: 11-01-2018

Methods and Systems for Providing Event Alerts

Number: US20180012462A1

The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) obtaining a first category for a first motion event, the first motion event corresponding to a first plurality of video frames; (2) sending a first alert indicative of the first category to a user; (3) after sending the first alert, obtaining a second category for a second motion event corresponding to a second plurality of video frames; (4) in accordance with a determination that the second category is the same as the first category, determining whether a predetermined amount of time has elapsed since the sending of the first alert; (5) if the predetermined amount of time has elapsed, sending a second alert indicative of the second category to the user; and (6) if the predetermined amount of time has not elapsed, forgoing sending the second alert. 1. A method , comprising: obtaining a first category of a plurality of motion categories for a first motion event, the first motion event corresponding to a first plurality of video frames from a camera;', 'sending a first alert indicative of the first category to a user associated with the camera;', 'after sending the first alert, obtaining a second category of the plurality of motion categories for a second motion event, the second motion event corresponding to a second plurality of video frames from the camera;', 'in accordance with a determination that the second category is the same as the first category, determining whether a predetermined amount of time has elapsed since the sending of the first alert;', 'in accordance with a determination that the predetermined amount of time has elapsed, sending a second alert indicative of the second category to the user; and', 'in accordance with a determination that the predetermined amount of time has not elapsed, forgoing sending the second alert., 'at a computing system having one or more processors and memory2. 
The method of claim ...

Publication date: 11-01-2018

Methods and Systems for Person Detection in a Video Feed

Number: US20180012463A1

The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) obtaining a video feed, the video feed comprising a plurality of images; and, (2) for each image, analyzing the image to determine whether the image includes a person, the analyzing including: (a) determining that the image includes a potential instance of a person by analyzing the image at a first resolution; (b) in accordance with the determination that the image includes the potential instance, denoting a region around the potential instance; (c) determining whether the region includes an instance of the person by analyzing the region at a second resolution, greater than the first resolution; and (d) in accordance with a determination that the region includes the instance of the person, determining that the image includes the person. 1. A method , comprising: obtaining a video feed, the video feed comprising a plurality of images; and', determining that the image includes a potential instance of a person by analyzing the image at a first resolution;', 'in accordance with the determination that the image includes the potential instance, denoting a region around the potential instance, wherein the area of the region is less than the area of the image;', 'determining whether the region includes an instance of the person by analyzing the region at a second resolution, greater than the first resolution; and', 'in accordance with a determination that the region includes the instance of the person, determining that the image includes the person., 'for each image in the plurality of images, analyzing the image to determine whether the image includes a person, the analyzing including], 'at a computing system having one or more processors and memory2. The method of claim 1 , further comprising claim 1 , for each image of the plurality of images claim 1 , assigning a confidence score to the image.3. 
The method of claim 2 , wherein ...
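The coarse-to-fine scheme above, where a cheap low-resolution pass proposes candidate regions and only those regions are re-checked at higher resolution, can be sketched as follows. Both detector callbacks are assumptions standing in for real models.

```python
# Two-resolution person detection: a coarse detector proposes candidate
# boxes; a fine detector re-examines a padded region around each
# candidate at higher resolution.

def detect_person(image, coarse_detector, fine_detector, pad=10):
    """Return True if any coarse candidate survives the fine check.

    coarse_detector(image) -> list of (x, y, w, h) candidate boxes
    fine_detector(image, region) -> bool, run at higher resolution
    """
    for (x, y, w, h) in coarse_detector(image):
        # Denote a region slightly larger than the potential instance.
        region = (x - pad, y - pad, w + 2 * pad, h + 2 * pad)
        if fine_detector(image, region):
            return True
    return False
```

The design saves compute because the expensive high-resolution analysis runs only on regions whose area is less than the full image.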

Publication date: 10-01-2019

GENERATING EVENT DEFINITIONS BASED ON SPATIAL AND RELATIONAL RELATIONSHIPS

Number: US20190012577A1
Author: Lahr Nils B.

Data from one or more sensors is input to a workflow and fragmented to produce HyperFragments. The HyperFragments of input data are processed by a plurality of Distributed Experts, who make decisions about what is included in the HyperFragments or add details relating to elements included therein, producing tagged HyperFragments, which are maintained as tuples in a Semantic Database. Algorithms are applied to process the HyperFragments to create an event definition corresponding to a specific activity. Based on related activity included in historical data and on ground truth data, the event definition is refined to produce a more accurate event definition. The resulting refined event definition can then be used with the current input data to more accurately detect when the specific activity is being carried out. 1. A method for processing input data in a managed workflow to identify events corresponding to a specified activity , comprising:(a) fragmenting the input data to produce fragments of input data that are self-contained and discrete;(b) processing the fragments of input data at one or more nodes, using a plurality of distributed experts, the plurality of distributed experts working at the one or more nodes making determinations about the fragments of input data or adding details to the fragments of input data to produce tagged fragments of input data;(c) reviewing the tagged fragments of input data to create definitions for the events evident in the tagged fragments of input data; and(d) determining if the events evident in the tagged fragments of input data likely correspond to the specified activity.2. The method of claim 1 , further comprising providing training data and ground truth data related to events corresponding to the specified activity claim 1 , for use in determining if the events evident in the tagged fragments of input data likely correspond to the specified activity.3. 
The method of claim 2 , further comprising refining the definitions of ...

Publication date: 14-01-2021

PREMISES SECURITY SYSTEM WITH DYNAMIC RISK EVALUATION

Number: US20210012115A1

A technique is introduced for utilizing data associated with a monitored premises to determine a likelihood of a crime, or other activity, occurring at the premises. In an example embodiment, premises data is received from one or more sources including sensor devices located at the premises and other data sources including third-party databases. The premises data is processed using a machine learning model, such as an artificial neural network, to generate a risk score that is indicative of the likelihood of a crime occurring at the premises in real-time or in the future. The introduced technique for risk evaluation can be implemented in conjunction with a premises security system, for example, to route alarms generated by monitoring devices located at the premises. 1. A method comprising:receiving, by a computer system, premises data including sensor data generated by one or more sensor devices located at a physical premises;processing, by the computer system, the premises data using a machine learning model to generate a risk score associated with the physical premises, the risk score representing a quantified evaluation of a likelihood of criminal activity occurring at the physical premises;receiving, by the computer system, an alarm generated by a monitoring device located at the physical premises; andprocessing, by the computer system, the alarm generated by the monitoring device located at the physical premises based on the risk score associated with the physical premises.2. The method of claim 1 , wherein processing the alarm includes:routing, by the computer system, based on the risk score associated with the physical premises, a signal indicative of the alarm for delivery to a particular operator of a plurality of operators.3. 
The method of claim 2 , wherein the particular operator is associated with a particular tier of operators and wherein the signal indicative of the alarm is routed for delivery to the particular operator in response to determining that ...
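The risk-based routing of claims 2-3, where alarms from higher-risk premises are delivered to a particular tier of operators, can be sketched as a threshold lookup. The tier names and boundaries are assumed values, not from the patent.

```python
# Route an alarm to an operator tier based on the premises risk score
# produced by the machine learning model.

def route_alarm(risk_score, tiers=((0.8, "senior"), (0.4, "standard"))):
    """Map a risk score in [0, 1] to an operator tier.

    tiers is ordered from highest threshold to lowest.
    """
    for threshold, tier in tiers:
        if risk_score >= threshold:
            return tier
    return "automated-review"      # low-risk alarms need no live operator
```

In a full system the score itself would come from a model (e.g. a neural network) over sensor data and third-party premises data.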

Publication date: 09-01-2020

SYSTEM AND METHOD OF VIDEO CONTENT FILTERING

Number: US20200012866A1

An input video sequence from a camera is filtered by a process that comprises detecting temporal tracks of moving image parts from the input video sequence and assigning activity scores to temporal segments of the tracks, using respective predefined track dependent activity score functions for a plurality of different activity types. Based on this, event scores are computed as a function of time. This computation is controlled by a definition of a temporal sequence of activity types or compound activity types for an event type. Successive intermediate scores are computed, each as a function of time for a respective activity type or compound activity type in the temporal sequence. The successive intermediate scores for each respective activity type or compound activity type are computed from a combination of the intermediate score for the preceding activity type or compound activity type in the temporal sequence at a preceding time and the activity scores that were assigned to segments of the tracks after the preceding time, for the activity type or activity types defined by the respective activity type or compound activity type in the temporal sequence. One of the computed event scores is selected for a selected time. The computation of the selected event score is traced back to identify the intermediate scores that were used to compute the selected event score and to identify the segments of the tracks whose assigned activity scores were used to compute the identified intermediate scores. An output video sequence and/or video image is generated that selectively includes the image parts associated with the selected segments. 1. A method of filtering an input video sequence captured by a camera, the method comprising detecting temporal tracks of moving image parts from the input video sequence; assigning activity scores to temporal segments of the tracks, using respective predefined track dependent activity score functions for a ...
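The sequential event-score recursion can be sketched as a simple dynamic program: the intermediate score for step k at time t combines the best intermediate score of step k-1 at an earlier time with the activity score observed for step k at time t. Representing scores as a dense per-(activity, time) table is an illustrative simplification.

```python
# Dynamic-programming sketch of scoring an event defined as an ordered
# sequence of activity types, each occurring strictly after the last.

def event_score(activity_scores):
    """activity_scores[k][t]: score of the k-th activity type in the
    event definition at time t. Returns the best total score of
    completing all activities in order."""
    n_times = len(activity_scores[0])
    NEG = float("-inf")
    prev = activity_scores[0][:]       # intermediate scores for step 0
    for k in range(1, len(activity_scores)):
        best_before = NEG
        cur = [NEG] * n_times
        for t in range(n_times):
            if t > 0:
                best_before = max(best_before, prev[t - 1])
            if best_before > NEG:
                cur[t] = best_before + activity_scores[k][t]
        prev = cur
    return max(prev)
```

Tracing back which `prev[t']` produced each maximum would recover the contributing segments, mirroring the trace-back step in the abstract.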

Publication date: 03-02-2022

BUMP ALERT

Number: US20220032941A1
Assignee: CARTICA AI LTD

A method for bump alert, the method may include receiving by a vehicle computerized system, at least one visual bump indicator that is visible before driving over at least one bump; obtaining sensed information regarding an environment of the vehicle; processing the sensed information, wherein the processing comprises searching a visual bump indicator of the at least one visual bump indicator; determining whether the vehicle approaches a bump; and generating the bump alert when determining that the vehicle approaches the bump. 1. A method for bump alert , the method comprises:receiving by a vehicle computerized system, at least one visual bump indicator that is visible before driving over at least one bump;obtaining sensed information regarding an environment of the vehicle;processing the sensed information, wherein the processing comprises searching a visual bump indicator of the at least one visual bump indicator;determining whether the vehicle approaches a bump; andgenerating the bump alert when determining that the vehicle approaches the bump.2. The method according to claim 1 , wherein the at least one visual bump indicator was generated by a generation process that comprises:obtaining video information and telemetric information, wherein the video information and the telemetric information are obtained during driving sessions of one or more vehicles;determining, based at least one the telemetric information, multiple suspected driving over bump events;selecting video information segments, wherein each selected video information segment is acquired before a suspected driving over bump event and in timing proximity to the suspected driving over bump event; andapplying a machine learning process on the selected video information segments to find the at least one visual bump indicator that is visible before driving over the at least one bump.3. The method according to claim 1 , comprising generating the at least one visual bump indicator claim 1 , wherein the ...
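The segment-selection step of claim 2, taking video acquired before and in timing proximity to each suspected driving-over-bump event detected from telemetry, can be sketched as follows. Times are in seconds and the window lengths are assumed parameters.

```python
# For each suspected bump event (from telemetric information), select
# the video window ending just before the event, for later training of
# the visual-bump-indicator model.

def select_segments(bump_times, lead=4.0, margin=1.0):
    """Return (start, end) video windows ending just before each
    suspected bump event."""
    return [(t - lead - margin, t - margin) for t in bump_times]
```

A machine learning process would then be applied to these segments to find indicators visible before the bump is reached.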

Publication date: 14-01-2016

IDENTIFYING SPATIAL LOCATIONS OF EVENTS WITHIN VIDEO IMAGE DATA

Number: US20160014378A1

An invention for identifying a spatial location of an event within video image data is provided. Disclosed are embodiments for detecting an object and obtaining trajectory data of a trajectory of the object within the video image data from a sensor device; converting the trajectory data into a contour-coded compressed image; generating, based on the trajectory data, a searchable code that contains a set of locations traversed by the trajectory of the object within the video image; associating the searchable code with the contour-coded compressed image in a database; and returning, in response to a query having a selected location that corresponds to a location of the set of locations in the searchable code, an image of the trajectory data corresponding to the object based on the contour-coded compressed image in the database. 1. A method for identifying a spatial location of an event within video image data comprising: generating, by at least one computer device, based on trajectory data of a trajectory of an object within video image data, a searchable code that contains a set of locations traversed by the trajectory of the object within the video image; converting, by the at least one computer device, the trajectory data into a contour-coded compressed image; and returning from a database, in response to a query having a selected location that corresponds to a location of the set of locations in the searchable code, an image of the trajectory data corresponding to the object based on the contour-coded compressed image in the database. 2. The method according to claim 1, further comprising searching the database to identify a spatial location of the event within the video image data. 3. The method according to claim 2, the searching comprising: converting the area of interest to a lossy query code; and comparing the lossy query code to the searchable code of the trajectory data within the video image data. 4.
The method according to claim 3 , further comprising:decompressing ...
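The searchable-code idea can be sketched by quantizing a trajectory into grid cells, storing the set of traversed cells as the code, and answering location queries by set membership. The grid size is an assumed parameter; the patent's actual code is paired with a contour-coded compressed image.

```python
# Encode which locations a trajectory traversed, and answer spatial
# queries against that code.

def searchable_code(trajectory, cell=10):
    """Encode a list of (x, y) points as the set of grid cells traversed."""
    return {(x // cell, y // cell) for x, y in trajectory}

def matches_location(code, query_point, cell=10):
    """True if the trajectory passed through the queried location's cell."""
    x, y = query_point
    return (x // cell, y // cell) in code
```

A database lookup would return the stored compressed trajectory image only for codes that match the queried location.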

Publication date: 14-01-2016

VIDEO DATA PROVISION

Number: US20160014479A1

Video clips are generated from a sequence of video data for delivery and playback on demand. Each clip is identified by an event marker and a pre/post event ratio (PPER). The pre/post event ratio determines the relative duration of the clip before and after the element defined by the event marker. In response to a request for video clip data, identified with reference to the event marker, the discrete elements of video data making up each clip are allocated to a sequence starting with the element associated with the event marker and following in an order determined by the pre/post event ratio. This allows clips having different durations but the same PPER to be transmitted to different receivers dependent on channel capacity or reliability. The receiver may compile clips of a selected length by compiling the elements starting at the event marker and adding the elements in a sequence, before and after the event marker, determined by the PPER until a clip of a required length is complete. A sequence of several clips can be compiled, the event markers for different clips having different priorities such that if a short sequence is required, the low priority clips are omitted or abbreviated. 1.
A method of generating video clips from a sequence of video data for delivery and playback on demand , wherein the sequence is made up of discrete elements of video data , wherein each clip is identified by an event marker and a pre/post event ratio , the event marker defining one of the discrete elements in the sequence , and the pre/post event ratio determines the relative proportions of the duration of the clip before the element defined by the event marker and the duration of the clip after the element defined by the event marker , and wherein , in response to a request for video clip data , identified with reference to the event marker , the discrete elements of video data making up each clip are allocated to a sequence starting with the element ...
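The ordering described above can be sketched as a greedy schedule: starting from the event-marker element, earlier and later elements are interleaved so that any prefix of the sequence keeps roughly the requested pre/post ratio, letting a receiver truncate at any length. Elements are integer indices here, and this particular scheduling rule is an illustrative assumption.

```python
# Allocate discrete video elements to a transmission sequence starting
# at the event marker, interleaving "pre" and "post" elements so every
# prefix approximates the pre/post event ratio.

def clip_order(marker, pre_post_ratio, length):
    """Return element indices in transmission order."""
    order = [marker]
    pre_sent, post_sent = 0, 0
    while len(order) < length:
        # Send a "pre" element while the achieved ratio is at or below
        # the target; otherwise send a "post" element.
        if pre_sent <= pre_post_ratio * post_sent:
            pre_sent += 1
            order.append(marker - pre_sent)
        else:
            post_sent += 1
            order.append(marker + post_sent)
    return order
```

A receiver compiling a clip of any length from a prefix of this sequence gets a clip centered on the marker with the intended before/after proportions.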

Publication date: 09-01-2020

Method and system for door status detection and alert generation

Number: US20200014888A1
Author: Harshavardhan Magal
Assignee: Zebra Technologies Corp

Disclosed are methods and systems such as an imaging assembly that may include a variety of components, such as, but not limited to, a two-dimensional (2D) camera configured to capture 2D image data, a three-dimensional camera configured to capture three-dimensional (3D) image data, and an evaluation module executing on one or more processors. The 2D camera may be oriented in a direction to capture 2D image data of a first field of view of a container loading area, and the 3D camera may be oriented in a direction to capture 3D image data of a second field of view of the container loading area at least partially overlapping with the first field of view. The evaluation module may be configured to detect a status event in the container loading area based on 2D image data.

Publication date: 03-02-2022

SYSTEMS AND METHODS FOR INDOOR AIR QUALITY BASED ON DYNAMIC PEOPLE MODELING TO SIMULATE OR MONITOR AIRFLOW IMPACT ON PATHOGEN SPREAD IN AN INDOOR SPACE AND TO MODEL AN INDOOR SPACE WITH PATHOGEN KILLING TECHNOLOGY, AND SYSTEMS AND METHODS TO CONTROL ADMINISTRATION OF A PATHOGEN KILLING TECHNOLOGY

Number: US20220035326A1

Described herein are heating, ventilation, air conditioning, and refrigeration (HVACR) systems and methods directed to indoor air quality. HVACR systems and methods are based on dynamic people modeling to simulate and/or to monitor airflow impact on pathogen spread in an indoor space. HVACR systems and methods model an indoor space with pathogen killing technology to deploy the pathogen killing technology. HVACR systems and methods are directed to control administration of a pathogen killing technology to an indoor space based on factors that impact airflow including from dynamic analytics, a known input, and/or detection. 140-. (canceled)41. A method of modeling and simulating indoor air quality (IAQ) for a heating , ventilation , air conditioning , and refrigeration (HVACR) system , comprising:determining behavior parameters for one or more individuals in an indoor space;determining spatial parameters of one or more objects in the indoor space;generating a model based on the behavior parameters and the spatial parameters; anddetermining a critical location of an airflow to disinfect the airflow for the indoor space.42. The method of claim 41 , further comprising:adjusting a control of the HVACR system on the airflow before the airflow reaches the critical location.43. The method of claim 41 , further comprising:running the generated model; andperforming a simulation of the airflow based on running of the generated model.44. The method of claim 43 , further comprising:determining a balanced requirement based on requirements of at least two simulations, each of the at least two models includes a requirement, and', 'determining the balanced requirement is based on the requirement of each of the at least two models., 'wherein the at least two simulations are performed on at least two models, respectively,'}45. The method of claim 41 , further comprising:reducing an order of the model.46. The method of claim 41 , wherein the model includes one or more of an airflow ...

Publication date: 03-02-2022

METHODS AND SYSTEMS FOR IMPROVING DVS FEATURES FOR COMPUTER VISION APPLICATIONS

Number: US20220036082A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

The disclosure relates to methods and systems for non-linear mapping of motion-compensated DVS events to DVS images. In the non-linear mapping of motion-compensated DVS events to DVS images, the current pixel increments depend on the existing number of accumulated events at that location. Further, the initial events are given a larger weightage to preserve the tracked features which are used for bundle adjustment. Further, disclosure relates to methods and systems for representing polarity in single channel DVS Frame. As the polarity adds additional constraints on feature matching and therefore accurate optical flow is seen in the image. Moreover, disclosure relates to methods and systems for using event-density as a measure of input contrast after DVS event accumulation and this is used to determine the target range for contrast stretching. 1. A method of imaging an event captured by a dynamic vision sensor (DVS) camera , said method comprising: non-linearly incrementing a motion-compensated pixel location of each of the one or more events by: segregating the motion-compensated pixel location of each of the one or more events into one or more groups, each of the one more groups corresponding to a range of number of events occurring at the motion-compensated pixel locations; and', 'incrementing pixel-intensity for the one or more groups by different amounts to non-linearly increment pixel intensities., 'receiving one or more events captured by the DVS camera over a period of time; and'}2. 
A method of imaging an event captured from a dynamic vision camera, said method comprising: receiving one or more events captured by an event-camera over a period of time; determining the one or more events as a weighted linear-combination of polar and non-polar components; converting both the polar and the non-polar components as intermediate RGB color-channels; and generating the single channel frame by converting the RGB color channels to grayscale; determining polarity in a ...
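The non-linear accumulation described in this entry — where each new event at a pixel adds less intensity the more events that pixel has already received — can be sketched as follows. The group boundaries and increment values here are illustrative assumptions; the patent does not publish concrete numbers:

```python
import numpy as np

def accumulate_events(events, shape=(4, 4)):
    """Accumulate motion-compensated DVS events into an image, incrementing
    each pixel by a smaller amount the more events it has already received,
    so early events at a location carry a larger weight."""
    counts = np.zeros(shape, dtype=int)    # events accumulated per pixel
    image = np.zeros(shape, dtype=float)   # resulting intensity image
    # Hypothetical groups: (accumulated-count limit, increment per event)
    groups = [(2, 1.0), (5, 0.5), (np.inf, 0.1)]
    for x, y in events:
        for limit, inc in groups:
            if counts[y, x] < limit:
                image[y, x] += inc
                break
        counts[y, x] += 1
    return image

img = accumulate_events([(0, 0)] * 4)
# first two events add 1.0 each, the next two add 0.5 each -> 3.0
```

Because later events contribute less, a pixel that fires constantly saturates slowly instead of washing out the tracked feature.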

More
03-02-2022 publication date

VIDEO EVENT RECOGNITION METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM

Number: US20220036085A1

Technical solutions for video event recognition relate to the fields of knowledge graphs, deep learning and computer vision. A video event graph is constructed, and each event in the video event graph includes: M argument roles of the event and respective arguments of the argument roles, with M being a positive integer greater than one. For a to-be-recognized video, respective arguments of the M argument roles of a to-be-recognized event corresponding to the video are acquired. According to the arguments acquired, an event is selected from the video event graph as a recognized event corresponding to the video. 1. A video event recognition method, comprising: constructing a video event graph, each event in the video event graph including: M argument roles of the event and respective arguments of the argument roles, M being a positive integer greater than one; acquiring, for a to-be-recognized video, respective arguments of the M argument roles of a to-be-recognized event corresponding to the video; and selecting, according to the arguments acquired, an event from the video event graph as a recognized event corresponding to the video. 2. The method according to claim 1, wherein the M argument roles comprise: a spatial scene argument role, an action argument role, a person argument role, an object argument role and a related term argument role. 3. The method according to claim 2, wherein the acquiring respective arguments of the M argument roles of the to-be-recognized event corresponding to the video comprises: performing vision understanding on the video to obtain an argument of the spatial scene argument role, an argument of the action argument role, an argument of the person argument role and an argument of the object argument role of the to-be-recognized event; and performing text understanding on a text corresponding to the video to obtain an argument of the related term argument role of the to-be-recognized event. 4.
The method according to ...
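The selection step — matching the arguments extracted from a video against the events stored in the event graph — can be sketched as a simple overlap count. The graph layout, role names and scoring rule here are illustrative assumptions, not the patent's actual data model:

```python
def recognize_event(video_args, event_graph):
    """Pick the event whose argument-role fillers best overlap the
    arguments extracted from the video (simple count of exact matches)."""
    def score(event_args):
        return sum(1 for role, arg in video_args.items()
                   if event_args.get(role) == arg)
    return max(event_graph, key=lambda name: score(event_graph[name]))

# Hypothetical two-event graph keyed by event name.
graph = {
    "goal_celebration": {"scene": "stadium", "action": "cheer", "object": "ball"},
    "concert": {"scene": "stage", "action": "sing", "object": "microphone"},
}
args = {"scene": "stadium", "action": "cheer", "object": "flag"}
# recognize_event(args, graph) -> "goal_celebration" (2 of 3 roles match)
```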

More
03-02-2022 publication date

COMPUTING SYSTEM AND A COMPUTER-IMPLEMENTED METHOD FOR SENSING EVENTS FROM GEOSPATIAL DATA

Number: US20220036087A1
Assignee:

A computer-implemented method and computing system for sensing events and optionally and preferably augmenting a video feed with overlay, comprising in some embodiments a data acquisition module, a sensor module, and optionally and preferably an overlay module. By describing the state of an activity with models that capture the semantics of the activity and comparing this description to a library of event patterns, occurrences of events are detected. Detected events are optionally processed by the overlay module to generate a video feed augmented with overlay illustrating said events. 1. A computer-implemented method for sensing events during a dynamic activity, the method comprising: a data acquisition step and an event sensing step, wherein: a. the data acquisition step comprises the acquisition, by one or more of: video, position-measuring sensors, or digital transfer, of a set of geospatial data including the positions of individuals during a time span thereof; b. the event sensing step comprises a description step and an event detection step, wherein: i. the description step comprises evaluation of a model graph, comprising a collection of models linked by input-output dependency relationships, with at least one model taking as input at least part of the geospatial data, and storage by digital means of the model outputs, which together provide a high-level description of the activity; and ii. the event detection step comprises the matching of the description output with patterns representing event types from a pattern library, outputting an event record whenever a match is found. 2. The computer-implemented method of claim 1, wherein the event detection step further comprises: the model outputs at that timestep are compared to the criteria in the pattern definition using pattern matching criteria comprising one or more inequality relationships (e.g. greater than, less than) defined with reference to model outputs, and in case a match is found, an ...
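The event detection step — comparing per-timestep model outputs against a library of inequality-based patterns and emitting an event record on each match — can be sketched as below. The pattern schema and field names are illustrative assumptions:

```python
import operator

# Map inequality symbols from the pattern library to comparison functions.
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def match_events(model_outputs, pattern_library):
    """Compare each timestep's model outputs to every pattern's criteria,
    emitting an event record whenever all criteria of a pattern hold."""
    records = []
    for t, outputs in enumerate(model_outputs):
        for pattern in pattern_library:
            if all(OPS[op](outputs[key], value)
                   for key, op, value in pattern["criteria"]):
                records.append({"time": t, "event": pattern["name"]})
    return records

# Hypothetical pattern: a "sprint" event fires when modeled speed exceeds 7 m/s.
library = [{"name": "sprint", "criteria": [("speed", ">", 7.0)]}]
outputs = [{"speed": 3.2}, {"speed": 8.1}]
# match_events(outputs, library) -> one "sprint" record at timestep 1
```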

More
03-02-2022 publication date

IMAGE/VIDEO ANALYSIS WITH ACTIVITY SIGNATURES

Number: US20220036090A1
Assignee:

Video frames from a video are compressed into a single image or a single data structure that represents a unique visual flowprint or visual signature for a given activity being modeled from the video frames. The flowprint comprises a computed summary of the original pixel values associated with the video frames within the single image and the flowprint is specific to movements occurring within the video frames that are associated with the given activity. In an embodiment, the flowprint is provided as input to a machine-learning algorithm to allow the algorithm to perform object tracking and monitoring from the flowprint rather than from the video frames of the video, which substantially improves processor load and memory utilization on a device that executes the algorithm, and substantially improved responsiveness of the algorithm. 1. A method , comprising:obtaining video frames from a video;generating a single image from the video frames; andproviding the single image as a visual flowprint for an activity that was captured in the video frames.2. The method of claim 1 , wherein generating further includes tracking a region within each video frame associated with a modeled activity.3. The method of claim 2 , wherein identifying further includes obtaining pixel values for each video frame associated with points along an expected path of movement for the modeled activity within the region.4. The method of claim 3 , wherein obtaining the expected path further comprises determining an aggregated pixel value for each point along the expected path of movement across the video frames from the corresponding pixel values captured across the video frames for the corresponding point.5. The method of claim 4 , wherein determining further includes calculating the aggregated pixel value as an average of the corresponding pixel values across the video frames.6. 
The method of claim 4 , wherein determining further includes selecting the aggregated pixel value as a minimum pixel value ...
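The flowprint idea above — collapsing the frames of a video into one image by aggregating, for each point on the expected path of movement, the pixel values observed across all frames — can be sketched as below. The frame layout and the choice of mean as the aggregator are illustrative assumptions:

```python
import numpy as np

def flowprint(frames, path, reduce=np.mean):
    """Collapse video frames into a single image by aggregating, per point on
    the expected path of movement, the pixel values across all frames."""
    frames = np.asarray(frames, dtype=float)   # shape: (n_frames, H, W)
    out = np.zeros(frames.shape[1:])
    for y, x in path:
        out[y, x] = reduce(frames[:, y, x])    # e.g. average over time
    return out

# Two tiny 2x2 frames; the tracked path visits only pixel (1, 1).
frames = [[[0, 0], [0, 2]],
          [[0, 0], [0, 6]]]
fp = flowprint(frames, path=[(1, 1)])
# fp[1, 1] == 4.0 (mean of 2 and 6)
```

A downstream model can then consume this single image instead of the full frame sequence, which is the load reduction the abstract claims.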

More
21-01-2021 publication date

SYSTEMS AND METHODS FOR IMPROVED OPERATIONS OF SKI LIFTS

Number: US20210016813A1
Author: Queen Bryan Scott
Assignee:

Systems and methods for improved operations of ski lifts increase skier safety at on-boarding and off-boarding locations by providing an always-on, always-alert system that “watches” these locations, identifies developing problem situations, and initiates mitigation actions. One or more video cameras feed live video to a video processing module. The video processing module feeds resulting sequences of images to an artificial intelligence (AI) engine. The AI engine makes an inference regarding existence of a potential problem situation based on the sequence of images. This inference is fed to an inference processing module, which determines if the inference processing module should send an alert or interact with the lift motor controller to slow or stop the lift. 1-21. (canceled) 22. A computerized method for improved ski lift operations, comprising: capturing, by at least one computer processor, digital video of one or more of on-boarding and off-boarding operations of a ski lift; generating, by at least one computer processor, as the ski lift is operating, a plurality of digital images of the one or more of on-boarding and off-boarding operations of the ski lift based on the captured digital video, wherein the plurality of digital images includes sequences of video frames, wherein each sequence of the sequences of video frames includes a plurality of individual video frames representing a sequence of events; automatically detecting, by at least one computer processor, in real-time as the digital video is being captured, as the ski lift is operating, a potential problem situation represented by an abnormal position of a lift rider while on-boarding or off-boarding the ski lift in one or more of an on-boarding area and an off-boarding area of the ski lift based on the plurality of digital images, wherein the automatically detecting the potential problem situation includes: determining, based on a combined sequence of video frames of at least one of the sequences of
video ...
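The inference processing module's decision — alert on a moderately confident problem inference, slow or stop the lift on a highly confident one — can be sketched as a simple threshold policy. Both thresholds and action names are illustrative assumptions, not values from the patent:

```python
def decide_action(problem_prob, alert_threshold=0.5, stop_threshold=0.9):
    """Turn the AI engine's problem-situation probability into a mitigation
    action (thresholds are illustrative, not from the patent)."""
    if problem_prob >= stop_threshold:
        return "stop_lift"      # high confidence: interact with lift motor controller
    if problem_prob >= alert_threshold:
        return "send_alert"     # moderate confidence: notify the operator
    return "none"

# decide_action(0.95) -> "stop_lift"; decide_action(0.6) -> "send_alert"
```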

More
19-01-2017 publication date

System and method for the detection and counting of repetitions of repetitive activity via a trained network

Number: US20170017857A1
Author: Lior Wolf, Ofir Levy
Assignee: Individual

A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online and not on the complete video. It analyzes sequentially blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to successfully train on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region of interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
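The patent estimates cycle length with a 3D convolutional network over blocks of 20 frames; a classical autocorrelation baseline conveys the same cycle-length idea on a 1-D motion signal. This sketch is a stand-in for illustration, not the patented deep-network method:

```python
import numpy as np

def cycle_length(signal, min_lag=2):
    """Estimate the dominant repetition period of a 1-D motion signal via
    autocorrelation (a classical stand-in for the patent's deep network)."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()                               # remove DC offset
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]  # lags 0..len(s)-1
    # Skip tiny lags (lag 0 is trivially maximal) and pick the peak.
    return int(np.argmax(ac[min_lag:]) + min_lag)

# A perfect period-5 motion signal repeated 4 times:
sig = [0, 1, 2, 1, 0] * 4
# cycle_length(sig) -> 5
```

Dividing the block duration by the estimated period then gives the repetition count for that block, analogous to integrating per-block estimates over time.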

More
21-01-2016 publication date

OPTICAL TOUCH-CONTROL SYSTEM

Number: US20160019424A1
Assignee:

An optical touch-control system is provided. The optical touch-control system includes a display unit; a light source; an image capturing unit, configured to capture a plurality of images reflected by the light emitted by the light source in front of the display unit; and a processor, wherein the processor determines whether a target object is located in an operating space in front of the display unit based on the captured images, wherein when the processor determines that the target object is in a touch zone of the operating space, the processor further determines that the target object is to perform a touch-control operation, wherein when the processor determines that the target object is in a gesture zone of the operating space, the processor further determines that the target object is to perform a gesture operation.
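The zone logic above — a target close to the display is treated as a touch-control operation, a target farther away but still inside the operating space as a gesture operation — can be sketched as a distance classifier. The zone depths are illustrative assumptions:

```python
def classify_target(distance_cm, touch_zone_cm=2.0, gesture_zone_cm=30.0):
    """Classify a detected object in front of the display by its distance:
    inside the touch zone -> touch-control operation, farther but inside
    the gesture zone -> gesture operation (zone depths are illustrative)."""
    if distance_cm <= touch_zone_cm:
        return "touch"
    if distance_cm <= gesture_zone_cm:
        return "gesture"
    return "outside"

# classify_target(1.0) -> "touch"; classify_target(10.0) -> "gesture"
```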

More
21-01-2016 publication date

CONTENT PLAYBACK SYSTEM, SERVER, MOBILE TERMINAL, CONTENT PLAYBACK METHOD, AND RECORDING MEDIUM

Number: US20160019425A1
Author: YAMAJI Kei
Assignee:

Selected image data or specific information thereon is stored in association with moving image data as a management marker of a selected image. The selected image data is selected from among still image data extracted from the moving image data. When an output image of the selected image is captured, image analysis is performed on the captured image data to acquire a management marker of a captured image. A management marker of a selected image corresponding to the management marker of the captured image from among management markers of selected images stored in the storage is specified. Digest moving image data is generated by picking out a part of moving image data associated with the specific management marker. Control is performed so that a digest moving image is played back and displayed on the display section.

More
03-02-2022 publication date

MEETING PRIVACY PROTECTION SYSTEM

Number: US20220036708A1
Assignee:

An intrusion detection system detects when an unexpected person enters the environment of a user who is in a meeting. A privacy protection action which is an action that is to be taken in response to the detected intrusion, is identified. Audio and/or video systems are then controlled to perform the privacy protection action. Machine learning can be used, based upon user interactions, to improve intrusion detection and other parts of the system. 1. A computer system , comprising:a privacy protection system that receives meeting visual data indicative of a visual environment of a current meeting that a user is attending using a user device;a visual intrusion detection system that performs image processing on the meeting visual data to detect a video intrusion event indicative of an unintended object being captured in the meeting visual data and generating a video intrusion event identifier indicative of the detected video intrusion event;a privacy protection action identifier that identifies a privacy protection action based on the video intrusion event identifier; anda privacy protection action controller that automatically implements a computer system configuration to perform the identified privacy protection action.2. The computer system of wherein the visual intrusion detection system is configured to perform facial recognition on the meeting visual data to detect the video intrusion event.3. The computer system of wherein the visual intrusion detection system is configured to perform body recognition on the meeting visual data to identify movement of a body in the visual environment of the current meeting to detect the video intrusion event.4. The computer system of wherein the privacy protection action controller comprises:a video action controller that disables transmission of the meeting visual data.5. 
The computer system of wherein the privacy protection action controller comprises:a video action controller that processes the meeting visual data to blur a ...

More
17-01-2019 publication date

VIDEO CAMERA WITH CAPTURE MODES

Number: US20190019534A1
Assignee:

Embodiments provide a video camera that can be configured to allow tagging of recorded video and/or capture of video segments or sequences of images in response to user actuation of a camera control identifying an event of interest. For example, a user may press a button on the camera when an event of interest occurs, and in response the camera may tag a captured video file at a timestamp corresponding to the event. In another example, the user may initiate capture of video segments or sequences of images at an occurrence of an event of interest by pressing a button. The camera may include an image data buffer that may enable capture of video segments and/or sequences of images occurring before the user initiates capture of the event. User interfaces may enable the user to quickly review the captured video or sequences of images of the events of interest. 1. (canceled) 2. A method comprising: accessing, by one or more processors configured with computer-executable instructions, an image data file including a plurality of sequences of images, at least some of the plurality of sequences of images associated with indications of events of interest identified by a user; and generating, by the one or more processors, a user interface configured to include: a timeline including indications of beginning and ending points associated with one of the sequences of images; and one or more visual indications, within the timeline, of one or more of the events of interest associated with the one of the sequences of images. 3. The method of claim 2, wherein the timeline is a horizontal timeline, and wherein the one or more indications comprise vertical lines on the horizontal timeline. 4. The method of claim 2, wherein the events of interest are identified by a user of a camera on which the image data file is stored in real-time. 5.
The method of claim 4 , wherein the events of interest are identified in real-time by selection of an event selector associated with the camera ...
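The image data buffer that lets the camera save frames from *before* the user presses the event button is naturally a rolling (ring) buffer. A minimal sketch, with an illustrative capacity and integer stand-ins for frames:

```python
from collections import deque

class PreEventBuffer:
    """Rolling frame buffer so that frames captured shortly before the user
    actuates the event control can be included in the saved segment."""
    def __init__(self, capacity=3):
        # deque with maxlen silently drops the oldest frame when full.
        self.frames = deque(maxlen=capacity)

    def add(self, frame):
        self.frames.append(frame)

    def capture_event(self):
        # The saved segment starts with the buffered pre-event frames.
        return list(self.frames)

buf = PreEventBuffer(capacity=3)
for i in range(10):          # frames 0..9 stream in; only the last 3 are kept
    buf.add(i)
segment = buf.capture_event()
# segment == [7, 8, 9]
```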

More
03-02-2022 publication date

SELECTING SPECTATOR VIEWPOINTS IN VOLUMETRIC VIDEO PRESENTATIONS OF LIVE EVENTS

Number: US20220038635A1
Assignee:

A method for selecting spectator viewpoints in volumetric video presentations of live events includes receiving a plurality of video streams depicting an event occurring in a venue, wherein the plurality of video streams are provided to a processor by a plurality of cameras which are geographically distributed within the venue, identifying an initial position of a target that is present in the venue, based on an analysis of the plurality of video streams, compositing the plurality of video streams to produce a first volumetric video traversal of the live event that follows the target through the venue, predicting a future position of the target in the venue at a future point in time, based in part on a current position of the target, and sending an alert to a display device that is streaming a volumetric video presentation of the event, wherein the alert indicates the future position of the target. 1. A method comprising:receiving, by a processor, a plurality of video streams depicting a live event occurring in a venue, wherein the plurality of video streams is provided to the processor by a plurality of cameras which are geographically distributed within the venue;identifying, by the processor, an initial position of a target that is present in the venue, based on an analysis of the plurality of video streams;compositing, by the processor, the plurality of video streams to produce a first volumetric video traversal of the live event, wherein the first volumetric video traversal comprises a first continuous sequence of viewpoints of the live event that follows the target through the venue;predicting, by the processor, a future position of the target in the venue at a future point in time, based in part on a current position of the target; andsending, by the processor, an alert to a display device that is streaming a second volumetric video traversal of the live event comprising a second continuous sequence of viewpoints of the live event that is different from the 
...
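The prediction step above — estimating the target's future position from its current position — can be sketched as linear extrapolation from the last two observed positions. This is a simple stand-in; the patent does not specify the prediction model:

```python
def predict_position(positions, dt_ahead=1.0):
    """Linearly extrapolate a target's future position from its last two
    observed (x, y) positions, dt_ahead timesteps into the future."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0            # displacement per timestep
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)

track = [(0.0, 0.0), (1.0, 2.0)]
# predict_position(track) -> (2.0, 4.0)
```

The predicted position is what the alert sent to the viewer's display device would carry, so the spectator can move their viewpoint there ahead of the target.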

More
03-02-2022 publication date

INTELLIGENT COMMENTARY GENERATION AND PLAYING METHODS, APPARATUSES, AND DEVICES, AND COMPUTER STORAGE MEDIUM

Number: US20220038790A1
Assignee:

The present disclosure provides an intelligent commentary generation method. The method includes: obtaining a match data stream; parsing the match data stream, to obtain candidate events from the match data stream; determining events from the candidate events, to generate a sequence of events; and generating commentary scripts corresponding to the match data stream according to the sequence of events. 1. An intelligent commentary generation method, applied to an intelligent commentary generation device, the method comprising: obtaining a match data stream; parsing the match data stream, to obtain candidate events from the match data stream; determining events from the candidate events, to generate a sequence of events; and generating commentary scripts corresponding to the match data stream according to the sequence of events. 2. The method according to claim 1, wherein determining the events comprises: obtaining time periods in which the candidate events occur and importance degree parameters of the candidate events; determining, based on the time periods in which the candidate events occur, sets of candidate events corresponding to the time periods; and determining, based on the importance degree parameters of the candidate events, events corresponding to the time periods from the sets of candidate events corresponding to the time periods, to obtain the sequence of events. 3. The method according to claim 1, wherein generating the commentary scripts corresponding to the match data stream comprises: obtaining attribute information of each event in the sequence of events, wherein the attribute information includes at least a place where the event occurs and character information corresponding to the event; obtaining a commentary content generation strategy corresponding to each event; generating commentary texts based on the commentary content generation strategies and the attribute information; and generating, based on the commentary texts, the commentary ...
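The event-selection step of claim 2 — group candidate events by time period, then keep the most important ones per period to form the sequence of events — can be sketched as below. The record fields and the top-1-per-period policy are illustrative assumptions:

```python
def build_event_sequence(candidates, top_k=1):
    """Group candidate events by the time period they occur in, then keep
    the top_k most important events per period, in period order."""
    by_period = {}
    for event in candidates:
        by_period.setdefault(event["period"], []).append(event)
    sequence = []
    for period in sorted(by_period):
        ranked = sorted(by_period[period], key=lambda e: -e["importance"])
        sequence.extend(ranked[:top_k])
    return sequence

candidates = [
    {"period": 0, "name": "pass", "importance": 0.2},
    {"period": 0, "name": "goal", "importance": 0.9},
    {"period": 1, "name": "foul", "importance": 0.6},
]
# build_event_sequence(candidates) -> "goal" (period 0), then "foul" (period 1)
```

Each selected event would then be passed, with its attribute information, to a commentary-text generation strategy as in claim 3.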

More
16-01-2020 publication date

AUTOMATED DETECTION OF FEATURES AND/OR PARAMETERS WITHIN AN OCEAN ENVIRONMENT USING IMAGE DATA

Number: US20200019753A1
Author: FREESTON Benjamin
Assignee:

Automated detection of features and/or parameters within an ocean environment using image data. In an embodiment, captured image data is received from ocean-facing camera(s) that are positioned to capture a region of an ocean environment. Feature(s) are identified within the captured image data, and parameter(s) are measured based on the identified feature(s). Then, when a request for data is received from a user system, the requested data is generated based on the parameter(s) and sent to the user system. 1. A method comprising using at least one hardware processor to: for each of one or more ocean-facing cameras that are positioned to capture image data of a region of an ocean environment: receive the captured image data via at least one network, identify one or more features within the captured image data, and measure one or more parameters of the ocean environment based on the identified one or more features within the captured image data; and, for each of one or more user systems: receive a request for data from the user system via the at least one network, generate the requested data based on the one or more parameters, and send the requested data to the user system via the at least one network. 2. The method of claim 1, wherein the one or more features comprise one or more of a surfer, a wave, one or more species of marine life, a vehicle, a pier, and a lifeguard tower. 3. The method of claim 1, wherein the one or more parameters comprise one or more of a number of the one or more features in the region, a number of waves in the region, an average wave height in the region, a number of surfers in the region, a number of one or more species of marine life in the region, a number of waves surfed per unit time in the region, a number of waves surfed per surfer in the region, an average number of surfers per day in the region, a ...

More
21-01-2021 publication date

METHOD AND SYSTEM FOR FACILITATING TRAY MANAGEMENT

Number: US20210019532A1
Author: Calmus Jonathan
Assignee: City of Eden, LLC

The present disclosure relates to a method and system for facilitating tray management by determining at least one missing tool from the plurality of tools placed in a tray. The tray management system (TMS) is connected to a weighing system to measure the combined weight of tools before and after usage of the tools. The combined weight of the tools is measured after at least one tool is removed for usage from the tray or placed back in the tray, and a weight discrepancy is determined dynamically. Using the weight discrepancy, an image sensor captures a plurality of video segments. The TMS identifies occurrences of removing at least one tool from the tray and the corresponding occurrence of placing the tool in the tray in the video segments and determines the missing tool in the tray. The TMS alerts the user about the missing tools in real time and enables effective tray management. 1-21. (canceled) 22. A method comprising: receiving, by a processor of a tray management system, at least a first data and a second data associated with a plurality of tools placed in a tray, wherein the first data is obtained before usage of the plurality of tools and the second data is obtained after usage of the plurality of tools; determining, by the processor, a weight discrepancy of the plurality of tools upon receiving the second data, wherein the weight discrepancy is computed as a difference of the first data and the second data; obtaining, by the processor, a plurality of video segments captured by at least one image sensor, wherein each of the plurality of video segments is captured when the weight discrepancy is determined to be above a threshold; identifying, by the processor, at least one missing tool using the plurality of video segments; and displaying an alert about the at least one missing tool, the alert comprising an image of the at least one missing tool. 23.
The method of claim 22 , wherein the capturing of each video segment is continued until the weight discrepancy is determined to be below ...
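The weight-discrepancy trigger — compare the tray's combined weight before and after use, and flag a possible missing tool when the difference exceeds a threshold — can be sketched as below. The units and threshold value are illustrative assumptions:

```python
def check_tray(weight_before, weight_after, threshold=0.05):
    """Flag a possible missing tool when the tray weighs measurably less
    after use (weights in kg; threshold is illustrative)."""
    discrepancy = weight_before - weight_after
    return discrepancy > threshold, discrepancy

missing, diff = check_tray(12.40, 12.10)
# missing is True: a tool weighing about 0.3 kg likely was not returned
```

In the described system, a True result is what would start video-segment capture so the TMS can identify which tool was removed but not replaced.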

More
21-01-2021 publication date

SYSTEM AND METHOD FOR ABNORMAL SCENE DETECTION

Number: US20210019533A1
Author: Zhang Zhong

A method for detecting an abnormal scene may include obtaining data relating to a video scene, identifying at least two motion objects in the video scene based on the data and determining a first motion feature relating to the at least two motion objects based on the data. The method may also include determining a second motion feature relating to at least one portion of each of the at least two motion objects based on the data. The method may further include determining whether the at least two motion objects are involved in a fight based on the first motion feature and the second motion feature. 1. A system, comprising: a computer-readable storage medium storing executable instructions for detecting an abnormal scene; and at least one processor in communication with the computer-readable storage medium; when executing the executable instructions, the at least one processor is directed to: obtain data relating to a video scene; identify at least two motion objects in the video scene based on the data; determine a first motion feature relating to the at least two motion objects based on the data; determine a second motion feature relating to at least one portion of each of the at least two motion objects based on the data; and determine whether the at least two motion objects are involved in a fight based on the first motion feature and the second motion feature. 2. The system of claim 1, wherein the at least one processor is further directed to: track movements of the at least two motion objects in the video scene. 3. The system of claim 2, wherein to track movements of the at least two motion objects in the video scene, the at least one processor is further directed to: track an entire body movement of each of the at least two motion objects in the video scene; and track a movement of at least one portion of each of the at least two motion objects in the video scene. 4.
The system of claim 1 , wherein the first motion feature relating to the at least ...

More
16-01-2020 publication date

METHODS AND SYSTEMS FOR IMAGE BASED ANOMALY DETECTION

Number: US20200019790A1
Assignee: YOKOGAWA ELECTRIC CORPORATION

The invention provides methods, systems and computer program products for image based detection of occurrence of an anomalous event within a process environment. Detection of occurrence of an anomalous event comprises (i) receiving a first set of information from a first image acquisition device, (ii) analyzing the first set of information for determining whether the first image frame images an occurrence of an anomalous event, (iii) receiving a second set of information generated at a second device, wherein the second set of information represents a state of the process environment, (iv) analyzing the second set of information for determining whether an anomalous event has occurred, and (v) generating an anomaly identification decision based at least on output from analysis of the second set of information. 1. A method for image based detection of occurrence of an anomalous event within a process environment, the method comprising the steps of: receiving a first set of information from a first image acquisition device configured to image at least a part of the process environment, the first set of information comprising image information extracted from a first image frame generated at the first image acquisition device; analyzing the first set of information for determining whether the first image frame images an occurrence of an anomalous event; responsive to determining that the first image frame images an occurrence of an anomalous event: receiving a second set of information generated at a second device, wherein the second set of information represents a state of the process environment; and analyzing the second set of information for determining whether an anomalous event has occurred; and generating an anomaly identification decision based at least on output from analysis of the second set of information. 2. The method as claimed in claim 1, wherein: analysis of the first set of information outputs a first score; analysis of the second set of information ...
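The two-stage decision structure — consult the second (process-state) source only after the image analysis suggests an anomaly, and confirm based on both — can be sketched as below. Score scales, thresholds and labels are illustrative assumptions:

```python
def anomaly_decision(image_score, sensor_score,
                     image_threshold=0.5, sensor_threshold=0.5):
    """Two-stage decision: the second (process-state) evidence is only
    consulted when image analysis already suggests an anomaly
    (thresholds and labels are illustrative)."""
    if image_score < image_threshold:
        return "normal"                 # image analysis sees nothing anomalous
    if sensor_score >= sensor_threshold:
        return "anomaly_confirmed"      # both sources agree
    return "anomaly_unconfirmed"        # image flagged it, state data did not

# anomaly_decision(0.9, 0.8) -> "anomaly_confirmed"
```

Gating on the image score first mirrors the claim's "responsive to determining" structure: the second device's data is only analyzed when the first frame already images a candidate anomaly.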

More
25-01-2018 publication date

SHOPPING FACILITY ASSISTANCE OBJECT DETECTION SYSTEMS, DEVICES AND METHODS

Number: US20180020896A1
Assignee:

Some embodiments provide apparatuses and methods useful for providing control over movement of motorized transport units. In some embodiments, an apparatus providing control over movement of motorized transport units at a shopping facility comprises a central computer system comprising: a transceiver; a control circuit; a memory coupled to the control circuit and storing computer instructions that when executed by the control circuit cause the control circuit to perform the steps of: obtain, from one or more of the communications received from the motorized transport unit, route condition information comprising information corresponding to an intended route of travel; obtain additional route condition information detected by one or more detectors external to the motorized transport unit; detect an object affecting the intended route of travel; identify an action to be taken by the motorized transport unit with respect to the detected object; and communicate one or more instructions. 1. An apparatus providing control over movement of motorized transport units at a shopping facility, comprising: a transceiver configured to receive communications from the motorized transport unit located at a shopping facility; a control circuit coupled with the transceiver; and a memory coupled to the control circuit and storing computer instructions that when executed by the control circuit cause the control circuit to perform the steps of: obtain, from one or more of the communications received from the motorized transport unit, route condition information acquired by the motorized transport unit, wherein the route condition information comprises information corresponding to an intended route of travel by the motorized transport unit; obtain additional route condition information existing at the shopping facility and detected by one or more detectors external to the motorized transport unit; detect, in response to an evaluation of both the route condition information ...

More
28-01-2016 publication date

SCENE AND ACTIVITY IDENTIFICATION IN VIDEO SUMMARY GENERATION

Number: US20160027470A1
Assignee:

Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated including one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving an indication of an event of interest within video from a user during the capture of the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities. 1. A method for identifying scenes in captured video for inclusion in a video summary, the method comprising: accessing metadata associated with a video, the accessed metadata representative of capture of the video; identifying a plurality of events of interest within the video based on the accessed metadata; for each identified event of interest, identifying a best scene in the video associated with the identified event of interest, the identified best scene comprising a threshold amount of video occurring before and after a video frame or portion corresponding to the identified event of interest; and selecting one or more identified best scenes for inclusion in a video summary. 2. The method of claim 1, further comprising: receiving a request for a video summary from the user; and generating a video summary including a plurality of selected best scenes. 3. The method of claim 2, wherein generating the video summary comprises concatenating the plurality of selected best scenes. 4. The method of claim 1, wherein the accessed metadata is generated by a camera during the capture of the video. 5.
The method of claim 4 , wherein the accessed metadata comprises telemetry data describing a motion of the camera during ...
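The "best scene" idea in this abstract — a threshold amount of video kept before and after each event of interest, then concatenated — can be sketched as below. All names, the padding value, and the interval-merge step are illustrative assumptions, not the patent's implementation.

```python
# Sketch: build padded clip intervals around event timestamps and merge
# overlapping intervals before concatenation into a summary.

def best_scenes(event_times, pad, video_len):
    """Return merged (start, end) clip intervals around each event."""
    intervals = sorted((max(0.0, t - pad), min(video_len, t + pad))
                       for t in event_times)
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # Overlapping clips fuse into one longer scene.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

clips = best_scenes([10.0, 12.0, 50.0], pad=3.0, video_len=60.0)
# clips == [(7.0, 15.0), (47.0, 53.0)]
```

Merging keeps two nearby events from producing duplicated footage in the summary.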

25-01-2018 publication date

METHODS AND APPARATUS TO MEASURE BRAND EXPOSURE IN MEDIA STREAMS

Number: US20180025227A1
Assignee:

Methods and apparatus to measure brand exposure in media streams are disclosed. Disclosed example apparatus include a scene detector to compare a signature of a detected scene of a media presentation with a library of signatures to identify a first reference scene, and a scene classifier to classify the detected scene as a scene of changed interest when a first region of interest in the detected scene does not include a first reference brand identifier included in a corresponding region of interest in the first reference scene. Disclosed example apparatus further include a graphical user interface to present the detected scene, prompt for selection of an area of the first region of interest in the detected scene, and compare the selected area to a library of reference brand identifiers to identify a second reference brand identifier included in the first region of interest in the detected scene. 1. An apparatus to detect brand exposures included in a media presentation , the apparatus comprising:a scene detector to compare a signature determined from a detected scene obtained from the media presentation with a library of reference signatures corresponding to reference media to identify a first reference scene representative of the detected scene;a scene classifier to classify the detected scene as a scene of changed interest in response to determining a first region of interest in the detected scene, which corresponds to a first region of interest specified in the first reference scene, does not include a first reference brand identifier specified as being included in the first region of interest in the first reference scene; anda graphical user interface (GUI) implemented by a processor to present the detected scene in response to the detected scene being classified as a scene of changed interest, the GUI to further prompt for selection of an area of the first region of interest in the detected scene and compare the selected area to a library of reference brand ...
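The scene-detector step — comparing a detected scene's signature against a library of reference signatures to find the closest reference scene — can be illustrated with a toy signature. The 4-bin intensity histogram and L1 distance here are assumptions; the patent does not prescribe a specific signature.

```python
# Sketch: nearest-reference lookup by signature distance.

def signature(frame):
    """Toy signature: normalized intensity histogram over 4 bins."""
    bins = [0, 0, 0, 0]
    for px in frame:
        bins[min(px // 64, 3)] += 1
    return [b / len(frame) for b in bins]

def closest_reference(sig, library):
    """Return the reference id whose signature is nearest in L1 distance."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(library, key=lambda ref_id: l1(sig, library[ref_id]))

library = {"scene_a": [1.0, 0.0, 0.0, 0.0],   # mostly dark reference
           "scene_b": [0.0, 0.0, 0.0, 1.0]}   # mostly bright reference
detected = signature([10, 20, 30, 250])       # three dark pixels, one bright
match = closest_reference(detected, library)
```

Once a reference scene is matched, its regions of interest can be checked for the expected brand identifier, as the claims describe.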

25-01-2018 publication date

Method and System for Motion Vector-Based Video Monitoring and Event Categorization

Number: US20180025230A9
Assignee:

A computer system processes a video stream to detect a start of a first motion event candidate in the video stream, and in response to detecting the start of the first motion event candidate in the video stream, initiates event recognition processing on a first video segment associated with the start of the first motion event candidate. Initiating the event recognition processing further includes: determining a motion track of a first object identified in the first video segment; generating a representative motion vector for the first motion event candidate based on the motion track of the first object; and sending the representative motion vector for the first motion event candidate to an event categorizer, where the event categorizer assigns a respective motion event category to the first motion event candidate based on the representative motion vector of the first motion event candidate. 1. A method of processing a video stream, comprising: processing the video stream to detect a start of a first motion event candidate in the video stream, wherein processing comprises: obtaining a profile of a motion pixel count for a current frame sequence in the video stream; in response to determining that the obtained profile meets a predetermined trigger criterion, determining that the current frame sequence includes a motion event candidate; identifying a beginning time for a portion of the profile meeting the predetermined trigger criterion; and designating the identified beginning time to be the start of the first motion event candidate; and in response to detecting the start of the first motion event candidate in the video stream, initiating event recognition processing on a first video segment associated with the start of the first motion event candidate. 2. The method of claim 1, wherein determining that the obtained profile meets the predetermined trigger criterion includes determining that the motion pixel count satisfies a threshold motion pixel count. 3.
...
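The motion-pixel-count trigger of claims 1-2 — a frame sequence becomes a motion event candidate when its profile meets a threshold, and the first crossing is designated the event start — reduces to a simple scan. The threshold value and names are illustrative.

```python
# Sketch: find the start of a motion event candidate from a per-frame
# motion-pixel-count profile.

def detect_event_start(motion_pixel_counts, threshold):
    """Return the index of the first frame whose motion pixel count
    satisfies the threshold, or None if the criterion is never met."""
    for i, count in enumerate(motion_pixel_counts):
        if count >= threshold:
            return i
    return None

profile = [2, 3, 1, 40, 55, 60, 8]     # motion pixels per frame
start = detect_event_start(profile, threshold=30)
# start == 3: frame 3 is the designated beginning time
```

The video segment beginning at `start` would then be handed to the event categorizer described in the abstract.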

25-01-2018 publication date

SYSTEM AND METHOD FOR PROVIDING SURVEILLANCE DATA

Number: US20180025231A1
Assignee: HANWHA TECHWIN CO., LTD.

A system and method for providing surveillance data are provided. The system includes: a pattern learner configured to learn a time-based data pattern by analyzing at least one of image data of one or more images and sound data of sound obtained from a surveillance zone at a predetermined time or time period, and to generate an event model based on the time-based data pattern; and an event detector configured to detect at least one event by comparing the event model with a time-based data pattern of at least one of first image data of one or more first images and first sound data of first sound obtained from the surveillance zone. 1. A system for providing surveillance data , the system comprising at least one processor to implement:a pattern learner configured to learn a time-based data pattern by analyzing at least one of image data of one or more images and sound data of sound obtained from a surveillance zone at a predetermined time or time period, and to generate an event model based on the time-based data pattern; andan event detector configured to detect at least one event by comparing the event model with a time-based data pattern of at least one of first image data of one or more first images and first sound data of first sound obtained from the surveillance zone.2. The system of claim 1 , wherein the pattern learner comprises:a first learner configured to calculate a statistical data value of at least one of a color, a number of at least one object detected in the surveillance zone, and a degree of movement of the detected at least one object from the one or more images; anda second learner configured to calculate a statistical data value of at least one sound level from the sound,wherein the time-based data pattern corresponds to a time-based variation in the statistical value calculated by the first learner or the second learner.3. 
The system of claim 2 , wherein the pattern learner is configured to generate the event model based on the image data and ...
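The pattern-learner / event-detector split can be sketched as learning a per-hour baseline of a statistic (here, sound level) and flagging deviations. The z-score-style test and the threshold `k` are assumptions; the patent only requires comparing an event model against a new time-based pattern.

```python
# Sketch: learn a time-based baseline per hour, then detect events as
# large deviations from that hour's baseline.
import statistics

def learn_model(samples_by_hour):
    """samples_by_hour: {hour: [observed values]} -> {hour: (mean, stdev)}."""
    return {h: (statistics.mean(v), statistics.pstdev(v))
            for h, v in samples_by_hour.items()}

def is_event(model, hour, value, k=3.0):
    mean, stdev = model[hour]
    return abs(value - mean) > k * max(stdev, 1e-9)

# Daytime is noisy (~40 dB equivalents), night is quiet (~5).
model = learn_model({14: [40, 42, 41, 39], 2: [5, 6, 5, 4]})
event = is_event(model, hour=2, value=60)     # loud noise at night
quiet = is_event(model, hour=14, value=41)    # normal daytime level
```

The same structure extends to the image-side statistics of claim 2 (object counts, degree of movement) by learning one baseline per statistic.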

10-02-2022 publication date

DUAL-MODALITY RELATION NETWORKS FOR AUDIO-VISUAL EVENT LOCALIZATION

Number: US20220044022A1
Assignee:

Dual-modality relation networks for audio-visual event localization can be provided. A video feed for audio-visual event localization can be received. Based on a combination of extracted audio features and video features of the video feed, informative features and regions in the video feed can be determined by running a first neural network. Based on the informative features and regions in the video feed determined by the first neural network, relation-aware video features can be determined by running a second neural network. Based on the informative features and regions in the video feed, relation-aware audio features can be determined by running a third neural network. A dual-modality representation can be obtained based on the relation-aware video features and the relation-aware audio features by running a fourth neural network. The dual-modality representation can be input to a classifier to identify an audio-visual event in the video feed. 1. A system comprising: a hardware processor; a memory coupled with the hardware processor; the hardware processor configured to: receive a video feed for audio-visual event localization; based on a combination of extracted audio features and video features of the video feed, determine informative features and regions in the video feed by running a first neural network; based on the informative features and regions in the video feed determined by the first neural network, determine relation-aware video features by running a second neural network; based on the informative features and regions in the video feed determined by the first neural network, determine relation-aware audio features by running a third neural network; obtain a dual-modality representation based on the relation-aware video features and the relation-aware audio features by running a fourth neural network; input the dual-modality representation to a classifier to identify an audio-visual event in the video feed. 2. The system of claim 1, ...
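The spirit of the first network — using audio to pick out informative video regions — can be illustrated with a toy dot-product attention. Real systems learn these weights; the hand-rolled softmax weighting below is purely an illustrative stand-in, not the patent's networks.

```python
# Toy sketch: audio-guided attention over video region features.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(audio_feat, region_feats):
    """Weight each video region by its similarity to the audio feature,
    then return the weights and the attention-pooled (fused) feature."""
    scores = [sum(a * r for a, r in zip(audio_feat, region))
              for region in region_feats]
    weights = softmax(scores)
    dim = len(region_feats[0])
    fused = [sum(w * region[d] for w, region in zip(weights, region_feats))
             for d in range(dim)]
    return weights, fused

audio = [1.0, 0.0]                                  # "bark-like" audio feature
regions = [[1.0, 0.0], [0.0, 1.0], [0.2, 0.2]]      # region 0 matches the audio
weights, fused = attend(audio, regions)
```

The region most similar to the audio receives the largest weight, which is the "informative region" intuition behind the dual-modality representation.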

10-02-2022 publication date

INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM

Number: US20220044028A1
Author: OAMI Ryoma
Assignee: NEC Corporation

An information processing apparatus detects a stationary object from video data. In addition, the information processing apparatus executes a person detection process of detecting a person in the vicinity of an object (target object) detected as the stationary object for each of a plurality of video frames which include the target object. Furthermore, the information processing apparatus executes a predetermined process by comparing results of the person detection process for each of the plurality of video frames. 1. An image processing apparatus for analyzing at least one video, the at least one video including video frames, the image processing apparatus comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to: detect a stationary object from the video; specify a first video frame including the stationary object; specify, based on the first video frame, a second video frame generated before a time when the stationary object is left behind, the second video frame including the detected stationary object and at least one person; and display both of the first video frame and the second video frame. 2. The image processing apparatus according to claim 1, wherein the first video frame and the second video frame are displayed on a same screen. 3. The image processing apparatus according to claim 2, wherein the at least one processor is further configured to execute the instructions to: execute a person detection process of detecting a person in the vicinity of the stationary object for each of the first video frame and the second video frame; and execute a predetermined process by comparing results of the person detection process for the first video frame and the second video frame. 4. A computer-implemented method for analyzing at least one video, the at least one video including video frames, the method comprising: detecting a stationary object from the video; specifying a first ...
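The frame-selection logic in claim 1 — pick the frame where the object counts as "left behind," plus an earlier frame showing the object together with a person — can be sketched over per-frame detection results. The `stationary_after` count and the frame dictionaries are illustrative assumptions; detection itself is assumed to happen upstream.

```python
# Sketch: choose the two frames the apparatus would display.

def select_frames(frames, stationary_after):
    """frames: list of dicts {"t": int, "object": bool, "people": int}.
    Returns (frame where object is judged left behind,
             last earlier frame showing the object with a person)."""
    present = [f for f in frames if f["object"]]
    if len(present) < stationary_after:
        return None, None
    first = present[stationary_after - 1]      # object judged stationary here
    earlier = [f for f in present
               if f["t"] < first["t"] and f["people"] > 0]
    second = earlier[-1] if earlier else None  # candidate "owner" frame
    return first, second

frames = [
    {"t": 0, "object": False, "people": 2},
    {"t": 1, "object": True, "people": 1},   # person still beside the object
    {"t": 2, "object": True, "people": 0},
    {"t": 3, "object": True, "people": 0},
]
first, second = select_frames(frames, stationary_after=3)
```

Displaying both frames side by side (claim 2) lets an operator see who was near the object before it was abandoned.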

10-02-2022 publication date

BALL TRAJECTORY TRACKING

Number: US20220044423A1
Assignee: PLAYSIGHT INTERACTIVE LTD.

A method of ball trajectory tracking, the method comprising computer executable steps of: receiving a plurality of training frames, each one of the training frames showing a trajectory of a ball as a series of one or more elements, using the received training frames, training a first neuronal network to locate a trajectory of a ball in a frame, receiving a second frame, and using the first neuronal network, locating a trajectory of a ball in the second frame, the trajectory being shown in the second frame as a series of images of the ball having the located trajectory. 1. A method of ball trajectory tracking , the method comprising computer executable steps of:receiving a plurality of training frames, each one of the training frames showing a trajectory of a ball as a series of one or more elements;using the received training frames, training a first neuronal network to locate a trajectory of a ball in a frame;receiving a second frame; andusing the first neuronal network, locating a trajectory of a ball in the second frame, the trajectory being shown in the second frame as a series of images of the ball having the located trajectory,the method further comprising computer executable steps of:receiving a video sequence capturing movement of a ball during a sport event in a series of video frames;calculating a plurality of difference-frames, each difference-frame being calculated over a respective group of at least two of the video frames of the received video sequence; andcombining at least two of the calculated difference-frames, to form a composite frame representing a trajectory taken by the ball in the movement as a series of images of the ball as captured in the received video sequence, the composite frame being one of the group consisting of the training frames and the second frame.2. The method of claim 1 , wherein at least one of the elements represents a respective position of the ball along the trajectory.3. The method of claim 1 , further comprising ...
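The difference-frame construction at the end of the claim — subtract frames so only the moving ball survives, then combine the difference-frames into one composite showing the whole trajectory — is easy to sketch with plain lists. A real pipeline would use NumPy/OpenCV; the per-pixel max combine is one reasonable choice of "combining," not necessarily the patent's.

```python
# Sketch: difference-frames plus a per-pixel max composite.

def diff_frame(a, b):
    """Absolute per-pixel difference of two grayscale frames."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def composite(diffs):
    """Combine difference-frames so every ball position stays visible."""
    out = [[0] * len(diffs[0][0]) for _ in diffs[0]]
    for d in diffs:
        for i, row in enumerate(d):
            for j, v in enumerate(row):
                out[i][j] = max(out[i][j], v)
    return out

# 1x4 "frames": a bright ball (255) moving left to right on a static background.
f0 = [[255, 0, 0, 0]]
f1 = [[0, 255, 0, 0]]
f2 = [[0, 0, 255, 0]]
diffs = [diff_frame(f0, f1), diff_frame(f1, f2)]
traj = composite(diffs)
# traj == [[255, 255, 255, 0]]: the trajectory as a series of ball images
```

Composite frames like `traj` are exactly the kind of training frame the first network is trained on.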

24-01-2019 publication date

VIDEO SURVEILLANCE SYSTEM AND VIDEO SURVEILLANCE METHOD

Number: US20190026564A1
Assignee: PEGATRON CORPORATION

A video surveillance method and a video surveillance system applied the method are provided. The method includes capturing an image of at least a part of a monitored area to obtain a plurality of video streams; sensing the monitored area to obtain a plurality of sensing data; if an image of an object of a video stream is determined as a target object, determining whether the target object triggers a target event according to one of the sensing data corresponding to the video stream; if the target object is determined as triggering the target event, outputting a feature value corresponding to the target object according to a preset analysis condition, the video stream including the target object and the target event; and generating a notification event corresponding to the target object according to the feature value and a model weight value corresponding to the target object. 1. A video surveillance system , for monitoring a monitored area , comprising:a video capture module, comprising a plurality of video capture devices, wherein the video capture devices are respectively disposed adjacent to the monitored area, and each of the video capture devices is configured to capture an image including at least a part of the monitored area to obtain a video stream;a sensing module, comprising a plurality of sensing devices, wherein the sensing devices are respectively disposed adjacent to the corresponding video capture devices, and each of the sensing devices senses the monitored area to obtain sensing data, wherein the plurality of sensing data respectively correspond to the video streams output by the video capture devices; and an image recognition module, receiving the video streams of the video capture devices and determining whether an image of an object fitting a target object exists in each of the video streams;', 'an event determination module, coupled to the image recognition module and receiving the plurality of sensing data of the sensing devices, wherein if 
the ...

24-01-2019 publication date

METHOD AND SYSTEM FOR DETECTING AN UNOCCUPIED REGION WITHIN A PARKING FACILITY

Number: US20190027037A1
Assignee:

A method for detecting an unoccupied region within a parking facility, using at least one environment sensor disposed in a stationary manner within the parking facility, is furnished, at least encompassing the following: sensing measured data of at least one segment of the parking facility by way of at least one environment sensor; comparing the measured data with reference measured data in order to recognize a change in the segment of the parking facility; and detecting the segment as an unoccupied or non-unoccupied region as a function of the recognition of a change. 1. A method for detecting an unoccupied region within a parking facility using at least one environment sensor disposed in a stationary manner within the parking facility , the method comprising:sensing measured data of at least one segment of the parking facility by at least one environment sensor;comparing the measured data with reference measured data to recognize a change in the segment of the parking facility; anddetecting the segment as an unoccupied or non-unoccupied region as a function of the recognition of a change.2. The method of claim 1 , wherein at least two environment sensors are used claim 1 , a segment of the parking facility being sensed by at least two of the environment sensors which are configured differently from one another and/or which implement measurement principles that are different from one another.3. The method of claim 1 , wherein at least two environment sensors are used claim 1 , for each of the environment sensors:measured data of the segment of the parking facility being sensed by the environment sensor;the respective measured data are compared with reference measured data associated with the respective environment sensor, a change is recognized for each environment sensor; andthe segment is detected as an unoccupied region as a function of whether no change is recognized for a specific number of the environment sensors.4. 
The method of claim 3 , wherein the segment ...
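Claim 3's multi-sensor variant — each environment sensor compares its measurement against its own reference, and the segment is reported unoccupied only when enough sensors see no change — amounts to a vote. The tolerance-based comparison and the `quorum` name are assumptions for illustration.

```python
# Sketch: per-sensor change detection with a vote across sensors.

def changed(measured, reference, tol=5):
    """A sensor reports a change if any value deviates beyond tolerance."""
    return any(abs(m - r) > tol for m, r in zip(measured, reference))

def segment_unoccupied(readings, references, quorum):
    """readings/references: one measurement list per sensor.
    Unoccupied iff at least `quorum` sensors see no change."""
    no_change_votes = sum(
        not changed(m, r) for m, r in zip(readings, references))
    return no_change_votes >= quorum

refs = [[10, 10], [100, 100], [7, 7]]   # empty-segment baselines per sensor
now = [[11, 9], [101, 99], [40, 41]]    # third sensor sees a parked car
free = segment_unoccupied(now, refs, quorum=3)
```

With `quorum=3` one dissenting sensor is enough to mark the segment non-unoccupied, matching the cautious reading of claim 3; a lower quorum trades false alarms for robustness to a faulty sensor.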

25-01-2018 publication date

METHOD AND APPARATUS FOR INTEGRATED TRACKING OF VISITORS

Number: US20180027383A1
Assignee:

System and method for tracking a mobile device includes: receiving unique identifications for a mobile device; filtering out the unique identifications to obtain a true identification for the mobile device; identifying cameras relevant to movement of the mobile device; receiving video streams; generating data structures for the video streams and tracking information of the mobile device, the data structure including time stamped videos, and viewpoints of the identified cameras; utilizing the data structures to retrieve video and tracking information for the mobile device and the user, as the mobile device moves in the site; and applying analytics to the retrieved video and tracking information to analyze behavior of the user and to predict what the user will do while on site. 1. A method for tracking a mobile device in a site, the method comprising: receiving, in real time, a plurality of unique identifications for a mobile device visiting the site; filtering out the plurality of unique identifications for the mobile device to obtain a true identification for the mobile device; in real time, identifying cameras relevant to movement of the mobile device responsive to the true identification for the mobile device; receiving video streams of the movement of the mobile device from the identified cameras, time stamping the received video streams and storing the time stamped video streams in a computer storage medium; generating data structures for the video streams and tracking information of the mobile device, the data structure including time stamped videos, and viewpoints of the identified cameras; utilizing the data structures to retrieve, in real time, video and tracking information for the mobile device and the user, as the mobile device moves in the site; and applying analytics to the retrieved video and tracking information to analyze behavior of the user and to predict what the user will do while on site. 2. The method of claim 1, wherein the data structures further include a ...

23-01-2020 publication date

Focalized Behavioral Measurements in a Video Stream

Number: US20200026926A1
Assignee: RICOH COMPANY, LTD.

A system and method for analyzing behavior in a video is described. The method includes extracting a plurality of salient fragments of a video; generating a focalized visualization, based on a time anchor, from one or more of the plurality of salient fragments of the video; tagging a human subject in the focalized visualization with a unique identifier; and analyzing behavior of the human subject, using the focalized visualization, to generate a behavior score associated with the unique identifier and the time anchor. 1. A computer-implemented method comprising:extracting a plurality of salient fragments of a video;generating a focalized visualization, based on a time anchor, from one or more of the plurality of salient fragments of the video;tagging a human subject in the focalized visualization with a unique identifier; andanalyzing behavior of the human subject, using the focalized visualization, to generate a behavior score associated with the unique identifier and the time anchor.2. The computer-implemented method of claim 1 , further comprising:storing the behavior score as a record in a database using the unique identifier and the time anchor as attributes.3. The computer-implemented method of claim 2 , wherein the unique identifier and the time anchor form a tuple to be used as a database key.4. The computer-implemented method of claim 2 , further comprising:performing a query on the database based on selection criteria that is selected from the group consisting of date of record, unique identifier, time anchor, behavior score, minimum behavior score, maximum behavior score, and average behavior score.5. The computer-implemented method of claim 1 , wherein tagging the human subject comprises:detecting a face in a frame of the focalized visualization;identifying the face using a template of a known human subject, wherein the template is associated with the unique identifier; andassociating the face with the unique identifier.6. 
The computer-implemented method ...
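Claims 2-4 store the behavior score keyed by the (unique identifier, time anchor) tuple and query it by criteria such as a minimum score. A minimal sketch with an in-memory dict standing in for the database (all names are illustrative):

```python
# Sketch: behavior-score records keyed by (unique identifier, time anchor).

records = {}

def store(uid, time_anchor, score):
    """The (uid, time_anchor) tuple acts as the database key (claim 3)."""
    records[(uid, time_anchor)] = {"uid": uid, "t": time_anchor,
                                   "score": score}

def query(min_score=None, uid=None):
    """Filter records by optional criteria, as in claim 4."""
    out = []
    for rec in records.values():
        if min_score is not None and rec["score"] < min_score:
            continue
        if uid is not None and rec["uid"] != uid:
            continue
        out.append(rec)
    return out

store("person-1", 10.0, 0.9)
store("person-1", 20.0, 0.4)
store("person-2", 10.0, 0.7)
high = query(min_score=0.6)          # two records score at least 0.6
```

A real system would use an indexed database table with a composite primary key rather than a dict, but the access pattern is the same.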

23-01-2020 publication date

Analysis of Operator Behavior Focalized on Machine Events

Number: US20200026927A1
Assignee:

A system and method for analyzing behavior in a video is described. The method includes extracting a plurality of salient fragments of a video; associating a time anchor with an occurrence of a first machine event of a machine operated by a human subject; generating a focalized visualization, based on the time anchor, from one or more of the plurality of salient fragments of the video; tagging the human subject in the focalized visualization with a unique identifier; and analyzing behavior of the human subject, using the focalized visualization, to generate a behavior score associated with the unique identifier and the first machine event. 1. A computer-implemented method comprising:extracting a plurality of salient fragments of a video;associating a time anchor with an occurrence of a first machine event of a machine operated by a human subject;generating a focalized visualization, based on the time anchor, from one or more of the plurality of salient fragments of the video;tagging the human subject in the focalized visualization with a unique identifier; andanalyzing behavior of the human subject, using the focalized visualization, to generate a behavior score associated with the unique identifier and the first machine event.2. The computer-implemented method of claim 1 , further comprising:storing the behavior score as a record in a database using the unique identifier and the first machine event as attributes.3. The computer-implemented method of claim 2 , further comprising:performing a query on the database based on selection criteria that is selected from the group consisting of date of record, unique identifier, time anchor, first machine event, behavior score, minimum behavior score, maximum behavior score, and average behavior score.4. The computer-implemented method of claim 1 , further comprising:generating a baseline behavior for a behavioral attribute; andproducing a contrastive behavior score by comparing the behavior of the human subject to the ...

28-01-2021 publication date

METHOD AND SYSTEM FOR DETECTING THE OWNER OF AN ABANDONED OBJECT FROM A SURVEILLANCE VIDEO

Number: US20210027068A1
Assignee:

A video surveillance method and system involves transitioning pixel intensities in a region associated with a fixed location from background values to values representing an image of an object when the object is abandoned at a fixed location in a scene in a video, and identifying an instance of time in the video when the object is abandoned, based on the transitioned pixel intensities resulting from the transitioning of the pixel intensities. 1. A video surveillance method, comprising: transitioning pixel intensities in a region associated with a fixed location from background values to values representing an image of the object, when the object is abandoned as shown at the fixed location in a scene in a video; identifying an instance of time in the video when the object is abandoned, based on the transitioned pixel intensities resulting from the transitioning of the pixel intensities in the region associated with the fixed location from the background values to the values representing the image of the object; and generating a first output comprising a video clip of a time before and after the transitioning, wherein the video clip displays an event of abandonment associated with the object, and a second output comprising an image constructed from a cross section in the constructed image intersecting the object and the transitioned pixel intensities in time, wherein at an instant of time in which there is a change in the pixel intensities, a representation in the image of a person associated with the object is confined to a localized region in a vicinity of the object in the image, wherein a search of this localized region results in an indication that the person shown in the localized region is an owner of the object that was abandoned.2. The method of wherein the cross section includes a horizontal cross section.3. The method of wherein the cross section includes a vertical cross section.4.
The method of wherein the cross section includes at least one of a horizontal ...
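The core of the method — watch the pixel intensities at the fixed location over time and find the instant they move from the background value to a persistent new value — can be sketched over a 1-D temporal profile. The tolerance and the persistence (`hold`) check are illustrative assumptions.

```python
# Sketch: find the abandonment instant from a per-frame intensity profile
# at the fixed location.

def abandonment_instant(intensity_over_time, background, tol=10, hold=2):
    """Return the first index where intensity leaves the background value
    and stays away for `hold` consecutive frames, else None."""
    run = 0
    for i, v in enumerate(intensity_over_time):
        if abs(v - background) > tol:
            run += 1
            if run == hold:
                return i - hold + 1
        else:
            run = 0        # brief flicker: not a real transition
    return None

# A bag (intensity ~200) appears at frame 4 on a ~50-intensity background.
profile = [50, 52, 49, 51, 200, 201, 199, 200]
t = abandonment_instant(profile, background=50)
# t == 4
```

Frames around `t` are the natural source of the "before and after" video clip, and the person near the object at `t` is the owner candidate.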

23-01-2020 publication date

SUPPLEMENTING VIDEO MATERIAL

Number: US20200029027A1
Assignee:

The present disclosure relates to a computer-implemented method for supplementing video material, the method comprising: controlling a display device to display a first video, associated with a first camera and retrieved from a storage device; analyzing the first video to automatically detect at least one camera, in particular located in an area of interest; and controlling the display device to display a second video associated with a second camera among the detected at least one camera in response to a user input. 1. A computer-implemented method for supplementing video material , the method executed by one or more processing devices and comprising:controlling a display device to display a first video associated with a first camera and retrieved from a storage device;analyzing the first video to detect at least one additional camera; andcontrolling the display device to display a second video associated with a second camera among the detected at least one additional camera in response to a user input.2. The computer-implemented method of claim 1 , wherein controlling the display device to display the second video comprises:receiving, as the user input, a selection input for selecting the second camera input by the user by means of an input device;identifying a video frame of the second video corresponding to the displayed video frame of the first video; andcontrolling the display device to display the second video starting from the identified video frame.3. The computer-implemented method of claim 1 , wherein controlling the display device to display the second video comprises switching to the second video.4. The computer-implemented method of claim 1 , wherein detecting the at least one additional camera comprises:performing pattern recognition on the video data of the first video to detect at least one camera object in the video data.5. 
The computer-implemented method of claim 1 , wherein detecting the at least one additional camera comprises:performing person ...

02-02-2017 publication date

VIDEO MONITORING METHOD, VIDEO MONITORING SYSTEM AND COMPUTER PROGRAM PRODUCT

Number: US20170032194A1
Author: LI CHAO, SHANG Zeyuan, Yu Gang
Assignee:

The present disclosure relates to a video monitoring method based on a depth video, a video monitoring system and a computer program product. The video monitoring method comprises: obtaining video data collected by a video collecting apparatus; determining an object as a monitoring target based on the video data; and extracting feature information of the object, wherein the video data is video data containing depth information. 1. A video monitoring method comprising:obtaining video data collected by a video collecting apparatus;determining an object as a monitoring target based on the video data; andextracting feature information of the object,wherein the video data is video data containing depth information.2. The video monitoring method according to claim 1 , further comprising:configuring the video collecting apparatus and determining coordinate parameters of the video collecting apparatus.3. The video monitoring method according to claim 2 , wherein determining coordinate parameters of the video collecting apparatus comprise:selecting multiple reference points on a predetermined reference plane;determining transformation relationship between a camera coordinate system of the video collecting apparatus and a world coordinate system based on coordinate information of the multiple reference points; anddetermining the coordinate parameters of the video collecting apparatus based on the transformation relationship.4. The video monitoring method according to claim 1 , wherein determining an object as a monitoring target based on the video data comprises:determining background information in the video data;determining foreground information in each frame of the video data based on the background information;obtaining edge profile information of a foreground area corresponding to the foreground information; anddetermining the object based on the edge profile information.5. 
The video monitoring method according to claim 4 , wherein determining the object based on the ...
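Claim 3's calibration — select reference points on a known plane and determine the transformation between camera and world coordinates from them — can be illustrated with a deliberately simplified per-axis scale-plus-offset model fitted from two reference points. Real calibration solves a full homography or extrinsic pose; this sketch only shows the reference-point idea.

```python
# Sketch: recover a camera-to-world mapping from reference points.

def fit_axis(samples):
    """samples: two (camera_coord, world_coord) pairs on one axis.
    Fit world = a * camera + b."""
    (c0, w0), (c1, w1) = samples
    a = (w1 - w0) / (c1 - c0)
    return a, w0 - a * c0

def calibrate(points):
    """points: [((cx, cy), (wx, wy)), ...], at least two reference points."""
    ax, bx = fit_axis([(c[0], w[0]) for c, w in points[:2]])
    ay, by = fit_axis([(c[1], w[1]) for c, w in points[:2]])
    return lambda cx, cy: (ax * cx + bx, ay * cy + by)

# Two markers on the floor plane: image pixel (0, 0) is world (1.0, 2.0) m,
# pixel (100, 50) is world (3.0, 3.0) m.
to_world = calibrate([((0, 0), (1.0, 2.0)), ((100, 50), (3.0, 3.0))])
pt = to_world(50, 25)
# pt == (2.0, 2.5): midpoint in pixels maps to the midpoint in metres
```

With depth information available, the same reference-point approach extends to a full 3-D camera-to-world transform.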

02-02-2017 publication date

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

Number: US20170032443A1
Author: NAKASHIMA Teruyoshi
Assignee: FUJIFILM Corporation

In the image processing apparatus, the image processing method, the program and the recording medium, the first product material selector selects the first product material from among the plurality of product materials in accordance with the instruction of the user. The second product material selector selects the second product material that is different from the first product material from among the plurality of product materials. The product creator creates the recommended product by applying the first image constituting at least part of the group of images to the second product material. When second images constituting at least part of the group of images are displayed on the display of the terminal device of the user in accordance with the instruction of the user, the display controller causes the recommended product to be displayed, together with the second images, on the display at least once. 1. An image processing apparatus comprising:a product material storage configured to store a plurality of product materials therein;an instruction acquiring section configured to acquire an instruction input by a user;a group-of-image acquiring section configured to acquire a group of images in accordance with an instruction of the user;a first product material selector configured to select a first product material from among the plurality of product materials in accordance with an instruction of the user;a second product material selector configured to select a second product material that is different from the first product material from among the plurality of product materials;a product creator configured to create a recommended product by applying a first image constituting at least part of the group of images to the second product material; anda display controller configured to, when second images constituting at least part of the group of images are displayed on a display of a terminal device of the user in accordance with an instruction of the user, cause the ...

Publication date: 01-02-2018

METHODS AND SYSTEMS OF PERFORMING ADAPTIVE MORPHOLOGY OPERATIONS IN VIDEO ANALYTICS

Number: US20180033152A1
Assignee:

Techniques and systems are provided for processing video data. For example, techniques and systems are provided for performing content-adaptive morphology operations. A first erosion function can be performed on a foreground mask of a video frame, including setting one or more foreground pixels of the frame to one or more background pixels. A temporary foreground mask can be generated based on the first erosion function being performed on the foreground mask. One or more connected components can be generated for the frame by performing connected component analysis to connect one or more neighboring foreground pixels. A complexity of the frame (or of the foreground mask of the frame) can be determined by comparing a number of the one or more connected components to a threshold number. A second erosion function can be performed on the temporary foreground mask when the number of the one or more connected components is higher than the threshold number. The one or more connected components can be output for blob processing when the number of the one or more connected components is lower than the threshold number. 1. A method of performing content-adaptive morphology operations , the method comprising:performing a first erosion function on a foreground mask of a frame, the first erosion function setting one or more foreground pixels of the foreground mask to one or more background pixels;determining a complexity of the foreground mask; anddetermining whether to perform one or more additional erosion functions for the frame based on the determined complexity of the foreground mask.2. The method of claim 1 , further comprising:generating one or more connected components by performing connected component analysis on foreground pixels of the foreground mask to connect one or more neighboring foreground pixels; andwherein the complexity of the foreground mask is determined by comparing a number of the one or more connected components to a threshold number.3. The method of ...
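The erode / count-components / compare-to-threshold loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: a 4-neighbour structuring element, flood-fill component counting, and the `threshold` and `max_erosions` values are all illustrative.

```python
import numpy as np

def erode(mask):
    """4-neighbour binary erosion: a pixel stays foreground only if it
    and its four neighbours are all foreground."""
    p = np.pad(mask, 1, constant_values=False)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def count_components(mask):
    """Connected-component count by flood fill (4-connectivity)."""
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

def adaptive_morphology(foreground_mask, threshold=20, max_erosions=3):
    """Erode once; if the temporary mask still has more connected
    components than `threshold`, erode again, otherwise output the
    components for blob processing."""
    mask = foreground_mask.astype(bool)
    for _ in range(max_erosions):
        mask = erode(mask)                       # first / additional erosion
        if count_components(mask) <= threshold:  # complexity check
            break
    return mask
```

A simple foreground mask with one 4x4 blob collapses to its 2x2 interior after one erosion, at which point the single remaining component is below the threshold and the loop stops.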

Publication date: 04-02-2016

SHARING DIGITAL MEDIA ASSETS FOR PRESENTATION WITHIN AN ONLINE SOCIAL NETWORK

Number: US20160036900A1
Assignee: C/O KODAK ALARIS INC.

A method for reducing the number of images or the length of a video from a digital image collection using a social network includes receiving a digital image collection captured by a user to be viewed by a viewer, wherein the viewer and the user are members of the same social network, and using a processor to access the social network to determine a relationship between the viewer and the user. The method further includes using the processor to determine a set of summarization parameters based on the relationship between the viewer and the user, and using the processor to reduce the number of images or the length of the video from the digital image collection using the determined set of summarization parameters to be viewed by the viewer. 1. A method for sharing digital media assets for presentation within an online social network, comprising: a) identifying a user's types of social connections with viewers on the online social network, including friend connections, associative connections, and image connections; b) using a processor to analyze the user's digital media assets to identify aesthetic quality, particular activities of interest, location of the captured scene, significant objects in the scene, and persons; c) using the processor to indicate that a viewer can only see digital media assets of a certain content based on the type of social connection between the viewer and the user; and d) sharing the digital media within the online social network based on the social connection type and the digital media content type. 2. The method of claim 1, wherein the friend connection includes a relative, coworker, classmate, or friend; the associate connection includes membership in common or related groups, or similarity in interests; and the image connection exists where people appear together in a digital media asset. 3.
The method of claim 1, wherein other factors are considered when determining a social connection type ...

Publication date: 04-02-2016

Dynamic System and Method for Detecting Drowning

Number: US20160037138A1
Author: UDLER Danny
Assignee:

The present invention discloses a dynamic system for identifying and alerting to drowning in a pool. The system is comprised of: at least one camera movable along a rail installed underwater within the pool; and at least one controller for determining camera movement based on analyzing images captured by said camera, such that the camera's viewing area is not obstructed, and for identifying and alerting to a drowning pattern by analyzing the images captured by said movable camera. 1. A dynamic system for identifying and alerting to drowning in a pool, said system comprised of: a rail installed underwater within the pool; at least one camera movable along the rail; and at least one controller for determining camera movement based on analyzing images captured by said camera, such that the camera's viewing area is not obstructed, and for identifying and alerting to a drowning pattern by analyzing images captured by said movable camera. 2. The system of claim 1, wherein the rail is designed as a hollow tube having a transparent surface at least at the front part of the housing, and wherein the camera is installed on a moving element having a motor and wheels, enabling the camera to be moved within the tube. 3. The system of claim 1, further comprising a moving element having a motor and wheels, wherein the camera and motor are connected to an electrical cable whose far end is connected to a controller module located outside the water. 4. The system of claim 1, wherein the camera and the controller are integrated in one housing of the moving element, which is movable along the rail. 5. The system of claim 1, wherein the controller includes a movement module for controlling the movement of the camera along the rail, wherein the movement control is based on analyzing images captured by the camera and environmental conditions, for identifying obstruction or lack of clarity in the field of view of the camera. 6.
The system of claim 5, wherein the analysis for camera movement control includes: during an idle state or routine ...

Publication date: 31-01-2019

Methods and systems for camera-side cropping of a video feed

Number: US20190035241A1
Assignee: Google LLC

The disclosed embodiments include systems and methods for camera-side cropping of a video feed. In one aspect, a method includes: (1) displaying a first video feed with a first field of view at a first resolution; (2) detecting a first user input to enhance an identified portion of the first video feed; and (3) in response to detecting the first user input: (a) generating a camera crop command for the identified portion instructing the camera to generate a second video feed corresponding to the identified portion, the second video feed having the first resolution and a second field of view that is smaller than the first field of view; (b) sending the camera crop command to the camera; (c) in response, receiving the second video feed from the camera; and (d) displaying the second video feed with the second field of view at the first resolution.

Publication date: 31-01-2019

VIDEO PROCESSING ARCHITECTURES WHICH PROVIDE LOOPING VIDEO

Number: US20190035428A1
Assignee: ADOBE SYSTEMS INCORPORATED

Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated. 1. A method , performed by a computing device , for providing looping video , the method comprising:converting a high-resolution video clip, having a plurality of high-resolution frames, to a lower-resolution video clip having a plurality of lower-resolution frames;creating a plurality of edgemaps of the plurality of lower-resolution frames by performing edge detecting on the plurality of lower-resolution frames;forming, using the plurality of lower-resolution frames and the plurality of edgemaps, a confusion matrix that identifies pixels that are parts of edges;generating a filtered confusion matrix by convolving the confusion matrix with a diagonal filter;determining a candidate transition point by identifying a minimum value in the filtered confusion matrix; andrendering a candidate looping video from the high-resolution video clip, wherein the candidate looping video has a start frame and an end frame corresponding to the candidate transition point.2. The method of claim 1 , further comprising constraining the determining a candidate transition point such that the candidate looping video has at least one of a minimum duration or a maximum duration.3. The method of claim 1 , further ...
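The edgemap-based transition search can be sketched as follows. Several simplifications are assumptions, not the patent's method: finite-difference gradients stand in for the edge detector, the pairwise frame "confusion matrix" is searched directly for its minimum, the diagonal-filter convolution is omitted, and `min_len` enforces a minimum loop duration.

```python
import numpy as np

def edgemap(frame):
    """Crude edge map: gradient magnitude by finite differences
    (a stand-in for a real edge detector)."""
    gy = np.abs(np.diff(frame, axis=0, prepend=frame[:1, :]))
    gx = np.abs(np.diff(frame, axis=1, prepend=frame[:, :1]))
    return gx + gy

def candidate_transition(frames, min_len=2):
    """Return the (start, end) frame pair whose edge maps differ least,
    i.e. the minimum entry of the frame-pair confusion matrix."""
    maps = [edgemap(np.asarray(f, dtype=float)) for f in frames]
    best, best_cost = None, float("inf")
    for i in range(len(frames)):
        for j in range(i + min_len, len(frames)):
            cost = float(np.abs(maps[i] - maps[j]).mean())  # confusion entry
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best
```

Picking start and end frames with near-identical edge maps is what mitigates the "teleporting" of objects when the loop repeats.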

Publication date: 30-01-2020

SYSTEM AND METHOD FOR MOBILE FEEDBACK GENERATION USING VIDEO PROCESSING AND OBJECT TRACKING

Number: US20200034628A1
Assignee:

A method includes generating, with a mobile device having a camera, images of a portion of a real space containing an object having a position. The control device tracks the position of the object based at least on the images received from the mobile device. The control device monitors the position of the object for an event conforming to a rule specified at the control device. The event is based on the rule and the position of the object in the real space. The control device, or a client device, generates an indication that the event has been detected by the control device. 1. A computer-implemented method comprising:generating, with a mobile device comprising a camera, a plurality of images of a portion of a real space containing an object having a position;tracking, at the control device, the position of the object based at least on the plurality of images received from the mobile device;monitoring, at the control device, the position of the object for an event conforming to a rule, the event based on the rule and the position of the object in the real space;generating, at the control device or at a client device, an indication that the event has been detected by the control device;initiating a match at the control device; andpairing the mobile device to the control device causing the images received from the mobile device to be associated with the match based on a match identifier and a device identifier, where commands transmitted from the control device are executed by the mobile device when the command includes the match identifier and the device identifier.2. The computer-implemented method of claim 1 , the tracking further comprising:identifying the object in the plurality of images based at least on a property of the object as it appears in the plurality of images;mapping image coordinates of the object in the plurality of images to real coordinates of the object in the real space;calculating a trajectory of the object based at least on the real ...
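The mapping of image coordinates to real coordinates and the trajectory calculation in claim 2 are commonly done with a planar homography. The sketch below assumes a calibrated 3x3 homography `H` is available; that calibration step is an assumption, not something the abstract specifies.

```python
import numpy as np

def image_to_real(H, pts_img):
    """Map pixel coordinates to real-space coordinates on a plane
    using a 3x3 homography H (assumed known from calibration)."""
    pts = np.hstack([np.asarray(pts_img, dtype=float),
                     np.ones((len(pts_img), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective divide

def trajectory(H, pts_img, dt):
    """Velocity estimates between successive real-space positions of
    the tracked object, sampled every `dt` seconds."""
    real = image_to_real(H, pts_img)
    return np.diff(real, axis=0) / dt
```

A rule such as "object crossed line X" can then be evaluated on the real-space positions rather than on raw pixels, which keeps the rule independent of where the mobile device's camera happens to be.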

Publication date: 04-02-2021

METHOD FOR IDENTIFYING A PERSON IN A VIDEO, BY A VISUAL SIGNATURE FROM THAT PERSON, ASSOCIATED COMPUTER PROGRAM AND DEVICE

Number: US20210034877A1
Assignee:

A method includes for each of a plurality of successive images in a camera video stream, searching for at least one person present in the image and defining, in the image, for each person found, a field, called person field, at least partially surrounding that person; for each of at least one person found, gathering into a track segment several person fields derived from successive images and at least partially surrounding that same person; for each track segment, identifying the person in that track segment, by a visual signature from that person, this identification including: for each person field in the track segment, determining a visual signature from the person in that track segment, called local visual signature; determining an aggregated visual signature from the local visual signatures; and identifying the person in that track segment from the aggregated visual signature. 1. A method for identifying a person in a video , by a visual signature from that person , the method comprising:for each of a plurality of successive images of a video stream from a camera, searching for at least one person present in the image and defining, in the image, for each person found, a field, called person field, at least partially surrounding that person;for each of at least one person found, gathering into a track segment several person fields derived from successive images and at least partially surrounding that same person;for each track segment, identifying the person in that track segment by a visual signature from that person, this identification comprising:for each person field in the track segment, determining a visual signature from the person, in that track segment, called local visual signature,determining an aggregated visual signature from the local visual signatures, andidentifying the person in the track segment from the aggregated visual signature.2. 
The method of claim 1, wherein the aggregated visual signature is a mean of the local visual signatures of the ...
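Claim 2 states that the aggregated visual signature is the mean of the local signatures. Matching that aggregate against a gallery of reference signatures is shown below as an assumption: the gallery structure and the Euclidean metric are illustrative, not taken from the patent.

```python
import numpy as np

def aggregate_signature(local_signatures):
    """Aggregated visual signature of a track segment: the mean of the
    per-person-field local signatures (claim 2)."""
    return np.mean(np.asarray(local_signatures, dtype=float), axis=0)

def identify(local_signatures, gallery):
    """Identify the person in a track segment as the gallery identity
    whose reference signature is closest to the aggregated one
    (nearest-neighbour matching is an illustrative assumption)."""
    agg = aggregate_signature(local_signatures)
    return min(gallery, key=lambda name: np.linalg.norm(agg - gallery[name]))
```

Averaging over all person fields in the track segment smooths out poor individual detections before the identity decision is made.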

Publication date: 04-02-2021

Time-based automatic video feed selection for a digital video production system

Number: US20210034880A1
Assignee: SLING MEDIA PVT LTD

A video production device is deployed to produce a video production stream of an event occurring within an environment that includes a plurality of different video capture devices capturing respective video input streams of the event. The video production device is programmed and operated to: receive a plurality of video input streams from the plurality of different video capture devices; automatically sequence through the streams to select one of them as a current video output stream, in accordance with a predetermined switching sequence associated with the video capture devices; and provide the selected stream as the current video output stream for a designated time interval associated with the video capture device that captured the selected video input stream.
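The automatic sequencing can be sketched as a fixed rotation with a per-device hold time. The device names and interval values below are illustrative assumptions.

```python
import itertools

def switching_schedule(devices, hold_times, total_time):
    """Cycle through the capture devices in a predetermined switching
    sequence, selecting each as the current output stream for its
    designated time interval. Returns (device, switch_time) pairs
    covering `total_time` seconds."""
    schedule, t = [], 0.0
    for device in itertools.cycle(devices):
        if t >= total_time:
            break
        schedule.append((device, t))
        t += hold_times[device]
    return schedule
```

With two cameras holding for 5 and 3 seconds respectively, a 12-second production switches cam1 → cam2 → cam1 without any operator input.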

Publication date: 04-02-2021

SYSTEM FOR MANAGEMENT OF INSURANCE RISK AND INSURANCE EVENTS

Number: US20210035231A1
Assignee: Scientia Potentia Est, LLC.

This system is directed to the management of insurance information, risks and coverage, wherein a set of non-transitory computer readable instructions included in a kiosk disposed at the construction site can include instructions for creating a certificate of insurance according to a determination that the insurance requirements have been met and storing the certificate of insurance in the distributed ledger. 1. A computerized system for management of insurance risk and insurance events comprising: a kiosk having a kiosk computer readable medium uniquely associated with a construction site and in communication with a distributed ledger; a sensor in communication with the kiosk for detecting a presence of workers at the construction site, materials at the construction site, and events and conditions occurring at the construction site; and a set of non-transitory computer readable instructions included in the kiosk computer readable medium for: for each worker present at the construction site, determining an arrival time, a departure time, an amount of time worked and a worker class; identifying a set of construction materials delivered to the construction site; detecting an installation action performed by a worker when installing the set of construction materials; detecting an insurance event; identifying a set of environmental conditions associated with the insurance event; associating the insurance event with the set of environmental conditions; creating an insurance record according to the insurance event; and transmitting the insurance record to a third party computer device. 2. The computerized system of claim 1, including a unique identifier taken from a group of an RFID, a wireless signal, a digital identifier, an alphanumeric character, a graphic, or any combination thereof. 3. The computerized system of claim 1, wherein the insurance event is a material loss according to a material identifier associated with a construction material. 4.
The ...

Publication date: 08-02-2018

METHOD AND SYSTEM FOR AGGREGATING VIDEO CONTENT

Number: US20180039820A1
Assignee:

Aspects of the subject disclosure may include, for example, systems and methods aggregating video content and adjusting the aggregate video content according to a training model. The adjusted aggregate video content comprises a first subset of the images and does not comprise a second subset of the images. The first subset of the images is determined by the training model based on a plurality of categories corresponding to a plurality of events. The illustrative embodiments also include presenting the adjusted aggregate video content and receiving identifications for the first subset of the images in the aggregate video content. Further, the illustrative embodiments include adjusting the training model according to the identifications and providing the adjusted training model to a network device. Other embodiments are disclosed. 1. A device, comprising: a processing system including a processor; and receiving video content from each of a plurality of cameras oriented toward a current premises resulting in a plurality of video content, wherein the plurality of video content comprises images of a plurality of events; aggregating the plurality of video content to generate aggregate video content; applying a selected training model to the aggregate video content resulting in adjusted aggregate video content, wherein the adjusted aggregate video content comprises a first subset of the images and does not comprise a second subset of the images, and wherein the first subset of the images is determined by the selected training model based on a plurality of categories corresponding to the plurality of events; presenting the adjusted aggregate video content; receiving user-generated input for the adjusted aggregate video content, wherein the user-generated input provides identifications for the first subset of the images in the aggregate video content; adjusting the selected training model according to the user-generated input resulting in an adjusted training
...

Publication date: 08-02-2018

SINGLE CALL-TO-CONNECT LIVE COMMUNICATION TERMINAL, METHOD AND TOOL

Number: US20180039836A1
Author: SONG Chenfeng
Assignee:

The present invention discloses a real-time communication terminal, method and tool that can be connected by unilateral calls, wherein the real-time communication terminal receives a connection request from a trusted user; automatically initiates an IP communication with the trusted user in response to receiving the connection request by automatically issuing a response to it; and, in the IP communication with the trusted user, sends the acquired video and audio to the trusted user and receives at least the audio from the trusted user. Compared with the prior art, the invention enhances the communication experience between the trusted user and the monitored side by having the communication terminal, which can be connected by unilateral calls, respond automatically to the connection request of the trusted user. 1. A real-time communication terminal that can be connected by unilateral calls, comprising: a video capturing unit, an audio capturing unit, a speaker and a transceiver; video and audio signals captured by the video capturing unit and the audio capturing unit are transmitted through the transceiver, and audio signals received by the transceiver are output through the speaker, wherein, after receiving a connection request from a trusted user, the transceiver automatically issues a response to the connection request, thereby automatically establishing IP communication with the trusted user. 2.
The real-time communication terminal according to claim 1, wherein the transceiver, after automatically establishing an IP communication with a trusted user, transmits only the video and audio signals acquired by the video capturing unit and the audio capturing unit to the trusted user; in response to a bidirectional communication request from the trusted user, the transceiver transmits the video and audio signals to the trusted user ...

Publication date: 12-02-2015

SYSTEM AND METHOD FOR CONTEXUALLY INTERPRETING IMAGE SEQUENCES

Number: US20150043778A1
Assignee:

A system and method for contextually interpreting image sequences are provided. The method comprises receiving video from one or more video sources, and generating one or more questions associated with one or more portions of the video based on at least one user-defined objective. The method further comprises sending the one or more portions of the video and the one or more questions to one or more assistants, receiving one or more answers to the one or more questions from the one or more assistants, and determining a contextual interpretation of the video based on the one or more answers and the video. 1. A method, comprising: receiving, by a computer, video from one or more video sources; generating, by the computer, one or more questions associated with one or more portions of the video; sending, by the computer, the one or more portions of the video and the one or more questions to one or more assistants; receiving, by the computer, one or more answers to the one or more questions from the one or more assistants; and determining, by the computer, a contextual interpretation of the video based on the one or more answers and the video. 2. The method of claim 1, wherein: the video comprises recorded image sequences recorded by a surveillance system; the one or more assistants are one or more persons; and the determining the contextual interpretation by the computer uses interpretations of both the one or more persons and the computer. 3. The method of claim 2, further comprising identifying, by the computer, the one or more portions of the video for which the determining the contextual interpretation requires assistance based on at least one user-defined objective for the contextual interpretation. 4. The method of claim 3, wherein the at least one user-defined objective for the contextual interpretation comprises quantifying one or more objects in the video. 5.
The method of claim 3, wherein the at least one user-defined objective comprises identifying ...

Publication date: 09-02-2017

Trick Play in Digital Video Streaming

Number: US20170041681A1
Assignee:

System and methods for improved playback of a video stream are presented. Video snippets are identified that include a number of consecutive frames for playback. Snippets may be evenly temporally spaced in the video stream or may be content adaptive. Then the first frame of a snippet may be selected as the first frame of a scene or other appropriate stopping point. Scene detection, object detection, motion detection, video metadata, or other information generated during encoding or decoding of the video stream may aid in appropriate snippet selection. 1.-25. (canceled) 26. A method for displaying a stream of video data, comprising, when a fast forward play mode is engaged: identifying a first plurality of frames from the stream of video data to be displayed during the fast forward play mode, the frames in the first plurality having a uniform temporal spacing from each other as determined by a rate of the fast forward play mode; for each respective frame in the first plurality, identifying a second plurality of frames from the stream in an interval between the respective frame and a next frame in the first plurality, wherein the second plurality of frames are consecutive with their respective first frame in regular playback mode; and displaying each first frame and its associated second plurality of frames at a normal playback rate. 27. The method of claim 26, wherein the displaying comprises: displaying at least one frame in the second plurality of frames at a snippet transition for an extended period of time, wherein a last frame of the second plurality of frames in an interval between the respective frame and the next frame in the first plurality is displayed for the extended period of time. 28. The method of claim 26, wherein the displaying comprises: displaying frames at a snippet transition for an extended period of time, wherein a last frame of the second plurality of frames in an interval between the respective frame and a next frame in the first plurality is ...
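The uniformly spaced snippet selection of claim 26 reduces to simple index arithmetic. The parameter names `rate` (anchor spacing in frames, set by the fast-forward speed) and `snippet_len` are illustrative.

```python
def snippet_frames(total_frames, rate, snippet_len):
    """Frames displayed in fast-forward mode: anchor frames with a
    uniform temporal spacing of `rate` frames, each followed by up to
    `snippet_len - 1` consecutive frames shown at normal playback
    speed."""
    shown = []
    for start in range(0, total_frames, rate):
        shown.extend(range(start, min(start + snippet_len, total_frames)))
    return shown
```

Playing each anchor together with a few consecutive frames, rather than isolated stills, is what keeps motion legible during the fast-forward scan.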

Publication date: 24-02-2022

AUTOMATIC CONFIGURATION OF ANALYTICS RULES FOR A CAMERA

Number: US20220058393A1
Assignee:

Example implementations include a method, apparatus and computer-readable medium for controlling a camera, comprising receiving a video sequence of a scene. The method includes determining one or more scene description metadata in the scene from the video sequence. The method includes identifying one or more scene object types in the scene based on the one or more scene description metadata. The method includes determining one or more rules based on one or both of the scene description metadata or the scene object types, each rule configured to generate an event based on a detected object following a rule-specific pattern of behavior. The method includes applying the one or more rules to operation of the camera. 1. A method of controlling a camera , comprising:receiving, at a processor from the camera, a video sequence of a scene;determining, at the processor, one or more scene description metadata in the scene from the video sequence;identifying, at the processor, one or more scene object types in the scene based on the one or more scene description metadata;determining, at the processor, one or more rules based on one or both of the scene description metadata or the scene object types, wherein each rule is configured to generate an event based on a detected object following a rule-specific pattern of behavior; andapplying, at the processor, the one or more rules to operation of the camera.2. The method of claim 1 , further comprising:storing one or more object-specific rules corresponding to each of a plurality of object types; andwherein determining the one or more rules comprises:identifying a matching object type as one of the plurality of object types that matches with one of the one or more scene object types; andselecting as the one or more rules the one or more object-specific rules corresponding to the matching object type.3. 
The method of claim 1, further comprising: receiving a subsequent video sequence; detecting an event based on the one or more rules ...
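The object-specific rule selection of claim 2 amounts to a lookup from stored rules keyed by object type. The rule names and object types below are illustrative assumptions, not values from the patent.

```python
# Stored object-specific rules, keyed by object type (illustrative).
OBJECT_RULES = {
    "person": ["loitering", "line_crossing"],
    "vehicle": ["wrong_way", "illegal_parking"],
}

def rules_for_scene(scene_object_types):
    """Select the stored object-specific rules whose object type matches
    one of the object types identified in the scene; unknown types
    simply contribute no rules."""
    selected = []
    for obj_type in scene_object_types:
        selected.extend(OBJECT_RULES.get(obj_type, []))
    return selected
```

Once selected, each rule is applied to the camera's subsequent video sequences to generate events when a detected object follows the rule-specific pattern of behavior.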

Publication date: 24-02-2022

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Number: US20220058395A1
Author: TAKAHASHI Katsuhiko
Assignee: NEC Corporation

An information processing apparatus includes an event detection unit, an input reception unit, and a processing execution unit. The event detection unit detects a specific event from video data. The input reception unit receives, from a user, input for specifying processing to be executed. The processing execution unit executes first processing specified by input received by the input reception unit, and executes second processing of generating learning data used for machine learning and storing the generated learning data in a learning data storage unit. The processing execution unit discriminates, in the second processing, based on a classification of the first processing specified by input received by the input reception unit, whether a detection result of a specific event is correct, and generates learning data including at least a part of the video data, category information indicating a category of the specific event detected by the event detection unit, and correct/incorrect information indicating whether the detection result of the specific event is correct or incorrect. 1.
An information processing apparatus comprising: an event detection unit detecting a specific event from video data; an input reception unit receiving, from a user, input for specifying processing to be executed; and a processing execution unit executing first processing specified by the input, and executing second processing of generating learning data used for machine learning and storing the generated learning data in a storage apparatus, wherein the processing execution unit discriminates, based on a classification of the first processing specified by the input, whether a detection result of the specific event is correct, and generates the learning data including at least a part of the video data, category information indicating a category of the detected specific event, and correct/incorrect information indicating whether the detection result of the specific event is correct or incorrect ...

Publication date: 07-02-2019

Protection and recovery of identities in surveillance camera environments

Number: US20190042851A1
Assignee: Intel Corp

A mechanism is described for facilitating protection and recovery of identities in surveillance camera environments according to one embodiment. An apparatus of embodiments, as described herein, includes detection and reception logic to receive a video stream of a scene as captured by a camera, wherein the scene includes persons. The apparatus may further include recognition and application logic to recognize an abnormal activity and one or more persons associated with the abnormal activity in a video frame of the video stream. The apparatus may further include identity recovery logic to recover one or more identities of the one or more persons in response to the abnormal activity, where the one or more identities are recovered from masked data and encrypted residuals associated with the one or more persons.

Publication date: 06-02-2020

Methods and Systems for Presenting Multiple Live Video Feeds in a User Interface

Number: US20200042166A1
Assignee: Google LLC

A method, in an application executing at a client device, includes: receiving a plurality of video feeds, each video feed of the plurality of video feeds corresponding to a respective remote camera of a plurality of remote cameras, where the video feeds are received concurrently by the device from a server system communicatively coupled to the remote cameras; displaying a first user interface, the first user interface including a plurality of user interface objects, each user interface object of the plurality of user interface objects being associated with a respective remote camera of the remote cameras; and displaying in each user interface object of the plurality of user interface objects the video feed corresponding to the respective remote camera with which the user interface object is associated, where at least one of the video feeds is displayed with cropping.

06-02-2020 publication date

Food preparation assistance using image processing

Number: US20200043156A1
Assignee: International Business Machines Corp

A computer system monitors food preparation. Images of food preparation by a user are captured via one or more image capture devices disposed within an area containing food preparation items. A food preparation process being performed by the user is determined. Image processing is performed on the captured images to monitor the food preparation process and detect an event. The user is notified of the detected event, and provided information pertaining to the event. Embodiments of the present invention further include a method and program product for monitoring food preparation in substantially the same manner described above.

16-02-2017 publication date

APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT FOR VIDEO ENHANCED PHOTO BROWSING

Number: US20170046053A1
Author: Liu Yingfei
Assignee:

Mechanisms are described for enhancing a user's photo browsing experience by presenting one or more video clips associated with an area of the photo that the user is viewing. For example, a pre-recorded still image may be presented on a display, and the still image may be associated with a pre-recorded video. One or more video clips of interest may be defined from the pre-recorded video and associated with a viewable area of the pre-recorded still image, e.g., a zoomed-in portion of the pre-recorded video. Receipt of a user input via the zoomed-in portion may cause presentation of a video clip of interest that is associated with the zoomed-in portion. The video clip of interest may, for example, be a portion of the pre-recorded video in which an event occurs, such as a gesture or a laugh or a smile of one of the participants in the scene being captured. 1.-20. (canceled) 21. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: cause presentation of a pre-recorded still image on a display, wherein the pre-recorded still image is associated with a pre-recorded video; upon receipt of a first user input, cause presentation of a zoomed-in portion of the pre-recorded still image on the display; and upon receipt of a second user input via the zoomed-in portion of the pre-recorded still image, cause presentation of a video clip of interest associated with the zoomed-in portion of the pre-recorded still image, wherein the video clip of interest is a portion of the pre-recorded video in which an event occurs. 22.
The apparatus according to claim 21, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus to cause presentation of a first video clip of interest and a second video clip of interest ...

16-02-2017 publication date

Systems and Methods for Categorizing Motion Events

Number: US20170046574A1
Assignee:

The various embodiments described herein include methods, devices, and systems for categorizing motion events. In one aspect, a method includes: (1) obtaining a plurality of video frames, the plurality of video frames corresponding to a scene and a motion event candidate; (2) identifying one or more visual characteristics of the scene; (3) obtaining one or more background factors for the scene; (4) utilizing the obtained background factors to identify one or more motion entities; (5) for each identified motion entity: (a) classifying the motion entity by performing object recognition; and (b) obtaining one or more representative motion vectors based on a motion track of the motion entity; and (6) assigning a motion event category to the motion event candidate based on the identified visual characteristics, the obtained background factors, the classified motion entities, and the obtained representative motion vectors. 1. A method comprising: obtaining a plurality of video frames, the plurality of video frames corresponding to a scene and a motion event candidate; identifying one or more visual characteristics of the scene; obtaining one or more background factors for the scene; utilizing the obtained background factors to identify one or more motion entities; for each identified motion entity: classifying the motion entity by performing object recognition on the motion entity; and obtaining one or more representative motion vectors based on a motion track of the motion entity; and assigning a motion event category of a plurality of motion event categories to the motion event candidate based on the identified one or more visual characteristics, the obtained background factors, the classified motion entities, and the obtained representative motion vectors; wherein the motion event category assigned to the motion event candidate is selected from a group consisting of: one or more known event types; one or more unknown event types; and a non-event type.
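
The per-entity steps above, a representative motion vector derived from a track followed by a category drawn from known, unknown, and non-event types, can be sketched as follows (the class names and the displacement-based rules are illustrative assumptions, not the patent's classifier):

```python
KNOWN_CLASSES = frozenset({"person", "vehicle"})

def representative_vector(track):
    # Net displacement of a motion track given as (x, y) points over time.
    (x0, y0), (xn, yn) = track[0], track[-1]
    return (xn - x0, yn - y0)

def assign_category(entity_class, vector):
    # Map a classified motion entity and its representative vector to one
    # of: a known event type, an unknown event type, or a non-event.
    if vector == (0, 0):
        return "non-event"
    if entity_class not in KNOWN_CLASSES:
        return "unknown event"
    return f"{entity_class} movement"

track = [(0, 0), (2, 1), (5, 3)]
category = assign_category("person", representative_vector(track))
```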

16-02-2017 publication date

SEMANTIC REPRESENTATION MODULE OF A MACHINE-LEARNING ENGINE IN A VIDEO ANALYSIS SYSTEM

Number: US20170046576A1
Assignee:

A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames. 1. A system for processing data describing a scene depicted in a sequence of video frames, the system comprising: a processor; and a memory communicatively coupled to the processor, the memory comprising computer-readable instructions that, when executed by the processor, cause the system to: identify one or more objects detected in the scene; receive input data associated with the one or more identified objects; evaluate the received input data to identify one or more primitive events, wherein for a first one of the primitive events, a semantic value is provided describing a behavior engaged in by a first one of the objects depicted in the sequence of video frames and wherein the first one of the primitive events has an assigned primitive event symbol; generate, for the first object, a primitive event symbol stream which includes the primitive event symbol corresponding to the first one of the primitive events identified for the first object; and output the primitive event symbol stream. 2. The system of claim 1, further comprising: update, for the first object, the primitive event symbol stream as the first object moves about the scene. 3.
The system of claim 2, wherein update the primitive event symbol stream further comprising: identify, for the first object, a second one of the primitive events, wherein the ...

14-02-2019 publication date

MONITORING TARGET PERSON MONITORING DEVICE, METHOD, AND SYSTEM

Number: US20190046080A1
Assignee: KONICA MINOLTA, INC.

Monitoring target person monitoring device, method, and system according to the present invention sense a predetermined event regarding a monitoring target person to notify the event; acquire an image including at least a video; determine, based on the acquired image, whether or not multiple persons are on the image; and start, in a case where it is determined that the multiple persons are on the image, storing the acquired video to store the video in a video storage. 1.-7. (canceled) 8. A monitoring target person monitoring device for sensing a predetermined event regarding a monitoring target person as a monitoring target to notify the event, the device comprising: an image acquisitor that acquires an image including at least a video; a video storage that stores the video acquired by the image acquisitor; a multiple-person determiner that determines, based on the image acquired by the image acquisitor, whether or not multiple persons are on the image; and a video storage processor that starts, in a case where the multiple-person determiner determines that the multiple persons are on the image, storing the video acquired by the image acquisitor to store the video in the video storage. 9. The monitoring target person monitoring device according to claim 8, wherein the video storage processor starts storing the video acquired by the image acquisitor to store the video in the video storage in a case where the event is notified before the multiple-person determiner determines that the multiple persons are on the image. 10.
The monitoring target person monitoring device according to claim 8, further comprising: a sound inputter to which sound is input, wherein the video storage processor starts storing the video acquired by the image acquisitor to store the video in the video storage in a case where sound input by the sound inputter is equal to or greater than a predetermined first threshold before or after the multiple-person determiner determines that the multiple persons ...

06-02-2020 publication date

DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND PROGRAM

Number: US20200045242A1
Author: Funagi Tetsuhiro
Assignee:

Causing display of an image captured by an image capturing device. An acquisition unit acquires a classification of an event occurring in a capturing area in which the image capturing device performs capturing. A display control unit causes a display unit to display an image generated by enlargement processing such that a size of a portion of a human figure included in the image of the capturing area is identical to a predetermined size, the portion of the human figure corresponding to the classification of the event acquired by the acquisition unit. 1. A display control device configured to cause a display unit to display an image captured by an image capturing device, the display control device comprising: an acquisition unit configured to acquire a classification of an event occurring in a capturing area in which the image capturing device performs capturing; and a display control unit configured to cause the display unit to display an image generated by enlargement processing such that a size of a portion of a human figure included in the image of the capturing area is identical to a predetermined size, the portion of the human figure corresponding to the classification of the event acquired by the acquisition unit. 2. The display control device according to claim 1, further comprising: a control unit configured to control a zoom value of the image capturing device, wherein the image generated by the enlargement processing results from capturing by the image capturing device having the zoom value increased by the control unit such that the size of the portion of the human figure is identical to the predetermined size. 3. The display control device according to claim 1, wherein the image generated by the enlargement processing results from enlargement of a partial image including the portion of the human figure such that the size of the portion of the human figure is identical to the predetermined size. 4.
The display control device according to claim 1, wherein the ...
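
The enlargement processing is a simple proportion: the zoom value grows by the ratio of the predetermined size to the currently detected size of the figure's portion (a sketch; the function name and pixel units are assumptions):

```python
def zoom_for_target(portion_height_px, target_height_px, current_zoom=1.0):
    # Scale the current zoom value so that the detected portion of the
    # human figure reaches the predetermined size on screen.
    return current_zoom * (target_height_px / portion_height_px)

# A portion detected at 80 px must be shown at 240 px: triple the zoom.
new_zoom = zoom_for_target(portion_height_px=80, target_height_px=240)
```

The same ratio drives the digital alternative in claim 3: crop a window around the figure that is 1/new_zoom the frame size and upscale it.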

16-02-2017 publication date

DISTURBANCE DETECTION IN VIDEO COMMUNICATIONS

Number: US20170048492A1
Assignee:

Embodiments disclosed herein provide systems, methods, and computer readable media for detecting disturbances in a media stream from a participant on a communication. In a particular embodiment, a method provides identifying disturbance criteria defining a plurality of audible disturbances, a plurality of visual disturbances, and a plurality of communication disturbances. The method further provides identifying one or more audible disturbances from an audio component of the media stream based on predefined disturbance criteria and identifying one or more visual disturbances from a video component of the media stream based on the disturbance criteria. Additionally, the method provides correlating the audible disturbances with the visual disturbances to determine one or more combined disturbances for the participant based on the disturbance criteria, wherein each of the combined disturbances comprises at least one of the audible disturbances and at least one of the visual disturbances. 1. A method of detecting disturbances in a media stream from a participant on a communication, the method comprising: identifying disturbance criteria defining a plurality of audible disturbances, a plurality of visual disturbances, and a plurality of communication disturbances; identifying one or more audible disturbances from an audio component of the media stream based on predefined disturbance criteria; identifying one or more visual disturbances from a video component of the media stream based on the disturbance criteria; and correlating the audible disturbances with the visual disturbances to determine one or more combined disturbances for the participant based on the disturbance criteria, wherein each of the combined disturbances comprises at least one of the audible disturbances and at least one of the visual disturbances. 2.
The method of claim 1, further comprising: receiving biometric information about the participant contemporaneously with the media stream; and correlating the ...
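
The correlating step can be sketched as a time-window pairing of audio and video disturbances (a minimal sketch; the `max_gap` window is an assumed stand-in for the patent's disturbance criteria):

```python
def combine_disturbances(audible, visual, max_gap=1.0):
    """Pair audio and video disturbances whose timestamps fall within
    max_gap seconds of each other; each pair is one combined disturbance."""
    combined = []
    for a_t, a_label in audible:
        for v_t, v_label in visual:
            if abs(a_t - v_t) <= max_gap:
                combined.append((min(a_t, v_t), a_label, v_label))
    return combined

audible = [(3.2, "dog bark"), (10.0, "keyboard noise")]
visual = [(3.5, "person left seat")]
events = combine_disturbances(audible, visual)
```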

25-02-2016 publication date

TRACTOR-TRAILER CONNECTIONS FOR IMAGE CAPTURE DATA

Number: US20160052453A1
Assignee:

A system comprises a transmitter unit configured for mounting at a trailer connected to a tractor. The transmitter unit comprises a video input for receiving video data from one or more trailer cameras, a power input for receiving power from a power line of the trailer, and a wireless communication module for communicating with a mobile communication device. A memory is configured to store trailer information received from the wireless communication module, the trailer information uniquely identifying the trailer. A transmitter is configured to wirelessly transmit the video data. A receiver unit is configured for mounting at the tractor and comprises a receiver for wirelessly receiving the video data transmitted by the transmitter, a power input for receiving power from a power line of the tractor, and an output for outputting the received video data. The receiver unit is further configured to receive the trailer information from the transmitter unit. 1. A system for use on a vehicle comprising a tractor and a trailer, the system comprising: a transmitter unit configured for mounting at the trailer, the transmitter unit comprising: a video input configured to receive video data from one or more cameras mounted at the trailer; a power input configured to receive power from a power line of the trailer; a wireless communication module configured to communicate with a mobile communication device; a memory configured to store trailer information received from the wireless communication module, the trailer information comprising at least trailer ID data that uniquely identifies the trailer; and a transmitter configured to wirelessly transmit the video data; and a receiver unit configured for mounting at the tractor, the receiver unit comprising: a receiver configured to wirelessly receive the video data transmitted by the transmitter; a power input configured to receive power from a power line of the tractor; and an output configured to output the received video data
...

03-03-2022 publication date

EMULATION SERVICE FOR PERFORMING CORRESPONDING ACTIONS BASED ON A SEQUENCE OF ACTIONS DEPICTED IN A VIDEO

Number: US20220067380A1
Author: HSIAO Teng-Yuan
Assignee:

A media casting device detects facial feature points of a beauty advisor and detects the beauty advisor performing a sequence of operations relating to application of cosmetic products on a facial region of the beauty advisor. The media casting device detects a corresponding timestamp for each operation and detects a position of a cosmetic product with respect to facial feature points of the beauty advisor during each of the sequence of operations. The media casting device detects a cosmetic product utilized by the beauty advisor during each of the sequence of operations and generates metadata comprising the sequence of operations, the position of each cosmetic product, the corresponding timestamps, and each detected cosmetic product. The media casting device then transmits the metadata to a client device. 1. A method, comprising: detecting, by a facial region analyzer, facial feature points of a beauty advisor; detecting, by an event detector, the beauty advisor performing a sequence of operations relating to application of cosmetic products on a facial region of the beauty advisor; detecting, by the event detector, a corresponding timestamp for each operation; detecting, by the event detector, a position of a cosmetic product with respect to facial feature points of the beauty advisor during each of the sequence of operations; detecting, by the event detector, a cosmetic product utilized by the beauty advisor during each of the sequence of operations; generating, by a metadata module, metadata comprising the sequence of operations, the position of each cosmetic product, the corresponding timestamps, and each detected cosmetic product; and transmitting, by a network module, the metadata to a client device. 2. The method of claim 1, wherein detecting, by the event detector, the position of the cosmetic product comprises determining coordinates of the position of the cosmetic product relative to the facial feature points. 3.
The method of claim 1, further ...

03-03-2022 publication date

MULTIMODAL GAME VIDEO SUMMARIZATION

Number: US20220067384A1
Assignee:

Video and audio from a computer simulation are processed by a machine learning engine to identify candidate segments of the simulation for use in a video summary of the simulation. Text input is then used to reinforce whether a candidate segment should be included in the video summary.

03-03-2022 publication date

MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA

Number: US20220067385A1
Assignee:

Video and audio from a computer simulation are processed by a machine learning engine to identify candidate segments of the simulation for use in a video summary of the simulation. Text input is then used to reinforce whether a candidate segment should be included in the video summary. Metadata can be added to the summary showing game summary information. 1. An apparatus comprising: at least one processor programmed with instructions to: receive audio-video (AV) data; provide a video summary of the AV data that is shorter than the AV data at least in part by: input to a machine learning (ML) engine first modality data; input to the ML engine second modality data; receive the video summary of the AV data from the ML engine responsive to the inputting of the first and second modality data; and present in the video data metadata aligned in time with the first and second modality data such that the metadata is perceptible in the video summary. 2. The apparatus of claim 1, wherein the first modality data comprises audio from the AV data and the second modality data comprises computer simulation video from the AV data. 3. The apparatus of claim 1, wherein the metadata represents game event data. 4. The apparatus of claim 1, wherein the metadata represents emotion. 5. The apparatus of claim 1, wherein the metadata represents audio and video features extracted from the AV data. 6. The apparatus of claim 1, wherein the instructions are executable to: highlight portions of video that are subject to the metadata. 7. The apparatus of claim 1, wherein the instructions are executable to: present the metadata as text in the video summary. 8. The apparatus of claim 1, wherein the metadata indicates likes for certain portions of the AV data. 9. A method comprising: identifying an audio-video (AV) entity; using audio from the AV entity, identifying plural first candidate segments of the AV entity for establishing a summary of the entity; using video from the AV entity, identifying plural second ...

03-03-2022 publication date

SYSTEM, METHOD AND STORAGE MEDIUM FOR DETECTING PEOPLE ENTERING AND LEAVING A FIELD

Number: US20220067391A1

A method for detecting people entering and leaving a field is provided in an embodiment of the disclosure. The method includes the following. An event detection area corresponding to an entrance is set, and the event detection area includes an upper boundary, a lower boundary, and an internal area, and the lower boundary includes a left boundary, a right boundary, and a bottom boundary; a person image corresponding to a person in an image stream is detected and tracked; and whether the person passes through or does not pass through the entrance is determined according to a first detection result and a second detection result.
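
One way to realize the boundary logic is to record which boundary of the event detection area each tracked segment crosses and compare the first and last crossings (a sketch in image coordinates with y growing downward; treating the upper boundary as facing the field interior and the left/right/bottom edges as the lower boundary is an assumption):

```python
def boundary_crossed(area, prev, cur):
    """Return which boundary of the event detection area a track segment
    crosses, or None. area = (left, top, right, bottom) in pixels."""
    left, top, right, bottom = area
    inside = lambda p: left <= p[0] <= right and top <= p[1] <= bottom
    if inside(prev) == inside(cur):
        return None                        # no boundary crossed
    p = prev if not inside(prev) else cur  # the point outside the area
    if p[1] < top:
        return "upper"                     # edge facing the field interior
    return "lower"                         # left, right, or bottom edge

def classify_transit(crossings):
    # Entered the field: came in over the lower boundary, left over the upper.
    if crossings[:1] == ["lower"] and crossings[-1:] == ["upper"]:
        return "entered"
    if crossings[:1] == ["upper"] and crossings[-1:] == ["lower"]:
        return "left"
    return "no pass-through"

area = (100, 50, 300, 200)
track = [(200, 250), (200, 120), (200, 20)]  # bottom -> through area -> top
crossings = [b for b in (boundary_crossed(area, track[i], track[i + 1])
                         for i in range(len(track) - 1)) if b]
```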

14-02-2019 publication date

Methods and Systems for Classifying Optically Detected Power Quality Disturbances

Number: US20190049525A1

An optically detected power quality disturbance caused by a remote load is classified as belonging to a class of known classes of power quality disturbances. Features associated with different power quality disturbances that belong to a plurality of different known classes of power quality disturbances are learned. Cross-covariance is applied to the optically detected power quality disturbance and the different power quality disturbances that belong to the different known classes of power quality disturbances to recognize features of the optically detected power quality disturbance that at least partially match the learned features. The class of power quality disturbances among the plurality of classes of different known power quality disturbances to which the optically detected power quality disturbance belongs is determined based on the recognized features. 1. A method for classifying an optically detected power quality disturbance, comprising: a) learning features associated with different power quality disturbances that belong to a plurality of different known classes of power quality disturbances; b) applying cross-covariance to the optically detected power quality disturbance and the different power quality disturbances that belong to the plurality of different known classes of power quality disturbances to recognize features of the optically detected power quality disturbance that at least partially match the learned features associated with the different power quality disturbances, wherein the optically detected power quality disturbance is detected remotely from a load causing the optically detected power quality disturbance; and c) determining a class of power quality disturbances among the plurality of classes of different known power quality disturbances to which the optically detected power quality disturbance belongs based on the recognized features. 2. The method of claim 1, wherein the learned features associated with the power quality disturbances ...
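
Step b) can be sketched with a normalized zero-lag cross-covariance against one learned template per class (a toy illustration with hand-made voltage-envelope templates standing in for the patent's learned features):

```python
def cross_cov(x, y):
    # Normalized cross-covariance at zero lag (Pearson-style).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

def classify(signal, templates):
    # Pick the known disturbance class whose template covaries most.
    return max(templates, key=lambda name: cross_cov(signal, templates[name]))

templates = {
    "sag":   [1.0, 0.6, 0.6, 0.6, 1.0],   # brief voltage dip
    "swell": [1.0, 1.4, 1.4, 1.4, 1.0],   # brief voltage rise
}
observed = [1.0, 0.62, 0.58, 0.61, 1.0]
label = classify(observed, templates)
```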

03-03-2022 publication date

Automated Spatial Indexing of Images to Video

Number: US20220070437A1
Assignee:

A spatial indexing system receives a video that is a sequence of frames depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial locations at which each of the images were captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model. 1. A method comprising: receiving, from a first image capture system, a set of images each comprising an image timestamp, the set of images captured by the first image capture system as the first image capture system is moved through an environment; generating an estimated camera path of a second image capture system representative of movement through the environment based on a set of video frames captured by the second image capture system; associating the set of images with locations along the estimated camera path based on the image timestamps of the set of images and timestamps of the set of video frames; and displaying one or more of the set of images within a three-dimensional rendering of the environment. 2. The method of claim 1, wherein associating the set of images with locations along the estimated camera path is further based on metadata tags in the video frames. 3. The method of claim 1, wherein associating the set of images with locations along the estimated camera path further comprises: performing object detection on the set of video frames captured by the second image capture system to identify a presence of the first image capture system in one of the video frames; and associating an image of the set of images to one of the locations along the estimated camera path based on the identified presence of the first image capture system in the video frame. 4.
The method of claim 1, wherein the first image capture system and the second image capture system are used by a same user such ...
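
The timestamp association in claim 1 can be sketched as nearest-neighbor matching between image timestamps and video-frame timestamps (assuming, for illustration, that both capture systems share a common clock):

```python
def locate_images(image_stamps, frame_stamps, frame_locations):
    """Assign each still image the camera-path location of the video frame
    whose timestamp is closest to the image's timestamp."""
    placed = {}
    for ts in image_stamps:
        i = min(range(len(frame_stamps)),
                key=lambda j: abs(frame_stamps[j] - ts))
        placed[ts] = frame_locations[i]
    return placed

frame_stamps = [0.0, 1.0, 2.0, 3.0]
frame_locations = [(0, 0), (1, 0), (2, 0), (2, 1)]  # estimated path points
placed = locate_images([0.9, 2.6], frame_stamps, frame_locations)
```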

03-03-2022 publication date

SMART TIMELAPSE VIDEO TO CONSERVE BANDWIDTH BY REDUCING BIT RATE OF VIDEO ON A CAMERA DEVICE WITH THE ASSISTANCE OF NEURAL NETWORK INPUT

Number: US20220070453A1
Author: Tang Jian, Xu Ruian
Assignee:

An apparatus including an interface and a processor. The interface may be configured to receive pixel data generated by a capture device. The processor may be configured to perform computer vision operations on the video frames to detect objects, perform a classification of the objects detected based on characteristics of the objects, determine whether the classification of the objects corresponds to an event, generate a full video stream in response to all of the video frames and generate encoded video frames. The full video stream may be recorded to a storage medium local to the apparatus. The encoded video frames may be communicated to a cloud service. The encoded video frames may comprise a first sample of the video frames selected at a first rate when the event is not detected and a second sample of the video frames selected at a second rate while the event is detected.
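
The two sampling rates can be sketched as follows (a minimal sketch; the specific rates and the representation of the event as a set of frame indices are illustrative assumptions):

```python
def select_frames(n_frames, event_frames, normal_rate=30, event_rate=5):
    """Pick every normal_rate-th frame outside events and every
    event_rate-th frame inside them, conserving upload bandwidth
    while keeping detail where the neural network flagged an event."""
    selected = []
    for i in range(n_frames):
        rate = event_rate if i in event_frames else normal_rate
        if i % rate == 0:
            selected.append(i)
    return selected

event_frames = set(range(60, 75))   # an object classification hit here
selected = select_frames(120, event_frames)
```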

25-02-2021 publication date

IDENTIFYING SEGMENT STARTING LOCATIONS IN VIDEO COMPILATIONS

Number: US20210056311A1
Assignee:

Technology for identifying segment starting locations in video compilations. The method includes: receiving an enumerated video compilation of a plurality of joined video segments; identifying enumerating text in key frames of the video compilation, wherein the key frames are at time intervals in the video compilation; and storing identified enumerating text information in relation to the key frames. The method then includes: analyzing the stored enumerating text information to identify time locations in the video compilation of a first occurrence of each enumerating value; and providing location references in the video compilation of the identified time locations for navigation. 1. A computer-implemented method (CIM) for identifying segment starting locations in video compilations, the CIM comprising: receiving an enumerated video compilation, with the enumerated video compilation including a plurality of joined video segments in a top X format where X is an integer and each joined video segment is respectively associated with a unique enumeration value from 1 to X, and with the plurality of joined video segments including a plurality of key frames occurring at time intervals within the joined video segments; identifying a plurality of portions of enumerating text in the plurality of key frames, with each portion of enumerating text including the enumeration value for one of the joined video segments of the plurality of joined video segments, by performing character recognition on the video images corresponding to the key frames so that the enumerating text is recovered as text from images that make up the video compilation; storing the plurality of portions of enumerating text and an identification of respectively corresponding key frames where each portion of enumerating text is respectively located; analyzing the recovered text making up the plurality of portions of enumerating text to identify time locations in the video compilation of a first occurrence of each
...
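
Finding the first occurrence of each enumerating value reduces to a single pass over the recognized key-frame text (a sketch; the character-recognition step is abstracted away as pre-recognized numbers, with `None` where a key frame showed no enumerating text):

```python
def segment_starts(key_frames):
    """key_frames: list of (time_seconds, recognized_value_or_None).
    Return the first time each enumeration value appears on screen."""
    starts = {}
    for t, value in key_frames:
        if value is not None and value not in starts:
            starts[value] = t   # first occurrence = segment start
    return starts

# A "top 10" countdown compilation sampled at 5-second key-frame intervals.
key_frames = [(0, None), (5, 10), (10, 10), (15, 9), (20, 9), (25, 8)]
starts = segment_starts(key_frames)
```

The resulting mapping gives the location references used for navigation: jumping to `starts[9]` lands at the start of segment number 9.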

25-02-2021 publication date

PROVIDING INFORMATION BASED ON DETECTION OF ACTIONS THAT ARE UNDESIRED TO WASTE COLLECTION WORKERS

Number: US20210056492A1
Author: Zass Ron
Assignee:

Systems, methods and non-transitory computer readable media for providing information based on detection of actions that are undesired to waste collection workers are provided. One or more images captured using one or more image sensors from an environment of a garbage truck may be obtained. The one or more images may be analyzed to detect a waste collection worker in the environment of the garbage truck. The one or more images may be analyzed to determine whether the waste collection worker performs an action that is undesired to the waste collection worker. In response to a determination that the waste collection worker performs an action that is undesired to the waste collection worker, first information may be provided. 1. A non-transitory computer readable medium storing a software program comprising data and computer implementable instructions for carrying out a method for providing information based on detection of actions that are undesired to waste collection workers, the method comprising: obtaining one or more images captured using one or more image sensors from an environment of a garbage truck; analyzing the one or more images to detect a waste collection worker in the environment of the garbage truck; analyzing the one or more images to determine whether the waste collection worker performs an action that is undesired to the waste collection worker; and in response to a determination that the waste collection worker performs an action that is undesired to the waste collection worker, providing first information. 2.
The non-transitory computer readable medium of claim 1, wherein the method further comprises: analyzing the one or more images to identify a property of the action that the waste collection worker performs and is undesired to the waste collection worker; in response to a first identified property of the action that the waste collection worker performs and is undesired to the waste collection worker, providing the first information; and in response ...

22-02-2018 publication date

METHOD, PROCESSING DEVICE AND SYSTEM FOR MANAGING COPIES OF MEDIA SAMPLES IN A SYSTEM COMPRISING A PLURALITY OF INTERCONNECTED NETWORK CAMERAS

Number: US20180053389A1
Assignee:

The present invention relates to a method for managing copies of media samples recorded by a given network camera of a system comprising a plurality of interconnected network cameras, the method comprising the following steps: 1. A method for managing copies of media samples recorded by a given network camera of a system comprising a plurality of interconnected network cameras, the method comprising: determining a topology of the system as a set of relationships existing between the plurality of interconnected network cameras, based on media samples recorded by the plurality of interconnected network cameras; and selecting a subset of network cameras from the plurality of interconnected network cameras, based on the determined topology and a predetermined level of redundancy to achieve, for storage of the copies of the media samples recorded by the given network camera, in storage units of the subset of network cameras. 2. The method according to claim 1, further comprising sending a copy of media samples recorded by the given network camera to at least one selected network camera of the subset of network cameras. 3. The method according to claim 1, further comprising: transmitting a copy of media samples recorded by the given network camera to each of the plurality of interconnected network cameras or to a predefined number of randomly selected network cameras; and removing a copy from a storage unit of at least one network camera that does not belong to the subset of network cameras. 4. The method according to claim 1, further comprising determining a level of redundancy as the number of cameras that can be removed from the system without causing loss of information directed video content at the system level. 5.
The method according to claim 1, wherein determining a topology as a set of relationships between regions of a scene comprises: extracting a first set of visual descriptors characterizing visual features of a first region of the scene, based on a first image ...
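
The selection step can be sketched with pairwise overlap scores standing in for the derived topology: to reach a given redundancy level, one plausible policy keeps copies on the cameras whose fields of view overlap the source camera most (the scores, names, and the policy itself are illustrative assumptions, not the patent's selection rule):

```python
def pick_replica_cameras(overlap, source, redundancy):
    """Choose storage peers for one camera's recordings: the cameras whose
    fields of view overlap the source most, up to the redundancy level."""
    peers = [(score, cam) for cam, score in overlap.items() if cam != source]
    peers.sort(reverse=True)                 # highest overlap first
    return [cam for _, cam in peers[:redundancy]]

# Visual-overlap scores of each camera with camera "A" (toy topology).
overlap = {"A": 1.0, "B": 0.7, "C": 0.2, "D": 0.5}
replicas = pick_replica_cameras(overlap, source="A", redundancy=2)
```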
