Search form

Supports entering several search phrases (one per line). Searches support Russian and English morphology.
Total found: 44. Displayed: 43.
13-04-2017 publication date

SYSTEM AND METHOD FOR PROVIDING LASER CAMERA FUSION FOR IDENTIFYING AND TRACKING A TRAFFIC PARTICIPANT

Number: US20170103269A1
Assignee:

A system and method for providing laser camera fusion for identifying and tracking a traffic participant that include receiving an image of a surrounding environment of a vehicle from a vehicle camera system and a set of object coordinates of at least one object determined within the surrounding environment of the vehicle from a vehicle laser projection system. The system and method also include determining a portion of the image as object space based on the image and the set of object coordinates and filtering the object space to identify a traffic related object. Additionally, the system and method include determining a three dimensional position of the traffic related object and classifying the traffic related object as at least one of: the traffic participant, or a non-traffic participant. The system and method further include tracking the traffic participant based on a three dimensional position of the traffic related object classified as the traffic participant.

1. A method for providing laser camera fusion for identifying and tracking a traffic participant, comprising: receiving an image of a surrounding environment of a vehicle from a vehicle camera system and a set of object coordinates of at least one object determined within the surrounding environment of the vehicle from a vehicle laser projection system; determining a portion of the image as object space based on the image and the set of object coordinates; filtering the object space to identify a traffic related object; determining a three dimensional position of the traffic related object; classifying the traffic related object as at least one of: the traffic participant, or a non-traffic participant; and tracking the traffic participant based on a three dimensional position of the traffic related object classified as the traffic participant, wherein a predicted position of the traffic participant relative to the vehicle is determined based on tracking the traffic participant. 2. The method of ...
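
As a rough illustration of the fusion step described in this abstract, the sketch below projects laser-reported 3D object coordinates into the camera image with a pinhole model and pads the result into a candidate "object space" window. The intrinsic matrix, padding, and all names are illustrative assumptions, not the patented implementation.

import numpy as np

def object_space_roi(obj_xyz, K, img_shape, pad=20):
    """Project laser-reported 3D points (N x 3, camera frame) into the image
    and return a padded bounding box that serves as candidate object space."""
    obj_xyz = np.asarray(obj_xyz, dtype=float)
    pts = obj_xyz[obj_xyz[:, 2] > 0]          # keep points in front of the camera
    if pts.size == 0:
        return None
    uvw = (K @ pts.T).T                       # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    h, w = img_shape[:2]
    u0, v0 = np.clip(uv.min(axis=0) - pad, 0, [w - 1, h - 1])
    u1, v1 = np.clip(uv.max(axis=0) + pad, 0, [w - 1, h - 1])
    return int(u0), int(v0), int(u1), int(v1)

# Example: one object roughly 10 m ahead, slightly to the right.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
roi = object_space_roi([[1.2, 0.1, 10.0], [1.6, 0.1, 10.2]], K, (720, 1280))
print(roi)  # padded (u_min, v_min, u_max, v_max) box in pixel coordinates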

10-10-2017 publication date

System and method for partially occluded object detection

Number: US0009785828B2
Assignee: Honda Motor Co., Ltd., HONDA MOTOR CO LTD

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.
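
The scoring rule this abstract describes can be read as a simple sum: visible root cells and parts contribute their responses, and occluded ones subtract a location-dependent penalty. A minimal sketch under that reading; the penalty values and names are assumptions.

import numpy as np

def detection_score(responses, visible, occlusion_penalty):
    """responses, visible, occlusion_penalty: 1-D arrays over root cells and parts.
    Visible elements add their response; occluded elements subtract their penalty."""
    responses = np.asarray(responses, float)
    visible = np.asarray(visible, bool)
    occlusion_penalty = np.asarray(occlusion_penalty, float)
    return float(np.where(visible, responses, -occlusion_penalty).sum())

# Toy window: 4 root cells + 2 parts; the lower cells are occluded and,
# sitting near the bottom of the detection window, carry a smaller penalty.
responses = [0.9, 0.8, -0.4, -0.6, 0.7, -0.5]
visible   = [True, True, False, False, True, False]
penalty   = [0.3, 0.3, 0.1, 0.1, 0.3, 0.1]
print(detection_score(responses, visible, penalty))   # 2.1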

14-11-2017 publication date

System and method for needle deployment detection in image-guided biopsy

Number: US0009814442B2

A system and method for medical device detection includes a guidance system ( 38 ) configured to deliver a surgical device ( 32 ) into a subject. A surgical device deployment detector ( 25, 40, 42, 44 ) is configured to cooperate with the guidance system and is configured to detect a deployment of the surgical device in the subject. A coordination module ( 22 ) is configured to receive input from the guidance system and the deployment detector to determine and record one or more of a location and time of each deployment.

13-10-2016 publication date

PEDESTRIAN PATH PREDICTIONS

Number: US20160300485A1
Assignee:

Systems and techniques for pedestrian path predictions are disclosed herein. For example, an environment, features of the environment, and pedestrians within the environment may be identified. Models for the pedestrians may be generated based on features of the environment. A model may be indicative of goals of a corresponding pedestrian and predicted paths for the corresponding pedestrian. Pedestrian path predictions for the pedestrians may be determined based on corresponding predicted paths. A pedestrian path prediction may be indicative of a probability that the corresponding pedestrian will travel a corresponding predicted path. Pedestrian path predictions may be rendered for the predicted paths, such as using different colors or different display aspects, thereby enabling a driver of a vehicle to be presented with information indicative of where a pedestrian is likely to travel.

1. A system for pedestrian path predictions, comprising: a sensor component identifying an environment, one or more features of the environment, and one or more pedestrians within the environment; a modeling component generating one or more models for one or more of the pedestrians based on one or more features of the environment, wherein a model of one or more of the models is indicative of one or more goals of a corresponding pedestrian and one or more predicted paths for the corresponding pedestrian; a prediction component determining one or more pedestrian path predictions for one or more of the pedestrians based on one or more corresponding predicted paths, wherein a pedestrian path prediction of one or more of the pedestrian path predictions is indicative of a probability that the corresponding pedestrian will travel a corresponding predicted path; and an interface component rendering one or more of the pedestrian path predictions for one or more of the predicted paths. 2. The system of claim 1, wherein the sensor component gathers one or more observations from one or more of the ...
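
A hedged sketch of the prediction output described above: candidate paths for a pedestrian are scored (for example against goals and environment features) and the scores are normalized into per-path probabilities that a display component could render in different colors. The softmax normalization and all names are assumptions.

import numpy as np

def path_probabilities(path_scores):
    """Turn unnormalized per-path scores (e.g. goal affinity minus obstacle cost)
    into a probability that the pedestrian travels each candidate path."""
    s = np.asarray(path_scores, float)
    e = np.exp(s - s.max())          # numerically stable softmax
    return e / e.sum()

# Three candidate paths toward a crosswalk, a shop entrance, and a parked car.
probs = path_probabilities([2.0, 1.0, -1.0])
for name, p in zip(["crosswalk", "shop entrance", "parked car"], probs):
    print(f"{name}: {p:.2f}")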

31-10-2017 publication date

Partially occluded object detection using context and depth ordering

Number: US0009805274B2
Assignee: HONDA MOTOR CO., LTD., HONDA MOTOR CO LTD

A system and method for verifying detection of partially occluded objects (e.g., pedestrians) in the vicinity of a vehicle. An image input device captures an image and/or video of surroundings. An object detector detects partially occluded pedestrians and other objects in received image information. The detection of a partially occluded pedestrian is verified when a detection window of the partially occluded pedestrian overlaps with a detection window of an occluding object, and the occluding object window is closer to the image input device than the partially occluded object window. Optionally, a range-finding sensor, such as a LIDAR device, determines a range to objects located in the direction of the partially occluded object. The detection of the partially occluded object is verified when the range of one of the other objects located in the direction of the partially occluded object is less than that of the partially occluded object.
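
The verification rule stated in this abstract amounts to two checks, sketched below: the two detection windows overlap, and the occluding window (or a range measurement in its direction) is closer to the camera than the partially occluded one. Box format, units, and names are assumptions.

def boxes_overlap(a, b):
    """Boxes are (x_min, y_min, x_max, y_max) in pixels."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def verify_partial_occlusion(occluded_box, occluded_range_m,
                             occluder_box, occluder_range_m):
    """Accept the partially occluded detection only if the windows overlap
    and the occluding object is measured closer than the occluded one."""
    return boxes_overlap(occluded_box, occluder_box) and occluder_range_m < occluded_range_m

# Pedestrian window behind a parked-car window, car measured 4 m closer.
print(verify_partial_occlusion((300, 180, 360, 320), 12.5,
                               (280, 240, 420, 330), 8.5))   # True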

25-09-2013 publication date

System and method for needle deployment detection in image-guided biopsy

Number: CN103327907A
Assignee:

A system and method for medical device detection includes a guidance system (38) configured to deliver a surgical device (32) into a subject. A surgical device deployment detector (25, 40, 42, 44) is configured to cooperate with the guidance system and is configured to detect a deployment of the surgical device in the subject. A coordination module (22) is configured to receive input from the guidance system and the deployment detector to determine and record one or more of a location and time of each deployment.

10-10-2017 publication date

Pedestrian path predictions

Number: US0009786177B2
Assignee: Honda Motor Co., Ltd., HONDA MOTOR CO LTD

Systems and techniques for pedestrian path predictions are disclosed herein. For example, an environment, features of the environment, and pedestrians within the environment may be identified. Models for the pedestrians may be generated based on features of the environment. A model may be indicative of goals of a corresponding pedestrian and predicted paths for the corresponding pedestrian. Pedestrian path predictions for the pedestrians may be determined based on corresponding predicted paths. A pedestrian path prediction may be indicative of a probability that the corresponding pedestrian will travel a corresponding predicted path. Pedestrian path predictions may be rendered for the predicted paths, such as using different colors or different display aspects, thereby enabling a driver of a vehicle to be presented with information indicative of where a pedestrian is likely to travel.

21-03-2013 publication date

High-Quality Upscaling of an Image Sequence

Number: US20130071040A1
Assignee:

A method, system, and computer-readable storage medium are disclosed for upscaling an image sequence. An upsampled frame is generated based on an original frame in an original image sequence comprising a plurality of frames. A smoothed image sequence is generated based on the original image sequence. A plurality of patches are determined in the upsampled frame. Each patch comprises a subset of image data in the upsampled frame. Locations of a plurality of corresponding patches are determined in a neighboring set of the plurality of frames in the smoothed image sequence. A plurality of high-frequency patches are generated. Each high-frequency patch is based on image data at the locations of the corresponding patches in the original image sequence. The plurality of high-frequency patches are added to the upsampled frame to generate a high-quality upscaled frame.

1. A computer-implemented method, comprising: generating an upsampled frame based on an original frame in an original image sequence, wherein the original image sequence comprises a plurality of frames; generating a smoothed image sequence based on the original image sequence; determining a plurality of patches in the upsampled frame, wherein each of the plurality of patches comprises a subset of image data in the upsampled frame; determining locations of a plurality of corresponding patches in a neighboring set of the plurality of frames in the smoothed image sequence; generating a plurality of high-frequency patches, wherein each high-frequency patch is based on image data at the locations of the plurality of corresponding patches in the original image sequence; and adding the plurality of high-frequency patches to the upsampled frame. 2. The method as recited in claim 1, further comprising: determining an optical flow in the original image sequence; wherein the locations of the plurality of corresponding patches are determined based on the optical flow. 3. The method as recited in claim 1, wherein determining the ...

21-03-2013 publication date

High-Quality Denoising of an Image Sequence

Number: US20130071041A1
Assignee:

A method, system, and computer-readable storage medium are disclosed for denoising an image sequence. A first patch is determined in a first frame in an image sequence comprising a plurality of frames. The first patch comprises a subset of image data in the first frame. Locations of a plurality of corresponding patches are determined in a neighboring set of the plurality of frames. One or more neighboring related patches are determined for each of the plurality of corresponding patches in a same frame as the respective one of the corresponding patches. A denoised first patch is generated by averaging image data in the one or more neighboring related patches in the neighboring set of the plurality of frames.

1. A computer-implemented method, comprising: determining a first patch in a first frame in an image sequence, wherein the image sequence comprises a plurality of frames, and wherein the first patch comprises a subset of image data in the first frame; determining locations of a plurality of corresponding patches in a neighboring set of the plurality of frames in the image sequence; determining one or more neighboring related patches for each of the plurality of corresponding patches in a same frame as the respective one of the corresponding patches; and generating a denoised first patch, comprising averaging image data in the one or more neighboring related patches in the neighboring set of the plurality of frames. 2. The method as recited in claim 1, further comprising: determining an optical flow in the image sequence; wherein the locations of the plurality of corresponding patches are determined based on the optical flow. 3. The method as recited in claim 1, wherein determining the locations of the plurality of corresponding patches comprises searching a local window in each of the neighboring set of the plurality of frames in the image sequence. 4. The method as recited in claim 3, further comprising: determining a size of the local window based on a local noise ...
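
In its simplest form, the denoising step described above averages a patch with its correspondences in neighboring frames. The sketch below assumes the corresponding patch locations are already known (for example from optical flow) and only performs the averaging; names are illustrative.

import numpy as np

def denoise_patch(frames, locations, patch_size):
    """frames: list of 2-D grayscale arrays; locations: one (row, col) top-left
    corner per frame for the corresponding patch; returns the averaged patch."""
    ph, pw = patch_size
    patches = [f[r:r + ph, c:c + pw] for f, (r, c) in zip(frames, locations)]
    return np.mean(patches, axis=0)

# Three noisy observations of the same static 8x8 patch.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(3)]
patch = denoise_patch(frames, [(8, 8)] * 3, (8, 8))
print(patch.std())   # noticeably below the per-frame noise level of ~10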

31-10-2013 publication date

SYSTEM AND METHOD FOR NEEDLE DEPLOYMENT DETECTION IN IMAGE-GUIDED BIOPSY

Number: US20130289393A1
Assignee: KONINKLIJKE PHILIPS N.V.

A system and method for medical device detection includes a guidance system (38) configured to deliver a surgical device (32) into a subject. A surgical device deployment detector (25, 40, 42, 44) is configured to cooperate with the guidance system and is configured to detect a deployment of the surgical device in the subject. A coordination module (22) is configured to receive input from the guidance system and the deployment detector to determine and record one or more of a location and time of each deployment.

1. A system for medical device detection, comprising: a guidance system (38) configured to deliver a surgical device (32) into a subject; a surgical device deployment detector (24, 25, 42, 44, 40) configured to cooperate with the guidance system and configured to detect a deployment of the surgical device in the subject; and a coordination module (22) configured to receive input from the guidance system and the deployment detector to coordinate a plurality of inputs to determine and record one or more of a location and time of each deployment. 2. (canceled) 3. (canceled) 4. The system as recited in claim 1, further comprising a filter (47) configured to highlight the surgical device in an ultrasonic image scan. 5. The system as recited in claim 1, wherein the surgical device deployment detector (38) includes a vibration detector (44) mounted on the guidance system. 6. The system as recited in claim 1, wherein the surgical device deployment detector (38) includes an acoustic detector (42) configured to acoustically indicate a position of the guidance system. 7. The system as recited in claim 1, wherein the surgical device deployment detector (38) includes a spatial tracking system (24, 25), wherein the spatial tracking system (24, 25) includes a spatial tracking device mounted on a biopsy needle (502) which includes one of a stationary (505) and a moving spatial tracking device (504). 8. (canceled) 9. ( ...

04-01-2018 publication date

SYSTEM AND METHOD FOR PARTIALLY OCCLUDED OBJECT DETECTION

Number: US20180005025A1
Assignee:

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.

1. A computer-implemented method for partially occluded object detection, comprising: obtaining a response map for a detection window of an input image, wherein the response map is based on a trained model and the response map includes a root layer and a parts layer; determining visibility flags for each root cell of the root layer and each part of the parts layer based on the response map, wherein the visibility flag is one of visible or occluded; determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded, wherein the occlusion penalty is based on a location of the root cell or the part with respect to the detection window; determining a detection score for the detection window based on the visibility flags and the occlusion penalties; and generating an estimated visibility map for object detection based on the detection score. 2. The computer-implemented method of claim 1, wherein the trained model is a deformable parts model. 3. The computer-implemented method of claim 1, wherein obtaining the response map comprises determining a ...

04-03-2021 publication date

OCCUPANCY PREDICTION NEURAL NETWORKS

Number: US20210064890A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a future occupancy prediction for a region of an environment. In one aspect, a method comprises: receiving sensor data generated by a sensor system of a vehicle that characterizes an environment in a vicinity of the vehicle as of a current time point, wherein the sensor data comprises a plurality of sensor samples characterizing the environment that were each captured at different time points; processing a network input comprising the sensor data using a neural network to generate an occupancy prediction output for a region of the environment, wherein: the occupancy prediction output characterizes, for one or more future intervals of time after the current time point, a respective likelihood that the region of the environment will be occupied by an agent in the environment during the future interval of time.

1. A method implemented by one or more data processing apparatus, the method comprising: receiving sensor data generated by a sensor system of a vehicle that characterizes an environment in a vicinity of the vehicle as of a current time point, wherein the sensor data comprises a plurality of sensor samples characterizing the environment that were each captured at different time points; processing a network input comprising the sensor data using a neural network to generate an occupancy prediction output for a region of the environment, wherein: the occupancy prediction output characterizes, for one or more future intervals of time after the current time point, a respective likelihood that the region of the environment will be occupied by an agent in the environment during the future interval of time; and the network input is provided to an input layer of the neural network, and the occupancy prediction output for the region of the environment is output by an output layer of the neural network; and providing the occupancy prediction output to a ...
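
A toy sketch of the input/output contract described in this abstract: a stack of time-indexed sensor samples rasterized onto a grid goes in, and per-future-interval occupancy likelihoods come out of a small convolutional network. The architecture, shapes, and names are assumptions, not the claimed network.

import torch
from torch import nn

class TinyOccupancyNet(nn.Module):
    """Input: (batch, T, H, W) grid of sensor samples captured at T time points.
    Output: (batch, F) occupancy likelihoods for F future time intervals."""
    def __init__(self, time_steps=5, future_intervals=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(time_steps, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, future_intervals)

    def forward(self, x):
        features = self.encoder(x).flatten(1)
        return torch.sigmoid(self.head(features))   # likelihood per future interval

net = TinyOccupancyNet()
samples = torch.rand(1, 5, 64, 64)                  # 5 sensor sweeps on a 64x64 grid
print(net(samples))                                  # e.g. tensor([[0.47, 0.52, 0.49]])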

09-06-2022 publication date

THREE-DIMENSIONAL LOCATION PREDICTION FROM IMAGES

Number: US20220180549A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting three-dimensional object locations from images. One of the methods includes obtaining a sequence of images that comprises, at each of a plurality of time steps, a respective image that was captured by a camera at the time step; generating, for each image in the sequence, respective pseudo-lidar features of a respective pseudo-lidar representation of a region in the image that has been determined to depict a first object; generating, for a particular image at a particular time step in the sequence, image patch features of the region in the particular image that has been determined to depict the first object; and generating, from the respective pseudo-lidar features and the image patch features, a prediction that characterizes a location of the first object in a three-dimensional coordinate system at the particular time step in the sequence.

1. A method performed by one or more computers, the method comprising: obtaining a temporal sequence of images that comprises, at each of a plurality of time steps, a respective image that was captured by a camera at the time step; generating, for each image in the temporal sequence, respective pseudo-lidar features of a respective pseudo-lidar representation of a region in the image that has been determined to depict a first object; generating, for a particular image at a particular time step in the temporal sequence, image patch features of the region in the particular image that has been determined to depict the first object; and generating, from the respective pseudo-lidar features and the image patch features, a prediction that characterizes a location of the first object in a three-dimensional coordinate system at the particular time step in the temporal sequence. 2. The method of claim 1, wherein the prediction includes an updated depth estimate that estimates a depth of a specified point on the first object at the ...

09-04-2020 publication date

Object localization using machine learning

Number: US20200110175A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location of a particular object relative to a vehicle. In one aspect, a method includes obtaining sensor data captured by one or more sensors of a vehicle. The sensor data is processed by a convolutional neural network to generate a sensor feature representation of the sensor data. Data is obtained which defines a particular spatial region in the sensor data that has been classified as including sensor data that characterizes the particular object. An object feature representation of the particular object is generated from a portion of the sensor feature representation corresponding to the particular spatial region. The object feature representation of the particular object is processed using a localization neural network to generate the location of the particular object relative to the vehicle.

23-04-2020 publication date

Object Action Classification For Autonomous Vehicles

Number: US20200125112A1
Assignee: Waymo LLC

Aspects of the disclosure relate to training and using a model for identifying actions of objects. For instance, LIDAR sensor data frames including an object bounding box corresponding to an object as well as an action label for the bounding box may be received. Each sensor frame is associated with a timestamp and is sequenced with respect to other sensor frames. Each given sensor data frame may be projected into a camera image of the object based on the timestamp associated with the given sensor data frame in order to provide fused data. The model may be trained using the fused data such that, in response to receiving fused data, the model outputs an action label for each object bounding box of the fused data. This output may then be used to control a vehicle in an autonomous driving mode.

18-05-2017 publication date

METHOD AND SYSTEM FOR MOVING OBJECT DETECTION WITH SINGLE CAMERA

Number: US20170140231A1
Author: Ayvaci Alper, Chen Sheng
Assignee:

Disclosed are systems and methods for detecting moving objects. A computer-implemented method for detecting moving objects comprises obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation.

1. A computer-implemented method for detecting moving objects, comprising: obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation. 2. The method of claim 1, wherein the camera comprises a monocular camera. 3. The method of claim 1, wherein the constraints from multi-view geometry comprise at least one of: an epipolar constraint between two-view and trifocal constraints from three-view. 5. The method of claim 4, wherein the pair of point correspondence is determined based on an optical flow between consecutive frames. 10. The method of claim 1, further comprising, for each frame T: computing an optical flow and a point trajectory; estimating fundamental matrices and a trifocal tensor; computing an epipolar moving objectness score and a trifocal moving objectness score for each trajectory; and forming the CRF on superpixels to determine moving labels. 11. A system for detecting moving objects, the ...
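
The epipolar part of the multi-view-geometry likelihood mentioned above can be sketched directly: for a tracked point, measure how far its correspondence lies from the epipolar line induced by the fundamental matrix; points consistent with camera ego-motion score near zero, independently moving points score high. The fundamental matrix below is a toy value and all names are assumptions.

import numpy as np

def epipolar_score(x1, x2, F):
    """Symmetric distance of the correspondence (x1 -> x2) from its epipolar lines.
    x1, x2: pixel coordinates (u, v); F: 3x3 fundamental matrix."""
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    l2 = F @ p1                      # epipolar line of x1 in image 2
    l1 = F.T @ p2                    # epipolar line of x2 in image 1
    d2 = abs(p2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(p1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

# Pure horizontal camera translation: epipolar lines are the image rows.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
print(epipolar_score((100, 50), (120, 50), F))   # ~0  : consistent with ego-motion
print(epipolar_score((100, 50), (120, 65), F))   # 15.0: likely a moving object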

23-06-2016 publication date

SYSTEM AND METHOD FOR PARTIALLY OCCLUDED OBJECT DETECTION

Number: US20160180192A1
Assignee:

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.

1. A computer-implemented method for partially occluded object detection, comprising: obtaining a response map for a detection window of an input image, wherein the response map is based on a trained model and the response map includes a root layer and a parts layer; determining visibility flags for each root cell of the root layer and each part of the parts layer based on the response map, wherein the visibility flag is one of visible or occluded; determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded, wherein the occlusion penalty is based on a location of the root cell or the part with respect to the detection window; determining a detection score for the detection window based on the visibility flags and the occlusion penalties; and generating an estimated visibility map for object detection based on the detection score. 2. The computer-implemented method of claim 1, wherein the trained model is a deformable parts model. 3. The computer-implemented method of claim 1, wherein obtaining the response map comprises determining a ...

03-08-2017 publication date

PARTIALLY OCCLUDED OBJECT DETECTION USING CONTEXT AND DEPTH ORDERING

Number: US20170220874A1
Assignee:

A system and method for verifying detection of partially occluded objects (e.g., pedestrians) in the vicinity of a vehicle. An image input device captures an image and/or video of surroundings. An object detector detects partially occluded pedestrians and other objects in received image information. The detection of a partially occluded pedestrian is verified when a detection window of the partially occluded pedestrian overlaps with a detection window of an occluding object, and the occluding object window is closer to the image input device than the partially occluded object window. Optionally, a range-finding sensor, such as a LIDAR device, determines a range to objects located in the direction of the partially occluded object. The detection of the partially occluded object is verified when the range of one of the other objects located in the direction of the partially occluded object is less than that of the partially occluded object.

1. A method for verifying detection of a first object partially occluded by a second object relative to a vehicle, the method comprising: receiving image information via an image input device; determining a first detection window bounding a first image in the image information corresponding to the first object; determining a second detection window bounding a second image in the image information corresponding to the second object; determining whether the first window and the second window overlap; determining a first distance to the first detection window and a second distance to the second detection window; comparing the first distance to the second distance, and if the first distance is greater than the second distance, verifying that the first object is partially occluded by the second object. 2. The method of claim 1, wherein verifying that the first object is partially occluded by the second object further includes: receiving an input from a depth sensor, the depth sensor input corresponding to a measured distance between the depth ...

23-09-2021 publication date

Interacted Object Detection Neural Network

Number: US20210295555A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating object interaction predictions using a neural network. One of the methods includes obtaining a sensor input derived from data generated by one or more sensors that characterizes a scene. The sensor input is provided to an object interaction neural network. The object interaction neural network is configured to process the sensor input to generate a plurality of object interaction outputs. Each respective object interaction output includes main object information and interacting object information. The respective object interaction outputs corresponding to the plurality of regions in the sensor input are received as output of the object interaction neural network.

30-09-2021 publication date

Automatic labeling of objects in sensor data

Number: US20210303956A1
Assignee: Waymo LLC

Aspects of the disclosure provide for automatically generating labels for sensor data. For instance, first sensor data for a first vehicle may be identified. This first sensor data may have been captured by a first sensor of the vehicle at a first location during a first point in time and may be associated with a first label for an object. Second sensor data for the vehicle may be identified. The second sensor data may have been captured by a second sensor of the vehicle at a second location at a second point in time outside of the first point in time. The second location is different from the first location. Whether the object is a static object may be determined. Based on the determination that the object is a static object, the first label may be used to automatically generate a second label for the second sensor data.
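
A schematic reading of the labeling flow above, assuming the object has already been judged static and that both captures can be expressed in a shared world frame: the first label is simply carried over to the second sensor data. The data structure and names are simplified assumptions.

from dataclasses import dataclass

@dataclass
class Label:
    category: str
    center_world_xyz: tuple   # object center in a shared world frame (metres)

def propagate_static_label(first_label, object_is_static):
    """Reuse a label captured at one time/location for sensor data captured at
    another, which is only safe when the object has been determined static."""
    if not object_is_static:
        return None                      # a moving object needs a fresh label
    return Label(first_label.category, first_label.center_world_xyz)

sign = Label("speed_limit_sign", (105.2, -3.4, 1.8))
second_label = propagate_static_label(sign, object_is_static=True)
print(second_label)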

04-10-2018 publication date

METHOD AND SYSTEM FOR MOVING OBJECT DETECTION WITH SINGLE CAMERA

Number: US20180285662A1
Author: Ayvaci Alper, Chen Sheng
Assignee:

Disclosed are systems and methods for detecting moving objects. A computer-implemented method for detecting moving objects comprises obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation.

1. A computer-implemented method for detecting moving objects, comprising: obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation. 2. The method of claim 1, wherein the camera comprises a monocular camera. 3. The method of claim 1, wherein the constraints from multi-view geometry comprise at least one of: an epipolar constraint between two-view and trifocal constraints from three-view. 5. The method of claim 4, wherein the pair of point correspondence is determined based on an optical flow between consecutive frames. 8. The method of claim 3, wherein the trifocal constraints from three-view are determined based at least in part on a trifocal moving objectness score defined as follows: γ(x_i^m, x_i^n, x_i^p) = d_pp(x_i^p, x̂_i^p), where x̂_pp^t″ is the estimated ...

10-08-2017 publication date

Partially occluded object detection using context and depth ordering

Number: WO2017136578A1
Assignee: HONDA MOTOR CO., LTD.

A system and method for verifying detection of partially occluded objects (e.g., pedestrians) in the vicinity of a vehicle. An image input device captures an image and/or video of surroundings. An object detector detects partially occluded pedestrians and other objects in received image information. The detection of a partially occluded pedestrian is verified when a detection window of the partially occluded pedestrian overlaps with a detection window of an occluding object, and the occluding object window is closer to the image input device than the partially occluded object window. Optionally, a range-finding sensor, such as a LIDAR device, determines a range to objects located in the direction of the partially occluded object. The detection of the partially occluded object is verified when the range of one of the other objects located in the direction of the partially occluded object is less than that of the partially occluded object.

02-08-2022 publication date

Occupancy prediction neural networks

Number: US11403853B2
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a future occupancy prediction for a region of an environment. In one aspect, a method comprises: receiving sensor data generated by a sensor system of a vehicle that characterizes an environment in a vicinity of the vehicle as of a current time point, wherein the sensor data comprises a plurality of sensor samples characterizing the environment that were each captured at different time points; processing a network input comprising the sensor data using a neural network to generate an occupancy prediction output for a region of the environment, wherein: the occupancy prediction output characterizes, for one or more future intervals of time after the current time point, a respective likelihood that the region of the environment will be occupied by an agent in the environment during the future interval of time.

04-03-2021 publication date

Occupancy prediction neural networks

Number: WO2021040910A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a future occupancy prediction for a region of an environment. In one aspect, a method comprises: receiving sensor data generated by a sensor system of a vehicle that characterizes an environment in a vicinity of the vehicle as of a current time point, wherein the sensor data comprises a plurality of sensor samples characterizing the environment that were each captured at different time points; processing a network input comprising the sensor data using a neural network to generate an occupancy prediction output for a region of the environment, wherein: the occupancy prediction output characterizes, for one or more future intervals of time after the current time point, a respective likelihood that the region of the environment will be occupied by an agent in the environment during the future interval of time.

26-10-2022 publication date

Unsupervised training of optical flow estimation neural networks

Number: EP4080452A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to predict optical flow. One of the methods includes obtaining a batch of one or more training image pairs; for each of the pairs: processing the first training image and the second training image using the neural network to generate a final optical flow estimate; generating a cropped final optical flow estimate from the final optical flow estimate; and training the neural network using the cropped optical flow estimate.

10-07-2018 publication date

Method and system for moving object detection with single camera

Number: US10019637B2
Author: Alper Ayvaci, SHENG Chen
Assignee: Honda Motor Co Ltd

Disclosed are systems and methods for detecting moving objects. A computer-implemented method for detecting moving objects comprises obtaining a streaming video captured by a camera; extracting an input image sequence including a series of images from the streaming video; tracking point features and maintaining a set of point trajectories for at least one of the series of images; measuring a likelihood for each point trajectory to determine whether it belongs to a moving object using constraints from multi-view geometry; and determining a conditional random field (CRF) on an entire frame to obtain a moving object segmentation.

15-05-2018 publication date

System and method for partially occluded object detection

Number: US9971934B2
Assignee: Honda Motor Co Ltd

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.

16-02-2023 publication date

Automatic labeling of objects in sensor data

Number: US20230046289A1
Assignee: Waymo LLC

Aspects of the disclosure provide for automatically generating labels for sensor data. For instance, first sensor data for a vehicle may be identified. This first sensor data may have been captured by a first sensor of the vehicle at a first location during a first point in time and may be associated with a first label for an object. Second sensor data for the vehicle may be identified. The second sensor data may have been captured by a second sensor of the vehicle at a second location at a second point in time outside of the first point in time. The second location is different from the first location. A determination may be made as to whether the object is a static object. Based on the determination that the object is a static object, the first label may be used to automatically generate a second label for the second sensor data.

02-02-2023 publication date

Optical flow based motion detection

Number: US20230033989A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating motion detection based on optical flow. One of the methods includes obtaining a first image of a scene in an environment taken by an agent at a first time point and a second image of the scene at a second later time point. A point cloud characterizing the scene in the environment is obtained. A predicted optical flow is determined between the first image and the second image. A respective initial flow prediction for the point that represents motion of the point between the two time points is determined. A respective ego motion flow estimate for the point that represents a motion of the point induced by ego motion of the agent is determined. A respective motion prediction that indicates whether the point was static or in motion between the two time points is determined.
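
The decision rule outlined above compares, per point, the observed flow between the two images with the flow that the agent's ego-motion alone would induce; a large residual marks the point as moving. The threshold and names below are illustrative assumptions.

import numpy as np

def motion_predictions(predicted_flow, ego_motion_flow, threshold_px=2.0):
    """predicted_flow, ego_motion_flow: (N, 2) per-point flow vectors in pixels.
    Returns a boolean array: True where the point appears to be moving."""
    residual = np.linalg.norm(np.asarray(predicted_flow) - np.asarray(ego_motion_flow), axis=1)
    return residual > threshold_px

flow     = [[5.1, 0.2], [12.0, -3.5]]   # observed flow for two scene points
ego_flow = [[5.0, 0.0], [4.0,  0.1]]    # flow explained by the agent's own motion
print(motion_predictions(flow, ego_flow))  # [False  True]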

09-04-2020 publication date

Object localization using machine learning

Number: WO2020072193A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location of a particular object relative to a vehicle. In one aspect, a method includes obtaining sensor data captured by one or more sensors of a vehicle. The sensor data is processed by a convolutional neural network to generate a sensor feature representation of the sensor data. Data is obtained which defines a particular spatial region in the sensor data that has been classified as including sensor data that characterizes the particular object. An object feature representation of the particular object is generated from a portion of the sensor feature representation corresponding to the particular spatial region. The object feature representation of the particular object is processed using a localization neural network to generate the location of the particular object relative to the vehicle.

24-08-2023 publication date

End-to-end processing in automated driving systems

Number: WO2023158706A1
Assignee: Waymo LLC

The described aspects and implementations enable efficient object detection and tracking. In one implementation, disclosed is a method and a system to perform the method, the system including the sensing system configured to obtain sensing data characterizing an environment of the vehicle. The system further includes a data processing system operatively coupled to the sensing system and configured to process the sensing data using a first (second) set of neural network (NN) layers to obtain a first (second) set of features for a first (second) region of the environment, the first (second) set of features is associated with a first (second) spatial resolution. The data processing system is further to process the two sets of features using a second set of NN layers to detect a location of object(s) in the environment of the vehicle and a state of motion of the object(s).

24-08-2023 publication date

Camera-radar data fusion for efficient object detection

Number: WO2023158642A1
Assignee: Waymo LLC

A method includes obtaining, by a processing device, input data derived from a set of sensors associated with an autonomous vehicle (AV), extracting, by the processing device from the input data, a plurality of sets of features, generating, by the processing device using the plurality of sets of features, a fused bird's-eye view (BEV) grid. The fused BEV grid is generated based on a first BEV grid having a first scale and a second BEV grid having a second scale different from the first scale. The method further includes providing, by the processing device, the fused BEV grid for object detection.
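
One hedged way to picture the two-scale fusion described above: the coarser bird's-eye-view grid is upsampled to the finer resolution and the two grids are stacked along the channel axis before being handed to a detector. Grid sizes, nearest-neighbour upsampling, and names are assumptions.

import numpy as np

def fuse_bev_grids(fine_grid, coarse_grid):
    """fine_grid: (C1, H, W); coarse_grid: (C2, H/k, W/k) with integer k.
    Upsample the coarse grid (nearest neighbour) and concatenate channels."""
    k = fine_grid.shape[1] // coarse_grid.shape[1]
    upsampled = coarse_grid.repeat(k, axis=1).repeat(k, axis=2)
    return np.concatenate([fine_grid, upsampled], axis=0)

camera_bev = np.random.rand(32, 128, 128)   # finer-scale BEV features
radar_bev  = np.random.rand(8, 64, 64)      # coarser-scale BEV features
fused = fuse_bev_grids(camera_bev, radar_bev)
print(fused.shape)                           # (40, 128, 128)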

03-10-2023 publication date

Occupancy prediction neural networks

Number: US11772654B2
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a future occupancy prediction for a region of an environment. In one aspect, a method comprises: receiving sensor data generated by a sensor system of a vehicle that characterizes an environment in a vicinity of the vehicle as of a current time point, wherein the sensor data comprises a plurality of sensor samples characterizing the environment that were each captured at different time points; processing a network input comprising the sensor data using a neural network to generate an occupancy prediction output for a region of the environment, wherein: the occupancy prediction output characterizes, for one or more future intervals of time after the current time point, a respective likelihood that the region of the environment will be occupied by an agent in the environment during the future interval of time.

07-07-2021 publication date

Object localization using machine learning

Number: EP3844670A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location of a particular object relative to a vehicle. In one aspect, a method includes obtaining sensor data captured by one or more sensors of a vehicle. The sensor data is processed by a convolutional neural network to generate a sensor feature representation of the sensor data. Data is obtained which defines a particular spatial region in the sensor data that has been classified as including sensor data that characterizes the particular object. An object feature representation of the particular object is generated from a portion of the sensor feature representation corresponding to the particular spatial region. The object feature representation of the particular object is processed using a localization neural network to generate the location of the particular object relative to the vehicle.

01-06-2006 publication date

Region competition segmentation method using local watershed operators

Number: DE102005047329A1
Assignee: Siemens Corporate Research Inc

An exemplary method for segmenting an object from a structure of interest is presented. A point in an image selected by a user is received (at 1205). A watershed transform is performed for the user-selected point (at 1210) to determine a first object. A neighboring watershed region is added to the first object (at 1215), based on region growing and a smooth-boundary condition, to form an updated object.
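
The flow described above (user-selected seed, watershed transform, then growing into a neighboring watershed region) can be approximated with off-the-shelf tools. The sketch below runs scikit-image's watershed on a gradient image with the user's click as one marker; the patent's region-growing and smooth-boundary merging criterion is not reproduced, and all names are assumptions.

import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_from_click(image, click_rc):
    """image: 2-D grayscale array; click_rc: (row, col) chosen by the user.
    Returns a boolean mask for the watershed basin containing the click."""
    gradient = sobel(image)                        # ridges separate basins
    markers = np.zeros(image.shape, dtype=int)
    markers[click_rc] = 1                          # user-selected seed
    markers[0, 0] = 2                              # background seed (assumption)
    labels = watershed(gradient, markers)
    return labels == 1

# Bright disc on a dark background; click inside the disc.
yy, xx = np.mgrid[:64, :64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
mask = segment_from_click(image, (32, 32))
print(mask.sum())   # roughly the disc area (~700 pixels)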

19-07-2023 publication date

Occupancy prediction neural networks

Number: EP4000015A4
Assignee: Waymo LLC

20-05-2021 publication date

Interacted Object Detection Neural Network

Number: US20210150752A1
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating object interaction predictions using a neural network. One of the methods includes obtaining a sensor input derived from data generated by one or more sensors that characterizes a scene. The sensor input is provided to an object interaction neural network. The object interaction neural network is configured to process the sensor input to generate a plurality of object interaction outputs. Each respective object interaction output includes main object information and interacting object information. The respective object interaction outputs corresponding to the plurality of regions in the sensor input are received as output of the object interaction neural network.

06-06-2024 publication date

Multi-frame temporal aggregation and dense motion estimation for autonomous vehicles

Number: WO2024118992A1
Assignee: Waymo LLC

A method includes obtaining, by a processing device, input data derived from a set of sensors associated with an autonomous vehicle (AV). The input data includes camera data and radar data. The method further includes extracting, by the processing device from the input data, a plurality of sets of bird's-eye view (BEV) features. Each set of BEV features corresponds to a respective timestep. The method further includes generating, by the processing device from the plurality of sets of BEV features, an object flow for at least one object. Generating the object flow includes performing at least one of: multi-frame temporal aggregation or multi-frame dense motion estimation. The method further includes causing, by the processing device, a driving path of the AV to be modified in view of the object flow.

21-03-2024 publication date

Object identification in bird's-eye view reference frame with explicit depth estimation co-training

Number: US20240096105A1
Assignee: Waymo LLC

The described aspects and implementations enable efficient detection and classification of objects with machine learning models that deploy a bird's-eye view representation and are trained using depth ground truth data. In one implementation, disclosed are systems and techniques that include obtaining images, generating, using a first neural network (NN), feature vectors (FVs) and depth distributions for pixels of the images, wherein the first NN is trained using training images and a depth ground truth data for the training images. The techniques further include obtaining a feature tensor (FT) in view of the FVs and the depth distributions, and processing the obtained FTs, using a second NN, to identify one or more objects depicted in the images.
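
The combination of per-pixel feature vectors and depth distributions described above is commonly realized as an outer product: each pixel's features are spread across depth bins in proportion to its predicted depth distribution, giving a feature tensor that can then be mapped into a bird's-eye-view frame. The sketch shows only that lifting step; shapes and names are assumptions.

import numpy as np

def lift_features(feature_vectors, depth_distributions):
    """feature_vectors: (N_pixels, C); depth_distributions: (N_pixels, D), rows sum to 1.
    Returns a (N_pixels, D, C) feature tensor: features weighted per depth bin."""
    fv = np.asarray(feature_vectors, float)
    dd = np.asarray(depth_distributions, float)
    return dd[:, :, None] * fv[:, None, :]

fv = np.random.rand(4, 8)                       # 4 pixels, 8-channel features
dd = np.random.rand(4, 16)
dd /= dd.sum(axis=1, keepdims=True)             # 16 depth bins per pixel
ft = lift_features(fv, dd)
print(ft.shape)                                  # (4, 16, 8)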

15-02-2024 publication date

Object identification in bird's-eye view reference frame with explicit depth estimation co-training

Number: WO2024035658A1
Assignee: Waymo LLC

The described aspects and implementations enable efficient detection and classification of objects with machine learning models that deploy a bird's-eye view representation and are trained using depth ground truth data. In one implementation, disclosed are systems and techniques that include obtaining images, generating, using a first neural network (NN), feature vectors (FVs) and depth distributions for pixels of the images, wherein the first NN is trained using training images and a depth ground truth data for the training images. The techniques further include obtaining a feature tensor (FT) in view of the FVs and the depth distributions, and processing the obtained FTs, using a second NN, to identify one or more objects depicted in the images.

30-04-2020 publication date

Object action classification for autonomous vehicles

Number: WO2020086358A1
Assignee: Waymo LLC

Aspects of the disclosure relate to training and using a model 270 for identifying actions of objects. For instance, LIDAR sensor data frames 250 including an object bounding box corresponding to an object as well as an action label for the bounding box may be received. Each sensor frame is associated with a timestamp and is sequenced with respect to other sensor frames. Each given sensor data frame may be projected into a camera image 700 of the object based on the timestamp associated with the given sensor data frame in order to provide fused data. The model may be trained using the fused data such that, in response to receiving fused data, the model outputs an action label for each object bounding box of the fused data. This output may then be used to control a vehicle 100 in an autonomous driving mode.

21-09-2023 publication date

End-to-end processing in automated driving systems

Number: US20230294687A1
Assignee: Waymo LLC

The described aspects and implementations enable efficient object detection and tracking. In one implementation, disclosed is a method and a system to perform the method, the system including the sensing system configured to obtain sensing data characterizing an environment of the vehicle. The system further includes a data processing system operatively coupled to the sensing system and configured to process the sensing data using a first (second) set of neural network (NN) layers to obtain a first (second) set of features for a first (second) region of the environment, the first (second) set of features is associated with a first (second) spatial resolution. The data processing system is further to process the two sets of features using a second set of NN layers to detect a location of object(s) in the environment of the vehicle and a state of motion of the object(s).

12-09-2023 publication date

Contrastive learning for object detection

Number: US11756309B2
Assignee: Waymo LLC

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using contrastive learning. One of the methods includes obtaining a network input representing an environment; processing the network input using a first subnetwork of the neural network to generate a respective embedding for each location in the environment; processing the embeddings for each location in the environment using a second subnetwork of the neural network to generate a respective object prediction for each location; determining, for each of a plurality of pairs of the plurality of locations in the environment, whether the respective object predictions of the pair of locations characterize the same possible object or different possible objects; computing a respective contrastive loss value for each of the plurality of pairs of locations; and updating values for a plurality of parameters of the first subnetwork using the computed contrastive loss values.
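
A small sketch of the pairwise contrastive objective outlined above: embeddings at two locations are pulled together when their object predictions characterize the same possible object and pushed at least a margin apart otherwise. The margin-based form and names are assumptions; the abstract does not specify this exact loss.

import numpy as np

def contrastive_loss(emb_a, emb_b, same_object, margin=1.0):
    """emb_a, emb_b: embedding vectors for two locations in the environment.
    same_object: True if their object predictions characterize the same object."""
    d = np.linalg.norm(np.asarray(emb_a, float) - np.asarray(emb_b, float))
    if same_object:
        return d ** 2                       # pull matching locations together
    return max(0.0, margin - d) ** 2        # push non-matching ones apart

a, b, c = np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.9, 0.1])
print(contrastive_loss(a, b, same_object=True))    # small: same object
print(contrastive_loss(a, c, same_object=False))   # 0.0: already beyond the margin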
