Total found: 73. Displayed: 73.

Publication date: 18-05-2017

DUMMY CORE PLUS PLATING RESIST RESTRICT RESIN PROCESS AND STRUCTURE

Number: US20170142828A1
Assignee: Multek Technologies Limited

A printed circuit board (PCB) has multiple layers, where select portions of inner layer circuitry, referred to as inner core circuitry, are exposed from the remaining layers. The PCB having an exposed inner core circuitry is formed using a dummy core plus plating resist process. The select inner core circuitry is part of an inner core. The inner core corresponding to the exposed inner core circuitry forms a semi-flexible PCB portion. The semi-flexible PCB portion is an extension of the remaining adjacent multiple layer PCB. The remaining portion of the multiple layer PCB is rigid. The inner core is common to both the semi-flexible PCB portion and the remaining rigid PCB portion.

Publication date: 22-01-2019

Live updates for synthetic long exposures

Number: US0010187587B2
Assignee: Google LLC

An image sensor of an image capture device may capture an image. The captured image may be stored in a buffer of two or more previously-captured images. An oldest image of the two or more previously-captured images may be removed from the buffer. An aggregate image of the images in the buffer may be updated. This updating may involve subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image. A viewfinder of the image capture device may display a representation of the aggregate image.
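The rolling-buffer update described above maps directly onto a few lines of array code. A minimal sketch, assuming frames arrive as equal-sized NumPy arrays; the class and method names are illustrative, not taken from the patent:

```python
from collections import deque
import numpy as np

class SyntheticLongExposure:
    """Maintains a running sum of the last `size` frames (illustrative sketch)."""

    def __init__(self, size):
        self.frames = deque(maxlen=size)
        self.aggregate = None  # running sum of the buffered frames

    def add_frame(self, frame):
        frame = frame.astype(np.float32)
        if self.aggregate is None:
            self.aggregate = np.zeros_like(frame)
        if len(self.frames) == self.frames.maxlen:
            # Subtract the oldest frame before the deque discards it.
            self.aggregate -= self.frames[0]
        self.frames.append(frame)
        self.aggregate += frame  # add the newly captured frame
        return self.viewfinder_image()

    def viewfinder_image(self):
        # Average the sum so the preview stays in the original value range.
        return np.clip(self.aggregate / len(self.frames), 0, 255).astype(np.uint8)
```

With a buffer of, say, 30 frames captured at 30 fps, the preview behaves like a one-second synthetic exposure while still refreshing at the capture rate.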

Publication date: 31-08-2021

Scalable volumetric 3D reconstruction

Number: US0011107272B2

Scalable volumetric reconstruction is described whereby data from a mobile environment capture device is used to form a 3D model of a real-world environment. In various examples, a hierarchical structure is used to store the 3D model where the structure comprises a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node. In various examples, parallel processing is used to enable captured data to be integrated into the 3D model and/or to enable images to be rendered from the 3D model. In an example, metadata is computed and stored in the hierarchical structure and used to enable space skipping and/or pruning of the hierarchical structure.
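As a rough illustration of the coarse-to-fine storage idea (not the patent's actual data layout), a node of such a hierarchy could hold a fixed-resolution voxel grid plus optional children covering octants of its volume:

```python
import numpy as np

class VoxelNode:
    """One node of a coarse-to-fine voxel hierarchy (illustrative sketch)."""

    def __init__(self, origin, extent, resolution, depth, max_depth):
        self.origin = np.asarray(origin, dtype=np.float32)   # world-space corner
        self.extent = float(extent)                           # cube edge length
        self.grid = np.zeros((resolution,) * 3, np.float32)   # e.g. truncated signed distances
        self.children = {}                                     # octant index -> VoxelNode
        self.depth, self.max_depth = depth, max_depth

    def refine(self, octant):
        """Allocate a child covering one octant, doubling the effective resolution."""
        if self.depth >= self.max_depth or octant in self.children:
            return self.children.get(octant)
        half = self.extent / 2.0
        offset = np.array([(octant >> i) & 1 for i in range(3)], np.float32) * half
        child = VoxelNode(self.origin + offset, half, self.grid.shape[0],
                          self.depth + 1, self.max_depth)
        self.children[octant] = child
        return child
```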

Publication date: 02-11-2017

DISCONNECT CAVITY BY PLATING RESIST PROCESS AND STRUCTURE

Number: US20170318685A1
Authors: Jiawen Chen, Pui Yin Yu
Assignee: Multek Technologies Limited

A disconnect cavity is formed within a PCB, where the disconnect cavity is electrically disconnected from a PCB landing layer. The disconnect cavity is formed using a plating resist process which does not require low flow prepreg nor selective copper etching. Plating resist is printed on a core structure selectively positioned within a PCB stack-up. The volume occupied by the plating resist forms a subsequently formed disconnect cavity. After lamination of the PCB stack-up, depth control milling, drilling and electroless copper plating are performed, followed by a plating resist stripping process to substantially remove the plating resist and all electroless copper plated to the plating resist, thereby forming the disconnect cavity. In a subsequent copper plating process, without electric connectivity copper cannot be plated to the side walls and bottom surface of the disconnect cavity, resulting in the disconnect cavity wall being electrically disconnected from the PCB landing layer. 1. A printed circuit board comprising:a. a laminated stack comprising a plurality of non-conducting layers and a plurality of conductive layers;b. a via formed from an outer surface of the laminated stack and terminating within the laminated stack at a terminating end having a terminating surface; andc. a disconnect cavity at the terminating end of the via, wherein the disconnect cavity comprises the terminating surface and disconnect cavity side walls, further wherein the terminating surface and the disconnect cavity side walls are free of conductive plating.2. The printed circuit board of wherein the via comprises via side walls extending from the outer surface to the disconnect cavity claim 1 , wherein the via side walls are plated with conductive material.3. The printed circuit board of wherein the disconnect cavity further comprises an opposing surface opposite the terminating surface claim 1 , wherein the opposing surface has an opening coincident with the via.4. The printed ...

Publication date: 22-09-2015

Systems and methods for push-button slow motion

Number: US0009143693B1
Assignee: Google Inc.

Imaging systems can often gather higher quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live view display and image and other information enhancements, which utilize in-line computation and constant control. The disclosure includes techniques for enabling push-button slow motion effects through buffer management and the adjustment of a display frame rate.

Publication date: 08-03-2018

Graphic Interface for Real-Time Vision Enhancement

Number: US20180067312A1
Assignee: Google LLC

Imaging systems can often gather higher quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live view user interface with two or more interactive features or effects that may be controllable in real-time. Specifically, the disclosed devices and methods may include a live view display and image and other information enhancements, which utilize in-line computation and constant control.

Publication date: 08-03-2016

Rendering images with volumetric shadows using rectified height maps for independence in processing camera rays

Number: US0009280848B1
Assignee: Disney Enterprises Inc.

Rendering a scene with participating media is done by generating a depth map from a camera viewpoint and a shadow map from a light source, converting the shadow map using epipolar rectification to form a rectified shadow map (or generating the rectified shadow map directly), generating an approximation to visibility terms in a scattering integral, then computing a 1D min-max mipmap or other acceleration data structure for rectified shadow map rows and traversing that mipmap/data structure to find lit segments to accumulate values for the scattering integral for specific camera rays, and generating rendered pixel values that take into account accumulated values for the scattering integral for the camera rays. The scattering near an epipole of the rectified shadow map might be done using brute force ray marching when the epipole is on or near the screen. The process can be implemented using a GPU for parallel operations.
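The 1D min-max mipmap mentioned above is just a chain of pairwise min/max reductions over each rectified shadow-map row. A small sketch, assuming the row is a NumPy array of depths; padding odd-length levels is an implementation choice, not something the patent specifies:

```python
import numpy as np

def build_minmax_mipmap(row):
    """Per-level (min, max) reductions over one rectified shadow-map row."""
    mins = np.asarray(row, dtype=np.float32)
    maxs = mins.copy()
    levels = [(mins, maxs)]
    while len(mins) > 1:
        if len(mins) % 2:                      # pad odd lengths so no texel is lost
            mins = np.append(mins, mins[-1])
            maxs = np.append(maxs, maxs[-1])
        mins = np.minimum(mins[0::2], mins[1::2])
        maxs = np.maximum(maxs[0::2], maxs[1::2])
        levels.append((mins, maxs))
    return levels
```

During traversal, a camera ray's depth over a span is compared against these bounds to skip spans that are entirely shadowed or entirely lit, descending to finer levels only where the answer is ambiguous.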

Publication date: 16-05-2024

Imaging-Enabled Bioreactor for Ex Vivo Human Airway Tissues

Number: US20240161291A1

A bioreactor is disclosed that can be used to study living tissues (e.g., lung tissues) in an environment that mimics the natural environment. Pressure, flow and force modules enable imitation of in vivo conditions. An integrated imaging module allows the airway tissues and cells of interest to be visualized and monitored continuously and non-destructively at the single-cell level. A method of using the created systems to quantify mucociliary fluid movements across the luminal surface of in vitro-cultured human airway tissue via in situ particle tracking and analysis is also disclosed. Another method of use of systems created in accordance with an embodiment of the present invention is to generate in vitro-cultured human airway tissue with severely impaired mucociliary flow by depositing thick viscous mucus-mimetic fluid onto the airway lumen. Methods of de-epithelialization and replacement of living cells are also disclosed.

Publication date: 21-05-2020

SCALABLE VOLUMETRIC 3D RECONSTRUCTION

Number: US20200160597A1
Assignee: Microsoft Technology Licensing, LLC

Scalable volumetric reconstruction is described whereby data from a mobile environment capture device is used to form a 3D model of a real-world environment. In various examples, a hierarchical structure is used to store the 3D model where the structure comprises a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node. In various examples, parallel processing is used to enable captured data to be integrated into the 3D model and/or to enable images to be rendered from the 3D model. In an example, metadata is computed and stored in the hierarchical structure and used to enable space skipping and/or pruning of the hierarchical structure. 120-. (canceled)21. A computer-implemented method comprising:receiving, at a processor, a stream of depth maps of a real-world environment captured by a mobile environment capture device, and also receiving at the processor a position and an orientation of the mobile environment capture device associated with each depth map;calculating, from the depth maps, a three-dimensional (3D) model comprising values representing surfaces in the real-world environment;storing in memory of a parallel processing unit the 3D model;calculating an active region of the real-world environment using a current position and orientation of the mobile environment capture device;mapping the active region to a working set of the memory;streaming values of the 3D model between the memory of the parallel processing unit and memory of a host device on the basis of the mapping.22. The method as claimed in claim 21 , wherein the 3D model is a 3D volume.23. The method as claimed in claim 21 , wherein the 3D model is stored in a hierarchical structure.24. The method as claimed in claim 23 , wherein storing the 3D model in the hierarchical structure ...

Publication date: 20-06-2024

NEURAL PHOTOFINISHER DIGITAL CONTENT STYLIZATION

Number: US20240202989A1
Assignee: Adobe Inc.

Digital content stylization techniques are described that leverage a neural photofinisher to generate stylized digital images. In one example, the neural photofinisher is implemented as part of a stylization system to train a neural network to perform digital image style transfer operations using reference digital content as training data. The training includes calculating a style loss term that identifies a particular visual style of the reference digital content. Once trained, the stylization system receives a digital image and generates a feature map of a scene depicted by the digital image. Based on the feature map as well as the style loss, the stylization system determines visual parameter values to apply to the digital image to incorporate a visual appearance of the particular visual style. The stylization system generates the stylized digital image by applying the visual parameter values to the digital image automatically and without user intervention.
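The "style loss term" is not spelled out in the abstract; a common way to realise it, shown here purely as an assumption, is to compare Gram matrices of feature maps extracted from the reference content and from the image being stylized:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation of a feature map with shape (H, W, C)."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(image_features, reference_features):
    """Squared Frobenius distance between Gram matrices (a common style-loss form;
    the patent does not specify this exact formulation)."""
    diff = gram_matrix(image_features) - gram_matrix(reference_features)
    return float(np.sum(diff ** 2))
```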

Publication date: 06-02-2014

ANIMATING OBJECTS USING THE HUMAN BODY

Number: US20140035901A1
Assignee: MICROSOFT CORPORATION

Methods of animating objects using the human body are described. In an embodiment, a deformation graph is generated from a mesh which describes the object. Tracked skeleton data is received which is generated from sensor data and the tracked skeleton is then embedded in the graph. Subsequent motion which is captured by the sensor result in motion of the tracked skeleton and this motion is used to define transformations on the deformation graph. The transformations are then applied to the mesh to generate an animation of the object which corresponds to the captured motion. In various examples, the mesh is generated by scanning an object and the deformation graph is generated using orientation-aware sampling such that nodes can be placed close together within the deformation graph where there are sharp corners or other features with high curvature in the object. 1. A method of animating an object comprising:generating, by a processor, a deformation graph automatically from an input mesh defining the object;receiving body tracking data defining positions of one or more points on a body;attaching points on a skeleton to points on the deformation graph using the body tracking data;transforming the deformation graph in real-time based on motion of the tracked skeleton by computing a plurality of transformations; anddynamically applying the plurality of transformations to the mesh to render a corresponding animation of the object.2. A method according to claim 1 , further comprising creating the input mesh by:generating a 3D volumetric reconstruction of a scene scanned by a user with a depth camera;segmenting an object from the 3D volumetric reconstruction of the scene; andextracting a geometric isosurface from the segmented portion of the 3D volumetric reconstruction.3. A method according to claim 1 , wherein the deformation graph is generated automatically from the input mesh using orientation-aware sampling.4. A method according to claim 1 , wherein generating a ...

Publication date: 14-05-2019

Disconnect cavity by plating resist process and structure

Number: US0010292279B2

A disconnect cavity is formed within a PCB, where the disconnect cavity is electrically disconnected from a PCB landing layer. The disconnect cavity is formed using a plating resist process which does not require low flow prepreg nor selective copper etching. Plating resist is printed on a core structure selectively positioned within a PCB stack-up. The volume occupied by the plating resist forms a subsequently formed disconnect cavity. After lamination of the PCB stack-up, depth control milling, drilling and electroless copper plating are performed, followed by a plating resist stripping process to substantially remove the plating resist and all electroless copper plated to the plating resist, thereby forming the disconnect cavity. In a subsequent copper plating process, without electric connectivity copper cannot be plated to the side walls and bottom surface of the disconnect cavity, resulting in the disconnect cavity wall being electrically disconnected from the PCB landing layer.

Publication date: 03-01-2017

Animating objects using the human body

Number: US0009536338B2

Methods of animating objects using the human body are described. In an embodiment, a deformation graph is generated from a mesh which describes the object. Tracked skeleton data is received which is generated from sensor data and the tracked skeleton is then embedded in the graph. Subsequent motion which is captured by the sensor result in motion of the tracked skeleton and this motion is used to define transformations on the deformation graph. The transformations are then applied to the mesh to generate an animation of the object which corresponds to the captured motion. In various examples, the mesh is generated by scanning an object and the deformation graph is generated using orientation-aware sampling such that nodes can be placed close together within the deformation graph where there are sharp corners or other features with high curvature in the object.
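A compact sketch of how per-node transformations on a deformation graph can be blended onto mesh vertices, in the style of standard embedded deformation; the distance-based weighting is an assumption, not the patent's exact scheme:

```python
import numpy as np

def deform_mesh(vertices, node_positions, node_rotations, node_translations, k=4):
    """Blend per-node rigid transforms onto mesh vertices (embedded-deformation style)."""
    vertices = np.asarray(vertices, dtype=np.float64)
    node_positions = np.asarray(node_positions, dtype=np.float64)
    deformed = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        dists = np.linalg.norm(node_positions - v, axis=1)
        nearest = np.argsort(dists)[:k]              # the k closest graph nodes
        weights = 1.0 / (dists[nearest] + 1e-8)
        weights /= weights.sum()
        for w, j in zip(weights, nearest):
            # Rotate the vertex about node j, then apply that node's translation.
            local = node_rotations[j] @ (v - node_positions[j]) + node_positions[j]
            deformed[i] += w * (local + node_translations[j])
    return deformed
```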

Publication date: 24-11-2022

Defocus Blur Removal and Depth Estimation Using Dual-Pixel Image Data

Number: US20220375042A1

A method includes obtaining dual-pixel image data that includes a first sub-image and a second sub-image, and generating an in-focus image, a first kernel corresponding to the first sub-image, and a second kernel corresponding to the second sub-image. A loss value may be determined using a loss function that determines a difference between (i) a convolution of the first sub-image with the second kernel and (ii) a convolution of the second sub-image with the first kernel, and/or a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second kernel. Based on the loss value and the loss function, the in-focus image, the first kernel, and/or the second kernel, may be updated and displayed. 1. A computer-implemented method comprising:obtaining dual-pixel image data comprising a first sub-image and a second sub-image;determining (i) an in-focus image, (ii) a first blur kernel corresponding to the first sub-image, and (iii) a second blur kernel corresponding to the second sub-image;determining a loss value using a loss function comprising one or more of: an equivalence loss term configured to determine a difference between (i) a convolution of the first sub-image with the second blur kernel and (ii) a convolution of the second sub-image with the first blur kernel, or a data loss term configured to determine a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first blur kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second blur kernel;based on the loss value and the loss function, updating one or more of: (i) the in-focus image, (ii) the first blur kernel, or (iii) the second blur kernel; andgenerating image data based on one or more of: (i) the in-focus image as updated, (ii) the first blur kernel as ...

Publication date: 05-03-2020

Dark Flash Photography With A Stereo Camera

Number: US20200077076A1

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions. 1. A device comprising:a first camera, wherein the first camera is operable to generate green image data based on green light received by the first camera, blue image data based on blue light received by the first camera, and red image data based on red light received by the first camera;a second camera, wherein the second camera is operable to generate first visible image data based on light at a first visible wavelength that is received by the second camera and first invisible image data based on light at a first invisible wavelength received by the second camera;a flash, wherein the flash is operable to emit light at the first invisible wavelength; and operating the flash, during a first period of time, to illuminate a scene with light at the first invisible wavelength;', 'operating the first camera, during the first period of time, to generate a first image of the scene, wherein the first image comprises information indicative of red, green, and blue light received from the scene; and', ...

Publication date: 31-12-2019

Live updates for synthetic long exposures

Number: US0010523875B2
Assignee: Google Inc., Google LLC

An image sensor of an image capture device may capture an image. The captured image may be stored in a buffer of two or more previously-captured images. An oldest image of the two or more previously-captured images may be removed from the buffer. An aggregate image of the images in the buffer may be updated. This updating may involve subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image. A viewfinder of the image capture device may display a representation of the aggregate image.

Publication date: 25-02-2021

Depth Prediction from Dual Pixel Images

Number: US20210056349A1

Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system. 1. A computer-implemented method , comprising:receiving, at a computing device, a dual pixel image of at least a foreground object, the dual pixel image comprising a plurality of dual pixels, wherein a dual pixel of the plurality of dual pixels comprises a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image;training a machine learning system to determine a depth map associated with the dual pixel image using the computing device, wherein training the machine learning system to determine the depth map comprises training the machine learning system to determine the depth map based on a loss function that is invariant to depth ambiguities; andproviding the trained machine learning system using the computing device.2. The computer-implemented method of claim 1 , wherein the depth ambiguities comprise unknown scale and offset values.3. The computer-implemented method of claim 2 , where the unknown scale and offset values are associated with characteristics of a device used to capture the dual pixel image.4. The computer-implemented method of claim 1 , wherein the foreground object has a first object type claim 1 , and wherein training the machine learning system to determine the depth map comprises training the machine ...
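The claims above describe a loss that is invariant to unknown scale and offset in the depth. One standard way to get that property, shown here as an assumption rather than the patented formulation, is to fit the best affine map from the prediction to the reference before scoring the residual:

```python
import numpy as np

def affine_invariant_loss(predicted, reference):
    """Depth loss that ignores a global scale/offset ambiguity (one common realisation)."""
    p, r = predicted.ravel(), reference.ravel()
    # Fit a, b minimising ||a*p + b - r||^2, then score the remaining error.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
    return float(np.mean((a * p + b - r) ** 2))
```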

Publication date: 19-10-2017

Live Updates for Synthetic Long Exposures

Number: US20170302840A1

An image sensor of an image capture device may capture an image. The captured image may be stored in a buffer of two or more previously-captured images. An oldest image of the two or more previously-captured images may be removed from the buffer. An aggregate image of the images in the buffer may be updated. This updating may involve subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image. A viewfinder of the image capture device may display a representation of the aggregate image. 1. A method comprising:capturing, by an image sensor of an image capture device, an image of a scene;storing the captured image in a buffer of two or more previously-captured images of the scene;removing, from the buffer, an oldest image of the two or more previously-captured images;updating an aggregate image of the images in the buffer, wherein the updating involves subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image; anddisplaying, on a viewfinder of the image capture device, a representation of the aggregate image.2. The method of claim 1 , wherein the aggregate image is an additive representation of the images in the buffer.3. The method of claim 1 , further comprising:while displaying the representation of the aggregate image, capturing a further image of the scene.4. The method of claim 1 , wherein the viewfinder of the image capture device has a refresh rate defined by a refresh time interval claim 1 , wherein the representation of the aggregate image has a synthetic exposure length defined by a synthetic exposure time interval claim 1 , wherein the synthetic exposure time interval is greater than the refresh time interval.5. The method of claim 4 , wherein synthetic exposure time interval is at least 1000 milliseconds and the refresh time interval is less than 150 milliseconds.6. The method of ...

Publication date: 18-05-2017

RIGID-BEND PRINTED CIRCUIT BOARD FABRICATION

Number: US20170142829A1
Assignee: Multek Technologies Limited

A printed circuit board (PCB) has multiple layers, where select portions of one or more conductive layers, referred to as core circuitry, form a semi-flexible PCB portion that is protected by an exposed prepreg layer. The semi-flexible PCB portion having an exposed prepreg layer is formed using a dummy core process that leaves the exposed prepreg layer smooth and undamaged. The core circuitry is part of a core structure. The semi-flexible PCB portion is an extension of the remaining adjacent multiple layer PCB. The remaining portion of the multiple layer PCB is rigid. The core structure is common to both the semi-flexible PCB portion and the remaining rigid PCB portion. 1. A printed circuit board comprising:a. a rigid printed circuit board portion comprising a laminated stack of a plurality of non-conducting layers and a plurality of conductive layers, wherein the laminated stack further comprises a first portion of a core structure; andb. a semi-flexible printed circuit board portion comprising a second portion of the core structure, wherein the core structure is a continuous structure that extends through both the rigid printed circuit board portion and the semi-flexible printed circuit board portion, further wherein the second portion of the core structure comprises core circuitry and the semi-flexible printed circuit board portion further comprises an exposed non-conductive layer covering the core circuitry, wherein the exposed non-conductive layer has an exposed surface that is smooth.2. The printed circuit board of wherein each of the conductive layers is pattern etched.3. The printed circuit board of further comprising one or more plated through hole vias in the rigid printed circuit board portion.4. The printed circuit board of wherein the rigid printed circuit board portion comprises a first rigid printed circuit board portion claim 1 , further wherein the printed circuit board further comprises a second rigid printed circuit board portion comprising a ...

Publication date: 11-06-2019

Dummy core plus plating resist restrict resin process and structure

Number: US0010321560B2

A printed circuit board (PCB) has multiple layers, where select portions of inner layer circuitry, referred to as inner core circuitry, are exposed from the remaining layers. The PCB having an exposed inner core circuitry is formed using a dummy core plus plating resist process. The select inner core circuitry is part of an inner core. The inner core corresponding to the exposed inner core circuitry forms a semi-flexible PCB portion. The semi-flexible PCB portion is an extension of the remaining adjacent multiple layer PCB. The remaining portion of the multiple layer PCB is rigid. The inner core is common to both the semi-flexible PCB portion and the remaining rigid PCB portion.

Publication date: 01-11-2022

Dark flash photography with a stereo camera

Number: US0011490070B2
Assignee: Google LLC

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.

Publication date: 01-02-2024

COMPOSITION HAVING FUNCTIONS OF MOISTURIZING, REPAIRING AND WHITENING AND USE THEREOF

Number: US20240033315A1
Assignee: Infinitus China Co Ltd

Disclosed is a composition having functions of moisturizing, repairing and whitening, and use thereof. The composition comprises Dendrobium candidum polysaccharide, a lotus extract and a Glycyrrhiza glabra extract. The lotus extract and the Glycyrrhiza glabra extract can inhibit pigmentation by inhibiting synthesis of melanin, inhibiting transfer of melanin globules, resisting oxidation and improving skin microcirculation, so as to exert a pronounced whitening effect. Through the synergy of the three components, the compounded composition can reduce synthesis of melanin in the skin's melanophores, promote expression of keratinocyte hydration-related factors, inhibit expression of inflammation-related factors, promote lipid synthesis, and exert multiple effects: whitening, enhanced moisturizing of the skin, alleviation of inflammation and discomfort of the skin, and improvement of the epidermal barrier function. In addition, the compounded composition may be applied in the preparation of skin care products or skin formulations.

Publication date: 15-06-2021

Dark flash photography with a stereo camera

Number: US0011039122B2
Assignee: Google LLC

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.

Publication date: 14-11-2019

EASY-PEEL LAMINATED FOOD CAN

Number: US20190344947A1
Authors: Jiawen Chen, Haishan Chen

An easy-peel laminated food can, comprising a can body () made by punching a laminated metal sheet, and a cover body () made of a laminating film. The can body () comprises a cavity () provided with an opening () and an annular bonding body () circumferentially arranged at the edge of the opening (), and the cover body () and the annular bonding body () may be thermally pressed and bonded to be sealed. 1. A laminated food easy-peel-off can , comprising a can body drawn from a laminated metal sheet and a lid made of a laminated material , the can body comprising a cavity having an opening , and an annular bonding body located around an edge of the opening , the lid being capable of being thermally bonded and sealed to the annular bonding body.2. The laminated food easy-peel-off can according to claim 1 , wherein the lid is thermally bonded and sealed to the annular bonding body claim 1 , the lid and the annular bonding body cooperatively form a sealed storage chamber with the cavity.3. The laminated food easy-peel-off can according to claim 2 , wherein a temperature at which the lid and the annular bonding body are thermally bonded and sealed is 130° to 250°.4. The laminated food easy-peel-off can according to claim 1 , wherein an adhesive force of the thermally bonding between an edge of the lid and the annular bonding body is greater than or equal to 120 kpa.5. The laminated food easy-peel-off can according to claim 1 , wherein an edge of the annular bonding body is provided with a smooth transitional scratch-resistant portion.6. The laminated food easy-peel-off can according to claim 5 , wherein the scratch-resistant portion is curled in a direction toward the cavity or away from the cavity claim 5 , and forms an annular protrusion.7. The laminated food easy-peel-off can according to claim 1 , wherein a smooth transitional portion is provided between the annular bonding body and the cavity.8. The laminated food easy-peel-off can according to claim 1 , wherein the ...

Publication date: 21-09-2017

EMBEDDED CAVITY IN PRINTED CIRCUIT BOARD BY SOLDER MASK DAM

Number: US20170271734A1
Assignee: Multek Technologies Limited

A PCB having multiple stacked layers laminated together. The laminated stack includes regular flow prepreg and includes an embedded cavity, the perimeter of which is formed by a photo definable, or photo imageable, polymer structure, such as a solder mask dam. The solder mask dam defines cavity dimensions and prevents prepreg resin flow into the cavity during lamination.

Publication date: 05-06-2018

Rigid-bend printed circuit board fabrication

Number: US0009992880B2

A printed circuit board (PCB) has multiple layers, where select portions of one or more conductive layers, referred to as core circuitry, form a semi-flexible PCB portion that is protected by an exposed prepreg layer. The semi-flexible PCB portion having an exposed prepreg layer is formed using a dummy core process that leaves the exposed prepreg layer smooth and undamaged. The core circuitry is part of a core structure. The semi-flexible PCB portion is an extension of the remaining adjacent multiple layer PCB. The remaining portion of the multiple layer PCB is rigid. The core structure is common to both the semi-flexible PCB portion and the remaining rigid PCB portion.

Publication date: 03-05-2022

Graphic interface for real-time vision enhancement

Number: US0011320655B2
Assignee: Google LLC

Imaging systems can often gather higher quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live view user interface with two or more interactive features or effects that may be controllable in real-time. Specifically, the disclosed devices and methods may include a live view display and image and other information enhancements, which utilize in-line computation and constant control.

Publication date: 24-11-2022

Learning-Based Lens Flare Removal

Number: US20220375045A1

A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
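A minimal sketch of the training loop the abstract outlines, assuming an additive flare-compositing model and an L2 loss; `model` stands for any image-to-image network, and both choices are assumptions rather than details from the patent:

```python
import numpy as np

def make_training_image(baseline, flare):
    """Composite a clean baseline image with a lens-flare image (assumed additive model)."""
    return np.clip(baseline + flare, 0.0, 1.0)

def flare_removal_loss(model, baseline, flare):
    """L2 between the model's de-flared output and the clean baseline (sketch)."""
    flared = make_training_image(baseline, flare)
    restored = model(flared)                      # any callable image-to-image network
    return float(np.mean((restored - baseline) ** 2))
```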

Publication date: 05-12-2017

Graphic interface for real-time vision enhancement

Number: US0009835862B1
Assignee: Google Inc., Google LLC

Imaging systems can often gather higher quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live view user interface with two or more interactive features or effects that may be controllable in real-time. Specifically, the disclosed devices and methods may include a live view display and image and other information enhancements, which utilize in-line computation and constant control.

Publication date: 20-06-2019

Machine-Learning Based Technique for Fast Image Enhancement

Number: US20190188535A1

Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model. 1. A method comprising:receiving, at a mobile device, a plurality of image processing parameters;causing an image sensor of the mobile device to capture an initial image;receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image;determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters;receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model; andgenerating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model.2. The method of claim 1 , further comprising downsampling the initial image to provide the input image claim 1 , wherein the input image comprises a downsampled version of the initial image.3. The method of claim 2 , wherein the input image comprises no more than 256 pixels along a first ...

Publication date: 18-04-2019

Live Updates for Synthetic Long Exposures

Number: US20190116304A1

An image sensor of an image capture device may capture an image. The captured image may be stored in a buffer of two or more previously-captured images. An oldest image of the two or more previously-captured images may be removed from the buffer. An aggregate image of the images in the buffer may be updated. This updating may involve subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image. A viewfinder of the image capture device may display a representation of the aggregate image. 1. A method comprising:storing, in memory of an image capture device, a buffer of images and an aggregate image, wherein the aggregate image represents a first summation of pixel values in images stored in the buffer;displaying, on a viewfinder of the image capture device, a representation of the aggregate image;capturing, by an image sensor of the image capture device, a new image;in response to capturing the new image, the image capture device: (i) removing, from the buffer, an oldest image, (ii) storing, in the buffer, the new image, and (iii) updating the aggregate image to represent a second summation of pixel values in the images stored in the buffer; anddisplaying, on the viewfinder, a representation of the aggregate image as updated.2. The method of claim 1 , wherein updating the aggregate image comprises:subtracting the oldest image from the aggregate image; andadding the new image to the aggregate image.3. The method of claim 1 , wherein capturing the new image occurs while the viewfinder displays the representation of the aggregate image.4. The method of claim 1 , wherein the viewfinder has a refresh rate defined by a refresh time interval claim 1 , wherein the representation of the aggregate image has a synthetic exposure length defined by a synthetic exposure time interval claim 1 , wherein the synthetic exposure time interval is greater than the refresh time interval.5. The method of ...

Publication date: 28-05-2020

PROCESSING METHOD FOR ASEPTIC CANNING AND ASEPTIC CANNING SYSTEM

Number: US20200165114A1

A process method for aseptic canning and an aseptic canning system. The process comprises the following steps: placing a first solid-state food item into a sterilization pot, closing a pot lid of the sterilization pot; warming the interior of the sterilization pot, cooking the first solid-state food item, and sterilizing the first solid-state food item to produce a second solid-state food item; cooling the second solid-state food item in the sterilization pot; and in the aseptic environment within the sterilization pot, placing the cooled solid-state food item into a can and sealing the can. The system applying the process also is provided with a transfer apparatus used for transferring the first solid-state food item and/or the second solid-state food item. 1. A processing method of aseptic canning , comprising steps of:putting a first solid-state food into a sterilization pot, and closing a pot cover of the sterilization pot;cooking the first solid-state food by increasing a temperature of an interior of the sterilization pot, and sterilizing the first solid-state food to obtain a second solid-state food;cooling the second solid-state food in the sterilization pot; andin an aseptic environment inside the sterilization pot, putting the cooled second solid-state food into a sterilized can body, and sealing the can body after filling.2. The processing method of aseptic canning according to claim 1 , wherein containing the first solid-state food into a netlike ventilation container claim 1 , and then putting the first solid-state food into the sterilization pot; or hanging the first solid-state food in the sterilization pot; or placing the first solid-state food onto a roller claim 1 , a baking tray claim 1 , and a frying pan in the sterilization pot.3. The processing method of aseptic canning according to claim 1 , wherein the first solid-state food is movable within the sterilization pot.4. The processing method of aseptic canning according to claim 1 , wherein ...

Publication date: 03-03-2020

Machine-learning based technique for fast image enhancement

Number: US0010579908B2
Assignee: Google LLC

Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model.
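In outline, the pipeline runs the cheap coefficient-prediction network on a small copy of the frame and applies the predicted transformation at full resolution. A hedged sketch; `coeff_net` and `render_net` stand in for the two neural networks, and the crude stride-based downsampling is only for illustration:

```python
import numpy as np

def enhance(initial_image, coeff_net, render_net, params, max_side=256):
    """Predict an image transformation on a downsampled input, render it at full size."""
    h, w = initial_image.shape[:2]
    stride = max(1, int(np.ceil(max(h, w) / max_side)))
    small = initial_image[::stride, ::stride]       # low-resolution copy of the frame
    transform = coeff_net(small, params)            # e.g. local colour/tone coefficients
    return render_net(initial_image, transform)     # apply the model at full resolution
```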

Publication date: 18-05-2021

Easy-peel laminated food can

Number: US0011008146B2

An easy-peel laminated food can, comprising a can body (100) made by punching a laminated metal sheet, and a cover body (200) made of a laminating film. The can body (100) comprises a cavity (110) provided with an opening (112) and an annular bonding body (120) circumferentially arranged at the edge of the opening (112), and the cover body (200) and the annular bonding body (120) may be thermally pressed and bonded to be sealed.

Publication date: 10-04-2014

WEARABLE SENSOR FOR TRACKING ARTICULATED BODY-PARTS

Number: US20140098018A1
Assignee: MICROSOFT CORPORATION

A wearable sensor for tracking articulated body parts is described such as a wrist-worn device which enables 3D tracking of fingers and optionally also the arm and hand without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of a body of a wearer of the device and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part. In some examples an inertial measurement unit is also included in the sensor to enable tracking of the arm and hand

Publication date: 11-06-2024

Defocus blur removal and depth estimation using dual-pixel image data

Number: US0012008738B2
Assignee: Google LLC

A method includes obtaining dual-pixel image data that includes a first sub-image and a second sub-image, and generating an in-focus image, a first kernel corresponding to the first sub-image, and a second kernel corresponding to the second sub-image. A loss value may be determined using a loss function that determines a difference between (i) a convolution of the first sub-image with the second kernel and (ii) a convolution of the second sub-image with the first kernel, and/or a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second kernel. Based on the loss value and the loss function, the in-focus image, the first kernel, and/or the second kernel, may be updated and displayed.
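The two loss terms read directly off the abstract. A sketch for single-channel sub-images using squared differences (the abstract only says "difference", so that choice and the equal weighting of the terms are assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

def dual_pixel_loss(left, right, in_focus, k_left, k_right):
    """Equivalence + data terms for dual-pixel defocus deblurring (sketch)."""
    conv = lambda img, k: convolve2d(img, k, mode="same", boundary="symm")
    # Equivalence term: left blurred by the right kernel should match
    # right blurred by the left kernel.
    equivalence = np.mean((conv(left, k_right) - conv(right, k_left)) ** 2)
    # Data term: each sub-image should equal the in-focus image blurred by its own kernel.
    data = (np.mean((left - conv(in_focus, k_left)) ** 2) +
            np.mean((right - conv(in_focus, k_right)) ** 2))
    return equivalence + data
```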

Publication date: 18-12-2014

SCALABLE VOLUMETRIC 3D RECONSTRUCTION

Number: US20140368504A1

Scalable volumetric reconstruction is described whereby data from a mobile environment capture device is used to form a 3D model of a real-world environment. In various examples, a hierarchical structure is used to store the 3D model where the structure comprises a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node. In various examples, parallel processing is used to enable captured data to be integrated into the 3D model and/or to enable images to be rendered from the 3D model. In an example, metadata is computed and stored in the hierarchical structure and used to enable space skipping and/or pruning of the hierarchical structure. 1. A computer-implemented method comprising:receiving, at a processor, a stream of depth maps of the real-world environment captured by a mobile environment capture device;calculating, from the depth maps, a 3D model comprising values representing surfaces in the real-world environment;storing the 3D model in a hierarchical structure comprising a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node;storing, at the root and interior nodes, metadata describing the hierarchical structure;storing at the leaf nodes, the values representing surfaces.2. A method as claimed in wherein storing the 3D model in a hierarchical structure comprises forming the interior level nodes and the leaf nodes on the basis of a refinement strategy which checks whether a depth observation from a depth map is near to at least some of the values representing surfaces in the real-world environment.3. A method as claimed in wherein the ...

Publication date: 16-07-2020

Depth Prediction from Dual Pixel Images

Number: US20200226419A1

Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system. 1. A computer-implemented method , comprising:receiving, at a computing device, a dual pixel image of at least a foreground object, the dual pixel image comprising a plurality of dual pixels, wherein a dual pixel of the plurality of dual pixels comprises a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image;training a machine learning system to determine a depth map associated with the dual pixel image using the computing device; andproviding the trained machine learning system using the computing device.2. The computer-implemented method of claim 1 , wherein training the machine learning system to determine the depth map comprises training the machine learning system to determine the depth map based on a loss function that comprises an affine mapping of an estimate of the depth map.3. The computer-implemented method of claim 2 , where training the machine learning system to determine the depth map based on the loss function comprises training the machine learning system to determine the depth map based on a loss function that comprises a difference between the affine mapping of the estimate of the depth map and a reference depth map.4. The computer-implemented method of claim 1 , wherein the foreground object has a ...

Publication date: 02-09-2021

Dark Flash Photography With A Stereo Camera

Number: US20210274151A1
Assignee: Google LLC

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an “IR-G-UV” camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.

Publication date: 08-12-2020

Depth prediction from dual pixel images

Number: US0010860889B2
Assignee: Google LLC

Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.

Publication date: 31-12-2020

INFORMATION RECOMMENDATION DEVICE, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Number: US20200410567A1

The present disclosure relates to an information recommendation device, method, and computer-readable storage medium, and relates to the technical field of computer. The information recommendation device includes: a receiver configured to receive a communication identifier of a contact of a recommended user; a processor configured to acquire a user identifier corresponding to the communication identifier of the contact, and acquire shopping information of the contact according to the user identifier; and a transmitter configured to recommend the shopping information of the contact to the recommended user. 1. An information recommendation device in an information recommendation platform , comprising:a receiver configured to receive a communication identifier of a contact of a recommended user;a processor configured to acquire a user identifier corresponding to the communication identifier of the contact, and acquire shopping information of the contact according to the user identifier; anda transmitter configured to transmit the shopping information of the contact to a terminal of the recommended user to recommend one or more items.2. The information recommendation device of claim 1 , wherein the receiver is further configured to receive the communication identifier of the contact of the recommended user acquired by the terminal from a communication application or by a cloud or by a server.3. The information recommendation device of claim 1 , wherein the transmitter is further configured to transmit at least one of the following information to the recommended user:purchased item information with a rating higher than a preset level in the shopping information of the contact; orpublic shopping information of the contact.4. The information recommendation device of claim 1 , wherein the shopping information comprises one or more of purchased item information claim 1 , rating information claim 1 , collection information claim 1 , or browsing information.5. The information ...

Подробнее
24-06-2020 дата публикации

Easy-peel laminated food can

Номер: EP3564152A4
Автор: Haishan Chen, Jiawen Chen
Принадлежит: Guangzhou Jorson Food Technology Co Ltd

Подробнее
06-02-2014 дата публикации

Animating objects using the human body

Номер: WO2014022448A1
Принадлежит: MICROSOFT CORPORATION

Methods of animating objects using the human body are described. In an embodiment, a deformation graph is generated from a mesh which describes the object. Tracked skeleton data, generated from sensor data, is received, and the tracked skeleton is then embedded in the graph. Subsequent motion captured by the sensor results in motion of the tracked skeleton, and this motion is used to define transformations on the deformation graph. The transformations are then applied to the mesh to generate an animation of the object which corresponds to the captured motion. In various examples, the mesh is generated by scanning an object and the deformation graph is generated using orientation-aware sampling such that nodes can be placed close together within the deformation graph where there are sharp corners or other features with high curvature in the object.
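The step of applying the graph transformations to the mesh can be illustrated with a minimal Python sketch of a linear-blend embedded-deformation variant. The weighting scheme, the number of nearest nodes, and the per-node rotation/translation parameterization are assumptions, not necessarily the exact formulation in the publication.

```python
# Minimal sketch: each graph node j carries a rotation R_j and translation t_j
# (driven by the tracked skeleton); every mesh vertex is deformed by a
# distance-weighted blend of its k nearest nodes.
import numpy as np

def deform_mesh(vertices, node_positions, node_R, node_t, k=4):
    """vertices: (V,3); node_positions: (N,3); node_R: (N,3,3); node_t: (N,3)."""
    deformed = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        d = np.linalg.norm(node_positions - v, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-8)
        w /= w.sum()
        # blend the per-node affine transforms, each applied about its node position
        for weight, j in zip(w, nearest):
            deformed[i] += weight * (node_R[j] @ (v - node_positions[j])
                                     + node_positions[j] + node_t[j])
    return deformed
```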

Подробнее
25-10-2023 дата публикации

Anomaly detection method and apparatus for dynamic control system, and computer-readable medium

Номер: EP4266209A1
Принадлежит: SIEMENS AG

The embodiments of the present invention relate to an anomaly detection method and apparatus for a dynamic control system, and a computer-readable medium. The method comprises: initializing a hidden state distribution of a dynamic control system by using a g network in a neural network; receiving a measurement value of a sensor and a state value of a trigger at the current time point t, the measurement value and the state value being obtained by means of real-time monitoring; inputting at least one first sampling point into an f network in the neural network so as to perform prediction to obtain at least one second sampling point, wherein the first sampling point represents a hidden state distribution of the dynamic control system at a neighbouring time point t-1 before the current time point t, and the second sampling point represents a prior hidden state distribution of the dynamic control system at the current time point t; mapping the second sampling point to a sensor measurement value space by using an h network in the neural network, so as to perform prediction to obtain a probability distribution of the measurement value of the sensor of the dynamic control system at the current time point t; and by means of comparing the measurement value, which is obtained by means of real-time monitoring, and the probability distribution obtained by means of prediction, determining whether there is an anomaly in the system.
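The prediction-and-comparison cycle described above can be sketched in a few lines of Python. The toy linear g/f/h networks, the sample-based approximation of the distributions, and the z-score thresholding rule are illustrative assumptions, not the claimed anomaly test.

```python
# Minimal sketch of one filtering step with toy g/f/h networks.
import torch
import torch.nn as nn

g = nn.Linear(4, 8)   # initializes hidden-state samples from the first observation
f = nn.Linear(8, 8)   # predicts the prior hidden state at time t from time t-1
h = nn.Linear(8, 4)   # maps hidden-state samples into sensor-measurement space

def init_hidden(first_measurement, n_samples=64):
    """g network: build a cloud of hidden-state samples from the first observation."""
    base = g(first_measurement).unsqueeze(0)
    return base + 0.1 * torch.randn(n_samples, 8)   # crude stand-in for sampling a distribution

def is_anomalous(prev_hidden_samples, measurement_t, threshold=3.0):
    """prev_hidden_samples: (S,8) first sampling points at t-1; measurement_t: (4,)."""
    prior_samples = f(prev_hidden_samples)            # second sampling points (prior at time t)
    predicted_meas = h(prior_samples)                 # predicted measurement distribution samples
    mean = predicted_meas.mean(dim=0)
    std = predicted_meas.std(dim=0) + 1e-6
    # flag an anomaly if the monitored measurement lies far outside the predicted spread
    z = ((measurement_t - mean).abs() / std).max()
    return bool(z > threshold), prior_samples
```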

Подробнее
08-07-2021 дата публикации

Method and device for processing sensor data

Номер: WO2021134564A1

A method and device for processing sensor data are provided. The method includes: obtaining first sensor data comprising data collected by at least one sensor from at least one device; acquiring distances between at least one first data point in the first sensor data and at least one second data point in second sensor data; scaling the distances so that a pre-determined proportion of the distances falls within a pre-defined value range; mapping the scaled distances to updated distances using a non-linear mapping scheme, where the non-linear mapping scheme maps a value exceeding the pre-defined value range to a value within the value range while retaining the size relations between the distances; acquiring a first similarity measurement using the updated distances; and generating an overall similarity measurement indicating similarity by combining the first similarity measurement with at least one third similarity measurement.
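The scaling and non-linear mapping steps are easy to illustrate. In the sketch below, percentile-based scaling and a tanh squash are arbitrary choices of my own; the publication does not specify these particular functions or how the similarity is derived from the mapped distances.

```python
# Minimal sketch: scale distances, squash out-of-range values monotonically,
# and turn the result into a single similarity measurement.
import numpy as np

def similarity_from_distances(distances, proportion=0.95):
    distances = np.asarray(distances, dtype=float)
    # scale so that the chosen proportion of distances falls inside [0, 1]
    scale = np.percentile(distances, proportion * 100) + 1e-12
    scaled = distances / scale
    # tanh maps values beyond the range back into [0, 1) while keeping their ordering
    mapped = np.tanh(scaled)
    # one possible first similarity measurement derived from the updated distances
    return 1.0 - mapped.mean()
```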

Подробнее
12-10-2023 дата публикации

Splatting-based Digital Image Synthesis

Номер: US20230326044A1
Автор: Jiawen Chen, Simon Niklaus
Принадлежит: Adobe Inc

Digital image synthesis techniques are described that leverage splatting, i.e., forward warping. In one example, a first digital image and a first optical flow are received by a digital image synthesis system. A first splat metric and a first merge metric, which define a weighted map over the respective pixels, are constructed by the digital image synthesis system. From this, the digital image synthesis system produces a first warped optical flow and a first warp merge metric corresponding to an interpolation instant by forward warping the first optical flow based on the splat metric and the merge metric. A first warped digital image corresponding to the interpolation instant is formed by the digital image synthesis system by backward warping the first digital image based on the first warped optical flow.
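The forward-warping (splatting) step can be sketched very simply. The nearest-pixel summation splatting and normalization below are a deliberately crude illustration; the actual work uses more elaborate splat and merge metrics.

```python
# Minimal sketch: splat each source pixel to its flow target, weighted by a
# splat metric, then normalize (a simple merge step).
import numpy as np

def forward_warp(values, flow, weights):
    """values: (H,W,C); flow: (H,W,2) in pixels; weights: (H,W) splat metric."""
    H, W, C = values.shape
    out = np.zeros((H, W, C))
    accum = np.zeros((H, W, 1))
    for y in range(H):
        for x in range(W):
            tx = int(round(x + flow[y, x, 0]))
            ty = int(round(y + flow[y, x, 1]))
            if 0 <= tx < W and 0 <= ty < H:
                out[ty, tx] += weights[y, x] * values[y, x]
                accum[ty, tx] += weights[y, x]
    return out / np.maximum(accum, 1e-8)   # merge: normalize the accumulated splats
```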

Подробнее
08-05-2024 дата публикации

Method and apparatus for training a model

Номер: EP4364041A1
Принадлежит: SIEMENS AG

A method, apparatus and computer-readable medium for training a model are presented. The method (300) includes: training (S301) a model (28) based on a training data set (21) that includes only historical sensor data collected while the equipment is under normal working conditions; testing the model (28) with sensor data that caused false alarms (22), sensor data of the equipment's historically confirmed failures (23), and sensor data from a pre-defined recent time period when the equipment was under normal working conditions (24); and activating (S303) the model (28) if the model (28) passes the test. Status monitoring can thus be based on a continuously updated model and remains applicable in varied and volatile working conditions.
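The train-test-activate flow reads naturally as a small function. In the sketch below, `fit_model` and `passes` are hypothetical callables standing in for whatever training routine and acceptance criteria are actually used; they are not named in the publication.

```python
# Minimal sketch of: train on normal data only, test against three data sets,
# activate the model only if every test passes.
def build_and_activate(train_normal, false_alarm_set, confirmed_failures, recent_normal,
                       fit_model, passes):
    model = fit_model(train_normal)                 # train on normal-condition history only
    test_sets = {
        "false_alarms": false_alarm_set,            # should NOT be flagged again
        "confirmed_failures": confirmed_failures,   # SHOULD be flagged
        "recent_normal": recent_normal,             # should NOT be flagged
    }
    if all(passes(model, name, data) for name, data in test_sets.items()):
        return model        # activate: replace the currently deployed model
    return None             # keep the previous model if any check fails
```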

Подробнее
05-02-2020 дата публикации

Machine-learning based technique for fast image enhancement

Номер: EP3602483A1
Принадлежит: Google LLC

Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model.
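A heavily simplified Python sketch of the two-network idea follows: a coefficient-prediction network estimates a color transformation from a low-resolution input, and a rendering step applies it to the full-resolution image. The single global 3x4 affine matrix, the toy layers, and the clamping are assumptions for illustration, far simpler than the model actually described.

```python
# Minimal sketch: predict a global affine color transform, then "render" by
# applying it to every pixel of the full-resolution image.
import torch
import torch.nn as nn

class CoefficientNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, 12)   # a 3x4 affine color matrix

    def forward(self, low_res_input):
        f = self.features(low_res_input).flatten(1)
        return self.head(f).view(-1, 3, 4)

def render(full_res_image, coeffs):
    """Apply the predicted affine transform to every pixel. full_res_image: (B,3,H,W)."""
    b, c, h, w = full_res_image.shape
    ones = torch.ones_like(full_res_image[:, :1])
    homogeneous = torch.cat([full_res_image, ones], dim=1).view(b, 4, -1)
    out = torch.bmm(coeffs, homogeneous)            # (B, 3, H*W)
    return out.view(b, 3, h, w).clamp(0, 1)
```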

Подробнее
27-03-2020 дата публикации

Material-separating device for profiles

Номер: CN210192701U

The utility model discloses a material-separating device for profiles. The frame is provided with a lifting mechanism for raising the profiles, a conveying mechanism for transporting the profiles, a detaching mechanism for detaching stacked profiles layer by layer, a separating mechanism for separating the detached profiles one by one, and a controller. The lifting mechanism is arranged on one side of the frame and comprises a strap with one end fixedly connected to the frame; the strap is connected to a first power unit that tensions the strap so as to raise and lower the profiles. The conveying mechanism comprises a chain that adjoins the strap on the side along its tensioning direction and is connected to a second power unit. The detaching mechanism comprises a stop plate connected to a third power unit. The separating mechanism comprises a push plate connected to a fourth power unit; the push plate moves to push the profiles toward the strap. The first, second, third and fourth power units are electrically connected to the controller.

Подробнее
24-03-2020 дата публикации

In-fiber anomaly detection system for optical fibers

Номер: CN210180640U
Принадлежит: Zhejiang University of Technology ZJUT

The utility model discloses an in-fiber anomaly detection system for an optical fiber, comprising an optical fiber, a coupling lens, a reverse-focusing device, a fluorescent cap and a photodetector. The coupling lens is arranged at the coupling end of the fiber and connects the laser light source to the fiber; the fluorescent cap is arranged at the illumination end of the fiber, and the white light excited by the fluorescent cap propagates backward along the fiber; the reverse-focusing device is arranged at the coupling end of the fiber and focuses the back-propagating signal light onto the photodetector for detection. The beneficial effects of the utility model are: a double-clad fiber is used, in which the core carries the energy and the inner cladding carries the signal; the core does not absorb the signal light, so the signal light carried by the inner cladding can be used to detect whether the core has failed, thereby ensuring illumination safety and improving the utilization of the fiber.

Подробнее
09-05-2024 дата публикации

Non-destructive pressure-assisted tissue stiffness measurement apparatus

Номер: AU2022357552A1

A minimally invasive device, containing a pressure channel, camera, and optical fiber imaging probe, to measure the stiffness of tissues in vivo and ex vivo is disclosed.

Подробнее
20-04-2016 дата публикации

Scalable volumetric 3d reconstruction

Номер: EP3008702A1
Принадлежит: Microsoft Technology Licensing LLC

Scalable volumetric reconstruction is described whereby data from a mobile environment capture device is used to form a 3D model of a real-world environment. In various examples, a hierarchical structure is used to store the 3D model where the structure comprises a root level node, a plurality of interior level nodes and a plurality of leaf nodes, each of the nodes having an associated voxel grid representing a portion of the real world environment, the voxel grids being of finer resolution at the leaf nodes than at the root node. In various examples, parallel processing is used to enable captured data to be integrated into the 3D model and/or to enable images to be rendered from the 3D model. In an example, metadata is computed and stored in the hierarchical structure and used to enable space skipping and/or pruning of the hierarchical structure.
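The hierarchical storage idea can be illustrated with a toy two-level structure: a coarse root grid whose cells lazily allocate finer voxel blocks only where data is integrated. This sketch is far simpler than the hierarchy, parallel integration, and metadata described in the publication; the class and parameter names are my own.

```python
# Minimal sketch: a sparse two-level volume with coarse root cells and dense leaf blocks.
import numpy as np

class SparseVolume:
    def __init__(self, root_cell_size=1.0, leaf_resolution=16):
        self.root_cell_size = root_cell_size
        self.leaf_resolution = leaf_resolution
        self.leaves = {}   # root-cell index (ix, iy, iz) -> dense leaf voxel block

    def _leaf_key(self, point):
        return tuple((np.asarray(point) // self.root_cell_size).astype(int))

    def integrate(self, point, value):
        key = self._leaf_key(point)
        if key not in self.leaves:   # allocate a finer leaf block only on first touch
            self.leaves[key] = np.zeros((self.leaf_resolution,) * 3, dtype=np.float32)
        local = (np.asarray(point) / self.root_cell_size - np.asarray(key)) * self.leaf_resolution
        ix, iy, iz = np.clip(local.astype(int), 0, self.leaf_resolution - 1)
        self.leaves[key][ix, iy, iz] = value
```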

Подробнее
20-06-2019 дата публикации

Machine-learning based technique for fast image enhancement

Номер: WO2019117998A1
Принадлежит: Google LLC

Systems and methods described herein may relate to image transformation utilizing a plurality of deep neural networks. An example method includes receiving, at a mobile device, a plurality of image processing parameters. The method also includes causing an image sensor of the mobile device to capture an initial image and receiving, at a coefficient prediction neural network at the mobile device, an input image based on the initial image. The method further includes determining, using the coefficient prediction neural network, an image transformation model based on the input image and at least a portion of the plurality of image processing parameters. The method additionally includes receiving, at a rendering neural network at the mobile device, the initial image and the image transformation model. Yet further, the method includes generating, by the rendering neural network, a rendered image based on the initial image, according to the image transformation model.

Подробнее
27-03-2024 дата публикации

Anomaly detection method and apparatus for industrial equipment, electronic device, and storage medium

Номер: EP4341858A1
Принадлежит: SIEMENS AG

This application provides an anomaly detection method and apparatus for industrial equipment, an electronic device, and a storage medium. The anomaly detection method for industrial equipment includes: obtaining current operating data and historical operating data of to-be-detected industrial equipment; obtaining predicted operating data of the industrial equipment according to the historical operating data, where the predicted operating data is a predicted value of the operating data of the industrial equipment at the current moment; calculating the difference between the current operating data and the predicted operating data to obtain an operating data deviation; determining a deviation range according to the operating data deviation and a pre-determined deviation distribution, where the deviation distribution represents the data distribution followed by the difference between the actual value and the predicted value of the operating data of the industrial equipment during normal operation; and determining, if the operating data deviation falls outside the deviation range, that the industrial equipment has an anomaly. This solution can improve the accuracy of anomaly detection for the industrial equipment.
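The deviation-range check can be sketched in a few lines. Assuming, purely for illustration, that the deviation distribution is modelled as Gaussian and the range is a symmetric n-sigma interval (neither choice is specified in the publication):

```python
# Minimal sketch: compare the current deviation against a range derived from
# the distribution of historical (normal-operation) deviations.
import numpy as np

def detect_anomaly(current, predicted, historical_deviations, n_sigma=3.0):
    deviation = current - predicted
    mu = np.mean(historical_deviations)
    sigma = np.std(historical_deviations) + 1e-12
    lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma   # the deviation range
    return not (lower <= deviation <= upper)                    # True means anomaly
```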

Подробнее
31-10-2018 дата публикации

Live updates for synthetic long exposures

Номер: EP3395060A1
Принадлежит: Google LLC

An image sensor of an image capture device may capture an image. The captured image may be stored in a buffer of two or more previously-captured images. An oldest image of the two or more previously-captured images may be removed from the buffer. An aggregate image of the images in the buffer may be updated. This updating may involve subtracting a representation of the oldest image from the aggregate image, and adding a representation of the captured image to the aggregate image. A viewfinder of the image capture device may display a representation of the aggregate image.
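The subtract-oldest / add-newest update described above keeps the aggregate consistent with a fixed-length buffer in constant time per frame. Below is a minimal Python sketch; the buffer size, the float accumulation, and the averaging used for display are assumptions, not taken from the patent.

```python
# Minimal sketch: maintain a running aggregate of the frames in a fixed-length buffer.
from collections import deque
import numpy as np

class LongExposurePreview:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)
        self.aggregate = None

    def add_frame(self, frame):
        frame = frame.astype(np.float32)
        if self.aggregate is None:
            self.aggregate = np.zeros_like(frame)
        if len(self.buffer) == self.buffer.maxlen:
            self.aggregate -= self.buffer[0]        # subtract the frame about to be evicted
        self.buffer.append(frame)                   # deque evicts the oldest automatically
        self.aggregate += frame                     # add the newly captured frame
        return self.aggregate / len(self.buffer)    # what the viewfinder would display
```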

Подробнее
19-05-2021 дата публикации

Live updates for synthetic long exposures

Номер: EP3395060B1
Принадлежит: Google LLC

Подробнее
09-07-2024 дата публикации

Learning-based lens flare removal

Номер: US12033309B2
Принадлежит: Google LLC

A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.
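The training loop described above can be sketched compactly. The toy convolutional model, the additive flare compositing, and the L1 loss below are illustrative assumptions; the actual data generation and loss are more involved.

```python
# Minimal sketch: composite baseline + flare, predict a de-flared image, and
# penalize its difference from the baseline.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(baseline, flare):
    """baseline, flare: (B,3,H,W) tensors in [0,1]."""
    training_image = (baseline + flare).clamp(0, 1)   # combine baseline with a flare image
    optimizer.zero_grad()
    de_flared = model(training_image)                 # the "modified image"
    loss = nn.functional.l1_loss(de_flared, baseline) # compare against the baseline
    loss.backward()
    optimizer.step()                                  # adjust model parameters
    return loss.item()
```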

Подробнее
06-04-2023 дата публикации

Non-destructive pressure-assisted tissue stiffness measurement apparatus

Номер: CA3233497A1

A minimally invasive device, containing a pressure channel, camera, and optical fiber imaging probe, to measure the stiffness of tissues in vivo and ex vivo is disclosed. The device is inserted into a patient and navigated to a tissue of interest, where stiffness is evaluated by applying suction and measuring the elongation or by applying compression force and measuring the compression of the tissue. Biopsies can be taken for further analysis, or tissue can be removed using an ablation laser. Small fluorescent molecules or therapeutics can also be delivered for improved visualization and targeted treatment. As such, this technology may be used to evaluate the stiffness of biomaterials as well as tissues and organs that are difficult to access, allowing for simultaneous diagnosis, treatment, and excision of diseased tissues.

Подробнее
25-04-2024 дата публикации

Apparatus for in situ measurement of electrical impedance of lung tissue

Номер: WO2024086668A1

Apparatus and methods are disclosed to assess tissue properties, especially those of lung tissues. By measuring impedance in response to electrical stimulation, a variety of properties can be deduced including spatial location of damaged tissue, presence of epithelial cells and function of diseased tissue. To these ends, a probe (10) with electrodes (12, 14, 16, 18) connected to an impedance analyzer (20) was developed for assessing tissue (22) function. By applying voltages and/or currents, different information can be determined. In another embodiment, such a probe can be used for electroporation to evaluate the location of genetic molecules (e.g., RNA, DNA, or gene-editing machinery) and/or encourage their transport across different layers of tissue (FIG. 11).

Подробнее
03-04-2020 дата публикации

Automatic slow-descent device for unmanned aerial vehicles based on an HY-SRF05 ultrasonic module

Номер: CN210235335U

An automatic slow-descent device for an unmanned aerial vehicle based on an HY-SRF05 ultrasonic module comprises a fuselage, arms, landing gear and rotors. The fuselage includes a control module, a processing module, an ultrasonic device and a rotating mechanism; the rotating mechanism is mounted under the fuselage and includes a central shaft and a cylindrical housing; the ultrasonic device further includes a ground-facing ultrasonic transceiver module and a horizontal ultrasonic transceiver module; and a wind-damping plate is also mounted on the fuselage. The ground-facing ultrasonic transceiver module provides the real-time flight altitude of the UAV, enabling more accurate flight control and landing-state judgment so that a slow landing is achieved, while avoiding damage from a forceful collision with the ground caused by descending too fast. The wind-damping plate on the fuselage reduces the load on the rotors, enhances the slow-descent effect and helps fine-tune the flight. The horizontal ultrasonic transceiver module prevents the UAV from landing on fluid terrain, ensuring a safe final landing.

Подробнее
07-08-2024 дата публикации

Non-destructive pressure-assisted tissue stiffness measurement apparatus

Номер: EP4408260A1

A minimally invasive device, containing a pressure channel, camera, and optical fiber imaging probe, to measure the stiffness of tissues in vivo and ex vivo is disclosed. The device is inserted into a patient and navigated to a tissue of interest, where stiffness is evaluated by applying suction and measuring the elongation or by applying compression force and measuring the compression of the tissue. Biopsies can be taken for further analysis, or tissue can be removed using an ablation laser. Small fluorescent molecules or therapeutics can also be delivered for improved visualization and targeted treatment. As such, this technology may be used to evaluate the stiffness of biomaterials as well as tissues and organs that are difficult to access, allowing for simultaneous diagnosis, treatment, and excision of diseased tissues.

Подробнее
14-09-2022 дата публикации

Defocus blur removal and depth estimation using dual-pixel image data

Номер: EP4055556A1
Принадлежит: Google LLC

A method includes obtaining dual-pixel image data that includes a first sub-image and a second sub-image, and generating an in-focus image, a first kernel corresponding to the first sub-image, and a second kernel corresponding to the second sub-image. A loss value may be determined using a loss function that determines a difference between (i) a convolution of the first sub-image with the second kernel and (ii) a convolution of the second sub-image with the first kernel, and/or a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second kernel. Based on the loss value and the loss function, the in-focus image, the first kernel, and/or the second kernel, may be updated and displayed.
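The two loss terms can be written down directly. The PyTorch sketch below assumes single-channel tensors and "same" padding; note that `F.conv2d` computes cross-correlation, which is sufficient for this illustration.

```python
# Minimal sketch of the cross-consistency and reconstruction loss terms.
import torch
import torch.nn.functional as F

def dual_pixel_losses(sub1, sub2, in_focus, k1, k2):
    """sub1, sub2, in_focus: (1,1,H,W); k1, k2: (1,1,kh,kw) blur kernels."""
    pad = k1.shape[-1] // 2
    conv = lambda img, k: F.conv2d(img, k, padding=pad)
    # (i) cross term: sub-image 1 blurred by kernel 2 should match sub-image 2 blurred by kernel 1
    cross = F.l1_loss(conv(sub1, k2), conv(sub2, k1))
    # (ii) reconstruction term: each sub-image should equal the in-focus image
    # blurred by its own kernel
    recon = F.l1_loss(sub1, conv(in_focus, k1)) + F.l1_loss(sub2, conv(in_focus, k2))
    return cross, recon
```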

Подробнее
19-05-2022 дата публикации

Defocus blur removal and depth estimation using dual-pixel image data

Номер: WO2022103400A1
Принадлежит: Google LLC

A method includes obtaining dual-pixel image data that includes a first sub-image and a second sub-image, and generating an in-focus image, a first kernel corresponding to the first sub-image, and a second kernel corresponding to the second sub-image. A loss value may be determined using a loss function that determines a difference between (i) a convolution of the first sub-image with the second kernel and (ii) a convolution of the second sub-image with the first kernel, and/or a sum of (i) a difference between the first sub-image and a convolution of the in-focus image with the first kernel and (ii) a difference between the second sub-image and a convolution of the in-focus image with the second kernel. Based on the loss value and the loss function, the in-focus image, the first kernel, and/or the second kernel, may be updated and displayed.

Подробнее
31-03-2020 дата публикации

Single-clad multi-core optical fiber with anomaly detection

Номер: CN210222288U
Принадлежит: Zhejiang University of Technology ZJUT

The utility model discloses a single-clad multi-core optical fiber with anomaly detection. The fiber comprises energy-transmitting cores, a signal-transmitting core and a fiber cladding. One end of the energy-transmitting cores is provided with a first coupling lens and is connected to an external illumination laser source through the first coupling lens; one end of the signal-transmitting core is provided with a second coupling lens and is connected to an external detection laser source through the second coupling lens; the other end of the signal-transmitting core is provided with a signal-return device, which is connected to an external photodetector. The beneficial effects of the utility model are: the fiber has multiple cores carrying energy and at least one core carrying a signal. On the one hand, this reduces the amount of cladding material and therefore the cost, improves efficiency by carrying energy in multiple cores within one cladding, and reduces the cross-section of the cable; on the other hand, the fiber also has a signal-carrying core that is used to detect whether the light-guiding fiber is damaged, ensuring that the whole illumination system operates normally.

Подробнее
27-07-2022 дата публикации

Learning-based lens flare removal

Номер: EP4032061A1
Принадлежит: Google LLC

A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.

Подробнее
15-02-2024 дата публикации

Imaging-enabled bioreactor for in vitro cultivation and bioengineering of isolated airway tissue

Номер: US20240052285A1
Принадлежит: Stevens Institute of Technology

Systems and methods associated with a bioreactor are disclosed. An imaging module is provided that allows for in situ observation of, for instance, lung tissue. Various compounds can be introduced into a cell culture chamber for experimental and practical applications on epithelial tissue. Methods and apparatus are also provided for deepithelializing human or rat tissue without damaging the underlying structures. Thereafter, cell growth can be effected in a homogeneous distribution.

Подробнее
26-09-2024 дата публикации

Learning-Based Lens Flare Removal

Номер: US20240320808A1
Принадлежит: Google LLC

A method includes obtaining an input image that contains a particular representation of lens flare, and processing the input image by a machine learning model to generate a de-flared image that includes the input image with at least part of the particular representation of lens flare removed. The machine learning (ML) model may be trained by generating training images that combine respective baseline images with corresponding lens flare images. For each respective training image, a modified image may be determined by processing the respective training image by the ML model, and a loss value may be determined based on a loss function comparing the modified image to a corresponding baseline image used to generate the respective training image. Parameters of the ML model may be adjusted based on the loss value determined for each respective training image and the loss function.

Подробнее
12-03-2020 дата публикации

Dark flash photography with a stereo camera

Номер: WO2020050941A1
Принадлежит: Google LLC

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an "IR-G-UV" camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.

Подробнее
07-07-2021 дата публикации

Dark flash photography with a stereo camera

Номер: EP3844567A1
Принадлежит: Google LLC

Scenes can be imaged under low-light conditions using flash photography. However, the flash can be irritating to individuals being photographed, especially when those individuals' eyes have adapted to the dark. Additionally, portions of images generated using a flash can appear washed-out or otherwise negatively affected by the flash. These issues can be addressed by using a flash at an invisible wavelength, e.g., an infrared and/or ultraviolet flash. At the same time a scene is being imaged, at the invisible wavelength of the invisible flash, the scene can also be imaged at visible wavelengths. This can include simultaneously using both a standard RGB camera and a modified visible-plus-invisible-wavelengths camera (e.g., an "IR-G-UV" camera). The visible and invisible image data can then be combined to generate an improved visible-light image of the scene, e.g., that approximates a visible light image of the scene, had the scene been illuminated during daytime light conditions.

Подробнее
26-09-2024 дата публикации

Burst image matting

Номер: US20240320838A1
Принадлежит: Adobe Inc

Systems and methods perform image matte generation using image bursts. In accordance with some aspects, an image burst comprising a set of images is received. Features of a reference image from the set of images are aligned with features of other images from the set of images. A matte for the reference image is generated using the aligned features.
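As a rough illustration of the align-then-matte flow, the following Python sketch assumes a burst of four frames, a caller-supplied `align` function that warps each frame onto the reference, and a placeholder matting network; none of these components are specified in the publication.

```python
# Minimal sketch: align the other burst frames to the reference, stack them on
# the channel axis, and predict a per-pixel alpha matte.
import torch
import torch.nn as nn

# placeholder matting network for a burst of 4 RGB frames (4 * 3 = 12 channels)
matting_net = nn.Sequential(nn.Conv2d(12, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def burst_matte(reference, others, align):
    """reference: (1,3,H,W); others: list of three (1,3,H,W) frames;
    align(img, reference) warps img onto the reference frame."""
    aligned = [align(img, reference) for img in others]   # alignment step
    stacked = torch.cat([reference, *aligned], dim=1)     # concatenate along channels
    return matting_net(stacked)                           # per-pixel alpha matte
```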

Подробнее
31-05-2023 дата публикации

A reciprocating compressor bearing fault feature extraction method

Номер: ZA202301196B
Принадлежит: Univ Shenyang Ligong

Подробнее