
Search form

Supports entry of several search phrases (one per line). Search handles Russian and English morphology.

Total found: 220. Displayed: 130.
Publication date: 14-02-2017

Data storage and access in block processing pipelines

Number: US9571846B2
Assignee: APPLE INC, Apple Inc.

Block processing pipeline methods and apparatus in which reference data are stored to a memory according to tile formats to reduce memory accesses when fetching the data from the memory. When the pipeline stores reference data from a current frame being processed to memory as a reference frame, the reference samples are stored in macroblock sequential order. Each macroblock sample set is stored as a tile. Reference data may be stored in tile formats for luma and chroma. Chroma reference data may be stored in tile formats for chroma 4:2:0, 4:2:2, and/or 4:4:4 formats. A stage of the pipeline may write luma and chroma reference data for macroblocks to memory according to one or more of the macroblock tile formats in a modified knight's order. The stage may delay writing the reference data from the macroblocks until the macroblocks have been fully processed by the pipeline.

Publication date: 31-01-2017

Blur downscale

Number: US0009558536B2
Assignee: Apple Inc., APPLE INC

Systems, apparatuses, and methods for generating a blur effect on a source image in a power-efficient manner. Pixels of the source image are averaged as they are read into pixel buffers, and then the source image is further downscaled by a first factor. Then, the downscaled source image is upscaled back to the original size, and then this processed image is composited with a semi-transparent image to create a blurred effect of the source image.
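
A minimal NumPy sketch of the downscale-then-upscale blur flow described in this entry. The scale factor, overlay color, and alpha are illustrative assumptions, not values from the patent, and block averaging plus pixel replication stand in for the hardware scalers.

```python
import numpy as np

def blur_downscale(src: np.ndarray, factor: int = 4, overlay=(255, 255, 255), alpha=0.3) -> np.ndarray:
    """src: HxWx3 uint8 image; returns a blurred, composited frame of the same size."""
    h, w, c = src.shape
    h2, w2 = h - h % factor, w - w % factor          # crop so the image tiles evenly
    img = src[:h2, :w2].astype(np.float32)

    # Downscale: average each factor x factor block (stands in for the pixel-buffer averaging).
    small = img.reshape(h2 // factor, factor, w2 // factor, factor, c).mean(axis=(1, 3))

    # Upscale back to the original size by pixel replication (a cheap stand-in for the scaler).
    big = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

    # Composite with a semi-transparent flat layer to finish the blur effect.
    out = (1.0 - alpha) * big + alpha * np.asarray(overlay, dtype=np.float32)
    return out.clip(0, 255).astype(np.uint8)
```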

Publication date: 16-08-2016

Operating a device to capture high dynamic range images

Number: US0009420198B2
Assignee: Apple Inc., APPLE INC

Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels.

Publication date: 15-08-2012

Temporal filtering techniques for image signal processing

Number: CN102640184A
Assignee:

Various techniques for temporally filtering raw image data acquired by an image sensor are provided. In one embodiment, a temporal filter determines a spatial location of a current pixel and identifies at least one collocated reference pixel from a previous frame. A motion delta value is determined based at least partially upon the current pixel and its collocated reference pixel. Next, an index is determined based upon the motion delta value and a motion history value corresponding to the spatial location of the current pixel, but from the previous frame. Using the index, a first filtering coefficient may be selected from a motion table. After selecting the first filtering coefficient, an attenuation factor may be selected from a luma table based upon the value of the current pixel, and a second filtering coefficient may subsequently be determined based upon the selected attenuation factor and the first filtering coefficient. The temporally filtered output value corresponding to the current ...
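
A hedged Python sketch of the per-pixel temporal-filter flow described above. The table contents, the index formula, and the final blend equation are assumptions chosen only to illustrate the order of operations.

```python
import numpy as np

MOTION_TABLE = np.linspace(0.8, 0.1, 16)   # filter strength falls as motion grows (assumed values)
LUMA_TABLE   = np.linspace(1.0, 0.5, 16)   # attenuation by brightness (assumed values)

def temporal_filter_pixel(cur: int, ref: int, motion_history: int) -> float:
    """cur/ref: 8-bit current and collocated reference pixel values."""
    motion_delta = abs(cur - ref)
    # Index combines the current delta with the motion history from the previous frame.
    index = min((motion_delta + motion_history) >> 4, len(MOTION_TABLE) - 1)
    k1 = MOTION_TABLE[index]                       # first filtering coefficient from the motion table
    attenuation = LUMA_TABLE[min(cur >> 4, 15)]    # attenuation factor from the luma table
    k2 = k1 * attenuation                          # second filtering coefficient
    # Blend the collocated reference pixel into the current pixel.
    return cur + k2 * (ref - cur)
```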

Publication date: 16-03-2017

CHROMA QUANTIZATION IN VIDEO CODING

Number: US20170078667A1
Assignee:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy.

1.-27. (canceled)

28. A non-transitory computer readable medium storing a program that is executable by a processing unit, the program comprising sets of instructions for: identifying two or more initial sets of chroma quantization parameter (QP) offset values at two or more levels of a video coding hierarchy, each initial set of chroma QP offset values for specifying chroma QPs of video units encompassed by one level of the video coding hierarchy; identifying an additional set of chroma QP offset values for a quantization group comprising a plurality of video units; and computing, for the plurality of video units, a set of chroma QP values by adding (i) the initial sets of chroma QP offset values that were identified for the plurality of video units and (ii) the additional set of chroma QP offset values that was identified for the quantization group.

29. The non-transitory computer readable medium of claim 28, wherein an identified initial set of chroma QP offset values specified at a particular level of the video coding hierarchy are for video units that are encompassed by the particular level of the video coding hierarchy.

30. The method of further comprising identifying a luma QP value for the plurality of video units.

31. The method of claim 30, wherein computing the set of ...
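
A small sketch of how per-level chroma QP offsets and a quantization-group offset could be accumulated. The clipping range and parameter names are assumptions for illustration, not the signalling defined by the patent.

```python
def chroma_qp(luma_qp: int,
              level_offsets: list[int],   # e.g. [picture_level_offset, slice_level_offset]
              group_offset: int) -> int:
    """Add the hierarchy-level offsets and the quantization-group offset to the luma QP."""
    total_offset = sum(level_offsets) + group_offset
    # Keep the resulting chroma QP inside an assumed legal range.
    return max(0, min(51, luma_qp + total_offset))

# Example: picture-level +2, slice-level -1, quantization-group +3 on a luma QP of 30.
print(chroma_qp(30, [2, -1], 3))   # -> 34
```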

Publication date: 30-03-2017

WHITE POINT CORRECTION

Number: US20170092180A1
Assignee:

A method for adjusting the gain of a plurality of pixels across a display includes determining grid point gain adjustments for a plurality of grid points corresponding to coordinates across the display. The corresponding coordinates have a non-uniform spacing across the display. The method also includes determining uniformity gain adjustments for the plurality of pixels via interpolation with the grid point gain adjustments. The method also includes multiplying the uniformity gain adjustment for each pixel of the plurality of pixels by an input signal to the respective pixel. The drive strength supplied to the respective pixel is based at least in part on the input signal, and the drive strength supplied to each pixel is configured to control the light emitted from the respective pixel.

1. An electronic device, comprising: a display comprising a plurality of pixels, wherein each pixel comprises a plurality of subpixels; and a controller coupled to the display, wherein the controller is configured to control a gain of each subpixel based at least in part on a dynamic adjustment for the respective subpixel and a uniformity adjustment for the respective subpixel, wherein the dynamic adjustment is based at least in part on a determined temperature or a determined brightness of the respective subpixel, and the uniformity adjustment is based at least in part on a location of the respective subpixel within the display.

2. The electronic device of claim 1, wherein the display comprises a plurality of temperature sensors disposed about the display, the plurality of subpixels comprises a set of subpixels, and the determined temperature of each subpixel of the set of subpixels is based on temperature feedback from a corresponding temperature sensor of the plurality of temperature sensors that is disposed near the location of the respective subpixel of the set of subpixels within the display.

3. The electronic device of claim 1, wherein the display comprises a ...
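
A one-dimensional Python sketch of the uniformity-gain step described in this entry: interpolate gains from non-uniformly spaced grid points, then multiply the gain into each pixel's input signal. The grid coordinates and gain values are invented, and a single row stands in for the full 2-D grid.

```python
import numpy as np

grid_x    = np.array([0, 40, 120, 300, 520, 719])              # non-uniform grid coordinates (assumed)
grid_gain = np.array([1.08, 1.03, 1.00, 1.00, 1.04, 1.10])     # per-grid-point gain adjustments (assumed)

def apply_uniformity_gain(row: np.ndarray) -> np.ndarray:
    """row: 1-D array of input signal values for one display row (length 720 here)."""
    x = np.arange(row.size)
    gain = np.interp(x, grid_x, grid_gain)   # per-pixel uniformity gain via interpolation
    return row * gain                        # the scaled signal then sets the drive strength

print(apply_uniformity_gain(np.full(720, 200.0))[:5])
```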

Publication date: 15-08-2012

System and method for demosaicing image data using weighted gradients

Number: CN102640499A
Assignee:

Various techniques are provided herein for the demosaicing of images acquired and processed by an imaging system. The imaging system includes an image signal processor 32 and image sensors 30 utilizing color filter arrays (CFA) for acquiring red, green, and blue color data using one pixel array. In one embodiment, the CFA may include a Bayer pattern. During image signal processing, demosaicing may be applied to interpolate missing color samples from the raw image pattern. In one embodiment, interpolation for the green color channel may include employing edge-adaptive filters with weighted gradients of horizontal and vertical filtered values. The red and blue color channels may be interpolated using color difference samples with co-located interpolated values of the green color channel. In another embodiment, interpolation of the red and blue color channels may be performed using color ratios (e.g., versus color difference data).
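
A hedged sketch of edge-adaptive green interpolation at a red/blue site of a Bayer mosaic, blending horizontal and vertical estimates with gradient-derived weights. The specific filters and weighting used by the patent are not reproduced here; this only illustrates the idea.

```python
import numpy as np

def interp_green(cfa: np.ndarray, y: int, x: int) -> float:
    """cfa: raw Bayer plane; (y, x) is an interior pixel whose green sample is missing."""
    gh = (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0       # horizontal filtered value
    gv = (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0       # vertical filtered value
    dh = abs(cfa[y, x - 1] - cfa[y, x + 1])          # horizontal gradient
    dv = abs(cfa[y - 1, x] - cfa[y + 1, x])          # vertical gradient
    wh, wv = dv + 1e-6, dh + 1e-6                    # weight away from the stronger edge
    return (wh * gh + wv * gv) / (wh + wv)
```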

Publication date: 16-02-2017

OPERATING A DEVICE TO CAPTURE HIGH DYNAMIC RANGE IMAGES

Number: US20170048442A1
Assignee: Apple Inc.

Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels.

1. A non-transitory machine readable medium of a device that captures images, the medium storing a program that when executed by at least one processing unit captures an image of a high dynamic range (HDR) scene, the program comprising sets of instructions for: capturing and storing a plurality of images at a first exposure level; upon receiving a command to capture the HDR scene, capturing a first image at a second exposure level and a second image at a third exposure level; selecting a third image from the captured plurality of images; and compositing the first, second, and third images to produce a composite image that captures the HDR scene.

2. The machine readable medium of claim 1, wherein the set of instructions for selecting the third image from the plurality of images comprises a set of instructions for selecting an image in the plurality of images that was captured immediately before receiving the command to capture the HDR scene.

3. The machine readable medium of claim 1, wherein the set of instructions for selecting the third image from the plurality of images comprises a set of instructions for selecting an image in the plurality of images that (i) was captured within a time period before receiving the command to capture the HDR scene, and (ii) was captured while the device moved less than a threshold amount.

4. The machine readable ...

Publication date: 20-10-2016

DEBANDING IMAGE DATA BASED ON SPATIAL ACTIVITY

Number: US20160307302A1
Assignee: Apple Inc

A method for attenuating banding in image data may involve receiving a stream of input pixels. The method may then include applying a bi-lateral filter to a first portion of the stream of input pixels to generate a first filtered output and applying a high pass filter to a second portion of the stream of input pixels to generate a second filtered output. The method may then determine a local activity and a local intensity associated with the first portion of the stream. The method may then include blending the first filtered output with the first portion of the stream of input pixels based at least in part on the local activity and the local intensity to generate a third filtered output. Afterward, the method may combine the third filtered output with the second filtered output to generate a fourth filtered output that may be output as the image data.
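
A hedged sketch of the blending step described above: the bilateral-filtered signal is favoured only where local activity is low (flat, band-prone regions), then the high-pass detail is recombined. The thresholds and the exact weighting are assumptions.

```python
import numpy as np

def blend_debanded(inp: np.ndarray, bilateral: np.ndarray, highpass: np.ndarray,
                   activity: np.ndarray, intensity: np.ndarray,
                   act_thresh: float = 8.0, int_thresh: float = 16.0) -> np.ndarray:
    # Weight toward the filtered output in low-activity regions.
    w = np.clip(1.0 - activity / act_thresh, 0.0, 1.0)
    w = w * (intensity > int_thresh)                 # leave very dark regions untouched
    third = w * bilateral + (1.0 - w) * inp          # third filtered output (blend)
    return third + highpass                          # fourth filtered output: add detail back
```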

Publication date: 30-03-2017

DEVICES AND METHODS FOR MITIGATING VARIABLE REFRESH RATE CHARGE IMBALANCE

Number: US20170092210A1
Assignee:

Devices and methods for reducing and/or substantially eliminating pixel charge imbalance due to variable refresh rates are provided. By way of example, a method includes providing a first frame of image data via a processor to a plurality of pixels of the display during a first frame period corresponding to a first refresh rate, and providing a second frame of image data to the plurality of pixels of the display during a second frame period corresponding to a second refresh rate. The method further includes dividing the first frame period into a first frame sub-period and a second frame sub-period, and driving the plurality of pixels of the display with the first frame of image data during the first frame sub-period and the second frame sub-period.

1. A method of operating a display, comprising: providing a first frame of image data via a processor to a plurality of pixels of the display during a first frame period corresponding to a first refresh rate; providing a second frame of image data to the plurality of pixels of the display during a second frame period corresponding to a second refresh rate; dividing the first frame period into a first frame sub-period and a second frame sub-period; and driving the plurality of pixels of the display with the first frame of image data during the first frame sub-period and the second frame sub-period.

2. The method of claim 1, wherein providing the first frame of image data to the plurality of pixels of the display comprises providing a positive frame of image data to the plurality of pixels.

3. The method of claim 1, wherein providing the second frame of image data to the plurality of pixels of the display comprises providing a negative frame of image data to the plurality of pixels.

4. The method of claim 1, wherein dividing the first frame period into the first frame sub-period and the second frame sub-period comprises dividing the first frame period each time a charge on the pixels of the display reaches a pixel charge ...

Publication date: 22-08-2017

Systems and methods for local tone mapping

Number: US0009741099B2
Assignee: APPLE INC., APPLE INC

Systems and methods for local tone mapping are provided. In one example, an electronic device includes an electronic display, an imaging device, and an image signal processor. The electronic display may display images of a first bit depth, and the imaging device may include an image sensor that obtains image data of a higher bit depth than the first bit depth. The image signal processor may process the image data, and may include local tone mapping logic that may apply a spatially varying local tone curve to a pixel of the image data to preserve local contrast when displayed on the display. The local tone mapping logic may smooth the local tone curve applied to the pixel when the intensity difference between the pixel and another nearby pixel exceeds a threshold.

Publication date: 06-10-2016

BLUR DOWNSCALE

Number: US20160292827A1
Assignee:

Systems, apparatuses, and methods for generating a blur effect on a source image in a power-efficient manner. Pixels of the source image are averaged as they are read into pixel buffers, and then the source image is further downscaled by a first factor. Then, the downscaled source image is upscaled back to the original size, and then this processed image is composited with a semi-transparent image to create a blurred effect of the source image.

1. An apparatus configured to: receive source image data corresponding to a source frame; responsive to receiving a request that a blur operation be performed on the source frame: downscale the source image data in a first direction by averaging the source image data; downscale the source image data in a second direction; and upscale the source image data back to an original size.

2. The apparatus as recited in claim 1, wherein the apparatus is further configured to combine source image data which has been downscaled and upscaled with semi-transparent image data to produce a blurred version of the source frame.

3. The apparatus as recited in claim 1, wherein the apparatus is configured to average the source image data by adding pixels of the source image data to produce a sum and rounding one or more least significant bits of the sum.

4. The apparatus as recited in claim 1, wherein the apparatus is further configured to downscale the source image data in the first direction after reading the source image data out of one or more pixel buffers and prior to upscaling the source image data back to the original size.

5. The apparatus as recited in claim 1, wherein the second direction is perpendicular to the first direction.

6. The apparatus as recited in claim 1, wherein the apparatus is configured to utilize a multi-tap polyphase filter to downscale the source image data in the second direction.

7. The apparatus as recited in claim 1, wherein the apparatus is further configured to perform a rotation of the source image ...

Publication date: 31-10-2017

Late-stage mode conversions in pipelined video encoders

Number: US0009807410B2
Assignee: Apple Inc., APPLE INC

Video encoders may determine an initial designation of a mode in which to encode a block of pixels in an early stage of a block processing pipeline. A component of a late stage of the block processing pipeline (one that precedes the transcoder) may determine a different mode designation for the block of pixels based on coded block pattern information, motion vector information, the position of the block in a row of such blocks, the order in which such blocks are processed in the pipeline, or other encoding related syntax elements. The component in the late stage may communicate information to the transcoder usable in coding the block of pixels, such as modified syntax elements or an end of row marker. The transcoder may encode the block of pixels in accordance with the different mode designation or may change the mode again, dependent on the communicated information.

Publication date: 11-07-2012

Overflow control techniques for image signal processing

Number: CN102572316A
Assignee:

This disclosure relates to overflow control techniques for image signal processing. Certain embodiments disclosed herein relate to an image signal processing system that includes overflow control logic that detects an overflow condition when a sensor input queue and/or front-end processing unit receives back pressure from a downstream destination unit. In one embodiment, pixels of a current frame are dropped when an overflow condition occurs. The number of dropped pixels may be tracked using a counter. Upon recovery of the overflow condition, the remaining pixels of the frame are received and each dropped pixel may be replaced using a replacement pixel value.

Publication date: 04-07-2012

System and method for processing image data using an image signal processor

Number: CN102547301A
Assignee:

The invention discloses a system and a method for processing image data using an image signal processor. Disclosed embodiments provide for an image signal processing system (32) that includes a back-end pixel processing unit (120) that receives pixel data after being processed by at least one of a front-end pixel processing unit (80) and a pixel processing pipeline (82). In certain embodiments, the back-end processing unit (120) receives luma/chroma image data and may be configured to apply face detection operations, local tone mapping, brightness, contrast, and color adjustments, as well as scaling. Further, the back-end processing unit (120) may also include a back-end statistics unit (2208) that may collect frequency statistics. The frequency statistics may be provided to an encoder (118) and may be used to determine quantization parameters that are to be applied to an image frame.

Publication date: 27-07-2017

INTRA-FRAME PREDICTION SYSTEMS AND METHODS

Number: US20170214912A1
Assignee:

System and method for improving operational efficiency of a video encoding pipeline, which includes a mode decision block that selects a luma intra-frame prediction mode used to encode a luma component of the source image data and a chroma reconstruction block that determines a first distortion expected to result in a first chroma transform block when each of a plurality of candidate chroma intra-frame prediction modes is implemented based on reconstructed image data, determines a second distortion expected to result in a second chroma transform block of the prediction unit when each of the plurality of candidate chroma intra-frame prediction modes is implemented based at least in part on the source image data, and selects a chroma intra-frame prediction mode used to encode a chroma component from the plurality of candidate chroma intra-frame prediction modes based at least in part on the first distortion and the second distortion.

1. A computing device comprising a video encoding pipeline configured to encode source image data, wherein the video encoding pipeline comprises: a mode decision block configured to select a luma intra-frame prediction mode used to encode a luma component of the source image data corresponding with a prediction unit; and determine a first distortion expected to result in a first chroma transform block of the prediction unit when each of a plurality of candidate chroma intra-frame prediction modes is implemented based on reconstructed image data adjacent the first chroma transform block; determine a second distortion expected to result in a second chroma transform block of the prediction unit when each of the plurality of candidate chroma intra-frame prediction modes is implemented based at least in part on the source image data adjacent the second chroma transform block; and select a chroma intra-frame prediction mode used to encode a chroma component of the source image data corresponding with the prediction unit from the plurality ...

Publication date: 18-10-2016

Skip thresholding in pipelined video encoders

Number: US0009473778B2
Assignee: Apple Inc., APPLE INC

The video encoders described herein may make an initial determination to designate a macroblock as a skip macroblock, but may subsequently reverse that decision based on additional information. For example, an initial skip mode decision may be based on aggregate distortion metrics for the luma component of the macroblock (e.g., SAD, SATD, or SSD), then reversed based on an individual pixel difference metric, an aggregate or individual pixel metric for a chroma component of the macroblock, or on the position of the macroblock within a macroblock row. The final skip mode decision may be based, at least in part, on the maximum difference between any pixel in the macroblock (or in a region of interest within the macroblock) and the corresponding pixel in a reference frame. The initial skip mode decision may be made during an early stage of a pipelined video encoding process and reversed in a later stage.
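
A hedged sketch of the two-step skip decision described above: accept a skip based on an aggregate luma SAD, then revoke it if any individual pixel differs too much from the reference. The thresholds are invented for illustration.

```python
import numpy as np

def skip_decision(mb: np.ndarray, ref: np.ndarray,
                  sad_thresh: int = 512, pixel_thresh: int = 24) -> bool:
    """mb/ref: luma samples of a macroblock and its reference block (same shape, uint8)."""
    diff = np.abs(mb.astype(np.int32) - ref.astype(np.int32))
    initial_skip = diff.sum() < sad_thresh           # aggregate distortion metric (SAD)
    if initial_skip and diff.max() > pixel_thresh:   # individual pixel difference metric
        return False                                 # reverse the initial skip designation
    return initial_skip
```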

Publication date: 06-06-2017

Sub-pixel layout compensation

Number: US0009672765B2
Assignee: APPLE Inc., APPLE INC, Apple Inc.

Devices and methods for reducing or eliminating sub-pixel layout artifacts on an electronic display are provided. One such device may include an electronic display to display image data, a processor to generate the image data, and sub-pixel layout compensation circuitry that modifies the image data to reduce or eliminate a sub-pixel layout artifact of the electronic display by modifying pixels of the image data on a sub-pixel-by-sub-pixel basis. The sub-pixel layout compensation circuitry may adjust a sub-pixel of a first color in a first pixel based at least in part on a first gradient between the sub-pixel of the first color of the first pixel and a sub-pixel of the first color of a second pixel.

Publication date: 22-09-2016

HIGH SPEED DISPLAY INTERFACE

Number: US20160275905A1
Assignee:

Methods and devices employing circuitry for dynamically adjusting bandwidth control of a display interface are provided. The display interface or image content is dynamically adjusted to support both high-speed image data (e.g., 120 Hz image data) and lower-speed content (e.g., 60 Hz content). For example, in some embodiments, additional pixel pipelines and/or processing lanes may be activated during the rendering of high-speed image data, but not during the rendering of low-speed image data. Additionally or alternatively, high-speed image data, but not low-speed data, may be compressed to render high-speed content over an interface that supports only low-speed content.

1. A method comprising: receiving a refresh rate for content to be displayed on an electronic display; determining, based upon the refresh rate, a number of pixel pipelines of an interface, a number of lanes of the interface, or both to activate; activate the number of pixel pipelines, the number of lanes, or both; and provide the content for rendering at a display panel via the pixel pipelines and the lanes that are activated.

2. The method of claim 1, wherein receiving the refresh rate comprises: receiving the content and decoding the refresh rate from the content that is received.

3. The method of claim 1, wherein determining the number of pixel pipelines comprises: when the refresh rate is approximately 60 Hz, determining the number of pixel pipelines to equal 1; and when the refresh rate is approximately 120 Hz, determining the number of pixel pipelines to equal 2.

4. The method of claim 1, wherein determining the number of lanes comprises multiplying the number of pipelines by a number of lanes in a pipeline.

5. The method of claim 1, wherein determining the number of lanes comprises interpolating a number of lanes based upon the refresh rate.

6. The method of claim 1, comprising merging outputs from the pipelines that are activated prior to providing the content for rendering at the display panel.

7. ...

Publication date: 01-12-2016

Electronic Device Display With Charge Accumulation Tracker

Number: US20160351138A1
Assignee:

An electronic device may generate content that is to be displayed on a display. The display may have an array of liquid crystal display pixels for displaying image frames of the content. The image frames may be displayed with positive and negative polarities to help reduce charge accumulation effects. A charge accumulation tracker may analyze the image frames to determine when there is a risk of excess charge accumulation. The charge accumulation tracker may analyze information on gray levels, frame duration, and frame polarity. The charge accumulation tracker may compute a charge accumulation metric for entire image frames or may process subregions of each frame separately. When subregions are processed separately, each subregion may be individually monitored for a risk of excess charge accumulation.

Publication date: 12-12-2017

Delayed chroma processing in block processing pipelines

Number: US0009843813B2
Assignee: Apple Inc., APPLE INC

A block processing pipeline in which macroblocks are input to and processed according to row groups so that adjacent macroblocks on a row are not concurrently at adjacent stages of the pipeline. The input method may allow chroma processing to be postponed until after luma processing. One or more upstream stages of the pipeline may process luma elements of each macroblock to generate luma results such as a best mode for processing the luma elements. Luma results may be provided to one or more downstream stages of the pipeline that process chroma elements of each macroblock. The luma results may be used to determine processing of the chroma elements. For example, if the best mode for luma is an intra-frame mode, then a chroma processing stage may determine a best intra-frame mode for chroma and reconstruct the chroma elements according to the best chroma intra-frame mode.

Publication date: 18-10-2016

Display pipe statistics calculation for video encoder

Number: US0009472168B2
Assignee: Apple Inc., APPLE INC

In an embodiment, a system includes a display processing unit configured to process a video sequence for a target display. In some embodiments, the display processing unit is configured to composite the frames from frames of the video sequence and one or more other image sources. The display processing unit may be configured to write the processed/composited frames to memory, and may also be configured to generate statistics over the frame data, where the generated statistics are usable to encode the frame in a video encoder. The display processing unit may be configured to write the generated statistics to memory, and the video encoder may be configured to read the statistics and the frames. The video encoder may be configured to encode the frame responsive to the statistics.

Publication date: 27-06-2017

Source pixel component passthrough

Number: US0009691349B2
Assignee: Apple Inc., APPLE INC

Systems, apparatuses, and methods for passing source pixel data through a display control unit. A display control unit includes N-bit pixel component processing lanes for processing source pixel data. When the display control unit receives M-bit source pixel components, wherein ‘M’ is greater than ‘N’, the display control unit may assign the M-bit source pixel components to the N-bit processing lanes. Then, the M-bit source pixel components may passthrough the pixel component processing elements of the display control unit without being modified.

Publication date: 12-09-2017

Chroma cache architecture in block processing pipelines

Number: US0009762919B2
Assignee: Apple Inc., APPLE INC

Methods and apparatus for caching reference data in a block processing pipeline. A cache may be implemented to which reference data corresponding to motion vectors for blocks being processed in the pipeline may be prefetched from memory. Prefetches for the motion vectors may be initiated one or more stages prior to a processing stage. Cache tags for the cache may be defined by the motion vectors. When a motion vector is received, the tags can be checked to determine if there are cache block(s) corresponding to the vector (cache hits) in the cache. Upon a cache miss, a cache block in the cache is selected according to a replacement policy, the respective tag is updated, and a prefetch (e.g., via DMA) for the respective reference data is issued.

Publication date: 29-11-2016

Chroma quantization in video coding

Number: US0009510002B2
Assignee: APPLE INC., APPLE INC, Apple Inc.

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy.

Publication date: 26-12-2017

Video encoding optimization with extended spaces

Number: US0009854246B2
Assignee: APPLE INC., APPLE INC, Apple Inc.

Embodiments of the present invention may provide a video coder. The video coder may include an encoder to perform coding operations on a video signal in a first format to generate coded video data, and a decoder to decode the coded video data. The video coder may also include an inverse format converter to convert the decoded video data to second format that is different than the first format and an estimator to generate a distortion metric using the decoded video data in the second format and the video signal in the second format. The encoder may adjust the coding operations based on the distortion metric.

Publication date: 30-03-2017

SUB-PIXEL LAYOUT COMPENSATION

Number: US20170092174A1
Assignee: Apple Inc

Devices and methods for reducing or eliminating sub-pixel layout artifacts on an electronic display are provided. One such device may include an electronic display to display image data, a processor to generate the image data, and sub-pixel layout compensation circuitry that modifies the image data to reduce or eliminate a sub-pixel layout artifact of the electronic display by modifying pixels of the image data on a sub-pixel-by-sub-pixel basis. The sub-pixel layout compensation circuitry may adjust a sub-pixel of a first color in a first pixel based at least in part on a first gradient between the sub-pixel of the first color of the first pixel and a sub-pixel of the first color of a second pixel.

Publication date: 15-11-2016

Debanding image data based on spatial activity

Number: US0009495731B2
Assignee: Apple Inc., APPLE INC

A method for attenuating banding in image data may involve receiving a stream of input pixels. The method may then include applying a bi-lateral filter to a first portion of the stream of input pixels to generate a first filtered output and applying a high pass filter to a second portion of the stream of input pixels to generate a second filtered output. The method may then determine a local activity and a local intensity associated with the first portion of the stream. The method may then include blending the first filtered output with the first portion of the stream of input pixels based at least in part on the local activity and the local intensity to generate a third filtered output. Afterward, the method may combine the third filtered output with the second filtered output to generate a fourth filtered output that may be output as the image data.

Publication date: 04-07-2012

Flash synchronization using image sensor interface timing signal

Number: CN102547302A
Assignee:

The invention relates to flash synchronization using image sensor interface timing signal. Certain aspects of this disclosure relate to an image signal processing system (32) that includes a flash controller (550) that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller (550) receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller (550) subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame.
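
A small worked sketch of the flash-activation arithmetic described in this entry. All times are in microseconds and the example numbers are invented; only the order of the additions and subtractions follows the description.

```python
def flash_activation_time(prev_frame_end: int, vertical_blank: int,
                          sensor_delay: int, safety_margin: int) -> int:
    target_frame_start = prev_frame_end + vertical_blank
    # Subtract the first offset (delay between the sensor timing signal and its delayed copy)
    # and a second offset so the flash is on before the first pixel of the target frame.
    return target_frame_start - sensor_delay - safety_margin

print(flash_activation_time(prev_frame_end=33_000, vertical_blank=1_500,
                            sensor_delay=200, safety_margin=100))   # -> 34200
```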

Publication date: 20-10-2016

DEBANDING IMAGE DATA USING BIT DEPTH EXPANSION

Number: US20160307298A1
Assignee:

An image signal processing system may include processing circuitry that may reduce banding artifacts in image data to be depicted on a display. The processing circuitry may receive a first pixel value associated with a first pixel of the image data and detect a first set of pixels located in a first direction along a same row of pixels or a same column of pixels with respect to the first pixel. The first set of pixels is associated with a first band. The processing circuitry may then interpolate a second pixel value based on an average of a first set of pixel values that correspond to the first set of pixels and a distance between the first pixel and a closest pixel in the first band. The processing circuitry may then output the second pixel value for the first pixel.

1. An image signal processing system, comprising: processing circuitry configured to reduce banding artifacts in image data to be depicted on a display, wherein the processing circuitry is configured to: receive a first pixel value associated with a first pixel of the image data; detect a first set of pixels located in a first direction along a same row of pixels or a same column of pixels with respect to the first pixel, wherein the first set of pixels is associated with a first band; interpolate a second pixel value based on a first set of pixel values that correspond to the first set of pixels and a distance between the first pixel and a closest pixel in the first band; and output the second pixel value for the first pixel.

2. The image signal processing system of claim 1, wherein the processing circuitry is configured to receive the first pixel value from a buffer configured to store a portion of the image data.

3. The image signal processing system of claim 2, wherein the portion of the image data comprises 128 lines of the image data.

4. The image signal processing system of claim 2, wherein the portion of the image data comprises a first number of lines of the image data above the ...
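
A rough, hedged sketch of the band-interpolation idea in this entry: the output moves from the current pixel toward the average of the neighbouring band, weighted by how close that band is within a search window. The window size, weighting, and 1-D row scan are assumptions, not the patent's exact rule.

```python
import numpy as np

def deband_pixel(row: np.ndarray, x: int, window: int = 16) -> float:
    """row: one row of integer pixel values; x: index of the pixel being corrected."""
    cur = float(row[x])
    ahead = row[x + 1:x + 1 + window]
    band = [float(v) for v in ahead if v != row[x]]          # pixels belonging to the next band
    if not band:
        return cur                                           # no band edge in the window
    distance = next(i for i, v in enumerate(ahead, 1) if v != row[x])
    weight = 1.0 - distance / window                         # closer band -> stronger pull
    return cur + 0.5 * weight * (np.mean(band) - cur)        # interpolated second pixel value
```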

Publication date: 11-07-2012

Techniques for synchronizing audio and video data in an image signal processing system

Number: CN102572443A
Assignee:

The present disclosure relates to techniques for synchronizing audio and video data in an image signal processing system. The present disclosure provides techniques for performing audio-video synchronization using an image signal processing system (22). In one embodiment, a time code register (492) provides a current time stamp when sampled. The value of the time code register (492) may be incremented at regular intervals based on a clock of the image signal processing system (32). At the start of a current frame acquired by an image sensor (90), the time code register (492) is sampled, and a time stamp (500) is stored into a time stamp register associated with the image sensor (90). The time stamp (500) is then read from the time stamp register and written to a set of metadata (498) associated with the current frame. The time stamp (500) stored in the frame metadata (498) may then be used to synchronize the current frame with a corresponding set of audio data.

Publication date: 24-04-2013

Operating a device to capture high dynamic range images

Number: CN103069453A
Assignee:

Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels.

Publication date: 06-10-2016

SOURCE PIXEL COMPONENT PASSTHROUGH

Number: US20160293137A1
Assignee:

Systems, apparatuses, and methods for passing source pixel data through a display control unit. A display control unit includes N-bit pixel component processing lanes for processing source pixel data. When the display control unit receives M-bit source pixel components, wherein ‘M’ is greater than ‘N’, the display control unit may assign the M-bit source pixel components to the N-bit processing lanes. Then, the M-bit source pixel components may passthrough the pixel component processing elements of the display control unit without being modified.

1. A display control unit configured to: receive source pixel data; responsive to determining the source pixel data is in a first format comprising a first bit-width, assign each source pixel component of the source pixel data to a single corresponding pixel component processing lane; and responsive to determining the source pixel data is in a second format comprising a second bit-width that is greater than the first bit-width, assign a first source pixel component of the source pixel data to at least a portion of two separate pixel component processing lanes.

2. The display control unit as recited in claim 1, wherein responsive to determining the source pixel data is in the first format, the display control unit is further configured to modify at least a portion of the source pixel data.

3. The display control unit as recited in claim 2, wherein responsive to determining the source pixel data is in the second format, the display control unit is further configured to pass the source pixel data through the display control unit without modifying the source pixel data.

4. The display control unit as recited in claim 1, wherein the display control is configured to assign a first source pixel component of the source pixel data to at least a portion of two separate pixel component processing lanes in further response to determining the source pixel data is subsampled.

5. The display control unit as recited in claim ...

Publication date: 27-09-2016

Global configuration broadcast

Number: US0009454378B2
Assignee: Apple Inc., APPLE INC

Methods and apparatus for configuring multiple components of a subsystem are described. The configuration memory of each of a plurality of components coupled to an interconnect includes a global configuration portion. The configuration memory of one of the components may be designated as a master global configuration for all of the components. A module coupled to the interconnect may receive writes to the components from a configuration source. For each write, the module may decode the write to determine addressing information and check to see if the write is addressed to the master global configuration. If the write is addressed to the master global configuration, the module broadcasts the write to the global configuration portion of each of the components via the interconnect. If the write is not addressed to the master global configuration, the module forwards the write to the appropriate component via the interconnect.

Publication date: 30-03-2017

CONTENT-BASED STATISTICS FOR AMBIENT LIGHT SENSING

Number: US20170092228A1
Assignee:

An electronic display includes a display side and an ambient light sensor configured to measure received light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.

1. An electronic device comprising: a display panel comprising a plurality of pixels each configured to emit light; an ambient light sensor arranged behind the display panel; and ambient light sensor compensation logic configured to estimate how much light detected by the ambient light sensor can be attributed to the emitted light.

2. The electronic device of claim 1, wherein the ambient light sensor compensation logic compensates for the light emitted from the display panel by dividing image frames of video image data to be displayed by the display panel into overlapping regions.

3. The electronic device of claim 2, wherein the regions are concentric regions.

4. The electronic device of claim 1, wherein the ambient light sensor compensation logic is configured to determine how much light detected by the ambient light sensor can be attributed to ambient brightness from outside the electronic device and is configured to substantially remove the light emitted from the display panel from the measured brightness to determine.

5. The electronic device of claim 1, wherein the ambient light sensor compensation logic weights pixels of the plurality of pixels that are closer to the ambient light sensor more heavily than pixels of the plurality of pixels that are farther from the ambient light sensor.

6. The electronic device of claim 3, wherein the regions comprise rectangular-shaped regions.

7. The electronic device of claim 6, wherein the ambient light sensor compensation logic is configured to track each region using an offset from a reference point of the display panel and a size of the region.

8. A method comprising: capturing ambient ...

Publication date: 22-08-2017

Systems and methods for lens shading correction

Number: US0009743057B2

Systems and methods for correcting intensity drop-offs due to geometric properties of lenses are provided. In one example, a method includes receiving an input pixel of the image data, the image data acquired using an image sensor. A color component of the input pixel is determined. A gain grid is determined by pointing to the gain grid in external memory. Each of the plurality of grid points is associated with a lens shading gain selected based upon the color of the input pixel. A nearest set of grid points that enclose the input pixel is identified. Further, a lens shading gain is determined by interpolating the lens shading gains associated with each of the set of grid points and is applied to the input pixel.

Publication date: 15-08-2012

System and method for detecting and correcting defective pixels in an image sensor

Number: CN102640489A
Assignee:

Various techniques are provided for the detection and correction of defective pixels in an image sensor. In accordance with one embodiment, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect table. If the location of the current pixel is found in the static defect table, the current pixel is identified as a static defect and is corrected using the value of the previous pixel of the same color. If the current pixel is not identified as a static defect, a dynamic defect detection process includes comparing pixel-to-pixel gradients between the current pixel and a set of neighboring pixels against a dynamic defect threshold. If a dynamic defect is detected, a replacement value for correcting the dynamic defect may be determined by interpolating the value of two neighboring pixels on opposite sides of the current pixel in a direction exhibiting the smallest gradient.
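
A hedged sketch of the static/dynamic defective-pixel flow on a single-channel plane. The threshold value and the simplification to horizontal/vertical neighbours only are illustrative assumptions.

```python
import numpy as np

def correct_pixel(img: np.ndarray, y: int, x: int,
                  static_defects: set[tuple[int, int]],
                  prev_same_color: float, dyn_thresh: int = 40) -> float:
    if (y, x) in static_defects:                      # known static defect
        return prev_same_color                        # reuse the previous same-colour pixel value
    p = float(img[y, x])
    neighbours = {"h": (float(img[y, x - 1]), float(img[y, x + 1])),
                  "v": (float(img[y - 1, x]), float(img[y + 1, x]))}
    grads = {d: abs(p - a) + abs(p - b) for d, (a, b) in neighbours.items()}
    if min(grads.values()) > dyn_thresh:              # dynamic defect: all gradients too large
        a, b = neighbours[min(grads, key=grads.get)]  # direction with the smallest gradient
        return (a + b) / 2.0                          # replacement by two-neighbour interpolation
    return p
```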

Publication date: 20-03-2018

Electronic device display with charge accumulation tracker

Number: US0009922608B2
Assignee: Apple Inc., APPLE INC

An electronic device may generate content that is to be displayed on a display. The display may have an array of liquid crystal display pixels for displaying image frames of the content. The image frames may be displayed with positive and negative polarities to help reduce charge accumulation effects. A charge accumulation tracker may analyze the image frames to determine when there is a risk of excess charge accumulation. The charge accumulation tracker may analyze information on gray levels, frame duration, and frame polarity. The charge accumulation tracker may compute a charge accumulation metric for entire image frames or may process subregions of each frame separately. When subregions are processed separately, each subregion may be individually monitored for a risk of excess charge accumulation.

Publication date: 04-07-2012

Image signal processor line buffer configuration for processing raw image data

Number: CN102547162A
Assignee:

The present disclosure provides techniques relates to the implementation of a raw pixel processing unit (900) using a set of line buffers. In one embodiment, the set of line buffers may include a first subset (1162) and second subset (1164). Various logical units of the raw pixel processing unit (900) may be implemented using the first and second subsets of line buffers (1162, 1164) in a shared manner. For instance, in one embodiment, defective pixel correction and detection logic (932) may be implemented using the first subset of line buffers (1162). The second subset of line buffers (1164) may be used to implement lens shading correction logic (936), gain, offset, and clamping logic (938), and demosaicing logic (940). Further, noise reduction may also be implemented using at least a portion of each of the first and second subsets of line buffers (1162, 1164).

Publication date: 24-04-2013

Capturing and rendering high dynamic ranges images

Number: CN103069454A
Assignee:

Some embodiments of the invention provide a mobile device that captures and produces images with high dynamic ranges. To capture and produce a high dynamic range image, the mobile device of some embodiments includes novel image capture and processing modules. In some embodiments, the mobile device produces a high dynamic range (HDR) image by (1) having its image capture module rapidly capture a succession of images at different image exposure durations, and (2) having its image processing module composite these images to produce the HDR image.

Publication date: 14-02-2017

Debanding image data using bit depth expansion

Number: US9569816B2
Assignee: APPLE INC, Apple Inc.

An image signal processing system may include processing circuitry that may reduce banding artifacts in image data to be depicted on a display. The processing circuitry may receive a first pixel value associated with a first pixel of the image data and detect a first set of pixels located in a first direction along a same row of pixels or a same column of pixels with respect to the first pixel. The first set of pixels is associated with a first band. The processing circuitry may then interpolate a second pixel value based on an average of a first set of pixel values that correspond to the first set of pixels and a distance between the first pixel and a closest pixel in the first band. The processing circuitry may then output the second pixel value for the first pixel.

Publication date: 20-10-2016

LINEAR SCALING IN A DISPLAY PIPELINE

Number: US20160307540A1
Assignee:

Systems, apparatuses, and methods for performing linear scaling in a display control unit. A display control unit receives source image data that has already been gamma encoded with an unknown gamma value. The display control unit includes a hard-coded LUT storing a gamma curve of a first gamma value which is used to perform a degamma operation on the received source image data. Even if the first gamma value used to perform the degamma operation is different from the gamma value used to gamma encode the source image data, fewer visual artifacts are generated as compared with not performing a degamma operation. After the degamma operation is performed, the source image data may be linearly scaled.

1. A display control unit configured to: receive first source image data, wherein the first source image data has been gamma encoded with an unknown gamma value; perform a degamma operation of a first gamma value on the first source image data; and perform one or more linear scaling operations on the first source image data subsequent to performing the degamma operation on the first source image data.

2. The display control unit as recited in claim 1, wherein the degamma operation is performed using a predetermined value.

3. The display control unit as recited in claim 1, wherein the display control unit is further configured to reapply a gamma encoding to the first source image data subsequent to performing the one or more linear scaling operations on the first source image data, wherein the gamma encoding is an inverse of the degamma operation of the first gamma value.

4. The display control unit as recited in claim 1, wherein the display control unit is further configured to store the first source image data in one or more line buffers prior to performing the degamma operation.

5. The display control unit as recited in claim 1, wherein the display control unit is further configured to: receive second source image data, wherein the second source image data has been ...

Publication date: 11-07-2012

Auto-focus control using image statistics data with coarse and fine auto-focus scores

Number: CN102572265A
Assignee:

Techniques are provided for determining an optimal focal position using auto-focus statistics. In one embodiment, such techniques may include generating coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens (88) associated with the image sensor (90). For instance, the statistics logic (680) may determine a coarse position that indicates an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position in which a coarse auto-focus score decreases with respect to a coarse auto-focus score at a previous position. Using this position as a starting point for fine score searching, the optimal focal position may be determined by searching for a peak in fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined based on each color of the Bayer RGB, such that, even in the presence of chromatic aberrations, relative auto-focus scores for each color may be used to determine the ...
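
A hedged sketch of the coarse-to-fine focus search described in this entry: sweep coarse positions until the coarse score first decreases, then look for the fine-score peak around that lens position. The score callables, the position list, and the fine-search radius are assumptions.

```python
def find_focus(coarse_score, fine_score, positions, fine_radius=8):
    """positions: increasing lens positions; coarse_score/fine_score: callables returning a score."""
    best = positions[0]
    prev = coarse_score(best)
    for pos in positions[1:]:
        s = coarse_score(pos)
        if s < prev:                 # first decrease marks the optimal focus area
            break
        best, prev = pos, s
    # Fine search: peak of the fine auto-focus score near the coarse position.
    candidates = range(best - fine_radius, best + fine_radius + 1)
    return max(candidates, key=fine_score)
```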

Publication date: 30-03-2017

TIMESTAMP BASED DISPLAY UPDATE MECHANISM

Number: US20170092236A1
Assignee:

Systems, apparatuses, and methods for implementing a timestamp based display update mechanism. A display control unit includes a timestamp queue for storing timestamps, wherein each timestamp indicates when a corresponding frame configuration set should be fetched from memory. At pre-defined intervals, the display control unit may compare the timestamp of the topmost entry of the timestamp queue to a global timer value. If the timestamp is earlier than the global timer value, the display control unit may pop the timestamp entry and fetch the next frame configuration set from memory. The display control unit may then apply the updates of the frame configuration set to its pixel processing elements. After applying the updates, the display control unit may fetch and process the source pixel data and then drive the pixels of the next frame to the display.

1. A display control unit configured to: fetch a frame configuration set corresponding to a frame to be displayed based on a timestamp associated with the frame; and utilize the configuration set to update one or more display settings for processing and displaying source pixel data corresponding to the frame.

2. The display control unit as recited in claim 1, wherein the display control unit is further configured to receive a timestamp associated with the frame configuration set, wherein the timestamp is generated by a host processor external to the display control unit.

3. The display control unit as recited in claim 1, wherein the display control unit is further configured to receive an address with each timestamp, wherein the address identifies a location in memory of the frame configuration set.

4. The display control unit as recited in claim 1, wherein the frame configuration set includes parameters specifying at least one of an input frame size, an output frame size, an input pixel format, an output pixel format, and a location of one or more source frames.

5. ...
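
A small sketch of the timestamp-queue check described in this entry. The queue entries and the fetch callback are placeholders, not the actual hardware interface.

```python
from collections import deque

def service_timestamp_queue(queue: deque, global_timer: int, fetch_config) -> None:
    """queue holds (timestamp, config_address) pairs in presentation order."""
    while queue and queue[0][0] <= global_timer:      # topmost timestamp has come due
        _, config_address = queue.popleft()
        fetch_config(config_address)                  # fetch the frame configuration set from memory

# Usage example with invented values: only the first entry is due at timer value 1005.
q = deque([(1000, 0x8000), (1016, 0x8400)])
service_timestamp_queue(q, global_timer=1005, fetch_config=lambda addr: print(hex(addr)))
```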

Publication date: 05-01-2012

Capturing and Rendering High Dynamic Range Images

Number: US20120002082A1
Assignee:

Some embodiments of the invention provide a mobile device that captures and produces images with high dynamic ranges. To capture and produce a high dynamic range image, the mobile device of some embodiments includes novel image capture and processing modules. In some embodiments, the mobile device produces a high dynamic range (HDR) image by (1) having its image capture module rapidly capture a succession of images at different image exposure durations, and (2) having its image processing module composite these images to produce the HDR image.

1. A mobile device that captures images, the device comprising: a) a camera for capturing at least three images at different image exposure durations; and b) an image processing module for compositing the three captured images to produce a composite image.

2. The mobile device of claim 1, wherein the three captured images are images that capture a high dynamic range (HDR) scene.

3. The mobile device of claim 2, wherein the HDR scene has bright and dark regions and the composite image is an image that displays the details within the bright and dark regions of the HDR scene.

4. The mobile device of claim 1, wherein the image processing module composites the three captured images by identifying weighting values for pixels in the three captured images and selectively combining component color values of the pixels based on the identified weighting values.

5. The mobile device of claim 4, wherein the image processing module identifies the weighting values for the pixels based on brightness of the pixels.

6. The mobile device of claim 4, wherein the image processing module identifies the weighting value for each pixel in each image based on brightness values of neighboring pixels that are within a particular range of the pixel.

7. The mobile device of claim 1, wherein the image processing module receives the three captured images in a color space that is defined by luma and chroma channels, wherein the image processing module ...
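
A hedged sketch of brightness-weighted compositing for three exposures, in the spirit of the weighting described in this entry. The hat-shaped weighting around mid-grey is a common choice and an assumption here, not the device's actual weighting function.

```python
import numpy as np

def composite_hdr(under: np.ndarray, normal: np.ndarray, over: np.ndarray) -> np.ndarray:
    """All inputs: aligned uint8 images of the same shape (one exposure each)."""
    stack = np.stack([under, normal, over]).astype(np.float32)
    # Weight each sample by how close its brightness is to mid-grey (well exposed).
    weights = 1.0 - np.abs(stack / 255.0 - 0.5) * 2.0
    weights = np.clip(weights, 1e-3, None)            # keep the weighted sum well defined
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```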

Подробнее
05-01-2012 дата публикации

Operating a Device to Capture High Dynamic Range Images

Номер: US20120002898A1
Принадлежит:

Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels. 1. A non-transitory machine readable medium of a device that captures images , the medium storing a program that when executed by at least one processing unit captures an image of a high dynamic range (HDR) scene , the program comprising sets of instructions for:capturing and storing a plurality of images at a first exposure level;upon receiving a command to capture the HDR scene, capturing a first image at a second exposure level and a second image at a third exposure level;selecting a third image from the captured plurality of images; andcompositing the first, second, and third images to produce a composite image that captures the HDR scene.2. The machine readable medium of claim 1 , wherein the set of instructions for selecting the third image from the plurality of images comprises a set of instructions for selecting an image in the plurality of images that was captured immediately before receiving the command to capture the HDR scene.3. The machine readable medium of claim 1 , wherein the set of instructions for selecting the third image from the plurality of images comprises a set of instructions for selecting an image in the plurality of images that (i) was captured within a time period before receiving the command to capture the HDR scene claim 1 , and (ii) was captured while the device moved less than a threshold amount.4. The machine readable ...

Подробнее
05-01-2012 дата публикации

Aligning Images

Номер: US20120002899A1
Принадлежит:

Some embodiments provide a method of aligning a pair of images. The method defines multiple different pairs of images at multiple different resolutions. The method hierarchically aligns the original pair of images by first aligning the pair of images at the lowest resolution and then aligning each pair of images at each higher resolution based on the alignments of the pair of images at the lower resolutions. For some of the resolutions, to perform the hierarchical alignment, the method identifies, for at least one image at each resolution, portions that are suitable for performing the alignment and portions that are not suitable for performing the alignment. The method compares each pair of images at a particular resolution by using the suitable portions while excluding the unsuitable portions from the comparison. 1. A method of aligning a pair of original images , the method comprising:defining a plurality of different pairs of images at a plurality of different resolutions; andhierarchically aligning the pair of original images by first aligning the pair of images at the lowest resolution and then aligning each pair of images at each higher resolution based on the alignments of the pair of images at the lower resolutions, in at least one image at the particular resolution, identifying portions that are suitable for performing the alignment and portions that are not suitable for performing the alignment; and', 'comparing the pair of images at the particular resolution by using the suitable portions while excluding the unsuitable portions from the comparison., 'the hierarchical aligning comprising, for each particular resolution in a subset of resolutions2. The method of claim 1 ,wherein the plurality of different pairs of images is associated with a plurality of different pairs of bitmaps;wherein each particular bitmap is a bitmap of a particular original image at a particular resolution; ["dividing the particular image's bitmap into a plurality of tiles;", ' ...
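
The coarse-to-fine idea can be illustrated with a small translation-only alignment sketch; the 2x pyramid, the +/-1 refinement search and the absolute-difference error metric are simplifying assumptions (the filing also masks out unsuitable portions, which this sketch omits).

```python
import numpy as np

def hierarchical_align(img_a, img_b, levels=4, search=1):
    """Coarse-to-fine translation estimate between two grayscale images.

    img_a, img_b -- 2-D float numpy arrays of equal shape.
    Builds a pyramid by 2x downscaling, finds the best integer shift at the
    coarsest level, then refines it at each finer level, doubling the running
    estimate each time.
    """
    def downscale(im):
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        im = im[:h, :w]
        return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2])

    pyr_a, pyr_b = [img_a], [img_b]
    for _ in range(levels - 1):
        pyr_a.append(downscale(pyr_a[-1]))
        pyr_b.append(downscale(pyr_b[-1]))

    dy = dx = 0
    for a, b in zip(reversed(pyr_a), reversed(pyr_b)):   # coarsest level first
        dy, dx = dy * 2, dx * 2                          # carry estimate to finer level
        best = (np.inf, dy, dx)
        for ddy in range(-search, search + 1):           # small refinement search
            for ddx in range(-search, search + 1):
                shifted = np.roll(b, (dy + ddy, dx + ddx), axis=(0, 1))
                err = np.abs(a - shifted).mean()
                best = min(best, (err, dy + ddy, dx + ddx))
        _, dy, dx = best
    return dy, dx
```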

Подробнее
02-02-2012 дата публикации

BINNING COMPENSATION FILTERING TECHNIQUES FOR IMAGE SIGNAL PROCESSING

Номер: US20120026368A1
Принадлежит: Apple Inc.

Various techniques for applying binning compensation filtering to binned raw image data acquired by an image sensor are provided. In one embodiment, a binning compensation filter (BCF) includes separate digital differential analyzers (DDA) for vertical and horizontal scaling. A current position of an output pixel is determined by incrementing the DDA based upon a step size. Using the known output pixel position, a center source input pixel and an index corresponding to the between-pixel fractional position of the output pixel position relative to the input pixels may be selected for filtering. Using the selected center input pixel, one or more same-colored neighboring source pixels may be selected. The number of selected source pixels may depend on the number of taps used by the scaling logic, and may depend on whether horizontal or vertical scaling is being applied. Using the selected index, a set of filter coefficients may be selected from a filter coefficient lookup table, applied to the selected source pixels, and the results may be summed to determine a value for an output pixel having a position corresponding to the current position of the DDA. This process may be repeated for each input pixel and may be performed in both vertical and horizontal directions, thus ultimately producing a re-sampled set of image data that is spatially evenly distributed. 1. A method for processing raw image data comprising:using a binning compensation filter of an image signal processor:using a position value stored by a position register to determine a plurality of output pixel locations, wherein each of the plurality of output pixel locations is determined by incrementing the position value using a step value; using the current position value to select a center input pixel from the raw image data and to select an index value, wherein the index value represents a fractional position of a current output pixel at the current position value between two input pixels of the same color ...
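
A one-dimensional sketch of the DDA-driven resampling described above; the 8-phase coefficient table, the 5-tap filter and the edge clamping are illustrative assumptions rather than the filing's exact parameters.

```python
def dda_resample_1d(src, step, coeff_table, taps=5):
    """Resample one row of same-color pixels using a DDA position register.

    src         -- list of input pixel values (one color channel)
    step        -- DDA increment per output pixel (e.g. step=1.5 downscales by 1.5x)
    coeff_table -- dict mapping a fractional index (0..7) to a list of `taps`
                   filter coefficients that sum to 1.0
    """
    out = []
    pos = 0.0                                    # current DDA position
    half = taps // 2
    while int(pos) + half < len(src):
        center = int(pos)                        # center source input pixel
        frac_index = int((pos - center) * 8)     # 3-bit between-pixel index
        coeffs = coeff_table[frac_index]
        acc = 0.0
        for t in range(-half, half + 1):         # gather same-colored neighbors
            sample = src[min(max(center + t, 0), len(src) - 1)]
            acc += coeffs[t + half] * sample
        out.append(acc)
        pos += step                              # increment the DDA by the step size
    return out

# Example phase-0 entry (pass-through): coeff_table[0] = [0.0, 0.0, 1.0, 0.0, 0.0]
```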

Подробнее
23-02-2012 дата публикации

Method and mechanical press system for the generation of densified cylindrical briquettes

Номер: US20120042793A1
Принадлежит: 9177-4331 QUEBEC Inc

A method and a press for preparing a pressed article from compressible and cohesive biomass particles are provided. The method comprises providing a first pressing ram and a second pressing ram operating in opposite directions and disposed in a compression chamber, in retracted position; supplying a quantity of biomass particles in a space in the compression chamber between the first and second pressing rams; closing the compression chamber; extending the first pressing ram towards the biomass particles in the compression chamber; displacing the biomass particles with the first pressing ram towards the second pressing ram; detecting abutment of the biomass particles on the second pressing ram once the biomass particles are displaced by the first pressing ram to touch the second pressing ram; applying pressure to the biomass particles with the first pressing ram by extending the first pressing ram to abut the biomass particles on the second pressing ram and with the second pressing ram by extending the second press ram to abut the biomass particles on the first pressing ram; detecting a pressure applied to match a predetermined compression pressure and continuing to extend the first pressing ram and the second pressing ram until a predetermined time at the matched compression pressure has elapsed, thereby forming a pressed article; stopping the extension of the second pressing ram when a predetermined extension length for the second pressing ram is reached; continuing to extend the first pressing ram until a predetermined additional time has elapsed after the stopping; ejecting a pressed article made of compressed biomass particles from the compression chamber.

Подробнее
01-03-2012 дата публикации

TECHNIQUES FOR ACQUIRING AND PROCESSING STATISTICS DATA IN AN IMAGE SIGNAL PROCESSOR

Номер: US20120050567A1
Принадлежит: Apple Inc.

Various techniques are disclosed for processing statistics data in an image signal processor (ISP). In one embodiment, a statistics collection engine may be implemented in a front-end processing unit of the ISP, such that statistics are collected prior to processing by an ISP pipeline downstream from the front-end processing unit. In one embodiment, the statistics collection engine may be configured to acquire statistics relating to auto white-balance, auto-exposure, and auto-focus, as well as flicker detection. Collected statistics may be output to a memory and used by the ISP to process acquired image data. 1. An image signal processing system comprising:a front-end pixel processing unit configured to receive a frame of raw image data comprising pixels acquired using a digital image sensor, wherein the front-end pixel processing unit comprises statistics collection logic configured to collect statistics based upon the raw frame pixel data, wherein the collected statistics comprises at least one of auto-white balance statistics, auto-exposure statistics, auto-focus statistics, and flicker detection statistics, and wherein the collection of the statistics occurs prior to the raw frame being processed by an image signal processing pipeline coupled downstream of the front-end pixel processing unit.2. The image signal processing system of claim 1 , wherein the statistics collection logic comprises:an input configured to receive the raw frame pixels;color space conversion logic configured to convert the raw frame pixel data into multiple sets of converted pixel data, each of the converted pixel data sets being in a different color space; anda set of pixel filters, each being configured to receive the raw frame pixel data and the multiple sets of converted pixel data, to select one set of either the raw frame pixel data or the converted pixel data, and to determine one or more accumulated color sum values by analyzing the selected set of pixel data.3. The image signal ...

Подробнее
01-03-2012 дата публикации

AUTO-FOCUS CONTROL USING IMAGE STATISTICS DATA WITH COARSE AND FINE AUTO-FOCUS SCORES

Номер: US20120051730A1
Принадлежит: Apple Inc.

Techniques are provided for determining an optimal focal position using auto-focus statistics. In one embodiment, such techniques may include generating coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens associated with the image sensor. For instance, the statistics logic may determine a coarse position that indicates an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position in which a coarse auto-focus score decreases with respect to a coarse auto-focus score at a previous position. Using this position as a starting point for fine score searching, the optimal focal position may be determined by searching for a peak in fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined based on each color of the Bayer RGB, such that, even in the presence of chromatic aberrations, relative auto-focus scores for each color may be used to determine the direction of focus. 1. An image signal processing system comprising:a front-end pixel processing unit configured to receive a frame of raw image data comprising pixels acquired using an imaging device having a digital image sensor, wherein the front-end pixel processing unit comprises a statistics collection engine having auto-focus statistics logic configured to process the raw image data to collect coarse and fine auto-focus statistics; andcontrol logic configured to determine an optimal focal position of a lens of the imaging device using coarse and fine auto-focus scores based upon the coarse and fine auto-focus statistics and to adjust the focal position of the lens between a minimum position and a maximum position defining a total focal length to reach the optimal focal position.2. The image signal processing system of claim 1 , wherein the control logic is configured to determine the optimal focal position of the lens by:stepping the focal position across a plurality of coarse score ...
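
The two-phase search can be sketched as follows, assuming `coarse_score` and `fine_score` are callables that move the lens to a position and return the respective auto-focus score there; the step counts and the refinement strategy are illustrative.

```python
def find_focus(coarse_score, fine_score, min_pos, max_pos, coarse_steps=8, fine_step=1):
    """Two-phase auto-focus search (illustrative).

    Phase 1: step across a few coarse positions and stop at the first position
    whose coarse score drops relative to the previous one.
    Phase 2: starting just before the drop, walk in fine steps and return the
    position with the peak fine score.
    """
    span = max_pos - min_pos
    positions = [min_pos + span * i // (coarse_steps - 1) for i in range(coarse_steps)]

    start = positions[0]
    prev = coarse_score(start)
    for pos in positions[1:]:
        cur = coarse_score(pos)
        if cur < prev:                 # first decrease: optimum lies just behind us
            break
        start, prev = pos, cur

    best_pos, best = start, fine_score(start)
    pos = start
    while pos < max_pos:
        pos += fine_step
        score = fine_score(pos)
        if score < best:               # past the fine-score peak
            break
        best_pos, best = pos, score
    return best_pos
```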

Подробнее
05-04-2012 дата публикации

IMAGE SIGNAL PROCESSOR LINE BUFFER CONFIGURATION FOR PROCESSING RAW IMAGE DATA

Номер: US20120081578A1
Принадлежит: Apple Inc.

The present disclosure provides techniques related to the implementation of a raw pixel processing unit using a set of line buffers. In one embodiment, the set of line buffers may include a first subset and second subset. Various logical units of the raw pixel processing unit may be implemented using the first and second subsets of line buffers in a shared manner. For instance, in one embodiment, defective pixel correction and detection logic may be implemented using the first subset of line buffers. The second subset of line buffers may be used to implement lens shading correction logic, gain, offset, and clamping logic, and demosaicing logic. Further, noise reduction may also be implemented using at least a portion of each of the first and second subsets of line buffers. 1. A method for processing image data using an image signal processing system comprising:using a digital image sensor to acquire raw image data comprising a plurality of raw pixels representative of an image scene;providing the raw pixels to a raw image processing pipeline comprising a set of line buffers;processing the raw pixels using the raw image processing pipeline, wherein processing the raw pixels using the raw image processing pipeline comprises:applying a first set of gain, offset, and clamping parameters to the raw pixels using gain, offset, and clamping logic implemented in a first line of logic;using defective pixel correction logic implemented using a first line of logic and a first subset of the set of line buffers to apply a defective pixel correction operation to the raw pixels;using noise reduction logic implemented using the set of line buffers to apply noise reduction to the raw pixels;using lens shading correction logic implemented using a second subset of the set of line buffers to apply lens shading correction to the raw pixels;applying a second set of gain, offset, and clamping parameters to the raw pixels using the second subset of line buffers; andusing demosaicing logic ...

Подробнее
05-04-2012 дата публикации

Overflow control techniques for image signal processing

Номер: US20120081580A1
Принадлежит: Apple Inc

Certain embodiments disclosed herein relate to an image signal processing system that includes overflow control logic that detects an overflow condition when a sensor input queue and/or front-end processing unit receives back pressure from a downstream destination unit. In one embodiment, pixels of a current frame are dropped when an overflow condition occurs. The number of dropped pixels may be tracked using a counter. Upon recovery of the overflow condition, the remaining pixels of the frame are received and each dropped pixel may be replaced using a replacement pixel value.

Подробнее
14-06-2012 дата публикации

METHOD AND APPARATUS FOR H.264 TO MPEG-2 VIDEO TRANSCODING

Номер: US20120147952A1
Автор: Cote Guy, Winger Lowell L.
Принадлежит:

A method for transcoding from an H.264 format to an MPEG-2 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the H.264 format to generate a picture having a plurality of macroblock pairs that used an H.264 macroblock adaptive field/frame coding; (B) determining a mode indicator for each of the macroblock pairs; and (C) coding the macroblock pairs into an output video stream in the MPEG-2 format using one of (i) an MPEG-2 field mode coding and (ii) an MPEG-2 frame mode coding as determined from the mode indicators. 1. A method for transcoding from an H.264 format to an MPEG-2 format , comprising the steps of:(A) decoding an input video stream in said H.264 format to generate a picture;(B) determining that said picture had an H.264 picture adaptive field/frame coding in said input video stream; and(C) coding said picture into an output video stream in said MPEG-2 format using an MPEG-2 frame mode coding.2. The method according to claim 1 , further comprising the step of:determining an H.264 scan order of said picture, wherein said coding of said picture into said output video stream is seeded using said H.264 scan order.3. The method according to claim 1 , further comprising the step of:mapping an H.264 16×16 macroblock partition in said picture to at least one of (i) an MPEG-2 16×16 partition and (ii) an MPEG-2 16×8 partition.4. The method according to claim 1 , further comprising the step of:mapping an H.264 16×8 macroblock partition in said picture to an MPEG-2 16×8 partition.5. The method according to claim 1 , further comprising the step of:combining a plurality of H.264 motion vectors from an H.264 macroblock partition in said picture to generate a seed motion vector of an MPEG-2 partition.6. The method according to claim 5 , further comprising the step of:using said seed motion vector as a motion estimation center in a search of an MPEG-2 motion vector.7. The method according to claim 1 , further comprising ...

Подробнее
13-09-2012 дата публикации

VIDEO BITSTREAM TRANSCODING METHOD AND APPARATUS

Номер: US20120230404A1
Автор: Cote Guy
Принадлежит:

A video transcoder is disclosed. The video transcoder generally comprises a processor and a video digital signal processor. The processor may be formed on a first die. The video digital signal processor may be formed on a second die and coupled to the processor. The video digital signal processor may have (i) a first module configured to perform a first operation in decoding an input video stream in a first format and (ii) a second module configured to perform a second operation in coding an output video stream in a second format, wherein the first operation and the second operation are performed in parallel. 1. An apparatus comprising:a processor configured to (i) generate a plurality of commands and (ii) perform a portion of one or more of (a) a decoding of an input video stream and (b) a coding of an output video stream; anda video digital signal processor coupled to said processor, said video digital signal processor having (i) a decoder module configured to generate a plurality of decoded pixels by decoding said input video stream in a first format, (ii) an encoder module configured to code said output video stream in a second format and (iii) a memory module disposed between said decoder module and said encoder module and arranged to pass said decoded pixels from said decoder module directly to said encoder module, wherein said second format is different than said first format.2. The apparatus according to claim 1 , wherein said first format comprises an H.264 format and said second format comprises an MPEG-2 format.3. The apparatus according to claim 1 , wherein said first format comprises a VC-1 format and said second format comprises an MPEG-2 format.4. The apparatus according to claim 1 , wherein said first format comprises an MPEG-2 format and said second format comprises an H.264 format.5. The apparatus according to claim 1 , wherein said first format comprises an MPEG-2 format and said second format comprises a VC-1 format.6. The apparatus according to ...

Подробнее
13-09-2012 дата публикации

METHOD AND APPARATUS FOR MPEG-2 TO H.264 VIDEO TRANSCODING

Номер: US20120230415A1
Автор: Cote Guy, Winger Lowell L.
Принадлежит:

A method for transcoding from an MPEG-2 format to an H.264 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the MPEG-2 format to generate a plurality of macroblocks; (B) determining a plurality of indicators from a pair of the macroblocks, the pair of the macroblocks being vertically adjoining; and (C) coding the pair of the macroblocks into an output video stream in the H.264 format using one of (i) a field mode coding and (ii) a frame mode coding as determined from the indicators. 1. A method for transcoding from an MPEG-2 format to an H.264 format , comprising the steps of:(A) buffering an input video stream in said MPEG-2 format, wherein data in said input video stream represents a picture;(B) generating a first macroblock in a first row of said picture by decoding a first portion of said data;(C) generating a second macroblock in a second row of said picture by decoding a second portion of said data; and(D) coding said first macroblock and said second macroblock together into an output video stream in said H.264 format using an H.264 macroblock adaptive field/frame coding.2. The method according to claim 1 , further comprising the step of:detecting an MPEG-2 startcode of said first row before generating said first macroblock.3. The method according to claim 2 , further comprising the step of:marking said first macroblock as decoded by storing a first location of said first portion of said data in said input video stream.4. The method according to claim 3 , further comprising the step of:detecting said MPEG-2 startcode of said second row before completing said generating of said first row.5. The method according to claim 4 , further comprising the step of:marking said second macroblock as decoded by storing a second location of said second portion of said data in said input video stream.6. The method according to claim 5 , further comprising the steps of:generating a third macroblock sequentially following ...

Подробнее
03-01-2013 дата публикации

METHOD AND/OR ARCHITECTURE FOR MOTION ESTIMATION USING INTEGRATED INFORMATION FROM CAMERA ISP

Номер: US20130002907A1
Автор: Alvarez Jose R., Cote Guy
Принадлежит:

A camera comprising a first circuit and a second circuit. The first circuit may be configured to perform image signal processing using encoding related information. The second circuit may be configured to encode image data using image signal processing related information. The first circuit may be further configured to pass the image signal processing related information to the second circuit. The second circuit may be further configured to pass the encoding related information to the first circuit. The second circuit may be further configured to modify one or more motion estimation processes based upon the information from the first circuit. 1. A camera comprising:a first circuit configured to receive an input signal and perform image signal processing using encoding related information and at least one limitation of the camera; anda second circuit configured to encode image data using image signal processing related information, wherein (i) said first circuit is further configured to pass said image signal processing related information to said second circuit and (ii) said second circuit is further configured to pass said encoding related information to said first circuit and modify one or more motion estimation processes based upon the information from said first circuit.2. The camera according to claim 1 , wherein said first and said second circuits are integrated in a single integrated circuit.3. The camera according to claim 1 , wherein said second circuit modifies said one or more motion estimation processes to favor small motion vectors as long as distortion is below a predetermined threshold when a zoom process is active.4. The camera according to claim 1 , wherein said second circuit modifies said one or more motion estimation processes to favor co-located motion prediction when a zoom process is active.5. The camera according to claim 1 , wherein said second circuit modifies said one or more motion estimation processes to favor small blocks in areas ...

Подробнее
16-05-2013 дата публикации

METHOD AND APPARATUS FOR QP MODULATION BASED ON PERCEPTUAL MODELS FOR PICTURE ENCODING

Номер: US20130121403A1
Автор: Cote Guy
Принадлежит:

A method for encoding a picture is disclosed. The method generally includes the steps of (A) generating at least one respective macroblock statistic from each of a plurality of macroblocks in the picture, (B) generating at least one global statistic from the picture and (C) generating a respective macroblock quantization parameter for each of the macroblocks based on both (i) the at least one respective macroblock statistic and (ii) said at least one global statistic. 1. A method for encoding a picture , comprising the steps of:(A) generating at least one respective macroblock statistic from each of a plurality of macroblocks in said picture, the at least one respective macroblock statistic being selected from a group that consists of a luminance motion value, a luminance DC value, a luminance high-frequency value, a spatial edge strength value and a temporal edge strength value;(B) generating at least one global statistic from said picture by averaging the at least one respective macroblock statistic of the plurality of macroblocks over said picture; and(C) generating a respective macroblock quantization parameter for each of said macroblocks based on both (i) said at least one respective macroblock statistic and (ii) said at least one global statistic.2. The method according to claim 1 , wherein each of said respective macroblock quantization parameters is further based on a picture quantization parameter.3. The method according to claim 1 , wherein each of said respective macroblock quantization parameters comprises a respective delta quantization parameter generated only from said at least one respective macroblock statistic and said at least one global statistic.4. The method according to claim 1 , further comprising the step of:generating a respective modulation factor for each of said macroblocks based on both (i) said at least one respective macroblock statistic and (ii) said at least one global statistic, wherein in step (C) said respective macroblock ...
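
One way to picture the per-macroblock QP modulation is the sketch below, which derives a delta QP from the ratio of a macroblock statistic to its picture-wide average; the log2 mapping, the strength factor and the clamping range are assumptions, not the filing's mapping.

```python
import math

def macroblock_delta_qp(mb_stats, strength=2.0, max_delta=6):
    """Derive a delta QP per macroblock from one statistic (e.g. activity).

    mb_stats -- list of per-macroblock statistic values (all > 0)
    Busy macroblocks (statistic above the picture average) get a positive delta
    (coarser quantization); flat ones get a negative delta (finer quantization).
    """
    global_stat = sum(mb_stats) / len(mb_stats)      # global (picture-level) statistic
    deltas = []
    for stat in mb_stats:
        ratio = stat / global_stat
        delta = strength * math.log2(ratio)          # perceptual-style mapping
        deltas.append(int(max(-max_delta, min(max_delta, round(delta)))))
    return deltas

# qp_for_mb = picture_qp + macroblock_delta_qp(activities)[mb_index]
```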

Подробнее
31-10-2013 дата публикации

FLASH SYNCHRONIZATION USING IMAGE SENSOR INTERFACE TIMING SIGNAL

Номер: US20130286242A1
Принадлежит:

Certain aspects of this disclosure relate to an image signal processing system that includes a flash controller that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame. 1. An image signal processing system comprising:an image sensor interface configured to receive image data acquired from an image sensor as a plurality of image frames based upon a sensor timing signal provided by the image sensor and to provide the received image data;image signal processing logic configured to receive the image data from the image sensor interface and to process the image data acquired by the image sensor;a strobe device configured to, when activated, illuminate an image scene being captured by the image sensor; and using a first timing signal received by the image sensor interface that is delayed by a first interval with respect to the sensor timing signal to identify a first time corresponding to an end of a previous image frame immediately preceding the selected image frame;', 'adding a second interval between the selected image frame and the previous image frame to the first time to determine a second time;', 'subtracting the first interval from the second time to determine a third time;', 'subtracting a third interval from the third time to determine a fourth time; and', 'activating the strobe device at the fourth time ...
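
The activation-time arithmetic reads naturally as a small function; the variable names and the example numbers below are illustrative only.

```python
def flash_activation_time(prev_frame_end_delayed, vertical_blanking,
                          sensor_delay, activation_margin):
    """Compute when to fire the strobe so it is on before the target frame starts.

    prev_frame_end_delayed -- end-of-previous-frame time seen on the *delayed*
                              sensor timing signal
    vertical_blanking      -- blanking interval between the two frames
    sensor_delay           -- first offset: delay between the sensor timing signal
                              and its delayed copy
    activation_margin      -- second offset: strobe warm-up margin
    All values share one time unit (e.g. timer ticks).
    """
    target_frame_start = prev_frame_end_delayed + vertical_blanking - sensor_delay
    return target_frame_start - activation_margin

# e.g. flash_activation_time(120000, 2000, 150, 400) -> 121450
```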

Подробнее
05-12-2013 дата публикации

Systems and method for reducing fixed pattern noise in image data

Номер: US20130321671A1
Принадлежит: Apple Inc

The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may be configured to receive a frame of the image data having a plurality of pixels acquired using a digital image sensor. The image processing pipeline may then be configured to determine a first plurality of correction factors that may correct each pixel in the plurality of pixels for fixed pattern noise. The first plurality of correction factors may be determined based at least in part on fixed pattern noise statistics that correspond to the frame of the image data. After determining the first plurality of correction factors, the image processing pipeline may be configured to apply the first plurality of correction factors to the plurality of pixels, thereby reducing the fixed pattern noise present in the plurality of pixels.

Подробнее
05-12-2013 дата публикации

SYSTEMS AND METHODS FOR COLLECTING FIXED PATTERN NOISE STATISTICS OF IMAGE DATA

Номер: US20130321672A1
Принадлежит: Apple Inc.

The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may collect statistics associated with fixed pattern noise of image data by receiving a first frame of the image data comprising a plurality of pixels. The image processing pipeline may then determine a sum of a first plurality of pixel values that correspond to at least a first portion of the plurality of pixels such that each pixel in at least the first portion of the plurality of pixels is disposed along a first axis within the frame of the image data. After determining the sum of the first plurality of pixel values, the image processing pipeline may store the sum of the first plurality of pixel values in a memory such that the sum of the first plurality of pixel values represent the statistics. 1. An image signal processing system comprising: receiving a first frame of the image data comprising a plurality of pixels acquired using a digital image sensor;', 'determining a sum of a first plurality of pixel values that correspond to at least a first portion of the plurality of pixels, wherein each pixel in at least the first portion of the plurality of pixels is disposed along a first axis within the frame of the image data; and', 'storing the sum of the first plurality of pixel values in a memory, wherein the sum of the first plurality of pixel values represent the statistics., 'an image processing pipeline configured to collect statistics associated with fixed pattern noise of image data by2. The image signal processing system of claim 1 , wherein each pixel in at least the first portion of the plurality of pixels comprise the same color component.3. The image signal processing system of claim 1 , wherein the first axis corresponds to a horizontal axis claim 1 , a vertical axis claim 1 , or a diagonal axis within the frame of the image data.4. The image signal processing system of claim 3 , wherein each pixel in at least ...

Подробнее
05-12-2013 дата публикации

Systems and Methods for Determining Noise Statistics of Image Data

Номер: US20130321673A1
Принадлежит: Apple Inc.

The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may compute noise statistics associated with image data by receiving a frame of the image data having a plurality of pixels. The image processing pipeline may then identify a plurality of portions of the frame of the image data such that each portion of the plurality of portions has a flat surface. The image processing pipeline may then calculate a plurality of gradients for each portion of the plurality of portions, determine one or more dominant gradient orientations for each portion of the plurality of portions, and generate a histogram that represents a plurality of dominant gradient orientations that corresponds to the plurality of portions. After generating the histogram, the image processing pipeline may store the histogram, which may represent the noise statistics, in a memory. 1. An image signal processing system comprising: receiving a frame of the image data comprising a plurality of pixels acquired using a digital image sensor;', 'identifying a plurality of portions of the frame of the image data, wherein each portion of the plurality of portions comprises a flat surface;', 'calculating a plurality of gradients for each portion of the plurality of portions;', 'determining one or more dominant gradient orientations for each portion of the plurality of portions;', 'generating a histogram that represents a plurality of dominant gradient orientations that corresponds to the plurality of portions; and', 'storing the histogram in a memory, wherein the histogram represents the noise statistics., 'an image processing pipeline configured to compute noise statistics associated with image data by2. The image signal processing system of claim 1 , wherein the flat surface corresponds to an area of the frame of the image data that has an isotropic distribution of gradient orientations.3. The image signal processing system of ...

Подробнее
05-12-2013 дата публикации

Image Signal Processing Involving Geometric Distortion Correction

Номер: US20130321674A1
Принадлежит: Apple Inc.

Systems and methods for correcting geometric distortion are provided. In one example, an electronic device may include an imaging device, which may obtain image data of a first resolution, and geometric distortion and scaling logic. The imaging device may include a sensor and a lens that causes some geometric distortion in the image data. The geometric distortion correction and scaling logic may scale and correct for geometric distortion in the image data by determining first pixel coordinates in uncorrected or partially corrected image data that, when resampled, would produce corrected output image data at second pixel coordinates. The geometric distortion correction and scaling logic may resample pixels around the image data at the first pixel coordinates to obtain the corrected output image data at the second pixel coordinates. The corrected output image data may be of a second resolution. 1. An image signal processing system comprising:chromatic aberration correction logic configured to at least partially correct for chromatic aberration in image data while the image data is in a Bayer raw format; andgeometric distortion correction logic configured to at least partially correct for geometric distortion in the image data while the image data is in a YCC format.2. The image signal processing system of claim 1 , wherein the chromatic aberration correction logic comprises a component of a raw processing pipeline configured to perform a plurality of operations on Bayer raw image data.3. The image signal processing system of claim 1 , wherein the geometric distortion correction logic comprises a component of a YCC processing pipeline configured to perform a plurality of operations on YCC image data.4. The image signal processing system of claim 1 , wherein the chromatic aberration correction logic and the geometric distortion logic are separated by rgb processing logic configured to perform a plurality of operations on the image data while the image data is in an RGB ...

Подробнее
05-12-2013 дата публикации

RAW SCALER WITH CHROMATIC ABERRATION CORRECTION

Номер: US20130321675A1
Принадлежит: Apple Inc.

Systems and methods for down-scaling are provided. In one example, a method for processing image data includes determining a plurality of output pixel locations using a position value stored by a position register, using the current position value to select a center input pixel from the image data and selecting an index value, selecting a set of input pixels adjacent to the center input pixel, selecting a set of filtering coefficients from a filter coefficient lookup table using the index value, filtering the set of source input pixels to apply a respective one of the set of filtering coefficients to each of the set of source input pixels to determine an output value for the current output pixel at the current position value, and correcting chromatic aberrations in the set of source input pixels. 1. A method for processing raw image data comprising:using raw scaler logic of an image signal processor:using a position value stored by a position register to determine a plurality of output pixel locations, wherein each of the plurality of output pixel locations is determined by incrementing the position value using a step value; using the current position value to select a center input pixel from the raw image data and to select an index value, wherein the index value represents a fractional position of a current output pixel at the current position value between two input pixels of the same color, and wherein the raw image data comprises multiple color components in a single plane and corresponds to an image scene captured by a digital image sensor;', 'selecting a set of input pixels adjacent to the center input pixel, wherein the adjacent input pixels and the center input pixel form a set of source input pixels;', 'selecting a set of filtering coefficients from a filter coefficient lookup table using the index value;', 'filtering the set of source input pixels using scaling logic to apply a respective one of the set of filtering coefficients to each of the set of ...

Подробнее
05-12-2013 дата публикации

Green Non-Uniformity Correction

Номер: US20130321676A1
Принадлежит: Apple Inc.

Systems and methods for correcting green channel non-uniformity (GNU) are provided. In one example, GNU may be corrected using energies between the two green channels (Gb and Gr) during green interpolation processes for red and green pixels. Accordingly, the processes may be efficiently employed through implementation using demosaic logic hardware. In addition, the green values may be corrected based on low-pass-filtered values of the green pixels (Gb and Gr). Additionally, green post-processing may provide some defective pixel correction on interpolated greens by correcting artifacts generated through enhancement algorithms. 1. A method for processing image data , comprising:receiving a raw image pattern from an image sensor, the raw image pattern comprising a plurality of first, second, and third color pixels arranged in accordance with a color filter array; obtaining a first low-pass filter (LPF) value for one of the two first color pixels in the pattern;', 'obtaining a second low-pass filter (LPF) value for the other of the two first color pixels in the pattern; and', 'halving the difference between the first and second LPF values to obtain the color-non-uniformity corrected value; and, 'interpolating a color-non-uniformity corrected value for each first color pixel in the image pattern to obtain a full set of corrected values for the image data, wherein interpolating the color-non-uniformity corrected value for each first color pixel comprisesapplying the full set of corrected values to the first color pixels in the image data.2. The method of claim 1 , wherein the color filter array comprises a Bayer color filter array defining a Bayer color pattern; and wherein the first color comprises green claim 1 , the second color comprises red claim 1 , and the third color comprises blue.3. The method of claim 2 , wherein interpolating the color-non-uniformity corrected value for each first color pixel comprises:receiving a current non-green pixel;determining a ...
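
A rough sketch of the Gb/Gr balancing idea, assuming an even-sized GRBG mosaic, a box low-pass filter and a symmetric in-place adjustment; the filing's interpolation-time procedure is more involved than this.

```python
import numpy as np

def green_nonuniformity_correction(bayer, kernel_size=3):
    """Reduce the Gr/Gb mismatch in a GRBG Bayer mosaic (illustrative).

    For every green sample, half of the difference between the local low-pass
    values of the two green channels is moved between the channels, pulling Gr
    and Gb toward a common local mean. Assumes even image dimensions.
    """
    out = bayer.astype(np.float64).copy()
    gr = out[0::2, 0::2]                      # Gr sites in a GRBG mosaic
    gb = out[1::2, 1::2]                      # Gb sites

    def box_lpf(plane):
        pad = kernel_size // 2
        padded = np.pad(plane, pad, mode="edge")
        acc = np.zeros_like(plane)
        for dy in range(kernel_size):
            for dx in range(kernel_size):
                acc += padded[dy:dy + plane.shape[0], dx:dx + plane.shape[1]]
        return acc / (kernel_size * kernel_size)

    half_diff = 0.5 * (box_lpf(gr) - box_lpf(gb))   # halved LPF difference
    out[0::2, 0::2] = gr - half_diff                # pull Gr down ...
    out[1::2, 1::2] = gb + half_diff                # ... and Gb up by the same amount
    return out
```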

Подробнее
05-12-2013 дата публикации

SYSTEMS AND METHODS FOR RAW IMAGE PROCESSING

Номер: US20130321677A1
Принадлежит: Apple Inc.

Systems and methods for processing raw image data are provided. One example of such a system may include memory to store image data in raw format from a digital imaging device and an image signal processor to process the image data. The image signal processor may include data conversion logic and a raw image processing pipeline. The data conversion logic may convert the image data into a signed format to preserve negative noise from the digital imaging device. The raw image processing pipeline may at least partly process the image data in the signed format. The raw image processing pipeline may also include, among other things, black level compensation logic, fixed pattern noise reduction logic, temporal filtering logic, defective pixel correction logic, spatial noise filtering logic, lens shading correction logic, and highlight recovery logic. 1. An image signal processing system comprising:data conversion logic configured to convert unsigned input image data deriving from a digital sensor into signed input image data to preserve negative noise from the sensor; anda raw image processing pipeline configured to process the signed input image data into processed signed image data.2. The image signal processing system of claim 1 , wherein the data conversion logic is configured to scale the signed input image data by a programmable scale value.3. The image signal processing system of claim 1 , wherein the data conversion logic is configured to right-bit-shift the signed input image data to scale the signed input image data by a programmable or non-programmable scale value.4. The image signal processing system of claim 1 , wherein the data conversion logic is configured to offset the signed input image data by subtracting an offset value to set a zero-bias in the signed input image data.5. The image signal processing system of claim 1 , wherein the data conversion logic is configured to convert the processed signed image data into unsigned output image data when the ...

Подробнее
05-12-2013 дата публикации

SYSTEMS AND METHODS FOR LENS SHADING CORRECTION

Номер: US20130321678A1
Принадлежит: Apple Inc.

Systems and methods for correcting intensity drop-offs due to geometric properties of lenses are provided. In one example, a method includes receiving an input pixel of the image data, the image data acquired using an image sensor. A color component of the input pixel is determined. A gain grid is determined by pointing to the gain grid in external memory. Each of the plurality of grid points is associated with a lens shading gain selected based upon the color of the input pixel. A nearest set of grid points that enclose the input pixel is identified. Further, a lens shading gain is determined by interpolating the lens shading gains associated with each of the set of grid points and is applied to the input pixel. 1. A method for applying lens shading correction to image data , comprising:receiving an input pixel of the image data, wherein the image data is acquired using an image sensor;determining a color component of the input pixel;accessing a gain grid by pointing to the gain grid in external memory, wherein the gain grid comprises a plurality of grid points distributed in horizontal and vertical directions for an entire sensor, wherein each of the plurality of grid points is associated with a lens shading gain selected based upon the color of the input pixel;determining the location of the input pixel relative to the gain grid;if the location of the input pixel corresponds directly to one of the plurality of grid points, applying the lens shading gain associated with the directly corresponding grid point to the input pixel; andif the location of the input pixel does not correspond directly to one of the plurality of grid points, identifying a nearest set of grid points that enclose the input pixel, determining a lens shading gain by interpolating the lens shading gains associated with each of the set of grid points, and applying the interpolated lens shading gain to the input pixel.2. The method of claim 1 , wherein a lens shading correction region is defined ...
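
The grid-point interpolation step can be sketched as a plain bilinear lookup; the grid layout and spacing parameters below are assumptions made for the example, and the grid is assumed to cover the active frame.

```python
def lens_shading_gain(gain_grid, grid_spacing, x, y):
    """Bilinearly interpolate a lens-shading gain for pixel (x, y).

    gain_grid    -- 2-D list of gains for one color channel, gain_grid[gy][gx]
    grid_spacing -- (sx, sy) distance in pixels between neighboring grid points
    Returns the interpolated gain to multiply into the input pixel.
    """
    sx, sy = grid_spacing
    gx, gy = x / sx, y / sy                          # position in grid coordinates
    x0 = min(int(gx), len(gain_grid[0]) - 1)         # nearest enclosing grid points
    y0 = min(int(gy), len(gain_grid) - 1)
    x1 = min(x0 + 1, len(gain_grid[0]) - 1)
    y1 = min(y0 + 1, len(gain_grid) - 1)
    fx, fy = gx - x0, gy - y0                        # fractional offsets

    top = gain_grid[y0][x0] * (1 - fx) + gain_grid[y0][x1] * fx
    bot = gain_grid[y1][x0] * (1 - fx) + gain_grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# corrected = pixel_value * lens_shading_gain(green_gain_grid, (64, 64), x, y)
```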

Подробнее
05-12-2013 дата публикации

Systems and Methods for Luma Sharpening

Номер: US20130321700A1
Принадлежит: Apple Inc.

Systems, methods, and devices for sharpening image data are provided. One example of an image signal processing system includes a YCC processing pipeline that includes luma sharpening logic. The luma sharpening logic may sharpen the luma component while avoiding sharpening some noise. Specifically, a multi-scale unsharp mask filter may obtain unsharp signals by filtering an input luma component, and sharp component determination logic may determine sharp signals representing differences between the unsharp signals and the luma component. Sharp lookup tables may “core” the sharp signals, which may prevent some noise from being sharpened. Output logic may determine a sharpened output luma signal by combining the sharp signals with, for example, luma component or one of the unsharp signals. 1. An image signal processing system comprising: a multi-scale unsharp mask filter configured to obtain a plurality of unsharp signals by filtering the luma component;', 'sharp component determination logic configured to determine a plurality of sharp signals representing differences between the one or more unsharp signals and the luma component;', 'a plurality of sharp lookup tables configured to vary the sharp signals so as to suppress the sharp signals when the sharp signals are beneath a coring threshold to reduce sharpening of noise; and', 'output logic configured to determine an output luma signal by combining one or more of the sharp signals with the luma component or one of the unsharp signals, or a combination thereof., 'a YCC processing pipeline configured to process image data in a YCC format, wherein the image data comprises a luma component and two chroma components, wherein the YCC processing pipeline comprises luma sharpening logic configured to sharpen the luma component, wherein the luma sharpening logic comprises2. The image signal processing system of claim 1 , wherein the multi-scale unsharp mask filter comprises a plurality of Gaussian filters.3. The image ...
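
A compact sketch of multi-scale unsharp masking with a hard coring threshold; box blurs stand in for the Gaussian filters and a single threshold stands in for the sharp lookup tables, so this is an approximation of the scheme described above.

```python
import numpy as np

def sharpen_luma(luma, radii=(1, 2), gains=(0.6, 0.3), core=4.0):
    """Multi-scale unsharp masking with a simple coring threshold (illustrative).

    luma  -- 2-D array of luma values
    radii -- blur radii for the unsharp scales
    core  -- sharp-signal magnitudes below this are suppressed so that low-level
             noise is not amplified
    """
    luma = np.asarray(luma, dtype=np.float64)

    def box_blur(im, r):
        k = 2 * r + 1
        padded = np.pad(im, r, mode="edge")
        acc = np.zeros_like(im)
        for dy in range(k):
            for dx in range(k):
                acc += padded[dy:dy + im.shape[0], dx:dx + im.shape[1]]
        return acc / (k * k)

    out = luma.copy()
    for r, g in zip(radii, gains):
        unsharp = box_blur(luma, r)                          # low-pass (unsharp) signal
        sharp = luma - unsharp                               # detail (sharp) signal
        sharp = np.where(np.abs(sharp) < core, 0.0, sharp)   # coring
        out += g * sharp                                     # add weighted detail back
    return out
```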

Подробнее
05-12-2013 дата публикации

Local Image Statistics Collection

Номер: US20130322745A1
Принадлежит: Apple Inc.

Systems and methods for generating local image statistics are provided. In one example, an image signal processing system may include a statistics pipeline with image processing logic and local image statistics collection logic. The image processing logic may receive and process pixels of raw image data. The local image statistics collection logic may generate a local histogram associated with a luminance of the pixels of a first block of pixels of the raw image data or a thumbnail in which a pixel of the thumbnail represents a downscaled version of the luminance of the pixels of the first block of the pixel. The raw image data may include many other blocks of pixels of the same size as the first block of pixels. 1. A method for generating local image statistics of image data comprising:receiving input pixels of a block of the image data;determining luminance values associated with the input pixels of the block of the image data; andgenerating a local image statistic using the luminance values associated with the block of the image data or red, green, or blue subpixels of the block of the image data, or a combination thereof.2. The method of claim 1 , wherein the image statistic comprises a local histogram of the block of the image data.3. The method of claim 1 , wherein the image statistic comprises a thumbnail in which the block of pixels corresponds to a single pixel of the thumbnail.4. The method of claim 1 , wherein the luminance values associated with the input pixels of the block of the image data comprise an average or weighted average luminance of all color channels of at least one of the input pixels claim 1 , a maximal or weighted maximal luminance of at least one of the input pixels claim 1 , a blend of the average or weighted average luminance or the maximal or weighted maximal luminance claim 1 , or a logarithmic value of the average or weighted average luminance claim 1 , the maximal or weighted maximal luminance claim 1 , or the blend of the average ...

Подробнее
05-12-2013 дата публикации

SYSTEMS AND METHODS FOR YCC IMAGE PROCESSING

Номер: US20130322746A1
Принадлежит: Apple Inc.

Systems and methods for processing YCC image data are provided. In one example, an electronic device includes memory to store image data in RGB or YCC format and a YCC image processing pipeline to process the image data. The YCC image processing pipeline may include receiving logic configured to receive the image data in RGB or YCC format and color space conversion logic configured to, when the image data is received in RGB format, convert the image data into YCC format. The YCC image processing logic may also include luma sharpening and chroma suppression logic; brightness, contrast, and color adjustment logic; gamma logic; chroma decimation logic; scaling logic; and chromanoise reduction logic. 1. An electronic device comprising:memory configured to store image data in RGB or YCC format, or both, wherein the YCC format comprises one luminance component and two chrominance components; receiving logic configured to receive the image data in RGB or YCC format;', 'color space conversion logic configured to, when the image data is received in RGB format, convert the image data into YCC format;', 'luma sharpening and chroma suppression logic to generally sharpen the luminance component of the image data and generally suppress noise in the chrominance components of the image data;', 'brightness, contrast, and color adjustment logic to adjust brightness, contrast, or color, or a combination thereof, of the image data;', 'gamma logic to adjust a gamma of the image data;', 'chroma decimation logic to decimate the chrominance components of the image data;', 'scaling logic configured to scale the image data; and', 'chromanoise reduction logic configured to reduce noise in the chrominance components of the image data., 'a YCC image processing pipeline configured to process the image data, wherein the YCC image processing pipeline comprises2. The electronic device of claim 1 , wherein the logic of the YCC image processing pipeline is configured to process the image data in the recited ...

Подробнее
05-12-2013 дата публикации

SYSTEMS AND METHODS FOR LOCAL TONE MAPPING

Номер: US20130322753A1
Принадлежит: Apple Inc.

Systems and methods for local tone mapping are provided. In one example, an electronic device includes an electronic display, an imaging device, and an image signal processor. The electronic display may display images of a first bit depth, and the imaging device may include an image sensor that obtains image data of a higher bit depth than the first bit depth. The image signal processor may process the image data, and may include local tone mapping logic that may apply a spatially varying local tone curve to a pixel of the image data to preserve local contrast when displayed on the display. The local tone mapping logic may smooth the local tone curve applied to the pixel when the intensity difference between the pixel and another nearby pixel exceeds a threshold. 1. An image signal processing system comprising:local tone mapping logic to apply spatially varying tone curves to image data; andcolor correction logic to apply spatially varying color correction matrices to the image data.2. The image signal processing system of claim 1 , wherein the local tone mapping logic comprises a spatially varying grid of lookup tables of tone curves indexed by luminance associated with each pixel of the image data.3. The image signal processing system of claim 2 , wherein each lookup table of the spatially varying grid of lookup tables of tone curves is indexed by:an average luminance of all color components of a pixel of the image data;a maximal luminance of the color components of the pixel;a blend of the average luminance and the maximal luminance; ora logarithmic value of the average luminance, the maximal luminance, or the blend of the average luminance and the maximal luminance; ora combination thereof.4. The image signal processing system of claim 2 , wherein the local tone mapping logic comprises luminance computation logic configured to determine the luminance associated with each pixel of the image data.5. The image signal processing system of claim 1 , wherein the ...
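
A simplified picture of the spatially varying tone-curve application, assuming a per-block grid of curve callables and a nearest-cell lookup; the smoothing between neighboring curves described above is omitted here.

```python
import numpy as np

def local_tone_map(luma, curve_grid, block_size):
    """Apply a spatially varying tone curve per pixel (illustrative).

    luma       -- 2-D array of luminance values in [0, 1]
    curve_grid -- curve_grid[by][bx] is a callable tone curve for the block at
                  grid position (bx, by); in hardware this would be a grid of
                  lookup tables indexed by luminance
    block_size -- edge length in pixels of each grid cell
    """
    luma = np.asarray(luma, dtype=np.float64)
    out = np.empty_like(luma)
    h, w = luma.shape
    for y in range(h):
        for x in range(w):
            by = min(y // block_size, len(curve_grid) - 1)
            bx = min(x // block_size, len(curve_grid[0]) - 1)
            out[y, x] = curve_grid[by][bx](luma[y, x])   # per-block local tone curve
    return out

# Example: a grid of gamma-like curves that brighten dark blocks more strongly:
# curve_grid[by][bx] = lambda v, g=local_gamma[by][bx]: v ** g
```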

Подробнее
09-01-2014 дата публикации

SYSTEMS AND METHODS FOR STATISTICS COLLECTION USING CLIPPED PIXEL TRACKING

Номер: US20140010480A1
Принадлежит: Apple Inc.

Systems and methods are provided for selectively performing image statistics processing based at least partly on whether a pixel has been clipped. In one example, an image signal processor may include statistics collection logic. The statistics collection logic may include statistics image processing logic and a statistics core. The statistics image processing logic may perform initial image processing on image pixels, at least occasionally causing some of the image pixels to become clipped. The statistics core may obtain image statistics from the image pixels. The statistics core may obtain at least one of the image statistics using only pixels that have not been clipped and excluding pixels that have been clipped. 1. An image signal processor comprising: statistics image processing logic configured to perform initial image processing on image pixels, wherein the statistics image processing logic at least occasionally causes some of the image pixels to become clipped; and', 'a statistics core comprising a plurality of components configured to obtain image statistics from the image pixels, wherein at least one component of the statistics core is configured to obtain at least one of the image statistics using only pixels that have not been clipped, thereby excluding pixels that have been clipped., 'statistics collection logic comprising2. The image signal processor of claim 1 , wherein each pixel comprises a clipped pixel flag indicating whether that pixel was clipped by the statistics image processing logic.3. The image signal processor of claim 2 , wherein the clipped pixel flag of each pixel comprises a bit that claim 2 , when set to a particular value claim 2 , indicates that the pixel was clipped.4. The image signal processor of claim 2 , wherein each pixel comprises pixel image data adjacent to the clipped pixel flag.5. The image signal processor of claim 1 , wherein each pixel comprises a clipped pixel flag indicating whether that pixel was clipped by a ...

Подробнее
07-01-2016 дата публикации

LATE-STAGE MODE CONVERSIONS IN PIPELINED VIDEO ENCODERS

Номер: US20160007038A1
Автор: Chou Jim C., Cote Guy
Принадлежит: Apple Inc.

The video encoders described herein may determine an initial designation of a mode in which to encode a block of pixels in an early stage of a block processing pipeline. A component of a late stage of the block processing pipeline (one that precedes the transcoder) may determine a different mode designation for the block of pixels based on coded block pattern information, motion vector information, the position of the block in a row of such blocks, the order in which such blocks are processed in the pipeline, or other encoding related syntax elements. The component in the late stage may communicate information to the transcoder usable in coding the block of pixels, such as modified syntax elements or an end of row marker. The transcoder may encode the block of pixels in accordance with the different mode designation or may change the mode again, dependent on the communicated information. 1. An apparatus , comprising:a block processing pipeline that implements a transcode stage and two or more stages that precede the transcode stage, each stage comprising at least one component, each component being configured to perform one or more operations on blocks of pixels from video frames that pass through the pipeline;wherein the at least one component of a given one of the two or more stages that precede the transcode stage is configured to determine an initial mode designation to be applied when encoding a given block of pixels; determine that a different mode designation should be applied when encoding the given block of pixels; and', 'communicate information to the transcode stage that is usable in generating an encoded bit stream for the given block of pixels in accordance with the different mode designation;, 'wherein, subsequent to the determination of the initial mode designation, the at least one component of an other one of the two or more stages that precede the transcode stage is configured towherein the other one of the two or more stages that precede the ...

14-01-2021 publication date

Electronic Device Display With Charge Accumulation Tracker

Number: US20210012733A1
Assignee:

An electronic device may generate content that is to be displayed on a display. The display may have an array of liquid crystal display pixels for displaying image frames of the content. The image frames may be displayed with positive and negative polarities to help reduce charge accumulation effects. A charge accumulation tracker may analyze the image frames to determine when there is a risk of excess charge accumulation. The charge accumulation tracker may analyze information on gray levels, frame duration, and frame polarity. The charge accumulation tracker may compute a charge accumulation metric for entire image frames or may process subregions of each frame separately. When subregions are processed separately, each subregion may be individually monitored for a risk of excess charge accumulation. 1. An electronic device comprising:a display;control circuitry that generates image frames that are displayed on the display; anda charge accumulation tracker that receives inputs for each of the image frames, wherein the charge accumulation tracker computes a charge accumulation metric for each of the image frames using information from the respective image frame and a plurality of image frames that have been previously displayed by the display, and wherein the control circuitry takes remedial action when the charge accumulation metric exceeds a predetermined threshold.2. The electronic device defined in wherein the remedial action comprises adjusting frame polarity of the image frames.3. The electronic device defined in wherein the image frames are displayed with a variable refresh rate and wherein the remedial action comprises adjusting the variable refresh rate.4. The electronic device defined in wherein the received inputs comprise gray level values and wherein the charge accumulation tracker computes the charge accumulation metric for each of the image frames based on the gray level values.5. The electronic device display defined in wherein the gray level values ...
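The charge accumulation metric lends itself to a simple running computation. The sketch below is only an illustration of the idea under stated assumptions (a signed sum weighted by average gray level and frame duration, with the sign set by frame polarity); the function names, the 0.5 threshold, and the linear weighting are not taken from the patent.

# Minimal sketch of a per-frame charge accumulation metric, assuming a signed
# running sum: each frame adds (average gray level / max gray) * duration with a
# sign set by the drive polarity. Names and constants are illustrative.

def update_charge_metric(metric, gray_levels, frame_duration_s, polarity_positive, max_gray=255):
    """Accumulate a signed charge estimate for one displayed frame (or subregion)."""
    avg_gray = sum(gray_levels) / len(gray_levels)   # mean gray level of the frame
    sign = 1.0 if polarity_positive else -1.0        # alternating polarity cancels accumulation
    return metric + sign * (avg_gray / max_gray) * frame_duration_s

def needs_remedial_action(metric, threshold=0.5):
    """Flag a risk of excess charge accumulation (e.g. force a polarity flip)."""
    return abs(metric) > threshold

# Example: a static bright frame held with positive polarity accumulates charge.
metric = 0.0
for _ in range(60):
    metric = update_charge_metric(metric, [200, 210, 190], 1.0 / 60.0, polarity_positive=True)
print(needs_remedial_action(metric))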

14-01-2016 publication date

ENCODING BLOCKS IN VIDEO FRAMES CONTAINING TEXT USING HISTOGRAMS OF GRADIENTS

Number: US20160014421A1
Author: Cote Guy, Shi Xiaojin
Assignee: Apple Inc.

A block input component of a video encoding pipeline may, for a block of pixels in a video frame, compute gradients in multiple directions, and may accumulate counts of the computed gradients in one or more histograms. The block input component may analyze the histogram(s) to compute block-level statistics and determine whether a dominant gradient direction exists in the block, indicating the likelihood that it represents an image containing text. If text is likely, various encoding parameter values may be selected to improve the quality of encoding for the block (e.g., by lowering a quantization parameter value). The computed statistics or selected encoding parameter values may be passed to other stages of the pipeline, and used to bias or control selection of a prediction mode, an encoding mode, or a motion vector. Frame-level or slice-level parameter values may be generated from gradient histograms of multiple blocks. 1. An apparatus , comprising:a block processing pipeline configured to process blocks of pixels from video frames;wherein the block processing pipeline comprises a block input component; receive input data representing the block of pixels;', 'compute gradient values for the block of pixels in two or more directions;', 'compute one or more histograms representing statistics derived from the gradient values for the block of pixels;', 'determine a likelihood that the block of pixels represents a portion of the video frame that contains text, wherein to determine the likelihood that the block of pixels represents a portion of the video frame that contains text, the block input component is configured to determine a presence or absence of a dominant gradient direction in the block of pixels, dependent on the one or more computed histograms; and', 'determine one or more parameter values for encoding the block of pixels, dependent on the likelihood that the block of pixels represents a portion of the video frame that contains text., 'wherein, for each of a ...
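As a rough illustration of the gradient-histogram test, the following sketch counts strong horizontal-difference versus vertical-difference gradients for a block and flags the block as text-like when one direction dominates. The two-bin histogram, the 0.65 dominance ratio, the QP delta of -4, and all names are assumptions made for the example, not the encoder's actual statistics.

def gradient_histograms(block):
    """block: 2-D list of luma samples. Count strong x-direction vs y-direction gradients."""
    hist = {"gx": 0, "gy": 0}
    for y in range(len(block) - 1):
        for x in range(len(block[0]) - 1):
            gx = abs(block[y][x + 1] - block[y][x])   # horizontal difference (vertical edge)
            gy = abs(block[y + 1][x] - block[y][x])   # vertical difference (horizontal edge)
            if gx or gy:
                hist["gx" if gx >= gy else "gy"] += 1
    return hist

def likely_text(hist, dominance=0.65):
    total = hist["gx"] + hist["gy"]
    return total > 0 and max(hist.values()) / total >= dominance

block = [[0, 255, 0, 255]] * 4                 # thin vertical strokes, as in rendered glyphs
hist = gradient_histograms(block)
qp_delta = -4 if likely_text(hist) else 0      # e.g. lower the quantization parameter for text blocks
print(hist, qp_delta)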

16-01-2020 publication date

GLOBAL MOTION VECTOR VIDEO ENCODING SYSTEMS AND METHODS

Number: US20200021841A1
Assignee:

Systems and methods for improving operational efficiency of a video encoding system used to encode image data are provided. In embodiments, the video encoding system includes a low resolution pipeline that includes a low resolution motion estimation block, which generates downscaled image data by reducing resolution of the image data and performs a motion estimation search using the downscaled image data and previously downscaled image data. The video encoding system also includes a main encoding pipeline in parallel with the low resolution pipeline that includes a motion estimation block, which determines a global motion vector based on data from the low resolution motion estimation block. The main encoding pipeline may utilize the global motion vector in determining a candidate inter prediction mode. 1. A video encoding system configured to encode source image data corresponding with an image , comprising: generate a first downscaled pixel block by downscaling resolution of the first source image data;', 'perform a low resolution motion estimation search based on the first downscaled pixel block to determine a first downscaled reference sample and a first low resolution inter prediction mode indicative of location of a first full resolution reference sample corresponding with the first downscaled reference sample; and', 'determine global motion vector statistics based at least in part on the first low resolution inter prediction mode, wherein the global motion vector statistics enable determination of a global motion vector indicative of motion trend in the image; and, 'a low resolution pipeline configured to receive first source image data corresponding with a first pixel block in the image, wherein the low resolution pipeline comprises a low resolution motion estimation block programmed to determine a search window to be stored in internal memory of the main encoding pipeline based at least in part on the global motion vector;', 'determine a first full ...

28-01-2021 publication date

Global motion vector video encoding systems and methods

Number: US20210029376A1
Assignee: Apple Inc

Systems and methods for improving operational efficiency of a video encoding system used to encode image data are provided. In embodiments, the video encoding system includes a low resolution pipeline that includes a low resolution motion estimation block, which generates downscaled image data by reducing resolution of the image data and performs a motion estimation search using the downscaled image data and previously downscaled image data. The video encoding system also includes a main encoding pipeline in parallel with the low resolution pipeline that includes a motion estimation block, which determines a global motion vector based on data from the low resolution motion estimation block. The main encoding pipeline may utilize the global motion vector in determining a candidate inter prediction mode.
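A minimal sketch of how a global motion vector might be derived from the low resolution statistics, assuming the statistics are simply a histogram of block-level low-resolution motion vectors and the global vector is their mode scaled back to full resolution; the 4x downscale factor and the helper names are illustrative assumptions.

from collections import Counter

def global_motion_vector(low_res_mvs, downscale=4):
    """low_res_mvs: list of (mvx, mvy) from the low resolution motion estimation search."""
    if not low_res_mvs:
        return (0, 0)
    (mvx, mvy), _count = Counter(low_res_mvs).most_common(1)[0]
    return (mvx * downscale, mvy * downscale)   # scale to full-resolution pixel units

def search_window_center(block_pos, gmv):
    """Center the full-resolution search window on the block offset by the global motion."""
    return (block_pos[0] + gmv[0], block_pos[1] + gmv[1])

mvs = [(1, 0), (1, 0), (2, 0), (1, 0), (0, 1)]   # a panning scene trends toward (1, 0)
gmv = global_motion_vector(mvs)
print(gmv, search_window_center((128, 64), gmv))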

25-02-2021 publication date

LADDER RACK FOR A VEHICLE

Number: US20210053502A1
Author: Cote Guy
Assignee:

A ladder rack for supporting a ladder on a vehicle. The ladder rack comprises a supporting arrangement anchored only along a single edge area of a rooftop, the supporting arrangement comprising a first load-bearing member entirely extending over the rooftop within an edge of the rooftop and a second load-bearing member extending substantially parallel to the first load-bearing member with a lateral offset, outside the rooftop and away from the edge of the rooftop. A ladder-supporting arrangement holds the ladder and connects to the first load-bearing member and to the first load-bearing member for supporting a weight of the ladder. The remainder of the rooftop surface is exempt of any member and is free for other purposes. The ladder is held on a side of the vehicle with an offset to be able to open a sliding door on the same side of the vehicle as the ladder rack. 1. A ladder rack for supporting a ladder on a vehicle , the ladder rack comprising:a supporting arrangement anchored on a rooftop of the vehicle having two edges, the supporting arrangement anchored only along a single edge thereof, and comprising bars spanning across the single edge of the rooftop, on either side of the single edge and being both parallel to the single edge of the rooftop of the vehicle, a weight of the ladder being distributing by the bars.2. The ladder rack of claim 1 , wherein the bars distributing the weight of the ladder comprise a first bar and a second bar claim 1 , wherein the supporting arrangement is secured to anchors at predetermined locations on the rooftop of the vehicle claim 1 , the ladder rack further comprising an offset supporting member which supports the second bar with a lateral offset relative to the first bar claim 1 , the offset supporting member being also secured to the anchors.3. The ladder rack of claim 2 , wherein the anchors at predetermined locations on the rooftop of the vehicle are located on a same edge area by the single edge of the rooftop claim 2 , ...

15-05-2014 publication date

Systems And Methods For Statistics Collection Using Pixel Mask

Number: US20140133749A1
Author: Cote Guy, Kuo David Daming
Assignee: Apple Inc.

Systems and methods are provided for collecting image statistics using a pixel mask. In one example, statistics collection logic of an image signal processor may include a pixel weighting mask and accumulation logic. The pixel weighting mask may receive a first representation of a pixel that includes a luma and chroma representation of the pixel. The pixel weighting mask may output a pixel weighting using first and second chroma components of the luma and chroma representation of the pixel. The accumulation logic may receive the first or a second representation of the pixel and the pixel weighting value. Using these, the accumulation logic may weight the second representation of the pixel or the first representation of the pixel using the pixel weighting value to obtain a weighted pixel value, adding the weighted pixel value to a statistics count. 1. An image signal processor comprising: [ receive a first representation of a pixel, wherein the first representation of the pixel comprises a luma and chroma representation of the pixel; and', 'output a pixel weighting value based at least partly on first and second chroma components of the luma and chroma representation of the pixel; and, 'a pixel weighting mask configured to, receive a second representation of the pixel or the first representation of the pixel;', 'receive the pixel weighting value;', 'weight the second representation of the pixel or the first representation of the pixel using the pixel weighting value to obtain a weighted pixel value; and', 'add the weighted pixel value to a statistics count., 'accumulation logic configured to], 'statistics collection logic comprising2. The image signal processor of claim 1 , wherein the pixel weighting mask comprises a two-dimensional weighting map indexed by a first index value and a second index value claim 1 , wherein the first index value is based at least partly on the first chroma component and wherein the second index value is based at least partly on the ...
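The weighting step can be pictured as a small two-dimensional lookup indexed by the chroma components. The sketch below assumes a 4x4 map indexed by the top two bits of Cb and Cr and a luma-valued statistics accumulator; the map size, quantization, and names are illustrative, not the processor's actual mask.

def make_uniform_mask(size=4, weight=1.0):
    return [[weight] * size for _ in range(size)]

def pixel_weight(mask, cb, cr, bit_depth=8):
    """Index the 4x4 weighting map with the top two bits of each chroma component."""
    shift = bit_depth - 2
    return mask[cb >> shift][cr >> shift]

def accumulate(stat, luma, cb, cr, mask):
    """Weight the pixel (here its luma) and add it to the running statistics count."""
    return stat + luma * pixel_weight(mask, cb, cr)

mask = make_uniform_mask()
mask[3][3] = 0.0                                  # e.g. ignore saturated chroma
stat = 0.0
for (y, cb, cr) in [(100, 128, 128), (200, 250, 250)]:
    stat = accumulate(stat, y, cb, cr, mask)
print(stat)                                       # second pixel is masked out, so 100.0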

05-03-2015 publication date

OPERATING A DEVICE TO CAPTURE HIGH DYNAMIC RANGE IMAGES

Number: US20150062382A1
Assignee: Apple Inc.

Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels. 141.-. (canceled)42. A non-transitory machine readable medium of a device that captures images , the medium storing a program that when executed by at least one processing unit captures an image of a high dynamic range (HDR) scene , the program comprising sets of instructions for:capturing a plurality of images of the HDR scene at different exposure levels, wherein the different exposure levels are selected based at least in part upon detected lighting conditions within the HDR scene; andcompositing the plurality of images to produce a composite image of the HDR scene.43. The machine readable medium of claim 42 , wherein the sets of instructions for capturing a plurality of images of the HDR scene at different exposure levels comprises sets of instructions for:capturing and storing, upon the device entering an HDR mode, a first plurality of images at a first exposure level; andcapturing, upon receiving a command to capture the HDR scene, at least one image at one or more different exposure levels, wherein the one or more different exposure levels are selected based at least in part upon the detected lighting conditions within the HDR scene.44. The machine readable medium of claim 43 , wherein the program further comprises sets of instructions for:computing the first exposure level based on the detected lighting conditions within the HDR scene; ...
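A hedged sketch of exposure compositing under simple assumptions: each exposure's pixel is normalized by its relative exposure and blended with a weight that favors mid-tones and distrusts clipped values. This is only an illustrative blend; the triangle weighting and the numbers are not taken from the described method.

def weight(v, lo=0.1, hi=0.9):
    """Triangle weight: favor mid-tones, distrust clipped shadows and highlights."""
    if v <= lo or v >= hi:
        return 0.05
    return 1.0 - abs(v - 0.5) * 2.0

def composite_pixel(samples):
    """samples: list of (value_0_to_1, relative_exposure) for the same pixel."""
    num = den = 0.0
    for value, exposure in samples:
        w = weight(value)
        num += w * (value / exposure)   # bring each sample to a common radiance scale
        den += w
    return num / den if den else 0.0

# One pixel captured at EV0 (overexposed) and EV-2 (well exposed, 1/4 the exposure).
print(composite_pixel([(0.98, 1.0), (0.40, 0.25)]))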

20-02-2020 publication date

VIDEO PIPELINE

Number: US20200058152A1
Assignee: Apple Inc.

A mixed reality system that includes a device and a base station that communicate via a wireless connection The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display. 1. A device , comprising:a display subsystem for displaying a 3D virtual view;a current frame decoder and a previous frame decoder each configured to decompress and process frames received from a base station over a wireless connection and provide the frames to the display subsystem for display; receive a compressed current frame from the base station over the wireless connection;', 'write the compressed current frame to a previous frame buffer and pass the compressed current frame to the current frame decoder to decompress and process the current frame; and', 'while the current frame decoder is decompressing and processing the current frame, simultaneously decompress and process a previous frame from the previous frame buffer on the previous frame decoder., 'wherein the device is configured to2. The device as recited in claim 1 , wherein the device is further configured to:monitor the receiving of the compressed current frames from the base station over the wireless connection and the decompressing and processing of the current frames by the current ...

03-03-2016 publication date

VIDEO ENCODER WITH CONTEXT SWITCHING

Number: US20160065969A1
Assignee: Apple Inc.

A context switching method for video encoders that enables higher priority video streams to interrupt lower priority video streams. A high priority frame may be received for processing while another frame is being processed. The pipeline may be signaled to perform a context stop for the current frame. The pipeline stops processing the current frame at an appropriate place, and propagates the stop through the stages of the pipeline and to a transcoder through DMA. The stopping location is recorded. The video encoder may then process the higher-priority frame. When done, a context restart is performed and the pipeline resumes processing the lower-priority frame beginning at the recorded location. The transcoder may process data for the interrupted frame while the higher-priority frame is being processed in the pipeline, and similarly the pipeline may begin processing the lower-priority frame after the context restart while the transcoder completes processing the higher-priority frame. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages each configured to perform one or more operations on blocks of pixels from input frames passing through the pipeline; begin processing a frame from a first video source;', 'receive an indication that a higher priority frame from a second video source needs to be processed;', 'perform a context stop for the frame from the first video source at a context stop location within the frame;', 'process the higher priority frame from the second video source; and', 'perform a context restart to resume processing of the frame from the first video source at the context stop location within the frame., 'wherein the block processing pipeline is configured to2. The apparatus as recited in claim 1 , wherein the block processing pipeline is configured to process the blocks of pixels from the frame according to row groups each including two or more rows of blocks from the frame claim 1 , and wherein the context stop ...

03-03-2016 publication date

Chroma cache architecture in block processing pipelines

Number: US20160065973A1
Assignee: Apple Inc

Methods and apparatus for caching reference data in a block processing pipeline. A cache may be implemented to which reference data corresponding to motion vectors for blocks being processed in the pipeline may be prefetched from memory. Prefetches for the motion vectors may be initiated one or more stages prior to a processing stage. Cache tags for the cache may be defined by the motion vectors. When a motion vector is received, the tags can be checked to determine if there are cache block(s) corresponding to the vector (cache hits) in the cache. Upon a cache miss, a cache block in the cache is selected according to a replacement policy, the respective tag is updated, and a prefetch (e.g., via DMA) for the respective reference data is issued.
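The caching idea can be sketched with motion-vector-derived tags and LRU replacement, as below. The block-aligned tag granularity, the eight-entry capacity, and the issue_prefetch placeholder (standing in for a DMA request) are assumptions for illustration.

from collections import OrderedDict

BLOCK = 16   # cache-block granularity in pixels

class ChromaCache:
    def __init__(self, num_blocks=8):
        self.capacity = num_blocks
        self.blocks = OrderedDict()          # tag -> reference samples (or a pending marker)

    @staticmethod
    def tag(mv):
        """Quantize the motion vector to the cache-block grid to form the tag."""
        return (mv[0] // BLOCK, mv[1] // BLOCK)

    def lookup(self, mv):
        t = self.tag(mv)
        if t in self.blocks:                 # hit: refresh LRU order
            self.blocks.move_to_end(t)
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[t] = self.issue_prefetch(t)
        return False

    def issue_prefetch(self, t):
        # Placeholder for a DMA read of the reference block covering tag t.
        return "prefetch" + str(t)

cache = ChromaCache()
for mv in [(3, 5), (4, 6), (35, 5), (3, 5)]:
    print(mv, "hit" if cache.lookup(mv) else "miss")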

12-03-2015 publication date

CHROMA QUANTIZATION IN VIDEO CODING

Number: US20150071344A1
Assignee:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 1. A method comprising:identifying one or more initial sets of chroma quantization parameter (QP) offset values at one or more levels of a video coding hierarchy, each set of chroma QP offset values at a particular level for specifying chroma QPs of video units encompassed by said particular level;identifying an additional set of chroma QP offset values for a plurality of video units in the video coding hierarchy; andcomputing a set of chroma QP values for the plurality of video units, wherein the identified initial sets of chroma QP offset values and the identified additional set of chroma QP offset values are additive components of the set of chroma QP value for the plurality of video units.2. The method of claim 1 , wherein the identified initial sets of chroma quantization parameters are for video units that encompass the particular level of the video coding hierarchy.3. The method of further comprising identifying a luma quantization parameter value for the plurality of video units.4. The method of claim 3 , wherein computing the set of chroma QP values comprises adding the identified initial sets of chroma QP offset values and the identified additional set of chroma QP offset values to a luma QP value.5. The method of claim 3 , wherein identifying the additional set of ...
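The additive derivation can be written out directly: the chroma QP starts from the luma QP and accumulates the offsets signaled at each level plus the quantization-group-specific offset chosen from the table. The sketch below assumes H.264/HEVC-style clipping to [0, 51] and omits the standard's table-based chroma QP mapping; the offset values are made up for the example.

def chroma_qp(luma_qp, picture_offset, slice_offset, group_offset, qp_min=0, qp_max=51):
    """Chroma QP as the sum of the luma QP and the additive offset components."""
    qp = luma_qp + picture_offset + slice_offset + group_offset
    return max(qp_min, min(qp_max, qp))

# A quantization group selects entry 2 from a table signaled in the picture header.
group_offset_table = [(0, 0), (-2, -2), (-4, -6), (2, 2)]      # (Cb offset, Cr offset) candidates
cb_off, cr_off = group_offset_table[2]

luma_qp = 30
print("Cb QP:", chroma_qp(luma_qp, picture_offset=1, slice_offset=0, group_offset=cb_off))
print("Cr QP:", chroma_qp(luma_qp, picture_offset=1, slice_offset=0, group_offset=cr_off))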

12-03-2015 publication date

CHROMA QUANTIZATION IN VIDEO CODING

Number: US20150071345A1
Assignee:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 1. A method comprising: receiving an array of sets of chroma quantization parameter (QP) offset values from a header of an encoded video picture, wherein the encoded video picture includes a plurality of quantization groups, each quantization group includes at least one coding unit; selecting one set of chroma QP offset values from the array of sets of chroma QP offset values for a particular quantization group from the plurality of quantization groups; and computing a set of chroma quantization parameters for a coding unit in the particular quantization group based on the selected set of chroma QP offset values. 2. The method of claim 1, wherein computing the set of chroma quantization parameters comprises adding the selected set of chroma QP offset values to a luma QP value. 3. The method of further comprising identifying the luma QP value for the coding unit. 4. The method of claim 2, wherein the selected set of chroma QP offset values is a first set of chroma QP offset values, the method further comprising adding a second set of chroma QP offset values associated with the video picture to the luma QP value and the first set of chroma QP offset values. 5. The method of claim 2, wherein the selected set of chroma QP offset values is a first set of chroma QP offset ...

28-02-2019 publication date

Overdrive for Electronic Device Displays

Number: US20190066569A1
Assignee:

An electronic device is provided. The electronic device includes a display that is configured to show content that includes a plurality of frames. The plurality of frames includes a first frame that is associated with a pre-transition value. The plurality of frames also includes a second frame that is associated with a current frame value that corresponds to a first luminance. Additionally, the electronic device is configured to determine an overdriven current frame value corresponding to a second luminance that is greater than the first luminance. The electronic device is also configured to display the second frame using the overdriven current frame value. 2. The electronic device of claim 1 , wherein the electronic device is configured to display a third frame of the plurality of the frames using the current frame value after displaying the second frame.3. The electronic device of claim 1 , wherein the display comprises a plurality of pixels claim 1 , wherein the pre-transition value claim 1 , current frame value claim 1 , and compensated current frame value are associated with a first pixel.4. The electronic device of claim 3 , wherein the electronic device is configured to determine a second current frame value and a second compensated current frame value associated with a second pixel of the plurality of pixels.5. The electronic device of claim 1 , wherein the plurality of frames comprises a third frame claim 1 , wherein the third frame is associated with a next frame value corresponding to a third luminance claim 1 , wherein the electronic device is configured to:determine a next frame compensated value corresponding to a fourth luminance, wherein the fourth luminance is greater than the first luminance; anddisplay the third frame using the next frame compensated value.6. The electronic device of claim 5 , wherein the electronic device is configured to display a fourth frame after the third frame claim 5 , wherein the fourth frame is associated with the ...
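A minimal overdrive sketch, assuming a single proportional gain applied to rising transitions: the frame is driven past its target value by a fraction of the step from the pre-transition value, and later frames return to the target. The 0.5 gain and the function name are illustrative, not the panel's calibration.

def overdrive(pre_value, target_value, gain=0.5, max_code=255):
    """Return the (possibly overdriven) code to program for the current frame."""
    delta = target_value - pre_value
    if delta <= 0:
        return target_value                     # only boost rising transitions in this sketch
    boosted = target_value + gain * delta       # push past the target in proportion to the step
    return min(max_code, int(round(boosted)))

frames = [(20, 180), (180, 180), (180, 60)]     # (pre-transition value, current frame value)
for pre, cur in frames:
    print(pre, "->", cur, "drive with", overdrive(pre, cur))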

17-03-2022 publication date

Systems and Methods for Encoding Image Data

Number: US20220086485A1
Assignee:

Systems and methods for improving operational efficiency of a video encoding system used to encode image data are provided. In embodiments, a video encoding system includes image processing circuitry configured to receive source image data and derive full-resolution image data and low-resolution image data from the source image data. The video encoding system also includes a low resolution pipeline configured to receive the low-resolution image data and determine one or more low resolution inter prediction modes based on the low-resolution image data. Furthermore, the video encoding system includes a main pipeline configured to encode the full-resolution image data based on the one or more low resolution inter prediction modes. 1. A video encoding system comprising: image processing circuitry configured to: receive source image data; and derive full-resolution image data and low-resolution image data from the source image data; memory configured to store the low-resolution image data; a low resolution pipeline separate from the image processing circuitry and the memory, wherein the low resolution pipeline is configured to receive the low-resolution image data from the memory without further downscaling and, after receiving the low-resolution image data from the memory, determine one or more low resolution inter prediction modes based on the low-resolution image data; and a main pipeline separate from the image processing circuitry and configured to encode the full-resolution image data based on the one or more low resolution inter prediction modes. 2. The video encoding system of claim 1, wherein: the memory is configured to store the full-resolution image data; and the main pipeline is configured to encode the full-resolution image data after receiving the full-resolution image data from the memory. 3. The video encoding system of claim 1, wherein the image processing circuitry is configured to derive the full-resolution image data and ...

26-03-2015 publication date

NEIGHBOR CONTEXT CACHING IN BLOCK PROCESSING PIPELINES

Number: US20150084968A1
Assignee: Apple Inc.

Methods and apparatus for caching neighbor data in a block processing pipeline that processes blocks in knight's order with quadrow constraints. Stages of the pipeline may maintain two local buffers that contain data from neighbor blocks of a current block. A first buffer contains data from the last C blocks processed at the stage. A second buffer contains data from neighbor blocks on the last row of a previous quadrow. Data for blocks on the bottom row of a quadrow are stored to an external memory at the end of the pipeline. When a block on the top row of a quadrow is input to the pipeline, neighbor data from the bottom row of the previous quadrow is read from the external memory and passed down the pipeline, each stage storing the data in its second buffer and using the neighbor data in the second buffer when processing the block. 1. An apparatus , comprising:an interface to an external memory; anda block processing pipeline comprising a plurality of stages, each stage configured to perform one or more operations on a block of pixels passing through the pipeline;wherein the apparatus is configured to process blocks of pixels from a frame in the block processing pipeline so that adjacent blocks on a row are not concurrently at adjacent stages of the pipeline; receive a block for processing at the stage;', 'process the block according to information from one or more previously processed neighbor blocks stored in one or more buffers in a local memory of the stage;', 'store information from the processed block to a first buffer in the local memory, wherein said storing overwrites oldest information from a previously processed block in the first buffer; and', 'output the processed block to a next stage in the pipeline or to the external memory., 'wherein one or more of the plurality of stages of the block processing pipeline are each configured to2. The apparatus as recited in claim 1 , wherein the rows of blocks are separated into a plurality of row groups each ...

26-03-2015 publication date

NEIGHBOR CONTEXT PROCESSING IN BLOCK PROCESSING PIPELINES

Number: US20150084969A1
Assignee: Apple Inc.

A block processing pipeline in which blocks are input to and processed according to row groups so that adjacent blocks on a row are not concurrently at adjacent stages of the pipeline. A stage of the pipeline may process a current block according to neighbor pixels from one or more neighbor blocks. Since adjacent blocks are not concurrently at adjacent stages, the left neighbor of the current block is at least two stages downstream from the stage. Thus, processed pixels from the left neighbor can be passed back to the stage for use in processing the current block without the need to wait for the left neighbor to complete processing at a next stage of the pipeline. In addition, the neighbor blocks may include blocks from the row above the current block. Information from these neighbor blocks may be passed to the stage from an upstream stage of the pipeline. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages, each stage configured to perform one or more operations on a block of pixels passing through the pipeline;wherein the apparatus is configured to process blocks of pixels from a frame in the block processing pipeline so that adjacent blocks on a row are not concurrently at adjacent stages of the pipeline; receive, from an upstream stage of the pipeline, a current block of pixels for processing at the stage;', 'process the current block according to neighbor pixels from one or more neighbor blocks of the current block that were input to the pipeline for processing prior to input of the current block to the pipeline, wherein the neighbor pixels include left neighbor pixels from a left neighbor block of the current block received from a downstream stage of the pipeline; and', 'output the processed current block to a next stage in the pipeline., 'wherein at least one stage of the block processing pipeline is configured to2. The apparatus as recited in claim 1 , wherein the neighbor pixels further include above neighbor pixels ...

26-03-2015 publication date

REFERENCE FRAME DATA PREFETCHING IN BLOCK PROCESSING PIPELINES

Number: US20150084970A1
Assignee: Apple Inc.

Block processing pipeline methods and apparatus in which pixel data from a reference frame is prefetched into a search window memory. The search window may include two or more overlapping regions of pixels from the reference frame corresponding to blocks from the rows in the input frame that are currently being processed in the pipeline. Thus, the pipeline may process blocks from multiple rows of an input frame using one set of pixel data from a reference frame that is stored in a shared search window memory. The search window may be advanced by one column of blocks by initiating a prefetch for a next column of reference data from a memory. The pipeline may also include a reference data cache that may be used to cache a portion of a reference frame and from which at least a portion of a prefetch for the search window may be satisfied. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages each configured to perform one or more operations on a block of pixels from a frame passing through the pipeline, wherein the pipeline is configured to process each block according to a corresponding region of a reference frame, the region including a plurality of blocks of pixels from the reference frame;wherein the apparatus is configured to process blocks of pixels from the frame in the block processing pipeline according to row groups each including two or more rows of blocks from the frame;a memory configured to store a window including a plurality of columns of blocks of pixels from the reference frame for access by at least one of the stages, wherein the window covers two or more overlapping regions from the reference frame each corresponding to a block from a row group currently being processed in the pipeline; andwherein the apparatus is further configured to advance the window by one column of blocks from the reference frame at each rth block, where r is the number of rows in a row group.2. The apparatus as recited in claim 1 , wherein ...

26-03-2015 publication date

DELAYED CHROMA PROCESSING IN BLOCK PROCESSING PIPELINES

Number: US20150085931A1
Assignee: Apple Inc.

A block processing pipeline in which macroblocks are input to and processed according to row groups so that adjacent macroblocks on a row are not concurrently at adjacent stages of the pipeline. The input method may allow chroma processing to be postponed until after luma processing. One or more upstream stages of the pipeline may process luma elements of each macroblock to generate luma results such as a best mode for processing the luma elements. Luma results may be provided to one or more downstream stages of the pipeline that process chroma elements of each macroblock. The luma results may be used to determine processing of the chroma elements. For example, if the best mode for luma is an intra-frame mode, then a chroma processing stage may determine a best intra-frame mode for chroma and reconstruct the chroma elements according to the best chroma intra-frame mode. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages, each stage configured to perform one or more operations on a macroblock of pixels passing through the pipeline, wherein each macroblock includes at least one block of luma elements and at least one block of chroma elements;wherein the apparatus is configured to process macroblocks of pixels from a current frame in the block processing pipeline so that adjacent macroblocks on a row are not concurrently at adjacent stages of the pipeline; and process luma elements of a current macroblock at one or more upstream luma processing stages of the pipeline; and', 'process chroma elements of the current macroblock at one or more downstream chroma processing stages of the pipeline according to results of said processing the luma elements of the current macroblock at the one or more upstream luma processing stages., 'wherein the block processing pipeline is configured to2. The apparatus as recited in claim 1 , wherein the one or more luma processing stages include a luma intra-frame estimation stage configured to determine ...

22-03-2018 publication date

DITHERING TECHNIQUES FOR ELECTRONIC DISPLAYS

Number: US20180082626A1
Assignee:

Devices and methods for error diffusion and spatiotemporal dithering are provided. By way of example, a method of operating a display includes receiving a pixel input, a set of pixel coordinates, and a current frame number. A kernel and a particular kernel bit of the kernel is selected from a set of kernels, based upon the pixel input, the pixel coordinates, the frame number, or any combination thereof. A dithered output is determined based at least in part upon the kernel bit. When the display is in a diamond pixel configuration, the dithered output is applied in accordance with a diamond pattern formed by red, blue, or red and blue pixel channels. 1. A method of operating a display , comprising:receiving a pixel input;receiving a set of pixel coordinates associated with the pixel input;receiving a current frame number associated with the pixel input;selecting a kernel from a kernel lookup table, based upon the pixel input, the pixel coordinates, the frame number, or any combination thereof;selecting a kernel bit from the kernel, based upon the pixel input, the pixel coordinates, the frame number, or any combination thereof;calculating a dithered output based at least in part upon the kernel bit; andapplying the dithered output in accordance with a diamond pattern formed by red channels, blue channels, or red and blue pixel channels;wherein the kernel is rotated for a subsequent frame of image data.2. The method of claim 1 , wherein applying the dithered output in accordance with the diamond pattern formed by the red claim 1 , the blue claim 1 , or the red channels and the blue pixel channels comprises:selecting the kernel bit using only every second pixel for the red channels and the blue channels.3. The method of claim 2 , comprising selecting the kernel bit using bits 2:1 of an x-coordinate of the pixel coordinates as a horizontal index and bits 1:0 of a y-coordinate of the pixel coordinates as a vertical index to select the kernel bit from the kernel.4. The ...
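One way to picture the kernel selection is shown below: a small set of 4x4 binary kernels indexed by the fractional part of the pixel value, rotated by the frame number, with the red and blue channels sampling only alternate sites to follow the diamond layout. The kernels, bit selections, and 4-bit precision split are assumptions for the example only.

KERNELS = [
    [[0] * 4 for _ in range(4)],                                            # weight 0/16
    [[1 if (x + y) % 2 == 0 else 0 for x in range(4)] for y in range(4)],   # weight 8/16
    [[1] * 4 for _ in range(4)],                                            # weight 16/16
]

def dithered_output(pixel_value, x, y, frame, channel):
    base, frac = divmod(pixel_value, 16)                 # keep 4 bits of extra precision
    kernel = KERNELS[frac * len(KERNELS) // 16]          # pick a kernel by fractional value
    if channel in ("red", "blue") and (x + y) % 2:       # diamond layout: skip alternate sites
        return base
    kx = ((x >> 1) + frame) & 0x3                        # rotate the kernel horizontally per frame
    ky = y & 0x3
    return base + kernel[ky][kx]

print([dithered_output(104, x, 0, frame=0, channel="red") for x in range(4)])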

14-03-2019 publication date

ELECTRONIC DISPLAY COLOR ACCURACY COMPENSATION

Number: US20190080656A1
Assignee:

Systems, methods, and non-transitory media are presented that provide for improving color accuracy. An electronic display includes a display region having multiple pixels each having multiple subpixels. The electronic device also includes a display pipeline coupled to the electronic display. The display pipeline is configured to receive image data and perform white point compensation on the image data to compensate for a current drop in the display to cause the display to display a target white point when displaying white. The display pipeline also is configured to correct white point overcompensation on the image data to reduce possible oversaturation of non-white pixels using the white point compensation. Finally, the display pipeline is configured to output the compensated and corrected image data to the electronic display to facilitate displaying the compensated and corrected image data on the display region. 1. An electronic device , comprising:an electronic display comprising a display region comprising a plurality of pixels each comprising a plurality of subpixels; anda display pipeline coupled to the electronic display, wherein the display pipeline is configured to:receive image data; 'correct oversaturation of non-white pixels due to the white point compensation; and', 'perform white point compensation on the image data to compensate for a current drop in the display to cause the display to display a target white point when displaying white;'}output the compensated and corrected image data to the electronic display to facilitate displaying the compensated and corrected image data on the display region.2. The electronic device of claim 1 , wherein the display pipeline comprises a multi-dimensional lookup table claim 1 , and wherein correcting the oversaturation comprises looking up values in the multi-dimensional lookup table based at least in part on a color overcompensation correction value determined for the electronic display.3. The electronic device of ...

14-03-2019 publication date

Burn-In Statistics and Burn-In Compensation

Number: US20190080666A1
Assignee:

An electronic display pipeline may process image data for display on an electronic display. The electronic display pipeline may include burn-in compensation statistics collection circuitry and burn-in compensation circuitry. The burn-in compensation statistics collection circuitry may collect image statistics based at least in part on the image data. The statistics may estimate a likely amount of non-uniform aging of the sub-pixels of the electronic display. The burn-in compensation circuitry may apply a gain to sub-pixels of the image data to account for non-uniform aging of corresponding sub-pixels of the electronic display. The applied gain may be based at least in part on the image statistics collected by the burn-in compensation statistics collection circuitry. 1. An electronic device comprising:an electronic display configured to display images when programmed with display image data; and burn-in compensation processing configured to apply gains to sub-pixels of the image data according to one or more burn-in compensation gain maps, wherein the one or more burn-in compensation gain maps provide a two-dimensional mapping of gains that, when applied to the sub-pixels of the image data, reduce a gain of at least one of the sub-pixels of the image data to account for sub-pixels aging non-uniformity on the electronic display and thereby to reduce or eliminate burn-in artifacts that would otherwise appear on the electronic display when the electronic display is programmed with the display image data; or', 'burn-in statistics collection processing configured to compute incremental updates of pixel aging that is expected to occur due to the display image data or a current temperature of an area of the electronic display, or both; or', 'both the burn-in compensation processing and the burn-in statistics collection processing., 'a display pipeline configured to receive image data and process the image data through one or more image processing blocks to obtain the ...
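A hedged sketch of the two halves described above: a per-subpixel wear accumulator updated from displayed drive levels (optionally scaled by temperature), and a gain map that scales less-worn subpixels down to the most-worn subpixel's estimated efficiency. The linear decay model and its constants are assumptions, not the display's aging model.

def update_wear(wear, frame, temperature_factor=1.0):
    """Accumulate wear per subpixel from the displayed drive levels (0..255)."""
    return [[w + (v / 255.0) * temperature_factor for w, v in zip(wr, fr)]
            for wr, fr in zip(wear, frame)]

def gain_map(wear, decay_per_unit=0.0002):
    """Scale brighter (less worn) subpixels down so the panel stays uniform."""
    efficiency = [[1.0 - decay_per_unit * w for w in row] for row in wear]
    floor = min(min(row) for row in efficiency)
    return [[floor / e for e in row] for row in efficiency]

wear = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(1000):                          # a static logo wears one subpixel faster
    wear = update_wear(wear, [[255, 32], [32, 32]])
gains = gain_map(wear)
print([[round(g, 3) for g in row] for row in gains])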

02-04-2015 publication date

Processing order in block processing pipelines

Number: US20150091914A1
Assignee: Apple Inc

A knight's order processing method for block processing pipelines in which the next block input to the pipeline is taken from the row below and one or more columns to the left in the frame. The knight's order method may provide spacing between adjacent blocks in the pipeline to facilitate feedback of data from a downstream stage to an upstream stage. The rows of blocks in the input frame may be divided into sets of rows that constrain the knight's order method to maintain locality of neighbor block data. Invalid blocks may be input to the pipeline at the left of the first set of rows and at the right of the last set of rows, and the sets of rows may be treated as if they are horizontally arranged rather than vertically arranged, to maintain continuity of the knight's order algorithm.
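The traversal itself is easy to sketch for a four-row group: step one row down and two columns left, and jump back to the top row (seven columns to the right) from the bottom row. In the sketch below, coordinates outside the frame stand in for the invalid pad blocks and are simply skipped when emitting the order; the helper name and the jump arithmetic for other group sizes are assumptions.

def knights_order(width, rows_per_group=4):
    """Yield (x, y) block coordinates of one row group in knight's order."""
    x, y = 0, 0
    emitted, total = 0, width * rows_per_group
    while emitted < total:
        if 0 <= x < width:                      # positions outside the frame are invalid pad blocks
            yield (x, y)
            emitted += 1
        if y == rows_per_group - 1:             # bottom row of the group: jump to the top row
            x, y = x + 2 * rows_per_group - 1, 0
        else:
            x, y = x - 2, y + 1                 # knight step: one row down, two columns left

order = list(knights_order(width=6))
print(order[:8], "...", len(order), "blocks")   # adjacent blocks on a row are spaced apart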

02-04-2015 publication date

MEMORY LATENCY TOLERANCE IN BLOCK PROCESSING PIPELINES

Number: US20150091920A1
Assignee: Apple Inc.

Memory latency tolerance methods and apparatus for maintaining an overall level of performance in block processing pipelines that prefetch reference data into a search window. In a general memory latency tolerance method, search window processing in the pipeline may be monitored. If status of search window processing changes in a way that affects pipeline throughput, then pipeline processing may be modified. The modification may be performed according to no stall methods, stall recovery methods, and/or stall prevention methods. In no stall methods, a block may be processed using the data present in the search window without waiting for the missing reference data. In stall recovery methods, the pipeline is allowed to stall, and processing is modified for subsequent blocks to speed up the pipeline and catch up in throughput. In stall prevention methods, processing is adjusted in advance of the pipeline encountering a stall condition. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages each configured to perform one or more operations on a block of pixels from a frame passing through the pipeline; anda memory configured to store a search window including a plurality of columns of blocks of pixels from a reference frame, wherein the search window covers two or more overlapping search regions from the reference frame, each search region including a plurality of blocks; generate requests to prefetch specified columns of blocks of pixels from the reference frame into the search window, wherein the requested columns of blocks include pixels for processing upcoming blocks from the frame at one or more stages of the pipeline;', 'monitor the search window to detect conditions in which one or more blocks from the reference frame are not prefetched into the search window for at least one upcoming block from the frame; and', 'in response to detecting a condition in which one or more blocks from the reference frame are not prefetched into the ...

02-04-2015 publication date

WAVEFRONT ENCODING WITH PARALLEL BIT STREAM ENCODING

Number: US20150091921A1
Assignee: Apple Inc.

In the video encoders described herein, blocks of pixels from a video frame may be encoded (e.g., using CAVLC encoding) in a block processing pipeline using wavefront ordering (e.g., in knight's order). Each of the encoded blocks may be written to a particular one of multiple DMA buffers such that the encoded blocks written to each of the buffers represent consecutive blocks of the video frame in scan order. A transcode pipeline may operate in parallel with (or at least overlapping) the operation of the block processing pipeline. The transcode pipeline may read encoded blocks from the buffers in scan order and merge them into a single bit stream (in scan order). A transcoder core of the transcode pipeline may decode the encoded blocks and encode them using a different encoding process (e.g., CABAC). In some cases, the transcoder may be bypassed. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages, each stage configured to perform one or more operations on a block of pixels passing through the pipeline;a transcode pipeline configured to operate substantially in parallel with the block processing pipeline; anda plurality of buffers; process a plurality of blocks of pixels from a frame using wavefront ordering; and', 'write each of the processed blocks output by the encoding component to a respective one of the plurality of buffers such that the processed blocks written to each one of the plurality of buffers represent consecutive blocks of pixels in the frame in scan order; and, 'wherein an encoding component of the block processing pipeline is configured to read the processed blocks from the plurality of buffers in scan order; and', 'provide the processed blocks in scan order to another component for further processing., 'wherein a component of the transcode pipeline is configured to2. The apparatus of claim 1 , wherein to process the plurality of blocks of pixels using wavefront ordering claim 1 , the encoding component of the ...

02-04-2015 publication date

WAVEFRONT ORDER TO SCAN ORDER SYNCHRONIZATION

Number: US20150091927A1
Assignee: Apple Inc.

Blocks of pixels from a video frame may be encoded in a block processing pipeline using wavefront ordering, e.g. according to knight's order. Each of the encoded blocks may be written to a particular one of multiple buffers such that the blocks written to each of the buffers represent consecutive blocks of the frame in scan order. Stitching information may be written to the buffers at the end of each row. A stitcher may read the rows from the buffers in order and generate a scan order output stream for the frame. The stitcher component may read the stitching information at the end of each row and apply the stitching information to one or more blocks at the beginning of a next row to stitch the next row to the previous row. Stitching may involve modifying pixel(s) of the blocks and/or modifying metadata for the blocks. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages, each stage configured to perform one or more operations on a block of pixels from a frame passing through the pipeline; anda plurality of buffers; receive and process blocks of pixels from the frame in wavefront order so that adjacent blocks on a row are not concurrently at adjacent stages of the pipeline;', 'write each processed block to one of the plurality of buffers such that the processed blocks written to each of the plurality of buffers represent consecutive blocks of pixels from a respective row in the frame in scan order; and', 'for at least one processed block that is a last block written to one of the buffers from a row in the frame, generate and store stitching information for stitching the row to a next row in the frame., 'wherein the block processing pipeline is configured to2. The apparatus as recited in claim 1 , wherein the block processing pipeline is configured to process the blocks of pixels according to row groups each including two or more rows of blocks from the frame claim 1 , and wherein each of the plurality of buffers corresponds to a ...

02-04-2015 publication date

Context re-mapping in cabac encoder

Number: US20150092834A1
Assignee: Apple Inc

A video encoder may include a context-adaptive binary arithmetic coding (CABAC) encode component that converts each syntax element of a representation of a block of pixels to binary code, serializes it, and codes it mathematically, after which the resulting bit stream is output. A lookup table in memory and a context cache may store probability values for supported contexts, which may be retrieved from the table or cache for use in coding syntax elements. Depending on the results of a syntax element coding, the probability value for its context may be modified (e.g., increased or decreased) in the cache and, subsequently, in the table. After coding multiple syntax elements, and based on observed access patterns for probability values, a mapping or indexing for the cache or the table may be modified to improve cache performance (e.g., to reduce cache misses or access data for related contexts using fewer accesses).

02-04-2015 publication date

DATA STORAGE AND ACCESS IN BLOCK PROCESSING PIPELINES

Number: US20150092843A1
Assignee: Apple Inc.

Block processing pipeline methods and apparatus in which reference data are stored to a memory according to tile formats to reduce memory accesses when fetching the data from the memory. When the pipeline stores reference data from a current frame being processed to memory as a reference frame, the reference samples are stored in macroblock sequential order. Each macroblock sample set is stored as a tile. Reference data may be stored in tile formats for luma and chroma. Chroma reference data may be stored in tile formats for chroma 4:2:0, 4:2:2, and/or 4:4:4 formats. A stage of the pipeline may write luma and chroma reference data for macroblocks to memory according to one or more of the macroblock tile formats in a modified knight's order. The stage may delay writing the reference data from the macroblocks until the macroblocks have been fully processed by the pipeline. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages each configured to perform one or more operations on blocks of pixels from a current frame passing through the pipeline;wherein the block processing pipeline is configured to store data from processed blocks of the current frame as reference data to a memory according to a tile format, wherein the tile format stores reference data from each processed block of the frame in a corresponding tile, wherein each tile includes two or more contiguous memory blocks in the memory, each memory block of a block request size of the memory, and wherein the tiles are stored to a reference frame in the memory in sequential order of the blocks of pixels in the frame; andwherein the block processing pipeline is further configured to fetch a window of reference data from a reference frame previously stored to the memory according to the tile format and process at least one block of pixels from the current frame according to the fetched window of reference data.2. The apparatus as recited in claim 1 , wherein the block request ...
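The addressing benefit can be illustrated with a little arithmetic: because tiles are laid out in macroblock scan order, one macroblock's reference samples map to a handful of contiguous block requests. The 64-byte request size, the 16x16 one-byte-per-sample luma tile, and the function names below are assumptions for the example.

BLOCK_REQUEST = 64                    # bytes per memory block request
MB_W, MB_H = 16, 16                   # luma macroblock dimensions
TILE_BYTES = MB_W * MB_H              # one byte per luma sample in this sketch

def tile_base_address(frame_base, mb_x, mb_y, mbs_per_row):
    """Byte address of the tile holding macroblock (mb_x, mb_y)."""
    mb_index = mb_y * mbs_per_row + mb_x          # macroblock sequential (scan) order
    return frame_base + mb_index * TILE_BYTES

def block_requests_for_tile(address):
    """The contiguous memory-block requests needed to fetch one tile."""
    return [address + i * BLOCK_REQUEST for i in range(TILE_BYTES // BLOCK_REQUEST)]

base = tile_base_address(frame_base=0x1000, mb_x=2, mb_y=1, mbs_per_row=8)
print(hex(base), [hex(a) for a in block_requests_for_tile(base)])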

02-04-2015 publication date

PARALLEL HARDWARE AND SOFTWARE BLOCK PROCESSING PIPELINES

Number: US20150092854A1
Assignee: Apple Inc.

A block processing pipeline that includes a software pipeline and a hardware pipeline that run in parallel. The software pipeline runs at least one block ahead of the hardware pipeline. The stages of the pipeline may each include a hardware pipeline component that performs one or more operations on a current block at the stage. At least one stage of the pipeline may also include a software pipeline component that determines a configuration for the hardware component at the stage of the pipeline for processing a next block while the hardware component is processing the current block. The software pipeline component may determine the configuration according to information related to the next block obtained from an upstream stage of the pipeline. The software pipeline component may also obtain and use information related to a block that was previously processed at the stage. 1. An apparatus , comprising:a block processing pipeline comprising a plurality of stages, wherein one or more of the plurality of stages of the block processing pipeline each comprises a component of a software pipeline and a component of a hardware pipeline, wherein the software and hardware pipelines are configured to process blocks of pixels from a frame in parallel;wherein the software pipeline component at each of the one or more stages is configured to iteratively determine configurations for processing the blocks at the hardware pipeline component of the stage according to obtained information for the blocks;wherein the hardware pipeline component at each of the one or more stages is configured to iteratively process the blocks according to the configurations for processing the blocks that are determined by the software pipeline component at the stage; andwherein the hardware pipeline component at each of the one or more stages processes a current block at the stage while the software pipeline component at the stage is determining a configuration for an upcoming block to be processed by the ...

02-04-2015 publication date

SKIP THRESHOLDING IN PIPELINED VIDEO ENCODERS

Number: US20150092855A1
Assignee: Apple Inc.

The video encoders described herein may make an initial determination to designate a macroblock as a skip macroblock, but may subsequently reverse that decision based on additional information. For example, an initial skip mode decision may be based on aggregate distortion metrics for the luma component of the macroblock (e.g., SAD, SATD, or SSD), then reversed based on an individual pixel difference metric, an aggregate or individual pixel metric for a chroma component of the macroblock, or on the position of the macroblock within a macroblock row. The final skip mode decision may be based, at least in part, on the maximum difference between any pixel in the macroblock (or in a region of interest within the macroblock) and the corresponding pixel in a reference frame. The initial skip mode decision may be made during an early stage of a pipelined video encoding process and reversed in a later stage. 1. An apparatus , comprising:a block processing pipeline that implements a plurality of stages each comprising at least one component, each component configured to perform one or more operations on a macroblock of pixels from a video frame passing through the pipeline; generate an initial determination that a macroblock of pixels should be designated as a skip macroblock, wherein designation of the macroblock as a skip macroblock indicates that the macroblock should be represented by a macroblock predictor rather than by an encoding of a motion vector difference and a residual for the macroblock, and wherein to generate the initial determination, the one or more components are configured to compute one or more aggregate metrics for a representation of a luma component of the macroblock;', 'subsequent to generation of the initial determination, determine that the macroblock should not be designated as a skip macroblock; and', 'subsequent to the determination that the macroblock should not be designated as a skip macroblock, provide a motion vector difference and a ...
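The two-step decision can be sketched as an aggregate test followed by a per-pixel check, as below. SAD is used for the aggregate luma metric and both thresholds are invented for the example; the point is only that a block passing the aggregate test can still lose its skip designation because of one large individual pixel difference.

def initial_skip(block, reference, sad_threshold=256):
    """Aggregate test: sum of absolute differences over the luma block."""
    sad = sum(abs(b - r) for brow, rrow in zip(block, reference) for b, r in zip(brow, rrow))
    return sad <= sad_threshold

def final_skip(block, reference, sad_threshold=256, max_pixel_diff=24):
    """Reverse the skip decision if any individual pixel moved too much."""
    if not initial_skip(block, reference, sad_threshold):
        return False
    worst = max(abs(b - r) for brow, rrow in zip(block, reference) for b, r in zip(brow, rrow))
    return worst <= max_pixel_diff

ref   = [[100] * 16 for _ in range(16)]
block = [row[:] for row in ref]
block[8][8] = 180                           # a single bright pixel, e.g. a small light turning on
print(initial_skip(block, ref), final_skip(block, ref))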

Подробнее
29-03-2018 дата публикации

REDUCED FOOTPRINT PIXEL RESPONSE CORRECTION SYSTEMS AND METHODS

Номер: US20180090102A1
Принадлежит:

Systems and methods for improving displayed image quality of an electronic display including a display pixel and a display driver are provided. A display pipeline receives input image data that indicates target luminance of the display pixel when displaying an image frame on the electronic display; determines a first bit group in pixel response corrected image data by mapping a first bit group in the input image data based at least in part on a first pixel response correction look-up-table; determines a second bit group in the pixel response corrected image data by mapping a second bit group in the input image data based at least in part on a second pixel response correction look-up-table; and outputs the pixel response corrected image data to the display driver to enable the display driver to facilitate displaying the image frame by writing the display pixel based on the pixel response corrected image data. 1. An electronic device comprising:an electronic display configured to display image frames, wherein the electronic display comprises a first display pixel and a display driver; receive first input image data that indicates first target luminance of the first display pixel when displaying a first image frame on the electronic display;', 'determine a first bit group in first pixel response corrected image data by mapping a corresponding first bit group in the first input image data based at least in part on a first pixel response correction look-up-table;', 'determine a second bit group in the first pixel response corrected image data by mapping a corresponding second bit group in the first input image data based at least in part on a second pixel response correction look-up-table; and', 'output the first pixel response corrected image data to the display driver to enable the display driver to facilitate displaying the first image frame by writing the first display pixel based at least in part on the first pixel response corrected image data., 'a display pipeline ...
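A minimal sketch of the bit-group mapping described above: the input pixel code is split into two bit groups, each group is mapped through its own correction look-up-table, and the results are recombined. The 4-bit/4-bit split and the identity tables are illustrative assumptions, not the patented tables.

    UPPER_LUT = list(range(16))   # correction table for the upper bit group (assumed 4 bits)
    LOWER_LUT = list(range(16))   # correction table for the lower bit group (assumed 4 bits)

    def correct_pixel(code_8bit):
        upper = (code_8bit >> 4) & 0xF        # first bit group of the input image data
        lower = code_8bit & 0xF               # second bit group of the input image data
        corrected_upper = UPPER_LUT[upper]    # mapped via the first correction LUT
        corrected_lower = LOWER_LUT[lower]    # mapped via the second correction LUT
        return (corrected_upper << 4) | corrected_lower

    print(correct_pixel(0xA7))  # -> 167 with identity tables

Splitting the code into groups keeps each table small (16 entries here instead of 256), which is the "reduced footprint" trade-off the title refers to.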

Подробнее
30-03-2017 дата публикации

CONFIGURABLE MOTION ESTIMATION SEARCH SYSTEMS AND METHODS

Номер: US20170094293A1
Принадлежит:

System and method for improving operational efficiency of a video encoding pipeline used to encode image data. The video encoding pipeline includes a motion estimation setup block, which dynamically adjusts a setup configuration of the motion estimation block based at least in part on operational parameters of the video encoding pipeline and select an initial candidate inter-frame prediction mode based at least on the setup configuration, a full-pel motion estimation block, which determines an intermediate candidate inter-frame prediction mode by performing a motion estimation search based on the initial candidate inter-frame prediction mode, a sub-pel motion estimation block, which determines a final candidate inter-frame prediction by performing a motion estimation search based on the intermediate candidate inter-frame prediction mode, and a mode decision block, which determines a rate-distortion cost associated with the final candidate inter-frame prediction mode and determines a prediction mode used to prediction encoding the image data. 1. A computing device comprising a video encoding pipeline configured to encode image data , wherein the video encoding pipeline comprises: [ dynamically adjust a setup configuration of the motion estimation block based at least in part on operational parameters of the video encoding pipeline; and', 'select a first initial candidate inter-frame prediction mode based at least on the setup configuration;, 'a motion estimation setup block configured to, 'a full-pel motion estimation block configured to determine an intermediate candidate inter-frame prediction mode by performing a first motion estimation search based on the first initial candidate inter-frame prediction mode; and', 'a sub-pel motion estimation block configured to determine a final candidate inter-frame prediction mode by performing a second motion estimation search based on the intermediate candidate inter-frame prediction mode; and, 'a motion estimation block ...

Подробнее
30-03-2017 дата публикации

PREDICTOR CANDIDATES FOR MOTION ESTIMATION SEARCH SYSTEMS AND METHODS

Номер: US20170094304A1
Принадлежит:

System and method for improving operational efficiency of a video encoding pipeline used to encode image data. The video encoding pipeline includes a mode decision block, which selects a first inter-frame prediction mode used to prediction encode a first prediction unit, and a motion estimation block, which receives the first inter-frame prediction mode as feedback from the mode decision block when processing a second prediction unit; determines an initial candidate inter-frame prediction mode of the second prediction unit based at least in part on the first inter-frame prediction mode; and determines a final candidate inter-frame prediction mode of the second prediction unit by performing a first motion estimation search based at least in part on the initial candidate inter-frame prediction mode. The mode decision block determines a rate-distortion cost associated with the final candidate inter-frame prediction mode and a prediction mode used to prediction encode the second prediction unit based at least in part on the rate-distortion cost. 1. A computing device comprising a video encoding pipeline configured to encode image data , wherein the video encoding pipeline comprises:a mode decision block configured to select a first inter-frame prediction mode used to prediction encode a first prediction unit; receive the first inter-frame prediction mode as feedback from the mode decision block when processing a second prediction unit;', 'determine an initial candidate inter-frame prediction mode of the second prediction unit based at least in part on the first inter-frame prediction mode; and', 'determine a final candidate inter-frame prediction mode of the second prediction unit by performing a first motion estimation search based at least in part on the initial candidate inter-frame prediction mode;, 'a motion estimation block configured towherein the mode decision block is configured to determine a rate-distortion cost associated with the final candidate inter-frame ...

Подробнее
30-03-2017 дата публикации

MEMORY-TO-MEMORY LOW RESOLUTION MOTION ESTIMATION SYSTEMS AND METHODS

Номер: US20170094311A1
Принадлежит:

System and method for improving operational efficiency of a video encoding pipeline used to encode image data. In embodiments, the video encoding pipeline includes a low resolution pipeline that includes a low resolution motion estimation block, which generates downscaled image data by reducing resolution of the image data and determines a low resolution inter-frame prediction mode by performing a motion estimation search using the downscaled image data and previously downscaled image data. The video encoding pipeline also includes a main pipeline in parallel with the low resolution pipeline that includes a motion estimation block, which determines a candidate inter-frame prediction mode based at least in part on the low resolution inter-frame prediction mode, and a mode decision block, which determines a first rate-distortion cost associated with the candidate inter-frame prediction mode and determines prediction mode used to prediction encode the image data based at least in part on the first rate-distortion cost. 1. A video encoding pipeline configured to encode image data , comprising: generate downscaled image data by reducing resolution of the image data; and', 'determine a low resolution inter-frame prediction mode by performing a motion estimation search using the downscaled image data and previously downscaled image data; and, 'a low resolution pipeline configured to receive the image data, wherein the low resolution pipeline comprises a low resolution motion estimation block configured to a motion estimation block configured to determine a candidate inter-frame prediction mode based at least in part on the low resolution inter-frame prediction mode; and', 'a mode decision block configured to determine a first rate-distortion cost associated with the candidate inter-frame prediction mode and to determine a prediction mode used to prediction encode the image data based at least in part on the first rate-distortion cost., 'a main pipeline in parallel with the ...
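As a rough illustration of the low resolution pipeline, the Python sketch below downscales two frames by block averaging, runs a tiny full search on the downscaled data, and scales the winning vector back up as a candidate for the main pipeline's motion estimation. The downscale factor, block size, and search range are assumptions.

    import numpy as np

    def downscale(frame, factor=4):
        """Reduce resolution by block averaging (illustrative choice of filter)."""
        h, w = frame.shape
        return frame[:h - h % factor, :w - w % factor].reshape(
            h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def low_res_search(cur, ref, block=4, search=2):
        """Tiny full search on downscaled frames; returns the best (dy, dx)."""
        best_cost, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(ref, (dy, dx), axis=(0, 1))
                cost = np.abs(cur[:block, :block] - shifted[:block, :block]).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
        return best_mv

    cur = np.random.rand(64, 64)
    ref = np.roll(cur, (4, -4), axis=(0, 1))
    lr_mv = low_res_search(downscale(cur), downscale(ref))
    # The main pipeline would use the upscaled vector as a candidate for its own search.
    candidate_mv = (lr_mv[0] * 4, lr_mv[1] * 4)
    print(candidate_mv)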

Подробнее
07-04-2016 дата публикации

CHROMA QUANTIZATION IN VIDEO CODING

Номер: US20160100170A1
Принадлежит:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 119-. (canceled)20. A method comprising:receiving a plurality of sets of chroma quantization parameter (QP) offset values associated with the encoded video picture, wherein each set of chroma QP offset values in the plurality of sets of chroma QP offset values is associated with an index;receiving an encoded video picture comprising a plurality of quantization groups, wherein each quantization group is associated with an index; andcomputing a set of chroma quantization parameters for the video picture by computing a set of chroma QP offset values for each quantization group of the video picture by using the index of the quantization group to select a set chroma QP offset value from the plurality of sets of chroma QP offset values.21. The method of claim 20 , wherein computing the set of chroma quantization parameters comprises adding the selected set of chroma QP offset values to a luma QP value.22. The method of claim 21 , wherein the selected set of chroma QP offset values is a first set of chroma QP offset values claim 21 , the method further comprising adding a second set of chroma QP offset values associated with the video picture to the luma QP value and the first set of chroma QP offset values.23. The method of claim 20 , wherein each video comprises a plurality of ...
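The table-plus-index variant described above reduces to a small amount of arithmetic at decode time: each quantization group's index selects a (Cb, Cr) offset pair from the per-picture table, and that pair is added to the luma QP together with any higher-level offsets. The sketch below assumes an 8-bit QP range clipped to [0, 51] and an illustrative table; neither value comes from the abstract.

    # Hypothetical per-picture table of chroma QP offset sets (Cb offset, Cr offset).
    PICTURE_CHROMA_QP_OFFSET_TABLE = [(0, 0), (-2, -2), (3, 1), (-4, 2)]

    def chroma_qps(luma_qp, group_index, slice_offsets=(0, 0)):
        cb_off, cr_off = PICTURE_CHROMA_QP_OFFSET_TABLE[group_index]
        # Higher-level (e.g., slice) offsets and the group-specific offsets are additive.
        cb_qp = max(0, min(51, luma_qp + slice_offsets[0] + cb_off))
        cr_qp = max(0, min(51, luma_qp + slice_offsets[1] + cr_off))
        return cb_qp, cr_qp

    print(chroma_qps(luma_qp=30, group_index=2, slice_offsets=(1, -1)))  # -> (34, 30)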

Подробнее
14-04-2016 дата публикации

METADATA HINTS TO SUPPORT BEST EFFORT DECODING FOR GREEN MPEG APPLICATIONS

Номер: US20160105675A1
Принадлежит:

In a coding system, an encoder codes video data according to a predetermined protocol, which, when decoded causes an associated decoder to perform a predetermined sequence of decoding operations. The encoder may perform local decodes of the coded video data, both in the manner dictated by the coding protocol that is at work and also by one or more alternative decoding operations. The encoder may estimate relative performance of the alternative decoding operations as compared to a decoding operation that is mandated by the coding protocol. The encoder may provide identifiers in metadata that is associated with the coded video data to identify such levels of distortion and/or levels of resources conserved. A decoder may refer to such identifiers when determining when to engage alternative decoding operations as may be warranted under resource conservation policies. 1. A method , comprising:coding a video sequence according to a first coding protocol generating a coded video sequence therefrom,decoding the video sequence according to the first coding protocol,decoding the video sequence according to an alternate coding protocol,comparing decoding performance of the first coding protocol to decoding performance of the alternate coding protocol,transmitting, in a channel, coded video data representing the video sequence coded according to the first protocol, and an indicator of relative performance of the alternate coding protocol.2. The method of claim 1 , wherein the comparison of decoding performances includes estimating resource conservation to be achieved by decode of the coded video data according to the alternate coding protocol.3. The method of claim 1 , wherein the comparison of decoding performances includes estimating relative distortion between decode of the coded video data according to the alternate coding protocol and decode of the coded video data according to the first coding protocol.4. The method of claim 1 , further comprisingestimating state of a ...

Подробнее
26-03-2020 дата публикации

EXTENDING SUPPORTED COMPONENTS FOR ENCODING IMAGE DATA

Номер: US20200099942A1
Принадлежит: Apple Inc.

Support for additional components may be specified in a coding scheme for image data. A layer of a coding scheme that specifies color components may also specify additional components. Characteristics of the components may be specified in the same layer or a different layer of the coding scheme. An encoder or decoder may identify the specified components and determine the respective characteristics to perform encoding and decoding of image data. 1. One or more non-transitory , computer-readable storage media , storing program instructions that when executed on or across one or more computing devices cause the one or more computing devices to implement:identifying one or more components of image data to be encoded that are specified in a same layer of a coding scheme that also specifies a number of color components of the coding scheme;determining a respective one or more characteristics for the identified one or more components from the same layer of the coding scheme or a different layer of the coding scheme; andencoding the identified one or more components according to the respective one or more characteristics along with the specified number of color components in the same layer of a bit stream of encoded image data.2. The one or more non-transitory claim 1 , computer-readable storage media of claim 1 , wherein claim 1 , in determining the respective one or more characteristics for the identified one or more components claim 1 , the program instructions cause the one or more computing devices to implement parsing absolute values for the one or more characteristics from the same layer or the different layer.3. The one or more non-transitory claim 1 , computer-readable storage media of claim 1 , wherein claim 1 , in identifying one or more components of image data to be encoded claim 1 , the program instructions cause the one or more computing devices to implement determining a number for the one or more components and the one or more color components specified in ...

Подробнее
24-07-2014 дата публикации

METHOD AND APPARATUS FOR MPEG-2 TO H.264 VIDEO TRANSCODING

Номер: US20140205005A1
Автор: Cote Guy, Winger Lowell L.
Принадлежит: LSI Corporation

A method for transcoding from an MPEG-2 format to an H.264 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the MPEG-2 format to generate a plurality of macroblocks; (B) determining a plurality of indicators from a pair of the macroblocks, the pair of the macroblocks being vertically adjoining; and (C) coding the pair of the macroblocks into an output video stream in the H.264 format using one of (i) a field mode coding and (ii) a frame mode coding as determined from the indicators. 1. A method for transcoding from an MPEG-2 format to an H.264 format , comprising the steps of:(A) buffering an input video stream in said MPEG-2 format, wherein data in said input video stream represents a picture;(B) generating a plurality of macroblocks by decoding said data;(C) determining at least one item from a given one of said macroblocks; and(D) coding said given macroblock into an output video stream in said H.264 format using said at least one item to refine an H.264 motion vector of said given macroblock.2. The method according to claim 1 , wherein said at least one item comprises an MPEG-2 motion vector of said given macroblock claim 1 , the method further comprising the step of:generating an H.264 motion vector of said given macroblock using said MPEG-2 motion vector as a search center in a motion estimation.3. The method according to claim 1 , wherein (i) said given macroblock is decoded from an MPEG-2 intra-frame and coded into an H.264 predicted-frame and (ii) said at least one item comprises an MPEG-2 intra-frame concealment motion vector used as a starting point to determine said H.264 motion vector.4. The method according to claim 1 , wherein said at least one item comprises an MPEG-2 fcode claim 1 , the method further comprising the step of:determining a size of said refinement of said H.264 motion vector based on said MPEG-2 fcode.5. The method according to claim 1 , wherein said at least one item comprises an ...

Подробнее
25-04-2019 дата публикации

Overdrive for Electronic Device Displays

Номер: US20190122636A1
Принадлежит:

An electronic device is provided. The electronic device includes a display that is configured to show content that includes a plurality of frames. The plurality of frames includes a first frame that is associated with a pre-transition value. The plurality of frames also includes a second frame that is associated with a current frame value that corresponds to a first luminance. Additionally, the electronic device is configured to determine an overdriven current frame value corresponding to a second luminance that is greater than the first luminance. The electronic device is also configured to display the second frame using the overdriven current frame value. 1. An electronic device , comprising:a display configured to display content; identify a high contrast transition from a first gray level of a first frame of the content to a second gray level of a second frame of the content;', 'determine an overdrive over-compensation mitigation gray level based upon the high contrast transition;', 'identify a transition from the second frame of the content to a third frame of the content having a third gray level;', 'determine an overdrive gray level based upon the overdrive over-compensation mitigation gray level and the third gray level; and', 'cause the third frame of the content to be displayed at the overdrive gray level., 'one or more processors configured to2. The electronic device of claim 1 , wherein the one or more processors are configured to determine the overdrive over-compensation mitigation gray level based on a delta between the first gray level and the second gray level.3. The electronic device of claim 1 , comprising memory claim 1 , wherein the electronic device is configured to store the overdrive over-compensation mitigation gray level in the memory as a replacement to the second gray level.4. The electronic device of claim 1 , wherein the electronic device is configured to perform a brightness band adjustment such that a first maximum luminance of the ...

Подробнее
28-08-2014 дата публикации

FLASH SYNCHRONIZATION USING IMAGE SENSOR INTERFACE TIMING SIGNAL

Номер: US20140240587A1
Принадлежит: Apple Inc.

Certain aspects of this disclosure relate to an image signal processing system that includes a flash controller that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame. 1. A method comprising:receiving a request on an electronic device having an image signal processing sub-system to store a target image frame from a set of image frames corresponding to an image scene being acquired by an image sensor while the image signal processing sub-system is operating in a preview mode;determining whether to illuminate the image scene using a flash device;if the image scene is to be illuminated, acquiring a previous image frame that occurs prior to the target image frame;processing the previous image frame to obtain an updated set of image statistics based on the previous image frame;operating the image signal processing sub-system of the electronic device in a capture mode to acquire the target frame using the updated set of image statistics and with a flash device activated; andstoring the target image frame in a memory device of the electronic device.2. The method of claim 1 , wherein acquiring the previous image frame comprises activating the flash device during the acquisition of the previous image frame.3. The method of claim 1 , wherein the updated set of image statistics comprises auto-white balance parameters ...
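The flash-activation arithmetic in the abstract (end of previous frame, plus vertical blanking, minus an offset for the signal delay, minus a second offset so the flash is on before the first pixel of the target frame) can be shown as a short worked example. The timing values below are hypothetical and expressed in microseconds.

    def flash_activation_time(end_of_prev_frame_delayed, vertical_blanking,
                              offset1_signal_delay, offset2_margin):
        """Time at which to activate the flash so it is on before the target frame."""
        start_of_target_frame = (end_of_prev_frame_delayed
                                 + vertical_blanking
                                 - offset1_signal_delay)   # compensate the delayed timing signal
        return start_of_target_frame - offset2_margin       # fire early by a safety margin

    print(flash_activation_time(end_of_prev_frame_delayed=33000,
                                vertical_blanking=1500,
                                offset1_signal_delay=120,
                                offset2_margin=200))   # -> 34180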

Подробнее
14-06-2018 дата публикации

Electronic Device Display With Charge Accumulation Tracker

Номер: US20180166032A1
Принадлежит:

An electronic device may generate content that is to be displayed on a display. The display may have an array of liquid crystal display pixels for displaying image frames of the content. The image frames may be displayed with positive and negative polarities to help reduce charge accumulation effects. A charge accumulation tracker may analyze the image frames to determine when there is a risk of excess charge accumulation. The charge accumulation tracker may analyze information on gray levels, frame duration, and frame polarity. The charge accumulation tracker may compute a charge accumulation metric for entire image frames or may process subregions of each frame separately. When subregions are processed separately, each subregion may be individually monitored for a risk of excess charge accumulation. 1. A display comprising: rows and columns of pixels that display image frames; and a charge accumulation tracker, wherein the charge accumulation tracker receives inputs for each of the image frames including gray level values and frame duration information, wherein the charge accumulation tracker computes a charge accumulation metric for each of the image frames as a product of the gray level values and the frame duration information, and wherein the charge accumulation tracker takes remedial action when the charge accumulation metric exceeds a predetermined threshold. 2. The display defined in claim 1, wherein the image frames are displayed on the display with positive and negative polarities and wherein the remedial action comprises adjusting frame polarity of the image frames. 3. The display defined in claim 1, wherein the image frames are displayed with a variable refresh rate, and wherein the remedial action comprises adjusting the refresh rate. 4. The display defined in claim 1, wherein the gray level values include an average gray level value for each of the image frames. 5. The display defined in claim 1, wherein each of the image frames includes ...
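Claim 1 defines the metric as the product of gray level values and frame duration; the sketch below accumulates that product per frame, signed by frame polarity, and flags remedial action when a threshold is exceeded. The polarity handling and the threshold value are assumptions added for the example.

    # Sketch of the tracker in claim 1; the threshold and polarity sign convention are assumed.
    CHARGE_THRESHOLD = 50000.0

    def track_charge(frames):
        """frames: iterable of (avg_gray_level, duration_ms, polarity in {+1, -1})."""
        accumulated = 0.0
        for avg_gray, duration_ms, polarity in frames:
            accumulated += polarity * avg_gray * duration_ms   # gray level x duration, signed
            if abs(accumulated) > CHARGE_THRESHOLD:
                # Remedial action could be a polarity or refresh-rate adjustment (claims 2-3).
                return "remedial action needed", accumulated
        return "ok", accumulated

    print(track_charge([(200, 16.7, +1), (210, 16.7, +1), (205, 500.0, +1)]))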

Подробнее
16-06-2016 дата публикации

DELAYED CHROMA PROCESSING IN BLOCK PROCESSING PIPELINES

Номер: US20160173885A1
Принадлежит: Apple Inc.

A block processing pipeline in which macroblocks are input to and processed according to row groups so that adjacent macroblocks on a row are not concurrently at adjacent stages of the pipeline. The input method may allow chroma processing to be postponed until after luma processing. One or more upstream stages of the pipeline may process luma elements of each macroblock to generate luma results such as a best mode for processing the luma elements. Luma results may be provided to one or more downstream stages of the pipeline that process chroma elements of each macroblock. The luma results may be used to determine processing of the chroma elements. For example, if the best mode for luma is an intra-frame mode, then a chroma processing stage may determine a best intra-frame mode for chroma and reconstruct the chroma elements according to the best chroma intra-frame mode. 120-. (canceled)21. An apparatus , comprising: one or more luma processing stages configured to process luma elements of the macroblocks to generate results including a best mode for each macroblock, wherein the best mode is one of an intra-frame mode or an inter-frame mode; and', determine that the best mode received from the one or more luma processing stages is an intra-frame mode;', 'perform intra-frame estimation for the chroma elements of the macroblock to determine one of a plurality of chroma intra-frame modes as a best chroma intra-frame mode for the macroblock; and', 'perform chroma reconstruction for the macroblock according to the determined best chroma intra-frame mode., 'a chroma reconstruction stage downstream to the one or more luma processing stages and configured to, for at least one macroblock], 'a block processing pipeline configured to process macroblocks of pixels from a current frame passing through the pipeline, wherein the pipeline includes a plurality of stages each configured to perform one or more operations on a macroblock currently at the respective stage, wherein each ...
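The sketch below models the downstream chroma stage: it receives the luma stage's best-mode result and, when that mode is intra, evaluates a set of chroma intra modes and reconstructs with the cheapest one. The candidate mode list, the placeholder predictor, and the SAD cost are illustrative assumptions, not the pipeline's actual prediction logic.

    import numpy as np

    CHROMA_INTRA_MODES = ["DC", "horizontal", "vertical", "planar"]   # assumed mode set

    def predict_chroma(block, mode):
        # Placeholder predictor: a real encoder derives this from neighboring samples.
        return np.full_like(block, block.mean() if mode == "DC" else block[0, 0])

    def chroma_stage(chroma_block, luma_best_mode):
        if luma_best_mode != "intra":
            return None  # inter path: chroma follows the luma motion information instead
        costs = {m: np.abs(chroma_block - predict_chroma(chroma_block, m)).sum()
                 for m in CHROMA_INTRA_MODES}
        best_chroma_mode = min(costs, key=costs.get)
        reconstruction = predict_chroma(chroma_block, best_chroma_mode)
        return best_chroma_mode, reconstruction

    block = np.random.rand(8, 8)
    print(chroma_stage(block, "intra")[0])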

Подробнее
13-06-2019 дата публикации

BACKWARD-COMPATIBLE VIDEO CAPTURE AND DISTRIBUTION

Номер: US20190182487A1
Принадлежит: Apple Inc.

Video processing techniques and pipelines that support capture, distribution, and display of high dynamic range (HDR) image data to both HDR-enabled display devices and display devices that do not support HDR imaging. A sensor pipeline may generate standard dynamic range (SDR) data from HDR data captured by a sensor using tone mapping, for example local tone mapping. Information used to generate the SDR data may be provided to a display pipeline as metadata with the generated SDR data. If a target display does not support HDR imaging, the SDR data may be directly rendered by the display pipeline. If the target display does support HDR imaging, then an inverse mapping technique may be applied to the SDR data according to the metadata to render HDR data for display. Information used in performing color gamut mapping may also be provided in the metadata and used to recover clipped colors for display. 120.-. (canceled)21. A system comprising: obtain encoded video data in a standard color space and metadata comprising parameters used in encoding input video data in an input color space to generate the encoded video data in the standard color space; and', 'convert the encoded video data in the standard color space to video data in an output color space of a display device based at least in part on the metadata, wherein the metadata is used to recover information from the input video data not encoded in the encoded video data;', 'wherein the input color space, the standard color space, and the output color space are different., 'a decoding pipeline configured to This application is a continuation of U.S. patent application Ser. No. 14/631,401, filed Feb. 25, 2015, which claims benefit of priority of U.S. Provisional Application Ser. No. 61/944,484 entitled “DISPLAY PROCESSING METHODS AND APPARATUS” filed Feb. 25, 2014, and U.S. patent application Ser. No. 14/631,401 claims benefit of priority of U.S. Provisional Application Ser. No. 61/946,638 entitled “DISPLAY PROCESSING ...

Подробнее
20-06-2019 дата публикации

PANEL OVERDRIVE COMPENSATION

Номер: US20190189082A1
Принадлежит:

Systems and methods for interpolating overdrive values using a lookup table to compensate for potential display artifacts. Interpolating includes applying a first interpolation type to a first portion of the lookup table when a point to be interpolated is in the first portion of the lookup table. However, interpolating includes applying a second interpolation type to a second portion of the lookup table when the point to be interpolated is in the second portion of the lookup table. The interpolated values are then used to drive pixels of a display panel. 1. A method comprising: applying a first interpolation type to a first portion of the lookup table when a point to be interpolated is in the first portion of the lookup table; and', 'applying a second interpolation type to a second portion of the lookup table when the point to be interpolated is in the second portion of the lookup table; and, 'interpolating overdrive values using a lookup table to compensate for potential display artifacts bydriving pixels of a display panel using the interpolated overdrive values.2. The method of claim 1 , wherein interpolating the overdrive values comprises interpolating the overdrive value based at least in part on a grey level of a current frame.3. The method of claim 1 , wherein interpolating the overdrive values comprises interpolating the overdrive value based at least in part on a grey level of a previous frame.4. The method of claim 1 , wherein the first interpolation type comprises barycentric interpolation.5. The method of claim 4 , wherein the first portion corresponds to interpolation regions near a diagonal boundary separating positive overdrive values from negative overdrive values.6. The method of claim 5 , wherein the first portion corresponds to interpolation regions above the diagonal boundary and within a threshold value of the diagonal boundary in an upward direction.7. The method of claim 5 , wherein the first portion corresponds to interpolation regions below ...
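The sketch below shows the region-dependent interpolation the claims describe: a coarse overdrive lookup table indexed by (previous gray level, current gray level), with barycentric interpolation over the triangle containing the point when it lies within a band around the diagonal, and ordinary bilinear interpolation elsewhere. The table contents, the 16-level step, and the band width are all illustrative assumptions.

    import numpy as np

    STEP = 16
    LEVELS = np.arange(0, 257, STEP)
    PREV, CUR = np.meshgrid(LEVELS, LEVELS, indexing="ij")
    # Toy overdrive table: push the current level further in the direction of the transition.
    LUT = np.clip(CUR + 0.5 * (CUR - PREV), 0, 255).astype(float)
    DIAGONAL_BAND = 8   # assumed width of the region that uses the first interpolation type

    def interpolate_overdrive(prev_gray, cur_gray):
        i = min(int(prev_gray // STEP), len(LEVELS) - 2)
        j = min(int(cur_gray // STEP), len(LEVELS) - 2)
        u = (prev_gray - LEVELS[i]) / STEP
        v = (cur_gray - LEVELS[j]) / STEP
        c00, c10, c01, c11 = LUT[i, j], LUT[i + 1, j], LUT[i, j + 1], LUT[i + 1, j + 1]
        if abs(cur_gray - prev_gray) <= DIAGONAL_BAND:
            # First interpolation type: barycentric over the triangle containing the point.
            if u + v <= 1.0:
                return (1 - u - v) * c00 + u * c10 + v * c01
            return (u + v - 1) * c11 + (1 - v) * c10 + (1 - u) * c01
        # Second interpolation type: ordinary bilinear interpolation.
        return ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
                + (1 - u) * v * c01 + u * v * c11)

    print(interpolate_overdrive(100.0, 140.0), interpolate_overdrive(120.0, 124.0))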

Подробнее
22-07-2021 дата публикации

SERVER-SIDE ADAPTIVE VIDEO PROCESSING

Номер: US20210227226A1
Принадлежит: Apple Inc.

Adaptive video processing for a target display panel may be implemented in or by a server/encoding pipeline. The adaptive video processing methods may obtain and take into account video content and display panel-specific information including display characteristics and environmental conditions (e.g., ambient lighting and viewer location) when processing and encoding video content to be streamed to the target display panel in an ambient setting or environment. The server-side adaptive video processing methods may use this information to adjust one or more video processing functions as applied to the video data to generate video content in the color gamut and dynamic range of the target display panel that is adapted to the display panel characteristics and ambient viewing conditions. 1. A system , comprising: receive video data from one or more sources; and', obtain one or more characteristics of the display panel;', 'obtain one or more environment metrics indicating current environmental conditions at the display panel;', 'process the video data according to one or more characteristics of the video data, the one or more display panel characteristics, and the one or more environment metrics to generate video content adapted to the display panel characteristics according to the current environmental conditions as indicated by the environment metrics;', 'encode the adapted video content according to a compressed video format to generate encoded video content; and', 'provide the encoded video content to a decoding pipeline associated with the respective display panel., 'for each of one or more target display panels], 'an encoding pipeline configured to2. The system as recited in claim 1 , wherein display panel contrast is not affected by the non-linear scaling of the display panel brightness.3. The system as recited in claim 1 , wherein claim 1 , to scale the display panel brightness up or down according to a non-linear brightness adjustment function claim 1 , the ...

Подробнее
29-07-2021 дата публикации

Chroma Quantization in Video Coding

Номер: US20210235088A1
Принадлежит:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 1. A method comprising:identifying one or more initial sets of chroma quantization parameter (QP) offset values at one or more levels of a video coding hierarchy, each set of chroma QP offset values at a particular level for specifying chroma QPs of video units encompassed by said particular level;identifying an additional set of chroma QP offset values for a plurality of video units in the video coding hierarchy; andcomputing a set of chroma QP values for the plurality of video units, wherein the identified initial sets of chroma QP offset values and the identified additional set of chroma QP offset values are additive components of the set of chroma QP value for the plurality of video units.2. The method of claim 1 , wherein the identified initial sets of chroma quantization parameters are for video units that encompass the particular level of the video coding hierarchy.3. The method of further comprising identifying a luma quantization parameter value for the plurality of video units.4. The method of claim 3 , wherein computing the set of chroma QP values comprises adding the identified initial sets of chroma QP offset values and the identified additional set of chroma QP offset values to a luma QP value.5. The method of claim 3 , wherein identifying the additional set of ...

Подробнее
29-07-2021 дата публикации

BACKWARD-COMPATIBLE VIDEO CAPTURE AND DISTRIBUTION

Номер: US20210235093A1
Принадлежит:

Video processing techniques and pipelines that support capture, distribution, and display of high dynamic range (HDR) image data to both HDR-enabled display devices and display devices that do not support HDR imaging. A sensor pipeline may generate standard dynamic range (SDR) data from HDR data captured by a sensor using tone mapping, for example local tone mapping. Information used to generate the SDR data may be provided to a display pipeline as metadata with the generated SDR data. If a target display does not support HDR imaging, the SDR data may be directly rendered by the display pipeline. If the target display does support HDR imaging, then an inverse mapping technique may be applied to the SDR data according to the metadata to render HDR data for display. Information used in performing color gamut mapping may also be provided in the metadata and used to recover clipped colors for display. 1. A system comprising: receive input video data represented at a high dynamic range in a first color space;', 'apply a tone mapping technique to the input video data to convert the input video data to video data represented at a lower dynamic range in a second color space; and', 'generate mapping metadata indicating information used in the tone mapping technique to convert the input video data to the lower dynamic range video data;, 'an encoding pipeline configured to obtain the lower dynamic range video data and the mapping metadata;', 'convert the lower dynamic range video data from the second color space to a third color space to generate video data represented in the third color space; and', 'apply an inverse tone mapping technique to the video data represented in the third color space to generate video data represented at a higher dynamic range in the third color space, wherein the inverse tone mapping technique uses the mapping metadata to recover at least part of the high dynamic range of the input video data., 'a decoding pipeline configured to2. The system as ...

Подробнее
04-07-2019 дата публикации

CHROMA QUANTIZATION IN VIDEO CODING

Номер: US20190208204A1
Принадлежит:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 127-. (canceled)28. A method comprising:encoding a video picture and storing the encoded video picture in a bitstream as a hierarchical coding structure;storing in the bitstream two initial chroma quantization parameter (QP) offset values defined at two levels of the hierarchical coding structure;for each of a plurality of quantization groups within a third level of the hierarchical coding structure, storing in the bitstream an additional chroma QP offset value for the quantization group for computing a chroma QP value for that quantization group, wherein different additional non-zero chroma QP offset values can be stored for at least two quantization groups.29. The method of claim 28 , each quantization group comprising a set of video data units from a plurality of video data units of the encoded video picture.30. The method of further comprising quantizing the set of video data units of each quantization group using the computed chroma QP values.31. The method of claim 29 , the computed chroma QP values for decoding the encoded video picture.32. The method of claim 29 , an initial chroma QP offset value specified at a particular level of the hierarchical coding structure being for video units that are encompassed by the particular level of the hierarchical coding structure. ...

Подробнее
04-07-2019 дата публикации

CHROMA QUANTIZATION IN VIDEO CODING

Номер: US20190208205A1
Принадлежит:

A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy. 119-. (canceled)20. A non-transitory machine readable medium storing a program for execution by at least one of the processing units , the program comprising sets of instructions for:receiving in a bitstream an encoded video picture, a plurality of chroma quantization parameter (QP) offset values, and a plurality of quantization groups, each quantization group comprising a set of coding units; anddecoding the encoded video picture by computing a plurality of chroma QPs for the video picture and using the computed chroma QPs to decode the video picture,wherein computing a plurality of chroma QPs for the video picture comprises computing a chroma QP value for the coding units of each quantization groups by (i) using an index associated with the quantization group to select a chroma QP offset value from the plurality of chroma QP offset values, and (ii) computing the chroma QP value for the set of coding units of the quantization group based on the selected chroma QP offset value and a luma QP value associated with the coding unit.21. The non-transitory machine readable medium of claim 20 , wherein the set of instructions for using an index associated with the quantization group to select a chroma QP offset value comprises:a set of instructions for determining from a value of a ...

Подробнее
27-08-2015 дата публикации

SERVER-SIDE ADAPTIVE VIDEO PROCESSING

Номер: US20150243243A1
Принадлежит: Apple Inc.

Adaptive video processing for a target display panel may be implemented in or by a server/encoding pipeline. The adaptive video processing methods may obtain and take into account video content and display panel-specific information including display characteristics and environmental conditions (e.g., ambient lighting and viewer location) when processing and encoding video content to be streamed to the target display panel in an ambient setting or environment. The server-side adaptive video processing methods may use this information to adjust one or more video processing functions as applied to the video data to generate video content in the color gamut and dynamic range of the target display panel that is adapted to the display panel characteristics and ambient viewing conditions. 1. A system, comprising: an encoding pipeline configured to: receive video data from one or more sources; and, for each of one or more target display panels: obtain one or more characteristics of the display panel; obtain one or more environment metrics indicating current environmental conditions at the display panel; process the video data according to one or more characteristics of the video data, the one or more display panel characteristics, and the one or more environment metrics to generate video content adapted to the display panel characteristics according to the current environmental conditions as indicated by the environment metrics; encode the adapted video content according to a compressed video format to generate encoded video content; and provide the encoded video content to a decoding pipeline associated with the respective display panel. 2. The system as recited in claim 1, wherein the display panel characteristics include color gamut of the display panel, and wherein, to process the video data, the encoding pipeline is configured to apply color gamut mapping to the video data to generate video content in the color gamut of the ...

Подробнее
27-08-2015 дата публикации

USER INTERFACE AND GRAPHICS COMPOSITION WITH HIGH DYNAMIC RANGE VIDEO

Номер: US20150245004A1
Принадлежит:

A method and system for adaptively mixing video components with graphics/UI components, where the video components and graphics/UI components may be of different types, e.g., different dynamic ranges (such as HDR, SDR) and/or color gamut (such as WCG). The mixing may result in a frame optimized for a display device's color space, ambient conditions, viewing distance and angle, etc., while accounting for characteristics of the received data. The methods include receiving video and graphics/UI elements, converting the video to HDR and/or WCG, performing statistical analysis of received data and any additional applicable rendering information, and assembling a video frame with the received components based on the statistical analysis. The assembled video frame may be matched to a color space and displayed. The video data and graphics/UI data may have or be adjusted to have the same white point and/or primaries. 1. A computer-implemented compositing method, comprising: receiving, by a processor, first data having a first dynamic range; receiving, by the processor, second data having a second dynamic range that is lower than the first dynamic range; converting, by the processor, the first data to a third dynamic range having a transfer function (TF) that matches at least a segment of a TF of the second dynamic range; assembling, by the processor, a video frame from the converted first data and the second data; and rendering, by the processor, an output frame based on the assembled video frame. 2. The method of claim 1, further comprising converting, by the processor, the second data to a fourth dynamic range at least partially overlapping with the third dynamic range. 3. The method of claim 1, wherein the conversion of the first data retains at least one of a white point and primary chromaticities of the first data. 4. The method of claim 1, wherein the conversion of the first data includes converting the first data to one of a high dynamic range ...

Подробнее
27-08-2015 дата публикации

DISPLAY-SIDE ADAPTIVE VIDEO PROCESSING

Номер: US20150245043A1
Принадлежит: Apple Inc.

Adaptive video processing for a target display panel may be implemented in or by a decoding/display pipeline associated with the target display panel. The adaptive video processing methods may take into account video content, display characteristics, and environmental conditions including but not limited to ambient lighting and viewer location when processing and rendering video content for a target display panel in an ambient setting or environment. The display-side adaptive video processing methods may use this information to adjust one or more video processing functions as applied to the video data to render video for the target display panel that is adapted to the display panel according to the ambient viewing conditions. 1. A system , comprising:one or more sensors configured to detect one or more environmental conditions;a display panel; and receive encoded video data;', 'decode the encoded video data to generate video content;', 'process the video content according to one or more characteristics of the video content, one or more characteristics of the display panel, and one or more current environmental conditions as detected by the one or more sensors to generate video content adapted for viewing on the display panel under the current environmental conditions; and', 'output the processed video content to the display panel for display., 'a decoding pipeline configured to2. The system as recited in claim 1 , wherein the decoding pipeline is configured to process the video content according to one or more video processing techniques claim 1 , wherein at least one of the video processing techniques is adjusted according to the current environmental conditions to generate the video content adapted for viewing on the display panel under the current environmental conditions.3. The system as recited in claim 2 , wherein the video processing techniques include one or more of noise reduction claim 2 , artifact reduction claim 2 , scaling claim 2 , sharpening claim 2 , ...

Подробнее
27-08-2015 дата публикации

BACKWARD-COMPATIBLE VIDEO CAPTURE AND DISTRIBUTION

Номер: US20150245044A1
Принадлежит: Apple Inc.

Video processing techniques and pipelines that support capture, distribution, and display of high dynamic range (HDR) image data to both HDR-enabled display devices and display devices that do not support HDR imaging. A sensor pipeline may generate standard dynamic range (SDR) data from HDR data captured by a sensor using tone mapping, for example local tone mapping. Information used to generate the SDR data may be provided to a display pipeline as metadata with the generated SDR data. If a target display does not support HDR imaging, the SDR data may be directly rendered by the display pipeline. If the target display does support HDR imaging, then an inverse mapping technique may be applied to the SDR data according to the metadata to render HDR data for display. Information used in performing color gamut mapping may also be provided in the metadata and used to recover clipped colors for display. 1. A system comprising: receive input video data represented at a high dynamic range in a first color space;', 'apply a tone mapping technique to the input video data to convert the input video data to video data represented at a lower dynamic range in a second color space; and', 'generate mapping metadata indicating information used in the tone mapping technique to convert the input video data to the lower dynamic range video data;, 'an encoding pipeline configured to obtain the lower dynamic range video data and the mapping metadata;', 'convert the lower dynamic range video data from the second color space to a third color space to generate video data represented in the third color space; and', 'apply an inverse tone mapping technique to the video data represented in the third color space to generate video data represented at a higher dynamic range in the third color space, wherein the inverse tone mapping technique uses the mapping metadata to recover at least part of the high dynamic range of the input video data., 'a decoding pipeline configured to2. The system as ...
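The round trip described above can be illustrated with a deliberately simplified Python sketch: a single global power-curve tone map stands in for the local tone mapping, its parameters travel as metadata, and the display side either renders the SDR data directly or applies the inverse mapping to approximate the original HDR data. The exponent, peak value, and function names are assumptions made for the example.

    import numpy as np

    def encode_side(hdr, exponent=0.5, hdr_peak=4.0):
        sdr = np.clip(hdr / hdr_peak, 0.0, 1.0) ** exponent       # tone map into the SDR range
        metadata = {"exponent": exponent, "hdr_peak": hdr_peak}   # information used in the mapping
        return sdr, metadata

    def display_side(sdr, metadata, display_supports_hdr):
        if not display_supports_hdr:
            return sdr                                            # render the SDR data directly
        # Inverse mapping reconstructs (at least part of) the original dynamic range.
        return (sdr ** (1.0 / metadata["exponent"])) * metadata["hdr_peak"]

    hdr = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    sdr, meta = encode_side(hdr)
    print(display_side(sdr, meta, display_supports_hdr=True))   # ~ the original HDR values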

Подробнее
03-09-2015 дата публикации

VIDEO ENCODING OPTIMIZATION WITH EXTENDED SPACES

Номер: US20150249833A1
Принадлежит:

Embodiments of the present invention may provide a video coder. The video coder may include an encoder to perform coding operations on a video signal in a first format to generate coded video data, and a decoder to decode the coded video data. The video coder may also include an inverse format converter to convert the decoded video data to second format that is different than the first format and an estimator to generate a distortion metric using the decoded video data in the second format and the video signal in the second format. The encoder may adjust the coding operations based on the distortion metric. 1. A method , comprising:performing coding operations on an in-process formatted input signal to generate coded video data;decoding the coded video data;converting the decoded video data to another format than the in-process format;estimating coding factors using the another formatted decoded video data and the input signal in the another format;based on the estimated factors, adjusting the coding operations; andoutputting the coded video data.2. The method of claim 1 , wherein the another format is an original format of the input signal claim 1 , and the original format is a higher resolution than the in-process format.3. The method of claim 1 , wherein the coding operations include calculating a distortion metric.4. The method of claim 1 , wherein the coding operations include mode decisions for the current block to be encoded.5. The method of claim 1 , further comprising:pre-analyzing the input signal in the another format to derive target space information, andcontrolling quantization parameters during the coding operations of the in-process formatted input signal based on the derived information.6. The method of claim 1 , wherein the another format is based on a target display.7. The method of claim 1 , further comprising:converting the input signal into a plurality of formats including the another format;converting the decoded video data into the plurality ...
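The feedback loop in the method can be sketched as follows: code and locally decode the in-process-format block, convert the decoded result back to the other (e.g., original, higher-resolution) format, and measure distortion there so that coding decisions are driven by quality in the target space. The "codec" below is just coarse quantization and the bit depths are assumptions; only the structure of the loop follows the abstract.

    import numpy as np

    def to_in_process(signal):          # e.g., downconvert a 10-bit source to 8-bit precision
        return np.round(signal / 4.0)

    def from_in_process(signal):        # inverse format conversion back to the original format
        return signal * 4.0

    def code_and_decode(block, qp):
        step = 1 + qp
        return np.round(block / step) * step     # stand-in for encode plus local decode

    def distortion_in_original_space(original_10bit, qp):
        decoded = code_and_decode(to_in_process(original_10bit), qp)
        reconstructed = from_in_process(decoded)
        return float(np.mean((original_10bit - reconstructed) ** 2))   # MSE-style metric

    block = np.random.randint(0, 1024, (8, 8)).astype(float)
    # The encoder could compare candidate QPs (or coding modes) using this metric.
    print({qp: round(distortion_in_original_space(block, qp), 1) for qp in (0, 4, 8)})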

Подробнее
10-09-2015 дата публикации

Display Pipe Statistics Calculation for Video Encoder

Номер: US20150255047A1
Принадлежит: Apple Inc.

In an embodiment, a system includes a display processing unit configured to process a video sequence for a target display. In some embodiments, the display processing unit is configured to composite the frames from frames of the video sequence and one or more other image sources. The display processing unit may be configured to write the processed/composited frames to memory, and may also be configured to generate statistics over the frame data, where the generated statistics are usable to encode the frame in a video encoder. The display processing unit may be configured to write the generated statistics to memory, and the video encoder may be configured to read the statistics and the frames. The video encoder may be configured to encode the frame responsive to the statistics. 1. A system comprising:a display processing unit configured to generate frames of a video sequence, wherein the display processing unit is further configured to generate one or more statistics over data in the frames;a memory controller coupled to the display processing unit and configured to couple to a memory, wherein the display controller is configured to write the frames to the memory and further configured to write the one or more statistics to the memory; anda video encoder coupled to the memory controller, wherein the video encoder is configured to read the one or more statistics and the frames from memory and to encode the video sequence responsive to the one or more statistics.2. The system as recited in wherein the one or more statistics comprise a histogram of pixel color values within at least a portion of the frame.3. The system as recited in wherein the histogram is based on a plurality of most significant bits of the pixel color values.4. The system as recited in wherein the histogram is generated over an entirety of the frame.5. The system as recited in wherein the one or more statistics comprise a value generated for each macroblock in at least a portion of the frame.6. The ...
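The histogram statistic mentioned in the claims is straightforward to sketch: bucket the pixels of a frame (or a portion of it) by their most significant bits and hand the counts to the encoder alongside the frame. Using the top 4 bits is an assumption for the example.

    import numpy as np

    def msb_histogram(frame_8bit, msb=4):
        buckets = frame_8bit >> (8 - msb)                 # keep only the most significant bits
        return np.bincount(buckets.ravel(), minlength=1 << msb)

    frame = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    hist = msb_histogram(frame)
    print(hist, hist.sum())   # a video encoder could read these counts along with the frame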

Подробнее
15-10-2015 дата публикации

SYSTEMS AND METHODS FOR RGB IMAGE PROCESSING

Номер: US20150296193A1
Принадлежит: Apple Inc.

Systems and methods for processing image data in RGB format are provided. In one example, an electronic device includes memory to store image data in raw or RGB format, or both, and an RGB image processing pipeline to process the image data. Specifically, the RGB image processing pipeline may process the image data regardless of whether the image data is of raw or RGB format. The RGB image processing pipeline may include receiving logic to receive the image data in raw or RGB format and demosaicing logic to, when the receiving logic receives the image data in raw format, convert the image data into RGB format. The logic may include local tone mapping logic configured to apply spatially varying tone curves to the image data, a color correction matrix configured to correct color in the image data, and gamma logic configured to transform the image data into gamma space. 1. An image signal processing system comprising:data conversion logic configured to convert unsigned input image data deriving from a digital image sensor into signed input image data to preserve negative noise from the sensor; andan RGB-format image processing pipeline configured to process the signed input image data into processed signed RGB output image data.2. The image signal processing system of claim 1 , wherein the input image data is in raw or RGB format and the RGB-format image processing pipeline is configured to receive both raw and RGB-format signed input image data.3. The image signal processing system of claim 1 , wherein the data conversion logic is configured to scale the signed input image data by a programmable scale value.4. The image signal processing system of claim 3 , wherein claim 3 , when the signed input image data comprises RGB-format input image data claim 3 , the data conversion logic is configured to apply the programmable scale value equally to all three color components of the signed input image data.5. The image signal processing system of claim 1 , wherein the data ...

Подробнее
11-10-2018 дата публикации

CONTENT-BASED STATISTICS FOR AMBIENT LIGHT SENSING

Номер: US20180293958A1
Принадлежит:

An electronic display includes a display side and an ambient light sensor configured to measure received light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side. 1. An electronic device comprising: a display panel comprising a plurality of pixels each configured to emit light with a plurality of regions; an ambient light sensor arranged behind the display panel to measure ambient light; and an ambient light sensor compensator configured to estimate how much light detected by the ambient light sensor can be attributed to the emitted light based at least in part on weighting data corresponding to pixels of the plurality of pixels that are closer to the ambient light sensor more heavily than data corresponding to pixels of the plurality of pixels that are farther from the ambient light sensor according to which region of the plurality of regions the respective pixels are in. 2. The electronic device of claim 1, wherein the plurality of regions comprises overlapping regions. 3. The electronic device of claim 2, wherein the overlapping regions comprise rectangular-shaped regions. 4. The electronic device of claim 2, wherein the plurality of regions comprises concentric regions. 5. The electronic device of claim 4, wherein the regions comprise rectangular-shaped regions. 6. The electronic device of claim 1, wherein the ambient light sensor compensator is configured to determine how much of measured ambient light can be attributed to ambient brightness from outside the electronic device by substantially removing the light emitted from the display panel from the measured ambient light to determine how much of the measured ambient light is properly attributed to the ambient brightness from outside the electronic device. 7. The electronic device of claim 6, wherein the ambient light sensor compensator ...
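A minimal sketch of the compensator in claims 1 and 6: weight per-region pixel statistics more heavily the closer the region is to the sensor, convert that weighted sum into an estimated display contribution, and subtract it from the measured reading to recover the ambient component. The region weights and the coupling factor are illustrative assumptions.

    import numpy as np

    REGION_WEIGHTS = np.array([0.6, 0.25, 0.1, 0.05])   # nearest region weighted most heavily
    EMISSION_SCALE = 0.002                              # assumed display-to-sensor coupling factor

    def ambient_estimate(sensor_reading, region_avg_levels):
        display_contribution = EMISSION_SCALE * float(
            np.dot(REGION_WEIGHTS, region_avg_levels))
        # Clamp at zero so noise cannot produce a negative ambient estimate.
        return max(0.0, sensor_reading - display_contribution)

    print(ambient_estimate(sensor_reading=120.0,
                           region_avg_levels=np.array([200.0, 180.0, 150.0, 90.0])))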

Подробнее
26-11-2015 дата публикации

RAW SCALER WITH CHROMATIC ABERRATION CORRECTION

Номер: US20150341604A1
Принадлежит: Apple Inc.

Systems and methods for down-scaling are provided. In one example, a method for processing image data includes determining a plurality of output pixel locations using a position value stored by a position register, using the current position value to select a center input pixel from the image data and selecting an index value, selecting a set of input pixels adjacent to the center input pixel, selecting a set of filtering coefficients from a filter coefficient lookup table using the index value, filtering the set of source input pixels to apply a respective one of the set of filtering coefficients to each of the set of source input pixels to determine an output value for the current output pixel at the current position value, and correcting chromatic aberrations in the set of source input pixels. 1. An image signal processing system comprising: an input configured to receive raw image data acquired by a digital image sensor; and', 'a vertical resampler and a horizontal resampler, wherein the vertical resampler and the horizontal resampler are configured to produce X/Y coordinate pairs defining a source of an output sample within a specific color of an input frame and using the X/Y coordinates within the input frame to generate the output sample, the output sample comprising scaled and chromatic aberration corrected raw data., 'a raw scaler block comprising2. The image signal processing system of claim 1 , wherein the vertical resampler comprises:a vertical coordinate generator configured to compute Y coordinates on the sensor for every output sample of the vertical resampler;a Y displacement computation block, configured to compute X and Y displacements of the current vertical resampler output sample; anda vertical sensor to component coordinate translation, configured to translate corrected sensor Y coordinate to the Y coordinate within an appropriate input color frame.3. The image signal processing system of claim 2 , wherein the vertical coordinate generator ...

Подробнее