Search form

Supports entering multiple search phrases (one per line). The search provides morphological support for Russian and English.

Total found: 11755. Displayed: 100.
Publication date: 26-01-2012

Image receiver

Number: US20120019629A1
Author: Hidetoshi Nagano
Assignee: Sony Corp

An image receiver according to the present invention includes a receiving unit that receives an integrated image in which a first image and a second image are arranged in one frame, and a reception unit that receives region information, transmitted along with the integrated image, indicating a region of the first image, wherein a non-stereoscopic video display mode in which only the first image is displayed based on the region information and/or a stereoscopic video display mode in which the first image and the second image are displayed as stereoscopic video are included.

Publication date: 01-03-2012

Video Processor Configured to Correct Field Placement Errors in a Video Signal

Number: US20120051442A1
Assignee: LSI Corp

A video processor or other processing device incorporates functionality for correcting field placement errors in a video signal. The device obtains at least a portion of a current frame of a video signal, and compares a designated field of an adjacent frame of the video signal with a designated field of the current frame. If the designated field of the adjacent frame and the designated field of the current frame are of the same polarity, the device adjusts a field display configuration value of the current frame, and the current frame is displayed in accordance with the adjusted field display configuration value. However, if the designated field of the adjacent frame and the designated field of the current frame are of different polarities, the current frame is displayed without adjusting the field display configuration value. This process is repeated for each of one or more additional frames of the video signal. The device may comprise, for example, a video processor integrated circuit implemented in a digital video player.
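
A minimal Python sketch of the polarity check described above, assuming each frame carries a designated top/bottom field and a 0/1 field display configuration value; the field names and the flip-by-XOR adjustment are illustrative assumptions, not the patent's actual data model.

def correct_field_placement(frames):
    """frames: list of dicts with 'designated_field' ('top' or 'bottom')
    and 'field_config' (an assumed 0/1 display configuration value)."""
    corrected = []
    prev = None
    for frame in frames:
        frame = dict(frame)  # work on a copy
        if prev is not None and prev["designated_field"] == frame["designated_field"]:
            # Same polarity in adjacent frames -> adjust the display configuration.
            frame["field_config"] ^= 1
        # Different polarity -> display the frame without adjustment.
        corrected.append(frame)
        prev = frame
    return corrected

if __name__ == "__main__":
    seq = [{"designated_field": "top", "field_config": 0},
           {"designated_field": "top", "field_config": 0},     # collision -> adjusted
           {"designated_field": "bottom", "field_config": 0}]
    print(correct_field_placement(seq))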

Publication date: 14-06-2012

Image decoding apparatus, image decoding method, image encoding apparatus, image encoding method, and program

Number: US20120147972A1
Author: Masashi Miyazaki
Assignee: Sony Corp

An image decoding apparatus includes a first decoding unit configured to decode a bit stream that is generated by using a first variable length encoding system, so as to generate an intermediate stream, a second decoding unit configured to decode a bit stream that is generated by using a second variable length encoding system, so as to generate a syntax element, a syntax conversion unit configured to convert the syntax element that is generated, from syntax of the second variable length encoding system into syntax of the first variable length encoding system, and a first encoding unit configured to encode the syntax element that is syntax-converted, so as to generate the intermediate stream.

Publication date: 12-07-2012

Methods and apparatus for producing video records of use of medical ultrasound imaging systems

Number: US20120179039A1
Assignee: Hansen Trevor, Laurent Pelissier, Tomas Bobovsky

Methods and apparatus for producing a video record of a use of a medical ultrasound system are provided. An ultrasound image video signal and a synthetic display element signal produced by an ultrasound system are encoded as at least one data stream. The at least one data stream is stored in a multimedia container file. The ultrasound video signal and synthetic display element signal may be encoded as different data streams. In some embodiments, the ultrasound video signal and synthetic display element signal are combined in a single video signal, which is encoded as a video data stream. In some embodiments, a subject image video signal is produced by a camera, and this signal is encoded and stored with an associated ultrasound image video signal in a video container.

Publication date: 09-08-2012

Method and apparatus for intra-prediction encoding/decoding

Number: US20120201295A1
Assignee: SK TELECOM CO LTD

The present disclosure provides a method and apparatus for intra prediction encoding/decoding. The method includes: selecting an intra prediction mode of each block to be encoded; encoding a residual block generated through an intra prediction of the block according to the selected intra prediction mode to generate a coefficient bit; encoding a mode identifier for indicating the intra prediction mode according to the predetermined mode determination method to generate a mode bit; generating a bitstream including a mode bit field including a mode bit for one or more blocks and a coefficient bit field including a coefficient bit for the block; and including a mode bit field pointer for identifying the mode bit field in the bitstream. The present disclosure simplifies the process of selecting a prediction mode in a video compression to improve a compression speed and decreases a size of compressed data to improve the compression efficiency.

Publication date: 06-09-2012

Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus

Number: US20120224774A1
Assignee: Panasonic Corp

The image coding method is used to code images to generate a coded stream. The image coding method includes: writing, into a sequence parameter set in the coded stream to be generated, a first parameter representing a first bit-depth that is a bit-depth of a reconstructed sample in the images; and writing, into the sequence parameter set, a second parameter which is different from the first parameter and represents a second bit-depth that is a bit-depth of an Intra Pulse Code Modulation (IPCM) sample in the images.
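
As a rough illustration, the two bit-depth parameters could be carried as separate offset-coded fields of the sequence parameter set; the "minus 8" / "minus 1" offsets below follow common HEVC-style conventions and are an assumption, not taken from the abstract.

def write_sps(reconstructed_bit_depth: int, ipcm_bit_depth: int) -> dict:
    return {
        # First parameter: bit depth of reconstructed samples.
        "bit_depth_minus8": reconstructed_bit_depth - 8,
        # Second, separate parameter: bit depth of IPCM samples.
        "pcm_bit_depth_minus1": ipcm_bit_depth - 1,
    }

def read_sps(sps: dict) -> tuple:
    return sps["bit_depth_minus8"] + 8, sps["pcm_bit_depth_minus1"] + 1

assert read_sps(write_sps(10, 8)) == (10, 8)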

Publication date: 25-10-2012

NAL unit header

Number: US20120269276A1
Assignee: Vidyo Inc

Disclosed are techniques for scalable, multiview, and multiple descriptive video coding using an improved Network Adaptation Layer (NAL) unit header. A NAL unit header can include a layer-id that can be a reference into a table of layer descriptions, which specify the properties of the layer. The improved NAL unit header can further include fields for reference picture management and to identify temporal layers.
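
A small sketch, with assumed field names, of resolving the layer-id carried in such a NAL unit header against a table of layer descriptions.

LAYER_TABLE = {
    0: {"dependency_id": 0, "quality_id": 0, "view_id": 0},
    1: {"dependency_id": 1, "quality_id": 0, "view_id": 0},
}

def parse_nal_header(header: dict) -> dict:
    """header: {'layer_id': ..., 'temporal_id': ...} -- names are illustrative."""
    layer = LAYER_TABLE[header["layer_id"]]   # look up the referenced layer description
    return {**header, **layer}

print(parse_nal_header({"layer_id": 1, "temporal_id": 2}))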

Publication date: 21-03-2013

Method For Modeling Coding Information Of A Video Signal To Compress/Decompress The Information

Number: US20130070850A1
Assignee: LG ELECTRONICS INC

A method and an apparatus of decoding a video signal are provided. The present invention includes the steps of parsing first coding information indicating whether a residual data of an image block in the enhanced layer is predicted from a corresponding block in the base layer, from the bitstream of the enhanced layer, and decoding the video signal based on the first coding information. And, the step of parsing includes the step of performing modeling of the first coding information based on second coding information indicating whether prediction information of the corresponding block in the base layer is used to decode the image block in the enhanced layer. Accordingly, the present invention raises efficiency of video signal processing by enabling a decoder to derive information on a prediction mode of a current block in a decoder instead of transferring the information to the decoder.

Publication date: 02-05-2013

Fragmented parameter set for video coding

Number: US20130107942A1
Assignee: Qualcomm Inc

A video encoder generates a first network abstraction layer (NAL) unit. The first NAL unit contains a first fragment of a parameter set associated with video data. The video encoder also generates a second NAL unit. The second NAL unit contains a second fragment of the parameter set. A video decoder may receive a bitstream that includes the first and second NAL units. The video decoder decodes, based at least in part on the parameter set, one or more coded pictures of the video data.
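
A sketch of the fragmentation idea under an assumed container format: the encoder splits the parameter set payload across several NAL units, and the decoder reassembles it before use.

def fragment_parameter_set(ps_payload: bytes, max_size: int):
    return [{"type": "PS_FRAGMENT",
             "index": i // max_size,
             "last": i + max_size >= len(ps_payload),
             "data": ps_payload[i:i + max_size]}
            for i in range(0, len(ps_payload), max_size)]

def reassemble(nal_units):
    frags = sorted((n for n in nal_units if n["type"] == "PS_FRAGMENT"),
                   key=lambda n: n["index"])
    return b"".join(f["data"] for f in frags)

ps = bytes(range(10))
assert reassemble(fragment_parameter_set(ps, 4)) == ps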

Publication date: 09-05-2013

Multiview video coding

Number: US20130114705A1
Assignee: Qualcomm Inc

Aspects of this disclosure relate to a method of coding video data. In an example, the method includes obtaining, from an encoded bitstream and for any view component of a first view, reference view information indicating one or more reference views for predicting view components of the first view. The method also includes including, for decoding a first view component in an access unit and in the first view, one or more reference candidates in a reference picture list, where the one or more reference candidates comprise view components in the access unit and in the reference views indicated by the reference view information, where the number of reference candidates is equal to the number of reference views. The method also includes decoding the first view component based on the one or more reference candidates in the reference picture list.
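
A compact sketch of that list construction: one inter-view reference candidate is added per reference view signalled in the reference view information, so the candidate count equals the number of reference views (the data structures are placeholders).

def build_reference_list(access_unit, reference_views):
    """access_unit: dict mapping view_id -> view component (e.g. a decoded picture)."""
    return [access_unit[view_id] for view_id in reference_views]

au = {0: "view0_pic", 1: "view1_pic", 2: "view2_pic"}
ref_views_for_view2 = [0, 1]                     # signalled reference view information
ref_list = build_reference_list(au, ref_views_for_view2)
assert len(ref_list) == len(ref_views_for_view2)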

Publication date: 06-06-2013

Encoding device, encoding method, decoding device, and decoding method

Number: US20130142247A1
Assignee: Sony Corp

An encoding device and method, and a decoding device and method, capable of encoding and decoding a multi-viewpoint image in accordance with a mode having compatibility with an existing mode. A compatible encoder generates a compatible stream by encoding an image that is a compatible image. An image converting unit converts the resolution of images that are auxiliary images. An auxiliary encoder generates an encoded stream of the auxiliary image by encoding the auxiliary image of which the resolution is converted. A compatibility information generating unit generates, as compatibility information, information that designates the image as a compatible image. A multiplexing unit transmits the compatible stream, the encoded stream of the auxiliary image, and the compatibility information. The encoding device can encode a 3D image of the multi-viewpoint mode.

Publication date: 06-06-2013

Coding picture order count values identifying long-term reference frames

Number: US20130142257A1
Assignee: Qualcomm Inc

In general, techniques are described for coding picture order count values identifying long-term reference pictures. A video decoding device comprising a processor may perform the techniques. The processor may determine least significant bits (LSBs) of a picture order count (POC) value that identifies a long-term reference picture (LTRP). The LSBs do not uniquely identify the POC value with respect to the LSBs of any other POC value identifying any other picture in a decoded picture buffer (DPB). The processor may determine most significant bits (MSBs) of the POC value. The MSBs combined with the LSBs is sufficient to distinguish the POC value from any other POC value that identifies any other picture in the DPB. The processor may retrieve the LTRP from the decoded picture buffer based on the LSBs and MSBs of the POC value, and decode a current picture of the video data using the retrieved LTRP.
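
A worked sketch of the LSB/MSB arithmetic described above; the 8-bit LSB width is an assumption chosen for illustration.

MAX_LSB = 1 << 8          # assume 8 LSB bits are signalled

def split_poc(poc: int):
    return poc % MAX_LSB, poc // MAX_LSB           # (LSBs, MSBs)

def rebuild_poc(lsb: int, msb: int) -> int:
    return msb * MAX_LSB + lsb

def find_ltrp(dpb, lsb, msb=None):
    """dpb: dict mapping POC -> picture. If the LSBs alone are ambiguous,
    the MSBs are required to single out one picture."""
    matches = [p for p in dpb if p % MAX_LSB == lsb]
    if len(matches) == 1 and msb is None:
        return dpb[matches[0]]
    return dpb[rebuild_poc(lsb, msb)]

dpb = {5: "picA", 261: "picB"}       # 261 % 256 == 5: the LSBs collide
assert find_ltrp(dpb, 5, msb=1) == "picB"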

Publication date: 11-07-2013

Motion vector candidate index signaling in video coding

Number: US20130177083A1
Assignee: Qualcomm Inc

A video encoder generates a first and a second candidate list. The first candidate list includes a plurality of motion vector (MV) candidates. The video encoder selects, from the first candidate list, a MV candidate for a first prediction unit (PU) of a coding unit (CU). The second MV candidate list includes each of the MV candidates of the first MV candidate list except the MV candidate selected for the first PU. The video encoder selects, from the second MV candidate list, a MV candidate for a second PU of the CU. A video decoder generates the first and second MV candidate lists in a similar way and generates predictive sample blocks for the first and second PUs based on motion information of the selected MV candidates.
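
A minimal sketch of the two-list rule: the second PU's candidate list is the first list minus the candidate already chosen for the first PU (the candidates themselves are toy values).

def build_second_list(first_list, selected_for_pu0):
    # Second PU's list: every candidate of the first list except the one
    # already selected for the first PU.
    return [c for c in first_list if c != selected_for_pu0]

first = [(1, 0), (0, 2), (3, 3)]     # toy motion-vector candidates
chosen_pu0 = (0, 2)
second = build_second_list(first, chosen_pu0)
assert chosen_pu0 not in second and len(second) == len(first) - 1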

Publication date: 19-09-2013

Encoding device, decoding device, playback device, encoding method, and decoding method

Number: US20130243103A1
Assignee: Panasonic Corp

An encoding device that, when encoding frame image groups which represent scenes respectively viewed from plurality of viewpoints over predetermined time period, generates base-view video stream by encoding frame image group of standard viewpoint without using, as reference image, any frame image of other viewpoints, generates first-type dependent-view video stream by encoding frame image group of first-type viewpoint by using, as reference image, frame image of same time of base-view video stream or of another first-type dependent-view video stream, first-type viewpoint being positioned such that at least one viewpoint is present between first-type viewpoint and standard viewpoint, and generates second-type dependent-view video stream by encoding frame image group of second-type viewpoint by using, as reference images, frame images of same time of two viewpoints sandwiching second-type viewpoint, second-type viewpoint being neither standard viewpoint nor first-type viewpoint.

Publication date: 26-09-2013

Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition

Number: US20130251035A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An encoding method including: receiving and parsing a bitstream of an encoded image, determining coding units having a hierarchical structure being data units in which the encoded image is decoded, and sub-units for predicting the coding units, by using information that indicates division shapes of the coding units and information about prediction units of the coding units, parsed from the received bitstream, wherein the sub-units comprise partitions obtained by splitting at least one of a height and a width of the coding units according to at least one of a symmetric ratio and an asymmetric ratio, and reconstructing the image by performing decoding including motion compensation using the partitions for the coding units, using the encoding information parsed from received bitstream, wherein the coding units having the hierarchical structure comprise coding units of coded depths split hierarchically according to the coded depths and independently from neighboring coding units.

Publication date: 03-10-2013

Three-dimensional video encoding apparatus, three-dimensional video capturing apparatus, and three-dimensional video encoding method

Number: US20130258053A1
Assignee: GODO KAISHA IP BRIDGE 1

A three-dimensional video encoding apparatus is provided that adaptively switches the method of setting a reference picture according to the parallax between the left and right views, thereby improving encoding efficiency. A parallax acquisition unit 101 calculates parallax information on a first viewpoint video signal and a second viewpoint video signal according to a parallax matching method or the like. A reference picture setting unit 102 determines, from the parallax information, reference picture setting information on the selection of the reference picture in the encoding of a picture to be encoded, and the allocation of a reference index to the reference picture. An encoding unit 103 compresses and encodes the image data of the picture to be encoded, according to reference picture selection information.

Publication date: 17-10-2013

Grouping bypass coded syntax elements in video coding

Number: US20130272380A1
Assignee: Qualcomm Inc

A video encoding device is configured to generate a first group of syntax elements. Each syntax element in the first group indicates whether a prediction mode of a respective prediction unit (PU) is based on an index into a list of most probable modes. A second group of syntax elements is generated that correspond to respective syntax elements in the first group. The syntax elements in the second group identify either an index into the list of most probable modes or an intra-prediction mode. The first group of syntax elements are context adaptive binary arithmetic coding (CABAC) encoded, and the second group of syntax elements are bypass encoded. A video decoder is configured to receive the entropy encoded first and second groups of syntax elements. The video decoder CABAC decodes the first group of flags and bypass decodes the second group of flags.
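
A sketch of the grouping idea: all context-coded flags are emitted first, then all bypass-coded payloads in the same order, so the arithmetic coder can stay in bypass mode for a long run; the ("cabac", ...) and ("bypass", ...) tags stand in for a real arithmetic coding engine.

def encode_intra_modes(pus):
    stream = []
    # First group: one context-coded flag per PU ("is the mode an MPM index?").
    for pu in pus:
        stream.append(("cabac", pu["use_mpm"]))
    # Second group: the corresponding bypass-coded payloads, in the same order.
    for pu in pus:
        payload = pu["mpm_idx"] if pu["use_mpm"] else pu["rem_intra_mode"]
        stream.append(("bypass", payload))
    return stream

pus = [{"use_mpm": True,  "mpm_idx": 2, "rem_intra_mode": None},
       {"use_mpm": False, "mpm_idx": None, "rem_intra_mode": 17}]
print(encode_intra_modes(pus))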

Publication date: 17-10-2013

Electronic devices for sending a message and buffering a bitstream

Number: US20130273945A1
Author: Sachin G. Deshpande
Assignee: Sharp Laboratories of America Inc

An electronic device for sending a message is described. The electronic device includes a processor and instructions stored in memory that is in electronic communication with the processor. The electronic device determines whether a first picture is a Clean Random Access (CRA) picture. The electronic device also determines whether a leading picture is present if the first picture is a CRA picture. The electronic device further generates a message including a CRA discard flag and an initial CRA Coded Picture Buffer (CPB) removal delay parameter if a leading picture is present. The electronic device additionally sends the message.

Publication date: 02-01-2014

Methods, systems and apparatus for displaying the multimedia information from wireless communication networks

Number: US20140003486A1
Assignee: Virginia Innovation Sciences Inc

Video signals for a mobile terminal are converted to accommodate reproduction by an alternative display terminal. The video signal is processed to provide a converted video signal appropriate for an alternative display terminal that is separate from the mobile terminal. This converted video signal is then provided for the alternative display terminal to accommodate the corresponding video display on a screen provided by the alternative (e.g., external) display terminal.

Publication date: 02-01-2014

Method and apparatus for video coding

Number: US20140003489A1
Author: Miksa HANNUKSELA
Assignee: Nokia Oyj

A method, apparatus and computer program product are provided that permit values of certain parameters or syntax elements, such as the HRD parameters and/or a level indicator, to be taken from a syntax structure, such as a sequence parameter set. In this regard, values of certain parameters or syntax elements, such as the HRD parameters and/or a level indicator, may be taken from a syntax structure of a certain other layer, such as the highest layer, present in an access unit, coded video sequence and/or bitstream even if the other layer, such as the highest layer, were not decoded. The syntax element values from the other layer, such as the highest layer, may be semantically valid and may be used for conformance checking, while the values of the respective syntax elements from other respective syntax structures, such as sequence parameter sets, may be active or valid otherwise.

Publication date: 09-01-2014

Layer Dependency and Priority Signaling Design for Scalable Video Coding

Number: US20140010291A1
Author: YAN Ye, Yong He, Yuwen He
Assignee: Vid Scale Inc

Signaling of layer dependency and/or priority of dependent layers in a video parameter set (VPS) may be used to indicate the relationship between an enhancement layer and its dependent layers, and/or prioritize the order of the dependent layers for multiple layer scalable video coding of HEVC for inter-layer prediction. A method may include receiving a bit stream that includes a video parameter set (VPS). The VPS may include a dependent layer parameter that indicates a dependent layer for an enhancement layer of the bit stream. The dependent layer parameter may indicate a layer identification (ID) of the dependent layer. The VPS may indicate a total number of dependent layers for the enhancement layer. The VPS may include a maximum number of layers parameter that indicates a total number of layers of the bit stream. The total number of dependent layers for the enhancement layer may not include the enhancement layer.
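
An illustrative sketch of such VPS signalling with assumed field names: per enhancement layer, the VPS lists the dependent layers and their order, which also fixes the priority used for inter-layer prediction.

vps = {
    "max_layers": 3,
    "layers": {
        2: {"num_dependent_layers": 2,
            "dependent_layer_ids": [0, 1]},     # priority order: layer 0, then layer 1
    },
}

def dependent_layers(vps: dict, layer_id: int):
    entry = vps["layers"].get(layer_id, {"dependent_layer_ids": []})
    ids = entry["dependent_layer_ids"]
    assert layer_id not in ids                   # the enhancement layer never lists itself
    return ids

print(dependent_layers(vps, 2))   # -> [0, 1]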

Publication date: 06-03-2014

Motion vector calculation method

Number: US20140064375A1
Assignee: Panasonic Corp

When a block (MB22) whose motion vector is referred to in the direct mode contains a plurality of motion vectors, the two motion vectors MV23 and MV24 that are used for inter-picture prediction of the current picture (P23) to be coded are determined by scaling a value obtained by averaging the plurality of motion vectors, or by selecting one of the plurality of motion vectors.
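
A worked example of that arithmetic, with simplified temporal distances and rounding: the co-located block's motion vectors are averaged and the average is scaled by the ratio of temporal distances.

def average_mv(mvs):
    n = len(mvs)
    return (sum(x for x, _ in mvs) / n, sum(y for _, y in mvs) / n)

def scale_mv(mv, td_current, td_reference):
    s = td_current / td_reference
    return (round(mv[0] * s), round(mv[1] * s))

colocated_mvs = [(8, -4), (4, 0)]         # the referred block holds two motion vectors
avg = average_mv(colocated_mvs)           # (6.0, -2.0)
mv_fwd = scale_mv(avg, td_current=1, td_reference=2)    # -> (3, -1)
mv_bwd = scale_mv(avg, td_current=-1, td_reference=2)   # -> (-3, 1)
print(mv_fwd, mv_bwd)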

Publication date: 27-03-2014

Indication and activation of parameter sets for video coding

Number: US20140086337A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

In some examples, a video encoder includes multiple sequence parameter set (SPS) IDs in an SEI message, such that multiple active SPSs can be indicated to a video decoder. In some examples, a video decoder activates a video parameter set (VPS) and/or one or more SPSs through referencing an SEI message, e.g., based on the inclusion of the VPS ID and one or more SPS IDs in the SEI message. The SEI message may be, as examples, an active parameter sets SEI message or a buffering period SEI message.

Publication date: 03-04-2014

Systems and methods for reference picture set extension

Number: US20140092988A1
Author: Sachin G. Deshpande
Assignee: Sharp Laboratories of America Inc

A method for sending information by an electronic device is described. The method includes creating reference picture set (RPS) information based on a coding structure. The method also includes determining whether to signal RPS extension information. The method additionally includes creating the RPS extension information if it is determined to signal RPS extension information. The method further includes sending the RPS extension information if it is determined to signal RPS extension information.

Publication date: 03-04-2014

Error resilient decoding unit association

Number: US20140092993A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

Techniques are described for signaling decoding unit identifiers for decoding units of an access unit. The video decoder determines which network abstraction layer (NAL) units are associated with which decoding units based on the decoding unit identifiers. Techniques are also described for including one or more copies of supplemental enhancement information (SEI) messages in an access unit.

Publication date: 03-04-2014

Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus

Number: US20140093180A1
Assignee: Panasonic Corp

A dependency indication is signaled within the beginning of a packet, that is, adjacent to a slice header to be parsed or to a parameter set. This is achieved, for example, by including the dependency indication at the beginning of the slice header, preferably after a syntax element identifying the parameter set and before the slice address, by including the dependency indication before the slice address, by providing the dependency indication to a NALU header using a separate message, or by using a special NALU type for NALUs carrying dependent slices.

Publication date: 06-01-2022

POINT CLOUD ENCODING METHOD AND APPARATUS, POINT CLOUD DECODING METHOD AND APPARATUS, AND STORAGE MEDIUM

Number: US20220007037A1
Author: Cai Kangying, Zhang Dejun
Assignee:

This application discloses a point cloud encoding method and apparatus, a point cloud decoding method and apparatus, and a storage medium, and belongs to a data processing field. The method includes: first obtaining auxiliary information of a to-be-encoded patch, and then encoding the auxiliary information and a first index of the to-be-encoded patch into a bitstream. Values of the first index may be a first value, a second value, and a third value. Different values indicate different types of patches. Therefore, different types of patches can be distinguished by using the first index. For different types of patches, content included in auxiliary information encoded into a bitstream may be different. This can simplify a format of information encoded into the bitstream, reduce bit overheads of the bitstream, and improve encoding efficiency. 1. A point cloud encoding method , wherein the method comprises:obtaining auxiliary information of a to-be-encoded point cloud patch; andencoding the auxiliary information of the to-be-encoded patch and a syntax element of the to-be-encoded patch into a bitstream, wherein the syntax element comprises a first syntax element, wherein:when the first syntax element indicates that the to-be-encoded patch has a reference patch,the syntax element encoded into the bitstream further comprises a second syntax element, a value of the second syntax element is true, indicating that two-dimensional information of the to-be-encoded patch is encoded into the bitstream but three-dimensional information of the to-be-encoded patch is not encoded into the bitstream, and the auxiliary information of the to-be-encoded patch comprises the two-dimensional information; orthe syntax element encoded into the bitstream further comprises a second syntax element and a third syntax element, a value of the second syntax element is false and a value of the third syntax element is true, indicating that two-dimensional information of the to-be-encoded patch is not ...

Publication date: 06-01-2022

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE

Number: US20220007055A1
Assignee:

A three-dimensional data encoding method includes: assigning three-dimensional points to M layers, where M is an integer greater than 1; generating respective predicted values of attribute information for the three-dimensional points; encoding the attribute information based on the respective predicted values; and generating a bitstream including layer-number information indicating a number N of lower layers including a bottom-most layer among the M layers, where N is an integer between zero and M, inclusive. In the generating of the predicted values: (i) a predicted value for an upper-layer three-dimensional point in upper layers other than the lower layers among the M layers is generated by a same-layer reference in which another three-dimensional point belonging to a same layer as the upper-layer three-dimensional point is referenced; and (ii) a predicted value for a lower-layer three-dimensional point in lower layers is generated in a condition where the same-layer reference is disabled. 1. A three-dimensional data encoding method , comprising:assigning three-dimensional points to M layers, M being an integer greater than 1;generating respective predicted values of attribute information for the three-dimensional points;encoding the attribute information for the three-dimensional points based on the respective predicted values; andgenerating a bitstream including layer-number information indicating a number N of lower layers including a bottom-most layer among the M layers, N being an integer greater than or equal to zero and less than or equal to M, wherein (i) a predicted value for an upper-layer three-dimensional point in upper layers is generated by a same-layer reference in which another three-dimensional point is referenced, the other three-dimensional point belonging to a same layer as the upper-layer three-dimensional point, the upper layers being other than the lower layers among the M layers; and', '(ii) a predicted value for a lower-layer three- ...

Publication date: 07-01-2016

METHODS, SYSTEMS AND APPARATUS FOR DISPLAYING THE MULTIMEDIA INFORMATION FROM WIRELESS COMMUNICATION NETWORKS

Number: US20160005377A1
Assignee: Virginia Innovation Sciences, Inc.

Video signals for a mobile terminal are converted to accommodate reproduction by an alternative display terminal. The video signal is processed to provide a converted video signal appropriate for an alternative display terminal that is separate from the mobile terminal. This converted video signal is then provided for the alternative display terminal to accommodate the corresponding video display on a screen provided by the alternative (e.g., external) display terminal. 1. An apparatus for converting and providing signals , the apparatus comprising:an input interface configured for receiving a video signal appropriate for displaying a video content on a mobile terminal;a processing unit configured for processing the video signal to produce a converted video signal for use by an alternative display terminal, wherein the processing includes converting a signal format of the video signal to a different format for output to the alternative display terminal, and the converted video signal produced by the processing unit comprises a high definition digital signal; wherein the processing the video signal comprises converting the video signal with a compression format to a decompressed digital video signal; wherein the processing the video signal further comprises encoding the decompressed digital video signal for transmission to the alternative display terminal through a digital high definition interface; andan output interface configured for providing the converted video signal to the alternative display terminal through the digital high definition interface to accommodate displaying the video content by the alternative display terminal.2. The apparatus of claim 1 , wherein power from a source external to the mobile terminal charges the interval power supply of the mobile terminal for said processing the video signal. This application is a continuation of U.S. application Ser. No. 14/551,024 filed on Nov. 23, 2014, which is a continuation of U.S. application Ser. No. 14/ ...

Publication date: 13-01-2022

METHOD FOR ENCODING/DECODING IMAGE SIGNAL AND DEVICE THEREFOR

Number: US20220014743A1
Author: Lee Bae Keun
Assignee:

A method for decoding an image according to the present invention comprises the steps of: partitioning a coding block into a plurality of sub-blocks in a horizontal direction or vertical direction; determining whether a second inverse transform is applied to the coding block; and when it is determined that the second inverse transform is applied to the coding block, applying the second inverse transform to at least one of the plurality of sub-blocks. 1. A method of decoding a video , the method comprising:partitioning a coding block into a plurality of sub-blocks in a horizontal direction or in a vertical direction;determining whether a second inverse transform is applied to the coding block or not; andwhen it is determined that the second inverse transform is applied to the coding block, applying the second inverse transform to at least one among the plurality of sub-blocks,wherein a determination of whether to apply the second inverse-transform is based on index information signaled via a bitstream, andwherein the index information is signaled only when a size of at least one among the plurality of sub-blocks is equal to or greater than a threshold value.2. The method of claim 1 , wherein when the size of at least one among the plurality of sub-blocks is less than the threshold value claim 1 , decoding of the index information is omitted claim 1 , andwherein when decoding of the information is omitted, a value of the index information is inferred to indicate that the second inverse transform is not applied.3. The method of claim 1 , wherein the second inverse transform is performed based on an inverse transform matrix claim 1 , andwherein the inverse transform matrix is determined based on an intra prediction mode of the coding block and the information.4. The method of claim 3 , wherein claim 3 , based on the intra prediction mode claim 3 , an inverse transform matrix set is determined claim 3 , andwherein, based on the index information, one among a plurality of ...
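
A sketch of the parsing condition from the claims: the index selecting the secondary inverse transform is read only when at least one sub-block reaches the threshold size, and is otherwise inferred as "not applied"; the threshold of 16 samples is an assumed value.

THRESHOLD = 16   # assumed threshold size for illustration

def parse_secondary_transform_index(sub_block_sizes, read_index):
    if any(size >= THRESHOLD for size in sub_block_sizes):
        return read_index()          # index present in the bitstream
    return 0                         # inferred: secondary inverse transform not applied

assert parse_secondary_transform_index([8, 8], read_index=lambda: 2) == 0
assert parse_secondary_transform_index([16, 8], read_index=lambda: 2) == 2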

Publication date: 13-01-2022

CONTEXT CODING FOR MATRIX-BASED INTRA PREDICTION

Number: US20220014785A1
Assignee:

Devices, systems and methods for digital video coding, which includes matrix-based intra prediction methods for video coding, are described. In a representative aspect, a method for video processing includes encoding a current video block of a video using a matrix intra prediction (MIP) mode in which a prediction block of the current video block is determined by performing, on previously coded samples of the video, a boundary downsampling operation, followed by a matrix vector multiplication operation, and selectively followed by an upsampling operation; and adding, to a coded representation of the current video block, a syntax element indicative of applicability of the MIP mode to the current video block using arithmetic coding in which a context for the syntax element is derived based on a rule. 1. A method of processing video data , comprising:determining, for a conversion between a current video block of a video and a bitstream of the video, whether a matrix intra prediction (MIP) mode is applied on the current video block based on a syntax element, wherein in the MIP mode, prediction samples of the current video block are determined by performing a matrix vector multiplication operation; andperforming the conversion based on the determining,wherein at least one bin of the syntax element is context coded, and an index of the context is determined based on characteristics of a neighboring block of the current video block.2. The method of claim 1 , wherein the index of the context is determined further based on a size of the current video block.3. The method of claim 2 , wherein in response to a width-height ratio of the current video block being greater than 2 claim 2 , a context with a first predefined index is used for coding the at least one bin of the syntax element.4. The method of claim 2 , wherein in response to a width-height ratio of the current video block being smaller than or equal to 2 claim 2 , a context with a second index is used for coding the at ...
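
A sketch of the context-index derivation discussed in the claims; the concrete index values and the neighbour rule are illustrative rather than the normative derivation.

def mip_flag_context(width, height, left_is_mip, above_is_mip):
    if width > 2 * height or height > 2 * width:
        return 3                     # separate context for strongly elongated blocks
    # Otherwise derive the context index from how many neighbours already use MIP.
    return int(left_is_mip) + int(above_is_mip)   # 0, 1 or 2

print(mip_flag_context(32, 8, True, False))   # -> 3
print(mip_flag_context(16, 16, True, True))   # -> 2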

Publication date: 07-01-2016

METHOD FOR SUB-RANGE BASED CODING A DEPTH LOOKUP TABLE

Number: US20160007005A1
Assignee:

The invention relates to a method () for sub-range based coding a depth lookup table (), the depth lookup table comprising depth values of a 3D video sequence, the depth values being constrained within a range (), the method comprising: partitioning () the range () into a plurality of sub-ranges, a first sub-range () comprising a first set of the depth values and a second sub-range () comprising a second set of the depth values; and coding () the depth values of each of the sub-ranges of the depth lookup table () separately according to a predetermined coding rule. 1. A method for coding a depth lookup table , the depth lookup table comprising depth values of at least a part of a 3D video sequence , the depth values being constrained within a range , the method comprising:partitioning the range into a plurality of sub-ranges, a first sub-range comprising a first set of the depth values and a second sub-range comprising a second set of the depth values; andcoding the depth values of each of the first and second sub-ranges to the depth lookup table according to a predetermined coding rule.2. The method of claim 1 , comprising:signaling a position of the second sub-range in the range of the depth values of the depth lookup table by using an offset to the position of the first sub-range.3. The method of claim 2 , comprising:signaling a width of a sub-range by using a parameter indicating the width of that sub-range.4. The method of claim 1 , wherein an occurrence of the depth values in each of the first and second sub-ranges of the depth lookup table is signaled as binary string.5. The method of claim 1 , wherein the depth values of each of the first and second sub-ranges of the depth lookup table are coded by using a range constrained bit map coding according to 3D Video Coding Extension Development of ITU-T and ISO/IEC standardization.6. The method of claim 1 , wherein the partitioning the range into a plurality of sub-ranges is based on a selection criterion.7. The ...

Publication date: 07-01-2016

ESTIMATING RATE COSTS IN VIDEO ENCODING OPERATIONS USING ENTROPY ENCODING STATISTICS

Number: US20160007046A1
Author: Chou Jim C.
Assignee: Apple Inc.

A component of an entropy encoding stage of a block processing pipeline (e.g., a CABAC encoder) may, for a block of pixels in a video frame, accumulate counts indicating the number of times each of two possible symbols is used in encoding a syntax element bin. An empirical probability for each symbol, an estimated entropy, and an estimated rate cost for encoding the bin may be computed, dependent on the symbol counts. A pipeline stage that precedes the entropy encoding stage may, upon receiving another block of pixels for the video frame, calculate and use the estimated rate cost when making encoding decisions for the other block of pixels based on a cost function that includes a rate cost term. The symbol counts or empirical probabilities may be passed to the earlier pipeline stage or written to a shared memory, from which components of the earlier stage may obtain them. 1. An apparatus , comprising:a block processing pipeline configured to process blocks of pixels from a video frame; anda memory accessible by respective processors in components at each of two or more stages of the block processing pipeline;wherein an entropy encoding stage of the block processing pipeline comprises a binary arithmetic coding component configured to encode syntax element bins that represent the blocks of pixels; code the syntax element bin using one or more symbols, each of which has one of two possible values;', 'update, in the memory, at least one of: a count of symbols used in coding the syntax element bin having one of the two possible values or a count of symbols used in coding the syntax element bin having the other one of the two possible values; and, 'wherein, for each of one or more of a plurality of syntax element bins for a given block of pixels, the binary arithmetic coding component is configured to access the count of symbols used in coding a given syntax element bin having the one of the two possible values and the count of symbols used in coding the given syntax ...
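
A worked sketch of that estimate: per-bin symbol counts give an empirical probability, the probability gives an estimated entropy in bits per bin, and the entropy feeds the rate term of a rate-distortion cost.

import math

def estimated_rate(count0: int, count1: int) -> float:
    total = count0 + count1
    if total == 0 or count0 == 0 or count1 == 0:
        return 1.0                     # fall back to 1 bit/bin in degenerate cases
    p0 = count0 / total
    p1 = count1 / total
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))   # estimated bits per bin

def rd_cost(distortion: float, bins: int, count0: int, count1: int, lam: float):
    return distortion + lam * bins * estimated_rate(count0, count1)

print(round(estimated_rate(900, 100), 3))   # ~0.469 bits per bin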

Publication date: 07-01-2021

VIDEO ENCODING METHOD WITH SYNTAX ELEMENT SIGNALING OF PACKING OF PROJECTION FACES DERIVED FROM CUBE-BASED PROJECTION AND ASSOCIATED VIDEO DECODING METHOD AND APPARATUS

Number: US20210006785A1
Assignee:

A video decoding method includes decoding a part of a bitstream to generate a decoded frame, and parsing at least one syntax element from the bitstream. The decoded frame is a projection-based frame that has projection faces packed in a cube-based projection layout. At least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection. The at least one syntax element is indicative of packing of the projection faces in the cube-based projection layout 1. A video decoding method comprising:decoding a part of a bitstream to generate a decoded frame, wherein the decoded frame is a projection-based frame that has projection faces packed in a cube-based projection layout, and at least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection; andparsing at least one syntax element from the bitstream, wherein said at least one syntax element is indicative of packing of the projection faces in the cube-based projection layout.2. The video decoding method of claim 1 , wherein packing of the projection faces is selected from a group consisting of packing of regular cubemap projection faces and packing of hemisphere cubemap projection faces.3. The video decoding method of claim 2 , wherein the hemisphere cubemap projection faces comprise a first projection face and four second projection faces claim 2 , the first projection face has a first size claim 2 , each of the four second projection faces has a second size claim 2 , and the first size is larger than the second size.4. The video decoding method of claim 1 , wherein said at least one syntax element comprises:a first syntax element, arranged to specify a packing type of said packing of the projection faces in the cube-based projection layout.5. The video decoding method of claim 4 , wherein the first syntax element is further arranged to specify a pre-defined arrangement of position indexes assigned to face positions under the ...

Publication date: 07-01-2021

MINIMUM ALLOWED QUANTIZATION PARAMETER FOR TRANSFORM SKIP MODE AND PALETTE MODE IN VIDEO CODING

Number: US20210006794A1
Assignee:

A video encoder derives a minimum allowed base quantization parameter for video data based on an input bitdepth of the video data, determines a base quantization parameter for a block of the video data based on the minimum allowed base quantization parameter, and quantizes the block of video data based on the base quantization parameter. In a reciprocal fashion, a video decoder derives a minimum allowed base quantization parameter for the video data based on an input bitdepth of the video data, determines a base quantization parameter for a block of the video data based on the minimum allowed base quantization parameter, and inverse quantizes the block of video data based on the base quantization parameter. 1. A method of decoding video data , the method comprising:deriving a minimum allowed base quantization parameter for video data based on an input bitdepth of the video data;determining a base quantization parameter for a block of the video data based on the minimum allowed base quantization parameter; andinverse quantizing the block of video data based on the base quantization parameter.2. The method of claim 1 , further comprising:decoding a syntax element indicative of the input bitdepth.3. The method of claim 1 , wherein the block of the video data is a transform unit of pixel domain video data claim 1 , the method further comprising:decoding the inverse quantized transform unit of pixel domain video data using a transform skip mode or an escape mode of palette mode.4. The method of claim 1 , wherein deriving the minimum allowed base quantization parameter for the video data based on the input bitdepth of the video data comprises:deriving the minimum allowed base quantization parameter for the video data as a function of an internal bitdepth (internalBitDepth) minus the input bitdepth (inputBitDepth).5. The method of claim 4 , wherein deriving the minimum allowed base quantization parameter for the video data as the function of the internal bitdepth ( ...
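
A sketch of the derivation referred to in the claims: the minimum allowed base QP grows with the gap between the internal coding bit depth and the input bit depth; the "4 + 6 * delta" form follows the usual six-QP-per-bit-depth-doubling convention and is an assumption here.

def min_allowed_base_qp(internal_bit_depth: int, input_bit_depth: int) -> int:
    delta = internal_bit_depth - input_bit_depth
    return 4 + 6 * delta

def clamp_base_qp(requested_qp: int, internal_bit_depth: int, input_bit_depth: int) -> int:
    # The base QP used for (inverse) quantization never drops below the minimum.
    return max(requested_qp, min_allowed_base_qp(internal_bit_depth, input_bit_depth))

print(min_allowed_base_qp(10, 8))     # -> 16
print(clamp_base_qp(10, 10, 8))       # requested 10 is raised to 16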

Publication date: 07-01-2021

SIDE INFORMATION SIGNALING FOR INTER PREDICTION WITH GEOMETRIC PARTITIONING

Number: US20210006803A1
Assignee:

A method for processing a video includes performing a determination, by a processor, that a first video block is partitioned to include a first prediction portion that is non-rectangular and non-square; adding a first motion vector (MV) prediction candidate associated with the first prediction portion to a motion candidate list associated with the first video block, wherein the first MV prediction candidate is derived from a sub-block MV prediction candidate; and performing further processing of the first video block using the motion candidate list. 1. A method of coding video data , comprising:performing a conversion between a current block of a video and a bitstream representation of the video, wherein the current block is coded with a geometric partitioning mode;wherein the bitstream representation includes multiple syntax elements among which one syntax element indicating a splitting pattern of the geometric partitioning mode for the current block and other syntax elements indicating multiple merge indices for the current block.2. The method of claim 1 , wherein a prediction among the multiple merge indices is utilized.3. The method of claim 1 , wherein an indication which specifies whether the geometric partitioning mode is enabled is included in the bitstream representation.4. The method of claim 3 , wherein the indication is at a sequence parameter set level or a picture parameter set level or a video parameter set level or a picture header or a slice header or a tile group header or a coding tree unit level.5. The method of claim 1 , wherein the geometric partitioning mode is disabled for the current block due to the current block satisfying a size condition.6. The method of claim 5 , wherein the size condition specifies not to use the geometric partitioning mode for the current block due to the current block having a size greater than a first threshold claim 5 , wherein the first threshold is equal to 64.7. The method of claim 5 , wherein the size condition ...

Publication date: 07-01-2021

APPARATUS AND METHODS THEREOF FOR VIDEO PROCESSING

Number: US20210006808A1
Assignee: Telefonaktiebolaget LM Ericsson (publ)

A method to be performed by a receiving apparatus for decoding an encoded bitstream representing a sequence of pictures of a video stream is provided. In the method, capabilities relating to level of decoding parallelism for the decoder are identified, a parameter indicative of the decoder's capabilities relating to level of decoding parallelism is kept, and for a set of levels of decoding parallelism, information relating to HEVC profile and HEVC level that the decoder is capable of decoding is kept. 1. A method to be performed by a video camera for decoding an encoded bitstream representing a sequence of pictures of a video stream comprising:identifying capabilities relating to level of decoding parallelism for a decoder of the video camera,keeping a parameter indicative of decoder capabilities relating to level of decoding parallelism, andfor a set of levels of decoding parallelism, keeping information relating to HEVC profile and HEVC level that the decoder is capable of decoding.2. The method according to claim 1 , wherein information of available representations of the encoded bitstream that can be provided by an encoder is received claim 1 ,using the received information, the parameter indicative of the decoder capabilities relating to level of decoding parallelism, and the decoder capabilities relating to HEVC profile and HEVC level that the decoder is capable of decoding for selecting a representation that can be decoded, andsending an indication of the selected representation.3. The method according to claim 2 , wherein the step of selecting a representation that can be encoded claim 2 , further comprises:evaluating the information of the possible representations, andselecting the one with the highest HEVC level for which the parameter has a parallelism level such that the decoder is capable of decoding that HEVC level.4. The method according to claim 1 , further comprising:using the parameter indicative of the decoder capabilities relating to level of ...

Publication date: 07-01-2021

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR ENCODING DISPARITIES BETWEEN VIEWS OF A STEREOSCOPIC IMAGE

Number: US20210006811A1
Assignee:

From a bit stream, at least the following are decoded: a stereoscopic image of first and second views; a maximum positive disparity between the first and second views; and a minimum negative disparity between the first and second views. In response to the maximum positive disparity violating a limit on positive disparity, a convergence plane of the stereoscopic image is adjusted to comply with the limit on positive disparity. In response to the minimum negative disparity violating a limit on negative disparity, the convergence plane is adjusted to comply with the limit on negative disparity. 1. A method comprising:receiving, at a conversion device, capture information from a decoding device, wherein the capture information comprises a stereoscopic image of first and second views, maximum positive disparity of a first object in the first and second views, and minimum negative disparity of a second object in the first and second views;receiving, at the conversion device, conversion information from a display device;determining, at the conversion device, whether a convergence plane of the stereoscopic image complies with the maximum positive disparity and minimum negative disparity; the maximum positive disparity of the first object;', 'the minimum negative disparity of the second object; and', 'the determination of whether the convergence plane of the stereoscopic image complies with the maximum positive disparity and minimum negative disparity;, 'adjusting, at the conversion device, the first view of the stereoscopic image by a first amount at a first edge based onadjusting, at the conversion device, the second view of the stereoscopic image by a second amount at a second edge, wherein the second amount is equal to the first amount and the second edge is opposite the first edge; andoutputting, by the conversion device, the adjusted first view and the adjusted second view of the stereoscopic image to the display device.2. The method of claim 1 ,wherein the ...
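
A simplified sketch of the adjustment: when a decoded disparity violates a display limit, both views are shifted by the same amount at opposite edges, which moves the convergence plane; the pixel-cropping arithmetic is an assumption for illustration.

def convergence_shift(max_pos_disp, min_neg_disp, pos_limit, neg_limit):
    """Return the horizontal shift (in pixels) needed to bring the decoded
    disparities back inside the display limits; 0 if already compliant."""
    if max_pos_disp > pos_limit:
        return max_pos_disp - pos_limit
    if min_neg_disp < neg_limit:
        return min_neg_disp - neg_limit
    return 0

def adjust_views(left, right, shift):
    """left/right: lists of pixel columns. Crop opposite edges by |shift| columns."""
    if shift == 0:
        return left, right
    s = abs(shift)
    if shift > 0:
        return left[s:], right[:-s]
    return left[:-s], right[s:]

l, r = list(range(10)), list(range(10))
print(adjust_views(l, r, convergence_shift(6, -2, 4, -5)))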

Publication date: 07-01-2021

RULES ON UPDATING LUTS

Number: US20210006819A1
Assignee:

Devices, systems and methods for encoding and decoding digital video using historical information containing coding candidates are described. In a representative aspect, a method for video processing includes maintaining one or more tables of motion candidates during a conversion between a current video block and a bitstream representation of a video, comparing a motion candidate associated with the current video block with a number of entries in the one or more tables, and updating the one or more tables based on the comparing. 1. A method for video processing , comprising:performing a conversion between video blocks of a video and a bitstream representation of the video and maintaining one or more tables of motion candidates during the conversion;wherein the performing a conversion comprises:constructing a motion candidate list for a current video block of the video, wherein a suitability of using the one or more tables is based on the coding characteristic of the current video block, wherein the suitability indicates whether to use the one or more tables during the constructing;performing the conversion between the current video block and the bitstream representation of the video based on the motion candidate list; andselectively using a motion candidate associated with motion information of the current video block to update the one or more tables.2. The method of claim 1 , wherein the performing a conversion comprises:encoding the current block into the bitstream representation.3. The method of claim 1 , wherein the performing a conversion comprises:decoding the current block from the bitstream representation.4. The method of claim 1 , wherein the coding characteristic comprises at least one of a coding mode of the current video block claim 1 , or a size of the current video block.5. The method of claim 1 , wherein whether to update the one or more tables is based on the coding characteristic.6. The method of claim 5 , wherein the table is not used based on the ...
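
A sketch of the update rule as an HMVP-style history table: the new motion candidate is compared against existing entries, duplicates are pruned, the candidate is appended, and the oldest entry is dropped when the table is full; the table size of 6 is an assumption.

TABLE_SIZE = 6

def update_table(table, candidate):
    table = [c for c in table if c != candidate]   # pruning by comparison with entries
    table.append(candidate)                        # newest entry goes to the end
    if len(table) > TABLE_SIZE:
        table.pop(0)                               # drop the oldest entry when full
    return table

t = []
for mv in [(1, 0), (2, 0), (1, 0), (3, 1)]:
    t = update_table(t, mv)
print(t)   # [(2, 0), (1, 0), (3, 1)] -- the duplicate (1, 0) was pruned and re-appended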

Publication date: 07-01-2021

PICTURE CODING SUPPORTING BLOCK MERGING AND SKIP MODE

Number: US20210006821A1
Assignee:

A coding efficiency increase is achieved by using a common signalization within the bitstream with regard to activation of merging and activation of the skip mode. One possible state of one or more syntax elements within the bitstream may signalize for a current sample set of a picture that the sample set is to be merged and has no prediction residual encoded and inserted into the bitstream. A common flag may signalize whether the coding parameters associated with a current sample set are to be set according to a merge candidate or to be retrieved from the bitstream, and whether the current sample set of the picture is to be reconstructed based on a prediction signal depending on the coding parameters associated with the current sample set, without any residual data, or to be reconstructed by refining the prediction signal depending on the coding parameters associated with the current sample set by means of residual data within the bitstream. 1. A decoder configured to decode a data stream having a video encoded therein , comprising:a memory; andan apparatus communicatively coupled with the memory for performing the following:extract, from the data stream, first information associated with a coding block of the video, wherein the first information has first and second states, the first state indicates that (1) the coding block is to be reconstructed based on a coding parameter of a first merge candidate coding block and (2) the coding block is to be reconstructed without residual data, wherein extract, from the data stream, second information associated with the coding block, the second information identifying the first merge candidate coding block, and', 'reconstruct the coding block based on the coding parameter of the first merge candidate coding block identified by the second information,, 'when the first information is in the first state,'} 'extract, from the data stream, residual data and third information associated with the coding block, the third ...

Publication date: 07-01-2021

VIDEO ENCODING METHOD WITH SYNTAX ELEMENT SIGNALING OF MAPPING FUNCTION EMPLOYED BY CUBE-BASED PROJECTION AND ASSOCIATED VIDEO DECODING METHOD

Number: US20210006832A1
Assignee:

A video encoding method includes: encoding a projection-based frame to generate a part of a bitstream, wherein at least a portion of a 360-degree content of a sphere is mapped to projection faces via cube-based projection, and the projection-based frame has the projection faces packed in a cube-based projection layout; and signaling at least one syntax element via the bitstream, wherein said at least one syntax element is associated with a mapping function that is employed by the cube-based projection to determine sample locations for each of the projection faces. 1. A video encoding method comprising:encoding a projection-based frame to generate a part of a bitstream, wherein at least a portion of a 360-degree content of a sphere is mapped to projection faces via cube-based projection, and the projection-based frame has the projection faces packed in a cube-based projection layout; andsignaling at least one syntax element via the bitstream, wherein said at least one syntax element is associated with a mapping function that is employed by the cube-based projection to determine sample locations for each of the projection faces.2. The video encoding method of claim 1 , wherein the mapping function is a parameterized mapping function employed by the cube-based projection to adjust sample locations for said each of the projection faces claim 1 , and said at least one syntax element signaled via the bitstream comprises a plurality of syntax elements that specify at least coefficients of the parameterized mapping function.3. The video encoding method of claim 2 , wherein for said each of the projection faces claim 2 , the plurality of syntax elements comprise:a first syntax element, arranged to specify a coefficient used in the parameterized mapping function of a first axis.4. The video encoding method of claim 3 , wherein for said each of the projection faces claim 3 , the plurality of syntax elements further comprise:a second syntax element, arranged to indicate whether ...

Подробнее
07-01-2021 дата публикации

IMAGE ENCODING APPARATUS, IMAGE ENCODING METHOD, IMAGE DECODING APPARATUS, AND IMAGE DECODING METHOD

Number: US20210006836A1
Author: Kondo Kenji
Assignee:

The present disclosure relates to an image encoding apparatus, an image encoding method, an image decoding apparatus, and an image decoding method that are capable of reducing the processing amounts of encoding and decoding. In the encoding apparatus, identification information for identifying a threshold of an orthogonal transformation maximum size is set. In a case where a coding unit is larger than the threshold of the orthogonal transformation maximum size, simple orthogonal transformation is performed on the coding unit. A simple transformation coefficient that is a result of the simple orthogonal transformation is encoded so that a bitstream including the identification information is generated. In the image decoding apparatus, identification information is parsed from a bitstream. The bitstream is decoded so that a simple transformation coefficient that is a result of simple orthogonal transformation on a coding unit is generated. Simple inverse orthogonal transformation based on a size of the coding unit is performed by referring to the identification information. The present technology is applicable to, for example, an image encoding apparatus configured to encode images and an image decoding apparatus configured to decode images. 1. An image encoding apparatus comprising:a setting section configured to set identification information for identifying a threshold of an orthogonal transformation maximum size that is a maximum size of a processing unit in orthogonal transformation of an image;an orthogonal transformation section configured to perform, in a case where a coding unit that is a processing unit in encoding of the image is larger than the threshold of the orthogonal transformation maximum size, simple orthogonal transformation on the coding unit; andan encoding section configured to encode a simple transformation coefficient that is a result of the simple orthogonal transformation by the orthogonal transformation section, to thereby generate a ...
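
A minimal sketch (names and the example threshold are assumptions): the point shown is only the size check, where a coding unit larger than the signalled maximum orthogonal-transformation size takes the reduced-complexity ("simple") transform path.

    def choose_transform(cu_width, cu_height, max_tx_size_threshold):
        # Coding units larger than the signalled threshold use the simple
        # orthogonal transformation; smaller ones use the regular transform.
        if max(cu_width, cu_height) > max_tx_size_threshold:
            return "simple_orthogonal_transform"
        return "regular_orthogonal_transform"

    # toy usage with an assumed threshold of 32 samples
    print(choose_transform(64, 64, 32))   # simple_orthogonal_transform
    print(choose_transform(16, 16, 32))   # regular_orthogonal_transform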

More details
07-01-2021 publication date

VIDEO ENCODING METHOD WITH SYNTAX ELEMENT SIGNALING OF GUARD BAND CONFIGURATION OF PROJECTION-BASED FRAME AND ASSOCIATED VIDEO DECODING METHOD AND APPARATUS

Number: US20210006838A1
Assignee:

A video decoding method includes: decoding a part of a bitstream to generate a decoded frame, and parsing at least one syntax element from the bitstream. The decoded frame is a projection-based frame that includes projection faces packed in a cube-based projection layout. At least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection. The at least one syntax element is indicative of a guard band configuration of the projection-based frame. 1. A video decoding method comprising:decoding a part of a bitstream to generate a decoded frame, wherein the decoded frame is a projection-based frame that comprises projection faces packed in a cube-based projection layout, and at least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection; andparsing at least one syntax element from the bitstream, wherein said at least one syntax element is indicative of a guard band configuration of the projection-based frame.2. The video decoding method of claim 1 , wherein packing of the projection faces is selected from a group consisting of packing of regular cubemap projection faces and packing of hemisphere cubemap projection faces.3. The video decoding method of claim 1 , wherein said at least one syntax element comprises:a first syntax element, arranged to indicate whether the projection-based frame contains at least one guard band.4. The video decoding method of claim 3 , wherein the first syntax element is set to indicate that the projection-based frame contains said at least one guard band; the projection faces correspond to faces of an object in a three-dimensional space claim 3 , respectively claim 3 , and comprise a first projection face and a second projection face claim 3 , where the object is a cube or a hemisphere cube; said at least one guard band packed in the cube-based projection layout comprises a first guard band claim 3 , where regarding the cube-based projection ...

More details
04-01-2018 publication date

Method and Apparatus for Entropy Coding of Source Samples with Large Alphabet

Number: US20180007359A1
Author: HSIANG Shih-Ta
Assignee: MEDIATEK INC.

A general entropy coding method for source symbols is disclosed. This method determines a prefix part and any suffix part for the current symbol. The method divides prefix of the source symbol into at least two parts by comparing a test value related to the prefix part against a threshold. If the test value is greater than or equal to the threshold, the method derives a first binary string by binarizing a first prefix part related to the prefix part using a first variable length code. If the test value related to the prefix part is less than the threshold, the method derives a second binary string by binarizing a second prefix part related to the prefix part using a second variable length code or a first fixed-length code. The method then encodes at least one of the first binary string and the second binary string using a CABAC mode. 1. A method of entropy coding for source symbols in an encoder comprising:receiving a current source symbol having a current symbol;determining a prefix part and any suffix part for the current symbol;if a test value related to the prefix part is greater than or equal to a threshold, deriving a first binary string by binarizing a first prefix part related to the prefix part using a first variable length code;if the test value related to the prefix part is less than the threshold, deriving a second binary string by binarizing a second prefix part related to the prefix part using a second variable length code or a first fixed-length code;deriving a third binary string by binarizing the suffix part using a second fixed-length code or a first truncated binary code if any suffix exists; andencoding at least one of the first binary string and the second binary string using a CABAC (context-adaptive binary arithmetic coding) mode.2. The method of claim 1 , wherein the prefix part is derived by applying a k-th order Exp-Golomb (EGk) binarization process to the current symbol.3. (canceled)4. The method of claim 1 , wherein the prefix part ...
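
A rough sketch (the concrete codes, the choice of test value and the threshold are assumptions, not the claimed codes): the prefix of a k-th order Exp-Golomb binarization is compared against a threshold and split into two parts that would be coded with different codes.

    def egk_binarize(value, k):
        # k-th order Exp-Golomb binarization: unary prefix followed by k suffix bits.
        prefix_len = 0
        while value >= (1 << k):
            value -= (1 << k)
            k += 1
            prefix_len += 1
        prefix = "1" * prefix_len + "0"
        suffix = format(value, "0{}b".format(k)) if k > 0 else ""
        return prefix, suffix

    def binarize_symbol(value, k, threshold):
        prefix, suffix = egk_binarize(value, k)
        test = len(prefix) - 1              # a test value related to the prefix part
        if test >= threshold:
            # long prefixes: first part with one code, the remainder with another
            return prefix[:threshold], prefix[threshold:], suffix
        # short prefixes: a single variable-length or fixed-length code suffices
        return "", prefix, suffix

    print(binarize_symbol(37, k=0, threshold=3))   # ('111', '110', '00110')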

More details
04-01-2018 publication date

ENCODING/DECODING DIGITAL FRAMES BY DOWN-SAMPLING/UP-SAMPLING WITH ENHANCEMENT INFORMATION

Number: US20180007362A1
Author: Krishnan Rathish
Assignee:

Input digital frames may be down-sampled to create one or more base frames characterized by a lower resolution than the input digital frames. Enhancement information corresponding to a difference between pixel values for the one or more input digital frames and corresponding pixel values of up-sampled versions of the one or more base frames is then created. The one base frames are encoded to form a set of base data and the enhancement information is encoded to form a set of enhancement data. The base data and enhancement data may then be transmitted over a network or stored in a memory. 1. A method for encoding one or more input digital frames , comprising:down-sampling the one or more input digital frames to create one or more base frames characterized by a lower resolution than the input digital frames;creating enhancement information corresponding to a difference between pixels values of the one or more input digital frames and corresponding pixel values of up-sampled versions of the one or more base frames;encoding the one or more base frames to form a set of base data;encoding the enhancement information to form a set of enhancement data; andtransmitting the set of base data and the set of enhancement data over a network or storing the set of base data and the set of enhancement data in a memory.2. The method of claim 1 , wherein the enhancement information is created in such a way as to minimize an arithmetic difference between the pixel values of the one or more input digital frames and corresponding pixel values of the up-sampled versions of the one or more base frames.3. The method of claim 1 , wherein an average time needed to generate a frame by decoding the base data and enhancement data is not higher than a time needed to decode an input frame encoded without down-sampling and without using an enhancement data.4. The method of claim 1 , wherein the enhancement information is encoded in a video format.5. The method of claim 4 , wherein the video format ...
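
A self-contained sketch (1-D samples, factor-2 decimation and nearest-neighbour up-sampling are assumptions chosen for brevity): the enhancement information is the difference between the input samples and the up-sampled base frame, so adding it back reproduces the input.

    def downsample(samples):
        return samples[::2]                      # simple factor-2 decimation

    def upsample(samples):
        out = []
        for s in samples:                        # nearest-neighbour up-sampling
            out.extend([s, s])
        return out

    def encode(frame):
        base = downsample(frame)                 # lower-resolution base frame
        enhancement = [o - u for o, u in zip(frame, upsample(base))]
        return base, enhancement                 # both are then encoded separately

    def decode(base, enhancement):
        return [u + e for u, e in zip(upsample(base), enhancement)]

    frame = [10, 12, 11, 15, 14, 13, 16, 18]
    base, enh = encode(frame)
    assert decode(base, enh) == frame            # enhancement restores the original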

More details
04-01-2018 publication date

IMAGE DATA ENCAPSULATION

Number: US20180007407A1
Assignee:

A method of encapsulating an encoded bitstream representing one or more images, the encapsulated bitstream comprising a data part and a metadata part. The method including providing image item information identifying a portion of the data part representing a sub-image or an image of a single image; providing image description information comprising parameters including display parameters and/or transformation operators relating to one or more images and outputting said bitstream together with said provided information as an encapsulated data file, wherein the image description information is stored in the metadata part. 1. A method of encapsulating an encoded bitstream representing one or more images , the encapsulated bitstream comprising a data part and a metadata part , the method comprising:providing image item information identifying a portion of the data part representing a sub-image or an image of a single image;providing image description information comprising parameters including display parameters and/or transformation operators relating to one or more images andoutputting the bitstream together with the provided information as an encapsulated data file;wherein the image description information is stored in the metadata part.2. The method according to claim 1 , wherein the display parameters comprise one or several parameters among:image position and size;pixel aspect ratio,color information, andthe transformation operators comprise one or several transformation operators among:cropping,rotation.3. The method according to claim 1 , wherein each parameter comprised in the image description information is associated with additional data which comprises:type information, and/oran identifier used to link an image item information to the parameter.4. The method according to claim 1 , wherein metadata part is included in an ISOBMFF's ‘meta’ data box.5. The method according to claim 1 , wherein each transformation operators comprised in the image description ...

More details
04-01-2018 publication date

TRANSMITTING DEVICE, TRANSMITTING METHOD, RECEIVING DEVICE, AND RECEIVING METHOD

Number: US20180007423A1
Author: Tsukagoshi Ikuo
Assignee: SONY CORPORATION

The present invention enables a receiving side to easily recognize a high-quality format corresponding to encoded image data included in an extended video stream. Two video streams including a basic video stream including encoded image data of basic format image data, and an extended video stream including encoded image data of high-quality format image data of one type selected from a plurality of types are generated. A container of a predetermined format including the basic video stream and the extended video stream is transmitted. Information indicating a high-quality format corresponding to the encoded image data included in the extended video stream is inserted into the extended video stream and/or the container. 1. A transmitting device comprising:an image encoding unit configured to generate two video streams including a basic video stream including encoded image data of basic format image data, and an extended video stream including encoded image data of high-quality format image data of one type selected from among a plurality of types;a transmitting unit configured to transmit a container of a predetermined format including the basic video stream and the extended video stream; andan information insertion unit configured to insert information indicating a high-quality format corresponding to the encoded image data included in the extended video stream into the extended video stream and/or the container.2. The transmitting device according to claim 1 , whereinthe image encoding unitperforms, regarding the basic format image data, prediction encoding processing of an inside of the basic format image data, to obtain encoded image data, andselectively performs, regarding the high-quality format image data, prediction encoding processing of an inside of the high-quality format image data and prediction encoding processing of between the high-quality format image data and the basic format image data, to obtain encoded image data.3. The transmitting device ...

More details
03-01-2019 publication date

Packed Image Format for Multi-Directional Video

Number: US20190007669A1
Assignee:

Frame packing techniques are disclosed for multi-directional images and video. According to an embodiment, a multi-directional source image is reformatted into a format in which image data from opposing fields of view are represented in respective regions of the packed image as flat image content. Image data from a multi-directional field of view of the source image between the opposing fields of view are represented in another region of the packed image as equirectangular image content. It is expected that use of the formatted frame will lead to coding efficiencies when the formatted image is processed by predictive video coding techniques and the like. 1. A method of representing multi-directional image data , comprising:forming a formatted image from a multi-directional source image in which image data from opposing fields of view are represented in respective first and second regions of the packed image, and image data from a multi-directional field of view between the opposing fields of view are represented in a third region of the formatted image.2. The method of claim 1 , wherein the source image is a cube map image and the image data in the third region is a cylindrical projection of content from the cube map image.3. The method of claim 1 , wherein the source image is an equirectangular image.4. The method of claim 1 , wherein the source image is a segmented sphere image.5. The method of claim 1 , wherein the source image is a truncated pyramid-based image.6. The method of claim 1 , wherein the source image is a polygonal-based image.7. The method of claim 1 , further comprising compressing the formatted image by predictive video compression.8. The method of claim 7 , further comprising decoding the coded formatted image and storing the decoded formatted image in a memory claim 7 , wherein the decoded image has the formatted image format.9. The method of claim 7 , wherein the video compression codes data of the formatted image on a pixel block by pixel ...

More details
03-01-2019 publication date

ARRANGEMENTS AND METHODS THEREOF FOR PROCESSING VIDEO

Number: US20190007691A1
Assignee:

A method performed by a video encoder for encoding a current picture belonging to a temporal level identified by a temporal_id. The method includes determining a Reference Picture Set (RPS) for the current picture indicating reference pictures that are kept in a decoded picture buffer (DPB) when decoding the current picture, and when the current picture is a temporal switching point. The method further comprises operating to ensure that the RPS of the current picture includes no picture having a temporal_id greater than or equal to the temporal_id of the current picture. 1. A video encoder for encoding a current picture belonging to a temporal level identified by a temporal_id , the video encoder comprising:at least one processor; and determining a Reference Picture Set (RPS) for the current picture indicating reference pictures that are kept in a decoded picture buffer (DPB) when decoding the current picture; and', 'encoding the current picture ensuring that the RPS of the current picture includes no picture having a temporal_id greater than or equal to the temporal_id of the current picture when the current picture is a temporal switching point, wherein the current picture, being a temporal switching point, is a temporal sub-layer access (TSA) picture, the TSA picture and all coded pictures with temporal_id greater than or equal to the temporal_id of the TSA picture that follow the TSA picture in decoding order shall not include any picture in their reference picture set that precedes the TSA picture in decoding order and for which temporal_id is greater than or equal to the temporal_id of the TSA picture., 'at least one memory storing program code that is executed by the at least one processor to perform operations comprising2. The video encoder according to claim 1 , wherein the temporal layer switching point is a coded picture for which each slice has a unique nal_unit_type.3. The video encoder according to claim 2 , wherein the TSA picture is a coded picture ...
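
A small sketch (reference candidates are assumed to be (POC, temporal_id) pairs from the DPB): when the current picture is a temporal switching point, every candidate whose temporal_id is greater than or equal to the current temporal_id is excluded from the reference picture set.

    def build_rps_for_switching_point(candidate_refs, current_tid, is_switching_point):
        if not is_switching_point:
            return candidate_refs
        # Keep only references from lower temporal levels.
        return [(poc, tid) for (poc, tid) in candidate_refs if tid < current_tid]

    dpb = [(0, 0), (2, 1), (3, 2)]
    print(build_rps_for_switching_point(dpb, current_tid=2, is_switching_point=True))
    # [(0, 0), (2, 1)]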

More details
02-01-2020 publication date

METHOD AND APPARATUS FOR VIDEO CODING

Number: US20200007868A1
Assignee: Tencent America LLC

A method and an apparatus for video coding are provided. The apparatus includes processing circuitry. Prediction information of a first block from a coded video bitstream is decoded by the processing circuitry. The first block is a non-square block and the prediction information of the first block is indicative of a first intra prediction direction mode in a first set of intra prediction direction modes that is associated with the non-square block. The first set of intra prediction direction modes includes a subset of a second set of intra prediction direction modes that is associated with a square block and at least one additional intra prediction direction mode that is different from the second set of intra prediction direction modes. At least one sample of the first block is subsequently reconstructed by the processing circuitry according to the first intra prediction direction mode. 1. A method for video decoding in a decoder , comprising:decoding prediction information of a first block from a coded video bitstream, the first block being a non-square block and the prediction information of the first block being indicative of a first intra prediction direction mode in a first set of intra prediction direction modes that is associated with the non-square block, the first set of intra prediction direction modes including a subset of a second set of intra prediction direction modes that is associated with a square block and at least one additional intra prediction direction mode that is different from the second set of intra prediction direction modes; andreconstructing at least one sample of the first block according to the first intra prediction direction mode, whereinthe second set of intra prediction direction modes includes at least one intra prediction direction mode that is different from the first set of intra prediction direction modes.2. (canceled)3. The method of claim 1 , wherein a number of the modes in the first set of intra prediction direction modes ...

More details
02-01-2020 publication date

IMAGE ENCODING/DECODING METHOD AND DEVICE

Number: US20200007876A1

Disclosed are an image encoding/decoding method and device supporting a plurality of layers. The image decoding method supporting the plurality of layers comprises the steps of; receiving a bitstream comprising the plurality of layers; and decoding the bitstream so as to acquire maximum number information about sublayers with respect to each of the plurality of layers. 1. A method for picture decoding supporting layers , the method comprising:receiving a bitstream comprising the layers;acquiring information on a maximum number of sub-layers for each of the layers by decoding the bitstream;acquiring a residual block of a current block by decoding the bitstream; andgenerating a reconstructed block of the current block using the residual block,wherein the information on the maximum number of sub-layers is included in video parameter set extension information and signaled, andwherein a video parameter set comprises information on a maximum number of sub-layers.2. The method of claim 1 , wherein the information on the maximum number of sub-layers for each of the layers is acquired in accordance with flag information representing whether the information on the maximum number of sub-layers is present.3. The method of claim 1 , wherein the acquiring of the information on the maximum number of sub-layers for each of the layers comprises acquiring the information on the maximum number of sub-layers for a layer in which the maximum number of sub-layers signaled in the video parameter extension information is different from the maximum number of sub-layers signaled in a video parameter set.4. The method of claim 3 , wherein the acquiring of the information on the maximum number of sub-layers for each of the layers further comprises acquiring the information on the maximum number of sub-layers for each of the layers based on flag information representing whether the maximum number of sub-layers signaled in the video parameter extension information is equal to the maximum number ...

More details
27-01-2022 publication date

Encoder, a Decoder and Corresponding Methods

Number: US20220030222A1
Author: Ma Xiang, Yang Haitao
Assignee:

A method of decoding a coded video bitstream includes obtaining a sequence parameter set (SPS)-level syntax element from the bitstream, wherein that the SPS-level syntax element equals to a preset value specifies that no video parameter set (VPS) is referred to by a SPS, and the SPS-level syntax element greater than the preset value specifies that the SPS refers to a VPS, obtaining, as the SPS-level syntax element is greater than the preset value, an inter-layer enabled syntax element specifying whether one or more inter-layer reference pictures (ILRPs) are enabled to be used for the inter prediction of one or more coded pictures, and predicting one or more coded pictures based on the value of the inter-layer enabled syntax element. 1. A method of decoding a coded video bitstream , the method comprising:obtaining a sequence parameter set (SPS)-level syntax element from the coded video bitstream, wherein that the SPS-level syntax element equals to a preset value indicates that no video parameter set (VPS) is referred to by an SPS, and wherein the SPS-level syntax element greater than the preset value indicates that the SPS refers to a VPS;obtaining, from the coded video bitstream, an inter-layer enabled syntax element indicating whether one or more inter-layer reference pictures (ILRPs) are enabled to be used for performing inter prediction of one or more coded pictures when the SPS-level syntax element is greater than the preset value; andperforming the inter prediction on the one or more coded pictures based on a value of the inter-layer enabled syntax element.2. The method of claim 1 , wherein the VPS comprises a plurality of syntax elements describing inter-layer prediction information of layers in a coded video sequence (CVS) comprising the one or more ILRPs and the one or more coded pictures claim 1 , and wherein the SPS comprises the SPS-level syntax element and the inter-layer enabled syntax element.3. The method of claim 2 , wherein performing the inter ...
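
A rough sketch (the reader callbacks, the variable names and the inference to 0 are assumptions): when the SPS-level syntax element equals the preset value, no VPS is referenced and the inter-layer enabled syntax element is not read.

    def parse_sps_fragment(read_ue, read_flag, preset_value=0):
        sps_vps_id = read_ue()                   # the SPS-level syntax element
        if sps_vps_id > preset_value:
            inter_layer_enabled = read_flag()    # ILRPs may be used for inter prediction
        else:
            inter_layer_enabled = 0              # no VPS referenced: treated as disabled
        return sps_vps_id, inter_layer_enabled

    # toy usage with canned reader callbacks
    print(parse_sps_fragment(read_ue=lambda: 1, read_flag=lambda: 1))   # (1, 1)
    print(parse_sps_fragment(read_ue=lambda: 0, read_flag=lambda: 1))   # (0, 0)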

More details
27-01-2022 publication date

METHOD AND APPARATUS FOR POINT CLOUD CODING

Number: US20220030258A1
Author: Gao Wen, Liu Shan, ZHANG Xiang
Assignee: Tencent America LLC

An apparatus for point cloud decoding includes processing circuitry. The processing circuitry receives, from a coded bitstream for a point cloud, encoded occupancy codes for nodes in an octree structure for the point cloud. The nodes in the octree structure correspond to three dimensional (3D) partitions of a space of the point cloud. Sizes of the nodes are associated with sizes of the corresponding 3D partitions. Further, the processing circuitry decodes, from the encoded occupancy codes, occupancy codes for the nodes. At least a first occupancy code for a child node of a first node is decoded without waiting for a decoding of a second occupancy code for a second node having a same node size as the first node. Then, the processing circuitry reconstructs the octree structure based on the decoded occupancy codes for the nodes, and reconstructs the point cloud based on the octree structure. 1. A method for point cloud coding , comprising:receiving point cloud data of a point cloud; first nodes having first sizes larger than a threshold node size for coding order change, and', 'second nodes having second sizes equal to or smaller than the threshold node size for coding order change, the second nodes being arranged into one or more sub octrees;, 'deriving an octree structure of the point cloud based on the point cloud data, nodes in the octree structure corresponding to three dimensional (3D) partitions of a space of the point cloud, sizes of the nodes being associated with sizes of the corresponding 3D partitions, the nodes in the octree structure including'} a first portion of the sequence of occupancy codes for the first nodes arranged according to a first coding order, and', 'a second portion of the sequence of occupancy codes including one or more subsets of occupancy codes for the one or more sub octrees respectively arranged according to a second coding order, the second coding order being different from the first coding order;, 'generating, by processing ...

More details
27-01-2022 publication date

METHODS AND APPARATUSES FOR PERFORMING ENCODING AND DECODING ON IMAGE

Number: US20220030260A1
Assignee: SAMSUNG ELECTRONICS CO., LTD.

Provided is a computer-recordable recording medium having stored thereon a video file including artificial intelligence (AI) encoding data, wherein the AI encoding data includes: image data including encoding information of a low resolution image generated by AI down-scaling a high resolution image; and AI data about AI up-scaling of the low resolution image reconstructed according to the image data, wherein the AI data includes: AI target data indicating whether AI up-scaling is to be applied to at least one frame; and AI supplementary data about up-scaling deep neural network (DNN) information used for AI up-scaling of the at least one frame from among a plurality of pieces of pre-set default DNN configuration information, when AI up-scaling is applied to the at least one frame. 1. A server for providing an image by using artificial intelligence (AI) , the server comprising:one or more processors configured to execute one or more instructions stored in the server to:select a down-scaling deep neural network (DNN) setting information among a plurality of down-scaling DNN setting information for AI down-scaling an original image of at least one frame,obtain a down-scaled image of the at least one frame by performing the AI down-scaling of the original image of the at least one frame through a down-scaling DNN which is set with the selected down-scaling DNN setting information, andobtain AI data related to the AI down-scaling and obtain image data by encoding the down-scaled image of the at least one frame,to obtain a video file including the image data and the AI data.2. An electronic device for displaying an image by using an artificial intelligence (AI) , the electronic device comprising:a display; andone or more processors configured to execute one or more instructions stored in the electronic device to:receive a video file including image data and AI data about AI up-scaling of the image data,obtain the AI data and obtain the image data, reconstruct a down- ...

More details
27-01-2022 publication date

CONFIGURING LUMA-DEPENDENT CHROMA RESIDUE SCALING FOR VIDEO CODING

Number: US20220030267A1
Assignee:

A method for video processing is provided to include: performing a conversion between a current video block of a video that is a chroma block and a coded representation of the video, wherein, during the conversion, the current video block is constructed based on a first domain and a second domain, and wherein the conversion further includes applying a forward reshaping process and/or an inverse reshaping process to one or more chroma components of the current video block. 1. A method of processing video data , comprising:generating, for a conversion between a current luma video block of a video and a bitstream of the video, prediction luma samples for the current luma video block;performing, a forward mapping process for the current luma video block, in which the prediction luma samples are converted from an original domain to a reshaped domain to generate modified prediction luma samples;performing the conversion based on the modified prediction luma samples;wherein for a current chroma video block corresponding to the current luma video block, whether to apply a scaling process on chroma residual samples of the current chroma video block is based on a dimension of the current luma video block,wherein in the scaling process, the chroma residual samples are scaled before being used to reconstruct the current chroma video block,wherein an inverse mapping process which is an inverse operation of the forward mapping process is applied after reconstructing the current luma video block, andwherein a piecewise linear model is used to map the samples of the current luma video block into particular values during the forward mapping process.2. The method of claim 1 , wherein whether to apply the scaling process is based on a number of luma samples contained in the current luma video block.3. The method of claim 2 , wherein the scaling process is disabled in a case that the number of the luma samples are less than a first specific value.4. The method of claim 3 , wherein when ...
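
A minimal sketch (the 64-sample threshold and the flat scale factor are assumptions, not values from the application): chroma residual scaling is switched on or off from the size of the corresponding luma block and, when enabled, the residual samples are scaled before reconstruction.

    def chroma_residual_scaling_enabled(luma_width, luma_height, min_luma_samples=64):
        # Scaling is disabled when the corresponding luma block has too few samples.
        return luma_width * luma_height >= min_luma_samples

    def scale_chroma_residual(residual, scale, enabled):
        # Chroma residual samples are scaled before being used for reconstruction.
        return [r * scale for r in residual] if enabled else list(residual)

    enabled = chroma_residual_scaling_enabled(4, 4)          # 16 samples < 64 -> disabled
    print(enabled, scale_chroma_residual([2, -3, 1], 0.5, enabled))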

More details
10-01-2019 publication date

CIRCUIT DEVICE, ELECTRO-OPTICAL DEVICE, ELECTRONIC APPARATUS, MOBILE BODY, AND ERROR DETECTION METHOD

Number: US20190013826A1
Assignee: SEIKO EPSON CORPORATION

A circuit device in which a processing load of a processing device with respect to error detection performed on image data can be reduced, and an electro-optical device, an electronic apparatus, a mobile body, an error detection method and the like. The circuit device includes: an interface unit that receives image data; and an error detection unit that performs error detection. The interface unit receives the image data including display image data and error detection data that includes at least position information regarding an error detection region, and the error detection unit performs the error detection on the display image data based on the display image data of the error detection region that is specified by the position information. 1. A circuit device comprising:an interface unit that receives image data; andan error detection unit that performs error detection,wherein the interface unit receives the image data including display image data and error detection data that includes at least position information regarding an error detection region, andthe error detection unit performs the error detection on the display image data based on the display image data of the error detection region that is specified by the position information.2. The circuit device according to claim 1 ,wherein the error detection data further includes expectation value information that is used in the error detection, andthe error detection unit performs the error detection based on the expectation value information.3. The circuit device according to claim 1 ,{'sup': 'th', 'wherein the image data includes second to npieces of error detection data (n is an integer of two or more), and'}{'sup': th', 'th', 'th, 'i(i is an integer that satisfies 2≤i≤n) error detection data of the second to npieces of error detection data includes the position information corresponding to an ierror detection region.'}4. The circuit device according to claim 1 ,wherein the error detection unit performs the ...
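
An illustrative sketch (the (x, y, width, height) form of the position information and the use of CRC-32 as the expectation value are assumptions; the excerpt does not name the actual check): the error detection runs only over the region of the display image data identified by the position information.

    import zlib

    def check_region(display_image, region, expected_crc):
        x, y, w, h = region                      # position information of the region
        data = bytes(px for row in display_image[y:y + h] for px in row[x:x + w])
        return zlib.crc32(data) == expected_crc  # compare against the expectation value

    image = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
    region = (1, 0, 2, 2)                        # pixels 1, 2, 5, 6
    expected = zlib.crc32(bytes([1, 2, 5, 6]))
    print(check_region(image, region, expected))   # True -> no error detected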

More details
11-01-2018 publication date

SKIPPING EVALUATION STAGES DURING MEDIA ENCODING

Number: US20180014017A1
Author: Li Bin, Xu Jizheng
Assignee: Microsoft Technology Licensing, LLC

Various innovations in media encoding are presented herein. In particular, the innovations can reduce the computational complexity of encoding by selectively skipping certain evaluation stages during encoding. For example, based on analysis of decisions made earlier in encoding or based on analysis of media to be encoded, an encoder can selectively skip evaluation of certain coding tools (such as residual coding or rate-distortion-optimized quantization), skip evaluation of certain values for parameters or settings (such as candidate unit sizes or transform sizes, or candidate partition patterns for motion compensation), and/or skip evaluation of certain coding modes (such as frequency transform skip mode) that are not expected to improve rate-distortion performance during encoding. 1. In a computer system , a method of media encoding comprising:encoding a first picture among multiple pictures to produce encoded data for the first picture;determining a threshold unit size for the first picture, the threshold unit size indicating a unit size at or below which a threshold proportion of content of the first picture is reached;outputting the encoded data for the first picture;encoding a second picture among the multiple pictures to produce encoded data for the second picture, the second picture following the first picture in coding order, including limiting unit size for at least part of the second picture based at least in part on the threshold unit size for the first picture; andoutputting the encoded data for the second picture.2. The method of claim 1 , wherein the content of the first picture claim 1 , for purposes of the threshold unit size claim 1 , is intra-picture-coded units of the first picture.3. The method of claim 2 , wherein the first picture is entirely encoded using intra-picture-coded units claim 2 , wherein the second picture is at least partially encoded using inter-picture-coded units claim 2 , and wherein unit size is limited for any intra-picture- ...
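
A sketch under stated assumptions (the per-unit area bookkeeping, the 90% proportion and the example numbers are illustrative): the threshold unit size is the smallest size at or below which the chosen proportion of the first picture's content is covered; larger candidate unit sizes can then be skipped when encoding the following picture.

    def threshold_unit_size(unit_sizes_and_areas, proportion=0.9):
        total = sum(area for _, area in unit_sizes_and_areas)
        covered = 0
        for size in sorted({s for s, _ in unit_sizes_and_areas}):
            covered += sum(a for s, a in unit_sizes_and_areas if s == size)
            if covered / total >= proportion:
                return size
        return max(s for s, _ in unit_sizes_and_areas)

    units = [(8, 3072), (16, 3072), (32, 512)]   # (unit size, covered samples)
    print(threshold_unit_size(units))            # 16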

More details
11-01-2018 publication date

METHOD FOR DEPTH LOOKUP TABLE SIGNALING

Number: US20180014029A1
Assignee:

A method and apparatus for depth lookup table (DLT) signaling in a three-dimensional and multi-view coding system. The method identifies one or more pictures to be processed. If one or more pictures contain depth data, then the method determines the DLT associated with said one or more pictures, applies predictive coding to the DLT based on the previous DLT, includes syntax related to the DLT in the PPS, and includes first bit-depth information related to first depth samples of the DLT in the PPS. The first bit-depth information is consistent with second bit depth information signaled in a sequence level. The method further signals the PPS in a video bitstream for a sequence including said one or more pictures. A circuit is also provided that embodies circuitry configured to carry out the operations specified above. 1. A method of depth coding using a depth lookup table (DLT) in a three-dimensional and multi-view coding system , the method comprising:identifying one or more pictures to be processed; determining the DLT associated with said one or more pictures;', 'applying predictive coding to the DLT based on the previous DLT;', 'including syntax related to the DLT in the PPS; and', 'including first bit-depth information related to first depth samples of the DLT in the PPS, wherein the first bit-depth information is consistent with second bit depth information signaled in a sequence level; and, 'if said one or more pictures contain depth datasignaling the PPS in a video bitstream for a sequence including said one or more pictures.2. The method of claim 1 , wherein the second bit depth information signaled in the sequence level corresponds to luma samples.3. The method of claim 1 , wherein the second bit depth information signaled in the sequence level is bit_depth_luma_minus8.4. The method of claim 1 , wherein the second bit depth information signaled in the sequence level is for second depth luma samples of a sequence containing said one or more pictures.5. The ...
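
One possible, purely illustrative form of the predictive coding mentioned above (representing the DLT update as added and removed entries is an assumption, not the claimed scheme): the current depth lookup table is coded relative to the previous one so that only the differences need to be signalled.

    def predict_dlt(current_dlt, previous_dlt):
        prev, cur = set(previous_dlt), set(current_dlt)
        return sorted(cur - prev), sorted(prev - cur)      # entries added, entries removed

    def reconstruct_dlt(previous_dlt, added, removed):
        return sorted((set(previous_dlt) - set(removed)) | set(added))

    prev_dlt = [0, 16, 32, 64, 128, 255]
    cur_dlt = [0, 16, 48, 64, 128, 255]
    added, removed = predict_dlt(cur_dlt, prev_dlt)
    assert reconstruct_dlt(prev_dlt, added, removed) == cur_dlt
    print(added, removed)                                  # [48] [32]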

More details
11-01-2018 publication date

SYNTAX STRUCTURES INDICATING COMPLETION OF CODED REGIONS

Number: US20180014033A1
Assignee: Microsoft Technology Licensing, LLC

Syntax structures that indicate the completion of coded regions of pictures are described. For example, a syntax structure in an elementary bitstream indicates the completion of a coded region of a picture. The syntax structure can be a type of network abstraction layer unit, a type of supplemental enhancement information message or another syntax structure. For example, a media processing tool such as an encoder can detect completion of a coded region of a picture, then output, in a predefined order in an elementary bitstream, syntax structure(s) that contain the coded region as well as a different syntax structure that indicates the completion of the coded region. Another media processing tool such as a decoder can receive, in a predefined order in an elementary bitstream, syntax structure(s) that contain a coded region of a picture as well as a different syntax structure that indicates the completion of the coded region. 1. A computing system including:a buffer configured to store, as part of an elementary bitstream, one or more syntax structures that contain a coded region for a region of an image or video, and, after the one or more syntax structures that contain the coded region, a different syntax structure that indicates completion of the coded region, the different syntax structure including a next slice segment address that indicates a slice segment address for a next slice segment header when the slice segment address for the next slice segment header is present in the elementary bitstream; anda media processing tool configured to detect the completion of the coded region using the different syntax structure.2. The computing system of claim 1 , wherein the media processing tool is further configured to:decode the coded region to reconstruct the region.3. (canceled)4. The computing system of claim 1 , wherein the different syntax structure has a type that designates the different syntax structure as an end-of-region indicator.5. The computing system of ...

More details
14-01-2016 publication date

ADAPTIVE BITRATE STREAMING FOR WIRELESS VIDEO

Number: US20160014418A1
Assignee:

Techniques related to adaptive bitrate streaming for wireless video are discussed. Such techniques may include determining candidate bitrates for encoding segments of a source video. A minimum of the candidate bitrates may be selected and a segment of the source video may be encoded based on the selected encoding bitrate. The encoded bitstream may be transmitted wirelessly from a transmitting device to a receiving device, which may decode the bitstream and present the decoded video to a user. 1. A computer-implemented method for encoding video content for wireless transmission comprising: wherein the first candidate bitrate comprises a bitrate for the source video encoded using a second video codec modified by a scaling factor,', 'wherein the second candidate bitrate comprises an average bitrate for at least one of the segment of the source video encoded using the first video codec or a previous segment of the source video encoded using the first video codec, and', 'wherein the third candidate bitrate comprises an encoding bitrate prediction for at least one of the segment encoded using the first video codec or the previous segment encoded using the first video codec;, 'determining a first, a second, and a third candidate bitrate for encoding, using a first video codec, a segment of a source video,'}selecting an encoding bitrate for the segment of the source video as a minimum of the first, second, and third candidate bitrates; andencoding, using the first video codec, the segment of the source video based on the selected encoding bitrate.2. The method of claim 1 , further comprising:determining a fourth candidate bitrate for encoding, using the first video codec, a second segment of a second source video, wherein the fourth candidate bitrate comprises a bitrate for the second source video encoded using a third video codec modified by a second scaling factor, and wherein the scaling factor is different than the second scaling factor.3. The method of claim 1 , ...
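
A compact sketch (the example bitrates and the scaling factor are assumptions): the encoding bitrate for a segment is simply the minimum of the three candidates described above.

    def select_encoding_bitrate(other_codec_bitrate, scaling_factor,
                                recent_average_bitrate, predicted_bitrate):
        candidate_1 = other_codec_bitrate * scaling_factor   # scaled bitrate of the second codec
        candidate_2 = recent_average_bitrate                 # average over current/previous segments
        candidate_3 = predicted_bitrate                      # prediction for this/previous segment
        return min(candidate_1, candidate_2, candidate_3)

    print(select_encoding_bitrate(8_000_000, 0.6, 5_500_000, 5_000_000))   # 4800000.0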

More details
14-01-2016 publication date

Encoding Perceptually-Quantized Video Content In Multi-Layer VDR Coding

Number: US20160014420A1
Assignee: Dolby Laboratories Licensing Corp

Input VDR images are received. A candidate set of function parameter values for a mapping function is selected from multiple candidate sets. A set of image blocks of non-zero standard deviations in VDR code words in at least one input VDR image is constructed. Mapped code values are generated by applying the mapping function with the candidate set of function parameter values to VDR code words in the set of image blocks in the at least one input VDR image. Based on the mapped code values, a subset of image blocks of standard deviations below a threshold value in mapped code words is determined as a subset of the set of image blocks. Based at least in part on the subset of image blocks, it is determined whether the candidate set of function parameter values is optimal for the mapping function to map the at least one input VDR image.
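
A small sketch (blocks are assumed to be lists of VDR code words, and a coarse right-shift quantization stands in for the candidate mapping function): blocks whose mapped code words fall below the standard-deviation threshold reveal detail that the candidate parameter set would flatten.

    from statistics import pstdev

    def flattened_blocks(blocks, mapping_fn, threshold):
        result = []
        for block in blocks:                      # blocks with non-zero std dev in VDR code words
            mapped = [mapping_fn(v) for v in block]
            if pstdev(mapped) < threshold:        # std dev below threshold after mapping
                result.append(block)
        return result

    mapping = lambda v: v >> 8                    # toy candidate mapping
    blocks = [[1000, 1010, 1020, 1030], [4000, 4600, 5200, 5800]]
    print(len(flattened_blocks(blocks, mapping, threshold=0.5)))   # 1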

More details
14-01-2016 publication date

Methods and Systems for Detecting Block Errors in a Video

Number: US20160014433A1
Assignee: INTERRA SYSTEMS, INC.

Systems and Methods for efficient and reliable detection of error blocks in a video based on detecting one or more candidate blocks in a region of interest and then verifying the block error on the basis of the patterns formed inside the candidate block and its distinction from the surrounding blocks spatially and/or temporally. 1. A processor implemented method for detecting block errors in a video sequence having a plurality of frames , the frame being composed of a plurality of top fields and a plurality of bottom fields , comprising the steps of:a. detecting a scene change and determining a set of frames corresponding to a single scene;b. detecting motion between two consecutive top fields or between two consecutive bottom fields corresponding to the set of frames determined in step a, and determining one or more blocks in the top field or bottom field having motion, wherein each block is a matrix of predetermined size containing pixels; calculating a vertical gradient for the motion area of the current top field and current bottom field;', 'thresholding the vertical gradient using a predefined threshold;', 'processing the thresholded image using a morphological operation for getting regions corresponding to block errors;', 'determining a corresponding horizontal edge and a corresponding vertical edge for each rectangular region; and', 'creating one or more candidate blocks using the horizontal edge and the vertical edge; and, 'c. determining one or more candidate blocks within the top field or bottom field determined in step b, comprising the steps of determining number of intensity transitions in horizontal direction within the candidate blocks, and comparing the number of transitions with a first predefined threshold;', 'if the number of intensity transitions is greater than the first predefined threshold then getting two separate sub-blocks each for even and odd vertical lines of the candidate blocks, and determining standard deviation for each of the sub- ...

More details
10-01-2019 publication date

MOTION VECTOR CALCULATION METHOD

Number: US20190014341A1
Assignee:

When a block (MB) of which motion vector is referred to in the direct mode contains a plurality of motion vectors, motion vectors MV and MV, which are used for inter picture prediction of a current picture (P) to be coded, are determined by scaling a value obtained from averaging the plurality of motion vectors or selecting one of the plurality of the motion vectors. 1. A coding method for coding a current block included in a current picture in direct mode , the coding method comprising:specifying a co-located block which is a block included in a second picture that is different from the current picture, the co-located block being located in the second picture at the same position that the current block is located in the current picture;determining a first motion vector and a second motion vector of the current block for performing motion compensation on the current block, using a third motion vector which is a motion vector of the co-located block;generating a first predictive image of the current block using the first motion vector of the current block and a second predictive image of the current block using the second motion vector of the current block;generating a predictive image of the current block based on the first predictive image and the second predictive image;generating a difference image of the current block between the current block and the predictive image of the current block; andcoding the difference image of the current block to obtain coded data of the current block,wherein the co-located block is motion-compensated using a first motion vector corresponding to a first reference picture of the co-located block and a second motion vector corresponding to a second reference picture of the co-located block,wherein, in the case where the first reference picture of the co-located block is stored in a long-term picture buffer and a second reference picture of the co-located block is stored in a short-term picture buffer, (i) the third motion vector is ...
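
An illustrative sketch (the averaging rule, the integer handling and the distance names td/tb are assumptions, not the claimed procedure): the co-located block's motion vectors are averaged and then scaled by the ratio of picture distances.

    def scale_motion_vector(mv, td, tb):
        # td: distance between the co-located picture and its reference picture
        # tb: distance between the current picture and its reference picture
        return (mv[0] * tb // td, mv[1] * tb // td)

    def derive_direct_mode_mv(colocated_mvs, td, tb):
        avg = (sum(x for x, _ in colocated_mvs) // len(colocated_mvs),
               sum(y for _, y in colocated_mvs) // len(colocated_mvs))
        return scale_motion_vector(avg, td, tb)

    print(derive_direct_mode_mv([(8, -4), (12, 0)], td=4, tb=2))   # (5, -1)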

More details
10-01-2019 publication date

ENHANCED HIGH-LEVEL SIGNALING FOR FISHEYE VIRTUAL REALITY VIDEO IN DASH

Number: US20190014350A1
Author: Wang Yekui
Assignee:

A method of processing a manifest file including one or more syntax elements in an adaptation set level syntax structure that specify attributes of one or more representations of a corresponding adaptation set, the one or more representations including fisheye video data, determining, based on the one or more syntax elements, the attributes of the one or more representations including the fisheye video data, and retrieving, based on the determination, at least a portion of a segment of the one of the representations including the fisheye video data, the segment comprising an independently retrievable media file. 1. A method of processing a file including video data , the method comprising:processing a manifest file including one or more syntax elements at an adaptation set level syntax structure that specify attributes of one or more representations of a corresponding adaptation set, the one or more representations including fisheye video data;determining, based on the one or more syntax elements, the attributes of the one or more representations including the fisheye video data; andretrieving, based on the determination, at least a portion of a segment of one of the representations including the fisheye video data, the segment comprising an independently retrievable media file.2. The method of claim 1 , wherein the attributes of the one or more representations including the fisheye video data include at least one of an indication of monoscopic fisheye video data or an indication of stereoscopic fisheye video data.3. The method of claim 1 , wherein the one or more syntax elements include a fisheye video information (FVI) descriptor.4. The method of claim 3 , wherein the FVI descriptor includes a view_dimension_idc syntax element claim 3 , wherein a value of 0 for the view_dimension_idc syntax element indicates the fisheye video data is stereoscopic fisheye video data claim 3 , and wherein a value of 1 for the view_dimension_idc syntax element indicates the fisheye ...

More details
10-01-2019 publication date

MOVING IMAGE CODING DEVICE, A MOVING IMAGE CODING METHOD, AND A MOVING IMAGE DECODING DEVICE

Number: US20190014351A1
Assignee:

A hierarchical moving image decoding device () includes a profile information decoding unit () that decodes/configures sublayer profile information after decoding a sublayer profile present flag regarding respective sublayers, and a level information decoding unit () that decodes/configures sublayer level information. 1. (canceled)2. A moving image coding device which codes image information and generates coded data , comprising:a processor, anda memory associated with the processor; whereinthe processor executes instructions stored on the memory to perform:coding a sublayer profile present flag (sub_layer_profile_present_flag) indicating the presence or absence of sublayer profile information regarding respective sublayers;coding a sublayer level present flag (sub_layer_level_present_flag) indicating the presence or absence of sublayer level information regarding the respective sublayers;coding byte-aligned data that is determined based on the number of sublayers and is inserted after the sublayer profile present flag and the sublayer level present flag, and before the sublayer profile information;coding the sublayer profile information in a case where the sublayer profile present flag is equal to 1, wherein a first bit of the sublayer profile information is byte-aligned by the byte-aligned data;coding the sublayer level information in a case where the sublayer level present flags is equal to 1; andgenerating the coded data including the sublayer profile present flag, the sublayer level present flag, the byte-aligned data, the sublayer profile information, and the sublayer level information.3. A moving image coding method for coding image information and generating coded data , comprising:coding a sublayer profile present flag (sub_layer_profile_present_flag) indicating the presence or absence of sublayer profile information regarding respective sublayers;coding a sublayer level present flag (sub_layer_level_present_flag) indicating the presence or absence of ...
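
A rough sketch (the writer callback, the field widths and the padding rule are assumptions chosen only to show the ordering): per-sublayer present flags come first, then byte-aligning filler bits whose count depends on the number of sublayers, then the sublayer profile and level information.

    def write_sublayer_info(write_bits, num_sub_layers,
                            profile_present, level_present, profile_info, level_info):
        for i in range(num_sub_layers):
            write_bits(profile_present[i], 1)    # sub_layer_profile_present_flag
            write_bits(level_present[i], 1)      # sub_layer_level_present_flag
        # Filler bits so the first bit of the sublayer profile information is byte-aligned.
        padding_bits = (8 - (2 * num_sub_layers) % 8) % 8
        write_bits(0, padding_bits)
        for i in range(num_sub_layers):
            if profile_present[i]:
                write_bits(profile_info[i], 32)  # sublayer profile information (illustrative width)
            if level_present[i]:
                write_bits(level_info[i], 8)     # sublayer level information (illustrative width)

    bits = []
    write = lambda value, n: bits.append((value, n))
    write_sublayer_info(write, 2, [1, 0], [1, 1], [0xA, 0xB], [93, 120])
    print(bits)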

More details
10-01-2019 publication date

FORWARD ERROR CORRECTION USING SOURCE BLOCKS WITH SYMBOLS FROM AT LEAST TWO DATASTREAMS WITH SYNCHRONIZED START SYMBOL IDENTIFIERS AMONG THE DATASTREAMS

Number: US20190014353A1
Assignee:

A forward error correction (FEC) data generator has an input for at least two datastreams for which FEC data shall be generated in a joint manner, each datastream having a plurality of symbols. A FEC data symbol is based on a FEC source block possibly having a subset of symbols of the at least two data streams. The FEC data generator further has a signaling information generator configured to generate signaling information for the FEC data symbol regarding which symbols within the at least two datastreams belong to the corresponding source block by determining pointers to start symbols within a first and a second datastream, respectively, of the at least two datastreams and a number of symbols within the first datastream and second datastreams, respectively, that belong to the corresponding source block. 1. A forward error correction data generator comprising:an input for at least two datastreams for which forward error correction data shall be generated in a joint manner, each datastream comprising a plurality of symbols, wherein a forward error correction data symbol is based on a forward error correction (FEC) source block;a signaling information generator configured to generate signaling information for the forward error correction data symbol regarding which symbols within the at least two datastreams belong to the corresponding FEC source block by determining a pointer to a start symbol within a first datastream of the at least two datastreams, a pointer to a start symbol within a second datastream of the at least two datastreams, a number of symbols within the first datastream that belong to the corresponding source block, and a number of symbols within the second datastream that belong to the corresponding source block;a synchronizer configured to determine a common identifier for the start symbols within the at least two datastreams;wherein the signaling information generator is configured to include the common identifier into the signaling information to ...

More details
14-01-2021 publication date

MEMORY CONSTRAINT FOR ADAPTATION PARAMETER SETS FOR VIDEO CODING

Number: US20210014515A1
Assignee:

A video decoder is configured to decode one or more first adaptation parameter set (APS) indices for a current picture that indicate one or more first APSs that may be used for decoding the current picture. The video decoder may determine, for a block of a sub-picture of the current picture, an APS from the one or more first APSs indicated for the current picture, and decode the block of the sub-picture using the determined APS. In some examples, the video decoder may determine, for the block of the sub-picture of the current picture, the APS from the one or more first APSs indicated for the current picture without decoding any syntax elements, at a sub-picture level, indicating APSs that may be used for decoding the sub-picture. 1. An apparatus configured to decode video data , the apparatus comprising:a memory configured to store one or more blocks of video data; and decode one or more first adaptation parameter set (APS) indices for a current picture that indicate one or more first APSs that may be used for decoding the current picture;', 'determine, for a block of a sub-picture of the current picture, an APS from the one or more first APSs indicated for the current picture; and', 'decode the block of the sub-picture using the determined APS., 'one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to2. The apparatus of claim 1 , wherein to decode the one or more first APS indices for the current picture that indicate the one or more first APSs that may be used for decoding the current picture claim 1 , the one or more processors are configured to:decode the one or more first APS indices from a picture header.3. The apparatus of claim 1 , wherein the sub-picture comprises one or more of a slice claim 1 , a tile group claim 1 , a tile claim 1 , or a brick.4. The apparatus of claim 1 , wherein the one or more processors are further configured to:decode a first syntax element that indicates whether ...

More details
14-01-2021 publication date

DERIVING CODING SYSTEM OPERATIONAL CONFIGURATION

Number: US20210014535A1
Assignee:

A device for coding video data, the device comprising a memory configured to store video data; and one or more processors implemented in circuitry and configured to: code a value for a profile indicator syntax element in a bitstream including video data, the value for the profile indicator representing a class of a profile to which the bitstream conforms; code one or more values representing one or more coding-tool-specific constraints, separate from the profile indicator syntax element, each of the coding-tool-specific constraints indicating whether coding tools corresponding to the coding-tool-specific constraints can be enabled for at least a subset of the bitstream; and code the video data according to the coding-tool-specific constraints and the class of the profile. 1. A method of coding video data , the method comprising:coding a value for a profile indicator syntax element in a bitstream including video data, the value for the profile indicator representing a class of a profile to which the bitstream conforms;coding one or more values representing one or more coding-tool-specific constraints, separate from the profile indicator syntax element, each of the coding-tool-specific constraints indicating whether coding tools corresponding to the coding-tool-specific constraints can be enabled for at least a subset of the bitstream; andcoding the video data according to the coding-tool-specific constraints and the class of the profile.2. The method of claim 1 , further comprising coding a value for a syntax element representing that coding-tool-specific enabling/disabling indications are signaled in the bitstream.3. The method of claim 2 , wherein the syntax element representing that the tool-specific enabling/disabling indications are signaled comprises a constrained_tool_indication_flag.4. The method of claim 2 , wherein the value for the syntax element representing that the tool-specific enabling/disabling indications are signaled has a predefined value ...
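
A small sketch (the tool names and the constraint-flag convention, 1 = tool must stay disabled, are assumptions): a tool is enabled only when the signalled profile class allows it and no coding-tool-specific constraint flag forbids it.

    def resolve_enabled_tools(profile_class_tools, tool_constraints):
        return [tool for tool in profile_class_tools
                if not tool_constraints.get(tool, 0)]

    profile_tools = ["affine_motion", "alf", "dependent_quant"]
    constraints = {"affine_motion": 1, "alf": 0}   # e.g. a no-affine-style constraint flag set to 1
    print(resolve_enabled_tools(profile_tools, constraints))   # ['alf', 'dependent_quant']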

Publication date: 14-01-2021

Method and Apparatus of Optimized Splitting Structure for Video Coding

Number: US20210014536A1
Assignee:

In one method, the current block is partitioned into multiple final sub-blocks using one or more stages of sub-tree partition comprising ternary tree partition and at least one other-type partition, where ternary partition tree is excluded from the sub-tree partition if a current sub-tree depth associated with a current sub-block is greater than a first threshold and the first threshold is an integer greater than or equal to 1. In another method, if a test condition is satisfied, the current block is encoded or decoded using a current Inter mode selected from a modified group of Inter tools, where the modified group of Inter tools is derived from an initial group of Inter tools by removing one or more first Inter tools from the initial group of Inter tools, replacing one or more second Inter tools with one or more complexity-reduced Inter tools, or both. 1. A method of video coding , the method comprising:receiving input data associated with a current block in a current image from a video sequence;partitioning the current block into multiple final sub-blocks using one or more stages of sub-tree partition comprising ternary tree partition and at least one other-type partition, wherein the ternary tree partition is excluded from the sub-tree partition if a current sub-tree depth associated with a current sub-block is greater than a first threshold and the first threshold is an integer greater than or equal to 1; andencoding said multiple final sub-blocks to generate compressed bits to include in a video bitstream in an encoder side or decoding said multiple final sub-blocks from the video bitstream in a decoder side.2. The method of claim 1 , wherein said at least one other-type partition comprises quadtree partition claim 1 , binary-tree partition or both.3. The method of claim 1 , wherein the current block corresponds to one Coding Tree Unit (CTB) and each of the multiple final sub-blocks corresponds to one Coding Unit (CU) claim 1 , Prediction Unit (PU) or ...
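
A small Python sketch of the depth-based restriction in the first method: once the current sub-tree depth exceeds the threshold, ternary splits are no longer offered, while the other split types remain available. The split-type names are hypothetical.

    def allowed_split_types(sub_tree_depth, threshold=1):
        """Split types a sub-block may still use at the given depth."""
        splits = ["quad", "binary_h", "binary_v"]
        if sub_tree_depth <= threshold:   # ternary allowed only up to the threshold
            splits += ["ternary_h", "ternary_v"]
        return splits

    print(allowed_split_types(sub_tree_depth=1))  # ternary still allowed
    print(allowed_split_types(sub_tree_depth=2))  # ternary excluded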

Publication date: 14-01-2021

TRANSMITTING METHOD, RECEIVING METHOD, TRANSMITTING DEVICE AND RECEIVING DEVICE

Number: US20210014546A1
Assignee:

A transmitting method according to one aspect of the present disclosure includes: encoding a video signal and generating encoded data including a plurality of access units; storing the plurality of access units in a packet in a unit that defines one access unit as one unit or in a unit defined by dividing one access unit, and generating a packet group; transmitting the generated packet group as data; generating first information and second information, the first information indicating a presentation time of a first access unit that is presented first among the plurality of access units, and the second information being used to calculate a decoding time of the plurality of access units; and transmitting the first information and the second information as control information. 1. A transmitting method comprising:dividing an access unit into slice segments or tiles, encoding the slice segments or the tiles, and storing at least one of the encoded slice segments or at least one of the encoded tiles in a network abstraction layer (NAL) unit, the access unit being an image included in a video signal;storing the NAL unit in a data unit;storing, in a NAL unit different from the NAL unit in which the video signal is stored, a parameter set for encoding the video signal;storing the data unit in a packet, in units of one data unit, in units of a plurality of data units including the data unit, or in units of portions into which the data unit is divided, and generating a packet group, the packet in which the data unit is stored being different from a packet in which a data unit including the parameter set is stored;transmitting the generated packet group as data;generating control information, the control information including presentation time information of a first access unit, and information used to calculate a decoding time of the plurality of access units; andtransmitting the control information,wherein the control information is stored and transmitted in a payload of a ...

Publication date: 09-01-2020

METHOD AND DEVICE FOR VIDEO CODING AND DECODING

Number: US20200014927A1
Assignee:

A method and device for coding and decoding are disclosed. The method includes: dividing a picture to be encoded into several slices, each containing macroblocks continuous in a designated scanning sequence in the picture; dividing slices in the picture into one or more slice sets according to attribute information of the slices, each slice set containing one or more slices; and encoding the slices in the slice sets according to slice and slice set division information to get a coded bit stream of the picture. The decoding method includes: obtaining slice and slice set division information from a bit stream to be decoded and decoding the bit stream according to the obtained slice and slice set division information. The invention improves the performance of video transmission effectively and realizes region based coding. The implementation of coding and decoding is simple and the complexity of coding and decoding systems is reduced. 1. A video coding method , comprising:dividing a picture to be encoded into a plurality of slices;grouping the slices contained in the picture into a plurality of slice sets, each slice set containing one or more of the slices; and encoding a slice set syntax element for a first slice in a current slice set of the plurality of slice sets;', 'using the slice set syntax element as a slice set syntax element of a second slice without encoding a new slice set syntax element for the second slice, wherein the second slice is in the current slice set and after the first slice., 'encoding the slices in the slice sets to get a coded bit stream of the picture, wherein the encoding the slices in the slice sets comprises2. The method according to claim 1 , further comprising:writing a slice set enable flag in the coded bit stream indicating whether the slice sets are divided in a coding process of a current picture.3. The method according to claim 1 , wherein the encoding the slices in the slice sets comprises at least one of the following coding ...
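
A minimal Python sketch (over assumed data structures, not the patent's bitstream syntax) of the slice-set coding rule described above: the slice-set syntax element is written only for the first slice of a set, and later slices of the same set reuse it without a new syntax element.

    def encode_slices(slices):
        """slices: list of (slice_set_id, payload) in coding order."""
        bitstream, current_set = [], None
        for set_id, payload in slices:
            header = {"payload": payload}
            if set_id != current_set:        # first slice of this slice set
                header["slice_set_id"] = set_id
                current_set = set_id
            bitstream.append(header)
        return bitstream

    print(encode_slices([(0, "s0"), (0, "s1"), (1, "s2")]))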

Publication date: 09-01-2020

INHERITED MOTION INFORMATION FOR DECODING A CURRENT CODING UNIT IN A VIDEO CODING SYSTEM

Number: US20200014948A1
Assignee: MEDIATEK INC.

A method of video decoding at a decoder can include receiving a bitstream including encoded data of a picture, decoding a plurality of coding units (CUs) in the picture based on motion information stored in a history-based motion vector prediction (HMVP) table without updating the HMVP table, and updating the HMVP table with motion information of all or a part of the plurality of CUs after the plurality of CUs are decoded based on the motion information stored in the HMVP table. 1. A method of video decoding at a decoder , comprising:receiving a bitstream including encoded data of a picture;decoding a plurality of coding units (CUs) in the picture based on motion information stored in a history-based motion vector prediction (HMVP) table without updating the HMVP table; andupdating the HMVP table with motion information of all or a part of the plurality of CUs after the plurality of CUs are decoded based on the motion information stored in the HMVP table.2. The method of claim 1 , further comprising:decoding every P CUs in the picture based on motion information stored in the HMVP table without updating the HMVP table, P being an integer greater than 1; andupdating the HMVP table with motion information of all or a part of every P CUs after every P CUs are decoded.3. The method of claim 2 , further comprising:receiving a syntax element indicating a value of P in the bitstream,4. The method of claim 1 , wherein 'decoding CUs within a merge sharing region based on the motion information stored in the HMVP table, wherein the plurality of CUs are the CUs within the merge sharing region that are decoded based on the motion information stored in the HMVP table, and', 'the decoding includes 'after the merge sharing region is decoded, updating the HMVP table with the motion information of all or the part of the CUs within the merge sharing region that are decoded based on the motion information stored in the HMVP table.', 'the updating includes5. The method of claim 4 , ...
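
A hedged Python sketch of the deferred HMVP update described here: a group of CUs (for example a merge sharing region) is decoded against a frozen snapshot of the HMVP table, and the table is updated once afterwards with the motion information of those CUs. The prediction step is a stand-in, not real motion vector prediction.

    from collections import deque

    def predict_mv(cu, table):
        # Stand-in predictor: reuse the newest HMVP entry if any.
        return table[-1] if table else (0, 0)

    def decode_cu_group(cus, hmvp, table_size=6):
        snapshot = list(hmvp)                 # table not updated mid-group
        decoded = [(cu, predict_mv(cu, snapshot)) for cu in cus]
        for _, mv in decoded:                 # single update after the group
            if mv in hmvp:
                hmvp.remove(mv)
            hmvp.append(mv)
            while len(hmvp) > table_size:
                hmvp.popleft()
        return decoded

    hmvp = deque([(1, 0)])
    decode_cu_group(["cu0", "cu1", "cu2"], hmvp)
    print(list(hmvp))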

Publication date: 03-02-2022

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE

Number: US20220036595A1
Assignee:

A three-dimensional data encoding method includes: calculating coefficient values from pieces of attribute information of three-dimensional points included in point cloud data; quantizing the coefficient values to generate quantization values; and generating a bitstream including the quantization values. The three-dimensional points corresponding to the coefficient values belong to one layer among one or more layers. Each of a predetermined number of layers among the one or more layers is assigned a quantization parameter for the layer. In the quantizing, (i) when a quantization parameter is assigned to a layer to which each of the coefficient values belongs, the coefficient value is quantized using the quantization parameter, and (ii) when the quantization parameter is not assigned to a layer to which each of the coefficient values belongs, the coefficient value is quantized using a quantization parameter assigned to one layer among the predetermined number of the layers. 1. A three-dimensional data encoding method , comprising:calculating coefficient values from pieces of attribute information of three-dimensional points included in point cloud data;quantizing the coefficient values to generate quantization values; andgenerating a bitstream including the quantization values,wherein the three-dimensional points corresponding to the coefficient values belong to one layer among one or more layers,each of a predetermined number of layers among the one or more layers is assigned a quantization parameter for the layer, andin the quantizing, (i) when a quantization parameter is assigned to a layer to which each of the coefficient values belongs, the coefficient value is quantized using the quantization parameter, and (ii) when the quantization parameter is not assigned to a layer to which each of the coefficient values belongs, the coefficient value is quantized using a quantization parameter assigned to one layer among the predetermined number of the layers.2. The three ...
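
A small Python sketch of the per-layer quantization rule in this entry: a coefficient is quantized with the QP assigned to its layer when one exists, and otherwise falls back to a QP assigned to one of the signalled layers. The step-size formula is a toy assumption for illustration.

    def quantize(coeff, layer, layer_qp, fallback_layer=0):
        qp = layer_qp.get(layer, layer_qp[fallback_layer])
        step = 2 ** (qp / 6)                 # illustrative step size only
        return round(coeff / step)

    layer_qp = {0: 30, 1: 28}                # QPs signalled for two layers
    print(quantize(100.0, layer=1, layer_qp=layer_qp))  # layer-1 QP
    print(quantize(100.0, layer=4, layer_qp=layer_qp))  # falls back to layer 0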

Publication date: 21-01-2016

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Number: US20160021371A1
Author: Sato Kazushi
Assignee: SONY CORPORATION

Provided is an image processing device including a selection section configured to select, from a plurality of transform units with different sizes, a transform unit used for inverse orthogonal transformation of image data to be decoded, a generation section configured to generate, from a first quantization matrix corresponding to a transform unit for a first size, a second quantization matrix corresponding to a transform unit for a second size, and an inverse quantization section configured to inversely quantize transform coefficient data for the image data using the second quantization matrix generated by the generation section when the selection section selects the transform unit for the second size.
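
A hedged Python sketch of generating a larger quantization matrix from a smaller one; nearest-neighbour replication is assumed here purely for illustration, and the device's actual generation rule may differ.

    def upconvert(qm, factor=2):
        """Expand a small quantization matrix to a larger transform size."""
        return [[qm[r // factor][c // factor]
                 for c in range(len(qm[0]) * factor)]
                for r in range(len(qm) * factor)]

    qm4 = [[16, 17, 20, 24],
           [17, 18, 24, 30],
           [20, 24, 30, 36],
           [24, 30, 36, 42]]
    qm8 = upconvert(qm4)          # matrix for the larger transform unit
    print(len(qm8), len(qm8[0]))  # 8 8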

Publication date: 21-01-2016

TRANSPORT STREAM FOR CARRIAGE OF VIDEO CODING EXTENSIONS

Number: US20160021375A1
Author: Chen Ying, Hendry Fnu
Assignee:

A video processing device may obtain, from a descriptor for a program comprising one or more elementary streams, a plurality of profile, tier, level (PTL) syntax element sets. The video processing device may obtain, from the descriptor, a plurality of operation point syntax element sets. For each respective operation point syntax element set of the plurality of operation point syntax element sets, the video processing device may determine, for each respective layer of the respective operation point specified by the respective operation point syntax element set, based on a respective syntax element in the respective operation point syntax element set, which of the PTL syntax element sets specifies the PTL information assigned to the respective layer, the respective operation point having a plurality of layers.

Publication date: 21-01-2016

TRANSPORT STREAM FOR CARRIAGE OF VIDEO CODING EXTENSIONS

Number: US20160021398A1
Author: Chen Ying, Hendry Fnu
Assignee:

A video processing device may obtain, from a descriptor for a program comprising one or more elementary streams, a plurality of profile, tier, level (PTL) syntax element sets. Each respective PTL syntax element set of the plurality of PTL syntax element sets comprises syntax elements that may specify respective PTL information. The video processing device obtains, from the descriptor for the program, a plurality of operation point syntax element sets. Each respective operation point syntax element set of the plurality of operation point syntax element sets may specify a respective operation point of a plurality of operation points. The video processing device may determine, for each respective layer of the respective operation point specified by each respective operation point syntax element set, based on a respective syntax element in the respective operation point syntax element set, which of the PTL syntax element sets specifies the PTL information assigned to the respective layer.

Publication date: 19-01-2017

Eye Mounted Displays and Systems, with Headpiece

Number: US20170019660A1
Assignee:

A display device is mounted on and/or inside the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. The projected light propagates through the pupil but does not fill the entire pupil. In this way, multiple sub-displays can project their light onto the relevant portion of the retina. Moving from the pupil to the cornea, the projection of the pupil onto the cornea will be referred to as the corneal aperture. The projected light propagates through less than the full corneal aperture. The sub-displays use spatial multiplexing at the corneal surface. Various electronic devices interface to the eye mounted display. 1. An eye mounted display system , comprising:an eye mounted display mountable on an eye of a human user, the eye mounted display receiving images in an output format suitable for projection by the eye mounted display onto a retina of the eye, the images in the output format including pixels of different resolutions within each image; anda headpiece worn on a head of the human user when using the eye mounted display, the headpiece facilitating a generation and/or transmission of the images to the eye mounted display.2. The eye mounted display system of further comprising:a device that generates the images from an image input, wherein the headpiece receives the images from the device and transmits the images to the eye mounted display.3. The eye mounted display system of wherein the headpiece receives the images from the device and transmits the images to the eye mounted display using the same output format.4. The eye mounted display system of wherein the headpiece receives the images from the device in an encrypted format claim 2 , decrypts the received images claim 2 , and transmits the decrypted images to the eye mounted display.5. The eye mounted display system of wherein the headpiece receives the images from the device in ...

Publication date: 19-01-2017

Eye Mounted Displays and Systems, with Eye Tracker and Head Tracker

Number: US20170019661A1
Assignee:

A display device is mounted on and/or inside the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. The projected light propagates through the pupil but does not fill the entire pupil. In this way, multiple sub-displays can project their light onto the relevant portion of the retina. Moving from the pupil to the cornea, the projection of the pupil onto the cornea will be referred to as the corneal aperture. The projected light propagates through less than the full corneal aperture. The sub-displays use spatial multiplexing at the corneal surface. Various electronic devices interface to the eye mounted display. 1. An eye mounted display system , comprising:an eye mounted display mountable on an eye of a human user;an eye tracker that detects an orientation of the eye; anda device that generates images from an image input, the images generated based on the orientation of the eye and generated in an output format suitable for use in the eye mounted display, the images in the output format including pixels of different resolutions within each image, and the eye mounted display projecting the images onto a retina of the eye.2. The eye mounted display system of wherein the images projected onto the retina of the eye appear to be stationary with respect to the user's surrounding environment.3. The eye mounted display system of wherein:the image input comprises images in an input format with pixels of constant resolution within each image; andthe device converts the images from the input format to the output format, the output format including pixels of different resolutions within each image, the pixels of different resolutions including higher resolution pixels and lower resolution pixels, wherein which pixels in the input format are converted to higher resolution pixels or to lower resolution pixels in the output format depends on the ...

Publication date: 19-01-2017

IMAGE DECODING DEVICE, IMAGE DECODING METHOD, RECODING MEDIUM, IMAGE CODING DEVICE, AND IMAGE CODING METHOD

Number: US20170019673A1
Assignee:

According to an aspect of the present invention, in an output layer set, decoding processing of a non-output and non-reference layer is omitted, and thus a processing amount and a memory size required for decoding the non-output and non-reference layer can be reduced. 1. An image decoding device which decodes hierarchy image coding data , the device comprising:a first flag decoding circuit that decodes a first flag in a unit of a layer set, which indicates whether or not each layer is included in a layer set;a layer set information decoding circuit that derives a layer ID list of the layer set based on the first flag;an output layer set information decoding circuit that decodes output layer set information in a unit of an output layer set, which includes a) a layer set identifier, and b) an output layer flag which indicates whether or not each layer included in the output layer set is an output layer;a dependency flag deriving circuit that derives a dependency flag which indicates whether or not a first layer is a reference layer of a second layer;a decoding layer ID list deriving circuit that derives a decoding layer ID list indicating a layer to be decoded for the output layer set based on the layer ID list corresponding to the output layer set, the output layer flag of the output layer set, and the dependency flag; anda picture decoding circuit that decodes a picture of each layer included in the derived decoding layer ID list from the hierarchy image coding data corresponding to the each layer.24-. (canceled)5. An image decoding method of decoding hierarchy image coding data , the method comprising:decoding a first flag in a unit of a layer set, which indicates whether or not each layer is included in a layer set;deriving a layer ID list of the layer set based on the first flag;decoding output layer set information in a unit of an output layer set, which includes a) a layer set identifier, and b) an output layer flag which indicates whether or not each layer ...
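
A minimal Python sketch of deriving the decoding layer ID list for an output layer set: output layers and, transitively, the layers they depend on are decoded, while non-output, non-reference layers are skipped. The data structures are assumed for illustration.

    def decoding_layers(layer_id_list, output_flags, depends_on):
        needed = set()
        stack = [l for l, out in zip(layer_id_list, output_flags) if out]
        while stack:
            layer = stack.pop()
            if layer not in needed:
                needed.add(layer)
                stack.extend(depends_on.get(layer, []))
        return [l for l in layer_id_list if l in needed]

    # Layer 2 is an output layer referencing layer 0; layer 1 is neither an
    # output layer nor a reference layer, so its decoding is omitted.
    print(decoding_layers([0, 1, 2], [False, False, True], {2: [0]}))  # [0, 2]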

Publication date: 19-01-2017

IMAGE CODING APPARATUS AND METHOD, AND IMAGE DECODING APPARATUS AND METHOD

Number: US20170019674A1
Assignee: SONY CORPORATION

There is provided an image coding apparatus including: circuitry configured to set a correspondence relationship between resolution information and an enhancement layer, in a case where the number of the layers is greater than the number of multiple candidates for the resolution information on a layer of an image; and code the image and generate a bitstream including information relating to the set correspondence relationship. 1. An image coding apparatus comprising:circuitry configured toset a correspondence relationship between resolution information and an enhancement layer, in a case where the number of the layers is greater than the number of multiple candidates for the resolution information on a layer of an image; andcode the image and generate a bitstream including information relating to the set correspondence relationship.2. The image coding apparatus according to claim 1 , wherein in a case where the number of the candidates is greater than the number of the layers claim 1 , the correspondence relationship between the resolution information and the enhancement layer is set claim 1 , beginning with a leading candidate.3. The image coding apparatus according to claim 1 , wherein in a case where multiple candidates are present claim 1 , the information is set that indicates whether or not the correspondence relationship between the resolution information and the enhancement layer is present.4. The image coding apparatus according to claim 1 , wherein in a case where the number of the candidates is 1 claim 1 , the updating of the correspondence relationship between the resolution information and the enhancement layer is prohibited in a sequence parameter set.5. An image coding method comprising:setting a correspondence relationship between resolution information and an enhancement layer, in a case where the number of the layers is greater than the number of multiple candidates for the resolution information on a layer of an image; andcoding the image and ...

Publication date: 19-01-2017

PARALLEL DECODER WITH INTER-PREDICTION OF VIDEO PICTURES

Number: US20170019675A1
Assignee:

A parallel decoder for decoding compressed video picture data including inter-coded picture item data with motion vector data. A decoding module decodes picture data stored in a temporary storage. The decoding module includes an inter-prediction module that uses inter-prediction item data to decode an inter-coded picture item by referring to already decoded reference picture item data. The structure of inter-prediction item data in the temporary storage is a function of the positions of corresponding reference picture items. The decoding order of stored inter-prediction item data by the inter-prediction module is prioritized as a function of a decoding order of reference picture item data. 1. A parallel decoder for decoding compressed video picture data including inter-coded picture item data with motion vector data , the decoder comprising:a temporary storage for storing a plurality of structures of picture data to be decoded including structures to organize inter-prediction items;at least one decoding module for decoding the stored picture data, wherein the decoding module includes at least one inter-prediction module that uses inter-prediction item data to decode an inter-coded picture item by referring to already decoded reference picture item data; anda control module that controls the structure of inter-prediction item data in the temporary storage as a function of the positions in a decoding order of corresponding reference picture items, and prioritizes a decoding order of stored inter-prediction item data by the inter-prediction module as a function of the decoding order of reference picture item data.2. The parallel decoder of claim 1 , wherein the structures to organize inter-prediction item data to be decoded comprise respective queues of inter-prediction item data entities claim 1 , and wherein each inter-prediction item data entity contains an identification of the relevant reference picture item data that the inter-prediction module uses in decoding ...

Publication date: 19-01-2017

HYBRID VIDEO DECODING APPARATUS FOR PERFORMING HARDWARE ENTROPY DECODING AND SUBSEQUENT SOFTWARE DECODING AND ASSOCIATED HYBRID VIDEO DECODING METHOD

Number: US20170019679A1
Assignee:

A hybrid video decoding apparatus has a hardware entropy decoder and a storage device. The hardware entropy decoder performs hardware entropy decoding to generate an entropy decoding result of a picture. The storage device has a plurality of storage areas allocated to buffer a plurality of entropy-decoded partial data, respectively, and is further arranged to store position information indicative of storage positions of the entropy-decoded partial data in the storage device. The entropy-decoded partial data are derived from the entropy decoding result of the picture, and are associated with a plurality of portions of the picture, respectively. 1. A hybrid video decoding apparatus comprising:a hardware entropy decoder, arranged to perform hardware entropy decoding to generate an entropy decoding result of a picture; anda storage device, having a plurality of storage areas allocated to buffer a plurality of entropy-decoded partial data, respectively, and further arranged to store position information indicative of storage positions of the entropy-decoded partial data in the storage device, wherein the entropy-decoded partial data are derived from the entropy decoding result of the picture, and are associated with a plurality of portions of the picture, respectively.2. The hybrid video decoding apparatus of claim 1 , further comprising:a multi-core processor system, arranged to execute a decoding program to perform software decoding upon the entropy-decoded partial data in a parallel processing fashion;wherein one core of the multi-core processor system is arranged to access one of the storage areas to retrieve one entropy-decoded partial data and decode said one entropy-decoded partial data.3. The hybrid video decoding apparatus of claim 1 , wherein each of the storage areas allocated in the storage device has a predetermined size.4. The hybrid video decoding apparatus of claim 1 , wherein each of the storage areas allocated in the storage device has a variable size ...

Publication date: 19-01-2017

Method for Depth Lookup Table Signaling in 3D Video Coding Based on High Efficiency Video Coding Standard

Number: US20170019682A1
Assignee:

A method and apparatus for depth lookup table (DLT) signaling in a three-dimensional and multi-view coding system are disclosed. According to the present invention, if the pictures contain only texture data, no DLT information is incorporated in the picture parameter set (PPS) corresponding to the pictures. On the other hand, if the pictures contain depth data, the DLT associated with the pictures is determined. If a previous DLT required for predicting the DLT exists, the DLT will be predicted based on the previous DLT. Syntax related to the DLT is included in the PPS. Furthermore, first bit-depth information related to first depth samples of the DLT is also included in the PPS and the first bit-depth information is consistent with second bit-depth information signaled in a sequence level data for second depth samples of a sequence containing the pictures. 1. A method of depth coding using a depth lookup table (DLT) in a three-dimensional and multi-view coding system , the method comprising:identifying one or more pictures to be processed;if said one or more pictures contain only texture data, excluding any DLT information in a picture parameter set (PPS) corresponding to said one or more pictures;if said one or more pictures contain depth data:determining the DLT associated with said one or more pictures;if a previous DLT required for predicting the DLT exists, applying predictive coding to the DLT based on the previous DLT;including syntax related to the DLT in the PPS; andincluding first bit-depth information related to first depth samples of the DLT in the PPS, wherein the first bit-depth information is consistent with second bit depth information signaled in a sequence level for second depth samples of a sequence containing said one or more pictures; andsignaling the PPS in a video bitstream for a sequence including said one or more pictures.2. The method of claim 1 , wherein if the previous DLT required for predicting the DLT does not exist claim 1 , ...

Publication date: 03-02-2022

SYSTEMS AND METHODS FOR SIGNALING REFERENCE PICTURES IN VIDEO CODING

Number: US20220038685A1
Author: Deshpande Sachin G.
Assignee:

According to an aspect of an invention, a method for signaling reference pictures in video coding is disclosed. The method comprises: decoding a number of entries in a reference picture list syntax structure; decoding a number of reference index active minus one syntax in a slice header, if the number of entries is greater than one; and deriving an active variable by using the number of reference index active minus one syntax. 17-. (canceled)8: A method of decoding information for a reference picture list structure , the method including:decoding a delta picture order count syntax element in the reference picture list structure, wherein the delta picture order count syntax element specifies a value of a delta picture order count; anddecoding a short-term reference picture entry sign flag by using the delta picture order count,wherein:the short-term reference picture entry sign flag specifies whether an i-th entry in the reference picture list structure has a value greater than or equal to zero, andthe delta picture order count is set to a value of the delta picture order count syntax element according to a condition.9: The method of claim 8 , wherein the short-term reference picture entry sign flag is decoded claim 8 , in a case that the value of the delta picture order count is greater than zero.10: The method of claim 8 , further including:decoding a number syntax element specifying a number of reference picture list structure with a list index equal to i included in a sequence parameter set; anddecoding a picture list index in a case that a value of the number syntax element is greater than one,wherein the picture list index specifies an index of the reference picture list structure with the list index equal to i that is used for derivation of an i-th reference picture list of a current picture.11: A method of encoding information for a reference picture list structure claim 8 , the method including:encoding a delta picture order count syntax element in the ...

Publication date: 03-02-2022

TWO-LEVEL SIGNALING OF FILTERING INFORMATION IN VIDEO PROCESSING

Number: US20220038705A1
Assignee:

A video processing method is provided to include performing a conversion between a coded representation of a video comprising one or more video regions and the video, wherein the coded representation includes a first side information at a first level, and wherein a second side information at a second level is derived from the first side information such that the second side information provides parameters for a video unit coded with in-loop reshaping (ILR) in which a reconstruction of the video unit of a video region is based on a representation of a video unit in a first domain and a second domain and/or scaling chroma residue of a chroma video unit. 1. A method of processing video data , comprising:performing a conversion between a bitstream of a video comprising one or more video regions and the video,wherein the bitstream includes side information applicable for a coding tool of some of the one or more video regions, 1) a forward mapping process for a luma component of the video unit, in which prediction samples of the luma component are converted from an original domain to a reshaped domain,', '2) an inverse mapping process, which is an inverse operation of the forward mapping process, that convert reconstructed samples of the luma component in the reshaped domain to the original domain, or', '3) a scaling process, in which residual samples of a chroma component of the video unit are scaled before being used to reconstruct the chroma component, and, 'wherein the side information provides parameters for a reconstruction of a video unit of a current video region based on at least one ofwherein the side information for the current video region is based on an adaptation parameter set having a smaller or equal temporal layer index than the current video region.2. The method of claim 1 , wherein the side information for the current video region is copied from the adaptation parameter set.3. The method of claim 1 , wherein a level of the adaptation parameter set is ...

Publication date: 03-02-2022

CROSS-COMPONENT QUANTIZATION IN VIDEO CODING

Number: US20220038721A1
Author: Li Ming, Wu Ping
Assignee:

A video decoding technique includes parsing a bitstream to determine a quantization parameter (QP) for a luma component of a region from a data unit of a parameter set included in the bitstream, determining a flag from the data unit of the parameter set, determining, in a case that the flag is equal to a first value, a QP for a chroma component of the region as a first function of the QP for the luma component and a default chroma delta QP indicated by the flag, determining, in a case that the flag is equal to a second value, the QP for the chroma component of the region by (a) obtaining a delta chroma QP from the data unit, and (b) determining the QP for the chroma component as a second function of the QP for the luma component and the delta chroma QP, and decoding the chroma component using the QP for the chroma component in a case that the parameter set is activated for decoding the region. 1. A method for processing visual information , comprising:parsing a bitstream to determine a quantization parameter (QP) for a luma component of a region of the visual information from a data unit of a parameter set included in the bitstream;determining a flag from the data unit of the parameter set;determining, in a case that the flag is equal to a first value, a QP for a chroma component of the region as a first function of the QP for the luma component and a default chroma delta QP indicated by the flag;determining, in a case that the flag is equal to a second value different from the first value, the QP for the chroma component of the region by:(a) obtaining a delta chroma QP from the data unit; and(b) determining the QP for the chroma component as a second function of the QP for the luma component and the delta chroma QP; anddecoding the chroma component using the QP for the chroma component in a case that the parameter set is activated for decoding the region.2. The method of claim 1 , wherein the first function is an addition function.3. The method of any of - claim 1 ...
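
A small Python sketch of the chroma QP derivation described here, with addition assumed as the combining function and illustrative values: when the flag takes its first value the default chroma delta QP is applied, otherwise a delta decoded from the parameter-set data unit is applied.

    DEFAULT_CHROMA_DELTA_QP = -2   # hypothetical default indicated by the flag

    def chroma_qp(luma_qp, flag, decode_delta=lambda: 0):
        if flag == 0:                               # first value
            return luma_qp + DEFAULT_CHROMA_DELTA_QP
        return luma_qp + decode_delta()             # second value

    print(chroma_qp(32, flag=0))                          # 30
    print(chroma_qp(32, flag=1, decode_delta=lambda: 3))  # 35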

Publication date: 03-02-2022

FRAME-RATE SCALABLE VIDEO CODING

Number: US20220038723A1

Methods and systems for frame rate scalability are described. Support is provided for input and output video sequences with variable frame rate and variable shutter angle across scenes, or for input video sequences with fixed input frame rate and input shutter angle, but allowing a decoder to generate a video output at a different output frame rate and shutter angle than the corresponding input values. Techniques allowing a decoder to decode more computationally-efficiently a specific backward compatible target frame rate and shutter angle among those allowed are also presented. 113-. (canceled)14. A non-transitory processor-readable medium having stored thereon an encoded video stream structure , the encoded video stream structure comprising:an encoded picture section including an encoding of a sequence of video pictures; and a first shutter angle flag that indicates whether shutter angle information is fixed for all temporal sub-layers in the encoded picture section; and', 'if the first shutter angle flag indicates that shutter angle information is fixed, then the signaling section including a fixed shutter angle value for displaying a decoded version of the sequence of video pictures for all the temporal sub-layers in the encoded picture section using the fixed shutter angle value, else', 'the signaling section including an array of sub-layer shutter angle values, wherein for each one of the temporal sub-layers, a value in the array of sub-layer shutter angle values indicates a corresponding shutter angle for displaying a decoded version of the temporal sub-layer of the sequence of video pictures., 'a signaling section including an encoding of15. The non-transitory processor-readable medium of claim 14 , wherein the signaling section further includes an encoding of a frame-repetition value indicating the number of times a decoded picture in the sequence of video pictures should be consecutively displayed.16. The non-transitory processor-readable medium of claim ...

Publication date: 03-02-2022

APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING

Number: US20220038733A1
Assignee: NOKIA TECHNOLOGIES OY

A method comprising encoding a bitstream comprising a base layer, a first enhancement layer and a second enhancement layer; encoding an indication of both the base layer and the first enhancement layer used for prediction for the second enhancement layer in the bitstream; encoding, in the bitstream, an indication of a first set of prediction types that is applicable from the base layer to the second enhancement layer, wherein the first set of prediction types is a subset of all prediction types available for prediction between layers, and encoding, in the bitstream, an indication of a second set of prediction types that is applicable from the base layer or the first enhancement layer to the second enhancement layer, wherein the second set of prediction types is a subset of all prediction types available for prediction between layers. 1. A method comprising:encoding a bitstream comprising a base layer, a first enhancement layer and a second enhancement layer;encoding, in the bitstream, an indication of a number of bits in a prediction type mask syntax element;encoding, in the bitstream using a first prediction type mask syntax, an indication of a first set of prediction types that is applicable from the base layer to the second enhancement layer, wherein the first set of prediction types is a subset of all prediction types available for prediction between layers; andencoding, in the bitstream using a second prediction type mask syntax, an indication of a second set of prediction types that is applicable from the first enhancement layer to the second enhancement layer, wherein the second set of prediction types is a subset of all prediction types available for prediction between layers,wherein each of said prediction types available for prediction between layers is represented in the first prediction type mask syntax and the second prediction type mask syntax, andwherein said prediction types available for prediction between layers are adaptively selectable as at ...

Publication date: 03-02-2022

METHODS AND APPARATUSES FOR VIDEO CODING

Number: US20220038736A1
Assignee: Tencent America LLC

Aspects of the disclosure provide methods and apparatuses for video encoding/decoding. An apparatus for video decoding includes processing circuitry that decodes prediction information for a current block in a current coded picture. The prediction information indicates a motion vector predictor index (MVP_idx) for selecting a motion vector predictor in a motion vector predictor list. The processing circuitry determines whether the MVP_idx is smaller than a threshold. When the MVP_idx is determined to be smaller than the threshold, the processing circuitry decodes a motion vector difference (MVD) corresponding to the motion vector predictor and reconstructs the current block based on the motion vector predictor and the MVD. When the MVP_idx is determined to be equal to or larger than the threshold, the processing circuitry reconstructs the current block based on the motion vector predictor without the MVD which is not signaled in the coded video sequence. 1. A method for video encoding in an encoder , comprising:generating at least two motion vector predictor (MVP) candidate lists according to advanced motion vector prediction (AMVP);concatenating additional MVPs to the generated MVP candidate lists, the additional MVPs being derived using one or more of a High Efficiency Video Coding (HEVC) merge mode, a sub-block based temporal motion vector prediction (SbTMVP) method, a history-based motion vector predictor (HMVP) method, a pairwise average motion vector predictor (MVP) method, and Multi-hypothesis MVP method;generating prediction information for a current block in a current picture of a video sequence, the prediction information including an MVP index for each of the MVP candidate lists including the additional MVPs; andencoding the current block based on the MVP candidate indexes and the MVP candidate lists including the additional MVPs.2. The method according to claim 1 , further comprising:determining that an inter-prediction hypothesis of an MVP in one of the ...
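
A minimal Python sketch of the threshold rule in this abstract: an MVD is parsed and added to the selected predictor only when the MVP index is below the threshold; at or above the threshold the predictor is used as-is and no MVD is present. Function and variable names are illustrative.

    def reconstruct_mv(mvp_list, mvp_idx, threshold, read_mvd=lambda: (0, 0)):
        px, py = mvp_list[mvp_idx]
        if mvp_idx < threshold:            # MVD signalled only in this case
            dx, dy = read_mvd()
            return (px + dx, py + dy)
        return (px, py)

    mvps = [(4, 0), (0, 2), (-3, 1)]
    print(reconstruct_mv(mvps, 0, threshold=2, read_mvd=lambda: (1, -1)))  # (5, -1)
    print(reconstruct_mv(mvps, 2, threshold=2))                            # (-3, 1)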

Publication date: 03-02-2022

METHOD AND APPARATUS FOR POINT CLOUD COMPRESSION

Number: US20220038743A1
Author: Gao Wen, Liu Shan, ZHANG Xiang
Assignee: Tencent America LLC

Aspects of the disclosure provide methods, apparatuses, and a non-transitory computer-readable medium for point cloud compression and decompression. In a method, syntax information of a point cloud is decoded from a coded bitstream. The syntax information indicates that parallel decoding is to be performed on occupancy codes of nodes in a range of one or more partitioning depths in an octree partitioning structure of the point cloud is determined. The parallel decoding is performed on the occupancy codes of the nodes. The point cloud is reconstructed based on the occupancy codes of the nodes. 1. A method for point cloud coding in a decoder , comprising:decoding syntax information of a point cloud from a coded bitstream, the syntax information indicating that parallel decoding is to be performed on occupancy codes of nodes in a range of one or more partitioning depths in an octree partitioning structure of the point cloud;performing the parallel decoding on the occupancy codes of the nodes; andreconstructing the point cloud based on the occupancy codes of the nodes.2. The method of claim 1 , wherein the syntax information includes at least one of a first syntax element or a second syntax element claim 1 , the first syntax element indicating whether the parallel decoding is to be performed on the occupancy codes of the nodes in the range of the one or more partitioning depths in the octree partitioning structure claim 1 , and the second syntax element indicating a minimum partitioning depth at which the parallel decoding is to be performed.3. The method of claim 1 , wherein the syntax information is included in at least one of a sequence parameter set claim 1 , a geometry parameter set claim 1 , or a geometry slice header.4. The method of claim 1 , wherein the performing further comprises:determining, in the coded bitstream, a sub-bitstream for each of the one or more partitioning depths based on a bitstream offset corresponding to each of the one or more partitioning ...

Publication date: 03-02-2022

ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD

Number: US20220038746A1
Assignee:

An encoder including circuitry and memory coupled to the circuitry. In a second type of residual coding among a first type of residual coding where an orthogonal transform is applied to a current block and the second type of residual coding where the orthogonal transform is skipped for the current block, wherein a first syntax used for the first type of residual coding is different from a second syntax used for the second type of residual coding, the circuitry: in a first loop process, derives a context index by using at least one of a plurality of surrounding coefficients; and encodes a plurality of coefficient information flags by CABAC with the derived context index; and in a second loop process of the plurality of loop processes, encodes a plurality of absolute value flags by CABAC with another context index. 1. An encoder comprising:circuitry; andmemory coupled to the circuitry, whereinin a second type of residual coding among a first type of residual coding where an orthogonal transform is applied to a current block and the second type of residual coding where the orthogonal transform is skipped for the current block, wherein a first syntax used for the first type of residual coding is different from a second syntax used for the second type of residual coding, the circuitry: derives a context index by using at least one of a plurality of surrounding coefficients of a coefficient within the current block; and', 'encodes a plurality of coefficient information flags by Context-based Adaptive Binary Arithmetic Coding (CABAC) with the derived context index, each of the plurality of coefficient information flags relating to the coefficient; and, 'in a first loop process of a plurality of loop processes,'} 'encodes a plurality of absolute value flags by CABAC with another context index that is different from the derived context index.', 'in a second loop process of the plurality of loop processes,'}2. The encoder according to claim 1 , wherein the plurality of ...

Publication date: 18-01-2018

METHOD AND DEVICE FOR TRANSMITTING AND RECEIVING BROADCAST SIGNAL FOR RESTORING PULLED-DOWN SIGNAL

Number: US20180020185A1
Author: HWANG Soojin, Suh Jongyeul
Assignee: LG ELECTRONICS INC.

The present invention provides a method and a device for transmitting and receiving a broadcast signal for restoring a pulled-down signal. The method for transmitting the broadcast signal, according to one embodiment of the present invention, comprises the steps of: pulling video data down so as to reconfigure the same; encoding the reconfigured video data; encoding signaling information for the reconfigured video data; generating the broadcast signal including the encoded video data and the encoded signaling information; and transmitting the generated broadcast signal. 1. A method for transmitting a broadcast signal , the method comprising the steps of:reconfiguring video data by performing pull-down process in order to achieve a higher frame rate than a original frame rate;encoding the reconfigured video data and a picture timing supplemental enhancement information (SEI) message for the reconfigured video data to generate a video stream,wherein the picture timing SEI message includes picture configuration information indicating whether a picture of the reconfigured video data is displayed as a frame or one or more fields, scan type information indicating a scan type of the picture of the reconfigured video data, and duplicate flag information indicating that the picture of the reconfigured video data is indicated to be a duplicate of a previous picture of the reconfigured video data as a result of the pull-down process;generating a broadcast signal including the generated video stream; andtransmitting the generated broadcast signal.2. The method according to claim 1 , wherein the video stream includes pull down information for signaling information on the pull-down process applied to the reconfigured video data.3. (canceled)4. The method according to claim 2 , wherein the pull down information includes at least one of pull down type information indicating a pull-down type applied to the reconfigured video data claim 2 , cadence size information indicating a size ...
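
A hedged Python sketch of how a receiver could restore the original signal from pulled-down video using the duplicate flag carried in the picture timing SEI message: pictures flagged as duplicates of the previous picture are simply dropped. The picture records are assumed structures, not the broadcast syntax itself.

    def restore_original_rate(pictures):
        """Drop pictures marked as pull-down duplicates."""
        return [p for p in pictures if not p.get("duplicate_flag", False)]

    coded = [{"poc": 0}, {"poc": 1}, {"poc": 2, "duplicate_flag": True},
             {"poc": 3}, {"poc": 4, "duplicate_flag": True}]
    print([p["poc"] for p in restore_original_rate(coded)])  # [0, 1, 3]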

Publication date: 18-01-2018

Reference Picture List Handling

Number: US20180020212A1
Assignee:

An encoder is configured to encode a representation of a current picture of a video stream of multiple pictures. The encoder is further configured to encode, for each of a plurality of reference pictures included in a buffer description for the current picture, a respective one-bit flag according to one of two available values for the one-bit flag. The two available values for the one-bit flag include a first value explicitly indicating to a decoder to include the reference picture in a reference picture list for decoding the current picture. The two available values for the one-bit flag further include a second value explicitly indicating to the decoder not to include the reference picture in the reference picture list for decoding the current picture. The encoder is further configured to output the representation of the current picture and the one-bit flags. 1. An encoder comprising a processor , wherein the processor is configured to:encode a representation of a current picture of a video stream of multiple pictures; a first value explicitly indicating to a decoder to include the reference picture in a reference picture list for decoding the current picture; and', 'a second value explicitly indicating to the decoder not to include the reference picture in the reference picture list for decoding the current picture;, 'encode, for each of a plurality of reference pictures included in a buffer description for the current picture, a respective one-bit flag according to one of two available values for the one-bit flag, the two available values for the one-bit flag comprisingoutput the representation of the current picture and the one-bit flags.2. The encoder of claim 1 , wherein to encode the one-bit flags claim 1 , the processor is configured to set the one-bit flag corresponding to a first reference picture of the plurality of reference pictures to the second value responsive to determining that a layer identity of the first reference picture is higher than a layer ...
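
A small Python sketch of building a reference picture list from the per-picture one-bit flags in the buffer description: pictures whose flag takes the first value go into the list for the current picture, and the others stay in the buffer but are not listed. The data layout is assumed for illustration.

    def build_reference_list(buffer_description):
        """buffer_description: list of (picture_id, one_bit_flag) pairs."""
        return [pic for pic, used_by_curr in buffer_description if used_by_curr]

    bd = [("poc8", 1), ("poc4", 0), ("poc2", 1)]
    print(build_reference_list(bd))   # ['poc8', 'poc2']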

Publication date: 18-01-2018

IMAGE PROCESSING DEVICE AND METHOD WITH A SCALABLE QUANTIZATION MATRIX

Number: US20180020219A1
Assignee: SONY CORPORATION

An image processing device and method that enable suppression of an increase in the amount of coding of a scaling list. The image processing device sets a coefficient located at the beginning of a quantization matrix by adding a replacement difference coefficient that is a difference between a replacement coefficient used to replace a coefficient located at the beginning of the quantization matrix and the coefficient located at the beginning of the quantization matrix to the coefficient located at the beginning of the quantization matrix; up-converts the set quantization matrix; and dequantizes quantized data using an up-converted quantization matrix in which a coefficient located at the beginning of the up-converted quantization matrix has been replaced with the replacement coefficient. The device and method can be applied to an image processing device. 1. (canceled)2: An image processing device , comprising: decode encoded data including a difference value that is a difference between a replacement coefficient and an initial value, the replacement coefficient being used to replace a (0, 0) coefficient of an up-converted quantization matrix which is obtained by up-converting the quantization matrix to the same size as a transform block size, and a replacement difference coefficient that is a difference between the replacement coefficient and a (0, 0) coefficient of a quantization matrix whose size is limited to not greater than a transmission size that is a maximum size allowed in transmission;', 'set the replacement coefficient according to the replacement difference coefficient and the (0, 0) coefficient of the quantization matrix according to the replacement difference coefficient;', 'up-convert the quantization matrix set by the circuitry to set the up-converted quantization matrix;', 'replace the (0, 0) coefficient of the up-converted quantization matrix set by the circuitry with the replacement coefficient, and', 'dequantize quantized data obtained by ...
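
A hedged Python sketch, with illustrative numbers, of the coefficient bookkeeping described here: the replacement coefficient is recovered as the initial value plus the decoded difference, the (0, 0) coefficient of the transmitted matrix follows from the replacement difference coefficient, and the (0, 0) position of the up-converted matrix is then overwritten with the replacement coefficient.

    def rebuild_dc(initial_value, difference_value, replacement_difference,
                   upconverted_matrix):
        replacement = initial_value + difference_value
        dc_of_transmitted_matrix = replacement - replacement_difference
        upconverted_matrix[0][0] = replacement   # replace the (0, 0) coefficient
        return replacement, dc_of_transmitted_matrix

    m = [[16, 16], [16, 16]]
    print(rebuild_dc(8, 4, 2, m), m[0][0])   # (12, 10) 12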

Publication date: 18-01-2018

SIGNAL RESHAPING APPROXIMATION

Number: US20180020224A1

Statistical values are computed based on received source images. An adaptive reshaping function is selected for one or more source images based on the one or more statistical values. A portion of source video content is adaptively reshaped, based on the selected adaptive reshaping function to generate a portion of reshaped video content. The portion of source video content is represented by the one or more source images. An approximation of an inverse of the selected adaptive reshaping function is generated. The reshaped video content and a set of adaptive reshaping parameters defining the approximation of the inverse of the selected adaptive reshaping function are encoded into a reshaped video signal. The reshaped video signal may be processed by a downstream recipient device to generate a version of reconstructed source images, for example, for rendering with a display device. 1. A method , comprising:computing one or more statistical values based on one or more source images in a sequence of source images;selecting, based on the one or more statistical values, an adaptive reshaping function for the one or more source images;adaptively reshaping, based at least in part on the selected adaptive reshaping function, a portion of source video content to generate a portion of reshaped video content, the portion of source video content being represented by the one or more source images;generating an approximation of an inverse of the selected adaptive reshaping function;encoding the reshaped video content and a set of adaptive reshaping parameters that define the approximation of the inverse of the selected adaptive reshaping function into a reshaped video signal.2. The method as recited in claim 1 , wherein the portion of the reshaped video content comprises one or more reshaped images.3. The method as recited in claim 1 , wherein the one or more source images form a scene.4. The method as recited in claim 1 , further comprising:determining a target lookup table (LUT) ...

Publication date: 18-01-2018

VIDEO PROCESSING SYSTEM WITH MULTIPLE SYNTAX PARSING CIRCUITS AND/OR MULTIPLE POST DECODING CIRCUITS

Number: US20180020228A1
Assignee:

A video processing system includes a storage device, a demultiplexing circuit, and a syntax parser. The storage device includes a first buffer and a second buffer. The demultiplexing circuit performs a demultiplexing operation upon an input bitstream to write a video bitstream into the first buffer and write start points of bitstream segments of the video bitstream stored in the first buffer into the second buffer. Each start point is indicative of a start address of a corresponding bitstream segment stored in the first buffer. The syntax parser includes syntax parsing circuits and a syntax parsing control circuit. The syntax parsing control circuit fetches a start point from the second buffer, assigns the fetched start point to a syntax parsing circuit, and triggers the selected syntax parsing circuit to start syntax parsing of a bitstream segment that is read from the first buffer according to the fetched start point. 1. A video processing system comprising: a first buffer; and', 'a second buffer;, 'a storage device, comprisinga demultiplexing circuit, arranged to receive an input bitstream, and perform a demultiplexing operation upon the input bitstream to write a video bitstream into the first buffer and write a plurality of start points of a plurality of bitstream segments of the video bitstream stored in the first buffer into the second buffer, wherein each start point is indicative of a start address of a corresponding bitstream segment stored in the first buffer; and a plurality of syntax parsing circuits; and', 'a syntax parsing control circuit, arranged to fetch a first start point from the second buffer, assign the fetched first start point to a first syntax parsing circuit that is an idle syntax parsing circuit selected from the syntax parsing circuits, and trigger the selected first syntax parsing circuit to start syntax parsing of a first bitstream segment that is read from the first buffer according to the fetched first start point., 'a syntax parser, ...

Publication date: 16-01-2020

Image processing apparatus and method

Number: US20200020134A1
Author: Takeshi Tsukuba
Assignee: Sony Corp

The present disclosure relates to an image processing apparatus and method that can suppress a reduction in subjective image quality. A process that hides predetermined data within data regarding an image is executed, and the hiding is skipped in a case where data in the spatial domain of the image is to be encoded. The data regarding the image for which the hiding is performed, or the data regarding the image for which the hiding is skipped, is encoded. The present disclosure can be applied to, for example, an image processing apparatus, an image encoding apparatus, an image decoding apparatus, and the like.
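
A minimal sketch of the control flow implied by this abstract, with sign hiding used purely as a familiar stand-in for the data-hiding step: hiding is applied when encoding transform-domain data and skipped when the block is to be encoded in the spatial domain. The concrete hiding rule and all names below are assumptions, not details taken from the record.

    def hide_sign_bit(coeffs):
        # Toy stand-in for a data-hiding step: the explicit sign of the first
        # nonzero coefficient is dropped (a real codec would recover it from a
        # parity rule over the remaining coefficients).
        hidden = list(coeffs)
        for i, c in enumerate(hidden):
            if c != 0:
                hidden[i] = abs(c)
                break
        return hidden

    def encode_block(block_data, spatial_domain):
        # The hiding process is skipped when the block is to be encoded in the
        # spatial domain, mirroring the condition described in the abstract.
        if spatial_domain:
            return list(block_data)
        return hide_sign_bit(block_data)

    print(encode_block([-3, 0, 2], spatial_domain=False))  # [3, 0, 2]
    print(encode_block([-3, 0, 2], spatial_domain=True))   # [-3, 0, 2]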

Publication date: 17-01-2019

IMAGE ENCODING APPARATUS, AND CONTROL METHOD THEREOF

Number: US20190020851A1
Author: ABE Takahiro
Assignee:

This invention encodes, using less memory, a wide-angle image obtained by performing image capturing a plurality of times. An apparatus includes a compositing unit that, each time an image capturing unit captures an image, crops a partial image of a predetermined region in the captured image, and composes the partial image with a composed image obtained from a previously captured image; an encoding unit that, when the composed image updated by the compositing unit has a pre-set size, encodes the image of the tile in the composed image; a releasing unit that releases an area used for the encoded tile in the memory; and a control unit that controls the compositing unit, the encoding unit, and the releasing unit so as to repeatedly perform operations until a pre-set condition is satisfied.

1. An image encoding apparatus that encodes a wide-angle composite image obtained from a plurality of images captured while changing an image capture direction of an image capturing unit, the image encoding apparatus comprising:
a memory for temporarily storing a captured image;
a compositing unit that, each time a captured image captured by the image capturing unit is input, crops a partial image of a predetermined region in the input captured image, and positions and composes the partial image with a composed image obtained from a previously input captured image stored in the memory so as to update the composed image;
an encoding unit that, when the composed image updated by the compositing unit has a pre-set size of an encode unit tile, encodes the image of the tile in the composed image;
a releasing unit that releases the encoded tile so as to make the encoded tile overwritable in the memory; and
a control unit that controls the compositing unit, the encoding unit, and the releasing unit so as to repeatedly perform operations until a pre-set condition is satisfied, and generates encoded data of the wide-angle composite image.
2. The apparatus according to claim 1, wherein, if it is ...
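
The capture-composite-encode-release loop can be sketched as follows. The strip-wise compositing, the 256-column tile width, the 16-capture stop condition, and the placeholder encoder are assumptions chosen to keep the example short; they are not values from the record.

    import numpy as np

    TILE_WIDTH = 256   # assumed encode-unit tile size, in columns
    STRIP_WIDTH = 64   # assumed width of the cropped partial image

    def crop_partial(captured):
        # Crop a predetermined region of the captured image (central strip here).
        start = captured.shape[1] // 2 - STRIP_WIDTH // 2
        return captured[:, start:start + STRIP_WIDTH]

    def encode_tile(tile):
        # Placeholder for a real tile encoder; returns an opaque byte blob.
        return tile.tobytes()

    composed = np.empty((480, 0), dtype=np.uint8)   # composed image kept in memory
    encoded_tiles = []

    for shot in range(16):                           # pre-set condition: 16 captures
        captured = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        composed = np.hstack([composed, crop_partial(captured)])   # composite/update
        while composed.shape[1] >= TILE_WIDTH:
            # The composed image reached the tile size: encode that tile, then
            # release its area so the memory footprint stays bounded.
            encoded_tiles.append(encode_tile(composed[:, :TILE_WIDTH]))
            composed = composed[:, TILE_WIDTH:]

    print(len(encoded_tiles), "tiles encoded,", composed.shape[1], "columns pending")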

Publication date: 17-01-2019

METHOD FOR ENCODING/DECODING BLOCK INFORMATION USING QUAD TREE, AND DEVICE FOR USING SAME

Number: US20190020887A1
Assignee:

The disclosed method of decoding the intra prediction mode comprises the steps of: determining whether an intra prediction mode of a present prediction unit is the same as a first candidate intra prediction mode or as a second candidate intra prediction mode on the basis of 1-bit information; and determining, among said first candidate intra prediction mode and said second candidate intra prediction mode, which candidate intra prediction mode is the same as the intra prediction mode of said present prediction unit on the basis of additional 1-bit information, if the intra prediction mode of the present prediction unit is the same as at least either the first candidate intra prediction mode or the second candidate intra prediction mode, and decoding the intra prediction mode of the present prediction unit.

1. A video decoding method comprising:
decoding integrated code block flag information in an encoding unit;
decoding a split information flag based on the integrated code block flag information and size information in a first transform block; and
decoding code block flag information in the first transform block in a case that the first transform block is not split into four second transform blocks based on the split information flag, wherein
the split information flag is not decoded in a case that transform coefficients of the first transform block are not present,
the first transform block is not split into the four second transform blocks in a case that a value of the split information flag is equal to a first predetermined value, and
the first transform block is split into the four second transform blocks in a case that the value of the split information flag is equal to a second predetermined value.
2. The video decoding method of claim 1, wherein the code block flag information in the first transform block is decoded without decoding the split information flag in a case that a size of the first transform block is equal to a predetermined size.
3. The video decoding method ...
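
The dependency between the coded block flag, the transform-block size, and the split flag in claim 1 can be pictured as a recursive decode routine. The bit-reader interface, the minimum transform size of 4, the per-sub-block flag, and the flag ordering below are assumptions; this is a generic quad-tree sketch, not a reproduction of the claimed syntax.

    MIN_TRANSFORM_SIZE = 4  # assumed size at which no split flag is coded

    class BitReader:
        # Minimal stand-in for a bitstream reader that yields one flag at a time.
        def __init__(self, flags):
            self.flags = list(flags)
        def read_flag(self):
            return self.flags.pop(0)

    def decode_transform_tree(reader, size, integrated_cbf):
        # No transform coefficients present at all: the split flag is not decoded.
        if not integrated_cbf:
            return {"size": size, "cbf": 0, "children": []}
        # At the minimum transform size the split flag is likewise not decoded.
        split = reader.read_flag() if size > MIN_TRANSFORM_SIZE else 0
        if split:
            # Split value: the block divides into four second transform blocks.
            children = [decode_transform_tree(reader, size // 2, reader.read_flag())
                        for _ in range(4)]
            return {"size": size, "cbf": None, "children": children}
        # Not split: decode the coded block flag of this transform block.
        return {"size": size, "cbf": reader.read_flag(), "children": []}

    # Example: a 16x16 block that splits once, then codes flags for its sub-blocks.
    reader = BitReader([1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
    tree = decode_transform_tree(reader, size=16, integrated_cbf=1)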

Publication date: 17-01-2019

Motion vector calculation method

Number: US20190020891A1

When a block (MB22) whose motion vector is referred to in the direct mode contains a plurality of motion vectors, the two motion vectors MV23 and MV24 used for inter-picture prediction of the current picture to be coded (P23) are determined by scaling a value obtained by averaging the plurality of motion vectors, or by selecting one of the plurality of motion vectors.
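
A worked sketch of the kind of computation the abstract describes: the motion vectors of the referenced block are either averaged and then scaled, or one of them is selected and scaled, using the ratio of temporal picture distances. The scaling rule, the picture distances, and the variable names follow the usual temporal-direct-mode style and are assumptions, not the exact equations of the record.

    def average_mv(mvs):
        # Average the plurality of motion vectors carried by the referenced block.
        n = len(mvs)
        return (sum(x for x, _ in mvs) / n, sum(y for _, y in mvs) / n)

    def scale_mv(mv, dist_current_to_ref, dist_colocated_to_ref):
        # Scale by the ratio of temporal distances (assumed scaling rule).
        s = dist_current_to_ref / dist_colocated_to_ref
        return (mv[0] * s, mv[1] * s)

    # The referenced block (MB22 in the record) carries two motion vectors; the
    # current picture (P23) derives its prediction vectors either by scaling
    # their average ...
    referenced_mvs = [(8.0, -2.0), (4.0, 2.0)]
    mv23 = scale_mv(average_mv(referenced_mvs), dist_current_to_ref=1, dist_colocated_to_ref=2)
    # ... or by selecting one of them and scaling it.
    mv24 = scale_mv(referenced_mvs[0], dist_current_to_ref=1, dist_colocated_to_ref=2)
    print(mv23, mv24)   # (3.0, 0.0) (4.0, -1.0)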
