Total found: 2361. Displayed: 200.

19-10-2006 publication date

Method, device and system for effectively coding and decoding of video data

Number: AU2006233279A1
Author: WANG YE-KUI, YE-KUI WANG
Assignee:

22-08-2017 publication date

Video coding of several layers

Number: BR112016030044A2
Assignee:

26-12-2018 publication date

STORAGE OF VIRTUAL REALITY VIDEO IN MEDIA FILES

Number: BR112018016787A2
Assignee:

22-09-2005 publication date

Transmission of asset information in streaming services

Number: AU2004317110A1
Assignee:

02-02-2004 publication date

METHOD FOR ERROR CONCEALMENT IN VIDEO SEQUENCES

Number: AU2003281127A1
Assignee:

24-07-2008 publication date

Carriage of SEI messages in RTP payload format

Number: AU2008206744A1
Assignee:

17-11-2011 publication date

Signaling of multiple decoding times in media files

Number: AU2008242129B2
Assignee:

The exemplary embodiments of this invention provide in one aspect thereof an ability to signal multiple decoding times for each sample in a file format level in order to allow, for example, different decoding times for each sample (or sample subset) between decoding an entire stream and decoding a subset of the stream. An alternate decoding time box is specified to allow for the signaling of multiple decoding times for each sample. Such a box can contain a compact version of a table that allows indexing from an alternate decoding time to a sample number, where an alternate decoding time is a decoding time to be used with a sample when only a subset of an elementary stream stored in a track is to be decoded. Furthermore, each entry in the table provides the number of consecutive samples with the same time delta, and the delta between those consecutive samples. By adding the deltas a complete time-to-sample map can be constructed.
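The last two sentences describe a compact run-length table: each entry carries a count of consecutive samples and the decoding-time delta they share, and the full time-to-sample map is recovered by accumulating the deltas. A minimal sketch of that accumulation (hypothetical helper name, not the file-format box syntax):

```python
# Illustrative sketch only: expand a run-length table of (sample_count, sample_delta)
# entries into per-sample alternate decoding times by accumulating the deltas.

def expand_time_to_sample(entries, start_time=0):
    """entries: list of (sample_count, sample_delta) pairs, one per table entry."""
    times = []
    t = start_time
    for sample_count, sample_delta in entries:
        for _ in range(sample_count):
            times.append(t)
            t += sample_delta
    return times

# Example: 3 samples spaced 40 time units apart, then 2 samples spaced 80 apart.
print(expand_time_to_sample([(3, 40), (2, 80)]))   # [0, 40, 80, 120, 200]
```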

15-08-2017 publication date

CONFORMANCE WINDOW INFORMATION IN MULTI-LAYER CODING

Number: BR112016024233A2
Assignee:

04-07-2017 publication date

CODED PICTURE BUFFER REMOVAL TIMES SIGNALED IN PICTURE AND SUB-PICTURE TIMING SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGES

Number: BR112015006488A2
Author: YE-KUI WANG
Assignee:

04-01-2017 publication date

CODING SEI NAL UNITS FOR VIDEO CODING

Number: PT0002873235T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

17-03-2011 publication date

System and method for efficient scalable stream adaptation

Number: AU2006300881B2
Assignee:

23-10-2008 publication date

A video coder

Number: AU2007350974A1
Assignee:

24-11-2005 publication date

Multiple interoperability points for scalable media coding and transmission

Number: AU2005242601A1
Assignee:

11-11-2010 publication date

Coding of frame number in scalable video coding

Number: AU2006233316B2
Assignee:

30-06-2011 publication date

System and method for efficient scalable stream adaptation

Number: AU2011202791A1
Assignee:

A system and method for signaling low-to-high layer switching points at the file format level to enable efficient scalable stream switching in streaming servers and local file playback. The present invention also provides a system and method for signaling low-to-high layer switching points in the video bit stream, e.g., to enable intelligent forwarding of scalability layers in media-aware network elements or computationally scalable decoding in stream recipients.

19-11-2009 publication date

Stream switching based on gradual decoder refresh

Number: AU2003246988B2
Assignee:

19-04-2007 publication date

System and method for efficient scalable stream adaptation

Number: AU2006300881A1
Assignee:

12-07-2007 publication date

Method for checking of video encoder and decoder state integrity

Number: AU2006334077A1
Assignee:

27-01-2011 publication date

Method, device and system for effectively coding and decoding of video data

Number: AU2006233279C1
Assignee:

30-10-2008 publication date

Signaling of multiple decoding times in media files

Number: AU2008242129A1
Assignee:

22-09-2005 publication date

Timing of quality of experience metrics

Number: AU2004317111A1
Assignee:

04-10-2007 publication date

Reference picture marking in scalable video encoding and decoding

Number: AU2007231083A1
Assignee:

19-07-2007 publication date

Backward-compatible aggregation of pictures in scalable video coding

Number: AU2007204168A1
Assignee:

22-09-2005 publication date

Classified media quality of experience

Number: AU2004317109A1
Author: WANG YE-KUI, YE-KUI WANG
Assignee:

24-02-2011 publication date

Backward-compatible aggregation of pictures in scalable video coding

Number: AU2007204168B2
Assignee:

28-02-2008 publication date

System and method for indicating track relationships in media files

Number: AU2007287222A1
Assignee:

10-06-2003 publication date

VIDEO ENCODING AND DECODING OF FOREGROUND AND BACKGROUND WHEREIN PICTURE IS DIVIDED INTO SLICES

Number: AU2002347489A1
Assignee:

21-07-2020 publication date

Media processing using a generic descriptor for file format boxes

Number: BR112020000015A2
Assignee:

09-10-2018 publication date

METHODS AND SYSTEMS OF IMPROVED VIDEO STREAM SWITCHING AND RANDOM ACCESS

Number: BR112018006068A2
Author: FNU HENDRY, YE-KUI WANG
Assignee:

19-09-2017 publication date

SYSTEM AND METHOD FOR MEDIA CONTENT STREAMING

Number: BR112012011581A2
Assignee:

08-01-2009 publication date

Timing of quality of experience metrics

Number: AU2004317111B2
Assignee:

07-06-2004 publication date

PICTURE BUFFERING FOR PREDICTION REFERENCES AND DISPLAY

Number: AU2003276300A1
Assignee:

02-04-2009 publication date

Method for error concealment in video sequences

Number: AU2003281127B2
Assignee:

17-11-2005 publication date

Refined quality feedback in streaming services

Number: AU2005241687A1
Assignee:

04-01-2005 publication date

STREAM SWITCHING BASED ON GRADUAL DECODER REFRESH

Number: AU2003246988A1
Author: WANG YE-KUI, YE-KUI WANG
Assignee:

17-11-2003 publication date

RANDOM ACCESS POINTS IN VIDEO ENCODING

Number: AU2003229800A1
Assignee:

17-08-2011 publication date

System and method for implementing low-complexity multi-view video coding

Number: CN0101558652B
Assignee:

A system and method for implementing low complexity multi-view video coding. According to various embodiments, single-loop decoding is applied to multi-view video coding. For N coded views, where only M of the N views are to be displayed, only those M views are required to be fully decoded and stored to a decoded picture buffer (DPB) when needed. Pictures of other views are only partially decoded or simply parsed and do not have to be stored into the DPB. Various embodiments also provide for an encoder that encodes multi-view video bitstreams in accordance with the single-loop decoding concept, as well as a decoder that utilizes single-loop decoding to decode and output only a subset of the encoded views from a multi-view bitstream.

06-05-2015 publication date

PARALLEL PROCESSING OF TILES AND WAVEFRONTS

Number: AR0000092851A1
Assignee:

Techniques that may allow a video encoder to implement multiple parallel processing mechanisms simultaneously, including two or more of wavefront parallel processing (WPP), tiles, and entropy slices. Signaling techniques that are compatible with coding standards that only allow one parallel processing mechanism to be implemented at a time, but that are also compatible with potential future coding standards that may allow more than one parallel processing mechanism to be implemented simultaneously. Restrictions that can enable WPP and tiles to be implemented simultaneously are also described.

15-05-2018 publication date

FILE FORMAT BASED STREAMING WITH DASH FORMATS BASED ON LCT

Number: BR112017018956A2
Assignee:

04-07-2017 publication date

EXPANDED DECODING UNIT DEFINITION

Number: BR112015006479A2
Author: YE-KUI WANG
Assignee:

15-11-2016 publication date

SIGNALING OF DEBLOCKING FILTER PARAMETERS IN VIDEO CODING

Number: PT0002805494T
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

24-04-2008 publication date

Discardable lower layer adaptations in scalable video coding

Number: AU2007311477A1
Assignee:

22-05-2008 publication date

Classified media quality of experience

Number: AU2004317109B2
Author: WANG YE-KUI, YE-KUI WANG
Assignee:

19-10-2006 publication date

Coding of frame number in scalable video coding

Number: AU2006233316A1
Assignee:

22-09-2011 publication date

Carriage of SEI messages in RTP payload format

Number: AU2008206744B2
Assignee:

A system and method of modifying error resiliency features by conveying temporal level 0 picture indices, such as tl0_pic_idx, within an SEI message instead of optionally including them in the NAL unit header is provided. In addition, a mechanism is provided for enabling repetition of any SEI messages in Real-Time Transport Protocol (RTP) packets. Enabling such repetition of any SEI messages facilitates detection of lost temporal level 0 pictures on the basis of any received packet.
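As a rough illustration of how the conveyed index enables loss detection (a hedged sketch, not the SEI or RTP payload syntax; the 8-bit wrap-around counter is my assumption):

```python
# Illustrative sketch: detect missing temporal-level-0 pictures from the
# tl0_pic_idx values observed in received packets, assuming an 8-bit counter
# that wraps around.

def lost_tl0_pictures(prev_idx, curr_idx, modulo=256):
    """Return how many temporal-level-0 pictures were lost between two
    consecutively observed index values."""
    gap = (curr_idx - prev_idx) % modulo
    return max(gap - 1, 0)   # a gap of 1 means no loss

# Example: receiving index 254 and then 1 implies indices 255 and 0 were lost.
print(lost_tl0_pictures(254, 1))  # 2
```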

28-09-2011 publication date

Picture delimiter in scalable video coding

Number: CN0101444102B
Assignee:

The use of a picture delimiter that is carried in a NAL unit whose NAL unit type value is reserved in the current AVC or SVC specification. The present invention provides the scalability information for the H.264/AVC base layer in such a manner that bitstreams remain decodable with H.264/AVC decoders. In addition, the picture delimiter of the present invention may contain many other syntax elements that can help in easier processing of bitstreams compared to the plain H.264/AVC bitstream syntax.

07-07-2011 publication date

Method for checking of video encoder and decoder state integrity

Number: AU2006334077B2
Assignee:

The present invention provides a method and a system for verifying a match between states of a first video processor and a second video processor, wherein one of said first and second video processors is a video encoder utilizing predictive video encoding and the other one of said first and second video processors is a video decoder capable of reproducing a decoded bit stream from an encoded bit stream generated by said video encoder.
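One way to picture the state match described here, as a hedged sketch rather than the patent's actual mechanism: both sides hash their reconstructed reference pictures and compare the digests.

```python
# Hedged sketch of the general idea: the encoder and the decoder each hash
# their reconstructed reference pictures; differing digests reveal that the
# two states have diverged (e.g. because of transmission errors).
import hashlib

def state_digest(reference_pictures):
    """reference_pictures: iterable of bytes objects (reconstructed samples)."""
    h = hashlib.sha256()
    for pic in reference_pictures:
        h.update(pic)
    return h.hexdigest()

encoder_refs = [bytes([10, 20, 30]), bytes([40, 50, 60])]
decoder_refs = [bytes([10, 20, 30]), bytes([40, 50, 61])]   # one corrupted sample

print(state_digest(encoder_refs) == state_digest(decoder_refs))  # False -> mismatch detected
```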

24-04-2008 publication date

Virtual decoded reference picture marking and reference picture list

Number: AU2007311489A1
Assignee:

15-12-2011 publication date

System and method for providing picture output indications in video coding

Number: AU2007311526B2
Assignee:

An explicit signaling element for controlling decoded picture output and applications when picture output is not desired. A signal element, such as a syntax element in a coded video bitstream, is used to indicate (1) whether a certain decoded picture is output; (2) whether a certain set of pictures are output, wherein the set of pictures may be explicitly signaled or implicitly derived; or (3) whether a certain portion of a picture is output. The signal element may be a part of the coded picture or access unit that it is associated with, or it may reside in a separate syntax structure from the coded picture or access unit, such as a sequence parameter set. The signal element can be used both by an encoder and a decoder in a video coding system, as well as a processing unit that produces a subset of a bitstream as output.

27-05-2010 publication date

Method, device and system for effectively coding and decoding of video data

Number: AU2006233279B2
Assignee:

23-04-2009 publication date

Motion skip and single-loop encoding for multi-view video content

Number: AU2008313328A1
Assignee:

18-07-2017 publication date

PARALLEL PROCESSING FOR VIDEO CODING

Number: BR112015021566A2
Assignee:

04-04-2017 publication date

REFERENCE PICTURE LIST CONSTRUCTION FOR VIDEO CODING

Number: BR112014006867A2
Author: YE-KUI WANG, YING CHEN
Assignee:

24-04-2008 publication date

System and method for providing picture output indications in video coding

Number: AU2007311526A1
Assignee:

03-04-2018 publication date

Signaling of sample groups in file formats

Number: BR112017017315A2
Author: FNU HENDRY, YE-KUI WANG
Assignee:

14-07-2020 publication date

Region-wise packing, content coverage, and frame packing signaling for media content

Number: BR112020000328A2
Author: YE-KUI WANG
Assignee:

03-02-2017 publication date

SIGNALING OF PICTURE ORDER COUNT TO TIMING INFORMATION RELATIONS FOR VIDEO TIMING IN VIDEO CODING

Number: PT0002941888T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

04-04-2017 publication date

REFERENCE PICTURE LIST CONSTRUCTION FOR VIDEO CODING

Number: BR112014006842A2
Author: YE-KUI WANG, YING CHEN
Assignee:

04-04-2017 publication date

DECODED PICTURE BUFFER MANAGEMENT

Number: BR112014006854A2
Author: YE-KUI WANG, YING CHEN
Assignee:

26-02-2019 publication date

VIRTUAL REALITY VIDEO SIGNALING IN DYNAMIC ADAPTIVE STREAMING OVER HTTP

Number: BR112018073902A2
Author: YE-KUI WANG
Assignee:

11-01-2014 publication date

Multiple interoperability points for scalable media coding and transmission

Number: TWI423677B
Assignee: NOKIA CORP, NOKIA CORPORATION

13-01-2017 publication date

SIGNALING PICTURE SIZE IN VIDEO CODING

Number: PT0002732629T
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

23-03-2011 publication date

Coding of frame number in scalable video coding

Number: CN0101189881B
Assignee:

A method of encoding scalable video data having multiple layers where each layer in the multiple layers is associated with at least one other layer includes identifying one or more layers using a first identifier where the first identifier indicates decoding dependency, and identifying reference pictures within the identified one or more layers using a second identifier. The coding of the second identifier for pictures in a first layer is independent of pictures in a second enhancement layer. As such, for all pictures with a certain value of DependencyID, the syntax element frame_num is coded independently of other pictures with different values of DependencyID. Within all pictures with a pre-determined value of DependencyID, a default frame_num coding method is used.
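A minimal sketch of the per-layer counting behaviour described above (hypothetical class, not the SVC syntax): each DependencyID value keeps its own frame_num counter, so pictures of one layer do not advance the counter of another.

```python
# Sketch under stated assumptions; the wrap-around and reference-picture rule
# mirror AVC behaviour in simplified form.
from collections import defaultdict

class FrameNumAssigner:
    def __init__(self, max_frame_num=16):
        self.max_frame_num = max_frame_num
        self.counters = defaultdict(int)      # one counter per dependency_id

    def next_frame_num(self, dependency_id, is_reference=True):
        value = self.counters[dependency_id]
        if is_reference:                      # only reference pictures advance frame_num
            self.counters[dependency_id] = (value + 1) % self.max_frame_num
        return value

assigner = FrameNumAssigner()
print([assigner.next_frame_num(0) for _ in range(3)])  # base layer: [0, 1, 2]
print(assigner.next_frame_num(1))                      # enhancement layer starts at 0
```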

04-07-2017 publication date

INDICATION AND ACTIVATION OF PARAMETER SETS FOR VIDEO CODING

Number: BR112015006441A2
Author: YE-KUI WANG
Assignee:

22-08-2017 publication date

PICTURE ORDER COUNT RESET FOR MULTI-LAYER CODECS

Number: BR112016029777A2
Assignee:

23-08-2017 publication date

INDICATION AND ACTIVATION OF PARAMETER SETS FOR VIDEO CODING

Number: PT0002898690T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

04-12-2018 publication date

MARKING REFERENCE PICTURES IN VIDEO SEQUENCES HAVING BROKEN LINK PICTURES

Number: PT0002839644T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

28-11-2018 publication date

RANDOM ACCESS WITH ADVANCED CODED PICTURE BUFFER MANAGEMENT IN VIDEO CODING

Number: PT0002774365T
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

28-11-2016 publication date

PROGRESSIVE REFINEMENT WITH TEMPORAL SCALABILITY SUPPORT IN VIDEO CODING

Number: PT0002939427T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

11-07-2017 publication date

SIGNALING OF PICTURE ORDER COUNT TO TIMING INFORMATION RELATIONS FOR VIDEO TIMING IN VIDEO CODING

Number: BR112015016256A2
Author: YE-KUI WANG
Assignee:

27-05-2015 publication date

SIGNALING LONG-TERM REFERENCE PICTURES IN VIDEO CODING

Number: AR0000093241A1
Assignee:

A video encoder signals, in a slice header for a current slice of a current picture, a first long-term reference picture (LTRP) entry; the first LTRP entry indicates that a particular reference picture is in a long-term reference picture set of the current picture. In addition, the video encoder signals, in the slice header, a second LTRP entry only if the second LTRP entry does not indicate that the particular reference picture is in the long-term reference picture set of the current picture.

27-06-2017 publication date

Streaming adaptation based on clean random access (CRA) pictures

Number: BR112014032029A2
Author: YE-KUI WANG, YING CHEN
Assignee:

16-11-2011 publication date

System and method for storing multi-source multimedia presentations

Number: CN0102246491A
Assignee:

A file format design supports storage of multi-source multimedia presentations via the inclusion of indications as to whether a presentation is a multi-source presentation and, for one media type, which tracks are from different sources and should be played simultaneously. If a multi-source presentation exists, additional indications may be provided, including: an indication of the multi-source presentation type being stored; indications regarding the source of each track and which tracks have the same source; and indications of different parties' information such as phone numbers, etc. Thus, a player may play back a recorded presentation in the same or substantially the same manner as it was presented during the actual session, and may automatically manipulate the presentation to be more informative or efficient. The file format design further supports storage of other types of multi-source presentations that render more than one media stream for at least one type of media.

29-01-2019 publication date

CONFORMANCE CONSTRAINT FOR COLLOCATED REFERENCE INDEX IN VIDEO CODING

Number: BR112018070983A2
Assignee:

04-04-2017 publication date

CODING REFERENCE PICTURES FOR A REFERENCE PICTURE SET

Number: BR112014006843A2
Author: YE-KUI WANG, YING CHEN
Assignee:

26-12-2018 publication date

HANDLING OF END OF BITSTREAM NAL UNITS IN L-HEVC FILE FORMAT AND IMPROVEMENTS TO HEVC AND L-HEVC TILE TRACKS

Number: BR112018016781A2
Author: FNU HENDRY, YE-KUI WANG
Assignee:

22-04-2020 publication date

Signaling important video information in MIME type network parameters

Number: BR112019019836A2
Assignee:

04-07-2017 publication date

SIGNALING LAYER IDENTIFIERS FOR OPERATION POINTS IN VIDEO CODING

Number: BR112015006839A2
Author: YE-KUI WANG
Assignee:

22-08-2017 publication date

SIGNALING HRD PARAMETERS FOR BITSTREAM PARTITIONS

Number: BR112016029306A2
Author: YE-KUI WANG
Assignee:

12-12-2017 publication date

OPERATION POINT FOR CARRIAGE OF LAYERED HEVC BITSTREAMS

Number: BR112017007298A2
Assignee:

23-05-2012 publication date

Carriage of SEI messages in RTP payload format

Number: CN0101622879B
Assignee:

A system and method of modifying error resiliency features by conveying temporal level 0 picture indices, such as tl0_pic_idx, within an SEI message instead of optionally including them in the NAL unit header is provided. In addition, a mechanism is provided for enabling repetition of any SEI messages in Real-Time Transport Protocol (RTP) packets. Enabling such repetition of any SEI messages facilitates detection of lost temporal level 0 pictures on the basis of any received packet.

14-09-2005 publication date

Method for error concealment in video sequences

Number: CN0001669322A
Assignee:

04-04-2017 publication date

VIDEO CODING WITH SUBSETS OF A REFERENCE PICTURE SET

Number: BR112014006839A2
Author: YE-KUI WANG, YING CHEN
Assignee:

05-02-2019 publication date

IMPROVEMENT ON TILE GROUPING IN HEVC AND L-HEVC FILE FORMATS

Number: BR112018069708A2
Author: FNU HENDRY, YE-KUI WANG
Assignee:

12-10-2016 publication date

INDICATION AND ACTIVATION OF PARAMETER SETS FOR VIDEO CODING

Number: PT0002898691T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

03-02-2017 publication date

SIGNALING OF CLOCK TICK DERIVATION INFORMATION FOR VIDEO TIMING IN VIDEO CODING

Number: PT0002941886T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

03-04-2018 publication date

LOW LATENCY VIDEO STREAMING

Number: BR112017017152A2
Assignee:

30-01-2019 publication date

VIDEO BUFFERING OPERATIONS FOR RANDOM ACCESS IN VIDEO CODING

Number: PT0002941869T
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

05-05-2010 publication date

Video encoding and decoding method and apparatus

Number: CN0001593065B
Assignee:

A video coding and decoding method, wherein a picture is first divided into sub-pictures corresponding to one or more subjectively important picture regions and to a background region sub-picture, which remains after the other sub-pictures are removed from the picture. The sub-pictures are formed to conform to predetermined allowable groups of video coding macroblocks (MBs). The allowable groups of MBs can be, for example, of rectangular shape. The picture is then divided into slices so that each sub-picture is encoded independently of the other sub-pictures, except for the background region sub-picture, which may be coded using the other sub-pictures. The slices of the background sub-picture are formed in scan order, skipping over MBs that belong to another sub-picture. The background sub-picture is only decoded if the positions and sizes of all other sub-pictures can be reconstructed when decoding the picture.

02-12-2019 publication date

HYPOTHETICAL REFERENCE DECODER PARAMETERS IN VIDEO CODING

Number: PT0002898680T
Author: YE-KUI WANG
Assignee: QUALCOMM INC, QUALCOMM INCORPORATED

04-08-2010 publication date

Feedback based scalable video coding

Number: CN0101796846A
Assignee:

A system and method provides a first integrity check code that can be calculated at an encoder and then sent to a decoder as a supplemental enhancement information message. The decoder can then calculate a second integrity check code over the actual received network abstraction layer units. This second integrity check code can be compared with the encoder-generated first integrity check code sent via the supplemental enhancement information message to indicate if in fact all of the transmitted NAL units from which the integrity check code was generated have been received without changes in their content. In addition, an error tracking algorithm is provided that can be run at either the encoder or the decoder in order to determine if the network abstraction layer units are correct in content at the decoder level. Therefore, pictures that are sent as just intra coded frames and pictures that are sent as just inter coded frames can both be checked for errors. Hence, error checking can be provided ...
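As an illustration of the check described here (the abstract does not tie itself to a specific checksum; CRC-32 below is my stand-in), the encoder computes a code over the NAL units it sends and the decoder recomputes it over what arrived:

```python
# Illustrative sketch only: compute an integrity check code over NAL units in
# transmission order and compare the encoder's and decoder's values.
import zlib

def integrity_check_code(nal_units):
    """nal_units: iterable of bytes objects, in transmission order."""
    crc = 0
    for nal in nal_units:
        crc = zlib.crc32(nal, crc)
    return crc & 0xFFFFFFFF

sent = [b"\x67sps", b"\x68pps", b"\x65idr-slice"]
received = sent[:2]                               # last NAL unit lost in transit

encoder_code = integrity_check_code(sent)         # value conveyed in the SEI message
decoder_code = integrity_check_code(received)     # recomputed at the decoder
print(encoder_code == decoder_code)               # False -> loss or corruption detected
```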

11-05-2014 publication date

Parameter set and picture header in video coding

Number: TWI437886B
Assignee: NOKIA CORP, NOKIA CORPORATION

22-08-2017 publication date

SIGNALING HRD PARAMETERS FOR BITSTREAM PARTITIONS

Number: BR112016029356A2
Author: YE-KUI WANG
Assignee:

29-12-2010 publication date

Method, device and system for enhanced and effective fine granularity scalability (FGS) coding and decoding of video data

Number: CN0101180884B
Author: YE-KUI WANG, WANG YE-KUI
Assignee:

The present invention discloses methods, devices and systems for effective and improved video data scalable coding and/or decoding based on Fine Grain Scalability (FGS) information. According to a first aspect of the present invention a method for encoding video data is provided, the method comprising obtaining said video data; generating a base layer picture based on said obtained video data, the base layer picture comprising at least one slice, said slice corresponding to a region within said base layer picture; and generating at least one enhancement layer picture corresponding to said base layer picture, wherein said at least one enhancement layer picture comprises at least one fine granularity scalability (FGS) slice, said at least one FGS-slice corresponding to a region within said enhancement layer picture, wherein the region to which said at least one of said FGS-slices corresponds is different from the region to which said slice in the base layer picture corresponds, encoding said ...

17-01-2013 publication date

Signaling picture size in video coding

Number: US20130016769A1
Assignee: Qualcomm Inc

A video encoder is configured to determine a picture size for one or more pictures included in a video sequence. The picture size associated with the video sequence may be a multiple of an aligned coding unit size for the video sequence. In one example, the aligned coding unit size for the video sequence may comprise a minimum coding unit size where the minimum coding unit size is selected from a plurality of smallest coding unit sizes corresponding to different pictures in the video sequence. A video decoder is configured to obtain syntax elements to determine the picture size and the aligned coding unit size for the video sequence. The video decoder decodes the pictures included in the video sequence with the picture size, and stores the decoded pictures in a decoded picture buffer.
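A small sketch of the size alignment described above, under the stated assumption that the aligned coding unit size is the minimum of the per-picture smallest coding unit sizes (helper names are hypothetical, not the signaled syntax):

```python
# Sketch: derive the aligned coding unit size for a sequence and round the
# picture dimensions up to a multiple of it.

def aligned_cu_size(smallest_cu_sizes):
    """smallest_cu_sizes: smallest coding unit size of each picture in the sequence."""
    return min(smallest_cu_sizes)

def aligned_picture_size(width, height, cu_size):
    def round_up(v):
        return ((v + cu_size - 1) // cu_size) * cu_size
    return round_up(width), round_up(height)

cu = aligned_cu_size([8, 16, 8])              # -> 8
print(aligned_picture_size(1918, 1080, cu))   # (1920, 1080)
```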

07-02-2013 publication date

CODING PARAMETER SETS FOR VARIOUS DIMENSIONS IN VIDEO CODING

Number: US20130034170A1
Assignee: QUALCOMM INCORPORATED

In one example, a device for coding video data includes a video coder configured to code, for a bitstream, information representative of which of a plurality of video coding dimensions are enabled for the bitstream, and code values for each of the enabled video coding dimensions, without coding values for the video coding dimensions that are not enabled, in a network abstraction layer (NAL) unit header of a NAL unit comprising video data coded according to the values for each of the enabled video coding dimensions. In this manner, NAL unit headers may have variable lengths, while still providing information for scalable dimensions to which the NAL units correspond. 1. A method of coding video data , the method comprising:coding, for a bitstream, information representative of which of a plurality of video coding dimensions are enabled for the bitstream; andcoding values for syntax elements representative of the enabled video coding dimensions, without coding values for syntax elements representative of the video coding dimensions that are not enabled, in a network abstraction layer (NAL) unit header of a NAL unit comprising video data coded according to the values for each of the enabled video coding dimensions.2. The method of claim 1 , wherein coding the values for each of the enabled video coding dimensions comprises:determining, for each of the enabled video coding dimensions, a respective number of bits for the syntax elements used to code the respective values; andcoding the values for the syntax elements of the enabled video coding dimensions based on the determined respective numbers of bits.3. The method of claim 2 , further comprising claim 2 , for all video data of the bitstream claim 2 , inferring default values for the video coding dimensions that are not enabled.4. The method of claim 2 , wherein the plurality of video coding dimensions comprise a plurality of scalable video coding dimensions claim 2 , wherein the plurality of scalable video coding ...
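To make the variable-length header idea concrete, here is a hedged sketch with invented dimension names and bit widths (not the actual NAL unit header syntax): only the dimensions signaled as enabled contribute bits to the header.

```python
# Sketch: pack header fields for enabled scalability dimensions only, each with
# its own configured bit width; the resulting header length varies accordingly.

DIMENSION_BITS = {"temporal_id": 3, "layer_id": 6, "view_id": 10, "quality_id": 4}

def pack_nal_header_fields(enabled, values):
    """enabled: dimension names signaled as enabled for the bitstream.
    values: dict mapping each enabled dimension to its value for this NAL unit."""
    bits = ""
    for dim in enabled:
        width = DIMENSION_BITS[dim]
        bits += format(values[dim], "0{}b".format(width))
    return bits

# Only temporal and layer dimensions enabled: 3 + 6 = 9 header bits.
print(pack_nal_header_fields(["temporal_id", "layer_id"],
                             {"temporal_id": 2, "layer_id": 5}))   # "010000101"
```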

07-03-2013 publication date

SLICE HEADER THREE-DIMENSIONAL VIDEO EXTENSION FOR SLICE HEADER PREDICTION

Number: US20130057646A1
Assignee: QUALCOMM INCORPORATED

In one example, a video coder is configured to code one or more blocks of video data representative of texture information of at least a portion of a frame of video data, process a texture slice for a texture view component of a current view associated, the texture slice comprising the coded one or more blocks and a texture slice header comprising a set of syntax elements representative of characteristics of the texture slice, code depth information representative of depth values for at least the portion of the frame, and process a depth slice for a depth view component corresponding to the texture view component of the view, the depth slice comprising the coded depth information and a depth slice header comprising a set of syntax elements representative of characteristics of the depth slice, wherein process the texture slice or the depth slice comprises predict at least one syntax element. 1. A method of coding video data , the method comprising:coding one or more blocks of video data representative of texture information of at least a portion of a frame of the video data;processing a texture slice for a texture view component of a current view associated with an access unit, the texture slice comprising the coded one or more blocks and a texture slice header comprising a set of syntax elements representative of characteristics of the texture slice;coding depth information representative of depth values for at least the portion of the frame; andprocessing a depth slice for a depth view component corresponding to the texture view component of the view, the depth slice comprising the coded depth information and a depth slice header comprising a set of syntax elements representative of characteristics of the depth slice;wherein processing the texture slice or the depth slice comprises predicting at least one syntax element of at least one of the set of syntax elements representative of characteristics of the texture slice or set of syntax elements representative of ...

28-03-2013 publication date

REFERENCE PICTURE LIST CONSTRUCTION FOR VIDEO CODING

Number: US20130077677A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to constructing reference picture lists. The reference picture lists may be constructed from reference picture subsets of a reference picture set. In some examples, the reference picture subsets may be ordered in a particular manner to form the reference picture lists. 1. A method for coding video data , the method comprising:coding information indicative of reference pictures that belong to a reference picture set, wherein the reference picture set identifies the reference pictures that can potentially be used for inter-predicting a current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;constructing a plurality of reference picture subsets that each identifies zero or more of the reference pictures of the reference picture set;adding reference pictures from a first subset of the plurality of reference picture subsets, followed by reference pictures from a second subset of the plurality of reference picture subsets, and followed by reference pictures from a third subset of the plurality of reference picture subsets into a reference picture list as long as a number of reference picture list entries is not greater than a maximum number of allowable reference list entries; andcoding the current picture based on the reference picture list.2. The method of claim 1 , wherein adding the reference pictures comprises:adding the reference pictures from the first reference picture subset in the reference picture list until all reference pictures in the first reference picture subset are added in the reference picture list or the number of reference picture list entries is equal to the maximum number of allowable reference picture list entries;when the number of reference picture list entries is less than the maximum number of allowable reference picture list entries, and after adding the reference pictures from the first reference picture subset, adding the reference ...
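A minimal sketch of the list-building rule described in the abstract and first claim (hypothetical function, not the normative derivation): subsets are appended in order until the list reaches the maximum number of allowable entries.

```python
# Sketch: build a reference picture list from ordered subsets, capped at the
# maximum number of allowable entries.

def build_reference_picture_list(subsets, max_entries):
    """subsets: ordered list of reference picture subsets (each a list of picture ids)."""
    ref_list = []
    for subset in subsets:
        for pic in subset:
            if len(ref_list) >= max_entries:
                return ref_list
            ref_list.append(pic)
    return ref_list

before_curr = [8, 6]     # e.g. short-term pictures before the current picture
after_curr = [12]        # short-term pictures after the current picture
long_term = [0]
print(build_reference_picture_list([before_curr, after_curr, long_term], max_entries=3))
# [8, 6, 12]
```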

28-03-2013 publication date

REFERENCE PICTURE LIST CONSTRUCTION FOR VIDEO CODING

Number: US20130077678A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to constructing reference picture lists. The reference picture lists may be constructed from reference picture subsets of a reference picture set. In some examples, the techniques may repeatedly list reference pictures identified in the reference picture subsets until the number of entries in the reference picture list is equal to the maximum number of allowable entries in the reference picture list. 1. A method for coding video data , the method comprising:coding information indicative of reference pictures that belong to a reference picture set, wherein the reference picture set identifies the reference pictures that can potentially be used for inter-predicting a current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;constructing a plurality of reference picture subsets that each identifies zero or more of the reference pictures of the reference picture set;adding reference pictures from the plurality of reference picture subsets into a first set of entries in a reference picture list;determining whether a number of entries in the reference picture list is equal to a maximum number of allowable entries in the reference picture list;when the number of entries in the reference picture list is not equal to the maximum number of allowable entries in the reference picture list, repeatedly re-adding one or more reference pictures from at least one of the reference picture subsets into entries in the reference picture list that are subsequent to the first set of entries until the number of entries in the reference picture list is equal to the maximum number of allowable entries in the reference picture list; andcoding the current picture based on the reference picture list.2. The method of claim 1 , wherein constructing the plurality of reference picture subsets comprises constructing at least a first reference picture subset claim 1 , a second reference ...

28-03-2013 publication date

VIDEO CODING WITH SUBSETS OF A REFERENCE PICTURE SET

Number: US20130077679A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to deriving a reference picture set. A reference picture set may identify reference pictures that can potentially be used to inter-predict a current picture and picture following the current picture in decoding order. In some examples, deriving the reference picture set may include constructing a plurality of reference picture subsets that together form the reference picture set. 1. A method for coding video data comprising:coding information indicative of reference pictures that belong to a reference picture set, wherein the reference picture set identifies the reference pictures that can potentially be used for inter-predicting a current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;constructing a plurality of reference picture subsets that each identifies zero or more of the reference pictures of the reference picture set; andcoding the current picture based on the plurality of reference picture subsets.2. The method of claim 1 , further comprising:deriving the reference picture set from the plurality of reference picture subsets,wherein coding the current picture comprises coding the current picture based on the derived reference picture set.3. The method of claim 1 , wherein constructing the plurality of reference picture subsets comprises constructing at least five reference picture subsets.4. The method of claim 1 , wherein constructing the plurality of reference picture subsets comprises constructing at least two of:a first reference picture subset that identifies short-term reference pictures that are prior to the current picture in decoding order and prior to the current picture in output order and that can potentially be used for inter-predicting the current picture and one or more of the one or more pictures following the current picture in decoding order;a second reference picture subset that identifies short-term reference pictures that are ...

28-03-2013 publication date

DECODED PICTURE BUFFER MANAGEMENT

Number: US20130077680A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to output and removal of decoded pictures from a decoded picture buffer (DPB). The example techniques may remove a decoded picture from the DPB prior to coding a current picture. For instance, the example techniques may remove the decoded picture if that decoded picture is not identified in the reference picture set of the current picture. 1. A method for coding video data , the method comprising:coding information indicative of reference pictures that belong to a reference picture set, wherein the reference picture set identifies the reference pictures that can potentially be used for inter-predicting a current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;deriving the reference picture set based on the coded information;determining whether a decoded picture stored in a decoded picture buffer (DPB) is not needed for output and is not identified in the reference picture set;when the decoded picture is not needed for output and is not identified in the reference picture set, removing the decoded picture from the DPB; andsubsequent to the removing of the decoded picture, coding the current picture.2. The method of claim 1 , further comprising:constructing a reference picture list based on the reference picture set,wherein removing the decoded picture from the DPB comprises removing the decoded picture from the DPB after constructing the reference picture list.3. The method of claim 1 , further comprising:determining a time when to output the decoded picture; andoutputting the decoded picture based on the determined time and prior to coding the current picture.4. The method of claim 1 , further comprising:storing the current picture in the DPB after coding the current picture.5. The method of claim 1 , further comprising:determining whether the DPB is full; and selecting a decoded picture in the DPB that is marked as “needed for output” and having a smallest ...
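A sketch of the removal rule, using a simplified picture record rather than the normative DPB process: before coding the current picture, every stored picture that is neither needed for output nor present in the current reference picture set is dropped.

```python
# Sketch: prune a decoded picture buffer represented as a list of simple records.

def prune_dpb(dpb, reference_picture_set):
    """dpb: list of dicts like {"poc": int, "needed_for_output": bool}."""
    return [pic for pic in dpb
            if pic["needed_for_output"] or pic["poc"] in reference_picture_set]

dpb = [{"poc": 4, "needed_for_output": False},
       {"poc": 6, "needed_for_output": True},
       {"poc": 8, "needed_for_output": False}]
print(prune_dpb(dpb, reference_picture_set={8}))
# keeps poc 6 (still to be output) and poc 8 (in the reference picture set)
```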

28-03-2013 publication date

REFERENCE PICTURE SIGNALING AND DECODED PICTURE BUFFER MANAGEMENT

Number: US20130077681A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to performing random access starting from a random access point picture that is not an instantaneous decoder refresh picture. Some techniques are also related to reducing the amount of information that is signaled for long-term reference pictures of a reference picture set. Additional techniques are also related to decoded picture buffer management, such as removing decoded pictures based on a temporal identification value. 1. A method for coding video data , the method comprising:coding a full identifier value for a random access point (RAP) picture that is not an instantaneous decoder refresh (IDR) picture; andcoding a partial identifier value for a non-RAP picture based on the full identifier value for the RAP picture, wherein the partial identifier value represents a portion of a full identifier value for the non-RAP picture.2. The method of claim 1 , further comprising:for non-RAP pictures following the RAP picture in decoding order, identifying reference pictures that can be used for inter-predicting the non-RAP pictures based on the full identifier value of the RAP picture.3. The method of claim 1 , wherein coding the full identifier value comprises coding a full picture order count (POC) value for the RAP picture.4. The method of claim 1 , wherein coding the full identifier value comprises separately coding a most significant bit (MSB) portion of a full picture order count (POC) value for the RAP picture and a least signification bit (LSB) portion of the full POC value for the RAP picture.5. The method of claim 1 , wherein the RAP picture comprises one of a clean random access (CRA) picture claim 1 , a broken link access (BLA) picture claim 1 , and a gradual decoding refresh (GDR) picture.6. The method of claim 1 , further comprising:coding a value of a flag, in a slice header for a current picture that includes the slice,wherein when the value of the flag is a first value, no long-term reference picture can be used for inter ...
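The full/partial identifier split can be pictured as follows; this is an illustrative reconstruction of a full value from its signaled least significant bits, anchored at the full value coded for the RAP picture, not the exact derivation of any standard (the 8-bit width is an assumption):

```python
# Sketch: reconstruct a full identifier from its LSB portion relative to a
# previously known full value, choosing the MSB cycle closest to that value.

def full_identifier(prev_full, lsb, lsb_bits=8):
    max_lsb = 1 << lsb_bits
    prev_msb, prev_lsb = prev_full - (prev_full % max_lsb), prev_full % max_lsb
    if lsb < prev_lsb and (prev_lsb - lsb) >= max_lsb // 2:
        msb = prev_msb + max_lsb          # wrapped forward into the next cycle
    elif lsb > prev_lsb and (lsb - prev_lsb) > max_lsb // 2:
        msb = prev_msb - max_lsb          # value belongs to the previous cycle
    else:
        msb = prev_msb
    return msb + lsb

rap_full = 512                            # full value coded for the RAP picture
print(full_identifier(rap_full, 5))       # 517: a picture shortly after the RAP
print(full_identifier(rap_full, 250))     # 506: one MSB cycle back (256 + 250)
```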

28-03-2013 publication date

REFERENCE PICTURE LIST CONSTRUCTION FOR VIDEO CODING

Number: US20130077685A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to modifying an initial reference picture list. The example techniques may identify a reference picture in at least one of the reference picture subsets used to construct the initial reference picture. The example techniques may list the identified reference picture in a current entry of the initial reference picture list to construct a modified reference picture list. 1. A method for coding video data , the method comprising:coding information indicative of reference pictures that belong to a reference picture set, wherein the reference picture set identifies the reference pictures that can potentially be used for inter-predicting a current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;constructing a plurality of reference picture subsets that each identifies zero or more of the reference pictures of the reference picture set;constructing an initial reference picture list based on the constructed reference picture subsets; identifying a reference picture in at least one of the constructed reference picture subsets; and', 'adding the identified reference picture in a current entry of the initial reference picture to construct a modified reference picture list; and, 'when reference picture modification is neededcoding the current picture based on the modified reference picture list.2. The method of claim 1 , wherein identifying the reference picture comprises:determining an index into at least one of the constructed reference picture subsets; anddetermining the reference picture identified at an entry of the at least one of the constructed reference picture subsets based on the determined index.3. The method of claim 2 , wherein determining the index comprises:coding a first syntax element to identify the at least one of the constructed reference picture subsets from which the reference picture is identified; andcoding a second syntax element that ...

28-03-2013 publication date

CODING REFERENCE PICTURES FOR A REFERENCE PICTURE SET

Number: US20130077687A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

Techniques are described related to coding of long-term reference pictures for a reference picture set. In some examples, a video coder may code candidate long-term reference pictures in a parameter set. The video coder also code syntax elements that indicate which long-term reference pictures from the candidate long-term reference pictures belong in the reference picture set. 1. A method for coding video data , the method comprising:coding syntax elements indicating candidate long-term reference pictures identified in a parameter set, wherein one or more of the candidate long-term reference pictures belong in a reference picture set of a current picture, and wherein the reference picture set identifies reference pictures that can potentially be used for inter-predicting the current picture and can potentially be used for inter-predicting one or more pictures following the current picture in decoding order;coding syntax elements that indicate which candidate long-term reference pictures, identified in the parameter set, belong in the reference picture set of the current picture; andconstructing at least one of a plurality of reference picture subsets based on the indication of which candidate long-term reference pictures belong in the reference picture set of the current picture, wherein the plurality of reference picture subsets form the reference picture set.2. The method of claim 1 , wherein coding the candidate long-term reference pictures in the parameter set comprises coding the candidate long-term reference pictures in a sequence parameter set.3. The method of claim 1 , wherein coding the syntax elements that indicate which candidate long-term reference pictures belong in the reference picture set of the current picture comprises coding the syntax elements that indicate which candidate long-term reference pictures belong in the reference picture set of the current picture in a slice header of the current picture.4. The method of claim 1 , further comprising: ...

11-04-2013 publication date

EFFICIENT SIGNALING OF REFERENCE PICTURE SETS

Number: US20130089134A1
Author: Chen Ying, Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coder can select which reference pictures should be signaled in a parameter set such as a picture parameter set (PPS) and which reference pictures should be signaled in a slice header such that when a video decoder constructs a reference picture set, the video decoder does not need to reorder the reference picture set to construct an initial reference picture list for a slice of video data. 1. A method for coding video data , the method comprising:determining that a reference picture set for a current picture is to be constructed from reference pictures identified in a parameter set and reference pictures identified in a slice header, wherein a distance between the current picture and any of the reference pictures identified in the slice header in terms of output order is greater than a distance between the current picture and any of the reference pictures identified in the parameter set; andconstructing the reference picture set based on the reference pictures identified in the parameter set and the slice header.2. The method of claim 1 , further comprising:constructing an initial reference picture list from the constructed reference picture set.3. The method of claim 2 , wherein constructing the initial reference picture list comprises constructing the initial reference picture list without reordering the constructed reference picture set.4. The method of claim 1 , wherein the parameter set comprises a picture parameter set (PPS).5. The method of claim 1 , wherein the distance between the current picture and the reference picture is determined based on a delta picture order count value.6. The method of claim 1 , further comprising:coding a syntax element, wherein the syntax elements indicates the reference picture set for the current picture is to be constructed from the reference pictures identified in the parameter set and the reference pictures identified in the slice header.7. The method of claim 1 , wherein coding comprises decoding claim 1 , and ...
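A sketch of the selection rule (hypothetical helper, not the signaled syntax): reference pictures nearer to the current picture in output order go to the parameter set and the more distant ones to the slice header, so concatenating the two groups already gives the initial list order.

```python
# Sketch: split candidate reference pictures between a parameter set and a
# slice header by output-order distance from the current picture.

def split_reference_pictures(current_poc, reference_pocs, num_in_parameter_set):
    by_distance = sorted(reference_pocs, key=lambda poc: abs(current_poc - poc))
    in_parameter_set = by_distance[:num_in_parameter_set]
    in_slice_header = by_distance[num_in_parameter_set:]
    return in_parameter_set, in_slice_header

pps_refs, slice_refs = split_reference_pictures(
    current_poc=16, reference_pocs=[15, 14, 8, 0], num_in_parameter_set=2)
print(pps_refs, slice_refs)   # [15, 14] [8, 0]
```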

11-04-2013 publication date

ADAPTIVE FRAME SIZE SUPPORT IN ADVANCED VIDEO CODECS

Number: US20130089135A1
Assignee: QUALCOMM INCORPORATED

Techniques are described related to receiving a first decoded frame of video data, wherein the first decoded frame is associated with a first resolution, determining whether a decoded picture buffer is available to store the first decoded frame based on the first resolution, and in the event the decoded picture buffer is available to store the first decoded frame, storing the first decoded frame in the decoded picture buffer, and determining whether the decoded picture buffer is available to store a second decoded frame of video data, wherein the second decoded frame is associated with a second resolution, based on the first resolution and the second resolution, wherein the first decoded frame is different than the second decoded frame. 1. A method of decoding video data , the method comprising:receiving a first decoded frame of video data, wherein the first decoded frame is associated with a first resolution;determining whether a decoded picture buffer is available to store the first decoded frame based on the first resolution; andin the event the decoded picture buffer is available to store the first decoded frame, storing the first decoded frame in the decoded picture buffer, and determining whether the decoded picture buffer is available to store a second decoded frame of video data, wherein the second decoded frame is associated with a second resolution, based on the first resolution and the second resolution, wherein the first decoded frame is different than the second decoded frame.2. The method of claim 1 , wherein determining whether the decoded picture buffer is available to store the first decoded frame based on the first resolution comprises:determining an amount of information that may be stored within the decoded picture buffer;determining an amount of information associated with the first decoded frame based on the first resolution; andcomparing the amount of information that may be stored within the decoded picture buffer and the amount of ...
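As a loose model of the capacity check described above (a simple sample-count budget of my own, not the codec's actual DPB rules): whether the buffer can hold a frame depends on the resolutions of the frames already stored.

```python
# Sketch: a decoded picture buffer with a fixed luma-sample budget; frames of
# different resolutions consume the budget in proportion to width * height.

class DecodedPictureBuffer:
    def __init__(self, capacity_samples):
        self.capacity_samples = capacity_samples
        self.stored = []                           # list of (width, height)

    def can_store(self, width, height):
        used = sum(w * h for w, h in self.stored)
        return used + width * height <= self.capacity_samples

    def store(self, width, height):
        if not self.can_store(width, height):
            raise MemoryError("decoded picture buffer full for this resolution")
        self.stored.append((width, height))

dpb = DecodedPictureBuffer(capacity_samples=1920 * 1080 * 2)
dpb.store(1920, 1080)                  # first decoded frame, full resolution
print(dpb.can_store(1280, 720))        # True: a smaller second frame still fits
print(dpb.can_store(1920, 1080 * 2))   # False: would exceed the sample budget
```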

11-04-2013 publication date

SIGNALING PICTURE IDENTIFICATION FOR VIDEO CODING

Number: US20130089152A1
Assignee: QUALCOMM INCORPORATED

In one example, a video coder, such as a video encoder or video decoder, is configured to determine a number of least significant bits of picture identifying information for a picture of video data, determine a value of the picture identifying information for the picture, and code information indicative of the determined number of least significant bits of the value of the picture identifying information for the picture. 1. A method of coding video data , the method comprising:determining a number of least significant bits of picture identifying information for a picture of video data based on a picture type for the picture;determining a value of the picture identifying information for the picture; andcoding information indicative of the determined number of least significant bits of the value of the picture identifying information for the picture.2. The method of claim 1 , wherein coding the information indicative of the determined number of least significant bits comprises coding data representative of the picture type claim 1 , and wherein determining the number of least significant bits comprises determining that the number of least significant bits comprises zero when the picture type comprises an instantaneous decoder refresh (IDR) picture.3. The method of claim 2 , wherein when the picture type comprises a picture type other than an IDR picture claim 2 , the method further comprises coding a pic_order_cnt_lsb syntax element having a length equal to the determined number of least significant bits claim 2 , wherein the pic_order_cnt_lsb syntax element comprises a value corresponding to the least significant bits of the picture identifying information for the picture.4. The method of claim 2 , wherein determining the number of the least significant bits based on the picture type comprises determining whether the picture is a random access point claim 2 , and when the picture is a random access point claim 2 , the method further comprises determining whether the ...

11-04-2013 publication date

ADAPTIVE FRAME SIZE SUPPORT IN ADVANCED VIDEO CODECS

Number: US20130089154A1
Assignee: QUALCOMM INCORPORATED

Techniques are described related to receiving first and second sub-sequences of video, wherein the first sub-sequence includes one or more frames each having a first resolution, and the second sub-sequence includes one or more frames each having a second resolution, receiving a first sequence parameter set and a second sequence parameter set for the coded video sequence, wherein the first sequence parameter set indicates the first resolution of the one or more frames of the first sub-sequence, and the second sequence parameter set indicates the second resolution of the one or more frames of the second sub-sequence, and wherein the first sequence parameter set is different than the second sequence parameter set, and using the first sequence parameter set and the second sequence parameter set to decode the coded video sequence. 1. A method of decoding video data , the method comprising:receiving a coded video sequence comprising a first sub-sequence and a second sub-sequence, wherein the first sub-sequence includes one or more frames each having a first resolution, and the second sub-sequence includes one or more frames each having a second resolution, and wherein the first sub-sequence is different than the second sub-sequence, and the first resolution is different than the second resolution;receiving a first sequence parameter set and a second sequence parameter set for the coded video sequence, wherein the first sequence parameter set indicates the first resolution of the one or more frames of the first sub-sequence, and the second sequence parameter set indicates the second resolution of the one or more frames of the second sub-sequence, and wherein the first sequence parameter set is different than the second sequence parameter set; andusing the first sequence parameter set and the second sequence parameter set to decode the coded video sequence.2. The method of claim 1 , wherein the first sequence parameter set and the second sequence parameter set are coded in ...

25-04-2013 publication date

GROUPING OF TILES FOR VIDEO CODING

Number: US20130101035A1
Assignee: QUALCOMM INCORPORATED

Techniques described herein for coding video data include techniques for coding pictures partitioned into tiles, in which each of the plurality of tiles in a picture is assigned to one of a plurality of tile groups. One example method for coding video data comprising a picture that is partitioned into a plurality tiles comprises coding video data in a bitstream, and coding, in the bitstream, information that indicates one of a plurality of tile groups to which each of the plurality of tiles is assigned. The techniques for grouping tiles described herein may facilitate improved parallel processing for both encoding and decoding of video bitstreams, improved error resilience, and more flexible region of interest (ROI) coding. 1. A method of coding video data comprising a picture that is partitioned into a plurality tiles , the method comprising:coding the video data in a bitstream; andcoding, in the bitstream, information that indicates one of a plurality of tile groups to which each of the plurality of tiles is assigned.2. The method of claim 1 , wherein the information comprises claim 1 , for each of the plurality of tiles of the picture claim 1 , a selected one of a plurality of tile group IDs associated with the tile claim 1 , wherein the selected tile group ID indicates to which of the tile groups the tile is assigned.3. The method of claim 1 , wherein the tiles assigned to a first one of the plurality of tile groups are interleaved within the picture with the tiles assigned to a second one of the plurality of tile groups.4. The method of claim 1 , wherein a subset of the plurality of tile groups forms at least one of an independently decodable sub-picture or a region of interest (ROI).5. The method of claim 4 , wherein coding the video data comprises decoding the video data claim 4 , wherein decoding the video data comprises at least one of:requesting non-delivery of a portion of the video data that is not associated with the subset of tile groups, ordiscarding ...
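An illustration of the grouping signal with hypothetical structures (not the bitstream syntax): each tile index carries a tile group ID, and a subset of group IDs can be treated as a region of interest whose tiles are the only ones a decoder needs to keep.

```python
# Sketch: tile-to-group assignment and selection of the tiles belonging to a
# chosen subset of tile groups (e.g. a region of interest).

tile_group_id = [0, 1, 0, 1,     # a 4x3 grid of tiles; groups 0 and 1 are
                 0, 1, 0, 1,     # interleaved across the picture
                 2, 2, 2, 2]     # the bottom row forms its own group

def tiles_in_groups(tile_group_id, wanted_groups):
    return [tile for tile, group in enumerate(tile_group_id) if group in wanted_groups]

roi_groups = {1}                                     # decode only the region of interest
print(tiles_in_groups(tile_group_id, roi_groups))    # [1, 3, 5, 7]
```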

Подробнее
02-05-2013 дата публикации

Fragmented parameter set for video coding

Номер: US20130107942A1
Принадлежит: Qualcomm Inc

A video encoder generates a first network abstraction layer (NAL) unit. The first NAL unit contains a first fragment of a parameter set associated with video data. The video encoder also generates a second NAL unit. The second NAL unit contains a second fragment of the parameter set. A video decoder may receive a bitstream that includes the first and second NAL units. The video decoder decodes, based at least in part on the parameter set, one or more coded pictures of the video data.
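The reassembly step a decoder would need for a parameter set split across NAL units can be sketched as follows. The (fragment_index, payload) representation is an assumption made for illustration; in a real bitstream the ordering information would come from syntax elements in the NAL units themselves.

def reassemble_parameter_set(nal_units):
    """Concatenate parameter-set fragments carried in separate NAL units.

    Each element of `nal_units` is assumed to be a (fragment_index, payload)
    pair, sorted here by the fragment index before concatenation."""
    ordered = sorted(nal_units, key=lambda item: item[0])
    return b"".join(payload for _, payload in ordered)

fragments = [(1, b"\x22\x33"), (0, b"\x00\x11")]
print(reassemble_parameter_set(fragments).hex())   # 00112233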

Подробнее
02-05-2013 дата публикации

UNIFIED DESIGN FOR PICTURE PARTITIONING SCHEMES

Номер: US20130107952A1
Принадлежит: QUALCOMM INCORPORATED

A video coder can control in-picture prediction across slice boundaries within a picture. In one example, a first syntax element can control whether in-picture prediction across slice boundaries is enabled for slices of a picture. If in-picture prediction across slice boundaries is enabled for the picture, then a second syntax element can control, for an individual slice, whether in-picture prediction across slice boundaries is enabled for that slice.
1. A method of coding video data, the method comprising: coding a first syntax element for a first picture, wherein a first value for the first syntax element indicates in-picture prediction is allowed across slices for slices of the first picture; and coding a first coding unit of a first slice based on information of a second coding unit of a second slice.
2. The method of claim 1, further comprising: in response to the first syntax element indicating in-picture prediction is allowed across slices, coding a second syntax element indicating in-picture prediction is allowed across slices, wherein the second syntax element is part of a slice header.
3. The method of claim 2, wherein presence of the second syntax element in the slice header is dependent on the first value of the first syntax element.
4. The method of claim 2, further comprising: coding a starting address for a slice, wherein the starting address for the slice is located before the second syntax element in the slice header.
5. The method of claim 1, wherein the first syntax element is part of a picture parameter set (PPS).
6. The method of claim 1, wherein coding the first syntax element comprises coding a first instance of the first syntax element, the method further comprising: coding a second instance of the first syntax element for a second picture, wherein a second value for the second instance of the first syntax element indicates in-picture prediction is not allowed across slices for slices of the second picture.
7. The method of claim 6, further comprising: ...
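The two-level control described above can be sketched with a few lines of Python. The flag names are assumptions for illustration; the point is only that the picture-level syntax element gates whether the slice-level syntax element is consulted at all.

def cross_slice_prediction_enabled(picture_level_flag, slice_header_flag):
    """Two-level control sketch: a picture-level flag gates a per-slice flag.

    Only if the picture-level syntax element allows in-picture prediction
    across slice boundaries is the slice-level syntax element consulted
    (or even present in the slice header)."""
    if not picture_level_flag:
        return False               # disabled for every slice of the picture
    return slice_header_flag       # per-slice decision

print(cross_slice_prediction_enabled(True, True))    # True
print(cross_slice_prediction_enabled(False, True))   # False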

Подробнее
02-05-2013 дата публикации

RANDOM ACCESS WITH ADVANCED DECODED PICTURE BUFFER (DPB) MANAGEMENT IN VIDEO CODING

Номер: US20130107953A1
Принадлежит: QUALCOMM INCORPORATED

As one example, techniques for decoding video data include receiving a bitstream that includes one or more pictures of a coded video sequence (CVS), decoding a first picture according to a decoding order, wherein the first picture is a random access point (RAP) picture that is not an instantaneous decoding refresh (IDR) picture, and decoding at least one other picture following the first picture according to the decoding order based on the decoded first picture. As another example, techniques for encoding video data include generating a bitstream that includes one or more pictures of a CVS, wherein a first picture according to the decoding order is a RAP picture that is not an IDR picture, and avoiding including at least one other picture, other than the first picture, that corresponds to a leading picture associated with the first picture, in the bitstream.
1. A method of decoding video data, the method comprising: receiving a bitstream comprising one or more pictures of a coded video sequence (CVS); decoding a first picture of the one or more pictures according to a decoding order associated with the CVS, wherein the first picture is a random access point (RAP) picture that is not an instantaneous decoding refresh (IDR) picture; and decoding at least one of the one or more pictures, other than the first picture, following the first picture according to the decoding order, based on the decoded first picture.
2. The method of claim 1, further comprising: identifying at least one of the one or more pictures, other than the first picture, that corresponds to a leading picture associated with the first picture, wherein the leading picture comprises a picture that follows the first picture according to the decoding order and precedes the first picture according to a display order associated with the CVS; and identifying one or more reference pictures used to encode the respective picture; determining whether any of the identified one or more reference pictures is ...
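The handling of leading pictures at a non-IDR RAP picture can be illustrated with a small sketch. The dictionary keys ('decode_order', 'display_order') and the skip rule are assumptions for illustration of the general idea, not the normative derivation of any standard.

def decodable_pictures(pictures, start_at_rap=True):
    """Sketch of random access at a non-IDR RAP picture.

    Leading pictures follow the RAP in decoding order but precede it in
    display order; when decoding starts at the RAP they may reference
    unavailable pictures and are therefore skipped here."""
    rap = pictures[0]
    out = [rap]
    for pic in pictures[1:]:
        is_leading = pic['display_order'] < rap['display_order']
        if start_at_rap and is_leading:
            continue            # leading picture: not decodable from here
        out.append(pic)
    return out

pics = [{'decode_order': 0, 'display_order': 4},   # the RAP picture
        {'decode_order': 1, 'display_order': 2},   # leading picture
        {'decode_order': 2, 'display_order': 5}]   # trailing picture
print([p['display_order'] for p in decodable_pictures(pics)])   # [4, 5]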

Подробнее
02-05-2013 дата публикации

CARRIAGE OF SEI MESSAGES IN RTP PAYLOAD FORMAT

Номер: US20130107954A1
Принадлежит: Nokia Corporation

A system and method of modifying error resiliency features by conveying temporal level 0 picture indices, such as tl0_pic_idx, within an SEI message instead of optionally including them in the NAL unit header is provided. In addition, a mechanism is provided for enabling repetition of any SEI messages in Real-Time Transport Protocol (RTP) packets. Enabling such repetition of any SEI messages facilitates detection of lost temporal level 0 pictures on the basis of any received packet.
1-9. (canceled)
10. A method for packetizing a temporal scalable bitstream representative of an image sequence, the method comprising: packetizing at least a portion of the image sequence into a first packet, wherein the first packet comprises first information summarizing the contents of the at least a portion of the encoded image sequence, and providing in the first packet second information indicative of a decoding order of an image within a lowest temporal layer in a temporal layer hierarchy.
11. The method of claim 10, wherein the second information comprises a temporal level picture index.
12. The method of claim 11, wherein the temporal level picture index comprises a plurality of network abstraction layer units in a scalable video coding bitstream.
13. The method of claim 11, wherein, if the image represents an instantaneous decoding refresh picture, the value of the temporal level picture index is equal to one of a zero value and any other value in a predetermined range.
14. The method of claim 11, wherein, if the image does not represent an instantaneous decoding refresh picture, the value of the temporal level picture index is a function of a modulo-operated value of a temporal level picture index of a previous picture having a temporal level of zero.
15. A computer program product, embodied in a computer-readable medium, comprising computer code configured to perform the processes of claim 10.
16. An apparatus, ...
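The loss-detection idea behind a modulo-incrementing temporal-level-0 picture index can be sketched as follows. The function name and the 256 wrap-around value are assumptions for illustration; the point is that a gap between consecutive received index values reveals how many level-0 pictures were lost.

def missing_tl0_pictures(prev_idx, curr_idx, modulo=256):
    """Detect lost temporal-level-0 pictures from consecutive index values.

    The index is assumed to increment by one (modulo 256 here) for each
    temporal-level-0 picture, so any jump larger than one indicates loss."""
    return (curr_idx - prev_idx - 1) % modulo

print(missing_tl0_pictures(10, 11))   # 0 -> nothing lost
print(missing_tl0_pictures(10, 13))   # 2 -> two level-0 pictures missing
print(missing_tl0_pictures(255, 1))   # 1 -> one lost across the wrap-around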

Подробнее
02-05-2013 дата публикации

LOOP FILTERING CONTROL OVER TILE BOUNDARIES

Номер: US20130107973A1
Принадлежит: QUALCOMM INCORPORATED

A video coder can be configured to code a syntax element that indicates if a loop filtering operation, such as deblocking filtering, adaptive loop filtering, or sample adaptive offset filtering, is allowed across a tile boundary. A first value for the syntax element may indicate loop filtering is allowed across the tile boundary, and a second value for the syntax element may indicate loop filtering is not allowed across the tile boundary. If loop filtering is allowed across a tile boundary, additional syntax elements may indicate specifically for which boundaries loop filtering is allowed or disallowed. 1. A method of coding video data , the method comprising:coding, for a picture of video data that is partitioned into tiles, a first value for a first syntax element, wherein the first value for the first syntax element indicates that loop filtering operations are allowed across at least one tile boundary within the picture; and,performing the one or more loop filtering operations across the at least one tile boundary in response to the first value indicating that the loop filtering operations are allowed across the tile boundary.2. The method of claim 1 , wherein the one or more loop filtering operations comprise one or more of a deblocking filtering operation and a sample adaptive offset filtering operation.3. The method of claim 1 , wherein the one or more loop filtering operations comprise an adaptive loop filtering operation.4. The method of claim 1 , wherein the first value for the first syntax element indicates that loop filtering operations are allowed across all tile boundaries within the picture.5. The method of claim 1 , further comprising:coding for a second picture of video data that is partitioned into tiles a second value for the first syntax element, wherein the second value for the first syntax element indicates that loop filtering operations are not allowed across tile boundaries within the second picture.6. The method of claim 5 , further ...

Подробнее
09-05-2013 дата публикации

PARAMETER SET GROUPS FOR CODED VIDEO DATA

Номер: US20130114694A1
Принадлежит: QUALCOMM INCORPORATED

A video coding device, such as a video encoder or a video decoder, may be configured to code a parameter set group representing a first parameter set of a first type and a second parameter set of a second, different type, and code a slice of video data using information of the parameter set group, information of the first parameter set, and information of the second parameter set, wherein the slice includes information referring to the parameter set group. The video coding device may further code the first and second parameter sets. 1. A method of coding video data , the method comprising:coding a parameter set group representing a first parameter set of a first type and a second parameter set of a second, different type; andcoding a slice of video data using information of the parameter set group, information of the first parameter set, and information of the second parameter set, wherein the slice includes information referring to the parameter set group.2. The method of claim 1 , further comprising coding data that signals an identifier of at least one of the first type of the first parameter set and the second type of the second parameter set.3. The method of claim 1 , wherein the information referring to the parameter set group comprises a parameter set group identifier.4. The method of claim 1 , wherein the first type and the second type are each selected from a group of parameter set types including an adaptive loop filter parameter set claim 1 , a sample adaptive offset parameter set claim 1 , a quantization matrix table parameter set claim 1 , a reference picture list construction parameter set claim 1 , and a reference picture set parameter set.5. The method of claim 1 , further comprising:coding the first parameter set; andcoding the second parameter set.6. The method of claim 5 ,wherein coding the first parameter set comprises coding a first network abstraction layer (NAL) unit comprising the first parameter set; andwherein coding the second parameter ...

Подробнее
09-05-2013 дата публикации

VIDEO CODING WITH NETWORK ABSTRACTION LAYER UNITS THAT INCLUDE MULTIPLE ENCODED PICTURE PARTITIONS

Номер: US20130114735A1
Автор: Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

A video encoder generates a Network Abstraction Layer (NAL) unit that contains a plurality of encoded picture partitions of the video data. The video encoder generates a bitstream that includes a variable-length value that represents an entropy-encoded first syntax element, a variable-length value that represents an entropy-encoded second syntax element, and fixed-length values that represent offset syntax elements. Lengths of each of the offset syntax elements are determinable based on the first syntax element. A video decoder uses the first syntax element, the second syntax element, and the offset syntax elements when decoding the encoded picture partitions. 1. A method for encoding video data , the method comprising:entropy encoding a first syntax element, a second syntax element, and a series of offset syntax elements, wherein lengths of each of the offset syntax elements are determinable based on the first syntax element, the number of offset syntax elements in the series of offset syntax elements is determinable based on the second syntax element, and locations of a plurality of encoded picture partitions within a NAL unit are determinable based on the offset syntax elements; andgenerating a bitstream that includes a variable-length value that represents the entropy-encoded first syntax element, a variable-length value that represents the entropy-encoded second syntax element, and fixed-length values that represent the offset syntax elements.2. The method of claim 1 , wherein each of the encoded picture partitions includes a group of coded tree blocks within the NAL unit that are associated with a single entropy slice claim 1 , tile claim 1 , or wavefront parallel processing (WPP) wave.3. The method of claim 1 , wherein the series of offset syntax elements indicate byte offsets of the encoded picture partitions relative to preceding encoded picture partitions within the NAL unit.4. The method of claim 1 , further comprising generating a Supplemental ...
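Recovering the locations of the encoded picture partitions from the offset syntax elements can be sketched as follows. The function name and the convention that each offset is relative to the start of the preceding partition are assumptions for illustration, in the spirit of the byte-offset description above.

def partition_positions(first_partition_start, offsets):
    """Recover the starting byte of each encoded picture partition.

    Each offset is assumed to give the distance, in bytes, from the start of
    the previous partition to the start of the next one."""
    positions = [first_partition_start]
    for off in offsets:
        positions.append(positions[-1] + off)
    return positions

# Three offsets -> four partitions inside the NAL unit payload.
print(partition_positions(0, [120, 95, 210]))   # [0, 120, 215, 425]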

Подробнее
09-05-2013 дата публикации

PADDING OF SEGMENTS IN CODED SLICE NAL UNITS

Номер: US20130114736A1
Принадлежит: QUALCOMM INCORPORATED

A video encoder divides a picture into a plurality of picture partitions, such as tiles or wavefront parallel processing (WPP) waves. The picture partitions are associated with non-overlapping subsets of the treeblocks of the picture. The video encoder generates a coded slice network abstraction layer (NAL) unit that includes encoded representations of the treeblocks associated with a slice of the picture. The coded treeblocks are grouped within the coded slice NAL unit into segments associated with different ones of the picture partitions. The video encoder pads one or more of the segments such that each of the segments begins on a byte boundary. 1. A method for encoding video data , the method comprising:dividing a picture into a plurality of picture partitions, the picture having a plurality of treeblocks, the picture partitions associated with non-overlapping subsets of the treeblocks of the picture; andgenerating a coded slice network abstraction layer (NAL) unit that includes encoded representations of the treeblocks that are associated with a slice of the picture, the encoded representations of the treeblocks grouped within the coded slice NAL unit into segments associated with different ones of the picture partitions, wherein one or more of the segments are padded such that each of the segments begins on a byte boundary.2. The method of claim 1 , wherein generating the coded slice NAL unit comprises generating a slice header that indicates entry points for one or more of the segments.3. The method of claim 2 , wherein the entry points for the segments indicate byte offsets of the segments.4. The method of claim 1 , wherein the picture partitions are tiles or wavefront parallel processing (WPP) waves.5. The method of claim 1 , further comprising generating a parameter set associated with the picture claim 1 , the parameter set including a flag that has a first value claim 1 , the first value indicating that the one or more of the segments are padded such that ...

Подробнее
16-05-2013 дата публикации

CARRIAGE OF SEI MESSAGES IN RTP PAYLOAD FORMAT

Номер: US20130121413A1
Принадлежит: Nokia Corporation

A system and method of modifying error resiliency features by conveying temporal level 0 picture indices, such as tl0_pic_idx, within an SEI message instead of optionally including them in the NAL unit header is provided. In addition, a mechanism is provided for enabling repetition of any SEI messages in Real-Time Transport Protocol (RTP) packets. Enabling such repetition of any SEI messages facilitates detection of lost temporal level 0 pictures on the basis of any received packet.
1-9. (canceled)
10. A method for packetizing a temporal scalable bitstream representative of an image sequence, the method comprising: packetizing at least a portion of the image sequence into a first packet, wherein the first packet comprises first information summarizing the contents of the at least a portion of the encoded image sequence, and providing in the first packet second information indicative of a decoding order of an image within a lowest temporal layer in a temporal layer hierarchy.
11. The method of claim 10, wherein the second information comprises a temporal level picture index.
12. The method of claim 11, wherein the temporal level picture index comprises a plurality of network abstraction layer units in a scalable video coding bitstream.
13. The method of claim 11, wherein, if the image represents an instantaneous decoding refresh picture, the value of the temporal level picture index is equal to one of a zero value and any other value in a predetermined range.
14. The method of claim 11, wherein, if the image does not represent an instantaneous decoding refresh picture, the value of the temporal level picture index is a function of a modulo-operated value of a temporal level picture index of a previous picture having a temporal level of zero.
15. A computer program product, embodied in a computer-readable medium, comprising computer code configured to perform the processes of claim 10.
16. An apparatus, ...

Подробнее
30-05-2013 дата публикации

SEQUENCE LEVEL INFORMATION FOR MULTIVIEW VIDEO CODING (MVC) COMPATIBLE THREE-DIMENSIONAL VIDEO CODING (3DVC)

Номер: US20130135431A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

In general, techniques are described for separately coding depth and texture components of video data. A video coding device for processing the video data comprising one or more processors may perform the techniques. The one or more processors may be configured to determine first sequence level information describing characteristics of the depth components, and determine second sequence level information describing characteristics of an operation point of the video data.
1. A method of processing video data including a view component comprising one or more depth components and one or more texture components, the method comprising: determining first sequence level information describing characteristics of the depth components; and determining second sequence level information describing characteristics of an operation point of the video data.
2. The method of claim 1, wherein the first sequence level information comprises a three-dimensional video coding sequence parameter set that specifies a view dependency of the depth components.
3. The method of claim 2, further comprising determining a reference picture list that identifies one or more reference pictures for the depth components indicated in the three-dimensional video coding sequence parameter set.
4. The method of claim 2, wherein the second sequence level information includes a three-dimensional video coding sequence parameter set that describes, for the operation point, a list of target output views, a number of texture views to be decoded when decoding the operation point, and a number of depth views to be decoded when decoding the operation point, wherein the number of texture views to be decoded is different from the number of depth views.
5. The method of claim 4, further comprising targeting, for each of the target output views specified in the list of target output views, the one or more depth components when available.
6. The method of claim 1, further comprising specifying a three ...

Подробнее
30-05-2013 дата публикации

DEPTH COMPONENT REMOVAL FOR MULTIVIEW VIDEO CODING (MVC) COMPATIBLE THREE-DIMENSIONAL VIDEO CODING (3DVC)

Номер: US20130135433A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

In general, techniques are described for separately coding depth and texture components of video data. A video coding device configured to code video data may perform the techniques. The video coding device may comprise a decoded picture buffer and a processor configured to store a depth component in the decoded picture buffer, analyze a view dependency to determine whether the depth component is used for inter-view prediction and remove the depth component from the decoded picture buffer in response to determining that the depth component is not used for inter-view prediction.
1. A method for video coding, for processing video data including a view component comprised of a depth component and a texture component, the method comprising: storing a depth component in a decoded picture buffer; analyzing a view dependency to determine whether the depth component is used for inter-view prediction; and removing the depth component from the decoded picture buffer in response to determining that the depth component is not used for inter-view prediction.
2. The method of claim 1, wherein the depth component is associated with a view component of a view of video data, wherein a texture component is also associated with the view component, and wherein removing the depth component comprises removing the depth component from the decoded picture buffer without removing the texture component in response to determining that the depth component is not used for inter-view prediction.
3. The method of claim 1, wherein the depth component does not belong to a target output view and is a non-reference picture or a picture marked as “unused for reference.”
4. The method of claim 1, wherein the view dependency is signaled in a video coding sequence parameter set extension of a subset sequence parameter set, and wherein the subset sequence parameter set contains a three dimensional video profile and is activated as an active view video coding sequence parameter set when analyzing the view ...

Подробнее
30-05-2013 дата публикации

NESTED SEI MESSAGES FOR MULTIVIEW VIDEO CODING (MVC) COMPATIBLE THREE-DIMENSIONAL VIDEO CODING (3DVC)

Номер: US20130135434A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

In general, techniques are described for separately processing depth and texture components of video data. A device configured to process video data including a view component comprised of a depth component and a texture component may perform various aspects of the techniques. The device may comprise a processor configured to determine a supplemental enhancement information message that applies when processing the view component of the video data, and determine a nested supplemental enhancement information message that applies in addition to the supplemental enhancement information message when processing the depth component of the view component. 1. A method of processing video data including a view component comprising a depth component and a texture component , the method comprising:determining a supplemental enhancement information message that applies when processing the view component of the video data; anddetermining a nested supplemental enhancement information message that applies to the depth component of the view component in addition to the supplemental enhancement information message.2. The method of claim 1 , further comprising processing the depth component of the view component based on the supplemental enhancement information message and the nested supplemental enhancement information message.3. The method of claim 1 ,wherein determining a nested supplemental enhancement information message comprises determining the nested supplemental enhancement information message that applies in addition to the supplemental enhancement information message when only processing the depth component of the view component, andwherein processing the depth component of the view component comprises processing only the depth component of the view component and not the texture component of the view component based on the supplemental enhancement information message and the nested supplemental enhancement information message.4. The method of claim 1 , further comprising: ...

Подробнее
30-05-2013 дата публикации

Activation of parameter sets for multiview video coding (mvc) compatible three-dimensional video coding (3dvc)

Номер: US20130136176A1
Автор: Ye-Kui Wang, YING Chen
Принадлежит: Qualcomm Inc

In general, techniques are described for separately coding depth and texture components of video data. A video coding device for coding video data that includes a view component comprised of a depth component and a texture component may perform the techniques. The video coding device may comprise, as one example, a processor configured to activate a parameter set as a texture parameter set for the texture component of the view component, and code the texture component of the view component based on the activated texture parameter set.

Подробнее
06-06-2013 дата публикации

CODING LEAST SIGNIFICANT BITS OF PICTURE ORDER COUNT VALUES IDENTIFYING LONG-TERM REFERENCE PICTURES

Номер: US20130142256A1
Принадлежит: QUALCOMM INCORPORATED

In general, techniques are described for coding picture order count values identifying long-term reference pictures. A video decoding device comprising a processor may perform the techniques. The processor may be configured to determine a number of bits used to represent least significant bits of the picture order count value that identifies a long-term reference picture to be used when decoding at least a portion of a current picture and parse the determined number of bits from a bitstream representative of the encoded video data. The parsed bits represent the least significant bits of the picture order count value. The processor retrieves the long-term reference picture from a decoded picture buffer based on the least significant bits, and decodes at least the portion of the current picture using the retrieved long-term reference picture. 1. A method of encoding video data , the method comprising:determining, for a current picture of the video data, a long term reference picture to be used when encoding at least a portion of a current picture of the video data;determining a number of bits to be used to represent one or more least significant bits of a picture order count value representative of an encoded version of the video data;specifying the one or more least significant bits of the picture order count value using the determined number of bits used to represent the one or more least significant bits of the picture order count value; andencoding at least the portion of the current picture using the long-term reference picture.2. The method of claim 1 , wherein the picture order count value that identifies the long-term reference picture comprises a picture order count value that identifies a long-term reference picture that is present in a decoded picture buffer but that is not specified as one or more long-term reference pictures in a sequence parameter set associated with the current picture.3. The method of claim 1 , wherein specifying the one or more least ...
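The basic arithmetic behind signalling only the least significant bits of a picture order count can be sketched as follows. The helper names and the example DPB contents are assumptions for illustration; the matching rule assumes the signalled LSBs are unambiguous among the pictures currently in the decoded picture buffer.

def poc_lsb(poc, num_bits):
    """Least significant bits of a picture order count value."""
    return poc & ((1 << num_bits) - 1)

def find_long_term_reference(dpb_pocs, signalled_lsb, num_bits):
    """Sketch: pick the picture in the decoded picture buffer whose POC LSBs
    match the signalled value."""
    matches = [poc for poc in dpb_pocs if poc_lsb(poc, num_bits) == signalled_lsb]
    return matches[0] if matches else None

dpb = [7, 48, 50]                       # POC values currently in the DPB
print(poc_lsb(48, 5))                   # 16 -> the value an encoder would signal
print(find_long_term_reference(dpb, 16, 5))   # 48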

Подробнее
06-06-2013 дата публикации

Coding picture order count values identifying long-term reference frames

Номер: US20130142257A1
Принадлежит: Qualcomm Inc

In general, techniques are described for coding picture order count values identifying long-term reference pictures. A video decoding device comprising a processor may perform the techniques. The processor may determine least significant bits (LSBs) of a picture order count (POC) value that identifies a long-term reference picture (LTRP). The LSBs do not uniquely identify the POC value with respect to the LSBs of any other POC value identifying any other picture in a decoded picture buffer (DPB). The processor may determine most significant bits (MSBs) of the POC value. The MSBs combined with the LSBs is sufficient to distinguish the POC value from any other POC value that identifies any other picture in the DPB. The processor may retrieve the LTRP from the decoded picture buffer based on the LSBs and MSBs of the POC value, and decode a current picture of the video data using the retrieved LTRP.

Подробнее
20-06-2013 дата публикации

REFERENCE PICTURE LIST CONSTRUCTION FOR MULTI-VIEW AND THREE-DIMENSIONAL VIDEO CODING

Номер: US20130155184A1
Принадлежит: QUALCOMM INCORPORATED

A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list. 1. A method for multiview video decoding , the method comprising:parsing, from a bitstream, syntax elements indicating a reference picture set of a current view component of an access unit, the reference picture set including an inter-view reference picture set that includes a plurality of view components that belong to the access unit and that are associated with different views;generating, based on the reference picture set, a reference picture list for the current view component; anddecoding at least a portion of the current view component based on one or more reference pictures in the reference picture list.2. The method of claim 1 , wherein generating the reference picture list comprises generating the reference picture list such that the reference picture list includes a first subset claim 1 , a second subset claim 1 , a third subset claim 1 , a fourth subset claim 1 , a fifth subset claim 1 , a sixth subset claim 1 , and the inter-view reference picture set claim 1 , the first subset comprising short-term reference view components that are prior to the ...

Подробнее
27-06-2013 дата публикации

PERFORMING MOTION VECTOR PREDICTION FOR VIDEO CODING

Номер: US20130163668A1
Принадлежит: QUALCOMM INCORPORATED

In general, techniques are described for performing motion vector prediction for video coding. A video coding device comprising a processor may perform the techniques. The processor may be configured to determine a plurality of candidate motion vectors for a current block of the video data so as to perform the motion vector prediction process and scale one or more of the plurality of candidate motion vectors determined for the current block of the video data to generate one or more scaled candidate motion vectors. The processor may then be configured to modify the scaled candidate motion vectors to be within a specified range. 1. A method of coding video data , the method comprising:determining a plurality of candidate motion vectors for a current block of the video data so as to perform a motion vector prediction process;scaling one or more of the plurality of candidate motion vectors determined for the current block of the video data to generate one or more scaled candidate motion vectors;modifying the scaled candidate motion vectors to be within a specified range;selecting one of the plurality of candidate motion vectors as a motion vector predictor for the current block of the video data; andcoding the current block of video data based on motion vector predictor.2. The method of claim 1 , wherein modifying the scaled candidate motion vectors comprises modifying the scaled candidate motion vectors without modifying any of the other candidate motion vectors that have not been scaled.3. The method of claim 1 , wherein modifying the scaled candidate motion vectors comprises clipping the scaled candidate motion vectors prior to selecting one of the plurality of candidate motion vectors as a motion vector predictor for the current block of the video data.4. The method of claim 1 , wherein the motion vector prediction process is one of a merge mode and an advanced motion vector prediction mode.5. The method of claim 1 , wherein the specified range is defined by a video ...
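A minimal sketch of the clipping step applied to scaled candidates: the 16-bit signed range below is only an illustrative choice for the "specified range" mentioned above, and the function name is an assumption.

def clip_motion_vector(mv, bits=16):
    """Clip a scaled motion vector component into a signed fixed-width range.

    Only scaled candidates would be clipped this way; candidates that were
    never scaled are left untouched, matching the description above."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, mv))

print(clip_motion_vector(40000))    # 32767
print(clip_motion_vector(-40000))   # -32768
print(clip_motion_vector(123))      # 123 (already in range)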

Подробнее
11-07-2013 дата публикации

SIGNALING VIEW SYNTHESIS PREDICTION SUPPORT IN 3D VIDEO CODING

Номер: US20130176389A1
Принадлежит: QUALCOMM INCORPORATED

In one example, a video coder is configured to code information indicative of whether view synthesis prediction is enabled for video data. When the information indicates that view synthesis prediction is enabled for the video data, the video coder may generate a view synthesis picture using the video data and code at least a portion of a current picture relative to the view synthesis picture. The at least portion of the current picture may comprise, for example, a block (e.g., a PU, a CU, a macroblock, or a partition of a macroblock), a slice, a tile, a wavefront, or the entirety of the current picture. On the other hand, when the information indicates that view synthesis prediction is not enabled for the video data, the video coder may code the current picture using at least one of intra-prediction, temporal inter-prediction, and inter-view prediction without reference to any view synthesis pictures. 1. A method of coding video data , the method comprising:coding information indicative of whether view synthesis prediction is enabled for video data; generating a view synthesis picture using the video data; and', 'coding at least a portion of a current picture relative to the view synthesis picture., 'when the information indicates that view synthesis prediction is enabled for the video data2. The method of claim 1 , wherein coding the information comprises coding a syntax element of a parameter set corresponding to the at least portion of the current picture.3. The method of claim 2 , wherein coding the syntax element comprises coding a syntax element of a sequence parameter set corresponding to a sequence of pictures including the current picture.4. The method of claim 2 , wherein coding the syntax element comprises coding a syntax element of at least one of a picture parameter set corresponding to the current picture and an access unit level parameter set corresponding to a slice comprising the at least portion of the current picture.5. The method of claim 4 , ...

Подробнее
11-07-2013 дата публикации

Motion vector scaling in video coding

Номер: US20130177084A1
Автор: Xianglin Wang, Ye-Kui Wang
Принадлежит: Qualcomm Inc

This disclosure proposes techniques for motion vector scaling. In particular, this disclosure proposes that both an implicit motion vector scaling process (e.g., the POC-based motion vector scaling process described above), as well as an explicit motion vector (e.g., a motion vector scaling process using scaling weights) may be used to perform motion vector scaling. This disclosure also discloses example signaling methods for indicating the type of motion vector scaling used.
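The implicit, POC-based scaling mentioned above follows the usual temporal-distance idea, sketched here in plain floating point for clarity (real codecs use a fixed-point approximation); the function name and example values are assumptions for illustration.

def scale_mv_by_poc(mv, poc_cur, poc_ref_cand, poc_ref_target):
    """POC-distance ('implicit') motion vector scaling sketch.

    The candidate motion vector is stretched by the ratio of the distance the
    current block needs (current picture to target reference) to the distance
    the candidate covered (current picture to its own reference)."""
    td = poc_cur - poc_ref_cand     # distance covered by the candidate
    tb = poc_cur - poc_ref_target   # distance needed by the current block
    if td == 0:
        return mv
    return round(mv * tb / td)

# Candidate points 4 pictures back; current block references 2 pictures back.
print(scale_mv_by_poc(mv=8, poc_cur=8, poc_ref_cand=4, poc_ref_target=6))   # 4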

Подробнее
18-07-2013 дата публикации

CODING PARAMETER SETS AND NAL UNIT HEADERS FOR VIDEO CODING

Номер: US20130182755A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

In one example, a video coder, such as a video encoder or video decoder, is configured to code a video parameter set (VPS) for one or more layers of video data, wherein each of the one or more layers of video data refer to the VPS, and code the one or more layers of video data based at least in part on the VPS. The video coder may code the VPS for video data conforming to High-Efficiency Video Coding, Multiview Video Coding, Scalable Video Coding, or other video coding standards or extensions of video coding standards. The VPS may include data specifying parameters for corresponding sequences of video data within various different layers (e.g., views, quality layers, or the like). The parameters of the VPS may provide indications of how the corresponding video data is coded. 1. A method of coding video data , the method comprising:coding a video parameter set (VPS) for one or more layers of video data, wherein each of the one or more layers of video data refer to the VPS; andcoding the one or more layers of video data based at least in part on the VPS.2. The method of claim 1 , wherein coding the VPS comprises coding data of the VPS indicative of a maximum number of temporal layers in the one or more layers.3. The method of claim 1 , wherein coding the VPS comprises coding data of the VPS indicative of a number of frames to be reordered in at least one of the one or more layers.4. The method of claim 1 , wherein coding the VPS comprises coding data of the VPS indicative of a number of pictures to be stored in a decoded picture buffer (DPB) during decoding of the one or more layers.5. The method of claim 1 , wherein coding the VPS comprises coding data of the VPS indicative of one or more sets of hypothetical reference decoder (HRD) parameters.6. The method of claim 1 , wherein coding the VPS comprises coding data of the VPS indicative of whether the VPS includes an extension beyond a corresponding standard claim 1 , and when the VPS includes the extension claim 1 , ...

Подробнее
18-07-2013 дата публикации

INDICATION OF USE OF WAVEFRONT PARALLEL PROCESSING IN VIDEO CODING

Номер: US20130182774A1
Принадлежит: QUALCOMM INCORPORATED

A video encoder generates a bitstream that includes a syntax element that indicates whether a picture is encoded according to either a first coding mode or a second coding mode. In the first coding mode, the picture is entirely encoded using wavefront parallel processing (WPP). In the second coding mode, each tile of the picture is encoded without using WPP and the picture may have one or more tiles. A video decoder may parse the syntax element from the bitstream. In response to determining that the syntax element has a particular value, the video decoder decodes the picture entirely using WPP. In response to determining that the syntax element does not have the particular value, the video decoder decodes each tile of the picture without using WPP.
1. A method for decoding video data, the method comprising: parsing, from a bitstream that includes a coded representation of a picture in the video data, a syntax element; in response to determining that the syntax element has a particular value, decoding the picture entirely using wavefront parallel processing (WPP); and in response to determining that the syntax element does not have the particular value, decoding each tile of the picture without using WPP, wherein the picture has one or more tiles.
2. The method of claim 1, further comprising parsing, from the bitstream, a picture parameter set that includes the syntax element.
3. The method of claim 1, further comprising parsing, from the bitstream, a sequence parameter set that includes the syntax element.
4. The method of claim 1, wherein the picture is partitioned into at least a first tile and a second tile and decoding each tile of the picture without using WPP comprises decoding, in parallel, a coding tree block (CTB) of the first tile and a CTB of the second tile.
5. The method of claim 1, further comprising: determining that a parameter set includes a tile column number syntax element and a tile row number ...

Подробнее
18-07-2013 дата публикации

SUB-STREAMS FOR WAVEFRONT PARALLEL PROCESSING IN VIDEO CODING

Номер: US20130182775A1
Принадлежит: QUALCOMM INCORPORATED

A video encoder signals whether WPP is used to encode a picture of a sequence of video picture. If WPP is used to encode the picture, the video encoder generates a coded slice NAL unit that includes a plurality of sub-streams, each of which includes a consecutive series of bits that represents one encoded row of coding tree blocks (CTBs) in a slice of the picture. A video decoder receives a bitstream that includes the coded slice NAL unit. Furthermore, the video decoder may determine, based on a syntax element in the bitstream, that the slice is encoded using WPP and may decode the slice using WPP. 1. A method of encoding video data , the method comprising:signaling that wavefront parallel processing (WPP) is used to encode a picture of a sequence of video picture;performing WPP to generate a plurality of sub-streams, each of the sub-streams including a consecutive series of bits that represents one encoded row of coding tree blocks (CTBs) in a slice of the picture; andgenerating a coded slice network abstraction layer (NAL) unit that includes the plurality of sub-streams.2. The method of claim 1 , wherein generating the coded slice NAL unit comprises generating a slice header of the coded slice NAL unit and slice data of the coded slice NAL unit claim 1 , the slice data including the sub-streams claim 1 , the slice header including a plurality of offset syntax elements from which entry points of the sub-streams are determinable.3. The method of claim 2 , wherein the slice header further includes a first syntax element and a second syntax element claim 2 , the number of offset syntax elements in the plurality of offset syntax elements is determinable based on the first syntax element claim 2 , and a length claim 2 , in bits claim 2 , of each of the offset syntax elements is determinable based on the second syntax element.4. The method of claim 1 , wherein signaling that WPP is used to encode the picture comprises generating a picture parameter set (PPS) that ...
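Splitting the coded slice data of such a NAL unit into its per-row sub-streams can be sketched as follows. The assumption here, made for illustration, is that the entry-point offsets give the length in bytes of every sub-stream except the last, which runs to the end of the slice data.

def split_wpp_substreams(slice_data, entry_point_offsets):
    """Split coded slice data into per-CTB-row sub-streams.

    `entry_point_offsets` is assumed to hold the length in bytes of each
    sub-stream except the last."""
    substreams, pos = [], 0
    for length in entry_point_offsets:
        substreams.append(slice_data[pos:pos + length])
        pos += length
    substreams.append(slice_data[pos:])    # final row of CTBs
    return substreams

rows = split_wpp_substreams(b"AAAABBBCCCCC", [4, 3])
print(rows)    # [b'AAAA', b'BBB', b'CCCCC']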

Подробнее
25-07-2013 дата публикации

SIGNALING OF DEBLOCKING FILTER PARAMETERS IN VIDEO CODING

Номер: US20130188733A1
Принадлежит: QUALCOMM INCORPORATED

This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice. 1. A method of decoding video data , the method comprising:decoding a first syntax element defined to indicate whether deblocking filter parameters are present in both a picture layer parameter set and a slice header;when the first syntax element indicates that deblocking filter parameters are present in both the picture layer parameter set and the slice header, decoding a second syntax element in the slice header defined to indicate whether to use a first set of deblocking filter parameters included in the picture layer parameter set or a second set of deblocking filter parameters included in the slice header to define a deblocking filter applied to a current video slice; andwhen the first syntax element indicates that deblocking filter parameters are not present in both the picture layer parameter set and the slice header, determining that the second syntax element is not present in the slice header to be decoded.2. The method of claim 1 , wherein the picture layer parameter set comprises one of a picture parameter set (PPS) or an adaptation parameter set ...
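The selection rule described above can be sketched with a small function. The parameter names and dictionary layout are assumptions for illustration; the logic only mirrors the prose: when parameters are present in both places, a second syntax element picks one set, and otherwise no second syntax element is coded.

def select_deblocking_params(pps_params, slice_params,
                             params_in_both, use_slice_header):
    """Sketch of choosing which set of deblocking filter parameters applies."""
    if params_in_both:
        # Second syntax element decides between the two coded sets.
        return slice_params if use_slice_header else pps_params
    # Only one set is present; use whichever one was coded.
    return slice_params if slice_params is not None else pps_params

pps = {'beta_offset': 2, 'tc_offset': 0}
slc = {'beta_offset': -2, 'tc_offset': 2}
print(select_deblocking_params(pps, slc, params_in_both=True, use_slice_header=False))
print(select_deblocking_params(pps, None, params_in_both=False, use_slice_header=None))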

Подробнее
01-08-2013 дата публикации

METHOD OF CODING VIDEO AND STORING VIDEO CONTENT

Номер: US20130195171A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include a dedicated array for parameter sets.
1. A method of generating a video file including coded video content, the method comprising: obtaining a plurality of slices of coded video content; obtaining a plurality of parameter sets associated with the plurality of slices of video content; and encapsulating a plurality of parameter sets within a sample description of a file track, wherein parameter set network abstraction layer units corresponding to a type of parameter set are included in a dedicated array in the sample description.
2. The method of claim 1, wherein the sample description further includes an array including supplemental enhancement information network abstraction layer units.
3. The method of claim 1, wherein the sample description includes a first array including sequence parameter set network abstraction layer units and a second array including picture parameter set network abstraction layer units.
4. A device comprising a video file creation module configured to: obtain a plurality of slices of coded video content; obtain a plurality of parameter sets associated with the plurality of slices of video content; and encapsulate a plurality of parameter sets within a sample description of a file track, wherein parameter set network abstraction layer units corresponding to a type of parameter set are included in a dedicated array in the sample description.
5. The device of claim 4, wherein the sample description further includes an array ...

Подробнее
01-08-2013 дата публикации

METHOD OF CODING VIDEO AND STORING VIDEO CONTENT

Номер: US20130195172A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include an indicator identifying a number of temporal layers of the video stream.
1. A method of generating a video file including coded video content, the method comprising: obtaining a plurality of slices of coded video content; encapsulating the plurality of slices of coded video content within a plurality of access units of a video stream, wherein the video stream includes multiple temporal layers; and encapsulating an indicator within a sample description of a file track, wherein the indicator indicates a number of temporal layers of the video stream.
2. The method of claim 1, wherein the file track contains a representation of the assignment of the samples in the track to temporal layers as well as a characteristics description for each of the temporal layers.
3. The method of claim 2, wherein the characteristics description includes at least one of temporal layer identification, profile, level, bitrate, and frame rate.
4. A method of generating a video file including coded video content, the method comprising: obtaining a plurality of slices of coded video content; encapsulating the plurality of slices of coded video content within a plurality of access units of a video stream, wherein the video stream includes multiple temporal layers; and encapsulating the plurality of access units within a plurality of samples in a file track, wherein the file track contains a representation of the assignment of ...

Подробнее
01-08-2013 дата публикации

METHOD OF CODING VIDEO AND STORING VIDEO CONTENT

Номер: US20130195173A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include an indicator identifying a number of parameter sets stored within one or more access units of the video stream. 1. A method of generating a video file including coded video content , the method comprising:obtaining a plurality of slices of coded video content;obtaining a plurality of parameter sets associated with the plurality of slices of video content;encapsulating the plurality of slices of coded video content within a plurality of access units of a video stream;encapsulating the plurality of access units within a plurality of samples in a file track; andencapsulating a first plurality of parameter sets within the plurality of samples.2. The method of claim 1 , further comprising encapsulating a second plurality of parameter sets within a sample description of the file track.3. The method of claim 2 , wherein the first plurality of parameter sets consists of parameter sets of a first type and the second plurality of parameter sets consists of parameter sets of a second type.4. The method of claim 3 , wherein the first type is a picture parameter set claim 3 , and the second type is a sequence parameter set.5. The method of claim 3 , wherein the sample description includes an indicator identifying a number of parameter sets of the second type stored within the sample description.6. The method of claim 2 , wherein the sample description includes an indicator identifying a number of parameter sets stored within the sample description.7. The ...

Подробнее
01-08-2013 дата публикации

METHOD OF CODING VIDEO AND STORING VIDEO CONTENT

Номер: US20130195205A1
Автор: Chen Ying, Wang Ye-Kui
Принадлежит: QUALCOMM INCORPORATED

A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include stream properties associated with the video stream.
1. A method of generating a video file including coded video content, the method comprising: obtaining a plurality of slices of coded video content; encapsulating the plurality of slices of coded video content within a plurality of access units of a video stream; obtaining a plurality of stream properties associated with the video stream; and encapsulating stream properties within a sample description of a file track, wherein the stream properties include at least one of a frame rate and a spatial resolution of the video stream.
2. The method of claim 1, wherein the sample description further includes information indicating a geometry region covered by a set of tiles.
3. The method of claim 1, wherein the sample description further includes a bit-depth value for the slices of coded video content.
4. A device comprising a video file creation module configured to: obtain a plurality of slices of coded video content; encapsulate the plurality of slices of coded video content within a plurality of access units of a video stream; obtain a plurality of stream properties associated with the video stream; and encapsulate stream properties within a sample description of a file track, wherein the stream properties include at least one of a frame rate and a spatial resolution of the video stream.
5. The device of claim 4, wherein the sample description further includes information indicating a geometry ...

Подробнее
08-08-2013 дата публикации

REFERENCE PICTURE LIST MODIFICATION FOR VIDEO CODING

Номер: US20130202035A1
Принадлежит: QUALCOMM INCORPORATED

A video coder may, in some cases, signal whether one or more initial reference picture lists are to be modified. When an initial list is to be modified, the video coder can signal information indicating a starting position in the initial reference picture list. When the starting position signaled by the video coder is less than a number of pictures included in the initial reference picture list, then the video coder signals the number of pictures to be inserted into the initial reference picture list, and a reference picture source from which a picture can be retrieved to insert into the initial reference picture list to construct a modified reference picture list. 1. A method for encoding video data , the method comprising coding information indicating a number of pictures to be inserted into an initial reference picture list to construct a modified reference picture list.2. The method of claim 1 , further comprising coding information indicating a starting position in the initial reference picture list claim 1 , wherein the starting position indicates a position at which to begin modification of the initial reference picture list.3. The method of claim 2 , wherein the initial reference picture list is constructed based on a reference picture set claim 2 , wherein the reference picture set identifies reference pictures that can be used for inter-predicting one or more pictures included in the video data claim 2 , and further comprising: coding information indicating a number of pictures to be inserted into the initial reference picture list; and', 'coding information indicating a selected reference picture superset and an index into the selected reference picture superset from which a picture can be retrieved to insert into the initial reference picture list to construct a modified reference picture list,', 'wherein the selected reference picture superset comprises one or more subsets of the reference picture set., 'when the starting position is less than a number ...
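A minimal sketch of the modification step, under the assumption (made purely for illustration) that the signalled pictures are simply spliced into the initial list at the signalled starting position; the count and source of inserted pictures would come from the coded syntax elements described above.

def modify_reference_list(initial_list, start_pos, insert_pictures):
    """Build a modified reference picture list from an initial list.

    If the starting position is not less than the length of the initial
    list, no pictures are inserted, mirroring the condition above."""
    if start_pos >= len(initial_list):
        return list(initial_list)
    return initial_list[:start_pos] + list(insert_pictures) + initial_list[start_pos:]

print(modify_reference_list([10, 8, 6, 4], 1, [12, 2]))   # [10, 12, 2, 8, 6, 4]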

Подробнее
29-08-2013 дата публикации

Bitstream extraction in three-dimensional video

Номер: US20130222537A1
Принадлежит: Qualcomm Inc

To extract a sub-bitstream from a 3-dimensional video (3DV) bitstream, a device determines a texture target view list that indicates views in the 3DV bitstream that have texture view components that are required for decoding pictures in a plurality of target views. The target views are a subset of the views in the bitstream that are to be decodable from the sub-bitstream. In addition, the device determines a depth target view list that indicates views in the 3DV bitstream that have depth view components that are required for decoding pictures in the plurality of target views. The device determines the sub-bitstream based at least in part on the texture target view list and the depth target view list.

Подробнее
29-08-2013 дата публикации

NETWORK ABSTRACTION LAYER (NAL) UNIT HEADER DESIGN FOR THREE-DIMENSIONAL VIDEO CODING

Номер: US20130222538A1
Принадлежит: QUALCOMM INCORPORATED

A video encoder generates a network abstraction layer (NAL) unit that includes at least a first syntax element and a second syntax element. The first syntax element indicates that the NAL unit belongs to a particular NAL unit type. Coded slices of texture view components and depth view components are encapsulated within NAL units that belong to the particular NAL unit type. The second syntax element indicates whether a NAL unit header of the NAL unit includes an Advanced Video Coding (AVC)-compatible 3-dimensional video (3DV) header extension or includes a Multiview Video Coding (MVC)-compatible 3DV header extension. The video encoder outputs a bitstream that includes the NAL unit. A video decoder receives the NAL unit and determines whether the second syntax element indicates that the NAL unit header of the NAL unit includes the AVC-compatible 3DV header extension or the MVC-compatible 3DV header extension. 1. A method for encoding video data , the method comprising: coded slices of texture view components and depth view components are encapsulated within NAL units that belong to the particular NAL unit type,', 'the second syntax element indicates whether a NAL unit header of the NAL unit includes an Advanced Video Coding (AVC)-compatible three-dimensional video (3DV) header extension or a Multi-View Coding (MVC)-compatible header extension,', 'the AVC-compatible 3DV header extension includes syntax elements associated with AVC-compatible 3DV, and', 'the MVC-compatible 3DV header extension has a different syntax structure than the AVC-compatible 3DV header extension and includes syntax elements associated with MVC-compatible 3DV., 'generating a network abstraction layer (NAL) unit that includes at least a first syntax element and a second syntax element, the first syntax element indicating that the NAL unit belongs to a particular NAL unit type, wherein2. The method of claim 1 , wherein the AVC-compatible 3DV header extension has a syntax structure that is the same ...

19-09-2013 publication date

HIGH-LEVEL SYNTAX EXTENSIONS FOR HIGH EFFICIENCY VIDEO CODING

Number: US20130243081A1
Assignee: QUALCOMM INCORPORATED

In one example, a device includes a video coder configured to code a picture order count (POC) value for a first picture of video data, code a second-dimension picture identifier for the first picture, and code, in accordance with a base video coding specification or an extension to the base video coding specification, a second picture based at least in part on the POC value and the second-dimension picture identifier of the first picture. The video coder may comprise a video encoder or a video decoder. The second-dimension picture identifier may comprise, for example, a view identifier, a view order index, a layer identifier, or other such identifier. The video coder may code the POC value and the second-dimension picture identifier during coding of a motion vector for a block of the second picture, e.g., during advanced motion vector prediction or merge mode coding. 1. A method of decoding video data , the method comprising:decoding a picture order count (POC) value for a first picture of video data;decoding a second-dimension picture identifier for the first picture; anddecoding, in accordance with a base video coding specification, a second picture based at least in part on the POC value and the second-dimension picture identifier of the first picture.2. The method of claim 1 , further comprising disabling motion vector prediction between a first motion vector of a first block of the second picture claim 1 , wherein the first motion vector refers to a short-term reference picture claim 1 , and a second motion vector of a second block of the second picture claim 1 , wherein the second motion vector refers to a long-term reference picture.3. The method of claim 1 , wherein coding the second picture comprises:identifying the first picture using the POC value and the second-dimension picture identifier; anddecoding at least a portion of the second picture relative to the first picture.4. The method of claim 3 , wherein identifying the first picture comprises ...

19-09-2013 publication date

MOTION VECTOR CODING AND BI-PREDICTION IN HEVC AND ITS EXTENSIONS

Number: US20130243093A1
Assignee: QUALCOMM INCORPORATED

In one example, a device includes a video coder (e.g., a video encoder or a video decoder) configured to determine that a block of video data is to be coded in accordance with a three-dimensional extension of High Efficiency Video Coding (HEVC), and, based the determination that the block is to be coded in accordance with the three-dimensional extension of HEVC, disable temporal motion vector prediction for coding the block. The video coder may be further configured to, when the block comprises a bi-predicted block (B-block), determine that the B-block refers to a predetermined pair of pictures in a first reference picture list and a second reference picture list, and, based on the determination that the B-block refers to the predetermined pair, equally weight contributions from the pair of pictures when calculating a predictive block for the block. 1. A method of decoding video data , the method comprising:determining a first type for a current motion vector of a current block of video data;determining a second type for a candidate motion vector predictor of a neighboring block to the current block;setting a variable representative of whether the candidate motion vector predictor is available to a value indicating that the candidate motion vector predictor is not available when the first type is different from the second type; anddecoding the current motion vector based at least in part on the value of the variable.2. The method of claim 1 , wherein when the first type comprises a disparity motion vector claim 1 , the second type comprises a disparity motion vector claim 1 , and the candidate motion vector predictor is used to predict the current motion vector claim 1 , decoding the current motion vector comprises decoding the current motion vector without scaling the candidate motion vector predictor.3. The method of claim 1 ,wherein determining the first type for the current motion vector comprises determining the first type based on a first reference picture ...
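A small Python sketch of the type check described above: a candidate predictor whose type (temporal versus disparity) differs from the current motion vector's type is marked unavailable, and a disparity candidate is used without scaling. The function names and the tuple representation of a motion vector are illustrative assumptions, not the normative derivation.

def candidate_available(current_is_disparity, candidate_is_disparity):
    """Return False when the two motion vectors are of different types (sketch)."""
    return current_is_disparity == candidate_is_disparity


def predict_motion_vector(current_is_disparity, candidate_is_disparity, candidate_mv,
                          scale_factor):
    """Illustrative predictor selection: unavailable candidates are skipped,
    and disparity candidates are taken without scaling."""
    if not candidate_available(current_is_disparity, candidate_is_disparity):
        return None  # the availability variable is set to "not available"
    if current_is_disparity:
        return candidate_mv  # no scaling for disparity motion vectors
    return (candidate_mv[0] * scale_factor, candidate_mv[1] * scale_factor)


print(predict_motion_vector(True, False, (4, 0), 1.0))   # None: types differ
print(predict_motion_vector(True, True, (4, 0), 1.0))    # (4, 0): used without scaling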

10-10-2013 publication date

LOW-DELAY VIDEO BUFFERING IN VIDEO CODING

Number: US20130266075A1
Author: Chen Ying, Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

As one example, a method of coding video data includes storing one or more decoding units of video data in a picture buffer. The method further includes obtaining a respective buffer removal time for the one or more decoding units, wherein obtaining the respective buffer removal time comprises receiving a respective signaled value indicative of the respective buffer removal time for at least one of the decoding units. The method further includes removing the decoding units from the picture buffer in accordance with the obtained buffer removal time for each of the decoding units. The method further includes coding video data corresponding to the removed decoding units, wherein coding the video data comprises decoding the at least one of the decoding units. 1. A method of coding video data , the method comprising:storing one or more decoding units of video data in a picture buffer;obtaining a respective buffer removal time for the one or more decoding units, wherein obtaining the respective buffer removal time comprises receiving a respective signaled value indicative of the respective buffer removal time for at least one of the decoding units;removing the decoding units from the picture buffer in accordance with the obtained buffer removal time for each of the decoding units; andcoding video data corresponding to the removed decoding units, wherein coding the video data comprises decoding the at least one of the decoding units.2. The method of claim 1 , wherein each of the one or more decoding units of video data is either an access unit or a subset of an access unit.3. The method of claim 1 , further comprising storing the one or more decoding units of video data in a continuous decoding order in the picture buffer.4. The method of claim 3 , further comprising receiving the one or more decoding units of video data in the continuous decoding order prior to storing the one or more decoding units.5. The method of claim 1 , wherein the picture buffer is a coded picture ...
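The removal-time behavior can be imitated with a toy scheduler, sketched below in Python: decoding units wait in a buffer until their signaled removal times arrive and are then handed to the decoder in removal-time order. The data layout and the use of a heap are assumptions for illustration only, not the hypothetical reference decoder specification.

import heapq

def run_picture_buffer(decoding_units):
    """Remove decoding units from a picture buffer at their removal times (sketch).

    decoding_units -- list of (removal_time, du_id) pairs, in arrival order.
    Returns (removal_time, du_id) pairs in the order they would be removed and decoded.
    """
    buffer = []
    for removal_time, du_id in decoding_units:
        heapq.heappush(buffer, (removal_time, du_id))  # store the DU in the buffer

    decoded_order = []
    while buffer:
        removal_time, du_id = heapq.heappop(buffer)    # earliest removal time first
        decoded_order.append((removal_time, du_id))    # decode the removed DU
    return decoded_order


# Example: sub-picture decoding units with individually signaled removal times.
print(run_picture_buffer([(0.040, "du0"), (0.020, "du1"), (0.030, "du2")]))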

10-10-2013 publication date

LOW-DELAY VIDEO BUFFERING IN VIDEO CODING

Number: US20130266076A1
Author: Chen Ying, Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

As one example, a method of coding video data includes storing one or more decoding units of video data in a coded picture buffer (CPB). The method further includes obtaining a respective buffer removal time for the one or more decoding units. The method further includes removing the decoding units from the CPB in accordance with the obtained buffer removal time for each of the decoding units. The method further includes determining whether the CPB operates at access unit level or sub-picture level. The method further includes coding video data corresponding to the removed decoding units. If the CPB operates at access unit level, coding the video data comprises coding access units comprised in the decoding units. If the CPB operates at sub-picture level, coding the video data comprises coding subsets of access units comprised in the decoding units. 1. A method of coding video data , the method comprising:storing one or more decoding units of video data in a coded picture buffer (CPB);obtaining a respective buffer removal time for the one or more decoding units;removing the decoding units from the CPB in accordance with the obtained buffer removal time for each of the decoding units;determining whether the CPB operates at access unit level or sub-picture level; andcoding video data corresponding to the removed decoding units,wherein, if the CPB operates at access unit level, coding the video data comprises coding access units comprised in the decoding units; andwherein, if the CPB operates at sub-picture level, coding the video data comprises coding subsets of access units comprised in the decoding units.2. The method of claim 1 , wherein determining whether the CPB operates at access unit level or sub-picture level comprises:determining that the CPB operates at access unit level if a sub-picture coded picture buffer preferred flag has a value of zero or if a sub-picture coded picture buffer parameters present flag has a value of zero; anddetermining that the CPB ...
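Claim 2 states the flag logic directly; the Python sketch below reproduces that decision. The flag names follow the wording of the claim, but the function itself is only an illustration.

def cpb_operates_at_sub_picture_level(sub_pic_cpb_preferred_flag,
                                      sub_pic_cpb_params_present_flag):
    """Decide the CPB operation level from the two signaled flags (sketch).

    Per the claim wording, the CPB operates at access unit level if either flag
    is zero; otherwise (both flags equal to one) this sketch treats it as
    operating at sub-picture level.
    """
    return sub_pic_cpb_preferred_flag == 1 and sub_pic_cpb_params_present_flag == 1


for preferred in (0, 1):
    for present in (0, 1):
        level = "sub-picture" if cpb_operates_at_sub_picture_level(preferred, present) \
                else "access unit"
        print(f"preferred={preferred} present={present} -> {level} level")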

17-10-2013 publication date

WAVEFRONT PARALLEL PROCESSING FOR VIDEO CODING

Number: US20130272370A1
Assignee: QUALCOMM INCORPORATED

In one example, a video coder may be configured to determine that a slice of a picture of video data begins in a row of coding tree units (CTUs) in the picture at a position other than a beginning of the row. Based on the determination, the video coder may be further configured to determine that the slice ends within the row of CTUs. The video coder may be further configured to code the slice based on the determination that the slice ends within the row of CTUs. 1. A method of coding video data , the method comprising:determining that a slice of a picture of video data begins in a row of coding tree units (CTUs) in the picture at a position other than a beginning of the row;based on the determination, determining that the slice ends within the row of CTUs; andcoding the slice based on the determination that the slice ends within the row of CTUs.2. The method of claim 1 , further comprising coding all slices of all pictures of the video data such that all of the slices that begin at a position other than a beginning of a corresponding row of CTUs also end within the corresponding row of CTUs.3. The method of claim 1 , wherein coding the slice comprises coding the slice using wavefront parallel processing.4. The method of claim 3 , wherein coding the slice using wavefront parallel processing further comprises determining that wavefront parallel processing is enabled.5. The method of claim 3 , further comprising enabling wavefront parallel processing.6. The method of claim 5 , further comprising coding syntax data indicating that wavefront parallel processing is enabled.7. The method of claim 3 , wherein coding the slice comprises coding at least a portion of a picture that includes the slice using wavefront parallel processing.8. The method of claim 1 , further comprising determining that the slice ends either at an end of the row of CTUs or before the end of the row of CTUs.9. The method of claim 1 , wherein coding the slice comprises coding CTUs of the slice in ...
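The row constraint can be checked mechanically, as in the Python sketch below, which assumes CTUs are addressed in raster-scan order so that a CTU's row is its address divided by the picture width in CTUs. The function and its arguments are illustrative, not part of the claimed syntax.

def slice_satisfies_wpp_row_constraint(first_ctu_addr, last_ctu_addr, ctus_per_row):
    """Check the 'slice that starts mid-row must end in that row' rule (sketch).

    first_ctu_addr -- raster-scan address of the slice's first CTU
    last_ctu_addr  -- raster-scan address of the slice's last CTU
    ctus_per_row   -- picture width in CTUs
    """
    starts_at_row_beginning = (first_ctu_addr % ctus_per_row) == 0
    if starts_at_row_beginning:
        return True  # no restriction on such slices in this sketch
    # Slice begins mid-row: it must end within the same CTU row.
    return (last_ctu_addr // ctus_per_row) == (first_ctu_addr // ctus_per_row)


print(slice_satisfies_wpp_row_constraint(3, 7, 8))    # True: starts and ends in row 0
print(slice_satisfies_wpp_row_constraint(3, 9, 8))    # False: spills into row 1
print(slice_satisfies_wpp_row_constraint(8, 20, 8))   # True: starts at a row beginning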

17-10-2013 publication date

REFERENCE PICTURE SET PREDICTION FOR VIDEO CODING

Number: US20130272403A1
Assignee: QUALCOMM INCORPORATED

In one example, a device for decoding video data includes a video decoder configured to decode one or more syntax elements of a current reference picture set (RPS) prediction data structure, wherein at least one of the syntax elements represents a picture order count (POC) difference between a POC value associated with the current RPS and a POC value associated with a previously decoded RPS, form a current RPS based at least in part on the RPS prediction data structure and the previously decoded RPS, and decode one or more pictures using the current RPS. A video encoder may be configured to perform a substantially similar process during video encoding. 1. A method of decoding video data , the method comprising:decoding one or more syntax elements of a current reference picture set (RPS) prediction data structure, wherein at least one of the syntax elements represents a picture order count (POC) difference between a POC value associated with the current RPS and a POC value associated with a previously decoded RPS;forming a current RPS based at least in part on the RPS prediction data structure and the previously decoded RPS; anddecoding one or more pictures using the current RPS.2. The method of claim 1 ,wherein decoding the one or more syntax elements comprises decoding at least one syntax element representative of whether a previously decoded picture associated with the POC difference is to be included in the current RPS, andwherein forming the current RPS comprises adding data representative of the previously decoded picture to the current RPS when the at least one of the syntax elements indicates that the previously decoded picture is to be included in the reference picture set.3. The method of claim 1 ,wherein decoding the one or more syntax elements comprises decoding at least a first syntax element specifying a number of reference pictures, starting from the beginning of the previously decoded RPS, to be omitted from the current RPS and a second syntax element ...
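A rough Python sketch of the inter-RPS prediction idea: the current RPS is formed by shifting each POC value of the previously decoded RPS by the signaled POC difference and keeping only the entries flagged for inclusion. The argument names and the flag representation are assumptions for illustration.

def predict_rps(previous_rps_pocs, poc_delta, used_flags):
    """Form a current RPS from a previously decoded RPS (illustrative sketch).

    previous_rps_pocs -- POC values of the previously decoded RPS
    poc_delta         -- signaled POC difference between the two RPSs
    used_flags        -- one flag per previous entry: include it in the current RPS?
    """
    current_rps = []
    for poc, used in zip(previous_rps_pocs, used_flags):
        if used:
            current_rps.append(poc + poc_delta)   # shift by the signaled POC difference
    return current_rps


# Example: previous RPS {16, 14, 12}, POC delta of +4, middle entry omitted.
print(predict_rps([16, 14, 12], 4, [True, False, True]))   # -> [20, 16]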

24-10-2013 publication date

VIDEO CODING WITH ENHANCED SUPPORT FOR STREAM ADAPTATION AND SPLICING

Number: US20130279564A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

Various techniques for enhanced support of stream adaptation and splicing based on clean random access (CRA) pictures are described. Instead of using a flag in the slice header to indicate that a broken link picture is present, a distinct network abstraction layer (NAL) unit type can be used to indicate the presence of a broken link picture. In some implementations, a first distinct NAL unit type may be used to indicate the presence of a broken link picture with leading pictures, while a second distinct NAL unit type indicates the presence of a broken link picture without leading pictures. In some implementations, a third distinct NAL unit type may be used to indicate the presence of a broken link picture with decodable leading pictures. 1. A method for processing video data , the method comprising:receiving a first network abstraction layer (NAL) unit comprising a portion of the video data; andbased on a NAL unit type of the first NAL unit, detecting a broken link picture.2. The method of claim 1 , wherein the NAL unit type is a first NAL unit type claim 1 , and detecting the broken link picture comprises detecting a broken link picture with leading pictures.3. The method of claim 2 , further comprising:discarding the leading the pictures; andtransmitting the first NAL unit to a video processing device.4. The method of claim 1 , wherein the NAL unit type is a first NAL unit type claim 1 , and detecting the broken link picture comprises detecting a broken link picture without leading pictures.5. The method of claim 4 , further comprising:transmitting the first NAL unit to a video processing device.6. The method of further comprising:receiving a second NAL unit;determining a NAL unit type for the second NAL unit; andbased on the NAL unit type for the second NAL unit, detecting a broken link picture with leading pictures, wherein the NAL unit type for the second NAL unit is different than the first NAL unit type.7. The method of claim 1 , wherein the NAL unit type is ...
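A minimal dispatch on the NAL unit type illustrates the signaling idea; the numeric type values and the handling strings below are invented for this sketch and are not the values defined by any standard.

# Hypothetical NAL unit type values, chosen only for this sketch.
BLA_WITH_LEADING_PICTURES = 16
BLA_WITHOUT_LEADING_PICTURES = 17
BLA_WITH_DECODABLE_LEADING_PICTURES = 18

def handle_nal_unit(nal_unit_type):
    """Detect a broken link picture from the NAL unit type alone (sketch)."""
    if nal_unit_type == BLA_WITH_LEADING_PICTURES:
        return "broken link picture: discard its leading pictures, then forward"
    if nal_unit_type == BLA_WITHOUT_LEADING_PICTURES:
        return "broken link picture without leading pictures: forward as-is"
    if nal_unit_type == BLA_WITH_DECODABLE_LEADING_PICTURES:
        return "broken link picture: its leading pictures are decodable"
    return "not a broken link picture"


for t in (16, 17, 18, 1):
    print(t, "->", handle_nal_unit(t))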

24-10-2013 publication date

MARKING REFERENCE PICTURES IN VIDEO SEQUENCES HAVING BROKEN LINK PICTURES

Number: US20130279575A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

Systems, methods, and devices for processing video data are disclosed. Some examples determine that a current picture is a broken-link access (BLA) picture. These examples may also mark a reference picture in a picture storage buffer as unused for reference. In some examples, this may be done prior to decoding the BLA picture. 1. A method of decoding video data , the method comprising:determining that a current picture is a broken-link access (BLA) picture; andmarking a reference picture in a picture storage buffer as unused for reference prior to decoding the BLA picture.2. The method of claim 1 , wherein determining that a current picture is a BLA picture further comprises determining that the current picture is a clean random access (CRA) picture and determining that the current picture is a random access picture (RAP).3. The method of claim 1 , wherein determining that the current picture is a BLA picture is based on network abstraction layer (NAL) unit type of the current picture.4. The method of claim 1 , further comprising marking the reference picture in the picture storage buffer when the leading picture comprises a non-decodable leading picture.5. The method of claim 1 , wherein the picture storage buffer includes a decoded picture buffer (DPB).6. The method of claim 1 , wherein decoding the BLA picture comprises decoding the BLA picture in a decoder.7. The method of claim 1 , wherein decoding the BLA picture comprises decoding the BLA picture in a network element.8. The method of claim 7 , wherein the network element is a Media Aware Network Element (MANE).9. The method of claim 7 , wherein the network element is a streaming server.10. The method of claim 7 , wherein the network element is a splicer.11. The method of claim 1 , further comprising decoding the BLA picture without using the prior pictures marked as unused for reference.12. The method of claim 1 , further comprising receiving a slice of a current picture to be decoded for a sequence of video ...

24-10-2013 publication date

View dependency in multi-view coding and 3d coding

Number: US20130279576A1
Author: Ye-Kui Wang, YING Chen
Assignee: Qualcomm Inc

This disclosure describes techniques for coding layer dependencies for a block of video data. According to these techniques, a video encoder generates layer dependencies associated with a given layer. The video encoder also generates a type of prediction associated with one or more of the layer dependencies. In some examples, the video encoder generates a first syntax element to signal layer dependencies and a second syntax element to signal a type of prediction associated with one or more of the layer dependencies. A video decoder may obtain the layer dependencies associated with a given layer and the type of prediction associated with one or more of the layer dependencies.

24-10-2013 publication date

Decoded picture buffer processing for random access point pictures in video sequences

Number: US20130279599A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

Systems, methods, and devices for processing video data are disclosed. Some examples receive a slice of a current picture to be decoded for a sequence of video data. These examples may also receive, in a slice header of the slice, at least one entropy coded syntax element and at least one non-entropy coded syntax element, wherein the non-entropy coded syntax element is before the entropy coded syntax element in the slice header and indicates whether pictures prior to the current picture in decoding order are to be emptied from a decode picture buffer without being output. They may decode the slice based on the non-entropy coded syntax element.

31-10-2013 publication date

PARAMETER SET CODING

Number: US20130287115A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

Systems, methods, and devices for processing video data are disclosed. Some examples relate to receiving or forming a parameter set having an identifier that is fixed length coded, wherein a parameter set identification (ID) for the parameter set is before any syntax element in the parameter set that is entropy coded and using the parameter set having the identifier that is fixed length coded to decode or encode video data. Other examples determine whether a first parameter set ID of a first parameter set of a first bitstream is the same as a second parameter set ID of a second parameter set of a second bitstream. In response to determining that the second parameter set ID is the same as the first parameter set ID, changing the second parameter set ID to a unique parameter set ID. A parameter set associated with the unique parameter set ID may be transmitted. 1. A method of decoding video data , the method comprising:receiving a parameter set having a parameter set identifier (ID) that is fixed length coded, wherein the parameter set ID for the parameter set is before any syntax element in the parameter set that is entropy coded; andusing the parameter set having the identifier that is fixed length coded to decode video data.2. The method of claim 1 , further comprising determining the number of bits for the fixed length coding based on a signaling received.3. The method of claim 1 , further comprising receiving a spliced bitstream comprising a first bitstream and a second bitstream spliced together claim 1 , wherein the parameter set ID comprises a parameter set ID of a first parameter set of the first bitstream and a parameter set ID of a second parameter set of a second bitstream and wherein the first parameter set ID is unique from the second parameter set ID.4. The method of claim 3 , further comprising decoding one or more of the parameter set ID of the first parameter set of the first bitstream and the parameter set ID of the second parameter set of the ...
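The splicing case described above amounts to remapping colliding parameter set IDs. The Python sketch below assigns the smallest unused ID whenever a parameter set ID from the second bitstream collides with one already used by the first; the data layout and the cap on the ID range are illustrative assumptions.

def remap_parameter_set_ids(first_ps_ids, second_ps_ids, max_id=255):
    """Return a mapping old_id -> new_id for the second bitstream's parameter sets (sketch).

    first_ps_ids  -- parameter set IDs already used by the first bitstream
    second_ps_ids -- parameter set IDs used by the second bitstream
    """
    used = set(first_ps_ids)
    mapping = {}
    for old_id in second_ps_ids:
        if old_id not in used:
            mapping[old_id] = old_id          # no collision: keep the ID
        else:
            new_id = next(i for i in range(max_id + 1) if i not in used)
            mapping[old_id] = new_id          # collision: pick a unique unused ID
        used.add(mapping[old_id])
    return mapping


# Example: both bitstreams use parameter set IDs 0 and 1; the spliced-in ones get 2 and 3.
print(remap_parameter_set_ids({0, 1}, [0, 1]))   # -> {0: 2, 1: 3}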

31-10-2013 publication date

Identifying parameter sets in video files

Number: US20130287366A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

An apparatus is configured to store coded video data including a number of sequences of coded video pictures in an electronic file. The apparatus includes at least one processor configured to determine whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The at least one processor is also configured to provide, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination.

07-11-2013 publication date

PARAMETER SET UPDATES IN VIDEO CODING

Number: US20130294499A1
Author: Wang Ye-Kui
Assignee:

Techniques of this disclosure provide an indication of whether a parameter set update can occur in a portion of a bitstream. The indication may enable a video decoder to determine whether an update of a stored parameter set can occur without performing a content comparison between the stored parameter set and a new parameter set of the same type with the same identification value. A parameter set update includes storing a current parameter set with a given identification value to replace a previous parameter set of the same type and having the same identification value. When a parameter set update cannot occur, the video decoder may store and activate a single parameter set of a given type for the entire portion of the bitstream. When a parameter set update can occur, the video decoder may automatically update a stored parameter set, or may determine whether to update the stored parameter. 1. A method of decoding video data comprising:decoding an indicator that indicates whether a parameter set update can occur in a portion of a bitstream, wherein a parameter set update occurs if a current parameter set of a particular type and having a particular identification value has content that is different than content of a previous parameter set of the same type and having the same identification value.2. The method of claim 1 , further comprising claim 1 , based on the indicator indicating that the parameter set update cannot occur in the portion of the bitstream:activating a first parameter set of a particular type with a particular identification value for the entire portion of the bitstream; andignoring other parameter sets of the same type and having the same identification value as the first parameter set in the entire portion of the bitstream to not update the first parameter set for the entire portion of the bitstream, the first parameter set being previous to the other parameter sets.3. The method of claim 1 , further comprising claim 1 , based on the indicator ...

07-11-2013 publication date

FULL RANDOM ACCESS FROM CLEAN RANDOM ACCESS PICTURES IN VIDEO CODING

Number: US20130294500A1
Author: Wang Ye-Kui
Assignee:

Techniques of this disclosure provide an indication of whether performing random access from a particular access unit in a bitstream requires fetching of parameter sets from previous access units. A clean random access (CRA) picture can be positioned at any point within a coded video sequence and does not clean a decoded picture buffer (DPB) of a video decoder. In order to perform random access decoding from the CRA picture, a video decoder may need to fetch one or more parameter sets included in unavailable access units that precede the CRA picture. The techniques provide an indication, for each CRA picture, that indicates whether parameter sets included in previous access units are needed to perform random access from the picture. When no parameter sets from previous access units are needed for random access from a particular CRA picture, a video decoder may determine to perform random access from that picture. 1. A method of decoding video data comprising:decoding an indicator that indicates whether random access to the bitstream from a particular clean random access (CRA) access unit requires one or more parameter sets from previous access units to decode the particular CRA access unit or subsequent access units, wherein the particular CRA access unit is positioned at any point within a coded video sequence of the bitstream and does not clean a decoded picture buffer (DPB); andbased on the indicator indicating that no parameter sets from previous access units are needed, performing random access to the bitstream from the particular CRA access unit without fetching parameter sets from the previous access units.2. The method of claim 1 , further comprising claim 1 , based on the indicator indicating that parameter sets from previous access units are needed claim 1 , determining whether to perform random access to the bitstream from the particular CRA access unit.3. The method of claim 2 , further comprising claim 2 , based on random access to the bitstream being ...

07-11-2013 publication date

CLASSIFIED MEDIA QUALITY OF EXPERIENCE

Number: US20130297786A1
Author: Wang Ye-Kui
Assignee:

A method for reporting a streaming quality is shown, wherein at least one continuous media stream is streamed to a client (), and wherein the streaming is controlled by a protocol () that is operated between the client () and a server (), the method including selecting at least one quality metric and a quality metrics class from a pre-defined set of at least two quality metrics classes, and reporting to the server () the quality of the streaming based on the at least one selected quality metric and the selected quality metrics class. The protocol () is preferably a Real-time Streaming Protocol (RTSP) in combination with a Session Description Protocol (SDP) in the context of the 3GPP Packet-Switched Streaming Service (PSS). Also shown is a computer program, a computer program product, a system, a client, a server and a protocol. 1. A method for reporting a streaming quality , wherein at least one continuous media stream is streamed to a client , and wherein said streaming is controlled by a protocol that is operated between said client and a server , comprising:selecting at least one quality metric and a quality metrics class from a pre-defined set of at least two quality metrics classes, andreporting to said server the quality of said streaming based on said at least one selected quality metric and said selected quality metrics class.2. The method according to claim 1 , wherein said selecting said quality metrics class comprises negotiating said quality metrics class between said client and said server.3. The method according to claim 1 , wherein said protocol defines a quality metrics class field within at least one protocol data unit claim 1 , wherein said quality metrics class field is capable of identifying each quality metrics class of said pre-defined set of at least two quality metrics classes.4. The method according to claim 3 , wherein said quality metrics class field is located in a header section of said at least one protocol data unit.5. The method ...

05-12-2013 publication date

EXTERNAL PICTURES IN VIDEO CODING

Number: US20130322531A1
Assignee:

A video encoder generates a syntax element that indicates whether a video unit of a current picture is predicted from an external picture. The external picture is in a different layer than the current picture. Furthermore, the video encoder outputs a video data bitstream that includes a representation of the syntax element. The video data bitstream may or may not include a coded representation of the external picture. A video decoder obtains the syntax element from the video data bitstream. The video decoder uses the syntax element in a process to reconstruct video data of a portion of the video unit. 1. A method of decoding video data , the method comprising:obtaining, from a video data bitstream, a syntax element that indicates whether a video unit of a current picture is predicted from an external picture that is in a different layer than the current picture; andusing the syntax element in a process to reconstruct video data of a portion of the video unit.2. The method of claim 1 , wherein the video data bitstream does not include a coded representation of the external picture.3. The method of claim 1 , wherein the video unit is a coding unit or a prediction unit.4. The method of claim 1 , wherein:the syntax element is a first syntax element, andthe method further comprises obtaining, from the video data bitstream, a slice header syntax structure for a slice, the slice header syntax structure including a second syntax element, the second syntax element indicating whether any coding units (CUs) of the slice are predicted from any external picture.5. The method of claim 4 , further comprising obtaining claim 4 , from the slice header syntax structure claim 4 , a third syntax element claim 4 , the third syntax element indicating the number of external pictures used to predict the CUs of the slice.6. The method of claim 4 , further comprising:obtaining, from a parameter set, a third syntax element, the third syntax element indicating whether any CU of any slices ...

12-12-2013 publication date

SIGNALING DATA FOR LONG TERM REFERENCE PICTURES FOR VIDEO CODING

Number: US20130329787A1
Assignee: QUALCOMM INCORPORATED

A video coder codes a slice header for a slice of video data. The slice header includes a syntax element comprising identifying information for a long term reference picture, wherein the identifying information is explicitly signaled in the slice header or derived from a sequence parameter set corresponding to the slice. When the syntax element indicates that the identifying information for the long term reference picture is explicitly signaled, to code the slice header, the video coder is further configured to code a value for the identifying information for the long term reference picture in the slice header. 1. A method of decoding video data , the method comprising:decoding a slice header for a slice of video data, wherein the slice header includes a syntax element comprising identifying information for a long term reference picture, wherein the identifying information is explicitly signaled in the slice header or derived from a sequence parameter set corresponding to the slice; andwhen the syntax element indicates that the identifying information for the long term reference picture is explicitly signaled, wherein decoding the slice header further comprises decoding a value for the identifying information for the long term reference picture in the slice header.2. The method of claim 1 , wherein the long term reference picture comprises a first picture claim 1 , the method further comprising:storing a first decoded picture corresponding to the first picture in a decoded picture buffer;when the decoded picture buffer contains more than one reference picture that is marked as “used for reference” and that has the same value of least significant bits (LSBs) of picture order count (POC) as the first picture, wherein decoding the identifying information further comprises: decoding a first syntax element equal to one in the slice header of a slice of a second picture;when the decoded picture buffer does not contain more than one picture that is marked as “used for ...

26-12-2013 publication date

DEVICE AND METHOD FOR MULTIMEDIA COMMUNICATIONS WITH PICTURE ORIENTATION INFORMATION

Number: US20130342762A1
Assignee:

Systems, devices, and methods for capturing and displaying picture data including picture orientation information are described. In one innovative aspect, a method for transmitting media information is provided. The method includes obtaining picture or video information, said picture or video information including image data and orientation information of a media capture unit when the picture or video information is obtained. The method further includes encoding said picture or video information, wherein the orientation information is included in a first portion and the image data is included in a second portion, the second portion being encoded and the first portion being distinct from the second portion. The method also includes transmitting the first portion and the second portion. 1. A method for displaying media information , the method comprising:obtaining picture or video information, said picture or video information including at least one output picture and rotation information for the at least one output picture, the rotation information included in a first portion of the picture or video information and the at least one output picture included in a second portion of the picture or video information, the second portion being encoded and the first portion being distinct from the second portion;decoding at least one output picture included in the second portion of the picture or video information;identifying rotation data and a period for the rotation data based on the rotation information included in the first portion of the picture of video information; androtating the decoded at least one output picture in accordance with the identified rotation data and the identified period.2. The method of claim 1 , wherein the identified period includes at least one of a rotation start point and a rotation end point.3. The method of claim 1 , wherein the period identifies a packet sequence number of a packet including a first picture to be rotated.4. The method of ...

26-12-2013 publication date

HEADER PARAMETER SETS FOR VIDEO CODING

Number: US20130343465A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

An example method of decoding video data includes determining a header parameter set that includes one or more syntax elements specified individually by each of one or more slice headers, the header parameter set being associated with a header parameter set identifier (HPS ID), and determining one or more slice headers that reference the header parameter set to inherit at least one of the syntax elements included in the header parameter set, where the slice headers are each associated with a slice of the encoded video data, and where the slice headers each reference the header parameter set using the HPS ID. 1. A method of decoding encoded video data , the method comprising:determining a header parameter set that includes one or more syntax elements specified individually by each of one or more slice headers, the header parameter set being associated with a header parameter set identifier (HPS ID); anddetermining one or more slice headers that reference the header parameter set to inherit at least one of the syntax elements included in the header parameter set,wherein the slice headers are each associated with a slice of the encoded video data, andwherein the slice headers each reference the header parameter set using the HPS ID.2. The method of claim 1 ,wherein determining the header parameter set comprises determining the header parameter set for an access unit that includes one or more slice headers, andwherein the header parameter set for the access unit includes the one or more syntax elements for any slices associated with the access unit but not for any slices associated with a different access unit.3. The method of claim 1 ,wherein determining the header parameter set comprises determining the header parameter set for an access unit different than an access unit that includes the header parameter set and the one or more slice headers, andwherein the header parameter set determined for the access unit includes the one or more syntax elements for any slices ...

02-01-2014 publication date

VIDEO PARAMETER SET FOR HEVC AND EXTENSIONS

Number: US20140003491A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

A video processing device can be configured to process one or more initial syntax elements for a parameter set associated with a video bitstream; receive in the parameter set an offset syntax element for the parameter set that identifies syntax elements to be skipped within the parameter set; and based on the offset syntax element, skip the syntax elements within the parameter set and process one or more additional syntax elements in the parameter set that are after the skipped syntax elements in the parameter set. 1. A method of processing video data , the method comprising:processing one or more initial syntax elements for a parameter set associated with a video bitstream;receiving in the parameter set an offset syntax element for the parameter set, wherein the offset syntax element identifies syntax elements to be skipped within the parameter set;based on the offset syntax element, skipping the syntax elements within the parameter set;processing one or more additional syntax elements in the parameter set, wherein the one or more additional syntax elements are after the skipped syntax elements in the parameter set.2. The method of claim 1 , wherein the syntax elements to be skipped comprise one or more syntax elements coded using variable length coding.3. The method of claim 1 , wherein the offset syntax element identifies the syntax elements to be skipped by identifying a number of bytes in the parameter set that are to be skipped.4. The method of claim 1 , wherein the one or more initial syntax elements comprise fixed-length syntax elements and wherein the one or more initial syntax elements precede the offset syntax element.5. The method of claim 4 , wherein the one or more additional syntax elements comprise additional fixed-length syntax elements and wherein the one or more additional syntax elements follow the offset syntax element and follow the skipped syntax elements.6. The method of claim 1 , wherein the one or more initial syntax elements comprise ...
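The offset mechanism can be mimicked with plain byte arithmetic, as in the Python sketch below: read the fixed-length fields at the front, read the offset, jump over the identified span, and continue with the fields that follow. The toy field layout is invented purely to demonstrate the skip and is not the actual VPS syntax.

def parse_parameter_set_with_offset(data):
    """Skip a variable-length region of a parameter set using a byte offset (sketch).

    Assumed toy layout: 1 byte of initial fixed-length fields, 1 byte holding the
    number of bytes to skip, the skippable region, then 1 trailing fixed-length byte.
    """
    pos = 0
    initial_fields = data[pos]; pos += 1          # fixed-length fields before the offset
    skip_bytes = data[pos]; pos += 1              # the offset syntax element
    pos += skip_bytes                             # skip the identified syntax elements entirely
    trailing_fields = data[pos]; pos += 1         # fixed-length fields after the skipped span
    return initial_fields, skip_bytes, trailing_fields


# Example: skip 3 bytes of (here meaningless) variable-length payload.
vps = bytes([0xA5, 3, 0x11, 0x22, 0x33, 0x7F])
print(parse_parameter_set_with_offset(vps))       # -> (165, 3, 127)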

02-01-2014 publication date

VIDEO PARAMETER SET FOR HEVC AND EXTENSIONS

Number: US20140003492A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

A device for processing video data can be configured to receive in a video parameter set, one or more syntax elements that include information related to session negotiation; receive in the video data a first sequence parameter set comprising a first syntax element identifying the video parameter set; receive in the video data a second sequence parameter set comprising a second syntax element identifying the video parameter set; process, based on the one or more syntax elements, a first set of video blocks associated with the first parameter set and a second set of video blocks associated with the second parameter set. 1. A method of processing video data , the method comprising:receiving in a video parameter set, one or more syntax elements that include information related to session negotiation;receiving in the video data a first sequence parameter set comprising a first syntax element identifying the video parameter set;receiving in the video data a second sequence parameter set comprising a second syntax element identifying the video parameter set;processing, based on the one or more syntax elements, a first set of video blocks associated with the first parameter set and a second set of video blocks associated with the second parameter set.2. The method of claim 1 , wherein the first sequence parameter set comprises a first syntax structure comprising a first group of syntax elements that apply to one or more whole pictures of the video data claim 1 , and wherein the second sequence parameter set comprises a second syntax structure comprising a second group of syntax elements that apply to one or more different whole pictures of the video data.3. The method of claim 1 , wherein the one or more syntax elements comprise fixed length syntax elements.4. The method of claim 1 , wherein the one or more syntax elements precede claim 1 , in the video parameter set claim 1 , any variable length coded syntax elements.5. The method of claim 1 , wherein the one or more ...

02-01-2014 publication date

VIDEO PARAMETER SET FOR HEVC AND EXTENSIONS

Number: US20140003493A1
Author: Chen Ying, Wang Ye-Kui
Assignee:

A video coder can be configured to receive in a video parameter set, one or more syntax elements that include information related to hypothetical reference decoder (HRD) parameters; receive in the video data a first sequence parameter set comprising a first syntax element identifying the video parameter set; receive in the video data a second sequence parameter set comprising a second syntax element identifying the video parameter set; and, code, based on the one or more syntax elements, a first set of video blocks associated with the first parameter set and second set of video blocks associated with the second parameter set. 1. A method of decoding video data , the method comprising:receiving in a video parameter set, one or more syntax elements that include information related to hypothetical reference decoder (HRD) parameters;receiving in the video data a first sequence parameter set comprising a first syntax element identifying the video parameter set;receiving in the video data a second sequence parameter set comprising a second syntax element identifying the video parameter set;coding, based on the one or more syntax elements, a first set of video blocks associated with the first parameter set and second set of video blocks associated with the second parameter set.2. The method of claim 1 , wherein the first sequence parameter set comprises a first syntax structure comprising a first group of syntax elements that apply to one or more whole pictures of the video data claim 1 , and wherein the second sequence parameter set comprises a second syntax structure comprising a second group of syntax elements that apply to one or more different whole pictures of the video data.3. The method of claim 1 , wherein the one or more syntax elements that include information related to HRD parameters comprise a syntax element indicating the HRD parameters for the video data are default HRD parameters.4. The method of claim 1 , wherein the one or more syntax elements that ...

02-01-2014 publication date

SIGNALING OF LONG-TERM REFERENCE PICTURES FOR VIDEO CODING

Number: US20140003506A1
Assignee:

In one example, a device for decoding video data includes a video decoder configured to decode a value representative of a difference between most significant bits (MSBs) of a reference picture order count (POC) value and MSBs of a long-term reference picture (LTRP) POC value, wherein the reference POC value corresponds to a picture for which data must have been received in order to properly decode a current picture, determine the MSBs of the LTRP POC value based on the decoded value and the reference POC value, and decode at least a portion of the current picture relative to the LTRP based at least in part on the LTRP POC value. The picture for which data must have been received in order to properly decode a current picture may correspond to the current picture itself or a most recent random access point (RAP) picture. 1. A method of decoding video data , the method comprising:decoding a value representative of a difference between most significant bits (MSBs) of a reference picture order count (POC) value and MSBs of a long-term reference picture (LTRP) POC value, wherein the reference POC value corresponds to a POC value of a current picture or a POC value of a picture, preceding the current picture in decoding order, for which data must have been received in order to properly decode the current picture;determining the MSBs of the LTRP POC value based on the decoded value and the reference POC value; anddecoding at least a portion of the current picture relative to the LTRP based at least in part on the LTRP POC value.2. The method of claim 1 , further comprising:decoding a syntax element indicative of whether the MSBs of the LTRP POC value are predicted from a POC value for a random access point (RAP) picture or a POC value for the current picture; andreproducing the MSBs of the POC value for the LTRP based on the decoded value and the syntax element.3. The method of claim 2 , wherein reproducing the POC value for the LTRP comprises:when the syntax element ...

02-01-2014 publication date

TILES AND WAVEFRONT PARALLEL PROCESSING

Number: US20140003531A1
Assignee: QUALCOMM INCORPORATED

This disclosure describes techniques that may enable a video coder to simultaneously implement multiple parallel processing mechanisms, including two or more of wavefront parallel processing (WPP), tiles, and entropy slices. This disclosure describes signaling techniques that are compatible both with coding standards that only allow one parallel processing mechanism to be implemented at a time, but that are also compatible with potential future coding standards that may allow for more than one parallel processing mechanism to be implemented simultaneously. This disclosure also describes restrictions that may enable WPP and tiles to be implemented simultaneously. 1. A method of decoding video data , the method comprising:receiving a parameter set comprising one or more first bits and one or more second bits, wherein the one or more first bits indicate whether tiles are enabled for a series of video blocks, wherein the one or more second bits are different from the one or more first bits, and wherein the one or more second bits indicate whether wavefront parallel processing (WPP) is enabled for the series of video blocks; and,decoding the series of video blocks based on the parameter set.2. The method of claim 1 , the method further comprising:decoding the series of video blocks using both tiles and WPP.3. The method of claim 1 , wherein wavefronts are present entirely within tiles.4. The method of claim 3 , wherein wavefronts do not span across multiple tiles.5. The method of claim 1 , wherein the parameter set is a picture parameter set.6. The method of claim 1 , wherein the series of video blocks comprise a plurality of tiles claim 1 , wherein each tile starts a new slice claim 1 , and wherein each new slice has a corresponding slice header.7. The method of claim 1 , further comprising:receiving WPP entry points signaled in a slice header.8. The method of claim 1 , further comprising:receiving, for a second series of video blocks, a parameter set indicating that ...
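A trivial Python sketch of the signaling described above: tiles and WPP are indicated by separate bits, and both may be enabled at once, in which case (per the restriction mentioned in the claims) wavefronts stay within tiles. The bit names are illustrative.

def parallel_tools_enabled(tiles_enabled_bit, wpp_enabled_bit):
    """Interpret the two separate parallelism indications (sketch)."""
    return {
        "tiles": bool(tiles_enabled_bit),
        "wavefront_parallel_processing": bool(wpp_enabled_bit),
        # Both tools may be on at once; in that case wavefronts stay within tiles.
        "tiles_and_wpp_together": bool(tiles_enabled_bit) and bool(wpp_enabled_bit),
    }


print(parallel_tools_enabled(1, 1))
print(parallel_tools_enabled(0, 1))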

02-01-2014 publication date

Streaming adaption based on clean random access (cra) pictures

Number: US20140003536A1
Author: Ye-Kui Wang, YING Chen
Assignee: Qualcomm Inc

Systems, methods, and devices for processing video data are disclosed. Some example systems, methods, and devices receive an external indication at a video decoder. The example systems, methods, and devices treat a clean random access (CRA) picture as a broken link access (BLA) picture based on the external indication.

02-01-2014 publication date

RANDOM ACCESS AND SIGNALING OF LONG-TERM REFERENCE PICTURES IN VIDEO CODING

Number: US20140003537A1
Assignee: QUALCOMM INCORPORATED

A video coder can be configured to code a random access point (RAP) picture and code one or more decodable leading pictures (DLPs) for the RAP picture such that all pictures that are targeted for discard precede the DLPs associated with the RAP picture in display order. 1. A method of decoding video data , the method comprising:decoding a random access point (RAP) picture; anddecoding one or more decodable leading pictures (DLPs) for the RAP picture such that all pictures that are targeted for discard precede the DLPs associated with the RAP picture in display order.2. The method of claim 1 , wherein the DLPs comprise one or more pictures having display order values that indicate a display order earlier than a display order value of the RAP picture and having decoding order values that indicate a decoding order later than a decoding order value of the RAP picture claim 1 , and wherein the one or more pictures do not refer to video data earlier than the RAP picture in decoding order.3. The method of claim 1 , further comprising decoding one or more leading pictures relative to the RAP picture such that all of the leading pictures for the RAP picture precede all trailing pictures for the RAP picture in decoding order claim 1 , wherein the trailing pictures comprise pictures having display order values that are greater than a display order value of the RAP picture.4. The method of claim 1 , wherein the RAP picture comprises one of a clean random access (CRA) picture and a broken link access (BLA) picture.5. The method of claim 4 , wherein any picture preceding a CRA or BLA picture in decoding order precedes any DLP picture associated with the CRA picture or the BLA picture in display order.6. The method of claim 1 , the method further comprising:decoding one or more leading pictures associated with the RAP picture, wherein the leading pictures precede the RAP picture in display order value and succeed the RAP picture in decoding order; anddecoding one or more trailing ...

02-01-2014 publication date

SIGNALING LONG-TERM REFERENCE PICTURES FOR VIDEO CODING

Number: US20140003538A1
Assignee:

A video decoder may be configured to decode a first value representative of a difference between a base most significant bits (MSBs) value of a picture order count (POC) value of a current picture of video data and a first MSBs value of a first POC value of a first long-term reference picture of the video data, decode a second value representative of a difference between a second MSBs value of a second POC value of a second long-term reference picture of the video data and the first MSBs value, wherein the first POC value and the second POC value have different least significant bits values, and decode at least a portion of a current picture of the video data relative to at least one of the first long-term reference picture and the second long-term reference picture. 1. A method of decoding video data , the method comprising:decoding a first value representative of a difference between a base most significant bits (MSBs) value of a picture order count (POC) value of a current picture of video data and a first MSBs value of a first POC value of a first long-term reference picture of the video data;decoding a second value representative of a difference between a second MSBs value of a second POC value of a second long-term reference picture of the video data and the first MSBs value, wherein the first POC value and the second POC value have different least significant bits (LSBs) values; anddecoding at least a portion of a current picture of the video data relative to at least one of the first long-term reference picture using the first value and the second long-term reference picture using the first value and the second value.2. The method of claim 1 , further comprising:calculating a first MSB cycle value for the first long-term reference picture as DeltaPocMSBCycleLt[i−1] using the first value; andcalculating a second MSB cycle value for the second long-term reference picture as DeltaPocMSBCycleLt[i], wherein calculating the second MSB cycle value comprises ...
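Under the assumption (made here for illustration, not taken from the claims) that each signaled value accumulates onto the previous entry's MSB cycle and that a POC is recovered from an MSB cycle count, the LSB period, and the signaled LSBs, the arithmetic looks roughly like the Python sketch below.

def delta_poc_msb_cycles(signaled_diffs):
    """Accumulate signaled MSB-cycle differences into DeltaPocMSBCycleLt values (sketch).

    signaled_diffs[0] is taken relative to the current picture's MSBs; each later
    value is assumed to be a difference from the previous entry's cycle value.
    """
    cycles = []
    for i, diff in enumerate(signaled_diffs):
        if i == 0:
            cycles.append(diff)
        else:
            cycles.append(diff + cycles[i - 1])   # accumulate onto the previous entry (assumed)
    return cycles


def ltrp_poc(current_poc_msb_cycles, cycle, poc_lsb, max_poc_lsb=256):
    """Recover a long-term reference picture's POC from its MSB cycle and LSBs (sketch)."""
    return (current_poc_msb_cycles - cycle) * max_poc_lsb + poc_lsb


cycles = delta_poc_msb_cycles([1, 2])
print(cycles)                               # -> [1, 3]
print([ltrp_poc(5, c, 10) for c in cycles]) # POCs of the two long-term reference pictures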

09-01-2014 publication date

SUPPLEMENTAL ENHANCEMENT INFORMATION (SEI) MESSAGES HAVING A FIXED-LENGTH CODED VIDEO PARAMETER SET (VPS) ID

Number: US20140010277A1
Author: Chen Ying, Wang Ye-Kui
Assignee: QUALCOMM, Incorporated

Systems, methods, and devices are disclosed that code a supplemental enhancement information (SEI) message. In some examples, the SEI message may contain an identifier of an active video parameter set (VPS). In some examples, the identifier may be fixed-length coded. 1. A method of coding video data , the method comprising:coding a supplemental enhancement information (SEI) message that contains an identifier of an active video parameter set (VPS), wherein the identifier of the active VPS is fixed-length coded.2. The method of claim 1 , wherein the SEI message contains only the identifier of the active VPS.3. The method of claim 1 , wherein a payload of the SEI message consists of the identifier of the active VPS.4. The method of claim 1 , wherein a payload of the SEI message consists essentially of the identifier of the active VPS.5. The method of claim 1 , wherein the identifier of the active VPS is coded in an early position before any entropy-coded syntax element in the SEI message.6. The method of claim 5 , wherein the identifier of the active VPS is coded as the first syntax element in the SEI message.7. The method of claim 6 , further comprising an SEI network abstraction layer (NAL) unit that includes the SEI message.8. The method of claim 7 , wherein no other SEI messages are included in the SEI NAL unit.9. The method of claim 8 , further comprising coding each random access point (RAP) access unit of the video data to include the SEI NAL unit.10. The method of claim 1 , wherein coding the SEI message comprises encoding the SEI message that contains the identifier of the active VPS.11. The method of claim 1 , wherein coding the SEI message comprises decoding the SEI message that contains the identifier of the active VPS and decoding video data using the VPS identified by the identifier of the active VPS.12. The method of claim 1 , wherein the identifier of the active VPS comprises a vps_id.13. A device for coding video data comprising: 'code a supplemental ...

16-01-2014 publication date

CODING RANDOM ACCESS PICTURES FOR VIDEO CODING

Number: US20140016697A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

In one example, a device for decoding video data includes a processor configured to decapsulate a slice of a random access point (RAP) picture of a bitstream from a network abstraction layer (NAL) unit, wherein the NAL unit includes a NAL unit type value that indicates whether the RAP picture is of a type that can have associated leading pictures and whether the RAP picture is an instantaneous decoder refresh (IDR) picture or a clean random access (CRA) picture, determine whether the RAP picture can have associated leading pictures based on the NAL unit type value, and decode video data of the bitstream following the RAP picture based on the determination of whether the RAP picture can have associated leading pictures. 1. A method of decoding video data , the method comprising:decapsulating a slice of a random access point (RAP) picture of a bitstream from a network abstraction layer (NAL) unit, wherein the NAL unit includes a NAL unit type value that indicates whether the RAP picture is of a type that can have associated leading pictures and whether the RAP picture is an instantaneous decoder refresh (IDR) picture or a clean random access (CRA) picture;determining whether the RAP picture can have associated leading pictures based on the NAL unit type value; anddecoding video data of the bitstream following the RAP picture based on the determination of whether the RAP picture can have associated leading pictures.2. The method of claim 1 , wherein the NAL unit type value indicates that the RAP picture comprises the CRA picture claim 1 , wherein determining comprises determining that the RAP picture is of a type that can have associated leading pictures including a tagged for discard (TFD) leading based on the NAL unit type value claim 1 , and wherein decoding comprises parsing claim 1 , without decoding claim 1 , data of the bitstream corresponding to TFD pictures when the CRA picture is used as a random access point.3. The method of claim 1 , wherein decapsulating ...
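The determination in the claims is essentially a lookup on the NAL unit type. The Python sketch below maps hypothetical type codes to "IDR or CRA" and "can have associated leading pictures"; the numeric values are invented for the sketch and are not the codes defined by any standard.

# Hypothetical NAL unit type codes for this sketch only.
RAP_NAL_TYPES = {
    19: {"kind": "IDR", "leading_pictures_allowed": False},
    20: {"kind": "IDR", "leading_pictures_allowed": True},
    21: {"kind": "CRA", "leading_pictures_allowed": True},
}

def classify_rap(nal_unit_type):
    """Map a RAP NAL unit type to its picture kind and leading-picture property (sketch)."""
    info = RAP_NAL_TYPES.get(nal_unit_type)
    if info is None:
        return "not a RAP picture"
    leading = "can" if info["leading_pictures_allowed"] else "cannot"
    return f'{info["kind"]} picture that {leading} have associated leading pictures'


for t in (19, 20, 21, 1):
    print(t, "->", classify_rap(t))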

16-01-2014 publication date

CODING SEI NAL UNITS FOR VIDEO CODING

Number: US20140016707A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

In one example, a device for decoding video data includes a processor configured to determine, for a supplemental enhancement information (SEI) network abstraction layer (NAL) unit of a bitstream, whether a NAL unit type value for the SEI NAL unit indicates that the NAL unit comprises a prefix SEI NAL unit including a prefix SEI message or a suffix SEI NAL unit including a suffix SEI message, and decode video data of the bitstream following the SEI NAL unit based on whether the SEI NAL unit is the prefix SEI NAL unit or the suffix SEI NAL unit and data of the SEI NAL unit.

1. A method of decoding video data, the method comprising: determining, for a supplemental enhancement information (SEI) network abstraction layer (NAL) unit of a bitstream, whether a NAL unit type value for the SEI NAL unit indicates that the NAL unit comprises a prefix SEI NAL unit including a prefix SEI message or a suffix SEI NAL unit including a suffix SEI message; and decoding video data of the bitstream following the SEI NAL unit based on whether the SEI NAL unit is the prefix SEI NAL unit or the suffix SEI NAL unit and data of the SEI NAL unit.
2. The method of claim 1, further comprising, when the SEI NAL unit comprises the suffix SEI NAL unit, extracting the suffix SEI NAL unit from an access unit (AU) that includes at least a first video coding layer (VCL) NAL unit in the AU prior to the suffix SEI NAL unit in decoding order.
3. The method of claim 2, wherein the suffix SEI NAL unit follows all VCL NAL units in the AU in decoding order.
4. A device for decoding video data, the device comprising a processor configured to determine, for a supplemental enhancement information (SEI) network abstraction layer (NAL) unit of a bitstream, whether a NAL unit type value for the SEI NAL unit indicates that the NAL unit comprises a prefix SEI NAL unit including a prefix SEI message or a suffix SEI NAL unit including a suffix SEI message, and ...
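
Since the prefix/suffix distinction is carried in the NAL unit type value alone, a decoder can classify an SEI NAL unit before parsing its payload. The type values used below (39 for prefix, 40 for suffix) follow common HEVC usage and are assumptions here, not values stated in this entry.

```python
# Sketch of classifying SEI NAL units by nal_unit_type. The constants match
# common HEVC usage (prefix SEI = 39, suffix SEI = 40) but are assumptions.

PREFIX_SEI_NUT = 39
SUFFIX_SEI_NUT = 40

def classify_sei_nal(nal_unit_type: int) -> str:
    if nal_unit_type == PREFIX_SEI_NUT:
        return "prefix"   # precedes the VCL NAL units it applies to
    if nal_unit_type == SUFFIX_SEI_NUT:
        return "suffix"   # may follow the first VCL NAL unit of its access unit
    return "not-sei"

print(classify_sei_nal(40))  # -> suffix
```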

16-01-2014 publication date

CODING TIMING INFORMATION FOR VIDEO CODING

Number: US20140016708A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

In one example, a device for presenting video data includes a processor configured to determine an integer value for the video data, determine a difference value between a presentation time of a first picture and a presentation time of a second picture, wherein the difference value is equal to the integer value multiplied by a clock tick value, and present the first picture and the second picture according to the determined difference value.

1. A method of presenting video data, the method comprising: determining an integer value for the video data; determining a difference value between a presentation time of a first picture and a presentation time of a second picture, wherein the difference value is equal to the integer value multiplied by a clock tick value; and presenting the first picture and the second picture according to the determined difference value.
2. The method of claim 1, further comprising determining that a temporal layer including the first picture and the second picture has a constant picture rate, wherein determining the integer value comprises, based on the determination that the temporal layer has the constant picture rate, decoding data defining the integer value.
3. The method of claim 2, wherein determining that the temporal layer has the constant picture rate comprises determining that a fixed_pic_rate_flag has a value indicating that the temporal layer has the constant picture rate.
4. The method of claim 2, further comprising: determining, for each temporal layer having a respective constant picture rate, an individually signaled integer value; and presenting pictures of each of the temporal layers having respective constant picture rates according to respective integer values multiplied by the clock tick value and differences between presentation times of the pictures.
5. The method of claim 1, wherein determining the clock tick value comprises determining a time scale value.
6. The method of claim 1, wherein the ...
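
The timing relation above is plain arithmetic: one clock tick is num_units_in_tick / time_scale seconds, and the gap between the two presentation times is the signaled integer multiplied by that tick. The field names and sample values below are illustrative only.

```python
# Worked example of the timing relation described above. The sample values are
# illustrative; only the arithmetic is the point.

def clock_tick(num_units_in_tick: int, time_scale: int) -> float:
    """One clock tick, in seconds."""
    return num_units_in_tick / time_scale

def presentation_delta(integer_value: int, tick: float) -> float:
    """Difference between two pictures' presentation times, in seconds."""
    return integer_value * tick

tick = clock_tick(num_units_in_tick=1001, time_scale=60000)   # roughly 16.68 ms per tick
print(presentation_delta(integer_value=2, tick=tick))         # roughly 0.0334 s between pictures
```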

20-02-2014 publication date

COMPATIBLE THREE-DIMENSIONAL VIDEO COMMUNICATIONS

Number: US20140049603A1
Assignee: QUALCOMM INCORPORATED

Information for a video stream indicating whether the video stream includes stereoscopic three-dimensional video data can be provided to a display device. This information allows the device to determine whether to accept the video data and to properly decode and display the video data. This information can be made available for video data regardless of the codec used to encode the video. Systems, devices, and methods for transmission and reception of compatible video communications including stereoscopic three-dimensional picture information are described. 1. A device for coding video information , the device comprising:a memory configured to store at least a portion of said video information, said video information including image data; and receive at least a portion of said video information from said memory;', 'determine compatibility information associated with said image data, said compatibility information being encoded in a first portion of said video information and said image data is encoded in a second portion of said video information, and said compatibility information indicative of whether said image data includes frame-packed stereoscopic three dimensional video; and', 'process the video information based on the determined compatibility information., 'a processor in communication with said memory, the processor configured to2. The device of claim 1 , wherein the processor being configured to process the indication comprises the processor being configured to receive the compatibility information.3. The device of claim 1 , wherein the processor being configured to process the indication comprises the processor being configured to generate the compatibility information.4. The device of claim 3 , wherein the processor is configured to determine the compatibility information based on at least one of the image data claim 3 , a configuration for the device claim 3 , or an identifier for a capture device that provided the video information.5. The device of ...

06-03-2014 publication date

Network abstraction layer header design

Number: US20140064384A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

A video processing device can receive in an encoded bitstream of video data a network abstraction layer (NAL) unit and parse a first syntax element in a header of the NAL unit to determine a temporal identification (ID) for the NAL unit, wherein a value of the first syntax element is one greater than the temporal identification.
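
Because the syntax element is defined as one greater than the temporal identification, recovering the temporal ID is a subtract-by-one after pulling the bits out of the NAL unit header. A sketch follows, assuming a two-byte HEVC-style header layout (type, layer ID, then a three-bit temporal-ID-plus-one field); the exact bit layout is an assumption made for illustration.

```python
# Sketch of parsing a two-byte, HEVC-style NAL unit header in which the final
# three bits are coded as TemporalId + 1. The bit layout is assumed for
# illustration, not quoted from this entry.

def parse_nal_header(header: bytes):
    if len(header) < 2:
        raise ValueError("NAL unit header needs two bytes")
    b0, b1 = header[0], header[1]
    nal_unit_type = (b0 >> 1) & 0x3F            # 6 bits after the forbidden bit
    layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)   # 6 bits straddling both bytes
    temporal_id_plus1 = b1 & 0x07               # 3 bits, one greater than TemporalId
    return nal_unit_type, layer_id, temporal_id_plus1 - 1

# 0x40 0x01 -> type 32, layer 0, TemporalId 0 (temporal_id_plus1 was 1)
print(parse_nal_header(bytes([0x40, 0x01])))
```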

20-03-2014 publication date

Indication of frame-packed stereoscopic 3d video data for video coding

Number: US20140078249A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

This disclosure describes techniques for signaling and using an indication that video data is in a frame-packed stereoscopic 3D video data format. In one example of the disclosure, a method for decoding video data comprises receiving video data, receiving an indication that indicates whether any pictures in the received video data contain frame-packed stereoscopic 3D video data, and decoding the received video data in accordance with the received indication. The received video data may be rejected if the video decoder is unable to decode frame-packed stereoscopic 3D video data.

20-03-2014 publication date

INDICATION OF INTERLACED VIDEO DATA FOR VIDEO CODING

Number: US20140079116A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

This disclosure proposes techniques for encoding and decoding video data. The techniques of the disclosure include receiving a first indication that indicates whether all pictures in received video data are progressive frames coded as frame pictures. If a video decoder is unable to decode progressive frames, the video data may be rejected based on the first indication.

1. A method for decoding video data, the method comprising: receiving video data; receiving a first indication that indicates whether all pictures in the received video data are progressive frames coded as frame pictures; and decoding the received video data in accordance with the received first indication.
2. The method of claim 1, wherein the first indication comprises a flag, and wherein the flag value equal to 0 indicates that all pictures in the received video data are progressive frames coded as frame pictures, and wherein the flag value equal to 1 indicates that there may be one or more pictures in the received video data that are not progressive frames or not coded as frame pictures.
3. The method of claim 1, wherein the first indication indicates that there may be one or more pictures in the received video data that are not progressive frames or not coded as frame pictures, and wherein decoding the received video data comprises rejecting the video data.
4. The method of claim 1, further comprising receiving the first indication in at least one of a video parameter set and a sequence parameter set.
5. The method of claim 1, further comprising receiving the first indication in a sample entry of video file format information.
6. The method of claim 5, further comprising receiving the first indication in one of a HEVCDecoderConfigurationRecord sample entry and a VisualSampleEntry sample entry.
7. The method of claim 1, wherein the first indication is a parameter in a Real-time Transport Protocol (RTP) payload.
8. The method of claim 1, further comprising receiving the first indication in ...

20-03-2014 publication date

VIDEO CODING WITH IMPROVED RANDOM ACCESS POINT PICTURE BEHAVIORS

Number: US20140079140A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

This disclosure describes techniques for selection of coded picture buffer (CPB) parameters used to define a CPB for a video coding device for clean random access (CRA) pictures and broken link access (BLA) pictures in a video bitstream. A video coding device receives a bitstream including one or more CRA pictures or BLA pictures, and also receives a message indicating whether to use an alternative set of CPB parameters for at least one of the CRA pictures or BLA pictures. The message may be received from an external means, such as a processing means included in a streaming server or network entity. The video coding device sets a variable defined to indicate the set of CPB parameters for a given one of the pictures based on the received message, and selects the set of CPB parameters for the given one of the pictures based on the variable for the picture. 1. A method of processing video data , the method comprising:receiving a bitstream representing a plurality of pictures including one or more of clean random access (CRA) pictures or broken link access (BLA) pictures;receiving a message indicating whether to use an alternative set of coded picture buffer (CPB) parameters for at least one of the CRA pictures or the BLA pictures;setting a variable defined to indicate the set of CPB parameters for the one of the CRA pictures or the BLA pictures based on the received message; andselecting the set of CPB parameters for the one of the CRA pictures or the BLA pictures based on the variable for the picture.2. The method of claim 1 , further comprising initializing a hypothetical reference decoder (HRD) using the one of the CRA pictures or the BLA pictures and associated HRD parameters claim 1 , wherein the HRD parameters include the selected set of CPB parameters for the picture.3. The method of claim 1 , wherein the one of the CRA pictures or the BLA pictures comprises one of a CRA picture or a BLA picture with a network abstraction layer (NAL) unit type that indicates a ...

27-03-2014 publication date

BITSTREAM CONFORMANCE TEST IN VIDEO CODING

Number: US20140086303A1
Author: Wang Ye-Kui
Assignee:

A device performs a decoding process as part of a bitstream conformance test. As part of the decoding process, the device performs a bitstream extraction process to extract, from a bitstream, an operation point representation of an operation point defined by a target set of layer identifiers and a target highest temporal identifier. The target set of layer identifiers contains values of layer identifier syntax elements present in the operation point representation, the target set of layer identifiers being a subset of values of layer identifier syntax elements of the bitstream. The target highest temporal identifier is equal to a greatest temporal identifier present in the operation point representation, the target highest temporal identifier being less than or equal to a greatest temporal identifier present in the bitstream. The device decodes network abstraction layer (NAL) units of the operation point representation.

1. A method of processing video data, the method comprising: performing a bitstream conformance test that determines whether a bitstream conforms to a video coding standard, wherein performing the bitstream conformance test comprises: performing a bitstream extraction process that extracts, from the bitstream, an operation point representation of an operation point defined by a target set of layer identifiers and a target highest temporal identifier, wherein the target set of layer identifiers contains values of layer identifier syntax elements present in the operation point representation, the target set of layer identifiers being a subset of values of layer identifier syntax elements of the bitstream, and wherein the target highest temporal identifier is equal to a greatest temporal identifier present in the operation point representation, the target highest temporal identifier being less than or equal to a greatest temporal identifier present in the bitstream; and decoding network abstraction layer (NAL) units of the operation point representation. ...
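
The extraction step described above amounts to filtering NAL units on two conditions: the layer identifier must be in the target set, and the temporal identifier must not exceed the target highest temporal ID. A minimal sketch follows, with an assumed parsed-NAL-unit representation; a real extractor also has rules for non-VCL NAL units that this sketch ignores.

```python
# Minimal sketch of the sub-bitstream extraction filter described above. The
# NalUnit tuple is a stand-in for whatever parsed representation a real
# extractor would use.

from collections import namedtuple

NalUnit = namedtuple("NalUnit", ["layer_id", "temporal_id", "payload"])

def extract_operation_point(nal_units, target_layer_ids, target_highest_tid):
    return [
        nu for nu in nal_units
        if nu.layer_id in target_layer_ids and nu.temporal_id <= target_highest_tid
    ]

bitstream = [NalUnit(0, 0, b"base"), NalUnit(0, 2, b"high rate"), NalUnit(1, 0, b"enh")]
print(extract_operation_point(bitstream, target_layer_ids={0}, target_highest_tid=1))
# keeps only the layer-0 unit with TemporalId 0
```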

27-03-2014 publication date

INDICATION AND ACTIVATION OF PARAMETER SETS FOR VIDEO CODING

Number: US20140086317A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

In some examples, a video encoder includes multiple sequence parameter set (SPS) IDs in an SEI message, such that multiple active SPSs can be indicated to a video decoder. In some examples, a video decoder activates a video parameter set (VPS) and/or one or more SPSs through referencing an SEI message, e.g., based on the inclusion of the VPS ID and one or more SPS IDs in the SEI message. The SEI message may be, as examples, an active parameter sets SEI message or a buffering period SEI message. 1. A method of decoding video data , the method comprising:decoding a bitstream that includes video data and syntax information for decoding the video data, wherein the syntax information comprises a supplemental enhancement information (SEI) message of an access unit, and wherein the SEI message indicates a plurality of sequence parameter sets (SPSs) and a video parameter set (VPS) for decoding video data of the access unit; anddecoding the video data of the access unit based on the plurality of SPSs and the VPS indicated in the SEI message.2. The method of claim 1 , wherein the SEI message comprises an active parameter sets SEI message.3. The method of claim 2 , wherein the active parameter sets SEI message precedes claim 2 , in decoding order claim 2 , a first portion of the video data of the access unit.4. The method of claim 1 , wherein the SEI message comprises a buffering period SEI message.5. The method of claim 1 , wherein the SEI message includes a first syntax element that indicates a first SPS of the plurality of SPSs claim 1 , a second syntax element that specifies a number of additional SPSs indicated in the SEI message claim 1 , and one or more additional syntax elements that respectively indicate the additional SPSs.6. The method of claim 5 , wherein the first syntax element comprises a active_seq_param_set_id syntax element claim 5 , the second syntax element comprises a num_additional_sps_ids_minus1 syntax element claim 5 , and the one or more additional ...
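
Acting on such an SEI message is essentially a lookup: the listed VPS ID and SPS IDs select which stored parameter sets become active for the access unit. The sketch below uses the SPS-related syntax element names quoted in the entry; the VPS field name and the surrounding data structures are assumptions for illustration.

```python
# Hedged sketch: activating the parameter sets referenced by an active
# parameter sets SEI message. `active_seq_param_set_id` and the additional-SPS
# list follow the entry's naming; the VPS field name and the stores are assumed.

def activate_parameter_sets(sei, vps_store, sps_store):
    """Return the active VPS and the list of active SPSs named by the SEI."""
    active_vps = vps_store[sei["active_video_parameter_set_id"]]
    sps_ids = [sei["active_seq_param_set_id"]] + sei["additional_sps_ids"]
    return active_vps, [sps_store[i] for i in sps_ids]

vps_store = {0: "VPS#0"}
sps_store = {0: "SPS#0", 1: "SPS#1"}
sei = {"active_video_parameter_set_id": 0,
       "active_seq_param_set_id": 0,
       "additional_sps_ids": [1]}   # i.e. num_additional_sps_ids_minus1 == 0
print(activate_parameter_sets(sei, vps_store, sps_store))  # ('VPS#0', ['SPS#0', 'SPS#1'])
```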

27-03-2014 publication date

LONG-TERM REFERENCE PICTURE SIGNALING IN VIDEO CODING

Number: US20140086324A1
Assignee: QUALCOMM INCORPORATED

A video encoder signals, in a slice header for a current slice of a current picture, a first long-term reference picture (LTRP) entry, the first LTRP entry indicating that a particular reference picture is in a long-term reference picture set of the current picture. Furthermore, the video encoder signals, in the slice header, a second LTRP entry only if second LTRP entry does not indicate that the particular reference picture is in the long-term reference picture set of the current picture. 1. A method of decoding video data , the method comprising:obtaining, from a bitstream, a slice header of a current slice of a current picture, wherein a set of one or more long-term reference picture (LTRP) entries are signaled in the slice header, wherein the set of one or more LTRP entries includes a first LTRP entry indicating that a particular reference picture is in a long-term reference picture set of the current picture, and wherein the set of one or more LTRP entries includes a second LTRP entry only if the second LTRP entry does not indicate that the particular reference picture is in the long-term reference picture set of the current picture;generating, based at least in part on the one or more LTRP entries, a reference picture list for the current picture; andreconstructing, based at least in part on one or more reference pictures in the reference picture list for the current picture, the current picture.2. The method of claim 1 , wherein:the method further comprises obtaining, from the bitstream, a sequence parameter set (SPS) that is applicable to the current picture, the SPS including the first LTRP entry; andthe slice header includes an index to the first LTRP entry.3. The method of claim 2 , wherein the slice header includes the second LTRP entry only if the second LTRP entry does not indicate that the particular reference picture is in the long-term reference picture set of the current picture.4. The method of claim 2 , wherein:the SPS includes the first LTRP ...

27-03-2014 publication date

HYPOTHETICAL REFERENCE DECODER PARAMETERS IN VIDEO CODING

Number: US20140086331A1
Author: Wang Ye-Kui
Assignee:

A device performs a hypothetical reference decoder (HRD) operation that determines conformance of a bitstream to a video coding standard or determines conformance of a video decoder to the video coding standard. As part of performing the HRD operation, the device determines a highest temporal identifier of a bitstream-subset associated with a selected operation point of the bitstream. Furthermore, as part of the HRD operation, the device determines, based on the highest temporal identifier, a particular syntax element from among an array of syntax elements. The device then uses the particular syntax element in the HRD operation. 1. A method of processing video data , the method comprising: determining a highest temporal identifier of a bitstream-subset associated with a selected operation point of the bitstream;', 'determining, based on the highest temporal identifier, a particular syntax element from among an array of syntax elements; and', 'using the particular syntax element in the HRD operation., 'performing a hypothetical reference decoder (HRD) operation, wherein the HRD operation determines conformance of a bitstream to a video coding standard or determines conformance of a video decoder to the video coding standard, wherein performing the HRD operation comprises2. The method of claim 1 , wherein respective syntax elements in the array of syntax elements specify claim 1 , for respective values of the highest temporal identifier claim 1 , a maximum required size of a decoded picture buffer.3. The method of claim 1 , wherein respective syntax elements in the array of syntax elements specify claim 1 , for respective values of the highest temporal identifier claim 1 , a maximum allowed number of pictures preceding any picture in decoding order and succeeding that picture in output order.4. The method of claim 1 , wherein respective syntax elements in the array of syntax elements specify claim 1 , for respective values of the highest temporal identifier claim 1 , ...
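
The selection described above is an array lookup: the per-sub-layer syntax elements are indexed by the highest temporal identifier of the operation point's bitstream subset. A small illustrative sketch, with assumed names and values:

```python
# Illustrative only: pick the HRD-related syntax element that applies to an
# operation point by using the highest temporal ID as the index into a
# per-temporal-sub-layer array. Names and values are assumptions.

def select_for_highest_tid(per_tid_values, highest_temporal_id):
    """per_tid_values[i] applies when the highest TemporalId equals i."""
    return per_tid_values[highest_temporal_id]

max_dec_pic_buffering = [4, 5, 6]   # one entry per temporal sub-layer
print(select_for_highest_tid(max_dec_pic_buffering, highest_temporal_id=1))  # -> 5
```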

27-03-2014 publication date

ACCESS UNIT INDEPENDENT CODED PICTURE BUFFER REMOVAL TIMES IN VIDEO CODING

Number: US20140086332A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coding device, such as a video encoder or a video decoder, may be configured to code a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in an access unit (AU) and a second DU, wherein the second DU is subsequent to the first DU in decoding order and in the same AU as the first DU. The video coding device may further determine a removal time of the DU based at least on the coded duration. The coding device may also code a sub-picture timing supplemental enhancement information (SEI) message associated with the first DU. The video coding device may further determine a removal time of the DU based at least in part on the sub-picture timing SEI message. 1. A method for decoding video data , the method comprising:decoding a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in an access unit (AU) and CPB removal time of a second DU, wherein the second DU is subsequent to the first DU in decoding order and in the same AU as the first DU;determining a removal time of the first DU based at least in part on the decoded duration; anddecoding video data of the first DU based at least in part on the removal time.2. The method of claim 1 , wherein the second DU is immediately subsequent to the first DU in the AU in decoding order.3. The method of claim 1 , wherein the second DU is a last DU in the AU in decoding order.4. The method of claim 1 , further comprising:decoding one or more sub-picture level CPB parameters, wherein determining the removal time of the first DU comprises determining the removal time of the first DU based at least in part on the decoded duration and the sub-picture level CPB parameters.5. The method of claim 4 , wherein decoding one or more sub-picture level CPB parameters comprises:decoding a sub-picture timing supplemental enhancement information (SEI) message that is associated with the first DU.6. The method of claim 5 , wherein the second DU is a last DU in the AU in ...
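
At its core, the removal-time derivation sketched in this entry is an offset computation: a coded duration, expressed in sub-picture clock ticks, places one decoding unit's CPB removal time relative to another DU's removal time. The sketch below shows only that arithmetic; the anchor choice and tick value are assumptions, not the normative HRD derivation.

```python
# Illustrative arithmetic only: place a decoding unit's CPB removal time
# relative to an anchor DU, given a duration coded in sub-picture clock ticks.
# Anchor choice and tick value are assumptions, not the normative derivation.

def du_removal_time(anchor_removal_time: float, duration_in_ticks: int,
                    clock_sub_tick: float, earlier_than_anchor: bool = True) -> float:
    offset = duration_in_ticks * clock_sub_tick
    return anchor_removal_time - offset if earlier_than_anchor else anchor_removal_time + offset

# First DU removed 3 sub-ticks before the later DU used as the anchor.
print(du_removal_time(anchor_removal_time=0.100, duration_in_ticks=3,
                      clock_sub_tick=0.005))   # -> 0.085 s
```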

27-03-2014 publication date

BITSTREAM PROPERTIES IN VIDEO CODING

Number: US20140086333A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A device signals a property of a bitstream. The bitstream comprises a plurality of coded video sequences (CVSs). When the property has a particular value, all the CVSs of the bitstream conform to the same profile. A video processing device is able to determine, based on the property, whether to process the bitstream. 1. A method of processing video data , the method comprising: 'wherein the bitstream conforms to a video coding specification and comprises a plurality of coded video sequences (CVSs) and when the signaled property has a particular value, all the CVSs of the bitstream conform to the same profile, the profile being a subset of an entire bitstream syntax that is specified by the video coding specification; and', 'determining, based on a signaled property of a bitstream that comprises an encoded representation of the video data, whether a video decoder is able to decode the bitstream,'}processing, based on the determination, the bitstream.2. The method of claim 1 , wherein the signaled property is signaled in an International Organization for Standardization (ISO) base media file format file.3. The method of claim 2 , wherein the signaled property is signaled in a sample entry in a High Efficiency Video Coding (HEVC) video track of the ISO base media file format file.4. The method of claim 1 , wherein the signaled property comprises a parameter in an element or attribute in a media presentation description (MPD) of dynamic adaptive streaming over hypertext transfer protocol (DASH).5. The method of claim 1 , wherein the signaled property comprises a parameter of a session description protocol (SDP).6. The method of claim 1 , wherein processing the bitstream comprises decoding the bitstream.7. The method of claim 1 , wherein the signaled property is a particular syntax element and when the particular syntax element has the particular value claim 1 , general profile indicator syntax elements in respective sequence parameter sets (SPSs) that are activated when ...

27-03-2014 publication date

HYPOTHETICAL REFERENCE DECODER PARAMETERS IN VIDEO CODING

Number: US20140086336A1
Author: Wang Ye-Kui
Assignee:

A computing device selects, from among a set of hypothetical reference decoder (HRD) parameters in a video parameter set and a set of HRD parameters in a sequence parameter set, a set of HRD parameters applicable to a particular operation point of a bitstream. The computing device performs, based at least in part on the set of HRD parameters applicable to the particular operation point, an HRD operation on a bitstream subset associated with the particular operation point. 1. A method of processing video data , the method comprising:selecting, from among a set of Hypothetical Reference Decoder (HRD) parameters in a video parameter set (VPS) and a set of HRD parameters in a sequence parameter set (SPS), a set of HRD parameters applicable to a particular operation point of a bitstream; andperforming, based at least in part on the selected set of HRD parameters applicable to the particular operation point, an HRD operation on a bitstream subset associated with the particular operation point.2. The method of claim 1 , wherein selecting the set of HRD parameters applicable to the particular operation point comprises determining that the set of HRD parameters in the SPS is applicable to the particular operation point when a layer identifier set of the particular operation point contains a set of all layer identifiers present in the bitstream subset.3. The method of claim 1 , wherein the set of HRD parameters applicable to the particular operation point includes parameters that specify an initial coding picture buffer (CPB) removal delay claim 1 , a CPB size claim 1 , a bit rate claim 1 , an initial decoded picture buffer (DPB) output delay claim 1 , and a DPB size.4. The method of claim 1 , further comprising:determining a target layer identifier set of the particular operation point that contains each layer identifier present in the bitstream subset, wherein the target layer identifier set of the particular operation point is a subset of layer identifiers present in the ...

27-03-2014 publication date

Indication and activation of parameter sets for video coding

Number: US20140086337A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

In some examples, a video encoder includes multiple sequence parameter set (SPS) IDs in an SEI message, such that multiple active SPSs can be indicated to a video decoder. In some examples, a video decoder activates a video parameter set (VPS) and/or one or more SPSs through referencing an SEI message, e.g., based on the inclusion of the VPS ID and one or more SPS IDs in the SEI message. The SEI message may be, as examples, an active parameter sets SEI message or a buffering period SEI message.

27-03-2014 publication date

Expanded decoding unit definition

Number: US20140086340A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

A video coding device, such as a video encoder or a video decoder, may be configured to decode a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in an access unit (AU) and CPB removal time of a second DU, wherein the first DU comprises a non-video coding layer (VCL) network abstraction layer (NAL) unit with nal_unit_type equal to UNSPEC0, EOS_NUT, EOB_NUT, in the range of RSV_NVCL44 to RSV_NVCL47 or in the range of UNSPEC48 to UNSPEC63. The video decoder determines a removal time of the first DU based at least in part on the decoded duration and decodes video data of the first DU based at least in part on the removal time.

27-03-2014 publication date

CODED PICTURE BUFFER REMOVAL TIMES SIGNALED IN PICTURE AND SUB-PICTURE TIMING SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGES

Number: US20140086341A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coding device, such as a video encoder or a video decoder, may be configured to code a sub-picture timing supplemental enhancement information (SEI) message associated with a first decoding unit (DU) of an access unit (AU). The video coding device may further code a duration between coded picture buffer (CPB) removal time of a second DU of the AU in decoding order and CPB removal time of the first DU in the sub-picture SEI message. The coding device may also derive a CPB removal time of the first DU based at least in part on the sub-picture timing SEI message. 1. A method for decoding video data , the method comprising:decoding a sub-picture timing supplemental enhancement information (SEI) message associated with a first decoding unit (DU) of an access unit (AU);determining, from the sub-picture timing SEI message, a duration between coded picture buffer (CPB) removal time of a second DU of the AU in decoding order and CPB removal time of the first DU in the sub-picture SEI message; andderiving a CPB removal time of the first DU based at least in part on the sub-picture timing SEI message.2. The method of claim 1 , further comprising:decoding a sequence level flag to determine the presence of one or more sub-picture level CPB parameters either in the sub-picture timing SEI message or a picture timing SEI message associated with the first DU.3. The method of claim 1 , further comprising:decoding the sub-picture level CPB parameters, wherein determining the CPB removal time of the first DU is further based at least in part on the sub-picture level CPB parameters.4. The method of claim 3 , wherein the sequence level flag indicates the sub-picture level CPB parameters are to be present in the sub-picture timing SEI message claim 3 , and wherein decoding the sub-picture level CPB parameters comprises:decoding the sub-picture timing SEI message associated with the first DU.5. The method of claim 4 , wherein the second DU is a last DU in the AU in decoding order ...

27-03-2014 publication date

SEQUENCE LEVEL FLAG FOR SUB-PICTURE LEVEL CODED PICTURE BUFFER PARAMETERS

Number: US20140086342A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coding device, such as a video encoder or a video decoder, may be configured to decode a sequence level flag to determine the presence of one or more sub-picture level coded picture buffer (CPB) parameters for a decoding unit (DU) of an access unit (AU) in either in a picture timing supplemental enhancement information (SEI) message or a sub-picture timing SEI message associated with the DU. The coding device may also decode the one or more sub-picture level CPB parameters from the picture timing SEI message or the sub-picture timing SEI message based on the sequence level flag. 1. A method for decoding video data , the method comprising:decoding a sequence level flag to determine the presence of one or more sub-picture level coded picture buffer (CPB) parameters for a decoding unit (DU) of an access unit (AU) in either in a picture timing supplemental enhancement information (SEI) message or a sub-picture timing SEI message associated with the DU; anddecoding the one or more sub-picture level CPB parameters from the picture timing SEI message or the sub-picture timing SEI message based on the sequence level flag.2. The method of claim 1 , wherein the one or more sub-picture level CPB parameters are present in only one of the picture timing SEI message or the sub-picture timing SEI message.3. The method of claim 1 , further comprising:determining a CPB removal time of the DU based at least in part on the one or more sub-picture level CPB parameters.4. The method of claim 3 , wherein determining the CPB removal time of the DU comprises determining the CPB removal time of the DU without decoding an initial CPB removal delay and offset.5. The method of claim 1 , wherein the sequence level flag indicates the sub-picture level CPB parameters are present in the sub-picture timing SEI message claim 1 , and wherein decoding the sub-picture level CPB parameters comprises decoding the sub-picture timing SEI message associated with the DU.6. The method of claim 1 , ...

27-03-2014 publication date

BUFFERING PERIOD AND RECOVERY POINT SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGES

Number: US20140086343A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coding device, such as a video decoder, may be configured to decode a buffering period supplemental enhancement information (SEI) message associated with an access unit (AU). The video decoder is further configured to decode a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in the AU and CPB removal time of a second DU from the buffering period SEI message, wherein the AU has a TemporalId equal to 0. The video decoder is configured to determine a removal time of the first DU based at least in part on the decoded duration and decode video data of the first DU based at least in part on the removal time. 1. A method for decoding video data , the method comprising:decoding a buffering period supplemental enhancement information (SEI) message associated with an access unit (AU);decoding a duration between coded picture buffer (CPB) removal time of a first decoding unit (DU) in the AU and CPB removal time of a second DU from the buffering period SEI message, wherein the AU has a TemporalId equal to 0;determining a removal time of the first DU based at least in part on the decoded duration; anddecoding video data of the first DU based at least in part on the removal time.2. The method of claim 1 , wherein the second DU is subsequent to the first DU in decoding order and in the same AU as the first DU.3. The method of claim 2 , wherein the second DU is immediately subsequent to the first DU in the AU in decoding order or a last DU in the AU in decoding order.4. The method of claim 1 , further comprising:decoding one or more sub-picture level CPB parameters, wherein determining the removal time of the first DU comprises determining the removal time of the first DU based at least in part on the decoded duration and the sub-picture level CPB parameters.5. The method of claim 4 , wherein decoding one or more sub-picture level CPB parameters comprises:decoding a sub-picture timing supplemental enhancement information (SEI) message ...

27-03-2014 publication date

CODED PICTURE BUFFER ARRIVAL AND NOMINAL REMOVAL TIMES IN VIDEO CODING

Number: US20140086344A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A video coding device, such as a video decoder, may be configured to derive at least one of a coded picture buffer (CPB) arrival time and a CPB nominal removal time for an access unit (AU) at both an access unit level and a sub-picture level regardless of a value of a syntax element that defines whether a decoding unit (DU) is the entire AU. The video coding device may further be configured to determine a removal time of the AU based at least in part on one of the CPB arrival time and a CPB nominal removal time and decode video data of the AU based at least in part on the removal time. 1. A method for decoding video data , the method comprising:deriving at least one of a coded picture buffer (CPB) arrival time and a CPB nominal removal time for an access unit (AU) at both an access unit level and a sub-picture level regardless of a value of a syntax element that defines whether a decoding unit (DU) is the entire AU;determining a removal time of the AU based at least in part on one of the CPB arrival time and a CPB nominal removal time; anddecoding video data of the AU based at least in part on the removal time.2. The method of claim 1 , further comprising:responsive to the syntax element having a true value, deriving a CPB removal time only for the AU level; andresponsive to the syntax element having a false value, deriving a CPB removal time only for the sub-picture level.3. The method of claim 1 , wherein at least one of a CPB arrival time and a CPB nominal removal time are derived only when a syntax flag that indicates CPB parameters are present has a true value.4. The method of claim 1 , further comprising:decoding a duration between CPB removal of a first DU in the AU and CPB removal of a second DU;determining a removal time of the first DU based at least in part on the decoded duration; anddecoding video data of the first DU based at least in part on at least one of the removal time, the CPB arrival time, and the CPB nominal removal time.5. The method of claim ...

03-04-2014 publication date

SUB-BITSTREAM EXTRACTION FOR MULTIVIEW, THREE-DIMENSIONAL (3D) AND SCALABLE MEDIA BITSTREAMS

Number: US20140092213A1
Author: Chen Ying, Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

Techniques are described for modal sub-bitstream extraction. For example, a network entity may select a sub-bitstream extraction mode from a plurality of sub-bitstream extraction modes. Each sub-bitstream extraction mode may define a particular manner in which to extract coded pictures from views or layers to allow a video decoder to decode target output views or layers for display. In this manner, the network entity may adaptively select the appropriate sub-bitstream extraction technique, rather than a rigid, fixed sub-bitstream extraction technique. 1. A method of processing video data , the method comprising:receiving a bitstream of encoded video data;selecting a sub-bitstream extraction mode from a plurality of sub-bitstream extraction modes, wherein each of the sub-bitstream extraction modes defines a manner in which to extract coded pictures from views or layers from the bitstream to allow decoding of target output views or target output layers, and wherein each coded picture comprises one or more video coding layer network abstraction layer (VCL NAL) units of a view or a layer within an access unit; andextracting, from the bitstream, a sub-bitstream in the manner defined by the selected sub-bitstream extraction mode.2. The method of claim 1 , wherein each coded picture of a view comprises one of a view component claim 1 , a texture view component claim 1 , and a depth view component.3. The method of claim 2 , wherein selecting a sub-bitstream extraction mode comprises selecting a self-complete sub-bitstream extraction mode claim 2 , and wherein extracting the sub-bitstream comprises extracting claim 2 , when the selected sub-bitstream extraction mode is the self-complete sub-bitstream extraction mode claim 2 , all available texture view components and depth view components of the view if a texture view or a depth view of the view is needed for decoding the target output views.4. The method of claim 2 , wherein selecting a sub-bitstream extraction mode ...

03-04-2014 publication date

SIGNALING LAYER IDENTIFIERS FOR OPERATION POINTS IN VIDEO CODING

Number: US20140092955A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

Techniques described herein are related to coding layer identifiers for operation points in video coding. In one example, a method of decoding video data is provided. The method comprises decoding syntax elements in a video parameter set (VPS) within a conforming bitstream indicating a first operation point having a first set of content. The method further comprises decoding, if present, syntax elements in the VPS within the conforming bitstream indicating hypothetical reference decoder (HRD) parameter information having a second set of content for the first operation point, wherein the conforming bitstream does not include syntax elements in the VPS that duplicate at least one of the first or second set of content for a second operation point, and wherein decoding syntax elements comprises decoding the syntax elements indicating the first operation point and the HRD parameter information only within conforming bitstreams. 1. A method of decoding video data , the method comprising:decoding syntax elements in a video parameter set (VPS) within a conforming bitstream indicating a first operation point having a first set of content; anddecoding, if present, syntax elements in the VPS within the conforming bitstream indicating hypothetical reference decoder (HRD) parameter information having a second set of content for the first operation point,wherein the conforming bitstream does not include syntax elements in the VPS that duplicate at least one of the first or second set of content for a second operation point, and wherein decoding syntax elements comprises decoding the syntax elements indicating the first operation point and the HRD parameter information only within conforming bitstreams.2. The method of claim 1 , wherein the conforming bitstream does not include syntax elements in the VPS that duplicate the first set of content for the second operation point claim 1 , the first set of content being unique to the first operation point in the VPS.3. The method of ...

03-04-2014 publication date

SIGNALING OF REGIONS OF INTEREST AND GRADUAL DECODING REFRESH IN VIDEO CODING

Number: US20140092963A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

During a coding process, systems, methods, and apparatus may code information indicating whether gradual decoder refresh (GDR) of a picture is enabled. When GDR is enabled, the coding process, systems, methods, and apparatus may code information that indicates whether one or more slices of the picture belong to a foreground region of the picture. In another example, during a coding process, systems, methods, and apparatus may decode video data corresponding to an ISP identification (ISP ID) for one of the ISPs for slices of a picture. The systems, methods, and apparatus may decode video data corresponding to an ROI using the ISP. 1. A method of decoding video data , the method comprising:decoding information indicating whether gradual decoder refresh (GDR) of a picture is enabled; andwhen GDR is enabled, decoding information that indicates whether one or more slices of the picture belong to a foreground region of the picture.2. The method of claim 1 , wherein the information comprises a flag in one of a supplemental enhancement information (SEI) message and a slice header.3. The method of claim 1 , wherein the picture comprises one of a plurality of pictures claim 1 , the foreground region comprises one of a plurality of foreground regions with each of the plurality of pictures having a foreground region and each of these foreground regions having one or more slices claim 1 , the method further comprising decoding information indicating the respective pictures corresponding to a GDR starting point and a GDR recovery point.4. The method of claim 3 , further comprising decoding video data corresponding to the slices belonging to the foreground regions in the pictures between the GDR starting point and the GDR recovery point.5. The method of claim 4 , further comprising discarding video data corresponding to the slices not belonging to the foreground regions in the pictures between the GDR starting point and the GDR recovery point.6. The method of claim 1 , further ...

03-04-2014 publication date

Error resilient decoding unit association

Number: US20140092993A1
Author: Ye-Kui Wang
Assignee: Qualcomm Inc

Techniques are described for signaling decoding unit identifiers for decoding units of an access unit. The video decoder determines which network abstraction layer (NAL) units are associated with which decoding units based on the decoding unit identifiers. Techniques are also described for including one or more copies of supplemental enhancement information (SEI) messages in an access unit.

03-04-2014 publication date

SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGE CODING

Number: US20140092994A1
Author: Wang Ye-Kui
Assignee:

Techniques are described for signaling decoding unit identifiers for decoding units of an access unit. The video decoder determines which network abstraction layer (NAL) units are associated with which decoding units based on the decoding unit identifiers. Techniques are also described for including one or more copies of supplemental enhancement information (SEI) messages in an access unit. 1. A method for coding video data , the method comprising:coding a supplemental enhancement information (SEI) message in an access unit, wherein the access unit includes the video data for reconstructing at least one picture, and the SEI message defines a characteristic of the video data; andcoding a copy of the SEI message in the access unit.2. The method of claim 1 , further comprising:including the SEI message before a first video coding layer (VCL) network abstraction layer (NAL) unit in decoding order in the access unit; andincluding the copy of the SEI message after the first VCL NAL unit in decoding order and before a last VCL NAL unit in decoding order in the access unit,wherein coding the SEI message comprises encoding the SEI message that is included before the first VCL NAL unit, andwherein coding the copy of the SEI message comprises encoding the copy of the SEI message that included after the first VCL NAL unit and before the last VCL NAL unit.3. The method of claim 1 , further comprising:decoding a first video coding layer (VCL) network abstraction layer (NAL) unit in decoding order in the access unit; anddecoding a last VCL NAL unit in decoding order in the access unit,wherein coding the SEI message comprises decoding the SEI message prior to decoding the first VCL NAL unit, andwherein coding the copy of the SEI message comprises decoding the copy of the SEI message after decoding the first VCL NAL unit and prior to decoding the last VCL NAL unit.4. The method of claim 1 , further comprising:determining a type of the SEI message;determining a temporal ...

03-04-2014 publication date

SIGNALING OF LAYER IDENTIFIERS FOR OPERATION POINTS

Number: US20140092996A1
Author: Wang Ye-Kui
Assignee: QUALCOMM INCORPORATED

A device for processing video data receives an indication of a maximum layer identification (ID) value for a bitstream; receives a flag for a first layer with a layer ID value less than the maximum layer ID value; and, based on a value of the flag, determines if the first layer is included in an operation point.

1. A method of processing video data, the method comprising: receiving an indication of a maximum layer identification (ID) value for a bitstream; receiving a flag for a first layer with a layer ID value less than the maximum layer ID value; and, based on a value of the flag, determining if the first layer is included in an operation point.
2. The method of claim 1, further comprising: receiving a flag for each layer ID value between zero and the maximum layer ID minus one, wherein a value for each flag indicates if each layer is included in the operation point.
3. The method of claim 1, wherein a first value for the flag indicates the first layer is included in the operation point and a second value for the flag indicates the first layer is not present in the operation point.
4. The method of claim 1, wherein the operation point comprises an operation point identification value, wherein the operation point identification value is associated with a set of decoding parameters identified in the video data.
5. The method of claim 1, wherein a video parameter set comprises the indication of the maximum layer identification (ID) value and the flag.
6. The method of claim 1, wherein receiving the indication of the maximum layer ID value for the bitstream comprises receiving a syntax element identifying the maximum layer ID value.
7. The method of claim 1, wherein receiving the indication of the maximum layer ID value for the bitstream comprises determining a maximum possible layer ID value.
8. The method of claim 1, wherein the method is performed by a media aware network element (MANE), and wherein the method further comprises: in response to the first layer being ...
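
Put together, the signaling above yields the operation point's layer set: iterate the flags for layer IDs below the maximum and collect the layers whose flag is set. The sketch below also includes the maximum-ID layer itself, which is an assumption made for this example rather than something the entry states.

```python
# Sketch of building an operation point's layer set from the signaled maximum
# layer ID and the per-layer inclusion flags. Treating the max-ID layer as
# always included is an assumption for this example.

def operation_point_layers(max_layer_id, included_flags):
    """included_flags[i] says whether layer i (for i < max_layer_id) is included."""
    layers = [layer_id for layer_id, flag in enumerate(included_flags) if flag]
    layers.append(max_layer_id)   # assumption: the max-ID layer is part of the set
    return layers

print(operation_point_layers(max_layer_id=3, included_flags=[1, 0, 1]))  # -> [0, 2, 3]
```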
