Publication date: 19-07-2019
Number: KR1020190085986A
Author:
Assignee:
Contacts:
Application number: 70-19-102017491
Application date: 12-12-2017

[1]

The present invention generally relates to a system for watermarking audio-visual content.

[2]

In many digital broadcast systems, a broadcast station transmits both audio-visual content and one or more streams of enhanced service data. The enhanced service data may be provided together with the audio-visual content, or separately from it, to provide information and services.

[3]

In many broadcast environments, the audio-visual content and the one or more streams of enhanced service data are not received directly by AV presenting devices from the broadcast station. Rather, AV presenting devices, such as televisions, are typically connected to broadcast receiving devices that receive the audio-visual content and the one or more streams of enhanced service data in a compressed form and provide uncompressed audio-visual content to the AV presenting devices.

[4]

In some broadcasting environments, a broadcast receiving device receives audio-visual content from a multichannel video program distributor (MVPD). The MVPD receives an audio-visual broadcast signal from the broadcast station, extracts content from the received audio-visual broadcast signal, converts the extracted content into audio-visual signals having a format suitable for transmission, and provides the converted audio-visual signals to the broadcast receiving device. During the conversion process, the MVPD may include the enhanced service data provided from the broadcast station, or may include different enhanced service data provided to the broadcast receiving device. In this way, the broadcast station may provide the audio-visual content with enhanced service data, but the enhanced service data ultimately provided to the AV presenting devices and/or broadcast receiving devices, if present, may not be the same as that provided by the broadcast station.

[5]

The broadcast receiving device extracts audio-visual content from the signal received from the MVPD and provides only uncompressed audio-visual data to the AV presenting devices, so only the enhanced service data provided to the broadcast receiving device is available. Moreover, the same enhanced service data provided by the broadcast station may not be provided to the broadcast receiving device and/or the AV presenting device.

[6]

The foregoing and other objects, features, and advantages of the present invention will be more readily understood from the following detailed description of the invention, taken in conjunction with the accompanying drawings.

[7]

In one example, a method of processing a data stream comprises the following steps.

[8]

(a) receiving a data stream comprising a watermark message encoded in the data stream;

[9]

(b) extracting a uniform resource identifier (URI) message, related to communicating uniform resource identifiers, from the watermark message;

[10]

(c) extracting from the uniform resource identifier message a URI type that identifies the type of uniform resource identifier that follows within the URI message; and

[11]

(d) optionally determining whether the URI type identifying the type of uniform resource identifier has a value of 0x01, indicating a uniform resource identifier of a signaling server providing access to service layer signaling;

[12]

(e) optionally determining whether the URI type identifying the type of uniform resource identifier has a value of 0x02, indicating a uniform resource identifier of an electronic service guide data server providing access to electronic service guide data;

[13]

(f) optionally determining whether the URI type identifying the type of uniform resource identifier has a value of 0x03, indicating a uniform resource identifier of a service usage data collection reporting server for use in reporting service usage; and

[14]

(g) optionally determining whether the URI type identifying the type of uniform resource identifier has a value of 0x04, indicating a uniform resource identifier of a WebSocket server providing access to dynamic events over the WebSocket protocol.
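The uri_type dispatch in steps (d) through (g) can be sketched as a simple lookup. The numeric values follow those recited above; the function name, message strings, and fallback label are illustrative, not part of the specification.

```python
# uri_type values recited in the example method above
URI_TYPES = {
    0x01: "signaling server (service layer signaling)",
    0x02: "electronic service guide data server",
    0x03: "service usage data collection reporting server",
    0x04: "WebSocket server (dynamic events over WebSocket)",
}

def classify_uri_message(uri_type: int) -> str:
    """Return the kind of server a URI message points to, based on
    the uri_type field extracted from the watermark message."""
    return URI_TYPES.get(uri_type, "reserved/unknown")

print(classify_uri_message(0x02))  # electronic service guide data server
```

In a receiver, each branch would hand the URI that follows the type field to the appropriate client (signaling fetch, ESG fetch, usage reporting, or WebSocket connection).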

[15]

In one example, a device for processing a data stream is provided. The device includes one or more processors configured to:

[16]

(a) receive a data stream comprising a watermark message encoded in the data stream;

[17]

(b) extract a uniform resource identifier (URI) message, related to communicating uniform resource identifiers, from the watermark message;

[18]

(c) extract from the uniform resource identifier message a URI type that identifies the type of uniform resource identifier that follows within the URI message; and

[19]

(d) determine whether the URI type identifying the type of uniform resource identifier has a value of 0x01, indicating a uniform resource identifier of a signaling server providing access to service layer signaling;

[20]

(e) determine whether the URI type identifying the type of uniform resource identifier has a value of 0x02, indicating a uniform resource identifier of an electronic service guide data server providing access to electronic service guide data;

[21]

(f) determine whether the URI type identifying the type of uniform resource identifier has a value of 0x03, indicating a uniform resource identifier of a service usage data collection reporting server for use in reporting service usage; and

[22]

(g) determine whether the URI type identifying the type of uniform resource identifier has a value of 0x04, indicating a uniform resource identifier of a WebSocket server providing access to dynamic events over the WebSocket protocol.

[23]

FIG. 1 shows a system with enhanced service information.
FIG. 2 illustrates another system with enhanced information.
FIG. 3 illustrates a data flow for a system with enhanced information.
FIG. 4 illustrates another system with enhanced information.
FIG. 5 shows a watermark payload.
FIG. 6 shows another watermark payload.
FIG. 7 shows relationships between watermark payloads.
FIG. 8 shows relationships between watermark payloads.
FIG. 9 shows relationships between watermark payloads.
FIG. 10 illustrates another system with enhanced information.
FIG. 11 illustrates obtaining and maintaining synchronization.
FIG. 12 shows another watermark payload.
FIG. 13 shows SDO data.
FIG. 14 illustrates metadata encapsulated as SDO payloads using cmdID.
FIG. 15 shows a watermark embedding system.
FIG. 16 shows a watermark extraction system.
FIG. 17 illustrates an expiration time value, an urgency flag, a severity indicator, and a certainty indicator of an emergency message.
FIG. 18 illustrates an example emergency alert message.
FIG. 19 illustrates another exemplary emergency alert message.
FIG. 20 illustrates an exemplary set of certainty and severity codes.
FIG. 21 illustrates another exemplary emergency alert message.
FIG. 22 illustrates another exemplary emergency alert message.
FIG. 23 illustrates another exemplary emergency alert message.
FIG. 24a illustrates an exemplary bitstream syntax of a watermark message block.
FIG. 24b is an exemplary mapping of the field wm_message() to the watermark message wm_message_id.
FIG. 24c shows an exemplary syntax of wm_message().
FIG. 24d illustrates an exemplary syntax of URI messages.
FIG. 24e shows an exemplary mapping from the value uri_type to URI types.
FIG. 24f illustrates another example syntax for URI messages.
FIG. 24g shows another exemplary mapping from the value uri_type to URI types.
FIG. 25a illustrates an example dynamic event message.
FIG. 25b illustrates a transfer protocol type field encoding.
FIG. 25c illustrates another exemplary dynamic event message.
FIG. 26a shows an exemplary syntax of emergency_alert_message().
FIG. 26b illustrates an exemplary encoding of severity and certainty.

[24]

Definitions

[25]

The uimsbf format represents an unsigned integer, most significant bit (MSB) first format.

[26]

When the value in the number of bits column is equal to var, it represents a variable length field.

[27]

The reserved field indicates that the bits corresponding to the field are reserved for future use.

[28]

Hexadecimal (also base 16, or hex) is a positional numeral system with a radix, or base, of 16. It uses 16 distinct symbols, most often the symbols 0-9 to represent values zero through nine, and A, B, C, D, E, F (or alternatively a, b, c, d, e, f) to represent values ten through fifteen. Hexadecimal numbers often use the prefix "0x".

[29]

When used to represent arithmetic operations, x^y denotes exponentiation, i.e. x raised to the power of y. In other contexts, such notation is used for superscripting not intended to be interpreted as exponentiation.

[30]

Detailed description of a preferred embodiment

[31]

Referring to FIG. 1, the system may include a content source 100, a content recognition service providing server 120, a multi-channel video program distributor 130, an enhanced service information providing server 140, a broadcast receiving device 160, a network 170, and an AV presenting device 180.

[32]

The content source 100 may correspond to a broadcast station that broadcasts a broadcast signal comprising one or more streams of audio-visual content (e.g. audio and/or video). The broadcast signal may further include enhanced service data and/or signaling information. The enhanced service data is preferably associated with one or more of the audio-visual broadcast streams. The enhanced service data may have any suitable format, for example service information, metadata, additional data, compiled executable files, web applications, HTML (Hypertext Markup Language) documents, XML (Extensible Markup Language) documents, CSS (Cascading Style Sheets) documents, audio files, video files, ATSC (Advanced Television Systems Committee) 2.0 content, and URLs (Uniform Resource Locators).

[33]

The content recognition service providing server 120 provides a content recognition service that allows the AV presenting devices 180 to recognize content based on the audio-visual content from the content source 100. The content recognition service providing server 120 may optionally modify the audio-visual broadcast content, such as by including a watermark. In some cases the AV presenting device 180 is a digital video recording device.

[34]

The content recognition service providing server 120 may include a watermark inserter. The watermark inserter may insert watermarks designed to carry enhanced service data and/or signaling information while being minimally intrusive to viewers. In other cases, an easily observable watermark may be inserted (e.g. readily visible in the image and/or readily audible in the audio). For example, an easily observable watermark may be a logo, such as a content provider's logo at the top-left or top-right of each frame.

[35]

The content recognition service providing server 120 may include a watermark inserter that modifies the audio-visual content to include a non-easily observable watermark (e.g. not readily visible in the image and/or not readily audible in the audio). For example, a non-easily observable watermark may include security information, tracking information, or other data. Other examples include channel, content, timing, trigger, and/or URL information.

[36]

The multi-channel video program distributor 130 receives broadcast signals from one or more broadcast stations and typically provides multiplexed broadcast signals to the broadcast receiving device 160. The multi-channel video program distributor 130 may perform demodulation and channel decoding on the received broadcast signals to extract the audio-visual content and the enhanced service data. The multi-channel video program distributor 130 may also perform channel encoding on the extracted audio-visual content and enhanced service data to generate a multiplexed signal for further distribution. The multi-channel video program distributor 130 may exclude the extracted enhanced service data and/or may include different enhanced service data.

[37]

The broadcast receiving device 160 can tune to a channel selected by a user and receive an audio-visual signal of the tuned channel. The broadcast receiving device 160 typically performs demodulation and channel decoding on the received signal to extract the desired audio-visual content. The broadcast receiving device 160 decodes the extracted audio-visual content using any suitable technique, for example H.264/MPEG-4 AVC (Moving Picture Experts Group-4 Advanced Video Coding), H.265/HEVC (High Efficiency Video Coding), Dolby AC-3, and MPEG-2 AAC (Advanced Audio Coding). The broadcast receiving device 160 typically provides uncompressed audio-visual content to the AV presenting device 180.

[38]

The enhanced service information providing server 140 provides enhanced service information related to the audio-visual content in response to a request from the AV presenting device 180.

[39]

The AV presenting device 180 may include a display, and may be, for example, a television, a notebook computer, a digital video recorder, a mobile phone, or a smart phone. The AV presenting device 180 may receive uncompressed audio-visual (or video or audio) content from the broadcast receiving device 160, encoded audio-visual (or video or audio) content from the content source 100, and/or encoded audio-visual (or video or audio) content from the multi-channel video program distributor 130. In some cases, the uncompressed video and audio may be received via an HDMI cable. The AV presenting device 180 may receive, via the network 170, an address of an enhanced service related to the audio-visual content from the content recognition service providing server 120, and may receive the enhanced service from the enhanced service information providing server 140.

[40]

It should be understood that the content source 100, the content recognition service providing server 120, the multi-channel video program distributor 130, and the enhanced service information providing server 140 may be combined or omitted, as desired. It should be understood that these are logical roles. In some cases some of these entities may be separate physical devices; in other instances, some of these logical entities may be implemented in the same physical device. For example, the broadcast receiving device 160 and the AV presenting device 180 may be combined, if desired.

[41]

Referring to FIG. 2, the modified system may include a watermark inserter 190. The watermark inserter 190 may modify the audio-visual (e.g. audio and/or video) content to include additional information in the audio-visual content. The multi-channel video program distributor 130 may receive and distribute a broadcast signal including the modified audio-visual content with the watermark.

[42]

The watermark inserter 190 preferably modifies the signal so that it includes additional digital information in the audio-visual content. In readily observable watermarking, the inserted information may be readily identifiable in the audio and/or video. In non-readily observable watermarking, even though the information is included in the audio-visual content (e.g. audio and/or video), the user does not easily recognize the information.

[43]

One use for watermarking is copyright protection, for suppressing illegal copying of digital media. Another use for watermarking is source tracking of digital media. A further use for watermarking is descriptive information for digital media. Another use of watermarking is to provide location information where additional content associated with the digital media may be received. Yet another use is identifying the content and content source being viewed, and the current time point within that content, and then allowing the device to access desired additional functionality through an Internet connection. The watermark information is contained within the audio-visual content itself, as distinguished from metadata communicated alongside the audio-visual content. As an example, watermark information may be included using spread spectrum techniques, quantization techniques, and/or amplitude modulation techniques.
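To make the idea of carrying data inside the content itself concrete, the sketch below embeds payload bits into the least significant bits of audio samples. This is a deliberately simplified stand-in: the document describes spread spectrum, quantization, and amplitude modulation techniques, which are far more robust; LSB substitution here only illustrates the principle that the watermark travels within the samples rather than as separate metadata.

```python
def embed_watermark(samples, payload_bits):
    """Embed payload bits into the least significant bit of successive
    audio samples (illustrative only; real systems use more robust
    spread-spectrum or quantization schemes)."""
    out = list(samples)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the payload bit
    return out

def extract_watermark(samples, n_bits):
    """Recover the first n_bits of the payload from the samples' LSBs."""
    return [s & 1 for s in samples[:n_bits]]

marked = embed_watermark([100, 101, 102, 103], [1, 0, 1, 1])
print(extract_watermark(marked, 4))  # [1, 0, 1, 1]
```

Note that an LSB watermark like this would not survive the transcoding discussed later in the document, which is one reason practical systems use the more resilient techniques named above.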

[44]

Referring to FIG. 3, an example data flow is shown. The content source 100 transmits (201) a broadcast signal comprising at least one audio-visual content and enhanced service data to the watermark inserter 190.

[45]

The watermark inserter 190 receives the broadcast signal provided by the content source 100, and inserts a readily observable and/or non-easily observable watermark in the audio-visual content. The modified audio-visual content with the watermark is provided (203) to the MVPD 130 along with the enhanced service data.

[46]

The content information associated with the watermark may include, for example, identification information, audio-visual content identification information, the name of a content section used in content information acquisition, the names of the channels over which the audio-visual content is broadcast, descriptions of those channels, a usage information reporting period, a minimum usage time for usage information acquisition, statistics on sports events, displays, widgets, applications, executables, and/or available enhanced service information related to the audio-visual content.

[47]

An acquisition path for the available enhanced service data may be expressed in any manner, such as an Internet protocol based path or an ATSC M/H (Advanced Television Systems Committee Mobile/Handheld) path.

[48]

The MVPD 130 may receive broadcast signals including the watermarked audio-visual content and the enhanced service data, and generate a multiplexed signal that is provided (205) to the broadcast receiving device 160. The multiplexed signal may exclude the received enhanced service data and/or may include different enhanced service data.

[49]

The broadcast receiving device 160 can tune to the channel selected by the user, demodulate the received signal, perform channel decoding and audio-video decoding on the demodulated signal to produce uncompressed audio-visual content, and then provide (206) the uncompressed audio-visual content to the AV presenting device 180. The content source 100 can also broadcast the audio-visual content to the AV presenting device 180 over a channel (207). The MVPD 130 can transmit (208) a broadcast signal containing the audio-visual content directly to the AV presenting device 180, without passing through the broadcast receiving device 160. In other cases, some of the AV information may be transmitted to the AV presenting device 180 over a broadband connection. In some cases it may be a managed broadband connection; in other cases it may be an unmanaged broadband connection.

[50]

The AV presenting device 180 is capable of receiving uncompressed audio-visual content from the broadcast receiving device 160. Additionally, the AV presenting device 180 may receive a broadcast signal from the content source 100 through a channel, and then demodulate and decode the received broadcast signal to obtain the audio-visual content. In addition, the AV presenting device 180 may receive a broadcast signal from the MVPD 130, and then demodulate and decode the received broadcast signal to obtain the audio-visual content. The AV presenting device 180 (or the broadcast receiving device 160) extracts watermark information from a selection of audio samples, or from one or more video frames, of the received audio-visual content. The AV presenting device 180 may use the information obtained from the watermark to make a request (209) to the enhanced service information providing server 140 for additional information. The enhanced service information providing server 140 can provide a reply (211) in response.

[51]

Referring to FIG. 4, a further example includes a content source 100 that provides audio-visual content to the watermark inserter 190 along with enhanced service data (if desired). In addition, the content source 100 may provide a code 300 to the watermark inserter 190 along with the audio-visual content. The code 300 may be any suitable code that identifies which of a plurality of audio-visual streams should be modified with a watermark. For example, code=1 may identify audio-visual stream 1, code=2 may identify audio-visual stream 2 from ABC, code=4 may identify audio-visual stream 4 from NBC (National Broadcasting Company), and so on. The code may include temporal location information within the audio-visual content. The code may include, if desired, other metadata.

[52]

The watermarked audio-visual content and associated data and signaling are provided by the watermark inserter 190 to the MVPD, which in turn can provide the watermarked compressed audio-visual content to the broadcast receiving device 160 (e.g. a set-top box). The broadcast receiving device 160 may provide the watermarked audio-visual content (e.g. uncompressed) to the AV presenting device 180. The AV presenting device 180 may comprise a watermark capable receiver 310 together with a watermark client 320. The watermark capable receiver 310 detects the presence of a watermark in the audio-visual content and extracts the watermark data from the audio-visual content. The watermark client 320 requests additional data based on the data extracted from the watermark, and subsequently uses this additional data in a suitable manner.

[53]

The AV presenting device 180 may make a request (351) to the metadata server 350 using the code 300 from the extracted watermark. A code database 370 receives data including the code 300 and associated metadata 360 from the content source 100. The code 300 and associated metadata 360 are stored in the code database 370 for subsequent use. In this way, the code 300 provided to the watermark inserter 190 and encoded within the audio-visual content is also stored with its associated metadata 360 in the code database 370. When the MVPD 130, or anything else, removes or otherwise modifies the associated metadata, it is recoverable by the AV presenting device 180 from the metadata server 350, which queries the code database 370 using the provided code and provides a response (353) with the associated metadata 360 to the AV presenting device 180. The reply metadata provided by the metadata server 350 is used by the AV presenting device 180 to form a request (355) provided to the content and signaling server 380. The content and signaling server 380 provides, in response to this request, the selected content and signaling (357) to the AV presenting device 180. In general, the content and signaling server 380 may be different from the metadata server 350.

[54]

However, making a first request to the metadata server for the code-related metadata, and subsequently using that metadata to make a second request to the content and signaling server 380, tends to be burdensome and failure prone because two different servers and/or requests are used. In addition, this may increase latency.

[55]

As an example, the metadata may consist of one or more of the following syntax elements:

[56]

(1) The location of the content and signaling server, such as its network address (examples of network addresses: domain names, IPv4 addresses, and so on).

[57]

(2) A protocol to be used for communication with the content and signaling server (for example, Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), etc.).

[58]

(3) A time code identifying a temporal location in the audio-visual content (e.g. where the metadata has to be associated in the audio-visual content).

[59]

(4) Time sensitive event triggers (e.g. advertisements or events for specific locations in the audio-visual content).

[60]

(5) Channel identification (for example, channel-specific information; local channel content).

[61]

(6) A duration over which requests to the content and signaling server are randomly spread by the client (e.g. for load balancing). For brevity, this syntax element may also be referred to as a duration for content server requests.

[62]

(7) Others.
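Collected together, the syntax elements above can be sketched as a simple structure. The field names, types, and the URL-forming helper below are illustrative assumptions, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class WatermarkMetadata:
    server_address: str           # (1) content and signaling server location
    protocol: str                 # (2) e.g. "http" or "https"
    time_code: int                # (3) temporal location, e.g. in seconds
    time_sensitive_trigger: bool  # (4) query for interactive content?
    channel_id: int               # (5) channel identification
    request_spread_duration: int  # (6) duration for content server requests

    def request_url(self, path: str) -> str:
        """Form a request URL for the content and signaling server."""
        return f"{self.protocol}://{self.server_address}/{path}"

md = WatermarkMetadata("203.0.113.5", "https", 12345, True, 7, 2)
print(md.request_url("signaling"))  # https://203.0.113.5/signaling
```

A client holding such a structure would wait a random delay within request_spread_duration before issuing the request, per element (6).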

[63]

The watermark embedded in the audio-visual content typically has the capacity to carry only a few bits of payload information when the watermark is non-easily observable. For relatively small payload sizes, the time code (element 3 above) and/or the location of the content and signaling server (element 1 above) tend to occupy a significant percentage of the available payload, leaving limited additional payload for the remaining data, which tends to be problematic.

[64]

It may be desirable to partition the metadata across multiple watermark payloads so that both the time code and the location information, together with additional information, may be carried within the watermark. Each of the watermark payloads is preferably included in a different portion of the audio-visual content. Data extracted from multiple watermark payloads are combined together to form the set of information used to make a request. In the description below, the term payload is used to indicate a watermark payload. Each syntax element may be included in a single payload, may span multiple payloads, and/or may be fragmented across multiple payloads. Each payload may be assigned a payload type for purposes of identification. Further, associations may be established between multiple payloads that belong to the same, or approximately the same, timeline location. Moreover, this association may be, as desired, uni-directional or bi-directional.

[65]

The desired time code data may be obtained from payloads spanning several temporal locations of the audio-visual content. Some systems thus may establish rules that associate the determined time code with a particular temporal location of the audio-visual content. In an example, the selected temporal location may correspond to the temporal location at the end of a predetermined watermark payload.

[66]

For example, the payload size may be 50 bits while the preferred metadata may require 70 bits, thus exceeding the payload size of a single watermark. An example of preferred metadata may be as follows:

[67]

(I) Location of content and signaling server: 32 bits (Internet protocol "IP" address)

[68]

(A) Application layer protocol: 1 bit (http/https)

[69]

(T) Time code: 25 bits (covering 1 year at a granularity of 1 second)

[70]

(D) Time sensitive trigger: 1 bit (a value of 1 indicates that the AV presenting device has to query for interactive content; a value of 0 indicates that the AV presenting device should not query for interactive content, e.g. as with a time base trigger)

[71]

(L) Channel identification: 9 bits

[72]

(R) Duration for content server requests: 2 bits
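A quick check confirms that this first layout totals 70 bits, exceeding a 50-bit payload. The dictionary keys below are illustrative labels for the fields listed above:

```python
# Field widths in bits for the first example metadata layout
FIELDS = {
    "I_server_location": 32,  # IP address of content and signaling server
    "A_protocol": 1,          # http/https
    "T_time_code": 25,        # ~1 year at 1-second granularity
    "D_trigger": 1,           # time sensitive trigger
    "L_channel_id": 9,        # channel identification
    "R_request_spread": 2,    # duration for content server requests
}

PAYLOAD_BITS = 50  # example capacity of a single watermark payload

total = sum(FIELDS.values())
print(total, total > PAYLOAD_BITS)  # 70 True
```

This is the arithmetic motivation for splitting the metadata across two payloads, as described below.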

[73]

Another example of preferred metadata may be as follows:

[74]

(I) Location of content and signaling server: 32 bits (IP address)

[75]

(A) Application layer protocol: 2 bits (00=http, 01=https, 10=reserved, 11=reserved)

[76]

(T) Time code: 25 bits (covering 1 year at a granularity of 1 second)

[77]

(D) Time sensitive trigger: 1 bit

[78]

(L) Channel identification: 9 bits

[79]

(R) Duration for content server requests: 2 bits

[80]

One way of partitioning the metadata is to include the CSSCI information in one payload, and the timeline information in another payload. The CSSCI payload may include, for example, "where" information (e.g. the location of the content and signaling server), association information (e.g. how the CSSCI payload is to be associated with one or more other payloads), and "how" information (e.g. the application layer protocol, and the duration for content server requests). The timeline payload may include, for example, association information, "when" information (e.g. the time code), and "which" information (e.g. the channel identification).
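Under such a split, the 70-bit example metadata from above could be carried as two payloads, each well within a 50-bit budget. The packing order, helper function, and sample values below are illustrative assumptions (and the shared association identifier P, discussed later, is omitted for brevity):

```python
def pack_bits(fields):
    """Pack (value, width) pairs into a single integer, first field in
    the most significant position."""
    out = 0
    for value, width in fields:
        out = (out << width) | (value & ((1 << width) - 1))
    return out

# CSSCI payload: "where" and "how" information
# I = server IP (32b), A = protocol (1b), R = request spread (2b)
cssci = pack_bits([(0xCB007105, 32), (1, 1), (2, 2)])

# Timeline payload: "when" and "which" information
# T = time code (25b), D = trigger (1b), L = channel id (9b)
timeline = pack_bits([(123456, 25), (1, 1), (7, 9)])

print(cssci.bit_length() <= 50, timeline.bit_length() <= 50)  # True True
```

Each half now fits in a single watermark payload, at the cost of having to associate the two halves at the receiver.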

[81]

Referring to FIG. 5, an exemplary CSSCI payload is shown.

[82]

Referring to FIG. 6, an example temporal location payload is shown. The term timeline location may be used interchangeably with the term temporal location.

[83]

The payload type may be identified by a 1-bit field "Y". When Y is set to 0, the payload corresponds to a CSSCI payload, and a 14-bit payload identifier P is used to identify the CSSCI payload. When Y is set to 1, the payload corresponds to a temporal location payload, and the 14-bit payload identifier P signals the corresponding CSSCI payload. In consequence, different payload types with the same payload identifier P value are associated with each other. The identifier R indicates the time duration for spreading out the content and signaling server requests. In yet another example, "Y" may correspond to a 2-bit field, where the value 00 indicates a CSSCI payload, the value 01 indicates a temporal location payload, and the values 10 and 11 are reserved for future use.
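The header fields just described can be parsed with straightforward bit operations. The sketch below assumes the payload is delivered as bytes with the 1-bit Y in the most significant position followed by the 14-bit identifier P; the exact bit ordering is not specified in the text, so this layout is an illustrative assumption:

```python
def parse_payload_header(payload: bytes):
    """Extract the 1-bit payload type Y and the 14-bit payload
    identifier P from the first two bytes of a watermark payload.
    Assumes Y is the most significant bit, immediately followed by P."""
    first16 = (payload[0] << 8) | payload[1]
    y = first16 >> 15            # 1-bit payload type
    p = (first16 >> 1) & 0x3FFF  # next 14 bits: payload identifier
    kind = "temporal-location" if y == 1 else "CSSCI"
    return kind, p

print(parse_payload_header(bytes([0x80, 0x06])))  # ('temporal-location', 3)
```

A 2-bit Y variant, as in the alternative example above, would simply shift P down by one more bit and treat values 10 and 11 as reserved.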

[84]

Referring to FIG. 7, an exemplary time line is shown. A first CSSCI payload (e.g. CSSCI-0) has one set of association information P, while a second CSSCI payload (e.g. CSSCI-1) has a different set of association information P. Having two different values of association information P distinguishes the two CSSCI payloads from each other and identifies them. A first temporal location payload (e.g. Timeline-0) and a second temporal location payload (e.g. Timeline-1) each have association information P matching the association information P of CSSCI-0, and a third temporal location payload (e.g. Timeline-2) has association information P matching the association information P of CSSCI-1. In this way CSSCI-0 and Timeline-0; CSSCI-0 and Timeline-1; and CSSCI-1 and Timeline-2 are linked together as pairs of watermark information. This allows the same CSSCI payload to be used with multiple different temporal location payloads.

[85]

As shown, each temporal location payload is associated with a previously received CSSCI payload. The association is therefore unidirectional. If a previously received CSSCI payload matching a temporal location payload is not available, the system may determine that the packet is lost or that the watermarking is otherwise not effective. Audio-video content tends to be modified by audio-video transcoding, such as to reduce the bitrate of the content, so loss of watermarking data occurs with some frequency.

[86]

Referring to FIG. 8, an exemplary timeline is shown. A first CSSCI payload (e.g. CSSCI-0) has a first set of association information P, while a second CSSCI payload (e.g. CSSCI-1) has a second, different set of association information P. Having two different values of association information P distinguishes the two CSSCI payloads from each other and identifies them. A first temporal location payload (e.g. Timeline-0) has a set of association information P matching the association information P of CSSCI-0; a second temporal location payload (e.g. Timeline-1) likewise has a set of association information P matching that of CSSCI-0; and a third temporal location payload (e.g. Timeline-2) has a set of association information P matching that of CSSCI-1. In this way CSSCI-0 and Timeline-0; CSSCI-0 and Timeline-1; and CSSCI-1 and Timeline-2 are linked together as pairs of watermarked information. This allows the same CSSCI payload to be used for multiple different temporal location payloads. As seen, two of the temporal location payloads are associated with a previously received CSSCI payload, while one of the CSSCI payloads is associated with a subsequently received temporal location payload; the association is therefore bidirectional. If a corresponding CSSCI payload matching a temporal location payload is not available, the system may determine that the packet is lost or that the watermarking is otherwise not effective. Similarly, if a corresponding timeline payload matching a CSSCI payload is not available, the system may determine that the packet is lost or that the watermarking is otherwise not effective. Audio-video content tends to be modified by audio-video transcoding, such as to reduce the bitrate of the content, so loss of watermarking data occurs with some frequency.

[87]

In an example, a CSSCI payload (e.g. CSSCI-0) may have two sets of association information P0 and P1, as may a temporal location payload. For example, Timeline-0 has two sets of association information P0 and P1 matching the association information P0 and P1 of CSSCI-0. In this example a bidirectional association exists between CSSCI-0 and Timeline-0, where P0 identifies CSSCI-0 and P1 identifies Timeline-0.

[88]

The number of bits assigned to the payload identifier P may be modified as desired (e.g. for the desired robustness). Likewise, the number of bits assigned to I, A, T, D, L, and R may be modified as desired.

[89]

In an example, AV presentation device 180 may maintain a list, represented by the variable listC, of the "c" most recently received CSSCI payloads. The value "c" may be provided in the watermark, if desired, or may be set by the system. In this way AV presentation device 180 only has to maintain a limited number of CSSCI payloads in memory. In the case of c=1, once a CSSCI payload is received it remains valid until another CSSCI payload is received, as shown in FIG. 9. The loss of a CSSCI payload may be detected using the payload identifier P, for example when a temporal location payload includes a P that does not correspond to any of the CSSCI payloads in listC. In this way the same user experience may be achieved across varying AV presentation devices (180).
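The listC bookkeeping described above can be sketched as follows. The class and method names are assumptions; the eviction behavior simply keeps the "c" most recent CSSCI payload identifiers, as the text describes:

```python
from collections import deque


class CSSCITracker:
    """Sketch of listC: the 'c' most recently received CSSCI payloads,
    tracked by their payload identifier P."""

    def __init__(self, c: int):
        # deque(maxlen=c) drops the oldest entry automatically when full
        self.listC = deque(maxlen=c)

    def receive_cssci(self, payload_id: int) -> None:
        self.listC.append(payload_id)

    def receive_temporal_location(self, payload_id: int) -> bool:
        """Return True if the association is intact; False signals that a
        CSSCI payload was lost or the watermarking is not effective."""
        return payload_id in self.listC
```

With c=1 this reproduces the FIG. 9 behavior: a CSSCI payload remains valid until the next one arrives, and a temporal location payload whose P is not in listC indicates a lost packet.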

[90]

In an example, AV presentation device 180 may maintain more than one list of received CSSCI payloads. Each list may differ in size and may be maintained using a different set of rules (i.e. for adding/removing entries within the list). It should be understood that this does not exclude the possibility that a subset of the lists may have the same size and/or the same maintenance rules. As an example, two lists may be maintained by AV presentation device 180, where one list includes the "c1" most recently received CSSCI payloads, while another list includes the "c2" most recently received CSSCI payloads, where each payload in the second list is received at an interval of "d" CSSCI payloads.

[91]

Referring to FIG. 10, the modified system may include a content source (100), a watermark embedder (190), an MVPD (130), a broadcast receiving device (160), and an AV presentation device (310) together with a watermark client (320). The content server (400) may be modified to include a code database (370), a metadata server (350), and a content and signaling server (380). The code (300) and metadata (360) are provided by the content source (100) to the content server (400). The content and signaling data are provided to the content and signaling server (390).

[92]

AV presentation device 180 may provide a code in a request based on one or more watermarks decoded from the audio-video broadcast. The content server (400) receives the request from AV presentation device (180). The server (380) parses the received code request and, based on information from the code database (370), requests the content and signaling server (390) to determine the content and signaling information to be presented next to AV presentation device (180). The content and signaling information are then transmitted to AV presentation device (180). In this manner, AV presentation device 180 only needs to make a single request to a single content server 400, which in turn provides a response to AV presentation device 180. It should be understood that the different functions of the content server 400 may be combined together, separated into more components, omitted, and/or modified by any other technique.

[93]

The http/https request URL corresponding to the payloads of FIGS. 5 and 6 (to be sent to the content server (400)) may be defined as follows when the time sensing trigger D is equal to 1.

[94]

When A is equal to 0, the http request URL is as follows:

[95]

http://IIIIIIII.IIIIIIII.IIIIIIII.IIIIIIII/LLLLL?Time=TTTTTTTTTTTTTTTTTTTTTTTTT

[96]

Alternatively, when A is equal to 1, the https request URL is as follows:

[97]

https://IIIIIIII.IIIIIIII.IIIIIIII.IIIIIIII/LLLLL?Time=TTTTTTTTTTTTTTTTTTTTTTTTT

[98]

In the above, IIIIIIII.IIIIIIII.IIIIIIII.IIIIIIII corresponds to the 32-bit IP address signaled in the CSSCI payload.
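The URL construction from the decoded fields can be sketched as follows. The function name and the exact textual formatting (dotted-decimal host, decimal channel path, decimal Time parameter) are assumptions consistent with the pattern shown above:

```python
def build_request_url(a: int, ip: int, l: int, t: int) -> str:
    """Build the content-server request URL from decoded watermark fields.

    a  : application layer protocol bit (0 = http, 1 = https)
    ip : 32-bit IP address (I), signaled in the CSSCI payload
    l  : 5-bit channel identifier (L)
    t  : 25-bit time code (T)
    """
    # Split the 32-bit IP address into four dotted-decimal octets.
    octets = [(ip >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    scheme = "https" if a else "http"
    host = ".".join(str(o) for o in octets)
    return f"{scheme}://{host}/{l}?Time={t}"
```

For instance, with A=0, I=0xC0A80001, L=3, and T=12345 this yields http://192.168.0.1/3?Time=12345.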

[99]

In an example, a subset of URL-specifying information, such as the content server location, communication protocol, communication port, login information, and a folder on the content server, may be conveyed in a designated payload type.

[100]

In some implementations, the value of a syntax element can be derived using a decoding process that accesses information spanning multiple payloads. For example, a time code may be fragmented into multiple watermark payloads and then reassembled to construct the complete time code. In an example, the time code may correspond to a temporal location within the audio-visual content. In an example, the time code may correspond to timeline data of the audio-visual content.

[101]

For example, the payload size may be 50 bits while the preferred metadata may be 66 bits, thus exceeding the payload size of a single watermark. An example of the preferred metadata is as follows:

[102]

Location (I): 32 bits (IP address of the content and signaling server)

[103]

Application layer protocol (A): 1 bit (http/https)

[104]

Time code (T): 25 bits (for 1 year with a granularity of 1 second)

[105]

Time-sensing trigger (D): 1 bit

[106]

Channel identifier (L): 5 bits

[107]

Duration (R): 2 bits (for content server requests)
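The packing of this 66-bit metadata (32+1+25+1+5+2 bits) and its split across two 50-bit watermark payloads can be sketched as follows. The field order and the split point are assumptions for illustration; only the bit widths come from the list above:

```python
def pack_metadata(i: int, a: int, t: int, d: int, l: int, r: int) -> int:
    """Pack the example 66-bit metadata (I:32, A:1, T:25, D:1, L:5, R:2)
    into a single integer, most significant field first."""
    bits = 0
    for value, width in ((i, 32), (a, 1), (t, 25), (d, 1), (l, 5), (r, 2)):
        bits = (bits << width) | (value & ((1 << width) - 1))
    return bits  # 66-bit value


def split_into_payloads(bits: int, total: int = 66, size: int = 50):
    """Split the packed metadata across two watermark payloads, since 66
    bits exceed the 50-bit capacity of a single payload."""
    first = bits >> (total - size)               # most significant 50 bits
    second = bits & ((1 << (total - size)) - 1)  # remaining 16 bits
    return first, second
```

A receiver reverses the split by shifting the first fragment left by 16 bits and OR-ing in the second, recovering the original 66-bit value.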

[108]

Another example of preferred metadata may be as follows:

[109]

Location (I): 32 bits (IP address of the content and signaling server)

[110]

Application layer protocol (A): 2 bits (00=http, 01=https, 10=reserved, 11=reserved)

[111]

Time code (T): 25 bits (for 1 year with a granularity of 1 second)

[112]

Time-sensing trigger (D): 1 bit

[113]

Channel identifier (L): 5 bits

[114]

Duration (R): 2 bits (for content server requests)

[115]

Referring to FIG. 11, a state transition diagram depicts one technique for calculating a time code. To obtain time code synchronization, a payload of type "start sync" is followed by a number "r" of consecutive payloads of type "not start sync". Using the entire run of "r" contiguous payloads, each carrying partial time information, time synchronization can be established by calculating an anchor time code. After calculating the anchor time code, the time code may be updated by receiving additional payloads that include partial time code information, in such a manner that receiving another whole run of "r" contiguous payloads is not required to determine the next time code. One technique to achieve this is partitioning an incremental time code across the successive payloads. When synchronization is lost, such as by changing channels, the synchronization acquisition process is performed again. When the video display device is first turned on, it enters the initial "synchronization acquisition" state.

[116]

Referring to FIG. 12, an exemplary structure of a watermark payload is shown. Z indicates the payload type, where Z equal to 1 indicates the start of the time sync and Z equal to 0 indicates that the payload is not the start of the time sync. S indicates the time sync payload bits used in determining the absolute time code. M indicates the time sync payload bits used in maintaining the time code.

[117]

As an example, AV presentation device 180 may receive n=7 successive watermark payloads, where the first payload has Z=1 and the subsequent watermark payloads have Z=0. The bits corresponding to "SSSS" are extracted from the (t-n+1)-th through t-th watermark payloads and concatenated together to obtain a 28-bit representation of the time code "Tt" of the temporal position. That is, "Tt" is the concatenation of the "S" bits from payload (t-n+1), which has Z=1, through payload t, which has Z=0. The anchor time code "Ct" is then set equal to "Tt". In another example, the extracted values may additionally be added to and/or multiplied by constants. In another alternative example, the derived values are mapped to different values by use of a mapping function.

[118]

Once synchronization is acquired, the anchor time code and the payload time code are updated using each respective payload. This can be performed, for example, as follows:

[119]

Tt = f(Ct-1, Mt)

[120]

Ct = g(Tt)

[121]

Here, f represents a mapping function that takes 2 values as input and outputs 1 value; g represents a mapping function that takes 1 value as input and outputs 1 value; and "/" denotes integer division with truncation of the result toward zero. For example, 7/4 is truncated to 1 and -7/4 is truncated to -1. In an example:

[122]

Tt = Ct-1 + Mt

[123]

Ct = Tt

[124]

As described above, the anchor time code may also be determined anew for every "n" payloads using the bits corresponding to "SSSS". The anchor time determined using "SSSS" must match the anchor time maintained as above, and can be used to verify the correctness of the maintained time code.
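The maintenance step Tt = Ct-1 + Mt, Ct = Tt can be sketched as follows. Modeling f as addition modulo the 28-bit code space and g as the identity follows the simplified equations above; the wraparound modulus is an assumption consistent with the 28-bit time code:

```python
def update_time_code(prev_anchor: int, m: int, modulus: int = 1 << 28):
    """One maintenance step after synchronization has been acquired.

    prev_anchor : Ct-1, the previous anchor time code
    m           : Mt, the maintenance bits of the current payload
    Returns (Tt, Ct). Here f(Ct-1, Mt) = (Ct-1 + Mt) mod 2**28 and
    g(Tt) = Tt, matching the simplified update above.
    """
    t = (prev_anchor + m) % modulus  # Tt = f(Ct-1, Mt)
    c = t                            # Ct = g(Tt)
    return t, c
```

Each payload thus advances the time code without requiring another full run of "r" payloads; a fresh "SSSS"-derived anchor can be compared against Ct to verify correctness.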

[125]

Since watermark detection can incur a non-zero delay, the temporal position to which the time code Tt corresponds may be determined by a set of rules; for example, Tt may correspond to the time instant at the end of the t-th watermark payload.

[126]

It should be understood that multiple syntax elements may be combined to form a code. The code may then be mapped, by AV presentation device 180 or by using another server, to different syntax element values. For example, the server information (e.g. the location and/or application layer protocol of the content and signaling server) and the time code may be combined into a single code. The single code is then mapped to the temporal location in the uncompressed audio-video stream and to the location of the content and signaling server. In this manner, a single request may be made to the server for the additional information.

[127]

A limited number of bits may be used for the time code, in this manner allowing collisions in the time code. For example, using 20 bits for a time code at a granularity of 1 second allows unique time codes for a maximum of approximately 12 days. After 12 days, the code space corresponding to the time code is reused, resulting in collisions.

[128]

In one example, the watermark payload may be encapsulated as an SDO payload in an SDO private data command using cmdIDs. By way of example, the payloads of FIG. 5 or FIG. 6 may be encapsulated as SDO payloads. A cmdID value of 0x05 can refer to a watermark-based interactive services trigger (triggered declarative object (TDO) model). A cmdID value of 0x06 can refer to a watermark-based interactive services trigger (direct execution model). This facilitates reuse of existing segmentation and reassembly modules built for trigger transport. The segmented commands can, if desired, be embedded in the watermarks. SDO private data may be structured as shown in FIG. 13, where the packet is included as part of SDO_payload(). In some examples, the watermark payload received in this way may be communicated to the entity and/or module at the receiver that handles these defined cmdID types. The segmentation and reassembly functionality of the corresponding module may then be re-used if the watermark payload packet needs to be divided into multiple packets, depending on the capacity of the selected watermark scheme in terms of number of bits.

[129]

The parameter Type (T) is defined in CEA-708 ("CEA: Digital Television (DTV) Closed Captioning, CEA-708," June 2013), section 7.1.11.2, where this instance of the SDOPrivateData command is a portion of a segmented variable-length command, as defined by the Consumer Electronics Association (CEA). If so, this instance is a 2-bit field indicating whether the segment is the first, middle, or last segment. The Type field in the SDOPrivateData command is encoded as specified in section 7.1.11.2 of CEA-708. pr is a flag that, when set to '1', asserts that the contents of the command are Program Related; when this flag is set to '0', the content of the command is not so asserted. Length (L) is an unsigned integer indicating the number of bytes following the header, in the range 2 to 27, expressed in the SDOPrivateData command as the set of bits L4 to L0, where L4 is the most significant and L0 is the least significant. cid (cmdID) is an 8-bit field identifying the SDO that has defined the SDO_payload() data structure and semantics. The metadata, as shown in FIG. 14, can be encapsulated within SDO private data as SDO payloads using cmdIDs.

[130]

In another example, the payloads may be encapsulated within SDO private data as SDO payloads using cmdIDs. cmdID values 0x05 and 0x06 can refer to the encapsulation of the payloads defined in FIG. 5 and FIG. 6, respectively. This facilitates reuse of existing segmentation and reassembly modules built for trigger transport. The segmented commands can, if desired, be embedded in watermarks. SDO private data may be structured as shown in FIG. 13, where the payload packet is included as part of SDO_payload().

[131]

The payload as defined in FIG. 12 may likewise be encapsulated within an SDO private data command as an SDO payload using cmdIDs. A cmdID value of 0x05 may refer to the encapsulation of the payload defined in FIG. 12. This facilitates reuse of existing segmentation and reassembly modules built for trigger transport. The segmented command can, if desired, be embedded in the watermarks. SDO private data may be structured as shown in FIG. 13, where the packet is included as part of SDO_payload().

[132]

Referring to FIG. 15, the transmitter of the system may receive one or more messages (530A, 530B, 530C) that are to be embedded into an essence (audio and/or video content) as a watermark. The one or more messages (530A, 530B, 530C) may be packaged in the form of one or more message fragments (520A, 520B, 520C). As an example, each message may be packaged in the form of a single corresponding fragment. As an example, each message may be packaged in the form of one or more corresponding fragments; in some cases, a message that exceeds the allowed length of a fragment may be spread across a plurality of corresponding fragments. In an example, each of the fragments is encoded for transmission only when no other fragments need to be transmitted. The transmitter may receive the message fragments and may generate a series of one or more payloads (510) to be embedded within the essence. In some cases, such a series may include embedding and/or transmitting the same message fragment several times. In an example, one payload is embedded within one unit of the essence (e.g. one picture of video and/or one segment of audio). Each payload (510) may include additional header and signaling information for its fragment(s). The essence, which may for example be a video picture and/or an audio segment, is received by a watermark embedder (500) that embeds the payload (510) therein to create a marked essence.

[133]

In an example system, it may be required that if one picture within a video segment carries a watermark, then all pictures within that video segment carry the watermark. The receiver can then detect the loss of pictures by detecting that pictures in the video segment included a watermark in an earlier time period although no watermark is currently detected in the video segment. A video segment corresponds to a group of successive pictures. Within the receiver, the video segment can be identified by the watermark extractor or by some external means.

[134]

Referring to FIG. 16, the decoder or receiver of the system may receive one or more marked essences, such as those provided by the transmitter of FIG. 15. A watermark payload extractor (600) extracts the payload(s) from the marked essence(s). The method for extracting the payload(s) from the marked essence is described below. One or more message fragments may be extracted (610) from the one or more payloads; the result of the extraction (610) is a series of one or more message fragments. Each of the one or more message fragments may be properly grouped and entered into message reassembly (620A, 620B, 620C). The results of message reassembly (620A, 620B, 620C) are a series of messages (630A, 630B, 630C). The messages (630A, 630B, 630C) may be the result of reassembly of one or more fragments, which may in turn be the result of one or more payloads, which may be the result of one or more marked essences. In an example, Message 1 (630A), Message (N-1) (630B), and Message N (630C), extracted and reassembled in FIG. 16, correspond to Message 1 (530A), Message (N-1) (530B), and Message N (530C) of FIG. 15, respectively. As an example, message reassembly may involve concatenating the message data included in a group of message fragments in a particular order.
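The grouping-and-concatenation step of message reassembly can be sketched as follows. The fragment fields (msg_id, index, total, data) are assumptions for illustration; the source only states that fragments are grouped per message and concatenated in a particular order:

```python
def reassemble_messages(fragments):
    """Group extracted fragments by message id and concatenate their
    data in order.

    Each fragment is a dict: {'msg_id', 'index', 'total', 'data'}.
    Returns {msg_id: bytes} for messages whose fragments all arrived;
    incomplete messages (lost fragments) are omitted.
    """
    groups = {}
    for frag in fragments:
        groups.setdefault(frag["msg_id"], []).append(frag)
    messages = {}
    for msg_id, frags in groups.items():
        if len(frags) == frags[0]["total"]:        # all fragments present
            frags.sort(key=lambda f: f["index"])   # restore fragment order
            messages[msg_id] = b"".join(f["data"] for f in frags)
    return messages
```

Fragments may arrive out of order (they are sorted by index before concatenation), and a message with a missing fragment simply never completes, mirroring the loss-detection behavior described earlier.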

[135]

In the example "1X" video watermark carries 30 bytes of payload data per video frame, whereas "2X" video watermark (ejection format) system passes 60 bytes per frame. These are sometimes referred to as 1X systems and 2X systems, respectively.

[136]

In an example, the payload format for the video watermark is the same in both 1X and 2X systems.

[137]

In the exemplary payload format for the video watermark, a run-in pattern is followed by one or more instances of a message block.

[138]

The message fragment may include type information indicating the particular type of information carried in the fragment. For example, the message type may indicate that the information includes a predefined subset of syntax elements (e.g. a content identifier, a media time). In some cases, the values taken by some syntax elements can be used to determine the exact subset of syntax elements included in the message fragment. For example, the message type may indicate that the information includes a channel identifier. For example, the message type may indicate a uniform resource identifier (URI) and a URI type. In another example, the message type may indicate that the information includes a content identifier.

[139]

In an example, the message fragment may include a content identifier that may correspond to an EIDR (Entertainment Identifier Registry) identifier.

[140]

In an example, the message fragment may include a content identifier that may correspond to an Ad-ID (advertising identifier), which is used to track advertising assets.

[141]

In an example, the message fragment may include length information about the variable length information contained therein.

[142]

In an example, the watermark payload may comprise a message.

[143]

In an example, a message may be included in one message fragment.

[144]

In an example, the watermark payload may carry one or more message fragments.

[145]

In an example, the message fragments may include, for example, a URI, an Ad-ID, and length information about the variable length information contained therein.

[146]

In an example, the message fragment may include length information about first variable length information included in the message fragment. The first variable length information may include a fixed length portion and second variable length information. The length of the second variable length information may be derived as the length of the first variable length information minus the length of the fixed length portion. The length of the fixed length portion can be derived in any suitable way; for example, it may be derived based on the message type, the length of the first variable length information, or the lengths of the syntax elements belonging to the fixed length portion included in the message fragment. In an example, the length of the second variable length information in the message fragment is derived as the length of the first variable length information minus the length of the fixed length portion included in the message fragment. In an example, the fixed length portion included in a message fragment may not be included contiguously. In an example, the fixed length portion included in the message fragment may be placed on either side of the second variable length information. In an example, the fixed length portion is only partially included in the message fragment. In an example, the fixed length portion may not be included in the message fragment at all.
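The length derivation described above reduces to a simple subtraction. This is a minimal sketch; the function name, the byte units, and the error handling are assumptions:

```python
def second_variable_length(first_length: int, fixed_length: int) -> int:
    """Derive the length of the second variable length information as the
    signaled length of the first variable length information minus the
    length of the fixed length portion (both in bytes here)."""
    if fixed_length > first_length:
        raise ValueError("fixed portion cannot exceed the signaled length")
    return first_length - fixed_length
```

For example, if the message fragment signals 40 bytes of first variable length information and the fixed length portion is known (e.g. from the message type) to be 12 bytes, the second variable length information occupies the remaining 28 bytes.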

[147]

In some audio-video environments, it is desirable for the system to have the ability to time-shift the audio-video content. This refers to recording the audio-visual content on a storage medium, such as a hard drive, and then viewing the recorded show at a later time, even if the recording has not yet been completed. In some audio-video environments, it is also desirable for the system to support trick mode functions, such as playback, pause, pause-live, jump to the next segment, jump to the last segment, resumption of the live broadcast content, and the like. In some audio-video environments, it is desirable for the system to have the capability to interrupt user preferences and interactive applications as needed in the case of an emergency alert. Commonly, emergency alerts are important messages originating from a federal, state, or local government that provide emergency information about events such as national and/or substantially regional earthquakes, floods, and other events. For such emergency alerts, often provided with the audio-visual content, it is desirable to remove the graphics being displayed on AV presentation device 180, such as video overlays or other graphical content, so that the emergency alert message is presented in an easily visible manner on the AV presentation device. For example, if a viewer is watching video content on an AV presentation device, such as a television, with other windows opened on the AV presentation device that interact with interactive TV applications, it is desirable to stop both the video content and the interactive TV applications so that the emergency alert message is readily visible on the AV presentation device. Displaying the emergency alert message only in the video content may be insufficient in some situations where it is obfuscated by other applications, such as interactive TV applications.
To the extent some components in an audio-video environment are not available to viewers from an MVPD, such as a cable, satellite, or Internet Protocol television (IPTV) operator, the system should be able to enable the receivers to retrieve the missing components of services via alternative networks (e.g. broadband network access). Often this may include emergency alert messages and their contents, which may not be available to AV presentation device 180 because the broadcast receiving device 160 (e.g. a set-top box) that receives the audio-visual content uses a high definition multimedia interface (HDMI) connection that provides the AV presentation device with only the audio and video information. The AV presentation device can render the audio and/or visual content. It should be understood that the AV presentation device may be any device that can be networked together with others, such as for a multi-screen interactive TV session.

[148]

While presenting audio-video content that includes emergency alert messages, such as emergency alert signals embedded as watermarks in the broadcast audio and/or video content provided by the broadcaster, AV presentation device 310 with watermark client 320 will detect and respond to the emergency alert signals. However, when the viewer is viewing time-shifted audio-video content, and AV presentation device 180 receives the time-shifted audio-video content with a watermark that includes an emergency alert signal, AV presentation device 180 will likewise detect and respond to the emergency alert signal. This delayed detection and response may be appropriate if the time-shifting is of minimal duration, but may result in a disturbance to the viewer's experience when the time-shifting is not of minimal duration, because the emergency alert is often no longer relevant. As an example, when the time-shifting is not of minimal duration, detecting and responding to an emergency alert signal may involve modifying the video content and removing any other applications currently presented on AV presentation device 180, which results in unnecessary disturbance to the viewing experience.

[149]

Referring to FIG. 17, an expiration time value (700) is preferably included with the emergency alert signaled in the audio and/or video watermark of the audio-video content. The expiration time value (700) indicates a time value representing the temporal extent of the corresponding emergency alert. For example, the temporal extent may be expressed in terms of minutes in the case of audio watermarks, or in terms of seconds in the case of video watermarks. Preferably, the temporal extent agrees with the textual content of the broadcaster's alert message. For example, an expiration time of 5 PM would be appropriate for a broadcaster's warning message that reads "Flash flood warning in effect to 5 PM".

[150]

It is also preferred that the emergency alert watermark contained in the audio and/or video content comprises an acute flag (710). The acute flag (710) signals to the devices that prompt attention to the emergency alert is warranted. For example, if the acute flag (710) is set, all on-screen display objects (e.g. an interactive TV application running on AV presentation device 180, such as a television) may be erased even while the rest of the emergency alert message is still being retrieved, so that the emergency alert message can be presented in a more urgent manner. For example, if the acute flag (710) is not set, the on-screen display objects need not be eliminated in this timely manner while the rest of the emergency alert message is still being retrieved. If the acute flag (710) is not set, the emergency alert message may be further parsed and processed to further confirm its applicability to the current viewer. For example, such further processing may include geolocation processing that determines whether the message is applicable to the particular viewer.

[151]

The emergency alert watermark contained in the audio and/or video content also preferably comprises a severity indicator (720). For example, the severity indicator (720) may take a range of values such as extreme, severe, moderate, minor, and/or unknown. In this fashion, the emergency alert signal can provide information related to the severity of the emergency event.

[152]

The emergency alert watermark contained in the audio and/or video content also preferably comprises a certainty indicator (730). For example, the certainty indicator (730) may take a range of values such as observed, likely, possible, unlikely, and/or unknown. In this fashion, the emergency alert signal can provide information related to the certainty of the emergency event.

[153]

Having an expiration time value (700), an acute flag (710), a severity indicator (720), and/or a certainty indicator (730) makes it possible to flexibly signal time-sensitive emergency alerts suitable for environments that include time-shifted use of the audio-visual content via the MVPD broadcast receiving device 160. The expiration time value (700), the acute flag (710), the severity indicator (720), and/or the certainty indicator (730) are provided in the audio watermark and/or video watermark of the audio-video content. Furthermore, by providing an emergency alert signal comprising an expiration time value (700), an acute flag (710), a severity indicator (720), and/or a certainty indicator (730), the receivers are able to properly identify the alert and provide a suitable response. In addition, these fields make it possible to reduce unnecessary disturbance to the viewer's experience, in particular in the case of time-shifted audio-video content. Also, if an expiration time value (700), an acute flag (710), a severity indicator (720), and/or a certainty indicator (730) are provided, information is conveyed to the viewer so that the viewer can respond appropriately to the emergency alert signal.

[154]

Referring to FIG. 18, the structure of the watermark message block (800), carried in the payload of a watermark technology with sufficient capacity such as a video watermark, may include a watermark message identifier (wm_message_id) (802) indicating the type of message signaled by the watermark message block (800), such as an emergency alert signal and message. The watermark message block (800) may include an entire wm_message() or a wm_message() fragment. A table (805) may be used to select the appropriate watermark decoding and/or processing based on the value of wm_message_id (802). A wm_message_id value of 0x05 (806) indicates that the watermark message block (800) includes an emergency alert (EA) signal and message (EA_message()) (808). When the signaling indicates that fragmentation is not used, wm_message_bytes() contains a complete instance of the wm_message() identified by the value of wm_message_id; otherwise, wm_message_bytes() contains a fragment of the wm_message(). Other structures for the watermark message may be used, as desired.
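As a minimal sketch of how a receiver might select decoding based on wm_message_id, the following Python fragment dispatches a message body to a handler keyed on the identifier. The handler table and function names are illustrative assumptions, not part of the disclosed system; only the 0x05 identifier for EA_message() is taken from the description above.

```python
# Hypothetical dispatch table keyed on wm_message_id; 0x05 marks EA_message()
# in this description. Handler names are illustrative only.
HANDLERS = {
    0x05: lambda body: ("EA_message", body),
}

def dispatch_wm_message(wm_message_id, body):
    """Route a watermark message body to the decoder for its message type."""
    handler = HANDLERS.get(wm_message_id)
    if handler is None:
        # Unknown message types are passed through unparsed.
        return ("unknown", body)
    return handler(body)
```

A real receiver would register one handler per row of the wm_message_id table (805) and skip messages it cannot parse.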

[155]

The structure of EA_message() (808) may include one or more different data fields. EA_message() may include EA_expiry (852), which may be a 26-bit unsigned integer value representing the coordinated universal time (UTC) at which the current emergency message ends. An EA_expiry value of 0 indicates that the alert expiration time is not known. The receiving device may compare the current UTC time with the UTC time of EA_expiry (852); when the current UTC time is less than the UTC time of EA_expiry (852), the emergency alert event is still current and may be processed accordingly. If the EA_expiry (852) value of 0 indicates that the alert expiration time is unknown, the AV presenting device (180) may automatically render an alert message. EA_expiry (852) corresponds to the expiration time value (700).
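The expiry comparison described above can be sketched as follows. This is an illustrative Python fragment under the assumptions stated in the text (a value of 0 means the expiry is unknown and the device may still render the alert); the function name and time units are hypothetical.

```python
def alert_is_current(ea_expiry_utc, now_utc):
    """Decide whether an emergency alert should still be processed.

    ea_expiry_utc -- alert end time in UTC units; 0 means expiry unknown
    now_utc       -- current time on the same scale
    """
    if ea_expiry_utc == 0:
        # Expiry unknown: the description says the device may render anyway.
        return True
    # The alert is still current while the present time precedes the expiry.
    return now_utc < ea_expiry_utc
```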

[156]

EA_message() (808) may include EA_urgency (854), which may be a 1-bit value representing the urgency of the emergency alert event. A value of 1 signals to the AV presenting device (180), such as a television, that immediate attention is desirable. A value of 0 signals to the AV presenting device (180), such as a television, that the alert is of normal urgency. The AV presenting device (180) may further propagate this signal to one or more companion devices currently participating in a networked multi-screen interactive TV session with the AV presenting device (180). EA_urgency (854) corresponds to the acute flag (710).

[157]

EA_message() (808) may include a 1-bit flag (856) indicating the presence of additional data associated with EA_message().

[158]

EA_message() (808) may include 4 reserved bits (858) of padding for byte alignment.

[159]

EA_message() (808) may include a conditional statement (860) for signaling the additional data associated with EA_message().

[160]

The additional data may include EA_message_ID (862), which may provide an ID for the emergency alert message.

[161]

The additional data may include EA_message_version (864), which may provide a version number for the emergency alert message.

[162]

The additional data may include EA_message_text_length (866), which may be an 8-bit unsigned integer giving the length of EA_message_text (868).

[163]

The additional data may include EA_message_text (8*N bits) (868), which may be a text string carrying the emergency alert text.

[164]

It should be understood that the watermark message and/or any of the fields therein may be structured in any suitable manner, and that fewer and/or greater numbers of bits may be used for the signaling. The data is preferably received in audio and/or video watermarks, but it is to be understood that it can likewise be obtained in any other way.

[165]

Referring to FIG. 19, another example for signaling the watermark message in a video may include replacing the 4 reserved bits (858) with a severity-certainty code (900) that indicates the certainty and/or severity of the corresponding emergency message.

[166]

Referring to FIG. 20, the table represents different combinations of certainty and severity. The certainty (1000) may include, for example, a range of values such as observed, likely, possible, unlikely, and/or unknown. To represent these 5 values with 2 bits, unknown and unlikely may be combined. The severity (1010) may include, for example, a range of values such as extreme, severe, moderate, minor, and/or unknown. To represent these 5 values with 2 bits, unknown and minor may be combined.
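Merging the two lowest values of each element, as described above, yields a 2-bit certainty code and a 2-bit severity code that can be packed into one 4-bit field. The following sketch assumes one illustrative code assignment and bit order; the actual encoding of FIG. 20 may differ.

```python
# Illustrative 2-bit encodings after merging the two lowest values of each
# element. The specific numeric assignments are assumptions, not FIG. 20's.
CERTAINTY = {"observed": 0, "likely": 1, "possible": 2, "unlikely_or_unknown": 3}
SEVERITY = {"extreme": 0, "severe": 1, "moderate": 2, "minor_or_unknown": 3}

def severity_certainty_code(certainty, severity):
    """Pack 2-bit certainty and 2-bit severity into one 4-bit code."""
    return (CERTAINTY[certainty] << 2) | SEVERITY[severity]
```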

[167]

Referring to FIG. 21, another example for signaling the watermark message in a video may include replacing the 4 reserved bits (858) with 6 reserved bits (1102). Further, it may include replacing the 26-bit EA_expiry (852) with a 32-bit EA_expiry (1100). The 32-bit field provides additional granularity to more precisely signal the UTC time code using second granularity.

[168]

Referring to FIG. 22, another example for signaling the watermark message in a video may include replacing the 6 reserved bits (1102) with 2 reserved bits (1104). In addition, it may include adding the severity-certainty code (900).

[169]

Referring to FIG. 23, the structure of the watermark message block appropriate for audio content may include a 1-bit emergency alert flag (EA_flag) (1200) indicating the type of message signaled by the watermark message, such as an emergency alert signal. When EA_flag has a value of 0, the watermark message is not of the emergency alert type. In this case, the watermark message preferably comprises server_code (1210), which can be a 22-bit code used to query the audio watermark server to obtain further information regarding the non-emergency-alert message, for example with a query of the form "http://{server_code}.vp1.tv/". An interval_code (1220) indicates the timeline location in the content corresponding to the server_code (1210). A trigger (1230) may be provided to indicate that previously received server_code and/or interval_code watermark data should be acted upon.

[170]

When EA_flag (1200) has a value of 1, the watermark message is of the emergency alert type. In this case, the watermark message preferably comprises server_code (1210), which can be a 22-bit code used to query the audio watermark server to obtain further information regarding the emergency alert message, for example with a query of the form "http://{server_code}.vp1.tv/atsc30/AEA?zip={zip code}". Here, the query includes the postal ZIP code of the AV presenting device (180), with its watermark-enabled receiver and watermark client, to enable the server to provide emergency alert information relevant to that AV presenting device. The watermark message may also include EA_expiry (1240), which may be a code used to determine an expiration time. The watermark message can also include EA_urgency (1250), indicative of the urgency of the watermark message in a manner similar to EA_urgency (854).
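A receiver's construction of such an alert-recovery query might look like the sketch below. Only the "{server_code}.vp1.tv" host pattern comes directly from the text; the path and the name of the ZIP query parameter are assumptions reconstructed from the garbled description.

```python
def ea_query_url(server_code_hex, zip_code):
    """Form an alert-recovery query of the shape described above.

    server_code_hex -- textual form of the 22-bit code from the watermark
    zip_code        -- postal ZIP code of the presenting device

    Path and parameter name below are hypothetical reconstructions.
    """
    return "http://{}.vp1.tv/atsc30/AEA?zip={}".format(server_code_hex, zip_code)
```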

[171]

A system employing audio-visual watermarking may include the requirement that a broadcaster employing such a watermark technique set EA_flag to 1, and correspondingly set wm_message_id to 0x05, whenever the broadcaster signals anywhere in the emitted signal that an EA event is in effect.

[172]

The system employing audio-visual watermarking may include the requirement that a broadcaster employing such a watermark technique set EA_flag to 0 whenever the broadcaster signals anywhere in the emitted signal that no valid EA event is present, with wm_message_id set to 0x05 whenever such signaling is carried in the emitted signal.

[173]

FIG. 24a illustrates an exemplary bitstream structure of wm_message_block() in a video watermark. Here:

[174]

wm_message_id is a value uniquely identifying the syntax and semantics of the data bytes carried in the message block.

[175]

wm_message_version is a 4-bit value that may be incremented whenever anything in the wm_message() is changed; after the value reaches 15, it wraps around to 0.

[176]

fragment_number is a 2-bit value specifying the number of the current message fragment minus 1.

[177]

last_fragment is a 2-bit value specifying the fragment number of the last fragment used to convey the complete wm_message(). A last_fragment of '00' indicates that fragmentation is not used (the wm_message() contained therein is complete). A last_fragment of '01' indicates that the wm_message() is delivered in 2 parts, '10' indicates that it is delivered in 3 parts, and '11' indicates that it is delivered in 4 parts. fragment_number and last_fragment may be considered to signal "part M of N".
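The "part M of N" interpretation above can be made concrete with a short sketch. This is an illustrative Python fragment; the function name is an assumption.

```python
def part_m_of_n(fragment_number, last_fragment):
    """Interpret the two 2-bit header fields as "part M of N".

    fragment_number carries the fragment index minus 1, so M = fragment_number + 1,
    and the total count is N = last_fragment + 1 (last_fragment of 0 means
    fragmentation is not used).
    """
    if not 0 <= fragment_number <= last_fragment <= 3:
        raise ValueError("fragment_number may not exceed last_fragment")
    return fragment_number + 1, last_fragment + 1
```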

[178]

When last_fragment has the value 0, wm_message_bytes() is a complete instance of the watermark message identified by the value of wm_message_id. When last_fragment is non-zero, wm_message_bytes() is a fragment of the corresponding wm_message(). The concatenation of all instances of wm_message_bytes() associated with a given wm_message_id and wm_message_version results in the complete wm_message() associated with that wm_message_id. The assembly of a wm_message() from one or more wm_message_block() instances may be as shown in FIG. 24c, where wm_message_block(i) denotes the i-th instance, that is, the wm_message_block() instance with fragment_number equal to i.
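The concatenation step can be sketched as follows. This assumes the fragments have already been filtered to a single wm_message_id and wm_message_version; the function name is illustrative.

```python
def assemble_wm_message(blocks):
    """Join wm_message_bytes() fragments into one wm_message().

    blocks -- iterable of (fragment_number, payload) pairs that share one
    wm_message_id and wm_message_version, mirroring FIG. 24c's assembly.
    """
    ordered = sorted(blocks)
    numbers = [n for n, _ in ordered]
    # Fragment numbers must form the contiguous run 0..last_fragment.
    if numbers != list(range(len(numbers))):
        raise ValueError("missing or duplicate fragment")
    return b"".join(payload for _, payload in ordered)
```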

[179]

FIG. 24b shows an exemplary mapping from wm_message_id to wm_message(). This mapping is used to determine the bytes included in wm_message_bytes().

[180]

In an exemplary system, fragment_number is constrained to be less than or equal to last_fragment.

[181]

FIG. 24d represents an exemplary URI message used to convey various types of URIs. The URI message may be sent in fragments (i.e., last_fragment in the message header may be non-zero). The value of the field uri_type identifies the type of URI. The value of the field uri_strlen signals the number of characters in the following URI_string() field. The field URI_string() is a URI (Uniform Resource Identifier, as specified by IETF RFC 3986, available at https://www.ietf.org/rfc/rfc3986.txt, which is incorporated herein by reference) consisting of characters that may be limited to those allowed in a URI. The length of the URI string (URI_string()) may be as given by the value of uri_strlen. If the URI is sent in fragments, this character string is constrained to be a valid URI per RFC 3986 after reassembly.
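A parser for the reassembled (defragmented) message layout just described might look like the following sketch. Byte widths follow the description above (one byte for uri_type, one for uri_strlen); the function name is an assumption and this is not a normative parser.

```python
def parse_uri_message(payload):
    """Parse uri_type (1 byte), uri_strlen (1 byte), then uri_strlen bytes
    of URI characters from a reassembled uri_message() payload."""
    uri_type = payload[0]
    uri_strlen = payload[1]
    if len(payload) < 2 + uri_strlen:
        raise ValueError("payload shorter than uri_strlen indicates")
    # RFC 3986 URIs use a subset of ASCII, so ASCII decoding is safe here.
    uri = payload[2:2 + uri_strlen].decode("ascii")
    return uri_type, uri
```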

[182]

In an example, variable-length fields in a video watermark are signaled as follows. The length value, e.g. L, of the field is signaled first, followed by the bytes containing the data for the field. Since the capacity of the 1X and 2X systems is limited, the value that the length L can assume is constrained. More specifically, the sum of the lengths of the variable-length fields may not exceed the maximum video watermark payload length minus the length of the various fixed-length fields in the video watermark payload. The fixed-length fields include the length fields for the variable-length data.

[183]

Referring to FIG. 24d, since the maximum allowable value for uri_strlen is 255, the overall uri_message() could be larger than the maximum allowed capacity of the 1X or 2X watermark system. Thus, constraints on uri_strlen are described below to ensure that, when using 1X or 2X systems, the entire message fits within the capacity of the watermark message. Without such a constraint, it would be possible to create a message that does not fit the watermark system and that causes the receiver to fail to parse the received message.

[184]

Referring to FIG. 24d, the variable-length field URI_string() is preceded by its length field uri_strlen. In one example, the uri_strlen field value may be 86 or less for the 1X video watermark emission format (1X system). In another example, the uri_strlen field value may be 78 or less for the 1X video watermark emission format (1X system).

[185]

Referring to FIG. 24e, the variable-length field URI_string() is preceded by its length field uri_strlen. In one example, the uri_strlen field value may be 206 or less for the 2X video watermark emission format (2X system). In another example, the uri_strlen field value may be 198 or less for the 2X video watermark emission format (2X system).
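An emitter-side check of these constraints can be sketched as below, using the first-example limits quoted above (86 for 1X, 206 for 2X); the table and function names are illustrative.

```python
# First-example uri_strlen limits quoted above for each emission format.
URI_STRLEN_MAX = {"1X": 86, "2X": 206}

def uri_strlen_fits(system, uri_strlen):
    """True when a uri_strlen value keeps uri_message() within the
    capacity of the given video watermark emission format."""
    return 0 <= uri_strlen <= URI_STRLEN_MAX[system]
```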

[186]

FIG. 24f illustrates another example syntax for the URI message.

[187]

FIG. 24g depicts another exemplary mapping of the uri_type field in the URI message to URI types.

[188]

In an example, and with respect to FIG. 24f, the field uri_type may be an 8-bit unsigned integer field that identifies the type of the URI that follows, according to the encoding given in FIG. 24g.

[189]

One uri_type value shown in FIG. 24g, the dynamic event WebSocket server URL, indicates the URL of a dynamic event WebSocket server providing access to dynamic events. In an example, access to dynamic events over the WebSocket protocol can be achieved using techniques described in ATSC Working Draft A/337. Various dynamic events can be conveyed over broadband in addition to broadcast. Since new event information may need to be communicated dynamically at any time, notification via a WebSocket server is supported for broadband delivery of dynamic events. This uri_type provides the URL of that WebSocket server. The WebSocket protocol is defined in IETF RFC 6455 (http://www.ietf.org/rfc/rfc6455.txt), which is incorporated herein by reference.

[190]

In one example, a dynamic event may be intended for an application running in a run-time environment, or alternatively it may signal the availability of unscheduled updates to service signaling files or data. If a dynamic event is intended for an application running in a run-time environment, the device may make the event available to applications through a callback routine. As an example of this behavior, a football application may dynamically receive "Touchdown" events through the dynamic event WebSocket server each time a touchdown occurs in the football game. In another example, targeted ad insertion dynamic events may be sent to the application via the dynamic event WebSocket server. An application running in the run-time environment receives such dynamic events and takes an appropriate action (e.g., showing a touchdown alert message to the user, or showing a targeted advertisement to the user). The receiver can connect to the server identified by the above exemplary syntax and mapping when initiating the football application or the targeted advertising application, or when the receiver receives dynamic events.

[191]

FIG. 25a illustrates an example dynamic event message. As shown in FIG. 24b, dynamic_event_message() is one of the watermark messages.

[192]

An event is a timed notification to the receiver software indicating that some action is to be taken.

[193]

An event stream is a stream of events.

[194]

The broadcast station may send events to the receiver via a broadcast channel or via broadband. Events may be transmitted dynamically, as required. As an example, an event may be sent to signal a receiver to start or stop a particular application associated with the current program. Another example event may carry data required by a running application. These are merely examples, and other types of data can be transmitted by events.

[195]

Dynamicicicicicited message () supports the delivery of dynamic events in the video watermarks. In an example Dynamic EEEvent Message and bitstream semantics may be as given in FIG. 25a. The semantic description of various syntax elements in FIG. 25a may be as shown below.

[196]

delivery_protocol_type is a 4-bit field that indicates the delivery protocol of the service to which the dynamic event applies. FIG. 25b illustrates an example encoding of this field. The indicated protocols may include, for example, MMTP (MPEG Media Transport Protocol) and ROUTE (Real-time Object delivery over Unidirectional Transport), which can operate together with DASH (Dynamic Adaptive Streaming over HTTP). MMTP is described in ISO/IEC 23008-1, "Information technology - High efficiency coding and media delivery in heterogeneous environments - Part 1: MPEG media transport (MMT)", which is incorporated herein by reference. DASH is described in ISO/IEC 23009-1, "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats", which is incorporated herein by reference in its entirety.

[197]

scheme_id_uri_length is an 8-bit unsigned integer field that provides the length of the scheme_id_uri_string field in bytes.

[198]

scheme_id_uri_string is a string that provides the schemeIdUri for the event stream of the event. It specifies a URI identifying the scheme; the semantics of the event are specific to the scheme specified by this attribute. The schemeIdUri may be a URN (Uniform Resource Name) or a URL (Uniform Resource Locator). URNs and URLs are defined in IETF RFC 3986 (https://tools.ietf.org/html/rfc3986), which is incorporated herein by reference.

[199]

value_length is an 8-bit unsigned integer field that provides the length of the value_string field in bytes.

[200]

value_string is a string that provides the value for the event stream of the event.

[201]

timescale is a 32-bit unsigned integer that provides the time scale for the event stream of the event, as defined in the MPEG DASH standard, ISO/IEC 23009-1, "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats".

[202]

presentation_time is a 32-bit unsigned integer indicating the presentation time of the event, expressed as the least significant 32 bits of the number of seconds since 1 January 1970 00:00:00, International Atomic Time (TAI).

[203]

presentation_time_ms is a 10-bit unsigned integer in the range 0 to 999 giving the offset, in milliseconds, from the time indicated by presentation_time, such that the formula presentation_time + (presentation_time_ms / 1000) yields the actual presentation time of the event to an accuracy of 1 millisecond.
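The formula above can be written directly as a small helper; the function name is illustrative.

```python
def event_presentation_time(presentation_time, presentation_time_ms):
    """Combine the whole-second anchor and millisecond offset, giving the
    event's presentation time in seconds to 1 ms accuracy."""
    if not 0 <= presentation_time_ms <= 999:
        raise ValueError("presentation_time_ms must be in 0..999")
    return presentation_time + presentation_time_ms / 1000.0
```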

[204]

duration is a 32-bit unsigned integer that provides the duration of the event in the time scale of the event.

[205]

id is a 32-bit unsigned integer field providing an identifier (ID) for the event. The id is unique within the event stream.

[206]

data_length is an 8-bit integer that provides the length of the data field in bytes.

[207]

data is a field that contains data necessary for responding to the event, if present. The format and use of the data are determined by the event stream specification, which is known to any application that registers to receive events, for any event targeted at applications.

[208]

Extension of the dynamic event message to support future scalability is desirable. The dynamic event message shown in FIG. 25a provides syntax elements only when the transfer protocol is ROUTE/DASH or MMTP. This can be seen from the use of the condition (delivery_protocol_type == '1' || delivery_protocol_type == '2') in FIG. 25a, which gates the dynamic-event-related syntax elements corresponding to these 2 transfer protocols. However, the dynamic event message in FIG. 25a does not allow dynamic event information to be signaled when other transfer protocols are used in the future.

[209]

An extension of the dynamic event message is shown in FIG. 25c. In FIG. 25c, fields are added to dynamic_event_message() in an else {...} clause towards the end of dynamic_event_message(). These fields comprise the proto_field_length and reserved fields, which are described below.

[210]

proto_field_length is an 8-bit unsigned integer field that provides the length, in bytes, of the reserved field immediately following this field.

[211]

reserved is a field whose length is given by the proto_field_length field.

[212]

If a new transfer protocol is defined in the future, bytes in the reserved field may be used to signal any desired data elements.
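The length-prefixed reserved field is what lets a legacy receiver step over data it cannot interpret. The sketch below shows that skip; the field name proto_field_length follows this description and may differ in an actual standard.

```python
def skip_unknown_protocol_fields(payload):
    """Legacy-receiver handling of FIG. 25c's extension fields: read the
    one-byte length, then skip the reserved bytes it covers.

    Returns the opaque reserved bytes and whatever follows them.
    """
    proto_field_length = payload[0]
    reserved = payload[1:1 + proto_field_length]   # opaque to this receiver
    remainder = payload[1 + proto_field_length:]
    return reserved, remainder
```

A newer receiver that knows the reserved field's format for the new protocol would parse `reserved` instead of discarding it.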

[213]

Upon receipt of a message that adheres to the syntax shown in FIG. 25c, a legacy receiver can skip past the reserved field, since it knows the field's length. By contrast, a new receiver that knows the format of the reserved field for the new transfer protocol can, on receiving such a message, parse the contents of the reserved field.

[214]

Hence, the syntax shown in FIG. 25c provides future scalability in a backward compatible manner.

[215]

Referring to FIGS. 25a and 25c, the syntax comprises 3 variable-length fields, namely the scheme_id_uri_string, value_string, and data fields. Each of these fields is preceded by a field (scheme_id_uri_length, value_length, and data_length, respectively) indicating its length. Since the maximum value of each of scheme_id_uri_length, value_length, and data_length is 255, the total dynamic_event_message() could be larger than the maximum allowed capacity of the 1X or 2X watermark system. In order to ensure that the entire message fits within the capacity of the watermark message when using 1X or 2X systems, the following constraints are placed on these fields. Without such constraints, it would be possible to create a message that does not fit the watermark system and that causes the receiver to fail to parse the received message.

[216]

If delivery_protocol_type is equal to 1 or 2, the sum of the scheme_id_uri_length field value, the value_length field value, and the data_length field value may be 66 or less for the 1X video watermark emission format (1X system) and 186 or less for the 2X video watermark emission format (2X system).
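This summed-length constraint can be checked before emission with a sketch like the following, using the limits quoted just above (66 for 1X, 186 for 2X); the table and function names are illustrative.

```python
# Limits quoted above for delivery_protocol_type values 1 and 2.
VARIABLE_LENGTH_BUDGET = {"1X": 66, "2X": 186}

def dynamic_event_lengths_fit(system, scheme_id_uri_length, value_length,
                              data_length):
    """True when the three variable-length fields together fit the
    payload budget of the given emission format."""
    total = scheme_id_uri_length + value_length + data_length
    return total <= VARIABLE_LENGTH_BUDGET[system]
```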

[217]

When delivery_protocol_type has a value other than 1 or 2, proto_field_length may be 87 or less for the 1X video watermark emission format (1X system) and 207 or less for the 2X video watermark emission format (2X system).

[218]

In another example, when delivery_protocol_type is equal to 1 or 2, the sum of the scheme_id_uri_length field value, the value_length field value, and the data_length field value may be 58 or less for the 1X video watermark emission format (1X system) and 178 or less for the 2X video watermark emission format (2X system).

[219]

In another example, when delivery_protocol_type has a value other than 1 or 2, the value of proto_field_length may be 78 or less for the 1X video watermark emission format (1X system) and 198 or less for the 2X video watermark emission format (2X system).

[220]

In another example, the proto_field_length field may be given another field name. In one example, it may be referred to as reserved1_field_length.

[221]

Referring to FIG. 24b, emergency_alert_message() may correspond to the example syntax shown in FIG. 26a.

[222]

Semantics for the fields in FIG. 26a are given below:

[223]

CAP_message_ID_length: this 8-bit unsigned integer field gives the length of the CAP_message_ID field in bytes.

[224]

CAP_message_ID is a string that provides the ID of the CAP message, as specified in OASIS, "Common Alerting Protocol Version 1.2", 1 July 2010.

[225]

That document is available at http://docs.oasis-open.org/emergency/cap/v1.2/CAP-v1.2-os.pdf and is incorporated herein by reference in its entirety. CAP_message_ID can be the value of the identifier element of the CAP message referenced by CAP_message_url.

[226]

CAP_message_url_length is an 8-bit unsigned integer field that provides the length of the CAP_message_url field in bytes.

[227]

CAP_message_url is a string that provides a URL that can be used to retrieve the CAP message.

[228]

expires is a 32-bit unsigned integer indicating the expiration date and time of the CAP message, encoded as the number of seconds since 1 January 1970 00:00:00, International Atomic Time (TAI). It indicates the latest expiration date and time among the <info> elements of the CAP message.

[229]

urgency is a flag that is set to '1' when the urgency of the most urgent <info> element in the CAP message is "Immediate", and is set to '0' otherwise.

[230]

severity_certainty is a 4-bit field whose code is derived from the CAP values for certainty and severity. For both elements, the lowest 2 values are merged. The encoding of severity_certainty may be as given in FIG. 26b.

[231]

Referring to FIG. 26a, the variable-length field CAP_message_ID is preceded by its length field CAP_message_ID_length, and the variable-length field CAP_message_url is preceded by its length field CAP_message_url_length. Since the maximum allowed value of each length field is 255, emergency_alert_message() could be larger than the maximum allowed capacity of the 1X or 2X watermark system. Thus, constraints are described below for the CAP_message_ID_length and CAP_message_url_length fields to ensure that, when using 1X or 2X systems, the entire message fits within the capacity of the watermark message. Without such constraints, it would be possible to create a message that does not fit the watermark system and that causes the receiver to fail to parse the received message.

[232]

In one example, the sum of the CAP_message_ID_length field value and the CAP_message_url_length field value may be 80 or less for the 1X video watermark emission format (1X system). In another example, the sum may be 73 or less for the 1X video watermark emission format (1X system).

[233]

In one example, the sum of the CAP_message_ID_length field value and the CAP_message_url_length field value may be 200 or less for the 2X video watermark emission format (2X system). In another example, the sum may be 193 or less for the 2X video watermark emission format (2X system).
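These CAP length budgets can be validated with the same pattern as the other payload checks, here using the first-example limits (80 for 1X, 200 for 2X); the table and function names are illustrative.

```python
# First-example limits for emergency_alert_message() length fields.
CAP_LENGTH_BUDGET = {"1X": 80, "2X": 200}

def cap_lengths_fit(system, cap_message_id_length, cap_message_url_length):
    """True when the two CAP length fields keep the message within the
    capacity of the given emission format."""
    total = cap_message_id_length + cap_message_url_length
    return total <= CAP_LENGTH_BUDGET[system]
```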

[234]

In one example, referring to FIG. 26a, the expires field may not be signaled in the message. The signaling may, for example, be controlled by a flag: when the flag value is 0, the expires field is not signaled, and when the flag value is 1, the expires field is signaled in emergency_alert_message().

[235]

In one example, referring to FIG. 26a, a special value may be set aside for the expires field. This special value would indicate that the expiration time of the emergency_alert_message() is not known. For example, this special value may be the value 0.

[236]

A system employing audio-visual watermarking may allow setting the expiration time to 0, at the discretion of the broadcaster, to mitigate the need to determine a suitable duration and/or end time.

[237]

A system employing audio-visual watermarking may determine expiration times based on other elements included in, or otherwise available to, the display device.

[238]

Each functional block or various features of the base station device and the terminal device used in each of the above-described embodiments can be implemented or executed by circuitry, typically one integrated circuit or a plurality of integrated circuits. Circuitry designed to perform the functions described herein may include a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination thereof. A general-purpose processor may be a microprocessor, or, in the alternative, the processor may be a conventional processor, controller, microcontroller, or state machine. The general-purpose processor or each circuit described above may be implemented with digital circuitry or with analog circuitry. In addition, if integrated circuit fabrication technology that supersedes current integrated circuits emerges with advances in semiconductor technology, integrated circuits produced by that technology may also be used.

[239]

It is to be understood that the claims are not limited to the exact construction and components shown above. Various modifications, changes and variations may be made in the arrangement, operation, and details of the systems, methods, and apparatus described herein without departing from the scope of the claims.



[240]

A system for broadcasting that includes a watermark payload (FIGS. 24a, 24b, 24f, and 24g).



A method comprising: (a) receiving a data stream that includes a watermark message; (b) extracting a uniform resource identifier message from the watermark message; (c) extracting, from the uniform resource identifier message, a uniform resource identifier type identifying a type of the uniform resource identifier that follows; (d) determining whether the uniform resource identifier type has a value of 0x01 indicating a uniform resource identifier of a signaling server providing access to service layer signaling; and (e) selectively using the uniform resource identifier of the uniform resource identifier message to request access to the service layer signaling.

The method of claim 1, further comprising: (a) receiving video data in a video data stream; and (b) displaying the video data in dependence on at least one of the uniform resource identifier type and the uniform resource identifier.

A device for processing a data stream, the device comprising one or more processors configured to: (a) receive the data stream, the data stream comprising a watermark message encoded therein; (b) extract a uniform resource identifier message from the watermark message, the message being related to communicating uniform resource identifiers; (c) extract, from the uniform resource identifier message, a uniform resource identifier type identifying a type of the uniform resource identifier that follows; (d) determine whether the uniform resource identifier type has a value of 0x01 indicating a uniform resource identifier of service layer signaling; (e) determine whether the uniform resource identifier type has a value of 0x02 indicating a uniform resource identifier of a signaling server that provides access to service layer signaling; and (f) selectively use the uniform resource identifier accordingly.

The device of claim 3, wherein the one or more processors are further configured to: (a) receive video data in a video data stream; and (b) display the video data in dependence on at least one of the uniform resource identifier type and the uniform resource identifier.