NORCROSS SCOTT GREGORY (US)
DOLBY INT AB (IE)
WO2020185025A1 | 2020-09-17
US20160219387A1 | 2016-07-28
US20180091917A1 | 2018-03-29
US20180095718A1 | 2018-04-05
"Text of ISO/IEC 23003-4, Dynamic Range Control, 2nd Edition", no. n17643, 5 June 2018 (2018-06-05), XP030262172, Retrieved from the Internet
CLAIMS

1. A method of metadata-based dynamic processing of audio data for playback, the method including: receiving, by a decoder, a bitstream including audio data and metadata for dynamic loudness adjustment, wherein the metadata for dynamic loudness adjustment comprises a plurality of sets of metadata, wherein each set of metadata corresponds to a respective playback condition; decoding, by the decoder, the audio data and the metadata to obtain decoded audio data and the metadata; selecting, in response to playback condition information provided to the decoder, a set of metadata corresponding to a specific playback condition, and extracting, from the selected set of metadata, one or more processing parameters for dynamic loudness adjustment; applying the extracted one or more processing parameters to the decoded audio data to obtain processed audio data; and outputting the processed audio data for playback.

2. The method according to claim 1, wherein said extracting the one or more processing parameters further includes extracting one or more processing parameters for dynamic range compression, DRC.

3. The method according to claim 1 or 2, wherein the playback condition information is indicative of a specific loudspeaker setup.

4. The method according to any one of claims 1 to 3, wherein the selected set of metadata includes a set of DRC sequences, DRCSet.

5. The method according to any one of claims 1 to 4, wherein selecting the set of metadata includes identifying a set of metadata corresponding to a specific downmix.

6. The method according to any one of claims 1 to 5, wherein the sets of metadata each include one or more processing parameters relating to average loudness values and optionally one or more processing parameters relating to dynamic range compression characteristics.

7. The method according to any one of claims 1 to 6, wherein the bitstream further includes additional metadata for static loudness adjustment to be applied to the decoded audio data.

8. The method according to any one of claims 1 to 7, wherein the bitstream is an MPEG-D DRC bitstream and the presence of metadata is signaled based on MPEG-D DRC bitstream syntax.

9. The method according to claim 8, wherein a loudnessInfoSetExtension()-element is used to carry the metadata as a payload.

10. The method according to any one of claims 1 to 9, wherein the metadata comprises one or more metadata payloads, wherein each metadata payload includes a plurality of sets of parameters and identifiers, with each set including a respective downmix identifier, downmixId, in combination with one or more processing parameters relating to the downmix identifier in the set.

11. A decoder for metadata-based dynamic processing of audio data for playback, wherein the decoder comprises one or more processors and non-transitory memory configured to perform a method including: receiving, by the decoder, a bitstream including audio data and metadata for dynamic loudness adjustment, wherein the metadata for dynamic loudness adjustment comprises a plurality of sets of metadata, wherein each set of metadata corresponds to a respective playback condition; decoding, by the decoder, the audio data and the metadata to obtain decoded audio data and the metadata; selecting, in response to a playback condition provided to the decoder, a set of metadata corresponding to a specific playback condition, and extracting, from the selected set of metadata, one or more processing parameters for dynamic loudness adjustment; applying the extracted one or more processing parameters to the decoded audio data to obtain processed audio data; and outputting the processed audio data for playback.

12.
A method of encoding audio data and metadata for dynamic loudness adjustment into a bitstream, the method including: inputting original audio data into a loudness leveler for loudness processing to obtain, as an output from the loudness leveler, loudness processed audio data; generating the metadata for dynamic loudness adjustment based on the loudness processed audio data and the original audio data; and encoding the original audio data and the metadata into the bitstream.

13. The method according to claim 12, wherein the method further includes generating additional metadata for static loudness adjustment to be used by a decoder.

14. The method according to claim 12 or 13, wherein said generating metadata includes comparison of the loudness processed audio data to the original audio data, and wherein the metadata is generated based on a result of said comparison.

15. The method according to claim 14, wherein said generating metadata further includes measuring the loudness over one or more pre-defined time periods, and wherein the metadata is generated further based on the measured loudness.

16. The method according to claim 15, wherein the measuring comprises measuring overall loudness of the audio data.

17. The method according to claim 15, wherein the measuring comprises measuring loudness of dialogue in the audio data.

18. The method according to any one of claims 12 to 17, wherein the bitstream is an MPEG-D DRC bitstream and the presence of the metadata is signaled based on MPEG-D DRC bitstream syntax.

19. The method according to claim 18, wherein a loudnessInfoSetExtension()-element is used to carry the metadata as a payload.

20. The method according to any one of claims 12 to 19, wherein the metadata comprises a plurality of sets of metadata, wherein each set of metadata corresponds to a respective playback condition.

21. The method according to any one of claims 12 to 20, wherein the metadata comprises one or more metadata payloads, wherein each metadata payload includes a plurality of sets of parameters and identifiers, with each set including a respective downmix identifier, downmixId, in combination with one or more processing parameters relating to the downmix identifier in the set, and wherein the one or more processing parameters are parameters for dynamic loudness adjustment by a decoder.

22. An encoder for encoding in a bitstream original audio data and metadata for dynamic loudness adjustment, wherein the encoder comprises one or more processors and non-transitory memory configured to perform a method including: inputting original audio data into a loudness leveler for loudness processing to obtain, as an output from the loudness leveler, loudness processed audio data; generating the metadata for dynamic loudness adjustment based on the loudness processed audio data and the original audio data; and encoding the original audio data and the metadata into the bitstream.

23. A system of an encoder for encoding in a bitstream original audio data and metadata for dynamic loudness adjustment according to claim 22, and a decoder for metadata-based dynamic processing of audio data for playback according to claim 11.

24. A computer program product comprising a computer-readable storage medium with instructions adapted to cause a device having processing capability to carry out the method according to any one of claims 1 to 10 or 12 to 21 when executed by the device.

25. A computer-readable storage medium storing the computer program product of claim 24.
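The encoder-side behavior recited in claims 12 to 17 (comparing loudness-processed audio against the original audio over pre-defined time periods to generate the dynamic loudness metadata) can be sketched as follows. This is an illustrative assumption only: the function names, the fixed window length, and the RMS-based loudness proxy are not part of the claims, and a real encoder would use a standardized loudness measure such as ITU-R BS.1770.

```python
import math

def measure_loudness_db(samples):
    """Crude RMS loudness proxy in dB (stand-in for a real loudness measure)."""
    energy = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(max(energy, 1e-12))

def dynamic_loudness_metadata(original, leveled, window=4):
    """Per-window loudness offsets (dB) describing what the loudness leveler did.

    Each offset compares the loudness-processed audio to the original audio over
    one pre-defined time period; a decoder could later re-apply these offsets as
    dynamic loudness adjustment values.
    """
    offsets = []
    for start in range(0, len(original), window):
        orig_win = original[start:start + window]
        lev_win = leveled[start:start + window]
        offsets.append(measure_loudness_db(lev_win) - measure_loudness_db(orig_win))
    return offsets
```

For example, if the leveler attenuated a passage by a factor of 0.5 in amplitude, the corresponding window offset is about -6.02 dB.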
Table 1: Syntax of loudnessInfoSetExtension()-element
Table 2: Syntax of loudnessInfoSet()-element
Table 4: loudnessInfoSet extension types

New dynLoudComp():

Table 5: Syntax of dynLoudComp()-element

• The drcSetId enables dynLoudComp (relating to the metadata) to be applied per DRC-set.
• The eqSetId enables dynLoudComp to be applied in combination with different settings for the equalization tool.
• The downmixId enables dynLoudComp to be applied per DownmixID.

In some cases, in addition to the above parameters, it may be beneficial for the dynLoudComp() element to also include a methodDefinition parameter (specified by, e.g., 4 bits) specifying a loudness measurement method used for deriving the dynamic program loudness metadata (e.g., anchor loudness, program loudness, short-term loudness, momentary loudness, etc.) and/or a measurementSystem parameter (specified by, e.g., 4 bits) specifying a loudness measurement system used for measuring the dynamic program loudness metadata (e.g., EBU R 128, ITU-R BS.1770 with or without preprocessing, ITU-R BS.1771, etc.). Such parameters may, e.g., be included in the dynLoudComp() element between the downmixId and dynLoudCompValue parameters.
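As an illustration of the fields just discussed, the dynLoudComp() element can be modeled as a simple container. This is a sketch only: the class and field layout are assumptions, and the actual bit widths and field order are defined by Table 5, which is not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DynLoudComp:
    """Illustrative container for the dynLoudComp() fields discussed above."""
    drcSetId: int          # DRC set the compensation applies to
    eqSetId: int           # equalization-tool setting it applies to
    downmixId: int         # downmix it applies to
    dynLoudCompValue: int  # coded compensation value (decoded to dynLoudCompDb)
    # Optional extensions proposed in the text (e.g., 4 bits each):
    methodDefinition: Optional[int] = None   # loudness measurement method
    measurementSystem: Optional[int] = None  # loudness measurement system
```

The optional methodDefinition and measurementSystem fields default to absent, matching the text's suggestion that they are additions between downmixId and dynLoudCompValue.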
Alternative Syntax 1

Table 6: Syntax of loudnessInfoSetExtension()-element

Table 7: loudnessInfoSet extension types
Table 8: Syntax of loudnessInfoV2() payload

In some cases, it may be beneficial to modify the syntax shown above in Table 8 such that the dynLoudCompPresent and (if dynLoudCompPresent==1) dynLoudCompValue parameters follow the reliability parameter within the measurementCount loop of the loudnessInfoV2() payload, rather than being outside the measurementCount loop. Furthermore, it may also be beneficial to set dynLoudCompValue equal to 0 in cases where dynLoudCompPresent is 0.

Alternative Syntax 2

Alternatively, the dynLoudComp()-element could be placed into the uniDrcGainExtension()-element.

Table 9: Syntax of uniDrcGain()-element
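The conditional-presence convention discussed above for Table 8 (dynLoudCompValue is read only when dynLoudCompPresent == 1, and otherwise treated as 0) might be sketched as follows. The reader callbacks are hypothetical stand-ins for a real bitstream parser, not part of the defined syntax.

```python
def read_dyn_loud_comp(read_bit, read_value):
    """Read the optional dynLoudComp fields from a bit reader.

    `read_bit` and `read_value` stand in for the real bitstream reader; the
    coded-value-to-dB mapping of the dynLoudCompValue field is not modeled.
    """
    dynLoudCompPresent = read_bit()
    if dynLoudCompPresent == 1:
        return read_value()
    # When not present, the value defaults to 0 (i.e., 0 dB compensation).
    return 0
```

This mirrors the suggestion that dynLoudCompValue be set equal to 0 whenever dynLoudCompPresent is 0.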
Table 10: Syntax of uniDrcGainExtension()-element

Table 11: UniDrc gain extension types

Semantics

dynLoudCompValue    This field contains the value for dynLoudCompDb. The values are encoded according to the table below. The default value is 0 dB.

Table 12: Coding of dynLoudCompValue field

Updated MPEG-D DRC Loudness Normalization Processing

Table 13: Loudness normalization processing

Pseudo-code for selection and processing of dynLoudComp:

    /* Selection Process */
    /* The following settings would be derived from user/decoder settings:
       drcSetId = 1; eqSetID = 2; downmixId = 3; */
    findMatchingDynLoudComp(drcSetId, eqSetID, downmixId)
    {
        dynLoudComp = UNDEFINED;

        /* Check if a matching loudnessInfo set is present; if not, stop */
        if (!targetLoudnessPresent(drcSetId, eqSetID, downmixId)) {
            dynLoudCompPresent = false;
            return;
        }

        /* If all values are defined */
        if (drcSetId != UNDEFINED && eqSetID != UNDEFINED && downmixId != UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].drcSetId == drcSetId &&
                    dynLoudCompArray[num].eqSetID == eqSetID &&
                    dynLoudCompArray[num].downmixId == downmixId) {
                    /* Correct entry found, assign to dynLoudComp */
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        /* Cases with two undefined identifiers; these must be tested before the
           cases with a single undefined identifier, or they are never reached */
        else if (drcSetId == UNDEFINED && downmixId == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].eqSetID == eqSetID) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        else if (drcSetId == UNDEFINED && eqSetID == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].downmixId == downmixId) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        else if (eqSetID == UNDEFINED && downmixId == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].drcSetId == drcSetId) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        /* Cases with a single undefined identifier */
        else if (drcSetId == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].eqSetID == eqSetID &&
                    dynLoudCompArray[num].downmixId == downmixId) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        else if (eqSetID == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].drcSetId == drcSetId &&
                    dynLoudCompArray[num].downmixId == downmixId) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }
        else if (downmixId == UNDEFINED) {
            for (num = 0; num < num_of_dynLoudCompValues; num++) {
                if (dynLoudCompArray[num].drcSetId == drcSetId &&
                    dynLoudCompArray[num].eqSetID == eqSetID) {
                    dynLoudComp = dynLoudCompArray[num].dynLoudCompValue;
                }
            }
        }

        if (dynLoudComp == UNDEFINED) {
            dynLoudCompPresent = false;   /* no matching entry found */
        } else {
            dynLoudCompPresent = true;
        }
    }

    /* Processing */
    if (targetLoudnessPresent) {
        if (dynLoudCompPresent) {
            loudnessNormalizationGainDb = targetLoudness - contentLoudness + dynLoudCompDb;
        } else {
            loudnessNormalizationGainDb = targetLoudness - contentLoudness;
        }
    } else {
        loudnessNormalizationGainDb = 0.0;
    }
    if (loudnessNormalizationGainDbMaxPresent) {
        loudnessNormalizationGainDb = min(loudnessNormalizationGainDb,
                                          loudnessNormalizationGainDbMax);
    }
    if (loudnessNormalizationGainModificationDbPresent) {
        gainNorm = pow(2.0, (loudnessNormalizationGainDb +
                             loudnessNormalizationGainModificationDb) / 6);
    } else {
        gainNorm = pow(2.0, loudnessNormalizationGainDb / 6);
    }
    for (t = 0; t < drcFrameSize; t++) {
        for (c = 0; c < nChannels; c++) {
            audioSample[c][t] = gainNorm * audioSample[c][t];
        }
    }

In some cases, in addition to the selection process shown in pseudo-code above (e.g., taking drcSetId, eqSetID and downmixId into consideration for selecting a dynLoudCompValue parameter), it may be beneficial for the selection process to also take a methodDefinition parameter and/or a measurementSystem parameter into consideration for selecting a dynLoudCompValue parameter.
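As an illustration only, the selection and processing steps above can be condensed into a runnable sketch. This is not the normative MPEG-D DRC decoder behavior: UNDEFINED is modeled as None, any identifier set to None acts as a wildcard (which subsumes the explicit per-branch matching of the pseudo-code), and the targetLoudnessPresent precondition is passed in as a flag rather than queried from the loudnessInfo sets.

```python
UNDEFINED = None  # an identifier set to UNDEFINED acts as a wildcard

def find_matching_dyn_loud_comp(entries, drcSetId, eqSetId, downmixId):
    """Mirror of findMatchingDynLoudComp(): return (dynLoudCompPresent, value).

    `entries` stands in for dynLoudCompArray; each entry is a dict with keys
    drcSetId, eqSetId, downmixId and dynLoudCompValue.
    """
    dynLoudComp = UNDEFINED
    for e in entries:
        if ((drcSetId is UNDEFINED or e["drcSetId"] == drcSetId) and
                (eqSetId is UNDEFINED or e["eqSetId"] == eqSetId) and
                (downmixId is UNDEFINED or e["downmixId"] == downmixId)):
            # As in the pseudo-code, a later match overwrites an earlier one.
            dynLoudComp = e["dynLoudCompValue"]
    if dynLoudComp is UNDEFINED:
        return False, 0.0
    return True, dynLoudComp

def loudness_normalization_gain(targetLoudnessPresent, targetLoudness,
                                contentLoudness, dynLoudCompPresent,
                                dynLoudCompDb, gainDbMax=None, gainModDb=0.0):
    """Linear gain factor per the processing pseudo-code (gainNorm = 2^(dB/6))."""
    if targetLoudnessPresent:
        gainDb = targetLoudness - contentLoudness
        if dynLoudCompPresent:
            gainDb += dynLoudCompDb
    else:
        gainDb = 0.0
    if gainDbMax is not None:
        gainDb = min(gainDb, gainDbMax)
    return 2.0 ** ((gainDb + gainModDb) / 6.0)
```

Note that 2^(dB/6) is a close approximation of the exact 10^(dB/20) (about 1.1225 versus 1.1220 per dB), which appears to be why the pseudo-code divides by 6 rather than using a base-10 power.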
Alternative Updated MPEG-D DRC Loudness Normalization Processing

Table 14: Alternative Loudness normalization processing

In cases where the alternative loudness normalization processing of Table 14 above is used, the loudness normalization processing pseudo-code described above may be replaced by the following alternative loudness normalization processing pseudo-code. Note that a default value of dynLoudCompDb, e.g., 0 dB, may be assumed to ensure that the value of dynLoudCompDb is defined, even for cases where dynamic loudness processing metadata is not present in the bitstream.

    /* Alternative Processing */
    if (targetLoudnessPresent) {
        loudnessNormalizationGainDb = targetLoudness - contentLoudness + dynLoudCompDb;
    } else {
        loudnessNormalizationGainDb = 0.0;
    }
    if (loudnessNormalizationGainDbMaxPresent) {
        loudnessNormalizationGainDb = min(loudnessNormalizationGainDb,
                                          loudnessNormalizationGainDbMax);
    }
    if (loudnessNormalizationGainModificationDbPresent) {
        gainNorm = pow(2.0, (loudnessNormalizationGainDb +
                             loudnessNormalizationGainModificationDb) / 6);
    } else {
        gainNorm = pow(2.0, loudnessNormalizationGainDb / 6);
    }
    for (t = 0; t < drcFrameSize; t++) {
        for (c = 0; c < nChannels; c++) {
            audioSample[c][t] = gainNorm * audioSample[c][t];
        }
    }

Alternative Syntax 3

In some cases, it may be beneficial to combine the syntax described above in Table 1 – Table 5 with Alternative Syntax 1 described above in Table 6 – Table 8, as shown in the following Tables, to allow increased flexibility for transmission of the dynamic loudness processing values.
    Syntax                                                  No. of bits   Mnemonic
    loudnessInfoSetExtension()
    {
        loudnessInfoSetExtType;                             4             uimsbf
        while (loudnessInfoSetExtType != UNIDRCLOUDEXT_TERM) {
            bitSizeLen;                                     4             uimsbf
            extSizeBits = bitSizeLen + 4;
            bitSize;                                        extSizeBits   uimsbf
            extBitSize = bitSize + 1;
            switch (loudnessInfoSetExtType) {
            case UNIDRCLOUDEXT_EQ:
                loudnessInfoV1AlbumCount;                   6             uimsbf
                loudnessInfoV1Count;                        6             uimsbf
                for (i=0; i<loudnessInfoV1AlbumCount; i++) {
                    loudnessInfoV1();
                }
                for (i=0; i<loudnessInfoV1Count; i++) {
                    loudnessInfoV1();
                }
                break;
            case UNIDRCLOUDEXT_DYNLOUDCOMP:
                loudnessInfoV2Count;                        6             uimsbf
                for (i=0; i<loudnessInfoV2Count; i++) {
                    loudnessInfoV2();
                }
                break;
            case UNIDRCLOUDEXT_DYNLOUDCOMP2:
                dynLoudCompCount;                           6             uimsbf
                for (i=0; i<dynLoudCompCount; i++) {
                    dynLoudComp();
                }
                break;
            /* add future extensions here */
            default:
                for (i=0; i<extBitSize; i++) {
                    otherBit;                               1             bslbf
                }
            }
            loudnessInfoSetExtType;                         4             uimsbf
        }
    }

Table 15: Alternate Syntax 3 of loudnessInfoSetExtension()-element

Table 16: Alternate Syntax 3 of loudnessInfoSet extension types
Table 17: Alternate Syntax 3 of loudnessInfoV2() payload

Alternate Syntax 3 of dynLoudComp():

Table 18: Alternate Syntax 3 of dynLoudComp()-element

Interface Extension Syntax

In some cases, it may be beneficial to allow control, e.g., by an end user, of whether or not dynamic loudness processing is performed, even when dynamic loudness processing information is present in a received bitstream. Such control may be provided by updating the MPEG-D DRC interface syntax to include an additional interface extension (e.g., UNIDRCINTERFACEEXT_DYNLOUD) which contains a revised loudness normalization control interface payload (e.g., loudnessNormalizationControlInterfaceV1()) as shown in the following tables.
Table 19: Syntax of uniDRCInterfaceExtension() payload

Table 20: Syntax of loudnessNormalizationControlInterfaceV1() payload
Table 21: UniDRC Interface extension types

Interface Extension Semantics

loudnessNormalizationOn    This flag signals if loudness normalization processing should be switched on or off. The default value is 0. If loudnessNormalizationOn == 0, loudnessNormalizationGainDb shall be set to 0 dB.

targetLoudness    This field contains the desired output loudness. The values are encoded according to the following Table.

Table 22: Coding of targetLoudness field

dynLoudnessNormalizationOn    This flag signals if dynamic loudness normalization processing should be switched on or off. The default value is 0. If dynLoudnessNormalizationOn == 0, dynLoudnessNormalizationGainDb shall be set to 0 dB.

Interpretation

Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the disclosure discussions utilizing terms such as “processing”, “computing”, “determining”, “analyzing” or the like refer to the actions and/or processes of a computer or computing system, or similar electronic computing devices, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities. In a similar manner, the term “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory, to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A “computer” or a “computing machine” or a “computing platform” may include one or more processors. The methodologies described herein are, in one example embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that, when executed by one or more of the processors, carry out at least one of the methods described herein.
Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. A bus subsystem may be included for communicating between the components. The processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth. The processing system may also encompass a storage system such as a disk drive unit. The processing system in some configurations may include a sound output device and a network interface device. The memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein. Note that when the method includes several elements, e.g., several steps, no ordering of such elements is implied, unless specifically stated. The software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system. Thus, the memory and the processor also constitute a computer-readable carrier medium carrying computer-readable code. Furthermore, a computer-readable carrier medium may form, or be included in, a computer program product.
In alternative example embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked, to other processor(s) in a networked deployment; the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a personal computer (PC), a tablet PC, a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Note that the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Thus, one example embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of a web server arrangement. Thus, as will be appreciated by those skilled in the art, example embodiments of the present disclosure may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer-readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method. Accordingly, aspects of the present disclosure may take the form of a method, an entirely hardware example embodiment, an entirely software example embodiment or an example embodiment combining software and hardware aspects.
Furthermore, the present disclosure may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium. The software may further be transmitted or received over a network via a network interface device. While the carrier medium is in an example embodiment a single medium, the term “carrier medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that causes the one or more processors to perform any one or more of the methodologies of the present disclosure. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. For example, the term “carrier medium” shall accordingly be taken to include, but not be limited to, solid-state memories; a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor or one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions.
It will be understood that the steps of methods discussed are performed in one example embodiment by an appropriate processor (or processors) of a processing (e.g., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the disclosure is not limited to any particular implementation or programming technique and that the disclosure may be implemented using any appropriate techniques for implementing the functionality described herein. The disclosure is not limited to any particular programming language or operating system. Reference throughout this disclosure to “one embodiment”, “some embodiments” or “an example embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in some embodiments” or “in an example embodiment” in various places throughout this disclosure are not necessarily all referring to the same example embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more example embodiments. As used herein, unless otherwise specified, the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common object merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others.
Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising. It should be appreciated that in the above description of example embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single example embodiment, Fig., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed example embodiment. Thus, the claims following the Description are hereby expressly incorporated into this Description, with each claim standing on its own as a separate example embodiment of this disclosure. Furthermore, while some example embodiments described herein include some but not other features included in other example embodiments, combinations of features of different example embodiments are meant to be within the scope of the disclosure, and form different example embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed example embodiments can be used in any combination. In the description provided herein, numerous specific details are set forth. 
However, it is understood that example embodiments of the disclosure may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Thus, while there have been described what are believed to be the best modes of the disclosure, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the disclosure, and it is intended to claim all such changes and modifications as fall within the scope of the disclosure. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added to or deleted from methods described within the scope of the present disclosure.

In the following, enumerated example embodiments (EEEs) describe some structures, features, and functionalities of some aspects of the example embodiments disclosed herein.

EEE1. A method of metadata-based dynamic processing of audio data for playback, the method including processes of: (a) receiving, by a decoder, a bitstream including audio data and metadata for dynamic loudness adjustment; (b) decoding, by the decoder, the audio data and the metadata to obtain decoded audio data and the metadata; (c) determining, by the decoder, from the metadata, one or more processing parameters for dynamic loudness adjustment based on a playback condition; (d) applying the determined one or more processing parameters to the decoded audio data to obtain processed audio data; and (e) outputting the processed audio data for playback.

EEE2. The method according to EEE1, wherein the metadata is indicative of processing parameters for dynamic loudness adjustment for a plurality of playback conditions.

EEE3.
The method according to EEE1 or EEE2, wherein said determining the one or more processing parameters further includes determining one or more processing parameters for dynamic range compression, DRC, based on the playback condition.

EEE4. The method according to any one of EEE1 to EEE3, wherein the playback condition includes one or more of a device type of the decoder, characteristics of a playback device, characteristics of a loudspeaker, a loudspeaker setup, characteristics of background noise, characteristics of ambient noise and characteristics of the acoustic environment.

EEE5. The method according to any one of EEE1 to EEE4, wherein process (c) further includes selecting, by the decoder, at least one of a set of DRC sequences, DRCSet, a set of equalizer parameters, EQSet, and a downmix, corresponding to the playback condition.

EEE6. The method according to EEE5, wherein process (c) further includes identifying a metadata identifier indicative of the at least one selected DRCSet, EQSet, and downmix to determine the one or more processing parameters from the metadata.

EEE7. The method according to any one of EEE1 to EEE6, wherein the metadata includes one or more processing parameters relating to average loudness values and optionally one or more processing parameters relating to dynamic range compression characteristics.

EEE8. The method according to any one of EEE1 to EEE7, wherein the bitstream further includes additional metadata for static loudness adjustment to be applied to the decoded audio data.

EEE9. The method according to any one of EEE1 to EEE8, wherein the bitstream is an MPEG-D DRC bitstream and the presence of metadata is signaled based on MPEG-D DRC bitstream syntax.

EEE10. The method according to EEE9, wherein a loudnessInfoSetExtension()-element is used to carry the metadata as a payload.

EEE11.
EEE11. The method according to any one of EEE1 to EEE10, wherein the metadata comprises one or more metadata payloads, wherein each metadata payload includes a plurality of sets of parameters and identifiers, with each set including at least one of a DRCSet identifier, drcSetId, an EQSet identifier, eqSetId, and a downmix identifier, downmixId, in combination with one or more processing parameters relating to the identifiers in the set.

EEE12. The method according to EEE11 when depending on EEE5, wherein process (c) involves selecting a set among the plurality of sets in the payload based on the at least one DRCSet, EQSet, and downmix selected by the decoder, and wherein the one or more processing parameters determined in process (c) are the one or more processing parameters relating to the identifiers in the selected set.

EEE13. A decoder for metadata-based dynamic processing of audio data for playback, wherein the decoder comprises one or more processors and non-transitory memory configured to perform a method including processes of: (a) receiving, by the decoder, a bitstream including audio data and metadata for dynamic loudness adjustment; (b) decoding, by the decoder, the audio data and the metadata to obtain decoded audio data and the metadata; (c) determining, by the decoder, from the metadata, one or more processing parameters for dynamic loudness adjustment based on a playback condition; (d) applying the determined one or more processing parameters to the decoded audio data to obtain processed audio data; and (e) outputting the processed audio data for playback.
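The decoder-side selection described in EEE11 and EEE12 may be illustrated by the following sketch (Python). All names are hypothetical, and the single broadband gain per set is a deliberate simplification for illustration only: an actual MPEG-D DRC payload carries structured, time-varying parameters rather than one scalar.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MetadataSet:
    # Identifiers tying this set to a decoder-side selection (cf. EEE11);
    # the field names mirror drcSetId, eqSetId and downmixId from the text.
    drc_set_id: int
    eq_set_id: int
    downmix_id: int
    loudness_gain_db: float  # hypothetical dynamic-loudness parameter

def select_set(payload: List[MetadataSet], drc_set_id: int,
               eq_set_id: int, downmix_id: int) -> Optional[MetadataSet]:
    # Cf. EEE12: pick the set whose identifiers match the DRCSet, EQSet
    # and downmix the decoder selected for the current playback condition.
    for s in payload:
        if (s.drc_set_id, s.eq_set_id, s.downmix_id) == (drc_set_id, eq_set_id, downmix_id):
            return s
    return None

def apply_dynamic_loudness(samples: List[float], gain_db: float) -> List[float]:
    # Cf. EEE1, process (d): apply the determined parameter (here a
    # broadband gain in dB) to the decoded audio samples.
    lin = 10.0 ** (gain_db / 20.0)
    return [x * lin for x in samples]
```

A decoder would call select_set() with the identifiers resulting from its DRCSet/EQSet/downmix choice and, if a matching set exists, feed its parameters to apply_dynamic_loudness().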
EEE14. A method of encoding audio data and metadata for dynamic loudness adjustment into a bitstream, the method including processes of: (a) inputting original audio data into a loudness leveler for loudness processing to obtain, as an output from the loudness leveler, loudness processed audio data; (b) generating metadata for dynamic loudness adjustment based on the loudness processed audio data and the original audio data; and (c) encoding the original audio data and the metadata into the bitstream.

EEE15. The method according to EEE14, wherein the method further includes generating additional metadata for static loudness adjustment to be used by a decoder.

EEE16. The method according to EEE14 or EEE15, wherein process (b) includes a comparison of the loudness processed audio data with the original audio data, and wherein the metadata is generated based on a result of said comparison.

EEE17. The method according to EEE16, wherein process (b) further includes measuring the loudness over one or more pre-defined time periods, and wherein the metadata is generated further based on the measured loudness.

EEE18. The method according to EEE17, wherein the measuring comprises measuring overall loudness of the audio data.

EEE19. The method according to EEE17, wherein the measuring comprises measuring loudness of dialogue in the audio data.

EEE20. The method according to any one of EEE14 to EEE19, wherein the bitstream is an MPEG-D DRC bitstream and the presence of the metadata is signaled based on MPEG-D DRC bitstream syntax.

EEE21. The method according to EEE20, wherein a loudnessInfoSetExtension()-element is used to carry the metadata as a payload.
EEE22. The method according to any one of EEE14 to EEE21, wherein the metadata comprises one or more metadata payloads, wherein each metadata payload includes a plurality of sets of parameters and identifiers, with each set including at least one of a DRCSet identifier, drcSetId, an EQSet identifier, eqSetId, and a downmix identifier, downmixId, in combination with one or more processing parameters relating to the identifiers in the set, and wherein the one or more processing parameters are parameters for dynamic loudness adjustment by a decoder.

EEE23. The method according to EEE22, wherein the at least one of the drcSetId, the eqSetId, and the downmixId is related to at least one of a set of DRC sequences, DRCSet, a set of equalizer parameters, EQSet, and a downmix, to be selected by the decoder.

EEE24. An encoder for encoding in a bitstream original audio data and metadata for dynamic loudness adjustment, wherein the encoder comprises one or more processors and non-transitory memory configured to perform a method including the processes of: (a) inputting original audio data into a loudness leveler for loudness processing to obtain, as an output from the loudness leveler, loudness processed audio data; (b) generating the metadata for dynamic loudness adjustment based on the loudness processed audio data and the original audio data; and (c) encoding the original audio data and the metadata into the bitstream.

EEE25. A system of an encoder for encoding in a bitstream original audio data and metadata for dynamic loudness adjustment and/or dynamic range compression, DRC, according to EEE24 and a decoder for metadata-based dynamic processing of audio data for playback according to EEE13.

EEE26. A computer program product comprising a computer-readable storage medium with instructions that, when executed by a device having processing capability, cause the device to carry out the method according to any one of EEE1 to EEE12 or EEE14 to EEE23.
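The encoder-side derivation of EEE14 and EEE16 may be sketched as follows (Python). The power-based loudness proxy and the single-gain result are assumptions purely for illustration; a real encoder would use a standardized loudness measure (e.g. one aligned with ITU-R BS.1770) and would emit time-varying parameters over the pre-defined periods of EEE17.

```python
import math
from typing import List

def loudness_db(samples: List[float]) -> float:
    # Power-based loudness proxy in dB. This stand-in for a standardized
    # loudness measurement is an assumption for illustration only.
    power = sum(x * x for x in samples) / len(samples)
    return 10.0 * math.log10(power) if power > 0.0 else -100.0

def derive_dynamic_loudness_gain(original: List[float],
                                 leveled: List[float]) -> float:
    # Cf. EEE16: derive the metadata (here reduced to a single gain in dB)
    # by comparing the loudness-processed output of the leveler with the
    # original audio, so a decoder can reproduce the leveler's effect.
    return loudness_db(leveled) - loudness_db(original)
```

The derived gain would then be packed, together with the identifiers of EEE22, into the metadata payload encoded alongside the original (unleveled) audio.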
EEE27. A computer-readable storage medium storing the computer program product of EEE26.

EEE28. The method according to any one of EEE1 to EEE12, further comprising receiving, by the decoder, through an interface, an indication of whether or not to perform the metadata-based dynamic processing of audio data for playback, and, when the decoder receives an indication not to perform the metadata-based dynamic processing of audio data for playback, bypassing at least the step of applying the determined one or more processing parameters to the decoded audio data.

EEE29. The method according to EEE28, wherein, until the decoder receives, through the interface, the indication of whether or not to perform the metadata-based dynamic processing of audio data for playback, the decoder bypasses at least the step of applying the determined one or more processing parameters to the decoded audio data.

EEE30. The method according to any one of EEE1 to EEE12, EEE28, or EEE29, wherein the metadata is indicative of processing parameters for dynamic loudness adjustment for a plurality of playback conditions, and the metadata further includes a parameter specifying a loudness measurement method used for deriving a processing parameter of the plurality of processing parameters.

EEE31. The method according to any one of EEE1 to EEE12, or EEE28 to EEE30, wherein the metadata is indicative of processing parameters for dynamic loudness adjustment for a plurality of playback conditions, and the metadata further includes a parameter specifying a loudness measurement system used for measuring a processing parameter of the plurality of processing parameters.
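The bypass behavior of EEE28 and EEE29 may be illustrated by the following sketch (Python); the function name and the tri-state indication are hypothetical modeling choices, not part of any standardized interface.

```python
from typing import List, Optional

def process_for_playback(decoded: List[float], gain_db: float,
                         apply_indication: Optional[bool]) -> List[float]:
    # 'apply_indication' models the indication received through the
    # interface of EEE28. None means no indication has been received yet;
    # per EEE29 the decoder then bypasses the application step, as it also
    # does when the indication is False, outputting the decoded audio as-is.
    if apply_indication is not True:
        return list(decoded)
    lin = 10.0 ** (gain_db / 20.0)
    return [x * lin for x in decoded]
```

Only an explicit affirmative indication enables the dynamic processing; absence of an indication defaults to the bypass path required by EEE29.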