

Title:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING
Document Type and Number:
WIPO Patent Application WO/2024/074751
Kind Code:
A1
Abstract:
A method comprising: decoding (500) encoded samples of blocks of video sample data; determining (502) two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; performing (504), for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and computing (506), based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data.

Inventors:
BLASI SAVERIO (GB)
LAINEMA JANI (FI)
Application Number:
PCT/FI2023/050482
Publication Date:
April 11, 2024
Filing Date:
August 23, 2023
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04N19/593; H04N19/11; H04N19/176
Foreign References:
US20190215521A12019-07-11
Other References:
ABDOLI, M ET AL.: "Decoder-Side Intra Mode Derivation for Next Generation Video Coding", IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, 6 July 2020 (2020-07-06), XP033808189, Retrieved from the Internet [retrieved on 20231101], DOI: 10.1109/ICME46284.2020.9102799
E. MORA (ATEME), A. NASRALLAH (ATEME), M. RAULET (ATEME): "CE3-related: Decoder-side Intra Mode Derivation", 12. JVET MEETING; 20181003 - 20181012; MACAO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-L0164, 6 October 2018 (2018-10-06), XP030195049
X. XIU (INTERDIGITAL), Y. HE (INTERDIGITAL), Y. YE (INTERDIGITAL): "Decoder-side intra mode derivation", 3. JVET MEETING; 20160526 - 20160601; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-C0061, 26 May 2016 (2016-05-26), XP030247184
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:
CLAIMS:

1. An apparatus comprising means for decoding encoded samples of blocks of video sample data; means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; means for performing, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and means for computing, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data.

2. The apparatus according to claim 1, comprising means for performing, as a part of the decoder-side intra mode derivation process, an analysis of directionality of the samples belonging to at least one support region.

3. The apparatus according to claim 1 or 2, comprising means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation process.

4. The apparatus according to claim 3, comprising means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

5. The apparatus according to claim 3 or 4, comprising means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, a strength of location-dependency of the one or more intra-prediction modes on the specific support region.

6. The apparatus according to any preceding claim, comprising means for deriving, as a part of the decoder-side intra mode derivation process, a histogram of gradients of the samples of at least one support region.

7. The apparatus according to any preceding claim, comprising means for deriving, based on the decoder-side intra mode derivation parameters specific to each support region, a decoder-side intra mode derivation mode specific to each support region.

8. The apparatus according to claim 7, comprising means for computing, based on the decoder-side intra mode derivation mode specific to each support region, a prediction for the current block.

9. An apparatus comprising means for decoding encoded samples of blocks of video sample data; means for performing a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation parameters; means for computing at least two predictors for the block of video sample data based on the decoder-side intra mode derivation parameters specific to each support region; and means for combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights.

10. The apparatus according to claim 9, comprising means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; and means for performing the decoder-side intra mode derivation process to derive decoder-side intra mode derivation parameters specific to each support region.

11. The apparatus according to claim 9, comprising means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

12. The apparatus according to claim 9, wherein said sample-based weights depend on the location of a sample in the block.

13. The apparatus according to claim 9, wherein said sample-based weights depend on the location-dependency of a given intra-prediction mode on a specific support region.

14. The apparatus according to any of claims 9 - 13, comprising means for inferring a usage of sample-based weighting based on characteristics of the block of video data.

15. The apparatus according to claim 14, comprising means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

16. The apparatus according to claim 14, wherein the sample-based weighting is inferred depending on a strength of the location-dependency of a decoder-side intra mode derivation mode on a specific support region.

17. The apparatus according to any preceding claim, comprising means for computing a prediction for the current block using one or more pre-defined intra-prediction modes in combination with one or more intra-prediction modes derived using the decoder-side intra mode derivation process.

18. The apparatus according to claim 17, comprising means for using sample-based weights to compute the prediction for the current block, wherein the weights used for the one or more pre-defined intra-prediction modes depend on the decoder-side intra mode derivation parameters specific to each support region.

19. A method comprising decoding encoded samples of blocks of video sample data; determining two or more support regions for a block of video sample data, where each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; performing, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and computing, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data.

20. A method comprising decoding encoded samples of blocks of video sample data; performing a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; obtaining one or more intra-prediction modes from the decoder-side intra mode derivation parameters; computing at least two predictors for the block of video sample data based on the intra-prediction modes; and combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights.

Description:
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING

TECHNICAL FIELD

The present invention relates to an apparatus, a method and a computer program for video coding and decoding.

BACKGROUND

In video coding, decoder-side intra mode derivation (DIMD) techniques have been shown to provide a beneficial impact on state-of-the-art video codecs. These methods typically rely on an inference process operating on a support region formed of previously reconstructed samples in the surroundings of the current block. Gradient estimation techniques can be used to predict the directionality and strength of edges in the support area. These parameters are used to infer intra-prediction directions and finally derive directional intra-prediction (DIMD) modes, which are used to predict the current block, either based on a single DIMD mode or a fusion of two or three DIMD modes. The usage of DIMD is typically signalled for a given block. When DIMD is signalled to be used, no further information is required to perform the intra prediction for the current block, and the intra prediction mode is instead inferred. Thus, DIMD can successfully reduce the overhead needed to signal a given intra prediction mode. However, when using the known DIMD methods, the resulting prediction for the current block is often sub-optimal.

SUMMARY

Now in order to at least alleviate the above problems, an enhanced method is introduced herein. The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.

An apparatus according to a first aspect comprises means for decoding encoded samples of blocks of video sample data; means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; means for performing, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and means for computing, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data.

According to an embodiment, the apparatus comprises means for performing, as a part of the decoder-side intra mode derivation process, an analysis of directionality of the samples belonging to at least one support region.

According to an embodiment, the apparatus comprises means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation process.

According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, a strength of location-dependency of the one or more intra-prediction modes on the specific support region.
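For illustration, the gradient-based derivation outlined in the background can be sketched in Python as follows. This is a minimal sketch only: the Sobel filters, the number of orientation bins and the mapping from gradient orientation to a mode index are simplifying assumptions, not the derivation used by any particular codec or claimed by this application.

import numpy as np

def derive_dimd_modes(support, num_bins=33, num_modes=2):
    # Build a histogram of gradients over a support region of
    # reconstructed samples and return the dominant orientation bins
    # as candidate intra-prediction modes.
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
    gy = gx.T
    hist = np.zeros(num_bins, dtype=np.int64)
    h, w = support.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = support[y - 1:y + 2, x - 1:x + 2].astype(np.int32)
            dx = int((win * gx).sum())
            dy = int((win * gy).sum())
            if dx == 0 and dy == 0:
                continue
            # Quantize the gradient orientation into one of num_bins
            # directional bins, weighting by the gradient amplitude.
            angle = np.arctan2(dy, dx) % np.pi
            bin_idx = min(int(angle / np.pi * num_bins), num_bins - 1)
            hist[bin_idx] += abs(dx) + abs(dy)
    # The strongest bins stand in for the derived DIMD modes.
    return list(np.argsort(hist)[::-1][:num_modes]), hist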
According to an embodiment, the apparatus comprises means for deriving, as a part of the decoder-side intra mode derivation process, a histogram of gradients of the samples of at least one support region.

According to an embodiment, the apparatus comprises means for deriving, based on the decoder-side intra mode derivation parameters specific to each support region, a decoder-side intra mode derivation mode specific to each support region.

According to an embodiment, the apparatus comprises means for computing, based on the decoder-side intra mode derivation mode specific to each support region, a prediction for the current block.

An apparatus according to a second aspect comprises means for decoding encoded samples of blocks of video sample data; means for performing a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation parameters; means for computing at least two predictors for the block of video sample data based on the decoder-side intra mode derivation parameters specific to each support region; and means for combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights.

According to an embodiment, the apparatus comprises means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; and means for performing the decoder-side intra mode derivation process to derive decoder-side intra mode derivation parameters specific to each support region.

According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

According to an embodiment, said sample-based weights depend on the location of a sample in the block.

According to an embodiment, said sample-based weights depend on the location-dependency of a given intra-prediction mode on a specific support region.

According to an embodiment, the apparatus comprises means for inferring a usage of sample-based weighting based on characteristics of the block of video data.

According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.

According to an embodiment, the sample-based weighting is inferred depending on a strength of the location-dependency of a decoder-side intra mode derivation mode on a specific support region.

According to an embodiment, the apparatus comprises means for computing a prediction for the current block using one or more pre-defined intra-prediction modes in combination with one or more intra-prediction modes derived using the decoder-side intra mode derivation process.

According to an embodiment, the apparatus comprises means for using sample-based weights to compute the prediction for the current block, wherein the weights used for the one or more pre-defined intra-prediction modes depend on the decoder-side intra mode derivation parameters specific to each support region.
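As an illustration of the sample-based blending of the second aspect, the sketch below combines two predictors with weights that vary by sample position. The linear, distance-based weight shapes are an assumption chosen only to show the mechanism; they are not a weighting mandated by the embodiments.

import numpy as np

def blend_predictors(pred_top, pred_left):
    # Each predictor is an HxW array. A predictor assumed to be derived
    # from the above support region is weighted more strongly near the
    # top edge; one assumed to come from the left region near the left edge.
    h, w = pred_top.shape
    w_top = (h - np.arange(h, dtype=np.float64)).reshape(h, 1)
    w_left = (w - np.arange(w, dtype=np.float64)).reshape(1, w)
    total = w_top + w_left                # broadcasts to HxW
    return (w_top * pred_top + w_left * pred_left) / total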
A method according to a third aspect comprises decoding encoded samples of blocks of video sample data; determining two or more support regions for a block of video sample data, where each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; performing, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and computing, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data.

A method according to a fourth aspect comprises decoding encoded samples of blocks of video sample data; performing a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; obtaining one or more intra-prediction modes from the decoder-side intra mode derivation parameters; computing at least two predictors for the block of video sample data based on the intra-prediction modes; and combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights.

The apparatuses and the computer readable storage mediums stored with code thereon, as described above, are thus arranged to carry out the above methods and one or more of the embodiments related thereto.

BRIEF DESCRIPTION OF THE DRAWINGS

For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:

Figure 1 shows schematically an electronic device employing embodiments of the invention;
Figure 2 shows schematically a user equipment suitable for employing embodiments of the invention;
Figure 3 further shows schematically electronic devices employing embodiments of the invention connected using wireless and wired network connections;
Figures 4a and 4b show schematically an encoder and a decoder suitable for implementing embodiments of the invention;
Figure 5 shows a flow chart of a decoding method according to an embodiment;
Figure 6 shows an example of a plurality of support regions for predicting the current image block;
Figure 7 illustrates an example of using gradient filters for analyzing the directionality of pixels belonging to a support region;
Figure 8 shows a flow chart of a decoding method according to another embodiment;
Figure 9 shows an example of sample-based blending of at least two predictors according to an embodiment; and
Figure 10 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented.

DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS

The following describes in further detail suitable apparatus and possible mechanisms for prediction of chroma samples. In this regard reference is first made to Figures 1 and 2, where Figure 1 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an exemplary apparatus or electronic device 50, which may incorporate a codec according to an embodiment of the invention. Figure 2 shows a layout of an apparatus according to an example embodiment. The elements of Figs. 1 and 2 will be explained next.

The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system.
However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require encoding and decoding, or encoding, or decoding of video images.

The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 further may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as a solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera capable of recording or capturing images and/or video. The apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as, for example, a Bluetooth wireless connection or a USB/firewire wired connection.

The apparatus 50 may comprise a controller 56, processor or processor circuitry for controlling the apparatus 50. The controller 56 may be connected to memory 58 which in embodiments of the invention may store both data in the form of image and audio data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller.

The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader, for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network. The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals, for example for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).

The apparatus 50 may comprise a camera capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video image data for processing from another device prior to transmission and/or storage. The apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding.
The structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.

With respect to Figure 3, an example of a system within which embodiments of the present invention can be utilized is shown. The system 10 comprises multiple communication devices which can communicate through one or more networks. The system 10 may comprise any combination of wired or wireless networks including, but not limited to, a wireless cellular telephone network (such as a GSM, UMTS, CDMA network etc.), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.

The system 10 may include both wired and wireless communication devices and/or apparatus 50 suitable for implementing embodiments of the invention. For example, the system shown in Figure 3 shows a mobile telephone network 11 and a representation of the internet 28. Connectivity to the internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.

The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, and a notebook computer 22. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.

The embodiments may also be implemented in a set-top box, i.e. a digital TV receiver, which may/may not have a display or wireless capabilities, in tablets or (laptop) personal computers (PC), which have hardware or software or a combination of encoder/decoder implementations, in various operating systems, and in chipsets, processors, DSPs and/or embedded systems offering hardware/software based coding.

Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.

The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time division multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11 and any similar wireless communication technology. A communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.
In telecommunications and data networks, a channel may refer either to a physical channel or to a logical channel. A physical channel may refer to a physical transmission medium such as a wire, whereas a logical channel may refer to a logical connection over a multiplexed medium, capable of conveying several logical channels. A channel may be used for conveying an information signal, for example a bitstream, from one or several senders (or transmitters) to one or several receivers.

An MPEG-2 transport stream (TS), specified in ISO/IEC 13818-1 or equivalently in ITU-T Recommendation H.222.0, is a format for carrying audio, video, and other media as well as program metadata or other metadata, in a multiplexed stream. A packet identifier (PID) is used to identify an elementary stream (a.k.a. packetized elementary stream) within the TS. Hence, a logical channel within an MPEG-2 TS may be considered to correspond to a specific PID value.

Available media file format standards include the ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF) and the file format for NAL unit structured video (ISO/IEC 14496-15), which derives from the ISOBMFF.

A video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. A video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec. Typically, the encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at a lower bitrate).

Typical hybrid video encoders, for example many encoder implementations of ITU-T H.263 and H.264, encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).

In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (IBC; a.k.a. intra-block-copy prediction), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively.
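As a concrete illustration of the second phase described above (transform, quantize, entropy code), the following sketch transforms a square residual block and quantizes its coefficients. The orthonormal DCT-II and the plain uniform quantizer are illustrative stand-ins for a real codec's integer transform and rate-controlled quantizer, not the method of any specific standard.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n).reshape(n, 1)
    i = np.arange(n).reshape(1, n)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def code_residual(residual, qstep):
    # residual: square NxN prediction error block.
    d = dct_matrix(residual.shape[0])
    coeffs = d @ residual @ d.T          # forward 2-D transform
    levels = np.round(coeffs / qstep)    # uniform quantization (the lossy step)
    recon = d.T @ (levels * qstep) @ d   # dequantization + inverse transform
    return levels, recon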
In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction, provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.

Motion compensation can be performed either with full sample or sub-sample accuracy. In the case of full sample accurate motion compensation, motion can be represented as a motion vector with integer values for horizontal and vertical displacement, and the motion compensation process effectively copies samples from the reference picture using those displacements. In the case of sub-sample accurate motion compensation, motion vectors are represented by fractional or decimal values for the horizontal and vertical components of the motion vector. When a motion vector refers to a non-integer position in the reference picture, a sub-sample interpolation process is typically invoked to calculate predicted sample values based on the reference samples and the selected sub-sample position. The sub-sample interpolation process typically consists of horizontal filtering compensating for horizontal offsets with respect to full sample positions, followed by vertical filtering compensating for vertical offsets with respect to full sample positions. However, the vertical processing can also be done before the horizontal processing in some environments.

Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures. Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in the spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.

One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
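A classic predictor of the kind just mentioned is the component-wise median of the motion vectors of neighboring blocks; the sketch below shows it. The three-neighbor arrangement is illustrative, in the spirit of an H.264/AVC-style median predictor, rather than the normative derivation of any standard.

def median_mv_predictor(mv_a, mv_b, mv_c):
    # Component-wise median of three neighboring motion vectors; only the
    # difference between the actual motion vector and this predictor
    # would then be entropy coded.
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mv_a[0], mv_b[0], mv_c[0]),
            med(mv_a[1], mv_b[1], mv_c[1]))

assert median_mv_predictor((4, -2), (3, 0), (5, 1)) == (4, 0)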
Figs. 4a and 4b show an encoder and a decoder suitable for employing embodiments of the invention. A video codec consists of an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. Typically, the encoder discards and/or loses some information in the original video sequence in order to represent the video in a more compact form (that is, at a lower bitrate). An example of an encoding process is illustrated in Figure 4a.

Figure 4a illustrates an image to be encoded (In); a predicted representation of an image block (P'n); a prediction error signal (Dn); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); a transform (T) and inverse transform (T-1); a quantization (Q) and inverse quantization (Q-1); entropy encoding (E); a reference frame memory (RFM); inter prediction (Pinter); intra prediction (Pintra); mode selection (MS) and filtering (F).

An example of a decoding process is illustrated in Figure 4b. Figure 4b illustrates a predicted representation of an image block (P'n); a reconstructed prediction error signal (D'n); a preliminary reconstructed image (I'n); a final reconstructed image (R'n); an inverse transform (T-1); an inverse quantization (Q-1); an entropy decoding (E-1); a reference frame memory (RFM); a prediction (either inter or intra) (P); and filtering (F).

Many hybrid video encoders encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).

Video codecs may also provide a transform skip mode, which the encoders may choose to use. In the transform skip mode, the prediction error is coded in a sample domain, for example by deriving a sample-wise difference value relative to certain adjacent samples and coding the sample-wise difference value with an entropy coder.

Entropy coding/decoding may be performed in many ways. For example, context-based coding/decoding may be applied, wherein both the encoder and the decoder modify the context state of a coding parameter based on previously coded/decoded coding parameters. Context-based coding may for example be context adaptive binary arithmetic coding (CABAC) or context-based variable length coding (CAVLC) or any similar entropy coding. Entropy coding/decoding may alternatively or additionally be performed using a variable length coding scheme, such as Huffman coding/decoding or Exp-Golomb coding/decoding. Decoding of coding parameters from an entropy-coded bitstream or codewords may be referred to as parsing.

The phrase along the bitstream (e.g. indicating along the bitstream) may be defined to refer to out-of-band transmission, signalling, or storage in a manner that the out-of-band data is associated with the bitstream. The phrase decoding along the bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signalling, or storage) that is associated with the bitstream. For example, an indication along the bitstream may refer to metadata in a container file that encapsulates the bitstream.
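Of the variable length schemes mentioned above, order-0 unsigned Exp-Golomb coding is simple enough to sketch in full: a value v >= 0 is written as the binary form of v + 1, preceded by one fewer zero bits than that binary string has bits.

def exp_golomb_encode(v):
    # Binary representation of v + 1, without the '0b' prefix.
    bits = bin(v + 1)[2:]
    # A prefix of len(bits) - 1 zeros signals the codeword length.
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(code):
    # Count leading zeros, read that many bits plus one, subtract 1.
    zeros = len(code) - len(code.lstrip("0"))
    return int(code[zeros:2 * zeros + 1], 2) - 1

assert [exp_golomb_encode(v) for v in range(5)] == ["1", "010", "011", "00100", "00101"]
assert exp_golomb_decode("00100") == 3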
The H.264/AVC standard was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of the International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of the International Organisation for Standardization (ISO) / International Electrotechnical Commission (IEC). The H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC). There have been multiple versions of the H.264/AVC standard, integrating new extensions or features to the specification. These extensions include Scalable Video Coding (SVC) and Multiview Video Coding (MVC).

Version 1 of the High Efficiency Video Coding (H.265/HEVC a.k.a. HEVC) standard was developed by the Joint Collaborative Team – Video Coding (JCT-VC) of VCEG and MPEG. The standard was published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC). Later versions of H.265/HEVC included scalable, multiview, fidelity range, three-dimensional, and screen content coding extensions, which may be abbreviated SHVC, MV-HEVC, REXT, 3D-HEVC, and SCC, respectively.

Versatile Video Coding (VVC) (MPEG-I Part 3), a.k.a. ITU-T H.266, is a video compression standard developed by the Joint Video Experts Team (JVET) of the Moving Picture Experts Group (MPEG) (formally ISO/IEC JTC1 SC29 WG11) and the Video Coding Experts Group (VCEG) of the International Telecommunication Union (ITU) to be the successor to HEVC/H.265.

Some key definitions, bitstream and coding structures, and concepts of H.264/AVC and HEVC are described in this section as an example of a video encoder, decoder, encoding method, decoding method, and a bitstream structure, wherein the embodiments may be implemented. Some of the key definitions, bitstream and coding structures, and concepts of H.264/AVC are the same as in HEVC – hence, they are described below jointly. The aspects of the invention are not limited to H.264/AVC or HEVC, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.

Similarly to many earlier video coding standards, the bitstream syntax and semantics as well as the decoding process for error-free bitstreams are specified in H.264/AVC and HEVC. The encoding process is not specified, but encoders must generate conforming bitstreams. Bitstream and decoder conformance can be verified with the Hypothetical Reference Decoder (HRD). The standards contain coding tools that help in coping with transmission errors and losses, but the use of the tools in encoding is optional and no decoding process has been specified for erroneous bitstreams.

The elementary unit for the input to an H.264/AVC or HEVC encoder and the output of an H.264/AVC or HEVC decoder, respectively, is a picture. A picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture. The source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
- Luma (Y) only (monochrome).
- Luma and two chroma (YCbCr or YCgCo).
- Green, Blue and Red (GBR, also known as RGB).
- Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).

In H.264/AVC and HEVC, a picture may either be a frame or a field. A frame comprises a matrix of luma samples and possibly the corresponding chroma samples. A field is a set of alternate sample rows of a frame and may be used as encoder input, when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or chroma sample arrays may be subsampled when compared to luma sample arrays. Chroma formats may be summarized as follows:
- In monochrome sampling there is only one sample array, which may be nominally considered the luma array.
- In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.
- In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
- In 4:4:4 sampling when no separate color planes are in use, each of the two chroma arrays has the same height and width as the luma array.

In H.264/AVC and HEVC, it is possible to code sample arrays as separate color planes into the bitstream and respectively decode separately coded color planes from the bitstream. When separate color planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.

A partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets. When describing the operation of HEVC encoding and/or decoding, the following terms may be used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A CU with the maximum allowed size may be named an LCU (largest coding unit) or a coding tree unit (CTU), and the video picture is divided into non-overlapping LCUs.

A CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU. Typically, a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes. Each PU and TU can be further split into smaller PUs and TUs in order to increase the granularity of the prediction and prediction error coding processes, respectively. Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g.
motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs). Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It is typically signalled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU. The division of the image into CUs, and the division of CUs into PUs and TUs, is typically signalled in the bitstream, allowing the decoder to reproduce the intended structure of these units.

In HEVC, a picture can be partitioned in tiles, which are rectangular and contain an integer number of LCUs. In HEVC, the partitioning to tiles forms a regular grid, where the heights and widths of tiles differ from each other by one LCU at the maximum. In HEVC, a slice is defined to be an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. In HEVC, a slice segment is defined to be an integer number of coding tree units ordered consecutively in the tile scan and contained in a single NAL unit. The division of each picture into slice segments is a partitioning. In HEVC, an independent slice segment is defined to be a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment, and a dependent slice segment is defined to be a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order. In HEVC, a slice header is defined to be the slice segment header of the independent slice segment that is a current slice segment or is the independent slice segment that precedes a current dependent slice segment, and a slice segment header is defined to be a part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment. The CUs are scanned in the raster scan order of LCUs within tiles or within a picture, if tiles are not in use. Within an LCU, the CUs have a specific scan order.

The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (the inverse operation of the prediction error coding, recovering the quantized prediction error signal in the spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as a prediction reference for the forthcoming frames in the video sequence. The filtering may for example include one or more of the following: deblocking, sample adaptive offset (SAO), and/or adaptive loop filtering (ALF). H.264/AVC includes deblocking, whereas HEVC includes both deblocking and SAO.
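The reconstruction step just described (summing the prediction and the decoded prediction error, then clipping to the valid sample range before any in-loop filtering) amounts to the following sketch:

import numpy as np

def reconstruct_block(pred, resid, bit_depth=8):
    # Sample-wise sum of prediction and prediction error, clipped to the
    # range representable at the given bit depth; in-loop filters such as
    # deblocking, SAO or ALF would be applied afterwards.
    s = pred.astype(np.int32) + resid.astype(np.int32)
    return np.clip(s, 0, (1 << bit_depth) - 1)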
In typical video codecs the motion information is indicated with motion vectors associated with each motion compensated image block, such as a prediction unit. Each of these motion vectors represents the displacement of the image block in the picture to be coded (on the encoder side) or decoded (on the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, they are typically coded differentially with respect to block-specific predicted motion vectors. In typical video codecs the predicted motion vectors are created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signalling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, it can be predicted which reference picture(s) are used for motion-compensated prediction, and this prediction information may be represented for example by a reference index of a previously coded/decoded picture. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture.

Moreover, typical high efficiency video codecs employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes a motion vector and a corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signalled among a motion field candidate list filled with the motion field information of available adjacent/co-located blocks.

In typical video codecs the prediction residual after motion compensation is first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation among the residual, and the transform can in many cases help reduce this correlation and provide more efficient coding.

Video coding standards and specifications may allow encoders to divide a coded picture into coded slices or alike. In H.264/AVC and HEVC, in-picture prediction is typically disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture into independently decodable pieces, and slices are therefore often regarded as elementary units for transmission. In many cases, encoders may indicate in the bitstream which types of in-picture prediction are turned off across slice boundaries, and the decoder operation takes this information into account for example when concluding which prediction sources are available. For example, samples from a neighboring CU may be regarded as unavailable for intra prediction, if the neighboring CU resides in a different slice.
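That availability rule can be sketched as a one-line check; the argument names and the assumption that in-picture prediction is disabled across slices are illustrative.

def neighbor_available_for_intra(cur_slice_id, nbr_slice_id, nbr_decoded):
    # A neighboring CU's samples can serve as an intra prediction source
    # only if they are already decoded and not across a slice boundary.
    return nbr_decoded and cur_slice_id == nbr_slice_id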
An elementary unit for the output of an H.264/AVC or HEVC encoder and the input of an H.264/AVC or HEVC decoder, respectively, is a Network Abstraction Layer (NAL) unit. For transport over packet-oriented networks or storage into structured files, NAL units may be encapsulated into packets or similar structures. A bytestream format has been specified in H.264/AVC and HEVC for transmission or storage environments that do not provide framing structures. The bytestream format separates NAL units from each other by attaching a start code in front of each NAL unit. To avoid false detection of NAL unit boundaries, encoders run a byte-oriented start code emulation prevention algorithm, which adds an emulation prevention byte to the NAL unit payload if a start code would have occurred otherwise. In order to enable straightforward gateway operation between packet- and stream-oriented systems, start code emulation prevention may always be performed regardless of whether the bytestream format is in use or not.

A NAL unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP interspersed as necessary with emulation prevention bytes. A raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit. An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.

NAL units consist of a header and payload. In H.264/AVC and HEVC, the NAL unit header indicates the type of the NAL unit. In HEVC, a two-byte NAL unit header is used for all specified NAL unit types. The NAL unit header contains one reserved bit, a six-bit NAL unit type indication, a three-bit nuh_temporal_id_plus1 indication for temporal level (may be required to be greater than or equal to 1) and a six-bit nuh_layer_id syntax element. The temporal_id_plus1 syntax element may be regarded as a temporal identifier for the NAL unit, and a zero-based TemporalId variable may be derived as follows: TemporalId = temporal_id_plus1 – 1. The abbreviation TID may be used interchangeably with the TemporalId variable. TemporalId equal to 0 corresponds to the lowest temporal level. The value of temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes. The bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming. Consequently, a picture having TemporalId equal to tid_value does not use any picture having a TemporalId greater than tid_value as an inter prediction reference. A sub-layer or a temporal sub-layer may be defined to be a temporal scalable layer (or a temporal layer, TL) of a temporal scalable bitstream, consisting of VCL NAL units with a particular value of the TemporalId variable and the associated non-VCL NAL units. nuh_layer_id can be understood as a scalability layer identifier.

NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units are typically coded slice NAL units. In HEVC, VCL NAL units contain syntax elements representing one or more CUs. A non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit.
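The temporal sub-layer property described above (dropping every VCL NAL unit whose TemporalId exceeds a chosen value leaves a conforming bitstream) can be sketched as a simple filter. The dict-based NAL unit representation with 'is_vcl' and 'temporal_id_plus1' fields is an illustrative assumption, not an actual bitstream parser.

def extract_temporal_sublayers(nal_units, max_tid):
    # Keep non-VCL NAL units and every VCL NAL unit whose zero-based
    # TemporalId (temporal_id_plus1 - 1) does not exceed max_tid.
    kept = []
    for nal in nal_units:
        temporal_id = nal["temporal_id_plus1"] - 1
        if nal["is_vcl"] and temporal_id > max_tid:
            continue  # prune the higher temporal sub-layer
        kept.append(nal)
    return kept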
Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.

Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set. In addition to the parameters that may be needed by the decoding process, the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation. In HEVC a sequence parameter set RBSP includes parameters that can be referred to by one or more picture parameter set RBSPs or one or more SEI NAL units containing a buffering period SEI message. A picture parameter set contains such parameters that are likely to be unchanged in several coded pictures. A picture parameter set RBSP may include parameters that can be referred to by the coded slice NAL units of one or more coded pictures.

In HEVC, a video parameter set (VPS) may be defined as a syntax structure containing syntax elements that apply to zero or more entire coded video sequences as determined by the content of a syntax element found in the SPS referred to by a syntax element found in the PPS referred to by a syntax element found in each slice segment header. A video parameter set RBSP may include parameters that can be referred to by one or more sequence parameter set RBSPs.

The relationship and hierarchy between the video parameter set (VPS), the sequence parameter set (SPS), and the picture parameter set (PPS) may be described as follows. The VPS resides one level above the SPS in the parameter set hierarchy and in the context of scalability and/or 3D video. The VPS may include parameters that are common for all slices across all (scalability or view) layers in the entire coded video sequence. The SPS includes the parameters that are common for all slices in a particular (scalability or view) layer in the entire coded video sequence, and may be shared by multiple (scalability or view) layers. The PPS includes the parameters that are common for all slices in a particular layer representation (the representation of one scalability or view layer in one access unit) and are likely to be shared by all slices in multiple layer representations. The VPS may provide information about the dependency relationships of the layers in a bitstream, as well as other information that is applicable to all slices across all (scalability or view) layers in the entire coded video sequence. The VPS may be considered to comprise two parts, the base VPS and a VPS extension, where the VPS extension may be optionally present.

Out-of-band transmission, signaling or storage can additionally or alternatively be used for other purposes than tolerance against transmission errors, such as ease of access or session negotiation. For example, a sample entry of a track in a file conforming to the ISO Base Media File Format may comprise parameter sets, while the coded data in the bitstream is stored elsewhere in the file or in another file. The phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the bitstream or the coded unit, respectively.
The phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively. A SEI NAL unit may contain one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, rendering, error detection, error concealment, and resource reservation. A coded picture is a coded representation of a picture. In HEVC, a coded picture may be defined as a coded representation of a picture containing all coding tree units of the picture. In HEVC, an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id. In addition to containing the VCL NAL units of the coded picture, an access unit may also contain non-VCL NAL units. Said specified classification rule may for example associate pictures with the same output time or picture output count value into the same access unit. A bitstream may be defined as a sequence of bits, in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences. A first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol. An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams. The end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream. In HEVC and its current draft extensions, the EOB NAL unit is required to have nuh_layer_id equal to 0. In H.264/AVC, a coded video sequence is defined to be a sequence of consecutive access units in decoding order from an IDR access unit, inclusive, to the next IDR access unit, exclusive, or to the end of the bitstream, whichever appears earlier. In HEVC, a coded video sequence (CVS) may be defined, for example, as a sequence of access units that consists, in decoding order, of an IRAP access unit with NoRaslOutputFlag equal to 1, followed by zero or more access units that are not IRAP access units with NoRaslOutputFlag equal to 1, including all subsequent access units up to but not including any subsequent access unit that is an IRAP access unit with NoRaslOutputFlag equal to 1. An IRAP access unit may be defined as an access unit in which the base layer picture is an IRAP picture. The value of NoRaslOutputFlag is equal to 1 for each IDR picture, each BLA picture, and each IRAP picture that is the first picture in that particular layer in the bitstream in decoding order, or is the first IRAP picture that follows an end of sequence NAL unit having the same value of nuh_layer_id in decoding order. There may be means to provide the value of HandleCraAsBlaFlag to the decoder from an external entity, such as a player or a receiver, which may control the decoder. HandleCraAsBlaFlag may be set to 1 for example by a player that seeks to a new position in a bitstream or tunes into a broadcast and starts decoding from a CRA picture.
When HandleCraAsBlaFlag is equal to 1 for a CRA picture, the CRA picture is handled and decoded as if it were a BLA picture. In HEVC, a coded video sequence may additionally or alternatively (to the specification above) be specified to end when a specific NAL unit, which may be referred to as an end of sequence (EOS) NAL unit, appears in the bitstream and has nuh_layer_id equal to 0. A group of pictures (GOP) and its characteristics may be defined as follows. A GOP can be decoded regardless of whether any previous pictures were decoded. An open GOP is such a group of pictures in which pictures preceding the initial intra picture in output order might not be correctly decodable when the decoding starts from the initial intra picture of the open GOP. In other words, pictures of an open GOP may refer (in inter prediction) to pictures belonging to a previous GOP. An HEVC decoder can recognize an intra picture starting an open GOP, because a specific NAL unit type, the CRA NAL unit type, may be used for its coded slices. A closed GOP is such a group of pictures in which all pictures can be correctly decoded when the decoding starts from the initial intra picture of the closed GOP. In other words, no picture in a closed GOP refers to any pictures in previous GOPs. In H.264/AVC and HEVC, a closed GOP may start from an IDR picture. In HEVC, a closed GOP may also start from a BLA_W_RADL or a BLA_N_LP picture. An open GOP coding structure is potentially more efficient in compression compared to a closed GOP coding structure, due to a larger flexibility in the selection of reference pictures. A Decoded Picture Buffer (DPB) may be used in the encoder and/or in the decoder. There are two reasons to buffer decoded pictures: for references in inter prediction and for reordering decoded pictures into output order. As H.264/AVC and HEVC provide a great deal of flexibility for both reference picture marking and output reordering, separate buffers for reference picture buffering and output picture buffering may waste memory resources. Hence, the DPB may include a unified decoded picture buffering process for reference pictures and output reordering. A decoded picture may be removed from the DPB when it is no longer used as a reference and is not needed for output. In many coding modes of H.264/AVC and HEVC, the reference picture for inter prediction is indicated with an index to a reference picture list. The index may be coded with variable length coding, which usually causes a smaller index to have a shorter codeword for the corresponding syntax element. In H.264/AVC and HEVC, two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice. Many coding standards, including H.264/AVC and HEVC, may have a decoding process to derive a reference picture index to a reference picture list, which may be used to indicate which one of the multiple reference pictures is used for inter prediction for a particular block. A reference picture index may be coded by an encoder into the bitstream in some inter coding modes, or it may be derived (by an encoder and a decoder) for example using neighboring blocks in some other inter coding modes. Motion parameter types or motion information may include but are not limited to one or more of the following types (an illustrative container is sketched after the list):
- an indication of a prediction type (e.g. intra prediction, uni-prediction, bi-prediction) and/or a number of reference pictures;
- an indication of a prediction direction, such as inter (a.k.a. temporal) prediction, inter-layer prediction, inter-view prediction, view synthesis prediction (VSP), and inter-component prediction (which may be indicated per reference picture and/or per prediction type and where in some embodiments inter-view and view-synthesis prediction may be jointly considered as one prediction direction) and/or
- an indication of a reference picture type, such as a short-term reference picture and/or a long-term reference picture and/or an inter-layer reference picture (which may be indicated e.g. per reference picture);
- a reference index to a reference picture list and/or any other identifier of a reference picture (which may be indicated e.g. per reference picture and the type of which may depend on the prediction direction and/or the reference picture type and which may be accompanied by other relevant pieces of information, such as the reference picture list or alike to which the reference index applies);
- a horizontal motion vector component (which may be indicated e.g. per prediction block or per reference index or alike);
- a vertical motion vector component (which may be indicated e.g. per prediction block or per reference index or alike);
- one or more parameters, such as picture order count difference and/or a relative camera separation between the picture containing or associated with the motion parameters and its reference picture, which may be used for scaling of the horizontal motion vector component and/or the vertical motion vector component in one or more motion vector prediction processes (where said one or more parameters may be indicated e.g. per each reference picture or each reference index or alike);
- coordinates of a block to which the motion parameters and/or motion information applies, e.g. coordinates of the top-left sample of the block in luma sample units;
- extents (e.g. a width and a height) of a block to which the motion parameters and/or motion information applies.
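For illustration only, the motion parameter types listed above could be gathered into a container along the following lines; the class and field names are assumptions, not part of any standard:

```python
# Illustrative container for the motion parameter types listed above;
# names and field choices are assumptions introduced for clarity only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionInfo:
    prediction_type: str          # e.g. "intra", "uni", "bi"
    prediction_direction: str     # e.g. "temporal", "inter-layer", "inter-view"
    reference_picture_type: str   # e.g. "short-term", "long-term", "inter-layer"
    ref_idx: List[int]            # reference index per reference picture list
    mv: List[Tuple[int, int]]     # (horizontal, vertical) component per reference
    poc_diff: List[int]           # picture order count differences, for MV scaling
    block_xy: Tuple[int, int]     # top-left sample of the block, luma sample units
    block_wh: Tuple[int, int]     # width and height of the block
```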
In comparison to the previous video coding standards, Versatile Video Codec (H.266/VVC) introduces a plurality of new coding tools, such as the following:

Intra prediction:
- 67 intra modes with wide-angle mode extension
- Block size and mode dependent 4-tap interpolation filter
- Position dependent intra prediction combination (PDPC)
- Cross component linear model intra prediction (CCLM)
- Multi-reference line intra prediction
- Intra sub-partitions
- Weighted intra prediction with matrix multiplication

Inter-picture prediction:
- Block motion copy with spatial, temporal, history-based, and pairwise average merging candidates
- Affine motion inter prediction
- Sub-block based temporal motion vector prediction
- Adaptive motion vector resolution
- 8x8 block-based motion compression for temporal motion prediction
- High precision (1/16 pel) motion vector storage and motion compensation with 8-tap interpolation filter for luma component and 4-tap interpolation filter for chroma component
- Triangular partitions
- Combined intra and inter prediction
- Merge with MVD (MMVD)
- Symmetrical MVD coding
- Bi-directional optical flow
- Decoder side motion vector refinement
- Bi-prediction with CU-level weight

Transform, quantization and coefficients coding:
- Multiple primary transform selection with DCT2, DST7 and DCT8
- Secondary transform for low frequency zone
- Sub-block transform for inter predicted residual
- Dependent quantization with max QP increased from 51 to 63
- Transform coefficient coding with sign data hiding
- Transform skip residual coding

Entropy coding:
- Arithmetic coding engine with adaptive double windows probability update

In-loop filter:
- In-loop reshaping
- Deblocking filter with strong longer filter
- Sample adaptive offset
- Adaptive loop filter

Screen content coding:
- Current picture referencing with reference region restriction

360-degree video coding:
- Horizontal wrap-around motion compensation

High-level syntax and parallel processing:
- Reference picture management with direct reference picture list signalling
- Tile groups with rectangular shape tile groups

While the list of new coding tools above lacks decoder-side intra mode derivation (DIMD), DIMD has already been considered for adoption in the VVC/H.266 video codec. DIMD techniques have been shown to have a beneficial impact on state-of-the-art video codecs. These methods typically rely on an inference process operating on a support region formed of already reconstructed samples in the surrounding of the current block. Gradient estimation techniques can be used to predict the directionality and strength of edges in the support area. These are then used to infer intra-prediction directions and finally derive directional intra-prediction modes. These modes are used to predict the current block. The usage of DIMD is typically signalled for a given block. When DIMD is signalled to be used, no further information is required to perform the intra prediction for the current block, and the intra prediction mode is instead inferred. However, while DIMD can successfully reduce the overhead needed to signal a given intra prediction mode, the resulting prediction is often sub-optimal. Now an improved method for performing decoder-side intra mode derivation is introduced.
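Before turning to the improved method, the conventional single-region DIMD inference can be made concrete with the following minimal sketch: 3x3 Sobel kernels are convolved over a support region of reconstructed samples, a histogram of gradients is accumulated, and the mode with the highest cumulative amplitude is selected. The angle-to-mode mapping and the default number of modes are simplified assumptions, not the normative process:

```python
import numpy as np

def dimd_histogram(support, num_modes):
    """Sketch: 3x3 Sobel gradients over a support region of reconstructed
    samples, accumulated into a histogram of gradients whose bins stand
    for directional intra-prediction modes."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
    sobel_y = sobel_x.T
    hist = np.zeros(num_modes, dtype=np.int64)
    rows, cols = support.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            win = support[y - 1:y + 2, x - 1:x + 2]
            gx = int((win * sobel_x).sum())
            gy = int((win * sobel_y).sum())
            if gx == 0 and gy == 0:
                continue  # no edge at this position
            # Simplified angle-to-mode mapping; real codecs use a finer,
            # standard-defined mapping of gradient angle to mode index.
            angle = np.arctan2(gy, gx) % np.pi
            mode = int(angle / np.pi * num_modes) % num_modes
            hist[mode] += abs(gx) + abs(gy)  # gradient amplitude
    return hist

def dimd_mode(support, num_modes=65):
    """Infer a single directional mode as the highest histogram peak."""
    hist = dimd_histogram(support, num_modes)
    return int(np.argmax(hist))
```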
A method according to an aspect is shown in Figure 5, where the method comprises decoding (500) encoded samples of blocks of video sample data; determining (502) two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; performing (504), for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and computing (506), based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data. Thus, the method produces an intra-prediction for a given block, where at least two decoder-side intra mode derivation (DIMD) processes are performed, where each process operates on reconstructed samples extracted from a specific support region formed of a set of samples at a specific location with respect to the current block, and where each process outputs DIMD parameters specific to a support region. Accordingly, at least two support regions in the surrounding of the current block are identified. The support regions comprise previously reconstructed decoded samples (or pixels). Each support region may be formed of samples at a certain location with respect to the current block. Support regions may overlap with each other. An example of support regions is given in Figure 6, where three support regions for the current coding unit (CU) are identified: Support region 1 corresponding to samples directly above the current block, Support region 3 corresponding to samples directly on the left of the current block, and Support region 2 corresponding to samples in the top-left region of the current block. In this example, a first support region (Support region 1) may be formed of a three-pixel-high area of reconstructed samples directly above the current block; a second support region (Support region 2) may be formed of a three-by-three pixel area of reconstructed samples located on the top-left of the current block; and a third support region (Support region 3) may be formed of a three-pixel-wide area of reconstructed samples directly on the left of the current block. For each support region, a decoder-side intra mode derivation (DIMD) process is performed, in order to derive DIMD parameters specific to each region. According to an embodiment, the method comprises performing, as a part of the decoder-side intra mode derivation process, an analysis of directionality of the samples belonging to at least one support region. Thus, an analysis of the directionality of the pixels belonging to the specific support region may be carried out as a part of the DIMD process, and the DIMD parameters specific to each region are derived thereafter. According to an embodiment, the method comprises deriving, as a part of the decoder-side intra mode derivation process, a histogram of gradients of the samples of at least one support region. As an example for the analysis of the directionality, 3x3 Sobel gradient filters (horizontal and vertical) can be used. They are convolved with samples in the support region to obtain a given histogram of gradients. According to an embodiment, the method comprises obtaining one or more intra-prediction modes from the decoder-side intra mode derivation process. The histogram has a number of bins corresponding to the number of possible intra-prediction modes M.
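A sketch of how the three support regions of Figure 6 could be gathered from a reconstructed frame is given below, assuming the block does not touch the frame boundary; the function name and the three-sample thickness parameter are illustrative:

```python
import numpy as np

def get_support_regions(recon, x0, y0, w, h, t=3):
    """Illustrative extraction of the three support regions of Figure 6
    for a w x h block with top-left corner (x0, y0) in the reconstructed
    frame `recon`; t is the support thickness in samples (three in the
    example above). Assumes x0 >= t and y0 >= t."""
    above = recon[y0 - t:y0, x0:x0 + w]        # Support region 1
    top_left = recon[y0 - t:y0, x0 - t:x0]     # Support region 2
    left = recon[y0:y0 + h, x0 - t:x0]         # Support region 3
    return [above, top_left, left]
```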
The amplitude of a given bar in the histogram at a certain index m = 0, …, M−1 represents the cumulative amplitude of the gradients that were estimated to have the same direction as intra-prediction mode m. In an example shown in Figure 7, the shaded (gray) area illustrates the pixels used as centers for the sliding window used for support region 1, where the dotted line illustrates the 3x3 sliding window of samples convolved with the Sobel kernels. According to an embodiment, the method comprises inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region. Thus, the DIMD parameters obtained for each support region are used to determine whether a given derived DIMD mode is location-dependent or not, and if so, which location it depends on. As an example, a given DIMD mode, which may be obtained using any conventional approach, may be classified as location-dependent, and its location may be identified, by means of considering and possibly comparing the DIMD parameters output from different support regions. As an example, two support regions may be considered, one formed of samples located directly above the current block, and the other formed of samples located directly on the left of the current block, corresponding respectively to support region 1 and support region 3 in Figure 6. Two histograms of gradients may be obtained, H_above specific to the support region above, and H_left specific to the support region on the left. For a given DIMD mode m, the amplitudes of the two histograms obtained in the two support regions can then be used to determine whether mode m is location-dependent. As an example, if H_left(m) == 0 and H_above(m) ≠ 0, then that may be taken as an indication that mode m is location-dependent on the support region above. Conversely, if H_left(m) ≠ 0 and H_above(m) == 0, then m may be classified as location-dependent on the support region on the left. This may be generalized as follows: if N support regions are considered, resulting in N histograms H_0, H_1, …, H_(N-1), then a given mode m may be determined to be location-dependent on region i if:

H_i(m) ≠ 0 and H_j(m) == 0 for j = 0, 1, …, N−1 and j ≠ i

The heights of the bars of the specific histograms for the support regions above and left may also be used to determine the location-dependency of a given mode m. As an example, a mode m may be determined to be location-dependent on region i if, considering a factor K ∈ [0, 1]:

H_j(m) ≤ K · H_i(m) for j = 0, 1, …, N−1 and j ≠ i

According to an embodiment, the method comprises performing a normalization of the DIMD parameters output for each support region in accordance with the number of samples belonging to the support regions. As an example, consider N support regions, resulting in N histograms H_0, H_1, …, H_(N-1), and denote s_i as the number of pixels belonging to support region i. Denote M as the number of bins in each histogram, and also consider the cumulative number of support samples S = s_0 + s_1 + … + s_(N-1). Then normalized histograms Ĥ_i, i = 0, …, N−1, can be obtained as:

Ĥ_i(m) = (S / s_i) · H_i(m) for m = 0, …, M−1

According to an embodiment, the method comprises inferring, based on the decoder-side intra mode derivation parameters specific to each support region, a strength of location-dependency of the one or more intra-prediction modes on the specific support region.
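Before turning to the strength of location-dependency, the classification and normalization just described may be sketched as follows, assuming the per-region histograms have already been computed as NumPy arrays; the normalization scaling reflects one plausible reading of the text above:

```python
import numpy as np

def normalize_histograms(hists, sizes):
    """Normalize each histogram of gradients by its region's sample count
    so that differently sized regions become comparable; scaling by the
    cumulative count S is an assumption of this sketch."""
    total = sum(sizes)  # cumulative number of support samples S
    return [h.astype(np.float64) * total / s for h, s in zip(hists, sizes)]

def location_dependent_region(hists, m, k=0.0):
    """Return the index i of the support region that mode m is
    location-dependent on, or None if no region qualifies. With k == 0
    this reduces to the strict criterion H_i(m) != 0 and H_j(m) == 0
    for all j != i."""
    n = len(hists)
    for i in range(n):
        if hists[i][m] == 0:
            continue
        if all(hists[j][m] <= k * hists[i][m] for j in range(n) if j != i):
            return i
    return None
```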
Alternatively, or in addition, to determining whether a given DIMD mode is classified as location-dependent on a given support region, the strength of the location-dependency of a given mode on a given support region may be determined, in dependence on the DIMD parameters output for each support region. As an example, consider N support regions, resulting in N histograms H_0, H_1, …, H_(N-1), which may or may not be normalized. Consider that a given mode m is classified to be location-dependent on region i. The strength of the location-dependency of mode m on support region i may be obtained from the calculation of the ratio:

F_i(m) = H_i(m) / (H_0(m) + H_1(m) + … + H_(N-1)(m))

According to an embodiment, the method comprises deriving, based on the decoder-side intra mode derivation parameters specific to each support region, a decoder-side intra mode derivation mode specific to each support region. According to this approach, which may be used exclusively or in combination with other embodiments, the DIMD parameters obtained for each support region are used to determine specific DIMD modes for each support region. As an example, a given DIMD mode may be obtained using any conventional approach, where the reconstructed pixels used to compute the DIMD parameters are restricted to those belonging to the specific support region. As an example, consider N support regions, resulting in N histograms H_0, H_1, …, H_(N-1), and consider M as the number of bins in each histogram; then for a given support region i a specific DIMD mode m_i can be obtained as the mode with the highest peak in H_i, or:

m_i = argmax over m = 0, …, M−1 of H_i(m)

According to an embodiment, the method comprises computing, based on the decoder-side intra mode derivation mode specific to each support region, a prediction for the current block. The specific DIMD modes for each support region may be used to compute a prediction for the current block. As an example, an index may be signalled in the bitstream identifying the specific support region to use to determine the DIMD mode for the current block. As another example, a flag may be signalled to identify whether to compute the prediction for the current block as a result of blending the specific DIMD modes determined for each support region. This approach may also be used in combination with the other embodiments; for instance a specific DIMD mode m_i can be obtained for a given support region, and the strength of its location-dependency on support region i may be obtained as:

F_i(m_i) = H_i(m_i) / (H_0(m_i) + H_1(m_i) + … + H_(N-1)(m_i))

According to an embodiment, which can be implemented independently of, or in combination with, other embodiments, the method comprises computing at least two predictors for the block of video sample data based on the decoder-side intra mode derivation parameters specific to each support region; and combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights.
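Returning to the per-region derivation above, the mode selection and the strength ratio can be sketched together, under the same assumptions as the previous sketches:

```python
import numpy as np

def region_mode_and_strength(hists, i):
    """Per-region DIMD mode m_i = argmax over bins of H_i, together with
    the strength of its location-dependency on region i, taken here as
    region i's share of the cumulative bin amplitude (one plausible form
    of the ratio described above)."""
    m_i = int(np.argmax(hists[i]))
    total = sum(float(h[m_i]) for h in hists)
    strength = float(hists[i][m_i]) / total if total > 0 else 0.0
    return m_i, strength
```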
If implemented independently, the method as shown in the flow chart of Figure 8 may be performed, wherein the method comprises decoding (800) encoded samples of blocks of video sample data; performing (802) a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; obtaining (804) one or more intra-prediction modes from the decoder-side intra mode derivation parameters; computing (806) at least two predictors for the block of video sample data based on the intra-prediction modes; and combining (808) at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights. Thus, according to an approach, sample-based blending of a number of DIMD modes may be performed in order to obtain a prediction for the current block. The sample-based blending may operate based on determining specific weights for each predictor and for each sample within the current block. The weights may be determined in accordance with the location of each sample within the block, where different samples within the block may be blended using different weights. However, the blending does not necessarily require determining DIMD parameters specific to each support region. According to an embodiment, the method comprises determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; and performing the decoder-side intra mode derivation process to derive decoder-side intra mode derivation parameters specific to each support region. Accordingly, DIMD parameters specific to each support region may nevertheless be derived, albeit without restricting the usage of these parameters to determining intra-prediction modes only. Instead, said parameters may be used to determine other things, for instance the weights to use to combine the predictors, or to infer whether to use sample-based weighting at all or not. According to an embodiment, the method comprises obtaining said at least two predictors based on the two or more support regions; performing the decoder-side intra mode derivation process to derive decoder-side intra mode derivation modes specific to each support region; and computing the at least two predictors based on the decoder-side intra mode derivation modes specific to said two or more support regions. As an example, assume that three support regions are considered, as depicted in Figure 6. Assume that two DIMD modes are considered, m0 and m1. These modes may be determined following any of the approaches described herein. Assume that two predictors P0 and P1 are obtained by performing intra-prediction processes in accordance with modes m0 and m1, respectively. Then a final predictor P for the current block may be obtained as:

P(x,y) = w1(x,y)·P1(x,y) + w0(x,y)·P0(x,y)

where Pi(x,y) refers to the pixel of predictor Pi at location (x,y). According to an embodiment, the method comprises inferring a usage of sample-based weighting based on characteristics of the block of video data. According to an embodiment, said sample-based weights depend on the location of a sample in the block. According to an embodiment, said sample-based weights depend on the location-dependency of a given intra-prediction mode on a specific support region.
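A direct sketch of this sample-based blending, assuming the two predictors and the per-sample weight map for P1 are already available as arrays of the block size:

```python
import numpy as np

def blend_predictors(p0, p1, w1):
    """Sample-based blending of two predictors:
    P(x,y) = w1(x,y)*P1(x,y) + w0(x,y)*P0(x,y), with w0 = 1 - w1.
    p0, p1 and w1 are arrays of the block size (H x W)."""
    w0 = 1.0 - w1
    return w0 * p0 + w1 * p1
```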
Thus, the usage of sample-based blending may be inferred, for example, depending on whether the DIMD modes are location-dependent on a specific support region, and/or on the strength of the location-dependency of each DIMD mode. Accordingly, the weight for a given predictor, wi(x,y), may depend on the location (x,y). The weights wi(x,y) may also depend on whether mi was classified as location-dependent on a specific support region. In the above example, assume that mode m0 is classified as dependent on the support region above, and m1 is classified as dependent on the support region on the left, as illustrated in Figure 9. Assume a block of size HxW, where W is the width and H is the height. Then the weight w1(x,y) may be determined for each sample as a function of the sample location (x,y) and of the block dimensions W and H, with the complementary weight obtained as:

w0(x,y) = 1 − w1(x,y)

An integer-precision representation of the weights may also be considered. Assuming a 6-bit representation of the weights, in which the weights sum to 64, these may be defined as:

w0(x,y) = 64 − w1(x,y)

Division-free operations may also be used to determine the weights, for example by means of scaling and shifting. The weights may also depend on the strength of the location-dependency of a given mode on a given support region. In the above example, denote as F_above(m0) the strength of the location-dependency of mode m0 on the support region above, and similarly denote as F_left(m1) the strength of the location-dependency of m1 on the support region on the left. Then, F_above(m0) and/or F_left(m1) may be used to determine the weights wi(x,y). As an example, two intermediate parameters may be derived, referred to as Δ0 and Δ1, computed for example from F_above(m0) and F_left(m1), respectively. Other ways of deriving the intermediate parameters may be used. The intermediate parameters may then be used to compute the weights, for example by adjusting an equal starting weight of 0.5 for each predictor in accordance with Δ0 and Δ1, while keeping:

w0(x,y) = 1 − w1(x,y)

The weights may be clipped within a pre-defined range. For instance, weights may be clipped between 0 and 1. The weights may also be pre-computed and stored e.g. in look-up tables to be reused during the decoding process. According to an embodiment, the method comprises computing a prediction for the current block using one or more pre-defined intra-prediction modes in combination with one or more intra-prediction modes derived using the decoder-side intra mode derivation process. Hence, additional pre-determined predictors may also be considered and used. As an example, in addition to two DIMD-derived predictors P0 and P1, an additional predictor P_add may be considered, obtained by performing a pre-defined Planar mode on the current block. When using these pre-defined predictors, appropriate weights may be used to blend the two DIMD-derived predictors and the additional predictors. These weights may be derived in accordance with the location-dependency of each mode on a given support region and/or the strength of such location-dependency, and/or the DIMD parameters output during the DIMD process. According to an embodiment, the method comprises using sample-based weights to compute the prediction for the current block, wherein the weights used for the one or more pre-defined intra-prediction modes depend on the decoder-side intra mode derivation parameters specific to each support region. Accordingly, if a pre-defined (e.g. Planar) mode is used in combination with DIMD, then sample-based weights may be used, where the weights to be used on the Planar mode depend also on the DIMD parameters.
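Since the exact weighting function is not reproduced above, the following sketch uses an assumed distance-based form purely for illustration: the left-dependent predictor P1 receives more weight near the left edge of the block and the above-dependent predictor P0 near the top, with 6-bit integer weights summing to 64. The rounding division could in practice be replaced by look-up tables or scaling-and-shifting:

```python
import numpy as np

def integer_weights(w, h, shift=6):
    """Illustrative per-sample integer weights (summing to 64) for a block
    of width w and height h. P1 is assumed to follow the mode that is
    location-dependent on the left support region, and P0 the mode that is
    location-dependent on the region above. The distance-based form below
    is an assumed stand-in, not the weighting function of the text."""
    one = 1 << shift            # 64 for a 6-bit weight representation
    x = np.arange(w)[None, :]   # column indices, broadcast over rows
    y = np.arange(h)[:, None]   # row indices, broadcast over columns
    denom = x + y + 2
    w1 = ((y + 1) * one + denom // 2) // denom  # rounding integer division
    w1 = np.clip(w1, 0, one)    # clip weights to the valid range
    w0 = one - w1               # complementary weight, w0 + w1 == 64
    return w0, w1

# Usage with the blending sketch above, in integer arithmetic:
# w0, w1 = integer_weights(16, 16)
# pred = (w1 * p1 + w0 * p0 + 32) >> 6  # integer blend for 6-bit weights
```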
An apparatus according to an aspect comprises means for decoding encoded samples of blocks of video sample data; means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; means for performing, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and means for computing, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data. According to an embodiment, the apparatus comprises means for performing, as a part of the decoder-side intra mode derivation process, an analysis of directionality of the samples belonging to at least one support region. According to an embodiment, the apparatus comprises means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation process. According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region. According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, a strength of location-dependency of the one or more intra-prediction modes on the specific support region. According to an embodiment, the apparatus comprises means for deriving, as a part of the decoder-side intra mode derivation process, a histogram of gradients of the samples of at least one support region. According to an embodiment, the apparatus comprises means for deriving, based on the decoder-side intra mode derivation parameters specific to each support region, a decoder-side intra mode derivation mode specific to each support region. According to an embodiment, the apparatus comprises means for computing, based on the decoder-side intra mode derivation mode specific to each support region, a prediction for the current block. An apparatus according to a second aspect comprises means for decoding encoded samples of blocks of video sample data; means for performing a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; means for obtaining one or more intra-prediction modes from the decoder-side intra mode derivation parameters; means for computing at least two predictors for the block of video sample data based on the decoder-side intra mode derivation parameters specific to each support region; and means for combining at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights. According to an embodiment, the apparatus comprises means for determining two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; and means for performing the decoder-side intra mode derivation process to derive decoder-side intra mode derivation parameters specific to each support region.
According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region. According to an embodiment, said sample-based weights depend on the location of a sample in the block. According to an embodiment, said sample-based weights depend on the location-dependency of a given intra-prediction mode on a specific support region. According to an embodiment, the apparatus comprises means for inferring a usage of sample-based weighting based on characteristics of the block of video data. According to an embodiment, the apparatus comprises means for inferring, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region. According to an embodiment, the sample-based weighting is inferred depending on a strength of the location-dependency of a decoder-side intra mode derivation mode on a specific support region. According to an embodiment, the apparatus comprises means for computing a prediction for the current block using one or more pre-defined intra-prediction modes in combination with one or more intra-prediction modes derived using the decoder-side intra mode derivation process. According to an embodiment, the apparatus comprises means for using sample-based weights to compute the prediction for the current block, wherein the weights used for the one or more pre-defined intra-prediction modes depend on the decoder-side intra mode derivation parameters specific to each support region. As a further aspect, there is provided an apparatus comprising: at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least: decode encoded samples of blocks of video sample data; determine two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; perform, for each support region, a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters specific to each support region; and compute, based on the decoder-side intra mode derivation parameters specific to each support region, a prediction for the samples of the block of video sample data. According to an embodiment, the apparatus comprises code configured to cause the apparatus to perform, as a part of the decoder-side intra mode derivation process, an analysis of directionality of the samples belonging to at least one support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to obtain one or more intra-prediction modes from the decoder-side intra mode derivation process. According to an embodiment, the apparatus comprises code configured to cause the apparatus to infer, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.
According to an embodiment, the apparatus comprises code configured to cause the apparatus to infer, based on the decoder-side intra mode derivation parameters specific to each support region, a strength of location-dependency of the one or more intra-prediction modes on the specific support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to derive, as a part of the decoder-side intra mode derivation process, a histogram of gradients of the samples of at least one support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to derive, based on the decoder-side intra mode derivation parameters specific to each support region, a decoder-side intra mode derivation mode specific to each support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to compute, based on the decoder-side intra mode derivation mode specific to each support region, a prediction for the current block. An apparatus according to a fourth aspect comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least: decode encoded samples of blocks of video sample data; perform a decoder-side intra mode derivation process to obtain decoder-side intra mode derivation parameters; obtain one or more intra-prediction modes from the decoder-side intra mode derivation parameters; compute at least two predictors for the block of video sample data based on the decoder-side intra mode derivation parameters specific to each support region; and combine at least said two predictors together using blending to form a prediction for the block, where blending is performed using sample-based weights. According to an embodiment, the apparatus comprises code configured to cause the apparatus to determine two or more support regions for a block of video sample data, wherein each support region comprises a set of reconstructed samples at predetermined locations with respect to said block of video sample data; and code configured to cause the apparatus to perform the decoder-side intra mode derivation process to derive decoder-side intra mode derivation parameters specific to each support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to infer, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region. According to an embodiment, said sample-based weights depend on the location of a sample in the block. According to an embodiment, said sample-based weights depend on the location-dependency of a given intra-prediction mode on a specific support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to infer a usage of sample-based weighting based on characteristics of the block of video data. According to an embodiment, the apparatus comprises code configured to cause the apparatus to infer, based on the decoder-side intra mode derivation parameters specific to each support region, whether the one or more intra-prediction modes are location-dependent on a specific support region.
According to an embodiment, the sample-based weighting is inferred depending on a strength of the location-dependency of a decoder-side intra mode derivation mode on a specific support region. According to an embodiment, the apparatus comprises code configured to cause the apparatus to compute a prediction for the current block using one or more pre-defined intra-prediction modes in combination with one or more intra-prediction modes derived using the decoder-side intra mode derivation process. According to an embodiment, the apparatus comprises code configured to cause the apparatus to use sample-based weights to compute the prediction for the current block, wherein the weights used for the one or more pre-defined intra-prediction modes depend on the decoder-side intra mode derivation parameters specific to each support region. Such apparatuses may comprise e.g. the functional units disclosed in any of the Figures 1, 2, 4a, and 4b for implementing the embodiments. Such an apparatus further comprises code, stored in said at least one memory, which when executed by said at least one processor, causes the apparatus to perform one or more of the embodiments disclosed herein. Figure 10 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented. A data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats. An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal. The encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software. The encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal. The encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa. The coded media bitstream may be transferred to a storage 1530. The storage 1530 may comprise any type of mass memory to store the coded media bitstream. The format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments.
If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file. The encoder 1520 or the storage 1530 may comprise the file generator, or the file generator is operationally attached to either the encoder 1520 or the storage 1530. Some systems operate “live”, i.e. omit storage and transfer the coded media bitstream from the encoder 1520 directly to the sender 1540. The coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis. The format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file. The encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices. The encoder 1520 and the server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate. The server 1540 sends the coded media bitstream using a communication protocol stack. The stack may include but is not limited to one or more of Real-Time Transport Protocol (RTP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP). When the communication protocol stack is packet-oriented, the server 1540 encapsulates the coded media bitstream into packets. For example, when RTP is used, the server 1540 encapsulates the coded media bitstream into RTP packets according to an RTP payload format. Typically, each media type has a dedicated RTP payload format. It should be again noted that a system may contain more than one server 1540, but for the sake of simplicity, the following description only considers one server 1540. If the media content is encapsulated in a container file for the storage 1530 or for inputting the data to the sender 1540, the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure). In particular, if the container file is not transmitted as such but at least one of the contained coded media bitstreams is encapsulated for transport over a communication protocol, a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol. The sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads. The multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of at least one of the contained media bitstreams on the communication protocol. The server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks. The gateway may also or alternatively be referred to as a middle-box. For DASH, the gateway may be an edge server (of a CDN) or a web proxy.
It is noted that the system may generally comprise any number of gateways or alike, but for the sake of simplicity, the following description only considers one gateway 1550. The gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data streams according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions. The gateway 1550 may be a server entity in various embodiments. The system includes one or more receivers 1560, typically capable of receiving, demodulating, and decapsulating the transmitted signal into a coded media bitstream. The coded media bitstream may be transferred to a recording storage 1570. The recording storage 1570 may comprise any type of mass memory to store the coded media bitstream. The recording storage 1570 may alternatively or additively comprise computation memory, such as random access memory. The format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file. If there are multiple coded media bitstreams, such as an audio stream and a video stream, associated with each other, a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams. Some systems operate “live,” i.e. omit the recording storage 1570 and transfer the coded media bitstream from the receiver 1560 directly to the decoder 1580. In some systems, only the most recent part of the recorded stream, e.g., the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570. The coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file, or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file. The recording storage 1570 or the decoder 1580 may comprise the file parser, or the file parser is attached to either the recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality. The coded media bitstream may be processed further by the decoder 1580, whose output is one or more uncompressed media streams. Finally, a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example. The receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices. A sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, view switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s).
Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. In other words, the receiver 1560 may initiate switching between representations. A request from the receiver can be, e.g., a request for a Segment or a Subsegment from a different representation than earlier, a request for a change of transmitted scalability layers and/or sub-layers, or a change of a rendering device having different capabilities compared to the previous one. A request for a Segment may be an HTTP GET request. A request for a Subsegment may be an HTTP GET request with a byte range. Additionally or alternatively, bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming, in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions. Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders. A decoder 1580 may be configured to perform switching between different representations e.g. for switching between different viewports of 360-degree video content, view switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the video bitstream. In another example, faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than the conventional real-time playback rate. In the above, some embodiments have been described with reference to and/or using terminology of HEVC and/or VVC. It needs to be understood that embodiments may be similarly realized with any video encoder and/or video decoder. In the above, where the example embodiments have been described with reference to an encoder, it needs to be understood that the resulting bitstream and the decoder may have corresponding elements in them. Likewise, where the example embodiments have been described with reference to a decoder, it needs to be understood that the encoder may have structure and/or a computer program for generating the bitstream to be decoded by the decoder. For example, some embodiments have been described related to generating a prediction block as part of encoding. Embodiments can be similarly realized by generating a prediction block as part of decoding, with the difference that coding parameters, such as the horizontal offset and the vertical offset, are decoded from the bitstream rather than determined by the encoder. The embodiments of the invention described above describe the codec in terms of separate encoder and decoder apparatus in order to assist the understanding of the processes involved.
However, it would be appreciated that the apparatus, structures and operations may be implemented as a single encoder-decoder apparatus/structure/operation. Furthermore, it is possible that the coder and decoder may share some or all common elements. Although the above examples describe embodiments of the invention operating within a codec within an electronic device, it would be appreciated that the invention as defined in the claims may be implemented as part of any video codec. Thus, for example, embodiments of the invention may be implemented in a video codec which may implement video coding over fixed or wired communication paths. Thus, user equipment may comprise a video codec such as those described in embodiments of the invention above. It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers. Furthermore, elements of a public land mobile network (PLMN) may also comprise video codecs as described above. In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, and CD. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architecture, as non-limiting examples. Embodiments of the invention may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process.
Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate. Programs, such as those provided by Synopsys, Inc. of Mountain View, California, and Cadence Design, of San Jose, California, automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like), may be transmitted to a semiconductor fabrication facility or "fab" for fabrication. The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. Nevertheless, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.