WO2012150693A1 - Residual quadtree structure for transform units in non-square prediction units - Google Patents

Residual quadtree structure for transform units in non-square prediction units

Info

Publication number
WO2012150693A1
Authority
WO
WIPO (PCT)
Prior art keywords
tus
split
transform tree
rectangular
flag
Prior art date
Application number
PCT/JP2012/061296
Other languages
French (fr)
Inventor
Robert A. Cohen
Anthony Vetro
Huifang Sun
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation filed Critical Mitsubishi Electric Corporation
Priority to CN201280021462.0A priority Critical patent/CN103503461B/en
Priority to JP2013556183A priority patent/JP6037341B2/en
Priority to EP12721366.8A priority patent/EP2705665B1/en
Priority to TW101115534A priority patent/TWI504209B/en
Publication of WO2012150693A1 publication Critical patent/WO2012150693A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/40Tree coding, e.g. quadtree, octree
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

A bitstream includes coded pictures, and split-flags for generating a transform tree. The bit stream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.

Description

DESCRIPTION
TITLE OF INVENTION
RESIDUAL QUADTREE STRUCTURE FOR TRANSFORM UNITS IN NON-SQUARE PREDICTION UNITS
TECHNICAL FIELD
The invention relates generally to coding pictures, and more particularly to methods for coding pictures using hierarchical transform units in the context of encoding and decoding pictures.
BACKGROUND ART
For the High Efficiency Video Coding (HEVC) standard currently under development as the successor to H.264/MPEG-4 AVC, the application of TUs to residual blocks is represented by a tree as described by Marpe et al. in "Video Compression Using Nested Quadtree Structures, Leaf Merging, and Improved Techniques for Motion Representation and Entropy Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1676-1687, December 2010.
Coding Layers
The hierarchical coding layers defined in the standard include video
sequence, picture, slice, and treeblock layers. Higher layers contain lower layers.
Treeblock
According to the proposed standard, a picture is partitioned into slices, and each slice is partitioned into a sequence of treeblocks (TBs) ordered consecutively in a raster scan. Pictures and TBs are broadly analogous to frames and macroblocks, respectively, in previous video coding standards, such as H.264/AVC. The maximum allowed size of the TB is 64x64 pixels for luma (intensity) and chroma (color) samples.
Coding Unit
A Coding Unit (CU) is the basic unit of splitting used for Intra and Inter prediction. Intra prediction operates in the spatial domain of a single picture, while Inter prediction operates in the temporal domain among the picture to be predicted and a set of previously-decoded pictures. The CU is always square, and can be 128x128 (LCU), 64x64, 32x32, 16x16, and 8x8 pixels. The CU allows recursive splitting into four equally sized blocks, starting from the TB. This process gives a content-adaptive coding tree structure composed of CU blocks that can be as large as the TB, or as small as 8x8 pixels.
Prediction Unit (PU)
A Prediction Unit (PU) is the basic unit used for carrying the information (data) related to the prediction processes. In general, the PU is not restricted to being square in shape, in order to facilitate partitioning that matches, for example, the boundaries of real objects in the picture. Each CU may contain one or more PUs.
Transform Unit (TU)
As shown in Fig. 1, a root node 101 of the transform tree 100 corresponds to an NxN Transform Unit (TU) applied to a block of data 110. The TU is the basic unit used for the transformation and quantization processes. In the proposed standard, the TU is always square and can take a size from 4x4 to 32x32 pixels. The TU cannot be larger than the PU and does not exceed the size of the CU. Each CU may contain one or more TUs, and multiple TUs can be arranged in a tree structure, henceforth the transform tree.
The example transform tree is a quadtree with four levels 0-3. If the transform tree is split once, then four N/2xN/2 TUs are applied. Each of these TUs can subsequently be split down to a predefined limit. For Intra-coded pictures, transform trees are applied over "Prediction Units" (PUs) of Intra-prediction residual data. These PUs are currently defined as squares or rectangles of size 2Nx2N, 2NxN, Nx2N, or NxN pixels. For Intra-coded pictures, the square TU must be contained entirely within a PU, so the largest allowed TU size is typically 2Nx2N or NxN pixels. The relation between a-j TUs and a-j PUs within this transform tree structure is shown in Fig. 1.
As shown in Fig. 2, new PU structures have been proposed for the HEVC standard, as described by Cao et al., "CE6.b1 Report on Short Distance Intra Prediction Method (SDIP)," JCTVC-E278, March 2011. With the SDIP method, PUs can be strips or rectangles 201 as small as one or two pixels wide, e.g., Nx2, 2xN, Nx1, or 1xN pixels. When overlaying a transform tree on an Intra-coded block that has been partitioned into such narrow PUs, the transform tree is split to a level where the size of the TU is only 2x2 or 1x1. The TU size cannot be greater than the PU size; otherwise, the transformation and prediction process is complicated. The prior-art SDIP method that utilizes these new PU structures defines, for example, 1xN and 2xN TUs. Due to the rectangular TU sizes, the prior art is not compatible with the transform tree structure in the current draft specification of the HEVC standard. SDIP does not use the transform tree mandated in the standard; instead, the TU size is implicitly dictated by the sizes of the PUs.
Hence, there is a need for a method of splitting and applying square and rectangular TUs on rectangular, and sometimes very narrow rectangular PUs, while still maintaining the tree structure of the TUs as defined by the proposed standard.
SUMMARY OF INVENTION
A bitstream includes coded pictures, and split-flags. The split-flags are used for generating a transform tree. The bit stream also conveys a partitioning of coding units (CUs) into Prediction Units (PUs).
The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs.
The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging.
Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 is a diagram of tree splitting for transform units according to the prior art;
Figure 2 is a diagram of a decomposition into rectangular prediction units according to the prior art;
Figure 3 is a flow diagram of an example decoding system used by embodiments of the invention;
Figure 4 is a diagram of a first step of the transform tree generation according to this invention; and
Figure 5 is a diagram of a second step of the transform tree generation according to this invention.
DESCRIPTION OF EMBODIMENTS
The embodiments of our invention provide a method for coding pictures using hierarchical transform units (TUs). Coding encompasses encoding and decoding. Generally, encoding and decoding are performed in a codec (COder-DECoder). The codec is a device or computer program capable of encoding and/or decoding a digital data stream or signal. For example, the coder encodes a bit stream or signal for compression, transmission, storage or encryption, and the decoder decodes the encoded bit stream for playback or editing.
The method applies square and rectangular TUs on rectangular, and sometimes very narrow rectangular, portions of pictures, while still maintaining a hierarchical transform tree structure of the Transform Units (TUs) as defined in the High Efficiency Video Coding (HEVC) standard. Transforms can refer either to transforms or inverse transforms. In the preferred embodiment, the transform tree is a quadtree (Q-tree); however, other tree structures, such as binary trees (B-trees), octrees, and, more generally, N-ary trees, are also possible.
Input to the method is an NxN coding unit (CU) partitioned into Prediction Units (PUs). Our invention generates a transform tree that is used to apply TUs on the PUs.
Decoding System
Fig. 3 shows an example decoder and method system 300 used by embodiments of the invention, i.e., the steps of the method are performed by the decoder, which can be software, firmware or a processor connected to a memory and input/output interfaces as known in the art. Input to the method (or decoder) is a bit stream 301 of coded pictures, e.g., an image or a sequence of images in a video. The bit stream is parsed 310 to obtain split-flags 311, which are associated with TUs of corresponding nodes of a transform tree 321, and data 312 to be processed, e.g., NxN blocks of data. The data includes a partitioning of the coding units (CUs) into Prediction Units (PUs).
In other words, any node represents a TU at a given depth in the transform tree. In most cases, only TUs at leaf nodes are realized. However, the codec can implement the TU at nodes higher in the hierarchy of the transform tree.
The split-flags are used to generate 320 a transform tree 321. Then, the data in the PUs are decoded according to the transform tree to produce decoded data 302.
The generation step 320 includes splitting 350 each TU only if its split-flag 311 is set.
For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU. For example, a 16x8 PU can be partitioned by two 8x8 TUs. These two 8x8 TUs can be merged into one 16x8 TU. In another example, a 64x64 square PU is partitioned into sixteen 8x32 TUs. Four of these TUs are merged into a 32x32 square TU, and the other TUs remain as 8x32 rectangles. The merging solves the problem in the prior art of having many very small, e.g., 1x1, TUs; see Cao et al. Then, the transform tree 321 is modified 370 according to the splitting and merging. The splitting, partitioning, merging and modifying can be repeated 385 until a size of the TU is equal to a predetermined minimum 380.
After the transform tree has been generated 320, the data 312 contained in each PU can be decoded using the TUs associated with the PU.
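To make the generation step 320 concrete, the following minimal Python sketch (illustrative only; the class, function, and flag names are hypothetical and not the HEVC bit-stream syntax) shows split-flag-driven quadtree splitting of a root TU down to a minimum size. The merging of TUs inside rectangular PUs is sketched separately under Embodiment 1 below.

```python
# Illustrative sketch: a residual-quadtree style generation step in which a TU
# node is split into four quadrants whenever its split-flag is set, down to a
# predetermined minimum size. Names are hypothetical.

class TU:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.children = []

def generate(node, split_flags, min_size=4):
    """Consume one split-flag per eligible node; split into four quadrants if set."""
    if node.w <= min_size or not split_flags.pop(0):
        return
    hw, hh = node.w // 2, node.h // 2
    node.children = [TU(node.x + dx, node.y + dy, hw, hh)
                     for dy in (0, hh) for dx in (0, hw)]
    for child in node.children:
        generate(child, split_flags, min_size)

def leaf_tus(node):
    """The TUs that are actually realized are the leaves of the transform tree."""
    return [node] if not node.children else [t for c in node.children for t in leaf_tus(c)]

# usage: a 32x32 root TU, split once, with the first quadrant split again
root = TU(0, 0, 32, 32)
generate(root, [1, 1, 0, 0, 0, 0, 0, 0, 0])
print([(t.x, t.y, t.w, t.h) for t in leaf_tus(root)])
# -> four 8x8 TUs in the first quadrant plus three 16x16 TUs
```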
Various embodiments are now described.
Embodiment 1:
Fig. 4 shows the partitioning of the input CU into PUs 312, the iterative splitting 350 (or not) of the TUs covering the PUs according to the split-flags, and the subsequent merging.
Step 1: A root node of the transform tree corresponds to an initial NxN TU covering the NxN PU 312. The bit stream 301 received by the decoder 300, as shown in Fig. 3, contains the split-flag 311 that is associated with this node. If the split-flag is not set 401, then the corresponding TU is not split, and the process for this node is complete. If the split-flag is set 402, then the NxN TU is split into TUs 403. The number of TUs produced corresponds to the structure of the tree, e.g., four for a quadtree. It is noted that the number of TUs produced by the splitting can vary.
Then, the decoder determines whether the PU includes multiple TUs. For example, a rectangular PU includes multiple TUs, e.g., two square TUs, each of size N/2xN/2. In this case, the multiple TUs in that PU are merged 404 into an NxN/2 or N/2xN rectangular TU 405 aligned with the dimensions of the PU. The rectangular PUs and TUs have longer axes corresponding to their lengths, and shorter axes corresponding to their widths. Merging square TUs into larger rectangular TUs eliminates the problem where a long narrow rectangle can be split into many small square TUs, as in the prior art; see Cao et al. Merging also reduces the number of TUs in the PUs.
Having many small TUs is usually less effective than having a few larger TUs, especially when the dimensions of these TUs are small, or when multiple TUs cover similar data.
The transform tree is then modified. The branch of the transform tree that corresponded to the first N/2xN/2 TU 406 is redefined to correspond to the merged rectangular TU, and the branch of the transform tree that corresponded to the second merged TU is eliminated.
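A minimal Python sketch of this Step-1 merge follows, assuming TUs and PUs are simple (x, y, width, height) tuples; the helper names are hypothetical and the tree bookkeeping is reduced to a flat list of TUs.

```python
# Illustrative sketch of the Step-1 merge: after the NxN TU is split into four
# N/2xN/2 TUs, square TUs that fall inside the same rectangular PU are merged
# into one rectangular TU aligned with that PU.

def inside(tu, pu):
    """True when rectangle tu = (x, y, w, h) lies entirely within rectangle pu."""
    tx, ty, tw, th = tu
    px, py, pw, ph = pu
    return px <= tx and py <= ty and tx + tw <= px + pw and ty + th <= py + ph

def merge_step1(square_tus, pus):
    merged = []
    for pu in pus:
        covered = [tu for tu in square_tus if inside(tu, pu)]
        if len(covered) > 1:
            # one branch of the transform tree is redefined to the merged TU;
            # the branches of the other covered TUs are eliminated
            x = min(t[0] for t in covered)
            y = min(t[1] for t in covered)
            w = max(t[0] + t[2] for t in covered) - x
            h = max(t[1] + t[3] for t in covered) - y
            merged.append((x, y, w, h))
        else:
            merged.extend(covered)
    return merged

# usage: a 16x16 CU split into four 8x8 TUs, PU partition of two 16x8 rectangles
tus = [(0, 0, 8, 8), (8, 0, 8, 8), (0, 8, 8, 8), (8, 8, 8, 8)]
pus = [(0, 0, 16, 8), (0, 8, 16, 8)]
print(merge_step1(tus, pus))   # -> [(0, 0, 16, 8), (0, 8, 16, 8)]
```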
Step 2: For each node generated in Step 1, if a size of the TU is equal to a predefined minimum, the process is done for that node. Each remaining node is further split when the associated split-flag is set, or if the TU for that node is not contained entirely within the PU.
Unlike Step 1, however, the way that the node is split depends upon the shape of the PU, as shown in Fig. 5, because the PUs can have arbitrary shapes and sizes. This splitting is performed as described in Step 2a or Step 2b below. The decision whether to look for the split-flag in the bit stream or to split when the TU covers more than one PU can be made beforehand, i.e., the system is defined such that the split-flag is signaled in the bit stream, or the split-flag is inferred based upon criteria such as minimum or maximum TU sizes, or whether a TU spans multiple PUs.
Implicit Split-Flag
Alternatively, an "implicit split-flag" can be parsed from the bit stream 301. If the implicit split-flag is not set, then the split-flag is signaled for the
corresponding node. If the implicit split-flag is set, then the split-flag is not signaled for this node, and the splitting decision is made based on predefined split conditions. The predefined split conditions can include other factors, such as whether the TU spans multiple PUs, or if the TU size limitation is met. In this case, the implicit split-flag is received before the split-flag, if any.
For example, the implicit split-flag can be received before each node, before each transform tree, before each image or video frame, or before each video sequence. For Intra PUs, a TU is not allowed to span multiple PUs because the PU is predicted from a set of neighboring PUs, so those neighboring PUs are to be fully decoded, inverse transformed, and reconstructed in order to be used for predicting the current PU.
In another example, the implicit flag cannot be set, but predefined metrics or conditions are used to decide whether to split a node without requiring the presence of a split-flag.
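The following small Python sketch illustrates this decision logic; the reader interface, thresholds, and function names are assumptions for illustration, not the HEVC syntax.

```python
# Illustrative sketch of the implicit split-flag logic: when the implicit flag
# is set, the split decision is inferred from predefined split conditions
# instead of reading a split-flag from the bit stream.

class BitReader:                      # toy stand-in for a bit-stream parser
    def __init__(self, bits):
        self.bits = list(bits)
    def read_flag(self):
        return self.bits.pop(0)

def should_split(reader, tu_size, tu_spans_multiple_pus,
                 implicit=False, min_tu=4, max_tu=32):
    if implicit:
        # no split-flag is signaled for this node: infer the decision from
        # predefined conditions (TU size limit, TU spanning multiple PUs)
        return tu_size > max_tu or tu_spans_multiple_pus
    # otherwise the split-flag for this node is read from the bit stream
    return tu_size > min_tu and reader.read_flag() == 1

print(should_split(BitReader([1]), 16, False))        # explicit flag -> True
print(should_split(None, 16, True, implicit=True))    # inferred (spans PUs) -> True
```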
Step 2a: If the TU for this node is square, the process goes back to Step 1, treating this node as a new root node and splitting it into four square TUs, e.g., of size N/4xN/4.
Step 2b: If the TU for this node is rectangular, e.g., N/2xN, then the node is split into two nodes corresponding to N/4xN TUs. Similarly, an NxN/2 TU is split into two nodes corresponding to NxN/4 TUs. The process then repeats Step 2 for each of these nodes, ensuring that rectangular TUs are split along the direction of their longer axis, so that rectangular TUs become thinner.
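A compact sketch of these two rules is given below, assuming TU sizes are (width, height) pairs and using a depth counter as a stand-in for split-flags that happen to be set; the names are illustrative.

```python
# Illustrative sketch of Steps 2a and 2b.

def split_tu(w, h):
    if w == h:                      # Step 2a: square -> four squares
        return [(w // 2, h // 2)] * 4
    if w > h:                       # Step 2b: wide rectangle -> two thinner ones
        return [(w, h // 2)] * 2    # split runs along the longer (horizontal) axis
    return [(w // 2, h)] * 2        # tall rectangle -> two thinner ones

def split_recursively(w, h, depth, min_size=2):
    """Apply Step 2 'depth' more times, stopping at the predefined minimum."""
    if depth == 0 or min(w, h) <= min_size:
        return [(w, h)]
    return [tu for cw, ch in split_tu(w, h)
            for tu in split_recursively(cw, ch, depth - 1, min_size)]

print(split_tu(16, 16))             # [(8, 8), (8, 8), (8, 8), (8, 8)]
print(split_recursively(16, 8, 2))  # a 16x8 TU split twice -> four 16x2 TUs
```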
Embodiment 2:
In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs are split into multiple nodes, e.g., four nodes and four TUs. For example, an N/2xN TU is split into four N/8xN TUs. This partitioning into a larger number of TUs can be beneficial for cases where the data in the PU is different for different portions in the PU. Rather than require two levels of a binary tree to split one rectangular TU into four rectangular TUs, this embodiment requires only one quadtree level, and thus only one split-flag, to split one TU into four rectangular TUs. This embodiment can be predefined, or can be signaled as a "multiple split-flag" in the bit stream, similar to the way the implicit flag was signaled.
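A small sketch of this four-way split is shown below (illustrative only; sizes are (width, height) pairs and the function name is hypothetical).

```python
# Illustrative sketch of Embodiment 2: one quadtree-style split turns a
# rectangular TU directly into four thinner rectangles, so a single split-flag
# replaces the two levels a binary split would need.

def split_rect_into_four(w, h):
    if w < h:                        # tall rectangle, e.g. N/2 x N -> four N/8 x N
        return [(w // 4, h)] * 4
    return [(w, h // 4)] * 4         # wide rectangle, e.g. N x N/2 -> four N x N/8

print(split_rect_into_four(8, 16))   # -> [(2, 16)] * 4, i.e. one flag, not two levels
```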
Embodiment 3:
Here, Step 1 is modified so that nodes associated with square TUs are not merged to become rectangular until the size of the square TU is less than a predefined threshold. For example, if the threshold is four, then a rectangular 8x4 PU may be covered by two 4x4 TUs. A 4x2 PU, however, may not be covered by two 2x2 TUs. In this case, Embodiment 1 is applied, and the two nodes are merged to form a 4x2 TU to cover the 4x2 PU. This embodiment is useful for cases where square TUs are preferred due to performance or complexity considerations, and rectangular TUs are used only when the square TUs lose effectiveness due to their small dimensions.
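A minimal sketch of this threshold rule follows, assuming (width, height) PU sizes and a hypothetical helper name; it only illustrates the choice between tiling with squares and merging into one rectangle.

```python
# Illustrative sketch of Embodiment 3: square TUs inside a rectangular PU are
# merged into a rectangular TU only once the square side drops below a threshold.

def cover_pu(pu_w, pu_h, threshold=4):
    side = min(pu_w, pu_h)              # side of the square TUs that would fit
    if side >= threshold:
        # squares are still considered effective: tile the PU with them
        return [(side, side)] * ((pu_w // side) * (pu_h // side))
    # squares would be too small: merge them into one rectangular TU (Embodiment 1)
    return [(pu_w, pu_h)]

print(cover_pu(8, 4))   # threshold 4: two 4x4 squares cover the 8x4 PU
print(cover_pu(4, 2))   # squares would be 2x2 (< 4): one merged 4x2 TU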
Embodiment 4:
In this embodiment, Step 2b is modified so that nodes associated with rectangular TUs can be split to form more than two square or rectangular TUs, where the split is not necessarily aligned with the longer dimension of the rectangle. For example, a 16x4 TU can be split into four 4x4 TUs or two 8x4 TUs. The choice of whether to split into a square or rectangular TU can be explicitly indicated by a flag in the bit-stream, as was the case for the implicit flag, or it can be predefined as part of the encoding/decoding process.
This embodiment is typically used for very large rectangular TUs, e.g., 64x16, so that four 16x16 TUs are used instead of two 64x8 TUs. Another example splits a 64x16 TU into four 32x8 TUs. A very long horizontal TU, for example, can produce artifacts such as ringing in the horizontal direction, so this embodiment reduces the artifacts by reducing the maximum length of a rectangular TU. This maximum length may also be included as a signal in the bit stream. Similarly, a maximum width can be specified.
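The sketch below enumerates the two split patterns named in these examples (illustrative only; which pattern is actually used can be signaled or predefined, and the function name is hypothetical).

```python
# Illustrative sketch of Embodiment 4: a large rectangular TU is split into
# squares or into less elongated rectangles, not necessarily along its longer axis.

def split_large_rect(w, h, into_squares=True):
    if into_squares:
        side = min(w, h)                           # 64x16 -> four 16x16 TUs
        return [(side, side)] * ((w * h) // (side * side))
    return [(w // 2, h // 2)] * 4                  # 64x16 -> four 32x8 TUs

print(split_large_rect(64, 16))         # [(16, 16), (16, 16), (16, 16), (16, 16)]
print(split_large_rect(64, 16, False))  # [(32, 8), (32, 8), (32, 8), (32, 8)]
```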
Embodiment 5:
In this embodiment, Step 1 is modified so that the NxN TU is directly split into rectangular TUs, i.e., into TUs of a size other than N/2xN/2. For example, the NxN TU can be split into four N/4xN TUs. This embodiment differs from Embodiment 2 in that a square TU can be split directly into multiple rectangular TUs, even though the PU may be square.
This embodiment is useful for cases where features in the PU are oriented horizontally or vertically, so that horizontal or vertical rectangular TUs aligned with the direction of the features can be more effective than multiple square TUs that split the oriented data in the PU. Features can include color, edges, ridges, corners, objects and other points of interest. As before, whether or not to do this kind of splitting can be predefined or be signaled, as was the case for the implicit split-flag.
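A short sketch of this orientation-aligned split is given below (illustrative only; how the orientation is decided or signaled is outside the sketch, and the names are hypothetical).

```python
# Illustrative sketch of Embodiment 5: a square NxN TU is split directly into
# rectangular TUs aligned with the dominant orientation of the features in the PU.

def split_square_by_orientation(n, orientation):
    if orientation == "vertical":        # features run top-bottom: tall strips
        return [(n // 4, n)] * 4         # NxN -> four N/4 x N TUs
    return [(n, n // 4)] * 4             # NxN -> four N x N/4 TUs

print(split_square_by_orientation(16, "vertical"))   # four 4x16 TUs
```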
Embodiment 6:
In this embodiment, Step 1 is modified so that a TU can span multiple PUs. This can occur when the PUs are Inter-predicted. For example, Inter-predicted PUs are predicted using data from previously-decoded pictures, not from data decoded from within the same CU. A transform can therefore be applied over multiple PUs within a CU.

Claims

1. A method for coding pictures, comprising the steps of:
parsing a bitstream including coded pictures to obtain split-flags for generating a transform tree, and a partitioning of coding units (CUs) into
Prediction Units (PUs);
generating the transform tree according to the split-flags, wherein nodes in the transform tree represent transform units (TUs) associated with the CUs, wherein the generating further comprises:
splitting each TU only if the split-flag is set;
merging, for each PU that includes multiple TUs, the multiple TUs into a larger TU;
modifying the transform tree according to the splitting and merging; and
decoding data contained in each PU using the TUs associated with the PU according to the transform tree.
2. The method of claim 1, wherein square TUs are split into multiple rectangular TUs.
3. The method of claim 1, further comprising:
repeating the splitting, merging and modifying until a size of each TU is equal to a predetermined minimum.
4. The method of claim 3, wherein the repeating continues when the TU for a particular node is not contained entirely within the associated PU.
5. The method of claim 1, wherein the bitstream includes an implicit split-flag, and if the implicit split-flag is not set, then the split-flag is signaled in the bitstream for the corresponding node in the transform tree.
6. The method of claim 3, wherein the bitstream includes an implicit split-flag, and the repeating is performed only if the implicit split-flag is set and a predefined split condition is met.
7. The method of claim 1, wherein the splitting of a rectangular TU is along a direction of a longer axis of the rectangular TU.
8. The method of claim 1, wherein the splitting produces more than two TUs.
9. The method of claim 1, wherein a maximum length or a maximum width of the TUs are reduced.
10. The method of claim 1, wherein the PUs have arbitrary shapes and sizes.
11. The method of claim 1, wherein the splitting produces rectangular TUs.
12. The method of claim 1, wherein horizontal rectangular TUs and vertical rectangular TUs are aligned with a direction of features in the PU.
13. The method of claim 1, wherein the PU contains a portion of video data.
14. The method of claim 1, wherein the PU contains residual data obtained from a prediction process.
15. The method of claim 1, wherein the transform tree is an N-ary tree.
16. The method of claim 1, wherein the splitting of rectangular TUs is along a direction of a shorter axis.
17. The method of claim 1, wherein square or rectangular TUs are merged into larger TUs.
18. The method of claim 15, wherein values of N of the N-ary tree differ for different nodes of the transform tree.
19. The method of claim 1, wherein the TU spans multiple PUs when the PUs are Inter-predicted.
20. The method of claim 1, wherein the TUs are represented by leaf nodes of the transform tree.
PCT/JP2012/061296 2011-05-05 2012-04-20 Residual quadtree structure for transform units in non-square prediction units WO2012150693A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201280021462.0A CN103503461B (en) 2011-05-05 2012-04-20 Method for coding pictures
JP2013556183A JP6037341B2 (en) 2011-05-05 2012-04-20 Method for decoding video
EP12721366.8A EP2705665B1 (en) 2011-05-05 2012-04-20 Residual quadtree structure for transform units in non-square prediction units
TW101115534A TWI504209B (en) 2011-05-05 2012-05-02 Method for decoding pictures

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161482873P 2011-05-05 2011-05-05
US61/482,873 2011-05-05
US13/169,959 US8494290B2 (en) 2011-05-05 2011-06-27 Method for coding pictures using hierarchical transform units
US13/169,959 2011-06-27

Publications (1)

Publication Number Publication Date
WO2012150693A1 true WO2012150693A1 (en) 2012-11-08

Family

ID=47090285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/061296 WO2012150693A1 (en) 2011-05-05 2012-04-20 Residual quadtree structure for transform units in non-square prediction units

Country Status (6)

Country Link
US (1) US8494290B2 (en)
EP (1) EP2705665B1 (en)
JP (1) JP6037341B2 (en)
CN (1) CN103503461B (en)
TW (1) TWI504209B (en)
WO (1) WO2012150693A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2501551A (en) * 2012-04-26 2013-10-30 Sony Corp Partitioning Image Data into Coding, Prediction and Transform Units in 4:2:2 HEVC Video Data Encoding and Decoding
WO2014203805A1 (en) * 2013-06-18 2014-12-24 Mitsubishi Electric Corporation Method for coding pictures
WO2019009314A1 (en) * 2017-07-06 2019-01-10 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method and decoding method

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9807426B2 (en) * 2011-07-01 2017-10-31 Qualcomm Incorporated Applying non-square transforms to video data
US9787982B2 (en) * 2011-09-12 2017-10-10 Qualcomm Incorporated Non-square transform units and prediction units in video coding
US9800870B2 (en) * 2011-09-16 2017-10-24 Qualcomm Incorporated Line buffer reduction for short distance intra-prediction
US9699457B2 (en) 2011-10-11 2017-07-04 Qualcomm Incorporated Most probable transform for intra prediction coding
BR112014009403B1 (en) * 2011-10-17 2022-09-13 Kt Corporation METHOD TO DECODE A VIDEO SIGNAL.
US9247254B2 (en) * 2011-10-27 2016-01-26 Qualcomm Incorporated Non-square transforms in intra-prediction video coding
US20130128971A1 (en) * 2011-11-23 2013-05-23 Qualcomm Incorporated Transforms in video coding
US9467701B2 (en) * 2012-04-05 2016-10-11 Qualcomm Incorporated Coded block flag coding
US9912944B2 (en) * 2012-04-16 2018-03-06 Qualcomm Incorporated Simplified non-square quadtree transforms for video coding
US9762903B2 (en) * 2012-06-01 2017-09-12 Qualcomm Incorporated External pictures in video coding
US9749645B2 (en) * 2012-06-22 2017-08-29 Microsoft Technology Licensing, Llc Coded-block-flag coding and derivation
WO2014078068A1 (en) * 2012-11-13 2014-05-22 Intel Corporation Content adaptive transform coding for next generation video
KR101677406B1 (en) 2012-11-13 2016-11-29 인텔 코포레이션 Video codec architecture for next generation video
KR20150058324A (en) 2013-01-30 2015-05-28 인텔 코포레이션 Content adaptive entropy coding for next generation video
US9544597B1 (en) 2013-02-11 2017-01-10 Google Inc. Hybrid transform in video encoding and decoding
US9967559B1 (en) 2013-02-11 2018-05-08 Google Llc Motion vector dependent spatial transformation in video coding
US9674530B1 (en) 2013-04-30 2017-06-06 Google Inc. Hybrid transforms in video coding
CN104811731A (en) * 2014-01-03 2015-07-29 上海天荷电子信息有限公司 Multilayer sub-block matching image compression method
US10687079B2 (en) * 2014-03-13 2020-06-16 Qualcomm Incorporated Constrained depth intra mode coding for 3D video coding
WO2016054774A1 (en) * 2014-10-08 2016-04-14 Mediatek Singapore Pte. Ltd. A method for the co-existence of color-space transform and cross-component prediction
US9565451B1 (en) 2014-10-31 2017-02-07 Google Inc. Prediction dependent transform coding
WO2016090568A1 (en) 2014-12-10 2016-06-16 Mediatek Singapore Pte. Ltd. Binary tree block partitioning structure
US10382795B2 (en) 2014-12-10 2019-08-13 Mediatek Singapore Pte. Ltd. Method of video coding using binary tree block partitioning
CN105141957B (en) * 2015-07-31 2019-03-15 广东中星电子有限公司 The method and apparatus of image and video data encoding and decoding
US9769499B2 (en) 2015-08-11 2017-09-19 Google Inc. Super-transform video coding
US10277905B2 (en) 2015-09-14 2019-04-30 Google Llc Transform selection for non-baseband signal coding
US10638132B2 (en) * 2015-10-15 2020-04-28 Lg Electronics Inc. Method for encoding and decoding video signal, and apparatus therefor
US9807423B1 (en) 2015-11-24 2017-10-31 Google Inc. Hybrid transform scheme for video coding
WO2017088170A1 (en) * 2015-11-27 2017-06-01 Mediatek Inc. Entropy coding the binary tree block partitioning structure
CN108781299A (en) * 2015-12-31 2018-11-09 联发科技股份有限公司 Method and apparatus for video and the prediction binary tree structure of image coding and decoding
US20170244964A1 (en) * 2016-02-23 2017-08-24 Mediatek Inc. Method and Apparatus of Flexible Block Partition for Video Coding
WO2018037853A1 (en) * 2016-08-26 2018-03-01 シャープ株式会社 Image decoding apparatus and image coding apparatus
CN116962726A (en) * 2016-09-20 2023-10-27 株式会社Kt Method for decoding and encoding video and method for transmitting video data
EP3306938A1 (en) 2016-10-05 2018-04-11 Thomson Licensing Method and apparatus for binary-tree split mode coding
KR102354628B1 (en) * 2017-03-31 2022-01-25 한국전자통신연구원 A method of video processing for processing coding tree units and coding units, a method and appratus for decoding and encoding video using the processing.
CN112601085A (en) * 2017-06-28 2021-04-02 华为技术有限公司 Image data encoding and decoding methods and devices
DK3649781T3 (en) * 2017-07-04 2024-03-11 Huawei Tech Co Ltd IMPROVEMENT OF FORCED BORDER DIVISION
WO2019234613A1 (en) 2018-06-05 2019-12-12 Beijing Bytedance Network Technology Co., Ltd. Partition tree with partition into 3 sub-blocks by horizontal and vertical splits
US10623736B2 (en) * 2018-06-14 2020-04-14 Telefonaktiebolaget Lm Ericsson (Publ) Tile selection and bandwidth optimization for providing 360° immersive video
US10419738B1 (en) 2018-06-14 2019-09-17 Telefonaktiebolaget Lm Ericsson (Publ) System and method for providing 360° immersive video based on gaze vector information
US10567780B2 (en) 2018-06-14 2020-02-18 Telefonaktiebolaget Lm Ericsson (Publ) System and method for encoding 360° immersive video
TWI723433B (en) 2018-06-21 2021-04-01 大陸商北京字節跳動網絡技術有限公司 Improved border partition
US10841662B2 (en) 2018-07-27 2020-11-17 Telefonaktiebolaget Lm Ericsson (Publ) System and method for inserting advertisement content in 360° immersive video
US10757389B2 (en) 2018-10-01 2020-08-25 Telefonaktiebolaget Lm Ericsson (Publ) Client optimization for providing quality control in 360° immersive video during pause
CN111277828B (en) * 2018-12-04 2022-07-12 华为技术有限公司 Video encoding and decoding method, video encoder and video decoder
CN111277840B (en) * 2018-12-04 2022-02-08 华为技术有限公司 Transform method, inverse transform method, video encoder and video decoder
CN114727105B (en) * 2019-03-22 2023-03-24 华为技术有限公司 Transform unit partitioning method for video coding
US11122297B2 (en) 2019-05-03 2021-09-14 Google Llc Using border-aligned block functions for image compression

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4751742A (en) * 1985-05-07 1988-06-14 Avelex Priority coding of transform coefficients
US5315670A (en) * 1991-11-12 1994-05-24 General Electric Company Digital data compression system including zerotree coefficient coding
US5546477A (en) * 1993-03-30 1996-08-13 Klics, Inc. Data compression and decompression
JPH07168809A (en) * 1993-03-30 1995-07-04 Klics Ltd Method and circuit for conversion of wavelet
US5602589A (en) * 1994-08-19 1997-02-11 Xerox Corporation Video image compression using weighted wavelet hierarchical vector quantization
US5748786A (en) * 1994-09-21 1998-05-05 Ricoh Company, Ltd. Apparatus for compression using reversible embedded wavelets
US5881176A (en) * 1994-09-21 1999-03-09 Ricoh Corporation Compression and decompression with wavelet style and binary style including quantization by device-dependent parser
US6633611B2 (en) * 1997-04-24 2003-10-14 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for region-based moving image encoding and decoding
TW388843B (en) * 1997-04-24 2000-05-01 Mitsubishi Electric Corp Moving image encoding method, moving image encoder and moving image decoder
JP2003250161A (en) * 2001-12-19 2003-09-05 Matsushita Electric Ind Co Ltd Encoder and decoder
KR20060109247A (en) * 2005-04-13 2006-10-19 엘지전자 주식회사 Method and apparatus for encoding/decoding a video signal using pictures of base layer
JP2005012439A (en) * 2003-06-18 2005-01-13 Nippon Hoso Kyokai <Nhk> Encoding device, encoding method and encoding program
WO2006028088A1 (en) * 2004-09-08 2006-03-16 Matsushita Electric Industrial Co., Ltd. Motion image encoding method and motion image decoding method
JP4656912B2 (en) * 2004-10-29 2011-03-23 三洋電機株式会社 Image encoding device
WO2006112272A1 (en) * 2005-04-13 2006-10-26 Ntt Docomo, Inc. Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method, dynamic image decoding method, dynamic image encoding program, and dynamic image decoding program
EA029414B1 (en) * 2009-04-08 2018-03-30 Шарп Кабусики Кайся Video frame encoding apparatus and video frame decoding apparatus
KR101474756B1 (en) * 2009-08-13 2014-12-19 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
KR101452860B1 (en) * 2009-08-17 2014-10-23 삼성전자주식회사 Method and apparatus for image encoding, and method and apparatus for image decoding
EA037919B1 (en) * 2009-10-20 2021-06-07 Шарп Кабусики Кайся Moving image coding device, moving image decoding device, moving image coding/decoding system, moving image coding method and moving image decoding method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Video Compression Using Nested Quadtree Structures, Leaf Merging, and Improved Techniques for Motion Representation and Entropy Coding", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 20, no. 12, December 2010 (2010-12-01), pages 1676 - 1687
CAO (TSINGHUA) X ET AL: "CE6.b1 Report on Short Distance Intra Prediction Method", 20110310, no. JCTVC-E278, 10 March 2011 (2011-03-10), XP030008784, ISSN: 0000-0007 *
CAO ET AL.: "CE6.bl Report on Short Distance Intra Prediction Method (SDIP", JCTVC-E278, March 2011 (2011-03-01)
DETLEV MARPE ET AL: "Video Compression Using Nested Quadtree Structures, Leaf Merging, and Improved Techniques for Motion Representation and Entropy Coding", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 12, 1 December 2010 (2010-12-01), pages 1676 - 1687, XP011329407, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2010.2092615 *
GARY J SULLIVAN ET AL: "Meeting report of the fifth meeting of the Joint Collaborative Team on Video Coding (JCT-VC), Geneva, CH, 16-23 March 2011", 5. JCT-VC MEETING; 96. MPEG MEETING; 16-3-2011 - 23-3-2011; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-E600, 15 April 2011 (2011-04-15), XP030009012 *
YUAN (TSINGHUA) Y ET AL: "Asymmetric Motion Partition with OBMC and Non-Square TU", 5. JCT-VC MEETING; 96. MPEG MEETING; 16-3-2011 - 23-3-2011; GENEVA;(JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-E376, 11 March 2011 (2011-03-11), XP030008882, ISSN: 0000-0005 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2501551A (en) * 2012-04-26 2013-10-30 Sony Corp Partitioning Image Data into Coding, Prediction and Transform Units in 4:2:2 HEVC Video Data Encoding and Decoding
WO2014203805A1 (en) * 2013-06-18 2014-12-24 Mitsubishi Electric Corporation Method for coding pictures
WO2019009314A1 (en) * 2017-07-06 2019-01-10 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method and decoding method

Also Published As

Publication number Publication date
CN103503461B (en) 2016-11-09
EP2705665B1 (en) 2016-09-14
TWI504209B (en) 2015-10-11
US8494290B2 (en) 2013-07-23
US20120281928A1 (en) 2012-11-08
CN103503461A (en) 2014-01-08
JP2014511628A (en) 2014-05-15
TW201309025A (en) 2013-02-16
EP2705665A1 (en) 2014-03-12
JP6037341B2 (en) 2016-12-07

Similar Documents

Publication Publication Date Title
EP2705665B1 (en) Residual quadtree structure for transform units in non-square prediction units
RU2769348C1 (en) Image processing method and device therefor
KR102227898B1 (en) Disabling sign data hiding in video coding
WO2014203805A1 (en) Method for coding pictures
KR101773240B1 (en) Coded block flag coding
EP2878124B1 (en) Devices and methods for processing of partition mode in high efficiency video coding
EP3328086A1 (en) Devices and methods for context reduction in last significant coefficient position coding
KR20190090866A (en) Method and apparatus for encoding / decoding video signal using second order transform
US20140146894A1 (en) Devices and methods for modifications of syntax related to transform skip for high efficiency video coding (hevc)
WO2012099743A1 (en) Method and system for processing video data
EP2805495A2 (en) Devices and methods for context reduction in last significant coefficient position coding
WO2014055231A1 (en) Devices and methods for using base layer motion vector for enhancement layer motion vector prediction
WO2013152356A1 (en) Devices and methods for signaling sample adaptive offset (sao) parameters
KR20220058582A (en) Video coding method and apparatus based on transformation
KR20220066351A (en) Transformation-based video coding method and apparatus
WO2014100111A1 (en) Devices and methods for using base layer intra prediction mode for enhancement layer intra mode prediction
EP4017008A1 (en) Transform-based image coding method and device
KR20220097513A (en) Transformation-based video coding method and apparatus
KR20220058584A (en) Video coding method and apparatus based on transformation
KR20220024500A (en) Transformation-based video coding method and apparatus
KR20220058583A (en) Transformation-based video coding method and apparatus
AU2020375518B2 (en) Image coding method based on transform, and device therefor
RU2803184C1 (en) Image encoding method based on transformation and device for its implementation
JP7414977B2 (en) Video coding method and device based on conversion
RU2811986C2 (en) Image encoding method based on transformation and device for its implementation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12721366

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013556183

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2012721366

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012721366

Country of ref document: EP