US20070253486A1 - Method and apparatus for reconstructing an image block - Google Patents
Method and apparatus for reconstructing an image block
- Publication number
- US20070253486A1 (application US 11/543,031)
- Authority
- US
- United States
- Prior art keywords
- motion vector
- block
- picture
- fgs
- picture layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/34—Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
- H04N19/187—Adaptive coding characterised by the coding unit, the unit being a scalable video layer
- H04N19/29—Video object coding involving scalability at the object level, e.g. video object layer [VOL]
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/52—Processing of motion vectors by predictive encoding
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
- H04N5/145—Movement estimation (under H04N5/14—Picture signal circuitry for video frequency region › H04N5/144—Movement detection)
Definitions
- the present invention relates, in general, to methods of encoding and decoding video signals.
- a Scalable Video Codec is a scheme that encodes video signals at the highest image quality while enabling image quality to be secured to some degree even if only part of the entire picture (frame) sequence generated as a result of the encoding (a sequence of frames intermittently selected from the entire sequence) is decoded.
- a separate sub-picture sequence for a low bit rate, for example, a picture sequence of small screens and/or a picture sequence having a small number of frames per second, can be provided.
- This sub-picture sequence is called a base layer, and a main picture sequence is called an enhanced layer.
- the base layer and the enhanced layer are obtained by encoding the same video signal source, and redundant information exists in the video signals of the two layers. Therefore, when the base layer is provided, an interlayer prediction method can be used to improve coding efficiency.
- in order to improve the Signal-to-Noise Ratio (SNR) of the base layer, that is, to enhance image quality, an enhanced layer may be used; this scheme is called SNR scalability, Fine Granular Scalability (FGS), or progressive refinement.
- transform coefficients corresponding to respective pixels are separately encoded into a base layer and an enhanced layer, depending on the resolution of bit representation.
- DCT: Discrete Cosine Transform
- when the transmission of the enhanced layer is omitted, the bit rate decreases at the cost of some deterioration in decoded image quality. That is, FGS compensates for loss occurring during the quantization process, and provides high flexibility, enabling the bit rate to be controlled in response to the transmission or decoding environment.
- PFGS: Progressive FGS
- an adaptive reference block formation function receives a base layer collocated block Xb and an FGS enhanced layer reference block Re, and produces an adapted reference block Ra for use in reconstructing a current image block X in a current frame of the FGS layer that is being reconstructed.
- the collocated block Xb is the block in the base layer that is collocated with respect to the current image block X. Namely, the collocated block Xb is in a base layer frame temporally coincident with the current frame of the FGS layer, and the collocated block Xb is in the same relative position within the base layer frame as the current image block X in the current frame of the FGS layer.
- the collocated block Xb includes a reference picture index that indicates a reference base layer frame.
- the collocated block Xb also includes a motion vector.
- the motion vector points to a base layer reference block Rb in the reference base layer frame.
- the FGS enhanced layer reference block Re is a collocated block with respect to the base layer reference block Rb. Namely, the frame in the FGS layer temporally coincident with the reference frame in the base layer indicated by the reference picture index of the collocated block Xb serves as the FGS enhanced layer reference frame. Further, the motion vector of the collocated block Xb is used as the motion vector in the FGS enhanced layer reference frame to obtain the FGS enhanced layer reference block Re.
- the FGS enhanced layer reference block Re is a difference or error signal representing enhancement quality.
- the adaptive reference block formation function adds the FGS enhanced layer reference block Re to the collocated block Xb at a transform coefficient level to obtain the adapted reference block Ra.
- a reconstruction function reconstructs the current image block X by combining an encoded block Rd for the current image block X with the adapted reference block Ra in the well-known manner.
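The adaptive reference block formation and reconstruction described above can be sketched as two additions. This is only an illustrative model: blocks are represented as flat lists of transform coefficients, and the function names (`form_adapted_reference`, `reconstruct_block`) are invented for the sketch, not taken from the patent.

```python
def form_adapted_reference(xb, re):
    """Add the FGS enhanced-layer reference block Re to the base-layer
    collocated block Xb at the transform-coefficient level: Ra = Xb + Re."""
    return [b + e for b, e in zip(xb, re)]

def reconstruct_block(rd, ra):
    """Combine the encoded (residual) block Rd with the adapted
    reference block Ra to reconstruct the current block X."""
    return [d + a for d, a in zip(rd, ra)]

# Toy 2x2 blocks flattened to length-4 coefficient lists (arbitrary values).
xb = [10, 4, 0, -2]   # base-layer collocated block
re = [1, -1, 2, 0]    # enhanced-layer reference (refinement) block
rd = [3, 0, -1, 1]    # decoded residual for the current block

ra = form_adapted_reference(xb, re)   # [11, 3, 2, -2]
x = reconstruct_block(rd, ra)         # [14, 3, 1, -1]
```

The point of the sketch is only that both steps are coefficient-wise sums: the refinement signal Re sharpens Xb into Ra, and the residual Rd then completes X.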
- the resolution of bit representation of an image may vary due to the difference between the quantization step sizes of the FGS base layer and the FGS enhanced layer, so that the motion vector of the FGS base layer collocated block Xb may not be identical to that of the FGS enhanced layer block X. This means that coding efficiency may be decreased.
- the present invention relates to a method of reconstructing a current block in a first picture layer.
- a motion vector for the current block is generated based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block.
- the second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer.
- the current block is reconstructed using the generated motion vector and a reference picture.
- a picture in the first picture layer is determined as the reference picture based on a reference picture index for the block in the second picture layer.
- the motion vector information is obtained from the block in the second picture layer.
- the motion vector is generated by determining a motion vector prediction based on the obtained motion vector information, and generating the motion vector associated with the current block in the first picture layer based on the motion vector prediction and the motion vector difference information.
- the motion vector information may include a motion vector associated with the block of the second picture layer, and the motion vector prediction may be determined equal to the motion vector associated with the block of the second picture layer.
- the reference picture for the current block may be temporally associated with a reference picture in the second picture layer, and the reference picture in the second picture layer is a reference picture for the block in the second picture layer.
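The motion-vector derivation in this method can be sketched minimally as follows, assuming vectors are plain (dx, dy) tuples and that the prediction is set equal to the second-layer block's motion vector, as the text above permits. The function name is illustrative.

```python
def generate_motion_vector(base_layer_mv, mvd):
    """mv = mvp + mvd, where the prediction mvp is taken directly from the
    motion vector of the temporally associated second-layer block."""
    mvp = base_layer_mv                      # prediction from the second layer
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Second-layer motion vector (5, -3) refined by transmitted difference (1, 0).
mv = generate_motion_vector((5, -3), (1, 0))   # (6, -3)
```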
- the present invention also relates to an apparatus for reconstructing a current block in a first picture layer.
- the apparatus includes a first decoder generating a motion vector for the current block based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block.
- the second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer.
- the first decoder reconstructs the current block using the generated motion vector.
- the apparatus also includes a second decoder obtaining the motion vector information from the second picture layer and sending the motion vector information to the first decoder.
- FIG. 1 illustrates a progressive FGS structure for encoding the FGS enhanced layer of a current frame using the quality base layer of the current frame and the quality enhanced layer of another frame;
- FIG. 2 illustrates a process of finely adjusting the motion vector of the FGS base layer of a current frame in the picture of the FGS enhanced layer of a reference frame to predict the FGS enhanced layer of the current frame according to an embodiment of the present invention
- FIG. 3 illustrates a process of searching the FGS enhanced layer picture of a reference frame for an FGS enhanced layer reference block for an arbitrary block in a current frame, independent of the motion vector of an FGS base layer of the arbitrary block according to another embodiment of the present invention
- FIG. 4 is a block diagram of an apparatus which encodes a video signal to which the present invention may be applied.
- FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream to which the present invention may be applied.
- the motion vector mv(Xb) of a Fine Granular Scalability (FGS) base layer collocated block Xb is finely adjusted to improve the coding efficiency of Progressive FGS (PFGS).
- FGS: Fine Granular Scalability
- the embodiment selects, as the FGS enhanced layer reference frame for the FGS enhanced layer block X to be encoded, the FGS enhanced layer frame temporally coincident with the base layer reference frame of the base layer block Xb collocated with the block X.
- this base layer reference frame will be indicated in a reference picture index of the collocated block Xb; however, it is common for those skilled in the art to refer to the reference frame as being pointed to by the motion vector.
- a region, e.g., a partial region, of the FGS enhanced layer reference frame is considered. This region includes the block indicated by the motion vector mv(Xb) for the base layer collocated block Xb.
- the region is searched to find the block having the smallest image difference with respect to the block X, that is, the block Re′ that minimizes the Sum of Absolute Differences (SAD).
- SAD is the sum of absolute differences between corresponding pixels in the two blocks.
- the two blocks are the block X to be coded or decoded and the selected block. Then, a motion vector mv(X) from the block X to the selected block is calculated.
- the search range can be limited to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb).
- the search can be performed with respect only to the region extended by 1 pixel in every direction.
- the search resolution, that is, the unit by which the block X is moved to find a block having a minimum SAD, may be a pixel, a 1/2 pixel (half pel), or a 1/4 pixel (quarter pel).
- the location at which SAD is minimized is selected from among 9 candidate locations, as shown in FIG. 2 .
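The refinement search above can be sketched as follows, assuming frames are plain 2-D lists of pixel values, full-pixel search resolution, and a ±1-pixel range (the 9 candidate locations). All names (`sad`, `crop`, `refine_motion_vector`) are illustrative, not from the patent.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between corresponding pixels."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def crop(frame, top, left, h, w):
    """Extract an h-by-w block whose top-left corner is (top, left)."""
    return [row[left:left + w] for row in frame[top:top + h]]

def refine_motion_vector(cur_block, ref_frame, pos, mv_xb, search=1):
    """Search the region around the block indicated by mv(Xb); return the
    refined mv(X) and the difference mvd_ref_fgs = mv(X) - mv(Xb)."""
    h, w = len(cur_block), len(cur_block[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            mv = (mv_xb[0] + dx, mv_xb[1] + dy)        # (dx, dy) candidate
            cand = crop(ref_frame, pos[0] + mv[1], pos[1] + mv[0], h, w)
            cost = sad(cur_block, cand)
            if best is None or cost < best[0]:
                best = (cost, mv)
    mv_x = best[1]
    mvd_ref_fgs = (mv_x[0] - mv_xb[0], mv_x[1] - mv_xb[1])
    return mv_x, mvd_ref_fgs

# Toy reference frame with distinct sample values; the current block matches
# the reference one pixel to the right of the position mv(Xb) points at.
ref_frame = [[10 * r + c for c in range(6)] for r in range(6)]
cur_block = [[23, 24], [33, 34]]
mv_x, mvd = refine_motion_vector(cur_block, ref_frame, pos=(2, 2), mv_xb=(0, 0))
# mv_x == (1, 0) and mvd == (1, 0): the zero-SAD match is at offset (+1, 0).
```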
- the difference vector mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb), as shown in FIG. 2, is transmitted in the FGS enhanced layer.
- the FGS enhanced layer reference block associated with the obtained motion vector mv(X) is the enhanced layer reference block Re′.
- the block Re′ is used as a prediction block (or a predictor) for the block X to be decoded.
- motion estimation/prediction operations are performed independent of the motion vector mv(Xb) for the FGS base layer collocated block Xb corresponding to the block X, as shown in FIG. 3 .
- the FGS enhanced layer predicted image (FGS enhanced layer reference block) for the block X can be searched for in the reference frame indicated by the motion vector mv(Xb) (i.e., indicated by the reference picture index for the block Xb), or the reference block for the block X can be searched for in another frame.
- the obtained FGS enhanced layer reference block associated with the motion vector mv(X) is the enhanced layer reference block Re′.
- When a current motion vector mv is encoded, generally the difference mvd between the current motion vector mv and a motion vector mvp, which is predicted from the motion vectors of surrounding blocks, is encoded and transmitted.
- the pieces of data are mvd_ref_fgs_l0/l1
- the pieces of data are mvd_fgs_l0/l1.
- the motion vectors for macroblocks are calculated in relation to the FGS enhanced layer, and the calculated motion vectors are included in a macroblock layer within the FGS enhanced layer and transmitted to a decoder.
- related information is defined at the slice level, and is not defined at the macroblock level, the sub-macroblock level, or the sub-block level.
- the generation of the FGS enhanced layer is similar to a procedure of performing prediction between a base layer and an enhanced layer having different spatial resolutions in an intra base prediction mode, and generating residual data which is an image difference.
- X can correspond to the block of a quality enhanced layer to be encoded
- Xb can correspond to the block of a quality base layer
- an intra mode prediction method is applied to the residual block R to reduce the amount of residual data to be encoded in the FGS enhanced layer.
- the same intra mode information that is used in the base layer collocated block Xb corresponding to the block X is used.
- a block Rd having a difference value of the residual data is obtained by applying the mode information, used in the block Xb, to the residual block R.
- Discrete Cosine Transform (DCT) is performed on the obtained block Rd, and the DCT results are quantized using a quantization step size set smaller than the quantization step size used when the FGS base layer data for the block Xb is generated, thus generating FGS enhanced layer data for the block X.
- an intra mode applied to the residual block R is a DC mode based on the mean value of respective pixels in the block R.
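A hedged sketch of the DC-mode step described above: the mean of the residual block R serves as the intra prediction, giving Rd = R − mean, and Rd is then quantized with a step size smaller than the base layer's. The DCT stage is omitted here for brevity; `dc_mode_predict` and `quantize` are invented names, and the step size is only notional.

```python
def dc_mode_predict(residual):
    """Subtract the block mean (DC prediction) from each residual sample."""
    flat = [v for row in residual for v in row]
    dc = round(sum(flat) / len(flat))
    return [[v - dc for v in row] for row in residual], dc

def quantize(block, step):
    """Scalar quantization with the given step size."""
    return [[round(v / step) for v in row] for row in block]

r = [[8, 10], [6, 8]]          # toy residual block R
rd, dc = dc_mode_predict(r)    # dc == 8, rd == [[0, 2], [-2, 0]]
enh = quantize(rd, step=1)     # enhanced layer: finer step than the base layer
```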
- FIG. 4 is a block diagram of an apparatus which encodes a video signal and to which the present invention may be applied.
- the video signal encoding apparatus of FIG. 4 includes a base layer (BL) encoder 110 for performing motion prediction on an image signal, input as a frame sequence, using a predetermined method; performing DCT on motion prediction results; quantizing the DCT transform results, using a predetermined quantization step size; and generating base layer data.
- An FGS enhanced layer (FGS_EL) encoder 120 generates the FGS enhanced layer of a current frame using the motion information, the base layer data that are provided by the BL encoder 110 , and the FGS enhanced layer data of a frame (for example, a previous frame) which is a reference for motion estimation for the current frame.
- a muxer 130 multiplexes the output data of the BL encoder 110 and the output data of the FGS_EL encoder 120 using a predetermined method, and outputs multiplexed data.
- the FGS_EL encoder 120 reconstructs the quality base layer of the reference frame (also called a FGS base layer picture), which is the reference for motion prediction for a current frame, from the base layer data provided by the BL encoder 110 , and reconstructs the FGS enhanced layer picture of the reference frame using the FGS enhanced layer data of the reference frame and the reconstructed quality base layer of the reference frame.
- the reference frame may be a frame indicated by the motion vector mv(Xb) of the FGS base layer collocated block Xb corresponding to the block X in the current frame.
- the FGS enhanced layer picture of the reference frame may have been stored in a buffer in advance.
- the FGS_EL encoder 120 searches the FGS enhanced layer picture of the reconstructed reference frame for an FGS enhanced layer reference image for the block X, that is, a reference block or predicted block Re′ in which an SAD with respect to the block X is minimized, and then calculates a motion vector mv(X) from the block X to the found reference block Re′.
- the FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than a predetermined quantization step (quantization step size used when the BL encoder 110 generates the FGS base layer data for the block Xb), thus generating FGS enhanced layer data for the block X.
- the FGS_EL encoder 120 may limit the search range to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb) so as to reduce the burden of the search, as in the first embodiment of the present invention.
- the FGS_EL encoder 120 records the difference mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb) in the FGS enhanced layer in association with the block X.
- the FGS_EL encoder 120 may perform a motion estimation operation independent of the motion vector mv(Xb) so as to obtain the optimal motion vector mv_fgs of the FGS enhanced layer for the block X, searching for a reference block Re′ having a minimum SAD with respect to the block X and calculating the motion vector mv_fgs from the block X to the found reference block Re′.
- the FGS enhanced layer reference block for the block X may be searched for in the reference frame indicated by the motion vector mv(Xb), or a reference block for the block X may be searched for in a frame other than the reference frame.
- the FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than the predetermined quantization step size, thus generating the FGS enhanced layer data for the block X.
- the FGS_EL encoder 120 records the difference mvd_fgs between the calculated motion vector mv_fgs and the motion vector mvp_fgs, predicted and obtained from surrounding blocks, in the FGS enhanced layer in association with the block X. That is, the FGS_EL encoder 120 records syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock), in the FGS enhanced layer.
- information related to the motion vector may further include a reference index for a frame including the found reference block Re′.
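Both difference vectors the encoder records reduce to the same subtraction, sketched here with (dx, dy) tuples; `mv_diff` is an invented helper and the sample vectors are arbitrary.

```python
def mv_diff(mv, mv_pred):
    """Difference between a motion vector and its prediction."""
    return (mv[0] - mv_pred[0], mv[1] - mv_pred[1])

# First embodiment: record mvd_ref_fgs = mv(X) - mv(Xb).
mv_x, mv_xb = (6, -2), (5, -3)
mvd_ref_fgs = mv_diff(mv_x, mv_xb)        # (1, 1)

# Second embodiment: record mvd_fgs = mv_fgs - mvp_fgs (neighbour prediction).
mv_fgs, mvp_fgs = (4, 0), (3, 1)
mvd_fgs = mv_diff(mv_fgs, mvp_fgs)        # (1, -1)
```

The design point is that only the small difference is written into the FGS enhanced layer, which is cheaper to code than the full vector.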
- the encoded data stream is transmitted to a decoding apparatus in a wired or wireless manner, or is transferred through a recording medium.
- FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream and to which the present invention may be applied.
- the decoding apparatus of FIG. 5 includes a demuxer 210 for separating a received data stream into a base layer stream and an enhanced layer stream; a base layer (BL) decoder 220 for decoding an input base layer stream using a preset method; and an FGS_EL decoder 230 for generating the FGS enhanced layer picture of a current frame using the motion information, the reconstructed quality base layer (or FGS base layer data) that are provided by the BL decoder 220 , and the FGS enhanced layer stream.
- the FGS_EL decoder 230 checks information about the block X in the current frame, that is, information related to a motion vector used for motion prediction for the block X, in the FGS enhanced layer stream.
- when the FGS enhanced layer for the block X in the current frame i) is encoded on the basis of the FGS enhanced layer picture of another frame, and ii) is encoded using a block other than the block indicated by the motion vector mv(Xb) of the block Xb corresponding to the block X (that is, the FGS base layer block of the current frame) as a predicted block or a reference block, motion information indicating the other block is included in the FGS enhanced layer data of the current frame.
- the FGS enhanced layer includes syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock).
- the information related to the motion vector may further include an index for the reference frame in which the FGS enhanced layer reference block for the block X is found (the reference frame including the reference block).
- When motion information related to the block X in the current frame exists in the FGS enhanced layer of the current frame, the FGS_EL decoder 230 generates the FGS enhanced layer picture of the reference frame, which is the reference for motion prediction for the current frame, using the quality base layer of the reference frame (the FGS base layer picture reconstructed by the BL decoder 220 may be provided, or it may be reconstructed from the FGS base layer data provided by the BL decoder 220) and the FGS enhanced layer data of the reference frame.
- the reference frame may be a frame indicated by the motion vector mv(Xb) of the block Xb.
- the FGS enhanced layer of the reference frame may be encoded using an FGS enhanced layer picture of a different frame.
- a picture reconstructed from the different frame is used to reconstruct the reference frame.
- the FGS enhanced layer picture may have been generated in advance and stored in a buffer.
- the FGS_EL decoder 230 obtains the FGS enhanced layer reference block Re′ for the block X from the FGS enhanced layer picture of the reference frame, using the motion information related to the block X.
- the motion vector mv(X) from the block X to the reference block Re′ is obtained as the sum of the motion information mvd_ref_fgs, included in the FGS enhanced layer stream for the block X, and the motion vector mv(Xb) of the block Xb.
- the motion vector mv(X) is obtained as the sum of the motion information mvd_fgs, included in the FGS enhanced layer stream for the block X, and the motion vector mvp_fgs, predicted and obtained from the surrounding blocks.
- the motion vector mvp_fgs may be implemented using the motion vector mvp, which is obtained at the time of calculating the motion vector mv(Xb) of the FGS base layer collocated block Xb without change, or using a motion vector derived from the motion vector mvp.
- the FGS_EL decoder 230 performs inverse-quantization and inverse DCT on the FGS enhanced layer data for the block X, and adds the results of inverse quantization and inverse DCT to the obtained reference block Re′, thus generating the FGS enhanced layer picture for the block X.
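A minimal decoder-side sketch of these steps, assuming 2-D lists for blocks, a notional scalar dequantization, and omitting the inverse DCT; all function names are illustrative.

```python
def derive_mv(mvd, mv_pred):
    """mv(X) = mvd_ref_fgs + mv(Xb), or mvd_fgs + mvp_fgs."""
    return (mvd[0] + mv_pred[0], mvd[1] + mv_pred[1])

def dequantize(block, step):
    """Inverse scalar quantization with the given step size."""
    return [[v * step for v in row] for row in block]

def add_blocks(a, b):
    """Add the decoded residual to the reference block Re' sample-wise."""
    return [[u + v for u, v in zip(ra, rb)] for ra, rb in zip(a, b)]

mv_x = derive_mv((1, 0), (5, -3))                  # (6, -3)
residual = dequantize([[1, 0], [-1, 2]], step=2)   # [[2, 0], [-2, 4]]
re_prime = [[100, 102], [98, 96]]                  # toy reference block Re'
x = add_blocks(residual, re_prime)                 # [[102, 102], [96, 100]]
```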
- the above-described decoding apparatus may be mounted in a mobile communication terminal, or a device for reproducing recording media.
- the present invention is advantageous in that it can efficiently perform motion estimation/prediction operations on an FGS enhanced layer picture when the FGS enhanced layer is encoded or decoded, and can efficiently transmit motion information required to reconstruct an FGS enhanced layer picture.
Abstract
In one embodiment, a motion vector for a current block in a first picture layer is generated based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. The current block is reconstructed using the generated motion vector and a reference picture.
Description
- This application claims the benefit of priority on U.S. Provisional Application No. 60/723,474 filed Oct. 5, 2005; the entire content of which is hereby incorporated by reference.
- This application claims the benefit of priority on Korean Patent Application No. 10-2006-0068314 filed Jul. 21, 2006; the entire content of which is hereby incorporated by reference.
- 1. Field of the Invention
- The present invention relates, in general, to methods of encoding and decoding video signals.
- 2. Description of the Related Art
- A Scalable Video Codec (SVC) is a scheme that encodes a video signal at the highest image quality, while enabling image quality to be maintained to some degree even if only part of the entire picture (frame) sequence generated as a result of the encoding (a sequence of frames intermittently selected from the entire sequence) is decoded.
- Even if only a partial sequence of a picture sequence encoded by a scalable scheme is received and processed, image quality can be secured to some degree. However, if the bit rate is decreased, the deterioration in image quality becomes serious. In order to solve the problem, a separate sub-picture sequence for a low bit rate, for example, a picture sequence of small screens and/or a picture sequence having a small number of frames per second, can be provided.
- This sub-picture sequence is called a base layer, and a main picture sequence is called an enhanced layer. The base layer and the enhanced layer are obtained by encoding the same video signal source, and redundant information exists in the video signals of the two layers. Therefore, when the base layer is provided, an interlayer prediction method can be used to improve coding efficiency.
- Further, in order to improve the Signal-to-Noise Ratio (SNR) of a base layer, that is, to enhance image quality, an enhanced layer may be used, which is called SNR scalability, Fine Granular Scalability (FGS), or progressive refinement.
- According to FGS, transform coefficients corresponding to respective pixels, for example, Discrete Cosine Transform (DCT) coefficients, are separately encoded into a base layer and an enhanced layer, depending on the resolution of bit representation. When a transmission environment is bad, the transmission of the enhanced layer is omitted, so that the bit rate can be decreased while the quality of a decoded image is deteriorated. That is, FGS compensates for loss occurring during a quantization process, and provides high flexibility enabling a bit rate to be controlled in response to a transmission or decoding environment.
- For example, if a transform coefficient is quantized using a quantization step size (that is, QP), for example, QP=32, to generate a base layer, a first FGS enhanced layer is generated by quantizing the difference between an original transform coefficient and a transform coefficient obtained by inversely quantizing the quantized coefficient of the base layer, using a quantization step size corresponding to quality higher than QP=32, for example, QP=26. Similarly, a second FGS enhanced layer is generated by quantizing the difference between the original transform coefficient and a transform coefficient obtained by inversely quantizing the sum of the quantized coefficients of the base layer and the first FGS enhanced layer, using a quantization step size, for example, QP=20.
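- The layered quantization described above can be sketched as follows. This is an illustrative sketch only; it assumes simple uniform rounding quantization, and the function names and step sizes are examples rather than values taken from the original:

```python
def quantize(value, step):
    # Uniform scalar quantization (illustrative; real codecs add dead zones).
    return int(round(value / step))

def dequantize(level, step):
    return level * step

def fgs_encode(coeff, steps):
    """Quantize one transform coefficient into a base layer plus successive
    FGS enhanced layers; each layer codes the residual left by the coarser
    layers, using a progressively finer quantization step size."""
    levels, recon = [], 0.0
    for step in steps:
        levels.append(quantize(coeff - recon, step))
        recon += dequantize(levels[-1], step)  # running reconstruction
    return levels, recon

# Coarse base layer step, then two refinement layers with finer steps.
levels, recon = fgs_encode(70.0, steps=[32, 8, 2])
```

Dropping the trailing entries of `levels` at transmission time corresponds to omitting enhancement layers: the reconstruction simply becomes coarser.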
- However, in a conventional FGS coding method, only a quality base layer, that is, a picture of an FGS base layer, is used to generate an FGS enhanced layer. This means that temporally redundant information existing between temporally adjacent quality enhanced layers, that is, pictures of an FGS enhanced layer, is not used.
- In order to use such temporal redundancy in the FGS enhanced layer, a method of utilizing an adjacent quality enhanced layer as well as a quality base layer to predict a current FGS enhanced layer has been proposed. This method is called Progressive FGS (PFGS), and the structure of such a PFGS scheme is shown in
FIG. 1 . - As shown in
FIG. 1 , an adaptive reference block formation function receives a base layer collocated block Xb and an FGS enhanced layer reference block Re, and produces an adapted reference block Ra for use in reconstructing a current image block X in a current frame of the FGS layer that is being reconstructed. The collocated block Xb is the block in the base layer that is collocated with respect to the current image block X. Namely, the collocated block Xb is in a base layer frame temporally coincident with the current frame of the FGS layer, and the collocated block Xb is in the same relative position within the base layer frame as the current image block X in the current frame of the FGS layer. - The collocated block Xb includes a reference picture index that indicates a reference base layer frame. The collocated block Xb also includes a motion vector. As shown in
FIG. 1 , the motion vector points to a base layer reference block Rb in the reference base layer frame. The FGS enhanced layer reference block Re is a collocated block with respect to the base layer reference block Rb. Namely, the frame in the FGS layer temporally coincident with the reference frame in the base layer indicated by the reference picture index of the collocated block Xb serves as the FGS enhanced layer reference frame. Further, the motion vector of the collocated block Xb is used as the motion vector in the FGS enhanced layer reference frame to obtain the FGS enhanced layer reference block Re. - The FGS enhanced layer reference block Re is a difference or error signal representing enhancement quality. As such, the adaptive reference block formation function adds the FGS enhanced layer reference block Re to the collocated block Xb at a transform coefficient level to obtain the adapted reference block Ra. Then, as shown in
FIG. 1 , a reconstruction function reconstructs the current image block X by combining an encoded block Rd for the current image block X with the adapted reference block Ra in the well-known manner. - However, the resolution of bit representation of an image may vary due to the difference between the quantization step sizes of the FGS base layer and the FGS enhanced layer, so that the motion vector of the FGS base layer collocated block Xb may not be identical to that of the FGS enhanced layer block X. This means that coding efficiency may be decreased.
- The present invention relates to a method of reconstructing a current block in a first picture layer.
- In one embodiment, a motion vector for the current block is generated based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. The current block is reconstructed using the generated motion vector and a reference picture.
- In one embodiment, a picture in the first picture layer is determined as the reference picture based on a reference picture index for the block in the second picture layer.
- In one embodiment, the motion vector information is obtained from the block in the second picture layer.
- In one embodiment, the motion vector is generated by determining a motion vector prediction based on the obtained motion vector information, and generating the motion vector associated with the current block in the first picture layer based on the motion vector prediction and the motion vector difference information.
- For example, the motion vector information may include a motion vector associated with the block of the second picture layer, and the motion vector prediction may be determined equal to the motion vector associated with the block of the second picture layer.
- In one embodiment, the reference picture for the current block may be temporally associated with a reference picture in the second picture layer, and the reference picture in the second picture layer is a reference picture for the block in the second picture layer.
- The present invention also relates to an apparatus for reconstructing a current block in a first picture layer.
- In one embodiment, the apparatus includes a first decoder generating a motion vector for the current block based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. The first decoder reconstructs the current block using the generated motion vector. The apparatus also includes a second decoder obtaining the motion vector information from the second picture layer and sending the motion vector information to the first decoder.
- The above and other objects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
-
FIG. 1 illustrates a progressive FGS structure for encoding the FGS enhanced layer of a current frame using the quality base layer of the current frame and the quality enhanced layer of another frame; -
FIG. 2 illustrates a process of finely adjusting the motion vector of the FGS base layer of a current frame in the picture of the FGS enhanced layer of a reference frame to predict the FGS enhanced layer of the current frame according to an embodiment of the present invention; -
FIG. 3 illustrates a process of searching the FGS enhanced layer picture of a reference frame for an FGS enhanced layer reference block for an arbitrary block in a current frame, independent of the motion vector of an FGS base layer of the arbitrary block according to another embodiment of the present invention; -
FIG. 4 is a block diagram of an apparatus which encodes a video signal to which the present invention may be applied; and -
FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream to which the present invention may be applied. - Hereinafter, example embodiments of the present invention will be described in detail with reference to the attached drawings.
- In an embodiment of the present invention, during the encoding process, the motion vector mv(Xb) of a Fine Granular Scalability (FGS) base layer collocated block Xb is finely adjusted to improve the coding efficiency of Progressive FGS (PFGS).
- That is, the embodiment obtains the FGS enhanced layer reference frame for the FGS enhanced layer block X to be encoded as the FGS enhanced layer frame temporally coincident with the base layer reference frame for the base layer block Xb collocated with respect to the FGS enhanced layer block X. As will be appreciated, this base layer reference frame will be indicated in a reference picture index of the collocated block Xb; however, it is common for those skilled in the art to refer to the reference frame as being pointed to by the motion vector. Given the enhanced layer reference frame, a region (e.g., a partial region) of a picture is reconstructed from the FGS enhanced layer reference frame. This region includes the block indicated by the motion vector mv(Xb) for the base layer collocated block Xb. The region is searched to obtain the block having the smallest image difference with respect to the block X, that is, a block Re′ causing the Sum of Absolute Differences (SAD) to be minimized. The SAD is the sum of absolute differences between corresponding pixels in the two blocks, namely the block X to be coded or decoded and the selected block. Then, a motion vector mv(X) from the block X to the selected block is calculated.
- In this case, in order to reduce the burden of the search, the search range can be limited to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb). For example, the search can be performed with respect only to the region extended by 1 pixel in every direction.
- Further, the search resolution, that is, the unit by which the block X is moved to find a block having a minimum SAD, may be a pixel, a ½ pixel (half pel), or a ¼ pixel (quarter pel).
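- The restricted search described above can be sketched as follows, at integer-pel resolution. This is a hedged illustration: blocks are plain lists of pixel rows, positions are (row, column) tuples, every candidate block is assumed to lie inside the picture, and the function names are not from the original:

```python
def sad(a, b):
    # Sum of Absolute Differences between two equally sized blocks.
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def refine_position(ref_pic, cur_block, pos, radius=1):
    """Search the (2*radius + 1)**2 candidate positions around pos (the
    block indicated by mv(Xb)) and return the position whose block
    minimizes the SAD; radius=1 gives the 9 candidate locations."""
    h, w = len(cur_block), len(cur_block[0])
    best_cost, best_pos = None, pos
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = pos[0] + dy, pos[1] + dx
            cand = [row[x:x + w] for row in ref_pic[y:y + h]]
            cost = sad(cur_block, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos
```

The offset between the returned position and the starting position then plays the role of the difference vector that is transmitted instead of a full motion vector.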
- In particular, when a search is performed with respect only to the region extended by 1 pixel in every direction, and is performed on a pixel basis, the location at which SAD is minimized is selected from among 9 candidate locations, as shown in
FIG. 2 . - If the search range is limited in this way, the difference vector mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb), as shown in
FIG. 2 , is transmitted in the FGS enhanced layer. The FGS enhanced layer reference block associated with the obtained motion vector mv(X) is the enhanced layer reference block Re′. The block Re′ is used as a prediction block (or a predictor) for the block X to be decoded. - In another embodiment of the present invention, in order to obtain an optimal motion vector mv_fgs for the FGS enhanced layer for the block X, that is, in order to generate the optimal predicted image of the FGS enhanced layer for the block X, motion estimation/prediction operations are performed independent of the motion vector mv(Xb) for the FGS base layer collocated block Xb corresponding to the block X, as shown in
FIG. 3 . - In this case, the FGS enhanced layer predicted image (FGS enhanced layer reference block) for the block X can be searched for in the reference frame indicated by the motion vector mv(Xb) (i.e., indicated by the reference picture index for the block Xb), or the reference block for the block X can be searched for in another frame. As with the embodiment of
FIG. 2 , the obtained FGS enhanced layer reference block associated with the motion vector mv(X) is the enhanced layer reference block Re′. - In the former case, there are advantages in that the frames in which the FGS enhanced layer reference block for the block X is to be searched for are limited to the reference frame indicated by the motion vector mv(Xb), so that the burden of encoding is reduced, and there is no need to transmit a reference index for the frame that includes the reference block.
- In the latter case, there are disadvantages in that the number of frames, in which the reference block is to be searched for, increases, so that the burden of encoding increases, and a reference index for the frame, including a found reference block, must be additionally transmitted. But, there is an advantage in that the optimal predicted image of the FGS enhanced layer for the block X can be generated.
- When a motion vector is encoded without change, a great number of bits are required. Since the motion vectors of neighboring blocks have a tendency to be highly correlated, respective motion vectors can be predicted from the motion vectors of surrounding blocks that have been previously encoded (immediate left, immediate upper and immediate upper-right blocks).
- When a current motion vector mv is encoded, generally, the difference mvd between the current motion vector mv and a motion vector mvp, which is predicted from the motion vectors of surrounding blocks, is encoded and transmitted.
- Therefore, the motion vector mv_fgs of the FGS enhanced layer for the block X that is obtained through an independent motion prediction operation is encoded by mvd_fgs=mv_fgs−mvp_fgs. In this case, the motion vector mvp_fgs, predicted and obtained from the surrounding blocks, can be implemented using the motion vector mvp, obtained when the motion vector mv(Xb) of the FGS base layer collocated block Xb is encoded, without change (e.g., mvp=mv(Xb)), or using a motion vector derived from the motion vector mvp (e.g., mvp=scaled version of mv(Xb)).
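- The differential coding mvd_fgs=mv_fgs−mvp_fgs, with the predictor taken over from the base layer, can be sketched as follows. The tuple representation and the optional scale factor (standing in for "a motion vector derived from the motion vector mvp") are assumptions for illustration:

```python
def encode_mvd_fgs(mv_fgs, mv_xb, scale=1):
    """Encoder side: transmit only the difference between the enhanced
    layer motion vector and the predictor taken from the base layer
    motion vector mv(Xb), used as-is or scaled."""
    mvp = (mv_xb[0] * scale, mv_xb[1] * scale)
    return (mv_fgs[0] - mvp[0], mv_fgs[1] - mvp[1])

def decode_mv_fgs(mvd_fgs, mv_xb, scale=1):
    """Decoder side: mv_fgs = mvd_fgs + mvp_fgs."""
    mvp = (mv_xb[0] * scale, mv_xb[1] * scale)
    return (mvd_fgs[0] + mvp[0], mvd_fgs[1] + mvp[1])

mvd = encode_mvd_fgs((5, -3), (4, -2))         # (1, -1): cheap to code
assert decode_mv_fgs(mvd, (4, -2)) == (5, -3)  # lossless round trip
```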
- If the number of motion vectors of the FGS base layer collocated block Xb corresponding to the block X is two, that is, if the block Xb is predicted using two reference frames, two pieces of data related to the encoding of the motion vector of the FGS enhanced layer for the block X are obtained. For example, in the first embodiment, the pieces of data are mvd_ref_fgs_l0/l1, and in the second embodiment, the pieces of data are mvd_fgs_l0/l1.
- In the above embodiments, the motion vectors for macroblocks (or image blocks smaller than macroblocks) are calculated in relation to the FGS enhanced layer, and the calculated motion vectors are included in a macroblock layer within the FGS enhanced layer and transmitted to a decoder. However, in the conventional FGS enhanced layer, related information is defined on the basis of a slice level, and is not defined on the basis of a macroblock level, a sub-macroblock level, or sub-block level.
- Therefore, in the present invention, in order to define, in the FGS enhanced layer, data related to the motion vectors calculated on the basis of a macroblock (or an image block smaller than a macroblock), syntax required to define a macroblock layer and/or an image block layer smaller than a macroblock layer, for example, progressive_refinement_macroblock_layer_in_scalable_extension( ), and progressive_refinement_mb (and/or sub_mb)_pred_in_scalable_extension( ), is newly defined, and the calculated motion vectors are recorded in the newly defined syntax and then transmitted.
- Meanwhile, the generation of the FGS enhanced layer is similar to a procedure of performing prediction between a base layer and an enhanced layer having different spatial resolutions in an intra base prediction mode, and generating residual data which is an image difference.
- For example, if it is assumed that the block of the enhanced layer is X and the block of the base layer corresponding to the block X is Xb, the residual block obtained through intra base prediction is R=X−Xb. In this case, X can correspond to the block of a quality enhanced layer to be encoded, Xb can correspond to the block of a quality base layer, and R=X−Xb can correspond to residual data to be encoded in the FGS enhanced layer for the block X.
- In another embodiment of the present invention, an intra mode prediction method is applied to the residual block R to reduce the amount of residual data to be encoded in the FGS enhanced layer. In order to perform intra mode prediction on the residual block R, the same intra mode information that is used in the base layer collocated block Xb corresponding to the block X is used.
- A block Rd having a difference value of the residual data is obtained by applying the mode information, used in the block Xb, to the residual block R. Discrete Cosine Transform (DCT) is performed on the obtained block Rd, and the DCT results are quantized using a quantization step size set smaller than the quantization step size used when the FGS base layer data for the block Xb is generated, thus generating FGS enhanced layer data for the block X.
- In a further embodiment, an adapted reference block Ra′ for the block X is generated as equal to the FGS enhanced layer reference block Re′. Further, the residual data R to be encoded in the FGS enhanced layer for the block X is set as R=X−Ra′, so that an intra mode prediction method is applied to the residual block R. It will be appreciated that in this embodiment, the enhanced layer reference block Re′, and therefore the adapted reference block Ra′, are reconstructed pictures and not at the transform coefficient level.
- In this case, an intra mode applied to the residual block R is a DC mode based on the mean value of respective pixels in the block R. Further, if the block Re′ is generated by the methods according to embodiments of the present invention, information related to motion required to generate the block Re′ in the decoder must be included in the FGS enhanced layer data for the block X.
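- The DC-mode prediction on the residual block R can be sketched as follows. This is a minimal illustration that follows the description literally, using the mean of the pixels in R itself as the DC predictor; the function name is hypothetical:

```python
def dc_predict_residual(r_block):
    """DC-mode prediction on a residual block R: the predictor is the mean
    of the pixels in R, and the block of differences Rd is what remains
    to be transformed and encoded."""
    count = sum(len(row) for row in r_block)
    mean = sum(sum(row) for row in r_block) / count
    rd = [[v - mean for v in row] for row in r_block]
    return mean, rd

mean, rd = dc_predict_residual([[4, 6], [5, 5]])  # rd values sum to zero
```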
-
FIG. 4 is a block diagram of an apparatus which encodes a video signal and to which the present invention may be applied. - The video signal encoding apparatus of
FIG. 4 includes a base layer (BL) encoder 110 for performing motion prediction on an image signal, input as a frame sequence, using a predetermined method; performing DCT on the motion prediction results; quantizing the DCT results using a predetermined quantization step size; and generating base layer data. An FGS enhanced layer (FGS_EL) encoder 120 generates the FGS enhanced layer of a current frame using the motion information, the base layer data that are provided by the BL encoder 110, and the FGS enhanced layer data of a frame (for example, a previous frame) which is a reference for motion estimation for the current frame. A muxer 130 multiplexes the output data of the BL encoder 110 and the output data of the FGS_EL encoder 120 using a predetermined method, and outputs the multiplexed data. - The
FGS_EL encoder 120 reconstructs the quality base layer of the reference frame (also called an FGS base layer picture), which is the reference for motion prediction for a current frame, from the base layer data provided by the BL encoder 110, and reconstructs the FGS enhanced layer picture of the reference frame using the FGS enhanced layer data of the reference frame and the reconstructed quality base layer of the reference frame.
- When the reference frame is a frame previous to the current frame, the FGS enhanced layer picture of the reference frame may have been stored in a buffer in advance.
- Thereafter, the
FGS_EL encoder 120 searches the FGS enhanced layer picture of the reconstructed reference frame for an FGS enhanced layer reference image for the block X, that is, a reference block or predicted block Re′ in which an SAD with respect to the block X is minimized, and then calculates a motion vector mv(X) from the block X to the found reference block Re′. - The
FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than a predetermined quantization step size (the quantization step size used when the BL encoder 110 generates the FGS base layer data for the block Xb), thus generating FGS enhanced layer data for the block X. - When the reference block is predicted, the
FGS_EL encoder 120 may limit the search range to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb) so as to reduce the burden of the search, as in the first embodiment of the present invention. In this case, the FGS_EL encoder 120 records the difference mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb) in the FGS enhanced layer in association with the block X. - Further, as in the case of the above-described second embodiment of the present invention, the
FGS_EL encoder 120 may perform a motion estimation operation independent of the motion vector mv(Xb) so as to obtain the optimal motion vector mv_fgs of the FGS enhanced layer for the block X, searching for a reference block Re′ having a minimum SAD with respect to the block X, and calculating the motion vector mv_fgs from the block X to the found reference block Re′.
- The
FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than the predetermined quantization step size; thus generating the FGS enhanced layer data for the block X. - Further, the
FGS_EL encoder 120 records the difference mvd_fgs between the calculated motion vector mv_fgs and the motion vector mvp_fgs, predicted and obtained from surrounding blocks, in the FGS enhanced layer in association with the block X. That is, the FGS_EL encoder 120 records syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock), in the FGS enhanced layer.
- The encoded data stream is transmitted to a decoding apparatus in a wired or wireless manner, or is transferred through a recording medium.
-
FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream and to which the present invention may be applied. The decoding apparatus of FIG. 5 includes a demuxer 210 for separating a received data stream into a base layer stream and an enhanced layer stream; a base layer (BL) decoder 220 for decoding an input base layer stream using a preset method; and an FGS_EL decoder 230 for generating the FGS enhanced layer picture of a current frame using the motion information, the reconstructed quality base layer (or FGS base layer data) that are provided by the BL decoder 220, and the FGS enhanced layer stream. - The
FGS_EL decoder 230 checks information about the block X in the current frame, that is, information related to a motion vector used for motion prediction for the block X, in the FGS enhanced layer stream. - When i) the FGS enhanced layer for the block X in the current frame is encoded on the basis of the FGS enhanced layer picture of another frame and ii) is encoded using a block other than the block indicated by the motion vector mv(Xb) of the block Xb corresponding to the block X (that is the FGS base layer block of the current frame) as a predicted block or a reference block, motion information for indicating the other block is included in the FGS enhanced layer data of the current frame.
- That is, in the above description, the FGS enhanced layer includes syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock). The information related to the motion vector may further include an index for the reference frame in which the FGS enhanced layer reference block for the block X is found (the reference frame including the reference block).
- When motion information related to the block X in the current frame exists in the FGS enhanced layer of the current frame, the
FGS_EL decoder 230 generates the FGS enhanced layer picture of the reference frame using the quality base layer of the reference frame (the FGS base layer picture reconstructed by the BL decoder 220 may be provided, or may be reconstructed from the FGS base layer data provided by the BL decoder 220), which is the reference for motion prediction for the current frame, and the FGS enhanced layer data of the reference frame. In this case, the reference frame may be a frame indicated by the motion vector mv(Xb) of the block Xb.
- Further, the
FGS_EL decoder 230 obtains the FGS enhanced layer reference block Re′ for the block X from the FGS enhanced layer picture of the reference frame, using the motion information related to the block X. - In the above-described first embodiment of the present invention, the motion vector mv(X) from the block X to the reference block Re′ is obtained as the sum of the motion information mv_ref_fgs, included in an FGS enhanced layer stream for the block X, and the motion vector mv(Xb) of the block Xb.
- Further, in the second embodiment of the present invention, the motion vector mv(X) is obtained as the sum of the motion information mvd_fgs, included in the FGS enhanced layer stream for the block X, and the motion vector mvp_fgs, predicted and obtained from the surrounding blocks. In this case, the motion vector mvp_fgs may be implemented using the motion vector mvp, which is obtained at the time of calculating the motion vector mv(Xb) of the FGS base layer collocated block Xb without change, or using a motion vector derived from the motion vector mvp.
- Thereafter, the
FGS_EL decoder 230 performs inverse-quantization and inverse DCT on the FGS enhanced layer data for the block X, and adds the results of inverse quantization and inverse DCT to the obtained reference block Re′, thus generating the FGS enhanced layer picture for the block X. - The above-described decoding apparatus may be mounted in a mobile communication terminal, or a device for reproducing recording media.
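- The decoder-side reconstruction described above (inverse quantization, inverse DCT, and addition of the reference block Re′) can be sketched as follows; again a naive inverse DCT stands in for the codec's real inverse transform, and the step size is a placeholder:

```python
import math

def idct2(coeffs):
    # Naive orthonormal 2-D inverse DCT of an N x N coefficient block.
    n = len(coeffs)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[sum(alpha(u) * alpha(v) * coeffs[u][v]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                 for u in range(n) for v in range(n))
             for y in range(n)] for x in range(n)]

def reconstruct_block(el_levels, step_el, ref_block):
    """Dequantize the EL levels, inverse-transform them into a residual,
    and add the reference block Re' to obtain the reconstructed block."""
    coeffs = [[lv * step_el for lv in row] for row in el_levels]
    residual = idct2(coeffs)
    return [[p + r for p, r in zip(pr, rr)]
            for pr, rr in zip(ref_block, residual)]
```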
- As described above, the present invention is advantageous in that it can efficiently perform motion estimation/prediction operations on an FGS enhanced layer picture when the FGS enhanced layer is encoded or decoded, and can efficiently transmit motion information required to reconstruct an FGS enhanced layer picture.
- Although the example embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention.
Claims (14)
1. A method of reconstructing a current block in a first picture layer, comprising:
generating a motion vector for the current block based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block, the second picture layer having lower quality pictures than pictures in the first picture layer, and the block of the second picture layer being temporally associated with the current block in the first picture layer; and
reconstructing the current block using the generated motion vector and a reference picture.
2. The method of claim 1, further comprising:
determining the reference picture, which is in the first picture layer, based on a reference picture index for the block in the second picture layer.
3. The method of claim 1, further comprising:
obtaining the motion vector information from the block in the second picture layer; and
obtaining the motion vector difference information from a video data bitstream.
4. The method of claim 1, wherein the motion vector information includes a motion vector associated with the block of the second picture layer.
5. The method of claim 1, wherein the generating step comprises:
determining a motion vector prediction based on the obtained motion vector information; and
generating the motion vector associated with the current block in the first picture layer based on the motion vector prediction and the motion vector difference information.
6. The method of claim 5, wherein
the motion vector information includes a motion vector associated with the block of the second picture layer; and
the determining a motion vector prediction step determines the motion vector prediction equal to the motion vector associated with the block of the second picture layer.
7. The method of claim 5, wherein the generating step generates the motion vector for the current block equal to the motion vector prediction plus a motion vector difference indicated by the motion vector difference information.
8. The method of claim 7, wherein
the motion vector information includes a motion vector associated with the block of the second picture layer; and
the determining a motion vector prediction step determines the motion vector prediction equal to the motion vector associated with the block of the second picture layer.
9. The method of claim 1, wherein the motion vector difference information indicates a motion vector difference of one-quarter pixel or less.
10. The method of claim 1, wherein the motion vector difference information indicates a motion vector difference of one-half pixel or less.
11. The method of claim 1, wherein the reference picture is a picture in the first picture layer.
12. The method of claim 11, wherein the reference picture for the current block is temporally associated with a reference picture in the second picture layer, the reference picture in the second picture layer being a reference picture for the block in the second picture layer.
13. The method of claim 1, wherein the reconstructing step combines a prediction block with a residual block to reconstruct the current block, the prediction block being based on the generated motion vector and the reference picture.
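The method claims above describe deriving an enhancement-layer motion vector as the base-layer motion vector (used as the prediction, claims 5-8) plus a signalled difference (quarter-pel precision per claim 9), then combining a motion-compensated prediction block with a residual block (claim 13). A minimal Python sketch of that flow follows; the function name, the quarter-pel scale factor, and the plain-list picture representation are illustrative assumptions, not taken from the patent:

```python
def reconstruct_block(base_mv, mv_diff, ref_picture, residual, x, y, size=4):
    """Reconstruct a current block at (x, y) in the enhancement layer.

    base_mv  -- motion vector of the temporally associated base-layer block,
                used directly as the motion vector prediction (claims 6, 8)
    mv_diff  -- signalled motion vector difference (quarter-pel units assumed)
    """
    # Enhancement-layer MV = prediction + difference (claim 7).
    mv = (base_mv[0] + mv_diff[0], base_mv[1] + mv_diff[1])

    # Integer-pel part of the motion vector (quarter-pel precision assumed;
    # sub-pel interpolation is omitted in this sketch).
    dx, dy = mv[0] // 4, mv[1] // 4

    # Motion-compensated prediction from the reference picture, plus the
    # residual (claim 13). ref_picture is a 2-D list of samples.
    block = []
    for row in range(size):
        out_row = []
        for col in range(size):
            pred = ref_picture[y + dy + row][x + dx + col]
            out_row.append(pred + residual[row][col])
        block.append(out_row)
    return block
```

For example, with a zero residual and a base-layer MV of one integer pel horizontally, the reconstructed block is simply the reference block shifted right by one sample.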
14. An apparatus for reconstructing a current block in a first picture layer, comprising:
a first decoder generating a motion vector for the current block based on motion vector information for a block in a second picture layer and motion vector difference information associated with the current block, the second picture layer having lower quality pictures than pictures in the first picture layer, and the block of the second picture layer being temporally associated with the current block in the first picture layer;
the first decoder reconstructing the current block using the generated motion vector; and
a second decoder obtaining the motion vector information from the second picture layer and sending the motion vector information to the first decoder.
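Claim 14 splits the work between two decoders: a second (base-layer) decoder that extracts motion vector information from the lower-quality layer, and a first (enhancement-layer) decoder that combines it with the signalled difference. The two cooperating objects below are a sketch of that dataflow only; all class and method names are illustrative assumptions:

```python
class BaseLayerDecoder:
    """Second decoder: parses the lower-quality picture layer and exposes the
    motion vector information for a temporally associated block."""

    def __init__(self, motion_vectors):
        # Mapping from block index to that block's base-layer (mvx, mvy).
        self._motion_vectors = motion_vectors

    def motion_vector_info(self, block_idx):
        return self._motion_vectors[block_idx]


class EnhancementLayerDecoder:
    """First decoder: derives the enhancement-layer motion vector from the
    base-layer prediction plus the motion vector difference."""

    def __init__(self, base_decoder):
        self._base = base_decoder

    def generate_motion_vector(self, block_idx, mv_diff):
        # Base-layer MV serves as the prediction; add the signalled difference.
        pred = self._base.motion_vector_info(block_idx)
        return (pred[0] + mv_diff[0], pred[1] + mv_diff[1])
```

The separation mirrors the claim: the base-layer decoder only *sends* motion information, while all enhancement-layer reconstruction happens in the first decoder.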
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US72347405P | 2005-10-05 | 2005-10-05 | |
KR10-2006-0068314 | 2006-07-21 | ||
KR1020060068314A KR20070038396A (en) | 2005-10-05 | 2006-07-21 | Method for encoding and decoding video signal |
US11/543,031 US20070253486A1 (en) | 2005-10-05 | 2006-10-05 | Method and apparatus for reconstructing an image block |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070253486A1 true US20070253486A1 (en) | 2007-11-01 |
Family
ID=38159769
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/992,956 Active 2027-03-15 US7773675B2 (en) | 2005-10-05 | 2006-10-02 | Method for decoding a video signal using a quality base reference picture |
US11/992,958 Active 2027-04-11 US7869501B2 (en) | 2005-10-05 | 2006-10-02 | Method for decoding a video signal to mark a picture as a reference picture |
US11/543,032 Abandoned US20070195879A1 (en) | 2005-10-05 | 2006-10-05 | Method and apparatus for encoding a motion vection |
US11/543,080 Abandoned US20070086518A1 (en) | 2005-10-05 | 2006-10-05 | Method and apparatus for generating a motion vector |
US11/543,031 Abandoned US20070253486A1 (en) | 2005-10-05 | 2006-10-05 | Method and apparatus for reconstructing an image block |
US12/656,128 Active 2027-03-12 US8422551B2 (en) | 2005-10-05 | 2010-01-19 | Method and apparatus for managing a reference picture |
Country Status (10)
Country | Link |
---|---|
US (6) | US7773675B2 (en) |
EP (1) | EP2924997B1 (en) |
JP (1) | JP4851528B2 (en) |
KR (5) | KR20070038396A (en) |
CN (3) | CN101352044B (en) |
BR (1) | BRPI0616860B8 (en) |
ES (1) | ES2539935T3 (en) |
HK (1) | HK1124710A1 (en) |
RU (2) | RU2008117444A (en) |
WO (3) | WO2007040335A1 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1628484B1 (en) * | 2004-08-18 | 2019-04-03 | STMicroelectronics Srl | Method for transcoding compressed video signals, related apparatus and computer program product therefor |
KR20070038396A (en) * | 2005-10-05 | 2007-04-10 | 엘지전자 주식회사 | Method for encoding and decoding video signal |
WO2007080223A1 (en) * | 2006-01-10 | 2007-07-19 | Nokia Corporation | Buffering of decoded reference pictures |
EP2123049B1 (en) | 2007-01-18 | 2016-12-28 | Nokia Technologies Oy | Carriage of sei messages in rtp payload format |
WO2009032255A2 (en) * | 2007-09-04 | 2009-03-12 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
JP5406465B2 (en) * | 2008-04-24 | 2014-02-05 | 株式会社Nttドコモ | Image predictive encoding device, image predictive encoding method, image predictive encoding program, image predictive decoding device, image predictive decoding method, and image predictive decoding program |
US20090279614A1 (en) * | 2008-05-10 | 2009-11-12 | Samsung Electronics Co., Ltd. | Apparatus and method for managing reference frame buffer in layered video coding |
EP2152009A1 (en) * | 2008-08-06 | 2010-02-10 | Thomson Licensing | Method for predicting a lost or damaged block of an enhanced spatial layer frame and SVC-decoder adapted therefore |
KR101220175B1 (en) * | 2008-12-08 | 2013-01-11 | 연세대학교 원주산학협력단 | Method for generating and processing hierarchical pes packet for digital satellite broadcasting based on svc video |
JP5700970B2 (en) * | 2009-07-30 | 2015-04-15 | トムソン ライセンシングThomson Licensing | Decoding method of encoded data stream representing image sequence and encoding method of image sequence |
JP2013505647A (en) * | 2009-09-22 | 2013-02-14 | パナソニック株式会社 | Image encoding apparatus, image decoding apparatus, image encoding method, and image decoding method |
US9578325B2 (en) * | 2010-01-13 | 2017-02-21 | Texas Instruments Incorporated | Drift reduction for quality scalable video coding |
WO2012042884A1 (en) * | 2010-09-29 | 2012-04-05 | パナソニック株式会社 | Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit |
KR101959091B1 (en) | 2010-09-30 | 2019-03-15 | 선 페이턴트 트러스트 | Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit |
JP5781313B2 (en) * | 2011-01-12 | 2015-09-16 | 株式会社Nttドコモ | Image prediction coding method, image prediction coding device, image prediction coding program, image prediction decoding method, image prediction decoding device, and image prediction decoding program |
US8834507B2 (en) | 2011-05-17 | 2014-09-16 | Warsaw Orthopedic, Inc. | Dilation instruments and methods |
US9854275B2 (en) | 2011-06-25 | 2017-12-26 | Qualcomm Incorporated | Quantization in video coding |
US9232233B2 (en) * | 2011-07-01 | 2016-01-05 | Apple Inc. | Adaptive configuration of reference frame buffer based on camera and background motion |
KR20130005167A (en) * | 2011-07-05 | 2013-01-15 | 삼성전자주식회사 | Image signal decoding device and decoding method thereof |
SE538057C2 (en) | 2011-09-09 | 2016-02-23 | Kt Corp | A method for deriving a temporal prediction motion vector and a device using the method. |
US9420307B2 (en) | 2011-09-23 | 2016-08-16 | Qualcomm Incorporated | Coding reference pictures for a reference picture set |
KR20140057301A (en) | 2011-10-17 | 2014-05-12 | 가부시끼가이샤 도시바 | Encoding method and decoding method |
WO2013062174A1 (en) * | 2011-10-26 | 2013-05-02 | 경희대학교 산학협력단 | Method for managing memory, and device for decoding video using same |
KR101835625B1 (en) | 2011-10-26 | 2018-04-19 | 인텔렉추얼디스커버리 주식회사 | Method and apparatus for scalable video coding using inter prediction mode |
US9264717B2 (en) | 2011-10-31 | 2016-02-16 | Qualcomm Incorporated | Random access with advanced decoded picture buffer (DPB) management in video coding |
US9172737B2 (en) * | 2012-07-30 | 2015-10-27 | New York University | Streamloading content, such as video content for example, by both downloading enhancement layers of the content and streaming a base layer of the content |
US20140092977A1 (en) * | 2012-09-28 | 2014-04-03 | Nokia Corporation | Apparatus, a Method and a Computer Program for Video Coding and Decoding |
CN108337524A (en) * | 2012-09-28 | 2018-07-27 | 索尼公司 | Encoding device, coding method, decoding device and coding/decoding method |
CN104871540B (en) * | 2012-12-14 | 2019-04-02 | Lg 电子株式会社 | The method of encoded video, the method for decoding video and the device using it |
US20150312581A1 (en) * | 2012-12-26 | 2015-10-29 | Sony Corporation | Image processing device and method |
WO2014163793A2 (en) * | 2013-03-11 | 2014-10-09 | Dolby Laboratories Licensing Corporation | Distribution of multi-format high dynamic range video using layered coding |
KR20140121315A (en) * | 2013-04-04 | 2014-10-15 | 한국전자통신연구원 | Method and apparatus for image encoding and decoding based on multi-layer using reference picture list |
KR102177831B1 (en) * | 2013-04-05 | 2020-11-11 | 삼성전자주식회사 | Method and apparatus for decoding multi-layer video, and method and apparatus for encoding multi-layer video |
US20160100180A1 (en) * | 2013-04-17 | 2016-04-07 | Wilus Institute Of Standards And Technology Inc. | Method and apparatus for processing video signal |
US20150341634A1 (en) * | 2013-10-16 | 2015-11-26 | Intel Corporation | Method, apparatus and system to select audio-video data for streaming |
JP2015188249A (en) * | 2015-06-03 | 2015-10-29 | 株式会社東芝 | Video coding device and video coding method |
KR101643102B1 (en) * | 2015-07-15 | 2016-08-12 | 민코넷주식회사 | Method of Supplying Object State Transmitting Type Broadcasting Service and Broadcast Playing |
US10555002B2 (en) | 2016-01-21 | 2020-02-04 | Intel Corporation | Long term reference picture coding |
JP2017069987A (en) * | 2017-01-18 | 2017-04-06 | 株式会社東芝 | Moving picture encoder and moving picture encoding method |
CN109344849B (en) * | 2018-07-27 | 2022-03-11 | 广东工业大学 | Complex network image identification method based on structure balance theory |
CN112995670B (en) * | 2021-05-10 | 2021-10-08 | 浙江智慧视频安防创新中心有限公司 | Method and device for sequentially executing inter-frame and intra-frame joint prediction coding and decoding |
Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5818531A (en) * | 1995-10-27 | 1998-10-06 | Kabushiki Kaisha Toshiba | Video encoding and decoding apparatus |
US5973739A (en) * | 1992-03-27 | 1999-10-26 | British Telecommunications Public Limited Company | Layered video coder |
US6292512B1 (en) * | 1998-07-06 | 2001-09-18 | U.S. Philips Corporation | Scalable video coding system |
US6330280B1 (en) * | 1996-11-08 | 2001-12-11 | Sony Corporation | Method and apparatus for decoding enhancement and base layer image signals using a predicted image signal |
US6339618B1 (en) * | 1997-01-08 | 2002-01-15 | At&T Corp. | Mesh node motion coding to enable object based functionalities within a motion compensated transform video coder |
US20020037046A1 (en) * | 2000-09-22 | 2002-03-28 | Philips Electronics North America Corporation | Totally embedded FGS video coding with motion compensation |
US20020118742A1 (en) * | 2001-02-26 | 2002-08-29 | Philips Electronics North America Corporation. | Prediction structures for enhancement layer in fine granular scalability video coding |
US6498865B1 (en) * | 1999-02-11 | 2002-12-24 | Packetvideo Corp,. | Method and device for control and compatible delivery of digitally compressed visual data in a heterogeneous communication network |
US20030007557A1 (en) * | 1996-02-07 | 2003-01-09 | Sharp Kabushiki Kaisha | Motion picture coding and decoding apparatus |
US6510177B1 (en) * | 2000-03-24 | 2003-01-21 | Microsoft Corporation | System and method for layered video coding enhancement |
US20030156646A1 (en) * | 2001-12-17 | 2003-08-21 | Microsoft Corporation | Multi-resolution motion estimation and compensation |
US6614936B1 (en) * | 1999-12-03 | 2003-09-02 | Microsoft Corporation | System and method for robust video coding using progressive fine-granularity scalable (PFGS) coding |
US6639943B1 (en) * | 1999-11-23 | 2003-10-28 | Koninklijke Philips Electronics N.V. | Hybrid temporal-SNR fine granular scalability video coding |
US20030223493A1 (en) * | 2002-05-29 | 2003-12-04 | Koninklijke Philips Electronics N.V. | Entropy constrained scalar quantizer for a laplace-markov source |
US20030223643A1 (en) * | 2002-05-28 | 2003-12-04 | Koninklijke Philips Electronics N.V. | Efficiency FGST framework employing higher quality reference frames |
US20040001635A1 (en) * | 2002-06-27 | 2004-01-01 | Koninklijke Philips Electronics N.V. | FGS decoder based on quality estimated at the decoder |
US6765965B1 (en) * | 1999-04-22 | 2004-07-20 | Renesas Technology Corp. | Motion vector detecting apparatus |
US20040252900A1 (en) * | 2001-10-26 | 2004-12-16 | Wilhelmus Hendrikus Alfonsus Bruls | Spatial scalable compression |
US20050011543A1 (en) * | 2003-06-27 | 2005-01-20 | Haught John Christian | Process for recovering a dry cleaning solvent from a mixture by modifying the mixture |
US20050053148A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Intra-coded fields for Bi-directional frames |
US20050111543A1 (en) * | 2003-11-24 | 2005-05-26 | Lg Electronics Inc. | Apparatus and method for processing video for implementing signal to noise ratio scalability |
US6907070B2 (en) * | 2000-12-15 | 2005-06-14 | Microsoft Corporation | Drifting reduction and macroblock-based control in progressive fine granularity scalable video coding |
US20050185714A1 (en) * | 2004-02-24 | 2005-08-25 | Chia-Wen Lin | Method and apparatus for MPEG-4 FGS performance enhancement |
US6940905B2 (en) * | 2000-09-22 | 2005-09-06 | Koninklijke Philips Electronics N.V. | Double-loop motion-compensation fine granular scalability |
US20050195900A1 (en) * | 2004-03-04 | 2005-09-08 | Samsung Electronics Co., Ltd. | Video encoding and decoding methods and systems for video streaming service |
US20050195896A1 (en) * | 2004-03-08 | 2005-09-08 | National Chiao Tung University | Architecture for stack robust fine granularity scalability |
US20060013308A1 (en) * | 2004-07-15 | 2006-01-19 | Samsung Electronics Co., Ltd. | Method and apparatus for scalably encoding and decoding color video |
US20060083309A1 (en) * | 2004-10-15 | 2006-04-20 | Heiko Schwarz | Apparatus and method for generating a coded video sequence by using an intermediate layer motion data prediction |
US7072394B2 (en) * | 2002-08-27 | 2006-07-04 | National Chiao Tung University | Architecture and method for fine granularity scalable video coding |
US20060233242A1 (en) * | 2005-04-13 | 2006-10-19 | Nokia Corporation | Coding of frame number in scalable video coding |
US20060256863A1 (en) * | 2005-04-13 | 2006-11-16 | Nokia Corporation | Method, device and system for enhanced and effective fine granularity scalability (FGS) coding and decoding of video data |
US20070053442A1 (en) * | 2005-08-25 | 2007-03-08 | Nokia Corporation | Separation markers in fine granularity scalable video coding |
US20070160136A1 (en) * | 2006-01-12 | 2007-07-12 | Samsung Electronics Co., Ltd. | Method and apparatus for motion prediction using inverse motion transform |
US20080031345A1 (en) * | 2006-07-10 | 2008-02-07 | Segall Christopher A | Methods and Systems for Combining Layers in a Multi-Layer Bitstream |
US20080044094A1 (en) * | 2002-07-18 | 2008-02-21 | Jeon Byeong M | Method of determining motion vectors and a reference picture index for a current block in a picture to be decoded |
US20100038421A1 (en) * | 2003-06-03 | 2010-02-18 | Koninklijke Philips Electronics N.V. | Secure card terminal |
US20100296000A1 (en) * | 2009-05-25 | 2010-11-25 | Canon Kabushiki Kaisha | Method and device for transmitting video data |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998031151A1 (en) * | 1997-01-10 | 1998-07-16 | Matsushita Electric Industrial Co., Ltd. | Image processing method, image processing device, and data recording medium |
CN100579230C (en) * | 1997-04-01 | 2010-01-06 | 索尼公司 | Image encoder and method thereof, picture decoder and method thereof |
US7245663B2 (en) * | 1999-07-06 | 2007-07-17 | Koninklijke Philips Electronis N.V. | Method and apparatus for improved efficiency in transmission of fine granular scalable selective enhanced images |
EP1161839A1 (en) | 1999-12-28 | 2001-12-12 | Koninklijke Philips Electronics N.V. | Snr scalable video encoding method and corresponding decoding method |
US20020126759A1 (en) * | 2001-01-10 | 2002-09-12 | Wen-Hsiao Peng | Method and apparatus for providing prediction mode fine granularity scalability |
US20020118743A1 (en) * | 2001-02-28 | 2002-08-29 | Hong Jiang | Method, apparatus and system for multiple-layer scalable video coding |
JP2003299103A (en) * | 2002-03-29 | 2003-10-17 | Toshiba Corp | Moving picture encoding and decoding processes and devices thereof |
US6944222B2 (en) | 2002-03-04 | 2005-09-13 | Koninklijke Philips Electronics N.V. | Efficiency FGST framework employing higher quality reference frames |
KR100488018B1 (en) * | 2002-05-03 | 2005-05-06 | 엘지전자 주식회사 | Moving picture coding method |
KR20050027111A (en) * | 2002-07-16 | 2005-03-17 | 톰슨 라이센싱 에스.에이. | Interleaving of base and enhancement layers for hd-dvd |
AU2003253190A1 (en) | 2002-09-27 | 2004-04-19 | Koninklijke Philips Electronics N.V. | Scalable video encoding |
MXPA05008405A (en) * | 2003-02-18 | 2005-10-05 | Nokia Corp | Picture decoding method. |
JP2007525072A (en) * | 2003-06-25 | 2007-08-30 | トムソン ライセンシング | Method and apparatus for weighted prediction estimation using permuted frame differences |
WO2005032138A1 (en) | 2003-09-29 | 2005-04-07 | Koninklijke Philips Electronics, N.V. | System and method for combining advanced data partitioning and fine granularity scalability for efficient spatio-temporal-snr scalability video coding and streaming |
FI115589B (en) | 2003-10-14 | 2005-05-31 | Nokia Corp | Encoding and decoding redundant images |
US20050201471A1 (en) * | 2004-02-13 | 2005-09-15 | Nokia Corporation | Picture decoding method |
KR101407748B1 (en) * | 2004-10-13 | 2014-06-17 | 톰슨 라이센싱 | Method and apparatus for complexity scalable video encoding and decoding |
KR100679022B1 (en) | 2004-10-18 | 2007-02-05 | 삼성전자주식회사 | Video coding and decoding method using inter-layer filtering, video ecoder and decoder |
KR20060043115A (en) | 2004-10-26 | 2006-05-15 | 엘지전자 주식회사 | Method and apparatus for encoding/decoding video signal using base layer |
KR100703734B1 (en) | 2004-12-03 | 2007-04-05 | 삼성전자주식회사 | Method and apparatus for encoding/decoding multi-layer video using DCT upsampling |
DE602006011865D1 (en) * | 2005-03-10 | 2010-03-11 | Qualcomm Inc | DECODER ARCHITECTURE FOR OPTIMIZED ERROR MANAGEMENT IN MULTIMEDIA FLOWS |
US7756206B2 (en) * | 2005-04-13 | 2010-07-13 | Nokia Corporation | FGS identification in scalable video coding |
EP1869888B1 (en) * | 2005-04-13 | 2016-07-06 | Nokia Technologies Oy | Method, device and system for effectively coding and decoding of video data |
KR20060122663A (en) * | 2005-05-26 | 2006-11-30 | 엘지전자 주식회사 | Method for transmitting and using picture information in a video signal encoding/decoding |
WO2006132509A1 (en) | 2005-06-10 | 2006-12-14 | Samsung Electronics Co., Ltd. | Multilayer-based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction |
US7617436B2 (en) * | 2005-08-02 | 2009-11-10 | Nokia Corporation | Method, device, and system for forward channel error recovery in video sequence transmission over packet-based network |
KR100746011B1 (en) * | 2005-08-24 | 2007-08-06 | 삼성전자주식회사 | Method for enhancing performance of residual prediction, video encoder, and video decoder using it |
US9113147B2 (en) * | 2005-09-27 | 2015-08-18 | Qualcomm Incorporated | Scalability techniques based on content information |
KR20070038396A (en) | 2005-10-05 | 2007-04-10 | 엘지전자 주식회사 | Method for encoding and decoding video signal |
AU2006298012B2 (en) | 2005-10-05 | 2009-11-12 | Lg Electronics Inc. | Method for decoding a video signal |
KR100891663B1 (en) | 2005-10-05 | 2009-04-02 | 엘지전자 주식회사 | Method for decoding and encoding a video signal |
US20070086521A1 (en) * | 2005-10-11 | 2007-04-19 | Nokia Corporation | Efficient decoded picture buffer management for scalable video coding |
US8170116B2 (en) * | 2006-03-27 | 2012-05-01 | Nokia Corporation | Reference picture marking in scalable video encoding and decoding |
US8358704B2 (en) * | 2006-04-04 | 2013-01-22 | Qualcomm Incorporated | Frame level multimedia decoding with frame information table |
2006
- 2006-07-21 KR KR1020060068314A patent/KR20070038396A/en unknown
- 2006-09-29 KR KR1020060095950A patent/KR100883594B1/en active IP Right Grant
- 2006-09-29 KR KR1020060095951A patent/KR20070038418A/en not_active Application Discontinuation
- 2006-09-29 KR KR1020060095952A patent/KR100886193B1/en active IP Right Grant
- 2006-10-02 CN CN2006800370939A patent/CN101352044B/en active Active
- 2006-10-02 WO PCT/KR2006/003978 patent/WO2007040335A1/en active Application Filing
- 2006-10-02 ES ES06812190.4T patent/ES2539935T3/en active Active
- 2006-10-02 EP EP15162555.5A patent/EP2924997B1/en active Active
- 2006-10-02 BR BRPI0616860A patent/BRPI0616860B8/en active IP Right Grant
- 2006-10-02 US US11/992,956 patent/US7773675B2/en active Active
- 2006-10-02 JP JP2008534437A patent/JP4851528B2/en active Active
- 2006-10-02 RU RU2008117444/09A patent/RU2008117444A/en not_active Application Discontinuation
- 2006-10-02 US US11/992,958 patent/US7869501B2/en active Active
- 2006-10-04 CN CNA2006800371217A patent/CN101283601A/en active Pending
- 2006-10-04 WO PCT/KR2006/003996 patent/WO2007040342A1/en active Application Filing
- 2006-10-04 WO PCT/KR2006/003997 patent/WO2007040343A1/en active Application Filing
- 2006-10-05 US US11/543,032 patent/US20070195879A1/en not_active Abandoned
- 2006-10-05 US US11/543,080 patent/US20070086518A1/en not_active Abandoned
- 2006-10-05 US US11/543,031 patent/US20070253486A1/en not_active Abandoned
- 2006-10-09 CN CNA2006800370943A patent/CN101283595A/en active Pending
2008
- 2008-12-18 KR KR1020080129354A patent/KR101102399B1/en active IP Right Grant
2009
- 2009-03-06 HK HK09102175.4A patent/HK1124710A1/en not_active IP Right Cessation
- 2009-03-26 RU RU2009111142/07A patent/RU2508608C2/en active
2010
- 2010-01-19 US US12/656,128 patent/US8422551B2/en active Active
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080025399A1 (en) * | 2006-07-26 | 2008-01-31 | Canon Kabushiki Kaisha | Method and device for image compression, telecommunications system comprising such a device and program implementing such a method |
US20110255591A1 (en) * | 2010-04-09 | 2011-10-20 | Lg Electronics Inc. | Method and apparatus for processing video data |
US8861594B2 (en) * | 2010-04-09 | 2014-10-14 | Lg Electronics Inc. | Method and apparatus for processing video data |
US9426472B2 (en) | 2010-04-09 | 2016-08-23 | Lg Electronics Inc. | Method and apparatus for processing video data |
US9918106B2 (en) | 2010-04-09 | 2018-03-13 | Lg Electronics Inc. | Method and apparatus for processing video data |
US10321156B2 (en) | 2010-04-09 | 2019-06-11 | Lg Electronics Inc. | Method and apparatus for processing video data |
US10841612B2 (en) | 2010-04-09 | 2020-11-17 | Lg Electronics Inc. | Method and apparatus for processing video data |
US11197026B2 (en) | 2010-04-09 | 2021-12-07 | Lg Electronics Inc. | Method and apparatus for processing video data |
US20220060749A1 (en) * | 2010-04-09 | 2022-02-24 | Lg Electronics Inc. | Method and apparatus for processing video data |
US11695954B2 (en) * | 2010-04-09 | 2023-07-04 | Lg Electronics Inc. | Method and apparatus for processing video data |
US10027957B2 (en) | 2011-01-12 | 2018-07-17 | Sun Patent Trust | Methods and apparatuses for encoding and decoding video using multiple reference pictures |
US10841573B2 (en) | 2011-02-08 | 2020-11-17 | Sun Patent Trust | Methods and apparatuses for encoding and decoding video using multiple reference pictures |
Similar Documents
Publication | Title |
---|---|
US20070253486A1 (en) | Method and apparatus for reconstructing an image block |
US8625670B2 (en) | Method and apparatus for encoding and decoding image |
KR100888963B1 (en) | Method for scalably encoding and decoding video signal |
KR100886191B1 (en) | Method for decoding an image block |
US8532187B2 (en) | Method and apparatus for scalably encoding/decoding video signal |
US20060120450A1 (en) | Method and apparatus for multi-layered video encoding and decoding |
JP5061179B2 (en) | Illumination change compensation motion prediction encoding and decoding method and apparatus |
US8948243B2 (en) | Image encoding device, image decoding device, image encoding method, and image decoding method |
US20060104354A1 (en) | Multi-layered intra-prediction method and video coding method and apparatus using the same |
US20090103613A1 (en) | Method for decoding video signal encoded using inter-layer prediction |
KR20060105409A (en) | Method for scalably encoding and decoding video signal |
KR20060122671A (en) | Method for scalably encoding and decoding video signal |
EP1932363B1 (en) | Method and apparatus for reconstructing image blocks |
EP1601205A1 (en) | Moving image encoding/decoding apparatus and method |
US20070160136A1 (en) | Method and apparatus for motion prediction using inverse motion transform |
CN116634169A (en) | Processing of multiple image size and conformance windows for reference image resampling in video coding |
US20100303151A1 (en) | Method for decoding video signal encoded using inter-layer prediction |
EP1817911A1 (en) | Method and apparatus for multi-layered video encoding and decoding |
KR20120025111A (en) | Intra prediction encoding/decoding apparatus and method capable of skipping prediction mode information using the characteristics of reference pixels |
Suzuki et al. | Block-based reduced resolution inter frame coding with template matching prediction |
KR20070014956A (en) | Method for encoding and decoding video signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: LG ELECTRONICS, INC., KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: JEON, BYEONG-MOON; PARK, JI-HO; PARK, SEUNG-WOOK. Reel/Frame: 019247/0494. Effective date: 20070430 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |