US20070195879A1 - Method and apparatus for encoding a motion vector - Google Patents


Info

Publication number
US20070195879A1
Authority
US
United States
Prior art keywords
motion vector
block
picture
fgs
picture layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/543,032
Inventor
Byeong-Moon Jeon
Ji-Ho Park
Seung-Wook Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority to US11/543,032
Assigned to LG ELECTRONICS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JEON, BYEONG-MOON; PARK, JI-HO; PARK, SEUNG-WOOK
Publication of US20070195879A1

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/34 Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/176 Adaptive coding in which the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N19/187 Adaptive coding in which the coding unit is a scalable video layer
    • H04N19/29 Video object coding involving scalability at the object level, e.g. video object layer [VOL]
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/52 Processing of motion vectors by predictive encoding
    • H04N19/523 Motion estimation or motion compensation with sub-pixel accuracy
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/70 Syntax aspects related to video coding, e.g. related to compression standards
    • H04N5/145 Movement estimation

Abstract

In one embodiment, a motion vector difference associated with a current block in a first picture layer is obtained based on motion vector information for a block in a second picture layer. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. A bit stream representing the first and second picture layers is generated such that the first picture layer includes information on the obtained motion vector difference.

Description

    DOMESTIC PRIORITY INFORMATION
  • This application claims the benefit of priority on U.S. Provisional Application No. 60/723,474 filed Oct. 5, 2005; the entire content of which is hereby incorporated by reference.
  • FOREIGN PRIORITY INFORMATION
  • This application claims the benefit of priority on Korean Patent Application No. 10-2006-0068314 filed Jul. 21, 2006; the entire content of which is hereby incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates, in general, to methods of encoding and decoding video signals.
  • 2. Description of the Related Art
  • A Scalable Video Codec (SVC) encodes a video signal at the highest image quality while enabling image quality to be maintained to some degree even when only part of the entire picture (frame) sequence produced by the encoding (a sequence of frames intermittently selected from the entire sequence) is decoded.
  • Even if only a partial sequence of a picture sequence encoded by a scalable scheme is received and processed, image quality can be maintained to some degree. However, as the bit rate decreases, the deterioration in image quality becomes serious. To address this, a separate sub-picture sequence for a low bit rate, for example a picture sequence with a smaller screen size and/or a picture sequence having fewer frames per second, can be provided.
  • This sub-picture sequence is called a base layer, and a main picture sequence is called an enhanced layer. The base layer and the enhanced layer are obtained by encoding the same video signal source, and redundant information exists in the video signals of the two layers. Therefore, when the base layer is provided, an interlayer prediction method can be used to improve coding efficiency.
  • Further, in order to improve the Signal-to-Noise Ratio (SNR) of a base layer, that is, to enhance image quality, an enhanced layer may be used, which is called SNR scalability, Fine Granular Scalability (FGS), or progressive refinement.
  • According to FGS, transform coefficients corresponding to respective pixels, for example Discrete Cosine Transform (DCT) coefficients, are separately encoded into a base layer and an enhanced layer, depending on the resolution of the bit representation. When the transmission environment is poor, transmission of the enhanced layer is omitted, so that the bit rate can be decreased at the cost of a deterioration in the quality of the decoded image. That is, FGS compensates for loss occurring during the quantization process, and provides high flexibility, enabling the bit rate to be controlled in response to the transmission or decoding environment.
  • For example, if a transform coefficient is quantized using a quantization step size (that is, QP), for example QP=32, to generate a base layer, a first FGS enhanced layer is generated by quantizing the difference between the original transform coefficient and the transform coefficient obtained by inversely quantizing the quantized coefficient of the base layer, using a quantization step size corresponding to quality higher than QP=32, for example QP=26. Similarly, a second FGS enhanced layer is generated by quantizing the difference between the original transform coefficient and the transform coefficient obtained by inversely quantizing the sum of the quantized coefficients of the base layer and the first FGS enhanced layer, using a still smaller quantization step size, for example QP=20.
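  • The layered refinement just described can be illustrated with a small numeric sketch. The Python fragment below is only an illustration of the idea, assuming a simple uniform quantizer with made-up step sizes; it does not reproduce the H.264/SVC QP-to-step-size mapping or the integer transform.

        import numpy as np

        def quantize(value, step):
            # Simple uniform quantizer (illustrative only).
            return np.round(value / step)

        def dequantize(level, step):
            return level * step

        # Illustrative step sizes standing in for QP=32, QP=26 and QP=20.
        step_base, step_fgs1, step_fgs2 = 32.0, 16.0, 8.0

        coeff = 150.0                                       # an original transform coefficient
        base_level = quantize(coeff, step_base)             # FGS base layer
        base_rec = dequantize(base_level, step_base)

        fgs1_level = quantize(coeff - base_rec, step_fgs1)  # first FGS enhanced layer
        fgs1_rec = base_rec + dequantize(fgs1_level, step_fgs1)

        fgs2_level = quantize(coeff - fgs1_rec, step_fgs2)  # second FGS enhanced layer
        fgs2_rec = fgs1_rec + dequantize(fgs2_level, step_fgs2)

        print(base_rec, fgs1_rec, fgs2_rec)  # 160.0 144.0 152.0: error shrinks from 10 to 6 to 2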
  • However, in a conventional FGS coding method, only a quality base layer, that is, a picture of an FGS base layer, is used to generate an FGS enhanced layer. This means that temporally redundant information existing between temporally adjacent quality enhanced layers, that is, pictures of an FGS enhanced layer, is not used.
  • In order to exploit such temporal redundancy in the FGS enhanced layer, a method of using an adjacent quality enhanced layer as well as the quality base layer to predict a current FGS enhanced layer has been proposed. This method is called Progressive FGS (PFGS), and the structure of the PFGS scheme is shown in FIG. 1.
  • As shown in FIG. 1, an adaptive reference block formation function receives a base layer collocated block Xb and an FGS enhanced layer reference block Re, and produces an adapted reference block Ra for use in reconstructing a current image block X in a current frame of the FGS layer that is being reconstructed. The collocated block Xb is the block in the base layer that is collocated with respect to the current image block X. Namely, the collocated block Xb is in a base layer frame temporally coincident with the current frame of the FGS layer, and the collocated block Xb is in the same relative position within the base layer frame as the current image block X in the current frame of the FGS layer.
  • The collocated block Xb includes a reference picture index that indicates a reference base layer frame. The collocated block Xb also includes a motion vector. As shown in FIG. 1, the motion vector points to a base layer reference block Rb in the reference base layer frame. The FGS enhanced layer reference block Re is a collocated block with respect to the base layer reference block Rb. Namely, the frame in the FGS layer temporally coincident with the reference frame in the base layer indicated by the reference picture index of the collocated block Xb serves as the FGS enhanced layer reference frame. Further, the motion vector of the collocated block Xb is used as the motion vector in the FGS enhanced layer reference frame to obtain the FGS enhanced layer reference block Re.
  • The FGS enhanced layer reference block Re is a difference or error signal representing enhancement quality. As such, the adaptive reference block formation function adds the FGS enhanced layer reference block Re to the collocated block Xb at a transform coefficient level to obtain the adapted reference block Ra. Then, as shown in FIG. 1, a reconstruction function reconstructs the current image block X by combining an encoded block Rd for the current image block X with the adapted reference block Ra in the well-known manner.
  • However, the resolution of bit representation of an image may vary due to the difference between the quantization step sizes of the FGS base layer and the FGS enhanced layer, so that the motion vector of the FGS base layer collocated block Xb may not be identical to that of the FGS enhanced layer block X. This means that coding efficiency may be decreased.
  • SUMMARY OF THE INVENTION
  • The present invention relates to a method of encoding motion vector information.
  • In one embodiment, a motion vector difference associated with a current block in a first picture layer is obtained based on motion vector information for a block in a second picture layer. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. A bit stream representing the first and second picture layers is generated such that the first picture layer includes information on the obtained motion vector difference.
  • In one embodiment, the bit stream is generated such that the second picture layer includes the motion vector information.
  • In one embodiment, the motion vector information includes a motion vector associated with the block of the second picture layer.
  • In one embodiment, the motion vector difference is obtained by determining a motion vector prediction based on the motion vector information, and generating the motion vector difference based on the motion vector prediction.
  • In an embodiment, the motion vector information includes a motion vector associated with the block of the second picture layer, and the motion vector prediction is determined as equal to the motion vector associated with the block of the second picture layer.
  • The present invention also relates to an apparatus for encoding motion vector information.
  • One embodiment of the apparatus includes a first encoder obtaining a motion vector difference associated with a current block in a first picture layer based on motion vector information for a block in a second picture layer. The second picture layer has lower quality pictures than pictures in the first picture layer, and the block of the second picture layer is temporally associated with the current block in the first picture layer. The first encoder includes information on the motion vector difference in the first picture layer, and a multiplexer generates a bit stream representing the first and second picture layers.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other objects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 illustrates a progressive FGS structure for encoding the FGS enhanced layer of a current frame using the quality base layer of the current frame and the quality enhanced layer of another frame;
  • FIG. 2 illustrates a process of finely adjusting the motion vector of the FGS base layer of a current frame in the picture of the FGS enhanced layer of a reference frame to predict the FGS enhanced layer of the current frame according to an embodiment of the present invention;
  • FIG. 3 illustrates a process of searching the FGS enhanced layer picture of a reference frame for an FGS enhanced layer reference block for an arbitrary block in a current frame, independent of the motion vector of an FGS base layer of the arbitrary block according to another embodiment of the present invention;
  • FIG. 4 is a block diagram of an apparatus which encodes a video signal to which the present invention may be applied; and
  • FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream to which the present invention may be applied.
  • DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
  • Hereinafter, example embodiments of the present invention will be described in detail with reference to the attached drawings.
  • In an embodiment of the present invention, during the encoding process, the motion vector mv(Xb) of a Fine Granular Scalability (FGS) base layer collocated block Xb is finely adjusted to improve the coding efficiency of Progressive FGS (PFGS).
  • That is, the embodiment obtains the FGS enhanced layer reference frame for the FGS enhanced layer block X to be encoded as the FGS enhanced layer frame temporally coincident with the base layer reference frame for the base layer block Xb collocated with respect to the FGS enhanced layer block X. As will be appreciated, this base layer reference frame will be indicated in a reference picture index of the collocated block Xb; however, it is common for those skilled in the art to refer to the reference frame as being pointed to by the motion vector. Given the enhanced layer reference frame, a region (e.g., a partial region) of a picture is reconstructed from the FGS enhanced layer reference frame. This region includes the block indicated by the motion vector mv(Xb) for the base layer collocated block Xb. The region is searched to obtain the block having the smallest image difference with respect to the block X, that is, a block Re′, causing the Sum of Absolute Differences (SAD) to be minimized. The SAD is the sum of absolute differences between corresponding pixels of the two blocks, namely the block X to be coded or decoded and the selected block. Then, a motion vector mv(X) from the block X to the selected block is calculated.
  • In this case, in order to reduce the burden of the search, the search range can be limited to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb). For example, the search can be performed with respect only to the region extended by 1 pixel in every direction.
  • Further, the search resolution, that is, the unit by which the block X is moved to find a block having a minimum SAD, may be a pixel, a ½ pixel (half pel), or a ¼ pixel (quarter pel).
  • In particular, when a search is performed with respect only to the region extended by 1 pixel in every direction, and is performed on a pixel basis, the location at which SAD is minimized is selected from among 9 candidate locations, as shown in FIG. 2.
  • If the search range is limited in this way, the difference vector mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb), as shown in FIG. 2, is transmitted in the FGS enhanced layer. The FGS enhanced layer reference block associated with the obtained motion vector mv(X) is the enhanced layer reference block Re′. The block Re′ is used as a prediction block (or a predictor) for the block X to be decoded.
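  • As a rough illustration of this refinement search, the following sketch performs the integer-pel version of the search over the nine candidate positions and returns both mv(X) and the difference mvd_ref_fgs. All function and variable names are illustrative, half-pel and quarter-pel interpolation are omitted, and no attempt is made to match any reference software.

        import numpy as np

        def sad(a, b):
            # Sum of absolute differences between corresponding pixels of two blocks.
            return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

        def refine_mv(cur_block, ref_pic, mv_xb, x0, y0, search_range=1):
            """Search the +/- search_range pixel neighborhood of the block pointed to by
            mv(Xb) in the reconstructed FGS enhanced layer reference picture, pick the
            candidate with the smallest SAD, and return (mv_x, mvd_ref_fgs)."""
            h, w = cur_block.shape
            best = None
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):  # 9 candidates for range 1
                    mx, my = mv_xb[0] + dx, mv_xb[1] + dy
                    cand = ref_pic[y0 + my:y0 + my + h, x0 + mx:x0 + mx + w]
                    if cand.shape != cur_block.shape:
                        continue                                    # candidate outside the picture
                    cost = sad(cur_block, cand)
                    if best is None or cost < best[0]:
                        best = (cost, (mx, my))
            if best is None:
                return mv_xb, (0, 0)        # fall back to mv(Xb) if no candidate fits
            mv_x = best[1]
            mvd_ref_fgs = (mv_x[0] - mv_xb[0], mv_x[1] - mv_xb[1])
            return mv_x, mvd_ref_fgs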
  • In another embodiment of the present invention, in order to obtain an optimal motion vector mv_fgs for the FGS enhanced layer for the block X, that is, in order to generate the optimal predicted image of the FGS enhanced layer for the block X, motion estimation/prediction operations are performed independent of the motion vector mv(Xb) for the FGS base layer collocated block Xb corresponding to the block X, as shown in FIG. 3.
  • In this case, the FGS enhanced layer predicted image (FGS enhanced layer reference block) for the block X can be searched for in the reference frame indicated by the motion vector mv(Xb) (i.e., indicated by the reference picture index for the block Xb), or the reference block for the block X can be searched for in another frame. As with the embodiment of FIG. 2, the obtained FGS enhanced layer reference block associated with the motion vector mv(X) is the enhanced layer reference block Re′.
  • In the former case, there are advantages in that the frames in which the FGS enhanced layer reference block for the block X is to be searched for are limited to the reference frame indicated by the motion vector mv(Xb), so that the burden of encoding is reduced, and there is no need to transmit, for the block X, a reference index identifying the frame that includes the reference block.
  • In the latter case, there are disadvantages in that the number of frames, in which the reference block is to be searched for, increases, so that the burden of encoding increases, and a reference index for the frame, including a found reference block, must be additionally transmitted. But, there is an advantage in that the optimal predicted image of the FGS enhanced layer for the block X can be generated.
  • When a motion vector is encoded without change, a great number of bits are required. Since the motion vectors of neighboring blocks have a tendency to be highly correlated, respective motion vectors can be predicted from the motion vectors of surrounding blocks that have been previously encoded (immediate left, immediate upper and immediate upper-right blocks).
  • When a current motion vector mv is encoded, generally, the difference mvd between the current motion vector mv and a motion vector mvp, which is predicted from the motion vectors of surrounding blocks, is encoded and transmitted.
  • Therefore, the motion vector mv_fgs of the FGS enhanced layer for the block X that is obtained through an independent motion prediction operation is encoded by mvd_fgs=mv_fgs−mvp_fgs. In this case, the motion vector mvp_fgs, predicted and obtained from the surrounding blocks, can be implemented using the motion vector mvp, obtained when the motion vector mv(Xb) of the FGS base layer collocated block Xb is encoded, without change (e.g., mvp=mv(Xb)), or using a motion vector derived from the motion vector mvp (e.g., mvp=scaled version of mv(Xb)).
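  • A minimal sketch of this predictive coding of the FGS enhanced layer motion vector follows. The component-wise median predictor shown here is only one plausible choice for mvp_fgs (the paragraph above also allows reusing the base layer predictor mvp, or a vector derived from it); all function names are illustrative.

        def median_mv(mv_left, mv_up, mv_upright):
            # Component-wise median of the three previously encoded neighboring motion vectors.
            xs = sorted(v[0] for v in (mv_left, mv_up, mv_upright))
            ys = sorted(v[1] for v in (mv_left, mv_up, mv_upright))
            return (xs[1], ys[1])

        def encode_mvd_fgs(mv_fgs, mvp_fgs):
            # mvd_fgs = mv_fgs - mvp_fgs is what is written to the FGS enhanced layer stream.
            return (mv_fgs[0] - mvp_fgs[0], mv_fgs[1] - mvp_fgs[1])

        def decode_mv_fgs(mvd_fgs, mvp_fgs):
            # The decoder reverses the prediction: mv_fgs = mvd_fgs + mvp_fgs.
            return (mvd_fgs[0] + mvp_fgs[0], mvd_fgs[1] + mvp_fgs[1])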
  • If the number of motion vectors of the FGS base layer collocated block Xb corresponding to the block X is two, that is, if the block Xb is predicted using two reference frames, two pieces of data related to the encoding of the motion vector of the FGS enhanced layer for the block X are obtained. For example, in a first embodiment, the pieces of data are mvd_ref_fgs_l0/l1, and in a second embodiment, the pieces of data are mvd_fgs_l0/l1.
  • In the above embodiments, the motion vectors for macroblocks (or image blocks smaller than macroblocks) are calculated in relation to the FGS enhanced layer, and the calculated motion vectors are included in a macroblock layer within the FGS enhanced layer and transmitted to a decoder. However, in the conventional FGS enhanced layer, related information is defined on the basis of a slice level, and is not defined on the basis of a macroblock level, a sub-macroblock level, or sub-block level.
  • Therefore, in the present invention, in order to define, in the FGS enhanced layer, data related to the motion vectors calculated on the basis of a macroblock (or an image block smaller than a macroblock), syntax for a macroblock layer and/or an image block layer smaller than a macroblock layer is newly defined, for example progressive_refinement_macroblock_layer_in_scalable_extension( ) and progressive_refinement_mb (and/or sub_mb)_pred_in_scalable_extension( ). The calculated motion vectors are recorded in the newly defined syntax and then transmitted.
  • Meanwhile, the generation of the FGS enhanced layer is similar to a procedure of performing prediction between a base layer and an enhanced layer having different spatial resolutions in an intra base prediction mode, and generating residual data which is an image difference.
  • For example, if it is assumed that the block of the enhanced layer is X and the block of the base layer corresponding to the block X is Xb, the residual block obtained through intra base prediction is R=X−Xb. In this case, X can correspond to the block of a quality enhanced layer to be encoded, Xb can correspond to the block of a quality base layer, and R=X−Xb can correspond to residual data to be encoded in the FGS enhanced layer for the block X.
  • In another embodiment of the present invention, an intra mode prediction method is applied to the residual block R to reduce the amount of residual data to be encoded in the FGS enhanced layer. In order to perform intra mode prediction on the residual block R, the same mode information about the intra mode that is used in the base layer collocated block Xb corresponding to the block X is used.
  • A block Rd having a difference value of the residual data is obtained by applying the mode information, used in the block Xb, to the residual block R. Discrete Cosine Transform (DCT) is performed on the obtained block Rd, and the DCT results are quantized using a quantization step size set smaller than the quantization step size used when the FGS base layer data for the block Xb is generated, thus generating FGS enhanced layer data for the block X.
  • In a further embodiment, an adapted reference block Ra′ for the block X is generated as equal to the FGS enhanced layer reference block Re′. Further, the residual data R to be encoded in the FGS enhanced layer for the block X is set as R=X−Ra′, so that an intra mode prediction method is applied to the residual block R. It will be appreciated that in this embodiment, the enhanced layer reference block Re′, and therefore the adapted reference block Ra′, are reconstructed pictures and not at the transform coefficient level.
  • In this case, an intra mode applied to the residual block R is a DC mode based on the mean value of respective pixels in the block R. Further, if the block Re is generated by the methods according to embodiments of the present invention, information related to motion required to generate the block Re in the decoder must be included in the FGS enhanced layer data for the block X.
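  • A compact sketch of this residual refinement path is given below: form a residual against a prediction (Xb or Ra′ depending on the embodiment), apply an intra-mode prediction to the residual, transform, and quantize with a step size smaller than the base layer's. The whole-block DC mode is the only intra mode shown, and scipy's orthonormal DCT stands in for the codec's integer transform; names and parameters are illustrative.

        import numpy as np
        from scipy.fftpack import dct

        def dct2(block):
            # 2-D type-II DCT standing in for the codec's integer transform.
            return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

        def dc_predict_residual(residual):
            # DC-mode prediction of a residual block: remove its mean value.
            return residual - residual.mean()

        def encode_fgs_residual(X, prediction, q_step_enh):
            """prediction is Xb (intra base prediction) or Ra' depending on the embodiment;
            q_step_enh is smaller than the base layer quantization step size."""
            R = X.astype(float) - prediction.astype(float)
            Rd = dc_predict_residual(R)
            return np.round(dct2(Rd) / q_step_enh)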
  • FIG. 4 is a block diagram of an apparatus which encodes a video signal and to which the present invention may be applied.
  • The video signal encoding apparatus of FIG. 4 includes a base layer (BL) encoder 110 for performing motion prediction on an image signal, input as a frame sequence, using a predetermined method; performing DCT on the motion prediction results; quantizing the DCT results using a predetermined quantization step size; and generating base layer data. An FGS enhanced layer (FGS_EL) encoder 120 generates the FGS enhanced layer of a current frame using the motion information and the base layer data provided by the BL encoder 110, and the FGS enhanced layer data of a frame (for example, a previous frame) which is a reference for motion estimation for the current frame. A muxer 130 multiplexes the output data of the BL encoder 110 and the output data of the FGS_EL encoder 120 using a predetermined method, and outputs the multiplexed data.
  • The FGS_EL encoder 120 reconstructs the quality base layer of the reference frame (also called a FGS base layer picture), which is the reference for motion prediction for a current frame, from the base layer data provided by the BL encoder 110, and reconstructs the FGS enhanced layer picture of the reference frame using the FGS enhanced layer data of the reference frame and the reconstructed quality base layer of the reference frame.
  • In this case, the reference frame may be a frame indicated by the motion vector mv(Xb) of the FGS base layer collocated block Xb corresponding to the block X in the current frame.
  • When the reference frame is a frame previous to the current frame, the FGS enhanced layer picture of the reference frame may have been stored in a buffer in advance.
  • Thereafter, the FGS_EL encoder 120 searches the FGS enhanced layer picture of the reconstructed reference frame for an FGS enhanced layer reference image for the block X, that is, a reference block or predicted block Re′ in which an SAD with respect to the block X is minimized, and then calculates a motion vector mv(X) from the block X to the found reference block Re′.
  • The FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than a predetermined quantization step (quantization step size used when the BL encoder 110 generates the FGS base layer data for the block Xb), thus generating FGS enhanced layer data for the block X.
  • When the reference block is predicted, the FGS_EL encoder 120 may limit the search range to a region including predetermined pixels in horizontal and vertical directions around the block indicated by the motion vector mv(Xb) so as to reduce the burden of the search, as in the first embodiment of the present invention. In this case, the FGS_EL encoder 120 records the difference mvd_ref_fgs between the calculated motion vector mv(X) and the motion vector mv(Xb) in the FGS enhanced layer in association with the block X.
  • Further, as in the case of the above-described second embodiment of the present invention, the FGS_EL encoder 120 may perform a motion estimation operation independent of the motion vector mv(Xb) so as to obtain the optimal motion vector mv_fgs of the FGS enhanced layer for the block X, thus searching for a reference block Re′ having a minimum SAD with respect to the block X and calculating the motion vector mv_fgs from the block X to the found reference block Re′.
  • In this case, the FGS enhanced layer reference block for the block X may be searched for in the reference frame indicated by the motion vector mv(Xb), or a reference block for the block X may be searched for in a frame other than the reference frame.
  • The FGS_EL encoder 120 performs DCT on the difference between the block X and the found reference block Re′, and quantizes the DCT results using a quantization step size set smaller than the predetermined quantization step size; thus generating the FGS enhanced layer data for the block X.
  • Further, the FGS_EL encoder 120 records the difference mvd_fgs between the calculated motion vector mv_fgs and the motion vector mvp_fgs, predicted and obtained from surrounding blocks, in the FGS enhanced layer in association with the block X. That is, the FGS_EL encoder 120 records syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock), in the FGS enhanced layer.
  • When the reference block Re′ for the block X is searched for in a frame other than the reference frame indicated by the motion vector mv(Xb), information related to the motion vector may further include a reference index for a frame including the found reference block Re′.
  • The encoded data stream is transmitted to a decoding apparatus in a wired or wireless manner, or is transferred through a recording medium.
  • FIG. 5 is a block diagram of an apparatus which decodes an encoded data stream and to which the present invention may be applied. The decoding apparatus of FIG. 5 includes a demuxer 210 for separating a received data stream into a base layer stream and an enhanced layer stream; a base layer (BL) decoder 220 for decoding an input base layer stream using a preset method; and an FGS_EL decoder 230 for generating the FGS enhanced layer picture of a current frame using the motion information, the reconstructed quality base layer (or FGS base layer data) that are provided by the BL decoder 220, and the FGS enhanced layer stream.
  • The FGS_EL decoder 230 checks information about the block X in the current frame, that is, information related to a motion vector used for motion prediction for the block X, in the FGS enhanced layer stream.
  • When i) the FGS enhanced layer for the block X in the current frame is encoded on the basis of the FGS enhanced layer picture of another frame, and ii) is encoded using a block other than the block indicated by the motion vector mv(Xb) of the block Xb corresponding to the block X (that is, the FGS base layer block of the current frame) as a predicted block or a reference block, motion information indicating the other block is included in the FGS enhanced layer data of the current frame.
  • That is, in the above description, the FGS enhanced layer includes syntax for defining information related to the motion vector calculated on a block basis (a macroblock or an image block smaller than a macroblock). The information related to the motion vector may further include an index for the reference frame in which the FGS enhanced layer reference block for the block X is found (the reference frame including the reference block).
  • When motion information related to the block X in the current frame exists in the FGS enhanced layer of the current frame, the FGS_EL decoder 230 generates the FGS enhanced layer picture of the reference frame using the quality base layer of the reference frame (the FGS base layer picture reconstructed by the BL decoder 220 may be provided, or may be reconstructed from the FGS base layer data provided by the BL decoder 220), which is the reference for motion prediction for the current frame, and the FGS enhanced layer data of the reference frame. In this case, the reference frame may be a frame indicated by the motion vector mv(Xb) of the block Xb.
  • Further, the FGS enhanced layer of the reference frame may be encoded using an FGS enhanced layer picture of a different frame. In this case, a picture reconstructed from the different frame is used to reconstruct the reference frame. Further, when the reference frame is a frame previous to the current frame, the FGS enhanced layer picture may have been generated in advance and stored in a buffer.
  • Further, the FGS_EL decoder 230 obtains the FGS enhanced layer reference block Re′ for the block X from the FGS enhanced layer picture of the reference frame, using the motion information related to the block X.
  • In the above-described first embodiment of the present invention, the motion vector mv(X) from the block X to the reference block Re′ is obtained as the sum of the motion information mvd_ref_fgs, included in the FGS enhanced layer stream for the block X, and the motion vector mv(Xb) of the block Xb.
  • Further, in the second embodiment of the present invention, the motion vector mv(X) is obtained as the sum of the motion information mvd_fgs, included in the FGS enhanced layer stream for the block X, and the motion vector mvp_fgs, predicted and obtained from the surrounding blocks. In this case, the motion vector mvp_fgs may be implemented using the motion vector mvp, which is obtained at the time of calculating the motion vector mv(Xb) of the FGS base layer collocated block Xb without change, or using a motion vector derived from the motion vector mvp.
  • Thereafter, the FGS_EL decoder 230 performs inverse-quantization and inverse DCT on the FGS enhanced layer data for the block X, and adds the results of inverse quantization and inverse DCT to the obtained reference block Re′, thus generating the FGS enhanced layer picture for the block X.
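  • A simplified sketch of this last reconstruction step is shown below: dequantize and inverse-transform the FGS enhanced layer data for the block X, then add the reference block Re′ located by mv(X) in the reconstructed FGS enhanced layer reference picture. It assumes integer-pel motion, a scalar dequantizer, and scipy's inverse DCT in place of the codec's inverse integer transform; all names are illustrative.

        import numpy as np
        from scipy.fftpack import idct

        def idct2(levels):
            # 2-D inverse DCT standing in for the codec's inverse integer transform.
            return idct(idct(levels, axis=0, norm='ortho'), axis=1, norm='ortho')

        def reconstruct_block(levels, q_step_enh, ref_pic, mv_x, x0, y0, size=16):
            """Inverse-quantize and inverse-transform the FGS enhanced layer data for block X,
            then add the reference block Re' that mv(X) points to in ref_pic."""
            residual = idct2(levels * q_step_enh)
            mx, my = mv_x
            re = ref_pic[y0 + my:y0 + my + size, x0 + mx:x0 + mx + size].astype(float)
            return np.clip(re + residual, 0, 255)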
  • The above-described decoding apparatus may be mounted in a mobile communication terminal, or a device for reproducing recording media.
  • As described above, the present invention is advantageous in that it can efficiently perform motion estimation/prediction operations on an FGS enhanced layer picture when the FGS enhanced layer is encoded or decoded, and can efficiently transmit motion information required to reconstruct an FGS enhanced layer picture.
  • Although the example embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention.

Claims (16)

1. A method of encoding motion vector information, comprising:
obtaining a motion vector difference associated with a current block in a first picture layer based on motion vector information for a block in a second picture layer, the second picture layer having lower quality pictures than pictures in the first picture layer, and the block of the second picture layer being temporally associated with the current block in the first picture layer; and
generating a bit stream representing the first and second picture layers such that the first picture layer includes information on the obtained motion vector difference.
2. The method of claim 1, wherein the generating a bit stream step generates the bit stream such that the second picture layer includes the motion vector information.
3. The method of claim 2, wherein the motion vector information includes a motion vector associated with the block of the second picture layer.
4. The method of claim 1, wherein the motion vector information includes a motion vector associated with the block of the second picture layer.
5. The method of claim 1, wherein the obtaining step comprises:
determining a motion vector prediction based on the motion vector information; and
generating the motion vector difference based on the motion vector prediction.
6. The method of claim 5, wherein
the motion vector information includes a motion vector associated with the block of the second picture layer; and
the determining a motion vector prediction step determines the motion vector prediction equal to the motion vector associated with the block of the second picture layer.
7. The method of claim 5, wherein the obtaining step further comprises:
determining a motion vector for the current block in the first picture layer based on the motion vector information; and
the generating the motion vector difference step generates the motion vector difference based on the motion vector prediction and the motion vector for the current block.
8. The method of claim 7, wherein the generating the motion vector difference step generates the motion vector difference equal to the motion vector for the current block minus the motion vector prediction.
9. The method of claim 8, wherein
the motion vector information includes a motion vector associated with the block of the second picture layer; and
the determining a motion vector prediction step determines the motion vector prediction equal to the motion vector associated with the block of the second picture layer.
10. The method of claim 1, wherein the motion vector difference information indicates a motion vector difference of a one-quarter pixel or less.
11. The method of claim 1, wherein the motion vector difference information indicates a motion vector difference of a one-half pixel or less.
12. The method of claim 1, wherein the obtaining step comprises:
determining a motion vector for the current block in a first picture layer based on the motion vector information; and
determining the motion vector difference based on the motion vector for the current block.
13. The method of claim 12, wherein the motion vector for the current block points to a block in a reference picture for the current block.
14. The method of claim 13, wherein the reference picture is a picture in the first picture layer.
15. The method of claim 14, wherein the reference picture for the current block is temporally associated with a reference picture in the second picture layer, the reference picture in the second picture layer being a reference picture for the block in the second picture layer.
16. An apparatus for encoding motion vector information, comprising:
a first encoder obtaining a motion vector difference associated with a current block in a first picture layer based on motion vector information for a block in a second picture layer, the second picture layer having lower quality pictures than pictures in the first picture layer, and the block of the second picture layer being temporally associated with the current block in the first picture layer;
the first encoder including information on the motion vector difference in the first picture layer; and
a multiplexer generating a bit stream representing the first and second picture layers.
US11/543,032 2005-10-05 2006-10-05 Method and apparatus for encoding a motion vector Abandoned US20070195879A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/543,032 US20070195879A1 (en) 2005-10-05 2006-10-05 Method and apparatus for encoding a motion vector

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US72347405P 2005-10-05 2005-10-05
KR10-2006-0068314 2006-07-21
KR1020060068314A KR20070038396A (en) 2005-10-05 2006-07-21 Method for encoding and decoding video signal
US11/543,032 US20070195879A1 (en) 2005-10-05 2006-10-05 Method and apparatus for encoding a motion vector

Publications (1)

Publication Number Publication Date
US20070195879A1 true US20070195879A1 (en) 2007-08-23

Family

ID=38159769

Family Applications (6)

Application Number Title Priority Date Filing Date
US11/992,956 Active 2027-03-15 US7773675B2 (en) 2005-10-05 2006-10-02 Method for decoding a video signal using a quality base reference picture
US11/992,958 Active 2027-04-11 US7869501B2 (en) 2005-10-05 2006-10-02 Method for decoding a video signal to mark a picture as a reference picture
US11/543,032 Abandoned US20070195879A1 (en) 2005-10-05 2006-10-05 Method and apparatus for encoding a motion vection
US11/543,080 Abandoned US20070086518A1 (en) 2005-10-05 2006-10-05 Method and apparatus for generating a motion vector
US11/543,031 Abandoned US20070253486A1 (en) 2005-10-05 2006-10-05 Method and apparatus for reconstructing an image block
US12/656,128 Active 2027-03-12 US8422551B2 (en) 2005-10-05 2010-01-19 Method and apparatus for managing a reference picture

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/992,956 Active 2027-03-15 US7773675B2 (en) 2005-10-05 2006-10-02 Method for decoding a video signal using a quality base reference picture
US11/992,958 Active 2027-04-11 US7869501B2 (en) 2005-10-05 2006-10-02 Method for decoding a video signal to mark a picture as a reference picture

Family Applications After (3)

Application Number Title Priority Date Filing Date
US11/543,080 Abandoned US20070086518A1 (en) 2005-10-05 2006-10-05 Method and apparatus for generating a motion vector
US11/543,031 Abandoned US20070253486A1 (en) 2005-10-05 2006-10-05 Method and apparatus for reconstructing an image block
US12/656,128 Active 2027-03-12 US8422551B2 (en) 2005-10-05 2010-01-19 Method and apparatus for managing a reference picture

Country Status (10)

Country Link
US (6) US7773675B2 (en)
EP (1) EP2924997B1 (en)
JP (1) JP4851528B2 (en)
KR (5) KR20070038396A (en)
CN (3) CN101352044B (en)
BR (1) BRPI0616860B8 (en)
ES (1) ES2539935T3 (en)
HK (1) HK1124710A1 (en)
RU (2) RU2008117444A (en)
WO (3) WO2007040335A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025399A1 (en) * 2006-07-26 2008-01-31 Canon Kabushiki Kaisha Method and device for image compression, telecommunications system comprising such a device and program implementing such a method
US20110222605A1 (en) * 2009-09-22 2011-09-15 Yoshiichiro Kashiwagi Image coding apparatus, image decoding apparatus, image coding method, and image decoding method
US9854275B2 (en) 2011-06-25 2017-12-26 Qualcomm Incorporated Quantization in video coding
US10027957B2 (en) 2011-01-12 2018-07-17 Sun Patent Trust Methods and apparatuses for encoding and decoding video using multiple reference pictures
US10841573B2 (en) 2011-02-08 2020-11-17 Sun Patent Trust Methods and apparatuses for encoding and decoding video using multiple reference pictures

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1628484B1 (en) * 2004-08-18 2019-04-03 STMicroelectronics Srl Method for transcoding compressed video signals, related apparatus and computer program product therefor
KR20070038396A (en) * 2005-10-05 2007-04-10 엘지전자 주식회사 Method for encoding and decoding video signal
WO2007080223A1 (en) * 2006-01-10 2007-07-19 Nokia Corporation Buffering of decoded reference pictures
EP2123049B1 (en) 2007-01-18 2016-12-28 Nokia Technologies Oy Carriage of sei messages in rtp payload format
WO2009032255A2 (en) * 2007-09-04 2009-03-12 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
JP5406465B2 (en) * 2008-04-24 2014-02-05 株式会社Nttドコモ Image predictive encoding device, image predictive encoding method, image predictive encoding program, image predictive decoding device, image predictive decoding method, and image predictive decoding program
US20090279614A1 (en) * 2008-05-10 2009-11-12 Samsung Electronics Co., Ltd. Apparatus and method for managing reference frame buffer in layered video coding
EP2152009A1 (en) * 2008-08-06 2010-02-10 Thomson Licensing Method for predicting a lost or damaged block of an enhanced spatial layer frame and SVC-decoder adapted therefore
KR101220175B1 (en) * 2008-12-08 2013-01-11 연세대학교 원주산학협력단 Method for generating and processing hierarchical pes packet for digital satellite broadcasting based on svc video
JP5700970B2 (en) * 2009-07-30 2015-04-15 トムソン ライセンシングThomson Licensing Decoding method of encoded data stream representing image sequence and encoding method of image sequence
US9578325B2 (en) * 2010-01-13 2017-02-21 Texas Instruments Incorporated Drift reduction for quality scalable video coding
CN105472386B (en) * 2010-04-09 2018-09-18 Lg电子株式会社 The method and apparatus for handling video data
WO2012042884A1 (en) * 2010-09-29 2012-04-05 パナソニック株式会社 Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit
KR101959091B1 (en) 2010-09-30 2019-03-15 선 페이턴트 트러스트 Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit
JP5781313B2 (en) * 2011-01-12 2015-09-16 株式会社Nttドコモ Image prediction coding method, image prediction coding device, image prediction coding program, image prediction decoding method, image prediction decoding device, and image prediction decoding program
US8834507B2 (en) 2011-05-17 2014-09-16 Warsaw Orthopedic, Inc. Dilation instruments and methods
US9232233B2 (en) * 2011-07-01 2016-01-05 Apple Inc. Adaptive configuration of reference frame buffer based on camera and background motion
KR20130005167A (en) * 2011-07-05 2013-01-15 Samsung Electronics Co., Ltd. Image signal decoding device and decoding method thereof
SE538057C2 (en) 2011-09-09 2016-02-23 KT Corp A method for deriving a temporal prediction motion vector and a device using the method.
US9420307B2 (en) 2011-09-23 2016-08-16 Qualcomm Incorporated Coding reference pictures for a reference picture set
KR20140057301A (en) 2011-10-17 2014-05-12 Kabushiki Kaisha Toshiba Encoding method and decoding method
WO2013062174A1 (en) * 2011-10-26 2013-05-02 Kyung Hee University Industry-Academic Cooperation Foundation Method for managing memory, and device for decoding video using same
KR101835625B1 (en) 2011-10-26 2018-04-19 Intellectual Discovery Co., Ltd. Method and apparatus for scalable video coding using inter prediction mode
US9264717B2 (en) 2011-10-31 2016-02-16 Qualcomm Incorporated Random access with advanced decoded picture buffer (DPB) management in video coding
US9172737B2 (en) * 2012-07-30 2015-10-27 New York University Streamloading content, such as video content for example, by both downloading enhancement layers of the content and streaming a base layer of the content
US20140092977A1 (en) * 2012-09-28 2014-04-03 Nokia Corporation Apparatus, a Method and a Computer Program for Video Coding and Decoding
CN108337524A (en) * 2012-09-28 2018-07-27 Sony Corporation Encoding device, coding method, decoding device and coding/decoding method
CN104871540B (en) * 2012-12-14 2019-04-02 LG Electronics Inc. Method of encoding video, method of decoding video, and device using the same
US20150312581A1 (en) * 2012-12-26 2015-10-29 Sony Corporation Image processing device and method
WO2014163793A2 (en) * 2013-03-11 2014-10-09 Dolby Laboratories Licensing Corporation Distribution of multi-format high dynamic range video using layered coding
KR20140121315A (en) * 2013-04-04 2014-10-15 Electronics and Telecommunications Research Institute Method and apparatus for image encoding and decoding based on multi-layer using reference picture list
KR102177831B1 (en) * 2013-04-05 2020-11-11 Samsung Electronics Co., Ltd. Method and apparatus for decoding multi-layer video, and method and apparatus for encoding multi-layer video
US20160100180A1 (en) * 2013-04-17 2016-04-07 Wilus Institute Of Standards And Technology Inc. Method and apparatus for processing video signal
US20150341634A1 (en) * 2013-10-16 2015-11-26 Intel Corporation Method, apparatus and system to select audio-video data for streaming
JP2015188249A (en) * 2015-06-03 2015-10-29 Toshiba Corporation Video coding device and video coding method
KR101643102B1 (en) * 2015-07-15 2016-08-12 Minkonet Corporation Method of Supplying Object State Transmitting Type Broadcasting Service and Broadcast Playing
US10555002B2 (en) 2016-01-21 2020-02-04 Intel Corporation Long term reference picture coding
JP2017069987A (en) * 2017-01-18 2017-04-06 Toshiba Corporation Moving picture encoder and moving picture encoding method
CN109344849B (en) * 2018-07-27 2022-03-11 Guangdong University of Technology Complex network image identification method based on structure balance theory
CN112995670B (en) * 2021-05-10 2021-10-08 Zhejiang Smart Video Security Innovation Center Co., Ltd. Method and device for sequentially executing inter-frame and intra-frame joint prediction coding and decoding

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998031151A1 (en) * 1997-01-10 1998-07-16 Matsushita Electric Industrial Co., Ltd. Image processing method, image processing device, and data recording medium
US7245663B2 (en) * 1999-07-06 2007-07-17 Koninklijke Philips Electronics N.V. Method and apparatus for improved efficiency in transmission of fine granular scalable selective enhanced images
EP1161839A1 (en) 1999-12-28 2001-12-12 Koninklijke Philips Electronics N.V. Snr scalable video encoding method and corresponding decoding method
US20020126759A1 (en) * 2001-01-10 2002-09-12 Wen-Hsiao Peng Method and apparatus for providing prediction mode fine granularity scalability
US20020118743A1 (en) * 2001-02-28 2002-08-29 Hong Jiang Method, apparatus and system for multiple-layer scalable video coding
JP2003299103A (en) * 2002-03-29 2003-10-17 Toshiba Corp Moving picture encoding and decoding processes and devices thereof
US6944222B2 (en) 2002-03-04 2005-09-13 Koninklijke Philips Electronics N.V. Efficiency FGST framework employing higher quality reference frames
KR100488018B1 (en) * 2002-05-03 2005-05-06 LG Electronics Inc. Moving picture coding method
KR20050027111A (en) * 2002-07-16 2005-03-17 Thomson Licensing S.A. Interleaving of base and enhancement layers for HD-DVD
AU2003253190A1 (en) 2002-09-27 2004-04-19 Koninklijke Philips Electronics N.V. Scalable video encoding
MXPA05008405A (en) * 2003-02-18 2005-10-05 Nokia Corp Picture decoding method.
JP4532476B2 (en) * 2003-06-03 2010-08-25 NXP B.V. Secure card terminal
JP2007525072A (en) * 2003-06-25 2007-08-30 Thomson Licensing Method and apparatus for weighted prediction estimation using permuted frame differences
US8064520B2 (en) * 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
WO2005032138A1 (en) 2003-09-29 2005-04-07 Koninklijke Philips Electronics, N.V. System and method for combining advanced data partitioning and fine granularity scalability for efficient spatio-temporal-snr scalability video coding and streaming
FI115589B (en) 2003-10-14 2005-05-31 Nokia Corp Encoding and decoding redundant images
US20050201471A1 (en) * 2004-02-13 2005-09-15 Nokia Corporation Picture decoding method
KR101407748B1 (en) * 2004-10-13 2014-06-17 Thomson Licensing Method and apparatus for complexity scalable video encoding and decoding
KR100679022B1 (en) 2004-10-18 2007-02-05 Samsung Electronics Co., Ltd. Video coding and decoding method using inter-layer filtering, video encoder and decoder
KR20060043115A (en) 2004-10-26 2006-05-15 LG Electronics Inc. Method and apparatus for encoding/decoding video signal using base layer
KR100703734B1 (en) 2004-12-03 2007-04-05 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
DE602006011865D1 (en) * 2005-03-10 2010-03-11 Qualcomm Inc DECODER ARCHITECTURE FOR OPTIMIZED ERROR MANAGEMENT IN MULTIMEDIA FLOWS
US7756206B2 (en) * 2005-04-13 2010-07-13 Nokia Corporation FGS identification in scalable video coding
EP1869888B1 (en) * 2005-04-13 2016-07-06 Nokia Technologies Oy Method, device and system for effectively coding and decoding of video data
KR20060122663A (en) * 2005-05-26 2006-11-30 LG Electronics Inc. Method for transmitting and using picture information in a video signal encoding/decoding
WO2006132509A1 (en) 2005-06-10 2006-12-14 Samsung Electronics Co., Ltd. Multilayer-based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
US7617436B2 (en) * 2005-08-02 2009-11-10 Nokia Corporation Method, device, and system for forward channel error recovery in video sequence transmission over packet-based network
KR100746011B1 (en) * 2005-08-24 2007-08-06 Samsung Electronics Co., Ltd. Method for enhancing performance of residual prediction, video encoder, and video decoder using it
US9113147B2 (en) * 2005-09-27 2015-08-18 Qualcomm Incorporated Scalability techniques based on content information
KR20070038396A (en) 2005-10-05 LG Electronics Inc. Method for encoding and decoding video signal
AU2006298012B2 (en) 2005-10-05 2009-11-12 LG Electronics Inc. Method for decoding a video signal
KR100891663B1 (en) 2005-10-05 2009-04-02 LG Electronics Inc. Method for decoding and encoding a video signal
US20070086521A1 (en) * 2005-10-11 2007-04-19 Nokia Corporation Efficient decoded picture buffer management for scalable video coding
US8170116B2 (en) * 2006-03-27 2012-05-01 Nokia Corporation Reference picture marking in scalable video encoding and decoding
US8358704B2 (en) * 2006-04-04 2013-01-22 Qualcomm Incorporated Frame level multimedia decoding with frame information table

Patent Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5973739A (en) * 1992-03-27 1999-10-26 British Telecommunications Public Limited Company Layered video coder
US5818531A (en) * 1995-10-27 1998-10-06 Kabushiki Kaisha Toshiba Video encoding and decoding apparatus
US20030007557A1 (en) * 1996-02-07 2003-01-09 Sharp Kabushiki Kaisha Motion picture coding and decoding apparatus
US6330280B1 (en) * 1996-11-08 2001-12-11 Sony Corporation Method and apparatus for decoding enhancement and base layer image signals using a predicted image signal
US6339618B1 (en) * 1997-01-08 2002-01-15 At&T Corp. Mesh node motion coding to enable object based functionalities within a motion compensated transform video coder
US6535559B2 (en) * 1997-04-01 2003-03-18 Sony Corporation Image encoder, image encoding method, image decoder, image decoding method, and distribution media
US6292512B1 (en) * 1998-07-06 2001-09-18 U.S. Philips Corporation Scalable video coding system
US6498865B1 (en) * 1999-02-11 2002-12-24 PacketVideo Corp. Method and device for control and compatible delivery of digitally compressed visual data in a heterogeneous communication network
US6765965B1 (en) * 1999-04-22 2004-07-20 Renesas Technology Corp. Motion vector detecting apparatus
US6639943B1 (en) * 1999-11-23 2003-10-28 Koninklijke Philips Electronics N.V. Hybrid temporal-SNR fine granular scalability video coding
US6614936B1 (en) * 1999-12-03 2003-09-02 Microsoft Corporation System and method for robust video coding using progressive fine-granularity scalable (PFGS) coding
US6510177B1 (en) * 2000-03-24 2003-01-21 Microsoft Corporation System and method for layered video coding enhancement
US20020037046A1 (en) * 2000-09-22 2002-03-28 Philips Electronics North America Corporation Totally embedded FGS video coding with motion compensation
US6940905B2 (en) * 2000-09-22 2005-09-06 Koninklijke Philips Electronics N.V. Double-loop motion-compensation fine granular scalability
US6907070B2 (en) * 2000-12-15 2005-06-14 Microsoft Corporation Drifting reduction and macroblock-based control in progressive fine granularity scalable video coding
US20020118742A1 (en) * 2001-02-26 2002-08-29 Philips Electronics North America Corporation. Prediction structures for enhancement layer in fine granular scalability video coding
US20040252900A1 (en) * 2001-10-26 2004-12-16 Wilhelmus Hendrikus Alfonsus Bruls Spatial scalable compression
US20030156646A1 (en) * 2001-12-17 2003-08-21 Microsoft Corporation Multi-resolution motion estimation and compensation
US20030223643A1 (en) * 2002-05-28 2003-12-04 Koninklijke Philips Electronics N.V. Efficiency FGST framework employing higher quality reference frames
US20030223493A1 (en) * 2002-05-29 2003-12-04 Koninklijke Philips Electronics N.V. Entropy constrained scalar quantizer for a laplace-markov source
US20040001635A1 (en) * 2002-06-27 2004-01-01 Koninklijke Philips Electronics N.V. FGS decoder based on quality estimated at the decoder
US20080044094A1 (en) * 2002-07-18 2008-02-21 Jeon Byeong M Method of determining motion vectors and a reference picture index for a current block in a picture to be decoded
US7072394B2 (en) * 2002-08-27 2006-07-04 National Chiao Tung University Architecture and method for fine granularity scalable video coding
US20050011543A1 (en) * 2003-06-27 2005-01-20 Haught John Christian Process for recovering a dry cleaning solvent from a mixture by modifying the mixture
US20050111543A1 (en) * 2003-11-24 2005-05-26 Lg Electronics Inc. Apparatus and method for processing video for implementing signal to noise ratio scalability
US20050185714A1 (en) * 2004-02-24 2005-08-25 Chia-Wen Lin Method and apparatus for MPEG-4 FGS performance enhancement
US20050195900A1 (en) * 2004-03-04 2005-09-08 Samsung Electronics Co., Ltd. Video encoding and decoding methods and systems for video streaming service
US20050195896A1 (en) * 2004-03-08 2005-09-08 National Chiao Tung University Architecture for stack robust fine granularity scalability
US20060013308A1 (en) * 2004-07-15 2006-01-19 Samsung Electronics Co., Ltd. Method and apparatus for scalably encoding and decoding color video
US20060083309A1 (en) * 2004-10-15 2006-04-20 Heiko Schwarz Apparatus and method for generating a coded video sequence by using an intermediate layer motion data prediction
US20110038421A1 (en) * 2004-10-15 2011-02-17 Heiko Schwarz Apparatus and Method for Generating a Coded Video Sequence by Using an Intermediate Layer Motion Data Prediction
US20060256863A1 (en) * 2005-04-13 2006-11-16 Nokia Corporation Method, device and system for enhanced and effective fine granularity scalability (FGS) coding and decoding of video data
US20060233242A1 (en) * 2005-04-13 2006-10-19 Nokia Corporation Coding of frame number in scalable video coding
US20070053442A1 (en) * 2005-08-25 2007-03-08 Nokia Corporation Separation markers in fine granularity scalable video coding
US20070160136A1 (en) * 2006-01-12 2007-07-12 Samsung Electronics Co., Ltd. Method and apparatus for motion prediction using inverse motion transform
US20080031345A1 (en) * 2006-07-10 2008-02-07 Segall Christopher A Methods and Systems for Combining Layers in a Multi-Layer Bitstream
US20100296000A1 (en) * 2009-05-25 2010-11-25 Canon Kabushiki Kaisha Method and device for transmitting video data

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025399A1 (en) * 2006-07-26 2008-01-31 Canon Kabushiki Kaisha Method and device for image compression, telecommunications system comprising such a device and program implementing such a method
US20110222605A1 (en) * 2009-09-22 2011-09-15 Yoshiichiro Kashiwagi Image coding apparatus, image decoding apparatus, image coding method, and image decoding method
US8446958B2 (en) * 2009-09-22 2013-05-21 Panasonic Corporation Image coding apparatus, image decoding apparatus, image coding method, and image decoding method
US10027957B2 (en) 2011-01-12 2018-07-17 Sun Patent Trust Methods and apparatuses for encoding and decoding video using multiple reference pictures
US10841573B2 (en) 2011-02-08 2020-11-17 Sun Patent Trust Methods and apparatuses for encoding and decoding video using multiple reference pictures
US9854275B2 (en) 2011-06-25 2017-12-26 Qualcomm Incorporated Quantization in video coding

Also Published As

Publication number Publication date
KR100886193B1 (en) 2009-02-27
JP2009512268A (en) 2009-03-19
WO2007040335A1 (en) 2007-04-12
US20090225866A1 (en) 2009-09-10
US20100135385A1 (en) 2010-06-03
KR20090017460A (en) 2009-02-18
US20070086518A1 (en) 2007-04-19
HK1124710A1 (en) 2009-07-17
CN101283601A (en) 2008-10-08
CN101352044A (en) 2009-01-21
EP2924997A2 (en) 2015-09-30
EP2924997A3 (en) 2015-12-23
US20070253486A1 (en) 2007-11-01
US8422551B2 (en) 2013-04-16
CN101283595A (en) 2008-10-08
WO2007040342A1 (en) 2007-04-12
BRPI0616860A2 (en) 2011-07-05
CN101352044B (en) 2013-03-06
KR20070038417A (en) 2007-04-10
KR20070038419A (en) 2007-04-10
KR101102399B1 (en) 2012-01-05
WO2007040343A1 (en) 2007-04-12
RU2508608C2 (en) 2014-02-27
BRPI0616860B1 (en) 2020-05-12
KR100883594B1 (en) 2009-02-13
RU2008117444A (en) 2009-11-10
ES2539935T3 (en) 2015-07-07
BRPI0616860B8 (en) 2020-07-07
US7869501B2 (en) 2011-01-11
US20090147857A1 (en) 2009-06-11
KR20070038396A (en) 2007-04-10
US7773675B2 (en) 2010-08-10
EP2924997B1 (en) 2020-08-19
JP4851528B2 (en) 2012-01-11
RU2009111142A (en) 2010-10-10
KR20070038418A (en) 2007-04-10

Similar Documents

Publication Publication Date Title
US20070195879A1 (en) Method and apparatus for encoding a motion vector
US8625670B2 (en) Method and apparatus for encoding and decoding image
KR100888963B1 (en) Method for scalably encoding and decoding video signal
KR100886191B1 (en) Method for decoding an image block
US8532187B2 (en) Method and apparatus for scalably encoding/decoding video signal
US7899115B2 (en) Method for scalably encoding and decoding video signal
JP5061179B2 (en) Illumination change compensation motion prediction encoding and decoding method and apparatus
US20060120450A1 (en) Method and apparatus for multi-layered video encoding and decoding
US8948243B2 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
US20090103613A1 (en) Method for Decoding Video Signal Encoded Using Inter-Layer Prediction
KR20060105409A (en) Method for scalably encoding and decoding video signal
KR20060122671A (en) Method for scalably encoding and decoding video signal
EP1932363B1 (en) Method and apparatus for reconstructing image blocks
EP1601205A1 (en) Moving image encoding/decoding apparatus and method
CN116634169A (en) Processing of multiple image size and conformance windows for reference image resampling in video coding
US20100303151A1 (en) Method for decoding video signal encoded using inter-layer prediction
KR20120025111A (en) Intra prediction encoding/decoding apparatus and method capable of skipping prediction mode information using the characteristics of reference pixels
Suzuki et al. Block-based reduced resolution inter frame coding with template matching prediction
KR20070014956A (en) Method for encoding and decoding video signal

Legal Events

Date Code Title Description
AS Assignment

Owner name: LG ELECTRONICS, INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JEON, BYEONG-MOON;PARK, JI-HO;PARK, SEUNG-WOOK;REEL/FRAME:019247/0487

Effective date: 20070430

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION