USRE41180E1 - Method for encoding and decoding moving picture signals - Google Patents

Method for encoding and decoding moving picture signals

Info

Publication number
USRE41180E1
Authority
US
United States
Prior art keywords
subframe
frame
encoding
encoded
position information
Prior art date
Legal status
Expired - Lifetime
Application number
US11/108,883
Inventor
Yutaka Machida
Current Assignee
Panasonic Corp
Original Assignee
Panasonic Corp
Priority date
Filing date
Publication date
Application filed by Panasonic Corp
Priority to US11/108,883
Application granted
Publication of USRE41180E1
Anticipated expiration
Expired - Lifetime

Classifications

    • H04N19/68: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience involving the insertion of resynchronisation markers into the bitstream
    • H04N19/174: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/89: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Abstract

A method for encoding and decoding digital moving picture signals which can decode subframes appropriately in relation to time if a part of a bit stream is missing or an error occurs in the bit stream, and which can suppress degradation of a reproduced picture if decoding of a subframe including a picture in motion in relation to time becomes unfeasible. In this method, information for one frame is encoded according to a spatial hierarchy of a frame, subframes and blocks. A subframe time number and a subframe space number are attached to the identifier of each subframe, so that appropriate decoding of the subframes resumes immediately after a troubled subframe if an error occurs. The subframe identifiers are placed at a certain interval in the bit stream so that a subframe including a block which is in motion and difficult to encode is given a smaller size, thereby suppressing degradation of the reproduced picture if decoding of that subframe becomes unfeasible.

Description

This application is a continuation of Reissue Application Ser. No. 09/925,423, filed on Aug. 10, 2001, now U.S. Pat. No. RE38,726, which is a Reissue application of application Ser. No. 08/594,565, filed Jan. 31, 1996, and which issued as U.S. Pat. No. 5,937,095.
Notice: more than one reissue application has been filed for the reissue of U.S. Pat. No. 5,937,095. The reissue applications are the present reissue application Ser. No. 11/108,883 filed on Apr. 19, 2005, and related reissue application Ser. Nos. 10/662,949 filed on Sep. 16, 2003, 11/108,884 filed on Apr. 19, 2005 and 11/108,885 filed on Apr. 19, 2005, and the original reissue application Ser. No. 09/925,423, filed Aug. 10, 2001, now Pat. No. Re 38,726. The present reissue application and the related reissue applications are all continuation applications of application Ser. No. 09/925,423.
BACKGROUND OF THE INVENTION
(1) Field of the Invention
The present invention relates to a method for encoding and decoding digital moving picture signals for use in TV phones, TV conferences and the like.
(2) Description of the Prior Art
In a general method for encoding digital moving picture signals, a frame of an inputted moving picture is divided into plural blocks each composed of N×M pixels, and processes of motion detection, prediction, orthogonal transform, quantization, variable length coding, etc. are conducted on each block.
In a general method for decoding digital motion picture signals, blocks each composed of N×M pixels are regenerated in a reverse procedure, that is, through processes of variable length decoding, reverse quantization, reverse orthogonal transform, motion compensation, etc.
The above general encoding method and decoding method for encoding and decoding digital moving picture signals enable removal of redundancy contained in moving picture signals, and efficient communication and storage of a moving picture with less information.
In the general encoding method and decoding method for encoding and decoding digital moving picture signals, the processes are conducted on each pixel block, as stated above. In general, a set of pixel blocks forms a subframe and a set of subframes forms a frame; these are the units processed in the general encoding and decoding method.
Hereinafter, encoding and decoding of each block, subframe and frame will be described by way of an example of a general encoding and decoding method for encoding and decoding digital moving picture signals, with reference to ITU-T Recommendation H.261 of March 1993 (hereinafter referred to simply as H.261).
H.261 defines an encoding method and a decoding method for encoding and decoding luminance signals and color difference signals, separately, of digital moving picture signals. However, description will be made of only the luminance signals, for the sake of convenience. Basically, the encoding method and decoding method for encoding and decoding the luminance signals are not different from those for the color difference signals.
As shown in FIG. 1, one frame 101 of digital moving picture signals is composed of 352×288 pixels according to H.261. The frame 101 is divided into twelve subframes 102 called GOBs (Groups of Blocks), each composed of 176×48 pixels (hereinafter, the subframe in the description of the prior art will be referred to as a GOB). Further, the GOB 102 (subframe) is divided into thirty-three blocks 103 called macro blocks, each composed of 16×16 pixels.
The encoding method according to H.261 defines that encoded information for one frame corresponds to a spatial hierarchical structure made up of the frame 101, GOBs 102 and macro blocks 103 described above, as shown in FIG. 2.
In FIG. 2, each part enclosed in a rectangle shows encoded information, and the number of coding bits is shown under each of the rectangles. In FIG. 2, arrows show linkages of the encoded information. A series of encoded moving picture signals such as this is called a bit stream 104.
In the bit stream 104 according to H.261 shown in FIG. 2, a part including all encoded information for one macro block 103 is called a macro block layer 103S, a part including all encoded information for one GOB 102 is called a GOB layer 102S, and a part including all encoded information for one frame 101 is called a frame layer 101S.
Meanings of the encoded information in each of the layers shown in FIG. 2 are given below:
Frame Layer 101S
PSC (20 bits): a frame identifier 105; a unique code by which a decoding method can always be identified, expressed as “0000 0000 0000 0001 0000”;
TR (5 bits): a frame number 106; indicating a time position in which this frame 101 should be displayed;
PTYPE (6 bits): frame type information 107; various information about the frame 101;
PEI (1 bit): extension data insertion information 108; a flag representing presence of following PSPARE 109;
PSPARE (8 bits): extension data 109.
GOB Layer 102S (Subframe)
GBSC (16 bits): a GOB identifier 110; a unique code by which a decoding method can always be identified, expressed as “0000 0000 0000 0001”;
GN (4 bits): a GOB number 111; indicating a spatial position of this GOB 102 within the frame 101;
GQUANT (5 bits): quantization characteristic information 112; indicating a quantization characteristic when a macro block 103 in the GOB 102 is encoded;
GEI (1 bit): extension data insertion information 113; a flag representing presence of following GSPARE 114;
GSPARE (8 bits): extension data 114.
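To make the fixed-length part of this layer structure concrete, the following is a minimal parsing sketch. It is not part of the patent or of H.261 itself: the BitReader helper, the function names and the dictionary layout are assumptions made for illustration, and the variable length macro block data that follows each GOB header is left unparsed.

```python
# Illustrative sketch only: unpack the fixed-length header fields of the
# frame layer and GOB layer described above. Field widths follow the text;
# the BitReader helper and the function names are assumptions.

class BitReader:
    """Reads bits MSB-first from a bytes object."""

    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # current bit position

    def read(self, nbits: int) -> int:
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value


PSC = 0b0000_0000_0000_0001_0000   # 20-bit frame identifier
GBSC = 0b0000_0000_0000_0001       # 16-bit GOB identifier


def parse_frame_header(r: BitReader) -> dict:
    assert r.read(20) == PSC, "frame layer synchronization not established"
    header = {"TR": r.read(5), "PTYPE": r.read(6), "PSPARE": []}
    while r.read(1):                 # PEI: one more PSPARE byte follows
        header["PSPARE"].append(r.read(8))
    return header


def parse_gob_header(r: BitReader) -> dict:
    assert r.read(16) == GBSC, "GOB layer synchronization not established"
    header = {"GN": r.read(4), "GQUANT": r.read(5), "GSPARE": []}
    while r.read(1):                 # GEI: one more GSPARE byte follows
        header["GSPARE"].append(r.read(8))
    return header
```

The while loops model PEI and GEI: each set flag announces one more spare byte followed by another flag.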
Incidentally, the encoded information 115 of the macro block layer, which is the lowest hierarchy in FIG. 2, is generated in the encoding method of motion detection, prediction, orthogonal transform, quantization, variable length coding, etc., as described before, and its coding bit number is not fixed. The number of coding bits of the macro block layer 103S, in general, increases if the spatial level of pixels included in the macro block 103 changes largely or the level of pixels at the same spatial positions in the macro block 103 changes largely in time. Such a macro block 103 is hereinafter referred to as a macro block 103 which is difficult to be encoded.
To the contrary, if the level of pixels included in the macro block 103 is steady in relation to space and time, the number of coding bits of the macro block layer 103S remarkably decreases, or sometimes becomes zero. Such a macro block 103 is hereinafter referred to as a macro block 103 which is easy to be encoded.
In the decoding method according to H.261, the PSC 105, which is an identifier of the frame layer 101S, is first found out from the bit stream 104. Incidentally, when a decodable code has been successfully found out, it is said that synchronization is established. When the PSC 105 is found out from the bit stream and synchronization of the frame layer 101S is established, it can be identified that the bit stream 104 until the next PSC 105 appears is encoded information for one frame. Further, the time position in which the frame 101 composed of 352×288 pixels, obtained by decoding the bit stream 104 for that one frame, should be displayed can be obtained by examining the frame number 106 following the PSC 105.
After the establishment of the frame layer, a GBSC 110, which is an identifier of the GOB layer 102S, is found out from the following bit stream 104 in the decoding method according to H.261. When synchronization of the GOB layer 102S is established, it can be identified that the bit stream 104 until the next GBSC 110 appears is encoded information for one GOB 102. Further, the spatial position within a frame 101 in which the GOB 102 composed of 176×48 pixels, obtained by decoding the bit stream 104 for that one GOB 102, should be placed can be obtained by examining a GN 111, which is the GOB number following the GBSC 110.
In the decoding method according to H.261, the bit stream 104 of a following macro block layer 103S is decoded after the establishment of the GOB layer 102S. The decoding method of the macro block layer 103S is a procedure to regenerate a macro block 103 composed of 16×16 pixels in processes of variable length decoding, reverse quantization, reverse orthogonal transform, motion compensation, etc., as described before. It should be noted here that the macro block layer 103S has no unique code by which a decoding method can always be identified, unlike the PSC 105 or GBSC 110, and the encoded information of each macro block is composed of variable length codes whose bit length is not defined in advance.
As shown in FIG. 3, in the GOB (subframe) layer 102S, the encoded information from the first macro block 115 1 to the thirty-third macro block 115 33 is expressed as a series of variable length codes without a unique code. If decoding of the macro block encoded information is initiated from the point indicated by A in FIG. 3, and successively conducted in the order of the first, the second, . . . the nth, . . . the thirty-third macro blocks, it is possible to regenerate all the macro blocks 103 in the GOB layer 102S. However, if the decoding of the macro block encoded information is initiated from the point indicated by B or C in FIG. 3, it is impossible to identify the point from which the encoded information 115 of one macro block starts, which leads to a failure to establish synchronization. In that case, decoding and regenerating all the macro blocks 103 becomes unfeasible until the next GBSC 110 appears. In other words, the GBSC 110 also represents a starting point of decoding the macro block layer 103S.
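The consequence of this structure can be pictured with a small sketch (an illustration, not part of the patent): once synchronization is lost inside the macro block data, the only recovery is to scan bit by bit for the next 16-bit GBSC, and everything skipped over is lost.

```python
# Illustrative sketch: after synchronization is lost, a decoder can only scan
# for the next GBSC; the macro block data skipped over cannot be regenerated.
# Representing the bit stream as a '0'/'1' string is an assumption made to
# keep the example short.

GBSC = "0000000000000001"  # 16-bit GOB identifier


def find_next_gbsc(bits: str, start: int) -> int:
    """Return the bit index of the next GBSC at or after `start`, or -1."""
    return bits.find(GBSC, start)


# Example: an error is detected at bit 5; decoding can only resume at the
# GBSC that begins at bit 12, so the rest of the damaged GOB is discarded.
stream = "101010101010" + GBSC + "0110"
print(find_next_gbsc(stream, 5))   # -> 12
```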
Finally, in the decoding method according to H.261, the GOB 102, which is a set of regenerated macro blocks 103, is placed in the spatial position within a frame 101 directed by the GN 111, and the frame 101, which is a set of the regenerated GOBs 102, is placed in the time position directed by the TR 106.
As above, it is possible to decode one frame 101 of a digital moving picture correctly in relation to space and time according to H.261.
However, the above general method for encoding and decoding digital moving picture signals has a drawback that if a part of a bit stream 104 is lacking or an error occurs therein, it might be impossible to accurately decode all subframes (GOBs) 102 in relation to time until synchronization of the next frame layer 101S is established.
The reason for the above is that the only codes which can be identified at all times in the bit stream 104 are the PSC 105, which is a frame identifier, and the GBSC 110, which is a subframe identifier, in the general decoding method. If a part of the bit stream 104 is lacking or an error occurs therein, it is impossible to recover synchronization of the decoding until the next GBSC 110 appears, so that the decoding becomes unfeasible. Even when the next GBSC 110 appears, the bit stream 104 of that subframe layer 102S cannot be correctly decoded in relation to time. This will be understood from FIG. 4.
FIG. 4 shows an example where the fifth GOB 102 5 in the nth frame 101 n through the sixth GOB 102 6 in the (n+1)th frame 101 n+1 cannot be decoded in relation to time due to lacking portions or errors of the bit stream 104 occurring in a burst. In this example, not only the PSC 105 corresponding to the (n+1)th frame in relation to time but also the following TR 106 are missing or in error. It is therefore possible to correctly decode the GOB 102 7 in relation to space by establishing synchronization from the GBSC 110 corresponding to the seventh GOB 102 7 in the (n+1)th frame 101 n+1 in relation to time and decoding the following GN 111, but impossible to specify whether this GOB 102 7 is positioned in the nth frame or in the (n+1)th frame in relation to time.
In terms of decoding of the eighth GOB 102 8 through the twelfth GOB 102 12 in the (n+1)th frame in relation to time, it is likewise impossible to specify whether these GOBs 102 are positioned in the nth frame or in the (n+1)th frame in relation to time.
In consequence, if a part of the bit stream 104 is missing or an error occurs therein, it becomes impossible to correctly decode all GOBs 102 in relation to time until synchronization of the next frame layer 101S is established.
Further, the general method for encoding and decoding digital moving picture signals has another drawback that if a GOB 102 including a picture in motion in relation to time cannot be decoded, the picture quality of the reproduced picture is largely degraded.
This problem will be described in more detail with reference to FIG. 5. FIG. 5 shows one frame including decoded signals of a moving picture, where a figure is moving in the center of the frame. In FIG. 5, the part moving in relation to time is indicated by slanting lines, and the remaining part is a background which is still in relation to time. A scene like this is common in TV conferences, TV telephones and the like.
Referring to FIG. 5, consider that any one of the first GOB 102 1 through the fourth GOB 102 4 cannot be decoded. The first through fourth GOBs 102 1 through 102 4 include a picture that is still in relation to time. If the second GOB 102 2 cannot be decoded, for example, a skillful operation is conducted to substitute the second GOB 102 2 of the present frame 101 with the second GOB 102 2 of the preceding frame 101 -1 in the decoding. With this operation, degradation of the picture quality in the second GOB 102 2 of the present frame 101 may be hardly detected.
However, it is a problem if the fifth through twelfth GOBs 102 5 through 102 12 shown in FIG. 5 cannot be decoded. The fifth through twelfth GOBs 102 5 through 102 12 include a picture moving in relation to time. This means, for example, that the picture in the ninth GOB 102 9 of the preceding frame 101 -1 is largely different from the ninth GOB 102 9 of the present frame 101 in relation to time. If the decoding of the ninth GOB 102 9 is unfeasible, degradation of the picture quality of the ninth GOB 102 9 of the present frame 101 is obviously detected even if the skillful operation mentioned above is conducted in the decoding.
Accordingly, if decoding of a GOB 102 including a picture moving in relation to time becomes unfeasible, the quality of the reproduced picture is largely degraded.
SUMMARY OF THE INVENTION
In the light of the above problems, an object of the present invention is to provide a method for encoding and decoding digital moving picture signals, which can appropriately decode subframes (GOBs) following a subframe in trouble in relation to time if a part of a bit stream is missing or an error occurs in the bit stream.
Another object of the present invention is to provide a method for encoding and decoding digital moving picture signals, which can suppress degradation of a reproduced picture to a small extent if decoding of a subframe (GOB) including a picture in motion in relation to time becomes unfeasible.
To accomplish the first object, the present invention is featured in that, in the method for encoding and decoding digital moving picture signals of this invention, time position information representing an order of displaying a subframe is attached to an identifier of the subframe by which the subframe is identified.
According to the method for encoding and decoding digital moving picture signals of this invention, time position information representing an order of displaying a subframe is attached to an identifier used to identify the subframe, and the identifier of the subframe is encoded. It is therefore possible, by using the time position information representing the order of displaying each of the subframes attached to the identifier used to identify the subframe, to decode the subframes following a subframe in trouble appropriately in relation to time even if a part of a bit stream is missing or an error occurs in the bit stream.
To accomplish the second object, the present invention is featured in that in the method for encoding and decoding digital moving picture signals of this invention, the number of blocks included in a subframe is varied according to a sum of quantities of generated information of the blocks included in the subframe so that each of all the subframes included in the frame has an equal sum of quantities of the generated information of the blocks included in the subframe.
According to the method for encoding and decoding digital moving picture signals of this invention, the number of blocks included in a subframe is varied according to a sum of quantities of generated information of the blocks included in the subframe so that each of all the subframes included in the frame has an equal sum of quantities of the generated information of the blocks included in the subframe. In consequence, a spatial size of each subframe is not fixed. A subframe including a block having a large number of coding bits is in a smaller size, whereas a subframe including a block having a small number of coding bits is in a larger size. It is therefore possible to suppress degradation of a reproduced picture even if decoding of a subframe becomes unfeasible since a subframe including a block which includes a motion in relation to time and is difficult to be encoded is in a smaller size in relation to space.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows units to be encoded in a general encoding method for encoding moving picture signals;
FIG. 2 shows a bit stream generated in the general encoding method for encoding moving picture signals;
FIG. 3 shows a GOB layer in the bit stream in FIG. 2 generated in the general encoding method for encoding moving picture signals;
FIG. 4 illustrates an effect of a lack or an error of a part of a bit stream occurring in the general encoding and decoding method for encoding and decoding moving picture signals;
FIG. 5 illustrates an effect of a lack or an error of a part of a bit stream occurring in the general encoding and decoding method for encoding and decoding moving picture signals;
FIG. 6 shows a bit stream generated in a method for encoding digital moving picture signals according to first and second embodiments of this invention;
FIG. 7 is a flowchart illustrating the method for decoding digital moving picture signals according to the first embodiment of this invention;
FIG. 8 illustrates the method for encoding digital moving picture signals according to the second embodiment of this invention; and
FIG. 9 shows a structure of subframes according to the second embodiment of this invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, description will be made of embodiments according to the present invention referring to the drawings.
A method for encoding and decoding digital moving picture signals according to a first embodiment will now be described, which may correctly decode a subframe as a unit in relation to time even if a part of a bit stream is missing or an error occurs therein.
In the encoding method according to this embodiment, one frame of digital motion picture signals is composed of, for example, 352×288 pixels. The frame is divided into twelve subframes each composed of, for example, 176×48 pixels. Further, each subframe is divided into thirty-three blocks 13 each composed of, for example, 16×16 pixels.
In the encoding method according to this embodiment, encoded information for one frame corresponds to a spatial hierarchical structure made up of a frame 11, subframes 12 and blocks 13, generating a bit stream 14 as shown, for example, in FIG. 6.
Meanings of encoded information of each layer shown in FIG. 6 are given below:
Frame Layer 11S
FSC (20 bits): a frame identifier 15; a unique code by which a decoding method can always be identified, expressed as “0000 0000 0000 0001 0000”;
Subframe Layer 12S
SFSC (16 bits): a subframe identifier 16; a unique code by which a decoding method can always be identified, expressed as “0000 0000 0000 0001”;
SFNT (5 bits): a subframe time number 17; indicating a time position in which this subframe 12 should be displayed;
SFNS (4 bits): a subframe space number 18; indicating a spatial position in which the subframe 12 should be displayed;
SFQUANT (5 bits): quantization characteristic information 19; representing a quantization characteristic when a block 13 in the subframe 12 is encoded.
Incidentally, the encoded information 20 in the block layer 13S, which is the lowest hierarchy in FIG. 6, is generated in an encoding method of motion detection, prediction, orthogonal transform, quantization, variable length coding, etc., whose coding bit number is not fixed.
Referring now to FIG. 7, a decoding method according to this embodiment will be described. First, an FSC 15, which is an identifier of a frame layer 11S, is found out from a bit stream 14 to establish synchronization of the frame layer 11S.
After the establishment of synchronization of the frame layer 11S, an SFSC 16, which is an identifier of a subframe layer 12S, is found out from the following bit stream 14 to establish synchronization of the subframe layer 12S. Then a subframe time number SFNT 17 and a subframe space number SFNS 18 following the SFSC 16 are examined. Next, a bit stream 14 of a block layer 13S is decoded. A method for decoding this block layer 13S is a procedure to regenerate the block in processes of, for example, variable length decoding, reverse quantization, reverse orthogonal transform, motion compensation, etc. Finally, the subframe 12, which is a set of the regenerated blocks 13, is placed in the time and space positions instructed by the SFNT 17 and the SFNS 18. If synchronization of the decoding is lost due to a lack of a part of the bit stream 14 or an error therein, a seek for the SFSC 16, which is an identifier of the subframe layer 12S, is started. A lack of a portion of, or an error in, the bit stream 14 can be detected from, for example, the fact that a decoded value exceeds a range specified in advance or an unexpected code word appears when the bit stream 14 is decoded. When the SFSC 16 is detected and synchronization of the subframe layer 12S is established, the SFNT 17 and the SFNS 18 are examined as stated above, the block layer 13S is decoded and regenerated, and the subframe 12, which is a set of the regenerated blocks 13, is placed in the time and space positions instructed by the SFNT 17 and the SFNS 18.
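As a concrete illustration of this flow (FIG. 7), the sketch below abstracts the bit-level syntax into a token list; it is an assumption made for illustration, not the patent's reference decoder. Each token stands for one subframe layer and carries its own SFNT and SFNS, which is why decoding resumes correctly in time and space at the very next SFSC after an error.

```python
# Illustrative sketch of the decoding flow of FIG. 7. The bit-level syntax is
# abstracted into a list of dicts, each playing the role of one subframe layer
# that carries its own time number (SFNT) and space number (SFNS).

def decode_sequence(subframe_layers):
    """Return {(sfnt, sfns): blocks} for every subframe that decodes cleanly."""
    placed = {}
    for layer in subframe_layers:
        if layer.get("corrupted"):
            # A lack of, or an error in, this part of the stream is detected
            # (e.g. a decoded value out of range); seek the next SFSC, i.e.
            # simply move on to the next subframe layer.
            continue
        blocks = layer["blocks"]     # stands in for variable length block decoding
        # Place the regenerated blocks by the subframe's own time and space numbers.
        placed[(layer["SFNT"], layer["SFNS"])] = blocks
    return placed


# Usage: subframe (n, 5) is lost, yet (n, 6) and (n+1, 0) still land in the
# correct frames because each carries its own SFNT.
n = 7
stream = [
    {"SFNT": n,     "SFNS": 4, "blocks": "..."},
    {"SFNT": n,     "SFNS": 5, "blocks": "...", "corrupted": True},
    {"SFNT": n,     "SFNS": 6, "blocks": "..."},
    {"SFNT": n + 1, "SFNS": 0, "blocks": "..."},
]
print(sorted(decode_sequence(stream)))   # -> [(7, 4), (7, 6), (8, 0)]
```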
According to the first embodiment of this invention, if a part of the bit stream 14 is lacking or an error occurs in the bit stream 14, synchronization of the decoding is lost and the decoding becomes unfeasible, but correct decoding becomes possible immediately after the subframe 12 in trouble.
Although the above first embodiment has been described by way of an example, the frame 11, the subframe 12 and the block 13 may alternatively be of different sizes and shapes. In addition, the bit length of each piece of encoded information may be different from that of the above encoded information, or the frame layer may be omitted.
According to a second embodiment of this invention, description will now be made of a method for encoding digital moving picture signals which can suppress degradation of a reproduced picture to a small extent if a subframe including a picture moving in relation to time cannot be decoded. Incidentally, it is possible here to employ a decoding method similar to that of the first embodiment.
In the encoding method of this embodiment, one frame 11 of digital moving picture signals is composed of, for example, 352×288 pixels. The frame 11 is divided into blocks each composed of 16×16 pixels. In other words, one frame 11 is composed of 22 blocks×18 block lines 21. The block line 21 corresponds to the subframe 12 mentioned above.
In the encoding method of this embodiment, each block 13′ is encoded from the uppermost block line 21, as shown in FIG. 8, to generate encoded information. The encoded information of each block 13′ is generated in an encoding method of, for example, motion detection, prediction, orthogonal transform, quantization, variable length coding, etc., the number of coding bits of which is not fixed. More specifically, the number of coding bits of a block 13′ which is difficult to be encoded is large, whereas the number of coding bits of a block 13′ which is easy to be encoded is small. In the encoding method of this embodiment, a set of blocks 13 or 13′ composes a subframe 12 (or a block line 21) which is a unit of encoding, but the number of blocks 13 or 13′ included in one subframe 12 or 12′ is not fixed.
A manner of generating a bit stream 14 in the encoding method of this embodiment and a structure of a subframe layer 12S will now be described with reference to FIG. 6. When one frame is encoded, an identifier of a frame layer is encoded, and an FSC 15 is placed in a bit stream 14. Next, the identifier of the subframe layer 12S, a time number and a space number of that subframe, and a quantization characteristic of that subframe are encoded together, and code words of an SFSC 16, an SFNT 17, an SFNS 18 and an SFQUANT 19 are placed in the bit stream 14. At the same time, the block coding bit number integrated value B-add is set to zero. Following that, a block 13 is encoded and the encoded information of the block 13, composed of variable length codes, is placed in the bit stream 14. Concurrently, the coding bit number B of this block 13 is added to B-add. Namely, an equation, B-add=B-add+B, is computed. Similarly, blocks 13 are encoded successively, the encoded information 20 of each block 13 is placed in the bit stream 14, and the calculation B-add=B-add+B is repeated each time. If B-add exceeds a subframe interval SFd when encoding of a certain block 13 is completed, an identifier of the subframe, a time number and a space number of that subframe and a quantization characteristic of that subframe are encoded, and code words of an SFSC 16, an SFNT 17, an SFNS 18 and an SFQUANT 19 are placed in the bit stream 14. At the same time, the block coding bit integrated value B-add is set to zero. In other words, a new subframe layer 12S is started to be formed from that point.
The subframe interval SFd is set to, for example, 540 bits. Therefore, if one frame is encoded with, for example, 6400 bits in the encoding method of this embodiment, 12 subframes 12 exist in one frame since 6400/540=11.85.
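The splitting rule just described can be sketched as follows. This is an illustrative assumption about one possible implementation, not code from the patent: block encoding is represented only by the number of coding bits B each block would produce, and the sketch groups blocks into subframes by accumulating B into B-add and starting a new subframe whenever B-add exceeds SFd.

```python
# Illustrative sketch of the subframe-splitting rule described above: the
# coding bit number B of each block is accumulated into b_add, and a new
# subframe (whose header would carry SFSC, SFNT, SFNS and SFQUANT) is started
# whenever b_add exceeds the subframe interval SFd after a block is completed.

SFD = 540  # subframe interval SFd in bits, as in the example above


def split_into_subframes(block_bit_counts, sfd=SFD):
    """Group consecutive blocks into subframes of roughly equal bit cost.

    block_bit_counts holds the number of coding bits B produced for each
    block in encoding order; the return value is a list of subframes, each
    a list of block indices.
    """
    subframes = [[]]
    b_add = 0                        # block coding bit number integrated value B-add
    for index, b in enumerate(block_bit_counts):
        subframes[-1].append(index)
        b_add += b                   # B-add = B-add + B
        if b_add > sfd:              # start forming a new subframe layer here
            subframes.append([])
            b_add = 0
    if not subframes[-1]:
        subframes.pop()
    return subframes


# Usage: still background blocks cost few bits, blocks in the moving region
# cost many, so the moving region ends up covered by small subframes.
costs = [20] * 40 + [300] * 10 + [20] * 40
for i, subframe in enumerate(split_into_subframes(costs)):
    print(f"subframe {i}: {len(subframe)} blocks")
```

Hard-to-encode (moving) blocks therefore fall into spatially small subframes, which is exactly the property the next paragraph and FIG. 9 rely on.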
In the encoding method according to this embodiment, the number of blocks 13 included in a subframe 12 is varied according to a quantity of generated information of the blocks included in one subframe, thereby varying a spatial size of the subframe 12, as stated above. More specifically, a subframe 12 including a block which is difficult to be encoded becomes small, whereas a subframe 12 including a block 13 which is easy to be encoded becomes large. FIG. 9 shows an example of a structure of subframes formed in the encoding method of this embodiment.
According to the second embodiment of this invention, a subframe 12 including a block 13 which contains a motion in relation to time, and is thus difficult to be coded, is made smaller in relation to space. If such a subframe 12 cannot be decoded, it is possible to suppress degradation of the quality of a reproduced picture to a small extent. In a region within a frame in which no motion in relation to time exists and degradation of the picture quality is hardly detected even if the decoding is unfeasible, the size of one subframe is large in relation to space, which allows a smaller volume of side information such as the subframe identifier SFSC 16 and the subframe time number SFNT 17. This can prevent the encoding efficiency from being lowered.
Although the second embodiment has been described by way of an example, the frame 11, the subframe 12 and the block 13 may alternatively be of different sizes and shapes. It is also possible to employ values of the quantity of codes of one frame and of the subframe interval SFd different from those employed in the above example.
As is obvious from the above embodiments, this invention enables correct decoding of each subframe 12 as a unit in relation to time even if a part of the bit stream 14 is missing or an error occurs therein.
Further, according to this invention, it is possible to suppress degradation of the quality of the reproduced picture to a small extent if a subframe 12 including a block which is in motion in relation to time cannot be decoded.
Still further, in a region within a frame in which no motion in relation to time exists and degradation of the reproduced picture quality is hardly detected even if the decoding is unfeasible, the side information of the region is allowed to be in a small volume so that it is possible to prevent the encoding efficiency from being lowered.

Claims (17)

1. A method for encoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded;
setting an identifier to said subframe to identify said subframe; and
specifying a frame to which said subframe belongs by adding to said identifier time position information representing an order of displaying said subframe;
encoding said time position information along with said subframe, and
multiplexing said encoded time position information and a bit stream of said encoded subframe to transmit said encoded time position information and said bit stream.
2. A method for encoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded; and
varying the number of said blocks included in said subframe according to a quantity of information generated by encoding each block to vary a spatial size of each of said subframes included in each frame.
3. A method for encoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded;
setting an identifier to said subframe to identify said subframe;
specifying a frame to which said subframe belongs by adding to said identifier time position information representing an order of displaying said subframe;
encoding said time position information along with said subframe, and
multiplexing said encoded time position information and a bit stream of said encoded subframe to transmit said encoded time position information and said bit stream; and
varying the number of said blocks included in said subframe according to a quantity of information generated by encoding each block to vary a spatial size of each of said subframes included in each frame.
4. The method for encoding digital motion picture signals of a frame according to claim 2, wherein each of said subframes included in said frame has an equal sum of quantities of generated information of said blocks included in said subframe.
5. The method for encoding digital motion picture signals of a frame according to claim 3, wherein each of said subframes included in said frame has an equal sum of quantities of generated information of said blocks included in said subframe.
6. A method for encoding and decoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded;
setting an identifier to said subframe to identify said subframe;
specifying a frame to which said subframe belongs by adding to said identifier time position information representing an order of displaying said subframe;
encoding said time position information along with said subframe;
multiplexing said encoded time position information and a bit stream of said encoded subframe to transmit said encoded time position information and said bit stream; and
decoding each of said subframes appropriately in relation to time by decoding and using said time position information to form said frame of said digital moving picture signals.
7. A method for encoding and decoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded;
varying the number of said blocks included in said subframe according to a quantity of information generated by encoding each block to vary a spatial size of each of said subframes included in each frame; and
decoding each of said subframes to form said frame of said digital moving picture signal.
8. A method for encoding and decoding digital motion picture signals of a frame, comprising the steps of:
dividing said frame into plural blocks each including N×M pixels;
forming a subframe composed of a set of said blocks, said subframe being a unit to be encoded;
setting an identifier to said subframe to identify said subframe;
specifying a frame to which said subframe belongs by adding to said identifier time position information representing an order of displaying said subframe;
encoding said time position information along with said subframe;
multiplexing said encoded time position information and a bit stream of said encoded subframe to transmit said encoded time position information and said bit stream;
varying the number of said blocks included in said subframe according to a quantity of information generated by encoding each block to vary a spatial size of each of said subframes included in each frame; and
decoding said subframe appropriately in relation to time by decoding and using said time position information to form said frame of said digital moving picture signal.
9. The method for encoding and decoding digital motion picture signals of a frame according to claim 7, wherein each of said subframes included in said frame has an equal sum of quantities of generated information of said blocks included in said subframe.
10. The method for encoding and decoding digital motion picture signals of a frame according to claim 8, wherein each of said subframes included in said frame has an equal sum of quantities of generated information of said blocks included in said subframe.
11. The method for encoding digital motion picture signals of a frame according to claim 1, wherein said step of adding time position information comprises adding the time information to each subframe of said frame.
12. The method for encoding digital motion picture signals of a frame according to claim 11, further comprising the step of maintaining substantially constant a quantity of information generated for each subframe within said frame thereby to vary spatial dimensions represented by each said subframe.
13. The method for encoding digital motion picture signals of a frame according to claim 1, further comprising the step of maintaining substantially constant a quantity of information generated for each subframe within said frame thereby to vary spatial dimensions represented by each said subframe.
14. The method for encoding and decoding digital motion picture signals of a frame according to claim 6, wherein said step of adding time position information comprises adding the time information to each subframe of said frame.
15. The method for encoding digital motion picture signals of a frame according to claim 14, further comprising the step of maintaining substantially constant a quantity of information generated for each subframe within said frame thereby to vary spatial dimensions represented by each said subframe.
16. The method for encoding digital motion picture signals of a frame according to claim 6, further comprising the step of maintaining substantially constant a quantity of information generated for each subframe within said frame thereby to vary spatial dimensions represented by each said subframe.
17. A method for decoding an encoded bitstream, said method comprising:
(a) receiving said encoded bitstream containing an identifier, an encoded subframe obtained by encoding a subframe composed of at least one block and an encoded time position information representing an order of displaying said subframe, wherein said identifier is set to said encoded subframe to identify said subframe, said encoded time position information is added to said identifier by specifying a frame to which said subframe belongs;
(b) detecting said identifier to extract said encoded time position information from said encoded bitstream;
(c) decoding said extracted encoded time position information to obtain decoded time position information; and
(d) decoding said encoded subframe to form said frame according to said decoded time position information.
US11/108,883 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals Expired - Lifetime USRE41180E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/108,883 USRE41180E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP7-14514 1995-01-31
JP1451495A JP3351645B2 (en) 1995-01-31 1995-01-31 Video coding method
US08/594,565 US5937095A (en) 1995-01-31 1996-01-31 Method for encoding and decoding moving picture signals
US09/925,423 USRE38726E1 (en) 1995-01-31 2001-08-10 Method for encoding and decoding moving picture signals
US11/108,883 USRE41180E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/594,565 Reissue US5937095A (en) 1995-01-31 1996-01-31 Method for encoding and decoding moving picture signals

Publications (1)

Publication Number Publication Date
USRE41180E1 true USRE41180E1 (en) 2010-03-30

Family

ID=11863203

Family Applications (6)

Application Number Title Priority Date Filing Date
US08/594,565 Ceased US5937095A (en) 1995-01-31 1996-01-31 Method for encoding and decoding moving picture signals
US09/925,423 Expired - Lifetime USRE38726E1 (en) 1995-01-31 2001-08-10 Method for encoding and decoding moving picture signals
US10/662,949 Expired - Lifetime USRE39367E1 (en) 1995-01-31 2003-09-16 Method for encoding and decoding moving picture signals
US11/108,885 Expired - Lifetime USRE40829E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals
US11/108,883 Expired - Lifetime USRE41180E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals
US11/108,884 Expired - Lifetime USRE40828E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US08/594,565 Ceased US5937095A (en) 1995-01-31 1996-01-31 Method for encoding and decoding moving picture signals
US09/925,423 Expired - Lifetime USRE38726E1 (en) 1995-01-31 2001-08-10 Method for encoding and decoding moving picture signals
US10/662,949 Expired - Lifetime USRE39367E1 (en) 1995-01-31 2003-09-16 Method for encoding and decoding moving picture signals
US11/108,885 Expired - Lifetime USRE40829E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/108,884 Expired - Lifetime USRE40828E1 (en) 1995-01-31 2005-04-19 Method for encoding and decoding moving picture signals

Country Status (6)

Country Link
US (6) US5937095A (en)
EP (4) EP1146747B1 (en)
JP (1) JP3351645B2 (en)
AU (2) AU681818B2 (en)
CA (1) CA2168415C (en)
DE (2) DE69636529T2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3351645B2 (en) * 1995-01-31 2002-12-03 松下電器産業株式会社 Video coding method
JPH10336595A (en) * 1997-06-05 1998-12-18 Mitsubishi Electric Corp Decoder
JP3573396B2 (en) * 1997-06-20 2004-10-06 富士通株式会社 Moving image decoding method and apparatus
JP3198996B2 (en) * 1997-08-26 2001-08-13 日本電気株式会社 Image size conversion method for orthogonally coded images
US6064748A (en) * 1998-01-16 2000-05-16 Hewlett-Packard Company Method and apparatus for embedding and retrieving additional data in an encoded data stream
JP2002027463A (en) * 2000-07-07 2002-01-25 Matsushita Electric Ind Co Ltd Image encoding device and method
US7079582B2 (en) 2000-07-07 2006-07-18 Matsushita Electric Industrial Co., Ltd. Image coding apparatus and image coding method
CN101448162B (en) 2001-12-17 2013-01-02 微软公司 Method for processing video image
US7830963B2 (en) * 2003-07-18 2010-11-09 Microsoft Corporation Decoding jointly coded transform type and subblock pattern information
US10554985B2 (en) 2003-07-18 2020-02-04 Microsoft Technology Licensing, Llc DC coefficient signaling at small quantization step sizes
US7289114B2 (en) * 2003-07-31 2007-10-30 Hewlett-Packard Development Company, L.P. Generating and displaying spatially offset sub-frames
US7724827B2 (en) * 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
US7301549B2 (en) 2003-10-30 2007-11-27 Hewlett-Packard Development Company, L.P. Generating and displaying spatially offset sub-frames on a diamond grid
TWI288568B (en) * 2004-12-10 2007-10-11 Seiko Epson Corp Image display method and device, and projector
JP4947389B2 (en) * 2009-04-03 2012-06-06 ソニー株式会社 Image signal decoding apparatus, image signal decoding method, and image signal encoding method
CA3033984C (en) * 2010-09-30 2022-03-08 Mitsubishi Electric Corporation Moving image encoding device, moving image decoding device, moving image coding method, and moving image decoding method
CN109561119B (en) * 2017-09-26 2022-04-01 中国移动通信有限公司研究院 Data receiving method and equipment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02272851A (en) 1989-04-14 1990-11-07 Oki Electric Ind Co Ltd Assembly system and disassembly system of picture packet
US5023710A (en) * 1988-12-16 1991-06-11 Sony Corporation Highly efficient coding apparatus
US5060285A (en) * 1989-05-19 1991-10-22 Gte Laboratories Incorporated Hierarchical variable block size address-vector quantization using inter-block correlation
US5107345A (en) * 1990-02-27 1992-04-21 Qualcomm Incorporated Adaptive block size image compression method and system
US5138447A (en) * 1991-02-11 1992-08-11 General Instrument Corporation Method and apparatus for communicating compressed digital video signals using multiple processors
US5144424A (en) * 1991-10-15 1992-09-01 Thomson Consumer Electronics, Inc. Apparatus for video data quantization control
US5173952A (en) * 1990-03-20 1992-12-22 Victor Company Of Japan, Ltd. Image filtering system performing transform coding of image data other than boundary picture elements of adjoining blocks
US5203715A (en) * 1991-03-13 1993-04-20 Yazaki Corporation Connector
EP0552049A2 (en) * 1992-01-17 1993-07-21 Sharp Kabushiki Kaisha Digital recording and/or reproduction apparatus of video signal
US5231384A (en) * 1991-08-26 1993-07-27 General Electric Company Apparatus for splitting video signal between two channels
US5287178A (en) * 1992-07-06 1994-02-15 General Electric Company Reset control network for a video signal encoder
JPH0715729A (en) 1993-03-11 1995-01-17 Matsushita Electric Ind Co Ltd Image coding method and circuit, device therefor and optical disk
JPH07111651A (en) 1993-10-13 1995-04-25 Hitachi Ltd Moving image compression coding device and compression moving image recording device
GB2287603A (en) * 1994-03-16 1995-09-20 Fujitsu Ltd Inserting header code words
US5557331A (en) 1993-03-11 1996-09-17 Matsushita Electric Industrial Co., Ltd. Image encoding method, an image encoding circuit, an image encoding apparatus, and an optical disk

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0055049B1 (en) * 1980-12-20 1986-03-19 Thorn Emi Plc Discharge lamp arc tubes
US5168356A (en) * 1991-02-27 1992-12-01 General Electric Company Apparatus for segmenting encoded video signal for transmission
US5397095A (en) * 1993-03-19 1995-03-14 Jeffrey; Alfred S. Modular building system
US5475716A (en) * 1994-01-18 1995-12-12 Gi Corporation Method for communicating block coded digital data with associated synchronization/control data
JP3351645B2 (en) * 1995-01-31 2002-12-03 松下電器産業株式会社 Video coding method
US5827178A (en) * 1997-01-02 1998-10-27 Berall; Jonathan Laryngoscope for use in trachea intubation

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5023710A (en) * 1988-12-16 1991-06-11 Sony Corporation Highly efficient coding apparatus
JPH02272851A (en) 1989-04-14 1990-11-07 Oki Electric Ind Co Ltd Assembly system and disassembly system of picture packet
US5060285A (en) * 1989-05-19 1991-10-22 Gte Laboratories Incorporated Hierarchical variable block size address-vector quantization using inter-block correlation
US5107345A (en) * 1990-02-27 1992-04-21 Qualcomm Incorporated Adaptive block size image compression method and system
US5173952A (en) * 1990-03-20 1992-12-22 Victor Company Of Japan, Ltd. Image filtering system performing transform coding of image data other than boundary picture elements of adjoining blocks
US5138447A (en) * 1991-02-11 1992-08-11 General Instrument Corporation Method and apparatus for communicating compressed digital video signals using multiple processors
US5203715A (en) * 1991-03-13 1993-04-20 Yazaki Corporation Connector
US5231384A (en) * 1991-08-26 1993-07-27 General Electric Company Apparatus for splitting video signal between two channels
US5144424A (en) * 1991-10-15 1992-09-01 Thomson Consumer Electronics, Inc. Apparatus for video data quantization control
EP0552049A2 (en) * 1992-01-17 1993-07-21 Sharp Kabushiki Kaisha Digital recording and/or reproduction apparatus of video signal
US5287178A (en) * 1992-07-06 1994-02-15 General Electric Company Reset control network for a video signal encoder
JPH0715729A (en) 1993-03-11 1995-01-17 Matsushita Electric Ind Co Ltd Image coding method and circuit, device therefor and optical disk
US5557331A (en) 1993-03-11 1996-09-17 Matsushita Electric Industrial Co., Ltd. Image encoding method, an image encoding circuit, an image encoding apparatus, and an optical disk
JPH07111651A (en) 1993-10-13 1995-04-25 Hitachi Ltd Moving image compression coding device and compression moving image recording device
GB2287603A (en) * 1994-03-16 1995-09-20 Fujitsu Ltd Inserting header code words

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Codec For Afdiovisual Services AT n×384 kbit/s" Comite Consultatif International Telegraphique Telephonique Recommendation, vol. 3, No. III.06, Nov. 14, 1998, pp. 120-128, XP000575342.
"Codec for Afdiovisual Services AT n×384kbits/s" çomite Consultatif International Telegraphique Telephonique Recommendation vol. 3, No. III.06, Nov. 14, 1998, pp. 120-128, XP000575342. *
"Transmission of Non-Telephone Signals. Information Technology- Generic Coding of Moving Pictures and Associated Audio Information: Video" ITU-T Telecommunication Standarization Sector of ITU, Jul. 1995, pp. A/B, I-VIII, 1-201, XP000198491.
"Transmission of Non-Telephone Signals. Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video" ITU-T Telecommunication Standarrization Sector of ITu, Jul. 1995, pp. A/B, I-VIII, 1-201, XP000198491. *
Line Tranmission of Non-Telephone Signals, International Telecommunication Union, ITU-T Recommendation H.261, Mar. 1993, pp. 1-25. *

Also Published As

Publication number Publication date
EP1146747A3 (en) 2002-03-13
AU4331096A (en) 1996-08-08
EP1991007A3 (en) 2011-09-14
DE69636529T2 (en) 2007-01-18
DE69620932D1 (en) 2002-06-06
EP0725545A2 (en) 1996-08-07
AU681818B2 (en) 1997-09-04
USRE40829E1 (en) 2009-07-07
EP0725545B1 (en) 2002-05-02
EP0725545A3 (en) 1999-01-27
EP1189453A1 (en) 2002-03-20
USRE39367E1 (en) 2006-10-31
US5937095A (en) 1999-08-10
CA2168415A1 (en) 1996-08-01
CA2168415C (en) 2000-07-18
AU2359097A (en) 1997-08-14
EP1991007A2 (en) 2008-11-12
EP1146747A2 (en) 2001-10-17
EP1146747B1 (en) 2006-09-06
AU684697B2 (en) 1997-12-18
USRE38726E1 (en) 2005-04-19
JPH08205155A (en) 1996-08-09
DE69636529D1 (en) 2006-10-19
DE69620932T2 (en) 2002-11-07
JP3351645B2 (en) 2002-12-03
USRE40828E1 (en) 2009-07-07

Similar Documents

Publication Publication Date Title
USRE41180E1 (en) Method for encoding and decoding moving picture signals
US5070503A (en) Digital information transmitting and receiving system
USRE42456E1 (en) Moving-picture signal coding and/or decoding system resistant to transmission error
US5781561A (en) Encoding apparatus for hierarchically encoding image signal and decoding apparatus for decoding the image signal hierarchically encoded by the encoding apparatus
JPH03267889A (en) Composition of picture data encoded from among data current
JPH0738444A (en) Decoder having two-step buffer for variable-length decoder and method
JP3611790B2 (en) Video signal decoding method
JPH07141798A (en) Method of copying digital data and video recorder
JPH0951541A (en) Animation communicating method
JPH10136369A (en) Decoder for digital moving image