US20030151753A1 - Methods and apparatuses for use in switching between streaming video bitstreams - Google Patents


Info

Publication number
US20030151753A1
Authority
US
United States
Prior art keywords
bitstream, switching, recited, quantization parameter, computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/185,741
Inventor
Shipeng Li
Feng Wu
Xiaoyan Sun
Goubin Shen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US10/185,741 (US20030151753A1)
Assigned to Microsoft Corporation. Assignors: Li, Shipeng; Shen, Goubin; Sun, Xiaoyan; Wu, Feng
Priority to EP02028649A (EP1337111A3)
Priority to JP2003018057A (JP2003244700A)
Priority to KR10-2003-0007895A (KR20030067589A)
Priority to JP2003032872A (JP2003283340A)
Publication of US20030151753A1
Priority to US12/472,266 (US8576919B2)
Priority to US14/071,540 (US9686546B2)
Assigned to Microsoft Technology Licensing, LLC. Assignor: Microsoft Corporation
Legal status: Abandoned

Classifications

    • H04N19/625: Coding/decoding of digital video signals using transform coding, using discrete cosine transform [DCT]
    • H04N21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N19/126: Adaptive coding; quantisation; details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/61: Transform coding in combination with predictive coding
    • H04N21/23439: Reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, for generating different versions
    • H04N21/2662: Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities

Definitions

  • Kserr1 = QP1−1(Lerr1).
  • Krec1 = Kpred1 + Kserr1.
  • Lrec1 = Qs(Krec1).
  • the levels Lrec1 are dequantized using Qs−1 and the inverse DCT transform is performed to obtain the reconstructed image.
  • the reconstructed image will go through a loop filter to smooth certain blocky artifacts and output to the display and to the frame buffer for the next frame decoding.
  • the resultant picture is the same as that decoded from S 2 .
  • a drifting-free switching from bitstream 1 to bitstream 2 is achieved.
  • Qs is encoded in the S 12 bitstream.
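  • Read as a sequence, the steps above amount to a short level-domain computation. The following Python sketch restates them, assuming uniform scalar quantizers for QP1 and Qs (the quantizer design is not fixed by this description); the function and variable names are illustrative, and the final IDCT and loop filtering are only noted in a comment.

      import numpy as np

      def quant(k, step):
          # Q: DCT coefficients -> integer levels (uniform quantizer assumed).
          return np.round(np.asarray(k, dtype=float) / step).astype(int)

      def dequant(levels, step):
          # Q^-1: integer levels -> reconstructed DCT coefficients.
          return np.asarray(levels, dtype=float) * step

      def decode_sp_conventional(L_err1, K_pred1, qp1, qs):
          """Level-domain reconstruction of SP frame S1 per FIG. 3."""
          K_serr1 = dequant(L_err1, qp1)   # Kserr1 = QP1^-1(Lerr1)
          K_rec1 = K_pred1 + K_serr1       # Krec1 = Kpred1 + Kserr1
          L_rec1 = quant(K_rec1, qs)       # Lrec1 = Qs(Krec1)
          K_out = dequant(L_rec1, qs)      # Qs^-1(Lrec1); IDCT and loop filtering follow
          return L_rec1, K_out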
  • FIG. 4 Reference is now made to FIG. 4 and the exemplary conventional encoding process that is illustrated for encoding of SP frame S 1 or S 2 in a normal bitstream.
  • S 1 is used as an example.
  • A DCT transform is performed on the macroblock of the original video, and the obtained coefficients are denoted Korig1.
  • a DCT transform is performed on the predicted macroblock, and the obtained coefficients are denoted Kpred1.
  • the next step is to quantize K pred1 using Qs and obtain levels L pred1 .
  • Lpred1 = Qs(Kpred1).
  • Kerr1 = Korig1 − Kpred1.
  • Lerr1 = QP1(Kerr1).
  • Entropy encoding is performed on Lerr12 and the switching bitstream S12 is obtained.
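  • A corresponding sketch of the conventional SP-frame encoding steps just listed, under the same uniform-quantizer assumption; only the quantities named above are computed, and entropy coding is left as a comment.

      import numpy as np

      # Same uniform-quantizer helpers as in the earlier sketch.
      def quant(k, step): return np.round(np.asarray(k, dtype=float) / step).astype(int)
      def dequant(levels, step): return np.asarray(levels, dtype=float) * step

      def encode_sp_conventional(K_orig1, K_pred1, qp1, qs):
          """SP-frame encoding for bitstream 1 in the conventional scheme (FIG. 4)."""
          L_pred1 = quant(K_pred1, qs)   # Lpred1 = Qs(Kpred1), used in the reconstruction loop
          K_err1 = K_orig1 - K_pred1     # Kerr1 = Korig1 - Kpred1
          L_err1 = quant(K_err1, qp1)    # Lerr1 = QP1(Kerr1); entropy coded into S1
          return L_err1, L_pred1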
  • S 1 is used as an example.
  • the levels of the prediction error coefficients, L err1 , and motion vectors, are generated for the macroblock.
  • Levels Lerr1 are dequantized using quantizer QP1−1,
  • Kserr1 = QP1−1(Lerr1).
  • Lserr1 = Qs(Kserr1).
  • Lpred1 = Qs1(Kpred1).
  • Lpred1 is then dequantized by Qs1−1,
  • Kspred1 = Qs1−1(Lpred1).
  • Lspred1 = Qs(Kspred1).
  • Lrec1 = Lspred1 + Lserr1.
  • the reconstructed image will go through a loop filter to smooth certain blocky artifacts and output to the display and to the frame buffer for next frame decoding.
  • the decoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, follows a similar decoding process except that the input is bitstream S12, QP1−1 is replaced by Qs2−1, Qs is replaced by Qs2, Qs−1 is replaced by Qs2−1, Lrec1 is replaced by Lrec2, Lerr1 is replaced by Lerr12, Kserr1 is replaced by Kserr12, and Lspred1 is replaced by Lspred12.
  • the resultant picture is the same as that decoded from S 2 .
  • a drifting-free switching from bitstream 1 to bitstream 2 is achieved.
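  • The improved decoding path of FIG. 5 can be sketched the same way; per the preceding bullets, decoding the switching bitstream S12 follows the identical flow with QP1−1 replaced by Qs2−1 and Qs replaced by Qs2. The helper names and the uniform quantizer are assumptions carried over from the earlier sketches.

      import numpy as np

      # Same uniform-quantizer helpers as in the earlier sketches.
      def quant(k, step): return np.round(np.asarray(k, dtype=float) / step).astype(int)
      def dequant(levels, step): return np.asarray(levels, dtype=float) * step

      def decode_sp_improved(L_err1, K_pred1, qp1, qs1, qs):
          """Level-domain reconstruction of SP frame S1 per FIG. 5."""
          K_serr1 = dequant(L_err1, qp1)    # Kserr1 = QP1^-1(Lerr1)
          L_serr1 = quant(K_serr1, qs)      # Lserr1 = Qs(Kserr1)
          L_pred1 = quant(K_pred1, qs1)     # Lpred1 = Qs1(Kpred1)
          K_spred1 = dequant(L_pred1, qs1)  # Kspred1 = Qs1^-1(Lpred1)
          L_spred1 = quant(K_spred1, qs)    # Lspred1 = Qs(Kspred1)
          L_rec1 = L_spred1 + L_serr1       # Lrec1 = Lspred1 + Lserr1
          return L_rec1                     # dequantization by Qs^-1, IDCT and loop filter follow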
  • S 1 is used as an example.
  • A DCT transform is performed on the macroblock of the original video, and the obtained coefficients are denoted Korig1.
  • After motion compensation, a DCT transform is performed on the predicted macroblock, and the obtained coefficients are denoted Kpred1. Then Kpred1 is quantized using Qs1 to obtain levels Lpred1,
  • Lpred1 = Qs1(Kpred1).
  • the next step is to dequantize Lpred1 using dequantizer Qs1−1,
  • Kspred1 = Qs1−1(Lpred1).
  • Kerr1 = Korig1 − Kpred1.
  • Lerr1 = QP1(Kerr1).
  • the encoding of switching bitstream S 12 is based on the encoding of S 1 and S 2 .
  • the process involves quantizing prediction coefficients K spred1 in the S 1 encoder using quantizer Qs 2 .
  • Lspred12 = Qs2(Kspred1).
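  • A sketch of the FIG. 6 encoder steps above, again under the uniform-quantizer assumption. How Lspred12 is further combined into the S12 bitstream is not restated in these bullets, so the sketch stops at the quantities that are named.

      import numpy as np

      # Same uniform-quantizer helpers as in the earlier sketches.
      def quant(k, step): return np.round(np.asarray(k, dtype=float) / step).astype(int)
      def dequant(levels, step): return np.asarray(levels, dtype=float) * step

      def encode_sp_improved(K_orig1, K_pred1, qp1, qs1, qs2):
          """SP-frame encoding for bitstream 1 per FIG. 6, plus Lspred12 for the S12 encoder."""
          L_pred1 = quant(K_pred1, qs1)     # Lpred1 = Qs1(Kpred1)
          K_spred1 = dequant(L_pred1, qs1)  # Kspred1 = Qs1^-1(Lpred1)
          K_err1 = K_orig1 - K_pred1        # Kerr1 = Korig1 - Kpred1
          L_err1 = quant(K_err1, qp1)       # Lerr1 = QP1(Kerr1); entropy coded into S1
          L_spred12 = quant(K_spred1, qs2)  # Lspred12 = Qs2(Kspred1), handed to the S12 encoder
          return L_err1, L_spred12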
  • S 1 is used as an example.
  • Levels Lerr1 are dequantized using quantizer QP1−1:
  • Kserr1 = QP1−1(Lerr1).
  • Krec1 = Kpred1 + Kserr1.
  • Lrec1 = Qs1(Krec1).
  • the levels Lrec1 are dequantized using Qs1−1 and the inverse DCT transform is performed to obtain the reconstructed image.
  • the reconstructed image will go through a loop filter to smooth certain blocky artifacts and output to the display and to the frame buffer for next frame decoding.
  • the decoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, follows a similar decoding process except that the input is bitstream S12, QP1−1 is replaced by Qs2−1, Qs1 is replaced by Qs2, Qs1−1 is replaced by Qs2−1, Kserr1 is replaced by Kserr12, Krec1 is replaced by Krec12, Lrec1 is replaced by Lrec2, and Lerr1 is replaced by Lerr12.
  • the resultant picture is the same as that decoded from S 2 .
  • a drifting-free switching from bitstream 1 to bitstream 2 is achieved.
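  • The FIG. 7 variant keeps the reconstruction in the coefficient domain with a single quantizer Qs1 in the loop; a minimal sketch under the same assumptions as the earlier ones:

      import numpy as np

      # Same uniform-quantizer helpers as in the earlier sketches.
      def quant(k, step): return np.round(np.asarray(k, dtype=float) / step).astype(int)
      def dequant(levels, step): return np.asarray(levels, dtype=float) * step

      def decode_sp_simplified(L_err1, K_pred1, qp1, qs1):
          """Coefficient-domain reconstruction of SP frame S1 per FIG. 7."""
          K_serr1 = dequant(L_err1, qp1)   # Kserr1 = QP1^-1(Lerr1)
          K_rec1 = K_pred1 + K_serr1       # Krec1 = Kpred1 + Kserr1
          L_rec1 = quant(K_rec1, qs1)      # Lrec1 = Qs1(Krec1)
          K_out = dequant(L_rec1, qs1)     # Qs1^-1(Lrec1); IDCT and loop filtering follow
          return L_rec1, K_out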
  • S 1 is used as an example.
  • the process includes subtracting K pred1 from K orig1 and obtaining error coefficients K err1 .
  • Kerr1 = Korig1 − Kpred1.
  • Kerr1 is quantized using QP1 to obtain error levels Lerr1,
  • Lerr1 = QP1(Kerr1).
  • the process includes reconstructing levels Lrec1 and the reference for the next frame encoding. Note that here there is a quantizer Qs1 and a dequantizer Qs1−1 in the reconstruction loop.
  • the encoding of switching bitstream S 12 is based on the encoding of S 1 and S 2 .
  • prediction coefficients K pred1 are quantized in the S 1 encoder using quantizer Qs 2 .
  • Lpred12 = Qs2(Kpred1).
  • the process includes subtracting L pred12 from the reconstructed level L rec2 in S 2 encoder.
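  • Combining the two bullets above, here is a sketch of how the switching levels for S12 might be formed in the FIG. 8 variant; treating the difference Lrec2 − Lpred12 as the quantity that is entropy coded is my reading of the text, and the function name is illustrative.

      import numpy as np

      # Same uniform-quantizer helper as in the earlier sketches.
      def quant(k, step): return np.round(np.asarray(k, dtype=float) / step).astype(int)

      def switching_levels_fig8(K_pred1, L_rec2, qs2):
          """Levels carried by the switching bitstream S12 in the FIG. 8 variant.

          K_pred1: DCT prediction produced in the bitstream-1 (S1) encoder
          L_rec2:  reconstructed levels of the same SP frame in the bitstream-2 (S2) encoder
          qs2:     quantization step Qs2 of the target bitstream
          """
          L_pred12 = quant(K_pred1, qs2)   # Lpred12 = Qs2(Kpred1)
          return L_rec2 - L_pred12         # Lrec2 - Lpred12; assumed to be entropy coded into S12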
  • FIG. 9 is a block diagram depicting a decoder 900 for S 1 and S 2 , in accordance with certain other implementations of the present invention.
  • the quantization Qs is operated on the reconstructed DCT reference rather than on the decoded DCT residue and the DCT prediction.
  • the quantization in this example can be described as:
  • X is the reconstructed DCT coefficient
  • Y is the quantized DCT coefficient
  • A(.) is the quantization table
  • Qs is the quantization step.
  • L err are the levels of the prediction error coefficients
  • K pred are prediction coefficients. This is quite different from that used in conventional SP coding.
  • One advantage is that a high quality display can be reconstructed from the part in [. . . ] of the above formula.
  • decoder 900 provides two ways for reconstructing the display image.
  • In the first, the reconstructed reference is directly used for the purpose of display. There is little if any complexity increase in this case.
  • In the second, if the decoder is powerful enough, another high quality image can be reconstructed for display.
  • This process includes the modules within box 902 . These modules are, for example, non-normative parts for the current JVT standard.
  • FIG. 10 illustrates a decoder 1000 for the switching bitstream S 12 , in accordance with certain further implementations of the present invention.
  • decoder 1000 for the switching bitstream S 12 is slightly different from that for S 1 and S 2 , presented in previous sections.
  • the quantization Qs is only needed on the DCT prediction.
  • the quantization in this example can be described as:
  • Decoder 1000 is configured to know which SP bitstream is received. Therefore, for example, a 1-bit syntax can be employed to notify decoder 1000 .
  • An exemplary modification in the SP syntax and semantic include a Switching Bitstream Flag (e.g., 1 bit) and Quantization parameter (e.g., 5 bits).
  • the 1-bit syntax element “Switching Bitstream Flag” is inserted before the syntax element “Slice Qp”.
  • When the Switching Bitstream Flag is 1, the current bitstream is decoded as Bitstream S12, and the syntax element “Slice QP” is skipped; otherwise it is decoded as Bitstream S1 or S2, and the syntax element “Slice QP” is the quantization parameter Qp.
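  • A sketch of how a decoder might parse this modified slice header; read_bits and the returned dictionary are illustrative conveniences, not part of the proposed syntax.

      def parse_sp_slice_header(read_bits):
          """Sketch of the modified SP slice-header syntax described above.

          read_bits(n) is a hypothetical helper returning the next n bits of
          the slice header as an unsigned integer.
          """
          switching_bitstream_flag = read_bits(1)   # 1-bit Switching Bitstream Flag
          if switching_bitstream_flag == 1:
              # Bitstream S12: decode as the switching bitstream; "Slice QP" is skipped.
              return {"switching_bitstream": True, "slice_qp": None}
          # Bitstream S1 or S2: the 5-bit quantization parameter Qp follows.
          return {"switching_bitstream": False, "slice_qp": read_bits(5)}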
  • An encoder 1100 is illustrated in the block diagram of FIG. 11, in accordance with certain further exemplary implementations of the present invention.
  • encoder 1100 includes a switch 1102 .
  • the DCT prediction can be directly subtracted from the original DCT image without quantization and dequantization, or the DCT prediction can be subtracted from the original DCT image after quantization and dequantization. Whether the DCT prediction is quantized or not can be decided, for example, coefficient by coefficient using a rate-distortion criterion.
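  • As a sketch of such a coefficient-by-coefficient decision, the toy rate-distortion cost below compares the two candidate predictions; the cost model, the lambda value, and the choice of quantization step are illustrative assumptions, not taken from this description.

      import numpy as np

      def choose_prediction_per_coefficient(K_orig, K_pred, qs, lam=0.85):
          """Per-coefficient switch between the raw DCT prediction and its
          quantized-then-dequantized version, in the spirit of switch 1102.
          The cost model and lambda value are illustrative assumptions only."""
          K_pred_q = np.round(K_pred / qs) * qs   # prediction after quantization/dequantization
          res_raw = K_orig - K_pred               # residual if the raw prediction is used
          res_q = K_orig - K_pred_q               # residual if the quantized prediction is used
          # Toy rate-distortion cost: squared error plus a magnitude term as a rate proxy.
          cost_raw = res_raw ** 2 + lam * np.abs(res_raw)
          cost_q = res_q ** 2 + lam * np.abs(res_q)
          use_q = cost_q <= cost_raw              # one decision per coefficient
          return np.where(use_q, K_pred_q, K_pred), use_q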

Abstract

Improved methods and apparatuses are provided for switching of streaming data bitstreams, such as, for example, used in video streaming and other related applications. Some desired functionalities provided herein include random access, fast forward and fast backward, error-resilience and bandwidth adaptation. The improved methods and apparatuses can be configured to increase coding efficiency of and/or reduce the amount of data needed to encode a switching bitstream.

Description

    RELATED PATENT APPLICATIONS
  • This U.S. Non-provisional Application for Letters Patent further claims the benefit of priority from, and hereby incorporates by reference the entire disclosure of, co-pending U.S. Provisional Application for Letters Patent Serial No. 60/355,071, filed Feb. 8, 2002. [0001]
  • Furthermore, this U.S. Non-provisional Application for Letters Patent is related to a co-pending application Ser. No. ______ (Attorney's Docket Number MS1-1218US), filed Jun. 27, 2002, and titled “Seamless Switching Of Scalable Video Bitstreams”.[0002]
  • TECHNICAL FIELD
  • This invention relates to data bitstreams, and more particularly to methods and apparatuses for switching between different streaming bitstreams. [0003]
  • BACKGROUND
  • With steady growth of access bandwidth, more and more Internet applications start to use streaming audio and video contents. Since the current Internet is inherently a heterogeneous and dynamical best-effort network, channel bandwidth usually fluctuates in a wide range from bit rate below 64 kbps to well above 1 Mbps. This brings great challenges to video coding and streaming technologies in providing a smooth playback experience and best available video quality to the users. To deal with the network bandwidth variations, two main approaches, namely, switching among multiple non-scalable bitstreams and streaming with a single scalable bitstream, have been extensively investigated in recent years. [0004]
  • In the first approach, a video sequence is compressed into several non-scalable bitstreams at different bit rates. Some special frames, known as key frames, are either compressed without prediction or coded with an extra switching bitstream. Key frames provide access points to switch among these bitstreams to fit in the available bandwidth. One advantage of this method is the high coding efficiency with non-scalable bitstreams. However, due to limitation in both the number of bitstreams and switching points, this method only provides coarse and sluggish capability in adapting to channel bandwidth variations. [0005]
  • In the second approach, a video sequence is compressed into a single scalable bitstream, which can be truncated flexibly to adapt to bandwidth variations. Among numerous scalable coding techniques, MPEG-4 Fine Granularity Scalable (FGS) coding has become prominent due to its fine-grain scalability. Since the enhancement bitstream can be truncated arbitrarily in any frame, FGS provides a remarkable capability in readily and precisely adapting to channel bandwidth variations. However, low coding efficiency is the vital disadvantage that prevents FGS from being widely deployed in video streaming applications. Progressive Fine Granularity Scalable (PFGS) coding scheme is a significant improvement over FGS by introducing two prediction loops with different quality references. On the other hand, since only one high quality reference is used in enhancement layer coding, most coding efficiency gain appears within a certain bit rate range around the high quality reference. Generally, with today's technologies, there is still a coding efficiency loss compared with the non-scalable case at fixed bit rates. [0006]
  • Nevertheless, bandwidth fluctuations remain a problem for streaming video in the current Internet. Conventional streaming video systems typically try to address this problem by switching between different video bitstreams with different bit-rates, for example, as described above. However, in these and other existing video coding schemes, the switching points are restricted only to key frames (e.g., typically I-frames) to avoid drifting problems. Such key frames are usually encoded far apart from each other to preserve high coding efficiency, so bitstream switching can only take place periodically. This greatly reduces the adaptation capability of existing streaming systems. Consequently, a viewer may experience frequent pausing and re-buffering when watching a streaming video. [0007]
  • Hence, there is a need for improved methods and apparatuses for use in switching streaming bitstreams. [0008]
  • SUMMARY
  • Improved methods and apparatuses are provided for switching of streaming data bitstreams, such as, for example, used in video streaming and other related applications. Some desired functionalities provided herein include random access, fast forward and fast backward, error-resilience and bandwidth adaptation. The improved methods and apparatuses can be configured to increase coding efficiency of and/or reduce the amount of data needed to encode a switching bitstream. [0009]
  • In accordance with certain exemplary implementations of the present invention, an encoding method is provided. The method includes encoding data into a first bitstream using a first quantization parameter and encoding the data into a second bitstream using a second quantization parameter that is different from the first quantization parameter. The method also includes generating an encoded switching bitstream associated with the first and second bitstreams using the first quantization parameter to support up-switching between the first and second bitstreams and using the second quantization parameter to support down-switching between the first and second bitstreams. [0010]
  • An exemplary apparatus includes a first bitstream encoder configured to encode data into an encoded first bitstream using a first quantization parameter and a second bitstream encoder configured to encode the data into an encoded second bitstream using a second quantization parameter that is different from the first quantization parameter. The apparatus also includes a switching bitstream encoder operatively coupled to the first bitstream encoder and the second bitstream encoder and configured to output an encoded switching bitstream that supports up-switching and down-switching between the first encoded bitstream and the second encoded bitstream based on information processed using the first and second quantization parameters. [0011]
  • An exemplary decoding method includes receiving at least one encoded bitstream, such as, a first bitstream that was generated using a first quantization parameter and/or a second bitstream that was generated using a second quantization parameter that is different from the first quantization parameter. The received encoded bitstream is decoded. The decoding method further includes receiving an encoded switching bitstream associated with the first and second bitstreams that was generated using the first quantization parameter to support up-switching between the first and second bitstreams and using the second quantization parameter to support down-switching between the first and second bitstreams. The method also includes decoding the received encoded switching bitstream using the first and second quantization parameters. [0012]
  • Another exemplary apparatus includes a first decoder configured to decode a first encoded bitstream into a decoded first bitstream using a first quantization parameter and a second decoder configured to decode a second bitstream into a decoded second bitstream using a second quantization parameter that is different from the first quantization parameter. The apparatus also includes a switching bitstream decoder that is operatively coupled to the first decoder and the second decoder and configured to output a decoded switching bitstream that supports up-switching and down-switching between the first decoded bitstream and the second decoded bitstream based on information processed using the first and second quantization parameters.[0013]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. The same numbers are used throughout the figures to reference like components and/or features. [0014]
  • FIG. 1 is a block diagram depicting an exemplary computing environment that is suitable for use with certain implementations of the present invention. [0015]
  • FIG. 2 is a diagram illustratively depicting switching between bitstreams, in accordance with certain exemplary implementations of the present invention. [0016]
  • FIG. 3 is a block diagram depicting a conventional decoder. [0017]
  • FIG. 4 is a block diagram depicting a conventional encoder. [0018]
  • FIG. 5 is block diagram depicting an improved decoder, in accordance with certain exemplary implementations of the present invention. [0019]
  • FIG. 6 is block diagram depicting an improved encoder, in accordance with certain exemplary implementations of the present invention. [0020]
  • FIG. 7 is block diagram depicting an improved decoder, in accordance with certain further exemplary implementations of the present invention. [0021]
  • FIG. 8 is block diagram depicting an improved encoder, in accordance with certain further exemplary implementations of the present invention. [0022]
  • FIG. 9 is block diagram depicting an improved decoder, in accordance with still other exemplary implementations of the present invention. [0023]
  • FIG. 10 is block diagram depicting an improved decoder, in accordance with still other exemplary implementations of the present invention. [0024]
  • FIG. 11 is block diagram depicting an improved encoder, in accordance with still other exemplary implementations of the present invention.[0025]
  • DETAILED DESCRIPTION
  • Ragip Kurceren and Marta Karczewicz, in a document titled “Improved SP-frame Encoding”, VCEG-M-73, ITU-T Video Coding Experts Group Meeting, Austin, Tex., 02-04 April 2001 (hereinafter simply referred to as Kurceren et al.), proposed a switching scheme that allows seamless switching between bitstreams with different bit-rate. It introduced a special frame called an SP picture that serves as a switching point in a video sequence. [0026]
  • A similar representative switching process 200 is depicted in the illustrative diagram in FIG. 2. Here, switching is shown as occurring from bitstream 1 to bitstream 2 using SP pictures. [0027]
  • The streaming system usually either transmits bitstream 1 or bitstream 2, for example, depending on the current channel bandwidth. However, when the channel bandwidth changes, the transmitted bitstream can be switched to a bit-rate that matches the current channel condition, for example, to improve the video quality if bandwidth increases and to maintain smooth playback if bandwidth drops. [0028]
  • When switching from bitstream 1 to bitstream 2, the streaming system does not need to wait for a key frame to start the switching process. Instead, it can switch at the SP frames. At SP frames, the streaming system sends a switching bitstream S12, and the decoder decodes the switching bitstream using the same techniques without knowing whether it is S1, S2 or S12. Thus, the bitstream switching is transparent to the decoder. The decoded frame will be exactly the same as the reference frame for the next frame prediction in bitstream 2. As such, there should not be any drifting problems. [0029]
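  • To make those mechanics concrete, the following Python sketch shows one way a sender could choose what to transmit while a switch from bitstream 1 to bitstream 2 is pending; the frame fields and the helper name are hypothetical, not part of the described system.

      def pick_payload(frame, current_stream, target_stream):
          """Choose what to send for one frame while a switch from bitstream 1
          to bitstream 2 is pending. The frame fields (s1, s2, s12, is_sp) are
          hypothetical, used only to make the flow concrete."""
          if current_stream == target_stream or not frame.is_sp:
              # No switch pending, or not yet at an SP frame: keep the current bitstream.
              payload = frame.s1 if current_stream == 1 else frame.s2
              return payload, current_stream
          # At an SP frame with a pending 1 -> 2 switch: send the switching bitstream
          # S12 once; from the next frame on, bitstream 2 is sent and its reference
          # matches exactly, so there is no drift.
          return frame.s12, target_stream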
  • An exemplary conventional decoder 300 and encoder 400 are depicted in FIG. 3 and FIG. 4, respectively. A more detailed description of the scheme can be found in Kurceren et al. There are some potential issues with the scheme in Kurceren et al. [0030]
  • For example, in real streaming applications, it is usually desirable to be able to switch down from a high bit-rate bitstream to a low bit-rate one very quickly. This is a desirable feature, for example, for TCP-friendly protocols currently used in many existing streaming systems. On the other hand, switching up from a low bit-rate video bitstream to a high bit-rate does not usually have to be done as quickly as switching down. This is again a feature of the TCP-friendly protocols, for example. [0031]
  • Therefore, it would be useful to support more rapid and frequent down-switching. Indeed, as mentioned, the very reason for down-switching is often related to reduced/reducing channel bandwidth capabilities. The size of the down-switching bitstream may often be much smaller than that of the up-switching one. Since the high bit-rate bitstream typically contains most of the information of a low bit-rate one, in theory, one should be able to configure the scheme to make the size of switching bitstream sufficiently small. [0032]
  • However, the scheme in Kurceren et al. only allows the same Qs for both the down-switching bitstream and up-switching bitstream (see, e.g., FIG. 4), and the Qs is included in the prediction and reconstruction loop. The introduction of quantization Qs in the prediction and reconstruction loop will inevitably degrade the coding efficiency of the original bitstreams without SP frames. If one sets Qs too small, high coding efficiency for both bitstreams 1 and 2 can be achieved. However, the difference for down-switching is also fine-grain quantized, and it would result in a very large down-switching bitstream. Conversely, if one sets Qs too large, although obtaining a very compact switching bitstream, the coding efficiency of bitstreams 1 and 2 will be severely degraded, which is not desired either. It appears that this contradiction cannot be solved by the techniques proposed in Kurceren et al., which make a compromise between coding efficiency and the size of the switching bitstream. [0033]
  • Furthermore, there are many quantization and dequantization processes in the signal flow in the encoder proposed in Kurceren et al. (see, e.g., FIG. 4). This tends to further degrade the coding efficiency of bitstreams 1 and 2. There is also a mismatch between the prediction reference and reconstruction reference in Kurceren et al. that may contribute to the coding efficiency degradation of bitstreams 1 and 2. [0034]
  • In order to address these and other issues, improved methods and apparatuses are provided herein that allow different Qs values for switching up and switching down. The block diagrams depicted in FIG. 5 and FIG. 6 illustrate an improved decoder and encoder, respectively, in accordance with certain implementations of the present invention. [0035]
  • In accordance with certain aspects of the present invention, the proposed techniques solve the contradiction existing in the scheme proposed in Kurceren et al. so that the down-switching bitstream can be encoded to have significantly reduced, if not minimal, size while the coding efficiency of bitstreams 1 and 2 is also well preserved. [0036]
  • In accordance with certain other aspects of the present invention, the switching points for up-switching and down-switching can be decoupled. This means that one can encode more down-switching points than up-switching points, for example, to suit the TCP-friendly protocols, etc. Moreover, such decoupling allows for further improved coding efficiency of the bitstream that the system is switched from, for example, by individually setting the Qs in the reconstruction loop to an appropriately small value. [0037]
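  • A purely illustrative sketch of such a decoupled schedule follows; the frame indices and intervals are invented for the example, not taken from the description.

      # Hypothetical schedule over 90 frames: down-switching points every 6 frames
      # so the sender can react quickly to a bandwidth drop, up-switching points
      # only every 30 frames.
      DOWN_SWITCH_POINTS = set(range(0, 90, 6))
      UP_SWITCH_POINTS = set(range(0, 90, 30))

      def can_switch(frame_index, direction):
          """direction is 'down' (to a lower bit rate) or 'up' (to a higher one)."""
          points = DOWN_SWITCH_POINTS if direction == "down" else UP_SWITCH_POINTS
          return frame_index in points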
  • In accordance with certain other aspects of the present invention, the improved methods and apparatuses can be further simplified and additional quantization and dequantization processes can be readily removed. For example, FIG. 7 and FIG. 8 illustrate an exemplary decoder and encoder, respectively, that support both high coding efficiency for the normal bitstreams and a compact size for the switching bitstream. [0038]
  • Exemplary Operational Environments: [0039]
  • Turning to the drawings, wherein like reference numerals refer to like elements, the invention is illustrated as being implemented in a suitable computing environment. Although not required, the invention will be described in the general context of computer-executable instructions, such as program modules, being executed by a personal computer. [0040]
  • Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, portable communication devices, and the like. [0041]
  • The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. [0042]
  • FIG. 1 illustrates an example of a suitable computing environment 120 on which the subsequently described systems, apparatuses and methods may be implemented. Exemplary computing environment 120 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the improved methods and systems described herein. Neither should computing environment 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in computing environment 120. [0043]
  • The improved methods and systems herein are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable include, but are not limited to, personal computers, server computers, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. [0044]
  • As shown in FIG. 1, computing environment 120 includes a general-purpose computing device in the form of a computer 130. The components of computer 130 may include one or more processors or processing units 132, a system memory 134, and a bus 136 that couples various system components including system memory 134 to processor 132. [0045]
  • Bus 136 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus also known as Mezzanine bus. [0046]
  • Computer 130 typically includes a variety of computer readable media. Such media may be any available media that is accessible by computer 130, and it includes both volatile and non-volatile media, removable and non-removable media. [0047]
  • In FIG. 1, system memory 134 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 140, and/or non-volatile memory, such as read only memory (ROM) 138. A basic input/output system (BIOS) 142, containing the basic routines that help to transfer information between elements within computer 130, such as during start-up, is stored in ROM 138. RAM 140 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processor 132. [0048]
  • Computer 130 may further include other removable/non-removable, volatile/non-volatile computer storage media. For example, FIG. 1 illustrates a hard disk drive 144 for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”), a magnetic disk drive 146 for reading from and writing to a removable, non-volatile magnetic disk 148 (e.g., a “floppy disk”), and an optical disk drive 150 for reading from or writing to a removable, non-volatile optical disk 152 such as a CD-ROM/R/RW, DVD-ROM/R/RW/+R/RAM or other optical media. Hard disk drive 144, magnetic disk drive 146 and optical disk drive 150 are each connected to bus 136 by one or more interfaces 154. [0049]
  • The drives and associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for computer 130. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 148 and a removable optical disk 152, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and the like, may also be used in the exemplary operating environment. [0050]
  • A number of program modules may be stored on the hard disk, magnetic disk 148, optical disk 152, ROM 138, or RAM 140, including, e.g., an operating system 158, one or more application programs 160, other program modules 162, and program data 164. [0051]
  • The improved methods and systems described herein may be implemented within operating system 158, one or more application programs 160, other program modules 162, and/or program data 164. [0052]
  • A user may provide commands and information into computer 130 through input devices such as keyboard 166 and pointing device 168 (such as a “mouse”). Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, camera, etc. These and other input devices are connected to the processing unit 132 through a user input interface 170 that is coupled to bus 136, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). [0053]
  • A monitor 172 or other type of display device is also connected to bus 136 via an interface, such as a video adapter 174. In addition to monitor 172, personal computers typically include other peripheral output devices (not shown), such as speakers and printers, which may be connected through output peripheral interface 175. [0054]
  • Computer 130 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 182. Remote computer 182 may include many or all of the elements and features described herein relative to computer 130. [0055]
  • Logical connections shown in FIG. 1 are a local area network (LAN) 177 and a general wide area network (WAN) 179. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. [0056]
  • When used in a LAN networking environment, [0057] computer 130 is connected to LAN 177 via network interface or adapter 186. When used in a WAN networking environment, the computer typically includes a modem 178 or other means for establishing communications over WAN 179. Modem 178, which may be internal or external, may be connected to system bus 136 via the user input interface 170 or other appropriate mechanism.
  • [0058] Depicted in FIG. 1 is a specific implementation of a WAN via the Internet. Here, computer 130 employs modem 178 to establish communications with at least one remote computer 182 via the Internet 180.
  • In a networked environment, program modules depicted relative to [0059] computer 130, or portions thereof, may be stored in a remote memory storage device. Thus, e.g., as depicted in FIG. 1, remote application programs 189 may reside on a memory device of remote computer 182. It will be appreciated that the network connections shown and described are exemplary and other means of establishing a communications link between the computers may be used.
  • Exemplary Switching Schemes: [0060]
  • [0061] In this section, exemplary encoder and decoder methods and apparatuses are described in more detail with reference to FIGS. 5-8. For comparison, additional description of the architecture proposed by Kurceren et al. is also provided.
  • [0062] The modules and notations used in FIGS. 3-8 are defined as follows:
  • DCT: Discrete cosine transform. [0063]
  • IDCT: Inverse discrete cosine transform. [0064]
  • Entropy Encoding: Entropy encoding of quantized coefficients. It could be arithmetic coding or variable length coding. [0065]
  • Entropy Decoding: Entropy decoding of quantized coefficients. It could be arithmetic decoding or variable length decoding that matches the corresponding modules in the encoder. [0066]
  • Q: Quantization. [0067]
  • [0068] Q^−1: Inverse quantization, or dequantization.
  • MC: Motion compensation module, where a predicted frame is formed according to the motion vectors and the reference in the frame buffer. [0069]
  • ME: Motion estimation module, where the motion vectors are searched to best predict the current frame. [0070]
  • Loop Filter: A smoothing filter in the motion compensation loop to reduce the blocking artifacts. [0071]
  • FrameBuffer0: A frame buffer that holds the reference frame for next frame encoding/decoding. [0072]
  • P Picture: A frame encoded using traditional motion compensated predictive coding. [0073]
  • SP Picture: A frame encoded as a switching frame using the proposed motion compensated predictive coding. [0074]
  • Switching Bitstream: The bitstream transmitted to make seamless transition from one bitstream to another. [0075]
  • [0076] There are some basic assumptions on the quantization and dequantization (a small numeric check follows the list below):
  • [0077] If L1 = Q(K1), then Q(Q^−1(L1)) = L1;
  • [0078] If L1 = Q(K1) and L2 = Q(K2), then Q(Q^−1(L1) + Q^−1(L2)) = L1 + L2;
  • [0079] If L1 = Q(K1), then Q(Q^−1(L1) + K2) = L1 + Q(K2).
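  • For illustration only (this sketch is not part of the patent text), the following Python snippet checks the three assumptions with a simple uniform quantizer; the step size and sample coefficients are arbitrary choices.

```python
import math

STEP = 8.0  # arbitrary quantization step for this check

def Q(k):
    # uniform quantizer with round-half-up
    return math.floor(k / STEP + 0.5)

def Qinv(level):
    # dequantizer: levels back to coefficient values
    return level * STEP

K1, K2 = 131.0, -42.0
L1, L2 = Q(K1), Q(K2)

assert Q(Qinv(L1)) == L1                  # assumption 1: requantizing a dequantized level is lossless
assert Q(Qinv(L1) + Qinv(L2)) == L1 + L2  # assumption 2: sums of dequantized levels requantize additively
assert Q(Qinv(L1) + K2) == L1 + Q(K2)     # assumption 3: adding a step multiple shifts the level exactly
```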
  • The following descriptions work for the inter-macroblocks in a frame. For intra-macroblocks, a simple “copy” operation can be used. [0080]
  • [0081] Reference is now made to the conventional decoding process illustrated, for example, in FIG. 3, which shows the decoding of SP frame S1 or S2 in a normal bitstream. Using S1 as an example, after entropy decoding of the bitstream S1, the levels of the prediction error coefficients, Lerr1, and the motion vectors are generated for the macroblock. Levels Lerr1 are dequantized using dequantizer QP1^−1:
  • [0082] Kserr1 = QP1^−1(Lerr1).
  • [0083] After motion compensation, a forward DCT transform is performed on the predicted macroblock to obtain Kpred1; the reconstructed coefficients Krec1 are then obtained by:
  • [0084] Krec1 = Kpred1 + Kserr1.
  • [0085] The reconstructed coefficients Krec1 are quantized by Qs to obtain reconstructed levels Lrec1,
  • [0086] Lrec1 = Qs(Krec1).
  • [0087] The levels Lrec1 are dequantized using Qs^−1 and the inverse DCT transform is performed to obtain the reconstructed image. The reconstructed image goes through a loop filter to smooth certain blocky artifacts and is output to the display and to the frame buffer for the next frame decoding.
  • [0088] When decoding switching bitstream S12, e.g., when switching from bitstream 1 to bitstream 2, the decoding process is the same as that for S1, except that the input is bitstream S12, QP1^−1 is replaced by Qs^−1, Kserr1 is replaced by Kserr12, Lrec1 is replaced by Lrec2, Krec1 is replaced by Krec12, and Lerr1 is replaced by Lerr12.
  • [0089] The resultant picture is the same as that decoded from S2. Thus, a drifting-free switching from bitstream 1 to bitstream 2 is achieved. Qs is encoded in the S12 bitstream.
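  • As a rough illustration of the conventional FIG. 3 decoding flow just described, the following sketch operates directly on DCT-domain arrays; the helper names and the simple uniform quantizer are our own stand-ins for the codec's actual quantization, not the patent's normative definitions.

```python
import numpy as np

def quant(K, step):
    # Q: coefficients -> levels (uniform quantizer stand-in)
    return np.floor(K / step + 0.5).astype(np.int64)

def dequant(L, step):
    # Q^-1: levels -> coefficients
    return L * step

def decode_sp_normal(L_err1, K_pred1, qp1, qs):
    """FIG. 3 style decode of SP frame S1: dequantize the residual with QP1^-1,
    add the DCT-domain prediction, then requantize the sum with Qs."""
    K_serr1 = dequant(L_err1, qp1)      # Kserr1 = QP1^-1(Lerr1)
    K_rec1 = K_pred1 + K_serr1          # Krec1 = Kpred1 + Kserr1
    L_rec1 = quant(K_rec1, qs)          # Lrec1 = Qs(Krec1)
    return dequant(L_rec1, qs)          # coefficients handed to the IDCT / loop filter

def decode_sp_switching(L_err12, K_pred1, qs):
    """Same flow for switching bitstream S12, where the residual is in Qs units."""
    K_serr12 = dequant(L_err12, qs)
    return dequant(quant(K_pred1 + K_serr12, qs), qs)
```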
  • [0090] Reference is now made to FIG. 4 and the exemplary conventional encoding process illustrated there for the encoding of SP frame S1 or S2 in a normal bitstream. Here, S1 is used as an example.
  • [0091] A DCT transform is performed on the macroblock of the original video, and the obtained coefficients are denoted Korig1. After motion compensation, a DCT transform is performed on the predicted macroblock, and the obtained coefficients are denoted Kpred1. The next step is to quantize Kpred1 using Qs to obtain levels Lpred1,
  • [0092] Lpred1 = Qs(Kpred1).
  • [0093] Then dequantize Lpred1 using dequantizer Qs^−1, Kspred1 = Qs^−1(Lpred1), and subtract Kspred1 from Korig1 to obtain error coefficients Kerr1,
  • [0094] Kerr1 = Korig1 − Kspred1.
  • [0095] Then quantize Kerr1 using QP1 to obtain error levels Lerr1,
  • [0096] Lerr1 = QP1(Kerr1).
  • [0097] Next, perform entropy encoding on Lerr1 to obtain bitstream S1. Using the S1 decoder described above, for example, reconstruct the levels Lrec1 and the reference for the next frame encoding. Note that in this example there is a quantizer Qs and a dequantizer Qs^−1 in the reconstruction loop.
  • [0098] Notice that there is a mismatch between the prediction reference and the reconstruction reference in this scheme. Consider next the encoding of switching bitstream S12 (switching from bitstream 1 to bitstream 2). The encoding of S12 is based on the encoding of S1 and S2: Lpred1 from the S1 encoder is subtracted from the reconstructed levels Lrec2 in the S2 encoder,
  • [0099] Lerr12 = Lrec2 − Lpred1.
  • [0100] Entropy encoding is then performed on Lerr12 to generate bitstream S12.
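  • A matching sketch of the conventional FIG. 4 encoder and of the switching-bitstream computation Lerr12 = Lrec2 − Lpred1 is given below; again the helper names and the uniform quantizer are simplifications of ours, not normative definitions.

```python
import numpy as np

def quant(K, step):
    return np.floor(K / step + 0.5).astype(np.int64)

def dequant(L, step):
    return L * step

def encode_sp_conventional(K_orig1, K_pred1, qp1, qs):
    """FIG. 4 style encode of SP frame S1; returns the residual levels Lerr1,
    the prediction levels Lpred1, and the reconstructed levels Lrec1."""
    L_pred1 = quant(K_pred1, qs)                         # Lpred1 = Qs(Kpred1)
    K_spred1 = dequant(L_pred1, qs)                      # Kspred1 = Qs^-1(Lpred1)
    K_err1 = K_orig1 - K_spred1                          # Kerr1 = Korig1 - Kspred1
    L_err1 = quant(K_err1, qp1)                          # Lerr1 = QP1(Kerr1)
    L_rec1 = quant(K_pred1 + dequant(L_err1, qp1), qs)   # mirrors the FIG. 3 decoder reconstruction
    return L_err1, L_pred1, L_rec1

def encode_switching_conventional(L_rec2, L_pred1):
    """Switching bitstream S12: Lerr12 = Lrec2 - Lpred1 (then entropy coded)."""
    return L_rec2 - L_pred1
```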
  • [0101] An improved decoding process 500 of FIG. 5, in accordance with certain exemplary implementations of the present invention, will now be described in greater detail.
  • [0102] To describe the decoding of SP frame S1 or S2 in a normal bitstream, S1 is used as an example. After entropy decoding of the bitstream S1, the levels of the prediction error coefficients, Lerr1, and the motion vectors are generated for the macroblock. Levels Lerr1 are dequantized using dequantizer QP1^−1,
  • [0103] Kserr1 = QP1^−1(Lerr1).
  • [0104] The error coefficients Kserr1 are quantized using quantizer Qs = Qs1 to obtain levels,
  • [0105] Lserr1 = Qs(Kserr1).
  • [0106] After motion compensation, a forward DCT transform is performed on the predicted macroblock to obtain Kpred1. Kpred1 is then quantized by Qs1,
  • [0107] Lpred1 = Qs1(Kpred1).
  • [0108] Lpred1 is then dequantized by Qs1^−1,
  • [0109] Kspred1 = Qs1^−1(Lpred1).
  • [0110] The dequantized coefficients Kspred1 are further quantized by quantizer Qs = Qs1 to obtain levels,
  • [0111] Lspred1 = Qs(Kspred1).
  • [0112] The reconstructed levels Lrec1 are obtained by,
  • [0113] Lrec1 = Lspred1 + Lserr1.
  • [0114] The levels Lrec1 are dequantized using Qs^−1 = Qs1^−1 and the inverse DCT transform is performed to obtain the reconstructed image. The reconstructed image goes through a loop filter to smooth certain blocky artifacts and is output to the display and to the frame buffer for the next frame decoding.
  • [0115] The decoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, follows a similar decoding process, except that the input is bitstream S12, QP1^−1 is replaced by Qs2^−1, Qs is replaced by Qs2, Qs^−1 is replaced by Qs2^−1, Lrec1 is replaced by Lrec2, Lerr1 is replaced by Lerr12, Kserr1 is replaced by Kserr12, and Lspred1 is replaced by Lspred12.
  • [0116] Note that the information on Qs1 and Qs2 is encoded in bitstream S12.
  • [0117] The resultant picture is the same as that decoded from S2. Thus, a drifting-free switching from bitstream 1 to bitstream 2 is achieved.
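  • The improved FIG. 5 flow can be sketched as below for both the normal SP frame and the switching frame; the parameterization (step_err, step_pred, step_s) and the uniform quantizer are our own simplifications of the Qs1/Qs2/QP1 roles described above.

```python
import numpy as np

def quant(K, step):
    return np.floor(K / step + 0.5).astype(np.int64)

def dequant(L, step):
    return L * step

def decode_sp_fig5(L_err, K_pred, step_err, step_pred, step_s):
    """For a normal SP frame S1:  step_err = QP1, step_pred = Qs1, step_s = Qs1.
    For a switching frame S12:    step_err = Qs2, step_pred = Qs1, step_s = Qs2."""
    L_serr = quant(dequant(L_err, step_err), step_s)         # Lserr = Qs(Kserr)
    K_spred = dequant(quant(K_pred, step_pred), step_pred)   # Kspred = Qs1^-1(Qs1(Kpred))
    L_spred = quant(K_spred, step_s)                         # Lspred = Qs(Kspred)
    L_rec = L_spred + L_serr                                 # reconstructed levels
    return dequant(L_rec, step_s)                            # coefficients for the IDCT / loop filter
```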
  • [0118] An improved encoding process 600 of FIG. 6, in accordance with certain exemplary implementations of the present invention, will now be described in greater detail.
  • [0119] To describe the encoding of SP frame S1 or S2 in a normal bitstream, S1 is used as an example. Here, for example, a DCT transform is performed on the macroblock of the original video, and the obtained coefficients are denoted Korig1.
  • [0120] After motion compensation, a DCT transform is performed on the predicted macroblock, and the obtained coefficients are denoted Kpred1. Then Kpred1 is quantized using Qs1 to obtain levels Lpred1,
  • [0121] Lpred1 = Qs1(Kpred1).
  • [0122] The next step is to dequantize Lpred1 using dequantizer Qs1^−1,
  • [0123] Kspred1 = Qs1^−1(Lpred1).
  • [0124] Then subtract Kspred1 from Korig1 to obtain error coefficients Kerr1,
  • [0125] Kerr1 = Korig1 − Kspred1.
  • [0126] Next, quantize Kerr1 using QP1 to obtain error levels Lerr1,
  • [0127] Lerr1 = QP1(Kerr1).
  • [0128] Then, perform entropy encoding on Lerr1 to obtain bitstream S1.
  • [0129] Using the S1 decoder described above, for example, reconstruct the levels Lrec1 and the reference for the next frame encoding.
  • [0130] Note that here there is a quantizer Qs1 and a dequantizer Qs1^−1 in the reconstruction loop.
  • [0131] The encoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, is based on the encoding of S1 and S2.
  • [0132] Here, the process involves quantizing the prediction coefficients Kspred1 in the S1 encoder using quantizer Qs2,
  • [0133] Lspred12 = Qs2(Kspred1).
  • [0134] Then Lspred12 is subtracted from the reconstructed levels Lrec2 in the S2 encoder,
  • [0135] Lerr12 = Lrec2 − Lspred12.
  • [0136] Next, entropy encoding of Lerr12 is performed to generate bitstream S12.
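  • A compact sketch of the improved FIG. 6 encoder and of its switching-bitstream step Lerr12 = Lrec2 − Qs2(Kspred1) follows; the helper names are ours and the uniform quantizer stands in for the real quantization tables.

```python
import numpy as np

def quant(K, step):
    return np.floor(K / step + 0.5).astype(np.int64)

def dequant(L, step):
    return L * step

def encode_sp_fig6(K_orig1, K_pred1, qp1, qs1):
    """FIG. 6 style encode of SP frame S1; Kspred1 is also returned because the
    switching-bitstream encoder reuses it."""
    L_pred1 = quant(K_pred1, qs1)        # Lpred1 = Qs1(Kpred1)
    K_spred1 = dequant(L_pred1, qs1)     # Kspred1 = Qs1^-1(Lpred1)
    K_err1 = K_orig1 - K_spred1          # Kerr1 = Korig1 - Kspred1
    L_err1 = quant(K_err1, qp1)          # Lerr1 = QP1(Kerr1)
    return L_err1, K_spred1

def encode_switching_fig6(L_rec2, K_spred1, qs2):
    """Switching bitstream S12: Lerr12 = Lrec2 - Qs2(Kspred1)."""
    L_spred12 = quant(K_spred1, qs2)     # Lspred12 = Qs2(Kspred1)
    return L_rec2 - L_spred12            # Lerr12
```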
  • [0137] An improved decoding process 700 of FIG. 7, in accordance with certain further exemplary implementations of the present invention, will now be described in greater detail.
  • [0138] To describe the decoding of SP frame S1 or S2 in a normal bitstream, S1 is used as an example.
  • [0139] Here, after entropy decoding of the bitstream S1, the levels of the prediction error coefficients, Lerr1, and the motion vectors are generated for the macroblock. Levels Lerr1 are dequantized using dequantizer QP1^−1:
  • [0140] Kserr1 = QP1^−1(Lerr1).
  • [0141] After motion compensation, a forward DCT transform is performed on the predicted macroblock to obtain Kpred1; the reconstructed coefficients Krec1 are then obtained by:
  • [0142] Krec1 = Kpred1 + Kserr1.
  • [0143] The reconstructed coefficients Krec1 are quantized by Qs1 to obtain reconstructed levels Lrec1,
  • [0144] Lrec1 = Qs1(Krec1).
  • [0145] The levels Lrec1 are dequantized using Qs1^−1 and the inverse DCT transform is performed to obtain the reconstructed image. The reconstructed image goes through a loop filter to smooth certain blocky artifacts and is output to the display and to the frame buffer for the next frame decoding.
  • [0146] The decoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, follows a similar decoding process, except that the input is bitstream S12, QP1^−1 is replaced by Qs2^−1, Qs1 is replaced by Qs2, Qs1^−1 is replaced by Qs2^−1, Kserr1 is replaced by Kserr12, Krec1 is replaced by Krec12, Lrec1 is replaced by Lrec2, and Lerr1 is replaced by Lerr12.
  • [0147] The resultant picture is the same as that decoded from S2. Thus, a drifting-free switching from bitstream 1 to bitstream 2 is achieved.
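  • For the switching case of FIG. 7, every quantization stage simply uses Qs2; a minimal sketch (our helper names, uniform quantizer stand-in) is:

```python
import numpy as np

def quant(K, step):
    return np.floor(K / step + 0.5).astype(np.int64)

def dequant(L, step):
    return L * step

def decode_switching_fig7(L_err12, K_pred1, qs2):
    """FIG. 7 style decode of switching bitstream S12."""
    K_serr12 = dequant(L_err12, qs2)     # Kserr12 = Qs2^-1(Lerr12)
    K_rec12 = K_pred1 + K_serr12         # Krec12 = Kpred1 + Kserr12
    L_rec2 = quant(K_rec12, qs2)         # Lrec2 = Qs2(Krec12)
    return dequant(L_rec2, qs2)          # coefficients for the IDCT / loop filter
```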
  • [0148] An improved encoding process 800 of FIG. 8, in accordance with certain further exemplary implementations of the present invention, will now be described in greater detail.
  • [0149] To describe the encoding of SP frame S1 or S2 in a normal bitstream, S1 is used as an example.
  • [0150] Here, a DCT transform is performed on the macroblock of the original video, and the obtained coefficients are denoted Korig1.
  • [0151] Next, after motion compensation, a DCT transform is performed on the predicted macroblock, and the obtained coefficients are denoted Kpred1.
  • [0152] Then the process includes subtracting Kpred1 from Korig1 to obtain error coefficients Kerr1,
  • [0153] Kerr1 = Korig1 − Kpred1.
  • [0154] Next, Kerr1 is quantized using QP1 to obtain error levels Lerr1,
  • [0155] Lerr1 = QP1(Kerr1).
  • [0156] Then entropy encoding is performed on Lerr1 to obtain bitstream S1.
  • [0157] Using the S1 decoder described above, for example, the process includes reconstructing the levels Lrec1 and the reference for the next frame encoding. Note that here there is a quantizer Qs1 and a dequantizer Qs1^−1 in the reconstruction loop.
  • [0158] The encoding of switching bitstream S12, for example, when switching from bitstream 1 to bitstream 2, is based on the encoding of S1 and S2.
  • [0159] For example, the prediction coefficients Kpred1 are quantized in the S1 encoder using quantizer Qs2,
  • [0160] Lpred12 = Qs2(Kpred1).
  • [0161] Then the process includes subtracting Lpred12 from the reconstructed levels Lrec2 in the S2 encoder,
  • [0162] Lerr12 = Lrec2 − Lpred12.
  • [0163] Next, entropy encoding of Lerr12 is performed and bitstream S12 is generated.
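  • The key difference from the FIG. 6 switching encoder is that here the unquantized prediction coefficients Kpred1 (rather than Kspred1) are fed to the Qs2 quantizer; a brief sketch under the same simplifying assumptions:

```python
import numpy as np

def quant(K, step):
    return np.floor(K / step + 0.5).astype(np.int64)

def encode_switching_fig8(L_rec2, K_pred1, qs2):
    """FIG. 8 style switching bitstream: Lerr12 = Lrec2 - Qs2(Kpred1)."""
    L_pred12 = quant(K_pred1, qs2)       # Lpred12 = Qs2(Kpred1)
    return L_rec2 - L_pred12             # Lerr12
```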
  • [0164] Reference is now made to FIG. 9, which is a block diagram depicting a decoder 900 for S1 and S2, in accordance with certain other implementations of the present invention. Here, it is noted that the quantization Qs is applied to the reconstructed DCT reference rather than to the decoded DCT residue and the DCT prediction. The quantization in this example can be described as:
  • Y = [X*A(Qs) + 2^19] / 2^20,
  • where X is the reconstructed DCT coefficient, and Y is the quantized DCT coefficient. A(.) is the quantization table. Qs is the quantization step. [0165]
  • [0166] If the dequantization with Qp and the quantization with Qs are merged into one step, the operation can, for example, be formulated as
  • Lrec = ( [ Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp) ] * A(Qs) + 2^19 ) / 2^20,
  • [0167] where Lerr are the levels of the prediction error coefficients and Kpred are the prediction coefficients. This is quite different from the scheme used in conventional SP coding. One advantage is that a high-quality display image can be reconstructed from the bracketed part [ . . . ] of the above formula.
  • [0168] Therefore, decoder 900 provides two ways of reconstructing the display image. In the first case, the reconstructed reference is used directly for display; there is little if any complexity increase in this case. In the second case, if the decoder is powerful enough, another, higher-quality image can be reconstructed for display. This process includes the modules within box 902. These modules are, for example, non-normative parts of the current JVT standard.
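  • The merged operation lends itself to simple integer arithmetic; the sketch below is only illustrative: A is a stand-in for the codec-defined quantization table, and the listed entries are placeholders rather than normative values.

```python
# hypothetical A(QP) table entries; the real table is defined by the codec
A = {24: 620, 25: 553, 26: 492, 27: 439}

def reconstruct_level(k_pred, l_err, qp, qs):
    # bracketed term of the formula above: a high-quality coefficient that can
    # be sent straight to the display path (box 902)
    high_quality = k_pred + (l_err * ((1 << 20) + A[qp] // 2)) // A[qp]
    # requantize with Qs to obtain the level used in the drift-free reference loop
    l_rec = (high_quality * A[qs] + (1 << 19)) >> 20
    return l_rec, high_quality
```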
  • [0169] FIG. 10 illustrates a decoder 1000 for the switching bitstream S12, in accordance with certain further implementations of the present invention. In this example, decoder 1000 for the switching bitstream S12 is slightly different from that for S1 and S2 presented in the previous sections. Here, for example, the quantization Qs is only needed on the DCT prediction. Again, the quantization in this example can be described as:
  • Y = [X*A(Qs) + 2^19] / 2^20.
  • [0170] Decoder 1000 needs to know which type of SP bitstream is being received. Therefore, for example, a 1-bit syntax element can be employed to notify decoder 1000. Exemplary modifications to the SP syntax and semantics include a Switching Bitstream Flag (e.g., 1 bit) and a quantization parameter (e.g., 5 bits).
  • [0171] Thus, for example, when Ptype indicates an SP frame, the 1-bit syntax element "Switching Bitstream Flag" is inserted before the syntax element "Slice QP". Here, when the Switching Bitstream Flag is 1, the current bitstream is decoded as bitstream S12, and the syntax element "Slice QP" is skipped; otherwise it is decoded as bitstream S1 or S2, and the syntax element "Slice QP" carries the quantization parameter Qp.
  • [0172] When Ptype indicates an SP frame, the syntax element "SP Slice QP" is inserted after the syntax element "Slice QP" to encode the quantization parameter Qs.
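  • A hypothetical parse of these slice-header fields might look as follows; the bit widths and field order mirror the description above but are illustrative rather than a normative JVT syntax table, and the bit-reader helper is our own.

```python
def make_bit_reader(bits):
    """bits: a string such as '01101...'; returns a function that pops n bits as an int."""
    pos = [0]
    def read(n):
        value = int(bits[pos[0]:pos[0] + n], 2)
        pos[0] += n
        return value
    return read

def parse_sp_slice_header(read):
    hdr = {"switching_bitstream_flag": read(1)}   # 1 bit, inserted before "Slice QP"
    if hdr["switching_bitstream_flag"] == 0:
        hdr["slice_qp"] = read(5)                 # Qp of a normal SP bitstream (S1 or S2)
    hdr["sp_slice_qp"] = read(5)                  # "SP Slice QP" carrying Qs
    return hdr

# example: a normal SP slice with Slice QP = 26 and SP Slice QP = 28
print(parse_sp_slice_header(make_bit_reader("0" + "11010" + "11100")))
```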
  • An [0173] encoder 1100 is illustrated in the block diagram of FIG. 11, in accordance with certain further exemplary implementations of the present invention.
  • [0174] Here, for example, encoder 1100 includes a switch 1102. Thus, the DCT prediction can either be directly subtracted from the original DCT image without quantization and dequantization, or it can be subtracted from the original DCT image after quantization and dequantization. Whether the DCT prediction is quantized or not can be decided, for example, coefficient by coefficient using a rate-distortion criterion.
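  • One plausible, purely illustrative way to realize such a per-coefficient rate-distortion decision is sketched below; the distortion measure, the rate proxy, and the lambda value are assumptions of ours, not taken from the patent.

```python
import numpy as np

def rd_select_prediction(K_orig, K_pred, qp_step, qs_step, lam=10.0):
    """For each coefficient, choose between the raw prediction and its
    quantized/dequantized version, whichever minimizes distortion + lam*rate."""
    K_pred_q = np.round(K_pred / qs_step) * qs_step      # prediction after quantization/dequantization
    chosen = np.empty_like(K_pred, dtype=float)
    for i, (orig, pred_raw, pred_q) in enumerate(zip(K_orig, K_pred, K_pred_q)):
        best_cost = None
        for pred in (pred_raw, pred_q):
            level = round((orig - pred) / qp_step)         # residual level that would be entropy coded
            recon = pred + level * qp_step
            cost = (orig - recon) ** 2 + lam * abs(level)  # distortion + lambda * crude rate proxy
            if best_cost is None or cost < best_cost:
                best_cost, chosen[i] = cost, pred
    return chosen
```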
  • [0175] Thus, several exemplary improved SP picture coding methods and apparatuses have been presented. Separate Qs values can be provided for up-switching and down-switching bitstreams. The Qs used for switching bitstream coding can be decoupled from the prediction and reconstruction loop. This eliminates the conflict between reducing the switching bitstream size and improving the coding efficiency of the normal bitstreams, for example. There can also be a significant reduction in the switching bitstream size, while maintaining the high coding efficiency of the normal bitstreams, by optimizing the different Qs values independently. Some quantization/dequantization processes can be removed in accordance with certain implementations to improve coding efficiency. Coding efficiency can also be improved by using the same reference for prediction and reconstruction. The methods and apparatuses may also be configured to allow for more down-switching points than up-switching points.
  • Conclusion: [0176]
  • Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention. [0177]

Claims (54)

What is claimed is:
1. An encoding method comprising:
encoding data into a first bitstream using a first quantization parameter;
encoding said data into a second bitstream using a second quantization parameter that is different from said first quantization parameter; and
generating an encoded switching bitstream associated with said first and second bitstreams using said first quantization parameter to support up-switching between said first and second bitstreams and using said second quantization parameter to support down-switching between said first and second bitstreams.
2. The method as recited in claim 1, wherein said first quantization parameter and said second quantization parameter are decoupled.
3. The method as recited in claim 1, wherein said encoded switching bitstream is configured to support a plurality of up-switching periods and a plurality of down-switching periods.
4. The method as recited in claim 3, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
5. The method as recited in claim 1, wherein said first bitstream and said second bitstream have different data bit rates.
6. A computer-readable medium comprising computer-implementable instructions for causing at least one processing unit to perform acts comprising:
encoding data into a first bitstream using a first quantization parameter;
encoding said data into a second bitstream using a second quantization parameter that is different from said first quantization parameter; and
generating an encoded switching bitstream associated with said first and second bitstreams using said first quantization parameter to support up-switching between said first and second bitstreams and using said second quantization parameter to support down-switching between said first and second bitstreams.
7. The computer-readable medium as recited in claim 6, wherein said first quantization parameter and said second quantization parameter are decoupled.
8. The computer-readable medium as recited in claim 6, wherein said encoded switching bitstream is configured to support a plurality of up-switching periods and a plurality of down-switching periods.
9. The computer-readable medium as recited in claim 8, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
10. The computer-readable medium as recited in claim 6, wherein said first bitstream and said second bitstream have different data bit rates.
11. An apparatus comprising:
a first bitstream encoder configured to encode data into an encoded first bitstream using a first quantization parameter;
a second bitstream encoder configured to encode said data into an encoded second bitstream using a second quantization parameter that is different from said first quantization parameter; and
a switching bitstream encoder operatively coupled to said first bitstream encoder and said second bitstream encoder and configured to output an encoded switching bitstream that supports up-switching and down-switching between said first encoded bitstream and said second encoded bitstream based on information processed using said first and second quantization parameters.
12. The apparatus as recited in claim 11, wherein said first quantization parameter and said second quantization parameter are decoupled.
13. The apparatus as recited in claim 11, wherein said encoded switching bitstream is configured to support a plurality of up-switching periods and a plurality of down-switching periods.
14. The apparatus as recited in claim 13, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
15. The apparatus as recited in claim 11, wherein said first bitstream and said second bitstream have different data bit rates.
16. A decoding method comprising:
receiving at least one encoded bitstream selected from a group comprising a first bitstream that was generated using a first quantization parameter and a second bitstream that was generated using a second quantization parameter that is different from said first quantization parameter;
decoding said received encoded bitstream;
receiving an encoded switching bitstream associated with said first and second bitstreams that was generated using said first quantization parameter to support up-switching between said first and second bitstreams and using said second quantization parameter to support down-switching between said first and second bitstreams; and
decoding said received encoded switching bitstream using said first and second quantization parameters.
17. The method as recited in claim 16, wherein said first quantization parameter and said second quantization parameter are decoupled.
18. The method as recited in claim 16, wherein decoding said received encoded switching bitstream occurs during at least one period selected from a group comprising at least one of a plurality of up-switching periods and at least one of a plurality of down-switching periods.
19. The method as recited in claim 18, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
20. The method as recited in claim 16, wherein said first bitstream and said second bitstream have different data bit rates.
21. A computer-readable medium comprising computer-implementable instructions for causing at least one processing unit to perform acts comprising:
receiving at least one encoded bitstream selected from a group comprising a first bitstream that was generated using a first quantization parameter and a second bitstream that was generated using a second quantization parameter that is different from said first quantization parameter;
decoding said received encoded bitstream;
receiving an encoded switching bitstream associated with said first and second bitstreams that was generated using said first quantization parameter to support up-switching between said first and second bitstreams and using said second quantization parameter to support down-switching between said first and second bitstreams; and
decoding said received encoded switching bitstream using said first and second quantization parameters.
22. The computer-readable medium as recited in claim 21, wherein said first quantization parameter and said second quantization parameter are decoupled.
23. The computer-readable medium as recited in claim 21, wherein decoding said received encoded switching bitstream occurs during at least one period selected from a group comprising at least one of a plurality of up-switching periods and at least one of a plurality of down-switching periods.
24. The computer-readable medium as recited in claim 23, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
25. The computer-readable medium as recited in claim 21, wherein said first bitstream and said second bitstream have different data bit rates.
26. An apparatus comprising:
a first decoder configured to decode a first encoded bitstream into a decoded first bitstream using a first quantization parameter;
a second decoder configured to decode a second bitstream into a decoded second bitstream using a second quantization parameter that is different from said first quantization parameter; and
a switching bitstream decoder operatively coupled to said first decoder and said second decoder and configured to output a decoded switching bitstream that supports up-switching and down-switching between said first decoded bitstream and said second decoded bitstream based on information processed using said first and second quantization parameters.
27. The apparatus as recited in claim 26, wherein said first quantization parameter and said second quantization parameter are decoupled.
28. The apparatus as recited in claim 26, wherein decoding said received encoded switching bitstream occurs during at least one period selected from a group comprising at least one of a plurality of up-switching periods and at least one of a plurality of down-switching periods.
29. The apparatus as recited in claim 28, wherein over a period of time a number of said down-switching periods is greater than a number of said up-switching periods.
30. The apparatus as recited in claim 26, wherein said first bitstream and said second bitstream have different data bit rates.
31. A decoding method comprising:
reconstructing DCT reference data; and
quantizing said reconstructed DCT reference data using a quantization step (Qs) on the reconstructed DCT reference and not on decoded DCT residue and the DCT prediction data.
32. The decoding method as recited in claim 31, wherein quantizing said reconstructed DCT reference data is represented by:
Y = [X*A(Qs) + 2^19] / 2^20,
wherein X includes a reconstructed DCT coefficient and Y includes a quantized DCT coefficient, and A(.) is associated with a quantization table.
33. The decoding method as recited in claim 31, further comprising:
merging a dequantization QP and quantization QS into one operation represented by:
Lrec = ( [ Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp) ] * A(Qs) + 2^19 ) / 2^20,
where Lerr are levels of the prediction error coefficients, Kpred are prediction coefficients.
34. The decoding method as recited in claim 33, further comprising generating data for a high quality display based at least in part on
Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp).
35. The decoding method as recited in claim 31 further comprising:
selectively reconstructing different qualities of display images.
36. The decoding method as recited in claim 31 further comprising receiving decoder notification data associated with at least one switching event.
37. The decoding method as recited in claim 36, wherein said decoder notification includes at least a one-bit syntax having a switching bitstream flag.
38. The decoding method as recited in claim 37, wherein said decoder notification data includes at least one quantization parameter.
39. A computer-readable medium having computer-implementable instructions for causing at least one processing unit to perform acts comprising:
decoding bitstream data by:
reconstructing DCT reference data; and
quantizing said reconstructed DCT reference data using a quantization step (Qs) on the reconstructed DCT reference and not on decoded DCT residue and the DCT prediction data.
40. The computer-readable medium as recited in claim 39, wherein quantizing said reconstructed DCT reference data is represented by:
Y = [X*A(Qs) + 2^19] / 2^20,
wherein X includes a reconstructed DCT coefficient and Y includes a quantized DCT coefficient, and A(.) is associated with a quantization table.
41. The computer-readable medium as recited in claim 39, having computer-implementable instructions for causing the at least one processing unit to perform further acts comprising:
merging a dequantization QP and quantization QS into one operation represented by:
Lrec = ( [ Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp) ] * A(Qs) + 2^19 ) / 2^20,
where Lerr are levels of the prediction error coefficients, Kpred are prediction coefficients.
42. The computer-readable medium as recited in claim 41, having computer-implementable instructions for causing the at least one processing unit to perform further acts comprising:
generating data for a high quality display based at least in part on
Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp).
43. The computer-readable medium as recited in claim 39 having computer-implementable instructions for causing the at least one processing unit to perform further acts comprising:
selectively reconstructing different qualities of display images.
44. The computer-readable medium as recited in claim 39 having computer-implementable instructions for causing the at least one processing unit to perform further acts comprising:
receiving decoder notification data associated with at least one switching event.
45. The computer-readable medium as recited in claim 44, wherein said decoder notification includes at least a one-bit syntax having a switching bitstream flag.
46. The computer-readable medium as recited in claim 45, wherein said decoder notification data includes at least one quantization parameter.
47. A decoder comprising:
logic operatively configured to reconstruct DCT reference data and quantize said reconstructed DCT reference data using a quantization step (Qs) on the reconstructed DCT reference and not on decoded DCT residue and the DCT prediction data.
48. The decoder as recited in claim 47, wherein said logic quantizes said reconstructed DCT reference data as:
Y = [X*A(Qs) + 2^19] / 2^20,
wherein X includes a reconstructed DCT coefficient and Y includes a quantized DCT coefficient, and A(.) is associated with a quantization table.
49. The decoder as recited in claim 47, wherein said logic merges a dequantization QP and quantization QS into one operation represented by:
Lrec = ( [ Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp) ] * A(Qs) + 2^19 ) / 2^20,
where Lerr are levels of the prediction error coefficients, Kpred are prediction coefficients.
50. The decoder as recited in claim 49, wherein said logic is further configured to generate data for a high quality display based at least in part on
Kpred(i,j) + Lerr(i,j)*(2^20 + A(Qp)/2) / A(Qp).
51. The decoder as recited in claim 47, wherein said logic is further configured to selectively reconstruct different qualities of display images.
52. The decoder as recited in claim 47 wherein said logic is further configured to receive decoder notification data associated with at least one switching event.
53. The decoder as recited in claim 52, wherein said decoder notification includes at least a one-bit syntax having a switching bitstream flag.
54. The decoder as recited in claim 53, wherein said decoder notification data includes at least one quantization parameter.
US10/185,741 2002-01-25 2002-06-27 Methods and apparatuses for use in switching between streaming video bitstreams Abandoned US20030151753A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US10/185,741 US20030151753A1 (en) 2002-02-08 2002-06-27 Methods and apparatuses for use in switching between streaming video bitstreams
EP02028649A EP1337111A3 (en) 2002-02-08 2002-12-20 Method and apparatus for switching between video bitstreams
JP2003018057A JP2003244700A (en) 2002-01-25 2003-01-27 Seamless switching of scalable video bitstream
KR10-2003-0007895A KR20030067589A (en) 2002-02-08 2003-02-07 Methods and apparatuses for use in switching between streaming video bitstreams
JP2003032872A JP2003283340A (en) 2002-02-08 2003-02-10 Encoding method and decoding method
US12/472,266 US8576919B2 (en) 2002-02-08 2009-05-26 Methods and apparatuses for use in switching between streaming video bitstreams
US14/071,540 US9686546B2 (en) 2002-02-08 2013-11-04 Switching between streaming video bitstreams

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US35507102P 2002-02-08 2002-02-08
US10/185,741 US20030151753A1 (en) 2002-02-08 2002-06-27 Methods and apparatuses for use in switching between streaming video bitstreams

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/472,266 Division US8576919B2 (en) 2002-02-08 2009-05-26 Methods and apparatuses for use in switching between streaming video bitstreams

Publications (1)

Publication Number Publication Date
US20030151753A1 true US20030151753A1 (en) 2003-08-14

Family

ID=27668244

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/185,741 Abandoned US20030151753A1 (en) 2002-01-25 2002-06-27 Methods and apparatuses for use in switching between streaming video bitstreams
US12/472,266 Expired - Fee Related US8576919B2 (en) 2002-02-08 2009-05-26 Methods and apparatuses for use in switching between streaming video bitstreams
US14/071,540 Active 2024-10-29 US9686546B2 (en) 2002-02-08 2013-11-04 Switching between streaming video bitstreams

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/472,266 Expired - Fee Related US8576919B2 (en) 2002-02-08 2009-05-26 Methods and apparatuses for use in switching between streaming video bitstreams
US14/071,540 Active 2024-10-29 US9686546B2 (en) 2002-02-08 2013-11-04 Switching between streaming video bitstreams

Country Status (2)

Country Link
US (3) US20030151753A1 (en)
EP (1) EP1337111A3 (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050262257A1 (en) * 2004-04-30 2005-11-24 Major R D Apparatus, system, and method for adaptive-rate shifting of streaming content
US20060127946A1 (en) * 2002-08-16 2006-06-15 Montagu Jean I Reading of fluorescent arrays
US20080013628A1 (en) * 2006-07-14 2008-01-17 Microsoft Corporation Computation Scheduling and Allocation for Visual Communication
US20080031344A1 (en) * 2006-08-04 2008-02-07 Microsoft Corporation Wyner-Ziv and Wavelet Video Coding
US20080079612A1 (en) * 2006-10-02 2008-04-03 Microsoft Corporation Request Bits Estimation for a Wyner-Ziv Codec
US20080084999A1 (en) * 2006-10-05 2008-04-10 Industrial Technology Research Institute Encoders and image encoding methods
US20080174794A1 (en) * 2007-01-24 2008-07-24 Xerox Corporation Gradual charge pump technique for optimizing phase locked loop (PLL) function in sub-pixel generation for high speed laser printers switching between different speeds
US20080181221A1 (en) * 2005-04-11 2008-07-31 Markus Kampmann Technique for Controlling Data Packet Transmission of Variable Bit Rate Data
US20080186849A1 (en) * 2005-04-11 2008-08-07 Markus Kampmann Technique for Dynamically Controlling Data Packet Transmissions
US20080195743A1 (en) * 2004-04-30 2008-08-14 Brueck David F Apparatus, system, and method for multi-bitrate content streaming
US20080222235A1 (en) * 2005-04-28 2008-09-11 Hurst Mark B System and method of minimizing network bandwidth retrieved from an external network
US20080263180A1 (en) * 2007-04-19 2008-10-23 Hurst Mark B Apparatus, system, and method for resilient content acquisition
US20080291065A1 (en) * 2007-05-25 2008-11-27 Microsoft Corporation Wyner-Ziv Coding with Multiple Side Information
US20090043906A1 (en) * 2007-08-06 2009-02-12 Hurst Mark B Apparatus, system, and method for multi-bitrate content streaming
US20090064254A1 (en) * 2007-02-27 2009-03-05 Canon Kabushiki Kaisha Method and device for transmitting data
US20090182889A1 (en) * 2008-01-15 2009-07-16 Move Networks, Inc. System and method of managing multiple video players
US20100064335A1 (en) * 2008-09-10 2010-03-11 Geraint Jenkin Virtual set-top box
US20100114857A1 (en) * 2008-10-17 2010-05-06 John Edwards User interface with available multimedia content from multiple multimedia websites
US20100205049A1 (en) * 2009-02-12 2010-08-12 Long Dustin W Advertisement management for live internet multimedia content
US20110022471A1 (en) * 2009-07-23 2011-01-27 Brueck David F Messaging service for providing updates for multimedia content of a live event delivered over the internet
US20110058675A1 (en) * 2009-09-04 2011-03-10 Brueck David F Controlling access to copies of media content by a client device
US20110090965A1 (en) * 2009-10-21 2011-04-21 Hong Kong Applied Science and Technology Research Institute Company Limited Generation of Synchronized Bidirectional Frames and Uses Thereof
US20110150099A1 (en) * 2009-12-21 2011-06-23 Calvin Ryan Owen Audio Splitting With Codec-Enforced Frame Sizes
US20110150084A1 (en) * 2006-03-27 2011-06-23 Hae-Chul Choi Scalable video encoding and decoding method using switching pictures and apparatus thereof
US8311102B2 (en) 2006-07-26 2012-11-13 Microsoft Corporation Bitstream switching in multiple bit-rate video streaming environments
US20130173760A1 (en) * 2010-09-20 2013-07-04 Humax Co., Ltd. Processing method to be implemented upon the occurrence of an expression switch in http streaming
US8650301B2 (en) 2008-10-02 2014-02-11 Ray-V Technologies, Ltd. Adaptive data rate streaming in a peer-to-peer network delivering video content
US8752085B1 (en) 2012-02-14 2014-06-10 Verizon Patent And Licensing Inc. Advertisement insertion into media content for streaming
US9332051B2 (en) 2012-10-11 2016-05-03 Verizon Patent And Licensing Inc. Media manifest file generation for adaptive streaming cost management
US9510029B2 (en) 2010-02-11 2016-11-29 Echostar Advanced Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US9578354B2 (en) 2011-04-18 2017-02-21 Verizon Patent And Licensing Inc. Decoupled slicing and encoding of media content
US9609340B2 (en) 2011-12-28 2017-03-28 Verizon Patent And Licensing Inc. Just-in-time (JIT) encoding for streaming media content
US9686546B2 (en) 2002-02-08 2017-06-20 Microsoft Technology Licensing, Llc Switching between streaming video bitstreams
US9832442B2 (en) 2008-01-15 2017-11-28 Echostar Technologies Llc System and method of managing multiple video players executing on multiple devices
US10194183B2 (en) 2015-12-29 2019-01-29 DISH Technologies L.L.C. Remote storage digital video recorder streaming and related methods

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8888592B1 (en) 2009-06-01 2014-11-18 Sony Computer Entertainment America Llc Voice overlay
US8968087B1 (en) 2009-06-01 2015-03-03 Sony Computer Entertainment America Llc Video game overlay
US8613673B2 (en) 2008-12-15 2013-12-24 Sony Computer Entertainment America Llc Intelligent game loading
US8147339B1 (en) 2007-12-15 2012-04-03 Gaikai Inc. Systems and methods of serving game video
US8926435B2 (en) 2008-12-15 2015-01-06 Sony Computer Entertainment America Llc Dual-mode program execution
US8506402B2 (en) 2009-06-01 2013-08-13 Sony Computer Entertainment America Llc Game execution environments
US8676591B1 (en) 2010-08-02 2014-03-18 Sony Computer Entertainment America Llc Audio deceleration
KR20170129297A (en) 2010-09-13 2017-11-24 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 A game server
KR102126910B1 (en) 2010-09-13 2020-06-25 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 Add-on Management
JP2012222530A (en) * 2011-04-06 2012-11-12 Sony Corp Receiving device and method, and program
WO2014158049A1 (en) * 2013-03-28 2014-10-02 Huawei Technologies Co., Ltd Method for protecting a video frame sequence against packet loss
KR102191878B1 (en) 2014-07-04 2020-12-16 삼성전자주식회사 Method and apparatus for receiving media packet in a multimedia system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020118755A1 (en) * 2001-01-03 2002-08-29 Marta Karczewicz Video coding architecture and methods for using same
US6480541B1 (en) * 1996-11-27 2002-11-12 Realnetworks, Inc. Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts
US20030067872A1 (en) * 2001-09-17 2003-04-10 Pulsent Corporation Flow control method for quality streaming of audio/video/media over packet networks
US20030138042A1 (en) * 2001-12-21 2003-07-24 Yen-Kuang Chen Zigzag in-order for image/video encoder and decoder
US20030206659A1 (en) * 1998-09-08 2003-11-06 Canon Kabushiki Kaisha Image processing apparatus including an image data encoder having at least two scalability modes and method therefor
US6700933B1 (en) * 2000-02-15 2004-03-02 Microsoft Corporation System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (PFGS) video coding
US6795501B1 (en) * 1997-11-05 2004-09-21 Intel Corporation Multi-layer coder/decoder for producing quantization error signal samples
US20050002458A1 (en) * 2001-10-26 2005-01-06 Bruls Wilhelmus Hendrikus Alfonsus Spatial scalable compression
US20050135477A1 (en) * 2000-07-11 2005-06-23 Microsoft Corporation Systems and methods with error resilience in enhancement layer bitstream of scalable video coding

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2126467A1 (en) * 1993-07-13 1995-01-14 Barin Geoffry Haskell Scalable encoding and decoding of high-resolution progressive video
US5687095A (en) 1994-11-01 1997-11-11 Lucent Technologies Inc. Video transmission rate matching for multimedia communication systems
US5887110A (en) 1995-03-28 1999-03-23 Nippon Telegraph & Telephone Corp. Video data playback system using effective scheme for producing coded video data for fast playback mode
JP3263807B2 (en) * 1996-09-09 2002-03-11 ソニー株式会社 Image encoding apparatus and image encoding method
US5982436A (en) 1997-03-28 1999-11-09 Philips Electronics North America Corp. Method for seamless splicing in a video encoder
JP3191922B2 (en) * 1997-07-10 2001-07-23 松下電器産業株式会社 Image decoding method
US6208671B1 (en) 1998-01-20 2001-03-27 Cirrus Logic, Inc. Asynchronous sample rate converter
JPH11252546A (en) 1998-02-27 1999-09-17 Hitachi Ltd Transmission speed converter
US6292512B1 (en) 1998-07-06 2001-09-18 U.S. Philips Corporation Scalable video coding system
US7035278B2 (en) 1998-07-31 2006-04-25 Sedna Patent Services, Llc Method and apparatus for forming and utilizing a slotted MPEG transport stream
EP1169864A2 (en) 1999-04-14 2002-01-09 Sarnoff Corporation Method for generating and processing transition streams
US6262512B1 (en) * 1999-11-08 2001-07-17 Jds Uniphase Inc. Thermally actuated microelectromechanical systems including thermal isolation structures
JP3963296B2 (en) 1999-12-27 2007-08-22 株式会社Kddi研究所 Video transmission rate conversion device
FI120125B (en) 2000-08-21 2009-06-30 Nokia Corp Image Coding
US20020122491A1 (en) * 2001-01-03 2002-09-05 Marta Karczewicz Video decoder architecture and method for using same
DE60128152D1 (en) 2001-06-19 2007-06-06 Stratos Wireless Inc DIPLEXER SWITCHING / CIRCUIT WITH MODEM FUNCTION
US20030067672A1 (en) * 2001-10-10 2003-04-10 George Bodeep Programmable gain clamped and flattened-spectrum high power erbium-doped fiber amplifier
US7076204B2 (en) 2001-10-30 2006-07-11 Unwired Technology Llc Multiple channel wireless communication system
US6987947B2 (en) 2001-10-30 2006-01-17 Unwired Technology Llc Multiple channel wireless communication system
JP4062924B2 (en) 2002-01-24 2008-03-19 コニカミノルタホールディングス株式会社 Color image processing method and color image processing apparatus
US20030151753A1 (en) 2002-02-08 2003-08-14 Shipeng Li Methods and apparatuses for use in switching between streaming video bitstreams
US6996173B2 (en) 2002-01-25 2006-02-07 Microsoft Corporation Seamless switching of scalable video bitstreams
JP2003244700A (en) 2002-01-25 2003-08-29 Microsoft Corp Seamless switching of scalable video bitstream
JP2003323705A (en) * 2002-05-02 2003-11-14 Fuji Photo Film Co Ltd Servo writer

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6480541B1 (en) * 1996-11-27 2002-11-12 Realnetworks, Inc. Method and apparatus for providing scalable pre-compressed digital video with reduced quantization based artifacts
US6795501B1 (en) * 1997-11-05 2004-09-21 Intel Corporation Multi-layer coder/decoder for producing quantization error signal samples
US20030206659A1 (en) * 1998-09-08 2003-11-06 Canon Kabushiki Kaisha Image processing apparatus including an image data encoder having at least two scalability modes and method therefor
US6700933B1 (en) * 2000-02-15 2004-03-02 Microsoft Corporation System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (PFGS) video coding
US20050135477A1 (en) * 2000-07-11 2005-06-23 Microsoft Corporation Systems and methods with error resilience in enhancement layer bitstream of scalable video coding
US20020118755A1 (en) * 2001-01-03 2002-08-29 Marta Karczewicz Video coding architecture and methods for using same
US20030067872A1 (en) * 2001-09-17 2003-04-10 Pulsent Corporation Flow control method for quality streaming of audio/video/media over packet networks
US20050002458A1 (en) * 2001-10-26 2005-01-06 Bruls Wilhelmus Hendrikus Alfonsus Spatial scalable compression
US20030138042A1 (en) * 2001-12-21 2003-07-24 Yen-Kuang Chen Zigzag in-order for image/video encoder and decoder

Cited By (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9686546B2 (en) 2002-02-08 2017-06-20 Microsoft Technology Licensing, Llc Switching between streaming video bitstreams
US20060127946A1 (en) * 2002-08-16 2006-06-15 Montagu Jean I Reading of fluorescent arrays
US9407564B2 (en) 2004-04-30 2016-08-02 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US8402156B2 (en) 2004-04-30 2013-03-19 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8612624B2 (en) 2004-04-30 2013-12-17 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US20050262257A1 (en) * 2004-04-30 2005-11-24 Major R D Apparatus, system, and method for adaptive-rate shifting of streaming content
US9571551B2 (en) 2004-04-30 2017-02-14 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10225304B2 (en) 2004-04-30 2019-03-05 Dish Technologies Llc Apparatus, system, and method for adaptive-rate shifting of streaming content
US11470138B2 (en) 2004-04-30 2022-10-11 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9071668B2 (en) 2004-04-30 2015-06-30 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US20080195743A1 (en) * 2004-04-30 2008-08-14 Brueck David F Apparatus, system, and method for multi-bitrate content streaming
US20110035507A1 (en) * 2004-04-30 2011-02-10 Brueck David F Apparatus, system, and method for multi-bitrate content streaming
US10469554B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US7818444B2 (en) 2004-04-30 2010-10-19 Move Networks, Inc. Apparatus, system, and method for multi-bitrate content streaming
US10469555B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8868772B2 (en) 2004-04-30 2014-10-21 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US10951680B2 (en) 2004-04-30 2021-03-16 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US11677798B2 (en) 2004-04-30 2023-06-13 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US20080186849A1 (en) * 2005-04-11 2008-08-07 Markus Kampmann Technique for Dynamically Controlling Data Packet Transmissions
US9344476B2 (en) * 2005-04-11 2016-05-17 Telefonaktiebolaget Lm Ericsson (Publ) Technique for controlling data packet transmission of variable bit rate data
US20080181221A1 (en) * 2005-04-11 2008-07-31 Markus Kampmann Technique for Controlling Data Packet Transmission of Variable Bit Rate Data
US8804515B2 (en) * 2005-04-11 2014-08-12 Telefonaktiebolaget Lm Ericsson (Publ) Technique for dynamically controlling data packet transmissions
US8880721B2 (en) 2005-04-28 2014-11-04 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US9344496B2 (en) 2005-04-28 2016-05-17 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US20080222235A1 (en) * 2005-04-28 2008-09-11 Hurst Mark B System and method of minimizing network bandwidth retrieved from an external network
US8370514B2 (en) 2005-04-28 2013-02-05 DISH Digital L.L.C. System and method of minimizing network bandwidth retrieved from an external network
US8619854B2 (en) * 2006-03-27 2013-12-31 Electronics And Telecommunications Research Institute Scalable video encoding and decoding method using switching pictures and apparatus thereof
US20110150084A1 (en) * 2006-03-27 2011-06-23 Hae-Chul Choi Scalable video encoding and decoding method using switching pictures and apparatus thereof
US20080013628A1 (en) * 2006-07-14 2008-01-17 Microsoft Corporation Computation Scheduling and Allocation for Visual Communication
US8358693B2 (en) 2006-07-14 2013-01-22 Microsoft Corporation Encoding visual data with computation scheduling and allocation
US8311102B2 (en) 2006-07-26 2012-11-13 Microsoft Corporation Bitstream switching in multiple bit-rate video streaming environments
US8340193B2 (en) 2006-08-04 2012-12-25 Microsoft Corporation Wyner-Ziv and wavelet video coding
US20080031344A1 (en) * 2006-08-04 2008-02-07 Microsoft Corporation Wyner-Ziv and Wavelet Video Coding
US20080079612A1 (en) * 2006-10-02 2008-04-03 Microsoft Corporation Request Bits Estimation for a Wyner-Ziv Codec
US7388521B2 (en) 2006-10-02 2008-06-17 Microsoft Corporation Request bits estimation for a Wyner-Ziv codec
US20080084999A1 (en) * 2006-10-05 2008-04-10 Industrial Technology Research Institute Encoders and image encoding methods
US8175151B2 (en) * 2006-10-05 2012-05-08 Industrial Technology Research Institute Encoders and image encoding methods
US8488186B2 (en) * 2007-01-24 2013-07-16 Xerox Corporation Gradual charge pump technique for optimizing phase locked loop (PLL) function in sub-pixel generation for high speed laser printers switching between different speeds
US20080174794A1 (en) * 2007-01-24 2008-07-24 Xerox Corporation Gradual charge pump technique for optimizing phase locked loop (PLL) function in sub-pixel generation for high speed laser printers switching between different speeds
US8429706B2 (en) * 2007-02-27 2013-04-23 Canon Kabushiki Kaisha Method and device for transmitting data
US20090064254A1 (en) * 2007-02-27 2009-03-05 Canon Kabushiki Kaisha Method and device for transmitting data
US20080263180A1 (en) * 2007-04-19 2008-10-23 Hurst Mark B Apparatus, system, and method for resilient content acquisition
US8340192B2 (en) 2007-05-25 2012-12-25 Microsoft Corporation Wyner-Ziv coding with multiple side information
US20080291065A1 (en) * 2007-05-25 2008-11-27 Microsoft Corporation Wyner-Ziv Coding with Multiple Side Information
US8683066B2 (en) 2007-08-06 2014-03-25 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10165034B2 (en) 2007-08-06 2018-12-25 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US20090043906A1 (en) * 2007-08-06 2009-02-12 Hurst Mark B Apparatus, system, and method for multi-bitrate content streaming
US10116722B2 (en) 2007-08-06 2018-10-30 Dish Technologies Llc Apparatus, system, and method for multi-bitrate content streaming
US8190760B2 (en) 2008-01-15 2012-05-29 Echostar Advanced Technologies L.L.C. System and method of managing multiple video players
US9832442B2 (en) 2008-01-15 2017-11-28 Echostar Technologies Llc System and method of managing multiple video players executing on multiple devices
US10412357B2 (en) * 2008-01-15 2019-09-10 DISH Technologies L.L.C. System and methods of managing multiple video players executing on multiple devices
US20090182889A1 (en) * 2008-01-15 2009-07-16 Move Networks, Inc. System and method of managing multiple video players
US9680889B2 (en) 2008-01-15 2017-06-13 Echostar Technologies L.L.C. System and method of managing multiple video players
US20180098044A1 (en) * 2008-01-15 2018-04-05 Echostar Technologies L.L.C. System and methods of managing multiple video players executing on multiple devices
US8683543B2 (en) 2008-09-10 2014-03-25 DISH Digital L.L.C. Virtual set-top box that executes service provider middleware
US11831952B2 (en) 2008-09-10 2023-11-28 DISH Technologies L.L.C. Virtual set-top box
US20100064335A1 (en) * 2008-09-10 2010-03-11 Geraint Jenkin Virtual set-top box
US20100064324A1 (en) * 2008-09-10 2010-03-11 Geraint Jenkin Dynamic video source selection
US10616646B2 (en) 2008-09-10 2020-04-07 Dish Technologies Llc Virtual set-top box that executes service provider middleware
US8332905B2 (en) 2008-09-10 2012-12-11 Echostar Advanced Technologies L.L.C. Virtual set-top box that emulates processing of IPTV video content
US8418207B2 (en) 2008-09-10 2013-04-09 DISH Digital L.L.C. Dynamic video source selection for providing the best quality programming
US8935732B2 (en) 2008-09-10 2015-01-13 Echostar Technologies L.L.C. Dynamic video source selection for providing the best quality programming
US8650301B2 (en) 2008-10-02 2014-02-11 Ray-V Technologies, Ltd. Adaptive data rate streaming in a peer-to-peer network delivering video content
US8903863B2 (en) 2008-10-17 2014-12-02 Echostar Technologies L.L.C. User interface with available multimedia content from multiple multimedia websites
US8321401B2 (en) 2008-10-17 2012-11-27 Echostar Advanced Technologies L.L.C. User interface with available multimedia content from multiple multimedia websites
US20100114857A1 (en) * 2008-10-17 2010-05-06 John Edwards User interface with available multimedia content from multiple multimedia websites
US20100205049A1 (en) * 2009-02-12 2010-08-12 Long Dustin W Advertisement management for live internet multimedia content
US9009066B2 (en) 2009-02-12 2015-04-14 Echostar Technologies L.L.C. Advertisement management for live internet multimedia content
US20110022471A1 (en) * 2009-07-23 2011-01-27 Brueck David F Messaging service for providing updates for multimedia content of a live event delivered over the internet
US10410222B2 (en) 2009-07-23 2019-09-10 DISH Technologies L.L.C. Messaging service for providing updates for multimedia content of a live event delivered over the internet
US9203816B2 (en) 2009-09-04 2015-12-01 Echostar Technologies L.L.C. Controlling access to copies of media content by a client device
US20110058675A1 (en) * 2009-09-04 2011-03-10 Brueck David F Controlling access to copies of media content by a client device
US20110090965A1 (en) * 2009-10-21 2011-04-21 Hong Kong Applied Science and Technology Research Institute Company Limited Generation of Synchronized Bidirectional Frames and Uses Thereof
US9338523B2 (en) 2009-12-21 2016-05-10 Echostar Technologies L.L.C. Audio splitting with codec-enforced frame sizes
US20110150099A1 (en) * 2009-12-21 2011-06-23 Calvin Ryan Owen Audio Splitting With Codec-Enforced Frame Sizes
US10075744B2 (en) 2010-02-11 2018-09-11 DISH Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US9510029B2 (en) 2010-02-11 2016-11-29 Echostar Advanced Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US20130173760A1 (en) * 2010-09-20 2013-07-04 Humax Co., Ltd. Processing method to be implemented upon the occurrence of an expression switch in http streaming
US9578354B2 (en) 2011-04-18 2017-02-21 Verizon Patent And Licensing Inc. Decoupled slicing and encoding of media content
US9609340B2 (en) 2011-12-28 2017-03-28 Verizon Patent And Licensing Inc. Just-in-time (JIT) encoding for streaming media content
US8789090B1 (en) 2012-02-14 2014-07-22 Uplynk, LLC Advertisement insertion into media content for streaming
US8973032B1 (en) 2012-02-14 2015-03-03 Verizon Patent And Licensing Inc. Advertisement insertion into media content for streaming
US8966523B1 (en) 2012-02-14 2015-02-24 Verizon Patent And Licensing Inc. Advertisement insertion into media content for streaming
US8752085B1 (en) 2012-02-14 2014-06-10 Verizon Patent And Licensing Inc. Advertisement insertion into media content for streaming
US8990849B2 (en) 2012-02-14 2015-03-24 Verizon Patent And Licensing Inc. Advertisement insertion into media content for streaming
US9332051B2 (en) 2012-10-11 2016-05-03 Verizon Patent And Licensing Inc. Media manifest file generation for adaptive streaming cost management
US10368109B2 (en) 2015-12-29 2019-07-30 DISH Technologies L.L.C. Dynamic content delivery routing and related methods and systems
US10194183B2 (en) 2015-12-29 2019-01-29 DISH Technologies L.L.C. Remote storage digital video recorder streaming and related methods
US10687099B2 (en) 2015-12-29 2020-06-16 DISH Technologies L.L.C. Methods and systems for assisted content delivery
US10721508B2 (en) 2015-12-29 2020-07-21 DISH Technologies L.L.C. Methods and systems for adaptive content delivery

Also Published As

Publication number Publication date
US9686546B2 (en) 2017-06-20
EP1337111A2 (en) 2003-08-20
EP1337111A3 (en) 2006-07-12
US20090238267A1 (en) 2009-09-24
US20140086308A1 (en) 2014-03-27
US8576919B2 (en) 2013-11-05

Similar Documents

Publication Publication Date Title
US9686546B2 (en) Switching between streaming video bitstreams
US6996173B2 (en) Seamless switching of scalable video bitstreams
US7391807B2 (en) Video transcoding of scalable multi-layer videos to single layer video
US6944222B2 (en) Efficiency FGST framework employing higher quality reference frames
US20070121723A1 (en) Scalable video coding method and apparatus based on multiple layers
US20020037046A1 (en) Totally embedded FGS video coding with motion compensation
US20060165304A1 (en) Multilayer video encoding/decoding method using residual re-estimation and apparatus using the same
US7263124B2 (en) Scalable coding scheme for low latency applications
US20050157794A1 (en) Scalable video encoding method and apparatus supporting closed-loop optimization
CA2543947A1 (en) Method and apparatus for adaptively selecting context model for entropy coding
US20040179606A1 (en) Method for transcoding fine-granular-scalability enhancement layer of video to minimized spatial variations
US20060013311A1 (en) Video decoding method using smoothing filter and video decoder therefor
EP2084907B1 (en) Method and system for scalable bitstream extraction
WO2008084184A2 (en) Generalised hypothetical reference decoder for scalable video coding with bitstream rewriting
WO2013001013A1 (en) Method for decoding a scalable video bit-stream, and corresponding decoding device
Arnold et al. Efficient drift-free signal-to-noise ratio scalability
EP2051525A1 (en) Bandwidth and content dependent transmission of scalable video layers
US6944346B2 (en) Efficiency FGST framework employing higher quality reference frames
US7912124B2 (en) Motion compensation for fine-grain scalable video
US8175151B2 (en) Encoders and image encoding methods
KR20030067589A (en) Methods and apparatuses for use in switching between streaming video bitstreams
Hsu et al. A new seamless bitstream switching scheme for H.264 video adaptation with enhanced coding performance
Wolf Multidimensional Transcoding for Adaptive Video Streaming

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, SHIPENG;WU, FENG;SUN, XIAOYAN;AND OTHERS;REEL/FRAME:013354/0963

Effective date: 20020920

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001

Effective date: 20141014