US20100274919A1 - Bandwidth allocation to support fast buffering - Google Patents

Bandwidth allocation to support fast buffering

Info

Publication number
US20100274919A1
Authority
US
United States
Prior art keywords
bandwidth
media
client
buffer
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/829,495
Inventor
Spencer Greene
Robert Dykes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Juniper Networks Inc
Original Assignee
Juniper Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Juniper Networks Inc
Priority to US12/829,495
Publication of US20100274919A1
Status: Abandoned

Classifications

    • All classifications fall under H (Electricity), H04 (Electric communication technique), H04L (Transmission of digital information, e.g. telegraphic communication)
    • H04L 12/2801 Broadband local area networks
    • H04L 12/2874 Processing of data for distribution to the subscribers (remote access server, e.g. BRAS)
    • H04L 47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/805 QoS or priority aware
    • H04L 47/826 Resource allocation involving periods of time
    • H04L 65/752 Media network packet handling adapting media to network capabilities
    • H04L 65/80 Responding to QoS
    • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast

Abstract

A system delivers a media stream to a client using a delivery bandwidth. The system adjusts an amount of the bandwidth used to deliver the media stream based on a state of a buffer associated with the client that receives and buffers the delivered media stream.

Description

    BACKGROUND
  • Streaming media typically includes audio and/or video transmitted over networks, such as, for example, the Internet, in a streaming or continuous fashion. In streaming media applications, streaming audio and/or video data may be played back without the data being completely downloaded first. Streaming media may, thus, be viewed or listened to in “real-time” as the data is received. Streaming media may be user-controlled (e.g., on-demand, pay-per-view movies, etc.) or server-controlled (e.g., webcasting).
  • There are several network-based streaming services including, for example, audio streaming and video-on-demand (cable, Internet Protocol Television (IPTV)). Audio streaming (voice or music) may include the distribution of voice or music containing media over the Internet for user listening. Video-on-demand (VOD) allows users to select and watch video content over a network as part of an interactive television system. VOD systems may stream content allowing viewing while the video is being downloaded.
  • Networks, such as Internet Protocol (IP) networks, carry bursty traffic and can experience occasional periods of congestion, loss or high latency. When delivering rich media, such as, for example, streaming media including audio and/or video over an IP network, it is common to provide buffering at the receiver end of the communication. As long as average bandwidth delivery is sufficient to support the media stream and instantaneous degradations are shorter in duration than the amount of play time held in the buffer, the media can be played without interruption. A drawback with existing streaming media playback systems is that when initiating a media stream (e.g., at channel change time), the buffer must fill before media playback begins.
  • There is, therefore, a tradeoff between better robustness to instantaneous network degradation (i.e., achieved by buffering for a longer time period) and faster channel change (i.e., achieved by buffering for a shorter time period).
  • SUMMARY
  • In accordance with one implementation, a method may include delivering a media stream to a client using a delivery bandwidth. The method may further include adjusting an amount of the bandwidth used to deliver the media stream based on a state of a buffer associated with the client that receives and buffers the delivered media stream.
  • In another implementation, a media server may include a communication interface that delivers a media stream to a client across a network using a delivery bandwidth. The media server may further include a processing unit that adjusts an amount of the bandwidth used to deliver the media stream based on a state of a buffer associated with the client that receives and buffers the delivered media stream.
  • In still another implementation, a method may include requesting delivery of a media stream from a media server for a period of time sufficient to fill a buffer. The method may further include receiving a first portion of the media stream over a first bandwidth for the period of time and receiving a second portion of the media stream over a second bandwidth after expiration of the period of time, wherein the second bandwidth is different than the first bandwidth.
  • In yet another implementation, a method may include reserving a portion of a network bandwidth to divide the network bandwidth into a reserved network bandwidth and an unreserved network bandwidth and receiving a request for media delivery from a client. The method may further include transmitting the media via a first bandwidth portion of the reserved network bandwidth for a time period that is based on an amount of time to fill a buffer at the client and transmitting the media via a second bandwidth portion of the unreserved network bandwidth after expiration of the time period, wherein the second bandwidth portion comprises less bandwidth than the first bandwidth portion.
  • In a further implementation, a method may include setting a first buffer size for buffering a first portion of a media stream delivered according to a first network service level agreement. The method further includes setting a second buffer size for buffering a second portion of the media stream delivered according to a second network service level agreement, wherein the first network service level agreement comprises a better service quality than the second network service level agreement and wherein the first buffer size is smaller than the second buffer size.
  • In an additional implementation, a method may include determining a period of time to sufficiently fill a client buffer that buffers a media stream and delivering a first portion of the media stream to the client using a first bandwidth during the period of time. The method may further include delivering a second portion of the media stream to the client using a second bandwidth subsequent to expiration of the period of time, wherein the first bandwidth is different than the second bandwidth.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments described herein and, together with the description, explain the invention. In the drawings,
  • FIG. 1 is a diagram of an overview of an exemplary embodiment described herein;
  • FIG. 2 is a diagram of an exemplary network in which systems and methods may be implemented;
  • FIG. 3 is a diagram illustrating an exemplary embodiment in which a sub-network of the network of FIG. 2 includes a hybrid optical fiber/coaxial (HFC) cable network;
  • FIG. 4 graphically depicts the transmission of streaming media between the media server and a client of FIG. 2 via a fast buffer fill bandwidth or a steady state bandwidth;
  • FIG. 5 is an exemplary diagram of a client of FIG. 2;
  • FIG. 6 is an exemplary diagram of the media server of FIG. 2;
  • FIGS. 7A and 7B are flow charts that illustrate an exemplary process for buffering media data received via a fast buffer fill bandwidth and a steady state bandwidth;
  • FIG. 8 is a messaging diagram that depicts messages and data transmitted between the media server and a client of FIG. 2; and
  • FIGS. 9A and 9B are flow charts that illustrate a process for allocating fast buffer fill bandwidth for transmission of media data according to an exemplary implementation.
  • DETAILED DESCRIPTION
  • The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
  • Exemplary embodiments implement mechanisms that permit streaming media buffering to occur quickly at clients that receive and playback streaming media. During the initiation of a media stream (e.g., at channel change) or during buffer underflow events, a better network service level agreement (SLA), that permits media delivery at a higher rate or with less variability, may be allocated to a connection between a media delivery server and the receiving client. For example, if the better SLA includes a higher bandwidth, the higher network bandwidth allocation may persist for an adequate period of time to permit the receiving buffer at the client to buffer a sufficient amount of the streaming media data. Once this period of time has elapsed, the higher network bandwidth may be de-allocated and media delivery may continue with a lower network bandwidth, lower delivery rate connection between the media server and client. Exemplary embodiments, thus, permit the temporary allocation of a higher network bandwidth for quick media buffering at a media playback system.
  • OVERVIEW
  • FIG. 1 illustrates an exemplary overview of an implementation described herein. As shown in FIG. 1, a media server 100 may transmit media data 120 using a high bandwidth 110, identified in FIG. 1 as a “fast buffer fill bandwidth,” to fill a buffer 130 at a client that is either empty or experiencing a buffer underflow condition. Buffer 130 may be empty because the stream of media data 120 has just been initiated (e.g., at channel change), or because the stream of media data 120 has been interrupted or sufficiently delayed such that buffer 130 does not contain enough media data to continue media playback (e.g., a buffer underflow event).
  • After a sufficient period of time (tBUFFER) to adequately fill buffer 130 with media data 120 has elapsed, media server 100 may begin transmitting the remaining media data 120 of the stream using a “steady state” bandwidth 140, which includes less bandwidth than fast buffer fill bandwidth 110.
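  • By way of a rough, hypothetical illustration (the description above does not prescribe a formula), the duration tBUFFER may be estimated from the amount of play time the client wants buffered, the media encoding rate, and the fast buffer fill delivery rate. The sketch below uses assumed example rates, and the helper name t_buffer_seconds is illustrative rather than part of the description.

    # Back-of-the-envelope sizing of t_BUFFER (illustrative assumption, not a
    # formula taken from the description).  buffer_target_s is the play time the
    # client wants buffered; media_rate and fast_rate are in bits per second.
    def t_buffer_seconds(buffer_target_s, media_rate, fast_rate,
                         playback_during_fill=True):
        """Return roughly how long fast buffer fill bandwidth 110 must be held."""
        needed_bits = buffer_target_s * media_rate        # data the buffer must hold
        if playback_during_fill:
            # Playback drains the buffer at media_rate while fast fill adds fast_rate.
            net_fill_rate = fast_rate - media_rate
        else:
            net_fill_rate = fast_rate
        if net_fill_rate <= 0:
            raise ValueError("fast fill rate must exceed the media rate")
        return needed_bits / net_fill_rate

    # Example: buffering 8 s of a 4 Mb/s stream over a 20 Mb/s fast fill channel.
    print(t_buffer_seconds(8, 4e6, 20e6, playback_during_fill=False))  # 1.6 s
    print(t_buffer_seconds(8, 4e6, 20e6, playback_during_fill=True))   # 2.0 s
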
  • Exemplary Network
  • FIG. 2 is a diagram of an exemplary network 200 in which systems and methods described herein may be implemented. Network 200 may include a media server 100 connected to one or more clients 210-1 through 210-N via a sub-network 220. Media server 100 and clients 210-1 through 210-N may connect with sub-network 220 via any type of link, such as, for example, wired or wireless links. Sub-network 220 can include one or more networks of any type, including a Public Land Mobile Network (PLMN), a digital subscriber line (DSL) network, a Public Switched Telephone Network (PSTN), a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), an optical fiber network, a hybrid optical fiber/coaxial (HFC) cable network, the Internet, or an Intranet. The one or more networks may alternatively include packet-switched sub-networks, such as, for example, General Packet Radio Service (GPRS), Cellular Digital Packet Data (CDPD), and Mobile IP sub-networks. In an implementation in which sub-network 220 includes an HFC cable network, media server 100 may connect to a head end of the HFC cable network. In an implementation in which sub-network 220 includes a DSL network, media server 100 may connect (i.e., indirectly) to a digital subscriber line access multiplexer (DSLAM) of the DSL network.
  • Media server 100 may include any type of entity that delivers media data (e.g., streaming media) to respective clients 210. Each of clients 210-1 through 210-N may include a device capable of receiving one or more streams of media data transmitted from media server 100, buffering the one or more streams of media data, and playing back the one or more streams using a media player. Each of clients 210-1 through 210-N may include, for example, a personal computer, a television, a telephone, a cellular radiotelephone, a Personal Communications System (PCS) terminal, a personal digital assistant (PDA), a laptop and/or palmtop. A PCS terminal may combine a cellular radiotelephone with data processing, facsimile and/or data communications capabilities.
  • It will be appreciated that the number of components illustrated in FIG. 2 is provided for explanatory purposes only. A typical network may include more or fewer components than are illustrated in FIG. 2.
  • FIG. 3 illustrates an exemplary embodiment in which sub-network 220 includes an HFC network. In the exemplary embodiment of FIG. 3, media server 100 may connect to a cable head end 300 of the HFC network, which may include one or more cable modem termination systems (CMTSs) 310-1 through 310-M. Each of CMTSs 310-1 through 310-M may connect to one or more clients 210 via, for example, coaxial cable. As shown in FIG. 3, CMTS 310-2 may connect to clients 210-1 through 210-N. Each of CMTSs 310-1 through 310-M may transmit media data on downstream channels via, for example, the coaxial cable.
  • As further illustrated in FIG. 3, each of clients 210-1 through 210-N may include a respective cable modem 320 and/or other customer premises equipment (CPE) 330. Each cable modem 320 may receive a downstream media data transmission from a respective CMTS 310 and pass the demodulated transmission on to a respective CPE 330. Each CPE 330 may include, for example, a personal computer, a television, a laptop or the like.
  • FIG. 4 graphically illustrates the transmission of media data 120 of a media stream from media server 100 to a client 210 using, alternatively, either fast buffer fill bandwidth 110 or steady state bandwidth 140. A fraction of the capacity of sub-network 220 (or at least the portions of sub-network 220 that are the most constrained) may be reserved such that one or more “channels” of fast buffer fill bandwidth 110 are available for use on-demand, with the remainder of the capacity of sub-network 220 being available for one or more “channels” of steady state bandwidth 140. For example, if sub-network 220 has a 100 stream capacity, 10 streams of the stream capacity may be reserved for fast buffer fill use. The remaining 90 streams of the stream capacity may be used for steady state use. In some implementations, media data 120 sent via fast buffer fill bandwidth 110 may have a higher designated priority or quality of service than media data 120 sent via steady state bandwidth 140 so that the media data 120 sent via fast buffer fill bandwidth 110 is less likely to be interrupted by congestion, loss or latency.
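  • A minimal sketch of the reserved-capacity bookkeeping described above follows, using the 100-stream capacity and 10-stream reservation from the example; the ChannelPool class and its method names are illustrative assumptions rather than elements of the described network.

    # Sketch of a pool that tracks reserved fast buffer fill channels versus
    # steady state channels (class and method names are assumptions).
    class ChannelPool:
        def __init__(self, total_streams=100, reserved_fast_fill=10):
            self.fast_fill_free = reserved_fast_fill                   # on-demand channels
            self.steady_state_free = total_streams - reserved_fast_fill

        def allocate_fast_fill(self):
            """Grant a fast buffer fill channel if a reserved channel is idle."""
            if self.fast_fill_free > 0:
                self.fast_fill_free -= 1
                return True
            return False                                               # request would be denied

        def release_fast_fill(self):
            """Return a channel to the pool once the fill period expires."""
            self.fast_fill_free += 1

    pool = ChannelPool()
    granted = pool.allocate_fast_fill()   # uses one of the 10 reserved channels
    pool.release_fast_fill()              # released after the buffer fill period
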
  • When client 210 needs to buffer data of the media stream (e.g., when the media stream is first initiated, or when a buffer underflow event occurs during transmission of the media stream), media server 100 may transmit a portion of the media data 120 of the media stream at a higher rate via a high bandwidth (e.g., via fast buffer fill bandwidth 110) of sub-network 220. After a period of time sufficient to fill a buffer at client 210, media server 100 may transmit a remaining portion of the media data 120 of the media stream at a lower rate via steady state bandwidth 140, where steady state bandwidth 140 has a lower bandwidth than fast buffer fill bandwidth 110. Use of fast buffer fill bandwidth 110, thus, enables client 210 to quickly buffer media data of the media stream, thereby, reducing interruptions in playback of the media stream at client 210.
  • Exemplary Client
  • FIG. 5 is a diagram of a portion of client 210 according to an exemplary implementation. Client 210 may include a buffer 500, a buffer controller 510, a playback system 520 and an output device 530. In some implementations, buffer 500 may be implemented by a memory device (not shown), and buffer controller 510 and playback system 520 may be implemented by a processing unit (not shown), such as, for example, a microprocessor.
  • Buffer 500 may receive and store streaming media data 120 received from media server 100. Buffer controller 510 may control the sequential storage of streaming media data 120 in buffer 500, and retrieval of media data 120 from buffer 500 for playback by playback system 520. Playback system 520 may receive data retrieved from buffer 500 by buffer controller 510, and may play the streaming media data 120 to a listener or viewer via output device 530. For example, playback system 520 may decode the data from buffer 500 before using output device 530 to convert the decoded data from an electrical signal to an auditory output signal. As another example, playback system 520 may decode the data from buffer 500 before using output device 530 to convert the video data to a visual representation on a visual display unit. Playback system 520 may simultaneously convert audio and video data from media data 120 to an auditory output signal and a visual representation on a visual display unit.
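  • The following sketch illustrates, under stated assumptions, how buffer 500, buffer controller 510, and playback system 520 could interact: media data is stored sequentially, and playback starts once a minimum amount of play time has been buffered. The class names, the deque-based buffer, and the threshold logic are assumptions rather than details of the description.

    # Illustrative buffer controller: store incoming chunks, start playback once
    # the buffer holds enough play time, and report underflow when it runs dry.
    from collections import deque

    class PlaybackSystem:                      # stand-in for playback system 520
        def __init__(self):
            self.started = False

        def start(self):
            self.started = True

    class BufferController:                    # stand-in for buffer controller 510
        def __init__(self, playback, min_buffered_seconds):
            self.buffer = deque()              # stand-in for buffer 500
            self.playback = playback
            self.min_buffered_seconds = min_buffered_seconds
            self.buffered_seconds = 0.0

        def store(self, chunk, chunk_duration_s):
            """Sequentially store received media data in the buffer."""
            self.buffer.append(chunk)
            self.buffered_seconds += chunk_duration_s
            if not self.playback.started and self.buffered_seconds >= self.min_buffered_seconds:
                self.playback.start()          # buffer is sufficiently filled

        def next_chunk(self, chunk_duration_s):
            """Retrieve the next chunk for decoding and playback."""
            if not self.buffer:
                return None                    # buffer underflow event
            self.buffered_seconds -= chunk_duration_s
            return self.buffer.popleft()
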
  • Exemplary Media Server
  • FIG. 6 illustrates a diagram of a portion of a media server 100 according to an exemplary embodiment. Media server 100 may include a processing unit 605, a memory 610 (or other storage), a communication interface(s) 615 and a bus 620. Processing unit 605 may include a processor, microprocessor or processing logic. Processing unit 605 may perform data processing functions for data (e.g., media data) transmitted/received via communication interface 615. Memory 610 may include Random Access Memory (RAM) that provides temporary working storage of data and instructions for use by processing unit 605 in performing control and processing functions. Memory 610 may additionally include Read Only Memory (ROM) that provides permanent or semi-permanent storage of data and instructions for use by processing unit 605. Memory 610 can also include large-capacity storage devices, such as a magnetic and/or optical recording medium and its corresponding drive.
  • Communication interface 615 may include known circuitry for transmitting data to, or receiving data from, sub-network 220. Such circuitry may include modulators/demodulators, amplifiers, filters, interleavers, error correction circuitry, and/or other known circuitry used for network communication. Bus 620 interconnects the various components of media server 100 to permit the components to communicate with one another.
  • Exemplary Client-Side Process
  • FIGS. 7A and 7B are a flowchart of a process for buffering media data 120 received via a fast buffer fill bandwidth 110 and a steady state bandwidth 140 according to an exemplary implementation. A client 210 may implement the process exemplified by FIGS. 7A and 7B.
  • The exemplary process may begin with client 210 determining whether a buffer fill event has occurred, or is going to occur (block 700). A “buffer fill event” may occur at the initiation of the transmission of streaming media data 120 from media server 100 to client 210. When the transmission of streaming media data 120 is first initiated, buffer 500 may initially need to buffer an amount of media data 120. The “buffer fill event” may also include the circumstance where the stream of media data 120 being received at client 210 is interrupted or sufficiently delayed such that buffer 500 does not contain enough media data to continue media playback (e.g., a buffer underflow event).
  • Client 210 may send a message to media server 100 requesting allocation of fast buffer fill bandwidth 110 (block 705). The message from client 210 may include an indication of a specified duration (tBUFFER) of time over which the fast buffer fill bandwidth 110 is requested. The message from client 210 may alternatively identify an amount of data that needs to be transmitted via fast buffer fill bandwidth 110. Alternatively, the amount of data that needs to be transmitted via fast buffer fill bandwidth 110 may be a configured parameter at media server 100, or may be computed by media server 100. As graphically depicted in the messaging diagram of FIG. 8, client 210 may send a fast buffer fill bandwidth request 810 to media server 100. Client 210 may determine whether fast buffer fill bandwidth 110 has been allocated (block 710). As shown in FIG. 8, media server 100, or other network element, may notify client 210 of the allocation of fast buffer fill bandwidth 110 by returning a notification message 820 to client 210.
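  • Purely as an illustration, the request 810 and notification 820 exchanged above might carry fields such as those sketched below; the description does not define a message format, so the field names are assumptions.

    # Hypothetical shapes for request message 810 and notification message 820.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FastFillRequest:                     # request 810
        stream_id: str
        t_buffer_s: Optional[float] = None     # requested fast fill duration, or
        data_bytes: Optional[int] = None       # the amount of data to send fast

    @dataclass
    class FastFillNotification:                # notification 820
        stream_id: str
        granted: bool                          # allocation approved or denied
        fast_fill_rate_bps: Optional[int] = None

    request = FastFillRequest(stream_id="stream-42", t_buffer_s=2.0)
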
  • If fast buffer fill bandwidth 110 has been allocated to client 210 (YES—block 710), then buffer controller 510 may set a minimum buffer size of buffer 500 to a short duration (block 715). The short duration of the minimum buffer size of buffer 500 may be based on an amount of bandwidth (or variability thereof) of fast buffer fill bandwidth 110 allocated to client 210 and, thus, based on the rate that media data 120 is transmitted to client 210. Client 210 may then receive media data 120 from media server 100 via fast buffer fill bandwidth 110 (block 720). FIG. 8 graphically depicts client 210 receiving media data 120 from media server 100 via fast buffer fill bandwidth 110 over a buffer fill 830 period of time (e.g., tBUFFER).
  • Playback system 520 of client 210 may begin playback of the received media data when buffer 500 is sufficiently filled (block 725). Once buffer 500 has buffered a sufficient quantity of media data 120 to reduce chances that playback of media data 120 will be interrupted, playback system 520 may begin playback of the stream of media data 120. After buffer 500 is sufficiently filled with media data received via fast buffer fill bandwidth 110, client 210 may receive additional media data 120 of the stream at a lower rate via steady state bandwidth 140 (block 730). FIG. 8 graphically depicts client 210 receiving media data 120 at a lower rate via steady state bandwidth 140 from media server 100.
  • Returning to block 710, if client 210 receives an indication from media server 100 that fast buffer fill bandwidth 110 has not been allocated to client 210 (NO—block 710), then buffer controller 510 of client 210 may set a minimum buffer size of buffer 500 to a longer duration (block 735). Client 210 may receive a message from media server 100 denying allocation of fast buffer fill bandwidth 110. The longer duration of minimum buffer size of buffer 500 may be based on the lower bandwidth or higher variability of steady state bandwidth 140 and, thus, the lower rate that media data is transmitted to client 210.
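  • One way (again, an assumption rather than a rule stated above) to pick the short and long minimum buffer durations of blocks 715 and 735 is to size the buffer to the expected delivery jitter, scaled by the headroom the delivery rate gives over the media rate, as sketched below with illustrative constants.

    # Illustrative selection of the minimum buffer duration (blocks 715 and 735).
    def minimum_buffer_seconds(delivery_rate_bps, media_rate_bps, jitter_s,
                               fast_fill_granted):
        # Enough play time to ride out expected delivery jitter, reduced when the
        # delivery rate has headroom over the media rate.
        headroom = max(delivery_rate_bps / media_rate_bps, 1.0)
        base = jitter_s / headroom
        # Fall back to a longer minimum when only steady state bandwidth is used.
        return base if fast_fill_granted else max(base, 4.0 * jitter_s)

    # Fast fill granted: 20 Mb/s delivery of a 4 Mb/s stream with 1 s of jitter.
    print(minimum_buffer_seconds(20e6, 4e6, 1.0, True))    # 0.2 s (short duration)
    # Fast fill denied: 5 Mb/s steady state delivery of the same stream.
    print(minimum_buffer_seconds(5e6, 4e6, 1.0, False))    # 4.0 s (longer duration)
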
  • Client 210 may then receive media data 120 via steady state bandwidth 140 (block 740). Playback system 520 of client 210 may begin media playback of media data 120 when buffer 500 is sufficiently filled (block 745).
  • Exemplary Media Server-Side Process
  • FIGS. 9A and 9B are a flowchart of a process for allocating fast buffer fill bandwidth 110 for transmission of streaming media data 120 according to an exemplary implementation. Media server 100 may implement the process exemplified by FIGS. 9A and 9B.
  • The exemplary process may begin with the receipt of a message from client 210 requesting allocation of fast buffer fill bandwidth 110 (block 905). As graphically depicted in the messaging diagram of FIG. 8, media server 100 may receive fast buffer fill bandwidth request 810 from client 210.
  • Media server 100 may determine if fast buffer fill bandwidth 110 is available (block 910). A fraction of the capacity of sub-network 220 may be reserved (or at least portions of sub-network 220 that are the most constrained) such that one or more channels of fast buffer fill bandwidth 110 are available for use on-demand, with the remainder of the capacity of sub-network 220 being available for one or more channels of steady state bandwidth 140. The one or more channels of fast buffer fill bandwidth 110 may be re-used frequently, since at any given time only a small fraction of clients 210 may have experienced a buffer fill event (e.g., a buffer underflow event, or an initial transmission of a stream of media data that requires buffering). However, it may occur that all of the reserved capacity of sub-network 220 is in use at the time that a given client 210 sends a fast buffer fill bandwidth request 810 to media server 100. In such a case, media server 100 may determine that fast buffer fill bandwidth 110 is currently unavailable. In one implementation, to determine whether fast buffer fill bandwidth 110 is available, media server 100 may communicate with one or more elements of sub-network 220, or with a service management system associated with sub-network 220, to negotiate a certain service level agreement (SLA) to obtain a different quality of service (e.g., a higher quality of service). Negotiation of the SLA may include requesting an explicit bandwidth reservation (e.g., a higher quality of service) or requesting a higher class of service. Availability of fast buffer fill bandwidth 110 may be determined by media server 100 based on the SLA negotiated with the one or more elements of sub-network 220 or with the service management system associated with sub-network 220.
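  • A sketch of the availability check of block 910 follows; it reuses the ChannelPool sketch above and treats the service management system as an opaque object with a single negotiation method, which is a hypothetical interface rather than anything defined in the description.

    # Illustrative availability check for block 910 (assumed interfaces).
    def fast_fill_available(pool, service_manager=None):
        """Return True if fast buffer fill bandwidth 110 can be granted now."""
        if pool.allocate_fast_fill():           # a reserved channel is idle
            return True
        if service_manager is not None:
            # Negotiate an SLA: request an explicit bandwidth reservation or a
            # higher class of service from the network's service management system.
            return service_manager.request_higher_class_of_service()
        return False                            # all reserved capacity is in use
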
  • If fast buffer fill bandwidth 110 is not available (NO—block 910), then media server 100, or other network element, may send a message notifying client 210 of a denial of allocation of fast buffer fill bandwidth 110 (block 915). If fast buffer fill bandwidth 110 is available (YES—block 910), then media server 100 may send a message notifying client 210 of the approval of fast buffer fill bandwidth allocation (block 920). As graphically depicted in the messaging diagram of FIG. 8, media server 100 may send a message 820 notifying client 210 of the allocation of fast buffer bandwidth 110 to client 210.
  • Media server 100 may send media data 120 to client 210 using fast buffer fill bandwidth 110 (block 925). As graphically shown in the messaging diagram of FIG. 8, media server 100 may send media data 120 to client 210 via fast buffer fill bandwidth 110. During sending of media data 120 to client 210 using fast buffer fill bandwidth 110, media server 100 may determine if a specified period of time has expired (block 930). The period of time may correspond to the specified duration (tBUFFER) of time, or the requested data volume, for which the fast buffer fill bandwidth 110 was requested by client 210 in the request message 810. If the specified period of time has not expired (NO—block 930), then media server 100 may continue sending media data 120 to client 210 using fast buffer fill bandwidth 110 (block 925).
  • If the specified period of time has expired (YES—block 930), then media server 100 may send media data 120 to client 210 using steady state bandwidth 140 (block 935). As graphically illustrated in the messaging diagram of FIG. 8, media server 100 may send additional media data 120 of the stream to client 210 using steady state bandwidth 140. The remaining portions of the streaming media data 120 may be sent to client 210 from media server 100 at a lower rate via steady state bandwidth 140. Media server 100 may send media data 120 to client 210 using steady state bandwidth 140 according to an SLA that includes a lower quality of service than the SLA negotiated above with respect to block 910.
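  • The server-side flow of blocks 925 through 935 can be summarized in a short sketch, shown below under the assumption of a generic send_chunk transport callback; neither the callback nor the channel labels come from the description.

    # Illustrative delivery loop: stream over fast buffer fill bandwidth until the
    # specified period expires (block 930), then switch to steady state bandwidth.
    import time

    def deliver_stream(chunks, send_chunk, t_buffer_s):
        deadline = time.monotonic() + t_buffer_s
        for chunk in chunks:
            if time.monotonic() < deadline:
                send_chunk(chunk, channel="fast_buffer_fill")   # block 925
            else:
                send_chunk(chunk, channel="steady_state")       # block 935
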
  • CONCLUSION
  • The foregoing description of embodiments described herein provides illustration and description, but is not intended to be exhaustive or to limit the embodiments described herein to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, certain portions have been described as executed as instructions by one or more processing units. However, implementations other than software implementations may be used, including, for example, hardware implementations such as application specific integrated circuits, field programmable gate arrays, or combinations of hardware and software. While series of acts have been described with respect to FIGS. 7A, 7B, 9A and 9B, the order of the acts may vary in other implementations. Also, non-dependent acts may be performed in parallel.
  • No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. The scope of the invention is defined by the claims and their equivalents.

Claims (2)

1. A method, comprising:
delivering a media stream to a client using a delivery bandwidth; and
adjusting an amount of the bandwidth used to deliver the media stream based on a state of a buffer associated with the client that receives and buffers the delivered media stream.
2-26. (canceled)
US12/829,495 2007-01-23 2010-07-02 Bandwidth allocation to support fast buffering Abandoned US20100274919A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/829,495 US20100274919A1 (en) 2007-01-23 2010-07-02 Bandwidth allocation to support fast buffering

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/626,016 US7779142B1 (en) 2007-01-23 2007-01-23 Bandwidth allocation to support fast buffering
US12/829,495 US20100274919A1 (en) 2007-01-23 2010-07-02 Bandwidth allocation to support fast buffering

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/626,016 Continuation US7779142B1 (en) 2007-01-23 2007-01-23 Bandwidth allocation to support fast buffering

Publications (1)

Publication Number Publication Date
US20100274919A1 true US20100274919A1 (en) 2010-10-28

Family

ID=42555888

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/626,016 Active 2028-11-16 US7779142B1 (en) 2007-01-23 2007-01-23 Bandwidth allocation to support fast buffering
US12/829,495 Abandoned US20100274919A1 (en) 2007-01-23 2010-07-02 Bandwidth allocation to support fast buffering

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/626,016 Active 2028-11-16 US7779142B1 (en) 2007-01-23 2007-01-23 Bandwidth allocation to support fast buffering

Country Status (1)

Country Link
US (2) US7779142B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250655A1 (en) * 2009-03-24 2010-09-30 Thomson Licensing Methods for delivering and receiving interactive multimedia
US20140043970A1 (en) * 2010-11-16 2014-02-13 Edgecast Networks, Inc. Bandwidth Modification for Transparent Capacity Management in a Carrier Network
KR20160077077A (en) * 2013-10-29 2016-07-01 톰슨 라이센싱 Method and device for reserving bandwidth for an adaptive streaming client
US20190238607A1 (en) * 2012-12-21 2019-08-01 Juniper Networks, Inc. Failure detection manager

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510462B2 (en) * 2009-03-31 2013-08-13 Canon Kabushiki Kaisha Network streaming of a video media from a media server to a media client
US20100251293A1 (en) * 2009-03-31 2010-09-30 Canon Kabushiki Kaisha Network streaming of a video media from a media server to a media client
US9954788B2 (en) * 2011-06-03 2018-04-24 Apple Inc. Bandwidth estimation based on statistical measures
US8745158B2 (en) * 2011-09-30 2014-06-03 Avid Technology, Inc. Application-guided bandwidth-managed caching
CN103188236B (en) * 2011-12-30 2015-12-16 华为技术有限公司 The appraisal procedure of media transmission quality and device
WO2015074623A1 (en) * 2013-11-25 2015-05-28 乐视致新电子科技(天津)有限公司 Video playback method, apparatus and intelligent terminal
JP6298030B2 (en) * 2015-10-28 2018-03-20 ファナック株式会社 Motor controller that achieves both low latency and high throughput data communication
US10990447B1 (en) * 2018-07-12 2021-04-27 Lightbits Labs Ltd. System and method for controlling a flow of storage access requests
US11166052B2 (en) 2018-07-26 2021-11-02 Comcast Cable Communications, Llc Remote pause buffer

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5918020A (en) * 1997-02-28 1999-06-29 International Business Machines Corporation Data processing system and method for pacing information transfers in a communications network
US6405256B1 (en) * 1999-03-31 2002-06-11 Lucent Technologies Inc. Data streaming using caching servers with expandable buffers and adjustable rate of data transmission to absorb network congestion
US20040179497A1 (en) * 1997-06-20 2004-09-16 Tantivy Communications, Inc. Dynamic bandwidth allocation for multiple access communications using buffer urgency factor
US20050254427A1 (en) * 2004-05-12 2005-11-17 Nokia Corporation Buffer level signaling for rate adaptation in multimedia streaming
US20050286856A1 (en) * 2002-12-04 2005-12-29 Koninklijke Philips Electronics N.V. Portable media player with adaptive playback buffer control
US20060095472A1 (en) * 2004-06-07 2006-05-04 Jason Krikorian Fast-start streaming and buffering of streaming content for personal media player
US20060126667A1 (en) * 2004-12-10 2006-06-15 Microsoft Corporation Accelerated channel change in rate-limited environments
US20060268704A1 (en) * 2005-04-15 2006-11-30 New Jersey Institute Of Technology Dynamic bandwidth allocation and service differentiation for broadband passive optical networks
US20070097816A1 (en) * 2003-11-18 2007-05-03 Koninklijke Philips Electronics N.V. Determining buffer refilling time when playing back variable bit rate media streams
US20080109556A1 (en) * 2006-11-07 2008-05-08 Sony Ericsson Mobile Communications Ab Adaptive insertion of content in streaming media
US7373413B1 (en) * 2000-06-28 2008-05-13 Cisco Technology, Inc. Devices and methods for minimizing start up delay in transmission of streaming media
US20080181110A1 (en) * 2007-01-31 2008-07-31 Cisco Technology, Inc. Determination of available service capacity in dynamic network access domains
US7581019B1 (en) * 2002-06-05 2009-08-25 Israel Amir Active client buffer management method, system, and apparatus
US7587736B2 (en) * 2001-12-28 2009-09-08 Xanadoo Company Wideband direct-to-home broadcasting satellite communications system and method

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5918020A (en) * 1997-02-28 1999-06-29 International Business Machines Corporation Data processing system and method for pacing information transfers in a communications network
US20040179497A1 (en) * 1997-06-20 2004-09-16 Tantivy Communications, Inc. Dynamic bandwidth allocation for multiple access communications using buffer urgency factor
US6405256B1 (en) * 1999-03-31 2002-06-11 Lucent Technologies Inc. Data streaming using caching servers with expandable buffers and adjustable rate of data transmission to absorb network congestion
US7373413B1 (en) * 2000-06-28 2008-05-13 Cisco Technology, Inc. Devices and methods for minimizing start up delay in transmission of streaming media
US7587736B2 (en) * 2001-12-28 2009-09-08 Xanadoo Company Wideband direct-to-home broadcasting satellite communications system and method
US7581019B1 (en) * 2002-06-05 2009-08-25 Israel Amir Active client buffer management method, system, and apparatus
US20050286856A1 (en) * 2002-12-04 2005-12-29 Koninklijke Philips Electronics N.V. Portable media player with adaptive playback buffer control
US20070097816A1 (en) * 2003-11-18 2007-05-03 Koninklijke Philips Electronics N.V. Determining buffer refilling time when playing back variable bit rate media streams
US20050254427A1 (en) * 2004-05-12 2005-11-17 Nokia Corporation Buffer level signaling for rate adaptation in multimedia streaming
US20050254499A1 (en) * 2004-05-12 2005-11-17 Nokia Corporation Buffer level signaling for rate adaptation in multimedia streaming
US20060095472A1 (en) * 2004-06-07 2006-05-04 Jason Krikorian Fast-start streaming and buffering of streaming content for personal media player
US20060126667A1 (en) * 2004-12-10 2006-06-15 Microsoft Corporation Accelerated channel change in rate-limited environments
US20060268704A1 (en) * 2005-04-15 2006-11-30 New Jersey Institute Of Technology Dynamic bandwidth allocation and service differentiation for broadband passive optical networks
US20080109556A1 (en) * 2006-11-07 2008-05-08 Sony Ericsson Mobile Communications Ab Adaptive insertion of content in streaming media
US20080181110A1 (en) * 2007-01-31 2008-07-31 Cisco Technology, Inc. Determination of available service capacity in dynamic network access domains

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250655A1 (en) * 2009-03-24 2010-09-30 Thomson Licensing Methods for delivering and receiving interactive multimedia
US9124774B2 (en) * 2009-03-24 2015-09-01 Thomson Licensing Methods for delivering and receiving interactive multimedia
US20140043970A1 (en) * 2010-11-16 2014-02-13 Edgecast Networks, Inc. Bandwidth Modification for Transparent Capacity Management in a Carrier Network
US9497658B2 (en) * 2010-11-16 2016-11-15 Verizon Digital Media Services Inc. Selective bandwidth modification for transparent capacity management in a carrier network
US10194351B2 (en) 2010-11-16 2019-01-29 Verizon Digital Media Services Inc. Selective bandwidth modification for transparent capacity management in a carrier network
US20190238607A1 (en) * 2012-12-21 2019-08-01 Juniper Networks, Inc. Failure detection manager
US10637903B2 (en) * 2012-12-21 2020-04-28 Juniper Networks, Inc. Failure detection manager
KR20160077077A (en) * 2013-10-29 2016-07-01 톰슨 라이센싱 Method and device for reserving bandwidth for an adaptive streaming client
US20160261661A1 (en) * 2013-10-29 2016-09-08 Thomson Licensing Method and device for reserving bandwidth for an adaptive streaming client
US10419507B2 (en) * 2013-10-29 2019-09-17 Interdigital Ce Patent Holdings Method and device for reserving bandwidth for an adaptive streaming client
KR102355325B1 (en) * 2013-10-29 2022-01-26 인터디지털 씨이 페이튼트 홀딩스, 에스에이에스 Method and device for reserving bandwidth for an adaptive streaming client

Also Published As

Publication number Publication date
US7779142B1 (en) 2010-08-17

Similar Documents

Publication Publication Date Title
US7779142B1 (en) Bandwidth allocation to support fast buffering
CN100448291C (en) Method and device for changing received flowing content channels
US7594025B2 (en) Startup methods and apparatuses for use in streaming content
US9191322B2 (en) Methods, apparatus and computer readable medium for managed adaptive bit rate for bandwidth reclamation
CA2385230C (en) Adaptive bandwidth system and method for broadcast data
JP5420759B2 (en) Fast channel change processing for slow multicast subscriptions
US7934231B2 (en) Allocation of overhead bandwidth to set-top box
EP2761833B1 (en) Bandwidth management for content delivery
EP2204954B1 (en) Optimised bandwidth utilisation in networks
EP2011308B1 (en) Device and method for dynamically storing media data
US9014048B2 (en) Dynamic bandwidth re-allocation
US20060075453A1 (en) Method for streaming multimedia content
US9294731B2 (en) Dynamic VOD channel allocation based on viewer demand
JP5807710B2 (en) Content distribution system, content distribution method and program
US20090077256A1 (en) Dynamic change of quality of service for enhanced multi-media streaming
JP2012004969A (en) Content distribution apparatus and program
US20150195589A1 (en) Method of and apparatus for determining a composite video services stream
US20050125836A1 (en) Shared wireless video downloading
WO2009080112A1 (en) Method and apparatus for distributing media over a communications network
KR20090001806A (en) Video on demand service system using pre-download of partial data and method thereof
EP1022925A2 (en) Methods and apparatus for specifying performance for multimedia communications

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION