US20050086386A1 - Shared running-buffer-based caching system - Google Patents

Shared running-buffer-based caching system

Info

Publication number
US20050086386A1
Authority
US
United States
Prior art keywords: buffer, content object, datastream, running, content
Prior art date: 2003-10-17
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/687,997
Inventor
Bo Shen
Songqing Chen
Yong Yan
Sujoy Basu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2003-10-17
Filing date: 2003-10-17
Publication date: 2005-04-21
Application filed by Hewlett Packard Development Co LP
Priority to US10/687,997
Publication of US20050086386A1
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (Assignment of assignors interest; see document for details). Assignors: YAN, YONG; CHEN, SONGQING; BASU, SUJOY; SHEN, BO
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data

Abstract

A server-proxy-client network delivers web content objects from servers to clients from cache content at a proxy server in between. Multiple, moving-window buffers are used to service content requests of the server by various independent clients. A first request for content is delivered by the server through the proxy to the requesting client. The content is simultaneously duplicated to a first circulating buffer. Once the buffer fills, the earlier parts are automatically deleted. The buffer therefore holds a most-recently delivered window of content. If a second request for the same content comes in, a check is made to see if the start of the content is still in the first buffer. If it is, the content is delivered from the first buffer. Otherwise, a second buffer is opened and both buffers are used to deliver what they can simultaneously. Such process can open up third and fourth buffers depending on the size of the content, the size of the buffers, and the respective timing of requests.

Description

    FIELD OF THE PRESENT INVENTION
  • The present invention relates generally to computer network systems and software for delivering objects from servers to clients with shared buffers, and specifically to a caching system based on shared running buffers.
  • BACKGROUND OF THE INVENTION
  • The building block of a content delivery network is a server-proxy-client system. A server delivers content to a client through a proxy. The proxy can choose to cache content objects so that a subsequent request to the same content object can be served directly from the proxy without the delay in contacting the server. Proxy caching strategies have therefore been the focus of many developments, particularly the caching of static web content to reduce network loading and end-to-end latencies.
  • The caching of larger objects, such as streaming media content, presents a different set of challenges. The size of a streaming media content object is usually orders of magnitude larger than that of a traditional web content object. For example, a two-hour-long MPEG video requires approximately 1.4 GB of disk space, while a traditional web content object may only require 10 KB. The demand for continuous and timely delivery of a streaming media content object is also more rigorous than for traditional text-based web content, so substantial resources must be reserved for delivering streaming media data to clients. In practice, even a relatively small number of streaming media clients can overload a media server, creating bottlenecks by demanding high disk bandwidth on the server and requiring high network bandwidth.
  • A number of caching systems have been proposed that are suited to streaming media and other large objects. Such systems include partial caching, patching, and proxy buffering. Partial caching caches either a prefix or segments of a content object, rather than the whole content object, so less storage space is required. Typically this involves storing the cached data on disk storage at a proxy server. While this does lessen the disk bandwidth requirement on the server, it moves some of that burden to the proxy. Ideally, data should be cached in memory to effectively reduce disk bandwidth requirements and reduce data delivery latency. Partial caching techniques are also not able to serve the same data to separate overlapping sessions.
  • For on-going streaming sessions, patching can be used so that later sessions for the same content object can be served simultaneously. A single session may be served to multiple clients at once. A number of sessions for a single content object may be occurring concurrently, and a client can receive data from each of these sessions simultaneously, each session providing a different part of the content object. Such requires the clients to be listening on multiple channels and to store content before its presentation time; thus client-side storage is necessary. While patching allows streaming sessions that are overlapped in time to share data, it does not buffer data for those sessions and hence does not make the best use of the data retrieved.
  • Proxy buffering uses either a running buffer or an interval caching buffer. A running buffer is used to store a sliding window of an on-going streaming session in the memory of the proxy. Closely following requests for the content object can be served directly from the buffer in memory rather than re-fetching the content object for every request. Interval caching is similar but does not allocate a buffer until a further request for a content object is made within a certain timeframe. The prefix of the content object is then retrieved from the server and served to the client in conjunction with the data in the newly created buffer. While both of these techniques use memory to buffer the data, they do not fully use the currently buffered data to optimally reduce server load and network traffic. For example, multiple running buffers for the same content object may co-exist in a given processing period without any data sharing among the multiple buffers.
  • SUMMARY OF THE PRESENT INVENTION
  • A server-proxy-client network embodiment of the present invention delivers web content objects from servers to clients from cache content at a proxy server in between. Multiple, moving-window buffers are used to service content requests of the server by various independent clients. A first request for content is delivered by the server through the proxy to the requesting client. The content is simultaneously duplicated to a first circulating buffer. Once the buffer fills, the earlier parts are automatically deleted. The buffer therefore holds a most-recently delivered window of content. If a second request for the same content comes in, a check is made to see if the start of the content is still in the first buffer. If it is, the content is delivered from the first buffer. Otherwise, a second buffer is opened and both buffers are used to deliver what they can simultaneously. Such process can open up third and fourth buffers depending on the size of the content, the size of the buffers, and the respective timing of requests.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram of a server-proxy-client network system embodiment of the present invention;
  • FIG. 2 is a chart of media position against access time, illustrating an embodiment of the present invention;
  • FIG. 3 is a further chart of media position against access time, illustrating an embodiment of the present invention;
  • FIG. 4 is a further chart of media position against access time, illustrating an embodiment of the present invention;
  • FIG. 5 is another chart of media position against access time, illustrating another embodiment of the present invention; and
  • FIG. 6 is a flowchart diagram of a software embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • FIG. 1 illustrates a server-proxy-client network system embodiment of the present invention for delivering objects from servers to clients, and is referred to herein by the general reference numeral 100. A server 101 includes original storage of a web content object 102. A request 104 is received for some or all of content object 102, and such is serviced by a response datastream 106. A cache memory 108 includes many recirculating buffers, as represented by a first buffer 110 and a second buffer 112. A proxy server 114 hosts the cache memory 108 and off-loads work from server 101. The buffers 110 and 112 receive copies of the content passing through from the server to any of clients 116-118. Such copies are then available to service subsequent requests for the same content.
  • Each of clients 116-118 is able to formulate content requests and receive service responses 120-126. Here, a request 120 is responded to directly from server 101, with response 106 delivered by datastream 121. A copy of the response 106 is written to buffer 110. A request 122 by client 117 for the same content can thus be serviced from buffer 110 with datastream 123. A later request 124 from client 118 is responded to in two datastreams 125 and 126, because the content object being sought is not complete in either buffer 110 or 112.
  • Because the clients can receive different parts of the content object as separate streams, each client needs to be able to maintain multiple connections to receive the individual streams. In the case of streaming media, the client will need to store content before its presentation time and will therefore store some of the received data.
  • A preferred embodiment of the present invention utilizes the memory space available on a proxy to serve streaming media content more efficiently. When a request for a streaming media content object arrives, if the request is the first for the content object, an initial buffer of size T is allocated. The buffer is capable of caching T time units of content. The buffer is filled with the stream from the server while the same stream is being delivered to the client. Within the next T time units, before the buffer is full, additional requests for the same media content object are served directly from the buffer. At time T the initial buffer is full, and based on the current access pattern the buffer may be extended or shrunk.
  • At some stage the size of the initial buffer is frozen and subsequent requests for the media content object cannot be served completely from the initial buffer. In this case a new buffer of initial size T is allocated and goes through the same adaptive allocation as above. Subsequent requests are served simultaneously from the new buffer as well as its preceding running buffers.
  • Buffer management is based on user access patterns for particular objects. A request arrival is the time at which a client requests a content object. A request interval is the difference in time between two consecutive request arrivals.
  • The average request interval is used to measure the user access pattern. It is the average interval between the first request arrival and the last request arrival over the time period in which a given number of initial requests arrive. The waiting time is also considered. The waiting time is calculated at time T and is the difference between T and the arrival time of the immediately previous request.
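  • As a concrete illustration, the two metrics above can be computed as follows. This is a minimal sketch assuming request arrivals are recorded as timestamps in seconds; the function and variable names are illustrative, not from the patent.

```python
def average_request_interval(arrivals):
    """Average gap between the first and last request arrivals;
    `arrivals` is a sorted list of timestamps in seconds."""
    if len(arrivals) < 2:
        return None  # undefined with fewer than two arrivals
    return (arrivals[-1] - arrivals[0]) / (len(arrivals) - 1)


def waiting_time(arrivals, t):
    """Elapsed time at decision point `t` (e.g., the end of the
    initial period T) since the immediately previous arrival."""
    return t - arrivals[-1]
```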
  • During its lifecycle a buffer may be in one of three states: the construction state, the running state, or the idle state. When an initial buffer is allocated upon the arrival of an initial request, the buffer is filled while the request is being served, in the expectation that the data cached in the buffer can serve closely following requests for the same content object. The size of the buffer may be adjusted to cache less or more data before its size is frozen. Before the buffer size is frozen, the buffer is in the construction state.
  • The start time of a buffer is defined as the arrival time of the last request before the buffer size is frozen. The requests arriving while a buffer is in the construction state are called the resident requests of this buffer and the buffer is called the resident buffer of these requests.
  • After the buffer freezes its size, it serves as a running window of a streaming session and moves along with the streaming session. Therefore, the state of the buffer is called the running state.
  • The running distance of a buffer is defined as the length of the content object for the initial buffer allocated for the content object, or for a subsequent buffer as the distance in time between the start time of the buffer and the start time of its preceding buffer. Since data is shared among buffers, clients served from a particular buffer are also served from any preceding buffers that are still in running state. Such requires that the running distance of the buffer equals the time difference with the closest preceding buffer in running state.
  • When the running window reaches the end of the streaming session, the buffer enters the idle state, which is a transient state that allows the buffer to be reclaimed.
  • The end time of a buffer is defined as the time when a buffer enters the idle state and is ready to be reclaimed. The end time of the initial buffer is equal to its start time plus the length of the content object, assuming a complete viewing scenario. For a subsequent buffer, the end time is the lesser of the start time of the latest running buffer plus the running distance of the subsequent buffer, and the start time of the subsequent buffer plus the length of the content object. The end times of the current buffers for a content object are dynamically updated upon the forming of new buffers for that content object.
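  • These timing rules reduce to two small formulas, matching those recited in claims 8 and 15 below. The sketch assumes times are plain numbers and that the caller tracks each buffer's start time; the names are illustrative.

```python
def running_distance(start, prev_start, content_length):
    """D_i: the full content length for the initial buffer; otherwise
    the time gap between this buffer's start and its predecessor's."""
    if prev_start is None:  # initial buffer allocated for the object
        return content_length
    return start - prev_start


def end_time(start, latest_start, distance, content_length):
    """E_i = min(S_latest + D_i, S_i + L), re-evaluated whenever a
    new buffer is formed for the same content object."""
    return min(latest_start + distance, start + content_length)
```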
  • For an incoming request for a content object, if the latest running buffer of the content object is caching the prefix of the content object, the request is served directly from all the existing running buffers of the content object. Otherwise, if there is enough memory, a new running buffer of a predetermined size T is allocated. The request is served from the new running buffer and all existing running buffers of the content object. If there is not enough memory, the request may be served without caching, or a buffer replacement algorithm may be invoked to re-allocate an existing running buffer to the request. The end times of all existing buffers of the content object are then updated.
  • Initially, all buffers are allocated with a predetermined size. Starting from the construction state, each buffer then adjusts its size by going through a three-state lifecycle management process as described below.
  • While the buffer is in the construction state, at the end of T, if there has only been one request arrival so far, the initial buffer enters the idle state immediately. For this request, the proxy acts as a bypass server, i.e., content is passed to the client without caching in the memory buffers. Such scheme gives preference to more frequently requested objects in the memory allocation.
  • In FIG. 2, the shaded area indicates an allocated initial buffer that is reclaimed at T. If there have been multiple requests, the waiting time and the average request interval are calculated. If the waiting time is greater than the average request interval, the initial buffer is shrunk by a time value equal to the waiting time, so that the most recent request can still be served from the buffer.
  • In FIG. 3, part of the initial buffer is reclaimed at the end of T so that the last request, R4, can be served from the buffer. Subsequently, the buffer enters the running state. Such running buffer then serves as a shifting window and runs to its end time. If the waiting time is less than the average request interval, the initial buffer remains in the construction state and is expanded by the difference between the average request interval and the waiting time, to T′, in the expectation that a new request will arrive very soon. At T′ the waiting time and the average request interval are recalculated and the above procedure is repeated.
  • In FIG. 4, the initial buffer has been extended from T to T′. Eventually, when the requests to the content object become less frequent, the buffer will freeze its size and enter the running state. In the extreme case, the full length of the media content object is cached in the buffer. In this case, the buffer also freezes and enters the running state as a static run. For the most frequently accessed objects, this scheme ensures that requests to these objects are served from the proxy memory directly.
  • The buffer expansion is bounded by the available memory in the proxy. When the available memory is exhausted, the buffer freezes its size and enters the running state regardless of future request arrivals.
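  • The construction-state decision at the end of T (and again at each extension T′) can be sketched as one function. The return convention and the available-memory bound are assumptions for illustration, not part of the patent text.

```python
def adjust_at_deadline(arrivals, t, size, content_length, avail_mem):
    """Shrink, freeze, or expand a construction-state buffer at time t.

    Returns (new_size, frozen); sizes are in time units of content.
    """
    if len(arrivals) < 2:
        return 0, True  # lone request so far: reclaim, serve as bypass
    avg = (arrivals[-1] - arrivals[0]) / (len(arrivals) - 1)
    wait = t - arrivals[-1]
    if wait > avg:
        # Shrink by the waiting time so the most recent request can
        # still be served from the buffer, then freeze (running state).
        return size - wait, True
    if size >= content_length:
        return content_length, True  # whole object cached: static run
    if avail_mem <= 0:
        return size, True            # memory exhausted: freeze as-is
    # Expand by (average interval - waiting time) to T', expecting
    # another request soon; the caller repeats this procedure at T'.
    grow = min(avg - wait, avail_mem)
    return size + grow, False
```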
  • After a buffer enters the running state, it starts running away from the beginning of the media content object, and subsequent requests cannot be served completely from the running buffer. In this case, a new buffer of an initial size T is allocated and goes through its own lifecycle. Subsequent requests are served from the new buffer as well as its preceding running buffers.
  • FIG. 5 illustrates maximal data sharing. Requests R1 to Rk are served entirely from the first buffer B1. The requests Rk+1 to Rn are served simultaneously from B1 and B2, showing how late requests are served from all existing buffers. Note that except for the first buffer, the other buffers do not run to the end of the content object.
  • When a buffer enters the running state, the running distance and end time are calculated for that buffer. In addition, the end times of preceding buffers for the same content object need to be modified according to the arrival time of the latest request. When a buffer runs to its end time, it enters the idle state where it is ready for reclamation.
  • A running buffer serves a group of requests. All the requests share the data read by the first request in the group. All the requests served by a later running buffer also accept data from earlier running buffers. In addition, the later running buffer only needs to reach the end of its immediately preceding run. Since requests served by the later running buffer are also served by earlier runs of the same stream, the earlier runs may need to extend their running distance to cover the gap between different runs. Such means that the content in memory is shared by as many clients as possible, thus reducing disk and network input-output.
  • In a preferred embodiment, the initial size T of the buffers used for a particular content object depends on the advertised length of that content object, generally with a minimum and maximum size for the buffer. For example, a streaming media content object with a running time of one hour may use a buffer of one third that size, e.g., a buffer that will hold twenty minutes' worth of streaming content. Such size may then be adjusted according to the user access patterns as described above.
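  • The proportional sizing just described might look like the following; the one-third ratio and the clamp bounds are assumptions for illustration.

```python
def initial_buffer_size(advertised_length, ratio=1/3,
                        min_size=60.0, max_size=1800.0):
    """Initial size T, in seconds of content, for a new running buffer."""
    return max(min_size, min(max_size, advertised_length * ratio))

# A one-hour object (3600 s) yields T = 1200 s, i.e., twenty minutes
# of streaming content, matching the example above.
```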
  • Some embodiments of the present invention conserve memory by reclaiming buffer space that is no longer needed. The delivery of a streaming media content object from initial request to completion or termination is referred to herein as a streaming session. When such a session terminates before it reaches the end of the requested content object, the running buffer can be reclaimed. If the terminated session was served from the head of a running buffer, the system reclaims the memory space from the head of the buffer to the buffer location where the next immediate session is served. If the terminating session was served from the tail of a running buffer, the system reclaims the memory space from the tail up to its immediately prior session, either immediately or after a time period, depending on whether there are other requests associated with it. If the terminating session was served from the middle of a running buffer, the session is terminated with no buffer space reclaimed.
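  • The head/tail/middle reclamation rules can be sketched as below, assuming each remaining session records its current read offset within the buffer and that `head` and `tail` bound the cached window; all names are hypothetical.

```python
def reclaimable_span(remaining_offsets, ended_offset, head, tail):
    """Span of buffer memory freed when a session terminates early.

    Returns (start, end) of the reclaimable region, or None.
    """
    if not remaining_offsets:    # no other session: free the whole buffer
        return (head, tail)
    offsets = sorted(remaining_offsets)
    if ended_offset <= offsets[0]:   # ended session was at the head
        return (head, offsets[0])    # reclaim up to the next session
    if ended_offset >= offsets[-1]:  # ended session was at the tail
        return (offsets[-1], tail)   # reclaim back to the prior session
    return None                      # mid-buffer: nothing reclaimed
```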
  • When there is memory released from the running buffers because of normal session termination, the newly available memory can be allocated to serve the requests to the most popular content object that needs a buffer.
  • Embodiments of the present invention may incorporate a buffer replacement algorithm. The replacement algorithm is important because available memory is still scarce compared to the size of video objects, so efficient use of the limited resource is critical to achieving the best performance gain. One embodiment implements a popularity-based replacement algorithm. If a request arrives while there is no available memory, all the objects that have on-going streams in memory are ordered according to their popularities calculated over a certain past time period.
  • If the content object being demanded has a higher popularity than the least popular content object in memory, then the latest running buffer of the least popular content object is released, and the space is re-allocated to the new request. Those requests without running buffers do not buffer their data at all; in this case, theoretically, they are assumed to have no memory consumption. Alternatively, the system can choose to start a memoryless session in which the proxy passes the content through to the client without caching. Such is called a non-replacement policy.
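  • One way to realize the choice between replacement and bypass is sketched below. The popularity metric itself (here a score supplied by the caller, e.g., a request count over a recent period) is an assumption; the patent leaves it open.

```python
def select_buffer_to_release(request_obj, popularity, running_buffers):
    """Pick a buffer to release for `request_obj` under memory pressure.

    `popularity` maps object id -> score over a past period;
    `running_buffers` maps object id -> list of buffers, oldest first.
    Returns the released buffer, or None for a bypass (non-replacement).
    """
    cached = [obj for obj, bufs in running_buffers.items() if bufs]
    if not cached:
        return None
    victim = min(cached, key=lambda obj: popularity.get(obj, 0))
    if popularity.get(request_obj, 0) > popularity.get(victim, 0):
        return running_buffers[victim].pop()  # victim's latest buffer
    return None  # new object is no more popular: stream without caching
```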
  • In an alternative embodiment, a zero-sized buffer may be used, resulting in a bypass session that can be shared. A bypass session is one in which the content is streamed directly to the client without caching or additional action. If a later request for the same content object can listen to this bypass session, only the prefix of the content object needs to be delivered to the later request.
  • A method embodiment of the present invention comprises receiving a first request for a content object. An initial running buffer of a predetermined size is allocated to store a first amount of data from the content object. The content object is retrieved as a datastream having a start point, and the datastream is inserted into the initial buffer while the same datastream is delivered to the client. When the initial buffer is filled, data is deleted from the start of the datastream while retrieved data continues to be inserted into the buffer. The buffer thus contains a moving window of the retrieved data. A second request is then received. If it is received while the start of the datastream is still in the initial buffer, the content object is served directly from the initial buffer. If the second request is received after the start point has been deleted from the initial buffer, the portion of the content object that has been deleted from the initial buffer is fetched, commencing from the start point. Such is delivered simultaneously with other parts of the content object from the initial buffer.
  • Referring now to FIG. 6, embodiments of the present invention can be embodied in computer software to perform the following functions. A request 600 is received from a client for a content object. A check is made to see if the content object is currently in an existing buffer 601. If the content object is not in an existing buffer, then this is an initial request for the content object, which will require the content object to be retrieved and, preferably, buffered. A check is made to determine whether there is enough memory to allocate a new buffer 602. If there is not enough memory to allocate a new buffer, a process 603 is run to determine whether memory can be freed from any buffers existing for other objects. If memory can be freed, a process 604 determines the least popular content object in memory and reclaims the latest buffer for that content object to make room for the new buffer. The new buffer is then allocated 605 to store a sliding window of data from the requested content object. A process 606 retrieves the content object as a datastream and inserts the datastream into the newly allocated buffer while at the same time delivering the datastream to the client 607.
  • If process 603 determines that memory is not available and cannot be reclaimed from other buffers, then a pass-through session is required that does not buffer the data. A process 608 retrieves the content object as a datastream and the datastream is delivered directly to the client 607.
  • If step 601 determines that the content object is already in an existing buffer, then this request is a further request for the content object, and the content object can be served either fully or partially from the existing buffers for the content object. A process 609 checks whether the prefix of the content object is in an existing buffer. If the prefix is cached, then, due to the sliding nature of the buffers, the whole content object should be in the existing buffers. In this case process 610 serves the content object to the client from all of the existing buffers. The content object may be in one or more buffers. If the content object is in more than one buffer, the client will be required to maintain separate connections to retrieve data, representing different parts of the content object, from each of the buffers.
  • If step 609 determines that the prefix is not cached in an existing buffer, then part of the content object must be retrieved before serving, while other parts of the content object can be served directly from any buffers existing for the content object. Process 610 proceeds to serve the content object to the client from all of the existing buffers while step 602 checks whether there is enough room to allocate a new buffer. If there is not enough memory to allocate a new buffer, a process 603 is run to determine whether memory can be freed from any buffers existing for other objects. If memory can be freed, a process 604 determines the least popular content object in memory and reclaims the latest buffer for that content object to make room for the new buffer. The new buffer is then allocated 605 to store a sliding window of data from the requested content object. A process 606 retrieves the content object as a datastream and inserts the datastream into the newly allocated buffer while at the same time delivering the datastream to the client 607. Such datastream is delivered in parallel with the data delivered by process 610.
  • If process 603 determines that memory is not available and cannot be reclaimed from other buffers, then a pass-through session is required that does not buffer the data. A process 608 retrieves the content object as a datastream, and the datastream is delivered directly to the client 607. Such datastream is delivered in parallel with the data delivered by process 610.
  • Because parts of the content object already exist in one or more other buffers, processes 606 and 608 only need to retrieve the prefix of the content object, e.g., from the beginning, up to the nearest part already in an existing buffer. Such results in the content object in memory being shared by as many clients as possible reducing disk and network loading.
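  • Putting the flowchart together, the dispatch logic of steps 600-610 might read as below. The `proxy` helper methods are assumed interfaces standing in for the processes of FIG. 6, not an API defined by the patent.

```python
def handle_request(obj, proxy, client):
    """Sketch of FIG. 6: dispatch one client request for `obj`."""
    bufs = proxy.buffers_for(obj)                    # step 601
    if bufs:
        proxy.serve_from_buffers(obj, bufs, client)  # step 610
        if proxy.prefix_cached(obj):                 # step 609: buffers
            return                                   # cover the whole object
    # Miss or partial hit: try to allocate a new sliding-window buffer.
    if proxy.has_memory() or proxy.free_least_popular():  # steps 602-604
        buf = proxy.allocate_buffer(obj)             # step 605
        stream = proxy.fetch_missing(obj, bufs)      # step 606: only the
        proxy.tee(stream, buf, client)               # missing prefix; 607
    else:
        stream = proxy.fetch_missing(obj, bufs)      # step 608: pass-through
        proxy.deliver(stream, client)                # delivered 607, unbuffered
```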
  • Although the present invention has been described in terms of the presently preferred embodiments, it is to be understood that the disclosure is not to be interpreted as limiting. Various alterations and modifications will no doubt become apparent to those skilled in the art after having read the above disclosure. Accordingly, it is intended that the appended claims be interpreted as covering all alterations and modifications as fall within the true spirit and scope of the present invention.

Claims (15)

1. A network proxy server, comprising:
a network connection able to intercept content-object requests of clients from a server, and able to respond instead of said server to such; and
a plurality of content buffers for duplicating web content passing through from said server to any client, and for caching such web content to any subsequent clients;
wherein, multiple, moving-window buffers are included in the plurality of content buffers to service content requests of a server by various independent clients; and
wherein, whole requests for content-object from single clients can be serviced simultaneously from parts distributed across more than one such content buffer.
2. A system of delivering objects from servers to clients comprising:
receiving a first request for a content object from a first client;
allocating a first running buffer;
retrieving the content object as a datastream having a start point and inserting the datastream into the first buffer while delivering the same datastream to the first client;
when the first buffer is filled, deleting data from the start point of the datastream while continuing to insert retrieved data into the buffer, so that the buffer contains a moving window of the retrieved data;
receiving a second request for the content object from a client;
if the second request is received while the start point of the datastream is still in the first buffer, serving the content object directly from the first buffer; and
if the second request is received after the start point has been deleted from the first buffer, retrieving the portion of the content object that has been deleted from the first buffer, commencing from the start point, and delivering the same as a datastream while simultaneously delivering a different part of the content object from the first buffer.
3. The system of claim 2, further comprising, if the second request is received after the start point of the datastream has been deleted from the first buffer:
allocating a second running buffer and inserting the datastream representing the portion of the content object not in the first running buffer into the second running buffer while delivering the same datastream.
4. The system of claim 3, further comprising, for a third request for the content object received after the second running buffer has been allocated:
checking whether the start point is cached in an existing running buffer;
if the start point is cached in an existing running buffer, serving the content object as a datastream from each of the running buffers simultaneously;
if the start point is not cached in an existing running buffer,
allocating a third running buffer;
retrieving the portion of the content object not in an existing running buffer as a datastream and inserting the datastream into the third running buffer while delivering the same datastream and simultaneously delivering a different part of the content object from other existing running buffers.
5. The system of claim 2, wherein the first buffer or another buffer has a size that is determined as a proportion of an advertised length of the content object.
6. The system of claim 2, further comprising:
modifying the size of the first buffer or another buffer in response to an analysis of frequency of requests for the content object, in order to optimize allocation of memory.
7. The system of claim 2, further comprising, prior to allocating the first buffer or another buffer, applying a replacement algorithm to reclaim buffers from less frequently requested objects.
8. The system of claim 2, wherein the content object has a time length L and each buffer has a start time Si, an end time Ei, and a running distance Di, wherein the running distance Di for each buffer after the first buffer equals Di = Si − Si−1, and wherein the end time Ei for each buffer after the first buffer is Ei = min(Slatest + Di, Si + L), where Slatest is the start time of the most recent buffer allocated.
9. Computer data storage media having stored thereon software performing the following functions:
receiving a first request for a content object;
allocating a first running buffer;
retrieving the content object as a datastream having a start point and inserting the datastream into the first buffer while delivering the same datastream;
when the first buffer is filled, deleting data from the start point of the datastream while continuing to insert retrieved data into the buffer, so that the buffer contains a moving window of the retrieved data;
receiving a second request for the content object;
if the second request is received while the start point of the datastream is in the first buffer, serving the content object directly from the first buffer;
if the second request is received after the start point has been deleted from the first buffer:
retrieving the portion of the content object that has been deleted from the first buffer, commencing from the start point, and delivering the same as a datastream while simultaneously delivering a different part of the content object as a datastream from the first buffer.
10. The computer data storage media of claim 9, wherein the software performs the following further functions:
if the second request is received after the start point of the datastream has been deleted from the first buffer, allocating a second running buffer and inserting the datastream representing the portion of the content object not in the first running buffer into the second running buffer while delivering the same datastream.
11. The computer data storage media of claim 9, wherein the software performs the following further functions:
receiving a third request for the content object after the second running buffer has been allocated;
checking whether the start point is cached in an existing running buffer;
if the start point is cached in an existing running buffer, serving the content object as a datastream from each of the running buffers simultaneously;
if the start point is not cached in an existing running buffer:
allocating a third running buffer;
retrieving the portion of the content object not in an existing running buffer as a datastream and inserting the datastream into the third running buffer while delivering the same datastream and simultaneously delivering a different part of the content object as a datastream from other existing running buffers.
12. The computer data storage media of claim 9, wherein the software performs the following further functions:
determining the advertised length of the content object;
setting the size of the first buffer or another buffer as a proportion of the advertised length of the content object.
13. The computer data storage media of claim 9, wherein the software performs the following further functions:
analyzing frequency of requests for the content object; and
modifying the size of the first buffer or another buffer in response to the analysis of the frequency of requests for the content object in order to optimize allocation of memory.
14. The computer data storage media of claim 9, wherein the software performs the following further functions:
prior to allocating the first buffer or another buffer, checking whether memory is available;
if there is not enough memory available to allocate a buffer, applying a replacement algorithm to reclaim buffers from less frequently requested objects.
15. The computer data storage media of claim 9, wherein the software performs the following further functions:
determining a time length L for the content object;
setting a start time Si, an end time Ei and a running distance Di for each buffer;
computing the running distance Di for each buffer after the first buffer as Di = Si − Si-1, where Si-1 is the start time of the immediately preceding buffer;
computing the end time Ei for each buffer after the first buffer as Ei = min(Slatest + Di, Si + L), where Slatest is the start time of the most recently allocated buffer.
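Tying the sketches together, a hypothetical two-request session might run as follows; every helper and constant here is invented for illustration, and a real patch buffer might be sized to retain the full missing prefix rather than evicting as shown.

```python
# Hypothetical end-to-end use of the sketches above: two requests for
# one object, the second arriving after the start point was evicted.

def allocate_buffer(object_id):
    buf = RunningBuffer(capacity_chunks=4)
    buf.object_id = object_id          # used by the replacement sketch
    return buf

def fetch_from_server(object_id, start=0, end=None):
    # Stand-in for a real server fetch; yields chunk indices.
    yield from range(start, 12 if end is None else end)

# First request: no buffers exist yet, so the whole object is fetched
# into a fresh running buffer (which ends up holding only the tail).
first = dispatch_request("movie", [], allocate_buffer, fetch_from_server)

# Second request: `first` no longer holds the start point, so a patch
# buffer is allocated and the missing prefix is re-fetched.
second = dispatch_request("movie", [first], allocate_buffer, fetch_from_server)
assert not first.holds_start_point() and second is not None
```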
US10/687,997 (filed 2003-10-17, priority date 2003-10-17): Shared running-buffer-based caching system. Status: Abandoned. Published as US20050086386A1 (en).

Priority Applications (1)

Application Number: US10/687,997
Priority Date: 2003-10-17
Filing Date: 2003-10-17
Title: Shared running-buffer-based caching system

Publications (1)

Publication Number: US20050086386A1
Publication Date: 2005-04-21

Family ID: 34521077

Country Status (1)

Country: US
Publication: US20050086386A1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151444A (en) * 1992-02-07 2000-11-21 Abecassis; Max Motion picture including within a duplication of frames
US5933603A (en) * 1995-10-27 1999-08-03 Emc Corporation Video file server maintaining sliding windows of a video data set in random access memories of stream server computers for immediate video-on-demand service beginning at any specified location
US6101311A (en) * 1996-07-04 2000-08-08 Nec Corporation Moving picture and audio data reproducing method and system therefor
US6195680B1 (en) * 1998-07-23 2001-02-27 International Business Machines Corporation Client-based dynamic switching of streaming servers for fault-tolerance and load balancing
US6633918B2 (en) * 1998-10-06 2003-10-14 Realnetworks, Inc. System and method for providing random access to a multimedia object over a network
US6625643B1 (en) * 1998-11-13 2003-09-23 Akamai Technologies, Inc. System and method for resource management on a data network
US6522342B1 (en) * 1999-01-27 2003-02-18 Hughes Electronics Corporation Graphical tuning bar for a multi-program data stream
US6339785B1 (en) * 1999-11-24 2002-01-15 Idan Feigenbaum Multi-server file download
US6807550B1 (en) * 1999-12-01 2004-10-19 Microsoft Corporation Methods and systems for providing random access to structured media content

Cited By (111)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8433826B2 (en) 2004-03-31 2013-04-30 Qurio Holdings, Inc. Proxy caching in a photosharing peer-to-peer network to improve guest image viewing performance
US8234414B2 (en) 2004-03-31 2012-07-31 Qurio Holdings, Inc. Proxy caching in a photosharing peer-to-peer network to improve guest image viewing performance
US20060010225A1 (en) * 2004-03-31 2006-01-12 Ai Issa Proxy caching in a photosharing peer-to-peer network to improve guest image viewing performance
US20070156965A1 (en) * 2004-06-30 2007-07-05 Prabakar Sundarrajan Method and device for performing caching of dynamically generated objects in a data communication network
US8739274B2 (en) 2004-06-30 2014-05-27 Citrix Systems, Inc. Method and device for performing integrated caching in a data communication network
US8495305B2 (en) 2004-06-30 2013-07-23 Citrix Systems, Inc. Method and device for performing caching of dynamically generated objects in a data communication network
US7369132B1 (en) * 2004-07-29 2008-05-06 Nvidia Corporation Apparatus, system, and method for delivering data to multiple memory clients via a unitary buffer
US7221369B1 (en) * 2004-07-29 2007-05-22 Nvidia Corporation Apparatus, system, and method for delivering data to multiple memory clients via a unitary buffer
US7698386B2 (en) 2004-11-16 2010-04-13 Qurio Holdings, Inc. Serving content from an off-line peer server in a photosharing peer-to-peer network in response to a guest request
US8280985B2 (en) 2004-11-16 2012-10-02 Qurio Holdings, Inc. Serving content from an off-line peer server in a photosharing peer-to-peer network in response to a guest request
US20060136551A1 (en) * 2004-11-16 2006-06-22 Chris Amidon Serving content from an off-line peer server in a photosharing peer-to-peer network in response to a guest request
US20100169465A1 (en) * 2004-11-16 2010-07-01 Qurio Holdings, Inc. Serving content from an off-line peer server in a photosharing peer-to-peer network in response to a guest request
US8954595B2 (en) 2004-12-30 2015-02-10 Citrix Systems, Inc. Systems and methods for providing client-side accelerated access to remote applications via TCP buffering
US20060253605A1 (en) * 2004-12-30 2006-11-09 Prabakar Sundarrajan Systems and methods for providing integrated client-side acceleration techniques to access remote applications
US20100332594A1 (en) * 2004-12-30 2010-12-30 Prabakar Sundarrajan Systems and methods for automatic installation and execution of a client-side acceleration program
US8856777B2 (en) 2004-12-30 2014-10-07 Citrix Systems, Inc. Systems and methods for automatic installation and execution of a client-side acceleration program
US8706877B2 (en) 2004-12-30 2014-04-22 Citrix Systems, Inc. Systems and methods for providing client-side dynamic redirection to bypass an intermediary
US8549149B2 (en) 2004-12-30 2013-10-01 Citrix Systems, Inc. Systems and methods for providing client-side accelerated access to remote applications via TCP multiplexing
US20060195547A1 (en) * 2004-12-30 2006-08-31 Prabakar Sundarrajan Systems and methods for providing client-side accelerated access to remote applications via TCP multiplexing
US8788581B2 (en) * 2005-01-24 2014-07-22 Citrix Systems, Inc. Method and device for performing caching of dynamically generated objects in a data communication network
US8848710B2 (en) 2005-01-24 2014-09-30 Citrix Systems, Inc. System and method for performing flash caching of dynamically generated objects in a data communication network
US20060230171A1 (en) * 2005-04-12 2006-10-12 Dacosta Behram M Methods and apparatus for decreasing latency in A/V streaming systems
US20060230176A1 (en) * 2005-04-12 2006-10-12 Dacosta Behram M Methods and apparatus for decreasing streaming latencies for IPTV
US8688801B2 (en) 2005-07-25 2014-04-01 Qurio Holdings, Inc. Syndication feeds for peer computer devices and peer networks
US9098554B2 (en) 2005-07-25 2015-08-04 Qurio Holdings, Inc. Syndication feeds for peer computer devices and peer networks
US20070022174A1 (en) * 2005-07-25 2007-01-25 Issa Alfredo C Syndication feeds for peer computer devices and peer networks
US8005889B1 (en) 2005-11-16 2011-08-23 Qurio Holdings, Inc. Systems, methods, and computer program products for synchronizing files in a photosharing peer-to-peer network
US8788572B1 (en) 2005-12-27 2014-07-22 Qurio Holdings, Inc. Caching proxy server for a peer-to-peer photosharing system
US8499057B2 (en) * 2005-12-30 2013-07-30 Citrix Systems, Inc System and method for performing flash crowd caching of dynamically generated objects in a data communication network
US8301839B2 (en) 2005-12-30 2012-10-30 Citrix Systems, Inc. System and method for performing granular invalidation of cached dynamically generated objects in a data communication network
US8255456B2 (en) * 2005-12-30 2012-08-28 Citrix Systems, Inc. System and method for performing flash caching of dynamically generated objects in a data communication network
US20110145330A1 (en) * 2005-12-30 2011-06-16 Prabakar Sundarrajan System and method for performing flash crowd caching of dynamically generated objects in a data communication network
US20070156876A1 (en) * 2005-12-30 2007-07-05 Prabakar Sundarrajan System and method for performing flash caching of dynamically generated objects in a data communication network
US20100077056A1 (en) * 2008-09-19 2010-03-25 Limelight Networks, Inc. Content delivery network stream server vignette distribution
US8966003B2 (en) 2008-09-19 2015-02-24 Limelight Networks, Inc. Content delivery network stream server vignette distribution
US8255557B2 (en) 2010-04-07 2012-08-28 Limelight Networks, Inc. Partial object distribution in content delivery network
US8463876B2 (en) 2010-04-07 2013-06-11 Limelight, Inc. Partial object distribution in content delivery network
US8370452B2 (en) 2010-12-27 2013-02-05 Limelight Networks, Inc. Partial object caching
WO2012091693A1 (en) * 2010-12-27 2012-07-05 Limelight Networks, Inc. Partial object caching
US11838385B2 (en) 2011-12-14 2023-12-05 Level 3 Communications, Llc Control in a content delivery network
US20140372588A1 (en) 2011-12-14 2014-12-18 Level 3 Communications, Llc Request-Response Processing in a Content Delivery Network
US10841398B2 (en) 2011-12-14 2020-11-17 Level 3 Communications, Llc Control in a content delivery network
US11218566B2 (en) 2011-12-14 2022-01-04 Level 3 Communications, Llc Control in a content delivery network
US10187491B2 (en) 2011-12-14 2019-01-22 Level 3 Communications, Llc Request-response processing in a content delivery network
US9451045B2 (en) 2011-12-14 2016-09-20 Level 3 Communications, Llc Content delivery network
US9456053B2 (en) 2011-12-14 2016-09-27 Level 3 Communications, Llc Content delivery network
US9516136B2 (en) 2011-12-14 2016-12-06 Level 3 Communications, Llc Customer-specific request-response processing in a content delivery network
US9634904B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Framework supporting content delivery with hybrid content delivery services
US9722882B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with provisioning
US9628343B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Content delivery framework with dynamic service network topologies
US9628342B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Content delivery framework
US9628347B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Layered request processing in a content delivery network (CDN)
US9628346B2 (en) * 2012-12-13 2017-04-18 Level 3 Communications, Llc Devices and methods supporting content delivery with reducer services
US9628345B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Framework supporting content delivery with collector services network
US20140173048A1 (en) * 2012-12-13 2014-06-19 Level 3 Communications, Llc Devices And Methods Supporting Content Delivery With Reducer Services
US9634905B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Invalidation systems, methods, and devices
US9634918B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Invalidation sequencing in a content delivery framework
US9634907B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9634906B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9641402B2 (en) 2012-12-13 2017-05-02 Level 3 Communications, Llc Configuring a content delivery network (CDN)
US9641401B2 (en) 2012-12-13 2017-05-02 Level 3 Communications, Llc Framework supporting content delivery with content delivery services
US9647900B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Devices and methods supporting content delivery with delivery services
US9647899B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Framework supporting content delivery with content delivery services
US9647901B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Configuring a content delivery network (CDN)
US9654353B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with rendezvous services network
US9654355B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with adaptation services
US9654354B2 (en) * 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with delivery services network
US9654356B2 (en) * 2012-12-13 2017-05-16 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services
US9660875B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with rendezvous services having dynamically configurable log information
US9661046B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services
US9660876B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Collector mechanisms in a content delivery network
US9660874B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with delivery services having dynamically configurable log information
US9667506B2 (en) 2012-12-13 2017-05-30 Level 3 Communications, Llc Multi-level peering in a content delivery framework
US11368548B2 (en) 2012-12-13 2022-06-21 Level 3 Communications, Llc Beacon services in a content delivery framework
US9686148B2 (en) 2012-12-13 2017-06-20 Level 3 Communications, Llc Responsibility-based cache peering
US9705754B2 (en) 2012-12-13 2017-07-11 Level 3 Communications, Llc Devices and methods supporting content delivery with rendezvous services
US9722883B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Responsibility-based peering
US9722884B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Event stream collector systems, methods, and devices
US9628344B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Framework supporting content delivery with reducer services network
US20140173042A1 (en) * 2012-12-13 2014-06-19 Level 3 Communications, Llc Framework Supporting Content Delivery With Delivery Services Network
US9749190B2 (en) 2012-12-13 2017-08-29 Level 3 Communications, Llc Maintaining invalidation information
US9749192B2 (en) 2012-12-13 2017-08-29 Level 3 Communications, Llc Dynamic topology transitions in a content delivery framework
US9749191B2 (en) 2012-12-13 2017-08-29 Level 3 Communications, Llc Layered request processing with redirection and delegation in a content delivery network (CDN)
US9755914B2 (en) 2012-12-13 2017-09-05 Level 3 Communications, Llc Request processing in a content delivery network
US9787551B2 (en) 2012-12-13 2017-10-10 Level 3 Communications, Llc Responsibility-based request processing
US9819554B2 (en) 2012-12-13 2017-11-14 Level 3 Communications, Llc Invalidation in a content delivery framework
US9847917B2 (en) 2012-12-13 2017-12-19 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9887885B2 (en) 2012-12-13 2018-02-06 Level 3 Communications, Llc Dynamic fill target selection in a content delivery framework
US10135697B2 (en) 2012-12-13 2018-11-20 Level 3 Communications, Llc Multi-level peering in a content delivery framework
US10142191B2 (en) 2012-12-13 2018-11-27 Level 3 Communications, Llc Content delivery framework with autonomous CDN partitioned into multiple virtual CDNs
US11121936B2 (en) 2012-12-13 2021-09-14 Level 3 Communications, Llc Rendezvous optimization in a content delivery framework
US10608894B2 (en) 2012-12-13 2020-03-31 Level 3 Communications, Llc Systems, methods, and devices for gradual invalidation of resources
US10652087B2 (en) 2012-12-13 2020-05-12 Level 3 Communications, Llc Content delivery framework having fill services
US10701148B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having storage services
US10701149B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having origin services
US10700945B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Role-specific sub-networks in a content delivery framework
US10708145B2 (en) 2012-12-13 2020-07-07 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback from health service
US10742521B2 (en) 2012-12-13 2020-08-11 Level 3 Communications, Llc Configuration and control in content delivery framework
US10791050B2 (en) 2012-12-13 2020-09-29 Level 3 Communications, Llc Geographic location determination in a content delivery framework
US10826793B2 (en) 2012-12-13 2020-11-03 Level 3 Communications, Llc Verification and auditing in a content delivery framework
US20140173043A1 (en) * 2012-12-13 2014-06-19 Level 3 Communications, Llc Devices And Methods Supporting Content Delivery With Adaptation Services
US10841177B2 (en) 2012-12-13 2020-11-17 Level 3 Communications, Llc Content delivery framework having autonomous CDN partitioned into multiple virtual CDNs to implement CDN interconnection, delegation, and federation
US10862769B2 (en) 2012-12-13 2020-12-08 Level 3 Communications, Llc Collector mechanisms in a content delivery network
US10931541B2 (en) 2012-12-13 2021-02-23 Level 3 Communications, Llc Devices and methods supporting content delivery with dynamically configurable log information
US10992547B2 (en) 2012-12-13 2021-04-27 Level 3 Communications, Llc Rendezvous systems, methods, and devices
US20150095509A1 (en) * 2013-09-30 2015-04-02 Verizon Patent And Licensing Inc. Adaptive buffers for media players
US9680904B2 (en) * 2013-09-30 2017-06-13 Verizon Patent And Licensing Inc. Adaptive buffers for media players
US9749381B1 (en) 2016-04-11 2017-08-29 Level 3 Communications, Llc Invalidation in a content delivery network (CDN)
US9591047B1 (en) 2016-04-11 2017-03-07 Level 3 Communications, Llc Invalidation in a content delivery network (CDN)
US10972761B2 (en) 2018-12-26 2021-04-06 Purdue Research Foundation Minimizing stall duration tail probability in over-the-top streaming systems
US11356712B2 (en) 2018-12-26 2022-06-07 At&T Intellectual Property I, L.P. Minimizing stall duration tail probability in over-the-top streaming systems

Similar Documents

Publication Title
US20050086386A1 (en) Shared running-buffer-based caching system
US5721956A (en) Method and apparatus for selective buffering of pages to provide continuous media data to multiple users
US9906590B2 (en) Intelligent predictive stream caching
JP3338451B2 (en) Staggered stream support for video on demand
US7873786B1 (en) Network acceleration and long-distance pattern detection using improved caching and disk mapping
US7251649B2 (en) Method for prioritizing content
EP0936615A2 (en) Disk use scheduling and non-linear video editing systems
US6154813A (en) Cache management system for continuous media system
EP2175383A1 (en) Method and apparatus for improving file access performance of distributed storage system
CN107197359B (en) Video file caching method and device
CN205430501U (en) Mobile terminal web advertisement video and positive video seamless handover device
CN102521279A (en) Playing method, playing system and player of streaming media files
KR20040101746A (en) Device and Method for minimizing transmission delay in data communication system
US7660964B2 (en) Windowing external block translations
CN111212114A (en) Method and device for downloading resource file
US6397274B1 (en) Method and apparatus for analyzing buffer allocation to a device on a peripheral component interconnect bus
US5909693A (en) System and method for striping data across multiple disks for continuous data streaming and increased bus utilization
US20050007953A1 (en) Resource management device, resource management method and recording medium
CN102497389A (en) Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV
US8566521B2 (en) Implementing cache offloading
JP5192506B2 (en) File cache management method, apparatus, and program
US11592986B2 (en) Methods for minimizing fragmentation in SSD within a storage system and devices thereof
US6742019B1 (en) Sieved caching for increasing data rate capacity of a heterogeneous striping group
Chen et al. SRB: Shared running buffers in proxy to exploit memory locality of multiple streaming media sessions
CN109582233A (en) A kind of caching method and device of data

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHEN, BO;CHEN, SONGQING;YAN, YONG;AND OTHERS;REEL/FRAME:016841/0354;SIGNING DATES FROM 20031015 TO 20050207

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION