CN103188296A - Implementation method and equipment for network byte cache - Google Patents


Info

Publication number
CN103188296A
CN103188296A CN2011104492497A CN201110449249A
Authority
CN
China
Prior art keywords
data block
device
cache
client
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104492497A
Other languages
Chinese (zh)
Other versions
CN103188296B (en)
Inventor
才华
梁志勇
郭璞
李浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING NETENTSEC Inc
Original Assignee
BEIJING NETENTSEC Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING NETENTSEC Inc filed Critical BEIJING NETENTSEC Inc
Priority to CN201110449249.7A priority Critical patent/CN103188296B/en
Priority claimed from CN201110449249.7A external-priority patent/CN103188296B/en
Publication of CN103188296A publication Critical patent/CN103188296A/en
Application granted granted Critical
Publication of CN103188296B publication Critical patent/CN103188296B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method and device for network proxy caching. The method comprises the following steps: a server-side cache device receives data-block index information cached in a client-side cache device; the server-side cache device receives and saves the data-block index information; the server-side cache device searches its local data-block index information according to generated data blocks; and if the search succeeds, the index information corresponding to a data block is sent to the client-side cache device. The device comprises a receiving module, a data-block generation module, a search module, a data-block reassembly module and a sending module. With the method and device, the possibility of cache-hit failure is reduced, the error probability is lowered, and the extra pressure that the index-information exchange places on bandwidth resources is mitigated.

Description

Implementation method and device for a network byte cache
Technical field
The present invention relates to network proxy caching technology, and in particular to an implementation method and device for a network byte cache.
Background technology
Network byte caching technology caches locally the Internet resources that a user requests, so that the information is delivered to the end user continuously, completely and in real time within the shortest possible time, thereby reducing the bandwidth load on the network and improving data transmission speed.
Fig. 1 shows the network topology of the prior-art byte cache (Byte Cache) technology. As shown in Fig. 1, the network comprises a pair of network proxy cache devices. One of them, deployed near the user who initiates the connection, is called the client-side network proxy cache device; it receives and forwards access requests from the user and caches the corresponding data. The other, deployed near the server, is called the server-side network proxy cache device; it receives and forwards the resources requested by the user from the server and caches the corresponding data.
Fig. 2 is the client-server signaling diagram of the prior-art byte caching technology. As shown in Fig. 2, the user side establishes a connection with the server side: the client-side network proxy cache device forwards the user's network access request to the server-side network proxy cache device, which obtains the corresponding Internet resource from the server. The server-side network proxy cache device splits the data stream obtained from the server into a number of data blocks based on content fingerprinting technology. To prevent blocks from becoming too large or too small, a minimum and a maximum block size can be specified, and the average block size can also be constrained, for example to 4 KB.
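As a non-authoritative illustration of the content-fingerprint splitting described above, the sketch below cuts a byte stream into blocks with a toy rolling hash. The hash function, the mask and the minimum/maximum sizes are assumptions; only the idea of bounded, content-driven block boundaries with an average around 4 KB is taken from the text.

```python
# Hypothetical sketch of content-defined chunking: a rolling hash marks a
# block boundary when its low bits match a pattern, with hard minimum and
# maximum block sizes enforced (all constants are assumed values).

MIN_SIZE = 1024        # assumed minimum block size
MAX_SIZE = 16 * 1024   # assumed maximum block size
AVG_MASK = 4096 - 1    # ~4 KB average: boundary when (hash & mask) == mask

def split_into_chunks(data: bytes) -> list:
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # toy rolling hash
        length = i - start + 1
        at_boundary = length >= MIN_SIZE and (h & AVG_MASK) == AVG_MASK
        if at_boundary or length >= MAX_SIZE:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):                    # emit the final partial block
        chunks.append(data[start:])
    return chunks
```

Because boundaries depend only on the content, identical content produces identical blocks and therefore identical indexes, which is what makes the later index lookup possible.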
The server-side network proxy cache device sends a data block identifier (index) to the client-side network proxy cache device. If the client-side device has cached the data block for this index, it replaces the index with the cached data block, reassembles the data block and sends it to the user. If the client-side device has not cached the data block for this index, it sends a request for the block to the server-side device; the server-side device sends the original data block for the index, and the client-side device receives it, reassembles it and sends it to the user. The byte caching technology can likewise be used to optimize the data traffic uploaded from the client to the server.
The benefit of the prior-art byte caching technology is that, when the client-side network proxy cache device can restore an index to its corresponding data block with high probability, the system as a whole achieves a high transmission compression ratio, because an index is far smaller than its data block. If this operation fails with high probability, however, original data blocks must frequently be requested from the server-side device, which instead reduces the efficiency of the whole transmission. And if a request for an original data block itself fails, the original information is lost and the transmission over that connection is interrupted.
Summary of the invention
The object of the present invention is to solve the above problems of the prior art.
To achieve this object, in one aspect the invention provides an implementation method for a network byte cache, comprising the following steps: a client-side cache device stores data blocks and their index information, and sends the stored information to a server-side cache device to keep the information synchronized; the server-side cache device receives the data-block index information cached in the client-side cache device; the server-side cache device receives and saves the data-block index information; the server-side cache device searches its local data-block index information according to the data blocks it generates and, if the search succeeds, sends the index information corresponding to the data block to the client-side cache device.
In another aspect, the invention provides a network byte cache device comprising: a receiving module for receiving data-stream information; a data-block generation module for generating data blocks from the data stream; a search module for searching local data blocks by data-block index; a data-block reassembly module for reassembling data blocks; and a sending module for sending data.
The method and device according to the invention can greatly improve the index hit rate, reduce the error probability, and at the same time mitigate the extra pressure that the index-information exchange places on bandwidth resources.
Description of drawings
Exemplary embodiments of the present invention will be understood more completely from the detailed description given below and from the accompanying drawings of different embodiments of the invention; these, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
Fig. 1 is the network topology of the prior-art network byte caching technology;
Fig. 2 is the client-server signaling diagram of the prior-art network byte caching technology;
Fig. 3 shows a distributed-system application scenario of the network byte cache of an embodiment of the invention;
Fig. 4 is a flow chart of an implementation method of the network byte cache of an embodiment of the invention;
Fig. 5 is a flow chart of a server-side network byte caching method of one embodiment of the invention;
Fig. 6 is a flow chart of a server-side network byte caching method of another embodiment of the invention;
Fig. 7 is a flow chart of a client-side network byte caching method of an embodiment of the invention;
Fig. 8 is a schematic structural diagram of a network byte cache device of an embodiment of the invention.
Embodiment
Those of ordinary skill in the art will recognize that the following detailed description of the exemplary embodiments is illustrative only and is not intended to be limiting in any way.
Fig. 3 shows a distributed-system application scenario of the network byte cache of an embodiment of the invention. As shown in the figure, user A accesses server A and server B simultaneously, and user B may likewise access server A and server B simultaneously. Client-side cache device C (device C) forms peer pairs with server-side cache device A (device A) and server-side cache device B (device B). Likewise, client-side cache device D (device D) forms peer pairs with server-side cache devices A and B. A stable information-exchange channel is maintained between peer devices.
In the distributed system of this embodiment, server-side cache device A, server-side cache device B, client-side cache device C and client-side cache device D each carry a distinct feature code (a characteristic value); for example, the feature code of server-side cache device B is 0x0001 and the feature code of client-side cache device C is 0x0010. The feature codes are pairwise orthogonal, i.e. each device occupies its own bit.
Each cache device in the distributed system of this embodiment can build an index table describing the distributed storage of data blocks across the whole system. Each entry in the index table holds the distributed-storage information of one data block.
Table 1 illustrates the contents of an index-table entry:

Table 1
Index | Bitmap | Timestamp | Data block length | Hotspot value
Here, the index is the content-based fingerprint of the data block. The bitmap records where the data block is stored in the distributed system; for example, a bitmap of 0x0011 indicates that the block is stored on server-side cache device B and client-side cache device C, because the feature code of device B is 0x0001, the feature code of device C is 0x0010, and, the feature codes being pairwise orthogonal, 0x0011 = 0x0001 | 0x0010. The timestamp records the time the entry was last accessed (or created). The data block length is the length of the block after the data is split. The hotspot value indicates whether the entry's data block is a hot block, i.e. whether it has been accessed repeatedly in the recent period. The initial hotspot value H_new is 0; each time the entry is accessed it is updated by the formula H_new = H_old * A / (timestamp_new − timestamp_old) + 1, where A is a positive real number, so that recently re-accessed entries gradually gain weight while entries last accessed long ago gradually lose it.
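The hotspot update rule above translates directly into code. The value of A below is an assumed tuning constant, since the text only requires a positive real number:

```python
# Sketch of the hotspot-value update: H_new = H_old * A / (ts_new - ts_old) + 1.
# A is a positive tuning constant (value assumed); timestamps are in seconds.

A = 10.0  # assumed decay constant, any positive real per the text

def update_hotspot(h_old: float, ts_old: float, ts_new: float) -> float:
    """Recently re-accessed entries keep most of their weight; entries
    re-accessed after a long idle interval decay toward the +1 floor."""
    return h_old * A / (ts_new - ts_old) + 1.0
```

Note that each access adds at least 1, so frequently accessed entries accumulate hotspot value even when their previous weight has decayed.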
The creation of new index-table entries is set out below:
In one example, when client-side cache device C caches a data block in its local cache, it searches its local index table. If the index table has no entry for this data block, an entry is created in the local index table, with its bitmap equal to the cache device's own feature code. If a corresponding entry already exists, the data block is already stored on a peer of device C (server-side cache device A or B), and only the bitmap field of the entry needs to be updated, by the formula bitmap_new = bitmap_old | characteristic_value. At the same time, device C sends the newly created index information to its peer devices over the information-exchange channel so that their index information stays synchronized. For example, after device C has stored a data block locally, it sends the new index information to its peers, server-side cache devices A and B. On receiving the index information, devices A and B search their local index tables; if the entry does not exist, a new local entry is created whose bitmap equals the feature code of device C, and if the entry already exists, the bitmap field of the local entry is updated.
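A minimal sketch of this entry-creation/update rule, assuming one-hot feature codes as in the running example (server-side device B = 0x0001, client-side device C = 0x0010). With orthogonal codes, adding a feature code is equivalent to a bitwise OR:

```python
# Each device has a one-hot feature code; an entry's bitmap is the OR of
# the codes of all devices that hold the block. The table layout here is
# a simplification (index -> bitmap only, omitting timestamp and hotspot).

index_table = {}  # index -> bitmap

def record_block(index: str, device_code: int) -> int:
    """Create or update the index-table entry for a newly cached block."""
    if index not in index_table:
        index_table[index] = device_code       # new entry: bitmap = own code
    else:
        index_table[index] |= device_code      # bitmap_new = bitmap_old | code
    return index_table[index]
```

The OR makes the update idempotent: re-announcing a block a device already holds leaves the bitmap unchanged.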
The deletion of index-table entries is set out below:
When a cache device's local cache runs out of resources, or the number of index entries exceeds an upper limit, aging of the cache is triggered: the cache device finds the entry with the lowest hotspot value among the index entries and, if several entries share the lowest hotspot value, picks the one with the earliest timestamp, then deletes that entry and its corresponding data block. When an index entry is deleted together with its data block, the distributed storage of data blocks has changed, so the deletion must be reported to the cache device's peers. When a peer receives the deletion message it updates its local index entry; if the bitmap field of the updated entry is 0, the data block has been deleted from the entire distributed system, and the entry itself is then deleted.
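The eviction choice described above (lowest hotspot value, ties broken by earliest timestamp) can be sketched as a single ordered comparison; the entry representation below is a simplifying assumption:

```python
# Pick the aging victim: minimum hotspot value first, then earliest
# timestamp as the tie-breaker, matching the deletion rule in the text.

def pick_victim(entries: dict) -> str:
    """entries maps index -> (hotspot_value, timestamp); returns the index
    of the entry to evict."""
    return min(entries, key=lambda k: (entries[k][0], entries[k][1]))
```

Using a lexicographic key tuple implements the two-level rule without a separate tie-breaking pass.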
The index information that a cache device and its peers need to synchronize occupies very little space, for example 20 bytes, so the overhead of having a client-side cache device synchronize a single piece of index information to a server-side cache device in its own IP packet is disproportionately high. In addition, synchronizing index information consumes a corresponding share of the network bandwidth, so when bandwidth consumption is already heavy, sending large numbers of small packets for index synchronization should be avoided as far as possible.
In one embodiment, information to be sent is therefore delayed according to the available bandwidth of the current network and the size and age of the pending information, using the following decision formula:
P = A0 * Bandwidth_Available + A1 * Delay + A2 * Message_Size, where A0, A1 and A2 are constants greater than zero; Bandwidth_Available is the currently available bandwidth, i.e. the link bandwidth configured by the user minus the real-time bandwidth consumption; Delay is the time elapsed since the last transmission, i.e. the interval between the current decision moment and the moment the earliest pending synchronization information was produced; and Message_Size is the total size of the accumulated, not-yet-sent information, i.e. the sum of the sizes of the information produced during the delay period.
The decision is triggered by the generation of index information and is then re-evaluated at a certain frequency. When P exceeds a preset threshold, all pending information is sent; when P is below the threshold, transmission is suspended. Once all information has been sent, the decision process stops.
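A hedged sketch of the delayed-send decision follows. The weights A0-A2 and the threshold are assumed values, as the text only requires positive constants:

```python
# Sketch of the delayed-send decision P = A0*BW + A1*Delay + A2*Size.
# All constants are assumed tuning values (the text requires them > 0).

A0, A1, A2 = 1.0, 2.0, 0.5   # assumed weights
THRESHOLD = 100.0            # assumed decision threshold

def should_send(bandwidth_available: float, delay_s: float,
                pending_bytes: int) -> bool:
    """Return True when all accumulated index-sync messages should be
    flushed: plenty of spare bandwidth, long wait, or a large backlog."""
    p = A0 * bandwidth_available + A1 * delay_s + A2 * pending_bytes
    return p > THRESHOLD
```

All three terms push in the same direction: idle bandwidth, a long wait or a large backlog each raises P toward the send threshold, so small updates are naturally batched.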
The distributed system of the network byte cache of this embodiment uses distributed storage of index information: after a data stream has been cut into data blocks, the server-side cache device can decide, by querying its local index table, whether to send an index to the client-side cache device in place of each data block. This greatly improves the index hit rate, reduces the error probability, and mitigates the extra pressure that the index-information exchange places on bandwidth resources.
Fig. 4 is a flow chart of an implementation method of the network byte cache of an embodiment of the invention. The flow comprises steps 301-303.
In step 301, the server-side cache device receives the data-block index information cached in the client-side cache device.
The client-side cache device caches on its local disk the data-block information obtained from the server-side cache device, builds a data-block index from the block contents, and stores the index in its index table. Data blocks with identical content have identical indexes, and different indexes map to different data blocks.
When the client's index-table information is updated, the client-side cache device sends the index update to the server-side cache device, which updates its local index table according to the received index information, keeping it synchronized with the index information of the client-side cache device.
In step 302, the server-side cache device receives and saves the data-block index information.
The server-side cache device receives the index-entry information for the data block sent by the client-side cache device in step 301. If the entry is not in the local index table, a new local entry is created whose bitmap equals the feature code of the client-side cache device; if the entry already exists in the local index table, the bitmap field of the local entry is updated.
In step 303, the server-side cache device searches its local data-block index information according to the generated data blocks and, if the search succeeds, sends the index information corresponding to the data block to the client-side cache device.
Specifically, the server-side cache device obtains the resource requested by the user from the server, splits the resource's data stream into a number of data blocks, and builds a corresponding index for each block. The server-side cache device then searches its local data-block index information using the generated data-block index; if the search succeeds, it sends the index information corresponding to the data block to the client-side cache device and keeps the block mapped by that index in memory.
By synchronizing the data-block index information of the client-side and server-side network proxy cache devices, the embodiment of the invention reduces the possibility of cache-hit failure and lowers the error probability.
Fig. 5 is a flow chart of a server-side network byte caching method of one embodiment of the invention. After the data-block index information of the client-side proxy cache device and the server-side network proxy cache device has been synchronized, the server-side network byte caching method comprises steps 401-405:
In step 401, the server-side network proxy cache device obtains a data stream from the server side.
Specifically, the client-side network proxy cache device forwards the user's network access request to the server-side network proxy cache device, which obtains the corresponding Internet resource from the server.
In step 402, service end network agent buffer memory equipment will be divided a plurality of data blocks from the data flow that server obtains, and set up corresponding data block index according to data block contents.
In step 403, the server-side network proxy cache device judges whether the client has stored the data block mapped by the index.
Specifically, the server-side network proxy cache device judges from the synchronized data-block index information whether the client-side network proxy cache device holds the data block. If the client-side device is judged to have cached the block, step 404 is executed; otherwise step 405 is executed.
In step 404, the server-side network proxy cache device sends the data-block index obtained in step 402 to the client-side network proxy cache device and keeps the data block obtained in step 402 cached in memory.
In step 405, the server-side network proxy cache device sends the data block obtained in step 402 to the client-side network proxy cache device.
In this embodiment, when the network is unstable and the server-side network proxy cache device receives no reply from the client-side network proxy cache device within a preset time, it proactively pushes the obtained data block to the client-side network proxy cache device. The reply from the client-side network proxy cache device is either an index-replacement success signal or an index-replacement failure signal.
The weak point of the method for service end network bytes buffer memory shown in Figure 5 is that the data in buffer piece can consume the internal memories of service end buffer memory equipment in a large number, and whether transmission successfully needs the affirmation of client-cache equipment.
Fig. 6 is a flow chart of a server-side network byte caching method of another embodiment of the invention. Fig. 6 is an optimization of the method of Fig. 5.
After the data-block index information of the client-side proxy cache device and the server-side network proxy cache device has been synchronized, the server-side network proxy cache device manages its data-block cache with a least-recently-used (LRU) algorithm: hot data blocks that are accessed repeatedly remain stored in the data cache on local disk, so indexes and data blocks with high hotspot values stay in the index table and on the device's disk, while the entries that are frequently replaced are the indexes and data blocks with low hotspot values. The cache and index information of hot data blocks on a peer device are therefore comparatively stable, so when the server-side cache device replaces a data block with its index, there is a high probability that the client-side cache device can successfully restore the index to the data block.
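The LRU management of the data-block cache could look like the following minimal sketch; the capacity and interface are assumptions, and only the eviction policy comes from the text:

```python
# Minimal LRU sketch for the data-block cache: an OrderedDict keeps
# insertion/access order, so the least recently used block is at the front.

from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._blocks = OrderedDict()  # index -> data block

    def get(self, index: str):
        if index not in self._blocks:
            return None
        self._blocks.move_to_end(index)       # mark as most recently used
        return self._blocks[index]

    def put(self, index: str, block: bytes) -> None:
        if index in self._blocks:
            self._blocks.move_to_end(index)
        self._blocks[index] = block
        if len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)  # evict least recently used
```

Under this policy, repeatedly accessed (hot) blocks keep migrating to the back of the order and survive, which is exactly the stability property the paragraph above relies on.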
The server-side byte caching method of Fig. 6 comprises steps 501-507:
In step 501, the server-side network proxy cache device obtains a data stream from the server side.
Specifically, the client-side network proxy cache device forwards the user's network access request to the server-side network proxy cache device, which obtains the corresponding Internet resource from the server.
In step 502, service end network agent buffer memory equipment will be divided a plurality of data blocks from the data flow that server obtains, and set up corresponding data block index according to data block contents.
In step 503, the server-side network proxy cache device judges whether the client has stored the data block.
Specifically, the server-side network proxy cache device judges from the index information synchronized with the client-side network proxy cache device whether the client-side device holds the data block. If the client-side device is judged to have cached the block, step 504 is executed; otherwise step 506 is executed.
In step 504, the server-side network proxy cache device further judges whether the data block exists in the data-block cache on its local disk. If the block exists in the local-disk cache, step 505 is executed; otherwise step 507 is executed.
In step 505, the server-side network proxy cache device sends the index of the data block to the client-side network proxy cache device.
In step 507, the server-side network proxy cache device sends the index of the data block to the client-side network proxy cache device and caches the data block in memory.
Under normal circumstances, the failure probability of the server-side byte caching method of Fig. 6 is very small. In order to detect anomalies, however, the server-side cache device needs to count, over a period of time, the block-replacement failure messages received from the client-side cache device. If there are too many failures, the peer devices' index information has become seriously out of sync, and the method must switch back to the server-side byte caching method of Fig. 5.
In one example, P = e^(−a·n), where n is the number of times within a period that a data-block recovery failure at the client-side cache device forced the server-side cache device to read the original data block from local disk, and a is a real number greater than 0. When P falls below a preset threshold, the server-side byte caching method of Fig. 5 is adopted.
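The fallback check P = e^(−a·n) can be sketched as follows; the constant a and the threshold are assumed values:

```python
# Sketch of the fallback criterion: P = exp(-a * n), where n counts
# block-recovery failures in a window. Constants are assumed values.

import math

A_CONST = 0.5      # assumed positive real number a
P_THRESHOLD = 0.1  # assumed; below this, fall back to the Fig. 5 method

def should_fall_back(n_failures: int) -> bool:
    """More failures drive P toward 0; crossing below the threshold
    triggers the switch back to the confirmed-delivery method."""
    p = math.exp(-A_CONST * n_failures)
    return p < P_THRESHOLD
```

The exponential form means P decays smoothly from 1 with each additional failure, so occasional failures are tolerated while a burst of them quickly triggers the fallback.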
Fig. 7 is a flow chart of a client-side network byte caching method of an embodiment of the invention. The client-side network byte caching method comprises steps 601-604.
In step 601, the client-side network proxy cache device obtains a data-block index from the server-side network proxy cache device. Data blocks with identical content have identical indexes, and different indexes map to different data blocks.
In step 602, the client-side network proxy cache device searches its local cache for the data block using the obtained data-block index. If the search succeeds, step 603 is executed; otherwise step 604 is executed.
In step 603, the client-side network proxy cache device replaces the obtained data-block index with the data block it maps to, reassembles the data block and sends it to the user.
In step 604, the client-side network proxy cache device sends a request for the original data block of the index to the server-side network proxy cache device.
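The client-side flow of steps 601-604 can be sketched as below; the cache and transport interfaces here are hypothetical stand-ins, not the patent's actual API:

```python
# Sketch of the client-side handling of one received index: replace it with
# the locally cached block (step 602/603), or request the original block
# from the server-side device on a miss (step 604) and cache the result.

def handle_index(index: str, local_cache: dict, request_block) -> bytes:
    """local_cache maps index -> block; request_block(index) is an assumed
    callback that fetches the original block from the server-side device."""
    block = local_cache.get(index)     # step 602: look up the local cache
    if block is None:                  # step 604: cache miss
        block = request_block(index)   # fetch the original data block
        local_cache[index] = block     # store it for future hits
    return block                       # step 603: block ready for reassembly
```

On a hit no block crosses the network at all, which is where the transmission compression ratio of the scheme comes from.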
Fig. 8 is a schematic structural diagram of a network byte cache device of an embodiment of the invention. The network proxy cache device of Fig. 8 can serve either as a client-side network proxy cache device or as a server-side network proxy cache device. In either role, the network proxy cache device of the present invention is only an example; provided the object of the invention can be achieved, those skilled in the art may modify it freely.
As shown in Fig. 8, the network byte cache device of the embodiment comprises a receiving module 701, a data-block generation module 702, a search module 703, a data-block reassembly module 704, a sending module 705 and a storage module 706.
The receiving module 701 receives data-stream information. When the device serves as a client-side network proxy cache device, the receiving module 701 receives the network access requests from the user and the data-block information from the server side; when it serves as a server-side network proxy cache device, the receiving module 701 receives the network access requests from the client and the data stream from the server side.
The data-block generation module 702 generates data blocks from the data stream. When the device serves as a server-side network proxy cache device, the data-block generation module 702 splits the data stream obtained from the server into a number of data blocks using content fingerprinting and builds the corresponding data-block index information for the blocks.
The search module 703 searches local data blocks by data-block index. When the device serves as a client-side network proxy cache device, the search module 703 searches the local cache, according to the data-block index information received by the receiving module 701, for the data block the index maps to; when it serves as a server-side network proxy cache device, the search module 703 searches for the corresponding data-block index according to the data blocks generated by the data-block generation module 702.
The data-block reassembly module 704 reassembles the data blocks in the local cache according to the data-block index information and the sequence numbers of the blocks; the reassembled data are sent out by the sending module 705.
The sending module 705 transmits data. When the device serves as a client-side network proxy cache device, the sending module 705 forwards the user's network access requests and returns the server data received in the peer's response messages; when it serves as a server-side network proxy cache device, the sending module 705 can send the generated data-block index information to the client-side network proxy cache device.
It should be noted that the network byte cache device can be an independent network device, or can be housed in modular form inside network devices such as gateways or Internet-behavior-management devices.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein can be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in random-access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Although specific embodiments of the present invention have been illustrated and described, those skilled in the art can obviously make changes and modifications based on the teaching herein without departing from the exemplary embodiments of the present invention and their broader aspects. The appended claims are therefore intended to include within their scope all such variations and changes that do not depart from the true spirit and scope of the exemplary embodiments of the present invention.

Claims (11)

1. A method for implementing a network byte cache, characterized in that the method comprises:
a server-side cache device receiving data block index information cached in a client-side cache device;
the server-side cache device receiving and saving the data block index information; and
the server-side cache device searching local data block index information according to a generated data block and, if the search succeeds, sending the index information corresponding to that data block to the client-side cache device.
2. The method according to claim 1, characterized in that: the client-side cache device and the server-side cache device each have a feature code; the index information comprises an entry bitmap composed of the feature codes, and the entry bitmap is used to determine that a data block is cached in the cache device corresponding to a feature code.
3. The method according to claim 1, characterized in that: the data block index information comprises one or more of an index entry, a bitmap entry, a timestamp entry, a data block length entry, and a hotness value entry.
4. The method according to claim 1, characterized in that: when the server-side cache device receives the data block index information, it searches the local index table; if the entry is not in its index table, a new entry is created locally, whose bitmap entry equals the feature code of the client-side cache device; if the entry already exists in its index table, the entry bitmap in the local index table is updated.
5. The method according to claim 1, characterized in that: the client-side cache device and/or the server-side cache device deletes aged entries, and the data blocks corresponding to those entries, according to the hotness values in the hotness value entries.
6. The method according to claim 1, characterized in that: the client-side cache device and/or the server-side cache device keeps index information synchronized by means of an information synchronization policy.
7. The method according to claim 6, characterized in that: the information synchronization policy is determined by one or more of the following factors: the available network bandwidth resources, the size of the information to be sent, and the time by which the sending of the information is delayed.
8. The method according to claim 1, characterized in that the method further comprises:
the server-side cache device judging, according to the generated data block and the index information, whether the client-side cache device has stored the data block corresponding to the index; if so, sending the data block index to the client-side cache device and caching the data block in local memory; if not, sending the data block to the client-side cache device.
9. The method according to claim 1, characterized in that the method further comprises:
in the case where the server-side cache device judges, according to the generated data block and the index information, that the client-side cache device has stored the data block corresponding to the index, judging whether the data block exists in the local cache; if so, sending the data block to the client-side cache device; otherwise, sending the index of the data block to the client-side cache device and caching the data block locally.
10. The method according to claim 1, characterized in that: the server-side cache device uses a least recently used algorithm, storing repeatedly used hot-spot data blocks in the data cache on the local disk.
11. A network byte cache device, characterized by comprising:
a receiving module (701) for receiving data stream information;
a data block generation module (702) for generating data blocks and their indexes from the data stream;
a search module (703) for looking up local data blocks according to data block indexes; and
a sending module (705) for transmitting data.
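Outside the claim language itself, the decision flow recited in claims 8 and 9 can be sketched as follows. This is an illustrative reading aid under assumed data structures (a set of indexes the client is known to hold, and a local dictionary acting as the server's cache); none of the identifiers come from the patent:

```python
def serve_block(block: bytes, index: str, client_indexes, local_cache: dict):
    """Sketch of claim 8: when the client is known to hold the block,
    send only its compact index; otherwise send the full block."""
    if index in client_indexes:
        # Client already stores this block: transmit the short index,
        # and cache the block locally for future requests (claim 9's
        # refinement checks the local cache before re-caching).
        if index not in local_cache:
            local_cache[index] = block
        return ("index", index)
    # Client lacks the block: the full payload must cross the link.
    return ("data", block)

local = {}
print(serve_block(b"payload", "f00d", {"f00d"}, local))  # ('index', 'f00d')
print(serve_block(b"payload", "beef", set(), local)[0])  # data
```

The bandwidth saving claimed by the method comes from the first branch: a short index replaces the full data block whenever the index tables show the peer already caches it.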
CN201110449249.7A 2011-12-29 Implementation method and equipment for a network byte cache (Active; granted as CN103188296B) (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110449249.7A CN103188296B (en) 2011-12-29 Implementation method and equipment for a network byte cache

Publications (2)

Publication Number Publication Date
CN103188296A true CN103188296A (en) 2013-07-03
CN103188296B CN103188296B (en) 2016-12-14

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant