CN102902593A - Protocol distribution processing system based on cache mechanism - Google Patents


Info

Publication number
CN102902593A
CN102902593A (application number CN201210371012.6A)
Authority
CN
China
Prior art keywords
data
processing
protocol
server
protocol distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103710126A
Other languages
Chinese (zh)
Other versions
CN102902593B (en)
Inventor
Xie Qing (谢清)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Founder International Co Ltd
Original Assignee
Founder International Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Founder International Co Ltd
Priority to CN201210371012.6A priority Critical patent/CN102902593B/en
Publication of CN102902593A publication Critical patent/CN102902593A/en
Application granted granted Critical
Publication of CN102902593B publication Critical patent/CN102902593B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a protocol distribution processing system based on a caching mechanism. The system comprises a protocol distribution server and a service processing server. A protocol distribution routing table is created in the protocol distribution server according to service type, hotspot frequency of use, data direction, occupied data bandwidth, and whether sub-packet processing is required; service processing tasks are mapped through this routing table to the corresponding service processing server for processing. The service processing server performs data indexing and data updating of the service processing tasks in memory, locating the position of the corresponding data memory block through a constructed index table. Backup mechanisms at the different processing layers give the system a spontaneous recovery mechanism when a problem arises at any link, guaranteeing its availability and reliability.

Description

Protocol distribution processing system based on a caching mechanism
Technical field
The invention belongs to the technical field of mass data processing, and specifically relates to a protocol distribution processing system based on a caching mechanism.
Background technology
On-line monitoring based on the GPRS/3G model is increasingly widespread, and within a single application the number of connected devices keeps growing. As devices multiply, the device types may differ, and uplink and downlink service data must be processed differently, so a single access point places very high demands on the server and the network. Moreover, most monitoring applications need the latest device state in real time; with a traditional processing method (store to a table first, then query it on demand) the user experience is poor, and the data are often felt to be stale.
Some solutions already exist. Chinese patent application No. 201110182296 provides a mass data processing method and system: a scheduling module decides, from the current service information and a preset scheduling strategy, whether to invoke a data-warehouse action statement (HQL); when the answer is yes, it derives an invocation order from the same inputs and invokes HQL on the data-warehouse platform in that order; the platform reads the corresponding configuration from a relational database, triggers HQL to operate on the data held on the distributed platform according to the invocation order, and stores the generated result data back to the distributed platform. That application also discloses a corresponding mass data processing system; the method and system increase the flexibility of mass data processing.
Chinese patent application No. 200610098074 provides a mass data transfer method between pipelined processes based on message queues. In the billing product work-flow, a call ticket passes through at least the formatting, normalization (also called sorting), de-duplication, pricing, and warehousing processes. The billing method combines single-step and whole-batch commit; by configuration it uses different message-queue types under different environments for automatic task allocation and load-balancing management, distributing tickets to different message queues according to the service logic in a user-defined deployment. With that method, all mass call-ticket data of the billing system is transferred between processes through message queues, the processing happens entirely in memory without system I/O overhead, and speed improves greatly; the pipelined, parallel, message-queue-based design raises system throughput markedly, leading domestic and foreign billing vendors in processing speed.
However, most current techniques are database-centered real-time applications, while in real applications the monitoring data of the devices can reach 400,000 records in a single day. For service data of this magnitude, traditional database reads require indexes on the one hand and are quite slow on the other. Most terminal upload protocols are based on TCP/UDP, and for ease of deployment a single receiving server IP address and port is usually designated; that server must both parse the protocols and store the data, and under a very large data volume this causes service congestion. In addition, most device-communication services today achieve little load balancing or distributed processing, so performance is often low; the data processing of most device communication is single-threaded, and caching mechanisms are lacking. Hence the present invention.
Summary of the invention
The object of the invention is to provide a protocol distribution processing system based on a caching mechanism, solving the prior-art problems that mass data processing is prone to service congestion and often has relatively low performance.
In order to solve these problems of the prior art, the technical scheme provided by the invention is:
A protocol distribution processing system based on a caching mechanism, characterized in that the system comprises system resource registration, a protocol distribution server, and a service processing server; the protocol distribution server builds a protocol distribution routing table according to service type, hotspot frequency of use, data direction, occupied data bandwidth, and whether sub-packet processing is required, and maps service processing tasks through this routing table to the corresponding service processing server for processing; the service processing server performs data indexing and data updating of the service processing tasks in memory, and locates the position of the corresponding data memory block through the constructed index table.
Preferably, the service processing server decides whether to perform asynchronous read or asynchronous write operations according to how frequently the data are used, building and asynchronously refreshing the relational database.
Preferably, when an anomaly occurs in the memory-cached data of the service processing server, the memory cache is rebuilt from the relational database according to the indexed critical data fields.
The invention further solves these problems through single-point access configuration, load balancing, protocol distribution processing, memory-based data indexing and caching, and asynchronous refreshing of the RDBMS. The key of the technical scheme is that the database is not operated directly: the latest device status data are accessed from the memory cache, while the data are saved to the database asynchronously.
Specifically:
1) Single-point access configuration
All terminal devices access the system through a single IP configuration, which greatly simplifies the initial hardware configuration.
The access server should be deployed where the external network bandwidth is largest; its main job is protocol distribution.
2) System resource registration
System resources include CPU resources, network resources, memory resources, and server resources. All resources should be registered at system initialization, with their location, invocation parameters, and so on, and combined with the distribution routing table to form a one-to-one (information frame / system resource) relationship.
3) Protocol distribution processing
The information frames uploaded by devices follow different protocols. A routing decision can be made according to service type, hotspot frequency of use, data direction, occupied data bandwidth, whether sub-packet processing is required, and so on, with queuing according to the system resource situation. An internal protocol distribution routing table maps each frame to the corresponding service processing server for processing.
4) Load balancing
By the 80/20 rule, about 20% of the protocols in a typical device communication protocol set are hotspot protocols; more system resources should be allocated to this 20% to improve processing performance, so a load-balancing step is applied. When balancing load, the path of each load distribution must be remembered.
5) Memory-based data indexing and caching
In-memory data updating and data indexing greatly improve processing efficiency. The system rebuilds the business system so that its key elements and data relations live in memory, with a data refresh mechanism that keeps the in-memory data always up to date. An index table is built over the in-memory data, so the position of the corresponding data memory block can be located as fast as possible.
6) Asynchronous refresh of the relational database
Asynchronous refreshing comprises asynchronous read and asynchronous write operations, and the two are performed separately. Asynchronous reads load data into memory by strategy, according to how frequently the data are used; asynchronous writes likewise decide the write frequency by strategy, according to the usage character of the data.
7) Cache error handling
When an anomaly appears in the memory-cached data, an error handling mechanism rebuilds the memory from the relational database according to the indexed critical data fields, guaranteeing normal operation of the system.
In the technical scheme, the protocol distribution step decides the subsequent protocol forwarding through the routing table; load balancing handles the hot data of the hotspot protocols; the memory-based data index and cache let data be processed without I/O at run time, accelerating reads; and the asynchronous refresh of the relational database stores permanent data reliably through mature relational database technology.
For maintenance: in-memory data is inherently fragile, so an automatic remedy mechanism keeps the system running without interruption when damage occurs, and handles the availability and integrity of the data when the in-memory caching mechanism collapses.
Compared with prior-art schemes, the advantages of the invention are:
The invention is a general solution for connecting communication terminal devices, effectively solving the problems of many device categories, many devices, and many protocols. Previous architectures often considered only the single-access case and neglected the overall process, because business data often needs unified management. Backup mechanisms at the different processing layers give the system a spontaneous recovery mechanism when a problem arises at any link, guaranteeing its availability and reliability.
Description of drawings
The invention is further described below with reference to the drawings and embodiments:
Fig. 1 is the system architecture diagram of the protocol distribution processing system based on the caching mechanism;
Fig. 2 is the protocol distribution work-flow diagram of the protocol distribution processing system based on the caching mechanism.
Embodiment
The above scheme is further described below with reference to a specific embodiment. It should be understood that the embodiment serves to explain the invention, not to limit its scope. The implementation conditions used in the embodiment may be further adjusted to the conditions of a specific vendor; unmarked implementation conditions are generally those of normal experiments.
Embodiment
As shown in Fig. 1, the protocol distribution processing system based on a caching mechanism obtained in this embodiment comprises a protocol distribution server and a service processing server; the protocol distribution server builds a protocol distribution routing table according to service type, hotspot frequency of use, data direction, occupied data bandwidth, and whether sub-packet processing is required, and maps service processing tasks through this routing table to the corresponding service processing server for processing; the service processing server performs data indexing and data updating of the service processing tasks in memory, and locates the position of the corresponding data memory block through the constructed index table.
Specifically:
1) Single-point access configuration
All terminal devices access the system through a single IP configuration, which greatly simplifies the initial hardware configuration. For example: the unified access server is set to 10.10.10.1, receiving port 10000.
The access server should be deployed where the external network bandwidth is largest; its main job is protocol distribution.
This embodiment uses the heartbeat information frame, the most frequently used frame for checking device state.
Heartbeat frame information: category (heartbeat query), data direction (read-only), frequency of use (often), data size (small), priority (normal).
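Purely as an illustration, the routing attributes listed above could be modeled as a small record type; the class and field names below are assumptions for this sketch, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfoFrame:
    # Routing attributes of an uploaded information frame (illustrative names).
    category: str        # service type, e.g. "heartbeat_query"
    direction: str       # data direction: "read_only", "up", "down"
    frequency: str       # hotspot frequency of use: "often" / "rare"
    size: str            # occupied bandwidth class: "small" / "large"
    priority: str        # scheduling priority
    subpackaged: bool    # whether multi-packet (sub-packet) handling is needed

# The heartbeat frame from the example: read-only, frequent, small, normal priority.
heartbeat = InfoFrame("heartbeat_query", "read_only", "often",
                      "small", "normal", False)
```

Because the record is frozen, a frame's routing attributes cannot change after it is classified, which matches the idea of a fixed routing decision per frame type.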
2) Resource registration processing
All application server resources: CPU resources, network resources, memory resources, server resources. All resources should be registered at system initialization, with their location, invocation parameters, and so on, and combined with the distribution routing table to form a one-to-one (information frame / system resource) relationship.
System resource registration:
Resource 1 (HeartQuery1): network address 10.10.11.1, port 9999, available CPU (100% of a 2.2 GHz processor), available network resources (1000 Mbit shared), free memory (100% of 32 GB).
Resource 2 (HeartQuery2): network address 10.10.11.2, port 9999, available CPU (100% of a 2.2 GHz processor), available network resources (1000 Mbit shared), free memory (100% of 32 GB).
Information frame / system resource correspondence:
The heartbeat frame should use resources (HeartQuery1, HeartQuery2).
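A minimal sketch of the registration step and the one-to-one (information frame / system resource) table, using the HeartQuery1/HeartQuery2 figures above; the data layout and the `resources_for` helper are illustrative assumptions, not part of the patent:

```python
# Resource registry built at system initialization (illustrative layout).
resources = {
    "HeartQuery1": {"addr": ("10.10.11.1", 9999), "cpu_ghz": 2.2,
                    "net_mbps": 1000, "mem_gb": 32},
    "HeartQuery2": {"addr": ("10.10.11.2", 9999), "cpu_ghz": 2.2,
                    "net_mbps": 1000, "mem_gb": 32},
}

# Distribution routing table: information frame type -> registered resources.
routing_table = {"heartbeat": ["HeartQuery1", "HeartQuery2"]}

def resources_for(frame_type):
    """Return the registered resources that may process this frame type."""
    return [resources[name] for name in routing_table.get(frame_type, [])]
```

A frame type with no route simply yields an empty candidate list, leaving the error policy to the caller.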
3) Protocol distribution processing
The information frames uploaded by devices follow different protocols. A routing decision can be made according to service type, hotspot frequency of use, data direction, occupied data bandwidth, whether sub-packet processing is required, and so on. An internal protocol distribution routing table maps each frame to the corresponding service processing server for processing.
As shown in Fig. 2, the size of the untreated queue can be checked when allocating resources, and information frames are forwarded according to the different queue sizes. For example:
Heartbeat frame information: category (heartbeat query), data direction (read-only), frequency of use (often), data size (small), priority (normal). The subsequent processing application obtained for a heartbeat frame is HeartQuery1 or HeartQuery2.
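The queue-size routing decision of Fig. 2 might look like the following sketch; the function name and the shortest-queue tie-breaking rule are assumptions made for illustration:

```python
def pick_server(frame_type, routing_table, queue_sizes):
    """Forward a frame to the candidate server with the smallest
    untreated queue (per-server queue sizes, as checked in Fig. 2)."""
    candidates = routing_table.get(frame_type, [])
    if not candidates:
        raise KeyError("no route for frame type %r" % frame_type)
    return min(candidates, key=lambda name: queue_sizes.get(name, 0))

routing_table = {"heartbeat": ["HeartQuery1", "HeartQuery2"]}
queue_sizes = {"HeartQuery1": 20, "HeartQuery2": 200}
target = pick_server("heartbeat", routing_table, queue_sizes)  # "HeartQuery1"
```

With the queue sizes from the later load-balancing example, the frame lands on the less backlogged HeartQuery1.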
4) Load balancing
By the 80/20 rule, about 20% of the protocols in a typical device communication protocol set are hotspot protocols; this 20% is given optimized processing to improve performance, so a load-balancing step is applied. When balancing load, the path of each load distribution must be remembered.
Taking the same example as above:
Current resource usage — HeartQuery1: CPU utilization 10%, MEM (memory usage) 10%, LAN (network usage) 10%, 20 pending requests.
HeartQuery2: CPU utilization 80%, MEM (memory usage) 80%, LAN (network usage) 10%, 200 pending requests.
Heartbeat frames are processed by HeartQuery1 and HeartQuery2; since HeartQuery1's utilization is low and HeartQuery2's is high, the subsequent processing system automatically allocates more tasks to HeartQuery1.
If all resources approach 99% utilization, reallocating further information frames would overflow them, so the system looks for inactive resources or other applications whose resource utilization is low, activates the HeartQuery application on such a resource, and distributes tasks to it.
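The utilization-based choice described above can be sketched as follows; the scoring rule (worst of CPU/MEM/LAN) and the 99% saturation threshold are illustrative assumptions, not figures from the patent:

```python
def least_loaded(candidates, usage):
    """Pick the candidate whose worst metric (CPU, MEM, LAN in percent)
    is lowest; the max-of-three scoring rule is an assumption."""
    return min(candidates, key=lambda name: max(usage[name]))

def dispatch(candidates, usage, saturation=99.0):
    """Prefer the least-loaded candidate; if even it is near saturation,
    return None so the caller can activate an idle resource instead."""
    best = least_loaded(candidates, usage)
    if max(usage[best]) >= saturation:
        return None
    return best

usage = {"HeartQuery1": (10.0, 10.0, 10.0),   # CPU, MEM, LAN from the example
         "HeartQuery2": (80.0, 80.0, 10.0)}
```

With the example figures, `dispatch` steers new heartbeat tasks to HeartQuery1; the `None` branch models the spill-over to an unactivated resource.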
5) Memory-based data indexing and caching
In-memory data updating and data indexing greatly improve processing efficiency. The system rebuilds the business system so that its key elements and data relations live in memory, with a data refresh mechanism that keeps the in-memory data always up to date. An index table is built over the in-memory data, so the position of the corresponding data memory block can be located as fast as possible.
Continuing the same process: HeartQuery1 and HeartQuery2 share the memory block MEMBlock1, which is synchronized with the key business database at initialization. HeartQuery1 and HeartQuery2 jointly update and maintain the information in this data block, judging the real-time validity of the data by request time.
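One way to picture the shared memory block and its index table (MEMBlock1 in the example) is the sketch below; the class name, record layout, and key field are assumptions for illustration:

```python
class MemBlock:
    """Shared in-memory cache block with an index table mapping a key
    field to a slot, so lookups avoid the database entirely."""
    def __init__(self, initial_rows):
        self._slots = list(initial_rows)             # the data memory block
        self._index = {row["device"]: i              # the index table
                       for i, row in enumerate(self._slots)}

    def lookup(self, device):
        # O(1) location of the data memory block position via the index table
        return self._slots[self._index[device]]

    def update(self, device, **fields):
        self._slots[self._index[device]].update(fields)

mem_block1 = MemBlock([{"device": "dev-1", "state": "online", "ts": 0}])
mem_block1.update("dev-1", state="offline", ts=100)
```

Both reads and writes go through the index table, so the latest device state is always served from memory, matching the patent's no-direct-database-access design.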
6) Asynchronous refresh of the relational database
Asynchronous refreshing comprises asynchronous read and asynchronous write operations, and the two are performed separately. Asynchronous reads load data into memory by strategy, according to how frequently the data are used; asynchronous writes likewise decide the write-back frequency by strategy, according to the usage character of the data.
Same example as above:
MEMBlock1 holds heartbeat data and is written back to the database every 10 seconds.
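The 10-second write-back could be sketched with a simple background timer, as below; the patent leaves the refresh strategy policy-driven, so the timer, the `write_to_db` callback, and all names here are assumptions:

```python
import threading

def start_writeback(mem_block, write_to_db, interval=10.0):
    """Flush the cache block to the relational database once immediately,
    then again every `interval` seconds on a daemon timer (illustrative)."""
    def flush():
        write_to_db(list(mem_block))      # snapshot the block, then persist
        timer = threading.Timer(interval, flush)
        timer.daemon = True               # do not keep the process alive
        timer.start()
    flush()

# Simulated MEMBlock1 and database sink:
mem_block1 = [{"device": "dev-1", "state": "online"}]
db_writes = []
start_writeback(mem_block1, db_writes.append, interval=10.0)
```

The write path is entirely decoupled from the read path: readers keep hitting memory while the timer persists snapshots in the background.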
7) Cache error handling
When an anomaly appears in the memory-cached data, an error handling mechanism rebuilds the memory from the relational database according to the indexed critical data fields, guaranteeing normal operation of the system.
Same example as above:
If MEMBlock1 crashes or a restart causes data loss, it is initialized again at once and the memory block is rebuilt.
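The rebuild-on-loss step could be sketched as below, assuming the relational database can supply the surviving rows keyed by the indexed critical field; the function and field names are illustrative:

```python
def rebuild_cache(fetch_rows, key_field="device"):
    """Rebuild the memory cache and its index table from the relational
    database after a crash or restart, keyed on the indexed critical field."""
    slots = list(fetch_rows())
    index = {row[key_field]: i for i, row in enumerate(slots)}
    return slots, index

# Rows that survived in the relational store after MEMBlock1 was lost:
db_rows = [{"device": "dev-1", "state": "online"},
           {"device": "dev-2", "state": "offline"}]
slots, index = rebuild_cache(lambda: db_rows)
```

Because the asynchronous write-back kept the relational store current, the rebuilt block and index are immediately consistent with the last persisted state.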
The above example only illustrates the technical concept and characteristics of the invention; its purpose is to let those familiar with the art understand and implement the invention, not to limit its scope of protection. All equivalent transformations or modifications made according to the spirit of the invention should be covered by its scope of protection.

Claims (3)

1. A protocol distribution processing system based on a caching mechanism, characterized in that the system comprises system resource registration processing, a protocol distribution server, and a service processing server; the protocol distribution server builds a protocol distribution routing table according to service type, hotspot frequency of use, data direction, occupied data bandwidth, whether sub-packet processing is required, and the resource usage of the system, and maps service processing tasks through this routing table to the corresponding service processing server for processing; the service processing server performs data indexing and data updating of the service processing tasks in memory, and locates the position of the corresponding data memory block through the constructed index table.
2. The protocol distribution processing system based on a caching mechanism according to claim 1, characterized in that the service processing server decides whether to perform asynchronous read or asynchronous write operations according to how frequently the data are used, building and asynchronously refreshing the relational database.
3. The protocol distribution processing system based on a caching mechanism according to claim 2, characterized in that when an anomaly occurs in the memory-cached data of the service processing server, the memory cache is rebuilt from the relational database according to the indexed critical data fields.
CN201210371012.6A 2012-09-28 2012-09-28 Protocol distribution processing system based on caching mechanism Active CN102902593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210371012.6A CN102902593B (en) 2012-09-28 2012-09-28 Protocol distribution processing system based on caching mechanism


Publications (2)

Publication Number Publication Date
CN102902593A (en) 2013-01-30
CN102902593B (en) 2016-05-25

Family

ID=47574839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210371012.6A Active CN102902593B (en) Protocol distribution processing system based on caching mechanism

Country Status (1)

Country Link
CN (1) CN102902593B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105828309A (en) * 2015-01-05 2016-08-03 中国移动通信集团广西有限公司 Phone bill processing method, phone bill processing device, and phone bill processing system
CN106462360A (en) * 2014-12-23 2017-02-22 华为技术有限公司 Resource scheduling method and related apparatus
CN106603723A (en) * 2017-01-20 2017-04-26 腾讯科技(深圳)有限公司 Request message processing method and device
CN108111329A (en) * 2016-11-25 2018-06-01 广东亿迅科技有限公司 Mass users cut-in method and system based on TCP long links

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001356945A (en) * 2000-04-12 2001-12-26 Anetsukusu Syst Kk Data backup recovery system
US20020046262A1 (en) * 2000-08-18 2002-04-18 Joerg Heilig Data access system and method with proxy and remote processing
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462360A (en) * 2014-12-23 2017-02-22 华为技术有限公司 Resource scheduling method and related apparatus
US10430237B2 (en) 2014-12-23 2019-10-01 Huawei Technologies Co., Ltd. Resource scheduling method and related apparatus
US11194623B2 (en) 2014-12-23 2021-12-07 Huawei Technologies Co., Ltd. Resource scheduling method and related apparatus
CN105828309A (en) * 2015-01-05 2016-08-03 中国移动通信集团广西有限公司 Phone bill processing method, phone bill processing device, and phone bill processing system
CN105828309B (en) * 2015-01-05 2019-07-02 中国移动通信集团广西有限公司 A kind of call bill processing method, equipment and system
CN108111329A (en) * 2016-11-25 2018-06-01 广东亿迅科技有限公司 Mass users cut-in method and system based on TCP long links
CN106603723A (en) * 2017-01-20 2017-04-26 腾讯科技(深圳)有限公司 Request message processing method and device
CN106603723B (en) * 2017-01-20 2019-08-30 腾讯科技(深圳)有限公司 A kind of request message processing method and processing device

Also Published As

Publication number Publication date
CN102902593B (en) 2016-05-25

Similar Documents

Publication Publication Date Title
CN107018175B (en) Scheduling method and device of mobile cloud computing platform
CN101493826B (en) Database system based on WEB application and data management method thereof
CN110535831A (en) Cluster safety management method, device and storage medium based on Kubernetes and network domains
CN109309631A (en) A kind of method and device based on universal network file system write-in data
CN102624881A (en) Mobile-device-oriented service cache system architecture and development method
CN106850589A (en) A kind of management and control cloud computing terminal and the method and apparatus of Cloud Server running
CN104092756A (en) Cloud storage system resource dynamic allocation method based on DHT mechanism
US11743333B2 (en) Tiered queuing system
CN104243481A (en) Electricity consumption data acquisition and pre-processing method and system
CN111327692A (en) Model training method and device and cluster system
CN113010818A (en) Access current limiting method and device, electronic equipment and storage medium
WO2022063032A1 (en) Distributed system-oriented fault information association reporting method, and related device
CN107682460A (en) A kind of distributed storage trunked data communication method and system
US20210329354A1 (en) Telemetry collection technologies
CN108090000A (en) A kind of method and system for obtaining CPU register informations
WO2022271246A1 (en) Network interface device management of service execution failover
CN102902593A (en) Protocol distribution processing system based on cache mechanism
CN113392863A (en) Method and device for acquiring machine learning training data set and terminal
CN106815068A (en) The method that Hyperv live migration of virtual machine is realized based on Openstack
CN107197039B (en) A kind of PAAS platform service packet distribution method and system based on CDN
CN108090018A (en) Method for interchanging data and system
CN103645959A (en) Telecom real-time system multi-process SMP (shared memory pool) interaction assembly and method
CN103581119B (en) System and method for displaying production process data at high speed
CN104009864B (en) A kind of cloud management platform
CN102495764A (en) Method and device for realizing data distribution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant