US20030069952A1 - Methods and apparatus for monitoring, collecting, storing, processing and using network traffic data of overlapping time periods - Google Patents


Info

Publication number
US20030069952A1
US20030069952A1 (application US09/823,306)
Authority
US
United States
Prior art keywords
data
network traffic
records
network
traffic data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/823,306
Inventor
Jonathan Tams
Mark Pearce
Robin Iddon
Ronnie Brown
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3Com Corp
Original Assignee
3Com Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3Com Corp filed Critical 3Com Corp
Priority to US09/823,306
Publication of US20030069952A1
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/12: Network monitoring probes
    • H04L43/02: Capturing of monitoring data
    • H04L43/026: Capturing of monitoring data using flow identification
    • H04L43/10: Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/106: Active monitoring using time related information in packets, e.g. by adding timestamps

Definitions

  • the present invention is directed to the collection, storage, processing and use of data in computer networks, and more specifically, to the collection, storage, processing and use of data relating to network traffic.
  • WWW World Wide Web
  • the increased use of intranets within individual businesses and the increased use of the Internet globally is due to the increased number of computer networks in existence and the ease with which data, e.g., messages and/or other information, can now be exchanged between computers located on inter-connected networks.
  • FIG. 1 illustrates an intranet 10 implemented using known networking techniques and three local area networks (LANS) 20 , 30 , 40 .
  • the intranet 10 may be implemented within a business by linking together physically remote LANS 20 , 30 , 40 .
  • each of the first through third LANS 20 , 30 , 40 includes a plurality of computers ( 21 , 22 , 23 ) ( 31 , 32 , 33 ) ( 41 , 42 , 43 ), respectively.
  • the computers within each LAN 20 , 30 , 40 are coupled together by a data link, e.g., an Ethernet, 26 , 36 , 46 , respectively.
  • the first LAN 20 is coupled to the second LAN 30 via a first router 18 .
  • the router 18 couples data links 26 , 36 together.
  • the second LAN 30 is coupled to the third LAN 40 via a second router 19 which couples data links 36 and 46 together.
  • the transferring of data in the form of packets can involve processing by several layers which are implemented in both hardware and/or software at different points in a network.
  • a different protocol may be used at each level resulting in a protocol hierarchy.
  • At the bottom of the protocol hierarchy is the network layer protocol.
  • One or more application layer protocols are located above the network layer protocol.
  • When describing a protocol associated with a data packet, the protocol will be described in terms of the protocols and layers associated therewith.
  • A notation of the form network-layer/application-layer 1/ . . . /application-layer N (e.g., IP/TCP/HTTP) is used to describe the protocol hierarchy of the top-level (application-layer N) protocol.
  • SNMP Simple Network Management Protocol
  • UDP User Datagram Protocol
  • IP Internet Protocol
  • Network traffic information can be used when troubleshooting problems on an existing network. It can also be used when controlling routing on a system with alternative routing paths. In addition, information on existing or changing network traffic trends is useful when decisions on upgrading or expanding service are being made. Thus, information on network traffic is useful both when maintaining an existing network and when planning modifications and/or additions to a network. Given the usefulness of network traffic information, system administrators have recognized the need for methods and apparatus for monitoring network activity, e.g., data traffic.
  • RMON remote monitoring
  • monitors or probes are sometimes used. These devices often serve as agents of a central network management station.
  • the remote probes are stand-alone devices which include internal resources, e.g., data storage and processing resources, used to collect, process and forward, e.g., to the network management system, information on packets being passed over the network segment being monitored.
  • probes are built into devices such as routers and bridges. In such cases, the available data processing and storage resources are often shared between a device's primary functions and its secondary traffic monitoring and reporting functions.
  • many probes may be used, e.g., one per network segment to be monitored.
  • Network traffic data collected by a probe is normally stored internally within the probe until, e.g., being provided to a network management station.
  • the network traffic data is usually stored in a table sometimes referred to as a management information base (MIB).
  • MIB management information base
  • RMON2 MIB standards have been set by the Internet Engineering Task Force (IETF) which increase the types of network traffic that can be monitored, the number of ways network traffic can be counted, and also the number of data formats which can be used for storing collected data.
  • RMON2 tables may include a variety of network traffic data including information on network traffic which occurs on layers 3 through 7 of the Open Systems Interconnect (OSI) model. The particular network traffic information which is available from a probe will depend on which data table the probe implements and the counting method employed.
  • OSI Open Systems Interconnect
  • Four RMON2 matrix (or conversation) table types are possible: alMatrix, alMatrixTopN, nlMatrix, and nlMatrixTopN.
  • alMatrixTopN tables support two counting modes of operation which affect the manner in which the counting of packets and bytes is performed at the various protocol layers.
  • the first of these counting modes will be referred to herein as all count mode.
  • In this mode, each monitored packet increments the counters for all the protocol layers used in the packet. For example, an IP/TCP/HTTP packet would increment the packet and byte counters for the IP, TCP and HTTP protocols.
  • the second counting mode will be referred to herein as terminal count mode. In this mode, each monitored packet increments only the counter of the “highest-layer” protocol in the packet. For example, an IP/TCP/HTTP packet would increment the packet and byte counters for only the HTTP protocol.
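The contrast between the two counting modes can be sketched as follows; the helper functions are illustrative stand-ins, not part of the RMON2 MIB. A packet is represented by its protocol stack, e.g. ("IP", "TCP", "HTTP").

```python
from collections import Counter

def count_all_mode(packets):
    """All count mode: every protocol layer in the packet increments its counter."""
    counts = Counter()
    for stack in packets:
        for proto in stack:
            counts[proto] += 1
    return counts

def count_terminal_mode(packets):
    """Terminal count mode: only the highest-layer protocol is incremented."""
    counts = Counter()
    for stack in packets:
        counts[stack[-1]] += 1
    return counts

packets = [("IP", "TCP", "HTTP"), ("IP", "UDP", "SNMP"), ("IP", "TCP", "HTTP")]
all_counts = count_all_mode(packets)            # IP=3, TCP=2, HTTP=2, UDP=1, SNMP=1
terminal_counts = count_terminal_mode(packets)  # HTTP=2, SNMP=1
```

Note that IP never appears in the terminal-mode counts here, since it is never the highest layer of any monitored packet.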
  • the terminal count mode may only be used with the alMatrixTopN table. However, all count mode can be used with all the RMON2 tables discussed above including the alMatrixTopN table.
  • probes may now collect and store data in tables corresponding to any one of five different RMON2 formats.
  • the five different RMON2 table possibilities are identified herein as alMatrixTopN (Terminal Count Mode), alMatrixTopN (All Count Mode), alMatrix, nlMatrix and nlMatrixTopN tables.
  • Network-layer (nl) tables, e.g., nlMatrix and nlMatrixTopN tables, count only those protocols which are deemed to be network-layer protocols.
  • Network-layer protocols are the protocols which are used to provide the transport-layer services as per the well known ISO OSI 7-layer protocol model, and include, for example, such protocols as IP, IPX, DECNET, NetBEUI and NetBIOS among others. No child-protocols of the network-layer protocols are counted in network-layer tables.
  • Application-layer (al) tables e.g., alMatrixTopN (Terminal Count Mode), alMatrixTopN (All Count Mode), and alMatrix tables, count any protocol that is transport layer or above, provided the probe knows how to decode the protocol. This includes, e.g., everything from IP through to IP/UDP/SNMP, Lotus Notes traffic, WWW traffic, and so on.
  • Application-layer tables provide information on a super-set of the protocols which the network-layer (nl) tables provide, by counting child-protocols of the supported network-layer protocols.
  • the alMatrix and nlMatrix tables monitor conversations which occur in the network, and keep count of the total number of bytes and packets seen for each conversation for each monitored protocol since the probe was turned on. If the probe has been reset since it was turned on, then the counters store the number of bytes and packets seen since the last time the probe was reset. These kinds of counters will be referred to herein as absolute counters.
  • the entries in alMatrix and nlMatrix tables are ordered by address and protocol.
  • the alMatrixTopN and nlMatrixTopN tables also monitor all conversations which occur in the network, and also keep count of the number of bytes and packets seen for each conversation.
  • MatrixTopN tables must be configured by the user or by a client program, and are configured to have a maximum number of entries and a time interval for which the table will be generated. Once configured, the probe will perform the following steps until the MatrixTopN table is destroyed (either by a request from the user or client program, or by the probe being turned off):
  • Because MatrixTopN tables monitor the number of packets and bytes seen over the specified time interval, with the counters being effectively reset each time a new table of the top N conversations is generated, the counters generated by MatrixTopN tables are referred to herein as delta counters.
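The distinction between absolute and delta counters can be sketched as follows; the snapshot layout (conversation key to packet count) is an illustrative assumption, not the RMON2 table structure.

```python
def to_delta(current_absolute, previous_absolute):
    """Derive per-interval (delta) counts from two absolute counter snapshots."""
    return {conv: current_absolute[conv] - previous_absolute.get(conv, 0)
            for conv in current_absolute}

# Absolute counters: totals since the probe was turned on (or last reset),
# keyed here by (source, destination, protocol).
snapshot_t1 = {("A", "B", "HTTP"): 100, ("A", "C", "SNMP"): 40}
snapshot_t2 = {("A", "B", "HTTP"): 130, ("A", "C", "SNMP"): 55}

# Delta counters: traffic seen during the interval between the snapshots only.
deltas = to_delta(snapshot_t2, snapshot_t1)  # HTTP conversation: 30, SNMP: 15
```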
  • a probe may have insufficient processing and data storage resources to support all but the least resource intensive data table format, e.g., an nlMatrix table. Accordingly, the information included in traffic data tables of probes may vary from probe to probe depending on the particular protocols monitored, the individual probe's available resources, and the MIB format implemented by the individual probes.
  • probe selection is rarely a practical solution to problems resulting from a lack of consistency among probe data collection and storage techniques.
  • Data aging involves periodically scanning the stored data; during the scan, data records that are older than certain preselected age limits are read and combined, e.g., added together, to create an additional set of data records of lower resolution than the records used to create it. The records used to create the lower resolution set of data records are then deleted from the original database.
  • In this technique there are normally multiple age limits set up, resulting in multiple data sets corresponding to different non-overlapping time periods. In such a system, the older the data records become, the lower the resolution of those records will be. Hence, the further in the past a fixed period of time occurred, the less disk space is required to store the records covering it.
  • the known system has the distinct disadvantage of requiring double buffering of the data while the aging process is being performed. Such double buffering is required so that accessing the data during aging will still give the correct results. Given that the size of the database to be aged can be quite substantial, double buffering presents obvious hardware disadvantages. From an implementation standpoint the known aging process also has the disadvantage of placing significant periodic demands for processing resources that can interfere, e.g., slow or delay, other processing tasks performed by a management station, while the aging operation is being performed.
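The known aging scheme just described can be sketched as follows; this is a simplified illustration of the prior art the invention avoids, and the record layout (timestamp, byte_count) is an assumption made for the example.

```python
def age_records(records, age_limit, now):
    """records: list of (timestamp, byte_count) tuples, oldest first.

    Records older than age_limit are combined, e.g. added together, into a
    single lower-resolution record, and the originals are dropped."""
    old = [(t, b) for t, b in records if now - t > age_limit]
    kept = [(t, b) for t, b in records if now - t <= age_limit]
    if old:
        # one coarse record replaces all the old high-resolution records
        combined = (min(t for t, _ in old), sum(b for _, b in old))
        kept.insert(0, combined)
    return kept

hourly = [(0, 10), (1, 20), (5, 30)]
aged = age_records(hourly, age_limit=3, now=6)  # [(0, 30), (5, 30)]
```

Note that while this scan runs, a second copy of the affected records must be kept so that concurrent queries still see consistent data, which is the double-buffering cost discussed above.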
  • It is desirable that new methods of collecting, processing and storing network traffic data be compatible with existing probe data formats. It is also desirable that the new methods and apparatus be capable of being used with, or adapted to being used with, probe data formats that may be supported in the future.
  • the present invention is directed to methods and apparatus for collecting, storing, processing and using data, e.g., network traffic data, in computer networks.
  • the present invention processes collected network traffic data, as required, to place it into a common data format.
  • the common data format is selected to provide a maximum degree of information in a format that is easy to use, e.g., by database generation and graphing applications.
  • It is preferable that the common data format include delta count values as opposed to absolute count values and that application layer information be presented in terminal count mode as opposed to all count mode.
  • the system of the present invention controls network traffic data probes to provide data in a format that is as close to the desired format as possible, given an individual probe's capabilities.
  • One specific embodiment of the present invention is directed to the use of RMON2 probes and RMON2 data tables.
  • network data is obtained from a probe using one of the available RMON2 table formats.
  • the RMON2 format is selected in the following order of preference: alMatrixTopN (Terminal Mode), alMatrixTopN (AllMode), alMatrix, nlMatrixTopN and nlMatrix.
  • RMON2 alMatrixTopN (Terminal Mode) data tables satisfy the format requirements used in the present invention and therefore do not require a conversion operation to be performed.
  • RMON2 alMatrixTopN (Terminal Mode) data tables include both application layer and network layer data. For these reasons, the RMON2 alMatrixTopN (Terminal Mode) data table is the most preferred of the RMON2 tables in the above discussed embodiment.
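The order of preference described above can be sketched as a simple selection function; the capability-set argument is a hypothetical stand-in for the probe capability queries discussed later.

```python
# Table formats in the order of preference stated above, most preferred first.
PREFERENCE = [
    "alMatrixTopN (Terminal Mode)",
    "alMatrixTopN (All Mode)",
    "alMatrix",
    "nlMatrixTopN",
    "nlMatrix",
]

def select_table_format(supported):
    """Return the most preferred RMON2 table format the probe supports."""
    for fmt in PREFERENCE:
        if fmt in supported:
            return fmt
    raise ValueError("probe supports no known RMON2 matrix table")

fmt = select_table_format({"nlMatrix", "nlMatrixTopN"})  # -> "nlMatrixTopN"
```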
  • Once network traffic data is collected and placed in the common format, it is ready for use in generating displays and/or network traffic databases.
  • the network traffic data, in the common data format, is stored in a network traffic database to allow for future analysis such as baselining and troubleshooting.
  • the known database aging process is avoided by the system of the present invention by creating and maintaining a database that includes multiple parallel sets of network traffic data at different resolutions.
  • a data set for each different resolution is stored in a first-in, first-out (FIFO) data structure.
  • the oldest records in the FIFO data structure are overwritten when there is no longer any unused storage space available for storing the records of the resolution to which the data structure corresponds.
  • Because the network traffic database of the present invention is not aged, the periodic processor loading associated with aging of databases is avoided. In addition, the need to double buffer the database data during an aging process is eliminated since no aging is performed.
  • the parallel database routines of the present invention also have the advantage of being well suited to a multiprocessor environment since each data set can be maintained and updated independently.
  • the database records at the different resolutions overlap, covering the same time period. This makes it relatively easy for a system administrator to review database records corresponding to the same time period at different resolutions. This can facilitate a system administrator's attempts to identify network traffic problems and/or trends without the need to perform complicated processing when comparing or switching between data at different resolutions.
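The parallel FIFO arrangement described above can be sketched as follows. The capacities and the six-hour roll-up rule are illustrative assumptions; the point is that each resolution has its own fixed-size FIFO whose oldest record is simply overwritten when the FIFO is full, so no aging pass ever rewrites existing records.

```python
from collections import deque

class ParallelTrafficStore:
    def __init__(self, hourly_capacity=24, six_hourly_capacity=28):
        # deque(maxlen=...) silently discards the oldest entry when full,
        # mimicking the overwrite-oldest FIFO behaviour described above.
        self.hourly = deque(maxlen=hourly_capacity)
        self.six_hourly = deque(maxlen=six_hourly_capacity)
        self._pending = []  # hourly records awaiting the 6-hour roll-up

    def add_hourly(self, byte_count):
        self.hourly.append(byte_count)
        self._pending.append(byte_count)
        if len(self._pending) == 6:
            # in parallel, store a coarser record covering the same period
            self.six_hourly.append(sum(self._pending))
            self._pending.clear()

store = ParallelTrafficStore(hourly_capacity=4)
for traffic in [10, 20, 30, 40, 50, 60]:
    store.add_hourly(traffic)
# hourly FIFO holds only the newest 4 records: [30, 40, 50, 60]
# six_hourly holds one roll-up covering the same overall period: [210]
```

Because each resolution's data set is maintained and updated independently, the per-resolution updates could also be assigned to different processors, matching the multiprocessor suitability noted above.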
  • FIG. 1 is a block diagram of a known intranet arrangement.
  • FIG. 2 is a block diagram of an intranet including a management system implemented in accordance with one embodiment of the present invention.
  • FIG. 3 is a diagram of a protocol hierarchy used in various examples discussed herein.
  • FIG. 4A is a flow chart of a management system initialization routine implemented in accordance with the present invention.
  • FIG. 4B is an exemplary probe information/data table created by executing the initialization routine illustrated in FIG. 4A.
  • FIG. 5 is a diagram showing the processing of network conversation data in accordance with one exemplary embodiment of the present invention.
  • FIG. 6A illustrates a method of collecting network traffic data from probes and converting the collected data into a common data format.
  • FIG. 6B illustrates the conversion of various RMON2 data tables into the common data format used in accordance with various embodiments of the present invention.
  • FIG. 7 is a block diagram illustrating the generation of a network traffic database including parallel sets of data of differing resolutions.
  • FIG. 8 is a flow chart illustrating a method of the present invention for generating a network traffic database including parallel sets of network traffic stored at different resolutions.
  • FIG. 9 illustrates a network traffic database including parallel data sets having an hourly and 6-hourly resolution.
  • FIG. 10 is a flow chart relating to a network traffic database including parallel sets of network traffic information stored at different resolutions.
  • the present invention relates to methods and apparatus which can be used to collect, store, and process data, e.g., data regarding traffic in a computer network or intranet. It is also directed to methods of presenting network traffic data in a format that can be easily understood by a person, e.g., an individual responsible for managing the computer network or networks being monitored.
  • In FIG. 2 there is illustrated an intranet 200 implemented in accordance with one embodiment of the present invention.
  • Various elements of the intranet 200 which are the same as, or similar to, the known intranet 10 , are identified using the same reference numerals used in FIG. 1.
  • the intranet 200 comprises first through third LANS 120 , 130 , 140 each of which includes a plurality of computers ( 21 , 22 , 23 ) ( 31 , 32 , 33 ) ( 41 , 42 , 43 ), respectively.
  • the computers within each LAN 120 , 130 , 140 are coupled together by a data link, e.g., an Ethernet, 26 , 36 , 46 , respectively.
  • the first LAN 120 is coupled to the second LAN 130 via a first router 17 which couples data links 26 , 36 together.
  • the first LAN 120 is also coupled to the third LAN 140 via a second router 18 .
  • the second LAN 130 is coupled to the third LAN 140 via a third router 19 which couples data links 36 and 46 together.
  • Data links 26 , 36 and 46 are network segments within the intranet 200 .
  • probes 127 , 137 , 147 are included in each of the first through third LANs, respectively.
  • Each probe is coupled to the data link, e.g., Ethernet, which is included in the LAN in which the probe resides. Because the first probe 127 is coupled to the first Ethernet 26 it can collect information about traffic on the network segment 26 . Similarly, the second and third probes 137 , 147 are able to collect information about traffic on the network segments 36 , 46 , to which they are coupled, respectively.
  • the probes 127 , 137 , 147 collect and store network traffic data in one or more RMON2 tables (MIBs).
  • the probes 127 , 137 , 147 may include memory, a processor, an I/O interface device and a mass storage device, such as a disk drive.
  • probes 127 , 137 , 147 are implemented using known network traffic data probes.
  • each of the probes 127 , 137 , 147 is coupled to a management station 150 which also forms part of the intranet 200 .
  • the management station 150 includes a display device 152 , one or more central processing units (CPUs) 154 , 155 , a keyboard 156 , a mass storage device 158 for storing, e.g., a data base, and memory 162 which are coupled together by a bus 163 .
  • the mass storage device 158 may be, e.g., a disk drive or array of drives.
  • two CPUs 154 , 155 capable of operating in parallel are shown. However, in many embodiments, a single CPU 154 is used on a time shared basis, e.g., to perform database generation and maintenance operations.
  • the bus 163 couples the discussed management station components to an input/output (I/O) interface 160 used to connect the management station and its components to the first through third probes 127 , 137 , 147 .
  • the I/O interface 160 is responsible for interfacing between the various devices coupled thereto.
  • One or both of the management station's CPUs, 154 , 155 can be used to control the operation of the management station 150 as a function of various routines stored in the memory 162 .
  • the use of one or both of the CPUs, in controlling the operation of the management station 150 depends on the implemented operating system. For exemplary purposes it will be assumed that only CPU 154 is used to control operation of the management station 150 .
  • the routines stored in the memory 162 , include initialization routines 171 , data collection and conversion routines 164 , parallel data set generation routines 166 , and processing/filtering/display routines 168 .
  • the various routines may be implemented as computer programs.
  • the memory 162 may include probe information and data tables 169 received from the probes 127, 137 and 147.
  • the memory 162 may also include a buffer 173 for temporarily storing data tables converted to the common format of the present invention.
  • the collected probe data stored in the buffer 173 is processed by the CPU 154 under control of routines 164 , 166 , 168 and stored in a network traffic information database located on the storage device 158 as will be discussed below.
  • the keyboard 156 can be used for inputting queries regarding network traffic information. Charts and statistics regarding network traffic information are generated by the CPU 154 in response to such queries using the data included in the network traffic database. The charts and statistics are displayed on the display device 152 and/or printed on a printer 170 coupled to the management station 150 .
  • FIG. 3 illustrates an exemplary protocol hierarchy in the form of a tree 301 which may be retrieved from one of the probes 127 , 137 , 147 for a monitored conversation between two devices included in the intranet 200 .
  • the hierarchy illustrated in FIG. 3 will be used in the discussion which follows to illustrate various points. Note that, while a probe 127 , 137 , 147 may support many thousands of protocols, only those protocols which have been seen for a particular conversation will be stored in the data table or tables supported by the probe and thus will be the only protocols which may be retrieved by the management station 150 from the probe for that conversation.
  • IP Internet Protocol
  • UDP User Datagram Protocol
  • SNMP Simple Network Management Protocol
  • TCP Transmission Control Protocol
  • FTP File Transfer Protocol
  • HTTP Hyper-Text Transfer Protocol (also sometimes referred to as WWW (World Wide Web) traffic)
  • the tree 301 has been divided into two halves: the network-layer protocol 303 and the application-layer protocols 305. This division will be used in later examples.
  • the conversation for which the tree has been generated is a conversation between two devices e.g., computers A and B 21 , 22 , using the IP network-layer protocol.
  • the IP/UDP protocol is shown in a dotted box to represent that, while the IP/UDP/SNMP packets were monitored by the probe 127, the probe 127 had the IP/UDP protocol turned off. This is a feature of RMON2 (the ability to turn off the monitoring of protocols) and means that any pure IP/UDP packets would not be counted. Thus, a count of any pure IP/UDP packets on the network segment 26 would not be supplied by the probe to the management station 150 on retrieval of the network traffic data from the probe 127. However, child protocols of IP/UDP (such as IP/UDP/SNMP) would continue to be counted and supplied to the management station 150 from the probe 127.
  • networks may include a variety of probes 127 , 137 , 147 , with differing capabilities and differing network data table formats.
  • the management station 150 collects and processes network traffic data from the probes 127 , 137 , 147 included in the network.
  • the network traffic data received from the probes is processed to place it in a consistent format that can be used to support queries, storage, and displaying of network traffic data in a format that is easy to process and understand.
  • processing components and modules e.g., the parallel data set generation routines 166 and processing/filtering/display routines 168 , can be isolated from the complexities associated with varying network traffic data formats encountered from probe to probe.
  • the inventors of the present application recognized that, for most purposes, what is of interest is the network traffic during a specific time interval and not the total amount of traffic monitored from the time a probe is turned on. Accordingly, in determining the common format into which network traffic data should be placed, it was decided that a delta counting, as opposed to absolute counting, technique should be used. In addition, it was decided that, for maximum flexibility, it was useful to obtain as much detail about network traffic as possible. Accordingly, it was decided that the common data format should include application layer protocol information when available. In addition, it was decided that it was more useful to have the data represented in terminal count mode, as opposed to all count mode.
  • Because nlMatrix and nlMatrixTopN tables only include network layer traffic data, these two tables are considered the least useful and are not used unless the probe from which the data is being obtained does not support one of the three possible application layer tables.
  • network data is obtained from a probe using one of the available table formats with the format utilized being selected in the following order of preference: alMatrixTopN (Terminal Mode), alMatrixTopN (AllMode), alMatrix, nlMatrixTopN and nlMatrix.
  • an alMatrixTopN (Terminal Mode) table has the advantage of requiring no format conversion operations.
  • the alMatrixTopN (AllMode) table requires a single conversion operation, i.e., an all count mode to terminal count mode conversion operation, to place it in the common format. Unlike absolute count to delta count conversion operations, as will be discussed below, terminal count conversion operations can be performed without the need to use the previously received data table. Accordingly, alMatrixTopN (AllMode) tables can be converted to the common format with a minimum of processing and memory requirements.
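The all count mode to terminal count mode conversion mentioned above can be sketched as follows, assuming the protocol hierarchy is known to the management station. Since in all count mode a packet increments every layer it uses, a protocol's terminal count is its all-mode count minus the all-mode counts of its direct child protocols. The table layout and names are illustrative, not the RMON2 MIB objects.

```python
# Direct child protocols, following the hierarchy of FIG. 3.
children = {
    "IP": ["IP/UDP", "IP/TCP"],
    "IP/UDP": ["IP/UDP/SNMP"],
    "IP/TCP": ["IP/TCP/FTP", "IP/TCP/HTTP"],
}

def all_to_terminal(all_counts):
    """Convert all-count-mode counters to terminal-count-mode counters."""
    return {proto: count - sum(all_counts.get(c, 0)
                               for c in children.get(proto, []))
            for proto, count in all_counts.items()}

# 5 IP/TCP/HTTP packets, 2 IP/UDP/SNMP packets, and 1 bare IP packet:
all_counts = {"IP": 8, "IP/TCP": 5, "IP/TCP/HTTP": 5,
              "IP/UDP": 2, "IP/UDP/SNMP": 2}
terminal = all_to_terminal(all_counts)
# IP=1, IP/TCP=0, IP/UDP=0, IP/TCP/HTTP=5, IP/UDP/SNMP=2
```

As the passage above notes, this conversion needs only the current table, unlike the absolute-to-delta conversion, which must buffer the previously received table for the duration of the measurement interval.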
  • the alMatrix table is less desirable than the other application layer tables because it requires two conversion operations to place it in the common format. Furthermore, one of the conversion operations requires buffering of a retrieved data table for the duration of the data measurement interval thereby requiring more memory than is required to put the alMatrixTopN table in the common data format.
  • Identification of the probes which are coupled to the management system 150 , the data tables they support, and the selection of the data table to be used with each probe occur during execution, by CPU 154 , of a management station initialization routine 300 .
  • the routine 300 is one of the initialization routines included in memory segment 171 .
  • the initialization routine 300 is performed by the management station, e.g., when the station is powered up or reset.
  • the initialization routine 300 begins in step 302 wherein the routine is executed by the CPU 154.
  • In step 304, the management system 150 detects the probes 127, 137, 147 which are coupled to the system 150.
  • the detection of the probes may be done, as known in the art, by transmitting a signal querying for a response from probes which are present.
  • For each detected probe, the initialization routine determines the network traffic table format that is to be used with the detected probe and stores that information in memory for future use, e.g., in determining what, if any, format conversions need to be performed on data obtained from the probe.
  • In step 306, a determination is made as to whether or not the probe being initialized supports application layer tables, i.e., whether the probe has alMatrix capability.
  • alMatrix support is determined by querying a probeCapabilities object supported by the detected probe and monitoring the probe's response.
  • If, in step 306, it is determined that the probe includes alMatrix support, operation proceeds to step 308.
  • the management station 150 signals the probe to create an alMatrixTopN table using terminal mode counting. If, in step 310 , it is determined, e.g., by receipt of a signal from the probe, that creation of the desired alMatrixTopN table was successful, operation proceeds to step 312 .
  • In step 312, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrixTopN (Terminal Count Mode) format. With the successful updating of memory in step 312 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in step 310, it was determined that terminal alMatrixTopN table creation was unsuccessful, operation proceeds to step 314 instead of 312.
  • In step 314, the management system 150 signals the probe being initialized to create an alMatrixTopN table using all count mode (as opposed to terminal count mode) counting.
  • If, in step 316, it is determined that all count mode alMatrixTopN table creation was successful, e.g., by monitoring for a signal from the probe being initialized, operation proceeds to step 318.
  • In step 318, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrixTopN (All Count Mode) format. With the successful updating of memory in step 318 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in step 316, it is determined that all count mode alMatrixTopN table creation was unsuccessful, operation proceeds to step 320.
  • In step 320, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrix format. With the successful updating of memory in step 320 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in step 306, it is determined that the probe being initialized does not support alMatrix tables, a network layer table must be selected for use. In such a case, operation proceeds from step 306 to step 324 wherein the management station 150 signals the probe being initialized to create an nlMatrixTopN table.
  • step 326 a determination is made as to whether or not creation of the nlMatrixTopN table was successful.
  • If, in step 326, it is determined that nlMatrixTopN table creation was successful, e.g., by monitoring for a signal from the probe being initialized, operation proceeds to step 328.
  • step 328 probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in nlMatrixTopN format. With the successful updating of memory in step 328 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322 .
  • If, in step 326, it is determined that nlMatrixTopN table creation was unsuccessful, operation proceeds to step 330.
  • step 330 probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in nlMatrix format. With the successful updating of memory in step 330 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322 .
  • step 322 a determination is made as to whether any probes detected in step 304 remain uninitialized. If there is another probe to be initialized, operation proceeds once again to step 306 wherein initialization of the next probe begins.
  • If, in step 322, it is determined that no probes remain to be initialized, operation proceeds to step 332 wherein the initialization routine is stopped pending its restart upon the next power up or resetting of the management station 150.
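  • The table-format selection performed across steps 306 through 330 amounts to walking a fixed preference list and settling on the first format the probe accepts. A minimal sketch, assuming a hypothetical probe-capability callback (`probe_supports` is an illustrative stand-in, not part of the patent's described implementation, which issues a table-creation request and waits for a success/failure signal at each step):

```python
# Table-format preference order used during probe initialization
# (steps 306-330): try each format in turn and record the first one
# the probe accepts; nlMatrix is the unconditional fallback.
PREFERENCE_ORDER = [
    "alMatrixTopN (Terminal Count Mode)",
    "alMatrixTopN (All Count Mode)",
    "alMatrix",
    "nlMatrixTopN",
    "nlMatrix",
]

def select_table_format(probe_supports):
    """Return the first format in the preference order that the probe
    reports as supported; fall back to nlMatrix if none succeed."""
    for fmt in PREFERENCE_ORDER:
        if probe_supports(fmt):
            return fmt
    return PREFERENCE_ORDER[-1]
```

The selected format is what would be recorded against the probe's entry in table 169.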
  • An exemplary probe information/data table 169, created in memory via execution of the initialization routine, is illustrated in FIG. 4B.
  • Each detected probe 127 , 137 , 147 is identified in the table 169 as well as the format of the data table which is to be obtained from the identified probe when collecting network traffic data.
  • the table 169 includes temporary data table storage space used for storing data tables used as part of the format conversion operations discussed below.
  • retrieved alMatrixTopN tables and nlMatrixTopN tables need not be stored for use in subsequent table format conversion operations since these tables are retrieved from the probe in the desired delta count format.
  • FIG. 5 illustrates the collection, processing, storage and display of network traffic data in accordance with an exemplary embodiment of the present invention.
  • the group of networks 120 , 130 , 140 from which network traffic data is collected is generally represented by the block 502 .
  • the probes 127 , 137 , 147 which monitor each network or network segment serve as the source of network traffic data which is supplied to the management station 150 .
  • Network traffic data in the form of a data table, is supplied to the management station from each probe 127 , 137 , 147 periodically in response to requests from the management station 150 , for the information.
  • the arrows, leading from the probes 127 , 137 , 147 to the data collection and conversion step 504 of the management station 150 represent the passing of the requested network traffic data to the management station 150 .
  • processing blocks 504 , 508 , 515 which are used to represent the various processing operations performed by the management station 150 .
  • blocks 506 , 510 and 152 which are used to illustrate the input and output data associated with the various processing operations.
  • the data collection and conversion step 504 represents data collection and formatting operations which are implemented using computer software, in the form of the data collection and conversion routines 164 , to control the CPU 154 .
  • network traffic data is collected at periodic intervals from each of the detected probes and converted, in accordance with the present invention, into the preselected common format discussed above.
  • the processing performed by the module 504 will be discussed in greater detail with regard to FIG. 6.
  • the output of the data collection and formatting step 504 is a set of network traffic data 506 which includes data from various probes that has been converted into the common data format of the present invention.
  • the network traffic data 506 represents data from multiple probes collected during one periodic data collection operation involving the collection of data from probes 127 , 137 , 147 .
  • the set of network traffic data 506 serves as the input to a network traffic data set generation and maintenance module 508 .
  • the data set generation and maintenance module 508 is responsible for generating multiple parallel sets of data which overlap in time but differ in terms of the resolution at which the network traffic data is stored in each data set.
  • the group of data sets generated by the module 508 represent a network traffic database 510 extending in time over multiple periodic data collection cycles.
  • the data in network traffic database 510 can be accessed, e.g., in response to queries, processed, filtered and displayed and/or printed.
  • Data processing, filtering and display generation step 515 which may be implemented by executing the routines 168 on the CPU 154 , is responsible for performing such operations.
  • the output of step 515 may take several forms including that of a printed document or a figure on the display device 152 .
  • In FIG. 5, a circle and lines display of network traffic, generated in accordance with the present invention, is shown on the display 152 .
  • circles are used to represent computer networks or groups of computer networks. Points within a circle are used to represent devices located within the computer network represented by the surrounding circle. Lines between points are used to indicate detected conversations, while the thickness of a line is used to indicate the amount of data transferred during the monitored conversation.
  • the outer circle on the display 152 represents the group of networks illustrated in FIG. 2 while each of the inner circles represents one of the computer networks 120 , 130 , 140 .
  • FIG. 6A illustrates a method 600 corresponding, in one exemplary embodiment of the invention, to the data collection and conversion step 504 .
  • the routine 600 is executed periodically, e.g., every 30 minutes, by the CPU 154 .
  • the data collection and conversion routine 600 starts in step 602 .
  • the routine 600 is obtained from memory by the CPU 154 and executed.
  • Operation proceeds to step 604 wherein the stored information included in table 169 , about the probes present in the network and the network traffic data table format to be used with each probe, is accessed.
  • the data collection and conversion routine 600 obtains from memory a list of probes that were detected during the previously discussed initialization process and information on the data table which the probe is to supply to the data collection routine.
  • Steps 606 through 614 are used to collect and process network traffic data corresponding to each individual probe that was detected during the initialization process.
  • routine 600 operation proceeds from step 604 to step 606 .
  • the processor 154 requests that the probe, from which data is to be collected, supply the network traffic data to the processor using the table format which was associated with the probe in the probe information/data table 169 .
  • step 608 the requested network traffic data table is received from the probe.
  • the processing performed on the received network traffic data table to place it into the common data format used in accordance with the present invention depends on the type of data table received.
  • If, in step 608, an alMatrixTopN (Terminal Count Mode) table is received, no format conversion operations are required. Accordingly, when an alMatrixTopN (Terminal Count Mode) table is received, operation proceeds from step 608 directly to step 614 wherein the received data table, including time stamps indicating the time at which the network traffic occurred, is stored in a buffer 173 included in memory 162 .
  • If, in step 608, an alMatrixTopN (All Count Mode) table is received, the data needs to be converted to terminal count mode to place it in the common format before storage in the buffer. In such a case, operation proceeds from step 608 to step 610 . In step 610 , All Count Mode data is converted to terminal count mode data. Once the conversion to terminal count mode data is completed, the resulting data table is stored in the buffer 173 .
  • If, in step 608, an alMatrix table is received, the absolute count data included therein needs to be converted to delta count data and the all count mode data needs to be converted to terminal count mode data to place it in the common format before storage in the buffer. In such a case, operation proceeds from step 608 to step 612 and then to step 610 . In step 612 , absolute count data is converted to delta count data. In step 610 , All Count Mode data is converted to terminal count mode data. Once the conversion to terminal count mode data is completed, operation proceeds to step 614 wherein the resulting data table is stored in the buffer 173 .
  • If, in step 608, an nlMatrix table is received, the absolute count data needs to be converted to delta count data to place it in the common format before storage in the buffer 173 . Note that terminal count conversion need not be performed since application layer conversation information is not available in an nlMatrix table.
  • Thus, when an nlMatrix table is received, operation proceeds from step 608 to step 612 .
  • step 612 absolute count data is converted to delta count data. Once the conversion of absolute count data to delta count data is completed, operation proceeds to step 614 wherein the resulting data table is stored in the buffer 173 .
  • If, in step 608, an nlMatrixTopN table is received, the data is already in delta count format. In addition, terminal count conversion need not be performed since application layer conversation information is not available from the received nlMatrixTopN table. Accordingly, when an nlMatrixTopN table is received, operation proceeds directly to step 614 wherein the received data table is stored in the buffer 173 .
  • From step 614 , operation proceeds to step 616 wherein a determination is made as to whether or not there are any remaining probes from which data needs to be collected. If there are probes remaining from which data has not been collected, operation proceeds from step 616 to step 606 wherein the process of collecting network traffic data from the next probe commences.
  • If, however, in step 616 it is determined that there are no more probes from which data needs to be collected, e.g., it is determined that network traffic data has been collected, processed and placed in the buffer for each of the probes identified in table 169 , operation proceeds to step 618 wherein the data collection and conversion routine 600 is stopped.
  • the buffer 173 includes data tables for each identified probe 127 , 137 , 147 corresponding to the just completed data collection cycle.
  • the data collection and conversion routine 600 may be re-executed, each time it is desired to collect network traffic data, e.g., periodically at 30 minute or hourly intervals.
  • the period between data collections is selected to match the period for which the delta count is to be generated, i.e., the delta count represents the network traffic detected since the last time the network traffic data table was retrieved.
  • FIG. 6B is an additional illustration showing how received probe data, in the form of a network traffic data table, is processed by the data collection and conversion routine 600 to generate a network traffic data table 640 in the desired common data format (with the nlMatrixTopN and nlMatrix tables of course lacking the desired but unavailable application layer information).
  • the five possible input data tables 621 , 622 , 623 , 624 and 625 are shown on the left side of FIG. 6B.
  • the ovals 630 and 632 represent terminal count conversion and delta generation operations, respectively.
  • the alMatrixTopN (Terminal Count Mode) and nlMatrixTopN data tables are already in the desired common format. Thus, conversion operations need not be performed on input tables 621 and 625 .
  • the conversion of absolute count data to delta count data may be performed in accordance with the following exemplary pseudo code:
  • Begin {delta count generation operation}
        if the received data is the first set of data received from the probe:
        Begin {if}
          store the data table received from the probe in the temporary data table storage location associated with the specific probe from which the data being processed was collected;
          use the data included in the data table as the delta data;
        end {if}
        else
        Begin {else}
          retrieve the previously stored data table from the temporary data table storage location associated with the specific probe from which the data table being processed was collected;
          store the most recently collected data table in said temporary data table storage location;
          from the entries in each row of the most recently collected data table, subtract the corresponding packet and byte counter values obtained from the corresponding row of the table retrieved from said temporary data table storage location, the resulting packet and byte counters being the delta count values for the network traffic table being generated;
          incorporate the generated delta count values in the network traffic data table upon which the conversion operation is being performed, thereby replacing the absolute count values from which they were generated;
          discard the network traffic data table retrieved from said temporary storage location;
        end {else}
      end {delta count generation operation}
  • the delta time interval is the time interval between generation of the retrieved tables by the probe which supplies the data being processed.
  • a delta count conversion operation consider that a counter in a table corresponding to a specific probe had a value of 100 the first time the data table was retrieved from the specific probe, a value of 400 the next time the data table was retrieved from the same probe and a value of 600 the third time data was retrieved from the probe.
  • the delta counter value generated in accordance with the conversion process of the present invention for the interval corresponding to the time period between the first and second probe data retrievals would be 300 and the delta counter value generated for the second time interval corresponding to the period of time between the second and third probe data retrievals would be 200.
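  • The delta generation just illustrated can be sketched in Python; the dictionary-based tables and function names are illustrative, not the patent's implementation. Per-probe previous tables play the role of the temporary data table storage in table 169:

```python
# Absolute-count -> delta-count conversion (step 612): keep the last
# retrieved table per probe and subtract it from the newest one.
# A table maps a row key (conversation/protocol) to (packets, bytes).
_previous_tables = {}  # probe id -> last retrieved absolute-count table

def to_delta_counts(probe_id, table):
    """Return a delta-count table. On the first retrieval from a probe
    the received counts are used as the deltas, per the pseudo code."""
    prev = _previous_tables.get(probe_id, {})
    _previous_tables[probe_id] = table  # becomes "previous" next cycle
    return {
        row: (pkts - prev.get(row, (0, 0))[0],
              byts - prev.get(row, (0, 0))[1])
        for row, (pkts, byts) in table.items()
    }
```

Applied to the worked example, successive absolute values of 100, 400 and 600 yield deltas of 300 and 200 for the second and third intervals.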
  • the conversion of all mode count data to terminal mode count data is required to convert data from alMatrix and alMatrixTopN (All Count Mode) tables into the common format used by the apparatus of the present invention.
  • the conversion process of the present invention assumes that the data in the tables has already been converted into delta count values if it was not already in delta count format.
  • the conversion of all count mode data to terminal count mode data in step 610 and the terminal count conversion operation 630 involve performing the steps set forth in the following pseudo code:
      Begin {Conversion of All Count mode data to Terminal Count mode data}
        For each individual conversation for which there is data in the data table being processed do:
        Begin {do}
          determine the protocol hierarchy for the individual conversation;
          starting at the network-layer protocols, subtract the counter values for each immediate (existing) child protocol from the child protocol's immediate (existing) parent counter value and store the result as the parent protocol's terminal count counter value;
          repeat the preceding step for each child protocol until the entire protocol hierarchy has been traversed;
        end {do}
      end {Conversion of All Count mode data to Terminal Count mode data}
  • Consider first the IP protocol's packet and byte counter values. Subtract the corresponding counter values for the IP/TCP and IP/UDP/SNMP child protocols from the IP parent protocol counter values, and store the result as the IP terminal count counter values.
  • IP/UDP/SNMP protocol is considered to be an immediate child of the IP protocol because the IP/UDP protocol does not exist in the data retrieved from the probe in the FIG. 3 example (since the probe is not monitoring it), and so this makes IP the immediate (existing) parent of IP/UDP/SNMP.
  • For IP/TCP, subtract the counter values for the IP/TCP/FTP and IP/TCP/HTTP protocols from the corresponding IP/TCP counter values. Store the result as the IP/TCP terminal count counter values.
  • For IP/UDP/SNMP there are no children, and so no processing to convert the counter values to terminal count values needs to be done.
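  • The parent/child subtraction described above can be sketched as follows. The helper treats the longest monitored proper prefix of a protocol path as the "immediate (existing) parent", which is why IP is the parent of IP/UDP/SNMP when IP/UDP is not monitored. An illustrative sketch, not the patent's code:

```python
def immediate_parent(proto, monitored):
    """Longest proper prefix of proto (split on '/') present in the
    monitored set, or None for a top-level protocol."""
    parts = proto.split("/")
    for n in range(len(parts) - 1, 0, -1):
        candidate = "/".join(parts[:n])
        if candidate in monitored:
            return candidate
    return None

def all_to_terminal(counts):
    """Convert an all-count-mode table {protocol: (packets, bytes)} to
    terminal count mode by subtracting each child's all-count counters
    from its immediate existing parent's counters."""
    terminal = dict(counts)
    for proto in counts:
        parent = immediate_parent(proto, counts)
        if parent is not None:
            p, b = terminal[parent]
            cp, cb = counts[proto]
            terminal[parent] = (p - cp, b - cb)
    return terminal
```

Feeding in the all-count values from the running example (IP 400/50000, IP/TCP 230/35000, IP/TCP/FTP 200/30000, IP/TCP/HTTP 10/1000, IP/UDP/SNMP 120/10000) reproduces the terminal counts IP 50/5000 and IP/TCP 20/4000.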
  • the byte and packet counts for the example conversation shown in Table 1 include only the monitored protocols which were shown in the example hierarchy discussed earlier in regard to FIG. 3. Note that Table 1 reflects that the monitoring of UDP protocol has been turned off in the probe monitoring the conversation. Also note that in Table 1, e.g., IP/TCP represents all those packets which could only be decoded by the probe as far as the IP/TCP protocol—the IP/TCP count does not include the IP/TCP/FTP or IP/TCP/HTTP counts.
  • the alMatrixTopN (Terminal Count Mode) table monitors conversations at all the known application-layer protocols, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration).
  • the counters in the alMatrixTopN (Terminal Count Mode) table work in Terminal Count Mode, and so a monitored packet increments only the counter of the “highest-level” protocol used in the packet.
  • the alMatrixTopN (Terminal Count Mode) table for the exemplary conversation of TABLE 1 would look like this:
      TABLE 2
      Network Layer  Source        Destination  Application Layer  Packets  Bytes
      Protocol       Address       Address      Protocol
      IP             123.45.67.89  98.76.54.32  IP/TCP/FTP         200      30000
      IP             123.45.67.89  98.76.54.32  IP/UDP/SNMP        120      10000
      IP             123.45.67.89  98.76.54.32  IP                 50       5000
      IP             123.45.67.89  98.76.54.32  IP/TCP             20       4000
      IP             123.45.67.89  98.76.54.32  IP/TCP/HTTP        10       1000
  • the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval.
  • alMatrixTopN (Terminal Count Mode)
  • the counters are already delta values in terminal count mode so the table, e.g., Table 2, received from a probe, is automatically in the common data format. Accordingly, in accordance with FIG. 6B the alMatrixTopN (Terminal Count Mode) table would be stored, unmodified, in the buffer 173 .
  • the alMatrixTopN (All Count Mode) table monitors conversations at all the known application-layer protocols, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration).
  • the counters in the alMatrixTopN (All Count Mode) table work in All Count Mode, and so a monitored packet increments the counters for all the protocol layers used in the packet.
  • IP/UDP protocol is not being monitored in this example by the probe, an IP/UDP counter is not maintained. Accordingly, packets for the IP/UDP/SNMP protocol do not increment an IP/UDP counter.
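  • The difference between the two counting modes can be sketched as follows: a packet fully decoded as IP/TCP/FTP increments IP, IP/TCP and IP/TCP/FTP in all count mode, but only IP/TCP/FTP in terminal count mode, and unmonitored intermediate protocols (here IP/UDP) are simply skipped. An illustrative sketch:

```python
def count_packet(path, monitored, counters, mode):
    """Increment per-protocol packet counters for one packet whose full
    decode is `path` (e.g. "IP/TCP/FTP"). In "all" mode every monitored
    prefix of the path is incremented; in "terminal" mode only the
    deepest monitored prefix is."""
    parts = path.split("/")
    prefixes = ["/".join(parts[:n]) for n in range(1, len(parts) + 1)]
    hits = [p for p in prefixes if p in monitored]
    if not hits:
        return
    targets = hits if mode == "all" else [hits[-1]]
    for proto in targets:
        counters[proto] = counters.get(proto, 0) + 1
```

With IP/UDP absent from the monitored set, an IP/UDP/SNMP packet counted in all count mode increments only IP and IP/UDP/SNMP, matching the example.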
  • the resulting alMatrixTopN (All Count Mode) table would look like this:
      TABLE 4A
      Network Layer  Source        Destination  Application Layer  Packets  Bytes
      Protocol       Address       Address      Protocol
      IP             123.45.67.89  98.76.54.32  IP                 400      50000
      IP             123.45.67.89  98.76.54.32  IP/TCP             230      35000
      IP             123.45.67.89  98.76.54.32  IP/TCP/FTP         200      30000
      IP             123.45.67.89  98.76.54.32  IP/UDP/SNMP        120      10000
      IP             123.45.67.89  98.76.54.32  IP/TCP/HTTP        10       1000
  • the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval.
  • Table 4C is ready for storage in buffer 173 .
  • the alMatrix table monitors conversations at all the known application-layer protocols, and stores them, using absolute counters, in a table which is ordered by network-layer protocol, source and destination addresses, and application-layer protocol.
  • the counters in the alMatrix table work in All Count Mode, and so a monitored packet increments the counters for all the protocol layers used in the packet.
  • the alMatrix table would look like this:
      TABLE 5A
      Network Layer  Source        Destination  Application Layer  Packets  Bytes
      Protocol       Address       Address      Protocol
      IP             123.45.67.89  98.76.54.32  IP                 1200     150000
      IP             123.45.67.89  98.76.54.32  IP/TCP             690      100000
      IP             123.45.67.89  98.76.54.32  IP/TCP/FTP         600      90000
      IP             123.45.67.89  98.76.54.32  IP/TCP/HTTP        30       3000
      IP             123.45.67.89  98.76.54.32  IP/UDP/SNMP        360      30000
  • the counter values are absolute values presented in all count mode. Accordingly, to place the alMatrix Table 5A into the desired common format, the counter values must be converted to delta values and all count mode values need to be converted to terminal count mode values.
  • the first step is the generation of delta values. This is done by subtracting the counter values in the alMatrix Table 5B, which was received during the last collection operation, from the corresponding counter values found in the most recently received alMatrix Table 5A.
  • Table 5B may be obtained from the temporary data table storage space in table 169 .
  • Table 5C, which includes the delta values generated by the subtraction operation, is shown below:
      TABLE 5C
      Network Layer  Source        Destination  Application Layer  Packets  Bytes
      Protocol       Address       Address      Protocol
      IP             123.45.67.89  98.76.54.32  IP                 400      50000
      IP             123.45.67.89  98.76.54.32  IP/TCP             230      35000
      IP             123.45.67.89  98.76.54.32  IP/TCP/FTP         200      30000
      IP             123.45.67.89  98.76.54.32  IP/TCP/HTTP        10       1000
      IP             123.45.67.89  98.76.54.32  IP/UDP/SNMP        120      10000
  • the terminal count conversion operation results in the following table:
      TABLE 5E
      Network Layer  Source        Destination  Application Layer  Packets  Bytes
      Protocol       Address       Address      Protocol
      IP             123.45.67.89  98.76.54.32  IP                 50       5000
      IP             123.45.67.89  98.76.54.32  IP/TCP             20       4000
      IP             123.45.67.89  98.76.54.32  IP/TCP/FTP         200      30000
      IP             123.45.67.89  98.76.54.32  IP/TCP/HTTP        10       1000
      IP             123.45.67.89  98.76.54.32  IP/UDP/SNMP        120      10000
  • Table 5E can be stored in the buffer 173 .
  • the nlMatrixTopN table monitors conversations at the network-layer protocols only, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration).
  • the nlMatrixTopN table monitors only network-layer protocols, and so will consider all of the packets given in the exemplary conversation to be IP packets; the stored table would be as follows:
      TABLE 6
      Protocol  Source Address  Destination Address  Packets  Bytes
      IP        123.45.67.89    98.76.54.32          400      50000
  • the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval. Since the counter values in the nlMatrixTopN table are already delta counter values, no conversion processing needs to be performed on the nlMatrixTopN table and it is ready for storage in the buffer 173 as retrieved.
  • the nlMatrix table monitors conversations at the network-layer protocols only. It stores the counted byte and packet information, using absolute count values, in a table which is ordered by network-layer protocol and source and destination addresses.
  • a delta conversion operation is performed. This involves subtracting the counter values in the previously received Table 7B from the corresponding counter values in the current nlMatrix Table 7A to generate a table as follows:
      TABLE 7C
      Protocol  Source Address  Destination Address  Packets  Bytes
      IP        123.45.67.89    98.76.54.32          400      50000
  • the data placed in the buffer 173 is in the common format rendering it suitable for use, e.g., in generating a network traffic database.
  • FIG. 7 illustrates how the network traffic data 701 , 703 , 705 , from the first through third probes respectively, placed in the buffer 173 , can be used to generate a network traffic database 707 .
  • the network traffic data 701 , 703 , 705 is processed by a database generation and maintenance routine 700 to generate a database 707 .
  • the database 707 includes multiple resolutions of the same data in parallel, e.g., in hourly, 6 hourly, daily, and weekly data sets. These data sets are stored in corresponding FIFO data structures 709 , 711 , 713 , 715 , respectively.
  • the database 707 may be stored on the data storage device 158 .
  • the parallel, multi-resolution storage method of the present invention provides a relatively simple means of managing a network traffic database and limiting its size without the need for an aging process and the double buffering often associated with such processes.
  • the disk space allocated to the database 707 is divided into 4 parts and assigned to the following fixed resolutions: hourly, 6-hourly, daily and weekly.
  • each row of a data table 701 , 703 , 705 corresponds to a monitored conversation and includes byte and packet count information.
  • Time stamp information indicating the time the conversation was monitored is also included in the tables 701 , 703 , 705 .
  • As each row of data is read in from one of the tables 701 , 703 , 705 it is used to create or update an entry in each of the parallel data sets 709 , 711 , 713 , 715 .
  • each record is used to represent a conversation between two hosts and the records are time aligned depending on the resolution: hourly on the hour; 6-hourly at 0600, 1200, 1800 and 2400 hrs; daily at 2400 hrs; and weekly at 2400 hrs on Saturday.
  • Database records for the same time interval can be considered as being in the same “bucket”.
  • a bucket is a set of data storage records for storing network traffic data corresponding to the preselected unit of time used for the resolution to which the bucket corresponds.
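  • The time alignment rules above can be sketched as a bucket-start computation using Python's datetime. The weekly rule is interpreted as weeks running from 2400 hrs Saturday to 2400 hrs the following Saturday (i.e., weekly buckets start on Sunday); that reading is an assumption for illustration:

```python
from datetime import datetime, timedelta

def bucket_start(ts, resolution):
    """Start of the bucket containing timestamp `ts`: hourly on the
    hour; 6-hourly at 0000, 0600, 1200 and 1800; daily at midnight;
    weekly starting Sunday 0000 (just after 2400 hrs Saturday)."""
    day = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if resolution == "hourly":
        return ts.replace(minute=0, second=0, microsecond=0)
    if resolution == "6-hourly":
        return day + timedelta(hours=(ts.hour // 6) * 6)
    if resolution == "daily":
        return day
    if resolution == "weekly":
        # weekday(): Monday == 0 ... Sunday == 6
        return day - timedelta(days=(ts.weekday() + 1) % 7)
    raise ValueError(resolution)
```

Because assignment depends only on the data's time stamps, late-arriving data still lands in the bucket for the period in which the traffic was monitored.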
  • FIG. 8 illustrates the database generation and maintenance routine 700 of the present invention in greater detail.
  • the illustrated routine 700 may be one of the parallel data set generation routines 166 stored in the management station's memory 162 .
  • the routine 700 begins in step 702 wherein the database generation routine is started, e.g., by having the CPU 154 load and begin executing the routine 700 .
  • the routine 700 may be loaded into, and executed by the CPU 155 at the same time it is being loaded and executed by the CPU 154 .
  • the different CPUs 154 , 155 are normally responsible for creating and maintaining, in parallel, data sets of different resolutions.
  • CPU 154 may be responsible for creating and maintaining the hourly and 6 hour network traffic data sets while the CPU 155 might be responsible for creating the daily and weekly network traffic data sets.
  • routine 700 is executed by the processor 154 .
  • multi-processor implementations are possible.
  • Operation proceeds from step 702 to step 704 wherein the CPU 154 creates hourly, 6 hour, daily and weekly FIFO data structures, one for each of the different data set resolutions to be supported.
  • Step 704 may involve, e.g., allocating data storage records to serve as buckets.
  • the hourly FIFO would comprise a plurality of buckets each corresponding to a one hour period of time.
  • Each bucket may include several records or entries each corresponding to a different conversation/protocol pair.
  • the daily FIFO would comprise a plurality of buckets each corresponding to a different one day period of time.
  • Over time, each bucket in the FIFO is filled. When all the records in the FIFO are filled, the records in the oldest buckets are overwritten, thereby ensuring that the process can continue after the available storage space is used.
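  • The overwrite-the-oldest behavior can be sketched with a bounded deque; this is an illustrative stand-in for the allocated record storage, not the patent's data structure:

```python
from collections import deque

class BucketFifo:
    """Fixed-capacity FIFO of time buckets: once the allocated space
    is full, appending a new bucket discards the oldest one, as
    described for the per-resolution data structures."""
    def __init__(self, capacity):
        self.buckets = deque(maxlen=capacity)

    def append(self, bucket):
        self.buckets.append(bucket)  # deque drops the oldest when full

    def oldest(self):
        return self.buckets[0]
```

Because overwriting happens in place, no separate aging pass or double buffering is needed to bound the database's size.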
  • step 706 the buffer 173 into which collected network traffic data is placed, is monitored for network traffic data. Upon detecting that network traffic data has been placed into buffer 173 , operation proceeds to step 708 .
  • step 708 the time stamps associated with the buffered data are examined.
  • step 710 the buffered network traffic data is assigned to be included in individual buckets in the FIFO structures as a function of the examined time stamps.
  • data is placed in buckets, e.g., sets or groups of records corresponding to the basic unit of time supported, as a function of time stamps indicating the time period in which the network traffic was monitored. Accordingly, data collection and reporting delays encountered by the management station 150 do not negatively impact the accuracy of the created network traffic database.
  • Steps 712 , 714 , 716 , 718 which are illustrated in parallel represent the updating of records included in the hourly, six hourly, daily and weekly FIFO data structures, respectively, using the same set of network traffic data. Steps 712 , 714 , 716 , 718 are illustrated in parallel to show that they may be performed in parallel by one or more CPUs 154 , 155 .
  • Operation proceeds from steps 712 , 714 , 716 and 718 to step 720 wherein the data obtained from the buffer 173 , used to update the hourly, six hourly, daily and weekly data records, is deleted. Operation then returns to monitoring step 706 so that the database updating process will be performed on a continuous basis until, e.g., the management station 150 is powered off or reset.
  • FIG. 9 represents database records created from traffic between hosts A through F. Dashed lines are used to indicate different hourly time periods 901 , 902 , 903 , 904 , 905 , 906 and a single 6 hourly time period 910 .
  • the range of numbers at the top of each time period is used to indicate the specific hour or hours included in the time period.
  • the first and second letters in each box indicate the two hosts involved in the monitored conversation.
  • the number in the box indicates the number of packets exchanged between the indicated hosts during the indicated time period.
  • the first hourly time period beginning at hour 0 and ending at hour 1, corresponds to bucket 901 .
  • Two conversations were detected during this first hourly time period.
  • the number of bytes, in addition to the number of packets, may also be stored in each record of the database 707 .
  • the hourly resolution data set 920 has six “buckets”, 901 through 906 , corresponding to first through sixth hourly time periods and the 6-hourly data set has one bucket 910 corresponding to the single 6 hour time period.
  • the 6-hour bucket 910 has more conversations and thus more entries than any one of the individual hourly buckets 901 through 906 .
  • the records in the six hour data set 922 are of a lower resolution than the hourly data set 920 , since they do not include detailed hourly conversation data.
  • read access is limited to complete data records.
  • data in a given time period may not be accessed until the record is fully complete, i.e., all the data from the system probes for the given time period has been included in the data record.
  • the presentation of incomplete data counts to an application or system user is avoided.
  • up to the minute data records are made available to the user.
  • a user may review, e.g., the most recent data in the weekly database despite the fact that the collection of the data for the current week is not yet complete.
  • FIG. 10 illustrates an exemplary steady state condition that may be reached after 7 weeks of operating one exemplary system 200. Note that in the FIG. 10 example, the database includes enough storage space to store hourly information for 1.5 days, 6-hourly information for 4.5 days, daily information for 9 days and weekly information for 7 weeks, assuming the use of the same amount of storage for each of the different resolutions. Note that the actual time periods for a given system will depend on the number of conversations which are monitored and the actual amount of storage space allocated for the database 707.
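The hourly-to-6-hourly relationship shown in FIG. 9 can be sketched in a few lines of Python. The host-pair-to-packet-count mapping used here is a simplified, assumed record layout; the actual records described above also carry byte counts and other fields.

```python
from collections import Counter

def aggregate_buckets(hourly_buckets):
    """Combine several hourly conversation buckets into one
    lower-resolution bucket, summing packet counts per host pair."""
    combined = Counter()
    for bucket in hourly_buckets:
        for host_pair, packets in bucket.items():
            combined[host_pair] += packets
    return dict(combined)

# Six hourly buckets in the spirit of FIG. 9 (host pair -> packets seen).
hourly = [
    {("A", "B"): 100, ("C", "D"): 50},
    {("A", "B"): 25},
    {("E", "F"): 75},
    {}, {}, {},
]
six_hourly = aggregate_buckets(hourly)
# The single 6-hour bucket has one entry per conversation seen in any hour,
# which is why it can hold more entries than any individual hourly bucket.
```

As in the figure, the 6-hour bucket preserves per-conversation totals but loses the hour-by-hour detail.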

Abstract

Methods and apparatus for collecting, storing, processing and using data are described. Network traffic probes are identified and attempts are made to configure the probes to generate network traffic data sets which are as close to a preselected common data format as possible. Application layer traffic data is collected in addition to network layer traffic data. The common data format uses delta count values and a terminal count mode format. Network data is obtained from a probe using one of the available table formats, which is selected in the following order of preference: alMatrixTopN (Terminal Mode), alMatrixTopN (All Mode), alMatrix, nlMatrixTopN and nlMatrix. A database of collected network traffic information which includes multiple parallel sets of data stored at different resolutions is created. The data set for each individual resolution is stored in a separate FIFO data structure and the oldest data records are overwritten when the allocated data space becomes fully utilized.

Description

    FIELD OF THE INVENTION
  • The present invention is directed to the collection, storage, processing and use of data in computer networks, and more specifically, to the collection, storage, processing and use of data relating to network traffic. [0001]
  • BACKGROUND OF THE INVENTION
  • The use of computer networks, and inter-connected groups of computer networks referred to as intranets, continues to be on the increase. The World Wide Web (WWW), sometimes referred to as the Internet, is an example of a global system of inter-connected computer networks used for both business and personal pursuits. The increased use of intranets within individual businesses and the increased use of the Internet globally is due to the increased number of computer networks in existence and the ease with which data, e.g., messages and/or other information, can now be exchanged between computers located on inter-connected networks. [0002]
  • FIG. 1 illustrates an intranet 10 implemented using known networking techniques and three local area networks (LANS) 20, 30, 40. The intranet 10 may be implemented within a business by linking together physically remote LANS 20, 30, 40. In the intranet 10, each of the first through third LANS 20, 30, 40 includes a plurality of computers (21, 22, 23) (31, 32, 33) (41, 42, 43), respectively. The computers within each LAN 20, 30, 40 are coupled together by a data link, e.g., an Ethernet, 26, 36, 46, respectively. The first LAN 20 is coupled to the second LAN 30 via a first router 18. Thus, the router 18 couples data links 26, 36 together. Similarly, the second LAN 30 is coupled to the third LAN 40 via a second router 19 which couples data links 36 and 46 together. [0003]
  • As is known in the art, the transferring of data in the form of packets can involve processing by several layers which are implemented in both hardware and/or software at different points in a network. A different protocol may be used at each level resulting in a protocol hierarchy. [0004]
  • At the bottom of the protocol hierarchy is the network layer protocol. One or more application layer protocols are located above the network layer protocol. In the present application, when describing a protocol associated with a data packet, the protocol associated with the packet will be described in terms of the protocols and layers associated therewith. [0005]
  • For example, the annotation: [0006]
  • <network-layer>/<application-layer 1>/ . . . /<application-layer N> [0007]
  • is used to describe the protocol hierarchy of the top-level (application-layer N) protocol. As another example, consider a packet which uses the SNMP (Simple Network Management Protocol) running over UDP (User Datagram Protocol), running on an IP (Internet Protocol) network-layer protocol. Such a packet would be described herein as an IP/UDP/SNMP packet. [0008]
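Since the annotation is simply a slash-separated protocol path, splitting it into its network-layer and application-layer parts is straightforward. The helper below is an illustrative sketch, not part of the described system:

```python
def split_protocol_path(annotation):
    """Split a <network-layer>/<application-layer 1>/.../<application-layer N>
    annotation into its network-layer protocol and application-layer protocols."""
    layers = annotation.split("/")
    return layers[0], layers[1:]

# An IP/UDP/SNMP packet: IP is the network-layer protocol,
# UDP and SNMP are the application-layer protocols above it.
network_layer, application_layers = split_protocol_path("IP/UDP/SNMP")
```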
  • As networks have grown in size and the volume of data being passed over networks has increased, system administrators have been faced with the job of planning and maintaining networks of ever increasing size and complexity. [0009]
  • Network traffic information can be used when troubleshooting problems on an existing network. It can also be used when controlling routing on a system with alternative routing paths. In addition, information on existing or changing network traffic trends is useful when decisions on upgrading or expanding service are being made. Thus, information on network traffic is useful both when maintaining an existing network and when planning modifications and/or additions to a network. Given the usefulness of network traffic information, system administrators have recognized the need for methods and apparatus for monitoring network activity, e.g., data traffic. [0010]
  • Because intranets often encompass geographically remote systems and/or networks, remote monitoring of network traffic is often desirable. [0011]
  • In order to facilitate the monitoring of network activity, remote monitoring (RMON) devices, often called monitors or probes, are sometimes used. These devices often serve as agents of a central network management station. Often the remote probes are stand-alone devices which include internal resources, e.g., data storage and processing resources, used to collect, process and forward, e.g., to the network management system, information on packets being passed over the network segment being monitored. In other cases, probes are built into devices such as routers and bridges. In such cases, the available data processing and storage resources are often shared between a device's primary functions and its secondary traffic monitoring and reporting functions. In order to manage an intranet or other network comprising multiple segments, many probes may be used, e.g., one per network segment to be monitored. [0012]
  • Network traffic data collected by a probe is normally stored internally within the probe until, e.g., being provided to a network management station. The network traffic data is usually stored in a table sometimes referred to as a management information base (MIB). Recently, RMON2 MIB standards have been set by the Internet Engineering Task Force (IETF) which increase the types of network traffic that can be monitored, the number of ways network traffic can be counted, and also the number of data formats which can be used for storing collected data. RMON2 tables may include a variety of network traffic data including information on network traffic which occurs on layers 3 through 7 of the Open Systems Interconnection (OSI) model. The particular network traffic information which is available from a probe will depend on which data table the probe implements and the counting method employed. [0013]
  • Currently, four different RMON2 matrix (or conversation) table types are possible: alMatrix, alMatrixTopN, nlMatrix, and nlMatrixTopN. [0014]
  • Complicating matters, alMatrixTopN tables support two counting modes of operation which affect the manner in which the counting of packets and bytes is performed at the various protocol layers. The first of these counting modes will be referred to herein as all count mode. In this mode, each monitored packet increments the counters for all the protocol layers used in the packet. For example, an IP/TCP/HTTP packet would increment the packet and byte counters for the IP, TCP and HTTP protocols. The second counting mode will be referred to herein as terminal count mode. In this mode, each monitored packet increments only the counter of the “highest-layer” protocol in the packet. For example, an IP/TCP/HTTP packet would increment the packet and byte counters for only the HTTP protocol. Note that the terminal count mode may only be used with the alMatrixTopN table. However, all count mode can be used with all the RMON2 tables discussed above including the alMatrixTopN table. [0015]
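The difference between the two counting modes can be made concrete with a short sketch. The counter layout (a mapping keyed by protocol and unit) is an assumption for illustration:

```python
from collections import Counter

def count_packet(counters, protocol_path, size, terminal_mode):
    """Update per-protocol packet/byte counters for one monitored packet.

    In terminal count mode only the highest-layer protocol is counted;
    in all count mode every protocol layer in the packet is counted.
    """
    layers = protocol_path.split("/")
    # e.g. "IP/TCP/HTTP" -> ["IP", "IP/TCP", "IP/TCP/HTTP"]
    prefixes = ["/".join(layers[:i + 1]) for i in range(len(layers))]
    targets = prefixes[-1:] if terminal_mode else prefixes
    for proto in targets:
        counters[proto, "packets"] += 1
        counters[proto, "bytes"] += size

all_mode, terminal = Counter(), Counter()
count_packet(all_mode, "IP/TCP/HTTP", 512, terminal_mode=False)
count_packet(terminal, "IP/TCP/HTTP", 512, terminal_mode=True)
# all_mode increments IP, IP/TCP and IP/TCP/HTTP;
# terminal increments only IP/TCP/HTTP.
```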
  • Accordingly, probes may now collect and store data in tables corresponding to any one of five different RMON2 formats. The five different RMON2 table possibilities are identified herein as alMatrixTopN (Terminal Count Mode), alMatrixTopN (All Count Mode), alMatrix, nlMatrix and nlMatrixTopN tables. [0016]
  • Numerous distinctions exist between the various types of tables that may be supported by an RMON2 probe. [0017]
  • Network-layer (nl) tables, e.g., nlMatrix, and nlMatrixTopN tables, count only those protocols which are deemed to be network-layer protocols. Network-layer protocols are the protocols which are used to provide the transport-layer services as per the well known ISO OSI 7-layer protocol model, and include, for example, such protocols as IP, IPX, DECNET, NetBEUI and NetBIOS among others. No child-protocols of the network-layer protocols are counted in network-layer tables. [0018]
  • Application-layer (al) tables, e.g., alMatrixTopN (Terminal Count Mode), alMatrixTopN (All Count Mode), and alMatrix tables, count any protocol that is transport layer or above, provided the probe knows how to decode the protocol. This includes, e.g., everything from IP through to IP/UDP/SNMP, Lotus Notes traffic, WWW traffic, and so on. Application-layer tables provide information on a super-set of the protocols which the network-layer (nl) tables provide, by counting child-protocols of the supported network-layer protocols. [0019]
  • In addition to the different types of protocol data that will be monitored depending on whether a network layer (nl) or application layer (al) table is being supported, the method of counting data will vary depending on the supported table type. [0020]
  • The alMatrix and nlMatrix tables monitor conversations which occur in the network, and keep count of the total number of bytes and packets seen for each conversation for each monitored protocol since the probe was turned on. If the probe has been reset since it was turned on, then the counters store the number of bytes and packets seen since the last time the probe was reset. These kinds of counters will be referred to herein as absolute counters. The entries in alMatrix and nlMatrix tables are ordered by address and protocol. [0021]
  • The alMatrixTopN and nlMatrixTopN tables also monitor all conversations which occur in the network, and also keep count of the number of bytes and packets seen for each conversation. However, there are several differences. MatrixTopN tables must be configured by the user or by a client program, and are configured to have a maximum number of entries and a time interval for which the table will be generated. Once configured, the probe will perform the following steps until the MatrixTopN table is destroyed (either by a request from the user or client program, or by the probe being turned off): [0022]
  • 1. Monitor the conversations in the network, counting the packets and bytes seen over the specified time interval. [0023]
  • 2. Once the time interval is reached, then generate a table of the top N conversations seen in the network. This table can then be retrieved by the user (or client program), and is held until the next table is generated, which then replaces the current table. The ordering in a MatrixTopN table may be either by the number of packets seen, or by the number of bytes seen. [0024]
  • 3. Go back to step 1. [0025]
  • As MatrixTopN tables monitor the number of packets and bytes seen over the specified time interval, with the counters being effectively reset each time a new table of the top N conversations is generated, the counters generated by MatrixTopN tables are referred to herein as delta counters. [0026]
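The MatrixTopN cycle of steps 1 through 3, together with the delta-counter behavior just described, can be sketched as follows. The class and method names are illustrative, not RMON2 identifiers:

```python
import heapq

class MatrixTopN:
    """Sketch of a MatrixTopN-style table: accumulate per-conversation
    delta counters over an interval, then publish the top N and reset."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.deltas = {}          # conversation -> packets this interval
        self.current_table = []   # last generated top-N table

    def on_packet(self, conversation):
        # Step 1: count packets seen over the specified time interval.
        self.deltas[conversation] = self.deltas.get(conversation, 0) + 1

    def end_of_interval(self):
        # Step 2: generate the top-N table, here ordered by packets seen.
        self.current_table = heapq.nlargest(
            self.max_entries, self.deltas.items(), key=lambda kv: kv[1])
        # Counters are effectively reset for the next interval (step 3),
        # which is what makes these delta rather than absolute counters.
        self.deltas = {}

table = MatrixTopN(max_entries=2)
for conv in ["A-B", "A-B", "C-D", "E-F", "A-B"]:
    table.on_packet(conv)
table.end_of_interval()
# current_table now holds the two busiest conversations, led by A-B.
```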
  • Because intranets and the networks which comprise intranets are frequently implemented and modified over a period of time, a plurality of different probes, often supporting different data traffic table formats, will frequently be encountered in the same network. In some cases, a probe may have insufficient processing and data storage resources to support all but the least resource intensive data table format, e.g., an nlMatrix table. Accordingly, the information included in traffic data tables of probes may vary from probe to probe depending on the particular protocols monitored, the individual probe's available resources, and the MIB format implemented by the individual probes. [0027]
  • The numerous variations in data counting methods and monitored protocol layer information discussed above can cause network traffic data collected from probes to be difficult to compare, process and display in a manner that can be easily understood by a human. [0028]
  • One solution to the problem of different data tables, being supported by different probes in a network, is to use only probes which provide data in the same format. Unfortunately, this approach tends to be costly and often involves replacing existing probes, adding new probes, and/or using probes which at least in some locations, provide a greater data collection capability than required. Thus, for cost reasons, probe selection rarely tends to be a practical solution to resolving problems resulting from a lack of consistency among probe data collection and storage techniques. [0029]
  • While the recent addition of RMON2 support for including information about child protocols in at least some data tables greatly increases the level of detailed information that can be collected regarding network traffic, it has led to increases in probe data storage and processing requirements. As the volume of network and intranet activity continues to increase into the Gigabytes/sec range, the space required to store detailed network traffic information for extended periods of time can become significant. While the data storage requirements for a probe maintaining network traffic data can be significant, the data storage requirements for a management system storing data obtained from several probes are many times greater. [0030]
  • One known technique for limiting the growth of a network traffic database is referred to as data aging. Data aging involves periodically scanning the stored data and, during the scan, data records that are older than certain preselected age limits are read and combined, e.g., added together, to create an additional set of data records of lower resolution than the records used to create the additional set. The records used to create the lower resolution set of data records are then deleted from the original database. When this technique is used, there are normally multiple age limits set up, resulting in multiple data sets corresponding to different non-overlapping time periods. In such a system, the older the data records become, the lower the resolution of those records will be. Hence, the further in the past a fixed period of time occurred, the less disk space is required to store the records corresponding to it. [0031]
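A minimal sketch of this known aging pass, assuming simple (timestamp, conversation, packets) records, shows how records older than the age limit are collapsed into lower-resolution ones and removed from the original set:

```python
def age_records(records, age_limit, now, combine_span):
    """Sketch of a prior-art aging pass: records older than age_limit are
    summed into combine_span-wide lower-resolution records and deleted
    from the full-resolution set. Record layout here is illustrative."""
    kept, aged = [], {}
    for ts, conv, packets in records:
        if now - ts > age_limit:
            # Align the record to the start of its lower-resolution period.
            period = ts - (ts % combine_span)
            key = (period, conv)
            aged[key] = aged.get(key, 0) + packets
        else:
            kept.append((ts, conv, packets))
    low_res = [(period, conv, pkts)
               for (period, conv), pkts in sorted(aged.items())]
    return kept, low_res

kept, low_res = age_records(
    [(0, "A-B", 10), (1, "A-B", 5), (10, "A-B", 7)],
    age_limit=5, now=12, combine_span=6)
# The two old records merge into one period-0 record;
# the recent record stays at full resolution.
```

Note that a real implementation must keep the old records readable while the scan runs, which is exactly the double-buffering burden criticized below.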
  • Unfortunately, the known data aging technique has several disadvantages, both from an implementation standpoint and from the standpoint of a human system administrator attempting to use the stored network traffic information. [0032]
  • From an implementation standpoint, the known system has the distinct disadvantage of requiring double buffering of the data while the aging process is being performed. Such double buffering is required so that accessing the data during aging will still give the correct results. Given that the size of the database to be aged can be quite substantial, double buffering presents obvious hardware disadvantages. From an implementation standpoint the known aging process also has the disadvantage of placing significant periodic demands for processing resources that can interfere, e.g., slow or delay, other processing tasks performed by a management station, while the aging operation is being performed. [0033]
  • The known data aging process results in multiple, non-overlapping data sets of differing resolutions corresponding to different time periods. From a human standpoint, this makes it difficult to review and compare data sets to detect, e.g., network traffic problems, since the data sets correspond to different time periods. [0034]
  • In view of the above discussion, it becomes apparent that there is a need for new and improved methods and apparatus for collecting and handling network traffic data from probes. [0035]
  • In particular, there is a need for methods of collecting network traffic data that minimize the number of different data formats and data tables which must be processed. In addition, there is a need for new methods and apparatus for processing data received in differing formats to produce a database of network traffic data which can easily be accessed by other applications and/or presented to a human administrator in a manner that allows for easy comparison and presentation of traffic data monitored on various network segments. [0036]
  • In addition, there is a need for methods and apparatus which are capable of limiting the growth of databases, e.g., network traffic databases, over time. It is desirable that the methods and apparatus allow for accurate access to the database at all times, once it is created. It is also desirable that the database methods not require double buffering of the data included in the database to support such access. In addition, if data sets of different resolutions are included in the database, it is desirable that the lower resolution data sets incorporate the information found in the higher resolution data sets and overlap for at least some period of time. [0037]
  • Data from different probes corresponding to a particular time period may not be received precisely at the same time by a monitoring device, e.g., due to network transmission delays, etc. Accordingly, it is also desirable that methods and apparatus for receiving and storing network traffic information be capable of compensating for such delays so that received network traffic data is stored and presented in a manner that accurately reflects the traffic in the time period that was monitored and not the time at which the traffic data was received by the monitoring station. [0038]
  • In addition to the above features, it is desirable that new methods of collecting, processing and storing network traffic data be compatible with existing probe data formats. It is also desirable that the new methods and apparatus be capable of being used with, or adapted to being used with, probe data formats that may be supported in the future. [0039]
  • In particular, it is desirable that at least some new methods and apparatus be capable of working with network traffic data in a plurality of table and count formats including various RMON2 tables. It is also desirable that any such method and/or apparatus not require a specific one of the RMON2 tables to be used by a probe, which would result in a constraint on RMON2 probe selection and probe resource requirements. [0040]
  • In view of the above, it is apparent that there remains considerable room for improvement in how network traffic data is collected, stored, processed and presented to network administrators and other individuals responsible for the design, maintenance and upgrading of networks and intranets. [0041]
  • SUMMARY OF THE PRESENT INVENTION
  • The present invention is directed to methods and apparatus for collecting, storing, processing and using data, e.g., network traffic data, in computer networks. [0042]
  • Several embodiments of the present invention are directed to dealing with the difficulties associated with collecting and processing network traffic data. As discussed above, one of the major problems encountered with collecting and processing network traffic data is the numerous different counting techniques and data table storage formats that may be used by various probes in the same system. [0043]
  • In order to provide a high degree of detailed information for subsequent applications, attempts are made by the method of the present invention to collect application layer traffic data as well as network layer traffic data. [0044]
  • To reduce problems due to different counting techniques and data table formats, the present invention processes collected network traffic data, as required, to place it into a common data format. The common data format is selected to provide a maximum degree of information in a format that is easy to use, e.g., by database generation and graphing applications. [0045]
  • From a user standpoint, it was determined that, in at least one embodiment of the invention, it was desirable that the common data format include delta count values as opposed to absolute count values and that application layer information be presented in terminal count mode as opposed to all count mode. [0046]
  • In order to reduce the amount of processing required to put the data in the desired common format, and the temporary data storage requirements associated with such processing, the system of the present invention controls network traffic data probes to provide data in a format that is as close to the desired format as possible, given an individual probe's capabilities. [0047]
  • One specific embodiment of the present invention is directed to the use of RMON2 probes and RMON2 data tables. [0048]
  • In one such embodiment, to minimize the amount of data processing required to put a probe's network traffic data into the common format used by a management system of the present invention, and to maximize the amount of information collected, network data is obtained from a probe using one of the available RMON2 table formats. In accordance with the present invention, the RMON2 format is selected in the following order of preference: alMatrixTopN (Terminal Mode), alMatrixTopN (All Mode), alMatrix, nlMatrixTopN and nlMatrix. [0049]
  • RMON2 alMatrixTopN (Terminal Mode) data tables satisfy the format requirements used in the present invention and therefore do not require conversion operation to be performed. In addition RMON2 alMatrixTopN (Terminal Mode) data tables include both application layer and network layer data. For these reasons, the RMON2 alMatrixTopN (Terminal Mode) data table is the most preferred of the RMON2 tables in the above discussed embodiment. [0050]
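The order of preference reduces to a simple first-match lookup. The format names below are the RMON2 table types discussed above; the function itself is an illustrative sketch:

```python
# Preference order from the text above: most preferred first.
PREFERENCE = [
    "alMatrixTopN (Terminal Mode)",
    "alMatrixTopN (All Mode)",
    "alMatrix",
    "nlMatrixTopN",
    "nlMatrix",
]

def choose_table_format(supported):
    """Pick the most preferred RMON2 table format a probe supports."""
    for fmt in PREFERENCE:
        if fmt in supported:
            return fmt
    return None  # probe supports no usable table format

# A resource-limited probe might support only the network-layer tables:
best = choose_table_format({"nlMatrix", "nlMatrixTopN"})
```

Choosing per probe, rather than demanding one fixed table type, is what avoids constraining probe selection.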
  • Once network traffic data is collected and placed in a common format, it is ready for use in generating displays and/or network traffic databases. [0051]
  • In one particular embodiment of the present invention, the network traffic data, in the common data format, is stored in a network traffic database to allow for future analysis such as baselining and troubleshooting. [0052]
  • The known database aging process is avoided by the system of the present invention by creating and maintaining a database that includes multiple parallel sets of network traffic data at different resolutions. In accordance with the database generation and maintenance routine of the present invention, a data set for each different resolution is stored in a first-in, first-out (FIFO) data structure. The oldest records in the FIFO data structure are overwritten when there is no longer any unused storage space available for storing the records of the resolution to which the data structure corresponds. [0053]
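A per-resolution FIFO with overwrite-on-full behavior can be sketched with a bounded deque. The resolution names and capacities here are assumptions for illustration:

```python
from collections import deque

class ResolutionStore:
    """One FIFO data set per resolution; when a set's allocated space is
    full, its oldest record is overwritten by the newest arrival."""

    def __init__(self, capacities):
        # e.g. {"hourly": 36, "6-hourly": 18, "daily": 9, "weekly": 7}
        self.sets = {res: deque(maxlen=cap) for res, cap in capacities.items()}

    def append(self, resolution, record):
        # A deque with maxlen set drops its oldest entry automatically,
        # so no separate aging pass (and no double buffering) is needed.
        self.sets[resolution].append(record)

store = ResolutionStore({"hourly": 3})
for hour in range(5):
    store.append("hourly", ("record", hour))
# Only the three newest hourly records remain.
```

Because each resolution's FIFO is independent, the sets can be updated in parallel, matching the multiprocessor point made below.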
  • Because the network traffic database of the present invention is not aged, the periodic processor loading associated with aging of databases is avoided. In addition, the need to double buffer the database data during an aging process is eliminated since no aging is performed. [0054]
  • The parallel database routines of the present invention also have the advantage of being well suited to a multiprocessor environment since each data set can be maintained and updated independently. [0055]
  • In the databases of the present invention, the database records at the different resolutions overlap, covering the same time period. This makes it relatively easy for a system administrator to review database records corresponding to the same time period at different resolutions. This can facilitate a system administrator's attempts to identify network traffic problems and/or trends without the need to perform complicated processing when comparing or switching between data at different resolutions. [0056]
  • In addition to the above described features, many other features and embodiments of the present invention are described in detail below.[0057]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a known intranet arrangement. [0058]
  • FIG. 2 is a block diagram of an intranet including a management system implemented in accordance with one embodiment of the present invention. [0059]
  • FIG. 3 is a diagram of a protocol hierarchy used in various examples discussed herein. [0060]
  • FIG. 4A is a flow chart of a management system initialization routine implemented in accordance with the present invention. [0061]
  • FIG. 4B is an exemplary probe information/data table created by executing the initialization routine illustrated in FIG. 4A. [0062]
  • FIG. 5 is a diagram showing the processing of network conversation data in accordance with one exemplary embodiment of the present invention. [0063]
  • FIG. 6A illustrates a method of collecting network traffic data from probes and converting the collected data into a common data format. [0064]
  • FIG. 6B illustrates the conversion of various RMON2 data tables into the common data format used in accordance with various embodiments of the present invention. [0065]
  • FIG. 7 is a block diagram illustrating the generation of a network traffic database including parallel sets of data of differing resolutions. [0066]
  • FIG. 8 is a flow chart illustrating a method of the present invention for generating a network traffic database including parallel sets of network traffic stored at different resolutions. [0067]
  • FIG. 9 illustrates a network traffic database including parallel data sets having an hourly and 6-hourly resolution. [0068]
  • FIG. 10 illustrates a network traffic database including parallel sets of network traffic information stored at different resolutions. [0069]
  • DETAILED DESCRIPTION
  • As discussed above, the present invention relates to methods and apparatus which can be used to collect, store, and process data, e.g., data regarding traffic in a computer network or intranet. It is also directed to methods of presenting network traffic data in a format that can be easily understood by a person, e.g., an individual responsible for managing the computer network or networks being monitored. [0070]
  • Referring now to FIG. 2, there is illustrated an intranet 200 implemented in accordance with one embodiment of the present invention. Various elements of the intranet 200 which are the same as, or similar to, the known intranet 10, are identified using the same reference numerals used in FIG. 1. [0071]
  • As illustrated, the intranet 200 comprises first through third LANS 120, 130, 140, each of which includes a plurality of computers (21, 22, 23) (31, 32, 33) (41, 42, 43), respectively. The computers within each LAN 120, 130, 140 are coupled together by a data link, e.g., an Ethernet, 26, 36, 46, respectively. The first LAN 120 is coupled to the second LAN 130 via a first router 17 which couples data links 26, 36 together. The first LAN 120 is also coupled to the third LAN 140 via a second router 18. [0072]
  • The second LAN 130 is coupled to the third LAN 140 via a third router 19 which couples data links 36 and 46 together. [0073]
  • Data links 26, 36 and 46 are network segments within the intranet 200. In order to obtain information on each of the network segments 26, 36, 46, probes 127, 137, 147 are included in each of the first through third LANs, respectively. Each probe is coupled to the data link, e.g., Ethernet, which is included in the LAN in which the probe resides. Because the first probe 127 is coupled to the first Ethernet 26, it can collect information about traffic on the network segment 26. Similarly, the second and third probes 137, 147 are able to collect information about traffic on the network segments 36, 46, to which they are coupled, respectively. In accordance with one embodiment of the present invention, the probes 127, 137, 147 collect and store network traffic data in one or more RMON2 tables (MIBs). [0074]
  • The probes 127, 137, 147 may include memory, a processor, an I/O interface device and a mass storage device, such as a disk drive. In one embodiment, probes 127, 137, 147 are implemented using known network traffic data probes. [0075]
  • In accordance with the present invention, each of the probes 127, 137, 147 is coupled to a management station 150 which also forms part of the intranet 200. The management station 150 includes a display device 152, one or more central processing units (CPUs) 154, 155, a keyboard 156, a mass storage device 158 for storing, e.g., a database, and memory 162, which are coupled together by a bus 163. The mass storage device 158 may be, e.g., a disk drive or array of drives. In the embodiment illustrated in FIG. 2, two CPUs 154, 155 capable of operating in parallel are shown. However, in many embodiments, a single CPU 154 is used on a time shared basis, e.g., to perform database generation and maintenance operations. [0076]
  • The bus 163 couples the discussed management station components to an input/output (I/O) interface 160 used to connect the management station and its components to the first through third probes 127, 137, 147. The I/O interface 160 is responsible for interfacing between the various devices coupled thereto. [0077]
  • One or both of the management station's CPUs 154, 155 can be used to control the operation of the management station 150 as a function of various routines stored in the memory 162. The use of one or both of the CPUs, in controlling the operation of the management station 150, depends on the implemented operating system. For exemplary purposes it will be assumed that only CPU 154 is used to control operation of the management station 150. [0078]
  • The routines, stored in the memory 162, include initialization routines 171, data collection and conversion routines 164, parallel data set generation routines 166, and processing/filtering/display routines 168. The various routines may be implemented as computer programs. In addition to the routines 171, 164, 166, 168, the memory 162 may include probe information and data tables received from the probes 127, 137 and 147. [0079]
  • The memory 162 may also include a buffer 173 for temporarily storing data tables converted to the common format of the present invention. The collected probe data stored in the buffer 173 is processed by the CPU 154 under control of routines 164, 166, 168 and stored in a network traffic information database located on the storage device 158 as will be discussed below. [0080]
  • The keyboard 156 can be used for inputting queries regarding network traffic information. Charts and statistics regarding network traffic information are generated by the CPU 154 in response to such queries using the data included in the network traffic database. The charts and statistics are displayed on the display device 152 and/or printed on a printer 170 coupled to the management station 150. [0081]
  • FIG. 3 illustrates an exemplary protocol hierarchy in the form of a [0082] tree 301 which may be retrieved from one of the probes 127, 137, 147 for a monitored conversation between two devices included in the intranet 200. The hierarchy illustrated in FIG. 3 will be used in the discussion which follows to illustrate various points. Note that, while a probe 127, 137, 147 may support many thousands of protocols, only those protocols which have been seen for a particular conversation will be stored in the data table or tables supported by the probe and thus will be the only protocols which may be retrieved by the management station 150 from the probe for that conversation.
  • In the FIG. 3 diagram, the protocols shown are IP (Internet Protocol), UDP (User Datagram Protocol), SNMP (Simple Network Management Protocol), TCP (Transmission Control Protocol), FTP (File Transfer Protocol) and HTTP (Hyper-Text Transfer Protocol—also sometimes referred to as WWW (World Wide Web) traffic). [0083]
  • The tree [0084] 301 has been divided into two halves: the network-layer protocol 303 and the application-layer protocols 305. This division will be used in later examples.
  • The conversation for which the tree has been generated is a conversation between two devices, e.g., computers A and [0085] B 21, 22, using the IP network-layer protocol.
  • The IP/UDP protocol is shown in a dotted box—this is to represent that, while the IP/UDP/SNMP packets were monitored by the [0086] probe 127, the probe 127 had the IP/UDP protocol turned off. This is a feature of RMON2 (the ability to turn off the monitoring of protocols) and means that any pure IP/UDP packets would not be counted. Thus, a count of any pure IP/UDP packets on the network segment 26 would not be supplied by the probe to the management station 150 on retrieval of the network traffic data from the probe 127. However, child protocols of IP/UDP (such as IP/UDP/SNMP) would continue to be counted and supplied to the management station 150 from the probe 127.
  • As IP/UDP is not being monitored by the probe, we can describe this tree using the following format: [0087]
  • IP [0088]
  • IP/UDP/SNMP [0089]
  • IP/TCP [0090]
  • IP/TCP/FTP [0091]
  • IP/TCP/HTTP. [0092]
  • As discussed above, networks may include a variety of [0093] probes 127, 137, 147, with differing capabilities and differing network data table formats. In accordance with the present invention, the management station 150 collects and processes network traffic data from the probes 127, 137, 147 included in the network. In order to simplify subsequent data processing operations, the network traffic data received from the probes is processed to place it in a consistent format that can be used to support queries, storage, and display of network traffic data in a format that is easy to process and understand. By converting network traffic data into a consistent format at an early stage, processing components and modules, e.g., the parallel data set generation routines 166 and processing/filtering/display routines 168, can be isolated from the complexities associated with varying network traffic data formats encountered from probe to probe.
  • The inventors of the present application recognized that, for most purposes, what is of interest is the network traffic during a specific time interval and not the total amount of traffic monitored from the time a probe is turned on. Accordingly, in determining the common format into which network traffic data should be placed, it was decided that a delta counting, as opposed to absolute counting, technique should be used. In addition, it was decided that, for maximum flexibility, it was useful to obtain as much detail about network traffic as possible. Accordingly, it was decided that the common data format should include application layer protocol information when available. In addition, it was decided that it was more useful to have the data represented in terminal count mode, as opposed to all count mode. [0094]
  • Unfortunately, the only RMON2 table which satisfies all of the above-discussed criteria selected for the common data format is the alMatrixTopN (terminal count mode) table. Because nlMatrix and nlMatrixTopN tables include only network layer traffic data, these two tables are considered the least useful and are not used unless the probe from which the data is being obtained does not support one of the three possible application layer tables. [0095]
  • To minimize the amount of data processing required to put a probe's network traffic data into the common format used by the [0096] management system 150, network data is obtained from a probe using one of the available table formats with the format utilized being selected in the following order of preference: alMatrixTopN (Terminal Mode), alMatrixTopN (AllMode), alMatrix, nlMatrixTopN and nlMatrix.
  • As discussed above, an alMatrixTopN (Terminal Mode) table has the advantage of requiring no format conversion operations. [0097]
  • The alMatrixTopN (AllMode) table requires a single conversion operation, i.e., an all count mode to terminal count mode conversion operation, to place it in the common format. Unlike absolute count to delta count conversion operations, as will be discussed below, terminal count conversion operations can be performed without the need to use the previously received data table. Accordingly, alMatrixTopN (AllMode) tables can be converted to the common format with a minimum of processing and memory requirements. [0098]
  • The alMatrix table is less desirable than the other application layer tables because it requires two conversion operations to place it in the common format. Furthermore, one of the conversion operations requires buffering of a retrieved data table for the duration of the data measurement interval thereby requiring more memory than is required to put the alMatrixTopN table in the common data format. [0099]
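The order of preference described above can be sketched as a simple lookup. This is an illustrative sketch only: the format-name strings and the `select_table_format` helper are hypothetical and not part of RMON2.

```python
# Table formats in the order of preference described above; formats earlier
# in the list require fewer conversion operations to reach the common format.
PREFERENCE_ORDER = [
    "alMatrixTopN (Terminal Mode)",  # no conversion required
    "alMatrixTopN (AllMode)",        # all count -> terminal count
    "alMatrix",                      # absolute -> delta, all -> terminal
    "nlMatrixTopN",                  # network layer only, already delta
    "nlMatrix",                      # network layer only, absolute counts
]

def select_table_format(supported_formats):
    """Return the most preferred table format among those a probe supports."""
    for fmt in PREFERENCE_ORDER:
        if fmt in supported_formats:
            return fmt
    raise ValueError("probe supports no known table format")
```

For example, a probe offering only the network layer tables would be assigned nlMatrixTopN, while any probe with application layer support is assigned one of the alMatrix variants.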
  • Identification of the probes which are coupled to the [0100] management system 150, the data tables they support, and the selection of the data table to be used with each probe occur during execution, by CPU 154, of a management station initialization routine 300. The routine 300 is one of the initialization routines included in memory segment 171.
  • Operation of the [0101] management station 150 of the present invention will now be discussed with regard to the initialization routine 300 shown in FIG. 4A. The initialization routine 300 is performed by the management station, e.g., when the station is powered up or reset. The initialization routine 300 begins in step 302 wherein the routine 300 is executed by the CPU 154.
  • In [0102] step 304, the management system 150 detects the probes 127, 137, 147 which are coupled to the system 150. The detection of the probes may be done, as known in the art, by transmitting a signal querying for a response from probes which are present.
  • Once a probe is detected, the initialization routine determines the network traffic table format that is to be used with the detected probe and stores that information in memory for future use, e.g., in determining what if any format conversions need to be performed on data obtained from the probe. [0103]
  • For each detected [0104] probe 127, 137, 147 the initialization process proceeds through steps 306 through 322. The path taken through these steps determines which table format will be used with the identified probe.
  • In step [0105] 306 a determination is made as to whether or not the probe being initialized supports application layer tables, i.e., if the probe has alMatrix capability. In one embodiment, alMatrix support is determined by querying a probeCapabilities object supported by the detected probe and monitoring the probe's response.
  • If in [0106] step 306 it is determined that the probe includes alMatrix support, operation proceeds to step 308. In step 308, the management station 150 signals the probe to create an alMatrixTopN table using terminal mode counting. If, in step 310, it is determined, e.g., by receipt of a signal from the probe, that creation of the desired alMatrixTopN table was successful, operation proceeds to step 312. In step 312, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrixTopN (Terminal Count Mode) format. With the successful updating of memory in step 312 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in [0107] step 310 it was determined that terminal alMatrixTopN table creation was unsuccessful, operation proceeds to step 314 instead of 312. In step 314 the management system 150 signals the probe being initialized to create an alMatrixTopN table using all count mode (as opposed to terminal count mode) counting.
  • If, in [0108] step 316, it is determined that all count Mode alMatrixTopN table creation was successful, e.g., by monitoring for a signal from the probe being initialized, operation proceeds to step 318. In step 318, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrixTopN (all mode counting) format. With the successful updating of memory in step 318 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in [0109] step 316, it is determined that all Mode alMatrixTopN table creation was unsuccessful operation proceeds to step 320. In step 320, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in alMatrix format. With the successful updating of memory in step 320 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If in [0110] step 306, it is determined that the probe being initialized does not support alMatrix tables, a network layer table must be selected for use. In such a case, operation proceeds from step 306 to step 324 wherein the management station 150 signals the probe being initialized to create an nlMatrixTopN table.
  • In [0111] step 326, a determination is made as to whether or not creation of the nlMatrixTopN table was successful.
  • If, in [0112] step 326, it is determined that nlMatrixTopN table creation was successful, e.g., by monitoring for a signal from the probe being initialized, operation proceeds to step 328. In step 328, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in nlMatrixTopN format. With the successful updating of memory in step 328 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • If, in [0113] step 326, it is determined that nlMatrixTopN table creation was unsuccessful, operation proceeds to step 330. In step 330, probe information in memory is updated to include an entry on the probe being initialized and to indicate that the probe's data is in nlMatrix format. With the successful updating of memory in step 330 to reflect the presence and data table format of the detected probe which was just initialized, operation proceeds to step 322.
  • In step [0114] 322 a determination is made as to whether any probes detected in step 304 remain uninitialized. If there is another probe to be initialized, operation proceeds once again to step 306 wherein initialization of the next probe begins.
  • If, in [0115] step 322 it is determined that no probes remain to be initialized, operation proceeds to step 332 wherein the initialization routine is stopped pending its restart upon the next power up or resetting of the management station 150.
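The initialization flow of steps 306 through 332 can be sketched as follows. The `ExampleProbe` class and its `supports_al_matrix` and `create_table` methods are hypothetical stand-ins; an actual implementation would issue SNMP requests against the probe's probeCapabilities object and matrix control tables.

```python
class ExampleProbe:
    """Hypothetical stand-in for an RMON2 probe (illustration only)."""
    def __init__(self, name, al_support, creatable_tables):
        self.name = name
        self._al = al_support
        self._creatable = creatable_tables

    def supports_al_matrix(self):
        return self._al                      # step 306 capability check

    def create_table(self, kind):
        return kind in self._creatable       # table creation success/failure

def initialize_probe(probe, probe_table):
    if probe.supports_al_matrix():                       # step 306
        if probe.create_table("alMatrixTopN-terminal"):  # steps 308/310
            fmt = "alMatrixTopN (Terminal Count Mode)"   # step 312
        elif probe.create_table("alMatrixTopN-all"):     # steps 314/316
            fmt = "alMatrixTopN (All Count Mode)"        # step 318
        else:
            fmt = "alMatrix"                             # step 320
    else:
        if probe.create_table("nlMatrixTopN"):           # steps 324/326
            fmt = "nlMatrixTopN"                         # step 328
        else:
            fmt = "nlMatrix"                             # step 330
    probe_table[probe.name] = fmt            # record format for collection

def initialize_management_station(probes):
    probe_table = {}
    for probe in probes:                     # loop controlled by step 322
        initialize_probe(probe, probe_table)
    return probe_table

probes = [ExampleProbe("probe1", True, {"alMatrixTopN-all"}),
          ExampleProbe("probe2", False, set())]
formats = initialize_management_station(probes)
```

In this example, probe1 supports application layer tables but fails to create a terminal mode TopN table, so it falls back to all count mode; probe2 supports neither TopN variant and falls back to nlMatrix.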
  • An exemplary probe information/data table [0116] 169 created in memory 162 via execution of the initialization routine is illustrated in FIG. 4B. Each detected probe 127, 137, 147 is identified in the table 169, as is the format of the data table which is to be obtained from the identified probe when collecting network traffic data. Note that the table 169 includes temporary data table storage space used for storing data tables used as part of the format conversion operations discussed below. Note also that retrieved alMatrixTopN tables and nlMatrixTopN tables need not be stored for use in subsequent table format conversion operations since these tables are retrieved from the probe in the desired delta count format.
  • Once the [0117] management system 150 is initialized, collection, processing and storage of network data commences. FIG. 5 illustrates the collection, processing, storage and display of network traffic data in accordance with an exemplary embodiment of the present invention.
  • In FIG. 5 the group of [0118] networks 120, 130, 140, from which network traffic data is collected, are generally represented as a group by the block 502. The probes 127, 137, 147 which monitor each network or network segment serve as the source of network traffic data which is supplied to the management station 150. Network traffic data, in the form of a data table, is supplied to the management station from each probe 127, 137, 147 periodically in response to requests from the management station 150 for the information. The arrows, leading from the probes 127, 137, 147 to the data collection and conversion step 504 of the management station 150, represent the passing of the requested network traffic data to the management station 150.
  • Within the [0119] management station 150, there are several processing blocks 504, 508, 515 which are used to represent the various processing operations performed by the management station 150. In addition, there are several blocks, e.g., blocks 506, 510 and 152 which are used to illustrate the input and output data associated with the various processing operations.
  • The data collection and [0120] conversion step 504 represents data collection and formatting operations which are implemented using computer software, in the form of the data collection and conversion routines 164, to control the CPU 154.
  • In accordance with the processing performed in the data collection and [0121] conversion module 504, network traffic data is collected at periodic intervals from each of the detected probes and converted, in accordance with the present invention, into the preselected common format discussed above. The processing performed by the module 504 will be discussed in greater detail with regard to FIG. 6.
  • The output of the data collection and [0122] formatting step 504 is a set of network traffic data 506 which includes data from various probes that has been converted into the common data format of the present invention. The network traffic data 506 represents data from multiple probes collected during one periodic data collection operation involving the collection of data from probes 127, 137, 147. The set of network traffic data 506 serves as the input to a network traffic data set generation and maintenance module 508. As will be discussed in detail below, the data set generation and maintenance module 508 is responsible for generating multiple parallel sets of data which overlap in time but differ in terms of the resolution at which the network traffic data is stored in each data set. The group of data sets generated by the module 508 represent a network traffic database 510 extending in time over multiple periodic data collection cycles.
  • The data in [0123] network traffic database 510 can be accessed, e.g., in response to queries, processed, filtered and displayed and/or printed. Data processing, filtering and display generation step 515, which may be implemented by executing the routines 168 on the CPU 154, is responsible for performing such operations. The output of step 515 may take several forms including that of a printed document or a figure on the display device 152.
  • In the FIG. 5 embodiment, a circle and lines display of network traffic, generated in accordance with the present invention, is shown on the [0124] display 152. In one such embodiment, circles are used to represent computer networks or groups of computer networks. Points within a circle are used to represent devices located within the computer network represented by the surrounding circle. Lines between points are used to indicate detected conversations, while the thickness of a line is used to indicate the amount of data transferred during the monitored conversation. Note that in the FIG. 5 embodiment the outer circle on the display 152 represents the group of networks illustrated in FIG. 2 while each of the inner circles represents one of the computer networks 120, 130, 140.
  • FIG. 6A illustrates a [0125] method 600 corresponding, in one exemplary embodiment of the invention, to the data collection and conversion step 504. The routine 600 is executed periodically, e.g., every 30 minutes, by the CPU 154. As illustrated, the data collection and conversion routine 600 starts in step 602. During this step, the routine 600 is obtained from memory by the CPU 154 and executed.
  • From [0126] step 602 operation proceeds to step 604 wherein the stored information, included in table 169 about the probes present in the network and the network traffic data table format to be used with each probe, is accessed. Thus, the data collection and conversion routine 600 obtains from memory a list of probes that were detected during the previously discussed initialization process and information on the data table which the probe is to supply to the data collection routine.
  • [0127] Steps 606 through 614 are used to collect and process network traffic data corresponding to each individual probe that was detected during the initialization process.
  • In [0128] routine 600, operation proceeds from step 604 to step 606. In step 606 the processor 154 requests that the probe, from which data is to be collected, supply the network traffic data to the processor using the table format which was associated with the probe in the probe information/data table 169.
  • In [0129] step 608, the requested network traffic data table is received from the probe. The processing performed on the received network traffic data table to place it into the common data format used in accordance with the present invention depends on the type of data table received.
  • If in [0130] step 608 an alMatrixTopN (Terminal Count Mode) table is received, no format conversion operations are required. Accordingly, when an alMatrixTopN (Terminal Count Mode) table is received operation proceeds from step 608 directly to step 614 wherein the received data table, including time stamps indicating the time at which the network traffic occurred, is stored in a buffer 173 included in memory 162.
  • If in [0131] step 608 an alMatrixTopN (AllCount Mode) table is received, the data needs to be converted to terminal count mode to place it in the common format before storage in the buffer. In such a case, operation proceeds from step 608 to step 610. In step 610 AllCount Mode data is converted to terminal mode count data. Once the conversion to terminal count mode data is completed the resulting data table is stored in the buffer 173.
  • If in [0132] step 608 an alMatrix table is received, the absolute count data included therein needs to be converted to delta count data and all mode count data needs to be converted to terminal count mode data to place it in the common format before storage in the buffer. In such a case, operation proceeds from step 608 to step 612 and then to step 610. In step 612, absolute count data is converted to delta count data. In step 610 AllCount Mode data is converted to terminal count mode data. Once the conversion to terminal count mode data is completed operation proceeds to step 614 wherein the resulting data table is stored in the buffer 173.
  • If in [0133] step 608 an nlMatrix table is received, the absolute count data needs to be converted to delta count data to place it in the common format before storage in the buffer 173. Note that terminal count conversion need not be performed since application layer conversation information is not available in an nlMatrix table. In step 608 when an nlMatrix table is received, operation proceeds from step 608 to step 612. In step 612, absolute count data is converted to delta count data. Once the conversion of absolute count data to delta count data is completed, operation proceeds to step 614 wherein the resulting data table is stored in the buffer 173.
  • If in [0134] step 608 an nlMatrixTopN table is received, the data is already in delta count format. In addition, terminal count conversion need not be performed since application layer conversation information is not available from the received nlMatrixTopN table. In step 608 when an nlMatrixTopN table is received, operation proceeds directly to step 614 wherein the received data table is stored in the buffer 173.
  • From [0135] step 614, operation proceeds to step 616 wherein a determination is made as to whether or not there are any remaining probes from which data needs to be collected. If there are probes remaining, from which data has not been collected, operation proceeds from step 616 to step 606 wherein the process of collecting network traffic data from the next probe commences.
  • If, however, in [0136] step 616 it is determined that there are no more probes from which data needs to be collected, e.g., it is determined that network traffic data has been collected, processed and placed in the buffer for each of the probes identified in table 169, operation proceeds to step 618 wherein the data collection and conversion routine 600 is stopped.
  • At this point in time, the [0137] buffer 173 includes data tables for each identified probe 127, 137, 147 corresponding to the just completed data collection cycle.
  • By the time the data collection and [0138] conversion routine 600 stops, data from each of the network traffic probes 127, 137, 147 will have been converted, as required, into the common format used by the system of the present invention and stored in the buffer 173. The buffered network traffic data existing in a common format may then be used, e.g., in the subsequent generation of a database of network traffic information.
  • The data collection and [0139] conversion routine 600 may be re-executed, each time it is desired to collect network traffic data, e.g., periodically at 30 minute or hourly intervals. To simplify absolute count data to delta count data conversion, in one embodiment, the period between data collections is selected to match the period for which the delta count is to be generated, i.e., the delta count represents the network traffic detected since the last time the network traffic data table was retrieved.
  • FIG. 6B is an additional illustration showing how received probe data, in the form of a network traffic data table, is processed by the data collection and [0140] conversion routine 600 to generate a network traffic data table 640 in the desired common data format (with the nlMatrixTopN and nlMatrix tables of course lacking the desired but unavailable application layer information). The five possible input data tables 621, 622, 623, 624 and 625 are shown on the left side of FIG. 6B. The ovals 630 and 632 represent terminal count conversion and delta generation operations, respectively. As illustrated, the alMatrixTopN (Terminal Count Mode) and nlMatrixTopN data tables are already in the desired common format. Thus, conversion operations need not be performed on input tables 621 and 625.
  • However, to place the alMatrixTopN (All Count Mode) [0141] data 622 in the common data format the terminal count conversion operation 630 is performed.
  • To [0142] place alMatrix data 623 in the common data format, both the delta generation operation 632 and the terminal count conversion operation 630 are performed.
  • To [0143] place nlMatrix data 624 into the common data format the delta generation operation 632 is performed.
  • Thus, by performing delta generation operations and/or terminal count conversion operations, it is possible to convert data tables [0144] 622, 623, and 624 into the desired common data format.
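The routing of each retrieved table through the appropriate conversion operations, as described above and in steps 608 through 614, can be sketched as a simple dispatch. The `to_delta` and `to_terminal` parameters stand in for the delta generation and terminal count conversion operations detailed below; the stub functions used in the example merely record which operations ran.

```python
# Sketch of the FIG. 6B conversion pipeline: route a retrieved table
# through delta generation and/or terminal count conversion as needed
# to reach the common data format.
def to_common_format(table_format, table, to_delta, to_terminal):
    if table_format == "alMatrixTopN (Terminal Count Mode)":
        return table                         # already in the common format
    if table_format == "alMatrixTopN (All Count Mode)":
        return to_terminal(table)            # one conversion operation
    if table_format == "alMatrix":
        return to_terminal(to_delta(table))  # two conversion operations
    if table_format == "nlMatrixTopN":
        return table                         # already delta; no app layer data
    if table_format == "nlMatrix":
        return to_delta(table)               # absolute -> delta only
    raise ValueError("unknown table format: " + table_format)

# Stub conversions that append a marker so the routing can be observed:
record_delta = lambda t: t + ["delta"]
record_terminal = lambda t: t + ["terminal"]
al_matrix_result = to_common_format("alMatrix", [], record_delta, record_terminal)
```

Running the example shows that an alMatrix table passes through both operations, in the order delta generation first, then terminal count conversion.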
  • In accordance with an exemplary embodiment of the present invention, the conversion of absolute count data to delta count data may be performed in accordance with the following exemplary pseudo code: [0145]
  • Begin {delta count generation operation} [0146]
    if the received data is the first set of data received
    from the probe:
    Begin if
    Store the data table received from the probe
    in the temporary data table storage location
    associated with the specific probe from which
    the data being processed was collected;
    use the data included in the data table as
    delta data;
    end if
    else
    Begin else
    retrieve the previously stored data table
    from the temporary data table storage
    location associated with the specific probe
    from which the data table being processed was
    collected;
    store the most recently collected data table
    in said temporary data table storage
    location;
    from the entries in each row of the most
    recently collected data table, subtract the
    corresponding packet and byte counter values
    obtained from the corresponding row of the
    table retrieved from said temporary data
    table storage location, the resulting packet
    and byte counters being the delta count
    values for the network traffic table being
    generated; and
    incorporate the generated delta count values
    in the network traffic data table upon which
    the conversion operation is being performed
    thereby replacing the absolute count values
    from which they were generated;
    discard the network traffic data table
    retrieved from said temporary storage
    location;
    end else
    end {delta count generation operation}
  • In the pseudo code set forth above, the delta time interval is the time interval between generation of the retrieved tables by the probe which supplies the data being processed. [0147]
  • As an example of a delta count conversion operation consider that a counter in a table corresponding to a specific probe had a value of 100 the first time the data table was retrieved from the specific probe, a value of 400 the next time the data table was retrieved from the same probe and a value of 600 the third time data was retrieved from the probe. In such a case, the delta counter value generated in accordance with the conversion process of the present invention for the interval corresponding to the time period between the first and second probe data retrievals would be 300 and the delta counter value generated for the second time interval corresponding to the period of time between the second and third probe data retrievals would be 200. [0148]
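The pseudo code and the worked example above can be expressed in executable form as follows. This is a minimal sketch: the per-probe temporary data table storage is modeled as a dictionary keyed by probe name, and each data table as a mapping from a row key (e.g., a conversation/protocol identifier) to an absolute counter value.

```python
# Minimal sketch of the absolute-count-to-delta-count conversion described
# in the pseudo code above. previous_tables holds, per probe, the table
# retrieved on the prior collection cycle.
def absolute_to_delta(probe_name, table, previous_tables):
    previous = previous_tables.get(probe_name)
    previous_tables[probe_name] = dict(table)   # buffer for the next cycle
    if previous is None:
        # First retrieval from this probe: use the counts as delta data.
        return dict(table)
    # Subtract the previously stored counter values row by row; the results
    # are the delta count values for the table being generated.
    return {row: count - previous.get(row, 0) for row, count in table.items()}

# Worked example from the text: counter values of 100, 400 and 600 on three
# successive retrievals yield delta values of 300 and 200 for the two
# intervals between retrievals.
history = {}
first = absolute_to_delta("probe1", {"conv": 100}, history)
second = absolute_to_delta("probe1", {"conv": 400}, history)
third = absolute_to_delta("probe1", {"conv": 600}, history)
```

Note that, per the pseudo code, the first retrieval's data is used directly as delta data, and the retrieved table replaces the buffered copy on every cycle so only one prior table per probe is ever stored.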
  • The conversion of all mode count data to terminal mode count data is required to convert data from alMatrix and alMatrixTopN (All Count Mode) tables into the common format used by the apparatus of the present invention. The conversion process of the present invention assumes that the data in the tables has already been converted into delta count values if it was not already in delta count format. [0149]
  • In accordance with the exemplary embodiment of the present invention, the conversion of all count mode data to terminal count mode data in [0150] step 610 and the terminal count conversion operation 630, involve performing the steps set forth in the following pseudo code:
    Begin {Conversion of All Count mode data
    to Terminal Count mode Data}
    For each individual conversation for which there is data
    in the data table being processed do:
    Begin {do}
    determine the protocol hierarchy for the
    individual conversation;
    Starting at the network-layer protocols,
    subtract the counter values for each
    immediate (existing) child protocol from the
    child protocol's immediate (existing) parent
    counter value and store the result as the
    parent protocol's terminal count counter
    value.
    Repeat the preceding step for each child
    protocol until the entire protocol hierarchy
    has been traversed.
    End {do}
    end {Conversion of All Count mode data
    to Terminal Count mode Data}
  • As an example of a terminal count conversion operation, consider the exemplary protocol hierarchy discussed above in regard to FIG. 3. In order to convert all count mode data to terminal count mode data, the following steps would be performed assuming the FIG. 3 protocol hierarchy: [0151]
  • 1. The protocol hierarchy for the monitored conversation would be determined. [0152]
  • 2. Start with the IP protocol counter values (packet and byte counter values). Subtract the corresponding counter values for the IP/TCP and IP/UDP/SNMP child protocols, from the IP parent protocol counter values. Note that the IP/UDP/SNMP protocol is considered to be an immediate child of the IP protocol because the IP/UDP protocol does not exist in the data retrieved from the probe in the FIG. 3 example (since the probe is not monitoring it), and so this makes IP the immediate (existing) parent of IP/UDP/SNMP. Store the resulting values as the terminal count IP protocol counter values. [0153]
  • 3. Next, move onto the children of IP, namely IP/TCP and IP/UDP/SNMP. For IP/TCP, subtract counter values for the IP/TCP/FTP and IP/TCP/HTTP protocols from the corresponding IP/TCP counter values. Store the result as the IP/TCP terminal count counter values. For IP/UDP/SNMP there are no children, and so no processing to convert the counter values to terminal count values needs to be done. [0154]
  • 4. Finally, the conversion process moves onto the children of IP/TCP, namely IP/TCP/FTP and IP/TCP/HTTP. As neither of these protocols have children in the hierarchy there is no processing to be done to convert the counter values to terminal count values. [0155]
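The traversal described in steps 1 through 4 can be sketched as follows for a single conversation. Protocols are named by path (e.g., "IP/TCP/FTP"), and the immediate (existing) parent of a protocol is taken to be its longest proper prefix actually present in the table, which handles turned-off protocols such as IP/UDP in the FIG. 3 example. The all-count input values used in the example are derived by rolling up the Table 1 terminal counts and are an assumption for illustration.

```python
# Sketch of the all-count-mode to terminal-count-mode conversion for one
# conversation's packet counters.
def existing_parent(protocol, protocols):
    """Longest proper prefix of `protocol` present in `protocols`, or None."""
    parts = protocol.split("/")
    for i in range(len(parts) - 1, 0, -1):
        candidate = "/".join(parts[:i])
        if candidate in protocols:
            return candidate
    return None

def all_to_terminal(counts):
    """Convert all-mode counters to terminal-mode counters."""
    terminal = dict(counts)
    for protocol in counts:
        parent = existing_parent(protocol, counts)
        if parent is not None:
            # Subtract each immediate (existing) child's all-mode count
            # from its parent to obtain the parent's terminal count.
            terminal[parent] -= counts[protocol]
    return terminal

# All-count packet data consistent with the Table 1 terminal counts:
# IP includes all descendants (50+230+120), IP/TCP includes its children.
all_counts = {"IP": 400, "IP/TCP": 230, "IP/TCP/FTP": 200,
              "IP/TCP/HTTP": 10, "IP/UDP/SNMP": 120}
terminal = all_to_terminal(all_counts)
```

Because IP/UDP is absent from the retrieved data, IP/UDP/SNMP's count is subtracted directly from IP, exactly as in step 2 of the worked example; the result reproduces the terminal counts of Table 1.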
  • Examples of the data collection, conversion (where required), and storage processes of the present invention will now be discussed. The following examples of how various packets and bytes seen for a single conversation would be counted in the various probe table formats are based on the same contrived example conversation. The byte and packet counts for the example conversation, for one exemplary monitored time period, are set forth below in Table 1. In accordance with the present invention, the time period would correspond to the time period for which al and nl MatrixTopN tables were configured. [0156]
  • In the following example conversation, in the monitored time interval reflected in Table 1, the device with the IP address 123.45.67.89 was talking to the device with IP address 98.76.54.32 and the listed packet and byte counts were seen by a probe in regard to the conversation. [0157]
    TABLE 1
    Protocol        Packets   Bytes
    IP                   50    5000
    IP/TCP               20    4000
    IP/TCP/FTP          200   30000
    IP/TCP/HTTP          10    1000
    IP/UDP/SNMP         120   10000
  • The byte and packet counts for the example conversation shown in Table 1 include only the monitored protocols which were shown in the example hierarchy discussed earlier in regard to FIG. 3. Note that Table 1 reflects that the monitoring of UDP protocol has been turned off in the probe monitoring the conversation. Also note that in Table 1, e.g., IP/TCP represents all those packets which could only be decoded by the probe as far as the IP/TCP protocol—the IP/TCP count does not include the IP/TCP/FTP or IP/TCP/HTTP counts. [0158]
  • Examples of the processing performed in FIG. 6B for each of the five possible input table formats will now be provided based on the above discussed exemplary conversation. [0159]
  • 1. alMatrixTopN (Terminal Count Mode) Table Processing Example [0160]
  • As discussed above, the alMatrixTopN (Terminal Count Mode) table monitors conversations at all the known application-layer protocols, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration). The counters in the alMatrixTopN (Terminal Count Mode) table work in Terminal Count Mode, and so a monitored packet increments only the counter of the “highest-level” protocol used in the packet. [0161]
  • In this example, we will assume that the user (or client program) has requested that the table be ordered by the byte counters. As the counters in this table work in Terminal Count Mode, the 200 IP/TCP/FTP packets, for example, increment only the IP/TCP/FTP packet counter by 200. [0162]
  • As a result, the alMatrixTopN (Terminal Count Mode) table for the exemplary conversation of TABLE 1 would look like this: [0163]
    TABLE 2
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      200  30000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     120  10000
    IP                      123.45.67.89    98.76.54.32          IP                               50   5000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                           20   4000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      10   1000
  • Note that as this is a MatrixTopN table, the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval. [0164]
  • For alMatrixTopN (Terminal Count Mode), the counters are already delta values in terminal count mode so the table, e.g., Table 2, received from a probe, is automatically in the common data format. Accordingly, in accordance with FIG. 6B the alMatrixTopN (Terminal Count Mode) table would be stored, unmodified, in the [0165] buffer 173.
  • 2. alMatrixTopN (All Count Mode) Table Processing Example [0166]
  • The alMatrixTopN (All Count Mode) table monitors conversations at all the known application-layer protocols, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration). The counters in the alMatrixTopN (All Count Mode) table work in All Count Mode, and so a monitored packet increments the counters for all the protocol layers used in the packet. [0167]
  • Since the alMatrixTopN (All Count Mode) table works in All Count Mode, the monitored protocols increment the following counters for the exemplary conversation: [0168]
    TABLE 3
    Protocol     Incremented Counters
    IP           IP
    IP/TCP       IP, IP/TCP
    IP/TCP/FTP   IP, IP/TCP, IP/TCP/FTP
    IP/TCP/HTTP  IP, IP/TCP, IP/TCP/HTTP
    IP/UDP/SNMP  IP, IP/UDP/SNMP
  • This means that, for example, the 200 IP/TCP/FTP packets increment the IP, the IP/TCP and the IP/TCP/FTP packet counters by 200. [0169]
  • Note that as the IP/UDP protocol is not being monitored in this example by the probe, an IP/UDP counter is not maintained. Accordingly, packets for the IP/UDP/SNMP protocol do not increment an IP/UDP counter. [0170]
  • In this example, we will assume that the user (or client program) has requested that the table be ordered by the byte counters. Since the counters work in All Count Mode, the 200 IP/TCP/FTP packets increment the IP, the IP/TCP and the IP/TCP/FTP packet counters by 200. [0171]
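  • The All Count Mode rule just described can be sketched as a short routine; the function names and the slash-separated protocol paths are our own conventions, assumed for illustration:

```python
# Sketch (not the patent's code) of All Count Mode counting: each packet
# increments the counter of every *monitored* protocol on its decode path.
# IP/UDP is absent from the set because its monitoring is turned off.
MONITORED = {"IP", "IP/TCP", "IP/TCP/FTP", "IP/TCP/HTTP", "IP/UDP/SNMP"}

def ancestors(protocol):
    """Yield the protocol and every ancestor on its path, e.g.
    IP/TCP/FTP -> IP, IP/TCP, IP/TCP/FTP."""
    parts = protocol.split("/")
    for i in range(1, len(parts) + 1):
        yield "/".join(parts[:i])

def count_all_mode(packet_protocol, counters, packets, octets):
    for proto in ancestors(packet_protocol):
        if proto in MONITORED:            # unmonitored layers are skipped
            pk, by = counters.get(proto, (0, 0))
            counters[proto] = (pk + packets, by + octets)

# Feed in the Table 1 terminal counts for the example conversation.
counters = {}
for proto, (pk, by) in {
        "IP": (50, 5000), "IP/TCP": (20, 4000), "IP/TCP/FTP": (200, 30000),
        "IP/TCP/HTTP": (10, 1000), "IP/UDP/SNMP": (120, 10000)}.items():
    count_all_mode(proto, counters, pk, by)
# counters now holds the All Count Mode totals, e.g. IP -> (400, 50000).
```

Note that the 120 IP/UDP/SNMP packets increment only IP and IP/UDP/SNMP, since IP/UDP is not monitored.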
  • The resulting alMatrixTopN (All Count Mode) table would look like this: [0172]
    TABLE 4A
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                              400  50000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                          230  35000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      200  30000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     120  10000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      10   1000
  • As this is a MatrixTopN table, the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval. [0173]
  • In order to place the alMatrixTopN (All Count Mode) table in the selected common format used by the present invention, a terminal count conversion operation is performed on the values in TABLE 4A as follows: [0174]
    TABLE 4B
    Protocol     Formula                            Packets               Bytes
    IP           IP − IP/TCP − IP/UDP/SNMP          400 − 230 − 120 = 50  50000 − 35000 − 10000 = 5000
    IP/UDP/SNMP  IP/UDP/SNMP                        = 120                 = 10000
    IP/TCP       IP/TCP − IP/TCP/FTP − IP/TCP/HTTP  230 − 200 − 10 = 20   35000 − 30000 − 1000 = 4000
    IP/TCP/FTP   IP/TCP/FTP                         = 200                 = 30000
    IP/TCP/HTTP  IP/TCP/HTTP                        = 10                  = 1000
  • After terminal count conversion, the counter values are now delta counter values expressed in terminal count mode format, giving the following table. [0175]
    TABLE 4C
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                               50   5000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                           20   4000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      200  30000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      10   1000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     120  10000
  • Since the monitored probe data is now in the desired common format, Table 4C is ready for storage in [0176] buffer 173.
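  • One way to express the terminal count conversion of Table 4B in code (a sketch under our own naming; the patent does not give an implementation) is to subtract each protocol's all-count totals from those of its nearest monitored ancestor:

```python
def nearest_monitored_parent(protocol, monitored):
    """Walk up the slash-separated path until a monitored ancestor is found.
    For IP/UDP/SNMP with IP/UDP unmonitored, this returns IP."""
    parts = protocol.split("/")
    for i in range(len(parts) - 1, 0, -1):
        parent = "/".join(parts[:i])
        if parent in monitored:
            return parent
    return None

def to_terminal_counts(all_counts):
    """Convert All Count Mode deltas to Terminal Count Mode deltas:
    each protocol's (packets, bytes) is subtracted once from its nearest
    monitored ancestor, leaving only traffic terminating at each layer."""
    terminal = dict(all_counts)
    for proto, (pk, by) in all_counts.items():
        parent = nearest_monitored_parent(proto, all_counts)
        if parent is not None:
            ppk, pby = terminal[parent]
            terminal[parent] = (ppk - pk, pby - by)
    return terminal

table_4a = {"IP": (400, 50000), "IP/TCP": (230, 35000),
            "IP/TCP/FTP": (200, 30000), "IP/TCP/HTTP": (10, 1000),
            "IP/UDP/SNMP": (120, 10000)}
table_4c = to_terminal_counts(table_4a)
# table_4c now holds terminal-mode deltas: IP -> (50, 5000), ...
```

Because IP/TCP's all-count value already includes its FTP and HTTP children, subtracting it from IP removes their contribution in one step, matching the Table 4B formulas.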
  • 3. alMatrix Table Processing Example [0177]
  • The alMatrix table monitors conversations at all the known application-layer protocols, and stores them, using absolute counters, in a table which is ordered by network-layer protocol, source and destination addresses, and application-layer protocol. The counters in the alMatrix table work in All Count Mode, and so a monitored packet increments the counters for all the protocol layers used in the packet. [0178]
  • Since the alMatrix table works in All Count Mode, the monitored protocols increment the counters illustrated in Table 3. [0179]
  • As a result, the alMatrix table would look like this: [0180]
    TABLE 5A
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                             1200  150000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                          690  1000000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      600  90000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      30  3000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     360  30000
  • Assuming the previously retrieved alMatrix Table from the same probe, was as follows: [0181]
    TABLE 5B
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                              800  100000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                          460  965000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      400  60000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      20  2000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     240  20000
  • For the alMatrix Table 5A, the counter values are absolute values presented in all count mode. Accordingly, to place the alMatrix Table 5A into the desired common format, the counter values must be converted to delta values and all count mode values need to be converted to terminal count mode values. [0182]
  • In accordance with the present invention the first step is the generation of delta values. This is done by subtracting the counter values in the alMatrix Table 5B, which was received during the last collection operation, from the corresponding counter values found in the most recently received alMatrix Table 5A. Table 5B may be obtained from the temporary data table storage located in [0183] memory 169. The resulting table, Table 5C, which includes the delta values generated by the subtraction operation is shown below:
    TABLE 5C
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                              400  50000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                          230  35000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      200  30000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      10   1000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     120  10000
  • After delta count conversion, the values in Table 5C still need to be put into terminal count mode. Terminal count conversion involves performing the subtractions shown in Table 5D. [0184]
    TABLE 5D
    Protocol     Formula                            Packets               Bytes
    IP           IP − IP/TCP − IP/UDP/SNMP          400 − 230 − 120 = 50  50000 − 35000 − 10000 = 5000
    IP/UDP/SNMP  IP/UDP/SNMP                        = 120                 = 10000
    IP/TCP       IP/TCP − IP/TCP/FTP − IP/TCP/HTTP  230 − 200 − 10 = 20   35000 − 30000 − 1000 = 4000
    IP/TCP/FTP   IP/TCP/FTP                         = 200                 = 30000
    IP/TCP/HTTP  IP/TCP/HTTP                        = 10                  = 1000
  • The terminal count conversion operation results in the following table: [0185]
    TABLE 5E
    Network Layer Protocol  Source Address  Destination Address  Application Layer Protocol  Packets  Bytes
    IP                      123.45.67.89    98.76.54.32          IP                               50   5000
    IP                      123.45.67.89    98.76.54.32          IP/TCP                           20   4000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/FTP                      200  30000
    IP                      123.45.67.89    98.76.54.32          IP/TCP/HTTP                      10   1000
    IP                      123.45.67.89    98.76.54.32          IP/UDP/SNMP                     120  10000
  • As Table 5E is now in the common data format, i.e., with counter values expressed as delta counter values in terminal count mode, Table 5E can be stored in the [0186] buffer 173.
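  • The delta-conversion step that produced Table 5C can be sketched as a simple per-counter subtraction (an illustrative implementation of our own, not the patent's code):

```python
# Sketch of delta conversion: subtract the absolute counters retrieved in
# the previous collection (Table 5B) from the current absolute counters
# (Table 5A) to get per-interval delta counters (Table 5C).
def to_delta_counts(current, previous):
    return {proto: (pk - previous.get(proto, (0, 0))[0],
                    by - previous.get(proto, (0, 0))[1])
            for proto, (pk, by) in current.items()}

table_5a = {"IP": (1200, 150000), "IP/TCP": (690, 1000000),
            "IP/TCP/FTP": (600, 90000), "IP/TCP/HTTP": (30, 3000),
            "IP/UDP/SNMP": (360, 30000)}
table_5b = {"IP": (800, 100000), "IP/TCP": (460, 965000),
            "IP/TCP/FTP": (400, 60000), "IP/TCP/HTTP": (20, 2000),
            "IP/UDP/SNMP": (240, 20000)}
table_5c = to_delta_counts(table_5a, table_5b)
# table_5c now holds the per-interval deltas, e.g. IP -> (400, 50000).
```

The resulting deltas would then go through the same terminal count conversion shown for the alMatrixTopN (All Count Mode) example.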
  • 4. nlMatrixTopN Table Processing Example [0187]
  • The nlMatrixTopN table monitors conversations at the network-layer protocols only, and stores them, using delta counters, in a table which is ordered by the packet or byte counters (depending upon user-configuration). [0188]
  • The nlMatrixTopN table monitors only network-layer protocols, and so it considers all of the packets given in the exemplary conversation to be IP packets. The stored table would be as follows: [0189]
    TABLE 6
    Protocol  Source Address  Destination Address  Packets  Bytes
    IP        123.45.67.89    98.76.54.32              400  50000
  • Note that as this is a MatrixTopN table, the packet and byte counter values are the total number of packets and bytes for the conversation in the monitored time interval. Since the counter values in the nlMatrixTopN table are already delta counter values, no conversion processing needs to be performed on the nlMatrixTopN table and it is ready for storage in the [0190] buffer 173 as retrieved.
  • 5. nlMatrix Table Processing Example [0191]
  • The nlMatrix table monitors conversations at the network-layer protocols only. It stores the counted byte and packet information, using absolute count values, in a table which is ordered by network-layer protocol and source and destination addresses. [0192]
  • As the nlMatrix table monitors only network-layer protocols, it will consider all of the packets given in the example conversation to be IP packets, and so the stored table would look like this: [0193]
    TABLE 7A
    Protocol  Source Address  Destination Address  Packets  Bytes
    IP        123.45.67.89    98.76.54.32             1200  150000
  • Assuming the most recent previously retrieved nlMatrix Table from the same probe was as follows: [0194]
    TABLE 7B
    Protocol  Source Address  Destination Address  Packets  Bytes
    IP        123.45.67.89    98.76.54.32              800  100000
  • In order to place the nlMatrix table in the desired common format, a delta conversion operation is performed. This involves subtracting the counter values in the previously received Table 7B from the corresponding counter values in the current nlMatrix Table 7A to generate a table as follows: [0195]
    TABLE 7C
    Protocol  Source Address  Destination Address  Packets  Bytes
    IP        123.45.67.89    98.76.54.32              400  50000
  • Since Table 7C is now in the desired common format with delta counter values, it is ready for storage in the [0196] buffer 173.
  • As the result of the data collection and conversion routines discussed above, the data placed in the [0197] buffer 173 is in the common format rendering it suitable for use, e.g., in generating a network traffic database.
  • FIG. 7 illustrates how the [0198] network traffic data 701, 703, 705, from the first through third probes respectively, placed in the buffer 173, can be used to generate a network traffic database 707. In accordance with one embodiment of the present invention, the network traffic data 701, 703, 705 is processed by a database generation and maintenance routine 700 to generate a database 707. Unlike prior art databases which do not include data sets of different resolutions which overlap in time, the database 707 includes multiple resolutions of the same data in parallel, e.g., in hourly, 6 hourly, daily, and weekly data sets. These data sets are stored in corresponding FIFO data structures 709, 711, 713, 715, respectively. The database 707 may be stored on the data storage device 158.
  • The parallel, multi-resolution storage method of the present invention provides a relatively simple means of managing a network traffic database and limiting its size without the need for an aging process and the double buffering often associated with such processes. [0199]
  • While the amount of processing required to create and maintain multiple parallel sets of data in different resolutions may be slightly greater than in systems which do not use parallel data sets, the processing associated with creating such a database is more constant than in systems which involve aging processes. This is because the periodic load associated with the aging process is avoided when using the method of the present invention. A further benefit of this scheme is that the different resolutions of data are readily available, which makes switching between different data resolutions fast and efficient when displaying data and/or responding to administrator queries. [0200]
  • In the exemplary embodiment of FIG. 7, the disk space allocated to the [0201] database 707 is divided into 4 parts and assigned to the following fixed resolutions: hourly, 6-hourly, daily and weekly. As discussed above each row of a data table 701, 703, 705 corresponds to a monitored conversation and includes byte and packet count information. Time stamp information indicating the time the conversation was monitored is also included in the tables 701, 703, 705. As each row of data is read in from one of the tables 701, 703, 705, it is used to create or update an entry in each of the parallel data sets 709, 711, 713, 715. Within the generated parallel data sets, each record is used to represent a conversation between two hosts and the records are time aligned depending on the resolution: hourly on the hour; 6-hourly at 0600, 1200, 1800 and 2400 hrs; daily at 2400 hrs; and weekly at 2400 hrs on Saturday. Database records for the same time interval can be considered as being in the same “bucket”. Thus, a bucket is a set of data storage records for storing network traffic data corresponding to the preselected unit of time used for the resolution to which the bucket corresponds.
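  • The time alignment rules above (hourly on the hour; 6-hourly at 0600, 1200, 1800 and 2400 hrs; daily at 2400 hrs; weekly at 2400 hrs on Saturday) can be sketched as a bucket-start computation; the function name and the use of Python datetimes are our own assumptions:

```python
from datetime import datetime, timedelta

def bucket_start(ts, resolution):
    """Snap a record's time stamp to the start of the bucket it falls in
    for the given resolution."""
    if resolution == "hourly":
        return ts.replace(minute=0, second=0, microsecond=0)
    if resolution == "6-hourly":      # boundaries at 0600, 1200, 1800, 2400
        return ts.replace(hour=(ts.hour // 6) * 6, minute=0,
                          second=0, microsecond=0)
    if resolution == "daily":         # boundary at 2400 hrs
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if resolution == "weekly":        # boundary at 2400 hrs Saturday,
        # i.e. each weekly bucket starts on Sunday (weekday(): Mon=0..Sun=6)
        day = ts.replace(hour=0, minute=0, second=0, microsecond=0)
        return day - timedelta(days=(ts.weekday() + 1) % 7)
    raise ValueError(resolution)

ts = datetime(2001, 4, 3, 14, 35)     # a Tuesday afternoon
# hourly bucket starts 14:00, 6-hourly 12:00, daily at midnight Tuesday,
# and the weekly bucket starts on Sunday, April 1st.
```

Because the same time stamp maps deterministically to one bucket per resolution, late-arriving probe data still lands in the correct bucket.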
  • FIG. 8 illustrates the database generation and [0202] maintenance routine 700 of the present invention in greater detail. The illustrated routine 700 may be one of the parallel data set generation routines 166 stored in the management station's memory 162.
  • The routine [0203] 700 begins in step 702 wherein the database generation routine is started, e.g., by having the CPU 154 load and begin executing the routine 700. In embodiments where the routine 700 is implemented using parallel processing, it may be loaded into, and executed by the CPU 155 at the same time it is being loaded and executed by the CPU 154. In a parallel processing embodiment, the different CPUs 154, 155 are normally responsible for creating and maintaining, in parallel, data sets of different resolutions. For example, CPU 154 may be responsible for creating and maintaining the hourly and 6 hour network traffic data sets while the CPU 155 might be responsible for creating the daily and weekly network traffic data sets.
  • For the sake of simplicity the following discussion will assume that the routine [0204] 700 is executed by the processor 154. However, it is to be understood that, as discussed above, multi-processor implementations are possible.
  • Operation proceeds from [0205] step 702 to step 704 wherein the CPU 154 creates hourly, 6 hour, daily and weekly FIFO data structures, one for each of the different data set resolutions to be supported. Step 704 may involve, e.g., allocating data storage records to serve as buckets. For example, the hourly FIFO would comprise a plurality of buckets each corresponding to a one hour period of time. Each bucket may include several records or entries each corresponding to a different conversation/protocol pair. The daily FIFO would comprise a plurality of buckets each corresponding to a different one day period of time. As will be discussed below, as time progresses, each bucket in the FIFO is filled. When all the records in the FIFO are filled, the records in the oldest buckets are overwritten, thereby ensuring that the process can continue after the available storage space is used.
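  • A minimal sketch of the FIFO structures created in step 704 can use a bounded deque per resolution, so that appending a new bucket to a full FIFO drops the oldest one; the bucket capacities below are illustrative, not taken from the patent:

```python
from collections import deque

# One fixed-size FIFO of buckets per resolution. When a FIFO is full,
# appending a new bucket silently drops the oldest, mirroring the
# overwrite-oldest behaviour described for step 704.
FIFO_BUCKETS = {
    "hourly":   deque(maxlen=36),   # e.g. 1.5 days of hourly buckets
    "6-hourly": deque(maxlen=18),
    "daily":    deque(maxlen=9),
    "weekly":   deque(maxlen=7),
}

def new_bucket(start_time):
    # A bucket groups the records for one time interval; records map
    # (source, destination, protocol) -> [packets, bytes].
    return {"start": start_time, "records": {}}

fifo = FIFO_BUCKETS["hourly"]
for hour in range(40):              # simulate 40 hours of collection
    fifo.append(new_bucket(hour))
# Only the newest 36 buckets survive; hours 0-3 have been overwritten.
```

This gives the fixed storage footprint per resolution without any separate aging process.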
  • Once the FIFO data structures are created in [0206] step 704, operation proceeds to step 706. In step 706, the buffer 173 into which collected network traffic data is placed, is monitored for network traffic data. Upon detecting that network traffic data has been placed into buffer 173, operation proceeds to step 708. In step 708 the time stamps associated with the buffered data are examined. In step 710, the buffered network traffic data is assigned to be included in individual buckets in the FIFO structures as a function of the examined time stamps. Thus, data is placed in buckets, e.g., sets or groups of records corresponding to the basic unit of time supported, as a function of time stamps indicating the time period in which the network traffic was monitored. Accordingly, data collection and reporting delays encountered by the management station 150 do not negatively impact the accuracy of the created network traffic database.
  • [0207] Steps 712, 714, 716, 718 which are illustrated in parallel represent the updating of records included in the hourly, six hourly, daily and weekly FIFO data structures, respectively, using the same set of network traffic data. Steps 712, 714, 716, 718 are illustrated in parallel to show that they may be performed in parallel by one or more CPUs 154, 155.
  • Operation proceeds from [0208] steps 712, 714, 716 and 718 to step 720 wherein the data obtained from the buffer 173, used to update the hourly, six hourly, daily and weekly data records, is deleted. Operation then returns to monitoring step 706 so that the database updating process will be performed on a continuous basis until, e.g., the management station 150 is powered off or reset.
  • As a simple example of the generation of the hourly and 6 hourly data sets, consider hosts A through F illustrated in FIG. 2 as [0209] computers 21, 22, 23, 31, 32, 33, respectively. The boxes in FIG. 9 represent database records created from traffic between hosts A through F. Dashed lines are used to indicate different hourly time periods 901, 902, 903, 904, 905, 906 and a single 6 hourly time period 910. In FIG. 9, the range of numbers at the top of each time period is used to indicate the specific hour or hours included in the time period, the first and second letters in each box indicate the two hosts involved in the monitored conversation. In addition, the number in the box indicates the number of packets exchanged between the indicated hosts during the indicated time period.
  • The first hourly time period, beginning at hour 0 and ending at [0210] hour 1, corresponds to bucket 901. Two conversations were detected during this first hourly time period. A first conversation between devices A and B which involved 10 packets and a second conversation between devices A and E which involved 6 packets. The number of bytes, in addition to the number of packets, may also be stored in each record of the database 707.
  • Note that over a 6-hour period, the hourly [0211] resolution data set 920 has six “buckets”, 901 through 906, corresponding to first through sixth hourly time periods and the 6-hourly data set has one bucket 910 corresponding to the single 6 hour time period. Note also that the 6-hour bucket 910 has more conversations and thus more entries than any one of the individual hourly buckets 901 through 906. However, the records in the six hour data set 922 are of a lower resolution than the hourly data set 920, since they do not include detailed hourly conversation data.
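  • The relationship between the hourly and 6-hourly data sets of FIG. 9 can be illustrated as follows; the first bucket uses the A-B and A-E packet counts given above, while the later hourly buckets are made-up values for illustration only:

```python
from collections import Counter

# Each hourly bucket maps a (host, host) conversation to a packet count.
hourly_buckets = [
    Counter({("A", "B"): 10, ("A", "E"): 6}),  # hour 0-1 (bucket 901)
    Counter({("B", "C"): 4}),                  # hour 1-2 (illustrative)
    Counter({("A", "B"): 7, ("D", "F"): 3}),   # hour 2-3 (illustrative)
]

# The same traffic records also update the enclosing 6-hourly bucket, so
# it accumulates every conversation seen during its six hours.
six_hourly = Counter()
for bucket in hourly_buckets:
    six_hourly.update(bucket)
# six_hourly holds per-conversation totals for the whole period, e.g.
# ("A", "B") -> 17, but without the per-hour detail.
```

As the text notes, the 6-hourly bucket ends up with more conversations than any single hourly bucket, at the cost of losing the hourly breakdown.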
  • In accordance with one embodiment of the present invention, read access is limited to complete data records. Thus, data in a given time period may not be accessed until the record is fully complete, i.e., all the data from the system probes for the given time period has been included in the data record. By restricting access to completed data records, the presentation of incomplete data counts to an application or system user is avoided. In other embodiments, up to the minute data records are made available to the user. In such embodiments, a user may review, e.g., the most recent data in the weekly database despite the fact that the collection of the data for the current week is not yet complete. [0212]
  • As discussed above, as the data at a resolution fills the part of the storage space assigned to that particular resolution, the data structure used to store the data records at the particular resolution operates as a FIFO data structure. Accordingly, the oldest database records corresponding to the data set of the particular resolution will be reused to store new data. The hourly data set tends to be the first resolution to hit the database size limit when the available storage space for the [0213] database 707 is equally divided amongst the four supported resolutions since it grows the fastest. However, given limited available storage space, all the resolutions will reach their limit given sufficient operating time. FIG. 10 illustrates an exemplary steady state condition that may be reached after 7 weeks of operating one exemplary system 200. Note that in the FIG. 10 example, the database includes enough storage space to store hourly information for 1.5 days, 6-hourly information for 4.5 days, daily information for 9 days and weekly information for 7 weeks assuming the use of the same amount of storage for each of the different resolutions. Note that the actual time periods for a given system will depend on the number of conversations which are monitored and the actual amount of storage space allocated for the database 707.

Claims (29)

What is claimed is:
1. A method of processing and storing data in a computer system including processor circuitry, and a data storage device, the method comprising the steps of:
storing first and second sets of records on the data storage device, the first and second sets of records being of different data resolutions and corresponding to overlapping periods of time;
operating the processor circuitry to receive data collected over a period of time; and
operating the processor circuitry to update at least one record in each of the stored first and second sets of records with the received data.
2. The method of claim 1,
wherein the first and second sets of records are stored in separate first-in, first-out data structures on the data storage device; and
wherein the step of operating the processor circuitry to update at least one record in each of the stored first and second sets of records, includes the step of replacing a previous record included in each of the first and second data structures.
3. The method of claim 2, further comprising the step of:
allocating fixed amounts of storage space on the data storage device for storing each one of the first and second first-in, first-out data structures used to store the first and second sets of records.
4. The method of claim 2, wherein the first set of records include hourly records and the second set of records includes daily records.
5. The method of claim 2, further comprising the step of:
periodically collecting network traffic data;
storing the collected network traffic data in a buffer; and
operating the processor circuitry to retrieve network traffic data from the buffer, the retrieved network traffic data being received by the processor circuitry.
6. The method of claim 5,
wherein the network traffic data stored in the buffer includes time stamp information indicating the period of time in which the network traffic data was collected; and
wherein the step of operating the processor circuitry to update at least one record in each of the stored first and second sets of records includes the step of:
examining at least one time stamp included in the buffered network traffic data.
7. The method of claim 5, wherein the collected network traffic data includes byte and packet count information associated with each of a plurality of monitored conversations between devices included in the computer system, the step of operating the processor circuitry to update at least one record in each of the stored first and second sets of records including the steps of:
updating a record corresponding to a first conversation in the first set of records; and
updating a record corresponding to the first conversation in the second set of records.
8. The method of claim 5,
wherein the processor circuitry includes first and second central processing units, and
wherein the step of operating the processor circuitry to update at least one record in each of the stored first and second sets of records includes the step of operating the first processor to update the first set of records while operating the second processor to update the second set of records.
9. The method of claim 1,
wherein the processor circuitry includes first and second central processing units, and
wherein the step of operating the processor circuitry to update at least one record in each of the stored first and second sets of records includes the step of operating the first processor to update the first set of records while operating the second processor to update the second set of records.
10. The method of claim 5, wherein the computer system further includes a display device, the method further comprising the step of:
displaying data corresponding to overlapping periods of time at different resolutions on the display device.
11. The method of claim 1, further comprising the step of:
allocating storage space for storing the first and second sets of records in first and second first-in, first-out data structures, respectively.
12. A method of collecting and processing network traffic data, comprising the steps of:
periodically collecting network traffic data from a data probe,
generating a database of network traffic information from the collected network traffic data, the database comprising a plurality of network traffic data sets of differing degrees of data resolution corresponding to overlapping network traffic time periods.
13. The method of claim 12, wherein the differing degrees of resolution correspond to measurement time periods of different duration.
14. The method of claim 12,
wherein the collected network traffic data includes a plurality of traffic data counter values; and
wherein each traffic data counter value in the collected network traffic data includes information corresponding to an individual monitored conversation, the step of generating a database including the step of generating from the information on each different monitored conversation, a different record in each set of the plurality of network traffic data sets.
15. The method of claim 14, further comprising the step of storing each of the plurality of network traffic data sets in a different first-in, first-out data structure.
16. The method of claim 15, wherein a limited amount of data storage space is used for each of the different first-in, first-out data structures, the method further comprising the step of:
overwriting the oldest data records in the first-in, first-out data structure used to store one of the network traffic data sets, when the limited amount of data storage space used for said first-in, first-out data structure is filled with records.
17. A system for monitoring network traffic data, comprising:
a plurality of network traffic data probes for collecting network traffic information;
processor circuitry coupled to the network traffic probes for receiving data therefrom; and
a data storage device for storing a network traffic database generated by the processor circuitry using data collected by the network traffic data probes, the data storage device including:
a plurality of data structures, each one of the plurality of data structures including network traffic data:
a) stored at a different resolution than the resolution at which network traffic data is stored in the other ones of the plurality of data structures; and
b) corresponding to a period of time which overlaps the period of time for which network traffic data is stored in the other ones of the plurality of data structures.
18. The system of claim 17, wherein each of the plurality of data structures is a first-in, first-out data structure.
19. The system of claim 18, wherein each one of the plurality of data structures includes a plurality of data records, each data record corresponding to a monitored network conversation.
20. The system of claim 18, wherein data records are arranged within each individual data structure as a function of the time the conversation to which the record corresponds was monitored.
21. The system of claim 20, wherein records which were monitored during the same time interval are grouped together within each individual data structure.
22. The system of claim 21, further comprising:
means for modifying at least one network traffic data record included in each one of the plurality of data structures to reflect collected information about an individual network conversation.
23. The system of claim 18, further comprising:
means for modifying at least one network traffic data record included in each one of the plurality of data structures to reflect collected information about an individual network conversation.
24. The system of claim 18, wherein the processor circuitry includes a plurality of separate central processing units which operate in parallel.
25. The system of claim 24, wherein each one of the plurality of data structures includes a plurality of data records, each data record corresponding to a monitored network conversation.
26. The system of claim 24, wherein data records are arranged within each individual data structure as a function of the time the conversation to which the record corresponds was monitored.
27. The system of claim 26, wherein records which were monitored during the same time interval are grouped together within each individual data structure.
28. The system of claim 27, further comprising:
means for modifying at least one network traffic data record included in each one of the plurality of data structures to reflect collected information about an individual network conversation.
29. The system of claim 24, further comprising:
means for modifying at least one network traffic data record included in each one of the plurality of data structures to reflect collected information about an individual network conversation.
US09/823,306 1998-05-28 2001-04-02 Methods and apparatus for monitoring, collecting, storing, processing and using network traffic data of overlapping time periods Abandoned US20030069952A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/823,306 US20030069952A1 (en) 1998-05-28 2001-04-02 Methods and apparatus for monitoring, collecting, storing, processing and using network traffic data of overlapping time periods

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB9811416A GB2337903B (en) 1998-05-28 1998-05-28 Methods and apparatus for collecting storing processing and using network traffic data
GB9811416.8 1998-05-28
US09/131,717 US6279037B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data
US09/823,306 US20030069952A1 (en) 1998-05-28 2001-04-02 Methods and apparatus for monitoring, collecting, storing, processing and using network traffic data of overlapping time periods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/131,717 Continuation US6279037B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data

Publications (1)

Publication Number Publication Date
US20030069952A1 true US20030069952A1 (en) 2003-04-10

Family

ID=10832808

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/131,717 Expired - Lifetime US6279037B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data
US09/131,725 Expired - Lifetime US6327620B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data
US09/823,306 Abandoned US20030069952A1 (en) 1998-05-28 2001-04-02 Methods and apparatus for monitoring, collecting, storing, processing and using network traffic data of overlapping time periods

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/131,717 Expired - Lifetime US6279037B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data
US09/131,725 Expired - Lifetime US6327620B1 (en) 1998-05-28 1998-08-10 Methods and apparatus for collecting, storing, processing and using network traffic data

Country Status (2)

Country Link
US (3) US6279037B1 (en)
GB (1) GB2337903B (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020013837A1 (en) * 1996-07-18 2002-01-31 Reuven Battat Network management system using virtual reality techniques to display and simulate navigation to network components
US20020147809A1 (en) * 2000-10-17 2002-10-10 Anders Vinberg Method and apparatus for selectively displaying layered network diagrams
US20030018771A1 (en) * 1997-07-15 2003-01-23 Computer Associates Think, Inc. Method and apparatus for generating and recognizing speech as a user interface element in systems and network management
US20030023722A1 (en) * 1997-07-15 2003-01-30 Computer Associates Think, Inc. Method and apparatus for filtering messages based on context
US20030023721A1 (en) * 1997-07-15 2003-01-30 Computer Associates Think, Inc. Method and apparatus for generating context-descriptive messages
US20030097440A1 (en) * 2001-11-16 2003-05-22 Alcatel Adaptive data acquisition for a network or services management system
US20030149786A1 (en) * 2002-02-06 2003-08-07 Mark Duffy Efficient counter retrieval
US20030220769A1 (en) * 2002-05-23 2003-11-27 Alcatel Device for and a method of monitoring service data for automated traffic engineering in a communications network
US20040199791A1 (en) * 2002-11-04 2004-10-07 Poletto Massimiliano Antonio Connection table for intrusion detection
US20040199793A1 (en) * 2002-11-04 2004-10-07 Benjamin Wilken Connection based denial of service detection
US20040215975A1 (en) * 2002-11-04 2004-10-28 Dudfield Anne Elizabeth Detection of unauthorized access in a network
US20040221190A1 (en) * 2002-11-04 2004-11-04 Roletto Massimiliano Antonio Aggregator for connection based anomaly detection
US20040220984A1 (en) * 2002-11-04 2004-11-04 Dudfield Anne Elizabeth Connection based denial of service detection
US20040236866A1 (en) * 2003-05-21 2004-11-25 Diego Dugatkin Automated characterization of network traffic
US20050021715A1 (en) * 2003-05-21 2005-01-27 Diego Dugatkin Automated capturing and characterization of network traffic using feedback
US20050235058A1 (en) * 2003-10-10 2005-10-20 Phil Rackus Multi-network monitoring architecture
US20060143239A1 (en) * 1996-07-18 2006-06-29 Computer Associates International, Inc. Method and apparatus for maintaining data integrity across distributed computer systems
US20060182036A1 (en) * 2005-02-16 2006-08-17 Fujitsu Limited Fault detection device
US20080010888A1 (en) * 2004-11-12 2008-01-17 Taser International, Inc. Systems and methods for electronic weaponry having audio and/or video recording capability
US7342581B2 (en) 1996-07-18 2008-03-11 Computer Associates Think, Inc. Method and apparatus for displaying 3-D state indicators
US20080243858A1 (en) * 2006-08-01 2008-10-02 Latitude Broadband, Inc. Design and Methods for a Distributed Database, Distributed Processing Network Management System
US20100008363A1 (en) * 2008-07-10 2010-01-14 Cheng Tien Ee Methods and apparatus to distribute network ip traffic
US20100260204A1 (en) * 2009-04-08 2010-10-14 Gerald Pepper Traffic Receiver Using Parallel Capture Engines
US20110069626A1 (en) * 2009-09-23 2011-03-24 Ethan Sun Network testing providing for concurrent real-time ingress and egress viewing of network traffic data
US7991827B1 (en) * 2002-11-13 2011-08-02 Mcafee, Inc. Network analysis system and method utilizing collected metadata
GB2483111A (en) * 2010-08-27 2012-02-29 Zeus Technology Ltd Monitoring connections to servers and memory management
US8180027B1 (en) * 2006-11-22 2012-05-15 Securus Technologies, Inc. Score-driven management of recordings
US8504879B2 (en) * 2002-11-04 2013-08-06 Riverbed Technology, Inc. Connection based anomaly detection
US8699484B2 (en) 2010-05-24 2014-04-15 At&T Intellectual Property I, L.P. Methods and apparatus to route packets in a network
US8767565B2 (en) 2008-10-17 2014-07-01 Ixia Flexible network test apparatus
US20180331922A1 (en) * 2017-05-12 2018-11-15 Pragati Kumar Dhingra Methods and systems for time-based binning of network traffic
US10296973B2 (en) * 2014-07-23 2019-05-21 Fortinet, Inc. Financial information exchange (FIX) protocol based load balancing
US20210288993A1 (en) * 2017-05-18 2021-09-16 Palo Alto Networks, Inc. Correlation-driven threat assessment and remediation
US11210236B2 (en) 2019-10-22 2021-12-28 EMC IP Holding Company LLC Managing global counters using local delta counters
US11323354B1 (en) 2020-10-09 2022-05-03 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using switch emulation
US11388081B1 (en) 2021-03-30 2022-07-12 Keysight Technologies, Inc. Methods, systems, and computer readable media for impairment testing using an impairment device
US11398968B2 (en) 2018-07-17 2022-07-26 Keysight Technologies, Inc. Methods, systems, and computer readable media for testing virtualized network functions and related infrastructure
US11405261B1 (en) 2020-09-10 2022-08-02 Juniper Networks, Inc. Optimizing bandwidth utilization when exporting telemetry data from a network device
US11405302B1 (en) 2021-03-11 2022-08-02 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using configurable test infrastructure
US11483228B2 (en) 2021-01-29 2022-10-25 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using an emulated data center environment
US11483227B2 (en) 2020-10-13 2022-10-25 Keysight Technologies, Inc. Methods, systems and computer readable media for active queue management
US11729087B2 (en) 2021-12-03 2023-08-15 Keysight Technologies, Inc. Methods, systems, and computer readable media for providing adaptive background test traffic in a test environment
US11765068B2 (en) 2021-12-22 2023-09-19 Keysight Technologies, Inc. Methods, systems, and computer readable media for programmable data plane processor based traffic impairment

Families Citing this family (142)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3328179B2 (en) * 1997-11-26 2002-09-24 日本電気株式会社 Network traffic monitoring system
US6453346B1 (en) * 1998-07-17 2002-09-17 Proactivenet, Inc. Method and apparatus for intelligent storage and reduction of network information
US6430611B1 (en) * 1998-08-25 2002-08-06 Highground Systems, Inc. Method and apparatus for providing data storage management
JP3935276B2 (en) * 1998-10-21 2007-06-20 キヤノン株式会社 Network device management method, apparatus, storage medium, and transmission apparatus
JP2002528819A (en) 1998-10-28 2002-09-03 バーティカルワン コーポレイション Automatic aggregation device and method, device and method for delivering electronic personal information or data, and transaction involving electronic personal information or data
US7085997B1 (en) 1998-12-08 2006-08-01 Yodlee.Com Network-based bookmark management and web-summary system
US8069407B1 (en) 1998-12-08 2011-11-29 Yodlee.Com, Inc. Method and apparatus for detecting changes in websites and reporting results to web developers for navigation template repair purposes
US7672879B1 (en) 1998-12-08 2010-03-02 Yodlee.Com, Inc. Interactive activity interface for managing personal data and performing transactions over a data packet network
FR2790348B1 (en) * 1999-02-26 2001-05-25 Thierry Grenot SYSTEM AND METHOD FOR MEASURING HANDOVER TIMES AND LOSS RATES IN HIGH-SPEED TELECOMMUNICATIONS NETWORKS
US6853623B2 (en) * 1999-03-05 2005-02-08 Cisco Technology, Inc. Remote monitoring of switch network
US6687750B1 (en) * 1999-04-14 2004-02-03 Cisco Technology, Inc. Network traffic visualization
US7752535B2 (en) 1999-06-01 2010-07-06 Yodlec.com, Inc. Categorization of summarized information
US6836803B1 (en) * 1999-11-30 2004-12-28 Accenture Llp Operations architecture to implement a local service activation management system
US6813278B1 (en) 1999-11-30 2004-11-02 Accenture Llp Process for submitting and handling a service request in a local service management system
US6961778B2 (en) * 1999-11-30 2005-11-01 Accenture Llp Management interface between a core telecommunication system and a local service provider
US6732167B1 (en) 1999-11-30 2004-05-04 Accenture L.L.P. Service request processing in a local service activation management environment
US7401030B1 (en) * 1999-12-30 2008-07-15 Pitney Bowes Inc. Method and system for tracking disposition status of an item to be delivered within an organization
US6779120B1 (en) * 2000-01-07 2004-08-17 Securify, Inc. Declarative language for specifying a security policy
US8074256B2 (en) * 2000-01-07 2011-12-06 Mcafee, Inc. Pdstudio design system and method
US6775696B1 (en) * 2000-03-23 2004-08-10 Urisys Corporation Systems and methods for collecting and providing call traffic information to end-users
US7120934B2 (en) * 2000-03-30 2006-10-10 Ishikawa Mark M System, method and apparatus for detecting, identifying and responding to fraudulent requests on a network
US6990616B1 (en) 2000-04-24 2006-01-24 Attune Networks Ltd. Analysis of network performance
US6889255B1 (en) * 2000-04-28 2005-05-03 Microsoft Corporation System and method for caching data in a client management tool
US6792455B1 (en) * 2000-04-28 2004-09-14 Microsoft Corporation System and method for implementing polling agents in a client management tool
US6958977B1 (en) * 2000-06-06 2005-10-25 Viola Networks Ltd Network packet tracking
US7917647B2 (en) * 2000-06-16 2011-03-29 Mcafee, Inc. Method and apparatus for rate limiting
US20020093527A1 (en) * 2000-06-16 2002-07-18 Sherlock Kieran G. User interface for a security policy system and method
US20040073617A1 (en) 2000-06-19 2004-04-15 Milliken Walter Clark Hash-based systems and methods for detecting and preventing transmission of unwanted e-mail
US6973491B1 (en) * 2000-08-09 2005-12-06 Sun Microsystems, Inc. System and method for monitoring and managing system assets and asset configurations
US6823381B1 (en) * 2000-08-17 2004-11-23 Trendium, Inc. Methods, systems and computer program products for determining a point of loss of data on a communication network
US7287072B1 (en) * 2000-10-17 2007-10-23 Sprint Communications Company L.P. Remote monitoring information management
US7562134B1 (en) * 2000-10-25 2009-07-14 At&T Intellectual Property I, L.P. Network traffic analyzer
US6738355B1 (en) 2000-11-01 2004-05-18 Agilent Technologies, Inc. Synchronization method for multi-probe communications network monitoring
JP4165017B2 (en) * 2001-02-06 2008-10-15 沖電気工業株式会社 Traffic management method and traffic management apparatus
WO2002086748A1 (en) * 2001-04-18 2002-10-31 Distributed Computing, Inc. Method and apparatus for testing transaction capacity of site on a global communication network
US7403987B1 (en) 2001-06-29 2008-07-22 Symantec Operating Corporation Transactional SAN management
US7315894B2 (en) * 2001-07-17 2008-01-01 Mcafee, Inc. Network data retrieval and filter systems and methods
US20030033403A1 (en) * 2001-07-31 2003-02-13 Rhodes N. Lee Network usage analysis system having dynamic statistical data distribution system and method
US7124183B2 (en) * 2001-09-26 2006-10-17 Bell Security Solutions Inc. Method and apparatus for secure distributed managed network information services with redundancy
US6757727B1 (en) * 2001-09-28 2004-06-29 Networks Associates Technology, Inc. Top-down network analysis system and method with adaptive filtering capabilities
US7058649B2 (en) * 2001-09-28 2006-06-06 Intel Corporation Automated presentation layer content management system
US6789117B1 (en) * 2001-12-21 2004-09-07 Networks Associates Technology, Inc. Enterprise network analyzer host controller/agent interface system and method
US6714513B1 (en) * 2001-12-21 2004-03-30 Networks Associates Technology, Inc. Enterprise network analyzer agent system and method
US7016948B1 (en) * 2001-12-21 2006-03-21 Mcafee, Inc. Method and apparatus for detailed protocol analysis of frames captured in an IEEE 802.11 (b) wireless LAN
US7154857B1 (en) 2001-12-21 2006-12-26 Mcafee, Inc. Enterprise network analyzer zone controller system and method
US6801940B1 (en) * 2002-01-10 2004-10-05 Networks Associates Technology, Inc. Application performance monitoring expert
US7299277B1 (en) * 2002-01-10 2007-11-20 Network General Technology Media module apparatus and method for use in a network monitoring environment
US7523198B2 (en) * 2002-01-25 2009-04-21 Architecture Technology Corporation Integrated testing approach for publish/subscribe network systems
US6760845B1 (en) * 2002-02-08 2004-07-06 Networks Associates Technology, Inc. Capture file format system and method for a network analyzer
US7599293B1 (en) * 2002-04-25 2009-10-06 Lawrence Michael Bain System and method for network traffic and I/O transaction monitoring of a high speed communications network
US7194538B1 (en) 2002-06-04 2007-03-20 Veritas Operating Corporation Storage area network (SAN) management system for discovering SAN components using a SAN management server
US7886031B1 (en) 2002-06-04 2011-02-08 Symantec Operating Corporation SAN configuration utility
US7711751B2 (en) * 2002-06-13 2010-05-04 Netscout Systems, Inc. Real-time network performance monitoring system and related methods
US20030233453A1 (en) * 2002-06-18 2003-12-18 Institute For Information Industry Topology probing method for mobile IP system
AU2003250959A1 (en) * 2002-07-18 2004-02-09 Vega Grieshaber Kg Bus station with an integrated bus monitor function
US8019849B1 (en) 2002-09-13 2011-09-13 Symantec Operating Corporation Server-side storage area network management interface
US7401338B1 (en) 2002-09-27 2008-07-15 Symantec Operating Corporation System and method for an access layer application programming interface for managing heterogeneous components of a storage area network
US7403998B2 (en) * 2003-04-10 2008-07-22 International Business Machines Corporation Estimating network management bandwidth
US7373416B2 (en) * 2003-04-24 2008-05-13 Akamai Technologies, Inc. Method and system for constraining server usage in a distributed network
US7331014B2 (en) * 2003-05-16 2008-02-12 Microsoft Corporation Declarative mechanism for defining a hierarchy of objects
KR100561628B1 (en) * 2003-11-18 2006-03-20 한국전자통신연구원 Method for detecting abnormal traffic in network level using statistical analysis
US7594011B2 (en) * 2004-02-10 2009-09-22 Narus, Inc. Network traffic monitoring for search popularity analysis
GB2418795A (en) * 2004-10-01 2006-04-05 Agilent Technologies Inc Monitoring traffic in a packet switched network
US7747733B2 (en) 2004-10-25 2010-06-29 Electro Industries/Gauge Tech Power meter having multiple ethernet ports
EP1832054B1 (en) 2004-12-23 2018-03-21 Symantec Corporation Method and apparatus for network packet capture distributed storage system
JP4317828B2 (en) * 2005-03-15 2009-08-19 富士通株式会社 Network monitoring apparatus and network monitoring method
US7525922B2 (en) * 2005-04-01 2009-04-28 Cisco Technology, Inc. Duplex mismatch testing
US7835293B2 (en) * 2005-09-13 2010-11-16 Cisco Technology, Inc. Quality of service testing of communications networks
US8213317B2 (en) * 2005-11-04 2012-07-03 Research In Motion Limited Procedure for correcting errors in radio communication, responsive to error frequency
EP1783952B1 (en) * 2005-11-04 2012-01-11 Research In Motion Limited Correction of errors in radio communication, responsive to error frequency
US7990887B2 (en) * 2006-02-22 2011-08-02 Cisco Technology, Inc. Sampling test of network performance
US20070218874A1 (en) * 2006-03-17 2007-09-20 Airdefense, Inc. Systems and Methods For Wireless Network Forensics
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US8184549B2 (en) 2006-06-30 2012-05-22 Embarq Holdings Company, LLP System and method for selecting network egress
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US8000318B2 (en) 2006-06-30 2011-08-16 Embarq Holdings Company, Llc System and method for call routing based on transmission performance of a packet network
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US7948909B2 (en) 2006-06-30 2011-05-24 Embarq Holdings Company, Llc System and method for resetting counters counting network performance information at network communications devices on a packet network
US8289965B2 (en) 2006-10-19 2012-10-16 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US8281392B2 (en) 2006-08-11 2012-10-02 Airdefense, Inc. Methods and systems for wired equivalent privacy and Wi-Fi protected access protection
US8040811B2 (en) 2006-08-22 2011-10-18 Embarq Holdings Company, Llc System and method for collecting and managing network performance information
US8228791B2 (en) 2006-08-22 2012-07-24 Embarq Holdings Company, Llc System and method for routing communications between packet networks based on intercarrier agreements
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8107366B2 (en) 2006-08-22 2012-01-31 Embarq Holdings Company, LP System and method for using centralized network performance tables to manage network communications
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US8015294B2 (en) 2006-08-22 2011-09-06 Embarq Holdings Company, LP Pin-hole firewall for communicating data packets on a packet network
US8407765B2 (en) * 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US8144586B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for controlling network bandwidth with a connection admission control engine
US8144587B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US8274905B2 (en) 2006-08-22 2012-09-25 Embarq Holdings Company, Llc System and method for displaying a graph representative of network performance over a time period
US8125897B2 (en) 2006-08-22 2012-02-28 Embarq Holdings Company Lp System and method for monitoring and optimizing network performance with user datagram protocol network performance information packets
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
US8064391B2 (en) 2006-08-22 2011-11-22 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US7940735B2 (en) 2006-08-22 2011-05-10 Embarq Holdings Company, Llc System and method for selecting an access point
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US8199653B2 (en) 2006-08-22 2012-06-12 Embarq Holdings Company, Llc System and method for communicating network performance information over a packet network
US8223655B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for provisioning resources of a packet network based on collected network performance information
US8549405B2 (en) 2006-08-22 2013-10-01 Centurylink Intellectual Property Llc System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US8098579B2 (en) 2006-08-22 2012-01-17 Embarq Holdings Company, LP System and method for adjusting the window size of a TCP packet through remote network elements
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US7843831B2 (en) 2006-08-22 2010-11-30 Embarq Holdings Company Llc System and method for routing data on a packet network
US8189468B2 (en) 2006-10-25 2012-05-29 Embarq Holdings, Company, LLC System and method for regulating messages between networks
US8102770B2 (en) 2006-08-22 2012-01-24 Embarq Holdings Company, LP System and method for monitoring and optimizing network performance with vector performance tables and engines
US8194555B2 (en) 2006-08-22 2012-06-05 Embarq Holdings Company, Llc System and method for using distributed network performance information tables to manage network communications
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US8576722B2 (en) 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US7606752B2 (en) 2006-09-07 2009-10-20 Yodlee Inc. Host exchange in bill paying services
US9391828B1 (en) * 2007-04-02 2016-07-12 Emc Corporation Storing and monitoring computed relationships between network components
US8111692B2 (en) 2007-05-31 2012-02-07 Embarq Holdings Company Llc System and method for modifying network traffic
US20090055465A1 (en) * 2007-08-22 2009-02-26 Microsoft Corporation Remote Health Monitoring and Control
US8068425B2 (en) 2008-04-09 2011-11-29 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US8261334B2 (en) 2008-04-25 2012-09-04 Yodlee Inc. System for performing web authentication of a user by proxy
US8625642B2 (en) 2008-05-23 2014-01-07 Solera Networks, Inc. Method and apparatus of network artifact indentification and extraction
US8521732B2 (en) 2008-05-23 2013-08-27 Solera Networks, Inc. Presentation of an extracted artifact based on an indexing technique
US8555359B2 (en) 2009-02-26 2013-10-08 Yodlee, Inc. System and methods for automatically accessing a web site on behalf of a client
US8666985B2 (en) 2011-03-16 2014-03-04 Solera Networks, Inc. Hardware accelerated application-based pattern matching for real time classification and recording of network traffic
WO2013016246A1 (en) 2011-07-22 2013-01-31 Mark Figura Systems and methods for network monitoring and testing using a generic data mediation platform
US10771532B2 (en) 2011-10-04 2020-09-08 Electro Industries/Gauge Tech Intelligent electronic devices, systems and methods for communicating messages over a network
US10303860B2 (en) 2011-10-04 2019-05-28 Electro Industries/Gauge Tech Security through layers in an intelligent electronic device
US10275840B2 (en) 2011-10-04 2019-04-30 Electro Industries/Gauge Tech Systems and methods for collecting, analyzing, billing, and reporting data from intelligent electronic devices
US20150356104A9 (en) * 2011-10-04 2015-12-10 Electro Industries/Gauge Tech Systems and methods for collecting, analyzing, billing, and reporting data from intelligent electronic devices
US10862784B2 (en) 2011-10-04 2020-12-08 Electro Industries/Gauge Tech Systems and methods for processing meter information in a network of intelligent electronic devices
US20130275576A1 (en) * 2012-04-11 2013-10-17 Yr20 Group, Inc. Network condition-based monitoring analysis engine
US11816465B2 (en) 2013-03-15 2023-11-14 Ei Electronics Llc Devices, systems and methods for tracking and upgrading firmware in intelligent electronic devices
US11734396B2 (en) 2014-06-17 2023-08-22 El Electronics Llc Security through layers in an intelligent electronic device
US10505818B1 (en) 2015-05-05 2019-12-10 F5 Networks, Inc. Methods for analyzing and load balancing based on server health and devices thereof
US10958435B2 (en) 2015-12-21 2021-03-23 Electro Industries/ Gauge Tech Providing security in an intelligent electronic device
US10430263B2 (en) 2016-02-01 2019-10-01 Electro Industries/Gauge Tech Devices, systems and methods for validating and upgrading firmware in intelligent electronic devices
WO2019118365A1 (en) * 2017-12-11 2019-06-20 Hispanispace, LLC Systems and methods for ingesting and processing data in a data processing environment
US11734704B2 (en) 2018-02-17 2023-08-22 Ei Electronics Llc Devices, systems and methods for the collection of meter data in a common, globally accessible, group of servers, to provide simpler configuration, collection, viewing, and analysis of the meter data
US11754997B2 (en) 2018-02-17 2023-09-12 Ei Electronics Llc Devices, systems and methods for predicting future consumption values of load(s) in power distribution systems
US11686594B2 (en) 2018-02-17 2023-06-27 Ei Electronics Llc Devices, systems and methods for a cloud-based meter management system
US10970153B2 (en) * 2018-06-17 2021-04-06 International Business Machines Corporation High-granularity historical performance snapshots
US10897402B2 (en) * 2019-01-08 2021-01-19 Hewlett Packard Enterprise Development Lp Statistics increment for multiple publishers
US11863589B2 (en) 2019-06-07 2024-01-02 Ei Electronics Llc Enterprise security in meters
JP2023018307A (en) * 2021-07-27 2023-02-08 キヤノン株式会社 Information processing device, information processing device control method, and program

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5101402A (en) * 1988-05-24 1992-03-31 Digital Equipment Corporation Apparatus and method for realtime monitoring of network sessions in a local area network
US5539659A (en) * 1993-02-22 1996-07-23 Hewlett-Packard Company Network analysis method
US5596722A (en) * 1995-04-03 1997-01-21 Motorola, Inc. Packet routing system and method for achieving uniform link usage and minimizing link load
US5606359A (en) * 1994-06-30 1997-02-25 Hewlett-Packard Company Video on demand system with multiple data sources configured to provide vcr-like services
US5634009A (en) * 1993-10-01 1997-05-27 3Com Corporation Network data collection method and apparatus
US5724263A (en) * 1993-10-21 1998-03-03 At&T Corp Automatic temporospatial pattern analysis and prediction in a telecommunications network using rule induction
US5841981A (en) * 1995-09-28 1998-11-24 Hitachi Software Engineering Co., Ltd. Network management system displaying static dependent relation information
US5870556A (en) * 1996-07-12 1999-02-09 Microsoft Corporation Monitoring a messaging link
US5887136A (en) * 1995-08-04 1999-03-23 Kabushiki Kaisha Toshiba Communication system and communication control method for the same
US5923850A (en) * 1996-06-28 1999-07-13 Sun Microsystems, Inc. Historical asset information data storage schema
US5966509A (en) * 1997-01-14 1999-10-12 Fujitsu Limited Network management device
US5968132A (en) * 1996-02-21 1999-10-19 Fujitsu Limited Image data communicating apparatus and a communication data quantity adjusting method used in an image data communication system
US6014727A (en) * 1996-12-23 2000-01-11 Apple Computer, Inc. Method and system for buffering messages in an efficient but largely undivided manner
US6321263B1 (en) * 1998-05-11 2001-11-20 International Business Machines Corporation Client-based application availability

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4775973A (en) * 1986-10-22 1988-10-04 Hewlett-Packard Company Method and apparatus for a packet-switched network communications measurement matrix display
EP0542976A4 (en) * 1991-06-07 1993-08-11 Australian And Overseas Telecommunications Corporation Limited Pcm monitor
GB2295299B (en) * 1994-11-16 1999-04-28 Network Services Inc Enterpris Enterprise network management method and apparatus
US6047321A (en) * 1996-02-23 2000-04-04 Nortel Networks Corporation Method and apparatus for monitoring a dedicated communications medium in a switched data network
US5898837A (en) * 1996-02-23 1999-04-27 Bay Networks, Inc. Method and apparatus for monitoring a dedicated communications medium in a switched data network
DE69720857T2 (en) * 1996-05-31 2004-02-05 Hewlett-Packard Co. (N.D.Ges.D.Staates Delaware), Palo Alto Systems and methods for operating a network management station
US5886643A (en) * 1996-09-17 1999-03-23 Concord Communications Incorporated Method and apparatus for discovering network topology
US6085243A (en) * 1996-12-13 2000-07-04 3Com Corporation Distributed remote management (dRMON) for networks


Cited By (67)

Publication number Priority date Publication date Assignee Title
US7680879B2 (en) 1996-07-18 2010-03-16 Computer Associates Think, Inc. Method and apparatus for maintaining data integrity across distributed computer systems
US20060143239A1 (en) * 1996-07-18 2006-06-29 Computer Associates International, Inc. Method and apparatus for maintaining data integrity across distributed computer systems
US7342581B2 (en) 1996-07-18 2008-03-11 Computer Associates Think, Inc. Method and apparatus for displaying 3-D state indicators
US20020013837A1 (en) * 1996-07-18 2002-01-31 Reuven Battat Network management system using virtual reality techniques to display and simulate navigation to network components
US8291324B2 (en) 1996-07-18 2012-10-16 Ca, Inc. Network management system using virtual reality techniques to display and simulate navigation to network components
US20030018771A1 (en) * 1997-07-15 2003-01-23 Computer Associates Think, Inc. Method and apparatus for generating and recognizing speech as a user interface element in systems and network management
US20030023722A1 (en) * 1997-07-15 2003-01-30 Computer Associates Think, Inc. Method and apparatus for filtering messages based on context
US20030023721A1 (en) * 1997-07-15 2003-01-30 Computer Associates Think, Inc. Method and apparatus for generating context-descriptive messages
US20020147809A1 (en) * 2000-10-17 2002-10-10 Anders Vinberg Method and apparatus for selectively displaying layered network diagrams
US8639795B2 (en) * 2001-11-16 2014-01-28 Alcatel Lucent Adaptive data acquisition for a network or services management system
US20030097440A1 (en) * 2001-11-16 2003-05-22 Alcatel Adaptive data acquisition for a network or services management system
US20030149786A1 (en) * 2002-02-06 2003-08-07 Mark Duffy Efficient counter retrieval
US7050946B2 (en) * 2002-05-23 2006-05-23 Alcatel Device for and a method of monitoring service data for automated traffic engineering in a communications network
US20030220769A1 (en) * 2002-05-23 2003-11-27 Alcatel Device for and a method of monitoring service data for automated traffic engineering in a communications network
US7716737B2 (en) 2002-11-04 2010-05-11 Riverbed Technology, Inc. Connection based detection of scanning attacks
US8504879B2 (en) * 2002-11-04 2013-08-06 Riverbed Technology, Inc. Connection based anomaly detection
US20040199791A1 (en) * 2002-11-04 2004-10-07 Poletto Massimiliano Antonio Connection table for intrusion detection
US8479057B2 (en) * 2002-11-04 2013-07-02 Riverbed Technology, Inc. Aggregator for connection based anomaly detection
US7827272B2 (en) * 2002-11-04 2010-11-02 Riverbed Technology, Inc. Connection table for intrusion detection
US20040199793A1 (en) * 2002-11-04 2004-10-07 Benjamin Wilken Connection based denial of service detection
US20040220984A1 (en) * 2002-11-04 2004-11-04 Dudfield Anne Elizabeth Connection based denial of service detection
US8191136B2 (en) 2002-11-04 2012-05-29 Riverbed Technology, Inc. Connection based denial of service detection
US7461404B2 (en) 2002-11-04 2008-12-02 Mazu Networks, Inc. Detection of unauthorized access in a network
US20040215975A1 (en) * 2002-11-04 2004-10-28 Dudfield Anne Elizabeth Detection of unauthorized access in a network
US20040221190A1 (en) * 2002-11-04 2004-11-04 Roletto Massimiliano Antonio Aggregator for connection based anomaly detection
US7991827B1 (en) * 2002-11-13 2011-08-02 Mcafee, Inc. Network analysis system and method utilizing collected metadata
US8631124B2 (en) 2002-11-13 2014-01-14 Mcafee, Inc. Network analysis system and method utilizing collected metadata
US7627669B2 (en) 2003-05-21 2009-12-01 Ixia Automated capturing and characterization of network traffic using feedback
US20050021715A1 (en) * 2003-05-21 2005-01-27 Diego Dugatkin Automated capturing and characterization of network traffic using feedback
US8694626B2 (en) 2003-05-21 2014-04-08 Ixia Automated characterization of network traffic
US20040236866A1 (en) * 2003-05-21 2004-11-25 Diego Dugatkin Automated characterization of network traffic
US7840664B2 (en) * 2003-05-21 2010-11-23 Ixia Automated characterization of network traffic
US20110040874A1 (en) * 2003-05-21 2011-02-17 Diego Dugatkin Automated Characterization of Network Traffic
US20050235058A1 (en) * 2003-10-10 2005-10-20 Phil Rackus Multi-network monitoring architecture
US20080010888A1 (en) * 2004-11-12 2008-01-17 Taser International, Inc. Systems and methods for electronic weaponry having audio and/or video recording capability
US7957267B2 (en) * 2005-02-16 2011-06-07 Fujitsu Limited Fault detection device
US20060182036A1 (en) * 2005-02-16 2006-08-17 Fujitsu Limited Fault detection device
US20080243858A1 (en) * 2006-08-01 2008-10-02 Latitude Broadband, Inc. Design and Methods for a Distributed Database, Distributed Processing Network Management System
US8180027B1 (en) * 2006-11-22 2012-05-15 Securus Technologies, Inc. Score-driven management of recordings
US8687638B2 (en) 2008-07-10 2014-04-01 At&T Intellectual Property I, L.P. Methods and apparatus to distribute network IP traffic
US8331369B2 (en) 2008-07-10 2012-12-11 At&T Intellectual Property I, L.P. Methods and apparatus to distribute network IP traffic
US8031627B2 (en) 2008-07-10 2011-10-04 At&T Intellectual Property I, L.P. Methods and apparatus to deploy and monitor network layer functionalities
US20100008233A1 (en) * 2008-07-10 2010-01-14 Cheng Tien Ee Methods and apparatus to deploy and monitor network layer functionalities
US20100008363A1 (en) * 2008-07-10 2010-01-14 Cheng Tien Ee Methods and apparatus to distribute network ip traffic
US8767565B2 (en) 2008-10-17 2014-07-01 Ixia Flexible network test apparatus
US7953092B2 (en) 2009-04-08 2011-05-31 Ixia Traffic receiver using parallel capture engines
US20100260204A1 (en) * 2009-04-08 2010-10-14 Gerald Pepper Traffic Receiver Using Parallel Capture Engines
US8369225B2 (en) 2009-09-23 2013-02-05 Ixia Network testing providing for concurrent real-time ingress and egress viewing of network traffic data
US20110069626A1 (en) * 2009-09-23 2011-03-24 Ethan Sun Network testing providing for concurrent real-time ingress and egress viewing of network traffic data
US8670329B2 (en) 2009-09-23 2014-03-11 Ixia Network testing providing for concurrent real-time ingress and egress viewing of network traffic data
US8699484B2 (en) 2010-05-24 2014-04-15 At&T Intellectual Property I, L.P. Methods and apparatus to route packets in a network
US8843620B2 (en) 2010-08-27 2014-09-23 Riverbed Technology, Inc. Monitoring connections
GB2483111A (en) * 2010-08-27 2012-02-29 Zeus Technology Ltd Monitoring connections to servers and memory management
US10296973B2 (en) * 2014-07-23 2019-05-21 Fortinet, Inc. Financial information exchange (FIX) protocol based load balancing
US20180331922A1 (en) * 2017-05-12 2018-11-15 Pragati Kumar Dhingra Methods and systems for time-based binning of network traffic
US10466934B2 (en) * 2017-05-12 2019-11-05 Guavus, Inc. Methods and systems for time-based binning of network traffic
US20210288993A1 (en) * 2017-05-18 2021-09-16 Palo Alto Networks, Inc. Correlation-driven threat assessment and remediation
US11398968B2 (en) 2018-07-17 2022-07-26 Keysight Technologies, Inc. Methods, systems, and computer readable media for testing virtualized network functions and related infrastructure
US11210236B2 (en) 2019-10-22 2021-12-28 EMC IP Holding Company LLC Managing global counters using local delta counters
US11405261B1 (en) 2020-09-10 2022-08-02 Juniper Networks, Inc. Optimizing bandwidth utilization when exporting telemetry data from a network device
US11323354B1 (en) 2020-10-09 2022-05-03 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using switch emulation
US11483227B2 (en) 2020-10-13 2022-10-25 Keysight Technologies, Inc. Methods, systems and computer readable media for active queue management
US11483228B2 (en) 2021-01-29 2022-10-25 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using an emulated data center environment
US11405302B1 (en) 2021-03-11 2022-08-02 Keysight Technologies, Inc. Methods, systems, and computer readable media for network testing using configurable test infrastructure
US11388081B1 (en) 2021-03-30 2022-07-12 Keysight Technologies, Inc. Methods, systems, and computer readable media for impairment testing using an impairment device
US11729087B2 (en) 2021-12-03 2023-08-15 Keysight Technologies, Inc. Methods, systems, and computer readable media for providing adaptive background test traffic in a test environment
US11765068B2 (en) 2021-12-22 2023-09-19 Keysight Technologies, Inc. Methods, systems, and computer readable media for programmable data plane processor based traffic impairment

Also Published As

Publication number Publication date
US6279037B1 (en) 2001-08-21
GB9811416D0 (en) 1998-07-22
GB2337903A (en) 1999-12-01
GB2337903B (en) 2000-06-07
US6327620B1 (en) 2001-12-04

Similar Documents

Publication Publication Date Title
US6279037B1 (en) Methods and apparatus for collecting, storing, processing and using network traffic data
US5878420A (en) Network monitoring and management system
US8244853B1 (en) Method and system for non intrusive application interaction and dependency mapping
EP0983581B1 (en) System and method for analyzing remote traffic data in a distributed computing environment
US7120678B2 (en) Method and apparatus for configurable data collection on a computer network
US7143159B1 (en) Method for correlating and presenting network management data
US7561569B2 (en) Packet flow monitoring tool and method
US7788365B1 (en) Deferred processing of continuous metrics
JP5015951B2 (en) Method and apparatus for collecting data to characterize HTTP session load
Brownlee Traffic flow measurement: Experiences with NeTraMet
EP0854605A2 (en) Method and system for discovering computer network information from a remote device
JPH077518A (en) Method for analysis of network
US20050027858A1 (en) System and method for measuring and monitoring performance in a computer network
US20060136187A1 (en) Server recording and client playback of computer network characteristics
IL178481A (en) Method to identify transactions and manage the capacity to support the transaction
CN110401579B (en) Full link data sampling method, device and equipment based on hash table and storage medium
US10452879B2 (en) Memory structure for inventory management
US7707080B2 (en) Resource usage metering of network services
FR2839564A1 (en) SYSTEM AND METHOD FOR DYNAMICALLY CONFIGURING A NETWORK
US8607229B2 (en) Correcting packet timestamps in virtualized environments
JP2001134544A (en) Generating method and analyzing method for common log
Jabbarifar et al. Online incremental clock synchronization
JP3628902B2 (en) Network management system and storage medium applied to the system
Lambert A model for common operational statistics
Hielscher et al. A low-cost infrastructure for high precision high volume performance measurements of web clusters

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION