CA1279392C - Packet switching system arranged for congestion control - Google Patents

Packet switching system arranged for congestion control

Info

Publication number
CA1279392C
Authority
CA
Canada
Prior art keywords
packet
congestion
data packet
data
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA000555609A
Other languages
French (fr)
Inventor
Adrian Emmanuel Eckberg, Jr.
Daniel Ta-Duan Luan
David Michael Lucantoni
Tibor Joseph Schonfeld
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=25487371&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=CA1279392(C) "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by American Telephone and Telegraph Co Inc filed Critical American Telephone and Telegraph Co Inc
Application granted granted Critical
Publication of CA1279392C publication Critical patent/CA1279392C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 12/5602 Bandwidth control in ATM Networks, e.g. leaky bucket
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5629 Admission control
    • H04L 2012/5631 Resource management and allocation
    • H04L 2012/5636 Monitoring or policing, e.g. compliance with allocated rate, corrective actions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L 2012/5646 Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L 2012/5647 Cell loss
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L 2012/5646 Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L 2012/5651 Priority, marking, classes

Abstract

PACKET SWITCHING SYSTEM ARRANGED FOR CONGESTION CONTROL

Abstract A method for controlling congestion in a packet switching network uses marked packets and a packet dropping algorithm to determine when to drop marked packets: a marked packet is dropped wherever it encounters congestion at any point along the path it is traversing.

Description

PACKET SWITCHING SYSTEM ARRANGED FOR CONGESTION CONTROL

Technical Field

This invention relates to a packet switching system arranged for controlling switch node and link congestion caused by customers transmitting information at excessive rates.
Background of the Invention
Packet communication involves a technique of disassembling information at the sending end of a switching network for insertion into separate bursts, or packets, of data and reassembling the same information from the data packets at the receiving end of the network. Communication according to this technique is especially useful in common carrier or time-shared switching systems, since the communication path or circuit required for the packet transmissions is needed only while each packet is being forwarded through the network, and is, therefore, available to other users during intervening periods. Packet switching offers another attractive feature: the flexibility of providing integrated information transport services for a wide range of applications, e. g., interactive data, bulk data, signaling, packetized voice, image, etc. Instead of designing specialized networks optimized for specific forms of applications, many services can be simultaneously operated over the same connection to the network. All varieties of user information are converted into packets, and the network transports these packets between users. End users are not tied to fixed rate connections. Instead, network connections adapt to the particular needs of the end user. Furthermore, it is possible to create a uniform user-network interface applicable to a broad range of services. Note that different applications may require different grades of service from the network. For example, packetized voice transmission has very stringent delay requirements for delivery of associated packets of an ongoing conversation.
Efficient utilization of network resources can be attained by allowing packetized transmissions of a plurality of users on the same connection on a time-shared basis. Thus the packets of one user are interspersed with the packets of other users.

Elements of the resources or facilities which may be shared in such packet networks include transmission link bandwidth (defined as bytes/sec, a measure of link capacity), processor real time (i.e., time immediately available for processing packets), ports or links, and data or packet buffers. In large multinode networks, each node or packet switch accommodates many such ports or links that terminate paths which extend to users' terminal equipments or to other nodes. Each node may include one or more processors for controlling the routing and processing of packets through the node. The node is customarily equipped with a large number of buffers for storing packets prior to such routing or while awaiting an output link. Each line between nodes or extending to users typically serves a plurality of concurrent calls between different terminal equipments. Each packet passing through the network consumes a certain amount of processor real time at each node, takes away a certain amount of link capacity (proportional to the packet size), and occupies buffers while being processed. There is a maximal number of packets per unit of time that a network can accommodate. This notion of "capacity" depends on all the aforementioned resources provisioned within the network, as well as on the particular traffic mix generated by the users.
One problem in a packet switching system arises when many users attempt to utilize the network at the same time. This results in the formation of many paths or circuits for routing the packets and the congestion of the communication facilities. Congestion of a facility is the occurrence of more work than can be handled by the facility in a specific period of time. It has been found that congestion tends to spread through the network if the congestion is uncontrolled. As a result, it is desirable to have a flow/congestion control mechanism for protecting the expected performance level for each service type (e. g., voice) from unpredictable traffic overloads due to other service types. Protection from overload can be provided through the allocation of key network resources. In the event that a key resource is overloaded by traffic, it is desirable that the overall performance of the system should degrade as gracefully as possible. Controlling the utilization of the key resource may require different objectives under the overload condition than under a normal load condition.
A principal area of packet congestion is in buffers, or queues, in each node, particularly where the buffers become unavailable to store incoming packets. Yet the buffer requirement is closely related to the utilization of processor real time and/or link bandwidth. When the processor real time is being exhausted, or when the link bandwidth is not sufficient to handle the packet traffic, queues within the switching node will build up, causing a long delay. Finally, packet buffers will be exhausted, resulting in the dropping of packets.
A number of flow control procedures, e. g., end-to-end windowing, have been developed and commercially exploited for controlling congestion.
The known end-to-end windowing scheme for flow control is advantageous when network operation is viewed strictly from the network periphery. Each machine can have many logical channels simultaneously established between itself and various other machines. For each of these logical channels, a given machine is allowed to have W unacknowledged packets outstanding in the network. For example, a machine can initially transmit W packets into the network as fast as it desires; but, thereafter, it must wait until it has received an acknowledgment from the destination machine for at least one of those outstanding packets before it can transmit more packets.
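As a rough illustration of the windowing discipline just described, the following sketch models a single logical channel with a fixed window size W. The class and method names are illustrative additions, not part of the patent.

```python
# Minimal sketch of end-to-end window flow control for one logical
# channel, assuming a fixed window size W (names are illustrative).
class WindowedChannel:
    def __init__(self, w):
        self.w = w               # W: max unacknowledged packets allowed
        self.outstanding = 0     # packets sent but not yet acknowledged

    def try_send(self):
        """Return True if a packet may be sent now, else False (throttled)."""
        if self.outstanding >= self.w:
            return False         # window closed: must wait for an ack
        self.outstanding += 1
        return True

    def on_ack(self):
        # an acknowledgment from the destination reopens the window
        self.outstanding = max(0, self.outstanding - 1)
```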
This scheme has several very desirable properties. There is very little wasted link bandwidth caused by the flow-controlling mechanism because the number of bits in an acknowledgment can be made very small compared to the number of bits in the W packets to which it refers. There is an automatic throttling of transmission under a heavy load condition because the increased round-trip delay will slow down the acknowledgments and hence the traffic source.
There also are disadvantages to the end-to-end window flow control. The windowing mechanism by itself is not robust enough. The mechanism relies upon the end user adhering to an agreed-upon window size. By unilaterally increasing its window size, an abusive user can get favorable performance while degrading the performance for other users. Even if all users obey their specified end-to-end window sizes, it is very difficult to determine suitable window sizes for various load conditions. In general, the window size W is chosen large enough to allow uninterrupted transmission when the network is lightly loaded; however, overload conditions may require an unacceptably large amount of buffer storage within the packet switch. It is possible for end users to adaptively adjust their window sizes based on network congestion, but this by itself would not necessarily give a fair congestion control.
Another drawback to relying upon the end-to-end windowing mechanism is that not all agreed-upon user applications are subject to window-based end-to-end control. For example, the Unnumbered Information (UI) transfer embedded in some window-based protocols, e. g., LAPD, allows the end users to send packets without any windowing limitation. Other examples are packetized voice or packetized voice-band data applications where an end-to-end window is not applicable.
It has been proposed that in an integrated voice/data packet network the proper way to control flow/congestion is by allocating bandwidth to connections and by making new connections only when the needed bandwidth is available.
This means that the network must provide a mechanism for users to select their bandwidth needs and indicate the burstiness of their transmissions.
Thereafter, the network must enforce those parameters with respect to the respective users.
A key part of bandwidth allocation is the mechanism used to select and specify the needed bandwidth and to limit users to their selections. Perhaps the simplest approach is the so-called "leaky bucket" method. A count in a counter, associated with each user terminal transmitting on a connection, is incremented whenever the user sends a packet and is decremented periodically. The user selects the rate at which the count is decremented (this determines the average bandwidth) and the value of a threshold (a number indicating burstiness). If the count exceeds the threshold upon being incremented, the network discards that packet.
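A minimal sketch of this leaky bucket check follows, under the mechanism just described: increment per packet, periodic decrement, and discard when the increment would push the count past the threshold. The names, the per-packet increment of 1, and the choice not to charge a discarded packet are assumptions for illustration.

```python
# Hedged sketch of the "leaky bucket" proposal described above.
class LeakyBucket:
    def __init__(self, threshold, drain_per_interval):
        self.count = 0
        self.threshold = threshold          # selected burstiness limit
        self.drain = drain_per_interval     # sets the average bandwidth

    def on_packet(self):
        """Return True if the packet is carried, False if discarded."""
        self.count += 1
        if self.count > self.threshold:     # threshold exceeded on increment
            self.count -= 1                 # assumed: discarded packet not charged
            return False
        return True

    def on_interval(self):
        # periodic decrement at the user-selected rate (average bandwidth)
        self.count = max(0, self.count - self.drain)
```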
There are problems with this "leaky bucket" bandwidth allocation proposal. A major problem is the fact that the control is open-loop in nature. A user's packets will be dropped once the threshold is exceeded even when the network could have handled the packets. Precious network resources would be wasted. The unnecessary throttling of that user's data may sustain the information transfer over a lengthy period, contributing to network congestion at a later time. Another problem is that the limiting network resource may be processor real time rather than link bandwidth.


Summary of the Invention

These and other problems are resolved by a novel method and apparatus for controlling congestion in a packet switching network. The method uses a packet dropping algorithm to determine when to drop marked packets being transmitted through the network. Marked packets are dropped wherever the network is congested at any point along the path being traversed by the data packets.
In accordance with one aspect of the invention there is provided a method for dropping a marked data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of: a. preparing to transmit the data packet; b. determining whether or not the data packet is marked; c. evaluating congestion at the output of the switch node; d. determining whether or not the congestion is at a predetermined value; and e. if the data packet is marked and the congestion is at the predetermined value, dropping the data packet before it is transmitted.
In accordance with another aspect of the invention there is provided a packet switching node with a plurality of receive terminals; a plurality of channels, each interconnected with a different one of the receive terminals for transmitting sequentially packets of data, each packet containing at least one marking bit; means, responsive to a measurement of the congestion of the packet switching node, for generating a signal indicating the amount of congestion in the packet switching node; and means, responsive to the generated congestion signal and a threshold value, for deleting any packet of data containing a marking bit.
U.S. Patent No. 4,769,810, which issued on September 6, 1988 to A.E. Eckberg, et al., defines the method utilized for monitoring and marking packets.
Brief Description of the Drawings

The invention may be better understood by reading the following detailed description with reference to the drawings wherein

FIG. 1 illustrates a packet switching network arranged for interconnecting voice/data/video terminals and controlling congestion within the network;
FIG. 2 is a block diagram of circuitry arranged for monitoring the rate of transmission of a customer and for marking that customer's packets;
FIG. 3 shows a flow chart of an algorithm for monitoring the rate of transmission of a customer and for marking the customer's packets when the rate of transmission is excessive;
FIG. 4 presents a flow chart of an algorithm for periodically adjusting a counter used in monitoring the rate of transmission of a customer and for determining a dynamic threshold relating to a customer's selected rate of transmission;
FIG. 5 is a graphic illustration of the actions of the algorithms shown in FIGS. 3 and 4 for a sequence of packets from a customer;
FIG. 6 is a block diagram of circuitry arranged for dropping marked packets within the network;
FIG. 7 shows a flow chart of an algorithm for dropping marked packets which encounter congestion within the network;

FIG. 8 presents a flow chart of another algorithm for monitoring the rate of transmission of a customer and for marking the customer's packets when the rate of transmission is excessive; and

FIG. 9 is a graphic illustration of the actions of the algorithms shown in FIGS. 4 and 8 for a sequence of packets from a customer.
Detailed Description

Referring now to FIG. 1, there is shown a packet switching network 20 that is arranged for establishing virtual circuit connections between pairs of terminal equipments. Terminals 21 and 22 transmit packets of data through customer connection lines 25 and 26 to a packet multiplexer 28. Other terminals, not shown in FIG. 1 but indicated by a series of dots, also can transmit packets of data into the multiplexer 28. Although the terminals 21 and 22 are shown as computer terminals, they also may be digitized voice, video, or other data terminals. A resulting output stream of packets, interspersed with one another, is transmitted from the multiplexer 28 over an access line 29 to an access node 30 in the packet switching network 20. Another access line 33, and other access lines represented by a series of dots, also transmit streams of data packets into the access node 30. Some of these access lines originate at a multiplexer and others originate at a high speed terminal equipment.
Although a typical packet switching network may be a very complex network of switch nodes and links, only five switch nodes 30, 40, 50, 60 and 70 are shown in FIG. 1 to illustrate an arrangement of the invention.
In FIG. 1 only node 30 is arranged as an access node for receiving packets from customers' terminal equipments. Any or all of the other nodes 40, 50, 60 or 70 may also be access nodes in an operating system, but are not shown as access nodes in the network 20 merely to simplify the drawing.
Node 60 is shown as an egress node in FIG. 1. The other nodes also may be egress nodes but are not shown as such in FIG. 1 to simplify the drawing.
From the egress node 60, streams of packets are transmitted over egress lines 61 and 62 and others, represented by a series of dots, to demultiplexers or customers' equipments. For purposes of simplification of FIG. 1, only a single demultiplexer 63 is shown. The stream of data packets transmitted over the egress line 61 is separated within the demultiplexer 63 according to customer identity so that customer specific packets are passed over customer connection lines 65 and 66, respectively, to customer terminals 67 and 68. Other customer lines and terminals also are supplied with streams of packets from the demultiplexer 63. Those other customer lines and terminals are represented in FIG. 1 by a series of dots.
For purposes of illustrating the operation of the data switching network 20, an exemplary virtual connection is shown by a heavily weighted path line linking the terminal equipment 21 to the terminal equipment 67. Although typical transmission is two-way over such a virtual connection, only one-way transmission from the terminal equipment 21 through the network 20 to the terminal equipment 67 is shown in FIG. 1. This virtual circuit connection is established from the multiplexer 28 and access line 29 through the access node 30, switch nodes 40 and 50, links 72, 74 and 78, egress node 60, and egress link 61 to the demultiplexer 63.
The network 20 is arranged for congestion control. Links and switches are provisioned in quantities that permit unimpeded transmission of all packets up to a limit. Congestion, which potentially might occur at any point within the network 20, can impede the progress of a growing number of packets if the congestion continues for an extended period of time. As a result, the congestion can spread throughout the network and disable the network from effective operation.
The advantageous congestion control scheme, herein presented, is directed toward monitoring and marking selected customer data packets and dropping marked data packets from further transmission through the network whenever and wherever they encounter a congestion condition.
This control scheme is implemented by algorithms which affect individual data packets at various times and points within the network as a continuing stream of packets progresses through the virtual connection. Each customer or end user may establish multiple virtual connections to different customers of the network. The monitoring and marking scheme can be implemented on a per virtual circuit basis, on a per group of virtual circuits basis, or on a per customer basis. To simplify further explanation, we assume there is only one virtual circuit per customer, so the terms customer and virtual circuit will be synonymous hereinafter.

A first algorithm is for monitoring the bandwidth of a customer and for marking that customer's packets when that customer's subscribed bandwidth is exceeded. In this context, bandwidth may be defined as a two-dimensional quantity in the units (bytes/sec, packets/sec) to distinguish between whether link bandwidth or processor real time is the limiting resource. The subscribed bandwidth is described in terms of an average rate (a throughput in bytes/sec that is guaranteed to be achievable by the user with packets of a specified size) and a burstiness factor (where a measure of burstiness is, for example, the peak-to-mean ratio of the transmission rate as well as the duration of the peak transmissions). This first algorithm is used at the receive side of the access node 30. Each of the packets received from the access line 29 includes information in a header for identifying the virtual circuit connection to which the packet belongs. Accordingly, the various packets are stored in registers and are identified with specific virtual circuits.
Information from the headers of packets, being transmitted by the terminal equipment 21 and being received by the access node 30, is applied to a bandwidth monitoring and packet marking circuit 80 in FIG. 1.
Referring now to FIG. 2, there is a block diagram of the circuit 80 which performs the proposed bandwidth monitoring and packet marking functions on a per customer basis. Only information from packets, identified as originating from the terminal 21 of FIG. 1 and transmitted through the heavily weighted line linking to the terminal 67, is monitored by the algorithm. The circuit 80 is time-shared for performing the same functions with respect to other virtual connections, but all of the monitoring and marking is done separately with respect to individual virtual connections.
The monitoring is accomplished by an algorithm which determines whether or not the individual customer at terminal 21 is transmitting at an excessive rate (a rate greater than the subscribed rate) over the virtual circuit extending to the terminal 67.
When the illustrated virtual connection is set up, the customer terminal equipment 21 and the network 20 negotiate for a desired bandwidth allocation relating to the virtual connection. The bandwidth allocation will be called the selected, or subscribed, transmission rate. Information transmissions which exceed, or are greater than, the subscribed transmission rate are referred to as excessive rates.
A processor in the access node 30 translates the subscribed transmission rate allocation into the long-term threshold M, a short-term threshold S and a decrement constant c. The long-term threshold M is chosen to accommodate the largest burst size allowed by the subscribed transmission rate, and the short-term threshold S is determined by the maximum instantaneous rate allowed. The decrement constant c relates to the guaranteed average throughput of unmarked packets. These are initializing parameters that are applied to a logic circuit 81 when the virtual connection is established, as shown in FIG. 2, for updating values of COUNT and THRESH at the end of an interval. They are used subsequently in the bandwidth monitoring and packet marking circuit 80. Also initially the value of COUNT in an accumulator 82 is set to zero and an active threshold value THRESH is set equal to S in a register 83. Further, in initialization, a parameter k, which is a weighting factor related to the number of packets per interval, is applied to a logic circuit 84 which produces a signal for marking a packet being transmitted at an excessive transmission rate.
During each interval, as shown in FIGS. 2 and 5, the number of bytes contained in each arriving packet is input by way of a lead 85 to the register storing BYTES to be applied to the logic circuit 84. Circuit 84 decides whether or not the specific packet should be marked as being transmitted at an excessive transmission rate. If the packet is to be marked, a marking signal is generated on the lead 88 for inserting a marking signal into the packet header for identifying the packet as one being transmitted at an excessive transmission rate. If the packet is within the limits of the subscribed transmission rate, no marking signal is generated or inserted into the header of the packet.
There are three alternative packet marking algorithms to be described herein by way of illustrations. Others can be used as well.

Parameters Used in Algorithms (A) and (B):
I - interval between successive decrements to the value of COUNT in the accumulator; this is a fixed interval for each virtual circuit being monitored and may differ among the virtual circuits; a typical value for I could be in the 10-500 msec range.


k - a parameter by which the value of COUNT in the accumulator is to be incremented for each packet sent from the customer's terminal in addition to the value of BYTES for the packet; the parameter k represents a byte penalty per packet in guaranteed throughput that provides network protection from excessive minimal-sized packets that might otherwise stress real time resources; a typical value for the parameter k is a number between 0 and 1000; a value of the parameter k equal to zero would be used when processor real time is not a concern.
c - a decrement constant relating to the customer's selected throughput of bytes per interval which will avoid packets being marked for possibly being dropped; an amount by which the value of COUNT in the accumulator is to be decremented during each interval; the number of a customer's bytes per interval (subscribed transmission rate) that will guarantee all transmitted packets are unmarked.
S - a short-term or instantaneous threshold on throughput during each interval which, if exceeded, will cause packets to be marked.
M - a long-term bandwidth threshold related to allowable "burst" size.
BYTES - the number of bytes in a packet being received, from a customer's terminal, by an access node.
COUNT - the value in the accumulator.
THRESH - a variable threshold.

Bandwidth Monitoring and Packet Marking - Algorithm (A)

One of those algorithms, Algorithm (A), is shown in FIG. 3 with a graphic example thereof illustrated in FIG. 5.

Initialization for the Algorithm (A):

1. Set the accumulator variable COUNT to 0.

2. Set the threshold variable THRESH to S.

Steps in the Algorithm (A):

1. During each interval, upon receipt of each packet from the customer's terminal (FIG. 3):

a. Set the byte count variable BYTES to the number of bytes in the packet.
b. Compare the value of COUNT with the value of THRESH and take the following actions: If COUNT < THRESH, then pass the packet on unmarked and replace the value of COUNT by COUNT+BYTES+k. Otherwise, if COUNT >= THRESH, then mark the packet, pass it on, and keep the same COUNT.
2. At the end of every interval (FIG. 4):

a. Replace the value of COUNT by COUNT-c or 0, whichever is larger.
b. Set THRESH to COUNT+S or M, whichever is smaller.
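The two steps of Algorithm (A) translate directly into code. The sketch below uses the parameters defined above (k, c, S, M, BYTES, COUNT, THRESH); the class structure and method names are illustrative assumptions, and end_of_interval() stands in for the per-interval processing of FIG. 4.

```python
# Sketch of Algorithm (A): mark-and-hold-COUNT marking with per-interval
# decrement of COUNT and recomputation of THRESH, per the steps above.
class AlgorithmA:
    def __init__(self, S, M, c, k):
        self.S, self.M, self.c, self.k = S, M, c, k
        self.count = 0        # COUNT: accumulator, initialized to 0
        self.thresh = S       # THRESH: active threshold, initialized to S

    def on_packet(self, nbytes):
        """Step 1: return True if this packet is to be marked."""
        if self.count < self.thresh:
            self.count += nbytes + self.k   # pass unmarked; charge COUNT
            return False
        return True                         # mark; COUNT stays the same

    def end_of_interval(self):
        """Step 2 (FIG. 4): drain COUNT and recompute THRESH."""
        self.count = max(self.count - self.c, 0)
        self.thresh = min(self.count + self.S, self.M)
```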

In FIG. 5 the vertical axis presents the value in bytes for the parameter k, the value of BYTES for the current packet, the value of the short-term threshold S and the value of the long-term threshold M, the value of the threshold variable THRESH, and the decrement constant c. The horizontal axis is time-divided by equal intervals I. Interval I is the duration between two instants of time, such as the instants t0 and t1. Also along the horizontal axis there is a series of numbers, each of which identifies a specific arriving packet.
Parameter k is shown in FIG. 5 as upward-directed arrows with solid arrowheads. It is a constant value for all packets of a virtual connection with the value typically in the range 0-1000. According to the algorithms, COUNT is incremented by the parameter k every time a packet arrives.
The value BYTES is represented in FIG. 5 by upwardly directed arrows with open arrowheads and various values for the several arriving packets. According to the algorithm, COUNT is incremented by BYTES every time a packet arrives, except when the packet is to be marked.

The accumulated values of COUNT are shown in FIG. 5 as heavy horizontal line segments.
The values of THRESH are shown in FIG. 5 as a light dotted line.
When the packet arrives, COUNT is compared with THRESH. If COUNT is less than THRESH, the packet is not marked and COUNT is incremented. If COUNT is equal to or greater than THRESH, the packet is marked and COUNT is not incremented.
Stars are positioned above the lines (representing the parameter k and BYTES) of packets which are to be marked because those packets are determined to be transmitted at an excessive transmission rate.
The decrement constant c is shown in FIG. 5 by downwardly directed open-headed arrows, which are of constant value and occur at the end of each interval I except when COUNT would be decremented below zero.
Also in the algorithm of FIG. 4, the sum of COUNT plus the value of S is compared with the value of M. If the sum is greater than M, THRESH is set to M. If the sum is less than or equal to M, THRESH is set to the sum of COUNT plus the value of S.
Once the decision is made to mark the packet or not and the packet header is marked appropriately, the packet proceeds through the access node 30 of FIG. 1 to an output controller before being put into an output buffer associated with the output link through which the packet is to be transmitted. At that time, the information in the packet header field, reserved for the marking signal, is forwarded to a packet dropping logic circuit 53, associated with node 30 of FIG. 1. A more detailed block diagram of the packet dropping logic circuit 53 is presented in FIG. 6.
In FIGS. 6 and 7 there is shown both a block diagram and the algorithm for the logic of determining whether or not to drop the current packet which is about to be applied to the output buffer of the access node 30 or which is about to be applied to the output buffer of any of the switch nodes 40, 50, 60 and 70 of FIG. 1.
It is assumed that traffic is light to moderate in the nodes 30, 40 and 60 along the virtual connection. At the switch node 50, however, traffic is heavy enough to create a congestion condition.

First of all, the packet dropping logic circuits 53 and 54, which are associated with lightly loaded nodes 30 and 40 and are like the logic circuit 55 of FIG. 6, run the algorithm of FIG. 7. Since there is no congestion at these nodes and the output buffers are not full when tested, whether or not the current packet is marked, it is passed to the output buffer of the relevant node for transmission along the virtual connection.
Next the packet traverses the node 50, the congested node. The packet is applied to a dropping circuit 90 in the packet dropping logic circuit 55 by way of a lead 91. Before the packet is placed in its identified packet output buffer 92 by way of a lead 93, that buffer is checked to determine whether or not it is full. If the buffer 92 is full, a signal is forwarded through a lead 94 to the dropping circuit 90. Regardless of whether or not the packet is marked, if the output buffer 92 is full, the packet is dropped. Whether or not the output buffer is full, a measurement of congestion is determined. The number of packets in the packet output buffer 92 is applied by way of a lead 95 to a congestion measuring circuit 96. At the same time, a signal representing the availability of processor real time is applied to the congestion measuring circuit 96 by way of a lead 97.
In response to the signals on the leads 95 and 97, the circuit 96 produces a signal on a lead 98 indicating the amount of congestion that exists at the node 50 of FIG. 1. The congestion signal on the lead 98 and a threshold value applied by way of a lead 99 determine whether or not marked packets are to be dropped by the dropping circuit 90. A signal indicating that a packet should be dropped is transmitted by way of a lead 100 to the switch node 50. When the packet dropping signal occurs, the relevant packet is dropped (if it is a marked packet) or is placed in the output buffer 92 for subsequent transmission through the link 76 of FIG. 1 (if it is an unmarked packet). The aforementioned procedure drops marked packets before placing them in the output buffer. It is called "input dropping" because packet dropping is done at the input to the buffer. Alternatively, one may place all the packets in the output buffer if there is space there and implement "output dropping" for marked packets. That is, when a marked packet in the output buffer finally moves up to the head of the queue and is ready for output, the threshold is checked, and the packet will be dropped or transmitted accordingly.

The congestion measure is a threshold picked to guarantee that a certain quantity of resources are available in the node for passing unmarked packets. A weighted sum of the number of packets residing in the output buffer 92 plus the amount of real time available in the processor is used to measure congestion. The amount of real time is related to the parameter k. When the parameter k is equal to zero, real time is not a concern. Then the number of packets in the output buffer is the only measure of congestion.
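Read together, FIGS. 6 and 7 suggest a per-packet drop decision of roughly the following shape. The patent specifies only a weighted measure compared against a threshold, with the real-time term dropping out when k is zero; the particular weights, the use of a busy fraction for processor real time, and all names below are assumptions added for illustration.

```python
# Hedged sketch of the drop decision at a node's output buffer
# ("input dropping" variant): a full buffer drops any packet;
# otherwise a weighted congestion measure is compared with a
# threshold and only marked packets are dropped.
def should_drop(marked, queue_len, buffer_capacity, busy_fraction,
                w_queue=1.0, w_rt=100.0, drop_threshold=50.0):
    if queue_len >= buffer_capacity:
        return True                       # buffer full: drop regardless
    # weighted combination of buffer occupancy and processor real-time
    # usage; with k = 0 one would set w_rt = 0 so occupancy alone decides
    congestion = w_queue * queue_len + w_rt * busy_fraction
    return marked and congestion >= drop_threshold
```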
Since the packet is dropped at the node 50 when that node is congested and the packet is a marked packet, the congestion is somewhat relieved. The packet being dropped is one previously marked at the access node 30 as a packet being transmitted at an excessive transmission rate. A congestion condition, less critical than a full output buffer, is relieved by dropping only the marked packets. Under such a condition, unmarked packets are passed through to their destination.
Congestion, therefore, is relieved for the most part by dropping the packets of customers who are transmitting at rates which exceed their agreed upon, or assigned, transmission rate. The network, therefore, can adequately serve all subscribers who transmit within their subscribed transmission rate.

Alternate Bandwidth Monitoring and Packet Marking - Algorithm (B)

Turning now to FIGS. 8 and 9, there is shown an alternative algorithm for determining which packets are to be marked together with a graphical presentation of how that algorithm operates on a series of received packets.
Only the first step (the marking step) is different from the earlier described algorithm explained with respect to FIGS. 3 and 4. Initialization and parameters have the same or a similar meaning except that a new value of COUNT is determined before the value of COUNT is compared with the value of THRESH. The value of BYTES plus the parameter k is added to the value of COUNT before the comparison occurs. Subsequently, when the value of COUNT is determined to be less than the value of THRESH, the packet is not marked and is passed along. The existing value of COUNT is retained. If COUNT is equal to or greater than THRESH, the packet is marked and passed along. Then the value of COUNT is decremented by the sum of BYTES plus the parameter k. As with the earlier described packet marking algorithm, Algorithm (A), packets which are marked are considered to be only those being transmitted at an excessive transmission rate.

Initialization for the Algorithm (B):

1. Set the counter variable COUNT to 0.

2. Set the threshold variable THRESH to S.

Steps in the Algorithm (B):

1. During each interval upon receipt of each packet from the customer's terminal (FIG. 8):

a. Set the byte count variable BYTES to the number of bytes in the packet.
b. Replace the value of COUNT by COUNT+BYTES+k.
c. Compare the value of COUNT with the value of THRESH and take the following actions: If COUNT < THRESH, then pass the packet on unmarked. Otherwise, if COUNT >= THRESH, then mark the packet, pass it on, and replace the value of COUNT by the value of COUNT-BYTES-k.

2. At the end of every interval I (FIG. 4):

a. Replace the value of COUNT by COUNT-c or 0, whichever is larger.
b. Set THRESH to COUNT+S or M, whichever is smaller.
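For comparison, here is a sketch of Algorithm (B) in the same illustrative style as the Algorithm (A) sketch above; only the marking step differs, charging COUNT before the comparison and refunding the charge when the packet is marked. Names and class structure remain assumptions.

```python
# Sketch of Algorithm (B): COUNT is charged first, compared with
# THRESH, and refunded when the packet is marked. The per-interval
# step is identical to Algorithm (A).
class AlgorithmB:
    def __init__(self, S, M, c, k):
        self.S, self.M, self.c, self.k = S, M, c, k
        self.count = 0        # COUNT
        self.thresh = S       # THRESH

    def on_packet(self, nbytes):
        """Step 1: return True if this packet is to be marked."""
        self.count += nbytes + self.k       # charge COUNT before comparing
        if self.count < self.thresh:
            return False                    # pass unmarked; keep new COUNT
        self.count -= nbytes + self.k       # marked: undo the charge
        return True

    def end_of_interval(self):
        """Step 2 (FIG. 4): drain COUNT and recompute THRESH."""
        self.count = max(self.count - self.c, 0)
        self.thresh = min(self.count + self.S, self.M)
```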

FIG. 9 shows the result of the series of packets being monitored by the algorithm of FIG. 8. Stars positioned above the lines, representing the parameter k and BYTES for the packets, indicate that those packets are to be marked as excessive transmission rate packets.

Special Service Packet Marking

A special new service can be offered to customers. The new service offering is based on the fact that the network is arranged for dropping marked packets wherever and whenever the marked packets encounter a congestion condition or a full buffer condition. This new service is a low-cost, or economic rate, service.
By subscribing to, or choosing, the economic rate service, the customer assumes a risk that transmitted messages will not be delivered to their destination if they encounter congestion. The operating equipment (either at the customer's terminal or at the access node) is arranged to mark every packet transmitted by the economic rate service customer. Thereafter, as the marked packets proceed through the packet switching network, they are treated like other marked packets. If these marked economic rate service packets traverse a point of congestion or arrive at a full buffer in the network, the packets are dropped. Since all of the economic rate packets are marked, there is a high probability that the message will not be delivered during busy traffic conditions.
During hours of light traffic, however, there is a high probability that there are neither congested nodes nor full buffers. At such times, the message is likely to traverse the network successfully on the first try.
This economic rate service will benefit both the operating communications company and the customer. The customer benefits from the low rate charges. The operating company benefits because customers of this service will tend to transmit economic rate service messages during slow traffic times when much of the company's equipment is idle and much spare capacity is available.
Congestion Control

The approach adopted in this invention offers the following advantages over existing techniques for controlling congestion in a packet switching system.
First, it affords customers both a guaranteed level of information throughput as well as the opportunity to avail themselves of excess capacity that is likely to exist in the network (depending on the instantaneous level of network congestion) to realize information throughputs beyond their guaranteed levels.
This provides customers flexibility in trading off factors such as guaranteed and expected levels of throughput, integrity of information transport through the network, and the associated costs paid to the network provider for different grades of service.
Second, the approach affords the network provider greater flexibility in the provisioning of key network resources to meet the demands of customers, in that the network resources can be provisioned to meet the total demands due to the guaranteed average throughputs of all customers (with a high level of certainty) rather than a statistically-predicted peak total demand of all customers. This lessens the conservatism required in network resource provisioning and allows a higher average level of network resource utilization.

Finally, the congestion control, as illustrated in Figures 1-9, is a distributed control that employs the monitoring and marking of packets at access nodes and the dropping of marked packets at any network node that may be experiencing congestion. The control is completely decoupled from actions that may be adopted by customers at the end terminal equipments. Distribution of the control through the network eliminates the need for very low-delay signaling mechanisms between network nodes that would otherwise be needed if excessive rate packets were to be dropped at access nodes. The decoupling of the control from terminal actions eliminates the dependence of the integrity of the control scheme on terminal actions, as is the case with some other control schemes. The distribution of control provides a robustness to the control and an ensured protection of both the network and well-behaved customers from other customers who abusively send packets at excessive transmission rates.
The foregoing describes both apparatus and methods for marking packets being transmitted at an excessive rate when received at an access node, or being transmitted from a special customer, and for dropping marked packets at any node in the network when a congestion condition exists. The apparatus and methods described herein together with other apparatus and methods made obvious in view thereof are considered to be within the scope of the appended claims.

Claims (21)

1. A method for dropping a marked data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. preparing to transmit the data packet;
b. determining whether or not the data packet is marked;
c. evaluating congestion at the output of the switch node;
d. determining whether or not the congestion is at a predetermined value;
and e. if the data packet is marked and the congestion is at the predetermined value, dropping the data packet before it is transmitted.
2. A method for dropping a marked data packet, in accordance with claim 1, and comprising the further steps of:
f. if the data packet is unmarked, passing it to the output;
g. multiplexing the unmarked data packet with other data packets; and h. transmitting the multiplexed unmarked data packet and other data packets through a link or terminal line.
3. A method for dropping a marked data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. segregating data packets transmitted by one customer;
b. marking that one customer's data packets being transmitted;
c. preparing to transmit one of the customer's data packets interposed among other data packets;
d. determining whether or not the data packet to be transmitted is marked;
e. evaluating congestion at the output of the switch node;
f. determining whether or not congestion at the switch node is at or above a predetermined value; and g. if the data packet is marked and the congestion is at or above the predetermined value, dropping the data packet.
4. A method for dropping a marked packet to be transmitted at an excessive rate from a switch node in a packet switching network, the method comprising the steps of:

a. preparing to transmit the data packet;
b. determining whether or not the data packet is marked as a data packet being transmitted at an excessive rate;
c. evaluating congestion at the output of the switch node;
d. determining whether or not the congestion is at a predetermined value;
and e. if the data packet is marked and the congestion is at the predetermined value, dropping the data packet.
5. A packet switching node with a plurality of receive ports; the node comprising:
a plurality of channels, each channel interconnected with a different one of the receive ports, for transmitting sequentially packets of data, each packet containing at least one marking bit which may be enabled;
means, responsive to a measurement of the congestion of the packet switching node, for generating a signal indicating the amount of congestion in the packet switching node; and means, responsive to the generated congestion signal and a threshold value, for dropping any packet of data containing an enabled marking bit.
6. A method for dropping a data packet which may be marked at a first switch node and is to be transmitted from a second switch node in a packet switching network, the method comprising the steps of:
a. preparing to transmit the data packet from the second switch node;
b. determining whether or not the data packet is marked;
c. evaluating congestion at an output of the second switch node;
d. determining whether or not the congestion is at a predetermined value;
and e. if the data packet was marked at the first switch node and the congestion is at the predetermined value, dropping the data packet before it is transmitted from the output of the second switch node.
7. A method for dropping a data packet, in accordance with claim 6, and comprising the further steps of:
f. if the data packet is unmarked, passing it to the output of the second switch node;
g. multiplexing the unmarked data packet with other data packets; and h. transmitting the multiplexed unmarked data packets and other data packets from the output of the second switch node through a link or terminal line.
8. A method for dropping a data packet which may be marked at a first switch node and is to be transmitted from a second switch node in a packet switching network, the method comprising the steps of:
a. segregating data packets transmitted from the first switch node by one customer;
b. marking that one customer's data packets before being transmitted from the first switch node;
c. preparing to transmit one of the customer's data packets from the second switch node;
d. determining whether or not the one data packet is marked;
e. evaluating congestion at an output of the second switch node;
f. determining whether or not congestion at the output of the second switch node is at or above a predetermined value; and g. if the one data packet is marked and the congestion is at or above the predetermined value, dropping the one data packet before it is transmitted from the output of the second switch node.
9. A first packet switching node with a plurality of receive ports, from local access lines and a second packet switching node, the first packet switching node comprising: a plurality of channels, each channel from the second packet switching node interconnected with a different one of the receive ports, for transmitting packets of data, each packet containing a marking bit which may be enabled at the second packet switching node;
means, responsive to a measurement of congestion in the first packet switching node, for generating a signal indicating the amount of congestion in the first packet switching node; and means, responsive to the generated congestion signal and a threshold value, for dropping any packet of data containing the enabled marking bit before the packet is transmitted from the first packet switching node.
10. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. preparing to transmit the data packet;
b. determining whether or not the data packet is marked as being transmitted at an excessive rate;
c. evaluating congestion at the switch node;
d. determining whether or not the congestion is at or above a predetermined value; and e. if the data packet is marked as being transmitted at an excessive rate and the congestion is at or above the predetermined value, dropping the data packet before it is transmitted from the switch node.
11. A method for dropping a data packet, in accordance with claim 10, and comprising the further steps of:
f. if the data packet is unmarked, passing it to an output of the switch node;
g. multiplexing the unmarked data packet with other data packets; and h. transmitting the multiplexed unmarked data packet and other data packets from the switch node through a link or terminal line.
12. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. segregating data packets transmitted by one customer into the network;
b. marking that one customer's data packets as being transmitted into the network at an excessive rate;

c. preparing to transmit one of that customer's data packets;
d. determining whether or not the one data packet is marked;
e. evaluating congestion at an output of the switch node;
f. determining whether or not congestion at the switch node is at or above a predetermined value; and g. if the one data packet is marked as being transmitted at an excessive rate and the congestion at the switch node is at or above the predetermined value, dropping the data packet.
13. A packet switching node with a plurality of receive ports; the switching node comprising a plurality of channels, each channel interconnected with a different one of the receive ports, for transmitting packets of data to the switching node, each packet received by at least one receive port containing a marking bit which may be enabled to indicate the packet was transmitted at an excessive rate;
means, responsive to a measurement of congestion in the packet switching node, for generating a signal indicating the amount of congestion in the packet switching node; and means, responsive to the generated congestion signal and a threshold value, for dropping any packet of data containing an enabled marking bit.
14. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. preparing to transmit the data packet;
b. determining whether or not the data packet is marked as a special service packet;
c. evaluating congestion at the switch node;
d. determining whether or not the congestion is at or above a predetermined value; and e. if the data packet is marked as being a special service packet and the congestion is at or above the predetermined value, dropping the data packet before it is transmitted from the switch node.
15. A method for dropping a data packet, in accordance with claim 14, and comprising the further steps of:
f. if the data packet is unmarked, passing it to an output;
g. multiplexing the unmarked data packet with other data packets; and h. transmitting the multiplexed unmarked data packet and other data packets from the switch node through a link or terminal line.
16. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. segregating data packets transmitted by one customer subscribing to a special service;
b. marking that one customer's data packets as being transmitted into the network as a special service;
c. preparing to transmit one of the customer's data packets from the switch node;
d. determining whether or not the one data packet is marked;
e. evaluating congestion at an output of the switch node;
f. determining whether or not congestion at the switch node is at or above a predetermined value; and g. if the data packet is marked and the congestion at the switch node is at or above the predetermined value, dropping the data packet.
17. A packet switching node with a plurality of receive ports; the switching node comprising a plurality of channels, each channel interconnected with a different one of the receive ports, for transmitting packets of data to the switching node, each packet received by at least one receive port containing a marking bit enabled to indicate the packet was transmitted as a special service packet;

means, responsive to a measurement of congestion in the packet switching node, for generating a signal indicating the amount of congestion in the packet switching node; and means, responsive to the generated congestion signal and a threshold value, for dropping any packet of data containing the enabled marking bit.
18. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. preparing to transmit the data packet;
b. determining whether or not the data packet is marked by a single bit;
c. evaluating congestion at the switch node;
d. determining whether or not the congestion is at or above a predetermined value; and e. if the data packet is marked by the single bit and the congestion is at or above the predetermined value, dropping the data packet before it is transmitted from the switch node.
19. A method for dropping a data packet, in accordance with claim 18, and comprising the further steps of:
f. if the data packet is unmarked, passing it to an output;
g. multiplexing the unmarked data packet with other data packets; and h. transmitting the multiplexed unmarked data packet and other data packets from the switch node through a link or terminal line.
20. A method for dropping a data packet to be transmitted from a switch node in a packet switching network, the method comprising the steps of:
a. segregating data packets transmitted by one customer;
b. marking with a single bit that one customer's data packets being transmitted;

c. preparing to transmit one of the customer's data packets from the switch node;
d. determining whether or not the one data packet is marked with the single bit;
e. evaluating congestion at an output of the switch node;
f. determining whether or not congestion at the node is at or above a predetermined value; and g. if the one data packet is marked with the single bit and the congestion is at or above the predetermined value, dropping the one data packet.
21. A packet switching node with a plurality of receive ports; the switching node comprising a plurality of channels, each channel interconnected with a different one of the receive ports, for transmitting packets of data to the switching node, each packet received by at least one receive port containing a single marking bit;
means, responsive to a measurement of the congestion of the packet switching node, for generating a signal indicating the amount of congestion in the packet switching node; and means, responsive to the generated congestion signal and a threshold value, for dropping any packet of data containing the single marking bit.
CA000555609A 1986-12-31 1987-12-30 Packet switching system arranged for congestion control Expired - Lifetime CA1279392C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US948,152 1986-12-31
US06/948,152 US4769811A (en) 1986-12-31 1986-12-31 Packet switching system arranged for congestion control

Publications (1)

Publication Number Publication Date
CA1279392C true CA1279392C (en) 1991-01-22

Family

ID=25487371

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000555609A Expired - Lifetime CA1279392C (en) 1986-12-31 1987-12-30 Packet switching system arranged for congestion control

Country Status (8)

Country Link
US (1) US4769811A (en)
EP (1) EP0275679B1 (en)
JP (1) JPH0657017B2 (en)
KR (1) KR910008759B1 (en)
AU (1) AU592030B2 (en)
CA (1) CA1279392C (en)
DE (1) DE3780800T2 (en)
ES (1) ES2033329T3 (en)

Families Citing this family (207)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4734907A (en) * 1985-09-06 1988-03-29 Washington University Broadcast packet switching network
US4920534A (en) * 1986-02-28 1990-04-24 At&T Bell Laboratories System for controllably eliminating bits from packet information field based on indicator in header and amount of data in packet buffer
FR2616025B1 (en) * 1987-05-26 1989-07-21 Lespagnol Albert METHOD AND SYSTEM FOR PACKET FLOW CONTROL
FR2618279B1 (en) * 1987-07-16 1989-10-20 Quinquis Jean Paul PRIORITY DATA PACKET SWITCHING SYSTEM.
DE3784168T2 (en) * 1987-09-23 1993-09-16 IBM DIGITAL PACKET SWITCHING NETWORKS.
JPH01221042A (en) * 1988-02-29 1989-09-04 Toshiba Corp Congestion control method for packet exchange
US5377327A (en) * 1988-04-22 1994-12-27 Digital Equipment Corporation Congestion avoidance scheme for computer networks
US4862454A (en) * 1988-07-15 1989-08-29 International Business Machines Corporation Switching method for multistage interconnection networks with hot spot traffic
GB8818368D0 (en) * 1988-08-02 1988-09-07 Digital Equipment Corp Network transit prevention
JP2753294B2 (en) * 1988-12-23 1998-05-18 株式会社日立製作所 Packet congestion control method and packet switching device
NL8900269A (en) * 1989-02-03 1990-09-03 Nederland Ptt METHOD FOR TRANSMITTING A FLOW OF DATA CELLS OVER A PLURALITY OF ASYNCHRONOUSLY TIME-DIVIDED TRANSMISSION CHANNELS, MAINTAINING PER TRANSMISSION CHANNEL A COUNTER READING THAT DEPENDS ON THE NUMBER OF DATA CELLS PER UNIT TIME.
US4953157A (en) * 1989-04-19 1990-08-28 American Telephone And Telegraph Company Programmable data packet buffer prioritization arrangement
FR2648649B1 (en) * 1989-06-20 1991-08-23 Cit Alcatel METHOD AND DEVICE FOR QUANTIFIED EVALUATION OF THE FLOW RATE OF VIRTUAL CIRCUITS EMPLOYING AN ASYNCHRONOUS TIME MULTIPLEXING TRANSMISSION CHANNEL
AU619687B2 (en) * 1989-06-20 1992-01-30 Alcatel N.V. Method and device for evaluating the throughput of virtual circuits employing a time-division multiplexed transmission channel
US5179557A (en) * 1989-07-04 1993-01-12 Kabushiki Kaisha Toshiba Data packet communication system in which data packet transmittal is prioritized with queues having respective assigned priorities and frequency weighted counting of queue wait time
US5224092A (en) * 1989-09-05 1993-06-29 Koninklijke Ptt Nederland N.V. Method for controlling a flow of data cells into a plurality of asynchronously time-divided transmission channels with a single admission switch for transmission in the channels with reference to the state of a plurality of count values
DE59010648D1 (en) * 1989-09-29 1997-03-27 Siemens Ag Circuit arrangement for determining the amount of message signals supplied to an ATM switching system in the course of virtual connections and for checking compliance with specified bit rates
NL8902504A (en) * 1989-10-09 1991-05-01 Nederland Ptt METHOD FOR MONITORING A TRANSMISSION SYSTEM COMPRISING A PLURALITY OF VIRTUAL, ASYNCHRONOUSLY TIME-DIVIDED CHANNELS THROUGH WHICH A DATA FLOW CAN BE TRANSFERRED.
FR2653285B1 (en) * 1989-10-12 1991-12-06 Cit Alcatel DEVICE FOR EVALUATING THE FLOW RATE OF VIRTUAL CIRCUITS EMPLOYING AN ASYNCHRONOUS TIME MULTIPLEXED TRANSMISSION CHANNEL.
US5029164A (en) * 1990-04-13 1991-07-02 Digital Equipment Corporation Congestion avoidance in high-speed network carrying bursty traffic
JP3128654B2 (en) * 1990-10-19 2001-01-29 富士通株式会社 Supervisory control method, supervisory control device and switching system
ES2028554A6 (en) * 1990-11-05 1992-07-01 Telefonica Nacional Espana Co Telecommunications packet switching system
US5121383A (en) * 1990-11-16 1992-06-09 Bell Communications Research, Inc. Duration limited statistical multiplexing in packet networks
US5187707A (en) * 1990-12-03 1993-02-16 Northern Telecom Limited Packet data flow control for an isdn D-channel
US5164938A (en) * 1991-03-28 1992-11-17 Sprint International Communications Corp. Bandwidth seizing in integrated services networks
US5229992A (en) * 1991-03-28 1993-07-20 Sprint International Communications Corp. Fixed interval composite framing in integrated services networks
US5426640A (en) * 1992-01-21 1995-06-20 Codex Corporation Rate-based adaptive congestion control system and method for integrated packet networks
JPH0614049A (en) * 1992-03-19 1994-01-21 Fujitsu Ltd Cell abort controller in ATM and its method
CA2094896C (en) * 1992-04-27 1999-09-14 Nobuyuki Tokura Packet network and method for congestion avoidance in packet networks
US5335224A (en) * 1992-06-30 1994-08-02 At&T Bell Laboratories Service guarantees/congestion control in high speed networks
JP2760217B2 (en) * 1992-07-01 1998-05-28 三菱電機株式会社 Digital line multiplex transmission equipment
GB2272612B (en) * 1992-11-06 1996-05-01 Roke Manor Research Improvements in or relating to ATM signal processors
EP0600683B1 (en) * 1992-12-04 2001-10-10 AT&T Corp. Packet network interface
WO1995003657A1 (en) * 1993-07-21 1995-02-02 Fujitsu Limited ATM exchange
US5598581A (en) * 1993-08-06 1997-01-28 Cisco Systems, Inc. Variable latency cut through bridge for forwarding packets in response to user's manual adjustment of variable latency threshold point while the bridge is operating
US5426635A (en) * 1993-09-08 1995-06-20 At&T Corp. Method for adaptive control of windows and rates in networks
US5528763A (en) * 1993-09-14 1996-06-18 International Business Machines Corporation System for admitting cells of packets from communication network into buffer of attachment of communication adapter
US5631897A (en) * 1993-10-01 1997-05-20 Nec America, Inc. Apparatus and method for incorporating a large number of destinations over circuit-switched wide area network connections
US5617409A (en) * 1994-01-28 1997-04-01 Digital Equipment Corporation Flow control with smooth limit setting for multiple virtual circuits
US5455826A (en) * 1994-06-28 1995-10-03 Oezveren; Cueneyt M. Method and apparatus for rate based flow control
AU710270B2 (en) * 1994-07-25 1999-09-16 Telstra Corporation Limited A method for controlling congestion in a telecommunications network
IT1278517B1 (en) * 1994-07-25 1997-11-24 Motorola Inc Process for inter-satellite transfer of data packets
US5790521A (en) * 1994-08-01 1998-08-04 The University Of Iowa Research Foundation Marking mechanism for controlling consecutive packet loss in ATM networks
US5629936A (en) * 1994-08-01 1997-05-13 University Of Iowa Research Foundation Inc. Control of consecutive packet loss in a packet buffer
US5633859A (en) * 1994-09-16 1997-05-27 The Ohio State University Method and apparatus for congestion management in computer networks using explicit rate indication
US5590122A (en) * 1994-12-22 1996-12-31 Emc Corporation Method and apparatus for reordering frames
US5867666A (en) * 1994-12-29 1999-02-02 Cisco Systems, Inc. Virtual interfaces with dynamic binding
US5793978A (en) * 1994-12-29 1998-08-11 Cisco Technology, Inc. System for routing packets by separating packets into broadcast packets and non-broadcast packets and allocating a selected communication bandwidth to the broadcast packets
US5835711A (en) * 1995-02-01 1998-11-10 International Business Machines Corporation Method and system for implementing multiple leaky bucket checkers using a hybrid synchronous/asynchronous update mechanism
US5790770A (en) * 1995-07-19 1998-08-04 Fujitsu Network Communications, Inc. Method and apparatus for reducing information loss in a communications network
US6097718A (en) 1996-01-02 2000-08-01 Cisco Technology, Inc. Snapshot routing with route aging
US6147996A (en) 1995-08-04 2000-11-14 Cisco Technology, Inc. Pipelined multiple issue packet switch
US6917966B1 (en) 1995-09-29 2005-07-12 Cisco Technology, Inc. Enhanced network services using a subnetwork of communicating processors
US6182224B1 (en) 1995-09-29 2001-01-30 Cisco Systems, Inc. Enhanced network services using a subnetwork of communicating processors
US7246148B1 (en) 1995-09-29 2007-07-17 Cisco Technology, Inc. Enhanced network services using a subnetwork of communicating processors
US6091725A (en) 1995-12-29 2000-07-18 Cisco Systems, Inc. Method for traffic management, traffic prioritization, access control, and packet forwarding in a datagram computer network
US6035105A (en) * 1996-01-02 2000-03-07 Cisco Technology, Inc. Multiple VLAN architecture system
US6219728B1 (en) 1996-04-22 2001-04-17 Nortel Networks Limited Method and apparatus for allocating shared memory resources among a plurality of queues each having a threshold value therefor
US5864539A (en) * 1996-05-06 1999-01-26 Bay Networks, Inc. Method and apparatus for a rate-based congestion control in a shared memory switch
US6308148B1 (en) 1996-05-28 2001-10-23 Cisco Technology, Inc. Network flow data export
US6243667B1 (en) 1996-05-28 2001-06-05 Cisco Systems, Inc. Network flow switching and flow data export
US6141323A (en) * 1996-06-03 2000-10-31 Whittaker Corporation Closed loop congestion control using a queue measurement system
US6212182B1 (en) 1996-06-27 2001-04-03 Cisco Technology, Inc. Combined unicast and multicast scheduling
US6434120B1 (en) 1998-08-25 2002-08-13 Cisco Technology, Inc. Autosensing LMI protocols in frame relay networks
US5802042A (en) * 1996-06-28 1998-09-01 Cisco Systems, Inc. Autosensing LMI protocols in frame relay networks
US6961341B1 (en) * 1996-07-02 2005-11-01 Microsoft Corporation Adaptive bandwidth throttling for network services
US6525761B2 (en) 1996-07-23 2003-02-25 Canon Kabushiki Kaisha Apparatus and method for controlling a camera connected to a network
JP3862321B2 (en) * 1996-07-23 2006-12-27 キヤノン株式会社 Server and control method thereof
JP3202606B2 (en) * 1996-07-23 2001-08-27 キヤノン株式会社 Imaging server and its method and medium
US6473404B1 (en) 1998-11-24 2002-10-29 Connect One, Inc. Multi-protocol telecommunications routing optimization
US6016307A (en) 1996-10-31 2000-01-18 Connect One, Inc. Multi-protocol telecommunications routing optimization
US6304546B1 (en) 1996-12-19 2001-10-16 Cisco Technology, Inc. End-to-end bidirectional keep-alive using virtual circuits
US5872997A (en) * 1997-02-14 1999-02-16 Exabyte Corporation System for dynamically determining motion and reconnect thresholds of a storage media based on the effective transfer rate
US6252851B1 (en) 1997-03-27 2001-06-26 Massachusetts Institute Of Technology Method for regulating TCP flow over heterogeneous networks
US5968189A (en) * 1997-04-08 1999-10-19 International Business Machines Corporation System of reporting errors by a hardware element of a distributed computer system
US5923840A (en) * 1997-04-08 1999-07-13 International Business Machines Corporation Method of reporting errors by a hardware element of a distributed computer system
US6356530B1 (en) 1997-05-23 2002-03-12 Cisco Technology, Inc. Next hop selection in ATM networks
US6122272A (en) * 1997-05-23 2000-09-19 Cisco Technology, Inc. Call size feedback on PNNI operation
US6862284B1 (en) 1997-06-17 2005-03-01 Cisco Technology, Inc. Format for automatic generation of unique ATM addresses used for PNNI
US6078590A (en) 1997-07-14 2000-06-20 Cisco Technology, Inc. Hierarchical routing knowledge for multicast packet routing
US6330599B1 (en) 1997-08-05 2001-12-11 Cisco Technology, Inc. Virtual interfaces with dynamic binding
US6512766B2 (en) 1997-08-22 2003-01-28 Cisco Systems, Inc. Enhanced internet packet routing lookup
US6157641A (en) * 1997-08-22 2000-12-05 Cisco Technology, Inc. Multiprotocol packet recognition and switching
US6212183B1 (en) 1997-08-22 2001-04-03 Cisco Technology, Inc. Multiple parallel packet routing lookup
US6343072B1 (en) 1997-10-01 2002-01-29 Cisco Technology, Inc. Single-chip architecture for shared-memory router
US6073199A (en) * 1997-10-06 2000-06-06 Cisco Technology, Inc. History-based bus arbitration with hidden re-arbitration during wait cycles
US7570583B2 (en) 1997-12-05 2009-08-04 Cisco Technology, Inc. Extending SONET/SDH automatic protection switching
DE19755054A1 (en) * 1997-12-11 1999-06-17 Bosch Gmbh Robert Process for the transmission of digital data
US6252855B1 (en) 1997-12-22 2001-06-26 Cisco Technology, Inc. Method and apparatus for identifying a maximum frame size to maintain delay at or below an acceptable level
US6181708B1 (en) 1997-12-30 2001-01-30 Cisco Technology, Inc. Lossless arbitration scheme and network architecture for collision based network protocols
US6111877A (en) * 1997-12-31 2000-08-29 Cisco Technology, Inc. Load sharing across flows
US6424649B1 (en) 1997-12-31 2002-07-23 Cisco Technology, Inc. Synchronous pipelined switch using serial transmission
US6853638B2 (en) 1998-04-01 2005-02-08 Cisco Technology, Inc. Route/service processor scalability via flow-based distribution of traffic
US6920112B1 (en) 1998-06-29 2005-07-19 Cisco Technology, Inc. Sampling packets for network monitoring
US6370121B1 (en) 1998-06-29 2002-04-09 Cisco Technology, Inc. Method and system for shortcut trunking of LAN bridges
US6377577B1 (en) 1998-06-30 2002-04-23 Cisco Technology, Inc. Access control list processing in hardware
US6308219B1 (en) 1998-07-31 2001-10-23 Cisco Technology, Inc. Routing table lookup implemented using M-trie having nodes duplicated in multiple memory banks
US6182147B1 (en) 1998-07-31 2001-01-30 Cisco Technology, Inc. Multicast group routing using unidirectional links
US6389506B1 (en) 1998-08-07 2002-05-14 Cisco Technology, Inc. Block mask ternary cam
US6101115A (en) * 1998-08-07 2000-08-08 Cisco Technology, Inc. CAM match line precharge
US6243749B1 (en) 1998-10-08 2001-06-05 Cisco Technology, Inc. Dynamic network address updating
US6381214B1 (en) 1998-10-09 2002-04-30 Texas Instruments Incorporated Memory-efficient leaky bucket policer for traffic management of asynchronous transfer mode data communications
US6470013B1 (en) 1998-10-13 2002-10-22 Cisco Technology, Inc. Use of enhanced Ethernet link-loop packets to automate configuration of intelligent linecards attached to a router
US6381706B1 (en) 1998-10-20 2002-04-30 Ecrix Corporation Fine granularity rewrite method and apparatus for data storage device
US6246551B1 (en) * 1998-10-20 2001-06-12 Ecrix Corporation Overscan helical scan head for non-tracking tape subsystems reading at up to 1X speed and methods for simulation of same
US6307701B1 (en) 1998-10-20 2001-10-23 Ecrix Corporation Variable speed recording method and apparatus for a magnetic tape drive
US6367047B1 (en) 1998-10-20 2002-04-02 Ecrix Multi-level error detection and correction technique for data storage recording device
US6167445A (en) * 1998-10-26 2000-12-26 Cisco Technology, Inc. Method and apparatus for defining and implementing high-level quality of service policies in computer networks
US6286052B1 (en) 1998-12-04 2001-09-04 Cisco Technology, Inc. Method and apparatus for identifying network data traffic flows and for applying quality of service treatments to the flows
US6580723B1 (en) 1998-11-02 2003-06-17 Cisco Technology, Inc. Time slotted logical ring
US7165117B1 (en) 1998-11-12 2007-01-16 Cisco Technology, Inc. Dynamic IP addressing and quality of service assurance
US7165122B1 (en) 1998-11-12 2007-01-16 Cisco Technology, Inc. Dynamic IP addressing and quality of service assurance
US6427174B1 (en) 1998-11-12 2002-07-30 Cisco Technology, Inc. Dynamic IP addressing and quality of service assurance
US6603618B1 (en) 1998-11-16 2003-08-05 Exabyte Corporation Method and system for monitoring and adjusting tape position using control data packets
US6421805B1 (en) 1998-11-16 2002-07-16 Exabyte Corporation Rogue packet detection and correction method for data storage device
US6367048B1 (en) 1998-11-16 2002-04-02 Mcauliffe Richard Method and apparatus for logically rejecting previously recorded track residue from magnetic media
US6308298B1 (en) 1998-11-16 2001-10-23 Ecrix Corporation Method of reacquiring clock synchronization on a non-tracking helical scan tape device
EP1145541B1 (en) 1998-11-24 2012-11-21 Niksun, Inc. Apparatus and method for collecting and analyzing communications data
US6442165B1 (en) 1998-12-02 2002-08-27 Cisco Technology, Inc. Load balancing between service component instances
US7616640B1 (en) 1998-12-02 2009-11-10 Cisco Technology, Inc. Load balancing between service component instances
US6868061B1 (en) * 1998-12-10 2005-03-15 Nokia Corporation System and method for pre-filtering low priority packets at network nodes in a network service class utilizing a priority-based quality of service
US6917617B2 (en) * 1998-12-16 2005-07-12 Cisco Technology, Inc. Use of precedence bits for quality of service
US6643260B1 (en) 1998-12-18 2003-11-04 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
AU2417799A (en) 1998-12-30 2000-07-24 Nokia Networks Oy Packet transmission method and apparatus
US6771642B1 (en) 1999-01-08 2004-08-03 Cisco Technology, Inc. Method and apparatus for scheduling packets in a packet switch
US6587468B1 (en) 1999-02-10 2003-07-01 Cisco Technology, Inc. Reply to sender DHCP option
US6490298B1 (en) 1999-02-26 2002-12-03 Harmonic Inc. Apparatus and methods of multiplexing data to a communication channel
US6252848B1 (en) * 1999-03-22 2001-06-26 Pluris, Inc. System performance in a data network through queue management based on ingress rate monitoring
US7065762B1 (en) 1999-03-22 2006-06-20 Cisco Technology, Inc. Method, apparatus and computer program product for borrowed-virtual-time scheduling
WO2000057606A1 (en) * 1999-03-23 2000-09-28 Telefonaktiebolaget Lm Ericsson (Publ) Discarding traffic in IP networks to optimize the quality of speech signals
US6757791B1 (en) 1999-03-30 2004-06-29 Cisco Technology, Inc. Method and apparatus for reordering packet data units in storage queues for reading and writing memory
US6603772B1 (en) 1999-03-31 2003-08-05 Cisco Technology, Inc. Multicast routing with multicast virtual output queues and shortest queue first allocation
US6760331B1 (en) 1999-03-31 2004-07-06 Cisco Technology, Inc. Multicast routing with nearest queue first allocation and dynamic and static vector quantization
US6680906B1 (en) 1999-03-31 2004-01-20 Cisco Technology, Inc. Regulating packet traffic in an integrated services network
US6798746B1 (en) 1999-12-18 2004-09-28 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
US6364234B1 (en) 2000-03-10 2002-04-02 Michael Donald Langiano Tape loop/slack prevention method and apparatus for tape drive
US6624960B1 (en) 2000-03-10 2003-09-23 Exabyte Corporation Current sensing drum/cleaning wheel positioning method and apparatus for magnetic storage system
US6977895B1 (en) * 2000-03-23 2005-12-20 Cisco Technology, Inc. Apparatus and method for rate-based polling of input interface queues in networking devices
US6954429B2 (en) * 2000-04-05 2005-10-11 Dyband Corporation Bandwidth control system
US6850980B1 (en) 2000-06-16 2005-02-01 Cisco Technology, Inc. Content routing service protocol
US6771665B1 (en) 2000-08-31 2004-08-03 Cisco Technology, Inc. Matching of RADIUS request and response packets during high traffic volume
US7411981B1 (en) 2000-08-31 2008-08-12 Cisco Technology, Inc. Matching of radius request and response packets during high traffic volume
US7095741B1 (en) * 2000-12-20 2006-08-22 Cisco Technology, Inc. Port isolation for restricting traffic flow on layer 2 switches
JP5048184B2 (en) * 2001-01-26 2012-10-17 富士通株式会社 Transmission rate monitoring apparatus and transmission rate monitoring method
EP1278342B1 (en) * 2001-01-26 2005-03-30 Nec Corporation Method and system for controlling communication network and router used in the network
US20020141420A1 (en) * 2001-03-30 2002-10-03 Sugiarto Basuki Afandi Throttling control system and method
US7295516B1 (en) * 2001-11-13 2007-11-13 Verizon Services Corp. Early traffic regulation techniques to protect against network flooding
US7076543B1 (en) 2002-02-13 2006-07-11 Cisco Technology, Inc. Method and apparatus for collecting, aggregating and monitoring network management information
US20030210649A1 (en) * 2002-05-03 2003-11-13 Bondi Andre B. Managing network loading by control of retry processing at proximate switches associated with unresponsive targets
US7085236B2 (en) * 2002-05-20 2006-08-01 University Of Massachusetts, Amherst Active queue management for differentiated services
US7161904B2 (en) * 2002-06-04 2007-01-09 Fortinet, Inc. System and method for hierarchical metering in a virtual router based network switch
US7386632B1 (en) 2002-06-07 2008-06-10 Cisco Technology, Inc. Dynamic IP addressing and quality of service assurance
JP3731665B2 (en) * 2003-03-27 2006-01-05 ソニー株式会社 Data communication system, information processing apparatus and information processing method, recording medium, and program
US20050060423A1 (en) * 2003-09-15 2005-03-17 Sachin Garg Congestion management in telecommunications networks
US7369542B1 (en) * 2004-03-04 2008-05-06 At&T Corp. Method and apparatus for routing data
US20060031565A1 (en) * 2004-07-16 2006-02-09 Sundar Iyer High speed packet-buffering system
US20080117918A1 (en) * 2004-10-22 2008-05-22 Satoshi Kobayashi Relaying Apparatus and Network System
US7397801B2 (en) 2005-04-08 2008-07-08 Microsoft Corporation Method and apparatus to determine whether a network is quality of service enabled
US7558200B2 (en) * 2005-09-01 2009-07-07 Microsoft Corporation Router congestion management
US8045473B2 (en) * 2005-11-28 2011-10-25 Cisco Technology, Inc. Tailored relief for congestion on application servers for real time communications
US7710959B2 (en) * 2006-08-29 2010-05-04 Cisco Technology, Inc. Private VLAN edge across multiple switch modules
US9036468B1 (en) * 2009-07-14 2015-05-19 Viasat, Inc. Flow congestion management
JP5314532B2 (en) * 2009-08-19 2013-10-16 日本電信電話株式会社 Network system and packet transfer method for forwarding using classification based on the burstiness of flows, and program therefor
US8619586B2 (en) * 2009-10-15 2013-12-31 Cisco Technology, Inc. System and method for providing troubleshooting in a network environment
US9001663B2 (en) * 2010-02-26 2015-04-07 Microsoft Corporation Communication transport optimized for data center environment
US9219378B2 (en) 2010-11-01 2015-12-22 Qualcomm Incorporated Wireless charging of devices
US8743885B2 (en) 2011-05-03 2014-06-03 Cisco Technology, Inc. Mobile service routing in a network environment
US9407540B2 (en) 2013-09-06 2016-08-02 Cisco Technology, Inc. Distributed service chaining in a network environment
US9794379B2 (en) 2013-04-26 2017-10-17 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
US9491094B2 (en) 2013-09-25 2016-11-08 Cisco Technology, Inc. Path optimization in distributed service chains in a network environment
US20150085870A1 (en) * 2013-09-25 2015-03-26 Cisco Technology, Inc. Co-operative load sharing and redundancy in distributed service chains in a network environment
US9300585B2 (en) 2013-11-15 2016-03-29 Cisco Technology, Inc. Shortening of service paths in service chains in a communications network
US9479443B2 (en) 2014-05-16 2016-10-25 Cisco Technology, Inc. System and method for transporting information to services in a network environment
US9379931B2 (en) 2014-05-16 2016-06-28 Cisco Technology, Inc. System and method for transporting information to services in a network environment
US10417025B2 (en) 2014-11-18 2019-09-17 Cisco Technology, Inc. System and method to chain distributed applications in a network environment
USRE48131E1 (en) 2014-12-11 2020-07-28 Cisco Technology, Inc. Metadata augmentation in a service function chain
US9660909B2 (en) 2014-12-11 2017-05-23 Cisco Technology, Inc. Network service header metadata for load balancing
US9762402B2 (en) 2015-05-20 2017-09-12 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
US9888127B2 (en) 2015-07-30 2018-02-06 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for adjusting the use of virtual resources providing communication services based on load
US10277736B2 (en) 2015-07-30 2019-04-30 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for determining whether to handle a request for communication services by a physical telephone number mapping service or a virtual telephone number mapping service
US9866521B2 (en) 2015-07-30 2018-01-09 AT&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for determining whether to forward requests from a physical telephone number mapping service server to a virtual telephone number mapping service server
US9851999B2 (en) 2015-07-30 2017-12-26 At&T Intellectual Property I, L.P. Methods, systems, and computer readable storage devices for handling virtualization of a physical telephone number mapping service
US10063468B2 (en) 2016-01-15 2018-08-28 Cisco Technology, Inc. Leaking routes in a service chain
US11044203B2 (en) 2016-01-19 2021-06-22 Cisco Technology, Inc. System and method for hosting mobile packet core and value-added services using a software defined network and service chains
US10187306B2 (en) 2016-03-24 2019-01-22 Cisco Technology, Inc. System and method for improved service chaining
US10931793B2 (en) 2016-04-26 2021-02-23 Cisco Technology, Inc. System and method for automated rendering of service chaining
US10419550B2 (en) 2016-07-06 2019-09-17 Cisco Technology, Inc. Automatic service function validation in a virtual network environment
US10320664B2 (en) 2016-07-21 2019-06-11 Cisco Technology, Inc. Cloud overlay for operations administration and management
US10218616B2 (en) 2016-07-21 2019-02-26 Cisco Technology, Inc. Link selection for communication with a service function cluster
US10225270B2 (en) 2016-08-02 2019-03-05 Cisco Technology, Inc. Steering of cloned traffic in a service function chain
US10218593B2 (en) 2016-08-23 2019-02-26 Cisco Technology, Inc. Identifying sources of packet drops in a service function chain environment
US10361969B2 (en) 2016-08-30 2019-07-23 Cisco Technology, Inc. System and method for managing chained services in a network environment
US10225187B2 (en) 2017-03-22 2019-03-05 Cisco Technology, Inc. System and method for providing a bit indexed service chain
US10178646B2 (en) 2017-04-12 2019-01-08 Cisco Technology, Inc. System and method to facilitate slice management in a network environment
US10884807B2 (en) 2017-04-12 2021-01-05 Cisco Technology, Inc. Serverless computing and task scheduling
US10257033B2 (en) 2017-04-12 2019-04-09 Cisco Technology, Inc. Virtualized network functions and service chaining in serverless computing infrastructure
US10333855B2 (en) 2017-04-19 2019-06-25 Cisco Technology, Inc. Latency reduction in service function paths
US10554689B2 (en) 2017-04-28 2020-02-04 Cisco Technology, Inc. Secure communication session resumption in a service function chain
US10735275B2 (en) 2017-06-16 2020-08-04 Cisco Technology, Inc. Releasing and retaining resources for use in a NFV environment
US10798187B2 (en) 2017-06-19 2020-10-06 Cisco Technology, Inc. Secure service chaining
US10397271B2 (en) 2017-07-11 2019-08-27 Cisco Technology, Inc. Distributed denial of service mitigation for web conferencing
US10673698B2 (en) 2017-07-21 2020-06-02 Cisco Technology, Inc. Service function chain optimization using live testing
US11063856B2 (en) 2017-08-24 2021-07-13 Cisco Technology, Inc. Virtual network function monitoring in a network function virtualization deployment
US10791065B2 (en) 2017-09-19 2020-09-29 Cisco Technology, Inc. Systems and methods for providing container attributes as part of OAM techniques
US11018981B2 (en) 2017-10-13 2021-05-25 Cisco Technology, Inc. System and method for replication container performance and policy validation using real time network traffic
US10541893B2 (en) 2017-10-25 2020-01-21 Cisco Technology, Inc. System and method for obtaining micro-service telemetry data
US10666612B2 (en) 2018-06-06 2020-05-26 Cisco Technology, Inc. Service chains for inter-cloud traffic

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4484326A (en) * 1982-11-04 1984-11-20 At&T Bell Laboratories Packet load monitoring by trunk controllers
FR2538984A1 (en) * 1982-12-30 1984-07-06 Devault Michel SWITCH FOR A MULTI-RATE DIGITAL NETWORK WITH AN ASYNCHRONOUS TIME SWITCH ADAPTED TO VIDEO SWITCHING
US4472801A (en) * 1983-03-28 1984-09-18 At&T Bell Laboratories Distributed prioritized concentrator
US4616359A (en) * 1983-12-19 1986-10-07 At&T Bell Laboratories Adaptive preferential flow control for packet switching system
JPH0681156B2 (en) * 1984-02-23 1994-10-12 中部電力株式会社 Congestion control method in packet switching network
US4630261A (en) * 1984-07-30 1986-12-16 International Business Machines Corp. Integrated buffer management and signaling technique
US4630259A (en) * 1984-11-14 1986-12-16 At&T Bell Laboratories Lockup detection and recovery in a packet switching network
US4646287A (en) * 1984-12-07 1987-02-24 At&T Bell Laboratories Idle period signalling in a packet switching system
US4703477A (en) * 1986-02-28 1987-10-27 American Telephone And Telegraph Company At&T Bell Laboratories Packet information field data format
US4679190A (en) * 1986-04-28 1987-07-07 International Business Machines Corporation Distributed voice-data switching on multi-stage interconnection networks

Also Published As

Publication number Publication date
ES2033329T3 (en) 1993-03-16
JPS63176046A (en) 1988-07-20
JPH0657017B2 (en) 1994-07-27
EP0275679A1 (en) 1988-07-27
EP0275679B1 (en) 1992-07-29
AU592030B2 (en) 1989-12-21
AU8314687A (en) 1988-07-07
KR910008759B1 (en) 1991-10-19
DE3780800D1 (en) 1992-09-03
KR880008568A (en) 1988-08-31
DE3780800T2 (en) 1993-03-04
US4769811A (en) 1988-09-06

Similar Documents

Publication Publication Date Title
CA1279392C (en) Packet switching system arranged for congestion control
US4769810A (en) Packet switching system arranged for congestion control through bandwidth management
US5629928A (en) Dynamic fair queuing to support best effort traffic in an ATM network
EP0763915B1 (en) Packet transfer device and method adaptive to a large number of input ports
US7573827B2 (en) Method and apparatus for detecting network congestion
EP1670194B1 (en) Service guarantee and congestion control in high speed networks
JP2753468B2 (en) Digital communication controller
JP3525656B2 (en) Packet switch and congestion notification method
EP0487235A2 (en) Bandwidth and congestion management in accessing broadband ISDN networks
US6587437B1 (en) ER information acceleration in ABR traffic
NZ281675A (en) Broadband switching network: incoming asynchronous data packets buffered and applied to network by dynamic bandwidth controller
US7016302B1 (en) Apparatus and method for controlling queuing of data at a node on a network
JP2002543740A (en) Method and apparatus for managing traffic in an ATM network
US7218608B1 (en) Random early detection algorithm using an indicator bit to detect congestion in a computer network
KR20030054545A (en) Method of Controlling TCP Congestion
Perros A literature review of call admission algorithms
KR100482687B1 (en) Congestion Control Apparatus And Method For UBR Service In ATM Switch
GB2344974A (en) Fair packet discard in networks
JP3132719B2 (en) Usage parameter control circuit
JP3917830B2 (en) Rate control device
JP3632655B2 (en) ATM switching equipment
Guizani An effective congestion control scheme for ATM networks
JP2003023457A (en) Arrival rate detector
JP2003023450A (en) Rate controller
WO2001052075A1 (en) Simplified packet discard in atm switch

Legal Events

Date Code Title Description
MKEX Expiry