WO1997004544A1 - Network switch utilizing centralized and partitioned memory for connection topology information storage


Info

Publication number
WO1997004544A1
Authority
WO
WIPO (PCT)
Prior art keywords
output
input
switch
point
input port
Application number
PCT/US1996/011932
Other languages
French (fr)
Inventor
Thomas A. Manning
Stephen A. Caldara
Stephen A. Hauser
Alan D. Sherman
Original Assignee
Fujitsu Network Communications, Inc
Fujitsu Limited
Application filed by Fujitsu Network Communications, Inc. and Fujitsu Limited
Priority to JP9506873A (published as JPH11510006A)
Priority to PCT/US1996/011932
Priority to AU65017/96A (published as AU6501796A)
Publication of WO1997004544A1


Classifications

    • H04L47/18 End to end (flow control; congestion control)
    • G06F15/17375 One dimensional, e.g. linear array, ring (indirect interconnection networks)
    • H04L12/4608 LAN interconnection over ATM networks
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L12/5602 Bandwidth control in ATM Networks, e.g. leaky bucket
    • H04L41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L47/10 Flow control; Congestion control
    • H04L47/11 Identifying congestion
    • H04L47/266 Stopping or restarting the source, e.g. X-on or X-off
    • H04L47/29 Flow control; Congestion control using a combination of thresholds
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L47/621 Individual queue per connection or flow, e.g. per VC
    • H04L49/106 ATM switching elements using space switching, e.g. crossbar or matrix
    • H04L49/107 ATM switching elements using shared medium
    • H04L49/153 ATM switching fabrics having parallel switch planes
    • H04L49/1553 Interconnection of ATM switching modules, e.g. ATM switching fabrics
    • H04L49/1576 Crossbar or matrix (interconnection of ATM switching modules)
    • H04L49/203 ATM switching fabrics with multicast or broadcast capabilities
    • H04L49/253 Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/255 Control mechanisms for ATM switching fabrics
    • H04L49/256 Routing or path finding in ATM switching fabrics
    • H04L49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04L49/309 Header conversion, routing tables or routing tags
    • H04L49/455 Provisions for supporting expansion in ATM switches
    • H04L49/552 Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • H04L49/555 Error detection
    • H04L69/324 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
    • H04Q11/0478 Provisions for broadband connections
    • H04J3/0682 Clock or time synchronisation in a network by delay compensation, e.g. by compensation of propagation delay or variations thereof, by ranging
    • H04J3/0685 Clock or time synchronisation in a node; Intranode synchronisation
    • H04L2012/5614 User Network Interface
    • H04L2012/5616 Terminal equipment, e.g. codecs, synch.
    • H04L2012/5627 Fault tolerance and recovery
    • H04L2012/5628 Testing
    • H04L2012/5629 Admission control
    • H04L2012/5631 Resource management and allocation
    • H04L2012/5632 Bandwidth allocation
    • H04L2012/5634 In-call negotiation
    • H04L2012/5635 Backpressure, e.g. for ABR
    • H04L2012/5642 Multicast/broadcast/point-multipoint, e.g. VOD
    • H04L2012/5643 Concast/multipoint-to-point
    • H04L2012/5647 Cell loss
    • H04L2012/5648 Packet discarding, e.g. EPD, PTD
    • H04L2012/5649 Cell delay or jitter
    • H04L2012/5651 Priority, marking, classes
    • H04L2012/5652 Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H04L2012/5672 Multiplexing, e.g. coding, scrambling
    • H04L2012/5679 Arbitration or scheduling
    • H04L2012/5681 Buffer or queue management
    • H04L2012/5682 Threshold; Watermark
    • H04L2012/5683 Buffer or queue management for avoiding head of line blocking
    • H04L2012/5685 Addressing issues
    • H04L7/046 Speed or phase control by synchronisation signals using special codes as synchronising signal using a dotting sequence

Definitions

  • The input port queues 16 contain identifiers of other queues which buffer, or temporarily store, cells contending for access to a common connection.
  • An input port queue 16 may alternatively be referred to as a scheduling list, since such a queue contains a list of queues to be scheduled for access to a connection.
  • Alternatively, the input port queues 16 may themselves contain (or point to) cells for the purpose of buffering such cells. It will therefore be appreciated that the term input port queue 16 as used hereinafter refers to either a queue containing other queues or a queue containing cells. Data and control signals may be transmitted from an input port queue 16 to a particular one of the output port queues 22 in the case of a point to point connection, or to a selected group of output port queues 22 in the case of a point to multipoint connection.
  • A control module 20 is coupled between each of the TSPPs 14₁ - 14ₙ and the FSPPs 18₁ - 18ₒ, as shown.
  • The control module 20 includes a central switch fabric 28 which permits the flow of control signals and data between the TSPPs 14₁ - 14ₙ and the FSPPs 18₁ - 18ₒ.
  • The switch fabric 28 includes a plurality of input side translators 24₁ - 24ₙ/ₓ (referred to generally as input side translators 24), where x is a predetermined number, such as four. Stated differently, each input side translator 24 is associated with between one and four TSPPs 14.
  • The input side translators 24 connect the respective TSPP(s) 14 to a bandwidth arbiter 30 of the switch fabric and perform translations used to implement internal switch flow control for point to multipoint and point to point connections.
  • The interface between each input side translator 24 and the bandwidth arbiter 30 is provided by serial signal lines.
  • Associated with each of the input side translators 24₁ - 24ₙ/ₓ is a connection topology memory 26₁ - 26ₙ/ₓ, respectively (referred to generally as input side memories 26).
  • Each input side memory 26 includes a look-up table containing connection topology information and thus may be referred to alternatively as a look-up table memory 26.
  • Specifically, each input side memory 26 contains a look-up table of multiqueue numbers (MQNs), forward broadcast numbers (FBCNs), input port queue numbers and bit vectors, as will be described.
  • The switch fabric 28 likewise includes a plurality of output side translators 40₁ - 40ₒ/ₓ (referred to generally as output side translators 40); each output side translator 40 is associated with between one and four FSPPs 18.
  • The output side translators 40 are connected to the bandwidth arbiter 30 via serial signal lines and perform translations for internal switch flow control.
  • A connection topology memory 44₁ - 44ₒ/ₓ (referred to generally as output side memories 44) is associated with each output side translator 40₁ - 40ₒ/ₓ, respectively.
  • Each of the output side memories 44 includes a look-up table containing connection topology information and thus may be referred to alternatively as a look-up table memory 44.
  • Specifically, each output side memory 44 contains a look-up table of input port queue numbers and reverse broadcast numbers (RBCNs), as will be described.
  • The bandwidth arbiter 30 controls data flow within the switch 10 and includes a probe crossbar 32, an XOFF crossbar 36 and an XON crossbar 34, each of which is an NxN switch fabric, such as a cross point switch fabric.
  • A request message, transmitted through the probe crossbar 32, is used to query whether or not sufficient space is available at the destination output port queue, or queues 22, to enqueue a cell.
  • The request message is considered a "forward" control signal since its direction is from a TSPP 14 to one or more FSPPs 18 (i.e., the same direction as data).
  • A two bit control signal flows in the reverse direction (from one or more FSPPs 18 to a TSPP 14) through the XOFF crossbar 36 and responds to the request message query by indicating whether or not the destination output port queue, or queues 22, are presently capable of accepting data cells and thus, whether the cell can be transmitted.
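The probe/XOFF exchange amounts to a small request/response protocol. The C sketch below models it under stated assumptions: the text specifies only that a forward request message asks whether queue space is available and that a two-bit reverse signal answers it, so the queue depth, the reply encoding and all names here are illustrative.

```c
#include <stdio.h>

/* Hypothetical model of the forward probe / reverse XOFF exchange.
 * A TSPP probes the destination FSPP(s) through the probe crossbar 32;
 * each FSPP answers through the XOFF crossbar 36, indicating whether
 * its output port queue can currently accept the cell. */

#define QUEUE_DEPTH 64          /* assumed per-queue capacity */

typedef struct {
    int occupancy;              /* cells currently enqueued */
} OutputPortQueue;

/* Two-bit reverse answer; this particular encoding is an assumption. */
typedef enum { XOFF_ACCEPT = 0x0, XOFF_REJECT = 0x1 } XoffReply;

/* Forward probe: "is there room to enqueue one cell?" */
static XoffReply probe(const OutputPortQueue *q)
{
    return (q->occupancy < QUEUE_DEPTH) ? XOFF_ACCEPT : XOFF_REJECT;
}

int main(void)
{
    OutputPortQueue q = { 63 };

    if (probe(&q) == XOFF_ACCEPT) {
        q.occupancy++;          /* cell then moves through the data crossbar */
        puts("cell enqueued at destination output port queue");
    } else {
        puts("destination full: cell held at the input port queue");
    }
    return 0;
}
```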
  • A data crossbar 48 of the switch fabric 28 permits transmission of data cells between the TSPPs 14 and the FSPPs 18. To this end, the data crossbar 48 is coupled between the TSPPs 14 and FSPPs 18.
  • A microprocessor 50 within the control module 20 provides various control functionality. For example, the microprocessor 50 executes call control software when switch connections are set up, in order to load the connection topology look-up tables into the input side memories 26 and output side memories 44.
  • Each port processor (TSPP 14 or FSPP 18) resides on a respective I/O board, and the control module 20 is implemented on a central board.
  • The input side translators 24 and output side translators 40 are implemented on an ASIC, which may contain one or more of the input and output side translators. Likewise, the bandwidth arbiter 30 is implemented on an ASIC.
  • Each input side memory 26 and output side memory 44 is provided by a dedicated SRAM device. In some embodiments, the input side translators 24 and output side translators 40 are incorporated into the same device.
  • Both the input side memories 26 and output side memories 44 contain similar connection topology look-up tables. Specifically, each of the memories 26, 44 contains a connection topology look-up table with entries of multiqueue numbers (MQNs), forward broadcast numbers (FBCNs), bit vectors, input port queue numbers and reverse broadcast numbers (RBCNs).
  • An MQN is a fourteen bit digital word which specifies one or more output port queues 22 associated with a particular FSPP 18.
  • In point to point connections, an MQN for the destination FSPP 18 is retrieved from the input side memory 26 by the input side translator 24 and is sent to the destination FSPP 18 via the bandwidth arbiter 30 and the output side translator 40.
  • An FBCN is a seventeen bit digital word which points to a list of one or more MQNs.
  • In point to multipoint connections, an FBCN is retrieved from the input side connection topology memory 26 by the respective input side translator 24 and is sent to the output side translator(s) 40 associated with the destination FSPP(s) 18. The output side translator(s) 40 then use the FBCN to retrieve an MQN for each destination FSPP 18.
  • A bit vector is a sixteen bit digital word which specifies the destination FSPP(s) 18, with the number of bits corresponding to the number of output ports.
  • A bit vector is retrieved from the input side look-up table memory 26 by the respective input side translator 24 in both point to point and point to multipoint connections; however, only one bit of the bit vector is set for point to point connections.
  • Bit vectors are not used for transmissions in the reverse direction. Rather, reverse transmissions are directed either to a single TSPP 14 or to all TSPPs 14. In the case of a reverse transmission to all TSPPs, an RBCN, which is a seventeen bit digital word pointing to a list of input port queue numbers, is retrieved from an output side memory 44. Alternatively, in the case of a reverse transmission to one TSPP 14, an input port queue number and the input port number associated with the destination TSPP 14 are retrieved from the output side memory 44.
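A reverse-direction look-up therefore yields one of two things, selected by a tag in the entry. The following sketch models that choice; the seventeen-bit RBCN width comes from the text, while the struct layout and names are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical output side memory entry used for reverse transmissions.
 * A reverse signal goes either to one TSPP (input port number plus input
 * port queue number) or to all TSPPs (a 17-bit RBCN pointing to a list
 * of input port queue numbers). */
typedef struct {
    uint8_t  to_all_tspps;      /* tag: broadcast to all TSPPs? */
    uint32_t rbcn;              /* 17-bit reverse broadcast number */
    uint8_t  input_port;        /* destination TSPP number */
    uint16_t input_queue;       /* input port queue number on that TSPP */
} ReverseEntry;

static void route_reverse(const ReverseEntry *e)
{
    if (e->to_all_tspps)
        printf("broadcast: RBCN 0x%05x -> list of input port queue numbers\n",
               (unsigned)(e->rbcn & 0x1FFFF));
    else
        printf("unicast: TSPP %u, input port queue %u\n",
               (unsigned)e->input_port, (unsigned)e->input_queue);
}

int main(void)
{
    ReverseEntry single = { 0, 0, 3, 1201 };    /* one TSPP */
    ReverseEntry all    = { 1, 0x42, 0, 0 };    /* every TSPP */
    route_reverse(&single);
    route_reverse(&all);
    return 0;
}
```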
  • Table 1 summarizes the connection topology look-up operations performed by the input and output side translators 24, 40 for both forward probe control signal transmissions and reverse XON and XOFF control signal transmissions in point to point and point to multipoint connections.

Table 1
1. Input side translator, forward: input port queue # -> FBCN or MQN
2. Input side translator, forward: input port queue # -> bit vector
3. Output side translator, forward (point to multipoint only): FBCN -> MQN
4. Output side translator, reverse: output port queue # -> input port queue # and input port #, or RBCN
5. Input side translator, reverse (point to multipoint only): RBCN -> input port queue #
  • The connection topology memory (including the discrete input side memories 26 associated with respective input side translators 24 and the output side memories 44 associated with respective output side translators 40) is centralized at the switch fabric 28 of the control module 20.
  • This look-up memory centralization facilitates switch port scaling (i.e., changes in the number of ports of the switch).
  • In particular, I/O port capacity can be increased by modifying the centralized connection topology information contained in the switch fabric memories 26, 44.
  • A further advantage of the switch 10 is provided by the association of a predetermined number of ports with each translator 24, 40 and the partitioning of the connection topology memory, such that each input side translator 24 and output side translator 40 has a dedicated connection topology memory 26, 44, respectively, associated therewith.
  • In the illustrative embodiment, each input side translator 24/input side memory 26 combination is associated with up to four TSPPs 14 and each output side translator 40/output side memory 44 combination is associated with up to four FSPPs 18.
  • This arrangement serves to distribute the bandwidth requirement associated with connection topology look-up operations.
  • Additionally, switch modularity is facilitated since the number of ports can be readily increased by adding translator/memory pairs to the switch fabric 28. For example, input/output port quantity can be increased by adding additional input side translators 24 and associated input side memories 26.
  • Referring to the flow diagram of Fig. 2, a TSPP 14 sends the number of the associated input port queue 16 containing data (or containing queues which contain data) to the attached input side translator 24 in step 74.
  • The input side translator 24 then accesses the corresponding input side memory 26.
  • In step 78, the input port queue number is used to address the input side memory 26 to retrieve a bit vector and either an FBCN or an MQN, depending on whether the cell connection is point to point or point to multipoint.
  • In the case of a point to point connection, the input side translator 24 retrieves an MQN and, in the case of a point to multipoint connection, the input side translator 24 retrieves an FBCN.
  • The retrieved MQN or FBCN is sent by the input side translator 24 to the bandwidth arbiter 30.
  • The bandwidth arbiter 30 sends the MQN or FBCN to the output side translator(s) 40 associated with the FSPPs 18 specified by the bit vector.
  • In point to multipoint connections, the receiving output side translator(s) 40 perform the further look-up of retrieving an MQN for each attached FSPP 18 in response to the received FBCN.
  • The MQNs retrieved by the output side translator(s) 40 are then sent to the respective FSPPs 18 in step 98. Note that, in the event that an MQN was retrieved by the input side translator 24 in step 78 (i.e., a point to point connection), that MQN is sent to the respective FSPP in step 98. Stated differently, there is no output side connection topology look-up operation for point to point connections. Finally, in step 102, the destination FSPP(s) 18 look up the associated output port queues 22 indicated by the received MQN before the process is terminated in step 106.
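The forward flow of Fig. 2 can be condensed into code. Only the sequence of operations (queue number in, bit vector plus MQN or FBCN out of the input side memory, an extra FBCN-to-MQN look-up at the output side for multipoint connections) follows the text; fbcn_to_mqn is a hypothetical stand-in for the output side SRAM read.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_FSPPS 16            /* one bit vector bit per output port */

/* Assumed shape of the step-78 result from the input side memory. */
typedef struct {
    uint16_t bit_vector;        /* destination FSPP(s); one bit for pt-to-pt */
    bool     is_fbcn;           /* tag bit: FBCN (multipoint) or MQN */
    uint32_t mqn_or_fbcn;       /* 14-bit MQN or 17-bit FBCN */
} InputLookup;

/* Hypothetical stand-in for the output side FBCN -> MQN SRAM read. */
static uint16_t fbcn_to_mqn(uint32_t fbcn, int fspp)
{
    return (uint16_t)((fbcn + (uint32_t)fspp) & 0x3FFF);
}

static void resolve(const InputLookup *lu)
{
    for (int fspp = 0; fspp < NUM_FSPPS; fspp++) {
        if (!(lu->bit_vector & (1u << fspp)))
            continue;
        /* pt-to-pt: the MQN is forwarded as-is (no output side look-up);
         * pt-to-mpt: the output side translator does one more look-up. */
        uint16_t mqn = lu->is_fbcn ? fbcn_to_mqn(lu->mqn_or_fbcn, fspp)
                                   : (uint16_t)lu->mqn_or_fbcn;
        printf("FSPP %d receives MQN 0x%04x\n", fspp, (unsigned)mqn);
    }
}

int main(void)
{
    InputLookup pt_to_pt = { 1u << 5, false, 0x0123 };  /* one FSPP */
    InputLookup multipt  = { 0x00C1, true, 0x1A2B };    /* FSPPs 0, 6, 7 */
    resolve(&pt_to_pt);
    resolve(&multipt);
    return 0;
}
```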
  • Referring to Fig. 3, the address structures associated with the connection topology memories 26, 44 are shown in conjunction with an illustrative input side memory 26.
  • The connection topology memory 26 is a twenty-two bit wide SRAM segregated into three areas 150, 158 and 160.
  • The first memory area 150 (referred to as the FBCN:MQN look-up table area) is accessed to perform output side look-up for forward point to multipoint transmissions (i.e., during the look-up operation numbered 3 in Table 1).
  • The FBCN:MQN look-up table area 150 contains entries correlating each FBCN to a list of MQNs.
  • The second memory area 158 (referred to as the RBCN:Input Port Queue # look-up table area) is accessed to perform input side look-up for reverse point to multipoint transmissions (i.e., during the look-up operation numbered 5 in Table 1).
  • The RBCN:Input Port Queue # look-up table area 158 contains entries correlating each RBCN to a list of input port queue numbers.
  • The third memory area 160 (referred to as the queue area) is accessed to perform input side look-up operations for forward and reverse point to point and point to multipoint connections (i.e., during the look-up operations numbered 1, 2 and 4 in Table 1).
  • The queue area 160 contains three types of entries: (1) entries correlating an input port queue number to an FBCN or an MQN; (2) entries correlating an input port queue number to a bit vector; and (3) entries correlating an output port queue number to input port queue number(s) and the input port number, or an RBCN.
  • Each FBCN/MQN entry contains a bit specifying whether the entry is an FBCN or an MQN (i.e., whether the connection is point to multipoint or point to point, respectively).
  • The bit vector is a sixteen bit entry specifying the destination FSPP(s) 18. Thus, in the case of a point to point connection, only one bit of the bit vector is set.
  • Each input port queue number/RBCN entry contains a bit specifying whether the entry is an input port queue number and input port number or an RBCN (i.e., whether the connection is point to point or point to multipoint).
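These tag bits suggest a tagged-entry decoding of the twenty-two bit SRAM word. The sketch below shows the two tagged entry kinds (the bit vector entry, kind 2, is a plain sixteen-bit word and needs no tag); the widths follow the text, but the bit positions and field packing are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* The SRAM is twenty-two bits wide; an entry is modelled in the low 22
 * bits of a uint32_t.  The tag bit position and field packing below are
 * assumptions; only the widths (14-bit MQN, 17-bit FBCN/RBCN) and the
 * existence of a tag bit come from the text. */
#define ENTRY_TAG (1u << 21)

typedef uint32_t QueueAreaEntry;

/* Entry kind 1: input port queue # -> FBCN (tag set) or MQN (tag clear). */
static void decode_forward(QueueAreaEntry e)
{
    if (e & ENTRY_TAG)
        printf("point to multipoint: FBCN 0x%05x\n", (unsigned)(e & 0x1FFFF));
    else
        printf("point to point: MQN 0x%04x\n", (unsigned)(e & 0x3FFF));
}

/* Entry kind 3: output port queue # -> RBCN (tag set), or input port
 * queue # plus input port # (tag clear). */
static void decode_reverse(QueueAreaEntry e)
{
    if (e & ENTRY_TAG)
        printf("reverse multipoint: RBCN 0x%05x\n", (unsigned)(e & 0x1FFFF));
    else
        printf("reverse unicast: queue %u on input port %u\n",
               (unsigned)(e & 0x3FFF), (unsigned)((e >> 14) & 0x3));
}

int main(void)
{
    decode_forward(0x0123);              /* MQN entry */
    decode_forward(ENTRY_TAG | 0x4567);  /* FBCN entry */
    decode_reverse((2u << 14) | 1201);   /* queue 1201 on input port 2 */
    return 0;
}
```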
  • Connection topology look-up operations are achieved using the port processor (TSPP 14 or FSPP 18) number and the corresponding queue number to form the memory addresses.
  • The memory address used to look up an MQN in response to an FBCN is labelled 168, in which the two least significant bits identify the receiving FSPP 18.
  • The most significant seventeen bits of address 168 are given by an FBCN offset register value minus the FBCN.
  • The FBCN offset corresponds to the end of the memory 26 (i.e., the last memory location).
  • The address used to look up an input port queue number in response to an RBCN is labelled 170 and includes the receiving TSPP identifier as the two least significant bits.
  • The most significant seventeen bits of address 170 are given by an RBCN offset register value plus the RBCN.
  • The RBCN offset corresponds to the end of the queue area 160.
  • The RBCN offset is programmable, providing a trade-off between the size of the queue area 160 and the size of the BCN look-up table area. This allows flexibility in choosing between the number of connections and the percentage of connections which are multipoint. Additionally, the look-up area for the FBCN grows down within the memory, whereas the look-up area for the RBCN grows up, so that the portion of memory dedicated to point to multipoint information as compared to multipoint to point information does not have to be established at machine initialization time.
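In other words, the FBCN table grows down from the top of the SRAM while the RBCN table grows up from the top of the queue area, and the receiving port processor occupies the two low address bits in both cases. A sketch of the address arithmetic, with assumed offset register values:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed offset register values: the FBCN offset points at the end
 * (top) of memory 26 and the RBCN offset at the end of the queue
 * area 160, per the text. */
#define FBCN_OFFSET 0x1FFFFu
#define RBCN_OFFSET 0x08000u

/* Address 168: MQN look-up from an FBCN.  High seventeen bits are
 * FBCN_OFFSET - FBCN (the FBCN table grows down); low two bits select
 * the receiving FSPP. */
static uint32_t fbcn_addr(uint32_t fbcn, unsigned fspp)
{
    return ((FBCN_OFFSET - fbcn) << 2) | (fspp & 0x3);
}

/* Address 170: input port queue # look-up from an RBCN.  High seventeen
 * bits are RBCN_OFFSET + RBCN (the RBCN table grows up); low two bits
 * select the receiving TSPP. */
static uint32_t rbcn_addr(uint32_t rbcn, unsigned tspp)
{
    return ((RBCN_OFFSET + rbcn) << 2) | (tspp & 0x3);
}

int main(void)
{
    printf("FBCN 5, FSPP 2 -> address 0x%06x\n", (unsigned)fbcn_addr(5, 2));
    printf("RBCN 5, TSPP 1 -> address 0x%06x\n", (unsigned)rbcn_addr(5, 1));
    return 0;
}
```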
  • Certain look-up operations utilize a memory mapping scheme in order to optimize memory utilization. These look-up operations include: (1) looking up a bit vector in response to an input port queue number (numbered 2 in Table 1); (2) looking up either an MQN or an FBCN in response to an input port queue number (numbered 1 in Table 1); and (3) looking up either an input port queue number and input port number or an RBCN in response to an output port queue number (numbered 4 in Table 1).
  • Each input side translator 24 and output side translator 40 contains a mapping RAM 164.
  • The address to the mapping RAM 164 is labelled 174 and includes the least significant bits of the port processor number (i.e., TSPP or FSPP) and the most significant bits of the input/output port queue number, depending on the particular look-up operation. For example, in the case of looking up a bit vector in response to an input port queue number, the port processor number identifies the transmitting TSPP 14 and the queue number identifies the particular input port queue 16. The two least significant bits of the port processor number are used in address 174 since each translator supports four port processors. The five most significant bits of the I/O port queue number are used in address 174 since the page size is 512 (i.e., nine bits) and the queue number is fourteen bits with all of the bits resolved.
  • The mapping address 174 is used to retrieve a seven bit mapping word 180 from the mapping RAM 164 for use in an address 178 to access the queue area 160 of the external SRAM 26.
  • The seven bit mapping word 180 specifies a particular page of the 128 pages into which the queue area 160 is divided.
  • The queue area address 178 additionally includes the transmitting port queue number (i.e., an input port queue number 16 when looking up an MQN, bit vector or FBCN, and an output port queue number 22 when looking up an RBCN or input port queue number 16).
  • The two least significant bits of the queue area address 178 are look-up selection bits which specify the particular one of the three types of look-ups which utilize the queue area 160 (i.e., the look-up operations numbered 1, 2 and 4 in Table 1). An entry of 00 as the two least significant bits of address 178 specifies a bit vector look-up operation; an entry of 01 specifies an MQN or FBCN look-up operation; and an entry of 11 specifies an input port queue number and input port number or RBCN look-up operation. An entry of 10 as the two least significant bits may be used to specify an additional look-up operation.
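Putting the pieces together, a queue-area access forms a mapping RAM address from the port processor number and the high bits of the queue number, then builds the SRAM address from the returned page, the remaining queue number bits and the selection bits. The split of the queue number into five page-selecting bits and nine in-page bits is inferred from the stated page size; the rest is a sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* Mapping RAM address 174: two LSBs of the port processor number (four
 * port processors per translator) concatenated with the five MSBs of
 * the 14-bit queue number. */
static uint32_t mapping_addr(unsigned pp, uint32_t qnum)
{
    return ((pp & 0x3u) << 5) | ((qnum >> 9) & 0x1Fu);
}

static uint8_t mapping_ram[128];    /* 7-bit page number per entry */

/* Queue area address 178: the page from the mapping word, the in-page
 * nine bits of the queue number (page size 512), and the two look-up
 * selection bits: 00 bit vector, 01 MQN/FBCN, 11 queue #/RBCN. */
static uint32_t queue_area_addr(unsigned pp, uint32_t qnum, unsigned sel)
{
    uint32_t page = mapping_ram[mapping_addr(pp, qnum)] & 0x7Fu;
    return (((page << 9) | (qnum & 0x1FFu)) << 2) | (sel & 0x3u);
}

int main(void)
{
    mapping_ram[mapping_addr(1, 0x2A35)] = 42;   /* assign a page */
    printf("bit vector look-up at 0x%05x\n",
           (unsigned)queue_area_addr(1, 0x2A35, 0x0));
    printf("MQN/FBCN look-up at 0x%05x\n",
           (unsigned)queue_area_addr(1, 0x2A35, 0x1));
    return 0;
}
```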
  • Each port processor (TSPP 14 or FSPP 18) supports up to 16,384 connections.
  • With each translator 24, 40 supporting up to four port processors 14, 18, respectively, significant memory space is required to store connection information.
  • In practice, some port processors 14, 18 support fewer than 16,384 connections while other port processors associated with the same translator support the maximum 16,384 connections.
  • Through the mapping scheme, a port processor with a low number of connections can give up some memory to a port processor that has more connections.
  • The mapping also provides for dynamic reconfiguration of multipoint information as port processors are inserted into, or removed from, an in-service switch.
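Under this scheme, reassigning capacity between port processors is a matter of rewriting mapping RAM entries. A hypothetical illustration:

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t mapping_ram[128];    /* 7-bit page number per entry */

/* Move one 512-entry page of queue area from a lightly loaded port
 * processor to one needing more connections.  Indices follow the
 * address-174 layout: (pp << 5) | high five bits of the queue number.
 * Purely illustrative; a real switch would also retire or retarget
 * the donor's slot and reload the page contents for the new owner. */
static void give_page(unsigned donor_pp, unsigned donor_slot,
                      unsigned taker_pp, unsigned taker_slot)
{
    unsigned from = ((donor_pp & 0x3u) << 5) | (donor_slot & 0x1Fu);
    unsigned to   = ((taker_pp & 0x3u) << 5) | (taker_slot & 0x1Fu);
    mapping_ram[to] = mapping_ram[from];
}

int main(void)
{
    mapping_ram[(0u << 5) | 31] = 97;   /* port processor 0 owns page 97 */
    give_page(0, 31, 3, 4);             /* hand the page to processor 3 */
    printf("pp 3, slot 4 now maps to page %u\n",
           (unsigned)mapping_ram[(3u << 5) | 4]);
    return 0;
}
```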

Abstract

A network switch utilizing centralized and partitioned memory for storing connection topology information. The switch includes at least one input port (14), at least one output port (18) and a central switch fabric (28) interconnecting the input port and output port, with the connection topology memory centralized at the switch fabric. The central switch fabric includes at least one input side translator (24) associated with a predetermined number of input ports and at least one output side translator (40) associated with a predetermined number of output ports. The connection topology memory (26, 44) is distributed among the at least one input side translator and the at least one output side translator. With this arrangement, memory bandwidth requirements associated with connection topology look-up operations are distributed and scaling the number of ports is facilitated.

Description

NETWORK SWITCH UTILIZING CENTRALIZED AND PARTITIONED MEMORY FOR CONNECTION TOPOLOGY INFORMATION STORAGE

RELATED CASE INFORMATION
This application claims benefit of U.S. Provisional Application Serial No. 60/001,498, filed July 19, 1995.
FIELD OF THE INVENTION
The present invention relates generally to networks and, more particularly, to a network switch having centralized and partitioned memory for storing connection topology information.

BACKGROUND OF THE INVENTION
Networks such as asynchronous transfer mode ("ATM") networks are used to transfer audio, video and other data. ATM networks deliver data by routing data units, such as ATM cells, from source to destination through switches. Switches include input/output ("I/O") ports through which ATM cells are received and transmitted. Each of the I/O ports, in turn, has at least one queue associated therewith for temporarily storing, or buffering, cells processed by the respective port. Queues associated with an input port are referred to herein as input port queues and queues associated with an output port are referred to herein as output port queues.
Cells and control signals may be transmitted from an input port queue to a single output port queue in the case of point to point connections, or from an input port queue to a selected group of output port queues in the case of point to multipoint connections. Prior to transmitting a cell from an input port queue to a destination output port queue, or queues, a control signal is sent from the input port to the destination output port, or ports, to query whether sufficient output port queue space is available to enqueue the cell.
In accordance with one type of ATM transmission, which may be referred to as "per-VC queuing", the cell header identifies the input port queue which temporarily stores the cell. In this way, the header also identifies the destination output port queue, or queues, since the cells stored in a particular input port queue have the same destination output port queue, or queues. The ATM switch requires memory for storing a look-up table containing connection topology information (i.e., the destination output port queue, or queues, associated with each input port queue). As the number of input and output ports increases, the size and bandwidth requirements of the connection topology memory likewise increase. Additionally, scaling the number of ports of the switch requires an increase in the size of the connection topology look-up table.
SUMMARY OF THE INVENTION
Methods and apparatus are described for reducing memory size and bandwidth requirements associated with looking up stored connection topology information in a network switch and for facilitating scaling of the number of ports of the switch. The switch includes at least one input port for receiving data from a network, at least one output port for transmitting data from the switch and a control module including a central switch fabric coupled between the input port and output port. The central switch fabric includes a connection topology memory containing a look-up table correlating the at least one input port to the at least one output port.
With this arrangement, the memory containing the connection topology information is centralized at the switch fabric, thereby facilitating scaling of the number of ports. In particular, the number of input ports or output ports supported by the switch can be increased without modifying the I/O boards containing the I/O ports. Rather, the number of I/O ports can be increased by modifying the centralized connection topology information in the memory at the central switch fabric. This arrangement can be contrasted to maintaining the connection topology memory at the I/O ports themselves, in which case the I/O boards would require increasing or decreasing the size of their memory in order to change the number of switch ports. Moreover, a connection topology memory residing at an input port would be required to have the memory size/bandwidth necessary to perform look-ups for the maximum number of switch output ports, even if it were utilized in a switch with fewer output ports.
The switch further includes at least one input side translator associated with the at least one input port, at least one output side translator associated with the at least one output port and a bandwidth arbiter operative to control data signal flow between the input side translator and the output side translator. At least one input port queue is associated with the input port and at least one output port queue is associated with the output port for temporarily storing data received by the respective output port. The central switch fabric memory is partitioned such that a first connection topology memory is associated with the input side translator and a second connection topology memory is associated with the output side translator.
Each input side translator is associated with a predetermined number of input ports and each output side translator is associated with a predetermined number of output ports. For example, in the illustrative embodiment, each input side translator is associated with up to four input ports and each output side translator is associated with up to four output ports. This arrangement advantageously distributes the bandwidth requirement associated with connection topology memory look-up operations, thereby effectively reducing the necessary memory access bandwidth per device. Additionally, scaling the number of ports of the switch is facilitated by the disclosed translator/memory/port modularity (i.e., the association of a connection topology memory with each translator and attached ports). In particular, the number of ports can be readily increased with the use of additional translator/memory pairs.
In accordance with a further aspect of the invention, the first connection topology memory associated with the input side translator contains multiqueue number entries, each of which is a list of output port queues associated with a particular input port, and broadcast number entries, each of which is a pointer to one or more multiqueue numbers. A multiqueue number is retrieved from the first connection topology memory for point to point connections and a broadcast number is retrieved from the first connection topology memory for point to multipoint connections. Also stored in the first connection topology memory are bit vector entries which specify the destination output port, or ports. In point to point connections, the bit vector specifies a single output port. The multiqueue number or broadcast number is sent to the output side translator(s) associated with the output port(s) specified by the bit vector. The second connection topology memory associated with the output side translator contains multiqueue numbers for each attached output port. In point to multipoint connections, multiqueue numbers are retrieved from the second connection topology memory in response to a received broadcast number. Multiqueue numbers are further decoded by the particular output port to identify the destination output port queue, or queues.
A multipoint to point connection is a set of multiple point to point connections distributed over time. Hereinafter, wherever point to point is used, it refers to either point to point or multipoint to point unless otherwise noted. A multipoint to multipoint connection is a set of multiple point to multipoint connections distributed over time. Hereinafter, wherever point to multipoint is used, it refers to either point to multipoint or multipoint to multipoint unless otherwise noted.
In point to point connections, the first connection topology memory is accessed to retrieve a multiqueue number and the second connection topology memory is not accessed; whereas, in point to multipoint connections, the first connection topology memory is accessed to retrieve a broadcast number and the second connection topology memory is accessed to retrieve one or more multiqueue numbers in response to the broadcast number. With this arrangement, access of the second connection topology memory is avoided in point to point connections, thereby reducing the memory size and bandwidth requirements.
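By way of illustration only, the two-path access pattern described in this summary may be sketched as follows; the table contents, values and function name are hypothetical and form no part of the described switch:

```python
# Toy sketch of the two look-up paths: MQN = multiqueue number,
# FBCN/broadcast number points to a list of MQNs. All values invented.

first_memory  = {7: ("MQN", 66), 9: ("FBCN", 500)}   # at the input side
second_memory = {500: [256, 512, 768]}               # at the output side

def destination_mqns(input_queue_number):
    kind, value = first_memory[input_queue_number]
    if kind == "MQN":                # point to point: second memory untouched
        return [value]
    return second_memory[value]      # point to multipoint: one more look-up

print(destination_mqns(7))   # [66]             point to point
print(destination_mqns(9))   # [256, 512, 768]  point to multipoint
```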
BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following detailed description of the drawings in which:

Fig. 1 is a block diagram of a network switch;
Fig. 2 is a flow diagram of an illustrative process by which one or more destination output port queues are identified; and
Fig. 3 is a diagram of an illustrative connection topology memory and mapping memory of the switch of Fig. 1, as well as the address structures associated with the connection topology memory and the mapping memory.
DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring to Fig. 1, a network switch 10 includes at least one input port comprising at least one input port processor 14₁ - 14n coupled to one or more respective ATM links and operative to receive and temporarily store ATM cells from the respective ATM links. Also provided is at least one output port comprising at least one output port processor 18₁ - 18o coupled to one or more respective ATM links and operative to temporarily store and transmit ATM cells to the respective ATM links. The input port processors 14₁ - 14n are referred to herein generally as "To Switch Port Processors" or TSPPs 14 and the output port processors 18₁ - 18n are referred to herein generally as "From Switch Port Processors" or FSPPs 18. Each of the I/O ports has at least one queue associated therewith. A queue associated with an input port is referred to herein generally as an input port queue 16 and a queue associated with an output port is referred to herein generally as an output port queue 22. Each input port queue 16 has a corresponding input port queue number identifying the respective input port queue. Likewise, each output port queue 22 has a corresponding output port queue number identifying the respective output port queue.
In one embodiment, the input port queues 16 contain identifiers of other queues which buffer, or temporarily store, cells contending for access to a common connection. In such an embodiment, the input port queue 16 may alternatively be referred to as a scheduling list since such a queue contains a list of queues to be scheduled for access to a connection. Alternatively however, the input queues 16 may themselves contain (or point to) cells for the purpose of buffering such cells. It will therefore be appreciated that the term input port queue 16 as used hereinafter refers to either a queue containing other queues or a queue containing cells.

Data and control signals may be transmitted from an input port queue 16 to a particular one of the output port queues 22, in the case of a point to point connection. Alternatively, data and control signals may be transmitted from an input port queue 16 to a selected set of output port queues 22, in the case of a point to multipoint connection. A multipoint to point connection is a set of multiple point to point connections distributed over time. Hereinafter, wherever point to point is used, it refers to either point to point or multipoint to point unless otherwise noted. A multipoint to multipoint connection is a set of multiple point to multipoint connections distributed over time. Hereinafter, wherever point to multipoint is used, it refers to either point to multipoint or multipoint to multipoint unless otherwise noted.
A control module 20 is coupled between each of the TSPPs 14₁ - 14n and the FSPPs 18₁ - 18n, as shown. The control module 20 includes a central switch fabric 28 which permits the flow of control signals and data between the TSPPs 14₁ - 14n and the FSPPs 18₁ - 18n. The switch fabric 28 includes a plurality of input side translators 24₁ - 24n/x (referred to generally as input side translators 24), where x is a predetermined number, such as four. Stated differently, each input side translator 24 is associated with between one and four TSPPs 14. The input side translators 24 connect the respective TSPP(s) 14 to a bandwidth arbiter 30 of the switch fabric and perform translations used to implement internal switch flow control for point to multipoint and point to point connections. The interface between each input side translator 24 and the bandwidth arbiter 30 is provided by serial signal lines.
Also associated with each input side translator 24₁ - 24n/x is a connection topology memory 26₁ - 26n/x, respectively (referred to generally as input side memories 26). Each input side memory 26 includes a look-up table containing connection topology information and thus, may be referred to alternatively as a look-up table memory 26. In the illustrative embodiment, each input side memory 26 contains a look-up table of multiqueue numbers (MQNs), forward broadcast numbers (FBCNs), input port queue numbers and bit vectors, as will be described.
Also provided in the switch fabric 28 are a plurality of output side translators 40₁ - 40o/x (referred to generally as output side translators 40), where x is a predetermined number, such as four. Thus, each output side translator 40 is associated with between one and four FSPPs 18. Like the input side translators 24, the output side translators 40 are connected to the bandwidth arbiter 30 via serial signal lines and perform translations for internal switch flow control.
A connection topology memory 44₁ - 44o/x (referred to generally as output side memories 44) is associated with each output side translator 40₁ - 40o/x, respectively. Each of the output side memories 44 includes a look-up table containing connection topology information and thus, may be referred to alternatively as a look-up table memory 44. In the illustrative embodiment, each output side memory 44 contains a look-up table of input port queue numbers and reverse broadcast numbers (RBCNs), as will be described.
The bandwidth arbiter 30 controls data flow within the switch 10 and includes a probe crossbar 32, an XOFF crossbar 36 and an XON crossbar 34, each of which is an NxN switch fabric, such as a cross point switch fabric. Multiple request messages, or probe control signals, flow through the probe crossbar 32. The request message is used to query whether or not sufficient space is available at the destination output port queue, or queues 22 to enqueue a cell. The request message is considered a "forward" control signal since its direction is from a TSPP 14 to one or more FSPPs 18 (i.e., the same direction as data). A two bit control signal flows in the reverse direction (from one or more FSPPs 18 to a TSPP 14) through the XOFF crossbar 36 and responds to the request message query by indicating whether or not the destination output port queue, or queues 22 are presently capable of accepting data cells and thus, whether or not the transmitting TSPP 14 can transmit cells. In the event that the XOFF control signal indicates that the queried output port queue(s) 22 are not presently capable of receiving data, another reverse control signal, which flows through the XON crossbar 34, notifies the transmitting TSPP(s) 14 once space becomes available at the destination output port queue(s) 22.
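A minimal software sketch of this probe/XOFF/XON handshake is given below for illustration only; the queue capacity, class names and scheduling model are invented and do not correspond to the actual crossbar hardware:

```python
# Toy model of the probe (forward), XOFF (reverse) and XON (reverse)
# control flows; one queue, one transmitter, invented capacity.

class FSPPQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = []
        self.blocked = []          # TSPPs waiting for an XON

    def probe(self):
        """Forward request message: is there room for one more cell?"""
        return len(self.cells) < self.capacity

    def dequeue(self):
        cell = self.cells.pop(0)
        for tspp in self.blocked:  # XON: space has become available
            tspp.notify_xon(self)
        self.blocked.clear()
        return cell

class TSPP:
    def __init__(self, name):
        self.name = name

    def send(self, queue, cell):
        if queue.probe():              # probe crossbar (forward)
            queue.cells.append(cell)   # data follows via the data crossbar
            return True
        queue.blocked.append(self)     # XOFF crossbar (reverse): deferred
        return False

    def notify_xon(self, queue):       # XON crossbar (reverse)
        print(f"{self.name}: XON received, may retransmit")

t = TSPP("tspp0")
q = FSPPQueue(capacity=1)
t.send(q, "cell A")   # accepted
t.send(q, "cell B")   # XOFF: queue full, transmission deferred
q.dequeue()           # frees space, triggering an XON to tspp0
```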
A data crossbar 48 of the switch fabric 28 permits transmission of data cells between the TSPPs 14 and the FSPPs 18. To this end, the data crossbar 48 is coupled between the TSPPs 14 and FSPPs 18. A microprocessor 50 within the control module 20 provides various control functionality. As one example, the microprocessor 50 executes call control software when switch connections are set up, in order to load the connection topology look-up tables into the input side memories 26 and output side memories 44.

In one embodiment, each of the TSPPs 14 and FSPPs 18 resides on a respective I/O board and the control module 20 is implemented on a central board. In the preferred embodiment, the input side translators 24 and output side translators 40 are implemented on an ASIC, which may contain one or more of the input and output side translators. Additionally, the bandwidth arbiter 30 is implemented on an ASIC. Each input side memory 26 and output side memory 44 is provided by a dedicated SRAM device.
Preferably, the input side translators 24 and output side translators 40 are incorporated into the same device. Similarly, the input side memories 26 and output side memories 44 are interleaved on the same device. To this end, both the input side memories 26 and output side memories 44 contain similar connection topology look-up tables. Specifically, each of the memories 26, 44 contains a connection topology look-up table with entries of multiqueue numbers (MQNs), forward broadcast numbers (FBCNs), bit vectors, input port queue numbers and reverse broadcast numbers (RBCNs).
An MQN is a fourteen bit digital word which specifies one or more output port queues 22 associated with a particular FSPP 18. In point to point connections, an MQN for the destination FSPP 18 is retrieved from the input side memory 26 by the input side translator 24 and is sent to the destination FSPP 18 via the output side translator 40 and bandwidth arbiter 30. An FBCN is a seventeen bit digital word which points to a list of one or more MQNs. In point to multipoint connections, an FBCN is retrieved from the input side connection topology memory 26 by the respective input side translator 24 and is sent to the output side translator(s) 40 associated with the destination FSPP(s) 18. The output side translator(s) 40 then use the FBCN to retrieve an MQN for each destination FSPP 18. In the case of point to point connections however, no connection topology look-up operations are performed on the output side of the switch fabric 28, since an MQN is transmitted directly from the input side of the switch fabric. Thus, fewer connection topology look-up operations are performed in the case of point to point connections, as contrasted to point to multipoint connections. That is, in point to point connections, an MQN is transmitted directly to the output side, as contrasted to transmission of an FBCN to be translated into MQN(s) by the output side. With this arrangement, the necessary memory size is reduced for point to point connections.

In the illustrative embodiment, a bit vector is a sixteen bit digital word which specifies the destination FSPP(s) 18, with the number of bits corresponding to the number of output ports. A bit vector is retrieved from the input side look-up table memory 26 by the respective input side translator 24 in both point to point and point to multipoint connections. However, only one bit of the bit vector is set for point to point connections.

In the illustrative embodiment, bit vectors are not used for transmissions in the reverse direction. Rather, reverse transmissions are either to a single TSPP or to all TSPPs 14. In the case of a reverse transmission to all TSPPs, an RBCN, which is a seventeen bit digital word which points to a list of input port queue numbers, is retrieved from an output side memory 44. Alternatively, in the case of a reverse transmission to one TSPP 14, an input port queue number and the input port number associated with the destination TSPP 14 is retrieved from the output side memory 44.
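For illustration, the field widths recited above and the decoding of a bit vector into destination FSPPs might be expressed as follows (the example bit vectors are hypothetical):

```python
# Field widths as given in the illustrative embodiment.
MQN_BITS = 14         # multiqueue number
FBCN_BITS = 17        # forward broadcast number
RBCN_BITS = 17        # reverse broadcast number
BIT_VECTOR_BITS = 16  # one bit per output port (FSPP)

def destination_fspps(bit_vector):
    """Decode a 16-bit bit vector into the list of destination FSPPs."""
    return [port for port in range(BIT_VECTOR_BITS) if bit_vector >> port & 1]

print(destination_fspps(0b0000000000000100))  # [2]: point to point, one bit set
print(destination_fspps(0b1000000000100100))  # [2, 5, 15]: point to multipoint
```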
The following Table 1 summarizes the connection topology look-up operations performed by the input and output side translators 24, 40 for both forward probe control signal transmissions and reverse XON and XOFF control signal transmissions in point to point and point to multipoint connections.
TABLE 1

Input side translator look-up operations for forward signals:

1. Input port queue # : FBCN or MQN (an FBCN for point to multipoint connections and an MQN for point to point connections)
2. Input port queue # : Bit Vector

Output side translator look-up operation for forward signals:

3. FBCN : MQNs

Output side translator look-up operations for reverse signals:

4. Output port queue # : Input port queue # and input port number, or RBCN (an input port queue # for point to point connections and an RBCN for point to multipoint connections)

Input side translator look-up operation for reverse signals:

5. RBCN : Input port queue #
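Operations 1 and 3 are essentially the two-path look-up sketched earlier in connection with the summary; the reverse-direction operations 4 and 5 might be modeled as follows, with all table contents invented for illustration:

```python
# Toy model of Table 1 operations 4 and 5 (reverse XON/XOFF direction).

output_memory = {                                  # op 4, at the output side
    8: ("INPUT_QUEUE", {"queue": 3, "port": 0}),   # point to point
    9: ("RBCN", 900),                              # point to multipoint
}
input_memory = {                                   # op 5, at the input side
    900: [{"queue": 3, "port": 0}, {"queue": 5, "port": 1}],
}

def reverse_targets(output_queue_number):
    """Find the input port queue(s) to receive a reverse control signal."""
    kind, value = output_memory[output_queue_number]
    if kind == "INPUT_QUEUE":
        return [value]               # single TSPP: queue # and port #
    return input_memory[value]       # broadcast: RBCN resolved at input side

print(reverse_targets(8))  # [{'queue': 3, 'port': 0}]: one TSPP
print(reverse_targets(9))  # two entries, via the RBCN look-up
```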
With the above described arrangement, the connection topology memory (including the discrete input side memories 26 associated with respective input side translators 24 and output side memories 44 associated with respective output side translators 40) is centralized at the switch fabric 28 of the control module 20. This look-up memory centralization facilitates switch port scaling (i.e., changes in the number of ports of the switch). In particular, in the event that it is desired to increase the number of input ports or output ports supported by the switch 10, it is not necessary to modify the I/O boards on which the TSPP/FSPP resides. Rather, I/O port capacity can be increased by modifying the centralized connection topology information contained in the switch fabric memories 26, 44. This arrangement can be contrasted to having the look-up table memory reside at the TSPPs 14 and FSPPs 18, which would require a change in size of their local look-up table memory in order to modify the number of switch ports. Also, a connection topology memory residing at a port processor would be required to have the memory size/bandwidth necessary to perform connection topology look-ups for the maximum number of switch output ports, even if it were utilized in a switch with fewer output ports.

A further advantage of the switch 10 is provided by the association of a predetermined number of ports with each translator 24, 40 and the partitioning of the connection topology memory, such that each input side translator 24 and output side translator 40 has a dedicated connection topology memory 26, 44, respectively, associated therewith. In particular, in the illustrative embodiment, each input side translator 24/input side memory 26 combination is associated with up to four TSPPs 14 and each output side translator 40/output side memory 44 combination is associated with up to four FSPPs 18. This arrangement serves to distribute the bandwidth requirement associated with connection topology look-up operations. Additionally, switch modularity is facilitated since the number of ports can be readily increased by adding additional translator/memory pairs to the switch fabric 28. For example, input/output port quantity can be increased by adding additional input side translators 24 and associated input side memories 26.
Referring also to Fig. 2, the process by which one or more output port queues 22 are identified for receipt of control signals and data from a particular input port queue 16 will be described. After the process commences in step 70, a TSPP 14 sends the number of the associated input port queue 16 containing data (or containing queues which themselves contain data) to the attached input side translator 24 in step 74. In step 78, the input side translator 24 accesses the corresponding input side memory 26. In particular, the input port queue number is used to address the input side memory 26 to retrieve a bit vector and either an FBCN or an MQN, depending on whether the cell connection is point to point or point to multipoint. In the case of a point to point connection, the input side translator 24 retrieves an MQN and, in the case of a point to multipoint connection, the input side translator 24 retrieves an FBCN.
In subsequent step 86, the retrieved MQN or FBCN is sent by the input side translator 24 to the bandwidth arbiter 30. Subsequently, in step 90, the bandwidth arbiter 30 sends the MQN or FBCN to the output side translator(s) 40 associated with the FSPPs 18 specified by the bit vector. In step 92, it is determined by the receiving output side translator 40 whether an MQN was retrieved by the input side translator 24. In the event that an MQN was not retrieved, then an FBCN was retrieved (i.e., a point to multipoint connection) and a further look-up operation is performed by the output side translator(s) 40 to which the FBCN is transmitted. In particular, in step 94, the receiving output side translator(s) 40 perform the further look-up of retrieving an MQN for each attached FSPP 18 in response to the received FBCN.
The MQNs retrieved by the output side translator(s) 40 are then sent to the respective FSPPs 18, in step 98. Note that, in the event that an MQN was retrieved by the input side translator 24 in step 78 (i.e., a point to point connection), that MQN is sent to the respective FSPP in step 98. Stated differently, there is no output side connection topology look-up operation for point to point connections. Finally, in step 102, the destination FSPP(s) 18 look up the associated output port queues 22 indicated by the received MQN before the process is terminated in step 106.
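The complete Fig. 2 flow may be traced with a toy simulation such as the following; every table, queue number and value is hypothetical:

```python
# End-to-end trace of the Fig. 2 process (steps 74 through 102).

input_memory = {
    5: {"kind": "FBCN", "value": 300, "bit_vector": 0b0101},  # ports 0 and 2
}
output_memory = {300: {0: 21, 2: 37}}       # FBCN -> MQN per attached FSPP
fspp_queues = {21: [210, 211], 37: [370]}   # MQN -> local output port queues

def identify_destination_queues(input_queue_number):
    entry = input_memory[input_queue_number]               # steps 74-78
    ports = [p for p in range(16) if entry["bit_vector"] >> p & 1]
    queues = {}
    for port in ports:                                     # steps 86-90
        if entry["kind"] == "MQN":
            mqn = entry["value"]      # point to point: no output side look-up
        else:
            mqn = output_memory[entry["value"]][port]      # step 94
        queues[port] = fspp_queues[mqn]                    # steps 98-102
    return queues

print(identify_destination_queues(5))  # {0: [210, 211], 2: [370]}
```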
Referring also to Fig. 3, the structure of the connection topology memories 26, 44 is shown in conjunction with illustrative input side memory 26. In the illustrative embodiment, the connection topology memory 26 is a twenty-two bit wide SRAM segregated into three areas 150, 158 and 160.
The first memory area 150 (referred to as the FBCN:MQN look-up table area) is accessed to perform output side look-up for forward point to multipoint transmissions (i.e., during the look-up operation numbered 3 in Table 1). To this end, the FBCN:MQN look-up table area 150 contains entries correlating each FBCN to a list of MQNs. The second memory area 158 (referred to as the RBCN:Input Port Queue # look-up table area) is accessed to perform input side look-up for reverse point to multipoint transmissions (i.e., during the look-up operation numbered 5 in Table 1). Thus, the RBCN:Input Port Queue number look-up table area 158 contains entries correlating each RBCN to a list of input queue numbers.

The third memory area 160 (referred to as the queue area) is accessed to perform input side look-up operations for forward and reverse point to point and point to multipoint connections (i.e., during the look-up operations numbered 1, 2 and 4 in Table 1). Thus, queue area 160 contains three types of entries: (1) entries correlating an input port queue number to an FBCN or an MQN; (2) entries correlating an input port queue number to a bit vector; and (3) entries correlating an output port queue number to input port queue number(s) and the input port number or an RBCN.
Each FBCN/MQN entry contains a bit specifying whether the entry is an FBCN or an MQN (i.e., whether the connection is point to multipoint or point to point, respectively). The bit vector is a sixteen bit entry specifying the destination FSPP(s) 18. Thus, in the case of a point to point connection, only one bit of the bit vector is set. Each input port queue number/RBCN entry contains a bit specifying whether the entry is an input port queue number and input port number or an RBCN (i.e., whether the connection is point to point or point to multipoint).
Two of the connection topology look-up operations are achieved using the port processor (TSPP 14 or FSPP 18) number for addressing. In particular, looking up an MQN in response to an FBCN in the table area 150 (i.e., look-up operation 3 in Table 1) and looking up an input port queue number in response to an RBCN in the table area 158 (i.e., look-up operation 5 in Table 1) are achieved using the particular port processor number for addressing. Specifically, the memory address used to look up an MQN in response to an FBCN is labelled 168, in which the two least significant bits identify the receiving FSPP 18. The most significant seventeen bits are given by an FBCN offset register value minus the FBCN. The FBCN offset corresponds to the end of the memory 26 (i.e., the last memory location). The address used to look up an input port queue number in response to an RBCN is labelled 170 and includes the receiving TSPP identifier as the two least significant bits. The most significant seventeen bits are given by an RBCN offset register value plus the RBCN. The RBCN offset corresponds to the end of the queue area 160.
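Assuming the register values shown, the two address computations may be illustrated as follows; the offset values are invented, and only the bit layout follows the description:

```python
# Hypothetical computations for addresses 168 and 170: seventeen most
# significant bits from an offset register and the BCN, two least
# significant bits identifying the receiving port processor.

FBCN_OFFSET = 0x1FFFF  # end of the memory (assumed register value)
RBCN_OFFSET = 0x08000  # end of the queue area (assumed register value)

def fbcn_to_mqn_address(fbcn, fspp_number):
    """Address 168: 17 MSBs = FBCN offset - FBCN; 2 LSBs = receiving FSPP."""
    return ((FBCN_OFFSET - fbcn) << 2) | (fspp_number & 0b11)

def rbcn_to_queue_address(rbcn, tspp_number):
    """Address 170: 17 MSBs = RBCN offset + RBCN; 2 LSBs = receiving TSPP."""
    return ((RBCN_OFFSET + rbcn) << 2) | (tspp_number & 0b11)

print(hex(fbcn_to_mqn_address(fbcn=0x00010, fspp_number=2)))
print(hex(rbcn_to_queue_address(rbcn=0x00010, tspp_number=1)))
```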
The RBCN offset is programmable, providing a trade-off between the size of the queue area 160 and the size of the BCN look-up table area. This allows flexibility in choosing between the number of connections and the percentage of connections which are multipoint. Additionally, the look-up area for the FBCN grows down within the memory, whereas the look-up area for the RBCN grows up, so that the portion of memory dedicated to point to multipoint information as compared to multipoint to point information does not have to be established at machine initialization time.
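A sketch of the resulting grow-toward-each-other layout check, under assumed sizes:

```python
# The RBCN table grows up from the programmable offset while the FBCN
# table grows down from the top of memory; the two regions must not
# collide. All sizes here are invented for illustration.

FBCN_END = (1 << 17) - 1   # assumed top of a 17-bit address space

def layout_is_valid(rbcn_offset, max_rbcn, max_fbcn):
    rbcn_top = rbcn_offset + max_rbcn    # RBCN table grows up from the offset
    fbcn_bottom = FBCN_END - max_fbcn    # FBCN table grows down from the top
    return rbcn_top <= fbcn_bottom

print(layout_is_valid(rbcn_offset=0x08000, max_rbcn=0x2000, max_fbcn=0x4000))  # True
print(layout_is_valid(rbcn_offset=0x08000, max_rbcn=0xC000, max_fbcn=0xC000))  # False
```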
The remaining look-up operations (numbered 1, 2 and 4 in Table 1) utilize a memory mapping scheme in order to optimize memory utilization. These look-up operations include: (1) looking up a bit vector in response to an input port queue number (i.e., numbered 2 in Table 1); (2) looking up either an MQN or an FBCN in response to an input port queue number (i.e., numbered 1 in Table 1); and (3) looking up either an input port queue number and input port number or an RBCN in response to an output port queue number (i.e., numbered 4 in Table 1).
Each input side translator 24 and output side translator 40 contains a mapping RAM 164. The address to the mapping RAM 164 is labelled 174 and includes the least significant bits of the port processor number (i.e., TSPP or FSPP) and the most significant bits of the input/output port queue number, depending on the particular look-up operation. For example, in the case of looking up a bit vector in response to an input port queue number, the port processor number identifies the transmitting TSPP 14 and the queue number identifies the particular input port queue 16. The two least significant bits of the port processor number are used in address 174 since each translator supports four port processors. The five most significant bits of the I/O port queue number are used in address 174 since the page size is 512 (i.e., nine bits) and the queue number is fourteen bits with all of the bits resolved.
The mapping address 174 is used to retrieve a seven bit mapping word 180 from the mapping RAM 164 for use in an address 178 to access the queue area 160 of the external SRAM 26. The seven bit mapping word 180 specifies a particular page of 128 pages into which the queue area 160 is divided. The queue area address 178 additionally includes the transmitting port queue number (i.e., an input port queue number 16 when looking up an MQN, bit vector or FBCN and an output port queue number 22 when looking up an RBCN or input port queue number 16). The two least significant bits of the queue area address 178 are look-up selection bits which specify the particular one of the three types of look-ups which utilize the queue area 160 (i.e., look-up operations labelled 1, 2 and 4 in Table 1). In particular, 00 as the two least significant bits of address 178 specifies a bit vector look-up operation, 01 as the two least significant bits specifies an MQN or FBCN look-up operation and 11 as the two least significant bits specifies an input port queue number and input port number or RBCN look-up operation. An entry of 10 as the two least significant bits may be used to specify an additional look-up operation.

Use of the above-described memory mapping scheme permits memory page sharing among the port processors supported by a particular translator. Stated differently, memory mapping allows one port processor 14, 18 to use more than the standard allotment of memory pages if another port processor associated with the same translator 24, 40 uses less than the standard allotment. For example, in the illustrative embodiment, each port processor (TSPP 14 and FSPP 18) supports 16,384 connections. With each translator 24, 40 supporting up to four port processors 14, 18, respectively, significant memory space is required to store connection information. However, in certain applications, some port processors 14, 18 support fewer than 16,384 connections while other port processors associated with the same translator support the maximum 16,384 connections. With the above-described memory mapping scheme, a port processor with a low number of connections can give up some memory to a port processor that has more connections. The mapping provides for dynamic reconfiguration of multipoint information as port processors are inserted/removed from an in-service switch.
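The two address formats may be illustrated as follows; the exact bit packing and field ordering are assumptions consistent with the widths given above, and the example values are invented:

```python
# Hypothetical bit packing for mapping RAM address 174 and queue area
# address 178 (field ordering within each address is assumed).

def mapping_ram_address(port_processor, queue_number):
    """Address 174: 2 LSBs of the port processor number (four ports per
    translator) combined with the 5 MSBs of the 14-bit queue number."""
    return ((port_processor & 0b11) << 5) | (queue_number >> 9)

def queue_area_address(page, queue_number, lookup_select):
    """Address 178: 7-bit page word 180 from the mapping RAM, the nine
    in-page bits of the queue number, and 2 look-up selection bits
    (00 = bit vector, 01 = MQN/FBCN, 11 = input queue #/RBCN)."""
    return (page << 11) | ((queue_number & 0x1FF) << 2) | lookup_select

page = 0x25  # 7-bit mapping word 180, as if just read from the mapping RAM
print(bin(mapping_ram_address(port_processor=2, queue_number=0x1234)))
print(bin(queue_area_address(page, queue_number=0x1234, lookup_select=0b01)))
```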
Having described the preferred embodiments of the invention, it will now become apparent to one of skill in the art that other embodiments incorporating these concepts may be used. It is felt, therefore, that the invention should not be limited to the disclosed embodiments, but rather should be limited only by the spirit and scope of the appended claims.

CLAIMS

We claim:
1. A switch for permitting data flow within a network, comprising:
at least one input port for receiving data from said network;
at least one output port for transmitting data from said switch; and
a switch fabric coupled between said at least one input port and said at least one output port and operative to permit data flow between said at least one input port and said at least one output port, wherein said switch fabric comprises a connection topology memory correlating said at least one input port to said at least one output port.
2. The switch recited in claim 1 further comprising:
at least one input port queue associated with said at least one input port; and
at least one output port queue associated with said at least one output port for temporarily storing data received from said at least one input port queue, wherein said connection topology memory correlates said at least one input port queue to said at least one output port queue.
3. The switch recited in claim 2 wherein said switch fabric further comprises:
at least one input side translator associated with said at least one input port;
at least one output side translator associated with said at least one output port; and
a bandwidth arbiter coupled between said at least one input side translator and said at least one output side translator and operative to control data flow between said at least one input side translator and said at least one output side translator.
4. The switch recited in claim 3 wherein said connection topology memory comprises:
a first connection topology memory associated with said at least one input side translator; and
a second connection topology memory associated with said at least one output side translator.
5. The switch recited in claim 4 wherein said first connection topology memory contains a first entry which correlates said at least one input port queue to a bit vector.
6. The switch recited in claim 4 wherein said bit vector identifies said at least one output port.
7. The switch recited in claim 5 wherein said first connection topology memory contains a second entry which correlates said input port queue to either a multiqueue number for a point to point connection or a broadcast number for a point to multipoint connection.
8. The switch recited in claim 7 wherein said multiqueue number identifies said at least one output port queue and said broadcast number identifies said multiqueue number.
9. The switch recited in claim 8 wherein said second connection topology memory contains an entry which correlates said broadcast number to a multiqueue number for said point to multipoint connection.
10. A switch for permitting data flow within a network, comprising:
at least one input port for receiving data from said network;
a plurality of output ports for transmitting data from said switch; and
a switch fabric coupled between said at least one input port and said plurality of output ports, said switch fabric comprising:
at least one input side translator associated with said at least one input port;
at least one output side translator associated with a predetermined number of said plurality of output ports;
a bandwidth arbiter coupled between said at least one input side translator and said at least one output side translator and operative to control data flow between said at least one input side translator and said at least one output side translator; and
a first connection topology memory associated with said at least one input side translator and a second connection topology memory associated with said at least one output side translator.
11. The switch recited in claim 10 further comprising:
at least one input port queue associated with said at least one input port; and
at least one output port queue associated with each of said plurality of output ports for temporarily storing data received at the respective output port.
12. The switch recited in claim 11 wherein said first connection topology memory contains a first entry correlating said at least one input port queue to a bit vector and a second entry correlating said at least one input port queue to either a multiqueue number for a point to point connection or a broadcast number for a point to multipoint connection.
13. The switch recited in claim 12 wherein said bit vector identifies a selected one of said plurality of output ports, said multiqueue number identifies said at least one output port queue and said broadcast number identifies said multiqueue number.
14. The switch recited in claim 13 wherein said second connection topology memory contains an entry correlating said broadcast number to a multiqueue number for said point to multipoint connection.
15. A switch for permitting data flow within a network, comprising:
at least one input port for receiving data from the network;
at least one output port for transmitting data from said switch;
at least one input port queue associated with said at least one input port;
at least one output port queue associated with said at least one output port for temporarily storing data received from said at least one input port queue; and
a switch fabric coupled between said at least one input port and said at least one output port, said switch fabric comprising:
at least one input side translator associated with said at least one input port;
at least one output side translator associated with said at least one output port;
a bandwidth arbiter coupled between said at least one input side translator and said at least one output side translator and operative to control data flow between said at least one input side translator and said at least one output side translator; and
a connection topology memory associated with said at least one input side translator and containing an entry which correlates said at least one input port queue to either a multiqueue number for a point to point connection or a broadcast number for a point to multipoint connection.
16. The switch recited in claim 15 wherein said connection topology memory contains a second entry which correlates said at least one input port queue to a bit vector.

17. The switch recited in claim 16 wherein said bit vector identifies a selected one of said plurality of output ports, said multiqueue number identifies said at least one output port queue and said broadcast number identifies said multiqueue number.

18. The switch recited in claim 17 further comprising a second connection topology memory associated with said at least one output side translator and containing an entry which correlates said broadcast number to a multiqueue number for said point to multipoint connection.
19. A method for identifying at least one destination output port queue of a network switch, comprising the steps of:
sending an input port queue number of an input port queue to an input side translator of said switch, said input side translator looking up a bit vector identifying at least one output port of said switch and looking up either a first multiqueue number if said data cell is associated with a point to point connection or a broadcast number if said data cell is associated with a point to multipoint connection;
sending said bit vector and either said first multiqueue number or said broadcast number to an output side translator associated with said at least one output port identified by said bit vector, said output side translator looking up a second multiqueue number in response to receipt of said broadcast number; and
said at least one output port identifying at least one output port queue associated therewith for receipt of said data cell in response to said first multiqueue number or second multiqueue number.
PCT/US1996/011932 1995-07-19 1996-07-18 Network switch utilizing centralized and partitioned memory for connection topology information storage WO1997004544A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP9506873A JPH11510006A (en) 1995-07-19 1996-07-18 Network switch that uses centralized, split-type memory for storage of connection topology information
PCT/US1996/011932 WO1997004544A1 (en) 1995-07-19 1996-07-18 Network switch utilizing centralized and partitioned memory for connection topology information storage
AU65017/96A AU6501796A (en) 1995-07-19 1996-07-18 Network switch utilizing centralized and partitioned memory for connection topology information storage

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US149895P 1995-07-19 1995-07-19
US60/001,498 1995-07-19
PCT/US1996/011932 WO1997004544A1 (en) 1995-07-19 1996-07-18 Network switch utilizing centralized and partitioned memory for connection topology information storage

Publications (1)

Publication Number Publication Date
WO1997004544A1 1997-02-06

Family

ID=38659681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/011932 WO1997004544A1 (en) 1995-07-19 1996-07-18 Network switch utilizing centralized and partitioned memory for connection topology information storage

Country Status (3)

Country Link
JP (1) JPH11510006A (en)
AU (1) AU6501796A (en)
WO (1) WO1997004544A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5535197A (en) * 1991-09-26 1996-07-09 Ipc Information Systems, Inc. Shared buffer switching module
US5541912A (en) * 1994-10-04 1996-07-30 At&T Corp. Dynamic queue length thresholds in a shared memory ATM switch
US5546391A (en) * 1993-03-04 1996-08-13 International Business Machines Corporation Central shared queue based time multiplexed packet switch with deadlock avoidance
US5557607A (en) * 1994-04-28 1996-09-17 Network Synthesis, Inc. Methods and apparatus for enqueueing and dequeueing data cells in an ATM switch fabric architecture


Also Published As

Publication number Publication date
JPH11510006A (en) 1999-08-31
AU6501796A (en) 1997-02-18

Similar Documents

Publication Publication Date Title
US5917805A (en) Network switch utilizing centralized and partitioned memory for connection topology information storage
JP4024904B2 (en) Data unit for receiving a data packet and distributing it to a packet switching circuit, and an exchange including the data unit
US5535197A (en) Shared buffer switching module
JP3443264B2 (en) Improved multicast routing in multistage networks
AU647267B2 (en) Switching node in label multiplexing type switching network
US5684797A (en) ATM cell multicasting method and apparatus
JP4090510B2 (en) Computer interface for direct mapping of application data
US5898687A (en) Arbitration mechanism for a multicast logic engine of a switching fabric circuit
US6147999A (en) ATM switch capable of routing IP packet
US5774675A (en) Header converting method
US5949785A (en) Network access communications system and methodology
AU632840B2 (en) An atm switching element for transmitting variable length cells
JP4006205B2 (en) Switching arrangement and method with separate output buffers
JP3014612B2 (en) Connectionless communication device and communication method
US6052376A (en) Distributed buffering system for ATM switches
US7782849B2 (en) Data switch and switch fabric
US6751233B1 (en) UTOPIA 2—UTOPIA 3 translator
JPH07321822A (en) Device with multi-casting function
US5666361A (en) ATM cell forwarding and label swapping method and apparatus
US5963552A (en) Low/medium speed multi-casting device and method
US6310875B1 (en) Method and apparatus for port memory multicast common memory switches
WO1999051000A1 (en) Ampic dram system in a telecommunication switch
US6985486B1 (en) Shared buffer asynchronous transfer mode switch
JP3204996B2 (en) Asynchronous time division multiplex transmission device and switch element
WO1997004544A1 (en) Network switch utilizing centralized and partitioned memory for connection topology information storage

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA

CFP Corrected version of a pamphlet front page

Free format text: REVISED ABSTRACT RECEIVED BY THE INTERNATIONAL BUREAU AFTER COMPLETION OF THE TECHNICAL PREPARATIONS FOR INTERNATIONAL PUBLICATION

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: JP

Ref document number: 1997 506873

Kind code of ref document: A

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA