US20090141719A1 - Transmitting data through communication switch - Google Patents
Transmitting data through communication switch
- Publication number
- US20090141719A1 (application US12/272,609)
- Authority
- US
- United States
- Prior art keywords
- switch
- pdu
- data
- bandwidth request
- request element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/64—Hybrid switching systems
- H04L12/6402—Hybrid switching fabrics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0685—Clock or time synchronisation in a node; Intranode synchronisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0003—Switching fabrics, e.g. transport network, control network
- H04J2203/0025—Peripheral units
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0003—Switching fabrics, e.g. transport network, control network
- H04J2203/0026—Physical details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0682—Clock or time synchronisation in a network by delay compensation, e.g. by compensation of propagation delay or variations thereof, by ranging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0685—Clock or time synchronisation in a node; Intranode synchronisation
- H04J3/0688—Change of the master or reference, e.g. take-over or failure of the master
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/64—Hybrid switching systems
- H04L12/6402—Hybrid switching fabrics
- H04L2012/6413—Switch peripheries
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L7/00—Arrangements for synchronising receiver with transmitter
- H04L7/04—Speed or phase control by synchronisation signals
- H04L7/10—Arrangements for initial synchronisation
Definitions
- Disclosed embodiments relate to telecommunications networks and, more particularly, to transmitting data through telecommunication switches in said networks.
- TDM time division multiplexing
- a hierarchy of multiplexing is based on multiples of the T1 or E-1 signal, one of the most common being T3 or DS3.
- a T3 signal has 672 channels, the equivalent of twenty-eight T1 signals.
- TDM was originally designed for voice channels. Today, however, it is used for both voice and data.
- IP Internet Protocol
- broadband technologies known as ATM and SONET have been developed.
- the ATM network is based on fixed length packets (cells) of 53-bytes each (48-bytes payload with 5-bytes overhead).
- QOS quality of service
- ATM cells are assigned different priorities based on QOS.
- CBR constant bit rate
- VBR variable bit rate
- UBR Unspecified bit rate
- the SONET network is based on a frame of 810-bytes within which a 783-byte synchronous payload envelope (SPE) floats.
- SPE synchronous payload envelope
- the payload envelope floats because of timing differences throughout the network. The exact location of the payload is determined through a relatively complex system of stuffs/destuffs and pointers.
- STS-1 the basic SONET signal
- the SONET network includes a hierarchy of SONET signals wherein up to 768 STS-1 signals are multiplexed together providing the capacity of 21,504 T1 signals (768 T3 signals).
- STS-1 signals have a transmission rate of 51.84 Mbit/sec, with 8,000 frames per second and 125 microseconds per frame.
- the SONET standard uses sub-STS payload mappings, referred to as Virtual Tributary (VT) structures.
- VT Virtual Tributary
- the ITU calls these Tributary Units or TUs.
- VT-1.5 has a data transmission rate of 1.728 Mbit/s and accommodates a T1 signal with overhead.
- VT-2 has a data transmission rate of 2.304 Mbit/s and accommodates an E1 signal with overhead.
- VT-3 has a data transmission rate of 3.456 Mbit/s and accommodates a T2 signal with overhead.
- VT-6 has a data transmission rate of 6.912 Mbit/s and accommodates a DS2 signal with overhead.
- three transmission technologies are thus in use: TDM, ATM, and Packet, SONET being a complex form of TDM.
- TDM, ATM and Packet each require their own unique transmission characteristics. Consequently, different kinds of switches are used to route these different kinds of signals.
- TDM requires careful time synchronization
- ATM requires careful attention to the priority of cells and QOS
- packet e.g. IP
- switching technologies for TDM, ATM, and variable length packet switching have evolved in different ways. Service providers and network designers have thus been forced to deal with these technologies separately, often providing overlapping networks with different sets of equipment which can only be used within a single network.
- FIG. 1 is a simplified schematic diagram of a port processor according to some embodiments
- FIG. 2 is a simplified schematic diagram of a switch element according to some embodiments.
- FIG. 3 is a schematic diagram illustrating the data frame structure of some embodiments.
- FIG. 3a is a schematic diagram illustrating the presently preferred format of a PDU according to some embodiments.
- FIG. 3b is a schematic diagram illustrating the row structure, including request elements, to a first stage of the switch.
- FIG. 3c is a schematic diagram illustrating the row structure, including request elements, to a second stage of the switch.
- FIG. 4 is a schematic illustration of a three stage 48×48 switch according to some embodiments.
- FIG. 5 is a schematic illustration of a 48×48 folded Clos architecture switch according to some embodiments.
- Appendix A is an engineering specification (Revision 0.3) for a port processor according to some embodiments.
- Appendix B is an engineering specification (Revision 0.3) for a switch element according to some embodiments.
- the apparatus of some embodiments generally includes a port processor and a switch element.
- FIG. 1 illustrates some features of the port processor 10
- FIG. 2 illustrates some features of the switch element 100 .
- the port processor 10 includes a SONET interface and a UTOPIA interface.
- the SONET interface includes a serial to parallel converter 12 , a SONET framer and transport overhead (TOH) extractor 14 , a high order pointer processor 16 , and a path overhead (POH) extractor 18 .
- TOH SONET framer and transport overhead
- POH path overhead
- the ingress side of the SONET interface includes forty-eight HDLC framers 20 (for IP), forty-eight cell delineators 22 (for ATM), and forty-eight 64-byte FIFOs 24 (for both ATM and IP).
- the ingress side of the SONET interface includes a demultiplexer and low order pointer processor 26 .
- the SONET interface includes, for TDM signals, a multiplexer and low order pointer generator 28 .
- the egress side of the SONET interface includes forty-eight 64-byte FIFOs 30 , forty-eight HDLC frame generators 32 , and forty-eight cell mappers 34 .
- the egress side of the SONET interface also includes a POH generator 36 , a high order pointer generator 38 , a SONET framer and TOH generator 40 , and a parallel to serial interface 42 .
- the UTOPIA interface includes a UTOPIA input 44 for ATM and Packets and one 4×64-byte FIFO 46 .
- the UTOPIA interface includes ninety-six 4×64-byte FIFOs 48 and a UTOPIA output 50 .
- the ingress portion of the port processor 10 also includes a switch mapper 52 , a parallel to serial switch fabric interface 54 , and a request arbitrator 56 .
- the egress portion of the port processor also includes a serial to parallel switch fabric interface 58 , a switch demapper 60 , and a grant generator 62 .
- For processing ATM and packet traffic, the port processor 10 utilizes, at the ingress portion, a descriptor constructor 64 , an IPF and ATM lookup processor 66 , an IP classification processor 68 , and a RED/Policing processor 70 , all of which may be located off-chip. These units process ATM cells and packets before handing them to a (receive) data link manager 72 . At the egress portion of the port processor, a (transmit) data link manager 74 and a transmit scheduler and shaper 76 are provided. Both of these units may be located off-chip. The port processor is also provided with a host interface 78 and a weighted round robin scheduler 80 .
- the purpose of the port processor at ingress to the switch is to unpack TDM, Packet, and ATM data and frame it according to the data frame described below with respect to FIG. 3 .
- the port processor also buffers TDM and packet data while making arbitration requests for link bandwidth through the switch element and grants arbitration requests received through the switch as described in more detail below.
- predetermined bytes e.g., the V1-V4 bytes in the SONET frame
- the port processor reassembles TDM, Packet, and ATM data.
- the V1-V4 bytes are regenerated at the egress from the switch.
- the port processor 10 includes dual switch element interfaces which permit it to be coupled to two switch elements or to two ports of one switch element.
- the “standby” link carries only frame information until a failure in the main link occurs and then data is sent via the standby link. This provides for redundancy in the switch so that connections are maintained even if a portion of the switch fails.
- a switch element 100 includes twelve “datapath and link bandwidth arbitration modules” 102 (shown only once in FIG. 2 for clarity). Each module 102 provides one link input 104 and one link output 106 through the switch element 100 . Those skilled in the art will appreciate that data entering any link input can, depending on routing information, exit through any link output. According to some embodiments, each module 102 provides two forward datapaths 108 , 110 , 112 , 114 and one return “grant” path 116 , 118 . The three paths are collectively referred to as constituting a single channel. The reason why two datapaths are provided is to increase the bandwidth of each channel.
- the two datapaths are interleaved to provide a single “logical” serial datastream which exceeds (doubles) the bandwidth of a single physical datastream.
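The interleave of two physical datapaths into one logical datastream of double bandwidth can be sketched as follows. This is an illustrative Python model under our own naming (the patent does not specify the merge granularity; word-by-word alternation is assumed here).

```python
# Model of two physical datapaths carrying one logical datastream.
# Word-by-word alternation is an assumption for illustration.
def interleave(path_a, path_b):
    """Merge two equal-length word streams into one logical stream."""
    out = []
    for a, b in zip(path_a, path_b):
        out.extend((a, b))
    return out

def deinterleave(stream):
    """Split the logical stream back onto the two physical paths."""
    return stream[0::2], stream[1::2]
```

A round trip through `interleave` and `deinterleave` recovers the original two streams, which is the property the dual-path channel relies on.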
- Data is routed from an input link 104 to an output link 106 via an input link bus 120 and an output link bus 122 .
- Return path grants are routed from an output link 106 to an input link 104 via a grant bus 124 .
- each “datapath and link bandwidth arbitration module” 102 include a data stream deserializer 126 , a data stream demapper 128 , a row buffer mapper 130 , a row buffer 132 , a request arbitration module 134 , a data stream mapper 136 , and a data stream serializer 138 .
- the return grant path for each module 102 includes a grant stream deserializer 140 , a grant stream demapper 142 , a grant arbitration module 144 , a grant stream mapper 146 , and a grant stream serializer 148 .
- the switch element 100 also includes the following modules which are instantiated only once and which support the functions of the twelve “datapath and link bandwidth arbitration modules” 102 : a link synchronization and timing control 150 , a request parser 152 , a grant parser 154 , and a link RISC processor 156 .
- the switch element 100 also includes the following modules which are instantiated only once and which support the other modules, but which are not directly involved in “switching”: a configuration RISC processor 158 , a system control module 160 , a test pattern generator and analyzer 162 , a test interface bus multiplexer 164 , a unilink PLL 166 , a core PLL 168 , and a JTAG interface 170 .
- a typical switch includes multiple port processors 10 and multiple switch elements 100 .
- forty-eight “input” port processors are coupled to twelve, “first stage” switch elements, four to each.
- Each of the first stage switch elements may be coupled to eight second stage switch elements.
- Each of the second stage switch elements may be coupled to twelve third stage switch elements.
- Four “output” port processors may be coupled to each of the third stage switch elements.
- a data frame of nine rows by 1700 slots is used to transport ATM, TDM, and Packet data from a port processor through one or more switch elements to a port processor.
- Each frame is transmitted in 125 microseconds, each row in 13.89 microseconds.
- Each slot includes a four-bit tag plus a four-byte payload (i.e., thirty-six bits).
- the slot bandwidth ( 1/1700 of the total frame) is 2.592 Mbps which is large enough to carry an E-1 signal with overhead.
- the four-bit tag is a cross connect pointer which may be set up when a TDM connection is provisioned.
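The frame arithmetic above can be checked directly: a slot position recurs once per row, nine rows per frame, 8,000 frames per second, and each slot is thirty-six bits. A small sketch (constant names are ours):

```python
# Frame arithmetic from the text: 9 rows x 1700 slots, 125 us per frame,
# each slot = four-bit tag + four-byte payload = 36 bits.
ROWS_PER_FRAME = 9
SLOTS_PER_ROW = 1700
FRAME_PERIOD_US = 125.0
SLOT_BITS = 4 + 4 * 8  # 36 bits

def slot_bandwidth_mbps():
    """Bandwidth of one slot position (1/1700 of the total frame)."""
    frames_per_sec = 1e6 / FRAME_PERIOD_US          # 8,000 frames/s
    slot_occurrences = frames_per_sec * ROWS_PER_FRAME  # once per row
    return slot_occurrences * SLOT_BITS / 1e6       # in Mbps

def row_period_us():
    """Transmission time of one row."""
    return FRAME_PERIOD_US / ROWS_PER_FRAME         # ~13.89 us
```

This reproduces the 2.592 Mbps slot bandwidth quoted above, which is indeed larger than the 2.048 Mbps E-1 rate plus overhead.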
- the last twenty slots of the frame are reserved for link overhead (LOH).
- LOH link overhead
- the link overhead (LOH) in the last twenty slots of the frame is analogous in function to the line and section overhead in a SONET frame.
- the contents of the LOH slots may be inserted by the switch mapper ( 52 in FIG. 1 ).
- a 36-bit framing pattern may be inserted into one of the twenty slots. The framing pattern may be common to all output links and configurable via a software programmable register.
- a 32-bit status field may be inserted into another slot. The status field may be unique for each output link and may be configurable via a software programmable register.
- a 32-bit switch and link identifier may be inserted into another slot. The switch and link identifier includes a four-bit link number, a twenty-four-bit switch element ID, and a four-bit stage number.
- a 32-bit stuff pattern may be inserted into slots not used by framing, status, or ID. The stuff pattern is common to all output links and may be configurable via a software programmable register.
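The link-overhead layout above can be sketched as a builder for the last twenty slots of a frame. The field widths and contents come from the text; the ordering of the framing, status, and ID slots and the bit layout of the switch-and-link identifier's packing order are assumptions for illustration.

```python
# Sketch of filling the twenty link-overhead (LOH) slots. Slot ordering
# and the identifier's bit packing are illustrative assumptions.
LOH_SLOTS = 20

def pack_switch_link_id(link_no, switch_id, stage_no):
    """Four-bit link number, twenty-four-bit switch element ID,
    four-bit stage number, packed into 32 bits (order assumed)."""
    return (link_no & 0xF) << 28 | (switch_id & 0xFFFFFF) << 4 | (stage_no & 0xF)

def build_loh(framing, status, switch_link_id, stuff):
    """Framing, status, and ID each take one slot; the rest get the
    software-configurable stuff pattern."""
    slots = [framing, status, switch_link_id]
    slots += [stuff] * (LOH_SLOTS - len(slots))
    return slots
```

In hardware these values would come from the software-programmable registers mentioned above; here they are plain arguments.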
- a PDU protocol data unit
- sixteen slots may be defined for a sixty-four-byte payload (large enough to accommodate an ATM cell with overhead).
- the format of the PDU is illustrated in FIG. 3 a .
- a maximum of ninety-six PDUs per row may be permitted (it being noted that the maximum number of ATM cells in a SONET OC-48 row is seventy-five).
- the sixteen four-bit tags (bit positions 32-35 in each slot) are not needed for PDU routing so they may be used as parity bits to protect the ATM or IP payload.
- twelve bytes (96 bits) may be used by the switch for internal routing (slots 0-2, bit positions 0-31).
- the PDUs may be self-routed through the switch with a twenty-eight-bit routing tag (slot 0, bit positions 0-27) which allows routing through seven stages using four bits per stage. The remaining sixty-eight bits of the PDU may be used for various other addressing information.
- the PDU bits at slot 0, bits 30-31 may be used to identify whether the PDU is idle (00), an ATM cell (01), an IP packet (10), or a control message (11).
- the two bits at slot 1, bit positions 30-31 may be used to indicate the internal protocol version of the chip which produced the PDU.
- the “valid bytes” field (slot 1, bits 24-29) may be used to indicate how many payload bytes are carried by the PDU when the FragID field indicates that the PDU is the last fragment of a fragmented packet.
- the VOQID field (slot 1, bit positions 19-23) identifies the class of service for the PDU.
- the class of service can be a value from 0 to 31, where 0 is the highest priority and 31 is the lowest.
- the FragID at slot 1, bits 17-18 indicates whether this PDU is a complete packet (11), a first fragment (01), a middle fragment (00), or a last fragment (10).
- the A bit at slot 1, bit position 16 is set if reassembly for this packet is being aborted, e.g. because of early packet (or partial packet) discard operations. When this bit is set, fragments of the packet received until this point are discarded by the output port processor.
- the fields labelled FFS are reserved for future use.
- the Seq# field at slot 1, bits 0-3 is a modular counter which counts packet fragments.
- the DestFlowId field at slot 2, bits 0-16 identifies the “flow” in the destination port processor to which this PDU belongs. A “flow” is an active data connection. There are 128K flows per port processor.
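The PDU header fields enumerated above can be collected into one unpacking sketch. The bit positions are taken from the text; the helper names and the returned dictionary are our representation, and which nibble of the routing tag serves which stage is an assumption.

```python
# Unpacking the PDU header fields described in the text. Bit positions
# come from the text; names and stage-to-nibble mapping are assumptions.
def get_bits(word, lo, hi):
    """Extract bits lo..hi (inclusive) of a 32-bit word."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

def parse_pdu_header(slot0, slot1, slot2):
    return {
        "routing_tag":  get_bits(slot0, 0, 27),   # 4 bits per stage, 7 stages
        "pdu_type":     get_bits(slot0, 30, 31),  # 00 idle, 01 ATM, 10 IP, 11 ctrl
        "version":      get_bits(slot1, 30, 31),  # internal protocol version
        "valid_bytes":  get_bits(slot1, 24, 29),  # payload bytes in last fragment
        "voq_id":       get_bits(slot1, 19, 23),  # class of service, 0 highest
        "frag_id":      get_bits(slot1, 17, 18),  # 11 whole, 01 first, 00 mid, 10 last
        "abort":        get_bits(slot1, 16, 16),  # reassembly aborted
        "seq":          get_bits(slot1, 0, 3),    # fragment sequence counter
        "dest_flow_id": get_bits(slot2, 0, 16),   # 17 bits -> 128K flows
    }

def hop_for_stage(routing_tag, stage):
    """Self-routing: four-bit output selector for stage 0..6
    (low-nibble-first order is assumed)."""
    return (routing_tag >> (4 * stage)) & 0xF
```

Note that the 17-bit DestFlowId field is exactly what the stated 128K (2^17) flows per port processor require.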
- bandwidth may be arbitrated among ATM and Packet connections as traffic enters the system.
- bandwidth may be arbitrated while maintaining TDM timing.
- bandwidth is arbitrated by a system of requests and grants which is implemented for each PDU in each row of the frame.
- the request elements which are generated by the port processors include “hop-by-hop” internal switch routing tags, switch element stage, and priority information.
- two request elements are sent in a bundle of three contiguous slots, and at least eight slots of non-request element traffic must be present between request element bundles.
- the time separation between request element bundles may be used by the arbitration logic in the switch elements and the port processors to process the request elements.
- the request element format is shown in section 7.1.5 of Appendix B.
- FIG. 3 b illustrates one example of how the row slots may be allocated for carrying PDUs and request elements.
- the maximum PDU capacity for a row is ninety-six.
- a block of sixteen slots which is capable of carrying a single PDU is referred to as a “group”.
- 1.5 slots of bandwidth may be used for carrying a forty-eight-bit request element (RE).
- FIG. 3 b illustrates how two REs are inserted into three slots within each of the first twenty-four groups. All the REs may be carried within the row as early as possible in order to allow the REs to ripple through the multistage switch fabric as soon as possible after the start of a row. Section 7 of Appendix B explains in detail how this affects the arbitration process.
- FIG. 3 b may be the desired format (for the first link) given system requirements and implementation constraints of a given embodiment. It places the REs early in the row but spaces them out enough to allow for arbitration. According to the present embodiment, the row structure differs somewhat depending on which link of the switch it is configured for.
- FIG. 3 b represents the row structure between the port processor and a switch element of the first switch fabric stage. The first block of two REs occupy the first three slots of the row.
- the present implementation of the arbitration logic which processes REs requires at least twelve slot times of latency between each three-slot block of REs on the input link.
- the row structure for the link between the first stage and the second stage may have the first group of REs starting at slot time 32 . This is illustrated in FIG. 3 c , which shows the same structure as FIG. 3 b offset by thirty-two slot times.
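The RE placement described above can be sketched by computing the start slot of each three-slot bundle: one bundle at the head of each of the first twenty-four sixteen-slot groups, with a whole-row offset for later fabric stages. The function name is ours, and placing each bundle exactly at its group's first slot is an assumption consistent with FIGS. 3b and 3c.

```python
# Start slots of the three-slot RE bundles in a row: one bundle per
# sixteen-slot group for the first 24 groups (48 REs total), shifted
# by a per-stage offset (0 for port-processor-to-stage-1, 32 for
# stage-1-to-stage-2 links).
GROUP_SLOTS = 16
RE_GROUPS = 24

def re_bundle_slots(stage_offset=0):
    return [stage_offset + g * GROUP_SLOTS for g in range(RE_GROUPS)]
```

With this spacing, thirteen non-RE slots separate the end of one bundle from the start of the next, satisfying the twelve-slot arbitration latency requirement stated above.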
- TDM traffic may be switched through the switch elements with a finest granularity of one slot per row.
- the TDM traffic may be switched through the same path for a given slot for every row.
- the switch elements may not allow different switch paths for the same TDM data slot for different rows within the frame. This means that the switch does not care about what the current row number is (within a frame). The only time row numbering matters is when interpreting the contents of the Link Overhead slots.
- the switch elements can switch TDM traffic with a minimum of 2.592 Mbps of switching bandwidth. Since a slot can carry the equivalent of four columns of traffic from a SONET SPE, it can be said that the switch elements switch TDM traffic with a granularity of a VT1.5 or VT2 channel. Although a VT1.5 channel occupies only three columns in the SONET SPE, it will still be mapped to the slot format which is capable of holding four SPE columns. As mentioned above, the format of the contents of the thirty-six-bit slot carrying TDM traffic is a four-bit tag and thirty-two bits of payload. The tag field definitions are shown in Table 1 below.
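The thirty-six-bit TDM slot format (four-bit tag over a thirty-two-bit payload) is simple enough to capture in a pair of helpers; tag-in-the-high-bits matches the bit positions 32-35 given earlier for the PDU case, and the helper names are ours.

```python
# The 36-bit TDM slot: four-bit tag (bits 32-35) + 32-bit payload.
def pack_slot(tag, payload):
    return (tag & 0xF) << 32 | (payload & 0xFFFFFFFF)

def unpack_slot(slot):
    return slot >> 32, slot & 0xFFFFFFFF
```

The tag values themselves are taken from the Table 1 definitions referenced above.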
- the switch elements know whether or not a slot contains TDM data via preconfigured connection tables. These tables may be implemented as an Input Cross Connect RAM for each input link.
- the input slot number is the address into the RAM, while the data output of the RAM contains the destination output link and slot number.
- the connection table can be changed by a centralized system controller which can send control messages, to the switch elements via either of two paths: (1) a host interface port or (2) in-band control messages which are sent via the link data channel. Since TDM connections will be changed infrequently, this relatively slow control message approach to update the connection tables is acceptable. It is the responsibility of an external software module to determine and configure the connection tables within the switch elements such that no TDM data will be lost.
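The Input Cross Connect RAM described above addresses by input slot number and yields the destination output link and slot. A minimal Python model (the dataclass and method names are our representation of that table):

```python
# Model of the per-input-link Input Cross Connect RAM: the input slot
# number is the address; the data is the destination link and slot.
from dataclasses import dataclass

@dataclass
class CrossConnectEntry:
    is_tdm: bool      # whether this slot carries a provisioned TDM connection
    out_link: int
    out_slot: int

class InputCrossConnect:
    def __init__(self, slots_per_row=1700):
        # default: no TDM connection provisioned on this slot
        self.ram = [CrossConnectEntry(False, 0, 0) for _ in range(slots_per_row)]

    def provision(self, in_slot, out_link, out_slot):
        """Set by the centralized system controller via control messages."""
        self.ram[in_slot] = CrossConnectEntry(True, out_link, out_slot)

    def lookup(self, in_slot):
        return self.ram[in_slot]
```

Because the same path is used for a given slot in every row, one table lookup per slot position fully determines the TDM switching, independent of the row number.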
- the receive side SONET interface of the port processor 10 includes the deserializer 12 and framer 14 .
- This interface may be configured as one OC-48 (sixteen bits wide at 155 MHz), four OC-12s (serial at 622 MHz), or four OC-3s (serial at 155 MHz).
- the deserializer 12 When configured as one OC-48, the deserializer 12 is not used.
- the deserializer 12 converts the serial data stream to a sixteen-bit wide parallel stream.
- the deserializer 12 includes circuitry to divide the input serial clocks by sixteen.
- the inputs to the deserializer include a one-bit serial data input, a one-bit 622 MHz clock and a one-bit 155 MHz clock.
- the outputs include a sixteen-bit parallel data output, a one-bit 38.87 MHz clock and a 9.72 MHz clock.
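The serial-to-parallel conversion can be sketched functionally: sixteen serial bits are collected into each sixteen-bit output word, which is also why the output word clocks are the input serial clocks divided by sixteen. MSB-first bit order is an assumption; the text does not specify it.

```python
# Functional sketch of the deserializer: 16 serial bits -> one 16-bit
# word. MSB-first ordering is assumed for illustration.
def deserialize(bits):
    words = []
    for i in range(0, len(bits) - 15, 16):
        w = 0
        for b in bits[i:i + 16]:
            w = (w << 1) | (b & 1)
        words.append(w)
    return words
```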
- the SONET interfaces are described in more detail in sections 3.2 and 3.3 of Appendix A.
- Parallel data is sent to the SONET framer and transport overhead (TOH) block 14 .
- All incoming signals may be framed according to the BELLCORE GR-253 standard which is incorporated herein by reference.
- the byte boundary and the frame boundary are found by scanning a series of sixteen bit words for the F628 pattern.
- the framer frames on the pattern F6F6F6282828.
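Scanning sixteen-bit words for the F628 pattern locates both the byte boundary and the frame boundary, since the pattern may start at any bit offset within a word. A sketch of that search (a real framer would also confirm the pattern repeats at the frame period before declaring lock; that confirmation is omitted here):

```python
# Hunt for the 0xF628 (A1/A2 transition) pattern at any bit alignment
# within a stream of 16-bit words. Returns (word_index, bit_offset)
# of the first match, or None.
def find_f628(words):
    for i in range(len(words) - 1):
        # 32-bit window spanning two adjacent words
        window = (words[i] << 16) | words[i + 1]
        for off in range(16):
            if (window >> (16 - off)) & 0xFFFF == 0xF628:
                return i, off
    return None
```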
- Independent SONET SPEs within the STS-N frame are demultiplexed by the framer 14 . There is a maximum of four independent line interfaces, therefore the framer 14 includes four independent framers.
- the inputs to the framer include a sixteen-bit parallel data input, and a one-bit clock which will accept 155 MHz, 38.87 MHz, or 9.72 MHz.
- the outputs of the framer include a sixteen-bit parallel data output, a one-bit start of frame (SOF) indication, a six-bit SPE ID used to indicate SONET SPE number.
- SOF start of frame
- SPE ID used to indicate SONET SPE number.
- the SPEs are numbered 1 through 48 with respect to the line side port configuration.
- the block 14 also terminates the transport (section and line) overhead for each independent SONET SPE. Since there are a maximum of forty-eight OC-1s on the line side, forty-eight transport overhead blocks are provided unless blocks are time-shared. The inputs to the TOH termination are the same as those discussed above with respect to the framer.
- the six-bit SPE ID enables data into this block. There may be no need for an output data bus as the traffic is routed to this block and to the next block (Ptr Proc 16) on the same data bus. The data path may only flow into this block, not through it.
- the pointer processor 16 uses the SONET pointer (H1, H2 and H3 bytes in the TOH) to correctly locate the start of the payload data being carried in the SONET envelope.
- the SONET pointer identifies the location of byte #1 of the path overhead.
- the pointer processor 16 is responsible for accommodating pointer justifications that were inserted in order to justify the frequency difference between the payload data and the SONET envelope. Since there may be a maximum of forty-eight OC-1s, forty-eight pointer processor blocks are mated to the forty-eight transport overhead termination blocks unless blocks are time-shared.
- the inputs to the pointer processor 16 are the same as those to the framer and TOH terminator 14 .
- the outputs include a sixteen-bit parallel data output, a one-bit start of SPE indicator which coincides with word 1 of the SPE, a one-bit SPE valid indicator which gaps out overhead and accommodates pointer movements, and a one-bit POH valid indicator which indicates when a path overhead byte is on the output bus.
- the POH processor 18 processes the nine bytes of Path Overhead in each of the forty-eight SONET SPEs. Since there are a maximum of forty-eight SPEs, forty-eight path overhead processors are provided unless processors are time-shared.
- the inputs to the path overhead processor 18 include an eight-bit parallel data input, a four-bit SPE ID, the one-bit start of SPE indicator, and the one-bit POH valid indicator.
- the outputs include a one-bit V1 indicator, J1 info, alarms, and path status. Further details about blocks 14 , 16 , and 18 are provided by the GR-253 standard and documentation accompanying standard SONET mapper/demappers such as those available from Lucent or TranSwitch.
- the payload is extracted from the SPE.
- the SPEs may be carrying TDM traffic, ATM cells or IP packets.
- the type of traffic for each SPE may be configured through the microprocessor interface 78 .
- Each SPE can carry only one type of traffic. The data from each SPE is routed directly to the correct payload extractor.
- SPEs containing packets and ATM cells may be sent to the HDLC framer 20 and the cell delineation block 22 , respectively.
- Each SPE may be configured to carry packet data (packet over SONET).
- the Port Processor 10 supports packet over SONET for the following SONET (SDH) signals: STS-1 (VC-3), STS-3c (VC-4), STS-12c (VC-4-4c), and STS-48c (VC-4-16c).
- SDH SONET
- STS-1 VC-3
- STS-3c VC-4
- STS-12c VC-4-4c
- STS-48c VC-4-16c
- the datagrams may be encapsulated in PPP packets which are framed using the HDLC protocol.
- the HDLC frames are mapped byte wise into SONET SPEs and high order SDH VCs.
- the HDLC framer 20 performs HDLC framing and forwards the PPP packet to a FIFO buffer 24 where it awaits assembly into PDUs.
- the framer 20 has an input which includes a sixteen-bit parallel data input, a six-bit SPE ID, a one-bit SPE valid indicator, and a one-bit PYLD valid indicator.
- the output of the framer 20 includes a sixteen-bit data bus, a one-bit start of packet indicator, and a one-bit end of packet indicator. Further details about packet extraction from SONET are found in IETF (Internet Engineering Task Force) RFC 1619 (1999) which is incorporated herein by reference.
- the cell delineation block 22 is based on ITU-T G.804, “ATM Cell Mapping into Plesiochronous Digital Hierarchy (PDH)”, 1998, the complete disclosure of which is hereby incorporated herein by reference.
- the cell delineation block 22 has inputs that include a sixteen-bit parallel data bus, a six-bit SPE ID, a one-bit SPE valid indicator, and a one-bit POH valid indicator.
- the outputs include a sixteen-bit parallel data bus and a one-bit start of cell indicator. Cells are placed in a FIFO 24 while awaiting assembly into PDUs. Further details regarding ATM extraction from SONET are found in ITU-T G.804.
- the TDM data is routed to a TDM demultiplexer and low order pointer processor block 26 where the low order VTs and VCs are identified. If a particular SPE is configured for TDM data, then the TDM mapping is described using the host interface 78 .
- Each SPE can carry a combination of VC-11, VC-12, VC-2, VC-3 & VC-4.
- the VCs and VTs are demultiplexed out of the SONET signal based on the configuration for each of the SPEs. There is no interpretation of the traffic required to locate the containers and tributaries as all of this information is found in the configuration table (not shown) which is configured via the host interface 78 . Frames are located inside of the VCs and the VTs through the H4 byte in the path overhead of the SPE. Pointer processing is performed as indicated by the V bytes in the VT superframe.
- the TDM demultiplexer and low order pointer processor block 26 has inputs which include sixteen bits of parallel data, a six-bit SPE ID, a one-bit start of SPE indicator, a one-bit SPE valid indicator, a one-bit V1 indicator, and a one-bit POH valid indicator.
- the TDM demultiplexer and low order pointer processor block 26 provides the following outputs to the switch mapper 52: sixteen bits of parallel data, a one-bit VT/VC valid indicator, a six-bit SPE ID, and a five-bit VT/VC Number (0-27).
- the TDM data is placed in reserved slots in the frame as mentioned above and described in more detail below with reference to the switch mapper 52. Further details regarding TDM extraction are found in the GR-253 specification.
- IP packets and ATM cells from the UTOPIA interface 44 may be placed in FIFO 46 . Packets and cells from the FIFOs 24 may be merged with the packets and cells from the FIFO 46 .
- the descriptor constructor 64 determines whether the data is an ATM cell or an IP packet and generates a corresponding interrupt to trigger the IPF/ATM look-up processor 66 to perform either IP routing look-up or ATM look-up. IP routing look-up is performed by searching for the IP destination address for every packet and the IP source address for packets that need classification. ATM look-up is performed by searching the VPI/VCI fields of the cells.
- Outputs of the IPF/ATM look-up processor 66 for both IP packets and ATM cells include a seventeen-bit flow index, a five-bit QOS index, and an indicator showing whether the IP packet needs classification. If the IP packet needs classification, the packet is passed to the IP classification processor 68 for classification; otherwise it is passed to the next stage of packet processing, the RED/policing processor 70.
- IP classification is described in detail in section 6.4 of Appendix A.
- the RED/Policing processor 70 performs random early detection and weighted random early detection for IP congestion control, performs leaky bucket policing for ATM traffic control, and performs early packet and partial packet discard for controlling ATM traffic which contains packets.
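- The random early detection step can be illustrated with a minimal sketch. This is a hypothetical illustration only (the patent defers the details to Appendix A); the parameter names and the linear drop-probability curve are the conventional RED formulation, not the patent's exact design:

```python
import random

def red_drop(avg_queue, min_th, max_th, max_p, rng=random.random):
    """Decide whether to drop an arriving packet under RED.

    Minimal sketch of random early detection: the drop probability
    rises linearly from 0 to max_p between the two queue thresholds.
    Weighted averaging and per-class (WRED) thresholds are omitted.
    """
    if avg_queue < min_th:
        return False                 # below min threshold: never drop
    if avg_queue >= max_th:
        return True                  # above max threshold: always drop
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return rng() < p                 # linear drop probability in between
```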
- the RED/Policing traffic control is described in detail in sections 7.5 et seq. of Appendix A.
- Some embodiments of the port processor 10 include a mode register (not shown) which can be placed in a bypass mode to globally turn off IP/ATM forwarding. In bypass mode, an external device is used for IP/ATM forwarding, and the data descriptors generated by the descriptor constructor 64 are routed directly to an output FIFO (not shown).
- All of the data stored in the FIFOs 24 and 46 may be in fifty-two-byte “chunks”. If an IP packet is longer than fifty-two bytes, it may be segmented into multiple fifty-two-byte chunks.
- the input data descriptor for each chunk includes indications of whether the chunk is an ATM cell or a packet, whether it is the start of a packet or the end of a packet, packet length, and the source and destination port numbers.
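- The chunking and descriptor construction described above can be sketched as follows. The descriptor field names here are illustrative assumptions, not the patent's exact layout:

```python
CHUNK_SIZE = 52  # fifty-two-byte "chunks" per the text above

def segment_packet(payload: bytes):
    """Segment an IP packet into 52-byte chunks, each carrying an input
    data descriptor. A sketch only: field names are hypothetical."""
    n = max(1, -(-len(payload) // CHUNK_SIZE))   # ceiling division
    chunks = []
    for i in range(n):
        piece = payload[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        chunks.append({
            "is_cell": False,                    # packet, not an ATM cell
            "start_of_packet": i == 0,
            "end_of_packet": i == n - 1,
            "length": len(piece),
            "data": piece.ljust(CHUNK_SIZE, b"\x00"),  # pad the final chunk
        })
    return chunks
```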
- Cells and packets which survive RED/policing are read by the receive data link manager 72, which creates the PDUs described above with reference to FIG. 3a.
- the receive data link manager is described in detail in section 8 of Appendix A.
- processed cells and packets are stored in an external FIFO which is read whenever it is not empty.
- the switch mapper 52 receives TDM traffic from the TDM demultiplexer and low order pointer processor 26 as well as PDUs from the data link manager 72. As mentioned above, the switch mapper also receives request elements. The request elements are formed by the arbiter 56 as described in more detail below. It is the function of the switch mapper (also referred to as the data mapper in Appendix A) to arrange TDM data, PDUs, and request elements in the frame described above with reference to FIGS. 3 and 3a-c.
- the switch mapper 52 includes a state machine (not shown) which is associated with the ATM/IP PDUs.
- the data link manager 72 writes the PDU's using a sixty-four-bit interface to the external FIFO (not shown).
- the data may be transmitted from the external FIFO to the switch mapper 52 in thirty-two-bit slots with four bits of parity.
- the state machine associated with the external PDU FIFO monitors the status of the FIFO and maintains data integrity.
- each incoming ATM cell and packet is processed by performing a lookup based on the ATM VPI/VCI or on the IP source and destination. This lookup first verifies that the connection is active, and if active, it returns a seventeen-bit index. For ATM cells, the index points to a set of per VC parameters and to routing information. For packets, the index points to a set of queuing parameters and to routing information.
- the seventeen-bit index supports a maximum of 128K simultaneous IP and ATM flows through the port processor.
- the ATM cells are encapsulated in a cell container and stored in one of 128K queues in external memory. These 128K queues are managed by the data link manager 72.
- the IP packets are fragmented into fifty-two-byte blocks and each of these blocks may be encapsulated in a cell container (PDU). These cell containers are also stored in one of the 128K queues in external memory by the data link manager.
- the 128K IP/ATM flows may be aggregated into one of thirty-two QOS queues for scheduling through the switch.
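- The lookup and aggregation flow above can be sketched as a simple table lookup. The dict-based table and the tuple layout of its entries are assumptions standing in for the hardware structures:

```python
def lookup(flow_table, key):
    """Connection lookup sketch: an active connection (keyed by ATM
    VPI/VCI or IP source/destination) maps to a seventeen-bit flow
    index and one of thirty-two QOS queues."""
    entry = flow_table.get(key)
    if entry is None:
        return None                          # connection not active
    flow_index, qos_queue = entry
    assert 0 <= flow_index < (1 << 17)       # up to 128K simultaneous flows
    assert 0 <= qos_queue < 32               # thirty-two QOS queues
    return flow_index, qos_queue
```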
- the data link manager 72 also aggregates all the control headers desired for transmission of cells through the switch into the QOS queues and inserts these routing tags into one of thirty-one QOS routing tag FIFOs. One of the queues may be reserved for high priority traffic. Any cells arriving in the high priority queue may interrupt the scheduler 80 and may be scheduled to leave the high priority queue immediately.
- the scheduler 80 may be responsible for scheduling cell containers through the switch.
- the scheduling algorithm used may be a weighted round robin, which operates on the QOS queues. Once cells have been scheduled from these queues the control headers from these queues are forwarded to the arbiter 56 and are stored in a request control table (not shown).
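- One pass of a weighted round robin over the QOS queues can be sketched as below. The weight semantics (cells dequeued per visit) are an assumption; the patent does not fix them here:

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """One scheduling pass: visit each QOS queue in order and dequeue
    up to its weight in cell containers. A sketch of the weighted
    round robin named above."""
    scheduled = []
    for qid, queue in enumerate(queues):
        for _ in range(weights[qid]):
            if not queue:
                break                        # queue exhausted for this pass
            scheduled.append(queue.popleft())
    return scheduled
```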
- the request arbiter 56 forms request elements from the control headers and forwards these requests to the switch data mapper 52 for transmission through the switch.
- the grants received in response to these requests may be deserialized by block 58, deframed, and transferred back to the arbiter block 56 by the grant block 62.
- the cell containers may be dequeued from external memory by the data link manager 72 and transferred to the switch mapper 52 for transmission through the switch.
- the port processor 10 supports redundancy in order to improve reliability. Two redundancy schemes are supported. In the first redundancy scheme, the switch controller supports redundant routing tags and transparent route switch-over. In the second redundancy scheme, the port processor supports redundant data channels in both input and output directions. The redundant data channels connect to two separate switch fabrics. In the Appendices they are referred to as the A and B data channels. Each control header contains two routing tags, and each routing tag has a corresponding AB channel tag. This provides for two routes through the switch for data transmission. If both routing tags have the same channel tag, this allows for two alternate paths through the same switch fabric.
- An AB channel tag may be used to indicate whether the data is to be routed using the A data channel or the B data channel. If, after a programmable number of consecutive tries, no grant is received in response to request elements using the A channel routing tag, a bit may be set to switch over to the B channel routing tag. Further details of the redundancy feature are provided in sections 10.2.3 and 9.2.3 of Appendix A.
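- The A-to-B switch-over rule above can be sketched as a small state holder. Class and member names are illustrative, not the patent's:

```python
class ABChannelSelector:
    """After a programmable number of consecutive un-granted requests
    on the A channel, set a bit that selects the B channel routing
    tag, per the rule described above."""

    def __init__(self, max_tries):
        self.max_tries = max_tries       # programmable try limit
        self.failures = 0                # consecutive requests with no grant
        self.use_b_channel = False       # the switch-over bit

    def on_no_grant(self):
        self.failures += 1
        if self.failures >= self.max_tries:
            self.use_b_channel = True

    def on_grant(self):
        self.failures = 0                # a grant resets the failure count
```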
- the arbiter 56 may be responsible for sending requests to the switch mapper 52 and processing the grants that arrive from the grant demapper 62 .
- the arbiter dequeues requests from a routing tag FIFO, copies this information into a request control table, writes the FLOWID into FLOWID RAM, resets a request trial counter that counts the number of times a request has been tried, and resets the grant bit.
- Each request message has a unique request ID which is returned in the grant message.
- the request ID is the index in the arbiter request control table into which the routing tag is copied.
- the routing tag along with the request ID may be forwarded to a routing tag formatter block, which formats the routing tag into a request message and inserts the request into a request FIFO in the switch mapper 52.
- the grant demapper in the grant block 62 stores the request ID and the grant in a FIFO called the grant_reqid FIFO.
- the request IDs are dequeued from the A and B grant reqid FIFOs alternately, depending on whether the switchover bit is set.
- the request IDs dequeued from the FIFO are used to set a grant bit in the grant register at the bit position indicated by the request ID, to index the FLOWID RAM, and to read the FLOWID associated with the request ID.
- This FLOWID is written into a deq_flowid FIFO for the appropriate channel, i.e., if the request ID is dequeued from the A reqid_fifo, the FLOWID is written into the A deq_flowid FIFO.
- the data link manager 72 monitors the deq_flowid FIFO and uses the FLOWID to dequeue data PDUs from external memory and send them to the switch mapper 52 for transmission in the next row time.
- An end_of_grants signal is asserted by the grant demapper 62 when no more grants can be received at the grant demapper. In most switch implementations the end_of_grants signal is rarely, if ever, asserted; it is more likely to be asserted in switches having many stages. Once the end_of_grants signal has been received, the arbiter 56 begins the process of updating the request control table. If a grant has not been returned for a routing tag stored in the request control table, the request trial counter is incremented and a new request is generated using the routing tag.
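- The request control table flow (issue a request at an index that serves as the request ID, look up the FLOWID on grant, and retry ungranted entries after end_of_grants) can be sketched as follows. The fixed table size and field names are assumptions:

```python
class ArbiterTable:
    """Minimal sketch of the arbiter's request control table described
    above. The request ID is the table index; a returned grant uses
    that ID to read the FLOWID for dequeuing."""

    def __init__(self, size=64):
        self.routing_tags = [None] * size
        self.flow_ids = [None] * size    # stands in for the FLOWID RAM
        self.trial_count = [0] * size    # request trial counters
        self.granted = [False] * size    # grant bits
        self.deq_flowid_fifo = []        # FLOWIDs queued for dequeuing

    def issue_request(self, request_id, routing_tag, flow_id):
        self.routing_tags[request_id] = routing_tag
        self.flow_ids[request_id] = flow_id
        self.trial_count[request_id] = 0
        self.granted[request_id] = False

    def on_grant(self, request_id):
        self.granted[request_id] = True
        # FLOWID is read from the RAM and queued for the data link manager
        self.deq_flowid_fifo.append(self.flow_ids[request_id])

    def retry_ungranted(self):
        """After end_of_grants: re-request every entry without a grant."""
        retries = []
        for rid, tag in enumerate(self.routing_tags):
            if tag is not None and not self.granted[rid]:
                self.trial_count[rid] += 1
                retries.append((rid, tag))
        return retries
```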
- the TDM data, ATM/IP PDU's and the request messages may be combined into a single data stream for transmission through the switch fabric. This combination may be performed by the switch mapper 52 on the receive side of the port processor.
- a switch demapper 60 separates TDM data from ATM/IP PDUs.
- the demapper 60 may be provided with external memory for a PDU FIFO.
- the demapper writes PDUs to the FIFO and interrupts the data link manager 74 .
- the data link manager 74 reads the header information from the PDU FIFO, and extracts the FLOWID.
- the datalink manager 74 retrieves a Linked List/Shaping/Scheduling data structure from external memory.
- the data link manager 74 writes the linked list pointers to the PDU FIFO, then initiates a DMA transfer to move the PDU to external memory.
- the data link manager updates the head, tail, and count fields in the Linked List/Shaping/Scheduling data structure and passes the data structure to the Shaping/Scheduling processor 76 through a Shaping/Scheduling FIFO.
- the Shaping/Scheduling processor 76 performs the Shaping and Scheduling functions and updates the Linked List/Shaping/Scheduling data structure.
- the data-flow from external memory to the SONET/UTOPIA Data FIFOs 30 and 48 may be as follows.
- the data link manager 74 polls the PDU FIFO and SONET/UTOPIA FIFO status flags. If the PDU FIFO is not empty and the SONET/UTOPIA FIFO is not full for a particular output port, the data link manager 74 retrieves the Linked List/Shaping/Scheduling data structure for the Flow ID read from the PDU FIFO.
- the data link manager will continue to retrieve PDUs from the Linked List until a PDU with an End of Packet indicator is found.
- the data link manager then initiates a DMA transfer from external memory to the SONET/UTOPIA FIFOs 30, 48.
- the data link manager 74 then updates the Linked List/Shaping/Scheduling data structure and writes it back to external memory.
- On the transmit side of the port processor 10, the grant framer, deframer, serializer, and deserializer in the grant block 62, the switch demapper 60, the transmit data link manager 74, and the transmit scheduler and shaper 76 are referred to collectively as the transmit (TX) switch controller in Appendix A.
- the TX switch controller is responsible for either accepting or rejecting requests that come into the port processor for output transmission. To do this, the TX switch controller checks whether the queue identified by the output port number of the request can accept a cell container. These one hundred twenty-eight queues may be managed by the TX data link manager 74. According to some embodiments, these queues are stored in external memory.
- the scheduling of these cell containers may be performed by the TX scheduler 76. If the queue can accept the cell container, the request is turned into a grant and inserted into a grant_fifo. The grant framer and serializer 62 reads this information and creates a grant message for transmission through the grant path.
- the TX switch controller monitors the status of the data queues for each of the one hundred twenty-eight output ports using the following three rules. If the full_status bit for the requested output port is set, there is no buffer space in the queue for any data PDUs destined for that output port, and all requests to that output port may be denied. If the full_status bit is not set but the nearly_full_status bit is set, there is some space in the queue for data PDUs destined for that output port; however, this space may be reserved for higher priority traffic. In this instance the QOS number is checked against a programmed threshold QOS number, and if the QOS number is less than the threshold, the request will be accepted. If the nearly_full_status bit is not set, all incoming requests may be granted.
- When a request is granted, the corresponding output port counter is incremented. This reserves space in the data buffer (30 or 48) for the arrival of the data PDU at that output port.
- the transmit data link manager 74 constantly monitors the one hundred twenty-eight output port counters and sets/resets the one hundred twenty-eight full and nearly full status bits.
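- The three admission rules above reduce to a small predicate. This sketch assumes, consistent with the "less than the threshold" rule in the text, that a lower QOS number means higher priority:

```python
def accept_request(full_status, nearly_full_status, qos, qos_threshold):
    """The TX switch controller's three admission rules as a predicate."""
    if full_status:
        return False                 # rule 1: no buffer space, deny all
    if nearly_full_status:
        return qos < qos_threshold   # rule 2: reserve space for high priority
    return True                      # rule 3: grant all incoming requests
```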
- the port processor 10 creates complete outgoing SONET signals. All of the transport and path overhead functions are supported.
- the SONET interfaces can run in source timing mode or loop timing mode.
- the high order pointer is adjusted by the high order pointer generator 38 through positive and negative pointer justifications to accommodate timing differences in the clocks used to generate the SONET frames and the clock used to generate the SONET SPEs.
- SPE FIFOs may be allowed to fill to halfway before data is taken out. The variations around the center point are monitored to determine if the rate of the SONET envelope is greater than or less than the rate of the SPE. If the rate of the SONET envelope is greater than the rate of the SPE, then the SPE FIFO will gradually approach a more empty state. In this case, positive pointer movements will be issued in order to give the SPE an opportunity to send additional data.
- the SPE FIFO will gradually approach a more full state. In this case, negative pointer movements will be issued in order to give the SPE an opportunity to output an extra byte of data from the FIFO.
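- The fill-level rule above can be sketched as a threshold decision around the FIFO's halfway point. The hysteresis thresholds here are illustrative assumptions, not values from the specification:

```python
def pointer_decision(fifo_fill, low=24, high=40):
    """Pointer-justification sketch for a 64-byte SPE FIFO nominally
    half full: drifting empty means the SONET envelope outpaces the
    SPE (positive justification); drifting full means the SPE is
    faster (negative justification)."""
    if fifo_fill < low:
        return "positive"    # envelope faster: give the SPE a chance to catch up
    if fifo_fill > high:
        return "negative"    # SPE faster: output an extra byte from the FIFO
    return "none"
```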
- the SONET framer and TOH generator 40 generate transport overhead according to the BELLCORE GR-253 standard.
- the outgoing SONET frames may be generated from either the timing recovered from the receive side SONET interface or from the source timing of the Port Processor. Each signal is configured separately and they can be configured differently.
- the frame orientation of the outgoing SONET frames may be arbitrary.
- Each of the four signals can be running off different timing so there is no need to try to synchronize them together as they will constantly drift apart. There is no need to frame align the Tx ports to the Rx ports as this would result in realigning the Tx port after every realignment of the Rx port.
- the 16-bit wide internal bus is serialized to 155 Mbps or 622 Mbps by the serializer 42 .
- the entire sixteen bit bus is output under the control of an external serializer (not shown).
- There is a potential for forty-eight different SPEs being generated for the outgoing SONET interfaces. All of these SPEs may be generated from a single timing reference. This allows all of the SPE generators to be shared among all of the SONET and Telecom bus interfaces without multiplexing between the different clocks of the different SONET timing domains.
- the SPE consists of the Path level overhead and the payload data.
- the payload data can be TDM, ATM or packet. All of these traffic types are mapped into single SPEs or concatenated SPEs as desired by their respective standards. As the SPEs are generated, they are deposited into SPE FIFOs.
- For each SPE there is a sixty-four-byte FIFO, and these individual SPE FIFOs are concatenated through SPE concatenation configuration registers. As described above, the fill status of the SPE FIFOs is used to determine the correct time to perform a positive or negative pointer justification.
- TDM, ATM, and packet data may all be mapped into SONET SPEs as specified by their respective standards.
- the type of data carried in each of the potential forty-eight SPEs may be configured through the external host processor. Based on this configuration, each SPE generator may be allocated the correct type of mapper. All of this configuration may be performed at initialization and may be changed when the particular SPE is first disabled.
- Each of the ATM and packet payload mappers has a payload FIFO into which it writes payload data for a particular SPE. For TDM traffic, each potential Virtual Container is allocated its own FIFO.
- the data stream deserializer 126 synchronizes to the incoming serial data stream and then reassembles the row stream which is transported using two physical unilink channels. It also provides FIFO buffering on each of the two incoming serial streams so that the streams may be “deskewed” prior to row reassembly. It recovers the thirty-six-bit slot data from the row stream in a third FIFO which is used for deskewing the twelve input links. This deskewing allows all the input links to forward slot N to the switching core simultaneously. The link deskewing is controlled by the link synchronization and timing control module 150.
- the deserializer 126 also continuously monitors the delta between where slot 0 of the incoming row is versus the internal row boundary signal within the switch element. The difference may be reported to the Link RISC Processor 156 and used (in the first stage of a switch) as part of the ranging process to synchronize the port processor connected to the input link.
- the data-stream demapper 128 may be responsible for extracting the data from the incoming serial data links. It demaps the input link slots based on the input slot number and determines whether the traffic is TDM, PDU, or a request element (RE). For TDM traffic, the demapper determines the destination link and row buffer 132 memory address. This information is stored in a demapper RAM (not shown), which may be configured by software when TDM connections are added or torn down. For PDU traffic, the demapper 128 assembles all sixteen slots which make up the PDU into a single 64-byte PDU word, then forwards this entire PDU word to the row buffer mapper logic 130 .
- the PDUs may be assembled prior to forwarding them to the row buffer 132 so that the row buffer mapper 130 can write the entire PDU to the row buffer 132 in a single clock cycle. This provides the maximum possible write-side memory bandwidth to the row buffer 132 . It is a significant feature of the switch element that twelve entire PDUs are written to a single row buffer in six link slot times (twelve core clock cycles). For request elements, the demapper 128 assembles the three-slot block of REs into two forty-eight-bit REs and forwards them to the request parser module 152 . A detailed description of the data stream demapper 128 is provided in Sections 4.3.1 et seq. of Appendix B.
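- The sixteen-slot PDU assembly described above can be sketched as a simple concatenation. This assumes 32 data bits per slot (the four parity bits per slot are checked elsewhere and omitted here):

```python
def assemble_pdu(slots):
    """Concatenate sixteen 32-bit data slots into one 64-byte (512-bit)
    PDU word, so the row buffer mapper can write the whole PDU in a
    single clock cycle."""
    assert len(slots) == 16
    pdu = 0
    for slot in slots:                    # slot 0 becomes most significant
        pdu = (pdu << 32) | (slot & 0xFFFFFFFF)
    return pdu.to_bytes(64, "big")
```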
- the row buffer mapper 130 may be responsible for mapping traffic which is received from the data stream demapper 128 into the row buffer 132 .
- the mapper 130 provides FIFO buffers for the TDM traffic as it is received from the data stream demapper 128 , then writes it to the row buffer 132 .
- the row buffer memory address is actually preconfigured in the demapper RAM (not shown) within the data stream demapper module 128 . That module forwards the address to the row buffer mapper 130 along with the TDM slot data.
- the mapper 130 also writes PDU traffic from the data stream demapper 128 to the row buffer 132 and computes the address within the row buffer 132 where each PDU will be written.
- PDUs are written into the row buffers starting at address 0 and then every sixteen-slot address boundary thereafter, up to the maximum configured number of PDU addresses for the row buffer 132 .
- a detailed description of the row buffer mapper 130 is provided in Section 4.3.1.4 of Appendix B.
- the row buffer 132 contains the row buffer memory elements. According to some embodiments, it provides double buffered row storage which allows one row buffer to be written during row N while the row data which was written during row N−1 is being read out by the data stream mapper 136. Each row buffer is capable of storing 1536 slots of data. This allows the row buffer to store ninety-six PDUs or 1536 TDM slots or a combination of the two traffic types. Request elements and link overhead slots may not be sent to the row buffer 132. Therefore the row buffer may not need to be sized to accommodate the entire 1700 input link slots.
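- The sixteen-slot boundary rule above fixes the row buffer address of each PDU arithmetically; with 1536 slots per row buffer, at most ninety-six PDUs fit. A sketch (the error handling is an illustrative assumption):

```python
def pdu_row_buffer_address(pdu_index, max_pdus=96):
    """PDU n is written starting at slot address 16*n, up to the
    configured maximum number of PDU addresses for the row buffer."""
    if not 0 <= pdu_index < max_pdus:
        raise ValueError("row buffer PDU capacity exceeded")
    return pdu_index * 16
```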
- TDM data is written to the row buffer as individual thirty-six-bit slots, while PDU data is written as an entire 576-bit word.
- Request arbitration utilizes two components: a centralized request parser module 152 and a request arbitration module 134 for each of the output links.
- Request elements are extracted from the input slot stream by the data stream demapper 128 and are forwarded to the request parser 152 .
- the request parser 152 forwards the forty-eight-bit request elements to the appropriate request arbitration module 134 via two request buses (part of the input link bus 120 ).
- Each request bus may contain a new request element each core clock cycle. This timing allows the request arbitration logic to process thirteen request sources in less than eight core clock cycles.
- the thirteen request sources are the twelve input data streams and the internal multicast and in band control messaging module 156 .
- the request arbitration module 134 monitors the two request element buses and reads in all request elements which are targeted for the output link that the request arbitration module is implementing. According to some embodiments, the request arbitration module 134 provides buffering for up to twenty-four request elements. When a new request element is received, it is stored in a free RE buffer (not shown). If there are no free RE buffers, then the lowest priority RE already stored in a buffer is replaced with the new RE if the new RE is of higher priority. If the new RE is equal to or lower in priority than all REs currently stored in the RE buffers, then the new RE is discarded.
- the request arbitration module 134 forwards the highest priority RE which is stored in the RE buffers to the data stream mapper module 136 . If the RE buffers are empty, then an “Idle” RE may be forwarded. A detailed description of the request arbitration module 134 is provided in Section 7 of Appendix B.
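- The buffering and eviction policy above can be sketched as follows. This assumes a larger number means higher priority; the class and method names are illustrative:

```python
class REBuffer:
    """Request-element buffering per the rules above: a bounded store
    where a new RE evicts the lowest-priority stored RE only when the
    buffers are full and the new RE is strictly higher priority."""

    def __init__(self, capacity=24):
        self.capacity = capacity
        self.buffers = []                # list of (priority, payload)

    def offer(self, priority, payload):
        if len(self.buffers) < self.capacity:
            self.buffers.append((priority, payload))
            return True
        lowest = min(range(len(self.buffers)), key=lambda i: self.buffers[i][0])
        if priority > self.buffers[lowest][0]:
            self.buffers[lowest] = (priority, payload)
            return True
        return False                     # equal or lower priority: discard

    def forward_highest(self):
        """Forward the highest-priority RE, or an 'Idle' RE if empty."""
        if not self.buffers:
            return "Idle"
        best = max(range(len(self.buffers)), key=lambda i: self.buffers[i][0])
        return self.buffers.pop(best)[1]
```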
- the data stream mapper 136 may be responsible for inserting data and request elements into the outgoing serial data links. This includes mapping of the output link slots based on the output slot number to determine whether the traffic is TDM, PDU, request element, or test traffic. The determination is based on the contents of the mapper RAM (not shown). For TDM traffic, the row buffer memory address may be determined from the mapper RAM which is configured by software as TDM connections are added or torn down. For PDU traffic, the data stream mapper 136 uses one slot at a time from the row buffer 132. The row buffer memory address may be stored in the mapper RAM by software.
- the mapper 136 transmits an idle pattern in order to ensure that a data PDU is not duplicated within the switch.
- For request elements, the mapper assembles the three-slot block of REs from two forty-eight-bit REs. The REs are read from the request arbitration module 134.
- For test traffic, the mapper 136 inserts the appropriate test pattern from the output link bus 122. These test patterns are created by either the test pattern generator 162 or the test interface bus 164 modules.
- the data stream mapper supports slot multicasting at the output stage. That is, the data stream mapper for any output link is able to copy whatever any other output link is sending out on the current slot time. This copying is controlled via the mapper RAM and allows the mapper to copy the output data from another output link on a slot-by-slot basis.
- a detailed description of the data stream mapper 136 is provided in Section 4 of Appendix B.
- the data stream serializer 138 creates the output link serial stream. Data slots are received via the data stream mapper module 136 and the link overhead is generated internally by the data stream serializer 138 .
- the serializer 138 also splits the row data stream into two streams for transmission on the two paths 110, 114. A detailed description of this module is provided in Section 11 of Appendix B.
- the grant stream deserializer 140 in each module 102 works in much the same manner as the data stream deserializer 126 .
- the primary difference is that the grant data only utilizes a single path, thus eliminating the need for deskewing and deinterleaving to recover a single input serial stream. Since this serial link is only one half the data stream rate of the forward link, there are 850 slots per row time.
- a single FIFO (not shown) may be used to allow for deskewing of the input serial grant streams for all 12 links.
- a detailed description of the grant stream deserializer 140 is provided in Section 11 of Appendix B.
- the grant stream demapper 142 may be responsible for extracting the data from the incoming serial grant links. This includes demapping of the received grant link slots based on the input slot number to determine whether the traffic is a grant element or another kind of traffic. The determination is based on the contents of the grant demapper RAM (not shown). According to some embodiments, traffic other than grant elements is not yet defined. For grant elements, the grant stream demapper 142 assembles the three-slot block of GEs into two forty-eight-bit GEs and forwards them to the single grant parser module 154. A detailed description of the grant stream demapper 142 is provided in Section 7.2.3.2 of Appendix B.
- the grant arbitration module 144 operates in a similar manner to the request arbitration logic 134 .
- this module is identical to the request arbitration module. The only difference may be that it processes grant elements in the reverse path instead of request elements in the forward path. It will be recalled that grant elements are, in fact, the request elements which have been returned.
- the grant stream mapper 146 may be responsible for inserting data into the outgoing serial grant links. It maps the output grant slots based on the output slot number to determine whether the traffic is a grant element or test traffic. The determination is based on the contents of the grant mapper RAM (not shown). For grant elements, it assembles the three-slot block of GEs from two forty-eight-bit GEs. The GEs are read from the grant arbitration module 144. For test patterns, it inserts the appropriate test pattern from the output link bus 122. These test patterns may be created by either the test pattern generator 162 or the test interface bus 164 modules. A detailed description of the grant stream mapper 146 is provided in Section 7.2.3.2 of Appendix B.
- the grant stream serializer 148 works in much the same manner as the data stream serializer 138 . The primary difference is that the grant data only utilizes a single path, thus eliminating the need for interleaving the transmit serial stream across multiple output serial streams. Since this serial link is only one half the forward data stream rate, there are only 850 slots per row time. A detailed description of the grant stream serializer 148 is provided in Section 11 of Appendix B.
- the modules described above may be instantiated for each link module 102 of which there are twelve for each switch element 100 .
- the following modules may be instantiated only once for each switch element.
- the link synchronization & timing control 150 provides the global synchronization and timing signals used in the switch element. It generates transmission control signals so that all serial outputs start sending row data synchronized to the RSYNC (row synchronization) input reference. It also controls the deskewing FIFOs in the data stream deserializers so that all twelve input links will drive the data for slot N onto the input link bus 120 at the same time. This same deskewing mechanism may be implemented on the grant stream deserializers. A detailed description of the link synchronization and timing control 150 is provided in Section 10 of Appendix B.
- the request parser 152 receives inputs from all thirteen request element sources and forwards the REs to the appropriate request arbitration modules via the two request element buses. A detailed description of the request parser 152 is provided in Section 7.2.1.1 of Appendix B.
- the grant parser 154 physically operates in a similar manner to and may be identical to the request parser 152 . The only difference is that it processes grant elements in the reverse path instead of request elements in the forward path. As mentioned above, the grant elements contain the same information as the request elements, i.e. the link address through the switch from one port processor to another.
- the link RISC processor 156 controls the ranging synchronization on the input links with the source port processors in the first stage of the switch fabric. It also controls the ranging synchronization on the output link grant stream input with the source port processors in the last stage of the switch fabric. It also handles the Req/Grant processing needed to transmit multicast messages and controls the reception and transmission of the in-band communications PDUs. All in-band communications PDUs are forwarded to the Configuration RISC Processor 158 which interprets the messages. The link RISC processor 156 only handles the Req/Grant processing needed to transmit multicast and in-band communications messages.
- the configuration RISC controller 158 processes configuration and status messages received from an external controller module (not shown) and in-band communication messages as described above.
- the system control module 160 handles all the reset inputs and resets the appropriate internal modules.
- the configuration RISC controller 158 and the system control module 160 may be implemented with an Xtensa™ processor from Tensilica, Inc., Santa Clara, Calif.
- the test pattern generator and analyzer 162 may be used for the generation of various test patterns which can be sent out on any slot on the data stream or grant stream outputs. It is also capable of monitoring input slots from either the received data stream or grant stream.
- the test interface bus multiplexer 164 allows for sourcing transmit data from the external I/O pins and forwarding data to the I/O pins. This is used for testing the switch element when a port processor is not available.
- the unilink PLL 166 may be used to create the IF clock needed by the unilink macros. Within each unilink macro, another PLL multiplies the IF clock up to the serial clock rate.
- the core PLL 168 may be used to create the clock used by the switch element core logic. In some embodiments, the core clock is approximately 250 MHz. A detailed description of both PLLs is provided in Section 9 of Appendix B.
- the JTAG interface 170 may be used for two purposes: (1) boundary scan testing of the switch element at the ASIC fab and (2) a debug interface for the Configuration RISC Processor.
- there may be three datapath buses (the input link bus 120 , the output link bus 122 , and the grant bus 124 ) which may be used to move switched traffic from the input links to the output links. These buses are also used to carry traffic which is sourced or terminated internally within the switch element.
- Some datapaths of the input link bus are summarized in Table 2 below.
- Some datapaths of the output link bus are summarized in Table 3 below.
- Some datapaths of the grant bus are summarized in Table 4 below.
- TABLE 2
islot_num: 1 signal, 11 bits. Current input slot number for traffic from the Data Stream Deserializers. Source: Link Sync & Timing Ctrl.
ilink_req_0 thru ilink_req_11: 12 signals, 48 bits. Request elements received on each input link. Source: Data Stream Demapper module for each input link.
lcl_req_0: 1 signal, 48 bits. Request elements generated locally. Source: Link RISC Controller.
req_a, req_b: 2 signals, 48 bits. Parsed request elements. Source: Request Parser.
ilink_tdm_data_0 thru ilink_tdm_data_11: 12 signals, 47 bits. TDM data, 36-bit data + 11-bit destination row buffer address. Source: Data Stream Demapper module for each input link.
ilink_tdm_dlink_0 thru ilink_tdm_dlink_11: 12 signals, 4 bits. Destination output link (i.e., row buffer) identifier. Source: Data Stream Demapper module for each input link.
ilink_pdu_0 thru ilink_pdu_11: 12 signals, 512 bits. Complete 64-byte PDU which has been assembled from the incoming slots. Source: Data Stream Demapper module for each input link.
ilink_pdu_flag_0 thru ilink_pdu_flag_11: 12 signals, 13 bits. Each flag is asserted for each destination to which the current PDU is addressed. Source: Data Stream Demapper module for each input link.
- TABLE 3
oslot_num: 1 signal, 11 bits. Current output slot number for traffic destined for the output links. Source: Link Sync & Timing Ctrl.
rbuf_dout_0 thru rbuf_dout_11: 12 signals, 36 bits. Slot data output from the row buffer. Source: Row Buffer for each output link.
rbuf_rd_addr: 12 signals, 12 bits. Row buffer read address.
olink_req_0: 12 signals, 48 bits. Request elements for each output link.
- each switch element includes a multicast controller and a separate multicast PDU buffer.
- Multicast request elements flow through the switch in the same manner as standard unicast request elements until they reach the switch stage at which the hop-by-hop field's bit code indicates that the request is multicast. At that stage, the request is forwarded to the multicast controller, which sources a grant if there is room for the data in the multicast recirculating buffers.
- the multicast controller examines the data header and determines which output links it needs to be sent out on. At this point, the multicast controller sources a number of request messages which are handled in the same manner as unicast requests.
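The fan-out step above can be sketched abstractly. The function and field names below are illustrative only (the actual controller is hardware, and the header decode is not specified here); the point is that one buffered multicast PDU becomes one unicast-style request per destination output link:

```python
def fan_out_multicast(pdu_header, buffer_free_slots):
    """Toy model of the multicast controller's decision.

    pdu_header: dict with a 'dest_links' list decoded from the data
                header (hypothetical field name, for illustration).
    buffer_free_slots: free space in the multicast recirculating buffer.

    Returns (grant, requests): a grant is sourced only if the PDU can
    be buffered; requests are then sourced per destination output link
    and handled in the same manner as unicast requests.
    """
    if buffer_free_slots < 1:
        return False, []          # no room: withhold the grant
    requests = [{"output_link": link, "type": "unicast-style"}
                for link in pdu_header["dest_links"]]
    return True, requests

grant, reqs = fan_out_multicast({"dest_links": [2, 5, 11]}, buffer_free_slots=4)
assert grant and len(reqs) == 3
```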
Abstract
Methods, systems, and apparatuses related to a communication switch are disclosed herein. In some embodiments, the communication switch may be configured to transmit TDM, ATM and/or packet data from an ingress service processor, through a plurality of switch elements, to an egress service processor. Other embodiments may be described and claimed.
Description
- This application claims priority to U.S. patent application Ser. No. 10/155,517, filed May 24, 2002, which is a continuation-in-part of U.S. Pat. No. 6,631,130, issued Oct. 7, 2003, the specifications of which are hereby incorporated in their entirety.
- Disclosed embodiments relate to telecommunications networks and, more particularly, to transmitting data through telecommunication switches in said networks.
- One of the earliest techniques for employing broadband telecommunications networks was called time division multiplexing (TDM). The basic operation of TDM is simple to understand. A high frequency signal is divided into multiple time slots within which multiple lower frequency signals can be carried from one point to another. The actual implementation of TDM is quite complex, however, requiring sophisticated framing techniques and buffers in order to accurately multiplex and demultiplex signals. The North American standard for TDM (known as T1 or DS1) utilizes twenty-four interleaved channels together having a rate of 1.544 Mbits/sec. The European standard for TDM is known as E-1 and utilizes thirty interleaved channels having a rate of 2.048 Mbits/sec. A hierarchy of multiplexing is based on multiples of the T1 or E-1 signal, one of the most common being T3 or DS3. A T3 signal has 672 channels, the equivalent of twenty-eight T1 signals. TDM was originally designed for voice channels. Today, however, it is used for both voice and data.
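The rates quoted above are straightforward to verify: a T1 frame carries one 8-bit sample for each of twenty-four channels plus one framing bit, 8,000 frames per second; an E-1 frame carries thirty-two 8-bit time slots (thirty voice channels plus two overhead slots); and a T3 carries twenty-eight T1s:

```python
FRAMES_PER_SEC = 8000            # one frame per 125 microsecond sampling interval

t1_bits_per_frame = 24 * 8 + 1   # 24 channels + 1 framing bit = 193 bits
t1_rate = t1_bits_per_frame * FRAMES_PER_SEC
assert t1_rate == 1_544_000      # 1.544 Mbits/sec

e1_bits_per_frame = 32 * 8       # 30 voice channels + 2 overhead slots
e1_rate = e1_bits_per_frame * FRAMES_PER_SEC
assert e1_rate == 2_048_000      # 2.048 Mbits/sec

assert 28 * 24 == 672            # a T3 carries 28 T1s, i.e. 672 channels
```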
- An early approach to broadband data communication was called packet switching. One of the differences between packet switching and TDM is that packet switching includes methods for error correction and retransmission of packets which become lost or damaged in transit. Another difference is that, unlike the channels in TDM, packets are not necessarily fixed in length. Further, packets are directed to their destination based on addressing information contained within the packet. In contrast, TDM channels are directed to their destination based on their location in the fixed frame. Today, a widely used packet switching protocol is known as IP (Internet Protocol).
- More recently, broadband technologies known as ATM and SONET have been developed. The ATM network is based on fixed-length packets (cells) of 53 bytes each (a 48-byte payload with 5 bytes of overhead). One of the characteristics of the ATM network is that users contract for a quality of service (QOS) level. Thus, ATM cells are assigned different priorities based on QOS. For example, constant bit rate (CBR) service is the highest priority service and is substantially equivalent to a provisioned TDM connection. Variable bit rate (VBR) service is an intermediate priority service which permits the loss of cells during periods of congestion. Unspecified bit rate (UBR) service is the lowest priority and is used for data transmission which can tolerate high latency, such as e-mail transmissions.
- The SONET network is based on a frame of 810 bytes within which a 783-byte synchronous payload envelope (SPE) floats. The payload envelope floats because of timing differences throughout the network. The exact location of the payload is determined through a relatively complex system of stuffs/destuffs and pointers. In North America, the basic SONET signal is referred to as STS-1 (or OC-1). The SONET network includes a hierarchy of SONET signals wherein up to 768 STS-1 signals are multiplexed together providing the capacity of 21,504 T1 signals (768 T3 signals). STS-1 signals have a signal rate of 51.84 Mbit/sec, with 8,000 frames per second, and 125 microseconds per frame. In Europe, the base (STM-1) rate is 155.520 Mbit/sec, equivalent to the North American STS-3 rate (3*51.84=155.520), and the payload portion is referred to as the virtual container (VC). To facilitate the transport of lower-rate digital signals, the SONET standard uses sub-STS payload mappings, referred to as Virtual Tributary (VT) structures. (The ITU calls these Tributary Units or TUs.) Four virtual tributary sizes are defined: VT-1.5, VT-2, VT-3 and VT-6. VT-1.5 has a data transmission rate of 1.728 Mbit/s and accommodates a T1 signal with overhead. VT-2 has a data transmission rate of 2.304 Mbit/s and accommodates an E1 signal with overhead. VT-3 has a data transmission rate of 3.456 Mbit/s and accommodates a DS-1C signal with overhead. VT-6 has a data transmission rate of 6.912 Mbit/s and accommodates a DS2 signal with overhead.
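These SONET figures can likewise be cross-checked: an STS-1 frame is 810 bytes sent 8,000 times per second, STS-3/STM-1 is three times that, and each VT occupies a fixed number of 9-row columns in the SPE (3, 4, 6 and 12 columns respectively, a standard SONET detail not restated in the text above):

```python
FRAMES_PER_SEC = 8000
sts1 = 810 * 8 * FRAMES_PER_SEC            # 810-byte frame, 8000 frames/sec
assert sts1 == 51_840_000                  # 51.84 Mbit/sec
assert 3 * sts1 == 155_520_000             # STS-3 / STM-1

# Each VT occupies a fixed number of 9-byte (9-row) columns in the SPE:
vt_columns = {"VT-1.5": 3, "VT-2": 4, "VT-3": 6, "VT-6": 12}
vt_rate = {name: cols * 9 * 8 * FRAMES_PER_SEC
           for name, cols in vt_columns.items()}
assert vt_rate["VT-1.5"] == 1_728_000
assert vt_rate["VT-2"] == 2_304_000
assert vt_rate["VT-3"] == 3_456_000
assert vt_rate["VT-6"] == 6_912_000

assert 768 * 28 == 21_504                  # 768 STS-1s = 21,504 T1 signals
```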
- Each of the above described broadband technologies can be categorized as TDM, ATM, or Packet technologies, with SONET being a complex form of TDM. From the foregoing, it will be appreciated that TDM, ATM and Packet each have their own unique transmission requirements. Consequently, different kinds of switches are used to route these different kinds of signals. In particular, TDM requires careful time synchronization; ATM requires careful attention to the priority of cells and QOS; and packet (e.g. IP) requires the ability to deal with variable length packets. For these reasons, switching technologies for TDM, ATM, and variable length packet switching have evolved in different ways. Service providers and network designers have thus been forced to deal with these technologies separately, often providing overlapping networks with different sets of equipment which can only be used within a single network.
- FIG. 1 is a simplified schematic diagram of a port processor according to some embodiments;
- FIG. 2 is a simplified schematic diagram of a switch element according to some embodiments;
- FIG. 3 is a schematic diagram illustrating the data frame structure of some embodiments;
- FIG. 3 a is a schematic diagram illustrating the presently preferred format of a PDU according to some embodiments;
- FIG. 3 b is a schematic diagram illustrating the row structure including request elements to a first stage of the switch;
- FIG. 3 c is a schematic diagram illustrating the row structure including request elements to a second stage of the switch;
- FIG. 4 is a schematic illustration of a three-stage 48×48 switch according to some embodiments; and
- FIG. 5 is a schematic illustration of a 48×48 folded Clos architecture switch according to some embodiments.
- Appendix A is an engineering specification (Revision 0.3) for a port processor according to some embodiments; and
- Appendix B is an engineering specification (Revision 0.3) for a switch element according to some embodiments.
- The apparatus of some embodiments generally includes a port processor and a switch element.
FIG. 1 illustrates some features of the port processor 10 , and FIG. 2 illustrates some features of the switch element 100 . Referring now to FIG. 1 , the port processor 10 includes a SONET interface and a UTOPIA interface. On the ingress (RX) side, the SONET interface includes a serial to parallel converter 12 , a SONET framer and transport overhead (TOH) extractor 14 , a high order pointer processor 16 , and a path overhead (POH) extractor 18 . For ATM and IP packets transported in an SPE, the ingress side of the SONET interface includes forty-eight HDLC framers 20 (for IP), forty-eight cell delineators 22 (for ATM), and forty-eight 64-byte FIFOs 24 (for both ATM and IP). For TDM signals transported in an SPE, the ingress side of the SONET interface includes a demultiplexer and low order pointer processor 26 . On the egress (TX) side, the SONET interface includes, for TDM signals, a multiplexer and low order pointer generator 28 . For ATM and IP packets transported in an SPE, the egress side of the SONET interface includes forty-eight 64-byte FIFOs 30 , forty-eight HDLC frame generators 32 , and forty-eight cell mappers 34 . The egress side of the SONET interface also includes a POH generator 36 , a high order pointer generator 38 , a SONET framer and TOH generator 40 , and a parallel to serial interface 42 . On the ingress side, the UTOPIA interface includes a UTOPIA input 44 for ATM and Packets and one 4×64-byte FIFO 46 . On the egress side, the UTOPIA interface includes ninety-six 4×64-byte FIFOs 48 and a UTOPIA output 50 . - The ingress portion of the
port processor 10 also includes a switch mapper 52 , a parallel to serial switch fabric interface 54 , and a request arbitrator 56 . The egress portion of the port processor also includes a serial to parallel switch fabric interface 58 , a switch demapper 60 , and a grant generator 62 . - For processing ATM and packet traffic, the
port processor 10 utilizes, at the ingress portion, a descriptor constructor 64 , an IPF and ATM lookup processor 66 , an IP classification processor 68 , and an RED/Policing processor 70 , all of which may be located off-chip. These units process ATM cells and packets before handing them to a (receive) data link manager 72 . At the egress portion of the port processor, a (transmit) data link manager 74 and a transmit scheduler and shaper 76 are provided. Both of these units may be located off-chip. The port processor is also provided with a host interface 78 and a weighted round robin scheduler 80 . - The purpose of the port processor at ingress to the switch is to unpack TDM, Packet, and ATM data and frame it according to the data frame described below with respect to
FIG. 3 . The port processor also buffers TDM and packet data while making arbitration requests for link bandwidth through the switch element and grants arbitration requests received through the switch as described in more detail below. In order to maintain timing for TDM traffic, predetermined bytes, e.g., the V1-V4 bytes in the SONET frame, may be stripped off and the VC bytes are buffered at the ingress to the switch. In rows having both PDU and TDM traffic, it may be desirable for the PDUs to be configured early and the TDM slots to be configured late in the row. At the egress of the switch, the port processor reassembles TDM, Packet, and ATM data. The V1-V4 bytes are regenerated at the egress from the switch. - Though not shown in
FIG. 1 , the port processor 10 includes dual switch element interfaces which permit it to be coupled to two switch elements or to two ports of one switch element. When both interfaces are used, the "standby" link carries only frame information until a failure in the main link occurs, and then data is sent via the standby link. This provides for redundancy in the switch so that connections are maintained even if a portion of the switch fails. - Turning now to
FIG. 2 , a switch element 100 according to some embodiments includes twelve "datapath and link bandwidth arbitration modules" 102 (shown only once in FIG. 2 for clarity). Each module 102 provides one link input 104 and one link output 106 through the switch element 100 . Those skilled in the art will appreciate that data entering any link input can, depending on routing information, exit through any link output. According to some embodiments, each module 102 provides two forward datapaths from an input link 104 to an output link 106 via an input link bus 120 and an output link bus 122 . Return path grants are routed from an output link 106 to an input link 104 via a grant bus 124 . - The forward datapaths of each "datapath and link bandwidth arbitration module" 102 include a
data stream deserializer 126, adata stream demapper 128, arow buffer mapper 130, arow buffer 132, arequest arbitration module 134, adata stream mapper 136, and adata stream serializer 138. The return grant path for eachmodule 102 includes agrant stream deserializer 140, agrant stream demapper 142, agrant arbitration module 144, agrant stream mapper 146, and agrant stream serializer 148. - The
switch element 100 also includes the following modules which are instantiated only once and which support the functions of the twelve "datapath and link bandwidth arbitration modules" 102 : a link synchronization and timing control 150 , a request parser 152 , a grant parser 154 , and a link RISC processor 156 . The switch element 100 also includes the following modules which are instantiated only once and which support the other modules, but which are not directly involved in "switching": a configuration RISC processor 158 , a system control module 160 , a test pattern generator and analyzer 162 , a test interface bus multiplexer 164 , a unilink PLL 166 , a core PLL 168 , and a JTAG interface 170 . - A typical switch according to some embodiments includes
multiple port processors 10 and multiple switch elements 100 . For example, as shown in FIG. 4 , forty-eight "input" port processors are coupled to twelve "first stage" switch elements, four to each. Each of the first stage switch elements may be coupled to eight second stage switch elements. Each of the second stage switch elements may be coupled to twelve third stage switch elements. Four "output" port processors may be coupled to each of the third stage switch elements. From the foregoing, those skilled in the art will appreciate that the port processors and the switch elements of the invention can be arranged in a folded Clos architecture as shown in FIG. 5 where a single switch element acts as both first stage and third stage. - Before describing in detail the functions of the
port processor 10 and the switch element 100 , it should be appreciated that some embodiments utilize a unique framing technique which is well adapted to carry combinations of TDM, ATM, and Packet data in the same frame. Turning now to FIG. 3 , according to some embodiments, a data frame of nine rows by 1700 slots is used to transport ATM, TDM, and Packet data from a port processor through one or more switch elements to a port processor. Each frame is transmitted in 125 microseconds, each row in 13.89 microseconds. Each slot includes a four-bit tag plus a four-byte payload (i.e., thirty-six bits). The slot bandwidth (1/1700 of the total frame) is 2.592 Mbps, which is large enough to carry an E-1 signal with overhead. The four-bit tag is a cross connect pointer which may be set up when a TDM connection is provisioned. The last twenty slots of the frame are reserved for link overhead (LOH). Thus, the frame is capable of carrying the equivalent of 1,680 E-1 TDM signals. The link overhead (LOH) in the last twenty slots of the frame is analogous in function to the line and section overhead in a SONET frame. - The contents of the LOH slots may be inserted by the switch mapper (52 in
FIG. 1 ). There are four types of data which may be inserted in the LOH slots. A 36-bit framing pattern may be inserted into one of the twenty slots. The framing pattern may be common to all output links and configurable via a software programmable register. A 32-bit status field may be inserted into another slot. The status field may be unique for each output link and may be configurable via a software programmable register. A 32-bit switch and link identifier may be inserted into another slot. The switch and link identifier includes a four bit link number, a twenty-four bit switch element ID, and a four bit stage number. A 32-bit stuff pattern may be inserted into slots not used by framing, status, or ID. The stuff pattern is common to all output links and may be configurable via a software programmable register. - For ATM and packet data, a PDU (protocol data unit) of sixteen slots may be defined for a sixty-four-byte payload (large enough to accommodate an ATM cell with overhead). The format of the PDU is illustrated in
FIG. 3 a. A maximum of ninety-six PDUs per row may be permitted (it being noted that the maximum number of ATM cells in a SONET OC-48 row is seventy-five). The sixteen four-bit tags (bit positions 32-35 in each slot) are not needed for PDU routing so they may be used as parity bits to protect the ATM or IP payload. Of the sixty-four-byte payload, twelve bytes (96 bits) may be used by the switch for internal routing (slots 0-2, bit positions 0-31). This leaves fifty-two bytes (slots 3-15) for actual payload which is sufficient to carry an ATM cell (without the one-byte HEC) and sufficient for larger packets after fragmentation. The PDUs may be self-routed through the switch with a twenty-eight-bit routing tag (slot 0, bit positions 0-27) which allows routing through seven stages using four bits per stage. The remaining sixty-eight bits of the PDU may be used for various other addressing information. - As shown in
FIG. 3 a, the PDU bits at slot 0, bits 30-31 may be used to identify whether the PDU is idle (00), an ATM cell (01), an IP packet (10), or a control message (11). The two bits at slot 1, bit positions 30-31 may be used to indicate the internal protocol version of the chip which produced the PDU. For Packets and control messages, the "valid bytes" field (slot 1, bits 24-29) may be used to indicate how many payload bytes are carried by the PDU when the FragID field indicates that the PDU is the last fragment of a fragmented packet. The VOQID field (slot 1, bit positions 19-23) identifies the class of service for the PDU. The class of service can be a value from 0 to 31, where 0 is the highest priority and 31 is the lowest. The FragID at slot 1, bits 17-18 indicates whether this PDU is a complete packet (11), a first fragment (01), a middle fragment (00), or a last fragment (10). The A bit at slot 1, bit position 16 is set if reassembly for this packet is being aborted, e.g. because of early packet (or partial packet) discard operations. When this bit is set, fragments of the packet received until this point are discarded by the output port processor. The fields labelled FFS are reserved for future use. The Seq# field at slot 1, bits 0-3 is a modular counter which counts packet fragments. The DestFlowId field at slot 2, bits 0-16 identifies the "flow" in the destination port processor to which this PDU belongs. A "flow" is an active data connection. There are 128K flows per port processor. - As mentioned above, since ATM and Packet traffic are typically not provisioned, bandwidth may be arbitrated among ATM and Packet connections as traffic enters the system. Moreover, since TDM traffic shares the same frame as ATM and Packet traffic, bandwidth may be arbitrated while maintaining TDM timing. According to some embodiments, bandwidth is arbitrated by a system of requests and grants which is implemented for each PDU in each row of the frame.
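The bit layout described above can be made concrete with a small parser. This is an illustrative sketch only: the field names follow the text, each slot word is modeled as a 32-bit integer with bit 0 taken as the least-significant bit (an assumption, since the text does not state the bit ordering), and the four-bit slot tags are not modeled:

```python
PDU_TYPES = {0b00: "idle", 0b01: "ATM cell", 0b10: "IP packet", 0b11: "control"}

def parse_pdu_header(slot0, slot1, slot2):
    """Decode the internal-routing fields of a PDU (slots 0-2)."""
    return {
        "routing_tag": slot0 & 0x0FFF_FFFF,          # slot 0, bits 0-27
        "type":        PDU_TYPES[(slot0 >> 30) & 0b11],
        "version":     (slot1 >> 30) & 0b11,         # slot 1, bits 30-31
        "valid_bytes": (slot1 >> 24) & 0x3F,         # slot 1, bits 24-29
        "voqid":       (slot1 >> 19) & 0x1F,         # slot 1, bits 19-23
        "frag_id":     (slot1 >> 17) & 0b11,         # slot 1, bits 17-18
        "abort":       bool((slot1 >> 16) & 1),      # slot 1, bit 16 (A bit)
        "seq":         slot1 & 0xF,                  # slot 1, bits 0-3
        "dest_flow":   slot2 & 0x1_FFFF,             # slot 2, bits 0-16 (128K flows)
    }

hdr = parse_pdu_header(
    slot0=(0b01 << 30) | 0x1234567,    # ATM cell, routing tag 0x1234567
    slot1=(0b11 << 17) | (7 << 19),    # complete packet, VOQID 7
    slot2=0x1FFFF,
)
assert hdr["type"] == "ATM cell" and hdr["routing_tag"] == 0x1234567
assert hdr["voqid"] == 7 and hdr["frag_id"] == 0b11
assert hdr["dest_flow"] == 2**17 - 1

# Self-routing: four bits of the tag select the output link at each of
# up to seven stages.
stages = [(hdr["routing_tag"] >> (4 * s)) & 0xF for s in range(7)]
assert stages[0] == 0x7
```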
The request elements, which are generated by the port processors, include "hop-by-hop" internal switch routing tags, switch element stage, and priority information. According to some embodiments, two request elements are sent in a bundle of three contiguous slots, and at least eight slots of non-request-element traffic must be present between request element bundles. The time separation between request element bundles may be used by the arbitration logic in the switch elements and the port processors to process the request elements. The request element format is shown in section 7.1.5 of Appendix B.
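The three-slot bundle size is no accident: two 48-bit request elements exactly fill the 96 payload bits of three slots (3 × 32 bits, with the four-bit tags excluded). A sketch of the packing, using an illustrative bit layout rather than the one defined in Appendix B:

```python
def pack_re_bundle(re_a, re_b):
    """Pack two 48-bit request elements into three 32-bit slot payloads.

    2 x 48 = 96 bits exactly fills 3 x 32 payload bits. The layout here
    (re_a then re_b, most-significant bits first) is illustrative only.
    """
    assert 0 <= re_a < 2**48 and 0 <= re_b < 2**48
    bits = (re_a << 48) | re_b                       # 96-bit bundle
    return [(bits >> shift) & 0xFFFF_FFFF for shift in (64, 32, 0)]

def unpack_re_bundle(slots):
    bits = (slots[0] << 64) | (slots[1] << 32) | slots[2]
    return (bits >> 48) & (2**48 - 1), bits & (2**48 - 1)

a, b = 0xABCDEF012345, 0x123456789ABC
assert len(pack_re_bundle(a, b)) == 3                # three slot payloads
assert unpack_re_bundle(pack_re_bundle(a, b)) == (a, b)
```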
-
FIG. 3 b illustrates one example of how the row slots may be allocated for carrying PDUs and request elements. As shown, the maximum PDU capacity for a row is ninety-six. A block of sixteen slots which is capable of carrying a single PDU is referred to as a "group". For each group in the row, 1.5 slots of bandwidth may be used for carrying a forty-eight-bit request element (RE). FIG. 3 b illustrates how two REs are inserted into three slots within each of the first twenty-four groups. All the REs may be carried within the row as early as possible in order to allow the REs to ripple through the multistage switch fabric as soon as possible after the start of a row. Section 7 of Appendix B explains in detail how this affects the arbitration process. - The structure shown in
FIG. 3 b may be the desired format (for the first link) given system requirements and implementation constraints of a given embodiment. It places the REs early in the row but spaces them out enough to allow for arbitration. According to the present embodiment, the row structure is somewhat different depending on which link of the switch it is configured for. FIG. 3 b represents the row structure between the port processor and a switch element of the first switch fabric stage. The first block of two REs occupies the first three slots of the row. The present implementation of the arbitration logic which processes REs requires at least twelve slot times of latency between each three-slot block of REs on the input link. Also, there may be some latency from when the first REs of the row are received by a switch element to when the REs are inserted into the output link of the switch element. This latency is used by the arbitration logic for mapping incoming REs into the RE buffers. Thus, the row structure for the link between the first stage and the second stage may have the first group of REs starting at slot time 32. This is illustrated in FIG. 3 c, which shows the same structure as FIG. 3 b offset by thirty-two slot times. - According to some embodiments, TDM traffic may be switched through the switch elements with a finest granularity of one slot per row. The TDM traffic may be switched through the same path for a given slot in every row. The switch elements may not allow different switch paths for the same TDM data slot in different rows within the frame. This means that the switch does not care about the current row number (within a frame). The only time row numbering matters is when interpreting the contents of the Link Overhead slots.
- With a finest granularity of one slot per row, the switch elements can switch TDM traffic with a minimum of 2.592 Mbps of switching bandwidth. Since a slot can carry the equivalent of four columns of traffic from a SONET SPE, it can be said that the switch elements switch TDM traffic with a granularity of a VT1.5 or VT2 channel. Although a VT1.5 channel occupies only three columns in the SONET SPE, it will still be mapped to the slot format, which is capable of holding four SPE columns. As mentioned above, the format of the contents of the thirty-six-bit slot carrying TDM traffic is a four-bit tag and thirty-two bits of payload. The tag field definitions are shown in Table 1 below.
-
TABLE 1
0000  Idle
0001  reserved
1010  reserved
1011  Data present
1100  V5 byte in bits 31-24
1101  V5 byte in bits 23-16
1110  V5 byte in bits 15-8
1111  V5 byte in bits 7-0
- The switch elements know whether or not a slot contains TDM data via preconfigured connection tables. These tables may be implemented as an Input Cross Connect RAM for each input link. The input slot number is the address into the RAM, while the data output of the RAM contains the destination output link and slot number. The connection table can be changed by a centralized system controller which can send control messages to the switch elements via either of two paths: (1) a host interface port or (2) in-band control messages which are sent via the link data channel. Since TDM connections will be changed infrequently, this relatively slow control message approach to updating the connection tables is acceptable. It is the responsibility of an external software module to determine and configure the connection tables within the switch elements such that no TDM data will be lost.
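The per-slot TDM switching described above amounts to a table lookup plus a tag check. The sketch below models the Input Cross Connect RAM as a Python dict (purely illustrative; the hardware is a RAM addressed by input slot number). It also verifies the slot-bandwidth arithmetic: one 36-bit slot, repeated in each of the nine rows, 8,000 frames per second:

```python
# Slot bandwidth check: 36 bits/slot x 9 rows/frame x 8000 frames/sec
assert 36 * 9 * 8000 == 2_592_000        # full slot: 2.592 Mbps
assert 32 * 9 * 8000 == 2_304_000        # 32-bit payload: a VT2-sized channel

TAG_IDLE, TAG_DATA = 0b0000, 0b1011      # from Table 1

def switch_tdm_slot(xconnect, in_link, in_slot, word36):
    """Forward one 36-bit TDM slot through a switch element.

    xconnect models the Input Cross Connect RAM for each input link:
    (input link, input slot) -> (output link, output slot). Slots whose
    connection is unconfigured, or whose tag marks them idle, are dropped.
    """
    tag = (word36 >> 32) & 0xF           # four-bit tag; payload in bits 0-31
    dest = xconnect.get((in_link, in_slot))
    if dest is None or tag == TAG_IDLE:
        return None
    out_link, out_slot = dest
    return out_link, out_slot, word36

table = {(0, 17): (3, 42)}               # one provisioned connection
word = (TAG_DATA << 32) | 0xDEADBEEF
assert switch_tdm_slot(table, 0, 17, word) == (3, 42, word)
assert switch_tdm_slot(table, 1, 17, word) is None   # unconfigured slot
```

Because the lookup depends only on the slot number, not the row number, the same path is used for a given slot in every row, exactly as the text requires.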
- Returning now to
FIG. 1 , the receive side SONET interface of the port processor 10 includes the deserializer 12 and framer 14 . This interface may be configured as one OC-48, 16 bits wide at 155 MHz, four OC-12s, serially at 622 MHz, or four OC-3s, serially at 155 MHz. When configured as one OC-48, the deserializer 12 is not used. When configured as four OC-12s or four OC-3s, the deserializer 12 converts the serial data stream to a sixteen-bit wide parallel stream. The deserializer 12 includes circuitry to divide the input serial clocks by sixteen. The inputs to the deserializer include a one-bit serial data input, a one-bit 622 MHz clock and a one-bit 155 MHz clock. The outputs include a sixteen-bit parallel data output, a one-bit 38.87 MHz clock and a 9.72 MHz clock. The SONET interfaces are described in more detail in sections 3.2 and 3.3 of Appendix A. - Parallel data is sent to the SONET framer and transport overhead (TOH)
block 14. All incoming signals may be framed according to the BELLCORE GR-253 standard which is incorporated herein by reference. The byte boundary and the frame boundary are found by scanning a series of sixteen bit words for the F628 pattern. The framer frames on the pattern F6F6F62828288. Independent SONET SPEs within the STS-N frame are demultiplexed by theframer 14. There is a maximum of four independent line interfaces, therefore theframer 14 includes four independent framers. The inputs to the framer include a sixteen-bit parallel data input, and a one-bit clock which will accept 155 MHz, 38.87 MHz, or 9.72 MHz. The outputs of the framer include a sixteen-bit parallel data output, a one-bit start of frame (SOF) indication, a six-bit SPE ID used to indicate SONET SPE number. The SPEs are numbered 1 through 48 with respect to the line side port configuration. - The
block 14 also terminates the transport (section and line) overhead for each independent SONET SPE. Since there are a maximum of forty-eight OC-1s on the line side, forty-eight transport overhead blocks are provided unless blocks are time-shared. The inputs to the TOH termination are the same as those discussed above with respect to the framer. The six-bit SPE ID enables data into this block. There may be no need for an output data bus, as the traffic is routed to this block and to the next block (Ptr Proc 16 ) on the same data bus. The data path may only flow into this block, not through it. - The
pointer processor 16 uses the SONET pointer (H1, H2 and H3 bytes in the TOH) to correctly locate the start of the payload data being carried in the SONET envelope. The SONET pointer identifies the location of byte #1 of the path overhead. The pointer processor 16 is responsible for accommodating pointer justifications that were inserted in order to justify the frequency difference between the payload data and the SONET envelope. Since there may be a maximum of forty-eight OC-1s, forty-eight pointer processor blocks are mated to the forty-eight transport overhead termination blocks unless blocks are time-shared. The inputs to the pointer processor 16 are the same as those to the framer and TOH terminator 14 . The outputs include a sixteen-bit parallel data output, a one-bit start of SPE indicator which coincides with word 1 of SPE 3 , a one-bit SPE valid indicator which gaps out overhead and accommodates pointer movements, and a one-bit POH valid indicator which indicates when a path overhead byte is on the output bus. - The
POH processor 18 processes the nine bytes of Path Overhead in each of the forty-eight SONET SPEs. Since there are a maximum of forty-eight SPEs, forty-eight path overhead processors are provided unless processors are time-shared. The inputs to the path overhead processor 18 include an eight-bit parallel data input, a four-bit SPE ID, the one-bit start of SPE indicator, and the one-bit POH valid indicator. The outputs include a one-bit V1 indicator, J1 info, alarms, and path status. Further details about blocks
microprocessor interface 78. Each SPE can carry only one type of traffic. The data from each SPE is routed directly to the correct payload extractor. - SPEs containing packets and ATM cells may be sent to the
HDLC framer 20 and the cell delineation block 22 , respectively. Each SPE may be configured to carry packet data (packet over SONET). The Port Processor 10 supports packet over SONET for the following SONET (SDH) signals: STS-1 (VC-3), STS-3c (VC-4), STS-12c (VC-4-4c), and STS-48c (VC-4-16c). The datagrams may be encapsulated in PPP packets which are framed using the HDLC protocol. The HDLC frames are mapped byte-wise into SONET SPEs and high order SDH VCs. The HDLC framer 20 performs HDLC framing and forwards the PPP packet to a FIFO buffer 24 where it awaits assembly into PDUs. The framer 20 has an input which includes a sixteen-bit parallel data input, a six-bit SPE ID, a one-bit SPE valid indicator, and a one-bit PYLD valid indicator. The output of the framer 20 includes a sixteen-bit data bus, a one-bit start of packet indicator, and a one-bit end of packet indicator. Further details about packet extraction from SONET are found in IETF (Internet Engineering Task Force) RFC 1619, which is incorporated herein by reference. - The
cell delineation block 22 is based on ITU-T G.804, “ATM Cell Mapping into Plesiochronous Digital Hierarchy (PDH)”, 1998, the complete disclosure of which is hereby incorporated herein by reference. The cell delineation block 22 has inputs that include a sixteen-bit parallel data bus, a six-bit SPE ID, a one-bit SPE valid indicator, and a one-bit POH valid indicator. The outputs include a sixteen-bit parallel data bus and a one-bit start of cell indicator. Cells are placed in a FIFO 24 while awaiting assembly into PDUs. Further details regarding ATM extraction from SONET are found in ITU-T G.804. - The TDM data is routed to a TDM demultiplexer and low order
pointer processor block 26 where the low order VTs and VCs are identified. If a particular SPE is configured for TDM data, then the TDM mapping is described using the host interface 78. Each SPE can carry a combination of VC-11, VC-12, VC-2, VC-3 & VC-4. There are seven VT groups in a single STS-1 payload, and each VT group has twelve columns. Different VT groups within the same STS-1 SPE can carry different VT types, but within one VT group all of the VTs must be of the same type. The VCs and VTs are demultiplexed out of the SONET signal based on the configuration for each of the SPEs. There is no interpretation of the traffic required to locate the containers and tributaries as all of this information is found in the configuration table (not shown) which is configured via the host interface 78. Frames are located inside of the VCs and the VTs through the H4 byte in the path overhead of the SPE. Pointer processing is performed as indicated by the V bytes in the VT superframe. The TDM demultiplexer and low order pointer processor block 26 has inputs which include sixteen bits of parallel data, a six-bit SPE ID, a one-bit start of SPE indicator, a one-bit SPE valid indicator, a one-bit V1 indicator, and a one-bit POH valid indicator. The TDM demultiplexer and low order pointer processor block 26 provides the following outputs to the switch mapper 52: sixteen bits of parallel data, a one-bit VT/VC valid indicator, a six-bit SPE ID, and a five-bit VT/VC Number (0-27). The TDM data is placed in reserved slots in the frame as mentioned above and described in more detail below with reference to the switch mapper 52. Further details regarding TDM extraction are found in the GR-253 specification. - IP packets and ATM cells from the
UTOPIA interface 44 may be placed in FIFO 46. Packets and cells from the FIFOs 24 may be merged with the packets and cells from the FIFO 46. The descriptor constructor 64 determines whether the data is an ATM cell or an IP packet and generates a corresponding interrupt to trigger the IPF/ATM look-up processor 66 to perform either IP routing look-up or ATM look-up. IP routing look-up is performed by searching for the IP destination address for every packet and the IP source address for packets that need classification. ATM look-up is performed by searching the VPI/VCI fields of the cells. Outputs of the IPF/ATM look-up processor 66 for both IP packets and ATM cells include a seventeen-bit flow index, a five-bit QOS index, and an indicator showing whether the IP packet needs classification. If the IP packet needs classification, the packet is passed to the IP classification processor 68 for classification; otherwise it is passed to the next stage of packet processing, the RED/policing processor 70. IP classification is described in detail in section 6.4 of Appendix A. The RED/Policing processor 70 performs random early detection and weighted random early detection for IP congestion control, performs leaky bucket policing for ATM traffic control, and performs early packet and partial packet discard for controlling ATM traffic which contains packets. The RED/Policing traffic control is described in detail in sections 7.5 et seq. of Appendix A. Some embodiments of the port processor 10 include a mode register (not shown) which can be placed in a bypass mode to globally turn off the IP/ATM forwarding. In bypass mode, an external device is used for IP/ATM forwarding, and the data descriptors generated by the descriptor constructor 64 are routed directly to an output FIFO (not shown). - All of the data stored in the FIFOs 24 and 46 may be in fifty-two-byte “chunks”. If an IP packet is longer than fifty-two bytes, it may be segmented into multiple fifty-two-byte chunks.
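As a rough illustration of this segmentation, a fifty-two-byte chunker might look like the sketch below. The field names (is_cell, start_of_packet, and so on) are assumptions modeled on the descriptor contents the description goes on to list; they are not the patent's own names.

```python
# Hypothetical sketch of the fifty-two-byte segmentation; field names are
# assumptions, not taken from the patent.
CHUNK_SIZE = 52

def segment_packet(payload: bytes, src_port: int, dst_port: int) -> list:
    """Split an IP packet into fifty-two-byte chunks, one descriptor per chunk."""
    chunks = []
    for off in range(0, len(payload), CHUNK_SIZE):
        piece = payload[off:off + CHUNK_SIZE]
        chunks.append({
            "is_cell": False,                          # packet, not an ATM cell
            "start_of_packet": off == 0,
            "end_of_packet": off + CHUNK_SIZE >= len(payload),
            "packet_length": len(payload),
            "src_port": src_port,
            "dst_port": dst_port,
            "data": piece.ljust(CHUNK_SIZE, b"\x00"),  # pad the final chunk
        })
    return chunks
```

Under these assumptions a 120-byte packet yields three chunks, with only the first flagged as start-of-packet and only the last flagged as end-of-packet.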
The input data descriptor for each chunk includes indications of whether the chunk is an ATM cell or a packet, whether it is the start of a packet or the end of a packet, packet length, and the source and destination port numbers. After processing by the IPF/
ATM lookup processor 66 and the IP classification processor 68, an output data descriptor is written to a FIFO (not shown) which is read by the RED/Policing processor 70. - Cells and packets which survive RED/policing are read by the receive
data link manager 72, which creates the PDUs described above with reference to FIG. 3a. The receive data link manager is described in detail in section 8 of Appendix A. According to some embodiments, processed cells and packets are stored in an external FIFO which is read whenever it is not empty. - As shown in
FIG. 1, the switch mapper 52 receives TDM traffic from the TDM demultiplexer and low order pointer processor 26 as well as PDUs from the data link manager 72. As mentioned above, the switch mapper also receives request elements. The request elements are formed by the arbiter 56 as described in more detail below. It is the function of the switch mapper (also referred to as the data mapper in Appendix A) to arrange TDM data, PDUs, and request elements in the frame described above with reference to FIGS. 3 and 3a-c. - The
switch mapper 52 includes a state machine (not shown) which is associated with the ATM/IP PDUs. The data link manager 72 writes the PDUs using a sixty-four-bit interface to the external FIFO (not shown). The data may be transmitted from the external FIFO to the switch mapper 52 in thirty-two-bit slots with four bits of parity. The state machine associated with the external PDU FIFO monitors the status of the FIFO and maintains data integrity. - In
section 9 of Appendix A, the data link manager 72, arbiter block 56, switch mapper 52, and weighted round robin scheduler 80, together with memory and other support circuits (not shown in FIG. 1) are referred to collectively as the “receive switch controller”. As described in detail above, each incoming ATM cell and packet is processed by performing a lookup based on the ATM VPI/VCI or on the IP source and destination. This lookup first verifies that the connection is active, and if active, it returns a seventeen-bit index. For ATM cells, the index points to a set of per VC parameters and to routing information. For packets, the index points to a set of queuing parameters and to routing information. The seventeen-bit index supports a maximum of 128K simultaneous IP and ATM flows through the port processor. The ATM cells are encapsulated in a cell container and stored in one of 128K queues in external memory. These 128K queues are managed by the data link manager 72. As mentioned above, the IP packets are fragmented into fifty-two-byte blocks and each of these blocks may be encapsulated in a cell container (PDU). These cell containers are also stored in one of the 128K queues in external memory by the data link manager. The 128K IP/ATM flows may be aggregated into one of thirty-two QOS queues for scheduling through the switch. The data link manager 72 also aggregates all the control headers desired for transmission of cells through the switch into the QOS queues and inserts these routing tags into one of thirty-one QOS routing tag FIFOs. One of the queues may be reserved for high priority traffic. Any cells arriving in the high priority queue may interrupt the scheduler 80 and may be scheduled to leave the high priority queue immediately. - The
scheduler 80 may be responsible for scheduling cell containers through the switch. The scheduling algorithm used may be a weighted round robin, which operates on the QOS queues. Once cells have been scheduled from these queues, the control headers from these queues are forwarded to the arbiter 56 and are stored in a request control table (not shown). The request arbiter 56 forms request elements from the control headers and forwards these requests to the switch data mapper 52 for transmission through the switch. The grants received in response to these requests may be deserialized by block 58, deframed, and transferred back to the arbiter block 56 by the grant block 62. For granted requests, the cell containers may be dequeued from external memory by the data link manager 72 and transferred to the switch mapper 52 for transmission through the switch. - As mentioned above, the
port processor 10 supports redundancy in order to improve reliability. Two redundancy schemes are supported. In the first redundancy scheme, the switch controller supports redundant routing tags and transparent route switch-over. In the second redundancy scheme, the port processor supports redundant data channels in both input and output directions. The redundant data channels connect to two separate switch fabrics. In the Appendices they are referred to as the A and B data channels. Each control header contains two routing tags, and each routing tag has a corresponding AB channel tag. This provides for two routes through the switch for data transmission. If both routing tags have the same channel tag, this allows for two alternate paths through the same switch fabric. If both routing tags have different channel tags, this allows for a redundant switch fabric and any route failing in one switch fabric will cause a switch-over to use the redundant switch fabric. An AB channel tag may be used to indicate whether the data is to be routed using the A data channel or the B data channel. If, after a programmable number of consecutive tries, no grant is received in response to request elements using the A channel routing tag, a bit may be set to switch over to the B channel routing tag. Further details of the redundancy feature are provided in sections 10.2.3 and 9.2.3 of Appendix A. - As mentioned above, the
arbiter 56 may be responsible for sending requests to the switch mapper 52 and processing the grants that arrive from the grant demapper 62. The arbiter dequeues requests from a routing tag FIFO, copies this information into a request control table, writes the FLOWID into FLOWID RAM, resets a request trial counter that counts the number of times a request has been tried, and resets the grant bit. Each request message has a unique request ID which is returned in the grant message. The request ID is the index in the arbiter request control table into which the routing tag is copied. The routing tag along with the request ID may be forwarded to a routing tag formatter block, which formats the routing tag into a request message and inserts the request into a request FIFO in the switch mapper 52. - The grant demapper in the
grant block 62 stores the request ID and the grant in a FIFO called the grant_reqid FIFO. In the arbiter block 56, the request IDs are dequeued from the A and B grant reqid FIFOs alternately, depending on whether the switchover bit is set. The request IDs dequeued from the FIFO are used to set a grant bit in the grant register at the bit position indicated by the request ID, to index the FLOWID RAM, and to read the FLOWID associated with the request ID. This FLOWID is written into a deq_flowid FIFO for the appropriate channel, i.e., if the request ID is dequeued from the A reqid_fifo, the FLOWID is written into the A deq_flowid FIFO. The data link manager 72 monitors the deq_flowid FIFO and uses the FLOWID to dequeue data PDUs from external memory and send them to the switch mapper 52 for transmission in the next row time. - An end_of_grants signal is asserted by the
grant demapper 62, when no more grants can be received at the grant demapper. In most switch implementations the end_of_grants signal is rarely, if ever, asserted. It is only in switches having many stages that the end_of_grants signal is more likely to be asserted. Once the end_of_grants signal has been received, the arbiter 56 begins the process of updating the request control table. If a grant has not been returned for a routing tag stored in the request control table, the request trial counter is incremented and a new request is generated using the routing tag. If a routing tag in the request control table has been sent as an RE a (programmed) maximum number of times, the most significant fifteen bits of the FLOWID are used to index into the redundancy control table and update the bit to indicate failure of the current path and to select the alternate routing path. Further details regarding the arbiter block 56 are provided at section 9.2.4 of Appendix A. - As described above, the TDM data, ATM/IP PDUs and the request messages may be combined into a single data stream for transmission through the switch fabric. This combination may be performed by the
switch mapper 52 on the receive side of the port processor. On the transmit side of the port processor, a switch demapper 60 separates TDM data from ATM/IP PDUs. According to some embodiments, the demapper 60 may be provided with external memory for a PDU FIFO. For ATM/IP data, the demapper writes PDUs to the FIFO and interrupts the data link manager 74. The data link manager 74 reads the header information from the PDU FIFO, and extracts the FLOWID. Based on the FLOWID, the data link manager 74 retrieves a Linked List/Shaping/Scheduling data structure from external memory. The data link manager 74 writes the linked list pointers to the PDU FIFO, then initiates a DMA transfer to move the PDU to external memory. The data link manager updates the head, tail, and count fields in the Linked List/Shaping/Scheduling data structure and passes the data structure to the Shaping/Scheduling processor 76 through a Shaping/Scheduling FIFO. The Shaping/Scheduling processor 76 performs the Shaping and Scheduling functions and updates the Linked List/Shaping/Scheduling data structure. - The data-flow from external memory to the SONET/
UTOPIA Data FIFOs is also managed by the data link manager 74, which polls the PDU FIFO and SONET/UTOPIA FIFO status flags. If the PDU FIFO is not empty and the SONET/UTOPIA FIFO is not full for a particular output port, the data link manager 74 retrieves the Link List/Shaping/Scheduling data structure for the Flow ID read from the PDU FIFO. (Note that for an IP packet flow, the data link manager will continue to retrieve PDUs from the Linked List until a PDU with an End of Packet indicator is found.) The data link manager then initiates a DMA transfer from external memory to the SONET/UTOPIA FIFOs. The data link manager 74 then updates the Link List/Shaping/Scheduling data structure and writes it back to external memory. - On the transmit side of the
port processor 10, the grant framer, deframer, serializer and deserializer in the grant block 62, the switch demapper 60, the transmit data link manager 74, and the transmit scheduler and shaper 76 are referred to collectively as the transmit (TX) switch controller in Appendix A. The TX switch controller is responsible for either accepting or rejecting requests that come into the port processor for output transmission. To do this, the TX switch controller checks if the queue identified by the output port number of the request can accept a cell container. These one hundred twenty-eight queues may be managed by the TX data link manager 74. According to some embodiments, these queues are stored in external memory. The scheduling of these cell containers may be performed by the TX scheduler 76. If the queue can accept the cell container, the request is turned into a grant and inserted into a grant_fifo. The grant framer and serializer 62 reads this information and creates a grant message for transmission through the grant path. - The TX switch controller monitors the status of the data queues for each of the one hundred twenty-eight output ports using the following three rules. If the full_status bit for the requested output port is set, there is no buffer space in the queue for any data PDUs destined for that output port and all requests to that output port may be denied. If the full_status bit is not set and the nearly_full_status bit is set, there is some space in the queue for data PDUs destined for that output port; however this space may be reserved for higher priority traffic. In this instance the QOS number is checked against a threshold (programmed) QOS number and if the QOS number is less than the threshold, the request will be accepted. If the nearly_full_status bit is not set, all incoming requests may be granted. If a request is accepted, the corresponding output port counter is incremented.
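A hedged sketch of these three rules follows. The threshold value and the per-port state dictionary are assumptions; the less-than test implies that lower QOS numbers mean higher priority, which the sketch preserves.

```python
# Minimal sketch of the TX switch controller's admission rules; the
# QOS_THRESHOLD value and the port_state structure are assumptions.
QOS_THRESHOLD = 8   # "programmed" threshold QOS number (assumed value)

def consider_request(port_state: dict, qos: int) -> bool:
    """Apply the three admission rules and reserve buffer space on acceptance."""
    if port_state["full_status"]:
        accept = False                   # rule 1: no buffer space, deny all
    elif port_state["nearly_full_status"]:
        accept = qos < QOS_THRESHOLD     # rule 2: higher priority traffic only
    else:
        accept = True                    # rule 3: grant all incoming requests
    if accept:
        port_state["counter"] += 1       # reserves space for the data PDU
    return accept
```

The counter increment at the end corresponds to the reserved-space bookkeeping the description continues with.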
This reserves space in the data buffer (30 or 48) for the arrival of the data PDU at that output port. The transmit
data link manager 74 constantly monitors the one hundred twenty-eight output port counters and sets/resets the one hundred twenty-eight full and nearly full status bits. - The
port processor 10 creates complete outgoing SONET signals. All of the transport and path overhead functions are supported. The SONET interfaces can run in source timing mode or loop timing mode. - The high order pointer is adjusted by the high
order pointer generator 38 through positive and negative pointer justifications to accommodate timing differences in the clocks used to generate the SONET frames and the clock used to generate the SONET SPEs. At initialization, SPE FIFOs may be allowed to fill to halfway before data is taken out. The variations around the center point are monitored to determine if the rate of the SONET envelope is greater than or less than the rate of the SPE. If the rate of the SONET envelope is greater than the rate of the SPE, then the SPE FIFO will gradually approach a more empty state. In this case, positive pointer movements will be issued in order to give the SPE an opportunity to send additional data. If the rate of the SONET envelope is less than the rate of the SPE, then the SPE FIFO will gradually approach a more full state. In this case, negative pointer movements will be issued in order to give the SPE an opportunity to output an extra byte of data from the FIFO. The SONET framer and TOH generator 40 generates transport overhead according to the BELLCORE GR-253 standard. - The outgoing SONET frames may be generated from either the timing recovered from the receive side SONET interface or from the source timing of the Port Processor. Each signal is configured separately and they can be configured differently. The frame orientation of the outgoing SONET frames may be arbitrary. Each of the four signals can be running off different timing, so there is no need to try to synchronize them together as they will constantly drift apart. There is no need to frame align the Tx ports to the Rx ports as this would result in realigning the Tx port after every realignment of the Rx port.
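The drift test behind the pointer justifications described above can be sketched as follows. The FIFO depth and the hysteresis band are assumptions; the patent only says the FIFOs fill to halfway at initialization.

```python
# Hedged sketch of the pointer-justification decision; FIFO_DEPTH and SLACK
# are assumed values, not taken from the patent.
FIFO_DEPTH = 64            # assumed SPE FIFO depth
CENTER = FIFO_DEPTH // 2   # FIFOs fill to halfway at initialization
SLACK = 8                  # assumed hysteresis around the center point

def pointer_decision(fill_level: int) -> str:
    """Map SPE FIFO drift around the halfway point to a pointer movement."""
    if fill_level < CENTER - SLACK:
        return "positive"  # envelope faster than SPE: FIFO draining
    if fill_level > CENTER + SLACK:
        return "negative"  # envelope slower than SPE: FIFO filling
    return "none"
```

The hysteresis band keeps small variations around the center point from triggering spurious justifications.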
- For OC-3 and OC-12 the 16-bit wide internal bus is serialized to 155 Mbps or 622 Mbps by the
serializer 42. For OC-48 applications, the entire sixteen-bit bus is output under the control of an external serializer (not shown). - There is a potential for forty-eight different SPEs being generated for the outgoing SONET interfaces. All of these SPEs may be generated from a single timing reference. This allows all of the SPE generators to be shared among all of the SONET and Telecom bus interfaces without multiplexing between the different clocks of the different SONET timing domains. The SPE consists of the Path level overhead and the payload data. The payload data can be TDM, ATM or packet. All of these traffic types are mapped into single SPEs or concatenated SPEs as specified by their respective standards. As the SPEs are generated, they are deposited into SPE FIFOs. For each SPE there is a sixty-four-byte FIFO and these individual SPE FIFOs are concatenated through SPE concatenation configuration registers. As described above, the fill status of the SPE FIFOs is used to determine the correct time to perform a positive or negative pointer justification.
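The 155 Mbps and 622 Mbps figures quoted above follow from standard SONET framing arithmetic (nine rows by ninety columns per STS-1, 8000 frames per second). This cross-check is background knowledge about SONET, not a formula from the patent.

```python
# Standard SONET line-rate arithmetic: an STS-n frame is 9 rows by 90*n
# columns of bytes, sent 8000 times per second.
ROWS, COLS, FRAMES_PER_SEC, BITS_PER_BYTE = 9, 90, 8000, 8

def sts_rate_mbps(n: int) -> float:
    """Line rate of an STS-n / OC-n signal in Mbps."""
    return ROWS * COLS * n * FRAMES_PER_SEC * BITS_PER_BYTE / 1e6
```

So OC-3 is 155.52 Mbps, OC-12 is 622.08 Mbps, and OC-48 is 2488.32 Mbps, matching the serializer rates given for the 16-bit internal bus.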
- TDM, ATM and packet data may all be mapped into SONET SPEs as specified by their respective standards. The type of data carried in each of the potential forty-eight SPEs may be configured through the external host processor. Based on this configuration, each SPE generator may be allocated the correct type of mapper. All of this configuration may be performed at initialization and may be changed when the particular SPE is first disabled. Once the configuration is complete, there may be an isolated set of functional blocks allocated to each SPE. This set of functional blocks includes one of each of the following: payload mapper, payload FIFO, POH generator, SPE FIFO and SPE generator. Each of the ATM and packet payload mappers has a payload FIFO into which it writes payload data for a particular SPE. For TDM traffic, each potential Virtual Container is allocated its own FIFO.
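As a hedged illustration of this per-SPE allocation, a configuration step might build one isolated block set per SPE. The function and field names are invented; only the list of five functional blocks and the limit of forty-eight SPEs come from the text.

```python
# Hypothetical sketch of per-SPE functional-block allocation at initialization;
# names are assumptions, the five-block set is from the description.
TRAFFIC_TYPES = ("TDM", "ATM", "PACKET")

def allocate_spe_blocks(traffic: str) -> dict:
    """Allocate the isolated set of functional blocks for one configured SPE."""
    if traffic not in TRAFFIC_TYPES:
        raise ValueError("each SPE must be configured as TDM, ATM or packet")
    return {
        "payload_mapper": traffic.lower() + "_mapper",  # correct mapper type
        "payload_fifo": [],
        "poh_generator": object(),
        "spe_fifo": [],            # backed by a sixty-four-byte FIFO
        "spe_generator": object(),
    }

spes = [allocate_spe_blocks("ATM") for _ in range(48)]  # up to forty-eight SPEs
```

Because the sets are isolated, reconfiguring one SPE (after disabling it) does not disturb the blocks allocated to the others.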
- Returning now to
FIG. 2, in each “datapath and link bandwidth arbitration module” 102, the data stream deserializer 126 synchronizes to the incoming serial data stream and then reassembles the row stream which is transported using two physical unilink channels. It also provides FIFO buffering on each of the two incoming serial streams so that the streams may be “deskewed” prior to row reassembly. It recovers the thirty-six-bit slot data from the row stream in a third FIFO which is used for deskewing the twelve input links. This deskewing allows all the input links to forward slot N to the switching core simultaneously. The link deskewing is controlled by the link synchronization and timing control module 150. The deserializer 126 also continuously monitors the delta between where slot 0 of the incoming row is versus the internal row boundary signal within the switch element. The difference may be reported to the Link RISC Processor 156 and used (in the first stage of a switch) as part of the ranging process to synchronize the port processor connected to the input link. - The data-
stream demapper 128 may be responsible for extracting the data from the incoming serial data links. It demaps the input link slots based on the input slot number and determines whether the traffic is TDM, PDU, or a request element (RE). For TDM traffic, the demapper determines the destination link and row buffer 132 memory address. This information is stored in a demapper RAM (not shown), which may be configured by software when TDM connections are added or torn down. For PDU traffic, the demapper 128 assembles all sixteen slots which make up the PDU into a single 64-byte PDU word, then forwards this entire PDU word to the row buffer mapper logic 130. The PDUs may be assembled prior to forwarding them to the row buffer 132 so that the row buffer mapper 130 can write the entire PDU to the row buffer 132 in a single clock cycle. This provides the maximum possible write-side memory bandwidth to the row buffer 132. It is a significant feature of the switch element that twelve entire PDUs are written to a single row buffer in six link slot times (twelve core clock cycles). For request elements, the demapper 128 assembles the three-slot block of REs into two forty-eight-bit REs and forwards them to the request parser module 152. A detailed description of the data stream demapper 128 is provided in Sections 4.3.1 et seq. of Appendix B. - The
row buffer mapper 130 may be responsible for mapping traffic which is received from the data stream demapper 128 into the row buffer 132. The mapper 130 provides FIFO buffers for the TDM traffic as it is received from the data stream demapper 128, then writes it to the row buffer 132. The row buffer memory address is actually preconfigured in the demapper RAM (not shown) within the data stream demapper module 128. That module forwards the address to the row buffer mapper 130 along with the TDM slot data. The mapper 130 also writes PDU traffic from the data stream demapper 128 to the row buffer 132 and computes the address within the row buffer 132 where each PDU will be written. PDUs are written into the row buffers starting at address 0 and then every sixteen-slot address boundary thereafter, up to the maximum configured number of PDU addresses for the row buffer 132. A detailed description of the row buffer mapper 130 is provided in Section 4.3.1.4 of Appendix B. - The
row buffer 132 contains the row buffer memory elements. According to some embodiments, it provides double buffered row storage which allows one row buffer to be written during row N while the row data which was written during row N−1 is being read out by the data stream mapper 136. Each row buffer is capable of storing 1536 slots of data. This allows the row buffer to store ninety-six PDUs or 1536 TDM slots or a combination of the two traffic types. Request elements and link overhead slots may not be sent to the row buffer 132. Therefore the row buffer may not need to be sized to accommodate the entire 1700 input link slots. According to some embodiments, the row buffer write port is 16*36=576 bits wide and it supports writing of only one thirty-six-bit slot (TDM data) or writing of an entire 576-bit word (PDU data) in a single clock cycle. A detailed description of the row buffer 132 is provided in Section 4.3.1.4 of Appendix B. - Request arbitration utilizes two components: a centralized
request parser module 152 and a request arbitration module 134 for each of the output links. Request elements are extracted from the input slot stream by the data stream demapper 128 and are forwarded to the request parser 152. The request parser 152 forwards the forty-eight-bit request elements to the appropriate request arbitration module 134 via two request buses (part of the input link bus 120). Each request bus may contain a new request element each core clock cycle. This timing allows the request arbitration logic to process thirteen request sources in less than eight core clock cycles. The thirteen request sources are the twelve input data streams and the internal multicast and in-band control messaging module 156. The request arbitration module 134 monitors the two request element buses and reads in all request elements which are targeted for the output link that the request arbitration module serves. According to some embodiments, the request arbitration module 134 provides buffering for up to twenty-four request elements. When a new request element is received, it is stored in a free RE buffer (not shown). If there are not any free RE buffers, then the lowest priority RE which is already stored in a buffer is replaced with the new RE if the new RE is a higher priority. If the new RE is equal to or lower in priority than all REs currently stored in the RE buffers, then the new RE is discarded. On the output side, when the data stream mapper module 136 is ready to receive the next RE, the request arbitration module 134 forwards the highest priority RE which is stored in the RE buffers to the data stream mapper module 136. If the RE buffers are empty, then an “Idle” RE may be forwarded. A detailed description of the request arbitration module 134 is provided in Section 7 of Appendix B. - The
data stream mapper 136 may be responsible for inserting data and request elements into the outgoing serial data links. This includes mapping of the output link slots based on the output slot number to determine if the traffic is TDM, PDU, request element, or test traffic. The determination is based on the contents of the mapper RAM (not shown). For TDM traffic, the row buffer memory address may be determined from the mapper RAM which is configured by software as TDM connections are added or torn down. For PDU traffic, the data stream mapper 136 uses one slot at a time from the row buffer 132. The row buffer memory address may be stored in the mapper RAM by software. If the target PDU is not valid (i.e., a PDU was not written to that row buffer location during the previous row time), then the mapper 136 transmits an idle pattern in order to ensure that a data PDU is not duplicated within the switch. For request elements, the mapper assembles the three-slot block of REs from two forty-eight-bit REs. The REs are read from the request arbitration module 134. For test patterns, the mapper 136 inserts the appropriate test pattern from the output link bus 122. These test patterns are created by either the test pattern generator 162 or test interface bus 164 modules. - The data stream mapper supports slot multicasting at the output stage. For example, the data stream mapper for any output link is able to copy whatever any other output link is sending out on the current slot time. This copying is controlled via the mapper RAM and allows the mapper to copy the output data from another output link on a slot-by-slot basis. A detailed description of the
data stream mapper 136 is provided in Section 4 of Appendix B. - The
data stream serializer 138 creates the output link serial stream. Data slots are received via the data stream mapper module 136 and the link overhead is generated internally by the data stream serializer 138. The serializer 138 also splits the row data stream into two streams for transmission on the two paths. A detailed description of the data stream serializer 138 is provided in Section 11 of Appendix B. - The
grant stream deserializer 140 in each module 102 works in much the same manner as the data stream deserializer 126. The primary difference is that the grant data only utilizes a single path, thus eliminating the need for deskewing and deinterleaving to recover a single input serial stream. Since this serial link is only one half the data stream rate of the forward link, there are 850 slots per row time. A single FIFO (not shown) may be used to allow for deskewing of the input serial grant streams for all twelve links. A detailed description of the grant stream deserializer 140 is provided in Section 11 of Appendix B. - The
grant stream demapper 142 may be responsible for extracting the data from the incoming serial grant links. This includes demapping of the received grant link slots based on the input slot number to determine if the traffic is a grant element or another kind of traffic. The determination is based on the contents of the grant demapper RAM (not shown). According to some embodiments, traffic other than grant elements is not yet defined. For grant elements, the grant stream demapper 142 assembles the three-slot block of GEs into two forty-eight-bit GEs and forwards them to the single grant parser module 154. A detailed description of the grant stream demapper 142 is provided in Section 7.2.3.2 of Appendix B. - The
grant arbitration module 144 operates in a similar manner to the request arbitration logic 134. In some embodiments, this module is identical to the request arbitration module. The only difference may be that it processes grant elements in the reverse path instead of request elements in the forward path. It will be recalled that grant elements are, in fact, the request elements which have been returned. - The
grant stream mapper 146 may be responsible for inserting data into the outgoing serial grant links. It maps the output grant slots based on the output slot number to determine if the traffic is a grant element or test traffic. The determination is based on the contents of the grant mapper RAM (not shown). For grant elements, it assembles the three-slot block of GEs from two forty-eight-bit GEs. The GEs are read from the grant arbitration module 144. For test patterns, it inserts the appropriate test pattern from the output link bus 122. These test patterns may be created by either the test pattern generator 162 or the test interface bus 164 modules. A detailed description of the grant stream mapper 146 is provided in Section 7.2.3.2 of Appendix B. - The
grant stream serializer 148 works in much the same manner as the data stream serializer 138. The primary difference is that the grant data utilizes only a single path, thus eliminating the need for interleaving the transmit serial stream across multiple output serial streams. Since this serial link runs at only one half the forward data stream rate, there are only 850 slots per row time. A detailed description of the grant stream serializer 148 is provided in Section 11 of Appendix B. - The modules described above (except for the request parser and the grant parser) may be instantiated for each
link module 102, of which there are twelve for each switch element 100. The following modules may be instantiated only once for each switch element. - The link synchronization &
timing control 150 provides the global synchronization and timing signals used in the switch element. It generates transmission control signals so that all serial outputs start sending row data synchronized to the RSYNC (row synchronization) input reference. It also controls the deskewing FIFOs in the data stream deserializers so that all twelve input links will drive the data for slot N at the same time onto the input link bus 120. This same deskewing mechanism may be implemented on the grant stream deserializers. A detailed description of the link synchronization and timing control 150 is provided in Section 10 of Appendix B. - The
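deskew behavior, buffering each link until every link can present slot N in the same cycle, can be modeled with a toy sketch. The skew model (leading idle entries per link) and all names are illustrative assumptions, not the hardware design.

```python
def deskew(link_streams):
    """Toy model of the deskew FIFOs: each per-link stream arrives with
    its own skew, modeled here as leading None (idle) entries. Each
    FIFO absorbs its link's skew so that one shared read pointer then
    presents slot N from every link in the same cycle."""
    skews = []
    for stream in link_streams:
        k = 0
        while k < len(stream) and stream[k] is None:
            k += 1
        skews.append(k)
    depth = min(len(s) - k for s, k in zip(link_streams, skews))
    # Read all links with a single shared pointer: slot i from each link.
    return [tuple(s[k + i] for s, k in zip(link_streams, skews))
            for i in range(depth)]
```

- The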
request parser 152 receives inputs from all thirteen request element sources and forwards the REs to the appropriate request arbitration modules via the two request element buses. A detailed description of the request parser 152 is provided in Section 7.2.1.1 of Appendix B. - The
grant parser 154 operates in a similar manner to, and may be physically identical to, the request parser 152. The only difference is that it processes grant elements in the reverse path instead of request elements in the forward path. As mentioned above, the grant elements contain the same information as the request elements, i.e., the link address through the switch from one port processor to another. - The
link RISC processor 156 controls the ranging synchronization on the input links with the source port processors in the first stage of the switch fabric, and controls the ranging synchronization on the output link grant stream input with the source port processors in the last stage of the switch fabric. It also controls the reception and transmission of the in-band communications PDUs, all of which are forwarded to the Configuration RISC Processor 158, which interprets the messages. The link RISC processor 156 itself handles only the Req/Grant processing needed to transmit multicast and in-band communications messages. - The
configuration RISC controller 158 processes configuration and status messages received from an external controller module (not shown), as well as in-band communication messages as described above. The system control module 160 handles all the reset inputs and resets the appropriate internal modules. The configuration RISC controller 158 and the system control module 160 may be implemented with an Xtensa™ processor from Tensilica, Inc., Santa Clara, Calif. - The test pattern generator and
analyzer 162 may be used to generate various test patterns, which can be sent out on any slot of the data stream or grant stream outputs. It is also capable of monitoring input slots from either the received data stream or the received grant stream. - The test
interface bus multiplexer 164 allows for sourcing transmit data from the external I/O pins and forwarding data to the I/O pins. This is used for testing the switch element when a port processor is not available. - The
unilink PLL 166 may be used to create the IF clock needed by the unilink macros. Within each unilink macro, another PLL multiplies the IF clock up to the serial clock rate. The core PLL 168 may be used to create the clock used by the switch element core logic. In some embodiments, the core clock is approximately 250 MHz. A detailed description of both PLLs is provided in Section 9 of Appendix B. - The
JTAG interface 170 may be used for two purposes: (1) boundary scan testing of the switch element at the ASIC fab, and (2) a debug interface for the Configuration RISC Processor. - As shown in
FIG. 2, there may be three datapath buses (the input link bus 120, the output link bus 122, and the grant bus 124) which may be used to move switched traffic from the input links to the output links. These buses are also used to carry traffic which is sourced or terminated internally within the switch element. Some datapaths of the input link bus are summarized in Table 2 below. Some datapaths of the output link bus are summarized in Table 3 below. Some datapaths of the grant bus are summarized in Table 4 below. -
TABLE 2

Name | Qty | Width | Description | Source
---|---|---|---|---
islot_num | 1 | 11 | Current input slot number for traffic from the Data Stream Deserializers | Link Sync & Timing Ctrl
ilink_req_0 thru ilink_req_11 | 12 | 48 | Request elements received on the input link | Data Stream Demapper module for each input link
lcl_req_0 | 1 | 48 | Request elements generated locally | Link RISC Controller
req_a, req_b | 2 | 48 | Parsed request elements | Request Parser
ilink_tdm_data_0 thru ilink_tdm_data_11 | 12 | 47 | TDM data: 36-bit data + 11-bit destination row buffer address | Data Stream Demapper module for each input link
ilink_tdm_dlink_0 thru ilink_tdm_dlink_11 | 12 | 4 | Destination output link (i.e., row buffer) identifier | Data Stream Demapper module for each input link
ilink_pdu_0 thru ilink_pdu_11 | 12 | 512 | Complete 64-byte PDU which has been assembled from the incoming slots | Data Stream Demapper module for each input link
ilink_pdu_flag_0 thru ilink_pdu_flag_11 | 12 | 13 | Each flag is asserted for each destination to which the current PDU is addressed; total destinations = 12 output links plus the internal MC and In-band Comm Controller | Data Stream Demapper module for each input link
lcl_pdu | 1 | 64 | Bus used to transport locally generated PDUs to the Data Stream Demappers | Link RISC Controller

-
TABLE 3

Name | Qty | Width | Description | Source
---|---|---|---|---
oslot_num | 1 | 11 | Current output slot number for traffic destined for the output links | Link Sync & Timing Ctrl
rbuf_dout_0 thru rbuf_dout_11 | 12 | 36 | Slot data output from the row buffer | Row Buffer for each output link
rbuf_rd_addr | 12 | 12 | Row buffer read address | Data Stream Mapper for each output link
test_src1, test_src2, test_src3 | 3 | 36 | Test traffic sources | Test Pattern Generator, Test Interface Bus
idle_ptrn | 1 | 36 | Idle pattern which is transmitted when no valid PDU data is available | Data Stream Demapper module for each input link
olink_req_0 thru olink_req_11 | 12 | 48 | Request elements for each output link | Req Arbitration modules
omap_data_0 thru omap_data_11 | 12 | 36 | Link output after the mapping multiplexers; all 12 outputs are fed back into each of the Data Stream Mappers so that TDM multicasting can be done | Data Stream Mapper for each output link

-
TABLE 4

Name | Qty | Width | Description | Source
---|---|---|---|---
olink_gntslot_num | 1 | 10 | Current input slot number for traffic from the Grant Stream Deserializers | Link Sync & Timing Ctrl
olink_gnt_0 thru olink_gnt_11 | 12 | 48 | Grant elements received on the grant receiver associated with the output link | Grant Stream Demapper
olink_gntslot_0 thru olink_gntslot_11 | 12 | 36 | Demapped slots from the received grant stream; these are slots which are not carrying grant elements | Grant Stream Demapper
gnt_a, gnt_b | 2 | 48 | Parsed grant elements | Grant Parser

- According to some embodiments, each switch element includes a multicast controller and a separate multicast PDU buffer. Multicast request elements flow through the switch in the same manner as standard unicast request elements. At the point where the message needs to be multicast, the hop-by-hop field's bit code for that switch stage indicates that the request is multicast, and the request is forwarded to the multicast controller. On the grant path, the multicast controller sources a grant if there is room for the data in the multicast recirculating buffers. Once the data has been transmitted to the multicast buffer, the multicast controller examines the data header and determines which output links the data needs to be sent out on. At this point, the multicast controller sources a number of request messages which are handled in the same manner as unicast requests.
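- The fan-out step just described, examining the buffered PDU header and sourcing one unicast-style request per destination output link, might be sketched as follows. The 12-bit destination bitmap argument and the function names are assumptions for illustration; the description says only that the controller examines the data header.

```python
def fanout_requests(dest_bitmap, make_request, num_links=12):
    """Source one unicast-style request per destination output link
    flagged in the PDU header's (assumed) 12-bit destination bitmap."""
    assert 0 <= dest_bitmap < (1 << num_links)
    return [make_request(link) for link in range(num_links)
            if (dest_bitmap >> link) & 1]
```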
- There have been described and illustrated herein several embodiments of a network switch which supports TDM, ATM, and IP traffic. While particular embodiments of the invention have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. It will therefore be appreciated by those skilled in the art that yet other modifications could be made to the provided invention without deviating from its spirit and scope as so claimed.
Claims (24)
1.-7. (canceled)
8. A method comprising:
transmitting a bandwidth request element;
receiving a grant corresponding to the bandwidth request element;
generating a protocol data unit (PDU) including routing information, to facilitate routing of the PDU through a plurality of stages of a communication switch, and the PDU further including an indication of a first stage of the plurality of stages at which multicasting begins; and
transmitting, based at least in part on said receiving of the grant, the PDU to the first stage.
9. The method according to claim 8, further comprising:
copying the PDU at the first stage to produce one or more copies of the PDU; and
replacing the routing information in the one or more copies of the PDU with updated routing information.
10. The method according to claim 8, further comprising:
transmitting the bandwidth request element in row N of a data frame; and
transmitting the PDU in row N+1 of the data frame.
11. A method comprising:
generating a bandwidth request element including routing information, to facilitate routing of the bandwidth request element through a plurality of stages of a communication switch, and the bandwidth request element further including an indication of a first stage of the plurality of stages at which multicasting begins; and
transmitting the bandwidth request element to the first stage.
12. The method according to claim 11, wherein the bandwidth request element requests bandwidth in row N+1 of a data frame, and said method further comprises:
transmitting the bandwidth request element in row N of the data frame.
13. The method according to claim 12, further comprising:
transmitting the data frame in 125 microseconds.
14. The method according to claim 11, further comprising transmitting the bandwidth request element in an in-band link; and
receiving a grant, corresponding to the bandwidth request element, via an out-of-band link.
15. A system comprising:
a receive switch controller configured to generate a bandwidth request element including routing information, to facilitate routing of the bandwidth request element through a plurality of stages of a communication switch, and the bandwidth request element further including an indication of a first stage of the plurality of stages at which multicasting begins; and
an interface coupled to the receive switch controller and configured to transmit the bandwidth request element to the first stage.
16. The system according to claim 15, further comprising:
a switch element acting as the first stage and including a controller configured to receive the bandwidth request element;
copy the bandwidth request element to produce one or more copies of the bandwidth request element; and
replace the routing information in the one or more copies of the bandwidth request element with updated routing information.
17. The system according to claim 15, wherein the interface is further configured to receive a grant corresponding to the bandwidth request element.
18. The system according to claim 15, wherein the receive switch controller comprises:
a mapper configured to generate a repeating data frame including a plurality of rows with the bandwidth request element in row N to request bandwidth in row N+1.
19. A system comprising:
a receive switch controller to generate a protocol data unit (PDU) including routing information, to facilitate routing of the PDU through a plurality of stages of a communication switch, and the PDU further including an indication of a first stage of the plurality of stages at which multicasting begins; and
an interface coupled to the receive switch controller and configured to transmit a bandwidth request element, to receive a grant corresponding to the bandwidth request element, and to transmit the PDU to the first stage based at least in part on said receiving of the grant.
20. The system according to claim 19, further comprising:
a switch element acting as the first stage and including a controller configured to receive the PDU;
copy the PDU to produce one or more copies of the PDU; and
replace the routing information in the one or more copies of the PDU with updated routing information.
21. The system according to claim 19, wherein the receive switch controller comprises:
a mapper to generate a repeating data frame including a plurality of rows with the bandwidth request element in row N to request bandwidth in row N+1.
22. A communications switch comprising:
a first switch element including one or more ports;
a second switch element including one or more ports; and
a port processor including
a first switch element interface coupled to a first port of the one or more ports of the first switch element;
a second switch element interface coupled to a first port of the one or more ports of the second switch element; and
a receive switch controller configured to automatically redirect traffic to either of said first or second switch element interfaces based at least in part on a switch congestion event.
23. The communications switch according to claim 22, wherein the first port of the first switch element and the first port of the second switch element each comprise a plurality of interleaved serial links.
24. The communications switch according to claim 22, wherein the receive switch controller is configured to automatically redirect traffic to either of said first or second switch element interfaces based at least in part on a switch failure.
25. A communications switch comprising:
a switch element including a first port and a second port; and
a port processor including
a first switch element interface coupled to the first port;
a second switch element interface coupled to the second port; and
a receive switch controller configured to automatically redirect traffic to either of said first or second switch element interfaces based at least in part on a switch failure or congestion event.
26. The communications switch according to claim 25, wherein each of said first and second ports comprises a plurality of interleaved serial links.
27. A communications switch comprising:
a first switch fabric including a plurality of first switch elements;
a second switch fabric including a plurality of second switch elements; and
a port processor including
a first switch fabric interface coupled to one of the plurality of first switch elements,
a second switch fabric interface coupled to one of the plurality of second switch elements, and
a receive switch controller configured to automatically redirect network traffic through either of said first switch fabric or said second switch fabric based at least in part on switch congestion.
28. The communications switch according to claim 27, wherein the receive switch controller is further configured to automatically redirect network traffic through either of said first switch fabric or said second switch fabric based at least in part on switch failure.
29. A communications switch comprising:
a plurality of switch elements; and
a port processor configured to repeatedly transmit a request element through a first subset of the plurality of switch elements based at least in part on a first routing tag in the request element until a grant corresponding to the request is received or the request is transmitted through the first subset a predetermined number of times, and, in an event in which the request is transmitted through the first subset a predetermined number of times, to transmit the request element through a second subset of the plurality of switch elements based at least in part on a second routing tag in the request element.
30. The communication switch of claim 29, wherein the request element includes a channel tag and the system further comprises:
a first switch fabric including the plurality of switch elements;
a second switch fabric; and
the port processor is further configured to transmit the request element over the first switch fabric or the second switch fabric based at least in part on the channel tag.
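Several of the claims above describe request-ahead framing: a bandwidth request element transmitted in row N requests bandwidth in row N+1 of a (125-microsecond) data frame, and the PDU follows in row N+1 once a grant is received. A minimal sketch of that row pipelining, with illustrative structure names and assuming every request is granted:

```python
def schedule_rows(rows_of_pdus):
    """Pair each row N with the request elements for row N+1's PDUs,
    modeling the claimed framing in which requests always run one row
    ahead of the data they request bandwidth for."""
    frames = []
    for n, pdus in enumerate(rows_of_pdus):
        ahead = rows_of_pdus[n + 1] if n + 1 < len(rows_of_pdus) else []
        frames.append({"row": n,
                       "data": pdus,  # bandwidth requested in row n-1
                       "requests": [("RE", p) for p in ahead]})
    return frames
```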
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/272,609 US20090141719A1 (en) | 2000-11-21 | 2008-11-17 | Transmitting data through commuincation switch |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/717,440 US6631130B1 (en) | 2000-11-21 | 2000-11-21 | Method and apparatus for switching ATM, TDM, and packet data through a single communications switch while maintaining TDM timing |
US10/155,517 US7463626B2 (en) | 2000-11-21 | 2002-05-24 | Phase and frequency drift and jitter compensation in a distributed telecommunications switch |
US12/272,609 US20090141719A1 (en) | 2000-11-21 | 2008-11-17 | Transmitting data through commuincation switch |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/155,517 Continuation US7463626B2 (en) | 2000-11-21 | 2002-05-24 | Phase and frequency drift and jitter compensation in a distributed telecommunications switch |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090141719A1 true US20090141719A1 (en) | 2009-06-04 |
Family
ID=29582165
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/155,517 Expired - Lifetime US7463626B2 (en) | 2000-11-21 | 2002-05-24 | Phase and frequency drift and jitter compensation in a distributed telecommunications switch |
US12/272,609 Abandoned US20090141719A1 (en) | 2000-11-21 | 2008-11-17 | Transmitting data through commuincation switch |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/155,517 Expired - Lifetime US7463626B2 (en) | 2000-11-21 | 2002-05-24 | Phase and frequency drift and jitter compensation in a distributed telecommunications switch |
Country Status (3)
Country | Link |
---|---|
US (2) | US7463626B2 (en) |
AU (1) | AU2003273160A1 (en) |
WO (1) | WO2003100991A2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050213614A1 (en) * | 2004-03-29 | 2005-09-29 | Keiichiro Tsukamoto | Transmission apparatus and reception interface apparatus |
US20120002682A1 (en) * | 2009-05-22 | 2012-01-05 | Tejas Networks Limited | Method to transmit multiple data-streams of varying capacity data using virtual concatenation |
US20140059195A1 (en) * | 2012-08-24 | 2014-02-27 | Cisco Technology, Inc. | System and method for centralized virtual interface card driver logging in a network environment |
US20140171094A1 (en) * | 2012-12-18 | 2014-06-19 | Samsung Electronics Co., Ltd. | Method of multi-hop cooperative communication from terminal and base station and network for multi-hop cooperative communication |
US20140254397A1 (en) * | 2011-01-24 | 2014-09-11 | OnPath Technologies Inc. | Methods and Systems for Calibrating a Network Switch |
WO2014153421A3 (en) * | 2013-03-19 | 2014-12-31 | Yale University | Managing network forwarding configurations using algorithmic policies |
US20150104171A1 (en) * | 2013-10-14 | 2015-04-16 | Nec Laboratories America, Inc. | Burst Switching System Using Optical Cross-Connect as Switch Fabric |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6973603B2 (en) * | 2002-06-28 | 2005-12-06 | Intel Corporation | Method and apparatus for optimizing timing for a multi-drop bus |
US7137564B2 (en) * | 2002-08-09 | 2006-11-21 | Carry Computer Eng. Co., Ltd | Communication protocol for multi-functional mini-memory card suitable for small form memory interface and usb interfaces |
US7606269B1 (en) * | 2004-07-27 | 2009-10-20 | Intel Corporation | Method and apparatus for detecting and managing loss of alignment in a virtually concatenated group |
WO2006044139A2 (en) * | 2004-10-14 | 2006-04-27 | Motorola, Inc. | System and method for time synchronizing nodes in an automotive network using input capture |
US7593429B2 (en) * | 2004-10-14 | 2009-09-22 | Temic Automotive Of North America, Inc. | System and method for time synchronizing nodes in an automotive network using input capture |
US7593344B2 (en) * | 2004-10-14 | 2009-09-22 | Temic Automotive Of North America, Inc. | System and method for reprogramming nodes in an automotive switch fabric network |
US7623552B2 (en) * | 2004-10-14 | 2009-11-24 | Temic Automotive Of North America, Inc. | System and method for time synchronizing nodes in an automotive network using input capture |
WO2006044140A2 (en) * | 2004-10-14 | 2006-04-27 | Motorola, Inc. | System and method for time synchronizing nodes in an automotive network |
US20060083172A1 (en) * | 2004-10-14 | 2006-04-20 | Jordan Patrick D | System and method for evaluating the performance of an automotive switch fabric network |
US7599377B2 (en) * | 2004-10-15 | 2009-10-06 | Temic Automotive Of North America, Inc. | System and method for tunneling standard bus protocol messages through an automotive switch fabric network |
US7613190B2 (en) * | 2004-10-18 | 2009-11-03 | Temic Automotive Of North America, Inc. | System and method for streaming sequential data through an automotive switch fabric |
JP4500154B2 (en) * | 2004-11-08 | 2010-07-14 | 富士通株式会社 | Frame transmission apparatus and frame reception apparatus |
EP1684164A1 (en) * | 2005-01-13 | 2006-07-26 | Thomson Licensing | Data transfer system |
JP2006262417A (en) * | 2005-03-18 | 2006-09-28 | Fujitsu Ltd | Communication speed control method and apparatus therefor |
ATE410874T1 (en) * | 2005-09-20 | 2008-10-15 | Matsushita Electric Ind Co Ltd | METHOD AND DEVICE FOR PACKET SEGMENTATION AND LINK SIGNALING IN A COMMUNICATIONS SYSTEM |
US8266466B2 (en) * | 2007-05-21 | 2012-09-11 | Cisco Technology, Inc. | Globally synchronized timestamp value counter |
EP2244544B1 (en) * | 2009-04-23 | 2011-10-26 | ABB Technology AG | Hardware module and backplane board for an IED |
US8345675B1 (en) * | 2010-12-28 | 2013-01-01 | Juniper Networks, Inc. | Orderly offlining in a distributed, multi-stage switch fabric architecture |
CN102111334A (en) * | 2011-02-21 | 2011-06-29 | 华为技术有限公司 | Method, source line card and network card for processing cells in switched network |
IN2012DE01073A (en) * | 2012-04-09 | 2015-07-31 | Ciena Corp | |
CA3128713C (en) | 2013-12-05 | 2022-06-21 | Ab Initio Technology Llc | Managing interfaces for dataflow graphs composed of sub-graphs |
US10148352B1 (en) * | 2017-07-11 | 2018-12-04 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Continuous carrier optical phase optometric measurement over coherent optical communication link |
FR3094593B1 (en) * | 2019-03-29 | 2021-02-19 | Teledyne E2V Semiconductors Sas | Method of synchronizing digital data sent in series |
CN113472527A (en) * | 2020-12-30 | 2021-10-01 | 广东国腾量子科技有限公司 | Synchronous correction system for quantum key distribution |
Citations (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4679190A (en) * | 1986-04-28 | 1987-07-07 | International Business Machines Corporation | Distributed voice-data switching on multi-stage interconnection networks |
US5177736A (en) * | 1990-02-07 | 1993-01-05 | Hitachi Ltd. | Packet switch |
US5191577A (en) * | 1990-08-20 | 1993-03-02 | Fujitsu Limited | Switch stage number setting apparatus for MSSR channels |
US5291477A (en) * | 1992-08-10 | 1994-03-01 | Bell Communications Research, Inc. | Method and system for multicast routing in an ATM network |
US5396491A (en) * | 1988-10-14 | 1995-03-07 | Network Equipment Technologies, Inc. | Self-routing switching element and fast packet switch |
US5412651A (en) * | 1993-02-11 | 1995-05-02 | Nec America, Inc. | Structure and method for combining PCM and common control data on a backplane bus |
US5430716A (en) * | 1993-01-15 | 1995-07-04 | At&T Corp. | Path hunt for efficient broadcast and multicast connections in multi-stage switching fabrics |
US5434858A (en) * | 1991-08-30 | 1995-07-18 | Nec Corporation | Virtual tributary path idle insertion using timeslot interchange |
US5497369A (en) * | 1990-11-06 | 1996-03-05 | Hewlett-Packard Company | Multicast switch circuits |
US5570355A (en) * | 1994-11-17 | 1996-10-29 | Lucent Technologies Inc. | Method and apparatus enabling synchronous transfer mode and packet mode access for multiple services on a broadband communication network |
US5583861A (en) * | 1994-04-28 | 1996-12-10 | Integrated Telecom Technology | ATM switching element and method having independently accessible cell memories |
US5689506A (en) * | 1996-01-16 | 1997-11-18 | Lucent Technologies Inc. | Multicast routing in multistage networks |
US5724351A (en) * | 1995-04-15 | 1998-03-03 | Chao; Hung-Hsiang Jonathan | Scaleable multicast ATM switch |
US5748629A (en) * | 1995-07-19 | 1998-05-05 | Fujitsu Networks Communications, Inc. | Allocated and dynamic bandwidth management |
US5790539A (en) * | 1995-01-26 | 1998-08-04 | Chao; Hung-Hsiang Jonathan | ASIC chip for implementing a scaleable multicast ATM switch |
US5809021A (en) * | 1994-04-15 | 1998-09-15 | Dsc Communications Corporation | Multi-service switch for a telecommunications network |
US5832303A (en) * | 1994-08-22 | 1998-11-03 | Hitachi, Ltd. | Large scale interconnecting switch using communication controller groups with multiple input-to-one output signal lines and adaptable crossbar unit using plurality of selectors |
US5831972A (en) * | 1996-10-17 | 1998-11-03 | Mci Communications Corporation | Method of and system for mapping sonet performance parameters to ATM quality of service parameters |
US5844887A (en) * | 1995-11-30 | 1998-12-01 | Scorpio Communications Ltd. | ATM switching fabric |
US5844918A (en) * | 1995-11-28 | 1998-12-01 | Sanyo Electric Co., Ltd. | Digital transmission/receiving method, digital communications method, and data receiving apparatus |
US5892924A (en) * | 1996-01-31 | 1999-04-06 | Ipsilon Networks, Inc. | Method and apparatus for dynamically shifting between routing and switching packets in a transmission network |
US5909429A (en) * | 1996-09-03 | 1999-06-01 | Philips Electronics North America Corporation | Method for installing a wireless network which transmits node addresses directly from a wireless installation device to the nodes without using the wireless network |
US5940375A (en) * | 1996-10-31 | 1999-08-17 | Fujitsu Limited | Feedback control apparatus and cell scheduling apparatus for use with cell exchange |
US5949778A (en) * | 1996-12-31 | 1999-09-07 | Northern Telecom Limited | High performance fault tolerant switching system for multimedia satellite and terrestrial communications switches |
US5959991A (en) * | 1995-10-16 | 1999-09-28 | Hitachi, Ltd. | Cell loss priority control method for ATM switch and ATM switch controlled by the method |
US6049542A (en) * | 1997-12-31 | 2000-04-11 | Samsung Electronics Co., Ltd. | Scalable multistage interconnection network architecture and method for performing in-service upgrade thereof |
US6052373A (en) * | 1996-10-07 | 2000-04-18 | Lau; Peter S. Y. | Fault tolerant multicast ATM switch fabric, scalable speed and port expansion configurations |
US6078595A (en) * | 1997-08-28 | 2000-06-20 | Ascend Communications, Inc. | Timing synchronization and switchover in a network switch |
US6097776A (en) * | 1998-02-12 | 2000-08-01 | Cirrus Logic, Inc. | Maximum likelihood estimation of symbol offset |
US6115373A (en) * | 1997-01-24 | 2000-09-05 | The Hong Kong University Of Science And Technology | Information network architecture |
US6128319A (en) * | 1997-11-24 | 2000-10-03 | Network Excellence For Enterprises Corp. | Hybrid interface for packet data switching |
US6148349A (en) * | 1998-02-06 | 2000-11-14 | Ncr Corporation | Dynamic and consistent naming of fabric attached storage by a file system on a compute node storing information mapping API system I/O calls for data objects with a globally unique identification |
US6151301A (en) * | 1995-05-11 | 2000-11-21 | Pmc-Sierra, Inc. | ATM architecture and switching element |
US6157613A (en) * | 1995-03-03 | 2000-12-05 | Fujitsu Limited | Congestion control system for controlling congestion in switching cells |
US6169749B1 (en) * | 1997-12-18 | 2001-01-02 | Alcatel Usa Sourcing L.P. | Method of sequencing time division multiplex (TDM) cells in a synchronous optical network (sonet) frame |
US6205155B1 (en) * | 1999-03-05 | 2001-03-20 | Transwitch Corp. | Apparatus and method for limiting data bursts in ATM switch utilizing shared bus |
US6240087B1 (en) * | 1998-03-31 | 2001-05-29 | Alcatel Usa Sourcing, L.P. | OC3 delivery unit; common controller for application modules |
US6256361B1 (en) * | 1996-04-29 | 2001-07-03 | Telefonaktiebolaget Lm Ericsson (Publ) | D.T.R.M. data timing recovery module |
US6275499B1 (en) * | 1998-03-31 | 2001-08-14 | Alcatel Usa Sourcing, L.P. | OC3 delivery unit; unit controller |
- 2002
  - 2002-05-24 US US10/155,517 patent/US7463626B2/en not_active Expired - Lifetime
- 2003
  - 2003-05-05 WO PCT/US2003/013793 patent/WO2003100991A2/en not_active Application Discontinuation
  - 2003-05-05 AU AU2003273160A patent/AU2003273160A1/en not_active Abandoned
- 2008
  - 2008-11-17 US US12/272,609 patent/US20090141719A1/en not_active Abandoned
Patent Citations (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4679190A (en) * | 1986-04-28 | 1987-07-07 | International Business Machines Corporation | Distributed voice-data switching on multi-stage interconnection networks |
US5396491A (en) * | 1988-10-14 | 1995-03-07 | Network Equipment Technologies, Inc. | Self-routing switching element and fast packet switch |
US5177736A (en) * | 1990-02-07 | 1993-01-05 | Hitachi Ltd. | Packet switch |
US5191577A (en) * | 1990-08-20 | 1993-03-02 | Fujitsu Limited | Switch stage number setting apparatus for MSSR channels |
US5497369A (en) * | 1990-11-06 | 1996-03-05 | Hewlett-Packard Company | Multicast switch circuits |
US5434858A (en) * | 1991-08-30 | 1995-07-18 | Nec Corporation | Virtual tributary path idle insertion using timeslot interchange |
US5291477A (en) * | 1992-08-10 | 1994-03-01 | Bell Communications Research, Inc. | Method and system for multicast routing in an ATM network |
US5430716A (en) * | 1993-01-15 | 1995-07-04 | At&T Corp. | Path hunt for efficient broadcast and multicast connections in multi-stage switching fabrics |
US5412651A (en) * | 1993-02-11 | 1995-05-02 | Nec America, Inc. | Structure and method for combining PCM and common control data on a backplane bus |
US5809021A (en) * | 1994-04-15 | 1998-09-15 | Dsc Communications Corporation | Multi-service switch for a telecommunications network |
US5583861A (en) * | 1994-04-28 | 1996-12-10 | Integrated Telecom Technology | ATM switching element and method having independently accessible cell memories |
US5832303A (en) * | 1994-08-22 | 1998-11-03 | Hitachi, Ltd. | Large scale interconnecting switch using communication controller groups with multiple input-to-one output signal lines and adaptable crossbar unit using plurality of selectors |
US5570355A (en) * | 1994-11-17 | 1996-10-29 | Lucent Technologies Inc. | Method and apparatus enabling synchronous transfer mode and packet mode access for multiple services on a broadband communication network |
US5790539A (en) * | 1995-01-26 | 1998-08-04 | Chao; Hung-Hsiang Jonathan | ASIC chip for implementing a scaleable multicast ATM switch |
US6157613A (en) * | 1995-03-03 | 2000-12-05 | Fujitsu Limited | Congestion control system for controlling congestion in switching cells |
US5724351A (en) * | 1995-04-15 | 1998-03-03 | Chao; Hung-Hsiang Jonathan | Scaleable multicast ATM switch |
US6151301A (en) * | 1995-05-11 | 2000-11-21 | Pmc-Sierra, Inc. | ATM architecture and switching element |
US5748629A (en) * | 1995-07-19 | 1998-05-05 | Fujitsu Networks Communications, Inc. | Allocated and dynamic bandwidth management |
US5905729A (en) * | 1995-07-19 | 1999-05-18 | Fujitsu Network Communications, Inc. | Mapping a data cell in a communication switch |
US5870538A (en) * | 1995-07-19 | 1999-02-09 | Fujitsu Network Communications, Inc. | Switch fabric controller comparator system and method |
US5909427A (en) * | 1995-07-19 | 1999-06-01 | Fujitsu Network Communications, Inc. | Redundant switch system and method of operation |
US6292486B1 (en) * | 1995-08-17 | 2001-09-18 | Pmc-Sierra Ltd. | Low cost ISDN/pots service using ATM |
US5959991A (en) * | 1995-10-16 | 1999-09-28 | Hitachi, Ltd. | Cell loss priority control method for ATM switch and ATM switch controlled by the method |
US5844918A (en) * | 1995-11-28 | 1998-12-01 | Sanyo Electric Co., Ltd. | Digital transmission/receiving method, digital communications method, and data receiving apparatus |
US5844887A (en) * | 1995-11-30 | 1998-12-01 | Scorpio Communications Ltd. | ATM switching fabric |
US5689506A (en) * | 1996-01-16 | 1997-11-18 | Lucent Technologies Inc. | Multicast routing in multistage networks |
US5892924A (en) * | 1996-01-31 | 1999-04-06 | Ipsilon Networks, Inc. | Method and apparatus for dynamically shifting between routing and switching packets in a transmission network |
US6256361B1 (en) * | 1996-04-29 | 2001-07-03 | Telefonaktiebolaget Lm Ericsson (Publ) | D.T.R.M. data timing recovery module |
US6359891B1 (en) * | 1996-05-09 | 2002-03-19 | Conexant Systems, Inc. | Asynchronous transfer mode cell processing system with scoreboard scheduling |
US5909429A (en) * | 1996-09-03 | 1999-06-01 | Philips Electronics North America Corporation | Method for installing a wireless network which transmits node addresses directly from a wireless installation device to the nodes without using the wireless network |
US6052373A (en) * | 1996-10-07 | 2000-04-18 | Lau; Peter S. Y. | Fault tolerant multicast ATM switch fabric, scalable speed and port expansion configurations |
US5831972A (en) * | 1996-10-17 | 1998-11-03 | Mci Communications Corporation | Method of and system for mapping sonet performance parameters to ATM quality of service parameters |
US5940375A (en) * | 1996-10-31 | 1999-08-17 | Fujitsu Limited | Feedback control apparatus and cell scheduling apparatus for use with cell exchange |
US5949778A (en) * | 1996-12-31 | 1999-09-07 | Northern Telecom Limited | High performance fault tolerant switching system for multimedia satellite and terrestrial communications switches |
US6115373A (en) * | 1997-01-24 | 2000-09-05 | The Hong Kong University Of Science And Technology | Information network architecture |
US6078595A (en) * | 1997-08-28 | 2000-06-20 | Ascend Communications, Inc. | Timing synchronization and switchover in a network switch |
US6128319A (en) * | 1997-11-24 | 2000-10-03 | Network Excellence For Enterprises Corp. | Hybrid interface for packet data switching |
US6169749B1 (en) * | 1997-12-18 | 2001-01-02 | Alcatel Usa Sourcing L.P. | Method of sequencing time division multiplex (TDM) cells in a synchronous optical network (sonet) frame |
US6414141B1 (en) * | 1997-12-22 | 2002-07-02 | Astrazeneca Ab | Process for purifying an ampicillin pro-drug ester |
US6049542A (en) * | 1997-12-31 | 2000-04-11 | Samsung Electronics Co., Ltd. | Scalable multistage interconnection network architecture and method for performing in-service upgrade thereof |
US6351474B1 (en) * | 1998-01-14 | 2002-02-26 | Skystream Networks Inc. | Network distributed remultiplexer for video program bearing transport streams |
US6148349A (en) * | 1998-02-06 | 2000-11-14 | Ncr Corporation | Dynamic and consistent naming of fabric attached storage by a file system on a compute node storing information mapping API system I/O calls for data objects with a globally unique identification |
US6097776A (en) * | 1998-02-12 | 2000-08-01 | Cirrus Logic, Inc. | Maximum likelihood estimation of symbol offset |
US6240087B1 (en) * | 1998-03-31 | 2001-05-29 | Alcatel Usa Sourcing, L.P. | OC3 delivery unit; common controller for application modules |
US6275499B1 (en) * | 1998-03-31 | 2001-08-14 | Alcatel Usa Sourcing, L.P. | OC3 delivery unit; unit controller |
US6363078B1 (en) * | 1998-03-31 | 2002-03-26 | Alcatel Usa Sourcing, L.P. | OC-3 delivery unit; path verification method |
US6320861B1 (en) * | 1998-05-15 | 2001-11-20 | Marconi Communications, Inc. | Hybrid scheme for queuing in a shared memory ATM switch buffer |
US6640248B1 (en) * | 1998-07-10 | 2003-10-28 | Malibu Networks, Inc. | Application-aware, quality of service (QoS) sensitive, media access control (MAC) layer |
US6452926B1 (en) * | 1998-07-17 | 2002-09-17 | Telefonaktiebolaget Lm Ericsson (Publ) | Reliable and robust atm-switch |
US6538993B1 (en) * | 1998-07-28 | 2003-03-25 | Fujitsu Limited | ATM switch and quality control method for an ATM connection |
US6765928B1 (en) * | 1998-09-02 | 2004-07-20 | Cisco Technology, Inc. | Method and apparatus for transceiving multiple services data simultaneously over SONET/SDH |
US6635611B2 (en) * | 1998-11-13 | 2003-10-21 | Genencor International, Inc. | Fluidized bed low density granule |
US6526448B1 (en) * | 1998-12-22 | 2003-02-25 | At&T Corp. | Pseudo proxy server providing instant overflow capacity to computer networks |
US6205155B1 (en) * | 1999-03-05 | 2001-03-20 | Transwitch Corp. | Apparatus and method for limiting data bursts in ATM switch utilizing shared bus |
US6516422B1 (en) * | 1999-05-19 | 2003-02-04 | Sun Microsystems, Inc. | Computer system including multiple clock sources and failover switching |
US6359859B1 (en) * | 1999-06-03 | 2002-03-19 | Fujitsu Network Communications, Inc. | Architecture for a hybrid STM/ATM add-drop multiplexer |
US6898211B1 (en) * | 1999-06-16 | 2005-05-24 | Cisco Technology, Inc. | Scheme for maintaining synchronization in an inherently asynchronous system |
US7023833B1 (en) * | 1999-09-10 | 2006-04-04 | Pulse-Link, Inc. | Baseband wireless network for isochronous communication |
US6907041B1 (en) * | 2000-03-07 | 2005-06-14 | Cisco Technology, Inc. | Communications interconnection network with distributed resequencing |
US6675307B1 (en) * | 2000-03-28 | 2004-01-06 | Juniper Networks, Inc. | Clock controller for controlling the switching to redundant clock signal without producing glitches by delaying the redundant clock signal to match a phase of primary clock signal |
US6636467B1 (en) * | 2000-06-30 | 2003-10-21 | Hewlett-Packard Development Company, L.P. | Method and apparatus for accurately calibrating the timing of a write onto storage media |
US20020093952A1 (en) * | 2000-06-30 | 2002-07-18 | Gonda Rumi Sheryar | Method for managing circuits in a multistage cross connect |
US6980618B1 (en) * | 2000-08-11 | 2005-12-27 | Agere Systems Inc. | Phase offset detection |
US6646983B1 (en) * | 2000-11-21 | 2003-11-11 | Transwitch Corporation | Network switch which supports TDM, ATM, and variable length packet traffic and includes automatic fault/congestion correction |
US20020075883A1 (en) * | 2000-12-15 | 2002-06-20 | Dell Martin S. | Three-stage switch fabric with input device features |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7672341B2 (en) * | 2004-03-29 | 2010-03-02 | Fujitsu Limited | Transmission apparatus and reception interface apparatus |
US20050213614A1 (en) * | 2004-03-29 | 2005-09-29 | Keiichiro Tsukamoto | Transmission apparatus and reception interface apparatus |
US20120002682A1 (en) * | 2009-05-22 | 2012-01-05 | Tejas Networks Limited | Method to transmit multiple data-streams of varying capacity data using virtual concatenation |
US8923337B2 (en) * | 2009-05-22 | 2014-12-30 | Tejas Networks Limited | Method to transmit multiple data-streams of varying capacity data using virtual concatenation |
US9088377B2 (en) * | 2011-01-24 | 2015-07-21 | OnPath Technologies Inc. | Methods and systems for calibrating a network switch |
US20140254397A1 (en) * | 2011-01-24 | 2014-09-11 | OnPath Technologies Inc. | Methods and Systems for Calibrating a Network Switch |
US20140059195A1 (en) * | 2012-08-24 | 2014-02-27 | Cisco Technology, Inc. | System and method for centralized virtual interface card driver logging in a network environment |
US10313380B2 (en) | 2012-08-24 | 2019-06-04 | Cisco Technology, Inc. | System and method for centralized virtual interface card driver logging in a network environment |
US9917800B2 (en) * | 2012-08-24 | 2018-03-13 | Cisco Technology, Inc. | System and method for centralized virtual interface card driver logging in a network environment |
US20140171094A1 (en) * | 2012-12-18 | 2014-06-19 | Samsung Electronics Co., Ltd. | Method of multi-hop cooperative communication from terminal and base station and network for multi-hop cooperative communication |
US9532296B2 (en) * | 2012-12-18 | 2016-12-27 | Samsung Electronics Co., Ltd. | Method of multi-hop cooperative communication from terminal and base station and network for multi-hop cooperative communication |
KR101855273B1 (en) * | 2012-12-18 | 2018-05-08 | 삼성전자주식회사 | Communication method for multi-hop cooperation communication of terminal and base station and network for the multi-hop cooperation communication |
WO2014153421A3 (en) * | 2013-03-19 | 2014-12-31 | Yale University | Managing network forwarding configurations using algorithmic policies |
US10361918B2 (en) | 2013-03-19 | 2019-07-23 | Yale University | Managing network forwarding configurations using algorithmic policies |
US9544667B2 (en) * | 2013-10-14 | 2017-01-10 | Nec Corporation | Burst switching system using optical cross-connect as switch fabric |
US20150104171A1 (en) * | 2013-10-14 | 2015-04-16 | Nec Laboratories America, Inc. | Burst Switching System Using Optical Cross-Connect as Switch Fabric |
Also Published As
Publication number | Publication date |
---|---|
US20030091035A1 (en) | 2003-05-15 |
AU2003273160A1 (en) | 2003-12-12 |
US7463626B2 (en) | 2008-12-09 |
WO2003100991A2 (en) | 2003-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6631130B1 (en) | Method and apparatus for switching ATM, TDM, and packet data through a single communications switch while maintaining TDM timing | |
US6646983B1 (en) | Network switch which supports TDM, ATM, and variable length packet traffic and includes automatic fault/congestion correction | |
US20090141719A1 (en) | Transmitting data through commuincation switch | |
JP4338728B2 (en) | Method and apparatus for exchanging ATM, TDM and packet data via a single communication switch | |
US6636515B1 (en) | Method for switching ATM, TDM, and packet data through a single communications switch | |
US7061935B1 (en) | Method and apparatus for arbitrating bandwidth in a communications switch | |
US20030227913A1 (en) | Adaptive timing recovery of synchronous transport signals | |
US5568486A (en) | Integrated user network interface device | |
US7035294B2 (en) | Backplane bus | |
US20040047367A1 (en) | Method and system for optimizing the size of a variable buffer | |
US6765928B1 (en) | Method and apparatus for transceiving multiple services data simultaneously over SONET/SDH | |
EP1518366B1 (en) | Transparent flexible concatenation | |
EP1423931B1 (en) | Stm-1 to stm-64 sdh/sonet framer with data multiplexing from a series of configurable i/o ports | |
US6636511B1 (en) | Method of multicasting data through a communications switch | |
US8107362B2 (en) | Multi-ring resilient packet ring add/drop device | |
US6836486B2 (en) | Switching of low order data structures using a high order switch | |
US6721336B1 (en) | STS-n with enhanced granularity | |
EP1537694B1 (en) | Synchronous transmission network node | |
US6954461B1 (en) | Communications network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TR TECHNOLOGIES FOUNDATION LLC, DELAWARE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TRANSWITCH CORPORATION;REEL/FRAME:024965/0724
Effective date: 20071226
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION