US20060221990A1 - Hiding system latencies in a throughput networking system

Hiding system latencies in a throughput networking system

Info

Publication number
US20060221990A1
US20060221990A1 US11/098,245 US9824505A US2006221990A1
Authority
US
United States
Prior art keywords
receive
packet
network interface
memory
memory access
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/098,245
Other versions
US7987306B2 (en)
Inventor
Shimon Muller
Rahoul Puri
Michael Wong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle America Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Priority to US11/098,245 priority Critical patent/US7987306B2/en
Assigned to SUN MICROSYSTEMS, INC. reassignment SUN MICROSYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MULLER, SHIMON, PURI, RAHOUL, WONG, MICHAEL
Publication of US20060221990A1 publication Critical patent/US20060221990A1/en
Priority to US13/008,092 priority patent/US8006016B2/en
Application granted granted Critical
Publication of US7987306B2 publication Critical patent/US7987306B2/en
Assigned to Oracle America, Inc. reassignment Oracle America, Inc. MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: Oracle America, Inc., ORACLE USA, INC., SUN MICROSYSTEMS, INC.
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L47/326 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames with random discard, e.g. random early discard [RED]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/20 Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/382 Information transfer, e.g. on bus using universal interface adapter
    • G06F13/385 Information transfer, e.g. on bus using universal interface adapter for adaptation of a particular data processing system to different peripheral devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9057 Arrangements for supporting packet reassembly or resequencing

Definitions

  • I/O Input Output
  • Known networked computer systems include platform servers, server based appliances and desktop computer systems.
  • Networked systems are generally judged by a number of efficiencies relating to network throughput (i.e., the aggregate network data movement ability for a given traffic profile), network latency (i.e., the system contribution to network message latency), packet rate (i.e., the system's upper limit on the number of packets processed per time unit), session rate (i.e., the system's upper limit on creation and removal of network connections or sessions), and networking processing overhead (i.e., the processing cost associated with a given network workload).
  • Different uses of networked systems are more or less sensitive to different ones of these efficiencies. For example, bulk data movement workloads such as disk backup, media streaming and file transfers tend to be sensitive to network throughput; transactional uses, such as web servers, tend to be sensitive to packet rates and session rates; and distributed application workloads, such as clustering, tend to be sensitive to network latency.
  • in known computer systems there may be one or more contributors to the system latency. These contributors include memory technologies that do not keep pace with processor and networking speeds. Also, known computer systems may be based on a non-uniform memory access (NUMA) architecture, which increases latency if the data cannot be held in the memory of the local processor. In known network systems it is often difficult to control where data is stored.
  • NUMA non-uniform memory access
  • IOMMUs Input output memory management units
  • VM virtual memory
  • IOMMUs can also generate system latencies.
  • systems that use a virtual memory (VM) model often require virtual address to physical address translation in hardware.
  • the translation tables are hardware limited. If an entry is evicted from the translation table, the latency penalty can be significant. This issue is typical for networking systems because it is often difficult to control where information is stored.
  • a network system which addresses system latency issues by recognizing that a typical network system communicates with many destinations (via, e.g., multiple TCP connections), and that network traffic is bursty (i.e., multiple packets are sent at a time for a given connection).
  • the network system in accordance with the present invention includes an I/O architecture and protocol which allows relaxed ordering.
  • the network system includes a transmit method of requesting multiple packets and reordering interleaved partial completions.
  • the network system includes a receive method that minimizes ordering constraints on the I/O path of the network system.
  • the network system includes one or more of a plurality of features which address system latency issues.
  • the present invention provides a method for moving data for each connection independently and in parallel to and from memory. When one channel stalls due to a memory latency, another channel takes over. Also for example, in one embodiment, multiple packets are moved at a time. Also for example, in one embodiment, a split transaction model is implemented; the split transaction model enforces strict ordering on a given connection only when necessary and otherwise uses relaxed ordering. Also for example, in one embodiment, the network system maximizes IOMMU locality, thereby reducing the probability of a translation table entry being evicted. Also for example, in one embodiment, the network system reduces bridge latency in certain applications.
  • the network system provides dedicated resources for each connection including independent DMA channels, data structures, FIFOs, etc. Also for example, in one embodiment, the network system requests multiple packets from the same and multiple connections; the network system includes multiple receive descriptor updates and receive mailbox completions. Also for example, in one embodiment, the network system includes a reorder mechanism. Also for example, in one embodiment, the network system provides large virtually contiguous portions including virtually contiguous regions for descriptors and large virtually contiguous consecutively posted sub-buffers.
  • in another embodiment, the invention relates to a network system which includes a plurality of processing entities, a memory system coupled to the plurality of processing entities, and a network interface coupled to the plurality of processing entities and the memory system, wherein the network interface includes a plurality of memory access channels.
  • the network interface unit moves data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
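  • For illustration only, the following C sketch shows one way a set of independent memory access channels might be serviced so that a channel stalled on an outstanding memory request does not block progress on the other channels; the structure and function names are assumptions, not the implementation described here.

        /* Hypothetical sketch: service many independent DMA channels so that a
         * channel waiting on a long memory latency never stalls the others. */
        #include <stdbool.h>
        #include <stddef.h>

        #define NUM_CHANNELS 32

        struct dma_channel {
            bool waiting_on_memory;   /* outstanding request not yet acknowledged */
            int  pending_packets;     /* packets queued for this connection       */
        };

        /* One pass over the channels: a channel still waiting for an
         * acknowledgement is simply skipped, so an arbitrary memory latency on
         * one connection does not delay data movement on the remaining channels. */
        static void service_channels(struct dma_channel ch[NUM_CHANNELS])
        {
            for (size_t i = 0; i < NUM_CHANNELS; i++) {
                if (ch[i].waiting_on_memory || ch[i].pending_packets == 0)
                    continue;                     /* skip stalled or idle channels */
                /* issue_memory_request(&ch[i]) would start the next transfer */
                ch[i].waiting_on_memory = true;
                ch[i].pending_packets--;
            }
        }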
  • FIG. 2 shows a conceptual diagram of the asymmetrical processing functional layering of the present invention.
  • FIG. 5A shows a block diagram of the flow of packet data and associated control signals in the network system from the operational perspective of receiving incoming packet data.
  • FIG. 5B shows a block diagram of the flow of packet data and associated control signals in the network system from the operational perspective of transmitting packet data.
  • FIG. 6 shows a block diagram of an implementation of a mailbox image of an interrupt status register in the multiprocessor system.
  • FIG. 7 shows a diagram of the timing sequence for an interrupt service routine utilizing the mailbox configuration.
  • FIGS. 9A and 9B show a block diagram of a receive packet FIFO module and a packet classifier module.
  • FIG. 10 shows a schematic block diagram of a receive DMA module.
  • FIG. 11 shows a schematic block diagram of a transmit DMA module and a transmit FIFO/reorder logic module.
  • FIG. 12 shows a schematic block diagram of an example of a four port network interface unit.
  • FIG. 13 shows a schematic block diagram of an example of a two port network interface unit.
  • FIG. 15 shows a flow chart of the movement of a packet received by the network interface unit.
  • FIG. 16 shows a flow chart of the movement of a packet transmitted by the network interface unit.
  • FIG. 17 shows a flow chart of the operation of a port scheduler.
  • FIG. 18 shows a flow chart of a select operation of the port scheduler.
  • FIG. 19 shows a flow chart of a loop operation of the port scheduler.
  • FIG. 20 shows a flow chart of the operation of a weighted random early discard module.
  • FIG. 23 shows a block diagram of the packet classification hierarchy.
  • FIG. 25 shows a flow diagram of a transmit flow between a network interface unit and a network system software stack.
  • the network system 100 includes a network interface unit 110 which is coupled to an interconnect device 112 via an interconnect controller 114 .
  • the interconnect controller 114 is also coupled to a peripheral interface module 116 .
  • the interconnect device 112 is also coupled to a plurality of processing entities 120 and to memory system 130 .
  • the processing entities 120 are coupled to the memory system 130 .
  • Each processing entity 120 includes a respective cache 121 .
  • the interconnect device 112 may be an input/output (I/O) bus (such as e.g., a PCI Express bus) along with a corresponding bus bridge, a crossbar switch or any other type of interconnect device.
  • I/O input/output
  • the interconnect device 112 or a bus bridge within the interconnect device 112 may include an I/O memory management unit (IOMMU).
  • IOMMU I/O memory management unit
  • the interconnect device 112 may be conceptualized as part of the interconnect in the processor coherency domain. The interconnect device 112 resides on the boundary between the coherent and the non-coherent domains of the network system 100 .
  • Each processing entity 120 may be a processor, a group of processors, a processor core, a group of processor cores, a processor thread or a group of processor threads or any combination of processors, processor cores or processor threads.
  • a single processor may include a plurality of processor cores and each processor core may include a plurality of processor threads. Accordingly, a single processor may include a plurality of processing entities 120 .
  • Each processing entity 120 also includes a corresponding memory hierarchy.
  • the memory hierarchy includes, e.g., a first level cache (such as cache 121 ), a second level cache, etc.
  • the memory hierarchy may also include a processor portion of a corresponding non-uniform memory architecture (NUMA) memory system.
  • NUMA non-uniform memory architecture
  • the memory system 130 may include a plurality of individual memory devices such as a plurality of memory modules. Each individual memory module or a subset of the plurality of individual memory modules may be coupled to a respective processing entity 120 .
  • the memory system 130 may also include corresponding memory controllers as well as additional cache levels. So for example, if the processing entities 120 of the network system 100 each include a first level cache, then the memory system 130 might include one or more second level caches.
  • the network system 100 addresses system latency issues by recognizing that a typical network system communicates with many destinations (via, e.g., multiple TCP connections), and that network traffic is bursty (i.e., multiple packets are sent at a time for a given connection).
  • the network system 100 includes an I/O architecture and protocol which allows relaxed ordering.
  • the network system 100 includes a transmit method of requesting multiple packets and reordering interleaved partial completions.
  • the network system 100 includes a receive method that minimizes ordering constraints on the I/O path of the network system.
  • the network system 100 includes one or more of a plurality of features which address system latency issues. For example, the network system 100 moves data for each connection independently and in parallel to and from the memory system 130 . When one channel stalls due to a memory latency, another channel takes over. Also for example, multiple packets are moved at a time. Also for example, a split transaction model is implemented; the split transaction model enforces strict ordering on a given connection only when necessary and otherwise uses relaxed ordering. Also for example, the network system 100 maximizes IOMMU locality, thereby reducing the probability of a translation table entry being evicted. Also for example, the network system 100 reduces bridge latency in certain applications.
  • the network system 100 provides dedicated resources for each connection including independent DMA channels. Also for example, the network system requests multiple packets from the same and multiple connections; the network system 100 includes multiple receive descriptor updates and receive mailbox completions. Also for example, the network system includes a reorder mechanism. Also for example, in one embodiment, the network system provides large virtually contiguous portions including virtually contiguous regions for descriptors and large virtually contiguous consecutively posted sub-buffers.
  • the network system 100 addresses system latency within the network system by providing a network interface which includes a plurality of memory access channels, moving data within each of the plurality of memory access channels independently and in parallel to and from memory so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
  • the network system 100 may include one or more of a plurality of features relating to reducing system latency.
  • the network system 100 may allow relaxed ordering when internally moving data between the network interface and the memory system.
  • a memory access channel may include dedicated queuing, control and buffering to move data while preserving ordering between a processing entity and the network interface.
  • the data may include packets of information; and multiple packets of information are sent at a time for a particular memory access channel.
  • the network system 100 may selectively enforce internal transaction ordering for some transactions within a memory access channel while keeping other transactions as relaxed ordering as necessary.
  • the plurality of memory access channels may include a plurality of receive memory access channels dedicated to moving data between the network interface and the memory system. Each of the plurality of receive memory access channels may include a receive descriptor ring.
  • Each of the plurality of receive memory access channels may include a receive completion ring.
  • the plurality of memory access channels may include a plurality of transmit memory access channels dedicated to moving data between the memory system and the network interface.
  • the plurality of transmit memory access channels may include transmit descriptor rings.
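  • As an illustration of the per-channel structures named above, the following C sketch lays out a receive memory access channel with a descriptor ring of posted buffers and a completion ring; the field names and ring sizes are assumptions for illustration, not the actual register or ring layout.

        #include <stdint.h>

        #define DESC_RING_ENTRIES 256   /* illustrative ring sizes */
        #define COMP_RING_ENTRIES 512

        struct rx_dma_channel {
            /* descriptor ring: buffer blocks posted by software */
            uint64_t desc[DESC_RING_ENTRIES];
            uint32_t desc_head, desc_tail;   /* software posts, hardware consumes */

            /* completion ring: addresses of buffers now holding packets */
            uint64_t comp[COMP_RING_ENTRIES];
            uint32_t comp_head, comp_tail;   /* hardware fills, software reaps   */

            uint32_t comp_threshold;         /* queue length that may raise an interrupt */
            uint32_t comp_timeout;           /* time out that may raise an interrupt     */
        };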
  • the method and apparatus of the present invention are capable of implementing asymmetrical multi-processing wherein processing resources are partitioned for processes and flows.
  • the partitions can be used to implement networking functions by using strands of a multi-stranded processor, or Chip Multi-Threaded Core Processor (CMT) to implement key low-level functions, protocols, selective off-loading, or even fixed-function appliance-like systems.
  • CMT Chip Multi-Threaded Core Processor
  • Using the CMT architecture for offloading leverages the traditionally larger processor teams and the clock speed benefits possible with custom methodologies. It also makes it possible to leverage a high capacity memory-based communication instead of an I/O interface. On-chip bandwidth and the higher bandwidth per pin supports CMT inclusion of network interfaces and packet classification functionality.
  • Asymmetrical processing in the system of the present invention is based on selectively implementing, off-loading, or optimizing specific functions, protocols, or flows, while preserving the networking functionality already present within the operating system of the local server or remote participants.
  • the network offloading can be viewed as granular slicing through the layers for specific flows, functions or applications.
  • the “offload” category includes the set of networking functions performed either below the TCP/IP stack, or the selective application of networking functions vertically for a set of connections/applications. Examples of the offload category include: (a) bulk data movement (NFS client, RDMA, iSCSI); (b) packet overhead reduction; (c) zero copy (application posted buffer management); and (d) scalability and isolation (traffic spreading from a hardware classifier).
  • FIG. 2 shows the “layers” 1-4 of a traditional networking system that comprise the link, network, transport and application layers, respectively.
  • a dashed line illustrates the delineation of networking functions that are traditionally handled by hardware vs. software. As shown in FIG. 2 , in most networking systems this line of delineation is between layers 2 and 3.
  • Network functions in prior art systems are generally layered and computing resources are symmetrically shared by layers that are multiprocessor ready, underutilized by layers that are not multiprocessor ready, or not shared at all by layers that have coarse bindings to hardware resources.
  • the layers have different degrees of multiprocessor readiness, but generally they do not have the ability to be adapted for scaling in multiprocessor systems. Layered systems often have bottlenecks that prevent linear scaling.
  • time slicing occurs across all of the layers, applications, and operating systems.
  • low-level networking functions are interleaved, over time, in all of the elements.
  • the present invention implements a method and apparatus that dedicates processing resources rather than utilizing those resources as time sliced.
  • the dedicated resources are illustrated by the vertical columns in FIG. 2 that will sometimes be referred to herein as “silos.”
  • the advantage of the asymmetrical model of the present invention is that it moves away from time slicing and moves toward “space slicing.”
  • the processing entities are dedicated to implement a particular networking function, even if the dedication of these processing resources to a particular network function sometimes results in “wasting” the dedicated resource because it is unavailable to assist with some other function.
  • the allocation of processing entities can be performed with fine granularity.
  • the “silos” that are defined in the architecture of the present invention are desirable for enhancing performance, correctness, or for security purposes.
  • FIG. 3 is an illustration of a networking system that is partitioned whereby a plurality of processing entities are asymmetrically allocated to various networking functions.
  • the functional associations of the processing entities 120 a - n are illustrated by the dashed boundaries designated by reference numerals 310 a - d .
  • the functional association of processing entity 120 a and memory system 130 designated by reference numeral 310 a is a “hypervisor” that is responsible for managing the partitioning and association of the other processing entities, as will be described in greater detail hereinbelow.
  • Reference numeral 310 b shows the association of a processing entity 120 b with memory system 130 and a network interface unit resource of the network interface unit 110 .
  • Reference numeral 310 c illustrates the association of a plurality of processing entities 120 c - e with memory system 130 for performing a processing function that does not directly involve a network interface resource.
  • Reference numeral 310 d illustrates an association of a plurality of processing entities 120 f - n with memory system 130 and one or more network interface resources of the network interface unit 110 .
  • the various processing entities 120 a - n can comprise an entire processor core or a processing strand of a processing core.
  • the hypervisor 312 manages the partitioning and association of the various processing entities with the memory system 130 and, in some instances, with a predetermined set of networking resources in the network interface unit.
  • the hypervisor 312 has the responsibility for configuring the control resources that will be dedicated to whichever processing entity is charged with responsibility for managing a particular view of the interface.
  • the silo that is defined to include the M processing entities 120 f - n
  • only those processing entities will have the ability to access a predetermined set of hardware resources relating to the interface.
  • the control of the other processing entities, e.g., processing entities 120 c - e , and the access to the memory system 130 for these processing entities are kept separate.
  • processing entities 120 c - e can be assigned to a processing task that does not directly involve a network interface resource, such as the N processing entities 120 c - e .
  • processing entities can be assigned to perform a network functionality, protocol or hardware function, such as the M processing entities 120 f - n illustrated in FIG. 3 .
  • the present invention uses computer resources for network specific functions that could be low level or high level.
  • High-level resources that are concentrated and implemented in the “silo” associations of the present invention are faster than a prior art general implementation of a symmetrical processing system.
  • low-level functionality previously performed in hardware can be raised above the delineation line illustrated in FIG. 2 . If there is a processing entity with a bottleneck, another processing entity, or strand, can become part of the flow or part of the function being executed in a particular “silo.”
  • the processing entities that are associated with an interface or other functionality remain efficient because they continue to be associated with the shared memory resources.
  • the processing entities 120 a - n are dedicated without being physically moved within the various layers of the networking system.
  • FIG. 3 also shows two network interface instances 110 . Each of the interfaces could have multiple links.
  • the system of the present invention comprises aggregation and policy mechanisms which make it possible to apply all of the control and the mapping of the processing entities 120 a - 120 n to more than one physical interface.
  • fine or coarse grain processing resource controls and memory separation can be used to achieve the desired partitioning. Furthermore it is possible to have a separate program image and operating system for each resource. Very “coarse” bindings can be used to partition a large number of processing entities (e.g., half and half), or fine granularity can be implemented wherein a single strand of a particular core can be used for a function or flow.
  • the separation of the processing resources on this basis can be used to define partitions to allow simultaneous operation of various operating systems in a separated environment or it can be used to define two interfaces, but to specify that these two interfaces are linked to the same operating system.
  • a network system software stack 410 includes one or more instantiations of a network interface unit device driver 420 , the hypervisor 312 , as well as one or more operating systems 430 (e.g., OS 1 , OS 2 , OS 3 ).
  • the network interface unit 110 interacts with the operating system 430 via a respective network interface unit device driver 420 .
  • Hypervisor 312 is a high level firmware based function which performs a plurality of functions and services relating to the network system such as e.g., creating and enforcing the partitioning of a logically partitioned network system.
  • Hypervisor 312 is a software implemented virtual machine.
  • the network system 100 via hypervisor 312 , allows the simultaneous execution of independent operating system images by virtualizing all the hardware resources of the network system 100 .
  • Each of the operating systems 430 interact with the network interface unit device driver 420 via extended partition portions of the hypervisor 312 .
  • FIGS. 5A and 5B are illustrations of the flow of packet data and associated control signals in the system of the present invention from the operational perspective of receiving incoming packet data and transmitting packet data, respectively.
  • the network interface 110 is comprised of a plurality of physical network interfaces that provide data to a plurality of media access controllers (MACs).
  • the MACs are operably connected to a classifier and a queuing layer comprising a plurality of queues.
  • the classifier “steers” the flow of packet data in conjunction with a flow table, as described in more detail hereinbelow.
  • a mapping function based on the classification function performed by the classifier, and a receive DMA controller function are used to provide an ordered mapping of the packets into a merging module.
  • the output of the merging module is a flow of packets into a plurality of receive DMA channels that are functionally illustrated as a plurality of queuing resources, where the number of receive DMA channels shown in FIG. 5A is independent of the number of physical interfaces providing inputs to the interface unit. Both data and “events” travel over the DMA channels.
  • the queuing resources move the packet data to the shared memory.
  • the queues also hold “events” and therefore, are used to transfer messages corresponding to interrupts.
  • the main difference between data and events in the system of the present invention is that data is always consumed by memory, while events are directed to the processing entities.
  • the classifier determines which of the processing entities will receive the interrupt corresponding to the processing of a packet of data. The classifier also determines where in the shared memory a data packet will be stored for further processing. The queues are isolated by the designation of DMA channels.
  • control registers pages
  • the associations between the intended strands of the processing entities and the control registers are separable via the hypervisor 312 (see, e.g. FIG. 3 ). This is a logical relationship, rather than a physical relationship between the functional components of the interface unit.
  • Aggregation and classification are accomplished by the two interfaces that share the classifier and also share the DMA channels.
  • the classification function and the assignment of packets to DMA channels can be accomplished regardless of where the data packet originated. Fine and coarse grain are implemented by the flow table and the operation of the hypervisor to manage the receive DMA channels and the processing entities.
  • FIG. 5B is an illustration of the flow of packet data and associated control signals from the operational perspective of transmitted packet data.
  • Packets transmitted from the various processing entities 120 a - 120 n are received by the interconnect 112 and are directed via a plurality of transmit DMA channels.
  • the transmit DMA channels generate a packet stream that is received by the reorder module.
  • the reorder module is responsible for generating an ordered stream of packets and for providing a fan-out function.
  • the output of the reorder module is a stream of packets that are stored in transmit data FIFOs.
  • the packets in the transmit data FIFOs are received by the plurality of media access controllers and are thereafter passed to the network interfaces.
  • FIG. 6 is an illustration of a mailbox and register-based interrupt event notification apparatus for separable, low overhead, scalable network interface service.
  • events are messages that are essentially the same as memory writes.
  • the “message” (or the “interrupt”) is simply a means for waking up a specified processing entity; it does not contain information relating to why the processing entity is requested to wake up.
  • when a request to wake up a processing entity is issued, it is also necessary to explain the nature of the task that the processing entity is requested to perform.
  • when the processing entity, e.g., processing entity 120 b , wakes up, it will read the information in the interrupt status register that denotes the task to be performed.
  • while the interrupt status register in the interface unit hardware provides accurate information relating to the state of the interrupt request, accessing this information involves significant processing overhead and latency.
  • data corresponding to the interrupt status that would normally be obtained from the Rx DMA interrupt status register 1016 in the network interface unit 110 is transferred into a “mailbox” 1010 in the shared memory 130 .
  • the shared memory mailbox is used to store an image of a corresponding interrupt register in the network interface unit 110 .
  • the image of the interrupt status register is stored in the shared memory mailbox just prior to sending a message to a processing entity asking it to wake up and perform a specified task.
  • the processing entity that is requested to perform a specified task can access the information in the shared memory mailbox much more efficiently and quickly than it can obtain the information from the corresponding hardware register in the network interface.
  • the information in the hardware interrupt status register in the interface unit may change between the time the message is issued to a processing entity and the time the processing entity “wakes up” to perform the specified task. Therefore the data contained in the image of the interrupt storage register that is stored in the shared memory mailbox may not be the latest version.
  • the processing entity can quickly determine the reason it was asked to wake up. It is very easy for the processing entity to consult the shared memory mailbox because of its close proximity to the processing entity.
  • the purpose of the mailbox 1010 is to minimize the number of times that the processing entity must cross the I/O interface.
  • the mailbox 1010 allows the processing entity 120 a to postpone the time that it actually needs to read the contents of the interrupt status register in the interface unit.
  • FIG. 7 The advantages relating to the shared memory mailbox implementation of the present invention can be seen by referring to FIG. 7 .
  • the system executes an interrupt service routine wherein the interrupt is decoded to identify a particular process to be executed.
  • the processing entity then executes a PIO read (PIORD) to retrieve data from the interrupt status register.
  • PIORD PIO read
  • the data obtained from the interrupt status register is used by the processing entity to perform actions corresponding to the information contained in the interrupt status register.
  • a subsequent PIORD is issued to determine if the interrupt status register contains data corresponding to additional actions that must be executed. This subsequent PIORD has a corresponding latency Δt 2 that results in a second stall. If the result of the subsequent PIORD indicates that the data previously obtained from the interrupt status register is the most current information, the processing entity responds with a return (RET) and the interrupt is terminated.
  • RET return
  • the interrupt service routine implemented using the shared memory mailbox of the present invention is illustrated generally by the lower timing diagram in FIG. 7 .
  • the processing entity accesses the image of the interrupt register in the shared memory mailbox, rather than executing a PIORD. This provides much faster access to the data and, therefore, significantly decreases the overall latency for the interrupt service routine.
  • the present invention also decreases the overall latency of the interrupt service routine by initiating a subsequent PIORD while the process is being executed.
  • the subsequent PIORD is executed on an interleaved basis while the processing entity is executing the process and the contents of the actual interrupt status register can be verified to determine if additional actions have been added to the interrupt request subsequent to storing the contents of the interrupt status register in the shared memory mailbox.
  • the subsequent PIORD can be “prefetched” by interleaving it with the processing, thereby allowing the status of the actual interrupt status register to be verified immediately upon completion of the process resulting in an overall significantly shorter time for the system to process the interrupt service routine.
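  • A minimal C sketch of the mailbox-based interrupt service routine described above follows; the helper names (pio_read_start, pio_read_complete, process_events) are hypothetical. The handler reads the interrupt status image from the shared memory mailbox, starts the slow PIO read of the live register, performs the indicated work, and only then checks the PIO result for conditions that arrived late.

        #include <stdint.h>

        struct rx_mailbox {
            volatile uint64_t int_status_image;  /* device-written copy of the status register */
        };

        extern void     pio_read_start(void);      /* begin read of the real register (slow) */
        extern uint64_t pio_read_complete(void);   /* collect the result of that read        */
        extern void     process_events(uint64_t status);

        void rx_interrupt(struct rx_mailbox *mb)
        {
            uint64_t status = mb->int_status_image; /* fast: local memory, no I/O stall      */

            pio_read_start();                       /* overlap the slow PIO read ...         */
            process_events(status);                 /* ... with the useful work              */

            uint64_t latest = pio_read_complete();  /* available almost immediately now      */
            if (latest & ~status)
                process_events(latest & ~status);   /* service anything that arrived late    */
        }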
  • the network interface unit 110 includes a transmit DMA module 812 , a transmit FIFO/reorder logic module 814 , a receive FIFO module 816 , a receive packet classifier module 818 , and a receive DMA module 820 .
  • the network interface unit 110 also includes a media access control (MAC) module 830 and a system interface module 832 .
  • the transmit packet FIFO reorder logic module 814 includes a transmit packet FIFO 850 and a transmit reorder module 852 .
  • the receive FIFO module 816 includes a receive packet FIFO 860 and a receive control FIFO 862 .
  • Each of the modules within the network interface unit 110 includes respective programmable input/output (PIO) registers.
  • the PIO registers are distributed among the modules of the network interface unit 110 to control respective modules.
  • the PIO registers are where memory mapped I/O loads and stores to control and status registers (CSRs) are dispatched to different functional units.
  • CSRs control and status registers
  • the system interface module 832 provides the interface to the interconnect device 112 and ultimately to the memory system 130 .
  • the MAC module 830 provides a network connection such as an Ethernet controller.
  • the MAC module 830 supports a link protocol and statistics collection.
  • Packets received by the MAC module 830 are first classified based upon the packet header information via the packet classifier 818 .
  • the classification determines the receive DMA channel within the receive DMA module 820 .
  • Transmit packets are posted to a transmit DMA channel within the transmit DMA module 812 .
  • Each packet may include a gather list.
  • the network interface unit 110 supports checksum and CRC-32c offload on both receive and transmit data paths via the receive FIFO module 816 and the transmit FIFO reorder logic module 814 , respectively.
  • the network interface unit 110 provides support for partitioning. For functional blocks that are physically associated with a network port (such as MAC registers within the MAC module 830 ) or logical devices such as receive and transmit DMA channels within the receive DMA module 820 and the transmit DMA module 812 , respectively, control registers are grouped into separate physical pages so that a partition manager (or hypervisor) can manage the functional blocks through a memory management unit on the processor side of the network system to provide an operating system (potentially multiple operating systems) direct access to the control registers. Control registers of shared logical blocks such as the packet classifier module 818 , though grouped into one or more physical pages, may be managed solely by a partition manager (or hypervisor).
  • a partition manager or hypervisor
  • Each DMA channel can be viewed as belonging to a partition.
  • the CSRs of multiple DMA channels can be grouped into a virtual page to simplify management of the DMA channels.
  • Each transmit DMA channel or receive DMA channel can perform range checking and relocation for addresses residing in multiple programmable ranges.
  • the addresses in the configuration registers, packet gather list pointers on the transmit side and the allocated buffer pointer on the receive side are then checked and relocated accordingly.
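  • The following C sketch illustrates the per-channel range checking and relocation described above, with invented field names: an address taken from a gather list or buffer pointer must fall inside one of the channel's programmed ranges and is rebased before being used.

        #include <stdbool.h>
        #include <stdint.h>

        #define NUM_RANGES 4   /* number of programmable ranges is illustrative */

        struct addr_range {
            uint64_t base;         /* start of the allowed window             */
            uint64_t size;         /* length of the window                    */
            uint64_t relocation;   /* offset applied to form the real address */
            bool     valid;
        };

        /* Returns true and writes the relocated address if addr falls inside a
         * programmed range; otherwise the request would be rejected. */
        static bool check_and_relocate(const struct addr_range r[NUM_RANGES],
                                       uint64_t addr, uint64_t *out)
        {
            for (int i = 0; i < NUM_RANGES; i++) {
                if (r[i].valid && addr >= r[i].base && addr - r[i].base < r[i].size) {
                    *out = addr + r[i].relocation;
                    return true;
                }
            }
            return false;
        }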
  • the network interface unit 110 supports sharing available system interrupts.
  • the number of system interrupts may be less than the number of logical devices.
  • a system interrupt is an interrupt that is sent to a processing entity 120 .
  • a logical device refers to a functional block that may ultimately cause an interrupt.
  • a logical device may be a transmit DMA channel, a receive DMA channel, a MAC device or other system level module.
  • One or more logical conditions may be defined by a logical device.
  • a logical device may have up to two groups of logical conditions. Each group of logical conditions includes a summary flag, also referred to as a logical device flag (LDF). Depending on the logical conditions captured by the group, the logical device flag may be level sensitive or may be edge triggered. An unmasked logical condition, when true, may trigger an interrupt.
  • LDF logical device flag
  • Logical devices are grouped into logical device groups.
  • a logical device group is a set of logical devices sharing an interrupt.
  • a group may have one or more logical devices.
  • the state of the logical devices that are part of a logical device group may be read by software.
  • the logical device group interrupt mask is a per logical device group mask that defines which logical device within the group, when a logical condition (LC) becomes true, can issue an interrupt.
  • the logical condition is a condition that when true can trigger an interrupt.
  • a logical condition may be a level (i.e., the condition is constantly being evaluated) or may be an edge (i.e., a state is maintained when the condition first occurs; this state is cleared to enable detection of a next occurrence of the condition).
  • One example of a logical device that belongs to a group but does not generate an interrupt is a transmit DMA channel which is part of a logical device group.
  • Software may examine the flags associated with the transmit DMA channel by setting the logical device group number of the logical device. However, the transmit DMA channel will not trigger an interrupt if the corresponding bit of the interrupt mask is not set.
  • a system interrupt control value is associated with a logical device group.
  • the system interrupt control value includes an arm bit, a timer and system interrupt data.
  • System interrupt data is the data associated with the system interrupt and is sent along with the system interrupt.
  • the system interrupt control value is set by a partition manager or a hypervisor.
  • a device driver of the network interface unit 110 writes to a register to set the arm bit and set the value of the timer. Hardware causes the timer to start counting down.
  • a system interrupt is only issued if the timer is expired, the arm bit is set and one or more logical devices in a logical device group have their flags set and not masked. This system interrupt timer value ensures that there is some minimal separation between interrupt requests.
  • Software clears the state or adjusts the conditions of individual Logical Devices after servicing. Additionally, software enables a mailbox update of the Logical Device if desired. In one embodiment, hardware does not support any aggregate updates applied to an entire logical device group.
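  • The interrupt decision for a logical device group can be summarized with the following C sketch; the structure is an assumption, but the rule follows the description above: an interrupt is issued only when the timer has expired, the arm bit is set, and at least one unmasked logical device flag in the group is set.

        #include <stdbool.h>
        #include <stdint.h>

        struct ldg_state {
            uint32_t ldf_flags;   /* one flag bit per logical device in the group */
            uint32_t ldf_mask;    /* which devices are allowed to interrupt       */
            bool     armed;       /* arm bit written by the device driver         */
            uint32_t timer;       /* counts down to space successive interrupts   */
        };

        static bool should_issue_interrupt(struct ldg_state *g)
        {
            if (!g->armed || g->timer != 0)
                return false;                  /* not armed, or too soon after the last one */
            if ((g->ldf_flags & g->ldf_mask) == 0)
                return false;                  /* no unmasked condition is pending          */
            g->armed = false;                  /* assumed: driver re-arms after servicing   */
            return true;
        }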
  • the system interrupt data is provided to a non cacheable unit to lookup the hardware thread and interrupt number.
  • some higher order bits of the system interrupt data are used to select a PCI function and the other bits of the logical device group ID are passed as part of the message signal interrupt (MSI) data, depending on the range value.
  • MSI message signal interrupt
  • a PCI-Express or HyperTransport (HT) module supports a system interrupt data to message signal interrupt (MSI) lookup unit.
  • the network interface unit 110 looks up the MSI address and the MSI data. A posted write to the MSI address with the MSI data is issued. This is always an ordered request.
  • a datapath interface is the interface to the specific interconnect.
  • Another embodiment of the integrated network interface unit 110 system interface supports cache line size transfers.
  • Logically there are two classes of requests, ordered requests and bypass requests.
  • the two classes of requests are queued separately in the system interface unit 832 .
  • An ordered request is not issued to the memory system 130 until “older” ordered and bypass requests are completed. However, acknowledgements may return out of order.
  • Bypass requests may be issued as long as the memory system 130 can accept the request and may overtake “older” ordered requests that are enqueued or in transit to the memory system 130 .
  • Packet data transfers both receive and transmit, are submitted as bypass requests. Control data requests that affect the state of the DMA channels are submitted as ordered requests. Additionally, write requests can be posted and no acknowledgement is returned.
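  • As a rough C sketch of the two request classes (types and counters are assumptions): packet data moves as bypass requests that may pass older ordered requests, while control updates are ordered requests that are held until every older request has completed.

        #include <stdbool.h>

        struct sif_queues {
            int older_ordered_pending;   /* older ordered requests still outstanding */
            int older_bypass_pending;    /* older bypass requests still outstanding  */
        };

        /* An ordered request may issue only after all older ordered and bypass
         * requests have completed. */
        static bool can_issue_ordered(const struct sif_queues *q)
        {
            return q->older_ordered_pending == 0 && q->older_bypass_pending == 0;
        }

        /* A bypass request may issue whenever the memory system can accept it,
         * even ahead of older ordered requests that are enqueued or in transit. */
        static bool can_issue_bypass(bool memory_can_accept)
        {
            return memory_can_accept;
        }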
  • a non cacheable unit is a focal point where PIO requests are dispatched to the network interface unit 110 and where the PIO information read returns and interrupts are processed.
  • the non cacheable unit serializes the PIOs from different processor threads to the network interface unit 110 .
  • the non cacheable unit also includes an internal table where, based on the System Interrupt Data, the non cacheable unit looks up the processor thread number and the interrupt number used.
  • referring to FIGS. 9A and 9B , a block diagram of the receive FIFO module 816 and the packet classifier module 818 is shown.
  • the receive FIFO module 816 is coupled to the MAC module 830 and the receive DMA module 820 as well as to the packet classifier module 818 .
  • the packet classifier module 818 is coupled to the MAC module 830 and the receive FIFO module 816 .
  • the receive FIFO module 816 includes a per port receive packet FIFO 860 and a per port control FIFO 862 .
  • if the network interface unit 110 includes two network ports, the per port receive packet FIFO 860 includes two corresponding FIFO buffers; if the network interface unit 110 includes four network ports, the per port receive packet FIFO 860 includes four FIFO buffers.
  • similarly, if the network interface unit 110 includes two network ports, the per port control FIFO 862 includes two corresponding control FIFO buffers; if the network interface unit 110 includes four network ports, the per port control FIFO 862 includes four control FIFO buffers.
  • the packet classifier module 818 includes a Layer 2 parser 920 , a virtual local area network (VLAN) table 922 , a MAC address table 924 , a layer 3 and 4 parser 926 , a hash compute module 930 , a lookup and compare module 932 , a TCAM and associated data module 934 and a merge logic receive DMA channel (RDC) map lookup module 936 .
  • the packet classifier module 818 also includes a receive DMA channel multiplexer module 938 .
  • the packet classifier module 818 also includes a checksum module 940 .
  • the packet classifier module 818 and specifically, the lookup and compare module 932 , is coupled to a hash table 950 .
  • the receive DMA module 820 includes a plurality of receive DMA channels 1010 , e.g., receive DMA channel 0 —receive DMA channel 31 .
  • the receive DMA module 820 also includes a port scheduler module 1020 , a receive DMA control scheduler module 1022 , a datapath engine module 1024 , a memory acknowledgement (ACK) processing module 1026 and a memory and system interface module 1028 .
  • ACK memory acknowledgement
  • the plurality of DMA channels 1010 are coupled to the port scheduler module 1020 as well as the receive DMA channel control scheduler 1022 and the memory ACK processing module 1026 .
  • the port scheduler module 1020 is coupled to the receive packet FIFO 860 and the receive control FIFO 862 as well as to the datapath engine scheduler module 1024 .
  • the datapath engine scheduler 1024 is coupled to the port scheduler module 1020 , the receive DMA channel control scheduler 1022 as well as to the memory acknowledgement processing module 1026 and the memory and system interface module 1028 .
  • the memory and system interface module 1028 is coupled to the receive packet FIFO 860 and the receive control FIFO 862 as well as to the datapath engine scheduler module 1024 and to the system interface module 832 .
  • the memory ACK processing module 1026 is coupled to the plurality of DMA channels 1010 as well as to the datapath engine scheduler 1024 and the system interface module 832 .
  • Each of the plurality of receive DMA channels 1010 includes a receive block ring (RBR) prefetch module 1040 , a receive completion ring (RCR) Buffer module 1042 , a receive DMA channel state module 1044 , a weighted random early discard WRED logic module 1046 and a partition definition register module 1048 .
  • RBR receive block ring
  • RCR receive completion ring
  • the transmit DMA module 812 is coupled to the system interface module 832 as well as to the transmit FIFO/reorder logic module 814 .
  • the transmit FIFO/reorder module 814 is coupled to the system interface module 832 as well as to the transmit DMA module 812 .
  • the transmit FIFO/reorder logic module 814 includes per port transmit FIFO 1110 and a per port reorder module 1111 as well as a checksum and CRC module 1162 .
  • the per port transmit FIFO 1110 and the per port reorder module 1111 each include logic and buffers which correspond to the number of network ports within the network interface unit 110 . For example, if the network interface unit 110 includes two network ports, then the per port reorder module 1111 includes two reorder modules and the transmit FIFO 1110 includes two FIFO buffers; if the network interface unit 110 includes four network ports, then the per port reorder module 1111 includes four reorder modules and the transmit FIFO 1110 includes four FIFO buffers.
  • the transmit DMA module 812 includes a plurality of transmit DMA channels 1120 , e.g., transmit DMA channel 0 —transmit DMA channel 31 .
  • the transmit DMA module 812 also includes a scheduler module 1130 , a transmit DMA channel prefetch scheduler 1132 , a multiplexer 1134 , and an acknowledgement (ACK) processing module 1136 .
  • ACK acknowledgement
  • Each transmit DMA channel 1120 includes a control state register portion 1140 , a transmit ring prefetch buffer 1142 and a partition control register 1144 .
  • the control state register portion 1140 includes a plurality of control state registers which are associated with the PIO registers and which control an individual transmit DMA channel 1120 .
  • the scheduler module 1130 includes per port deficit round robin (DRR) scheduler modules 1150 as well as a round robin scheduler module 1152 .
  • the per port scheduler modules 1150 correspond to the number of network ports within the network interface unit 110 . For example, if the network interface unit 110 includes two network ports, then the scheduler module 1130 includes two per port DRR scheduler modules 1150 (port 0 DRR scheduler module and port 1 DRR scheduler module); if the network interface unit 110 includes four network ports, then the scheduler module 1130 includes four per port DRR scheduler modules 1150 (port 0 DRR scheduler module through port 3 DRR scheduler module). Each per port DRR scheduler module 1150 includes a transmit DMA channel map module 1154 , as sketched below.
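  • A generic deficit round robin pass, shown below in C, conveys the kind of per-port scheduling a DRR scheduler module performs over the transmit DMA channels mapped to a port; the quanta, byte accounting and names are assumptions rather than the actual scheduler.

        #include <stdint.h>
        #include <stddef.h>

        #define CHANNELS_PER_PORT 8   /* illustrative mapping size */

        struct tx_channel {
            uint32_t deficit;        /* accumulated credit in bytes                */
            uint32_t quantum;        /* credit earned each round (sets the weight) */
            uint32_t next_pkt_len;   /* length of the head packet, 0 if idle       */
        };

        /* One DRR round: each mapped channel earns its quantum and may send while
         * it has enough credit, so port bandwidth is shared in proportion to the
         * configured quanta. */
        static void drr_round(struct tx_channel ch[CHANNELS_PER_PORT])
        {
            for (size_t i = 0; i < CHANNELS_PER_PORT; i++) {
                if (ch[i].next_pkt_len == 0) {
                    ch[i].deficit = 0;           /* idle channels do not bank credit */
                    continue;
                }
                ch[i].deficit += ch[i].quantum;
                while (ch[i].next_pkt_len != 0 && ch[i].deficit >= ch[i].next_pkt_len) {
                    ch[i].deficit -= ch[i].next_pkt_len;
                    /* transmit_packet(&ch[i]) would hand the packet to the port FIFO;
                     * next_pkt_len would then be refreshed from the channel's ring. */
                    ch[i].next_pkt_len = 0;      /* placeholder: no further packet queued */
                }
            }
        }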
  • the Transmit FIFO reorder logic module 814 includes a per port reorder module 1111 and a per port transmit FIFO 1110 and a checksum and CRC module 1162 .
  • the per port transmit FIFO 1110 includes FIFO buffers which correspond to the number of network ports within the network interface unit 110 . For example, if the network interface unit 110 includes two network ports, then the per port transmit FIFO 1110 includes two per port transmit FIFO buffers; if the network interface unit 110 includes four network ports, then the per port transmit FIFO 1110 includes four per port transmit FIFO buffers.
  • the four port network interface unit 1200 includes a transmit DMA module 812 , a transmit FIFO reorder logic module 814 , a receive FIFO module 816 , a receive packet classifier module 818 , and a receive DMA module 820 .
  • the four port network interface unit 1200 also includes a media access control (MAC) module 830 and a system interface module 832 .
  • the four port network interface unit 1200 also includes a zero copy function module 1210 which is coupled to a TCP translation buffer table module 1212 .
  • the packet classifier module 818 includes a corresponding ternary content addressable memory (TCAM) module 934 .
  • the packet classifier module 818 is coupled to an FC RAM module 950 which stores flow tables for use by the packet classifier module 818 .
  • the receive DMA module 820 includes 32 receive DMA channels 1010 .
  • the transmit DMA module 812 includes 32 transmit DMA channels 1120 .
  • the MAC module 830 includes four MAC ports 1220 as well as a serializer/deserializer (SERDES) bank module 1222 . Because there are four MAC ports 1220 , the per port receive packet FIFOs 816 include four corresponding receive packet FIFOs and the per port transmit FIFOs 814 include four corresponding transmit FIFOs.
  • the system interface module 832 includes a PCI Express interface module 1230 , a system interface SERDES module 1232 and a HT interface module 1234 .
  • referring to FIG. 13 , a schematic block diagram of an example of an integrated network interface unit 1300 is shown.
  • in the integrated network interface unit 1300 , portions of the four port network interface unit 1200 are included within an integrated solution in which network functions are included with a processor core. (The processor core is omitted from the Figure for clarity purposes.)
  • the integrated network interface unit 1300 includes a transmit DMA module 812 , a transmit FIFO reorder logic module 814 , a receive FIFO module 816 , a receive packet classifier module 818 , and a receive DMA module 820 .
  • the integrated network interface unit 1300 also includes a media access control (MAC) module 830 and a system interface module 832 .
  • MAC media access control
  • the packet classifier module 818 includes a corresponding TCAM module 934 .
  • the packet classifier module 818 is coupled to an FC RAM module 950 which stores flow tables for use by the packet classifier module 818 .
  • the receive DMA module 820 includes 32 receive DMA channels 1010 .
  • the transmit DMA module 812 includes 32 transmit DMA channels 1120 .
  • the MAC module 830 includes two MAC ports 1220 as well as a SERDES bank module 1222 . Because there are two MAC ports 1220 , the per port receive packet FIFOs 816 include two corresponding receive packet FIFOs and the per port transmit FIFOs 814 include two corresponding transmit FIFOs.
  • the receive and transmit FIFOs are stored within a network interface unit memory pool.
  • the system interface module 832 includes an I/O unit module 1330 and a system interface unit module 1332 .
  • a flow chart of the classification of a packet received by the network interface unit 110 is shown. More specifically, a packet is received by a MAC port of the MAC module 830 at step 1410 .
  • the MAC module 830 includes a plurality of media access controller (MAC) ports that support a network protocol such as an Ethernet protocol.
  • the media access controller ports include layer 2 protocol logic, statistic counters, address matching and filtering logic.
  • the output from a media access controller port includes information on a destination address, whether the address is a programmed individual address or an accepted group address, and the index associated with the destination address in that category.
  • Packets from different physical ports are stored temporarily in a per port receive packet FIFO at step 1412 .
  • as the packets are stored into the per port receive FIFO module 816 , the header of the packet is copied to the packet classifier module 818 at step 1414 .
  • the packet is passed through the checksum module at step 1416 .
  • the packet classifier module 818 determines, at step 1420 , to which receive DMA channel group the packet belongs and an offset into the receive DMA channel table.
  • the network interface unit 110 includes eight receive DMA channel groups.
  • Each receive DMA Channel 1010 includes a receive block ring (RBR), a receive completion ring (RCR) and a set of control and status registers. (See, e.g., FIG. 21 .) Physically, the receive DMA channels 1010 are allocated as ring buffers in memory system 130 . A receive DMA channel 1010 is selected after an incoming packet is classified. A packet buffer is derived from a pool of packet buffers in the memory system 130 and used to store the incoming packet. Each receive DMA channel 1010 is capable of issuing an interrupt based on the queue length of the receive completion ring or a time out.
  • the receive block ring is a ring buffer of memory blocks posted by software.
  • the receive completion ring is a ring that stores the addresses of the buffers used to store incoming packets.
  • each receive DMA channel group table includes 32 entries (see, e.g., FIG. 23 ). Each entry contains one receive DMA channel 1010 . Each table defines the group of receive DMA channels that are allowed to move a packet to the system memory.
  • the packet classifier module 818 chooses a table as an intermediate step before a final receive DMA channel 1010 is selected. The zeroth entry of the table is the default receive DMA channel 1010 .
  • the default receive DMA channel 1010 queues error packets within the group. The default can be one of the receive DMA channels in the group.
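For illustration only, the following C sketch shows the table-based selection described above; the type names, field widths, and the error-packet fallback to entry zero are assumptions chosen for readability, not the hardware definition.

```c
#include <stdint.h>

#define GROUP_TABLE_ENTRIES 32  /* entries per receive DMA channel group table */

struct rdc_group_table {
    uint8_t channel[GROUP_TABLE_ENTRIES];  /* valid receive DMA channel numbers */
};

/*
 * Select a receive DMA channel for a classified packet.
 * 'offset' is the index produced by classification; entry 0 holds the
 * default receive DMA channel, used here for error packets.
 */
static uint8_t select_rx_channel(const struct rdc_group_table *group,
                                 unsigned offset, int packet_has_error)
{
    if (packet_has_error || offset >= GROUP_TABLE_ENTRIES)
        return group->channel[0];           /* default receive DMA channel */
    return group->channel[offset];
}
```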
  • the Layer 2 parser 920 processes the network header to determine if the received packet contains a virtual local area network (VLAN) Tag at step 1430 .
  • a VLAN ID is used to lookup into a VLAN table 922 to determine the receive DMA channel table number for the packet.
  • the packet classifier 818 also looks up the MAC address table 924 to determine a receive DMA channel table number based on the destination MAC address information. Software programs determine which of the two results to use in subsequent classification.
  • the output of the Layer 2 parser 920 , together with the resulting receive DMA channel table number, is passed to the layer 3 and 4 parser 926 .
  • the Layer 3 and 4 parser 926 examines the EtherType, the Type of Service/Differentiated Services Code Point (TOS/DSCP) field and the Protocol ID/Next header field to determine whether the IP packet needs further classification at step 1432 .
  • the Layer 3 and 4 parser 926 recognizes a fixed protocol such as a transmission control protocol (TCP) or a user datagram protocol (UDP).
  • the Layer 3 and 4 parser 926 also supports a programmable Protocol IP number. If the packet needs further classification, a flow key and a TCAM key are generated for the packet at step 1434 .
  • the TCAM key is provided to the TCAM unit 934 for an associative search at step 1440 . If there is a match, the result of the search (i.e., the TCAM result) may override the receive DMA channel Table selection for the Layer 2 or provide an offset into the Layer 2 receive DMA channel Table and ignore the result from the Hash unit 930 .
  • the result of the search may also specify a zero copy flow identifier to be used in a zero copy translation.
  • the TCAM result also determines whether a hash lookup based on the flow key is needed at step 1442 .
  • a hash unit 930 uses the receive DMA channel table number provided by the TCAM module 934 , which determines the partition of the external table that the hash unit 930 can search. A lookup is launched and either an exact match or an optimistic match is performed. If there is a match, the result contains the offset into the receive DMA channel table and the user data.
  • the result may also contain a zero copy flow identification value used in a zero copy operation.
  • the output from the hash unit 930 and the TCAM module 934 are merged to determine the receive DMA channel 1010 at step 1450 .
  • the receive DMA channel 1010 moves the packet into memory system 130 . If a zero copy flow identification value is present as determined at step 1452 , then a zero copy function is performed at step 1454 and the receive DMA channel 1010 moves the packet with header payload separation.
  • a zero copy function is a receive function that performs header vs. payload separation and places payloads at a correct location within pre-posted (per flow) buffers. Each per flow buffer list may be viewed as a zero copy DMA channel. Packet headers are stored into memory system 130 via regular receive DMA channels, as determined by the packet classifier module 818 . Using zero copy, the network interface unit 110 may operate on a packet by packet basis without requiring reassembly buffers within the network interface unit 110 . Zero copy saves costly data movement operations from a host protocol stack, and in some cases reduces the per packet overheads by postponing header processing until a large set of buffers may be visited. Protocol state machines, and exception processing are maintained in the host protocol stack. Thus, the host's data movement function is removed on a selective basis and subject to instantaneous buffer availability.
  • an anchor (part of the zero copy state), which is a variable set associating the transmission control protocol (TCP) sequence number space to a buffer list and implicitly confining zero copy to the current receive TCP window, is retrieved together with a buffer list to determine whether payload placement is possible. One or more payload DMA operations are then determined.
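A minimal sketch of this anchor check is given below. The structure layout, the simplified modulo-2^32 sequence comparison, and the direct mapping of the sequence offset onto the buffer list are assumptions for illustration, not the patented state layout.

```c
#include <stdint.h>

struct zc_anchor {
    uint32_t win_start;   /* TCP sequence number mapped to the buffer list start */
    uint32_t win_len;     /* length of the receive TCP window being tracked */
};

/* Sequence-number arithmetic is modulo 2^32. */
static int seq_in_window(uint32_t seq, uint32_t start, uint32_t len)
{
    return (uint32_t)(seq - start) < len;
}

/*
 * Return the byte offset into the per-flow buffer list where the payload
 * may be placed, or -1 if placement is not possible and the packet must
 * fall back to a regular receive DMA channel.
 */
static long zc_payload_offset(const struct zc_anchor *a,
                              uint32_t payload_seq, uint32_t payload_len)
{
    if (payload_len == 0 ||
        !seq_in_window(payload_seq, a->win_start, a->win_len) ||
        !seq_in_window(payload_seq + payload_len - 1, a->win_start, a->win_len))
        return -1;
    return (long)(uint32_t)(payload_seq - a->win_start);
}
```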
  • the outputs of the packet classifier module 818 and possibly one or more zero copy DMA operations associated with the packet are stored into the receive control FIFO 862 .
  • the network interface unit 110 supports checksum offload and CRC-32c offload for transmission control protocol/stream control transmission protocol (TCP/SCTP) payloads.
  • the network interface unit 110 compares the calculated values with the values embedded in the packet. The results of the compare are sent to software via a completion status indication. No discard decision is made based on the CRC result. Checksum/CRC errors do not affect the layer 3 and 4 classification. Similarly, the error status is provided to software via the completion status indication. Zero copy DMA operations are not performed if checksum errors are detected, though zero copy states are updated regardless of the packet error. The entire packet is stored in system memory using the appropriate receive DMA channel.
  • the receive packet FIFO 860 is logically organized per physical port. Layer 2, 3 and 4 error information is logically synchronized with the classification result of the corresponding packet.
  • Referring to FIG. 15 , a flow chart of the movement of a packet by the receive DMA module 820 of the network interface unit 110 is shown. More specifically, logically there are 32 receive DMA channels (receive DMA channel 0 through receive DMA channel 31 ) available to incoming packets.
  • the datapath engine scheduler 1024 is common across all DMA operations. The datapath engine scheduler 1024 also prefetches receive block pointers or updates the completion ring of the receive DMA channels 1010 and prefetches zero copy buffer pointers.
  • each receive DMA channel 1010 supports multiple memory rings. All the addresses posted by software, such as the configuration of the ring buffers and buffer block addresses are range compared and optionally translated when used to reference memory system 130 based on the ranges.
  • the size of each block is programmable, but fixed per channel.
  • the network interface unit 110 maintains a prefetch buffer 1040 for the receive block ring and a tail pointer for the receive completion ring.
  • a request is issued to the DMA system to retrieve a cache line of block addresses from the ring. If the receive completion ring tail pointer needs to be updated, a write request is issued.
  • the consistency of the receive completion ring state is maintained by the network interface unit 110 .
  • the receive DMA channel control scheduler 1022 maintains the fairness among the receive DMA channels.
  • the port scheduler 1020 examines whether there are any packets available from the receive packet FIFO 860 and the receive control FIFO 862 at step 1562 . The port scheduler 1020 then determines which port to service first at step 1564 .
  • the port scheduler 1020 includes a Deficit Round Robin scheduler.
  • the port scheduler's determination does not depend on whether the packet is part of a zero copy flow. From the control header, the port scheduler 1020 determines which receive DMA channel 1010 to check for congestion and retrieves a buffer to store the packet at step 1566 . Congestion is relieved by a WRED algorithm applied on the receive buffer ring and the receive completion ring. If the receive DMA channel 1010 is not congested, a buffer address is allocated according to the packet size at step 1568 . Packet data requests are issued as posted writes. For zero copy flows, the buffers reflected in the receive completion ring buffer 1042 only hold the packet headers.
  • the datapath engine 1024 fairly schedules the requests from the Port Scheduler and the receive DMA channel control scheduler 1022 at step 1570 .
  • the datapath engine 1024 then issues the requests to the memory system 130 at step 1572 .
  • the receive completion ring buffer 1042 is updated after issuing the write requests for the entire packet at step 1574 .
  • the DMA status registers are updated every time that the receive completion ring buffer 1042 is updated at step 1576 .
  • Software may poll the DMA status registers to determine if any packet has been received.
  • the network interface unit 110 may update the receive completion ring buffer 1042 , and simultaneously, write the DMA status registers to a mailbox at step 1580 .
  • the software state is then updated and the logical device flag (LDF) may be raised at step 1582 .
  • the LDF may then lead to a system interrupt at step 1584 .
  • the network interface unit 110 maintains the consistency of the DMA status registers and the receive completion ring buffer 1042 as the status registers reflect the content of the receive completion ring in the memory system 130 at step 1586 .
  • FIG. 16 shows a flow chart of the movement of a packet transmitted by the network interface unit 110 .
  • the transmit DMA module 812 includes 32 transmit DMA channels 1120 .
  • Each transmit DMA channel 1120 includes a transmit ring and a set of control and status registers. (See, e.g., FIG. 22 .) Similar to the receive channels, each transmit channel supports multiple ranges. Addresses in the transmit ring are subjected to a range checking translation based on the ranges.
  • the transmit ring includes a ring buffer in memory system 130 .
  • Software posts packets into the transmit ring at step 1610 and signals the transmit DMA module 812 that packets have been queued at step 1612 .
  • Each packet is optimally built as a gather list.
  • the network interface unit 110 ensures that the packet size does not exceed the maximum packet size limit.
  • the network interface unit 110 prefetches the transmit ring entries into a per channel transmit ring prefetch buffer 1142 at step 1614 .
  • Any transmit DMA channel 1120 can be bound to one of the network ports by software.
  • the binding of the ports is controlled by a mapping register 1154 at the per port DRR scheduler 1150 .
  • the DRR scheduler 1150 may be switched to a different channel only on a packet boundary. This switching ensures that there will be no packet interleaving from different transmit DMA channels 1120 within a packet transfer.
  • the DRR scheduler 1150 first acquires an available buffer for that port at step 1620 . If a buffer is available, a memory request is then issued at step 1622 .
  • a buffer tag identifying the buffer is provided at step 1624 to enable reordering of potentially out of order read returns.
  • the buffer tag is linked to the request acknowledgement identifier for the packet at step 1626 .
  • the network ports are serviced in a round robin order via the round robin scheduler 1152 at step 1630 . Requests from different ports may be interleaved.
  • the transmit data requests and the prefetch request share the same datapath to the memory system 130 .
  • the returned acknowledgement is first processed at step 1640 to determine whether the returned acknowledgement is a prefetch or a transmit data.
  • the transmit DMA module 812 hardware also supports checksum offload and CRC-32c offload.
  • the transmit FIFO/Reorder Logic module 814 includes checksum and CRC-32c functionality.
  • the transfer of the packet is considered to be completed and the state of the transmit DMA channel 1120 is updated via the associated status register at step 1650 .
  • a 12-bit counter is initialized to zero and tracks transmitted packets.
  • Software polls the status registers to determine the status. Alternately, software may mark a packet so that an interrupt (if enabled) may be issued after the transmission of the packet.
  • the network interface unit 110 may update the state of the DMA channel to a predefined mailbox after transmitting a marked packet.
  • the transmit and receive portions of the network interface unit 110 fairly share the same memory system interface 832 .
  • a flow chart of the operation of the port scheduler 1020 is shown. More specifically, because a port may be supporting 1 Gbps or 10 Gbps, a rate based scheduler is provided to ensure no starvation.
  • the port scheduler 1020 only switches port at packet boundary and only schedules a port when the port FIFO has at least one complete packet.
  • the ‘next_queue_in_i’ operation returns the first queue in i if the last queue is reached.
  • the port scheduler 1020 performs a select operation at step 1718 .
  • the port scheduler 1020 performs a loop operation at step 1720 .
  • the select operation 1718 starts by setting i equal to the next queue in i at step 1810 .
  • the port scheduler 1020 sets C_i equal to the minimum value of C_i plus W_i or W_i at step 1812 .
  • the port scheduler 1020 determines whether the queue i is not eligible for scheduling at step 1814 . Queue i is not eligible if C_i is less than or equal to zero. If queue i is not eligible, then the operation returns to step 1810 . If queue i is eligible, then operation proceeds to the loop operation of step 1720 .
  • the loop operation 1720 starts by processing one packet from queue i at step 1910 .
  • the port scheduler 1020 decrements C_i at step 1912 .
  • the port scheduler 1020 determines whether queue i is not eligible for scheduling at step 1914 . Queue i is not eligible for scheduling if C_i is less than or equal to zero. If queue i is not eligible for scheduling, then the operation returns to the select operation of step 1718 . If queue i is eligible for scheduling, then the operation returns to step 1910 to process another packet from queue i.
  • C_i is decremented by the number of 16 B blocks the packet contains. A partial block is considered as one complete block.
  • the port DRR weight register programs the weight of a corresponding port.
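The deficit round robin behavior described in the select and loop operations above can be sketched in C as follows. The structure layout, the function-pointer queue hooks, and the fixed two-port configuration are assumptions for illustration; only the credit update C_i = min(C_i + W_i, W_i), the eligibility test, and the 16-byte block accounting are taken from the description.

```c
#include <stddef.h>

#define NUM_PORTS   2          /* e.g., two MAC ports */
#define BLOCK_BYTES 16         /* C_i is decremented in 16-byte blocks */

struct drr_queue {
    int weight;                              /* W_i, from the port DRR weight register */
    int credit;                              /* C_i */
    int (*has_packet)(int port);             /* at least one complete packet queued? */
    size_t (*dequeue_packet)(int port);      /* process one packet, return its length */
};

/* One scheduling pass over the per-port queues. */
static void drr_schedule(struct drr_queue q[NUM_PORTS])
{
    static int i = NUM_PORTS - 1;
    int scanned = 0;

    /* Select (steps 1810-1814): advance to the next queue, refresh its credit
     * with C_i = min(C_i + W_i, W_i), and skip queues that are not eligible
     * or that have no complete packet available. */
    do {
        if (++scanned > 2 * NUM_PORTS)
            return;                          /* nothing eligible this pass */
        i = (i + 1) % NUM_PORTS;
        int refreshed = q[i].credit + q[i].weight;
        q[i].credit = refreshed < q[i].weight ? refreshed : q[i].weight;
    } while (q[i].credit <= 0 || !q[i].has_packet(i));

    /* Loop (steps 1910-1914): process packets from queue i until its credit
     * is exhausted; a partial 16-byte block counts as a complete block. */
    while (q[i].credit > 0 && q[i].has_packet(i)) {
        size_t len = q[i].dequeue_packet(i);
        q[i].credit -= (int)((len + BLOCK_BYTES - 1) / BLOCK_BYTES);
    }
}
```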
  • a goal of congestion management (such as the use of a weighted random early discard module 2000 ) is to prevent overloading of the processing entity 120 and to fence off potential attacks that deplete system resources associated with network interfaces.
  • the control mechanism for providing congestion management is to discard packets randomly.
  • the weighted random early discard module 2000 provides the benefit of de-synchronizing the TCP slow start behavior and achieving an overall improvement in throughput.
  • the resources of a receive DMA channel are captured by two states: the receive completion ring queue length and the number of posted buffers.
  • a DMA channel is considered congested if there are a lot of packets queued up but not enough buffers posted to the DMA channel.
  • the receive block ring queue length is scaled up by a constant, S, because a block may store more than one packet.
  • a WRED function is characterized by two parameters, threshold and window. If Q is larger than the threshold, then the packet is subjected to a WRED discard operation. The window value determines the range of Q above the threshold where the probabilistic discard is applicable. If Q is larger than (Threshold+Window), the packet is always discarded. Because it is desirable to protect existing connections and fence off potential SYN attacks, TCP SYN packets are subject to a different (Threshold, Window) pair.
  • the operation of the WRED module 2000 starts by initializing a plurality of values at step 2008 .
  • the WRED module 2000 sets a value x equal to Q − T at step 2010 .
  • the WRED module 2000 determines whether x is less than 0 at step 2012 . If x is less than zero, then the operation of the module exits. If x is not less than zero, then the WRED module 2000 obtains a random number between 0 and 1 at step 2014 .
  • the WRED module 2000 determines whether an integer value of R*W is less than x at step 2016 . If the integer value is less than x, then the packet is discarded at step 2018 . If the value is not less than x, then the operation of the module completes.
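A compact sketch of this discard decision is shown below; the signature and the fixed-point treatment of the random number are assumptions, and the random value is taken as a 16-bit input (e.g., from the LFSR sketched after this decision).

```c
#include <stdint.h>

/*
 * WRED discard decision as outlined in steps 2008-2018.
 * q         : scaled congestion measure for the channel (Q)
 * threshold : T, below which no discard occurs
 * window    : W, range above T over which probabilistic discard applies
 * rand16    : a uniform 16-bit random value treated as a fraction in [0, 1)
 * Returns nonzero if the packet should be discarded.
 */
static int wred_discard(int32_t q, int32_t threshold, int32_t window,
                        uint16_t rand16)
{
    int32_t x = q - threshold;                                   /* step 2010 */
    if (x < 0)
        return 0;                                                /* not congested */
    /* Integer part of R * W, with R a fixed-point fraction in [0, 1). */
    int32_t scaled = (int32_t)(((uint32_t)rand16 * (uint32_t)window) >> 16);
    return scaled < x;                                           /* steps 2016-2018 */
}
```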
  • the random number is implemented with a 16 bit linear feedback shift register (LFSR) with a polynomial such as x^16 + x^5 + x^3 + x^2 + 1.
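The following is one common Fibonacci arrangement of that 16-bit LFSR; the tap ordering and shift direction are assumptions, since the hardware register layout is not specified here. The register must be seeded with a nonzero value.

```c
#include <stdint.h>

/* 16-bit Fibonacci LFSR for the polynomial x^16 + x^5 + x^3 + x^2 + 1. */
static uint16_t lfsr16_next(uint16_t lfsr)
{
    /* Taps at exponents 16, 5, 3 and 2; exponent k maps to register bit k-1. */
    uint16_t fb = (uint16_t)(((lfsr >> 15) ^ (lfsr >> 4) ^
                              (lfsr >> 2) ^ (lfsr >> 1)) & 1u);
    return (uint16_t)((lfsr << 1) | fb);
}
```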
  • the network interface unit 110 provides performance based on parallelism, selective offloading of data movement and pipelined usage of an I/O interface.
  • the network interface unit 110 selectively uses direct virtual memory access (DVMA) and physical DMA models.
  • the network interface unit 110 provides partitionable control and data path (via, e.g., hypervisor partitions).
  • the network interface unit 110 provides packet classification for partitions, services and flow identification.
  • the network interface unit 110 is multi-ported for multi-homing, blade architectures and look aside applications.
  • the network interface unit 110 provides receive and transmit data movement profiles as described below. More specifically, the receive data movement profile provides that DMA writes are performed in up to 512 byte posted write transactions, that there are a plurality of pipelined write transactions per DMA channel, that the total number of pipelined write transactions is determined based upon I/O and memory latency characteristics, that the receive DMA write PCI-Express transactions have byte granularity, and that most DMA writes are initiated with relaxed ordering.
  • the read data movement profile provides for a plurality of pipelined DMA read requests per DMA channel, that the total number of pipelined DMA read requests across channels is determined based upon I/O and memory latency characteristics, that each transmit DMA read request can be up to 2 K bytes, that the network interface unit 110 tries to request an entire packet or 2 K bytes, whichever is smaller, that the DMA read completions can be partial, but in order for a given request, that the network interface unit 110 handles interleaved DMA read completions for outstanding requests, and that the network interface unit 110 preserves packet ordering per DMA channel despite request or completion reordering. It will be appreciated that any of the data movement profiles may be adjusted based upon the I/O and memory latency characteristics associated within the network system.
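As a hedged sketch of the transmit read profile, the fragment below breaks a packet into pipelined read requests of at most 2 K bytes each; the function names and the printf stand-in for posting a MEM RD request are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_READ_REQ 2048u   /* each transmit DMA read request can be up to 2 KB */

/* Stand-in for posting one pipelined MEM RD request on the I/O interface. */
static void issue_read_request(uint64_t addr, uint32_t len, uint16_t tag)
{
    printf("MEM RD tag=%u addr=0x%llx len=%u\n",
           tag, (unsigned long long)addr, len);
}

/*
 * Request the whole packet if it fits in one read, otherwise a sequence of
 * requests of at most MAX_READ_REQ bytes. Completions may return partial and
 * interleaved; the tag lets the reorder logic reassemble them in request order.
 */
static uint16_t request_packet(uint64_t addr, uint32_t pkt_len, uint16_t next_tag)
{
    while (pkt_len > 0) {
        uint32_t chunk = pkt_len < MAX_READ_REQ ? pkt_len : MAX_READ_REQ;
        issue_read_request(addr, chunk, next_tag++);
        addr += chunk;
        pkt_len -= chunk;
    }
    return next_tag;   /* next free transaction tag */
}
```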
  • DMA channels which include both receive DMA channels 1010 and transmit DMA channels 1120 , are the basic constructs for queuing, and for enabling parallelism in servicing network interface units 110 from different processing entities 120 .
  • DMA channels are serviced independently, thereby avoiding the overhead of mutual exclusion when managing transmit and receive queues.
  • receive zero copy (i.e., TCP reassembly) relies on translation tables; translation tables are not considered separate channels.
  • the transmit DMA channels 1120 and receive DMA channels 1010 each include respective kick registers which are used via PIO posted writes to update network interface units 110 regarding how far the hardware may advance on each ring.
  • Completion registers analogously indicate to the software how far the hardware has advanced, while avoiding descriptor writebacks.
  • All PIO registers associated with the operation of a DMA channel are separable into pages.
  • the DMA channels may be managed by their own partitions.
  • the PIO registers, and thus the DMA channels, are groupable so that an arbitrary ensemble of DMA channels can be placed in a single partition.
  • Both the transmit DMA channels 1120 and the receive DMA channels 1010 cache at least a cache line worth of fetched descriptors to minimize descriptor memory accesses. Similarly, completion updates are batched to fill a cache line whenever possible. Every DMA channel includes a corresponding polling register.
  • the polling register reflects the state of the channel (not empty completion) so that software can determine the state of the channel with a programmed I/O read operation to the polling register.
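The kick, completion, and polling registers described above might be used from software roughly as sketched below; the register block layout and field names are assumptions, not the device's actual programming interface.

```c
#include <stdint.h>

/* Hypothetical per-channel register block; offsets and widths are assumptions. */
struct dma_chan_regs {
    volatile uint64_t kick;       /* software writes its new ring tail index here */
    volatile uint64_t completion; /* hardware-advanced completion index */
    volatile uint64_t polling;    /* reflects completion queue depth (not empty) */
};

/* Tell the hardware how far it may advance on the descriptor ring. */
static void post_descriptors(struct dma_chan_regs *regs, uint64_t new_tail)
{
    regs->kick = new_tail;        /* PIO posted write */
}

/* Poll for completed work without taking an interrupt. */
static int channel_has_completions(struct dma_chan_regs *regs)
{
    return regs->polling != 0;    /* PIO read of the polling register */
}
```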
  • a receive DMA channel 1010 includes a receive descriptor ring 2110 and a receive completion ring 2112 .
  • the receive descriptor ring 2110 holds free buffer pointers to blocks of buffers of pre-defined size, typically an operating system page size or a multiple of an operating system page size.
  • Buffer consumption granularity discriminates packet lengths based on three ranges, small, large or jumbo, which are defined by SMALL_PACKET_SIZE, LARGE_PACKET_SIZE, JUMBO_PACKET_SIZE elements, respectively.
  • with the small packet length range, the length of the packet is less than the value defined by the SMALL_PACKET_SIZE element; with the large packet length range, the length of the packet is greater than the value defined by the SMALL_PACKET_SIZE element and less than or equal to the value defined by the LARGE_PACKET_SIZE element; and, with the jumbo packet length range, the length of the packet is greater than the value defined by the LARGE_PACKET_SIZE element and less than or equal to the value defined by the JUMBO_PACKET_SIZE element.
  • the receive DMA channel 1010 uses three free buffer pointers cached from its descriptor ring, one buffer is carved up for small packets, another buffer for large packets, and a third buffer for jumbo packets.
  • the PACKET_SIZE thresholds are coarsely programmable per channel and determine the number of packets per buffer and the fixed receive buffer sub-divisions where packets may start.
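The length-range discrimination above can be expressed as a small helper; the enum and structure names are assumptions, and the boundary conditions follow the ranges as described.

```c
#include <stddef.h>

enum rx_buf_class { RX_BUF_SMALL, RX_BUF_LARGE, RX_BUF_JUMBO };

struct rx_size_thresholds {
    size_t small_packet_size;   /* SMALL_PACKET_SIZE */
    size_t large_packet_size;   /* LARGE_PACKET_SIZE */
    size_t jumbo_packet_size;   /* JUMBO_PACKET_SIZE */
};

/*
 * Map a packet length onto one of the three cached free buffers.
 * Lengths above JUMBO_PACKET_SIZE would be rejected elsewhere.
 */
static enum rx_buf_class classify_rx_length(const struct rx_size_thresholds *t,
                                            size_t len)
{
    if (len < t->small_packet_size)
        return RX_BUF_SMALL;
    if (len <= t->large_packet_size)
        return RX_BUF_LARGE;
    return RX_BUF_JUMBO;
}
```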
  • the respective packet pointers are posted to the channel's receive completion ring 2112 .
  • the receive completion ring 2112 therefore defines the order of packet arrival for the receive DMA channel 1010 corresponding to the completion ring. Jumbo packets may exceed the buffer size by spilling over into a second buffer. Two pointers per packet are posted to the receive completion ring 2112 in the case of spillover.
  • each receive DMA channel context includes a plurality of elements. More specifically, each receive DMA channel includes a buffer size element; a SMALL_PACKET_SIZE element; a LARGE_PACKET_SIZE element; a JUMBO_PACKET_SIZE element; a receive descriptor ring start pointer element; a receive descriptor ring size element; a receive descriptor ring head pointer element; a receive kick register element; a receive descriptor ring tail pointer element; a receive completion ring start pointer element; a receive completion ring size element; a receive completion ring head pointer element; a receive completion tail pointer element; a receive buffer pointer for SMALL element; a receive Buffer pointer for LARGE element; a receive Polling register element (reflects completion ring queue depth, i.e. the distance between completion head and tail register values); and WRED register elements (thresholds, discard statistics).
  • the completion ring size is programmed by software to be larger than the descriptor ring size. To accommodate small packet workloads, the ratio between the ring sizes is at least (Buffer size/SMALL_PACKET_SIZE).
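For orientation, the receive DMA channel context listed above might be captured in a structure like the following; the field widths and names are assumptions for illustration only.

```c
#include <stdint.h>

/* Receive DMA channel context mirroring the elements listed above. */
struct rx_dma_channel_ctx {
    uint32_t buffer_size;
    uint32_t small_packet_size;
    uint32_t large_packet_size;
    uint32_t jumbo_packet_size;

    uint64_t desc_ring_start;   /* receive descriptor ring start pointer */
    uint32_t desc_ring_size;
    uint32_t desc_ring_head;
    uint64_t kick;              /* receive kick register */
    uint32_t desc_ring_tail;

    uint64_t comp_ring_start;   /* receive completion ring start pointer */
    uint32_t comp_ring_size;
    uint32_t comp_ring_head;
    uint32_t comp_ring_tail;

    uint64_t buf_ptr_small;     /* receive buffer pointer for SMALL packets */
    uint64_t buf_ptr_large;     /* receive buffer pointer for LARGE packets */

    uint64_t polling;           /* completion ring queue depth (head-tail distance) */
    uint32_t wred_threshold;    /* WRED registers */
    uint32_t wred_window;
    uint64_t wred_discards;     /* discard statistics */
};
```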
  • a transmit DMA channel 1120 includes a single transmit descriptor ring 2210 holding buffer pointers for new packets to be transmitted. Each transmit DMA channel 1120 is associated via register programming with one of the MAC ports, or one trunk when link aggregation is used. Multiple DMA channels may be associated with a single MAC port. Transmit gather is supported, i.e., a packet may span an arbitrary number of buffers.
  • a transmit operation executes in open loop mode (i.e., with no interrupts) whenever possible. Complete descriptor removal is scheduled at the end of new packet queuing, or periodic interrupts requested at enqueuing time, but there is no need to generate an interrupt for every packet completion or to service the transmit process in any form for the transmit process to make progress.
  • each transmit DMA channel context includes a plurality of elements. More specifically, each transmit DMA channel context includes a transmit descriptor ring start pointer element; a transmit descriptor ring size element; a transmit descriptor ring head pointer element; a transmit kick register element; a transmit descriptor ring tail pointer element; a transmit completion register element; and, a transmit Polling register element (reflects descriptor ring queue depth, i.e. Distance between Head and Tail register values).
  • the descriptor structures defining the transmit DMA channels 1120 are very simple so that the descriptor structures can efficiently correspond to the DVMA structures without unnecessary input output memory management unit (IOMMU) thrashing for network interface units.
  • IOMMU input output memory management unit
  • the memory accesses proceed directly to the memory system 130 (after translating virtual addresses to physical addresses within the four port network interface unit) without going through any bridge or IOMMU.
  • Memory accesses proceeding directly to a memory system 130 allows superior latency and additional I/O bandwidth, as networking does not compete with any other I/O.
  • a reorder function correlates DMA memory completions, and serializes some operations whenever necessary (either via descriptor update after DMA WR, or polling register update after DMA WR).
  • the packet classification hierarchy which is provided by the packet classifier module 818 provides several receive packet classification primitives. These receive packet classification primitives include virtualization, traffic spreading, perfect ternary matches, and imperfect and perfect flow matching.
  • the virtualization packet classification primitive determines the partition to be used for a given receive packet.
  • Virtualization allows multiple partitions to co-exist within a given network interface unit 110 or even a given port within a network interface unit 110 while keeping strict separation of DMA channels and their corresponding processing resources.
  • the shared parts of the network interface unit 110 are limited to the cable connected to the network interface unit 110 , the MAC module 830 , and the receive packet FIFOs 816 servicing the port.
  • the cable, the MAC module 830 and the receive packet FIFOs 816 provide continuous packet service (i.e., no stalls or blocking).
  • Virtualization can be based on VLANs, MAC addresses, or service addresses such as IP addresses or TCP/UDP ports. Virtualization essentially selects a group of receive DMA channels 1010 as the set of channels where a packet may end up regardless of all other traffic spreading and classification criteria.
  • the traffic spreading classification primitive is an efficient way of separating traffic statically into multiple queues. Traffic spreading classification preserves affinity as long as the parser is sophisticated enough to ignore all mutable header fields.
  • the implementation of traffic spreading is based on pre-defined packet classes and a hash function applied over a programmable set of header fields. The hash function can be tweaked by programming its initial value. The traffic spreading function can consider or ignore the ingress port, enabling different or identical spreading patterns for different ports.
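A minimal sketch of such traffic spreading is given below. The CRC-32C-style mixing function, the byte-array view of the selected header fields, and the modulo mapping onto channels are assumptions; only the programmable initial value and the hash-over-selected-fields structure come from the description.

```c
#include <stdint.h>
#include <stddef.h>

/* Hash a programmable set of header fields, starting from a programmable value. */
static uint32_t spread_hash(const uint8_t *fields, size_t len, uint32_t init)
{
    uint32_t h = init;
    for (size_t i = 0; i < len; i++) {
        h ^= fields[i];
        for (int b = 0; b < 8; b++)                       /* CRC-32C style mixing */
            h = (h >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(h & 1));
    }
    return h;
}

/* Pick one of n_channels receive DMA channels within the selected group. */
static unsigned spread_to_channel(uint32_t hash, unsigned n_channels)
{
    return hash % n_channels;
}
```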
  • the perfect ternary match classification primitive is the ultimate classification, where the packet can be associated with flows, or with wild-carded entries representing services, addresses, virtualized partitions, etc.
  • the implementation of perfect match is based on a TCAM match, and is therefore limited in depth.
  • the TCAM value is generally intended to match layer 3 and layer 4 fields for Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6), and also bind layer 2 virtualization to layers 3 and 4 by keying group numbers in addition to IP headers and transport headers.
  • the flow matching classification primitive is the association of packets to pre-inserted flows within a large hash table.
  • the hash entries can be used for perfect or imperfect binary matches, where a perfect match consumes four times the space of an imperfect match. Therefore, in general, there is a low but finite probability of having a false match, and also of not being able to insert the desired flow for a specific packet.
  • Flow matching is used for maintaining flow associations to DMA channels for a large number of connections (for example for operating system style hardware classification) as well as zero copy flows.
  • the implementation of flow matching is based on hashing into the hash table 950 . In the case of zero copy flows, regardless of the match type, the translation table stage does again a full 5-tuple comparison thus eliminating the risk of false matches. “Don't care” bits for flow matching are masked by a class filter before the hashing function, and are an attribute of the class, rather than the individual entry.
  • Populating the hash table 950 is optional and software functions with scenarios where the hash table 950 is or is not populated. Furthermore, the hash table 950 is partitionable into a plurality of separate tables (e.g., four separate tables), so that separate partitions can manage their own flows or connections directly without having to serialize access or invoke hypervisor calls in flow setup.
  • TCAM matches and flow matches are largely independent, except that the TCAM match virtualization determines which hash table partition to search.
  • the TCAM match virtualization results in some serialization between the searches.
  • the TCAM and flow matches are merged, allowing TCAM entries to override or defer to flow matches.
  • the flow match key is not controllable by the TCAM match, and its construction and hash computation may be overlapped with the TCAM search.
  • the ingress port is considered part of all matches and tables so that different policies can be applied across different ports.
  • the flow match and the traffic spreading function use the same key into the hash function. Key masking and assembly is programmable.
  • the tables have various sizes and roles.
  • the MAC table virtualizes based on the MAC Address index provided by the MAC blocks (e.g., 4 bits) and the ingress port number (e.g., 2 bits).
  • the output of the MAC table is a group # (e.g., 4 bits) and a MAC_Dominates signal to control how to merge this result with the VLAN table result.
  • the VLAN table virtualizes based on VLAN IDs (e.g., 12 bits) and a VLAN_Dominates signal to control how to merge this result.
  • the group tables include 16 sets of receive DMA channels grouped for virtualization. The receive DMA channels are programmed into one of the group tables. All 32 entries of a group table are filled with valid receive DMA channel numbers. Receive DMA channels are written more than once per group table if necessary to fill the table.
  • Both transmit and receive functions operate as store and forward in and out of the corresponding FIFO.
  • receive packet FIFOs arbitrate for DMA channel scheduling on packet boundaries.
  • the packet at the head of a given receive packet FIFO determines the DMA channel number to use for the packet.
  • Translation table lookups represent the longest latency step of ingress processing.
  • the pipeline design assumes that every packet goes through translation at ingress, and overlaps the translation with data flowing into the Receive packet FIFO.
  • Some receive control information is stored in the receive buffers along with the receive packets while other fields are deposited into the descriptors themselves. Information consumed by the driver goes to descriptors, and information needed above the driver stays in the buffer.
  • receive buffers accommodate a number of reserved locations per buffer to be used by software. The number is programmable per channel and up to 86 bytes.
  • Receive packets using TCP re-assembly derive their DMA addresses from the translation result in the form of (address, length) pairs with arbitrary byte granularity.
  • the transmit reorder module 852 produces the transmit FIFO address location for writing memory read (MEM RD) completions based on the transaction ID, address, byte count, and byte enables of the completion.
  • a packet may require more than one request and therefore the packet may consume multiple transaction IDs.
  • the transmit reorder module 852 handles as many transaction IDs as the number of pipelined MEM RD requests issued by the network interface unit 110 . Completions are of arbitrary size up to Max_Payload_Size for the PCI-Express receive direction.
  • the transmit reorder module 852 therefore manages the re-assembly of completions at insertion time into Transmit FIFOs 850 , and in the process of doing so enforces a network packet order per MAC/DMA channel that is identical to the memory read request order for the transmit DMA channel 1120 .
  • the memory read request order is derived from the packet descriptor order of each transmit DMA channel 1120 , with the freedom to schedule across transmit DMA channels 1120 with no order constraints.
  • the transmit reorder module 852 also determines when a given packet is completely written into the transmit FIFO 850 by determining that all the packet requests are completely satisfied. For simplicity purposes the request order is enforced within a transmit FIFO 850 even for requests from different transmit DMA channels 1120 .
  • TCP checksum insertion is performed by maintaining partial checksums per packet in the transmit reorder module 852 and using the additive property of the 1's complement checksum to overcome completion interleaving.
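The additive property being relied on is illustrated below: partial one's complement sums of individual completions can be folded together regardless of arrival order, assuming here that each piece is even-length-aligned (odd-offset pieces would additionally need byte swapping). The function names are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* One's complement sum of a buffer, folded to 16 bits (no final inversion). */
static uint16_t csum_partial(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)sum;
}

/*
 * Combine two partial sums. Because one's complement addition is associative
 * and commutative, completions that land interleaved and out of order can
 * each be folded into the running packet checksum as they arrive.
 */
static uint16_t csum_combine(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + b;
    return (uint16_t)((sum & 0xFFFF) + (sum >> 16));
}
```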
  • the reorder module 852 is simplified because MEM RD completions are of fixed size, and possibly a smaller number of outstanding requests are pipelined.
  • the data buffering includes a plurality of discard policies. More specifically, the discard policy for a transmit operation is that there is no congestive discard in the transmit data path because the four port network interface unit only requests packets from memory that fit in the corresponding transmit FIFO.
  • the discard policy for a receive operation is that congestive discard occurs under several scenarios at the boundary between a receive FIFO module 816 and a receive DMA channel 1010 . Accordingly, the receive FIFO module 816 is always serviced, whether by the receive DMA channel 1010 corresponding to the packet at the head of the receive FIFO module 816 or by discarding from the head of the receive FIFO module 816 . Packets are never backpressured at the receive FIFO module 816 . All discard operations are on packet boundaries.
  • a DMA congestion scenario where no buffer is posted to the descriptor ring at the time the packet is at the head of its receive FIFO module 816 may trigger packet discard.
  • a DMA disabled scenario where a receive DMA channel 1010 is disabled at the time the packet is at the head of its receive FIFO module 816 may trigger packet discard.
  • a random early discard (RED) scenario, which is implemented per receive DMA channel 1010 , occurs when the queue length requires packet discard and a randomizer determines that the next packet is the victim.
  • a classifier triggered scenario when the packet classifier 818 indicates a packet is to be dropped; the packet is dropped from the head of the receive FIFO module 816 .
  • the classification result which is carried by the receive control FIFO 862 includes the packet drop indication.
  • a late discard scenario occurs in cases of congestion in the middle of the packet, or of packet malfunction (length or CRC based) signaled by the MAC at the end of a packet. In these cases, packet discard is marked on the FIFO ingress side, possibly by rewriting the first receive packet FIFO 860 entry with a special marker sequence.
  • the design may also reclaim most of the offending packet's FIFO locations used so far by rewinding the ingress pointer.
  • Packet drop at the receive packet FIFO tail also occurs when the receive packet FIFO 860 fills. For example, for lookup congestion, if the packet classifier 818 fails to keep up with averaged packet rate (averaged by the receive packet FIFO depth), the receive control FIFO 862 is updated with results at a slower rate than the receive packet FIFO 860 . Should the receive packet FIFO fill, the affected packet is dropped on the FIFO ingress side by reclaiming the locations used so far.
  • the hypervisor 312 adds a level of indirection to the physical address space by introducing real addresses. Real addresses are unique per partition, but only physical addresses are system unique. There are two types of hypervisor hooks for the address usage of network interface units. First, any slave access to network interface unit registers intended to be directly manipulated by software in the partition without hypervisor 312 (or equivalent) coordination is grouped into pages that the network system memory management unit can map separately. Second, any DMA access originated from network interface units applies an address relocation mapping based on a per partition offset and range limit. The offset and limit values are programmable through yet another partition different from the partition that posts addresses to the DMA channel.
  • the level of indirection can be used in a hypervisor environment to achieve full partition isolation. This level of indirection can also be used in non-partitioned environments to avoid having to serialize access to shared resources in the data path. Providing a level of indirection is valuable to enable scalable performance.
  • the network interface unit 110 includes a plurality of register groups. These register groups include a MAC/PCS register group, a classification register group, a virtualized register group, transmit and receive DMA register groups, a PCI configuration space register group, an interrupt status and control register group, a partition control register group, and an additional control register group.
  • register groups include a MAC/PCS register group, a classification register group, a virtualized register group, transmit and receive DMA register groups, a PCI configuration space register group, an interrupt status and control register group, a partition control register group, and an additional control register group.
  • the register structure and event definition relies on separating datapath interrupt events so that the events can be mapped univocally to strands or processors, regardless of whether the processors enable interrupts, poll, or yield on an event register load.
  • the actual event signaling for network interface units 110 is based on message signaled interrupts (MSIs) sent to different addresses per target.
  • the event signaling is done towards a set of interrupt registers placed close to the processor core.
  • the interface unit device driver 420 assists an operating system 430 with throughput, connection setup and teardown. While higher bandwidth data rates may saturate the network stacks on a single processor, the network system helps to achieve throughput networking by distributing the processing.
  • the network system device driver 420 programs the packet classifier 818 for identification of flows or connections to the appropriate processor entities 120 .
  • the network interface unit packet classifier 818 is programmed to place well defined flows on the appropriate DMA channel.
  • a model of a flow can occur in a single stage or multiple stages, so that different processing entities 120 can service different receive channels.
  • a single stage is when a packet is received, is classified as a flow, and sent to the software stack for processing without further context switching.
  • In the multiple stage case, packets which are classified as flows are queued, and then some other thread or operating system entity is informed to process the packets at a later time.
  • the operating system 430 creates a queue instance for each processor plus a thread with affinity to that processor entity 120 .
  • packet ordering is maintained on receive flows. Also, maintaining affinity of receive and transmit packets that belong to the same connection enables better network system performance by providing the same context, no processor cross-calls and keeps the caches “warm”.
  • the network system software stack 410 migrates flows to ensure that receive and transmit affinity is maintained. More specifically, the network system software stack 410 migrates receive flows by programming flow tables. The network system software stack 410 migrates transmit flows by computing the same hash value for a transmit as the network interface unit 110 .
  • the connection to a processor affinity is controlled by the operating system 430 , with a network interface unit 110 and the network interface unit device driver 420 following suit.
  • the operating system 430 presently associates each flow with the processing entity 120 that creates the flow either at “open” or at “accept” time.
  • the flow to DMA channel mapping of a connection is passed to the network interface unit 110 and associated network system software and stored in the hash tables 950 for use by the receive packet classifier 818 .
  • the other alternative is based on a general fanout technique defined by the operating system 430 and does not use a flow table entry.
  • the network interface unit device driver 420 can be a multi-threaded driver with single thread access to data structures.
  • the network system software stack 410 exploits the capabilities of the network interface unit 110 .
  • the packet classifier 818 is optionally programmed to take into account the ingress port and VLAN tag of the packet. This programming allows multiple network interface units 110 to be under the network system software stack 410 .
  • Referring to FIG. 24 , a flow diagram of a receive flow between a network interface unit and a network system software stack 410 is shown.
  • the network interface unit 110 is programmed to provide hash based receive packets spreading which sends different IP packets to different DMA channels.
  • the network interface unit packet header parsing uses source and destination IP addresses, and the TCP port numbers, (e.g., TCP 5-tuples). These fields along with the port and VLAN uniquely identify a flow. Hashing is one of many ways to spread load.
  • When the network interface unit 110 is functioning in an interrupt model and a packet is received, the network interface unit 110 generates an interrupt, subject to interrupt coalescing criteria. Interrupts are used to indicate to a processor entity 120 that there are packets ready for processing. In the polling mechanism, reads across the I/O bus 112 are performed to determine whether there are packets to be processed.
  • the network interface unit 110 includes two modes for processing the received packets.
  • a standard interrupt based mode is controlled via the device driver 420 , and a second, polled based mode is controlled by the ULP.
  • the ULP (in this case the operating system 430 ) exploits the appropriate mode to meet certain performance goals. Flows that have been classified as exact matches by the combination of the network interface unit packet classifier 818 and the device driver 420 are sent directly to the operating system 430 within the receive interrupt context or queued and pulled via polled queue threads. In either case, the network interface unit packet classifier 818 helps map particular flows to the same processing entity 120 .
  • An interrupt coalescing feature per receive descriptor can provide multiple packet processing and chaining.
  • the device driver 420 registers the interrupt service routine with the operating system 430 which then tries to spread the processing to different processing entities 120 .
  • the device driver 420 configures the network interface unit 110 to exploit the DMA channels, translation table, buffer management, and the packet classifier.
  • the polled mode module includes interfaces between the ULP and the network interface unit 110 .
  • the interface to the network interface unit device driver 420 is via either a device driver specific interface or via an operating system framework.
  • the device driver 420 uses a standard operating system interface.
  • the network interface unit 110 places a number of packets into each page sized buffer by dividing the buffer into multiple packet buffers. Depending on packet size distribution, buffers may be returned in a different order than they were placed on the descriptor ring. Descriptor and completion ring processing is handled in the interrupt handler or invoked from the thread model.
  • a flow diagram of a transmit flow between a network interface unit and a network system software stack 410 is shown.
  • When the device driver 420 is functioning at the transmit side, the device driver 420 provides one of two approaches: an IP queue fanout approach and a hash table approach.
  • the IP queue fanout approach uses a fanout element to potentially help provide better affinity between transmit and receive side flow processing. If a network function uses the same hash as the network interface unit packet classifier 818 , then the operating system 430 distributes “open” or “accept” connections to the same queue as the network interface unit packet classifier 818 .
  • the fanout approach provides processor affinity to flows/connections without the hash table. All incoming flows classified by the network interface unit packet classifier 818 come to the operating system 430 on the same processing entity 120 . So, the accept connection function uses the same queue and the “open” connection function uses the hash algorithm to fan the packet out to the right queue. Thus, the queue fanout approach enables the network interface unit device driver 420 and the operating system 430 to exploit the affinity of a flow/connection to a particular processing entity 120 .
  • the hash table approach uses a mechanism for load balancing the IP packets to the appropriate processing entity 120 based on transmit affinity. If the operating system 430 wants to drive the affinity from a transmit perspective, then the operating system 430 exploits the hash table interface provided by the network interface unit 110 .
  • the application sourcing data runs on a particular processing entity 120 (e.g., CPU #n).
  • the hash table 950 provides the capabilities to manage a large number (e.g., four million) of flows. Each entry in the hash table 950 allows a flow to have a well defined processing entity 120 plus some pointer, e.g., a pointer to the connection structure.
  • the hash table approach provides interfaces which are defined between the operating system 430 and the device driver 420 to program the hash table 950 .
  • the entries in the hash table 950 are updated according to the processing entity 120 on which the connection is being initiated or terminated as the case may be. Updating the hash table allows subsequent packets for that flow to come to the same processing entity 120 .
  • the entries in the flow are inserted before the packet is sent on the wire (i.e., sent onto the network).
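A hedged sketch of such a flow table update at connection setup is shown below; the key and entry layouts, the table size, and the simple mixing hash are all hypothetical, intended only to illustrate binding a 5-tuple to a receive DMA channel and processing entity before the first packet is sent on the wire.

```c
#include <stdint.h>

/* Hypothetical flow key and table entry; layouts are illustrative only. */
struct flow_key {
    uint32_t saddr, daddr;     /* IPv4 source/destination addresses */
    uint16_t sport, dport;     /* TCP/UDP ports */
    uint8_t  proto;            /* protocol ID */
};

struct flow_entry {
    struct flow_key key;
    uint8_t  rx_dma_channel;   /* receive DMA channel bound to this flow */
    uint8_t  cpu;              /* processing entity that owns the connection */
    void    *conn;             /* pointer to the connection structure */
    uint8_t  valid;
};

#define FLOW_TABLE_SIZE 4096   /* illustrative; the hash table can be far larger */
static struct flow_entry flow_table[FLOW_TABLE_SIZE];

/* Simple illustrative mix; the real classifier hash is programmable. */
static uint32_t flow_hash(const struct flow_key *k)
{
    return k->saddr ^ (k->daddr * 2654435761u) ^
           ((uint32_t)k->sport << 16 | k->dport) ^ k->proto;
}

/* Insert or update the flow before the first packet is sent on the wire. */
static void flow_table_bind(const struct flow_key *key, uint8_t chan,
                            uint8_t cpu, void *conn)
{
    struct flow_entry *e = &flow_table[flow_hash(key) % FLOW_TABLE_SIZE];
    e->key = *key;
    e->rx_dma_channel = chan;
    e->cpu = cpu;
    e->conn = conn;
    e->valid = 1;
}
```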
  • One feature of the network interface unit 110 on the transmit side is the support for multiple transmit descriptor rings per port, allowing multiple threads to send packets concurrently to the same port and even use some of the queues for qualities of service (QOS) for outbound traffic.
  • a transmit descriptor is associated with a particular VLAN during the configuration of the network interface unit 110 .
  • the network interface unit 110 ensures that a given flow is always associated with the same transmit descriptor ring.
  • the device driver 420 performs the spreading of the flows that come down from the operating system 430 .
  • the device driver 420 includes a map identifying which transmit queues correspond to which physical ports.
  • the device driver approach identifies the transmit descriptor by a hashing algorithm and distributes the packets to different descriptors that are tied to the same port.
  • the attachment on which the packet comes to the device driver 420 (an operating system parameter) is used to identify the port.
  • Flow control is defined for the operating system programming interface. If all transmit descriptors that are tied to the given ports are locked, then the device driver 420 informs the operating system 430 to queue the packets in its queue. This helps in alleviating the lock connection issue associated in a multiprocessing environment.
  • the locks are mainly for preventing the descriptor entries from being used by two separate threads and are desirable to be held for a very short duration.
  • If the operating system 430 wants to fan out the packets to different descriptors, then the operating system 430 has to ensure that the same flow always uses the same transmit descriptor.
  • the operating system 430 provides the port and the appropriate transmit descriptor over which the flow needs to go.
  • the operating system API also adheres to the flow control push back from the device driver 420 in case the transmit descriptors are already in use.
  • the above-discussed embodiments include modules and units that perform certain tasks.
  • the modules and units discussed herein may include hardware modules or software modules.
  • the hardware modules may be implemented within custom circuitry or via some form of programmable logic device.
  • the software modules may include script, batch, or other executable files.
  • the modules may be stored on a machine-readable or computer-readable storage medium such as a disk drive.
  • Storage devices used for storing software modules in accordance with an embodiment of the invention may be magnetic floppy disks, hard disks, or optical discs such as CD-ROMs or CD-Rs, for example.
  • a storage device used for storing firmware or hardware modules in accordance with an embodiment of the invention may also include a semiconductor-based memory, which may be permanently, removably or remotely coupled to a microprocessor/memory system.
  • the modules may be stored within a computer system memory to configure the computer system to perform the functions of the module.
  • Other new and various types of computer-readable storage media may be used to store the modules discussed herein.
  • those skilled in the art will recognize that the separation of functionality into modules and units is for illustrative purposes. Alternative embodiments may merge the functionality of multiple modules or units into a single module or unit or may impose an alternate decomposition of functionality of modules or units. For example, a software module for calling sub-modules may be decomposed so that each sub-module performs its function and passes control directly to another sub-module.

Abstract

A method for addressing system latency within a network system is disclosed. The method includes providing a network interface that includes a plurality of memory access channels, and moving data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to networking and more particularly to hiding system latencies within a throughput network system.
  • 2. Description of the Related Art
  • In known networked computer systems, the network interface functionality is treated and supported as an undifferentiated instance of a general purpose Input Output (I/O) interface. This treatment is because computer systems are optimized for computational functions, and thus networking specific optimizations might not apply to generic I/O scenarios. A generic I/O treatment results in no special provisions being made to favor network workload idiosyncrasies. Known networked computer systems include platform servers, server based appliances and desktop computer systems.
  • Known specialized networking systems, such as switches, routers, remote access network interface units and perimeter security network interface units include internal architectures to support their respective fixed function metrics. In the known architectures, low level packet processing is segregated to separate hardware entities residing outside the general purpose processing system components.
  • The system design tradeoffs associated with networked computer systems, just like many other disciplines, include balancing functional efficiency against generality and modularity. Generality refers to the ability of a system to perform a large number of functional variants, possibly through deployment of different software components into the system or by exposing the system to different external workloads. Modularity refers to the ability to use the system as a subsystem within a wide array of configurations by selectively replacing the type and number of subsystems interfaced.
  • It is desirable to develop networked systems that can provide high functional efficiencies while retaining the attributes of generality and modularity. Networked systems are generally judged by a number of efficiencies relating to network throughput (i.e., the aggregate network data movement ability for a given traffic profile), network latency (i.e., the system contribution to network message latency), packet rate (i.e., the system's upper limit on the number of packets processed per time unit), session rate (i.e., the system's upper limit on creation and removal of network connections or sessions), and networking processing overhead (i.e., the processing cost associated with a given network workload). Different uses of networked systems are more or less sensitive to each of these efficiency aspects. For example, bulk data movement workloads such as disk backup, media streaming and file transfers tend to be sensitive to network throughput, transactional uses, such as web servers, tend to also be sensitive to session rates, and distributed application workloads, such as clustering, tend to be sensitive to latency.
  • Scalability is the ability of a system to increase its performance in proportion to the amount of resources provided to the system, within a certain range. Scalability is another important attribute of networked systems. Scalability underlies many of the limitations of known I/O architectures. On one hand, there is the desirability of being able to augment the capabilities of an existing system over time by adding additional computational resources so that systems always have reasonable room to grow. In this context, it is desirable to architect a system whose network efficiencies improve as processors are added to the system. On the other hand, scalability is also important to improve system performance over time, as subsequent generations of systems deliver more processing resources per unit of cost or unit of size.
  • The networking function, like other I/O functions, resides outside the memory coherency domain of multiprocessor systems. Networking data and control structures are memory based and access memory through host bridges using direct memory access (DMA) semantics. The basic unit of network protocol processing in known networks is a packet. Packets have well defined representations when traversing a wire or network interface, but can have arbitrary representations when they are stored in system memory. Network interfaces, in their simplest forms, are essentially queuing mechanisms between the memory representation and the wire representation of packets.
  • There are a plurality of limitations that affect network efficiencies. For example, the number of queues between a network interface and its system is constrained by a need to preserve packet arrival ordering. Also for example, the number of processors servicing a network interface is constrained by the processors having to coordinate service of shared queues; when using multiple processors, it is difficult to achieve a desired affinity between stateful sessions and processors over time. Also for example, a packet arrival notification is asynchronous (e.g., interrupt driven) and is associated with one processor per network interface. Also for example, the I/O path includes at least one host bridge and generally one or more fanout switches or bridges, thus degrading DMA to longer latency and lower bandwidth than processor memory accesses. Also for example, multiple packet memory representations are simultaneously used at different levels of a packet processing sequence with consequent overhead of transforming representations. Also for example, asynchronous interrupt notifications incur a processing penalty of taking an interrupt. The processing penalty can be disproportionately large considering a worst case interrupt rate.
  • One challenge in network systems relates to hiding system latencies. Application data that is sent over the network typically originates in the main memory of one system and is eventually delivered to the main memory of another system. Network performance of a computer system can significantly degrade if the memory access latency becomes too large. Some operations in a typical network interface implementation are serialized. Examples of these operations include access to a control data structure such as a descriptor ring that is stored in main memory, access to packet data and access to a control data structure such as a completion ring that is stored in main memory. Known I/O architectures and protocols enforce strict ordering of application data.
  • In known computer systems there may be one or more contributors to the system latency. These contributors include memory technologies whose speeds have not kept pace with processor and networking speeds. Also, known computer systems may be based on a non-uniform memory access (NUMA) architecture, which increases latency if the data cannot be held in the memory of the local processor. In known network systems it is often difficult to control where data is stored.
  • Some known high end networking systems which include many processors can make the system latency issue worse. Often, an increase in computational scalability also increases the system memory access latency to unacceptable levels from a network throughput perspective.
  • Many known systems include at least one bridge or switch. This bridge or switch adds hardware latency due to protocol conversion or buffering. Additionally, some bridges or switches require software intervention to function properly.
  • Input output memory management units (IOMMUs) can also generate system latencies. For example, systems that use a virtual memory (VM) model often require virtual address to physical address translation in hardware. The translation tables are limited in size by the hardware. If an entry is evicted from the translation table, the latency penalty can be significant. This issue is typical for networking systems because it is often difficult to control where information is stored.
  • SUMMARY OF THE INVENTION
  • In accordance with the present invention, a network system is set forth which addresses system latency issues by recognizing that a typical network system communicates with many destinations (via, e.g., multiple TCP connections), and that network traffic is bursty (i.e., multiple packets are sent at a time for a given connection). The network system in accordance with the present invention includes an I/O architecture and protocol which allows relaxed ordering. The network system includes a transmit method of requesting multiple packets and reordering interleaved partial completions. The network system includes a receive method that minimizes ordering constraints on the I/O path of the network system.
  • Additionally, the network system includes one or more of a plurality of features which address system latency issues. For example, in one embodiment, the present invention provides a method for moving data for each connection independently and in parallel to and from memory. When one channel stalls due to a memory latency, another channel takes over. Also for example, in one embodiment, multiple packets are moved at a time. Also for example, in one embodiment, a split transaction model is implemented; the split transaction model enforces strict ordering on a given connection only when necessary and otherwise uses relaxed ordering. Also for example, in one embodiment, the network system maximizes IOMMU locality, thereby reducing the probability of a translation table entry being evicted. Also for example, in one embodiment, the network system reduces bridge latency in certain applications.
  • Also for example, in one embodiment, the network system provides dedicated resources for each connection including independent DMA channels, data structures, FIFOs, etc. Also for example, in one embodiment, the network system requests multiple packets from the same and multiple connections; the network system includes multiple receive descriptor updates and receive mailbox completions. Also for example, in one embodiment, the network system includes a reorder mechanism. Also for example, in one embodiment, the network system provides large virtually contiguous portions including virtually contiguous regions for descriptors and large virtually contiguous consecutively posted sub-buffers.
  • In one embodiment, the invention relates to a method for addressing system latency within a network system which includes providing a network interface and moving data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests. The network interface includes a plurality of memory access channels.
  • In another embodiment, the invention relates to a network system which includes a plurality of processing entities, a memory system coupled to the plurality of processing entities and a network interface coupled to the plurality of processing entities and the memory system wherein the network interface includes a plurality of memory access channels. The network interface unit moves data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
  • FIG. 1 shows a block diagram of a multiprocessor network system.
  • FIG. 2 shows a conceptual diagram of the asymmetrical processing functional layering of the present invention.
  • FIG. 3 shows a block diagram of the functional components of the asymmetrical processing architecture.
  • FIG. 4 shows a block diagram of a software view of the network system.
  • FIG. 5A shows a block diagram of the flow of packet data and associated control signals in the network system from the operational perspective of receiving incoming packet data.
  • FIG. 5B shows a block diagram of the flow of packet data and associated control signals in the network system from the operational perspective of transmitting packet data.
  • FIG. 6 shows a block diagram of an implementation of a mailbox image of an interrupt status register in the multiprocessor system.
  • FIG. 7 shows a diagram of the timing sequence for an interrupt service routine utilizing the mailbox configuration.
  • FIG. 8 shows a block diagram of a network interface unit.
  • FIGS. 9A and 9B, generally referred to as FIG. 9, show a block diagram of a receive packet FIFO module and a packet classifier module.
  • FIG. 10 shows a schematic block diagram of a receive DMA module.
  • FIG. 11 shows a schematic block diagram of a transmit DMA module and a transmit FIFO/reorder logic module.
  • FIG. 12 shows a schematic block diagram of an example of a four port network interface unit.
  • FIG. 13 shows a schematic block diagram of an example of a two port network interface unit.
  • FIG. 14 shows a flow chart of the classification of a packet received by the network interface unit.
  • FIG. 15 shows a flow chart of the movement of a packet received by the network interface unit.
  • FIG. 16 shows a flow chart of the movement of a packet transmitted by the network interface unit.
  • FIG. 17 shows a flow chart of the operation of a port scheduler.
  • FIG. 18 shows a flow chart of a select operation of the port scheduler.
  • FIG. 19 shows a flow chart of a loop operation of the port scheduler.
  • FIG. 20 shows a flow chart of the operation of a weighted random early discard module.
  • FIG. 21 shows a diagram of a receive DMA channel's data structures.
  • FIG. 22 shows a diagram of a transmit DMA channel's data structures.
  • FIG. 23 shows a block diagram of the packet classification hierarchy.
  • FIG. 24 shows a flow diagram of a receive flow between a network interface unit and a network system software stack.
  • FIG. 25 shows a flow diagram of a transmit flow between a network interface unit and a network system software stack.
  • DETAILED DESCRIPTION
  • Network System Overview
  • Referring to FIG. 1, a block diagram of a network system 100 is shown. More specifically, the network system 100 includes a network interface unit 110 which is coupled to an interconnect device 112 via an interconnect controller 114. The interconnect controller 114 is also coupled to a peripheral interface module 116. The interconnect device 112 is also coupled to a plurality of processing entities 120 and to memory system 130. The processing entities 120 are coupled to the memory system 130. Each processing entity 120 includes a respective cache 121.
  • The interconnect device 112 may be an input/output (I/O) bus (such as e.g., a PCI Express bus) along with a corresponding bus bridge, a crossbar switch or any other type of interconnect device. In one embodiment, the interconnect device 112 or a bus bridge within the interconnect device 112 may include an I/O memory management unit (IOMMU). The interconnect device 112 may be conceptualized as part of the interconnect in the processor coherency domain. The interconnect device 112 resides on the boundary between the coherent and the non-coherent domains of the network system 100.
  • Each processing entity 120 may be a processor, a group of processors, a processor core, a group of processor cores, a processor thread or a group of processor threads or any combination of processors, processor cores or processor threads. A single processor may include a plurality of processor cores and each processor core may include a plurality of processor threads. Accordingly, a single processor may include a plurality of processing entities 120. Each processing entity 120 also includes a corresponding memory hierarchy. The memory hierarchy includes, e.g., a first level cache (such as cache 121), a second level cache, etc. The memory hierarchy may also include a processor portion of a corresponding non-uniform memory architecture (NUMA) memory system.
  • The memory system 130 may include a plurality of individual memory devices such as a plurality of memory modules. Each individual memory module or a subset of the plurality of individual memory modules may be coupled to a respective processing entity 120. The memory system 130 may also include corresponding memory controllers as well as additional cache levels. So for example, if the processing entities 120 of the network system 100 each include a first level cache, then the memory system 130 might include one or more second level caches. The network system 100 addresses system latency issues by recognizing that a typical network system communicates with many destinations (via, e.g., multiple TCP connections), and that network traffic is bursty (i.e., multiple packets are sent at a time for a given connection). The network system 100 includes an I/O architecture and protocol which allows relaxed ordering. The network system 100 includes a transmit method of requesting multiple packets and reordering interleaved partial completions. The network system 100 includes a receive method that minimizes ordering constraints on the I/O path of the network system.
  • Additionally, the network system 100 includes one or more of a plurality of features which address system latency issues. For example, the network system 100 moves data for each connection independently and in parallel to and from the memory system 130. When one channel stalls due to a memory latency, another channel takes over. Also for example, multiple packets are moved at a time. Also for example, a split transaction model is implemented; the split transaction model enforces strict ordering on a given connection only when necessary and otherwise uses relaxed ordering. Also for example, the network system 100 maximizes IOMMU locality, thereby reducing the probability of a translation table entry being evicted. Also for example, the network system 100 reduces bridge latency in certain applications.
  • Also for example, the network system 100 provides dedicated resources for each connection including independent DMA channels. Also for example, the network system requests multiple packets from the same and multiple connections; the network system 100 includes multiple receive descriptor updates and receive mailbox completions. Also for example, the network system includes a reorder mechanism. Also for example, in one embodiment, the network system provides large virtually contiguous portions including virtually contiguous regions for descriptors and large virtually contiguous consecutively posted sub-buffers.
  • In one embodiment, the network system 100 addresses system latency within the network system by providing a network interface which includes a plurality of memory access channels, moving data within each of the plurality of memory access channels independently and in parallel to and from memory so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
  • The network system 100 may include one or more of a plurality of features relating to reducing system latency. For example, the network system 100 may allow relaxed ordering when internally moving data between the network interface and the memory system. A memory access channel may include dedicated queuing, control and buffering to move data while preserving ordering between a processing entity and the network interface. The data may include packets of information; and multiple packets of information are sent at a time for a particular memory access channel. The network system 100 may selectively enforce internal transaction ordering for some transactions within a memory access channel while allowing relaxed ordering for other transactions. The plurality of memory access channels may include a plurality of receive memory access channels dedicated to moving data between the network interface and the memory system. Each of the plurality of receive memory access channels may include a receive descriptor ring. Each of the plurality of receive memory access channels may include a receive completion ring. The plurality of memory access channels may include a plurality of transmit memory access channels dedicated to moving data between the memory system and the network interface. The plurality of transmit memory access channels may include transmit descriptor rings.
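  • For purposes of illustration only, the per-channel structures described above might be sketched in C roughly as follows. The field names, ring sizes and channel counts are assumptions made for the sketch, not the actual register or ring layout of the network interface unit; the point is simply that each memory access channel owns its own descriptor and completion state, so a stall on one channel does not block the others.

```c
/* Minimal sketch of per-channel data structures; names and sizes assumed. */
#include <stdint.h>

#define RING_ENTRIES 256                       /* assumed ring size */

struct rx_descriptor { uint64_t buffer_addr; };                       /* posted buffer block   */
struct rx_completion { uint64_t buffer_addr; uint32_t len; uint32_t flags; };
struct tx_descriptor { uint64_t gather_addr; uint32_t len; uint32_t flags; };

/* One receive memory access channel: dedicated rings, no sharing. */
struct rx_channel {
    struct rx_descriptor desc_ring[RING_ENTRIES];   /* receive descriptor (block) ring */
    struct rx_completion comp_ring[RING_ENTRIES];   /* receive completion ring         */
    uint32_t desc_head, desc_tail;
    uint32_t comp_head, comp_tail;
};

/* One transmit memory access channel with its own descriptor ring. */
struct tx_channel {
    struct tx_descriptor desc_ring[RING_ENTRIES];
    uint32_t head, tail;
};

/* A channel stalled on a long memory latency does not block its peers,
 * because no queuing, control or buffering state is shared. */
struct nic_channels {
    struct rx_channel rx[32];                  /* assumed channel count */
    struct tx_channel tx[32];
};
```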
  • Asymmetrical Processing Architecture
  • The method and apparatus of the present invention is capable of implementing asymmetrical multi-processing wherein processing resources are partitioned for processes and flows. The partitions can be used to implement networking functions by using strands of a multi-stranded processor, or Chip Multi-Threaded Core Processor (CMT) to implement key low-level functions, protocols, selective off-loading, or even fixed-function appliance-like systems. Using the CMT architecture for offloading leverages the traditionally larger processor teams and the clock speed benefits possible with custom methodologies. It also makes it possible to leverage a high capacity memory-based communication instead of an I/O interface. On-chip bandwidth and the higher bandwidth per pin supports CMT inclusion of network interfaces and packet classification functionality.
  • Asymmetrical processing in the system of the present invention is based on selectively implementing, off-loading, or optimizing specific functions, protocols, or flows, while preserving the networking functionality already present within the operating system of the local server or remote participants. The network offloading can be viewed as granular slicing through the layers for specific flows, functions or applications. The “offload” category includes the set of networking functions performed either below the TCP/IP stack, or the selective application of networking functions vertically for a set of connections/applications. Examples of the offload category include: (a) bulk data movement (NFS client, RDMA, iSCSI); (b) packet overhead reduction; (c) zero copy (application posted buffer management); and (d) scalability and isolation (traffic spreading from a hardware classifier).
  • FIG. 2 shows the “layers” 1-4 of a traditional networking system that comprise the link, network, transport and application layers, respectively. A dashed line illustrates the delineation of networking functions that are traditionally handled by hardware vs. software. As shown in FIG. 2, in most networking systems this line of delineation is between layers 2 and 3.
  • Network functions in prior art systems are generally layered and computing resources are symmetrically shared by layers that are multiprocessor ready, underutilized by layers that are not multiprocessor ready, or not shared at all by layers that have coarse bindings to hardware resources. In some cases, the layers have different degrees of multiprocessor readiness, but generally they do not have the ability to be adapted for scaling in multiprocessor systems. Layered systems often have bottlenecks that prevent linear scaling. In prior art systems, time slicing occurs across all of the layers, applications, and operating systems. Also, in prior art systems, low-level networking functions are interleaved, over time, in all of the elements. The present invention implements a method and apparatus that dedicates processing resources rather than utilizing those resources as time sliced. The dedicated resources are illustrated by the vertical columns in FIG. 2 that will sometimes be referred to herein as “silos.”
  • The advantage of the asymmetrical model of the present invention is that it moves away from time slicing and moves toward “space slicing.” In the present system, the processing entities are dedicated to implement a particular networking function, even if the dedication of these processing resources to a particular network function sometimes results in “wasting” the dedicated resource because it is unavailable to assist with some other function.
  • In the method and apparatus of the present invention, the allocation of processing entities (processor cores or individual strands) can be allocated with fine granularity. The “silos” that are defined in the architecture of the present invention are desirable for enhancing performance, correctness, or for security purposes.
  • FIG. 3 is an illustration of a networking system that is partitioned whereby a plurality of processing entities are asymmetrically allocated to various networking functions. The functional associations of the processing entities 120 a-n are illustrated by the dashed boundaries designated by reference numerals 310 a-d. The functional association of processing entity 120 a and memory system 130 designated by reference numeral 310 a is a “hypervisor” that is responsible for managing the partitioning and association of the other processing entities, as will be described in greater detail hereinbelow.
  • Reference numeral 310 b shows the association of a processing entity 120 b with memory system 130 and a network interface unit resource of the network interface unit 110. Reference numeral 310 c illustrates the association of a plurality of processing entities 120 c-e with memory system 130 for performing a processing function that does not directly involve a network interface resource. Reference numeral 310 d illustrates an association of a plurality of processing entities 120 f-n with memory system 130 and one or more network interface resources of the network interface unit 110. As is discussed in greater detail herein, the various processing entities 120 a-n can comprise an entire processor core or a processing strand of a processing core.
  • The hypervisor 312 manages the partitioning and association of the various processing entities with the memory system 130 and, in some instances, with a predetermined set of networking resources in the network interface unit. Thus the hypervisor 312 has the responsibility for configuring the control resources that will be dedicated to whichever processing entity is charged with responsibility for managing a particular view of the interface. For example, in the silo that is defined to include the M processing entities 120 f-n, only those processing entities will have the ability to access a predetermined set of hardware resources relating to the interface. The control of the other processing entities, e.g., processing entities 120 c-e, and the access to the memory system 130 for these processing entities is separated.
  • In the asymmetrical processing system illustrated in FIG. 3, the specific assignment and mapping of well defined subfunctions or sessions to preassigned processing entities is done to increase efficiency and throughput. Any number of processing entities can be assigned to a processing task that does not directly involve a network interface resource, such as the N processing entities 120 c-e. Likewise, any number of processing entities can be assigned to perform a network functionality, protocol or hardware function, such as the M processing entities 120 f-n illustrated in FIG. 3.
  • The present invention uses computer resources for network specific functions that could be low level or high level. High-level resources that are concentrated and implemented in the “silo” associations of the present invention are faster than a prior art general implementation of a symmetrical processing system. Using the asymmetrical processing system of the present invention, low-level functionality previously performed in hardware can be raised above the delineation line illustrated in FIG. 2. If there is a processing entity with a bottleneck, another processing entity, or strand, can become part of the flow or part of the function being executed in a particular “silo.” In the asymmetrical system of the present invention, the processing entities that are associated with an interface or other functionality remain efficient because they continue to be associated with the shared memory resources. The processing entities 120 a-n are dedicated without being physically moved within the various layers of the networking system.
  • FIG. 3 also shows two network interface instances 110. Each of the interfaces could have multiple links. The system of the present invention comprises aggregation and policy mechanisms which make it possible to apply all of the control and the mapping of the processing entities 120 a-120 n to more than one physical interface.
  • In the asymmetrical processing system of the present invention, fine or coarse grain processing resource controls and memory separation can be used to achieve the desired partitioning. Furthermore it is possible to have a separate program image and operating system for each resource. Very “coarse” bindings can be used to partition a large number of processing entities (e.g., half and half), or fine granularity can be implemented wherein a single strand of a particular core can be used for a function or flow. The separation of the processing resources on this basis can be used to define partitions to allow simultaneous operation of various operating systems in a separated environment or it can be used to define two interfaces, but to specify that these two interfaces are linked to the same operating system.
  • Referring to FIG. 4, a block diagram of a software view of the network system 100 is shown. More specifically, a network system software stack 410 includes one or more instantiations of a network interface unit device driver 420, the hypervisor 312, as well as one or more operating systems 430 (e.g., OS1, OS2, OS3). The network interface unit 110 interacts with the operating system 430 via a respective network interface unit device driver 420.
  • One of the processing entities may be configured to execute a partition management module (e.g., hypervisor 312). Hypervisor 312 is a high level firmware based function which performs a plurality of functions and services relating to the network system such as e.g., creating and enforcing the partitioning of a logically partitioned network system. Hypervisor 312 is a software implemented virtual machine. Thus, the network system 100, via hypervisor 312, allows the simultaneous execution of independent operating system images by virtualizing all the hardware resources of the network system 100. Each of the operating systems 430 interact with the network interface unit device driver 420 via extended partition portions of the hypervisor 312.
  • FIGS. 5A and 5B are illustrations of the flow of packet data and associated control signals in the system of the present invention from the operational perspective of receiving incoming packet data and transmitting packet data, respectively. The network interface 110 is comprised of a plurality of physical network interfaces that provide data to a plurality of media access controllers (MACs). The MACs are operably connected to a classifier and a queuing layer comprising a plurality of queues. The classifier “steers” the flow of packet data in conjunction with a flow table, as described in more detail hereinbelow.
  • A mapping function based on the classification function performed by the classifier, and a receive DMA controller function are used to provide an ordered mapping of the packets into a merging module. The output of the merging module is a flow of packets into a plurality of receive DMA channels that are functionally illustrated as a plurality of queuing resources, where the number of receive DMA channels shown in FIG. 5A is independent of the number of physical interfaces providing inputs to the interface unit. Both data and “events” travel over the DMA channels. The queuing resources move the packet data to the shared memory.
  • As was discussed above, the queues also hold “events” and therefore, are used to transfer messages corresponding to interrupts. The main difference between data and events in the system of the present invention is that data is always consumed by memory, while events are directed to the processing entities.
  • Somewhere along the path between the network interface unit 110 and the destination processing entity, the events are translated into a “wake-up” signal. The classifier determines which of the processing entities will receive the interrupt corresponding to the processing of a packet of data. The classifier also determines where in the shared memory a data packet will be stored for further processing. The queues are isolated by the designation of DMA channels.
  • There are multiple instances of control registers (pages) in the network interface unit 110. The associations between the intended strands of the processing entities and the control registers are separable via the hypervisor 312 (see, e.g., FIG. 3). This is a logical relationship, rather than a physical relationship between the functional components of the interface unit. Aggregation and classification are accomplished by the two interfaces that share the classifier and also share the DMA channels. The classification function and the assignment of packets to DMA channels can be accomplished regardless of where the data packet originated. Fine-grained and coarse-grained control are implemented by the flow table and the operation of the hypervisor to manage the receive DMA channels and the processing entities.
  • FIG. 5B is an illustration of the flow of packet data and associated control signals from the operational perspective of transmitted packet data. Packets transmitted from the various processing entities 120 a-120 n are received by the interconnect 112 and are directed via a plurality of transmit DMA channels. The transmit DMA channels generate a packet stream that is received by the reorder module. As will be described in greater detail hereinbelow, the reorder module is responsible for generating an ordered stream of packets and for providing a fan-out function. The output of the reorder module is a stream of packets that are stored in transmit data FIFOs. The packets in the transmit data FIFOs are received by the plurality of media access controllers and are thereafter passed to the network interfaces.
  • FIG. 6 is an illustration of a mailbox and register-based interrupt event notification apparatus for separable, low overhead, scalable network interface service. In the shared memory environment of the asymmetrical processing system of the present invention, it is important to avoid physical interrupts, because they complicate management of the shared memory resources. In the present system, "events" are messages that are essentially the same as memory writes. The "message" (or the "interrupt") is simply a means for waking up a specified processing entity; it does not contain information relating to why the processing entity is requested to wake up. When a request to wake up a processing entity is issued, it is also necessary to explain the nature of the task that the processing entity is requested to perform. This is typically accomplished by designating a receive DMA interrupt status register 1016 in the network interface unit 110 that contains information relating to the nature of the task to be performed. When the processing entity, e.g., processing entity 120 b, is awakened, it will read the information in the interrupt status register that denotes the task to be performed. While the interrupt status register in the interface unit hardware provides accurate information relating to the state of the interrupt request, accessing this information involves significant processing overhead and latency.
  • In the system of the present invention, data corresponding to the interrupt status that would normally be obtained from the Rx DMA interrupt status register 1016 in the network interface unit 110 is transferred into a “mailbox” 1010 in the shared memory 130. The shared memory mailbox is used to store an image of a corresponding interrupt register in the network interface unit 110. The image of the interrupt status register is stored in the shared memory mailbox just prior to sending a message to a processing entity asking it to wake up and perform a specified task. The processing entity that is requested to perform a specified task can access the information in the shared memory mailbox much more efficiently and quickly than it can obtain the information from the corresponding hardware register in the network interface.
  • It is possible, however, that the information in the hardware interrupt status register in the interface unit may change between the time the message is issued to a processing entity and the time the processing entity “wakes up” to perform the specified task. Therefore the data contained in the image of the interrupt storage register that is stored in the shared memory mailbox may not be the latest version.
  • By checking the information stored in the shared memory mailbox 1010, the processing entity can quickly determine the reason it was asked to wake up. It is very easy for the processing entity to consult the shared memory mailbox because of its close proximity to the processing entity. The purpose of the mailbox 1010 is to minimize the number of times that the processing entity must cross the I/O interface. The mailbox 1010 allows the processing entity 120 a to postpone the time that it actually needs to read the contents of the interrupt status register in the interface unit.
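  • For purposes of illustration only, the device-side mailbox update described above might be sketched in C as follows. The structure fields, the wake-up callback and its parameters are assumptions made for the sketch; the essential sequence is that the image of the interrupt status register is written to the shared memory mailbox before the wake-up event is sent to the processing entity.

```c
/* Sketch of the shared-memory mailbox concept; field and function names
 * are illustrative, not the patent's register layout. */
#include <stdint.h>

struct rx_dma_mailbox {
    volatile uint64_t int_status_image;  /* snapshot of the HW interrupt status register */
    volatile uint64_t rcr_qlen;          /* e.g., completion-ring queue length (assumed) */
};

/* Conceptual device-side sequence: update the mailbox, then wake the entity. */
void post_event(struct rx_dma_mailbox *mbox,
                uint64_t hw_int_status,
                uint64_t completion_qlen,
                void (*wake)(int target),
                int target)
{
    mbox->int_status_image = hw_int_status;  /* 1. store the register image in memory */
    mbox->rcr_qlen = completion_qlen;
    wake(target);                            /* 2. then send the wake-up message       */
}
```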
  • The advantages relating to the shared memory mailbox implementation of the present invention can be seen by referring to FIG. 7. In a conventional system wherein the processing entity must rely entirely on an interrupt status register, the sequence of processing steps during the interrupt "high" signal is illustrated generally. The system executes an interrupt service routine wherein the interrupt is decoded to identify a particular process to be executed. The processing entity then executes a PIO read (PIORD) to retrieve data from the interrupt status register. There is a latency, illustrated by Δt1, and a related stall, associated with the time it takes the load from the interrupt status register to complete. The data obtained from the interrupt status register is used by the processing entity to perform actions corresponding to the information contained in the interrupt status register. After the actions associated with the original read of the interrupt status register are completed, a subsequent PIORD is issued to determine if the interrupt status register contains data corresponding to additional actions that must be executed. This subsequent PIORD has a corresponding latency Δt2 that results in a second stall. If the result of the subsequent PIORD indicates that the data previously obtained from the interrupt status register is the most current information, the processing entity responds with a return (RET) and the interrupt is terminated. As can be seen in FIG. 7, the interrupt processing sequence for an interrupt corresponding to a single process results in a minimum of two accesses to the interrupt register and a significant memory access latency for servicing the interrupt.
  • The interrupt service routine implemented using the shared memory mailbox of the present invention is illustrated generally by the lower timing diagram in FIG. 7. In the present invention, the processing entity accesses the image of the interrupt register in the shared memory mailbox, rather than executing a PIORD. This provides much faster access to the data and, therefore, significantly decreases the overall latency for the interrupt service routine. The present invention also decreases the overall latency of the interrupt service routine by initiating a subsequent PIORD while the process is being executed. The subsequent PIORD is executed on an interleaved basis while the processing entity is executing the process, so the contents of the actual interrupt status register can be verified to determine whether additional actions have been added to the interrupt request after the contents of the interrupt status register were stored in the shared memory mailbox. In essence, therefore, the subsequent PIORD is "prefetched" by interleaving it with the processing, allowing the status of the actual interrupt status register to be verified immediately upon completion of the process and resulting in a significantly shorter overall time for the system to process the interrupt service routine.
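  • For purposes of illustration only, the processor-side service routine just described might be sketched in C as follows. The helper names (pio_read_int_status, process_events) and the mailbox field are assumptions; the sketch only captures the sequence of reading the mailbox image first and deferring the single verifying PIO read of the live register until the work is done.

```c
/* Sketch of the mailbox-based interrupt service routine; helpers assumed. */
#include <stdint.h>

struct rx_dma_mailbox { volatile uint64_t int_status_image; };

extern uint64_t pio_read_int_status(void);    /* slow load across the I/O path */
extern void process_events(uint64_t status);  /* perform the indicated work    */

void service_interrupt(struct rx_dma_mailbox *mbox)
{
    /* Read the register image from the nearby shared-memory mailbox
     * instead of issuing a PIORD across the I/O interface. */
    uint64_t seen = mbox->int_status_image;

    /* The live register read conceptually overlaps ("is prefetched") with
     * the processing of the work already described by the mailbox image. */
    process_events(seen);

    /* Check the live register once, after the work completes, to catch
     * anything that arrived after the mailbox image was written. */
    uint64_t latest = pio_read_int_status();
    if (latest & ~seen)
        process_events(latest & ~seen);
}
```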
  • Network Interface Unit Overview
  • Referring to FIG. 8, a block diagram of a network interface unit 110 is shown. The network interface unit 110 includes a transmit DMA module 812, a transmit FIFO/reorder logic module 814, a receive FIFO module 816, a receive packet classifier module 818, and a receive DMA module 820. The network interface unit 110 also includes a media access control (MAC) module 830 and a system interface module 832. The transmit FIFO/reorder logic module 814 includes a transmit packet FIFO 850 and a transmit reorder module 852. The receive FIFO module 816 includes a receive packet FIFO 860 and a receive control FIFO 862.
  • Each of the modules within the network interface unit 110 include respective programmable input/output (PIO) registers. The PIO registers are distributed among the modules of the network interface unit 110 to control respective modules. The PIO registers are where memory mapped I/O loads and stores to control and status registers (CSRs) are dispatched to different functional units.
  • The system interface module 832 provides the interface to the interconnect device 112 and ultimately to the memory system 130.
  • The MAC module 830 provides a network connection such as an Ethernet controller. The MAC module 830 supports a link protocol and statistics collection.
  • Packets received by the MAC module 830 are first classified based upon the packet header information via the packet classifier 818. The classification determines the receive DMA channel within the receive DMA module 820. Transmit packets are posted to a transmit DMA channel within the transmit DMA module 812. Each packet may include a gather list. The network interface unit 110 supports checksum and CRC-32c offload on both receive and transmit data paths via the receive FIFO module 816 and the transmit FIFO reorder logic module 814, respectively.
  • The network interface unit 110 provides support for partitioning. For functional blocks that are physically associated with a network port (such as MAC registers within the MAC module 830) or logical devices (such as receive and transmit DMA channels within the receive DMA module 820 and the transmit DMA module 812, respectively), control registers are grouped into separate physical pages so that a partition manager (or hypervisor) can manage the functional blocks through a memory management unit on the processor side of the network system to provide an operating system (potentially multiple operating systems) direct access to the control registers. Control registers of shared logical blocks such as the packet classifier module 818, though grouped into one or more physical pages, may be managed solely by a partition manager (or hypervisor).
  • Each DMA channel can be viewed as belonging to a partition. The CSRs of multiple DMA channels can be grouped into a virtual page to simplify management of the DMA channels.
  • Each transmit DMA channel or receive DMA channel can perform range checking and relocation for addresses residing in multiple programmable ranges. The addresses in the configuration registers, packet gather list pointers on the transmit side and the allocated buffer pointer on the receive side are then checked and relocated accordingly.
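  • For purposes of illustration only, the per-channel range checking and relocation just described might be sketched in C as follows. The number of ranges and the field names are assumptions; the sketch only shows an address being validated against programmable ranges and offset into its partition before use.

```c
/* Sketch of per-DMA-channel range check and relocation; names assumed. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_RANGES 2   /* assumed count of "multiple programmable ranges" */

struct addr_range { uint64_t base, size, relocation; };

/* Validate a configuration, gather-list or buffer address and relocate it. */
bool check_and_relocate(const struct addr_range ranges[NUM_RANGES],
                        uint64_t addr, uint64_t *out)
{
    for (int i = 0; i < NUM_RANGES; i++) {
        if (addr >= ranges[i].base && addr < ranges[i].base + ranges[i].size) {
            *out = addr - ranges[i].base + ranges[i].relocation;
            return true;           /* address accepted and relocated */
        }
    }
    return false;                  /* out of range: reject the request */
}
```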
  • The network interface unit 110 supports sharing available system interrupts. The number of system interrupts may be less than the number of logical devices. A system interrupt is an interrupt that is sent to a processing entity 120. A logical device refers to a functional block that may ultimately cause an interrupt.
  • A logical device may be a transmit DMA channel, a receive DMA channel, a MAC device or other system level module. One or more logical conditions may be defined by a logical device. A logical device may have up to two groups of logical conditions. Each group of logical conditions includes a summary flag, also referred to as a logical device flag (LDF). Depending on the logical conditions captured by the group, the logical device flag may be level sensitive or may be edge triggered. An unmasked logical condition, when true, may trigger an interrupt.
  • Logical devices are grouped into logical device groups. A logical device group is a set of logical devices sharing an interrupt. A group may have one or more logical devices. The state of the logical devices that are part of a logical device group may be read by software.
  • Not all logical devices belonging to a group trigger an interrupt. Whether or not a logical device can trigger an interrupt is controlled by a logical device group interrupt mask (LDGIM). The logical device group interrupt mask is a per logical device group mask that defines which logical device within the group, when a logical condition (LC) becomes true, can issue an interrupt. The logical condition is a condition that when true can trigger an interrupt. A logical condition may be a level, (i.e., the condition is constantly being evaluated) or may be an edge (i.e., a state is maintained when the condition first occurs, this state is cleared to enable detection of a next occurrence of the condition).
  • One example of a logical device that belongs to a group but does not generate an interrupt is a transmit DMA channel which is part of a logical device group. Software may examine the flags associated with the transmit DMA channel by setting the logical device group number of the logical device. However, the transmit DMA channel will not trigger an interrupt if the corresponding bit of the interrupt mask is not set.
  • A system interrupt control value is associated with a logical device group. The system interrupt control value includes an arm bit, a timer and system interrupt data. System interrupt data is the data associated with the system interrupt and is sent along with the system interrupt. The system interrupt control value is set by a partition manager or a hypervisor. A device driver of the network interface unit 110 writes to a register to set the arm bit and set the value of the timer. Hardware causes the timer to start counting down. A system interrupt is only issued if the timer is expired, the arm bit is set and one or more logical devices in a logical device group have their flags set and not masked. This system interrupt timer value ensures that there is some minimal separation between interrupt requests.
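  • For purposes of illustration only, the interrupt-issue rule just described might be sketched in C as follows. The structure fields are assumptions; the sketch only encodes the stated condition that a system interrupt is issued when the timer has expired, the arm bit is set, and at least one unmasked logical device flag in the group is set.

```c
/* Sketch of the per-logical-device-group interrupt rule; names assumed. */
#include <stdbool.h>
#include <stdint.h>

struct ldg_state {
    bool     armed;        /* arm bit, written by the device driver           */
    uint32_t timer;        /* counts down; enforces minimal interrupt spacing */
    uint32_t ldf_bits;     /* logical device flags (LDFs) within the group    */
    uint32_t ldgim;        /* logical device group interrupt mask             */
    uint32_t sid_data;     /* system interrupt data sent with the interrupt   */
};

bool should_issue_interrupt(const struct ldg_state *g)
{
    return g->armed && g->timer == 0 && (g->ldf_bits & g->ldgim) != 0;
}
```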
  • Software clears the state or adjusts the conditions of individual Logical Devices after servicing. Additionally, software enables a mailbox update of the Logical Device if desired. In one embodiment, hardware does not support any aggregate updates applied to an entire logical device group.
  • With one embodiment of the integrated network interface unit 110, the system interrupt data is provided to a non cacheable unit to lookup the hardware thread and interrupt number. With another embodiment of the network interface unit 110, some higher order bits of the system interrupt data are used to select a PCI function and the other bits of the logical device group ID are passed as part of the message signal interrupt (MSI) data, depending on the range value.
  • For one embodiment of the network interface unit 110, a PCI-Express or HyperTransport (HT) module supports a system interrupt data to message signal interrupt (MSI) lookup unit. Thus, the MSI lookup unit provides a synchronization point. Before an interrupt is issued across the interconnect 112, the network interface unit 110 looks up the MSI address and the MSI data. A posted write to the MSI address with the MSI data is issued. This is always an ordered request. A datapath interface is the interface to the specific interconnect.
  • A FIFO queues up requests from processing entities 120. Requests are read one by one and dispatched to the different functional units of the network interface unit 110. Write requests are dispatched to the functional unit if the function can accept the request. Before a read request is issued, all prior requests (either read requests or write requests) are acknowledged.
  • Another embodiment of the integrated network interface unit 110 system interface supports cache line size transfers. Logically, there are two classes of requests, ordered requests and bypass requests. The two classes of requests are queued separately in the system interface unit 832. An ordered request is not issued to the memory system 130 until “older” ordered and bypass requests are completed. However, acknowledgements may return out of order. Bypass requests may be issued as long as the memory system 130 can accept the request and may overtake “older” ordered requests that are enqueued or in transit to the memory system 130. Packet data transfers both receive and transmit, are submitted as bypass requests. Control data requests that affect the state of the DMA channels are submitted as ordered requests. Additionally, write requests can be posted and no acknowledgement is returned.
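  • For purposes of illustration only, the ordered/bypass issue rule just described might be sketched in C as follows. The state fields are assumptions; the sketch only captures that a bypass request (packet data) may be issued whenever the memory system can accept it and may overtake older requests, while an ordered request (DMA channel control state) waits for all older requests to complete.

```c
/* Sketch of the ordered vs. bypass request issue rule; names assumed. */
#include <stdbool.h>

enum req_class { REQ_ORDERED, REQ_BYPASS };

struct sif_state {
    int  older_outstanding;  /* older ordered + bypass requests not yet completed */
    bool memory_ready;       /* memory system can accept another request          */
};

bool can_issue(const struct sif_state *s, enum req_class cls)
{
    if (cls == REQ_BYPASS)
        return s->memory_ready;                           /* may overtake older requests */
    return s->memory_ready && s->older_outstanding == 0;  /* ordered waits for all older */
}
```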
  • In the other embodiment of the integrated network interface unit 110, a non cacheable unit is a focal point where PIO requests are dispatched to the network interface unit 110 and where PIO read returns and interrupts are processed. The non cacheable unit serializes the PIOs from different processor threads to the network interface unit 110. The non cacheable unit also includes an internal table where, based on the system interrupt data, the non cacheable unit looks up the processor thread number and the interrupt number used.
  • Referring to FIGS. 9A and 9B, a block diagram of the receive FIFO module 816 and the packet classifier module 818 is shown. The receive FIFO module 816 is coupled to the MAC module 830 and the receive DMA module 820 as well as to the packet classifier module 818. The packet classifier module 818 is coupled to the MAC module 830 and the receive FIFO module 816.
  • The receive FIFO module 816 includes a per port receive packet FIFO 860 and a per port control FIFO 862. For example, if the network interface unit 110 includes two network ports, then the per port receive packet FIFO 860 includes two corresponding FIFO buffers; if the network interface unit 110 includes four network ports, then the per port receive packet FIFO 860 includes four FIFO buffers. Similarly, if the network interface unit 110 includes two network ports, then the per port control FIFO 862 includes two corresponding control FIFO buffers; if the network interface unit 110 includes four network ports, then the per port control FIFO 862 includes four control FIFO buffers.
  • The packet classifier module 818 includes a Layer 2 parser 920, a virtual local area network (VLAN) table 922, a MAC address table 924, a layer 3 and 4 parser 926, a hash compute module 930, a lookup and compare module 932, a TCAM and associated data module 934 and a merge logic receive DMA channel (RDC) map lookup module 936. The packet classifier module 818 also includes a receive DMA channel multiplexer module 938. The packet classifier module 818 also includes a checksum module 940. The packet classifier module 818, and specifically, the lookup and compare module 932, is coupled to a hash table 950.
  • Referring to FIG. 10, a block diagram of the receive DMA module 820 is shown. The receive DMA module 820 includes a plurality of receive DMA channels 1010, e.g., receive DMA channel 0—receive DMA channel 31. The receive DMA module 820 also includes a port scheduler module 1020, a receive DMA control scheduler module 1022, a datapath engine module 1024, a memory acknowledgement (ACK) processing module 1026 and a memory and system interface module 1028.
  • The plurality of DMA channels 1010 are coupled to the port scheduler module 1020 as well as the receive DMA channel control scheduler 1022 and the memory ACK processing module 1026. The port scheduler module 1020 is coupled to the receive packet FIFO 860 and the receive control FIFO 862 as well as to the datapath engine scheduler module 1024. The datapath engine scheduler 1024 is coupled to the port scheduler module 1020, the receive DMA channel control scheduler 1022 as well as to the memory acknowledgement processing module 1026 and the memory and system interface module 1028. The memory and system interface module 1028 is coupled to the receive packet FIFO 860 and the receive control FIFO 862 as well as to the datapath engine scheduler module 1024 and to the system interface module 832. The memory ACK processing module 1026 is coupled to the plurality of DMA channels 1010 as well as to the datapath engine scheduler 1024 and the system interface module 832.
  • Each of the plurality of receive DMA channels 1010 includes a receive block ring (RBR) prefetch module 1040, a receive completion ring (RCR) buffer module 1042, a receive DMA channel state module 1044, a weighted random early discard (WRED) logic module 1046 and a partition definition register module 1048.
  • Referring to FIG. 11, a block diagram of the transmit DMA module 812 and transmit FIFO/reorder logic module 814 is shown. The transmit DMA module 812 is coupled to the system interface module 832 as well as to the transmit FIFO/reorder logic module 814. The transmit FIFO/reorder module 814 is coupled to the system interface module 832 as well as to the transmit DMA module 812.
  • The transmit FIFO/reorder logic module 814 includes a per port transmit FIFO 1110 and a per port reorder module 1111 as well as a checksum and CRC module 1162. The per port transmit FIFO 1110 and the per port reorder module 1111 each include logic and buffers which correspond to the number of network ports within the network interface unit 110. For example, if the network interface unit 110 includes two network ports, then the per port reorder module 1111 includes two reorder modules and the transmit FIFO 1110 includes two FIFO buffers; if the network interface unit 110 includes four network ports, then the per port reorder module 1111 includes four reorder modules and the transmit FIFO 1110 includes four FIFO buffers.
  • The transmit DMA module 812 includes a plurality of transmit DMA channels 1120, e.g., transmit DMA channel 0—transmit DMA channel 31. The transmit DMA module 812 also includes a scheduler module 1130, a transmit DMA channel prefetch scheduler 1132, a multiplexer 1134, and an acknowledgement (ACK) processing module 1136.
  • Each transmit DMA channel 1120 includes a control state register portion 1140, a transmit ring prefetch buffer 1142 and a partition control register 1144. The control state register portion 1140 includes a plurality of control state registers which are associated with the PIO registers and which control an individual transmit DMA channel 1120.
  • The scheduler module 1130 includes per port deficit round robin (DRR) scheduler modules 1150 as well as a round robin scheduler module 1152. The per port scheduler modules 1150 correspond to the number of network ports within the network interface unit 110. For example, if the network interface unit 110 includes two network ports, then the scheduler module 1130 includes two per port DRR scheduler modules 1150 (port 0 DRR scheduler module and port 1 DRR scheduler module); if the network interface unit 110 includes four network ports, then the scheduler module 1130 includes four per port DRR scheduler modules 1150 (port 0 DRR scheduler module through port 3 DRR scheduler module). Each per port DRR scheduler module 1150 includes a transmit DMA channel map module 1154.
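  • For purposes of illustration only, one round of a per-port deficit round robin pass over the transmit DMA channels mapped to that port might be sketched in C as follows. The quanta, the channel map representation and the helper functions are assumptions made for the sketch, not the actual scheduler implementation.

```c
/* Sketch of a per-port DRR pass over bound transmit DMA channels; names assumed. */
#include <stdbool.h>
#include <stdint.h>

#define TX_CHANNELS 32

struct tx_drr_state {
    bool    bound[TX_CHANNELS];    /* channel mapped to this port (channel map) */
    int32_t deficit[TX_CHANNELS];  /* accumulated credit, in bytes              */
    int32_t quantum[TX_CHANNELS];  /* credit added per round                    */
};

extern bool     tx_channel_has_packet(int ch);
extern uint32_t tx_send_next_packet(int ch);   /* returns bytes sent */

void drr_round(struct tx_drr_state *s)
{
    for (int ch = 0; ch < TX_CHANNELS; ch++) {
        if (!s->bound[ch] || !tx_channel_has_packet(ch))
            continue;
        s->deficit[ch] += s->quantum[ch];
        while (tx_channel_has_packet(ch) && s->deficit[ch] > 0)
            s->deficit[ch] -= (int32_t)tx_send_next_packet(ch);
    }
}
```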
  • The transmit FIFO/reorder logic module 814 includes a per port reorder module 1111, a per port transmit FIFO 1110 and a checksum and CRC module 1162. The per port transmit FIFO 1110 includes FIFO buffers which correspond to the number of network ports within the network interface unit 110. For example, if the network interface unit 110 includes two network ports, then the per port transmit FIFO 1110 includes two per port transmit FIFO buffers; if the network interface unit 110 includes four network ports, then the per port transmit FIFO 1110 includes four per port transmit FIFO buffers.
  • Referring to FIG. 12, a schematic block diagram of an example of a four port network interface unit 1200 is shown. The four port network interface unit 1200 includes a transmit DMA module 812, a transmit FIFO reorder logic module 814, a receive FIFO module 816, a receive packet classifier module 818, and a receive DMA module 820. The four port network interface unit 1200 also includes a media access control (MAC) module 830 and a system interface module 832. The four port network interface unit 1200 also includes a zero copy function module 1210 which is coupled to a TCP translation buffer table module 1212.
  • The packet classifier module 818 includes a corresponding ternary content addressable memory (TCAM) module 934. The packet classifier module 818 is coupled to an FC RAM module 950 which stores flow tables for use by the packet classifier module 818.
  • The receive DMA module 820 includes 32 receive DMA channels 1010. The transmit DMA module 812 includes 32 transmit DMA channels 1120. The MAC module 830 includes four MAC ports 1220 as well as a serializer/deserializer (SERDES) bank module 1222. Because there are four MAC ports 1220, the per port receive packet FIFOs 816 include four corresponding receive packet FIFOs and the per port transmit FIFOs 814 include four corresponding transmit FIFOs. The system interface module 832 includes a PCI Express interface module 1230, a system interface SERDES module 1232 and a HT interface module 1234.
  • Referring to FIG. 13, a schematic block diagram of an example of an integrated network interface unit 1300 is shown. In the integrated network interface unit 1300, portions of the four port network interface unit 1200 are included within an integrated solution in which network functions are included with a processor core. (The processor core is omitted from the Figure for clarity purposes).
  • More specifically, the integrated network interface unit 1300 includes a transmit DMA module 812, a transmit FIFO reorder logic module 814, a receive FIFO module 816, a receive packet classifier module 818, and a receive DMA module 820. The integrated network interface unit 1300 also includes a media access control (MAC) module 830 and a system interface module 832.
  • The packet classifier module 818 includes a corresponding TCAM module 934. The packet classifier module 818 is coupled to an FC RAM module 950 which stores flow tables for use by the packet classifier module 818.
  • The receive DMA module 820 includes 32 receive DMA channels 1010. The transmit DMA module 812 includes 32 transmit DMA channels 1120. The MAC module 830 includes two MAC ports 1220 as well as a SERDES bank module 1222. Because there are two MAC ports 1220, the per port receive packet FIFOs 816 include two corresponding receive packet FIFOs and the per port transmit FIFOs 814 include two corresponding transmit FIFOs. The receive and transmit FIFOs are stored within a network interface unit memory pool. The system interface module 832 includes an I/O unit module 1330 and a system interface unit module 1332.
  • Network Interface Unit Functional Overview
  • Referring to FIG. 14, a flow chart of the classification of a packet received by the network interface unit 110 is shown. More specifically, a packet is received by a MAC port of the MAC module 830 at step 1410. The MAC module 830 includes a plurality of media access controller (MAC) ports that support a network protocol such as an Ethernet protocol. The media access controller ports include layer 2 protocol logic, statistic counters, address matching and filtering logic. The output from a media access controller port includes information on a destination address, whether the address is a programmed individual address or an accepted group address, and the index associated with the destination address in that category.
  • Packets from different physical ports are stored temporarily in a per port receive packet FIFO at step 1412. As the packets are stored into the per port receive FIFO module 816, the header of the packet is copied to the packet classifier module 818 at step 1414. The packet is passed through the checksum module at step 1416. At step 1420, the packet classifier module 818 determines to which receive DMA channel group the packet belongs and an offset into the receive DMA channel table. In one embodiment, the network interface unit 110 includes eight receive DMA channel groups.
  • Each receive DMA Channel 1010 includes a receive block ring (RBR), a receive completion ring (RCR) and a set of control and status registers. (See, e.g., FIG. 21.) Physically, the receive DMA channels 1010 are allocated as ring buffers in memory system 130. A receive DMA channel 1010 is selected after an incoming packet is classified. A packet buffer is derived from a pool of packet buffers in the memory system 130 and used to store the incoming packet. Each receive DMA channel 1010 is capable of issuing an interrupt based on the queue length of the receive completion ring or a time out. The receive block ring is a ring buffer of memory blocks posted by software. The receive completion ring is a ring that stores the addresses of the buffers used to store incoming packets.
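  • For purposes of illustration only, the per-channel receive flow implied above might be sketched in C as follows. The threshold value and helper names are assumptions; the sketch only shows a buffer being taken from the software-posted receive block ring, the packet being written to system memory, the buffer address being queued on the receive completion ring, and an interrupt being raised on completion-queue length or timeout.

```c
/* Conceptual sketch of per-channel receive delivery; helpers and threshold assumed. */
#include <stdbool.h>
#include <stdint.h>

extern uint64_t rbr_pop_block(int ch);                  /* next buffer posted by software */
extern void     dma_write(uint64_t addr, const void *p, uint32_t len);
extern void     rcr_push(int ch, uint64_t addr, uint32_t len);
extern uint32_t rcr_qlen(int ch);
extern bool     rcr_timeout_expired(int ch);
extern void     raise_rx_interrupt(int ch);

#define RCR_QLEN_THRESHOLD 16   /* assumed */

void rx_channel_deliver(int ch, const void *pkt, uint32_t len)
{
    uint64_t buf = rbr_pop_block(ch);   /* buffer from the posted receive block ring */
    dma_write(buf, pkt, len);           /* move the packet into system memory        */
    rcr_push(ch, buf, len);             /* record where the packet was placed        */

    if (rcr_qlen(ch) >= RCR_QLEN_THRESHOLD || rcr_timeout_expired(ch))
        raise_rx_interrupt(ch);
}
```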
  • In one embodiment, each receive DMA channel group table includes 32 entries (see, e.g., FIG. 23). Each entry contains one receive DMA channel 1010. Each table defines the group of receive DMA channels that are allowed to move a packet to the system memory. The packet classifier module 818 chooses a table as an intermediate step before a final receive DMA channel 1010 is selected. The zeroth entry of the table is the default receive DMA channel 1010. The default receive DMA channel 1010 queues error packets within the group. The default can be one of the receive DMA channels in the group.
  • The Layer 2 parser 920 processes the network header to determine if the received packet contains a virtual local area network (VLAN) Tag at step 1430. For a VLAN tagged packet, a VLAN ID is used to lookup into a VLAN table 922 to determine the receive DMA channel table number for the packet. The packet classifier 818 also looks up the MAC address table 924 to determine a receive DMA channel table number based on the destination MAC address information. Software programs determine which of the two results to use in subsequent classification. The output of the Layer 2 parser 920, together with the resulting receive DMA channel table number, is passed to the layer 3 and 4 parser 926.
  • The Layer 3 and 4 parser 926 examines the EtherType, the Type of Service/Differentiated Services Code Point (TOS/DSCP) field and the Protocol ID/Next header field to determine whether the IP packet needs further classification at step 1432. The Layer 3 and 4 parser 926 recognizes a fixed protocol such as a transmission control protocol (TCP) or a user datagram protocol (UDP). The Layer 3 and 4 parser 926 also supports a programmable IP protocol number. If the packet needs further classification, the packet classifier 818 generates a flow key and a TCAM key for the packet at step 1434.
  • The TCAM key is provided to the TCAM unit 934 for an associative search at step 1440. If there is a match, the result of the search (i.e., the TCAM result) may override the Layer 2 receive DMA channel table selection, or may provide an offset into the Layer 2 receive DMA channel table and ignore the result from the hash unit 930. The result of the search may also specify a zero copy flow identifier to be used in a zero copy translation.
  • The TCAM result also determines whether a hash lookup based on the flow key is needed at step 1442. Using the receive DMA channel table number provided by the TCAM module 934, which determines a partition of the external table the hash unit 930 can search, a lookup is launched and either an exact match or an optimistic match is performed. If there is a match, the result contains the offset into the receive DMA channel table and the user data. The result may also contain a zero copy flow identification value used in a zero copy operation.
  • The outputs from the hash unit 930 and the TCAM module 934 are merged to determine the receive DMA channel 1010 at step 1450. The receive DMA channel 1010 moves the packet into memory system 130. If a zero copy flow identification value is present, as determined at step 1452, then a zero copy function is performed at step 1454 and the receive DMA channel 1010 moves the packet with header payload separation.
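  • The precedence among the Layer 2 result, the TCAM result and the hash result can be illustrated with the following sketch. This is an illustrative rendering only; the structure and field names (tcam_result, hash_result, group_table and so on) are assumptions made for the example and are not taken from the hardware definition.
    /* Illustrative merge of classification results into a receive DMA
     * channel selection, per steps 1440-1450.  All names are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    struct tcam_result {
        bool     hit;
        bool     override_l2_table;  /* use the TCAM's own table selection */
        bool     ignore_hash;        /* discard the hash unit's offset     */
        uint8_t  table;              /* receive DMA channel table number   */
        uint8_t  offset;             /* offset into the selected table     */
        uint32_t zero_copy_flow_id;  /* optional zero copy flow identifier */
    };

    struct hash_result {
        bool     hit;
        uint8_t  offset;             /* offset into the receive DMA channel table */
        uint32_t zero_copy_flow_id;
    };

    /* group_table[t][o] holds a receive DMA channel number; entry 0 is the
     * default channel of the group. */
    static uint8_t select_rx_dma_channel(const uint8_t group_table[][32],
                                         uint8_t l2_table,
                                         const struct tcam_result *t,
                                         const struct hash_result *h)
    {
        uint8_t table  = l2_table;   /* from the VLAN / MAC address lookup */
        uint8_t offset = 0;          /* default channel of the group       */

        if (t->hit && t->override_l2_table)
            table = t->table;

        if (t->hit && t->ignore_hash)
            offset = t->offset;
        else if (h->hit)
            offset = h->offset;

        return group_table[table][offset];
    }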
  • A zero copy function is a receive function that performs header vs. payload separation and places payloads at a correct location within pre-posted (per flow) buffers. Each per flow buffer list may be viewed as a zero copy DMA channel. Packet headers are stored into memory system 130 via regular receive DMA channels, as determined by the packet classifier module 818. Using zero copy, the network interface unit 110 may operate on a packet by packet basis without requiring reassembly buffers within the network interface unit 110. Zero copy saves costly data movement operations from a host protocol stack, and in some cases reduces the per packet overheads by postponing header processing until a large set of buffers may be visited. Protocol state machines, and exception processing are maintained in the host protocol stack. Thus, the host's data movement function is removed on a selective basis and subject to instantaneous buffer availability.
  • Based on the flow ID, an anchor and a buffer list are retrieved to determine whether payload placement is possible. The anchor, which is part of the zero copy state, is a variable set that associates the transmission control protocol (TCP) sequence number space to a buffer list and implicitly confines zero copy to the current receive TCP window. One or more payload DMA operations are then determined.
  • The outputs of the packet classifier module 818 and possibly one or more zero copy DMA operations associated with the packet are stored into the receive control FIFO 862.
  • The network interface unit 110 supports checksum offload and CRC-32c offload for transmission control protocol/stream control transmission protocol (TCP/SCTP) payloads. The network interface unit 110 compares the calculated values with the values embedded in the packet. The results of the compare are sent to software via a completion status indication. No discard decision is made based on the CRC result. Checksum/CRC errors do not affect the layer 3 and 4 classification. Similarly, the error status is provided to software via the completion status indication. Zero copy DMA operations are not performed if checksum errors are detected, though zero copy states are updated regardless of the packet error. The entire packet is stored in system memory using the appropriate receive DMA channel.
  • The receive packet FIFO 860 is logically organized per physical port. Layer 2, 3 and 4 error information is logically synchronized with the classification result of the corresponding packet.
  • Referring to FIG. 15, a flow chart of the movement of a packet by the receive DMA module 820 of the network interface unit 110 is shown. More specifically, logically there are 32 receive DMA channels (receive DMA channel 0 through receive DMA channel 31) available to incoming packets. The datapath engine scheduler 1024 is common across all DMA operations. The datapath engine scheduler 1024 also prefetches receive block pointers or updates the completion ring of the receive DMA channels 1010, and prefetches zero copy buffer pointers.
  • To support partitioning, each receive DMA channel 1010 supports multiple memory rings. All the addresses posted by software, such as the configuration of the ring buffers and buffer block addresses are range compared and optionally translated when used to reference memory system 130 based on the ranges.
  • A packet arrives at step 1559. Software posts buffer block pointers into the receive block ring at step 1560. The size of each block is programmable, but fixed per channel. There are one or more packet buffers within a buffer block. Software can specify up to three sizes of packet buffer. Hardware partitions a block. Each block can only contain packet buffers of the same size. For Zero Copy Flows, these packet buffers are used to store packet headers only.
  • To reduce the per packet overhead, the network interface unit 110 maintains a prefetch buffer 1040 for the receive block ring and a tail pointer for the receive completion ring. When the receive block ring prefetch buffer runs low, a request is issued to the DMA system to retrieve a cache line of block addresses from the ring. If the receive completion ring tail pointer needs to be updated, a write request is issued. The consistency of the receive completion ring state is maintained by the network interface unit 110. The receive DMA channel control scheduler 1022 maintains fairness among the receive DMA channels.
  • The port scheduler 1020 examines whether there are any packets available from the receive packet FIFO 860 and the receive control FIFO 862 at step 1562. The port scheduler 1020 then determines which port to service first at step 1564. The port scheduler 1020 includes a Deficit Round Robin scheduler.
  • The port scheduler's determination does not depend on whether the packet is part of a zero copy flow. From the control header, the port scheduler 1020 determines which receive DMA channel 1010 to check for congestion and retrieves a buffer to store the packet at step 1566. Congestion is relieved by a WRED algorithm applied on the receive block ring and the receive completion ring. If the receive DMA channel 1010 is not congested, a buffer address is allocated according to the packet size at step 1568. Packet data requests are issued as posted writes. For zero copy flows, the buffers reflected in the receive completion ring buffer 1042 only hold the packet headers.
  • The datapath engine 1024 fairly schedules the requests from the port scheduler 1020 and the receive DMA channel control scheduler 1022 at step 1570. The datapath engine 1024 then issues the requests to the memory system 130 at step 1572.
  • The receive completion ring buffer 1042 is updated after issuing the write requests for the entire packet at step 1574. The DMA status registers are updated every time that the receive completion ring buffer 1042 is updated at step 1576. Software may poll the DMA status registers to determine if any packet has been received. When the receive completion ring queue length reaches a threshold or a timeout occurs, as determined at step 1578, the network interface unit 110 may update the receive completion ring buffer 1042, and simultaneously, write the DMA status registers to a mailbox at step 1580. The software state is then updated and the logical device flag (LDF) may be raised at step 1582. The LDF may then lead to a system interrupt at step 1584. The network interface unit 110 maintains the consistency of the DMA status registers and the receive completion ring buffer 1042 as the status registers reflect the content of the receive completion ring in the memory system 130 at step 1586.
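  • As a minimal sketch of the software side of steps 1576 through 1586, the fragment below models polling the per channel DMA status registers to detect newly completed packets. The register layout and names are illustrative only, and the tail index is assumed to wrap naturally on overflow.
    /* Illustrative software polling of a receive DMA channel's status;
     * the register names are assumptions made for this example. */
    #include <stdint.h>

    struct rx_dma_status {
        volatile uint32_t rcr_qlen;   /* completion ring queue length */
        volatile uint32_t rcr_tail;   /* completion ring tail index   */
    };

    /* Returns the number of newly completed packets visible to software. */
    static uint32_t rx_poll(const struct rx_dma_status *st, uint32_t *last_tail)
    {
        uint32_t tail = st->rcr_tail;        /* single register read         */
        uint32_t done = tail - *last_tail;   /* wraps naturally on overflow  */

        *last_tail = tail;
        return done;
    }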
  • FIG. 16 shows a flow chart of the movement of a packet transmitted by the network interface unit 110. More specifically, the transmit DMA module 812 includes 32 transmit DMA channels 1120. Each transmit DMA channel 1120 includes a transmit ring and a set of control and status registers. (See, e.g., FIG. 22.) Similar to the receive channels, each transmit channel supports multiple ranges. Addresses in the transmit ring are subjected to a range checking translation based on the ranges.
  • The transmit ring includes a ring buffer in memory system 130. Software posts packets into the transmit ring at step 1610 and signals the transmit DMA module 812 that packets have been queued at step 1612. Each packet is optionally built as a gather list. (The network interface unit 110 ensures that the packet size does not exceed the maximum packet size limit.) When the transmit ring is not empty, the network interface unit 110 prefetches the transmit ring entries into a per channel transmit ring prefetch buffer 1142 at step 1614.
  • Any transmit DMA channel 1120 can be bound to one of the network ports by software. The binding of the ports is controlled by a mapping register 1154 at the per port DRR scheduler 1150. The DRR scheduler 1150 may be switched to a different channel on packet boundary. This switching ensures that there will be no packet interleaving from different transmit DMA channels 1120 within a packet transfer. The DRR scheduler 1150 first acquires an available buffer for that port at step 1620. If a buffer is available, a memory request is then issued at step 1622. A buffer tag identifying the buffer is provided at step 1624 to enable reordering of potentially out of order read returns. The buffer tag is linked to the request acknowledgement identifier for the packet at step 1626. The network ports are serviced in a round robin order via the round robin scheduler 1152 at step 1630. Requests from different ports may be interleaved.
  • The transmit data requests and the prefetch requests share the same datapath to the memory system 130. The returned acknowledgement is first processed at step 1640 to determine whether the returned acknowledgement is for a prefetch or for transmit data. The transmit DMA module 812 hardware also supports checksum offload and CRC-32c offload. The transmit FIFO/Reorder Logic module 814 includes the checksum and CRC-32c functionality.
  • When the entire packet has been received into the transmit DMA module 812, the transfer of the packet is considered to be completed and the state of the transmit DMA channel 1120 is updated via the associated status register at step 1650. A 12-bit counter is initialized to zero and tracks transmitted packets. Software polls the status registers to determine the status. Alternately, software may mark a packet so that an interrupt (if enabled) may be issued after the transmission of the packet. Similar to the receive side, the network interface unit 110 may update the state of the DMA channel to a predefined mailbox after transmitting a marked packet.
  • The transmit and receive portions of the network interface unit 110 fairly share the same memory system interface 832.
  • Referring to FIG. 17, a flow chart of the operation of the port scheduler 1020 is shown. More specifically, because a port may be supporting 1 Gbps or 10 Gbps, a rate based scheduler is provided to ensure no starvation. The port scheduler 1020 only switches port at packet boundary and only schedules a port when the port FIFO has at least one complete packet.
  • The number of queues is set at step 1710 as i:={0, 1, 2, 3}. The number of queues corresponds to the number of ports within the network interface unit 110. Accordingly, for network interface unit 110 having two ports, the number of queues would be set as i:={0, 1}.
  • Next, the port scheduler 1020 sets the deficit counter of queue i at step 1712 as C_i := deficit counter of queue i. Next, the port scheduler 1020 sets an assigned weight for queue i at step 1714 as W_i := assigned weight for queue i. Next, at step 1716, the port scheduler 1020 initializes i as the last queue in i. A queue is eligible if the queue has a complete packet. The ‘next_queue_in_i’ operation returns the first queue in i if the last queue has been reached. Next, the port scheduler 1020 performs a select operation at step 1718. Next, the port scheduler 1020 performs a loop operation at step 1720.
  • Referring to FIG. 18, a flow chart showing the operation of the select operation is shown. More specifically, the select operation 1718 starts by setting i equal to the next queue in i at step 1810. Next, the port scheduler 1020 sets C_i equal to the minimum of (C_i + W_i) and W_i at step 1812. Next, the port scheduler 1020 determines whether queue i is eligible for scheduling at step 1814; queue i is not eligible if C_i is less than or equal to zero. If queue i is not eligible, then the operation returns to step 1810. If queue i is eligible, then operation proceeds to the loop operation of step 1720.
  • Referring to FIG. 19, a flow chart showing the operation of the loop operation is shown. More specifically, the loop operation 1720 starts by processing one packet from queue i at step 1910. Next, the port scheduler 1020 decrements C_i at step 1912; C_i is decremented by the number of 16-byte blocks the packet contains, and a partial block is counted as one complete block. Next, the port scheduler 1020 determines whether queue i is still eligible for scheduling at step 1914; queue i is not eligible for scheduling if C_i is less than or equal to zero. If queue i is not eligible for scheduling, then the operation returns to the select operation of step 1718. If queue i is still eligible for scheduling, then the operation returns to step 1910 to process another packet. The port DRR weight register programs the weight of the corresponding port.
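  • The select and loop operations of FIGS. 17 through 19 can be summarized with the following sketch of a deficit round robin scheduler for a two port unit. The queue and packet accessors (queue_has_packet, process_one_packet) are stand-ins introduced for the example, not names taken from the hardware description, and weights are assumed to be positive.
    /* Sketch of the port scheduler's DRR select/loop; the caller is assumed
     * to invoke it only when at least one queue holds a complete packet. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_QUEUES 2               /* i := {0, 1} for a two port unit */

    struct drr_queue {
        int32_t c;                     /* deficit counter C_i                */
        int32_t w;                     /* programmed weight W_i (16B blocks) */
    };

    /* Stand-ins for "queue has a complete packet" and "process one packet". */
    extern bool     queue_has_packet(int i);
    extern uint32_t process_one_packet(int i);  /* returns packet length in bytes */

    static void drr_run_once(struct drr_queue q[NUM_QUEUES], int *cur)
    {
        /* Select: advance to the next queue and replenish its credit. */
        int i = (*cur + 1) % NUM_QUEUES;
        for (;;) {
            int32_t sum = q[i].c + q[i].w;
            q[i].c = (sum < q[i].w) ? sum : q[i].w;  /* C_i := min(C_i + W_i, W_i) */
            if (q[i].c > 0 && queue_has_packet(i))
                break;                               /* eligible queue found */
            i = (i + 1) % NUM_QUEUES;
        }

        /* Loop: service queue i until its credit runs out or it empties. */
        while (q[i].c > 0 && queue_has_packet(i)) {
            uint32_t len = process_one_packet(i);
            q[i].c -= (int32_t)((len + 15) / 16);    /* partial 16B block counts as one */
        }
        *cur = i;
    }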
  • Referring to FIG. 20, a flow chart showing the operation of a weighted random early discard (WRED) module 2000 is shown. A goal of congestion management (such as the use of a weighted random early discard module 2000) is to prevent overloading of the processing entity 120 and to fence off potential attacks that deplete system resources associated with network interfaces. The control mechanism for providing congestion management is to discard packets randomly. The weighted random early discard module 2000 provides the benefit of de-synchronizing the TCP slow start behavior and achieving an overall improvement in throughput.
  • The resources of a receive DMA channel are captured by two states: the receive completion ring queue length and the number of posted buffers. A DMA channel is considered congested if there are a lot of packets queued up but not enough buffers posted to the DMA channel. A method for determining congestion is to combine the two states. More specifically if Q is a combined congestion measurement, then
    Q = max(Receive Completion Ring Queue Length − S × Receive Block Ring Queue Length, 0).
  • The receive block ring queue length is scaled up by a constant, S, because a block may store more than one packet.
  • A WRED function is characterized by two parameters, a threshold and a window. If Q is larger than the threshold, then the packet is subjected to a WRED discard operation. The window value determines the range of Q above the threshold where the probabilistic discard is applicable. If Q is larger than (Threshold+Window), the packet is always discarded. Because it is desirable to protect existing connections and fence off potential SYN attacks, TCP SYN packets are subject to a different (Threshold, Window) pair.
  • More specifically, the operation of the WRED module 2000 starts by initializing a plurality of values at step 2008. The values include setting T=Threshold, W=Window and R=Random. Next, the WRED module 2000 sets a value x equal to Q−T at step 2010. Next, the WRED module 2000 determines whether x is less than 0 at step 2012. If x is less than zero, then the operation of the module exits. If x is not less than zero, then the WRED module 2000 obtains a random number between 0 and 1 at step 2014. Next, the WRED module 2000 determines whether an integer value of R*W is less than x at step 2016. If the integer value is less than x, then the packet is discarded at step 2018. If the value is not less than x, then the operation of the module completes.
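  • A minimal sketch of the combined congestion measurement and the discard decision of FIG. 20 follows. The clamping of Q at zero, the use of a floating point random value in [0, 1) and the function names are assumptions made for this example; one way to derive the random value is shown with the LFSR sketch below.
    /* Illustrative WRED decision: compute the combined congestion measure Q
     * and apply the threshold/window test of FIG. 20. */
    #include <stdbool.h>
    #include <stdint.h>

    static int32_t congestion_q(uint32_t rcr_qlen, uint32_t rbr_qlen, uint32_t s)
    {
        int64_t q = (int64_t)rcr_qlen - (int64_t)s * (int64_t)rbr_qlen;
        return (q > 0) ? (int32_t)q : 0;        /* clamp negative values to 0 */
    }

    /* rnd is a value in [0, 1), e.g. derived from the 16-bit LFSR below. */
    static bool wred_discard(int32_t q, int32_t threshold, int32_t window, double rnd)
    {
        int32_t x = q - threshold;
        if (x < 0)
            return false;                       /* below threshold: keep the packet  */
        if ((int32_t)(rnd * window) < x)
            return true;                        /* probabilistic (or forced) discard */
        return false;
    }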
  • In one embodiment, the random number is implemented with a 16-bit linear feedback shift register (LFSR) with a polynomial such as
    X^16 + X^5 + X^3 + X^2 + 1
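  • One conventional realization of such a register is sketched below. The right-shift Fibonacci form, the arbitrary nonzero seed and the conversion to a [0, 1) value are implementation choices made for this example; the description above only fixes the register width and the polynomial.
    /* 16-bit Fibonacci LFSR; taps are taken from the exponents 16, 5, 3 and 2
     * of the polynomial (right-shift convention: tap t is read at bit 16 - t). */
    #include <stdint.h>

    static uint16_t lfsr_step(uint16_t s)
    {
        uint16_t bit = (uint16_t)(((s >> 0) ^ (s >> 11) ^ (s >> 13) ^ (s >> 14)) & 1u);
        return (uint16_t)((s >> 1) | (uint16_t)(bit << 15));
    }

    /* Example: derive a pseudo-random value in [0, 1) from the register state.
     * 0xACE1 is an arbitrary nonzero seed; the all-zero lock-up state is avoided. */
    static double lfsr_uniform(uint16_t *state)
    {
        *state = lfsr_step(*state ? *state : 0xACE1u);
        return (double)*state / 65536.0;
    }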
  • Network Interface Unit Data Movement Profiles
  • The network interface unit 110 provides performance based on parallelism, selective offloading of data movement and pipelined usage of an I/O interface. The network interface unit 110 selectively uses direct virtual memory access (DVMA) and physical DMA models. The network interface unit 110 provides partitionable control and data path (via, e.g., hypervisor partitions). The network interface unit 110 provides packet classification for partitions, services and flow identification. The network interface unit 110 is multi-ported for multi-homing, blade architectures and look aside applications.
  • The network interface unit 110 receive and transmit data movement profiles are described below. More specifically, the receive data movement profile provides that DMA writes are performed in up to 512 byte posted write transactions, that there are a plurality of pipelined write transactions per DMA channel, that the total number of pipelined write transactions is determined based upon I/O and memory latency characteristics, that the receive DMA write PCI-Express transactions have byte granularity and that most DMA writes are initiated with relaxed ordering. The transmit (read) data movement profile provides for a plurality of pipelined DMA read requests per DMA channel, that the total number of pipelined DMA read requests across channels is determined based upon I/O and memory latency characteristics, that each transmit DMA read request can be up to 2 Kbytes, that the network interface unit 110 tries to request an entire packet or 2 Kbytes, whichever is smaller, that the DMA read completions can be partial, but in order for a given request, that the network interface unit 110 handles interleaved DMA read completions for outstanding requests, and that the network interface unit 110 preserves packet ordering per DMA channel despite request or completion reordering. It will be appreciated that any of the data movement profiles may be adjusted based upon the I/O and memory latency characteristics associated with the network system.
  • DMA channels, which include both receive DMA channels 1010 and transmit DMA channels 1120, are the basic constructs for queuing, and for enabling parallelism in servicing network interface units 110 from different processing entities 120. Thus, DMA channels are serviced independently, thereby avoiding the overhead of mutual exclusion when managing transmit and receive queues. In one embodiment, receive zero copy (i.e., TCP reassembly) is associated with each of the DMA channels but does not consume additional DMA channels. Translation tables are not considered separate channels.
  • The transmit DMA channels 1120 and receive DMA channels 1010 each include respective kick registers which are used via PIO posted writes to update network interface units 110 regarding how far the hardware may advance on each ring. Completion registers, analogously indicate to the software how far the hardware has advanced, while avoiding descriptor writebacks.
  • All PIO registers associated with the operation of a DMA channel are separable into pages. Thus, the DMA channels may be managed by their own partitions. The PIO registers, and thus the DMA channels, are groupable so that an arbitrary ensemble of DMA channels can be placed in a single partition.
  • Both the transmit DMA channels 1120 and the receive DMA channels 1010 cache at least a cache line worth of fetched descriptors to minimize descriptor memory accesses. Similarly, completion updates are batched to fill a cache line whenever possible. Every DMA channel includes a corresponding polling register. The polling register reflects the state of the channel (not-empty completion) so that the channel state can be determined with a programmed I/O (PIO) read operation to the polling register.
  • Referring to FIG. 21, a receive DMA channel 1010 includes a receive descriptor ring 2110 and a receive completion ring 2112. The receive descriptor ring 2110 holds free buffer pointers to blocks of buffers of pre-defined size, typically an operating system page size or a multiple of an operating system page size. Buffer consumption granularity discriminates packet lengths based on three ranges, small, large or jumbo, which are defined by SMALL_PACKET_SIZE, LARGE_PACKET_SIZE, JUMBO_PACKET_SIZE elements, respectively. More specifically, with the small packet length range the length of the packet is less than the value defined by the SMALL_PACKET_SIZE element; with the large packet length range, the length of the packet is greater than the value defined by the SMALL_PACKET_SIZE element and less than or equal to the value defined by the LARGE_PACKET_SIZE element; and, with a jumbo packet length range, the length of the packet is greater than the value defined by the LARGE_PACKET_SIZE element and less than or equal to the value defined by JUMBO_PACKET_SIZE element.
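  • The three packet length ranges defined above can be sketched as a simple discrimination function. The enum and function names are illustrative; only the thresholds correspond to the SMALL_, LARGE_ and JUMBO_PACKET_SIZE elements of the channel context.
    /* Illustrative packet-length discrimination into buffer classes. */
    #include <stdint.h>

    enum rx_buf_class { RX_BUF_SMALL, RX_BUF_LARGE, RX_BUF_JUMBO, RX_BUF_TOO_BIG };

    static enum rx_buf_class classify_len(uint32_t len,
                                          uint32_t small_sz, uint32_t large_sz,
                                          uint32_t jumbo_sz)
    {
        if (len < small_sz)
            return RX_BUF_SMALL;      /* below SMALL_PACKET_SIZE                 */
        if (len <= large_sz)
            return RX_BUF_LARGE;      /* otherwise, up to LARGE_PACKET_SIZE      */
        if (len <= jumbo_sz)
            return RX_BUF_JUMBO;      /* otherwise, up to JUMBO_PACKET_SIZE      */
        return RX_BUF_TOO_BIG;        /* exceeds the maximum supported size      */
    }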
  • At any time, the receive DMA channel 1010 uses three free buffer pointers cached from its descriptor ring: one buffer is carved up for small packets, another buffer for large packets, and a third buffer for jumbo packets. The PACKET_SIZE thresholds are coarsely programmable per channel and determine the number of packets per buffer and the fixed receive buffer sub-divisions where packets may start. The respective packet pointers are posted to the channel's receive completion ring 2112.
  • The receive completion ring 2112 therefore defines the order of packet arrival for the receive DMA channel 1010 corresponding to the completion ring. Jumbo packets may exceed the buffer size by spilling over into a second buffer. Two pointers per packet are posted to the receive completion ring 2112 in the case of spillover.
  • For each receive DMA channel 1010, the receive DMA channel context includes a plurality of elements. More specifically, each receive DMA channel includes a buffer size element; a SMALL_PACKET_SIZE element; a LARGE_PACKET_SIZE element; a JUMBO_PACKET_SIZE element; a receive descriptor ring start pointer element; a receive descriptor ring size element; a receive descriptor ring head pointer element; a receive kick register element; a receive descriptor ring tail pointer element; a receive completion ring start pointer element; a receive completion ring size element; a receive completion ring head pointer element; a receive completion tail pointer element; a receive buffer pointer for SMALL element; a receive Buffer pointer for LARGE element; a receive Polling register element (reflects completion ring queue depth, i.e. the distance between completion head and tail register values); and WRED register elements (thresholds, discard statistics).
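  • The per channel context listed above is rendered below as a C structure for clarity. Field widths and ordering are illustrative; the hardware register layout is not specified in this description.
    /* Illustrative receive DMA channel context; names mirror the elements above. */
    #include <stdint.h>

    struct rx_dma_channel_ctx {
        uint32_t buffer_size;          /* block size carved into packet buffers   */
        uint32_t small_packet_size;
        uint32_t large_packet_size;
        uint32_t jumbo_packet_size;

        uint64_t rbr_start;            /* receive descriptor (block) ring         */
        uint32_t rbr_size;
        uint32_t rbr_head;
        uint32_t rx_kick;              /* kick register (posted PIO writes)       */
        uint32_t rbr_tail;

        uint64_t rcr_start;            /* receive completion ring                 */
        uint32_t rcr_size;
        uint32_t rcr_head;
        uint32_t rcr_tail;

        uint64_t buf_ptr_small;        /* current buffer carved for small packets */
        uint64_t buf_ptr_large;        /* current buffer carved for large packets */

        uint32_t polling;              /* completion ring queue depth (head-tail) */
        uint32_t wred_threshold;       /* WRED thresholds and discard statistics  */
        uint32_t wred_window;
        uint32_t wred_discards;
    };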
  • The completion ring size is programmed by software to be larger than the descriptor ring size. To accommodate small packet workloads, the ratio between the ring sizes is at least (Buffer size/SMALL_PACKET_SIZE).
  • Referring to FIG. 22, a transmit DMA channel 1120 includes a single transmit descriptor ring 2210 holding buffer pointers for new packets to be transmitted. Each transmit DMA channel 1120 is associated via register programming with one of the MAC ports, or one trunk when link aggregation is used. Multiple DMA channels may be associated with a single MAC port. Transmit gather is supported, i.e., a packet may span an arbitrary number of buffers.
  • A transmit operation executes in open loop mode (i.e., with no interrupts) whenever possible. Complete descriptor removal is scheduled at the end of new packet queuing, or periodic interrupts are requested at enqueuing time; there is no need to generate an interrupt for every packet completion, or to service the transmit process in any form, for the transmit process to make progress.
  • For each transmit DMA channel 1120, the transmit DMA channel context includes a plurality of elements. More specifically, each transmit DMA channel context includes a transmit descriptor ring start pointer element; a transmit descriptor ring size element; a transmit descriptor ring head pointer element; a transmit kick register element; a transmit descriptor ring tail pointer element; a transmit completion register element; and, a transmit Polling register element (reflects descriptor ring queue depth, i.e. Distance between Head and Tail register values).
  • The descriptor structures defining the transmit DMA channels 1120 are very simple so that the descriptor structures can efficiently correspond to the DVMA structures without unnecessary input output memory management unit (IOMMU) thrashing for network interface units.
  • With the other embodiment of the integrated network interface unit 1300, the memory accesses proceed directly to a memory system 130 (after translating virtual addresses to physical addresses within the four port network interface unit) but without going through any bridge or IOMMU. Memory accesses proceeding directly to a memory system 130 provide lower latency and additional I/O bandwidth, as networking does not compete with any other I/O.
  • Another subtlety of direct memory interface in the integrated network interface unit 1300 is that memory accesses may complete in arbitrary order when considering multiple banks. A reorder function correlates DMA memory completions, and serializes some operations whenever necessary (either via descriptor update after DMA WR, or polling register update after DMA WR).
  • Referring to FIG. 23, a block diagram of the packet classification hierarchy is shown. The packet classification hierarchy which is provided by the packet classifier module 818 provides several receive packet classification primitives. These receive packet classification primitives include virtualization, traffic spreading, perfect ternary matches, and imperfect and perfect flow matching.
  • More specifically, the virtualization packet classification primitive determines the partition to be used for a given receive packet. Virtualization allows multiple partitions to co-exist within a given network interface unit 110, or even a given port within a network interface unit 110, while keeping strict separation of DMA channels and their corresponding processing resources. The shared parts of the network interface unit 110 are limited to the cable connected to the network interface unit 110, the MAC module 830, and the receive packet FIFOs 816 servicing the port. The cable, the MAC module 830 and the receive packet FIFOs 816 provide continuous packet service (i.e., no stalls or blocking). Virtualization can be based on VLANs, MAC addresses, or service addresses such as IP addresses or TCP/UDP ports. Virtualization essentially selects a group of receive DMA channels 1010 as the set of channels where a packet may end up regardless of all other traffic spreading and classification criteria.
  • The traffic spreading classification primitive is an efficient way of separating traffic statically into multiple queues. Traffic spreading classification preserves affinity as long as the parser is sophisticated enough to ignore all mutable header fields. The implementation of traffic spreading is based on pre-defined packet classes and a hash function applied over a programmable set of header fields. The hash function can be tweaked by programming its initial value. The traffic spreading function can consider or ignore the ingress port, enabling different or identical spreading patterns for different ports.
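  • The traffic spreading step can be illustrated with the short sketch below. The FNV-1a hash is a stand-in for the actual (unspecified) hash function, and the flow key is assumed to have already been assembled with mutable header fields masked out.
    /* Illustrative hash based traffic spreading into a 32-entry channel group. */
    #include <stddef.h>
    #include <stdint.h>

    static uint32_t spread_hash(const uint8_t *key, size_t len, uint32_t init)
    {
        uint32_t h = init;                      /* programmable initial value */
        for (size_t i = 0; i < len; i++) {
            h ^= key[i];
            h *= 16777619u;                     /* FNV-1a prime (stand-in hash) */
        }
        return h;
    }

    /* Select one of the 32 channels in the group chosen by virtualization. */
    static uint8_t spread_select(const uint8_t group_table[32],
                                 const uint8_t *flow_key, size_t key_len,
                                 uint32_t init)
    {
        return group_table[spread_hash(flow_key, key_len, init) & 31u];
    }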
  • The perfect ternary match classification primitive is the ultimate classification, where the packet can be associated with flows, or with wild-carded entries representing services, addresses, virtualized partitions, etc. The implementation of perfect match is based on a TCAM match, and is therefore limited in depth. The TCAM value is generally intended to match layer 3 and layer 4 fields for Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6), and also bind layer 2 virtualization to layers 3 and 4 by keying group numbers in addition to IP headers and transport headers.
  • The flow matching classification primitive is the association of packets to pre-inserted flows within a large hash table. The hash entries can be used for perfect or imperfect binary matches, where a perfect match consumes four times the space of an imperfect match. For imperfect matches there is, in general, a low but finite probability of a false match, and also of not being able to insert the desired flow for a specific packet. Flow matching is used for maintaining flow associations to DMA channels for a large number of connections (for example, for operating system style hardware classification) as well as for zero copy flows. The implementation of flow matching is based on hashing into the hash table 950. In the case of zero copy flows, regardless of the match type, the translation table stage again performs a full 5-tuple comparison, thus eliminating the risk of false matches. “Don't care” bits for flow matching are masked by a class filter before the hashing function, and are an attribute of the class rather than the individual entry.
  • Populating the hash table 950 is optional; software functions in scenarios where the hash table 950 is or is not populated. Furthermore, the hash table 950 is partitionable into a plurality of separate tables (e.g., four separate tables), so that separate partitions can manage their own flows or connections directly without having to serialize access or invoke hypervisor calls in flow setup.
  • There are a plurality of relationships between the various classification primitives. More specifically layer 2 virtualization results (MAC DA, VLAN) can be factored into the TCAM match via the Group # so that IP addresses/TCP/UDP ports are restricted to VLANs, ingress ports, and MAC addresses. Also, TCAM matches and flow matches are largely independent, except that the TCAM match virtualization determines which hash table partition to search. The TCAM match virtualization results in some serialization between the searches. The TCAM and flow matches are merged, allowing TCAM entries to override or defer to flow matches. The flow match key is not controllable by the TCAM match, and its construction and hash computation may be overlapped with the TCAM search. The ingress port is considered part of all matches and tables so that different policies can be applied across different ports. The flow match and the traffic spreading function use the same key into the hash function. Key masking and assembly is programmable.
  • The tables have various sizes and roles. For example, the MAC table virtualizes based on the MAC Address index provided by the MAC blocks (e.g., 4 bits) and the ingress port number (e.g., 2 bits). The output of the MAC table is a group # (e.g., 4 bits) and a MAC_Dominates signal to control how to merge this result with the VLAN table result. The VLAN table virtualizes based on VLAN IDs (e.g., 12 bits) and a VLAN_Dominates signal to control how to merge this result. The group tables include 16 sets of receive DMA channels grouped for virtualization. The receive DMA channels are programmed into one of the group tables. All 32 entries of a group table are filled with valid receive DMA channel numbers. Receive DMA channels are written more than once per group table if necessary to fill the table.
  • Both transmit and receive functions operate as store and forward in and out of the corresponding FIFO. There are fields stored with the packet FIFOs used for control purposes, and there are also dedicated control structures in the form of FIFOs.
  • Within the receive path, receive packet FIFOs arbitrate for DMA channel scheduling on packet boundaries. The packet at the head of a given receive packet FIFO determines the DMA channel number to use for the packet.
  • Translation table lookups represent the longest latency step of ingress processing. The pipeline design assumes that every packet goes through translation at ingress, and overlaps the translation with data flowing into the Receive packet FIFO.
  • Some receive control information is stored in the receive buffers along with the receive packets while other fields are deposited into the descriptors themselves. Information consumed by the driver goes to descriptors, and information needed above the driver stays in the buffer.
  • In addition, receive buffers accommodate a number of reserved locations per buffer to be used by software. The number is programmable per channel and up to 86 bytes. Receive packets using TCP re-assembly derive their DMA addresses from the translation result in the form of a pair of (address, length) pairs with arbitrary byte granularity.
  • Within the transmit path, there is one FIFO per MAC port. Packets are read from the head of the FIFO into the MAC port only when a full packet is ready (for checksum insertion purposes). Packets may be written in interleaved fashion into the transmit FIFO to accommodate out of order memory read completions. The transmit reorder module 852 produces the transmit FIFO address location for writing memory read (MEM RD) completions based on the transaction ID, address, byte count, and byte enables of the completion. A packet may require more than one request and therefore the packet may consume multiple transaction IDs. The transmit reorder module 852 handles as many transaction IDs as the number of pipelined MEM RD requests issued by the network interface unit 110. Completions are of arbitrary size up to Max_Payload_Size for the PCI-Express receive direction.
  • The transmit reorder module 852 therefore manages the re-assembly of completions at insertion time into Transmit FIFOs 850, and in the process of doing so enforces a network packet order per MAC/DMA channel that is identical to the memory read request order for the transmit DMA channel 812.
  • The memory read request order is derived from the packet descriptor order of each transmit DMA channel 1120, with the freedom to schedule across transmit DMA channels 1120 with no order constraints.
  • The transmit reorder module 852 also determines when a given packet is completely written into the transmit FIFO 850 by determining that all the packet requests are completely satisfied. For simplicity purposes the request order is enforced within a transmit FIFO 850 even for requests from different transmit DMA channels 1120.
  • TCP checksum insertion is performed by maintaining partial checksums per packet in the transmit reorder module 852 and using the additive property of the 1's complement checksum to overcome completion interleaving.
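  • The additive property referred to above can be sketched as follows. The helpers assume 16-bit aligned segments of even length; handling odd offsets (which requires byte swapping before adding) is omitted, and the function names are illustrative.
    /* Because 1's complement addition is commutative and associative, partial
     * sums computed per completion (in any arrival order) can simply be added. */
    #include <stddef.h>
    #include <stdint.h>

    static uint64_t csum_partial(const uint8_t *data, size_t len, uint64_t sum)
    {
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (uint64_t)data[i] << 8 | data[i + 1];
        return sum;
    }

    static uint16_t csum_fold(uint64_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xFFFFu) + (sum >> 16);  /* fold carries back in */
        return (uint16_t)~sum;
    }

    static uint16_t csum_combine(const uint64_t *partials, size_t n)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += partials[i];                 /* order of completions is irrelevant */
        return csum_fold(total);
    }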
  • For the integrated network interface unit 1300, the reorder module 852 is simplified because MEM RD completions are of fixed size, and possibly a smaller number of outstanding requests are pipelined.
  • The data buffering includes a plurality of discard policies. More specifically, the discard policy for a transmit operation is that there is no congestive discard in the transmit data path, because the four port network interface unit only requests from memory those packets that fit in the corresponding transmit FIFO.
  • The discard policy for a receive operation is that congestive discard occurs under several scenarios at the boundary between a receive FIFO module 816 and a receive DMA channel 1010. Accordingly, the receive FIFO module 816 is always serviced, be it by the receive DMA channel 1010 corresponding to the packet at the head of the receive FIFO module 816, or by discarding from the head of the receive FIFO module 816. Packets are never backpressured at the receive FIFO module 816. All discard operations are on packet boundaries.
  • There are a plurality of different scenarios that may trigger packet discard. More specifically, a DMA congestion scenario, where no buffer is posted to the descriptor ring at the time the packet is at the head of its receive FIFO module 816, may trigger packet discard. A DMA disabled scenario, where a receive DMA channel 1010 is disabled at the time the packet is at the head of its receive FIFO module 816, may trigger packet discard. A random early discard (RED) scenario, implemented per receive DMA channel 1010, may trigger packet discard when the queue length requires packet discard and the randomizer determines that the next packet is the victim. A classifier triggered scenario occurs when the packet classifier 818 indicates a packet is to be dropped; the packet is dropped from the head of the receive FIFO module 816, and the classification result carried by the receive control FIFO 862 includes the packet drop indication. A late discard scenario occurs in cases of congestion in the middle of the packet, or of packet malfunction (length or CRC based) signaled by the MAC at the end of a packet; packet discard is marked on the FIFO ingress side, possibly by rewriting the first receive packet FIFO 860 location with a special marker sequence. The design may also reclaim most of the offending packet's FIFO locations used so far by rewinding the ingress pointer.
  • Packet drop at the receive packet FIFO tail also occurs when the receive packet FIFO 860 fills. For example, for lookup congestion, if the packet classifier 818 fails to keep up with averaged packet rate (averaged by the receive packet FIFO depth), the receive control FIFO 862 is updated with results at a slower rate than the receive packet FIFO 860. Should the receive packet FIFO fill, the affected packet is dropped on the FIFO ingress side by reclaiming the locations used so far.
  • The hypervisor 312 adds a level of indirection to the physical address space by introducing real addresses. Real addresses are unique per partition, but only physical addresses are system unique. There are two types of hypervisor hooks associated with the address usage of network interface units. First, any slave access to network interface unit registers intended to be directly manipulated by software in a partition, without coordination by the hypervisor 312 (or equivalent), is grouped into pages that the network system memory management unit can map separately. Second, any DMA access originated by a network interface unit applies an address relocation mapping based on a per partition offset and range limit. The offset and limit values are programmable through yet another partition, different from the partition that posts addresses to the DMA channel.
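  • A minimal sketch of the per partition relocation check follows. The structure and field names are illustrative, and treating the limit as the size of the partition's addressable range is an assumption made for the example.
    /* Illustrative per partition DMA address relocation and range check. */
    #include <stdbool.h>
    #include <stdint.h>

    struct partition_map {
        uint64_t offset;   /* added to the partition's real addresses       */
        uint64_t limit;    /* size of the partition's addressable range     */
    };

    /* Returns true and writes the physical address if the access is in range. */
    static bool relocate_dma_addr(const struct partition_map *m,
                                  uint64_t real_addr, uint64_t len,
                                  uint64_t *phys_addr)
    {
        if (len == 0 || len > m->limit || real_addr > m->limit - len)
            return false;             /* out of range: reject the access    */
        *phys_addr = m->offset + real_addr;
        return true;
    }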
  • The level of indirection can be used in a hypervisor environment to achieve full partition isolation. This level of indirection can also be used in non-partitioned environments to avoid having to serialize access to shared resources in the data path. Providing a level of indirection is valuable to enable scalable performance.
  • The network interface unit 110 includes a plurality of register groups. These register groups include a MAC/PCS register group, a classification register group, a virtualized register group, transmit and receive DMA register groups, a PCI configuration space register group, an interrupt status and control register group, a partition control register group, and an additional control register group.
  • The register structure and event definition relies on separating datapath interrupt events so that the events can be mapped univocally to strands or processors, regardless of whether the processors enable interrupts, poll, or yield on an event register load.
  • The actual event signaling for network interface units 110 is based on interrupt messages (MSIs) to different addresses per target. In the integrated network interface unit, the event signaling is done towards a set of interrupt registers placed close to the processor core.
  • Network System Software Stack
  • Referring again to FIG. 4, the interface unit device driver 420 assists an operating system 430 with throughput, connection setup and teardown. While higher bandwidth data rates may saturate the network stacks on a single processor, the network system helps to achieve throughput networking by distributing the processing.
  • The network system device driver 420 programs the packet classifier 818 for identification of flows or connections to the appropriate processor entities 120. The network interface unit packet classifier 818 is programmed to place well defined flows on the appropriate DMA channel.
  • A model of a flow can occur in a single stage or in multiple stages, so that different processing entities 120 can service different receive channels. A single stage is when a packet is received, is classified as a flow, and is sent to the software stack for processing without further context switching. Multiple stages are when packets which are classified as flows are queued, and then some other thread or operating system entity is informed to process the packets at some other time.
  • The operating system 430 creates a queue instance for each processor plus a thread with affinity to that processor entity 120. By providing flow affinity to a processor entity 120, packet ordering is maintained on receive flows. Also, maintaining affinity of receive and transmit packets that belong to the same connection enables better network system performance by providing the same context, avoiding processor cross-calls and keeping the caches “warm”.
  • The network system software stack 410 migrates flows to ensure that receive and transmit affinity is maintained. More specifically, the network system software stack 410 migrates receive flows by programming the flow tables. The network system software stack 410 migrates transmit flows by computing the same hash value for a transmit flow as the network interface unit 110 does.
  • The connection to a processor affinity is controlled by the operating system 430, with a network interface unit 110 and the network interface unit device driver 420 following suit. There are at least two alternatives for controlling the affinity. In one alternative, the operating system 430 presently associates each flow with the processing entity 120 that creates the flow either at “open” or at “accept” time. In this case, the flow to DMA channel mapping of a connection is passed to the network interface unit 110 and associated network system software and stored in the hash tables 950 for use by the receive packet classifier 818. The other alternative is based on a general fanout technique defined by the operating system 430 and does not use a flow table entry. The network interface unit device driver 420 can be a multi-threaded driver with single thread access to data structures.
  • The network system software stack 410 exploits the capabilities of the network interface unit 110. The packet classifier 818 is optionally programmed to take into account the ingress port and VLAN tag of the packet. This programming allows multiple network interface units 110 to be under the network system software stack 410.
  • Referring to FIG. 24, a flow diagram of a receive flow between a network interface unit and a network system software stack 410 is shown. When the device driver 420 is functioning on the receive side with multiple processor receives, the network interface unit 110 is programmed to provide hash based receive packets spreading which sends different IP packets to different DMA channels. The network interface unit packet header parsing uses source and destination IP addresses, and the TCP port numbers, (e.g., TCP 5-tuples). These fields along with the port and VLAN uniquely identify a flow. Hashing is one of many ways to spread load.
  • When the network interface unit 110 is functioning in an interrupt model, receipt of a packet generates an interrupt, subject to interrupt coalescing criteria. Interrupts are used to indicate to a processor entity 120 that there are packets ready for processing. In the polling mechanism, reads across the I/O bus 112 are performed to determine whether there are packets to be processed.
  • The network interface unit 110 includes two modes for processing the received packets: a standard interrupt based mode controlled via the device driver 420, and a second polled based mode controlled by the upper layer protocol (ULP). The ULP (in this case the operating system 430) exploits the appropriate mode to meet certain performance goals. Flows that have been classified as exact matches by the combination of the network interface unit packet classifier 818 and the device driver 420 are sent directly to the operating system 430 within the receive interrupt context, or queued and pulled via polled queue threads. In either case, the network interface unit packet classifier 818 helps map particular flows to the same processing entity 120.
  • An interrupt coalescing feature per receive descriptor can provide multiple packet processing and chaining. In the interrupt model, the device driver 420 registers the interrupt service routine with the operating system 430, which then tries to spread the processing to different processing entities 120. The device driver 420 configures the network interface unit 110 to exploit the DMA channels, translation table, buffer management, and the packet classifier.
  • In the polled mode, the queue thread or another thread pulls packets out of the receive queue. The polled mode includes interfaces between the ULP and the network interface unit 110.
  • The interface to the network interface unit device driver 420 is via either a device driver specific interface or via an operating system framework.
  • For packets which are not classified appropriately, the device driver 420 uses a standard operating system interface.
  • The network interface unit 110 places a number of packets into each page sized buffer by dividing the buffer into multiple packet buffers. Depending on packet size distribution, buffers may be returned in a different order than they were placed on the descriptor ring. Descriptor and completion ring processing is handled in the interrupt handler or invoked from the thread model.
  • Referring to FIG. 25, a flow diagram of a transmit flow between a network interface unit and a network system software stack 410 is shown. When the device driver 420 is functioning at the transmit side, the device driver 420 provides one of two approaches, an IP queue fanout approach and a hash table approach.
  • The IP queue fanout approach uses a fanout element to potentially help provide better affinity between transmit and receive side flow processing. If a network function uses the same hash as the network interface unit packet classifier 818, then the operating system 430 distributes “open” or “accept” connections to the same queue as the network interface unit packet classifier 818.
  • The fanout approach provides processor affinity to flows/connections without the hash table. All incoming flows classified by the network interface unit packet classifier 818 come to the operating system 430 on the same processing entity 120. So, the accept connection function uses the same queue and the “open” connection function uses the hash algorithm to fan the packet out to the right queue. Thus, the queue fanout approach enables the network interface unit device driver 420 and the operating system 430 to exploit the affinity of a flow/connection to a particular processing entity 120.
  • The hash table approach uses a mechanism for load balancing the IP packets to the appropriate processing entity 120 based on transmit affinity. If the operating system 430 wants to drive the affinity from a transmit perspective, then the operating system 430 exploits the hash table interface provided by the network interface unit 110. The application sourcing data running on a particular processing entity 120 (e.g., CPU#n) results in the network interface unit device driver 420 programming the hash table 950 so that received packets for that flow are sent to the particular processing entity 120 (e.g., CPU#n). The hash table 950 provides the capabilities to manage a large number (e.g., four million) of flows. Each entry in the hash table 950 allows a flow to have a well defined processing entity 120 plus some pointer, e.g., a pointer to the connection structure.
  • The hash table approach provides interfaces which are defined between the operating system 430 and the device driver 420 to program the hash table 950. Before sending out a TCP SYN packet for active open or before sending TCP SYN ACK or TCP ACK, the entries in the hash table 950 are updated according to the processing entity 120 on which the connection is being initiated or terminated as the case may be. Updating the hash table allows subsequent packets for that flow to come to the same processing entity 120. The entries in the flow are inserted before the packet is sent on the wire (i.e., sent onto the network).
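  • As a sketch of this interface, the fragment below shows a driver call that might be used to bind a 5-tuple to a DMA channel (and hence to a processing entity) before the connection's first packet is placed on the wire. The niu_flow_insert entry point and the structures are hypothetical; the description does not define the actual API.
    /* Hypothetical flow table programming interface, IPv4 shown for brevity. */
    #include <stdint.h>

    struct flow_key_5t {
        uint32_t src_ip, dst_ip;      /* IPv4 addresses for this example */
        uint16_t src_port, dst_port;
        uint8_t  protocol;            /* e.g. TCP                        */
    };

    struct flow_entry {
        struct flow_key_5t key;
        uint8_t  rx_dma_channel;      /* channel serviced by the target CPU       */
        uint64_t cookie;              /* e.g. pointer to the connection structure */
    };

    /* Hypothetical driver entry point that programs the hash table 950. */
    extern int niu_flow_insert(const struct flow_entry *fe);

    static int bind_flow_to_cpu(const struct flow_key_5t *key,
                                uint8_t rx_dma_channel, uint64_t conn_ptr)
    {
        struct flow_entry fe = { *key, rx_dma_channel, conn_ptr };
        return niu_flow_insert(&fe);  /* must complete before the SYN is sent */
    }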
  • One feature of the network interface unit 110 on the transmit side is the support for multiple transmit descriptor rings per port, allowing multiple threads to send packets concurrently to the same port and even use some of the queues for qualities of service (QOS) for outbound traffic. A transmit descriptor ring is associated with a particular VLAN during the configuration of the network interface unit 110. The network interface unit 110 ensures that a given flow is always associated with the same transmit descriptor ring.
  • There are two approaches for sending a flow to a given port, a device driver approach and an operating system defined approach. With the device driver approach, the device driver 420 controls the fanning out of the flows to a given transmit descriptor. With the operating system defined approach, an API is defined which allows informing the device driver 420 of which transmit descriptor to use. With either approach, the same flow always uses the same descriptor. Thus, multiple flows can come concurrently into the device driver 420 on different transmit descriptors.
  • The device driver 420 performs the spreading of the flows that come down from the operating system 430. The device driver 420 includes a map identifying which transmit queues are tied to which physical ports. The device driver approach identifies the transmit descriptor by a hashing algorithm and distributes the packets to different descriptors that are tied to the same port. The attachment on which the packet comes to the device driver 420, an operating system parameter, is used to identify the port. Flow control is defined for the operating system programming interface. If all transmit descriptors that are tied to the given port are locked, then the device driver 420 informs the operating system 430 to queue the packets in its own queue. This helps in alleviating the lock contention issue associated with a multiprocessing environment.
  • Thus, because multiple flows can be transmitted on the same port, all transmit descriptors associated with that port could be busy. The locks are mainly for preventing the descriptor entries from being used by two separate threads and are desirable to be held for a very short duration.
  • If the operating system 430 wants to fan out the packets to different descriptors then the operating system 430 has to ensure that the same flow always uses the same transmit descriptor. The operating system 430 provides the port and the appropriate transmit descriptor over which the flow needs to go. The operating system API also adheres to the flow control push back from the device driver 420 in case the transmit descriptors are already in use.
  • Other Embodiments
  • The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only, and are not exhaustive of the scope of the invention.
  • For example, while particular architectures are set forth with respect to the network system and the network interface unit, it will be appreciated that variations within these architectures are within the scope of the present invention. Also, while particular packet flow descriptions are set forth, it will be appreciated that variations within the packet flow are within the scope of the present invention.
  • Also for example, the above-discussed embodiments include modules and units that perform certain tasks. The modules and units discussed herein may include hardware modules or software modules. The hardware modules may be implemented within custom circuitry or via some form of programmable logic device. The software modules may include script, batch, or other executable files. The modules may be stored on a machine-readable or computer-readable storage medium such as a disk drive. Storage devices used for storing software modules in accordance with an embodiment of the invention may be magnetic floppy disks, hard disks, or optical discs such as CD-ROMs or CD-Rs, for example. A storage device used for storing firmware or hardware modules in accordance with an embodiment of the invention may also include a semiconductor-based memory, which may be permanently, removably or remotely coupled to a microprocessor/memory system. Thus, the modules may be stored within a computer system memory to configure the computer system to perform the functions of the module. Other new and various types of computer-readable storage media may be used to store the modules discussed herein. Additionally, those skilled in the art will recognize that the separation of functionality into modules and units is for illustrative purposes. Alternative embodiments may merge the functionality of multiple modules or units into a single module or unit or may impose an alternate decomposition of functionality of modules or units. For example, a software module for calling sub-modules may be decomposed so that each sub-module performs its function and passes control directly to another sub-module.
  • Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.

Claims (20)

1. A method for addressing system latency within a network system comprising:
providing a network interface, the network interface including a plurality of memory access channels; and,
moving data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
2. The method of claim 1 wherein:
the network system allows relaxed ordering when internally moving data between the network interface and the memory system.
3. The method of claim 1 wherein:
a memory access channel includes dedicated queuing, control and buffering to move data while preserving ordering between a processing entity and the network interface.
4. The method of claim 3 wherein:
the data includes packets of information; and
multiple packets of information are sent at a time for a particular memory access channel.
5. The method of claim 2 further comprising:
selectively enforcing internal transaction ordering for some transactions within a memory access channel while allowing relaxed ordering for other transactions as necessary.
6. The method of claim 1 wherein:
the plurality of memory access channels include a plurality of receive memory access channels dedicated to moving data between the network interface and the memory system.
7. The method of claim 6 wherein:
each of the plurality of receive memory access channels includes a receive descriptor ring.
8. The method of claim 7 wherein:
each of the plurality of receive memory access channels includes a receive completion ring.
9. The method of claim 1 wherein:
the plurality of memory access channels include a plurality of transmit memory access channels dedicated to moving data between the memory system and the network interface.
10. The method of claim 9 wherein:
the plurality of transmit memory access channels include transmit descriptor rings.
11. A network system comprising:
a plurality of processing entities;
a memory system coupled to the plurality of processing entities;
a network interface coupled to the plurality of processing entities and the memory system, the network interface including a plurality of memory access channels, the network interface moving data within each of the plurality of memory access channels independently and in parallel to and from the memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
12. The network system of claim 11 wherein:
the network system allows relaxed ordering when internally moving data between the network interface and the memory system.
13. The network system of claim 11 wherein:
a memory access channel includes dedicated queuing, control and buffering to move data while preserving ordering between a processing entity and the network interface.
14. The network system of claim 13 wherein:
the data includes packets of information; and
multiple packets of information are sent at a time for a particular memory access channel.
15. The network system of claim 12 further comprising:
selectively enforcing internal transaction ordering for some transactions within a memory access channel while allowing relaxed ordering for other transactions as necessary.
16. The network system of claim 11 wherein:
the plurality of memory access channels include a plurality of receive memory access channels dedicated to moving data between the network interface and the memory system.
17. The network system of claim 16 wherein:
each of the plurality of receive memory access channels includes a receive descriptor ring.
18. The network system of claim 17 wherein:
each of the plurality of receive memory access channels includes a receive completion ring.
19. The network system of claim 11 wherein:
the plurality of memory access channels include a plurality of transmit memory access channels dedicated to moving data between the memory system and the network interface.
20. The network system of claim 19 wherein:
the plurality of transmit memory access channels include transmit descriptor rings.
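
The following C sketch is not part of the patent text; it is an editor-added illustration, and every name in it (rx_dma_channel, rx_desc, rx_completion, RX_RING_ENTRIES, the functions and the buffer addresses) is hypothetical. It is a minimal sketch, assuming a ring-buffer realization, of the structure the claims above describe: several receive memory access channels, each with its own receive descriptor ring and receive completion ring, so that buffers can be posted and completions drained on one channel independently of, and in parallel with, the others, which is what lets the device tolerate arbitrary memory latencies across multiple outstanding requests.

    /*
     * Illustrative sketch only -- not taken from the patent. All identifiers
     * are hypothetical. One rx_dma_channel models one independent receive
     * memory access channel with dedicated descriptor and completion rings.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define RX_RING_ENTRIES 256              /* assumed ring depth */

    struct rx_desc {                         /* host-posted receive descriptor */
        uint64_t buf_addr;                   /* DMA address of a receive buffer */
        uint32_t buf_len;                    /* size of that buffer in bytes */
        uint32_t flags;
    };

    struct rx_completion {                   /* device-posted completion entry */
        uint64_t buf_addr;                   /* buffer the packet landed in */
        uint32_t pkt_len;                    /* received packet length */
        uint32_t status;                     /* e.g. checksum/classification result */
    };

    struct rx_dma_channel {                  /* one independent memory access channel */
        struct rx_desc       desc_ring[RX_RING_ENTRIES];  /* receive descriptor ring */
        struct rx_completion comp_ring[RX_RING_ENTRIES];  /* receive completion ring */
        uint32_t desc_head, desc_tail;       /* producer/consumer indices, descriptors */
        uint32_t comp_head, comp_tail;       /* producer/consumer indices, completions */
    };

    /* Post a free buffer on one channel; other channels are untouched, so a
     * slow memory response on this channel never stalls their progress. */
    static int rx_channel_post_buffer(struct rx_dma_channel *ch,
                                      uint64_t dma_addr, uint32_t len)
    {
        uint32_t next = (ch->desc_head + 1) % RX_RING_ENTRIES;

        if (next == ch->desc_tail)
            return -1;                       /* ring full: caller retries later */

        ch->desc_ring[ch->desc_head].buf_addr = dma_addr;
        ch->desc_ring[ch->desc_head].buf_len  = len;
        ch->desc_ring[ch->desc_head].flags    = 0;
        ch->desc_head = next;                /* real hardware would follow with a doorbell write */
        return 0;
    }

    /* Drain completions for one channel only; each processing entity can poll
     * its own channel in parallel with the others. */
    static int rx_channel_poll(struct rx_dma_channel *ch, struct rx_completion *out)
    {
        if (ch->comp_tail == ch->comp_head)
            return 0;                        /* nothing completed yet on this channel */

        *out = ch->comp_ring[ch->comp_tail];
        ch->comp_tail = (ch->comp_tail + 1) % RX_RING_ENTRIES;
        return 1;
    }

    int main(void)
    {
        static struct rx_dma_channel channels[4];   /* four independent channels */
        struct rx_completion done;

        /* Post a (fake) buffer address on channel 0 only. */
        rx_channel_post_buffer(&channels[0], 0x100000, 2048);

        /* Channel 1 has no completions and is unaffected by activity on channel 0. */
        printf("channel 1 completions: %d\n", rx_channel_poll(&channels[1], &done));
        return 0;
    }

In a driver organized along these lines, one such channel would typically be dedicated to each processing entity, so a stalled or reordered memory access on one channel never blocks buffer posting or packet delivery on another.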
US11/098,245 2005-04-04 2005-04-04 Hiding system latencies in a throughput networking system Active 2029-02-25 US7987306B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/098,245 US7987306B2 (en) 2005-04-04 2005-04-04 Hiding system latencies in a throughput networking system
US13/008,092 US8006016B2 (en) 2005-04-04 2011-01-18 Hiding system latencies in a throughput networking systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/098,245 US7987306B2 (en) 2005-04-04 2005-04-04 Hiding system latencies in a throughput networking system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/008,092 Continuation US8006016B2 (en) 2005-04-04 2011-01-18 Hiding system latencies in a throughput networking systems

Publications (2)

Publication Number Publication Date
US20060221990A1 true US20060221990A1 (en) 2006-10-05
US7987306B2 US7987306B2 (en) 2011-07-26

Family

ID=37070393

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/098,245 Active 2029-02-25 US7987306B2 (en) 2005-04-04 2005-04-04 Hiding system latencies in a throughput networking system
US13/008,092 Active US8006016B2 (en) 2005-04-04 2011-01-18 Hiding system latencies in a throughput networking systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/008,092 Active US8006016B2 (en) 2005-04-04 2011-01-18 Hiding system latencies in a throughput networking systems

Country Status (1)

Country Link
US (2) US7987306B2 (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070064737A1 (en) * 2005-09-07 2007-03-22 Emulex Design & Manufacturing Corporation Receive coalescing and automatic acknowledge in network interface controller
US20070186212A1 (en) * 2006-01-24 2007-08-09 Citrix Systems, Inc. Methods and systems for providing access to a computing environment
US20070299863A1 (en) * 2006-04-27 2007-12-27 Qualcomm Incorporated Portable object serialization
US20080005257A1 (en) * 2006-06-29 2008-01-03 Kestrelink Corporation Dual processor based digital media player architecture with network support
US20080130641A1 (en) * 2006-12-05 2008-06-05 Electronics And Telecommunications Research Institute METHOD FOR CLASSIFYING DOWNSTREAM PACKET IN CABLE MODEM TERMINATION SYSTEM AT HEAD-END SUPPORTING CHANNEL BONDING MODE, AND cable modem termination system
US20080144502A1 (en) * 2006-12-19 2008-06-19 Deterministic Networks, Inc. In-Band Quality-of-Service Signaling to Endpoints that Enforce Traffic Policies at Traffic Sources Using Policy Messages Piggybacked onto DiffServ Bits
US20080155571A1 (en) * 2006-12-21 2008-06-26 Yuval Kenan Method and System for Host Software Concurrent Processing of a Network Connection Using Multiple Central Processing Units
US20090248845A1 (en) * 2008-03-31 2009-10-01 Waltermann Rod D Network bandwidth control for network storage
US20100067539A1 (en) * 2008-09-12 2010-03-18 Realtek Semiconductor Corp. Single Network Interface Circuit with Multiple-Ports and Method Thereof
US7853679B2 (en) 2007-03-12 2010-12-14 Citrix Systems, Inc. Systems and methods for configuring handling of undefined policy events
US7853678B2 (en) 2007-03-12 2010-12-14 Citrix Systems, Inc. Systems and methods for configuring flow control of policy expressions
US7865589B2 (en) 2007-03-12 2011-01-04 Citrix Systems, Inc. Systems and methods for providing structured policy expressions to represent unstructured data in a network appliance
US7870277B2 (en) 2007-03-12 2011-01-11 Citrix Systems, Inc. Systems and methods for using object oriented expressions to configure application security policies
US7924884B2 (en) 2005-12-20 2011-04-12 Citrix Systems, Inc. Performance logging using relative differentials and skip recording
WO2011115844A2 (en) 2010-03-16 2011-09-22 Microsoft Corporation Shaping virtual machine communication traffic
US20110286457A1 (en) * 2010-05-24 2011-11-24 Cheng Tien Ee Methods and apparatus to route control packets based on address partitioning
US8116207B2 (en) 2006-08-21 2012-02-14 Citrix Systems, Inc. Systems and methods for weighted monitoring of network services
US8341287B2 (en) 2007-03-12 2012-12-25 Citrix Systems, Inc. Systems and methods for configuring policy bank invocations
US20130086183A1 (en) * 2011-09-30 2013-04-04 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
US20140003436A1 (en) * 2012-06-27 2014-01-02 Futurewei Technologies, Inc. Internet Protocol and Ethernet Lookup Via a Unified Hashed Trie
US8687638B2 (en) 2008-07-10 2014-04-01 At&T Intellectual Property I, L.P. Methods and apparatus to distribute network IP traffic
US8699484B2 (en) 2010-05-24 2014-04-15 At&T Intellectual Property I, L.P. Methods and apparatus to route packets in a network
US8908700B2 (en) 2007-09-07 2014-12-09 Citrix Systems, Inc. Systems and methods for bridging a WAN accelerator with a security gateway
US9160768B2 (en) 2007-03-12 2015-10-13 Citrix Systems, Inc. Systems and methods for managing application security profiles
CN105264837A (en) * 2013-12-05 2016-01-20 华为技术有限公司 Data packet transmission system, transmission method and device thereof
US9269439B1 (en) * 2012-08-31 2016-02-23 Marvell Israel (M.I.S.L) Ltd. Method and apparatus for TCAM based look-up
US10223307B2 (en) 2017-06-15 2019-03-05 International Business Machines Corporation Management of data transaction from I/O devices
US20190173810A1 (en) * 2017-12-06 2019-06-06 Mellanox Technologies Tlv Ltd. Packet scheduling in a switch for reducing cache-miss rate at a destination network node
US10796029B2 (en) 2017-11-30 2020-10-06 International Business Machines Corporation Software controlled port locking mechanisms
US11119968B2 (en) * 2018-08-07 2021-09-14 Dell Products L.P. Increasing cache hits for USB request blocks that target a redirected USB device
US11855898B1 (en) * 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8103993B2 (en) * 2006-05-24 2012-01-24 International Business Machines Corporation Structure for dynamically allocating lanes to a plurality of PCI express connectors
US8793361B1 (en) * 2006-06-30 2014-07-29 Blue Coat Systems, Inc. Traffic synchronization across multiple devices in wide area network topologies
US7769942B2 (en) * 2006-07-27 2010-08-03 Rambus, Inc. Cross-threaded memory system
JP2009048298A (en) * 2007-08-15 2009-03-05 Sony Corp Information processor, information processing method, program for implementing this information processing method, medium for recording this program, dma controller, dma transfer method, program for implementing this dma transfer method, and medium for recording this program
US7916728B1 (en) 2007-09-28 2011-03-29 F5 Networks, Inc. Lockless atomic table update
US8306036B1 (en) 2008-06-20 2012-11-06 F5 Networks, Inc. Methods and systems for hierarchical resource allocation through bookmark allocation
US20100208729A1 (en) * 2008-10-17 2010-08-19 John Oddie Method and System for Receiving Market Data Across Multiple Markets and Accelerating the Execution of Orders
US8447884B1 (en) 2008-12-01 2013-05-21 F5 Networks, Inc. Methods for mapping virtual addresses to physical addresses in a network device and systems thereof
US8112491B1 (en) 2009-01-16 2012-02-07 F5 Networks, Inc. Methods and systems for providing direct DMA
US8880696B1 (en) 2009-01-16 2014-11-04 F5 Networks, Inc. Methods for sharing bandwidth across a packetized bus and systems thereof
US8103809B1 (en) * 2009-01-16 2012-01-24 F5 Networks, Inc. Network devices with multiple direct memory access channels and methods thereof
US9152483B2 (en) 2009-01-16 2015-10-06 F5 Networks, Inc. Network devices with multiple fully isolated and independently resettable direct memory access channels and methods thereof
US8880632B1 (en) 2009-01-16 2014-11-04 F5 Networks, Inc. Method and apparatus for performing multiple DMA channel based network quality of service
US8621159B2 (en) 2009-02-11 2013-12-31 Rambus Inc. Shared access memory scheme
US9313047B2 (en) 2009-11-06 2016-04-12 F5 Networks, Inc. Handling high throughput and low latency network data packets in a traffic management device
US9612775B1 (en) 2009-12-30 2017-04-04 Micron Technology, Inc. Solid state drive controller
KR101841173B1 (en) * 2010-12-17 2018-03-23 삼성전자주식회사 Device and Method for Memory Interleaving based on a reorder buffer
US10135831B2 (en) 2011-01-28 2018-11-20 F5 Networks, Inc. System and method for combining an access control system with a traffic management system
US9036822B1 (en) 2012-02-15 2015-05-19 F5 Networks, Inc. Methods for managing user information and devices thereof
US10033837B1 (en) 2012-09-29 2018-07-24 F5 Networks, Inc. System and method for utilizing a data reducing module for dictionary compression of encoded data
US9270602B1 (en) 2012-12-31 2016-02-23 F5 Networks, Inc. Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US9864606B2 (en) 2013-09-05 2018-01-09 F5 Networks, Inc. Methods for configurable hardware logic device reloading and devices thereof
EP3085051A1 (en) 2013-12-16 2016-10-26 F5 Networks, Inc Methods for facilitating improved user authentication using persistent data and devices thereof
US9696920B2 (en) 2014-06-02 2017-07-04 Micron Technology, Inc. Systems and methods for improving efficiencies of a memory system
US10015143B1 (en) 2014-06-05 2018-07-03 F5 Networks, Inc. Methods for securing one or more license entitlement grants and devices thereof
WO2016003646A1 (en) * 2014-06-30 2016-01-07 Unisys Corporation Enterprise management for secure network communications over ipsec
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US10218647B2 (en) * 2015-12-07 2019-02-26 Intel Corporation Mechanism to support multiple-writer/multiple-reader concurrency for software flow/packet classification on general purpose multi-core systems
US10110707B2 (en) * 2015-12-11 2018-10-23 International Business Machines Corporation Chaining virtual network function services via remote memory sharing
US10372470B2 (en) 2016-07-27 2019-08-06 Hewlett Packard Enterprise Development Lp Copy of memory information from a guest transmit descriptor from a free pool and assigned an intermediate state to a tracking data structure
US10462059B2 (en) 2016-10-19 2019-10-29 Intel Corporation Hash table entries insertion method and apparatus using virtual buckets
US10310811B2 (en) 2017-03-31 2019-06-04 Hewlett Packard Enterprise Development Lp Transitioning a buffer to be accessed exclusively by a driver layer for writing immediate data stream
US10972453B1 (en) 2017-05-03 2021-04-06 F5 Networks, Inc. Methods for token refreshment based on single sign-on (SSO) for federated identity environments and devices thereof
US10353833B2 (en) 2017-07-11 2019-07-16 International Business Machines Corporation Configurable ordering controller for coupling transactions
US11537716B1 (en) 2018-11-13 2022-12-27 F5, Inc. Methods for detecting changes to a firmware and devices thereof

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5909686A (en) * 1997-06-30 1999-06-01 Sun Microsystems, Inc. Hardware-assisted central processing unit access to a forwarding database
US5920566A (en) * 1997-06-30 1999-07-06 Sun Microsystems, Inc. Routing in a multi-layer distributed network element
US5923847A (en) * 1996-07-02 1999-07-13 Sun Microsystems, Inc. Split-SMP computer system configured to operate in a protected mode having repeater which inhibits transaction to local address partition
US5938736A (en) * 1997-06-30 1999-08-17 Sun Microsystems, Inc. Search engine architecture for a high performance multi-layer switch element
US5940401A (en) * 1997-01-10 1999-08-17 Sun Microsystems, Inc. Carrier extension for gigabit/second ethernet networks operable up to at least 200 m distances
US6014380A (en) * 1997-06-30 2000-01-11 Sun Microsystems, Inc. Mechanism for packet field replacement in a multi-layer distributed network element
US6016310A (en) * 1997-06-30 2000-01-18 Sun Microsystems, Inc. Trunking support in a high performance network device
US6021132A (en) * 1997-06-30 2000-02-01 Sun Microsystems, Inc. Shared memory management in a switched network element
US6049528A (en) * 1997-06-30 2000-04-11 Sun Microsystems, Inc. Trunking ethernet-compatible networks
US6081512A (en) * 1997-06-30 2000-06-27 Sun Microsystems, Inc. Spanning tree support in a high performance network device
US6081522A (en) * 1997-06-30 2000-06-27 Sun Microsystems, Inc. System and method for a multi-layer network element
US6088356A (en) * 1997-06-30 2000-07-11 Sun Microsystems, Inc. System and method for a multi-layer network element
US6115378A (en) * 1997-06-30 2000-09-05 Sun Microsystems, Inc. Multi-layer distributed network element
US6128666A (en) * 1997-06-30 2000-10-03 Sun Microsystems, Inc. Distributed VLAN mechanism for packet field replacement in a multi-layered switched network element using a control field/signal for indicating modification of a packet with a database search engine
US6246680B1 (en) * 1997-06-30 2001-06-12 Sun Microsystems, Inc. Highly integrated multi-layer switch element architecture
US6587866B1 (en) * 2000-01-10 2003-07-01 Sun Microsystems, Inc. Method for distributing packets to server nodes using network client affinity and packet distribution table
US6591303B1 (en) * 1997-03-07 2003-07-08 Sun Microsystems, Inc. Method and apparatus for parallel trunking of interfaces to increase transfer bandwidth
US6633946B1 (en) * 1999-09-28 2003-10-14 Sun Microsystems, Inc. Flexible switch-based I/O system interconnect
US6667980B1 (en) * 1999-10-21 2003-12-23 Sun Microsystems, Inc. Method and apparatus for providing scalable services using a packet distribution table
US6735206B1 (en) * 2000-01-10 2004-05-11 Sun Microsystems, Inc. Method and apparatus for performing a fast service lookup in cluster networking
US20040098496A1 (en) * 1999-12-28 2004-05-20 Intel Corporation, A California Corporation Thread signaling in multi-threaded network processor
US7047372B2 (en) * 2003-04-15 2006-05-16 Newisys, Inc. Managing I/O accesses in multiprocessor systems
US7099986B2 (en) * 1998-09-03 2006-08-29 Hewlett-Packard Development Company, L.P. High speed peripheral interconnect apparatus, method and system
US7152128B2 (en) * 2001-08-24 2006-12-19 Intel Corporation General input/output architecture, protocol and related methods to manage data integrity

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6314501B1 (en) 1998-07-23 2001-11-06 Unisys Corporation Computer system and method for operating multiple operating systems in different partitions of the computer system and for allowing the different partitions to communicate with one another through shared memory
CN1488104A 2001-01-31 2004-04-07 国际商业机器公司 Method and apparatus for controlling flow of data between data processing systems via memory
US6757795B2 (en) 2001-04-03 2004-06-29 International Business Machines Corporation Apparatus and method for efficiently sharing memory bandwidth in a network processor

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5923847A (en) * 1996-07-02 1999-07-13 Sun Microsystems, Inc. Split-SMP computer system configured to operate in a protected mode having repeater which inhibits transaction to local address partition
US5940401A (en) * 1997-01-10 1999-08-17 Sun Microsystems, Inc. Carrier extension for gigabit/second ethernet networks operable up to at least 200 m distances
US6591303B1 (en) * 1997-03-07 2003-07-08 Sun Microsystems, Inc. Method and apparatus for parallel trunking of interfaces to increase transfer bandwidth
US6115378A (en) * 1997-06-30 2000-09-05 Sun Microsystems, Inc. Multi-layer distributed network element
US6246680B1 (en) * 1997-06-30 2001-06-12 Sun Microsystems, Inc. Highly integrated multi-layer switch element architecture
US6014380A (en) * 1997-06-30 2000-01-11 Sun Microsystems, Inc. Mechanism for packet field replacement in a multi-layer distributed network element
US6016310A (en) * 1997-06-30 2000-01-18 Sun Microsystems, Inc. Trunking support in a high performance network device
US6021132A (en) * 1997-06-30 2000-02-01 Sun Microsystems, Inc. Shared memory management in a switched network element
US6049528A (en) * 1997-06-30 2000-04-11 Sun Microsystems, Inc. Trunking ethernet-compatible networks
US6081512A (en) * 1997-06-30 2000-06-27 Sun Microsystems, Inc. Spanning tree support in a high performance network device
US6081522A (en) * 1997-06-30 2000-06-27 Sun Microsystems, Inc. System and method for a multi-layer network element
US6088356A (en) * 1997-06-30 2000-07-11 Sun Microsystems, Inc. System and method for a multi-layer network element
US5909686A (en) * 1997-06-30 1999-06-01 Sun Microsystems, Inc. Hardware-assisted central processing unit access to a forwarding database
US6128666A (en) * 1997-06-30 2000-10-03 Sun Microsystems, Inc. Distributed VLAN mechanism for packet field replacement in a multi-layered switched network element using a control field/signal for indicating modification of a packet with a database search engine
US5938736A (en) * 1997-06-30 1999-08-17 Sun Microsystems, Inc. Search engine architecture for a high performance multi-layer switch element
US5920566A (en) * 1997-06-30 1999-07-06 Sun Microsystems, Inc. Routing in a multi-layer distributed network element
US7099986B2 (en) * 1998-09-03 2006-08-29 Hewlett-Packard Development Company, L.P. High speed peripheral interconnect apparatus, method and system
US6633946B1 (en) * 1999-09-28 2003-10-14 Sun Microsystems, Inc. Flexible switch-based I/O system interconnect
US6667980B1 (en) * 1999-10-21 2003-12-23 Sun Microsystems, Inc. Method and apparatus for providing scalable services using a packet distribution table
US20040098496A1 (en) * 1999-12-28 2004-05-20 Intel Corporation, A California Corporation Thread signaling in multi-threaded network processor
US6587866B1 (en) * 2000-01-10 2003-07-01 Sun Microsystems, Inc. Method for distributing packets to server nodes using network client affinity and packet distribution table
US6735206B1 (en) * 2000-01-10 2004-05-11 Sun Microsystems, Inc. Method and apparatus for performing a fast service lookup in cluster networking
US7152128B2 (en) * 2001-08-24 2006-12-19 Intel Corporation General input/output architecture, protocol and related methods to manage data integrity
US7047372B2 (en) * 2003-04-15 2006-05-16 Newisys, Inc. Managing I/O accesses in multiprocessor systems

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070064737A1 (en) * 2005-09-07 2007-03-22 Emulex Design & Manufacturing Corporation Receive coalescing and automatic acknowledge in network interface controller
US8311059B2 (en) * 2005-09-07 2012-11-13 Emulex Design & Manufacturing Corporation Receive coalescing and automatic acknowledge in network interface controller
US7924884B2 (en) 2005-12-20 2011-04-12 Citrix Systems, Inc. Performance logging using relative differentials and skip recording
US20070186212A1 (en) * 2006-01-24 2007-08-09 Citrix Systems, Inc. Methods and systems for providing access to a computing environment
US7949677B2 (en) 2006-01-24 2011-05-24 Citrix Systems, Inc. Methods and systems for providing authorized remote access to a computing environment provided by a virtual machine
US7870153B2 (en) 2006-01-24 2011-01-11 Citrix Systems, Inc. Methods and systems for executing, by a virtual machine, an application program requested by a client machine
US8117314B2 (en) 2006-01-24 2012-02-14 Citrix Systems, Inc. Methods and systems for providing remote access to a computing environment provided by a virtual machine
US8355407B2 (en) 2006-01-24 2013-01-15 Citrix Systems, Inc. Methods and systems for interacting, via a hypermedium page, with a virtual machine executing in a terminal services session
US8341732B2 (en) 2006-01-24 2012-12-25 Citrix Systems, Inc. Methods and systems for selecting a method for execution, by a virtual machine, of an application program
US8341270B2 (en) * 2006-01-24 2012-12-25 Citrix Systems, Inc. Methods and systems for providing access to a computing environment
US8010679B2 (en) * 2006-01-24 2011-08-30 Citrix Systems, Inc. Methods and systems for providing access to a computing environment provided by a virtual machine executing in a hypervisor executing in a terminal services session
US7954150B2 (en) 2006-01-24 2011-05-31 Citrix Systems, Inc. Methods and systems for assigning access control levels in providing access to resources via virtual machines
US8051180B2 (en) 2006-01-24 2011-11-01 Citrix Systems, Inc. Methods and servers for establishing a connection between a client system and a virtual machine executing in a terminal services session and hosting a requested computing environment
US20070299863A1 (en) * 2006-04-27 2007-12-27 Qualcomm Incorporated Portable object serialization
US20080005257A1 (en) * 2006-06-29 2008-01-03 Kestrelink Corporation Dual processor based digital media player architecture with network support
US8116207B2 (en) 2006-08-21 2012-02-14 Citrix Systems, Inc. Systems and methods for weighted monitoring of network services
US7912050B2 (en) * 2006-12-05 2011-03-22 Electronics And Telecommunications Research Institute Method for classifying downstream packet in cable modem termination system at head-end supporting channel bonding mode, and cable modem termination system
US20080130641A1 (en) * 2006-12-05 2008-06-05 Electronics And Telecommunications Research Institute METHOD FOR CLASSIFYING DOWNSTREAM PACKET IN CABLE MODEM TERMINATION SYSTEM AT HEAD-END SUPPORTING CHANNEL BONDING MODE, AND cable modem termination system
US20080144502A1 (en) * 2006-12-19 2008-06-19 Deterministic Networks, Inc. In-Band Quality-of-Service Signaling to Endpoints that Enforce Traffic Policies at Traffic Sources Using Policy Messages Piggybacked onto DiffServ Bits
US7983170B2 (en) 2006-12-19 2011-07-19 Citrix Systems, Inc. In-band quality-of-service signaling to endpoints that enforce traffic policies at traffic sources using policy messages piggybacked onto DiffServ bits
US20080155571A1 (en) * 2006-12-21 2008-06-26 Yuval Kenan Method and System for Host Software Concurrent Processing of a Network Connection Using Multiple Central Processing Units
US8341287B2 (en) 2007-03-12 2012-12-25 Citrix Systems, Inc. Systems and methods for configuring policy bank invocations
US8631147B2 (en) 2007-03-12 2014-01-14 Citrix Systems, Inc. Systems and methods for configuring policy bank invocations
US9450837B2 (en) 2007-03-12 2016-09-20 Citrix Systems, Inc. Systems and methods for configuring policy bank invocations
US7870277B2 (en) 2007-03-12 2011-01-11 Citrix Systems, Inc. Systems and methods for using object oriented expressions to configure application security policies
US7865589B2 (en) 2007-03-12 2011-01-04 Citrix Systems, Inc. Systems and methods for providing structured policy expressions to represent unstructured data in a network appliance
US9160768B2 (en) 2007-03-12 2015-10-13 Citrix Systems, Inc. Systems and methods for managing application security profiles
US7853678B2 (en) 2007-03-12 2010-12-14 Citrix Systems, Inc. Systems and methods for configuring flow control of policy expressions
US7853679B2 (en) 2007-03-12 2010-12-14 Citrix Systems, Inc. Systems and methods for configuring handling of undefined policy events
US8908700B2 (en) 2007-09-07 2014-12-09 Citrix Systems, Inc. Systems and methods for bridging a WAN accelerator with a security gateway
US20090248845A1 (en) * 2008-03-31 2009-10-01 Waltermann Rod D Network bandwidth control for network storage
US9071524B2 (en) 2008-03-31 2015-06-30 Lenovo (Singapore) Pte, Ltd. Network bandwidth control for network storage
US8687638B2 (en) 2008-07-10 2014-04-01 At&T Intellectual Property I, L.P. Methods and apparatus to distribute network IP traffic
US20100067539A1 (en) * 2008-09-12 2010-03-18 Realtek Semiconductor Corp. Single Network Interface Circuit with Multiple-Ports and Method Thereof
US8155136B2 (en) * 2008-09-12 2012-04-10 Realtek Semiconductor Corp. Single network interface circuit with multiple-ports and method thereof
US8972560B2 (en) 2010-03-16 2015-03-03 Microsoft Technology Licensing, Llc Shaping virtual machine communication traffic
WO2011115844A2 (en) 2010-03-16 2011-09-22 Microsoft Corporation Shaping virtual machine communication traffic
US9231878B2 (en) 2010-03-16 2016-01-05 Microsoft Technology Licensing, Llc Shaping virtual machine communication traffic
EP2548130A4 (en) * 2010-03-16 2014-05-07 Microsoft Corp Shaping virtual machine communication traffic
CN102804164A (en) * 2010-03-16 2012-11-28 微软公司 Shaping virtual machine communication traffic
EP2548130A2 (en) * 2010-03-16 2013-01-23 Microsoft Corporation Shaping virtual machine communication traffic
US9893994B2 (en) 2010-05-24 2018-02-13 At&T Intellectual Property I, L.P. Methods and apparatus to route control packets based on address partitioning
US8699484B2 (en) 2010-05-24 2014-04-15 At&T Intellectual Property I, L.P. Methods and apparatus to route packets in a network
US20110286457A1 (en) * 2010-05-24 2011-11-24 Cheng Tien Ee Methods and apparatus to route control packets based on address partitioning
US9491085B2 (en) * 2010-05-24 2016-11-08 At&T Intellectual Property I, L.P. Methods and apparatus to route control packets based on address partitioning
US20130086183A1 (en) * 2011-09-30 2013-04-04 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
US9996403B2 (en) * 2011-09-30 2018-06-12 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
US20140003436A1 (en) * 2012-06-27 2014-01-02 Futurewei Technologies, Inc. Internet Protocol and Ethernet Lookup Via a Unified Hashed Trie
US9680747B2 (en) * 2012-06-27 2017-06-13 Futurewei Technologies, Inc. Internet protocol and Ethernet lookup via a unified hashed trie
US9269439B1 (en) * 2012-08-31 2016-02-23 Marvell Israel (M.I.S.L) Ltd. Method and apparatus for TCAM based look-up
US9997245B1 (en) 2012-08-31 2018-06-12 Marvell Israel (M.I.S.L) Ltd. Method and apparatus for TCAM based look-up
CN105264837A (en) * 2013-12-05 2016-01-20 华为技术有限公司 Data packet transmission system, transmission method and device thereof
US10223307B2 (en) 2017-06-15 2019-03-05 International Business Machines Corporation Management of data transaction from I/O devices
US10796029B2 (en) 2017-11-30 2020-10-06 International Business Machines Corporation Software controlled port locking mechanisms
US20190173810A1 (en) * 2017-12-06 2019-06-06 Mellanox Technologies Tlv Ltd. Packet scheduling in a switch for reducing cache-miss rate at a destination network node
US10581762B2 (en) * 2017-12-06 2020-03-03 Mellanox Technologies Tlv Ltd. Packet scheduling in a switch for reducing cache-miss rate at a destination network node
US11855898B1 (en) * 2018-03-14 2023-12-26 F5, Inc. Methods for traffic dependent direct memory access optimization and devices thereof
US11119968B2 (en) * 2018-08-07 2021-09-14 Dell Products L.P. Increasing cache hits for USB request blocks that target a redirected USB device

Also Published As

Publication number Publication date
US20110110380A1 (en) 2011-05-12
US8006016B2 (en) 2011-08-23
US7987306B2 (en) 2011-07-26

Similar Documents

Publication Publication Date Title
US7987306B2 (en) Hiding system latencies in a throughput networking system
US7415034B2 (en) Virtualized partitionable shared network interface
US7353360B1 (en) Method for maximizing page locality
US7567567B2 (en) Network system including packet classification for partitioned resources
US7889734B1 (en) Method and apparatus for arbitrarily mapping functions to preassigned processing entities in a network system
US7992144B1 (en) Method and apparatus for separating and isolating control of processing entities in a network interface
US7664127B1 (en) Method for resolving mutex contention in a network system
US7779164B2 (en) Asymmetrical data processing partition
US7443878B2 (en) System for scaling by parallelizing network workload
US7529245B1 (en) Reorder mechanism for use in a relaxed order input/output system
US7843926B1 (en) System for providing virtualization of network interfaces at various layers
US8762595B1 (en) Method for sharing interfaces among multiple domain environments with enhanced hooks for exclusiveness
US7865624B1 (en) Lookup mechanism based on link layer semantics
US8539199B2 (en) Hash processing in a network communications processor architecture
US8279885B2 (en) Lockless processing of command operations in multiprocessor systems
US20140198803A1 (en) Scheduling and Traffic Management with Offload Processors
US20110225168A1 (en) Hash processing in a network communications processor architecture
US8510491B1 (en) Method and apparatus for efficient interrupt event notification for a scalable input/output device
US7415035B1 (en) Device driver access method into a virtualized network interface
EP2018614B1 (en) Virtualized partitionable shared network interface
EP2016496B1 (en) Hiding system latencies in a throughput networking system
EP2016718B1 (en) Method and system for scaling by parallelizing network workload
EP2016498B1 (en) Asymmetrical processing for networking functions and data path offload
EP2014028B1 (en) Asymmetrical processing for networking functions and data path offload
US20230231811A1 (en) Systems, devices and methods with offload processing devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUN MICROSYTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MULLER, SHIMON;PURI, RAHOUL;WONG, MICHAEL;REEL/FRAME:016448/0581

Effective date: 20050401

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: ORACLE AMERICA, INC., CALIFORNIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:ORACLE USA, INC.;SUN MICROSYSTEMS, INC.;ORACLE AMERICA, INC.;REEL/FRAME:039888/0635

Effective date: 20100212

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12