US20030212735A1 - Method and apparatus for providing an integrated network of processors - Google Patents

Method and apparatus for providing an integrated network of processors

Info

Publication number
US20030212735A1
US20030212735A1 (application US10/144,658)
Authority
US
United States
Prior art keywords
processing unit
network
auxiliary
host
processing units
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/144,658
Inventor
Gary Hicok
Robert Alfieri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nvidia Corp
Original Assignee
Nvidia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nvidia Corp filed Critical Nvidia Corp
Priority to US10/144,658 priority Critical patent/US20030212735A1/en
Assigned to NVIDIA CORPORATION reassignment NVIDIA CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALFIERI, ROBERT A., HICOK, GARY
Priority to JP2004504124A priority patent/JP2005526313A/en
Priority to DE10392634T priority patent/DE10392634T5/en
Priority to AU2003229034A priority patent/AU2003229034A1/en
Priority to PCT/US2003/014908 priority patent/WO2003096202A1/en
Priority to GB0425574A priority patent/GB2405244B/en
Priority to GB0514859A priority patent/GB2413872B/en
Publication of US20030212735A1 publication Critical patent/US20030212735A1/en
Priority to US11/473,832 priority patent/US7383352B2/en
Priority to US11/948,847 priority patent/US7620738B2/en
Priority to US12/608,881 priority patent/US8051126B2/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/12: Protocol engines
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/30: Definitions, standards or architectural aspects of layered protocol stacks
    • H04L 69/32: Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L 69/322: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L 69/329: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • These auxiliary processing units include, but are not limited to, a graphics processing unit (GPU), an audio processing unit (APU), a video processing unit (VPU), a physics processing unit (PPU) and a storage processing unit (SPU) 220 .
  • Some of these auxiliary processing units can be deployed as part of the media engines 230 , whereas the SPU 220 is deployed with the storage devices of the host.
  • the network processing unit itself is an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and so on.
  • the NPU 210 is a network router appliance that resides inside the same “box” or chassis as the host computer 250 , i.e., typically within the same chipset.
  • the NPU serves to connect various other “XPUs” that perform dedicated functions such as:
  • SPU Storage Processing Unit
  • the SPU is an auxiliary processing unit that implements a file system, where the file system can be accessed locally by the host or remotely via the NPU's connection to the outside world.
  • the SPU is a special XPU because it behaves as an endpoint for data storage. Streams can originate from an SPU file or terminate at an SPU file.
  • Audio Processing Unit is an auxiliary processing unit that implements audio effects on individual “voices” and mixes them down to a small number of channels. The APU also performs encapsulation/decapsulation of audio packets that are transmitted/received over the network via the NPU.
  • Video Processing Unit is an auxiliary processing unit that is similar to the APU except that it operates on compressed video packets (e.g., MPEG-2 compressed), either compressing them or uncompressing them.
  • the VPU also performs encapsulations into bitstreams or network video packets.
  • GPU Graphics Processing Unit
  • the GPU is a special XPU because it acts as an endpoint for rendered graphics primitives. Streams can terminate at a GPU frame buffer or originate as raw pixels from a frame buffer.
  • PPU Physics Processing Unit
  • PPU is an auxiliary processing unit that takes object positions, current velocity vectors, and force equations, and produces new positions, velocity vectors, and collision information.
  • NPU Network Processing Unit
  • NPU is itself an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and the like.
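By way of illustration only, the routing role described above can be sketched in software. All class, field, and packet-format names below are invented for this sketch and do not appear in the disclosure; a real NPU performs this routing in chipset hardware, not in Python.

```python
# Minimal sketch of the logical topology: each XPU has a logically direct
# attachment to the NPU, which serves as an integrated router.  Packets
# addressed to an attached XPU are delivered to it; anything the NPU cannot
# route is deferred to the host CPU/OS (the "exception" path).

class Xpu:
    """An auxiliary processing unit reachable through the NPU."""
    def __init__(self, name):
        self.name = name
        self.inbox = []              # packets delivered to this XPU

    def receive(self, packet):
        self.inbox.append(packet)

class Npu(Xpu):
    """The NPU is itself an XPU: besides routing, it could also transform
    packets (authentication, encryption, compression, and so on)."""
    def __init__(self):
        super().__init__("NPU")
        self.ports = {}              # XPU name -> attached XPU
        self.for_host = []           # unroutable packets, deferred to host

    def attach(self, xpu):
        self.ports[xpu.name] = xpu

    def route(self, packet):
        xpu = self.ports.get(packet["dst"])
        if xpu is not None:
            xpu.receive(packet)      # logically direct delivery
        else:
            self.for_host.append(packet)

npu = Npu()
for name in ("GPU", "APU", "VPU", "SPU", "PPU"):
    npu.attach(Xpu(name))

npu.route({"src": "host", "dst": "SPU", "payload": b"open /movies/a.mpg"})
npu.route({"src": "SPU", "dst": "APU", "payload": b"audio-frames"})
```

Note how the second packet travels XPU-to-XPU without touching any host object, which is the "first-class status" point made above.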
  • Some of the above XPUs have a number of commonalities with respect to their association with the host 250 and the NPU 210 .
  • an XPU can be accessed by the host CPU and O/S 250 directly as a local resource. Namely, communication is effected by using direct local communication channels.
  • an XPU can be placed on the network via the NPU and accessed remotely from other network nodes (as shown in FIG. 3 below). This indicates that an XPU is capable of processing information that is encapsulated in network packets.
  • an XPU can be accessed as a “remote” node even from the local host. Namely, communication is effected via the NPU by using network protocols.
  • an XPU is always in an “on” state (like most appliances) even when the host (CPU+O/S) is in the “off” state.
  • This unique feature allows the XPUs to operate without the involvement of the host CPU/OS, e.g., extracting data from a disk drive of the host without the involvement of the host. More importantly, the host's resources are still available even though the CPU/OS may be in a dormant state, e.g., in a sleep mode.
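The dual access paths and the always-on property lend themselves to a small sketch. Everything below (names, file contents, packet format) is invented for illustration; the point is only that the same XPU answers over a direct local channel or over the NPU's network path, with no host CPU/OS in the loop.

```python
# Sketch: a storage XPU (SPU) that serves the same file either through a
# direct local channel or through a network-style request, even while the
# host CPU/OS is dormant.

class StorageXpu:
    def __init__(self, files):
        self.files = dict(files)

    def local_read(self, path):
        """Direct local communication channel (host accesses a local resource)."""
        return self.files[path]

    def handle_packet(self, packet):
        """Network path via the NPU: the SPU behaves like a remote file server."""
        if packet["op"] == "read":
            return {"op": "read-reply", "data": self.files[packet["path"]]}
        return {"op": "error"}

host_state = "sleep"                 # the host CPU/OS may be dormant...
spu = StorageXpu({"/movies/a.mpg": b"\x47\x40\x00"})

# ...yet the XPU still services a "remote"-style request on its own:
reply = spu.handle_packet({"op": "read", "path": "/movies/a.mpg"})
```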
  • an XPU has at least two sets of processing queues, one for non-real-time packets and at least one for real-time packets. This duality of queues, combined with similar real-time queues in the NPU, allows the system of NPU and XPUs to guarantee latencies and bandwidth for real-time streams.
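The effect of the dual queues can be shown with a toy scheduler (ours, not the patent's): real-time packets are always serviced ahead of non-real-time packets, which is what makes bounded latency possible.

```python
from collections import deque

# Toy model of an XPU's two processing queues: one for real-time packets
# and one for non-real-time packets.  Servicing real-time traffic first is
# what lets the NPU/XPU system bound latency for real-time streams.

class DualQueue:
    def __init__(self):
        self.realtime = deque()
        self.bulk = deque()

    def enqueue(self, packet):
        q = self.realtime if packet.get("rt") else self.bulk
        q.append(packet)

    def next_packet(self):
        if self.realtime:            # real-time always preempts bulk
            return self.realtime.popleft()
        return self.bulk.popleft() if self.bulk else None

q = DualQueue()
q.enqueue({"id": 1, "rt": False})    # e.g. a file copy
q.enqueue({"id": 2, "rt": True})     # e.g. an audio frame
q.enqueue({"id": 3, "rt": True})     # e.g. a video frame

service_order = [q.next_packet()["id"] for _ in range(3)]   # [2, 3, 1]
```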
  • an XPU has two software (SW) drivers, one that manages the host-side connection to the XPU, and one that manages the remotely-accessed component of the XPU.
  • These SW drivers communicate with the XPU using abstract command queues, called push buffers (PBs).
  • Each driver has at least one PB going from the driver to the XPU and at least one PB going from the XPU to the driver.
  • Push buffers are described in U.S. Pat. No. 6,092,124, which is herein incorporated by reference.
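As an illustrative stand-in for the push-buffer pairing, the sketch below models the two directions of flow with plain queues. Real push buffers are hardware-visible command buffers (per the referenced patent), and every name here is ours.

```python
from collections import deque

# Each driver has at least one command queue toward the XPU (driver -> XPU)
# and at least one completion queue back from it (XPU -> driver).  Python
# deques are used only to show the direction of flow.

class XpuChannel:
    def __init__(self):
        self.driver_to_xpu = deque()     # PB: driver -> XPU (commands)
        self.xpu_to_driver = deque()     # PB: XPU -> driver (completions)

    def submit(self, command):
        """Driver side: push an abstract command to the XPU."""
        self.driver_to_xpu.append(command)

    def service(self):
        """XPU side: drain commands, push back completions."""
        while self.driver_to_xpu:
            cmd = self.driver_to_xpu.popleft()
            self.xpu_to_driver.append({"completed": cmd["op"]})

chan = XpuChannel()
chan.submit({"op": "mix-voices"})
chan.submit({"op": "encapsulate-audio"})
chan.service()

completions = [c["completed"] for c in chan.xpu_to_driver]
```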
  • an XPU can also be accessed on the host side directly by a user-level application. Namely, this involves lazy-pinning of user-space buffers by the O/S. Lazy-pinning means to lock the virtual-to-physical address translations of memory pages on demand, i.e., when the translations are needed by the particular XPU. When the translations are no longer needed, they can be unlocked, allowing the operating system to page out those pages.
  • the virtual-to-physical mappings of these buffers are passed to the XPU.
  • a separate pair of PBs are linked into the user's address space and the O/S driver coordinates context switches with the XPU.
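Lazy-pinning as described above can be paraphrased in a few lines. The page-table model below is deliberately naive and all names are ours; the point is only the lifecycle: translate on demand, lock while the XPU needs the page, unlock so the O/S may page it out again.

```python
# Naive model of lazy-pinning user-space buffers.  Translations are created
# on demand, locked ("pinned") only while the XPU needs them, and released
# afterwards so the operating system may page the memory out again.

class PageTable:
    def __init__(self):
        self.mapping = {}            # virtual page -> physical page
        self.pinned = set()          # pages currently locked for the XPU
        self._next_phys = 0

    def translate(self, vpage):
        if vpage not in self.mapping:        # fault the page in on first use
            self.mapping[vpage] = self._next_phys
            self._next_phys += 1
        return self.mapping[vpage]

    def pin(self, vpage):
        self.pinned.add(vpage)               # O/S must not page this out now
        return self.translate(vpage)

    def unpin(self, vpage):
        self.pinned.discard(vpage)           # O/S may page it out again

def xpu_process_buffer(table, vpages):
    """Pin each page of a user buffer only for as long as it is needed."""
    touched = []
    for vp in vpages:
        phys = table.pin(vp)
        touched.append(phys)                 # pretend device access here
        table.unpin(vp)
    return touched

pt = PageTable()
physical_pages = xpu_process_buffer(pt, [7, 8, 9])
```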
  • Although the present invention discloses the use of a network processing unit 210 to perform routing functions without the involvement of the CPU/OS, the CPU/OS 250 nevertheless still has an alternate direct communication channel 255 with its resources, e.g., storage devices. This provides the host CPU/OS with the option of communicating with its resources or media engines via the NPU or directly via local access channels 255 or 257 .
  • exception routing issues are resolved by the host CPU/OS. For example, if the NPU receives a packet that it is unable to process, the NPU will forward the packet to the host CPU/OS for resolution. This limited use of the CPU/OS serves to accelerate host processing, while retaining the option to more judiciously use the processing power of the host CPU/OS to resolve difficult issues.
  • the host resources may also be accessed via the NPU without the involvement of the host CPU/OS 250 via input/output communication channel 240 , e.g., via a USB port.
  • the present architecture can virtualize the remaining resources of the host computer 250 , such as its physical memory, read only memory (ROM), real-time clocks, interrupts, and so on, thereby allowing a single chipset to provide multiple virtual hosts with each host being attached to the NPU 210 .
  • the XPUs have logically direct attachments to the NPU that effectively serves as an integrated router, thereby allowing XPUs to be seen as separate network appliances. Since these auxiliary processing units have first-class status in this logical network architecture, they are allowed to communicate with each other or with any external computer (e.g., via another NPU) directly using standard internet protocols such as IP, TCP, UDP and the like without the involvement of the host CPU/OS.
  • the NPU provides both local (or host) access and remote access acceleration in a distributed computing environment.
  • FIG. 3 illustrates a block diagram where a network of host computers 300 a - n are in communication with each other via a plurality of network processing units 310 a - n .
  • This unique configuration provides both host access and remote access acceleration. The accelerated functions can be best understood by viewing the present invention in terms of packetized streams.
  • the term “host” means the combination of host CPU and memory in the context of the O/S kernel or a user-level process.
  • node refers to a remote networked host or device that is attached to the NPU via a wired or wireless connection to a MAC that is directly connected to the NPU (e.g., as shown in FIG. 4 below).
  • a host-to-XPU stream is a stream that flows directly from the host 350 a to the XPU 330 a . This is a typical scenario for a dedicated XPU (e.g., a dedicated GPU via communication path 357 ). The stream does not traverse through the NPU 310 a.
  • An XPU-to-host stream is a stream that flows directly from the XPU to the host.
  • One example is a local file being read from the SPU 320 a via path 355 .
  • the stream does not traverse through the NPU 310 a.
  • a host-to-XPU-to-host stream is a stream that flows from host 350 a to an XPU 330 a for processing then back to the host 350 a .
  • One example is where the host forwards voice data directly to the APU for processing of voices into final mix buffers that are subsequently returned to the host via path 357 .
  • the stream does not traverse through the NPU 310 a .
  • a host-to-NPU-to-XPU stream is a networked stream that flows from the host 350 a via NPU 310 a to an XPU 330 a or 320 a .
  • the three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
  • An XPU-to-NPU-to-Host is a networked stream that flows from an XPU 330 a or 320 a via the NPU 310 a to the host 350 a .
  • the three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
  • a host-to-NPU-to-XPU-to-NPU-to-host is a networked stream that is the combination of the previous two streams.
  • the three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
  • a host-to-NPU-to-Node is a networked stream that flows from the host 350 a via the NPU 310 a to a remote node (e.g., NPU 310 b ). This allows a local host 350 a to communicate and access XPUs 330 b of another host via a second NPU 310 b.
  • a Node-to-NPU-to-Host is a reverse networked stream where the stream flows from a remote node (e.g., NPU 310 b ) via the NPU 310 a to the host 350 a . This allows a remote NPU 310 b to communicate with a local host 350 a via a local NPU 310 a.
  • a Node-to-NPU-to-XPU is a networked stream that flows from a remote node (e.g., NPU 310 b ) via the NPU 310 a to an XPU 330 a where it terminates. This allows a remote NPU 310 b to communicate with a local XPU 330 a via a local NPU 310 a.
  • An XPU-to-NPU-to-Node is a networked stream that flows from an XPU 330 a where it originates to a remote node (e.g., NPU 310 b ) via local NPU 310 a.
  • a Node 0 -to-NPU-to-XPU-to-NPU-to-Node 1 is a combination of the previous two streams. It should be noted that Node 0 and Node 1 may be the same or different. For example, Node 0 is 310 a ; NPU is 310 b ; XPU is 330 b ; NPU is 310 b ; and Node 1 is 310 n . Alternatively, Node 0 is 310 a ; NPU is 310 b ; XPU is 330 b ; NPU is 310 b ; and Node 1 is 310 a.
  • a ⁇ Host,Node 0 ,XPU 0 ⁇ -to-NPU-to-XPU 1 -to-NPU-to-XPU 2 -to-NPU-to ⁇ Host,Node 1 ,XPU 3 ⁇ is a stream that originates from the host, a remote node, or an XPU, passes through the NPU to another XPU for some processing, then passes through the NPU to another XPU for some additional processing, then terminates at the host, another remote node, or another XPU. It should be clear that the present architecture of a network of integrated processing units provides a powerful and flexible distributed processing environment, where both host access and remote access acceleration are greatly enhanced.
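The taxonomy above reduces to hop lists. The helper below is our own summary notation, not part of the disclosure: it expands a stream description into the ordered units a packet visits, with directly attached streams bypassing the NPU entirely.

```python
# Summary of the stream taxonomy as hop lists.  A stream is described by its
# endpoints and the XPUs that process it along the way; every networked hop
# passes through the NPU, while direct host<->XPU streams never touch it.

def stream_path(src, dst, processors=(), direct=False):
    if direct:                       # dedicated attachment, e.g. host-to-GPU
        return [src, dst]
    path = [src]
    for xpu in processors:           # each processing stop is reached via NPU
        path += ["NPU", xpu]
    path += ["NPU", dst]             # final networked hop to the destination
    return path

# Host-to-XPU (direct, no NPU traversal):
direct_stream = stream_path("host", "GPU", direct=True)
# Host-to-NPU-to-XPU:
networked = stream_path("host", "SPU")
# Host-to-NPU-to-XPU-to-NPU-to-host (processing round trip):
round_trip = stream_path("host", "host", processors=("APU",))
# Node0-to-NPU-to-XPU-to-NPU-to-Node1:
node_relay = stream_path("node0", "node1", processors=("VPU",))
```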
  • real-time media streaming is implemented using the above described network of integrated processing units.
  • media streaming typically involves multiple software layers.
  • latencies can be unpredictable, particularly when the software runs on a general-purpose computer.
  • media streaming typically has a severe adverse impact on other applications running on the host computer.
  • control requests may arrive from a remote recipient 350 b (typically attached wirelessly). These control requests may include play, stop, rewind, forward, pause, select title, and so on.
  • the raw data can be streamed directly from a disk managed by the SPU 320 a through the NPU 310 a to the destination client.
  • the data may get preprocessed by the GPU 330 a or APU 330 a prior to being sent out via the NPU 310 a .
  • One important aspect again is that real-time media streaming can take place without host CPU 350 a involvement. Dedicated queuing throughout the system will guarantee latencies and bandwidth.
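A toy control loop makes the host-free streaming path concrete. The command set (play, pause, rewind) follows the list above; the chunked stream and its handling are invented for illustration, and a real implementation would run in the SPU/NPU hardware queues, not in Python.

```python
# Toy control loop for host-free media streaming: control requests from a
# remote client drive chunks from SPU-managed storage through the NPU to
# the client, with no host CPU involvement modeled anywhere.

def media_server(chunks, commands):
    """Return the chunks sent in response to a sequence of control requests."""
    position, playing = 0, False
    sent = []
    for cmd in commands:
        if cmd == "play":
            playing = True
        elif cmd in ("stop", "pause"):
            playing = False
        elif cmd == "rewind":
            position = 0
        if playing and position < len(chunks):
            sent.append(chunks[position])   # SPU chunk forwarded via the NPU
            position += 1
    return sent

delivered = media_server(
    chunks=["c0", "c1", "c2"],
    commands=["play", "play", "pause", "rewind", "play"],
)
```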
  • This media streaming embodiment clearly demonstrates the power and flexibility of the present invention.
  • One practical implementation of this real-time media streaming embodiment is within the home environment, where a centralized multimedia host server or computer has a large storage device that contains a library of stored media streams or it may simply be connected to a DVD player, a “PVR” (personal video recorder) or “DVR” (digital video recorder). If there are other client devices throughout the home, it is efficient to use the above network architecture to implement real-time media streaming, where a media stream from a storage device of the host computer can be transmitted to another host computer or a television set in a different part of the home. Thus, the real-time media streaming is implemented without the involvement of the host computer and with guaranteed latencies and bandwidth.
  • FIG. 4 illustrates a block diagram where a host computer's resources are networked via a network processing unit 410 of the present invention.
  • a host 450 communicates with the NPU 410 via a MAC 415 (i.e., a host MAC).
  • a plurality of XPUs and other host resources 430 a are connected to the NPU via a plurality of MACs 425 that interface with a MAC Interface (MI) (not shown) of the NPU.
  • FIG. 5 illustrates a block diagram of a network of virtual personal computers or virtual hosts that are in communication with a network processing unit 520 of the present invention. More specifically, FIG. 5 illustrates a network of virtual personal computers (VPCs) in a single system (or a single chassis) 500 , where the system may be a single personal computer, a set top box, a video game console or the like.
  • FIG. 5 illustrates a plurality of virtual hosts 510 a - e , which may comprise a plurality of different operating systems (e.g., Microsoft Corporation's Windows (two separate copies 510 a and 510 b ), and Linux 510 c ), a raw video game application 510 d or other raw applications 510 e , where the virtual hosts treat the storage processing unit 530 as a remote file server having a physical storage 540 .
  • Thus, FIG. 5 illustrates a “network of VPCs in a box”.
  • the NPU 520 manages multiple IP addresses inside the system for each VPC.
  • the NPU 520 may be assigned a public IP address, whereas each of the VPCs is assigned a private IP address, e.g., in accordance with Dynamic Host Configuration Protocol (DHCP).
  • each of the VPCs can communicate with each other and the SPU using standard networking protocols.
  • Standard network protocols include, but are not limited to: TCP; TCP/IP; UDP; NFS; HTTP; SMTP; POP; FTP; NNTP; CGI; DHCP; and ARP (to name only a few that are known in the art).
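The addressing scheme can be sketched with the standard library's ipaddress module. The addresses and VPC names below are invented examples; the point is the split between one public address held by the NPU and DHCP-style private leases handed to each virtual PC.

```python
import ipaddress

# Sketch of the VPC addressing scheme: the NPU holds the single public IP
# address, and each virtual PC receives a stable private address from a
# DHCP-style pool managed inside the box.

class NpuAddressServer:
    def __init__(self, public_ip, private_net="192.168.0.0/24"):
        self.public_ip = ipaddress.ip_address(public_ip)
        self._pool = ipaddress.ip_network(private_net).hosts()
        self.leases = {}                     # VPC name -> private IP

    def lease(self, vpc_name):
        if vpc_name not in self.leases:      # first request allocates...
            self.leases[vpc_name] = next(self._pool)
        return self.leases[vpc_name]         # ...later requests are stable

server = NpuAddressServer("203.0.113.5")
addresses = [str(server.lease(v)) for v in ("windows-1", "windows-2", "linux")]
repeat = str(server.lease("windows-1"))      # same VPC keeps its lease
```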
  • the XPUs of the present invention can be implemented as one or more physical devices that are coupled to the host CPU through a communication channel.
  • the XPUs can be represented and provided by one or more software applications (or even a combination of software and hardware, e.g., using application specific integrated circuits (ASIC)), where the software is loaded from a storage medium, (e.g., a ROM, a magnetic or optical drive or diskette) and operated in the memory of the computer.
  • the XPUs (including associated methods and data structures) of the present invention can be stored and provided on a computer readable medium, e.g., ROM or RAM memory, magnetic or optical drive or diskette and the like.
  • the XPUs can be represented by Field Programmable Gate Arrays (FPGA) having control bits.

Abstract

A novel network architecture that integrates the functions of an internet protocol (IP) router into a network processing unit (NPU) that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. The NPU appears logically separate from the host computer even though, in one embodiment, it is sharing the same chip.

Description

  • The present invention relates to a novel network architecture. More specifically, the present invention integrates the functions of an internet protocol (IP) router into a network processing unit that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. [0001]
  • BACKGROUND OF THE DISCLOSURE
  • FIG. 1 illustrates traditional internal content sources and data pipes where the data routing function is performed by a host central processing unit (CPU) and its operating system (OS) 110. Namely, the host computer may comprise a number of storage devices 120, a plurality of media engines 130, and a plurality of other devices that are accessible via input/output ports 140, e.g., universal serial bus (USB) and the like. In turn, the host computer may access a network 150 via application programming interfaces (APIs) and a media access controller (MAC). [0002]
  • However, a significant drawback of this data routing architecture is that the host computer's resources or devices are only accessible with the involvement of the host CPU/OS. Typically, accessing the host resources from external computers is either prohibited or it is necessary to request access through the host computer using high-level protocols. If the host CPU/OS is overtaxed, a substantial latency will exist where data flow may be stuck in the OS stacks. [0003]
  • Therefore, a need exists for a novel network architecture that allows a host computer's resources to be perceived as separate network appliances and are accessible without the interference of the host computer's CPU/OS. [0004]
  • SUMMARY OF THE INVENTION
  • The present invention is a novel network architecture. More specifically, the present invention integrates the functions of an internet protocol (IP) router into a network processing unit (NPU) that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. The NPU appears logically separate from the host computer even though, in one embodiment, it is sharing the same chip. A host computer's “chipset” is one or more integrated circuits coupled to a CPU that provide various interfaces (e.g., main memory, hard disks, floppy, USB, PCI, etc), exemplified by Intel's Northbridge and Southbridge integrated circuits. [0005]
  • In operation, the host computer has a virtual port (i.e., host MAC) that is in communication with the network processing unit and communicates with the NPU as if it is an external network appliance using standard networking protocols. In one embodiment, the host computer communicates via the NPU with one or more auxiliary or dedicated processing units that are deployed to perform dedicated tasks. These auxiliary processing units can be part of the host or can be deployed separate from the host to meet different application requirements. For example, some of these auxiliary processing units include, but are not limited to, a graphics processing unit (GPU), an audio processing unit (APU), a video processing unit (VPU), a storage processing unit (SPU), and a physics processing unit (PPU). The present disclosure refers to these auxiliary processing units as XPU, where the “X” is replaced to signify a particular function performed by the processing unit. Finally, the network processing unit itself is an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and so on. [0006]
  • One unique aspect of the present invention is that the XPUs have logically direct attachments to the NPU which effectively serves as an integrated router, thereby allowing XPUs to be seen as separate network appliances. Since these auxiliary processing units have first-class status in this logical network architecture, they are allowed to communicate with each other or with any external computer (e.g., via another NPU) directly using standard internet protocols such as IP, TCP, UDP and the like without the involvement of the host CPU/OS. Using this novel architecture, the NPU provides both local (or host) access and remote access acceleration in a distributed computing environment. [0007]
  • Furthermore, by virtualizing the remaining resources of the host computer, such as its physical memory, ROM, real-time clocks, interrupts, and the like, the present invention allows a single chipset to provide multiple, virtual host computers with each being attached to this NPU. Each of these virtual computers or virtual host may run its own copy of an identical or different operating system, and may communicate with other virtual computers and integrated networked appliances using standard networking protocols. Effectively, the present invention embodies its own hardware-level operating system and graphical user interface (GUI) that reside below the standard host operating system and host computer definition, and allow the computer user to easily configure the network or to switch from one virtual computer to another without changing the standard definition of that host computer.[0008]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which: [0009]
  • FIG. 1 illustrates a block diagram of conventional internal content sources and data pipes; [0010]
  • FIG. 2 illustrates a block diagram of novel internal content sources and data pipes of the present invention; [0011]
  • FIG. 3 illustrates a block diagram where a network of host computers are in communication with each other via a plurality of network processing units; [0012]
  • FIG. 4 illustrates a block diagram where a host computer's resources are networked via a network processing unit of the present invention; and [0013]
  • FIG. 5 illustrates a block diagram of a network of virtual personal computers in communication with a network processing unit of the present invention. [0014]
  • To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. [0015]
  • DETAILED DESCRIPTION
  • FIG. 2 illustrates a block diagram of novel internal content sources and [0016] data pipes 200 of the present invention. Unlike FIG. 1, the present network architecture has a network processing unit 210 of the present invention at the center of the internal content sources and data pipes. The host CPU/OS 250 is no longer central to the data routing scheme. One advantage of this new architecture is that the NPU 210 provides both local or host access and remote access acceleration.
  • An operating system is any software platform for application programs; typical examples are Microsoft Windows, Unix, and Apple Macintosh OS. An operating system can be run on top of another operating system (an example of a virtual operating system) or another underlying software platform, possibly as an application program. [0017]
  • In operation, the host CPU/[0018] OS 250 has a virtual port (i.e., host MAC) that is in communication with the network processing unit 210 and communicates with the NPU as if it were an external network appliance using standard networking protocols, e.g., TCP/IP protocols. In one embodiment, the host computer communicates via the NPU with one or more auxiliary or dedicated processing units 220, 230 that are deployed to perform dedicated tasks. These auxiliary processing units can be part of the host or can be deployed separate from the host to meet different application requirements.
  • For example, some of these auxiliary processing units include, but are not limited to, a graphics processing unit (GPU), an audio processing unit (APU), a video processing unit (VPU), a physics processing unit (PPU) and a storage processing unit (SPU) [0019] 220. Some of these auxiliary processing units can be deployed as part of the media engines 230, whereas the SPU 220 is deployed with the storage devices of the host. Finally, the network processing unit itself is an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and so on.
  • In one embodiment, the NPU [0020] 210 is a network router appliance that resides inside the same “box” or chassis as the host computer 250, i.e., typically within the same chipset. The NPU serves to connect various other “XPUs” that perform dedicated functions such as:
  • 1) Storage Processing Unit (SPU) is an auxiliary processing unit that implements a file system, where the file system can be accessed locally by the host or remotely via the NPU's connection to the outside world. The SPU is a special XPU because it behaves as an endpoint for data storage. Streams can originate from an SPU file or terminate at an SPU file. [0021]
  • 2) Audio Processing Unit (APU) is an auxiliary processing unit that implements audio effects on individual “voices” and mixes them down to a small number of channels. The APU also performs encapsulation/decapsulation of audio packets that are transmitted/received over the network via the NPU. [0022]
  • 3) Video Processing Unit (VPU) is an auxiliary processing unit that is similar to the APU except that it operates on compressed video packets (e.g., MPEG-2 compressed), either compressing them or uncompressing them. The VPU also performs encapsulations into bitstreams or network video packets. [0023]
  • 4) Graphics Processing Unit (GPU) is an auxiliary processing unit that takes graphics primitives and produces (partial) frame buffers. The GPU is a special XPU because it acts as an endpoint for rendered graphics primitives. Streams can terminate at a GPU frame buffer or originate as raw pixels from a frame buffer. [0024]
  • 5) Physics Processing Unit (PPU) is an auxiliary processing unit that takes object positions, current velocity vectors, and force equations, and produces new positions, velocity vectors, and collision information. [0025]
  • 6) Network Processing Unit (NPU) is itself an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and the like. [0026]
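  • The router-plus-appliances arrangement described above can be sketched in a few lines of code. This is an illustrative model only, not the patented implementation; all class and attribute names here are hypothetical. Each XPU attaches to the NPU through its own MAC address and receives packets addressed to it, with no host CPU/OS in the path:

```python
class XPU:
    """An auxiliary processing unit attached to the NPU via its own MAC."""
    def __init__(self, name, mac):
        self.name = name        # e.g. "GPU", "APU", "SPU"
        self.mac = mac
        self.inbox = []         # packets routed to this XPU

    def receive(self, packet):
        self.inbox.append(packet)


class NPU:
    """Routes packets among attached XPUs without host CPU/OS involvement."""
    def __init__(self):
        self.ports = {}         # MAC address -> attached XPU

    def attach(self, xpu):
        self.ports[xpu.mac] = xpu

    def route(self, dst_mac, packet):
        # Deliver directly to the addressed XPU, as to a network appliance.
        self.ports[dst_mac].receive(packet)


npu = NPU()
gpu = XPU("GPU", "00:01")
spu = XPU("SPU", "00:02")
npu.attach(gpu)
npu.attach(spu)
npu.route("00:02", b"read /media/title.mpg")   # SPU addressed directly
```

  • The point of the sketch is that the XPUs are peers on a network fabric rather than slave devices of the host: any party holding a destination address can originate a packet to any XPU.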
  • Some of the above XPUs have a number of commonalities with respect to their association with the [0027] host 250 and the NPU 210. First, an XPU can be accessed directly by the host CPU and O/S 250 as a local resource. Namely, communication is effected by using direct local communication channels.
  • Second, an XPU can be placed on the network via the NPU and accessed remotely from other network nodes (as shown in FIG. 3 below). This indicates that an XPU is capable of processing information that is encapsulated in network packets. [0028]
  • Third, an XPU can be accessed as a “remote” node even from the local host. Namely, communication is effected via the NPU by using network protocols. [0029]
  • Fourth, an XPU is always in an “on” state (like most appliances) even when the host (CPU+O/S) is in the “off” state. This unique feature allows the XPUs to operate without the involvement of the host CPU/OS, e.g., extracting data from a disk drive of the host without the involvement of the host. More importantly, the host's resources are still available even though the CPU/OS may be in a dormant state, e.g., in a sleep mode. [0030]
  • Fifth, an XPU has at least two sets of processing queues, one for non-real-time packets and at least one for real-time packets. This duality of queues combined with similar real-time queues in the NPU, allows the system of NPU and XPUs to guarantee latencies and bandwidth for real-time streams. [0031]
  • Sixth, an XPU has two software (SW) drivers, one that manages the host-side connection to the XPU, and one that manages the remotely-accessed component of the XPU. In operation, the SW drivers communicate with the XPU using abstract command queues, called push buffers (PBs). Each driver has at least one PB going from the driver to the XPU and at least one PB going from the XPU to the driver. Push buffers are described in U.S. Pat. No. 6,092,124, which is herein incorporated by reference. [0032]
  • Seventh, an XPU can also be accessed on the host side directly by a user-level application. Namely, this involves lazy-pinning of user-space buffers by the O/S. Lazy-pinning means to lock the virtual-to-physical address translations of memory pages on demand, i.e., when the translations are needed by the particular XPU. When the translations are no longer needed, they can be unlocked, allowing the operating system to page out those pages. The virtual-to-physical mappings of these buffers are passed to the XPU. A separate pair of PBs is linked into the user's address space and the O/S driver coordinates context switches with the XPU. [0033]
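  • The push-buffer pairing described in the sixth point above can be modeled as a pair of command queues per driver, one in each direction. The following is a minimal sketch under assumed names (the actual PB format is defined in the incorporated patent, not here):

```python
from collections import deque

class PushBuffer:
    """Abstract command queue between a SW driver and an XPU."""
    def __init__(self):
        self._q = deque()

    def push(self, cmd):
        self._q.append(cmd)

    def pull(self):
        return self._q.popleft() if self._q else None


class XPUDriver:
    """One of the two SW drivers (host-side or remote-side) of an XPU."""
    def __init__(self):
        self.to_xpu = PushBuffer()     # driver -> XPU commands
        self.from_xpu = PushBuffer()   # XPU -> driver completions


host_driver = XPUDriver()
host_driver.to_xpu.push(("MIX_VOICES", 32))   # hypothetical APU command
cmd = host_driver.to_xpu.pull()               # XPU side drains the queue
host_driver.from_xpu.push(("DONE", cmd[0]))   # completion flows back
```

  • A user-level application accessing the XPU directly (the seventh point) would simply own its own pair of such queues mapped into its address space, with the O/S driver arbitrating context switches.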
  • Although the present invention discloses the use of a [0034] network processing unit 210 to perform routing functions without the involvement of the CPU/OS, the CPU/OS 250 nevertheless still has an alternate direct communication channel 255 with its resources, e.g., storage devices. This provides the host CPU/OS with the option of communicating with its resources or media engines via the NPU or directly via local access channels 255 or 257.
  • In fact, although the CPU/OS is not involved with the general routing function, in one embodiment of the present invention, exception routing issues are resolved by the host CPU/OS. For example, if the NPU receives a packet that it is unable to process, the NPU will forward the packet to the host CPU/OS for resolution. This limited use of the CPU/OS serves to accelerate host processing, while retaining the option to more judiciously use the processing power of the host CPU/OS to resolve difficult issues. [0035]
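  • The exception-routing policy above can be sketched as a simple dispatch: packets the NPU can process stay on its fast path, and anything else is punted to the host CPU/OS. The handler set and all names here are assumptions for illustration, not the patented policy:

```python
FAST_PATH = {"TCP", "UDP", "IPSec"}      # accelerations done in the NPU

def route_packet(packet, host_exception_queue):
    """Handle a packet in the NPU if possible, else defer to the host."""
    if packet["proto"] in FAST_PATH:
        return "handled-by-npu"          # no host CPU/OS involvement
    host_exception_queue.append(packet)  # hard cases go to the host CPU/OS
    return "forwarded-to-host"

exceptions = []
fast = route_packet({"proto": "TCP"}, exceptions)
slow = route_packet({"proto": "IPX"}, exceptions)
```

  • This keeps the common case entirely off the host while preserving the host's general-purpose processing power as a backstop for packets the NPU cannot interpret.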
  • Additionally, the host resources may also be accessed via the NPU without the involvement of the host CPU/[0036] OS 250 via input/output communication channel 240, e.g., via an USB. For example, the present architecture can virtualize the remaining resources of the host computer 250, such as its physical memory, read only memory (ROM), real-time clocks, interrupts, and so on, thereby allowing a single chipset to provide multiple virtual hosts with each host being attached to the NPU 210.
  • One unique aspect of the present invention is that the XPUs have logically direct attachments to the NPU that effectively serves as an integrated router, thereby allowing XPUs to be seen as separate network appliances. Since these auxiliary processing units have first-class status in this logical network architecture, they are allowed to communicate with each other or with any external computer (e.g., via another NPU) directly using standard internet protocols such as IP, TCP, UDP and the like without the involvement of the host CPU/OS. Using this novel architecture, the NPU provides both local (or host) access and remote access acceleration in a distributed computing environment. [0037]
  • FIG. 3 illustrates a block diagram where a network of host computers [0038] 300 a-n are in communication with each other via a plurality of network processing units 310 a-n. This unique configuration provides both host access and remote access acceleration. The accelerated functions can be best understood by viewing the present invention in terms of packetized streams.
  • It is best to view this system of NPU and XPUs in the context of streams of packetized data that flow within this system. There are various types of streams that are allowed by the system. In this discussion, the term “host” means the combination of host CPU and memory in the context of the O/S kernel or a user-level process. The term “node” refers to a remote networked host or device that is attached to the NPU via a wired or wireless connection to a MAC that is directly connected to the NPU (e.g., as shown in FIG. 4 below). [0039]
  • A host-to-XPU stream is a stream that flows directly from the [0040] host 350 a to the XPU 330 a. This is a typical scenario for a dedicated XPU (e.g., a dedicated GPU via communication path 357). The stream does not traverse through the NPU 310 a.
  • An XPU-to-host stream is a stream that flows directly from the XPU to the host. One example is a local file being read from the [0041] SPU 320 a via path 355. The stream does not traverse through the NPU 310 a.
  • A host-to-XPU-to-host stream is a stream that flows from [0042] host 350 a to an XPU 330 a for processing then back to the host 350 a. One example is where the host forwards voice data directly to the APU for processing of voices into final mix buffers that are subsequently returned to the host via path 357. The stream does not traverse through the NPU 310 a.
  • A host-to-NPU-to-XPU stream is a networked stream that flows from the [0043] host 350 a via NPU 310 a to an XPU 330 a or 320 a. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
  • An XPU-to-NPU-to-Host is a networked stream that flows from an XPU [0044] 330 a or 320 a via the NPU 310 a to the host 350 a. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
  • A host-to-NPU-to-XPU-to-NPU-to-host is a networked stream that is the combination of the previous two streams. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP. [0045]
  • A host-to-NPU-to-Node is a networked stream that flows from the [0046] host 350 a via the NPU 310 a to a remote node (e.g., NPU 310 b). This allows a local host 350 a to communicate and access XPUs 330 b of another host via a second NPU 310 b.
  • A Node-to-NPU-to-Host is a reverse networked stream where the stream flows from a remote node (e.g., [0047] NPU 310 b) via the NPU 310 a to the host 350 a. This allows a remote NPU 310 b to communicate with a local host 350 a via a local NPU 310 a.
  • A Node-to-NPU-to-XPU is a networked stream that flows from a [0048] remote node 350 b via the NPU 310 a to an XPU 330 a where it terminates. This allows a remote NPU 310 b to communicate with a local XPU 330 a via a local NPU 310 a.
  • An XPU-to-NPU-to-Node is a networked stream that flows from an XPU [0049] 330 a where it originates to a remote node (e.g., NPU 310 b) via local NPU 310 a.
  • A Node[0050] 0-to-NPU-to-XPU-to-NPU-to-Node1 is a combination of the previous two streams. It should be noted that Node0 and Node1 may be the same or different. For example, Node0 is 310 a; NPU is 310 b; XPU is 330 b; NPU is 310 b; and Node1 is 310 n. Alternatively, Node0 is 310 a; NPU is 310 b; XPU is 330 b; NPU is 310 b; and Node1 is 310 a.
  • A {Host,Node[0051] 0,XPU0}-to-NPU-to-XPU1-to-NPU-to-XPU2-to-NPU-to{Host,Node1,XPU3} is a stream that originates from the host, a remote node, or an XPU, passes through the NPU to another XPU for some processing, then passes through the NPU to another XPU for some additional processing, then terminates at the host, another remote node, or another XPU. It should be clear that the present architecture of a network of integrated processing units provides a powerful and flexible distributed processing environment, where both host access and remote access acceleration are greatly enhanced.
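  • The stream taxonomy above reduces to ordered paths of endpoints. The sketch below (all labels illustrative, not from the patent) captures the one structural rule the taxonomy implies: a stream that traverses the NPU is a networked stream carried over standard protocols, while a stream that does not is a direct local stream:

```python
def is_networked(path):
    """A stream is networked exactly when its path traverses the NPU."""
    return "NPU" in path

streams = [
    ("host", "XPU"),                          # host-to-XPU, direct
    ("XPU", "host"),                          # XPU-to-host, direct
    ("host", "NPU", "XPU"),                   # host-to-NPU-to-XPU
    ("node0", "NPU", "XPU", "NPU", "node1"),  # node-to-node via a local XPU
]
networked = [s for s in streams if is_networked(s)]
```

  • Longer chains such as {Host,Node0,XPU0}-to-NPU-to-XPU1-to-NPU-to-XPU2-to-NPU-to-{Host,Node1,XPU3} are just longer paths under the same rule, which is what makes the architecture compose.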
  • Under the present architecture, numerous advantages are achieved. First, it is beneficial to tightly integrate other computers and network appliances into the same chipset. Second, it is very advantageous to offload a host computer's I/O functions into a distributed network of intelligent processors, where traditional latencies associated with an overtaxed CPU/OS are resolved. Third, it is advantageous to provide these auxiliary I/O processors with first-class network-appliance status within the chipset (optionally illustrated in FIG. 2 with dashed lines) without changing the definition of the host computer. Fourth, it is advantageous to allow these auxiliary I/O processors to be shared among the host computer, external computers, and internal and external network appliances. Fifth, it is advantageous to allow the remaining resources of the host computer to be virtualized so that multiple virtual copies of the host computer may be embodied in the same chipset, while sharing the network of intelligent auxiliary I/O processors. Finally, it is advantageous to use a hardware-level operating system and graphical user interface (GUI) that allow the user to configure the network and seamlessly switch among virtual copies of the host computer or virtual host. [0052]
  • In one embodiment of the present invention, real-time media streaming is implemented using the above described network of integrated processing units. Specifically, media streaming typically involves multiple software layers. Thus, latencies can be unpredictable, particularly when the software runs on a general-purpose computer. More importantly, media streaming typically has a severe adverse impact on other applications running on the host computer. [0053]
  • However, by attaching media devices such as an APU or GPU to an NPU+SPU combination, it is now possible to minimize and guarantee latencies as well as offload the main host CPU. For example, referring to FIG. 3, control requests may arrive from a [0054] remote recipient 350 b (typically attached wirelessly). These control requests may include play, stop, rewind, forward, pause, select title, and so on. Once the stream is set up, the raw data can be streamed directly from a disk managed by the SPU 320 a through the NPU 310 a to the destination client. Alternatively, the data may get preprocessed by the GPU 330 a or APU 330 a prior to being sent out via the NPU 310 a. One important aspect again is that real-time media streaming can take place without host CPU 350 a involvement. Dedicated queuing throughout the system will guarantee latencies and bandwidth.
  • This media streaming embodiment clearly demonstrates the power and flexibility of the present invention. One practical implementation of this real-time media streaming embodiment is within the home environment, where a centralized multimedia host server or computer has a large storage device that contains a library of stored media streams or it may simply be connected to a DVD player, a “PVR” (personal video recorder) or “DVR” (digital video recorder). If there are other client devices throughout the home, it is efficient to use the above network architecture to implement real-time media streaming, where a media stream from a storage device of the host computer can be transmitted to another host computer or a television set in a different part of the home. Thus, the real-time media streaming is implemented without the involvement of the host computer and with guaranteed latencies and bandwidth. [0055]
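  • The streaming setup above can be modeled as a control plane (client requests) separated from a data plane (SPU-to-NPU-to-client path that bypasses the host CPU). The control set and every name below are assumptions for illustration only:

```python
CONTROLS = {"play", "stop", "rewind", "forward", "pause", "select_title"}

def handle_control(request, spu_files):
    """Set up a stream for a remote client's control request.

    On "play", the returned data path runs SPU -> NPU -> client, with no
    host CPU in the loop; other controls are acknowledged without a path.
    """
    if request["cmd"] not in CONTROLS:
        raise ValueError("unknown control request")
    if request["cmd"] == "play":
        return {"path": ["SPU", "NPU", request["client"]],
                "title": spu_files[request["title"]]}
    return None

files = {"movie1": "/disk/movie1.mpg"}
session = handle_control({"cmd": "play", "title": "movie1",
                          "client": "node-350b"}, files)
```

  • The separation mirrors the text: control requests may touch whichever unit manages session state, but once established, the bulk media bytes never cross the host CPU.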
  • FIG. 4 illustrates a block diagram where a host computer's resources are networked via a [0056] network processing unit 410 of the present invention. Specifically, a host 450 communicates with the NPU 410 via a MAC 415 (i.e., a host MAC). In turn, a plurality of XPUs and other host resources 430 a are connected to the NPU via a plurality of MACs 425 that interface with a MAC Interface (MI) (not shown) of the NPU. One example of an NPU is disclosed in US patent application entitled “A Method And Apparatus For Performing Network Processing Functions” with attorney docket NVDA/P000413.
  • FIG. 5 illustrates a block diagram of a network of virtual personal computers or virtual hosts that are in communication with a [0057] network processing unit 520 of the present invention. More specifically, FIG. 5 illustrates a network of virtual personal computers (VPCs) in a single system (or a single chassis) 500, where the system may be a single personal computer, a set top box, a video game console or the like.
  • In operation, FIG. 5 illustrates a plurality of virtual hosts [0058] 510 a-e, which may comprise a plurality of different operating systems (e.g., Microsoft Corporation's Windows (two separate copies 510 a and 510 b), and Linux 510 c), a raw video game application 510 d or other raw applications 510 e, where the virtual hosts treat the storage processing unit 530 as a remote file server having a physical storage 540. In essence, one can perceive FIG. 5 as illustrating a “network of VPCs in a box”.
  • In one embodiment, the [0059] NPU 520 manages multiple IP addresses inside the system for each VPC. For example, the NPU 520 may be assigned a public IP address, whereas each of the VPCs is assigned a private IP address, e.g., in accordance with Dynamic Host Configuration Protocol (DHCP). Thus, each of the VPCs can communicate with each other and the SPU using standard networking protocols. Standard network protocols include, but are not limited to: TCP; TCP/IP; UDP; NFS; HTTP; SMTP; POP; FTP; NNTP; CGI; DHCP; and ARP (to name only a few that are known in the art).
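  • The addressing scheme above can be sketched as follows: the NPU holds one public address while each VPC leases a private address DHCP-style from an internal pool. The class, the pool, and all addresses are illustrative assumptions, not values from the patent:

```python
import ipaddress

class NPUAddressManager:
    """Hands out private addresses to VPCs behind the NPU's public IP."""
    def __init__(self, public_ip, private_net="192.168.0.0/24"):
        self.public_ip = public_ip
        self._pool = ipaddress.ip_network(private_net).hosts()
        self.leases = {}                 # VPC name -> private IP

    def lease(self, vpc_name):
        # Re-leasing returns the same address, as a DHCP renewal would.
        if vpc_name not in self.leases:
            self.leases[vpc_name] = str(next(self._pool))
        return self.leases[vpc_name]


mgr = NPUAddressManager("203.0.113.7")
ip_a = mgr.lease("windows-vpc-a")
ip_b = mgr.lease("linux-vpc")
```

  • With each VPC privately addressed, VPC-to-VPC and VPC-to-SPU traffic is ordinary IP routing inside the box, which is exactly what lets the virtual hosts use unmodified standard protocol stacks.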
  • It should be understood that the XPUs of the present invention can be implemented as one or more physical devices that are coupled to the host CPU through a communication channel. Alternatively, the XPUs can be represented and provided by one or more software applications (or even a combination of software and hardware, e.g., using application specific integrated circuits (ASIC)), where the software is loaded from a storage medium, (e.g., a ROM, a magnetic or optical drive or diskette) and operated in the memory of the computer. As such, the XPUs (including associated methods and data structures) of the present invention can be stored and provided on a computer readable medium, e.g., ROM or RAM memory, magnetic or optical drive or diskette and the like. Alternatively, the XPUs can be represented by Field Programmable Gate Arrays (FPGA) having control bits. [0060]
  • Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. In the claims, elements of method claims are listed in a particular order, but no order for practicing of the invention is implied, even if elements of the claims are numerically or alphabetically enumerated. [0061]

Claims (132)

What is claimed is:
1. Method for providing a distributed network of processing units, said method comprising:
a) providing a network processing unit; and
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
2. The method of claim 1, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
3. The method of claim 2, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
4. The method of claim 2, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
5. The method of claim 2, wherein said network protocol is User Datagram Protocol (UDP).
6. The method of claim 1, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
7. The method of claim 1, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
8. The method of claim 1, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
9. The method of claim 1, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
10. The method of claim 1, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
11. The method of claim 1, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
12. The method of claim 1, wherein said at least one host is a virtual host.
13. The method of claim 1, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
14. The method of claim 13, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
15. The method of claim 1, wherein said plurality of auxiliary processing units comprise an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
16. A distributed network of processing units, said network comprising:
a network processing unit; and
at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
17. The network of claim 16, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
18. The network of claim 17, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
19. The network of claim 17, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
20. The network of claim 17, wherein said network protocol is User Datagram Protocol (UDP).
21. The network of claim 16, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
22. The network of claim 16, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
23. The network of claim 16, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
24. The network of claim 16, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
25. The network of claim 16, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
26. The network of claim 16, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
27. The network of claim 16, wherein said at least one host is a virtual host.
28. The network of claim 16, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
29. The network of claim 28, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
30. The network of claim 16, wherein said plurality of auxiliary processing units comprise an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
31. The network of claim 16, wherein said network processing unit is implemented on a chipset.
32. The network of claim 31, wherein at least one of said plurality of auxiliary processing units is implemented on a chipset.
33. Method for providing a distributed network of processing units, said method comprising:
a) providing a network processing unit;
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) loaded with a host operating system; and
c) providing a plurality of auxiliary processing units, wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
34. The method of claim 33, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
35. The method of claim 34, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
36. The method of claim 34, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
37. The method of claim 34, wherein said network protocol is User Datagram Protocol (UDP).
38. The method of claim 33, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
39. The method of claim 33, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
40. The method of claim 33, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
41. The method of claim 33, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
42. The method of claim 33, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
43. The method of claim 33, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
44. The method of claim 33, wherein said at least one host is a virtual host.
45. The method of claim 33, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
46. The method of claim 45, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
47. A distributed network of processing units, said network comprising:
a network processing unit;
at least one host, wherein said at least one host comprises a central processing unit (CPU) loaded with a host operating system; and
a plurality of auxiliary processing units, wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
48. The network of claim 47, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
49. The network of claim 48, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
50. The network of claim 48, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
51. The network of claim 48, wherein said network protocol is User Datagram Protocol (UDP).
52. The network of claim 47, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
53. The network of claim 47, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
54. The network of claim 47, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
55. The network of claim 47, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
56. The network of claim 47, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
57. The network of claim 47, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
58. The network of claim 47, wherein said at least one host is a virtual host.
59. The network of claim 47, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
60. The network of claim 59, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
61. The network of claim 47, wherein said network processing unit is implemented on a chipset.
62. The network of claim 61, wherein at least one of said plurality of auxiliary processing units is implemented on a chipset.
63. Method for providing a distributed network of processing units and host resources, said method comprising:
a) providing a network processing unit; and
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) loaded with a host operating system and a plurality of host resources, wherein each of said plurality of host resources is accessible directly by said central processing unit and via said network processing unit.
64. The method of claim 63, wherein one of said plurality of host resources is a storage device.
65. The method of claim 63, wherein one of said plurality of host resources is a read only memory (ROM).
66. The method of claim 63, wherein one of said plurality of host resources is a random access memory (RAM).
67. A distributed network of processing units and host resources, said network comprising:
a network processing unit; and
at least one host, wherein said at least one host comprises a central processing unit (CPU) loaded with a host operating system and a plurality of host resources, wherein each of said plurality of host resources is accessible directly by said central processing unit and via said network processing unit.
68. The network of claim 67, wherein one of said plurality of host resources is a storage device.
69. The network of claim 67, wherein one of said plurality of host resources is a read only memory (ROM).
70. The network of claim 67, wherein one of said plurality of host resources is a random access memory (RAM).
71. Method for providing a distributed network of processing units and host resources, said method comprising:
a) providing a first network processing unit;
b) providing a first host comprising a first central processing unit (CPU) loaded with a first host operating system and a plurality of first host resources;
c) providing a second network processing unit; and
d) providing a second host comprising a second central processing unit (CPU) loaded with a second host operating system and a plurality of second host resources, wherein each of said plurality of first host resources is accessible via said first and second network processing units by bypassing said first host operating system.
72. The method of claim 71, wherein each of said plurality of second host resources is accessible via said first and second network processing units by bypassing said second host operating system.
73. The method of claim 71, further comprising:
e) forwarding a media stream in real time from one of said plurality of first host resources to said second host operating system.
74. The method of claim 71, wherein said plurality of first host resources comprise at least one auxiliary processing unit.
75. The method of claim 74, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
76. The method of claim 74, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
77. The method of claim 74, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
78. The method of claim 74, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
79. The method of claim 74, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
80. A distributed network of processing units and host resources, said network comprising:
a first network processing unit;
a first host comprising a first central processing unit (CPU) loaded with a first host operating system and a plurality of first host resources;
a second network processing unit; and
a second host comprising a second central processing unit (CPU) loaded with a second host operating system and a plurality of second host resources, wherein each of said plurality of first host resources is accessible via said first and second network processing units by bypassing said first host operating system.
81. The network of claim 80, wherein each of said plurality of second host resources is accessible via said first and second network processing units by bypassing said second host operating system.
82. The network of claim 80, wherein one of said plurality of first host resources forwards a media stream in real time to said second host operating system.
83. The network of claim 80, wherein said plurality of first host resources comprise at least one auxiliary processing unit.
84. The network of claim 83, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
85. The network of claim 83, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
86. The network of claim 83, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
87. The network of claim 83, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
88. The network of claim 83, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
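Claims 71-72 and 80-81 above describe two hosts whose resources are reachable across a pair of network processing units without passing through either host operating system. The following is a minimal in-process sketch of that topology; every class, method, and resource name here is invented for illustration and is not taken from the patent.

```python
# Toy model of claims 71-72 / 80-81: two hosts, each behind its own
# network processing unit (NPU). A resource local to host 1 is read
# from host 2's side through the two NPUs; no host OS object is
# involved anywhere on the path.
class Npu:
    def __init__(self, resources):
        self.resources = resources  # this host's local resources, by name
        self.peer = None            # the other host's NPU

    def connect(self, other):
        self.peer, other.peer = other, self

    def remote_read(self, name):
        # Ask the peer NPU for one of its host's resources directly.
        return self.peer.resources[name]

host1_resources = {"disk0": b"first host data"}
host2_resources = {"disk0": b"second host data"}

npu1, npu2 = Npu(host1_resources), Npu(host2_resources)
npu1.connect(npu2)

# Host 2's side reads host 1's "disk0" via npu2 -> npu1, OS bypassed.
print(npu2.remote_read("disk0"))  # -> b'first host data'
```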
89. Method for providing a distributed network of processing units, said method comprising:
a) providing a network processing unit; and
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and at least one auxiliary processing unit, wherein said central processing unit is loaded with a host operating system and wherein said at least one auxiliary processing unit bypasses said host operating system and communicates directly with said network processing unit.
90. The method of claim 89, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
91. The method of claim 90, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
92. A distributed network of processing units, said network comprising:
a network processing unit; and
at least one host, wherein said at least one host comprises a central processing unit (CPU) and at least one auxiliary processing unit, wherein said central processing unit is loaded with a host operating system and wherein said at least one auxiliary processing unit bypasses said host operating system and communicates directly with said network processing unit.
93. The network of claim 92, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
94. The network of claim 93, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
95. Method for providing a distributed network of processing units for interacting with at least one host that comprises a central processing unit (CPU), wherein said central processing unit is loaded with a host operating system, said method comprising:
a) providing a network processing unit; and
b) providing at least one auxiliary processing unit, wherein said network processing unit and said at least one auxiliary processing unit bypass said host operating system and communicate directly with each other.
96. The method of claim 95, wherein said at least one auxiliary processing unit comprises two auxiliary processing units that bypass said host operating system and communicate directly with each other through said network processing unit.
97. The method of claim 95, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
98. The method of claim 97, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
99. The method of claim 97, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
100. The method of claim 97, wherein said network protocol is User Datagram Protocol (UDP).
101. The method of claim 95, wherein said at least one auxiliary processing unit is perceived as a separate network appliance.
102. The method of claim 95, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
103. The method of claim 95, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
104. The method of claim 95, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
105. The method of claim 95, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
106. The method of claim 95, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
107. The method of claim 95, wherein said at least one host is a virtual host.
108. The method of claim 95, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
109. The method of claim 108, wherein each of said plurality of virtual hosts is capable of accessing said at least one auxiliary processing unit via said network processing unit.
110. The method of claim 95, wherein said at least one auxiliary processing unit comprises an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
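Claims 95-110 describe an auxiliary processing unit that speaks a standard network protocol (TCP/IP or UDP, claims 99-100) through a media access controller and is perceived as a separate network appliance (claim 101). The sketch below uses invented names, with a loopback UDP socket standing in for the MAC-level link, to show such a unit answering a request addressed directly to it:

```python
# Hypothetical sketch: an auxiliary storage processing unit exposed as
# its own UDP endpoint, so a peer addresses it like a separate network
# appliance. Loopback stands in for the network processing unit's link.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # ephemeral port for the demo
port = server.getsockname()[1]

def auxiliary_unit(sock):
    """Answer one datagram request, e.g. a storage read."""
    data, peer = sock.recvfrom(1024)
    if data == b"READ block0":
        sock.sendto(b"OK payload-for-block0", peer)
    else:
        sock.sendto(b"ERR unknown request", peer)

t = threading.Thread(target=auxiliary_unit, args=(server,))
t.start()

# A peer (another unit, or the network processing unit) talks UDP to
# the auxiliary unit directly; no host OS mediation in this model.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"READ block0", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
client.close()
server.close()
print(reply.decode())  # -> OK payload-for-block0
```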
111. A distributed network of processing units for interacting with at least one host that comprises a central processing unit (CPU), wherein said central processing unit is loaded with a host operating system, said network comprising:
a network processing unit; and
at least one auxiliary processing unit, wherein said network processing unit and said at least one auxiliary processing unit bypass said host operating system and communicate directly with each other.
112. The network of claim 111, wherein said at least one auxiliary processing unit comprises two auxiliary processing units that bypass said host operating system and communicate directly with each other through said network processing unit.
113. The network of claim 111, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
114. The network of claim 113, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
115. The network of claim 113, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
116. The network of claim 113, wherein said network protocol is User Datagram Protocol (UDP).
117. The network of claim 111, wherein said at least one auxiliary processing unit is perceived as a separate network appliance.
118. The network of claim 111, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
119. The network of claim 111, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
120. The network of claim 111, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
121. The network of claim 111, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
122. The network of claim 111, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
123. The network of claim 111, wherein said at least one host is a virtual host.
124. The network of claim 111, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
125. The network of claim 124, wherein each of said plurality of virtual hosts is capable of accessing said at least one auxiliary processing unit via said network processing unit.
126. The network of claim 111, wherein said at least one auxiliary processing unit comprises an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
127. The network of claim 111, wherein said network processing unit is implemented on a chipset.
128. The network of claim 127, wherein said at least one auxiliary processing unit is implemented on a chipset.
129. Method for providing a distributed network of processing units, said method comprising:
a) providing a network processing unit; and
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with said network processing unit.
130. Method for providing a distributed network of processing units, said method comprising:
a) providing a network processing unit; and
b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ User Datagram Protocol (UDP) to communicate with said network processing unit.
131. A distributed network of processing units, said network comprising:
a network processing unit; and
at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with said network processing unit.
132. A distributed network of processing units, said network comprising:
a network processing unit; and
at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ User Datagram Protocol (UDP) to communicate with said network processing unit.
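Taken together, claims 129-132 recite a plurality of auxiliary processing units that bypass the host operating system and communicate with each other through the network processing unit. A toy in-process model of that routing role follows; every identifier is invented, and real units would use TCP/IP or UDP as the claims recite:

```python
# Illustrative model: a NetworkProcessingUnit that routes messages
# between registered auxiliary processing units peer-to-peer, with the
# host operating system absent from the path entirely.
class NetworkProcessingUnit:
    def __init__(self):
        self.units = {}

    def register(self, name, unit):
        self.units[name] = unit
        unit.npu = self

    def route(self, dst, payload):
        return self.units[dst].receive(payload)

class AuxiliaryUnit:
    def __init__(self, kind):
        self.kind = kind
        self.npu = None

    def send(self, dst, payload):
        # Direct path: straight through the NPU, host OS bypassed.
        return self.npu.route(dst, payload)

    def receive(self, payload):
        return f"{self.kind} handled {payload!r}"

npu = NetworkProcessingUnit()
storage = AuxiliaryUnit("storage")
graphics = AuxiliaryUnit("graphics")
npu.register("storage", storage)
npu.register("graphics", graphics)

# The graphics unit fetches data from the storage unit via the NPU.
print(graphics.send("storage", "fetch texture0"))
```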
US10/144,658 2002-05-13 2002-05-13 Method and apparatus for providing an integrated network of processors Abandoned US20030212735A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
US10/144,658 US20030212735A1 (en) 2002-05-13 2002-05-13 Method and apparatus for providing an integrated network of processors
GB0514859A GB2413872B (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors
PCT/US2003/014908 WO2003096202A1 (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors
DE10392634T DE10392634T5 (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of computers
AU2003229034A AU2003229034A1 (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors
JP2004504124A JP2005526313A (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors
GB0425574A GB2405244B (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors
US11/473,832 US7383352B2 (en) 2002-05-13 2006-06-23 Method and apparatus for providing an integrated network of processors
US11/948,847 US7620738B2 (en) 2002-05-13 2007-11-30 Method and apparatus for providing an integrated network of processors
US12/608,881 US8051126B2 (en) 2002-05-13 2009-10-29 Method and apparatus for providing an integrated network of processors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/144,658 US20030212735A1 (en) 2002-05-13 2002-05-13 Method and apparatus for providing an integrated network of processors

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/473,832 Division US7383352B2 (en) 2002-05-13 2006-06-23 Method and apparatus for providing an integrated network of processors

Publications (1)

Publication Number Publication Date
US20030212735A1 true US20030212735A1 (en) 2003-11-13

Family

ID=29400386

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/144,658 Abandoned US20030212735A1 (en) 2002-05-13 2002-05-13 Method and apparatus for providing an integrated network of processors
US11/473,832 Expired - Lifetime US7383352B2 (en) 2002-05-13 2006-06-23 Method and apparatus for providing an integrated network of processors
US11/948,847 Expired - Lifetime US7620738B2 (en) 2002-05-13 2007-11-30 Method and apparatus for providing an integrated network of processors
US12/608,881 Expired - Lifetime US8051126B2 (en) 2002-05-13 2009-10-29 Method and apparatus for providing an integrated network of processors

Family Applications After (3)

Application Number Title Priority Date Filing Date
US11/473,832 Expired - Lifetime US7383352B2 (en) 2002-05-13 2006-06-23 Method and apparatus for providing an integrated network of processors
US11/948,847 Expired - Lifetime US7620738B2 (en) 2002-05-13 2007-11-30 Method and apparatus for providing an integrated network of processors
US12/608,881 Expired - Lifetime US8051126B2 (en) 2002-05-13 2009-10-29 Method and apparatus for providing an integrated network of processors

Country Status (6)

Country Link
US (4) US20030212735A1 (en)
JP (1) JP2005526313A (en)
AU (1) AU2003229034A1 (en)
DE (1) DE10392634T5 (en)
GB (1) GB2405244B (en)
WO (1) WO2003096202A1 (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030223418A1 (en) * 2002-06-04 2003-12-04 Sachin Desai Network packet steering
US20030223361A1 (en) * 2002-06-04 2003-12-04 Zahid Hussain System and method for hierarchical metering in a virtual router based network switch
US20040095934A1 (en) * 2002-11-18 2004-05-20 Cosine Communications, Inc. System and method for hardware accelerated packet multicast in a virtual routing system
US7177311B1 (en) * 2002-06-04 2007-02-13 Fortinet, Inc. System and method for routing traffic through a virtual router-based network switch
US20070067517A1 (en) * 2005-09-22 2007-03-22 Tzu-Jen Kuo Integrated physics engine and related graphics processing system
US20070073733A1 (en) * 2000-09-13 2007-03-29 Fortinet, Inc. Synchronized backup of an object manager global database as part of a control blade redundancy service
US20070162783A1 (en) * 2002-08-29 2007-07-12 Fortinet, Inc. System and method for virtual router failover in a network routing system
US7340535B1 (en) * 2002-06-04 2008-03-04 Fortinet, Inc. System and method for controlling routing in a virtual router system
US7376125B1 (en) 2002-06-04 2008-05-20 Fortinet, Inc. Service processing switch
US20080317231A1 (en) * 2004-11-18 2008-12-25 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US20090225754A1 (en) * 2004-09-24 2009-09-10 Fortinet, Inc. Scalable ip-services enabled multicast forwarding with efficient resource utilization
US7639715B1 (en) * 2005-09-09 2009-12-29 Qlogic, Corporation Dedicated application interface for network systems
US7735099B1 (en) 2005-12-23 2010-06-08 Qlogic, Corporation Method and system for processing network data
US7777748B2 (en) 2003-11-19 2010-08-17 Lucid Information Technology, Ltd. PC-level computing system with a multi-mode parallel graphics rendering subsystem employing an automatic mode controller, responsive to performance data collected during the run-time of graphics applications
US7796129B2 (en) 2003-11-19 2010-09-14 Lucid Information Technology, Ltd. Multi-GPU graphics processing subsystem for installation in a PC-based computing system having a central processing unit (CPU) and a PC bus
US7808504B2 (en) 2004-01-28 2010-10-05 Lucid Information Technology, Ltd. PC-based computing system having an integrated graphics subsystem supporting parallel graphics processing operations across a plurality of different graphics processing units (GPUS) from the same or different vendors, in a manner transparent to graphics applications
US7885207B2 (en) 2000-09-13 2011-02-08 Fortinet, Inc. Managing and provisioning virtual routers
US20110032942A1 (en) * 2000-09-13 2011-02-10 Fortinet, Inc. Fast path complex flow processing
US7890663B2 (en) 2001-06-28 2011-02-15 Fortinet, Inc. Identifying nodes in a ring network
US7912936B2 (en) 2000-09-13 2011-03-22 Nara Rajagopalan Managing interworking communications protocols
US7961194B2 (en) 2003-11-19 2011-06-14 Lucid Information Technology, Ltd. Method of controlling in real time the switching of modes of parallel operation of a multi-mode parallel graphics processing subsystem embodied within a host computing system
US8085776B2 (en) 2002-06-04 2011-12-27 Fortinet, Inc. Methods and systems for a distributed provider edge
US8085273B2 (en) 2003-11-19 2011-12-27 Lucid Information Technology, Ltd Multi-mode parallel graphics rendering system employing real-time automatic scene profiling and mode control
US8250357B2 (en) 2000-09-13 2012-08-21 Fortinet, Inc. Tunnel interface for securing traffic over a network
US8260918B2 (en) 2000-09-13 2012-09-04 Fortinet, Inc. Packet routing system and method
US8284207B2 (en) 2003-11-19 2012-10-09 Lucid Information Technology, Ltd. Method of generating digital images of objects in 3D scenes while eliminating object overdrawing within the multiple graphics processing pipeline (GPPLS) of a parallel graphics processing system generating partial color-based complementary-type images along the viewing direction using black pixel rendering and subsequent recompositing operations
US20130124932A1 (en) * 2011-11-14 2013-05-16 Lsi Corporation Solid-State Disk Manufacturing Self Test
US8497865B2 (en) 2006-12-31 2013-07-30 Lucid Information Technology, Ltd. Parallel graphics system employing multiple graphics processing pipelines with multiple graphics processing units (GPUS) and supporting an object division mode of parallel graphics processing using programmable pixel or vertex processing resources provided with the GPUS
US8503463B2 (en) 2003-08-27 2013-08-06 Fortinet, Inc. Heterogeneous media packet bridging
CN108064441A (en) * 2017-09-04 2018-05-22 深圳前海达闼云端智能科技有限公司 Method and system for accelerating network transmission optimization
CN110535947A (en) * 2019-08-30 2019-12-03 苏州浪潮智能科技有限公司 A kind of memory device set group configuration node switching method, device and equipment
US20230083741A1 (en) * 2012-04-12 2023-03-16 Supercell Oy System and method for controlling technical processes

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
US7839854B2 (en) * 2005-03-08 2010-11-23 Thomas Alexander System and method for a fast, programmable packet processing system
GB2462860B (en) * 2008-08-22 2012-05-16 Advanced Risc Mach Ltd Apparatus and method for communicating between a central processing unit and a graphics processing unit
KR101150928B1 (en) * 2010-08-26 2012-05-29 한국과학기술원 Network architecture and method for processing packet data using the same
US8438306B2 (en) * 2010-11-02 2013-05-07 Sonics, Inc. Apparatus and methods for on layer concurrency in an integrated circuit
CN106789985B (en) * 2016-12-08 2019-11-12 武汉斗鱼网络科技有限公司 Client validation method and system based on GPU algorithm

Citations (16)

Publication number Priority date Publication date Assignee Title
US5349679A (en) * 1990-08-28 1994-09-20 Fujitsu Limited Communication control unit for selecting a control mode of data communication and selectively bypassing an interprocessor interface
US5630174A (en) * 1995-02-03 1997-05-13 Cirrus Logic, Inc. Adapter for detecting whether a peripheral is standard or multimedia type format and selectively switching the peripheral to couple or bypass the system bus
US5630172A (en) * 1992-03-06 1997-05-13 Mitsubishi Denki Kabushiki Kaisha Data transfer control apparatus wherein an externally set value is compared to a transfer count with a comparison of the count values causing a transfer of bus use right
US5797028A (en) * 1995-09-11 1998-08-18 Advanced Micro Devices, Inc. Computer system having an improved digital and analog configuration
US5909546A (en) * 1996-03-08 1999-06-01 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Network interface having support for allowing remote operations with reply that bypass host computer interaction
US5974496A (en) * 1997-01-02 1999-10-26 Ncr Corporation System for transferring diverse data objects between a mass storage device and a network via an internal bus on a network card
US5987627A (en) * 1992-05-13 1999-11-16 Rawlings, Iii; Joseph H. Methods and apparatus for high-speed mass storage access in a computer system
US6097955A (en) * 1997-09-12 2000-08-01 Lucent Technologies, Inc. Apparatus and method for optimizing CPU usage in processing paging messages within a cellular communications system
US6226680B1 (en) * 1997-10-14 2001-05-01 Alacritech, Inc. Intelligent network interface system method for protocol processing
US6343086B1 (en) * 1996-09-09 2002-01-29 Natural Microsystems Corporation Global packet-switched computer network telephony server
US6345072B1 (en) * 1999-02-22 2002-02-05 Integrated Telecom Express, Inc. Universal DSL link interface between a DSL digital controller and a DSL codec
US20020059451A1 (en) * 2000-08-24 2002-05-16 Yaron Haviv System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics
US20020128986A1 (en) * 2001-02-23 2002-09-12 Peter Stutz Communication system for franking system
US20030035431A1 (en) * 2001-08-14 2003-02-20 Siemens Aktiengesellschaft Method and arrangement for controlling data packets
US20030041177A1 (en) * 2000-02-29 2003-02-27 Thomas Warschko Method for controlling the communication of individual computers in a multicomputer system
US20030145230A1 (en) * 2002-01-31 2003-07-31 Huimin Chiu System for exchanging data utilizing remote direct memory access

Family Cites Families (55)

Publication number Priority date Publication date Assignee Title
US5524250A (en) * 1991-08-23 1996-06-04 Silicon Graphics, Inc. Central processing unit for processing a plurality of threads using dedicated general purpose registers and masque register for providing access to the registers
DE69525556T2 (en) * 1994-03-21 2002-09-12 Avid Technology Inc Device and method executed on a computer for real-time multimedia data transmission in a distributed computer arrangement
US5696990A (en) * 1995-05-15 1997-12-09 Nvidia Corporation Method and apparatus for providing improved flow control for input/output operations in a computer system having a FIFO circuit and an overflow storage area
US5802320A (en) * 1995-05-18 1998-09-01 Sun Microsystems, Inc. System for packet filtering of data packets at a computer network interface
US5812800A (en) * 1995-09-11 1998-09-22 Advanced Micro Devices, Inc. Computer system which includes a local expansion bus and a dedicated real-time bus and including a multimedia memory for increased multi-media performance
US5826027A (en) * 1995-10-11 1998-10-20 Citrix Systems, Inc. Method for supporting an extensible and dynamically bindable protocol stack in a distributed process system
US5742773A (en) * 1996-04-18 1998-04-21 Microsoft Corporation Method and system for audio compression negotiation for multiple channels
US6101170A (en) * 1996-09-27 2000-08-08 Cabletron Systems, Inc. Secure fast packet switch having improved memory utilization
US6034963A (en) * 1996-10-31 2000-03-07 Iready Corporation Multiple network protocol encoder/decoder and data processor
US6006264A (en) * 1997-08-01 1999-12-21 Arrowpoint Communications, Inc. Method and system for directing a flow between a client and a server
US6094485A (en) * 1997-09-18 2000-07-25 Netscape Communications Corporation SSL step-up
US6757746B2 (en) * 1997-10-14 2004-06-29 Alacritech, Inc. Obtaining a destination address so that a network interface device can write network data without headers directly into host memory
US6298406B1 (en) 1997-10-24 2001-10-02 Sony Corporation Method of and apparatus for detecting direction of reception of bus packets and controlling direction of transmission of bus packets within an IEEE 1394 serial bus node
US7007126B2 (en) * 1998-02-13 2006-02-28 Intel Corporation Accessing a primary bus messaging unit from a secondary bus through a PCI bridge
US6092124A (en) 1998-04-17 2000-07-18 Nvidia Corporation Method and apparatus for accelerating the rendering of images
US6157955A (en) * 1998-06-15 2000-12-05 Intel Corporation Packet processing system including a policy engine having a classification unit
US6438678B1 (en) * 1998-06-15 2002-08-20 Cisco Technology, Inc. Apparatus and method for operating on data in a data communications system
US6327660B1 (en) * 1998-09-18 2001-12-04 Intel Corporation Method for securing communications in a pre-boot environment
US7136926B1 (en) * 1998-12-31 2006-11-14 PMC-Sierra US, Inc. Method and apparatus for high-speed network rule processing
US6542992B1 (en) * 1999-01-26 2003-04-01 3Com Corporation Control and coordination of encryption and compression between network entities
JP4035803B2 (en) * 1999-02-19 2008-01-23 富士通株式会社 Mobile packet communication system
JP2000332817A (en) * 1999-05-18 2000-11-30 Fujitsu Ltd Packet processing unit
JP2001084182A (en) 1999-09-16 2001-03-30 Matsushita Electric Ind Co Ltd Bus connecting device, computer and recording medium
US6389419B1 (en) * 1999-10-06 2002-05-14 Cisco Technology, Inc. Storing and retrieving connection information using bidirectional hashing of connection identifiers
US7149222B2 (en) * 1999-12-21 2006-12-12 Converged Access, Inc. Integrated access point network device
US6704794B1 (en) * 2000-03-03 2004-03-09 Nokia Intelligent Edge Routers Inc. Cell reassembly for packet based networks
US6714985B1 (en) * 2000-04-28 2004-03-30 Cisco Technology, Inc. Method and apparatus for efficiently reassembling fragments received at an intermediate station in a computer network
DE10023051B4 (en) 2000-05-11 2004-02-19 Roche Diagnostics Gmbh Process for the preparation of fluorescein isothiocyanate sinistrin, its use and diagnostic preparation containing fluorescein isothiocyanate sinistrin
EP1162795A3 (en) * 2000-06-09 2007-12-26 Broadcom Corporation Gigabit switch supporting improved layer 3 switching
JP4479064B2 (en) 2000-06-19 2010-06-09 ソニー株式会社 Information input / output device
ATE267502T1 (en) * 2000-07-05 2004-06-15 Roke Manor Research METHOD FOR OPERATING A PACKET REASSEMBLY BUFFER AND NETWORK ROUTER
US6785780B1 (en) * 2000-08-31 2004-08-31 Micron Technology, Inc. Distributed processor memory module and method
WO2002023463A1 (en) * 2000-09-11 2002-03-21 David Edgar System, method, and computer program product for optimization and acceleration of data transport and processing
US20020078118A1 (en) * 2000-12-19 2002-06-20 Cone Robert W. Network interface application specific integrated circuit to allow direct attachment for an appliance,such as a printer device
US20020083344A1 (en) * 2000-12-21 2002-06-27 Vairavan Kannan P. Integrated intelligent inter/intra networking device
US7116640B2 (en) * 2000-12-22 2006-10-03 Mitchell Paul Tasman Architecture and mechanism for forwarding layer interfacing for networks
US6781955B2 (en) * 2000-12-29 2004-08-24 Ericsson Inc. Calling service of a VoIP device in a VLAN environment
WO2002059757A1 (en) 2001-01-26 2002-08-01 Iready Corporation Communications processor
US7328263B1 (en) * 2001-01-30 2008-02-05 Cisco Technology, Inc. Controlling access of concurrent users of computer resources in a distributed system using an improved semaphore counting approach
US7017175B2 (en) 2001-02-02 2006-03-21 Opentv, Inc. Digital television application protocol for interactive television
US6832261B1 (en) * 2001-02-04 2004-12-14 Cisco Technology, Inc. Method and apparatus for distributed resequencing and reassembly of subdivided packets
JP3873639B2 (en) * 2001-03-12 2007-01-24 株式会社日立製作所 Network connection device
US6950862B1 (en) * 2001-05-07 2005-09-27 3Com Corporation System and method for offloading a computational service on a point-to-point communication link
US7010727B1 (en) * 2001-06-15 2006-03-07 Nortel Networks Limited Method and system for negotiating compression techniques to be utilized in packet data communications
US7027443B2 (en) * 2001-08-23 2006-04-11 Pmc-Sierra Ltd. Reassembly engines for multilink applications
US20030061296A1 (en) * 2001-09-24 2003-03-27 International Business Machines Corporation Memory semantic storage I/O
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US7500102B2 (en) * 2002-01-25 2009-03-03 Microsoft Corporation Method and apparatus for fragmenting and reassembling internet key exchange data packets
US7076803B2 (en) * 2002-01-28 2006-07-11 International Business Machines Corporation Integrated intrusion detection services
US7650634B2 (en) * 2002-02-08 2010-01-19 Juniper Networks, Inc. Intelligent integrated network security device
US8370936B2 (en) * 2002-02-08 2013-02-05 Juniper Networks, Inc. Multi-method gateway-based network security systems and methods
US6944706B2 (en) * 2002-02-22 2005-09-13 Texas Instruments Incorporated System and method for efficiently processing broadband network traffic
US6735647B2 (en) * 2002-09-05 2004-05-11 International Business Machines Corporation Data reordering mechanism for high performance networks
US7397797B2 (en) * 2002-12-13 2008-07-08 Nvidia Corporation Method and apparatus for performing network processing functions
US7483376B2 (en) * 2004-06-17 2009-01-27 International Business Machines Corporation Method and apparatus for discovering path maximum transmission unit (PMTU)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5349679A (en) * 1990-08-28 1994-09-20 Fujitsu Limited Communication control unit for selecting a control mode of data communication and selectively bypassing an interprocessor interface
US5630172A (en) * 1992-03-06 1997-05-13 Mitsubishi Denki Kabushiki Kaisha Data transfer control apparatus wherein an externally set value is compared to a transfer count with a comparison of the count values causing a transfer of bus use right
US5987627A (en) * 1992-05-13 1999-11-16 Rawlings, Iii; Joseph H. Methods and apparatus for high-speed mass storage access in a computer system
US5630174A (en) * 1995-02-03 1997-05-13 Cirrus Logic, Inc. Adapter for detecting whether a peripheral is standard or multimedia type format and selectively switching the peripheral to couple or bypass the system bus
US5797028A (en) * 1995-09-11 1998-08-18 Advanced Micro Devices, Inc. Computer system having an improved digital and analog configuration
US5909546A (en) * 1996-03-08 1999-06-01 Mitsubishi Electric Information Technology Center America, Inc. (ITA) Network interface having support for allowing remote operations with reply that bypass host computer interaction
US6343086B1 (en) * 1996-09-09 2002-01-29 Natural Microsystems Corporation Global packet-switched computer network telephony server
US5974496A (en) * 1997-01-02 1999-10-26 Ncr Corporation System for transferring diverse data objects between a mass storage device and a network via an internal bus on a network card
US6097955A (en) * 1997-09-12 2000-08-01 Lucent Technologies, Inc. Apparatus and method for optimizing CPU usage in processing paging messages within a cellular communications system
US6226680B1 (en) * 1997-10-14 2001-05-01 Alacritech, Inc. Intelligent network interface system method for protocol processing
US6345072B1 (en) * 1999-02-22 2002-02-05 Integrated Telecom Express, Inc. Universal DSL link interface between a DSL digital controller and a DSL codec
US20030041177A1 (en) * 2000-02-29 2003-02-27 Thomas Warschko Method for controlling the communication of individual computers in a multicomputer system
US20020059451A1 (en) * 2000-08-24 2002-05-16 Yaron Haviv System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics
US20020128986A1 (en) * 2001-02-23 2002-09-12 Peter Stutz Communication system for franking system
US20030035431A1 (en) * 2001-08-14 2003-02-20 Siemens Aktiengesellschaft Method and arrangement for controlling data packets
US20030145230A1 (en) * 2002-01-31 2003-07-31 Huimin Chiu System for exchanging data utilizing remote direct memory access

Cited By (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9124555B2 (en) 2000-09-13 2015-09-01 Fortinet, Inc. Tunnel interface for securing traffic over a network
US7885207B2 (en) 2000-09-13 2011-02-08 Fortinet, Inc. Managing and provisioning virtual routers
US9160716B2 (en) 2000-09-13 2015-10-13 Fortinet, Inc. Tunnel interface for securing traffic over a network
US9258280B1 (en) 2000-09-13 2016-02-09 Fortinet, Inc. Tunnel interface for securing traffic over a network
US7912936B2 (en) 2000-09-13 2011-03-22 Nara Rajagopalan Managing interworking communications protocols
US8069233B2 (en) 2000-09-13 2011-11-29 Fortinet, Inc. Switch management system and method
US20070073733A1 (en) * 2000-09-13 2007-03-29 Fortinet, Inc. Synchronized backup of an object manager global database as part of a control blade redundancy service
US8260918B2 (en) 2000-09-13 2012-09-04 Fortinet, Inc. Packet routing system and method
US9853948B2 (en) 2000-09-13 2017-12-26 Fortinet, Inc. Tunnel interface for securing traffic over a network
US20110032942A1 (en) * 2000-09-13 2011-02-10 Fortinet, Inc. Fast path complex flow processing
US8320279B2 (en) 2000-09-13 2012-11-27 Fortinet, Inc. Managing and provisioning virtual routers
US8250357B2 (en) 2000-09-13 2012-08-21 Fortinet, Inc. Tunnel interface for securing traffic over a network
US9391964B2 (en) 2000-09-13 2016-07-12 Fortinet, Inc. Tunnel interface for securing traffic over a network
US9667604B2 (en) 2000-09-13 2017-05-30 Fortinet, Inc. Tunnel interface for securing traffic over a network
US7539744B2 (en) 2000-09-13 2009-05-26 Fortinet, Inc. Network operating system for maintaining redundant master control blade management information
US7890663B2 (en) 2001-06-28 2011-02-15 Fortinet, Inc. Identifying nodes in a ring network
US9602303B2 (en) 2001-06-28 2017-03-21 Fortinet, Inc. Identifying nodes in a ring network
US9998337B2 (en) 2001-06-28 2018-06-12 Fortinet, Inc. Identifying nodes in a ring network
US7720053B2 (en) 2002-06-04 2010-05-18 Fortinet, Inc. Service processing switch
US9215178B2 (en) 2002-06-04 2015-12-15 Cisco Technology, Inc. Network packet steering via configurable association of packet processing resources and network interfaces
US8064462B2 (en) 2002-06-04 2011-11-22 Fortinet, Inc. Service processing switch
US9967200B2 (en) 2002-06-04 2018-05-08 Fortinet, Inc. Service processing switch
US20030223361A1 (en) * 2002-06-04 2003-12-04 Zahid Hussain System and method for hierarchical metering in a virtual router based network switch
US8111690B2 (en) 2002-06-04 2012-02-07 Google Inc. Routing traffic through a virtual router-based network switch
US8085776B2 (en) 2002-06-04 2011-12-27 Fortinet, Inc. Methods and systems for a distributed provider edge
US7376125B1 (en) 2002-06-04 2008-05-20 Fortinet, Inc. Service processing switch
US20030223418A1 (en) * 2002-06-04 2003-12-04 Sachin Desai Network packet steering
US7340535B1 (en) * 2002-06-04 2008-03-04 Fortinet, Inc. System and method for controlling routing in a virtual router system
US8068503B2 (en) 2002-06-04 2011-11-29 Fortinet, Inc. Network packet steering via configurable association of processing resources and netmods or line interface ports
US7668087B2 (en) 2002-06-04 2010-02-23 Fortinet, Inc. Hierarchical metering in a virtual router-based network switch
US7161904B2 (en) 2002-06-04 2007-01-09 Fortinet, Inc. System and method for hierarchical metering in a virtual router based network switch
US20070127382A1 (en) * 2002-06-04 2007-06-07 Fortinet, Inc. Routing traffic through a virtual router-based network switch
US7203192B2 (en) 2002-06-04 2007-04-10 Fortinet, Inc. Network packet steering
US8848718B2 (en) 2002-06-04 2014-09-30 Google Inc. Hierarchical metering in a virtual router-based network switch
US7177311B1 (en) * 2002-06-04 2007-02-13 Fortinet, Inc. System and method for routing traffic through a virtual router-based network switch
US8306040B2 (en) 2002-06-04 2012-11-06 Fortinet, Inc. Network packet steering via configurable association of processing resources and network interfaces
US8638802B2 (en) 2002-06-04 2014-01-28 Cisco Technology, Inc. Network packet steering via configurable association of packet processing resources and network interfaces
US8819486B2 (en) 2002-08-29 2014-08-26 Google Inc. Fault tolerant routing in a non-hot-standby configuration of a network routing system
US8412982B2 (en) 2002-08-29 2013-04-02 Google Inc. Fault tolerant routing in a non-hot-standby configuration of a network routing system
US20070162783A1 (en) * 2002-08-29 2007-07-12 Fortinet, Inc. System and method for virtual router failover in a network routing system
US7278055B2 (en) 2002-08-29 2007-10-02 Fortinet, Inc. System and method for virtual router failover in a network routing system
US7761743B2 (en) 2002-08-29 2010-07-20 Fortinet, Inc. Fault tolerant routing in a non-hot-standby configuration of a network routing system
US9407449B2 (en) 2002-11-18 2016-08-02 Fortinet, Inc. Hardware-accelerated packet multicasting
US8644311B2 (en) 2002-11-18 2014-02-04 Fortinet, Inc. Hardware-accelerated packet multicasting in a virtual routing system
US9014186B2 (en) 2002-11-18 2015-04-21 Fortinet, Inc. Hardware-accelerated packet multicasting
US10200275B2 (en) 2002-11-18 2019-02-05 Fortinet, Inc. Hardware-accelerated packet multicasting
US20040095934A1 (en) * 2002-11-18 2004-05-20 Cosine Communications, Inc. System and method for hardware accelerated packet multicast in a virtual routing system
US8503463B2 (en) 2003-08-27 2013-08-06 Fortinet, Inc. Heterogeneous media packet bridging
US9331961B2 (en) 2003-08-27 2016-05-03 Fortinet, Inc. Heterogeneous media packet bridging
US9509638B2 (en) 2003-08-27 2016-11-29 Fortinet, Inc. Heterogeneous media packet bridging
US9853917B2 (en) 2003-08-27 2017-12-26 Fortinet, Inc. Heterogeneous media packet bridging
US7800610B2 (en) 2003-11-19 2010-09-21 Lucid Information Technology, Ltd. PC-based computing system employing a multi-GPU graphics pipeline architecture supporting multiple modes of GPU parallelization dynamically controlled while running a graphics application
US7843457B2 (en) 2003-11-19 2010-11-30 Lucid Information Technology, Ltd. PC-based computing systems employing a bridge chip having a routing unit for distributing geometrical data and graphics commands to parallelized GPU-driven pipeline cores supported on a plurality of graphics cards and said bridge chip during the running of a graphics application
US9405586B2 (en) 2003-11-19 2016-08-02 Lucidlogix Technologies, Ltd. Method of dynamic load-balancing within a PC-based computing system employing a multiple GPU-based graphics pipeline architecture supporting multiple modes of GPU parallelization
US8125487B2 (en) 2003-11-19 2012-02-28 Lucid Information Technology, Ltd Game console system capable of paralleling the operation of multiple graphic processing units (GPUS) employing a graphics hub device supported on a game console board
US8134563B2 (en) 2003-11-19 2012-03-13 Lucid Information Technology, Ltd Computing system having multi-mode parallel graphics rendering subsystem (MMPGRS) employing real-time automatic scene profiling and mode control
US7796130B2 (en) 2003-11-19 2010-09-14 Lucid Information Technology, Ltd. PC-based computing system employing multiple graphics processing units (GPUS) interfaced with the central processing unit (CPU) using a PC bus and a hardware hub, and parallelized according to the object division mode of parallel operation
US7961194B2 (en) 2003-11-19 2011-06-14 Lucid Information Technology, Ltd. Method of controlling in real time the switching of modes of parallel operation of a multi-mode parallel graphics processing subsystem embodied within a host computing system
US7944450B2 (en) 2003-11-19 2011-05-17 Lucid Information Technology, Ltd. Computing system having a hybrid CPU/GPU fusion-type graphics processing pipeline (GPPL) architecture
US8284207B2 (en) 2003-11-19 2012-10-09 Lucid Information Technology, Ltd. Method of generating digital images of objects in 3D scenes while eliminating object overdrawing within the multiple graphics processing pipeline (GPPLS) of a parallel graphics processing system generating partial color-based complementary-type images along the viewing direction using black pixel rendering and subsequent recompositing operations
US7940274B2 (en) 2003-11-19 2011-05-10 Lucid Information Technology, Ltd Computing system having a multiple graphics processing pipeline (GPPL) architecture supported on multiple external graphics cards connected to an integrated graphics device (IGD) embodied within a bridge circuit
US7800611B2 (en) 2003-11-19 2010-09-21 Lucid Information Technology, Ltd. Graphics hub subsystem for interfacing parallelized graphics processing units (GPUs) with the central processing unit (CPU) of a PC-based computing system having a CPU interface module and a PC bus
US7800619B2 (en) 2003-11-19 2010-09-21 Lucid Information Technology, Ltd. Method of providing a PC-based computing system with parallel graphics processing capabilities
US8085273B2 (en) 2003-11-19 2011-12-27 Lucid Information Technology, Ltd Multi-mode parallel graphics rendering system employing real-time automatic scene profiling and mode control
US7808499B2 (en) 2003-11-19 2010-10-05 Lucid Information Technology, Ltd. PC-based computing system employing parallelized graphics processing units (GPUS) interfaced with the central processing unit (CPU) using a PC bus and a hardware graphics hub having a router
US7812846B2 (en) 2003-11-19 2010-10-12 Lucid Information Technology, Ltd PC-based computing system employing a silicon chip of monolithic construction having a routing unit, a control unit and a profiling unit for parallelizing the operation of multiple GPU-driven pipeline cores according to the object division mode of parallel operation
US9584592B2 (en) 2003-11-19 2017-02-28 Lucidlogix Technologies Ltd. Internet-based graphics application profile management system for updating graphic application profiles stored within the multi-GPU graphics rendering subsystems of client machines running graphics-based applications
US8629877B2 (en) 2003-11-19 2014-01-14 Lucid Information Technology, Ltd. Method of and system for time-division based parallelization of graphics processing units (GPUs) employing a hardware hub with router interfaced between the CPU and the GPUs for the transfer of geometric data and graphics commands and rendered pixel data within the system
US7796129B2 (en) 2003-11-19 2010-09-14 Lucid Information Technology, Ltd. Multi-GPU graphics processing subsystem for installation in a PC-based computing system having a central processing unit (CPU) and a PC bus
US7777748B2 (en) 2003-11-19 2010-08-17 Lucid Information Technology, Ltd. PC-level computing system with a multi-mode parallel graphics rendering subsystem employing an automatic mode controller, responsive to performance data collected during the run-time of graphics applications
US8754894B2 (en) 2003-11-19 2014-06-17 Lucidlogix Software Solutions, Ltd. Internet-based graphics application profile management system for updating graphic application profiles stored within the multi-GPU graphics rendering subsystems of client machines running graphics-based applications
US8754897B2 (en) 2004-01-28 2014-06-17 Lucidlogix Software Solutions, Ltd. Silicon chip of a monolithic construction for use in implementing multiple graphic cores in a graphics processing and display subsystem
US7834880B2 (en) 2004-01-28 2010-11-16 Lucid Information Technology, Ltd. Graphics processing and display system employing multiple graphics cores on a silicon chip of monolithic construction
US9659340B2 (en) 2004-01-28 2017-05-23 Lucidlogix Technologies Ltd Silicon chip of a monolithic construction for use in implementing multiple graphic cores in a graphics processing and display subsystem
US7812845B2 (en) 2004-01-28 2010-10-12 Lucid Information Technology, Ltd. PC-based computing system employing a silicon chip implementing parallelized GPU-driven pipelines cores supporting multiple modes of parallelization dynamically controlled while running a graphics application
US7812844B2 (en) 2004-01-28 2010-10-12 Lucid Information Technology, Ltd. PC-based computing system employing a silicon chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores according to the object division mode of parallel operation during the running of a graphics application
US7808504B2 (en) 2004-01-28 2010-10-05 Lucid Information Technology, Ltd. PC-based computing system having an integrated graphics subsystem supporting parallel graphics processing operations across a plurality of different graphics processing units (GPUS) from the same or different vendors, in a manner transparent to graphics applications
US9166805B1 (en) 2004-09-24 2015-10-20 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US9167016B2 (en) 2004-09-24 2015-10-20 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US20090225754A1 (en) * 2004-09-24 2009-09-10 Fortinet, Inc. Scalable ip-services enabled multicast forwarding with efficient resource utilization
US8369258B2 (en) 2004-09-24 2013-02-05 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US9319303B2 (en) 2004-09-24 2016-04-19 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US7881244B2 (en) 2004-09-24 2011-02-01 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US8213347B2 (en) 2004-09-24 2012-07-03 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US10038567B2 (en) 2004-09-24 2018-07-31 Fortinet, Inc. Scalable IP-services enabled multicast forwarding with efficient resource utilization
US20090007228A1 (en) * 2004-11-18 2009-01-01 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US7876683B2 (en) 2004-11-18 2011-01-25 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US7869361B2 (en) 2004-11-18 2011-01-11 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US7843813B2 (en) 2004-11-18 2010-11-30 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US7961615B2 (en) 2004-11-18 2011-06-14 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US20080317231A1 (en) * 2004-11-18 2008-12-25 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US20080320553A1 (en) * 2004-11-18 2008-12-25 Fortinet, Inc. Managing hierarchically organized subscriber profiles
US10867364B2 (en) 2005-01-25 2020-12-15 Google Llc System on chip having processing and graphics units
US11341602B2 (en) 2005-01-25 2022-05-24 Google Llc System on chip having processing and graphics units
US10614545B2 (en) 2005-01-25 2020-04-07 Google Llc System on chip having processing and graphics units
US7639715B1 (en) * 2005-09-09 2009-12-29 Qlogic, Corporation Dedicated application interface for network systems
US20070067517A1 (en) * 2005-09-22 2007-03-22 Tzu-Jen Kuo Integrated physics engine and related graphics processing system
US7735099B1 (en) 2005-12-23 2010-06-08 Qlogic, Corporation Method and system for processing network data
US8497865B2 (en) 2006-12-31 2013-07-30 Lucid Information Technology, Ltd. Parallel graphics system employing multiple graphics processing pipelines with multiple graphics processing units (GPUS) and supporting an object division mode of parallel graphics processing using programmable pixel or vertex processing resources provided with the GPUS
US10803970B2 (en) * 2011-11-14 2020-10-13 Seagate Technology Llc Solid-state disk manufacturing self test
US20130124932A1 (en) * 2011-11-14 2013-05-16 Lsi Corporation Solid-State Disk Manufacturing Self Test
US20230083741A1 (en) * 2012-04-12 2023-03-16 Supercell Oy System and method for controlling technical processes
US11771988B2 (en) * 2012-04-12 2023-10-03 Supercell Oy System and method for controlling technical processes
US20230415041A1 (en) * 2012-04-12 2023-12-28 Supercell Oy System and method for controlling technical processes
CN108064441A (en) * 2017-09-04 2018-05-22 深圳前海达闼云端智能科技有限公司 Method and system for accelerating network transmission optimization
CN110535947A (en) * 2019-08-30 2019-12-03 苏州浪潮智能科技有限公司 Method, apparatus and device for switching configuration nodes in a storage device cluster

Also Published As

Publication number Publication date
GB2405244A (en) 2005-02-23
AU2003229034A1 (en) 2003-11-11
WO2003096202A1 (en) 2003-11-20
US8051126B2 (en) 2011-11-01
GB2405244B (en) 2006-01-04
US20080071926A1 (en) 2008-03-20
US7383352B2 (en) 2008-06-03
US20080104271A1 (en) 2008-05-01
DE10392634T5 (en) 2005-06-02
GB0425574D0 (en) 2004-12-22
US20100049780A1 (en) 2010-02-25
JP2005526313A (en) 2005-09-02
US7620738B2 (en) 2009-11-17

Similar Documents

Publication Publication Date Title
US8051126B2 (en) Method and apparatus for providing an integrated network of processors
US7120653B2 (en) Method and apparatus for providing an integrated file system
EP1570361B1 (en) Method and apparatus for performing network processing functions
US7924868B1 (en) Internet protocol (IP) router residing in a processor chipset
US8094670B1 (en) Method and apparatus for performing network processing functions
US8103785B2 (en) Network acceleration techniques
US8671152B2 (en) Network processor system and network protocol processing method
US7949766B2 (en) Offload stack for network, block and file input and output
US8649395B2 (en) Protocol stack using shared memory
US7983266B2 (en) Generalized serialization queue framework for protocol processing
WO2005015424A1 (en) Resolving a distributed topology to stream data
US6920484B2 (en) Method and apparatus for providing an integrated virtual disk subsystem
JP2005085284A (en) Multiple offload of network condition object supporting failover event
GB2413872A (en) An integrated network of processors
US8924504B2 (en) Coprocessing module for processing ethernet data and method for use therewith
JP2004336437A (en) Circuit and system for video image receiving
Lim et al. A single-chip storage LSI for home networks
US20060085562A1 (en) Devices and methods for remote computing using a network processor
Crowley et al. Network acceleration techniques

Legal Events

Date Code Title Description
AS Assignment

Owner name: NVIDIA CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HICOK, GARY;ALFIERI, ROBERT A.;REEL/FRAME:013214/0583

Effective date: 20020730

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION