US20040136241A1 - Pipeline accelerator for improved computing architecture and related system and method


Info

Publication number
US20040136241A1
US20040136241A1 (application US10/683,929)
Authority
US
United States
Prior art keywords
data
pipeline
hardwired
operable
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/683,929
Inventor
John Rapp
Larry Jackson
Mark Jones
Troy Cherasaro
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lockheed Martin Corp filed Critical Lockheed Martin Corp
Priority to US10/683,929 priority Critical patent/US20040136241A1/en
Assigned to LOCKHEED MARTIN CORPORATION reassignment LOCKHEED MARTIN CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHERASARO, TROY, JACKSON, LARRY, JONES, MARK, RAPP, JOHN W.
Priority to EP03781552A priority patent/EP1570344B1/en
Priority to KR1020057007751A priority patent/KR101012745B1/en
Priority to CA2503613A priority patent/CA2503613C/en
Priority to EP03781554A priority patent/EP1559005A2/en
Priority to DE60318105T priority patent/DE60318105T2/en
Priority to AU2003287321A priority patent/AU2003287321B2/en
Priority to AU2003287319A priority patent/AU2003287319B2/en
Priority to CA2503622A priority patent/CA2503622C/en
Priority to PCT/US2003/034557 priority patent/WO2004042560A2/en
Priority to AU2003287318A priority patent/AU2003287318B2/en
Priority to JP2005502222A priority patent/JP2006515941A/en
Priority to KR1020057007749A priority patent/KR101062214B1/en
Priority to KR1020057007748A priority patent/KR101035646B1/en
Priority to PCT/US2003/034559 priority patent/WO2004042574A2/en
Priority to CA2503611A priority patent/CA2503611C/en
Priority to EP03781551A priority patent/EP1576471A2/en
Priority to ES03781552T priority patent/ES2300633T3/en
Priority to AU2003287320A priority patent/AU2003287320B2/en
Priority to KR1020057007752A priority patent/KR100996917B1/en
Priority to CA002503620A priority patent/CA2503620A1/en
Priority to PCT/US2003/034556 priority patent/WO2004042569A2/en
Priority to JP2005502224A priority patent/JP2006518057A/en
Priority to KR1020057007750A priority patent/KR101012744B1/en
Priority to JP2005502225A priority patent/JP2006518058A/en
Priority to JP2005502223A priority patent/JP2006518056A/en
Priority to EP03781553A priority patent/EP1573515A2/en
Priority to PCT/US2003/034555 priority patent/WO2004042561A2/en
Priority to AU2003287317A priority patent/AU2003287317B2/en
Priority to EP03781550A priority patent/EP1573514A2/en
Priority to JP2005502226A priority patent/JP2006518495A/en
Priority to CA002503617A priority patent/CA2503617A1/en
Publication of US20040136241A1 publication Critical patent/US20040136241A1/en
Priority to JP2011070196A priority patent/JP5568502B2/en
Priority to JP2011071988A priority patent/JP2011170868A/en
Priority to JP2011081733A priority patent/JP2011175655A/en
Priority to JP2011083371A priority patent/JP2011154711A/en
Priority to JP2013107858A priority patent/JP5688432B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7867Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance

Definitions

  • a common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm.
  • FIG. 1 is a schematic block diagram of a conventional computing machine 10 having a multi-processor architecture.
  • the machine 10 includes a master processor 12 and coprocessors 14 1 - 14 n , which communicate with each other and the master processor via a bus 16 , an input port 18 for receiving raw data from a remote device (not shown in FIG. 1), and an output port 20 for providing processed data to the remote source.
  • the machine 10 also includes a memory 22 for the master processor 12 , respective memories 24 1 - 24 n for the coprocessors 14 1 - 14 n , and a memory 26 that the master processor and coprocessors share via the bus 16 .
  • the memory 22 serves as both a program and a working memory for the master processor 12
  • each memory 24 1 - 24 n serves as both a program and a working memory for a respective coprocessor 14 1 - 14 n
  • the shared memory 26 allows the master processor 12 and the coprocessors 14 to transfer data among themselves, and from/to the remote device via the ports 18 and 20 , respectively.
  • the master processor 12 and the coprocessors 14 also receive a common clock signal that controls the speed at which the machine 10 processes the raw data.
  • the computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14 .
  • the remote source (not shown in FIG. 1) such as a sonar array loads the raw data via the port 18 into a section of the shared memory 26 , which acts as a first-in-first-out (FIFO) buffer (not shown) for the raw data.
  • the master processor 12 retrieves the raw data from the memory 26 via the bus 16 , and then the master processor and the coprocessors 14 process the raw data, transferring data among themselves as necessary via the bus 16 .
  • the master processor 12 loads the processed data into another FIFO buffer (not shown) defined in the shared memory 26 , and the remote source retrieves the processed data from this FIFO via the port 20 .
  • the computing machine 10 processes the raw data by sequentially performing n+1 respective operations on the raw data, where these operations together compose a processing algorithm such as a Fast Fourier Transform (FFT). More specifically, the machine 10 forms a data-processing pipeline from the master processor 12 and the coprocessors 14 . For a given frequency of the clock signal, such a pipeline often allows the machine 10 to process the raw data faster than a machine having only a single processor.
  • FFT Fast Fourier Transform
  • the master processor 12 After retrieving the raw data from the raw-data FIFO (not shown) in the memory 26 , the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26 .
  • the processor 12 executes a program stored in the memory 22 , and performs the above-described actions under the control of the program.
  • the processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
  • the coprocessor 14 1 performs a second operation, such as a logarithmic function, on the first result. This second operation yields a second result, which the coprocessor 14 1 stores in a second-result FIFO (not shown) defined within the memory 26 .
  • the coprocessor 14 1 executes a program stored in the memory 24 1 , and performs the above-described actions under the control of the program.
  • the coprocessor 14 1 may also use the memory 24 1 as working memory to temporarily store data that the coprocessor generates at intermediate intervals of the second operation.
  • the coprocessors 14 2 - 14 n sequentially perform the third through n th operations on the second through (n−1) th results in a manner similar to that discussed above for the coprocessor 14 1 .
  • the n th operation, which is performed by the coprocessor 14 n , yields the final result, i.e., the processed data.
  • the coprocessor 14 n loads the processed data into a processed-data FIFO (not shown) defined within the memory 26 , and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
  • the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations. Specifically, the single processor cannot retrieve a new set of the raw data until it performs all n+1 operations on the previous set of raw data. But using the pipeline technique discussed above, the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
  • the computing machine 10 may process the raw data in parallel by simultaneously performing n+1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n+1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially perform all n+1 operations on respective sets of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
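  • As a rough illustration of the speedup described above, the following sketch (hypothetical, not part of the patent) compares the time a single processor needs to apply n+1 operations to a stream of data sets with the time an (n+1)-stage processor pipeline needs, assuming each operation takes one time unit per data set.

```python
# Hypothetical throughput comparison; "time unit" and the function names are
# illustrative assumptions, not terms from the patent.

def single_processor_time(num_sets: int, num_ops: int) -> int:
    # A lone processor must finish all n+1 operations on one data set
    # before it can retrieve the next set.
    return num_sets * num_ops

def pipelined_time(num_sets: int, num_ops: int) -> int:
    # With one processor per operation, a new data set can enter every time
    # unit once the pipeline is full: fill time plus one unit per extra set.
    return num_ops + (num_sets - 1)

if __name__ == "__main__":
    sets, ops = 1000, 4            # e.g., a master processor plus 3 coprocessors
    t_single = single_processor_time(sets, ops)
    t_pipe = pipelined_time(sets, ops)
    print(f"single: {t_single}, pipelined: {t_pipe}, "
          f"speedup ~ {t_single / t_pipe:.2f} (approaches n+1 = {ops})")
```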
  • a processing algorithm such as an FFT
  • the computing machine 10 can process data more quickly than a single-processor computer machine (not shown in FIG. 1), the data-processing speed of the machine 10 is often significantly less than the frequency of the processor clock. Specifically, the data-processing speed of the computing machine 10 is limited by the time that the master processor 12 and coprocessors 14 require to process data. For brevity, an example of this speed limitation is discussed in conjunction with the master processor 12 , although it is understood that this discussion also applies to the coprocessors 14 . As discussed above, the master processor 12 executes a program that controls the processor to manipulate data in a desired manner. This program includes a sequence of instructions that the processor 12 executes.
  • the processor 12 typically requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single value of data. For example, suppose that the processor 12 is to multiply a first data value A (not shown) by a second data value B (not shown). During a first clock cycle, the processor 12 retrieves a multiply instruction from the memory 22 . During second and third clock cycles, the processor 12 respectively retrieves A and B from the memory 26 . During a fourth clock cycle, the processor 12 multiplies A and B, and, during a fifth clock cycle, stores the resulting product in the memory 22 or 26 or provides the resulting product to the remote device (not shown). This is a best-case scenario, because in many cases the processor 12 requires additional clock cycles for overhead tasks such as initializing and closing counters. Therefore, at best the processor 12 requires five clock cycles, or an average of 2.5 clock cycles per data value, to process A and B.
  • the speed at which the computing machine 10 processes data is often significantly lower than the frequency of the clock that drives the master processor 12 and the coprocessors 14 .
  • This effective data-processing speed is often characterized in units of operations per second. Therefore, in this example, for a clock speed of 1.0 GHz, the processor 12 would be rated with a data-processing speed of 0.4 Gigaoperations/second (Gops).
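  • The arithmetic behind that rating can be checked with a one-line calculation; the sketch below (an illustration, not from the patent) divides the clock rate by the 2.5 cycles required per data value.

```python
# Worked check of the example above: 5 clock cycles to process two data values
# is 2.5 cycles per value, so a 1.0 GHz clock yields 0.4 Gigaoperations/second.

def effective_gops(clock_hz: float, cycles_per_value: float) -> float:
    return clock_hz / cycles_per_value / 1e9

print(effective_gops(1.0e9, 5 / 2))   # -> 0.4
```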
  • FIG. 2 is a block diagram of a hardwired data pipeline 30 that can typically process data faster than a processor can for a given clock frequency, and often at substantially the same rate at which the pipeline is clocked.
  • the pipeline 30 includes operator circuits 32 1 - 32 n , which each perform a respective operation on respective data without executing program instructions. That is, the desired operation is “burned in” to a circuit 32 such that it implements the operation automatically, without the need of program instructions.
  • the pipeline 30 can typically perform more operations per second than a processor can for a given clock frequency.
  • the pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency: Y(x k ) = (5x k +3)·2^x k .
  • x k represents a sequence of raw data values.
  • the operator circuit 32 1 is a multiplier that calculates 5x k
  • the circuit 32 2 is an adder that calculates 5x k +3
  • the circuit 32 1 receives data value x 1 and multiplies it by 5 to generate 5x 1 .
  • the circuit 32 2 receives 5x 1 from the circuit 32 1 and adds 3 to generate 5x 1 +3. Also, during the second clock cycle, the circuit 32 1 generates 5x 2 .
  • during the third clock cycle, the circuit 32 3 receives 5x 1 +3 from the circuit 32 2 and multiplies it by 2^x 1 to generate the first result, (5x 1 +3)·2^x 1 . Also during the third clock cycle, the circuits 32 1 and 32 2 respectively generate 5x 3 and 5x 2 +3.
  • the pipeline 30 continues processing subsequent raw data values x k in this manner until all the raw data values are processed.
  • the pipeline 30 thus has a data-processing speed equal to the clock speed.
  • where the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds that are 0.4 times the clock speed, as in the above example, the pipeline 30 can process data 2.5 times faster than the computing machine 10 (FIG. 1) for a given clock speed.
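  • The clock-by-clock behavior described above can be modeled in software. The sketch below is a behavioral stand-in for the three operator circuits (an illustration, not the patent's circuit): stage 1 multiplies x k by 5, stage 2 adds 3, and stage 3 multiplies by 2^x k , so once the pipeline is full one finished result emerges per clock.

```python
# Behavioral model of the three-stage hardwired pipeline 30 (an assumption for
# illustration). Each loop iteration represents one clock cycle; r1, r2, r3
# stand in for the registers after operator circuits 32_1, 32_2, 32_3.

def run_pipeline(raw):
    r1 = r2 = r3 = None                       # pipeline registers, empty at reset
    results = []
    for x in list(raw) + [None] * 3:          # extra clocks drain the pipeline
        # update back-to-front so each stage consumes the previous clock's value
        r3 = (r2[0] * 2 ** r2[1], r2[1]) if r2 is not None else None   # circuit 32_3: *2^x
        r2 = (r1[0] + 3, r1[1]) if r1 is not None else None            # circuit 32_2: +3
        r1 = (5 * x, x) if x is not None else None                     # circuit 32_1: *5
        if r3 is not None:
            results.append(r3[0])             # one result per clock once full
    return results

assert run_pipeline([1, 2, 3]) == [(5 * x + 3) * 2 ** x for x in [1, 2, 3]]
```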
  • a designer may choose to implement the pipeline 30 in a programmable logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows more design and modification flexibility than does an application specific IC (ASIC).
  • PLIC programmable logic IC
  • FPGA field-programmable gate array
  • ASIC application specific IC
  • the designer merely sets interconnection-configuration registers disposed within the PLIC to predetermined binary states. The combination of all these binary states is often called “firmware.”
  • the designer loads this firmware into a nonvolatile memory (not shown in FIG. 2) that is coupled to the PLIC. When one “turns on” the PLIC, it downloads the firmware from the memory into the interconnection-configuration registers.
  • the designer merely modifies the firmware and allows the PLIC to download the modified firmware into the interconnection-configuration registers.
  • This ability to modify the PLIC by merely modifying the firmware is particularly useful during the prototyping stage and for upgrading the pipeline 30 “in the field”.
  • the hardwired pipeline 30 may not be the best choice to execute algorithms that entail significant decision making, particularly nested decision making.
  • a processor can typically execute a nested-decision-making instruction (e.g., a nested conditional instruction such as “if A, then do B, else if C, do D, . . . , else do n”) approximately as fast as it can execute an operational instruction (e.g., “A+B”) of comparable length.
  • although the pipeline 30 may be able to make a relatively simple decision (e.g., "A>B?") efficiently, it typically cannot execute a nested decision (e.g., "if A, then do B, else if C, do D, . . . , else do n") as efficiently as a processor can.
  • the pipeline 30 may have little on-board memory, and thus may need to access external working/instruction memory (not shown). And although one may be able to design the pipeline 30 to execute such a nested decision, the size and complexity of the required circuitry often makes such a design impractical, particularly where an algorithm includes multiple different nested decisions.
  • processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to “number crunching” applications that entail little or no decision making.
  • Computing components such as processors and their peripherals (e.g., memory), typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine.
  • a standard communication interface typically includes two layers: a physical layer and a services layer.
  • the physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry.
  • the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive signals onto the pins.
  • the operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode).
  • Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS.
  • the services layer includes the protocol by which a computing component transfers data.
  • the protocol defines the format of the data and the manner in which the component sends and receives the formatted data.
  • Conventional communication protocols include file-transfer protocol (FTP) and transmission control protocol/internet protocol (TCP/IP).
  • Designing a computing component that supports an industry-standard communication interface allows one to save design time by using an existing physical-layer design from a design library. This also ensures that he/she can easily interface the component to off-the-shelf computing components.
  • a pipeline accelerator includes a memory and a hardwired-pipeline circuit coupled to the memory.
  • the hardwired-pipeline circuit is operable to receive data, load the data into the memory, retrieve the data from the memory, process the retrieved data, and provide the processed data to an external source.
  • the hardwired-pipeline circuit is operable to receive data, process the received data, load the processed data into the memory, retrieve the processed data from the memory, and provide the retrieved processed data to an external source.
  • the memory facilitates the transfer of data whether unidirectional or bidirectional between the hardwired-pipeline circuit and an application that the processor executes.
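  • The two buffering orders described above (load-then-process, or process-then-load) can be sketched as follows; the class and method names are illustrative assumptions, with a simple queue standing in for the accelerator's memory.

```python
# Hypothetical sketch of the memory-mediated data paths described above.
from collections import deque

class Accelerator:
    def __init__(self, process):
        self.memory = deque()        # stands in for the accelerator's memory
        self.process = process       # stands in for the hardwired-pipeline circuit

    def input_buffered(self, data, sink):
        # receive -> load into memory -> retrieve -> process -> provide
        self.memory.extend(data)
        while self.memory:
            sink.append(self.process(self.memory.popleft()))

    def output_buffered(self, data, sink):
        # receive -> process -> load into memory -> retrieve -> provide
        self.memory.extend(self.process(d) for d in data)
        while self.memory:
            sink.append(self.memory.popleft())

out = []
Accelerator(lambda v: v * 2).input_buffered([1, 2, 3], out)
print(out)   # [2, 4, 6]
```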
  • FIG. 1 is a block diagram of a computing machine having a conventional multi-processor architecture.
  • FIG. 2 is a block diagram of a conventional hardwired pipeline.
  • FIG. 3 is a block diagram of a computing machine having a peer-vector architecture according to an embodiment of the invention.
  • FIG. 4 is a block diagram of the pipeline accelerator of FIG. 3 according to an embodiment of the invention.
  • FIG. 5 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 4 according to an embodiment of the invention.
  • FIG. 6 is a block diagram of the memory-write interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
  • FIG. 7 is a block diagram of the memory-read interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
  • FIG. 8 is a block diagram of the pipeline accelerator of FIG. 3 according to another embodiment of the invention.
  • FIG. 9 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 8 according to an embodiment of the invention.
  • FIG. 3 is a schematic block diagram of a computing machine 40 , which has a peer-vector architecture according to an embodiment of the invention.
  • the peer-vector machine 40 includes a pipeline accelerator 44 , which performs at least a portion of the data processing, and which thus effectively replaces the bank of coprocessors 14 in the computing machine 10 of FIG. 1. Therefore, the host processor 42 and the accelerator 44 (or units thereof as discussed below) are "peers" that can transfer data vectors back and forth. Because the accelerator 44 does not execute program instructions, it typically performs mathematically intensive operations on data significantly faster than a bank of coprocessors can for a given clock frequency.
  • the machine 40 has the same abilities as, but can often process data faster than, a conventional computing machine such as the machine 10 .
  • providing the accelerator 44 with a communication interface that is compatible with the communication interface of the host processor 42 facilitates the design and modification of the machine 40 , particularly where the processor's communication interface is an industry standard.
  • where the accelerator 44 includes multiple pipeline units (e.g., PLIC-based circuits), providing each of these units with the same communication interface facilitates the design and modification of the accelerator, particularly where the communication interfaces are compatible with an industry-standard interface.
  • the machine 40 may also provide other advantages as described below and in the previously cited patent applications.
  • the peer-vector computing machine 40 includes a processor memory 46 , an interface memory 48 , a bus 50 , a firmware memory 52 , an optional raw-data input port 54 , a processed-data output port 58 , and an optional router 61 .
  • the host processor 42 includes a processing unit 62 and a message handler 64
  • the processor memory 46 includes a processing-unit memory 66 and a handler memory 68 , which respectively serve as both program and working memories for the processor unit and the message handler.
  • the processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72 , which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the format of the messages that the message handler 64 sends and receives.
  • the pipeline accelerator 44 is disposed on at least one PLIC (not shown) and includes hardwired pipelines 74 1 - 74 n , which process respective data without executing program instructions.
  • the firmware memory 52 stores the configuration firmware for the accelerator 44 . If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 4).
  • the accelerator 44 and pipeline units are discussed further below and in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3).
  • the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable. In this alternative, the machine 40 may omit the firmware memory 52 . Furthermore, although the accelerator 44 is shown including multiple pipelines 74 , it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital-signal processor (DSP). Moreover, although not shown, the accelerator 44 may include a data input port and/or a data output port.
  • DSP digital-signal processor
  • FIG. 4 is a schematic block diagram of the pipeline accelerator 44 of FIG. 3 according to an embodiment of the invention.
  • the accelerator 44 includes one or more pipeline units 78 , each of which includes a pipeline circuit 80 , such as a PLIC or an ASIC.
  • a pipeline circuit 80 such as a PLIC or an ASIC.
  • each pipeline unit 78 is a “peer” of the host processor 42 and of the other pipeline units of the accelerator 44 . That is, each pipeline unit 78 can communicate directly with the host processor 42 or with any other pipeline unit.
  • this peer-vector architecture prevents data “bottlenecks” that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42 . Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 3) without significant modifications to the machine.
  • the pipeline circuit 80 includes a communication interface 82 , which transfers data between a peer, such as the host processor 42 (FIG. 3), and the following other components of the pipeline circuit: the hardwired pipelines 74 1 - 74 n (FIG. 3) via a communication shell 84 , a controller 86 , an exception manager 88 , and a configuration manager 90 .
  • the pipeline circuit 80 may also include an industry-standard bus interface 91 . Alternatively, the functionality of the interface 91 may be included within the communication interface 82 .
  • the communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 3), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 3). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44 . Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 3), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
  • the data format is an industry standard such as the Rapid I/O format
  • the hardwired pipelines 74 1 - 74 n perform respective operations on data as discussed above in conjunction with FIG. 3 and in previously cited U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), and the communication shell 84 interfaces the pipelines to the other components of the pipeline circuit 80 and to circuits (such as a data memory 92 discussed below) external to the pipeline circuit.
  • the controller 86 synchronizes the hardwired pipelines 74 1 - 74 n and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., “events,” from other peers.
  • a peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74 1 - 74 n to begin processing this data.
  • An event that includes data is typically called a message, and an event that does not include data is typically called a “door bell.”
  • the pipeline unit 78 may also synchronize the pipelines 74 1 - 74 n in response to a synchronization signal.
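  • One way to picture the distinction between a message and a "door bell" is with a small event structure; the field names below are illustrative assumptions, not the patent's wire format.

```python
# Illustrative event shape: a message carries a payload, a door bell does not.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    source: str                  # which peer raised the event
    kind: str                    # e.g. "block_complete"
    payload: Optional[bytes]     # bytes -> message; None -> door bell

def is_doorbell(ev: Event) -> bool:
    return ev.payload is None

print(is_doorbell(Event("host", "block_complete", None)))        # True
print(is_doorbell(Event("host", "raw_data", b"\x01\x02\x03")))   # False
```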
  • the exception manager 88 monitors the status of the hardwired pipelines 74 1 - 74 n , the communication interface 82 , the communication shell 84 , the controller 86 , and the bus interface 91 , and reports exceptions to the host processor 42 (FIG. 3). For example, if a buffer in the communication interface 82 overflows, then the exception manager 88 reports this to the host processor 42 .
  • the exception manager may also correct, or attempt to correct, the problem giving rise to the exception. For example, for an overflowing buffer, the exception manager 88 may increase the size of the buffer, either directly or via the configuration manager 90 as discussed below.
  • the configuration manager 90 sets the soft configuration of the hardwired pipelines 74 1 - 74 n , the communication interface 82 , the communication shell 84 , the controller 86 , the exception manager 88 , and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 3), as discussed in previously cited U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3).
  • the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80
  • the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components. That is, soft configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor.
  • the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82 .
  • the exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82 .
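  • A minimal sketch of soft configuration, under the assumption of made-up field names: the circuit's hard structure is fixed, but run-time parameters such as the number and priority of queues, or the size of an overflowing buffer, can be changed by command.

```python
# Hypothetical soft-configuration state; names and defaults are assumptions.
class ConfigurationManager:
    def __init__(self):
        self.queue_priorities = {}   # queue name -> priority level
        self.buffer_sizes = {}       # buffer name -> size in entries

    def set_queues(self, priorities: dict) -> None:
        # e.g., the host processor sets the number and priority of queues
        # in the communication interface
        self.queue_priorities = dict(priorities)

    def grow_buffer(self, name: str, factor: int = 2) -> None:
        # e.g., the exception manager enlarges an overflowing buffer
        self.buffer_sizes[name] = self.buffer_sizes.get(name, 1) * factor

cfg = ConfigurationManager()
cfg.set_queues({"high": 0, "normal": 1, "bulk": 2})
cfg.grow_buffer("input-data", factor=2)
print(cfg.queue_priorities, cfg.buffer_sizes)
```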
  • the pipeline unit 78 of the accelerator 44 includes the data memory 92 , an optional communication bus 94 , and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 3).
  • the data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 3), and the hardwired pipelines 74 1 - 74 n , and is also a working memory for the hardwired pipelines.
  • the communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and industry-standard interface 91 if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74 1 - 74 n .
  • the industry-standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 3), then he need only modify the interface 91 and not the communication interface 82 . Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80 . Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74 1 - 74 n and the controller 86 . Or, as discussed above, the bus interface 91 may be part of the communication interface 82 .
  • the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit.
  • the memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44 , and may receive modified firmware from the host processor 42 (FIG. 3) via the communication interface 82 during or after the configuration of the accelerator.
  • the loading and receiving of firmware is further discussed in previously cited U.S. patent application Ser. No. ______ entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-14-3).
  • the pipeline circuit 80 , data memory 92 , and firmware memory 52 may be disposed on a circuit board or card 98 , which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown).
  • a pipeline-bus connector not shown
  • conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known.
  • FIG. 5 is a block diagram of the pipeline unit 78 of FIG. 4 according to an embodiment of the invention. For clarity, the firmware memory 52 is omitted from FIG. 5 .
  • the pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly.
  • the pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner.
  • the pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below.
  • the data memory 92 includes an input dual-port-static-random-access memory (DPSRAM) 100 , an output DPSRAM 102 , and an optional working DPSRAM 104 .
  • DPSRAM dual-port-static-random-access memory
  • the input DPSRAM 100 includes an input port 106 for receiving data from a peer, such as the host processor 42 (FIG. 3), via the communication interface 82 , and includes an output port 108 for providing this data to the hardwired pipelines 74 1 - 74 n via the communication shell 84 .
  • a peer such as the host processor 42 (FIG. 3)
  • Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 100 because the communication interface 82 can write data to the DPSRAM while the pipelines 74 1 - 74 n read data from the DPSRAM.
  • using the DPSRAM 100 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74 1 - 74 n to operate asynchronously relative to one another. That is, the peer can send data to the pipelines 74 1 - 74 n without "waiting" for the pipelines to complete a current operation. Likewise, the pipelines 74 1 - 74 n can retrieve data without "waiting" for the peer to complete a data-sending operation.
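  • The asynchronous behavior described above can be illustrated with a producer and a consumer sharing a buffer; a thread-safe queue below stands in for the dual-port DPSRAM 100 (an illustration only, not the hardware design).

```python
# The peer-side writer (port 106) and the pipeline-side reader (port 108) each
# work on their own end of the buffer and never wait on the other's schedule.
import queue
import threading

buffer = queue.Queue()               # stand-in for the input DPSRAM 100

def peer_writer():
    for value in range(8):           # peer streams data at its own pace
        buffer.put(value)

def pipeline_reader(results):
    for _ in range(8):               # pipelines drain and process independently
        results.append(buffer.get() * 2)

results = []
w = threading.Thread(target=peer_writer)
r = threading.Thread(target=pipeline_reader, args=(results,))
w.start(); r.start(); w.join(); r.join()
print(results)                       # doubled values, order preserved
```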
  • the output DPSRAM 102 includes an input port 110 for receiving data from the hardwired pipelines 74 1 - 74 n via the communication shell 84 , and includes an output port 112 for providing this data to a peer, such as the host processor 42 (FIG. 3), via the communication interface 82 .
  • a peer such as the host processor 42 (FIG. 3)
  • the two data ports 110 (input) and 112 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 102 , and using the DPSRAM 102 to buffer data from the pipelines 74 1 - 74 n allows the peer and the pipelines to operate asynchronously relative to one another.
  • the pipelines 74 1 - 74 n can publish data to the peer without “waiting” for the output-data handler 126 to complete a data transfer to the peer or to another peer.
  • the output-data handler 126 can transfer data to a peer without “waiting” for the pipelines 74 1 - 74 n to complete a data-publishing operation.
  • the working DPSRAM 104 includes an input port 114 for receiving data from the hardwired pipelines 74 1 - 74 n via the communication shell 84 , and includes an output port 116 for returning this data back to the pipelines via the communication shell.
  • the pipelines 74 1 - 74 n may need to temporarily store partially processed, i.e., intermediate, data before continuing the processing of this data.
  • a first pipeline such as the pipeline 74 1
  • the working DPSRAM 104 provides this temporary storage.
  • the two data ports 114 (input) and 116 (output) increase the speed and efficiency of data transfer between the pipelines 74 1 - 74 n and the DPSRAM 104 .
  • including a separate working DPSRAM 104 typically increases the speed and efficiency of the pipeline circuit 80 by allowing the DPSRAMs 100 and 102 to function exclusively as data-input and data-output buffers, respectively.
  • either or both of the DPSRAMS 100 and 102 can also be a working memory for the pipelines 74 1 - 74 n when the DPSRAM 104 is omitted, and even when it is present.
  • DPSRAMS 100 , 102 , and 104 are described as being external to the pipeline circuit 80 , one or more of these DPSRAMS, or equivalents thereto, may be internal to the pipeline circuit.
  • the communication interface 82 includes an industry-standard bus adapter 118 , an input-data handler 120 , input-data and input-event queues 122 and 124 , an output-data handler 126 , and output-data and output-event queues 128 and 130 .
  • although the queues 122 , 124 , 128 , and 130 are shown as single queues, one or more of these queues may include subqueues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
  • the industry-standard bus adapter 118 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 4) via the communication bus 94 . Therefore, if one wishes to change the parameters of the bus 94 , then he need only modify the adapter 118 and not the entire communication interface 82 . Where the industry-standard bus interface 91 is omitted from the pipeline unit 78 , then the adapter 118 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80 . In this latter implementation, the modified adapter 118 includes the functionality of the bus interface 91 , and one need only modify the adapter 118 if he/she wishes to change the parameters of the bus 50 .
  • the input-data handler 120 receives data from the industry-standard adapter 118 , loads the data into the DPSRAM 100 via the input port 106 , and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 122 . If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 120 extracts the data from the message before loading the data into the DPSRAM 100 .
  • the input-data handler 120 includes an interface 132 , which writes the data to the input port 106 of the DPSRAM 100 and which is further discussed below in conjunction with FIG. 6. Alternatively, the input-data handler 120 can omit the extraction step and load the entire message into the DPSRAM 100 .
  • the input-data handler 120 also receives events from the industry-standard bus adapter 118 , and loads the events into the input-event queue 124 .
  • the input-data handler 120 includes a validation manager 134 , which determines whether received data or events are intended for the pipeline circuit 80 .
  • the validation manager 134 may make this determination by analyzing the header (or a portion thereof) of the message that contains the data or the event, by analyzing the type of data or event, or by analyzing the instance identification (i.e., the hardwired pipeline 74 for which the data/event is intended) of the data or event. If the input-data handler 120 receives data or an event that is not intended for the pipeline circuit 80 , then the validation manager 134 prohibits the input-data handler from loading the received data/event. Where the peer-vector machine 40 includes the router 61 (FIG. 3), the validation manager 134 may also cause the input-data handler 120 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception.
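  • The input path described above (validate the header, write the payload to memory, queue a pointer and identifier) can be sketched as follows; the header fields and addressing scheme are illustrative assumptions.

```python
# Hypothetical model of the input-data handler 120, validation manager 134,
# input DPSRAM 100, and input-data queue 122.
input_dpsram = {}          # address -> payload (stand-in for DPSRAM 100)
input_data_queue = []      # (pointer, data identifier) entries (queue 122)
next_addr = 0

def handle_incoming(header: dict, payload: bytes, my_unit: str):
    """Return None on success, or an exception record for a misdirected item."""
    global next_addr
    if header.get("unit") != my_unit:               # validation-manager check
        return {"exception": "misdirected", "from": header.get("source")}
    input_dpsram[next_addr] = payload               # interface 132 writes via port 106
    input_data_queue.append((next_addr, header["pipeline_id"]))
    next_addr += max(len(payload), 1)
    return None

print(handle_incoming({"unit": "78_1", "pipeline_id": "74_2", "source": "host"},
                      b"\x01\x02", my_unit="78_1"))        # None (accepted)
print(handle_incoming({"unit": "78_9", "source": "host"},
                      b"\x03", my_unit="78_1"))            # exception record
```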
  • the output-data handler 126 retrieves processed data from locations of the DPSRAM 102 pointed to by the output-data queue 128 , and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118 .
  • the output-data handler 126 includes an interface 136 , which reads the processed data from the DPSRAM 102 via the port 112 .
  • the interface 136 is further discussed below in conjunction with FIG. 7.
  • the output-data handler 126 also retrieves from the output-event queue 130 events generated by the pipelines 74 1 - 74 n , and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 3) via the industry-standard bus adapter 118 .
  • the output-data handler 126 includes a subscription manager 138 , which includes a list of peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138 , generates a header that includes the address, and generates the message from the data/event and the header.
  • a subscription manager 138 which includes a list of peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138
  • although the technique for storing and retrieving data stored in the DPSRAMS 100 and 102 involves the use of pointers and data identifiers, one may modify the input- and output-data handlers 120 and 126 to implement other data-management techniques.
  • Conventional examples of such data-management techniques include pointers using keys or tokens, input/output control (IOC) blocks, and spooling.
  • the communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74 1 - 74 n to the output-data queue 128 , the controller 86 , and the DPSRAMs 100 , 102 , and 104 .
  • the shell 84 includes interfaces 140 and 142 , and optional interfaces 144 and 146 .
  • the interfaces 140 and 146 may be similar to the interface 136 ; the interface 140 reads input data from the DPSRAM 100 via the port 108 , and the interface 146 reads intermediate data from the DPSRAM 104 via the port 116 .
  • the interfaces 142 and 144 may be similar to the interface 132 ; the interface 142 writes processed data to the DPSRAM 102 via the port 110 , and the interface 144 writes intermediate data to the DPSRAM 104 via the port 114 .
  • the controller 86 includes a sequence manager 148 and a synchronization interface 150 , which receives one or more synchronization signals SYNC.
  • a peer such as the host processor 42 (FIG. 3), or a device (not shown) external to the peer-vector machine 40 (FIG. 3) may generate the SYNC signal, which triggers the sequence manager 148 to activate the hardwired pipelines 74 1 - 74 n as discussed below and in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3).
  • the synchronization interface 150 may also generate a SYNC signal to trigger the pipeline circuit 80 or to trigger another peer.
  • the events from the input-event queue 124 also trigger the sequence manager 148 to activate the hardwired pipelines 74 1 - 74 n as discussed below.
  • the sequence manager 148 sequences the hardwired pipelines 74 1 - 74 n through their respective operations via the communication shell 84 .
  • each pipeline 74 has at least three operating states: preprocessing, processing, and post processing.
  • preprocessing the pipeline 74 , e.g., initializes its registers and retrieves input data from the DPSRAM 100 .
  • the pipeline 74 e.g., operates on the retrieved data, temporarily stores intermediate data in the DPSRAM 104 , retrieves the intermediate data from the DPSRAM 104 , and operates on the intermediate data to generate result data.
  • the pipeline 74 e.g., loads the result data into the DPSRAM 102 .
  • the sequence manager 148 monitors the operation of the pipelines 74 1 - 74 n and instructs each pipeline when to begin each of its operating states. And one may distribute the pipeline tasks among the operating states differently than described above. For example, the pipeline 74 may retrieve input data from the DPSRAM 100 during the processing state instead of during the preprocessing state.
  • the sequence manager 148 maintains a predetermined internal operating synchronization among the hardwired pipelines 74 1 - 74 n .
  • For example, to avoid all of the pipelines 74 1 - 74 n simultaneously retrieving data from the DPSRAM 100 , it may be desired to synchronize the pipelines such that while the first pipeline 74 1 is in a preprocessing state, the second pipeline 74 2 is in a processing state and the third pipeline 74 3 is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74 1 - 74 n may lose synchronization if allowed to run freely.
  • the sequence manager 148 allows all of the pipelines 74 to complete a current operating state before allowing any of the pipelines to proceed to a next operating state. Therefore, the time that the sequence manager 148 allots for a current operating state is long enough to allow the slowest pipeline 74 to complete that state.
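  • The lock-step pacing described above amounts to a barrier at the end of each operating state: no pipeline advances until the slowest has finished. The sketch below (with made-up cycle counts) illustrates that rule.

```python
# Hypothetical model of the sequence manager's state pacing; cycle counts are
# illustrative, not taken from the patent.
STATES = ["preprocessing", "processing", "postprocessing"]

def sequence(cycle_counts):
    # cycle_counts[p][s] = cycles pipeline p needs for operating state s
    elapsed = 0
    for s, state in enumerate(STATES):
        slowest = max(counts[s] for counts in cycle_counts)
        elapsed += slowest        # nobody advances until the slowest is done
        print(f"{state}: all pipelines advance after {slowest} cycles")
    return elapsed

print("total:", sequence([[3, 10, 2], [5, 7, 4], [4, 9, 1]]))
```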
  • circuitry (not shown) for maintaining a predetermined operating synchronization among the hardwired pipelines 74 1 - 74 n may be included within the pipelines themselves.
  • the sequence manager 148 synchronizes the operation of the pipelines to the operation of other peers, such as the host processor 42 (FIG. 3), and to the operation of other external devices in response to one or more SYNC signals or to an event in the input-events queue 124 .
  • a SYNC signal triggers a time-critical function but requires significant hardware resources; comparatively, an event typically triggers a non-time-critical function but requires significantly fewer hardware resources.
  • an event typically triggers a non-time-critical function but requires significantly fewer hardware resources.
  • PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3)
  • because a SYNC signal is routed directly from peer to peer, it can trigger a function more quickly than an event, which must make its way through, e.g., the pipeline bus 50 (FIG. 3), the input-data handler 120 , and the input-event queue 124 .
  • the SYNC signals require dedicated circuitry, such as routing lines, buffers, and the SYNC interface 150 , of the pipeline circuit 80 .
  • the events require only the dedicated input-event queue 124 . Consequently, designers tend to use events to trigger all but the most time-critical functions.
  • there are various techniques that the pipeline unit 78 and a peer, such as a sensor, can employ to determine when the pipeline 74 1 is finished.
  • the sequence manager 148 may provide a corresponding SYNC pulse or event to the sensor.
  • the sensor may send an event to the sequence manager 148 via the pipeline bus 50 (FIG. 3).
  • the sequence manager 148 may also provide to a peer, such as the host processor 42 (FIG. 3), information regarding the operation of the hardwired pipelines 74 1 - 74 n by generating a SYNC pulse or an event.
  • the sequence manager 148 sends a SYNC pulse via the SYNC interface 150 and a dedicated line (not shown), and sends an event via the output-event queue 130 and the output-data handler 126 .
  • a peer further processes the data blocks from the pipeline 74 2 .
  • the sequence manager 148 may notify the peer via a SYNC pulse or an event when the pipeline 74 2 has finished processing a block of data.
  • the sequence manager 148 may also confirm receipt of a SYNC pulse or an event by generating and sending a corresponding SYNC pulse or event to the appropriate peer(s).
  • the industry-standard bus interface 91 receives data signals (which originate from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates these signals into messages each having a header and payload.
  • the industry-standard bus adapter 118 converts the messages from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 120 .
  • the input-data handler 120 dissects the message headers and extracts from each header the portion that describes the data payload.
  • the extracted header portion may include, e.g., the address of the pipeline unit 78 , the type of data in the payload, or an instance identifier that identifies the pipeline(s) 74 1 - 74 n for which the data is intended.
  • after the validation manager 134 analyzes the extracted header portion and confirms that the data is intended for one of the hardwired pipelines 74 1 - 74 n , the interface 132 writes the data to a location of the DPSRAM 100 via the port 106 , and the input-data handler 120 stores a pointer to the location and a corresponding data identifier in the input-data queue 122 .
  • the data identifier identifies the pipeline or pipelines 74 1 - 74 n for which the data is intended, or includes information that allows the sequence manager 148 to make this identification as discussed below.
  • the queue 122 may include a respective subqueue (not shown) for each pipeline 74 1 - 74 n , and the input-data handler 120 stores the pointer in the subqueue or subqueues of the intended pipeline or pipelines.
  • the data identifier may be omitted.
  • the input-data handler 120 extracts the data from the message before the interface 132 stores the data in the DPSRAM 100 .
  • the interface 132 may store the entire message in the DPSRAM 100 .
  • the sequence manager 148 reads the pointer and the data identifier from the input-data queue 122 , determines from the data identifier the pipeline or pipelines 74 1 - 74 n for which the data is intended, and passes the pointer to the pipeline or pipelines via the communication shell 84 .
  • the data-receiving pipeline or pipelines 74 1 - 74 n cause the interface 140 to retrieve the data from the pointed-to location of the DPSRAM 100 via the port 108 .
  • the data-receiving pipeline or pipelines 74 1 - 74 n process the retrieved data
  • the interface 142 writes the processed data to a location of the DPSRAM 102 via the port 110
  • the communication shell 84 loads into the output-data queue 128 a pointer to and a data identifier for the processed data.
  • the data identifier identifies the destination peer or peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data, or includes information (such as the data type) that allows the subscription manager 138 to subsequently determine the destination peer or peers (e.g., the host processor 42 of FIG. 3).
  • the queue 128 may include a respective subqueue (not shown) for each pipeline 74 1 - 74 n , and the communication shell 84 stores the pointer in the subqueue or subqueues of the originating pipeline or pipelines.
  • the communication shell 84 may omit loading a data identifier into the queue 128 .
  • the interface 144 writes the intermediate data into the DPSRAM 104 via the port 114
  • the interface 146 retrieves the intermediate data from the DPSRAM 104 via the port 116 .
  • the output-data handler 126 retrieves the pointer and the data identifier from the output-data queue 128 , the subscription manager 138 determines from the identifier the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the data, the interface 136 retrieves the data from the pointed-to location of the DPSRAM 102 via the port 112 , and the output-data handler sends the data to the industry-standard bus adapter 118 . If a destination peer requires the data to be the payload of a message, then the output-data handler 126 generates the message and sends the message to the adapter 118 . For example, suppose the data has multiple destination peers and the pipeline bus 50 supports message broadcasting.
  • the output-data handler 126 generates a single header that includes the addresses of all the destination peers, combines the header and data into a message, and sends (via the adapter 118 and the industry-standard bus interface 91 ) a single message to all of the destination peers simultaneously.
  • the output-data handler 126 generates a respective header, and thus a respective message, for each destination peer, and sends each of the messages separately.
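  • The choice between those two delivery modes can be sketched as follows; the message layout is an illustrative assumption. With broadcasting, one header carries every subscriber's address; without it, one message is generated per destination peer.

```python
# Hypothetical message builder for the output-data handler 126.
def build_messages(data: bytes, subscribers: list, bus_broadcast: bool):
    if bus_broadcast:
        # single header listing all destination peers, sent once
        return [{"destinations": list(subscribers), "payload": data}]
    # otherwise one header (and one message) per destination peer
    return [{"destinations": [peer], "payload": data} for peer in subscribers]

print(build_messages(b"result", ["host", "unit_2"], bus_broadcast=True))
print(build_messages(b"result", ["host", "unit_2"], bus_broadcast=False))
```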
  • the industry-standard bus adapter 118 formats the data from the output-data handler 126 so that it is compatible with the industry-standard bus interface 91 .
  • the industry-standard bus interface 91 formats the data from the industry-standard bus adapter 118 so that it is compatible with the pipeline bus 50 (FIG. 3).
  • the industry-standard bus interface 91 receives a signal (which originates from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates the signal into a header (i.e., a data-less message) that includes the event.
  • a signal which originates from a peer, such as the host processor 42 of FIG. 3
  • the pipeline bus 50 and the router 61 if present
  • the industry-standard bus adapter 118 converts the header from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 120 .
  • the input-data handler 120 extracts from the header the event and a description of the event.
  • the description may include, e.g., the address of the pipeline unit 78 , the type of event, or an instance identifier that identifies the pipeline(s) 74 1 - 74 n for which the event is intended.
  • the validation manager 134 analyzes the event description and confirms that the event is intended for one of the hardwired pipelines 74 1 - 74 n , and the input-data handler 120 stores the event and its description in the input-event queue 124 .
  • the sequence manager 148 reads the event and its description from the input-event queue 124 , and, in response to the event, triggers the operation of one or more of the pipelines 74 1 - 74 n as discussed above. For example, the sequence manager 148 may trigger the pipeline 74 2 to begin processing data that the pipeline 74 1 previously stored in the DPSRAM 104 .
  • the sequence manager 148 To output an event, the sequence manager 148 generates the event and a description of the event, and loads the event and its description into the output-event queue 130 —the event description identifies the destination peer(s) for the event if there is more than one possible destination peer. For example, as discussed above, the event may confirm the receipt and implementation of an input event, an input-data or input-event message, or a SYNC pulse
  • the output-data handler 126 retrieves the event and its description from the output-event queue 130 , the subscription manager 138 determines from the event description the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the event, and the output-data handler sends the event to the proper destination peer or peers via the industry-standard bus adapter 118 and the industry-standard bus interface 91 as discussed above.
  • the destination peer or peers e.g., the host processor 42 of FIG. 3
  • the industry-standard bus adapter 118 receives the command from the host processor 42 (FIG. 3) via the industry-standard bus interface 91 , and provides the command to the input-data handler 120 in a manner similar to that discussed above for a data-less event (i.e., doorbell)
  • a data-less event i.e., doorbell
  • the validation manager 134 confirms that the command is intended for the pipeline unit 78 , and the input-data handler 120 loads the command into the configuration manager 90 . Furthermore, either the input-data handler 120 or the configuration manager 90 may also pass the command to the output-data handler 126 , which confirms that the pipeline unit 78 received the command by sending the command back to the peer (e.g., the host processor 42 of FIG. 3) that sent the command. This confirmation technique is sometimes called “echoing.”
  • the configuration manager 90 implements the command.
  • the command may cause the configuration manager 90 to disable one of the pipelines 74 1 - 74 n for debugging purposes.
  • the command may allow a peer, such as the host processor 42 (FIG. 3), to read the current configuration of the pipeline circuit 80 from the configuration manager 90 via the output-data handler 126 .
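The command path, including the “echoing” confirmation, can be sketched in the same behavioral style. This is a minimal illustration under assumed names (handle_configuration_command, send_to_peer, and the command fields are invented for this example), not the actual hardwired logic.

```python
# Hypothetical sketch of configuration-command handling with "echoing"
# (function and field names are invented for illustration).
def handle_configuration_command(command, unit_address, configuration, send_to_peer):
    # Validation step: the command must be addressed to this pipeline unit.
    if command["unit"] != unit_address:
        return False
    # Configuration-manager step: implement the command, e.g. disable a
    # pipeline for debugging or report the current configuration.
    if command["op"] == "disable_pipeline":
        configuration["enabled"][command["pipeline"]] = False
    elif command["op"] == "read_configuration":
        send_to_peer(command["sender"], {"configuration": dict(configuration)})
    # Output-data-handler step: echo the command back so the sender knows it arrived.
    send_to_peer(command["sender"], {"echo": command})
    return True

sent = []
config = {"enabled": {0: True, 1: True}}
handle_configuration_command(
    {"unit": 7, "op": "disable_pipeline", "pipeline": 1, "sender": "host"},
    unit_address=7, configuration=config,
    send_to_peer=lambda dest, msg: sent.append((dest, msg)))
print(config["enabled"][1], sent[0][0])   # False host
```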
  • a peer may also use a configuration command to define an exception that is recognized by the exception manager 88 .
  • a component of the pipeline circuit 80 , such as the input-data queue 122 , triggers an exception to the exception manager 88 .
  • the component includes an exception-triggering adapter (not shown) that monitors the component and triggers the exception in response to a predetermined condition or set of conditions.
  • the exception-triggering adapter may be a universal circuit that can be designed once and then included as part of each component of the pipeline circuit 80 that generates exceptions.
  • in response to the exception trigger, the exception manager 88 generates an exception identifier.
  • the identifier may indicate that the input-data queue 122 has overflowed.
  • the identifier may include its destination peer if there is more than one possible destination peer.
  • the output-data handler 126 retrieves the exception identifier from the exception manager 88 and sends the exception identifier to the host processor 42 (FIG. 3) as discussed in previously cited U.S. patent application Ser. No. ______ entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-12-3).
  • the exception identifier can also include destination information from which the subscription manager 138 determines the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the identifier.
  • the output-data handler 126 then sends the identifier to the destination peer or peers via the industry-standard bus adapter 118 and the industry-standard bus interface 91 .
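The exception path can likewise be modeled. In this hypothetical sketch, QueueWithExceptionAdapter and exception_manager are invented names; the adapter stands in for the exception-triggering adapter that monitors a component, and the callback stands in for the exception manager 88 generating an identifier that the output-data handler forwards to a destination peer.

```python
# Hypothetical sketch of the exception path (names are illustrative): a
# monitoring adapter watches a component, and the exception manager turns a
# trigger into an identifier that is sent to a peer.
class QueueWithExceptionAdapter:
    def __init__(self, depth, raise_exception):
        self.depth, self.items = depth, []
        self.raise_exception = raise_exception   # callback into the exception manager

    def push(self, item):
        if len(self.items) >= self.depth:
            # Exception-triggering adapter: predetermined condition detected.
            self.raise_exception("input-data-queue-overflow")
            return
        self.items.append(item)

outbox = []
def exception_manager(condition):
    # Generate an exception identifier and hand it to the output-data handler,
    # which forwards it to the destination peer (here, the host processor).
    outbox.append({"exception": condition, "destination": "host processor"})

queue = QueueWithExceptionAdapter(depth=2, raise_exception=exception_manager)
for value in (10, 20, 30):
    queue.push(value)
print(outbox)   # [{'exception': 'input-data-queue-overflow', ...}]
```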
  • the data memory 92 may include other types of memory ICs such as quad-data-rate (QDR) SRAMs.
  • FIG. 6 is a block diagram of the interface 142 of FIG. 5 according to an embodiment of the invention.
  • the interface 142 writes processed data from the hardwired pipelines 74 1 - 74 n to the DPSRAM 102 .
  • the structure of the interface 142 reduces or eliminates data “bottlenecks” and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • the interface 142 includes write channels 150 1 - 150 n , one channel for each hardwired pipeline 74 1 - 74 n (FIG. 5), and includes a controller 152 .
  • the channel 150 1 is discussed below, it being understood that the operation and structure of the other channels 150 2 - 150 n are similar unless stated otherwise.
  • the channel 150 1 includes a write-address/data FIFO 154 1 and an address/data register 156 1 .
  • the FIFO 154 1 stores the data that the pipeline 74 1 writes to the DPSRAM 102 , and stores the address of the location within the DPSRAM 102 to which the pipeline writes the data, until the controller 152 can actually write the data to the DPSRAM 102 via the register 156 1 . Therefore, the FIFO 154 1 reduces or eliminates the data bottleneck that would otherwise occur if the pipeline 74 1 had to “wait” to write data to the channel 150 1 until the controller 152 finished writing previous data.
  • the FIFO 154 1 receives the data from the pipeline 74 1 via a bus 158 1 , receives the address of the location to which the data is to be written via a bus 160 1 , and provides the data and address to the register 156 1 via busses 162 1 and 164 1 , respectively. Furthermore, the FIFO 154 1 receives a WRITE FIFO signal from the pipeline 74 1 on a line 166 1 , receives a CLOCK signal via a line 168 1 , and provides a FIFO FULL signal to the pipeline 74 1 on a line 170 1 .
  • the FIFO 154 1 receives a READ FIFO signal from the controller 152 via a line 172 1 , and provides a FIFO EMPTY signal to the controller via a line 174 1 .
  • where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 158 1 , 160 1 , 162 1 , and 164 1 and the lines 166 1 , 168 1 , 170 1 , 172 1 , and 174 1 are preferably formed using local routing resources.
  • local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
  • the register 156 1 receives the data to be written and the address of the write location from the FIFO 154 1 via the busses 162 1 and 164 1 , respectively, and provides the data and address to the port 110 of the DPSRAM 102 (FIG. 5) via an address/data bus 176 . Furthermore, the register 156 1 also receives the data and address from the registers 156 2 - 156 n via an address/data bus 178 1 as discussed below. In addition, the register 156 1 receives a SHIFT/LOAD signal from the controller 152 via a line 180 . Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 176 is typically formed using global routing resources, and the busses 178 1 - 178 n-1 and the line 180 are preferably formed using local routing resources.
  • in addition to receiving the FIFO EMPTY signal and generating the READ FIFO and SHIFT/LOAD signals, the controller 152 provides a WRITE DPSRAM signal to the port 110 of the DPSRAM 102 (FIG. 5) via a line 182 .
  • the FIFO 154 1 drives the FIFO FULL signal to the logic level corresponding to the current state (“full” or “not full”) of the FIFO.
  • the pipeline 74 1 drives the data and corresponding address onto the busses 158 1 and 160 1 , respectively, and asserts the WRITE FIFO signal, thus loading the data and address into the FIFO. If the FIFO 154 1 is full, however, the pipeline 74 1 waits until the FIFO is not full before loading the data.
  • the FIFO 154 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state (“empty” or “not empty”) of the FIFO.
  • the controller 152 asserts the READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded data and address from the FIFO into the register 156 1 . If the FIFO 154 1 is empty, the controller 152 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 154 2 - 154 n are not empty.
  • the channels 150 2 - 150 n operate in a similar manner such that first-loaded data in the FIFOs 154 2 - 154 n are respectively loaded into the registers 156 2 - 156 n .
  • the controller 152 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 156 1 - 156 n onto the address/data bus 176 and loading the data into the corresponding locations of the DPSRAM 102 .
  • the data and address from the register 156 1 are shifted onto the bus 176 such that the data from the FIFO 154 1 is loaded into the addressed location of the DPSRAM 102 .
  • the data and address from the register 156 2 are shifted into the register 156 1 , the data and address from the register 156 3 (not shown) are shifted into the register 156 2 , and so on.
  • the data and address from the register 156 1 are shifted onto the bus 176 such that the data from the FIFO 154 2 is loaded into the addressed location of the DPSRAM 102 .
  • the data and address from the register 156 2 are shifted into the register 156 1 , the data and address from the register 156 3 are shifted into the register 156 2 , and so on.
  • the controller 152 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 156 1 - 156 n .
  • the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 176 .
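The write interface just described (per-channel FIFOs, per-channel registers, and a controller that loads and then serially shifts the registers onto the DPSRAM's address/data bus) can be approximated in software. The sketch below is a behavioral analogy only; the real interface 142 is hardwired logic, and the function and variable names (write_interface_cycle, fifo_154, dpsram_102) are illustrative.

```python
# Minimal behavioral sketch of the write interface (FIFOs, registers, and
# controller). The Python structure is illustrative; the real interface is
# hardwired logic, not software.
from collections import deque

def write_interface_cycle(fifos, dpsram):
    """One load-then-shift sequence: load the first entry of each non-empty
    FIFO into its register, then shift the registers serially onto the
    address/data bus and write each entry into the DPSRAM (a dict here)."""
    registers = []
    for fifo in fifos:                       # controller reads one entry per channel
        registers.append(fifo.popleft() if fifo else None)
    for entry in registers:                  # serial shift toward the memory port
        if entry is None:
            continue                         # empty register bypassed (no null write)
        address, data = entry
        dpsram[address] = data               # write strobe asserted for this entry

dpsram_102 = {}
fifo_154 = [deque([(0x10, "a0"), (0x11, "a1")]),   # channel for pipeline 1
            deque([(0x20, "b0")]),                 # channel for pipeline 2
            deque()]                               # channel for an idle pipeline
while any(fifo_154):
    write_interface_cycle(fifo_154, dpsram_102)
print(dpsram_102)   # {16: 'a0', 32: 'b0', 17: 'a1'}
```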
  • the interface 144 is similar to the interface 142 , and the interface 132 is also similar to the interface 142 except that the interface 132 includes only one write channel 150 .
  • FIG. 7 is a block diagram of the interface 140 of FIG. 5 according to an embodiment of the invention.
  • the interface 140 reads input data from the DPSRAM 100 and transfers this data to the hardwired pipelines 74 1 - 74 n .
  • the structure of the interface 140 reduces or eliminates data “bottlenecks” and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • the interface 140 includes read channels 190 1 - 190 n , one channel for each hardwired pipeline 74 1 - 74 n (FIG. 5), and a controller 192 .
  • the read channel 190 1 is discussed below, it being understood that the operation and structure of the other read channels 190 2 - 190 n are similar unless stated otherwise.
  • the channel 190 1 includes a FIFO 194 1 and an address/identifier (ID) register 196 1 .
  • the identifier identifies which of the pipelines 74 1 - 74 n made the request to read data from a particular location of the DPSRAM 100 , so that the read data can be returned to that pipeline.
  • the FIFO 194 1 includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 100 from which the pipeline 74 1 wishes to read the input data, and the other for storing the data read from the DPSRAM 100 . Therefore, the FIFO 194 1 reduces or eliminates the bottleneck that may occur if the pipeline 74 1 had to “wait” to provide the read address to the channel 190 1 until the controller 192 finished reading previous data, or if the controller had to wait until the pipeline 74 1 retrieved the read data before the controller could read subsequent data.
  • the FIFO 194 1 receives the read address from the pipeline 74 1 via a bus 198 1 and provides the address and ID to the register 196 1 via a bus 200 1 . Since the ID corresponds to the pipeline 74 1 and typically does not change, the FIFO 194 1 may store the ID and concatenate the ID with the address. Alternatively, the pipeline 74 1 may provide the ID to the FIFO 194 1 via the bus 198 1 .
  • the FIFO 194 1 receives a READ/WRITE FIFO signal from the pipeline 74 1 via a line 202 1 , receives a CLOCK signal via a line 204 1 , and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206 1 .
  • the FIFO 194 1 receives a WRITE/READ FIFO signal from the controller 192 via a line 208 1 , and provides a FIFO EMPTY signal to the controller via a line 210 1 .
  • the FIFO 194 1 receives the read data and the corresponding ID from the controller 192 via a bus 212 , and provides this data to the pipeline 74 1 via a bus 214 1 .
  • where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 198 1 , 200 1 , and 214 1 and the lines 202 1 , 204 1 , 206 1 , 208 1 , and 210 1 are preferably formed using local routing resources, and the bus 212 is typically formed using global routing resources.
  • the register 196 1 receives the address of the location to be read and the corresponding ID from the FIFO 194 1 via the bus 200 1 , provides the address to the port 108 of the DPSRAM 100 (FIG. 5) via an address bus 216 , and provides the ID to the controller 192 via a bus 218 . Furthermore, the register 196 1 also receives the addresses and IDs from the registers 196 2 - 196 n via an address/ID bus 220 1 , as discussed below. In addition, the register 196 1 receives a SHIFT/LOAD signal from the controller 192 via a line 222 . Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 216 is typically formed using global routing resources, and the busses 220 1 - 220 n-1 and the line 222 are preferably formed using local routing resources.
  • the controller 192 receives the data read from the port 108 of the DPSRAM 100 (FIG. 5) via a bus 224 and generates a READ DPSRAM signal on a line 226 , which couples this signal to the port 108 .
  • where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 224 and the line 226 are typically formed using global routing resources.
  • the FIFO 194 1 drives the FIFO FULL signal to the logic level corresponding to the current state (“full” or “not full”) of the FIFO relative to the read addresses. That is, if the FIFO 194 1 is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
  • the pipeline 74 1 drives the address of the data to be read onto the bus 198 1 and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO.
  • the pipeline 74 1 gets the address from the input-data queue 122 via the sequence manager 148 . If, however, the FIFO 194 1 is full of read addresses, the pipeline 74 1 waits until the FIFO is not full before loading the read address.
  • the FIFO 194 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state (“empty” or “not empty”) of the FIFO relative to the read addresses. That is, if the FIFO 194 1 is loaded with at least one read address, it drives the logic level of FIFO EMPTY to one level, and if the FIFO is loaded with no read addresses, it drives the logic level of FIFO EMPTY to another level.
  • the controller 192 asserts the WRITE/READ FIFO signal to the read logic level and drives the SHIFT/LOAD signal to the load logic level, thus loading the first loaded address and the ID from the FIFO into the register 196 1 .
  • the channels 190 2 - 190 n operate in a similar manner such that the controller 192 respectively loads the first-loaded addresses and IDs from the FIFOs 194 2 - 194 n into the registers 196 2 - 196 n . If all of the FIFOs 194 2 - 194 n are empty, then the controller 192 waits for at least one of the FIFOs to receive an address before proceeding.
  • the controller 192 drives the SHIFT/LOAD signal to the shift logic level and asserts the READ DPSRAM signal to serially shift the addresses and IDs from the registers 196 1 - 196 n onto the address and ID busses 216 and 218 and to serially read the data from the corresponding locations of the DPSRAM 100 via the bus 224 .
  • the controller 192 drives the received data and corresponding ID (the ID allows each of the FIFOs 194 1 - 194 n to determine whether it is an intended recipient of the data) onto the bus 212 , and drives the WRITE/READ FIFO signal to a write level, thus serially writing the data to the respective FIFOs 194 1 - 194 n .
  • the hardwired pipelines 74 1 - 74 n sequentially assert their READ/WRITE FIFO signals to a read level and sequentially read the data via the busses 214 1 - 214 n .
  • the controller 192 shifts the address and ID from the register 196 1 onto the busses 216 and 218 , respectively, asserts READ DPSRAM, and thus reads the data from the corresponding location of the DPSRAM 100 via the bus 224 and reads the ID from the bus 218 .
  • the controller 192 drives the WRITE/READ FIFO signal on the line 208 1 to a write level and drives the received data and the ID onto the bus 212 . Because the ID is the ID from the FIFO 194 1 , the FIFO 194 1 recognizes the ID and thus loads the data from the bus 212 in response to the write level of the WRITE/READ FIFO signal.
  • the remaining FIFOs 194 2 - 194 n do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 74 1 asserts the READ/WRITE FIFO signal on the line 202 1 to the read level and retrieves the read data via the bus 214 1 . Also during the first shift cycle, the address and ID from the register 196 2 are shifted into the register 196 1 , the address and ID from the register 196 3 (not shown) are shifted into the register 196 2 , and so on. Alternatively, the controller 192 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208 1 to the write level.
  • the controller 192 may send the ID to the FIFOs 194 1 - 194 n .
  • the WRITE/READ FIFO signal may be only a read signal, and the FIFO 194 1 (as well as the other FIFOs 194 2 - 194 n ) may load the data on the bus 212 when the ID on the bus 212 matches the ID of the FIFO 194 1 . This eliminates the need for the controller 192 to generate a write signal.
  • the address and ID from the register 196 1 are shifted onto the busses 216 and 218 such that the controller 192 reads data from the location of the DPSRAM 100 specified by the FIFO 194 2 .
  • the controller 192 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 212 .
  • because the ID is the ID from the FIFO 194 2 , the FIFO 194 2 recognizes the ID and thus loads the data from the bus 212 .
  • the remaining FIFOs 194 1 and 194 3 - 194 n do not load the data because the ID on the bus 212 does not correspond to their IDs.
  • the pipeline 74 2 asserts its READ/WRITE FIFO signal to the read level and retrieves the read data via the bus 214 2 . Also during the second shift cycle, the address and ID from the register 196 2 are shifted into the register 196 1 , the address and ID from the register 196 3 (not shown) are shifted into the register 196 2 , and so on.
  • this shifting continues for n shift cycles, i.e., until the address and ID from the register 196 n (which are the address and ID from the FIFO 194 n ) are respectively shifted onto the busses 216 and 218 .
  • the controller 192 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 196 1 - 196 n .
  • the controller 192 may bypass the empty register, and thus shorten the shift operation by avoiding shifting a null address onto the bus 216 .
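The read interface can be modeled the same way. In the sketch below, read_interface_cycle, address_fifos, and data_fifos are invented names; the point illustrated is the ID mechanism, by which each returned datum is loaded only by the FIFO of the channel that requested it.

```python
# Minimal behavioral sketch of the read interface with ID-based return of
# read data. Names are illustrative; the real interface is hardwired logic.
from collections import deque

def read_interface_cycle(address_fifos, data_fifos, dpsram):
    """Load one pending read address (plus channel ID) from each non-empty
    address FIFO, then read the DPSRAM serially and return each result on the
    shared bus; only the FIFO whose ID matches loads the returned data."""
    pending = [(fifo.popleft(), channel_id) if fifo else None
               for channel_id, fifo in enumerate(address_fifos)]
    for entry in pending:
        if entry is None:
            continue                              # empty register bypassed
        address, channel_id = entry
        data = dpsram.get(address)                # read from the memory port
        for fifo_id, fifo in enumerate(data_fifos):
            if fifo_id == channel_id:             # ID match: this FIFO loads the data
                fifo.append(data)

dpsram_100 = {0x10: "raw0", 0x20: "raw1"}
address_fifos = [deque([0x10]), deque([0x20]), deque()]
data_fifos = [deque(), deque(), deque()]
read_interface_cycle(address_fifos, data_fifos, dpsram_100)
print(list(data_fifos[0]), list(data_fifos[1]))   # ['raw0'] ['raw1']
```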
  • the interface 144 is similar to the interface 140
  • the interface 136 is also similar to the interface 140 except that the interface 136 includes only one read channel 190 , and thus includes no ID circuitry.
  • FIG. 8 is a schematic block diagram of a pipeline unit 230 of FIG. 4 according to another embodiment of the invention.
  • the pipeline unit 230 is similar to the pipeline unit 78 of FIG. 4 except that the pipeline unit 230 includes multiple pipeline circuits 80 —here two pipeline circuits 80 a and 80 b .
  • Increasing the number of pipeline circuits 80 typically allows an increase in the number n of hardwired pipelines 74 1 - 74 n , and thus an increase in the functionality of the pipeline unit 230 as compared to the pipeline unit 78 .
  • the services components, i.e., the communication interface 82 , the controller 86 , the exception manager 88 , the configuration manager 90 , and the optional industry-standard bus interface 91 , are disposed on the pipeline circuit 80 a .
  • the pipelines 74 1 - 74 n and the communication shell 84 are disposed on the pipeline circuit 80 b .
  • the portion of the communication shell 84 that interfaces the pipelines 74 1 - 74 n to the interface 82 and the controller 86 may be disposed on the pipeline circuit 80 a.
  • FIG. 9 is a schematic block diagram of the pipeline circuits 80 a and 80 b and the data memory 92 of the pipeline unit 230 of FIG. 8 according to an embodiment of the invention.
  • the structure and operation of the pipeline circuits 80 a and 80 b and the memory 92 of FIG. 9 are the same as for the pipeline circuit 80 and memory 92 of FIG. 5.

Abstract

A pipeline accelerator includes a memory and a hardwired-pipeline circuit coupled to the memory. The hardwired-pipeline circuit is operable to receive data, load the data into the memory, retrieve the data from the memory, process the retrieved data, and provide the processed data to an external source. In addition or in the alternative, the hardwired-pipeline circuit is operable to receive data, process the received data, load the processed data into the memory, retrieve the processed data from the memory, and provide the retrieved processed data to an external source. Where the pipeline accelerator is coupled to a processor as part of a peer-vector machine, the memory facilitates the transfer of data—whether unidirectional or bidirectional—between the hardwired-pipeline circuit(s) and an application that the processor executes.

Description

    CLAIM OF PRIORITY
  • This application claims priority to U.S. Provisional Application Serial No. 60/422,503, filed on Oct. 31, 2002, which is incorporated by reference. [0001]
  • CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is related to U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), Ser. No. ______ entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-12-3), Ser. No. ______ entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-14-3), and Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), which have a common filing date and owner and which are incorporated by reference.[0002]
  • BACKGROUND
  • A common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm. [0003]
  • FIG. 1 is a schematic block diagram of a [0004] conventional computing machine 10 having a multi-processor architecture. The machine 10 includes a master processor 12 and coprocessors 14 1-14 n, which communicate with each other and the master processor via a bus 16, an input port 18 for receiving raw data from a remote device (not shown in FIG. 1), and an output port 20 for providing processed data to the remote source. The machine 10 also includes a memory 22 for the master processor 12, respective memories 24 1-24 n for the coprocessors 14 1-14 n, and a memory 26 that the master processor and coprocessors share via the bus 16. The memory 22 serves as both a program and a working memory for the master processor 12, and each memory 24 1-24 n serves as both a program and a working memory for a respective coprocessor 14 1-14 n. The shared memory 26 allows the master processor 12 and the coprocessors 14 to transfer data among themselves, and from/to the remote device via the ports 18 and 20, respectively. The master processor 12 and the coprocessors 14 also receive a common clock signal that controls the speed at which the machine 10 processes the raw data.
  • In general, the [0005] computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14. The remote source (not shown in FIG. 1) such as a sonar array loads the raw data via the port 18 into a section of the shared memory 26, which acts as a first-in-first-out (FIFO) buffer (not shown) for the raw data. The master processor 12 retrieves the raw data from the memory 26 via the bus 16, and then the master processor and the coprocessors 14 process the raw data, transferring data among themselves as necessary via the bus 16. The master processor 12 loads the processed data into another FIFO buffer (not shown) defined in the shared memory 26, and the remote source retrieves the processed data from this FIFO via the port 20.
  • In an example of operation, the [0006] computing machine 10 processes the raw data by sequentially performing n+1 respective operations on the raw data, where these operations together compose a processing algorithm such as a Fast Fourier Transform (FFT). More specifically, the machine 10 forms a data-processing pipeline from the master processor 12 and the coprocessors 14. For a given frequency of the clock signal, such a pipeline often allows the machine 10 to process the raw data faster than a machine having only a single processor.
  • After retrieving the raw data from the raw-data FIFO (not shown) in the [0007] memory 26, the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26. Typically, the processor 12 executes a program stored in the memory 22, and performs the above-described actions under the control of the program. The processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
  • Next, after retrieving the first result from the first-result FIFO (not shown) in the [0008] memory 26, the coprocessor 14 1 performs a second operation, such as a logarithmic function, on the first result. This second operation yields a second result, which the coprocessor 14 1 stores in a second-result FIFO (not shown) defined within the memory 26. Typically, the coprocessor 14 1 executes a program stored in the memory 24 1, and performs the above-described actions under the control of the program. The coprocessor 14 1 may also use the memory 24 1 as working memory to temporarily store data that the coprocessor generates at intermediate intervals of the second operation.
  • Then, the coprocessors [0009] 14 2-14 n sequentially perform the third through nth operations on the second through (n−1)th results in a manner similar to that discussed above for the coprocessor 14 1.
  • The nth [0010] operation, which is performed by the coprocessor 14 n, yields the final result, i.e., the processed data. The coprocessor 14 n loads the processed data into a processed-data FIFO (not shown) defined within the memory 26, and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
  • Because the [0011] master processor 12 and coprocessors 14 are simultaneously performing different operations of the processing algorithm, the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations. Specifically, the single processor cannot retrieve a new set of the raw data until it performs all n+1 operations on the previous set of raw data. But using the pipeline technique discussed above, the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
  • Alternatively, the [0012] computing machine 10 may process the raw data in parallel by simultaneously performing n+1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n+1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially perform all n+1 operations on respective sets of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n+1 as compared to a single-processor machine (not shown in FIG. 1).
  • Unfortunately, although the [0013] computing machine 10 can process data more quickly than a single-processor computer machine (not shown in FIG. 1), the data-processing speed of the machine 10 is often significantly less than the frequency of the processor clock. Specifically, the data-processing speed of the computing machine 10 is limited by the time that the master processor 12 and coprocessors 14 require to process data. For brevity, an example of this speed limitation is discussed in conjunction with the master processor 12, although it is understood that this discussion also applies to the coprocessors 14. As discussed above, the master processor 12 executes a program that controls the processor to manipulate data in a desired manner. This program includes a sequence of instructions that the processor 12 executes. Unfortunately, the processor 12 typically requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single value of data. For example, suppose that the processor 12 is to multiply a first data value A (not shown) by a second data value B (not shown). During a first clock cycle, the processor 12 retrieves a multiply instruction from the memory 22. During second and third clock cycles, the processor 12 respectively retrieves A and B from the memory 26. During a fourth clock cycle, the processor 12 multiplies A and B, and, during a fifth clock cycle, stores the resulting product in the memory 22 or 26 or provides the resulting product to the remote device (not shown). This is a best-case scenario, because in many cases the processor 12 requires additional clock cycles for overhead tasks such as initializing and closing counters. Therefore, at best the processor 12 requires five clock cycles, or an average of 2.5 clock cycles per data value, to process A and B.
  • Consequently, the speed at which the [0014] computing machine 10 processes data is often significantly lower than the frequency of the clock that drives the master processor 12 and the coprocessors 14. For example, if the processor 12 is clocked at 1.0 Gigahertz (GHz) but requires an average of 2.5 clock cycles per data value, then the effective data-processing speed equals (1.0 GHz)/2.5=0.4 GHz. This effective data-processing speed is often characterized in units of operations per second. Therefore, in this example, for a clock speed of 1.0 GHz, the processor 12 would be rated with a data-processing speed of 0.4 Gigaoperations/second (Gops).
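The arithmetic of this example can be restated compactly; the figures below are the ones given in the text (a 1.0 GHz clock and 5 clock cycles for 2 data values).

```python
# Worked example of the effective data-processing rate described above.
clock_hz = 1.0e9                 # 1.0 GHz processor clock
cycles_per_value = 5 / 2         # 5 clock cycles to process the 2 values A and B
effective_ops_per_second = clock_hz / cycles_per_value
print(effective_ops_per_second / 1e9, "Gops")   # 0.4 Gops
```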
  • FIG. 2 is a block diagram of a [0015] hardwired data pipeline 30 that can typically process data faster than a processor can for a given clock frequency, and often at substantially the same rate at which the pipeline is clocked. The pipeline 30 includes operator circuits 32 1-32 n, which each perform a respective operation on respective data without executing program instructions. That is, the desired operation is “burned in” to a circuit 32 such that it implements the operation automatically, without the need of program instructions. By eliminating the overhead associated with executing program instructions, the pipeline 30 can typically perform more operations per second than a processor can for a given clock frequency.
  • For example, the [0016] pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency:
  • Y(xk)=(5xk+3)2^|xk|
  • where x[0017] k represents a sequence of raw data values. In this example, the operator circuit 32 1 is a multiplier that calculates 5xk, the circuit 32 2 is an adder that calculates 5xk+3, and the circuit 32 n (n=3) is a multiplier that calculates (5xk+3)2xk.|.
  • During a first clock cycle k=1, the [0018] circuit 32 1 receives data value x1 and multiplies it by 5 to generate 5x1.
  • During a second clock cycle k=2, the [0019] circuit 32 2 receives 5x1 from the circuit 32 1 and adds 3 to generate 5x1+3. Also, during the second clock cycle, the circuit 32 1 generates 5x2.
  • During a third clock cycle k=3, the circuit [0020] 32 3 receives 5x1+3 from the circuit 32 2 and multiplies it by 2^|x1| (effectively left shifts 5x1+3 by |x1|) to generate the first result (5x1+3)2^|x1|. Also during the third clock cycle, the circuit 32 1 generates 5x3 and the circuit 32 2 generates 5x2+3.
  • The [0021] pipeline 30 continues processing subsequent raw data values xk in this manner until all the raw data values are processed.
  • Consequently, after a delay of two clock cycles from receiving a raw data value x1 [0022] (this delay is often called the latency of the pipeline 30), the pipeline generates the result (5x1+3)2^|x1|, and thereafter generates one result, e.g., (5x2+3)2^|x2|, (5x3+3)2^|x3|, . . . , (5xn+3)2^|xn|, each clock cycle.
  • Disregarding the latency, the [0023] pipeline 30 thus has a data-processing speed equal to the clock speed. In comparison, assuming that the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds that are 0.4 times the clock speed as in the above example, the pipeline 30 can process data 2.5 times faster than the computing machine 10 (FIG. 1) for a given clock speed.
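The three-stage example can be simulated cycle by cycle. The sketch below is illustrative only; it assumes the formula reads Y(xk) = (5xk + 3)·2^|xk| (the exponent is reconstructed from the surrounding description), and the staging loop is a software stand-in for three operator circuits that are clocked in parallel in hardware.

```python
# Behavioral sketch of the three-stage hardwired pipeline described above
# (operator circuits 32_1, 32_2, 32_3). The loop models one clock cycle per
# iteration; in hardware the stages run concurrently.
def run_pipeline(raw_values):
    stage1 = stage2 = None        # values held between clock cycles
    results = []
    # Feed the raw values, then two extra cycles to flush the latency.
    for value in list(raw_values) + [None, None]:
        if stage2 is not None:
            results.append(stage2 * 2 ** abs(stage2_x))   # stage 3: multiply by 2^|x_k|
        if stage1 is not None:
            stage2, stage2_x = stage1 + 3, stage1_x       # stage 2: add 3
        else:
            stage2 = None
        if value is not None:
            stage1, stage1_x = 5 * value, value           # stage 1: multiply by 5
        else:
            stage1 = None
    return results

print(run_pipeline([1, 2, 3]))   # [16, 52, 144] -> (5*1+3)*2**1, (5*2+3)*2**2, (5*3+3)*2**3
```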
  • Still referring to FIG. 2, a designer may choose to implement the [0024] pipeline 30 in a programmable logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows more design and modification flexibility than does an application specific IC (ASIC). To configure the hardwired connections within a PLIC, the designer merely sets interconnection-configuration registers disposed within the PLIC to predetermined binary states. The combination of all these binary states is often called “firmware.” Typically, the designer loads this firmware into a nonvolatile memory (not shown in FIG. 2) that is coupled to the PLIC. When one “turns on” the PLIC, it downloads the firmware from the memory into the interconnection-configuration registers. Therefore, to modify the functioning of the PLIC, the designer merely modifies the firmware and allows the PLIC to download the modified firmware into the interconnection-configuration registers. This ability to modify the PLIC by merely modifying the firmware is particularly useful during the prototyping stage and for upgrading the pipeline 30 “in the field”.
  • Unfortunately, the [0025] hardwired pipeline 30 may not be the best choice to execute algorithms that entail significant decision making, particularly nested decision making. A processor can typically execute a nested-decision-making instruction (e.g., a nested conditional instruction such as “if A, then do B, else if C, do D, . . . , else do n”) approximately as fast as it can execute an operational instruction (e.g., “A+B”) of comparable length. But although the pipeline 30 may be able to make a relatively simple decision (e.g., “A>B?”) efficiently, it typically cannot execute a nested decision (e.g., “if A, then do B, else if C, do D, . . . , else do n”) as efficiently as a processor can. One reason for this inefficiency is that the pipeline 30 may have little on-board memory, and thus may need to access external working/instruction memory (not shown). And although one may be able to design the pipeline 30 to execute such a nested decision, the size and complexity of the required circuitry often makes such a design impractical, particularly where an algorithm includes multiple different nested decisions.
  • Consequently, processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to “number crunching” applications that entail little or no decision making. [0026]
  • Furthermore, as discussed below, it is typically much easier for one to design/modify a processor-based computing machine, such as the computing [0027] machine 10 of FIG. 1, than it is to design/modify a hardwired pipeline such as the pipeline 30 of FIG. 2, particularly where the pipeline 30 includes multiple PLICs.
  • Computing components, such as processors and their peripherals (e.g., memory), typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine. [0028]
  • Typically, a standard communication interface includes two layers: a physical layer and a services layer. [0029]
  • The physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry. For example, the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive signals onto the pins. The operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode). Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS. [0030]
  • The services layer includes the protocol by which a computing component transfers data. The protocol defines the format of the data and the manner in which the component sends and receives the formatted data. Conventional communication protocols include file-transfer protocol (FTP) and transmission control protocol/internet protocol (TCP/IP). [0031]
  • Consequently, because manufacturers and others typically design computing components having industry-standard communication interfaces, one can typically design the interface of such a component and interconnect it to other computing components with relatively little effort. This allows one to devote most of his time to designing the other portions of the computing machine, and to easily modify the machine by adding or removing components. [0032]
  • Designing a computing component that supports an industry-standard communication interface allows one to save design time by using an existing physical-layer design from a design library. This also ensures that he/she can easily interface the component to off-the-shelf computing components. [0033]
  • And designing a computing machine using computing components that support a common industry-standard communication interface allows the designer to interconnect the components with little time and effort. Because the components support a common interface, the designer can interconnect them via a system bus with little design effort. And because the supported interface is an industry standard, one can easily modify the machine. For example, one can add different components and peripherals to the machine as the system design evolves, or can easily add/design next-generation components as the technology evolves. Furthermore, because the components support a common industry-standard service layer, one can incorporate into the computing machine's software an existing software module that implements the corresponding protocol. Therefore, one can interface the components with little effort because the interface design is essentially already in place, and thus can focus on designing the portions (e.g., software) of the machine that cause the machine to perform the desired function(s). [0034]
  • But unfortunately, there are no known industry-standard services layers for components, such as PLICs, used to form hardwired pipelines such as the [0035] pipeline 30 of FIG. 2.
  • Consequently, to design a pipeline having multiple PLICs, one typically spends a significant amount of time and exerts a significant effort designing and debugging the services layer of the communication interface between the PLICs “from scratch.” Typically, such an ad hoc services layer depends on the parameters of the data being transferred between the PLICs. Likewise, to design a pipeline that interfaces to a processor, one would have to spend a significant amount of time and exert a significant effort in designing and debugging the services layer of the communication interface between the pipeline and the processor from scratch. [0036]
  • Similarly, to modify such a pipeline by adding a PLIC to it, one typically spends a significant amount of time and exerts a significant effort designing and debugging the services layer of the communication interface between the added PLIC and the existing PLICs. Likewise, to modify a pipeline by adding a processor, or to modify a computing machine by adding a pipeline, one would have to spend a significant amount of time and exert a significant effort in designing and debugging the services layer of the communication interface between the pipeline and processor. [0037]
  • Consequently, referring to FIGS. 1 and 2, because of the difficulties in interfacing multiple PLICs and in interfacing a processor to a pipeline, one is often forced to make significant tradeoffs when designing a computing machine. For example, with a processor-based computing machine, one is forced to trade number-crunching speed and design/modification flexibility for complex decision-making ability. Conversely, with a hardwired pipeline-based computing machine, one is forced to trade complex-decision-making ability and design/modification flexibility for number-crunching speed. Furthermore, because of the difficulties in interfacing multiple PLICs, it is often impractical for one to design a pipeline-based machine having more than a few PLICs. As a result, a practical pipeline-based machine often has limited functionality. And because of the difficulties in interfacing a processor to a PLIC, it would be impractical to interface a processor to more than one PLIC. As a result, the benefits obtained by combining a processor and a pipeline would be minimal. [0038]
  • Therefore, a need has arisen for a new computing architecture that allows one to combine the decision-making ability of a processor-based machine with the number-crunching speed of a hardwired-pipeline-based machine. [0039]
  • SUMMARY
  • According to an embodiment of the invention, a pipeline accelerator includes a memory and a hardwired-pipeline circuit coupled to the memory. The hardwired-pipeline circuit is operable to receive data, load the data into the memory, retrieve the data from the memory, process the retrieved data, and provide the processed data to an external source. [0040]
  • According to another embodiment of the invention, the hardwired-pipeline circuit is operable to receive data, process the received data, load the processed data into the memory, retrieve the processed data from the memory, and provide the retrieved processed data to an external source. [0041]
  • Where the pipeline accelerator is coupled to a processor as part of a peer-vector machine, the memory facilitates the transfer of data, whether unidirectional or bidirectional, between the hardwired-pipeline circuit and an application that the processor executes.[0042]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a computing machine having a conventional multi-processor architecture. [0043]
  • FIG. 2 is a block diagram of a conventional hardwired pipeline. [0044]
  • FIG. 3 is a block diagram of a computing machine having a peer-vector architecture according to an embodiment of the invention. [0045]
  • FIG. 4 is a block diagram of the pipeline accelerator of FIG. 3 according to an embodiment of the invention. [0046]
  • FIG. 5 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 4 according to an embodiment of the invention. [0047]
  • FIG. 6 is a block diagram of the memory-write interfaces of the communication shell of FIG. 5 according to an embodiment of the invention. [0048]
  • FIG. 7 is a block diagram of the memory-read interfaces of the communication shell of FIG. 5 according to an embodiment of the invention. [0049]
  • FIG. 8 is a block diagram of the pipeline accelerator of FIG. 3 according to another embodiment of the invention. [0050]
  • FIG. 9 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 8 according to an embodiment of the invention.[0051]
  • DETAILED DESCRIPTION
  • FIG. 3 is a schematic block diagram of a [0052] computing machine 40, which has a peer-vector architecture according to an embodiment of the invention. In addition to a host processor 42, the peer-vector machine 40 includes a pipeline accelerator 44, which performs at least a portion of the data processing, and which thus effectively replaces the bank of coprocessors 14 in the computing machine 10 of FIG. 1. Therefore, the host processor 42 and the accelerator 44 (or units thereof as discussed below) are “peers” that can transfer data vectors back and forth. Because the accelerator 44 does not execute program instructions, it typically performs mathematically intensive operations on data significantly faster than a bank of coprocessors can for a given clock frequency. Consequently, by combining the decision-making ability of the processor 42 and the number-crunching ability of the accelerator 44, the machine 40 has the same abilities as, but can often process data faster than, a conventional computing machine such as the machine 10. Furthermore, as discussed below, providing the accelerator 44 with a communication interface that is compatible with the communication interface of the host processor 42 facilitates the design and modification of the machine 40, particularly where the processor's communication interface is an industry standard. And where the accelerator 44 includes multiple pipeline units (e.g., PLIC-based circuits), providing each of these units with the same communication interface facilitates the design and modification of the accelerator, particularly where the communication interfaces are compatible with an industry-standard interface. Moreover, the machine 40 may also provide other advantages as described below and in the previously cited patent applications.
  • Still referring to FIG. 3, in addition to the [0053] host processor 42 and the pipeline accelerator 44, the peer-vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, an optional raw-data input port 54, a processed-data output port 58, and an optional router 61.
  • The [0054] host processor 42 includes a processing unit 62 and a message handler 64, and the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processor unit and the message handler. The processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the format of the messages that the message handler 64 sends and receives.
  • The [0055] pipeline accelerator 44 is disposed on at least one PLIC (not shown) and includes hardwired pipelines 74 1-74 n, which process respective data without executing program instructions. The firmware memory 52 stores the configuration firmware for the accelerator 44. If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 4). The accelerator 44 and pipeline units are discussed further below and in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3). Alternatively, the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable. In this alternative, the machine 40 may omit the firmware memory 52. Furthermore, although the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital-signal processor (DSP). Moreover, although not shown, the accelerator 44 may include a data input port and/or a data output port.
  • The general operation of the peer-[0056] vector machine 40 is discussed in previously cited U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), and the structure and operation of the pipeline accelerator 44 is discussed below in conjunction with FIGS. 4-9.
  • FIG. 4 is a schematic block diagram of the [0057] pipeline accelerator 44 of FIG. 3 according to an embodiment of the invention.
  • The [0058] accelerator 44 includes one or more pipeline units 78, each of which includes a pipeline circuit 80, such as a PLIC or an ASIC. As discussed further below and in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), each pipeline unit 78 is a “peer” of the host processor 42 and of the other pipeline units of the accelerator 44. That is, each pipeline unit 78 can communicate directly with the host processor 42 or with any other pipeline unit. Thus, this peer-vector architecture prevents data “bottlenecks” that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42. Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 3) without significant modifications to the machine.
  • The [0059] pipeline circuit 80 includes a communication interface 82, which transfers data between a peer, such as the host processor 42 (FIG. 3), and the following other components of the pipeline circuit: the hardwired pipelines 74 1-74 n (FIG. 3) via a communication shell 84, a controller 86, an exception manager 88, and a configuration manager 90. The pipeline circuit 80 may also include an industry-standard bus interface 91. Alternatively, the functionality of the interface 91 may be included within the communication interface 82.
  • By designing the components of the [0060] pipeline circuit 80 as separate modules, one can often simplify the design of the pipeline circuit. That is, one can design and test each of these components separately, and then integrate them much like one does when designing software or a processor-based computing system (such as the system 10 of FIG. 1). In addition, one can save in a library (not shown) hardware description language (HDL) that defines these components—particularly components, such as the communication interface 82, that will probably be used frequently in other pipeline designs—thus reducing the design and test time of future pipeline designs that use the same components. That is, by using the HDL from the library, the designer need not redesign previously implemented components “from scratch”, and thus can focus his efforts on the design of components that were not previously implemented, or on the modification of previously implemented components. Moreover, one can save in the library HDL that defines multiple versions of the pipeline circuit 80 or of the entire pipeline accelerator 44, so that one can pick and choose among existing designs.
  • The [0061] communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 3), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 3). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44. Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 3), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
  • The hardwired pipelines [0062] 74 1-74 n perform respective operations on data as discussed above in conjunction with FIG. 3 and in previously cited U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), and the communication shell 84 interfaces the pipelines to the other components of the pipeline circuit 80 and to circuits (such as a data memory 92 discussed below) external to the pipeline circuit.
  • The [0063] controller 86 synchronizes the hardwired pipelines 74 1-74 n and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., “events,” from other peers. For example, a peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74 1-74 n to begin processing this data. An event that includes data is typically called a message, and an event that does not include data is typically called a “door bell.” Furthermore, as discussed below in conjunction with FIG. 5, the pipeline unit 78 may also synchronize the pipelines 74 1-74 n in response to a synchronization signal.
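The distinction between the two kinds of events can be captured in a small model. The Event dataclass below is an invented illustration, not a structure defined by the patent: a message carries a data payload, while a doorbell carries none.

```python
# Illustrative sketch of the two kinds of peer communications described above:
# a "message" (an event that carries data) and a "doorbell" (an event that
# does not). The dataclass is a modeling convenience only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    kind: str                      # e.g. "block-complete"
    payload: Optional[bytes] = None

    @property
    def is_doorbell(self) -> bool:
        return self.payload is None

message = Event(kind="input-data", payload=b"\x01\x02\x03")
doorbell = Event(kind="block-complete")
print(message.is_doorbell, doorbell.is_doorbell)   # False True
```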
  • The [0064] exception manager 88 monitors the status of the hardwired pipelines 74 1-74 n, the communication interface 82, the communication shell 84, the controller 86, and the bus interface 91, and reports exceptions to the host processor 42 (FIG. 3). For example, if a buffer in the communication interface 82 overflows, then the exception manager 88 reports this to the host processor 42. The exception manager may also correct, or attempt to correct, the problem giving rise to the exception. For example, for an overflowing buffer, the exception manager 88 may increase the size of the buffer, either directly or via the configuration manager 90 as discussed below.
  • The configuration manager [0065] 90 sets the soft configuration of the hardwired pipelines 74 1-74 n, the communication interface 82, the communication shell 84, the controller 86, the exception manager 88, and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 3). As discussed in previously cited U.S. patent application Ser. No. ______ entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80, and the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components. That is, soft-configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor. For example, the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82. The exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82.
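The idea of soft configuration, i.e., changing parameters of already hard-configured blocks, can be illustrated as follows. The function and field names (apply_soft_configuration, queue_priorities, buffer_size) are hypothetical; the example mirrors the text's case of setting the number and priority levels of queues in the communication interface 82.

```python
# Illustrative sketch of "soft" configuration: the hard-configured circuit
# blocks stay fixed, while parameters such as queue count and priorities are
# set from soft-configuration data sent by a peer. Field names are invented.
def apply_soft_configuration(soft_config, communication_interface):
    # Resize/prioritize queues in the communication interface without
    # changing the underlying (hard) circuit topology.
    communication_interface["queues"] = [
        {"priority": priority, "entries": []}
        for priority in soft_config["queue_priorities"]
    ]
    communication_interface["buffer_size"] = soft_config["buffer_size"]

interface_82 = {}
apply_soft_configuration(
    {"queue_priorities": [0, 1, 1, 2], "buffer_size": 4096}, interface_82)
print(len(interface_82["queues"]), interface_82["buffer_size"])   # 4 4096
```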
  • Still referring to FIG. 4, in addition to the [0066] pipeline circuit 80, the pipeline unit 78 of the accelerator 44 includes the data memory 92, an optional communication bus 94, and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 3).
  • The [0067] data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 3), and the hardwired pipelines 74 1-74 n, and is also a working memory for the hardwired pipelines. The communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and industry-standard interface 91 if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74 1-74 n.
  • The industry-[0068] standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 3), then he need only modify the interface 91 and not the communication interface 82. Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80. Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74 1-74 n and the controller 86. Or, as discussed above, the bus interface 91 may be part of the communication interface 82.
  • As discussed above in conjunction with FIG. 3, where the [0069] pipeline circuit 80 is a PLIC, the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit. The memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44, and may receive modified firmware from the host processor 42 (FIG. 3) via the communication interface 82 during or after the configuration of the accelerator. The loading and receiving of firmware is further discussed in previously cited U.S. patent application Ser. No. ______ entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-14-3).
  • Still referring to FIG. 4, the [0070] pipeline circuit 80, data memory 92, and firmware memory 52 may be disposed on a circuit board or card 98, which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown). Although not shown, conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known.
  • Further details of the structure and operation of the [0071] pipeline unit 78 are discussed below in conjunction with FIG. 5.
  • FIG. 5 is a block diagram of the [0072] pipeline unit 78 of FIG. 4 according to an embodiment of the invention. For clarity, the firmware memory 52 is omitted from FIG. 5. The pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly. The pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner. The pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below.
  • The [0073] data memory 92 includes an input dual-port-static-random-access memory (DPSRAM) 100, an output DPSRAM 102, and an optional working DPSRAM 104.
  • The [0074] input DPSRAM 100 includes an input port 106 for receiving data from a peer, such as the host processor 42 (FIG. 3), via the communication interface 82, and includes an output port 108 for providing this data to the hardwired pipelines 74 1-74 n via the communication shell 84. Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 100 because the communication interface 82 can write data to the DPSRAM while the pipelines 74 1-74 n read data from the DPSRAM. Furthermore, as discussed above, using the DPSRAM 100 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74 1-74 n to operate asynchronously relative to one another. That is, the peer can send data to the pipelines 74 1-74 n without “waiting” for the pipelines to complete a current operation. Likewise, the pipelines 74 1-74 n can retrieve data without “waiting” for the peer to complete a data-sending operation.
  • Similarly, the [0075] output DPSRAM 102 includes an input port 110 for receiving data from the hardwired pipelines 74 1-74 n via the communication shell 84, and includes an output port 112 for providing this data to a peer, such as the host processor 42 (FIG. 3), via the communication interface 82. As discussed above, the two data ports 110 (input) and 112 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 102, and using the DPSRAM 102 to buffer data from the pipelines 74 1-74 n allows the peer and the pipelines to operate asynchronously relative to one another. That is, the pipelines 74 1-74 n can publish data to the peer without “waiting” for the output-data handler 126 to complete a data transfer to the peer or to another peer. Likewise, the output-data handler 126 can transfer data to a peer without “waiting” for the pipelines 74 1-74 n to complete a data-publishing operation.
  • The working [0076] DPSRAM 104 includes an input port 114 for receiving data from the hardwired pipelines 74 1-74 n via the communication shell 84, and includes an output port 116 for returning this data back to the pipelines via the communication shell. While processing input data received from the DPSRAM 100, the pipelines 74 1-74 n may need to temporarily store partially processed, i.e., intermediate, data before continuing the processing of this data. For example, a first pipeline, such as the pipeline 74 1, may generate intermediate data for further processing by a second pipeline, such as the pipeline 74 2; thus, the first pipeline may need to temporarily store the intermediate data until the second pipeline retrieves it. The working DPSRAM 104 provides this temporary storage. As discussed above, the two data ports 114 (input) and 116 (output) increase the speed and efficiency of data transfer between the pipelines 74 1-74 n and the DPSRAM 104. Furthermore, including a separate working DPSRAM 104 typically increases the speed and efficiency of the pipeline circuit 80 by allowing the DPSRAMs 100 and 102 to function exclusively as data-input and data-output buffers, respectively. But, with slight modification to the pipeline circuit 80, either or both of the DPSRAMS 100 and 102 can also be a working memory for the pipelines 74 1-74 n when the DPSRAM 104 is omitted, and even when it is present.
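The asynchronous decoupling that the dual-port memories provide can be approximated in software by a thread-safe buffer between an independent producer (a peer such as the host processor) and consumer (a hardwired pipeline). The C++ sketch below is only an analogy under that assumption; the class and thread names do not correspond to reference numerals in the figures.

```cpp
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Software analogy of a dual-port buffer: the writer never waits for the
// reader to finish processing, and the reader waits only for data availability.
class DualPortBuffer {
public:
    void write(uint32_t word) {                 // peer-side port
        std::lock_guard<std::mutex> lock(m_);
        q_.push(word);
        cv_.notify_one();
    }
    uint32_t read() {                           // pipeline-side port
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        uint32_t word = q_.front();
        q_.pop();
        return word;
    }
private:
    std::queue<uint32_t> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    DualPortBuffer input;
    std::thread peer([&] {                      // plays the role of the host processor
        for (uint32_t i = 0; i < 4; ++i) input.write(i);
    });
    std::thread pipeline([&] {                  // plays the role of a hardwired pipeline
        for (int i = 0; i < 4; ++i) std::cout << "processed " << input.read() << '\n';
    });
    peer.join();
    pipeline.join();
}
```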
  • Although the [0077] DPSRAMS 100, 102, and 104 are described as being external to the pipeline circuit 80, one or more of these DPSRAMS, or equivalents thereto, may be internal to the pipeline circuit.
  • Still referring to FIG. 5, the [0078] communication interface 82 includes an industry-standard bus adapter 118, an input-data handler 120, input-data and input-event queues 122 and 124, an output-data handler 126, and output-data and output-event queues 128 and 130. Although the queues 122, 124, 128, and 130 are shown as single queues, one or more of these queues may include subqueues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
  • The industry-standard bus adapter [0079] 118 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 4) via the communication bus 94. Therefore, if one wishes to change the parameters of the bus 94, then he need only modify the adapter 118 and not the entire communication interface 82. Where the industry-standard bus interface 91 is omitted from the pipeline unit 78, then the adapter 118 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80. In this latter implementation, the modified adapter 118 includes the functionality of the bus interface 91, and one need only modify the adapter 118 if he/she wishes to change the parameters of the bus 50.
  • The input-[0080] data handler 120 receives data from the industry-standard adapter 118, loads the data into the DPSRAM 100 via the input port 106, and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 122. If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 120 extracts the data from the message before loading the data into the DPSRAM 100. The input-data handler 120 includes an interface 132, which writes the data to the input port 106 of the DPSRAM 100 and which is further discussed below in conjunction with FIG. 6. Alternatively, the input-data handler 120 can omit the extraction step and load the entire message into the DPSRAM 100.
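A minimal C++ sketch of the input-data path just described, assuming the input memory is modeled as a flat array and the pointer as an index into it; the type and function names are illustrative, not part of the disclosure.

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical message: a header field describing the payload plus the payload itself.
struct Message {
    uint32_t instanceId;              // which pipeline the data is intended for
    std::vector<uint32_t> payload;
};

// Entry of the input-data queue: a pointer (here an index) into the input
// memory plus a data identifier, as described for queue 122.
struct QueueEntry {
    std::size_t location;
    uint32_t dataId;
};

// Models the input-data handler: extract the payload, load it into the input
// memory, and record where it was put and for whom.
void handleInput(const Message& msg,
                 std::vector<uint32_t>& inputMemory,        // stands in for DPSRAM 100
                 std::queue<QueueEntry>& inputDataQueue) {  // stands in for queue 122
    std::size_t location = inputMemory.size();
    inputMemory.insert(inputMemory.end(), msg.payload.begin(), msg.payload.end());
    inputDataQueue.push({location, msg.instanceId});
}
```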
  • The input-[0081] data handler 120 also receives events from the industry-standard bus adapter 118, and loads the events into the input-event queue 124.
  • Furthermore, the input-[0082] data handler 120 includes a validation manager 134, which determines whether received data or events are intended for the pipeline circuit 80. The validation manager 134 may make this determination by analyzing the header (or a portion thereof) of the message that contains the data or the event, by analyzing the type of data or event, or by analyzing the instance identification (i.e., the hardwired pipeline 74 for which the data/event is intended) of the data or event. If the input-data handler 120 receives data or an event that is not intended for the pipeline circuit 80, then the validation manager 134 prohibits the input-data handler from loading the received data/event. Where the peer-vector machine 40 includes the router 61 (FIG. 3) such that the pipeline unit 78 should receive only data/events that are intended for the pipeline unit, the validation manager 134 may also cause the input-data handler 120 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception.
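The validation step can be sketched as a simple check of the extracted header fields against the pipeline circuit's own identity, as below; the field names and the text of the exception report are assumptions made for illustration.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Hypothetical extracted header fields used for validation.
struct Header {
    uint32_t unitAddress;  // address of the destination pipeline unit
    uint32_t instanceId;   // hardwired pipeline for which the data/event is intended
};

// Models the validation manager: accept only data/events addressed to this
// unit and to one of its pipelines; otherwise produce an exception report.
std::optional<std::string> validate(const Header& h,
                                    uint32_t myAddress,
                                    uint32_t pipelineCount) {
    if (h.unitAddress != myAddress)
        return "exception: data/event not addressed to this pipeline unit";
    if (h.instanceId >= pipelineCount)
        return "exception: unknown pipeline instance";
    return std::nullopt;  // valid: the input-data handler may load the data/event
}
```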
  • The output-[0083] data handler 126 retrieves processed data from locations of the DPSRAM 102 pointed to by the output-data queue 128, and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118. The output-data handler 126 includes an interface 136, which reads the processed data from the DPSRAM 102 via the port 112. The interface 136 is further discussed below in conjunction with FIG. 7.
  • The output-[0084] data handler 126 also retrieves from the output-event queue 130 events generated by the pipelines 74 1-74 n, and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 3) via the industry-standard bus adapter 118.
  • Furthermore, the output-[0085] data handler 126 includes a subscription manager 138, which includes a list of peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138, generates a header that includes the address, and generates the message from the data/event and the header.
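A possible software model of the subscription manager's peer list is sketched below, assuming each type of processed data or event maps to the addresses of its subscribing peers; the class interface is illustrative only.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Models the subscription manager's peer list: which peers subscribe to each
// type of processed data or event. The types and addresses are illustrative.
class SubscriptionManager {
public:
    void subscribe(uint32_t dataType, uint32_t peerAddress) {
        subscribers_[dataType].push_back(peerAddress);
    }
    // Returns the peers that should receive data/events of the given type.
    const std::vector<uint32_t>& peersFor(uint32_t dataType) const {
        static const std::vector<uint32_t> none;
        auto it = subscribers_.find(dataType);
        return it == subscribers_.end() ? none : it->second;
    }
private:
    std::map<uint32_t, std::vector<uint32_t>> subscribers_;
};

// The output-data handler would build one message per subscriber (or a single
// broadcast message) using the addresses returned by peersFor().
```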
  • Although the technique for storing and retrieving data stored in the [0086] DPSRAMS 100 and 102 involves the use of pointers and data identifiers, one may modify the input- and output-data handlers 120 and 126 to implement other data-management techniques. Conventional examples of such data-management techniques include pointers using keys or tokens, input/output control (IOC) blocks, and spooling.
  • The [0087] communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74 1-74 n to the output-data queue 128, the controller 86, and the DPSRAMs 100, 102, and 104. The shell 84 includes interfaces 140 and 142, and optional interfaces 144 and 146. The interfaces 140 and 146 may be similar to the interface 136; the interface 140 reads input data from the DPSRAM 100 via the port 108, and the interface 146 reads intermediate data from the DPSRAM 104 via the port 116. The interfaces 142 and 144 may be similar to the interface 132; the interface 142 writes processed data to the DPSRAM 102 via the port 110, and the interface 144 writes intermediate data to the DPSRAM 104 via the port 114.
  • The [0088] controller 86 includes a sequence manager 148 and a synchronization interface 150, which receives one or more synchronization signals SYNC. A peer, such as the host processor 42 (FIG. 3), or a device (not shown) external to the peer-vector machine 40 (FIG. 3) may generate the SYNC signal, which triggers the sequence manager 148 to activate the hardwired pipelines 74 1-74 n as discussed below and in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3). The synchronization interface 150 may also generate a SYNC signal to trigger the pipeline circuit 80 or to trigger another peer. In addition, the events from the input-event queue 124 also trigger the sequence manager 148 to activate the hardwired pipelines 74 1-74 n as discussed below.
  • The [0089] sequence manager 148 sequences the hardwired pipelines 74 1-74 n through their respective operations via the communication shell 84. Typically, each pipeline 74 has at least three operating states: preprocessing, processing, and post processing. During preprocessing, the pipeline 74, e.g., initializes its registers and retrieves input data from the DPSRAM 100. During processing, the pipeline 74, e.g., operates on the retrieved data, temporarily stores intermediate data in the DPSRAM 104, retrieves the intermediate data from the DPSRAM 104, and operates on the intermediate data to generate result data. During post processing, the pipeline 74, e.g., loads the result data into the DPSRAM 102. Therefore, the sequence manager 148 monitors the operation of the pipelines 74 1-74 n and instructs each pipeline when to begin each of its operating states. And one may distribute the pipeline tasks among the operating states differently than described above. For example, the pipeline 74 may retrieve input data from the DPSRAM 100 during the processing state instead of during the preprocessing state.
  • Furthermore, the [0090] sequence manager 148 maintains a predetermined internal operating synchronization among the hardwired pipelines 74 1-74 n. For example, to avoid all of the pipelines 74 1-74 n simultaneously retrieving data from the DPSRAM 100, it may be desired to synchronize the pipelines such that while the first pipeline 74 1 is in a preprocessing state, the second pipeline 74 2 is in a processing state and the third pipeline 74 3 is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74 1-74 n may lose synchronization if allowed to run freely. Consequently, at certain times there may be a “bottleneck,” as, for example, multiple pipelines 74 simultaneously attempt to retrieve data from the DPSRAM 100. To prevent the loss of synchronization and its undesirable consequences, the sequence manager 148 allows all of the pipelines 74 to complete a current operating state before allowing any of the pipelines to proceed to a next operating state. Therefore, the time that the sequence manager 148 allots for a current operating state is long enough to allow the slowest pipeline 74 to complete that state. Alternatively, circuitry (not shown) for maintaining a predetermined operating synchronization among the hardwired pipelines 74 1-74 n may be included within the pipelines themselves.
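The barrier-style synchronization described above can be illustrated with a short simulation in which no pipeline advances to its next operating state until the slowest pipeline has finished the current one; the per-state cycle counts below are arbitrary example values.

```cpp
#include <algorithm>
#include <iostream>

// Simulates the sequence manager's barrier across three pipelines and the
// three operating states (preprocessing, processing, post processing).
int main() {
    constexpr int numPipelines = 3;
    constexpr int numStates = 3;
    // Example only: cycles[p][s] = cycles pipeline p needs to finish state s.
    int cycles[numPipelines][numStates] = {{4, 9, 2}, {6, 7, 3}, {5, 8, 4}};

    int totalCycles = 0;
    for (int s = 0; s < numStates; ++s) {
        // Barrier: the time allotted to a state covers the slowest pipeline.
        int slowest = 0;
        for (int p = 0; p < numPipelines; ++p)
            slowest = std::max(slowest, cycles[p][s]);
        totalCycles += slowest;
        std::cout << "state " << s << ": all pipelines advance after "
                  << slowest << " cycles\n";
    }
    std::cout << "one pass through all states: " << totalCycles << " cycles\n";
}
```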
  • In addition to sequencing and internally synchronizing the hardwired pipelines [0091] 74 1-74 n, the sequence manager 148 synchronizes the operation of the pipelines to the operation of other peers, such as the host processor 42 (FIG. 3), and to the operation of other external devices in response to one or more SYNC signals or to an event in the input-event queue 124.
  • Typically, a SYNC signal triggers a time-critical function but requires significant hardware resources; comparatively, an event typically triggers a non-time-critical function but requires significantly fewer hardware resources. As discussed in previously cited U.S. patent application Ser. No. ______ entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), because a SYNC signal is routed directly from peer to peer, it can trigger a function more quickly than an event, which must make its way through, e.g., the pipeline bus [0092] 50 (FIG. 3), the input-data handler 120, and the input-event queue 124. But because they are separately routed, the SYNC signals require dedicated circuitry, such as routing lines, buffers, and the SYNC interface 150, of the pipeline circuit 80. Conversely, because they use the existing data-transfer infrastructure (e.g., the pipeline bus 50 and the input-data handler 120), the events require only the dedicated input-event queue 124. Consequently, designers tend to use events to trigger all but the most time-critical functions.
  • The following is an example of function triggering. Assume that a sonar sensor element (not shown) sends blocks of data to the [0093] pipeline unit 78, the input-data handler 120 stores this data in the DPSRAM 100, the pipeline 74 1 transfers this data from the DPSRAM 100 to the DPSRAM 104, and, when triggered, the pipeline 74 2 retrieves and processes the data from the DPSRAM 104. If the processing that the pipeline 74 2 performs on the data is time critical, then the sensor element may generate a SYNC pulse to trigger the pipeline 74 2, via the interface 150 and the sequence manager 148, as soon as the pipeline 74 1 finishes loading an entire block of data into the DPSRAM 104. There are many conventional techniques that the pipeline unit 78 and the sensor can employ to determine when the pipeline 74 1 is finished. For example, as discussed below, the sequence manager 148 may provide a corresponding SYNC pulse or event to the sensor. Alternatively, if the processing that the pipeline 74 2 performs is not time critical, then the sensor may send an event to the sequence manager 148 via the pipeline bus 50 (FIG. 3).
  • The [0094] sequence manager 148 may also provide to a peer, such as the host processor 42 (FIG. 3), information regarding the operation of the hardwired pipelines 74 1-74 n by generating a SYNC pulse or an event. The sequence manager 148 sends a SYNC pulse via the SYNC interface 150 and a dedicated line (not shown), and sends an event via the output-event queue 130 and the output-data handler 126. Referring to the above example, suppose that a peer further processes the data blocks from the pipeline 74 2. The sequence manager 148 may notify the peer via a SYNC pulse or an event when the pipeline 74 2 has finished processing a block of data. The sequence manager 148 may also confirm receipt of a SYNC pulse or an event by generating and sending a corresponding SYNC pulse or event to the appropriate peer(s).
  • Still referring to FIG. 5, the operation of the [0095] pipeline unit 78 is discussed according to an embodiment of the invention.
  • For data, the industry-[0096] standard bus interface 91 receives data signals (which originate from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates these signals into messages each having a header and payload.
  • Next, the industry-standard bus adapter [0097] 118 converts the messages from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 120.
  • Then, the input-[0098] data handler 120 dissects the message headers and extracts from each header the portion that describes the data payload. For example, the extracted header portion may include the address of the pipeline unit 78, the type of data in the payload, or an instance identifier that identifies the pipeline(s) 74 1-74 n for which the data is intended.
  • Next, the [0099] validation manager 134 analyzes the extracted header portion and confirms that the data is intended for one of the hardwired pipelines 74 1-74 n, the interface 132 writes the data to a location of the DPSRAM 100 via the port 106, and the input-data handler 120 stores a pointer to the location and a corresponding data identifier in the input-data queue 122. The data identifier identifies the pipeline or pipelines 74 1-74 n for which the data is intended, or includes information that allows the sequence manager 148 to make this identification as discussed below. Alternatively, the queue 122 may include a respective subqueue (not shown) for each pipeline 74 1-74 n, and the input-data handler 120 stores the pointer in the subqueue or subqueues of the intended pipeline or pipelines. In this alternative, the data identifier may be omitted. Furthermore, if the data is the payload of a message, then the input-data handler 120 extracts the data from the message before the interface 132 stores the data in the DPSRAM 100. Alternatively, as discussed above, the interface 132 may store the entire message in the DPSRAM 100.
  • Then, at the appropriate time, the [0100] sequence manager 148 reads the pointer and the data identifier from the input-data queue 122, determines from the data identifier the pipeline or pipelines 74 1-74 n for which the data is intended, and passes the pointer to the pipeline or pipelines via the communication shell 84.
  • Next, the data-receiving pipeline or pipelines [0101] 74 1-74 n cause the interface 140 to retrieve the data from the pointed-to location of the DPSRAM 100 via the port 108.
  • Then, the data-receiving pipeline or pipelines [0102] 74 1-74 n process the retrieved data, the interface 142 writes the processed data to a location of the DPSRAM 102 via the port 110, and the communication shell 84 loads into the output-data queue 128 a pointer to and a data identifier for the processed data. The data identifier identifies the destination peer or peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data, or includes information (such as the data type) that allows the subscription manager 138 to subsequently determine the destination peer or peers (e.g., the host processor 42 of FIG. 3). Alternatively, the queue 128 may include a respective subqueue (not shown) for each pipeline 74 1-74 n, and the communication shell 84 stores the pointer in the subqueue or subqueues of the originating pipeline or pipelines. In this alternative, the communication shell 84 may omit loading a data identifier into the queue 128. Furthermore, if the pipeline or pipelines 74 1-74 n generate intermediate data while processing the retrieved data, then the interface 144 writes the intermediate data into the DPSRAM 104 via the port 114, and the interface 146 retrieves the intermediate data from the DPSRAM 104 via the port 116.
  • Next, the output-[0103] data handler 126 retrieves the pointer and the data identifier from the output-data queue 128, the subscription manager 138 determines from the identifier the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the data, the interface 136 retrieves the data from the pointed-to location of the DPSRAM 102 via the port 112, and the output-data handler sends the data to the industry-standard bus adapter 118. If a destination peer requires the data to be the payload of a message, then the output-data handler 126 generates the message and sends the message to the adapter 118. For example, suppose the data has multiple destination peers and the pipeline bus 50 supports message broadcasting. The output-data handler 126 generates a single header that includes the addresses of all the destination peers, combines the header and data into a message, and sends (via the adapter 118 and the industry-standard bus interface 91) a single message to all of the destination peers simultaneously. Alternatively, the output-data handler 126 generates a respective header, and thus a respective message, for each destination peer, and sends each of the messages separately.
  • Then, the industry-standard bus adapter [0104] 118 formats the data from the output-data handler 126 so that it is compatible with the industry-standard bus interface 91.
  • Next, the industry-[0105] standard bus interface 91 formats the data from the industry-standard bus adapter 118 so that it is compatible with the pipeline bus 50 (FIG. 3).
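To illustrate the last hop of the data-output path just described, the sketch below shows the two delivery options the output-data handler has for processed data: a single broadcast message when the pipeline bus supports broadcasting, or one message per destination peer otherwise; the message structure and function name are assumptions for illustration.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical outgoing message: a header of destination addresses plus a payload.
struct OutMessage {
    std::vector<uint32_t> destinations;  // peer addresses carried in the header
    std::vector<uint32_t> payload;
};

// Models the two delivery options for processed data: one broadcast message
// when the bus supports it, or one message per destination peer otherwise.
std::vector<OutMessage> buildMessages(const std::vector<uint32_t>& peers,
                                      const std::vector<uint32_t>& data,
                                      bool busSupportsBroadcast) {
    std::vector<OutMessage> msgs;
    if (busSupportsBroadcast) {
        msgs.push_back({peers, data});                   // single header, all addresses
    } else {
        for (uint32_t peer : peers)
            msgs.push_back({{peer}, data});              // one message per peer
    }
    return msgs;
}
```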
  • For an event with no accompanying data, i.e., a doorbell, the industry-[0106] standard bus interface 91 receives a signal (which originates from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates the signal into a header (i.e., a data-less message) that includes the event.
  • Next, the industry-standard bus adapter [0107] 118 converts the header from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 120.
  • Then, the input-[0108] data handler 120 extracts from the header the event and a description of the event. For example, the description may include the address of the pipeline unit 78, the type of event, or an instance identifier that identifies the pipeline(s) 74 1-74 n for which the event is intended.
  • Next, the [0109] validation manager 134 analyzes the event description and confirms that the event is intended for one of the hardwired pipelines 74 1-74 n, and the input-data handler 120 stores the event and its description in the input-event queue 124.
  • Then, at the appropriate time, the [0110] sequence manager 148 reads the event and its description from the input-event queue 124, and, in response to the event, triggers the operation of one or more of the pipelines 74 1-74 n as discussed above. For example, the sequence manager 148 may trigger the pipeline 74 2 to begin processing data that the pipeline 74 1 previously stored in the DPSRAM 104.
  • To output an event, the [0111] sequence manager 148 generates the event and a description of the event, and loads the event and its description into the output-event queue 130; the event description identifies the destination peer(s) for the event if there is more than one possible destination peer. For example, as discussed above, the event may confirm the receipt and implementation of an input event, an input-data or input-event message, or a SYNC pulse.
  • Next, the output-[0112] data handler 126 retrieves the event and its description from the output-event queue 130, the subscription manager 138 determines from the event description the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the event, and the output-data handler sends the event to the proper destination peer or peers via the industry-standard bus adapter 118 and the industry-standard bus interface 91 as discussed above.
  • For a configuration command, the industry-standard bus adapter [0113] 118 receives the command from the host processor 42 (FIG. 3) via the industry-standard bus interface 91, and provides the command to the input-data handler 120 in a manner similar to that discussed above for a data-less event (i.e., doorbell).
  • Next, the [0114] validation manager 134 confirms that the command is intended for the pipeline unit 78, and the input-data handler 120 loads the command into the configuration manager 90. Furthermore, either the input-data handler 120 or the configuration manager 90 may also pass the command to the output-data handler 126, which confirms that the pipeline unit 78 received the command by sending the command back to the peer (e.g., the host processor 42 of FIG. 3) that sent the command. This confirmation technique is sometimes called “echoing.”
  • Then, the configuration manager [0115] 90 implements the command. For example, the command may cause the configuration manager 90 to disable one of the pipelines 74 1-74 n for debugging purposes. Or, the command may allow a peer, such as the host processor 42 (FIG. 3), to read the current configuration of the pipeline circuit 80 from the configuration manager 90 via the output-data handler 126. In addition, one may use a configuration command to define an exception that is recognized by the exception manager 88.
  • For an exception, a component, such as the input-[0116] data queue 122, of the pipeline circuit 80 triggers an exception to the exception manager 88. In one implementation, the component includes an exception-triggering adapter (not shown) that monitors the component and triggers the exception in response to a predetermined condition or set of conditions. The exception-triggering adapter may be a universal circuit that can be designed once and then included as part of each component of the pipeline circuit 80 that generates exceptions.
  • Next, in response to the exception trigger, the [0117] exception manager 88 generates an exception identifier. For example, the identifier may indicate that the input-data queue 122 has overflowed. Furthermore, the identifier may include its destination peer if there is more than one possible destination peer.
  • Then, the output-[0118] data handler 126 retrieves the exception identifier from the exception manager 88 and sends the exception identifier to the host processor 42 (FIG. 3) as discussed in previously cited U.S. patent application Ser. No. ______ entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-12-3). Alternatively, if there are multiple possible destination peers, then the exception identifier can also include destination information from which the subscription manager 138 determines the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the identifier. The output-data handler 126 then sends the identifier to the destination peer or peers via the industry-standard bus adapter 118 and the industry-standard bus interface 91.
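A simple software model of the exception path is sketched below, assuming an exception identifier carries the reporting component, the condition, and an optional destination peer; the field names are assumptions for illustration.

```cpp
#include <cstdint>
#include <string>

// Hypothetical exception identifier assembled by the exception manager: the
// reporting component, the condition, and the destination peer (if more than
// one destination is possible).
struct ExceptionId {
    std::string component;    // e.g., "input-data queue"
    std::string condition;    // e.g., "overflow"
    uint32_t destinationPeer; // peer that should receive the identifier
};

// Models the path from an exception trigger to the identifier that the
// output-data handler forwards to the host processor (or another peer).
ExceptionId makeExceptionId(const std::string& component,
                            const std::string& condition,
                            uint32_t destinationPeer) {
    return {component, condition, destinationPeer};
}
```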
  • Still referring to FIG. 5, alternative embodiments of the [0119] pipeline unit 78 exist. For example, although described as including DPSRAMs, the data memory 92 may include other types of memory ICs such as quad-data-rate (QDR) SRAMs.
  • FIG. 6 is a block diagram of the [0120] interface 142 of FIG. 5 according to an embodiment of the invention. As discussed above in conjunction with FIG. 5, the interface 142 writes processed data from the hardwired pipelines 74 1-74 n to the DPSRAM 102. As discussed below, the structure of the interface 142 reduces or eliminates data “bottlenecks” and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • The [0121] interface 142 includes write channels 150 1-150 n, one channel for each hardwired pipeline 74 1-74 n (FIG. 5), and includes a controller 152. For purposes of illustration, the channel 150 1 is discussed below, it being understood that the operation and structure of the other channels 150 2-150 n are similar unless stated otherwise.
  • The channel [0122] 150 1 includes a write-address/data FIFO 154 1 and an address/data register 156 1.
  • The [0123] FIFO 154 1 stores the data that the pipeline 74 1 writes to the DPSRAM 102, and stores the address of the location within the DPSRAM 102 to which the pipeline writes the data, until the controller 152 can actually write the data to the DPSRAM 102 via the register 156 1. Therefore, the FIFO 154 1 reduces or eliminates the data bottleneck that may occur if the pipeline 74 1 had to “wait” to write data to the channel 150 1 until the controller 152 finished writing previous data.
  • The [0124] FIFO 154 1 receives the data from the pipeline 74 1 via a bus 158 1, receives the address of the location to which the data is to be written via a bus 160 1, and provides the data and address to the register 156 1 via busses 162 1 and 164 1, respectively. Furthermore, the FIFO 154 1 receives a WRITE FIFO signal from the pipeline 74 1 on a line 166 1, receives a CLOCK signal via a line 168 1, and provides a FIFO FULL signal to the pipeline 74 1 on a line 170 1. In addition, the FIFO 154 1 receives a READ FIFO signal from the controller 152 via a line 172 1, and provides a FIFO EMPTY signal to the controller via a line 174 1. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 158 1, 160 1, 162 1, and 164 1 and the lines 166 1, 168 1, 170 1, 172 1, and 174 1 are preferably formed using local routing resources. Typically, local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
  • The [0125] register 156 1 receives the data to be written and the address of the write location from the FIFO 154 1 via the busses 162 1 and 164 1, respectively, and provides the data and address to the port 110 of the DPSRAM 102 (FIG. 5) via an address/data bus 176. Furthermore, the register 156 1 also receives the data and address from the registers 156 2-156 n via an address/data bus 178 1 as discussed below. In addition, the register 156 1 receives a SHIFT/LOAD signal from the controller 152 via a line 180. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 176 is typically formed using global routing resources, and the busses 178 1-178 n-1 and the line 180 are preferably formed using local routing resources.
  • In addition to receiving the FIFO EMPTY signal and generating the READ FIFO and SHIFT/LOAD signals, the [0126] controller 152 provides a WRITE DPSRAM signal to the port 110 of the DPSRAM 102 (FIG. 5) via a line 182.
  • Still referring to FIG. 6, the operation of the [0127] interface 142 is discussed.
  • First, the [0128] FIFO 154 1 drives the FIFO FULL signal to the logic level corresponding to the current state (“full” or “not full”) of the FIFO.
  • Next, if the [0129] FIFO 154 1 is not full and the pipeline 74 1 has processed data to write, the pipeline drives the data and corresponding address onto the busses 158 1 and 160 1, respectively, and asserts the WRITE FIFO signal, thus loading the data and address into the FIFO. If the FIFO 154 1 is full, however, the pipeline 74 1 waits until the FIFO is not full before loading the data.
  • Then, the [0130] FIFO 154 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state (“empty” or “not empty”) of the FIFO.
  • Next, if the [0131] FIFO 154 1 is not empty, the controller 152 asserts the READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded data and address from the FIFO into the register 156 1. If the FIFO 154 1 is empty, the controller 152 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 154 2-154 n are not empty.
  • The channels [0132] 150 2-150 n operate in a similar manner such that first-loaded data in the FIFOs 154 2-154 n are respectively loaded into the registers 156 2-156 n.
  • Then, the [0133] controller 152 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 156 1-156 n onto the address/data bus 176 and loading the data into the corresponding locations of the DPSRAM 102. Specifically, during a first shift cycle, the data and address from the register 156 1 are shifted onto the bus 176 such that the data from the FIFO 154 1 is loaded into the addressed location of the DPSRAM 102. Also during the first shift cycle, the data and address from the register 156 2 are shifted into the register 156 1, the data and address from the register 156 3 (not shown) are shifted into the register 156 2, and so on. During a second shift cycle, the data and address from the register 156 1 are shifted onto the bus 176 such that the data from the FIFO 154 2 is loaded into the addressed location of the DPSRAM 102. Also during the second shift cycle, the data and address from the register 156 2 are shifted into the register 156 1, the data and address from the register 156 3 (not shown) are shifted into the register 156 2, and so on. There are n shift cycles, and during the nth shift cycle the data and address from the register 156 n (which are the data and address from the FIFO 154 n) are shifted onto the bus 176. The controller 152 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 156 1-156 n. Furthermore, if one of the registers 156 1-156 n is empty during a particular shift operation because its corresponding FIFO 154 1-154 n was empty when the controller 152 loaded the register, then the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 176.
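The serial shift of the write channels can be modeled in software as a list of staged address/data pairs that drain toward memory one cycle at a time, with empty stages bypassed; the container choices below are illustrative, and a std::map simply stands in for the addressed memory.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

// One staged write: the address/data pair held in a register stage.
struct StagedWrite {
    uint32_t address;
    uint32_t data;
};

// Models the serial shift of the write channels: on each cycle the pair in the
// first stage is written to memory and the later stages move forward. Stages
// whose FIFO was empty are skipped, shortening the operation.
void shiftOut(std::vector<std::optional<StagedWrite>> stages,
              std::map<uint32_t, uint32_t>& memory) {   // stands in for the output memory
    while (!stages.empty()) {
        if (stages.front()) {                            // bypass an empty register stage
            memory[stages.front()->address] = stages.front()->data;
        }
        // Shift: the pair in the second stage moves into the first, and so on.
        stages.erase(stages.begin());
    }
}
```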
  • Referring to FIGS. 5 and 6, according to an embodiment of the invention, the [0134] interface 144 is similar to the interface 142, and the interface 132 is also similar to the interface 142 except that the interface 132 includes only one write channel 150.
  • FIG. 7 is a block diagram of the [0135] interface 140 of FIG. 5 according to an embodiment of the invention. As discussed above in conjunction with FIG. 5, the interface 140 reads input data from the DPSRAM 100 and transfers this data to the hardwired pipelines 74 1-74 n. As discussed below, the structure of the interface 140 reduces or eliminates data “bottlenecks” and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • The [0136] interface 140 includes read channels 190 1-190 n, one channel for each hardwired pipeline 74 1-74 n (FIG. 5), and a controller 192. For purposes of illustration, the read channel 190 1 is discussed below, it being understood that the operation and structure of the other read channels 190 2-190 n are similar unless stated otherwise.
  • The [0137] channel 190 1 includes a FIFO 194 1 and an address/identifier (ID) register 196 1. As discussed below, the identifier identifies which of the pipelines 74 1-74 n requested the read of data from a particular location of the DPSRAM 100, and thus which pipeline is to receive the data.
  • The [0138] FIFO 194 1 includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 100 from which the pipeline 74 1 wishes to read the input data, and the other for storing the data read from the DPSRAM 100. Therefore, the FIFO 194 1 reduces or eliminates the bottleneck that may occur if the pipeline 74 1 had to “wait” to provide the read address to the channel 190 1 until the controller 192 finished reading previous data, or if the controller had to wait until the pipeline 74 1 retrieved the read data before the controller could read subsequent data.
  • The [0139] FIFO 194 1 receives the read address from the pipeline 74 1 via a bus 198 1 and provides the address and ID to the register 196 1 via a bus 200 1. Since the ID corresponds to the pipeline 74 1 and typically does not change, the FIFO 194 1 may store the ID and concatenate the ID with the address. Alternatively, the pipeline 74 1 may provide the ID to the FIFO 194 1 via the bus 198 1. Furthermore, the FIFO 194 1 receives a READ/WRITE FIFO signal from the pipeline 74 1 via a line 202 1, receives a CLOCK signal via a line 204 1, and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206 1. In addition, the FIFO 194 1 receives a WRITE/READ FIFO signal from the controller 192 via a line 208 1, and provides a FIFO EMPTY signal to the controller via a line 210 1. Moreover, the FIFO 194 1 receives the read data and the corresponding ID from the controller 192 via a bus 212, and provides this data to the pipeline 74 1 via a bus 214 1. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 198 1, 200 1, and 214 1 and the lines 202 1, 204 1, 206 1, 208 1, and 210 1 are preferably formed using local routing resources, and the bus 212 is typically formed using global routing resources.
  • The [0140] register 196 1 receives the address of the location to be read and the corresponding ID from the FIFO 194 1 via the bus 200 1, provides the address to the port 108 of the DPSRAM 100 (FIG. 5) via an address bus 216, and provides the ID to the controller 192 via a bus 218. Furthermore, the register 196 1 also receives the addresses and IDs from the registers 196 2-196 n via an address/ID bus 220 1, as discussed below. In addition, the register 196 1 receives a SHIFT/LOAD signal from the controller 192 via a line 222. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 216 is typically formed using global routing resources, and the busses 220 1-220 n-1 and the line 222 are preferably formed using local routing resources.
  • In addition to receiving the FIFO EMPTY signal, generating the WRITE/READ FIFO and SHIFT/LOAD signals, and providing the read data and corresponding ID, the [0141] controller 192 receives the data read from the port 108 of the DPSRAM 100 (FIG. 5) via a bus 224 and generates a READ DPSRAM signal on a line 226, which couples this signal to the port 108. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 224 and the line 226 are typically formed using global routing resources.
  • Still referring to FIG. 7, the operation of the [0142] interface 140 is discussed.
  • First, the [0143] FIFO 194 1 drives the FIFO FULL signal to the logic level corresponding to the current state (“full” or “not full”) of the FIFO relative to the read addresses. That is, if the FIFO 194 1 is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
  • Next, if the [0144] FIFO 194 1 is not full of read addresses and the pipeline 74 1 is ready for more input data to process, the pipeline drives the address of the data to be read onto the bus 198 1 and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO. As discussed above in conjunction with FIG. 5, the pipeline 74 1 gets the address from the input-data queue 122 via the sequence manager 148. If, however, the FIFO 194 1 is full of read addresses, the pipeline 74 1 waits until the FIFO is not full before loading the read address.
  • Then, the [0145] FIFO 194 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state (“empty” or “not empty”) of the FIFO relative to the read addresses. That is, if the FIFO 194 1 is loaded with at least one read address, it drives the logic level of FIFO EMPTY to one level, and if the FIFO is loaded with no read addresses, it drives the logic level of FIFO EMPTY to another level.
  • Next, if the [0146] FIFO 194 1 is not empty, the controller 192 asserts the WRITE/READ FIFO signal to the read logic level and drives the SHIFT/LOAD signal to the load logic level, thus loading the first loaded address and the ID from the FIFO into the register 196 1.
  • The channels [0147] 190 2-190 n operate in a similar manner such that the controller 192 respectively loads the first-loaded addresses and IDs from the FIFOs 194 2-194 n into the registers 196 2-196 n. If all of the FIFOs 194 2-194 n are empty, then the controller 192 waits for at least one of the FIFOs to receive an address before proceeding.
  • Then, the [0148] controller 192 drives the SHIFT/LOAD signal to the shift logic level and asserts the READ DPSRAM signal to serially shift the addresses and IDs from the registers 196 1-196 n onto the address and ID busses 216 and 218 and to serially read the data from the corresponding locations of the DPSRAM 100 via the bus 224.
  • Next, the [0149] controller 192 drives the received data and the corresponding ID onto the bus 212 (the ID allows each of the FIFOs 194 1-194 n to determine whether it is an intended recipient of the data), and drives the WRITE/READ FIFO signal to a write level, thus serially writing the data to the respective FIFOs 194 1-194 n.
  • Then, the hardwired pipelines [0150] 74 1-74 n sequentially assert their READ/WRITE FIFO signals to a read level and sequentially read the data via the busses 214 1-214 n.
  • Still referring to FIG. 7, a more detailed discussion of the data-read operation is presented. [0151]
  • During a first shift cycle, the [0152] controller 192 shifts the address and ID from the register 196 1 onto the busses 216 and 218, respectively, asserts READ DPSRAM, and thus reads the data from the corresponding location of the DPSRAM 100 via the bus 224 and reads the ID from the bus 218. Next, the controller 192 drives the WRITE/READ FIFO signal on the line 208 1 to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 194 1, the FIFO 194 1 recognizes the ID and thus loads the data from the bus 212 in response to the write level of the WRITE/READ FIFO signal. The remaining FIFOs 194 2-194 n do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 74 1 asserts the READ/WRITE FIFO signal on the line 202 1 to the read level and retrieves the read data via the bus 214 1. Also during the first shift cycle, the address and ID from the register 196 2 are shifted into the register 196 1, the address and ID from the register 196 3 (not shown) are shifted into the register 196 2, and so on. Alternatively, the controller 192 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208 1 to the write level. This eliminates the need for the controller 192 to send the ID to the FIFOs 194 1-194 n. In another alternative, the WRITE/READ FIFO signal may be only a read signal, and the FIFO 194 1 (as well as the other FIFOs 194 2-194 n) may load the data on the bus 212 when the ID on the bus 212 matches the ID of the FIFO 194 1. This eliminates the need for the controller 192 to generate a write signal.
  • During a second shift cycle, the address and ID from the [0153] register 196 1 are shifted onto the busses 216 and 218 such that the controller 192 reads data from the location of the DPSRAM 100 specified by the FIFO 194 2. Next, the controller 192 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 194 2, the FIFO 194 2 recognizes the ID and thus loads the data from the bus 212. The remaining FIFOs 194 1 and 194 3-194 n do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 74 2 asserts its READ/WRITE FIFO signal to the read level and retrieves the read data via the bus 214 2. Also during the second shift cycle, the address and ID from the register 196 2 are shifted into the register 196 1, the address and ID from the register 196 3 (not shown) are shifted into the register 196 2, and so on.
  • This continues for n shift cycles, i.e., until the address and ID from the register [0154] 196 n (which are the address and ID from the FIFO 194 n) are respectively shifted onto the busses 216 and 218. The controller 192 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 196 1-196 n. Furthermore, if one of the registers 196 1-196 n is empty during a particular shift operation because its corresponding FIFO 194 1-194 n is empty, then the controller 192 may bypass the empty register, and thus shorten the shift operation by avoiding shifting a null address onto the bus 216.
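The ID-matched return of read data can be modeled as below, assuming each staged request carries the address to read and the ID of the requesting channel, and only the matching channel's FIFO accepts the returned word; the names and types are illustrative.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A staged read request: the memory address plus the ID of the requesting
// pipeline's read channel.
struct ReadRequest {
    uint32_t address;
    uint32_t channelId;
};

// Models the read shift cycles: for each staged request, the controller reads
// the addressed word and drives it, with the ID, onto the shared return bus;
// only the channel whose ID matches accepts the word.
void serviceReads(const std::vector<ReadRequest>& staged,
                  const std::map<uint32_t, uint32_t>& memory,          // input-memory stand-in
                  std::vector<std::vector<uint32_t>>& channelData) {   // per-channel read FIFOs
    for (const ReadRequest& r : staged) {                              // one shift cycle each
        uint32_t word = memory.at(r.address);
        for (uint32_t id = 0; id < channelData.size(); ++id) {
            if (id == r.channelId) channelData[id].push_back(word);    // ID match: load the word
        }
    }
}
```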
  • Referring to FIGS. 5 and 7, according to an embodiment of the invention, the [0155] interface 146 is similar to the interface 140, and the interface 136 is also similar to the interface 140 except that the interface 136 includes only one read channel 190, and thus includes no ID circuitry.
  • FIG. 8 is a schematic block diagram of a pipeline unit [0156] 230 of FIG. 4 according to another embodiment of the invention. The pipeline unit 230 is similar to the pipeline unit 78 of FIG. 4 except that the pipeline unit 230 includes multiple pipeline circuits 80—here two pipeline circuits 80 a and 80 b. Increasing the number of pipeline circuits 80 typically allows an increase in the number n of hardwired pipelines 74 1-74 n, and thus an increase in the functionality of the pipeline unit 230 as compared to the pipeline unit 78.
  • In the pipeline unit [0157] 230 of FIG. 8, the services components, i.e., the communication interface 82, the controller 86, the exception manager 88, the configuration manager 90, and the optional industry-standard bus interface 91, are disposed on the pipeline circuit 80 a, and the pipelines 74 1-74 n and the communication shell 84 are disposed on the pipeline circuit 80 b. By locating the services components and the pipelines 74 1-74 n on separate pipeline circuits, one can include a higher number n of pipelines and/or more complex pipelines than he can where the service components and the pipelines are located on the same pipeline circuit. Alternatively, the portion of the communication shell 84 that interfaces the pipelines 74 1-74 n to the interface 82 and the controller 86 may be disposed on the pipeline circuit 80 a.
  • FIG. 9 is a schematic block diagram of the [0158] pipeline circuits 80 a and 80 b and the data memory 92 of the pipeline unit 230 of FIG. 8 according to an embodiment of the invention. Other than the pipeline components being disposed on two pipeline circuits, the structure and operation of the pipeline circuits 80 a and 80 b and the memory 92 of FIG. 9 are the same as for the pipeline circuit 80 and memory 92 of FIG. 5.
  • The preceding discussion is presented to enable a person skilled in the art to make and use the invention. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. [0159]

Claims (65)

What is claimed is:
1. A pipeline accelerator, comprising:
a memory; and
a hardwired-pipeline circuit coupled to the memory and operable to,
receive data,
load the data into the memory,
retrieve the data from the memory,
process the retrieved data, and
provide the processed data to an external source.
2. The pipeline accelerator of claim 1 wherein:
the memory is disposed on a first integrated circuit; and
the pipeline circuit is disposed on a second integrated circuit.
3. The pipeline accelerator of claim 1 wherein the pipeline circuit is disposed on a field-programmable gate array.
4. The pipeline accelerator of claim 1 wherein the pipeline circuit is operable to provide the processed data to the external source by:
loading the processed data into the memory,
retrieving the processed data from the memory; and
providing the retrieved processed data to the external source.
5. The pipeline accelerator of claim 1 wherein:
the external source comprises a processor; and
the pipeline circuit is operable to receive the data from the processor.
6. A computing machine, comprising:
a processor; and
a pipeline accelerator coupled to the processor and comprising,
a memory, and
a hardwired-pipeline circuit coupled to the memory and operable to,
receive data from the processor,
load the data into the memory,
retrieve the data from the memory,
process the retrieved data, and
provide the processed data to the processor.
7. A pipeline accelerator, comprising:
a memory; and
a hardwired-pipeline circuit coupled to the memory and operable to,
receive data,
process the received data,
load the processed data into the memory,
retrieve the processed data from the memory, and
provide the retrieved processed data to an external source.
8. A computing machine, comprising:
a processor; and
a pipeline accelerator coupled to the processor and comprising,
a memory, and
a hardwired-pipeline circuit coupled to the memory and operable to,
receive data from the processor,
process the received data,
load the processed data into the memory,
retrieve the processed data from the memory, and
provide the retrieved processed data to the processor.
9. A pipeline accelerator, comprising:
first and second memories; and
a hardwired-pipeline circuit coupled to the first and second memories and comprising,
an input-data handler operable to receive raw data from an external source and to load the raw data into the first memory,
a hardwired pipeline operable to process the raw data,
a pipeline interface operable to retrieve the raw data from the first memory, provide the retrieved raw data to the hardwired pipeline, and load processed data from the hardwired pipeline into the second memory, and
an output-data handler operable to retrieve the processed data from the second memory and to provide the processed data to the external source.
10. The pipeline accelerator of claim 9 wherein:
the first and second memories each include respective first and second ports;
the input-data handler is operable to load the raw data via the first port of the first memory,
the pipeline interface is operable to retrieve the raw data via the second port of the first memory and to load the processed data via the first port of the second memory, and
the output-data handler is operable to retrieve the processed data via the second port of the second memory.
11. The pipeline accelerator of claim 9, further comprising:
a third memory coupled to the hardwired-pipeline circuit;
wherein the hardwired pipeline is operable to generate intermediate data while processing the raw data; and
wherein the pipeline interface is operable to load the intermediate data into the third memory and to retrieve the intermediate data from the third memory.
12. The pipeline accelerator of claim 9 wherein:
the first and second memories are respectively disposed on first and second integrated circuits; and
the pipeline circuit is disposed on a field-programmable gate array.
13. The pipeline accelerator of claim 9, further comprising:
an input-data queue coupled to the input-data handler and the pipeline interface,
wherein the input-data handler is operable to load into the input-data queue a pointer to a location of the raw data within the first memory; and
wherein the pipeline interface is operable to retrieve the raw data from the location using the pointer.
14. The pipeline accelerator of claim 9, further comprising:
an output-data queue coupled to the output-data handler and the pipeline interface;
wherein the pipeline interface is operable to load into the output-data queue a pointer to a location of the processed data within the second memory; and
wherein the output-data handler is operable to retrieve the processed data from the location using the pointer.
15. The pipeline accelerator of claim 9, further comprising:
wherein each of the input-data handler, hardwired pipeline, pipeline interface, and output-data handler has a respective operating configuration; and
a configuration manager coupled to and operable to set the operating configurations of the input-data handler, hardwired pipeline, pipeline interface, and output-data handler.
16. The pipeline accelerator of claim 9, further comprising:
wherein each of the input-data handler, hardwired pipeline, pipeline interface, and output-data handler has a respective operating status; and
an exception manager coupled to and operable to identify an exception in the input-data handler, hardwired pipeline, pipeline interface, or output-data handler in response to the operating statuses.
17. A pipeline accelerator, comprising:
a hardwired pipeline operable to process data; and
an input-data handler coupled to the hardwired pipeline and operable to,
receive the data,
determine whether the data is directed to the hardwired pipeline, and
provide the data to the hardwired pipeline if the data is directed to the hardwired pipeline.
18. The pipeline accelerator of claim 17 wherein the input-data handler is further operable to:
receive the data by,
receiving a message that includes a header and the data, and
extracting the data from the message; and
determine whether the data is directed to the hardwired pipeline by analyzing the header.
19. The pipeline accelerator of claim 17 wherein the hardwired pipeline and the input-data handler are disposed on a single field-programmable gate array.
20. The pipeline accelerator of claim 17 wherein the hardwired pipeline and the input-data handler are disposed on respective field-programmable gate arrays.
21. A computing machine, comprising:
a processor; and
a pipeline accelerator coupled to the processor and comprising,
a hardwired pipeline operable to process data, and
an input-data handler coupled to the hardwired pipeline and operable to,
receive the data from the processor,
determine whether the data is directed to the hardwired pipeline, and
provide the data to the hardwired pipeline if the data is directed to the hardwired pipeline.
22. A pipeline accelerator, comprising:
a hardwired pipeline operable to generate data; and
an output-data handler coupled to the hardwired pipeline and operable to,
receive the data,
determine a destination of the data, and
provide the data to the destination.
23. The pipeline accelerator of claim 22 wherein the output-data handler is further operable to:
determine the destination of the data by,
identifying a type of the data, and
determining the destination based on the type of the data; and
provide the data to the destination by,
generating a message that identifies the destination and that includes the data, and
providing the message to the destination.
24. A computing machine, comprising:
a processor operable to execute threads of an application; and
a pipeline accelerator coupled to the processor and comprising:
a hardwired pipeline operable to generate data, and
an output-data handler coupled to the hardwired pipeline and operable to,
receive the data,
identify a thread of the application that subscribes to the data, and
provide the data to the subscribing thread.
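For illustration only, a Python sketch of the subscription model of claim 24: pipeline output is delivered to the queue of the application thread that subscribes to it. The topic name and single-subscriber table are assumptions for the example.

    import queue
    import threading

    # Illustrative only: a hypothetical subscription table mapping a data topic
    # to the queue of the application thread that subscribes to it.
    subscriptions = {"beamformed_samples": queue.Queue()}

    def deliver(topic: str, data: bytes):
        """Model of the output-data handler providing data to the subscribing thread."""
        subscriptions[topic].put(data)

    def subscriber():
        print("thread received", subscriptions["beamformed_samples"].get())

    t = threading.Thread(target=subscriber)
    t.start()
    deliver("beamformed_samples", b"\x10\x20")
    t.join()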
25. A pipeline accelerator, comprising:
a hardwired pipeline operable to process data values; and
a sequence manager coupled to and operable to control the operation of the hardwired pipeline.
26. The pipeline accelerator of claim 25 wherein the sequence manager is operable to control an order in which the hardwired pipeline receives the data values.
27. The pipeline accelerator of claim 25 wherein the sequence manager is further operable to:
receive an event; and
control the hardwired pipeline in response to the event.
28. The pipeline accelerator of claim 25 wherein the sequence manager is further operable to:
receive a synchronization signal; and
control the operation of the hardwired pipeline in response to the synchronization signal.
29. The pipeline accelerator of claim 25 wherein the sequence manager is further operable to:
sense an occurrence relative to the hardwired pipeline; and
generate an event in response to the occurrence.
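For illustration only, a Python sketch of the sequence-manager behavior of claims 25 through 29: data values are released to the pipeline in a controlled order, operation is gated on a synchronization signal, and an event is generated when a pass completes. The ordering policy and event mechanism are assumptions for the example.

    import threading

    class SequenceManager:
        def __init__(self, pipeline):
            self.pipeline = pipeline
            self.sync = threading.Event()        # stands in for a synchronization signal

        def on_sync(self):                       # called when the synchronization signal arrives
            self.sync.set()

        def run(self, data_values):
            self.sync.wait()                     # control operation in response to the signal
            for value in sorted(data_values):    # assumed ordering policy for the example
                self.pipeline.process(value)
            print("event: pipeline pass complete")   # generate an event for the occurrence

    class DummyPipeline:
        def process(self, value):
            print("processing", value)

    mgr = SequenceManager(DummyPipeline())
    mgr.on_sync()
    mgr.run([3, 1, 2])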
30. A computing machine, comprising:
a processor operable to generate data and an event; and
a pipeline accelerator coupled to the processor and comprising,
a hardwired pipeline operable to receive the data from the processor and process the received data; and
a sequence manager coupled to the hardwired pipeline and operable to receive the event from the processor and to control the operation of the hardwired pipeline in response to the event.
31. A pipeline accelerator, comprising:
a hardwired-pipeline circuit having an operating configuration and operable to process data; and
a configuration manager coupled to the hardwired-pipeline circuit and operable to set the operating configuration.
32. The pipeline accelerator of claim 31 wherein:
the hardwired-pipeline circuit includes a configuration register; and
the configuration manager is operable to set the operating configuration by loading a configuration value into the configuration register.
33. The pipeline accelerator of claim 32 wherein the configuration manager is operable to receive the configuration value from an external source.
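For illustration only, a Python sketch of claims 31 through 33: a configuration manager sets the operating configuration of a hardwired-pipeline circuit by loading a configuration value into its configuration register. The register model and configuration word are assumptions for the example.

    class HardwiredPipelineCircuit:
        def __init__(self):
            self.configuration_register = 0      # stands in for the circuit's configuration register

    class ConfigurationManager:
        def __init__(self, circuit):
            self.circuit = circuit

        def set_operating_configuration(self, configuration_value: int):
            # Set the operating configuration by loading the value into the register.
            self.circuit.configuration_register = configuration_value

    circuit = HardwiredPipelineCircuit()
    ConfigurationManager(circuit).set_operating_configuration(0x0000_001F)
    print(hex(circuit.configuration_register))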
34. A computing machine, comprising:
a processor operable to generate data and a configuration value; and
a pipeline accelerator coupled to the processor and comprising,
a hardwired-pipeline circuit having an operating configuration and operable to process the data, and
a configuration manager coupled to the hardwired-pipeline circuit and operable to set the operating configuration in response to the configuration value.
35. A pipeline accelerator, comprising:
a hardwired-pipeline circuit having an operating status and operable to process data; and
an exception manager coupled to the hardwired-pipeline circuit and operable to identify an exception in the operating status of the hardwired-pipeline circuit in response to the operating status.
36. The pipeline accelerator of claim 35 wherein:
the hardwired-pipeline circuit is operable to generate a status value that represents the operating status; and
the exception manager is operable to identify the exception in response to the status value.
37. The pipeline accelerator of claim 36 wherein:
the hardwired-pipeline circuit includes a status register that is operable to store the status value; and
the exception manager receives the status value from the status register.
38. The pipeline accelerator of claim 35 wherein the exception manager is operable to report an exception in the operating status of the hardwired-pipeline circuit to an external source.
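For illustration only, a Python sketch of the exception-manager behavior of claims 35 through 38: exceptions are identified from a status value held in a status register and reported outward. The bit assignments of the status word are assumptions for the example.

    # Illustrative only: the status-register encoding is an assumption.
    OVERFLOW_BIT = 0x1
    PARITY_BIT = 0x2

    def identify_exceptions(status_value: int):
        """Model of the exception manager: derive exceptions from the status value."""
        exceptions = []
        if status_value & OVERFLOW_BIT:
            exceptions.append("pipeline overflow")
        if status_value & PARITY_BIT:
            exceptions.append("memory parity error")
        return exceptions

    status_register = 0x3                 # stands in for the circuit's status register
    for exc in identify_exceptions(status_register):
        print("report to external source:", exc)     # e.g., notify the host processor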
39. A computing machine, comprising:
a processor operable to generate data; and
a pipeline accelerator, comprising,
a hardwired-pipeline circuit having an operating status and operable to process data and to generate a status value that represents the operating status, and
an exception manager coupled to the hardwired-pipeline circuit and operable to identify an exception in the operating status of the hardwired-pipeline circuit in response to the status value and to notify the processor of the exception.
40. A computing machine, comprising:
a pipeline accelerator, comprising,
a hardwired-pipeline circuit having an operating status and operable to process data, and
an exception manager coupled to the hardwired-pipeline circuit and operable to generate a status value that represents the operating status; and
a processor coupled to the pipeline accelerator and operable to generate the data, to receive the status value, and to determine whether the hardwired-pipeline circuit is malfunctioning by analyzing the status value.
41. A method, comprising:
loading data into a memory;
retrieving the data from the memory;
processing the retrieved data with a hardwired-pipeline circuit; and
providing the processed data to an external source.
42. The method of claim 41 wherein providing the processed data comprises:
loading the processed data into the memory;
retrieving the processed data from the memory; and
providing the retrieved processed data to the external source.
43. A method, comprising:
processing data with a hardwired-pipeline circuit;
loading the processed data into a memory;
retrieving the processed data from the memory; and
providing the retrieved processed data to an external source.
44. A method, comprising:
loading raw data from an external source into a first memory;
retrieving the raw data from the first memory;
processing the retrieved data with a hardwired pipeline;
loading the processed data from the hardwired pipeline into a second memory; and
providing the processed data from the second memory to the external source.
45. The method of claim 44 wherein:
loading the raw data comprises loading the raw data via a first port of the first memory;
retrieving the raw data comprises retrieving the raw data via a second port of the first memory;
loading the processed data comprises loading the processed data via a first port of the second memory; and
providing the processed data comprises retrieving the processed data via a second port of the second memory.
46. The method of claim 44, further comprising:
generating intermediate data with the hardwired pipeline in response to processing the raw data;
loading the intermediate data into a third memory; and
providing the intermediate data from the third memory back to the hardwired pipeline.
47. The method of claim 44, further comprising:
loading into an input-message queue a pointer to a location of the raw data within the first memory; and
wherein retrieving the raw data comprises retrieving the raw data from the location using the pointer.
48. The method of claim 44, further comprising:
loading into an output-message queue a pointer to a location of the processed data within the second memory; and
wherein retrieving the processed data comprises retrieving the processed data from the location using the pointer.
49. The method of claim 44, further comprising setting parameters for loading and retrieving the raw data, processing the retrieved data, and loading and providing the processed data.
50. The method of claim 44, further comprising determining whether an error occurs during the loading and retrieving of the raw data, the processing of the retrieved data, and the loading and providing of the processed data.
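For illustration only, a Python sketch of the method of claims 44, 47, and 48: raw data is loaded into a first memory, processed data is loaded into a second memory, and input- and output-message queues carry pointers to the data locations rather than the data itself. Modeling the memories as lists and the pointers as indices is an assumption for the example.

    from collections import deque

    first_memory, second_memory = [], []
    input_queue, output_queue = deque(), deque()

    def load_raw(data):                              # load raw data into the first memory
        first_memory.append(data)
        input_queue.append(len(first_memory) - 1)    # queue a pointer to its location

    def pipeline_pass(process):
        ptr = input_queue.popleft()                  # retrieve the raw data using the pointer
        processed = process(first_memory[ptr])
        second_memory.append(processed)              # load processed data into the second memory
        output_queue.append(len(second_memory) - 1)

    def provide_processed():
        ptr = output_queue.popleft()                 # retrieve processed data using the pointer
        return second_memory[ptr]

    load_raw([1, 2, 3])
    pipeline_pass(lambda values: [v * 2 for v in values])
    print(provide_processed())                       # -> [2, 4, 6]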
51. A method, comprising:
receiving data;
determining whether the data is directed to a hardwired pipeline; and
providing the data to the hardwired pipeline if the data is directed to the hardwired pipeline.
52. The method of claim 51 wherein:
receiving the data comprises,
receiving a message that includes a header and the data, and
extracting the data from the message; and
determining whether the data is directed to the hardwired pipeline comprises analyzing the header.
53. A method, comprising:
generating data with a hardwired pipeline;
determining a destination of the data; and
providing the data to the destination.
54. The method of claim 53 wherein:
determining the destination of the data comprises,
identifying a type of the data, and
determining the destination based on the type of the data; and
providing the data to the destination comprises,
generating a message that identifies the destination and that includes the data, and
providing the message to the destination.
55. A method, comprising:
processing data values with a hardwired pipeline; and
sequencing the operation of the hardwired pipeline.
56. The method of claim 55 wherein sequencing the operation comprises sequencing an order in which the hardwired pipeline processes the data values.
57. The method of claim 55 wherein sequencing the operation comprises synchronizing the operation of the hardwired pipeline to a synchronization signal.
58. The method of claim 55, further comprising:
sensing a predefined occurrence during operation of the hardwired pipeline; and
generating an event in response to the occurrence.
59. A method, comprising:
loading a configuration value into a register; and
setting an operating configuration of a hardwired pipeline with the configuration value.
60. A method, comprising:
processing data with a hardwired pipeline; and
identifying an error in the processed data by analyzing an operating status of the hardwired pipeline.
61. A method for designing a hardwired-pipeline circuit, comprising:
retrieving from a library a first data representation of a communication interface;
generating a second data representation of a hardwired pipeline that is to be coupled to the communication interface; and
combining the first and second data representations to generate hard-configuration data for the hardwired-pipeline circuit.
62. The method of claim 61, further comprising modifying the first data representation by selecting values for predetermined parameters of the communication interface before combining the first and second data representations.
63. The method of claim 61 wherein the communication interface is operable to allow the hardwired-pipeline circuit to communicate with another circuit.
64. The method of claim 61 wherein combining the first and second data representations comprises compiling the first and second data representations into the hard-configuration data.
65. The method of claim 61 wherein the hard-configuration data comprises firmware.
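For illustration only, a Python sketch of the design method of claims 61 through 65: a library representation of a communication interface is retrieved, parameterized, and combined with a representation of the application-specific hardwired pipeline to produce hard-configuration data. The library contents, parameter names, and compile step are hypothetical stand-ins; in practice the representations would be HDL and the output would be FPGA configuration firmware.

    # Illustrative only: a toy model of the design flow.
    LIBRARY = {"communication_interface": "entity comm_if is ... end comm_if;"}

    def retrieve_interface(name: str, clock_mhz: int) -> str:
        # Select values for predetermined parameters before combining (cf. claim 62).
        return LIBRARY[name] + f"\n-- generic: CLOCK_MHZ := {clock_mhz}"

    def combine(interface_repr: str, pipeline_repr: str) -> bytes:
        # Stand-in for compiling the combined representations into hard-configuration data.
        return (interface_repr + "\n" + pipeline_repr).encode()

    pipeline_repr = "entity my_pipeline is ... end my_pipeline;"
    hard_configuration = combine(retrieve_interface("communication_interface", 100), pipeline_repr)
    print(len(hard_configuration), "bytes of (mock) configuration data")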
US10/683,929 2002-10-31 2003-10-09 Pipeline accelerator for improved computing architecture and related system and method Abandoned US20040136241A1 (en)

Priority Applications (37)

Application Number Priority Date Filing Date Title
US10/683,929 US20040136241A1 (en) 2002-10-31 2003-10-09 Pipeline accelerator for improved computing architecture and related system and method
KR1020057007751A KR101012745B1 (en) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
AU2003287320A AU2003287320B2 (en) 2002-10-31 2003-10-31 Pipeline accelerator and related system and method
CA002503620A CA2503620A1 (en) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
CA2503613A CA2503613C (en) 2002-10-31 2003-10-31 Pipeline accelerator having multiple pipeline units and related computing machine and method
EP03781554A EP1559005A2 (en) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method
DE60318105T DE60318105T2 (en) 2002-10-31 2003-10-31 PIPELINE COPROCESSOR
AU2003287321A AU2003287321B2 (en) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method
AU2003287319A AU2003287319B2 (en) 2002-10-31 2003-10-31 Pipeline coprocessor
CA2503622A CA2503622C (en) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method
KR1020057007752A KR100996917B1 (en) 2002-10-31 2003-10-31 Pipeline accelerator having multiple pipeline units and related computing machine and method
AU2003287318A AU2003287318B2 (en) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
JP2005502222A JP2006515941A (en) 2002-10-31 2003-10-31 Pipeline accelerator having multiple pipeline units, associated computing machine, and method
KR1020057007749A KR101062214B1 (en) 2002-10-31 2003-10-31 Computing machine and related systems and methods with improved computing architecture
KR1020057007748A KR101035646B1 (en) 2002-10-31 2003-10-31 Pipeline coprocessor
PCT/US2003/034559 WO2004042574A2 (en) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method
CA2503611A CA2503611C (en) 2002-10-31 2003-10-31 Peer-vector system utilizing a host processor and pipeline accelerator
EP03781551A EP1576471A2 (en) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
ES03781552T ES2300633T3 (en) 2002-10-31 2003-10-31 Pipeline coprocessor
EP03781552A EP1570344B1 (en) 2002-10-31 2003-10-31 Pipeline coprocessor
PCT/US2003/034557 WO2004042560A2 (en) 2002-10-31 2003-10-31 Pipeline coprocessor
CA002503617A CA2503617A1 (en) 2002-10-31 2003-10-31 Pipeline accelerator for improved computing architecture and related system and method
EP03781553A EP1573515A2 (en) 2002-10-31 2003-10-31 Pipeline accelerator and related system and method
JP2005502224A JP2006518057A (en) 2002-10-31 2003-10-31 Improved computational architecture, related systems, and methods
KR1020057007750A KR101012744B1 (en) 2002-10-31 2003-10-31 Pipeline accelerator for improved computing architecture and related system and method
JP2005502225A JP2006518058A (en) 2002-10-31 2003-10-31 Pipeline accelerator, related system and method for improved computing architecture
JP2005502223A JP2006518056A (en) 2002-10-31 2003-10-31 Programmable circuit, related computing machine, and method
PCT/US2003/034556 WO2004042569A2 (en) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
PCT/US2003/034555 WO2004042561A2 (en) 2002-10-31 2003-10-31 Pipeline accelerator having multiple pipeline units and related computing machine and method
AU2003287317A AU2003287317B2 (en) 2002-10-31 2003-10-31 Pipeline accelerator having multiple pipeline units and related computing machine and method
EP03781550A EP1573514A2 (en) 2002-10-31 2003-10-31 Pipeline accelerator and related computer and method
JP2005502226A JP2006518495A (en) 2002-10-31 2003-10-31 Computer machine, improved system and method with improved computing architecture
JP2011070196A JP5568502B2 (en) 2002-10-31 2011-03-28 Programmable circuit, related computing machine, and method
JP2011071988A JP2011170868A (en) 2002-10-31 2011-03-29 Pipeline accelerator for improved computing architecture, and related system and method
JP2011081733A JP2011175655A (en) 2002-10-31 2011-04-01 Pipeline accelerator including multiple pipeline units, related computing machine, and method
JP2011083371A JP2011154711A (en) 2002-10-31 2011-04-05 Improved computing architecture, related system and method
JP2013107858A JP5688432B2 (en) 2002-10-31 2013-05-22 Programmable circuit, related computing machine, and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US42250302P 2002-10-31 2002-10-31
US10/683,929 US20040136241A1 (en) 2002-10-31 2003-10-09 Pipeline accelerator for improved computing architecture and related system and method

Publications (1)

Publication Number Publication Date
US20040136241A1 true US20040136241A1 (en) 2004-07-15

Family

ID=32685118

Family Applications (7)

Application Number Title Priority Date Filing Date
US10/683,929 Abandoned US20040136241A1 (en) 2002-10-31 2003-10-09 Pipeline accelerator for improved computing architecture and related system and method
US10/684,102 Expired - Fee Related US7418574B2 (en) 2002-10-31 2003-10-09 Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction
US10/684,053 Expired - Fee Related US7987341B2 (en) 2002-10-31 2003-10-09 Computing machine using software objects for transferring data that includes no destination information
US10/684,067 Active 2024-08-06 US7061485B2 (en) 2002-10-31 2003-10-09 Method and system for producing a model from optical images
US10/683,932 Expired - Fee Related US7386704B2 (en) 2002-10-31 2003-10-09 Pipeline accelerator including pipeline circuits in communication via a bus, and related system and method
US10/684,057 Expired - Fee Related US7373432B2 (en) 2002-10-31 2003-10-09 Programmable circuit and related computing machine and method
US12/151,116 Expired - Fee Related US8250341B2 (en) 2002-10-31 2008-05-02 Pipeline accelerator having multiple pipeline units and related computing machine and method

Family Applications After (6)

Application Number Title Priority Date Filing Date
US10/684,102 Expired - Fee Related US7418574B2 (en) 2002-10-31 2003-10-09 Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction
US10/684,053 Expired - Fee Related US7987341B2 (en) 2002-10-31 2003-10-09 Computing machine using software objects for transferring data that includes no destination information
US10/684,067 Active 2024-08-06 US7061485B2 (en) 2002-10-31 2003-10-09 Method and system for producing a model from optical images
US10/683,932 Expired - Fee Related US7386704B2 (en) 2002-10-31 2003-10-09 Pipeline accelerator including pipeline circuits in communication via a bus, and related system and method
US10/684,057 Expired - Fee Related US7373432B2 (en) 2002-10-31 2003-10-09 Programmable circuit and related computing machine and method
US12/151,116 Expired - Fee Related US8250341B2 (en) 2002-10-31 2008-05-02 Pipeline accelerator having multiple pipeline units and related computing machine and method

Country Status (3)

Country Link
US (7) US20040136241A1 (en)
JP (1) JP5688432B2 (en)
TW (1) TWI323855B (en)

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060085781A1 (en) * 2004-10-01 2006-04-20 Lockheed Martin Corporation Library for computer-based tool and related system and method
US20060265927A1 (en) * 2004-10-29 2006-11-30 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US20060288350A1 (en) * 2005-06-20 2006-12-21 Microsoft Corporation Multi-thread multimedia processing
US20080147890A1 (en) * 2006-10-10 2008-06-19 International Business Machines Corporation Facilitating access to status and measurement data associated with input/output processing
US20080147889A1 (en) * 2006-10-10 2008-06-19 International Business Machines Corporation Facilitating input/output processing by using transport control words to reduce input/output communications
CN100412790C (en) * 2005-03-07 2008-08-20 富士通株式会社 Microprocessor
US20090210576A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US20090210561A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to perform system changes in an input/output processing system
US20090210583A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Reserved device access contention reduction
US20090210560A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Cancel instruction and command for determining the state of an i/o operation
US20090210884A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to determine compatability in an input/output processing system
US20090210768A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Exception condition handling at a channel subsystem in an i/o processing system
US20090210585A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to suspend operations in an input/output processing log-out system
US20090210563A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Providing indirect data addressing for a control block at a channel subsystem of an i/o processing system
US20090210579A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Open exchange limiting in an i/o processing system
US20090210571A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to monitor input/output operations
US20090210570A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Extended measurement word determination at a channel subsystem of an i/o processing system
US20090210572A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Computer command and response for determining the state of an i/o operation
US20090210769A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Multiple crc insertion in an output data stream
US20090210559A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing a variable length device command word at a control unit in an i/o processing system
US20090210584A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Exception condition determination at a control unit in an i/o processing system
US20090210581A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Bi-directional data transfer within a single i/o operation
US20090210562A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing communication data in a ships passing condition
US20090210580A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Early termination of an i/o operation in an i/o processing system
US20090210557A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Determining extended capability of a channel path
US20090210573A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Computer command and response for determining the state of an i/o operation
US20100030918A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program chain linked branching
US20100030920A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program chain linking
US20100030919A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program message pairing
US20100046177A1 (en) * 2008-06-18 2010-02-25 Lockheed Martin Corporation Enclosure assembly housing at least one electronic board assembly and systems using same
US20100046175A1 (en) * 2008-06-18 2010-02-25 Lockheed Martin Corporation Electronics module, enclosure assembly housing same, and related systems and methods
US20110147762A1 (en) * 2003-03-03 2011-06-23 Sheppard Scott T Integrated Nitride and Silicon Carbide-Based Devices
US7984581B2 (en) 2004-10-29 2011-07-26 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US7987341B2 (en) 2002-10-31 2011-07-26 Lockheed Martin Corporation Computing machine using software objects for transferring data that includes no destination information
US20110185078A1 (en) * 2003-06-25 2011-07-28 Microsoft Corporation Media scrubbing using a media processor
US8001298B2 (en) 2008-02-14 2011-08-16 International Business Machines Corporation Providing extended measurement data in an I/O processing system
US8312176B1 (en) 2011-06-30 2012-11-13 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8332542B2 (en) 2009-11-12 2012-12-11 International Business Machines Corporation Communication with input/output system devices
US8346978B1 (en) 2011-06-30 2013-01-01 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8364853B2 (en) 2011-06-01 2013-01-29 International Business Machines Corporation Fibre channel input/output data routing system and method
US8364854B2 (en) 2011-06-01 2013-01-29 International Business Machines Corporation Fibre channel input/output data routing system and method
US8473641B2 (en) 2011-06-30 2013-06-25 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8549185B2 (en) 2011-06-30 2013-10-01 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8583989B2 (en) 2011-06-01 2013-11-12 International Business Machines Corporation Fibre channel input/output data routing system and method
US8677027B2 (en) 2011-06-01 2014-03-18 International Business Machines Corporation Fibre channel input/output data routing system and method
US8683083B2 (en) 2011-06-01 2014-03-25 International Business Machines Corporation Fibre channel input/output data routing system and method
US20140368236A1 (en) * 2013-06-13 2014-12-18 Altera Corporation Multiple-voltage programmable logic fabric
US8918542B2 (en) 2013-03-15 2014-12-23 International Business Machines Corporation Facilitating transport mode data transfer between a channel subsystem and input/output devices
US8990439B2 (en) 2013-05-29 2015-03-24 International Business Machines Corporation Transport mode data transfer between a channel subsystem and input/output devices
US9021155B2 (en) 2011-06-01 2015-04-28 International Business Machines Corporation Fibre channel input/output data routing including discarding of data transfer requests in response to error detection
US20160103707A1 (en) * 2014-10-10 2016-04-14 Futurewei Technologies, Inc. System and Method for System on a Chip
US10496622B2 (en) 2015-10-09 2019-12-03 Futurewei Technologies, Inc. System and method for real-time data warehouse
US10783160B2 (en) 2015-10-09 2020-09-22 Futurewei Technologies, Inc. System and method for scalable distributed real-time data warehouse
WO2021168145A1 (en) * 2020-02-21 2021-08-26 Pensando Systems Inc. Methods and systems for processing data in a programmable data processing pipeline that includes out-of-pipeline processing

Families Citing this family (144)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006526227A (en) 2003-05-23 2006-11-16 ワシントン ユニヴァーシティー Intelligent data storage and processing using FPGA devices
US6906554B1 (en) * 2003-12-16 2005-06-14 Faraday Technology Corp. Pipeline-based circuit with a postponed clock-gating mechanism for reducing power consumption and related driving method thereof
US7409670B1 (en) * 2004-04-01 2008-08-05 Altera Corporation Scheduling logic on a programmable device implemented using a high-level language
US7370311B1 (en) 2004-04-01 2008-05-06 Altera Corporation Generating components on a programmable device using a high-level language
WO2006039713A2 (en) * 2004-10-01 2006-04-13 Lockheed Martin Corporation Configurable computing machine and related systems and methods
US20060092178A1 (en) * 2004-10-29 2006-05-04 Tanguay Donald O Jr Method and system for communicating through shared media
US20070038059A1 (en) * 2005-07-07 2007-02-15 Garrett Sheffer Implant and instrument morphing
US7596636B2 (en) * 2005-09-23 2009-09-29 Joseph Gormley Systems and methods for implementing a vehicle control and interconnection system
US7590768B2 (en) * 2005-09-23 2009-09-15 Joseph Gormley Control and interconnection system
US7346863B1 (en) 2005-09-28 2008-03-18 Altera Corporation Hardware acceleration of high-level language code sequences on programmable devices
FR2895106A1 (en) * 2005-12-20 2007-06-22 Thomson Licensing Sas METHOD FOR DOWNLOADING A CONFIGURATION FILE IN A PROGRAMMABLE CIRCUIT, AND APPARATUS COMPRISING SAID COMPONENT.
US7813591B2 (en) * 2006-01-20 2010-10-12 3M Innovative Properties Company Visual feedback of 3D scan parameters
US8035637B2 (en) * 2006-01-20 2011-10-11 3M Innovative Properties Company Three-dimensional scan recovery
US7698014B2 (en) * 2006-01-20 2010-04-13 3M Innovative Properties Company Local enforcement of accuracy in fabricated models
US8446410B2 (en) * 2006-05-11 2013-05-21 Anatomage Inc. Apparatus for generating volumetric image and matching color textured external surface
US7921046B2 (en) 2006-06-19 2011-04-05 Exegy Incorporated High speed processing of financial information using FPGA devices
US7840482B2 (en) 2006-06-19 2010-11-23 Exegy Incorporated Method and system for high speed options pricing
US9395968B1 (en) 2006-06-30 2016-07-19 American Megatrends, Inc. Uniquely identifying and validating computer system firmware
US7797696B1 (en) 2006-06-30 2010-09-14 American Megatrends, Inc. Dynamically updating a computer system and firmware image utilizing an option read only memory (OPROM) data structure
US7590835B1 (en) * 2006-06-30 2009-09-15 American Megatrends, Inc. Dynamically updating a computer system firmware image
US7856545B2 (en) * 2006-07-28 2010-12-21 Drc Computer Corporation FPGA co-processor for accelerated computation
US7856546B2 (en) * 2006-07-28 2010-12-21 Drc Computer Corporation Configurable processor module accelerator using a programmable logic device
US20080092146A1 (en) * 2006-10-10 2008-04-17 Paul Chow Computing machine
TW200817991A (en) * 2006-10-13 2008-04-16 Etrovision Technology Interface of a storage device and storage device with the interface
US8537167B1 (en) * 2006-10-17 2013-09-17 Nvidia Corporation Method and system for using bundle decoders in a processing pipeline
US7660793B2 (en) 2006-11-13 2010-02-09 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US8326819B2 (en) 2006-11-13 2012-12-04 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
US8289966B1 (en) 2006-12-01 2012-10-16 Synopsys, Inc. Packet ingress/egress block and system and method for receiving, transmitting, and managing packetized data
US8706987B1 (en) 2006-12-01 2014-04-22 Synopsys, Inc. Structured block transfer module, system architecture, and method for transferring
US10298834B2 (en) 2006-12-01 2019-05-21 Google Llc Video refocusing
US8127113B1 (en) * 2006-12-01 2012-02-28 Synopsys, Inc. Generating hardware accelerators and processor offloads
US7539953B1 (en) * 2006-12-05 2009-05-26 Xilinx, Inc. Method and apparatus for interfacing instruction processors and logic in an electronic circuit modeling system
US8694328B1 (en) 2006-12-14 2014-04-08 Joseph Gormley Vehicle customization and personalization activities
US7984272B2 (en) * 2007-06-27 2011-07-19 International Business Machines Corporation Design structure for single hot forward interconnect scheme for delayed execution pipelines
US10229453B2 (en) 2008-01-11 2019-03-12 Ip Reservoir, Llc Method and system for low latency basket calculation
US20090245092A1 (en) * 2008-03-28 2009-10-01 Qualcomm Incorporated Apparatus, processes, and articles of manufacture for fast fourier transformation and beacon searching
EP2286347A4 (en) * 2008-06-04 2012-03-07 Nec Lab America Inc System and method for parallelizing and accelerating learning machine training and classification using a massively parallel accelerator
US8145749B2 (en) * 2008-08-11 2012-03-27 International Business Machines Corporation Data processing in a hybrid computing environment
US8531450B2 (en) * 2008-08-28 2013-09-10 Adobe Systems Incorporated Using two dimensional image adjustment operations on three dimensional objects
US7984267B2 (en) * 2008-09-04 2011-07-19 International Business Machines Corporation Message passing module in hybrid computing system starting and sending operation information to service program for accelerator to execute application program
US8141102B2 (en) 2008-09-04 2012-03-20 International Business Machines Corporation Data processing in a hybrid computing environment
US8230442B2 (en) * 2008-09-05 2012-07-24 International Business Machines Corporation Executing an accelerator application program in a hybrid computing environment
JP5293062B2 (en) * 2008-10-03 2013-09-18 富士通株式会社 Computer apparatus, memory diagnosis method, and memory diagnosis control program
US20120095893A1 (en) 2008-12-15 2012-04-19 Exegy Incorporated Method and apparatus for high-speed processing of financial market depth data
US8527734B2 (en) * 2009-01-23 2013-09-03 International Business Machines Corporation Administering registered virtual addresses in a hybrid computing environment including maintaining a watch list of currently registered virtual addresses by an operating system
US9286232B2 (en) * 2009-01-26 2016-03-15 International Business Machines Corporation Administering registered virtual addresses in a hybrid computing environment including maintaining a cache of ranges of currently registered virtual addresses
US8843880B2 (en) * 2009-01-27 2014-09-23 International Business Machines Corporation Software development for a hybrid computing environment
US8255909B2 (en) * 2009-01-28 2012-08-28 International Business Machines Corporation Synchronizing access to resources in a hybrid computing environment
US8001206B2 (en) * 2009-01-29 2011-08-16 International Business Machines Corporation Broadcasting data in a hybrid computing environment
US9170864B2 (en) * 2009-01-29 2015-10-27 International Business Machines Corporation Data processing in a hybrid computing environment
US20100191923A1 (en) * 2009-01-29 2010-07-29 International Business Machines Corporation Data Processing In A Computing Environment
US8010718B2 (en) * 2009-02-03 2011-08-30 International Business Machines Corporation Direct memory access in a hybrid computing environment
US8037217B2 (en) * 2009-04-23 2011-10-11 International Business Machines Corporation Direct memory access in a hybrid computing environment
US8180972B2 (en) * 2009-08-07 2012-05-15 International Business Machines Corporation Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally
US8478965B2 (en) * 2009-10-30 2013-07-02 International Business Machines Corporation Cascaded accelerator functions
US9417905B2 (en) * 2010-02-03 2016-08-16 International Business Machines Corporation Terminating an accelerator application program in a hybrid computing environment
US8578132B2 (en) 2010-03-29 2013-11-05 International Business Machines Corporation Direct injection of data to be transferred in a hybrid computing environment
US9015443B2 (en) 2010-04-30 2015-04-21 International Business Machines Corporation Reducing remote reads of memory in a hybrid computing environment
US20110307661A1 (en) * 2010-06-09 2011-12-15 International Business Machines Corporation Multi-processor chip with shared fpga execution unit and a design structure thereof
US10037568B2 (en) 2010-12-09 2018-07-31 Ip Reservoir, Llc Method and apparatus for managing orders in financial markets
US8724887B2 (en) 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
JP6083687B2 (en) * 2012-01-06 2017-02-22 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Distributed calculation method, program, host computer, and distributed calculation system (distributed parallel calculation using accelerator device)
US8761534B2 (en) * 2012-02-16 2014-06-24 Ricoh Co., Ltd. Optimization of plenoptic imaging systems
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US20130293686A1 (en) * 2012-05-03 2013-11-07 Qualcomm Incorporated 3d reconstruction of human subject using a mobile device
US9858649B2 (en) 2015-09-30 2018-01-02 Lytro, Inc. Depth-based image blurring
US9001226B1 (en) * 2012-12-04 2015-04-07 Lytro, Inc. Capturing and relighting images using multiple devices
US10713726B1 (en) 2013-01-13 2020-07-14 United Services Automobile Association (Usaa) Determining insurance policy modifications using informatic sensor data
US9811618B1 (en) * 2013-03-07 2017-11-07 Xilinx, Inc. Simulation of system designs
US20140278573A1 (en) 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Systems and methods for initiating insurance processing using ingested data
US10334151B2 (en) 2013-04-22 2019-06-25 Google Llc Phase detection autofocus using subaperture images
US9792062B2 (en) * 2013-05-10 2017-10-17 Empire Technology Development Llc Acceleration of memory access
EP3005290A1 (en) * 2013-05-31 2016-04-13 Longsand Limited Three-dimensional object modeling
CN103322415A (en) * 2013-06-05 2013-09-25 哈尔滨工程大学 Two-dimensional reproduction method for petroleum pipeline defects through least squares support vector machines (LS-SVM)
US10001993B2 (en) 2013-08-08 2018-06-19 Linear Algebra Technologies Limited Variable-length instruction buffer management
US9146747B2 (en) 2013-08-08 2015-09-29 Linear Algebra Technologies Limited Apparatus, systems, and methods for providing configurable computational imaging pipeline
US9910675B2 (en) 2013-08-08 2018-03-06 Linear Algebra Technologies Limited Apparatus, systems, and methods for low power computational imaging
US9727113B2 (en) 2013-08-08 2017-08-08 Linear Algebra Technologies Limited Low power computational imaging
US11768689B2 (en) 2013-08-08 2023-09-26 Movidius Limited Apparatus, systems, and methods for low power computational imaging
US9710858B1 (en) 2013-08-16 2017-07-18 United Services Automobile Association (Usaa) Insurance policy alterations using informatic sensor data
US9998684B2 (en) * 2013-08-16 2018-06-12 Indiana University Research And Technology Corporation Method and apparatus for virtual 3D model generation and navigation using opportunistically captured images
JP6102648B2 (en) * 2013-09-13 2017-03-29 ソニー株式会社 Information processing apparatus and information processing method
EP2851868A1 (en) * 2013-09-20 2015-03-25 ETH Zurich 3D Reconstruction
US11087404B1 (en) 2014-01-10 2021-08-10 United Services Automobile Association (Usaa) Electronic sensor management
US11416941B1 (en) 2014-01-10 2022-08-16 United Services Automobile Association (Usaa) Electronic sensor management
US10552911B1 (en) 2014-01-10 2020-02-04 United Services Automobile Association (Usaa) Determining status of building modifications using informatics sensor data
US11847666B1 (en) 2014-02-24 2023-12-19 United Services Automobile Association (Usaa) Determining status of building modifications using informatics sensor data
US10614525B1 (en) 2014-03-05 2020-04-07 United Services Automobile Association (Usaa) Utilizing credit and informatic data for insurance underwriting purposes
US9665372B2 (en) 2014-05-12 2017-05-30 International Business Machines Corporation Parallel slice processor with dynamic instruction stream mapping
US9672043B2 (en) 2014-05-12 2017-06-06 International Business Machines Corporation Processing of multiple instruction streams in a parallel slice processor
US11797473B2 (en) 2014-05-29 2023-10-24 Altera Corporation Accelerator architecture on a programmable platform
CN110109859B (en) * 2014-05-29 2024-03-12 阿尔特拉公司 Accelerator architecture on programmable platform
US9589362B2 (en) * 2014-07-01 2017-03-07 Qualcomm Incorporated System and method of three-dimensional model generation
US9760375B2 (en) 2014-09-09 2017-09-12 International Business Machines Corporation Register files for storing data operated on by instructions of multiple widths
US9607388B2 (en) 2014-09-19 2017-03-28 Qualcomm Incorporated System and method of pose estimation
US9720696B2 (en) 2014-09-30 2017-08-01 International Business Machines Corporation Independent mapping of threads
US9977678B2 (en) 2015-01-12 2018-05-22 International Business Machines Corporation Reconfigurable parallel execution and load-store slice processor
US10133581B2 (en) 2015-01-13 2018-11-20 International Business Machines Corporation Linkable issue queue parallel execution slice for a processor
US10133576B2 (en) 2015-01-13 2018-11-20 International Business Machines Corporation Parallel slice processor having a recirculating load-store queue for fast deallocation of issue queue entries
US10546424B2 (en) 2015-04-15 2020-01-28 Google Llc Layered content delivery for virtual and augmented reality experiences
US10419737B2 (en) 2015-04-15 2019-09-17 Google Llc Data structures and delivery methods for expediting virtual reality playback
US11328446B2 (en) 2015-04-15 2022-05-10 Google Llc Combining light-field data with active depth data for depth map generation
US10469873B2 (en) 2015-04-15 2019-11-05 Google Llc Encoding and decoding virtual reality video
US10567464B2 (en) 2015-04-15 2020-02-18 Google Llc Video compression with adaptive view-dependent lighting removal
US10275898B1 (en) 2015-04-15 2019-04-30 Google Llc Wedge-based light-field video capture
US10341632B2 (en) 2015-04-15 2019-07-02 Google Llc. Spatial random access enabled video system with a three-dimensional viewing volume
US10440407B2 (en) 2017-05-09 2019-10-08 Google Llc Adaptive control for immersive experience delivery
US10444931B2 (en) 2017-05-09 2019-10-15 Google Llc Vantage generation and interactive playback
US10540818B2 (en) 2015-04-15 2020-01-21 Google Llc Stereo image generation and interactive playback
US10412373B2 (en) 2015-04-15 2019-09-10 Google Llc Image capture for virtual reality displays
US10565734B2 (en) 2015-04-15 2020-02-18 Google Llc Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline
US9911242B2 (en) 2015-05-14 2018-03-06 Qualcomm Incorporated Three-dimensional model generation
US10373366B2 (en) 2015-05-14 2019-08-06 Qualcomm Incorporated Three-dimensional model generation
US10304203B2 (en) 2015-05-14 2019-05-28 Qualcomm Incorporated Three-dimensional model generation
GB2540382B (en) * 2015-07-15 2020-03-04 Advanced Risc Mach Ltd Data processing systems
US9979909B2 (en) 2015-07-24 2018-05-22 Lytro, Inc. Automatic lens flare detection and correction for light-field images
US9983875B2 (en) 2016-03-04 2018-05-29 International Business Machines Corporation Operation of a multi-slice processor preventing early dependent instruction wakeup
US10037211B2 (en) 2016-03-22 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor with an expanded merge fetching queue
US10346174B2 (en) 2016-03-24 2019-07-09 International Business Machines Corporation Operation of a multi-slice processor with dynamic canceling of partial loads
US10761854B2 (en) 2016-04-19 2020-09-01 International Business Machines Corporation Preventing hazard flushes in an instruction sequencing unit of a multi-slice processor
US10037229B2 (en) 2016-05-11 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US10275892B2 (en) 2016-06-09 2019-04-30 Google Llc Multi-view scene segmentation and propagation
US9934033B2 (en) 2016-06-13 2018-04-03 International Business Machines Corporation Operation of a multi-slice processor implementing simultaneous two-target loads and stores
US10042647B2 (en) 2016-06-27 2018-08-07 International Business Machines Corporation Managing a divided load reorder queue
US10318419B2 (en) 2016-08-08 2019-06-11 International Business Machines Corporation Flush avoidance in a load store unit
US10341568B2 (en) 2016-10-10 2019-07-02 Qualcomm Incorporated User interface to assist three dimensional scanning of objects
US10679361B2 (en) 2016-12-05 2020-06-09 Google Llc Multi-view rotoscope contour propagation
JP6781089B2 (en) * 2017-03-28 2020-11-04 日立オートモティブシステムズ株式会社 Electronic control device, electronic control system, control method of electronic control device
US10594945B2 (en) 2017-04-03 2020-03-17 Google Llc Generating dolly zoom effect using light field image data
US10540186B1 (en) 2017-04-18 2020-01-21 Amazon Technologies, Inc. Interception of identifier from client configurable hardware logic
US10474227B2 (en) 2017-05-09 2019-11-12 Google Llc Generation of virtual reality with 6 degrees of freedom from limited viewer data
US10354399B2 (en) 2017-05-25 2019-07-16 Google Llc Multi-view back-projection to a light-field
US10545215B2 (en) 2017-09-13 2020-01-28 Google Llc 4D camera tracking and optical stabilization
US10965862B2 (en) 2018-01-18 2021-03-30 Google Llc Multi-camera navigation interface
US11315313B2 (en) * 2018-02-23 2022-04-26 Sony Group Corporation Methods, devices and computer program products for generating 3D models
US11481296B2 (en) * 2018-09-10 2022-10-25 International Business Machines Corporation Detecting configuration errors in multiport I/O cards with simultaneous multi-processing
CN109671151B (en) * 2018-11-27 2023-07-18 先临三维科技股份有限公司 Three-dimensional data processing method and device, storage medium and processor
US20190207868A1 (en) * 2019-02-15 2019-07-04 Intel Corporation Processor related communications
CN113711279A (en) * 2019-05-14 2021-11-26 英特尔公司 Automatic point cloud verification of immersive media
US11269555B2 (en) * 2020-06-22 2022-03-08 Sandisk Technologies Llc System idle time reduction methods and apparatus
US20230042858A1 (en) * 2021-08-02 2023-02-09 Nvidia Corporation Offloading processing tasks to decoupled accelerators for increasing performance in a system on a chip

Citations (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3665173A (en) * 1968-09-03 1972-05-23 Ibm Triple modular redundancy/sparing
US4703475A (en) * 1985-12-04 1987-10-27 American Telephone And Telegraph Company At&T Bell Laboratories Data communication method and apparatus using multiple physical data links
US4774574A (en) * 1987-06-02 1988-09-27 Eastman Kodak Company Adaptive block transform image coding method and apparatus
US4862407A (en) * 1987-10-05 1989-08-29 Motorola, Inc. Digital signal processing apparatus
US4873626A (en) * 1986-12-17 1989-10-10 Massachusetts Institute Of Technology Parallel processing system with processor array having memory system included in system memory
US4914653A (en) * 1986-12-22 1990-04-03 American Telephone And Telegraph Company Inter-processor communication protocol
US4956771A (en) * 1988-05-24 1990-09-11 Prime Computer, Inc. Method for inter-processor data transfer
US4985832A (en) * 1986-09-18 1991-01-15 Digital Equipment Corporation SIMD array processing system with routing networks having plurality of switching stages to transfer messages among processors
US5185871A (en) * 1989-12-26 1993-02-09 International Business Machines Corporation Coordination of out-of-sequence fetching between multiple processors using re-execution of instructions
US5283883A (en) * 1991-10-17 1994-02-01 Sun Microsystems, Inc. Method and direct memory access controller for asynchronously reading/writing data from/to a memory with improved throughput
US5317752A (en) * 1989-12-22 1994-05-31 Tandem Computers Incorporated Fault-tolerant computer system with auto-restart after power-fall
US5339413A (en) * 1992-08-21 1994-08-16 International Business Machines Corporation Data stream protocol for multimedia data streaming data processing system
US5377333A (en) * 1991-09-20 1994-12-27 Hitachi, Ltd. Parallel processor system having computing clusters and auxiliary clusters connected with network of partial networks and exchangers
US5428754A (en) * 1988-03-23 1995-06-27 3Dlabs Ltd Computer system with clock shared between processors executing separate instruction streams
US5544067A (en) * 1990-04-06 1996-08-06 Lsi Logic Corporation Method and system for creating, deriving and validating structural description of electronic system from higher level, behavior-oriented description, including interactive schematic design and simulation
US5583964A (en) * 1994-05-02 1996-12-10 Motorola, Inc. Computer utilizing neural network and method of using same
US5603043A (en) * 1992-11-05 1997-02-11 Giga Operations Corporation System for compiling algorithmic language source code for implementation in programmable hardware
US5623604A (en) * 1992-11-18 1997-04-22 Canon Information Systems, Inc. Method and apparatus for remotely altering programmable firmware stored in an interactive network board coupled to a network peripheral
US5623418A (en) * 1990-04-06 1997-04-22 Lsi Logic Corporation System and method for creating and validating structural description of electronic system
US5640107A (en) * 1995-10-24 1997-06-17 Northrop Grumman Corporation Method for in-circuit programming of a field-programmable gate array configuration memory
US5649135A (en) * 1995-01-17 1997-07-15 International Business Machines Corporation Parallel processing system and method using surrogate instructions
US5655069A (en) * 1994-07-29 1997-08-05 Fujitsu Limited Apparatus having a plurality of programmable logic processing units for self-repair
US5712922A (en) * 1992-04-14 1998-01-27 Eastman Kodak Company Neural network optical character recognition system and method for classifying characters in a moving web
US5732107A (en) * 1995-08-31 1998-03-24 Northrop Grumman Corporation Fir interpolator with zero order hold and fir-spline interpolation combination
US5752071A (en) * 1995-07-17 1998-05-12 Intel Corporation Function coprocessor
US5784636A (en) * 1996-05-28 1998-07-21 National Semiconductor Corporation Reconfigurable computer architecture for use in signal processing applications
US5801958A (en) * 1990-04-06 1998-09-01 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design from higher level, behavior-oriented description, including interactive system for hierarchical display of control and dataflow information
US5867399A (en) * 1990-04-06 1999-02-02 Lsi Logic Corporation System and method for creating and validating structural description of electronic system from higher-level and behavior-oriented description
US5892562A (en) * 1995-12-20 1999-04-06 Semiconductor Energy Laboratory Co., Ltd. Liquid crystal electro-optic device
US5892962A (en) * 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
US5909565A (en) * 1995-04-28 1999-06-01 Matsushita Electric Industrial Co., Ltd. Microprocessor system which efficiently shares register data between a main processor and a coprocessor
US5910897A (en) * 1994-06-01 1999-06-08 Lsi Logic Corporation Specification and design of complex digital systems
US5916307A (en) * 1996-06-05 1999-06-29 New Era Of Networks, Inc. Method and structure for balanced queue communication between nodes in a distributed computing application
US5930147A (en) * 1995-10-12 1999-07-27 Kabushiki Kaisha Toshiba Design support system in which delay is estimated from HDL description
US5931959A (en) * 1997-05-21 1999-08-03 The United States Of America As Represented By The Secretary Of The Air Force Dynamically reconfigurable FPGA apparatus and method for multiprocessing and fault tolerance
US5933356A (en) * 1990-04-06 1999-08-03 Lsi Logic Corporation Method and system for creating and verifying structural logic model of electronic design from behavioral description, including generation of logic and timing models
US5941999A (en) * 1997-03-31 1999-08-24 Sun Microsystems Method and system for achieving high availability in networked computer systems
US5963454A (en) * 1996-09-25 1999-10-05 Vlsi Technology, Inc. Method and apparatus for efficiently implementing complex function blocks in integrated circuit designs
US6018793A (en) * 1997-10-24 2000-01-25 Cirrus Logic, Inc. Single chip controller-memory device including feature-selectable bank I/O and architecture and methods suitable for implementing the same
US6023742A (en) * 1996-07-18 2000-02-08 University Of Washington Reconfigurable computing architecture for providing pipelined data paths
US6028939A (en) * 1997-01-03 2000-02-22 Redcreek Communications, Inc. Data security system and method
US6049222A (en) * 1997-12-30 2000-04-11 Xilinx, Inc Configuring an FPGA using embedded memory
US6096091A (en) * 1998-02-24 2000-08-01 Advanced Micro Devices, Inc. Dynamically reconfigurable logic networks interconnected by fall-through FIFOs for flexible pipeline processing in a system-on-a-chip
US6108693A (en) * 1997-10-17 2000-08-22 Nec Corporation System and method of data communication in multiprocessor system
US6112288A (en) * 1998-05-19 2000-08-29 Paracel, Inc. Dynamic configurable system of parallel modules comprising chain of chips comprising parallel pipeline chain of processors with master controller feeding command and data
US6128755A (en) * 1992-03-04 2000-10-03 International Business Machines Corporation Fault-tolerant multiple processor system with signature voting
US6192384B1 (en) * 1998-09-14 2001-02-20 The Board Of Trustees Of The Leland Stanford Junior University System and method for performing compound vector operations
US6205516B1 (en) * 1997-10-31 2001-03-20 Brother Kogyo Kabushiki Kaisha Device and method for controlling data storage device in data processing system
US6216191B1 (en) * 1997-10-15 2001-04-10 Lucent Technologies Inc. Field programmable gate array having a dedicated processor interface
US6216252B1 (en) * 1990-04-06 2001-04-10 Lsi Logic Corporation Method and system for creating, validating, and scaling structural description of electronic device
US6219828B1 (en) * 1998-09-30 2001-04-17 International Business Machines Corporation Method for using two copies of open firmware for self debug capability
US6237054B1 (en) * 1998-09-14 2001-05-22 Advanced Micro Devices, Inc. Network interface unit including a microcontroller having multiple configurable logic blocks, with a test/program bus for performing a plurality of selected functions
US6247118B1 (en) * 1998-06-05 2001-06-12 Mcdonnell Douglas Corporation Systems and methods for transient error recovery in reduced instruction set computer processors via instruction retry
US6253276B1 (en) * 1998-06-30 2001-06-26 Micron Technology, Inc. Apparatus for adaptive decoding of memory addresses
US20010014937A1 (en) * 1997-12-17 2001-08-16 Huppenthal Jon M. Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem
US6282578B1 (en) * 1995-06-26 2001-08-28 Hitachi, Ltd. Execution management method of program on reception side of message in distributed processing system
US6282627B1 (en) * 1998-06-29 2001-08-28 Chameleon Systems, Inc. Integrated processor and programmable data path chip for reconfigurable computing
US6308311B1 (en) * 1999-05-14 2001-10-23 Xilinx, Inc. Method for reconfiguring a field programmable gate array from a host
US6324678B1 (en) * 1990-04-06 2001-11-27 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design
US6326806B1 (en) * 2000-03-29 2001-12-04 Xilinx, Inc. FPGA-based communications access point and system for reconfiguration
US20020041685A1 (en) * 2000-09-22 2002-04-11 Mcloone Maire Patricia Data encryption apparatus
US20020087829A1 (en) * 2000-12-29 2002-07-04 Snyder Walter L. Re-targetable communication system
US20020167950A1 (en) * 2001-01-12 2002-11-14 Zarlink Semiconductor V.N. Inc. Fast data path protocol for network switching
US20030009651A1 (en) * 2001-05-15 2003-01-09 Zahid Najam Apparatus and method for interconnecting a processor to co-processors using shared memory
US20030014627A1 (en) * 1999-07-08 2003-01-16 Broadcom Corporation Distributed processing in a cryptography acceleration chip
US20030061409A1 (en) * 2001-02-23 2003-03-27 Rudusky Daryl System, method and article of manufacture for dynamic, automated product fulfillment for configuring a remotely located device
US20030177223A1 (en) * 2002-03-12 2003-09-18 Erickson Michael J. Verification of computer programs
US6624819B1 (en) * 2000-05-01 2003-09-23 Broadcom Corporation Method and system for providing a flexible and efficient processor for use in a graphics processing system
US6625749B1 (en) * 1999-12-21 2003-09-23 Intel Corporation Firmware mechanism for correcting soft errors
US6662285B1 (en) * 2001-01-09 2003-12-09 Xilinx, Inc. User configurable memory system having local and global memory blocks
US20030231649A1 (en) * 2002-06-13 2003-12-18 Awoseyi Paul A. Dual purpose method and apparatus for performing network interface and security transactions
US6684314B1 (en) * 2000-07-14 2004-01-27 Agilent Technologies, Inc. Memory controller with programmable address configuration
US20040019883A1 (en) * 2001-01-26 2004-01-29 Northwestern University Method and apparatus for automatically generating hardware from algorithms described in matlab
US20040045015A1 (en) * 2002-08-29 2004-03-04 Kazem Haji-Aghajani Common interface framework for developing field programmable device based applications independent of target circuit board
US6704816B1 (en) * 1999-07-26 2004-03-09 Sun Microsystems, Inc. Method and apparatus for executing standard functions in a computer system using a field programmable gate array
US20040064198A1 (en) * 2002-05-06 2004-04-01 Cyber Switching, Inc. Method and/or system and/or apparatus for remote power management and monitoring supply
US20040130927A1 (en) * 2002-10-31 2004-07-08 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US6769072B1 (en) * 1999-09-14 2004-07-27 Fujitsu Limited Distributed processing system with registered reconfiguration processors and registered notified processors
US20040153752A1 (en) * 2002-12-02 2004-08-05 Marvell International Ltd. Self-reparable semiconductor and method thereof
US6829697B1 (en) * 2000-09-06 2004-12-07 International Business Machines Corporation Multiple logical interfaces to a shared coprocessor resource
US6839873B1 (en) * 2000-06-23 2005-01-04 Cypress Semiconductor Corporation Method and apparatus for programmable logic device (PLD) built-in-self-test (BIST)
US20050104743A1 (en) * 2003-11-19 2005-05-19 Ripolone James G. High speed communication for measurement while drilling
US6982976B2 (en) * 2000-08-11 2006-01-03 Texas Instruments Incorporated Datapipe routing bridge
US20060123282A1 (en) * 2004-10-01 2006-06-08 Gouldey Brent I Service layer architecture for memory access system and method
US7117390B1 (en) * 2002-05-20 2006-10-03 Sandia Corporation Practical, redundant, failure-tolerant, self-reconfiguring embedded system architecture
US20060236018A1 (en) * 2001-05-18 2006-10-19 Xilinx, Inc. Programmable logic device including programmable interface core and central processing unit
US7137020B2 (en) * 2002-05-17 2006-11-14 Sun Microsystems, Inc. Method and apparatus for disabling defective components in a computer system
US7177310B2 (en) * 2001-03-12 2007-02-13 Hitachi, Ltd. Network connection apparatus
US7228520B1 (en) * 2004-01-30 2007-06-05 Xilinx, Inc. Method and apparatus for a programmable interface of a soft platform on a programmable logic device

Family Cites Families (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4782461A (en) * 1984-06-21 1988-11-01 Step Engineering Logical grouping of facilities within a computer development system
US4895832A (en) * 1988-11-03 1990-01-23 Industrial Technology Research Institute Coprecipitation method for producing superconducting oxides of high homogeneity
US5212777A (en) * 1989-11-17 1993-05-18 Texas Instruments Incorporated Multi-processor reconfigurable in single instruction multiple data (SIMD) and multiple instruction multiple data (MIMD) modes and method of operation
US5421028A (en) * 1991-03-15 1995-05-30 Hewlett-Packard Company Processing commands and data in a common pipeline path in a high-speed computer graphics system
US5361373A (en) * 1992-12-11 1994-11-01 Gilson Kent L Integrated circuit computing device comprising a dynamically configurable gate array having a microprocessor and reconfigurable instruction execution means and method therefor
EP0626661A1 (en) * 1993-05-24 1994-11-30 Societe D'applications Generales D'electricite Et De Mecanique Sagem Digital image processing circuitry
US5392393A (en) * 1993-06-04 1995-02-21 Sun Microsystems, Inc. Architecture for a high performance three dimensional graphics accelerator
US5568614A (en) 1994-07-29 1996-10-22 International Business Machines Corporation Data streaming between peer subsystems of a computer system
US5710910A (en) * 1994-09-30 1998-01-20 University Of Washington Asynchronous self-tuning clock domains and method for transferring data among domains
US5649176A (en) 1995-08-10 1997-07-15 Virtual Machine Works, Inc. Transition analysis and circuit resynthesis method and device for digital circuit modeling
US5648732A (en) * 1995-10-04 1997-07-15 Xilinx, Inc. Field programmable pipeline array
JPH09148907A (en) 1995-11-22 1997-06-06 Nec Corp Synchronous semiconductor logic device
US6115047A (en) * 1996-07-01 2000-09-05 Sun Microsystems, Inc. Method and apparatus for implementing efficient floating point Z-buffering
JP3406790B2 (en) * 1996-11-25 2003-05-12 株式会社東芝 Data transfer system and data transfer method
US5978578A (en) * 1997-01-30 1999-11-02 Azarya; Arnon Openbus system for control automation networks
US5996059A (en) * 1997-07-01 1999-11-30 National Semiconductor Corporation System for monitoring an execution pipeline utilizing an address pipeline in parallel with the execution pipeline
US5987620A (en) * 1997-09-19 1999-11-16 Thang Tran Method and apparatus for a self-timed and self-enabled distributed clock
KR100572945B1 (en) 1998-02-04 2006-04-24 텍사스 인스트루먼츠 인코포레이티드 Digital signal processor with efficiently connectable hardware co-processor
US6230253B1 (en) 1998-03-31 2001-05-08 Intel Corporation Executing partial-width packed data instructions
US6202139B1 (en) * 1998-06-19 2001-03-13 Advanced Micro Devices, Inc. Pipelined data cache with multiple ports and processor with load/store unit selecting only load or store operations for concurrent processing
US6862563B1 (en) 1998-10-14 2005-03-01 Arc International Method and apparatus for managing the configuration and functionality of a semiconductor design
US6405266B1 (en) * 1998-11-30 2002-06-11 Hewlett-Packard Company Unified publish and subscribe paradigm for local and remote publishing destinations
US6247134B1 (en) * 1999-03-31 2001-06-12 Synopsys, Inc. Method and system for pipe stage gating within an operating pipelined circuit for power savings
JP2000295613A (en) * 1999-04-09 2000-10-20 Nippon Telegr & Teleph Corp <Ntt> Method and device for image coding using reconfigurable hardware and program recording medium for image coding
US6477170B1 (en) * 1999-05-21 2002-11-05 Advanced Micro Devices, Inc. Method and apparatus for interfacing between systems operating under different clock regimes with interlocking to prevent overwriting of data
EP1061438A1 (en) 1999-06-15 2000-12-20 Hewlett-Packard Company Computer architecture containing processor and coprocessor
EP1061439A1 (en) 1999-06-15 2000-12-20 Hewlett-Packard Company Memory and instructions in computer architecture containing processor and coprocessor
JP2001142695A (en) 1999-10-01 2001-05-25 Hitachi Ltd Loading method of constant to storage place, loading method of constant to address storage place, loading method of constant to register, deciding method of number of code bit, normalizing method of binary number and instruction in computer system
US6526430B1 (en) 1999-10-04 2003-02-25 Texas Instruments Incorporated Reconfigurable SIMD coprocessor architecture for sum of absolute differences and symmetric filtering (scalable MAC engine for image processing)
US6516420B1 (en) * 1999-10-25 2003-02-04 Motorola, Inc. Data synchronizer using a parallel handshaking pipeline wherein validity indicators generate and send acknowledgement signals to a different clock domain
US6648640B2 (en) * 1999-11-30 2003-11-18 Ora Metrix, Inc. Interactive orthodontic care system based on intra-oral scanning of teeth
US6632089B2 (en) * 1999-11-30 2003-10-14 Orametrix, Inc. Orthodontic treatment planning with user-specified simulation of tooth movement
US6678646B1 (en) * 1999-12-14 2004-01-13 Atmel Corporation Method for implementing a physical design for a dynamically reconfigurable logic circuit
US6606360B1 (en) * 1999-12-30 2003-08-12 Intel Corporation Method and apparatus for receiving data
US6826539B2 (en) * 1999-12-31 2004-11-30 Xactware, Inc. Virtual structure data repository and directory
US6611920B1 (en) * 2000-01-21 2003-08-26 Intel Corporation Clock distribution system for selectively enabling clock signals to portions of a pipelined circuit
JP3832557B2 (en) * 2000-05-02 2006-10-11 富士ゼロックス株式会社 Circuit reconfiguration method and information processing system for programmable logic circuit
US6532009B1 (en) 2000-05-18 2003-03-11 International Business Machines Corporation Programmable hardwired geometry pipeline
US6817005B2 (en) * 2000-05-25 2004-11-09 Xilinx, Inc. Modular design method and system for programmable logic devices
JP3707360B2 (en) * 2000-06-27 2005-10-19 富士ゼロックス株式会社 Circuit function reconfiguration method and programmable logic circuit device
US7196710B1 (en) 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
JP3880310B2 (en) * 2000-12-01 2007-02-14 シャープ株式会社 Semiconductor integrated circuit
US6708239B1 (en) 2000-12-08 2004-03-16 The Boeing Company Network device interface for digitally interfacing data channels to a controller via a network
US6785841B2 (en) 2000-12-14 2004-08-31 International Business Machines Corporation Processor with redundant logic
US6925549B2 (en) * 2000-12-21 2005-08-02 International Business Machines Corporation Asynchronous pipeline control interface using tag values to control passing data through successive pipeline stages
US6915502B2 (en) 2001-01-03 2005-07-05 University Of Southern California System level applications of adaptive computing (SLAAC) technology
US7091598B2 (en) * 2001-01-19 2006-08-15 Renesas Technology Corporation Electronic circuit device
US7036059B1 (en) 2001-02-14 2006-04-25 Xilinx, Inc. Techniques for mitigating, detecting and correcting single event upset effects in systems using SRAM-based field programmable gate arrays
US6848060B2 (en) * 2001-02-27 2005-01-25 International Business Machines Corporation Synchronous to asynchronous to synchronous interface
JP2002269063A (en) 2001-03-07 2002-09-20 Toshiba Corp Messaging program, messaging method of distributed system, and messaging system
JP2002281079A (en) 2001-03-21 2002-09-27 Victor Co Of Japan Ltd Image data transmitting device
US7065672B2 (en) * 2001-03-28 2006-06-20 Stratus Technologies Bermuda Ltd. Apparatus and methods for fault-tolerant computing using a switching fabric
US6530073B2 (en) 2001-04-30 2003-03-04 Lsi Logic Corporation RTL annotation tool for layout induced netlist changes
US6985975B1 (en) * 2001-06-29 2006-01-10 Sanera Systems, Inc. Packet lockstep system and method
US7143418B1 (en) 2001-12-10 2006-11-28 Xilinx, Inc. Core template package for creating run-time reconfigurable cores
JP3938308B2 (en) * 2001-12-28 2007-06-27 富士通株式会社 Programmable logic device
US6893873B2 (en) * 2002-01-25 2005-05-17 Georgia Tech Research Corporation Methods for improving conifer embryogenesis
US7073158B2 (en) 2002-05-17 2006-07-04 Pixel Velocity, Inc. Automated system for designing and developing field programmable gate arrays
US7024654B2 (en) 2002-06-11 2006-04-04 Anadigm, Inc. System and method for configuring analog elements in a configurable hardware device
US7076681B2 (en) * 2002-07-02 2006-07-11 International Business Machines Corporation Processor with demand-driven clock throttling power reduction
EP1383042B1 (en) * 2002-07-19 2007-03-28 STMicroelectronics S.r.l. A multiphase synchronous pipeline structure
WO2004042562A2 (en) 2002-10-31 2004-05-21 Lockheed Martin Corporation Pipeline accelerator and related system and method
EP1573515A2 (en) 2002-10-31 2005-09-14 Lockheed Martin Corporation Pipeline accelerator and related system and method
US7200114B1 (en) 2002-11-18 2007-04-03 At&T Corp. Method for reconfiguring a router
US7260794B2 (en) * 2002-12-20 2007-08-21 Quickturn Design Systems, Inc. Logic multiprocessor for FPGA implementation
US20060152087A1 (en) 2003-06-10 2006-07-13 De Oliveira Kastrup Pereira B Embedded computing system with reconfigurable power supply and/or clock frequency domains
US7284225B1 (en) * 2004-05-20 2007-10-16 Xilinx, Inc. Embedding a hardware object in an application system
WO2006039713A2 (en) 2004-10-01 2006-04-13 Lockheed Martin Corporation Configurable computing machine and related systems and methods
US8606360B2 (en) 2006-03-09 2013-12-10 The Cleveland Clinic Foundation Systems and methods for determining volume of activation for spinal cord and peripheral nerve stimulation
JP5009979B2 (en) 2006-05-22 2012-08-29 コーヒレント・ロジックス・インコーポレーテッド ASIC design based on execution of software program in processing system

Patent Citations (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3665173A (en) * 1968-09-03 1972-05-23 Ibm Triple modular redundancy/sparing
US4703475A (en) * 1985-12-04 1987-10-27 American Telephone And Telegraph Company At&T Bell Laboratories Data communication method and apparatus using multiple physical data links
US4985832A (en) * 1986-09-18 1991-01-15 Digital Equipment Corporation SIMD array processing system with routing networks having plurality of switching stages to transfer messages among processors
US4873626A (en) * 1986-12-17 1989-10-10 Massachusetts Institute Of Technology Parallel processing system with processor array having memory system included in system memory
US4914653A (en) * 1986-12-22 1990-04-03 American Telephone And Telegraph Company Inter-processor communication protocol
US4774574A (en) * 1987-06-02 1988-09-27 Eastman Kodak Company Adaptive block transform image coding method and apparatus
US4862407A (en) * 1987-10-05 1989-08-29 Motorola, Inc. Digital signal processing apparatus
US5428754A (en) * 1988-03-23 1995-06-27 3Dlabs Ltd Computer system with clock shared between processors executing separate instruction streams
US4956771A (en) * 1988-05-24 1990-09-11 Prime Computer, Inc. Method for inter-processor data transfer
US5317752A (en) * 1989-12-22 1994-05-31 Tandem Computers Incorporated Fault-tolerant computer system with auto-restart after power-fail
US5185871A (en) * 1989-12-26 1993-02-09 International Business Machines Corporation Coordination of out-of-sequence fetching between multiple processors using re-execution of instructions
US6324678B1 (en) * 1990-04-06 2001-11-27 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design
US5544067A (en) * 1990-04-06 1996-08-06 Lsi Logic Corporation Method and system for creating, deriving and validating structural description of electronic system from higher level, behavior-oriented description, including interactive schematic design and simulation
US6216252B1 (en) * 1990-04-06 2001-04-10 Lsi Logic Corporation Method and system for creating, validating, and scaling structural description of electronic device
US5801958A (en) * 1990-04-06 1998-09-01 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design from higher level, behavior-oriented description, including interactive system for hierarchical display of control and dataflow information
US5623418A (en) * 1990-04-06 1997-04-22 Lsi Logic Corporation System and method for creating and validating structural description of electronic system
US6470482B1 (en) * 1990-04-06 2002-10-22 Lsi Logic Corporation Method and system for creating, deriving and validating structural description of electronic system from higher level, behavior-oriented description, including interactive schematic design and simulation
US5933356A (en) * 1990-04-06 1999-08-03 Lsi Logic Corporation Method and system for creating and verifying structural logic model of electronic design from behavioral description, including generation of logic and timing models
US5867399A (en) * 1990-04-06 1999-02-02 Lsi Logic Corporation System and method for creating and validating structural description of electronic system from higher-level and behavior-oriented description
US5377333A (en) * 1991-09-20 1994-12-27 Hitachi, Ltd. Parallel processor system having computing clusters and auxiliary clusters connected with network of partial networks and exchangers
US5283883A (en) * 1991-10-17 1994-02-01 Sun Microsystems, Inc. Method and direct memory access controller for asynchronously reading/writing data from/to a memory with improved throughput
US6128755A (en) * 1992-03-04 2000-10-03 International Business Machines Corporation Fault-tolerant multiple processor system with signature voting
US5712922A (en) * 1992-04-14 1998-01-27 Eastman Kodak Company Neural network optical character recognition system and method for classifying characters in a moving web
US5339413A (en) * 1992-08-21 1994-08-16 International Business Machines Corporation Data stream protocol for multimedia data streaming data processing system
US5603043A (en) * 1992-11-05 1997-02-11 Giga Operations Corporation System for compiling algorithmic language source code for implementation in programmable hardware
US5623604A (en) * 1992-11-18 1997-04-22 Canon Information Systems, Inc. Method and apparatus for remotely altering programmable firmware stored in an interactive network board coupled to a network peripheral
US5583964A (en) * 1994-05-02 1996-12-10 Motorola, Inc. Computer utilizing neural network and method of using same
US5910897A (en) * 1994-06-01 1999-06-08 Lsi Logic Corporation Specification and design of complex digital systems
US5655069A (en) * 1994-07-29 1997-08-05 Fujitsu Limited Apparatus having a plurality of programmable logic processing units for self-repair
US5649135A (en) * 1995-01-17 1997-07-15 International Business Machines Corporation Parallel processing system and method using surrogate instructions
US5909565A (en) * 1995-04-28 1999-06-01 Matsushita Electric Industrial Co., Ltd. Microprocessor system which efficiently shares register data between a main processor and a coprocessor
US6282578B1 (en) * 1995-06-26 2001-08-28 Hitachi, Ltd. Execution management method of program on reception side of message in distributed processing system
US5752071A (en) * 1995-07-17 1998-05-12 Intel Corporation Function coprocessor
US5732107A (en) * 1995-08-31 1998-03-24 Northrop Grumman Corporation Fir interpolator with zero order hold and fir-spline interpolation combination
US5930147A (en) * 1995-10-12 1999-07-27 Kabushiki Kaisha Toshiba Design support system in which delay is estimated from HDL description
US5640107A (en) * 1995-10-24 1997-06-17 Northrop Grumman Corporation Method for in-circuit programming of a field-programmable gate array configuration memory
US5892562A (en) * 1995-12-20 1999-04-06 Semiconductor Energy Laboratory Co., Ltd. Liquid crystal electro-optic device
US5784636A (en) * 1996-05-28 1998-07-21 National Semiconductor Corporation Reconfigurable computer architecture for use in signal processing applications
US5916307A (en) * 1996-06-05 1999-06-29 New Era Of Networks, Inc. Method and structure for balanced queue communication between nodes in a distributed computing application
US6023742A (en) * 1996-07-18 2000-02-08 University Of Washington Reconfigurable computing architecture for providing pipelined data paths
US5963454A (en) * 1996-09-25 1999-10-05 Vlsi Technology, Inc. Method and apparatus for efficiently implementing complex function blocks in integrated circuit designs
US5892962A (en) * 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
US6028939A (en) * 1997-01-03 2000-02-22 Redcreek Communications, Inc. Data security system and method
US5941999A (en) * 1997-03-31 1999-08-24 Sun Microsystems Method and system for achieving high availability in networked computer systems
US5931959A (en) * 1997-05-21 1999-08-03 The United States Of America As Represented By The Secretary Of The Air Force Dynamically reconfigurable FPGA apparatus and method for multiprocessing and fault tolerance
US6216191B1 (en) * 1997-10-15 2001-04-10 Lucent Technologies Inc. Field programmable gate array having a dedicated processor interface
US6108693A (en) * 1997-10-17 2000-08-22 Nec Corporation System and method of data communication in multiprocessor system
US6018793A (en) * 1997-10-24 2000-01-25 Cirrus Logic, Inc. Single chip controller-memory device including feature-selectable bank I/O and architecture and methods suitable for implementing the same
US6205516B1 (en) * 1997-10-31 2001-03-20 Brother Kogyo Kabushiki Kaisha Device and method for controlling data storage device in data processing system
US20010014937A1 (en) * 1997-12-17 2001-08-16 Huppenthal Jon M. Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem
US6049222A (en) * 1997-12-30 2000-04-11 Xilinx, Inc Configuring an FPGA using embedded memory
US6096091A (en) * 1998-02-24 2000-08-01 Advanced Micro Devices, Inc. Dynamically reconfigurable logic networks interconnected by fall-through FIFOs for flexible pipeline processing in a system-on-a-chip
US6112288A (en) * 1998-05-19 2000-08-29 Paracel, Inc. Dynamic configurable system of parallel modules comprising chain of chips comprising parallel pipeline chain of processors with master controller feeding command and data
US6247118B1 (en) * 1998-06-05 2001-06-12 Mcdonnell Douglas Corporation Systems and methods for transient error recovery in reduced instruction set computer processors via instruction retry
US20010025338A1 (en) * 1998-06-05 2001-09-27 The Boeing Company Systems and methods for transient error recovery in reduced instruction set computer processors via instruction retry
US6785842B2 (en) * 1998-06-05 2004-08-31 Mcdonnell Douglas Corporation Systems and methods for use in reduced instruction set computer processors for retrying execution of instructions resulting in errors
US6282627B1 (en) * 1998-06-29 2001-08-28 Chameleon Systems, Inc. Integrated processor and programmable data path chip for reconfigurable computing
US6253276B1 (en) * 1998-06-30 2001-06-26 Micron Technology, Inc. Apparatus for adaptive decoding of memory addresses
US6237054B1 (en) * 1998-09-14 2001-05-22 Advanced Micro Devices, Inc. Network interface unit including a microcontroller having multiple configurable logic blocks, with a test/program bus for performing a plurality of selected functions
US6192384B1 (en) * 1998-09-14 2001-02-20 The Board Of Trustees Of The Leland Stanford Junior University System and method for performing compound vector operations
US6219828B1 (en) * 1998-09-30 2001-04-17 International Business Machines Corporation Method for using two copies of open firmware for self debug capability
US6308311B1 (en) * 1999-05-14 2001-10-23 Xilinx, Inc. Method for reconfiguring a field programmable gate array from a host
US20030014627A1 (en) * 1999-07-08 2003-01-16 Broadcom Corporation Distributed processing in a cryptography acceleration chip
US6704816B1 (en) * 1999-07-26 2004-03-09 Sun Microsystems, Inc. Method and apparatus for executing standard functions in a computer system using a field programmable gate array
US6769072B1 (en) * 1999-09-14 2004-07-27 Fujitsu Limited Distributed processing system with registered reconfiguration processors and registered notified processors
US7134047B2 (en) * 1999-12-21 2006-11-07 Intel Corporation Firmwave mechanism for correcting soft errors
US6625749B1 (en) * 1999-12-21 2003-09-23 Intel Corporation Firmware mechanism for correcting soft errors
US20040019771A1 (en) * 1999-12-21 2004-01-29 Nhon Quach Firmwave mechanism for correcting soft errors
US6326806B1 (en) * 2000-03-29 2001-12-04 Xilinx, Inc. FPGA-based communications access point and system for reconfiguration
US6624819B1 (en) * 2000-05-01 2003-09-23 Broadcom Corporation Method and system for providing a flexible and efficient processor for use in a graphics processing system
US6839873B1 (en) * 2000-06-23 2005-01-04 Cypress Semiconductor Corporation Method and apparatus for programmable logic device (PLD) built-in-self-test (BIST)
US6684314B1 (en) * 2000-07-14 2004-01-27 Agilent Technologies, Inc. Memory controller with programmable address configuration
US6982976B2 (en) * 2000-08-11 2006-01-03 Texas Instruments Incorporated Datapipe routing bridge
US6829697B1 (en) * 2000-09-06 2004-12-07 International Business Machines Corporation Multiple logical interfaces to a shared coprocessor resource
US20020041685A1 (en) * 2000-09-22 2002-04-11 Mcloone Maire Patricia Data encryption apparatus
US20020087829A1 (en) * 2000-12-29 2002-07-04 Snyder Walter L. Re-targetable communication system
US6662285B1 (en) * 2001-01-09 2003-12-09 Xilinx, Inc. User configurable memory system having local and global memory blocks
US20020167950A1 (en) * 2001-01-12 2002-11-14 Zarlink Semiconductor V.N. Inc. Fast data path protocol for network switching
US20040019883A1 (en) * 2001-01-26 2004-01-29 Northwestern University Method and apparatus for automatically generating hardware from algorithms described in matlab
US20030061409A1 (en) * 2001-02-23 2003-03-27 Rudusky Daryl System, method and article of manufacture for dynamic, automated product fulfillment for configuring a remotely located device
US7177310B2 (en) * 2001-03-12 2007-02-13 Hitachi, Ltd. Network connection apparatus
US20030009651A1 (en) * 2001-05-15 2003-01-09 Zahid Najam Apparatus and method for interconnecting a processor to co-processors using shared memory
US20060236018A1 (en) * 2001-05-18 2006-10-19 Xilinx, Inc. Programmable logic device including programmable interface core and central processing unit
US20030177223A1 (en) * 2002-03-12 2003-09-18 Erickson Michael J. Verification of computer programs
US20040064198A1 (en) * 2002-05-06 2004-04-01 Cyber Switching, Inc. Method and/or system and/or apparatus for remote power management and monitoring supply
US7137020B2 (en) * 2002-05-17 2006-11-14 Sun Microsystems, Inc. Method and apparatus for disabling defective components in a computer system
US7117390B1 (en) * 2002-05-20 2006-10-03 Sandia Corporation Practical, redundant, failure-tolerant, self-reconfiguring embedded system architecture
US20030231649A1 (en) * 2002-06-13 2003-12-18 Awoseyi Paul A. Dual purpose method and apparatus for performing network interface and security transactions
US20040045015A1 (en) * 2002-08-29 2004-03-04 Kazem Haji-Aghajani Common interface framework for developing field programmable device based applications independent of target circuit board
US20040170070A1 (en) * 2002-10-31 2004-09-02 Lockheed Martin Corporation Programmable circuit and related computing machine and method
US20040133763A1 (en) * 2002-10-31 2004-07-08 Lockheed Martin Corporation Computing architecture and related system and method
US20040130927A1 (en) * 2002-10-31 2004-07-08 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US20080222337A1 (en) * 2002-10-31 2008-09-11 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US20040153752A1 (en) * 2002-12-02 2004-08-05 Marvell International Ltd. Self-reparable semiconductor and method thereof
US20070055907A1 (en) * 2002-12-02 2007-03-08 Sehat Sutardja Self-reparable semiconductor and method thereof
US20050104743A1 (en) * 2003-11-19 2005-05-19 Ripolone James G. High speed communication for measurement while drilling
US7228520B1 (en) * 2004-01-30 2007-06-05 Xilinx, Inc. Method and apparatus for a programmable interface of a soft platform on a programmable logic device
US20060123282A1 (en) * 2004-10-01 2006-06-08 Gouldey Brent I Service layer architecture for memory access system and method

Cited By (128)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8250341B2 (en) 2002-10-31 2012-08-21 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US7987341B2 (en) 2002-10-31 2011-07-26 Lockheed Martin Corporation Computing machine using software objects for transferring data that includes no destination information
US20110147762A1 (en) * 2003-03-03 2011-06-23 Sheppard Scott T Integrated Nitride and Silicon Carbide-Based Devices
US20110213892A1 (en) * 2003-06-25 2011-09-01 Microsoft Corporation Media foundation media processor
US20110185078A1 (en) * 2003-06-25 2011-07-28 Microsoft Corporation Media scrubbing using a media processor
US9460753B2 (en) 2003-06-25 2016-10-04 Microsoft Technology Licensing, Llc Media scrubbing using a media processor
US9502074B2 (en) 2003-06-25 2016-11-22 Microsoft Technology Licensing, Llc Media foundation media processor
US9536565B2 (en) 2003-06-25 2017-01-03 Microsoft Technology Licensing, Llc Media foundation media processor
US8171151B2 (en) 2003-06-25 2012-05-01 Microsoft Corporation Media foundation media processor
US7487302B2 (en) 2004-10-01 2009-02-03 Lockheed Martin Corporation Service layer architecture for memory access system and method
US20060101253A1 (en) * 2004-10-01 2006-05-11 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
US7619541B2 (en) * 2004-10-01 2009-11-17 Lockheed Martin Corporation Remote sensor processing system and method
US20060123282A1 (en) * 2004-10-01 2006-06-08 Gouldey Brent I Service layer architecture for memory access system and method
US20060101250A1 (en) * 2004-10-01 2006-05-11 Lockheed Martin Corporation Configurable computing machine and related systems and methods
US20060085781A1 (en) * 2004-10-01 2006-04-20 Lockheed Martin Corporation Library for computer-based tool and related system and method
US7809982B2 (en) 2004-10-01 2010-10-05 Lockheed Martin Corporation Reconfigurable computing machine and related systems and methods
US8073974B2 (en) 2004-10-01 2011-12-06 Lockheed Martin Corporation Object oriented mission framework and system and method
US7676649B2 (en) 2004-10-01 2010-03-09 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
US20060230377A1 (en) * 2004-10-01 2006-10-12 Lockheed Martin Corporation Computer-based tool and method for designing an electronic circuit and related system
US20060149920A1 (en) * 2004-10-01 2006-07-06 John Rapp Object oriented mission framework and system and method
US20060087450A1 (en) * 2004-10-01 2006-04-27 Schulz Kenneth R Remote sensor processing system and method
US7984581B2 (en) 2004-10-29 2011-07-26 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US20060265927A1 (en) * 2004-10-29 2006-11-30 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US7814696B2 (en) 2004-10-29 2010-10-19 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
CN100412790C (en) * 2005-03-07 2008-08-20 富士通株式会社 Microprocessor
US20060288350A1 (en) * 2005-06-20 2006-12-21 Microsoft Corporation Multi-thread multimedia processing
US7827554B2 (en) * 2005-06-20 2010-11-02 Microsoft Corporation Multi-thread multimedia processing
US7502873B2 (en) * 2006-10-10 2009-03-10 International Business Machines Corporation Facilitating access to status and measurement data associated with input/output processing
US7984198B2 (en) 2006-10-10 2011-07-19 International Business Machines Corporation System and program products for facilitating access to status and measurement data associated with input/output processing
US20080147889A1 (en) * 2006-10-10 2008-06-19 International Business Machines Corporation Facilitating input/output processing by using transport control words to reduce input/output communications
US7840719B2 (en) 2006-10-10 2010-11-23 International Business Machines Corporation System and program products for facilitating input/output processing by using transport control words to reduce input/output communications
US7500023B2 (en) * 2006-10-10 2009-03-03 International Business Machines Corporation Facilitating input/output processing by using transport control words to reduce input/output communications
US20080147890A1 (en) * 2006-10-10 2008-06-19 International Business Machines Corporation Facilitating access to status and measurement data associated with input/output processing
US8140713B2 (en) 2006-10-10 2012-03-20 International Business Machines Corporation System and program products for facilitating input/output processing by using transport control words to reduce input/output communications
US20090144586A1 (en) * 2006-10-10 2009-06-04 International Business Machines Corporation System and program products for facilitating access to status and measurement data associated with input/output processing
US20090172203A1 (en) * 2006-10-10 2009-07-02 International Business Machines Corporation System and program products for facilitating input/output processing by using transport control words to reduce input/output communications
US20090210584A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Exception condition determination at a control unit in an i/o processing system
US20090210560A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Cancel instruction and command for determining the state of an i/o operation
US20090210557A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Determining extended capability of a channel path
US20090210576A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US20090210561A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to perform system changes in an input/output processing system
US9483433B2 (en) 2008-02-14 2016-11-01 International Business Machines Corporation Processing communication data in a ships passing condition
US20090210583A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Reserved device access contention reduction
US9436272B2 (en) 2008-02-14 2016-09-06 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US20090210580A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Early termination of an i/o operation in an i/o processing system
US20090210562A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing communication data in a ships passing condition
US20090210581A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Bi-directional data transfer within a single i/o operation
US20090210559A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing a variable length device command word at a control unit in an i/o processing system
US7840718B2 (en) 2008-02-14 2010-11-23 International Business Machines Corporation Processing of data to suspend operations in an input/output processing log-out system
US7840717B2 (en) 2008-02-14 2010-11-23 International Business Machines Corporation Processing a variable length device command word at a control unit in an I/O processing system
US20090210769A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Multiple crc insertion in an output data stream
US7856511B2 (en) 2008-02-14 2010-12-21 International Business Machines Corporation Processing of data to suspend operations in an input/output processing system
US7890668B2 (en) 2008-02-14 2011-02-15 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US7899944B2 (en) 2008-02-14 2011-03-01 International Business Machines Corporation Open exchange limiting in an I/O processing system
US9330042B2 (en) 2008-02-14 2016-05-03 International Business Machines Corporation Determining extended capability of a channel path
US7904605B2 (en) 2008-02-14 2011-03-08 International Business Machines Corporation Computer command and response for determining the state of an I/O operation
US7908403B2 (en) 2008-02-14 2011-03-15 International Business Machines Corporation Reserved device access contention reduction
US7917813B2 (en) 2008-02-14 2011-03-29 International Business Machines Corporation Exception condition determination at a control unit in an I/O processing system
US9298379B2 (en) 2008-02-14 2016-03-29 International Business Machines Corporation Bi-directional data transfer within a single I/O operation
US7937507B2 (en) 2008-02-14 2011-05-03 International Business Machines Corporation Extended measurement word determination at a channel subsystem of an I/O processing system
US7941570B2 (en) 2008-02-14 2011-05-10 International Business Machines Corporation Bi-directional data transfer within a single I/O operation
US20090210572A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Computer command and response for determining the state of an i/o operation
US20090210570A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Extended measurement word determination at a channel subsystem of an i/o processing system
US20090210571A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to monitor input/output operations
US20090210579A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Open exchange limiting in an i/o processing system
US20090210563A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Providing indirect data addressing for a control block at a channel subsystem of an i/o processing system
US8001298B2 (en) 2008-02-14 2011-08-16 International Business Machines Corporation Providing extended measurement data in an I/O processing system
US20090210564A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to suspend operations in an input/output processing system
US9052837B2 (en) 2008-02-14 2015-06-09 International Business Machines Corporation Processing communication data in a ships passing condition
US20090210585A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to suspend operations in an input/output processing log-out system
US8082481B2 (en) 2008-02-14 2011-12-20 International Business Machines Corporation Multiple CRC insertion in an output data stream
US8095847B2 (en) 2008-02-14 2012-01-10 International Business Machines Corporation Exception condition handling at a channel subsystem in an I/O processing system
US8108570B2 (en) 2008-02-14 2012-01-31 International Business Machines Corporation Determining the state of an I/O operation
US8117347B2 (en) 2008-02-14 2012-02-14 International Business Machines Corporation Providing indirect data addressing for a control block at a channel subsystem of an I/O processing system
US20090210768A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Exception condition handling at a channel subsystem in an i/o processing system
US8166206B2 (en) 2008-02-14 2012-04-24 International Business Machines Corporation Cancel instruction and command for determining the state of an I/O operation
US20090210884A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Processing of data to determine compatability in an input/output processing system
US8176222B2 (en) 2008-02-14 2012-05-08 International Business Machines Corporation Early termination of an I/O operation in an I/O processing system
US9043494B2 (en) 2008-02-14 2015-05-26 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US8196149B2 (en) 2008-02-14 2012-06-05 International Business Machines Corporation Processing of data to determine compatability in an input/output processing system
US8214562B2 (en) 2008-02-14 2012-07-03 International Business Machines Corporation Processing of data to perform system changes in an input/output processing system
US20090210573A1 (en) * 2008-02-14 2009-08-20 International Business Machines Corporation Computer command and response for determining the state of an i/o operation
US8977793B2 (en) 2008-02-14 2015-03-10 International Business Machines Corporation Determining extended capability of a channel path
US8312189B2 (en) 2008-02-14 2012-11-13 International Business Machines Corporation Processing of data to monitor input/output operations
US8892781B2 (en) 2008-02-14 2014-11-18 International Business Machines Corporation Bi-directional data transfer within a single I/O operation
US8838860B2 (en) 2008-02-14 2014-09-16 International Business Machines Corporation Determining extended capability of a channel path
US8806069B2 (en) 2008-02-14 2014-08-12 International Business Machines Corporation Providing indirect data addressing for a control block at a channel subsystem of an I/O processing system
US8516161B2 (en) 2008-02-14 2013-08-20 International Business Machines Corporation Providing indirect data addressing for a control block at a channel subsystem of an I/O processing system
US8392619B2 (en) 2008-02-14 2013-03-05 International Business Machines Corporation Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous
US8495253B2 (en) 2008-02-14 2013-07-23 International Business Machines Corporation Bi-directional data transfer within a single I/O operation
US8478915B2 (en) 2008-02-14 2013-07-02 International Business Machines Corporation Determining extended capability of a channel path
US20100046177A1 (en) * 2008-06-18 2010-02-25 Lockheed Martin Corporation Enclosure assembly housing at least one electronic board assembly and systems using same
US20100046175A1 (en) * 2008-06-18 2010-02-25 Lockheed Martin Corporation Electronics module, enclosure assembly housing same, and related systems and methods
US8189345B2 (en) 2008-06-18 2012-05-29 Lockheed Martin Corporation Electronics module, enclosure assembly housing same, and related systems and methods
US8773864B2 (en) 2008-06-18 2014-07-08 Lockheed Martin Corporation Enclosure assembly housing at least one electronic board assembly and systems using same
US20100030918A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program chain linked branching
US20100030920A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program chain linking
US20100030919A1 (en) * 2008-07-31 2010-02-04 International Business Machines Corporation Transport control channel program message pairing
US7904606B2 (en) 2008-07-31 2011-03-08 International Business Machines Corporation Transport control channel program chain linked branching
US7937504B2 (en) 2008-07-31 2011-05-03 International Business Machines Corporation Transport control channel program message pairing
US8055807B2 (en) 2008-07-31 2011-11-08 International Business Machines Corporation Transport control channel program chain linking including determining sequence order
US8332542B2 (en) 2009-11-12 2012-12-11 International Business Machines Corporation Communication with input/output system devices
US8972615B2 (en) 2009-11-12 2015-03-03 International Business Machines Corporation Communication with input/output system devices
US8738811B2 (en) 2011-06-01 2014-05-27 International Business Machines Corporation Fibre channel input/output data routing system and method
US8683083B2 (en) 2011-06-01 2014-03-25 International Business Machines Corporation Fibre channel input/output data routing system and method
US8364853B2 (en) 2011-06-01 2013-01-29 International Business Machines Corporation Fibre channel input/output data routing system and method
US8364854B2 (en) 2011-06-01 2013-01-29 International Business Machines Corporation Fibre channel input/output data routing system and method
US8583989B2 (en) 2011-06-01 2013-11-12 International Business Machines Corporation Fibre channel input/output data routing system and method
US8769253B2 (en) 2011-06-01 2014-07-01 International Business Machines Corporation Fibre channel input/output data routing system and method
US8583988B2 (en) 2011-06-01 2013-11-12 International Business Machines Corporation Fibre channel input/output data routing system and method
US8677027B2 (en) 2011-06-01 2014-03-18 International Business Machines Corporation Fibre channel input/output data routing system and method
US9021155B2 (en) 2011-06-01 2015-04-28 International Business Machines Corporation Fibre channel input/output data routing including discarding of data transfer requests in response to error detection
US8683084B2 (en) 2011-06-01 2014-03-25 International Business Machines Corporation Fibre channel input/output data routing system and method
US8549185B2 (en) 2011-06-30 2013-10-01 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8473641B2 (en) 2011-06-30 2013-06-25 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8346978B1 (en) 2011-06-30 2013-01-01 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8631175B2 (en) 2011-06-30 2014-01-14 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8312176B1 (en) 2011-06-30 2012-11-13 International Business Machines Corporation Facilitating transport mode input/output operations between a channel subsystem and input/output devices
US8918542B2 (en) 2013-03-15 2014-12-23 International Business Machines Corporation Facilitating transport mode data transfer between a channel subsystem and input/output devices
US9195394B2 (en) 2013-05-29 2015-11-24 International Business Machines Corporation Transport mode data transfer between a channel subsystem and input/output devices
US8990439B2 (en) 2013-05-29 2015-03-24 International Business Machines Corporation Transport mode data transfer between a channel subsystem and input/output devices
US20140368236A1 (en) * 2013-06-13 2014-12-18 Altera Corporation Multiple-voltage programmable logic fabric
US9048826B2 (en) * 2013-06-13 2015-06-02 Altera Corporation Multiple-voltage programmable logic fabric
US20160103707A1 (en) * 2014-10-10 2016-04-14 Futurewei Technologies, Inc. System and Method for System on a Chip
US10496622B2 (en) 2015-10-09 2019-12-03 Futurewei Technologies, Inc. System and method for real-time data warehouse
US10783160B2 (en) 2015-10-09 2020-09-22 Futurewei Technologies, Inc. System and method for scalable distributed real-time data warehouse
WO2021168145A1 (en) * 2020-02-21 2021-08-26 Pensando Systems Inc. Methods and systems for processing data in a programmable data processing pipeline that includes out-of-pipeline processing
US11494189B2 (en) 2020-02-21 2022-11-08 Pensando Systems Inc. Methods and systems for processing data in a programmable data processing pipeline that includes out-of-pipeline processing

Also Published As

Publication number Publication date
US20080222337A1 (en) 2008-09-11
US20040189686A1 (en) 2004-09-30
US20040170070A1 (en) 2004-09-02
US7373432B2 (en) 2008-05-13
US7987341B2 (en) 2011-07-26
US7418574B2 (en) 2008-08-26
US20040133763A1 (en) 2004-07-08
US8250341B2 (en) 2012-08-21
US7061485B2 (en) 2006-06-13
TWI323855B (en) 2010-04-21
US20040130927A1 (en) 2004-07-08
US7386704B2 (en) 2008-06-10
JP5688432B2 (en) 2015-03-25
US20040181621A1 (en) 2004-09-16
TW200416594A (en) 2004-09-01
JP2013236380A (en) 2013-11-21

Similar Documents

Publication Publication Date Title
US20040136241A1 (en) Pipeline accelerator for improved computing architecture and related system and method
AU2003287320B2 (en) Pipeline accelerator and related system and method
WO2004042562A2 (en) Pipeline accelerator and related system and method
US7487302B2 (en) Service layer architecture for memory access system and method
US8799564B2 (en) Efficiently implementing a plurality of finite state machines
US6658503B1 (en) Parallel transfer size calculation and annulment determination in transfer controller with hub and ports
CN116324741A (en) Method and apparatus for configurable hardware accelerator

Legal Events

Date Code Title Description
AS Assignment

Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAPP, JOHN W.;JACKSON, LARRY;JONES, MARK;AND OTHERS;REEL/FRAME:014614/0367

Effective date: 20031009

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION