WO1997036228A1 - Software pipelining a hyperblock loop - Google Patents

Software pipelining a hyperblock loop

Info

Publication number
WO1997036228A1
WO1997036228A1
Authority
WO
WIPO (PCT)
Prior art keywords
program loop
loop
instruction
instructions
iteration
Prior art date
Application number
PCT/US1997/003999
Other languages
French (fr)
Other versions
WO1997036228A9 (en)
Inventor
Pohua Chang
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to AU23243/97A (AU2324397A)
Priority to EP97915945A (EP0954778B1)
Priority to CA002250924A (CA2250924C)
Priority to DE69722447T (DE69722447T2)
Publication of WO1997036228A1
Publication of WO1997036228A9

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/445Exploiting fine grain parallelism, i.e. parallelism at instruction level
    • G06F8/4452Software pipelining

Definitions

  • This invention relates to software pipelining of instructions for subsequent execution on a processor.
  • this invention relates to software pipelining of a sequence of instructions that has a single control flow entry and one or more control flow exits.
  • a number of hardware and software techniques may be used to improve the execution speed of a software program.
  • the time required to execute the program is dependent upon a number of factors including the number of instructions required to execute the program, the average number of processor cycles required to execute an instruction, and the processor cycle time.
  • Software scheduling of instructions can be used to enhance the program execution rate. This may be accomplished by using software to reorder the instructions so that they can be executed more efficiently. In other words, software scheduling helps to select the order of a sequence of instructions so that they execute correctly in a minimum amount of time within the constraints of the processor resource limitations.
  • basic block scheduling One prior art software scheduling technique is known as basic block scheduling.
  • program instructions are divided into code or instruction sequences called basic blocks.
  • a program may consist of any number of basic blocks.
  • a basic block has the property that if any instruction in the block is executed, then all other instructions within the basic block are executed. Thus, only the first instruction in the basic block can be a branch target or entry point. Similarly, only the last instruction in the basic block can be a branch instruction. This ensures that if any instruction in the code sequence is executed, then all instructions within the code sequence are executed.
  • block scheduling The technique of scheduling instructions or reordering the instructions within a basic block for optimal execution efficiency is called block scheduling.
  • a program loop is a sequence of instructions in which the last instruction is a branch instruction that may branch to the first instruction in the sequence under certain conditions. Thus the program loop might execute iteratively until a certain condition is met.
  • Software pipelining is a technique used to improve processor throughput of a program loop.
  • Software pipelining effectively hides instruction latencies in a pipelined processor by overlapping the execution of different loop iterations of a loop structure in the program code. In other words, before one loop iteration completes, execution of successive iterations of the loop is initiated.
  • the initiation interval is the number of processor cycles between the initiation of a given iteration and the initiation of the next iteration.
  • prior art software pipelining techniques are effective only for simple loop structures.
  • some prior art software scheduling techniques are applicable only to a simple loop structure consisting of one basic block.
  • some prior art software pipelining methods are unable to handle more complicated loop structures such as nested loop structures or loops having more than one basic block.
  • What is needed is a software scheduling technique, such as software pipelining, that is applicable to a broader class of instruction sequences, including instruction sequences having one or more basic blocks.
  • The present invention provides a method of software pipelining a sequence of instructions that has a single control flow entry and one or more control flow exits (i.e., a hyperblock program loop).
  • An iterative software pipelining method promotes instructions of a program loop to previous loop iterations and then reschedules the instructions until either 1) the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval) or 2) the resultant schedule is not an improvement over the previous schedule generated.
  • the method is applicable to a sequence of instructions within a program loop having a single control flow entry and one or more control flow exit points (i.e., a hyperblock loop).
  • a minimum initiation interval of the program loop is computed.
  • instruction level parallelism transformations are applied on the program loop.
  • a single iteration schedule is determined for the program loop.
  • selected instructions are percolated to a prior iteration of the program loop to generate a new instruction order for the program loop. Each of steps two through four is performed as long as a previous length of the program loop exceeds a single iteration schedule length and the single iteration schedule length exceeds the minimum initiation interval.
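The iterative control structure described in the steps above can be sketched as follows. This is a minimal sketch, not the patent's pseudo-C: `transform`, `schedule`, and `promote` are hypothetical stand-ins for the instruction level parallelism transformations, the single iteration scheduler, and the percolation step, and the loop representation is deliberately abstract.

```python
import copy

def software_pipeline(loop, mii, transform, schedule, promote):
    """Iteratively reschedule `loop` until the single iteration schedule
    length (SL) reaches `mii` or stops improving."""
    prior_length = float("inf")     # forces at least one full pass
    best = copy.deepcopy(loop)
    while True:
        transform(loop)             # ILP transformations
        sl = schedule(loop)         # single iteration scheduling; returns SL
        if sl >= prior_length:      # no improvement over the prior pass:
            return best             # keep the previously generated schedule
        best = copy.deepcopy(loop)  # remember this improved schedule
        if sl <= mii:               # optimal schedule reached
            return loop
        promote(loop)               # percolate instructions to a prior iteration
        prior_length = sl           # record SL for the next pass
```

With stub callbacks, the driver stops at the minimum initiation interval when promotion keeps helping, and reverts to the prior schedule when promotion makes things worse.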
  • Figure 1 contrasts instruction handling by a processor with and without hardware pipelining.
  • Figure 2 contrasts scheduling for a minimum iteration time with scheduling for a minimum iteration interval.
  • FIG. 3 illustrates a flowchart for one embodiment of the software pipelining method.
  • Hardware pipelining and software pipelining are different approaches to improving the execution performance of a processor with respect to a sequence of instructions.
  • software pipelining uses software to reorder the sequence of instructions within a program loop in order that the next iteration of the program loop can be executed as soon as possible.
  • Processor performance can generally be increased by hardware pipelining of the instructions to be executed. Instruction execution generally requires 1) fetching the instruction; 2) decoding the instruction and assembling operands; and 3) executing the instruction and writing the results. To pipeline instruction execution, the various steps of instruction execution are performed by independent units called pipeline stages. The steps of different instructions are executed independently in different pipeline stages. Thus one instruction can be fetched while another instruction is being decoded. The result of each pipeline stage is communicated to the next pipeline stage. Hardware pipelining does not reduce the total amount of time to execute a given instruction. Hardware pipelining is able to improve processor performance by reducing the average number of processor cycles required to process an instruction (i.e., pipelining increases processor performance by increasing the number of instructions handled at one time).
  • a scalar processor only executes one instruction at a time. Although some instructions may take more than one cycle to complete, the use of hardware pipelining in a scalar processor may theoretically be able to achieve an average execution rate of one instruction per machine cycle (e.g., in a Reduced Instruction Set Computer (i.e., RISC) architecture). Hardware pipelining permits concurrent execution of instructions in different pipeline stages to achieve this result. Alternatively, a superscalar processor is able to reduce the average number of cycles per instruction by permitting concurrent execution of instructions in the same pipeline stage as well as concurrent execution of instructions in different pipeline stages.
  • RISC Reduced Instruction Set Computer
  • Figure 1 illustrates timing diagrams which contrast instruction handling for a scalar processor without hardware pipelining (110) and a scalar processor with hardware pipelining (120). From timing diagram 110, if the scalar processor without hardware pipelining is working on instruction n, the processor cannot handle the next instruction until instruction n is executed. Thus the time required to fetch, decode, and execute a subsequent instruction must be stacked end-to-end on the time required to fetch, decode, and execute the nth instruction.
  • the scalar processor with hardware pipelining (120) can handle more than one instruction. In a three stage pipeline, this permits the fetching of instruction n+1 while instruction n is decoding.
  • the second stage of the pipeline can be decoding instruction n+1
  • the first stage of the pipeline can be fetching instruction n+2.
  • the scalar pipelined processor has completed execution of instruction n+3 at time 112 while the scalar processor without hardware pipelining is only beginning to process instruction n+2.
  • instruction latencies are effectively "hidden" or reduced by overlapping the fetching, decoding, and execution steps of different instructions.
  • Software pipelining helps to hide instruction latencies in a pipelined processor by overlapping the execution of different loop iterations of a loop structure in the program code. In other words, before one loop iteration completes, execution of successive iterations of the loop is initiated. The initiation interval is the number of processor cycles between the initiation of a given iteration and the initiation of the next iteration. Software pipelining seeks to minimize the initiation interval in order to accelerate the overall execution of the program loop. Software pipelining achieves this by reordering the instructions within the program loop.
  • Block scheduling of a program loop attempts to minimize the execution time of a single iteration of the program loop. Theoretically, this might seem to produce minimum execution time for the program loop. Theoretically, because of the repetitive nature of the program loop structure, small performance improvements in each execution of the loop can be aggregated into greater performance gains. In other words, the small performance gain is reaped for each iteration of the loop. In practice, however, such scheduling does not account for latencies or delays incurred between the initiation of subsequent iterations of the program loop (i.e., the initiation interval).
  • Figure 2 contrasts the processor throughput for a program loop that has been optimized with block scheduling (210) and with software pipelining (220).
  • the block scheduler minimizes the single iteration length of the program loop.
  • the single iteration length, 214, of the block scheduled program loop is less than the single iteration length, 224, of the software pipelined program loop.
  • the n+1 iteration of the block scheduled program loop cannot begin until iteration n is complete so that the time required to complete each iteration must be stacked end-to-end.
  • the minimum initiation interval, 212, in the block scheduled code is greater than the minimum initiation interval, 222, in the software pipelined code.
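The contrast drawn in Figure 2 can be quantified with simple cycle arithmetic (the numbers below are illustrative, not taken from the patent): stacking iterations end-to-end costs the single iteration length once per iteration, while a software-pipelined loop pays the single iteration length once and only the initiation interval for each subsequent iteration.

```python
def block_scheduled_cycles(n, single_iter_len):
    # iterations are stacked end-to-end, as in timing 210
    return n * single_iter_len

def pipelined_cycles(n, single_iter_len, initiation_interval):
    # the first iteration runs to completion; every later iteration starts
    # `initiation_interval` cycles after its predecessor, as in timing 220
    return single_iter_len + (n - 1) * initiation_interval

# Even though the pipelined single iteration is longer (8 vs 6 cycles),
# a small initiation interval wins for a 100-iteration loop:
assert block_scheduled_cycles(100, 6) == 600
assert pipelined_cycles(100, 8, 2) == 206
```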
  • Instruction level parallelism is a measure of the average number of instructions that a superscalar processor might be able to execute at the same time.
  • Machine parallelism is a measure of the ability of the processor to take advantage of the instruction level parallelism. A superscalar processor must have sufficient machine parallelism to take advantage of the instruction level parallelism.
  • Procedural, resource, and data dependencies limit the instruction level parallelism of instruction sequences as described below. Instruction level parallelism transformations might be used to decrease these dependencies in order to increase the instruction level parallelism of the program loop.
  • a procedural dependency results when a processor cannot determine which set of instructions to execute until a branch instruction is executed. In other words, instructions following the branch instruction may or may not be executed depending upon the branch condition. Thus these instructions may not be executed until the branch instruction is executed. Furthermore, execution of a given instruction has a procedural dependency on each branch executed up until that point.
  • Resource conflicts can typically be resolved by delaying execution of one of the instructions until the resource is available or duplicating the resource.
  • Data dependencies can be further classified as true dependencies, antidependencies, and output dependencies. Although often collectively referred to as simply data dependencies, the distinction between the classifications is important for purpose of applying instruction level parallelism transformations.
  • True data dependency is also called flow dependency or write-read dependency. A true data dependency results when a first instruction produces a value that a second instruction uses as an input.
  • the second instruction is data dependent upon the first instruction.
  • the execution of the second instruction must be delayed until all of its input values are available.
  • Antidependencies are often referred to as read-write dependencies.
  • An antidependency results when a second instruction destroys a value used by a first instruction. Thus instructions subsequent to the first and second instructions must be delayed until the first instruction execution is completed and the second instruction execution has begun.
  • true data dependency and antidependency are manifested through storage locations, true data dependencies represent the flow of data and information through the program. True data dependencies cannot be removed.
  • Antidependencies arise, however, because at different points in time storage locations hold different values for different computations. Antidependencies thus directly result from storage conflicts (hence sometimes antidependencies are grouped with resource conflicts).
  • instruction level parallelism transformations can be used to remove antidependencies. For example, antidependencies can be removed by duplicating the storage resource through register renaming.
  • Output dependencies are often also referred to as write-write dependencies. Similar to antidependencies, an output dependency results from storage conflicts because at different points in time, the storage locations hold different values for different computations. As with antidependencies, output dependencies can be removed through instruction level parallelism transformations such as register renaming.
  • the software pipelining algorithm presented here is an iterative method that continually improves the code schedule of a program loop until no further improvement can be made. This is accomplished by applying dependence breaking transformations to the instructions within the body of the program loop. After the dependence breaking transformations have been applied, the code schedule is improved by selecting instructions to be promoted to the previous iteration of the loop thus resulting in a new ordering for the instructions within the body of the hyperblock loop. The dependence breaking transformations are then applied to the new ordering of instructions to generate a new code schedule.
  • the iterative software pipelining method continues to promote instructions to previous loop iterations and then reschedules them until either 1) the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval) or 2) the resultant schedule is not an improvement over the previous schedule generated.
  • This algorithm is applicable to program loops consisting of one or more basic blocks. In other words, the algorithm may be applied to a sequence of instructions that have a single control flow entry and one or more control flow exits (i.e., a hyperblock loop).
  • a "pseudo-C" representation of one embodiment of the software pipelining method is presented in Appendix I.
  • Figure 3 illustrates a flowchart of one embodiment of the iterative software pipelining method described below.
  • In step 310 the minimum initiation interval (MII) is calculated.
  • MII the minimum initiation interval
  • the software pipelining method will attempt to reschedule instructions until the initiation interval of the program loop is equal to MII. This goal may not be achievable depending upon constraints due to insufficient instruction level parallelism or insufficient machine parallelism.
  • the initiation interval may be constrained by insufficient resources. In other words, if there are insufficient processor cycles in an initiation interval to provide the resources needed to complete an iteration, then the initiation interval is limited due to insufficient machine parallelism.
  • the minimum initiation interval is determined by the larger of 1) the minimum number of cycles required to resolve dependencies in one iteration so that the values are ready when needed in a subsequent iteration, and 2) the minimum number of cycles required to provide for the resource utilization of all the instructions in an iteration.
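The two bounds described above are commonly called RecMII (recurrences) and ResMII (resources) in the scheduling literature; the patent does not use those names, and the sketch below is an illustration rather than the patent's computation.

```python
from math import ceil

def res_mii(instruction_count, issue_width):
    # minimum cycles needed just to issue every instruction of one iteration
    return ceil(instruction_count / issue_width)

def rec_mii(recurrences):
    # each dependence cycle of (total latency, iteration distance) requires
    # II * distance >= latency so values are ready for the later iteration
    return max(ceil(latency / distance) for latency, distance in recurrences)

def minimum_initiation_interval(instruction_count, issue_width, recurrences):
    return max(res_mii(instruction_count, issue_width), rec_mii(recurrences))

# e.g. 12 instructions on a 4-issue machine, with one recurrence of
# latency 6 spanning 2 iterations: both bounds give 3 cycles
assert minimum_initiation_interval(12, 4, [(6, 2)]) == 3
```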
  • the computer implemented software pipelining method initializes the variable PRIOR_LENGTH to ensure at least one pass through the loop represented by steps 330-370. This might be accomplished by setting PRIOR_LENGTH to some arbitrarily large value to ensure that the first time through step 350 the single iteration schedule length SL will be less than PRIOR_LENGTH.
  • the current state of the loop being pipelined, L, is recorded. Given that the method iteratively performs reordering of instructions, a subsequent iteration may produce a worse initiation interval. A copy of L is stored as L' to ensure that the scheduling from the previous iteration can be retrieved if loop L as previously scheduled has a better initiation interval than the current instruction schedule of L.
  • instruction level parallelism transformations are performed on program loop L.
  • constant combining, operation combining, software register renaming, and the LHS/RHS transformation are some of the methods used to help increase the instruction level parallelism of the program loop L.
  • Procedural and resource dependencies operate to reduce the instruction level parallelism of the program loop, L. This in turn serves as an impediment to further reordering of instructions from other iterations of program loop L.
  • instruction level parallelism transformations are performed during each iteration of the software pipelining algorithm in order to help maximize the instruction level parallelism of the program loop at each stage. This in turn tends to make more instructions available for reordering because their dependencies have been eliminated.
  • Flow dependencies may be eliminated through the use of constant combining and operation combining.
  • a sequence of code including the following instructions: r0 <- r0 + 8; r1 <- memory[r0]; Before constant combining, this sequence requires two addition operations, each of which adds a different constant to register r0 (an implicit 0 is added to r0 in the memory reference).
  • the sequence might be as follows: r1 <- memory[r0+8]; r0 <- r0 + 8; Now the two constants have been combined into one constant such that only one addition operation must be performed. Note also that constant combining has removed the flow dependency in this code sequence. In other words, the second instruction is not dependent upon the first instruction.
  • each pair of instructions within each basic block of the program loop is considered as a candidate for constant and operation combining.
  • each pair of instructions within the program loop is considered as a candidate for constant and operation combining.
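A toy peephole pass over a hypothetical three-address representation can perform the combining shown in the r0/r1 example above. The tuple IR here is an illustration invented for this sketch, not the patent's internal representation.

```python
# Each op is ("addi", rd, rs, imm) meaning rd <- rs + imm, or
# ("load", rd, rs, imm) meaning rd <- memory[rs + imm].
def combine_constants(code):
    out = list(code)
    i = 0
    while i < len(out) - 1:
        a, b = out[i], out[i + 1]
        if (a[0] == "addi" and a[1] == a[2]           # r <- r + c
                and b[0] == "load" and b[2] == a[1]   # load addressed by r
                and b[1] != a[1]):                    # load must not overwrite r
            # fold c into the load offset and move the load first,
            # removing the flow dependency between the two instructions
            out[i] = ("load", b[1], b[2], b[3] + a[3])
            out[i + 1] = a
            i += 2
        else:
            i += 1
    return out

before = [("addi", "r0", "r0", 8), ("load", "r1", "r0", 0)]
assert combine_constants(before) == [("load", "r1", "r0", 8),
                                     ("addi", "r0", "r0", 8)]
```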
  • Software based register renaming helps to eliminate antidependencies and output dependencies in much the same way as hardware based dynamic register renaming.
  • Software based register renaming is performed by the compiler.
  • the compiler renames the destination registers of instructions that are causing the false dependencies.
  • the source register in subsequent instructions that depend on the value stored in the renamed register are correspondingly renamed.
  • One "pseudo-C" algorithm for accomplishing the register renaming function is provided in Appendix II.
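The renaming idea can be sketched on a toy instruction list. This is a deliberate simplification of the Appendix II algorithm: it ignores the liveness and critical-path criteria described later and simply gives each redefinition a fresh register, rewriting later uses so the anti- and output dependencies on the earlier definition disappear.

```python
def rename_registers(code, free_regs):
    """code: list of (dest, srcs) tuples. When a register is redefined,
    give the new definition a fresh name and rewrite subsequent uses.
    A real implementation would also check liveness and the critical path."""
    defined = set()
    current = {}                            # original name -> current name
    out = []
    for dest, srcs in code:
        srcs = tuple(current.get(s, s) for s in srcs)
        if dest in defined and free_regs:
            current[dest] = free_regs.pop(0)   # redefinition: rename
        else:
            defined.add(dest)
            current.setdefault(dest, dest)
        out.append((current.get(dest, dest), srcs))
    return out

code = [("r1", ("r0",)),      # r1 <- f(r0)
        ("r2", ("r1",)),      # r2 <- g(r1)  reads the first r1
        ("r1", ("r3",)),      # r1 <- h(r3)  output/anti dependency on r1
        ("r4", ("r1",))]      # r4 <- k(r1)  must read the new r1
assert rename_registers(code, ["r8"]) == [
    ("r1", ("r0",)), ("r2", ("r1",)), ("r8", ("r3",)), ("r4", ("r8",))]
```

After renaming, the second definition (now r8) can be scheduled before or alongside the read of the first r1, since the storage conflict is gone.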
  • the LHS/RHS split transformation is another method used to help eliminate antidependencies and output dependencies that may be introduced when an instruction is moved above a branch. Antidependencies or output dependencies that result from moving an instruction above a branch must be removed in order to ensure that the data flow occurs as intended.
  • the instruction sequence is as follows: r0 <- memory[r1]; if (r2) r0 <- memory[r3]; r1 <- memory[r0];
  • if the branch condition is true, the r1 <- memory[r0] instruction cannot be executed until the r0 <- memory[r3] instruction completes.
  • a scheduling algorithm might generate the following code: r0 <- memory[r1]; r0 <- memory[r3]; if (r2) ; r1 <- memory[r0]; If the branch condition is true, r0 will be correct.
  • if the branch condition is false, however, register r0 is not storing the proper value at the time the r1 <- memory[r0] instruction executes.
  • the instruction sequence now appears as r0 <- memory[r1]; r4 <- memory[r3]; if (r2) r0 <- r4; r1 <- memory[r0];
  • the LHS/RHS split transformation permits the r4 <- memory[r3] instruction to be scheduled early to hide its long execution latency.
  • the r1 <- memory[r0] instruction only needs to wait one cycle so that the r0 <- r4 instruction can complete.
  • a pseudo-C implementation of the LHS/RHS splitting transformation is provided in Appendix III.
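The before/after sequences above can be checked for both branch outcomes with a small Python model in which memory is a dictionary and registers are local variables. This is purely illustrative and is not the patent's pseudo-C.

```python
def before_split(mem, r1, r2, r3):
    r0 = mem[r1]          # r0 <- memory[r1]
    if r2:
        r0 = mem[r3]      # long-latency load kept under the predicate
    return mem[r0]        # r1 <- memory[r0]

def after_split(mem, r1, r2, r3):
    r0 = mem[r1]          # r0 <- memory[r1]
    r4 = mem[r3]          # load hoisted above the branch into a new register
    if r2:
        r0 = r4           # only the cheap register move stays predicated
    return mem[r0]        # r1 <- memory[r0]

mem = {1: 5, 3: 9, 5: 50, 9: 90}
for taken in (True, False):
    assert before_split(mem, 1, taken, 3) == after_split(mem, 1, taken, 3)
```

The transformation preserves semantics for both branch outcomes while letting the load issue early.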
  • Single Iteration scheduling is applied to instructions in the program loop body.
  • the scheduler attempts to minimize the length of the code schedule by moving critical instructions to the entry basic block of the program loop.
  • the entry basic block is the basic block associated with the control flow entry for the program loop.
  • Global code scheduling algorithms are well known in the art.
  • the global code scheduling algorithms move instructions across basic block boundaries, but only in a manner which is independent of the outcome of the branch.
  • the global code scheduling algorithm converts sequences of basic blocks into larger basic blocks. This might include using hierarchical reduction to convert branching structures into non-branching structures in order to generate larger basic blocks.
  • the software pipelining method iteratively integrates instruction level parallelism transformations with global code scheduling to help achieve as short an initiation interval as possible.
  • the global code scheduler might be limited to block scheduling.
  • Block scheduling is a special case of global scheduling in which the global code scheduling does not include moving instructions across any block boundaries.
  • a block scheduling technique is used on each block within the program loop (hence "global scheduling"), but instructions are not moved across block boundaries.
  • the software pipelining method iteratively integrates instruction level parallelism transformations with basic block scheduling to help achieve as short an initiation interval as possible.
  • In step 345 the length of the code schedule after single iteration scheduling is determined.
  • the length of the code schedule for a single iteration of the program loop serves as an upper bound for the initiation interval of the program loop.
  • if rescheduling no longer improves the schedule, the iterative scheduling process ends. The program loop as scheduled in the previous cycle is used because further software pipelining would actually increase the initiation interval of the program loop.
  • Step 350 determines whether the current single iteration schedule length (i.e., SL) is less than the single iteration schedule length from the previous cycle (represented by PRIOR_LENGTH). If not, then step 380 ensures that the previous version of program loop L is used because the resultant schedule is not an improvement over the previous schedule generated.
  • Step 355 determines whether the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval). If the minimum initiation interval has been reached, then no further scheduling can improve upon the current schedule of instructions and thus the iterative scheduling process stops.
  • if the initiation interval of program loop L as currently scheduled (i.e., SL) is still greater than the minimum initiation interval, further scheduling may result in a lower initiation interval.
  • the method proceeds with steps 360-370 to produce another version of program loop L through rescheduling.
  • PRIOR_LENGTH is then set to SL before the next scheduling pass.
  • the software pipelined loop may contain instructions from different iterations of the original program loop. This means that before the loop begins executing, the loop pipeline must be initialized or "filled" by executing the instructions belonging to the earliest iterations. These instructions are placed into a loop prologue which is executed before the software pipelined loop begins. The loop prologue is generated at step 385.
  • similarly, when the program loop terminates, instructions from the final iterations have not yet completed. The scheduling method accomplishes this pipeline "draining" process by placing such instructions into a loop epilogue.
  • the loop epilogue is executed once after the program loop terminates.
  • the loop epilogue is generated at step 390.
  • the final product will be a sequence of code including a loop prologue, a software pipelined loop, and a loop epilogue.
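The prologue/kernel/epilogue structure can be illustrated with a two-stage example in ordinary Python, where the load for iteration i+1 is overlapped with the use for iteration i. The overlap is only notional here, since Python executes sequentially; the point is the code shape, not the timing.

```python
def naive(a):
    # original loop: load, then use, one iteration at a time
    total = 0
    for i in range(len(a)):
        x = a[i]          # "load" for iteration i
        total += 2 * x    # "use" for iteration i
    return total

def pipelined(a):
    # software-pipelined form of the same loop (requires len(a) >= 1)
    total = 0
    x = a[0]                      # prologue: fill the pipeline
    for i in range(len(a) - 1):
        nxt = a[i + 1]            # kernel: load for iteration i+1 ...
        total += 2 * x            # ... overlapped with use for iteration i
        x = nxt
    total += 2 * x                # epilogue: drain the final iteration
    return total

data = [3, 1, 4, 1, 5]
assert pipelined(data) == naive(data) == 28
```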
  • the method of software pipelining presented above may be combined with other processor architectural designs in order to enhance processor performance. For example, complex instruction set computer (CISC) architectures attempt to reduce the number of instructions required to execute the program. Reduced instruction set computer (RISC) architectures attempt to improve performance by reducing the number of cycles taken to execute an instruction.
  • This method of software pipelining may be used to produce code for execution on scalar, superscalar, CISC, RISC, pipelined, or superpipelined processor architectures in order to improve processor performance and execution throughput of program loops having a single control flow entry point and one or more control flow exit points.
  • the software pipelining method can be implemented as part of a compiler. Thus the compiler produces software-pipelined code for execution on the processor.
  • the software pipelining method is implemented in a post-pass scheduler.
  • a post-pass scheduler works independently of the compiler.
  • the software pipelining algorithm is typically performed on an object code representation of the program that was produced by a compiler.
  • An "arc" or "edge" represents an identified dependency link between two instructions.
  • Arc A identifies a dependency between a source instruction and a destination instruction.
  • Variable p1 is assigned the source instruction of arc A.
  • Variable p2 is assigned the destination instruction of arc A.
  • the destination instruction is dependent upon the source instruction. In order to apply register renaming, several criteria should be met. First, the program ensures that the second instruction is not flow dependent on any instruction. Next, the second instruction is examined to ensure that it is on the critical completion path.
  • the destination operand i.e., p2.destination
  • the program must ensure that a register is available to implement register renaming. If (1) p2 is not flow dependent on any instruction, and (2) p2 is in the critical path, and (3) p2.destination is live only within the current program loop iteration, and (4) a register is available for renaming, then the p2.destination register is renamed. The register renaming is applied to all subsequent uses of the register identified by p2.destination to ensure that the renaming of the register is properly propagated.
  • LIVE_OUT represents a set of instructions dependent upon the instruction identified by variable p1.

Abstract

An iterative software pipelining method promotes instructions of a program loop to previous loop iterations and then reschedules the instructions until either 1) the resultant schedule is optimal (i.e. the initiation interval is equal to the minimum initiation interval) or 2) the resultant schedule is not an improvement over the previous schedule generated. The method is applicable to a sequence of instructions within a program loop having a single control flow entry and one or more control flow exit points. First, a minimum initiation interval of the program loop is computed (310). Second, instruction level parallelism transformations are applied on the program loop (335). Third, a single iteration schedule is determined for the program loop (340). Fourth, selected instructions are percolated to a prior iteration of the program loop to generate a new instruction order for the program loop (370). Each of steps two through four is performed as long as a previous length of the program loop exceeds a single iteration schedule length (350) and the single iteration schedule length exceeds the minimum initiation interval (355).

Description

SOFTWARE PIPELINING A HYPERBLOCK LOOP
FIELD OF THE INVENTION
This invention relates to software pipelining of instructions for subsequent execution on a processor. In particular, this invention relates to software pipelining of a sequence of instructions that has a single control flow entry and one or more control flow exits.
BACKGROUND OF THE INVENTION
A number of hardware and software techniques may be used to improve the execution speed of a software program. The time required to execute the program is dependent upon a number of factors including the number of instructions required to execute the program, the average number of processor cycles required to execute an instruction, and the processor cycle time.
Software scheduling of instructions can be used to enhance the program execution rate. This may be accomplished by using software to reorder the instructions so that they can be executed more efficiently. In other words, software scheduling helps to select the order of a sequence of instructions so that they execute correctly in a minimum amount of time within the constraints of the processor resource limitations.
One prior art software scheduling technique is known as basic block scheduling. Typically the program instructions are divided into code or instruction sequences called basic blocks. A program may consist of any number of basic blocks. A basic block has the property that if any instruction in the block is executed, then all other instructions within the basic block are executed. Thus, only the first instruction in the basic block can be a branch target or entry point. Similarly, only the last instruction in the basic block can be a branch instruction. This ensures that if any instruction in the code sequence is executed, then all instructions within the code sequence are executed. The technique of scheduling instructions or reordering the instructions within a basic block for optimal execution efficiency is called block scheduling.
The prior art technique of block scheduling suffers from the disadvantage that code optimization for each basic block may not result in overall optimal code. Basic block scheduling does not permit instructions to be moved across block boundaries such as branches. Thus although the execution time of each block may be nearly optimal, the overall program execution speed may not be optimal because the processor must hesitate at each branch (i.e., a basic block boundary) until all instructions preceding and including the branch are in execution.
Another class of prior art software scheduling techniques is specifically designed to improve the execution speed of program loops. A program loop is a sequence of instructions in which the last instruction is a branch instruction that may branch to the first instruction in the sequence under certain conditions. Thus the program loop executes iteratively until some terminating condition is met. Software pipelining is a technique used to improve processor throughput of a program loop.
Software pipelining effectively hides instruction latencies in a pipelined processor by overlapping the execution of different loop iterations of a loop structure in the program code. In other words, before one loop iteration completes, execution of successive iterations of the loop is initiated. The initiation interval is the number of processor cycles between the initiation of a given iteration and the initiation of the next iteration.
One disadvantage of prior art software pipelining techniques is that they are effective only for simple loop structures. In particular, some prior art software scheduling techniques are applicable only to a simple loop structure consisting of one basic block. Thus some prior art software pipelining methods are unable to handle more complicated loop structures such as nested loop structures or loops having more than one basic block.
Another disadvantage of some prior art software pipelining methods is that they only work for special hardware or for a narrow class of program loops.
What is needed is a software scheduling technique such as a software pipelining that is applicable to a broader class of instruction sequences including instruction sequences having one or more basic blocks. In particular, a method of software pipelining a sequence of instructions that have a single control flow entry and one or more control flow exits (i.e., a hyperblock program loop) is needed.
SUMMARY AND OBJECTS OF THE INVENTION
An iterative software pipelining method promotes instructions of a program loop to previous loop iterations and then reschedules the instructions until either 1) the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval) or 2) the resultant schedule is not an improvement over the previous schedule generated. The method is applicable to a sequence of instructions within a program loop having a single control flow entry and one or more control flow exit points (i.e., a hyperblock loop).
First, a minimum initiation interval of the program loop is computed. Second, instruction level parallelism transformations are applied on the program loop. Third, a single iteration schedule is determined for the program loop. Fourth, selected instructions are percolated to a prior iteration of the program loop to generate a new instruction order for the program loop. Each of steps two through four is performed as long as a previous length of the program loop exceeds a single iteration schedule length and the single iteration schedule length exceeds the minimum initiation interval.
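The overall control flow of the four steps above can be sketched as follows. This is a minimal illustration only; the helper functions compute_mii, apply_ilp_transforms, schedule_single_iteration, and percolate are hypothetical placeholders standing in for steps one through four, not names from this disclosure.

```python
INFINITY = float("inf")

def software_pipeline(loop, compute_mii, apply_ilp_transforms,
                      schedule_single_iteration, percolate):
    mii = compute_mii(loop)                          # step one: minimum initiation interval
    prior_length = INFINITY                          # guarantees at least one pass
    while True:
        saved = list(loop)                           # fallback copy of the prior schedule
        loop = apply_ilp_transforms(loop)            # step two
        sl = len(schedule_single_iteration(loop))    # step three: single iteration length
        if prior_length <= sl:                       # no improvement over prior schedule
            return saved                             # revert to the previous version
        if sl <= mii:                                # schedule is optimal
            return loop
        prior_length = sl
        loop = percolate(loop)                       # step four
```

The two exit conditions mirror the method's termination criteria: the loop stops either when the schedule reaches the minimum initiation interval or when rescheduling no longer improves on the previous iteration.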
Other features and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description that follows below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which: Figure 1 contrasts instruction handling by a processor with and without hardware pipelining. Figure 2 contrasts scheduling for a minimum iteration time with scheduling for a minimum iteration interval.
Figure 3 illustrates a flowchart for one embodiment of the software pipelining method.
DETAILED DESCRIPTION
Hardware pipelining and software pipelining are different approaches to improving the execution performance of a processor with respect to a sequence of instructions. Although similarly named, the two are distinct: software pipelining uses software to reorder the sequence of instructions within a program loop so that the next iteration of the program loop can be executed as soon as possible.
Processor performance can generally be increased by hardware pipelining of the instructions to be executed. Instruction execution generally requires 1) fetching the instruction; 2) decoding the instruction and assembling operands; and 3) executing the instruction and writing the results. To pipeline instruction execution, the various steps of instruction execution are performed by independent units called pipeline stages. The steps of different instructions are executed independently in different pipeline stages. Thus one instruction can be fetched while another instruction is being decoded. The result of each pipeline stage is communicated to the next pipeline stage. Hardware pipelining does not reduce the total amount of time to execute a given instruction. Hardware pipelining is able to improve processor performance by reducing the average number of processor cycles required to process an instruction (i.e., pipelining increases processor performance by increasing the number of instructions handled at one time).
For example, consider a scalar processor. A scalar processor only executes one instruction at a time. Although some instructions may take more than one cycle to complete, the use of hardware pipelining in a scalar processor may theoretically be able to achieve an average execution rate of one instruction per machine cycle (e.g., in a Reduced Instruction Set Computer (i.e., RISC) architecture). Hardware pipelining permits concurrent execution of instructions in different pipeline stages to achieve this result. Alternatively, a superscalar processor is able to reduce the average number of cycles per instruction by permitting concurrent execution of instructions in the same pipeline stage as well as concurrent execution of instructions in different pipeline stages.
Thus hardware pipelining focuses on improving the processor while software pipelining focuses on optimizing a given application to be executed on the processor. Therefore software pipelining techniques can typically be used to help improve the performance of hardware pipelined processors regardless of whether such processors are scalar or superscalar processors and regardless of the processor architecture (e.g., RISC, Complex Instruction Set Computer (CISC), Very Long Instruction Word (VLIW) computer, etc.).
Figure 1 illustrates timing diagrams which contrast instruction handling for a scalar processor without hardware pipelining (110) and a scalar processor with hardware pipelining (120). From timing diagram 110, if the scalar processor without hardware pipelining is working on instruction n, the processor cannot handle the next instruction until instruction n is executed. Thus the time required to fetch, decode, and execute a subsequent instruction must be stacked end-to-end on the time required to fetch, decode, and execute the nth instruction. The scalar processor with hardware pipelining (120) can handle more than one instruction. In a three stage pipeline, this permits the fetching of instruction n+1 while instruction n is decoding. While the third stage of the pipeline is executing instruction n, the second stage of the pipeline can be decoding instruction n+1, and the first stage of the pipeline can be fetching instruction n+2. In the example provided in Figure 1, the scalar pipelined processor has completed execution of instruction n+3 at time 112 while the scalar processor without hardware pipelining is only beginning to process instruction n+2. Thus instruction latencies are effectively "hidden" or reduced by overlapping the fetching, decoding, and execution steps of different instructions.
Software pipelining helps to hide instruction latencies in a pipelined processor by overlapping the execution of different loop iterations of a loop structure in the program code. In other words, before one loop iteration completes, execution of successive iterations of the loop is initiated. The initiation interval is the number of processor cycles between the initiation of a given iteration and the initiation of the next iteration. Software pipelining seeks to minimize the initiation interval in order to accelerate the overall execution of the program loop. Software pipelining achieves this by reordering the instructions within the program loop.
Block scheduling of a program loop attempts to minimize the execution time of a single iteration of the program loop. Theoretically, this might seem to produce minimum execution time for the program loop: because of the repetitive nature of the program loop structure, small performance improvements in each execution of the loop can be aggregated into greater performance gains. In other words, the small performance gain is reaped for each iteration of the loop. In practice, however, such scheduling does not account for latencies or delays incurred between the initiation of subsequent iterations of the program loop (i.e., the initiation interval). For example, in order to minimize the single iteration execution time of a program loop containing a single basic block, resources needed for a subsequent iteration of such a program loop might be assigned by the software scheduler. This tends to increase the initiation interval between program loop iterations and therefore results in increased execution time for the program loop. Thus basic block scheduling tends to be in conflict with the goal of minimizing the initiation interval of a program loop.
Figure 2 contrasts the processor throughput for a program loop that has been optimized with block scheduling (210) and with software pipelining (220). The block scheduler minimizes the single iteration length of the program loop. Thus the single iteration length, 214, of the block scheduled program loop is less than the single iteration length, 224, of the software pipelined program loop. The n+1 iteration of the block scheduled program loop cannot begin until iteration n is complete so that the time required to complete each iteration must be stacked end-to-end. The minimum initiation interval, 212, in the block scheduled code is greater than the minimum initiation interval, 222, in the software pipelined code. Thus iteration n+1 of the software pipelined program loop can begin before iteration n has completed so that the single iteration length, 224, is overlapped on each iteration. This results in a reduced total execution time for the program loop as illustrated by Δt, 216. Consider the following instruction sequence for software pipelining:
L10:  r1 <- memory[r0]            ; cycle 1 (3 cycle load latency)
      r2 <- r2 + r1               ; cycle 3 (floating point add latency of 3 cycles)
      r0 <- r0 + 8                ; cycle 2
      branch to L10 if (r0 < r3)  ; cycle 3

Each iteration of this program loop requires 6 cycles in an in-order execution machine. Using software scheduling to reorder the instructions permits the first instruction to be moved to a previous iteration:

L10:  r2 <- r2 + r1               ; cycle 1
      r0 <- r0 + 8                ; cycle 1
      r1 <- memory[r0]            ; cycle 2
      branch to L10 if (r0 < r3)  ; cycle 2

This reordering results in 4 cycles required for each iteration of the program loop in an in-order execution machine.
Reordering of the instructions can introduce data dependencies, procedural dependencies, and resource conflicts that are counterproductive to software scheduling in that they effectively operate to reduce the execution throughput of the processor. Dependencies operate to decrease the instruction level parallelism of the program loop. Instruction level parallelism is a measure of the average number of instructions that a superscalar processor might be able to execute at the same time. Machine parallelism is a measure of the ability of the processor to take advantage of the instruction level parallelism. A superscalar processor must have sufficient machine parallelism to take advantage of the instruction level parallelism.
Procedural, resource, and data dependencies limit the instruction level parallelism of instruction sequences as described below. Instruction level parallelism transformations might be used to decrease these dependencies in order to increase the instruction level parallelism of the program loop. A procedural dependency results when a processor cannot determine which set of instructions to execute until a branch instruction is executed. In other words, instructions following the branch instruction may or may not be executed depending upon the branch condition. Thus these instructions may not be executed until the branch instruction is executed. Furthermore, execution of a given instruction has a procedural dependency on each branch executed up until that point.
A resource conflict arises when two instructions must use the same resource (e.g., a register) at the same time. Resource conflicts can typically be resolved by delaying execution of one of the instructions until the resource is available or by duplicating the resource. Data dependencies can be further classified as true dependencies, antidependencies, and output dependencies. Although often collectively referred to as simply data dependencies, the distinction between the classifications is important for purposes of applying instruction level parallelism transformations. True data dependency (also called flow dependency or write-read dependency) results when the result of the execution of a first instruction is needed for the execution of a second instruction. Thus the second instruction is data dependent upon the first instruction. Typically, the execution of the second instruction must be delayed until all of its input values are available.
Antidependencies are often referred to as read-write dependencies. An antidependency results when a second instruction destroys a value used by a first instruction. Thus instructions subsequent to the first and second instructions must be delayed until the first instruction execution is completed and the second instruction execution has begun. Although both true data dependency and antidependency are manifested through storage locations, true data dependencies represent the flow of data and information through the program. True data dependencies cannot be removed. Antidependencies arise, however, because at different points in time storage locations hold different values for different computations. Antidependencies thus directly result from storage conflicts (hence sometimes antidependencies are grouped with resource conflicts). Typically instruction level parallelism transformations can be used to remove antidependencies. For example, antidependencies can be removed by duplicating the storage resource through register renaming.
Output dependencies are often also referred to as write-write dependencies. Similar to antidependencies, an output dependency results from storage conflicts because at different points in time, the storage locations hold different values for different computations. As with antidependencies, output dependencies can be removed through instruction level parallelism transformations such as register renaming.
In order to overcome many of the dependencies stated above, software scheduling can apply dependence breaking transformations to the instructions in the program loop. These dependence breaking transformations are also referred to as instruction level parallelism (ILP) optimizations or transformations because the effect of the transformation is to make more instructions available for concurrent execution. The software scheduler then schedules the instructions. The software scheduler improves the program loop from an execution standpoint by selecting instructions to be promoted (or "percolated") to the previous iteration. This results in a new execution order for the instructions in the program loop.
The software pipelining algorithm presented here is an iterative method that continually improves the code schedule of a program loop until no further improvement can be made. This is accomplished by applying dependence breaking transformations to the instructions within the body of the program loop. After the dependence breaking transformations have been applied, the code schedule is improved by selecting instructions to be promoted to the previous iteration of the loop thus resulting in a new ordering for the instructions within the body of the hyperblock loop. The dependence breaking transformations are then applied to the new ordering of instructions to generate a new code schedule. The iterative software pipelining method continues to promote instructions to previous loop iterations and then reschedules them until either 1) the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval) or 2) the resultant schedule is not an improvement over the previous schedule generated. This algorithm is applicable to program loops consisting of one or more basic blocks. In other words, the algorithm may be applied to a sequence of instructions that has a single control flow entry and one or more control flow exits (i.e., a hyperblock loop). A "pseudo-C" representation of one embodiment of the software pipelining method is presented in Appendix I. Figure 3 illustrates a flowchart of one embodiment of the iterative software pipelining method described below.

Initialization
In step 310, the minimum initiation interval (MII) is calculated. This serves as the lower bound goal for the software pipelining method. The software pipelining method will attempt to reschedule instructions until the initiation interval of the program loop is equal to MII. This goal may not be achievable depending upon constraints due to insufficient instruction level parallelism or insufficient machine level parallelism.
With respect to insufficient instruction level parallelism, subsequent dependencies on values computed in one iteration serve to limit how far the initiation interval can be reduced. In other words, the initiation interval cannot be so short that a value calculated in one iteration is not completely calculated by the time it is needed in the subsequent iteration. This is an example of instruction level parallelism limitations.
With respect to insufficient machine level parallelism, the initiation interval may be constrained by insufficient resources. In other words, if there are insufficient processor cycles in an initiation interval to provide the resources needed to complete an iteration, then the initiation interval is limited due to insufficient machine parallelism.
Generally the minimum initiation interval is determined by the larger of 1) the minimum number of cycles required to resolve dependencies in one iteration so that the values are ready when needed in a subsequent iteration, and 2) the minimum number of cycles required to provide for the resource utilization of all the instructions in an iteration.
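As a hedged illustration of the two bounds just described, the minimum initiation interval might be computed as the larger of a dependence-constrained bound and a resource-constrained bound. The input encodings here — dependence cycles as (latency, iteration distance) pairs and per-resource operation counts — are assumptions made for this sketch, not part of the disclosed method.

```python
import math

def minimum_initiation_interval(recurrences, op_counts, units_available):
    # Dependence bound: a loop-carried dependence cycle whose total latency
    # is `lat` and which spans `dist` iterations forces II >= lat / dist.
    rec_mii = max((math.ceil(lat / dist) for lat, dist in recurrences),
                  default=1)
    # Resource bound: each resource class must fit all of its operations
    # for one iteration into the II cycles of the steady-state kernel.
    res_mii = max((math.ceil(op_counts[r] / units_available[r])
                   for r in op_counts), default=1)
    return max(rec_mii, res_mii)
```

For example, a loop with a 3-cycle loop-carried dependence chain and enough functional units is dependence-bound at an interval of 3, while a loop issuing five memory operations on a single memory port is resource-bound at 5.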
At step 320, the computer implemented software pipelining method initializes the variable PRIOR_LENGTH to ensure at least one pass through the loop represented by steps 330-370. This might be accomplished by setting PRIOR_LENGTH to some arbitrarily large value to ensure that the first time through step 350 PRIOR_LENGTH will exceed SL.
At step 330, the current state of the loop being pipelined, L, is recorded. Given that the method iteratively performs reordering of instructions, a subsequent iteration may produce a worse initiation interval. A copy of L is stored as L' to ensure that a scheduling from the previous iteration can be retrieved if loop L as previously scheduled has a better initiation interval than the current instruction schedule of L.
Instruction Level Parallelism Transformations

At step 335, instruction level parallelism transformations are performed on program loop L. In one embodiment, constant combining, operation combining, software register renaming, and the LHS/RHS transformation are some of the methods used to help increase the instruction level parallelism of the program loop L.
Procedural and resource dependencies operate to reduce the instruction level parallelism of the program loop, L. This in turn serves as an impediment to further reordering of instructions from other iterations of program loop L. In order to improve the instruction level parallelism of program loop L, instruction level parallelism transformations are performed during each iteration of the software pipelining algorithm in order to help maximize the instruction level parallelism of the program loop at each stage. This in turn tends to make more instructions available for reordering because their dependencies have been eliminated. Thus the integration of instruction level parallelism transformations into the software pipelining algorithm helps to ensure maximal instruction parallelism during each iteration of the software pipelining algorithm and as a result helps to produce a maximally (i.e., optimally) software pipelined program loop (i.e., further pipelining will not improve the execution throughput of the program loop L).

Constant and Operation Combining
Flow dependencies may be eliminated through the use of constant combining and operation combining. For example, consider a sequence of code including the following instructions:

      r0 <- r0 + 8
      r1 <- memory[r0]

Before constant combining, this sequence requires two addition operations, each of which adds a different constant to register r0 (an implicit 0 is added to r0 in the memory reference). After constant combining, the sequence might be as follows:

      r1 <- memory[r0 + 8]
      r0 <- r0 + 8

Now the two constants have been combined into one constant such that only one addition operation must be performed. Note also that constant combining has removed the flow dependency in this code sequence. In other words, the second instruction is no longer dependent upon the first instruction.
Applying constant combining to an earlier example permits reducing the flow dependence between the first and second program loop iterations. Thus the application of constant combining permits the following instruction sequence (from an earlier example):
L10:  r2 <- r2 + r1               ; cycle 1
      r0 <- r0 + 8                ; cycle 1
      r1 <- memory[r0]            ; cycle 2
      branch to L10 if (r0 < r3)  ; cycle 2

to be converted to
L10:  r2 <- r2 + r1               ; cycle 1
      r1 <- memory[r0 + 8]        ; cycle 1
      r0 <- r0 + 8                ; cycle 1
      branch to L10 if (r0 < r3)  ; cycle 2

This new code schedule requires 3 cycles per iteration of the program loop in an in-order execution machine. Operation combining is similar to constant combining except that register values are combined instead of constants.
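The constant-combining rewrite illustrated above can be sketched on a toy intermediate representation. The tuple encoding of instructions below is invented purely for illustration and is not the representation used by the disclosed method.

```python
def combine_constants(add_instr, load_instr):
    """Fold the constant of an add into the offset of a following load.

    add_instr:  (dst, src, const)  representing  dst <- src + const
    load_instr: (dst, base, off)   representing  dst <- memory[base + off]

    When the load's base register is the add's destination, the constants
    are combined and the load is reordered first, removing the flow
    dependency between the two instructions.
    """
    dst, src, const = add_instr
    ldst, base, off = load_instr
    if base == dst and src == dst:
        # The load now addresses memory[src + (off + const)] using the
        # pre-increment value of the register, so it may issue first.
        return [(ldst, src, off + const), add_instr]
    return [add_instr, load_instr]
```

Applied to the example in the text, the pair (r0 <- r0 + 8; r1 <- memory[r0]) becomes (r1 <- memory[r0 + 8]; r0 <- r0 + 8).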
In one embodiment, each pair of instructions within each basic block of the program loop is considered as a candidate for constant and operation combining. In an alternative embodiment, each pair of instructions within the program loop is considered as a candidate for constant and operation combining.

Software Register Renaming
Software based register renaming helps to eliminate antidependencies and output dependencies in much the same way as hardware based dynamic register renaming. Software based register renaming, however, is performed by the compiler. The compiler renames the destination registers of instructions that are causing the false dependencies. The source registers in subsequent instructions that depend on the value stored in the renamed register are correspondingly renamed. One "pseudo-C" algorithm for accomplishing the register renaming function is provided in Appendix II.
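The renaming idea can be sketched on a toy (destination, opcode, sources) instruction form. The encoding is an assumption of this sketch; a fuller pseudo-C algorithm appears in Appendix II.

```python
def rename_registers(instrs):
    """Rename every definition to a fresh virtual register.

    instrs: list of (dst, op, srcs) tuples. Each definition receives a
    fresh name, and later uses of the original register are rewritten to
    read the most recent fresh name, removing anti- and output
    dependencies on the original register without changing data flow.
    """
    latest = {}   # original register -> its most recent fresh name
    out = []
    for n, (dst, op, srcs) in enumerate(instrs, start=1):
        new_srcs = tuple(latest.get(s, s) for s in srcs)
        new_dst = f"v{n}"
        latest[dst] = new_dst
        out.append((new_dst, op, new_srcs))
    return out
```

In the test below, two loads write r0 (an output dependency); after renaming they write distinct registers and the consumer reads the correct one, so the two loads could be reordered or overlapped.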
LHS/RHS Split Transformation
The LHS/RHS split transformation is another method used to help eliminate antidependencies and output dependencies that may be introduced when an instruction is moved above a branch. Antidependencies or output dependencies that result from moving an instruction above a branch must be removed in order to ensure that the data flow occurs as intended.
For example, suppose that before any reordering is performed, the instruction sequence is as follows:

      r0 <- memory[r1]
      if (r2) r0 <- memory[r3]
      r1 <- memory[r0]

Suppose that the branch condition is true. Before any transformation, the r1 <- memory[r0] instruction cannot be executed until the r0 <- memory[r3] instruction completes. Ignoring this dependency, a scheduling algorithm might generate the following code:

      r0 <- memory[r1]
      r0 <- memory[r3]
      if (r2) ;
      r1 <- memory[r0]

If the branch condition is true, r0 will be correct. If, however, the branch condition is false, register r0 is not storing the proper value at the time the r1 <- memory[r0] instruction executes. After the LHS/RHS split transformation, however, the instruction sequence appears as follows:

      r0 <- memory[r1]
      r4 <- memory[r3]
      if (r2) r0 <- r4
      r1 <- memory[r0]

The LHS/RHS split transformation permits the r4 <- memory[r3] instruction to be scheduled early to hide its long execution latency. Thus the r1 <- memory[r0] instruction only needs to wait one cycle so that the r0 <- r4 instruction can complete.
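The split itself is mechanical: a guarded definition is rewritten as an unconditional computation into a fresh register (the right-hand side) followed by a cheap guarded move (the left-hand side). A minimal sketch on a toy guarded-instruction form follows; the tuple encoding and the fresh register name are assumptions of this sketch.

```python
def lhs_rhs_split(guarded_def, fresh_reg):
    """Split `if (guard) dst <- rhs` into an RHS compute and an LHS move.

    guarded_def: (guard, dst, rhs); a guard of None means unconditional.
    Returns two instructions: the long-latency RHS computed into a fresh
    register unconditionally (so the scheduler may hoist it above the
    branch), then a guarded register-to-register move into dst.
    """
    guard, dst, rhs = guarded_def
    return [
        (None, fresh_reg, rhs),   # unguarded: safe to schedule early
        (guard, dst, fresh_reg),  # guarded move completes the definition
    ]
```

Applied to the example above, `if (r2) r0 <- memory[r3]` with fresh register r4 yields `r4 <- memory[r3]` followed by `if (r2) r0 <- r4`.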
A "pseudo-C" implementation of the LHS/RHS splitting transformation is provided in Appendix III.

Single Iteration Scheduling

At step 340, global scheduling is applied to instructions in the program loop body. In one embodiment, the scheduler attempts to minimize the length of the code schedule by moving critical instructions to the entry basic block of the program loop. The entry basic block is the basic block associated with the control flow entry for the program loop.
Global code scheduling algorithms are well known in the art. In one embodiment, the global code scheduling algorithm moves instructions across basic block boundaries, but only in a manner which is independent of the outcome of the branch. In another embodiment, the global code scheduling algorithm converts sequences of basic blocks into larger basic blocks. This might include using hierarchical reduction to convert branching structures into non-branching structures in order to generate larger basic blocks. Thus the software pipelining method iteratively integrates instruction level parallelism transformations with global code scheduling to help achieve as short an initiation interval as possible.
In one embodiment, the global code scheduler might be limited to block scheduling. Block scheduling is a special case of global scheduling in which instructions are not moved across any block boundaries. A block scheduling technique is applied to each block within the program loop (hence the scheduling is still global in scope), but instructions are not moved across block boundaries. Thus the software pipelining method iteratively integrates instruction level parallelism transformations with basic block scheduling to help achieve as short an initiation interval as possible.
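As an illustration of the kind of block scheduler step 340 might employ, the sketch below implements list scheduling of a single basic block on a single-issue machine. List scheduling is a standard technique; the instruction encoding, machine model, and priority heuristic here are assumptions, not details from this disclosure.

```python
def list_schedule(instrs, preds, latency):
    """Single-issue list scheduling of one basic block.

    instrs:  instructions in program order
    preds:   {instr: set of instructions whose results it needs}
    latency: {instr: cycles until its result is available}
    Returns {instr: issue cycle}: each instruction issues no earlier
    than the completion of every predecessor's latency.
    """
    issued = {}
    cycle = 0
    remaining = list(instrs)
    while remaining:
        # An instruction is ready once all of its predecessors have
        # issued and their results are available this cycle.
        ready = [i for i in remaining
                 if all(p in issued and issued[p] + latency[p] <= cycle
                        for p in preds.get(i, ()))]
        if ready:
            pick = ready[0]          # simple priority: program order
            issued[pick] = cycle
            remaining.remove(pick)
        cycle += 1
    return issued
```

Note how an independent instruction ("inc" in the test below) fills a cycle that would otherwise stall waiting on the load latency — the same latency-hiding effect the reordering examples earlier in the text rely on.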
In step 345, the length of the code schedule after single iteration scheduling is determined. The length of the code schedule for a single iteration of the program loop serves as an upper bound for the initiation interval of the program loop. For any given iteration of the software pipelining method, if the current single iteration schedule length exceeds the single iteration schedule length calculated in the previous cycle, then the iterative scheduling process ends. The program loop as scheduled in the previous cycle is used because further software pipelining would actually increase the initiation interval of the program loop.
Step 350 determines whether the single iteration schedule length from the previous cycle (represented by PRIOR_LENGTH) exceeds the current single iteration schedule length (i.e., SL). If it does not — that is, if the resultant schedule is not an improvement over the previous schedule generated — then step 380 ensures that the previous version of program loop L is used.
Step 355 determines whether the resultant schedule is optimal (i.e., the initiation interval is equal to the minimal initiation interval). If the minimum initiation interval has been reached, then no further scheduling can improve upon the current schedule of instructions and thus the iterative scheduling process stops.
If, however, the initiation interval of program loop L as currently scheduled (i.e., SL) is greater than the minimum initiation interval, then further scheduling may result in a lower initiation interval. In this case, the method proceeds with steps 360-370 to produce another version of program loop L through rescheduling.
In step 360, variable PRIOR_LENGTH is set to the initiation interval of program loop L as currently scheduled (i.e., PRIOR_LENGTH = SL). In other words, even though the minimum initiation interval has not yet been reached, this version of program loop L has a lower initiation interval than previously scheduled versions of program loop L. Therefore, the initiation interval of this version of program loop L will be used for comparison with future versions of program loop L.
Selection and Percolation of Instructions
In order to improve the code schedule, instructions are moved ("percolated") to previous iterations of the loop. This permits the instructions to be executed earlier, thus hiding latencies. In order to maintain simplicity of the method, only instructions within the entry basic block of program loop L are considered as candidates for percolation. The following types of instructions within the entry basic block, however, are not candidates for percolation:
1. Branch instructions
2. Store instructions

3. Any instruction executed in the last M clocks of the single iteration schedule, where M is the greater of the longest instruction latency in the loop or the minimum initiation interval.

Candidate instructions within the entry basic block of program loop L will be percolated to a previous iteration, if possible, in an attempt to minimize the initiation interval by hiding the latencies of the percolated instructions.

Iterations
The process of transforming, scheduling, and percolating instructions continues until either 1) there is no improvement in the initiation interval, or 2) the minimum initiation interval is reached. Thus through iterative techniques this method attempts to reduce the initiation interval of program loop L to the lower bound minimum initiation interval until constraints due to insufficient machine or instruction level parallelism limit further reductions.
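The percolation candidate filter described above — branch instructions, store instructions, and instructions issued in the last M cycles are excluded — can be sketched as follows. The instruction record fields ("op" and "cycle") are invented for illustration.

```python
def percolation_candidates(entry_block, schedule_length, longest_latency, mii):
    """Select entry-block instructions eligible for percolation.

    entry_block: list of {"op": ..., "cycle": ...} records for the
    entry basic block's scheduled instructions.
    Excludes branches, stores, and any instruction executed in the
    last M cycles of the single iteration schedule, where M is the
    greater of the longest instruction latency and the minimum
    initiation interval.
    """
    m = max(longest_latency, mii)
    cutoff = schedule_length - m
    return [ins for ins in entry_block
            if ins["op"] not in ("branch", "store") and ins["cycle"] < cutoff]
```

In the test below, with a schedule of 8 cycles and M = 4, only the early load survives the filter: the store and branch are excluded by type, and the late add falls within the final M cycles.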
Unlike the prior art technique of loop unrolling, this software pipelining method is not constrained to scheduling program loop instructions from adjacent program loop iterations. Thus the resulting code permits overlapping of widely separated (i.e., substantially nonadjacent) iterations of the program loop in a hardware pipelined processor.

Loop Prologue and Loop Epilogue
Because software pipelining percolates instructions from different iterations of the program loop, the software pipelined loop may contain instructions from different iterations of the original program loop. This means that before the loop begins executing, the loop pipeline must be initialized or "filled" by executing the earlier iterations of the instructions. These instructions are placed into a loop prologue which is executed before the software pipelined loop begins. The loop prologue is generated at step 385.
When the final iteration is detected, instructions belonging to earlier-initiated iterations may still remain unexecuted. These instructions must be executed in order to complete any unfinished iterations. The scheduling method accomplishes this pipeline "draining" process by placing such instructions into a loop epilogue. The loop epilogue is executed once after the program loop terminates. The loop epilogue is generated at step 390.
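The resulting prologue/kernel/epilogue layout can be illustrated schematically. In this sketch one iteration's instructions are assumed to be pre-grouped into pipeline stages (stage k of iteration i executes k intervals after that iteration starts); real code generation is considerably more involved.

```python
def emit_pipelined_loop(stages):
    """Schematic prologue/kernel/epilogue layout for a pipelined loop.

    stages: the per-stage instruction groups of one iteration.
    Returns (prologue, kernel, epilogue) where the prologue "fills" the
    pipeline by starting one more iteration per interval, the kernel is
    the steady-state body overlapping all stages (each drawn from a
    different iteration), and the epilogue "drains" the iterations
    still in flight when the loop exits.
    """
    n = len(stages)
    prologue = [stages[:t + 1] for t in range(n - 1)]   # fill
    kernel = list(stages)                               # steady state
    epilogue = [stages[t + 1:] for t in range(n - 1)]   # drain
    return prologue, kernel, epilogue
```

For a three-stage iteration (load, add, store), the prologue executes the load, then the load and add, the kernel overlaps all three stages, and the epilogue finishes the trailing add/store and final store.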
Thus the final product will be a sequence of code including a loop prologue, a software pipelined loop, and a loop epilogue.

Implementation

The method of software pipelining presented above may be combined with other processor architectural designs in order to enhance processor performance. For example, complex instruction set computer (CISC) architectures attempt to reduce the number of instructions required to execute the program. Reduced instruction set computer (RISC) architectures attempt to improve performance by reducing the number of cycles taken to execute an instruction. This method of software pipelining may be used to produce code for execution on scalar, superscalar, CISC, RISC, pipelined, or superpipelined processor architectures in order to improve processor performance and execution throughput of program loops having a single control flow entry point and one or more control flow exit points. In one embodiment, the software pipelining method can be implemented as part of a compiler. Thus the compiler produces software-pipelined code for execution on the processor.
In another embodiment, the software pipelining method is implemented in a post-pass scheduler. A post-pass scheduler works independently of the compiler. In a post-pass scheduler, the software pipelining algorithm is typically performed on an object code representation of the program that was produced by a compiler.
In the preceding detailed description, the invention is described with reference to specific exemplary embodiments thereof. Various modifications and changes may be made without departing from the broader spirit and scope of the invention as set forth in the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
APPENDIX I
A "pseudo-C" representation of the software for computer implementation of the software pipelining algorithm is presented below.
Calculate MII;                           (Minimum Initiation Interval)
prior_length = INFINITY;                 (Length of previous schedule)
do {
    L' = L;                              (Record original state of loop L)
    Apply ILP-Transformations on loop L;
    Apply Single-Iteration Scheduling on loop L;
    SL = length of the single iteration schedule;
    if (prior_length <= SL) {
        L = L';
        continue = FALSE;
    } else if ((prior_length > SL) and (SL > MII)) {
        prior_length = SL;
        continue = TRUE;
        Apply Select-Instructions-To-Be-Percolated(L);
        Apply Percolate-Instructions(L);
    } else {
        continue = FALSE;
    }
} while (continue);

Generate loop prologue and epilogue;

APPENDIX II

The following "pseudo-C" code offers one approach to software-based register renaming to help remove antidependencies and output dependencies.
for (each antidependency and output dependence arc A) {
    let p1 = A.src;
    let p2 = A.dst;
    if ((p2 is not flow dependent on any instruction) and
        (p2 is critical) and
        (p2.destination is live only within current loop iteration) and
        (there is a register available for renaming)) {
        rename p2.destination;
        apply copy propagation to all uses of old p2.destination;
    }
}
An "arc" or "edge" represents an identified dependency link between two instructions. Arc A identifies a dependency between a source instruction and a destination instruction. Variable p1 is assigned the source instruction of arc A; variable p2 is assigned the destination instruction of arc A. The destination instruction is dependent upon the source instruction. In order to apply register renaming, several criteria must be met. First, the program ensures that the destination instruction is not flow dependent on any instruction. Next, the destination instruction is examined to ensure that it is on the critical completion path. Next, if the dependent resource identified within the destination instruction is used before it is modified, the resource is determined to be "live"; therefore, the destination operand (i.e., p2.destination) is examined to ensure that it is live only in the current program loop iteration (if at all). Finally, the program must ensure that a register is available to implement register renaming. If (1) p2 is not flow dependent on any instruction, (2) p2 is on the critical path, (3) p2.destination is live only within the current program loop iteration, and (4) a register is available for renaming, then the p2.destination register is renamed. The renaming is applied to all subsequent uses of the register identified by p2.destination to ensure that the change is properly propagated.
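The four criteria can be modeled compactly in C. The struct fields and the `next_free` register parameter below are illustrative assumptions; a real scheduler derives these properties from dataflow analysis, and copy propagation to old uses of the register would follow the rename.

```c
/* Minimal model of the Appendix II renaming rule for the arc's
   destination instruction p2. Field names are assumptions. */
typedef struct {
    int dest_reg;            /* register written by p2               */
    int flow_dep;            /* p2 is flow dependent on some insn    */
    int critical;            /* p2 is on the critical path           */
    int live_out_of_loop;    /* p2.destination live past this iter   */
} Insn;

/* Rename p2's destination if all four criteria hold and a spare
   register (next_free >= 0) is available. Returns 1 on success. */
int try_rename(Insn *p2, int next_free) {
    if (p2->flow_dep) return 0;          /* (1) not flow dependent   */
    if (!p2->critical) return 0;         /* (2) on the critical path */
    if (p2->live_out_of_loop) return 0;  /* (3) live only this iter  */
    if (next_free < 0) return 0;         /* (4) register available   */
    p2->dest_reg = next_free;            /* rename the destination   */
    return 1;
}
```

Renaming succeeds only when every guard passes; failing any one of them leaves the instruction, and the antidependency, untouched.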
APPENDIX III

The following "pseudo-C" code is provided as an example for implementing the LHS/RHS transformation.
for (each antidependency and output dependence arc A) {
    let p1 = A.src;
    let p2 = A.dst;
    if (p1 is not a branch) continue;
    if (p2.destination is not in LIVE_OUT(p1)) continue;
    if ((p2 is not flow dependent on any instruction in the same basic block as where p2 resides) and
        (p2 is critical) and
        (p2.destination is live only within current loop iteration) and
        (there is a register available for renaming)) {
        rename p2.destination;
        apply copy propagation to all uses of old p2.destination;
    }
}
Most of the terminology is similar to that used in Appendix II. LIVE_OUT(p1) denotes the set of values that remain live when control leaves the instruction identified by variable p1; the transformation is applied only when p2.destination is a member of that set.
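The two extra guards that distinguish Appendix III from Appendix II can be folded into a single predicate. All names below are illustrative; they model the checks, not the patent's actual data structures.

```c
/* Illustrative model of the Appendix III applicability test.
   p1 is the arc's source, p2 its destination. */
typedef struct { int is_branch; } Src;
typedef struct {
    int flow_dep_same_block;  /* flow dependent within p2's basic block */
    int critical;             /* on the critical completion path        */
    int live_only_this_iter;  /* destination live only in this iter     */
    int dest_in_live_out;     /* p2.destination in LIVE_OUT(p1)         */
} Dst;

int lhs_rhs_applies(const Src *p1, const Dst *p2, int free_regs) {
    if (!p1->is_branch) return 0;         /* source must be a branch    */
    if (!p2->dest_in_live_out) return 0;  /* dest must be live past p1  */
    return !p2->flow_dep_same_block && p2->critical &&
           p2->live_only_this_iter && free_regs > 0;
}
```

Only arcs whose source is a branch and whose destination value is live past that branch survive the extra guards; the remaining conditions are the Appendix II criteria restated.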

Claims

CLAIMS

What is claimed is:
1. A method of scheduling a sequence of instructions within a program loop having a single control flow entry and at least one control flow exit, comprising the steps of: a) computing a minimum initiation interval of the program loop; b) applying instruction level parallelism transformations on the program loop; c) applying single iteration scheduling on the program loop; d) percolating selected instructions to a prior iteration of the program loop to generate a new instruction order for the program loop; e) repeating steps b) - d) as long as a previous length of the program loop exceeds a single iteration schedule length of the program loop and the single iteration schedule length exceeds the minimum initiation interval.
2. The method of claim 1 further comprising the steps of: f) generating a program loop prologue; and g) generating a program loop epilogue.
3. The method of claim 1 wherein the instruction level parallelism transformations include constant combining.
4. The method of claim 1 wherein the instruction level parallelism transformations include operation combining.
5. The method of claim 1 wherein the instruction level parallelism transformations include software based register renaming.
6. The method of claim 1 wherein the instruction level parallelism transformations include an LHS/RHS transformation.
7. A method of scheduling a sequence of instructions within a program loop having a single control flow entry and at least one control flow exit, comprising the steps of: a) computing a minimum initiation interval of the program loop; b) recording a recorded program loop from the program loop; c) applying instruction level parallelism transformations on the program loop to produce a revised program loop; d) applying single iteration scheduling on the revised program loop; e) determining a single iteration schedule length, SL, of the revised program loop; f) setting the revised program loop to the recorded program loop, if a previous length is not greater than SL; g) performing the following steps if the previous length is greater than SL and SL is greater than the minimum initiation interval: i) equating the previous length to SL; and ii) percolating selected instructions of the revised program loop to a prior iteration of the revised program loop; h) repeating steps b)-g) as long as the previous length is greater than SL and SL is greater than the minimum initiation interval; i) generating a loop prologue for the revised program loop; and j) generating a loop epilogue for the revised program loop.
8. The method of claim 7 wherein the instruction level parallelism transformations include constant combining.
9. The method of claim 7 wherein the instruction level parallelism transformations include operation combining.
10. The method of claim 7 wherein the instruction level parallelism transformations include software based register renaming.
11. The method of claim 7 wherein the instruction level parallelism transformations include an LHS/RHS transformation.
PCT/US1997/003999 1996-03-28 1997-03-13 Software pipelining a hyperblock loop WO1997036228A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU23243/97A AU2324397A (en) 1996-03-28 1997-03-13 Software pipelining a hyperblock loop
EP97915945A EP0954778B1 (en) 1996-03-28 1997-03-13 Software pipelining a hyperblock loop
CA002250924A CA2250924C (en) 1996-03-28 1997-03-13 Software pipelining a hyperblock loop
DE69722447T DE69722447T2 (en) 1996-03-28 1997-03-13 SOFTWARE PIPELINE PROCESSING A HYPERBLOCK LOOP

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/630,858 1996-03-28
US08/630,858 US5920724A (en) 1996-03-28 1996-03-28 Software pipelining a hyperblock loop

Publications (2)

Publication Number Publication Date
WO1997036228A1 true WO1997036228A1 (en) 1997-10-02
WO1997036228A9 WO1997036228A9 (en) 1998-01-29

Family

ID=24528840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/003999 WO1997036228A1 (en) 1996-03-28 1997-03-13 Software pipelining a hyperblock loop

Country Status (7)

Country Link
US (2) US5920724A (en)
EP (1) EP0954778B1 (en)
AU (1) AU2324397A (en)
CA (1) CA2250924C (en)
DE (1) DE69722447T2 (en)
TW (1) TW339430B (en)
WO (1) WO1997036228A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1117031A1 (en) * 2000-01-14 2001-07-18 Texas Instruments France A microprocessor

Families Citing this family (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5920724A (en) * 1996-03-28 1999-07-06 Intel Corporation Software pipelining a hyperblock loop
US6226790B1 (en) * 1997-02-28 2001-05-01 Silicon Graphics, Inc. Method for selecting optimal parameters for compiling source code
US6567976B1 (en) 1997-03-20 2003-05-20 Silicon Graphics, Inc. Method for unrolling two-deep loops with convex bounds and imperfectly nested code, and for unrolling arbitrarily deep nests with constant bounds and imperfectly nested code
US5943501A (en) * 1997-06-27 1999-08-24 Wisconsin Alumni Research Foundation Multiple processor, distributed memory computer with out-of-order processing
US6253373B1 (en) * 1997-10-07 2001-06-26 Hewlett-Packard Company Tracking loop entry and exit points in a compiler
DE69837138T2 (en) * 1997-12-31 2007-08-16 Texas Instruments Inc., Dallas Interruptible multiple execution unit processing during multiple assignment of register using operations
US6341370B1 (en) * 1998-04-24 2002-01-22 Sun Microsystems, Inc. Integration of data prefetching and modulo scheduling using postpass prefetch insertion
US6192515B1 (en) * 1998-07-17 2001-02-20 Intel Corporation Method for software pipelining nested loops
US6820250B2 (en) * 1999-06-07 2004-11-16 Intel Corporation Mechanism for software pipelining loop nests
US6438747B1 (en) * 1999-08-20 2002-08-20 Hewlett-Packard Company Programmatic iteration scheduling for parallel processors
US6507947B1 (en) * 1999-08-20 2003-01-14 Hewlett-Packard Company Programmatic synthesis of processor element arrays
US6594820B1 (en) * 1999-09-28 2003-07-15 Sun Microsystems, Inc. Method and apparatus for testing a process in a computer system
US7725885B1 (en) * 2000-05-09 2010-05-25 Hewlett-Packard Development Company, L.P. Method and apparatus for trace based adaptive run time compiler
JP2003005980A (en) * 2001-06-22 2003-01-10 Matsushita Electric Ind Co Ltd Compile device and compile program
US7113517B2 (en) 2001-09-27 2006-09-26 International Business Machines Corporation Configurable hardware scheduler calendar search algorithm
JP3974063B2 (en) * 2003-03-24 2007-09-12 松下電器産業株式会社 Processor and compiler
CA2433379A1 (en) * 2003-06-25 2004-12-25 Ibm Canada Limited - Ibm Canada Limitee Modulo scheduling of multiple instruction chains
US7321940B1 (en) 2003-06-30 2008-01-22 Cisco Technology, Inc. Iterative architecture for hierarchical scheduling
JP2006338616A (en) 2005-06-06 2006-12-14 Matsushita Electric Ind Co Ltd Compiler device
US8024714B2 (en) 2006-11-17 2011-09-20 Microsoft Corporation Parallelizing sequential frameworks using transactions
US8010550B2 (en) * 2006-11-17 2011-08-30 Microsoft Corporation Parallelizing sequential frameworks using transactions
US7860847B2 (en) * 2006-11-17 2010-12-28 Microsoft Corporation Exception ordering in contention management to support speculative sequential semantics
US8051411B2 (en) * 2007-08-08 2011-11-01 National Tsing Hua University Method for copy propagations for a processor with distributed register file design
US10698859B2 (en) 2009-09-18 2020-06-30 The Board Of Regents Of The University Of Texas System Data multicasting with router replication and target instruction identification in a distributed multi-core processing architecture
KR101523020B1 (en) 2010-06-18 2015-05-26 더 보드 오브 리전츠 오브 더 유니버시티 오브 텍사스 시스템 Combined branch target and predicate prediction
US9063735B2 (en) * 2010-10-19 2015-06-23 Samsung Electronics Co., Ltd. Reconfigurable processor and method for processing loop having memory dependency
US8752036B2 (en) * 2011-10-31 2014-06-10 Oracle International Corporation Throughput-aware software pipelining for highly multi-threaded systems
US9513922B2 (en) 2012-04-20 2016-12-06 Freescale Semiconductor, Inc. Computer system and a method for generating an optimized program code
US9038042B2 (en) * 2012-06-29 2015-05-19 Analog Devices, Inc. Staged loop instructions
US9239712B2 (en) * 2013-03-29 2016-01-19 Intel Corporation Software pipelining at runtime
WO2014193381A1 (en) * 2013-05-30 2014-12-04 Intel Corporation Dynamic optimization of pipelined software
US9792252B2 (en) 2013-05-31 2017-10-17 Microsoft Technology Licensing, Llc Incorporating a spatial array into one or more programmable processor cores
US9740529B1 (en) 2013-12-05 2017-08-22 The Mathworks, Inc. High throughput synchronous resource-constrained scheduling for model-based design
US9329875B2 (en) * 2014-04-28 2016-05-03 International Business Machines Corporation Global entry point and local entry point for callee function
US10346168B2 (en) 2015-06-26 2019-07-09 Microsoft Technology Licensing, Llc Decoupled processor instruction window and operand buffer
US9940136B2 (en) 2015-06-26 2018-04-10 Microsoft Technology Licensing, Llc Reuse of decoded instructions
US9946548B2 (en) 2015-06-26 2018-04-17 Microsoft Technology Licensing, Llc Age-based management of instruction blocks in a processor instruction window
US10409599B2 (en) 2015-06-26 2019-09-10 Microsoft Technology Licensing, Llc Decoding information about a group of instructions including a size of the group of instructions
US10169044B2 (en) 2015-06-26 2019-01-01 Microsoft Technology Licensing, Llc Processing an encoding format field to interpret header information regarding a group of instructions
US10409606B2 (en) 2015-06-26 2019-09-10 Microsoft Technology Licensing, Llc Verifying branch targets
US9720693B2 (en) 2015-06-26 2017-08-01 Microsoft Technology Licensing, Llc Bulk allocation of instruction blocks to a processor instruction window
US9952867B2 (en) 2015-06-26 2018-04-24 Microsoft Technology Licensing, Llc Mapping instruction blocks based on block size
US10191747B2 (en) 2015-06-26 2019-01-29 Microsoft Technology Licensing, Llc Locking operand values for groups of instructions executed atomically
US10175988B2 (en) 2015-06-26 2019-01-08 Microsoft Technology Licensing, Llc Explicit instruction scheduler state information for a processor
US11755484B2 (en) 2015-06-26 2023-09-12 Microsoft Technology Licensing, Llc Instruction block allocation
US10768936B2 (en) 2015-09-19 2020-09-08 Microsoft Technology Licensing, Llc Block-based processor including topology and control registers to indicate resource sharing and size of logical processor
US11016770B2 (en) 2015-09-19 2021-05-25 Microsoft Technology Licensing, Llc Distinct system registers for logical processors
US10719321B2 (en) 2015-09-19 2020-07-21 Microsoft Technology Licensing, Llc Prefetching instruction blocks
US10776115B2 (en) 2015-09-19 2020-09-15 Microsoft Technology Licensing, Llc Debug support for block-based processor
US11681531B2 (en) 2015-09-19 2023-06-20 Microsoft Technology Licensing, Llc Generation and use of memory access instruction order encodings
US10198263B2 (en) 2015-09-19 2019-02-05 Microsoft Technology Licensing, Llc Write nullification
US10095519B2 (en) 2015-09-19 2018-10-09 Microsoft Technology Licensing, Llc Instruction block address register
US10452399B2 (en) 2015-09-19 2019-10-22 Microsoft Technology Licensing, Llc Broadcast channel architectures for block-based processors
US11126433B2 (en) 2015-09-19 2021-09-21 Microsoft Technology Licensing, Llc Block-based processor core composition register
US20170083327A1 (en) 2015-09-19 2017-03-23 Microsoft Technology Licensing, Llc Implicit program order
US10031756B2 (en) 2015-09-19 2018-07-24 Microsoft Technology Licensing, Llc Multi-nullification
US10678544B2 (en) 2015-09-19 2020-06-09 Microsoft Technology Licensing, Llc Initiating instruction block execution using a register access instruction
US10936316B2 (en) 2015-09-19 2021-03-02 Microsoft Technology Licensing, Llc Dense read encoding for dataflow ISA
US10871967B2 (en) 2015-09-19 2020-12-22 Microsoft Technology Licensing, Llc Register read/write ordering
US10180840B2 (en) 2015-09-19 2019-01-15 Microsoft Technology Licensing, Llc Dynamic generation of null instructions
US10061584B2 (en) 2015-09-19 2018-08-28 Microsoft Technology Licensing, Llc Store nullification in the target field
US11106467B2 (en) 2016-04-28 2021-08-31 Microsoft Technology Licensing, Llc Incremental scheduler for out-of-order block ISA processors
KR20180038875A (en) * 2016-10-07 2018-04-17 삼성전자주식회사 Data input/output unit, electronic apparatus and control methods thereof
US11531552B2 (en) 2017-02-06 2022-12-20 Microsoft Technology Licensing, Llc Executing multiple programs simultaneously on a processor core
US10628142B2 (en) * 2017-07-20 2020-04-21 Texas Instruments Incorporated Loop break
US10108538B1 (en) 2017-07-31 2018-10-23 Google Llc Accessing prologue and epilogue data
US10963379B2 (en) 2018-01-30 2021-03-30 Microsoft Technology Licensing, Llc Coupling wide memory interface to wide write back paths
US10824429B2 (en) 2018-09-19 2020-11-03 Microsoft Technology Licensing, Llc Commit logic and precise exceptions in explicit dataflow graph execution architectures
US11714620B1 (en) 2022-01-14 2023-08-01 Triad National Security, Llc Decoupling loop dependencies using buffers to enable pipelining of loops

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265253A (en) * 1990-08-22 1993-11-23 Nec Corporation Method of unrolling/optimizing repetitive loop
US5303357A (en) * 1991-04-05 1994-04-12 Kabushiki Kaisha Toshiba Loop optimization system
US5317743A (en) * 1990-07-18 1994-05-31 Kabushiki Kaisha Toshiba System for compiling iterated loops based on the possibility of parallel execution
US5367651A (en) * 1992-11-30 1994-11-22 Intel Corporation Integrated register allocation, instruction scheduling, instruction reduction and loop unrolling
US5375238A (en) * 1990-11-20 1994-12-20 Nec Corporation Nesting management mechanism for use in loop control system
US5386562A (en) * 1992-05-13 1995-01-31 Mips Computer Systems, Inc. Circular scheduling method and apparatus for executing computer programs by moving independent instructions out of a loop

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5920724A (en) * 1996-03-28 1999-07-06 Intel Corporation Software pipelining a hyperblock loop

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5317743A (en) * 1990-07-18 1994-05-31 Kabushiki Kaisha Toshiba System for compiling iterated loops based on the possibility of parallel execution
US5265253A (en) * 1990-08-22 1993-11-23 Nec Corporation Method of unrolling/optimizing repetitive loop
US5375238A (en) * 1990-11-20 1994-12-20 Nec Corporation Nesting management mechanism for use in loop control system
US5303357A (en) * 1991-04-05 1994-04-12 Kabushiki Kaisha Toshiba Loop optimization system
US5386562A (en) * 1992-05-13 1995-01-31 Mips Computer Systems, Inc. Circular scheduling method and apparatus for executing computer programs by moving independent instructions out of a loop
US5367651A (en) * 1992-11-30 1994-11-22 Intel Corporation Integrated register allocation, instruction scheduling, instruction reduction and loop unrolling

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
JOURNAL OF SUPERCOMPUTING, 1991, TIRUMALAI et al., "Parallelizing of WHILE Loops on Pipelined Architectures", pages 119-136. *
PROCEEDINGS OF SUPERCOMPUTING '90, November 1990, TIRUMALAI et al., "Parallelization of Loops with Exits on Pipelined Architectures", pages 200-212. *
PROCEEDINGS OF THE ACM SIGPLAN '92 CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION, SIGPLAN NOTICES, July 1992, Vol. 27, No. 7, RAU et al., "Register Allocation for Software Pipelined Loops", pages 283-299. *
PROCEEDINGS OF THE SIGPLAN '88 CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION, SIGPLAN NOTICES, July 1988, Vol. 23, No. 7, LAM M., "Software Pipelining: An Effective Scheduling Technique for VLIW Machines", pages 318-328. *
RAU et al., "Code Generation Schema for Modulo Scheduled Loops", 1992, pages 158-169. *
SUPERCOMPUTING, 1992, MAHLKE et al., "Compiler Code Transformations for Superscalar-Based High-Performance Systems", pages 808-817. *
SUPERCOMPUTING, 1992, RAMANUJAM, "Non-Unimodular Transformations of Nested Loops", pages 214-223. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1117031A1 (en) * 2000-01-14 2001-07-18 Texas Instruments France A microprocessor
US6795930B1 (en) 2000-01-14 2004-09-21 Texas Instruments Incorporated Microprocessor with selected partitions disabled during block repeat

Also Published As

Publication number Publication date
US6016399A (en) 2000-01-18
CA2250924C (en) 2001-01-30
DE69722447D1 (en) 2003-07-03
DE69722447T2 (en) 2004-01-15
AU2324397A (en) 1997-10-17
EP0954778A1 (en) 1999-11-10
CA2250924A1 (en) 1997-10-02
EP0954778B1 (en) 2003-05-28
US5920724A (en) 1999-07-06
TW339430B (en) 1998-09-01
EP0954778A4 (en) 2002-04-24

Similar Documents

Publication Publication Date Title
US5920724A (en) Software pipelining a hyperblock loop
WO1997036228A9 (en) Software pipelining a hyperblock loop
US11340908B2 (en) Reducing data hazards in pipelined processors to provide high processor utilization
US6044222A (en) System, method, and program product for loop instruction scheduling hardware lookahead
US6754893B2 (en) Method for collapsing the prolog and epilog of software pipelined loops
Rau Iterative modulo scheduling: An algorithm for software pipelining loops
US5887174A (en) System, method, and program product for instruction scheduling in the presence of hardware lookahead accomplished by the rescheduling of idle slots
US5901308A (en) Software mechanism for reducing exceptions generated by speculatively scheduled instructions
US5884060A (en) Processor which performs dynamic instruction scheduling at time of execution within a single clock cycle
US5692169A (en) Method and system for deferring exceptions generated during speculative execution
US9038042B2 (en) Staged loop instructions
US5901318A (en) Method and system for optimizing code
US7565658B2 (en) Hidden job start preparation in an instruction-parallel processor system
US6564372B1 (en) Critical path optimization-unzipping
US20050283772A1 (en) Determination of loop unrolling factor for software loops
JPH06290057A (en) Loop optimizing method
Grossman Compiler and architectural techniques for improving the effectiveness of VLIW compilation
Dupont de Dinechin A unified software pipeline construction scheme for modulo scheduled loops
Achutharaman et al. Exploiting Java-ILP on a simultaneous multi-trace instruction issue (SMTI) processor
JPH0756735A (en) Parallel arithmetic and logic unit
Rau et al. History, Overview and Perspective
Malik et al. Execution dependencies and their resolution in fine grain parallel machines
de Dinechin A unified software pipeline construction scheme for modulo scheduled loops
Bird Data Dependencies in Decoupled, Pipelined Loops
US20040143821A1 (en) Method and structure for converting data speculation to control speculation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ CZ DE DE DK DK EE EE ES FI FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
COP Corrected version of pamphlet

Free format text: PAGES 1/3-3/3, DRAWINGS, REPLACED BY NEW PAGES BEARING THE SAME NUMBER; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

ENP Entry into the national phase

Ref document number: 2250924

Country of ref document: CA

Ref country code: CA

Ref document number: 2250924

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1997915945

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97534439

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1997915945

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1997915945

Country of ref document: EP