US20060010261A1 - Highly concurrent DMA controller with programmable DMA channels - Google Patents

Highly concurrent DMA controller with programmable DMA channels

Info

Publication number
US20060010261A1
Authority
US
United States
Prior art keywords
channel
data transfer
controller
computer system
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/136,164
Inventor
Thomas Bonola
Robert Herrington
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/136,164 priority Critical patent/US20060010261A1/en
Publication of US20060010261A1 publication Critical patent/US20060010261A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal

Definitions

  • the invention relates generally to computer systems, and more particularly, but not by way of limitation, to computer systems and computer system components for conducting data transfer transactions without intimate microprocessor involvement.
  • Computer systems have become an integral part of many homes and businesses. The more popular that computers become, the more demands that are placed upon them. For example, computer systems have become highly integrated into most businesses. These businesses depend upon their computer systems to be accurate, fast and reliable. Down time caused by system crashes and slow response times results in expensive losses, including losses in employee productivity. Accordingly, computer system designers must design components and entire systems with speed and reliability in mind.
  • microprocessor is instrumental to the overall performance of the computer system.
  • improvements in system performance are the direct result of improvements in the microprocessor. That is, the computer system performance is increased because of improvements in the microprocessor that, for example, allow it to handle more instructions in the same time period.
  • improvements in system performance are the direct result of relieving the microprocessor of certain time-intensive duties. These certain time-intensive duties are often shifted to other circuitry.
  • DMA controllers For example, instead of requiring the microprocessor to handle time-intensive data transfers, computer system designers have assigned certain data transfer control to specialized circuitry known as direct memory access (DMA) controllers. Generally, DMA controllers only need to know the base location of where data is to be moved from, the address to where the data should go, and the amount of data to be moved. Once the DMA controller knows this information, it will move the data without intimate microprocessor intervention. Without a DMA controller, the microprocessor itself would be forced to control the data transfer—thereby resulting in substantially decreased system performance.
  • DMA direct memory access
  • Original DMA controllers are generally inadequate for modern computer systems and have been essentially abandoned. Instead of the original, dedicated DMA controllers, modern computer systems often use bus masters to perform DMA type transactions. For clarity, original, dedicated DMA controllers and bus masters that perform DMA type transactions will be referred to, collectively, as “DMA controllers.”
  • DMA controllers Existing DMA controllers, however, are plagued by problems and limitations. Both present and future computer systems are in need of a next generation DMA controller.
  • DMA controllers are designed and manufactured by a variety of vendors—each vendor having its own design features. In terms of DMA transactions, each vendor has its own way of rendering DMA transactions. Each DMA controller, accordingly, must have its own driver running in kernel mode. This multitude of drivers adds unneeded complexity to the computer system and causes DMA capabilities to be under-utilized.
  • DMA controllers require a driver running at kernel mode, which is a higher privilege (security) level than is, for example, the application mode used by user programs. Because DMA controllers require a driver running at kernel mode, a user application, which should not have access to the higher privilege level, cannot easily access existing DMA drivers and take advantage of DMA capabilities. In other words, a user application can generally only take full advantage of DMA capabilities with the help of the operating system (OS).
  • OS must transition the application from a lesser privilege level into a higher privilege level.
  • the OS for example, must transition the application from the untrusted domain where user applications operate to a trusted domain where drivers operate.
  • the integrity of the entire computer system is jeopardized.
  • the user application if given access to the trusted domain, could destroy or alter the OS, destroy data, crash the system, etc. Accordingly, a well designed computer system will strictly limit a user application's access to the trusted domain.
  • the access of a user application to the trusted domain is limited, however, the ability of the user application to utilize the features of existing DMA controllers is also limited—thereby forcing the microprocessor to perform data transfers that are better performed by a DMA controller.
  • a next generation DMA controller is needed. More particularly, a DMA controller is needed that permits user applications to render DMA transactions without compromising computer system integrity. Further, a DMA controller is needed that allows user applications to render DMA transactions without intimate OS intervention. Additionally, a DMA controller is needed that allows access by both host processors and non-host processor devices to dynamically acquire and release DMA channels.
  • the present invention provides a method and apparatus for efficiently performing data transfers, such as DMA transactions, for various types of clients without jeopardizing system integrity.
  • the present invention includes a computer system comprising a mass storage device; and a first data transfer controller for controlling data transfers involving the mass storage device, wherein the first data transfer controller is operable in a channel free state and a channel unavailable state.
  • This embodiment further includes a circuit device connected to the first data transfer controller, the circuit device is at least for requesting a particular data transfer to be controlled by the first data transfer controller; and a second data transfer controller connected to the circuit device, the second data transfer controller for controlling data transfers and for controlling the particular data transfer responsive, at least, to the circuit device receiving an indication that the first data transfer controller is in the channel unavailable state.
  • FIG. 1 illustrates a highly concurrent direct memory access (HCDMA) controller in accordance with the principles of the present invention
  • FIG. 2 illustrates in more detail the HCDMA controller as similarly shown in FIG. 1 ;
  • FIG. 3 illustrates a computer system including multiple, chained HCDMA controllers
  • FIG. 4 is a flow chart illustrating HCDMA operation from a client-side perspective
  • FIG. 5 illustrates an exemplary I/O memory map for an HCDMA controller
  • FIG. 6 illustrates a descriptor used to program HCDMA transactions
  • FIG. 7 represents the operation of the HCDMA in client queuing mode.
  • the HCDMA controller 100 includes data lines: input 105 and output 110 . These data lines define side 1 of the HCDMA controller 100 . Additionally, the HCDMA controller 100 includes data lines: input 120 and output 115 . These data lines define side 2 of the HCDMA controller 100 .
  • the data lines on side 1 of the HCDMA are connected to a data bus 125 .
  • the data lines of side 2 of the HCDMA are connected to a data bus 130 .
  • the data lines of side 1 and side 2 are shown to not be multiplexed, one skilled in the art can understand that multiplexing circuitry can be inserted intermediate the HCDMA controller 100 and either data bus 125 or data bus 130 . Accordingly, the HCDMA controller 100 is compatible with any type of bus.
  • HCDMA controller 200 there is illustrated a more detailed depiction of a HCDMA controller 200 .
  • the inputs 202 and 208 generally correspond to the inputs 105 and 120 of FIG. 1
  • the outputs 204 and 206 generally correspond to the outputs 110 and 115 of FIG. 1
  • HCDMA controller 200 includes multiplexers 220 and 218 that are used to control the I/O to and from the inputs 202 and 208 and the outputs 204 and 206 .
  • the HCDMA controller 200 includes a control block 210 and multiple channel blocks such as channel blocks 212 , 214 and 216 .
  • HCDMA controller 200 is illustrated to include only three channel blocks 212 , 214 , 216 , one skilled in the art can appreciate that any number of channel blocks (including only one) can be incorporated into the HCDMA controller 200 .
  • the number of channel blocks in any particular HCDMA controller is a function of the available silicon and the number of DMA channels needed for an envisioned implementation.
  • Each channel block of the HCDMA controller 200 supports one DMA channel and each channel block is independently programmable. HCDMA controller 200 , accordingly, supports three DMA channels and each of these channels can be simultaneously acquired, held, programmed and used by different clients such as host software and bus master devices.
  • the acquiring client can program the DMA channel to execute DMA transactions. Until that client concludes all of its DMA transactions and releases the DMA channel, no other client can use that particular DMA channel. Other clients must acquire a different DMA channel from a different channel block.
  • a client To acquire a DMA channel from a HCDMA controller, such as HCDMA controller 200 , a client must communicate with the control block 210 . For example, the client can request a free channel block from the control block 210 , i.e., the client can request a DMA channel not being used by another client. If the HCDMA controller 200 has a free DMA channel, the HCDMA controller 200 will indicate this to the requesting client. If, on the other hand, the HCDMA controller 200 does not have a free DMA channel, this fact will be communicated to the client and the client will either wait for a DMA channel to become free or seek a DMA channel from another HCDMA controller.
  • control block 210 When the client completes all of its DMA transactions, it should signal the control block that the DMA channel is no longer needed. After being signaled by the client, the control block can release the DMA channel. That control block and associated DMA channel can then be acquired by other clients. As can be appreciated by one skilled in the art, by acquiring and releasing DMA channels through a control block such as control block 210 , multiple clients can simultaneously acquire and release DMA channels without operating system (OS) intervention.
  • OS operating system
  • FIG. 3 illustrates an exemplary computer system 300 that includes multiple, chained HCDMA controllers.
  • the computer system 300 includes processors 302 connected with a memory controller 308 by a bus 304 .
  • the memory controller 308 controls all transactions with memory devices 306 .
  • These memory devices 306 can include single storage units, electronic memory, distributed memory systems, RAID systems, etc.
  • the memory controller 308 controls all transactions between memory devices 306 and device 318 , device 326 , bridge 314 and bridge 322 .
  • Devices 318 and 326 can be virtually any computer component, including bus masters, ASICs, I/O devices, bridges, etc.
  • the memory controller 308 controls all transactions between the memory devices 306 and the processors 302 .
  • processors 302 are illustrated as including four processors, one skilled in the art can understand that any number of processors, including one, can be used in the computer system 300 .
  • I/O bridge 314 , I/O bridge 322 , device 318 and device 326 include HCDMA controllers 316 , 324 , 320 and 328 , respectively.
  • the memory controller 308 includes HCDMA controllers 310 and 312 . It is not necessary, however, that each of the I/O bridges, devices, and the memory controller include a HCDMA controller.
  • FIG. 3 is only exemplary and that components and/or HCDMA controllers can be added or removed without altering the basic operation of the invention.
  • the arrows pointing from one HCDMA controller to another HCDMA controller indicate the chaining capabilities of HCDMA controllers constructed in accordance with the principles of the present invention.
  • arrow 330 indicates that HCDMA controller 320 is chained to HCDMA controller 316 and arrow 332 indicates that HCDMA controller 316 is chained to HCDMA controller 312 .
  • the DMA channels of HCDMA controllers 320 , 316 and 312 can be pooled together. That is, if HCDMA controller 320 , for example, has no DMA channels available for acquisition, a client can, instead, acquire a DMA channel from HCDMA controller 316 , which is chained to HCDMA controller 320 .
  • a client can attempt to acquire a DMA channel from HCDMA controller 320 . If the HCDMA controller 320 has a free DMA channel as indicated by its control block (not shown), the HCDMA controller 320 returns the address of that free DMA channel to the client. The client then uses that address to set up the associated channel block such as channel block 212 in FIG. 2 . If, on the other hand, the HCDMA controller 320 has no free DMA channels, the HCDMA controller returns the address of the chained HCDMA controller 316 . The client, using the returned address of HCDMA controller 316 , requests a DMA channel from this new HCDMA controller 316 .
  • HCDMA controller 316 If HCDMA controller 316 has a free DMA channel, it returns the address of that DMA channel. Otherwise, the HCDMA controller 316 returns the address of chained HCDMA controller 312 . As can be appreciated, the client can continue to “walk” the chain until it finds a HCDMA controller with a free DMA channel. Further, the client can “walk” the chain of HCDMA controllers without the intervention of the OS. Accordingly, non-host based entities, such as bus masters, can acquire DMA channels.
  • the client requests a DMA channel from a particular HCDMA controller (step 405 ).
  • the client for example, can access a channel pool list stored in the control block 210 of HCDMA controller 200 (shown in FIG. 2 ).
  • the channel pool list can store the addresses of the free DMA channels associated with HCDMA controller 200 .
  • HCDMA controller 200 has no free DMA channels.
  • step 410 If, in step 410 it is determined that the HCDMA controller has no free DMA channels, branch 415 is followed and the HCDMA controller returns the address of the next chained HCDMA controller (step 420 ).
  • the address of the chained HCDMA controller can be stored in the control block 210 of HCDMA controller 200 (shown in FIG. 2 ).
  • step 421 If a next chained channel controller exists (step 421 ) then branch 422 is followed and the client requests a free DMA channel from the next chained HCDMA controller (step 405 ). Otherwise, branch 423 is followed and the client is notified that no channel resources are presently available (step 424 ).
  • branch 425 is followed from decision block 410 and the HCDMA controller returns and the client receives (step 430 ) the address of the free DMA channel. At this point, the client has successfully acquired a DMA channel.
  • Table 1 includes exemplary control instructions that can be used to set up a DMA channel for control.
  • the “bits” column in Table 1 indicates the offset for each instruction embedded in an exemplary 64 bit instruction.
  • the size of the instruction, the offsets, and the individual instructions as shown in Table 1 are not meant to be limiting. Other individual instructions, offsets and bit lengths may be employed in setting up the DMA channel.
  • the client can configure it (step 440 ).
  • the client is providing the HCDMA controller, for example, with address information for data structures used by the DMA channel in DMA transactions.
  • HCDMA control block such as control block 210
  • HCDMA channel block such as channel block 212
  • clients can simply perform reads and writes to and from the registers of HCDMA controllers by performing reads and writes to and from I/O memory.
  • Clients, accordingly, do not need to know how to access the HCDMA controller registers directly.
  • DMA channels clients can write the data structure addresses directly to I/O memory.
  • FIG. 5 illustrates an exemplary I/O memory map of an HCDMA controller.
  • Memory block 500 represents contiguous I/O memory.
  • the memory block 500 includes a control block portion 502 and channel block portions 504 , 506 and 508 , each of which is independently programmable.
  • the control block portion 502 can be mapped to memory in the HCDMA controller's control block such as that in control block 210 (shown in FIG. 2 ).
  • the channel block portions 504 , 506 and 508 can be mapped to memory in channel blocks such as channel blocks 212 , 214 and 216 .
  • the memory block 500 can also include an optional adapter memory portion 510 that would be mapped to corresponding memory in the HCDMA controller.
  • Channel block portion 520 is a more detailed depiction of the I/O memory channel block portion 508 .
  • Channel block portion 520 would be similar for channel block portions 504 and 506 .
  • control block portion 530 is a more detailed depiction of I/O memory control block portion 502 .
  • I/O memory map can be configured in a variety of ways and that FIG. 5 merely illustrates one of those many ways.
  • the client requests that a DMA transaction be performed (step 445 ).
  • the client makes this request by writing the necessary data to a descriptor such as descriptor 605 shown in FIG. 6 .
  • Descriptor 605 is a 64-byte aligned memory region consisting of 8 quad-words. The first 6 quad-words represent the transaction portion of the descriptor and the last 2 quad-words represent the status portion of the descriptor.
  • the client could write to the descriptor the length of the data being transferred (field 610 ), the source of the data (field 620 ) and the destination of the data (field 625 ).
  • the client can also provide a response address (field 630 ) and response data (field 635 ).
  • the HCDMA controller can write the response data to the response address.
  • the client can batch multiple DMA transactions together by providing a link to another descriptor in the descriptor link field 640 .
  • particular control instructions for the DMA transaction can be written to a control field 645 .
  • Table 2 contains exemplary control instructions. As with Table 1, neither the particular instructions nor the offsets are meant to be limiting.
  • RIO: 1 = Destination address is IO region.
  • RVA: 1 = Response address is virtual and requires translation.
  • step 450 if the client desires further DMA transactions (step 450 ), branch 455 is followed and the client requests these additional DMA transactions (step 445 ). When no other DMA transactions are requested, branch 460 is followed and the client releases the DMA channel (step 470 ). This DMA channel is now free and can be acquired by other clients.
  • individual DMA channels can be operated in different modes.
  • the operational mode of a particular DMA channel can be determined during DMA channel set up (step 435 of FIG. 4 ).
  • a channel block such as channel block 212 shown in FIG. 2
  • the client can set up the channel block in one of these modes by setting the appropriate bits, e.g., bits 11 - 13 as shown in Table 1, in the set up instruction issued in step 435 .
  • FIG. 7 illustrates an example of the HCDMA controller's client queuing mode.
  • FIG. 7 includes a channel block 702 , which can correspond to one of channel blocks 212 , 214 and 216 shown in FIG. 2 .
  • Channel block 702 includes storage locations 704 and 708 . These storage locations can be used to store the data written to the addresses of the I/O memory's channel block portion 520 as shown in FIG. 5 .
  • storage locations 704 and 708 can be used to store a transaction base and a status base (shown in the channel block portion 520 ), respectively.
  • the stored transaction base can be used to point to a location in memory where descriptors (such as descriptor 605 shown in FIG. 6 ), which include the DMA transaction information, are stored.
  • the status base could point to descriptor block 714 .
  • Each transaction base can be unique for each channel block.
  • the status base is used to point to a location in memory where status information about the DMA transaction should be rendered.
  • the status base stored in storage location 708 can point to the same data structure, e.g., descriptor block 714 , as does the transaction base stored in storage location 704 or the status base can point to a different data structure. In other words, the status information need not be rendered to the same data structure that provided the instructions for the DMA transaction.
  • three queues 710 , 712 and 714 are associated with the channel block 702 .
  • the location (address in memory) of these queues can be established when the DMA channel is originally configured, as in step 440 of FIG. 4 .
  • the location of these queues is written to the appropriate portions of I/O memory's channel block portion 520 shown in FIG. 5 .
  • These queues can be configured as hardware FIFOs; FIFOs in host memory, FIFOs in client memory, etc.
  • queue 710 is an inbound queue and queues 712 and 714 are outbound queues.
  • this queue configuration can be adjusted to fit particular design requirements.
  • the three queue configuration provides the HCDMA controller with substantial versatility.
  • descriptor block 714 with three descriptor storage locations labeled A, B and C.
  • descriptor block 714 can include any number of descriptor storage locations.
  • each descriptor storage location can be 64 bytes of physically contiguous memory.
  • each descriptor storage location can be divided into two portions: a transaction portion 716 corresponding to the first six quad-words of a descriptor and a status portion 718 corresponding to the last two quad-words of a descriptor.
  • the status portion 718 could store the status and context fields of descriptor 605 in FIG. 6 .
  • outbound queue 712 stores the descriptor labels, e.g., “A” and “B”, for all free descriptors. That is, outbound queue 712 stores the descriptor labels corresponding to descriptor storage locations not already programmed by other DMA transactions (of the same client).
  • the client acquires a free descriptor by obtaining the identity of a free descriptor from the outbound queue 712 .
  • the client can program (write) that descriptor.
  • the client could then write the necessary instructions (those fields shown in FIG. 6 ) to descriptor “B” in descriptor block 714 .
  • the client After programming a descriptor with the DMA transaction instruction, the client places the descriptor label, such as “B”, on the inbound queue 710 . In the client queuing mode, the client is responsible for synchronizing access to all of the queues, including inbound queue 710 .
  • the client signals the channel block 702 that a new descriptor label has been inserted onto the inbound queue 710 .
  • the channel block 702 then pulls the descriptor label from the inbound queue 710 . Using that pulled descriptor label and the transaction base data stored in storage location 704 , the channel block 702 locates and reads the appropriate descriptor.
  • the channel block 702 pulled label “B” from the inbound queue 710 , the channel block 702 would then use the transaction base data in storage location 704 to locate the descriptor block 714 and would use the label “B” to locate storage location B. The channel block 702 can then read the DMA transaction instruction from that descriptor storage location.
  • the channel block 702 After reading the DMA transaction instruction, the channel block 702 performs the DMA transaction and renders status if requested to do so by the client. Status is only rendered if it is requested by the descriptor as programmed by the client or if the DMA channel is configured to render status. Status can be rendered to a particular address designated in the descriptor, to a location relative to the status base stored in storage location 708 , to outbound queue 714 , etc.
  • the client can write the descriptor label, e.g., “B”, to outbound queue 712 —thereby indicating that descriptor “B” is free and can be acquired for other DMA transactions.
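To tie the client queuing steps above together, here is a minimal C sketch of one transaction in client queuing mode. The queue helpers, the doorbell, and the descriptor-programming routine are hypothetical placeholders; only the ordering of the steps comes from the description.

```c
#include <stdint.h>

/* Hypothetical helpers for the three queues of FIG. 7; the queues can be
 * hardware FIFOs or FIFOs in host or client memory, and in client queuing
 * mode the client must synchronize access to them itself. */
int  free_queue_pop(uint32_t *label);     /* outbound queue 712: free descriptor labels */
void inbound_queue_push(uint32_t label);  /* inbound queue 710                          */
void free_queue_push(uint32_t label);     /* recycle a label after status is rendered   */
void signal_channel_block(void);          /* tell channel block 702 a label was queued  */
void program_descriptor(uint32_t label);  /* write the FIG. 6 fields into the descriptor */

/* One transaction in client queuing mode: acquire a free descriptor label from
 * the outbound queue, program the descriptor it names, place the label on the
 * inbound queue, signal the channel block, and eventually return the label. */
static int client_queue_one(void)
{
    uint32_t label;

    if (free_queue_pop(&label) != 0)      /* no free descriptor available     */
        return -1;

    program_descriptor(label);            /* write length, source, dest, ...  */
    inbound_queue_push(label);            /* hand the label to the channel    */
    signal_channel_block();

    /* ... channel block 702 pulls the label, locates the descriptor via the
     * transaction base, performs the DMA and renders status if requested ... */

    free_queue_push(label);               /* descriptor is free again         */
    return 0;
}
```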
  • the channel block 702 can alternatively be placed in a channel queuing mode.
  • Channel queuing mode generally operates similarly to the client queuing mode.
  • Channel queuing mode does not need to use the inbound queue 710 . Instead, after a descriptor is acquired and programmed, the client need only to write the appropriate descriptor label to the data channel 535 (shown in FIG. 5 ) and not to the inbound queue 710 .
  • the client does not need to provide synchronization to the inbound queue and the client does not need to signal the presence of a descriptor label to the channel block 702 .
  • the channel block 702 itself takes care of these functions.
  • the client is responsible for providing synchronization to any outbound queues ( 712 , 714 ) that it uses for rendering status information.
  • the channel block 702 can also be programmed to operate in a descriptor stream mode. This mode is best suited for use by non-CPU entities, such as bus masters, which generally do not have memory for forming descriptors. Thus, to utilize the descriptor functions of the present invention, these non-CPU entities must stream descriptor information to the appropriate channel block, such as channel block 702 . For example, these devices deliver one portion of a DMA transaction instruction at a time to the data channel 535 of the channel block portion 520 (shown in FIG. 5 ). Once the entire DMA transaction instruction is loaded into the data channel 535 , the channel block generally operates as if in channel queuing mode.
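A minimal sketch of the two submission paths just described, assuming the data channel register accepts either a descriptor label (channel queuing mode) or successive quad-words of a streamed transaction instruction (descriptor stream mode). The pointer name and the six-quad-word count are assumptions based on the descriptor's transaction portion.

```c
#include <stdint.h>

/* Assumed pointer to the channel block's "data channel 535" register. */
extern volatile uint64_t *data_channel;

/* Channel queuing mode: the inbound queue is not used.  After programming a
 * descriptor, the client simply writes its label to the data channel; the
 * channel block handles synchronization and notification itself. */
static void channel_queue_submit(uint32_t descriptor_label)
{
    *data_channel = descriptor_label;
}

/* Descriptor stream mode: a non-CPU client with no memory for descriptors
 * streams the transaction instruction into the data channel one portion at a
 * time; once the whole instruction is loaded, the channel block behaves as in
 * channel queuing mode.  Six quad-words (the transaction portion) assumed. */
static void descriptor_stream_submit(const uint64_t qwords[6])
{
    for (int i = 0; i < 6; i++)
        *data_channel = qwords[i];
}
```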
  • channel blocks of the present invention can be configured to perform non-traditional DMA transactions.
  • a channel block can be configured to operate in a RAM channel mode.
  • RAM channel mode provides an additional level of address translation for data transfers. Effectively, the channel block, when in RAM channel mode, acts as a memory window that points to another memory location, i.e., it provides seamless forwarding of data.
  • FIFO channel mode is another example of the non-traditional capabilities of the present invention. As with RAM channel mode, FIFO channel mode provides an additional level of address translation. FIFO channel mode, however, forwards and receives data from FIFOs (not shown). For example, when an application writes to the data channel 535 of a particular channel block, the channel block then forwards that information to the FIFO. Because the FIFO is a single point write, the channel block writes the information to the FIFO and ignores any page offsets associated with the data channel 535 . Accordingly, CPUs can use burst operations, i.e., write to successive addresses, when they are actually writing to a FIFO. The FIFO channel, in effect, masks the FIFO from the CPU. Similarly, the channel block can read from a FIFO and write the read information to the data channel 535 . The channel block will provide the offsets required to translate the single point FIFO address to the appropriate full address.
  • the present invention provides a method and apparatus for easily and securely rendering DMA transactions.
  • the present invention permits clients such as user applications and non-host entities to utilize DMA transactions. These clients utilize DMA transactions by attempting to acquire one of possibly multiple DMA channels included in an HCDMA controller. Responsive to this attempt, the HCDMA controller can provide a DMA channel to the client. Accordingly, operating system intervention is not necessarily required when a client seeks to acquire a DMA channel.
  • the client Once a DMA channel is acquired, the client must set up the channel for control. For example, the client must select an operating mode, such as client queuing mode, for the DMA channel. The client can next configure the acquired DMA channel by writing to the HCDMA controller any addresses of data structures, such as the queues and the descriptor block, needed for DMA transactions. The client then programs the HCDMA controller to perform the DMA transactions or to translate addresses if the HCDMA controller is in the RAM channel mode or the FIFO channel mode. Finally, once a client no longer needs a DMA channel, the DMA channel is released—thereby freeing it for use by another client.
  • an operating mode such as client queuing mode

Abstract

A data transaction controller for transferring data responsive to a request from a client. The data transaction controller includes channel circuitry for providing a channel for data transfers. The channel circuitry includes a first storage device for storing channel configuration data. The data transaction controller further includes control circuitry for controlling access by the client to the channel circuitry.

Description

    FIELD OF THE INVENTION
  • The invention relates generally to computer systems, and more particularly, but not by way of limitation, to computer systems and computer system components for conducting data transfer transactions without intimate microprocessor involvement.
  • BACKGROUND OF THE INVENTION
  • Computer systems have become an integral part of many homes and businesses. The more popular that computers become, the more demands that are placed upon them. For example, computer systems have become highly integrated into most businesses. These businesses depend upon their computer systems to be accurate, fast and reliable. Down time caused by system crashes and slow response times results in expensive losses, including losses in employee productivity. Accordingly, computer system designers must design components and entire systems with speed and reliability in mind.
  • Computer system designers have long realized that the microprocessor is instrumental to the overall performance of the computer system. In many cases, improvements in system performance are the direct result of improvements in the microprocessor. That is, the computer system performance is increased because of improvements in the microprocessor that, for example, allow it to handle more instructions in the same time period. In other cases, however, improvements in system performance are the direct result of relieving the microprocessor of certain time-intensive duties. These certain time-intensive duties are often shifted to other circuitry.
  • For example, instead of requiring the microprocessor to handle time-intensive data transfers, computer system designers have assigned certain data transfer control to specialized circuitry known as direct memory access (DMA) controllers. Generally, DMA controllers only need to know the base location from which data is to be moved, the address to which the data should go, and the amount of data to be moved. Once the DMA controller knows this information, it will move the data without intimate microprocessor intervention. Without a DMA controller, the microprocessor itself would be forced to control the data transfer—thereby resulting in substantially decreased system performance.
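As an illustration of the three pieces of information mentioned above, the following sketch models a generic DMA engine in C. The register block and field names are invented for illustration and do not come from the patent.

```c
#include <stdint.h>

/* Hypothetical register block for a simple DMA engine: the three pieces of
 * information the text identifies are a source location, a destination
 * address, and the amount of data to move. */
struct simple_dma_regs {
    volatile uint64_t source;       /* where the data is moved from      */
    volatile uint64_t destination;  /* where the data should go          */
    volatile uint64_t length;       /* amount of data to move, in bytes  */
    volatile uint64_t start;        /* writing 1 kicks off the transfer  */
};

/* Program a transfer and start it; the microprocessor is then free to do
 * other work while the controller moves the data on its own. */
static void simple_dma_copy(struct simple_dma_regs *dma,
                            uint64_t src, uint64_t dst, uint64_t len)
{
    dma->source      = src;
    dma->destination = dst;
    dma->length      = len;
    dma->start       = 1;
}
```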
  • Original DMA controllers are generally inadequate for modern computer systems and have been essentially abandoned. Instead of the original, dedicated DMA controllers, modern computer systems often use bus masters to perform DMA type transactions. For clarity, original, dedicated DMA controllers and bus masters that perform DMA type transactions will be referred to, collectively, as “DMA controllers.” Existing DMA controllers, however, are plagued by problems and limitations. Both present and future computer systems are in need of a next generation DMA controller.
  • One problem with existing DMA controllers is the lack of standardization and the resulting complexity caused by this lack of standardization. Existing DMA controllers are designed and manufactured by a variety of vendors—each vendor having its own design features. In terms of DMA transactions, each vendor has its own way of rendering DMA transactions. Each DMA controller, accordingly, must have its own driver running in kernel mode. This multitude of drivers adds unneeded complexity to the computer system and causes DMA capabilities to be under-utilized.
  • As previously noted, DMA controllers require a driver running at kernel mode, which is a higher privilege (security) level than is, for example, the application mode used by user programs. Because DMA controllers require a driver running at kernel mode, a user application, which should not have access to the higher privilege level, cannot easily access existing DMA drivers and take advantage of DMA capabilities. In other words, a user application can generally only take full advantage of DMA capabilities with the help of the operating system (OS). The OS must transition the application from a lesser privilege level into a higher privilege level. The OS, for example, must transition the application from the untrusted domain where user applications operate to a trusted domain where drivers operate.
  • By allowing user applications access to the trusted domain, the integrity of the entire computer system is jeopardized. The user application, if given access to the trusted domain, could destroy or alter the OS, destroy data, crash the system, etc. Accordingly, a well designed computer system will strictly limit a user application's access to the trusted domain. When the access of a user application to the trusted domain is limited, however, the ability of the user application to utilize the features of existing DMA controllers is also limited—thereby forcing the microprocessor to perform data transfers that are better performed by a DMA controller.
  • In light of the deficiencies in the existing technology, a next generation DMA controller is needed. More particularly, a DMA controller is needed that permits user applications to render DMA transactions without compromising computer system integrity. Further, a DMA controller is needed that allows user applications to render DMA transactions without intimate OS intervention. Additionally, a DMA controller is needed that allows access by both host processors and non-host processor devices to dynamically acquire and release DMA channels.
  • SUMMARY OF THE INVENTION
  • To remedy the deficiencies of existing technology, the present invention provides a method and apparatus for efficiently performing data transfers, such as DMA transactions, for various types of clients without jeopardizing system integrity.
  • In one embodiment, the present invention includes a computer system comprising a mass storage device; and a first data transfer controller for controlling data transfers involving the mass storage device, wherein the first data transfer controller is operable in a channel free state and a channel unavailable state. This embodiment further includes a circuit device connected to the first data transfer controller, the circuit device is at least for requesting a particular data transfer to be controlled by the first data transfer controller; and a second data transfer controller connected to the circuit device, the second data transfer controller for controlling data transfers and for controlling the particular data transfer responsive, at least, to the circuit device receiving an indication that the first data transfer controller is in the channel unavailable state.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more complete understanding of the present invention may be had by reference to the following Detailed Description and appended claims when taken in conjunction with the accompanying drawings wherein:
  • FIG. 1 illustrates a highly concurrent direct memory access (HCDMA) controller in accordance with the principles of the present invention;
  • FIG. 2 illustrates in more detail the HCDMA controller as similarly shown in FIG. 1;
  • FIG. 3 illustrates a computer system including multiple, chained HCDMA controllers;
  • FIG. 4 is a flow chart illustrating HCDMA operation from a client-side perspective;
  • FIG. 5 illustrates an exemplary I/O memory map for an HCDMA controller;
  • FIG. 6 illustrates a descriptor used to program HCDMA transactions; and
  • FIG. 7 represents the operation of the HCDMA in client queuing mode.
  • DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENT OF THE INVENTION
  • Although the present invention is open to various modifications and alternative constructions, a preferred exemplary embodiment that is shown in the drawings will be described herein in detail. It is to be understood, however, that there is no intention to limit the invention to the particular forms disclosed. One skilled in the art will recognize that there are numerous modifications, equivalents and alternative constructions that fall within the spirit and scope of the invention as expressed in the claims.
  • Referring now to FIG. 1, there is illustrated a highly concurrent direct memory access (HCDMA) controller 100 constructed in accordance with the principles of the present invention. The HCDMA controller 100 includes data lines: input 105 and output 110. These data lines define side 1 of the HCDMA controller 100. Additionally, the HCDMA controller 100 includes data lines: input 120 and output 115. These data lines define side 2 of the HCDMA controller 100.
  • The data lines on side 1 of the HCDMA are connected to a data bus 125. Similarly, the data lines of side 2 of the HCDMA are connected to a data bus 130. Although the data lines of side 1 and side 2 are shown to not be multiplexed, one skilled in the art can understand that multiplexing circuitry can be inserted intermediate the HCDMA controller 100 and either data bus 125 or data bus 130. Accordingly, the HCDMA controller 100 is compatible with any type of bus.
  • Referring now to FIG. 2, there is illustrated a more detailed depiction of a HCDMA controller 200. As can be appreciated, the inputs 202 and 208 generally correspond to the inputs 105 and 120 of FIG. 1, and furthermore the outputs 204 and 206 generally correspond to the outputs 110 and 115 of FIG. 1. HCDMA controller 200 includes multiplexers 220 and 218 that are used to control the I/O to and from the inputs 202 and 208 and the outputs 204 and 206.
  • Still referring to FIG. 2, the HCDMA controller 200 includes a control block 210 and multiple channel blocks such as channel blocks 212, 214 and 216. Although HCDMA controller 200 is illustrated to include only three channel blocks 212, 214, 216, one skilled in the art can appreciate that any number of channel blocks (including only one) can be incorporated into the HCDMA controller 200. The number of channel blocks in any particular HCDMA controller is a function of the available silicon and the number of DMA channels needed for an envisioned implementation.
  • Each channel block of the HCDMA controller 200 supports one DMA channel and each channel block is independently programmable. HCDMA controller 200, accordingly, supports three DMA channels and each of these channels can be simultaneously acquired, held, programmed and used by different clients such as host software and bus master devices.
  • Once a DMA channel supported by a channel block is acquired, the acquiring client can program the DMA channel to execute DMA transactions. Until that client concludes all of its DMA transactions and releases the DMA channel, no other client can use that particular DMA channel. Other clients must acquire a different DMA channel from a different channel block.
  • To acquire a DMA channel from a HCDMA controller, such as HCDMA controller 200, a client must communicate with the control block 210. For example, the client can request a free channel block from the control block 210, i.e., the client can request a DMA channel not being used by another client. If the HCDMA controller 200 has a free DMA channel, the HCDMA controller 200 will indicate this to the requesting client. If, on the other hand, the HCDMA controller 200 does not have a free DMA channel, this fact will be communicated to the client and the client will either wait for a DMA channel to become free or seek a DMA channel from another HCDMA controller.
  • When the client completes all of its DMA transactions, it should signal the control block that the DMA channel is no longer needed. After being signaled by the client, the control block can release the DMA channel. That control block and associated DMA channel can then be acquired by other clients. As can be appreciated by one skilled in the art, by acquiring and releasing DMA channels through a control block such as control block 210, multiple clients can simultaneously acquire and release DMA channels without operating system (OS) intervention.
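A minimal sketch of the acquire/release handshake with a control block, assuming a simple memory-mapped register pair; the register names, the 0-means-no-free-channel convention, and the layout are assumptions, not taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical view of an HCDMA control block: one register that returns the
 * address of a free channel block (or 0 when none is free) and one register
 * used to hand a channel back.  Names and offsets are illustrative. */
struct hcdma_control_block {
    volatile uint64_t acquire_channel;  /* read: address of a free channel, 0 if none */
    volatile uint64_t release_channel;  /* write: address of the channel being freed  */
};

/* Ask the control block for a free DMA channel.  Returns NULL when the
 * controller has no free channel; the client may then wait or seek a channel
 * from another (chained) HCDMA controller. */
static volatile void *hcdma_acquire(struct hcdma_control_block *cb)
{
    uint64_t chan = cb->acquire_channel;
    return chan ? (volatile void *)(uintptr_t)chan : NULL;
}

/* Signal the control block that the client has finished all of its DMA
 * transactions so the channel can be acquired by other clients. */
static void hcdma_release(struct hcdma_control_block *cb, volatile void *chan)
{
    cb->release_channel = (uint64_t)(uintptr_t)chan;
}
```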
  • In another embodiment of the present invention, multiple, distributed HCDMA controllers can be linked (chained) so that each HCDMA controller's DMA channels are pooled. Thus, if one HCDMA controller does not have a free DMA channel, a client can obtain a DMA channel from another HCDMA controller. FIG. 3 illustrates an exemplary computer system 300 that includes multiple, chained HCDMA controllers.
  • In FIG. 3, the computer system 300 includes processors 302 connected with a memory controller 308 by a bus 304. The memory controller 308 controls all transactions with memory devices 306. These memory devices 306 can include single storage units, electronic memory, distributed memory systems, RAID systems, etc. Furthermore, the memory controller 308 controls all transactions between memory devices 306 and device 318, device 326, bridge 314 and bridge 322. Devices 318 and 326 can be virtually any computer component, including bus masters, ASICs, I/O devices, bridges, etc. Furthermore, the memory controller 308 controls all transactions between the memory devices 306 and the processors 302. Although processors 302 are illustrated as including four processors, one skilled in the art can understand that any number of processors, including one, can be used in the computer system 300.
  • Still referring to FIG. 3, I/O bridge 314, I/O bridge 322, device 318 and device 326 include HCDMA controllers 316, 324, 320 and 328, respectively. Further, the memory controller 308 includes HCDMA controllers 310 and 312. It is not necessary, however, that each of the I/O bridges, devices, and the memory controller include a HCDMA controller. One skilled in the art can appreciate that FIG. 3 is only exemplary and that components and/or HCDMA controllers can be added or removed without altering the basic operation of the invention.
  • Still referring to FIG. 3, the arrows pointing from one HCDMA controller to another HCDMA controller indicate the chaining capabilities of HCDMA controllers constructed in accordance with the principles of the present invention. For example, arrow 330 indicates that HCDMA controller 320 is chained to HCDMA controller 316 and arrow 332 indicates that HCDMA controller 316 is chained to HCDMA controller 312. Accordingly, the DMA channels of HCDMA controllers 320, 316 and 312 can be pooled together. That is, if HCDMA controller 320, for example, has no DMA channels available for acquisition, a client can, instead, acquire a DMA channel from HCDMA controller 316, which is chained to HCDMA controller 320.
  • More particularly, a client can attempt to acquire a DMA channel from HCDMA controller 320. If the HCDMA controller 320 has a free DMA channel as indicated by its control block (not shown), the HCDMA controller 320 returns the address of that free DMA channel to the client. The client then uses that address to set up the associated channel block such as channel block 212 in FIG. 2. If, on the other hand, the HCDMA controller 320 has no free DMA channels, the HCDMA controller returns the address of the chained HCDMA controller 316. The client, using the returned address of HCDMA controller 316, requests a DMA channel from this new HCDMA controller 316. If HCDMA controller 316 has a free DMA channel, it returns the address of that DMA channel. Otherwise, the HCDMA controller 316 returns the address of chained HCDMA controller 312. As can be appreciated, the client can continue to “walk” the chain until it finds a HCDMA controller with a free DMA channel. Further, the client can “walk” the chain of HCDMA controllers without the intervention of the OS. Accordingly, non-host based entities, such as bus masters, can acquire DMA channels.
  • Referring to FIG. 4, there is illustrated the general process followed by a client to perform DMA transactions. First, the client requests a DMA channel from a particular HCDMA controller (step 405). The client, for example, can access a channel pool list stored in the control block 210 of HCDMA controller 200 (shown in FIG. 2). The channel pool list can store the addresses of the free DMA channels associated with HCDMA controller 200. Thus, when the channel pool list is empty, HCDMA controller 200 has no free DMA channels.
  • If, in step 410 it is determined that the HCDMA controller has no free DMA channels, branch 415 is followed and the HCDMA controller returns the address of the next chained HCDMA controller (step 420). The address of the chained HCDMA controller can be stored in the control block 210 of HCDMA controller 200 (shown in FIG. 2).
  • If a next chained channel controller exists (step 421) then branch 422 is followed and the client requests a free DMA channel from the next chained HCDMA controller (step 405). Otherwise, branch 423 is followed and the client is notified that no channel resources are presently available (step 424).
  • Assuming that the chained HCDMA controller has a free DMA channel, branch 425 is followed from decision block 410 and the HCDMA controller returns and the client receives (step 430) the address of the free DMA channel. At this point, the client has successfully acquired a DMA channel.
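The acquisition flow of FIG. 4 can be sketched as a simple loop, assuming each controller answers a request with either a free channel address or the address of the next chained controller. The reply structure and the helper that consults the channel pool list are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed reply from a controller's control block: either a free channel
 * address or the address of the next chained HCDMA controller (0 if none). */
struct hcdma_reply {
    bool     channel_is_free;   /* true: 'addr' is a free DMA channel         */
    uint64_t addr;              /* channel address, next controller, or 0     */
};

/* Hypothetical helper performing steps 405/410: request a channel and read
 * back the controller's answer from its channel pool list. */
struct hcdma_reply hcdma_request_channel(uint64_t controller_addr);

/* Walk the chain of HCDMA controllers (steps 405-430 of FIG. 4) without any
 * OS involvement.  Returns the address of an acquired DMA channel, or 0 when
 * no channel resources are presently available (step 424). */
static uint64_t hcdma_walk_chain(uint64_t first_controller)
{
    uint64_t controller = first_controller;

    while (controller != 0) {
        struct hcdma_reply r = hcdma_request_channel(controller);  /* step 405 */
        if (r.channel_is_free)
            return r.addr;      /* step 430: free channel acquired            */
        controller = r.addr;    /* step 420: address of next chained controller */
    }
    return 0;                   /* step 424: no channel resources available   */
}
```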
  • The client next sets up the acquired DMA channel for control (step 435). Table 1 includes exemplary control instructions that can be used to set up a DMA channel for control. The “bits” column in Table 1 indicates the offset for each instruction embedded in an exemplary 64 bit instruction. The size of the instruction, the offsets, and the individual instructions as shown in Table 1 are not meant to be limiting. Other individual instructions, offsets and bit lengths may be employed in setting up the DMA channel.
    TABLE 1
    CONTROL INSTRUCTIONS FOR SETTING UP DMA CHANNEL
    (Most bit offsets and several values are illegible in this copy; the recoverable entries are listed below.)
    Bits   Access  Description
    --     W       RESET. Resets the channel; ‘GO’ terminates this state.
    --     W       CONFIGURE. Places the channel in the Configure state; (illegible) terminates this state.
    --     W       PURGE. Purges the channel; (illegible) terminates this state.
    --     W       FLUSHTLB. Flushes the channel's translation (TLB) cache.
    --     W       INTERRUPTACK. Clears the interrupt asserted by the channel.
    --     W       INTERRUPTON / INTERRUPTOFF. Turns interrupts on or off for the channel.
    --     W       BASEVIRTUAL. The channel translates ALL address information. BASEPHYSICAL. The channel translates address information as specified in the (illegible).
    --     W       STRONGORDER / WEAKORDER. Controls the order in which the channel processes and completes (illegible).
    11-13  W       Mode select: CLIENTQUEUING, CHANNELQUEUING, DESCRIPTORSTREAM, FIFOCHANNEL; values 110 and 111 are ignored.
    --     W       The (illegible)-page frame address of the target channel block.
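Because most of the bit offsets in Table 1 are illegible here, the sketch below assumes only what the surrounding text states: the set-up command is a 64-bit word and the operating mode is selected by bits 11-13. The mode names come from Table 1; the numeric encodings and all other field positions are placeholders.

```c
#include <stdint.h>

/* Operating-mode encodings for bits 11-13 of the 64-bit set-up word.
 * The mode names come from Table 1; the numeric values are placeholders
 * chosen only for illustration. */
enum hcdma_mode {
    HCDMA_MODE_CLIENT_QUEUING    = 0x0,
    HCDMA_MODE_CHANNEL_QUEUING   = 0x1,
    HCDMA_MODE_DESCRIPTOR_STREAM = 0x2,
    HCDMA_MODE_FIFO_CHANNEL      = 0x3,
};

/* Build the 64-bit channel set-up instruction (step 435 of FIG. 4).
 * Only the mode field position (bits 11-13) is taken from the text. */
static uint64_t hcdma_setup_word(enum hcdma_mode mode)
{
    uint64_t word = 0;
    word |= ((uint64_t)mode & 0x7) << 11;   /* mode select, bits 11-13 */
    /* Other Table 1 fields (RESET, CONFIGURE, ordering, interrupt control,
     * address-translation base, ...) would be OR'ed in at their real,
     * implementation-defined offsets. */
    return word;
}
```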
  • After the DMA channel has been set up, the client can configure it (step 440). By configuring the DMA channel, the client is providing the HCDMA controller, for example, with address information for data structures used by the DMA channel in DMA transactions.
  • Although not necessary, it is advantageous to associate the storage capabilities of a HCDMA control block, such as control block 210, and a HCDMA channel block, such as channel block 212, with the I/O memory. By mapping the HCDMA controllers to I/O memory, clients can simply perform reads and writes to and from the registers of HCDMA controllers by performing reads and writes to and from I/O memory. Clients, accordingly, do not need to know how to access the HCDMA controller registers directly. With regard to configuring DMA channels, clients can write the data structure addresses directly to I/O memory.
  • FIG. 5 illustrates an exemplary I/O memory map of an HCDMA controller. Memory block 500 represents contiguous I/O memory. The memory block 500 includes a control block portion 502 and channel block portions 504, 506 and 508, each of which is independently programmable. The control block portion 502, for example, can be mapped to memory in the HCDMA controller's control block such as that in control block 210 (shown in FIG. 2). Furthermore, the channel block portions 504, 506 and 508 can be mapped to memory in channel blocks such as channel blocks 212, 214 and 216. The memory block 500 can also include an optional adapter memory portion 510 that would be mapped to corresponding memory in the HCDMA controller.
  • Channel block portion 520 is a more detailed depiction of the I/O memory channel block portion 508. Channel block portion 520, however, would be similar for channel block portions 504 and 506. Furthermore, control block portion 530 is a more detailed depiction of I/O memory control block portion 502. One skilled in the art can understand that the I/O memory map can be configured in a variety of ways and that FIG. 5 merely illustrates one of those many ways.
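One way to picture the I/O memory mapping of FIG. 5 is as a C structure overlay, so that a client programs the controller with ordinary loads and stores. The register names (transaction base, status base, queues, data channel, channel pool list, chained controller address) reflect items mentioned in the description; the offsets, padding, and the three-channel count are assumptions.

```c
#include <stdint.h>

/* One channel block portion (e.g. portion 520 of FIG. 5).  Field names follow
 * the text; the exact layout, sizes, and padding are illustrative only. */
struct hcdma_channel_io {
    volatile uint64_t transaction_base;   /* points at the descriptor block       */
    volatile uint64_t status_base;        /* where status information is rendered */
    volatile uint64_t inbound_queue;      /* queue addresses written at configure */
    volatile uint64_t outbound_queue[2];
    volatile uint64_t data_channel;       /* "data channel 535" in the text       */
    volatile uint64_t reserved[58];       /* pad the portion to an assumed size   */
};

/* The contiguous I/O memory block 500: a control block portion followed by
 * independently programmable channel block portions and optional adapter
 * memory.  Three channels are assumed, matching FIG. 2. */
struct hcdma_io_map {
    struct {
        volatile uint64_t channel_pool_list;   /* free-channel addresses (simplified to one register) */
        volatile uint64_t chained_controller;  /* address of the next HCDMA controller in the chain   */
        volatile uint64_t reserved[62];
    } control;                                  /* control block portion 502    */
    struct hcdma_channel_io channel[3];         /* portions 504, 506, 508       */
    volatile uint8_t adapter_memory[4096];      /* optional adapter portion 510 */
};
```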
  • Referring again to the flow chart of FIG. 4, after configuring the DMA channel (step 440), i.e., after writing the appropriate information to channel block portion 520 in FIG. 5, the client requests that a DMA transaction be performed (step 445). In one embodiment, the client makes this request by writing the necessary data to a descriptor such as descriptor 605 shown in FIG. 6. Descriptor 605 is a 64-byte aligned memory region consisting of 8 quad-words. The first 6 quad-words represent the transaction portion of the descriptor and the last 2 quad-words represent the status portion of the descriptor.
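A sketch of descriptor 605 as a C structure: a 64-byte aligned region of eight quad-words, six for the transaction portion and two for status. The field names anticipate the fields discussed in the next paragraph (length, source, destination, response address, response data, descriptor link, control); their ordering, and the assumption that the link and control fields share a quad-word, are illustrative only.

```c
#include <stdint.h>

/* Descriptor 605: 64-byte aligned, eight quad-words.  The first six
 * quad-words carry the transaction, the last two carry status/context.
 * Field order inside the transaction portion is illustrative. */
struct hcdma_descriptor {
    /* transaction portion (6 quad-words) */
    uint64_t length;          /* field 610: amount of data to transfer          */
    uint64_t source;          /* field 620: where the data comes from           */
    uint64_t destination;     /* field 625: where the data goes                 */
    uint64_t response_addr;   /* field 630: where response data is written      */
    uint64_t response_data;   /* field 635: value written on completion         */
    uint64_t link_and_ctrl;   /* fields 640/645: next descriptor + control bits
                               * (assumed to share one quad-word)               */

    /* status portion (2 quad-words) */
    uint64_t status;
    uint64_t context;
} __attribute__((aligned(64)));

/* Documents the 64-byte / eight-quad-word layout at compile time. */
_Static_assert(sizeof(struct hcdma_descriptor) == 64,
               "descriptor must occupy exactly eight quad-words");
```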
  • To render a DMA transaction (step 445), the client could write to the descriptor the length of the data being transferred (field 610), the source of the data (field 620) and the destination of the data (field 625). The client can also provide a response address (field 630) and response data (field 635). With this data, after a DMA transaction is completed, the HCDMA controller can write the response data to the response address. Additionally, the client can batch multiple DMA transactions together by providing a link to another descriptor in the descriptor link field 640. Furthermore, particular control instructions for the DMA transaction can be written to a control field 645. Table 2 contains exemplary control instructions. As with Table 1, neither the particular instructions nor the offsets are meant to be limiting. One skilled in the art can appreciate that other instructions and offsets can be used.
    TABLE 2
    DESCRIPTOR CONTROL INSTRUCTIONS
    Bits        Label          Description
    0           ORD            1 = This operation is performed after all other
                               previously issued DMA requests.
    1           CNL            1 = Cancel this DMA operation and complete it
                               [remainder illegible].
    2           INTERROGATORY  1 = Assert an interrupt upon completion of this
                               DMA operation.
    3           SYN            1 = Same as 'SO' but also halts the channel
                               after completion of the DMA operation.
    4           SAD            1 = Interpret 'Source' field as data instead of
                               as an address.
    5           [illegible]    [illegible] requested upon completion of the
                               DMA operation.
    6           [illegible]    [illegible] upon completion of the DMA
                               operation.
    7           [illegible]    [illegible]
    8           [illegible]    [illegible]
    9           [illegible]    [illegible]
    10          [illegible]    [illegible]; only valid when SAD = [illegible].
    11          [illegible]    [illegible] address is [illegible]; only valid
                               when SAD = [illegible].
    [illegible] [illegible]    Source address cache attribute (only valid when
                               SAD = [illegible]):
                               [illegible] = Source address is fully cached.
                               [illegible] = Source address is write-through
                               [illegible].
                               [illegible] = Reserved.
                               11 = Source address is [illegible].
    14          [illegible]    [illegible] = Source address is virtual and
                               requires translation; only valid when
                               SAD = [illegible].
    [illegible] [illegible]    Several further source-address attributes, each
                               only valid when SAD = [illegible]; the values
                               are illegible.
    17          [illegible]    1 = Destination address is [illegible] during
                               DMA operation.
    18          [illegible]    1 = Destination address is [illegible].
    [illegible] [illegible]    Destination address attribute; the encodings
                               are illegible except that one value is
                               Reserved.
    21          DVA            1 = Destination address is virtual and requires
                               translation.
    [illegible] [illegible]    00 = Destination address is 64-bit [illegible].
                               01 = Destination address is 08-bit [illegible].
                               10 = Destination address is 16-bit [illegible].
                               11 = Destination address is 32-bit [illegible].
    24          RIO            1 = Destination address is IO region.
    [illegible] RWT            00 = Response address is fully cached.
                               01 = Response address is write-through cache.
                               10 = Reserved.
                               11 = Response address is uncached.
    27          RVA            1 = Response address is virtual and requires
                               translation.
    [illegible] [illegible]    00 = Response address is 64-bit [illegible].
                               01 = Response address is 08-bit [illegible].
                               10 = Response address is 16-bit [illegible].
                               11 = Response address is 32-bit [illegible].
    [illegible] [illegible]    [illegible]
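  • The descriptor organization described above, six transaction quad-words followed by two status quad-words in a 64-byte aligned region, can be pictured with a short C sketch. The field names and the exact ordering of the transaction quad-words are assumptions for illustration; the patent fixes only the 64-byte size, the 6/2 split, and the fields named in FIG. 6.

    #include <stdint.h>

    /* Hypothetical layout of the 64-byte, eight quad-word descriptor 605.
     * Field names follow FIG. 6; the ordering of the six transaction
     * quad-words is an assumption, not taken from the patent. */
    struct hcdma_descriptor {
        /* Transaction portion: first six quad-words */
        uint64_t length;        /* field 610: amount of data to transfer     */
        uint64_t source;        /* field 620: source address (or immediate
                                   data when the SAD control bit is set)     */
        uint64_t destination;   /* field 625: destination address            */
        uint64_t response_addr; /* field 630: where response data is written */
        uint64_t response_data; /* field 635: value written on completion    */
        uint64_t link_and_ctrl; /* fields 640/645: link to the next descriptor
                                   and per-transaction control bits (Table 2) */
        /* Status portion: last two quad-words */
        uint64_t status;        /* completion status rendered by the channel */
        uint64_t context;       /* client-defined context                    */
    } __attribute__((aligned(64)));

    _Static_assert(sizeof(struct hcdma_descriptor) == 64,
                   "a descriptor occupies exactly eight quad-words");

  • In this sketch the client fills the transaction portion and leaves the status portion for the channel block to write back when the transfer completes.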
  • Referring again to FIG. 4, if the client desires further DMA transactions (step 450), branch 455 is followed and the client requests these additional DMA transactions (step 445). When no other DMA transactions are requested, branch 460 is followed and the client releases the DMA channel (step 470). This DMA channel is now free and can be acquired by other clients.
  • In one embodiment of the present invention, individual DMA channels can be operated in different modes. The operational mode of a particular DMA channel can be determined during DMA channel set up (step 435 of FIG. 4). For example, a channel block, such as channel block 212 shown in FIG. 2, can be initially set up to operate in a client queuing mode, a channel queuing mode, a descriptor streaming mode, a RAM channel mode or a FIFO channel mode. The client can set up the channel block in one of these modes by setting the appropriate bits, e.g., bits 11-13 as shown in Table 1, in the set up instruction issued in step 435.
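  • As a rough illustration of the set-up step, the sketch below folds a mode selection into bits 11-13 of a channel set-up word. The enumeration values and the helper are assumptions; the text only names the five modes and places the mode field at bits 11-13 of the set-up instruction.

    #include <stdint.h>

    /* The five operating modes named in the text. The 3-bit encodings are
     * assumptions; the patent only indicates that the mode field occupies
     * bits 11-13 of the channel set-up instruction (Table 1). */
    enum hcdma_channel_mode {
        HCDMA_MODE_CLIENT_QUEUING    = 0,
        HCDMA_MODE_CHANNEL_QUEUING   = 1,
        HCDMA_MODE_DESCRIPTOR_STREAM = 2,
        HCDMA_MODE_RAM_CHANNEL       = 3,
        HCDMA_MODE_FIFO_CHANNEL      = 4
    };

    #define HCDMA_SETUP_MODE_SHIFT 11u
    #define HCDMA_SETUP_MODE_MASK  (0x7u << HCDMA_SETUP_MODE_SHIFT)

    /* Merge the chosen mode into an otherwise-prepared set-up word. */
    static inline uint32_t hcdma_setup_with_mode(uint32_t setup_word,
                                                 enum hcdma_channel_mode mode)
    {
        setup_word &= ~HCDMA_SETUP_MODE_MASK;
        setup_word |= ((uint32_t)mode << HCDMA_SETUP_MODE_SHIFT)
                      & HCDMA_SETUP_MODE_MASK;
        return setup_word;
    }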
  • FIG. 7 illustrates an example of the HCDMA controller's client queuing mode. FIG. 7 includes a channel block 702, which can correspond to one of channel blocks 212, 214 and 216 shown in FIG. 2. Channel block 702 includes storage locations 704 and 708. These storage locations can be used to store the data written to the addresses of the I/O memory's channel block portion 520 as shown in FIG. 5. In particular, storage locations 704 and 708 can be used to store a transaction base and a status base (shown in the channel block portion 520), respectively. The stored transaction base can be used to point to a location in memory where descriptors (such as descriptor 605 shown in FIG. 6), which include the DMA transaction information, are stored. For example, the transaction base could point to descriptor block 714. The transaction base can be unique for each channel block.
  • Similarly, the status base is used to point to a location in memory where status information about the DMA transaction should be rendered. The status base stored in storage location 708 can point to the same data structure, e.g., descriptor block 714, as does the transaction base stored in storage location 704 or the status base can point to a different data structure. In other words, the status information need not be rendered to the same data structure that provided the instructions for the DMA transaction.
  • In one exemplary embodiment, three queues 710, 712 and 714 are associated with the channel block 702. The location (address in memory) of these queues can be established when the DMA channel is originally configured, as in step 440 of FIG. 4. For example, the location of these queues is written to the appropriate portions of the I/O memory's channel block portion 520 shown in FIG. 5. These queues can be configured as hardware FIFOs, FIFOs in host memory, FIFOs in client memory, etc.
  • Still referring to FIG. 7, queue 710 is an inbound queue and queues 712 and 714 are outbound queues. As one skilled in the art can appreciate, this queue configuration can be adjusted to fit particular design requirements. The three-queue configuration, however, provides the HCDMA controller with substantial versatility.
  • Also shown in FIG. 7 is a descriptor block 714 with three descriptor storage locations labeled A, B and C. Depending upon the needs of the client using the channel block 702, descriptor block 714 can include any number of descriptor storage locations. In one embodiment, each descriptor storage location can be 64 bytes of physically contiguous memory. Further, each descriptor storage location can be divided into two portions: a transaction portion 716 corresponding to the first six quad-words of a descriptor and a status portion 718 corresponding to the last two quad-words of a descriptor. The status portion 718 could store the status and context fields of descriptor 605 in FIG. 6.
  • In client queuing mode, the client acquires a free descriptor (A, B or C) and then writes DMA transaction instructions to that acquired descriptor. This procedure corresponds to step 445 in FIG. 4. In one embodiment, outbound queue 712 stores the descriptor labels, e.g., “A” and “B”, for all free descriptors. That is, outbound queue 712 stores the descriptor labels corresponding to descriptor storage locations not already programmed by other DMA transactions (of the same client). Thus, in this embodiment, the client acquires a free descriptor by obtaining the identity of a free descriptor from the outbound queue 712.
  • Once the identity of a free descriptor is obtained, the client can program (write) that descriptor. Thus, if the client acquired descriptor “B”, the client could then write the necessary instructions (those fields shown in FIG. 6) to descriptor “B” in descriptor block 714.
  • After programming a descriptor with the DMA transaction instruction, the client places the descriptor label, such as “B”, on the inbound queue 710. In the client queuing mode, the client is responsible for synchronizing access to all of the queues, including inbound queue 710. Next, the client signals the channel block 702 that a new descriptor label has been inserted onto the inbound queue 710. The channel block 702 then pulls the descriptor label from the inbound queue 710. Using that pulled descriptor label and the transaction base data stored in storage location 704, the channel block 702 locates and reads the appropriate descriptor. For example, if the channel block 702 pulled label “B” from the inbound queue 710, the channel block 702 would then use the transaction base data in storage location 704 to locate the descriptor block 714 and would use the label “B” to locate storage location B. The channel block 702 can then read the DMA transaction instruction from that descriptor storage location.
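  • The channel block's lookup amounts to simple address arithmetic over the physically contiguous descriptor block. The helper below assumes that descriptor labels map to small integer indices (“A” = 0, “B” = 1, and so on) and reuses the struct hcdma_descriptor sketch shown after Table 2; both are assumptions made for illustration only.

    #include <stdint.h>

    struct hcdma_descriptor;   /* 64-byte descriptor sketched after Table 2 */

    /* Locate a descriptor from the channel's transaction base and a label
     * pulled from the inbound queue. Each descriptor occupies 64 bytes of
     * physically contiguous memory, so the label acts as an index. */
    static inline struct hcdma_descriptor *
    hcdma_locate_descriptor(uintptr_t transaction_base, unsigned label_index)
    {
        return (struct hcdma_descriptor *)
            (transaction_base + (uintptr_t)label_index * 64u);
    }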
  • After reading the DMA transaction instruction, the channel block 702 performs the DMA transaction and renders status if requested to do so by the client. Status is only rendered if it is requested by the descriptor as programmed by the client or if the DMA channel is configured to render status. Status can be rendered to a particular address designated in the descriptor, to a location relative to the status base stored in storage location 708, to outbound queue 714, etc. After the transaction is completed and status is rendered, the client can write the descriptor label, e.g., “B”, to outbound queue 712, thereby indicating that descriptor “B” is free and can be acquired for other DMA transactions.
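  • From the client's side, the client queuing sequence described in the preceding paragraphs can be sketched as follows. The queue and signalling primitives, and the function names generally, are hypothetical; the patent does not define a programming interface.

    #include <stdbool.h>
    #include <stdint.h>

    struct hcdma_descriptor;   /* 64-byte descriptor sketched after Table 2 */

    /* Hypothetical primitives: the free queue holds free descriptor labels
     * (as queue 712 does), the work queue is the inbound queue (710), and
     * the client serializes access to both in client queuing mode. */
    bool hcdma_queue_pop(void *queue, unsigned *label_out);
    void hcdma_queue_push(void *queue, unsigned label);
    void hcdma_signal_channel(void *channel_block);
    void hcdma_program_descriptor(struct hcdma_descriptor *d,
                                  uint64_t src, uint64_t dst, uint64_t len);
    struct hcdma_descriptor *hcdma_descriptor_at(void *descriptor_block,
                                                 unsigned label);

    /* One DMA transaction, client side, in client queuing mode. */
    bool hcdma_client_submit(void *channel_block, void *descriptor_block,
                             void *free_queue, void *work_queue,
                             uint64_t src, uint64_t dst, uint64_t len)
    {
        unsigned label;
        if (!hcdma_queue_pop(free_queue, &label))     /* acquire a free descriptor */
            return false;

        hcdma_program_descriptor(hcdma_descriptor_at(descriptor_block, label),
                                 src, dst, len);      /* write the FIG. 6 fields   */
        hcdma_queue_push(work_queue, label);          /* place label on queue 710  */
        hcdma_signal_channel(channel_block);          /* notify the channel block  */
        return true;
    }

  • After the channel block completes the transfer and renders any requested status, the label would be pushed back onto the free queue, mirroring the last step described above.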
  • During the initial setup stage, the channel block 702 can alternatively be placed in a channel queuing mode. Channel queuing mode generally operates similarly to the client queuing mode. Channel queuing mode, however, does not need to use the inbound queue 710. Instead, after a descriptor is acquired and programmed, the client need only write the appropriate descriptor label to the data channel 535 (shown in FIG. 5) rather than to the inbound queue 710. Furthermore, in channel queuing mode, the client does not need to provide synchronization to the inbound queue, and the client does not need to signal the presence of a descriptor label to the channel block 702. The channel block 702 itself takes care of these functions. The client, however, is responsible for providing synchronization to any outbound queues (712, 714) that it uses for rendering status information.
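  • In channel queuing mode the submission step shrinks to a single register write. The sketch below assumes a memory-mapped data channel register; its offset within the channel block portion is a placeholder, since the actual offset would come from the I/O memory map of FIG. 5.

    #include <stdint.h>

    #define HCDMA_DATA_CHANNEL_OFFSET 0x0u   /* placeholder offset for data channel 535 */

    /* Submit a programmed descriptor by writing its label to the data
     * channel; no inbound queue, client synchronization, or explicit
     * signal is needed, because the channel block handles those itself. */
    static inline void hcdma_submit_channel_queuing(volatile uint8_t *channel_regs,
                                                    uint32_t label)
    {
        volatile uint32_t *data_channel =
            (volatile uint32_t *)(channel_regs + HCDMA_DATA_CHANNEL_OFFSET);
        *data_channel = label;
    }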
  • The channel block 702 can also be programmed to operate in a descriptor stream mode. This mode is best suited for use by non-CPU entities, such as bus masters, which generally do not have memory for forming descriptors. Thus, to utilize the descriptor functions of the present invention, these non-CPU entities must stream descriptor information to the appropriate channel block, such as channel block 702. For example, these devices deliver one portion of a DMA transaction instruction at a time to the data channel 535 of the channel block portion 520 (shown in FIG. 5). Once the entire DMA transaction instruction is loaded into the data channel 535, the channel block generally operates as if in channel queuing mode.
  • In another embodiment of the present invention, channel blocks of the present invention can be configured to perform non-traditional DMA transactions. For example, a channel block can be configured to operate in a RAM channel mode. RAM channel mode provides an additional level of address translation for data transfers. Effectively, the channel block, when in RAM channel mode, acts as a memory window that points to another memory location, i.e., it provides seamless forwarding of data.
  • FIFO channel mode is another example of the non-traditional capabilities of the present invention. As with RAM channel mode, FIFO channel mode provides an additional level of address translation. FIFO channel mode, however, forwards and receives data from FIFOs (not shown). For example, when an application writes to the data channel 535 of a particular channel block, the channel block then forwards that information to the FIFO. Because the FIFO is a single point write, the channel block writes the information to the FIFO and ignores any page offsets associated with the data channel 535. Accordingly, CPUs can use burst operations, i.e., write to successive addresses, when they are actually writing to a FIFO. The FIFO channel, in effect, masks the FIFO from the CPU. Similarly, the channel block can read from a FIFO and write the read information to the data channel 535. The channel block will provide the offsets required to translate the single point FIFO address to the appropriate full address.
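  • The FIFO channel behavior can be illustrated with a simple burst loop: the CPU writes to successive addresses in the data channel window, and the channel block forwards every word to the FIFO's single write point, discarding the page offsets. The window pointer and word count are, of course, illustrative.

    #include <stddef.h>
    #include <stdint.h>

    /* Burst-write a buffer through a FIFO-mode data channel window. The
     * successive addresses allow the CPU to use burst cycles; the channel
     * block collapses them onto the FIFO's single entry point. */
    static void hcdma_fifo_burst_write(volatile uint32_t *data_channel_window,
                                       const uint32_t *buf, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            data_channel_window[i] = buf[i];
    }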
  • In summary, the present invention provides a method and apparatus for easily and securely rendering DMA transactions. The present invention permits clients such as user applications and non-host entities to utilize DMA transactions. These clients utilize DMA transactions by attempting to acquire one of possibly multiple DMA channels included in an HCDMA controller. Responsive to this attempt, the HCDMA controller can provide a DMA channel to the client. Accordingly, operating system intervention is not necessarily required when a client seeks to acquire a DMA channel.
  • Once a DMA channel is acquired, the client must set up the channel for control. For example, the client must select an operating mode, such as client queuing mode, for the DMA channel. The client can next configure the acquired DMA channel by writing to the HCDMA controller any addresses of data structures, such as the queues and the descriptor block, needed for DMA transactions. The client then programs the HCDMA controller to perform the DMA transactions, or to translate addresses if the HCDMA controller is in the RAM channel mode or the FIFO channel mode. Finally, once a client no longer needs a DMA channel, the DMA channel is released, thereby freeing it for use by another client.
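  • The channel lifecycle summarized above (acquire, set up, configure, issue transactions, release) is sketched below as a single client routine. All of the function names are hypothetical; they simply mirror the steps of FIG. 4.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical client-side API mirroring the lifecycle: acquire a
     * channel, select its mode, configure its data structures, issue the
     * transfer, and release the channel for other clients. */
    void *hcdma_acquire_channel(void *controller);
    void  hcdma_set_channel_mode(void *channel, int mode);
    void  hcdma_configure_channel(void *channel, void *queues, void *descriptor_block);
    bool  hcdma_issue_transaction(void *channel, uint64_t src, uint64_t dst,
                                  uint64_t len);
    void  hcdma_release_channel(void *channel);

    bool copy_with_hcdma(void *controller, int mode, void *queues,
                         void *descriptor_block,
                         uint64_t src, uint64_t dst, uint64_t len)
    {
        void *channel = hcdma_acquire_channel(controller);
        if (channel == NULL)
            return false;                        /* no free channel available    */

        hcdma_set_channel_mode(channel, mode);   /* step 435: select the mode    */
        hcdma_configure_channel(channel, queues, descriptor_block);  /* step 440 */
        bool ok = hcdma_issue_transaction(channel, src, dst, len);   /* step 445 */

        hcdma_release_channel(channel);          /* step 470: free the channel   */
        return ok;
    }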
  • An exemplary embodiment of the apparatus of the present invention has been illustrated in the accompanying Drawings and described in the foregoing Detailed Description. As one skilled in the art can understand, the invention is not limited to just the embodiment disclosed. Rather, the present invention is capable of numerous rearrangements, modifications and substitutions without departing from the spirit of the invention as set forth and defined in the following claims.

Claims (44)

1. A computer system comprising:
a mass storage device;
a first data transfer controller for controlling data transfers involving the mass storage device, the first data transfer controller operable in a channel free state and a channel unavailable state;
a circuit device connected to the first data transfer controller, the circuit device for requesting a particular data transfer to be controlled by the first data transfer controller; and
a second data transfer controller connected to the circuit device, the second data transfer controller for controlling data transfers and for controlling the particular data transfer responsive, at least, to the circuit device receiving an indication that the first data transfer controller is in the channel unavailable state.
2. The computer system of claim 1, wherein the circuit device is a non-host processor entity.
3. The computer system of claim 1, wherein the circuit device is a microprocessor.
4. The computer system of claim 1, wherein the second data transfer controller is associated with a memory address and wherein the first data transfer controller, when in the channel unavailable state, is configured to provide the circuit device with the memory address for the second data transfer controller.
5. The computer system of claim 1, wherein the first data transfer controller includes a storage device for storing an indication of whether the first data transfer controller is in the channel free state or in the channel unavailable state.
6. The computer system of claim 5, wherein the client is configured to read the indication from the storage device and further wherein responsive to the indication indicating that the first data transfer controller is in the channel unavailable state, request the particular data transfer to be controlled by the second data transfer controller.
7. The computer system of claim 1, wherein the first data transfer controller is operable in one of a client queuing mode and a channel queuing mode.
8. The computer system of claim 1, wherein the first data transfer controller is operable in a data streaming mode.
9. The computer system of claim 8, wherein the first data transfer controller is operable in one of a RAM channel mode and a FIFO channel mode.
10. The computer system of claim 1, wherein the mass storage device is a hard drive.
11. A data transfer controller for transferring data responsive to a request from a client, the data transfer controller comprising:
a first channel circuitry for providing a first channel for data transfers, the channel circuitry operable in a plurality of modes;
a storage device connected to the first channel circuitry, the storage device for storing an indication of a particular one of the plurality of modes; and
a control circuitry connected to the channel circuitry, the control circuitry for controlling provision of the first channel to the client;
wherein the channel circuitry is operable in the particular one of the plurality of modes.
12. The data transfer controller of claim 11, wherein the first channel is configurable in one of a client queuing mode and a channel queuing mode, and further wherein the storage device is for storing an indication that the first channel is in one of the client queuing mode and the channel queuing mode.
13. The data transfer controller of claim 11, wherein the first channel is configurable in a descriptor stream mode.
14. The data transfer controller of claim 11 included in one of a bridge and an I/O device.
15. The data transfer controller of claim 11 included in a memory controller.
16. A DMA controller comprising:
a first channel device for performing a first DMA transaction, the first channel device operable in a first mode; and
a second channel device for performing a second DMA transaction, the second channel device operable in a second mode;
wherein the first mode and the second mode are different and wherein the first DMA transaction and the second DMA transaction are different.
17. The DMA controller of claim 16, further comprising a control device connected to the first channel device and the second channel device, the control device for controlling an acquisition of the first channel device responsive to a request for a DMA transaction from a client.
18. The DMA controller of claim 16, further comprising:
a first address translation mechanism associated with the first channel device; and
a second address translation mechanism associated with the second channel device, the second address translation mechanism operable independent of the first address translation mechanism.
19. The DMA controller of claim 16, wherein the first mode is one of client queuing mode and channel queuing mode.
20. The DMA controller of claim 16, wherein the second mode is one of descriptor stream mode, FIFO channel mode and RAM channel mode.
21. A computer system comprising:
a first memory configured to store a data transfer instruction;
a second memory connected to the first memory, the second memory for storing an indication of the data transfer instruction, the indication indicating a request for performance of the data transfer instruction; and
a data transfer controller connected to the second memory, the data transfer controller for controlling a data transfer responsive to the second memory receiving the indication of the data transfer instruction.
22. The computer system of claim 21, further comprising a third memory connected to the data transfer controller, the third memory for storing a transaction base indicating a location in memory of the first memory.
23. The computer system of claim 22, wherein the third memory is located proximate the data transfer controller.
24. The computer system of claim 23, wherein the third memory is I/O memory mapped.
25. The computer system of claim 21, wherein the third memory is for storing a response data field and a response address field.
26. The computer system of claim 21, wherein the first memory is configured to store a descriptor.
27. A computer system comprising:
a first means configured to store a data transfer instruction;
a second means connected to the first means, the second means for storing an indication of the data transfer instruction, the indication indicating a request for performance of the data transfer instruction; and
a data transfer means connected to the second means, the data transfer means for controlling a data transfer responsive to the second means receiving the indication of the data transfer instruction.
28. The computer system of claim 27, further comprising a third means connected to the data transfer means, the third means for storing a transaction base indicating a location in memory of the first means.
29. The computer system of claim 28, wherein the third means is located proximate the data transfer means.
30. The computer system of claim 29, wherein the third means is I/O memory mapped.
31. The computer system of claim 27, wherein the third means is for storing a response data field and a response address field.
32. The computer system of claim 27, wherein the first means is configured to store a descriptor.
33. A computer system comprising:
a mass storage means;
a first data transfer means for controlling data transfers involving the mass storage means, the first data transfer means operable in a channel free state and a channel unavailable state;
a logical means connected to the first data transfer means, the logical means for requesting a particular data transfer to be controlled by the first data transfer means; and
a second data transfer means connected to the logical means, the second data transfer means for controlling data transfers and for controlling the particular data transfer responsive, at least, to the logical means receiving an indication that the first data transfer means is in the channel unavailable state.
34. The computer system of claim 33, wherein the second data transfer means is associated with a memory address and wherein the first data transfer means, when in the channel unavailable state, is configured to provide the logical means with the memory address for the second data transfer means.
35. The computer system of claim 33, wherein the first data transfer means includes a storage means for storing an indication of whether the first data transfer means is in the channel free state or in the channel unavailable state.
36. The computer system of claim 35, wherein the client is configured to read the indication from the storage means and further wherein responsive to the indication indicating that the first data transfer means is in the channel unavailable state, request the particular data transfer to be controlled by the second data transfer means.
37. The computer system of claim 33, wherein the first data transfer means is operable in one of a client queuing mode and a channel queuing mode.
38. The computer system of claim 33, wherein the first data transfer means is operable in a data streaming mode.
39. The computer system of claim 38, wherein the first data transfer means is operable in one of a RAM channel mode and a FIFO channel mode.
40. A method for transferring data using a controller including a channel, the channel being operable in a plurality of data transfer modes, the method comprising the steps of:
requesting control of the channel;
responsive to receiving control of the channel, configuring the channel to be operable in a particular one of the plurality of modes; and
requesting data to be transferred through the channel.
41. The method of claim 40, further comprising the step of releasing the control of the channel.
42. The method of claim 40, wherein the plurality of data transfer modes includes a software queuing mode and a client queuing mode.
43. The method of claim 42, wherein the plurality of data transfer modes includes a descriptor streaming mode.
44. The method of claim 42, wherein the plurality of data transfer modes includes FIFO channel mode and RAM channel mode.
US11/136,164 2000-05-03 2005-05-23 Highly concurrent DMA controller with programmable DMA channels Abandoned US20060010261A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/136,164 US20060010261A1 (en) 2000-05-03 2005-05-23 Highly concurrent DMA controller with programmable DMA channels

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/564,341 US6898646B1 (en) 2000-05-03 2000-05-03 Highly concurrent DMA controller with programmable DMA channels
US11/136,164 US20060010261A1 (en) 2000-05-03 2005-05-23 Highly concurrent DMA controller with programmable DMA channels

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/564,341 Continuation US6898646B1 (en) 2000-05-03 2000-05-03 Highly concurrent DMA controller with programmable DMA channels

Publications (1)

Publication Number Publication Date
US20060010261A1 true US20060010261A1 (en) 2006-01-12

Family

ID=34590510

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/564,341 Expired - Lifetime US6898646B1 (en) 2000-05-03 2000-05-03 Highly concurrent DMA controller with programmable DMA channels
US11/136,164 Abandoned US20060010261A1 (en) 2000-05-03 2005-05-23 Highly concurrent DMA controller with programmable DMA channels

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/564,341 Expired - Lifetime US6898646B1 (en) 2000-05-03 2000-05-03 Highly concurrent DMA controller with programmable DMA channels

Country Status (1)

Country Link
US (2) US6898646B1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4385247B2 (en) * 2003-08-04 2009-12-16 日本電気株式会社 Integrated circuit and information processing apparatus
US7844752B2 (en) * 2005-11-30 2010-11-30 International Business Machines Corporation Method, apparatus and program storage device for enabling multiple asynchronous direct memory access task executions
EP1801700B1 (en) * 2005-12-23 2013-06-26 Texas Instruments Inc. Method and systems to restrict usage of a DMA channel
EP1971925A4 (en) * 2005-12-23 2009-01-07 Texas Instruments Inc Methods and systems to restrict usage of a dma channel
US20080126600A1 (en) * 2006-08-31 2008-05-29 Freescale Semiconductor, Inc. Direct memory access device and methods
US7873757B2 (en) * 2007-02-16 2011-01-18 Arm Limited Controlling complex non-linear data transfers
KR100951856B1 (en) * 2007-11-27 2010-04-12 한국전자통신연구원 SoC for Multimedia system
CN110471747A (en) * 2019-07-04 2019-11-19 深圳市通创通信有限公司 A kind of scheduling application method, device and the terminal device of DMA multichannel


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4419737A (en) * 1979-06-15 1983-12-06 Tokyo Shibaura Denki Kabushiki Kaisha Setting device for protective control system
US4864601A (en) * 1988-04-20 1989-09-05 Berry Wayne F Integrated voice data workstation
US5809335A (en) * 1994-10-13 1998-09-15 Yamaha Corporation Data transfer apparatus capable of handling DMA block transfer interruptions
US5875351A (en) * 1995-12-11 1999-02-23 Compaq Computer Corporation System for requesting access to DMA channel having address not in DMA registers by replacing address of DMA register with address of requested DMA channel
US6128674A (en) * 1997-08-08 2000-10-03 International Business Machines Corporation Method of minimizing host CPU utilization in driving an adapter by residing in system memory a command/status block a soft interrupt block and a status block queue
US6671708B1 (en) * 1998-11-26 2003-12-30 Matsushita Electric Industrial Co., Ltd. Processor and image processing device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050091239A1 (en) * 2000-12-21 2005-04-28 Ward Wayne D. Queue bank repository and method for sharing limited queue banks in memory
US20050289253A1 (en) * 2004-06-24 2005-12-29 Edirisooriya Samantha J Apparatus and method for a multi-function direct memory access core
US9075559B2 (en) 2009-02-27 2015-07-07 Nvidia Corporation Multiple graphics processing unit system and method
US9135675B2 (en) 2009-06-15 2015-09-15 Nvidia Corporation Multiple graphics processing unit display synchronization system and method
US20110063304A1 (en) * 2009-09-16 2011-03-17 Nvidia Corporation Co-processing synchronizing techniques on heterogeneous graphics processing units
JP2011187039A (en) * 2010-03-05 2011-09-22 Lsi Corp Dma engine for concurrent data manipulation
US9171350B2 (en) 2010-10-28 2015-10-27 Nvidia Corporation Adaptive resolution DGPU rendering to provide constant framerate with free IGPU scale up
US9818379B2 (en) 2013-08-08 2017-11-14 Nvidia Corporation Pixel data transmission over multiple pixel interfaces
US20220121588A1 (en) * 2020-10-16 2022-04-21 Realtek Semiconductor Corporation Direct memory access (DMA) controller, electronic device using the DMA controller and method of operating the DMA controller
US11829310B2 (en) * 2020-10-16 2023-11-28 Realtek Semiconductor Corporation Direct memory access (DMA) controller, electronic device using the DMA controller and method of operating the DMA controller
US11860804B2 (en) 2020-10-16 2024-01-02 Realtek Semiconductor Corporation Direct memory access (DMA) controller, electronic device using the DMA controller and method of operating the DMA controller

Also Published As

Publication number Publication date
US6898646B1 (en) 2005-05-24

Similar Documents

Publication Publication Date Title
US20060010261A1 (en) Highly concurrent DMA controller with programmable DMA channels
US5918028A (en) Apparatus and method for smart host bus adapter for personal computer cards
US6813653B2 (en) Method and apparatus for implementing PCI DMA speculative prefetching in a message passing queue oriented bus system
US6353877B1 (en) Performance optimization and system bus duty cycle reduction by I/O bridge partial cache line write
US6704831B1 (en) Method and apparatus for converting address information between PCI bus protocol and a message-passing queue-oriented bus protocol
US5953538A (en) Method and apparatus providing DMA transfers between devices coupled to different host bus bridges
TWI466060B (en) Translation unit,display pipe,method and apparatus of streaming translation in display pipe
US6128669A (en) System having a bridge with distributed burst engine to decouple input/output task from a processor
US6886171B2 (en) Caching for I/O virtual address translation and validation using device drivers
US5978858A (en) Packet protocol and distributed burst engine
US6347347B1 (en) Multicast direct memory access storing selected ones of data segments into a first-in-first-out buffer and a memory simultaneously when enabled by a processor
US20050091432A1 (en) Flexible matrix fabric design framework for multiple requestors and targets in system-on-chip designs
US20050114559A1 (en) Method for efficiently processing DMA transactions
US20040068602A1 (en) Apparatus, method and system for accelerated graphics port bus bridges
US20040186931A1 (en) Transferring data using direct memory access
US11635902B2 (en) Storage device processing stream data, system including the same, and operation method
US8990456B2 (en) Method and apparatus for memory write performance optimization in architectures with out-of-order read/request-for-ownership response
US7886088B2 (en) Device address locking to facilitate optimum usage of the industry standard IIC bus
EP1288785A2 (en) Method and interface for improved efficiency in performing bus-to-bus read data transfers
US6779062B1 (en) Streamlining ATA device initialization
US6883057B2 (en) Method and apparatus embedding PCI-to-PCI bridge functions in PCI devices using PCI configuration header type 0
US5941970A (en) Address/data queuing arrangement and method for providing high data through-put across bus bridge
US7552247B2 (en) Increased computer peripheral throughput by using data available withholding
US6119191A (en) Performing PCI access cycles through PCI bridge hub routing
JP3251903B2 (en) Method and computer system for burst transfer of processor data

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION