ADM-XRC SDK 2.8.1 User Guide (Linux)
© Copyright 2001-2009 Alpha Data
DMA (Direct Memory Access) is an efficient way to transfer a block of data into the host computer's memory with as little burden on the CPU as possible. Bus-mastering PCI devices contain dedicated logic for performing DMA transfers. To perform a DMA transfer, the CPU first programs the PCI device's registers with the address at which to transfer the data, the amount of data to transfer, and the direction in which the data should travel. It then kicks off the DMA transfer, and typically the device interrupts the CPU once the transfer has completed. The advantage of DMA, then, is that the CPU can perform other tasks while the PCI device performs the data transfer.
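To make this sequence concrete, the sketch below shows the register-programming steps in C. The register map (DMA_ADDR, DMA_COUNT, etc.) is purely hypothetical and invented for illustration; it does not correspond to the PCI9080/PCI9656 or any other real device.

```c
/*
 * Illustrative sketch of the generic DMA programming sequence described
 * above. The register offsets and bit assignments are hypothetical;
 * a real driver would use the offsets given in the device's data sheet.
 */
#include <stdint.h>

#define DMA_ADDR   0x00 /* bus address of the host buffer */
#define DMA_COUNT  0x04 /* number of bytes to transfer */
#define DMA_CTRL   0x08 /* bit 0 = direction, bit 1 = start */

static void start_dma(volatile uint32_t *regs,
                      uint32_t bus_addr, uint32_t nbytes, int to_device)
{
    regs[DMA_ADDR  / 4] = bus_addr;                /* where the data lives */
    regs[DMA_COUNT / 4] = nbytes;                  /* how much to move */
    regs[DMA_CTRL  / 4] = (to_device ? 1 : 0) | 2; /* direction + start */
    /* The CPU is now free to do other work; completion is typically
     * signalled by an interrupt from the device. */
}
```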
Alpha Data recommends using DMA transfers (that is, performed by the PCI device) for large blocks of data, and using Direct Slave transfers (that is, performed by the CPU) for random access or for access to FPGA registers. On many platforms, having the CPU perform bulk data transfer is highly inefficient. For example, most x86 chipsets do not perform bursting at all when the CPU performs reads of a PCI device.
The local bus bridge (PCI9080/PCI9656 etc.) in an ADM-XRC series card contains one or more DMA engines. Software running on the host can use these DMA engines for the rapid transfer of data to and from the FPGA, using API functions such as ADMXRC2_DoDMA and ADMXRC2_DoDMAImmediate.
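For example, a host-to-FPGA transfer might look like the following sketch. ADMXRC2_OpenCard, ADMXRC2_DoDMAImmediate, ADMXRC2_GetStatusString and ADMXRC2_CloseCard are SDK functions, but the exact parameter order, types and the meaning of the zero-valued arguments shown here are assumptions made for illustration; consult admxrc2.h and the API reference for the authoritative prototypes.

```c
/*
 * Minimal sketch: transfer a buffer from host memory to the FPGA using
 * ADMXRC2_DoDMAImmediate. Parameter order and the zero-valued arguments
 * are assumptions; see admxrc2.h for the authoritative prototypes.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "admxrc2.h"

int main(void)
{
    ADMXRC2_HANDLE card;
    ADMXRC2_STATUS status;
    static uint32_t buffer[1024];

    status = ADMXRC2_OpenCard(0, &card); /* open the first ADM-XRC card */
    if (status != ADMXRC2_SUCCESS) {
        fprintf(stderr, "failed to open card: %s\n",
                ADMXRC2_GetStatusString(status));
        return EXIT_FAILURE;
    }

    /* ... fill buffer[] with data destined for the FPGA ... */

    /* Host-to-FPGA transfer to local bus address 0 on DMA channel 0.
     * A NULL event is assumed here to make the call block until the
     * DMA engine has finished. */
    status = ADMXRC2_DoDMAImmediate(card, buffer, sizeof(buffer),
                                    0 /* local address */,
                                    ADMXRC2_PCITOLOCAL /* direction */,
                                    0 /* channel */, 0 /* flags */,
                                    0 /* mode */, 0 /* timeout */,
                                    NULL /* event */);
    if (status != ADMXRC2_SUCCESS) {
        fprintf(stderr, "DMA failed: %s\n",
                ADMXRC2_GetStatusString(status));
    }

    ADMXRC2_CloseCard(card);
    return 0;
}
```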
The local bus protocol of a DMA-initiated burst is the same as that of a direct slave burst. Assuming demand-mode DMA is not used, a DMA-initiated burst is indistinguishable from a direct slave burst. This can be a useful property, as it often permits an FPGA design to be tested first using direct slave transfers (for convenience), and later on with DMA transfers (for throughput).
The following figure illustrates the differences between Direct Slave transfers (CPU-initiated) and DMA transfers:
[Figure: (a) Direct Slave, host to FPGA; (b) DMA, host to FPGA; (c) Direct Slave, FPGA to host; (d) DMA, FPGA to host]
In (a) and (b) above, the flow of data is from the host to the FPGA in both cases, but they differ with respect to which entity initiates the transfers on the PCI bus.
In (c) and (d) above, the flow of data is from the FPGA to the host in both cases, but they differ with respect to which entity initiates the transfers on the PCI bus.
To sum up the differences between DMA and Direct Slave transfers:
|                                  | Direct Slave                  | DMA                           |
|----------------------------------|-------------------------------|-------------------------------|
| Local bus master is...           | Bridge (PCI9080/PCI9656 etc.) | Bridge (PCI9080/PCI9656 etc.) |
| Local bus slave is...            | FPGA                          | FPGA                          |
| PCI bus master (initiator) is... | Host CPU                      | Bridge (PCI9080/PCI9656 etc.) |
| PCI bus slave (target) is...     | Bridge (PCI9080/PCI9656 etc.) | Host CPU                      |
| Constant addressing mode         | implemented by driver         | yes                           |
| LEOT mode                        | N/A                           | yes                           |
| Demand mode                      | N/A                           | yes                           |
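To make the contrast in the table concrete, the following sketch performs the same host-to-FPGA copy both ways. It assumes the SDK exposes a CPU-visible mapping of FPGA space via ADMXRC2_GetSpaceInfo; the structure and field names (ADMXRC2_SPACE_INFO, VirtualBase) are assumptions for illustration and should be checked against admxrc2.h.

```c
/*
 * Contrast of the two transfer styles summarized in the table above.
 * Struct and field names for the direct slave mapping are assumptions;
 * see admxrc2.h for the exact definitions.
 */
#include <stddef.h>
#include <stdint.h>
#include "admxrc2.h"

/* Direct Slave: the host CPU is the PCI initiator. Each store below
 * becomes a CPU-initiated write to the FPGA through the bridge. */
static void directSlaveWrite(ADMXRC2_HANDLE card,
                             const uint32_t *data, size_t nwords)
{
    ADMXRC2_SPACE_INFO spaceInfo; /* assumed name of the space-info struct */
    volatile uint32_t *fpga;
    size_t i;

    ADMXRC2_GetSpaceInfo(card, 0, &spaceInfo); /* map of FPGA space 0 */
    fpga = (volatile uint32_t *) spaceInfo.VirtualBase;
    for (i = 0; i < nwords; i++) {
        fpga[i] = data[i];
    }
}

/* DMA: the bridge is the PCI initiator; the CPU merely sets up the
 * transfer and is free until it completes. Argument order as in the
 * earlier sketch, and equally an assumption. */
static void dmaWrite(ADMXRC2_HANDLE card, uint32_t *data, size_t nbytes)
{
    ADMXRC2_DoDMAImmediate(card, data, (unsigned long) nbytes,
                           0 /* local address */, ADMXRC2_PCITOLOCAL,
                           0 /* channel */, 0 /* flags */, 0 /* mode */,
                           0 /* timeout */, NULL /* event */);
}
```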
The DMA engines are configurable to operate in a variety of modes. For a discussion of these modes, click on the following topics:
- Constant addressing mode
- LEOT mode
- Demand mode
The following topics provide further details about the practicalities of DMA transfers on an ADM-XRC series card:
- What happens during a DMA transfer?