Programme Matter and Technologies
THIS STILL NEEDS A PROPER TITLE
Luis Ardila Perez, Timo Dritschler
GPU Computing
Computing requirements for modern DAQ systems have increased significantly, and standard CPU computing can no longer provide the necessary computing power. GPUs (Graphics Processing Units) have become accessible for general-purpose computing and offer highly parallel, high-performance computing structures. Using GPUs in distributed computing systems can greatly increase processing performance; however, distributing data between the nodes of GPU computing clusters remains a challenge.
GPUDirect with FPGAs
• Remote Direct Memory Access (RDMA) can be used to connect FPGAs directly to GPUs
• This combines highly flexible FPGA-based DAQ hardware with the high-performance computing power of GPUs
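The benefit of the direct path can be sketched with a minimal throughput model: a conventional transfer is staged through system memory (two copies), while RDMA writes into GPU memory in a single copy. The function name and all bandwidth figures below are illustrative assumptions, not measurements from the chart.

```python
def effective_throughput(hop_bandwidths_mb_s):
    """Effective throughput of a store-and-forward copy chain:
    the time per byte of each hop adds up, so the combined rate is
    the harmonic combination of the per-hop bandwidths."""
    total_time_per_mb = sum(1.0 / bw for bw in hop_bandwidths_mb_s)
    return 1.0 / total_time_per_mb

# Conventional path: FPGA -> system memory -> GPU memory (two copies).
# 6000 MB/s per hop is an assumed figure for illustration only.
conventional = effective_throughput([6000.0, 6000.0])

# RDMA path: FPGA -> GPU memory directly (one copy).
rdma = effective_throughput([6000.0])

assert rdma > conventional  # the single-copy path always wins in this model
```

Under this simple model, splitting a transfer into two equal-bandwidth copies halves the effective rate, which is why eliminating the system-memory staging step matters.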
[Diagram: RDMA — data paths between CPU, GPU, memory, and network; red: conventional transfer, green: RDMA transfer]
[Chart: throughput (MB/s, up to ~6000) vs. data size (B, 10^5 to 10^9) for FPGA to GPU memory and FPGA to system memory transfers]
GPUs for fast feedback loops
• RDMA enables low-latency data transfers
• Paired with efficient algorithms, GPUs can be used to realize tight feedback loops
• We illustrate the performance of such a system with a prototype implementation for the CMS Track Trigger, shown in the chart on the left
• In this example, we show that GPUs can be used to realize feedback systems with less than 10 μs latency!
Performance of the FPGA-based RDMA
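A sub-10 μs feedback loop can be reasoned about as a latency budget across the stages of the loop. The component values below are hypothetical assumptions for illustration; the poster reports only the total (< 10 μs), not this breakdown.

```python
# Hypothetical latency budget for a sub-10 microsecond feedback loop.
# Every component value is an assumption, not a measured figure.
budget_us = {
    "fpga_to_gpu_rdma": 2.0,    # data DMA'd directly into GPU memory
    "gpu_kernel": 5.0,          # processing on the GPU
    "gpu_to_fpga_result": 2.0,  # result transferred back to the FPGA
}

total = sum(budget_us.values())
assert total < 10.0  # the whole loop must fit inside the latency target
```

Framing the system this way makes clear that the transfer stages and the compute stage each consume a comparable share of the budget, so a low-latency transfer mechanism such as RDMA is as important as an efficient GPU algorithm.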