GPUDirect RDMA transfer from GPU to remote host


Scenario:

I have two machines, a client and a server, connected over InfiniBand. The server machine has an NVIDIA Fermi GPU, while the client machine has no GPU. An application running on the server uses the GPU for some calculations. The result data on the GPU is never used by the server machine itself; it is sent directly to the client machine without any processing. Right now I'm doing a cudaMemcpy to get the data from the GPU into the server's system memory, then sending it off to the client over a socket. I'm using SDP to enable RDMA for this communication.

Question:

Is it possible for me to take advantage of NVIDIA's GPUDirect technology to get rid of the cudaMemcpy call in this situation? I believe I have the GPUDirect drivers correctly installed, but I don't know how to initiate the data transfer without first copying it to the host.

My guess is that it isn't possible to use SDP in conjunction with GPUDirect, but is there some other way to initiate an RDMA data transfer from the server machine's GPU to the client machine?

Bonus: If someone has a simple way to test whether I have the GPUDirect dependencies correctly installed, that would be helpful as well!

There are 2 answers

Answer by harrism (accepted):

Yes, it is possible with supporting networking hardware. See the GPUDirect RDMA documentation.
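For a concrete starting point, the central step is registering the GPU buffer directly with the verbs API. Below is a minimal sketch (my addition, not from the answer), assuming a GPUDirect-RDMA-capable GPU (Kepler-class Tesla/Quadro or newer) and HCA, with the nvidia-peermem (formerly nv_peer_mem) kernel module loaded; with those in place, ibv_reg_mr() accepts a pointer returned by cudaMalloc(), which is what removes the staging cudaMemcpy. The file name and build flags are assumptions.

```c
/*
 * Sketch: register GPU memory with the InfiniBand HCA for GPUDirect RDMA.
 * Build (paths are assumptions):
 *   gcc gdr_reg.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 \
 *       -lcudart -libverbs -o gdr_reg
 */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t len = 1 << 20;          /* 1 MiB result buffer */

    /* Open the first RDMA device and allocate a protection domain. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the result buffer in GPU memory instead of host memory. */
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* The key step: register the device pointer with the HCA. This only
     * succeeds when GPUDirect RDMA is fully set up, which also makes it
     * a quick installation check for the bonus question. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr on GPU memory failed (GPUDirect RDMA missing?)");
        return 1;
    }
    printf("GPU buffer registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    /* From here, mr->lkey / mr->rkey go into ordinary RDMA work requests
     * (ibv_post_send with IBV_WR_RDMA_WRITE, etc.). */
    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

If the registration fails with "Bad address" or similar while an ordinary host-memory registration succeeds, the peer-memory kernel module is commonly the missing piece.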

Answer by Dmitry:

I would like to share my investigation regarding this question. To use GPUDirect between the GPU and the NIC, your network card must support RDMA. So if you have, for example, an NVIDIA Mellanox MCX623106AN-CDAT ConnectX®-6 Dx network card and an NVIDIA Quadro card with RDMA support, you can use this example for sending data between the GPU and the NIC:

https://github.com/Mellanox/gpu_direct_rdma_access
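The linked repository demonstrates the end-to-end client/server flow. As a complementary sketch (mine, not taken from the repository), and assuming an already-connected reliable-connection queue pair plus an out-of-band exchange of the client's buffer address and rkey, the transfer itself is just an ordinary one-sided RDMA WRITE whose scatter/gather entry points at the registered GPU buffer. The function name and parameters here are illustrative:

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Post an RDMA WRITE from a GPU buffer previously registered with
 * ibv_reg_mr() (see the registration sketch above). Returns 0 on success. */
int rdma_write_gpu(struct ibv_qp *qp, struct ibv_mr *gpu_mr, size_t len,
                   uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)gpu_mr->addr, /* GPU device pointer */
        .length = (uint32_t)len,
        .lkey   = gpu_mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* completion on send CQ */
    wr.wr.rdma.remote_addr = remote_addr;         /* client buffer address */
    wr.wr.rdma.rkey        = remote_rkey;         /* client buffer rkey */

    /* The HCA reads the payload directly from GPU memory; no cudaMemcpy. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

The client side needs no GPU at all: it registers a plain host buffer and hands its address and rkey to the server, which is exactly the asymmetric setup described in the question.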