Fast tensor-dot on sparse arrays with GPU in any programming language?


I'm working with two multi-dimensional arrays, arr and cost. arr is dense with shape (width, height, m, n) and cost is sparse with shape (width, height, width, height).

Values: width and height are around 5000, m is less than 100, and n is less than 10. The cost array is strictly block-sparse: each slice cost[i,j,:,:] contains only a single nonzero k*k block, where k is less than 50.
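To make the block-sparse layout concrete, here is a minimal sketch of how such a cost could be stored as a SciPy CSR matrix over the flattened (width*height, width*height) shape. The sizes and the block placement (anchored at (i, j) and clipped at the borders) are made up for illustration:

import numpy as np
from scipy import sparse

width, height, k = 6, 6, 3  # toy sizes, not the real ones

rows, cols, vals = [], [], []
for i in range(width):
    for j in range(height):
        r = i * height + j  # flattened source index
        # one k*k block per (i, j); here hypothetically anchored at (i, j)
        for di in range(min(k, width - i)):
            for dj in range(min(k, height - j)):
                c = (i + di) * height + (j + dj)  # flattened target index
                rows.append(r)
                cols.append(c)
                vals.append(np.random.random())

cost2d = sparse.csr_matrix((vals, (rows, cols)),
                           shape=(width * height, width * height))
print(cost2d.nnz, "nonzeros out of", (width * height) ** 2)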

Now I want this:

result = np.tensordot(cost, arr, axes=[[2,3],[0,1]])

The result then has shape (width, height, m, n), the same as arr. However, cost is far too large to store as a dense array.

So, the question is: How to do a fast tensor-dot (better with GPU) on sparse arrays?

Ideas in any programming language are welcome, including but not limited to Python, C++, Julia, etc.


I've tried:

CPU versions (multithreaded): C++ (simply with std::vector), numpy/scipy, Julia

GPU versions: cupy, CUDA.jl

The C++ version works well with some simple for-loops over std::vectors, but it is not fast enough.

The cupy and CUDA.jl versions both run very fast in small tests, with width and height set to small values and both cost and arr defined as dense arrays. But I don't know how to adapt them to a sparse cost.
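For reference, the dense GPU baseline looks roughly like the sketch below (toy sizes and names, not my actual code); it is only feasible while cost fits in memory, since a dense cost needs (width*height)**2 elements:

import cupy as cp

width, height, m, n = 50, 50, 8, 4  # toy sizes
cost = cp.random.random((width, height, width, height)).astype(cp.float32)
arr = cp.random.random((width, height, m, n)).astype(cp.float32)

result = cp.tensordot(cost, arr, axes=[[2, 3], [0, 1]])
print(result.shape)  # (width, height, m, n)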


There are 2 answers

Answer by C.K. (accepted)

I finally solved this via cupy with some reshaping.

I had already implemented the Julia tensor-dot version with reshaping, but I didn't realize that reshaping was the key to solving this 2D-sparse-matrix problem. Thanks to @hpaulj, who reminded me of this.

The minimal example is:

import cupy as cp
import numpy as np
from cupyx.scipy import sparse as sparse_gpu
from scipy import sparse as sparse_cpu

DTYPE_NP, DTYPE_CP = np.float32, cp.float32  # example dtypes
width, height, nvar, nvalue = 100, 100, 64, 8  # example sizes; the real case is ~5000x5000
_lambda = 0.5  # example mixing weight

# sparse_cost2d_cpu = get_cost_cpu(...)  # CPU-side CSR matrix, shape (width*height, width*height)
data = cp.asarray(sparse_cost2d_cpu.data, dtype=DTYPE_CP)
indices = cp.asarray(sparse_cost2d_cpu.indices)
indptr = cp.asarray(sparse_cost2d_cpu.indptr)
sparse_cost2d_gpu = sparse_gpu.csr_matrix(
    (data, indices, indptr),
    shape=(width * height, width * height), dtype=DTYPE_CP)

arr4d_cpu = np.random.random((width, height, nvar, nvalue)).astype(DTYPE_NP)
arr2d_cpu = arr4d_cpu.reshape((width * height, nvar * nvalue))  # flatten to 2D
arr2d_gpu = cp.asarray(arr2d_cpu)

for i in range(100):
    # the sparse 2D dot is the tensordot over the flattened axes
    new_arr2d_gpu = sparse_cost2d_gpu.dot(arr2d_gpu)
    arr2d_gpu = new_arr2d_gpu * _lambda + arr2d_gpu * (1 - _lambda)
    arr4d_gpu = arr2d_gpu.reshape((width, height, nvar, nvalue))
    # Some other computation with arr4d_gpu
    arr2d_gpu = arr4d_gpu.reshape((width * height, nvar * nvalue))
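As a sanity check (a minimal sketch with small, made-up sizes, independent of the code above), the 2D sparse dot reproduces the original dense tensordot after reshaping back to 4D:

import numpy as np
from scipy import sparse as sparse_cpu

w, h, m, n = 8, 8, 3, 2  # toy sizes
cost4d = np.random.random((w, h, w, h)).astype(np.float32)
arr4d = np.random.random((w, h, m, n)).astype(np.float32)

ref = np.tensordot(cost4d, arr4d, axes=[[2, 3], [0, 1]])

cost2d = sparse_cpu.csr_matrix(cost4d.reshape(w * h, w * h))
out2d = cost2d.dot(arr4d.reshape(w * h, m * n))
assert np.allclose(ref, out2d.reshape(w, h, m, n), atol=1e-4)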
Answer by hpaulj

Your tensordot can be evaluated with other numpy tools, and expressed as a dot of 2d arrays.

For illustration purposes I won't push the sizes or make anything sparse; the sparsity structure of cost isn't all that clear anyway.

Smallish dimensions, with reproducible values:

In [852]: width, height, m, n = 10,10,5,4    
In [853]: arr = np.arange(width*height*m*n).reshape(width,height,m,n)    
In [854]: cost = np.arange(10**4).reshape(10,10,10,10)

Your tensordot:

In [856]: res = np.tensordot(cost,arr,((2,3),(0,1)))    
In [857]: res.shape
Out[857]: (10, 10, 5, 4)

The equivalent using einsum:

In [859]: res1=np.einsum('ijkl,klmn->ijmn',cost,arr)    
In [860]: res1.shape
Out[860]: (10, 10, 5, 4)

In [861]: np.allclose(res,res1)
Out[861]: True

Using matmul/dot with 2d arrays:

In [863]: res2=cost.reshape(10*10,10*10)@arr.reshape(10*10,-1)
In [864]: res2.shape
Out[864]: (100, 20)

In [865]: np.allclose(res, res2.reshape(res.shape))
Out[865]: True

tensordot does the same reshaping under the hood.
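And once it's a 2d dot, a sparse cost slots right in. A sketch with scipy.sparse (this cost isn't actually sparse; it just shows the mechanics):

from scipy import sparse

cost2d = sparse.csr_matrix(cost.reshape(10 * 10, 10 * 10))
res3 = cost2d @ arr.reshape(10 * 10, -1)
np.allclose(res, res3.reshape(res.shape))  # True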