Elementwise addition of a sparse scipy matrix and vector with broadcasting


I'm trying to figure out how to best perform elementwise addition (and subtraction) of a sparse matrix and a sparse vector. I found this trick on SO:

import numpy as np
import scipy.sparse as sp

mat = sp.csc_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
vec = sp.csr_matrix([[1, 2, 1]])
# add vec's entry for each column to that column's stored values
mat.data += np.repeat(vec.toarray()[0], np.diff(mat.indptr))

But unfortunately it only updates non-zero values:

print(mat.todense())
[[2 0 0]
 [0 3 0]
 [0 0 2]]
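The reason: mat.data holds only the stored entries, so an in-place += over mat.data can never create new ones. A quick look (my sketch) at what that trick actually sees:

```python
import numpy as np
import scipy.sparse as sp

mat = sp.csc_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

# data holds just the stored (nonzero) entries, one per diagonal element
print(mat.data)             # [1 1 1]
# np.diff(indptr) gives the number of stored entries per column
print(np.diff(mat.indptr))  # [1 1 1]
```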

The actual accepted answer on the SO thread:

from numpy.lib.stride_tricks import as_strided

def sum(X, v):
    rows, cols = X.shape
    # view X.indptr as overlapping (start, stop) pairs, one per row (assumes CSR)
    row_start_stop = as_strided(X.indptr, shape=(rows, 2),
                                strides=2*X.indptr.strides)
    for row, (start, stop) in enumerate(row_start_stop):
        data = X.data[start:stop]
        data -= v[row]

sum(mat, vec.A[0])

Does the same thing: it only touches the stored (nonzero) entries. I'm unfortunately out of ideas by now, so I was hoping you could help me figure out the best way to solve this.

EDIT: I'm expecting it to do the same as a dense version of this would do:

np.eye(3) + np.asarray([[1,2,1]])
array([[ 2.,  2.,  1.],
       [ 1.,  3.,  1.],
       [ 1.,  2.,  2.]])
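For completeness, the brute-force route that gives this result is a dense round-trip; a minimal sketch (fine for small matrices, though it defeats the purpose of sparsity for large ones):

```python
import numpy as np
import scipy.sparse as sp

mat = sp.csc_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
vec = sp.csr_matrix([[1, 2, 1]])

# broadcasting only exists for dense arrays, so densify, add, re-sparsify
result = sp.csr_matrix(mat.toarray() + vec.toarray())
print(result.toarray())
```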

Thanks


There are 2 answers

maxymoo

Your original matrix is sparse and the vector is sparse, but in the result the columns corresponding to nonzero coordinates of your vector will be dense.

So we may as well materialise those columns as dense matrices:

def addvec(mat, vec):
    for i in vec.nonzero()[1]:
        mat[:, i] = sp.csc_matrix(mat[:, i].todense() + vec[0, i])
    return mat
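A quick sanity check (my addition, not part of the answer) that this reproduces the dense result from the question. Note scipy emits a SparseEfficiencyWarning here, since the assignment changes the sparsity structure:

```python
import numpy as np
import scipy.sparse as sp

def addvec(mat, vec):
    # densify only the columns where vec has a nonzero entry
    for i in vec.nonzero()[1]:
        mat[:, i] = sp.csc_matrix(mat[:, i].todense() + vec[0, i])
    return mat

out = addvec(sp.csc_matrix(np.eye(3)), sp.csr_matrix([[1, 2, 1]]))
print(out.toarray())  # matches np.eye(3) + [[1, 2, 1]]
```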
hpaulj

Some tests with 10x10 sparse mat and vec:

In [375]: mat=sparse.rand(10,10,.1) 
In [376]: mat
Out[376]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 10 stored elements in COOrdinate format>

In [377]: mat.A
Out[377]: 
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
         0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
         0.15568621,  0.59916335,  0.        ,  0.        ,  0.        ],
       ...
       [ 0.        ,  0.        ,  0.15552687,  0.        ,  0.        ,
         0.47483064,  0.        ,  0.        ,  0.        ,  0.        ]])

In [378]: vec=sparse.coo_matrix([0,1,0,2,0,0,0,3,0,0]).tocsr()

In [379]: vec
Out[379]: 
<1x10 sparse matrix of type '<class 'numpy.int32'>'
    with 3 stored elements in Compressed Sparse Row format>

maxymoo's solution:

def addvec(mat,vec):
    Mc = mat.tocsc()
    for i in vec.nonzero()[1]:
        Mc[:,i]=sparse.csc_matrix(Mc[:,i].todense()+vec[0,i])
    return Mc    

And variation that uses lil format, which is supposed to be more efficient when changing the sparsity structure:

def addvec2(mat,vec):
    Ml=mat.tolil()
    vec=vec.tocoo()                                            
    for i,v in zip(vec.col, vec.data):
        Ml[:,i]=sparse.coo_matrix(Ml[:,i].A+v)
    return Ml
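A sanity check (my sketch) that addvec2 matches the dense sum on a random case like the one above; I use .toarray() in place of the .A shorthand:

```python
import numpy as np
import scipy.sparse as sp

def addvec2(mat, vec):
    Ml = mat.tolil()
    vec = vec.tocoo()
    # iterate only over vec's nonzero (column, value) pairs
    for i, v in zip(vec.col, vec.data):
        Ml[:, i] = sp.coo_matrix(Ml[:, i].toarray() + v)
    return Ml

np.random.seed(0)
mat = sp.random(10, 10, 0.1, format='csr')
vec = sp.coo_matrix([[0, 1, 0, 2, 0, 0, 0, 3, 0, 0]]).tocsr()
out = addvec2(mat, vec)
assert np.allclose(out.toarray(), mat.toarray() + vec.toarray())
```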

The summation has 38 nonzero terms, up from 10 in the original mat, because it densifies the 3 columns picked out by vec. That's a big change in sparsity.

In [382]: addvec(mat,vec)
Out[382]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in Compressed Sparse Column format>

In [383]: _.A
Out[383]: 
array([[ 0.        ,  1.        ,  0.        ,  2.        ,  0.        ,
         0.        ,  0.        ,  3.        ,  0.        ,  0.        ],
       [ 0.        ,  1.        ,  0.        ,  2.        ,  0.        ,
         0.15568621,  0.59916335,  3.        ,  0.        ,  0.        ],
       ...
       [ 0.        ,  1.        ,  0.15552687,  2.        ,  0.        ,
         0.47483064,  0.        ,  3.        ,  0.        ,  0.        ]])

Same output with addvec2:

In [384]: addvec2(mat,vec)
Out[384]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in LInked List format>

And in timing, addvec2 is more than 2x faster:

In [385]: timeit addvec(mat,vec)
100 loops, best of 3: 6.51 ms per loop

In [386]: timeit addvec2(mat,vec)
100 loops, best of 3: 2.54 ms per loop

and the dense equivalents:

In [388]: sparse.coo_matrix(mat+vec.A)
Out[388]: 
<10x10 sparse matrix of type '<class 'numpy.float64'>'
    with 38 stored elements in COOrdinate format>

In [389]: timeit sparse.coo_matrix(mat+vec.A)
1000 loops, best of 3: 716 µs per loop

In [390]: timeit sparse.coo_matrix(mat.A+vec.A)
1000 loops, best of 3: 338 µs per loop

A version that might save on temporary dense matrix space, runs in the same time:

In [393]: timeit temp=mat.A; temp+=vec.A; sparse.coo_matrix(temp)
1000 loops, best of 3: 334 µs per loop

So the dense version does 5-7x better than my sparse version.

For a really large mat, memory issues might chew into the dense performance, but the iterative sparse solution(s) isn't going to shine either.

I may be able to squeeze more performance from addvec2 by indexing Ml more efficiently. Ml.data[3],Ml.rows[3] is considerably faster than Ml[3,:] or Ml[:,3].
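That's because lil stores, for each row, a plain Python list of column indices (Ml.rows[i]) and a parallel list of values (Ml.data[i]); reading those is a direct attribute lookup, while Ml[:,i] goes through the general sparse indexing machinery. A small illustration:

```python
import numpy as np
import scipy.sparse as sp

Ml = sp.lil_matrix(np.array([[0., 5., 0.],
                             [7., 0., 0.],
                             [0., 0., 9.]]))
# per-row lists of column indices and of stored values
print(Ml.rows)  # row 0 has a nonzero in column 1, etc.
print(Ml.data)  # the corresponding values: 5., 7., 9.
```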

def addvec3(mat,vec):
    Mtl=mat.T.tolil()
    vec=vec.tocoo()
    n = mat.shape[0]
    for i,v in zip(vec.col, vec.data):
        t = np.zeros((n,))+v
        t[Mtl.rows[i]] += Mtl.data[i]
        t = sparse.coo_matrix(t)
        Mtl.rows[i] = t.col
        Mtl.data[i] = t.data
    return Mtl.T
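A quick equivalence check of this approach (my sketch; I convert the coo index/data arrays back to Python lists with .tolist(), since lil's internals expect list entries):

```python
import numpy as np
import scipy.sparse as sp

def addvec3(mat, vec):
    Mtl = mat.T.tolil()            # columns of mat become rows of Mtl
    vec = vec.tocoo()
    n = mat.shape[0]
    for i, v in zip(vec.col, vec.data):
        t = np.zeros((n,)) + v     # dense column prefilled with v
        t[Mtl.rows[i]] += Mtl.data[i]
        t = sp.coo_matrix(t)
        Mtl.rows[i] = t.col.tolist()   # overwrite the lil row in place
        Mtl.data[i] = t.data.tolist()
    return Mtl.T

np.random.seed(0)
mat = sp.random(10, 10, 0.1, format='csr')
vec = sp.coo_matrix([[0, 1, 0, 2, 0, 0, 0, 3, 0, 0]]).tocsr()
out = addvec3(mat, vec)
assert np.allclose(out.toarray(), mat.toarray() + vec.toarray())
```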

In [468]: timeit addvec3(mat,vec)
1000 loops, best of 3: 1.8 ms per loop

A modest improvement, but not as much as I'd hoped. And squeezing a bit more:

def addvec3(mat,vec):
    Mtl = mat.T.tolil()
    vec = vec.tocoo()
    t0 = np.zeros((mat.shape[0],))
    r0 = np.arange(mat.shape[0])
    for i,v in zip(vec.col, vec.data):
        t = t0+v
        t[Mtl.rows[i]] += Mtl.data[i]
        Mtl.rows[i] = r0
        Mtl.data[i] = t
    return Mtl.T

In [531]: timeit mm=addvec3(mat,vec)
1000 loops, best of 3: 1.37 ms per loop