I'm trying to parse a huge 3D data array of complex values from binary. Later this should become l matrices of size n x m. Since I'm going to work on these matrices, I'm limited to matrix libraries - hmatrix seems to be promising. The data layout is not in my required format, so I have to jump around between positions, (i,j,k) -> (k,i,j), where i and j run over the n rows and m columns and k over the l matrices.
I think the only way to read this in is by using mutable structures, otherwise I'll end up with several terabytes of garbage. My idea was to use boxed mutable arrays or vectors of mutable matrices (STMatrix from Numeric.LinearAlgebra.Devel), so I end up with something like:
data MVector s (STMatrix s t)
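Roughly, I imagine allocating such a structure like this (an untested sketch; newMatrix and STMatrix come from Numeric.LinearAlgebra.Devel):

import Control.Monad.ST (ST)
import qualified Data.Vector.Mutable as MV
import Numeric.LinearAlgebra (C)
import Numeric.LinearAlgebra.Devel (STMatrix, newMatrix)

-- a boxed mutable vector holding l zero-initialised n x m mutable matrices
allocate :: Int -> Int -> Int -> ST s (MV.MVector s (STMatrix s C))
allocate l n m = MV.replicateM l (newMatrix 0 n m)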
But I'm not sure how to use them correctly: I can modify a single element of the MVector with modify:
modify :: PrimMonad m => MVector (PrimState m) a -> (a -> a) -> Int -> m ()
or use modifyM (strange: vector-0.12.3.0, as resolved by Stack, does not have modifyM...)
modifyM :: PrimMonad m => MVector (PrimState m) a -> (a -> m a) -> Int -> m ()
so I could pass, as the (a -> a) argument, a runST routine that modifies the STMatrix. I'm not sure whether I should put an ST inside an IO (?)
Nevertheless - I think this should work, but it is only useful when I want to modify the whole matrix; calling this (a -> a) routine n x m x l times will add a bit of overhead (maybe it will be optimized away...).
So I would end up marshalling the array, modifying the contents via pointers, (i,j,k) -> (k,i,j), and reading everything in matrix by matrix - but this does not feel right, and I wanted to avoid such dirty tricks.
Do you have any ideas for a way to do this a little more... cleanly? Thanks.
Edit: Thanks to K. A. Buhr. His solution works so far. Now I'm only running into some performance issues. If I compare this solution:
{-# LANGUAGE BangPatterns #-}
module Main where
import Data.List
import Numeric.LinearAlgebra
import qualified Data.Vector as V
import qualified Data.Vector.Storable as VS
import qualified Data.Vector.Storable.Mutable as VSM
-- Create an l-length list of n x m hmatrix Matrices
toMatrices :: Int -> Int -> Int -> [C] -> [Matrix C]
toMatrices l n m dats = map (reshape m) $ VS.createT $ do
  mats <- V.replicateM l $ VSM.unsafeNew (m*n)
  sequence_ $ zipWith (\(i,j,k) x ->
    VSM.unsafeWrite (mats V.! k) (loc i j) x) idxs (dats ++ repeat 0)
  return $ V.toList mats
  where idxs = (,,) <$> [0..n-1] <*> [0..m-1] <*> [0..l-1]
        loc i j = i*m + j
test1 = toMatrices 1000 1000 100 (fromIntegral <$> [1..])
main = do
  let !a = test1
  print "done"
with the simplest C code:
#include <stdlib.h>
#include <stdio.h>

void main()
{
    const int n = 1000;
    const int m = 1000;
    const int l = 100;
    double *src = malloc(n*m*l * sizeof(double));
    for (int i = 0; i < n*m*l; i++) {
        src[i] = (double)i;
    }
    double *dest = malloc(n*m*l * sizeof(double));
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            for (int k = 0; k < l; k++) {
                dest[k*n*m + i*m + j] = src[i*m*l + j*l + k];
            }
        }
    }
    printf("done: %f\n", dest[n*m*l - 1]); // Need to access the array, otherwise it'll get lost by -O2
    free(src);
    free(dest);
}
Both compiled with -O2, I get the following timings:
real 0m5,611s
user 0m14,845s
sys 0m2,759s
vs.
real 0m0,441s
user 0m0,200s
sys 0m0,240s
That is roughly two orders of magnitude in per-core performance. From profiling I learn that
VSM.unsafeWrite (mats V.! k) (loc i j) x
is the expensive call. Since I'll run this procedure at roughly one-minute intervals, I want to keep the parsing time as close to the disk access time as possible. I'll see if I can speed this up.
PS: This is part of some tests to see whether I could move typical DSP work from C-like languages to Haskell.
Edit 2: OK, this is what I get after some experimenting:
{-# LANGUAGE BangPatterns #-}
module Main where
import Data.List
import qualified Data.Vector as V
import qualified Data.Vector.Storable as VS
import qualified Data.Vector.Storable.Mutable as VSM
import Numeric.LinearAlgebra
-- Create an l-length list of n x m hmatrix Matrices
toMatrices :: Int -> Int -> Int -> VS.Vector C -> V.Vector (Matrix C)
toMatrices l n m dats =
    V.map (reshape m) newMat
  where
    newMat = VS.createT $
      V.generateM l $ \k -> do
        curMat <- VSM.unsafeNew (m * n)
        VS.mapM_
          (\i ->
             VS.mapM_
               (\j -> VSM.unsafeWrite curMat (loc i j) (dats VS.! (oldLoc i j k)))
               idjs)
          idis
        return curMat
    loc i j = i * m + j
    oldLoc i j k = i * m * l + j * l + k
    !idis = VS.generate n (\a->a)
    !idjs = VS.generate m (\a->a)

test1 = toMatrices 100 1000 1000 arr
  where
    arr = VS.generate (1000 * 1000 * 100) fromIntegral :: VS.Vector C

main = do
  let !a = test1
  print "done"
It gives roughly:
real 0m1,816s
user 0m1,636s
sys 0m1,120s
, so roughly 4 times slower than the C code. I think I can live with that. I suspect I'm destroying all of vector's stream-fusion functionality with this code. If there are any suggestions for getting it back at comparable speed, I would be grateful!
As I understand it, you have a "huge" set of data in i-major, j-middling, k-minor order, and you want to load it into matrices indexed by k whose elements have i-indexed rows and j-indexed columns, right? So, you want a function something like:
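Presumably a function with the same shape as the toMatrices that the question's first edit adopts:

-- turn a flat stream of complex values into l hmatrix matrices, each n x m
toMatrices :: Int -> Int -> Int -> [C] -> [Matrix C]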
Note that you've written n x m matrices above, associating i with n and j with m. It would be more usual to flip the roles of n and m, but I've stuck with your notation, so keep an eye on that.

If the entire data list [C] could fit comfortably in memory, you could do this immutably by writing something like:
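A sketch along those lines (with a local chunksOf helper standing in for Data.List.Split.chunksOf):

import Data.List (transpose)
import Numeric.LinearAlgebra

-- Immutable sketch: split the i-major, j-middling, k-minor stream into chunks of
-- length l (one chunk per (i,j) pair), transpose to get one list per k, and turn
-- each of those l lists into an n x m matrix.
toMatrices :: Int -> Int -> Int -> [C] -> [Matrix C]
toMatrices l n m = map (reshape m . fromList) . transpose . chunksOf l
  where
    chunksOf _ [] = []
    chunksOf c xs = let (h, t) = splitAt c xs in h : chunksOf c t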
This breaks the input data into l-sized chunks, transposes them into l lists, and converts each list to a matrix. If there was some way to force all the Matrix C values in parallel, this could be done with one traversal through the data, without the need to hold on to the whole list. Unfortunately, the individual Matrix C values can only be forced one by one, and the whole list needs to be kept around until all of them can be forced.

So, if the "huge" [C] list is too big for memory, you're probably right that you need to load the data into a (partially) mutable structure. The code is somewhat challenging to write, but it's not too bad in its final form. I believe the following will work:
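This appears to be the same toMatrices that the question quotes in its first edit, shown here with a few comments added for reference while reading the explanation below:

import Numeric.LinearAlgebra
import qualified Data.Vector as V
import qualified Data.Vector.Storable as VS
import qualified Data.Vector.Storable.Mutable as VSM

-- Create an l-length list of n x m hmatrix Matrices from an i-major,
-- j-middling, k-minor stream of values.
toMatrices :: Int -> Int -> Int -> [C] -> [Matrix C]
toMatrices l n m dats = map (reshape m) $ VS.createT $ do
  -- one mutable storable vector per k, each big enough for an n x m matrix
  mats <- V.replicateM l $ VSM.unsafeNew (m*n)
  -- walk the stream in its native (i,j,k) order, writing each value into
  -- matrix k at row-major offset i*m + j; missing data is padded with zeros
  sequence_ $ zipWith (\(i,j,k) x ->
    VSM.unsafeWrite (mats V.! k) (loc i j) x) idxs (dats ++ repeat 0)
  return $ V.toList mats
  where idxs = (,,) <$> [0..n-1] <*> [0..m-1] <*> [0..l-1]
        loc i j = i*m + j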
Compiled with -O2, the maximum residency is about 1.6 GB, which matches the expected memory needed to hold 100 matrices of one million 16-byte complex values in memory, so that looks right.

Anyway, this version of toMatrices is made somewhat complicated by the use of three different vector variants. There's Vector from hmatrix, which is the same as the immutable storable VS.Vector from vector; and then there are two more types from vector: the immutable boxed V.Vector, and the mutable storable VSM.Vector.
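In import form (these are the same qualified names used in the code above), the three flavours are:

import Numeric.LinearAlgebra                          -- hmatrix's Vector and Matrix
import qualified Data.Vector                  as V    -- immutable, boxed
import qualified Data.Vector.Storable         as VS   -- immutable, storable (same as hmatrix's Vector)
import qualified Data.Vector.Storable.Mutable as VSM  -- mutable, storable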
The do-block creates a V.Vector of VSM.Vectors and populates those with a sequence of monadic actions performed across index/value pairs. You can load the data in any order by modifying the definition of idxs to match the order of the data stream (see the example below). The do-block returns the final VSM.Vectors in a list, the helper function VS.createT freezes them all to VS.Vectors (i.e., Vector from hmatrix), and reshape is mapped across the vectors to turn them into m-column matrices.
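For instance, if the stream were in a hypothetical k-major, i-middling, j-minor order instead, idxs could be redefined to enumerate in that order while still producing (i, j, k) tuples:

idxs = [ (i, j, k) | k <- [0..l-1], i <- [0..n-1], j <- [0..m-1] ]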
Note that you'll have to take care that in your actual application, the list of data items read from the file isn't kept around by code other than toMatrices, either in the original text form or the parsed numeric form. This shouldn't be too tough to get right, but you might want to test on medium-sized input before locking up your computer on the real dataset.