I want to copy as little as possible. At the moment I'm using num_t* array = new num_t[..]
and then copying each value of the multidimensional vector into array
in a for-loop.
I'd like to find a better way to do this.
For arithmetic types you can use the function std::memcpy. For example:
#include <iostream>
#include <vector>
#include <cstring>

int main()
{
    std::vector<std::vector<int>> v =
    {
        { 1 },
        { 1, 2 },
        { 1, 2, 3 },
        { 1, 2, 3, 4 }
    };

    // Print the original nested vector.
    for ( const auto &row : v )
    {
        for ( int x : row ) std::cout << x << ' ';
        std::cout << std::endl;
    }
    std::cout << std::endl;

    // Total number of elements across all rows.
    size_t n = 0;
    for ( const auto &row : v ) n += row.size();

    int *a = new int[n];

    // Copy each row's contiguous storage into the flat array.
    int *p = a;
    for ( const auto &row : v )
    {
        std::memcpy( p, row.data(), row.size() * sizeof( int ) );
        p += row.size();
    }

    // Print the flattened array.
    for ( p = a; p != a + n; ++p ) std::cout << *p << ' ';
    std::cout << std::endl;

    delete []a;
}
The program output is
1
1 2
1 2 3
1 2 3 4
1 1 2 1 2 3 1 2 3 4
As you stated in the comments, the inner vectors of your vector<vector<T>> structure are all of the same size, so what you are actually trying to do is store an m x n matrix. Usually such matrices are not stored in multi-dimensional structures but in linear memory. The position (row, column) of a given element is then derived from an indexing scheme, of which row-major and column-major order are the ones used most often.
Since you already state that you will copy this data onto a GPU, that transfer is then simply done by copying the linear buffer as a whole. You then use the same indexing scheme on the GPU and on the host.
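As a rough sketch of the idea, the flattening could look like the following; num_t is a placeholder for your element type, and the names flatten and at are chosen here only for illustration:

#include <vector>
#include <cstddef>

using num_t = float; // placeholder for your element type

// Flatten an m x n vector<vector<num_t>> into one row-major buffer.
std::vector<num_t> flatten( const std::vector<std::vector<num_t>> &v )
{
    const std::size_t rows = v.size();
    const std::size_t cols = rows ? v[0].size() : 0;

    std::vector<num_t> flat;
    flat.reserve( rows * cols );
    for ( const auto &row : v )
        flat.insert( flat.end(), row.begin(), row.end() );
    return flat;
}

// Row-major indexing: element (row, col) lives at index row * cols + col.
num_t at( const std::vector<num_t> &flat, std::size_t cols,
          std::size_t row, std::size_t col )
{
    return flat[ row * cols + col ];
}

Better yet, if you build the data in a single std::vector<num_t> from the start, flat.data() already gives you a contiguous pointer that can be handed to cudaMemcpy or a similar transfer call, with no per-element copying on the host at all.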
If you are using CUDA, have a look at Thrust. It provides thrust::host_vector<T> and thrust::device_vector<T> and simplifies copying even further.
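A minimal sketch of how that could look, assuming the data is already in linear, row-major order on the host (the variable names are illustrative, and the file has to be compiled as CUDA code, e.g. with nvcc):

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <iostream>

int main()
{
    // A 2 x 3 matrix stored in linear, row-major order on the host.
    thrust::host_vector<int> h( 6 );
    for ( int i = 0; i < 6; ++i ) h[i] = i;

    // A single assignment copies the whole buffer to the device.
    thrust::device_vector<int> d = h;

    // A raw device pointer, e.g. to pass to a kernel.
    int *raw = thrust::raw_pointer_cast( d.data() );
    (void)raw;

    // Copying back to the host works the same way.
    thrust::host_vector<int> back = d;
    for ( int x : back ) std::cout << x << ' ';
    std::cout << std::endl;
}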