I have a 3-dimensional cv::Mat of size (10, rows=M, cols=N), i.e. 10 images of MxN pixels stacked into a cube. I would like to slice the cube by rows, across the images, so that I end up with M slices of shape (10, N) to which I'll apply other OpenCV algorithms. I found that I can do this with cv::Range; however, the slice is still 3D, so I have to use reshape to make it 2D, and that requires clone() to make the slice continuous. Below is the code snippet I used, but the execution time is slow (I think due to the clone/copy done for each row slice). Is there a better way to do this? I also found this, which is not encouraging.
const int img_dim[3] = {10, 20, 40};
Mat data = Mat::zeros(3, img_dim, CV_64FC1);
for (int row = 0; row < data.size[1]; row++) {
    std::vector<Range> range;
    range.push_back(Range(0, data.size[0]));
    range.push_back(Range(row, row + 1));
    range.push_back(Range(0, data.size[2]));
    // The slice below is still 3D with shape (10, 1, 40), so I use reshape
    // to make it (10, 40), which requires the clone()
    Mat slice = data(&range[0]).clone();
    const int sz[] = {data.size[0], data.size[2]};
    slice = slice.reshape(1, 2, sz);
    // Processing of slice,
    // e.g. cv::GaussianBlur(slice, dst, Size(0, 0), r, r);
}
You may set the step (bytes stride) and build the slice without copying the data.

Assume data is continuous in memory. The slice of a given row starts row*N double elements from data.data (data.data points to the matrix data), and its step (the bytes stride between rows of the slice) equals M * N * sizeof(double). [In case data is not continuous in memory, the solution is more complicated.]

Here is a code sample (builds a slice of the 5th row):
Note: Processing data with large strides may be inefficient due to cache misses - it may be more efficient to copy the data (check whether data processing gets slower).