Is there an unfolding-folding operation that will perform a matrix repetition in PyTorch?


I have the following tensor in pytorch:

i1 = torch.randn(1, 32, 320, 640)

I would like to extract the following sliding blocks:

[:,:,i,0:15], [:,:,i+80,0:15], [:,:,i+160,0:15], [:,:,i+240,0:15]

[:,:,i,1:16], [:,:,i+80,1:16], [:,:,i+160,1:16], [:,:,i+240,1:16]

[:,:,i,2:17], [:,:,i+80,2:17], [:,:,i+160,2:17], [:,:,i+240,2:17]

...

[:,:,i,624:639], [:,:,i+80,624:639], [:,:,i+160,624:639], [:,:,i+240,624:639]

and repeat for all i, where i = 0, 1, 2, ..., 79. Ideally, I would like to collect all these blocks back together into a tensor with a size of something like [1, :, 32, 4, 16].
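For concreteness, one such group of four slices (assuming each slice is meant to have length 16, i.e. 0:16 rather than 0:15) could be gathered naively like this, where the picks of i and j are arbitrary:

i, j = 0, 0
# stack the four row-offset slices for one (i, j) along a new dim
block = torch.stack([i1[:, :, i + 80*k, j:j+16] for k in range(4)], dim=2)
# block.shape == (1, 32, 4, 16)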

I tried using the unfold and fold operations, but without any luck. Any suggestions would be very helpful!

Update with code:

import torch
from torch.nn import functional as f

i1 = torch.randn(1, 32, 320, 640)

in_chans = i1.shape[1]  # 32 channels
nf       = i1.shape[2]  # 320 rows
np       = i1.shape[3]  # 640 columns
accel    = 4            # number of row-offset repetitions
step     = np//accel
kernel   = 16           # width of each sliding window

i3       = f.unfold(i1, kernel_size=(1, kernel))

i3 will extract all the sliding [1,16] blocks, but it won't take care of the unfolding along the height dimension. If I instead use i3 = f.unfold(i1, kernel_size=(accel, kernel)), I will get all possible contiguous [4,16] blocks, which is not the exact dimensionality I need. I need to somehow introduce a dilation over the first dimension of the kernel size.
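For what it's worth, f.unfold does accept a dilation argument, so dilating the kernel's first dimension by the row spacing nf//accel = 80 produces exactly these row-spaced blocks. A sketch continuing the snippet above, assuming length-16 windows and 625 column starts (all full windows in a width of 640):

row_step = nf // accel  # 80: spacing between the four rows

# dilation=(row_step, 1) spreads the 4 kernel rows 80 apart, while the
# window still slides one column at a time along the width
i3 = f.unfold(i1, kernel_size=(accel, kernel), dilation=(row_step, 1))
# i3.shape == (1, 32*4*16, 80*625): 80 row starts x 625 column starts

# regroup the flat unfold output into (1, 32, 80, 4, 625, 16)
blocks = i3.view(1, in_chans, accel, kernel, 80, 625).permute(0, 1, 4, 2, 5, 3)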


1 Answer

Answered by kmkurn:

I assume what you meant is that

  • the slices are each of length 16 (0:16, 1:17, etc.) rather than 15, which doesn't evenly divide 640; and
  • if out is the ideal result, out.shape == (1,32,80,4,624,16) such that out[k1,k2,k3,k4,k5,k6] == i1[k1,k2,k3+80*k4,k5+k6].

If my understanding is right, you can use Tensor.as_strided to compute out like

out = i1.as_strided(
    (1, 32, 80, 4, 624, 16),                   # (batch, channel, i, repetition, column start, window)
    (32*320*640, 320*640, 640, 80*640, 1, 1),  # element strides into i1 for each of those axes
)
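As a quick sanity check of the indexing relation above (the indices picked here are arbitrary):

assert torch.equal(out[0, 5, 3, 2, 100, :], i1[0, 5, 3 + 80*2, 100:116])

Note that as_strided returns a view into i1's storage rather than a copy, so call .contiguous() (or .clone()) on out if you need an independent tensor afterwards.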