How is using im2col operation in convolutional nets more efficient?


I am trying to implement a convolutional neural network, and I don't understand why the im2col operation is more efficient. It basically stores the input patches to be multiplied by the filter in separate columns. But why shouldn't loops be used directly to compute the convolution instead of first performing im2col?


1 Answer

Abhishek Nikam (accepted answer):
  1. You are thinking along the right lines. In AlexNet, almost 95% of the GPU time and 89% of the CPU time is spent in the convolutional and fully connected layers.

  2. Both the convolutional layer and the fully connected layer are implemented using GEMM, which stands for General Matrix to Matrix Multiplication.

  3. So basically, in the GEMM approach we convert the convolution into a matrix multiplication by using a function called im2col(), which rearranges the data so that the convolution output can be computed with a single matrix multiplication (see the sketch after this list).

  4. Now, you may ask: instead of doing the element-wise convolution directly, why add an intermediate step that rearranges the data and then calls GEMM?

  5. The answer is that scientific programmers have spent decades optimizing code for large matrix-to-matrix multiplications, and the benefits of its very regular memory-access patterns outweigh any other losses. The cuBLAS library provides an optimized CUDA GEMM API, Intel MKL has an optimized CPU GEMM, and clBLAS's GEMM API can be used on devices supporting OpenCL.

  6. Direct element-wise convolution performs badly because of the irregular memory accesses it involves.

  7. im2col(), in turn, arranges the data so that the memory accesses of the subsequent matrix multiplication are regular.

  8. im2col() does add a lot of data redundancy (each input value is copied into every receptive field that contains it), but the performance benefit of using GEMM outweighs this redundancy.

  9. This is the reason the im2col() operation is used in neural nets.

  10. This link explains how im2col() arranges the data for GEMM: https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
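
As a concrete illustration of point 3, here is a minimal NumPy sketch (the helper names `im2col`, `conv2d_gemm`, and `conv2d_naive` are my own, not from any particular framework) showing that im2col followed by a single matrix multiplication produces the same output as a naive nested-loop convolution:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll a single-channel image x of shape (H, W) into a
    (kh*kw, out_h*out_w) matrix, one column per receptive field
    (stride 1, no padding)."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Copy one kh-by-kw patch into one column.
            cols[:, i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """Convolution (cross-correlation) via im2col + one dense multiply."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    cols = im2col(x, kh, kw)    # the data redundancy lives in this buffer
    out = k.ravel() @ cols      # the single GEMM call
    return out.reshape(out_h, out_w)

def conv2d_naive(x, k):
    """Direct element-wise convolution with nested loops, for comparison."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.randn(6, 6)
k = np.random.randn(3, 3)
assert np.allclose(conv2d_gemm(x, k), conv2d_naive(x, k))
```

Note the redundancy mentioned in point 8: for a 3x3 kernel at stride 1, each interior input pixel is copied into nine different columns, so the im2col buffer is roughly kh*kw times larger than the input. In practice, handing one large, regular, dense multiplication to an optimized GEMM routine is worth that extra memory.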