[Caffe]: Check failed: ShapeEquals(proto) shape mismatch (reshape not set)


I am getting this error, and I have searched the Internet but found nothing that clears it up.

I trained my net successfully with Caffe, reaching around 82% accuracy.

Now I'm trying to test it on a single image with this command:

python python/classify.py --model_def examples/imagenet/imagenet_deploy.prototxt --pretrained_model caffe_mycaffe_train_iter_10000.caffemodel --images_dim 64,64 data/mycaffe/testingset/cat1/113.png foo --mean_file data/mycaffe/mycaffe_train_mean.binaryproto

Yes, my images are 64x64.

These are the last lines I'm getting:

I0610 15:33:44.868100 28657 net.cpp:194] conv3 does not need backward computation.
I0610 15:33:44.868110 28657 net.cpp:194] norm2 does not need backward computation.
I0610 15:33:44.868120 28657 net.cpp:194] pool2 does not need backward computation.
I0610 15:33:44.868130 28657 net.cpp:194] relu2 does not need backward computation.
I0610 15:33:44.868142 28657 net.cpp:194] conv2 does not need backward computation.
I0610 15:33:44.868152 28657 net.cpp:194] norm1 does not need backward computation.
I0610 15:33:44.868162 28657 net.cpp:194] pool1 does not need backward computation.
I0610 15:33:44.868173 28657 net.cpp:194] relu1 does not need backward computation.
I0610 15:33:44.868182 28657 net.cpp:194] conv1 does not need backward computation.
I0610 15:33:44.868192 28657 net.cpp:235] This network produces output fc8_pascal
I0610 15:33:44.868214 28657 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0610 15:33:44.868238 28657 net.cpp:247] Network initialization done.
I0610 15:33:44.868249 28657 net.cpp:248] Memory required for data: 3136120
F0610 15:33:45.025965 28657 blob.cpp:458] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
*** Check failure stack trace: ***
Aborted (core dumped)

I've tried not setting --mean_file, among other things, but I've run out of ideas.

This is my imagenet_deploy.prototxt, in which I've modified some parameters while debugging, but nothing has worked:

name: "MyCaffe"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 64
input_dim: 64
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 64 
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8_pascal"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_pascal"
  inner_product_param {
    num_output: 3
  }
}

Could anyone give me a clue? Thank you very much.


The same happens with C++ and the classification binary they provide:

F0610 18:06:14.975601 7906 blob.cpp:455] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
*** Check failure stack trace: ***
    @     0x7f0e3c50761c  google::LogMessage::Fail()
    @     0x7f0e3c507568  google::LogMessage::SendToLog()
    @     0x7f0e3c506f6a  google::LogMessage::Flush()
    @     0x7f0e3c509f01  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f0e3c964a80  caffe::Blob<>::FromProto()
    @     0x7f0e3c89576e  caffe::Net<>::CopyTrainedLayersFrom()
    @     0x7f0e3c8a10d2  caffe::Net<>::CopyTrainedLayersFrom()
    @           0x406c32  Classifier::Classifier()
    @           0x403d2b  main
    @     0x7f0e3b124ec5  (unknown)
    @           0x4041ce  (unknown)
Aborted (core dumped)


There are 3 answers

Anoop K. Prabhu (BEST ANSWER):

Let me confirm whether the basic steps are correct.

input_dim: 10
input_dim: 3
input_dim: 64
input_dim: 64

Have you tried changing the first parameter to 1, since you are only passing a single image?

The above-mentioned error occurs when the dimensions of the top or bottom blobs are not correct, and there is nowhere this could go wrong other than the input blobs.
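
For example, a minimal pycaffe sketch of that check (assuming pycaffe is built and importable; the prototxt path is the one from the question). Loading the deploy definition without weights and reshaping the data blob to a batch of one is the script-side equivalent of setting the first input_dim to 1:

import caffe

# Load only the deploy definition (no weights), so any problem here points at the
# network definition rather than at the trained snapshot.
net = caffe.Net('examples/imagenet/imagenet_deploy.prototxt', caffe.TEST)

# A single 64x64 colour image: equivalent to input_dim: 1, 3, 64, 64 in the file.
net.blobs['data'].reshape(1, 3, 64, 64)
net.reshape()  # propagate the new input shape through the net

print({name: blob.data.shape for name, blob in net.blobs.items()})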

Edit 2:

The ShapeEquals(proto) shape mismatch (reshape not set) error message occurs when the 'reshape' parameter is set to false in the FromProto function call.

I did a quick search for the FromProto calls within the library. Other than the CopyTrainedLayersFrom function, no other function actually sets the above-mentioned parameter to false.

This is actually confusing. Two things I would suggest are (see also the diagnostic sketch after this list):

  1. Check whether the caffe source code is updated from the repository.
  2. Try running the test portion of the caffe.bin executable found in /build/tools/.
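
As a way to narrow down which layer trips the check, a rough sketch along these lines (assuming pycaffe is built; the file names are the ones from the question) compares the parameter shapes the deploy prototxt expects with the shapes stored in the .caffemodel. It never calls CopyTrainedLayersFrom, so it cannot crash the same way:

import caffe
from caffe.proto import caffe_pb2

deploy = 'examples/imagenet/imagenet_deploy.prototxt'
weights = 'caffe_mycaffe_train_iter_10000.caffemodel'

# Parameter shapes the deploy definition expects (loaded without weights).
net = caffe.Net(deploy, caffe.TEST)
for name, blobs in net.params.items():
    print('deploy ', name, [tuple(b.data.shape) for b in blobs])

# Parameter shapes actually stored in the trained snapshot.
snapshot = caffe_pb2.NetParameter()
with open(weights, 'rb') as f:
    snapshot.ParseFromString(f.read())
for layer in list(snapshot.layer) + list(snapshot.layers):  # new + legacy formats
    if layer.blobs:
        shapes = [tuple(b.shape.dim) or (b.num, b.channels, b.height, b.width)
                  for b in layer.blobs]
        print('trained', layer.name, shapes)

Any layer whose two lines disagree is the one that makes FromProto fail.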
ricardo_8990:

In my case, the size of the kernel in the second convolutional layer in my solver file differed from the one in the train file. Changing the size in the solver file solved the problem.
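
A rough way to spot such a mismatch without comparing the files by eye (a sketch only, assuming the two definitions being compared are the train and deploy prototxts in the newer layer { ... } format used in the question; the file names below are hypothetical):

from caffe.proto import caffe_pb2
from google.protobuf import text_format

def conv_params(prototxt):
    """Return {layer name: (num_output, kernel_size)} for the Convolution layers."""
    net = caffe_pb2.NetParameter()
    with open(prototxt) as f:
        text_format.Merge(f.read(), net)
    return {l.name: (l.convolution_param.num_output, l.convolution_param.kernel_size)
            for l in net.layer if l.type == 'Convolution'}

# Hypothetical file names; substitute your own train and deploy definitions.
print(conv_params('imagenet_train.prototxt'))
print(conv_params('imagenet_deploy.prototxt'))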

Jasper Uijlings:

I just had the same error. In my case, the output parameters of the final layer were incorrect: switching datasets, I changed the number of classes in train.prototxt but failed to do so in test.prototxt (or deploy.prototxt). Correcting this mistake solved the problem for me.
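
For this kind of mistake, the trained snapshot itself shows how many classes the model was actually trained with (a sketch, assuming the last learnable layer stored in the snapshot is the classifier, as it is for fc8_pascal in the question):

from caffe.proto import caffe_pb2

snapshot = caffe_pb2.NetParameter()
with open('caffe_mycaffe_train_iter_10000.caffemodel', 'rb') as f:
    snapshot.ParseFromString(f.read())

# Last stored layer that carries parameters; its weight-blob shape encodes the
# output dimension, which must equal num_output of fc8_pascal in the deploy file.
last = [l for l in list(snapshot.layer) + list(snapshot.layers) if l.blobs][-1]
w = last.blobs[0]
print(last.name, tuple(w.shape.dim) or (w.num, w.channels, w.height, w.width))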