I'm attempting to use transfer learning to train an object detection model for use with the Intel Neural Compute Stick 2 (NCS2).

Steps so far.

  1. Using transfer learning, trained the faster_rcnn_inception_v2_coco_2018_01_28 model on my custom dataset with TensorFlow 1.15 on Google Colab.
  2. Verified that the saved TensorFlow model works for object detection in a Python script, using opencv-python for image handling and tensorflow.saved_model.load for the model (a minimal sketch of this check is included after the Model Optimizer output below).
  3. Froze the model and used the OpenVINO Model Optimizer command shown below to create the IR .bin and .xml files for use with the opencv-python dnn module.
python mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config  --transformations_config extensions/front/tf/faster_rcnn_support_api_v1.15.json --reverse_input_channels --data_type FP16 --input_shape [1,600,600,3] --input image_tensor --output=detection_scores,detection_boxes,num_detections

Output is as follows:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model:  frozen_inference_graph.pb
- Path for generated IR:    /.
- IR output name:   frozen_inference_graph
- Log level:    ERROR
- Batch:    Not specified, inherited from the model
- Input layers:     image_tensor
- Output layers:    detection_scores,detection_boxes,num_detections
- Input shapes:     [1,600,600,3]
- Mean values:  Not specified
- Scale values:     Not specified
- Scale factor:     Not specified
- Precision of IR:  FP16
- Enable fusing:    True
- Enable grouped convolutions fusing:   True
- Move mean values to preprocess section:   False
- Reverse input channels:   True

TensorFlow specific parameters:
- Input model in text protobuf format:  False
- Path to model dump for TensorBoard:   None
- List of shared libraries with TensorFlow custom layers implementation:    None
- Update the configuration file with input/output node names:   None
- Use configuration file used to generate the model with Object Detection API:  pipeline.config
- Use the config file:  None

Model Optimizer version:    
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image
keeping aspect ratio. The Inference Engine does not support dynamic image size so the
Intermediate Representation file is generated with the input image size of a fixed size.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and
scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes",
"detection_scores" have been replaced with a single layer of type "Detection Output".
Refer to IR catalogue in the documentation for information about this layer.

[ WARNING ]  Network has 2 inputs overall, but only 1 of them are suitable for input
channels reversing.
Suitable for input channel reversing inputs are 4-dimensional with 3 channels
All inputs: {'image_tensor': [1, 3, 600, 600], 'image_info': [1, 3]}
Suitable inputs {'image_tensor': [1, 3, 600, 600]}

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 26.84 seconds. 
[ SUCCESS ] Memory consumed: 617 MB. 
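
For reference, a minimal sketch of the check from step 2 (assuming the standard TF Object Detection API tensor names, a 'saved_model' export directory, and a placeholder image path):

    import cv2
    import numpy as np
    import tensorflow as tf  # TensorFlow 1.15

    with tf.Session(graph=tf.Graph()) as sess:
        # 'serve' is the tag the Object Detection API exporter uses for SavedModels
        tf.saved_model.load(sess, ['serve'], 'saved_model')

        image = cv2.imread('test.jpg')                # placeholder test image
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # the model expects RGB

        boxes, scores, num = sess.run(
            ['detection_boxes:0', 'detection_scores:0', 'num_detections:0'],
            feed_dict={'image_tensor:0': np.expand_dims(rgb, axis=0)})

        print(int(num[0]), scores[0][:5])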
  4. Loaded the converted model with the opencv-python dnn module. Using the OpenVINO docker image openvino/ubuntu18_dev:latest, I run a Python script containing the following.
import cv2

# image_from_file is a BGR image loaded earlier with cv2.imread
net = cv2.dnn.readNetFromModelOptimizer('frozen_inference_graph.xml',
                                        'frozen_inference_graph.bin')
blob = cv2.dnn.blobFromImage(image_from_file)
net.setInput(blob)

The following error is reported:

Traceback (most recent call last):
  File "xxxxxxxxxxxxxx-dnn.py", line 49, in <module>
    net.setInput(blob)
cv2.error: OpenCV(4.4.0-openvino) ../opencv/modules/dnn/src/dnn.cpp:4017: error:
    (-2:Unspecified error) in function 'void cv::dnn::dnn4_v20200609::Net::setInput(cv::InputArray, const String&, double, const Scalar&)'
    (expected: 'inputShapeLimitation.size() == blobShape.size()'), where 'inputShapeLimitation.size()' is 2 must be equal to 'blobShape.size()' is 4

Can anyone shed some light on how to resolve this error, please?

Accepted answer (Rommel_Intel):

I suggest that you try to load your model into OpenVINO's sample, as shown here: https://docs.openvinotoolkit.org/2018_R5/_samples_object_detection_demo_README.html

It seems that incompatible sizes are being used, related to the blob size. Your Python script might not be handling the dynamic shaping correctly.
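
Based on the input shapes the Model Optimizer reported above (image_tensor [1,3,600,600] and image_info [1,3]), one possible sketch of feeding both inputs at the fixed 600x600 size is below; the [height, width, scale] layout of image_info, the image path, and the confidence threshold are assumptions:

    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromModelOptimizer('frozen_inference_graph.xml',
                                            'frozen_inference_graph.bin')

    image = cv2.imread('test.jpg')   # placeholder image path
    h, w = image.shape[:2]

    # Fixed 600x600 input; --reverse_input_channels was used during conversion,
    # so the BGR image from cv2.imread is fed as-is (swapRB=False).
    blob = cv2.dnn.blobFromImage(image, size=(600, 600), swapRB=False, crop=False)
    net.setInput(blob, 'image_tensor')

    # Second IR input reported by the Model Optimizer: 'image_info' with shape [1, 3],
    # assumed here to be [input_height, input_width, scale].
    net.setInput(np.array([[600, 600, 1]], dtype=np.float32), 'image_info')

    # DetectionOutput returns [1, 1, N, 7]:
    # [batch_id, class_id, confidence, xmin, ymin, xmax, ymax] (normalized coordinates)
    out = net.forward()
    for det in out[0, 0]:
        confidence = float(det[2])
        if confidence > 0.5:
            x1, y1 = int(det[3] * w), int(det[4] * h)
            x2, y2 = int(det[5] * w), int(det[6] * h)
            print(confidence, (x1, y1), (x2, y2))

If the network's default input happens to be image_info, calling setInput(blob) without a name would target that 2-dimensional input with a 4-dimensional blob, which would match the 2-vs-4 size mismatch in the error above.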

This might be useful for you: https://www.youtube.com/watch?v=Ga8j0lgi-OQ