Allowed data types in torch.jit.trace()


I want to save a segmentation model pipeline as a mobile-optimized model that I can run on iOS or call from a C++ program. I am using two pretrained models whose code I cannot change.

I am mainly running into two kinds of errors:

  1. RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
  2. RuntimeError: Tracer cannot infer type of [0. 0. 0. ... 0. 0. 0.]: Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type ndarray.
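As I understand the second error, the tracer only accepts tensors (and nested tuples/lists/dicts of tensors) at the traced function's boundary. A minimal sketch of wrapping ndarray outputs at that boundary (the helper name here is my own, not part of any API):

```python
import numpy as np
import torch

def to_traceable(arr):
    # Convert an ndarray crossing the trace boundary into a tensor;
    # ascontiguousarray guards against non-contiguous views
    return torch.from_numpy(np.ascontiguousarray(arr))

masks = [np.zeros((4, 4), dtype=np.uint8)]  # stand-in for predicted masks
traced_outputs = [to_traceable(m) for m in masks]
```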

Below is my code:

import cv2
import numpy as np
import torch
from torchvision import transforms
from torchvision.io import read_image
from torch.utils.mobile_optimizer import optimize_for_mobile

# segmentation_module, test_transforms and predictor come from the two
# pretrained models (their code is omitted here and I cannot change it)

def main(img):
    with torch.no_grad():
        img_data = test_transforms(img)

        segmentation_module.eval()

        singleton_batch = {'img_data': img_data[None]}
        output_size = img_data.shape[1:]

        # First level segmentation to detect pillar regions
        scores = segmentation_module(singleton_batch, segSize=output_size)

        _, pred = torch.max(scores, dim=1)
        pred = pred[0].numpy()

        # 42 is the index for class pillars
        column_regions = np.where(pred == 42, 1, 0).astype(np.uint8)
        contours, _ = cv2.findContours(column_regions, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        # Get central points of all pillar regions
        centers = []
        for contour in contours:
            M = cv2.moments(contour)
            if M['m00'] != 0:
                cX = int(M['m10'] / M['m00'])
                cY = int(M['m01'] / M['m00'])
                centers.append((cX, cY))

        predictor.set_image(img.permute(1, 2, 0).numpy())
        all_masks = []
        for center in centers:
            center = np.array(center)
            # Using the centers to get the accurate segmented regions using better model.
            masks, confidence, logits = predictor.predict(center[None], np.array([1]))
            all_masks.append(masks[0])

    return all_masks


example_input = read_image('images/2.png')
example_input = transforms.ToTensor()(example_input)
example_input.requires_grad = False
traced_script_module = torch.jit.trace(main, example_input)  ## Error Line
optimized_traced_script_module = optimize_for_mobile(traced_script_module)

MOBILE_MODEL_PATH = 'mobile_model.pt'
optimized_traced_script_module.save(MOBILE_MODEL_PATH)
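For context, my understanding is that torch.jit.trace only records tensor operations performed on the example input; the cv2 calls and the Python loops over contours and centers are either baked in as constants or fail outright. A toy sketch (module and shapes are hypothetical, not my real pipeline) of the tensor-in/tensor-out pattern that does trace cleanly:

```python
import torch
import torch.nn as nn

class SegHead(nn.Module):
    # Stand-in for the tensor-only part of a segmentation pipeline
    def forward(self, x):
        return torch.softmax(x, dim=1)

example = torch.randn(1, 3, 8, 8)
traced = torch.jit.trace(SegHead().eval(), example)
out = traced(torch.randn(1, 3, 8, 8))
```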

For the first kind of error, I set requires_grad to False for all parameters in both models, and for the input. For the second kind, I can replace my own use of ndarrays with lists or tensors, but the pretrained models use ndarrays internally. How should I proceed?
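For reference, this is roughly how I froze the parameters (a sketch using a stand-in module; my actual models are the pretrained segmentation_module and predictor):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, 3)  # stand-in for one of the pretrained modules
for p in model.parameters():
    # Disable gradients in place so the tracer treats weights as constants
    p.requires_grad_(False)
model.eval()
```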
