I'm trying to normalize the NYU v2 Depth Dataset, and this is the transform I'm applying to each image coming through:
import torchvision.transforms as transforms

def standard_transform(normalise=False):
    composition = [
        transforms.Resize(standard_img_HW()),  # standard_img_HW() is my helper returning the target (H, W)
        transforms.ToTensor(),
    ]
    if normalise:
        composition.append(transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))
    return transforms.Compose(composition)
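To pin down exactly what reaches Normalize, I've also been slipping a throwaway logging step into the pipeline. This is a quick sketch of my own (LogDtype is just my debugging helper, not part of the dataset code); I append a LogDtype() to composition right before the Normalize entry:

class LogDtype:
    # pass-through step that prints whatever the next transform will receive
    def __call__(self, x):
        print(type(x), getattr(x, "dtype", None), getattr(x, "shape", None))
        return x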
But with normalise=True it throws a TypeError:

    raise TypeError(f"Input tensor should be a float tensor. Got {tensor.dtype}.")
TypeError: Input tensor should be a float tensor. Got torch.int32.
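As far as I can tell, that message is torchvision's dtype guard inside F.normalize, and it reproduces with any integer tensor, independent of my data:

import torch
import torchvision.transforms as transforms

norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
norm(torch.zeros(3, 4, 4, dtype=torch.float32))  # works fine
norm(torch.zeros(3, 4, 4, dtype=torch.int32))    # raises the TypeError above

So something integer-typed must be reaching Normalize, even though the depth maps look fine.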
Now, I could accept the error if the tensors really were an int dtype, but the depth maps coming through are float32 tensors:
tensor([[[2.7520, 2.7520, 2.7522,  ..., 2.2429, 2.2428, 2.2428],
         [2.7519, 2.7520, 2.7521,  ..., 2.2429, 2.2428, 2.2427],
         [2.7518, 2.7518, 2.7520,  ..., 2.2428, 2.2427, 2.2427],
         ...,
         [2.1980, 2.1980, 2.1979,  ..., 2.0813, 2.0810, 2.0809],
         [2.1979, 2.1979, 2.1977,  ..., 2.0816, 2.0813, 2.0812],
         [2.1979, 2.1978, 2.1977,  ..., 2.0817, 2.0814, 2.0813]]])
torch.Size([1, 480, 640])
torch.float32
How should I go about solving this?
I've gone in circles confirming that the inputs really are tensors rather than PIL images (via ToTensor()), and double-checking that the tensors I want to normalise are float tensors.
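The dtype check itself looks like this (a standalone sketch; check_float is just my helper name):

import torch

def check_float(t, name="tensor"):
    # the sanity check I run on each item before normalising
    assert torch.is_tensor(t), f"{name} is {type(t)}, not a tensor"
    print(name, t.dtype, t.is_floating_point())

check_float(torch.rand(1, 480, 640), "depth")  # prints: depth torch.float32 True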