Bounding boxes tensor becomes empty after calling map() function


I'm new to both Python and Keras.

I'm trying to adapt this example, https://keras.io/examples/vision/yolov8/, to a problem I'm trying to solve.

The issue I'm facing is that after this line of code is executed:

train_ds = train_ds.map(map_func = augmenter, num_parallel_calls=tf.data.AUTOTUNE)

the bounding boxes tensor becomes empty.

Below is the bounding boxes tensor before map():

{'boxes': <tf.RaggedTensor [[[315.0, 537.0, 557.0, 522.0],
  [46.0, 549.0, 270.0, 534.0],
  [315.0, 639.0, 557.0, 624.0],
  [26.0, 684.0, 291.0, 669.0]]]>, 'classes': <tf.RaggedTensor [[1.0, 1.0, 1.0, 1.0]]>}

This is the content after map():

{'boxes': <tf.RaggedTensor [[]]>, 'classes': <tf.RaggedTensor [[]]>}

This is the code I use to view the tensor content in both cases:

debug_data = next(iter(train_ds.take(1)))
print(debug_data['bounding_boxes'])

Below is the augmenter I use when map() is called:

augmenter = keras.Sequential(
    layers=[
        keras_cv.layers.JitteredResize(
            target_size=(W_RESIZED, H_RESIZED), 
            scale_factor=(FROM_SCALEFACTOR, TO_SCALEFACTOR), 
            bounding_box_format="xyxy"
        ),
    ]
)
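For context, KerasCV's object-detection preprocessing layers consume dataset elements shaped as a dict with "images" and "bounding_boxes" keys, following the keras.io example. A minimal sketch of one such element, with placeholder values rather than my real data:

import tensorflow as tf

# Hypothetical element as consumed by map(augmenter); values are placeholders.
element = {
    "images": tf.zeros([480, 640, 3], dtype=tf.float32),  # height x width x channels
    "bounding_boxes": {
        # "xyxy" format: [x_min, y_min, x_max, y_max] per box
        "boxes": tf.constant([[10.0, 20.0, 110.0, 220.0]], dtype=tf.float32),
        "classes": tf.constant([1.0], dtype=tf.float32),
    },
}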

I tried both the original augmenter and the resizing layer used for the evaluation dataset (below), but the result is the same (an empty tensor):

resizing = keras_cv.layers.JitteredResize(
    target_size=(W_RESIZED, H_RESIZED),
    scale_factor=(FROM_SCALEFACTOR, TO_SCALEFACTOR),
    bounding_box_format="xyxy",
)

These are the constants I use:

H_RESIZED = 640 #Original was 480
W_RESIZED = 480 #Original was 640
FROM_SCALEFACTOR = 0.75 #Original was 0.75
TO_SCALEFACTOR = 0.76 #Original was 1.3

These are the Python, TensorFlow, and KerasCV versions I'm using:

Python: 3.11.4 (tags/v3.11.4:d2340ef, Jun  7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Tensorflow: 2.15.0
KerasCV: 0.7.2

Any suggestion would be much appreciated.


1 Answer

Answer by Maksym Stetsenko:

I had to reproduce the reference code to find the issue. JitteredResize has nothing to do with it.

After the line train_ds = train_ds.ragged_batch(BATCH_SIZE, drop_remainder=True), you need to use a for loop

for batch in train_ds:
    print(batch)

to look at your batch.

Below is from the TensorFlow/Keras manual: Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.

Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes:

- If an input element is a tf.Tensor whose static tf.TensorShape is fully defined, then it is batched as normal.
- If an input element is a tf.Tensor whose static tf.TensorShape contains one or more axes with unknown size (i.e., shape[i]=None), then the output will contain a tf.RaggedTensor that is ragged up to any of such dimensions.
- If an input element is a tf.RaggedTensor or any other type, then it is batched as normal.
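As a minimal illustration of that behavior (a toy dataset, not the question's data):

import tensorflow as tf

# Toy dataset whose elements are 1-D tensors of varying length.
ds = tf.data.Dataset.from_generator(
    lambda: iter([[1, 2, 3], [4, 5], [6]]),
    output_signature=tf.TensorSpec(shape=[None], dtype=tf.int32),
)

# ragged_batch stacks the variable-shape elements into a tf.RaggedTensor.
for batch in ds.ragged_batch(2, drop_remainder=True):
    print(batch)  # <tf.RaggedTensor [[1, 2, 3], [4, 5]]>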

"KerasCV’s JitteredResize function, in its turn, is designed for object detection pipelines and implements an image augmentation technique that involves randomly scaling, resizing, cropping, and padding images along with corresponding bounding boxes. This process introduces variability in scale and local features, enhancing the diversity of the training data for improved generalization".