I have encoded some images in TFRecord format in order to train a CNN. My problem is reading the labels (strings of 5 digits in the range 10000-50000, sparsely distributed) from the TFRecord files and converting those strings into one-hot encoded tensors in order to train my classifier. Training is done with a custom Estimator in TensorFlow. Here is the snippet of the function I use to read the TFRecord files and extract the images:
    def imgs_input_fn(filenames, classes, perform_shuffle=False, repeat_count=1, batch_size=1):
        def _parse_function(serialized):
            features = {
                'image/encoded': tf.FixedLenFeature([], tf.string),
                'image/width': tf.FixedLenFeature([], tf.int64),
                'image/height': tf.FixedLenFeature([], tf.int64),
                'image/channels': tf.FixedLenFeature([], tf.int64),
                'image/colorspace': tf.FixedLenFeature([], tf.string),
                'image/class/label': tf.FixedLenFeature([], tf.string),
                'image/class/text_label': tf.FixedLenFeature([], tf.string),
                'image/filename': tf.FixedLenFeature([], tf.string)
            }
            # Parse the serialized data so we get a dict with our data.
            parsed_example = tf.parse_single_example(serialized=serialized,
                                                     features=features)
            # Get the image as raw bytes.
            # In image_shape I can't use parsed_example['image/channels']
            # read from the file but need to pass a literal 1 to the shape.
            # How do I pass the value read from the TFRecord file instead?
            channels = parsed_example['image/channels']
            # Note: tf.image.decode_image returns [height, width, channels].
            image_shape = tf.stack([parsed_example['image/height'],
                                    parsed_example['image/width'], 1])
            image_raw = parsed_example['image/encoded']
            # Labels are strings representing numbers, but they are sparse.
            label = tf.string_to_number(parsed_example['image/class/label'],
                                        out_type=tf.int32)
            image = tf.image.decode_image(image_raw)
            image = tf.divide(tf.cast(image, tf.float32),
                              tf.constant(255., dtype=tf.float32))
            image = tf.reshape(image, image_shape)
            num_classes = classes
            # The following operation does not give the expected result:
            # labels are strings like 12345, 34234, 53453, and I have only
            # e.g. 100 classes, so tf.one_hot(10000, 100) will give me a
            # tensor with only 0s in it.
            # input_name is the name of the Keras model's input layer,
            # defined elsewhere.
            d = dict(zip([input_name], [image])), tf.one_hot(label, num_classes)
            return d

        dataset = tf.data.TFRecordDataset(filenames=filenames)
        # Parse the serialized data in the TFRecord files.
        # This returns TensorFlow tensors for the image and label.
        dataset = dataset.map(_parse_function)
        if perform_shuffle:
            # Randomizes input using a window of 1024 elements (read into memory).
            dataset = dataset.shuffle(buffer_size=1024)
        dataset = dataset.repeat(repeat_count)  # Repeat the dataset this many times.
        dataset = dataset.batch(batch_size)     # Batch size to use.
        iterator = dataset.make_one_shot_iterator()
        batch_features, batch_labels = iterator.get_next()
        return batch_features, batch_labels
So how can I fill a structure like a lookup table (using, for example, tf.contrib.lookup.index_table_from_tensor), reading the information directly from the TFRecord files as the images are read for training, without providing a vocabulary file in advance or scanning all the TFRecords beforehand to extract the labels? I would like to leverage the fact that if a label is unknown to the lookup table, index_table_from_tensor will use the hash value of the label to give a consistent result. The function above is called from a training loop via tf.estimator.train_and_evaluate, after I define the tf.estimator.TrainSpec and tf.estimator.EvalSpec, and I use a Keras model.
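For reference, the behaviour I am after from index_table_from_tensor can be sketched in plain Python (the function names and the crc32 hash below are purely illustrative, not the actual TensorFlow implementation): known labels map to their position in a vocabulary, and unknown labels fall back deterministically to an out-of-vocabulary bucket, so the one-hot index is always valid.

```python
import zlib

def make_label_index(vocabulary, num_oov_buckets=1):
    """Return a function mapping a label string to an integer index."""
    table = {label: i for i, label in enumerate(vocabulary)}

    def lookup(label):
        if label in table:
            return table[label]
        # Unknown labels get a stable hash-based bucket past the vocabulary,
        # mirroring the num_oov_buckets behaviour of index_table_from_tensor.
        bucket = zlib.crc32(label.encode()) % num_oov_buckets
        return len(vocabulary) + bucket

    return lookup

def one_hot(index, depth):
    """Dense one-hot vector, mirroring tf.one_hot on a scalar index."""
    vec = [0.0] * depth
    if 0 <= index < depth:
        vec[index] = 1.0
    return vec

lookup = make_label_index(["10000", "12345", "34234"], num_oov_buckets=2)
depth = 3 + 2  # vocabulary size + OOV buckets
print(one_hot(lookup("12345"), depth))  # known label -> one-hot at its vocab index
print(one_hot(lookup("99999"), depth))  # unknown label -> one-hot in an OOV bucket
```

The key point is that the resulting index is always in range for tf.one_hot, unlike feeding the raw 5-digit label directly.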
Is there a way to achieve this?
Thanks a lot.
Seba