I am designing a neural network in Keras for a classification task. My labels are points on a plane whose locations (x, y) are known. I would like the Keras loss function to depend on the distance between the predicted and actual points - the farther the prediction is from the target, the higher the loss.
I have implemented this custom loss function, which takes the distance matrix between the targets as input. However, it crashes at the point of slicing the distance_matrix numpy array.
    def custom_loss(distance_matrix):
        def loss(y_true, y_pred):
            return distance_matrix[tf.keras.backend.argmax(y_true, axis=-1),
                                   tf.keras.backend.argmax(y_pred, axis=-1)]
        return loss
My idea is to set the loss value to the [true, pred] element of the distance matrix, where true is the argmax of the one-hot target y_true and pred is the predicted class, i.e. the argmax of the predicted class probabilities y_pred.
Is there a way to convert the tensor tf.keras.backend.argmax(y_true, axis=-1) to a numpy array to enable slicing, or alternatively, is there a way to slice directly with tensors?
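For context, here is a minimal standalone sketch of the kind of per-sample lookup I am after, done with plain tensors. It uses tf.gather_nd, which is my guess at the right tool for tensor-based indexing (the distance values here are made-up toy numbers, and I am setting aside the question of whether an argmax-based loss is differentiable):

    import numpy as np
    import tensorflow as tf

    # Toy 3x3 distance matrix between the three class locations
    # (hypothetical values, symmetric with zeros on the diagonal).
    distance_matrix = np.array([[0.0, 1.0, 2.0],
                                [1.0, 0.0, 1.5],
                                [2.0, 1.5, 0.0]], dtype=np.float32)
    dist = tf.constant(distance_matrix)

    # One-hot true labels and predicted probabilities for a batch of 2.
    y_true = tf.constant([[1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0]])
    y_pred = tf.constant([[0.1, 0.7, 0.2],
                          [0.2, 0.3, 0.5]])

    true_idx = tf.argmax(y_true, axis=-1)   # shape (batch,)
    pred_idx = tf.argmax(y_pred, axis=-1)   # shape (batch,)

    # Look up dist[true_idx[i], pred_idx[i]] for each sample i.
    losses = tf.gather_nd(dist, tf.stack([true_idx, pred_idx], axis=-1))
    # Sample 0: true class 0, predicted class 1 -> dist[0, 1] = 1.0
    # Sample 1: true class 2, predicted class 2 -> dist[2, 2] = 0.0

This runs without the slicing crash, but I am not sure how to fit it cleanly into the custom_loss closure above, or whether this is the idiomatic approach.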
I have tried ideas from this thread, without success: How can I convert a tensor into a numpy array in TensorFlow?