Calculating Dice coefficient for image segmentation


I have created a function that calculates the Dice coefficient for a binary mask; however, I am not sure if I am doing it correctly:

def dice_metrics(inputs, targets, smooth=1e-8):
    # inputs is the predicted mask and targets is the ground-truth mask

    #flatten label and prediction tensors
    inputs = inputs.view(-1)
    targets = targets.view(-1)

    intersection = (inputs * targets).sum()
    dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)

    return dice

I doubt that my code above is correct because I get different values from the F1 score calculated using the sklearn.metrics.f1_score() function. I thought the Dice coefficient should be equal to the F1 score...
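For binary masks the two metrics are algebraically the same: Dice = 2TP / (2TP + FP + FN) = F1. As a sanity check, the two can be compared on masks that are already binary (a minimal sketch; the random masks below are made up purely for illustration):

    import torch
    from sklearn.metrics import f1_score

    # hypothetical binary masks, for illustration only
    pred = (torch.rand(1000) > 0.5).float()
    target = (torch.rand(1000) > 0.5).float()

    dice = dice_metrics(pred, target)
    f1 = f1_score(target.numpy().astype('uint8'), pred.numpy().astype('uint8'))
    print(float(dice), f1)  # should agree up to the smooth term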

These are the data types for inputs and targets:

Data type of TARGETS: torch.float32 Min: tensor(0., device='cuda:0') Max: tensor(0.9804, device='cuda:0')

Data type of INPUTS: torch.float32 Min: tensor(0., device='cuda:0') Max: tensor(0.9804, device='cuda:0')

When using the sklearn.metrics.f1_score() function, a binary threshold is applied to the inputs and targets, which are then converted to np.uint8.
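A minimal sketch of that preprocessing (assuming a 0.5 threshold; the exact threshold value is not shown here):

    import numpy as np
    from sklearn.metrics import f1_score

    # assumed threshold of 0.5, for illustration; the actual value may differ
    inputs_bin = (inputs > 0.5).cpu().numpy().astype(np.uint8).ravel()
    targets_bin = (targets > 0.5).cpu().numpy().astype(np.uint8).ravel()

    f1 = f1_score(targets_bin, inputs_bin)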

The calculated Dice coefficient is 0.15, but the calculated F1 score is 0.04.


1 Answer

Yakov Dan:

It looks fine, except your ground truth should also be binary for the Dice score. With continuous values (your max is 0.9804), (inputs * targets).sum() computes a "soft" intersection that won't match the hard counts used by F1.
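In other words, binarize both masks the same way you do before calling f1_score, then compute Dice (a minimal sketch, assuming the same 0.5 threshold as above):

    # threshold both masks (0.5 assumed, to match the F1 preprocessing)
    inputs_bin = (inputs > 0.5).float()
    targets_bin = (targets > 0.5).float()

    dice = dice_metrics(inputs_bin, targets_bin)
    # with both masks binary, Dice should now match sklearn's F1 score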