I am trying to use FluxTraining.jl to train a UNet model u, but I am having trouble passing data from a DataLoader to the Learner correctly.
Context:
I have two datasets: input images w with dimensions 256x256x3x20 (20 observations, 3 RGB channels) and ground-truth targets wp with dimensions 256x256x1x20 (20 observations, 1 grayscale channel). I define my data iterator using DataLoader as follows:
trainiter = DataLoader((w, wp), 4)
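For reference, here is a self-contained sketch of this data setup, with dummy arrays standing in for my real data (note that recent Flux/MLUtils versions of DataLoader take the batch size as the keyword batchsize, so that form is used here):

using Flux: DataLoader

w  = rand(Float32, 256, 256, 3, 20)  # dummy inputs: 20 RGB observations
wp = rand(Float32, 256, 256, 1, 20)  # dummy targets: 20 grayscale masks

trainiter = DataLoader((w, wp); batchsize = 4)

for (x, y) in trainiter
    @show size(x)  # (256, 256, 3, 4) on full batches
    @show size(y)  # (256, 256, 1, 4)
end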
I then attempt to pass the data to the Learner using the following code:
learner = Learner(
    u,
    loss,
    callbacks = [
        Metrics(accuracy),
        Checkpointer("trainingData/modelSaves/"),
        logger_backend
    ],
    optimizer = opt
)
# run one epoch to reproduce the error
epoch!(learner, TrainingPhase(), trainiter)
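For anyone trying to reproduce this, here is a stripped-down sketch of the same setup (checkpointing and logging removed, and a single Conv layer standing in for my UNet; the calls are otherwise the ones I use above):

using Flux, FluxTraining

u = Conv((3, 3), 3 => 1, pad = SamePad())  # stand-in for the UNet
loss(x, y) = Flux.dice_coeff_loss(u(x), y)
opt = ADAM()

learner = Learner(u, loss, optimizer = opt)
epoch!(learner, TrainingPhase(), trainiter)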
Issue:
When running the code, I encounter an error indicating that the loss function (see bottom of post) is receiving input data x with dimensions 256x256x1x20 instead of the expected 256x256x3x20. It seems that the data is not being passed correctly from the DataLoader to the Learner.
How do I properly pass data from the DataLoader to the Learner in FluxTraining.jl?
Before using FluxTraining, I was able to get training to work with
Flux.train!(loss, Flux.params(u), rep, opt, cb = () -> @show(loss(w, wp)))
where rep = Iterators.repeated((w, wp), 100). In all cases, ADAM() is my optimizer (opt).
For reference, my loss function is:
function loss(x, y)
    @show size(x)
    @show size(y)
    Flux.dice_coeff_loss(u(x), y)
end
I have tried various modifications, such as changing the DataLoader call to DataLoader((w, w), 4) or DataLoader(w, 4), but I still hit issues: either a single Float32 is passed into the model instead of an array, or the input dimensions are still wrong.
I have also tried looping through all the xs and ys in my trainiter and calling the loss function directly (see the loop below). That works fine, so I suspect the problem is in how I'm using the epoch! function.
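For concreteness, this is the manual loop that works:

for (x, y) in trainiter
    @show loss(x, y)  # x and y arrive with the expected shapes here
end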