Am I able to calculate autograd of NN outputs wrt network inputs in flux.jl?


I have an NN that is a function f: (t, v) -> (x, z). Am I able to calculate an autograd partial derivative df/dt? I want to use the autograd calculation in a regularization term in my loss function.

yhat = net((t,v))
#calculate current value of df/dt here
penalized_loss(yhat, y) = loss(yhat, y) + penalty(df/dt)

I want to do something like

df/dt = gradient(net,t)

but I don't know how to tell the gradient function which input (t) to differentiate with respect to.


There are 2 answers

Heliton Martins

Based on the documentation, you can use gradient this way:

using Flux                      # re-exports gradient from Zygote
using Flux: Params, update!

function my_custom_train!(loss, ps, data, opt)
  ps = Params(ps)               # collect the trainable parameters
  for d in data
    gs = gradient(ps) do        # gradient of the loss w.r.t. the parameters ps
      loss(d...)
    end
    update!(opt, ps, gs)        # apply one optimiser step
  end
end
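
For example, a minimal usage sketch (the model, data, and optimiser below are placeholders I've assumed; they are not part of the original answer):

using Flux

model = Chain(Dense(2, 16, tanh), Dense(16, 2))   # maps [t, v] -> [x, z]
loss(x, y) = Flux.Losses.mse(model(x), y)

X = rand(Float32, 2, 100)      # each column is one (t, v) input
Y = rand(Float32, 2, 100)      # each column is one (x, z) target
data = [(X, Y)]                # a single batch
opt  = Descent(0.01)

my_custom_train!(loss, Flux.params(model), data, opt)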

This gradient(ps) do ... end is the Julia do-block idiom for passing an anonymous function as the first argument:

gradient(() -> loss(d...), ps)

gradient comes from Zygote.jl; you can read more about it in the Zygote.jl documentation.
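
That said, the question asks for the gradient with respect to an input rather than the parameters. Here is a minimal sketch of that, assuming the network takes a vector [t, v]; net, t, v, yhat, and penalized_loss are placeholder names, not an official API:

using Flux   # re-exports gradient from Zygote

net = Chain(Dense(2, 16, tanh), Dense(16, 2))   # [t, v] -> [x, z]
t, v = 0.3f0, 1.2f0

# wrap the forward pass in a closure over t; sum(...) reduces the two outputs
# to a scalar so that a plain gradient (rather than a Jacobian) is defined
df_dt = gradient(t -> sum(net([t, v])), t)[1]

# recompute the same term inside the loss used for training
penalized_loss(yhat, y, t, v) =
    Flux.Losses.mse(yhat, y) + abs2(gradient(t -> sum(net([t, v])), t)[1])

For per-output derivatives there is also Zygote.jacobian. Note that differentiating penalized_loss with respect to the parameters then requires Zygote to differentiate through a gradient call (second-order AD), which works for many models but is not supported for every operation.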

Antonello

If it is a feed-forward neural network, you can use BetaML and, in particular, copy the getGradient function ("step 1" and "step 2") to retrieve the backward pass all the way down to the inputs.
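
The idea is essentially a manual backward pass from the outputs down to the inputs. As a rough illustration only (this is not BetaML's actual getGradient code, and all names and sizes below are made up), for a single tanh hidden layer:

# forward pass: [t, v] -> hidden -> [x, z]
W1, b1 = randn(8, 2), randn(8)
W2, b2 = randn(2, 8), randn(2)

inp = [0.5, 1.0]              # the input (t, v)
z1  = W1 * inp .+ b1          # layer 1 pre-activation
a1  = tanh.(z1)               # layer 1 activation
out = W2 * a1 .+ b2           # the output (x, z)

# backward pass: Jacobian d(out)/d(inp) = W2 * diag(tanh'(z1)) * W1
J     = W2 * ((1 .- a1 .^ 2) .* W1)   # tanh'(z1) = 1 - tanh(z1)^2, applied row by row
df_dt = J[:, 1]                        # derivative of both outputs w.r.t. t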