Using a PyTorch model, I want to plot the gradients of the loss with respect to my activation functions (e.g. ReLU) so I can check for the vanishing/exploding gradients problem. For the non-activation layers I can get the gradients as shown below, but for the activation functions I cannot. How can I plot the gradients of my activation functions? Thanks for your help.

import torch

# Given a model defined like this, how do I get the grads of torch.nn.ReLU()?
model = torch.nn.Sequential(torch.nn.Linear(1, 2), torch.nn.ReLU(),
                            torch.nn.Linear(2, 1), torch.nn.Tanh())
# For non-activation layers I can do
print(model[0].weight.grad)
# But I cannot do
model[1].grad
How can I get the gradients for ReLU and Tanh?
ReLU(x) = max(0, x), so its gradient is 0 if x < 0, 1 if x > 0, and undefined at x = 0. You could therefore just take the output from the ReLU layer and read off the corresponding gradient value: 1 wherever the output is positive, 0 elsewhere.
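For example, a minimal sketch of that idea (relu_out here is random stand-in data; in practice it would be the output you capture from your ReLU layer):

import torch
import matplotlib.pyplot as plt

# Stand-in for the values coming out of a ReLU layer
relu_out = torch.relu(torch.randn(1000))

# Local ReLU gradient recovered from the output:
# 1 where the output is positive, 0 elsewhere (undefined exactly at x = 0)
relu_grad = (relu_out > 0).float()

plt.hist(relu_grad.numpy(), bins=2)
plt.xlabel("local ReLU gradient")
plt.ylabel("count")
plt.show()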
Edit: here is how you can get certain layers' outputs and, after that, manually compute or capture the gradients.

Option one: you could create an nn.Module subclass instead of your torch.nn.Sequential to store and return your activation gradients. Something like the sketch below.
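A minimal sketch of option one, assuming the same toy architecture as in the question; Net, _save_grad and self.grads are names made up for illustration. It uses Tensor.register_hook on each activation output so that the gradient of the loss with respect to that output is stored during the backward pass:

import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(1, 2)
        self.fc2 = torch.nn.Linear(2, 1)
        self.grads = {}  # stores grads of the loss w.r.t. each activation output

    def _save_grad(self, name):
        # returns a hook that stores the incoming gradient under the given name
        def hook(grad):
            self.grads[name] = grad.detach()
        return hook

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        if x.requires_grad:  # skip hooking in eval/no_grad mode
            x.register_hook(self._save_grad("relu"))
        x = torch.tanh(self.fc2(x))
        if x.requires_grad:
            x.register_hook(self._save_grad("tanh"))
        return x

model = Net()
loss = model(torch.randn(8, 1)).sum()
loss.backward()
print(model.grads["relu"])  # gradient of the loss w.r.t. the ReLU output
print(model.grads["tanh"])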
Option two is using forward hooks, as in the sketch below.
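A sketch of the hook approach, keeping the torch.nn.Sequential from the question; grads and make_forward_hook are helpers introduced here for illustration. The forward hook fires on each forward pass and attaches a Tensor.register_hook to the module's output, which records the gradient flowing back through it:

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1, 2), torch.nn.ReLU(),
    torch.nn.Linear(2, 1), torch.nn.Tanh(),
)

grads = {}  # gradient of the loss w.r.t. each activation module's output

def make_forward_hook(name):
    def forward_hook(module, inputs, output):
        # attach a tensor hook so the gradient is recorded during backward
        def save_grad(grad):
            grads[name] = grad.detach()
        output.register_hook(save_grad)
    return forward_hook

for name, module in model.named_modules():
    if isinstance(module, (torch.nn.ReLU, torch.nn.Tanh)):
        module.register_forward_hook(make_forward_hook(name))

out = model(torch.randn(8, 1))
out.sum().backward()
print(grads)  # keys are the positions inside the Sequential, e.g. '1' and '3'

Note that output.register_hook adds a new tensor hook on every forward pass, so in a long training loop you may want to keep the returned handles and remove them when you are done.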