So, I have been working on neural style transfer in PyTorch, but I'm stuck at the point where we have to run the input image through a limited number of layers and minimize the style loss. Long story short, I want to find a way in PyTorch to evaluate the input at different layers of the architecture (I'm using VGG16). I have seen this problem solved very simply in Keras, and I wanted to see whether there is a similar way in PyTorch as well:
from keras.applications.vgg16 import VGG16
from keras.models import Model  # needed to build the truncated model

model = VGG16()
# rebuild the model so its output is the activation of an early layer
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
Of course you can do that. You can always print your model to see how it's structured. If it is a torch.nn.Sequential (or part of it is, as with VGG16, whose convolutional layers live in model.features), you can slice it and run the input through only the layers you need.
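A minimal sketch of that approach, assuming torchvision is available; the layer indices below are arbitrary examples rather than the canonical style-transfer layers, and pretrained=True is the older torchvision API (newer versions take a weights argument):

import torch
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(pretrained=True).eval()

# model.features is an nn.Sequential, so slicing it gives a
# sub-network that stops at the layer you care about
truncated = nn.Sequential(*list(model.features)[:5])

x = torch.randn(1, 3, 224, 224)  # dummy input image batch
out = truncated(x)               # activation after layer index 4

# Style transfer usually needs several layers at once; walking
# the Sequential and collecting outputs handles that:
wanted = {0, 5, 10}              # example indices, pick your own
activations = {}
h = x
for i, layer in enumerate(model.features):
    h = layer(h)
    if i in wanted:
        activations[i] = h

Forward hooks (module.register_forward_hook) are another common way to grab intermediate activations without rebuilding the model.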