difference between initializing bias by nn.init.constant and (bias=1) in pytorch


I'm writing code for AlexNet and I'm confused about how to initialize the weights.

What is the difference between:

        for layer in self.cnnnet:
            if isinstance(layer, nn.Conv2d):
                nn.init.constant_(layer.bias, 0)

and

nn.Linear(shape, bias=0)

1 Answer

Answered by Shir (accepted answer)

The method nn.init.constant_ receives a parameter to initialize and a constant value to initialize it with. The trailing underscore indicates that the tensor is modified in place. In your case, you use it to initialize the bias parameter of a convolution layer with the value 0.
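A minimal sketch of this first pattern: the layer is created with PyTorch's default initialization, and nn.init.constant_ then overwrites the existing bias tensor in place.

```python
import torch
import torch.nn as nn

# Create a conv layer; PyTorch initializes weight and bias with its defaults.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

# Fill the already-existing bias tensor with a constant, in place.
nn.init.constant_(conv.bias, 0)

# The bias parameter still exists -- it is simply all zeros now.
print(conv.bias.shape)                 # torch.Size([16])
print(torch.all(conv.bias == 0))       # tensor(True)
```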

In nn.Linear, the bias parameter is a boolean stating whether you want the layer to have a bias or not. By passing 0 (which is falsy), you're actually creating a linear layer with no bias at all.
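This is easy to verify: when the bias flag is falsy, the layer registers its bias as None rather than creating a zero tensor.

```python
import torch.nn as nn

# bias expects a boolean; 0 is falsy, so no bias parameter is created at all.
no_bias = nn.Linear(10, 5, bias=0)
print(no_bias.bias)            # None -- the layer has no bias to initialize

# With bias=True (the default) the layer owns a trainable bias tensor.
with_bias = nn.Linear(10, 5)
print(with_bias.bias.shape)    # torch.Size([5])
```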

A good practice is to start with PyTorch's default initialization techniques for each layer. This happens implicitly when you create the layers; PyTorch initializes them for you. In more advanced development stages you can explicitly change the initialization if necessary.
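A sketch of that workflow, assuming a small hypothetical model: rely on the implicit defaults at construction, then selectively override only the parameters you care about.

```python
import torch
import torch.nn as nn

# Layers receive PyTorch's default initialization the moment they are built.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3),
)

# Later, explicitly override only what you need -- here, zeroing conv biases.
for layer in model:
    if isinstance(layer, nn.Conv2d):
        nn.init.constant_(layer.bias, 0)
```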

For more info see the official documentation of nn.Linear and nn.Conv2d.