Custom activation function in PyTorch - prediction stays fixed


I read this post about custom activation functions, but I still can't implement my code. My activation function can be expressed as a combination of existing PyTorch functions, and it works fine as function_pytorch(prediction, Q_sample). (Q_samples is a variable I need; it doesn't require a gradient.)
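function_pytorch itself is not shown here; a hypothetical stand-in built only from differentiable PyTorch ops could look like this (the real function differs):

    import torch

    # Hypothetical placeholder for function_pytorch: any composition of
    # differentiable torch ops. Q_samples is treated as a constant, so
    # no gradient flows through it.
    def function_pytorch(prediction, Q_samples):
        return torch.sigmoid(prediction) * Q_samples.detach()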

My activation function should receive the output of the NN, apply function_pytorch, and pass its output into the loss function. So:

    import torch.nn as nn

    class Activation_fun(nn.Module):
        def __init__(self):  # the unused prediction argument is dropped
            super().__init__()

        def forward(self, input, Q_samples):
            return function_pytorch(input, Q_samples)
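As a quick standalone check (the shapes here are made up for illustration):

    act = Activation_fun()
    out = act(torch.randn(4, 3), torch.ones(4, 3))  # hypothetical shapes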

In my NN I have:

    class NeuralNet(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super(NeuralNet, self).__init__()
            self.BN0 = nn.BatchNorm1d(input_size)
            self.l1 = nn.Linear(input_size, hidden_size)
            self.tan = nn.Tanh()
            # note: this BatchNorm expects output_size features, so it only
            # matches l1's output if hidden_size == output_size
            self.BN = nn.BatchNorm1d(output_size)
            # custom activation
            self.l2 = Activation_fun()

        def forward(self, x, q):
            out = self.BN0(x)
            out = self.l1(out)
            out = self.tan(out)  # was self.tan(), which drops the tensor
            out = self.BN(out)   # was self.BN9, an undefined attribute
            out = self.l2(out, q)
            return out

    model = NeuralNet(input_size, hidden_size, output_size)

and in my training epochs:

    outputs = model(inputs, q_samples)
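For context, a minimal training step around that call might look like this; the loss, optimizer, and targets are assumptions, since they are not shown above:

    # Sketch of one training step; criterion, optimizer, and targets are
    # assumed, not taken from the question.
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    optimizer.zero_grad()
    outputs = model(inputs, q_samples)
    loss = criterion(outputs, targets)
    loss.backward()   # gradients should flow back through function_pytorch
    optimizer.step()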

The problem is that my predictions stay fixed when I apply my customized activation function. Is there a problem in my implementation?
