How to deal with the loss getting clamped (stuck at a constant value) in graph neural networks?


I have trained the following model using PyTorch on graphs that all share the same edge index. The task is graph classification on electronic health records: each graph represents one patient's data, and the node vectors have been derived from a combined knowledge graph.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class mdl(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size, dropout_rate):
        super().__init__()
        self.conv1 = GCNConv(input_size, hidden_size)
        self.conv2 = GCNConv(hidden_size, output_size)
        self.dropout = torch.nn.Dropout(dropout_rate)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = self.dropout(x)
        x = self.conv2(x, edge_index)
        x = torch.mean(x, dim=0, keepdim=True)  # mean-pool node embeddings into one graph-level vector
        return x
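
For reference, each input graph and the model are instantiated roughly like this (the hidden size, dropout rate and the edge set below are placeholders I made up for illustration; only the [5, 20] node-feature shape matches my actual data):

import torch

# one graph: 5 nodes with 20-dimensional node features, shape [5, 20]
x = torch.randn(5, 20)

# placeholder edge index shared by all graphs, shape [2, num_edges]
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]], dtype=torch.long)

model = mdl(input_size=20, hidden_size=16, output_size=1, dropout_rate=0.5)
out = model(x, edge_index)  # shape [1, 1] after the mean over the node dimension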

The problem is that the loss gets clamped at a particular value.

I have tried various learning rates and techniques such as momentum and learning-rate scheduling, but the loss still remains constant.
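
For completeness, the criterion, optimizer and scheduler used in the loop below are created roughly like this (the choice of loss function and the concrete values of lr, momentum, step_size and gamma are just example settings for illustration, not my exact ones):

import torch

# BCE on the raw model output (the model has no final sigmoid), assuming binary 0/1 labels
criterion = torch.nn.BCEWithLogitsLoss()

# SGD with momentum; lr and momentum are example values
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# StepLR: multiply the learning rate by gamma every step_size epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)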

I trained the above model using the following loop:

# training data (graphVec): 800 graphs, each with a node-feature matrix of shape [5, 20]
# y_train is a tensor of 0s and 1s of shape [800, 1] for binary classification

num_epochs = 100
for epoch in range(num_epochs):
    model.train()

    for i in range(len(graphVec)):  # pass every graph through the model in every epoch
        output = model(graphVec[i], edge_index)
        loss = criterion(output, y_train[i])
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # StepLR scheduler step once per epoch
    scheduler.step()
    print(output)  # output of the last graph in this epoch

    # print the loss and learning rate every epoch
    current_lr = optimizer.param_groups[0]['lr']
    print(f'Epoch [{epoch + 1}/{num_epochs}], Loss: {loss.item()}, Learning Rate: {current_lr}')

But the loss stays heavily clamped: it does not decrease over the epochs. What should I do?
