Is adding an FC linear layer on top of a seq2seq architecture a potential source of data leakage from future to past?


I have a burning problem with my implementation of a TCN and need a rescue from a more experienced player.

My problem relates to the TCN (Temporal Convolutional Network) architecture and, more generally, to seq2seq models. In my case I would like to predict 100 probabilities from 100 inputs, and I am creating a fully-connected linear layer with 100 outputs (on top of the causal convolution layers) to do it.

I have seen that it is standard to add an FC layer (e.g. followed by a sigmoid) on top of encoder-decoder and other sequence-to-sequence models. However, my concern is that it may allow future information to influence the outputs generated for past timestamps (data leakage). Can you confirm my intuition?
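
To make my concern concrete, here is a small sketch (with made-up shapes) of the two ways I can imagine applying the linear layer; if I understand correctly, only the second one mixes information across timesteps:

import torch
import torch.nn as nn

# Hypothetical shapes for illustration: batch N=2, channels C=8, length L=100.
N, C, L, out = 2, 8, 100, 1

y = torch.randn(N, C, L)  # TCN output: (N, C, L)

# Option A: per-timestep linear. nn.Linear acts on the last dimension,
# so transposing to (N, L, C) makes it mix channels only, never timesteps.
per_step = nn.Linear(C, out)
o_a = per_step(y.transpose(1, 2))  # (N, L, out) -- no mixing across time

# Option B: linear over the flattened time axis. Every output now sees
# every timestep, including future ones -- this is where leakage happens.
flat = nn.Linear(C * L, L)
o_b = flat(y.reshape(N, C * L))    # (N, L) -- each output depends on all timesteps

In my code below, the linear layer is applied to the features of the last timestep only (y1[:, :, -1]), so I am not sure which of these two cases it falls into.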

Piece of Code:

import torch.nn as nn
# Assuming TemporalConvNet comes from the reference TCN implementation
# (locuslab/TCN); the import path may differ in your project.
from tcn import TemporalConvNet


class TCNBC(nn.Module):
    def __init__(self, input_size, output_size, num_channels, kernel_size, dropout):
        super(TCNBC, self).__init__()
        self.tcn = TemporalConvNet(input_size, num_channels,
                                   kernel_size=kernel_size, dropout=dropout)
        self.tcn = self.tcn.float()
        # Maps the TCN's channel dimension to the desired number of outputs
        self.linear = nn.Linear(num_channels[-1], output_size)

        # Sigmoid activation for binary classification
        self.sigmoid = nn.Sigmoid()

    def forward(self, inputs):
        """Inputs have to have dimension (N, C_in, L_in)."""
        y1 = self.tcn(inputs)  # output has dimension (N, C, L)
        # Only the last timestep's features go through the linear layer;
        # the sigmoid defined above is not applied here.
        o = self.linear(y1[:, :, -1])
        return o
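
If it matters, this is the sanity check I had in mind (a rough sketch, assuming TemporalConvNet is the reference implementation from locuslab/TCN): perturb only the future part of an input and check whether the outputs for earlier timesteps change.

import torch

torch.manual_seed(0)
model = TCNBC(input_size=1, output_size=1, num_channels=[8, 8],
              kernel_size=3, dropout=0.0)
model.eval()  # make the forward pass deterministic

x = torch.randn(1, 1, 100)
x_perturbed = x.clone()
x_perturbed[:, :, 50:] += 1.0  # modify timesteps 50..99 only

with torch.no_grad():
    y1 = model.tcn(x)            # (1, 8, 100)
    y2 = model.tcn(x_perturbed)

# If the convolutional stack is causal, outputs before t=50 must be identical.
print(torch.allclose(y1[:, :, :50], y2[:, :, :50]))  # expect True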