Why does my seq2seq model return negative loss when I use a pre-trained embedding model?


I am following this example code to build a seq2seq model with Keras: https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py

When I train that code as-is, it works fine and the results are good. But when I train it with a pre-trained embedding model, the loss and the crossentropy always take negative values.

I have even tried training on a dataset of only 5 examples to make the model overfit, just to confirm that everything works correctly, but the loss and the crossentropy are still negative.

I use a FastText embedding model. Here is the code that builds the dataset arrays from the embedding vectors:

    encoder_input_data = np.zeros(
        (input_texts_len, max_encoder_seq_length, vector_length),
        dtype='float32')
    decoder_input_data = np.zeros(
        (input_texts_len, max_decoder_seq_length, vector_length),
        dtype='float32')
    decoder_target_data = np.zeros(
        (input_texts_len, max_decoder_seq_length, vector_length),
        dtype='float32')
    padding = np.zeros((vector_length,), dtype='float32')

    for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
        for t, word in enumerate(input_text):
            encoder_input_data[i, t] = w2v.get_vector(word)
        encoder_input_data[i, t + 1:] = padding

        for t, word in enumerate(target_text):
            decoder_input_data[i, t] = w2v.get_vector(word)
            if t > 0:
                # decoder_target_data is ahead of decoder_input_data by one timestep
                decoder_target_data[i, t - 1] = w2v.get_vector(word)

        decoder_input_data[i, t + 1:] = padding
        decoder_target_data[i, t] = padding
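(`w2v` here is the FastText lookup object; a rough sketch of how it is loaded with gensim, with a hypothetical model file name:)

    from gensim.models.fasttext import load_facebook_vectors

    w2v = load_facebook_vectors('cc.en.300.bin')  # hypothetical model file
    vector_length = w2v.vector_size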

Here is the model code itself:

    encoder_inputs = Input(shape=(max_encoder_seq_length, vec_leng))
    x = Masking(mask_value=0.0)(encoder_inputs)
    # return_sequences/return_state are required: the call below unpacks three
    # tensors, and the attention layer needs the full output sequence
    encoder = LSTM(latent_dim, return_sequences=True, return_state=True,
                   name='lstm_1')

    encoder_outputs, state_h, state_c = encoder(x)
    encoder_states = [state_h, state_c]
    decoder_inputs = Input(shape=(max_decoder_seq_length, vec_leng))
    a = Masking(mask_value=0.0)(decoder_inputs)
    decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True,
                        name='decoder_lstm')
    decoder_outputs, _, _ = decoder_lstm(a, initial_state=encoder_states)
    # Attention layer
    attn_layer = AttentionLayer(name='attention_layer')
    attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])

    decoder_concat_input = Concatenate(axis=-1)([decoder_outputs, attn_out])
    decoder_dense = Dense(vec_leng, activation='softmax')
    dense_time = TimeDistributed(decoder_dense, name='time_distributed_layer')
    decoder_pred = dense_time(decoder_concat_input)

    model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_pred, name='main_model')
    encoder_model = Model(inputs=encoder_inputs, outputs=[encoder_outputs] + encoder_states, name='encoder_model')

    decoder_state_input_h = Input(shape=(latent_dim,))
    decoder_state_input_c = Input(shape=(latent_dim,))
    encoder_states_ = Input(batch_shape=(1, max_encoder_seq_length, latent_dim))

    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    a = Input(shape=(max_decoder_seq_length,vec_leng,))
    decoder_outputs, state_h, state_c = decoder_lstm(a, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]

    attn_inf_out, attn_inf_states = attn_layer([encoder_states_, decoder_outputs])
    decoder_inf_concat = Concatenate(axis=-1)([decoder_outputs, attn_inf_out])
    decoder_inf_pred = TimeDistributed(decoder_dense)(decoder_inf_concat)

    decoder_model = Model(
        [encoder_states_] + decoder_states_inputs + [a],
        [decoder_inf_pred, attn_inf_states] + decoder_states, name='decoder_model')
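The model is compiled and trained following the linked Keras example (my exact call isn't shown here; this is a sketch in the style of that example, with hypothetical batch_size and epochs):

    model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
    model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
              batch_size=64, epochs=100, validation_split=0.2)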

Here is the training output (screenshot not reproduced; it shows the loss and the crossentropy going negative).

What is the reason I get these negative values, and how can I fix them?

1 Answer

Tou You:

You get negative loss values because your target vectors are not valid for this loss function: with categorical crossentropy, every element of a one-hot target vector must be a 0 or 1 integer. Your `decoder_target_data` is filled with FastText embedding vectors, whose components are arbitrary real numbers (including negatives), and categorical crossentropy `-sum(y_true * log(y_pred))` goes negative as soon as `y_true` has negative components.
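To see this concretely, here is a minimal, self-contained sketch of the formula Keras applies per timestep (the numbers are made up for illustration):

    import numpy as np

    def categorical_crossentropy(y_true, y_pred):
        # the formula Keras evaluates per timestep
        return -np.sum(y_true * np.log(y_pred))

    y_pred = np.array([0.2, 0.5, 0.3])        # a softmax output
    one_hot = np.array([0.0, 1.0, 0.0])       # valid one-hot target
    embedding = np.array([-0.4, 0.9, -0.1])   # FastText-like target

    print(categorical_crossentropy(one_hot, y_pred))    # ~0.69, positive
    print(categorical_crossentropy(embedding, y_pred))  # ~-0.14, negative

You can confirm this on your own data: `decoder_target_data.min()` will be negative for any real FastText model. The fix is to keep the embeddings on the input side only and make the decoder predict token indices. A sketch, assuming a `word_index` dict that maps each target word to an integer id (not shown in your post):

    num_decoder_tokens = len(word_index) + 1  # reserve id 0 for padding
    decoder_target_data = np.zeros(
        (input_texts_len, max_decoder_seq_length, num_decoder_tokens),
        dtype='float32')
    for i, target_text in enumerate(target_texts):
        for t, word in enumerate(target_text):
            if t > 0:
                # one-hot target, shifted one timestep ahead of the input
                decoder_target_data[i, t - 1, word_index[word]] = 1.0

and size the output layer over the vocabulary instead of the embedding dimension:

    decoder_dense = Dense(num_decoder_tokens, activation='softmax')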