RNN encoder-decoder model generates empty output

I am training an RNN model with two input sequences, where the RNN has an encoder and a decoder. Training completes without any problem, but when I try to retrieve the predicted word for a sequence, the output is an empty string or list. What should I do?

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding, concatenate
from keras.utils import to_categorical
import numpy as np
import random

# Function to generate training data with targeted text
def generate_text_data(samples, max_seq_length, targeted_texts):
    texts1 = ["This is an example sentence." for _ in range(samples)]
    texts21 = [random.randint(1, 100) for _ in range(50)]
    texts2 = []

    for i in texts21:
        texts2.append(str(i))

    texts1 = [
    "The sun sets behind the mountains, casting a warm glow across the valley.",
    "In the quiet library, the only sound was the rustling of pages turning.",
    "Innovations in technology continue to shape the way we live and work.",
    "The aroma of freshly brewed coffee wafted through the air, awakening the senses.",
    "An ancient oak tree stood tall in the center of the enchanted forest.",
    "Raindrops danced on the window pane, creating a soothing melody.",
    "The astronaut gazed in awe at the Earth from the vantage point of space.",
    "Laughter echoed through the park as children played on the swings.",
    "The detective carefully examined the clues, piecing together the mystery.",
    "Fields of vibrant wildflowers stretched out as far as the eye could see.",
    "Waves crashed against the rocky shore, creating a symphony of nature.",
    "The chef skillfully crafted a culinary masterpiece, delighting the diners.",
    "As the plane took off, passengers marveled at the world shrinking below them.",
    "The ancient ruins told the story of a civilization long forgotten.",
    "A gentle breeze whispered through the leaves of the willow tree.",
    "The painter dipped the brush into vibrant hues, creating a masterpiece on canvas.",
    "A shooting star streaked across the night sky, leaving a trail of wonder.",
    "The hiker reached the summit, greeted by breathtaking panoramic views.",
    "Music filled the concert hall, captivating the audience with its melody.",
    "The scholar delved into dusty volumes, seeking knowledge from centuries past.",
    "Lightning illuminated the stormy sky, followed by the distant rumble of thunder.",
    "A cozy fireplace crackled, casting a warm glow in the rustic cabin.",
    "The scientist made a groundbreaking discovery, altering the course of research.",
    "Autumn leaves crunched beneath the footsteps of those strolling in the park.",
    "Time seemed to stand still as the couple exchanged vows on their wedding day.",
    "The aroma of freshly baked bread wafted from the neighborhood bakery.",
    "A solitary lighthouse stood tall against the backdrop of the stormy sea.",
    "The playwright penned a riveting script that brought audiences to tears.",
    "The telescope revealed distant galaxies, expanding the scope of the universe.",
    "A rainbow arched across the sky, painting a colorful bridge between clouds.",
    "The athlete crossed the finish line, breaking the record with sheer determination.",
    "Night fell, and the city lights sparkled like a sea of diamonds.",
    "The gardener tenderly cared for the blossoming roses in the botanical garden.",
    "A gentle stream meandered through the meadow, reflecting the blue sky.",
    "The sculptor chiseled away at the marble, revealing the essence of his creation.",
    "Snowflakes gently blanketed the town, creating a winter wonderland.",
    "The actor delivered a powerful monologue that resonated with the audience.",
    "The archaeologist unearthed artifacts that shed light on ancient civilizations.",
    "A chorus of crickets serenaded the night, creating a symphony of nature.",
    "The diver explored the vibrant coral reefs, encountering a kaleidoscope of marine life.",
    "The novelist weaved a tale of adventure that transported readers to distant lands.",
    "The photographer captured a fleeting moment, freezing it in time.",
    "A rainbow of hot air balloons dotted the sky during the annual festival.",
    "The entrepreneur launched a startup, aiming to revolutionize the industry.",
    "A comet streaked across the celestial expanse, leaving a trail of cosmic beauty.",
    "The astronomer discovered a new celestial body, expanding our understanding of the cosmos.",
    "The poet penned verses that echoed the beauty of nature and the human spirit.",
    "The mountain climber scaled the towering peak, conquering a personal challenge.",
    "The artist molded clay into intricate sculptures, each telling a unique story.",
    "The children giggled as they chased butterflies in the sun-drenched meadow."
    ]

    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(texts1 + texts2)

    word1_sequences = tokenizer.texts_to_sequences(texts1)
    word2_sequences = tokenizer.texts_to_sequences(texts2)

    word1_sequences = pad_sequences(word1_sequences, maxlen=max_seq_length, padding='post')
    word2_sequences = pad_sequences(word2_sequences, maxlen=max_seq_length, padding='post')

    # Convert targeted_texts to sequences and pad them
    targeted_sequences = tokenizer.texts_to_sequences(targeted_texts)
    targeted_sequences = pad_sequences(targeted_sequences, maxlen=max_seq_length, padding='post')

    # Decoder input and target sequences
    decoder_input_data = targeted_sequences[:, :]
    decoder_target_data = targeted_sequences[:, :]   # intended to exclude the first word, but this slice keeps everything

    # One-hot encode the targets over the full vocabulary (shape: samples, max_seq_length, vocab)
    decoder_target_data = to_categorical(decoder_target_data, num_classes=len(tokenizer.word_index) + 1)

    return [word1_sequences, word2_sequences, decoder_input_data], decoder_target_data, tokenizer

# Example parameters
vocab_size = 10000
latent_dim = 256
max_seq_length = 20  # Adjust based on your text data
samples = 1000

# Example targeted text data
targeted_texts = ["apple", "banana", "carrot", "dog", "elephant", "flower", "guitar", "happiness", "ice cream", "jazz",
                  "kangaroo", "lemon", "mountain", "notebook", "ocean", "puzzle", "quasar", "rainbow", "sunshine",
                  "tiger", "apple", "banana", "carrot", "dog", "elephant", "flower", "guitar", "happiness", "ice cream",
                  "jazz", "kangaroo", "lemon", "mountain", "notebook", "ocean", "puzzle", "quasar", "rainbow", "sunshine",
                  "tiger", "apple", "banana", "carrot", "dog", "elephant", "flower", "guitar", "happiness", "ice cream", "jazz"]

# Generate training data with targeted text
input_data, target_data, tokenizer = generate_text_data(samples, max_seq_length, targeted_texts)

output_shape = len(tokenizer.word_index) + 1


def build_model(vocab_size, latent_dim, output_shape, max_seq_length):
    # Define the input layers for the encoder
    input_word1 = Input(shape=(max_seq_length,))
    input_word2 = Input(shape=(max_seq_length,))

    # Embedding layer to convert token indices to dense vectors
    embedding_layer = Embedding(vocab_size, latent_dim)

    # Apply the embedding to both input sequences
    embedded_word1 = embedding_layer(input_word1)
    embedded_word2 = embedding_layer(input_word2)

    # Encode each input sequence with its own LSTM
    lstm1 = LSTM(latent_dim, return_state=True)
    _, state_h1, state_c1 = lstm1(embedded_word1)

    lstm2 = LSTM(latent_dim, return_state=True)
    _, state_h2, state_c2 = lstm2(embedded_word2)

    # Concatenate the hidden and cell states from the two encoders
    state_h = concatenate([state_h1, state_h2])
    state_c = concatenate([state_c1, state_c2])

    # Map the concatenated state to the desired size
    state_h = Dense(latent_dim)(state_h)
    state_c = Dense(latent_dim)(state_c)

    encoder_states = [state_h, state_c]

    # Set up the decoder
    decoder_inputs = Input(shape=(None,))
    decoder_embedding = embedding_layer(decoder_inputs)

    decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)

    decoder_dense = Dense(output_shape, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)

    # Define the model
    model = Model([input_word1, input_word2, decoder_inputs], decoder_outputs)

    return model


# Build the model
model = build_model(vocab_size, latent_dim, output_shape, max_seq_length)

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit([input_data[0], input_data[1], input_data[2]], target_data, epochs=10, batch_size=32)

# Function to preprocess input sequence
def preprocess_input_sequence(input_sequence, max_seq_length, tokenizer):
    input_sequence = tokenizer.texts_to_sequences([input_sequence])
    input_sequence = pad_sequences(input_sequence, maxlen=max_seq_length, padding='post')
    return input_sequence

# Example input sequences
input_sequence1 = "The sun"
input_sequence2 = "sets"

# Preprocess the input sequences
preprocessed_input_sequence1 = preprocess_input_sequence(input_sequence1, max_seq_length, tokenizer)
preprocessed_input_sequence2 = preprocess_input_sequence(input_sequence2, max_seq_length, tokenizer)

# Repeat the preprocessed input for the number of samples
input_data_repeated = [np.repeat(preprocessed_input_sequence1, samples, axis=0),
                       np.repeat(preprocessed_input_sequence2, samples, axis=0)]

# Initialize the decoder input with zeros
decoder_input_data = np.zeros((samples, max_seq_length))

# Generate predictions using the trained model
predictions = model.predict([input_data_repeated[0], input_data_repeated[1], decoder_input_data])

# Get the index of the predicted word for each sample
predicted_word_indices = np.argmin(predictions, axis=-1)

# Convert the indices back to words using the tokenizer
predicted_words = [word for word, index in tokenizer.word_index.items() if index == predicted_word_indices[0, 0]]

print("Input Sequence 1:", input_sequence1)
print("Input Sequence 2:", input_sequence2)
print("Predicted Word:", predicted_words)

predicted_words is showing an empty list.
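
For reference, this is a minimal check I would add to narrow the problem down (a sketch that assumes the predictions, predicted_word_indices, and tokenizer objects from the code above are still in scope; idx and index_to_word are just illustrative local names). It prints the raw predicted index for the first sample and timestep and whether that index exists in tokenizer.word_index; Keras reserves index 0 for padding, so a predicted index of 0 has no matching entry and the list comprehension above then returns nothing:

# Inspect the raw prediction for the first sample and first timestep
print("Prediction shape:", predictions.shape)  # expected (samples, max_seq_length, vocab)
idx = int(predicted_word_indices[0, 0])
print("Predicted index:", idx)

# Build a reverse lookup; index 0 (padding) never appears in word_index
index_to_word = {index: word for word, index in tokenizer.word_index.items()}
print("Word for this index:", index_to_word.get(idx, "<no entry in word_index>"))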
