Getting a ValueError for 2 inputs into a DCGAN with Tensorflow/Keras


So I'm trying to follow the DCGAN guide for image generation on TensorFlow (https://www.tensorflow.org/tutorials/generative/dcgan), and I have the code replicated pretty closely, just changing the dataset to one that I want to use. Whenever I try to train the model I get this error:

ValueError: Layer sequential_1 expects 1 inputs, but it received 2 input tensors. Inputs received: [<tf.Tensor 'images:0' shape=(256, 28, 28, 3) dtype=float32>, <tf.Tensor 'images_1:0' shape=(256,) dtype=int32>]

Specifically, this line in the train_step function is causing the error:

real_output = discriminator(images, training=True)

when it gets called here within the train function:

train(normalizedData, epochs)

The discriminator is defined earlier in the code like this:

def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5,5), strides=(2,2), padding='same', input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5,5), strides=(2,2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model

discriminator = make_discriminator_model()

Here is the rest of that block for context.

@tf.function
def train_step(images):
    noise = tf.random.normal([batch_size, noise_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()

        for image_batch in dataset:
            train_step(image_batch)

        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 seed)

        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix = checkpoint_prefix)

        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))

    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             seed)

def generate_and_save_images(model, epoch, test_input):

    predictions = model(test_input, training=False)

    fig = plt.figure(figsize=(4,4))

    for i in range(predictions.shape[0]):
        plt.subplot(4,4,i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gist_rainbow')
        plt.axis('off')

    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
    
        
train(normalizedData, epochs)

I've seen different variations of this question on here about this ValueError; from what I've gathered, the Sequential model is being passed a list of two tensors instead of a single tensor?
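
For what it's worth, printing the dataset's element_spec (a standard tf.data.Dataset attribute) should confirm whether each batch is a single image tensor or an (images, labels) pair; something like this, where normalizedData is my dataset:

# If this prints a tuple of two TensorSpecs (float32 images and int32 labels),
# the discriminator is being handed two tensors per batch instead of one.
print(normalizedData.element_spec)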

Thank you for your time and any help you can offer.


There are 2 answers

Mr. For Example

The error is telling you that the inputs to discriminator are [<tf.Tensor 'images:0' shape=(256, 28, 28, 3) dtype=float32>, <tf.Tensor 'images_1:0' shape=(256,) dtype=int32>] (two tensors: an image batch and an integer label batch), but the discriminator you defined has input_shape=[28, 28, 1].

Check the images you feed into discriminator at the line real_output = discriminator(images, training=True), and make sure images has the same shape as the discriminator's input_shape, i.e. a batch like (256, 28, 28, 1) rather than (256, 28, 28, 3).
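
For example, assuming normalizedData was built with something like tf.keras.utils.image_dataset_from_directory or from_tensor_slices((images, labels)) (which is what makes it yield 2 tensors per batch), here is a sketch of one way to fix both problems; your variable names may differ:

# Drop the labels so the discriminator receives a single tensor per batch
images_only = normalizedData.map(lambda images, labels: images)

# Convert 3-channel images to 1 channel to match input_shape=[28, 28, 1]...
images_only = images_only.map(lambda images: tf.image.rgb_to_grayscale(images))

# ...or instead keep 3 channels and change the discriminator's first layer to
# input_shape=[28, 28, 3]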

Terence Yang

I faced the same issue in the TensorFlow GAN tutorial (https://www.tensorflow.org/tutorials/generative/dcgan); please use tf.reshape to reshape the dataset:

discriminator(tf.reshape(images, (1, 28, 28, 1)), training=True)

It works for me.
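
If the dataset is batched, the leading dimension can be left for TensorFlow to infer by using -1 instead of hard-coding 1 (a sketch, assuming single-channel 28x28 images as in the MNIST tutorial):

# -1 lets tf.reshape infer the batch size, so the last (smaller) batch also works
real_output = discriminator(tf.reshape(images, (-1, 28, 28, 1)), training=True)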