I'm trying to improve a DCGAN by giving the generator a batch of edge images to start with. I'm having trouble with the array dimensions: I want my edge batch to have the same shape as np.random.randn(64, 128).
Here is some of my code:
import numpy as np
import cv2
from skimage.color import rgb2gray  # assuming skimage's rgb2gray here
from tensorflow.keras.datasets import cifar10

def load_dataset():
    """Load the CIFAR-10 training images, scaled to [-1, 1]."""
    (X, _), (_, _) = cifar10.load_data()
    X = (X - 127.5) / 127.5  # rescale pixels from [0, 255] to [-1, 1]
    return X.astype('float32')
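For reference, CIFAR-10's training split is 50,000 32x32 RGB images:
X = load_dataset()
print(X.shape)   # (50000, 32, 32, 3)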
def rgb2edge(img):
    """Compute a horizontal Sobel edge map from an RGB image."""
    bw = cv2.Sobel(rgb2gray(img), cv2.CV_64F, 1, 0)
    return bw
# Stack one Sobel edge map per image into a batch (my original loop
# discarded np.concatenate's result, so collect the maps and stack them):
edges = [rgb2edge(X[i]) for i in range(64)]
c = np.stack(edges)  # shape (64, 32, 32)
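A quick check confirms the mismatch between my edge batch and the latent input I want to mimic:
print(c.shape)                           # (64, 32, 32)
print(np.random.randn(64, 128).shape)    # (64, 128)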
def generate_batch_fake(generator, n_latent_dim, n_samples):
    """Draw n_samples latent points and run them through the generator."""
    x_input = np.random.randn(n_samples, n_latent_dim)  # here is where I want to put the array "c"
    X = generator.predict(x_input)
    y = np.zeros((n_samples, 1))  # label the fakes with class 0
    return X, y
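To make the failure concrete, here is a minimal sketch with a hypothetical stand-in model (a single Dense layer with the same 128-dimensional input contract as my generator; only the shapes matter here):
from tensorflow.keras import layers, models

gen = models.Sequential([layers.Dense(32 * 32 * 3, input_shape=(128,))])
gen.predict(np.random.randn(64, 128))  # works: 64 latent vectors of size 128
# gen.predict(c)  # fails: axis -1 is 32, not 128; even c.reshape(64, -1) only gives (64, 1024)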
As you can see, the array "c" has shape (64, 32, 32).
When I try that substitution, I get the following error:
"Input 0 of layer sequential_10 is incompatible with the layer: expected axis -1 of input shape to have value 128 but received input with shape [32, 32]."
Any suggestions on how to change the dimensions would be great!