I have a fairly simple/standard U-Net architecture, which looks like the following:
import numpy as np
from tensorflow.keras import layers

radar_input_layer = layers.Input(shape=(tdata.shape[1],tdata.shape[2],tdata.shape[3]))
print(radar_input_layer.shape)
c1 = layers.Conv2D(neurons, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(radar_input_layer)
c1 = layers.Dropout(0.5)(c1)
c1 = layers.Conv2D(neurons, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c1)
p1 = layers.MaxPooling2D((2,2))(c1)
print(p1.shape)
c2 = layers.Conv2D(neurons * 2, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(p1)
c2 = layers.Dropout(0.5)(c2)
c2 = layers.Conv2D(neurons * 2, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c2)
p2 = layers.MaxPooling2D((2,2))(c2)
print(p2.shape)
c3 = layers.Conv2D(neurons * 4, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(p2)
c3 = layers.Dropout(0.5)(c3)
c3 = layers.Conv2D(neurons * 4, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c3)
p3 = layers.MaxPooling2D((2,2))(c3)
print(p3.shape)
c4 = layers.Conv2D(neurons * 8, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(p3)
c4 = layers.Dropout(0.5)(c4)
c4 = layers.Conv2D(neurons * 8, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c4)
p4 = layers.MaxPooling2D((2,2))(c4)
print(p4.shape)
c5 = layers.Conv2D(neurons * 16, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(p4)
c5 = layers.Dropout(0.5)(c5)
c5 = layers.Conv2D(neurons * 16, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c5)
print(c5.shape)
u1 = layers.Conv2DTranspose(neurons * 8, (2,2), strides=(2,2), padding='same')(c5)
print(u1.shape)
print(c4.shape)
u1 = np.concatenate([u1,c4])
c6 = layers.Conv2D(neurons * 8, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(u1)
c6 = layers.Dropout(0.5)(c6)
c6 = layers.Conv2D(neurons * 8, (3,3), activation='relu', kernel_initializer='he_normal',padding='same')(c6)
...
I have defined my tdata and number of neurons as:
tdata = np.zeros([100,450,552,2])
neurons = 16
This is just a sample test dataset, with channels last (i.e., 100 samples, 450 rows, 552 columns, 2 channels).
The output is as follows:
(?, 225, 276, 16)
(?, 112, 138, 32)
(?, 56, 69, 64)
(?, 28, 34, 128)
(?, 28, 34, 256)
(?, ?, ?, 128)
(?, 56, 69, 128)
Traceback (most recent call last):
ValueError: zero-dimensional arrays cannot be concatenated
Therefore, the problem is the concatenation of u1 and c4. More specifically, u1 does not have a concrete shape: it prints as (?, ?, ?, 128) when it should be (?, 56, 69, 128). Why aren't the dimensions carrying through in this example, and how can this be fixed?
Make sure you have up-to-date versions of Keras and TensorFlow. I got the following output from your code.
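Two things seem to be going on here (a sketch of the reasoning, not a definitive diagnosis). First, np.concatenate is a NumPy function and cannot operate on symbolic Keras tensors; the Keras layer layers.concatenate([u1, c4]) (or layers.Concatenate()([u1, c4])) is what builds a concatenation into the graph. Second, even with that fixed, the shapes themselves won't line up: MaxPooling2D((2,2)) floors odd dimensions, so the stride-2 Conv2DTranspose cannot recover them exactly. The helper below (hypothetical, just to illustrate the arithmetic) reproduces the shapes printed in the question:

```python
# Each MaxPooling2D((2, 2)) floors odd dimensions, so the decoder's
# stride-2 Conv2DTranspose cannot recover them exactly.
def encoder_decoder_shapes(h, w, levels=4):
    """Track (height, width) down the pooling path and back up one step."""
    down = [(h, w)]
    for _ in range(levels):
        h, w = h // 2, w // 2                # pooling floors odd sizes
        down.append((h, w))
    up = (down[-1][0] * 2, down[-1][1] * 2)  # first Conv2DTranspose, stride 2
    return down, up

down, up = encoder_decoder_shapes(450, 552)
print(down)  # [(450, 552), (225, 276), (112, 138), (56, 69), (28, 34)]
print(up)    # (56, 68) -- does not match the (56, 69) skip connection c4
```

So 56 x 68 from the upsampling path cannot be concatenated with the 56 x 69 skip connection. One common workaround is to pad or resize the input so both spatial dimensions are divisible by 2^4 = 16 (e.g., 464 x 560 here); then every pooling step divides evenly and layers.concatenate works at each decoder level.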