I am trying to build a 1D convolutional neural network with residual connections and batch normalization in Keras, based on the paper Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks. This is the code so far:

from keras.layers import (Input, Conv1D, BatchNormalization, ReLU,
                          Dropout, MaxPooling1D, Flatten, Dense, add)
from keras.models import Model

# define model
x = Input(shape=(time_steps, n_features))

# First Conv / BN / ReLU layer
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(x)
y = BatchNormalization()(y)
y = ReLU()(y)

# Shortcut branch: subsample the activations with max pooling
shortcut = MaxPooling1D(pool_size=n_pool)(y)

# First residual block
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
y = BatchNormalization()(y)
y = ReLU()(y)
y = Dropout(rate=drop_rate)(y)
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
# Add residual (shortcut)
y = add([shortcut, y])

# Repeated residual blocks
for k in range(2, 3):  # smaller network for testing

    shortcut = MaxPooling1D(pool_size=n_pool)(y)
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
    y = add([shortcut, y])

# Final BN / ReLU, then the classifier head
z = BatchNormalization()(y)
z = ReLU()(z)
z = Flatten()(z)
z = Dense(64, activation='relu')(z)
predictions = Dense(classes, activation='softmax')(z)

model = Model(inputs=x, outputs=predictions)

# Compiling
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])

# Fitting
model.fit(train_x, train_y, epochs=n_epochs, batch_size=n_batch)

And this is the graph of a simplified version of the model I am trying to build.

The model described in the paper uses an increasing number of filters:

The network consists of 16 residual blocks with 2 convolutional layers per block. The convolutional layers all have a filter length of 16 and have 64k filters, where k starts out as 1 and is incremented every 4-th residual block. Every alternate residual block subsamples its inputs by a factor of 2, thus the original input is ultimately subsampled by a factor of 2^8. When a residual block subsamples the input, the corresponding shortcut connections also subsample their input using a Max Pooling operation with the same subsample factor.
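
If I am reading that correctly, the per-block schedule would be something like the sketch below (a quick illustration of my understanding; the exact block at which k first increments depends on the paper's indexing, which I am guessing here):

for block in range(16):
    filters = 64 * (1 + block // 4)          # 64*k, with k incremented every 4th block
    subsample = 2 if block % 2 == 1 else 1   # every alternate block subsamples by 2
    print(block, filters, subsample)         # 8 subsampling blocks -> total factor 2**8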

But I can only make it work if I use the same number of filters in every Conv1D layer, with k=1, strides=1 and padding='same', and without applying any MaxPooling1D. Any change to these parameters causes a tensor shape mismatch, and the model fails to build with the following error:

ValueError: Operands could not be broadcast together with shapes (70, 64) (70, 128)

Does anyone have an idea how to fix this shape mismatch and make it work?

In addition, if the input has more than one channel (or feature), the mismatch is even worse! Is there a way to deal with more than one channel?

1 Answer

Answer by Ankit Paliwal:

The tensor shape mismatch is happening in the add([shortcut, y]) layer. The MaxPooling1D layer on the shortcut halves its time-steps (that is the default, which you can change with the pool_size parameter), while the residual branch does not reduce its time-steps by the same amount. You should apply strides=2 with padding='same' in one of the Conv1D layers of the residual branch (preferably the last one) before adding shortcut and y.
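
For example, a single block could look like the sketch below (n_filters, n_kernel and drop_rate are placeholders from your code):

# One residual block whose two branches stay shape-compatible.
shortcut = MaxPooling1D(pool_size=2)(y)     # time-steps: T -> T/2

y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=1, padding='same')(y)
y = BatchNormalization()(y)
y = ReLU()(y)
y = Dropout(rate=drop_rate)(y)
# strides=2 subsamples the residual branch by the same factor as the pooling
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=2, padding='same')(y)

y = add([shortcut, y])                      # both branches are now (T/2, n_filters)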

For reference, you can check out the ResNet code here: Keras-applications-github
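
Regarding the filter mismatch in your error, (70, 64) vs (70, 128): when a block increases the number of filters, the shortcut's channel count also has to be matched. The Keras ResNet code linked above does this with a kernel_size=1 convolution projection on the shortcut; below is a 1D sketch of that idea (again, n_filters, n_kernel, drop_rate and k are placeholders):

# Block that grows the filter count: project the shortcut with a
# kernel_size=1 convolution so its channels match the residual branch.
shortcut = MaxPooling1D(pool_size=2)(y)                                    # match time-steps
shortcut = Conv1D(filters=n_filters * k, kernel_size=1, padding='same')(shortcut)  # match channels

y = BatchNormalization()(y)
y = ReLU()(y)
y = Dropout(rate=drop_rate)(y)
y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=1, padding='same')(y)
y = BatchNormalization()(y)
y = ReLU()(y)
y = Dropout(rate=drop_rate)(y)
y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=2, padding='same')(y)

y = add([shortcut, y])

This also covers the multi-channel question: the very first Conv1D already maps the n_features input channels to n_filters, so only the shortcuts inside the network need this projection.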