I stumbled over something odd while trying to recreate what my 1D-CNN was doing. I used numpy's np.convolve to manually calculate the convolution of the input with the filter, and realized that the output of my model and the output of the numpy function differ drastically. Shouldn't the Conv1D layer in principle do the same thing?
Here's a minimal working example:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D
# Build a model consisting of a single Conv1D layer with one filter of length 2 (no bias)
cnn = Sequential()
cnn.add(Conv1D(filters=1, kernel_size=2, strides=1, padding="valid", use_bias=False, input_shape=(10, 1)))
cnn.summary()
# Get the weights of the layer; the Conv1D kernel has shape (kernel_size, channels, filters) = (2, 1, 1)
w = cnn.layers[0].get_weights()[0]
# Create a random input of shape (batch, steps, channels)
my_input = np.random.random([1, 10, 1])
# Create a function to get the output of a layer
get_output = K.function([cnn.layers[0].input],[cnn.layers[0].output])
# Feed the input through the layer and flatten the output, shape (1, 9, 1) -> (9,)
output_mod = get_output([my_input])[0].reshape(-1)
print(output_mod)
# Manually recreate the computation with numpy's convolve ('valid' keeps only positions of full overlap)
output_man = np.convolve(my_input[0, :, 0], w[:, 0, 0], 'valid')
print(output_man)
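# Quantify the mismatch directly; both outputs have length 10 - 2 + 1 = 9
print(np.max(np.abs(output_mod - output_man)))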
# Plot both convolutions
plt.plot(output_man)
plt.plot(output_mod)
plt.show()
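
For reference, this is the computation I had assumed the layer performs: slide the length-2 kernel over the input and take a plain dot product at each position (a minimal sketch of my assumption, with stride 1, no padding and no bias):

# Sliding dot product I expected Conv1D to compute (my assumption, not the documented behaviour)
# 10 input steps and a kernel of length 2 give 10 - 2 + 1 = 9 output positions
expected = np.array([np.dot(my_input[0, i:i+2, 0], w[:, 0, 0]) for i in range(9)])
print(expected)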
