UpSampling1D in Keras insanely slow?

I'm trying to build an autoencoder in Keras. Everything works fine until I add the UpSampling1D layers: when I run the code and try to get a model summary, the program freezes indefinitely. My input and output size is 220500. The convolutional layers have no problem with this and compile almost instantly, but the upsampling layers become insanely slow once the number of samples to upsample reaches about 50,000, and the build essentially freezes. Is there a way around this, or is it an inherent limitation of the upsampling layer? And why would this be? How can a convolution handle much larger sizes than upsampling?
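For reference, here is a minimal sketch that isolates a single UpSampling1D at roughly this size (assuming Keras 2.x with the TensorFlow backend; whether one layer on its own is enough to trigger the freeze is my assumption, not something I've confirmed):

from keras.models import Model
from keras.layers import Input, UpSampling1D

# Assumption: one large UpSampling1D may be enough to show the slowdown.
seq_len = 110250  # half of the 220500 input, i.e. the length before the final upsample
inp = Input(shape=(seq_len, 1))
out = UpSampling1D(2)(inp)  # doubles the temporal dimension back to 220500
model = Model(inp, out)
model.summary()  # building/summarizing is where the full model hangs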

Here's my actual code:

from keras.models import Model
from keras.layers import Input, Conv1D, BatchNormalization, UpSampling1D

def autoencoder(input_dim):
    input_layer = Input(shape=(input_dim, 1))

    # Encoder: 16 strided convolutions, each halving the sequence length
    encode = Conv1D(filters=1, kernel_size=10, strides=2, activation="relu", padding='same')(input_layer)
    encode = BatchNormalization()(encode)
    for i in range(15):
        encode = Conv1D(filters=1, kernel_size=10, strides=2, activation="relu", padding='same')(encode)
        encode = BatchNormalization()(encode)

    # Decoder: 15 UpSampling1D steps, each doubling the sequence length
    decode = Conv1D(filters=1, kernel_size=10, strides=1, activation="relu", padding='same')(encode)
    decode = UpSampling1D(2)(decode)
    decode = BatchNormalization()(decode)
    for i in range(14):
        decode = Conv1D(filters=1, kernel_size=10, strides=1, activation="relu", padding='same')(decode)
        decode = UpSampling1D(2)(decode)
        decode = BatchNormalization()(decode)

    decode = Conv1D(filters=1, kernel_size=10, strides=1, activation="sigmoid", padding='same')(decode)
    autoencoder_model = Model(input_layer, decode)
    autoencoder_model.compile(optimizer='adadelta', loss='binary_crossentropy')
    autoencoder_model.summary()
    return autoencoder_model
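For completeness, this is roughly how the function gets called (a minimal sketch; 220500 matches the input/output size stated above):

# The summary() call inside the function is where the program freezes.
model = autoencoder(220500)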
