I trained a network using supervised learning and saved its weights. Now I have created a new network with additional layers (the new network contains the same layers as the old network plus extra ones). I need to transfer all of the old network's weights into the new network and leave the newly added layers randomly initialized. However, when I use the new model it produces random predictions, as if the weights were randomly initialized. The weights.h5 file for the old network also does not have the same size on disk as the weights.h5 file for the new network.

from keras import layers, models, optimizers
from keras import backend as K
from keras.models import Model, load_model
import keras.losses
import numpy as np

model = load_model("nvidia_41_named.h5")  # the old network
model.load_weights("nvidia_41_named_weights.h5")  # the old weights

actor = load_model("actormodel.h5")  # the new network

# copy weights layer by layer for every layer name that exists in both models
for layer in model.layers:
    name = layer.name
    for lay in actor.layers:
        if lay.name == name:
            print(lay.name)
            weights = model.get_layer(name).get_weights()
            actor.get_layer(name).set_weights(weights)
            print(lay.name, 'correctly transferred')
            print("========================================")

The schematic of the two networks: this photo describes the relation between the old and the new network.

1 Answer


I had the same question earlier today but cannot find the Stack Overflow answer anymore. If you name the layers in your old model and then call model.load_weights("./weights_cnn.hdf5", by_name=True) on the new model (the by_name flag is important!), it should load the correct weights into the layers with matching names and leave the unmatched layers randomly initialized.

cnn1 = tf.keras.layers.Conv2D(filters, kernel, padding="same", activation='relu', name='conv_1_j')(inp_layer)
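
Putting it together, a minimal sketch of the by-name approach (the input shape, the file name old_weights.h5, and all layer names other than conv_1_j are assumptions; the only requirement is that the layers to be reused carry the same name in both models):

import tensorflow as tf

# old model: the layers to be reused are given explicit names
inp_layer = tf.keras.layers.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation='relu', name='conv_1_j')(inp_layer)
out = tf.keras.layers.Dense(10, activation='softmax', name='dense_old')(tf.keras.layers.Flatten()(x))
old_model = tf.keras.Model(inp_layer, out)
old_model.save_weights("old_weights.h5")

# new model: the same named layers plus additional ones
inp2 = tf.keras.layers.Input(shape=(64, 64, 3))
y = tf.keras.layers.Conv2D(32, 3, padding="same", activation='relu', name='conv_1_j')(inp2)
y = tf.keras.layers.Conv2D(64, 3, padding="same", activation='relu', name='conv_2_new')(y)  # extra layer
out2 = tf.keras.layers.Dense(10, activation='softmax', name='dense_new')(tf.keras.layers.Flatten()(y))
new_model = tf.keras.Model(inp2, out2)

# by_name=True matches saved weights to layers by name; layers without a match
# (conv_2_new, dense_new) keep their random initialization
new_model.load_weights("old_weights.h5", by_name=True)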