I want to use gradient descent to minimize the variable "loss", but I don't want it to be able to update EVERY variable, just a few of them (there are some variables in the same graph that I would like to keep constant for now).
So I listed the variables that I want to update in var_list, in the last line of this code snippet:
conv_weights = [0]*CONVOLUTION_COUNT # couldn't figure out how to initialize this better
conv_biases = [0]*CONVOLUTION_COUNT
for layer in range(0, CONVOLUTION_COUNT):
    inSize = max(NUM_CHANNELS, 5 * layer)
    outSize = 5 * (layer + 1)
    conv_weights[layer] = tf.Variable(tf.truncated_normal([FILTER_SIZE, inSize, outSize], stddev=0.1, seed=SEED, dtype=data_type()))
    conv_biases[layer] = tf.Variable(tf.constant(0.1, shape=[outSize], dtype=data_type()))
fc1_weights = tf.Variable(tf.truncated_normal([20*35, 512],stddev=0.1,seed=SEED,dtype=data_type()))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512], dtype=data_type()))
fc2_weights = tf.Variable(tf.truncated_normal([512, NUM_OUTPUTS],stddev=0.1,seed=SEED,dtype=data_type()))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_OUTPUTS], dtype=data_type()))
...
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss=loss, global_step=batch,
var_list=[conv_weights,conv_biases,fc1_weights,fc1_biases,fc2_weights,fc2_biases])
The error I am getting:
if v.op.type == "ReadVariableOp":
AttributeError: 'list' object has no attribute 'op'
I'm a bit confused about why the elements in the list would need to have operations, because I'm just updating the weights of the neural network. These are just variables, right? Does anybody see what I'm doing wrong, or is there another approach I should take?
Also, all of this code works correctly when I don't specify var_list.
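For reference, here is a minimal sketch of the kind of call that does work for me when I leave var_list out. The graph below is made up purely for illustration (the placeholder shapes, names, and learning rate are not from my real code), and it assumes TF 1.x:

import tensorflow as tf  # TF 1.x

# Tiny made-up graph, just to show the optimizer call that works without var_list.
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.placeholder(tf.float32, shape=[None, 1])

w = tf.Variable(tf.truncated_normal([4, 1], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[1]))

loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
batch = tf.Variable(0, trainable=False)  # global step counter

# No var_list: every trainable variable gets updated, and this runs without errors.
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss, global_step=batch)

It's only when I add the var_list argument, as in the snippet above, that the AttributeError appears.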