Fine tuning weights in DBN


In a Deep Belief Network, I have pretrained the net using CD-1 (one-step contrastive divergence) and stored the resulting weights and biases. Can I now run supervised MLP code with dropout, initialising the weights to those obtained from pretraining? Would that be equivalent to a DBN implemented with dropout fine-tuning?

Answer by kangshiyin (accepted):

"Dropout fine-tuning on a DBN" means exactly "running supervised MLP code with dropout, with the weights initialised to those obtained from pretraining."

So yes, they are equivalent.
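To make the equivalence concrete, here is a minimal sketch of the fine-tuning setup: a small NumPy MLP whose weights are initialised from (hypothetical) pretrained DBN layer weights, with inverted dropout applied to the hidden layer during training. The `pretrained` dictionary, layer sizes, and ReLU activation are illustrative assumptions; in practice you would load the weights saved from your CD-1 pretraining run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weights/biases; in a real run these would be
# loaded from the CD-1 pretrained DBN layers instead of sampled here.
pretrained = {
    "W1": rng.normal(0.0, 0.01, (784, 256)),
    "b1": np.zeros(256),
    "W2": rng.normal(0.0, 0.01, (256, 64)),
    "b2": np.zeros(64),
}

def forward(x, params, dropout_p=0.5, train=True, rng=rng):
    """Forward pass with inverted dropout on the hidden units.

    With train=False the dropout mask is skipped, so no rescaling is
    needed at test time (that is what 'inverted' dropout buys you).
    """
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])  # hidden layer (ReLU)
    if train and dropout_p > 0:
        # Drop each hidden unit with probability dropout_p and rescale
        # the survivors by 1/(1 - dropout_p).
        mask = (rng.random(h.shape) >= dropout_p) / (1.0 - dropout_p)
        h = h * mask
    return h @ params["W2"] + params["b2"]

x = rng.normal(size=(8, 784))        # a batch of 8 inputs
out = forward(x, pretrained)          # training-mode pass with dropout
print(out.shape)                      # (8, 64)
```

From here, fine-tuning is just ordinary supervised backpropagation through this network; the only difference from training an MLP from scratch is the initialisation.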