Can the output of a DDPG policy network be a probability distribution instead of a single action value?


We know that DDPG is a deterministic policy gradient method, so the output of its policy network should be a single concrete action. However, I tried letting the policy network output a probability distribution over several actions instead: the output has length greater than one, each entry is the probability of one action, and the entries sum to 1. The output looks like that of a stochastic policy gradient method, but the gradients are computed and the network is updated in the DDPG way. In the end the results looked quite good, but I don't understand why it works, since the output form isn't what DDPG requires. A sketch of the setup is below.
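For concreteness, here is a minimal PyTorch sketch of the setup described in the question: the actor ends in a softmax, so its output is a probability vector, and it is updated "in a DDPG way" by ascending the critic's value of that output. All names, sizes, and the use of PyTorch are illustrative assumptions, not part of the original question.

    # Illustrative sketch (assumed names and shapes): softmax actor updated with a
    # DDPG-style deterministic gradient through the critic.
    import torch
    import torch.nn as nn

    n_states, n_actions = 8, 4

    # Actor outputs a probability vector over n_actions (sums to 1 per state).
    actor = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(),
                          nn.Linear(64, n_actions), nn.Softmax(dim=-1))
    # Critic takes the state and the full probability vector as its "action" input.
    critic = nn.Sequential(nn.Linear(n_states + n_actions, 64), nn.ReLU(),
                           nn.Linear(64, 1))

    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

    s = torch.randn(32, n_states)              # a batch of states (dummy data)
    probs = actor(s)                            # probability vector per state
    q = critic(torch.cat([s, probs], dim=-1))   # Q(s, probs)

    actor_loss = -q.mean()                      # DDPG-style actor update: maximize Q
    actor_opt.zero_grad()
    actor_loss.backward()                       # gradient flows back through the softmax
    actor_opt.step()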


1 Answer

Answered by Simon:

It would work if you also include the gradient with respect to the distribution; otherwise it only works by chance.

If you do something like

  • logits = nn(s)
  • probs = softmax(logits)
  • then backprop through the softmax and back into nn

Then this is the regular stochastic policy gradient with a softmax distribution, which was very common before the deterministic policy gradient was introduced (and is still used sometimes); see the sketch below.
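To make the contrast explicit, here is a brief sketch of the stochastic (softmax) policy gradient the answer refers to, again in PyTorch with assumed names and placeholder returns; the gradient flows through the log-softmax back into the network rather than through a critic as in the DDPG-style update above.

    # Illustrative sketch (assumed names): REINFORCE-style update with a softmax policy.
    import torch
    import torch.nn as nn

    n_states, n_actions = 8, 4
    net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(),
                        nn.Linear(64, n_actions))          # outputs logits
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    s = torch.randn(32, n_states)                           # a batch of states (dummy data)
    logits = net(s)
    dist = torch.distributions.Categorical(logits=logits)   # softmax distribution over actions
    a = dist.sample()                                        # sample discrete actions
    ret = torch.randn(32)                                    # placeholder returns/advantages

    # Stochastic policy gradient: backprop through log-softmax back into the network.
    loss = -(dist.log_prob(a) * ret).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()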