Can the output of a DDPG policy network be a probability distribution instead of a single action value?

DDPG is a deterministic policy gradient method, so the output of its policy network should be a single, concrete action. However, I tried letting the policy network output a probability distribution over several actions instead: the output has more than one element, each element is the probability of one action, and the probabilities sum to 1. The output looks like that of a stochastic policy gradient method, but the gradients are computed and the network is updated in the usual DDPG way. The results look quite good, but I don't understand why this works, since the output is not what DDPG is supposed to produce.
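For concreteness, here is a minimal PyTorch sketch of the setup described above. The network sizes, names, and the choice to feed the probability vector directly into the critic are my assumptions for illustration, not something the question specifies: the actor ends in a softmax, but it is still updated the DDPG way, by ascending the critic's value of the actor's output.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4  # hypothetical sizes, just for illustration

# Actor: outputs a probability distribution over N_ACTIONS (softmax head)
actor = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS), nn.Softmax(dim=-1),
)

# Critic: scores a (state, action-vector) pair, as in DDPG
critic = nn.Sequential(
    nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

states = torch.randn(32, STATE_DIM)      # dummy batch of states
probs = actor(states)                    # probability vector instead of a single action
q_values = critic(torch.cat([states, probs], dim=-1))

# DDPG-style actor update: maximize Q of the actor's output,
# i.e. backpropagate through the critic into the softmax output.
actor_loss = -q_values.mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
```

Whether such a setup behaves like DDPG or like a stochastic policy gradient depends on how the probability vector is turned into an action and where the gradient flows, which is what the answer below addresses.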
304 views · Asked by JinZ

1 answer:
It would work if you also include the gradient with respect to the distribution; otherwise it only works by chance.

If you do something like sampling the action from the softmax output and weighting the gradient of its log-probability by the return (or the critic's estimate of it), then this is the regular stochastic policy gradient with a softmax distribution, which was very common before the deterministic policy gradient appeared (and is still used sometimes).
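To make the distinction concrete, here is a minimal sketch of the stochastic, softmax-based update the answer refers to. The sizes, the dummy `returns` tensor, and the use of PyTorch are my assumptions for illustration. Unlike the DDPG update, which backpropagates the critic's gradient through the actor's deterministic output, this one samples an action from the distribution and pushes the gradient through log πθ(a|s).

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

STATE_DIM, N_ACTIONS = 8, 4  # illustrative sizes

# Policy with a softmax head: pi(a|s) is a categorical distribution.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS), nn.Softmax(dim=-1),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(32, STATE_DIM)   # dummy batch of states
returns = torch.randn(32)             # stand-in for Q(s, a) or the sampled return

dist = Categorical(probs=policy(states))
actions = dist.sample()               # the action is *sampled* from the distribution

# Stochastic policy gradient (REINFORCE-style): the gradient flows through
# log pi(a|s), i.e. "the gradient with respect to the distribution".
loss = -(dist.log_prob(actions) * returns).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The contrast with the DDPG-style actor update sketched under the question is exactly the "gradient with respect to the distribution" the answer mentions: here the gradient flows through log πθ(a|s) of a sampled action rather than through the critic's value of a deterministic output.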