PyBrain QTable (ActionValueTable) not changing

I followed a blog post (here) in which the author wrote a program with PyBrain to learn how to play Blackjack. He explains that he uses a Q-table initialized with zeros, and at the very end he shows his results (the final Q-table) after the program has played 300 games.

My problem is that when I run his code myself, copied exactly, the Q-table never updates. It remains a table of zeros forever, even after 1000 hands.
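
For reference, this is the kind of tabular Q-learning update I expected each rewarded hand to apply to the table (a minimal sketch with made-up example values, not PyBrain's internal code; alpha and gamma match the Q(0.5, 0.0) call below):

import numpy as np

alpha, gamma = 0.5, 0.0                              # same learning rate / discount as Q(0.5, 0.0)
q = np.zeros((21, 2))                                # same shape as the ActionValueTable
state, action, reward, next_state = 14, 0, 1.0, 18   # one made-up transition
q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
print q[state, action]                               # 0.5, so even a single reward should be visible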

Below is my version of his code, in which I have only added a print statement for the Q-Table.

from blackjacktask import BlackjackTask
from blackjackenv import BlackjackEnv
from pybrain.rl.learners.valuebased import ActionValueTable
from pybrain.rl.agents import LearningAgent
from pybrain.rl.learners import Q
from pybrain.rl.experiments import Experiment
from pybrain.rl.explorers import EpsilonGreedyExplorer    
av_table = ActionValueTable(21, 2)
av_table.initialize(0.)
learner = Q(0.5, 0.0)
learner._setExplorer(EpsilonGreedyExplorer(0.0))
agent = LearningAgent(av_table, learner)
env = BlackjackEnv()
task = BlackjackTask(env)
experiment = Experiment(task, agent)
c=0
while True:
    c+=1
    print "Hand "+str(c)+"."
    experiment.doInteractions(1)
    agent.learn()
    print av_table.params.reshape(21,2)
    agent.reset()
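
For debugging, the parameters can be snapshotted around a single hand to check whether any entry changes at all (a sketch built on top of the setup above, not something from the blog post):

before = av_table.params.copy()
experiment.doInteractions(1)
agent.learn()
agent.reset()
changed = (av_table.params != before).sum()
print str(changed) + " table entries changed this hand"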

Other than adding the print statement, I moved blackjacktask and blackjackenv into the same folder as the learning program, simply because they refused to import when they were in the pybrain folder. My program raises no import errors, and neither do the modules, so I doubt this is the problem.
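
(An alternative to moving the files would have been to add their folder to sys.path before the imports; the path below is only a placeholder:)

import sys
sys.path.insert(0, "/path/to/the/blog/modules")   # placeholder location of blackjacktask.py and blackjackenv.py

from blackjacktask import BlackjackTask
from blackjackenv import BlackjackEnv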

It is possible that the problem lies in the blog poster's modules, which can be found in his post (here).

There are 0 answers