Genetic algorithm and Tetris


I'm creating a Tetris player using genetic algorithms and facing some issues. I've read a lot of related work, but it doesn't give me enough detail on the GA.

The problem is that my agent seems to get stuck very fast. I'm using an evaluation function that covers 4 features: height, covered holes, flatness, and number of cleared rows. I read a paper that uses the same evaluation and whose agent is capable of clearing thousands of rows.

After 600 generations with a population of 100 agents, the best one clears only 260 rows on average, which is lame. All agents play the same piece sequence.

Details of my GA:

generations: 600, population: 100

genes: an array of 4 float values, each between 0 and 1.

Crossover happens with a certain probability; when it does, uniform crossover swaps each gene between the two parents with a certain per-gene probability.
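For concreteness, this is roughly what my crossover does (a minimal sketch, not my actual code; the per-gene swap probability is just a parameter):

```python
import random

def uniform_crossover(parent_a, parent_b, swap_prob=0.5):
    """Swap each gene between two parents independently with probability swap_prob."""
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if random.random() < swap_prob:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b
```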

Mutation occurs with a certain probability; here I've tried 3 different approaches: swap two genes, replace the gene with a random value, or add some noise to the gene.
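Roughly, the three mutation variants look like this (again just a sketch; the rate and the noise scale are example values, not what I actually tuned):

```python
import random

def mutate(genes, rate=0.05, sigma=0.1):
    """Apply one of three mutation styles, gene by gene, with probability `rate`."""
    genes = list(genes)
    for i in range(len(genes)):
        if random.random() >= rate:
            continue
        style = random.choice(("swap", "replace", "noise"))
        if style == "swap":
            j = random.randrange(len(genes))          # swap with another random gene
            genes[i], genes[j] = genes[j], genes[i]
        elif style == "replace":
            genes[i] = random.random()                # fresh random value in [0, 1)
        else:
            genes[i] += random.gauss(0.0, sigma)      # small Gaussian noise
            genes[i] = min(1.0, max(0.0, genes[i]))   # keep the gene inside [0, 1]
    return genes
```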

I have an elite rate of 50%, and I've noticed that some good agents are being selected and giving birth to worse agents, contaminating the population.

Selection is roulette wheel...
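For concreteness, my selection is essentially fitness-proportional, something like this sketch (the final return is just a fallback for floating-point rounding; `random.choices(population, weights=fitnesses)` would do the same thing more compactly):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]
```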

If someone could give me details on the best way to do crossover and mutation, I'd appreciate it!

Thanks, and sorry for the long post!


There are 2 answers

RBarryYoung (best answer)

There seems to be some difference in the evaluation functions. You describe four features:

  1. height,
  2. covered holes,
  3. flatness and
  4. number of cleared rows

However, the paper you reference describes five features:

The function that the agent uses to determine the utility of a board’s state is the weighted linear sum of numerical features calculated from the state. Colin Fahey’s agents used these features: pile-height, the number of closed holes, and the number of wells (Fahey 2003). The features that we added are the number of lines that were just made, and a number that represents how “bumpy” the pile is.

(emphasis mine)

So it appears that you are missing the "wells" feature in your evaluation function and the makeup of the genes.
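For illustration, the utility the paper describes is just a weighted linear sum over those five features; roughly the following, where the feature helpers (pile_height, closed_holes, wells, lines_just_cleared, bumpiness) are placeholders for your own board code and `weights` are the five evolved genes:

```python
def evaluate(board, weights):
    """Weighted linear sum of board features; `weights` are the evolved genes."""
    features = [
        pile_height(board),          # height of the pile
        closed_holes(board),         # empty cells covered by filled cells
        wells(board),                # deep one-cell-wide gaps between columns
        lines_just_cleared(board),   # rows completed by the last placement
        bumpiness(board),            # how uneven adjacent column heights are
    ]
    return sum(w * f for w, f in zip(weights, features))
```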

Louis Ricci

Unlike the paper, you should implement the 'next piece' aspect of the game.

Simulate all possible 'current piece' placements followed by 'next piece' before calculating the 'utility'.

For the sake of performance you can cache the 'next piece' placements for the best 'utility' so that they don't need to be recalculated as 'current piece' placements again.
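Something along these lines (a sketch only; legal_placements, apply_placement, and evaluate stand in for your own board functions):

```python
def best_move(board, current_piece, next_piece, weights):
    """Pick the current-piece placement whose best next-piece follow-up scores highest."""
    best_score, best_placement = float("-inf"), None
    for placement in legal_placements(board, current_piece):
        board_after = apply_placement(board, current_piece, placement)
        # Look one piece ahead: score the best placement of the next piece
        # on the board that results from this current-piece placement.
        lookahead = max(
            (evaluate(apply_placement(board_after, next_piece, p), weights)
             for p in legal_placements(board_after, next_piece)),
            default=float("-inf"),
        )
        if lookahead > best_score:
            best_score, best_placement = lookahead, placement
    return best_placement
```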

While the calculations will be slower, I believe your agents will evolve faster/smarter.