Monte-Carlo Tree Search

From Chessprogramming wiki

Edvard Munch - At the Roulette Table in Monte Carlo [1]

Monte-Carlo Tree Search (MCTS) is a Best-First search algorithm based on random playouts. In conjunction with UCT (Upper Confidence bounds applied to Trees), Monte-Carlo Tree Search has yielded a breakthrough in Computer Go [2], and is also successful in Amazons [3] [4], Lines of Action [5], Havannah [6], Hex [7], Checkers [8] and other games whose positions are hard to evaluate statically, but, until December 2017, when a Google DeepMind team reported on AlphaZero [9], not in Chess [10] [11].

MCTS is based on randomized explorations of the search space. Using the results of previous explorations, the algorithm gradually grows a game tree in memory, and successively becomes better at accurately estimating the values of the most promising moves [12].

Four Phases

MCTS consists of four strategic phases, repeated as long as there is time left [13]; a minimal code sketch of one iteration follows the figure below:

  1. In the Selection phase, the tree is traversed from the root node until it reaches a leaf node that has not yet been added to the tree
  2. The Expansion strategy adds the leaf node to the tree
  3. The Simulation strategy plays moves in self-play until the end of the game. The result is 1, 0, or -1
  4. The Backpropagation strategy propagates the results through the tree

Steps of Monte Carlo Tree Search [14]
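Below is a minimal Python sketch of the four phases, one iteration per loop pass. The game interface (legal_moves(), play(move), is_terminal(), result()) and the select_child parameter are illustrative assumptions, not taken from this page; handling of the playout result's sign for the side to move is omitted for brevity.

import random

class Node:
    def __init__(self, state, move=None, parent=None):
        self.state = state                   # game position at this node
        self.move = move                     # move that led to this node
        self.parent = parent
        self.children = []                   # children already added to the tree
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.score = 0.0                     # sum of playout results

def mcts(root_state, select_child, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded and has children
        while not node.untried and node.children:
            node = select_child(node)
        # 2. Expansion: add one new leaf node to the tree
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), move=move, parent=node)
            node.children.append(child)
            node = child
        # 3. Simulation: random self-play until the end of the game
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(list(state.legal_moves())))
        result = state.result()              # 1, 0 or -1
        # 4. Backpropagation: propagate visit counts and scores up the path
        while node is not None:
            node.visits += 1
            node.score += result
            node = node.parent
    # after the iteration budget, play the most visited root move
    return max(root.children, key=lambda c: c.visits).move

A selection rule such as the UCT formula sketched further below can be passed as select_child.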

Pure Monte-Carlo search

Pure Monte-Carlo search with parameter T means that for each feasible move, T random games are generated. The move with the best average score is played. A game is called “Monte Carlo perfect” when this procedure converges to perfect play for each position as T goes to infinity. However, with limited time per move, increasing T does not guarantee finding a better move [15].
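For illustration, a short Python sketch of pure Monte-Carlo search, using the same hypothetical game interface as the sketch above (legal_moves(), play(), is_terminal(), result() are assumptions, not part of this page):

import random

def pure_monte_carlo(state, T):
    # For each feasible move, generate T random games and average the
    # results; the move with the best average score is played.
    best_move, best_avg = None, float("-inf")
    for move in state.legal_moves():
        total = 0.0
        for _ in range(T):
            s = state.play(move)
            while not s.is_terminal():
                s = s.play(random.choice(list(s.legal_moves())))
            total += s.result()              # 1, 0 or -1
        avg = total / T
        if avg > best_avg:
            best_move, best_avg = move, avg
    return best_move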

UCT

UCT (Upper Confidence bounds applied to Trees) deals with a flaw of plain Monte-Carlo Tree Search: a program may favor a losing move that has only one or a few forced refutations, because against the vast majority of other replies it achieves a better random-playout score than other, better moves [16].
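In the Selection phase, UCT picks the child that maximizes the UCB1 value: the average playout score plus an exploration bonus that grows for rarely visited children. A hedged sketch of such a selection function, usable as the select_child argument of the mcts sketch above (the exploration constant c is a tunable assumption, not taken from this page):

import math

def uct_select(node, c=1.4):
    # UCB1 applied to trees: average score (exploitation) plus an
    # exploration term based on visit counts; assumes every child has
    # been visited at least once, as in the mcts sketch above.
    return max(node.children,
               key=lambda ch: ch.score / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))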

See also

Publications

Forum Posts

External Links

Monte Carlo Tree Search

GitHub - suragnair/alpha-zero-general: A clean and simple implementation of a self-play learning algorithm based on AlphaGo Zero (any game, any framework!)

Monte Carlo Misc

Monte Carlo algorithm
Monte Carlo method
Monte Carlo
Monte Carlo Casino
feat. Raúl Midón, Roy Hargrove and the Monte-Carlo Philharmonic Orchestra, November 29, 2008, Monte-Carlo Jazz Festival

References

  1. Edvard Munch, At the Roulette Table in Monte Carlo, 1893, Edvard Munch from Wikipedia
  2. Rémi Coulom (2009). The Monte-Carlo Revolution in Go. JFFoS'2008: Japanese-French Frontiers of Science Symposium, slides as pdf
  3. Julien Kloetzer, Hiroyuki Iida, Bruno Bouzy (2007). The Monte-Carlo approach in Amazons. CGW 2007
  4. Richard J. Lorentz (2008). Amazons Discover Monte-Carlo. CG 2008
  5. Mark Winands and Yngvi Björnsson (2009). Evaluation Function Based Monte-Carlo LOA. pdf
  6. Richard J. Lorentz (2010). Improving Monte-Carlo Tree Search in Havannah. CG 2010
  7. Broderick Arneson, Ryan Hayward, Philip Henderson (2010). Monte Carlo Tree Search in Hex. IEEE Transactions on Computational Intelligence and AI in Games, Vol. 2, No. 4, pdf
  8. UCT surprise for checkers! by Daniel Shawul, CCC, March 25, 2011
  9. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815
  10. Raghuram Ramanujan, Ashish Sabharwal, Bart Selman (2010). On Adversarial Search Spaces and Sampling-Based Planning. ICAPS 2010
  11. Oleg Arenz (2012). Monte Carlo Chess. B.Sc. thesis, Darmstadt University of Technology, advisor Johannes Fürnkranz, pdf
  12. Guillaume Chaslot, Mark Winands, Jaap van den Herik (2008). Parallel Monte-Carlo Tree Search. CG 2008, pdf
  13. Guillaume Chaslot, Mark Winands, Jaap van den Herik (2008). Parallel Monte-Carlo Tree Search. CG 2008, pdf
  14. Steps of Monte-Carlo Tree Search by Mciura, March 31, 2013, Wikimedia Commons, Monte Carlo tree search from Wikipedia
  15. Ingo Althöfer (2011). On Board-Filling Games with Random-Turn Order and Monte Carlo Perfectness. Advances in Computer Games 13
  16. Levente Kocsis, Csaba Szepesvári (2006). Bandit based Monte-Carlo Planning. ECML 2006, LNCS/LNAI 4212, pp. 282-293, introducing UCT, pdf
  17. Jeff Rollason (2015). Mixing the Immiscible - MCTS and evaluation. AI Factory, October 2015
  18. Monte Carlo in LOA by Mark Watkins, Open Chess Forum, December 30, 2010
  19. The Settlers of Catan from Wikipedia
  20. Blokus Duo from Wikipedia
  21. Search traps in MCTS and chess by Daniel Shawul, CCC, December 25, 2017
  22. NoGo (ICGA Tournaments)
  23. Ms. Pac-Man from Wikipedia
  24. Scotland Yard (board game) from Wikipedia
  25. Re: MC methods by Daniel Shawul, CCC, April 13, 2013
  26. Crossings from Wikipedia
  27. Epaminondas from Wikipedia
  28. 7 Wonders (board game) from Wikipedia
  29. Guillaume Chaslot, Mark Winands, Jos Uiterwijk, Jaap van den Herik, Bruno Bouzy (2008). Progressive Strategies for Monte-Carlo Tree Search. New Mathematics and Natural Computation, Vol. 4, No. 3, pdf
  30. Dap Hartmann (2017). Let's Catch the Train to Monte-Carlo. ICGA Journal, Vol. 39, No. 1
  31. Re: Minmax backup operator for MCTS by Brahim Hamadicharef, CCC, December 30, 2017
  32. Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » LCZero
  33. GitHub - suragnair/alpha-zero-general: A clean and simple implementation of a self-play learning algorithm based on AlphaGo Zero (any game, any framework!)
  34. A Simple Alpha(Go) Zero Tutorial by Oliver Roese, CCC, December 30, 2017
  35. David Silver, Gerald Tesauro (2009). Monte-Carlo Simulation Balancing. ICML 2009, pdf
