UCT

UCT (Upper Confidence bounds applied to Trees) is a popular algorithm that addresses a flaw of plain Monte-Carlo Tree Search: a program may favor a losing move that has only one or a few forced refutations, because the vast majority of replies to it yield a better random-playout score than the replies to other, better moves. UCT was introduced by Levente Kocsis and Csaba Szepesvári in 2006 [1], and it accelerated the Monte-Carlo revolution in computer Go [2] and in games that are difficult to evaluate statically. Given infinite time and memory, UCT theoretically converges to Minimax.

Selection

In UCT, upper confidence bounds (UCB1) guide the selection of a node [3], treating selection as a multi-armed bandit problem, in which the crucial tradeoff the gambler faces at each trial is between exploitation of the slot machine with the highest expected payoff and exploration to gain more information about the expected payoffs of the other machines. The child node j that maximizes the UCT evaluation is selected:

UCTj = Xj + C · sqrt( ln(n) / nj )

where:

  • Xj is the win ratio of the child
  • n is the number of times the parent has been visited
  • nj is the number of times the child has been visited
  • C is a constant that adjusts the amount of exploration; it incorporates the sqrt(2) of the UCB1 formula

The first component of the UCB1 formula above corresponds to exploitation, as it is high for moves with high average win ratio. The second component corresponds to exploration, since it is high for moves with few simulations.
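As a concrete illustration, a minimal Python sketch of this selection step follows. It assumes each tree node carries wins, visits, and children attributes (these names are illustrative, not taken from a particular implementation), and that unvisited children are expanded before the formula is applied, so nj > 0:

  import math

  def select_child(parent, c=math.sqrt(2)):
      """Return the child maximizing Xj + C * sqrt(ln(n) / nj).

      Assumes `wins`/`visits` counters and a `children` list on each
      node (illustrative names), and that every child has been visited
      at least once, since unvisited children are expanded first.
      """
      log_n = math.log(parent.visits)

      def uct_value(child):
          win_ratio = child.wins / child.visits              # exploitation term (Xj)
          exploration = c * math.sqrt(log_n / child.visits)  # exploration term
          return win_ratio + exploration

      return max(parent.children, key=uct_value)

Larger values of C make the search try rarely visited moves more often; smaller values concentrate the simulations on the currently best-scoring move.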

RAVE

Most contemporary implementations of MCTS are based on some variant of UCT [4]. Modifications have been proposed with the aim of shortening the time to find good moves. They can be divided into improvements based on expert knowledge and domain-independent improvements, applied either in the playouts or in building the tree, for instance by modifying the exploitation part of the UCB1 formula, as in Rapid Action Value Estimation (RAVE) [5], which takes transpositions into account.
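For illustration only, here is one common way the RAVE blend is written: the ordinary Monte-Carlo mean is mixed with an AMAF (all-moves-as-first) statistic using a weight β that decays as the move accumulates real visits. The schedule β = sqrt(k / (3·n + k)) with an equivalence parameter k appears in Gelly and Silver's work; the attribute names below are assumptions:

  import math

  def rave_value(child, k=1000.0):
      """Blend the Monte-Carlo mean with the AMAF mean (RAVE sketch).

      `amaf_wins`/`amaf_visits` are assumed counters over playouts in
      which the move occurred anywhere, which is what lets RAVE share
      statistics across transpositions; `k` controls how fast the
      AMAF weight decays as real visits accumulate.
      """
      q = child.wins / child.visits                       # ordinary MC mean
      amaf = child.amaf_wins / max(child.amaf_visits, 1)  # AMAF mean
      beta = math.sqrt(k / (3 * child.visits + k))        # decaying weight
      return (1 - beta) * q + beta * amaf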

PUCT

Chris Rosin's PUCT modifies the original UCB1 multi-armed bandit policy by approximately predicting good arms at the start of a sequence of multi-armed bandit trials ('Predictor' + UCB = PUCB) [6]. A variation of PUCT was used in the AlphaGo and AlphaZero projects [7], and subsequently also in Leela Zero and Leela Chess Zero [8].
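As a hedged sketch of the AlphaGo-style variant [7]: the exploration term is scaled by a prior probability P(s,a) supplied by a policy network, so rarely visited but promising moves are tried early. The constant c_puct and the attribute names are illustrative choices, not fixed by the source:

  import math

  def puct_select(parent, c_puct=1.5):
      """Pick the child maximizing Q(s,a) + U(s,a), AlphaGo style.

      Each child is assumed to carry a `prior` P(s,a) from the policy
      network plus `value_sum`/`visits` counters (illustrative names).
      U(s,a) = c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).
      """
      sqrt_n = math.sqrt(parent.visits)

      def puct_value(child):
          q = child.value_sum / child.visits if child.visits else 0.0
          u = c_puct * child.prior * sqrt_n / (1 + child.visits)
          return q + u

      return max(parent.children, key=puct_value)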


Quotes

Gian-Carlo Pascutto

Quote by Gian-Carlo Pascutto in 2010 [9]:

There is no significant difference between an alpha-beta search with heavy LMR  and a static evaluator (current state of the art in chess) and an UCT searcher with a small exploration constant that does playouts (state of the art in go).
The shape of the tree they search is very similar. The main breakthrough in Go the last few years was how to backup an uncertain Monte Carlo score. This was solved. For chess this same problem was solved around the time quiescent search was developed.
Both are producing strong programs and we've proven for both the methods that they scale in strength as hardware speed goes up.
So I would say that we've successfully adopted the simple, brute force methods for chess to Go and they already work without increases in computer speed. The increases will make them progressively stronger though, and with further software tweaks they will eventually surpass humans. 

Raghuram Ramanujan et al.

Quote by Raghuram Ramanujan, Ashish Sabharwal, and Bart Selman from the abstract of their paper On Adversarial Search Spaces and Sampling-Based Planning [10]:

UCT has been shown to outperform traditional minimax based approaches in several challenging domains such as Go and KriegSpiel, although minimax search still prevails in other domains such as Chess. This work provides insights into the properties of adversarial search spaces that play a key role in the success or failure of UCT and similar sampling-based approaches. We show that certain "early loss" or "shallow trap" configurations, while unlikely in Go, occur surprisingly often in games like Chess (even in grandmaster games). We provide evidence that UCT, unlike minimax search, is unable to identify such traps in Chess and spends a great deal of time exploring much deeper game play than needed. 

See also

Publications

2000 ...

2005 ...

2006

2007

2008

2009

2010 ...

2011

2012

2013

2014

2015 ...

  • Xi Liang, Ting-Han Wei, I-Chen Wu (2015). Solving Hex Openings Using Job-Level UCT Search. ICGA Journal, Vol. 38, No. 3
  • Naoki Mizukami, Jun Suzuki, Hirotaka Kameko, Yoshimasa Tsuruoka (2017). Exploration Bonuses Based on Upper Confidence Bounds for Sparse Reward Games. Advances in Computer Games 15
  • Jacek Mańdziuk (2018). MCTS/UCT in Solving Real-Life Problems. Advances in Data Analysis with Computational Intelligence Methods, Springer

Forum Posts

External Links

UCT Monte Carlo Tree Search algorithm in Python 2.7 by Peter Cowling, Ed Powley, and Daniel Whitehouse
UCT MCTS implementation in Java by Simon Lucas
Joe Zawinul, Wayne Shorter, Alphonso Johnson, Darryl Brown, Dom Um Romão

References

  1. Levente Kocsis, Csaba Szepesvári (2006). Bandit based Monte-Carlo Planning. ECML-06, LNCS/LNAI 4212, pdf
  2. Sylvain Gelly, Marc Schoenauer, Michèle Sebag, Olivier Teytaud, Levente Kocsis, David Silver, Csaba Szepesvári (2012). The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions. Communications of the ACM, Vol. 55, No. 3, pdf preprint
  3. see UCB1 in Peter Auer, Nicolò Cesa-Bianchi, Paul Fischer (2002). Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, Vol. 47, No. 2
  4. Exploration and exploitation - in Monte Carlo tree search from Wikipedia
  5. Sylvain Gelly, David Silver (2011). Monte-Carlo tree search and rapid action value estimation in computer Go. Artificial Intelligence, Vol. 175, No. 11, preprint as pdf
  6. Christopher D. Rosin (2011). Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, Vol. 61, No. 3, ISAIM 2010 pdf
  7. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis (2016). Mastering the game of Go with deep neural networks and tree search. Nature, Vol. 529
  8. FAQ · LeelaChessZero/lc0 Wiki · GitHub
  9. Re: Chess vs Go // AI vs IA by Gian-Carlo Pascutto, June 02, 2010
  10. Raghuram Ramanujan, Ashish Sabharwal, Bart Selman (2010). On Adversarial Search Spaces and Sampling-Based Planning. ICAPS 2010
  11. Search traps in MCTS and chess by Daniel Shawul, CCC, December 25, 2017
  12. Crossings from Wikipedia
  13. Epaminondas from Wikipedia

Up one level