Revision as of 18:20, 2 July 2018
LCZero (Leela Chess Zero),
an adaptation of Gian-Carlo Pascutto's Leela Zero Go project [1] to Chess, using Stockfish's board representation and move generation. No heuristics or prior knowledge are carried over from Stockfish. The goal is to build a strong UCT chess AI following the same deep learning techniques as AlphaZero, as described in DeepMind's paper [2], but using distributed training for the weights of the deep residual convolutional neural network. The training process requires CUDA and a GPU-accelerated version of TensorFlow installed [3].
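The AlphaZero-style search described above combines UCT with the network's policy and value outputs via the PUCT formula. The following is a minimal illustrative sketch of PUCT child selection, not LCZero's actual implementation; the `Node` class, `c_puct` constant, and field names are assumptions for the example.

```python
import math

class Node:
    """One node of an AlphaZero-style search tree (illustrative only)."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the network's policy head
        self.visit_count = 0      # N(s, a)
        self.value_sum = 0.0      # W(s, a), accumulated value estimates
        self.children = {}        # move string -> Node

    def q_value(self):
        # Mean action value Q(s, a); treated as 0 for unvisited nodes.
        if self.visit_count == 0:
            return 0.0
        return self.value_sum / self.visit_count

def select_child(node, c_puct=1.5):
    # Pick the child maximizing Q(s, a) + U(s, a), where
    # U = c_puct * P(s, a) * sqrt(sum_b N(s, b)) / (1 + N(s, a)).
    # High-prior, rarely-visited moves get a large exploration bonus U,
    # which decays as their visit counts grow.
    total_visits = sum(c.visit_count for c in node.children.values())
    def puct(child):
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.q_value() + u
    return max(node.children.items(), key=lambda kv: puct(kv[1]))
```

In a full engine this selection step is applied repeatedly from the root to a leaf, the leaf is evaluated by the neural network, and the value is backed up along the visited path; only the selection rule is sketched here.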
See also
Forum Posts
- Announcing lczero by Gary, CCC, January 09, 2018
- Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Rollout Paradigm
- LCZero is learning by Gary, CCC, January 30, 2018
- LCZero update by Gary, CCC, March 14, 2018
- LCZero update (2) by Rein Halbersma, CCC, March 25, 2018
- LCZero: Progress and Scaling. Relation to CCRL Elo by Kai Laskos, CCC, March 28, 2018 » Playing Strength
- What does LCzero learn? by Uri Blass, CCC, April 05, 2018
- LCZero in Aquarium / Fritz by Carl Bicknell, CCC, April 11, 2018
- LCZero on 10x128 now by Gary, CCC, April 12, 2018
- lczero faq by Duncan Roberts, CCC, April 13, 2018
- LcZero and STS by Ed Schröder, CCC, June 14, 2018 » Strategic Test Suite
External Links
- LCZero (http://lczero.org/)
- LCZero · GitHub (https://github.com/LeelaChessZero/)
- GitHub - glinscott/leela-chess: A chess adaption of GCP's Leela Zero (https://github.com/glinscott/leela-chess)
References
- ↑ GitHub - gcp/leela-zero: Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper
- ↑ David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815
- ↑ leela-chess/README.md at master · glinscott/leela-chess · GitHub