Leela Chess Zero
Revision as of 14:25, 6 January 2019

Home * Engines * Leela Chess Zero

Lc0 logo [1]

Leela Chess Zero (LCZero, lc0),
an adaptation of Gian-Carlo Pascutto's Leela Zero Go project [2] to chess, initiated and announced by Stockfish co-author Gary Linscott, who was already responsible for the Stockfish testing framework Fishtest. Leela Chess is open source, released under the terms of GPL version 3 or later, and supports UCI. The goal is to build a strong chess playing entity following the same deep learning and Monte-Carlo tree search (MCTS) techniques of AlphaZero, as described in DeepMind's 2017 and 2018 papers [3] [4] [5], but using distributed training for the weights of the deep convolutional neural network (CNN, DNN, DCNN).

Lc0

Leela Chess Zero consists of an executable to play or analyze games, initially dubbed LCZero, soon rewritten by a team around Alexander Lyashuk for better performance and then called Lc0 [6]. This executable, the actual chess engine, performs the MCTS and reads the self-taught CNN, whose weights are kept in a separate file. Lc0 is written in C++14 and may be compiled for various platforms and backends. Since deep CNN approaches are best suited to run massively in parallel on GPUs to perform all the floating point dot products for thousands of neurons, the preferred target platforms are Nvidia GPUs supporting the CUDA and cuDNN libraries [7]. Non-CUDA-compliant GPUs (such as AMD's) are supported through OpenCL, while much slower pure CPU binaries are possible using BLAS. Target systems, with or without a graphics card (GPU), are Linux, Mac OS and Windows computers, or, BLAS only, the Raspberry Pi.

Description

Like AlphaZero, Lc0 evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in classical chess programs. This neural network takes the board position as input and outputs a position evaluation (QValue) and a vector of move probabilities (PValue, policy). Once trained, the network is combined with a Monte-Carlo Tree Search (MCTS), using the policy to narrow down the search to high-probability moves, and using the value in conjunction with a fast rollout policy to evaluate positions in the tree. The MCTS selection is done by a variation of Rosin's UCT improvement dubbed PUCT (Predictor + UCT).
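
For illustration, the PUCT selection rule as given in DeepMind's papers picks the move maximizing the mean action value plus an exploration bonus scaled by the policy prior, where Q(s,a) is the mean value of move a in position s, P(s,a) its prior probability from the policy head, N(s,a) its visit count, and c_puct an exploration constant; Lc0's exact variant and constants may differ:

```latex
a^{*} \;=\; \arg\max_{a} \left( Q(s,a) \;+\; c_{\text{puct}}\, P(s,a)\,
            \frac{\sqrt{\sum_{b} N(s,b)}}{1 + N(s,a)} \right)
```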

Board Representation

Lc0's color agnostic board is represented by five bitboards (own pieces, opponent pieces, orthogonal sliding pieces, diagonal sliding pieces, and pawns, with en passant target information coded as pawns on ranks 1 and 8), two king squares, castling rights, and a flag indicating whether the board is color flipped. While this structure is suitable as input for the neural network, getting individual piece bitboards requires some setwise operations such as intersection and set-theoretic difference [8].
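
To make those setwise operations concrete, here is a minimal C++ sketch of recovering individual piece bitboards from such a five-bitboard representation. The struct layout and helper names are illustrative, not Lc0's actual code: queens appear in both sliding sets, so intersection isolates them, and set-theoretic difference removes them again.

```cpp
#include <cstdint>

// Illustrative five-bitboard representation as described above.
struct Board {
    uint64_t ours;     // all own pieces
    uint64_t theirs;   // all opponent pieces
    uint64_t rooks;    // orthogonal sliders: rooks and queens
    uint64_t bishops;  // diagonal sliders: bishops and queens
    uint64_t pawns;    // pawns; ranks 1 and 8 encode en passant targets
    int our_king, their_king;  // king squares (0..63)
};

constexpr uint64_t kPawnRanks = 0x00FFFFFFFFFFFF00ULL;  // ranks 2..7

// Queens sit in both sliding sets, so intersection isolates them.
inline uint64_t queens(const Board& b)      { return b.rooks & b.bishops; }
// Set-theoretic difference removes the queens again.
inline uint64_t rooksOnly(const Board& b)   { return b.rooks & ~b.bishops; }
inline uint64_t bishopsOnly(const Board& b) { return b.bishops & ~b.rooks; }
// Real pawns exclude the en passant encoding on ranks 1 and 8.
inline uint64_t pawnsOnly(const Board& b)   { return b.pawns & kPawnRanks; }
// Knights are whatever remains after sliders, pawns and kings.
inline uint64_t knights(const Board& b) {
    return (b.ours | b.theirs)
         & ~(b.rooks | b.bishops | pawnsOnly(b))
         & ~((1ULL << b.our_king) | (1ULL << b.their_king));
}
```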

Network

While AlphaGo used two disjoint networks for policy and value, AlphaZero, as well as Leela Chess Zero, shares a common "body" connected to disjoint policy and value "heads". The "body" consists of spatial 8x8 input planes, followed by convolutional layers of B residual blocks with 3x3xF filters each; FxB specifies the model and size of the CNN (64x6, 128x10, 192x15 and 256x20 were used). Concerning nodes per second of the MCTS, smaller models are faster to compute than larger models. They are also faster to train, and progress can be recognized earlier, but they saturate earlier, so that at some point more training will no longer improve the engine. Larger and deeper network models will improve the receptivity, the amount of knowledge and patterns to extract from the training samples, with the potential for a stronger engine. As a further improvement, Leela Chess Zero applies the Squeeze and Excite (SE) extension to the residual block architecture [9] [10]. The body is fully connected to both the policy "head" for the move probability distribution, and the value "head" for the evaluation score aka winning probability of the current position, with up to seven predecessor positions encoded on the input planes.
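
A minimal sketch of the basic Squeeze-and-Excitation operation from the original SE paper, applied to a CxHxW activation tensor: a global average pool per channel ("squeeze"), a two-layer bottleneck with reduction ratio r, and a sigmoid gate that rescales each channel ("excite"). Names and weight layout are illustrative; Lc0's actual GPU implementation and its exact SE variant may differ.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Applies squeeze-and-excitation in place to x, a C x H x W tensor stored
// channel-major. w1 is the [C/r][C] bottleneck layer, w2 the [C][C/r]
// expansion layer, r the reduction ratio.
void squeezeExcite(std::vector<float>& x, int C, int H, int W,
                   const std::vector<float>& w1,
                   const std::vector<float>& w2, int r) {
    const int Cr = C / r;
    // Squeeze: global average pool, one scalar per channel.
    std::vector<float> pooled(C);
    for (int c = 0; c < C; ++c) {
        float sum = 0.0f;
        for (int i = 0; i < H * W; ++i) sum += x[c * H * W + i];
        pooled[c] = sum / (H * W);
    }
    // Excitation: FC -> ReLU -> FC -> sigmoid.
    std::vector<float> hidden(Cr);
    for (int j = 0; j < Cr; ++j) {
        float s = 0.0f;
        for (int c = 0; c < C; ++c) s += w1[j * C + c] * pooled[c];
        hidden[j] = std::max(s, 0.0f);  // ReLU
    }
    for (int c = 0; c < C; ++c) {
        float s = 0.0f;
        for (int j = 0; j < Cr; ++j) s += w2[c * Cr + j] * hidden[j];
        const float gate = 1.0f / (1.0f + std::exp(-s));  // sigmoid
        // Scale: recalibrate the whole channel by its learned gate.
        for (int i = 0; i < H * W; ++i) x[c * H * W + i] *= gate;
    }
}
```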

Training

Like in AlphaZero, the Zero suffix implies no other initial knowledge than the rules of the game, to build a superhuman player, starting with truly random self-play games to apply reinforcement learning based on the outcome of those games. However, there are derived approaches, such as Albert Silver's Deus X, trying to take a short-cut by initially using supervised learning techniques, such as feeding in high quality games played by other strong chess playing entities, or huge records of positions with a given preferred move. The unsupervised training of the NN aims to minimize the sum of the mean squared error loss of the value output, the policy loss, and an L2-norm regularization term over the network weights. Further, there are experiments to train the value head not against the game outcome, but against the accumulated value of a position after exploring some number of nodes with UCT [11].
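
For reference, the combined loss as defined in the AlphaZero paper, with z the game outcome, v the value output, π the search probabilities, p the policy output, and c‖θ‖² the L2 regularization over the network weights θ; Lc0's exact weighting may differ:

```latex
l \;=\; (z - v)^{2} \;-\; \pi^{\top} \log \mathbf{p} \;+\; c\,\lVert \theta \rVert^{2}
```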

The distributed training is realized with a sophisticated client-server model. The client, written entirely in the Go programming language, incorporates Lc0 to produce self-play games. Controlled by the server, the client downloads the latest network, starts self-playing, and uploads games to the server, which in turn regularly produces and distributes new neural network weights once a certain number of games from contributors has become available. The training software consists of Python code; the pipeline requires NumPy and TensorFlow running on Linux [12]. The server is written in Go along with Python and shell scripts.
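
As a rough illustration of this workflow, the following compilable C++ sketch mirrors the client's download/self-play/upload loop. The real client is written in Go, and every function here is a hypothetical stub, not the actual client or server API.

```cpp
#include <iostream>
#include <string>

// Hypothetical stubs standing in for the real client's network calls.
std::string fetchLatestNetworkId() { return "network-id"; }   // ask the server
void downloadNetworkIfNew(const std::string&) {}              // cache weights
std::string playSelfPlayGame(const std::string&) {            // run Lc0 vs. itself
    return "training-data";
}
void uploadGame(const std::string& data) {                    // send to server
    std::cout << "uploaded " << data << '\n';
}

int main() {
    for (int i = 0; i < 3; ++i) {  // the real client loops indefinitely
        const std::string id = fetchLatestNetworkId();
        downloadNetworkIfNew(id);
        uploadGame(playSelfPlayGame(id));
        // The server aggregates contributed games and periodically trains
        // and publishes new network weights for all clients.
    }
}
```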

See also

AlphaZero
Leela Zero
Leila
Deep Learning
Monte-Carlo Tree Search
UCT
PUCT

Forum Posts

2018

Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Rollout Paradigm
LCZero update (2) by Rein Halbersma, CCC, March 25, 2018
Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Gian-Carlo Pascutto, CCC, August 03, 2018
Re: Has Silver written any code for "his" ZeusX? by Alexander Lyashuk, LCZero Forum, August 02, 2018

2019

Blog Posts

Lessons from AlphaZero: Connect Four by Aditya Prasad, Oracle Blog, June 13, 2018
Lessons from AlphaZero (part 3): Parameter Tweaking by Aditya Prasad, Oracle Blog, June 20, 2018
Lessons From AlphaZero (part 4): Improving the Training Target by Vish Abrams, Oracle Blog, June 27, 2018
Lessons From Alpha Zero (part 5): Performance Optimization by Anthony Young, Oracle Blog, July 03, 2018
Lessons From Alpha Zero (part 6) — Hyperparameter Tuning by Anthony Young, Oracle Blog, July 11, 2018

External Links

Chess Engine

Leela Chess Zero from Wikipedia
Leela (software) from Wikipedia
LCZero
GitHub - LeelaChessZero/lczero: A chess adaption of GCP's Leela Zero
Leela Chess Zero: AlphaZero for the PC by Albert Silver, ChessBase News, April 26, 2018

Misc

Leela from Wikipedia
Leela (game) from Wikipedia
Leela (name) from Wikipedia
Leela (Doctor Who) from Wikipedia
Leela (Futurama) from Wikipedia
Marc Ribot's Ceramic Dog - Lies My Body Told Me (Live on KEXP, July 20, 2016), YouTube Video

References

Up one level

Categories: GPL | Marc Ribot | Fiction | Given Name