Leela Chess Zero

Revision as of 15:46, 6 January 2019


Lc0 logo [1]

Leela Chess Zero (LCZero, Lc0)
is an adaptation of Gian-Carlo Pascutto's Leela Zero Go project [2] to chess, initiated and announced by Stockfish co-author Gary Linscott, who was already responsible for the Stockfish testing framework Fishtest. Leela Chess Zero is open source, released under the terms of GPL version 3 or later, and supports UCI. The goal is to build a strong chess playing entity following the same deep learning and Monte-Carlo tree search (MCTS) techniques of AlphaZero, as described in DeepMind's 2017 and 2018 papers [3] [4] [5], but using distributed training for the weights of the deep convolutional neural network (CNN, DNN, DCNN).


Leela Chess Zero consists of an executable to play or analyze games, initially dubbed LCZero, soon rewritten by a team around Alexander Lyashuk for better performance and then called Lc0 [6]. This executable, the actual chess engine, performs the MCTS and reads the self-taught CNN, whose weights are persisted in a separate file. Lc0 is written in C++14 and may be compiled for various platforms and backends. Since deep CNN approaches are best suited to run massively parallel on GPUs, performing the floating point dot products for thousands of neurons, the preferred target platforms are Nvidia GPUs supporting the CUDA and cuDNN libraries [7]. Non-CUDA GPUs (such as AMD) are supported through OpenCL, while much slower pure CPU binaries are possible using BLAS. Target systems, with or without a graphics card (GPU), are Linux, Mac OS and Windows computers, or, BLAS only, the Raspberry Pi.


Like AlphaZero, Lc0 evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in classical chess programs. This neural network takes the board position as input and outputs a position evaluation (QValue) and a vector of move probabilities (PValue, policy). Once trained, this network is combined with a Monte-Carlo Tree Search (MCTS), using the policy to narrow down the search to high-probability moves, and using the value to evaluate positions in the tree. The MCTS selection is done by a variation of Rosin's UCT improvement dubbed PUCT (Predictor + UCT).
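The PUCT selection rule can be sketched as follows: each child move keeps a visit count N, an accumulated value W, and a policy prior P from the network, and the search repeatedly descends to the child maximizing Q + U, where U is an exploration bonus proportional to P. This is an illustrative sketch in Python, not Lc0's actual C++ implementation, and the exploration constant c_puct is an assumed placeholder:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where U is the PUCT exploration
    bonus driven by the network's policy prior P."""
    total_visits = sum(ch["N"] for ch in children)

    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0   # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return q + u

    return max(children, key=score)

# Two candidate moves: one well-explored with a good value,
# one unvisited but with a high policy prior.
children = [
    {"move": "e2e4", "P": 0.3, "N": 10, "W": 6.0},
    {"move": "d2d4", "P": 0.6, "N": 0,  "W": 0.0},
]
best = puct_select(children)
```

Note how the unvisited move with the larger prior wins the selection at first; as its visit count grows, the exploration bonus decays and the accumulated Q takes over.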

Board Representation

Lc0's color-agnostic board is represented by five bitboards (own pieces, opponent pieces, orthogonal sliding pieces, diagonal sliding pieces, and pawns, with en passant target information coded as pawns on ranks 1 and 8), two king squares, castling rights, and a flag indicating whether the board is color-flipped. While the structure is suitable as input for the neural network, deriving individual piece bitboards requires some setwise operations, such as intersection and set-theoretic difference [8].
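The derivation of the individual piece sets follows from the representation above: queens sit in both sliding sets, and knights are the occupied squares left over once sliders, pawns and kings are masked off. A minimal sketch in Python, using plain integers as 64-bit bitboards (illustrative only, not Lc0's actual code):

```python
# Masks for ranks 1 and 8, where "pawns" encode en passant targets.
RANK_1 = 0x00000000000000FF
RANK_8 = 0xFF00000000000000

def piece_bitboards(ours, theirs, orth, diag, pawns_ep,
                    our_king_sq, their_king_sq):
    """Recover individual piece bitboards from the five-bitboard
    representation with setwise operations."""
    occupied = ours | theirs
    queens   = orth & diag                     # queens slide both ways
    rooks    = orth & ~diag                    # orthogonal-only sliders
    bishops  = diag & ~orth                    # diagonal-only sliders
    pawns    = pawns_ep & ~(RANK_1 | RANK_8)   # strip en passant coding
    kings    = (1 << our_king_sq) | (1 << their_king_sq)
    knights  = occupied & ~(orth | diag | pawns | kings)  # the remainder
    return {"Q": queens, "R": rooks, "B": bishops,
            "N": knights, "P": pawns, "K": kings}
```

For example, a position with a white rook on a1 (square 0), king on e1 (4), pawn on e2 (12), and a black knight on b8 (57), queen on d8 (59), king on e8 (60) decodes back to exactly those piece sets.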


While AlphaGo used two disjoint networks for policy and value, AlphaZero, as well as Leela Chess Zero, shares a common "body" connected to disjoint policy and value "heads". The "body" consists of spatial 8x8 input planes, followed by convolutional layers of B residual blocks with 3x3xF filters each. FxB specifies the model size of the CNN (64x6, 128x10, 192x15 and 256x20 have been used). Concerning nodes per second of the MCTS, smaller models are faster to evaluate than larger ones. They are also faster to train, and progress can be recognized earlier, but they saturate earlier, so that at some point further training will no longer improve the engine. Larger and deeper network models increase the capacity, the amount of knowledge and patterns extracted from the training samples, with the potential for a stronger engine. As a further improvement, Leela Chess Zero applies the Squeeze-and-Excitation (SE) extension to the residual block architecture [9] [10]. The body is fully connected to both the policy "head" for the move probability distribution, and the value "head" for the evaluation score, aka winning probability, of the current position; the input planes encode the current position and up to seven predecessor positions.
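The effect of a Squeeze-and-Excitation layer can be illustrated with a small NumPy sketch: the F channel maps are globally average-pooled ("squeeze"), passed through a two-layer bottleneck ("excite"), and the resulting per-channel gates rescale the maps. The layer sizes here are illustrative assumptions, and Lc0's actual SE layer also produces per-channel biases; this is only a schematic sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_scale(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation: recalibrate the F channel maps of an
    (F, 8, 8) tensor with globally pooled, learned per-channel gates."""
    squeeze = x.mean(axis=(1, 2))                   # (F,) global average pool
    hidden  = np.maximum(0.0, w1 @ squeeze + b1)    # bottleneck, ReLU
    gates   = sigmoid(w2 @ hidden + b2)             # (F,) gates in (0, 1)
    return x * gates[:, None, None]                 # rescale each 8x8 plane

F, bottleneck = 64, 8                               # illustrative sizes
rng = np.random.default_rng(0)
x  = rng.standard_normal((F, 8, 8))
w1 = rng.standard_normal((bottleneck, F)); b1 = np.zeros(bottleneck)
w2 = rng.standard_normal((F, bottleneck)); b2 = np.zeros(F)
y = se_scale(x, w1, b1, w2, b2)
```

Because the gates lie in (0, 1), the layer can only attenuate channels, letting the network emphasize the channel maps that matter for the position at hand at negligible extra cost.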


As in AlphaZero, the Zero suffix implies no other initial knowledge than the rules of the game, with the aim to build a superhuman player, starting with truly random self-play games and applying reinforcement learning based on the outcome of those games. However, there are derived approaches, such as Albert Silver's Deus X, trying to take a short-cut by initially using supervised learning techniques, such as feeding in high quality games played by other strong chess playing entities, or huge records of positions with a given preferred move. The training of the NN minimizes a combined loss: the mean squared error of the value output, the cross-entropy loss of the policy output, and an L2 regularization term on the weights. Further, there are experiments to train the value head not against the game outcome, but against the accumulated value for a position after exploring some number of nodes with UCT [11].
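For a single training position, this combined loss can be written out as a short sketch (symbols as in the AlphaZero paper: game outcome z, value output v, MCTS visit distribution pi, policy output p; the regularization constant c is an assumed placeholder):

```python
import numpy as np

def training_loss(z, v, pi, p, theta, c=1e-4):
    """AlphaZero-style loss for one position: value MSE plus policy
    cross-entropy plus L2 weight regularization."""
    value_loss  = (z - v) ** 2                       # (z - v)^2
    policy_loss = -np.sum(pi * np.log(p + 1e-12))    # -pi^T log p
    l2_penalty  = c * np.sum(theta ** 2)             # c * ||theta||^2
    return value_loss + policy_loss + l2_penalty

z, v  = 1.0, 0.8                       # game outcome vs. predicted value
pi    = np.array([0.7, 0.2, 0.1])      # MCTS visit distribution (target)
p     = np.array([0.6, 0.3, 0.1])      # network policy output
theta = np.array([0.5, -0.5])          # stand-in for the network weights
loss = training_loss(z, v, pi, p, theta)
```

The policy term is minimized when the network's move probabilities match the MCTS visit distribution, which is how search results feed back into the network during training.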

The distributed training is realized with a sophisticated client-server model. The client, written entirely in the Go programming language, incorporates Lc0 to produce self-play games. Controlled by the server, the client downloads the latest network, plays self-play games, and uploads them to the server, which in turn regularly produces and distributes new neural network weights after a certain number of games has been contributed. The training software consists of Python code; the pipeline requires NumPy and TensorFlow running on Linux [12]. The server is written in Go, along with Python and shell scripts.
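The client-server cycle described above can be sketched schematically; this is a Python illustration of the control flow only, and the method names on the server and engine objects are hypothetical placeholders, not the real lczero protocol:

```python
def training_client(server, engine, num_games):
    """Schematic self-play client loop: fetch the newest network when it
    changes, generate a self-play game, upload it to the server."""
    current_net = None
    for _ in range(num_games):
        latest = server.latest_network_id()
        if latest != current_net:                  # only fetch on change
            current_net = latest
            engine.load_weights(server.download_network(latest))
        game = engine.play_self_play_game()        # Lc0 produces the game
        server.upload_game(current_net, game)      # server aggregates games
```

The server side aggregates the uploaded games from all contributors and periodically trains and publishes a new set of network weights, closing the loop.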

See also


Forum Posts


Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Rollout Paradigm
LCZero update (2) by Rein Halbersma, CCC, March 25, 2018
Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Gian-Carlo Pascutto, CCC, August 03, 2018
Re: Has Silver written any code for "his" ZeusX? by Alexander Lyashuk, LCZero Forum, August 02, 2018


Blog Posts

Lessons from AlphaZero: Connect Four by Aditya Prasad, Oracle Blog, June 13, 2018
Lessons from AlphaZero (part 3): Parameter Tweaking by Aditya Prasad, Oracle Blog, June 20, 2018
Lessons From AlphaZero (part 4): Improving the Training Target by Vish Abrams, Oracle Blog, June 27, 2018
Lessons From Alpha Zero (part 5): Performance Optimization by Anthony Young, Oracle Blog, July 03, 2018
Lessons From Alpha Zero (part 6) — Hyperparameter Tuning by Anthony Young, Oracle Blog, July 11, 2018

External Links

* [https://twitter.com/leelachesszero Leela Chess Zero (@LeelaChessZero) | Twitter]
* [https://en.chessbase.com/post/leela-chess-zero-alphazero-for-the-pc Leela Chess Zero: AlphaZero for the PC] by [[Albert Silver]], [[ChessBase|ChessBase News]], April 26, 2018
* [https://www.youtube.com/watch?v=Crwg2oT9KWE Leela reacts beautifully to Stockfish's outrageous opening greed] by [https://www.youtube.com/channel/UCDUDDmslypVXYoUsZafHSUQ kingscrusher], January 5, 2019, [https://en.wikipedia.org/wiki/YouTube YouTube] Video
* [https://en.wikipedia.org/wiki/Leela Leela from Wikipedia]

Chess Engine


