Leela Chess Zero




Leela Chess Zero (LCZero, Lc0) is an adaptation of Gian-Carlo Pascutto's Leela Zero Go project to chess, initiated and announced by Stockfish co-author Gary Linscott, who was already responsible for the Stockfish testing framework Fishtest. Leela Chess Zero is open source, released under the terms of the GPL version 3 or later, and supports UCI. The goal is to build a strong chess playing entity following the same deep learning and Monte-Carlo tree search (MCTS) techniques of AlphaZero as described in DeepMind's 2017 and 2018 papers, but using distributed training for the weights of the deep convolutional neural network (CNN, DNN, DCNN).

=Lc0= Leela Chess Zero consists of an executable to play or analyze games, initially dubbed LCZero, soon rewritten by a team around Alexander Lyashuk for better performance and then called Lc0. This executable, the actual chess engine, performs the MCTS and reads the self-taught CNN, whose weights are stored in a separate file. Lc0 is written in C++14 and may be compiled for various platforms and backends. Since deep CNN approaches run best massively in parallel on GPUs, performing the floating point dot products for thousands of neurons, the preferred target platforms are Nvidia GPUs supporting the CUDA and cuDNN libraries. Non-CUDA GPUs (e.g. by AMD) are supported through OpenCL, while much slower pure CPU binaries are possible using BLAS. Target systems with or without a graphics card (GPU) are Linux, Mac OS and Windows computers, or, BLAS only, the Raspberry Pi.

=Description= Like AlphaZero, Lc0 evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in classical chess programs. This neural network takes the board position as input and outputs a position evaluation (QValue) and a vector of move probabilities (PValue, policy). Once trained, the network is combined with a Monte-Carlo Tree Search (MCTS), using the policy to narrow down the search to high-probability moves, and using the value to evaluate positions in the tree in place of rollouts. The MCTS selection is done by a variation of Rosin's UCT improvement dubbed PUCT (Predictor + UCT).
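The PUCT selection rule can be sketched as follows. This is a minimal illustration, not Lc0's actual code: the `Node` fields, the exploration constant, and the exact formula details are assumptions based on the published PUCT variant.

```python
import math

class Node:
    """Minimal MCTS node; field names are illustrative, not Lc0's."""
    def __init__(self, prior):
        self.prior = prior       # P(s,a) from the policy head
        self.visits = 0          # N(s,a), visit count
        self.value_sum = 0.0     # W(s,a), accumulated value
        self.children = []

def puct_select(node, c_puct=1.5):
    """Return the child maximizing Q(s,a) + U(s,a), the PUCT rule:
    exploitation (mean value Q) plus a policy-weighted exploration bonus U
    that shrinks as a move is visited more often."""
    total = sum(c.visits for c in node.children)
    def score(c):
        q = c.value_sum / c.visits if c.visits else 0.0
        u = c_puct * c.prior * math.sqrt(total) / (1 + c.visits)
        return q + u
    return max(node.children, key=score)
```

Note how the policy prior steers the search: an unvisited move with a high prior can outscore a well-explored move with a mediocre mean value, which is exactly how the network narrows the tree to promising lines.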

Board Representation
Lc0's color agnostic board is represented by five bitboards (own pieces, opponent pieces, orthogonal sliding pieces, diagonal sliding pieces, and pawns, with en passant target information coded as pawns on ranks 1 and 8), two king squares, castling rights, and a flag whether the board is color flipped. Getting individual piece bitboards requires some setwise operations such as intersection and set-theoretic difference.
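Those setwise operations can be sketched like this, using Python integers as 64-bit bitboards. The function and field names are illustrative, not Lc0's actual code; kings are passed as a bitboard here rather than as squares for brevity.

```python
# Squares are bit indices 0..63 (a1 = 0). Pawns can never legally stand on
# ranks 1 or 8, so those bits of the pawn board are free to encode the
# en passant target, as described above.
RANKS_1_AND_8 = 0xFF000000000000FF

def piece_sets(ours, theirs, rooks_queens, bishops_queens, pawns, kings):
    """Recover individual piece bitboards with intersection (&) and
    set-theoretic difference (& ~)."""
    occupied = ours | theirs
    queens     = rooks_queens & bishops_queens     # slide both ways
    rooks      = rooks_queens & ~queens
    bishops    = bishops_queens & ~queens
    en_passant = pawns & RANKS_1_AND_8             # coded e.p. targets
    real_pawns = pawns & ~RANKS_1_AND_8
    # knights are whatever remains once all other piece types are removed
    knights = occupied & ~(rooks_queens | bishops_queens | pawns | kings)
    return queens, rooks, bishops, knights, real_pawns, en_passant
```

The trick is that queens appear in both slider boards, so no separate queen board is needed, and knights need no board at all: they are the occupancy left over after subtracting every other piece type.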

Network
While AlphaGo used two disjoint networks for policy and value, AlphaZero as well as Leela Chess Zero share a common "body" connected to disjoint policy and value "heads". The "body" consists of spatial 8x8 planes, using B residual blocks with F filters of kernel size 3x3, stride 1. So far, model sizes FxB of 64x6, 128x10, 192x15, and 256x20 were used.

Concerning nodes per second of the MCTS, smaller models are faster to evaluate than larger ones. They are also faster to train and show progress earlier, but they saturate earlier, so that at some point further training no longer improves the engine. Larger and deeper network models increase the capacity, that is the amount of knowledge and patterns to extract from the training samples, with potential for a stronger engine. As a further improvement, Leela Chess Zero applies the Squeeze-and-Excitation (SE) extension to the residual block architecture. The body is connected to both the policy "head" for the move probability distribution, and the value "head" for the evaluation score aka winning probability of the current position and up to seven predecessor positions on the input planes.
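The SE extension can be sketched in NumPy. This shows the standard Squeeze-and-Excitation recalibration of a feature map; Lc0's actual variant differs in detail (its SE layer also produces a per-channel bias), and all shapes and names here are illustrative.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Recalibrate a (C, 8, 8) feature map: global-average-pool each
    channel (squeeze), pass through a small bottleneck MLP, and rescale
    the channels by the resulting sigmoid gates (excite)."""
    z = x.mean(axis=(1, 2))                     # squeeze: one value per channel
    h = np.maximum(z @ w1 + b1, 0.0)            # bottleneck layer with ReLU
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # excite: gates in (0, 1)
    return x * s[:, None, None]                 # channel-wise rescaling
```

Because the gates are computed from a global pooling of the whole board, SE lets each 3x3 convolutional block modulate its channels using board-wide context at very small extra cost.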

Training
Like in AlphaZero, the Zero suffix implies no initial knowledge other than the rules of the game: a superhuman player is built starting from truly random self-play games, applying reinforcement learning based on the outcome of those games. However, there are derived approaches, such as Albert Silver's Deus X, trying to take a shortcut by initially using supervised learning techniques, such as feeding in high quality games played by other strong chess playing entities, or huge records of positions with a given preferred move. Training the NN aims to minimize the sum of the mean squared error of the value output, the cross-entropy loss of the policy output, and an L2 regularization term on the weights. Further, there are experiments to train the value head not against the game outcome, but against the accumulated value of a position after exploring some number of nodes with UCT.
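For a single training position, the AlphaZero-style loss can be written out as follows. This is a sketch under the published formulation; the exact term weighting Lc0 uses is an assumption here.

```python
import numpy as np

def az_loss(z, v, pi, policy_logits, theta, c=1e-4):
    """AlphaZero-style loss for one position:
    (z - v)^2  - pi . log softmax(logits)  + c * ||theta||^2,
    i.e. value MSE plus policy cross-entropy plus L2 regularization.
    z is the game outcome, v the value output, pi the search-derived
    move distribution, theta the flattened network weights."""
    value_loss = (z - v) ** 2
    log_p = policy_logits - np.log(np.exp(policy_logits).sum())  # log-softmax
    policy_loss = -np.sum(pi * log_p)
    return value_loss + policy_loss + c * np.sum(theta ** 2)
```

The experiment mentioned above of training against accumulated search value amounts to replacing `z` with the root Q after a fixed number of UCT node visits.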

The distributed training is realized with a sophisticated client-server model. The client, written entirely in the Go programming language, incorporates Lc0 to produce self-play games. Controlled by the server, the client downloads the latest network, starts self-playing, and uploads the games to the server, which in turn regularly produces and distributes new neural network weights once a certain number of games is available from contributors. The training software consists of Python code; the pipeline requires NumPy and TensorFlow running on Linux. The server is written in Go along with Python and shell scripts.
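The client's contribution loop can be reduced to a hypothetical sketch. The real client is written in Go and its protocol is richer; every class, method and name below is illustrative, not the actual lczero-client API.

```python
class MockServer:
    """Stands in for the training server in this sketch."""
    def __init__(self):
        self.games = []        # games contributed so far
        self.network_id = 1    # id of the current best network

    def fetch_latest_network(self):
        return self.network_id

    def upload_game(self, game):
        self.games.append(game)

def run_client(server, self_play, rounds):
    """Download the latest net, produce a self-play game with it,
    upload the game; repeat. This is the whole contributor loop."""
    for _ in range(rounds):
        net = server.fetch_latest_network()
        server.upload_game(self_play(net))
```

Keeping the client this simple is what makes the scheme scale: contributors only burn GPU time on self-play, while all training and network promotion decisions stay on the server.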

=See also=
 * AlphaZero
 * Leela Zero
 * Leila
 * Deep Learning
 * Monte-Carlo Tree Search
 * UCT
 * PUCT

=Forum Posts=

2018

 * Announcing lczero by Gary, CCC, January 09, 2018
 * Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Rollout Paradigm


 * Policy and value heads are from AlphaGo Zero, not Alpha Zero Issue #47 by Gian-Carlo Pascutto, glinscott/leela-chess · GitHub, January 24, 2018
 * LCZero is learning by Gary, CCC, January 30, 2018
 * LCZero update by Gary, CCC, March 14, 2018
 * LCZero update (2) by Rein Halbersma, CCC, March 25, 2018


 * LCZero: Progress and Scaling. Relation to CCRL Elo by Kai Laskos, CCC, March 28, 2018 » Playing Strength
 * What does LCzero learn? by Uri Blass, CCC, April 05, 2018
 * LCZero in Aquarium / Fritz by Carl Bicknell, CCC, April 11, 2018
 * LCZero on 10x128 now by Gary, CCC, April 12, 2018
 * lczero faq by Duncan Roberts, CCC, April 13, 2018
 * LC0 - how to catch up? by Srdja Matovic, CCC, April 16, 2018
 * First Win by Leela Chess Zero against Stockfish dev by Ankan Banerjee, CCC, June 07, 2018 » Stockfish
 * LcZero and STS by Ed Schröder, CCC, June 14, 2018 » Strategic Test Suite
 * Who entered Leela into WCCC? Bad idea!! by Chris Whittington, LCZero Forum, June 23, 2018 » WCCC 2018
 * how will Leela fare at the WCCC? by dannyb, CCC, July 10, 2018 » WCCC 2018
 * Lc0 will participate at the WCCC? Wow! by Martin Renneke, LCZero Forum, July 10, 2018
 * How Leela uses history planes by Tristrom Cooke, LCZero Forum, July 19, 2018
 * Why Lc0 eval (in cp) is asymmetric against AB engines? by Kai Laskos, CCC, July 25, 2018 » Asymmetric Evaluation, Pawn Advantage, Win Percentage, and Elo
 * TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Nay Lin Tun, CCC, July 28, 2018
 * Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Gian-Carlo Pascutto, CCC, August 03, 2018


 * Has Silver written any code for "his" ZeusX? by Chris Whittington, LCZero Forum, July 31, 2018
 * Re: Has Silver written any code for "his" ZeusX? by Alexander Lyashuk, LCZero Forum, August 02, 2018


 * How good is the RTX 2080 Ti for Leela? by Hai, September 15, 2018
 * Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, September 16, 2018
 * Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, September 17, 2018
 * Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, October 28, 2018
 * Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, November 15, 2018


 * LC0 0.18rc1 released by Günther Simon, CCC, September 25, 2018
 * My non-OC RTX 2070 is very fast with Lc0 by Kai Laskos, CCC, November 19, 2018
 * 2900 Elo points progress, 10 million games, 330 nets by Kai Laskos, CCC, November 25, 2018
 * Re: 2900 Elo points progress, 10 million games, 330 nets by crem, CCC, November 25, 2018


 * Re: Alphazero news by Gian-Carlo Pascutto, CCC, December 07, 2018 » AlphaZero
 * Policy training in Alpha Zero, LC0 .. by Chris Whittington, CCC, December 18, 2018 » AlphaZero
 * LC0 using 4 x 2080 Ti GPU's on Chess.com tourney? by M. Ansari, CCC, December 28, 2018
 * Smallnet (128x10) run1 progresses remarkably well by Kai Laskos, CCC, December 19, 2018

2019

 * LCZero FAQ is missing one important fact by Jouni Uski, CCC, January 01, 2019 » GPU
 * Leela on a weak pc, question by chessico, CCC, January 09, 2019
 * Can somebody explain what makes Leela as strong as Komodo? by Chessqueen, CCC, January 16, 2019
 * A0 policy head ambiguity by Daniel Shawul, CCC, January 21, 2019 » AlphaZero
 * Schizophrenic rating model for Leela by Kai Laskos, CCC, January 21, 2019 » Match Statistics
 * Leela Zero (Lc0) - NVIDIA Geforce RTX 2060 by Andreas Strangmüller, CSS Forum, January 29, 2019
 * 11258-32x4-se distilled network released by Dietrich Kappe, CCC, February 03, 2019
 * Lc0 setup Hilfe by Clemens Keck, CSS Forum, February 07, 2019 (German)
 * Thanks for LC0 by Peter Berger, CCC, February 19, 2019
 * Re: Training the trainer: how is it done for Stockfish? by Graham Jones, CCC, March 03, 2019 » Monte-Carlo Tree Search, Stockfish
 * Lc0 51010 by Larry Kaufman, CCC, March 29, 2019
 * 32930 Boost network available by Dietrich Kappe, CCC, April 09, 2019

=Blog Posts=
 * Lessons From Implementing AlphaZero by Aditya Prasad, Oracle Blog, June 05, 2018
 * Lessons from AlphaZero: Connect Four by Aditya Prasad, Oracle Blog, June 13, 2018
 * Lessons from AlphaZero (part 3): Parameter Tweaking by Aditya Prasad, Oracle Blog, June 20, 2018
 * Lessons From AlphaZero (part 4): Improving the Training Target by Vish Abrams, Oracle Blog, June 27, 2018
 * Lessons From Alpha Zero (part 5): Performance Optimization by Anthony Young, Oracle Blog, July 03, 2018
 * Lessons From Alpha Zero (part 6) — Hyperparameter Tuning by Anthony Young, Oracle Blog, July 11, 2018


 * Test20 progress by crem, LCZero blog, August 31, 2018
 * A Standard Dataset by Error323, LCZero blog, September 12, 2018
 * Understanding Training against Q as Knowledge Distillation by Cyanogenoid, LCZero blog, October 10, 2018
 * GUIDE: Setting up Leela on a Chess GUI by Bob23, LCZero blog, September 21, 2018
 * Lc0 v0.19.0-rc1 (UPD: rc2) has been released by crem, LCZero blog, November 9, 2018
 * Lc0 v0.19.0 has been released by crem, LCZero blog, November 9, 2018
 * AlphaZero paper, and Lc0 v0.19.1 by crem, LCZero blog, December 07, 2018

=External Links=

Chess Engine

 * Leela Chess Zero from Wikipedia
 * Leela (software) from Wikipedia
 * LCZero
 * GitHub - LeelaChessZero/lczero: A chess adaption of GCP's Leela Zero
 * LCZero · GitHub
 * GitHub - LeelaChessZero/lc0: The rewritten engine, originally for tensorflow. Now all other backends have been ported here
 * Home · LeelaChessZero/lc0 Wiki · GitHub
 * FAQ · LeelaChessZero/lc0 Wiki · GitHub
 * Technical Explanation of Leela Chess Zero · LeelaChessZero/lc0 Wiki · GitHub
 * Contributors to LeelaChessZero/lc0 · GitHub
 * GitHub - mooskagh/lc0: The rewritten engine, originally for cudnn. Now all other backends have been ported here
 * Distilled Networks · dkappe/leela-chess-weights Wiki · GitHub
 * Leela Chess Zero - Blog
 * LCZero – Google Groups
 * Leela Chess Zero - Facebook
 * Leela Chess Zero (@LeelaChessZero) | Twitter
 * Leela Chess Zero: AlphaZero for the PC by Albert Silver, ChessBase News, April 26, 2018
 * Leela reacts beautifully to Stockfish's outrageous opening greed by kingscrusher, January 05, 2019, YouTube Video
 * Interview with Alexander Lyashuk about the recent success of Lc0, Chessdom, February 6, 2019 » TCEC Season 14

Misc

 * Leela from Wikipedia
 * Leela (game) from Wikipedia
 * Leela (name) from Wikipedia
 * Leela (Doctor Who) from Wikipedia
 * Leela (Futurama) from Wikipedia
 * Marc Ribot's Ceramic Dog - Lies My Body Told Me (Live on KEXP, July 20, 2016), YouTube Video

=References=