Leela Chess Zero



Lc0 logo [1]

Leela Chess Zero (LCZero, lc0) is an adaptation of Gian-Carlo Pascutto's Leela Zero Go project [2] to chess, initiated and announced by Stockfish co-author Gary Linscott, who was already responsible for the Stockfish testing framework Fishtest. Leela Chess Zero is open source, released under the terms of GPL version 3 or later, and supports UCI. The goal is to build a strong chess playing entity following the same deep learning and Monte-Carlo tree search (MCTS) techniques as AlphaZero, described in DeepMind's 2017 and 2018 papers [3] [4] [5], but using distributed training for the weights of the deep convolutional neural network (CNN, DNN, DCNN).

Since 2022, the network architecture has switched from convolutional neural networks (CNNs), as used in image recognition, to Transformers, as used in large language models [6] [7] [8].

Lc0

Leela Chess Zero consists of an executable to play or analyze games, initially dubbed LCZero, soon rewritten by a team around Alexander Lyashuk for better performance and then called Lc0 [9] [10]. This executable, the actual chess engine, performs the MCTS and reads the self-taught CNN, whose weights are persisted in a separate file. Lc0 is written in C++ (started with C++14, later upgraded to C++17) and may be compiled for various platforms and backends. Since deep CNN approaches are best suited to run massively parallel on GPUs, performing all the floating-point dot products for thousands of neurons, the preferred target platforms are Nvidia GPUs supporting the CUDA and cuDNN libraries [11]. Ankan Banerjee wrote the cuDNN backend (also shared by Deus X and Allie [12]) and the DX12 backend code. Different Lc0 backends exist for different hardware, but not all neural network architectures and features are supported on every backend; the combination of backend, network architecture, and network size determines nodes per second and playing strength. CPUs can be utilized, for example, via the BLAS and DNNL backends, and GPUs via the CUDA, cuDNN, OpenCL, DX12, Metal, ONNX, and oneDNN backends.

Description

Like AlphaZero, Lc0 evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in classical chess programs. This neural network takes the board position as input and outputs a position evaluation (QValue) and a vector of move probabilities (PValue, policy). Once trained, the network is combined with a Monte-Carlo Tree Search (MCTS), using the policy to narrow down the search to high-probability moves and using the value head, in place of rollouts, to evaluate positions in the tree. The MCTS selection is done by a variation of Rosin's UCT improvement dubbed PUCT (Predictor + UCT).
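
The PUCT selection rule can be sketched as follows. This is a minimal Python illustration of the generic formula Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)); the data layout, names, and c_puct value are assumptions for illustration, not Lc0's actual C++ implementation.

    import math

    def puct_select(children, c_puct=1.5):
        # children: list of (prior P, visit count N, total value W) per move;
        # this layout and the c_puct value are illustrative, not Lc0's internals.
        parent_visits = sum(n for _, n, _ in children)
        best_index, best_score = -1, -math.inf
        for i, (p, n, w) in enumerate(children):
            q = w / n if n > 0 else 0.0                          # mean value Q(s,a)
            u = c_puct * p * math.sqrt(parent_visits) / (1 + n)  # exploration bonus U(s,a)
            if q + u > best_score:
                best_index, best_score = i, q + u
        return best_index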

Board Representation

Lc0's color-agnostic board is represented by five bitboards (own pieces, opponent pieces, orthogonal sliding pieces, diagonal sliding pieces, and pawns, with en passant target information coded as pawns on ranks 1 and 8), two king squares, castling rights, and a flag indicating whether the board is color-flipped. Getting individual piece bitboards requires some setwise operations such as intersection and set-theoretic difference [13].
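
As an illustration, the individual piece bitboards can be recovered with a few such operations. The following Python sketch treats bitboards as 64-bit integers; the function and parameter names are hypothetical, see board.h [13] for the actual C++ representation.

    def piece_bitboards(ours, theirs, orth_sliders, diag_sliders, pawns,
                        our_king_sq, their_king_sq):
        # Bitboards are 64-bit integers, one bit per square (a1 = bit 0).
        RANKS_1_AND_8 = 0xFF000000000000FF        # en passant marker squares
        kings   = (1 << our_king_sq) | (1 << their_king_sq)
        queens  = orth_sliders & diag_sliders     # intersection: slides both ways
        rooks   = orth_sliders & ~queens          # set-theoretic difference
        bishops = diag_sliders & ~queens
        real_pawns = pawns & ~RANKS_1_AND_8       # drop en passant information
        occupied   = ours | theirs
        knights = occupied & ~(orth_sliders | diag_sliders | real_pawns | kings)
        return {"queens": queens, "rooks": rooks, "bishops": bishops,
                "pawns": real_pawns, "knights": knights, "kings": kings}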

Network

While AlphaGo used two disjoint networks for policy and value, AlphaZero, like Leela Chess Zero, shares a common "body" connected to disjoint policy and value "heads". The body operates on spatial 8x8 planes, using B residual blocks with F filters of kernel size 3x3 and stride 1. So far, model sizes FxB of 64x6, 128x10, 192x15, and 256x20 have been used.

Concerning the nodes per second of the MCTS, smaller models are faster to evaluate than larger ones. They are also faster to train and show progress earlier, but they saturate earlier, so that at some point further training no longer improves the engine. Larger and deeper network models improve the receptivity, that is, the amount of knowledge and patterns extracted from the training samples, with the potential for a stronger engine. As a further improvement, Leela Chess Zero applies the Squeeze-and-Excitation (SE) extension to the residual block architecture [14] [15]. The body is connected to both the policy "head" for the move probability distribution and the value "head" for the evaluation score, i.e. the winning probability, of the current position; the input planes cover the current position and up to seven predecessor positions.
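
The following NumPy sketch shows the shape of one SE residual block as described above; the convolutions are passed in as callables, and all names and shapes are illustrative assumptions rather than code from the Lc0 sources.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0)

    def se_residual_block(x, conv1, conv2, w1, b1, w2, b2):
        # x: (F, 8, 8) feature planes; conv1/conv2: callables implementing
        # 3x3, stride-1 convolutions from F to F channels (weights omitted).
        F = x.shape[0]
        y = relu(conv1(x))
        y = conv2(y)
        s = y.mean(axis=(1, 2))                  # squeeze: global average pool, (F,)
        h = relu(s @ w1 + b1)                    # bottleneck fully connected layer
        z = h @ w2 + b2                          # excite: 2*F outputs in Lc0's variant [14]
        scale = 1.0 / (1.0 + np.exp(-z[:F]))     # per-channel sigmoid gate
        bias  = z[F:]                            # per-channel additive bias
        y = y * scale[:, None, None] + bias[:, None, None]
        return relu(y + x)                       # residual connection + ReLU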

Training

As in AlphaZero, the Zero suffix implies no initial knowledge other than the rules of the game: a superhuman player is built starting from truly random self-play games, applying reinforcement learning based on the outcomes of those games. However, there are derived approaches, such as Albert Silver's Deus X, which try to take a shortcut by initially using supervised learning techniques, for instance feeding in high quality games played by other strong chess playing entities, or huge records of positions with a given preferred move. The training of the NN minimizes the mean squared error loss of the value output and the cross-entropy loss of the policy output, together with an L2 regularization term over the network weights. Further, there are experiments to train the value head not against the game outcome, but against the accumulated value for a position after exploring some number of nodes with UCT [16].
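
Stated explicitly, a training step minimizes an AlphaZero-style loss. The following NumPy sketch is an illustration under these assumptions; the names and the regularization constant c are not taken from the lczero-training code.

    import numpy as np

    def training_loss(policy_logits, pi_target, v_pred, z_outcome, weights, c=1e-4):
        # pi_target: MCTS visit-count distributions from self-play;
        # z_outcome: game results in {-1, 0, +1} from the side to move's view.
        e = np.exp(policy_logits - policy_logits.max(axis=-1, keepdims=True))
        p = e / e.sum(axis=-1, keepdims=True)                  # softmax over moves
        policy_loss = -(pi_target * np.log(p + 1e-9)).sum(axis=-1).mean()
        value_loss = np.mean((z_outcome - v_pred) ** 2)        # MSE of value head
        l2 = c * sum(np.sum(w ** 2) for w in weights)          # L2 regularization
        return value_loss + policy_loss + l2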

The distributed training is realized with a sophisticated client-server model. The client, written entirely in the Go programming language, incorporates Lc0 to produce self-play games. Controlled by the server, the client downloads the latest network, plays self-play games, and uploads them to the server, which in turn regularly produces and distributes new neural network weights once a certain number of games from contributors is available. The training software consists of Python code; the pipeline requires NumPy and TensorFlow running on Linux [17]. The server is written in Go, along with Python and shell scripts.
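
The real client is written in Go; the following Python sketch merely illustrates the shape of one client iteration, with a placeholder server URL and hypothetical endpoint and file names.

    import subprocess
    import requests

    SERVER = "https://training.example.org"   # placeholder, not the real server

    def client_iteration():
        # 1. Fetch the latest network weights (endpoint name is hypothetical).
        weights = requests.get(f"{SERVER}/latest_network").content
        with open("latest.pb.gz", "wb") as f:
            f.write(weights)
        # 2. Produce self-play training games with the bundled lc0 engine
        #    (flags abbreviated; see the lc0 documentation for the real ones).
        subprocess.run(["lc0", "selfplay", "--weights=latest.pb.gz"], check=True)
        # 3. Upload the produced games (file name and endpoint are hypothetical).
        with open("games.bin", "rb") as f:
            requests.post(f"{SERVER}/upload_games", data=f)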

Structure Diagrams

Lc0diagram.png

Related to the TCEC clone discussions concerning Deus X and Allie aka AllieStein, Alexander Lyashuk published diagrams with all components of the affected engines. The above shows the common legend and the structure of all Leela Chess Zero components, based on the current Lc0 engine [18].

Lczero.png

The same diagram, but for the initial LCZero engine, which played TCEC Season 12 [19]

Tournament Play

First Experience

LCZero gained its first tournament experience in April 2018 at TCEC Season 12, and over the board at the WCCC 2018 in Stockholm in July 2018. It won the fourth division of TCEC Season 13 in August 2018; as Lc0, it then finished third in the third division.

Breakthrough

TCEC Season 14, from November 2018 until February 2019, became the breakthrough: Lc0 won the third, second, and first divisions to qualify for the superfinal, which it lost to Stockfish by the narrow margin of +10 =81 -9 (50½ - 49½). Runner-up again in the TCEC Season 15 premier division in April 2019, Lc0, as LCZero v0.21.1-nT40.T8.610, won the superfinal in May 2019 versus Stockfish with +14 =79 -7 (53½ - 46½) [20]. In the TCEC Season 16 premier division in September 2019, Lc0 came in third behind Stockfish and the supervised-trained AllieStein, but took revenge by winning the TCEC Season 17 premier division in spring 2020, and as LCZero v0.24-sv-t60-3010 defeated Stockfish in a thrilling superfinal in April 2020 with +17 =71 -12 (52½ - 47½) [21]. The tables turned again in TCEC Season 18, when Stockfish won the superfinal.

Release Dates

2018

2019

2020

2021

2022

2023

2024

Authors

See also

UCT
PUCT

Publications

Forum Posts

2018

Re: Announcing lczero by Daniel Shawul, CCC, January 21, 2018 » Rollout Paradigm
LCZero update (2) by Rein Halbersma, CCC, March 25, 2018
Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Gian-Carlo Pascutto, CCC, August 03, 2018
Re: Has Silver written any code for "his" ZeusX? by Alexander Lyashuk, LCZero Forum, August 02, 2018
Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, September 16, 2018
Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, September 17, 2018 [24]
Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, October 28, 2018
Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, November 15, 2018
Re: 2900 Elo points progress, 10 million games, 330 nets by crem, CCC, November 25, 2018

2019

Re: Lc0 Evaluation Explanation by Alexander Lyashuk, CCC, September 03, 2019

2020

2021

External Links

Chess Engine

GitHub

Rating Lists

ChessBase

Chessdom

Tuning

Misc


References

  1. Leela Chess Zero (@LeelaChessZero) | Twitter
  2. GitHub - gcp/leela-zero: Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper
  3. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815
  4. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, Vol. 362, No. 6419
  5. AlphaZero paper, and Lc0 v0.19.1 by crem, LCZero blog, December 07, 2018
  6. Transformer Progress from Lc0 Blog, February 28, 2024
  7. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network from arxiv.org, June 2, 2024
  8. Attention Is All You Need from arxiv.org, June 2, 2017
  9. lc0 transition · LeelaChessZero/lc0 Wiki · GitHub
  10. Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X by Gian-Carlo Pascutto, CCC, August 03, 2018
  11. NVIDIA cuDNN | NVIDIA Developer
  12. Re: My failed attempt to change TCEC NN clone rules by Adam Treat, CCC, September 19, 2019
  13. lc0/board.h at master · LeelaChessZero/lc0 · GitHub
  14. Technical Explanation of Leela Chess Zero · LeelaChessZero/lc0 Wiki · GitHub
  15. Squeeze-and-Excitation Networks – Towards Data Science by Paul-Louis Pröve, October 17, 2017
  16. Lessons From AlphaZero (part 4): Improving the Training Target by Vish Abrams, Oracle Blog, June 27, 2018
  17. GitHub - LeelaChessZero/lczero-training: For code etc relating to the network training process.
  18. My failed attempt to change TCEC NN clone rules by Alexander Lyashuk, CCC, September 14, 2019 » TCEC
  19. My failed attempt to change TCEC NN clone rules by Alexander Lyashuk, CCC, September 14, 2019 » TCEC
  20. Lc0 won TCEC 15 by crem, LCZero blog, May 28, 2019
  21. TCEC S17 Super Final report by @glbchess64, LCZero blog, April 21, 2020
  22. Book about Neural Networks for Chess by dkl, CCC, September 29, 2021
  23. GeForce 20 series from Wikipedia
  24. Multiply–accumulate operation - Wikipedia
  25. GeForce RTX 2070 Graphics Card | NVIDIA
  26. GitHub - kiudee/chess-tuning-tools by Karlson Pfannschmidt
  27. Fat Fritz 1.1 update and a small gift by Albert Silver. ChessBase News, March 05, 2020
  28. Welcome to Chess Tuning Tools’s documentation!
