Leela Chess Zero

=Lc0=
Leela Chess Zero consists of an executable to play or analyze [[Chess Game|games]], initially dubbed '''LCZero''', soon rewritten by a team around [[Alexander Lyashuk]] for better performance and then called '''Lc0''' <ref>[https://github.com/LeelaChessZero/lc0/wiki/lc0-transition lc0 transition · LeelaChessZero/lc0 Wiki · GitHub]</ref> <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=68094&start=91 Re: TCEC season 13, 2 NN engines will be participating, Leela and Deus X] by [[Gian-Carlo Pascutto]], [[CCC]], August 03, 2018</ref>. This executable, the actual chess engine, performs the [[Monte-Carlo Tree Search|MCTS]] and evaluates positions with the self-taught [[Neural Networks#Convolutional|CNN]], whose weights are stored in a separate file.
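The interplay of search and network can be illustrated with a minimal sketch of the PUCT child-selection rule used in AlphaZero-style MCTS. All names and the exploration constant are illustrative assumptions, not Lc0's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One edge of the search tree: prior P from the network's policy head,
// visit count N and accumulated value W from the search.
struct Edge {
    float prior;   // P(s,a)
    int   visits;  // N(s,a)
    float value;   // W(s,a), sum of backed-up values
};

// PUCT score: Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).
float puct(const Edge& e, int parent_visits, float c_puct) {
    float q = e.visits ? e.value / e.visits : 0.0f;
    float u = c_puct * e.prior * std::sqrt((float)parent_visits) / (1 + e.visits);
    return q + u;
}

// Descend to the child maximizing the PUCT score.
std::size_t select_child(const std::vector<Edge>& edges,
                         int parent_visits, float c_puct) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < edges.size(); ++i)
        if (puct(edges[i], parent_visits, c_puct) > puct(edges[best], parent_visits, c_puct))
            best = i;
    return best;
}
```

The exploration term lets an unvisited move with a high network prior be tried before a well-explored move with mediocre value, which is what makes the CNN's policy output steer the search.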
Lc0 is written in [[Cpp|C++]] (started with [[Cpp#14|C++14]], later upgraded to [[Cpp#17|C++17]]) and may be compiled for various platforms and backends. Since deep CNN approaches are best suited to run massively in parallel on [[GPU|GPUs]] to perform the [[Float|floating point]] [https://en.wikipedia.org/wiki/Dot_product dot products] for thousands of neurons,
the preferred target platforms are [[Nvidia]] [[GPU|GPUs]] supporting the [https://en.wikipedia.org/wiki/CUDA CUDA] and [https://en.wikipedia.org/wiki/CuDNN cuDNN] libraries <ref>[https://developer.nvidia.com/cudnn NVIDIA cuDNN | NVIDIA Developer]</ref>. [[Ankan Banerjee]] wrote the cuDNN backend code, also shared by [[Deus X]] and [[Allie]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=71822&start=48 Re: My failed attempt to change TCEC NN clone rules] by [[Adam Treat]], [[CCC]], September 19, 2019</ref>, as well as the DX12 backend. Meanwhile, different Lc0 backends exist for different hardware; not all neural network architectures and features are supported on all backends, and different backends and network architectures with different net sizes yield different nodes per second. CPUs can be utilized for example via [https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms BLAS] and DNNL, and GPUs via the CUDA, cuDNN, [[OpenCL]], DX12, Metal, ONNX, or oneDNN backends. Target systems with or without a [https://en.wikipedia.org/wiki/Video_card graphics card] (GPU) include [[Linux]], [[Mac OS]] and [[Windows]] computers as well as the [[Raspberry Pi]].
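The core operation every such backend must accelerate is the dot product of weights and activations. A naive CPU reference for one fully connected layer might look like the sketch below (an illustrative assumption; real Lc0 backends dispatch this work to CUDA, cuDNN, BLAS, and the other libraries named above):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// y = W * x + b for one dense layer: each output neuron is the
// dot product of its weight row with the input activations.
std::vector<float> dense(const std::vector<std::vector<float>>& W,
                         const std::vector<float>& b,
                         const std::vector<float>& x) {
    std::vector<float> y(b);  // start from the biases
    for (std::size_t i = 0; i < W.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += W[i][j] * x[j];  // the hot loop a GPU parallelizes
    return y;
}
```

A GPU backend evaluates thousands of such neuron dot products at once, which is why nodes per second differ so strongly between a CUDA build and a pure CPU BLAS build.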