'''[[Main Page|Home]] * [[Hardware]] * GPU'''
[[FILE:NvidiaTesla.jpg|border|right|thumb| [https://en.wikipedia.org/wiki/Nvidia_Tesla Nvidia Tesla] GPU <ref>[https://commons.wikimedia.org/wiki/File:NvidiaTesla.jpg Image] by Mahogny, February 09, 2008, [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref> ]]
'''GPU''' (Graphics Processing Unit),<br/>
a specialized processor initially intended for fast [https://en.wikipedia.org/wiki/Image_processing image processing]. GPUs may have more raw computing power than general purpose [https://en.wikipedia.org/wiki/Central_processing_unit CPUs] but need a specialized and massively parallelized way of programming. [[Leela Chess Zero]] has proven that a [[Best-First|best-first]] [[Monte-Carlo Tree Search|Monte-Carlo Tree Search]] (MCTS) with [[Deep Learning|deep learning]] methodology will work with GPU architectures.
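To give a flavour of this massively parallel programming style, here is a minimal CUDA sketch (kernel name and sizes are illustrative, and unified memory support is assumed): instead of a sequential loop, one lightweight GPU thread is launched per vector element.
<pre>
// Minimal CUDA sketch: one GPU thread per vector element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;                        // unified memory keeps the sketch short
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
</pre>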
=History=
In the 1970s and 1980s RAM was expensive and home computers used custom graphics chips to operate directly on registers/memory without a dedicated frame buffer resp. texture buffer, like [https://en.wikipedia.org/wiki/Television_Interface_Adaptor TIA] in the [[Atari 8-bit|Atari VCS]] gaming system, [https://en.wikipedia.org/wiki/CTIA_and_GTIA GTIA]+[https://en.wikipedia.org/wiki/ANTIC ANTIC] in the [[Atari 8-bit|Atari 400/800]] series, or [https://en.wikipedia.org/wiki/Original_Chip_Set#Denise Denise]+[https://en.wikipedia.org/wiki/Original_Chip_Set#Agnus Agnus] in the [[Amiga|Commodore Amiga]] series. The 1990s made 3D graphics and 3D modeling more popular, especially for video games. Cards specifically designed to accelerate 3D math, such as [https://en.wikipedia.org/wiki/IMPACT_(computer_graphics) SGI Impact] (1995) in 3D graphics workstations or [https://en.wikipedia.org/wiki/3dfx#Voodoo_Graphics_PCI 3dfx Voodoo] (1996) for playing 3D games on PCs, emerged. Some game engines could instead use the [[SIMD and SWAR Techniques|SIMD capabilities]] of CPUs such as the [[Intel]] [[MMX]] instruction set or [[AMD|AMD's]] [[X86#3DNow!|3DNow!]] for [https://en.wikipedia.org/wiki/Real-time_computer_graphics real-time rendering]. Sony's 3D-capable chip [https://en.wikipedia.org/wiki/PlayStation_technical_specifications#Graphics_processing_unit_(GPU) GTE] used in the PlayStation (1994) and Nvidia's 2D/3D combi chips like [https://en.wikipedia.org/wiki/NV1 NV1] (1995) coined the term GPU for 3D graphics hardware acceleration. With the advent of the [https://en.wikipedia.org/wiki/Unified_shader_model unified shader architecture], like in Nvidia [https://en.wikipedia.org/wiki/Tesla_(microarchitecture) Tesla] (2006), ATI/AMD [https://en.wikipedia.org/wiki/TeraScale_(microarchitecture) TeraScale] (2007) or Intel [https://en.wikipedia.org/wiki/Intel_GMA#GMA_X3000 GMA X3000] (2006), GPGPU frameworks like [https://en.wikipedia.org/wiki/CUDA CUDA] and [[OpenCL|OpenCL]] emerged and gained in popularity.

=GPU in Computer Chess=
There are four main ways to use a GPU for chess:

* As an accelerator in [[Leela_Chess_Zero|Lc0]]: run a neural network for position evaluation on the GPU.
* Offloading the search in [[Zeta|Zeta]]: run a parallel game tree search with move generation and position evaluation on the GPU.
* As a hybrid in [http://www.talkchess.com/forum3/viewtopic.php?t=64983&start=4#p729152 perft_gpu]: expand the game tree to a certain degree on the CPU and offload the sub-tree computation to the GPU.
* Neural network training, such as the [https://github.com/glinscott/nnue-pytorch Stockfish NNUE trainer in Pytorch]<ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75724 Pytorch NNUE training] by [[Gary Linscott]], [[CCC]], November 08, 2020</ref> or [https://github.com/LeelaChessZero/lczero-training Lc0 TensorFlow Training].
== Khronos OpenCL ==
[[OpenCL|OpenCL]] specified by the [https://en.wikipedia.org/wiki/Khronos_Group Khronos Group] is widely adopted across all kinds of hardware accelerators from different vendors.
* [https://www.khronos.org/registry/OpenCL/specs/opencl-1.2.pdf OpenCL 1.2 Specification]
* [https://www.khronos.org/registry/OpenCL//sdk/2.0/docs/man/xhtml/ OpenCL 2.0 Reference]
* [https://www.khronos.org/registry/OpenCL/specs/3.0-unified/pdf/ OpenCL 3.0 Specifications]

== AMD ==
[[AMD]] supports language frontends like OpenCL, HIP, C++ AMP, and OpenMP offload directives. With [https://rocmdocs.amd.com/en/latest/ ROCm] it offers its own parallel compute platform.

* [https://community.amd.com/t5/opencl/bd-p/opencl-discussions AMD OpenCL Developer Community]
* [https://rocmdocs.amd.com/en/latest/index.html AMD ROCm™ documentation]
* [https://manualzz.com/doc/o/cggy6/amd-opencl-programming-user-guide-contents AMD OpenCL Programming Guide]
* [http://developer.amd.com/wordpress/media/2013/12/AMD_OpenCL_Programming_Optimization_Guide2.pdf AMD OpenCL Optimization Guide]
* [https://gpuopen.com/amd-isa-documentation/ AMD GPU ISA documentation]

== Apple ==
Since macOS 10.14 Mojave a transition from OpenCL to [https://en.wikipedia.org/wiki/Metal_(API) Metal] is recommended by [[Apple]].
== Further ==
* [https://en.wikipedia.org/wiki/Vulkan#Planned_features Vulkan] (OpenGL successor of Khronos Group)
* [https://en.wikipedia.org/wiki/DirectCompute DirectCompute] (Microsoft)
* [https://en.wikipedia.org/wiki/C%2B%2B_AMP C++ AMP] (Microsoft)
* [https://en.wikipedia.org/wiki/OpenACC OpenACC] (offload directives)
* [https://en.wikipedia.org/wiki/OpenMP OpenMP] (offload directives)
=Hardware Model=
{| class="wikitable" style="margin:auto"
|+ Vendor Terminology
|-
! AMD Terminology !! Nvidia Terminology
|-
| Compute Unit || Streaming Multiprocessor
|-
| Stream Core || CUDA Core
|-
| Wavefront || Warp
|}
=Programming Model=
{| class="wikitable" style="margin:auto"
|+ Terminology
|-
! OpenCL Terminology !! CUDA Terminology
|-
| Kernel || Kernel
|-
| Compute Unit || Streaming Multiprocessor
|-
| Processing Element || CUDA Core
|-
| Work-Item || Thread
|-
| Work-Group || Block
|-
| NDRange || Grid
|}
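To make the terminology concrete, the following CUDA sketch (all names illustrative) computes a thread's global index from its grid and block coordinates; in OpenCL terms this is the work-item's global ID within the NDRange:
<pre>
// CUDA: grid of blocks of threads; OpenCL: NDRange of work-groups of work-items.
__global__ void whoAmI(int* out) {
    // blockIdx/blockDim/threadIdx (CUDA) correspond to
    // get_group_id/get_local_size/get_local_id (OpenCL).
    int globalId = blockIdx.x * blockDim.x + threadIdx.x;  // get_global_id(0)
    out[globalId] = globalId;
}

// Launch: 8 blocks (work-groups) of 64 threads (work-items) = NDRange of 512.
// whoAmI<<<8, 64>>>(devicePointer);
</pre>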
=Memory Model=
* __private (OpenCL) - usually registers, accessible only by a single work-item resp. thread.
* __local (OpenCL) resp. __shared__ (CUDA) - highly accelerated scratch-pad memory, shared across work-items of a work-group resp. threads of a block and designed for them to exchange data; on AMD systems there is more local "LDS" memory than even L1 cache (GCN) resp. L0 cache (RDNA).
* __constant - read-only memory.
* __global - usually VRAM, accessible by all work-items resp. threads.
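The following CUDA kernel is a toy sketch touching each of these address spaces while summing a block's elements (all names illustrative; a launch with 256 threads per block is assumed):
<pre>
// Illustrative CUDA kernel using the address spaces listed above.
__constant__ float scale;            // __constant - read-only for all threads
// host side sets it once: cudaMemcpyToSymbol(scale, &hostScale, sizeof(float));

__global__ void blockSum(const float* in, float* out) {  // in/out: __global (VRAM)
    __shared__ float tile[256];      // __local/__shared__ - per-block scratch pad
    int tid = threadIdx.x;
    float v = in[blockIdx.x * blockDim.x + tid] * scale;  // v: __private (register)
    tile[tid] = v;
    __syncthreads();                 // barrier across the work-group/block
    // Tree reduction within the block using the shared scratch pad.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) tile[tid] += tile[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tile[0];  // one result per block to __global
}
// Launch: blockSum<<<numBlocks, 256>>>(dIn, dOut);
</pre>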
==Memory Examples==
Nvidia GeForce GTX 580 (Fermi):
* 128 KiB private memory per compute unit
* 48 KiB (16 KiB) local memory per compute unit (configurable)
* 8 KiB constant cache per compute unit
* 16 KiB (48 KiB) L1 cache per compute unit (configurable)
* 768 KiB L2 cache in total
* 1.5 GiB to 3 GiB global memory
AMD Radeon HD 7970 (GCN):
* 256 KiB private memory per compute unit
* 64 KiB local memory per compute unit
* 16 KiB constant cache per four compute units
* 16 KiB L1 cache per compute unit
* 768 KiB L2 cache in total
* 3 GiB to 6 GiB global memory
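Such figures can also be queried at runtime; a small sketch using the CUDA runtime API reads the corresponding properties of device 0:
<pre>
// Query per-device memory sizes corresponding to the examples above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("registers/block  : %d (32-bit)\n", prop.regsPerBlock);     // private
    printf("shared mem/block : %zu bytes\n", prop.sharedMemPerBlock);  // local
    printf("constant memory  : %zu bytes\n", prop.totalConstMem);      // constant
    printf("L2 cache         : %d bytes\n", prop.l2CacheSize);
    printf("global memory    : %zu bytes\n", prop.totalGlobalMem);     // global
    return 0;
}
</pre>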
==Unified Memory==
Usually data has to be copied between a CPU host and a discrete GPU device, but different architectures from different vendors with different frameworks on different operating systems may offer a unified and accessible address space between CPU and GPU.

=Instruction Throughput=
GPUs are used in [https://en.wikipedia.org/wiki/High-performance_computing HPC] environments because of their good [https://en.wikipedia.org/wiki/FLOP FLOP]/Watt ratio. The instruction throughput in general depends on the architecture (like Nvidia's [https://en.wikipedia.org/wiki/Tesla_%28microarchitecture%29 Tesla], [https://en.wikipedia.org/wiki/Fermi_%28microarchitecture%29 Fermi], [https://en.wikipedia.org/wiki/Kepler_%28microarchitecture%29 Kepler], [https://en.wikipedia.org/wiki/Maxwell_%28microarchitecture%29 Maxwell] or AMD's [https://en.wikipedia.org/wiki/TeraScale_%28microarchitecture%29 TeraScale], [https://en.wikipedia.org/wiki/Graphics_Core_Next GCN], [https://en.wikipedia.org/wiki/AMD_RDNA_Architecture RDNA]), the brand (like Nvidia [https://en.wikipedia.org/wiki/GeForce GeForce], [https://en.wikipedia.org/wiki/Nvidia_Quadro Quadro], [https://en.wikipedia.org/wiki/Nvidia_Tesla Tesla] or AMD [https://en.wikipedia.org/wiki/Radeon Radeon], [https://en.wikipedia.org/wiki/Radeon_Pro Radeon Pro], [https://en.wikipedia.org/wiki/Radeon_Instinct Radeon Instinct]) and the specific model.

==Integer Instruction Throughput==
* INT32: The 32-bit integer performance can be architecture and operation dependent, and less than the 32-bit FLOP or 24-bit integer performance.
* INT64: Current GPU [https://en.wikipedia.org/wiki/Processor_register registers] and Vector-[https://en.wikipedia.org/wiki/Arithmetic_logic_unit ALUs] are 32 bit wide and have to emulate 64-bit integer operations.<ref>[https://en.wikichip.org/w/images/a/a1/vega-whitepaper.pdf AMD Vega White Paper]</ref> <ref>[https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf Nvidia Turing White Paper]</ref>
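As an illustration of the INT64 point, a 64-bit addition decomposes into two 32-bit additions plus carry handling (a minimal sketch; real compilers emit dedicated add-with-carry instructions):
<pre>
// Sketch: emulating a 64-bit add with 32-bit operations, as 32-bit GPU ALUs must.
__device__ void add64(unsigned aLo, unsigned aHi, unsigned bLo, unsigned bHi,
                      unsigned* rLo, unsigned* rHi) {
    unsigned lo = aLo + bLo;
    unsigned carry = (lo < aLo) ? 1u : 0u;  // unsigned wrap-around signals a carry
    *rLo = lo;
    *rHi = aHi + bHi + carry;               // at least two ALU ops per 64-bit add
}
</pre>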
==Throughput Examples==
Nvidia GeForce GTX 580 (Fermi, CC 2.0) - 32-bit integer operations per clock cycle per compute unit <ref>CUDA C Programming Guide v7.0, Chapter 5.4.1. Arithmetic Instructions</ref>

Max theoretic ADD operation throughput: 32 Ops x 16 CUs x 1544 MHz = 790.528 GigaOps/sec

AMD Radeon HD 7970 (GCN) - 32-bit integer operations per clock cycle per processing element

Max theoretic ADD operation throughput: 1 Op x 2048 PEs x 925 MHz = 1894.4 GigaOps/sec
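A peak figure like the above can be derived programmatically; the sketch below uses the CUDA runtime API, with the ops-per-cycle constant being architecture dependent and taken from the vendor's programming guide:
<pre>
// Compute a theoretical peak integer-ADD throughput like the examples above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    const int opsPerCyclePerCU = 32;             // architecture-dependent constant
    double gigaOps = (double)opsPerCyclePerCU * prop.multiProcessorCount
                   * (prop.clockRate / 1e6);     // clockRate is in kHz -> GHz
    printf("%d CUs @ %.0f MHz -> %.1f GigaOps/sec\n",
           prop.multiProcessorCount, prop.clockRate / 1000.0, gigaOps);
    return 0;
}
</pre>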
=Tensors=
MMAC (matrix-multiply-accumulate) units are used in consumer brand GPUs for neural network based upsampling of video game resolutions, in professional brands for upsampling of images and videos, and in server brand GPUs for accelerating convolutional neural networks in general. Convolutions can be implemented as a series of matrix multiplications via Winograd transformations <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=66025&p=743355#p743355 Re: To TPU or not to TPU...] by [[Rémi Coulom]], [[CCC]], December 16, 2017</ref>. Mobile SoCs usually have a dedicated neural network engine as MMAC unit.
==Nvidia TensorCores==
With the Nvidia [https://en.wikipedia.org/wiki/Volta_(microarchitecture) Volta] series TensorCores were introduced. They offer FP16xFP16+FP32 matrix-multiplication-accumulate units, used to accelerate neural networks.<ref>[https://on-demand.gputechconf.com/gtc/2017/presentation/s7798-luke-durant-inside-volta.pdf INSIDE VOLTA]</ref> Turing's 2nd gen TensorCores add FP16, INT8, INT4 optimized computation.<ref>[https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/6 AnandTech - Nvidia Turing Deep Dive page 6]</ref> Ampere's 3rd gen adds support for BF16, TF32, FP64 and sparsity acceleration.<ref>[https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#Details Wikipedia - Ampere microarchitecture]</ref> Ada Lovelace's 4th gen adds support for FP8.<ref>[https://en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture) Wikipedia - Ada Lovelace microarchitecture]</ref>
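A minimal sketch of driving these units through CUDA's WMMA intrinsics (requires compute capability 7.0 or higher; the 16x16x16 tile shape is one of the shapes fixed by the API): one warp computes C = A*B + C with FP16 inputs and FP32 accumulation.
<pre>
// One warp multiplies two 16x16 FP16 tiles, accumulating into FP32 (sm_70+).
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tensorTile(const half* a, const half* b, float* c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;
    wmma::fill_fragment(cFrag, 0.0f);            // C = 0
    wmma::load_matrix_sync(aFrag, a, 16);        // leading dimension 16
    wmma::load_matrix_sync(bFrag, b, 16);
    wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);  // C = A*B + C on TensorCores
    wmma::store_matrix_sync(c, cFrag, 16, wmma::mem_row_major);
}
// Launch with one warp: tensorTile<<<1, 32>>>(dA, dB, dC);
</pre>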
==Intel XMX Cores==
Intel added XMX, Xe Matrix eXtensions, cores to some of the [https://en.wikipedia.org/wiki/Intel_Xe Intel Xe] GPU series, like [https://en.wikipedia.org/wiki/Intel_Arc#Alchemist Arc Alchemist] and the [https://www.intel.com/content/www/us/en/products/sku/232876/intel-data-center-gpu-max-1100/specifications.html Intel Data Center GPU Max Series].
=Host-Device Latencies=
One reason GPUs are not used as accelerators for chess engines is the host-device latency, aka kernel-launch-overhead. Nvidia and AMD have not published official numbers, but in practice there is a measurable latency for null-kernels of 5 microseconds <ref>[https://devtalk.nvidia.com/default/topic/1047965/cuda-programming-and-performance/host-device-latencies-/post/5318041/#5318041 host-device latencies?] by [[Srdja Matovic]], Nvidia CUDA ZONE, Feb 28, 2019</ref> up to 100s of microseconds <ref>[https://community.amd.com/thread/237337#comment-2902071 host-device latencies?] by [[Srdja Matovic]], AMD Developer Community, Feb 28, 2019</ref>. One solution to overcome this limitation is to couple tasks into batches to be executed in one run <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=67347#p761239 Re: GPU ANN, how to deal with host-device latencies?] by [[Milos Stanisavljevic]], [[CCC]], May 06, 2018</ref>.
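The quoted null-kernel latencies can be reproduced with a sketch along these lines (CUDA runtime API; measured numbers will vary with driver, OS and hardware):
<pre>
// Measure the average launch+sync latency of an empty kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void nullKernel() {}

int main() {
    const int N = 1000;
    nullKernel<<<1, 1>>>();          // warm up: exclude one-time context setup
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    for (int i = 0; i < N; i++) {
        nullKernel<<<1, 1>>>();
        cudaDeviceSynchronize();     // include the host-device round trip per launch
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("avg launch+sync latency: %.2f us\n", ms * 1000.0f / N);
    return 0;
}
</pre>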
=Deep Learning=
GPUs are much more suited than CPUs to implement and train [[Neural Networks#Convolutional|Convolutional Neural Networks]] (CNN), and were therefore also responsible for the [[Deep Learning|deep learning]] boom, also affecting game playing programs combining CNN with [[Monte-Carlo Tree Search|MCTS]], as pioneered by [[Google]] [[DeepMind|DeepMind's]] [[AlphaGo]] and [[AlphaZero]] entities in [[Go]], [[Shogi]] and [[Chess]] using [https://en.wikipedia.org/wiki/Tensor_processing_unit TPUs], and the open source projects [[Leela Zero]] headed by [[Gian-Carlo Pascutto]] for [[Go]] and its [[Leela Chess Zero]] adaption.
=Architectures=
== AMD ==
AMD's line of discrete GPUs is branded as Radeon for consumer, Radeon Pro for professional and Radeon Instinct for server.
* [https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units List of AMD graphics processing units on Wikipedia]
=== CDNA3 ===
The CDNA3 HPC architecture was unveiled in December 2023, with the MI300A APU model (CPU+GPU+HBM) and the MI300X GPU model, both with multi-chip module design. It features Matrix Cores supporting a broad range of precisions (INT8, FP8, BF16, FP16, TF32, FP32, FP64) as well as sparse matrix data (sparsity), and is supported by AMD's ROCm open software stack for AMD Instinct accelerators.
* [https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf AMD CDNA3 Whitepaper]
* [https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/instruction-set-architectures/amd-instinct-mi300-cdna3-instruction-set-architecture.pdf AMD Instinct MI300/CDNA3 Instruction Set Architecture]
* [https://www.amd.com/en/developer/resources/rocm-hub.html AMD ROCm Developer Hub]
=== Navi 3x RDNA3 ===
RDNA3 architecture in Radeon RX 7000 series was announced on November 3, 2022, featuring dedicated AI tensor operation acceleration.
* [https://en.wikipedia.org/wiki/Radeon_RX_7000_series AMD Radeon RX 7000 on Wikipedia]
* [https://developer.amd.com/wp-content/resources/RDNA3_Shader_ISA_December2022.pdf RDNA3 Instruction Set Architecture]
=== CDNA2 ===
CDNA2 architecture in MI200 HPC-GPU with optimized FP64 throughput (matrix and vector), multi-chip-module design and Infinity Fabric was unveiled in November, 2021.
* [https://www.amd.com/system/files/documents/amd-cdna2-white-paper.pdf AMD CDNA2 Whitepaper]
* [https://developer.amd.com/wp-content/resources/CDNA2_Shader_ISA_4February2022.pdf CDNA2 Instruction Set Architecture]
=== CDNA ===
CDNA architecture in MI100 HPC-GPU with Matrix Cores was unveiled in November, 2020.
* [https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf AMD CDNA Whitepaper]
* [https://developer.amd.com/wp-content/resources/CDNA1_Shader_ISA_14December2020.pdf CDNA Instruction Set Architecture]
=== Navi 2x RDNA2 ===
[https://en.wikipedia.org/wiki/RDNA_(microarchitecture)#RDNA_2 RDNA2] cards were unveiled on October 28, 2020.
* [https://en.wikipedia.org/wiki/Radeon_RX_6000_series AMD Radeon RX 6000 on Wikipedia]
* [https://developer.amd.com/wp-content/resources/RDNA2_Shader_ISA_November2020.pdf RDNA 2 Instruction Set Architecture]

=== Navi RDNA 1.0 ===
[https://en.wikipedia.org/wiki/RDNA_(microarchitecture) RDNA] cards were unveiled on July 7, 2019.
* [https://www.amd.com/system/files/documents/rdna-whitepaper.pdf RDNA Whitepaper]
* [https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Architecture_public.pdf Architecture Slide Deck]
* [https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Shader_ISA_5August2019.pdf RDNA Instruction Set Architecture]
=== Vega GCN 5th gen ===
[https://en.wikipedia.org/wiki/Radeon_RX_Vega_series Vega] cards were unveiled on August 14, 2017.
* [https://www.techpowerup.com/gpu-specs/docs/amd-vega-architecture.pdf Architecture Whitepaper]
* [https://developer.amd.com/wp-content/resources/Vega_Shader_ISA_28July2017.pdf Vega Instruction Set Architecture]
=== Polaris GCN 4th gen ===
[https://en.wikipedia.org/wiki/Graphics_Core_Next#Graphics_Core_Next_4 Polaris] cards were first released in 2016.
* [https://www.amd.com/system/files/documents/polaris-whitepaper.pdf Architecture Whitepaper]
* [https://developer.amd.com/wordpress/media/2013/12/AMD_GCN3_Instruction_Set_Architecture_rev1.1.pdf GCN3/4 Instruction Set Architecture]
=== Southern Islands GCN 1st gen ===
Southern Islands cards introduced the [https://en.wikipedia.org/wiki/Graphics_Core_Next GCN] architecture in 2012.
* [https://en.wikipedia.org/wiki/Radeon_HD_7000_series AMD Radeon HD 7000 on Wikipedia]
* [https://www.amd.com/content/dam/amd/en/documents/radeon-tech-docs/programmer-references/si_programming_guide_v2.pdf Southern Islands Programming Guide]
* [https://developer.amd.com/wordpress/media/2012/12/AMD_Southern_Islands_Instruction_Set_Architecture.pdf Southern Islands Instruction Set Architecture]
== Apple ==
=== M series ===
Apple released its M series SoC (system on a chip) with integrated GPU for desktops and notebooks in 2020.
* [https://en.wikipedia.org/wiki/Apple_silicon#M_series Apple M series on Wikipedia]
== ARM ==
The ARM Mali GPU variants can be found on various systems on chips (SoCs) from different vendors. Since Midgard (2012), with its unified shader model, OpenCL support is offered.
* [https://en.wikipedia.org/wiki/Mali_(GPU)#Variants Mali variants on Wikipedia]
=== Valhall (2019) ===
* [https://developer.arm.com/documentation/101574/latest Bifrost and Valhall OpenCL Developer Guide]
=== Bifrost (2016) ===
* [https://developer.arm.com/documentation/101574/latest Bifrost and Valhall OpenCL Developer Guide]
=== Midgard (2012) ===
* [https://developer.arm.com/documentation/100614/latest Midgard OpenCL Developer Guide]
== Intel ==
* [https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen12 List of Intel Gen12 GPUs on Wikipedia]
* [https://en.wikipedia.org/wiki/Intel_Arc#Alchemist Arc Alchemist series on Wikipedia]
== Nvidia ==
Nvidia's line of discrete GPUs is branded as GeForce for consumer, Quadro for professional and Tesla for server.
* [https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units List of Nvidia graphics processing units on Wikipedia]

=== Ada Lovelace Architecture ===
* [https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf Ada GPU Whitepaper]
* [https://docs.nvidia.com/cuda/ada-tuning-guide/index.html Ada Tuning Guide]
=== Ampere Architecture ===
The [https://en.wikipedia.org/wiki/Ampere_(microarchitecture) Ampere microarchitecture] was announced on May 14, 2020 <ref>[https://devblogs.nvidia.com/nvidia-ampere-architecture-in-depth/ NVIDIA Ampere Architecture In-Depth | NVIDIA Developer Blog] by [https://people.csail.mit.edu/ronny/ Ronny Krashinsky], [https://cppcast.com/guest/ogiroux/ Olivier Giroux], [https://blogs.nvidia.com/blog/author/stephenjones/ Stephen Jones], [https://blogs.nvidia.com/blog/author/nick-stam/ Nick Stam] and [https://en.wikipedia.org/wiki/Sridhar_Ramaswamy Sridhar Ramaswamy], May 14, 2020</ref>. The Nvidia A100 GPU based on the Ampere architecture delivers a generational leap in accelerated computing in conjunction with CUDA 11 <ref>[https://devblogs.nvidia.com/cuda-11-features-revealed/ CUDA 11 Features Revealed | NVIDIA Developer Blog] by [https://devblogs.nvidia.com/author/pramarao/ Pramod Ramarao], May 14, 2020</ref>.
* [https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf Ampere GA100 Whitepaper]
* [https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf Ampere GA102 Whitepaper]
* [https://docs.nvidia.com/cuda/ampere-tuning-guide/index.html Ampere GPU Architecture Tuning Guide]
=== Turing Architecture ===
[https://en.wikipedia.org/wiki/Turing_(microarchitecture) Turing] cards were first released in 2018.
* [https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf Turing Architecture Whitepaper]
* [https://docs.nvidia.com/cuda/turing-tuning-guide/index.html Turing Tuning Guide]
=== Volta Architecture ===
[https://en.wikipedia.org/wiki/Volta_(microarchitecture) Volta] cards were released in 2017. They were the first cards to launch with TensorCores, supporting matrix multiplications to accelerate [[Neural Networks#Convolutional|convolutional neural networks]].
* [https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf Volta Architecture Whitepaper]
* [https://docs.nvidia.com/cuda/volta-tuning-guide/index.html Volta Tuning Guide]

=== Pascal Architecture ===
[https://en.wikipedia.org/wiki/Pascal_(microarchitecture) Pascal] cards were first released in 2016.
* [https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf Pascal Architecture Whitepaper]
== PowerVR ==
[https://en.wikipedia.org/wiki/PowerVR PowerVR] GPUs by Imagination Technologies are licensed as IP for system on a chip (SoC) designs.
== Qualcomm ==
Qualcomm offers Adreno GPUs in various types as a component of their Snapdragon SoCs. Since the Adreno 300 series OpenCL support is offered.
=== Adreno ===
* [https://en.wikipedia.org/wiki/Adreno#Variants Adreno variants on Wikipedia]
=Publications=
* [https://github.com/markgovett Mark Govett], [https://www.linkedin.com/in/craig-tierney-9568545 Craig Tierney], [[Jacques Middlecoff]], [https://www.researchgate.net/profile/Tom_Henderson4 Tom Henderson] ('''2009'''). ''Using Graphical Processing Units (GPUs) for Next Generation Weather and Climate Prediction Models''. [http://www.cisl.ucar.edu/dir/CAS2K9/ CAS2K9 Workshop]
* [[Hank Dietz]], [https://dblp.uni-trier.de/pers/hd/y/Young:Bobby_Dalton Bobby Dalton Young] ('''2009'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-13374-9_5 MIMD Interpretation on a GPU]''. [https://dblp.uni-trier.de/db/conf/lcpc/lcpc2009.html LCPC 2009], [http://aggregate.ee.engr.uky.edu/EXHIBITS/SC09/mogsimlcpc09final.pdf pdf], [http://aggregate.org/GPUMC/mogsimlcpc09slides.pdf slides.pdf]
* [https://dblp.uni-trier.de/pid/28/7183.html Sander van der Maar], [[Joost Batenburg]], [https://scholar.google.com/citations?user=TtXZhj8AAAAJ&hl=en Jan Sijbers] ('''2009'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-03138-0_33 Experiences with Cell-BE and GPU for Tomography]''. [https://dblp.uni-trier.de/db/conf/samos/samos2009.html#MaarBS09 SAMOS 2009] <ref>[https://en.wikipedia.org/wiki/Cell_(microprocessor) Cell (microprocessor) from Wikipedia]</ref>
==2010 ...==
* [https://www.linkedin.com/in/avi-bleiweiss-456a5644 Avi Bleiweiss] ('''2010'''). ''Playing Zero-Sum Games on the GPU''. [https://en.wikipedia.org/wiki/Nvidia NVIDIA Corporation], [http://www.nvidia.com/object/io_1269574709099.html GPU Technology Conference 2010], [http://www.nvidia.com/content/gtc-2010/pdfs/2207_gtc2010.pdf slides as pdf]
* [https://dblp.uni-trier.de/pers/hd/k/Karami:Ali Ali Karami], [[S. Ali Mirsoleimani]], [https://dblp.uni-trier.de/pers/hd/k/Khunjush:Farshad Farshad Khunjush] ('''2013'''). ''[https://ieeexplore.ieee.org/document/6714232 A statistical performance prediction model for OpenCL kernels on NVIDIA GPUs]''. [https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6708586 CADS 2013]
* [[Diego Rodríguez-Losada]], [[Pablo San Segundo]], [[Miguel Hernando]], [https://dblp.uni-trier.de/pers/hd/p/Puente:Paloma_de_la Paloma de la Puente], [https://dblp.uni-trier.de/pers/hd/v/Valero=Gomez:Alberto Alberto Valero-Gomez] ('''2013'''). ''GPU-Mapping: Robotic Map Building with Graphical Multiprocessors''. [https://dblp.uni-trier.de/db/journals/ram/ram20.html IEEE Robotics & Automation Magazine, Vol. 20, No. 2], [https://www.acin.tuwien.ac.at/fileadmin/acin/v4r/v4r/GPUMap_RAM2013.pdf pdf]
* [https://dblp.org/pid/28/977-2.html David Williams], [[Valeriu Codreanu]], [https://dblp.org/pid/88/5343-1.html Po Yang], [https://dblp.org/pid/54/784.html Baoquan Liu], [https://www.strath.ac.uk/staff/dongfengprofessor/ Feng Dong], [https://dblp.org/pid/136/5430.html Burhan Yasar], [https://scholar.google.com/citations?user=FZVGYiQAAAAJ&hl=en Babak Mahdian], [https://scholar.google.com/citations?user=8WO6cVUAAAAJ&hl=en Alessandro Chiarini], [https://zhaoxiahust.github.io/ Xia Zhao], [https://scholar.google.com/citations?user=jCFYHlkAAAAJ&hl=en Jos Roerdink] ('''2013'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-55224-3_42 Evaluation of Autoparallelization Toolkits for Commodity GPUs]''. [https://dblp.org/db/conf/ppam/ppam2013-1.html#WilliamsCYLDYMCZR13 PPAM 2013]
'''2014'''
* [https://dblp.uni-trier.de/pers/hd/d/Dang:Qingqing Qingqing Dang], [https://dblp.uni-trier.de/pers/hd/y/Yan:Shengen Shengen Yan], [[Ren Wu]] ('''2014'''). ''[https://ieeexplore.ieee.org/document/7097862 A fast integral image generation algorithm on GPUs]''. [https://dblp.uni-trier.de/db/conf/icpads/icpads2014.html ICPADS 2014]
* [[S. Ali Mirsoleimani]], [https://dblp.uni-trier.de/pers/hd/k/Karami:Ali Ali Karami], [https://dblp.uni-trier.de/pers/hd/k/Khunjush:Farshad Farshad Khunjush] ('''2014'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-04891-8_12 A Two-Tier Design Space Exploration Algorithm to Construct a GPU Performance Predictor]''. [https://dblp.uni-trier.de/db/conf/arcs/arcs2014.html ARCS 2014], [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science], Vol. 8350, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* [[Steinar H. Gunderson]] ('''2014'''). ''[https://archive.fosdem.org/2014/schedule/event/movit/ Movit: High-speed, high-quality video filters on the GPU]''. [https://en.wikipedia.org/wiki/FOSDEM FOSDEM] [https://archive.fosdem.org/2014/ 2014], [https://movit.sesse.net/movit-fosdem2014.pdf pdf]
* [https://dblp.org/pid/54/784.html Baoquan Liu], [https://scholar.google.com/citations?user=VspO6ZUAAAAJ&hl=en Alexandru Telea], [https://scholar.google.com/citations?user=jCFYHlkAAAAJ&hl=en Jos Roerdink], [https://dblp.org/pid/87/6797.html Gordon Clapworthy], [https://dblp.org/pid/28/977-2.html David Williams], [https://dblp.org/pid/88/5343-1.html Po Yang], [https://www.strath.ac.uk/staff/dongfengprofessor/ Feng Dong], [[Valeriu Codreanu]], [https://scholar.google.com/citations?user=8WO6cVUAAAAJ&hl=en Alessandro Chiarini] ('''2014'''). ''Parallel centerline extraction on the GPU''. [https://www.journals.elsevier.com/computers-and-graphics Computers & Graphics], Vol. 41, [https://strathprints.strath.ac.uk/70614/1/Liu_etal_CG2014_Parallel_centerline_extraction_GPU.pdf pdf]
==2015 ...==
* [[Peter H. Jin]], [[Kurt Keutzer]] ('''2015'''). ''Convolutional Monte Carlo Rollouts in Go''. [http://arxiv.org/abs/1512.03375 arXiv:1512.03375] » [[Deep Learning]], [[Go]], [[Monte-Carlo Tree Search|MCTS]]
* [[Liang Li]], [[Hong Liu]], [[Hao Wang]], [[Taoying Liu]], [[Wei Li]] ('''2015'''). ''[https://ieeexplore.ieee.org/document/6868996 A Parallel Algorithm for Game Tree Search Using GPGPU]''. [[IEEE#TPDS|IEEE Transactions on Parallel and Distributed Systems]], Vol. 26, No. 8 » [[Parallel Search]]
* [[Simon Portegies Zwart]], [https://github.com/jbedorf Jeroen Bédorf] ('''2015'''). ''[https://www.computer.org/csdl/magazine/co/2015/11/mco2015110050/13rRUx0Pqwe Using GPUs to Enable Simulation with Computational Gravitational Dynamics in Astrophysics]''. [[IEEE #Computer|IEEE Computer]], Vol. 48, No. 11
'''2016'''
* <span id="Astro"></span>[https://www.linkedin.com/in/sean-sheen-b99aba89 Sean Sheen] ('''2016'''). ''[https://digitalcommons.calpoly.edu/theses/1567/ Astro - A Low-Cost, Low-Power Cluster for CPU-GPU Hybrid Computing using the Jetson TK1]''. Master's thesis, [https://en.wikipedia.org/wiki/California_Polytechnic_State_University California Polytechnic State University], [https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2723&context=theses pdf] <ref>[http://www.nvidia.com/object/jetson-tk1-embedded-dev-kit.html Jetson TK1 Embedded Development Kit | NVIDIA]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61761 Jetson GPU architecture] by [[Dann Corbit]], [[CCC]], October 18, 2016</ref>
* [[Balázs Jako|Balázs Jákó]] ('''2016'''). ''[https://www.semanticscholar.org/paper/Hardware-accelerated-hybrid-rendering-on-PowerVR-J%C3%A1k%C3%B3/d9d7f5784263c5abdcd6c1bf93267e334468b9b2 Hardware accelerated hybrid rendering on PowerVR GPUs]''. <ref>[https://en.wikipedia.org/wiki/PowerVR PowerVR from Wikipedia]</ref> [[IEEE]] [https://ieeexplore.ieee.org/xpl/conhome/7547434/proceeding 20th Jubilee International Conference on Intelligent Engineering Systems]
* [[Diogo R. Ferreira]], [https://dblp.uni-trier.de/pers/hd/s/Santos:Rui_M= Rui M. Santos] ('''2016'''). ''[https://github.com/diogoff/transition-counting-gpu Parallelization of Transition Counting for Process Mining on Multi-core CPUs and GPUs]''. [https://dblp.uni-trier.de/db/conf/bpm/bpmw2016.html BPM 2016]
* [https://dblp.org/pers/hd/s/Sch=uuml=tt:Ole Ole Schütt], [https://developer.nvidia.com/blog/author/peter-messmer/ Peter Messmer], [https://scholar.google.ch/citations?user=ajbBWN0AAAAJ&hl=en Jürg Hutter], [[Joost VandeVondele]] ('''2016'''). ''[https://onlinelibrary.wiley.com/doi/10.1002/9781118670712.ch8 GPU Accelerated Sparse Matrix–Matrix Multiplication for Linear Scaling Density Functional Theory]''. [https://www.cp2k.org/_media/gpu_book_chapter_submitted.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Density_functional_theory Density functional theory from Wikipedia]</ref>
: Chapter 8 in [https://scholar.google.com/citations?user=AV307ZUAAAAJ&hl=en Ross C. Walker], [https://scholar.google.com/citations?user=PJusscIAAAAJ&hl=en Andreas W. Götz] ('''2016'''). ''[https://onlinelibrary.wiley.com/doi/book/10.1002/9781118670712 Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics]''. [https://en.wikipedia.org/wiki/Wiley_(publisher) John Wiley & Sons]
'''2017'''
* [[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815] » [[AlphaZero]]
* [[Tristan Cazenave]] ('''2017'''). ''[http://ieeexplore.ieee.org/document/7875402/ Residual Networks for Computer Go]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. PP, No. 99, [http://www.lamsade.dauphine.fr/~cazenave/papers/resnet.pdf pdf]
* [https://scholar.google.com/citations?user=zLksndkAAAAJ&hl=en Jayvant Anantpur], [https://dblp.org/pid/09/10702.html Nagendra Gulur Dwarakanath], [https://dblp.org/pid/16/4410.html Shivaram Kalyanakrishnan], [[Shalabh Bhatnagar]], [https://dblp.org/pid/45/3592.html R. Govindarajan] ('''2017'''). ''RLWS: A Reinforcement Learning based GPU Warp Scheduler''. [https://arxiv.org/abs/1712.04303 arXiv:1712.04303]
'''2018'''
* [[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2018'''). ''[http://science.sciencemag.org/content/362/6419/1140 A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play]''. [https://en.wikipedia.org/wiki/Science_(journal) Science], Vol. 362, No. 6419
* [http://www.talkchess.com/forum/viewtopic.php?t=66280 Announcing lczero] by [[Gary Linscott|Gary]], [[CCC]], January 09, 2018 » [[Leela Chess Zero]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=67347 GPU ANN, how to deal with host-device latencies?] by [[Srdja Matovic]], [[CCC]], May 06, 2018 » [[Neural Networks]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67357 GPU contention] by [[Ian Kennedy]], [[CCC]], May 07, 2018 » [[Leela Chess Zero]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=68448 How good is the RTX 2080 Ti for Leela?] by Hai, September 15, 2018 » [[Leela Chess Zero]] <ref>[https://en.wikipedia.org/wiki/GeForce_20_series GeForce 20 series from Wikipedia]</ref>
: [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=68448&start=2 Re: How good is the RTX 2080 Ti for Leela?] by [[Ankan Banerjee]], [[CCC]], September 16, 2018
'''2019'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=69447 Generate EGTB with graphics cards?] by [[Pham Hong Nguyen|Nguyen Pham]], [[CCC]], January 01, 2019 » [[Endgame Tablebases]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=69478 LCZero FAQ is missing one important fact] by [[Jouni Uski]], [[CCC]], January 01, 2019 » [[Leela Chess Zero]]
* [https://groups.google.com/d/msg/lczero/I0lTgR-fFFU/NGC3kJDzAwAJ Michael Larabel benches lc0 on various GPUs] by [[Warren D. Smith]], [[Computer Chess Forums|LCZero Forum]], January 14, 2019 » [[Leela Chess Zero#Lc0|Lc0]] <ref>[https://en.wikipedia.org/wiki/Phoronix_Test_Suite Phoronix Test Suite from Wikipedia]</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=70362 Using LC0 with one or two GPUs - a guide] by [[Srdja Matovic]], [[CCC]], March 30, 2019 » [[Leela Chess Zero#Lc0|Lc0]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=70584 Wouldn't it be nice if C++ GPU] by [[Chris Whittington]], [[CCC]], April 25, 2019 » [[Cpp|C++]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=71058 Lazy-evaluation of futures for parallel work-efficient Alpha-Beta search] by Percival Tiglao, [[CCC]], June 06, 2019
* [https://www.game-ai-forum.org/viewtopic.php?f=21&t=694 My home-made CUDA kernel for convolutions] by [[Rémi Coulom]], [[Computer Chess Forums|Game-AI Forum]], November 09, 2019 » [[Deep Learning]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72320 GPU rumors 2020] by [[Srdja Matovic]], [[CCC]], November 13, 2019
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74771 AB search with NN on GPU...] by [[Srdja Matovic]], [[CCC]], August 13, 2020 » [[Neural Networks]] <ref>[https://forums.developer.nvidia.com/t/kernel-launch-latency/62455 kernel launch latency - CUDA / CUDA Programming and Performance - NVIDIA Developer Forums] by LukeCuda, June 18, 2018</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75073 I stumbled upon this article on the new Nvidia RTX GPUs] by [[Kai Laskos]], [[CCC]], September 10, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75639 Will AMD RDNA2 based Radeon RX 6000 series kick butt with Lc0?] by [[Srdja Matovic]], [[CCC]], November 01, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76986 Zeta with NNUE on GPU?] by [[Srdja Matovic]], [[CCC]], March 31, 2021 » [[Zeta]], [[NNUE]]
* [https://talkchess.com/forum3/viewtopic.php?f=2&t=77097 GPU rumors 2021] by [[Srdja Matovic]], [[CCC]], April 16, 2021
* [https://www.talkchess.com/forum3/viewtopic.php?f=7&t=79078 Comparison of all known Sliding lookup algorithms <nowiki>[CUDA]</nowiki>] by [[Daniel Infuehr]], [[CCC]], January 08, 2022 » [[Sliding Piece Attacks]]
* [https://talkchess.com/forum3/viewtopic.php?f=7&t=72566&p=955538#p955538 Re: China boosts in silicon...] by [[Srdja Matovic]], [[CCC]], January 13, 2024
=External Links=
* [https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units General-purpose computing on graphics processing units (GPGPU) from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units List of AMD graphics processing units from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units List of Intel graphics processing units from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units List of Nvidia graphics processing units from Wikipedia]
* [https://developer.nvidia.com/ NVIDIA Developer]

=References=
<references/>