'''[[Main Page|Home]] * [[Hardware]] * GPU'''
[[FILE:NvidiaTesla.jpg|border|right|thumb| [https://en.wikipedia.org/wiki/Nvidia_Tesla Nvidia Tesla] GPU <ref>[https://commons.wikimedia.org/wiki/File:NvidiaTesla.jpg Image] by Mahogny, February 09, 2008, [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref> ]]
'''GPU''' (Graphics Processing Unit),<br/>
a specialized processor initially intended for fast [https://en.wikipedia.org/wiki/Image_processing image processing]. GPUs may have more raw computing power than general purpose [https://en.wikipedia.org/wiki/Central_processing_unit CPUs] but need a specialized and massively parallelized way of programming. [[Leela Chess Zero]] has proven that a [[Best-First|best-first]] [[Monte-Carlo Tree Search|Monte-Carlo Tree Search]] (MCTS) with [[Deep Learning|deep learning]] methodology will work with GPU architectures.
=History=
In the 1970s and 1980s RAM was expensive and home computers used custom graphics chips to operate directly on registers/memory without a dedicated frame buffer resp. texture buffer, like [https://en.wikipedia.org/wiki/Television_Interface_Adaptor TIA] in the [[Atari 8-bit|Atari VCS]] gaming system, [https://en.wikipedia.org/wiki/CTIA_and_GTIA GTIA]+[https://en.wikipedia.org/wiki/ANTIC ANTIC] in the [[Atari 8-bit|Atari 400/800]] series, or [https://en.wikipedia.org/wiki/Original_Chip_Set#Denise Denise]+[https://en.wikipedia.org/wiki/Original_Chip_Set#Agnus Agnus] in the [[Amiga|Commodore Amiga]] series. The 1990s made 3D graphics and 3D modeling more popular, especially for video games. Cards specifically designed to accelerate 3D math, such as [https://en.wikipedia.org/wiki/IMPACT_(computer_graphics) SGI Impact] (1995) in 3D graphics workstations or [https://en.wikipedia.org/wiki/3dfx#Voodoo_Graphics_PCI 3dfx Voodoo] (1996) for playing 3D games on PCs, emerged. Some game engines could instead use the [[SIMD and SWAR Techniques|SIMD capabilities]] of CPUs such as the [[Intel]] [[MMX]] instruction set or [[AMD|AMD's]] [[X86#3DNow!|3DNow!]] for [https://en.wikipedia.org/wiki/Real-time_computer_graphics real-time rendering]. Sony's 3D-capable chip [https://en.wikipedia.org/wiki/PlayStation_technical_specifications#Graphics_processing_unit_(GPU) GTE] used in the PlayStation (1994) and Nvidia's 2D/3D combi chips like [https://en.wikipedia.org/wiki/NV1 NV1] (1995) coined the term GPU for 3D graphics hardware acceleration. With the advent of the [https://en.wikipedia.org/wiki/Unified_shader_model unified shader architecture], like in Nvidia [https://en.wikipedia.org/wiki/Tesla_(microarchitecture) Tesla] (2006), ATI/AMD [https://en.wikipedia.org/wiki/TeraScale_(microarchitecture) TeraScale] (2007) or Intel [https://en.wikipedia.org/wiki/Intel_GMA#GMA_X3000 GMA X3000] (2006), GPGPU frameworks like [https://en.wikipedia.org/wiki/CUDA CUDA] and [[OpenCL|OpenCL]] emerged and gained in popularity.

=GPU in Computer Chess=

There are four main ways to use a GPU for chess:

* As an accelerator in [[Leela_Chess_Zero|Lc0]]: run a neural network for position evaluation on the GPU
* Offloading the search in [[Zeta|Zeta]]: run a parallel game tree search with move generation and position evaluation on the GPU
* As a hybrid in [http://www.talkchess.com/forum3/viewtopic.php?t=64983&start=4#p729152 perft_gpu]: expand the game tree to a certain degree on the CPU and offload the sub-trees to the GPU
* Neural network training, such as the [https://github.com/glinscott/nnue-pytorch Stockfish NNUE trainer in Pytorch]<ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75724 Pytorch NNUE training] by [[Gary Linscott]], [[CCC]], November 08, 2020</ref> or [https://github.com/LeelaChessZero/lczero-training Lc0 TensorFlow Training]
=GPU Chess Engines=
* [[:Category:GPU]]

=GPGPU=

The traditional job of a GPU is to take the [https://en.wikipedia.org/wiki/Three-dimensional_space x,y,z coordinates] of [https://en.wikipedia.org/wiki/Triangle_strip triangles], and [https://en.wikipedia.org/wiki/3D_projection map] these triangles to [https://en.wikipedia.org/wiki/Glossary_of_computer_graphics#screen_space screen space] through a [https://en.wikipedia.org/wiki/Matrix_multiplication matrix multiplication]. As video game graphics grew more sophisticated, the number of triangles per scene grew larger, and GPUs similarly grew into massively parallel behemoths capable of performing billions of transformations hundreds of times per second.

These lists of triangles were specified in graphics APIs like [https://en.wikipedia.org/wiki/DirectX DirectX]. But video game programmers demanded more flexibility from their hardware, such as lighting, transparency, and reflections. This flexibility was granted with specialized programming languages, called [https://en.wikipedia.org/wiki/Shader#Vertex_shaders vertex shaders] or [https://en.wikipedia.org/wiki/Shader#Pixel_shaders pixel shaders], whose functionality was eventually merged into "universal" shaders that can perform either vertex shading or pixel shading.
Early efforts to leverage a GPU for general-purpose computing required reformulating computational problems in terms of graphics primitives via graphics APIs like [https://en.wikipedia.org/wiki/OpenGL OpenGL] or [https://en.wikipedia.org/wiki/DirectX DirectX]. These were followed by the first GPGPU frameworks such as [https://en.wikipedia.org/wiki/Lib_Sh Sh/RapidMind] or [https://en.wikipedia.org/wiki/BrookGPU Brook], and finally [https://en.wikipedia.org/wiki/CUDA CUDA] and [[OpenCL|OpenCL]], through which the programmer can access the compute capability of today's universal shaders directly.
== Khronos OpenCL ==
[[OpenCL|OpenCL]] specified by the [https://en.wikipedia.org/wiki/Khronos_Group Khronos Group] is widely adopted across all kinds of hardware accelerators from different vendors.
* [https://www.khronos.org/conformance/adopters/conformant-products/opencl List of OpenCL Conformant Products]
* [https://www.khronos.org/registry/OpenCL/specs/opencl-1.2.pdf OpenCL 1.2 Specification]
* [https://www.khronos.org/registry/OpenCL//sdk/2.0/docs/man/xhtml/ OpenCL 2.0 Reference]
* [https://www.khronos.org/registry/OpenCL/specs/3.0-unified/pdf/ OpenCL 3.0 Specifications]
== AMD ==
[[AMD]] supports language frontends like OpenCL, HIP, C++ AMP, and OpenMP offload directives. With [https://rocmdocs.amd.com/en/latest/index.html ROCm] it offers its own parallel compute platform.
* [https://community.amd.com/t5/opencl/bd-p/opencl-discussions AMD OpenCL Developer Community]
* [https://rocmdocs.amd.com/en/latest/index.html AMD ROCm™ documentation]
* [https://manualzz.com/doc/o/cggy6/amd-opencl-programming-user-guide-contents AMD OpenCL Programming Guide]
* [http://developer.amd.com/wordpress/media/2013/12/AMD_OpenCL_Programming_Optimization_Guide2.pdf AMD OpenCL Optimization Guide]
* [https://gpuopen.com/amd-isa-documentation/ AMD GPU ISA documentation]
== Apple ==

Since macOS 10.14 Mojave a transition from OpenCL to [https://en.wikipedia.org/wiki/Metal_(API) Metal] is recommended by [https://en.wikipedia.org/wiki/Apple Apple].
* [https://developer.apple.com/opencl/ Apple OpenCL Developer]
* [https://developer.apple.com/metal/ Apple Metal Developer]
* [https://developer.apple.com/library/archive/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Introduction/Introduction.html Apple Metal Programming Guide]
* [https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf Metal Shading Language Specification]
== Intel ==

Intel supports OpenCL with implementations like BEIGNET and NEO for different GPU architectures and the [https://en.wikipedia.org/wiki/OneAPI_(compute_acceleration) oneAPI] platform with [https://en.wikipedia.org/wiki/DPC++ DPC++] as frontend language.
* [https://www.intel.com/content/www/us/en/developer/overview.html#gs.pu62bi Intel Developer Zone]
* [https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top.html Intel oneAPI Programming Guide]

== Nvidia ==

[https://en.wikipedia.org/wiki/CUDA CUDA] is the parallel computing platform by [[Nvidia]]. It supports language frontends like C, C++, Fortran, OpenCL and offload directives via [https://en.wikipedia.org/wiki/OpenACC OpenACC] and [https://en.wikipedia.org/wiki/OpenMP OpenMP]. The CUDA C++ compiler is based on [https://en.wikipedia.org/wiki/LLVM LLVM]/[https://en.wikipedia.org/wiki/Clang clang] and compiles into an assembly-like intermediate language called [https://en.wikipedia.org/wiki/Parallel_Thread_Execution PTX], which the device driver compiles down to the final machine code (SASS). PTX stays portable between Nvidia GPUs, while the SASS assembly language may change as new GPUs are released. A defining feature of CUDA is its "single source" C++ compiler: the same compiler handles both CPU host-code and GPU device-code, so data structures and even pointers can be shared directly with GPU code.

* [https://developer.nvidia.com/cuda-zone Nvidia CUDA Zone]
* [https://docs.nvidia.com/cuda/parallel-thread-execution/index.html Nvidia PTX ISA]
* [https://docs.nvidia.com/cuda/index.html Nvidia CUDA Toolkit Documentation]
* [https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html Nvidia CUDA C++ Programming Guide]
* [https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html Nvidia CUDA C++ Best Practices Guide]

== Further ==

* [https://en.wikipedia.org/wiki/Vulkan#Planned_features Vulkan] (OpenGL successor of Khronos Group)
* [https://en.wikipedia.org/wiki/DirectCompute DirectCompute] (Microsoft)
* [https://en.wikipedia.org/wiki/C%2B%2B_AMP C++ AMP] (Microsoft)
* [https://en.wikipedia.org/wiki/OpenACC OpenACC] (offload directives)
* [https://en.wikipedia.org/wiki/OpenMP OpenMP] (offload directives)
=Hardware Model=
A common scheme on GPUs with unified shader architecture is to run multiple threads in [https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads SIMT] fashion and a multitude of SIMT waves on the same [https://en.wikipedia.org/wiki/SIMD SIMD] unit to hide memory latencies. Multiple processing elements (GPU cores) are members of a SIMD unit, multiple SIMD units are coupled to a compute unit, with up to hundreds of compute units present on a discrete GPU. The actual SIMD units may have architecture-dependent numbers of cores (SIMD8, SIMD16, SIMD32) and different computation abilities - floating-point and/or integer with specific bit-width of the FPU/ALU and registers. There is a difference between a vector processor with variable bit-width and SIMD units with fixed bit-width cores. Different architecture white papers from different vendors leave room for speculation about the concrete underlying hardware implementation and the concrete classification as [https://en.wikipedia.org/wiki/Flynn%27s_taxonomy hardware architecture]. Scalar units present in the compute unit perform special functions the SIMD units are not capable of, and MMAC units (matrix-multiply-accumulate units) are used to speed up neural networks further.
{| class="wikitable" style="margin:auto"
|+ Vendor Terminology
|-
! AMD Terminology !! Nvidia Terminology
|-
| Compute Unit || Streaming Multiprocessor
|-
| Stream Core || CUDA Core
|-
| Wavefront || Warp
|}
===Hardware Examples===
Nvidia GeForce GTX 580 ([https://en.wikipedia.org/wiki/Fermi_%28microarchitecture%29 Fermi]) <ref>[https://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf Fermi white paper from Nvidia]</ref><ref>[https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_500_series GeForce 500 series on Wikipedia]</ref>
* 512 CUDA cores @1.544GHz
* 16 SMs - Streaming Multiprocessors
* organized in 2x16 CUDA cores per SM
* Warp size of 32 threads
AMD Radeon HD 7970 ([https://en.wikipedia.org/wiki/Graphics_Core_Next GCN])<ref>[https://en.wikipedia.org/wiki/Graphics_Core_Next Graphics Core Next on Wikipedia]</ref><ref>[https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_HD_7000_series Radeon HD 7000 series on Wikipedia]</ref>
* 2048 Stream cores @0.925GHz
* 32 Compute Units
* organized in 4xSIMD16, each SIMT4, per Compute Unit
* Wavefront size of 64 work-items
===Wavefront and Warp===
Generalized, the Wavefront resp. Warp size is the number of threads executed in SIMT fashion on a GPU with unified shader architecture. All threads within a warp resp. wavefront share a single instruction pointer: if a branch diverges within such a gang, both paths are executed one after another with the non-participating threads masked out via a predicate register resp. execution mask, so heavily divergent code can leave many hardware threads effectively idle.
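A minimal CUDA sketch of this behavior (doA and doB are placeholder device functions, not part of any API): although only thread 0 takes the first branch, the whole warp steps through both branches in turn.

 // CUDA sketch: branch divergence within a warp of 32 threads
 __device__ int doA() { return 1; }  // placeholder work
 __device__ int doB() { return 2; }  // placeholder work
 __global__ void divergent(int *out)
 {
   int v;                            // private per-thread register
   if (threadIdx.x == 0)
     v = doA();  // threads 1..31 are masked out, but step through with thread 0
   else
     v = doB();  // now thread 0 is masked out
   out[threadIdx.x] = v;             // the warp re-converges after the branch
 }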
=Programming Model=
A [https://en.wikipedia.org/wiki/Parallel_programming_model parallel programming model] for GPGPU can be [https://en.wikipedia.org/wiki/Data_parallelism data-parallel], [https://en.wikipedia.org/wiki/Task_parallelism task-parallel], a mixture of both, or with libraries and offload-directives also [https://en.wikipedia.org/wiki/Implicit_parallelism implicitly-parallel]. Single GPU threads (work-items in OpenCL) contain the kernel to be computed and are coupled to a work-group; one or multiple work-groups form the NDRange to be executed on the GPU device. All threads of a kernel execute the same code and are distinguished only by an identifier (threadIdx in CUDA, local resp. global id in OpenCL) used to select different data. The members of a work-group execute the same kernel, can usually be synchronized cheaply (a work-group barrier is a single instruction on both Nvidia PTX and AMD GCN) and have access to the same scratch-pad memory, with an architecture limit of how many work-items a work-group can hold and how many threads can run in total concurrently on the device.
{| class="wikitable" style="margin:auto"
|+ Terminology
|-
! OpenCL Terminology !! CUDA Terminology
|-
| Kernel || Kernel
|-
| Compute Unit || Streaming Multiprocessor
|-
| Processing Element || CUDA Core
|-
| Work-Item || Thread
|-
| Work-Group || Block
|-
| NDRange || Grid
|}
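As an illustration, a minimal CUDA vector addition (a hypothetical example, not taken from any particular engine): every thread resp. work-item executes the same kernel and selects its data via its identifier, while 256-thread blocks resp. work-groups form the grid resp. NDRange.

 #include <cstdio>
 // Each thread computes one element; its global id is derived from
 // the block (work-group) index and the thread (work-item) index.
 __global__ void vecAdd(const float *a, const float *b, float *c, int n)
 {
   int i = blockIdx.x * blockDim.x + threadIdx.x;  // global id
   if (i < n) c[i] = a[i] + b[i];
 }
 int main()
 {
   const int n = 1 << 20;
   float *a, *b, *c;
   cudaMallocManaged(&a, n * sizeof(float));  // unified memory for brevity
   cudaMallocManaged(&b, n * sizeof(float));
   cudaMallocManaged(&c, n * sizeof(float));
   for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
   int block = 256;                      // work-group size
   int grid  = (n + block - 1) / block;  // work-groups in the NDRange
   vecAdd<<<grid, block>>>(a, b, c, n);  // launch the kernel on the device
   cudaDeviceSynchronize();              // host waits for the GPU
   printf("c[0] = %f\n", c[0]);          // prints 3.000000
   cudaFree(a); cudaFree(b); cudaFree(c);
   return 0;
 }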
For example, [[Leela Zero]] schedules an NDRange for each [https://github.com/leela-zero/leela-zero/blob/next/src/kernels/convolve1.opencl convolve operation], as well as merge and other primitives; the convolve operation runs over a 3-dimensional NDRange for <channel, output, row_batch>, and to build up a full CNN computation the CPU schedules the different kernels - convolve, merge, transform and more - onto the GPU.

==Thread Examples==
Nvidia GeForce GTX 580 (Fermi, CC2) <ref>[https://en.wikipedia.org/wiki/CUDA#Technical_Specification CUDA Technical_Specification on Wikipedia]</ref>
* Warp size: 32
* Maximum number of threads per block: 1024
* Maximum number of resident blocks per multiprocessor: 32
* Maximum number of resident warps per multiprocessor: 64
* Maximum number of resident threads per multiprocessor: 2048
AMD Radeon HD 7970 (GCN) <ref>[https://www.olcf.ornl.gov/wp-content/uploads/2019/10/ORNL_Application_Readiness_Workshop-AMD_GPU_Basics.pdf AMD GPU Hardware Basics]</ref>
* Wavefront size: 64
* Maximum number of work-items per work-group: 1024
* Maximum number of work-groups per compute unit: 40
* Maximum number of Wavefronts per compute unit: 40
* Maximum number of work-items per compute unit: 2560
=Memory Model=
OpenCL offers the following memory model for the programmer:
* __private - usually registers, accessible only by a single work-item resp. thread.
* __local - scratch-pad memory shared across work-items of a work-group resp. threads of a block.
* __constant - read-only memory, cached and broadcast efficiently when all threads of a warp resp. wavefront read the same location.
* __global - usually high-bandwidth VRAM (GDDR resp. HBM), accessible by all work-items resp. threads.
{| class="wikitable" style="margin:auto"
|+ Terminology
|-
! OpenCL Terminology !! CUDA Terminology
|-
| Private Memory || Registers
|-
| Local Memory || Shared Memory
|-
| Constant Memory || Constant Memory
|-
| Global Memory || Global Memory
|}
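A minimal CUDA sketch of these memory spaces (a hypothetical block-wise sum, to be launched with 256 threads per block): registers as private memory, __shared__ as the __local scratch-pad, __constant__ as read-only memory, and VRAM pointers as global memory.

 __constant__ float scale = 2.0f;  // constant memory: read-only for the kernel
 // Sums 256 scaled inputs per block; launch with 256 threads per block.
 __global__ void blockSum(const float *in, float *out)  // in/out: global memory
 {
   __shared__ float buf[256];      // local (shared) scratch-pad of the work-group
   int tid = threadIdx.x;          // private memory (registers)
   buf[tid] = in[blockIdx.x * blockDim.x + tid] * scale;
   __syncthreads();                // work-group barrier
   for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // parallel reduction
     if (tid < s) buf[tid] += buf[tid + s];
     __syncthreads();
   }
   if (tid == 0) out[blockIdx.x] = buf[0];  // one result per work-group
 }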
===Memory Examples===
Here the data for the Nvidia GeForce GTX 580 ([https://en.wikipedia.org/wiki/Fermi_%28microarchitecture%29 Fermi]) as an example: <ref>CUDA C Programming Guide v7.0, Appendix G. COMPUTE CAPABILITIES</ref>
* 128 KiB private memory per compute unit
* 48 KiB (16 KiB) local memory per compute unit (configurable)
* 8 KiB constant cache per compute unit
* 16 KiB (48 KiB) L1 cache per compute unit (configurable)
* 768 KiB L2 cache in total
* 1.5 GiB to 3 GiB global memory
Here the data for the AMD Radeon HD 7970 ([https://en.wikipedia.org/wiki/Graphics_Core_Next GCN]) as an example: <ref>AMD Accelerated Parallel Processing OpenCL Programming Guide rev2.7, Appendix D Device Parameters, Table D.1 Parameters for 7xxx Devices</ref>
* 256 KiB private memory per compute unit
* 64 KiB local memory per compute unit
* 16 KiB constant cache per four compute units
* 16 KiB L1 cache per compute unit
* 768 KiB L2 cache in total
* 3 GiB to 6 GiB global memory
==Unified Memory==

Usually data has to be copied between a CPU host and a discrete GPU device, but different architectures from different vendors with different frameworks on different operating systems may offer a unified and accessible address space between CPU and GPU.

=Instruction Throughput=

GPUs are used in [https://en.wikipedia.org/wiki/High-performance_computing HPC] environments because of their good [https://en.wikipedia.org/wiki/FLOP FLOP]/Watt ratio. The instruction throughput in general depends on the architecture (like Nvidia's [https://en.wikipedia.org/wiki/Tesla_%28microarchitecture%29 Tesla], [https://en.wikipedia.org/wiki/Fermi_%28microarchitecture%29 Fermi], [https://en.wikipedia.org/wiki/Kepler_%28microarchitecture%29 Kepler], [https://en.wikipedia.org/wiki/Maxwell_%28microarchitecture%29 Maxwell] or AMD's [https://en.wikipedia.org/wiki/TeraScale_%28microarchitecture%29 TeraScale], [https://en.wikipedia.org/wiki/Graphics_Core_Next GCN], [https://en.wikipedia.org/wiki/AMD_RDNA_Architecture RDNA]), the brand (like Nvidia [https://en.wikipedia.org/wiki/GeForce GeForce], [https://en.wikipedia.org/wiki/Nvidia_Quadro Quadro], [https://en.wikipedia.org/wiki/Nvidia_Tesla Tesla] or AMD [https://en.wikipedia.org/wiki/Radeon Radeon], [https://en.wikipedia.org/wiki/Radeon_Pro Radeon Pro], [https://en.wikipedia.org/wiki/Radeon_Instinct Radeon Instinct]) and the specific model.

==Integer Instruction Throughput==

* INT32: Depending on architecture and operation, the 32-bit integer throughput can be lower than the 32-bit floating-point or 24-bit integer throughput.
* INT64: In general [https://en.wikipedia.org/wiki/Processor_register registers] and vector [https://en.wikipedia.org/wiki/Arithmetic_logic_unit ALUs] of consumer brand GPUs are 32-bit wide and have to emulate 64-bit integer operations.
* INT8: Some architectures offer higher throughput with lower precision, quadrupling the INT8 resp. octupling the INT4 throughput.
==Floating-Point Instruction Throughput==
* FP32: Consumer GPU performance is usually measured in single-precision (32-bit) floating-point FMA (fused-multiply-add) throughput.
* FP64: Consumer GPUs have in general a lower ratio (FP32:FP64) for double-precision (64-bit) floating-point throughput than server brand GPUs.
* FP16: Some GPGPU architectures offer half-precision (16-bit) floating-point throughput with an FP32:FP16 ratio of 1:2.
==Throughput Examples==

Nvidia GeForce GTX 580 (Fermi, CC 2.0) - 32-bit integer operations/clock cycle per compute unit <ref>CUDA C Programming Guide v7.0, Chapter 5.4.1. Arithmetic Instructions</ref>
 MAD 16
 MUL 16
 ADD 32
 Bit-shift 16
 Bitwise XOR 32

Max theoretic ADD operation throughput: 32 Ops x 16 CUs x 1544 MHz = 790.528 GigaOps/sec
AMD Radeon HD 7970 (GCN 1.0) - 32-bit integer operations/clock cycle per processing element <ref>AMD_OpenCL_Programming_Optimization_Guide.pdf 3.0beta, Chapter 2.7.1 Instruction Bandwidths</ref>
 MAD 1/4
 MUL 1/4
 ADD 1
 Bit-shift 1
 Bitwise XOR 1
Max theoretic ADD operation throughput: 1 Op x 2048 PEs x 925 MHz = 1894.4 GigaOps/sec
=Tensors=

MMAC (matrix-multiply-accumulate) units are used in consumer brand GPUs for neural network based upsampling of video game resolutions, in professional brands for upsampling of images and videos, and in server brand GPUs for accelerating convolutional neural networks in general. Convolutions can be implemented as a series of matrix-multiplications via Winograd-transformations <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=66025&p=743355#p743355 Re: To TPU or not to TPU...] by [[Rémi Coulom]], [[CCC]], December 16, 2017</ref>. Mobile SoCs usually have a dedicated neural network engine as MMAC unit.
==Nvidia TensorCores==
: With the Nvidia [https://en.wikipedia.org/wiki/Volta_(microarchitecture) Volta] series TensorCores were introduced. They offer FP16xFP16+FP32 matrix-multiplication-accumulate units, used to accelerate neural networks.<ref>[https://on-demand.gputechconf.com/gtc/2017/presentation/s7798-luke-durant-inside-volta.pdf INSIDE VOLTA]</ref> Turing's 2nd gen TensorCores add FP16, INT8, INT4 optimized computation.<ref>[https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/6 AnandTech - Nvidia Turing Deep Dive page 6]</ref> Ampere's 3rd gen adds support for BF16, TF32, FP64 and sparsity acceleration.<ref>[https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#Details Wikipedia - Ampere microarchitecture]</ref> Ada Lovelace's 4th gen adds support for FP8.<ref>[https://en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture) Wikipedia - Ada Lovelace microarchitecture]</ref>
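A minimal CUDA sketch of one such FP16xFP16+FP32 operation via the WMMA API (assumes a TensorCore-capable GPU with compute capability 7.0+ and a launch with a single warp of 32 threads; the 16x16 tile size and layouts are illustrative):

 #include <mma.h>
 using namespace nvcuda;
 // One warp cooperatively computes C = A x B + C for 16x16 FP16 tiles.
 __global__ void wmma16x16(const half *a, const half *b, float *c)
 {
   wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
   wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
   wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;
   wmma::fill_fragment(fc, 0.0f);      // zero the FP32 accumulator
   wmma::load_matrix_sync(fa, a, 16);  // leading dimension 16
   wmma::load_matrix_sync(fb, b, 16);
   wmma::mma_sync(fc, fa, fb, fc);     // matrix-multiply-accumulate on TensorCores
   wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
 }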
==AMD Matrix Cores==
: AMD released in 2020 its server-class [https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf CDNA] architecture with Matrix Cores which support MFMA (matrix-fused-multiply-add) operations on various data types like INT8, FP16, BF16, FP32. AMD's CDNA 2 architecture adds FP64 optimized throughput for matrix operations. AMD's RDNA 3 architecture features dedicated AI tensor operation acceleration. AMD's CDNA 3 architecture adds support for FP8 and sparse matrix data (sparsity).
==Intel XMX Cores==
: Intel added XMX, Xe Matrix eXtensions, cores to some of the [https://en.wikipedia.org/wiki/Intel_Xe Intel Xe] GPU series, like [https://en.wikipedia.org/wiki/Intel_Arc#Alchemist Arc Alchemist] and the [https://www.intel.com/content/www/us/en/products/sku/232876/intel-data-center-gpu-max-1100/specifications.html Intel Data Center GPU Max Series].
=Host-Device Latencies=

One reason GPUs are not used as accelerators for chess engines is the host-device latency, aka kernel-launch-overhead. Nvidia and AMD have not published official numbers, but in practice there is a measurable latency for null-kernels of 5 microseconds <ref>[https://devtalk.nvidia.com/default/topic/1047965/cuda-programming-and-performance/host-device-latencies-/post/5318041/#5318041 host-device latencies?] by [[Srdja Matovic]], Nvidia CUDA ZONE, Feb 28, 2019</ref> up to 100s of microseconds <ref>[https://community.amd.com/thread/237337#comment-2902071 host-device latencies?] by [[Srdja Matovic]], AMD Developer Community, Feb 28, 2019</ref>. One solution to overcome this limitation is to couple tasks into batches to be executed in one run <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=67347#p761239 Re: GPU ANN, how to deal with host-device latencies?] by [[Milos Stanisavljevic]], [[CCC]], May 06, 2018</ref>.
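Such overhead can be measured with a simple loop over an empty kernel, as in the following sketch (numbers vary with driver, operating system and hardware):

 #include <chrono>
 #include <cstdio>
 __global__ void nullKernel() {}  // empty kernel: measures pure launch overhead
 int main()
 {
   nullKernel<<<1, 1>>>();        // warm-up launch
   cudaDeviceSynchronize();
   const int runs = 1000;
   auto t0 = std::chrono::steady_clock::now();
   for (int i = 0; i < runs; i++) {
     nullKernel<<<1, 1>>>();
     cudaDeviceSynchronize();     // host waits for each round trip
   }
   auto t1 = std::chrono::steady_clock::now();
   double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
   printf("average null-kernel round trip: %.2f microseconds\n", us / runs);
   return 0;
 }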
=Deep Learning=

GPUs are much more suited than CPUs to implement and train [[Neural Networks#Convolutional|Convolutional Neural Networks]] (CNN), and were therefore also responsible for the [[Deep Learning|deep learning]] boom, also affecting game playing programs combining CNN with [[Monte-Carlo Tree Search|MCTS]], as pioneered by [[Google]] [[DeepMind|DeepMind's]] [[AlphaGo]] and [[AlphaZero]] entities in [[Go]], [[Shogi]] and [[Chess]] using [https://en.wikipedia.org/wiki/Tensor_processing_unit TPUs], and the open source projects [[Leela Zero]] headed by [[Gian-Carlo Pascutto]] for [[Go]] and its [[Leela Chess Zero]] adaption.
= Architectures =

The market is split into two categories, integrated and discrete GPUs, the first being the most important by quantity, the second by performance. Discrete GPUs are further divided into consumer brands for playing 3D games, professional brands for CAD/CGI programs, and server brands for big-data and number-crunching workloads, each brand offering different feature sets in drivers, VRAM, or computation abilities.
== AMD ==
AMD line of discrete GPUs is branded as Radeon for consumer, Radeon Pro for professional and Radeon Instinct for server.
 
* [https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units List of AMD graphics processing units on Wikipedia]
 
=== CDNA3 ===
The CDNA3 HPC architecture was unveiled in December 2023, with the MI300A APU model (CPU+GPU+HBM) and the MI300X GPU model, both multi-chip module designs. It features Matrix Cores with support for a broad range of precisions, such as INT8, FP8, BF16, FP16, TF32, FP32, FP64, as well as sparse matrix data (sparsity), and is supported by AMD's ROCm open software stack for AMD Instinct accelerators.
 
* [https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf AMD CDNA3 Whitepaper]
* [https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/instruction-set-architectures/amd-instinct-mi300-cdna3-instruction-set-architecture.pdf AMD Instinct MI300/CDNA3 Instruction Set Architecture]
* [https://www.amd.com/en/developer/resources/rocm-hub.html AMD ROCm Developer Hub]
 
=== Navi 3x RDNA3 ===
RDNA3 architecture in Radeon RX 7000 series was announced on November 3, 2022, featuring dedicated AI tensor operation acceleration.
 
* [https://en.wikipedia.org/wiki/Radeon_RX_7000_series AMD Radeon RX 7000 on Wikipedia]
* [https://developer.amd.com/wp-content/resources/RDNA3_Shader_ISA_December2022.pdf RDNA3 Instruction Set Architecture]
 
=== CDNA2 ===
CDNA2 architecture in MI200 HPC-GPU with optimized FP64 throughput (matrix and vector), multi-chip-module design and Infinity Fabric was unveiled in November, 2021.
 
* [https://www.amd.com/system/files/documents/amd-cdna2-white-paper.pdf AMD CDNA2 Whitepaper]
* [https://developer.amd.com/wp-content/resources/CDNA2_Shader_ISA_4February2022.pdf CDNA2 Instruction Set Architecture]
 
=== CDNA ===
CDNA architecture in MI100 HPC-GPU with Matrix Cores was unveiled in November, 2020.
 
* [https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf AMD CDNA Whitepaper]
* [https://developer.amd.com/wp-content/resources/CDNA1_Shader_ISA_14December2020.pdf CDNA Instruction Set Architecture]
 
=== Navi 2x RDNA2 ===
[https://en.wikipedia.org/wiki/RDNA_(microarchitecture)#RDNA_2 RDNA2] cards were unveiled on October 28, 2020.
* [https://en.wikipedia.org/wiki/Radeon_RX_6000_series AMD Radeon RX 6000 on Wikipedia]
* [https://developer.amd.com/wp-content/resources/RDNA2_Shader_ISA_November2020.pdf RDNA 2 Instruction Set Architecture]

=== Navi RDNA 1.0 ===

[https://en.wikipedia.org/wiki/RDNA_(microarchitecture) RDNA] cards were unveiled on July 7, 2019. RDNA marked a major change: compute units feature 2x32-wide SIMD units supporting both Wave32 and Wave64 gangs of threads, a Wave64 executing on a single SIMD unit over two clock ticks.
* [https://www.amd.com/system/files/documents/rdna-whitepaper.pdf RDNA Whitepaper]
* [https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Architecture_public.pdf Architecture Slide Deck]
* [https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Shader_ISA_5August2019.pdf RDNA Instruction Set Architecture]
 
=== Vega GCN 5th gen ===
 
[https://en.wikipedia.org/wiki/Radeon_RX_Vega_series Vega] cards were unveiled on August 14, 2017. Vega added packed FP16 instructions such as packed add, packed multiply and dot products - SIMD-within-SIMD from a programming point of view, akin to SSE or AVX on x86-64.
 
* [https://www.techpowerup.com/gpu-specs/docs/amd-vega-architecture.pdf Architecture Whitepaper]
* [https://developer.amd.com/wp-content/resources/Vega_Shader_ISA_28July2017.pdf Vega Instruction Set Architecture]
 
=== Polaris GCN 4th gen ===
 
[https://en.wikipedia.org/wiki/Graphics_Core_Next#Graphics_Core_Next_4 Polaris] cards were first released in 2016.
 
* [https://www.amd.com/system/files/documents/polaris-whitepaper.pdf Architecture Whitepaper]
* [https://developer.amd.com/wordpress/media/2013/12/AMD_GCN3_Instruction_Set_Architecture_rev1.1.pdf GCN3/4 Instruction Set Architecture]
 
=== Southern Islands GCN 1st gen ===
 
Southern Island cards introduced the [https://en.wikipedia.org/wiki/Graphics_Core_Next GCN] architecture in 2012.
 
* [https://en.wikipedia.org/wiki/Radeon_HD_7000_series AMD Radeon HD 7000 on Wikipedia]
* [https://www.amd.com/content/dam/amd/en/documents/radeon-tech-docs/programmer-references/si_programming_guide_v2.pdf Southern Islands Programming Guide]
* [https://developer.amd.com/wordpress/media/2012/12/AMD_Southern_Islands_Instruction_Set_Architecture.pdf Southern Islands Instruction Set Architecture]
 
== Apple ==
 
=== M series ===
 
Apple released its M series SoC (system on a chip) with integrated GPU for desktops and notebooks in 2020.
 
* [https://en.wikipedia.org/wiki/Apple_silicon#M_series Apple M series on Wikipedia]
 
== ARM ==
The ARM Mali GPU variants can be found on various systems on chips (SoCs) from different vendors. Since Midgard (2012), with its unified shader model, OpenCL support is offered.
 
* [https://en.wikipedia.org/wiki/Mali_(GPU)#Variants Mali variants on Wikipedia]
 
=== Valhall (2019) ===
 
* [https://developer.arm.com/documentation/101574/latest Bifrost and Valhall OpenCL Developer Guide]
 
=== Bifrost (2016) ===
* [https://developer.arm.com/documentation/101574/latest Bifrost and Valhall OpenCL Developer Guide]
=== Midgard (2012) ===

* [https://developer.arm.com/documentation/100614/latest Midgard OpenCL Developer Guide]
== Intel ==

=== Xe ===
The [https://en.wikipedia.org/wiki/Intel_Xe Intel Xe] line of GPUs (released since 2020) is divided as Xe-LP (low-power), Xe-HPG (high-performance-gaming), Xe-HP (high-performance) and Xe-HPC (high-performance-computing).
* [https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen12 List of Intel Gen12 GPUs on Wikipedia]
* [https://en.wikipedia.org/wiki/Intel_Arc#Alchemist Arc Alchemist series on Wikipedia]
== Nvidia ==

Nvidia's line of discrete GPUs is branded as GeForce for consumer, Quadro for professional and Tesla for server.
* [https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units List of Nvidia graphics processing units on Wikipedia]
=== Grace Hopper Superchip ===

The Nvidia GH200 Grace Hopper Superchip was unveiled in August 2023 and combines the Nvidia Grace CPU ([[ARM|ARM v9]]) and Nvidia Hopper GPU architectures via NVLink to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications.
* [https://resources.nvidia.com/en-us-grace-cpu/grace-hopper-superchip NVIDIA Grace Hopper Superchip Data Sheet]
* [https://resources.nvidia.com/en-us-grace-cpu/nvidia-grace-hopper NVIDIA Grace Hopper Superchip Architecture Whitepaper]
=== Ada Lovelace Architecture ===

The [https://en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture) Ada Lovelace microarchitecture] was announced on September 20, 2022, featuring 4th-generation Tensor Cores with FP8, FP16, BF16, TF32 and sparsity acceleration.
* [https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf Ada GPU Whitepaper]
* [https://docs.nvidia.com/cuda/ada-tuning-guide/index.html Ada Tuning Guide]
=== Hopper Architecture ===

The [https://en.wikipedia.org/wiki/Hopper_(microarchitecture) Hopper GPU Datacenter microarchitecture] was announced on March 22, 2022, featuring Transformer Engines for large language models.
* [https://resources.nvidia.com/en-us-tensor-core Hopper H100 Whitepaper]
* [https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html Hopper Tuning Guide]
=== Ampere Architecture ===

The [https://en.wikipedia.org/wiki/Ampere_(microarchitecture) Ampere microarchitecture] was announced on May 14, 2020 <ref>[https://devblogs.nvidia.com/nvidia-ampere-architecture-in-depth/ NVIDIA Ampere Architecture In-Depth | NVIDIA Developer Blog] by [https://people.csail.mit.edu/ronny/ Ronny Krashinsky], [https://cppcast.com/guest/ogiroux/ Olivier Giroux], [https://blogs.nvidia.com/blog/author/stephenjones/ Stephen Jones], [https://blogs.nvidia.com/blog/author/nick-stam/ Nick Stam] and [https://en.wikipedia.org/wiki/Sridhar_Ramaswamy Sridhar Ramaswamy], May 14, 2020</ref>. The Nvidia A100 GPU based on the Ampere architecture delivers a generational leap in accelerated computing in conjunction with CUDA 11 <ref>[https://devblogs.nvidia.com/cuda-11-features-revealed/ CUDA 11 Features Revealed | NVIDIA Developer Blog] by [https://devblogs.nvidia.com/author/pramarao/ Pramod Ramarao], May 14, 2020</ref>.
* [https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf Ampere GA100 Whitepaper]
* [https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf Ampere GA102 Whitepaper]
* [https://docs.nvidia.com/cuda/ampere-tuning-guide/index.html Ampere GPU Architecture Tuning Guide]
=== Turing Architecture ===

[https://en.wikipedia.org/wiki/Turing_(microarchitecture) Turing] cards were first released in 2018. They are the first consumer cores to launch with RTX, for [https://en.wikipedia.org/wiki/Ray_tracing_(graphics) raytracing] features. These are also the first consumer cards to launch with TensorCores used for matrix multiplications to accelerate [[Neural Networks#Convolutional|convolutional neural networks]]. The Turing GTX line of chips does not offer RTX or TensorCores.
* [https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf Turing Architecture Whitepaper]
* [https://docs.nvidia.com/cuda/turing-tuning-guide/index.html Turing Tuning Guide]
=== Volta Architecture ===

[https://en.wikipedia.org/wiki/Volta_(microarchitecture) Volta] cards were released in 2017. They were the first cards to launch with TensorCores, supporting matrix multiplications to accelerate [[Neural Networks#Convolutional|convolutional neural networks]].
* [https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf Volta Architecture Whitepaper]
* [https://docs.nvidia.com/cuda/volta-tuning-guide/index.html Volta Tuning Guide]
=== Pascal Architecture ===

[https://en.wikipedia.org/wiki/Pascal_(microarchitecture) Pascal] cards were first released in 2016.
* [https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf Pascal Architecture Whitepaper]
* [https://docs.nvidia.com/cuda/pascal-tuning-guide/index.html Pascal Tuning Guide]
=== Maxwell Architecture ===

[https://en.wikipedia.org/wiki/Maxwell_(microarchitecture) Maxwell] cards were first released in 2014.
* [https://web.archive.org/web/20170721113746/http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_980_Whitepaper_FINAL.PDF Maxwell Architecture Whitepaper on archive.org]
* [https://docs.nvidia.com/cuda/maxwell-tuning-guide/index.html Maxwell Tuning Guide]
== PowerVR ==

PowerVR (Imagination Technologies) licenses IP to third parties (most notably Apple) used for system on a chip (SoC) designs. Since Series5 SGX, OpenCL support via licensees is available.
=== PowerVR ===
* [https://en.wikipedia.org/wiki/PowerVR#PowerVR_Graphics PowerVR series on Wikipedia]
=== IMG ===
* [https://en.wikipedia.org/wiki/PowerVR#IMG_A-Series_(Albiorix) IMG A series on Wikipedia]
* [https://en.wikipedia.org/wiki/PowerVR#IMG_B-Series IMG B series on Wikipedia]
== Qualcomm ==

Qualcomm offers Adreno GPUs in various types as a component of their Snapdragon SoCs. Since the Adreno 300 series, OpenCL support is offered.
=== Adreno ===

* [https://en.wikipedia.org/wiki/Adreno#Variants Adreno variants on Wikipedia]
== Vivante Corporation ==

Vivante licenses IP to third parties for embedded systems; the GC series offers optional OpenCL support.
=== GC-Series ===
* [https://en.wikipedia.org/wiki/Vivante_Corporation#Products GC series on Wikipedia]
=See also=
* [[Deep Learning]]
** [[AlphaGo]]
** [[AlphaZero]]
** [[Neural Networks#Convolutional|Convolutional Neural Networks]]
** [[Leela Zero]]
** [[Leela Chess Zero]]
* [[FPGA]]
* [[Graphics Programming]]
* [[SIMD and SWAR Techniques]]
* [[Thread]]
* [[Zeta]]
=Publications=
==1990==
* [[Mathematician#GEBlelloch|Guy E. Blelloch]] ('''1990'''). ''[https://dl.acm.org/citation.cfm?id=91254 Vector Models for Data-Parallel Computing]''. [https://en.wikipedia.org/wiki/MIT_Press MIT Press], [https://www.cs.cmu.edu/~guyb/papers/Ble90.pdf pdf]
==2008 ...==
* [[Vlad Stamate]] ('''2008'''). ''Real Time Photon Mapping Approximation on the GPU''. in [http://shaderx6.com/TOC.html ShaderX6 - Advanced Rendering Techniques] <ref>[https://en.wikipedia.org/wiki/Photon_mapping Photon mapping from Wikipedia]</ref>
'''2009'''
* [[Ren Wu]], [http://www.cedar.buffalo.edu/~binzhang/ Bin Zhang], [http://www.hpl.hp.com/people/meichun_hsu/ Meichun Hsu] ('''2009'''). ''[http://portal.acm.org/citation.cfm?id=1531668 Clustering billions of data points using GPUs]''. [http://www.computingfrontiers.org/2009/ ACM International Conference on Computing Frontiers]
* [https://github.com/markgovett Mark Govett], [https://www.linkedin.com/in/craig-tierney-9568545 Craig Tierney], [[Jacques Middlecoff]], [https://www.researchgate.net/profile/Tom_Henderson4 Tom Henderson] ('''2009'''). ''Using Graphical Processing Units (GPUs) for Next Generation Weather and Climate Prediction Models''. [http://www.cisl.ucar.edu/dir/CAS2K9/ CAS2K9 Workshop]
* [[Hank Dietz]], [https://dblp.uni-trier.de/pers/hd/y/Young:Bobby_Dalton Bobby Dalton Young] ('''2009'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-13374-9_5 MIMD Interpretation on a GPU]''. [https://dblp.uni-trier.de/db/conf/lcpc/lcpc2009.html LCPC 2009], [http://aggregate.ee.engr.uky.edu/EXHIBITS/SC09/mogsimlcpc09final.pdf pdf], [http://aggregate.org/GPUMC/mogsimlcpc09slides.pdf slides.pdf]
* [https://dblp.uni-trier.de/pid/28/7183.html Sander van der Maar], [[Joost Batenburg]], [https://scholar.google.com/citations?user=TtXZhj8AAAAJ&hl=en Jan Sijbers] ('''2009'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-03138-0_33 Experiences with Cell-BE and GPU for Tomography]''. [https://dblp.uni-trier.de/db/conf/samos/samos2009.html#MaarBS09 SAMOS 2009] <ref>[https://en.wikipedia.org/wiki/Cell_(microprocessor) Cell (microprocessor) from Wikipedia]</ref>
==2010 ...==
* [https://www.linkedin.com/in/avi-bleiweiss-456a5644 Avi Bleiweiss] ('''2010'''). ''Playing Zero-Sum Games on the GPU''. [https://en.wikipedia.org/wiki/Nvidia NVIDIA Corporation], [http://www.nvidia.com/object/io_1269574709099.html GPU Technology Conference 2010], [http://www.nvidia.com/content/gtc-2010/pdfs/2207_gtc2010.pdf slides as pdf]
'''2013'''
* [https://dblp.uni-trier.de/pers/hd/k/Karami:Ali Ali Karami], [[S. Ali Mirsoleimani]], [https://dblp.uni-trier.de/pers/hd/k/Khunjush:Farshad Farshad Khunjush] ('''2013'''). ''[https://ieeexplore.ieee.org/document/6714232 A statistical performance prediction model for OpenCL kernels on NVIDIA GPUs]''. [https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6708586 CADS 2013]
* [[Diego Rodríguez-Losada]], [[Pablo San Segundo]], [[Miguel Hernando]], [https://dblp.uni-trier.de/pers/hd/p/Puente:Paloma_de_la Paloma de la Puente], [https://dblp.uni-trier.de/pers/hd/v/Valero=Gomez:Alberto Alberto Valero-Gomez] ('''2013'''). ''GPU-Mapping: Robotic Map Building with Graphical Multiprocessors''. [https://dblp.uni-trier.de/db/journals/ram/ram20.html IEEE Robotics & Automation Magazine, Vol. 20, No. 2], [https://www.acin.tuwien.ac.at/fileadmin/acin/v4r/v4r/GPUMap_RAM2013.pdf pdf]
* [https://dblp.org/pid/28/977-2.html David Williams], [[Valeriu Codreanu]], [https://dblp.org/pid/88/5343-1.html Po Yang], [https://dblp.org/pid/54/784.html Baoquan Liu], [https://www.strath.ac.uk/staff/dongfengprofessor/ Feng Dong], [https://dblp.org/pid/136/5430.html Burhan Yasar], [https://scholar.google.com/citations?user=FZVGYiQAAAAJ&hl=en Babak Mahdian], [https://scholar.google.com/citations?user=8WO6cVUAAAAJ&hl=en Alessandro Chiarini], [https://zhaoxiahust.github.io/ Xia Zhao], [https://scholar.google.com/citations?user=jCFYHlkAAAAJ&hl=en Jos Roerdink] ('''2013'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-55224-3_42 Evaluation of Autoparallelization Toolkits for Commodity GPUs]''. [https://dblp.org/db/conf/ppam/ppam2013-1.html#WilliamsCYLDYMCZR13 PPAM 2013]
'''2014'''
* [https://dblp.uni-trier.de/pers/hd/d/Dang:Qingqing Qingqing Dang], [https://dblp.uni-trier.de/pers/hd/y/Yan:Shengen Shengen Yan], [[Ren Wu]] ('''2014'''). ''[https://ieeexplore.ieee.org/document/7097862 A fast integral image generation algorithm on GPUs]''. [https://dblp.uni-trier.de/db/conf/icpads/icpads2014.html ICPADS 2014]
* [[S. Ali Mirsoleimani]], [https://dblp.uni-trier.de/pers/hd/k/Karami:Ali Ali Karami], [https://dblp.uni-trier.de/pers/hd/k/Khunjush:Farshad Farshad Khunjush] ('''2014'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-04891-8_12 A Two-Tier Design Space Exploration Algorithm to Construct a GPU Performance Predictor]''. [https://dblp.uni-trier.de/db/conf/arcs/arcs2014.html ARCS 2014], [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science], Vol. 8350, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* [[Steinar H. Gunderson]] ('''2014'''). ''[https://archive.fosdem.org/2014/schedule/event/movit/ Movit: High-speed, high-quality video filters on the GPU]''. [https://en.wikipedia.org/wiki/FOSDEM FOSDEM] [https://archive.fosdem.org/2014/ 2014], [https://movit.sesse.net/movit-fosdem2014.pdf pdf]
* [https://dblp.org/pid/54/784.html Baoquan Liu], [https://scholar.google.com/citations?user=VspO6ZUAAAAJ&hl=en Alexandru Telea], [https://scholar.google.com/citations?user=jCFYHlkAAAAJ&hl=en Jos Roerdink], [https://dblp.org/pid/87/6797.html Gordon Clapworthy], [https://dblp.org/pid/28/977-2.html David Williams], [https://dblp.org/pid/88/5343-1.html Po Yang], [https://www.strath.ac.uk/staff/dongfengprofessor/ Feng Dong], [[Valeriu Codreanu]], [https://scholar.google.com/citations?user=8WO6cVUAAAAJ&hl=en Alessandro Chiarini] ('''2014'''). ''Parallel centerline extraction on the GPU''. [https://www.journals.elsevier.com/computers-and-graphics Computers & Graphics], Vol. 41, [https://strathprints.strath.ac.uk/70614/1/Liu_etal_CG2014_Parallel_centerline_extraction_GPU.pdf pdf]
==2015 ...==
* [[Peter H. Jin]], [[Kurt Keutzer]] ('''2015'''). ''Convolutional Monte Carlo Rollouts in Go''. [http://arxiv.org/abs/1512.03375 arXiv:1512.03375] » [[Deep Learning]], [[Go]], [[Monte-Carlo Tree Search|MCTS]]
* [[Liang Li]], [[Hong Liu]], [[Hao Wang]], [[Taoying Liu]], [[Wei Li]] ('''2015'''). ''[https://ieeexplore.ieee.org/document/6868996 A Parallel Algorithm for Game Tree Search Using GPGPU]''. [[IEEE#TPDS|IEEE Transactions on Parallel and Distributed Systems]], Vol. 26, No. 8 » [[Parallel Search]]
* [[Simon Portegies Zwart]], [https://github.com/jbedorf Jeroen Bédorf] ('''2015'''). ''[https://www.computer.org/csdl/magazine/co/2015/11/mco2015110050/13rRUx0Pqwe Using GPUs to Enable Simulation with Computational Gravitational Dynamics in Astrophysics]''. [[IEEE#Computer|IEEE Computer]], Vol. 48, No. 11
'''2016'''
* <span id="Astro"></span>[https://www.linkedin.com/in/sean-sheen-b99aba89 Sean Sheen] ('''2016'''). ''[https://digitalcommons.calpoly.edu/theses/1567/ Astro - A Low-Cost, Low-Power Cluster for CPU-GPU Hybrid Computing using the Jetson TK1]''. Master's thesis, [https://en.wikipedia.org/wiki/California_Polytechnic_State_University California Polytechnic State University], [https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2723&context=theses pdf] <ref>[http://www.nvidia.com/object/jetson-tk1-embedded-dev-kit.html Jetson TK1 Embedded Development Kit | NVIDIA]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61761 Jetson GPU architecture] by [[Dann Corbit]], [[CCC]], October 18, 2016</ref>
* [[Balázs Jako|Balázs Jákó]] ('''2016'''). ''[https://www.semanticscholar.org/paper/Hardware-accelerated-hybrid-rendering-on-PowerVR-J%C3%A1k%C3%B3/d9d7f5784263c5abdcd6c1bf93267e334468b9b2 Hardware accelerated hybrid rendering on PowerVR GPUs]''. <ref>[https://en.wikipedia.org/wiki/PowerVR PowerVR from Wikipedia]</ref> [[IEEE]] [https://ieeexplore.ieee.org/xpl/conhome/7547434/proceeding 20th Jubilee International Conference on Intelligent Engineering Systems]
* [[Diogo R. Ferreira]], [https://dblp.uni-trier.de/pers/hd/s/Santos:Rui_M= Rui M. Santos] ('''2016'''). ''[https://github.com/diogoff/transition-counting-gpu Parallelization of Transition Counting for Process Mining on Multi-core CPUs and GPUs]''. [https://dblp.uni-trier.de/db/conf/bpm/bpmw2016.html BPM 2016]
* [https://dblp.org/pers/hd/s/Sch=uuml=tt:Ole Ole Schütt], [https://developer.nvidia.com/blog/author/peter-messmer/ Peter Messmer], [https://scholar.google.ch/citations?user=ajbBWN0AAAAJ&hl=en Jürg Hutter], [[Joost VandeVondele]] ('''2016'''). ''[https://onlinelibrary.wiley.com/doi/10.1002/9781118670712.ch8 GPU Accelerated Sparse Matrix–Matrix Multiplication for Linear Scaling Density Functional Theory]''. [https://www.cp2k.org/_media/gpu_book_chapter_submitted.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Density_functional_theory Density functional theory from Wikipedia]</ref>
: Chapter 8 in [https://scholar.google.com/citations?user=AV307ZUAAAAJ&hl=en Ross C. Walker], [https://scholar.google.com/citations?user=PJusscIAAAAJ&hl=en Andreas W. Götz] ('''2016'''). ''[https://onlinelibrary.wiley.com/doi/book/10.1002/9781118670712 Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics]''. [https://en.wikipedia.org/wiki/Wiley_(publisher) John Wiley & Sons]
'''2017'''
* [[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815] » [[AlphaZero]]
* [[Tristan Cazenave]] ('''2017'''). ''[http://ieeexplore.ieee.org/document/7875402/ Residual Networks for Computer Go]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. PP, No. 99, [http://www.lamsade.dauphine.fr/~cazenave/papers/resnet.pdf pdf]
* [https://scholar.google.com/citations?user=zLksndkAAAAJ&hl=en Jayvant Anantpur], [https://dblp.org/pid/09/10702.html Nagendra Gulur Dwarakanath], [https://dblp.org/pid/16/4410.html Shivaram Kalyanakrishnan], [[Shalabh Bhatnagar]], [https://dblp.org/pid/45/3592.html R. Govindarajan] ('''2017'''). ''RLWS: A Reinforcement Learning based GPU Warp Scheduler''. [https://arxiv.org/abs/1712.04303 arXiv:1712.04303]
'''2018'''
* [[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2018'''). ''[http://science.sciencemag.org/content/362/6419/1140 A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play]''. [https://en.wikipedia.org/wiki/Science_(journal) Science], Vol. 362, No. 6419
* [http://www.talkchess.com/forum/viewtopic.php?t=66280 Announcing lczero] by [[Gary Linscott|Gary]], [[CCC]], January 09, 2018 » [[Leela Chess Zero]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=67347 GPU ANN, how to deal with host-device latencies?] by [[Srdja Matovic]], [[CCC]], May 06, 2018 » [[Neural Networks]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67357 GPU contention] by [[Ian Kennedy]], [[CCC]], May 07, 2018 » [[Leela Chess Zero]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=68448 How good is the RTX 2080 Ti for Leela?] by Hai, [[CCC]], September 15, 2018 » [[Leela Chess Zero]] <ref>[https://en.wikipedia.org/wiki/GeForce_20_series GeForce 20 series from Wikipedia]</ref>
: [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=68448&start=2 Re: How good is the RTX 2080 Ti for Leela?] by [[Ankan Banerjee]], [[CCC]], September 16, 2018
'''2019'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=69447 Generate EGTB with graphics cards?] by [[Pham Hong Nguyen|Nguyen Pham]], [[CCC]], January 01, 2019 » [[Endgame Tablebases]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=69478 LCZero FAQ is missing one important fact] by [[Jouni Uski]], [[CCC]], January 01, 2019 » [[Leela Chess Zero]]
* [https://groups.google.com/d/msg/lczero/I0lTgR-fFFU/NGC3kJDzAwAJ Michael Larabel benches lc0 on various GPUs] by [[Warren D. Smith]], [[Computer Chess Forums|LCZero Forum]], January 14, 2019 » [[Leela Chess Zero#Lc0|Lc0]] <ref>[https://en.wikipedia.org/wiki/Phoronix_Test_Suite Phoronix Test Suite from Wikipedia]</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=70362 Using LC0 with one or two GPUs - a guide] by [[Srdja Matovic]], [[CCC]], March 30, 2019 » [[Leela Chess Zero#Lc0|Lc0]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=70584 Wouldn't it be nice if C++ GPU] by [[Chris Whittington]], [[CCC]], April 25, 2019 » [[Cpp|C++]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=71058 Lazy-evaluation of futures for parallel work-efficient Alpha-Beta search] by Percival Tiglao, [[CCC]], June 06, 2019
* [https://www.game-ai-forum.org/viewtopic.php?f=21&t=694 My home-made CUDA kernel for convolutions] by [[Rémi Coulom]], [[Computer Chess Forums|Game-AI Forum]], November 09, 2019 » [[Deep Learning]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72320 GPU rumors 2020] by [[Srdja Matovic]], [[CCC]], November 13, 2019
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74771 AB search with NN on GPU...] by [[Srdja Matovic]], [[CCC]], August 13, 2020 » [[Neural Networks]] <ref>[https://forums.developer.nvidia.com/t/kernel-launch-latency/62455 kernel launch latency - CUDA / CUDA Programming and Performance - NVIDIA Developer Forums] by LukeCuda, June 18, 2018</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75073 I stumbled upon this article on the new Nvidia RTX GPUs] by [[Kai Laskos]], [[CCC]], September 10, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75639 Will AMD RDNA2 based Radeon RX 6000 series kick butt with Lc0?] by [[Srdja Matovic]], [[CCC]], November 01, 2020
'''2021'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76986 Zeta with NNUE on GPU?] by [[Srdja Matovic]], [[CCC]], March 31, 2021 » [[Zeta]], [[NNUE]]
* [https://talkchess.com/forum3/viewtopic.php?f=2&t=77097 GPU rumors 2021] by [[Srdja Matovic]], [[CCC]], April 16, 2021
'''2022'''
* [https://www.talkchess.com/forum3/viewtopic.php?f=7&t=79078 Comparison of all known Sliding lookup algorithms <nowiki>[CUDA]</nowiki>] by [[Daniel Infuehr]], [[CCC]], January 08, 2022 » [[Sliding Piece Attacks]]
'''2024'''
* [https://talkchess.com/forum3/viewtopic.php?f=7&t=72566&p=955538#p955538 Re: China boosts in silicon...] by [[Srdja Matovic]], [[CCC]], January 13, 2024
=External Links=
* [https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units General-purpose computing on graphics processing units (GPGPU) from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units List of AMD graphics processing units from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units List of Intel graphics processing units from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units List of Nvidia graphics processing units from Wikipedia]
* [https://developer.nvidia.com/ NVIDIA Developer]