GPU

472 bytes added, 20:33, 9 August 2019
= Grids and NDRange =
While warps, blocks, wavefronts, and workgroups are concepts that the machine executes, Grids and NDRanges describe the scope of the problem a programmer specifies. For example, a pixel shader executing over a 1920x1080 screen has 2,073,600 pixels to process. GPUs are designed such that each of these pixels can get its own thread of execution. Specifying these 2,073,600 work items is the purpose of a CUDA Grid or OpenCL NDRange.
A typical midrange GPU will "only" be able to process tens of thousands of threads at a time. In practice, the device driver will cut up a Grid or NDRange (usually consisting of millions of items) into Blocks or Workgroups. These blocks and workgroups will execute with as much parallelism as the underlying hardware can support (maybe 10,000 threads at a time on a midrange GPU). The device driver will implicitly iterate these blocks over the entire Grid or NDRange to complete the task the programmer has specified, similar to a for-loop.
Grids and NDRanges can be 1-dimensional, 2-dimensional, or 3-dimensional. 2-dimensional grids are common for screen-space operations such as pixel shaders, while 3-dimensional grids are useful for specifying many operations per pixel (such as a raytracer, which may launch 5000 rays per pixel).
 
The most important note is that the blocks or workgroups of a Grid or NDRange are not guaranteed to execute concurrently with each other; some degree of sequential processing may happen. As such, communication across an entire Grid or NDRange is difficult to achieve. In practice, the easiest mechanism for Grid- or NDRange-wide synchronization is to wait for the kernel to finish executing and have the CPU split tasks as appropriate. CPUs and GPUs can easily work as a team to accomplish their tasks.
= Architectures and Physical Hardware =
