=Programming Model=
A [https://en.wikipedia.org/wiki/Parallel_programming_model parallel programming model] for GPGPU can be [https://en.wikipedia.org/wiki/Data_parallelism data-parallel], [https://en.wikipedia.org/wiki/Task_parallelism task-parallel], a mixture of both, or, with libraries and offload directives, also [https://en.wikipedia.org/wiki/Implicit_parallelism implicitly parallel]. Single GPU threads (work-items in OpenCL) are grouped into a block (work-group in OpenCL), and one or multiple blocks form the grid (NDRange in OpenCL) that is executed on the GPU device. The threads of a block (work-group) can usually be synchronized and have access to the same scratch-pad memory, while the architecture limits how many threads a block can hold and how many threads can run concurrently on the device in total.
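
As a rough illustration of this hierarchy, the following CUDA sketch launches a grid of blocks in which each block cooperates through shared (scratch-pad) memory and block-wide synchronization. The kernel name, the chosen block size, and the per-block sum it computes are only illustrative assumptions, not part of any particular API described on this page.

<pre>
#include <cstdio>

// Illustrative kernel: each block loads values into shared (scratch-pad)
// memory, synchronizes its threads, then reduces them to one partial sum.
__global__ void blockSum(const float* in, float* out, int n) {
    extern __shared__ float scratch[];           // per-block scratch-pad memory
    int tid = threadIdx.x;                       // thread index within the block
    int gid = blockIdx.x * blockDim.x + tid;     // global index within the grid

    scratch[tid] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                             // synchronize all threads of this block

    // Tree reduction within the block (block size assumed to be a power of two).
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) scratch[tid] += scratch[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = scratch[0];  // one result per block
}

int main() {
    const int n = 1 << 20, threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    // Grid = blocks x threads; the third launch argument sizes the shared memory.
    blockSum<<<blocks, threadsPerBlock, threadsPerBlock * sizeof(float)>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("partial sum of block 0: %f\n", out[0]);
    cudaFree(in); cudaFree(out);
    return 0;
}
</pre>

In OpenCL the same structure would be expressed with work-items, work-groups, an NDRange, local memory, and barrier() instead of threads, blocks, a grid, shared memory, and __syncthreads().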
=Memory Model=