Talk:GPU

Nvidia architectures

Afaik, Nvidia never officially mentioned SIMD as a hardware architecture in their papers; since Tesla they have referred to it only as SIMT.

Nevertheless, my own conclusions are as follows (a short arithmetic sketch follows the list):

  • Tesla has 8 wide SIMD, executing a Warp of 32 threads over 4 cycles.
  • Fermi has 16 wide SIMD, executing a Warp of 32 threads over 2 cycles.
  • Kepler is somewhat odd; I am not sure how its compute units are partitioned.
  • Maxwell and Pascal have 32 wide SIMD, executing a Warp of 32 threads over 1 cycle.
  • Volta and Turing seem to have 16 wide FPU SIMDs, but my own experiments show 32 wide VALU.
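
To make the relation between SIMD width and issue latency explicit, here is a minimal C sketch of the arithmetic; the widths are my assumptions from the list above, not official Nvidia numbers, and Kepler is left out because I am unsure about its partitioning.

  /* Minimal sketch: cycles needed to issue one 32-thread Warp on the
     assumed SIMD widths from the list above. */
  #include <stdio.h>

  int main(void) {
      const int warp = 32;                           /* threads per Warp */
      struct { const char *arch; int simd; } gpu[] = {
          { "Tesla",           8 },                  /* assumed 8 wide SIMD  */
          { "Fermi",          16 },                  /* assumed 16 wide SIMD */
          { "Maxwell/Pascal", 32 },                  /* assumed 32 wide SIMD */
          { "Volta/Turing",   32 },                  /* 32 wide VALU per my experiments */
      };
      for (unsigned i = 0; i < sizeof gpu / sizeof gpu[0]; i++)
          printf("%-14s: %d cycles per Warp\n", gpu[i].arch, warp / gpu[i].simd);
      return 0;
  }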

Smatovic (talk) 10:17, 22 April 2021 (CEST)

SIMD + Scalar Unit

According to AMD papers, every SIMD unit has one scalar unit; Nvidia seems to have SFUs, special function units.
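
As a rough illustration of why a separate scalar unit pays off, here is a toy C sketch (my own example, not vendor code): a value that is uniform across all lanes of a Wavefront, such as a loop bound or base address, only needs to be computed once, while per-lane work goes through the wide vector ALU.

  /* Toy illustration, not vendor code: the bound is uniform across all
     lanes, so a scalar unit could compute it once per Wavefront, while the
     per-lane multiply would run on the wide vector ALU. */
  #include <stdio.h>

  #define LANES 64                                   /* GCN Wavefront size */

  int main(void) {
      int data[LANES];
      int uniform_bound = 4 * 8;                     /* same for every lane -> scalar unit */
      for (int lane = 0; lane < LANES; lane++)       /* per-lane work -> vector ALU */
          data[lane] = lane * uniform_bound;
      printf("lane 0: %d, lane %d: %d\n", data[0], LANES - 1, data[LANES - 1]);
      return 0;
  }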

Smatovic (talk) 11:47, 18 April 2021 (CEST)

Embedded CPU controller

It is not documented in the white papers, but it seems that every discrete GPU has an embedded CPU controller (e.g. Nvidia Falcon) which (speculation) launches the kernels.
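
To make that speculation concrete, here is a toy C sketch of a controller draining a launch queue; the queue layout and kernel names are invented for illustration and do not reflect any documented Falcon behaviour.

  /* Speculative toy sketch: an embedded controller draining a launch queue.
     Everything here is made up for illustration. */
  #include <stdio.h>

  typedef struct { const char *kernel; int grid; int block; } launch_cmd;

  static void launch(const launch_cmd *c) {          /* stand-in for the real hardware path */
      printf("launch %s <<<%d, %d>>>\n", c->kernel, c->grid, c->block);
  }

  int main(void) {
      launch_cmd queue[] = {                         /* written by the host driver (speculation) */
          { "kernel_a",  1, 256 },
          { "kernel_b", 64, 128 },
      };
      for (unsigned i = 0; i < sizeof queue / sizeof queue[0]; i++)
          launch(&queue[i]);                         /* controller launches each kernel */
      return 0;
  }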

Smatovic (talk) 11:48, 18 April 2021 (CEST)

AMD architectures

AMD seems to have some kind of NDA on their newest whitepapers, so I will put this into the discussion section. My own conclusions are as follows (a short arithmetic sketch follows the list):

  • TeraScale has a VLIW design.
  • GCN has 16 wide SIMD, executing a Wavefront of 64 threads over 4 cycles.
  • RDNA has 32 wide SIMD, executing a Wavefront:32 over 1 cycle and Wavefront:64 over two cycles.
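
The same arithmetic as a minimal C sketch, again based on my assumed widths rather than anything from the whitepapers:

  /* Minimal sketch: cycles needed to issue one Wavefront on the assumed
     SIMD widths from the list above. */
  #include <stdio.h>

  int main(void) {
      printf("GCN : Wavefront of 64 on 16 wide SIMD -> %d cycles\n", 64 / 16);
      printf("RDNA: Wavefront of 32 on 32 wide SIMD -> %d cycles\n", 32 / 32);
      printf("RDNA: Wavefront of 64 on 32 wide SIMD -> %d cycles\n", 64 / 32);
      return 0;
  }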

Smatovic (talk) 10:16, 22 April 2021 (CEST)