GPU

==Floating Point Instruction Throughput==
* FP16
: Some newer GPGPU architectures offer half-precision (16-bit) floating point operation throughput with an FP32:FP16 ratio of 1:2. Older architectures might not support FP16 at all, might support it only at the same rate as FP32, or only at very low rates. See the sketch below for how the 1:2 ratio arises.
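As a rough illustration, here is a minimal CUDA sketch, assuming a device with native FP16 arithmetic (compute capability 5.3 or higher); the kernel and buffer names are hypothetical. Packing two 16-bit values into one <code>__half2</code> lets a single instruction perform two FP16 fused multiply-adds, which is how the 1:2 FP32:FP16 throughput ratio is achieved.

<pre>
#include <cuda_fp16.h>

// Hypothetical kernel: element-wise fused multiply-add on packed FP16 pairs.
// A single __hfma2 instruction performs two FP16 FMAs, so a device with an
// FP32:FP16 ratio of 1:2 retires twice the FLOPs per cycle compared to FP32.
__global__ void fma_half2(const __half2* a, const __half2* b, __half2* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hfma2(a[i], b[i], c[i]);  // two FP16 multiply-adds at once
}
</pre>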
==Tensors==