GPU
==Floating Point Instruction Throughput==
* FP16: Some newer GPGPU architectures offer half-precision (16-bit) floating point operations at an FP32:FP16 throughput ratio of 1:2. Older architectures might not support FP16 at all, might support it only at the same rate as FP32, or only at very low rates.
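The effect of the 1:2 ratio can be illustrated with a back-of-the-envelope peak-throughput calculation. The GPU figures below (core count, clock) are hypothetical and only serve to show how the ratio doubles the theoretical FLOPS:

```python
def peak_flops(cores, clock_hz, ops_per_core_per_cycle):
    # Peak throughput = cores * clock * floating point ops per core per cycle.
    return cores * clock_hz * ops_per_core_per_cycle

# Hypothetical GPU: 2560 FP32 cores at 1.5 GHz; an FMA counts as 2 ops.
fp32_peak = peak_flops(2560, 1.5e9, 2)
# With an FP32:FP16 ratio of 1:2, each core retires twice as many
# half-precision operations per cycle (e.g. via packed 2x16-bit math).
fp16_peak = peak_flops(2560, 1.5e9, 2 * 2)

print(fp32_peak / 1e12, "TFLOPS FP32")  # 7.68 TFLOPS FP32
print(fp16_peak / 1e12, "TFLOPS FP16")  # 15.36 TFLOPS FP16
```

On an older architecture with a 1:1 ratio the FP16 figure would merely equal the FP32 one, and with no FP16 support the computation falls back to FP32 entirely.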
==Tensors==