GPU

==Floating Point Instruction Throughput==
* FP16
: Some newer GPGPU architectures offer half-precision (16-bit) floating point operations at an FP32:FP16 throughput ratio of 1:2. Older architectures might not support FP16 at all, might support it only at the same rate as FP32, or only at very low rates (see the sketch below).
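
A minimal sketch of what exploiting the 1:2 ratio looks like in CUDA C++: the kernel packs two FP16 values into each half2 and issues paired fused multiply-adds with __hfma2, so each instruction slot does two half-precision operations. The kernel name, array size, and launch configuration are illustrative assumptions, and the code must be compiled for a GPU with native FP16 arithmetic (e.g. nvcc -arch=sm_53 or newer).

<pre>
#include <cuda_fp16.h>
#include <cstdio>

// FP16 SAXPY-style kernel operating on half2 pairs. Packing two FP16
// values per half2 lets hardware with a 1:2 FP32:FP16 throughput ratio
// issue two half-precision FMAs where it would issue one FP32 FMA.
// __hfma2 requires native FP16 support (compute capability 5.3+).
__global__ void axpyFp16(const half2 *x, half2 *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    half2 a2 = __float2half2_rn(a);       // broadcast a into both FP16 lanes
    if (i < n)
        y[i] = __hfma2(a2, x[i], y[i]);   // two fused multiply-adds at once
}

int main() {
    const int n = 1 << 20;                // element count is an arbitrary choice
    half2 *x, *y;
    cudaMallocManaged(&x, n * sizeof(half2));
    cudaMallocManaged(&y, n * sizeof(half2));
    for (int i = 0; i < n; ++i) {
        x[i].x = __float2half(1.0f);
        x[i].y = __float2half(1.0f);
        y[i].x = __float2half(0.5f);
        y[i].y = __float2half(0.5f);
    }
    axpyFp16<<<(n + 255) / 256, 256>>>(x, y, 2.0f, n);
    cudaDeviceSynchronize();
    // Expect 2.0 * 1.0 + 0.5 = 2.5 in every FP16 lane.
    printf("y[0] = %f\n", __half2float(y[0].x));
    cudaFree(x);
    cudaFree(y);
    return 0;
}
</pre>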
==Tensors==