AVX-512

AVX-512 is an expansion of Intel's AVX and AVX2 instruction sets using the EVEX prefix. It features 32 512-bit wide vector SIMD registers, zmm0 through zmm31, each keeping for instance eight doubles or eight integer quad words such as bitboards, and eight dedicated mask registers, k0 through k7, which specify which vector elements are operated on and written (k0 acts as the implicit "no mask" value in most encodings, leaving seven freely usable). If the Nth bit of a vector mask register is set, the Nth element of the destination vector is overwritten with the result of the operation; otherwise, depending on the instruction, the element is either zeroed (zero-masking) or taken from another source register (merge-masking; it remains unchanged if that source is the destination itself). A vector mask register can be set by vector compare instructions, by moves from a general purpose register, or by a dedicated subset of vector mask arithmetic instructions.
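The two masking modes can be sketched in scalar C over eight 64-bit lanes. The helper names below are hypothetical, illustrating the semantics rather than real intrinsics:

```c
#include <stdint.h>

/* Scalar sketch of AVX-512 masking on eight 64-bit lanes (hypothetical
   helpers, not real intrinsics).
   maskz_op: lanes whose mask bit is clear are zeroed (zero-masking).
   mask_op:  lanes whose mask bit is clear are taken from src (merge-masking). */

uint64_t not64(uint64_t x) { return ~x; }   /* sample per-lane operation */

void maskz_op(uint64_t dst[8], uint8_t k, const uint64_t a[8],
              uint64_t (*op)(uint64_t)) {
    for (int i = 0; i < 8; i++)
        dst[i] = ((k >> i) & 1) ? op(a[i]) : 0;
}

void mask_op(uint64_t dst[8], const uint64_t src[8], uint8_t k,
             const uint64_t a[8], uint64_t (*op)(uint64_t)) {
    for (int i = 0; i < 8; i++)
        dst[i] = ((k >> i) & 1) ? op(a[i]) : src[i];
}
```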

=Extensions=
AVX-512 consists of multiple extensions, not all of which are meant to be supported by every AVX-512 capable processor. Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations. AVX-512F and AVX-512CD were first implemented in the Xeon Phi processor and coprocessor code-named Knights Landing, launched on June 20, 2016.

=Selected Instructions=

VPTERNLOG
AVX-512F features the instruction VPTERNLOGQ (or VPTERNLOGD) to perform bitwise ternary logic, for instance to operate on vectors of bitboards. Three input vectors are combined bitwise by an operation determined by an immediate byte operand (imm8): for each bit position, the three input bits (first operand most significant) form an index 0..7 into imm8, so the 256 possible imm8 values correspond to the output column of the truth table over all eight combinations of the three input bits, as demonstrated with some selected imm8 values below:

 imm8 | operation
 0x00 | 0 (all bits clear)
 0x80 | a & b & c
 0x96 | a ^ b ^ c
 0xCA | a ? b : c (bitwise select)
 0xE8 | majority(a, b, c)
 0xFE | a | b | c
 0xFF | 1 (all bits set)

The following VPTERNLOGQ intrinsics are declared, where the maskz version zeroes destination quad word elements whose mask bit is clear, while the mask version copies them from src; note that in the mask versions src also serves as the first logic operand, and that the 256-bit forms additionally require AVX-512VL:

__m256i _mm256_ternarylogic_epi64(__m256i a, __m256i b, __m256i c, int imm8);
__m256i _mm256_maskz_ternarylogic_epi64(__mmask8 k, __m256i a, __m256i b, __m256i c, int imm8);
__m256i _mm256_mask_ternarylogic_epi64(__m256i src, __mmask8 k, __m256i a, __m256i b, int imm8);
__m512i _mm512_ternarylogic_epi64(__m512i a, __m512i b, __m512i c, int imm8);
__m512i _mm512_maskz_ternarylogic_epi64(__mmask8 k, __m512i a, __m512i b, __m512i c, int imm8);
__m512i _mm512_mask_ternarylogic_epi64(__m512i src, __mmask8 k, __m512i a, __m512i b, int imm8);
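The imm8 lookup can be emulated in scalar C for a single 64-bit lane, which is handy for deriving and verifying imm8 constants off-line. ternlog64 is a hypothetical helper name, a sketch of the semantics rather than the real intrinsic:

```c
#include <stdint.h>

/* Scalar emulation of VPTERNLOGQ for one 64-bit lane: at every bit
   position, the bits of a, b, c (a most significant) form an index 0..7,
   and the indexed bit of imm8 becomes the result bit. */
uint64_t ternlog64(uint64_t a, uint64_t b, uint64_t c, uint8_t imm8) {
    uint64_t r = 0;
    for (int i = 0; i < 64; i++) {
        int idx = (int)(((a >> i & 1) << 2) | ((b >> i & 1) << 1) | (c >> i & 1));
        r |= (uint64_t)((imm8 >> idx) & 1) << i;
    }
    return r;
}
```

With the canonical byte inputs a = 0xF0, b = 0xCC, c = 0xAA, the low result byte reproduces imm8 itself, which is the usual trick to construct imm8 constants: write the desired expression over 0xF0, 0xCC and 0xAA, and the value of that expression is the immediate.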

VPLZCNT
AVX-512CD features Vector Leading Zero Count: VPLZCNTQ counts the leading zeroes of a vector of eight bitboards in parallel, using the following intrinsics, where the maskz version zeroes destination elements whose mask bit is clear, while the mask version copies them from src:

__m256i _mm256_lzcnt_epi64(__m256i a);
__m256i _mm256_maskz_lzcnt_epi64(__mmask8 k, __m256i a);
__m256i _mm256_mask_lzcnt_epi64(__m256i src, __mmask8 k, __m256i a);
__m512i _mm512_lzcnt_epi64(__m512i a);
__m512i _mm512_maskz_lzcnt_epi64(__mmask8 k, __m512i a);
__m512i _mm512_mask_lzcnt_epi64(__m512i src, __mmask8 k, __m512i a);
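Per lane, VPLZCNTQ computes the following scalar function, with a leading zero count of 64 for a zero input, as the instruction defines it. For a non-empty bitboard this gives the most significant one bit as 63 - lzcnt. lzcnt64 is a hypothetical helper sketching the semantics:

```c
#include <stdint.h>

/* Scalar sketch of what VPLZCNTQ computes for one 64-bit lane:
   the number of leading zero bits, with lzcnt64(0) == 64. */
int lzcnt64(uint64_t x) {
    if (x == 0) return 64;
    int n = 0;
    while (!(x & 0x8000000000000000ULL)) {  /* shift until the MSB is set */
        x <<= 1;
        n++;
    }
    return n;
}
```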

VPOPCNT
The AVX-512 VPOPCNTDQ extension has a vector population count instruction to count the one bits of either sixteen 32-bit double words (VPOPCNTD) or eight 64-bit quad words aka bitboards (VPOPCNTQ) in parallel:

__m128i _mm_mask_popcnt_epi32(__m128i src, __mmask8 k, __m128i a);
__m128i _mm_maskz_popcnt_epi32(__mmask8 k, __m128i a);
__m128i _mm_popcnt_epi32(__m128i a);
__m256i _mm256_mask_popcnt_epi32(__m256i src, __mmask8 k, __m256i a);
__m256i _mm256_maskz_popcnt_epi32(__mmask8 k, __m256i a);
__m256i _mm256_popcnt_epi32(__m256i a);
__m512i _mm512_mask_popcnt_epi32(__m512i src, __mmask16 k, __m512i a);
__m512i _mm512_maskz_popcnt_epi32(__mmask16 k, __m512i a);
__m512i _mm512_popcnt_epi32(__m512i a);

__m128i _mm_mask_popcnt_epi64(__m128i src, __mmask8 k, __m128i a);
__m128i _mm_maskz_popcnt_epi64(__mmask8 k, __m128i a);
__m128i _mm_popcnt_epi64(__m128i a);
__m256i _mm256_mask_popcnt_epi64(__m256i src, __mmask8 k, __m256i a);
__m256i _mm256_maskz_popcnt_epi64(__mmask8 k, __m256i a);
__m256i _mm256_popcnt_epi64(__m256i a);
__m512i _mm512_mask_popcnt_epi64(__m512i src, __mmask8 k, __m512i a);
__m512i _mm512_maskz_popcnt_epi64(__mmask8 k, __m512i a);
__m512i _mm512_popcnt_epi64(__m512i a);
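Per lane, VPOPCNTQ computes an ordinary 64-bit population count, e.g. the number of pieces on a bitboard. A scalar sketch using the classic clear-lowest-set-bit loop:

```c
#include <stdint.h>

/* Scalar sketch of the per-lane count VPOPCNTQ performs: the number of
   one bits in a 64-bit word. Each iteration clears the lowest set bit,
   so the loop runs once per one bit. */
int popcnt64(uint64_t x) {
    int n = 0;
    while (x) {
        x &= x - 1;   /* clear the least significant one bit */
        n++;
    }
    return n;
}
```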

VPDPBUSD
The AVX-512 VNNI extension features several instructions speeding up neural network and deep learning calculations on the CPU, for instance NNUE inference using uint8/int8. VPDPBUSD - Multiply and Add Unsigned and Signed Bytes - multiplies unsigned bytes of one source with the corresponding signed bytes of the other and accumulates each group of four adjacent products into a signed 32-bit double word of the destination; it executes on both port 0 and port 5 in one cycle. Pseudocode of the 512-bit form:

__m512i _mm512_dpbusd_epi32(__m512i src, __m512i a, __m512i b) {
  for (j = 0; j < 16; j++) {
    tmp1.word := Signed(ZeroExtend16(a.byte[4*j  ]) * SignExtend16(b.byte[4*j  ]));
    tmp2.word := Signed(ZeroExtend16(a.byte[4*j+1]) * SignExtend16(b.byte[4*j+1]));
    tmp3.word := Signed(ZeroExtend16(a.byte[4*j+2]) * SignExtend16(b.byte[4*j+2]));
    tmp4.word := Signed(ZeroExtend16(a.byte[4*j+3]) * SignExtend16(b.byte[4*j+3]));
    dst.dword[j] := src.dword[j] + tmp1 + tmp2 + tmp3 + tmp4;
  }
  return dst;
}

=See also=
 * CFish - AVX2 Attacks
 * DirGolem
 * NNUE
 * Stockfish NNUE
 * SIMD and SWAR Techniques

=SIMD=
 * AltiVec
 * AVX
 * AVX2
 * SSE2
 * XOP

=Publications=
 * Mathias Gottschlag, Frank Bellosa (2018). Mechanism to Mitigate AVX-Induced Frequency Reduction. arXiv:1901.04982
 * Mathias Gottschlag, Philipp Machauer, Yussuf Khalil, Frank Bellosa (2021). Fair Scheduling for AVX2 and AVX-512 Workloads. USENIX ATC '21

=Manuals=
 * Intel® Architecture Instruction Set Extensions Programming Reference (pdf)
 * Intel® 64 and IA-32 Architectures Optimization Reference Manual

=Forum Posts=
 * AVX-512 and NNUE by Gian-Carlo Pascutto, CCC, September 08, 2020 » NNUE
 * VPOPCNTDQ and VBMI2 by Vivien Clauzon, CCC, May 04, 2021

=External Links=
 * AVX-512 from Wikipedia
 * AVX-512 Vector Neural Network Instructions (VNNI) - x86 - WikiChip
 * Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Overview
 * Intel Instruction Set Architecture Extensions

Blogs

 * AVX-512 instructions | Intel® Developer Zone by James Reinders, July 23, 2013
 * Future instruction set: AVX-512 by Agner Fog, October 09, 2013
 * Additional AVX-512 instructions | Intel® Developer Zone by James Reinders, July 17, 2014
 * Processing Arrays of Bits with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) | Intel® Developer Zone by Thomas Willhalm, July 24, 2014
 * AVX-512 May Be a “Hidden Gem” in Intel Xeon Scalable Processors by James Reinders, HPCwire, June 29, 2017
 * Lower Numerical Precision Deep Learning Inference and Training by Andres Rodriguez et al., January 19, 2018

Compiler Support

 * Intel Intrinsics Guide - AVX-512
 * Intel® Advanced Vector Extensions 2015/2016 Support in GNU Compiler Collection (pdf) by Kirill Yukhin, July 2014
 * Guide to Automatic Vectorization with Intel AVX-512 Instructions in Knights Landing Processors - Colfax Research, May 11, 2016
 * Microsoft Visual Studio 2017 Supports Intel® AVX-512 | Visual C++ Team Blog by Eric Battalio, July 11, 2017
