AAA--ASCII Adjust After Addition.
AAA
void
37
ASCII adjust AL after addition.
NA
NA
NA
NA
AAD--ASCII Adjust AX Before Division.
AAD
void
D5 0A
ASCII adjust AX before division.
AAD
imm8
D5 ib
Adjust AX before division to number base imm8.
NA
NA
NA
NA
AAM--ASCII Adjust AX After Multiply.
AAM
void
D4 0A
ASCII adjust AX after multiply.
AAM
imm8
D4 ib
Adjust AX after multiply to number base imm8.
NA
NA
NA
NA
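The base-conversion behavior of AAD and AAM above can be sketched in Python; the function names and the (AH, AL) tuple return convention are illustrative, and flag updates are omitted.

```python
def aad(ah, al, base=10):
    # AAD: AL <- (AL + AH * base) & 0xFF, AH <- 0 (executed before a division)
    return 0, (al + ah * base) & 0xFF

def aam(al, base=10):
    # AAM: AH <- AL / base, AL <- AL mod base (executed after a multiply)
    return al // base, al % base
```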
AAS--ASCII Adjust AL After Subtraction.
AAS
void
3F
ASCII adjust AL after subtraction.
NA
NA
NA
NA
ADC--Add with Carry.
ADC
AL,imm8
14 ib
Add with carry imm8 to AL.
ADC
AX,imm16
15 iw
Add with carry imm16 to AX.
ADC
EAX,imm32
15 id
Add with carry imm32 to EAX.
ADC
RAX,imm32
REX.W + 15 id
Add with carry imm32 sign-extended to 64-bits to RAX.
ADC
r/m8,imm8*
80 /2 ib
Add with carry imm8 to r/m8.
ADC
r/m8,imm8
REX + 80 /2 ib
Add with carry imm8 to r/m8.
ADC
r/m16,imm16
81 /2 iw
Add with carry imm16 to r/m16.
ADC
r/m32,imm32
81 /2 id
Add with CF imm32 to r/m32.
ADC
r/m64,imm32
REX.W + 81 /2 id
Add with CF imm32 sign-extended to 64-bits to r/m64.
ADC
r/m16,imm8
83 /2 ib
Add with CF sign-extended imm8 to r/m16.
ADC
r/m32,imm8
83 /2 ib
Add with CF sign-extended imm8 into r/m32.
ADC
r/m64,imm8
REX.W + 83 /2 ib
Add with CF sign-extended imm8 into r/m64.
ADC
r/m8,r8**
10 /r
Add with carry byte register to r/m8.
ADC
r/m8,r8
REX + 10 /r
Add with carry byte register to r/m8.
ADC
r/m16,r16
11 /r
Add with carry r16 to r/m16.
ADC
r/m32,r32
11 /r
Add with CF r32 to r/m32.
ADC
r/m64,r64
REX.W + 11 /r
Add with CF r64 to r/m64.
ADC
r8,r/m8**
12 /r
Add with carry r/m8 to byte register.
ADC
r8,r/m8
REX + 12 /r
Add with carry r/m8 to byte register.
ADC
r16,r/m16
13 /r
Add with carry r/m16 to r16.
ADC
r32,r/m32
13 /r
Add with CF r/m32 to r32.
ADC
r64,r/m64
REX.W + 13 /r
Add with CF r/m64 to r64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
AL/AX/EAX/RAX
imm8(r)
NA
NA
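A minimal Python sketch of the 8-bit ADC semantics described above; returning the new CF alongside the result is an illustrative convention, and the remaining status flags are omitted.

```python
def adc8(dest, src, cf):
    # ADC: dest <- dest + src + CF, with CF set on unsigned overflow
    total = dest + src + cf
    return total & 0xFF, 1 if total > 0xFF else 0
```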
ADCX--Unsigned Integer Addition of Two Operands with Carry Flag.
ADCX
r32,r/m32
66 0F 38 F6 /r
ADX
Unsigned addition of r32 with CF, r/m32 to r32, writes CF.
ADCX
r64,r/m64
66 REX.W 0F 38 F6 /r
ADX
Unsigned addition of r64 with CF, r/m64 to r64, writes CF.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ADD--Add.
ADD
AL,imm8
04 ib
Add imm8 to AL.
ADD
AX,imm16
05 iw
Add imm16 to AX.
ADD
EAX,imm32
05 id
Add imm32 to EAX.
ADD
RAX,imm32
REX.W + 05 id
Add imm32 sign-extended to 64-bits to RAX.
ADD
r/m8,imm8*
80 /0 ib
Add imm8 to r/m8.
ADD
r/m8,imm8
REX + 80 /0 ib
Add imm8 to r/m8.
ADD
r/m16,imm16
81 /0 iw
Add imm16 to r/m16.
ADD
r/m32,imm32
81 /0 id
Add imm32 to r/m32.
ADD
r/m64,imm32
REX.W + 81 /0 id
Add imm32 sign-extended to 64-bits to r/m64.
ADD
r/m16,imm8
83 /0 ib
Add sign-extended imm8 to r/m16.
ADD
r/m32,imm8
83 /0 ib
Add sign-extended imm8 to r/m32.
ADD
r/m64,imm8
REX.W + 83 /0 ib
Add sign-extended imm8 to r/m64.
ADD
r/m8,r8**
00 /r
Add r8 to r/m8.
ADD
r/m8,r8
REX + 00 /r
Add r8 to r/m8.
ADD
r/m16,r16
01 /r
Add r16 to r/m16.
ADD
r/m32,r32
01 /r
Add r32 to r/m32.
ADD
r/m64,r64
REX.W + 01 /r
Add r64 to r/m64.
ADD
r8,r/m8**
02 /r
Add r/m8 to r8.
ADD
r8,r/m8
REX + 02 /r
Add r/m8 to r8.
ADD
r16,r/m16
03 /r
Add r/m16 to r16.
ADD
r32,r/m32
03 /r
Add r/m32 to r32.
ADD
r64,r/m64
REX.W + 03 /r
Add r/m64 to r64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
AL/AX/EAX/RAX
imm8(r)
NA
NA
ADDPD--Add Packed Double-Precision Floating-Point Values.
ADDPD
xmm1,xmm2/m128
66 0F 58 /r
SSE2
Add packed double-precision floating-point values from xmm2/m128 to xmm1.
VADDPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 58 /r
AVX
Add packed double-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VADDPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 58 /r
AVX
Add packed double-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
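The element-wise addition ADDPD performs can be sketched in Python, treating an XMM register as a 16-byte buffer of two little-endian doubles; the buffer representation is an assumption of this sketch, not the ISA encoding.

```python
import struct

def addpd(xmm1, xmm2):
    # element-wise add of two packed double-precision values
    a = struct.unpack('<2d', xmm1)
    b = struct.unpack('<2d', xmm2)
    return struct.pack('<2d', a[0] + b[0], a[1] + b[1])
```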
ADDPS--Add Packed Single-Precision Floating-Point Values.
ADDPS
xmm1,xmm2/m128
0F 58 /r
SSE
Add packed single-precision floating-point values from xmm2/m128 to xmm1 and stores result in xmm1.
VADDPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 58 /r
AVX
Add packed single-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VADDPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 58 /r
AVX
Add packed single-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ADDSD--Add Scalar Double-Precision Floating-Point Values.
ADDSD
xmm1,xmm2/m64
F2 0F 58 /r
SSE2
Add the low double-precision floating-point value from xmm2/m64 to xmm1.
VADDSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 58 /r
AVX
Add the low double-precision floating-point value from xmm3/mem to xmm2 and store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ADDSS--Add Scalar Single-Precision Floating-Point Values.
ADDSS
xmm1,xmm2/m32
F3 0F 58 /r
SSE
Add the low single-precision floating-point value from xmm2/m32 to xmm1.
VADDSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 58 /r
AVX
Add the low single-precision floating-point value from xmm3/mem to xmm2 and store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ADDSUBPD--Packed Double-FP Add/Subtract.
ADDSUBPD
xmm1,xmm2/m128
66 0F D0 /r
SSE3
Add/subtract double-precision floating-point values from xmm2/m128 to xmm1.
VADDSUBPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D0 /r
AVX
Add/subtract packed double-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VADDSUBPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG D0 /r
AVX
Add/subtract packed double-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
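ADDSUBPD's alternating pattern, subtract in the low element and add in the high element, sketched on plain Python tuples (the tuple representation is illustrative):

```python
def addsubpd(dst, src):
    # element 0: dst0 - src0; element 1: dst1 + src1
    return (dst[0] - src[0], dst[1] + src[1])
```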
ADDSUBPS--Packed Single-FP Add/Subtract.
ADDSUBPS
xmm1,xmm2/m128
F2 0F D0 /r
SSE3
Add/subtract single-precision floating-point values from xmm2/m128 to xmm1.
VADDSUBPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.F2.0F.WIG D0 /r
AVX
Add/subtract single-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VADDSUBPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.F2.0F.WIG D0 /r
AVX
Add/subtract single-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ADOX--Unsigned Integer Addition of Two Operands with Overflow Flag.
ADOX
r32,r/m32
F3 0F 38 F6 /r
ADX
Unsigned addition of r32 with OF, r/m32 to r32, writes OF.
ADOX
r64,r/m64
F3 REX.W 0F 38 F6 /r
ADX
Unsigned addition of r64 with OF, r/m64 to r64, writes OF.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
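ADCX and ADOX exist to speed up multi-precision arithmetic: each propagates a carry through only one flag (CF or OF respectively), so two independent carry chains can be interleaved. A hedged Python sketch of the single-chain limb addition this pattern accelerates:

```python
def add_multiword(a, b):
    # add two little-endian lists of 64-bit limbs with a carry chain
    mask = (1 << 64) - 1
    out, carry = [], 0
    for x, y in zip(a, b):
        t = x + y + carry
        out.append(t & mask)
        carry = t >> 64
    return out, carry
```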
AESDEC--Perform One Round of an AES Decryption Flow.
AESDEC
xmm1,xmm2/m128
66 0F 38 DE /r
AES
Perform one round of an AES decryption flow, using the Equivalent Inverse Cipher, operating on a 128-bit data (state) from xmm1 with a 128-bit round key from xmm2/m128.
VAESDEC
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG DE /r
AES
AVX
Perform one round of an AES decryption flow, using the Equivalent Inverse Cipher, operating on a 128-bit data (state) from xmm2 with a 128-bit round key from xmm3/m128; store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
AESDECLAST--Perform Last Round of an AES Decryption Flow.
AESDECLAST
xmm1,xmm2/m128
66 0F 38 DF /r
AES
Perform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, operating on a 128-bit data (state) from xmm1 with a 128-bit round key from xmm2/m128.
VAESDECLAST
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG DF /r
AES
AVX
Perform the last round of an AES decryption flow, using the Equivalent Inverse Cipher, operating on a 128-bit data (state) from xmm2 with a 128-bit round key from xmm3/m128; store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
AESENC--Perform One Round of an AES Encryption Flow.
AESENC
xmm1,xmm2/m128
66 0F 38 DC /r
AES
Perform one round of an AES encryption flow, operating on a 128-bit data (state) from xmm1 with a 128-bit round key from xmm2/m128.
VAESENC
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG DC /r
AES
AVX
Perform one round of an AES encryption flow, operating on a 128-bit data (state) from xmm2 with a 128-bit round key from xmm3/m128; store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
AESENCLAST--Perform Last Round of an AES Encryption Flow.
AESENCLAST
xmm1,xmm2/m128
66 0F 38 DD /r
AES
Perform the last round of an AES encryption flow, operating on a 128-bit data (state) from xmm1 with a 128-bit round key from xmm2/m128.
VAESENCLAST
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG DD /r
AES
AVX
Perform the last round of an AES encryption flow, operating on a 128-bit data (state) from xmm2 with a 128-bit round key from xmm3/m128; store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
AESIMC--Perform the AES InvMixColumn Transformation.
AESIMC
xmm1,xmm2/m128
66 0F 38 DB /r
AES
Perform the InvMixColumn transformation on a 128-bit round key from xmm2/m128 and store the result in xmm1.
VAESIMC
xmm1,xmm2/m128
VEX.128.66.0F38.WIG DB /r
AES
AVX
Perform the InvMixColumn transformation on a 128-bit round key from xmm2/m128 and store the result in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
AESKEYGENASSIST--AES Round Key Generation Assist.
AESKEYGENASSIST
xmm1,xmm2/m128,imm8
66 0F 3A DF /r ib
AES
Assist in AES round key generation using an 8-bit Round Constant (RCON) specified in the immediate byte, operating on 128 bits of data specified in xmm2/m128, and store the result in xmm1.
VAESKEYGENASSIST
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG DF /r ib
AES
AVX
Assist in AES round key generation using an 8-bit Round Constant (RCON) specified in the immediate byte, operating on 128 bits of data specified in xmm2/m128, and store the result in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
AND--Logical AND.
AND
AL,imm8
24 ib
AL AND imm8.
AND
AX,imm16
25 iw
AX AND imm16.
AND
EAX,imm32
25 id
EAX AND imm32.
AND
RAX,imm32
REX.W + 25 id
RAX AND imm32 sign-extended to 64-bits.
AND
r/m8,imm8*
80 /4 ib
r/m8 AND imm8.
AND
r/m8,imm8
REX + 80 /4 ib
r/m8 AND imm8.
AND
r/m16,imm16
81 /4 iw
r/m16 AND imm16.
AND
r/m32,imm32
81 /4 id
r/m32 AND imm32.
AND
r/m64,imm32
REX.W + 81 /4 id
r/m64 AND imm32 sign-extended to 64-bits.
AND
r/m16,imm8
83 /4 ib
r/m16 AND imm8 (sign-extended).
AND
r/m32,imm8
83 /4 ib
r/m32 AND imm8 (sign-extended).
AND
r/m64,imm8
REX.W + 83 /4 ib
r/m64 AND imm8 (sign-extended).
AND
r/m8,r8**
20 /r
r/m8 AND r8.
AND
r/m8,r8
REX + 20 /r
r/m8 AND r8.
AND
r/m16,r16
21 /r
r/m16 AND r16.
AND
r/m32,r32
21 /r
r/m32 AND r32.
AND
r/m64,r64
REX.W + 21 /r
r/m64 AND r64.
AND
r8,r/m8**
22 /r
r8 AND r/m8.
AND
r8,r/m8
REX + 22 /r
r8 AND r/m8.
AND
r16,r/m16
23 /r
r16 AND r/m16.
AND
r32,r/m32
23 /r
r32 AND r/m32.
AND
r64,r/m64
REX.W + 23 /r
r64 AND r/m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
AL/AX/EAX/RAX
imm8(r)
NA
NA
ANDN--Logical AND NOT.
ANDN
r32a,r32b,r/m32
VEX.NDS.LZ.0F38.W0 F2 /r
BMI1
Bitwise AND of inverted r32b with r/m32, store result in r32a.
ANDN
r64a,r64b,r/m64
VEX.NDS.LZ.0F38.W1 F2 /r
BMI1
Bitwise AND of inverted r64b with r/m64, store result in r64a.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
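The ANDN operation above in Python form; the explicit width parameter stands in for the 32/64-bit operand size, and flags are omitted.

```python
def andn(src1, src2, width=32):
    # ANDN: (NOT src1) AND src2, over the operand width
    mask = (1 << width) - 1
    return (~src1 & mask) & src2
```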
ANDPD--Bitwise Logical AND of Packed Double-Precision Floating-Point Values.
ANDPD
xmm1,xmm2/m128
66 0F 54 /r
SSE2
Return the bitwise logical AND of packed double-precision floating-point values in xmm1 and xmm2/m128.
VANDPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 54 /r
AVX
Return the bitwise logical AND of packed double-precision floating-point values in xmm2 and xmm3/mem.
VANDPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 54 /r
AVX
Return the bitwise logical AND of packed double-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ANDPS--Bitwise Logical AND of Packed Single-Precision Floating-Point Values.
ANDPS
xmm1,xmm2/m128
0F 54 /r
SSE
Bitwise logical AND of xmm2/m128 and xmm1.
VANDPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 54 /r
AVX
Return the bitwise logical AND of packed single-precision floating-point values in xmm2 and xmm3/mem.
VANDPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 54 /r
AVX
Return the bitwise logical AND of packed single-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ANDNPD--Bitwise Logical AND NOT of Packed Double-Precision Floating-Point Values.
ANDNPD
xmm1,xmm2/m128
66 0F 55 /r
SSE2
Bitwise logical AND NOT of xmm2/m128 and xmm1.
VANDNPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 55 /r
AVX
Return the bitwise logical AND NOT of packed double-precision floating-point values in xmm2 and xmm3/mem.
VANDNPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 55 /r
AVX
Return the bitwise logical AND NOT of packed double-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ANDNPS--Bitwise Logical AND NOT of Packed Single-Precision Floating-Point Values.
ANDNPS
xmm1,xmm2/m128
0F 55 /r
SSE
Bitwise logical AND NOT of xmm2/m128 and xmm1.
VANDNPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 55 /r
AVX
Return the bitwise logical AND NOT of packed single-precision floating-point values in xmm2 and xmm3/mem.
VANDNPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 55 /r
AVX
Return the bitwise logical AND NOT of packed single-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ARPL--Adjust RPL Field of Segment Selector.
ARPL
r/m16,r16
63 /r
Adjust RPL of r/m16 to not less than RPL of r16.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
BLENDPD--Blend Packed Double Precision Floating-Point Values.
BLENDPD
xmm1,xmm2/m128,imm8
66 0F 3A 0D /r ib
SSE4_1
Select packed DP-FP values from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VBLENDPD
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 0D /r ib
AVX
Select packed double-precision floating-point values from xmm2 and xmm3/m128 from mask in imm8 and store the values in xmm1.
VBLENDPD
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 0D /r ib
AVX
Select packed double-precision floating-point values from ymm2 and ymm3/m256 from mask in imm8 and store the values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)[3:0]
BEXTR--Bit Field Extract.
BEXTR
r32a,r/m32,r32b
VEX.NDS.LZ.0F38.W0 F7 /r
BMI1
Contiguous bitwise extract from r/m32 using r32b as control; store result in r32a.
BEXTR
r64a,r/m64,r64b
VEX.NDS.LZ.0F38.W1 F7 /r
BMI1
Contiguous bitwise extract from r/m64 using r64b as control; store result in r64a.
ModRM:reg(w)
ModRM:r/m(r)
VEX.vvvv(r)
NA
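A sketch of the BEXTR field extraction, assuming the control operand carries the start position in bits 7:0 and the field length in bits 15:8; flags are omitted.

```python
def bextr(src, ctrl):
    # control operand: bits 7:0 = start position, bits 15:8 = field length
    start = ctrl & 0xFF
    length = (ctrl >> 8) & 0xFF
    return (src >> start) & ((1 << length) - 1)
```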
BLENDPS--Blend Packed Single Precision Floating-Point Values.
BLENDPS
xmm1,xmm2/m128,imm8
66 0F 3A 0C /r ib
SSE4_1
Select packed single precision floating-point values from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VBLENDPS
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 0C /r ib
AVX
Select packed single-precision floating-point values from xmm2 and xmm3/m128 from mask in imm8 and store the values in xmm1.
VBLENDPS
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 0C /r ib
AVX
Select packed single-precision floating-point values from ymm2 and ymm3/m256 from mask in imm8 and store the values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
BLENDVPD--Variable Blend Packed Double Precision Floating-Point Values.
BLENDVPD
xmm1,xmm2/m128,<XMM0>
66 0F 38 15 /r
SSE4_1
Select packed DP-FP values from xmm1 and xmm2/m128 from mask specified in XMM0 and store the values in xmm1.
VBLENDVPD
xmm1,xmm2,xmm3/m128,xmm4
VEX.NDS.128.66.0F3A.W0 4B /r /is4
AVX
Conditionally copy double-precision floating-point values from xmm2 or xmm3/m128 to xmm1, based on mask bits in the mask operand, xmm4.
VBLENDVPD
ymm1,ymm2,ymm3/m256,ymm4
VEX.NDS.256.66.0F3A.W0 4B /r /is4
AVX
Conditionally copy double-precision floating-point values from ymm2 or ymm3/m256 to ymm1, based on mask bits in the mask operand, ymm4.
ModRM:reg(r,w)
ModRM:r/m(r)
implicit XMM0
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)[7:4]
BLENDVPS--Variable Blend Packed Single Precision Floating-Point Values.
BLENDVPS
xmm1,xmm2/m128,<XMM0>
66 0F 38 14 /r
SSE4_1
Select packed single precision floating-point values from xmm1 and xmm2/m128 from mask specified in XMM0 and store the values into xmm1.
VBLENDVPS
xmm1,xmm2,xmm3/m128,xmm4
VEX.NDS.128.66.0F3A.W0 4A /r /is4
AVX
Conditionally copy single-precision floating-point values from xmm2 or xmm3/m128 to xmm1, based on mask bits in the specified mask operand, xmm4.
VBLENDVPS
ymm1,ymm2,ymm3/m256,ymm4
VEX.NDS.256.66.0F3A.W0 4A /r /is4
AVX
Conditionally copy single-precision floating-point values from ymm2 or ymm3/m256 to ymm1, based on mask bits in the specified mask register, ymm4.
ModRM:reg(r,w)
ModRM:r/m(r)
implicit XMM0
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)[7:4]
BLSI--Extract Lowest Set Isolated Bit.
BLSI
r32,r/m32
VEX.NDD.LZ.0F38.W0 F3 /3
BMI1
Extract lowest set bit from r/m32 and set that bit in r32.
BLSI
r64,r/m64
VEX.NDD.LZ.0F38.W1 F3 /3
BMI1
Extract lowest set bit from r/m64, and set that bit in r64.
VEX.vvvv(w)
ModRM:r/m(r)
NA
NA
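BLSI is the classic lowest-set-bit isolation trick; a sketch with an explicit width to emulate the fixed register size (flags omitted):

```python
def blsi(x, width=32):
    # BLSI: isolate lowest set bit, src AND (two's-complement negation of src)
    mask = (1 << width) - 1
    return x & (-x & mask)
```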
BLSMSK--Get Mask Up to Lowest Set Bit.
BLSMSK
r32,r/m32
VEX.NDD.LZ.0F38.W0 F3 /2
BMI1
Set all lower bits in r32 to '1' starting from bit 0 to lowest set bit in r/m32.
BLSMSK
r64,r/m64
VEX.NDD.LZ.0F38.W1 F3 /2
BMI1
Set all lower bits in r64 to '1' starting from bit 0 to lowest set bit in r/m64.
VEX.vvvv(w)
ModRM:r/m(r)
NA
NA
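The BLSMSK mask can be computed as x XOR (x - 1); a minimal sketch, flags omitted:

```python
def blsmsk(x, width=32):
    # BLSMSK: ones from bit 0 through the lowest set bit of x
    mask = (1 << width) - 1
    return (x ^ (x - 1)) & mask
```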
BLSR--Reset Lowest Set Bit.
BLSR
r32,r/m32
VEX.NDD.LZ.0F38.W0 F3 /1
BMI1
Reset lowest set bit of r/m32, keep all other bits of r/m32 and write result to r32.
BLSR
r64,r/m64
VEX.NDD.LZ.0F38.W1 F3 /1
BMI1
Reset lowest set bit of r/m64, keep all other bits of r/m64 and write result to r64.
VEX.vvvv(w)
ModRM:r/m(r)
NA
NA
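BLSR is the familiar clear-lowest-set-bit idiom, x AND (x - 1); a minimal sketch, flags omitted:

```python
def blsr(x):
    # BLSR: clear the lowest set bit
    return x & (x - 1)
```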
BNDCL--Check Lower Bound.
BNDCL
bnd,r/m32
F3 0F 1A /r
MPX
Generate a #BR if the address in r/m32 is lower than the lower bound in bnd.LB.
BNDCL
bnd,r/m64
F3 0F 1A /r
MPX
Generate a #BR if the address in r/m64 is lower than the lower bound in bnd.LB.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
BNDCU/BNDCN--Check Upper Bound.
BNDCU
bnd,r/m32
F2 0F 1A /r
MPX
Generate a #BR if the address in r/m32 is higher than the upper bound in bnd.UB (bnd.UB in 1's complement form).
BNDCU
bnd,r/m64
F2 0F 1A /r
MPX
Generate a #BR if the address in r/m64 is higher than the upper bound in bnd.UB (bnd.UB in 1's complement form).
BNDCN
bnd,r/m32
F2 0F 1B /r
MPX
Generate a #BR if the address in r/m32 is higher than the upper bound in bnd.UB (bnd.UB not in 1's complement form).
BNDCN
bnd,r/m64
F2 0F 1B /r
MPX
Generate a #BR if the address in r/m64 is higher than the upper bound in bnd.UB (bnd.UB not in 1's complement form).
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
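The MPX bound checks above reduce to simple comparisons; this sketch models the #BR fault with a Python exception and ignores the 1's-complement storage of bnd.UB.

```python
def bnd_check(addr, lb, ub):
    # BNDCL faults when addr < lb; BNDCU/BNDCN fault when addr > ub
    if addr < lb or addr > ub:
        raise RuntimeError("#BR: bound range exceeded")
    return True
```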
BNDLDX--Load Extended Bounds Using Address Translation.
BNDLDX
bnd,mib
0F 1A /r
MPX
Load the bounds stored in a bound table entry (BTE) into bnd with address translation using the base of mib and conditional on the index of mib matching the pointer value in the BTE.
ModRM:reg(w)
SIB.base(r): Address of pointer, SIB.index(r)
NA
NA
BNDMK--Make Bounds.
BNDMK
bnd,m32
F3 0F 1B /r
MPX
Make lower and upper bounds from m32 and store them in bnd.
BNDMK
bnd,m64
F3 0F 1B /r
MPX
Make lower and upper bounds from m64 and store them in bnd.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
BNDMOV--Move Bounds.
BNDMOV
bnd1,bnd2/m64
66 0F 1A /r
MPX
Move lower and upper bound from bnd2/m64 to bound register bnd1.
BNDMOV
bnd1,bnd2/m128
66 0F 1A /r
MPX
Move lower and upper bound from bnd2/m128 to bound register bnd1.
BNDMOV
bnd1/m64,bnd2
66 0F 1B /r
MPX
Move lower and upper bound from bnd2 to bnd1/m64.
BNDMOV
bnd1/m128,bnd2
66 0F 1B /r
MPX
Move lower and upper bound from bnd2 to bound register bnd1/m128.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
BNDSTX--Store Extended Bounds Using Address Translation.
BNDSTX
mib,bnd
0F 1B /r
MPX
Store the bounds in bnd and the pointer value in the index register of mib to a bound table entry (BTE) with address translation using the base of mib.
SIB.base(r): Address of pointer, SIB.index(r)
ModRM:reg(r)
NA
NA
BOUND--Check Array Index Against Bounds.
BOUND
r16,m16&16
62 /r
Check if r16 (array index) is within bounds specified by m16&16.
BOUND
r32,m32&32
62 /r
Check if r32 (array index) is within bounds specified by m32&32.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
BSF--Bit Scan Forward.
BSF
r16,r/m16
0F BC /r
Bit scan forward on r/m16.
BSF
r32,r/m32
0F BC /r
Bit scan forward on r/m32.
BSF
r64,r/m64
REX.W + 0F BC /r
Bit scan forward on r/m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
BSR--Bit Scan Reverse.
BSR
r16,r/m16
0F BD /r
Bit scan reverse on r/m16.
BSR
r32,r/m32
0F BD /r
Bit scan reverse on r/m32.
BSR
r64,r/m64
REX.W + 0F BD /r
Bit scan reverse on r/m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
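The bit scans above can be sketched with Python integer helpers; both instructions leave the destination undefined when the source is zero (ZF=1), which this sketch does not model.

```python
def bsf(x):
    # BSF: index of the least-significant set bit (x must be nonzero)
    return (x & -x).bit_length() - 1

def bsr(x):
    # BSR: index of the most-significant set bit (x must be nonzero)
    return x.bit_length() - 1
```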
BSWAP--Byte Swap.
BSWAP
r32
0F C8+rd
Reverses the byte order of a 32-bit register.
BSWAP
r64
REX.W + 0F C8+rd
Reverses the byte order of a 64-bit register.
opcode + rd(r,w)
NA
NA
NA
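BSWAP's byte reversal, sketched for the 32-bit form (the 64-bit form reverses eight bytes the same way):

```python
def bswap32(x):
    # reverse the byte order of a 32-bit value
    return int.from_bytes(x.to_bytes(4, 'little'), 'big')
```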
BT--Bit Test.
BT
r/m16,r16
0F A3 /r
Store selected bit in CF flag.
BT
r/m32,r32
0F A3 /r
Store selected bit in CF flag.
BT
r/m64,r64
REX.W + 0F A3 /r
Store selected bit in CF flag.
BT
r/m16,imm8
0F BA /4 ib
Store selected bit in CF flag.
BT
r/m32,imm8
0F BA /4 ib
Store selected bit in CF flag.
BT
r/m64,imm8
REX.W + 0F BA /4 ib
Store selected bit in CF flag.
ModRM:r/m(r)
ModRM:reg(r)
NA
NA
ModRM:r/m(r)
imm8(r)
NA
NA
BTC--Bit Test and Complement.
BTC
r/m16,r16
0F BB /r
Store selected bit in CF flag and complement.
BTC
r/m32,r32
0F BB /r
Store selected bit in CF flag and complement.
BTC
r/m64,r64
REX.W + 0F BB /r
Store selected bit in CF flag and complement.
BTC
r/m16,imm8
0F BA /7 ib
Store selected bit in CF flag and complement.
BTC
r/m32,imm8
0F BA /7 ib
Store selected bit in CF flag and complement.
BTC
r/m64,imm8
REX.W + 0F BA /7 ib
Store selected bit in CF flag and complement.
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
BTR--Bit Test and Reset.
BTR
r/m16,r16
0F B3 /r
Store selected bit in CF flag and clear.
BTR
r/m32,r32
0F B3 /r
Store selected bit in CF flag and clear.
BTR
r/m64,r64
REX.W + 0F B3 /r
Store selected bit in CF flag and clear.
BTR
r/m16,imm8
0F BA /6 ib
Store selected bit in CF flag and clear.
BTR
r/m32,imm8
0F BA /6 ib
Store selected bit in CF flag and clear.
BTR
r/m64,imm8
REX.W + 0F BA /6 ib
Store selected bit in CF flag and clear.
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
BTS--Bit Test and Set.
BTS
r/m16,r16
0F AB /r
Store selected bit in CF flag and set.
BTS
r/m32,r32
0F AB /r
Store selected bit in CF flag and set.
BTS
r/m64,r64
REX.W + 0F AB /r
Store selected bit in CF flag and set.
BTS
r/m16,imm8
0F BA /5 ib
Store selected bit in CF flag and set.
BTS
r/m32,imm8
0F BA /5 ib
Store selected bit in CF flag and set.
BTS
r/m64,imm8
REX.W + 0F BA /5 ib
Store selected bit in CF flag and set.
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
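The BT family above can be sketched for register operands, where the bit offset is taken modulo the operand width; the (new value, old bit) return convention is illustrative. Memory-operand forms, which can address bits outside the operand, are not modeled.

```python
WIDTH = 16  # register-operand width; the 32- and 64-bit forms work the same way

def bt(value, pos):
    # BT: return the selected bit (what the hardware copies into CF)
    return (value >> (pos % WIDTH)) & 1

def btc(value, pos):
    # BTC: complement the bit; returns (new value, old bit)
    return value ^ (1 << (pos % WIDTH)), bt(value, pos)

def btr(value, pos):
    # BTR: clear the bit; returns (new value, old bit)
    return value & ~(1 << (pos % WIDTH)) & 0xFFFF, bt(value, pos)

def bts(value, pos):
    # BTS: set the bit; returns (new value, old bit)
    return value | (1 << (pos % WIDTH)), bt(value, pos)
```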
BZHI--Zero High Bits Starting with Specified Bit Position.
BZHI
r32a,r/m32,r32b
VEX.NDS.LZ.0F38.W0 F5 /r
BMI2
Zero bits in r/m32 starting with the position in r32b, write result to r32a.
BZHI
r64a,r/m64,r64b
VEX.NDS.LZ.0F38.W1 F5 /r
BMI2
Zero bits in r/m64 starting with the position in r64b, write result to r64a.
ModRM:reg(w)
ModRM:r/m(r)
VEX.vvvv(r)
NA
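A sketch of BZHI, assuming the bit index comes from the low byte of the second source register; CF handling for out-of-range indices is noted but not returned.

```python
def bzhi(src, index, width=32):
    # BZHI: zero the bits at positions >= index (index taken from bits 7:0)
    n = index & 0xFF
    if n >= width:
        return src  # hardware also sets CF in this case
    return src & ((1 << n) - 1)
```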
CALL--Call Procedure.
CALL
rel16
E8 cw
Call near, relative, displacement relative to next instruction.
CALL
rel32
E8 cd
Call near, relative, displacement relative to next instruction. 32-bit displacement sign extended to 64-bits in 64-bit mode.
CALL
r/m16
FF /2
Call near, absolute indirect, address given in r/m16.
CALL
r/m32
FF /2
Call near, absolute indirect, address given in r/m32.
CALL
r/m64
FF /2
Call near, absolute indirect, address given in r/m64.
CALL
ptr16:16
9A cd
Call far, absolute, address given in operand.
CALL
ptr16:32
9A cp
Call far, absolute, address given in operand.
CALL
m16:16
FF /3
Call far, absolute indirect address given in m16:16. In 32-bit mode: if selector points to a gate, then RIP = 32-bit zero extended displacement taken from gate; else RIP = zero extended 16-bit offset from far pointer referenced in the instruction.
CALL
m16:32
FF /3
In 64-bit mode: If selector points to a gate, then RIP = 64-bit displacement taken from gate; else RIP = zero extended 32-bit offset from far pointer referenced in the instruction.
CALL
m16:64
REX.W + FF /3
In 64-bit mode: If selector points to a gate, then RIP = 64-bit displacement taken from gate; else RIP = 64-bit offset from far pointer referenced in the instruction.
Offset
NA
NA
NA
ModRM:r/m(r)
NA
NA
NA
CBW/CWDE/CDQE--Convert Byte to Word/Convert Word to Doubleword/Convert Doubleword to Quadword.
CBW
void
98
AX <-- sign-extend of AL.
CWDE
void
98
EAX <-- sign-extend of AX.
CDQE
void
REX.W + 98
RAX <-- sign-extend of EAX.
NA
NA
NA
NA
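CBW, CWDE, and CDQE all widen by replicating the sign bit; a generic sketch:

```python
def sign_extend(value, from_bits):
    # CBW (8->16), CWDE (16->32), CDQE (32->64) all follow this pattern
    sign = 1 << (from_bits - 1)
    return (value & (sign - 1)) - (value & sign)
```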
CLAC--Clear AC Flag in EFLAGS Register.
CLAC
void
0F 01 CA
Clear the AC flag in the EFLAGS register.
NA
NA
NA
NA
CLC--Clear Carry Flag.
CLC
void
F8
Clear CF flag.
NA
NA
NA
NA
CLD--Clear Direction Flag.
CLD
void
FC
Clear DF flag.
NA
NA
NA
NA
CLFLUSH--Flush Cache Line.
CLFLUSH
m8
0F AE /7
Flushes cache line containing m8.
ModRM:r/m(w)
NA
NA
NA
CLI--Clear Interrupt Flag.
CLI
void
FA
Clear interrupt flag; interrupts disabled when interrupt flag cleared.
NA
NA
NA
NA
CLTS--Clear Task-Switched Flag in CR0.
CLTS
void
0F 06
Clears TS flag in CR0.
NA
NA
NA
NA
CMC--Complement Carry Flag.
CMC
void
F5
Complement CF flag.
NA
NA
NA
NA
CMOVcc--Conditional Move.
CMOVA
r16,r/m16
0F 47 /r
Move if above (CF=0 and ZF=0).
CMOVA
r32,r/m32
0F 47 /r
Move if above (CF=0 and ZF=0).
CMOVA
r64,r/m64
REX.W + 0F 47 /r
Move if above (CF=0 and ZF=0).
CMOVAE
r16,r/m16
0F 43 /r
Move if above or equal (CF=0).
CMOVAE
r32,r/m32
0F 43 /r
Move if above or equal (CF=0).
CMOVAE
r64,r/m64
REX.W + 0F 43 /r
Move if above or equal (CF=0).
CMOVB
r16,r/m16
0F 42 /r
Move if below (CF=1).
CMOVB
r32,r/m32
0F 42 /r
Move if below (CF=1).
CMOVB
r64,r/m64
REX.W + 0F 42 /r
Move if below (CF=1).
CMOVBE
r16,r/m16
0F 46 /r
Move if below or equal (CF=1 or ZF=1).
CMOVBE
r32,r/m32
0F 46 /r
Move if below or equal (CF=1 or ZF=1).
CMOVBE
r64,r/m64
REX.W + 0F 46 /r
Move if below or equal (CF=1 or ZF=1).
CMOVC
r16,r/m16
0F 42 /r
Move if carry (CF=1).
CMOVC
r32,r/m32
0F 42 /r
Move if carry (CF=1).
CMOVC
r64,r/m64
REX.W + 0F 42 /r
Move if carry (CF=1).
CMOVE
r16,r/m16
0F 44 /r
Move if equal (ZF=1).
CMOVE
r32,r/m32
0F 44 /r
Move if equal (ZF=1).
CMOVE
r64,r/m64
REX.W + 0F 44 /r
Move if equal (ZF=1).
CMOVG
r16,r/m16
0F 4F /r
Move if greater (ZF=0 and SF=OF).
CMOVG
r32,r/m32
0F 4F /r
Move if greater (ZF=0 and SF=OF).
CMOVG
r64,r/m64
REX.W + 0F 4F /r
Move if greater (ZF=0 and SF=OF).
CMOVGE
r16,r/m16
0F 4D /r
Move if greater or equal (SF=OF).
CMOVGE
r32,r/m32
0F 4D /r
Move if greater or equal (SF=OF).
CMOVGE
r64,r/m64
REX.W + 0F 4D /r
Move if greater or equal (SF=OF).
CMOVL
r16,r/m16
0F 4C /r
Move if less (SF != OF).
CMOVL
r32,r/m32
0F 4C /r
Move if less (SF != OF).
CMOVL
r64,r/m64
REX.W + 0F 4C /r
Move if less (SF != OF).
CMOVLE
r16,r/m16
0F 4E /r
Move if less or equal (ZF=1 or SF != OF).
CMOVLE
r32,r/m32
0F 4E /r
Move if less or equal (ZF=1 or SF != OF).
CMOVLE
r64,r/m64
REX.W + 0F 4E /r
Move if less or equal (ZF=1 or SF != OF).
CMOVNA
r16,r/m16
0F 46 /r
Move if not above (CF=1 or ZF=1).
CMOVNA
r32,r/m32
0F 46 /r
Move if not above (CF=1 or ZF=1).
CMOVNA
r64,r/m64
REX.W + 0F 46 /r
Move if not above (CF=1 or ZF=1).
CMOVNAE
r16,r/m16
0F 42 /r
Move if not above or equal (CF=1).
CMOVNAE
r32,r/m32
0F 42 /r
Move if not above or equal (CF=1).
CMOVNAE
r64,r/m64
REX.W + 0F 42 /r
Move if not above or equal (CF=1).
CMOVNB
r16,r/m16
0F 43 /r
Move if not below (CF=0).
CMOVNB
r32,r/m32
0F 43 /r
Move if not below (CF=0).
CMOVNB
r64,r/m64
REX.W + 0F 43 /r
Move if not below (CF=0).
CMOVNBE
r16,r/m16
0F 47 /r
Move if not below or equal (CF=0 and ZF=0).
CMOVNBE
r32,r/m32
0F 47 /r
Move if not below or equal (CF=0 and ZF=0).
CMOVNBE
r64,r/m64
REX.W + 0F 47 /r
Move if not below or equal (CF=0 and ZF=0).
CMOVNC
r16,r/m16
0F 43 /r
Move if not carry (CF=0).
CMOVNC
r32,r/m32
0F 43 /r
Move if not carry (CF=0).
CMOVNC
r64,r/m64
REX.W + 0F 43 /r
Move if not carry (CF=0).
CMOVNE
r16,r/m16
0F 45 /r
Move if not equal (ZF=0).
CMOVNE
r32,r/m32
0F 45 /r
Move if not equal (ZF=0).
CMOVNE
r64,r/m64
REX.W + 0F 45 /r
Move if not equal (ZF=0).
CMOVNG
r16,r/m16
0F 4E /r
Move if not greater (ZF=1 or SF != OF).
CMOVNG
r32,r/m32
0F 4E /r
Move if not greater (ZF=1 or SF != OF).
CMOVNG
r64,r/m64
REX.W + 0F 4E /r
Move if not greater (ZF=1 or SF != OF).
CMOVNGE
r16,r/m16
0F 4C /r
Move if not greater or equal (SF != OF).
CMOVNGE
r32,r/m32
0F 4C /r
Move if not greater or equal (SF != OF).
CMOVNGE
r64,r/m64
REX.W + 0F 4C /r
Move if not greater or equal (SF != OF).
CMOVNL
r16,r/m16
0F 4D /r
Move if not less (SF=OF).
CMOVNL
r32,r/m32
0F 4D /r
Move if not less (SF=OF).
CMOVNL
r64,r/m64
REX.W + 0F 4D /r
Move if not less (SF=OF).
CMOVNLE
r16,r/m16
0F 4F /r
Move if not less or equal (ZF=0 and SF=OF).
CMOVNLE
r32,r/m32
0F 4F /r
Move if not less or equal (ZF=0 and SF=OF).
CMOVNLE
r64,r/m64
REX.W + 0F 4F /r
Move if not less or equal (ZF=0 and SF=OF).
CMOVNO
r16,r/m16
0F 41 /r
Move if not overflow (OF=0).
CMOVNO
r32,r/m32
0F 41 /r
Move if not overflow (OF=0).
CMOVNO
r64,r/m64
REX.W + 0F 41 /r
Move if not overflow (OF=0).
CMOVNP
r16,r/m16
0F 4B /r
Move if not parity (PF=0).
CMOVNP
r32,r/m32
0F 4B /r
Move if not parity (PF=0).
CMOVNP
r64,r/m64
REX.W + 0F 4B /r
Move if not parity (PF=0).
CMOVNS
r16,r/m16
0F 49 /r
Move if not sign (SF=0).
CMOVNS
r32,r/m32
0F 49 /r
Move if not sign (SF=0).
CMOVNS
r64,r/m64
REX.W + 0F 49 /r
Move if not sign (SF=0).
CMOVNZ
r16,r/m16
0F 45 /r
Move if not zero (ZF=0).
CMOVNZ
r32,r/m32
0F 45 /r
Move if not zero (ZF=0).
CMOVNZ
r64,r/m64
REX.W + 0F 45 /r
Move if not zero (ZF=0).
CMOVO
r16,r/m16
0F 40 /r
Move if overflow (OF=1).
CMOVO
r32,r/m32
0F 40 /r
Move if overflow (OF=1).
CMOVO
r64,r/m64
REX.W + 0F 40 /r
Move if overflow (OF=1).
CMOVP
r16,r/m16
0F 4A /r
Move if parity (PF=1).
CMOVP
r32,r/m32
0F 4A /r
Move if parity (PF=1).
CMOVP
r64,r/m64
REX.W + 0F 4A /r
Move if parity (PF=1).
CMOVPE
r16,r/m16
0F 4A /r
Move if parity even (PF=1).
CMOVPE
r32,r/m32
0F 4A /r
Move if parity even (PF=1).
CMOVPE
r64,r/m64
REX.W + 0F 4A /r
Move if parity even (PF=1).
CMOVPO
r16,r/m16
0F 4B /r
Move if parity odd (PF=0).
CMOVPO
r32,r/m32
0F 4B /r
Move if parity odd (PF=0).
CMOVPO
r64,r/m64
REX.W + 0F 4B /r
Move if parity odd (PF=0).
CMOVS
r16,r/m16
0F 48 /r
Move if sign (SF=1).
CMOVS
r32,r/m32
0F 48 /r
Move if sign (SF=1).
CMOVS
r64,r/m64
REX.W + 0F 48 /r
Move if sign (SF=1).
CMOVZ
r16,r/m16
0F 44 /r
Move if zero (ZF=1).
CMOVZ
r32,r/m32
0F 44 /r
Move if zero (ZF=1).
CMOVZ
r64,r/m64
REX.W + 0F 44 /r
Move if zero (ZF=1).
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
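Two representative CMOVcc predicates from the table above, sketched in Python; passing flags as arguments is an illustrative convention (hardware reads EFLAGS).

```python
def cmova(dest, src, cf, zf):
    # CMOVA: replace dest only when CF=0 and ZF=0
    return src if cf == 0 and zf == 0 else dest

def cmovl(dest, src, sf, of):
    # CMOVL: replace dest only when SF != OF
    return src if sf != of else dest
```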
CMP--Compare Two Operands.
CMP
AL,imm8
3C ib
Compare imm8 with AL.
CMP
AX,imm16
3D iw
Compare imm16 with AX.
CMP
EAX,imm32
3D id
Compare imm32 with EAX.
CMP
RAX,imm32
REX.W + 3D id
Compare imm32 sign-extended to 64-bits with RAX.
CMP
r/m8,imm8*
80 /7 ib
Compare imm8 with r/m8.
CMP
r/m8,imm8
REX + 80 /7 ib
Compare imm8 with r/m8.
CMP
r/m16,imm16
81 /7 iw
Compare imm16 with r/m16.
CMP
r/m32,imm32
81 /7 id
Compare imm32 with r/m32.
CMP
r/m64,imm32
REX.W + 81 /7 id
Compare imm32 sign-extended to 64-bits with r/m64.
CMP
r/m16,imm8
83 /7 ib
Compare imm8 with r/m16.
CMP
r/m32,imm8
83 /7 ib
Compare imm8 with r/m32.
CMP
r/m64,imm8
REX.W + 83 /7 ib
Compare imm8 with r/m64.
CMP
r/m8,r8**
38 /r
Compare r8 with r/m8.
CMP
r/m8,r8
REX + 38 /r
Compare r8 with r/m8.
CMP
r/m16,r16
39 /r
Compare r16 with r/m16.
CMP
r/m32,r32
39 /r
Compare r32 with r/m32.
CMP
r/m64,r64
REX.W + 39 /r
Compare r64 with r/m64.
CMP
r8,r/m8**
3A /r
Compare r/m8 with r8.
CMP
r8,r/m8
REX + 3A /r
Compare r/m8 with r8.
CMP
r16,r/m16
3B /r
Compare r/m16 with r16.
CMP
r32,r/m32
3B /r
Compare r/m32 with r32.
CMP
r64,r/m64
REX.W + 3B /r
Compare r/m64 with r64.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r)
ModRM:reg(r)
NA
NA
ModRM:r/m(r)
imm8(r)
NA
NA
AL/AX/EAX/RAX(r)
imm8(r)
NA
NA
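CMP computes the subtraction of the second operand from the first, discards the result, and sets the status flags. A minimal Python sketch of the flag logic for the 8-bit form (the function name `cmp8` and the flag-dictionary shape are illustrative, not part of the architecture):

```python
def cmp8(dst, src):
    """Model CMP on 8-bit operands: compute dst - src, discard the
    result, and return the status flags it would set (sketch only)."""
    res = (dst - src) & 0xFF
    return {
        "ZF": res == 0,                       # operands equal
        "SF": bool(res & 0x80),               # result negative
        "CF": src > dst,                      # borrow out of bit 7
        "OF": bool(((dst ^ src) & (dst ^ res)) & 0x80),  # signed overflow
    }
```

For example, `cmp8(0, 1)` sets CF and SF but not OF, while `cmp8(0x80, 1)` (i.e. -128 - 1) sets OF.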
CMPPD--Compare Packed Double-Precision Floating-Point Values.
CMPPD
xmm1,xmm2/m128,imm8
66 0F C2 /r ib
SSE2
Compare packed double-precision floating-point values in xmm2/m128 and xmm1 using imm8 as comparison predicate.
VCMPPD
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F.WIG C2 /r ib
AVX
Compare packed double-precision floating-point values in xmm3/m128 and xmm2 using bits 4:0 of imm8 as a comparison predicate.
VCMPPD
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F.WIG C2 /r ib
AVX
Compare packed double-precision floating-point values in ymm3/m256 and ymm2 using bits 4:0 of imm8 as a comparison predicate.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
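For the legacy (non-VEX) compare instructions, only bits 2:0 of imm8 select the predicate; each element pair evaluates to an all-ones or all-zeros mask lane. A sketch of the per-element predicate logic (the function name `cmp_predicate` is illustrative; NaN handling follows the ordered/unordered split of the eight legacy predicates):

```python
import math

def cmp_predicate(a, b, imm3):
    """Evaluate one legacy SSE compare predicate (imm8 bits 2:0) for a
    single element pair; True corresponds to an all-ones mask lane."""
    unordered = math.isnan(a) or math.isnan(b)
    p = imm3 & 0b111
    if p == 0: return a == b                     # EQ  (ordered)
    if p == 1: return a < b                      # LT  (ordered)
    if p == 2: return a <= b                     # LE  (ordered)
    if p == 3: return unordered                  # UNORD
    if p == 4: return unordered or a != b        # NEQ (unordered -> true)
    if p == 5: return unordered or not (a < b)   # NLT (unordered -> true)
    if p == 6: return unordered or not (a <= b)  # NLE (unordered -> true)
    return not unordered                         # ORD
```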
CMPPS--Compare Packed Single-Precision Floating-Point Values.
CMPPS
xmm1,xmm2/m128,imm8
0F C2 /r ib
SSE
Compare packed single-precision floating-point values in xmm2/mem and xmm1 using imm8 as comparison predicate.
VCMPPS
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.0F.WIG C2 /r ib
AVX
Compare packed single-precision floating-point values in xmm3/m128 and xmm2 using bits 4:0 of imm8 as a comparison predicate.
VCMPPS
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.0F.WIG C2 /r ib
AVX
Compare packed single-precision floating-point values in ymm3/m256 and ymm2 using bits 4:0 of imm8 as a comparison predicate.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
CMPS/CMPSB/CMPSW/CMPSD/CMPSQ--Compare String Operands.
CMPS
m8,m8
A6
For legacy mode, compare byte at address DS:(E)SI with byte at address ES:(E)DI; for 64-bit mode compare byte at address (R|E)SI with byte at address (R|E)DI. The status flags are set accordingly.
CMPS
m16,m16
A7
For legacy mode, compare word at address DS:(E)SI with word at address ES:(E)DI; for 64-bit mode compare word at address (R|E)SI with word at address (R|E)DI. The status flags are set accordingly.
CMPS
m32,m32
A7
For legacy mode, compare dword at address DS:(E)SI with dword at address ES:(E)DI; for 64-bit mode compare dword at address (R|E)SI with dword at address (R|E)DI. The status flags are set accordingly.
CMPS
m64,m64
REX.W + A7
Compares quadword at address (R|E)SI with quadword at address (R|E)DI and sets the status flags accordingly.
CMPSB
void
A6
For legacy mode, compare byte at address DS:(E)SI with byte at address ES:(E)DI; for 64-bit mode compare byte at address (R|E)SI with byte at address (R|E)DI. The status flags are set accordingly.
CMPSW
void
A7
For legacy mode, compare word at address DS:(E)SI with word at address ES:(E)DI; for 64-bit mode compare word at address (R|E)SI with word at address (R|E)DI. The status flags are set accordingly.
CMPSD
void
A7
For legacy mode, compare dword at address DS:(E)SI with dword at address ES:(E)DI; For 64-bit mode compare dword at address (R|E)SI with dword at address (R|E)DI. The status flags are set accordingly.
CMPSQ
void
REX.W + A7
Compares quadword at address (R|E)SI with quadword at address (R|E)DI and sets the status flags accordingly.
NA
NA
NA
NA
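Each CMPS step compares the two memory operands, sets the status flags, and then advances (E/R)SI and (E/R)DI by the element size, in the direction given by DF. A byte-sized sketch, modeling memory as one flat byte string (the function name `cmpsb` and the return tuple are illustrative; only ZF is modeled here):

```python
def cmpsb(mem, si, di, df=0):
    """Model one CMPSB step over a flat byte buffer: compare mem[si]
    with mem[di], report equality (ZF), and advance both indices by
    +1 (DF=0) or -1 (DF=1)."""
    zf = mem[si] == mem[di]
    step = -1 if df else 1
    return zf, si + step, di + step
```

With a REP/REPE prefix the hardware repeats this step while (E/R)CX is nonzero and the condition holds.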
CMPSD--Compare Scalar Double-Precision Floating-Point Values.
CMPSD
xmm1,xmm2/m64,imm8
F2 0F C2 /r ib
SSE2
Compare low double-precision floating-point value in xmm2/m64 and xmm1 using imm8 as comparison predicate.
VCMPSD
xmm1,xmm2,xmm3/m64,imm8
VEX.NDS.LIG.F2.0F.WIG C2 /r ib
AVX
Compare low double precision floating-point value in xmm3/m64 and xmm2 using bits 4:0 of imm8 as comparison predicate.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
CMPSS--Compare Scalar Single-Precision Floating-Point Values.
CMPSS
xmm1,xmm2/m32,imm8
F3 0F C2 /r ib
SSE
Compare low single-precision floating-point value in xmm2/m32 and xmm1 using imm8 as comparison predicate.
VCMPSS
xmm1,xmm2,xmm3/m32,imm8
VEX.NDS.LIG.F3.0F.WIG C2 /r ib
AVX
Compare low single precision floating-point value in xmm3/m32 and xmm2 using bits 4:0 of imm8 as comparison predicate.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
CMPXCHG--Compare and Exchange.
CMPXCHG
r/m8,r8
0F B0 /r
Compare AL with r/m8. If equal, ZF is set and r8 is loaded into r/m8. Else, clear ZF and load r/m8 into AL.
CMPXCHG
r/m8**,r8
REX + 0F B0 /r
Compare AL with r/m8. If equal, ZF is set and r8 is loaded into r/m8. Else, clear ZF and load r/m8 into AL.
CMPXCHG
r/m16,r16
0F B1 /r
Compare AX with r/m16. If equal, ZF is set and r16 is loaded into r/m16. Else, clear ZF and load r/m16 into AX.
CMPXCHG
r/m32,r32
0F B1 /r
Compare EAX with r/m32. If equal, ZF is set and r32 is loaded into r/m32. Else, clear ZF and load r/m32 into EAX.
CMPXCHG
r/m64,r64
REX.W + 0F B1 /r
Compare RAX with r/m64. If equal, ZF is set and r64 is loaded into r/m64. Else, clear ZF and load r/m64 into RAX.
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
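The CMPXCHG decision can be sketched in a few lines. This model (function name `cmpxchg` and the returned triple are illustrative) captures the accumulator/destination dataflow for one non-atomic step; on real hardware a LOCK prefix makes the read-modify-write atomic:

```python
def cmpxchg(accumulator, dest, src):
    """Model CMPXCHG r/m, r: if the accumulator (AL/AX/EAX/RAX) equals
    dest, set ZF and store src into dest; otherwise clear ZF and load
    dest into the accumulator. Returns (zf, accumulator, dest)."""
    if accumulator == dest:
        return True, accumulator, src    # ZF=1, dest <- src
    return False, dest, dest             # ZF=0, accumulator <- dest
```

Note that the accumulator is updated on failure, so a retry loop can reuse the freshly observed value without reloading memory.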
CMPXCHG8B/CMPXCHG16B--Compare and Exchange Bytes.
CMPXCHG8B
m64
0F C7 /1 m64
Compare EDX:EAX with m64. If equal, set ZF and load ECX:EBX into m64. Else, clear ZF and load m64 into EDX:EAX.
CMPXCHG16B
m128
REX.W + 0F C7 /1 m128
Compare RDX:RAX with m128. If equal, set ZF and load RCX:RBX into m128. Else, clear ZF and load m128 into RDX:RAX.
ModRM:r/m(r,w)
NA
NA
NA
COMISD--Compare Scalar Ordered Double-Precision Floating-Point Values and Set EFLAGS.
COMISD
xmm1,xmm2/m64
66 0F 2F /r
SSE2
Compare low double-precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
VCOMISD
xmm1,xmm2/m64
VEX.LIG.66.0F.WIG 2F /r
AVX
Compare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
COMISS--Compare Scalar Ordered Single-Precision Floating-Point Values and Set EFLAGS.
COMISS
xmm1,xmm2/m32
0F 2F /r
SSE
Compare low single-precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
VCOMISS
xmm1,xmm2/m32
VEX.LIG.0F.WIG 2F /r
AVX
Compare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
CPUID--CPU Identification.
CPUID
void
0F A2
Returns processor identification and feature information to the EAX, EBX, ECX, and EDX registers, as determined by input entered in EAX (in some cases, ECX as well).
NA
NA
NA
NA
CRC32--Accumulate CRC32 Value.
CRC32
r32,r/m8
F2 0F 38 F0 /r
Accumulate CRC32 on r/m8.
CRC32
r32,r/m8*
F2 REX 0F 38 F0 /r
Accumulate CRC32 on r/m8.
CRC32
r32,r/m16
F2 0F 38 F1 /r
Accumulate CRC32 on r/m16.
CRC32
r32,r/m32
F2 0F 38 F1 /r
Accumulate CRC32 on r/m32.
CRC32
r64,r/m8
F2 REX.W 0F 38 F0 /r
Accumulate CRC32 on r/m8.
CRC32
r64,r/m64
F2 REX.W 0F 38 F1 /r
Accumulate CRC32 on r/m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
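The CRC32 instruction accumulates a CRC using the CRC-32C (Castagnoli) polynomial 0x11EDC6F41, not the CRC-32 polynomial used by zlib. A bitwise reference sketch (the function name `crc32c` is illustrative; note the instruction itself operates on the raw state, while the conventional software interface shown here applies the usual initial/final XOR with 0xFFFFFFFF):

```python
def crc32c(data, crc=0):
    """Bitwise CRC-32C (Castagnoli) over a byte string, using the
    reflected polynomial 0x82F63B78 -- the same polynomial the SSE4.2
    CRC32 instruction accumulates one operand at a time."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```

Because of the paired inversions, results chain: `crc32c(b"456789", crc32c(b"123"))` equals `crc32c(b"123456789")`, mirroring how the instruction is applied incrementally across a buffer.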
CVTDQ2PD--Convert Packed Dword Integers to Packed Double-Precision FP Values.
CVTDQ2PD
xmm1,xmm2/m64
F3 0F E6 /r
SSE2
Convert two packed signed doubleword integers from xmm2/m64 to two packed double-precision floating-point values in xmm1.
VCVTDQ2PD
xmm1,xmm2/m64
VEX.128.F3.0F.WIG E6 /r
AVX
Convert two packed signed doubleword integers from xmm2/mem to two packed double-precision floating-point values in xmm1.
VCVTDQ2PD
ymm1,xmm2/m128
VEX.256.F3.0F.WIG E6 /r
AVX
Convert four packed signed doubleword integers from xmm2/mem to four packed double-precision floating-point values in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTDQ2PS--Convert Packed Dword Integers to Packed Single-Precision FP Values.
CVTDQ2PS
xmm1,xmm2/m128
0F 5B /r
SSE2
Convert four packed signed doubleword integers from xmm2/m128 to four packed single-precision floating-point values in xmm1.
VCVTDQ2PS
xmm1,xmm2/m128
VEX.128.0F.WIG 5B /r
AVX
Convert four packed signed doubleword integers from xmm2/mem to four packed single-precision floating-point values in xmm1.
VCVTDQ2PS
ymm1,ymm2/m256
VEX.256.0F.WIG 5B /r
AVX
Convert eight packed signed doubleword integers from ymm2/mem to eight packed single-precision floating-point values in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPD2DQ--Convert Packed Double-Precision FP Values to Packed Dword Integers.
CVTPD2DQ
xmm1,xmm2/m128
F2 0F E6 /r
SSE2
Convert two packed double-precision floating-point values from xmm2/m128 to two packed signed doubleword integers in xmm1.
VCVTPD2DQ
xmm1,xmm2/m128
VEX.128.F2.0F.WIG E6 /r
AVX
Convert two packed double-precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1.
VCVTPD2DQ
xmm1,ymm2/m256
VEX.256.F2.0F.WIG E6 /r
AVX
Convert four packed double-precision floating-point values in ymm2/mem to four signed doubleword integers in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPD2PI--Convert Packed Double-Precision FP Values to Packed Dword Integers.
CVTPD2PI
mm,xmm/m128
66 0F 2D /r
Convert two packed double-precision floating-point values from xmm/m128 to two packed signed doubleword integers in mm.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPD2PS--Convert Packed Double-Precision FP Values to Packed Single-Precision FP Values.
CVTPD2PS
xmm1,xmm2/m128
66 0F 5A /r
SSE2
Convert two packed double-precision floating-point values in xmm2/m128 to two packed single-precision floating-point values in xmm1.
VCVTPD2PS
xmm1,xmm2/m128
VEX.128.66.0F.WIG 5A /r
AVX
Convert two packed double-precision floating-point values in xmm2/mem to two single-precision floating-point values in xmm1.
VCVTPD2PS
xmm1,ymm2/m256
VEX.256.66.0F.WIG 5A /r
AVX
Convert four packed double-precision floating-point values in ymm2/mem to four single-precision floating-point values in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPI2PD--Convert Packed Dword Integers to Packed Double-Precision FP Values.
CVTPI2PD
xmm,mm/m64*
66 0F 2A /r
Convert two packed signed doubleword integers from mm/mem64 to two packed double-precision floating-point values in xmm.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPI2PS--Convert Packed Dword Integers to Packed Single-Precision FP Values.
CVTPI2PS
xmm,mm/m64
0F 2A /r
Convert two signed doubleword integers from mm/m64 to two single-precision floating-point values in xmm.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPS2DQ--Convert Packed Single-Precision FP Values to Packed Dword Integers.
CVTPS2DQ
xmm1,xmm2/m128
66 0F 5B /r
SSE2
Convert four packed single-precision floating-point values from xmm2/m128 to four packed signed doubleword integers in xmm1.
VCVTPS2DQ
xmm1,xmm2/m128
VEX.128.66.0F.WIG 5B /r
AVX
Convert four packed single-precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1.
VCVTPS2DQ
ymm1,ymm2/m256
VEX.256.66.0F.WIG 5B /r
AVX
Convert eight packed single-precision floating-point values from ymm2/mem to eight packed signed doubleword values in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPS2PD--Convert Packed Single-Precision FP Values to Packed Double-Precision FP Values.
CVTPS2PD
xmm1,xmm2/m64
0F 5A /r
SSE2
Convert two packed single-precision floating-point values in xmm2/m64 to two packed double-precision floating-point values in xmm1.
VCVTPS2PD
xmm1,xmm2/m64
VEX.128.0F.WIG 5A /r
AVX
Convert two packed single-precision floating-point values in xmm2/mem to two packed double-precision floating-point values in xmm1.
VCVTPS2PD
ymm1,xmm2/m128
VEX.256.0F.WIG 5A /r
AVX
Convert four packed single-precision floating-point values in xmm2/mem to four packed double-precision floating-point values in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTPS2PI--Convert Packed Single-Precision FP Values to Packed Dword Integers.
CVTPS2PI
mm,xmm/m64
0F 2D /r
Convert two packed single-precision floating-point values from xmm/m64 to two packed signed doubleword integers in mm.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTSD2SI--Convert Scalar Double-Precision FP Value to Integer.
CVTSD2SI
r32,xmm/m64
F2 0F 2D /r
SSE2
Convert one double-precision floating-point value from xmm/m64 to one signed doubleword integer r32.
CVTSD2SI
r64,xmm/m64
F2 REX.W 0F 2D /r
SSE2
Convert one double-precision floating-point value from xmm/m64 to one signed quadword integer sign-extended into r64.
VCVTSD2SI
r32,xmm1/m64
VEX.LIG.F2.0F.W0 2D /r
AVX
Convert one double precision floating-point value from xmm1/m64 to one signed doubleword integer r32.
VCVTSD2SI
r64,xmm1/m64
VEX.LIG.F2.0F.W1 2D /r
AVX
Convert one double precision floating-point value from xmm1/m64 to one signed quadword integer sign-extended into r64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
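CVTSD2SI rounds according to the rounding mode in MXCSR, which defaults to round-to-nearest with ties to even. Python's built-in `round()` uses the same tie-breaking rule, so a minimal sketch of the default-mode behavior (function name illustrative; overflow to the integer-indefinite value is not modeled):

```python
def cvtsd2si(x):
    """Model CVTSD2SI under the default MXCSR rounding mode:
    round to nearest, ties to even (ignores overflow handling)."""
    return round(x)
```

Ties-to-even is visible at half-way cases: 2.5 rounds to 2 while 3.5 rounds to 4.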
CVTSD2SS--Convert Scalar Double-Precision FP Value to Scalar Single-Precision FP Value.
CVTSD2SS
xmm1,xmm2/m64
F2 0F 5A /r
SSE2
Convert one double-precision floating-point value in xmm2/m64 to one single-precision floating-point value in xmm1.
VCVTSD2SS
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 5A /r
AVX
Convert one double-precision floating-point value in xmm3/m64 to one single-precision floating-point value and merge with high bits in xmm2.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
CVTSI2SD--Convert Dword Integer to Scalar Double-Precision FP Value.
CVTSI2SD
xmm,r/m32
F2 0F 2A /r
SSE2
Convert one signed doubleword integer from r/m32 to one double-precision floating-point value in xmm.
CVTSI2SD
xmm,r/m64
F2 REX.W 0F 2A /r
SSE2
Convert one signed quadword integer from r/m64 to one double-precision floating-point value in xmm.
VCVTSI2SD
xmm1,xmm2,r/m32
VEX.NDS.LIG.F2.0F.W0 2A /r
AVX
Convert one signed doubleword integer from r/m32 to one double-precision floating-point value in xmm1.
VCVTSI2SD
xmm1,xmm2,r/m64
VEX.NDS.LIG.F2.0F.W1 2A /r
AVX
Convert one signed quadword integer from r/m64 to one double-precision floating-point value in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
CVTSI2SS--Convert Dword Integer to Scalar Single-Precision FP Value.
CVTSI2SS
xmm,r/m32
F3 0F 2A /r
SSE
Convert one signed doubleword integer from r/m32 to one single-precision floating-point value in xmm.
CVTSI2SS
xmm,r/m64
F3 REX.W 0F 2A /r
SSE
Convert one signed quadword integer from r/m64 to one single-precision floating-point value in xmm.
VCVTSI2SS
xmm1,xmm2,r/m32
VEX.NDS.LIG.F3.0F.W0 2A /r
AVX
Convert one signed doubleword integer from r/m32 to one single-precision floating-point value in xmm1.
VCVTSI2SS
xmm1,xmm2,r/m64
VEX.NDS.LIG.F3.0F.W1 2A /r
AVX
Convert one signed quadword integer from r/m64 to one single-precision floating-point value in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
CVTSS2SD--Convert Scalar Single-Precision FP Value to Scalar Double-Precision FP Value.
CVTSS2SD
xmm1,xmm2/m32
F3 0F 5A /r
SSE2
Convert one single-precision floating-point value in xmm2/m32 to one double-precision floating-point value in xmm1.
VCVTSS2SD
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 5A /r
AVX
Convert one single-precision floating-point value in xmm3/m32 to one double-precision floating-point value and merge with high bits of xmm2.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
CVTSS2SI--Convert Scalar Single-Precision FP Value to Dword Integer.
CVTSS2SI
r32,xmm/m32
F3 0F 2D /r
SSE
Convert one single-precision floating-point value from xmm/m32 to one signed doubleword integer in r32.
CVTSS2SI
r64,xmm/m32
F3 REX.W 0F 2D /r
SSE
Convert one single-precision floating-point value from xmm/m32 to one signed quadword integer in r64.
VCVTSS2SI
r32,xmm1/m32
VEX.LIG.F3.0F.W0 2D /r
AVX
Convert one single-precision floating-point value from xmm1/m32 to one signed doubleword integer in r32.
VCVTSS2SI
r64,xmm1/m32
VEX.LIG.F3.0F.W1 2D /r
AVX
Convert one single-precision floating-point value from xmm1/m32 to one signed quadword integer in r64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTTPD2DQ--Convert with Truncation Packed Double-Precision FP Values to Packed Dword Integers.
CVTTPD2DQ
xmm1,xmm2/m128
66 0F E6 /r
SSE2
Convert two packed double-precision floating-point values from xmm2/m128 to two packed signed doubleword integers in xmm1 using truncation.
VCVTTPD2DQ
xmm1,xmm2/m128
VEX.128.66.0F.WIG E6 /r
AVX
Convert two packed double-precision floating-point values in xmm2/mem to two signed doubleword integers in xmm1 using truncation.
VCVTTPD2DQ
xmm1,ymm2/m256
VEX.256.66.0F.WIG E6 /r
AVX
Convert four packed double-precision floating-point values in ymm2/mem to four signed doubleword integers in xmm1 using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTTPD2PI--Convert with Truncation Packed Double-Precision FP Values to Packed Dword Integers.
CVTTPD2PI
mm,xmm/m128
66 0F 2C /r
Convert two packed double-precision floating-point values from xmm/m128 to two packed signed doubleword integers in mm using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTTPS2DQ--Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers.
CVTTPS2DQ
xmm1,xmm2/m128
F3 0F 5B /r
SSE2
Convert four single-precision floating-point values from xmm2/m128 to four signed doubleword integers in xmm1 using truncation.
VCVTTPS2DQ
xmm1,xmm2/m128
VEX.128.F3.0F.WIG 5B /r
AVX
Convert four packed single-precision floating-point values from xmm2/mem to four packed signed doubleword values in xmm1 using truncation.
VCVTTPS2DQ
ymm1,ymm2/m256
VEX.256.F3.0F.WIG 5B /r
AVX
Convert eight packed single-precision floating-point values from ymm2/mem to eight packed signed doubleword values in ymm1 using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTTPS2PI--Convert with Truncation Packed Single-Precision FP Values to Packed Dword Integers.
CVTTPS2PI
mm,xmm/m64
0F 2C /r
Convert two single-precision floating-point values from xmm/m64 to two signed doubleword integers in mm using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CVTTSD2SI--Convert with Truncation Scalar Double-Precision FP Value to Signed Integer.
CVTTSD2SI
r32,xmm/m64
F2 0F 2C /r
SSE2
Convert one double-precision floating-point value from xmm/m64 to one signed doubleword integer in r32 using truncation.
CVTTSD2SI
r64,xmm/m64
F2 REX.W 0F 2C /r
SSE2
Convert one double-precision floating-point value from xmm/m64 to one signed quadword integer in r64 using truncation.
VCVTTSD2SI
r32,xmm1/m64
VEX.LIG.F2.0F.W0 2C /r
AVX
Convert one double-precision floating-point value from xmm1/m64 to one signed doubleword integer in r32 using truncation.
VCVTTSD2SI
r64,xmm1/m64
VEX.LIG.F2.0F.W1 2C /r
AVX
Convert one double precision floating-point value from xmm1/m64 to one signed quadword integer in r64 using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
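Unlike CVTSD2SI, the "with truncation" variants always round toward zero regardless of the MXCSR rounding mode. That is exactly `math.trunc` (sketch; function name illustrative, overflow not modeled):

```python
import math

def cvttsd2si(x):
    """Model CVTTSD2SI: convert with truncation, i.e. round toward
    zero irrespective of the MXCSR rounding mode."""
    return math.trunc(x)
```

This matches the conversion C compilers use for a double-to-int cast, which is why CVTTSD2SI (rather than CVTSD2SI) appears in compiled casts.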
CVTTSS2SI--Convert with Truncation Scalar Single-Precision FP Value to Dword Integer.
CVTTSS2SI
r32,xmm/m32
F3 0F 2C /r
SSE
Convert one single-precision floating-point value from xmm/m32 to one signed doubleword integer in r32 using truncation.
CVTTSS2SI
r64,xmm/m32
F3 REX.W 0F 2C /r
SSE
Convert one single-precision floating-point value from xmm/m32 to one signed quadword integer in r64 using truncation.
VCVTTSS2SI
r32,xmm1/m32
VEX.LIG.F3.0F.W0 2C /r
AVX
Convert one single-precision floating-point value from xmm1/m32 to one signed doubleword integer in r32 using truncation.
VCVTTSS2SI
r64,xmm1/m32
VEX.LIG.F3.0F.W1 2C /r
AVX
Convert one single-precision floating-point value from xmm1/m32 to one signed quadword integer in r64 using truncation.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
CWD/CDQ/CQO--Convert Word to Doubleword/Convert Doubleword to Quadword.
CWD
void
99
DX:AX <-- sign-extend of AX.
CDQ
void
99
EDX:EAX <-- sign-extend of EAX.
CQO
void
REX.W + 99
RDX:RAX <-- sign-extend of RAX.
NA
NA
NA
NA
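These instructions replicate the accumulator's sign bit through the data register, typically to set up a signed IDIV. A sketch of CDQ on 32-bit values (function name illustrative; CWD and CQO are the same pattern at 16 and 64 bits):

```python
def cdq(eax):
    """Model CDQ: sign-extend EAX into EDX:EAX. EDX becomes all ones
    if bit 31 of EAX is set, else zero."""
    edx = 0xFFFFFFFF if eax & 0x80000000 else 0
    return edx, eax
```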
DAA--Decimal Adjust AL after Addition.
DAA
void
27
Decimal adjust AL after addition.
NA
NA
NA
NA
DAS--Decimal Adjust AL after Subtraction.
DAS
void
2F
Decimal adjust AL after subtraction.
NA
NA
NA
NA
DEC--Decrement by 1.
DEC
r/m8*
FE /1
Decrement r/m8 by 1.
DEC
r/m8
REX + FE /1
Decrement r/m8 by 1.
DEC
r/m16
FF /1
Decrement r/m16 by 1.
DEC
r/m32
FF /1
Decrement r/m32 by 1.
DEC
r/m64
REX.W + FF /1
Decrement r/m64 by 1.
DEC
r16
48+rw
Decrement r16 by 1.
DEC
r32
48+rd
Decrement r32 by 1.
ModRM:r/m(r,w)
NA
NA
NA
opcode + rd(r,w)
NA
NA
NA
DIV--Unsigned Divide.
DIV
r/m8*
F6 /6
Unsigned divide AX by r/m8, with result stored in AL <-- Quotient, AH <-- Remainder.
DIV
r/m8
REX + F6 /6
Unsigned divide AX by r/m8, with result stored in AL <-- Quotient, AH <-- Remainder.
DIV
r/m16
F7 /6
Unsigned divide DX:AX by r/m16, with result stored in AX <-- Quotient, DX <-- Remainder.
DIV
r/m32
F7 /6
Unsigned divide EDX:EAX by r/m32, with result stored in EAX <-- Quotient, EDX <-- Remainder.
DIV
r/m64
REX.W + F7 /6
Unsigned divide RDX:RAX by r/m64, with result stored in RAX <-- Quotient, RDX <-- Remainder.
ModRM:r/m(r)
NA
NA
NA
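The double-width dividend and the #DE conditions are the parts of DIV most easily gotten wrong. A sketch of the 32-bit form (function name `div32` illustrative; both #DE causes are modeled as Python exceptions):

```python
def div32(edx, eax, divisor):
    """Model 32-bit DIV: divide the 64-bit value EDX:EAX by r/m32.
    Quotient -> EAX, remainder -> EDX. Raises for both #DE cases:
    divide-by-zero and a quotient too large for EAX."""
    if divisor == 0:
        raise ZeroDivisionError("#DE: divide by zero")
    quotient, remainder = divmod((edx << 32) | eax, divisor)
    if quotient > 0xFFFFFFFF:
        raise OverflowError("#DE: quotient does not fit in EAX")
    return quotient, remainder
```

Note that #DE on overflow is why unsigned 32-bit division normally zeroes EDX first: any nonzero EDX >= divisor would overflow the quotient.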
DIVPD--Divide Packed Double-Precision Floating-Point Values.
DIVPD
xmm1,xmm2/m128
66 0F 5E /r
SSE2
Divide packed double-precision floating-point values in xmm1 by packed double-precision floating-point values xmm2/m128.
VDIVPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 5E /r
AVX
Divide packed double-precision floating-point values in xmm2 by packed double-precision floating-point values in xmm3/mem.
VDIVPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 5E /r
AVX
Divide packed double-precision floating-point values in ymm2 by packed double-precision floating-point values in ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
DIVPS--Divide Packed Single-Precision Floating-Point Values.
DIVPS
xmm1,xmm2/m128
0F 5E /r
SSE
Divide packed single-precision floating-point values in xmm1 by packed single-precision floating-point values xmm2/m128.
VDIVPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 5E /r
AVX
Divide packed single-precision floating-point values in xmm2 by packed single-precision floating-point values in xmm3/mem.
VDIVPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 5E /r
AVX
Divide packed single-precision floating-point values in ymm2 by packed single-precision floating-point values in ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
DIVSD--Divide Scalar Double-Precision Floating-Point Values.
DIVSD
xmm1,xmm2/m64
F2 0F 5E /r
SSE2
Divide low double-precision floating-point value in xmm1 by low double-precision floating-point value in xmm2/mem64.
VDIVSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 5E /r
AVX
Divide low double-precision floating point values in xmm2 by low double precision floating-point value in xmm3/mem64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
DIVSS--Divide Scalar Single-Precision Floating-Point Values.
DIVSS
xmm1,xmm2/m32
F3 0F 5E /r
SSE
Divide low single-precision floating-point value in xmm1 by low single-precision floating-point value in xmm2/m32.
VDIVSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 5E /r
AVX
Divide low single-precision floating point value in xmm2 by low single precision floating-point value in xmm3/m32.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
DPPD--Dot Product of Packed Double Precision Floating-Point Values.
DPPD
xmm1,xmm2/m128,imm8
66 0F 3A 41 /r ib
SSE4_1
Selectively multiply packed DP floating-point values from xmm1 with packed DP floating-point values from xmm2, add and selectively store the packed DP floating-point values to xmm1.
VDPPD
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 41 /r ib
AVX
Selectively multiply packed DP floating-point values from xmm2 with packed DP floating-point values from xmm3, add and selectively store the packed DP floating-point values to xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
DPPS--Dot Product of Packed Single Precision Floating-Point Values.
DPPS
xmm1,xmm2/m128,imm8
66 0F 3A 40 /r ib
SSE4_1
Selectively multiply packed SP floating-point values from xmm1 with packed SP floating-point values from xmm2, add and selectively store the packed SP floating-point values or zero values to xmm1.
VDPPS
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 40 /r ib
AVX
Multiply packed SP floating-point values from xmm2 with packed SP floating-point values from xmm3/mem, selectively add and store to xmm1.
VDPPS
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 40 /r ib
AVX
Multiply packed single-precision floating-point values from ymm2 with packed SP floating point values from ymm3/mem, selectively add pairs of elements and store to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
EMMS--Empty MMX Technology State.
EMMS
void
0F 77
Set the x87 FPU tag word to empty.
NA
NA
NA
NA
ENTER--Make Stack Frame for Procedure Parameters.
ENTER
imm16,0
C8 iw 00
Create a stack frame for a procedure.
ENTER
imm16,1
C8 iw 01
Create a stack frame with a nested pointer for a procedure.
ENTER
imm16,imm8
C8 iw ib
Create a stack frame with nested pointers for a procedure.
iw
imm8(r)
NA
NA
EXTRACTPS--Extract Packed Single Precision Floating-Point Value.
EXTRACTPS
reg/m32,xmm2,imm8
66 0F 3A 17 /r ib
SSE4_1
Extract a single-precision floating-point value from xmm2 at the source offset specified by imm8 and store the result to reg or m32. The upper 32 bits of r64 are zeroed if reg is r64.
VEXTRACTPS
r/m32,xmm1,imm8
VEX.128.66.0F3A.WIG 17 /r ib
AVX
Extract one single-precision floating-point value from xmm1 at the offset specified by imm8 and store the result in reg or m32. Zero extend the results in 64-bit register if applicable.
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
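EXTRACTPS simply selects one of the four 32-bit lanes by imm8 bits 1:0. Treating the XMM register as a 128-bit integer, a sketch (function name illustrative):

```python
def extractps(xmm, imm8):
    """Model EXTRACTPS: return the 32-bit lane of a 128-bit value
    selected by imm8 bits 1:0 (higher imm8 bits are ignored)."""
    return (xmm >> (32 * (imm8 & 0b11))) & 0xFFFFFFFF
```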
F2XM1--Compute 2^x - 1.
F2XM1
void
D9 F0
Replace ST(0) with (2^ST(0) - 1).
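F2XM1 is defined for ST(0) in [-1, +1] and computes 2^x - 1 rather than 2^x so that values near zero keep full precision. A sketch using `expm1`, which preserves that same near-zero accuracy (function name illustrative):

```python
import math

def f2xm1(x):
    """Model F2XM1 for x in [-1, +1]: 2**x - 1, computed as
    expm1(x * ln 2) to avoid cancellation near zero."""
    return math.expm1(x * math.log(2.0))
```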
FABS--Absolute Value.
FABS
void
D9 E1
Replace ST with its absolute value.
FADD/FADDP/FIADD--Add.
FADD
m32fp
D8 /0
Add m32fp to ST(0) and store result in ST(0).
FADD
m64fp
DC /0
Add m64fp to ST(0) and store result in ST(0).
FADD
ST(0),ST(i)
D8 C0+i
Add ST(0) to ST(i) and store result in ST(0).
FADD
ST(i),ST(0)
DC C0+i
Add ST(i) to ST(0) and store result in ST(i).
FADDP
ST(i),ST(0)
DE C0+i
Add ST(0) to ST(i), store result in ST(i), and pop the register stack.
FADDP
void
DE C1
Add ST(0) to ST(1), store result in ST(1), and pop the register stack.
FIADD
m32int
DA /0
Add m32int to ST(0) and store result in ST(0).
FIADD
m16int
DE /0
Add m16int to ST(0) and store result in ST(0).
FBLD--Load Binary Coded Decimal.
FBLD
m80dec
DF /4
Convert BCD value to floating-point and push onto the FPU stack.
FBSTP--Store BCD Integer and Pop.
FBSTP
m80bcd
DF /6
Store ST(0) in m80bcd and pop ST(0).
FCHS--Change Sign.
FCHS
void
D9 E0
Complements sign of ST(0).
FCLEX/FNCLEX--Clear Exceptions.
FCLEX*
void
9B DB E2
Clear floating-point exception flags after checking for pending unmasked floating-point exceptions.
FNCLEX
void
DB E2
Clear floating-point exception flags without checking for pending unmasked floating-point exceptions.
FCMOVcc--Floating-Point Conditional Move.
FCMOVB
ST(0),ST(i)
DA C0+i
Move if below (CF=1).
FCMOVE
ST(0),ST(i)
DA C8+i
Move if equal (ZF=1).
FCMOVBE
ST(0),ST(i)
DA D0+i
Move if below or equal (CF=1 or ZF=1).
FCMOVU
ST(0),ST(i)
DA D8+i
Move if unordered (PF=1).
FCMOVNB
ST(0),ST(i)
DB C0+i
Move if not below (CF=0).
FCMOVNE
ST(0),ST(i)
DB C8+i
Move if not equal (ZF=0).
FCMOVNBE
ST(0),ST(i)
DB D0+i
Move if not below or equal (CF=0 and ZF=0).
FCMOVNU
ST(0),ST(i)
DB D8+i
Move if not unordered (PF=0).
FCOM/FCOMP/FCOMPP--Compare Floating Point Values.
FCOM
m32fp
D8 /2
Compare ST(0) with m32fp.
FCOM
m64fp
DC /2
Compare ST(0) with m64fp.
FCOM
ST(i)
D8 D0+i
Compare ST(0) with ST(i).
FCOM
void
D8 D1
Compare ST(0) with ST(1).
FCOMP
m32fp
D8 /3
Compare ST(0) with m32fp and pop register stack.
FCOMP
m64fp
DC /3
Compare ST(0) with m64fp and pop register stack.
FCOMP
ST(i)
D8 D8+i
Compare ST(0) with ST(i) and pop register stack.
FCOMP
void
D8 D9
Compare ST(0) with ST(1) and pop register stack.
FCOMPP
void
DE D9
Compare ST(0) with ST(1) and pop register stack twice.
FCOMI
ST,ST(i)
DB F0+i
Compare ST(0) with ST(i) and set status flags accordingly.
FCOMIP
ST,ST(i)
DF F0+i
Compare ST(0) with ST(i), set status flags accordingly, and pop register stack.
FUCOMI
ST,ST(i)
DB E8+i
Compare ST(0) with ST(i), check for ordered values, and set status flags accordingly.
FUCOMIP
ST,ST(i)
DF E8+i
Compare ST(0) with ST(i), check for ordered values, set status flags accordingly, and pop register stack.
FCOS--Cosine.
FCOS
void
D9 FF
Replace ST(0) with its approximate cosine.
FDECSTP--Decrement Stack-Top Pointer.
FDECSTP
void
D9 F6
Decrement TOP field in FPU status word.
FDIV/FDIVP/FIDIV--Divide.
FDIV
m32fp
D8 /6
Divide ST(0) by m32fp and store result in ST(0).
FDIV
m64fp
DC /6
Divide ST(0) by m64fp and store result in ST(0).
FDIV
ST(0),ST(i)
D8 F0+i
Divide ST(0) by ST(i) and store result in ST(0).
FDIV
ST(i),ST(0)
DC F8+i
Divide ST(i) by ST(0) and store result in ST(i).
FDIVP
ST(i),ST(0)
DE F8+i
Divide ST(i) by ST(0), store result in ST(i), and pop the register stack.
FDIVP
void
DE F9
Divide ST(1) by ST(0), store result in ST(1), and pop the register stack.
FIDIV
m32int
DA /6
Divide ST(0) by m32int and store result in ST(0).
FIDIV
m16int
DE /6
Divide ST(0) by m16int and store result in ST(0).
FDIVR/FDIVRP/FIDIVR--Reverse Divide.
FDIVR
m32fp
D8 /7
Divide m32fp by ST(0) and store result in ST(0).
FDIVR
m64fp
DC /7
Divide m64fp by ST(0) and store result in ST(0).
FDIVR
ST(0),ST(i)
D8 F8+i
Divide ST(i) by ST(0) and store result in ST(0).
FDIVR
ST(i),ST(0)
DC F0+i
Divide ST(0) by ST(i) and store result in ST(i).
FDIVRP
ST(i),ST(0)
DE F0+i
Divide ST(0) by ST(i), store result in ST(i), and pop the register stack.
FDIVRP
void
DE F1
Divide ST(0) by ST(1), store result in ST(1), and pop the register stack.
FIDIVR
m32int
DA /7
Divide m32int by ST(0) and store result in ST(0).
FIDIVR
m16int
DE /7
Divide m16int by ST(0) and store result in ST(0).
FFREE--Free Floating-Point Register.
FFREE
ST(i)
DD C0+i
Sets tag for ST(i) to empty.
FICOM/FICOMP--Compare Integer.
FICOM
m16int
DE /2
Compare ST(0) with m16int.
FICOM
m32int
DA /2
Compare ST(0) with m32int.
FICOMP
m16int
DE /3
Compare ST(0) with m16int and pop stack register.
FICOMP
m32int
DA /3
Compare ST(0) with m32int and pop stack register.
FILD--Load Integer.
FILD
m16int
DF /0
Push m16int onto the FPU register stack.
FILD
m32int
DB /0
Push m32int onto the FPU register stack.
FILD
m64int
DF /5
Push m64int onto the FPU register stack.
FINCSTP--Increment Stack-Top Pointer.
FINCSTP
void
D9 F7
Increment the TOP field in the FPU status register.
FINIT/FNINIT--Initialize Floating-Point Unit.
FINIT*
void
9B DB E3
Initialize FPU after checking for pending unmasked floating-point exceptions.
FNINIT
void
DB E3
Initialize FPU without checking for pending unmasked floating-point exceptions.
FIST/FISTP--Store Integer.
FIST
m16int
DF /2
Store ST(0) in m16int.
FIST
m32int
DB /2
Store ST(0) in m32int.
FISTP
m16int
DF /3
Store ST(0) in m16int and pop register stack.
FISTP
m32int
DB /3
Store ST(0) in m32int and pop register stack.
FISTP
m64int
DF /7
Store ST(0) in m64int and pop register stack.
FISTTP--Store Integer with Truncation.
FISTTP
m16int
DF /1
Store ST(0) in m16int with truncation.
FISTTP
m32int
DB /1
Store ST(0) in m32int with truncation.
FISTTP
m64int
DD /1
Store ST(0) in m64int with truncation.
FLD--Load Floating Point Value.
FLD
m32fp
D9 /0
Push m32fp onto the FPU register stack.
FLD
m64fp
DD /0
Push m64fp onto the FPU register stack.
FLD
m80fp
DB /5
Push m80fp onto the FPU register stack.
FLD
ST(i)
D9 C0+i
Push ST(i) onto the FPU register stack.
FLD1/FLDL2T/FLDL2E/FLDPI/FLDLG2/FLDLN2/FLDZ--Load Constant.
FLD1
void
D9 E8
Push +1.0 onto the FPU register stack.
FLDL2T
void
D9 E9
Push log2(10) onto the FPU register stack.
FLDL2E
void
D9 EA
Push log2(e) onto the FPU register stack.
FLDPI
void
D9 EB
Push π onto the FPU register stack.
FLDLG2
void
D9 EC
Push log10(2) onto the FPU register stack.
FLDLN2
void
D9 ED
Push loge(2) onto the FPU register stack.
FLDZ
void
D9 EE
Push +0.0 onto the FPU register stack.
FLDCW--Load x87 FPU Control Word.
FLDCW
m2byte
D9 /5
Load FPU control word from m2byte.
FLDENV--Load x87 FPU Environment.
FLDENV
m14/28byte
D9 /4
Load FPU environment from m14byte or m28byte.
FMUL/FMULP/FIMUL--Multiply.
FMUL
m32fp
D8 /1
Multiply ST(0) by m32fp and store result in ST(0).
FMUL
m64fp
DC /1
Multiply ST(0) by m64fp and store result in ST(0).
FMUL
ST(0),ST(i)
D8 C8+i
Multiply ST(0) by ST(i) and store result in ST(0).
FMUL
ST(i),ST(0)
DC C8+i
Multiply ST(i) by ST(0) and store result in ST(i).
FMULP
ST(i),ST(0)
DE C8+i
Multiply ST(i) by ST(0), store result in ST(i), and pop the register stack.
FMULP
void
DE C9
Multiply ST(1) by ST(0), store result in ST(1), and pop the register stack.
FIMUL
m32int
DA /1
Multiply ST(0) by m32int and store result in ST(0).
FIMUL
m16int
DE /1
Multiply ST(0) by m16int and store result in ST(0).
FNOP--No Operation.
FNOP
void
D9 D0
No operation is performed.
FPATAN--Partial Arctangent.
FPATAN
void
D9 F3
Replace ST(1) with arctan(ST(1)/ST(0)) and pop the register stack.
FPREM--Partial Remainder.
FPREM
void
D9 F8
Replace ST(0) with the remainder obtained from dividing ST(0) by ST(1).
FPREM1--Partial Remainder.
FPREM1
void
D9 F5
Replace ST(0) with the IEEE remainder obtained from dividing ST(0) by ST(1).
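The difference between FPREM and FPREM1 is how the implied quotient is rounded: FPREM truncates it toward zero (matching `math.fmod`), while FPREM1 rounds it to nearest as IEEE 754 requires (matching `math.remainder`). A sketch of both in Python:

```python
import math

a, b = 7.0, 4.0

# FPREM-style: quotient truncated toward zero -> 7 - 1*4 = 3.0
prem = math.fmod(a, b)

# FPREM1-style: quotient rounded to nearest -> 7 - 2*4 = -1.0
prem1 = math.remainder(a, b)
```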
FPTAN--Partial Tangent.
FPTAN
void
D9 F2
Replace ST(0) with its approximate tangent and push 1 onto the FPU stack.
FRNDINT--Round to Integer.
FRNDINT
void
D9 FC
Round ST(0) to an integer.
FRSTOR--Restore x87 FPU State.
FRSTOR
m94/108byte
DD /4
Load FPU state from m94byte or m108byte.
FSAVE/FNSAVE--Store x87 FPU State.
FSAVE
m94/108byte*
9B DD /6
Store FPU state to m94byte or m108byte after checking for pending unmasked floating-point exceptions. Then re-initialize the FPU.
FNSAVE
m94/108byte
DD /6
Store FPU state to m94byte or m108byte without checking for pending unmasked floating-point exceptions. Then re-initialize the FPU.
FSCALE--Scale.
FSCALE
void
D9 FD
Scale ST(0) by ST(1).
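FSCALE adds the truncated value of ST(1) to the exponent of ST(0), i.e. it computes ST(0) * 2^trunc(ST(1)). A sketch using `math.ldexp`:

```python
import math

def fscale(st0, st1):
    # Multiply ST(0) by 2 raised to the *truncated* value of ST(1);
    # the fractional part of ST(1) is ignored.
    return math.ldexp(st0, math.trunc(st1))
```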
FSIN--Sine.
FSIN
void
D9 FE
Replace ST(0) with an approximation of its sine.
FSINCOS--Sine and Cosine.
FSINCOS
void
D9 FB
Compute the sine and cosine of ST(0); replace ST(0) with the approximate sine, and push the approximate cosine onto the register stack.
FSQRT--Square Root.
FSQRT
void
D9 FA
Computes square root of ST(0) and stores the result in ST(0).
FST/FSTP--Store Floating Point Value.
FST
m32fp
D9 /2
Copy ST(0) to m32fp.
FST
m64fp
DD /2
Copy ST(0) to m64fp.
FST
ST(i)
DD D0+i
Copy ST(0) to ST(i).
FSTP
m32fp
D9 /3
Copy ST(0) to m32fp and pop register stack.
FSTP
m64fp
DD /3
Copy ST(0) to m64fp and pop register stack.
FSTP
m80fp
DB /7
Copy ST(0) to m80fp and pop register stack.
FSTP
ST(i)
DD D8+i
Copy ST(0) to ST(i) and pop register stack.
FSTCW/FNSTCW--Store x87 FPU Control Word.
FSTCW
m2byte*
9B D9 /7
Store FPU control word to m2byte after checking for pending unmasked floating-point exceptions.
FNSTCW
m2byte
D9 /7
Store FPU control word to m2byte without checking for pending unmasked floating-point exceptions.
FSTENV/FNSTENV--Store x87 FPU Environment.
FSTENV
m14/28byte*
9B D9 /6
Store FPU environment to m14byte or m28byte after checking for pending unmasked floating-point exceptions. Then mask all floating-point exceptions.
FNSTENV
m14/28byte
D9 /6
Store FPU environment to m14byte or m28byte without checking for pending unmasked floating-point exceptions. Then mask all floating-point exceptions.
FSTSW/FNSTSW--Store x87 FPU Status Word.
FSTSW
m2byte
9B DD /7
Store FPU status word at m2byte after checking for pending unmasked floating-point exceptions.
FSTSW
AX*
9B DF E0
Store FPU status word in AX register after checking for pending unmasked floating-point exceptions.
FNSTSW
m2byte*
DD /7
Store FPU status word at m2byte without checking for pending unmasked floating-point exceptions.
FNSTSW
AX
DF E0
Store FPU status word in AX register without checking for pending unmasked floating-point exceptions.
FSUB/FSUBP/FISUB--Subtract.
FSUB
m32fp
D8 /4
Subtract m32fp from ST(0) and store result in ST(0).
FSUB
m64fp
DC /4
Subtract m64fp from ST(0) and store result in ST(0).
FSUB
ST(0),ST(i)
D8 E0+i
Subtract ST(i) from ST(0) and store result in ST(0).
FSUB
ST(i),ST(0)
DC E8+i
Subtract ST(0) from ST(i) and store result in ST(i).
FSUBP
ST(i),ST(0)
DE E8+i
Subtract ST(0) from ST(i), store result in ST(i), and pop register stack.
FSUBP
void
DE E9
Subtract ST(0) from ST(1), store result in ST(1), and pop register stack.
FISUB
m32int
DA /4
Subtract m32int from ST(0) and store result in ST(0).
FISUB
m16int
DE /4
Subtract m16int from ST(0) and store result in ST(0).
FSUBR/FSUBRP/FISUBR--Reverse Subtract.
FSUBR
m32fp
D8 /5
Subtract ST(0) from m32fp and store result in ST(0).
FSUBR
m64fp
DC /5
Subtract ST(0) from m64fp and store result in ST(0).
FSUBR
ST(0),ST(i)
D8 E8+i
Subtract ST(0) from ST(i) and store result in ST(0).
FSUBR
ST(i),ST(0)
DC E0+i
Subtract ST(i) from ST(0) and store result in ST(i).
FSUBRP
ST(i),ST(0)
DE E0+i
Subtract ST(i) from ST(0), store result in ST(i), and pop register stack.
FSUBRP
void
DE E1
Subtract ST(1) from ST(0), store result in ST(1), and pop register stack.
FISUBR
m32int
DA /5
Subtract ST(0) from m32int and store result in ST(0).
FISUBR
m16int
DE /5
Subtract ST(0) from m16int and store result in ST(0).
FTST--TEST.
FTST
void
D9 E4
Compare ST(0) with 0.0.
FUCOM/FUCOMP/FUCOMPP--Unordered Compare Floating Point Values.
FUCOM
ST(i)
DD E0+i
Compare ST(0) with ST(i).
FUCOM
void
DD E1
Compare ST(0) with ST(1).
FUCOMP
ST(i)
DD E8+i
Compare ST(0) with ST(i) and pop register stack.
FUCOMP
void
DD E9
Compare ST(0) with ST(1) and pop register stack.
FUCOMPP
void
DA E9
Compare ST(0) with ST(1) and pop register stack twice.
FXAM--Examine Floating-Point.
FXAM
void
D9 E5
Classify value or number in ST(0).
FXCH--Exchange Register Contents.
FXCH
ST(i)
D9 C8+i
Exchange the contents of ST(0) and ST(i).
FXCH
void
D9 C9
Exchange the contents of ST(0) and ST(1).
FXRSTOR--Restore x87 FPU, MMX, XMM, and MXCSR State.
FXRSTOR
m512byte
0F AE /1
Restore the x87 FPU, MMX, XMM, and MXCSR register state from m512byte.
FXRSTOR64
m512byte
REX.W+ 0F AE /1
Restore the x87 FPU, MMX, XMM, and MXCSR register state from m512byte.
ModRM:r/m(r)
NA
NA
NA
FXSAVE--Save x87 FPU, MMX Technology, and SSE State.
FXSAVE
m512byte
0F AE /0
Save the x87 FPU, MMX, XMM, and MXCSR register state to m512byte.
FXSAVE64
m512byte
REX.W+ 0F AE /0
Save the x87 FPU, MMX, XMM, and MXCSR register state to m512byte.
ModRM:r/m(w)
NA
NA
NA
FXTRACT--Extract Exponent and Significand.
FXTRACT
void
D9 F4
Separate value in ST(0) into exponent and significand, store exponent in ST(0), and push the significand onto the register stack.
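FXTRACT yields an unbiased exponent and a significand in [1, 2). Python's `math.frexp` performs the same decomposition but normalizes the significand to [0.5, 1), so one bit must be shifted between the two results; a sketch:

```python
import math

def fxtract(x):
    # frexp returns (m, e) with x == m * 2**e and 0.5 <= m < 1;
    # FXTRACT's significand lies in [1, 2), so rescale by one bit.
    m, e = math.frexp(x)
    return e - 1, m * 2.0   # (exponent, significand)
```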
FYL2X--Compute y * log2(x).
FYL2X
void
D9 F1
Replace ST(1) with ST(1) * log2(ST(0)) and pop the register stack.
FYL2XP1--Compute y * log2(x + 1).
FYL2XP1
void
D9 F9
Replace ST(1) with ST(1) * log2(ST(0) + 1.0) and pop the register stack.
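Both instructions compute a scaled base-2 logarithm; FYL2XP1 exists because log2(x + 1) loses precision when evaluated naively for x near zero. A Python sketch, using `math.log1p` for the second form:

```python
import math

def fyl2x(y, x):
    # FYL2X: ST(1) * log2(ST(0))
    return y * math.log2(x)

def fyl2xp1(y, x):
    # FYL2XP1: log2(x + 1) = ln(x + 1) / ln(2);
    # log1p keeps precision when x is close to zero.
    return y * math.log1p(x) / math.log(2.0)
```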
HADDPD--Packed Double-FP Horizontal Add.
HADDPD
xmm1,xmm2/m128
66 0F 7C /r
SSE3
Horizontal add packed double-precision floating-point values from xmm2/m128 to xmm1.
VHADDPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 7C /r
AVX
Horizontal add packed double-precision floating-point values from xmm2 and xmm3/mem.
VHADDPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 7C /r
AVX
Horizontal add packed double-precision floating-point values from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
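A horizontal add pairs up adjacent elements within each source rather than adding the sources lane-by-lane: the low result lane is the sum of the destination's two lanes and the high result lane is the sum of the source's two lanes. A sketch of the two-lane HADDPD case:

```python
def haddpd(dst, src):
    # dst and src each model one 128-bit register holding two doubles.
    return [dst[0] + dst[1], src[0] + src[1]]
```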
HADDPS--Packed Single-FP Horizontal Add.
HADDPS
xmm1,xmm2/m128
F2 0F 7C /r
SSE3
Horizontal add packed single-precision floating-point values from xmm2/m128 to xmm1.
VHADDPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.F2.0F.WIG 7C /r
AVX
Horizontal add packed single-precision floating-point values from xmm2 and xmm3/mem.
VHADDPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.F2.0F.WIG 7C /r
AVX
Horizontal add packed single-precision floating-point values from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
HLT--Halt.
HLT
void
F4
Halt.
NA
NA
NA
NA
HSUBPD--Packed Double-FP Horizontal Subtract.
HSUBPD
xmm1,xmm2/m128
66 0F 7D /r
SSE3
Horizontal subtract packed double-precision floating-point values from xmm2/m128 to xmm1.
VHSUBPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 7D /r
AVX
Horizontal subtract packed double-precision floating-point values from xmm2 and xmm3/mem.
VHSUBPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 7D /r
AVX
Horizontal subtract packed double-precision floating-point values from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
HSUBPS--Packed Single-FP Horizontal Subtract.
HSUBPS
xmm1,xmm2/m128
F2 0F 7D /r
SSE3
Horizontal subtract packed single-precision floating-point values from xmm2/m128 to xmm1.
VHSUBPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.F2.0F.WIG 7D /r
AVX
Horizontal subtract packed single-precision floating-point values from xmm2 and xmm3/mem.
VHSUBPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.F2.0F.WIG 7D /r
AVX
Horizontal subtract packed single-precision floating-point values from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
IDIV--Signed Divide.
IDIV
r/m8
F6 /7
Signed divide AX by r/m8, with result stored in AL <-- Quotient, AH <-- Remainder.
IDIV
r/m8*
REX + F6 /7
Signed divide AX by r/m8, with result stored in AL <-- Quotient, AH <-- Remainder.
IDIV
r/m16
F7 /7
Signed divide DX:AX by r/m16, with result stored in AX <-- Quotient, DX <-- Remainder.
IDIV
r/m32
F7 /7
Signed divide EDX:EAX by r/m32, with result stored in EAX <-- Quotient, EDX <-- Remainder.
IDIV
r/m64
REX.W + F7 /7
Signed divide RDX:RAX by r/m64, with result stored in RAX <-- Quotient, RDX <-- Remainder.
ModRM:r/m(r)
NA
NA
NA
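IDIV truncates the quotient toward zero and gives the remainder the sign of the dividend, which differs from Python's floor-dividing `//` operator when the signs of the operands differ. A sketch of the x86 convention:

```python
def idiv(dividend, divisor):
    # x86 signed division truncates toward zero; Python's // floors,
    # so divide magnitudes and reapply the sign.
    q = abs(dividend) // abs(divisor)
    if (dividend < 0) != (divisor < 0):
        q = -q
    r = dividend - q * divisor   # remainder takes the dividend's sign
    return q, r
```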
IMUL--Signed Multiply.
IMUL
r/m8*
F6 /5
AX <-- AL * r/m byte.
IMUL
r/m16
F7 /5
DX:AX <-- AX * r/m word.
IMUL
r/m32
F7 /5
EDX:EAX <-- EAX * r/m32.
IMUL
r/m64
REX.W + F7 /5
RDX:RAX <-- RAX * r/m64.
IMUL
r16,r/m16
0F AF /r
word register <-- word register * r/m16.
IMUL
r32,r/m32
0F AF /r
doubleword register <-- doubleword register * r/m32.
IMUL
r64,r/m64
REX.W + 0F AF /r
Quadword register <-- Quadword register * r/m64.
IMUL
r16,r/m16,imm8
6B /r ib
word register <-- r/m16 * sign-extended immediate byte.
IMUL
r32,r/m32,imm8
6B /r ib
doubleword register <-- r/m32 * sign-extended immediate byte.
IMUL
r64,r/m64,imm8
REX.W + 6B /r ib
Quadword register <-- r/m64 * sign-extended immediate byte.
IMUL
r16,r/m16,imm16
69 /r iw
word register <-- r/m16 * immediate word.
IMUL
r32,r/m32,imm32
69 /r id
doubleword register <-- r/m32 * immediate doubleword.
IMUL
r64,r/m64,imm32
REX.W + 69 /r id
Quadword register <-- r/m64 * immediate doubleword.
ModRM:r/m(r,w)
NA
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)/16/32
NA
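In the one-operand forms the full double-width signed product is kept: for the 64-bit form the high half lands in RDX and the low half in RAX. A sketch of the split using Python's arbitrary-precision integers:

```python
MASK64 = (1 << 64) - 1

def imul_rdx_rax(a, b):
    # Full 128-bit signed product of two 64-bit values, split into the
    # RDX (high) and RAX (low) halves as unsigned 64-bit quantities.
    product = a * b
    return (product >> 64) & MASK64, product & MASK64
```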
IN--Input from Port.
IN
AL,imm8
E4 ib
Input byte from imm8 I/O port address into AL.
IN
AX,imm8
E5 ib
Input word from imm8 I/O port address into AX.
IN
EAX,imm8
E5 ib
Input dword from imm8 I/O port address into EAX.
IN
AL,DX
EC
Input byte from I/O port in DX into AL.
IN
AX,DX
ED
Input word from I/O port in DX into AX.
IN
EAX,DX
ED
Input doubleword from I/O port in DX into EAX.
imm8(r)
NA
NA
NA
NA
NA
NA
NA
INC--Increment by 1.
INC
r/m8*
FE /0
Increment r/m byte by 1.
INC
r/m8
REX + FE /0
Increment r/m byte by 1.
INC
r/m16
FF /0
Increment r/m word by 1.
INC
r/m32
FF /0
Increment r/m doubleword by 1.
INC
r/m64**
REX.W + FF /0
Increment r/m quadword by 1.
INC
r16
40+ rw
Increment word register by 1.
INC
r32
40+ rd
Increment doubleword register by 1.
ModRM:r/m(r,w)
NA
NA
NA
opcode + rd(r,w)
NA
NA
NA
INS/INSB/INSW/INSD--Input from Port to String.
INS
m8,DX
6C
Input byte from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.*
INS
m16,DX
6D
Input word from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.*
INS
m32,DX
6D
Input doubleword from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.*
INSB
void
6C
Input byte from I/O port specified in DX into memory location specified with ES:(E)DI or RDI.*
INSW
void
6D
Input word from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.*
INSD
void
6D
Input doubleword from I/O port specified in DX into memory location specified in ES:(E)DI or RDI.*
NA
NA
NA
NA
INSERTPS--Insert Packed Single Precision Floating-Point Value.
INSERTPS
xmm1,xmm2/m32,imm8
66 0F 3A 21 /r ib
SSE4_1
Insert a single-precision floating-point value selected by imm8 from xmm2/m32 into xmm1 at the destination element specified by imm8, and zero out destination elements in xmm1 as indicated by imm8.
VINSERTPS
xmm1,xmm2,xmm3/m32,imm8
VEX.NDS.128.66.0F3A.WIG 21 /r ib
AVX
Insert a single-precision floating-point value selected by imm8 from xmm3/m32 into xmm2 at the destination element specified by imm8, write the merged result to xmm1, and zero out destination elements in xmm1 as indicated by imm8.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
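The imm8 of INSERTPS packs three fields: bits 7:6 select the source element (forced to 0 when the source is m32), bits 5:4 select the destination element, and bits 3:0 form a zero mask. A sketch of the selection logic for the register-source case:

```python
def insertps(dst, src, imm8):
    # dst and src model 4-lane single-precision registers.
    count_s = (imm8 >> 6) & 3   # source element select
    count_d = (imm8 >> 4) & 3   # destination element select
    zmask = imm8 & 0xF          # per-lane zeroing mask
    out = list(dst)
    out[count_d] = src[count_s]
    return [0.0 if (zmask >> i) & 1 else out[i] for i in range(4)]
```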
INTn/INTO/INT3--Call to Interrupt Procedure.
INT
3
CC
Interrupt 3--trap to debugger.
INT
imm8
CD ib
Interrupt vector specified by immediate byte.
INTO
void
CE
Interrupt 4--if overflow flag is 1.
NA
NA
NA
NA
imm8(r)
NA
NA
NA
INVD--Invalidate Internal Caches.
INVD
void
0F 08
Flush internal caches; initiate flushing of external caches.
NA
NA
NA
NA
INVLPG--Invalidate TLB Entries.
INVLPG
m
0F 01 /7
Invalidate TLB entries for page containing m.
ModRM:r/m(r)
NA
NA
NA
INVPCID--Invalidate Process-Context Identifier.
INVPCID
r32,m128
66 0F 38 82 /r
INVPCID
Invalidates entries in the TLBs and paging-structure caches based on invalidation type in r32 and descriptor in m128.
INVPCID
r64,m128
66 0F 38 82 /r
INVPCID
Invalidates entries in the TLBs and paging-structure caches based on invalidation type in r64 and descriptor in m128.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
IRET/IRETD--Interrupt Return.
IRET
void
CF
Interrupt return (16-bit operand size).
IRETD
void
CF
Interrupt return (32-bit operand size).
IRETQ
void
REX.W + CF
Interrupt return (64-bit operand size).
NA
NA
NA
NA
Jcc--Jump if Condition Is Met.
JA
rel8
77 cb
Jump short if above (CF=0 and ZF=0).
JAE
rel8
73 cb
Jump short if above or equal (CF=0).
JB
rel8
72 cb
Jump short if below (CF=1).
JBE
rel8
76 cb
Jump short if below or equal (CF=1 or ZF=1).
JC
rel8
72 cb
Jump short if carry (CF=1).
JCXZ
rel8
E3 cb
Jump short if CX register is 0.
JECXZ
rel8
E3 cb
Jump short if ECX register is 0.
JRCXZ
rel8
E3 cb
Jump short if RCX register is 0.
JE
rel8
74 cb
Jump short if equal (ZF=1).
JG
rel8
7F cb
Jump short if greater (ZF=0 and SF=OF).
JGE
rel8
7D cb
Jump short if greater or equal (SF=OF).
JL
rel8
7C cb
Jump short if less (SF != OF).
JLE
rel8
7E cb
Jump short if less or equal (ZF=1 or SF != OF).
JNA
rel8
76 cb
Jump short if not above (CF=1 or ZF=1).
JNAE
rel8
72 cb
Jump short if not above or equal (CF=1).
JNB
rel8
73 cb
Jump short if not below (CF=0).
JNBE
rel8
77 cb
Jump short if not below or equal (CF=0 and ZF=0).
JNC
rel8
73 cb
Jump short if not carry (CF=0).
JNE
rel8
75 cb
Jump short if not equal (ZF=0).
JNG
rel8
7E cb
Jump short if not greater (ZF=1 or SF != OF).
JNGE
rel8
7C cb
Jump short if not greater or equal (SF != OF).
JNL
rel8
7D cb
Jump short if not less (SF=OF).
JNLE
rel8
7F cb
Jump short if not less or equal (ZF=0 and SF=OF).
JNO
rel8
71 cb
Jump short if not overflow (OF=0).
JNP
rel8
7B cb
Jump short if not parity (PF=0).
JNS
rel8
79 cb
Jump short if not sign (SF=0).
JNZ
rel8
75 cb
Jump short if not zero (ZF=0).
JO
rel8
70 cb
Jump short if overflow (OF=1).
JP
rel8
7A cb
Jump short if parity (PF=1).
JPE
rel8
7A cb
Jump short if parity even (PF=1).
JPO
rel8
7B cb
Jump short if parity odd (PF=0).
JS
rel8
78 cb
Jump short if sign (SF=1).
JZ
rel8
74 cb
Jump short if zero (ZF = 1).
JA
rel16
0F 87 cw
Jump near if above (CF=0 and ZF=0). Not supported in 64-bit mode.
JA
rel32
0F 87 cd
Jump near if above (CF=0 and ZF=0).
JAE
rel16
0F 83 cw
Jump near if above or equal (CF=0). Not supported in 64-bit mode.
JAE
rel32
0F 83 cd
Jump near if above or equal (CF=0).
JB
rel16
0F 82 cw
Jump near if below (CF=1). Not supported in 64-bit mode.
JB
rel32
0F 82 cd
Jump near if below (CF=1).
JBE
rel16
0F 86 cw
Jump near if below or equal (CF=1 or ZF=1). Not supported in 64-bit mode.
JBE
rel32
0F 86 cd
Jump near if below or equal (CF=1 or ZF=1).
JC
rel16
0F 82 cw
Jump near if carry (CF=1). Not supported in 64-bit mode.
JC
rel32
0F 82 cd
Jump near if carry (CF=1).
JE
rel16
0F 84 cw
Jump near if equal (ZF=1). Not supported in 64-bit mode.
JE
rel32
0F 84 cd
Jump near if equal (ZF=1).
JZ
rel16
0F 84 cw
Jump near if 0 (ZF=1). Not supported in 64-bit mode.
JZ
rel32
0F 84 cd
Jump near if 0 (ZF=1).
JG
rel16
0F 8F cw
Jump near if greater (ZF=0 and SF=OF). Not supported in 64-bit mode.
JG
rel32
0F 8F cd
Jump near if greater (ZF=0 and SF=OF).
JGE
rel16
0F 8D cw
Jump near if greater or equal (SF=OF). Not supported in 64-bit mode.
JGE
rel32
0F 8D cd
Jump near if greater or equal (SF=OF).
JL
rel16
0F 8C cw
Jump near if less (SF != OF). Not supported in 64-bit mode.
JL
rel32
0F 8C cd
Jump near if less (SF != OF).
JLE
rel16
0F 8E cw
Jump near if less or equal (ZF=1 or SF != OF). Not supported in 64-bit mode.
JLE
rel32
0F 8E cd
Jump near if less or equal (ZF=1 or SF != OF).
JNA
rel16
0F 86 cw
Jump near if not above (CF=1 or ZF=1). Not supported in 64-bit mode.
JNA
rel32
0F 86 cd
Jump near if not above (CF=1 or ZF=1).
JNAE
rel16
0F 82 cw
Jump near if not above or equal (CF=1). Not supported in 64-bit mode.
JNAE
rel32
0F 82 cd
Jump near if not above or equal (CF=1).
JNB
rel16
0F 83 cw
Jump near if not below (CF=0). Not supported in 64-bit mode.
JNB
rel32
0F 83 cd
Jump near if not below (CF=0).
JNBE
rel16
0F 87 cw
Jump near if not below or equal (CF=0 and ZF=0). Not supported in 64-bit mode.
JNBE
rel32
0F 87 cd
Jump near if not below or equal (CF=0 and ZF=0).
JNC
rel16
0F 83 cw
Jump near if not carry (CF=0). Not supported in 64-bit mode.
JNC
rel32
0F 83 cd
Jump near if not carry (CF=0).
JNE
rel16
0F 85 cw
Jump near if not equal (ZF=0). Not supported in 64-bit mode.
JNE
rel32
0F 85 cd
Jump near if not equal (ZF=0).
JNG
rel16
0F 8E cw
Jump near if not greater (ZF=1 or SF != OF). Not supported in 64-bit mode.
JNG
rel32
0F 8E cd
Jump near if not greater (ZF=1 or SF != OF).
JNGE
rel16
0F 8C cw
Jump near if not greater or equal (SF != OF). Not supported in 64-bit mode.
JNGE
rel32
0F 8C cd
Jump near if not greater or equal (SF != OF).
JNL
rel16
0F 8D cw
Jump near if not less (SF=OF). Not supported in 64-bit mode.
JNL
rel32
0F 8D cd
Jump near if not less (SF=OF).
JNLE
rel16
0F 8F cw
Jump near if not less or equal (ZF=0 and SF=OF). Not supported in 64-bit mode.
JNLE
rel32
0F 8F cd
Jump near if not less or equal (ZF=0 and SF=OF).
JNO
rel16
0F 81 cw
Jump near if not overflow (OF=0). Not supported in 64-bit mode.
JNO
rel32
0F 81 cd
Jump near if not overflow (OF=0).
JNP
rel16
0F 8B cw
Jump near if not parity (PF=0). Not supported in 64-bit mode.
JNP
rel32
0F 8B cd
Jump near if not parity (PF=0).
JNS
rel16
0F 89 cw
Jump near if not sign (SF=0). Not supported in 64-bit mode.
JNS
rel32
0F 89 cd
Jump near if not sign (SF=0).
JNZ
rel16
0F 85 cw
Jump near if not zero (ZF=0). Not supported in 64-bit mode.
JNZ
rel32
0F 85 cd
Jump near if not zero (ZF=0).
JO
rel16
0F 80 cw
Jump near if overflow (OF=1). Not supported in 64-bit mode.
JO
rel32
0F 80 cd
Jump near if overflow (OF=1).
JP
rel16
0F 8A cw
Jump near if parity (PF=1). Not supported in 64-bit mode.
JP
rel32
0F 8A cd
Jump near if parity (PF=1).
JPE
rel16
0F 8A cw
Jump near if parity even (PF=1). Not supported in 64-bit mode.
JPE
rel32
0F 8A cd
Jump near if parity even (PF=1).
JPO
rel16
0F 8B cw
Jump near if parity odd (PF=0). Not supported in 64-bit mode.
JPO
rel32
0F 8B cd
Jump near if parity odd (PF=0).
JS
rel16
0F 88 cw
Jump near if sign (SF=1). Not supported in 64-bit mode.
JS
rel32
0F 88 cd
Jump near if sign (SF=1).
JZ
rel16
0F 84 cw
Jump near if 0 (ZF=1). Not supported in 64-bit mode.
JZ
rel32
0F 84 cd
Jump near if 0 (ZF=1).
Offset
NA
NA
NA
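The signed conditions above are pure flag predicates: JL, for instance, is taken exactly when SF != OF after the compare. A sketch that derives the flags of an 8-bit CMP and checks the JL predicate against an ordinary signed comparison:

```python
def cmp_flags8(a, b):
    # Flags of an 8-bit CMP a, b (computes a - b); a, b in 0..255.
    res = (a - b) & 0xFF
    sf = res >> 7                         # sign of the result
    cf = 1 if a < b else 0                # unsigned borrow
    of = ((a ^ b) & (a ^ res)) >> 7 & 1   # signed overflow
    return sf, cf, of

def jl_taken(a, b):
    # JL: jump short if less (SF != OF).
    sf, _, of = cmp_flags8(a, b)
    return sf != of

def signed8(v):
    # Reinterpret an 8-bit value as two's-complement.
    return v - 256 if v >= 128 else v
```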
JMP--Jump.
JMP
rel8
EB cb
Jump short, RIP = RIP + 8-bit displacement sign extended to 64-bits.
JMP
rel16
E9 cw
Jump near, relative, displacement relative to next instruction. Not supported in 64-bit mode.
JMP
rel32
E9 cd
Jump near, relative, RIP = RIP + 32-bit displacement sign extended to 64-bits.
JMP
r/m16
FF /4
Jump near, absolute indirect, address = zero-extended r/m16. Not supported in 64-bit mode.
JMP
r/m32
FF /4
Jump near, absolute indirect, address given in r/m32. Not supported in 64-bit mode.
JMP
r/m64
FF /4
Jump near, absolute indirect, RIP = 64-Bit offset from register or memory.
JMP
ptr16:16
EA cd
Jump far, absolute, address given in operand.
JMP
ptr16:32
EA cp
Jump far, absolute, address given in operand.
JMP
m16:16
FF /5
Jump far, absolute indirect, address given in m16:16.
JMP
m16:32
FF /5
Jump far, absolute indirect, address given in m16:32.
JMP
m16:64
REX.W + FF /5
Jump far, absolute indirect, address given in m16:64.
Offset
NA
NA
NA
ModRM:r/m(r)
NA
NA
NA
LAHF--Load Status Flags into AH Register.
LAHF
void
9F
Load: AH <-- EFLAGS(SF:ZF:0:AF:0:PF:1:CF).
NA
NA
NA
NA
LAR--Load Access Rights Byte.
LAR
r16,r16/m16
0F 02 /r
r16 <-- access rights referenced by r16/m16.
LAR
reg,r32/m16
0F 02 /r
reg <-- access rights referenced by r32/m16.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
LDDQU--Load Unaligned Integer 128 Bits.
LDDQU
xmm1,mem
F2 0F F0 /r
SSE3
Load unaligned data from mem and return double quadword in xmm1.
VLDDQU
xmm1,m128
VEX.128.F2.0F.WIG F0 /r
AVX
Load unaligned packed integer values from mem to xmm1.
VLDDQU
ymm1,m256
VEX.256.F2.0F.WIG F0 /r
AVX
Load unaligned packed integer values from mem to ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
LDMXCSR--Load MXCSR Register.
LDMXCSR
m32
0F AE /2
SSE
Load MXCSR register from m32.
VLDMXCSR
m32
VEX.LZ.0F.WIG AE /2
AVX
Load MXCSR register from m32.
ModRM:r/m(r)
NA
NA
NA
LDS/LES/LFS/LGS/LSS--Load Far Pointer.
LDS
r16,m16:16
C5 /r
Load DS:r16 with far pointer from memory.
LDS
r32,m16:32
C5 /r
Load DS:r32 with far pointer from memory.
LSS
r16,m16:16
0F B2 /r
Load SS:r16 with far pointer from memory.
LSS
r32,m16:32
0F B2 /r
Load SS:r32 with far pointer from memory.
LSS
r64,m16:64
REX + 0F B2 /r
Load SS:r64 with far pointer from memory.
LES
r16,m16:16
C4 /r
Load ES:r16 with far pointer from memory.
LES
r32,m16:32
C4 /r
Load ES:r32 with far pointer from memory.
LFS
r16,m16:16
0F B4 /r
Load FS:r16 with far pointer from memory.
LFS
r32,m16:32
0F B4 /r
Load FS:r32 with far pointer from memory.
LFS
r64,m16:64
REX + 0F B4 /r
Load FS:r64 with far pointer from memory.
LGS
r16,m16:16
0F B5 /r
Load GS:r16 with far pointer from memory.
LGS
r32,m16:32
0F B5 /r
Load GS:r32 with far pointer from memory.
LGS
r64,m16:64
REX + 0F B5 /r
Load GS:r64 with far pointer from memory.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
LEA--Load Effective Address.
LEA
r16,m
8D /r
Store effective address for m in register r16.
LEA
r32,m
8D /r
Store effective address for m in register r32.
LEA
r64,m
REX.W + 8D /r
Store effective address for m in register r64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
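LEA evaluates the addressing-mode arithmetic without touching memory, which is why it is commonly used for plain integer math. The computed address is base + index*scale + disp; a sketch:

```python
def lea(base=0, index=0, scale=1, disp=0):
    # scale may be 1, 2, 4 or 8 in the real addressing modes.
    return base + index * scale + disp

# models e.g. lea rax, [rbx + rcx*8 + 16]
```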
LEAVE--High Level Procedure Exit.
LEAVE
void
C9
Set SP to BP, then pop BP.
LEAVE
void
C9
Set ESP to EBP, then pop EBP.
LEAVE
void
C9
Set RSP to RBP, then pop RBP.
NA
NA
NA
NA
LFENCE--Load Fence.
LFENCE
void
0F AE E8
Serializes load operations.
NA
NA
NA
NA
LGDT/LIDT--Load Global/Interrupt Descriptor Table Register.
LGDT
m16&32
0F 01 /2
Load m into GDTR.
LIDT
m16&32
0F 01 /3
Load m into IDTR.
LGDT
m16&64
0F 01 /2
Load m into GDTR.
LIDT
m16&64
0F 01 /3
Load m into IDTR.
ModRM:r/m(r)
NA
NA
NA
LLDT--Load Local Descriptor Table Register.
LLDT
r/m16
0F 00 /2
Load segment selector r/m16 into LDTR.
ModRM:r/m(r)
NA
NA
NA
LMSW--Load Machine Status Word.
LMSW
r/m16
0F 01 /6
Load r/m16 into the machine status word of CR0.
ModRM:r/m(r)
NA
NA
NA
LOCK--Assert LOCK# Signal Prefix.
LOCK
void
F0
Asserts LOCK# signal for duration of the accompanying instruction.
NA
NA
NA
NA
LODS/LODSB/LODSW/LODSD/LODSQ--Load String.
LODS
m8
AC
For legacy mode, load byte at address DS:(E)SI into AL. For 64-bit mode, load byte at address (R)SI into AL.
LODS
m16
AD
For legacy mode, load word at address DS:(E)SI into AX. For 64-bit mode, load word at address (R)SI into AX.
LODS
m32
AD
For legacy mode, load dword at address DS:(E)SI into EAX. For 64-bit mode, load dword at address (R)SI into EAX.
LODS
m64
REX.W + AD
Load qword at address (R)SI into RAX.
LODSB
void
AC
For legacy mode, load byte at address DS:(E)SI into AL. For 64-bit mode, load byte at address (R)SI into AL.
LODSW
void
AD
For legacy mode, load word at address DS:(E)SI into AX. For 64-bit mode, load word at address (R)SI into AX.
LODSD
void
AD
For legacy mode, load dword at address DS:(E)SI into EAX. For 64-bit mode, load dword at address (R)SI into EAX.
LODSQ
void
REX.W + AD
Load qword at address (R)SI into RAX.
NA
NA
NA
NA
LOOP/LOOPcc--Loop According to ECX Counter.
LOOP
rel8
E2 cb
Decrement count; jump short if count != 0.
LOOPE
rel8
E1 cb
Decrement count; jump short if count != 0 and ZF = 1.
LOOPNE
rel8
E0 cb
Decrement count; jump short if count != 0 and ZF = 0.
Offset
NA
NA
NA
LSL--Load Segment Limit.
LSL
r16,r16/m16*
0F 03 /r
Load: r16 <-- segment limit, selector r16/m16.
LSL
r32,r32/m16*
0F 03 /r
Load: r32 <-- segment limit, selector r32/m16.
LSL
r64,r32/m16
REX.W + 0F 03 /r
Load: r64 <-- segment limit, selector r32/m16.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
LTR--Load Task Register.
LTR
r/m16
0F 00 /3
Load r/m16 into task register.
ModRM:r/m(r)
NA
NA
NA
LZCNT--Count the Number of Leading Zero Bits.
LZCNT
r16,r/m16
F3 0F BD /r
LZCNT
Count the number of leading zero bits in r/m16, return result in r16.
LZCNT
r32,r/m32
F3 0F BD /r
LZCNT
Count the number of leading zero bits in r/m32, return result in r32.
LZCNT
r64,r/m64
F3 REX.W 0F BD /r
LZCNT
Count the number of leading zero bits in r/m64, return result in r64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
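Unlike BSR, LZCNT is defined for a zero source, where it returns the full operand width. A sketch using `int.bit_length`:

```python
def lzcnt(value, width=64):
    # Count leading zero bits of an unsigned `width`-bit value;
    # lzcnt(0) returns the full width (the hardware also sets CF).
    value &= (1 << width) - 1
    return width - value.bit_length()
```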
MASKMOVDQU--Store Selected Bytes of Double Quadword.
MASKMOVDQU
xmm1,xmm2
66 0F F7 /r
SSE2
Selectively write bytes from xmm1 to memory location using the byte mask in xmm2. The default memory location is specified by DS:DI/EDI/RDI.
VMASKMOVDQU
xmm1,xmm2
VEX.128.66.0F.WIG F7 /r
AVX
Selectively write bytes from xmm1 to memory location using the byte mask in xmm2. The default memory location is specified by DS:DI/EDI/RDI.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
MASKMOVQ--Store Selected Bytes of Quadword.
MASKMOVQ
mm1,mm2
0F F7 /r
Selectively write bytes from mm1 to memory location using the byte mask in mm2. The default memory location is specified by DS:DI/EDI/RDI.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
MAXPD--Return Maximum Packed Double-Precision Floating-Point Values.
MAXPD
xmm1,xmm2/m128
66 0F 5F /r
SSE2
Return the maximum double-precision floating-point values between xmm2/m128 and xmm1.
VMAXPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 5F /r
AVX
Return the maximum double-precision floating-point values between xmm2 and xmm3/mem.
VMAXPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 5F /r
AVX
Return the maximum packed double-precision floating-point values between ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MAXPS--Return Maximum Packed Single-Precision Floating-Point Values.
MAXPS
xmm1,xmm2/m128
0F 5F /r
SSE
Return the maximum single-precision floating-point values between xmm2/m128 and xmm1.
VMAXPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 5F /r
AVX
Return the maximum single-precision floating-point values between xmm2 and xmm3/mem.
VMAXPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 5F /r
AVX
Return the maximum packed single-precision floating-point values between ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MAXSD--Return Maximum Scalar Double-Precision Floating-Point Value.
MAXSD
xmm1,xmm2/m64
F2 0F 5F /r
SSE2
Return the maximum scalar double-precision floating-point value between xmm2/mem64 and xmm1.
VMAXSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 5F /r
AVX
Return the maximum scalar double-precision floating-point value between xmm3/mem64 and xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MAXSS--Return Maximum Scalar Single-Precision Floating-Point Value.
MAXSS
xmm1,xmm2/m32
F3 0F 5F /r
SSE
Return the maximum scalar single-precision floating-point value between xmm2/mem32 and xmm1.
VMAXSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 5F /r
AVX
Return the maximum scalar single-precision floating-point value between xmm3/mem32 and xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MFENCE--Memory Fence.
MFENCE
void
0F AE F0
Serializes load and store operations.
NA
NA
NA
NA
MINPD--Return Minimum Packed Double-Precision Floating-Point Values.
MINPD
xmm1,xmm2/m128
66 0F 5D /r
SSE2
Return the minimum double-precision floating-point values between xmm2/m128 and xmm1.
VMINPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 5D /r
AVX
Return the minimum double-precision floating-point values between xmm2 and xmm3/mem.
VMINPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 5D /r
AVX
Return the minimum packed double-precision floating-point values between ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MINPS--Return Minimum Packed Single-Precision Floating-Point Values.
MINPS
xmm1,xmm2/m128
0F 5D /r
SSE
Return the minimum single-precision floating-point values between xmm2/m128 and xmm1.
VMINPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 5D /r
AVX
Return the minimum single-precision floating-point values between xmm2 and xmm3/mem.
VMINPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 5D /r
AVX
Return the minimum packed single-precision floating-point values between ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MINSD--Return Minimum Scalar Double-Precision Floating-Point Value.
MINSD
xmm1,xmm2/m64
F2 0F 5D /r
SSE2
Return the minimum scalar double-precision floating-point value between xmm2/mem64 and xmm1.
VMINSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 5D /r
AVX
Return the minimum scalar double precision floating-point value between xmm3/mem64 and xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MINSS--Return Minimum Scalar Single-Precision Floating-Point Value.
MINSS
xmm1,xmm2/m32
F3 0F 5D /r
SSE
Return the minimum scalar single-precision floating-point value between xmm2/mem32 and xmm1.
VMINSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 5D /r
AVX
Return the minimum scalar single precision floating-point value between xmm3/mem32 and xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MONITOR--Set Up Monitor Address.
MONITOR
void
0F 01 C8
Sets up a linear address range to be monitored by hardware and activates the monitor. The address range should be a write-back memory caching type. The address is DS:EAX (DS:RAX in 64-bit mode).
NA
NA
NA
NA
MOV--Move.
MOV
r/m8,r8
88 /r
Move r8 to r/m8.
MOV
r/m8***,r8***
REX + 88 /r
Move r8 to r/m8.
MOV
r/m16,r16
89 /r
Move r16 to r/m16.
MOV
r/m32,r32
89 /r
Move r32 to r/m32.
MOV
r/m64,r64
REX.W + 89 /r
Move r64 to r/m64.
MOV
r8,r/m8
8A /r
Move r/m8 to r8.
MOV
r8***,r/m8***
REX + 8A /r
Move r/m8 to r8.
MOV
r16,r/m16
8B /r
Move r/m16 to r16.
MOV
r32,r/m32
8B /r
Move r/m32 to r32.
MOV
r64,r/m64
REX.W + 8B /r
Move r/m64 to r64.
MOV
r/m16,Sreg**
8C /r
Move segment register to r/m16.
MOV
r/m64,Sreg**
REX.W + 8C /r
Move zero extended 16-bit segment register to r/m64.
MOV
Sreg,r/m16**
8E /r
Move r/m16 to segment register.
MOV
Sreg,r/m64**
REX.W + 8E /r
Move lower 16 bits of r/m64 to segment register.
MOV
AL,moffs8*
A0
Move byte at (seg:offset) to AL.
MOV
AL,moffs8*
REX.W + A0
Move byte at (offset) to AL.
MOV
AX,moffs16*
A1
Move word at (seg:offset) to AX.
MOV
EAX,moffs32*
A1
Move doubleword at (seg:offset) to EAX.
MOV
RAX,moffs64*
REX.W + A1
Move quadword at (offset) to RAX.
MOV
moffs8,AL***
A2
Move AL to (seg:offset).
MOV
moffs8,AL
REX.W + A2
Move AL to (offset).
MOV
moffs16*,AX
A3
Move AX to (seg:offset).
MOV
moffs32*,EAX
A3
Move EAX to (seg:offset).
MOV
moffs64*,RAX
REX.W + A3
Move RAX to (offset).
MOV
r8,imm8***
B0+ rb ib
Move imm8 to r8.
MOV
r8,imm8
REX + B0+ rb ib
Move imm8 to r8.
MOV
r16,imm16
B8+ rw iw
Move imm16 to r16.
MOV
r32,imm32
B8+ rd id
Move imm32 to r32.
MOV
r64,imm64
REX.W + B8+ rd io
Move imm64 to r64.
MOV
r/m8,imm8
C6 /0 ib
Move imm8 to r/m8.
MOV
r/m8***,imm8
REX + C6 /0 ib
Move imm8 to r/m8.
MOV
r/m16,imm16
C7 /0 iw
Move imm16 to r/m16.
MOV
r/m32,imm32
C7 /0 id
Move imm32 to r/m32.
MOV
r/m64,imm32
REX.W + C7 /0 io
Move imm32 sign extended to 64-bits to r/m64.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
AL/AX/EAX/RAX
Moffs
NA
NA
Moffs(w)
AL/AX/EAX/RAX
NA
NA
opcode + rd(w)
imm8(r)/16/32/64
NA
NA
ModRM:r/m(w)
imm8(r)/16/32/64
NA
NA
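Several forms above sign-extend an imm32 into a 64-bit destination (REX.W + C7 /0); only the B8+rd register form takes a full imm64. A sketch of the widening step:

```python
def sign_extend(value, from_bits=32):
    # Reinterpret the low `from_bits` bits of `value` as a signed
    # quantity, as REX.W + C7 /0 does when widening imm32 to 64 bits.
    mask = 1 << (from_bits - 1)
    value &= (1 << from_bits) - 1
    return (value ^ mask) - mask
```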
MOV--Move to/from Control Registers.
MOV
r32,CR0-CR7
0F 20 /r
Move control register to r32.
MOV
r64,CR0-CR7
0F 20 /r
Move extended control register to r64.
MOV
r64,CR8
REX.R + 0F 20 /0
Move extended CR8 to r64.
MOV
CR0-CR7,r32
0F 22 /r
Move r32 to control register.
MOV
CR0-CR7,r64
0F 22 /r
Move r64 to extended control register.
MOV
CR8,r64
REX.R + 0F 22 /0
Move r64 to extended CR8.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOV--Move to/from Debug Registers.
MOV
r32,DR0-DR7
0F 21/r
Move debug register to r32.
MOV
r64,DR0-DR7
0F 21/r
Move extended debug register to r64.
MOV
DR0-DR7,r32
0F 23 /r
Move r32 to debug register.
MOV
DR0-DR7,r64
0F 23 /r
Move r64 to extended debug register.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVAPD--Move Aligned Packed Double-Precision Floating-Point Values.
MOVAPD
xmm1,xmm2/m128
66 0F 28 /r
SSE2
Move packed double-precision floating-point values from xmm2/m128 to xmm1.
MOVAPD
xmm2/m128,xmm1
66 0F 29 /r
SSE2
Move packed double-precision floating-point values from xmm1 to xmm2/m128.
VMOVAPD
xmm1,xmm2/m128
VEX.128.66.0F.WIG 28 /r
AVX
Move aligned packed double-precision floating-point values from xmm2/mem to xmm1.
VMOVAPD
xmm2/m128,xmm1
VEX.128.66.0F.WIG 29 /r
AVX
Move aligned packed double-precision floating-point values from xmm1 to xmm2/mem.
VMOVAPD
ymm1,ymm2/m256
VEX.256.66.0F.WIG 28 /r
AVX
Move aligned packed double-precision floating-point values from ymm2/mem to ymm1.
VMOVAPD
ymm2/m256,ymm1
VEX.256.66.0F.WIG 29 /r
AVX
Move aligned packed double-precision floating-point values from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVAPS--Move Aligned Packed Single-Precision Floating-Point Values.
MOVAPS
xmm1,xmm2/m128
0F 28 /r
SSE
Move packed single-precision floating-point values from xmm2/m128 to xmm1.
MOVAPS
xmm2/m128,xmm1
0F 29 /r
SSE
Move packed single-precision floating-point values from xmm1 to xmm2/m128.
VMOVAPS
xmm1,xmm2/m128
VEX.128.0F.WIG 28 /r
AVX
Move aligned packed single-precision floating-point values from xmm2/mem to xmm1.
VMOVAPS
xmm2/m128,xmm1
VEX.128.0F.WIG 29 /r
AVX
Move aligned packed single-precision floating-point values from xmm1 to xmm2/mem.
VMOVAPS
ymm1,ymm2/m256
VEX.256.0F.WIG 28 /r
AVX
Move aligned packed single-precision floating-point values from ymm2/mem to ymm1.
VMOVAPS
ymm2/m256,ymm1
VEX.256.0F.WIG 29 /r
AVX
Move aligned packed single-precision floating-point values from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVBE--Move Data After Swapping Bytes.
MOVBE
r16,m16
0F 38 F0 /r
Reverse byte order in m16 and move to r16.
MOVBE
r32,m32
0F 38 F0 /r
Reverse byte order in m32 and move to r32.
MOVBE
r64,m64
REX.W + 0F 38 F0 /r
Reverse byte order in m64 and move to r64.
MOVBE
m16,r16
0F 38 F1 /r
Reverse byte order in r16 and move to m16.
MOVBE
m32,r32
0F 38 F1 /r
Reverse byte order in r32 and move to m32.
MOVBE
m64,r64
REX.W + 0F 38 F1 /r
Reverse byte order in r64 and move to m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
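The MOVBE byte-order reversal above can be sketched in Python; the function name and the width handling are illustrative, not part of any API:

```python
def movbe(value, width):
    """Behavioral sketch of MOVBE: a load/store whose bytes are
    reversed in transit.  width is the operand size in bytes
    (2, 4, or 8)."""
    data = value.to_bytes(width, "little")   # bytes as stored in memory
    return int.from_bytes(data, "big")       # read back in reversed order

# Reversing 0x12345678 as a 32-bit operand yields 0x78563412.
assert movbe(0x12345678, 4) == 0x78563412
# For a fixed width, MOVBE is its own inverse.
assert movbe(movbe(0xAABB, 2), 2) == 0xAABB
```

This is why MOVBE is commonly used to load and store big-endian data (e.g., network byte order) on a little-endian machine without a separate BSWAP.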
MOVD/MOVQ--Move Doubleword/Move Quadword.
MOVD
mm,r/m32
0F 6E /r
MMX
Move doubleword from r/m32 to mm.
MOVQ
mm,r/m64
REX.W + 0F 6E /r
MMX
Move quadword from r/m64 to mm.
MOVD
r/m32,mm
0F 7E /r
MMX
Move doubleword from mm to r/m32.
MOVQ
r/m64,mm
REX.W + 0F 7E /r
MMX
Move quadword from mm to r/m64.
VMOVD
xmm1,r32/m32
VEX.128.66.0F.W0 6E /r
AVX
Move doubleword from r/m32 to xmm1.
VMOVQ
xmm1,r64/m64
VEX.128.66.0F.W1 6E /r
AVX
Move quadword from r/m64 to xmm1.
MOVD
xmm,r/m32
66 0F 6E /r
SSE2
Move doubleword from r/m32 to xmm.
MOVQ
xmm,r/m64
66 REX.W 0F 6E /r
SSE2
Move quadword from r/m64 to xmm.
MOVD
r/m32,xmm
66 0F 7E /r
SSE2
Move doubleword from xmm register to r/m32.
MOVQ
r/m64,xmm
66 REX.W 0F 7E /r
SSE2
Move quadword from xmm register to r/m64.
VMOVD
r32/m32,xmm1
VEX.128.66.0F.W0 7E /r
AVX
Move doubleword from xmm1 register to r/m32.
VMOVQ
r64/m64,xmm1
VEX.128.66.0F.W1 7E /r
AVX
Move quadword from xmm1 register to r/m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVDDUP--Move One Double-FP and Duplicate.
MOVDDUP
xmm1,xmm2/m64
F2 0F 12 /r
SSE3
Move one double-precision floating-point value from the lower 64-bit operand in xmm2/m64 to xmm1 and duplicate.
VMOVDDUP
xmm1,xmm2/m64
VEX.128.F2.0F.WIG 12 /r
AVX
Move double-precision floating-point values from xmm2/mem and duplicate into xmm1.
VMOVDDUP
ymm1,ymm2/m256
VEX.256.F2.0F.WIG 12 /r
AVX
Move even index double-precision floating-point values from ymm2/mem and duplicate each element into ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVDQA--Move Aligned Double Quadword.
MOVDQA
xmm1,xmm2/m128
66 0F 6F /r
SSE2
Move aligned double quadword from xmm2/m128 to xmm1.
MOVDQA
xmm2/m128,xmm1
66 0F 7F /r
SSE2
Move aligned double quadword from xmm1 to xmm2/m128.
VMOVDQA
xmm1,xmm2/m128
VEX.128.66.0F.WIG 6F /r
AVX
Move aligned packed integer values from xmm2/mem to xmm1.
VMOVDQA
xmm2/m128,xmm1
VEX.128.66.0F.WIG 7F /r
AVX
Move aligned packed integer values from xmm1 to xmm2/mem.
VMOVDQA
ymm1,ymm2/m256
VEX.256.66.0F.WIG 6F /r
AVX
Move aligned packed integer values from ymm2/mem to ymm1.
VMOVDQA
ymm2/m256,ymm1
VEX.256.66.0F.WIG 7F /r
AVX
Move aligned packed integer values from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVDQU--Move Unaligned Double Quadword.
MOVDQU
xmm1,xmm2/m128
F3 0F 6F /r
SSE2
Move unaligned double quadword from xmm2/m128 to xmm1.
MOVDQU
xmm2/m128,xmm1
F3 0F 7F /r
SSE2
Move unaligned double quadword from xmm1 to xmm2/m128.
VMOVDQU
xmm1,xmm2/m128
VEX.128.F3.0F.WIG 6F /r
AVX
Move unaligned packed integer values from xmm2/mem to xmm1.
VMOVDQU
xmm2/m128,xmm1
VEX.128.F3.0F.WIG 7F /r
AVX
Move unaligned packed integer values from xmm1 to xmm2/mem.
VMOVDQU
ymm1,ymm2/m256
VEX.256.F3.0F.WIG 6F /r
AVX
Move unaligned packed integer values from ymm2/mem to ymm1.
VMOVDQU
ymm2/m256,ymm1
VEX.256.F3.0F.WIG 7F /r
AVX
Move unaligned packed integer values from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVDQ2Q--Move Quadword from XMM to MMX Technology Register.
MOVDQ2Q
mm,xmm
F2 0F D6 /r
Move low quadword from xmm to MMX register.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVHLPS--Move Packed Single-Precision Floating-Point Values High to Low.
MOVHLPS
xmm1,xmm2
0F 12 /r
SSE
Move two packed single-precision floating-point values from high quadword of xmm2 to low quadword of xmm1.
VMOVHLPS
xmm1,xmm2,xmm3
VEX.NDS.128.0F.WIG 12 /r
AVX
Merge two packed single-precision floating-point values from high quadword of xmm3 and low quadword of xmm2.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVHPD--Move High Packed Double-Precision Floating-Point Value.
MOVHPD
xmm,m64
66 0F 16 /r
SSE2
Move double-precision floating-point value from m64 to high quadword of xmm.
MOVHPD
m64,xmm
66 0F 17 /r
SSE2
Move double-precision floating-point value from high quadword of xmm to m64.
VMOVHPD
xmm2,xmm1,m64
VEX.NDS.128.66.0F.WIG 16 /r
AVX
Merge double-precision floating-point value from m64 and the low quadword of xmm1.
VMOVHPD
m64,xmm1
VEX.128.66.0F.WIG 17/r
AVX
Move double-precision floating-point values from high quadword of xmm1 to m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVHPS--Move High Packed Single-Precision Floating-Point Values.
MOVHPS
xmm,m64
0F 16 /r
SSE
Move two packed single-precision floating-point values from m64 to high quadword of xmm.
MOVHPS
m64,xmm
0F 17 /r
SSE
Move two packed single-precision floating-point values from high quadword of xmm to m64.
VMOVHPS
xmm2,xmm1,m64
VEX.NDS.128.0F.WIG 16 /r
AVX
Merge two packed single-precision floating-point values from m64 and the low quadword of xmm1.
VMOVHPS
m64,xmm1
VEX.128.0F.WIG 17/r
AVX
Move two packed single-precision floating-point values from high quadword of xmm1 to m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVLHPS--Move Packed Single-Precision Floating-Point Values Low to High.
MOVLHPS
xmm1,xmm2
0F 16 /r
SSE
Move two packed single-precision floating-point values from low quadword of xmm2 to high quadword of xmm1.
VMOVLHPS
xmm1,xmm2,xmm3
VEX.NDS.128.0F.WIG 16 /r
AVX
Merge two packed single-precision floating-point values from low quadword of xmm3 and low quadword of xmm2.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVLPD--Move Low Packed Double-Precision Floating-Point Value.
MOVLPD
xmm,m64
66 0F 12 /r
SSE2
Move double-precision floating-point value from m64 to low quadword of xmm register.
MOVLPD
m64,xmm
66 0F 13 /r
SSE2
Move double-precision floating-point value from low quadword of xmm register to m64.
VMOVLPD
xmm2,xmm1,m64
VEX.NDS.128.66.0F.WIG 12 /r
AVX
Merge double-precision floating-point value from m64 and the high quadword of xmm1.
VMOVLPD
m64,xmm1
VEX.128.66.0F.WIG 13/r
AVX
Move double-precision floating-point values from low quadword of xmm1 to m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVLPS--Move Low Packed Single-Precision Floating-Point Values.
MOVLPS
xmm,m64
0F 12 /r
SSE
Move two packed single-precision floating-point values from m64 to low quadword of xmm.
MOVLPS
m64,xmm
0F 13 /r
SSE
Move two packed single-precision floating-point values from low quadword of xmm to m64.
VMOVLPS
xmm2,xmm1,m64
VEX.NDS.128.0F.WIG 12 /r
AVX
Merge two packed single-precision floating-point values from m64 and the high quadword of xmm1.
VMOVLPS
m64,xmm1
VEX.128.0F.WIG 13/r
AVX
Move two packed single-precision floating-point values from low quadword of xmm1 to m64.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MOVMSKPD--Extract Packed Double-Precision Floating-Point Sign Mask.
MOVMSKPD
reg,xmm
66 0F 50 /r
SSE2
Extract 2-bit sign mask from xmm and store in reg. The upper bits of r32 or r64 are filled with zeros.
VMOVMSKPD
reg,xmm2
VEX.128.66.0F.WIG 50 /r
AVX
Extract 2-bit sign mask from xmm2 and store in reg. The upper bits of r32 or r64 are zeroed.
VMOVMSKPD
reg,ymm2
VEX.256.66.0F.WIG 50 /r
AVX
Extract 4-bit sign mask from ymm2 and store in reg. The upper bits of r32 or r64 are zeroed.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVMSKPS--Extract Packed Single-Precision Floating-Point Sign Mask.
MOVMSKPS
reg,xmm
0F 50 /r
SSE
Extract 4-bit sign mask from xmm and store in reg. The upper bits of r32 or r64 are filled with zeros.
VMOVMSKPS
reg,xmm2
VEX.128.0F.WIG 50 /r
AVX
Extract 4-bit sign mask from xmm2 and store in reg. The upper bits of r32 or r64 are zeroed.
VMOVMSKPS
reg,ymm2
VEX.256.0F.WIG 50 /r
AVX
Extract 8-bit sign mask from ymm2 and store in reg. The upper bits of r32 or r64 are zeroed.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
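The sign-mask extraction that MOVMSKPS/MOVMSKPD perform can be modeled in Python by reading each element's IEEE-754 sign bit; the helper name is illustrative:

```python
import struct

def movmskps(values):
    """Behavioral sketch of MOVMSKPS: gather the sign bit of each
    packed single-precision float into the low bits of an integer
    (element 0 -> bit 0, element 1 -> bit 1, ...)."""
    mask = 0
    for i, v in enumerate(values):
        bits = struct.unpack("<I", struct.pack("<f", v))[0]
        mask |= ((bits >> 31) & 1) << i
    return mask

# Four floats -> 4-bit mask; note -0.0 also has its sign bit set.
assert movmskps([1.0, -2.0, -0.0, 3.0]) == 0b0110
```

MOVMSKPD is the same idea over two doubles (a 2-bit mask), and the VEX.256 forms extend the mask to 4 or 8 bits.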
MOVNTDQA--Load Double Quadword Non-Temporal Aligned Hint.
MOVNTDQA
xmm1,m128
66 0F 38 2A /r
SSE4_1
Move double quadword from m128 to xmm using non-temporal hint if WC memory type.
VMOVNTDQA
xmm1,m128
VEX.128.66.0F38.WIG 2A /r
AVX
Move double quadword from m128 to xmm using non-temporal hint if WC memory type.
VMOVNTDQA
ymm1,m256
VEX.256.66.0F38.WIG 2A /r
AVX2
Move 256-bit data from m256 to ymm using non-temporal hint if WC memory type.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVNTDQ--Store Double Quadword Using Non-Temporal Hint.
MOVNTDQ
m128,xmm
66 0F E7 /r
SSE2
Move double quadword from xmm to m128 using non-temporal hint.
VMOVNTDQ
m128,xmm1
VEX.128.66.0F.WIG E7 /r
AVX
Move packed integer values in xmm1 to m128 using non-temporal hint.
VMOVNTDQ
m256,ymm1
VEX.256.66.0F.WIG E7 /r
AVX
Move packed integer values in ymm1 to m256 using non-temporal hint.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVNTI--Store Doubleword Using Non-Temporal Hint.
MOVNTI
m32,r32
0F C3 /r
Move doubleword from r32 to m32 using non-temporal hint.
MOVNTI
m64,r64
REX.W + 0F C3 /r
Move quadword from r64 to m64 using non-temporal hint.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVNTPD--Store Packed Double-Precision Floating-Point Values Using Non-Temporal Hint.
MOVNTPD
m128,xmm
66 0F 2B /r
SSE2
Move packed double-precision floating-point values from xmm to m128 using non-temporal hint.
VMOVNTPD
m128,xmm1
VEX.128.66.0F.WIG 2B /r
AVX
Move packed double-precision values in xmm1 to m128 using non-temporal hint.
VMOVNTPD
m256,ymm1
VEX.256.66.0F.WIG 2B /r
AVX
Move packed double-precision values in ymm1 to m256 using non-temporal hint.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVNTPS--Store Packed Single-Precision Floating-Point Values Using Non-Temporal Hint.
MOVNTPS
m128,xmm
0F 2B /r
SSE
Move packed single-precision floating-point values from xmm to m128 using non-temporal hint.
VMOVNTPS
m128,xmm1
VEX.128.0F.WIG 2B /r
AVX
Move packed single-precision values in xmm1 to m128 using non-temporal hint.
VMOVNTPS
m256,ymm1
VEX.256.0F.WIG 2B /r
AVX
Move packed single-precision values in ymm1 to m256 using non-temporal hint.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVNTQ--Store of Quadword Using Non-Temporal Hint.
MOVNTQ
m64,mm
0F E7 /r
Move quadword from mm to m64 using non-temporal hint.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVQ--Move Quadword.
MOVQ
mm,mm/m64
0F 6F /r
MMX
Move quadword from mm/m64 to mm.
MOVQ
mm/m64,mm
0F 7F /r
MMX
Move quadword from mm to mm/m64.
MOVQ
xmm1,xmm2/m64
F3 0F 7E /r
SSE2
Move quadword from xmm2/mem64 to xmm1.
VMOVQ
xmm1,xmm2
VEX.128.F3.0F.WIG 7E /r
AVX
Move quadword from xmm2 to xmm1.
VMOVQ
xmm1,m64
VEX.128.F3.0F.WIG 7E /r
AVX
Load quadword from m64 to xmm1.
MOVQ
xmm2/m64,xmm1
66 0F D6 /r
SSE2
Move quadword from xmm1 to xmm2/mem64.
VMOVQ
xmm1/m64,xmm2
VEX.128.66.0F.WIG D6 /r
AVX
Move quadword from xmm2 register to xmm1/m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVQ2DQ--Move Quadword from MMX Technology to XMM Register.
MOVQ2DQ
xmm,mm
F3 0F D6 /r
Move quadword from MMX register to low quadword of xmm.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVS/MOVSB/MOVSW/MOVSD/MOVSQ--Move Data from String to String.
MOVS
m8,m8
A4
For legacy mode, move byte from address DS:(E)SI to ES:(E)DI. For 64-bit mode move byte from address (R|E)SI to (R|E)DI.
MOVS
m16,m16
A5
For legacy mode, move word from address DS:(E)SI to ES:(E)DI. For 64-bit mode move word at address (R|E)SI to (R|E)DI.
MOVS
m32,m32
A5
For legacy mode, move dword from address DS:(E)SI to ES:(E)DI. For 64-bit mode move dword from address (R|E)SI to (R|E)DI.
MOVS
m64,m64
REX.W + A5
Move qword from address (R|E)SI to (R|E)DI.
MOVSB
void
A4
For legacy mode, move byte from address DS:(E)SI to ES:(E)DI. For 64-bit mode move byte from address (R|E)SI to (R|E)DI.
MOVSW
void
A5
For legacy mode, move word from address DS:(E)SI to ES:(E)DI. For 64-bit mode move word at address (R|E)SI to (R|E)DI.
MOVSD
void
A5
For legacy mode, move dword from address DS:(E)SI to ES:(E)DI. For 64-bit mode move dword from address (R|E)SI to (R|E)DI.
MOVSQ
void
REX.W + A5
Move qword from address (R|E)SI to (R|E)DI.
NA
NA
NA
NA
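The string-move behavior above (one element per step, source and destination indices advancing by the element size, direction chosen by DF) can be sketched for the byte form; the function name and the flat-memory model are illustrative:

```python
def rep_movsb(mem, src, dst, count, df=0):
    """Behavioral sketch of REP MOVSB over a bytearray `mem`:
    copy `count` bytes one at a time from src to dst.  DF=0 steps
    the indices up, DF=1 steps them down, mirroring (R|E)SI/(R|E)DI
    auto-increment/decrement."""
    step = -1 if df else 1
    for _ in range(count):
        mem[dst] = mem[src]
        src += step
        dst += step
    return src, dst

m = bytearray(b"abcd....")
rep_movsb(m, 0, 4, 4)
assert m == bytearray(b"abcdabcd")
```

Because the copy proceeds one element at a time, forward copies into an overlapping higher destination replicate earlier bytes, which is the classic MOVSB pattern-fill behavior.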
MOVSD--Move Scalar Double-Precision Floating-Point Value.
MOVSD
xmm1,xmm2/m64
F2 0F 10 /r
SSE2
Move scalar double-precision floating-point value from xmm2/m64 to xmm1 register.
VMOVSD
xmm1,xmm2,xmm3
VEX.NDS.LIG.F2.0F.WIG 10 /r
AVX
Merge scalar double-precision floating-point value from xmm2 and xmm3 to xmm1 register.
VMOVSD
xmm1,m64
VEX.LIG.F2.0F.WIG 10 /r
AVX
Load scalar double-precision floating-point value from m64 to xmm1 register.
MOVSD
xmm2/m64,xmm1
F2 0F 11 /r
SSE2
Move scalar double-precision floating-point value from xmm1 register to xmm2/m64.
VMOVSD
xmm1,xmm2,xmm3
VEX.NDS.LIG.F2.0F.WIG 11 /r
AVX
Merge scalar double-precision floating-point value from xmm2 and xmm3 registers to xmm1.
VMOVSD
m64,xmm1
VEX.LIG.F2.0F.WIG 11 /r
AVX
Move scalar double-precision floating-point value from xmm1 register to m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
VEX.vvvv(r)
ModRM:reg(r)
NA
MOVSHDUP--Move Packed Single-FP High and Duplicate.
MOVSHDUP
xmm1,xmm2/m128
F3 0F 16 /r
SSE3
Move two single-precision floating-point values from the higher 32-bit operand of each qword in xmm2/m128 to xmm1 and duplicate each 32-bit operand to the lower 32-bits of each qword.
VMOVSHDUP
xmm1,xmm2/m128
VEX.128.F3.0F.WIG 16 /r
AVX
Move odd index single-precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VMOVSHDUP
ymm1,ymm2/m256
VEX.256.F3.0F.WIG 16 /r
AVX
Move odd index single-precision floating-point values from ymm2/mem and duplicate each element into ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVSLDUP--Move Packed Single-FP Low and Duplicate.
MOVSLDUP
xmm1,xmm2/m128
F3 0F 12 /r
SSE3
Move two single-precision floating-point values from the lower 32-bit operand of each qword in xmm2/m128 to xmm1 and duplicate each 32-bit operand to the higher 32-bits of each qword.
VMOVSLDUP
xmm1,xmm2/m128
VEX.128.F3.0F.WIG 12 /r
AVX
Move even index single-precision floating-point values from xmm2/mem and duplicate each element into xmm1.
VMOVSLDUP
ymm1,ymm2/m256
VEX.256.F3.0F.WIG 12 /r
AVX
Move even index single-precision floating-point values from ymm2/mem and duplicate each element into ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
MOVSS--Move Scalar Single-Precision Floating-Point Values.
MOVSS
xmm1,xmm2/m32
F3 0F 10 /r
SSE
Move scalar single-precision floating-point value from xmm2/m32 to xmm1 register.
VMOVSS
xmm1,xmm2,xmm3
VEX.NDS.LIG.F3.0F.WIG 10 /r
AVX
Merge scalar single-precision floating-point value from xmm2 and xmm3 to xmm1 register.
VMOVSS
xmm1,m32
VEX.LIG.F3.0F.WIG 10 /r
AVX
Load scalar single-precision floating-point value from m32 to xmm1 register.
MOVSS
xmm2/m32,xmm1
F3 0F 11 /r
SSE
Move scalar single-precision floating-point value from xmm1 register to xmm2/m32.
VMOVSS
xmm1,xmm2,xmm3
VEX.NDS.LIG.F3.0F.WIG 11 /r
AVX
Merge scalar single-precision floating-point value from xmm2 and xmm3 to xmm1 register.
VMOVSS
m32,xmm1
VEX.LIG.F3.0F.WIG 11 /r
AVX
Move scalar single-precision floating-point value from xmm1 register to m32.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
VEX.vvvv(r)
ModRM:reg(r)
NA
MOVSX/MOVSXD--Move with Sign-Extension.
MOVSX
r16,r/m8
0F BE /r
Move byte to word with sign-extension.
MOVSX
r32,r/m8
0F BE /r
Move byte to doubleword with sign-extension.
MOVSX
r64,r/m8*
REX + 0F BE /r
Move byte to quadword with sign-extension.
MOVSX
r32,r/m16
0F BF /r
Move word to doubleword with sign-extension.
MOVSX
r64,r/m16
REX.W + 0F BF /r
Move word to quadword with sign-extension.
MOVSXD
r64,r/m32
REX.W** + 63 /r
Move doubleword to quadword with sign-extension.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
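Sign-extension as performed by MOVSX/MOVSXD can be sketched with plain integer arithmetic; widths and the function name are illustrative:

```python
def movsx(value, src_bits, dst_bits):
    """Behavioral sketch of MOVSX: reinterpret the low src_bits of
    `value` as a signed integer, then widen to dst_bits.  The sign
    bit is subtracted out so it carries negative weight."""
    sign = 1 << (src_bits - 1)
    extended = (value & (sign - 1)) - (value & sign)
    return extended & ((1 << dst_bits) - 1)

# 0xFF as a signed byte is -1; sign-extended to 32 bits it is 0xFFFFFFFF.
assert movsx(0xFF, 8, 32) == 0xFFFFFFFF
# A value with a clear sign bit widens unchanged.
assert movsx(0x7F, 8, 32) == 0x0000007F
```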
MOVUPD--Move Unaligned Packed Double-Precision Floating-Point Values.
MOVUPD
xmm1,xmm2/m128
66 0F 10 /r
SSE2
Move packed double-precision floating-point values from xmm2/m128 to xmm1.
VMOVUPD
xmm1,xmm2/m128
VEX.128.66.0F.WIG 10 /r
AVX
Move unaligned packed double-precision floating-point from xmm2/mem to xmm1.
VMOVUPD
ymm1,ymm2/m256
VEX.256.66.0F.WIG 10 /r
AVX
Move unaligned packed double-precision floating-point from ymm2/mem to ymm1.
MOVUPD
xmm2/m128,xmm
66 0F 11 /r
SSE2
Move packed double-precision floating-point values from xmm1 to xmm2/m128.
VMOVUPD
xmm2/m128,xmm1
VEX.128.66.0F.WIG 11 /r
AVX
Move unaligned packed double-precision floating-point from xmm1 to xmm2/mem.
VMOVUPD
ymm2/m256,ymm1
VEX.256.66.0F.WIG 11 /r
AVX
Move unaligned packed double-precision floating-point from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVUPS--Move Unaligned Packed Single-Precision Floating-Point Values.
MOVUPS
xmm1,xmm2/m128
0F 10 /r
SSE
Move packed single-precision floating-point values from xmm2/m128 to xmm1.
VMOVUPS
xmm1,xmm2/m128
VEX.128.0F.WIG 10 /r
AVX
Move unaligned packed single-precision floating-point from xmm2/mem to xmm1.
VMOVUPS
ymm1,ymm2/m256
VEX.256.0F.WIG 10 /r
AVX
Move unaligned packed single-precision floating-point from ymm2/mem to ymm1.
MOVUPS
xmm2/m128,xmm1
0F 11 /r
SSE
Move packed single-precision floating-point values from xmm1 to xmm2/m128.
VMOVUPS
xmm2/m128,xmm1
VEX.128.0F.WIG 11 /r
AVX
Move unaligned packed single-precision floating-point from xmm1 to xmm2/mem.
VMOVUPS
ymm2/m256,ymm1
VEX.256.0F.WIG 11 /r
AVX
Move unaligned packed single-precision floating-point from ymm1 to ymm2/mem.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
MOVZX--Move with Zero-Extend.
MOVZX
r16,r/m8
0F B6 /r
Move byte to word with zero-extension.
MOVZX
r32,r/m8
0F B6 /r
Move byte to doubleword, zero-extension.
MOVZX
r64,r/m8*
REX.W + 0F B6 /r
Move byte to quadword, zero-extension.
MOVZX
r32,r/m16
0F B7 /r
Move word to doubleword, zero-extension.
MOVZX
r64,r/m16
REX.W + 0F B7 /r
Move word to quadword, zero-extension.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
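Zero-extension, by contrast, is just a mask: MOVZX keeps the low source bits and clears everything above them, so the destination width needs no special handling. A minimal sketch (name illustrative):

```python
def movzx(value, src_bits):
    """Behavioral sketch of MOVZX: keep the low src_bits of `value`;
    all higher bits of any wider destination become zero."""
    return value & ((1 << src_bits) - 1)

assert movzx(0xFF, 8) == 0x000000FF      # not 0xFFFFFFFF as with MOVSX
assert movzx(0x12345678, 16) == 0x5678   # only the low word survives
```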
MPSADBW--Compute Multiple Packed Sums of Absolute Difference.
MPSADBW
xmm1,xmm2/m128,imm8
66 0F 3A 42 /r ib
SSE4_1
Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers in xmm1 and xmm2/m128 and writes the results in xmm1. Starting offsets within xmm1 and xmm2/m128 are determined by imm8.
VMPSADBW
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 42 /r ib
AVX
Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers in xmm2 and xmm3/m128 and writes the results in xmm1. Starting offsets within xmm2 and xmm3/m128 are determined by imm8.
VMPSADBW
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 42 /r ib
AVX2
Sums absolute 8-bit integer difference of adjacent groups of 4 byte integers in ymm2 and ymm3/m256 and writes the results in ymm1. Starting offsets within ymm2 and ymm3/m256 are determined by imm8.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
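The offset selection described above can be made concrete with a behavioral sketch of the 128-bit form, under my reading that imm8 bits 1:0 pick a 4-byte block of the second operand and bit 2 picks the starting point of an 11-byte window in the first (names and byte-list representation are illustrative):

```python
def mpsadbw(dst, src, imm8):
    """Behavioral sketch of MPSADBW (128-bit form).  dst and src are
    16-element byte lists.  Eight word results are produced: result i
    is the sum of absolute differences between the 4-byte block of
    src and the 4 bytes of dst starting at offset+i."""
    src_off = (imm8 & 0b11) * 4          # imm8[1:0] selects src block
    dst_off = ((imm8 >> 2) & 1) * 4      # imm8[2] selects dst window
    block = src[src_off:src_off + 4]
    return [sum(abs(dst[dst_off + i + j] - block[j]) for j in range(4))
            for i in range(8)]

a = list(range(16))   # bytes 0..15
b = [0] * 16
# With imm8 = 0, result 0 is |0-0|+|1-0|+|2-0|+|3-0| = 6.
assert mpsadbw(a, b, 0)[0] == 6
```

This sliding-window pattern is the reason MPSADBW is useful for motion-estimation style searches: one instruction scores eight candidate alignments at once.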
MUL--Unsigned Multiply.
MUL
r/m8*
F6 /4
Unsigned multiply (AX <-- AL * r/m8).
MUL
r/m8
REX + F6 /4
Unsigned multiply (AX <-- AL * r/m8).
MUL
r/m16
F7 /4
Unsigned multiply (DX:AX <-- AX * r/m16).
MUL
r/m32
F7 /4
Unsigned multiply (EDX:EAX <-- EAX * r/m32).
MUL
r/m64
REX.W + F7 /4
Unsigned multiply (RDX:RAX <-- RAX * r/m64).
ModRM:r/m(r)
NA
NA
NA
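The widening behavior in the table above (the full product split across a register pair) can be sketched for the 32-bit form; the function name is illustrative:

```python
def mul32(eax, src):
    """Behavioral sketch of MUL r/m32: unsigned EAX * src produces a
    64-bit product, returned as (EDX, EAX) = (high half, low half)."""
    product = (eax & 0xFFFFFFFF) * (src & 0xFFFFFFFF)
    return product >> 32, product & 0xFFFFFFFF

# 0xFFFFFFFF * 2 = 0x1_FFFFFFFE -> EDX = 1, EAX = 0xFFFFFFFE.
assert mul32(0xFFFFFFFF, 2) == (1, 0xFFFFFFFE)
```

CF and OF are set exactly when the high half (EDX here) is nonzero, i.e., when the product does not fit in the source width.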
MULPD--Multiply Packed Double-Precision Floating-Point Values.
MULPD
xmm1,xmm2/m128
66 0F 59 /r
SSE2
Multiply packed double-precision floating-point values in xmm2/m128 by xmm1.
VMULPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 59 /r
AVX
Multiply packed double-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VMULPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 59 /r
AVX
Multiply packed double-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MULPS--Multiply Packed Single-Precision Floating-Point Values.
MULPS
xmm1,xmm2/m128
0F 59 /r
SSE
Multiply packed single-precision floating-point values in xmm2/mem by xmm1.
VMULPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 59 /r
AVX
Multiply packed single-precision floating-point values from xmm3/mem to xmm2 and stores result in xmm1.
VMULPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 59 /r
AVX
Multiply packed single-precision floating-point values from ymm3/mem to ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MULSD--Multiply Scalar Double-Precision Floating-Point Values.
MULSD
xmm1,xmm2/m64
F2 0F 59 /r
SSE2
Multiply the low double-precision floating-point value in xmm2/mem64 by the low double-precision floating-point value in xmm1.
VMULSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 59/r
AVX
Multiply the low double-precision floating-point value in xmm3/mem64 by the low double-precision floating-point value in xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MULSS--Multiply Scalar Single-Precision Floating-Point Values.
MULSS
xmm1,xmm2/m32
F3 0F 59 /r
SSE
Multiply the low single-precision floating-point value in xmm2/mem by the low single-precision floating-point value in xmm1.
VMULSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 59 /r
AVX
Multiply the low single-precision floating-point value in xmm3/mem by the low single-precision floating-point value in xmm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
MULX--Unsigned Multiply Without Affecting Flags.
MULX
r32a,r32b,r/m32
VEX.NDD.LZ.F2.0F38.W0 F6 /r
BMI2
Unsigned multiply of r/m32 with EDX without affecting arithmetic flags.
MULX
r64a,r64b,r/m64
VEX.NDD.LZ.F2.0F38.W1 F6 /r
BMI2
Unsigned multiply of r/m64 with RDX without affecting arithmetic flags.
ModRM:reg(w)
VEX.vvvv(w)
ModRM:r/m(r)
RDX/EDX is implied 64/32 bits source
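MULX computes the same full-width product as MUL but writes the halves to two explicit destinations and leaves the arithmetic flags untouched, which is what makes it convenient inside multi-precision carry chains. A sketch of the 64-bit form (name illustrative):

```python
def mulx64(rdx, src):
    """Behavioral sketch of MULX r64a, r64b, r/m64: unsigned
    RDX * src, returned as (high, low) 64-bit halves.  No flags are
    modeled because, unlike MUL, the instruction changes none."""
    mask = (1 << 64) - 1
    product = (rdx & mask) * (src & mask)
    return product >> 64, product & mask

hi, lo = mulx64(1 << 63, 4)   # 2**65 -> high = 2, low = 0
assert (hi, lo) == (2, 0)
```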
MWAIT--Monitor Wait.
MWAIT
void
0F 01 C9
A hint that allows the processor to stop instruction execution and enter an implementation-dependent optimized state until occurrence of a class of events.
NA
NA
NA
NA
NEG--Two's Complement Negation.
NEG
r/m8
F6 /3
Two's complement negate r/m8.
NEG
r/m8*
REX + F6 /3
Two's complement negate r/m8.
NEG
r/m16
F7 /3
Two's complement negate r/m16.
NEG
r/m32
F7 /3
Two's complement negate r/m32.
NEG
r/m64
REX.W + F7 /3
Two's complement negate r/m64.
ModRM:r/m(r,w)
NA
NA
NA
NOP--No Operation.
NOP
void
90
One byte no-operation instruction.
NOP
r/m16
0F 1F /0
Multi-byte no-operation instruction.
NOP
r/m32
0F 1F /0
Multi-byte no-operation instruction.
NA
NA
NA
NA
ModRM:r/m(r)
NA
NA
NA
NOT--One's Complement Negation.
NOT
r/m8
F6 /2
Reverse each bit of r/m8.
NOT
r/m8*
REX + F6 /2
Reverse each bit of r/m8.
NOT
r/m16
F7 /2
Reverse each bit of r/m16.
NOT
r/m32
F7 /2
Reverse each bit of r/m32.
NOT
r/m64
REX.W + F7 /2
Reverse each bit of r/m64.
ModRM:r/m(r,w)
NA
NA
NA
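NEG and NOT are both fixed-width bit operations, and modeling them together makes the usual two's-complement identity visible; the helper names are illustrative:

```python
def neg(value, bits=32):
    """Behavioral sketch of NEG: two's-complement negate, wrapped to
    the operand width."""
    return (-value) & ((1 << bits) - 1)

def not_(value, bits=32):
    """Behavioral sketch of NOT: flip every bit of the operand."""
    return value ^ ((1 << bits) - 1)

assert neg(1, 32) == 0xFFFFFFFF
# The standard identity: NEG x == NOT x + 1 (mod 2**bits).
assert neg(5, 32) == (not_(5, 32) + 1) & 0xFFFFFFFF
```

One asymmetry worth remembering: NEG updates the arithmetic flags (CF is set unless the operand was zero), while NOT affects no flags at all.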
OR--Logical Inclusive OR.
OR
AL,imm8
0C ib
AL OR imm8.
OR
AX,imm16
0D iw
AX OR imm16.
OR
EAX,imm32
0D id
EAX OR imm32.
OR
RAX,imm32
REX.W + 0D id
RAX OR imm32 (sign-extended).
OR
r/m8,imm8
80 /1 ib
r/m8 OR imm8.
OR
r/m8*,imm8
REX + 80 /1 ib
r/m8 OR imm8.
OR
r/m16,imm16
81 /1 iw
r/m16 OR imm16.
OR
r/m32,imm32
81 /1 id
r/m32 OR imm32.
OR
r/m64,imm32
REX.W + 81 /1 id
r/m64 OR imm32 (sign-extended).
OR
r/m16,imm8
83 /1 ib
r/m16 OR imm8 (sign-extended).
OR
r/m32,imm8
83 /1 ib
r/m32 OR imm8 (sign-extended).
OR
r/m64,imm8
REX.W + 83 /1 ib
r/m64 OR imm8 (sign-extended).
OR
r/m8,r8
08 /r
r/m8 OR r8.
OR
r/m8*,r8*
REX + 08 /r
r/m8 OR r8.
OR
r/m16,r16
09 /r
r/m16 OR r16.
OR
r/m32,r32
09 /r
r/m32 OR r32.
OR
r/m64,r64
REX.W + 09 /r
r/m64 OR r64.
OR
r8,r/m8
0A /r
r8 OR r/m8.
OR
r8*,r/m8*
REX + 0A /r
r8 OR r/m8.
OR
r16,r/m16
0B /r
r16 OR r/m16.
OR
r32,r/m32
0B /r
r32 OR r/m32.
OR
r64,r/m64
REX.W + 0B /r
r64 OR r/m64.
AL/AX/EAX/RAX
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ORPD--Bitwise Logical OR of Double-Precision Floating-Point Values.
ORPD
xmm1,xmm2/m128
66 0F 56 /r
SSE2
Bitwise OR of xmm2/m128 and xmm1.
VORPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 56 /r
AVX
Return the bitwise logical OR of packed double-precision floating-point values in xmm2 and xmm3/mem.
VORPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 56 /r
AVX
Return the bitwise logical OR of packed double-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ORPS--Bitwise Logical OR of Single-Precision Floating-Point Values.
ORPS
xmm1,xmm2/m128
0F 56 /r
SSE
Bitwise OR of xmm1 and xmm2/m128.
VORPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 56 /r
AVX
Return the bitwise logical OR of packed single-precision floating-point values in xmm2 and xmm3/mem.
VORPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 56 /r
AVX
Return the bitwise logical OR of packed single-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
OUT--Output to Port.
OUT
imm8,AL
E6 ib
Output byte in AL to I/O port address imm8.
OUT
imm8,AX
E7 ib
Output word in AX to I/O port address imm8.
OUT
imm8,EAX
E7 ib
Output doubleword in EAX to I/O port address imm8.
OUT
DX,AL
EE
Output byte in AL to I/O port address in DX.
OUT
DX,AX
EF
Output word in AX to I/O port address in DX.
OUT
DX,EAX
EF
Output doubleword in EAX to I/O port address in DX.
imm8(r)
NA
NA
NA
NA
NA
NA
NA
OUTS/OUTSB/OUTSW/OUTSD--Output String to Port.
OUTS
DX,m8
6E
Output byte from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
OUTS
DX,m16
6F
Output word from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
OUTS
DX,m32
6F
Output doubleword from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
OUTSB
void
6E
Output byte from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
OUTSW
void
6F
Output word from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
OUTSD
void
6F
Output doubleword from memory location specified in DS:(E)SI or RSI to I/O port specified in DX**.
NA
NA
NA
NA
PABSB/PABSW/PABSD--Packed Absolute Value.
PABSB
mm1,mm2/m64
0F 38 1C /r1
SSSE3
Compute the absolute value of bytes in mm2/m64 and store UNSIGNED result in mm1.
PABSB
xmm1,xmm2/m128
66 0F 38 1C /r
SSSE3
Compute the absolute value of bytes in xmm2/m128 and store UNSIGNED result in xmm1.
PABSW
mm1,mm2/m64
0F 38 1D /r1
SSSE3
Compute the absolute value of 16-bit integers in mm2/m64 and store UNSIGNED result in mm1.
PABSW
xmm1,xmm2/m128
66 0F 38 1D /r
SSSE3
Compute the absolute value of 16-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
PABSD
mm1,mm2/m64
0F 38 1E /r1
SSSE3
Compute the absolute value of 32-bit integers in mm2/m64 and store UNSIGNED result in mm1.
PABSD
xmm1,xmm2/m128
66 0F 38 1E /r
SSSE3
Compute the absolute value of 32-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VPABSB
xmm1,xmm2/m128
VEX.128.66.0F38.WIG 1C /r
AVX
Compute the absolute value of bytes in xmm2/m128 and store UNSIGNED result in xmm1.
VPABSW
xmm1,xmm2/m128
VEX.128.66.0F38.WIG 1D /r
AVX
Compute the absolute value of 16-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VPABSD
xmm1,xmm2/m128
VEX.128.66.0F38.WIG 1E /r
AVX
Compute the absolute value of 32-bit integers in xmm2/m128 and store UNSIGNED result in xmm1.
VPABSB
ymm1,ymm2/m256
VEX.256.66.0F38.WIG 1C /r
AVX2
Compute the absolute value of bytes in ymm2/m256 and store UNSIGNED result in ymm1.
VPABSW
ymm1,ymm2/m256
VEX.256.66.0F38.WIG 1D /r
AVX2
Compute the absolute value of 16-bit integers in ymm2/m256 and store UNSIGNED result in ymm1.
VPABSD
ymm1,ymm2/m256
VEX.256.66.0F38.WIG 1E /r
AVX2
Compute the absolute value of 32-bit integers in ymm2/m256 and store UNSIGNED result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
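The repeated "store UNSIGNED result" wording above matters for one input: the most negative value. A per-byte sketch (name illustrative) shows why:

```python
def pabsb(src):
    """Behavioral sketch of PABSB: per-byte absolute value, with the
    result kept as an unsigned byte.  |-128| = 128 = 0x80, a value
    that has no positive signed-byte encoding -- hence "UNSIGNED"."""
    def absb(b):
        signed = b - 256 if b >= 128 else b   # decode signed byte
        return abs(signed) & 0xFF
    return [absb(b) for b in src]

# 0xFF (-1) -> 1; 0x80 (-128) -> 0x80, i.e. 128 unsigned.
assert pabsb([0x01, 0xFF, 0x80, 0x7F]) == [0x01, 0x01, 0x80, 0x7F]
```

PABSW and PABSD behave identically at 16-bit and 32-bit lane widths.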
PACKSSWB/PACKSSDW--Pack with Signed Saturation.
PACKSSWB
mm1,mm2/m64
0F 63 /r1
MMX
Converts 4 packed signed word integers from mm1 and from mm2/m64 into 8 packed signed byte integers in mm1 using signed saturation.
PACKSSWB
xmm1,xmm2/m128
66 0F 63 /r
SSE2
Converts 8 packed signed word integers from xmm1 and from xmm2/m128 into 16 packed signed byte integers in xmm1 using signed saturation.
PACKSSDW
mm1,mm2/m64
0F 6B /r1
MMX
Converts 2 packed signed doubleword integers from mm1 and from mm2/m64 into 4 packed signed word integers in mm1 using signed saturation.
PACKSSDW
xmm1,xmm2/m128
66 0F 6B /r
SSE2
Converts 4 packed signed doubleword integers from xmm1 and from xmm2/m128 into 8 packed signed word integers in xmm1 using signed saturation.
VPACKSSWB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 63 /r
AVX
Converts 8 packed signed word integers from xmm2 and from xmm3/m128 into 16 packed signed byte integers in xmm1 using signed saturation.
VPACKSSDW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 6B /r
AVX
Converts 4 packed signed doubleword integers from xmm2 and from xmm3/m128 into 8 packed signed word integers in xmm1 using signed saturation.
VPACKSSWB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 63 /r
AVX2
Converts 16 packed signed word integers from ymm2 and from ymm3/m256 into 32 packed signed byte integers in ymm1 using signed saturation.
VPACKSSDW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 6B /r
AVX2
Converts 8 packed signed doubleword integers from ymm2 and from ymm3/m256 into 16 packed signed word integers in ymm1 using signed saturation.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
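Signed-saturating narrowing, as PACKSSWB performs word-to-byte, can be sketched as a clamp followed by re-encoding; the name and list representation are illustrative:

```python
def packsswb(dst_words, src_words):
    """Behavioral sketch of PACKSSWB: narrow signed words to signed
    bytes with saturation.  The first operand supplies the low half
    of the result, the second the high half."""
    def sat8(w):
        return max(-128, min(127, w))        # clamp to signed-byte range
    return [sat8(w) & 0xFF for w in dst_words + src_words]

# 300 saturates to 127; -300 saturates to -128 (0x80 as a byte).
assert packsswb([300, -300, 5, -5], [0, 1, 2, 3]) == \
    [127, 0x80, 5, 0xFB, 0, 1, 2, 3]
```

PACKSSDW is the same operation one width up (doublewords clamped to the signed-word range).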
PACKUSDW--Pack with Unsigned Saturation.
PACKUSDW
xmm1,xmm2/m128
66 0F 38 2B /r
SSE4_1
Convert 4 packed signed doubleword integers from xmm1 and 4 packed signed doubleword integers from xmm2/m128 into 8 packed unsigned word integers in xmm1 using unsigned saturation.
VPACKUSDW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 2B /r
AVX
Convert 4 packed signed doubleword integers from xmm2 and 4 packed signed doubleword integers from xmm3/m128 into 8 packed unsigned word integers in xmm1 using unsigned saturation.
VPACKUSDW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 2B /r
AVX2
Convert 8 packed signed doubleword integers from ymm2 and 8 packed signed doubleword integers from ymm3/m256 into 16 packed unsigned word integers in ymm1 using unsigned saturation.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PACKUSWB--Pack with Unsigned Saturation.
PACKUSWB
mm,mm/m64
0F 67 /r
MMX
Converts 4 signed word integers from mm and 4 signed word integers from mm/m64 into 8 unsigned byte integers in mm using unsigned saturation.
PACKUSWB
xmm1,xmm2/m128
66 0F 67 /r
SSE2
Converts 8 signed word integers from xmm1 and 8 signed word integers from xmm2/m128 into 16 unsigned byte integers in xmm1 using unsigned saturation.
VPACKUSWB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 67 /r
AVX
Converts 8 signed word integers from xmm2 and 8 signed word integers from xmm3/m128 into 16 unsigned byte integers in xmm1 using unsigned saturation.
VPACKUSWB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 67 /r
AVX2
Converts 16 signed word integers from ymm2 and 16 signed word integers from ymm3/m256 into 32 unsigned byte integers in ymm1 using unsigned saturation.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
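Unsigned saturation differs from the signed case only in the clamp bounds; a minimal Python sketch (names are illustrative):

```python
def unsigned_saturate(value, bits):
    # Clamp a signed input into the range of an unsigned `bits`-bit integer.
    return max(0, min((1 << bits) - 1, value))

def packuswb(dest_words, src_words):
    # PACKUSWB model: signed word inputs; negative values clamp to 0,
    # values above 255 clamp to 255.
    return [unsigned_saturate(w, 8) for w in dest_words + src_words]
```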
PADDB/PADDW/PADDD--Add Packed Integers.
PADDB
mm,mm/m64
0F FC /r
MMX
Add packed byte integers from mm/m64 and mm.
PADDB
xmm1,xmm2/m128
66 0F FC /r
SSE2
Add packed byte integers from xmm2/m128 and xmm1.
PADDW
mm,mm/m64
0F FD /r
MMX
Add packed word integers from mm/m64 and mm.
PADDW
xmm1,xmm2/m128
66 0F FD /r
SSE2
Add packed word integers from xmm2/m128 and xmm1.
PADDD
mm,mm/m64
0F FE /r
MMX
Add packed doubleword integers from mm/m64 and mm.
PADDD
xmm1,xmm2/m128
66 0F FE /r
SSE2
Add packed doubleword integers from xmm2/m128 and xmm1.
VPADDB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG FC /r
AVX
Add packed byte integers from xmm3/m128 and xmm2.
VPADDW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG FD /r
AVX
Add packed word integers from xmm3/m128 and xmm2.
VPADDD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG FE /r
AVX
Add packed doubleword integers from xmm3/m128 and xmm2.
VPADDB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG FC /r
AVX2
Add packed byte integers from ymm2, and ymm3/m256 and store in ymm1.
VPADDW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG FD /r
AVX2
Add packed word integers from ymm2, ymm3/m256 and store in ymm1.
VPADDD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG FE /r
AVX2
Add packed doubleword integers from ymm2, ymm3/m256 and store in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
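The non-saturating packed adds wrap silently in each lane. A one-line Python model of the byte form (lane width generalizes in the obvious way):

```python
def paddb(a, b):
    # Lane-wise byte add with wraparound: each lane is added modulo 256;
    # carries never propagate into the neighboring lane.
    return [(x + y) & 0xFF for x, y in zip(a, b)]
```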
PADDQ--Add Packed Quadword Integers.
PADDQ
mm1,mm2/m64
0F D4 /r
SSE2
Add quadword integer mm2/m64 to mm1.
PADDQ
xmm1,xmm2/m128
66 0F D4 /r
SSE2
Add packed quadword integers xmm2/m128 to xmm1.
VPADDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D4 /r
AVX
Add packed quadword integers xmm3/m128 and xmm2.
VPADDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG D4 /r
AVX2
Add packed quadword integers from ymm2, ymm3/m256 and store in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PADDSB/PADDSW--Add Packed Signed Integers with Signed Saturation.
PADDSB
mm,mm/m64
0F EC /r
MMX
Add packed signed byte integers from mm/m64 and mm and saturate the results.
PADDSB
xmm1,xmm2/m128
66 0F EC /r
SSE2
Add packed signed byte integers from xmm2/m128 and xmm1 and saturate the results.
PADDSW
mm,mm/m64
0F ED /r
MMX
Add packed signed word integers from mm/m64 and mm and saturate the results.
PADDSW
xmm1,xmm2/m128
66 0F ED /r
SSE2
Add packed signed word integers from xmm2/m128 and xmm1 and saturate the results.
VPADDSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG EC /r
AVX
Add packed signed byte integers from xmm3/m128 and xmm2 and saturate the results.
VPADDSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG ED /r
AVX
Add packed signed word integers from xmm3/m128 and xmm2 and saturate the results.
VPADDSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG EC /r
AVX2
Add packed signed byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
VPADDSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG ED /r
AVX2
Add packed signed word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
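In contrast to the wraparound adds, the saturating forms clamp each lane sum. A sketch of the signed byte case (name invented for illustration):

```python
def paddsb(a, b):
    # Saturating signed byte add: sums outside [-128, 127] clamp to the
    # nearest bound instead of wrapping.
    def sat8(v):
        return max(-128, min(127, v))
    return [sat8(x + y) for x, y in zip(a, b)]
```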
PADDUSB/PADDUSW--Add Packed Unsigned Integers with Unsigned Saturation.
PADDUSB
mm,mm/m64
0F DC /r
MMX
Add packed unsigned byte integers from mm/m64 and mm and saturate the results.
PADDUSB
xmm1,xmm2/m128
66 0F DC /r
SSE2
Add packed unsigned byte integers from xmm2/m128 and xmm1 and saturate the results.
PADDUSW
mm,mm/m64
0F DD /r
MMX
Add packed unsigned word integers from mm/m64 and mm and saturate the results.
PADDUSW
xmm1,xmm2/m128
66 0F DD /r
SSE2
Add packed unsigned word integers from xmm2/m128 to xmm1 and saturate the results.
VPADDUSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DC /r
AVX
Add packed unsigned byte integers from xmm3/m128 to xmm2 and saturate the results.
VPADDUSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DD /r
AVX
Add packed unsigned word integers from xmm3/m128 to xmm2 and saturate the results.
VPADDUSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DC /r
AVX2
Add packed unsigned byte integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
VPADDUSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DD /r
AVX2
Add packed unsigned word integers from ymm2, and ymm3/m256 and store the saturated results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PALIGNR--Packed Align Right.
PALIGNR
mm1,mm2/m64,imm8
0F 3A 0F /r ib
SSSE3
Concatenate destination and source operands, extract byte-aligned result shifted to the right by constant value in imm8 into mm1.
PALIGNR
xmm1,xmm2/m128,imm8
66 0F 3A 0F /r ib
SSSE3
Concatenate destination and source operands, extract byte-aligned result shifted to the right by constant value in imm8 into xmm1.
VPALIGNR
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 0F /r ib
AVX
Concatenate xmm2 and xmm3/m128, extract byte aligned result shifted to the right by constant value in imm8 and result is stored in xmm1.
VPALIGNR
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 0F /r ib
AVX2
Concatenate pairs of 16 bytes in ymm2 and ymm3/m256 into 32-byte intermediate result, extract byte-aligned, 16-byte result shifted to the right by constant values in imm8 from each intermediate result, and two 16-byte results are stored in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
PAND--Logical AND.
PAND
mm,mm/m64
0F DB /r
MMX
Bitwise AND mm/m64 and mm.
PAND
xmm1,xmm2/m128
66 0F DB /r
SSE2
Bitwise AND of xmm2/m128 and xmm1.
VPAND
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DB /r
AVX
Bitwise AND of xmm3/m128 and xmm2.
VPAND
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DB /r
AVX2
Bitwise AND of ymm2, and ymm3/m256 and store result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PANDN--Logical AND NOT.
PANDN
mm,mm/m64
0F DF /r
MMX
Bitwise AND NOT of mm/m64 and mm.
PANDN
xmm1,xmm2/m128
66 0F DF /r
SSE2
Bitwise AND NOT of xmm2/m128 and xmm1.
VPANDN
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DF /r
AVX
Bitwise AND NOT of xmm3/m128 and xmm2.
VPANDN
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DF /r
AVX2
Bitwise AND NOT of ymm2, and ymm3/m256 and store result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PAUSE--Spin Loop Hint.
PAUSE
void
F3 90
Gives hint to processor that improves performance of spin-wait loops.
NA
NA
NA
NA
PAVGB/PAVGW--Average Packed Integers.
PAVGB
mm1,mm2/m64
0F E0 /r
SSE
Average packed unsigned byte integers from mm2/m64 and mm1 with rounding.
PAVGB
xmm1,xmm2/m128
66 0F E0 /r
SSE2
Average packed unsigned byte integers from xmm2/m128 and xmm1 with rounding.
PAVGW
mm1,mm2/m64
0F E3 /r
SSE
Average packed unsigned word integers from mm2/m64 and mm1 with rounding.
PAVGW
xmm1,xmm2/m128
66 0F E3 /r
SSE2
Average packed unsigned word integers from xmm2/m128 and xmm1 with rounding.
VPAVGB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E0 /r
AVX
Average packed unsigned byte integers from xmm3/m128 and xmm2 with rounding.
VPAVGW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E3 /r
AVX
Average packed unsigned word integers from xmm3/m128 and xmm2 with rounding.
VPAVGB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E0 /r
AVX2
Average packed unsigned byte integers from ymm2, and ymm3/m256 with rounding and store to ymm1.
VPAVGW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E3 /r
AVX2
Average packed unsigned word integers from ymm2, ymm3/m256 with rounding to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PBLENDVB--Variable Blend Packed Bytes.
PBLENDVB
xmm1,xmm2/m128,<XMM0>
66 0F 38 10 /r
SSE4_1
Select byte values from xmm1 and xmm2/m128 using the mask in the high bit of each byte of the implicit operand XMM0, and store the values into xmm1.
VPBLENDVB
xmm1,xmm2,xmm3/m128,xmm4
VEX.NDS.128.66.0F3A.W0 4C /r /is4
AVX
Select byte values from xmm2 and xmm3/m128 using mask bits in the specified mask register, xmm4, and store the values into xmm1.
VPBLENDVB
ymm1,ymm2,ymm3/m256,ymm4
VEX.NDS.256.66.0F3A.W0 4C /r /is4
AVX2
Select byte values from ymm2 and ymm3/m256 using mask bits in the specified mask register, ymm4, and store the values into ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
implicit XMM0
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)[7:4]
PBLENDW--Blend Packed Words.
PBLENDW
xmm1,xmm2/m128,imm8
66 0F 3A 0E /r ib
SSE4_1
Select words from xmm1 and xmm2/m128 from mask specified in imm8 and store the values into xmm1.
VPBLENDW
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 0E /r ib
AVX
Select words from xmm2 and xmm3/m128 from mask specified in imm8 and store the values into xmm1.
VPBLENDW
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.WIG 0E /r ib
AVX2
Select words from ymm2 and ymm3/m256 from mask specified in imm8 and store the values into ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
PCLMULQDQ--Carry-Less Multiplication Quadword.
PCLMULQDQ
xmm1,xmm2/m128,imm8
66 0F 3A 44 /r ib
PCLMULQDQ
Carry-less multiplication of one quadword of xmm1 by one quadword of xmm2/m128, stores the 128-bit result in xmm1. The immediate is used to determine which quadwords of xmm1 and xmm2/m128 should be used.
VPCLMULQDQ
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.WIG 44 /r ib
PCLMULQDQ
AVX
Carry-less multiplication of one quadword of xmm2 by one quadword of xmm3/m128, stores the 128-bit result in xmm1. The immediate is used to determine which quadwords of xmm2 and xmm3/m128 should be used.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
PCMPEQB/PCMPEQW/PCMPEQD--Compare Packed Data for Equal.
PCMPEQB
mm,mm/m64
0F 74 /r
MMX
Compare packed bytes in mm/m64 and mm for equality.
PCMPEQB
xmm1,xmm2/m128
66 0F 74 /r
SSE2
Compare packed bytes in xmm2/m128 and xmm1 for equality.
PCMPEQW
mm,mm/m64
0F 75 /r
MMX
Compare packed words in mm/m64 and mm for equality.
PCMPEQW
xmm1,xmm2/m128
66 0F 75 /r
SSE2
Compare packed words in xmm2/m128 and xmm1 for equality.
PCMPEQD
mm,mm/m64
0F 76 /r
MMX
Compare packed doublewords in mm/m64 and mm for equality.
PCMPEQD
xmm1,xmm2/m128
66 0F 76 /r
SSE2
Compare packed doublewords in xmm2/m128 and xmm1 for equality.
VPCMPEQB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 74 /r
AVX
Compare packed bytes in xmm3/m128 and xmm2 for equality.
VPCMPEQW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 75 /r
AVX
Compare packed words in xmm3/m128 and xmm2 for equality.
VPCMPEQD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 76 /r
AVX
Compare packed doublewords in xmm3/m128 and xmm2 for equality.
VPCMPEQB
ymm1,ymm2,ymm3 /m256
VEX.NDS.256.66.0F.WIG 74 /r
AVX2
Compare packed bytes in ymm3/m256 and ymm2 for equality.
VPCMPEQW
ymm1,ymm2,ymm3 /m256
VEX.NDS.256.66.0F.WIG 75 /r
AVX2
Compare packed words in ymm3/m256 and ymm2 for equality.
VPCMPEQD
ymm1,ymm2,ymm3 /m256
VEX.NDS.256.66.0F.WIG 76 /r
AVX2
Compare packed doublewords in ymm3/m256 and ymm2 for equality.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PCMPEQQ--Compare Packed Qword Data for Equal.
PCMPEQQ
xmm1,xmm2/m128
66 0F 38 29 /r
SSE4_1
Compare packed qwords in xmm2/m128 and xmm1 for equality.
VPCMPEQQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 29 /r
AVX
Compare packed quadwords in xmm3/m128 and xmm2 for equality.
VPCMPEQQ
ymm1,ymm2,ymm3 /m256
VEX.NDS.256.66.0F38.WIG 29 /r
AVX2
Compare packed quadwords in ymm3/m256 and ymm2 for equality.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PCMPESTRI--Packed Compare Explicit Length Strings, Return Index.
PCMPESTRI
xmm1,xmm2/m128,imm8
66 0F 3A 61 /r imm8
SSE4_2
Perform a packed comparison of string data with explicit lengths, generating an index, and storing the result in ECX.
VPCMPESTRI
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 61 /r ib
AVX
Perform a packed comparison of string data with explicit lengths, generating an index, and storing the result in ECX.
ModRM:reg(r)
ModRM:r/m(r)
imm8(r)
NA
PCMPESTRM--Packed Compare Explicit Length Strings, Return Mask.
PCMPESTRM
xmm1,xmm2/m128,imm8
66 0F 3A 60 /r imm8
SSE4_2
Perform a packed comparison of string data with explicit lengths, generating a mask, and storing the result in XMM0.
VPCMPESTRM
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 60 /r ib
AVX
Perform a packed comparison of string data with explicit lengths, generating a mask, and storing the result in XMM0.
ModRM:reg(r)
ModRM:r/m(r)
imm8(r)
NA
PCMPGTB/PCMPGTW/PCMPGTD--Compare Packed Signed Integers for Greater Than.
PCMPGTB
mm,mm/m64
0F 64 /r
MMX
Compare packed signed byte integers in mm and mm/m64 for greater than.
PCMPGTB
xmm1,xmm2/m128
66 0F 64 /r
SSE2
Compare packed signed byte integers in xmm1 and xmm2/m128 for greater than.
PCMPGTW
mm,mm/m64
0F 65 /r
MMX
Compare packed signed word integers in mm and mm/m64 for greater than.
PCMPGTW
xmm1,xmm2/m128
66 0F 65 /r
SSE2
Compare packed signed word integers in xmm1 and xmm2/m128 for greater than.
PCMPGTD
mm,mm/m64
0F 66 /r
MMX
Compare packed signed doubleword integers in mm and mm/m64 for greater than.
PCMPGTD
xmm1,xmm2/m128
66 0F 66 /r
SSE2
Compare packed signed doubleword integers in xmm1 and xmm2/m128 for greater than.
VPCMPGTB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 64 /r
AVX
Compare packed signed byte integers in xmm2 and xmm3/m128 for greater than.
VPCMPGTW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 65 /r
AVX
Compare packed signed word integers in xmm2 and xmm3/m128 for greater than.
VPCMPGTD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 66 /r
AVX
Compare packed signed doubleword integers in xmm2 and xmm3/m128 for greater than.
VPCMPGTB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 64 /r
AVX2
Compare packed signed byte integers in ymm2 and ymm3/m256 for greater than.
VPCMPGTW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 65 /r
AVX2
Compare packed signed word integers in ymm2 and ymm3/m256 for greater than.
VPCMPGTD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 66 /r
AVX2
Compare packed signed doubleword integers in ymm2 and ymm3/m256 for greater than.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PCMPGTQ--Compare Packed Data for Greater Than.
PCMPGTQ
xmm1,xmm2/m128
66 0F 38 37 /r
SSE4_2
Compare packed signed qwords in xmm2/m128 and xmm1 for greater than.
VPCMPGTQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 37 /r
AVX
Compare packed signed qwords in xmm2 and xmm3/m128 for greater than.
VPCMPGTQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 37 /r
AVX2
Compare packed signed qwords in ymm2 and ymm3/m256 for greater than.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PCMPISTRI--Packed Compare Implicit Length Strings, Return Index.
PCMPISTRI
xmm1,xmm2/m128,imm8
66 0F 3A 63 /r imm8
SSE4_2
Perform a packed comparison of string data with implicit lengths, generating an index, and storing the result in ECX.
VPCMPISTRI
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 63 /r ib
AVX
Perform a packed comparison of string data with implicit lengths, generating an index, and storing the result in ECX.
ModRM:reg(r)
ModRM:r/m(r)
imm8(r)
NA
PCMPISTRM--Packed Compare Implicit Length Strings, Return Mask.
PCMPISTRM
xmm1,xmm2/m128,imm8
66 0F 3A 62 /r imm8
SSE4_2
Perform a packed comparison of string data with implicit lengths, generating a mask, and storing the result in XMM0.
VPCMPISTRM
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 62 /r ib
AVX
Perform a packed comparison of string data with implicit lengths, generating a mask, and storing the result in XMM0.
ModRM:reg(r)
ModRM:r/m(r)
imm8(r)
NA
PDEP--Parallel Bits Deposit.
PDEP
r32a,r32b,r/m32
VEX.NDS.LZ.F2.0F38.W0 F5 /r
BMI2
Parallel deposit of bits from r32b using mask in r/m32, result is written to r32a.
PDEP
r64a,r64b,r/m64
VEX.NDS.LZ.F2.0F38.W1 F5 /r
BMI2
Parallel deposit of bits from r64b using mask in r/m64, result is written to r64a.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PEXT--Parallel Bits Extract.
PEXT
r32a,r32b,r/m32
VEX.NDS.LZ.F3.0F38.W0 F5 /r
BMI2
Parallel extract of bits from r32b using mask in r/m32, result is written to r32a.
PEXT
r64a,r64b,r/m64
VEX.NDS.LZ.F3.0F38.W1 F5 /r
BMI2
Parallel extract of bits from r64b using mask in r/m64, result is written to r64a.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PEXTRB/PEXTRD/PEXTRQ--Extract Byte/Dword/Qword.
PEXTRB
reg/m8,xmm2,imm8
66 0F 3A 14 /r ib
SSE4_1
Extract a byte integer value from xmm2 at the source byte offset specified by imm8 into reg or m8. The upper bits of r32 or r64 are zeroed.
PEXTRD
r/m32,xmm2,imm8
66 0F 3A 16 /r ib
SSE4_1
Extract a dword integer value from xmm2 at the source dword offset specified by imm8 into r/m32.
PEXTRQ
r/m64,xmm2,imm8
66 REX.W 0F 3A 16 /r ib
SSE4_1
Extract a qword integer value from xmm2 at the source qword offset specified by imm8 into r/m64.
VPEXTRB
reg/m8,xmm2,imm8
VEX.128.66.0F3A.W0 14 /r ib
AVX
Extract a byte integer value from xmm2 at the source byte offset specified by imm8 into reg or m8. The upper bits of r64/r32 are filled with zeros.
VPEXTRD
r32/m32,xmm2,imm8
VEX.128.66.0F3A.W0 16 /r ib
AVX
Extract a dword integer value from xmm2 at the source dword offset specified by imm8 into r32/m32.
VPEXTRQ
r64/m64,xmm2,imm8
VEX.128.66.0F3A.W1 16 /r ib
AVX
Extract a qword integer value from xmm2 at the source dword offset specified by imm8 into r64/m64.
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
PEXTRW--Extract Word.
PEXTRW
reg,mm,imm8
0F C5 /r ib
SSE
Extract the word specified by imm8 from mm and move it to reg, bits 15-0. The upper bits of r32 or r64 are zeroed.
PEXTRW
reg,xmm,imm8
66 0F C5 /r ib
SSE2
Extract the word specified by imm8 from xmm and move it to reg, bits 15-0. The upper bits of r32 or r64 are zeroed.
PEXTRW
reg/m16,xmm,imm8
66 0F 3A 15 /r ib
SSE4_1
Extract the word specified by imm8 from xmm and copy it to lowest 16 bits of reg or m16. Zero-extend the result in the destination, r32 or r64.
VPEXTRW
reg,xmm1,imm8
VEX.128.66.0F.W0 C5 /r ib
AVX
Extract the word specified by imm8 from xmm1 and move it to reg, bits 15:0. Zero-extend the result. The upper bits of r64/r32 are filled with zeros.
VPEXTRW
reg/m16,xmm2,imm8
VEX.128.66.0F3A.W0 15 /r ib
AVX
Extract a word integer value from xmm2 at the source word offset specified by imm8 into reg or m16. The upper bits of r64/r32 are filled with zeros.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
PHADDW/PHADDD--Packed Horizontal Add.
PHADDW
mm1,mm2/m64
0F 38 01 /r
SSSE3
Add 16-bit integers horizontally, pack to mm1.
PHADDW
xmm1,xmm2/m128
66 0F 38 01 /r
SSSE3
Add 16-bit integers horizontally, pack to xmm1.
PHADDD
mm1,mm2/m64
0F 38 02 /r
SSSE3
Add 32-bit integers horizontally, pack to mm1.
PHADDD
xmm1,xmm2/m128
66 0F 38 02 /r
SSSE3
Add 32-bit integers horizontally, pack to xmm1.
VPHADDW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 01 /r
AVX
Add 16-bit integers horizontally, pack to xmm1.
VPHADDD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 02 /r
AVX
Add 32-bit integers horizontally, pack to xmm1.
VPHADDW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 01 /r
AVX2
Add 16-bit signed integers horizontally, pack to ymm1.
VPHADDD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 02 /r
AVX2
Add 32-bit signed integers horizontally, pack to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PHADDSW--Packed Horizontal Add and Saturate.
PHADDSW
mm1,mm2/m64
0F 38 03 /r
SSSE3
Add 16-bit signed integers horizontally, pack saturated integers to mm1.
PHADDSW
xmm1,xmm2/m128
66 0F 38 03 /r
SSSE3
Add 16-bit signed integers horizontally, pack saturated integers to xmm1.
VPHADDSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 03 /r
AVX
Add 16-bit signed integers horizontally, pack saturated integers to xmm1.
VPHADDSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 03 /r
AVX2
Add 16-bit signed integers horizontally, pack saturated integers to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PHMINPOSUW--Packed Horizontal Word Minimum.
PHMINPOSUW
xmm1,xmm2/m128
66 0F 38 41 /r
SSE4_1
Find the minimum unsigned word in xmm2/m128 and place its value in the low word of xmm1 and its index in the second-lowest word of xmm1.
VPHMINPOSUW
xmm1,xmm2/m128
VEX.128.66.0F38.WIG 41 /r
AVX
Find the minimum unsigned word in xmm2/m128 and place its value in the low word of xmm1 and its index in the second-lowest word of xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
PHSUBW/PHSUBD--Packed Horizontal Subtract.
PHSUBW
mm1,mm2/m64
0F 38 05 /r
SSSE3
Subtract 16-bit signed integers horizontally, pack to mm1.
PHSUBW
xmm1,xmm2/m128
66 0F 38 05 /r
SSSE3
Subtract 16-bit signed integers horizontally, pack to xmm1.
PHSUBD
mm1,mm2/m64
0F 38 06 /r
SSSE3
Subtract 32-bit signed integers horizontally, pack to mm1.
PHSUBD
xmm1,xmm2/m128
66 0F 38 06 /r
SSSE3
Subtract 32-bit signed integers horizontally, pack to xmm1.
VPHSUBW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 05 /r
AVX
Subtract 16-bit signed integers horizontally, pack to xmm1.
VPHSUBD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 06 /r
AVX
Subtract 32-bit signed integers horizontally, pack to xmm1.
VPHSUBW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 05 /r
AVX2
Subtract 16-bit signed integers horizontally, pack to ymm1.
VPHSUBD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 06 /r
AVX2
Subtract 32-bit signed integers horizontally, pack to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PHSUBSW--Packed Horizontal Subtract and Saturate.
PHSUBSW
mm1,mm2/m64
0F 38 07 /r
SSSE3
Subtract 16-bit signed integer horizontally, pack saturated integers to mm1.
PHSUBSW
xmm1,xmm2/m128
66 0F 38 07 /r
SSSE3
Subtract 16-bit signed integer horizontally, pack saturated integers to xmm1.
VPHSUBSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 07 /r
AVX
Subtract 16-bit signed integer horizontally, pack saturated integers to xmm1.
VPHSUBSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 07 /r
AVX2
Subtract 16-bit signed integer horizontally, pack saturated integers to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PINSRB/PINSRD/PINSRQ--Insert Byte/Dword/Qword.
PINSRB
xmm1,r32/m8,imm8
66 0F 3A 20 /r ib
SSE4_1
Insert a byte integer value from r32/m8 into xmm1 at the destination element in xmm1 specified by imm8.
PINSRD
xmm1,r/m32,imm8
66 0F 3A 22 /r ib
SSE4_1
Insert a dword integer value from r/m32 into the xmm1 at the destination element specified by imm8.
PINSRQ
xmm1,r/m64,imm8
66 REX.W 0F 3A 22 /r ib
SSE4_1
Insert a qword integer value from r/m64 into the xmm1 at the destination element specified by imm8.
VPINSRB
xmm1,xmm2,r32/m8,imm8
VEX.NDS.128.66.0F3A.W0 20 /r ib
AVX
Merge a byte integer value from r32/m8 and rest from xmm2 into xmm1 at the byte offset in imm8.
VPINSRD
xmm1,xmm2,r/m32,imm8
VEX.NDS.128.66.0F3A.W0 22 /r ib
AVX
Insert a dword integer value from r32/m32 and rest from xmm2 into xmm1 at the dword offset in imm8.
VPINSRQ
xmm1,xmm2,r/m64,imm8
VEX.NDS.128.66.0F3A.W1 22 /r ib
AVX
Insert a qword integer value from r64/m64 and rest from xmm2 into xmm1 at the qword offset in imm8.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
PINSRW--Insert Word.
PINSRW
mm,r32/m16,imm8
0F C4 /r ib
SSE
Insert the low word from r32 or from m16 into mm at the word position specified by imm8.
PINSRW
xmm,r32/m16,imm8
66 0F C4 /r ib
SSE2
Move the low word of r32 or from m16 into xmm at the word position specified by imm8.
VPINSRW
xmm1,xmm2,r32/m16,imm8
VEX.NDS.128.66.0F.W0 C4 /r ib
AVX
Insert a word integer value from r32/m16 and rest from xmm2 into xmm1 at the word offset in imm8.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
PMADDUBSW--Multiply and Add Packed Signed and Unsigned Bytes.
PMADDUBSW
mm1,mm2/m64
0F 38 04 /r
SSSE3
Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to mm1.
PMADDUBSW
xmm1,xmm2/m128
66 0F 38 04 /r
SSSE3
Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to xmm1.
VPMADDUBSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 04 /r
AVX
Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to xmm1.
VPMADDUBSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 04 /r
AVX2
Multiply signed and unsigned bytes, add horizontal pair of signed words, pack saturated signed-words to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMADDWD--Multiply and Add Packed Integers.
PMADDWD
mm,mm/m64
0F F5 /r
MMX
Multiply the packed words in mm by the packed words in mm/m64, add adjacent doubleword results, and store in mm.
PMADDWD
xmm1,xmm2/m128
66 0F F5 /r
SSE2
Multiply the packed word integers in xmm1 by the packed word integers in xmm2/m128, add adjacent doubleword results, and store in xmm1.
VPMADDWD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F5 /r
AVX
Multiply the packed word integers in xmm2 by the packed word integers in xmm3/m128, add adjacent doubleword results, and store in xmm1.
VPMADDWD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG F5 /r
AVX2
Multiply the packed word integers in ymm2 by the packed word integers in ymm3/m256, add adjacent doubleword results, and store in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXSB--Maximum of Packed Signed Byte Integers.
PMAXSB
xmm1,xmm2/m128
66 0F 38 3C /r
SSE4_1
Compare packed signed byte integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VPMAXSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3C /r
AVX
Compare packed signed byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VPMAXSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3C /r
AVX2
Compare packed signed byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXSD--Maximum of Packed Signed Dword Integers.
PMAXSD
xmm1,xmm2/m128
66 0F 38 3D /r
SSE4_1
Compare packed signed dword integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VPMAXSD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3D /r
AVX
Compare packed signed dword integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VPMAXSD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3D /r
AVX2
Compare packed signed dword integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXSW--Maximum of Packed Signed Word Integers.
PMAXSW
mm1,mm2/m64
0F EE /r
SSE
Compare signed word integers in mm2/m64 and mm1 and return maximum values.
PMAXSW
xmm1,xmm2/m128
66 0F EE /r
SSE2
Compare signed word integers in xmm2/m128 and xmm1 and return maximum values.
VPMAXSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG EE /r
AVX
Compare packed signed word integers in xmm3/m128 and xmm2 and store packed maximum values in xmm1.
VPMAXSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG EE /r
AVX2
Compare packed signed word integers in ymm3/m256 and ymm2 and store packed maximum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXUB--Maximum of Packed Unsigned Byte Integers.
PMAXUB
mm1,mm2/m64
0F DE /r
SSE
Compare unsigned byte integers in mm2/m64 and mm1 and return maximum values.
PMAXUB
xmm1,xmm2/m128
66 0F DE /r
SSE2
Compare unsigned byte integers in xmm2/m128 and xmm1 and return maximum values.
VPMAXUB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DE /r
AVX
Compare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VPMAXUB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DE /r
AVX2
Compare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXUD--Maximum of Packed Unsigned Dword Integers.
PMAXUD
xmm1,xmm2/m128
66 0F 38 3F /r
SSE4_1
Compare packed unsigned dword integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VPMAXUD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3F /r
AVX
Compare packed unsigned dword integers in xmm2 and xmm3/m128 and store packed maximum values in xmm1.
VPMAXUD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3F /r
AVX2
Compare packed unsigned dword integers in ymm2 and ymm3/m256 and store packed maximum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMAXUW--Maximum of Packed Word Integers.
PMAXUW
xmm1,xmm2/m128
66 0F 38 3E /r
SSE4_1
Compare packed unsigned word integers in xmm1 and xmm2/m128 and store packed maximum values in xmm1.
VPMAXUW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3E /r
AVX
Compare packed unsigned word integers in xmm3/m128 and xmm2 and store maximum packed values in xmm1.
VPMAXUW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3E /r
AVX2
Compare packed unsigned word integers in ymm3/m256 and ymm2 and store maximum packed values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINSB--Minimum of Packed Signed Byte Integers.
PMINSB
xmm1,xmm2/m128
66 0F 38 38 /r
SSE4_1
Compare packed signed byte integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VPMINSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 38 /r
AVX
Compare packed signed byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VPMINSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 38 /r
AVX2
Compare packed signed byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINSD--Minimum of Packed Dword Integers.
PMINSD
xmm1,xmm2/m128
66 0F 38 39 /r
SSE4_1
Compare packed signed dword integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VPMINSD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 39 /r
AVX
Compare packed signed dword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VPMINSD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 39 /r
AVX2
Compare packed signed dword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINSW--Minimum of Packed Signed Word Integers.
PMINSW
mm1,mm2/m64
0F EA /r
SSE
Compare signed word integers in mm2/m64 and mm1 and return minimum values.
PMINSW
xmm1,xmm2/m128
66 0F EA /r
SSE2
Compare signed word integers in xmm2/m128 and xmm1 and return minimum values.
VPMINSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG EA /r
AVX
Compare packed signed word integers in xmm3/m128 and xmm2 and return packed minimum values in xmm1.
VPMINSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG EA /r
AVX2
Compare packed signed word integers in ymm3/m256 and ymm2 and return packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINUB--Minimum of Packed Unsigned Byte Integers.
PMINUB
mm1,mm2/m64
0F DA /r
SSE
Compare unsigned byte integers in mm2/m64 and mm1 and return minimum values.
PMINUB
xmm1,xmm2/m128
66 0F DA /r
SSE2
Compare unsigned byte integers in xmm2/m128 and xmm1 and return minimum values.
VPMINUB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG DA /r
AVX
Compare packed unsigned byte integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VPMINUB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG DA /r
AVX2
Compare packed unsigned byte integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINUD--Minimum of Packed Unsigned Dword Integers.
PMINUD
xmm1,xmm2/m128
66 0F 38 3B /r
SSE4_1
Compare packed unsigned dword integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VPMINUD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3B /r
AVX
Compare packed unsigned dword integers in xmm2 and xmm3/m128 and store packed minimum values in xmm1.
VPMINUD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3B /r
AVX2
Compare packed unsigned dword integers in ymm2 and ymm3/m256 and store packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMINUW--Minimum of Packed Unsigned Word Integers.
PMINUW
xmm1,xmm2/m128
66 0F 38 3A /r
SSE4_1
Compare packed unsigned word integers in xmm1 and xmm2/m128 and store packed minimum values in xmm1.
VPMINUW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 3A /r
AVX
Compare packed unsigned word integers in xmm3/m128 and xmm2 and return packed minimum values in xmm1.
VPMINUW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 3A /r
AVX2
Compare packed unsigned word integers in ymm3/m256 and ymm2 and return packed minimum values in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
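The PMINS*/PMINU* rows above all perform the same element-wise comparison and differ only in element width and signedness. As a behavioral sketch (plain Python modeling a register as a list of byte lanes; the function names are illustrative, not an API):

```python
def pminsb(a, b):
    """Element-wise minimum of packed signed bytes (PMINSB semantics)."""
    def as_signed(x):
        # Reinterpret the raw byte as a signed 8-bit integer before comparing.
        return x - 256 if x >= 128 else x
    return [x if as_signed(x) <= as_signed(y) else y for x, y in zip(a, b)]

def pminub(a, b):
    """Element-wise minimum of packed unsigned bytes (PMINUB semantics)."""
    return [min(x, y) for x, y in zip(a, b)]

# 0xFF is -1 when signed, so it wins the signed minimum but loses the unsigned one.
print(pminsb([0xFF, 0x01], [0x00, 0x7F]))  # [255, 1]
print(pminub([0xFF, 0x01], [0x00, 0x7F]))  # [0, 1]
```

The word and dword variants are identical except for the lane width used in the sign reinterpretation.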
PMOVMSKB--Move Byte Mask.
PMOVMSKB
reg,mm
0F D7 /r
SSE
Move a byte mask of mm to reg. The upper bits of r32 or r64 are zeroed.
PMOVMSKB
reg,xmm
66 0F D7 /r
SSE2
Move a byte mask of xmm to reg. The upper bits of r32 or r64 are zeroed.
VPMOVMSKB
reg,xmm1
VEX.128.66.0F.WIG D7 /r
AVX
Move a byte mask of xmm1 to reg. The upper bits of r32 or r64 are filled with zeros.
VPMOVMSKB
reg,ymm1
VEX.256.66.0F.WIG D7 /r
AVX2
Move a 32-bit mask of ymm1 to reg. The upper bits of r64 are filled with zeros.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
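The byte mask PMOVMSKB builds is simply the most-significant bit of each source byte, packed into the low bits of the destination register. A sketch (the helper name is illustrative):

```python
def pmovmskb(lanes):
    """Collect bit 7 of each byte into an integer mask: bit i of the
    result is the most-significant bit of byte i (PMOVMSKB semantics)."""
    mask = 0
    for i, byte in enumerate(lanes):
        mask |= ((byte >> 7) & 1) << i
    return mask

# Bytes >= 0x80 contribute a 1 bit at their lane position.
print(pmovmskb([0x80, 0x7F, 0xFF, 0x00]))  # 0b0101 == 5
```

This is commonly paired with a byte compare (e.g. PCMPEQB) to turn a vector of per-lane results into a scalar bitmask.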
PMOVSX--Packed Move with Sign Extend.
PMOVSXBW
xmm1,xmm2/m64
66 0F 38 20 /r
SSE4_1
Sign extend 8 packed signed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed signed 16-bit integers in xmm1.
PMOVSXBD
xmm1,xmm2/m32
66 0F 38 21 /r
SSE4_1
Sign extend 4 packed signed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed signed 32-bit integers in xmm1.
PMOVSXBQ
xmm1,xmm2/m16
66 0F 38 22 /r
SSE4_1
Sign extend 2 packed signed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed signed 64-bit integers in xmm1.
PMOVSXWD
xmm1,xmm2/m64
66 0F 38 23 /r
SSE4_1
Sign extend 4 packed signed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed signed 32-bit integers in xmm1.
PMOVSXWQ
xmm1,xmm2/m32
66 0F 38 24 /r
SSE4_1
Sign extend 2 packed signed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed signed 64-bit integers in xmm1.
PMOVSXDQ
xmm1,xmm2/m64
66 0F 38 25 /r
SSE4_1
Sign extend 2 packed signed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed signed 64-bit integers in xmm1.
VPMOVSXBW
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 20 /r
AVX
Sign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
VPMOVSXBD
xmm1,xmm2/m32
VEX.128.66.0F38.WIG 21 /r
AVX
Sign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
VPMOVSXBQ
xmm1,xmm2/m16
VEX.128.66.0F38.WIG 22 /r
AVX
Sign extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
VPMOVSXWD
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 23 /r
AVX
Sign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
VPMOVSXWQ
xmm1,xmm2/m32
VEX.128.66.0F38.WIG 24 /r
AVX
Sign extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
VPMOVSXDQ
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 25 /r
AVX
Sign extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VPMOVSXBW
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 20 /r
AVX2
Sign extend 16 packed 8-bit integers in xmm2/m128 to 16 packed 16-bit integers in ymm1.
VPMOVSXBD
ymm1,xmm2/m64
VEX.256.66.0F38.WIG 21 /r
AVX2
Sign extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1.
VPMOVSXBQ
ymm1,xmm2/m32
VEX.256.66.0F38.WIG 22 /r
AVX2
Sign extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1.
VPMOVSXWD
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 23 /r
AVX2
Sign extend 8 packed 16-bit integers in the low 16 bytes of xmm2/m128 to 8 packed 32-bit integers in ymm1.
VPMOVSXWQ
ymm1,xmm2/m64
VEX.256.66.0F38.WIG 24 /r
AVX2
Sign extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1.
VPMOVSXDQ
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 25 /r
AVX2
Sign extend 4 packed 32-bit integers in the low 16 bytes of xmm2/m128 to 4 packed 64-bit integers in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
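Each PMOVSX form reads narrow elements from the low bytes of the source and widens them by replicating the sign bit. The byte-to-word case, as a sketch (helper name illustrative):

```python
def pmovsxbw(src_bytes):
    """Sign-extend packed 8-bit integers to 16-bit lanes (PMOVSXBW semantics)."""
    def sext8(x):
        # Replicate bit 7 into the upper bits of the wider lane.
        return x - 256 if x & 0x80 else x
    return [sext8(b) & 0xFFFF for b in src_bytes]

# 0xF0 is -16 as a signed byte; it becomes 0xFFF0 as a signed word.
print([hex(w) for w in pmovsxbw([0x01, 0xF0])])  # ['0x1', '0xfff0']
```

The BD/BQ/WD/WQ/DQ variants differ only in the source and destination widths.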
PMOVZX--Packed Move with Zero Extend.
PMOVZXBW
xmm1,xmm2/m64
66 0F 38 30 /r
SSE4_1
Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
PMOVZXBD
xmm1,xmm2/m32
66 0F 38 31 /r
SSE4_1
Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
PMOVZXBQ
xmm1,xmm2/m16
66 0F 38 32 /r
SSE4_1
Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
PMOVZXWD
xmm1,xmm2/m64
66 0F 38 33 /r
SSE4_1
Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
PMOVZXWQ
xmm1,xmm2/m32
66 0F 38 34 /r
SSE4_1
Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
PMOVZXDQ
xmm1,xmm2/m64
66 0F 38 35 /r
SSE4_1
Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VPMOVZXBW
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 30 /r
AVX
Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 16-bit integers in xmm1.
VPMOVZXBD
xmm1,xmm2/m32
VEX.128.66.0F38.WIG 31 /r
AVX
Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 32-bit integers in xmm1.
VPMOVZXBQ
xmm1,xmm2/m16
VEX.128.66.0F38.WIG 32 /r
AVX
Zero extend 2 packed 8-bit integers in the low 2 bytes of xmm2/m16 to 2 packed 64-bit integers in xmm1.
VPMOVZXWD
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 33 /r
AVX
Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 32-bit integers in xmm1.
VPMOVZXWQ
xmm1,xmm2/m32
VEX.128.66.0F38.WIG 34 /r
AVX
Zero extend 2 packed 16-bit integers in the low 4 bytes of xmm2/m32 to 2 packed 64-bit integers in xmm1.
VPMOVZXDQ
xmm1,xmm2/m64
VEX.128.66.0F38.WIG 35 /r
AVX
Zero extend 2 packed 32-bit integers in the low 8 bytes of xmm2/m64 to 2 packed 64-bit integers in xmm1.
VPMOVZXBW
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 30 /r
AVX2
Zero extend 16 packed 8-bit integers in the low 16 bytes of xmm2/m128 to 16 packed 16-bit integers in ymm1.
VPMOVZXBD
ymm1,xmm2/m64
VEX.256.66.0F38.WIG 31 /r
AVX2
Zero extend 8 packed 8-bit integers in the low 8 bytes of xmm2/m64 to 8 packed 32-bit integers in ymm1.
VPMOVZXBQ
ymm1,xmm2/m32
VEX.256.66.0F38.WIG 32 /r
AVX2
Zero extend 4 packed 8-bit integers in the low 4 bytes of xmm2/m32 to 4 packed 64-bit integers in ymm1.
VPMOVZXWD
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 33 /r
AVX2
Zero extend 8 packed 16-bit integers in the low 16 bytes of xmm2/m128 to 8 packed 32-bit integers in ymm1.
VPMOVZXWQ
ymm1,xmm2/m64
VEX.256.66.0F38.WIG 34 /r
AVX2
Zero extend 4 packed 16-bit integers in the low 8 bytes of xmm2/m64 to 4 packed 64-bit integers in ymm1.
VPMOVZXDQ
ymm1,xmm2/m128
VEX.256.66.0F38.WIG 35 /r
AVX2
Zero extend 4 packed 32-bit integers in the low 16 bytes of xmm2/m128 to 4 packed 64-bit integers in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
PMULDQ--Multiply Packed Signed Dword Integers.
PMULDQ
xmm1,xmm2/m128
66 0F 38 28 /r
SSE4_1
Multiply the packed signed dword integers in xmm1 and xmm2/m128 and store the quadword product in xmm1.
VPMULDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 28 /r
AVX
Multiply packed signed doubleword integers in xmm2 by packed signed doubleword integers in xmm3/m128, and store the quadword results in xmm1.
VPMULDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 28 /r
AVX2
Multiply packed signed doubleword integers in ymm2 by packed signed doubleword integers in ymm3/m256, and store the quadword results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
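PMULDQ multiplies only the signed doublewords in the even-numbered lanes (0 and 2 of a 128-bit register), producing full 64-bit products. A sketch of that lane selection (helper name illustrative):

```python
def pmuldq(a, b):
    """Multiply the signed dwords in lanes 0 and 2, producing two 64-bit
    products (PMULDQ semantics for a 4-lane 128-bit register)."""
    def sext32(x):
        # Reinterpret the raw 32-bit lane as a signed integer.
        return x - (1 << 32) if x & (1 << 31) else x
    return [sext32(a[i]) * sext32(b[i]) for i in (0, 2)]

# Lane 0: (-1) * 2 = -2; lane 2: 3 * 4 = 12. Odd lanes are ignored.
print(pmuldq([0xFFFFFFFF, 99, 3, 99], [2, 99, 4, 99]))  # [-2, 12]
```

Skipping the odd lanes is what makes room for the doubled-width results in the destination.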
PMULHRSW--Packed Multiply High with Round and Scale.
PMULHRSW
mm1,mm2/m64
0F 38 0B /r
SSSE3
Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to mm1.
PMULHRSW
xmm1,xmm2/m128
66 0F 38 0B /r
SSSE3
Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to xmm1.
VPMULHRSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 0B /r
AVX
Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to xmm1.
VPMULHRSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 0B /r
AVX2
Multiply 16-bit signed words, scale and round signed doublewords, pack high 16 bits to ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
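The "scale and round" in the PMULHRSW descriptions above is a fixed sequence: take the signed 32-bit product, shift it right by 14, add 1, shift right by 1, and keep the low 16 bits. A sketch (helper name illustrative):

```python
def pmulhrsw(a, b):
    """Per-lane signed 16x16 multiply with round-and-scale
    (PMULHRSW semantics: ((a*b >> 14) + 1) >> 1, truncated to 16 bits)."""
    def sext16(x):
        return x - (1 << 16) if x & (1 << 15) else x
    out = []
    for x, y in zip(a, b):
        # Python's >> on negative ints is an arithmetic (floor) shift,
        # matching the hardware behavior here.
        t = ((sext16(x) * sext16(y) >> 14) + 1) >> 1
        out.append(t & 0xFFFF)
    return out

# In Q15 fixed point: 0.5 * 0.25 = 0.125.
print(hex(pmulhrsw([0x4000], [0x2000])[0]))  # 0x1000
```

This makes the instruction a natural fit for Q15 fixed-point multiplies with round-to-nearest.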
PMULHUW--Multiply Packed Unsigned Integers and Store High Result.
PMULHUW
mm1,mm2/m64
0F E4 /r
SSE
Multiply the packed unsigned word integers in mm1 register and mm2/m64, and store the high 16 bits of the results in mm1.
PMULHUW
xmm1,xmm2/m128
66 0F E4 /r
SSE2
Multiply the packed unsigned word integers in xmm1 and xmm2/m128, and store the high 16 bits of the results in xmm1.
VPMULHUW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E4 /r
AVX
Multiply the packed unsigned word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1.
VPMULHUW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E4 /r
AVX2
Multiply the packed unsigned word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMULHW--Multiply Packed Signed Integers and Store High Result.
PMULHW
mm,mm/m64
0F E5 /r
MMX
Multiply the packed signed word integers in mm1 register and mm2/m64, and store the high 16 bits of the results in mm1.
PMULHW
xmm1,xmm2/m128
66 0F E5 /r
SSE2
Multiply the packed signed word integers in xmm1 and xmm2/m128, and store the high 16 bits of the results in xmm1.
VPMULHW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E5 /r
AVX
Multiply the packed signed word integers in xmm2 and xmm3/m128, and store the high 16 bits of the results in xmm1.
VPMULHW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E5 /r
AVX2
Multiply the packed signed word integers in ymm2 and ymm3/m256, and store the high 16 bits of the results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
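PMULHUW and PMULHW both keep the high 16 bits of a 16x16 product; they differ only in whether the operands are treated as unsigned or signed. A sketch (helper names illustrative):

```python
def pmulhuw(a, b):
    """High 16 bits of the unsigned 16x16 product per lane (PMULHUW)."""
    return [(x * y) >> 16 for x, y in zip(a, b)]

def pmulhw(a, b):
    """High 16 bits of the signed 16x16 product per lane (PMULHW)."""
    def sext16(x):
        return x - (1 << 16) if x & (1 << 15) else x
    return [((sext16(x) * sext16(y)) >> 16) & 0xFFFF for x, y in zip(a, b)]

# 0xFFFF is 65535 unsigned but -1 signed, so the high halves differ:
print(pmulhuw([0xFFFF], [0x0002]))  # [1]      (131070 >> 16)
print(pmulhw([0xFFFF], [0x0002]))   # [65535]  (-2 >> 16 == -1 -> 0xFFFF)
```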
PMULLD--Multiply Packed Signed Dword Integers and Store Low Result.
PMULLD
xmm1,xmm2/m128
66 0F 38 40 /r
SSE4_1
Multiply the packed dword signed integers in xmm1 and xmm2/m128 and store the low 32 bits of each product in xmm1.
VPMULLD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 40 /r
AVX
Multiply the packed dword signed integers in xmm2 and xmm3/m128 and store the low 32 bits of each product in xmm1.
VPMULLD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 40 /r
AVX2
Multiply the packed dword signed integers in ymm2 and ymm3/m256 and store the low 32 bits of each product in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMULLW--Multiply Packed Signed Integers and Store Low Result.
PMULLW
mm,mm/m64
0F D5 /r
MMX
Multiply the packed signed word integers in mm1 register and mm2/m64, and store the low 16 bits of the results in mm1.
PMULLW
xmm1,xmm2/m128
66 0F D5 /r
SSE2
Multiply the packed signed word integers in xmm1 and xmm2/m128, and store the low 16 bits of the results in xmm1.
VPMULLW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D5 /r
AVX
Multiply the packed signed word integers in xmm2 and xmm3/m128, and store the low 16 bits of the results in xmm1.
VPMULLW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG D5 /r
AVX2
Multiply the packed signed word integers in ymm2 and ymm3/m256, and store the low 16 bits of the results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PMULUDQ--Multiply Packed Unsigned Doubleword Integers.
PMULUDQ
mm1,mm2/m64
0F F4 /r
SSE2
Multiply unsigned doubleword integer in mm1 by unsigned doubleword integer in mm2/m64, and store the quadword result in mm1.
PMULUDQ
xmm1,xmm2/m128
66 0F F4 /r
SSE2
Multiply packed unsigned doubleword integers in xmm1 by packed unsigned doubleword integers in xmm2/m128, and store the quadword results in xmm1.
VPMULUDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F4 /r
AVX
Multiply packed unsigned doubleword integers in xmm2 by packed unsigned doubleword integers in xmm3/m128, and store the quadword results in xmm1.
VPMULUDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG F4 /r
AVX2
Multiply packed unsigned doubleword integers in ymm2 by packed unsigned doubleword integers in ymm3/m256, and store the quadword results in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
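PMULUDQ is the unsigned counterpart of PMULDQ above: it too multiplies only the even-numbered doubleword lanes, but without sign reinterpretation. A sketch (helper name illustrative):

```python
def pmuludq(a, b):
    """Multiply the unsigned dwords in lanes 0 and 2, producing two 64-bit
    products (PMULUDQ semantics for a 4-lane 128-bit register)."""
    return [a[i] * b[i] for i in (0, 2)]

# The same 0xFFFFFFFF lane that gave -2 under PMULDQ gives 8589934590 here.
print(pmuludq([0xFFFFFFFF, 99, 3, 99], [2, 99, 4, 99]))  # [8589934590, 12]
```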
POP--Pop a Value from the Stack.
POP
r/m16
8F /0
Pop top of stack into m16; increment stack pointer.
POP
r/m32
8F /0
Pop top of stack into m32; increment stack pointer.
POP
r/m64
8F /0
Pop top of stack into m64; increment stack pointer. Cannot encode 32-bit operand size.
POP
r16
58+ rw
Pop top of stack into r16; increment stack pointer.
POP
r32
58+ rd
Pop top of stack into r32; increment stack pointer.
POP
r64
58+ rd
Pop top of stack into r64; increment stack pointer. Cannot encode 32-bit operand size.
POP
DS
1F
Pop top of stack into DS; increment stack pointer.
POP
ES
07
Pop top of stack into ES; increment stack pointer.
POP
SS
17
Pop top of stack into SS; increment stack pointer.
POP
FS
0F A1
Pop top of stack into FS; increment stack pointer by 16 bits.
POP
FS
0F A1
Pop top of stack into FS; increment stack pointer by 32 bits.
POP
FS
0F A1
Pop top of stack into FS; increment stack pointer by 64 bits.
POP
GS
0F A9
Pop top of stack into GS; increment stack pointer by 16 bits.
POP
GS
0F A9
Pop top of stack into GS; increment stack pointer by 32 bits.
POP
GS
0F A9
Pop top of stack into GS; increment stack pointer by 64 bits.
ModRM:r/m(w)
NA
NA
NA
opcode + rd(w)
NA
NA
NA
NA
NA
NA
NA
POPA/POPAD--Pop All General-Purpose Registers.
POPA
void
61
Pop DI, SI, BP, BX, DX, CX, and AX.
POPAD
void
61
Pop EDI, ESI, EBP, EBX, EDX, ECX, and EAX.
NA
NA
NA
NA
POPCNT--Return the Count of Number of Bits Set to 1.
POPCNT
r16,r/m16
F3 0F B8 /r
POPCNT on r/m16.
POPCNT
r32,r/m32
F3 0F B8 /r
POPCNT on r/m32.
POPCNT
r64,r/m64
F3 REX.W 0F B8 /r
POPCNT on r/m64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
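POPCNT simply counts the 1 bits in its source operand at the chosen operand width. A sketch (helper name illustrative):

```python
def popcnt(x, width=64):
    """Count the bits set to 1 in an unsigned value of the given width
    (POPCNT semantics)."""
    return bin(x & ((1 << width) - 1)).count("1")

print(popcnt(0b1011))      # 3
print(popcnt(0xFFFF, 16))  # 16
```

Python 3.8+ also exposes the same operation natively as `int.bit_count()` on recent versions (3.10+).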
POPF/POPFD/POPFQ--Pop Stack into EFLAGS Register.
POPF
void
9D
Pop top of stack into lower 16 bits of EFLAGS.
POPFD
void
9D
Pop top of stack into EFLAGS.
POPFQ
void
9D
Pop top of stack and zero-extend into RFLAGS.
NA
NA
NA
NA
POR--Bitwise Logical OR.
POR
mm,mm/m64
0F EB /r
MMX
Bitwise OR of mm/m64 and mm.
POR
xmm1,xmm2/m128
66 0F EB /r
SSE2
Bitwise OR of xmm2/m128 and xmm1.
VPOR
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG EB /r
AVX
Bitwise OR of xmm3/m128 and xmm2.
VPOR
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG EB /r
AVX2
Bitwise OR of ymm3/m256 and ymm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PREFETCHh--Prefetch Data Into Caches.
PREFETCHT0
m8
0F 18 /1
Move data from m8 closer to the processor using T0 hint.
PREFETCHT1
m8
0F 18 /2
Move data from m8 closer to the processor using T1 hint.
PREFETCHT2
m8
0F 18 /3
Move data from m8 closer to the processor using T2 hint.
PREFETCHNTA
m8
0F 18 /0
Move data from m8 closer to the processor using NTA hint.
ModRM:r/m(r)
NA
NA
NA
PREFETCHW--Prefetch Data into Caches in Anticipation of a Write.
PREFETCHW
m8
0F 0D /1
PRFCHW
Move data from m8 closer to the processor in anticipation of a write.
ModRM:r/m(r)
NA
NA
NA
PREFETCHWT1--Prefetch Vector Data Into Caches with Intent to Write and T1 Hint.
PREFETCHWT1
m8
0F 0D /2
PREFETCHWT1
Move data from m8 closer to the processor using T1 hint with intent to write.
ModRM:r/m(r)
NA
NA
NA
PSADBW--Compute Sum of Absolute Differences.
PSADBW
mm1,mm2/m64
0F F6 /r
SSE
Computes the absolute differences of the packed unsigned byte integers from mm2/m64 and mm1; differences are then summed to produce an unsigned word integer result.
PSADBW
xmm1,xmm2/m128
66 0F F6 /r
SSE2
Computes the absolute differences of the packed unsigned byte integers from xmm2/m128 and xmm1; the 8 low differences and 8 high differences are then summed separately to produce two unsigned word integer results.
VPSADBW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F6 /r
AVX
Computes the absolute differences of the packed unsigned byte integers from xmm3/m128 and xmm2; the 8 low differences and 8 high differences are then summed separately to produce two unsigned word integer results.
VPSADBW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG F6 /r
AVX2
Computes the absolute differences of the packed unsigned byte integers from ymm3/m256 and ymm2; then each consecutive 8 differences are summed separately to produce four unsigned word integer results.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
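The grouping described above (one word-sized sum per run of 8 byte differences) can be sketched as follows (helper name illustrative):

```python
def psadbw(a, b):
    """Sum of absolute differences of packed unsigned bytes, producing one
    sum per group of 8 bytes (PSADBW semantics)."""
    sums = []
    for lane in range(0, len(a), 8):
        pairs = zip(a[lane:lane + 8], b[lane:lane + 8])
        sums.append(sum(abs(x - y) for x, y in pairs))
    return sums

# One 8-byte group: |10-4| + |0-255| + six zero differences = 261.
print(psadbw([10, 0, 0, 0, 0, 0, 0, 0], [4, 255, 0, 0, 0, 0, 0, 0]))  # [261]
```

This is the core operation of block-matching motion estimation in video encoders.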
PSHUFB--Packed Shuffle Bytes.
PSHUFB
mm1,mm2/m64
0F 38 00 /r
SSSE3
Shuffle bytes in mm1 according to contents of mm2/m64.
PSHUFB
xmm1,xmm2/m128
66 0F 38 00 /r
SSSE3
Shuffle bytes in xmm1 according to contents of xmm2/m128.
VPSHUFB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 00 /r
AVX
Shuffle bytes in xmm2 according to contents of xmm3/m128.
VPSHUFB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 00 /r
AVX2
Shuffle bytes in ymm2 according to contents of ymm3/m256.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
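For the 128-bit PSHUFB forms, each control byte selects a source byte by its low 4 bits, unless its most-significant bit is set, which zeroes that destination lane instead. A sketch (helper name illustrative):

```python
def pshufb(src, mask):
    """Shuffle bytes: each mask byte with bit 7 set zeroes its lane;
    otherwise its low 4 bits index the source (128-bit PSHUFB semantics)."""
    return [0 if m & 0x80 else src[m & 0x0F] for m in mask]

src = list(range(16))
print(pshufb(src, [3, 0, 0x80, 15]))  # [3, 0, 0, 15]
```

The 64-bit (mm) form uses only the low 3 index bits, and the 256-bit VEX form applies this independently to each 128-bit half.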
PSHUFD--Shuffle Packed Doublewords.
PSHUFD
xmm1,xmm2/m128,imm8
66 0F 70 /r ib
SSE2
Shuffle the doublewords in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFD
xmm1,xmm2/m128,imm8
VEX.128.66.0F.WIG 70 /r ib
AVX
Shuffle the doublewords in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFD
ymm1,ymm2/m256,imm8
VEX.256.66.0F.WIG 70 /r ib
AVX2
Shuffle the doublewords in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
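The imm8 encoding PSHUFD uses is four 2-bit fields, one per destination doubleword, each selecting a source lane. A sketch (helper name illustrative):

```python
def pshufd(src, imm8):
    """Select each destination dword by the corresponding 2-bit field of
    imm8 (PSHUFD semantics for a 4-lane register)."""
    return [src[(imm8 >> (2 * i)) & 0b11] for i in range(4)]

# imm8 = 0b00011011 reverses the four dwords.
print(pshufd([10, 20, 30, 40], 0b00011011))  # [40, 30, 20, 10]
```

PSHUFHW and PSHUFLW below use the same 2-bit-field scheme, applied to the high or low four words while the other half passes through unchanged.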
PSHUFHW--Shuffle Packed High Words.
PSHUFHW
xmm1,xmm2/m128,imm8
F3 0F 70 /r ib
SSE2
Shuffle the high words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFHW
xmm1,xmm2/m128,imm8
VEX.128.F3.0F.WIG 70 /r ib
AVX
Shuffle the high words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFHW
ymm1,ymm2/m256,imm8
VEX.256.F3.0F.WIG 70 /r ib
AVX2
Shuffle the high words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
PSHUFLW--Shuffle Packed Low Words.
PSHUFLW
xmm1,xmm2/m128,imm8
F2 0F 70 /r ib
SSE2
Shuffle the low words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFLW
xmm1,xmm2/m128,imm8
VEX.128.F2.0F.WIG 70 /r ib
AVX
Shuffle the low words in xmm2/m128 based on the encoding in imm8 and store the result in xmm1.
VPSHUFLW
ymm1,ymm2/m256,imm8
VEX.256.F2.0F.WIG 70 /r ib
AVX2
Shuffle the low words in ymm2/m256 based on the encoding in imm8 and store the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
PSHUFW--Shuffle Packed Words.
PSHUFW
mm1,mm2/m64,imm8
0F 70 /r ib
Shuffle the words in mm2/m64 based on the encoding in imm8 and store the result in mm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
PSIGNB/PSIGNW/PSIGND--Packed SIGN.
PSIGNB
mm1,mm2/m64
0F 38 08 /r
SSSE3
Negate/zero/preserve packed byte integers in mm1 depending on the corresponding sign in mm2/m64.
PSIGNB
xmm1,xmm2/m128
66 0F 38 08 /r
SSSE3
Negate/zero/preserve packed byte integers in xmm1 depending on the corresponding sign in xmm2/m128.
PSIGNW
mm1,mm2/m64
0F 38 09 /r
SSSE3
Negate/zero/preserve packed word integers in mm1 depending on the corresponding sign in mm2/m64.
PSIGNW
xmm1,xmm2/m128
66 0F 38 09 /r
SSSE3
Negate/zero/preserve packed word integers in xmm1 depending on the corresponding sign in xmm2/m128.
PSIGND
mm1,mm2/m64
0F 38 0A /r
SSSE3
Negate/zero/preserve packed doubleword integers in mm1 depending on the corresponding sign in mm2/m64.
PSIGND
xmm1,xmm2/m128
66 0F 38 0A /r
SSSE3
Negate/zero/preserve packed doubleword integers in xmm1 depending on the corresponding sign in xmm2/m128.
VPSIGNB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 08 /r
AVX
Negate/zero/preserve packed byte integers in xmm2 depending on the corresponding sign in xmm3/m128.
VPSIGNW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 09 /r
AVX
Negate/zero/preserve packed word integers in xmm2 depending on the corresponding sign in xmm3/m128.
VPSIGND
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.WIG 0A /r
AVX
Negate/zero/preserve packed doubleword integers in xmm2 depending on the corresponding sign in xmm3/m128.
VPSIGNB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 08 /r
AVX2
Negate packed byte integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
VPSIGNW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 09 /r
AVX2
Negate packed 16-bit integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
VPSIGND
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.WIG 0A /r
AVX2
Negate packed doubleword integers in ymm2 if the corresponding sign in ymm3/m256 is less than zero.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PSLLDQ--Shift Double Quadword Left Logical.
PSLLDQ
xmm1,imm8
66 0F 73 /7 ib
SSE2
Shift xmm1 left by imm8 bytes while shifting in 0s.
VPSLLDQ
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 73 /7 ib
AVX
Shift xmm2 left by imm8 bytes while shifting in 0s and store result in xmm1.
VPSLLDQ
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 73 /7 ib
AVX2
Shift ymm2 left by imm8 bytes while shifting in 0s and store result in ymm1.
ModRM:r/m(r,w)
imm8(r)
NA
NA
VEX.vvvv(w)
ModRM:r/m(r)
imm8(r)
NA
PSLLW/PSLLD/PSLLQ--Shift Packed Data Left Logical.
PSLLW
mm,mm/m64
0F F1 /r
MMX
Shift words in mm left by mm/m64 while shifting in 0s.
PSLLW
xmm1,xmm2/m128
66 0F F1 /r
SSE2
Shift words in xmm1 left by xmm2/m128 while shifting in 0s.
PSLLW
mm1,imm8
0F 71 /6 ib
MMX
Shift words in mm left by imm8 while shifting in 0s.
PSLLW
xmm1,imm8
66 0F 71 /6 ib
SSE2
Shift words in xmm1 left by imm8 while shifting in 0s.
PSLLD
mm,mm/m64
0F F2 /r
MMX
Shift doublewords in mm left by mm/m64 while shifting in 0s.
PSLLD
xmm1,xmm2/m128
66 0F F2 /r
SSE2
Shift doublewords in xmm1 left by xmm2/m128 while shifting in 0s.
PSLLD
mm,imm8
0F 72 /6 ib
MMX
Shift doublewords in mm left by imm8 while shifting in 0s.
PSLLD
xmm1,imm8
66 0F 72 /6 ib
SSE2
Shift doublewords in xmm1 left by imm8 while shifting in 0s.
PSLLQ
mm,mm/m64
0F F3 /r
MMX
Shift quadword in mm left by mm/m64 while shifting in 0s.
PSLLQ
xmm1,xmm2/m128
66 0F F3 /r
SSE2
Shift quadwords in xmm1 left by xmm2/m128 while shifting in 0s.
PSLLQ
mm,imm8
0F 73 /6 ib
MMX
Shift quadword in mm left by imm8 while shifting in 0s.
PSLLQ
xmm1,imm8
66 0F 73 /6 ib
SSE2
Shift quadwords in xmm1 left by imm8 while shifting in 0s.
VPSLLW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F1 /r
AVX
Shift words in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLW
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 71 /6 ib
AVX
Shift words in xmm2 left by imm8 while shifting in 0s.
VPSLLD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F2 /r
AVX
Shift doublewords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLD
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 72 /6 ib
AVX
Shift doublewords in xmm2 left by imm8 while shifting in 0s.
VPSLLQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F3 /r
AVX
Shift quadwords in xmm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLQ
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 73 /6 ib
AVX
Shift quadwords in xmm2 left by imm8 while shifting in 0s.
VPSLLW
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG F1 /r
AVX2
Shift words in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLW
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 71 /6 ib
AVX2
Shift words in ymm2 left by imm8 while shifting in 0s.
VPSLLD
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG F2 /r
AVX2
Shift doublewords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLD
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 72 /6 ib
AVX2
Shift doublewords in ymm2 left by imm8 while shifting in 0s.
VPSLLQ
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG F3 /r
AVX2
Shift quadwords in ymm2 left by amount specified in xmm3/m128 while shifting in 0s.
VPSLLQ
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 73 /6 ib
AVX2
Shift quadwords in ymm2 left by imm8 while shifting in 0s.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VEX.vvvv(w)
ModRM:r/m(r)
imm8(r)
NA
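For all the logical-shift forms above, a count greater than the element width (15 for words, 31 for doublewords, 63 for quadwords) zeroes every lane rather than wrapping. The word case, as a sketch (helper name illustrative):

```python
def psllw(lanes, count):
    """Shift packed 16-bit lanes left, shifting in 0s; counts above 15
    zero every lane (PSLLW semantics)."""
    if count > 15:
        return [0] * len(lanes)
    return [(x << count) & 0xFFFF for x in lanes]

print([hex(x) for x in psllw([0x00FF, 0x8000], 4)])  # ['0xff0', '0x0']
print(psllw([0x00FF, 0x8000], 16))                   # [0, 0]
```

This no-wrap behavior differs from the scalar SHL instruction, which masks its count to the operand width.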
PSRAW/PSRAD--Shift Packed Data Right Arithmetic.
PSRAW
mm,mm/m64
0F E1 /r
MMX
Shift words in mm right by mm/m64 while shifting in sign bits.
PSRAW
xmm1,xmm2/m128
66 0F E1 /r
SSE2
Shift words in xmm1 right by xmm2/m128 while shifting in sign bits.
PSRAW
mm,imm8
0F 71 /4 ib
MMX
Shift words in mm right by imm8 while shifting in sign bits.
PSRAW
xmm1,imm8
66 0F 71 /4 ib
SSE2
Shift words in xmm1 right by imm8 while shifting in sign bits.
PSRAD
mm,mm/m64
0F E2 /r
MMX
Shift doublewords in mm right by mm/m64 while shifting in sign bits.
PSRAD
xmm1,xmm2/m128
66 0F E2 /r
SSE2
Shift doublewords in xmm1 right by xmm2/m128 while shifting in sign bits.
PSRAD
mm,imm8
0F 72 /4 ib
MMX
Shift doublewords in mm right by imm8 while shifting in sign bits.
PSRAD
xmm1,imm8
66 0F 72 /4 ib
SSE2
Shift doublewords in xmm1 right by imm8 while shifting in sign bits.
VPSRAW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E1 /r
AVX
Shift words in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VPSRAW
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 71 /4 ib
AVX
Shift words in xmm2 right by imm8 while shifting in sign bits.
VPSRAD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E2 /r
AVX
Shift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VPSRAD
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 72 /4 ib
AVX
Shift doublewords in xmm2 right by imm8 while shifting in sign bits.
VPSRAW
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG E1 /r
AVX2
Shift words in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VPSRAW
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 71 /4 ib
AVX2
Shift words in ymm2 right by imm8 while shifting in sign bits.
VPSRAD
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG E2 /r
AVX2
Shift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in sign bits.
VPSRAD
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 72 /4 ib
AVX2
Shift doublewords in ymm2 right by imm8 while shifting in sign bits.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VEX.vvvv(w)
ModRM:r/m(r)
imm8(r)
NA
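The arithmetic shifts above replicate the sign bit into the vacated positions, and a count above the element width behaves as a shift by (width - 1), leaving each lane filled with copies of its sign bit. The word case, as a sketch (helper name illustrative):

```python
def psraw(lanes, count):
    """Arithmetic right shift of packed signed 16-bit lanes; counts above
    15 behave as 15 (PSRAW semantics)."""
    count = min(count, 15)
    out = []
    for x in lanes:
        s = x - (1 << 16) if x & (1 << 15) else x  # reinterpret as signed
        out.append((s >> count) & 0xFFFF)          # Python >> is arithmetic
    return out

# The negative lane keeps its sign; the positive lane shifts normally.
print([hex(x) for x in psraw([0x8000, 0x0010], 4)])  # ['0xf800', '0x1']
```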
PSRLDQ--Shift Double Quadword Right Logical.
PSRLDQ
xmm1,imm8
66 0F 73 /3 ib
SSE2
Shift xmm1 right by imm8 bytes while shifting in 0s.
VPSRLDQ
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 73 /3 ib
AVX
Shift xmm2 right by imm8 bytes while shifting in 0s and store result in xmm1.
VPSRLDQ
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 73 /3 ib
AVX2
Shift ymm2 right by imm8 bytes while shifting in 0s and store result in ymm1.
ModRM:r/m(r,w)
imm8(r)
NA
NA
VEX.vvvv(w)
ModRM:r/m(r)
imm8(r)
NA
PSRLW/PSRLD/PSRLQ--Shift Packed Data Right Logical.
PSRLW
mm,mm/m64
0F D1 /r
MMX
Shift words in mm right by amount specified in mm/m64 while shifting in 0s.
PSRLW
xmm1,xmm2/m128
66 0F D1 /r
SSE2
Shift words in xmm1 right by amount specified in xmm2/m128 while shifting in 0s.
PSRLW
mm,imm8
0F 71 /2 ib
MMX
Shift words in mm right by imm8 while shifting in 0s.
PSRLW
xmm1,imm8
66 0F 71 /2 ib
SSE2
Shift words in xmm1 right by imm8 while shifting in 0s.
PSRLD
mm,mm/m64
0F D2 /r
MMX
Shift doublewords in mm right by amount specified in mm/m64 while shifting in 0s.
PSRLD
xmm1,xmm2/m128
66 0F D2 /r
SSE2
Shift doublewords in xmm1 right by amount specified in xmm2/m128 while shifting in 0s.
PSRLD
mm,imm8
0F 72 /2 ib
MMX
Shift doublewords in mm right by imm8 while shifting in 0s.
PSRLD
xmm1,imm8
66 0F 72 /2 ib
SSE2
Shift doublewords in xmm1 right by imm8 while shifting in 0s.
PSRLQ
mm,mm/m64
0F D3 /r
MMX
Shift mm right by amount specified in mm/m64 while shifting in 0s.
PSRLQ
xmm1,xmm2/m128
66 0F D3 /r
SSE2
Shift quadwords in xmm1 right by amount specified in xmm2/m128 while shifting in 0s.
PSRLQ
mm,imm8
0F 73 /2 ib
MMX
Shift mm right by imm8 while shifting in 0s.
PSRLQ
xmm1,imm8
66 0F 73 /2 ib
SSE2
Shift quadwords in xmm1 right by imm8 while shifting in 0s.
VPSRLW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D1 /r
AVX
Shift words in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLW
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 71 /2 ib
AVX
Shift words in xmm2 right by imm8 while shifting in 0s.
VPSRLD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D2 /r
AVX
Shift doublewords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLD
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 72 /2 ib
AVX
Shift doublewords in xmm2 right by imm8 while shifting in 0s.
VPSRLQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D3 /r
AVX
Shift quadwords in xmm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLQ
xmm1,xmm2,imm8
VEX.NDD.128.66.0F.WIG 73 /2 ib
AVX
Shift quadwords in xmm2 right by imm8 while shifting in 0s.
VPSRLW
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG D1 /r
AVX2
Shift words in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLW
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 71 /2 ib
AVX2
Shift words in ymm2 right by imm8 while shifting in 0s.
VPSRLD
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG D2 /r
AVX2
Shift doublewords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLD
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 72 /2 ib
AVX2
Shift doublewords in ymm2 right by imm8 while shifting in 0s.
VPSRLQ
ymm1,ymm2,xmm3/m128
VEX.NDS.256.66.0F.WIG D3 /r
AVX2
Shift quadwords in ymm2 right by amount specified in xmm3/m128 while shifting in 0s.
VPSRLQ
ymm1,ymm2,imm8
VEX.NDD.256.66.0F.WIG 73 /2 ib
AVX2
Shift quadwords in ymm2 right by imm8 while shifting in 0s.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VEX.vvvv(w)
ModRM:r/m(r)
imm8(r)
NA
PSUBB/PSUBW/PSUBD--Subtract Packed Integers.
PSUBB
mm,mm/m64
0F F8 /r
MMX
Subtract packed byte integers in mm/m64 from packed byte integers in mm.
PSUBB
xmm1,xmm2/m128
66 0F F8 /r
SSE2
Subtract packed byte integers in xmm2/m128 from packed byte integers in xmm1.
PSUBW
mm,mm/m64
0F F9 /r
MMX
Subtract packed word integers in mm/m64 from packed word integers in mm.
PSUBW
xmm1,xmm2/m128
66 0F F9 /r
SSE2
Subtract packed word integers in xmm2/m128 from packed word integers in xmm1.
PSUBD
mm,mm/m64
0F FA /r
MMX
Subtract packed doubleword integers in mm/m64 from packed doubleword integers in mm.
PSUBD
xmm1,xmm2/m128
66 0F FA /r
SSE2
Subtract packed doubleword integers in xmm2/m128 from packed doubleword integers in xmm1.
VPSUBB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F8 /r
AVX
Subtract packed byte integers in xmm3/m128 from xmm2.
VPSUBW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG F9 /r
AVX
Subtract packed word integers in xmm3/m128 from xmm2.
VPSUBD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG FA /r
AVX
Subtract packed doubleword integers in xmm3/m128 from xmm2.
VPSUBB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG F8 /r
AVX2
Subtract packed byte integers in ymm3/m256 from ymm2.
VPSUBW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG F9 /r
AVX2
Subtract packed word integers in ymm3/m256 from ymm2.
VPSUBD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG FA /r
AVX2
Subtract packed doubleword integers in ymm3/m256 from ymm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PSUBQ--Subtract Packed Quadword Integers.
PSUBQ
mm1,mm2/m64
0F FB /r1
SSE2
Subtract quadword integer in mm2/m64 from mm1.
PSUBQ
xmm1,xmm2/m128
66 0F FB /r
SSE2
Subtract packed quadword integers in xmm2/m128 from xmm1.
VPSUBQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG FB/r
AVX
Subtract packed quadword integers in xmm3/m128 from xmm2.
VPSUBQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG FB /r
AVX2
Subtract packed quadword integers in ymm3/m256 from ymm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PSUBSB/PSUBSW--Subtract Packed Signed Integers with Signed Saturation.
PSUBSB
mm,mm/m64
0F E8 /r1
MMX
Subtract signed packed bytes in mm/m64 from signed packed bytes in mm and saturate results.
PSUBSB
xmm1,xmm2/m128
66 0F E8 /r
SSE2
Subtract packed signed byte integers in xmm2/m128 from packed signed byte integers in xmm1 and saturate results.
PSUBSW
mm,mm/m64
0F E9 /r1
MMX
Subtract signed packed words in mm/m64 from signed packed words in mm and saturate results.
PSUBSW
xmm1,xmm2/m128
66 0F E9 /r
SSE2
Subtract packed signed word integers in xmm2/m128 from packed signed word integers in xmm1 and saturate results.
VPSUBSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E8 /r
AVX
Subtract packed signed byte integers in xmm3/m128 from packed signed byte integers in xmm2 and saturate results.
VPSUBSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG E9 /r
AVX
Subtract packed signed word integers in xmm3/m128 from packed signed word integers in xmm2 and saturate results.
VPSUBSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E8 /r
AVX2
Subtract packed signed byte integers in ymm3/m256 from packed signed byte integers in ymm2 and saturate results.
VPSUBSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG E9 /r
AVX2
Subtract packed signed word integers in ymm3/m256 from packed signed word integers in ymm2 and saturate results.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
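The signed saturating behavior described above clamps each result to the representable signed range instead of wrapping. A Python sketch of the byte form (name and the unsigned byte-pattern representation are illustrative):

```python
def psubsb(dst, src):
    """Emulate PSUBSB: signed byte subtract, saturating to [-128, 127]."""
    out = []
    for a, b in zip(dst, src):
        # Interpret lanes as signed two's-complement bytes.
        sa = a - 256 if a >= 128 else a
        sb = b - 256 if b >= 128 else b
        r = max(-128, min(127, sa - sb))
        out.append(r & 0xFF)   # store back as an unsigned byte pattern
    return out

# -128 - 1 saturates to -128 (0x80); 127 - (-1) saturates to 127 (0x7F).
print(psubsb([0x80, 0x7F], [0x01, 0xFF]))
```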
PSUBUSB/PSUBUSW--Subtract Packed Unsigned Integers with Unsigned Saturation.
PSUBUSB
mm,mm/m64
0F D8 /r1
MMX
Subtract unsigned packed bytes in mm/m64 from unsigned packed bytes in mm and saturate result.
PSUBUSB
xmm1,xmm2/m128
66 0F D8 /r
SSE2
Subtract packed unsigned byte integers in xmm2/m128 from packed unsigned byte integers in xmm1 and saturate result.
PSUBUSW
mm,mm/m64
0F D9 /r1
MMX
Subtract unsigned packed words in mm/m64 from unsigned packed words in mm and saturate result.
PSUBUSW
xmm1,xmm2/m128
66 0F D9 /r
SSE2
Subtract packed unsigned word integers in xmm2/m128 from packed unsigned word integers in xmm1 and saturate result.
VPSUBUSB
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D8 /r
AVX
Subtract packed unsigned byte integers in xmm3/m128 from packed unsigned byte integers in xmm2 and saturate result.
VPSUBUSW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG D9 /r
AVX
Subtract packed unsigned word integers in xmm3/m128 from packed unsigned word integers in xmm2 and saturate result.
VPSUBUSB
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG D8 /r
AVX2
Subtract packed unsigned byte integers in ymm3/m256 from packed unsigned byte integers in ymm2 and saturate result.
VPSUBUSW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG D9 /r
AVX2
Subtract packed unsigned word integers in ymm3/m256 from packed unsigned word integers in ymm2 and saturate result.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
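The unsigned saturating forms clamp at zero rather than wrapping, which makes them a common building block for absolute-difference and clamped-decrement kernels. A Python sketch of the byte form (name illustrative):

```python
def psubusb(dst, src):
    """Emulate PSUBUSB: unsigned byte subtract, saturating at 0."""
    return [max(0, a - b) for a, b in zip(dst, src)]

# 0x10 - 0x20 saturates to 0 instead of wrapping to 0xF0.
print(psubusb([0x10, 0x01], [0x20, 0x01]))
```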
PTEST--Logical Compare.
PTEST
xmm1,xmm2/m128
66 0F 38 17 /r
SSE4_1
Set ZF if xmm2/m128 AND xmm1 result is all 0s. Set CF if xmm2/m128 AND NOT xmm1 result is all 0s.
VPTEST
xmm1,xmm2/m128
VEX.128.66.0F38.WIG 17 /r
AVX
Set ZF and CF depending on bitwise AND and ANDN of sources.
VPTEST
ymm1,ymm2/m256
VEX.256.66.0F38.WIG 17 /r
AVX
Set ZF and CF depending on bitwise AND and ANDN of sources.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
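PTEST's flag-setting rule can be stated compactly: ZF reports whether the two sources have no set bits in common, and CF reports whether the source operand's set bits are a subset of the destination's. A Python sketch over plain integers (name and integer representation illustrative):

```python
def ptest(dst, src, width=128):
    """Emulate PTEST dst, src: return (ZF, CF).
    ZF = 1 if dst AND src is all 0s;
    CF = 1 if (NOT dst) AND src is all 0s."""
    mask = (1 << width) - 1
    zf = int((dst & src) == 0)
    cf = int((~dst & mask & src) == 0)
    return zf, cf

# src's bits are a subset of dst's bits -> CF=1; they overlap -> ZF=0.
print(ptest(0b1010, 0b0010))
```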
PUNPCKHBW/PUNPCKHWD/PUNPCKHDQ/PUNPCKHQDQ--Unpack High Data.
PUNPCKHBW
mm,mm/m64
0F 68 /r1
MMX
Unpack and interleave high-order bytes from mm and mm/m64 into mm.
PUNPCKHBW
xmm1,xmm2/m128
66 0F 68 /r
SSE2
Unpack and interleave high-order bytes from xmm1 and xmm2/m128 into xmm1.
PUNPCKHWD
mm,mm/m64
0F 69 /r1
MMX
Unpack and interleave high-order words from mm and mm/m64 into mm.
PUNPCKHWD
xmm1,xmm2/m128
66 0F 69 /r
SSE2
Unpack and interleave high-order words from xmm1 and xmm2/m128 into xmm1.
PUNPCKHDQ
mm,mm/m64
0F 6A /r1
MMX
Unpack and interleave high-order doublewords from mm and mm/m64 into mm.
PUNPCKHDQ
xmm1,xmm2/m128
66 0F 6A /r
SSE2
Unpack and interleave high-order doublewords from xmm1 and xmm2/m128 into xmm1.
PUNPCKHQDQ
xmm1,xmm2/m128
66 0F 6D /r
SSE2
Unpack and interleave high-order quadwords from xmm1 and xmm2/m128 into xmm1.
VPUNPCKHBW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 68/r
AVX
Interleave high-order bytes from xmm2 and xmm3/m128 into xmm1.
VPUNPCKHWD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 69/r
AVX
Interleave high-order words from xmm2 and xmm3/m128 into xmm1.
VPUNPCKHDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 6A/r
AVX
Interleave high-order doublewords from xmm2 and xmm3/m128 into xmm1.
VPUNPCKHQDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 6D/r
AVX
Interleave high-order quadword from xmm2 and xmm3/m128 into xmm1 register.
VPUNPCKHBW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 68 /r
AVX2
Interleave high-order bytes from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKHWD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 69 /r
AVX2
Interleave high-order words from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKHDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 6A /r
AVX2
Interleave high-order doublewords from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKHQDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 6D /r
AVX2
Interleave high-order quadword from ymm2 and ymm3/m256 into ymm1 register.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
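The high-unpack operations above interleave the upper halves of the two sources, taking the first element of each pair from the destination (or, in the VEX forms, the first source). A Python sketch of the byte form on 16-byte lists (name illustrative):

```python
def punpckhbw(dst, src):
    """Emulate PUNPCKHBW: interleave the high halves of two byte vectors,
    starting with dst's byte of each pair."""
    half = len(dst) // 2
    out = []
    for a, b in zip(dst[half:], src[half:]):
        out += [a, b]
    return out

# Bytes 8..15 of each source, interleaved.
print(punpckhbw(list(range(16)), list(range(16, 32))))
```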
PUNPCKLBW/PUNPCKLWD/PUNPCKLDQ/PUNPCKLQDQ--Unpack Low Data.
PUNPCKLBW
mm,mm/m32
0F 60 /r1
MMX
Interleave low-order bytes from mm and mm/m32 into mm.
PUNPCKLBW
xmm1,xmm2/m128
66 0F 60 /r
SSE2
Interleave low-order bytes from xmm1 and xmm2/m128 into xmm1.
PUNPCKLWD
mm,mm/m32
0F 61 /r1
MMX
Interleave low-order words from mm and mm/m32 into mm.
PUNPCKLWD
xmm1,xmm2/m128
66 0F 61 /r
SSE2
Interleave low-order words from xmm1 and xmm2/m128 into xmm1.
PUNPCKLDQ
mm,mm/m32
0F 62 /r1
MMX
Interleave low-order doublewords from mm and mm/m32 into mm.
PUNPCKLDQ
xmm1,xmm2/m128
66 0F 62 /r
SSE2
Interleave low-order doublewords from xmm1 and xmm2/m128 into xmm1.
PUNPCKLQDQ
xmm1,xmm2/m128
66 0F 6C /r
SSE2
Interleave low-order quadword from xmm1 and xmm2/m128 into xmm1 register.
VPUNPCKLBW
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 60/r
AVX
Interleave low-order bytes from xmm2 and xmm3/m128 into xmm1.
VPUNPCKLWD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 61/r
AVX
Interleave low-order words from xmm2 and xmm3/m128 into xmm1.
VPUNPCKLDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 62/r
AVX
Interleave low-order doublewords from xmm2 and xmm3/m128 into xmm1.
VPUNPCKLQDQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 6C/r
AVX
Interleave low-order quadword from xmm2 and xmm3/m128 into xmm1 register.
VPUNPCKLBW
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 60 /r
AVX2
Interleave low-order bytes from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKLWD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 61 /r
AVX2
Interleave low-order words from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKLDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 62 /r
AVX2
Interleave low-order doublewords from ymm2 and ymm3/m256 into ymm1 register.
VPUNPCKLQDQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 6C /r
AVX2
Interleave low-order quadword from ymm2 and ymm3/m256 into ymm1 register.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
PUSH--Push Word, Doubleword or Quadword Onto the Stack.
PUSH
r/m16
FF /6
Push r/m16.
PUSH
r/m32
FF /6
Push r/m32.
PUSH
r/m64
FF /6
Push r/m64.
PUSH
r16
50+rw
Push r16.
PUSH
r32
50+rd
Push r32.
PUSH
r64
50+rd
Push r64.
PUSH
imm8
6A ib
Push imm8.
PUSH
imm16
68 iw
Push imm16.
PUSH
imm32
68 id
Push imm32.
PUSH
CS
0E
Push CS.
PUSH
SS
16
Push SS.
PUSH
DS
1E
Push DS.
PUSH
ES
06
Push ES.
PUSH
FS
0F A0
Push FS.
PUSH
GS
0F A8
Push GS.
ModRM:r/m(r)
NA
NA
NA
opcode + rd(r)
NA
NA
NA
imm8(r)/16/32
NA
NA
NA
NA
NA
NA
NA
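All the PUSH forms above share one stack discipline: decrement the stack pointer by the operand size, then store the operand at the new top of stack. A minimal sketch for 64-bit mode (the dictionary standing in for memory is illustrative):

```python
def push(mem, rsp, value, size=8):
    """Sketch of PUSH in 64-bit mode: decrement the stack pointer by the
    operand size, then store the operand at the new top of stack."""
    rsp -= size
    mem[rsp] = value
    return rsp

mem = {}                          # dictionary stands in for memory
rsp = push(mem, 0x8000, 0x1234)   # RSP: 0x8000 -> 0x7FF8
rsp = push(mem, rsp, 0x5678)      # RSP: 0x7FF8 -> 0x7FF0
```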
PUSHA/PUSHAD--Push All General-Purpose Registers.
PUSHA
void
60
Push AX, CX, DX, BX, original SP, BP, SI, and DI.
PUSHAD
void
60
Push EAX, ECX, EDX, EBX, original ESP, EBP, ESI, and EDI.
NA
NA
NA
NA
PUSHF/PUSHFD--Push EFLAGS Register onto the Stack.
PUSHF
void
9C
Push lower 16 bits of EFLAGS.
PUSHFD
void
9C
Push EFLAGS.
PUSHFQ
void
9C
Push RFLAGS.
NA
NA
NA
NA
PXOR--Logical Exclusive OR.
PXOR
mm,mm/m64
0F EF /r1
MMX
Bitwise XOR of mm/m64 and mm.
PXOR
xmm1,xmm2/m128
66 0F EF /r
SSE2
Bitwise XOR of xmm2/m128 and xmm1.
VPXOR
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG EF /r
AVX
Bitwise XOR of xmm3/m128 and xmm2.
VPXOR
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG EF /r
AVX2
Bitwise XOR of ymm3/m256 and ymm2.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
RCL/RCR/ROL/ROR---Rotate.
RCL
r/m8,1
D0 /2
Rotate 9 bits (CF, r/m8) left once.
RCL
r/m8*,1
REX + D0 /2
Rotate 9 bits (CF, r/m8) left once.
RCL
r/m8,CL
D2 /2
Rotate 9 bits (CF, r/m8) left CL times.
RCL
r/m8*,CL
REX + D2 /2
Rotate 9 bits (CF, r/m8) left CL times.
RCL
r/m8,imm8
C0 /2 ib
Rotate 9 bits (CF, r/m8) left imm8 times.
RCL
r/m8*,imm8
REX + C0 /2 ib
Rotate 9 bits (CF, r/m8) left imm8 times.
RCL
r/m16,1
D1 /2
Rotate 17 bits (CF, r/m16) left once.
RCL
r/m16,CL
D3 /2
Rotate 17 bits (CF, r/m16) left CL times.
RCL
r/m16,imm8
C1 /2 ib
Rotate 17 bits (CF, r/m16) left imm8 times.
RCL
r/m32,1
D1 /2
Rotate 33 bits (CF, r/m32) left once.
RCL
r/m64,1
REX.W + D1 /2
Rotate 65 bits (CF, r/m64) left once. Uses a 6-bit count.
RCL
r/m32,CL
D3 /2
Rotate 33 bits (CF, r/m32) left CL times.
RCL
r/m64,CL
REX.W + D3 /2
Rotate 65 bits (CF, r/m64) left CL times. Uses a 6-bit count.
RCL
r/m32,imm8
C1 /2 ib
Rotate 33 bits (CF, r/m32) left imm8 times.
RCL
r/m64,imm8
REX.W + C1 /2 ib
Rotate 65 bits (CF, r/m64) left imm8 times. Uses a 6-bit count.
RCR
r/m8,1
D0 /3
Rotate 9 bits (CF, r/m8) right once.
RCR
r/m8*,1
REX + D0 /3
Rotate 9 bits (CF, r/m8) right once.
RCR
r/m8,CL
D2 /3
Rotate 9 bits (CF, r/m8) right CL times.
RCR
r/m8*,CL
REX + D2 /3
Rotate 9 bits (CF, r/m8) right CL times.
RCR
r/m8,imm8
C0 /3 ib
Rotate 9 bits (CF, r/m8) right imm8 times.
RCR
r/m8*,imm8
REX + C0 /3 ib
Rotate 9 bits (CF, r/m8) right imm8 times.
RCR
r/m16,1
D1 /3
Rotate 17 bits (CF, r/m16) right once.
RCR
r/m16,CL
D3 /3
Rotate 17 bits (CF, r/m16) right CL times.
RCR
r/m16,imm8
C1 /3 ib
Rotate 17 bits (CF, r/m16) right imm8 times.
RCR
r/m32,1
D1 /3
Rotate 33 bits (CF, r/m32) right once.
RCR
r/m64,1
REX.W + D1 /3
Rotate 65 bits (CF, r/m64) right once. Uses a 6-bit count.
RCR
r/m32,CL
D3 /3
Rotate 33 bits (CF, r/m32) right CL times.
RCR
r/m64,CL
REX.W + D3 /3
Rotate 65 bits (CF, r/m64) right CL times. Uses a 6-bit count.
RCR
r/m32,imm8
C1 /3 ib
Rotate 33 bits (CF, r/m32) right imm8 times.
RCR
r/m64,imm8
REX.W + C1 /3 ib
Rotate 65 bits (CF, r/m64) right imm8 times. Uses a 6-bit count.
ROL
r/m8,1
D0 /0
Rotate 8 bits r/m8 left once.
ROL
r/m8*,1
REX + D0 /0
Rotate 8 bits r/m8 left once.
ROL
r/m8,CL
D2 /0
Rotate 8 bits r/m8 left CL times.
ROL
r/m8*,CL
REX + D2 /0
Rotate 8 bits r/m8 left CL times.
ROL
r/m8,imm8
C0 /0 ib
Rotate 8 bits r/m8 left imm8 times.
ROL
r/m8*,imm8
REX + C0 /0 ib
Rotate 8 bits r/m8 left imm8 times.
ROL
r/m16,1
D1 /0
Rotate 16 bits r/m16 left once.
ROL
r/m16,CL
D3 /0
Rotate 16 bits r/m16 left CL times.
ROL
r/m16,imm8
C1 /0 ib
Rotate 16 bits r/m16 left imm8 times.
ROL
r/m32,1
D1 /0
Rotate 32 bits r/m32 left once.
ROL
r/m64,1
REX.W + D1 /0
Rotate 64 bits r/m64 left once. Uses a 6-bit count.
ROL
r/m32,CL
D3 /0
Rotate 32 bits r/m32 left CL times.
ROL
r/m64,CL
REX.W + D3 /0
Rotate 64 bits r/m64 left CL times. Uses a 6-bit count.
ROL
r/m32,imm8
C1 /0 ib
Rotate 32 bits r/m32 left imm8 times.
ROL
r/m64,imm8
REX.W + C1 /0 ib
Rotate 64 bits r/m64 left imm8 times. Uses a 6-bit count.
ROR
r/m8,1
D0 /1
Rotate 8 bits r/m8 right once.
ROR
r/m8*,1
REX + D0 /1
Rotate 8 bits r/m8 right once.
ROR
r/m8,CL
D2 /1
Rotate 8 bits r/m8 right CL times.
ROR
r/m8*,CL
REX + D2 /1
Rotate 8 bits r/m8 right CL times.
ROR
r/m8,imm8
C0 /1 ib
Rotate 8 bits r/m8 right imm8 times.
ROR
r/m8*,imm8
REX + C0 /1 ib
Rotate 8 bits r/m8 right imm8 times.
ROR
r/m16,1
D1 /1
Rotate 16 bits r/m16 right once.
ROR
r/m16,CL
D3 /1
Rotate 16 bits r/m16 right CL times.
ROR
r/m16,imm8
C1 /1 ib
Rotate 16 bits r/m16 right imm8 times.
ROR
r/m32,1
D1 /1
Rotate 32 bits r/m32 right once.
ROR
r/m64,1
REX.W + D1 /1
Rotate 64 bits r/m64 right once. Uses a 6-bit count.
ROR
r/m32,CL
D3 /1
Rotate 32 bits r/m32 right CL times.
ROR
r/m64,CL
REX.W + D3 /1
Rotate 64 bits r/m64 right CL times. Uses a 6-bit count.
ROR
r/m32,imm8
C1 /1 ib
Rotate 32 bits r/m32 right imm8 times.
ROR
r/m64,imm8
REX.W + C1 /1 ib
Rotate 64 bits r/m64 right imm8 times. Uses a 6-bit count.
ModRM:r/m(r,w)
1
NA
NA
ModRM:r/m(r,w)
CL
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
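The "rotate through carry" wording above means CF participates as an extra bit of the rotate window (9, 17, 33, or 65 bits). A Python sketch of the 8-bit RCL form (name illustrative; the hardware masks the count to 5 bits, which is then reduced modulo 9 for the 9-bit window):

```python
def rcl8(value, cf, count):
    """Emulate RCL r/m8: rotate the 9-bit quantity (CF, value) left.
    Returns (new_value, new_cf)."""
    count = (count & 0x1F) % 9
    combined = (cf << 8) | (value & 0xFF)          # 9-bit rotate window
    combined = ((combined << count) | (combined >> (9 - count))) & 0x1FF
    return combined & 0xFF, (combined >> 8) & 1

# The top bit of the value rotates into CF; the old CF rotates into bit 0.
print(rcl8(0b10000000, 0, 1))
```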
RCPPS--Compute Reciprocals of Packed Single-Precision Floating-Point Values.
RCPPS
xmm1,xmm2/m128
0F 53 /r
SSE
Computes the approximate reciprocals of the packed single-precision floating-point values in xmm2/m128 and stores the results in xmm1.
VRCPPS
xmm1,xmm2/m128
VEX.128.0F.WIG 53 /r
AVX
Computes the approximate reciprocals of packed single-precision values in xmm2/mem and stores the results in xmm1.
VRCPPS
ymm1,ymm2/m256
VEX.256.0F.WIG 53 /r
AVX
Computes the approximate reciprocals of packed single-precision values in ymm2/mem and stores the results in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
RCPSS--Compute Reciprocal of Scalar Single-Precision Floating-Point Values.
RCPSS
xmm1,xmm2/m32
F3 0F 53 /r
SSE
Computes the approximate reciprocal of the scalar single-precision floating-point value in xmm2/m32 and stores the result in xmm1.
VRCPSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 53 /r
AVX
Computes the approximate reciprocal of the scalar single-precision floating-point value in xmm3/m32 and stores the result in xmm1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
RDFSBASE/RDGSBASE--Read FS/GS Segment Base.
RDFSBASE
r32
F3 0F AE /0
FSGSBASE
Load the 32-bit destination register with the FS base address.
RDFSBASE
r64
F3 REX.W 0F AE /0
FSGSBASE
Load the 64-bit destination register with the FS base address.
RDGSBASE
r32
F3 0F AE /1
FSGSBASE
Load the 32-bit destination register with the GS base address.
RDGSBASE
r64
F3 REX.W 0F AE /1
FSGSBASE
Load the 64-bit destination register with the GS base address.
ModRM:r/m(w)
NA
NA
NA
RDMSR--Read from Model Specific Register.
RDMSR
void
0F 32
Read MSR specified by ECX into EDX:EAX.
NA
NA
NA
NA
RDPKRU--Read Protection Key Rights for User Pages.
RDPKRU
void
0F 01 EE
OSPKE
Reads PKRU into EAX.
NA
NA
NA
NA
RDPMC--Read Performance-Monitoring Counters.
RDPMC
void
0F 33
Read performance-monitoring counter specified by ECX into EDX:EAX.
NA
NA
NA
NA
RDRAND--Read Random Number.
RDRAND
r16
0F C7 /6
RDRAND
Read a 16-bit random number and store in the destination register.
RDRAND
r32
0F C7 /6
RDRAND
Read a 32-bit random number and store in the destination register.
RDRAND
r64
REX.W + 0F C7 /6
RDRAND
Read a 64-bit random number and store in the destination register.
ModRM:r/m(w)
NA
NA
NA
RDSEED--Read Random SEED.
RDSEED
r16
0F C7 /7
RDSEED
Read a 16-bit NIST SP800-90B & C compliant random value and store in the destination register.
RDSEED
r32
0F C7 /7
RDSEED
Read a 32-bit NIST SP800-90B & C compliant random value and store in the destination register.
RDSEED
r64
REX.W + 0F C7 /7
RDSEED
Read a 64-bit NIST SP800-90B & C compliant random value and store in the destination register.
ModRM:r/m(w)
NA
NA
NA
RDTSC--Read Time-Stamp Counter.
RDTSC
void
0F 31
Read time-stamp counter into EDX:EAX.
NA
NA
NA
NA
RDTSCP--Read Time-Stamp Counter and Processor ID.
RDTSCP
void
0F 01 F9
Read 64-bit time-stamp counter and 32-bit IA32_TSC_AUX value into EDX:EAX and ECX.
NA
NA
NA
NA
REP/REPE/REPZ/REPNE/REPNZ--Repeat String Operation Prefix.
REP
INS m8,DX
F3 6C
Input (E)CX bytes from port DX into ES:[(E)DI].
REP
INS m8,DX
F3 6C
Input RCX bytes from port DX into [RDI].
REP
INS m16,DX
F3 6D
Input (E)CX words from port DX into ES:[(E)DI].
REP
INS m32,DX
F3 6D
Input (E)CX doublewords from port DX into ES:[(E)DI].
REP
INS r/m32,DX
F3 6D
Input RCX default size from port DX into [RDI].
REP
MOVS m8,m8
F3 A4
Move (E)CX bytes from DS:[(E)SI] to ES:[(E)DI].
REP
MOVS m8,m8
F3 REX.W A4
Move RCX bytes from [RSI] to [RDI].
REP
MOVS m16,m16
F3 A5
Move (E)CX words from DS:[(E)SI] to ES:[(E)DI].
REP
MOVS m32,m32
F3 A5
Move (E)CX doublewords from DS:[(E)SI] to ES:[(E)DI].
REP
MOVS m64,m64
F3 REX.W A5
Move RCX quadwords from [RSI] to [RDI].
REP
OUTS DX,r/m8
F3 6E
Output (E)CX bytes from DS:[(E)SI] to port DX.
REP
OUTS DX,r/m8*
F3 REX.W 6E
Output RCX bytes from [RSI] to port DX.
REP
OUTS DX,r/m16
F3 6F
Output (E)CX words from DS:[(E)SI] to port DX.
REP
OUTS DX,r/m32
F3 6F
Output (E)CX doublewords from DS:[(E)SI] to port DX.
REP
OUTS DX,r/m32
F3 REX.W 6F
Output RCX default size from [RSI] to port DX.
REP
LODS AL
F3 AC
Load (E)CX bytes from DS:[(E)SI] to AL.
REP
LODS AL
F3 REX.W AC
Load RCX bytes from [RSI] to AL.
REP
LODS AX
F3 AD
Load (E)CX words from DS:[(E)SI] to AX.
REP
LODS EAX
F3 AD
Load (E)CX doublewords from DS:[(E)SI] to EAX.
REP
LODS RAX
F3 REX.W AD
Load RCX quadwords from [RSI] to RAX.
REP
STOS m8
F3 AA
Fill (E)CX bytes at ES:[(E)DI] with AL.
REP
STOS m8
F3 REX.W AA
Fill RCX bytes at [RDI] with AL.
REP
STOS m16
F3 AB
Fill (E)CX words at ES:[(E)DI] with AX.
REP
STOS m32
F3 AB
Fill (E)CX doublewords at ES:[(E)DI] with EAX.
REP
STOS m64
F3 REX.W AB
Fill RCX quadwords at [RDI] with RAX.
REPE
CMPS m8,m8
F3 A6
Find nonmatching bytes in ES:[(E)DI] and DS:[(E)SI].
REPE
CMPS m8,m8
F3 REX.W A6
Find non-matching bytes in [RDI] and [RSI].
REPE
CMPS m16,m16
F3 A7
Find nonmatching words in ES:[(E)DI] and DS:[(E)SI].
REPE
CMPS m32,m32
F3 A7
Find nonmatching doublewords in ES:[(E)DI] and DS:[(E)SI].
REPE
CMPS m64,m64
F3 REX.W A7
Find non-matching quadwords in [RDI] and [RSI].
REPE
SCAS m8
F3 AE
Find non-AL byte starting at ES:[(E)DI].
REPE
SCAS m8
F3 REX.W AE
Find non-AL byte starting at [RDI].
REPE
SCAS m16
F3 AF
Find non-AX word starting at ES:[(E)DI].
REPE
SCAS m32
F3 AF
Find non-EAX doubleword starting at ES:[(E)DI].
REPE
SCAS m64
F3 REX.W AF
Find non-RAX quadword starting at [RDI].
REPNE
CMPS m8,m8
F2 A6
Find matching bytes in ES:[(E)DI] and DS:[(E)SI].
REPNE
CMPS m8,m8
F2 REX.W A6
Find matching bytes in [RDI] and [RSI].
REPNE
CMPS m16,m16
F2 A7
Find matching words in ES:[(E)DI] and DS:[(E)SI].
REPNE
CMPS m32,m32
F2 A7
Find matching doublewords in ES:[(E)DI] and DS:[(E)SI].
REPNE
CMPS m64,m64
F2 REX.W A7
Find matching quadwords in [RDI] and [RSI].
REPNE
SCAS m8
F2 AE
Find AL, starting at ES:[(E)DI].
REPNE
SCAS m8
F2 REX.W AE
Find AL, starting at [RDI].
REPNE
SCAS m16
F2 AF
Find AX, starting at ES:[(E)DI].
REPNE
SCAS m32
F2 AF
Find EAX, starting at ES:[(E)DI].
REPNE
SCAS m64
F2 REX.W AF
Find RAX, starting at [RDI].
NA
NA
NA
NA
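With DF=0 (forward copies), REP MOVSB reduces to the loop below: repeat MOVSB, moving one byte from [RSI] to [RDI] and advancing both pointers, until RCX reaches 0. A sketch (the dictionary standing in for memory is illustrative):

```python
def rep_movsb(mem, rsi, rdi, rcx):
    """Sketch of REP MOVSB with DF=0: copy rcx bytes forward from
    [rsi] to [rdi], leaving rcx at 0."""
    while rcx:
        mem[rdi] = mem[rsi]
        rsi += 1
        rdi += 1
        rcx -= 1
    return rsi, rdi, rcx

mem = {100: 0xAA, 101: 0xBB, 102: 0xCC, 200: 0, 201: 0, 202: 0}
print(rep_movsb(mem, 100, 200, 3), mem[200], mem[201], mem[202])
```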
RET--Return from Procedure.
RET
void
C3
Near return to calling procedure.
RET
void
CB
Far return to calling procedure.
RET
imm16
C2 iw
Near return to calling procedure and pop imm16 bytes from stack.
RET
imm16
CA iw
Far return to calling procedure and pop imm16 bytes from stack.
NA
NA
NA
NA
imm16(r)
NA
NA
NA
RORX--Rotate Right Logical Without Affecting Flags.
RORX
r32,r/m32,imm8
VEX.LZ.F2.0F3A.W0 F0 /r ib
BMI2
Rotate 32-bit r/m32 right imm8 times without affecting arithmetic flags.
RORX
r64,r/m64,imm8
VEX.LZ.F2.0F3A.W1 F0 /r ib
BMI2
Rotate 64-bit r/m64 right imm8 times without affecting arithmetic flags.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ROUNDPD--Round Packed Double Precision Floating-Point Values.
ROUNDPD
xmm1,xmm2/m128,imm8
66 0F 3A 09 /r ib
SSE4_1
Round packed double precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDPD
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 09 /r ib
AVX
Round packed double-precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDPD
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.WIG 09 /r ib
AVX
Round packed double-precision floating-point values in ymm2/m256 and place the result in ymm1. The rounding mode is determined by imm8.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
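The imm8 rounding control shared by the ROUND* instructions works per element: bit 2 selects the rounding mode from MXCSR.RC instead of the immediate, and the two low bits pick round-to-nearest-even (0), round down (1), round up (2), or truncate (3). A Python sketch of one element (bit 3, which suppresses the precision exception, is ignored here; the function name is illustrative):

```python
import math

def round_elem(x, imm8, mxcsr_rc=0):
    """Sketch of the ROUND* imm8 rounding control for one element."""
    rc = mxcsr_rc if imm8 & 0b100 else imm8 & 0b011
    if rc == 0:
        return float(round(x))   # round half to even, the SSE default
    if rc == 1:
        return math.floor(x)     # toward negative infinity
    if rc == 2:
        return math.ceil(x)      # toward positive infinity
    return math.trunc(x)         # toward zero

# 2.5 under each of the four immediate rounding modes.
print([round_elem(2.5, rc) for rc in (0, 1, 2, 3)])
```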
ROUNDPS--Round Packed Single Precision Floating-Point Values.
ROUNDPS
xmm1,xmm2/m128,imm8
66 0F 3A 08 /r ib
SSE4_1
Round packed single precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDPS
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.WIG 08 /r ib
AVX
Round packed single-precision floating-point values in xmm2/m128 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDPS
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.WIG 08 /r ib
AVX
Round packed single-precision floating-point values in ymm2/m256 and place the result in ymm1. The rounding mode is determined by imm8.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ROUNDSD--Round Scalar Double Precision Floating-Point Values.
ROUNDSD
xmm1,xmm2/m64,imm8
66 0F 3A 0B /r ib
SSE4_1
Round the low packed double precision floating-point value in xmm2/m64 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDSD
xmm1,xmm2,xmm3/m64,imm8
VEX.NDS.LIG.66.0F3A.WIG 0B /r ib
AVX
Round the low packed double precision floating-point value in xmm3/m64 and place the result in xmm1. The rounding mode is determined by imm8. Upper packed double precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
ROUNDSS--Round Scalar Single Precision Floating-Point Values.
ROUNDSS
xmm1,xmm2/m32,imm8
66 0F 3A 0A /r ib
SSE4_1
Round the low packed single precision floating-point value in xmm2/m32 and place the result in xmm1. The rounding mode is determined by imm8.
VROUNDSS
xmm1,xmm2,xmm3/m32,imm8
VEX.NDS.LIG.66.0F3A.WIG 0A /r ib
AVX
Round the low packed single precision floating-point value in xmm3/m32 and place the result in xmm1. The rounding mode is determined by imm8. Also, upper packed single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
RSM--Resume from System Management Mode.
RSM
void
0F AA
Resume operation of interrupted program.
NA
NA
NA
NA
RSQRTPS--Compute Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values.
RSQRTPS
xmm1,xmm2/m128
0F 52 /r
SSE
Computes the approximate reciprocals of the square roots of the packed single-precision floating-point values in xmm2/m128 and stores the results in xmm1.
VRSQRTPS
xmm1,xmm2/m128
VEX.128.0F.WIG 52 /r
AVX
Computes the approximate reciprocals of the square roots of packed single-precision values in xmm2/mem and stores the results in xmm1.
VRSQRTPS
ymm1,ymm2/m256
VEX.256.0F.WIG 52 /r
AVX
Computes the approximate reciprocals of the square roots of packed single-precision values in ymm2/mem and stores the results in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
RSQRTSS--Compute Reciprocal of Square Root of Scalar Single-Precision Floating-Point Value.
RSQRTSS
xmm1,xmm2/m32
F3 0F 52 /r
SSE
Computes the approximate reciprocal of the square root of the low single-precision floating-point value in xmm2/m32 and stores the results in xmm1.
VRSQRTSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 52 /r
AVX
Computes the approximate reciprocal of the square root of the low single precision floating-point value in xmm3/m32 and stores the results in xmm1. Also, upper single precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SAHF--Store AH into Flags.
SAHF
void
9E
Loads SF, ZF, AF, PF, and CF from AH into the EFLAGS register.
NA
NA
NA
NA
SAL/SAR/SHL/SHR--Shift.
SAL
r/m8,1
D0 /4
Multiply r/m8 by 2, once.
SAL
r/m8**,1
REX + D0 /4
Multiply r/m8 by 2, once.
SAL
r/m8,CL
D2 /4
Multiply r/m8 by 2, CL times.
SAL
r/m8**,CL
REX + D2 /4
Multiply r/m8 by 2, CL times.
SAL
r/m8,imm8
C0 /4 ib
Multiply r/m8 by 2, imm8 times.
SAL
r/m8**,imm8
REX + C0 /4 ib
Multiply r/m8 by 2, imm8 times.
SAL
r/m16,1
D1 /4
Multiply r/m16 by 2, once.
SAL
r/m16,CL
D3 /4
Multiply r/m16 by 2, CL times.
SAL
r/m16,imm8
C1 /4 ib
Multiply r/m16 by 2, imm8 times.
SAL
r/m32,1
D1 /4
Multiply r/m32 by 2, once.
SAL
r/m64,1
REX.W + D1 /4
Multiply r/m64 by 2, once.
SAL
r/m32,CL
D3 /4
Multiply r/m32 by 2, CL times.
SAL
r/m64,CL
REX.W + D3 /4
Multiply r/m64 by 2, CL times.
SAL
r/m32,imm8
C1 /4 ib
Multiply r/m32 by 2, imm8 times.
SAL
r/m64,imm8
REX.W + C1 /4 ib
Multiply r/m64 by 2, imm8 times.
SAR
r/m8,1
D0 /7
Signed divide* r/m8 by 2, once.
SAR
r/m8**,1
REX + D0 /7
Signed divide* r/m8 by 2, once.
SAR
r/m8,CL
D2 /7
Signed divide* r/m8 by 2, CL times.
SAR
r/m8**,CL
REX + D2 /7
Signed divide* r/m8 by 2, CL times.
SAR
r/m8,imm8
C0 /7 ib
Signed divide* r/m8 by 2, imm8 times.
SAR
r/m8**,imm8
REX + C0 /7 ib
Signed divide* r/m8 by 2, imm8 times.
SAR
r/m16,1
D1 /7
Signed divide* r/m16 by 2, once.
SAR
r/m16,CL
D3 /7
Signed divide* r/m16 by 2, CL times.
SAR
r/m16,imm8
C1 /7 ib
Signed divide* r/m16 by 2, imm8 times.
SAR
r/m32,1
D1 /7
Signed divide* r/m32 by 2, once.
SAR
r/m64,1
REX.W + D1 /7
Signed divide* r/m64 by 2, once.
SAR
r/m32,CL
D3 /7
Signed divide* r/m32 by 2, CL times.
SAR
r/m64,CL
REX.W + D3 /7
Signed divide* r/m64 by 2, CL times.
SAR
r/m32,imm8
C1 /7 ib
Signed divide* r/m32 by 2, imm8 times.
SAR
r/m64,imm8
REX.W + C1 /7 ib
Signed divide* r/m64 by 2, imm8 times.
SHL
r/m8,1
D0 /4
Multiply r/m8 by 2, once.
SHL
r/m8**,1
REX + D0 /4
Multiply r/m8 by 2, once.
SHL
r/m8,CL
D2 /4
Multiply r/m8 by 2, CL times.
SHL
r/m8**,CL
REX + D2 /4
Multiply r/m8 by 2, CL times.
SHL
r/m8,imm8
C0 /4 ib
Multiply r/m8 by 2, imm8 times.
SHL
r/m8**,imm8
REX + C0 /4 ib
Multiply r/m8 by 2, imm8 times.
SHL
r/m16,1
D1 /4
Multiply r/m16 by 2, once.
SHL
r/m16,CL
D3 /4
Multiply r/m16 by 2, CL times.
SHL
r/m16,imm8
C1 /4 ib
Multiply r/m16 by 2, imm8 times.
SHL
r/m32,1
D1 /4
Multiply r/m32 by 2, once.
SHL
r/m64,1
REX.W + D1 /4
Multiply r/m64 by 2, once.
SHL
r/m32,CL
D3 /4
Multiply r/m32 by 2, CL times.
SHL
r/m64,CL
REX.W + D3 /4
Multiply r/m64 by 2, CL times.
SHL
r/m32,imm8
C1 /4 ib
Multiply r/m32 by 2, imm8 times.
SHL
r/m64,imm8
REX.W + C1 /4 ib
Multiply r/m64 by 2, imm8 times.
SHR
r/m8,1
D0 /5
Unsigned divide r/m8 by 2, once.
SHR
r/m8**,1
REX + D0 /5
Unsigned divide r/m8 by 2, once.
SHR
r/m8,CL
D2 /5
Unsigned divide r/m8 by 2, CL times.
SHR
r/m8**,CL
REX + D2 /5
Unsigned divide r/m8 by 2, CL times.
SHR
r/m8,imm8
C0 /5 ib
Unsigned divide r/m8 by 2, imm8 times.
SHR
r/m8**,imm8
REX + C0 /5 ib
Unsigned divide r/m8 by 2, imm8 times.
SHR
r/m16,1
D1 /5
Unsigned divide r/m16 by 2, once.
SHR
r/m16,CL
D3 /5
Unsigned divide r/m16 by 2, CL times.
SHR
r/m16,imm8
C1 /5 ib
Unsigned divide r/m16 by 2, imm8 times.
SHR
r/m32,1
D1 /5
Unsigned divide r/m32 by 2, once.
SHR
r/m64,1
REX.W + D1 /5
Unsigned divide r/m64 by 2, once.
SHR
r/m32,CL
D3 /5
Unsigned divide r/m32 by 2, CL times.
SHR
r/m64,CL
REX.W + D3 /5
Unsigned divide r/m64 by 2, CL times.
SHR
r/m32,imm8
C1 /5 ib
Unsigned divide r/m32 by 2, imm8 times.
SHR
r/m64,imm8
REX.W + C1 /5 ib
Unsigned divide r/m64 by 2, imm8 times.
ModRM:r/m(r,w)
1
NA
NA
ModRM:r/m(r,w)
CL
NA
NA
ModRM:r/m(r,w)
imm8(r)
NA
NA
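The "signed divide" versus "unsigned divide" wording above is the difference between an arithmetic right shift (SAR, which replicates the sign bit) and a logical right shift (SHR, which shifts in zeros). A sketch of the 8-bit forms (names illustrative; the hardware masks the count to 5 bits for non-64-bit operands):

```python
def sar8(value, count):
    """Emulate SAR r/m8: arithmetic right shift, replicating the sign bit."""
    count &= 0x1F
    signed = value - 256 if value & 0x80 else value
    return (signed >> count) & 0xFF

def shr8(value, count):
    """Emulate SHR r/m8: logical right shift, shifting in 0s."""
    count &= 0x1F
    return (value & 0xFF) >> count

# Same bit pattern, different fill: 0xF0 is -16 signed, 240 unsigned.
print(hex(sar8(0xF0, 4)), hex(shr8(0xF0, 4)))
```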
SARX/SHLX/SHRX--Shift Without Affecting Flags.
SARX
r32a,r/m32,r32b
VEX.NDS.LZ.F3.0F38.W0 F7 /r
BMI2
Shift r/m32 arithmetically right with count specified in r32b.
SHLX
r32a,r/m32,r32b
VEX.NDS.LZ.66.0F38.W0 F7 /r
BMI2
Shift r/m32 logically left with count specified in r32b.
SHRX
r32a,r/m32,r32b
VEX.NDS.LZ.F2.0F38.W0 F7 /r
BMI2
Shift r/m32 logically right with count specified in r32b.
SARX
r64a,r/m64,r64b
VEX.NDS.LZ.F3.0F38.W1 F7 /r
BMI2
Shift r/m64 arithmetically right with count specified in r64b.
SHLX
r64a,r/m64,r64b
VEX.NDS.LZ.66.0F38.W1 F7 /r
BMI2
Shift r/m64 logically left with count specified in r64b.
SHRX
r64a,r/m64,r64b
VEX.NDS.LZ.F2.0F38.W1 F7 /r
BMI2
Shift r/m64 logically right with count specified in r64b.
ModRM:reg(w)
ModRM:r/m(r)
VEX.vvvv(r)
NA
SBB--Integer Subtraction with Borrow.
SBB
AL,imm8
1C ib
Subtract with borrow imm8 from AL.
SBB
AX,imm16
1D iw
Subtract with borrow imm16 from AX.
SBB
EAX,imm32
1D id
Subtract with borrow imm32 from EAX.
SBB
RAX,imm32
REX.W + 1D id
Subtract with borrow imm32 sign-extended to 64-bits from RAX.
SBB
r/m8,imm8
80 /3 ib
Subtract with borrow imm8 from r/m8.
SBB
r/m8*,imm8
REX + 80 /3 ib
Subtract with borrow imm8 from r/m8.
SBB
r/m16,imm16
81 /3 iw
Subtract with borrow imm16 from r/m16.
SBB
r/m32,imm32
81 /3 id
Subtract with borrow imm32 from r/m32.
SBB
r/m64,imm32
REX.W + 81 /3 id
Subtract with borrow sign-extended imm32 to 64-bits from r/m64.
SBB
r/m16,imm8
83 /3 ib
Subtract with borrow sign-extended imm8 from r/m16.
SBB
r/m32,imm8
83 /3 ib
Subtract with borrow sign-extended imm8 from r/m32.
SBB
r/m64,imm8
REX.W + 83 /3 ib
Subtract with borrow sign-extended imm8 from r/m64.
SBB
r/m8,r8
18 /r
Subtract with borrow r8 from r/m8.
SBB
r/m8*,r8
REX + 18 /r
Subtract with borrow r8 from r/m8.
SBB
r/m16,r16
19 /r
Subtract with borrow r16 from r/m16.
SBB
r/m32,r32
19 /r
Subtract with borrow r32 from r/m32.
SBB
r/m64,r64
REX.W + 19 /r
Subtract with borrow r64 from r/m64.
SBB
r8,r/m8
1A /r
Subtract with borrow r/m8 from r8.
SBB
r8*,r/m8*
REX + 1A /r
Subtract with borrow r/m8 from r8.
SBB
r16,r/m16
1B /r
Subtract with borrow r/m16 from r16.
SBB
r32,r/m32
1B /r
Subtract with borrow r/m32 from r32.
SBB
r64,r/m64
REX.W + 1B /r
Subtract with borrow r/m64 from r64.
AL/AX/EAX/RAX
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
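SBB's main use is chaining subtractions wider than a register: a SUB on the low limb sets CF to the borrow, and SBB on each higher limb consumes it. A sketch of a 128-bit subtract built from such a pair (names illustrative):

```python
def sbb64(a, b, cf):
    """Emulate SBB on 64-bit operands: dest = a - (b + CF).
    Returns the truncated result and the new carry (borrow) flag."""
    full = a - b - cf
    return full & 0xFFFFFFFFFFFFFFFF, int(full < 0)

def sub128(a_lo, a_hi, b_lo, b_hi):
    """128-bit subtraction from a SUB/SBB pair."""
    lo, cf = sbb64(a_lo, b_lo, 0)   # SUB sets the initial borrow
    hi, cf = sbb64(a_hi, b_hi, cf)  # SBB consumes it
    return lo, hi

# (1 << 64) - 1: the low limb borrows from the high limb.
print(sub128(0, 1, 1, 0))
```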
SCAS/SCASB/SCASW/SCASD--Scan String.
SCAS
m8
AE
Compare AL with byte at ES:(E)DI or RDI, then set status flags.*
SCAS
m16
AF
Compare AX with word at ES:(E)DI or RDI, then set status flags.*
SCAS
m32
AF
Compare EAX with doubleword at ES:(E)DI or RDI, then set status flags.*
SCAS
m64
REX.W + AF
Compare RAX with quadword at RDI or EDI then set status flags.
SCASB
void
AE
Compare AL with byte at ES:(E)DI or RDI, then set status flags.*
SCASW
void
AF
Compare AX with word at ES:(E)DI or RDI, then set status flags.*
SCASD
void
AF
Compare EAX with doubleword at ES:(E)DI or RDI, then set status flags.*
SCASQ
void
REX.W + AF
Compare RAX with quadword at RDI or EDI then set status flags.
NA
NA
NA
NA
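Combined with the REPNE prefix, SCASB is the classic strlen idiom: load AL=0 and RCX=-1, scan until the terminator byte matches, then recover the length from how far RCX was decremented. A sketch (the dictionary standing in for memory is illustrative):

```python
def repne_scasb(mem, rdi, rcx, al=0):
    """Sketch of REPNE SCASB: scan bytes at [rdi], decrementing rcx,
    until a byte equal to AL is found (ZF=1) or rcx hits 0."""
    while rcx:
        match = mem[rdi] == al
        rdi += 1
        rcx -= 1
        if match:
            break
    return rdi, rcx

data = {0: ord('h'), 1: ord('i'), 2: 0}        # "hi\0"
rdi, rcx = repne_scasb(data, 0, 0xFFFFFFFF)
print(0xFFFFFFFF - rcx - 1)                    # string length: 2
```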
SETcc--Set Byte on Condition.
SETA
r/m8
0F 97
Set byte if above (CF=0 and ZF=0).
SETA
r/m8*
REX + 0F 97
Set byte if above (CF=0 and ZF=0).
SETAE
r/m8
0F 93
Set byte if above or equal (CF=0).
SETAE
r/m8*
REX + 0F 93
Set byte if above or equal (CF=0).
SETB
r/m8
0F 92
Set byte if below (CF=1).
SETB
r/m8*
REX + 0F 92
Set byte if below (CF=1).
SETBE
r/m8
0F 96
Set byte if below or equal (CF=1 or ZF=1).
SETBE
r/m8*
REX + 0F 96
Set byte if below or equal (CF=1 or ZF=1).
SETC
r/m8
0F 92
Set byte if carry (CF=1).
SETC
r/m8*
REX + 0F 92
Set byte if carry (CF=1).
SETE
r/m8
0F 94
Set byte if equal (ZF=1).
SETE
r/m8*
REX + 0F 94
Set byte if equal (ZF=1).
SETG
r/m8
0F 9F
Set byte if greater (ZF=0 and SF=OF).
SETG
r/m8*
REX + 0F 9F
Set byte if greater (ZF=0 and SF=OF).
SETGE
r/m8
0F 9D
Set byte if greater or equal (SF=OF).
SETGE
r/m8*
REX + 0F 9D
Set byte if greater or equal (SF=OF).
SETL
r/m8
0F 9C
Set byte if less (SF != OF).
SETL
r/m8*
REX + 0F 9C
Set byte if less (SF != OF).
SETLE
r/m8
0F 9E
Set byte if less or equal (ZF=1 or SF != OF).
SETLE
r/m8*
REX + 0F 9E
Set byte if less or equal (ZF=1 or SF != OF).
SETNA
r/m8
0F 96
Set byte if not above (CF=1 or ZF=1).
SETNA
r/m8*
REX + 0F 96
Set byte if not above (CF=1 or ZF=1).
SETNAE
r/m8
0F 92
Set byte if not above or equal (CF=1).
SETNAE
r/m8*
REX + 0F 92
Set byte if not above or equal (CF=1).
SETNB
r/m8
0F 93
Set byte if not below (CF=0).
SETNB
r/m8*
REX + 0F 93
Set byte if not below (CF=0).
SETNBE
r/m8
0F 97
Set byte if not below or equal (CF=0 and ZF=0).
SETNBE
r/m8*
REX + 0F 97
Set byte if not below or equal (CF=0 and ZF=0).
SETNC
r/m8
0F 93
Set byte if not carry (CF=0).
SETNC
r/m8*
REX + 0F 93
Set byte if not carry (CF=0).
SETNE
r/m8
0F 95
Set byte if not equal (ZF=0).
SETNE
r/m8*
REX + 0F 95
Set byte if not equal (ZF=0).
SETNG
r/m8
0F 9E
Set byte if not greater (ZF=1 or SF != OF).
SETNG
r/m8*
REX + 0F 9E
Set byte if not greater (ZF=1 or SF != OF).
SETNGE
r/m8
0F 9C
Set byte if not greater or equal (SF != OF).
SETNGE
r/m8*
REX + 0F 9C
Set byte if not greater or equal (SF != OF).
SETNL
r/m8
0F 9D
Set byte if not less (SF=OF).
SETNL
r/m8*
REX + 0F 9D
Set byte if not less (SF=OF).
SETNLE
r/m8
0F 9F
Set byte if not less or equal (ZF=0 and SF=OF).
SETNLE
r/m8*
REX + 0F 9F
Set byte if not less or equal (ZF=0 and SF=OF).
SETNO
r/m8
0F 91
Set byte if not overflow (OF=0).
SETNO
r/m8*
REX + 0F 91
Set byte if not overflow (OF=0).
SETNP
r/m8
0F 9B
Set byte if not parity (PF=0).
SETNP
r/m8*
REX + 0F 9B
Set byte if not parity (PF=0).
SETNS
r/m8
0F 99
Set byte if not sign (SF=0).
SETNS
r/m8*
REX + 0F 99
Set byte if not sign (SF=0).
SETNZ
r/m8
0F 95
Set byte if not zero (ZF=0).
SETNZ
r/m8*
REX + 0F 95
Set byte if not zero (ZF=0).
SETO
r/m8
0F 90
Set byte if overflow (OF=1).
SETO
r/m8*
REX + 0F 90
Set byte if overflow (OF=1).
SETP
r/m8
0F 9A
Set byte if parity (PF=1).
SETP
r/m8*
REX + 0F 9A
Set byte if parity (PF=1).
SETPE
r/m8
0F 9A
Set byte if parity even (PF=1).
SETPE
r/m8*
REX + 0F 9A
Set byte if parity even (PF=1).
SETPO
r/m8
0F 9B
Set byte if parity odd (PF=0).
SETPO
r/m8*
REX + 0F 9B
Set byte if parity odd (PF=0).
SETS
r/m8
0F 98
Set byte if sign (SF=1).
SETS
r/m8*
REX + 0F 98
Set byte if sign (SF=1).
SETZ
r/m8
0F 94
Set byte if zero (ZF=1).
SETZ
r/m8*
REX + 0F 94
Set byte if zero (ZF=1).
ModRM:r/m(w)
NA
NA
NA
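The flag predicates tabulated above can be modeled directly as Boolean expressions over CF, ZF, SF, and OF. The helper below is an illustrative sketch (not part of any toolchain) covering a few of the conditions:

```python
# Model a subset of the SETcc predicates from status flags (illustrative).
def setcc(cond, CF=0, ZF=0, SF=0, OF=0):
    preds = {
        "A":  CF == 0 and ZF == 0,   # above (unsigned >)
        "AE": CF == 0,               # above or equal (unsigned >=)
        "B":  CF == 1,               # below (unsigned <)
        "E":  ZF == 1,               # equal
        "G":  ZF == 0 and SF == OF,  # greater (signed >)
        "L":  SF != OF,              # less (signed <)
        "LE": ZF == 1 or SF != OF,   # less or equal (signed <=)
        "S":  SF == 1,               # sign
    }
    return 1 if preds[cond] else 0   # destination byte becomes 0 or 1
```

Note the split visible in the table: the unsigned conditions (A/AE/B/BE) test CF and ZF, while the signed conditions (G/GE/L/LE) compare SF against OF.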
SFENCE--Store Fence.
SFENCE
void
0F AE F8
Serializes store operations.
NA
NA
NA
NA
SGDT--Store Global Descriptor Table Register.
SGDT
m
0F 01 /0
Store GDTR to m.
ModRM:r/m(w)
NA
NA
NA
SHLD--Double Precision Shift Left.
SHLD
r/m16,r16,imm8
0F A4 /r ib
Shift r/m16 to left imm8 places while shifting bits from r16 in from the right.
SHLD
r/m16,r16,CL
0F A5 /r
Shift r/m16 to left CL places while shifting bits from r16 in from the right.
SHLD
r/m32,r32,imm8
0F A4 /r ib
Shift r/m32 to left imm8 places while shifting bits from r32 in from the right.
SHLD
r/m64,r64,imm8
REX.W + 0F A4 /r ib
Shift r/m64 to left imm8 places while shifting bits from r64 in from the right.
SHLD
r/m32,r32,CL
0F A5 /r
Shift r/m32 to left CL places while shifting bits from r32 in from the right.
SHLD
r/m64,r64,CL
REX.W + 0F A5 /r
Shift r/m64 to left CL places while shifting bits from r64 in from the right.
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
ModRM:r/m(w)
ModRM:reg(r)
CL
NA
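The double-width shift described above can be sketched in a few lines: the destination is shifted left, and the vacated low-order bits are filled from the most-significant bits of the source. This is an illustrative model only; the `width` parameter and zero-count early-out are conveniences, not architectural features:

```python
# Model SHLD: shift dest left `count` places, filling vacated bits from
# the top of src. Counts >= the operand width leave the result undefined
# on real hardware; this sketch does not model that case.
def shld(dest, src, count, width=16):
    mask = (1 << width) - 1
    count &= 63 if width == 64 else 31   # hardware masks the shift count
    if count == 0:
        return dest                      # flags also unaffected in this case
    return ((dest << count) | (src >> (width - count))) & mask
```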
SHRD--Double Precision Shift Right.
SHRD
r/m16,r16,imm8
0F AC /r ib
Shift r/m16 to right imm8 places while shifting bits from r16 in from the left.
SHRD
r/m16,r16,CL
0F AD /r
Shift r/m16 to right CL places while shifting bits from r16 in from the left.
SHRD
r/m32,r32,imm8
0F AC /r ib
Shift r/m32 to right imm8 places while shifting bits from r32 in from the left.
SHRD
r/m64,r64,imm8
REX.W + 0F AC /r ib
Shift r/m64 to right imm8 places while shifting bits from r64 in from the left.
SHRD
r/m32,r32,CL
0F AD /r
Shift r/m32 to right CL places while shifting bits from r32 in from the left.
SHRD
r/m64,r64,CL
REX.W + 0F AD /r
Shift r/m64 to right CL places while shifting bits from r64 in from the left.
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
ModRM:r/m(w)
ModRM:reg(r)
CL
NA
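SHRD mirrors SHLD: the destination is shifted right and the vacated high-order bits are filled from the least-significant bits of the source. A matching illustrative sketch:

```python
# Model SHRD: shift dest right `count` places, filling vacated bits from
# the bottom of src. Results for counts >= width are undefined on real
# hardware and not modeled here.
def shrd(dest, src, count, width=16):
    mask = (1 << width) - 1
    count &= 63 if width == 64 else 31   # hardware masks the shift count
    if count == 0:
        return dest
    return ((dest >> count) | (src << (width - count))) & mask
```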
SHUFPD--Shuffle Packed Double-Precision Floating-Point Values.
SHUFPD
xmm1,xmm2/m128,imm8
66 0F C6 /r ib
SSE2
Shuffle packed double-precision floating-point values selected by imm8 from xmm1 and xmm2/m128 to xmm1.
VSHUFPD
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F.WIG C6 /r ib
AVX
Shuffle packed double-precision floating-point values selected by imm8 from xmm2 and xmm3/mem.
VSHUFPD
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F.WIG C6 /r ib
AVX
Shuffle packed double-precision floating-point values selected by imm8 from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
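The imm8 selection for the 128-bit form uses one bit per result element: bit 0 picks the low result element from the first source, bit 1 picks the high result element from the second. A list-based sketch (lists stand in for the two-element vectors):

```python
# Model 128-bit SHUFPD element selection (illustrative, lists as vectors).
def shufpd(src1, src2, imm8):
    low  = src1[imm8 & 1]         # bit 0 selects low/high of first source
    high = src2[(imm8 >> 1) & 1]  # bit 1 selects low/high of second source
    return [low, high]
```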
SHUFPS--Shuffle Packed Single-Precision Floating-Point Values.
SHUFPS
xmm1,xmm2/m128,imm8
0F C6 /r ib
SSE
Shuffle packed single-precision floating-point values selected by imm8 from xmm1 and xmm2/m128 to xmm1.
VSHUFPS
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.0F.WIG C6 /r ib
AVX
Shuffle Packed single-precision floating-point values selected by imm8 from xmm2 and xmm3/mem.
VSHUFPS
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.0F.WIG C6 /r ib
AVX
Shuffle Packed single-precision floating-point values selected by imm8 from ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
imm8(r)
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
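SHUFPS packs four 2-bit selector fields into imm8: the two low result elements are drawn from the first source, the two high result elements from the second. An illustrative sketch over four-element lists:

```python
# Model 128-bit SHUFPS element selection (illustrative, lists as vectors).
def shufps(src1, src2, imm8):
    sel = [(imm8 >> (2 * i)) & 3 for i in range(4)]  # four 2-bit indices
    return [src1[sel[0]], src1[sel[1]],   # low two results from first source
            src2[sel[2]], src2[sel[3]]]   # high two results from second source
```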
SIDT--Store Interrupt Descriptor Table Register.
SIDT
m
0F 01 /1
Store IDTR to m.
ModRM:r/m(w)
NA
NA
NA
SLDT--Store Local Descriptor Table Register.
SLDT
r/m16
0F 00 /0
Stores segment selector from LDTR in r/m16.
SLDT
r64/m16
REX.W + 0F 00 /0
Stores segment selector from LDTR in r64/m16.
ModRM:r/m(w)
NA
NA
NA
SMSW--Store Machine Status Word.
SMSW
r/m16
0F 01 /4
Store machine status word to r/m16.
SMSW
r32/m16
0F 01 /4
Store machine status word in low-order 16 bits of r32/m16; high-order 16 bits of r32 are undefined.
SMSW
r64/m16
REX.W + 0F 01 /4
Store machine status word in low-order 16 bits of r64/m16; high-order bits of r64 are undefined.
ModRM:r/m(w)
NA
NA
NA
SQRTPD--Compute Square Roots of Packed Double-Precision Floating-Point Values.
SQRTPD
xmm1,xmm2/m128
66 0F 51 /r
SSE2
Computes square roots of the packed double-precision floating-point values in xmm2/m128 and stores the results in xmm1.
VSQRTPD
xmm1,xmm2/m128
VEX.128.66.0F.WIG 51 /r
AVX
Computes Square Roots of the packed double-precision floating-point values in xmm2/m128 and stores the result in xmm1.
VSQRTPD
ymm1,ymm2/m256
VEX.256.66.0F.WIG 51 /r
AVX
Computes Square Roots of the packed double-precision floating-point values in ymm2/m256 and stores the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
SQRTPS--Compute Square Roots of Packed Single-Precision Floating-Point Values.
SQRTPS
xmm1,xmm2/m128
0F 51 /r
SSE
Computes square roots of the packed single-precision floating-point values in xmm2/m128 and stores the results in xmm1.
VSQRTPS
xmm1,xmm2/m128
VEX.128.0F.WIG 51 /r
AVX
Computes Square Roots of the packed single-precision floating-point values in xmm2/m128 and stores the result in xmm1.
VSQRTPS
ymm1,ymm2/m256
VEX.256.0F.WIG 51 /r
AVX
Computes Square Roots of the packed single-precision floating-point values in ymm2/m256 and stores the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
SQRTSD--Compute Square Root of Scalar Double-Precision Floating-Point Value.
SQRTSD
xmm1,xmm2/m64
F2 0F 51 /r
SSE2
Computes square root of the low double-precision floating-point value in xmm2/m64 and stores the results in xmm1.
VSQRTSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 51 /r
AVX
Computes square root of the low double-precision floating-point value in xmm3/m64 and stores the result in xmm1. Also, the upper double-precision floating-point value (bits[127:64]) from xmm2 is copied to xmm1[127:64].
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SQRTSS--Compute Square Root of Scalar Single-Precision Floating-Point Value.
SQRTSS
xmm1,xmm2/m32
F3 0F 51 /r
SSE
Computes square root of the low single-precision floating-point value in xmm2/m32 and stores the results in xmm1.
VSQRTSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 51 /r
AVX
Computes square root of the low single-precision floating-point value in xmm3/m32 and stores the results in xmm1. Also, upper single-precision floating-point values (bits[127:32]) from xmm2 are copied to xmm1[127:32].
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
STAC--Set AC Flag in EFLAGS Register.
STAC
void
0F 01 CB
Set the AC flag in the EFLAGS register.
NA
NA
NA
NA
STC--Set Carry Flag.
STC
void
F9
Set CF flag.
NA
NA
NA
NA
STD--Set Direction Flag.
STD
void
FD
Set DF flag.
NA
NA
NA
NA
STI--Set Interrupt Flag.
STI
void
FB
Set interrupt flag; external, maskable interrupts enabled at the end of the next instruction.
NA
NA
NA
NA
STMXCSR--Store MXCSR Register State.
STMXCSR
m32
0F AE /3
SSE
Store contents of MXCSR register to m32.
VSTMXCSR
m32
VEX.LZ.0F.WIG AE /3
AVX
Store contents of MXCSR register to m32.
ModRM:r/m(w)
NA
NA
NA
STOS/STOSB/STOSW/STOSD/STOSQ--Store String.
STOS
m8
AA
For legacy mode, store AL at address ES:(E)DI; For 64-bit mode store AL at address RDI or EDI.
STOS
m16
AB
For legacy mode, store AX at address ES:(E)DI; For 64-bit mode store AX at address RDI or EDI.
STOS
m32
AB
For legacy mode, store EAX at address ES:(E)DI; For 64-bit mode store EAX at address RDI or EDI.
STOS
m64
REX.W + AB
Store RAX at address RDI or EDI.
STOSB
void
AA
For legacy mode, store AL at address ES:(E)DI; For 64-bit mode store AL at address RDI or EDI.
STOSW
void
AB
For legacy mode, store AX at address ES:(E)DI; For 64-bit mode store AX at address RDI or EDI.
STOSD
void
AB
For legacy mode, store EAX at address ES:(E)DI; For 64-bit mode store EAX at address RDI or EDI.
STOSQ
void
REX.W + AB
Store RAX at address RDI or EDI.
NA
NA
NA
NA
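The store-and-advance behavior of STOSB, including the direction flag, can be modeled with a bytearray standing in for the destination segment. An illustrative sketch (not cycle-accurate, and ignoring segmentation and REP prefixes):

```python
# Model STOSB: store AL at [DI], then step DI forward (DF=0) or
# backward (DF=1) by the element size, which is 1 byte for STOSB.
def stosb(mem, di, al, df=0):
    mem[di] = al & 0xFF
    return di + (-1 if df else 1)   # new index register value
```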
STR--Store Task Register.
STR
r/m16
0F 00 /1
Stores segment selector from TR in r/m16.
ModRM:r/m(w)
NA
NA
NA
SUB--Subtract.
SUB
AL,imm8
2C ib
Subtract imm8 from AL.
SUB
AX,imm16
2D iw
Subtract imm16 from AX.
SUB
EAX,imm32
2D id
Subtract imm32 from EAX.
SUB
RAX,imm32
REX.W + 2D id
Subtract imm32 sign-extended to 64-bits from RAX.
SUB
r/m8,imm8
80 /5 ib
Subtract imm8 from r/m8.
SUB
r/m8*,imm8
REX + 80 /5 ib
Subtract imm8 from r/m8.
SUB
r/m16,imm16
81 /5 iw
Subtract imm16 from r/m16.
SUB
r/m32,imm32
81 /5 id
Subtract imm32 from r/m32.
SUB
r/m64,imm32
REX.W + 81 /5 id
Subtract imm32 sign-extended to 64-bits from r/m64.
SUB
r/m16,imm8
83 /5 ib
Subtract sign-extended imm8 from r/m16.
SUB
r/m32,imm8
83 /5 ib
Subtract sign-extended imm8 from r/m32.
SUB
r/m64,imm8
REX.W + 83 /5 ib
Subtract sign-extended imm8 from r/m64.
SUB
r/m8,r8
28 /r
Subtract r8 from r/m8.
SUB
r/m8*,r8*
REX + 28 /r
Subtract r8 from r/m8.
SUB
r/m16,r16
29 /r
Subtract r16 from r/m16.
SUB
r/m32,r32
29 /r
Subtract r32 from r/m32.
SUB
r/m64,r64
REX.W + 29 /r
Subtract r64 from r/m64.
SUB
r8,r/m8
2A /r
Subtract r/m8 from r8.
SUB
r8*,r/m8*
REX + 2A /r
Subtract r/m8 from r8.
SUB
r16,r/m16
2B /r
Subtract r/m16 from r16.
SUB
r32,r/m32
2B /r
Subtract r/m32 from r32.
SUB
r64,r/m64
REX.W + 2B /r
Subtract r/m64 from r64.
AL/AX/EAX/RAX
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
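The flag effects of SUB can be sketched for a narrow operand: CF records the unsigned borrow and OF the signed overflow. The `width` parameter and tuple return are conveniences of this illustrative model, not architectural features:

```python
# Model SUB and the main flags it sets, for a `width`-bit operand.
def sub_flags(a, b, width=8):
    mask = (1 << width) - 1
    r = (a - b) & mask
    CF = int(b > a)                      # unsigned borrow out of the subtract
    ZF = int(r == 0)
    SF = (r >> (width - 1)) & 1          # sign bit of the result
    sa, sb = (a >> (width - 1)) & 1, (b >> (width - 1)) & 1
    OF = int(sa != sb and SF != sa)      # signed overflow: sign flipped wrongly
    return r, CF, ZF, SF, OF
```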
SUBPD--Subtract Packed Double-Precision Floating-Point Values.
SUBPD
xmm1,xmm2/m128
66 0F 5C /r
SSE2
Subtract packed double-precision floating-point values in xmm2/m128 from xmm1.
VSUBPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 5C /r
AVX
Subtract packed double-precision floating-point values in xmm3/mem from xmm2 and stores result in xmm1.
VSUBPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 5C /r
AVX
Subtract packed double-precision floating-point values in ymm3/mem from ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SUBPS--Subtract Packed Single-Precision Floating-Point Values.
SUBPS
xmm1,xmm2/m128
0F 5C /r
SSE
Subtract packed single-precision floating-point values in xmm2/mem from xmm1.
VSUBPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 5C /r
AVX
Subtract packed single-precision floating-point values in xmm3/mem from xmm2 and stores result in xmm1.
VSUBPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 5C /r
AVX
Subtract packed single-precision floating-point values in ymm3/mem from ymm2 and stores result in ymm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SUBSD--Subtract Scalar Double-Precision Floating-Point Values.
SUBSD
xmm1,xmm2/m64
F2 0F 5C /r
SSE2
Subtracts the low double-precision floating-point value in xmm2/mem64 from xmm1.
VSUBSD
xmm1,xmm2,xmm3/m64
VEX.NDS.LIG.F2.0F.WIG 5C /r
AVX
Subtract the low double-precision floating-point value in xmm3/mem from xmm2 and store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SUBSS--Subtract Scalar Single-Precision Floating-Point Values.
SUBSS
xmm1,xmm2/m32
F3 0F 5C /r
SSE
Subtract the low single-precision floating-point value in xmm2/m32 from xmm1.
VSUBSS
xmm1,xmm2,xmm3/m32
VEX.NDS.LIG.F3.0F.WIG 5C /r
AVX
Subtract the low single-precision floating-point value in xmm3/mem from xmm2 and store the result in xmm1.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
SWAPGS--Swap GS Base Register.
SWAPGS
void
0F 01 F8
Exchanges the current GS base register value with the value contained in MSR address C0000102H.
NA
NA
NA
NA
SYSCALL--Fast System Call.
SYSCALL
void
0F 05
Fast call to privilege level 0 system procedures.
NA
NA
NA
NA
SYSENTER--Fast System Call.
SYSENTER
void
0F 34
Fast call to privilege level 0 system procedures.
NA
NA
NA
NA
SYSEXIT--Fast Return from Fast System Call.
SYSEXIT
void
0F 35
Fast return to privilege level 3 user code.
SYSEXIT
void
REX.W + 0F 35
Fast return to 64-bit mode privilege level 3 user code.
NA
NA
NA
NA
SYSRET--Return From Fast System Call.
SYSRET
void
0F 07
Return to compatibility mode from fast system call.
SYSRET
void
REX.W + 0F 07
Return to 64-bit mode from fast system call.
NA
NA
NA
NA
TEST--Logical Compare.
TEST
AL,imm8
A8 ib
AND imm8 with AL; set SF, ZF, PF according to result.
TEST
AX,imm16
A9 iw
AND imm16 with AX; set SF, ZF, PF according to result.
TEST
EAX,imm32
A9 id
AND imm32 with EAX; set SF, ZF, PF according to result.
TEST
RAX,imm32
REX.W + A9 id
AND imm32 sign-extended to 64-bits with RAX; set SF, ZF, PF according to result.
TEST
r/m8,imm8
F6 /0 ib
AND imm8 with r/m8; set SF, ZF, PF according to result.
TEST
r/m8*,imm8
REX + F6 /0 ib
AND imm8 with r/m8; set SF, ZF, PF according to result.
TEST
r/m16,imm16
F7 /0 iw
AND imm16 with r/m16; set SF, ZF, PF according to result.
TEST
r/m32,imm32
F7 /0 id
AND imm32 with r/m32; set SF, ZF, PF according to result.
TEST
r/m64,imm32
REX.W + F7 /0 id
AND imm32 sign-extended to 64-bits with r/m64; set SF, ZF, PF according to result.
TEST
r/m8,r8
84 /r
AND r8 with r/m8; set SF, ZF, PF according to result.
TEST
r/m8*,r8*
REX + 84 /r
AND r8 with r/m8; set SF, ZF, PF according to result.
TEST
r/m16,r16
85 /r
AND r16 with r/m16; set SF, ZF, PF according to result.
TEST
r/m32,r32
85 /r
AND r32 with r/m32; set SF, ZF, PF according to result.
TEST
r/m64,r64
REX.W + 85 /r
AND r64 with r/m64; set SF, ZF, PF according to result.
AL/AX/EAX/RAX
imm8(r)/16/32
NA
NA
ModRM:r/m(r)
imm8(r)/16/32
NA
NA
ModRM:r/m(r)
ModRM:reg(r)
NA
NA
TZCNT--Count the Number of Trailing Zero Bits.
TZCNT
r16,r/m16
F3 0F BC /r
BMI1
Count the number of trailing zero bits in r/m16, return result in r16.
TZCNT
r32,r/m32
F3 0F BC /r
BMI1
Count the number of trailing zero bits in r/m32, return result in r32.
TZCNT
r64,r/m64
F3 REX.W 0F BC /r
BMI1
Count the number of trailing zero bits in r/m64, return result in r64.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
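TZCNT's defining property is its zero-input behavior: an all-zero source yields the operand size (whereas BSF leaves the destination undefined). A sketch using Python's arbitrary-precision integers:

```python
# Model TZCNT: count trailing zero bits of a width-bit value.
def tzcnt(x, width=32):
    x &= (1 << width) - 1
    if x == 0:
        return width                     # defined result for zero input
    return (x & -x).bit_length() - 1     # position of lowest set bit
```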
UCOMISD--Unordered Compare Scalar Double-Precision Floating-Point Values and Set EFLAGS.
UCOMISD
xmm1,xmm2/m64
66 0F 2E /r
SSE2
Compares (unordered) the low double-precision floating-point values in xmm1 and xmm2/m64 and sets the EFLAGS accordingly.
VUCOMISD
xmm1,xmm2/m64
VEX.LIG.66.0F.WIG 2E /r
AVX
Compare low double precision floating-point values in xmm1 and xmm2/mem64 and set the EFLAGS flags accordingly.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
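The unordered compare maps the three ordered outcomes, plus the NaN (unordered) case, onto ZF/PF/CF. A scalar model (illustrative; ZF=PF=CF=1 is the unordered encoding):

```python
import math

# Model the EFLAGS result of UCOMISD/UCOMISS for two scalar operands.
def ucomisd(a, b):
    if math.isnan(a) or math.isnan(b):
        return dict(ZF=1, PF=1, CF=1)    # unordered: any NaN operand
    if a > b:
        return dict(ZF=0, PF=0, CF=0)
    if a < b:
        return dict(ZF=0, PF=0, CF=1)
    return dict(ZF=1, PF=0, CF=0)        # equal
```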
UCOMISS--Unordered Compare Scalar Single-Precision Floating-Point Values and Set EFLAGS.
UCOMISS
xmm1,xmm2/m32
0F 2E /r
SSE
Compare lower single-precision floating-point value in xmm1 register with lower single-precision floating-point value in xmm2/mem and set the status flags accordingly.
VUCOMISS
xmm1,xmm2/m32
VEX.LIG.0F.WIG 2E /r
AVX
Compare low single precision floating-point values in xmm1 and xmm2/mem32 and set the EFLAGS flags accordingly.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
UD2--Undefined Instruction.
UD2
void
0F 0B
Raise invalid opcode exception.
NA
NA
NA
NA
UNPCKHPD--Unpack and Interleave High Packed Double-Precision Floating-Point Values.
UNPCKHPD
xmm1,xmm2/m128
66 0F 15 /r
SSE2
Unpacks and Interleaves double-precision floating-point values from high quadwords of xmm1 and xmm2/m128.
VUNPCKHPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 15 /r
AVX
Unpacks and Interleaves double-precision floating-point values from high quadwords of xmm2 and xmm3/m128.
VUNPCKHPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 15 /r
AVX
Unpacks and Interleaves double-precision floating-point values from high quadwords of ymm2 and ymm3/m256.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
UNPCKHPS--Unpack and Interleave High Packed Single-Precision Floating-Point Values.
UNPCKHPS
xmm1,xmm2/m128
0F 15 /r
SSE
Unpacks and Interleaves single-precision floating-point values from high quadwords of xmm1 and xmm2/mem into xmm1.
VUNPCKHPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 15 /r
AVX
Unpacks and Interleaves single-precision floating-point values from high quadwords of xmm2 and xmm3/m128.
VUNPCKHPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 15 /r
AVX
Unpacks and Interleaves single-precision floating-point values from high quadwords of ymm2 and ymm3/m256.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
UNPCKLPD--Unpack and Interleave Low Packed Double-Precision Floating-Point Values.
UNPCKLPD
xmm1,xmm2/m128
66 0F 14 /r
SSE2
Unpacks and Interleaves double-precision floating-point values from low quadwords of xmm1 and xmm2/m128.
VUNPCKLPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 14 /r
AVX
Unpacks and Interleaves double-precision floating-point values from low quadwords of xmm2 and xmm3/m128.
VUNPCKLPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 14 /r
AVX
Unpacks and Interleaves double-precision floating-point values from low quadwords of ymm2 and ymm3/m256.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
UNPCKLPS--Unpack and Interleave Low Packed Single-Precision Floating-Point Values.
UNPCKLPS
xmm1,xmm2/m128
0F 14 /r
SSE
Unpacks and Interleaves single-precision floating-point values from low quadwords of xmm1 and xmm2/mem into xmm1.
VUNPCKLPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 14 /r
AVX
Unpacks and Interleaves single-precision floating-point values from low quadwords of xmm2 and xmm3/m128.
VUNPCKLPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 14 /r
AVX
Unpacks and Interleaves single-precision floating-point values from low quadwords of ymm2 and ymm3/m256.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
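The low and high unpack operations interleave corresponding elements from the two sources. For four-element single-precision vectors (lists standing in for registers, an illustrative sketch):

```python
# Model 128-bit UNPCKLPS: interleave the low halves of two 4-element vectors.
def unpcklps(a, b):
    return [a[0], b[0], a[1], b[1]]

# The high-half counterpart, as UNPCKHPS does.
def unpckhps(a, b):
    return [a[2], b[2], a[3], b[3]]
```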
VBROADCAST--Broadcast Floating-Point Data.
VBROADCASTSS
xmm1,m32
VEX.128.66.0F38.W0 18 /r
AVX
Broadcast single-precision floating-point element in mem to four locations in xmm1.
VBROADCASTSS
ymm1,m32
VEX.256.66.0F38.W0 18 /r
AVX
Broadcast single-precision floating-point element in mem to eight locations in ymm1.
VBROADCASTSD
ymm1,m64
VEX.256.66.0F38.W0 19 /r
AVX
Broadcast double-precision floating-point element in mem to four locations in ymm1.
VBROADCASTF128
ymm1,m128
VEX.256.66.0F38.W0 1A /r
AVX
Broadcast 128 bits of floating-point data in mem to low and high 128-bits in ymm1.
VBROADCASTSS
xmm1,xmm2
VEX.128.66.0F38.W0 18/r
AVX2
Broadcast the low single-precision floating-point element in the source operand to four locations in xmm1.
VBROADCASTSS
ymm1,xmm2
VEX.256.66.0F38.W0 18 /r
AVX2
Broadcast low single-precision floating-point element in the source operand to eight locations in ymm1.
VBROADCASTSD
ymm1,xmm2
VEX.256.66.0F38.W0 19 /r
AVX2
Broadcast low double-precision floating-point element in the source operand to four locations in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
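Broadcast simply replicates one source element across every destination lane: four lanes for an xmm destination and eight for a ymm destination in the single-precision case. A trivial sketch:

```python
# Model VBROADCASTSS: replicate one element across all lanes.
# `lanes` is a sketch parameter: 4 for xmm, 8 for ymm.
def vbroadcastss(elem, lanes=8):
    return [elem] * lanes
```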
VCVTPH2PS--Convert 16-bit FP Values to Single-Precision FP Values.
VCVTPH2PS
ymm1,xmm2/m128
VEX.256.66.0F38.W0 13 /r
F16C
Convert eight packed half-precision (16-bit) floating-point values in xmm2/m128 to packed single-precision floating-point values in ymm1.
VCVTPH2PS
xmm1,xmm2/m64
VEX.128.66.0F38.W0 13 /r
F16C
Convert four packed half-precision (16-bit) floating-point values in xmm2/m64 to packed single-precision floating-point values in xmm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
VCVTPS2PH--Convert Single-Precision FP value to 16-bit FP value.
VCVTPS2PH
xmm1/m128,ymm2,imm8
VEX.256.66.0F3A.W0 1D /r ib
F16C
Convert eight packed single-precision floating-point values in ymm2 to packed half-precision (16-bit) floating-point values in xmm1/mem. Imm8 provides rounding controls.
VCVTPS2PH
xmm1/m64,xmm2,imm8
VEX.128.66.0F3A.W0 1D /r ib
F16C
Convert four packed single-precision floating-point values in xmm2 to packed half-precision (16-bit) floating-point values in xmm1/mem. Imm8 provides rounding controls.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
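A scalar version of the single-to-half round trip performed by VCVTPS2PH/VCVTPH2PS can be sketched with Python's native IEEE 754 half-precision `struct` format code. This models only round-to-nearest; the instruction's imm8 rounding controls are not represented:

```python
import struct

# Convert a Python float to IEEE 754 half precision and back, showing
# the precision loss that VCVTPS2PH followed by VCVTPH2PS would incur.
def to_half_and_back(x):
    half_bits = struct.pack("<e", x)         # 'e' = 16-bit half float
    return struct.unpack("<e", half_bits)[0]
```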
VERR/VERW--Verify a Segment for Reading or Writing.
VERR
r/m16
0F 00 /4
Set ZF=1 if segment specified with r/m16 can be read.
VERW
r/m16
0F 00 /5
Set ZF=1 if segment specified with r/m16 can be written.
ModRM:r/m(r)
NA
NA
NA
VEXTRACTF128--Extract Packed Floating-Point Values.
VEXTRACTF128
xmm1/m128,ymm2,imm8
VEX.256.66.0F3A.W0 19 /r ib
AVX
Extract 128 bits of packed floating-point values from ymm2 and store results in xmm1/mem.
ModRM:r/m(w)
ModRM:reg(r)
NA
NA
VEXTRACTI128--Extract packed Integer Values.
VEXTRACTI128
xmm1/m128,ymm2,imm8
VEX.256.66.0F3A.W0 39 /r ib
AVX2
Extract 128 bits of integer data from ymm2 and store results in xmm1/mem.
ModRM:r/m(w)
ModRM:reg(r)
imm8(r)
NA
VFMADD132PD/VFMADD213PD/VFMADD231PD--Fused Multiply-Add of Packed Double-Precision Floating-Point Values.
VFMADD132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 98 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, add to xmm1 and put result in xmm0.
VFMADD213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 A8 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, add to xmm2/mem and put result in xmm0.
VFMADD231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 B8 /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, add to xmm0 and put result in xmm0.
VFMADD132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 98 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, add to ymm1 and put result in ymm0.
VFMADD213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 A8 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, add to ymm2/mem and put result in ymm0.
VFMADD231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 B8 /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, add to ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
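The 132/213/231 suffixes encode which operands are multiplied and which is added: the digits name operand positions 1 (the destination), 2, and 3 in the order multiplicand, multiplier, addend. A scalar sketch of the three orderings (note ordinary Python arithmetic rounds the product and sum separately, whereas the fused instructions round once):

```python
# Operand orderings of the three FMA forms; arguments are
# (op1 = dest, op2, op3) as in the instruction listings above.
def fma132(a, b, c): return a * c + b   # op1*op3 + op2
def fma213(a, b, c): return b * a + c   # op2*op1 + op3
def fma231(a, b, c): return b * c + a   # op2*op3 + op1
```

The three forms exist so a compiler can keep any one of the three inputs live in the destination register without extra moves.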
VFMADD132PS/VFMADD213PS/VFMADD231PS--Fused Multiply-Add of Packed Single-Precision Floating-Point Values.
VFMADD132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 98 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, add to xmm1 and put result in xmm0.
VFMADD213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 A8 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, add to xmm2/mem and put result in xmm0.
VFMADD231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 B8 /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, add to xmm0 and put result in xmm0.
VFMADD132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 98 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, add to ymm1 and put result in ymm0.
VFMADD213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 A8 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, add to ymm2/mem and put result in ymm0.
VFMADD231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 B8 /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, add to ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMADD132SD/VFMADD213SD/VFMADD231SD--Fused Multiply-Add of Scalar Double-Precision Floating-Point Values.
VFMADD132SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 99 /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm2/mem, add to xmm1 and put result in xmm0.
VFMADD213SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 A9 /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm1, add to xmm2/mem and put result in xmm0.
VFMADD231SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 B9 /r
FMA
Multiply scalar double-precision floating-point value from xmm1 and xmm2/mem, add to xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMADD132SS/VFMADD213SS/VFMADD231SS--Fused Multiply-Add of Scalar Single-Precision Floating-Point Values.
VFMADD132SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 99 /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm2/mem, add to xmm1 and put result in xmm0.
VFMADD213SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 A9 /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm1, add to xmm2/mem and put result in xmm0.
VFMADD231SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 B9 /r
FMA
Multiply scalar single-precision floating-point value from xmm1 and xmm2/mem, add to xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMADDSUB132PD/VFMADDSUB213PD/VFMADDSUB231PD--Fused Multiply-Alternating Add/Subtract of Packed Double-Precision Floating-Point Values.
VFMADDSUB132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 96 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, add/subtract elements in xmm1 and put result in xmm0.
VFMADDSUB213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 A6 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, add/subtract elements in xmm2/mem and put result in xmm0.
VFMADDSUB231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 B6 /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, add/subtract elements in xmm0 and put result in xmm0.
VFMADDSUB132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 96 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, add/subtract elements in ymm1 and put result in ymm0.
VFMADDSUB213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 A6 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, add/subtract elements in ymm2/mem and put result in ymm0.
VFMADDSUB231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 B6 /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, add/subtract elements in ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
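The add/subtract alternation is positional: odd-indexed elements of the product are added to the third operand's elements, even-indexed elements (counting the lowest as index 0) are subtracted. Modeling just the post-multiply step (an illustrative sketch over element lists):

```python
# Model the alternating step of VFMADDSUB: given per-element products and
# addends, subtract at even indices and add at odd indices.
def fmaddsub(prod, addend):
    return [p + a if i % 2 else p - a
            for i, (p, a) in enumerate(zip(prod, addend))]
```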
VFMADDSUB132PS/VFMADDSUB213PS/VFMADDSUB231PS--Fused Multiply-Alternating Add/Subtract of Packed Single-Precision Floating-Point Values.
VFMADDSUB132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 96 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, add/subtract elements in xmm1 and put result in xmm0.
VFMADDSUB213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 A6 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, add/subtract elements in xmm2/mem and put result in xmm0.
VFMADDSUB231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 B6 /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, add/subtract elements in xmm0 and put result in xmm0.
VFMADDSUB132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 96 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, add/subtract elements in ymm1 and put result in ymm0.
VFMADDSUB213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 A6 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, add/subtract elements in ymm2/mem and put result in ymm0.
VFMADDSUB231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 B6 /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, add/subtract elements in ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUBADD132PD/VFMSUBADD213PD/VFMSUBADD231PD--Fused Multiply-Alternating Subtract/Add of Packed Double-Precision Floating-Point Values.
VFMSUBADD132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 97 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, subtract/add elements in xmm1 and put result in xmm0.
VFMSUBADD213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 A7 /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, subtract/add elements in xmm2/mem and put result in xmm0.
VFMSUBADD231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 B7 /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, subtract/add elements in xmm0 and put result in xmm0.
VFMSUBADD132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 97 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, subtract/add elements in ymm1 and put result in ymm0.
VFMSUBADD213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 A7 /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, subtract/add elements in ymm2/mem and put result in ymm0.
VFMSUBADD231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 B7 /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, subtract/add elements in ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUBADD132PS/VFMSUBADD213PS/VFMSUBADD231PS--Fused Multiply-Alternating Subtract/Add of Packed Single-Precision Floating-Point Values.
VFMSUBADD132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 97 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, subtract/add elements in xmm1 and put result in xmm0.
VFMSUBADD213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 A7 /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, subtract/add elements in xmm2/mem and put result in xmm0.
VFMSUBADD231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 B7 /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, subtract/add elements in xmm0 and put result in xmm0.
VFMSUBADD132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 97 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, subtract/add elements in ymm1 and put result in ymm0.
VFMSUBADD213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 A7 /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, subtract/add elements in ymm2/mem and put result in ymm0.
VFMSUBADD231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 B7 /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, subtract/add elements in ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUB132PD/VFMSUB213PD/VFMSUB231PD--Fused Multiply-Subtract of Packed Double-Precision Floating-Point Values.
VFMSUB132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 9A /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, subtract xmm1 and put result in xmm0.
VFMSUB213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 AA /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, subtract xmm2/mem and put result in xmm0.
VFMSUB231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 BA /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, subtract xmm0 and put result in xmm0.
VFMSUB132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 9A /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, subtract ymm1 and put result in ymm0.
VFMSUB213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 AA /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, subtract ymm2/mem and put result in ymm0.
VFMSUB231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 BA /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, subtract ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUB132PS/VFMSUB213PS/VFMSUB231PS--Fused Multiply-Subtract of Packed Single-Precision Floating-Point Values.
VFMSUB132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 9A /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, subtract xmm1 and put result in xmm0.
VFMSUB213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 AA /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, subtract xmm2/mem and put result in xmm0.
VFMSUB231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 BA /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, subtract xmm0 and put result in xmm0.
VFMSUB132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 9A /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, subtract ymm1 and put result in ymm0.
VFMSUB213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 AA /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, subtract ymm2/mem and put result in ymm0.
VFMSUB231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 BA /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, subtract ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUB132SD/VFMSUB213SD/VFMSUB231SD--Fused Multiply-Subtract of Scalar Double-Precision Floating-Point Values.
VFMSUB132SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 9B /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm2/mem, subtract xmm1 and put result in xmm0.
VFMSUB213SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 AB /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm1, subtract xmm2/mem and put result in xmm0.
VFMSUB231SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 BB /r
FMA
Multiply scalar double-precision floating-point value from xmm1 and xmm2/mem, subtract xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFMSUB132SS/VFMSUB213SS/VFMSUB231SS--Fused Multiply-Subtract of Scalar Single-Precision Floating-Point Values.
VFMSUB132SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 9B /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm2/mem, subtract xmm1 and put result in xmm0.
VFMSUB213SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 AB /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm1, subtract xmm2/mem and put result in xmm0.
VFMSUB231SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 BB /r
FMA
Multiply scalar single-precision floating-point value from xmm1 and xmm2/mem, subtract xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMADD132PD/VFNMADD213PD/VFNMADD231PD--Fused Negative Multiply-Add of Packed Double-Precision Floating-Point Values.
VFNMADD132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 9C /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, negate the multiplication result and add to xmm1 and put result in xmm0.
VFNMADD213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 AC /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, negate the multiplication result and add to xmm2/mem and put result in xmm0.
VFNMADD231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 BC /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, negate the multiplication result and add to xmm0 and put result in xmm0.
VFNMADD132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 9C /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, negate the multiplication result and add to ymm1 and put result in ymm0.
VFNMADD213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 AC /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, negate the multiplication result and add to ymm2/mem and put result in ymm0.
VFNMADD231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 BC /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, negate the multiplication result and add to ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMADD132PS/VFNMADD213PS/VFNMADD231PS--Fused Negative Multiply-Add of Packed Single-Precision Floating-Point Values.
VFNMADD132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 9C /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, negate the multiplication result and add to xmm1 and put result in xmm0.
VFNMADD213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 AC /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, negate the multiplication result and add to xmm2/mem and put result in xmm0.
VFNMADD231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 BC /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, negate the multiplication result and add to xmm0 and put result in xmm0.
VFNMADD132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 9C /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, negate the multiplication result and add to ymm1 and put result in ymm0.
VFNMADD213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 AC /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, negate the multiplication result and add to ymm2/mem and put result in ymm0.
VFNMADD231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 BC /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, negate the multiplication result and add to ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMADD132SD/VFNMADD213SD/VFNMADD231SD--Fused Negative Multiply-Add of Scalar Double-Precision Floating-Point Values.
VFNMADD132SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 9D /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm2/mem, negate the multiplication result and add to xmm1 and put result in xmm0.
VFNMADD213SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 AD /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm1, negate the multiplication result and add to xmm2/mem and put result in xmm0.
VFNMADD231SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 BD /r
FMA
Multiply scalar double-precision floating-point value from xmm1 and xmm2/mem, negate the multiplication result and add to xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMADD132SS/VFNMADD213SS/VFNMADD231SS--Fused Negative Multiply-Add of Scalar Single-Precision Floating-Point Values.
VFNMADD132SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 9D /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm2/mem, negate the multiplication result and add to xmm1 and put result in xmm0.
VFNMADD213SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 AD /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm1, negate the multiplication result and add to xmm2/mem and put result in xmm0.
VFNMADD231SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 BD /r
FMA
Multiply scalar single-precision floating-point value from xmm1 and xmm2/mem, negate the multiplication result and add to xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMSUB132PD/VFNMSUB213PD/VFNMSUB231PD--Fused Negative Multiply-Subtract of Packed Double-Precision Floating-Point Values.
VFNMSUB132PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 9E /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm2/mem, negate the multiplication result and subtract xmm1 and put result in xmm0.
VFNMSUB213PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 AE /r
FMA
Multiply packed double-precision floating-point values from xmm0 and xmm1, negate the multiplication result and subtract xmm2/mem and put result in xmm0.
VFNMSUB231PD
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W1 BE /r
FMA
Multiply packed double-precision floating-point values from xmm1 and xmm2/mem, negate the multiplication result and subtract xmm0 and put result in xmm0.
VFNMSUB132PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 9E /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm2/mem, negate the multiplication result and subtract ymm1 and put result in ymm0.
VFNMSUB213PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 AE /r
FMA
Multiply packed double-precision floating-point values from ymm0 and ymm1, negate the multiplication result and subtract ymm2/mem and put result in ymm0.
VFNMSUB231PD
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W1 BE /r
FMA
Multiply packed double-precision floating-point values from ymm1 and ymm2/mem, negate the multiplication result and subtract ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMSUB132PS/VFNMSUB213PS/VFNMSUB231PS--Fused Negative Multiply-Subtract of Packed Single-Precision Floating-Point Values.
VFNMSUB132PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 9E /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm2/mem, negate the multiplication result and subtract xmm1 and put result in xmm0.
VFNMSUB213PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 AE /r
FMA
Multiply packed single-precision floating-point values from xmm0 and xmm1, negate the multiplication result and subtract xmm2/mem and put result in xmm0.
VFNMSUB231PS
xmm0,xmm1,xmm2/m128
VEX.DDS.128.66.0F38.W0 BE /r
FMA
Multiply packed single-precision floating-point values from xmm1 and xmm2/mem, negate the multiplication result and subtract xmm0 and put result in xmm0.
VFNMSUB132PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 9E /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm2/mem, negate the multiplication result and subtract ymm1 and put result in ymm0.
VFNMSUB213PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 AE /r
FMA
Multiply packed single-precision floating-point values from ymm0 and ymm1, negate the multiplication result and subtract ymm2/mem and put result in ymm0.
VFNMSUB231PS
ymm0,ymm1,ymm2/m256
VEX.DDS.256.66.0F38.W0 BE /r
FMA
Multiply packed single-precision floating-point values from ymm1 and ymm2/mem, negate the multiplication result and subtract ymm0 and put result in ymm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMSUB132SD/VFNMSUB213SD/VFNMSUB231SD--Fused Negative Multiply-Subtract of Scalar Double-Precision Floating-Point Values.
VFNMSUB132SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 9F /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm2/mem, negate the multiplication result and subtract xmm1 and put result in xmm0.
VFNMSUB213SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 AF /r
FMA
Multiply scalar double-precision floating-point value from xmm0 and xmm1, negate the multiplication result and subtract xmm2/mem and put result in xmm0.
VFNMSUB231SD
xmm0,xmm1,xmm2/m64
VEX.DDS.LIG.128.66.0F38.W1 BF /r
FMA
Multiply scalar double-precision floating-point value from xmm1 and xmm2/mem, negate the multiplication result and subtract xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VFNMSUB132SS/VFNMSUB213SS/VFNMSUB231SS--Fused Negative Multiply-Subtract of Scalar Single-Precision Floating-Point Values.
VFNMSUB132SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 9F /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm2/mem, negate the multiplication result and subtract xmm1 and put result in xmm0.
VFNMSUB213SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 AF /r
FMA
Multiply scalar single-precision floating-point value from xmm0 and xmm1, negate the multiplication result and subtract xmm2/mem and put result in xmm0.
VFNMSUB231SS
xmm0,xmm1,xmm2/m32
VEX.DDS.LIG.128.66.0F38.W0 BF /r
FMA
Multiply scalar single-precision floating-point value from xmm1 and xmm2/mem, negate the multiplication result and subtract xmm0 and put result in xmm0.
ModRM:reg(r,w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VGATHERDPD/VGATHERQPD--Gather Packed DP FP Values Using Signed Dword/Qword Indices.
VGATHERDPD
xmm1,vm32x,xmm2
VEX.DDS.128.66.0F38.W1 92 /r
AVX2
Using dword indices specified in vm32x, gather double-precision FP values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VGATHERQPD
xmm1,vm64x,xmm2
VEX.DDS.128.66.0F38.W1 93 /r
AVX2
Using qword indices specified in vm64x, gather double-precision FP values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VGATHERDPD
ymm1,vm32x,ymm2
VEX.DDS.256.66.0F38.W1 92 /r
AVX2
Using dword indices specified in vm32x, gather double-precision FP values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VGATHERQPD
ymm1,vm64y,ymm2
VEX.DDS.256.66.0F38.W1 93 /r
AVX2
Using qword indices specified in vm64y, gather double-precision FP values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
ModRM:reg(r,w)
BaseReg(r): VSIB:base, VectorReg(r): VSIB:index
VEX.vvvv(r,w)
NA
VGATHERDPS/VGATHERQPS--Gather Packed SP FP values Using Signed Dword/Qword Indices.
VGATHERDPS
xmm1,vm32x,xmm2
VEX.DDS.128.66.0F38.W0 92 /r
AVX2
Using dword indices specified in vm32x, gather single-precision FP values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VGATHERQPS
xmm1,vm64x,xmm2
VEX.DDS.128.66.0F38.W0 93 /r
AVX2
Using qword indices specified in vm64x, gather single-precision FP values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VGATHERDPS
ymm1,vm32y,ymm2
VEX.DDS.256.66.0F38.W0 92 /r
AVX2
Using dword indices specified in vm32y, gather single-precision FP values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VGATHERQPS
xmm1,vm64y,xmm2
VEX.DDS.256.66.0F38.W0 93 /r
AVX2
Using qword indices specified in vm64y, gather single-precision FP values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
ModRM:reg(r,w)
BaseReg(r): VSIB:base, VectorReg(r): VSIB:index
VEX.vvvv(r,w)
NA
VPGATHERDD/VPGATHERQD--Gather Packed Dword Values Using Signed Dword/Qword Indices.
VPGATHERDD
xmm1,vm32x,xmm2
VEX.DDS.128.66.0F38.W0 90 /r
AVX2
Using dword indices specified in vm32x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VPGATHERQD
xmm1,vm64x,xmm2
VEX.DDS.128.66.0F38.W0 91 /r
AVX2
Using qword indices specified in vm64x, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VPGATHERDD
ymm1,vm32y,ymm2
VEX.DDS.256.66.0F38.W0 90 /r
AVX2
Using dword indices specified in vm32y, gather dword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VPGATHERQD
xmm1,vm64y,xmm2
VEX.DDS.256.66.0F38.W0 91 /r
AVX2
Using qword indices specified in vm64y, gather dword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
ModRM:reg(r,w)
BaseReg(r): VSIB:base, VectorReg(r): VSIB:index
VEX.vvvv(r,w)
NA
VPGATHERDQ/VPGATHERQQ--Gather Packed Qword Values Using Signed Dword/Qword Indices.
VPGATHERDQ
xmm1,vm32x,xmm2
VEX.DDS.128.66.0F38.W1 90 /r
AVX2
Using dword indices specified in vm32x, gather qword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VPGATHERQQ
xmm1,vm64x,xmm2
VEX.DDS.128.66.0F38.W1 91 /r
AVX2
Using qword indices specified in vm64x, gather qword values from memory conditioned on mask specified by xmm2. Conditionally gathered elements are merged into xmm1.
VPGATHERDQ
ymm1,vm32x,ymm2
VEX.DDS.256.66.0F38.W1 90 /r
AVX2
Using dword indices specified in vm32x, gather qword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
VPGATHERQQ
ymm1,vm64y,ymm2
VEX.DDS.256.66.0F38.W1 91 /r
AVX2
Using qword indices specified in vm64y, gather qword values from memory conditioned on mask specified by ymm2. Conditionally gathered elements are merged into ymm1.
ModRM:reg(r,w)
BaseReg(r): VSIB:base, VectorReg(r): VSIB:index
VEX.vvvv(r,w)
NA
VINSERTF128--Insert Packed Floating-Point Values.
VINSERTF128
ymm1,ymm2,xmm3/m128,imm8
VEX.NDS.256.66.0F3A.W0 18 /r ib
AVX
Insert 128 bits of packed floating-point values from xmm3/m128 into ymm1 at the 128-bit position selected by imm8; the remaining 128 bits are copied from ymm2.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VINSERTI128--Insert Packed Integer Values.
VINSERTI128
ymm1,ymm2,xmm3/m128,imm8
VEX.NDS.256.66.0F3A.W0 38 /r ib
AVX2
Insert 128 bits of integer data from xmm3/m128 into ymm1 at the 128-bit position selected by imm8; the remaining 128 bits are copied from ymm2.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
VMASKMOV--Conditional SIMD Packed Loads and Stores.
VMASKMOVPS
xmm1,xmm2,m128
VEX.NDS.128.66.0F38.W0 2C /r
AVX
Conditionally load packed single-precision values from m128 using mask in xmm2 and store in xmm1.
VMASKMOVPS
ymm1,ymm2,m256
VEX.NDS.256.66.0F38.W0 2C /r
AVX
Conditionally load packed single-precision values from m256 using mask in ymm2 and store in ymm1.
VMASKMOVPD
xmm1,xmm2,m128
VEX.NDS.128.66.0F38.W0 2D /r
AVX
Conditionally load packed double-precision values from m128 using mask in xmm2 and store in xmm1.
VMASKMOVPD
ymm1,ymm2,m256
VEX.NDS.256.66.0F38.W0 2D /r
AVX
Conditionally load packed double-precision values from m256 using mask in ymm2 and store in ymm1.
VMASKMOVPS
m128,xmm1,xmm2
VEX.NDS.128.66.0F38.W0 2E /r
AVX
Conditionally store packed single-precision values from xmm2 using mask in xmm1.
VMASKMOVPS
m256,ymm1,ymm2
VEX.NDS.256.66.0F38.W0 2E /r
AVX
Conditionally store packed single-precision values from ymm2 using mask in ymm1.
VMASKMOVPD
m128,xmm1,xmm2
VEX.NDS.128.66.0F38.W0 2F /r
AVX
Conditionally store packed double-precision values from xmm2 using mask in xmm1.
VMASKMOVPD
m256,ymm1,ymm2
VEX.NDS.256.66.0F38.W0 2F /r
AVX
Conditionally store packed double-precision values from ymm2 using mask in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:r/m(w)
VEX.vvvv(r)
ModRM:reg(r)
NA
VPBLENDD--Blend Packed Dwords.
VPBLENDD
xmm1,xmm2,xmm3/m128,imm8
VEX.NDS.128.66.0F3A.W0 02 /r ib
AVX2
Select dwords from xmm2 and xmm3/m128 from mask specified in imm8 and store the values into xmm1.
VPBLENDD
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.W0 02 /r ib
AVX2
Select dwords from ymm2 and ymm3/m256 from mask specified in imm8 and store the values into ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
VPBROADCAST--Broadcast Integer Data.
VPBROADCASTB
xmm1,xmm2/m8
VEX.128.66.0F38.W0 78 /r
AVX2
Broadcast a byte integer in the source operand to sixteen locations in xmm1.
VPBROADCASTB
ymm1,xmm2/m8
VEX.256.66.0F38.W0 78 /r
AVX2
Broadcast a byte integer in the source operand to thirty-two locations in ymm1.
VPBROADCASTW
xmm1,xmm2/m16
VEX.128.66.0F38.W0 79 /r
AVX2
Broadcast a word integer in the source operand to eight locations in xmm1.
VPBROADCASTW
ymm1,xmm2/m16
VEX.256.66.0F38.W0 79 /r
AVX2
Broadcast a word integer in the source operand to sixteen locations in ymm1.
VPBROADCASTD
xmm1,xmm2/m32
VEX.128.66.0F38.W0 58 /r
AVX2
Broadcast a dword integer in the source operand to four locations in xmm1.
VPBROADCASTD
ymm1,xmm2/m32
VEX.256.66.0F38.W0 58 /r
AVX2
Broadcast a dword integer in the source operand to eight locations in ymm1.
VPBROADCASTQ
xmm1,xmm2/m64
VEX.128.66.0F38.W0 59 /r
AVX2
Broadcast a qword element in mem to two locations in xmm1.
VPBROADCASTQ
ymm1,xmm2/m64
VEX.256.66.0F38.W0 59 /r
AVX2
Broadcast a qword element in mem to four locations in ymm1.
VBROADCASTI128
ymm1,m128
VEX.256.66.0F38.W0 5A /r
AVX2
Broadcast 128 bits of integer data in mem to low and high 128-bits in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
NA
NA
VPERMD--Full Doublewords Element Permutation.
VPERMD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 36 /r
AVX2
Permute doublewords in ymm3/m256 using indexes in ymm2 and store the result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VPERMPD--Permute Double-Precision Floating-Point Elements.
VPERMPD
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.W1 01 /r ib
AVX2
Permute double-precision floating-point elements in ymm2/m256 using indexes in imm8 and store the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
VPERMPS--Permute Single-Precision Floating-Point Elements.
VPERMPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 16 /r
AVX2
Permute single-precision floating-point elements in ymm3/m256 using indexes in ymm2 and store the result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VPERMQ--Qwords Element Permutation.
VPERMQ
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.W1 00 /r ib
AVX2
Permute qwords in ymm2/m256 using indexes in imm8 and store the result in ymm1.
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
VPERM2I128--Permute Integer Values.
VPERM2I128
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.W0 46 /r ib
AVX2
Permute 128-bit integer data in ymm2 and ymm3/mem using controls from imm8 and store result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
VPERMILPD--Permute Double-Precision Floating-Point Values.
VPERMILPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W0 0D /r
AVX
Permute double-precision floating-point values in xmm2 using controls from xmm3/mem and store result in xmm1.
VPERMILPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 0D /r
AVX
Permute double-precision floating-point values in ymm2 using controls from ymm3/mem and store result in ymm1.
VPERMILPD
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.W0 05 /r ib
AVX
Permute double-precision floating-point values in xmm2/mem using controls from imm8 and store result in xmm1.
VPERMILPD
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.W0 05 /r ib
AVX
Permute double-precision floating-point values in ymm2/mem using controls from imm8 and store result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
VPERMILPS--Permute Single-Precision Floating-Point Values.
VPERMILPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W0 0C /r
AVX
Permute single-precision floating-point values in xmm2 using controls from xmm3/mem and store result in xmm1.
VPERMILPS
xmm1,xmm2/m128,imm8
VEX.128.66.0F3A.W0 04 /r ib
AVX
Permute single-precision floating-point values in xmm2/mem using controls from imm8 and store result in xmm1.
VPERMILPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 0C /r
AVX
Permute single-precision floating-point values in ymm2 using controls from ymm3/mem and store result in ymm1.
VPERMILPS
ymm1,ymm2/m256,imm8
VEX.256.66.0F3A.W0 04 /r ib
AVX
Permute single-precision floating-point values in ymm2/mem using controls from imm8 and store result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:reg(w)
ModRM:r/m(r)
imm8(r)
NA
VPERM2F128--Permute Floating-Point Values.
VPERM2F128
ymm1,ymm2,ymm3/m256,imm8
VEX.NDS.256.66.0F3A.W0 06 /r ib
AVX
Permute 128-bit floating-point fields in ymm2 and ymm3/mem using controls from imm8 and store result in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
imm8(r)
VPMASKMOV--Conditional SIMD Integer Packed Loads and Stores.
VPMASKMOVD
xmm1,xmm2,m128
VEX.NDS.128.66.0F38.W0 8C /r
AVX2
Conditionally load dword values from m128 using mask in xmm2 and store in xmm1.
VPMASKMOVD
ymm1,ymm2,m256
VEX.NDS.256.66.0F38.W0 8C /r
AVX2
Conditionally load dword values from m256 using mask in ymm2 and store in ymm1.
VPMASKMOVQ
xmm1,xmm2,m128
VEX.NDS.128.66.0F38.W1 8C /r
AVX2
Conditionally load qword values from m128 using mask in xmm2 and store in xmm1.
VPMASKMOVQ
ymm1,ymm2,m256
VEX.NDS.256.66.0F38.W1 8C /r
AVX2
Conditionally load qword values from m256 using mask in ymm2 and store in ymm1.
VPMASKMOVD
m128,xmm1,xmm2
VEX.NDS.128.66.0F38.W0 8E /r
AVX2
Conditionally store dword values from xmm2 using mask in xmm1.
VPMASKMOVD
m256,ymm1,ymm2
VEX.NDS.256.66.0F38.W0 8E /r
AVX2
Conditionally store dword values from ymm2 using mask in ymm1.
VPMASKMOVQ
m128,xmm1,xmm2
VEX.NDS.128.66.0F38.W1 8E /r
AVX2
Conditionally store qword values from xmm2 using mask in xmm1.
VPMASKMOVQ
m256,ymm1,ymm2
VEX.NDS.256.66.0F38.W1 8E /r
AVX2
Conditionally store qword values from ymm2 using mask in ymm1.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
ModRM:r/m(w)
VEX.vvvv(r)
ModRM:reg(r)
NA
VPSLLVD/VPSLLVQ--Variable Bit Shift Left Logical.
VPSLLVD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W0 47 /r
AVX2
Shift bits in doublewords in xmm2 left by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VPSLLVQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W1 47 /r
AVX2
Shift bits in quadwords in xmm2 left by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VPSLLVD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 47 /r
AVX2
Shift bits in doublewords in ymm2 left by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
VPSLLVQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W1 47 /r
AVX2
Shift bits in quadwords in ymm2 left by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VPSRAVD--Variable Bit Shift Right Arithmetic.
VPSRAVD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W0 46 /r
AVX2
Shift bits in doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in the sign bits.
VPSRAVD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 46 /r
AVX2
Shift bits in doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in the sign bits.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VPSRLVD/VPSRLVQ--Variable Bit Shift Right Logical.
VPSRLVD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W0 45 /r
AVX2
Shift bits in doublewords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VPSRLVQ
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F38.W1 45 /r
AVX2
Shift bits in quadwords in xmm2 right by amount specified in the corresponding element of xmm3/m128 while shifting in 0s.
VPSRLVD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W0 45 /r
AVX2
Shift bits in doublewords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
VPSRLVQ
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F38.W1 45 /r
AVX2
Shift bits in quadwords in ymm2 right by amount specified in the corresponding element of ymm3/m256 while shifting in 0s.
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
VTESTPD/VTESTPS--Packed Bit Test.
VTESTPS
xmm1,xmm2/m128
VEX.128.66.0F38.W0 0E /r
AVX
Set ZF and CF depending on sign bit AND and ANDN of packed single-precision floating-point sources.
VTESTPS
ymm1,ymm2/m256
VEX.256.66.0F38.W0 0E /r
AVX
Set ZF and CF depending on sign bit AND and ANDN of packed single-precision floating-point sources.
VTESTPD
xmm1,xmm2/m128
VEX.128.66.0F38.W0 0F /r
AVX
Set ZF and CF depending on sign bit AND and ANDN of packed double-precision floating-point sources.
VTESTPD
ymm1,ymm2/m256
VEX.256.66.0F38.W0 0F /r
AVX
Set ZF and CF depending on sign bit AND and ANDN of packed double-precision floating-point sources.
ModRM:reg(r)
ModRM:r/m(r)
NA
NA
VZEROALL--Zero All YMM Registers.
VZEROALL
void
VEX.256.0F.WIG 77
AVX
Zero all YMM registers.
NA
NA
NA
NA
VZEROUPPER--Zero Upper Bits of YMM Registers.
VZEROUPPER
void
VEX.128.0F.WIG 77
AVX
Zero upper 128 bits of all YMM registers.
NA
NA
NA
NA
WAIT/FWAIT--Wait.
WAIT
void
9B
Check pending unmasked floating-point exceptions.
FWAIT
void
9B
Check pending unmasked floating-point exceptions.
NA
NA
NA
NA
WBINVD--Write Back and Invalidate Cache.
WBINVD
void
0F 09
Write back and flush internal caches; initiate writing-back and flushing of external caches.
NA
NA
NA
NA
WRFSBASE/WRGSBASE--Write FS/GS Segment Base.
WRFSBASE
r32
F3 0F AE /2
FSGSBASE
Load the FS base address with the 32-bit value in the source register.
WRFSBASE
r64
F3 REX.W 0F AE /2
FSGSBASE
Load the FS base address with the 64-bit value in the source register.
WRGSBASE
r32
F3 0F AE /3
FSGSBASE
Load the GS base address with the 32-bit value in the source register.
WRGSBASE
r64
F3 REX.W 0F AE /3
FSGSBASE
Load the GS base address with the 64-bit value in the source register.
ModRM:r/m(r)
NA
NA
NA
WRMSR--Write to Model Specific Register.
WRMSR
void
0F 30
Write the value in EDX:EAX to MSR specified by ECX.
NA
NA
NA
NA
WRPKRU--Write Data to User Page Key Register.
WRPKRU
void
0F 01 EF
OSPKE
Writes EAX into PKRU.
NA
NA
NA
NA
XACQUIRE/XRELEASE--Hardware Lock Elision Prefix Hints.
XACQUIRE
void
F2
HLE
A hint used with an 'XACQUIRE-enabled' instruction to start lock elision on the instruction memory operand address.
XRELEASE
void
F3
HLE
A hint used with an 'XRELEASE-enabled' instruction to end lock elision on the instruction memory operand address.
XABORT--Transactional Abort.
XABORT
imm8
C6 F8 ib
RTM
Causes an RTM abort if in RTM execution.
imm8(r)
NA
NA
NA
XADD--Exchange and Add.
XADD
r/m8,r8
0F C0 /r
Exchange r8 and r/m8; load sum into r/m8.
XADD
r/m8*,r8*
REX + 0F C0 /r
Exchange r8 and r/m8; load sum into r/m8.
XADD
r/m16,r16
0F C1 /r
Exchange r16 and r/m16; load sum into r/m16.
XADD
r/m32,r32
0F C1 /r
Exchange r32 and r/m32; load sum into r/m32.
XADD
r/m64,r64
REX.W + 0F C1 /r
Exchange r64 and r/m64; load sum into r/m64.
ModRM:r/m(r,w)
ModRM:reg(r,w)
NA
NA
XBEGIN--Transactional Begin.
XBEGIN
rel16
C7 F8 cw
RTM
Specifies the start of an RTM region. Provides a 16-bit relative offset to compute the address of the fallback instruction address at which execution resumes following an RTM abort.
XBEGIN
rel32
C7 F8 cd
RTM
Specifies the start of an RTM region. Provides a 32-bit relative offset to compute the address of the fallback instruction address at which execution resumes following an RTM abort.
Offset
NA
NA
NA
XCHG--Exchange Register/Memory with Register.
XCHG
AX,r16
90+rw
Exchange r16 with AX.
XCHG
r16,AX
90+rw
Exchange AX with r16.
XCHG
EAX,r32
90+rd
Exchange r32 with EAX.
XCHG
RAX,r64
REX.W + 90+rd
Exchange r64 with RAX.
XCHG
r32,EAX
90+rd
Exchange EAX with r32.
XCHG
r64,RAX
REX.W + 90+rd
Exchange RAX with r64.
XCHG
r/m8,r8
86 /r
Exchange r8 (byte register) with byte from r/m8.
XCHG
r/m8*,r8*
REX + 86 /r
Exchange r8 (byte register) with byte from r/m8.
XCHG
r8,r/m8
86 /r
Exchange byte from r/m8 with r8 (byte register).
XCHG
r8*,r/m8*
REX + 86 /r
Exchange byte from r/m8 with r8 (byte register).
XCHG
r/m16,r16
87 /r
Exchange r16 with word from r/m16.
XCHG
r16,r/m16
87 /r
Exchange word from r/m16 with r16.
XCHG
r/m32,r32
87 /r
Exchange r32 with doubleword from r/m32.
XCHG
r/m64,r64
REX.W + 87 /r
Exchange r64 with quadword from r/m64.
XCHG
r32,r/m32
87 /r
Exchange doubleword from r/m32 with r32.
XCHG
r64,r/m64
REX.W + 87 /r
Exchange quadword from r/m64 with r64.
AX/EAX/RAX(r,w)
opcode + rd(r,w)
NA
NA
opcode + rd(r,w)
AX/EAX/RAX(r,w)
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r,w)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r,w)
NA
NA
XEND--Transactional End.
XEND
void
0F 01 D5
RTM
Specifies the end of an RTM code region.
NA
NA
NA
NA
XGETBV--Get Value of Extended Control Register.
XGETBV
void
0F 01 D0
Reads an XCR specified by ECX into EDX:EAX.
NA
NA
NA
NA
XLAT/XLATB--Table Look-up Translation.
XLAT
m8
D7
Set AL to memory byte DS:[(E)BX + unsigned AL].
XLATB
void
D7
Set AL to memory byte DS:[(E)BX + unsigned AL].
XLATB
void
REX.W + D7
Set AL to memory byte [RBX + unsigned AL].
NA
NA
NA
NA
XOR--Logical Exclusive OR.
XOR
AL,imm8
34 ib
AL XOR imm8.
XOR
AX,imm16
35 iw
AX XOR imm16.
XOR
EAX,imm32
35 id
EAX XOR imm32.
XOR
RAX,imm32
REX.W + 35 id
RAX XOR imm32 (sign-extended).
XOR
r/m8,imm8
80 /6 ib
r/m8 XOR imm8.
XOR
r/m8*,imm8
REX + 80 /6 ib
r/m8 XOR imm8.
XOR
r/m16,imm16
81 /6 iw
r/m16 XOR imm16.
XOR
r/m32,imm32
81 /6 id
r/m32 XOR imm32.
XOR
r/m64,imm32
REX.W + 81 /6 id
r/m64 XOR imm32 (sign-extended).
XOR
r/m16,imm8
83 /6 ib
r/m16 XOR imm8 (sign-extended).
XOR
r/m32,imm8
83 /6 ib
r/m32 XOR imm8 (sign-extended).
XOR
r/m64,imm8
REX.W + 83 /6 ib
r/m64 XOR imm8 (sign-extended).
XOR
r/m8,r8
30 /r
r/m8 XOR r8.
XOR
r/m8*,r8*
REX + 30 /r
r/m8 XOR r8.
XOR
r/m16,r16
31 /r
r/m16 XOR r16.
XOR
r/m32,r32
31 /r
r/m32 XOR r32.
XOR
r/m64,r64
REX.W + 31 /r
r/m64 XOR r64.
XOR
r8,r/m8
32 /r
r8 XOR r/m8.
XOR
r8*,r/m8*
REX + 32 /r
r8 XOR r/m8.
XOR
r16,r/m16
33 /r
r16 XOR r/m16.
XOR
r32,r/m32
33 /r
r32 XOR r/m32.
XOR
r64,r/m64
REX.W + 33 /r
r64 XOR r/m64.
AL/AX/EAX/RAX
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
imm8(r)/16/32
NA
NA
ModRM:r/m(r,w)
ModRM:reg(r)
NA
NA
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
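The XOR forms above all share one flag behavior: OF and CF are cleared, SF, ZF, and PF are set from the result, and AF is undefined. A sketch of the 32-bit case, which also shows why XOR-ing a register with itself is the idiomatic zeroing sequence:

```python
def xor32(dst, src):
    """Model 32-bit XOR: returns (result, flags). OF and CF are cleared;
    SF, ZF, and PF reflect the result (AF is left undefined)."""
    r = (dst ^ src) & 0xFFFFFFFF
    flags = {
        "CF": 0,
        "OF": 0,
        "ZF": int(r == 0),
        "SF": (r >> 31) & 1,
        # PF is even parity of the low byte of the result
        "PF": int(bin(r & 0xFF).count("1") % 2 == 0),
    }
    return r, flags

# XOR-ing a register with itself zeroes it and sets ZF:
r, f = xor32(0xDEADBEEF, 0xDEADBEEF)  # r == 0, ZF == 1
```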
XORPD--Bitwise Logical XOR for Double-Precision Floating-Point Values.
XORPD
xmm1,xmm2/m128
66 0F 57 /r
SSE2
Bitwise exclusive-OR of xmm2/m128 and xmm1.
VXORPD
xmm1,xmm2,xmm3/m128
VEX.NDS.128.66.0F.WIG 57 /r
AVX
Return the bitwise logical XOR of packed double-precision floating-point values in xmm2 and xmm3/mem.
VXORPD
ymm1,ymm2,ymm3/m256
VEX.NDS.256.66.0F.WIG 57 /r
AVX
Return the bitwise logical XOR of packed double-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
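XORPD operates on the raw bit patterns of the packed doubles, not their numeric values, so XOR with a sign-bit mask negates each lane without an arithmetic operation. A sketch of one 64-bit lane:

```python
import struct

SIGN_MASK = 0x8000000000000000  # sign bit of an IEEE-754 double

def xorpd_lane(x, mask):
    """Model one 64-bit lane of XORPD: XOR the raw bits of a double."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<Q", bits ^ mask))[0]

# XOR with the sign mask flips the sign bit:
y = xorpd_lane(3.5, SIGN_MASK)  # -3.5
```

The same bit-level reasoning applies per 32-bit lane for XORPS below.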
XORPS--Bitwise Logical XOR for Single-Precision Floating-Point Values.
XORPS
xmm1,xmm2/m128
0F 57 /r
SSE
Bitwise exclusive-OR of xmm2/m128 and xmm1.
VXORPS
xmm1,xmm2,xmm3/m128
VEX.NDS.128.0F.WIG 57 /r
AVX
Return the bitwise logical XOR of packed single-precision floating-point values in xmm2 and xmm3/mem.
VXORPS
ymm1,ymm2,ymm3/m256
VEX.NDS.256.0F.WIG 57 /r
AVX
Return the bitwise logical XOR of packed single-precision floating-point values in ymm2 and ymm3/mem.
ModRM:reg(r,w)
ModRM:r/m(r)
NA
NA
ModRM:reg(w)
VEX.vvvv(r)
ModRM:r/m(r)
NA
XRSTOR--Restore Processor Extended States.
XRSTOR
mem
0F AE /5
Restore state components specified by EDX:EAX from mem.
XRSTOR64
mem
REX.W + 0F AE /5
Restore state components specified by EDX:EAX from mem.
ModRM:r/m(r)
NA
NA
NA
XRSTORS--Restore Processor Extended States Supervisor.
XRSTORS
mem
0F C7 /3
Restore state components specified by EDX:EAX from mem.
XRSTORS64
mem
REX.W + 0F C7 /3
Restore state components specified by EDX:EAX from mem.
ModRM:r/m(r)
NA
NA
NA
XSAVE--Save Processor Extended States.
XSAVE
mem
0F AE /4
Save state components specified by EDX:EAX to mem.
XSAVE64
mem
REX.W + 0F AE /4
Save state components specified by EDX:EAX to mem.
ModRM:r/m(w)
NA
NA
NA
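For XSAVE and the related save/restore instructions, EDX:EAX forms a requested-feature bitmap with the same component layout as XCR0 (bit 0 = x87, bit 1 = SSE, bit 2 = AVX). A sketch of composing a mask and splitting it into the register pair the instruction expects:

```python
# State-component bits, matching the XCR0 layout:
X87, SSE, AVX = 1 << 0, 1 << 1, 1 << 2

def split_edx_eax(mask):
    """Split a 64-bit state-component mask into the EDX:EAX pair."""
    return (mask >> 32) & 0xFFFFFFFF, mask & 0xFFFFFFFF

edx, eax = split_edx_eax(X87 | SSE | AVX)  # (0, 7)
```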
XSAVEC--Save Processor Extended States with Compaction.
XSAVEC
mem
0F C7 /4
Save state components specified by EDX:EAX to mem with compaction.
XSAVEC64
mem
REX.W + 0F C7 /4
Save state components specified by EDX:EAX to mem with compaction.
ModRM:r/m(w)
NA
NA
NA
XSAVEOPT--Save Processor Extended States Optimized.
XSAVEOPT
mem
0F AE /6
XSAVEOPT
Save state components specified by EDX:EAX to mem, optimizing if possible.
XSAVEOPT64
mem
REX.W + 0F AE /6
XSAVEOPT
Save state components specified by EDX:EAX to mem, optimizing if possible.
ModRM:r/m(w)
NA
NA
NA
XSAVES--Save Processor Extended States Supervisor.
XSAVES
mem
0F C7 /5
Save state components specified by EDX:EAX to mem with compaction, optimizing if possible.
XSAVES64
mem
REX.W + 0F C7 /5
Save state components specified by EDX:EAX to mem with compaction, optimizing if possible.
ModRM:r/m(w)
NA
NA
NA
XSETBV--Set Extended Control Register.
XSETBV
void
0F 01 D1
Write the value in EDX:EAX to the XCR specified by ECX.
NA
NA
NA
NA
XTEST--Test If In Transactional Execution.
XTEST
void
0F 01 D6
HLE/RTM
Test if executing in a transactional region.
NA
NA
NA
NA