Using the x87-FPU in x64/AMD64-mode under Linux

Does anyone know whether Linux supports the use of the x87 FPU in 64-bit mode, i.e. whether the instructions are not trapped and the registers are saved on a context switch? I don't want to use it, and I know SSE is the standard in x64 mode; it's just out of curiosity.

Yes, it is supported. The D language uses that feature: when you use the types float or double it compiles SSE code, but when you use real, which tells the compiler to use the implementation's most precise type, it emits x87 code. On x86 that is the x87's 80-bit extended type.
https://godbolt.org/z/50kr-H
real square(real num) {
    return num * num;
}
float square(float num) {
    return num * num;
}
compiles to
real example.square(real):
push rbp
mov rbp, rsp
fld tbyte ptr [rbp + 16]
fstp tbyte ptr [rbp - 16]
fld tbyte ptr [rbp - 16]
fmul st(0), st
pop rbp
ret
float example.square(float):
push rbp
mov rbp, rsp
movss dword ptr [rbp - 4], xmm0
movss xmm0, dword ptr [rbp - 4]
mulss xmm0, dword ptr [rbp - 4]
pop rbp
ret
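The same thing can be seen in plain C: the x86-64 System V psABI maps long double to the 80-bit x87 extended type, and the kernel saves the x87 registers together with the SSE state (fxsave/xsave) on every context switch. A minimal sketch of my own, not taken from the original answer:
long double square(long double num) {
    return num * num;   /* gcc/clang emit fld/fmul here, not SSE */
}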

Related

msvc x64: how to omit extra size check before __chkstk() for _alloca(size_t)

When calling _alloca(size) with a size known only at run time, MSVC x64 v19.* calls __chkstk(), but it also emits extra code that checks whether size+15 overflows; if it does, size is set to 0x0ffffffffffffff0. See: godbolt.org/z/YT4xE8s4q
extern void * _alloca(size_t); // x64 msvc v19.*
int f()
{
    size_t n = 3 & (size_t)f;
    void * p = _alloca(n);
    return 3 & (int)(size_t)p;
}
compiled by x64 MSVC v19.latest with the options -O2 -GS-:
f PROC ; COMDAT
$LN5:
push rbp
sub rsp, 32 ; 00000020H
lea rbp, QWORD PTR [rsp+32]
lea rax, OFFSET FLAT:f
and eax, 3 ; eax = n
lea rcx, QWORD PTR [rax+15] ; rcx = n+15 for 16-byte align
cmp rcx, rax ; checks overflow
ja SHORT $LN3#f ; normally n+15 is above n
mov rcx, 1152921504606846960 ; 0ffffffffffffff0H
$LN3#f:
and rcx, -16 ; align the size
mov rax, rcx ; rax = argument for __chkstk
call __chkstk ; probe stack pages in sequence
sub rsp, rcx ; do allocation after probe
lea rax, QWORD PTR [rsp+32]
and eax, 3
mov rsp, rbp
pop rbp
ret 0
f ENDP
Such an "overflow check" is needless for a normal program; clang and gcc do not emit such checks.
It is unexpected that it inserts these extra instructions (cmp + ja + mov = 15 bytes) for every _alloca.
I tried __assume(n+15>n) and __assume(n<0xFFFFu), but neither helps; they seem to be ignored.
I guess the MSVC backend (c2.dll) uses some hardcoded "snippet" to handle _alloca().
So the question is: is there an option, documented or undocumented, to disable the "overflow check"?
Or is there some global flag that could "control" the compiler backend's "snippet"?

Top of a stack frame is not on the lowest address when examining $rsp [duplicate]

I have tried to understand this based on a square function in C++ at godbolt.org. Clearly, the return value, parameters and local variables are all addressed as "rbp - offset" in this function.
Could someone please explain how this is possible?
What, then, would "rbp + offset" refer to in this case?
int square(int num){
    int n = 5; // just to test how locals are treated with frame pointer
    return num * num;
}
Compiler (x86-64 gcc 11.1)
Generated Assembly:
square(int):
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-20], edi  ; both the param and the local var use rbp-*
mov DWORD PTR [rbp-4], 5
mov eax, DWORD PTR [rbp-20]
imul eax, eax
pop rbp
ret
This is one of those cases where it’s handy to distinguish between parameters and arguments. In short: arguments are the values given by the caller, while parameters are the variables holding them.
When square is called, the caller places the argument in the rdi register, in accordance with the standard x86-64 calling convention. square then allocates a local variable, the parameter, and copies the argument into it. This allows the parameter to be used like any other variable: it can be read, written to, have its address taken, and so on. Since in this case it's the callee that allocated the memory for the parameter, it necessarily has to reside below the frame pointer.
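A hypothetical C variant (not from the original question) shows the same effect even with optimisation enabled: once the parameter's address escapes, the callee must give it a slot in its own frame and spill the incoming register argument there.
void observe(int *p);   /* assumed external function, defined elsewhere */

int square_addr(int num) {
    observe(&num);      /* taking the address forces num into a stack slot */
    return num * num;
}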
With an ABI where arguments are passed on the stack, the callee can simply reuse the stack slot containing the argument as the parameter. This is exactly what happens on x86-32 (pass -m32 to see for yourself):
square(int): # #square(int)
push ebp
mov ebp, esp
push eax
mov eax, dword ptr [ebp + 8]
mov dword ptr [ebp - 4], 5
mov eax, dword ptr [ebp + 8]
imul eax, dword ptr [ebp + 8]
add esp, 4
pop ebp
ret
Of course, if you enabled optimisations, the compiler would not bother with allocating a parameter on the stack in the callee; it would just use the value in the register directly:
square(int): # #square(int)
mov eax, edi
imul eax, edi
ret
GCC allows "leaf" functions, those that don't call other functions, to skip creating a stack frame altogether. The free stack space below rsp (the 128-byte red zone in the x86-64 System V ABI) is fair game for such functions to use as they wish.
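As an illustration (a hypothetical example, not taken from the question): a leaf function whose local data the compiler keeps in memory will typically address it in the red zone below rsp, without adjusting rsp or setting up rbp at all.
int sum3(int a, int b, int c) {
    volatile int tmp[3];                 /* volatile keeps the array in memory */
    tmp[0] = a; tmp[1] = b; tmp[2] = c;
    /* with gcc -O2 these typically end up at [rsp-12]..[rsp-4], in the red zone */
    return tmp[0] + tmp[1] + tmp[2];
}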

Slow SIMD performance - no inlining

Consider the following examples for calculating the sum of an i32 array:
Example1: Simple for loop
pub fn vec_sum_for_loop_i32(src: &[i32]) -> i32 {
    let mut sum = 0;
    for c in src {
        sum += *c;
    }
    sum
}
Example2: Explicit SIMD sum:
use std::arch::x86_64::*;
// #[inline]
pub fn vec_sum_simd_direct_loop(src: &[i32]) -> i32 {
    #[cfg(debug_assertions)]
    assert!(src.as_ptr() as u64 % 64 == 0);
    #[cfg(debug_assertions)]
    assert!(src.len() % (std::mem::size_of::<__m256i>() / std::mem::size_of::<i32>()) == 0);
    let p_src = src.as_ptr();
    let batch_size = std::mem::size_of::<__m256i>() / std::mem::size_of::<i32>();
    #[cfg(debug_assertions)]
    assert!(src.len() % batch_size == 0);
    let result: i32;
    unsafe {
        let mut offset: isize = 0;
        let total: isize = src.len() as isize;
        let mut curr_sum = _mm256_setzero_si256();
        while offset < total {
            let curr = _mm256_load_epi32(p_src.offset(offset));
            curr_sum = _mm256_add_epi32(curr_sum, curr);
            offset += 8;
        }
        // this can be reduced with hadd.
        let a0 = _mm256_extract_epi32::<0>(curr_sum);
        let a1 = _mm256_extract_epi32::<1>(curr_sum);
        let a2 = _mm256_extract_epi32::<2>(curr_sum);
        let a3 = _mm256_extract_epi32::<3>(curr_sum);
        let a4 = _mm256_extract_epi32::<4>(curr_sum);
        let a5 = _mm256_extract_epi32::<5>(curr_sum);
        let a6 = _mm256_extract_epi32::<6>(curr_sum);
        let a7 = _mm256_extract_epi32::<7>(curr_sum);
        result = a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7;
    }
    result
}
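(As an aside, the reduction hinted at by the "// this can be reduced with hadd" comment would look roughly like the following C sketch; the intrinsic names are the same ones std::arch exposes, and AVX2 plus SSSE3 are assumed.)
#include <immintrin.h>

/* horizontal sum of the eight i32 lanes of a __m256i */
static int hsum_epi32(__m256i v) {
    __m128i lo  = _mm256_castsi256_si128(v);
    __m128i hi  = _mm256_extracti128_si256(v, 1);
    __m128i sum = _mm_add_epi32(lo, hi);   /* 8 lanes -> 4 lanes */
    sum = _mm_hadd_epi32(sum, sum);        /* 4 lanes -> 2 lanes */
    sum = _mm_hadd_epi32(sum, sum);        /* 2 lanes -> 1 lane  */
    return _mm_cvtsi128_si32(sum);
}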
When I benchmarked the code, the first example got ~23 GB/s (which is close to the theoretical maximum for my RAM speed), while the second got 8 GB/s.
Looking at the assembly with cargo asm, the first example translates into an unrolled, SIMD-optimized loop:
.LBB11_7:
sum += *c;
movdqu xmm2, xmmword, ptr, [rcx, +, 4*rax]
paddd xmm2, xmm0
movdqu xmm0, xmmword, ptr, [rcx, +, 4*rax, +, 16]
paddd xmm0, xmm1
movdqu xmm1, xmmword, ptr, [rcx, +, 4*rax, +, 32]
movdqu xmm3, xmmword, ptr, [rcx, +, 4*rax, +, 48]
movdqu xmm4, xmmword, ptr, [rcx, +, 4*rax, +, 64]
paddd xmm4, xmm1
paddd xmm4, xmm2
movdqu xmm2, xmmword, ptr, [rcx, +, 4*rax, +, 80]
paddd xmm2, xmm3
paddd xmm2, xmm0
movdqu xmm0, xmmword, ptr, [rcx, +, 4*rax, +, 96]
paddd xmm0, xmm4
movdqu xmm1, xmmword, ptr, [rcx, +, 4*rax, +, 112]
paddd xmm1, xmm2
add rax, 32
add r11, -4
jne .LBB11_7
.LBB11_8:
test r10, r10
je .LBB11_11
lea r11, [rcx, +, 4*rax]
add r11, 16
shl r10, 5
xor eax, eax
The second example has no loop unrolling and doesn't even inline the call to _mm256_add_epi32:
...
movaps xmmword, ptr, [rbp, +, 320], xmm7
movaps xmmword, ptr, [rbp, +, 304], xmm6
and rsp, -32
mov r12, rdx
mov rdi, rcx
lea rcx, [rsp, +, 32]
let mut curr_sum = _mm256_setzero_si256();
call core::core_arch::x86::avx::_mm256_setzero_si256
movaps xmm6, xmmword, ptr, [rsp, +, 32]
movaps xmm7, xmmword, ptr, [rsp, +, 48]
while offset < total {
test r12, r12
jle .LBB13_3
xor esi, esi
lea rbx, [rsp, +, 384]
lea r14, [rsp, +, 64]
lea r15, [rsp, +, 96]
.LBB13_2:
let curr = _mm256_load_epi32(p_src.offset(offset));
mov rcx, rbx
mov rdx, rdi
call core::core_arch::x86::avx512f::_mm256_load_epi32
curr_sum = _mm256_add_epi32(curr_sum, curr);
movaps xmmword, ptr, [rsp, +, 112], xmm7
movaps xmmword, ptr, [rsp, +, 96], xmm6
mov rcx, r14
mov rdx, r15
mov r8, rbx
call core::core_arch::x86::avx2::_mm256_add_epi32
movaps xmm6, xmmword, ptr, [rsp, +, 64]
movaps xmm7, xmmword, ptr, [rsp, +, 80]
offset += 8;
add rsi, 8
while offset < total {
add rdi, 32
cmp rsi, r12
...
This is of course a pretty trivial example, and I don't plan to use hand-crafted SIMD for a simple sum. But it still puzzles me why the explicit SIMD is so slow and why using the SIMD intrinsics led to such unoptimized code.
It appears you forgot to tell rustc it was allowed to use AVX2 instructions everywhere, so it couldn't inline those functions. Instead, you get a total disaster where only the wrapper functions are compiled as AVX2-using functions, or something like that.
Works fine for me with -O -C target-cpu=skylake-avx512 (https://godbolt.org/z/csY5or43T) so it can inline even the AVX512VL load you used, _mm256_load_epi32 (see footnote 1), and then optimize it into a memory-source operand for vpaddd ymm0, ymm0, ymmword ptr [rdi + 4*rax] (AVX2) inside a tight loop.
In GCC / clang, you get an error like "inlining failed in call to always_inline foobar" in this case, instead of working-but-slow asm. (See this for details.) This is something Rust should probably sort out before this is ready for prime time: either be like MSVC and actually inline the instruction into a function using the intrinsic, or refuse to compile like GCC/clang.
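For reference, a minimal C reproduction of that GCC/clang behaviour (my sketch, not from the answer): an AVX2 intrinsic used in a translation unit built without -mavx2 (and without a target attribute) fails to compile instead of producing slow code.
#include <immintrin.h>

__m256i add8(__m256i a, __m256i b) {
    /* without -mavx2, gcc reports something like:
       "inlining failed in call to 'always_inline' '_mm256_add_epi32':
        target specific option mismatch" */
    return _mm256_add_epi32(a, b);
}
/* gcc -O2 -c add8.c          -> fails as above
   gcc -O2 -mavx2 -c add8.c   -> compiles fine */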
Footnote 1:
See How to emulate _mm256_loadu_epi32 with gcc or clang? if you didn't mean to use AVX512.
With -O -C target-cpu=skylake (just AVX2), it inlines everything else, including vpaddd ymm, but still calls out to a function that copies 32 bytes from memory to memory with AVX vmovaps. It requires AVX512VL to inline the intrinsic, but later in the optimization process it realizes that with no masking, it's just a 256-bit load it should do without a bloated AVX-512 instruction. It's kinda dumb that Intel even provided a no-masking version of _mm256_mask[z]_loadu_epi32 that requires AVX-512. Or dumb that gcc/clang/rustc consider it an AVX512 intrinsic.
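For completeness, a hedged C sketch of the AVX2-only way to express that unmasked load (the linked question covers the details):
#include <immintrin.h>

__m256i load8_i32(const int *p) {
    /* plain AVX unaligned 256-bit load; no AVX-512VL required */
    return _mm256_loadu_si256((const __m256i *)p);
}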

(VC++) Runtime Check for Uninitialized Variables: How is the test Implemented?

I'm trying to understand what this test does exactly. This toy code
int _tmain(int argc, _TCHAR* argv[])
{
    int i;
    printf("%d", i);
    return 0;
}
Compiles into this:
int _tmain(int argc, _TCHAR* argv[])
{
012C2DF0 push ebp
012C2DF1 mov ebp,esp
012C2DF3 sub esp,0D8h
012C2DF9 push ebx
012C2DFA push esi
012C2DFB push edi
012C2DFC lea edi,[ebp-0D8h]
012C2E02 mov ecx,36h
012C2E07 mov eax,0CCCCCCCCh
012C2E0C rep stos dword ptr es:[edi]
012C2E0E mov byte ptr [ebp-0D1h],0
int i;
printf("%d", i);
012C2E15 cmp byte ptr [ebp-0D1h],0
012C2E1C jne wmain+3Bh (012C2E2Bh)
012C2E1E push 12C2E5Ch
012C2E23 call __RTC_UninitUse (012C10B9h)
012C2E28 add esp,4
012C2E2B mov esi,esp
012C2E2D mov eax,dword ptr [i]
012C2E30 push eax
012C2E31 push 12C5858h
012C2E36 call dword ptr ds:[12C9114h]
012C2E3C add esp,8
012C2E3F cmp esi,esp
012C2E41 call __RTC_CheckEsp (012C1140h)
return 0;
012C2E46 xor eax,eax
}
012C2E48 pop edi
012C2E49 pop esi
012C2E4A pop ebx
012C2E4B add esp,0D8h
012C2E51 cmp ebp,esp
012C2E53 call __RTC_CheckEsp (012C1140h)
012C2E58 mov esp,ebp
012C2E5A pop ebp
012C2E5B ret
The 5 lines emphasized are the only ones removed by properly initializing the variable i. The lines 'push 12C2E5Ch' / 'call __RTC_UninitUse' call the function that displays the error box, with a pointer to a string containing the variable name ("i") as an argument.
What I can't understand are the 3 lines that perform the actual test:
012C2E0E mov byte ptr [ebp-0D1h],0
012C2E15 cmp byte ptr [ebp-0D1h],0
012C2E1C jne wmain+3Bh (012C2E2Bh)
It would seem the compiler is probing the stack area of i (setting a byte to zero and immediately testing whether it's zero), just to be sure it isn't initialized somewhere it couldn't see during the build. However, the probed address, ebp-0D1h, has little to do with the actual address of i.
Even worse, it seems that if there were such an external (other thread?) initialization and it happened to write zero to the probed address, this test would still complain about the variable being uninitialized.
What's going on? Maybe the probe is meant for something entirely different, say to test if a certain byte is writable?
[ebp-0D1h] is a temporary variable used by the compiler to track the "initialized" status of variables. If we modify the source a bit, this becomes clearer:
int _tmain(int argc, _TCHAR* argv[])
{
    int i, j;
    printf("%d %d", i, j);
    i = 1;
    printf("%d %d", i, j);
    j = 2;
    return 0;
}
Produces the following (irrelevant parts skipped):
mov DWORD PTR [ebp-12], -858993460 ; ccccccccH
mov DWORD PTR [ebp-8], -858993460 ; ccccccccH
mov DWORD PTR [ebp-4], -858993460 ; ccccccccH
mov BYTE PTR $T4694[ebp], 0
mov BYTE PTR $T4693[ebp], 0
In the prologue, the variables are filled with 0xCC, and two tracking variables (one for i and one for j) are set to 0.
; 7 : printf("%d %d", i, j);
cmp BYTE PTR $T4693[ebp], 0
jne SHORT $LN3#main
push OFFSET $LN4#main
call __RTC_UninitUse
add esp, 4
$LN3#main:
cmp BYTE PTR $T4694[ebp], 0
jne SHORT $LN5#main
push OFFSET $LN6#main
call __RTC_UninitUse
add esp, 4
$LN5#main:
mov eax, DWORD PTR _j$[ebp]
push eax
mov ecx, DWORD PTR _i$[ebp]
push ecx
push OFFSET $SG4678
call _printf
add esp, 12 ; 0000000cH
This corresponds roughly to:
if ( $T4693 == 0 )
    _RTC_UninitUse("j");
if ( $T4694 == 0 )
    _RTC_UninitUse("i");
printf("%d %d", i, j);
Next part:
; 8 : i = 1;
mov BYTE PTR $T4694[ebp], 1
mov DWORD PTR _i$[ebp], 1
So, once i is initialized, the tracking variable is set to 1.
; 10 : j = 2;
mov BYTE PTR $T4693[ebp], 1
mov DWORD PTR _j$[ebp], 2
Here, the same is happening for j.
Here is my guess: the compiler probably allocates flags in memory that track the initialization status of variables. In your case, for variable i, this is a single byte at [ebp-0D1h]. Zeroing this byte means i is not initialized; I assume that if you initialize i, this byte will be set to non-zero. Try something run-time like this: if (argc > 1) i = 1; that should generate code instead of omitting the whole check. You can also add another variable and see whether you get two different flags.
The zeroing of the flag and the testing just happen to be consecutive in this case, but that might not always be the case.
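A hypothetical test program along those lines (a sketch, not from the original post): the conditional assignment keeps the compiler from proving that i is initialized, so with /RTCu the check and its hidden tracking byte should stay, and j should get a second flag.
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i, j;
    if (argc > 1)
        i = 1;              /* run-time-only initialization of i */
    printf("%d %d", i, j);  /* both reads should trigger /RTCu checks */
    return 0;
}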

gcc x64 stack manipulation

I am trying to understand the way gcc x64 organizes the stack; a small program generates this asm:
(gdb) disassemble *main
Dump of assembler code for function main:
0x0000000000400534 <main+0>: push rbp
0x0000000000400535 <main+1>: mov rbp,rsp
0x0000000000400538 <main+4>: sub rsp,0x30
0x000000000040053c <main+8>: mov DWORD PTR [rbp-0x14],edi
0x000000000040053f <main+11>: mov QWORD PTR [rbp-0x20],rsi
0x0000000000400543 <main+15>: mov DWORD PTR [rsp],0x7
0x000000000040054a <main+22>: mov r9d,0x6
0x0000000000400550 <main+28>: mov r8d,0x5
0x0000000000400556 <main+34>: mov ecx,0x4
0x000000000040055b <main+39>: mov edx,0x3
0x0000000000400560 <main+44>: mov esi,0x2
0x0000000000400565 <main+49>: mov edi,0x1
0x000000000040056a <main+54>: call 0x4004c7 <addAll>
0x000000000040056f <main+59>: mov DWORD PTR [rbp-0x4],eax
0x0000000000400572 <main+62>: mov esi,DWORD PTR [rbp-0x4]
0x0000000000400575 <main+65>: mov edi,0x400688
0x000000000040057a <main+70>: mov eax,0x0
0x000000000040057f <main+75>: call 0x400398 <printf@plt>
0x0000000000400584 <main+80>: mov eax,0x0
0x0000000000400589 <main+85>: leave
0x000000000040058a <main+86>: ret
1. Why does it reserve as much as 0x30 bytes just to save edi and rsi?
2. I don't see anywhere that it restores the values of edi and rsi, as required by the ABI.
3. edi and rsi are saved at offsets that differ by 0x20 - 0x14 = 0xC, which is not a contiguous region; does that make sense?
The source code follows:
#include <stdio.h>

int mix(int a,int b,int c,int d,int e,int f, int g){
    return a | b | c | d | e | f | g;
}
int addAll(int a,int b,int c,int d,int e,int f, int g){
    return a+b+c+d+e+f+g+mix(a,b,c,d,e,f,g);
}
int main(int argc,char **argv){
    int total;
    total = addAll(1,2,3,4,5,6,7);
    printf("result is %d\n",total);
    return 0;
}
Edit
It seems that the stack holds esi, rdi, the 7th parameter of the call to addAll, and total; that should take 4x8 = 32 (0x20) bytes, but it is rounded up to 0x30 for some reason.
I don't know your original code, but locals are also stored on the stack, and when you have local variables that space is "allocated" as well. For alignment reasons it may also have been rounded up to the next multiple of 16. I would guess you have a local for passing the result from your addAll to printf, and that is stored at rbp-0x4.
I just had a look at your linked ABI: where does it say that the callee has to restore rdi and rsi? It already says, in a footnote on page 15:
Note that in contrast to the Intel386 ABI, %rdi, and %rsi belong to the called function, not
the caller.
Afaik they are used for passing the first arguments to the callee.
0xC is 12. This also comes down to alignment: as you can see, it only needs to store edi, not rdi, so I assume that slot is aligned on a 4-byte boundary, while rsi is stored as a full 64-bit value and is aligned on an 8-byte boundary.
2: The ABI explicitly says that rdi/rsi need NOT be saved by the called function; see page 15 and footnote 5.
1 and 3: unsure; probably stack alignment issues.
