Linux to Windows: x86_64-w64-mingw32-gcc will not compile a 64-bit executable, how can I resolve this? [duplicate] - linux

This question already has answers here:
What is the bit size of long on 64-bit Windows?
(8 answers)
Determining 32 vs 64 bit in C++
(16 answers)
Closed 3 years ago.
I am using Debian 9 (stable) and recently installed mingw-w64 so that I can do cross-platform C/C++ development, since I don't have consistent access to a Windows machine. When I first installed it a couple of days ago to test 32- and 64-bit builds, it seemed to work, but now the 64-bit command (x86_64-w64-mingw32-gcc) seems to produce only 32-bit executables, even though it should produce 64-bit ones by default. I even passed the options -m64 and -march=x86_64 explicitly in an attempt to force it, but no matter what I try it only does a 32-bit build.
For reference, all the code consists of is three files: hello.c, hellofunc.c, and hellofunc.h. The file hello.c simply calls a function defined in hellofunc.c called printHelloWorld:
#include <stdio.h>

void printHelloWorld(void){
    long z;
    printf("Hello World!\n");
    switch (sizeof(z)) {
    case 4:
        printf("This program is 32-bit\n");
        break;
    case 8:
        printf("This program is 64-bit\n");
        break;
    default:
        printf("This program is of unknown bit size\n");
    }
    /* sizeof yields a size_t, so print it with %zu rather than %i */
    printf("Long int size is %zu bytes long!\n", sizeof(z));
}
The expected output is "This program is 32-bit" or "This program is 64-bit", plus an explicit statement of the byte length of a long variable. The problem I am having is that even though the 64-bit mingw-w64 gcc command is being used, when I test the executable on Windows it displays the 32-bit message and a byte length of 4 instead of the expected 8.
Again, when I initially tested it right after download it worked as expected, and I have no idea what could have changed its default behavior over the last couple of days. I didn't install anything new; all I was doing was working with makefiles (this directory does not contain one, though). Also, for the record, the gcc native to my Debian system handles the -m32 and -m64 options perfectly well, so it's not as if my system isn't capable of it. If anything, I would expect the behavior to be backwards, since Linux seems to require special setup to build or run 32-bit programs on a 64-bit machine.
For a last attempt at understanding the problem myself, I ran the command x86_64-w64-mingw32-gcc -v -o hello.exe hello.c hellofunc.c -I . to get the compiler's full command sequence, and I ran gcc with -v on it as well. Here is the full output I put on pastebin just for reference. I don't fully understand the output, but as far as I can tell all the configuration and options should be producing 64-bit code. (On Linux, running file hello.exe will also report whether the result is PE32, i.e. a 32-bit Windows executable, or PE32+, i.e. 64-bit.) I'm still pretty new to Linux and don't entirely understand how to dissect things when something goes wrong, especially when something runs counter to its explicitly documented behavior lol.
If somebody could recommend possible fixes for this or alternatives I probably didn't think of, I would really appreciate it. I tried Googling for answers beforehand, but x86_64-w64-mingw32-gcc seems to only ever come up in conversation to tell newbies that it's the obvious default way to compile for 64-bit architecture.
Thanks for reading and any help you can give, cheers~

Related

Why does running objcopy --strip-all on a Rust program halve its size?

When I execute objcopy --strip-all on any Rust program it halves its size. For example, if I compile a normal hello-world application with cargo build --release, it ends up as a 3 MB executable (on Linux). When I then run objcopy --strip-all on the executable, I end up with a 330 kB executable. Why does this happen?
I also tested this on Windows with x86_64-pc-windows-gnu as my toolchain, and it also lowered the size of the executable, from 4 MB to 1 MB.
On Windows my toolchain is nightly 2021-07-22. On Linux my toolchain is nightly 2021-07-05.
When you generate a binary, the Rust compiler generates a lot of debugging information, as well as other information such as symbol names for each symbol. Most other compilers do this as well, sometimes with an option (e.g., -g). Having this data is very helpful for debugging, even if you're compiling in release mode.
What you're doing with --strip-all is removing all of this extra data. In Rust, most of the data you're removing is just debugging information, and in a typical Linux distro, this data is stripped out of the binary and stored in special debug packages so it can be used if needed, but otherwise not downloaded.
This data isn't absolutely needed to run the program, so you may decide to strip it (which is usually done with the strip binary). If size isn't a concern for you, keeping it to aid debugging in case of a problem may be more helpful.
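As a usage note: with Cargo 1.59 or newer you can request the same stripping at build time, instead of running strip afterwards, via the release profile. A sketch of the relevant Cargo.toml fragment:

```toml
[profile.release]
# Remove symbols and debug info from the final binary at link time,
# roughly equivalent to running `strip` on the produced executable.
strip = "symbols"
```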

Unformatted direct access file portability [duplicate]

This question already has an answer here:
Reading writing fortran direct access unformatted files with different compilers
(1 answer)
Closed 6 years ago.
I have a Fortran code which writes an unformatted direct-access file. The problem is that the size and the contents of the file change when I switch to a different platform:
The first platform is Windows (a 32-bit build of the program using the Intel compiler, a version from around 2009).
The second platform is Linux (a 64-bit build of the program with the gfortran compiler, v4.9.0).
Unfortunately, the file produced on Linux cannot be read on Windows. The Linux file is 5-6 times smaller, yet the total number of records written seems to be the same. I opened both files with a hex editor, and the main difference is that a lot of zeros exist in the Windows version of the file.
Is there any way to produce exactly the same file on Linux?
If it helps, you can find both files here: https://www.dropbox.com/sh/erjlf5sps40in0e/AAC4XEi-p4nnTNzhyai_ZCZVa?dl=0
I open the file with the command: OPEN(IAST,FILE=ASTFILR,ACCESS='DIRECT',FORM='UNFORMATTED',RECL=80)
I write with the command:
WRITE(IAST,REC=IRC) (SNGL(PHI(I)-REF), I=IBR,IER)
I read with the command: READ(IAST,REC=IRC,ERR=999) (PHIS(I), I=1,ISTEP)
where PHIS is a REAL*4 array
The issue is that by default Intel Fortran interprets RECL= in units of 4-byte words, whereas gfortran uses bytes. There is an Intel Fortran compiler option you can use to make it use byte units. On Linux that option is
-assume byterecl
For Windows I'm not sure what the exact syntax is, maybe something like
/assume:byterecl

MATLAB 32-bit executable file crashing with a function of Optimization Toolbox

I am working on a MATLAB project which we want to export as a .exe. The resulting file must then be able to run on both 32- and 64-bit Windows 7 PCs.
After a little research we realized this problem was easier to approach by developing on a 32-bit version of MATLAB and then building a 32-bit .exe file.
Up to this point, all our development had been carried out in the 64-bit version of MATLAB. With it we had been able to successfully generate and run 64-bit .exe versions.
Now that we have switched to 32-bit MATLAB, however, when we generate the .exe something goes wrong and the following error is shown:
Undefined function ‘fmincon’ for input arguments of type ‘function handle’.
This is the line of code in which fmincon first appears:
Options = optimoptions('fmincon', 'DiffMinChange', 10);
A few remarks:
The same scripts which worked on 64-bit MATLAB also work on 32-bit MATLAB. Within the MATLAB environment, everything runs smoothly.
The scripts (with the same exact code) can still be made into an executable on 64-bit MATLAB without any problem.
In both cases, we properly installed the runtime required for the MATLAB executable to run on the PC.
We have tried to run the 32-bit .exe on both 64-bit and 32-bit machines, with the same result.
Is it possible that executables deployed from the 32-bit version of MATLAB have problems dealing with functions from the Optimization Toolbox (as fmincon is)?
What else could be the cause of this problem? Does anyone have an idea how to fix it?
The problem was only solved thanks to MATLAB's support: it is caused by a bug in release R2014a, explained and patched in this Mathworks link.

Weird lock up in the kernel

Basically, I'm messing around with loading and linking object code into the Linux kernel from Mach object files, and I've noticed something weird when I do a printk from inside that object. If I call printk with four or more arguments (e.g. printk("%d,%d,%d \n", 1, 1, 1)), the system locks up, though only at some later point: the call never returns from the system call, and the machine hangs. The actual print works and shows the expected values in all cases.
Now, the weird thing is that this only happens when I build it using Clang+LLVM. Here is the culprit code:
On the other hand, when this is built using LLVM GCC, it works just fine:
This also works when built with GNU GCC:
Can anyone suggest a reason why the Clang version makes the system lock up? There is evidently something wrong with the first snippet of code that isn't present in the others, but I don't really know what.
I do not know how you generated the object files, but it seems that you're using the Darwin ABI, which is basically a heavily modified APCS (the "old" ARM ABI). For Linux et al. you instead need to use the EABI (aka AAPCS), which differs from APCS in many cases; with Clang you would normally select it explicitly, with something like --target=arm-linux-gnueabi.
For example, R9 is call-saved in the EABI but call-clobbered on Darwin, there are differences in passing 64-bit values, etc. Note that your Clang example clobbers R9, while the llvm-gcc one does not :)

glibc fnmatch vulnerability: how to expose the vulnerability?

I have to validate a vulnerability on one of our 64-bit systems, which is running glibc 2.9.
http://scarybeastsecurity.blogspot.in/2011/02/i-got-accidental-code-execution-via.html
The above link gives a script which, when passed a magic number, apparently leads to arbitrary code execution. But when I tried it on my system, nothing seemed to happen.
Am I doing something wrong? Does the system crash if the vulnerability exists? How do I detect whether it's accidental code execution?
If you're running on a 64-bit machine, then the original circumstances of the bug don't apply. As you can see in Chris' blog, he's using a 32-bit Ubuntu 9.04 system. The exploit relies on causing the stack pointer to wrap around the 32-bit address space, leading to stack corruption.
I gave it a quick try on a 64-bit system with glibc 2.5, but saw malloc() failures instead of crashes.
$ ./a.out 3000000000
a.out: malloc() failed.
You asked how to identify accidental code execution; with the toy program here, which doesn't carry an exploit or payload, we'd expect to see a SIGSEGV, SIGILL, or SIGBUS as the CPU tries to "execute" junk parts of the stack, showing up as the corresponding error message from the shell.
If you were to run into the problem on a 64-bit machine, you'd have to mimic the original code but provide a number that wraps the stack on a 64-bit machine. The original number provided was:
1073741796
$ bc
z=1073741796
z+28
1073741824
(z+28)*4
4294967296
2^32
4294967296
quit
$
So, one way of describing the input number is (ULONG_MAX - 112) / 4.
The analogue number for a 64-bit machine is 4611686018427387876:
$ bc
x=2^64
x
18446744073709551616
y=x/4
y
4611686018427387904
y-28
4611686018427387876
quit
$
However, to stand a chance of this working, you'd have to modify the reported code to use strtoull() or something similar; atoi() is normally limited to 32-bit integers and would be no use on the 64-bit numbers above. The code also contains:
num_as = atoi(argv[1]);
if (num_as < 5) {
    errx(1, "Need 5.");
}
p = malloc(num_as);
where num_as is a size_t and p is a char *. So you'd have to be able to malloc() a gargantuan amount of space (almost 4 EiB). Most people don't have enough virtual memory on their machines, even with disk space for backing, to do that. Now maybe, just maybe, Linux would allow you to over-commit (and let the OOM killer swoop in later), but the malloc() would more likely fail.
There were other factors that were relevant and that affect 32-bit systems in a way they cannot (yet) affect 64-bit systems.
If you're going to stand a chance of reproducing it on a 64-bit machine, you probably have to do a 32-bit compilation. Then, if the wind is behind you and you have appropriately old versions of the relevant software, perhaps you can reproduce it.
