I have a .NET application that runs on Linux using Mono. I want to avoid users having to install Mono, so I am using mkbundle. I am running mkbundle on an x86 machine, expecting the resulting binary to also run on x64 machines:
mkbundle MyApp.exe *.dll -o MyApp
I can then run the resulting application on the build machine with `./MyApp`.
However, when I copy it to an x64 machine (and make it executable) it won't run, just outputting:
bash: ./MyApp: No such file or directory
If I try ldd I get:
not a dynamic executable
Shouldn't binaries built for x86 run on x64 systems?
I'm rather new to Linux, and it seems x86/x64 isn't quite as straightforward as it is on Windows, as many x64 Linux distributions don't ship with the capability to run 32-bit binaries.
After installing 32-bit libraries on the x64 machine, the x86 code will execute as expected (e.g. on Ubuntu 7.04, `apt-get install ia32-libs`).
While this works, as I need to target a number of distributions I've decided to just create separate builds for x86 and x64 instead.
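If you hit this on a newer Ubuntu release, where ia32-libs has been replaced by multiarch, a rough equivalent is below; the file check confirms the mismatch first, and libc6:i386 is only a starting point, since your binary may need further :i386 libraries:
file ./MyApp
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386
file should report an ELF 32-bit Intel 80386 executable. The confusing "No such file or directory" message from bash is the kernel failing to find the 32-bit loader that the binary asks for, not the binary itself being absent.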
Because MSYS2 sucks, as mentioned above, I need to change its default server mirrors to point to the Arch Linux Mingw-w64 AUR ones, and make those the default.
So when I issue pacman -S mingw-w64-*, it will download the package from the Arch Linux repository and not from MSYS2.
I need to use MSYS2 only as a shell.
MSYS2's MinGW 32/64 builds use DWARF instead of SJLJ as the exception model, and this is a very bad choice, because they don't catch exceptions from DLLs built with other tool-chains, and the application will crash (for example with Firebird 2).
Arch Linux is smart, and has chosen SJLJ as the exception model for its MinGW 32/64 builds.
This seems very unlikely to work. pacman in MSYS2 downloads Windows PE binaries for your MSYS2 environment; pacman on Arch Linux downloads Linux ELF binaries. You won't be able to run the latter on your Windows machine.
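You can see the difference with the file utility, which is available in both environments; the paths below are only examples:
file /mingw64/bin/gcc.exe
file /usr/bin/gcc
In an MSYS2 shell the first reports something like "PE32+ executable ... for MS Windows", while on Arch Linux the second reports something like "ELF 64-bit LSB executable".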
You may be able to get what you want if you use Windows Subsystem for Linux (WSL).
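On recent Windows 10/11 builds, a minimal WSL setup looks roughly like this (the distribution name is just an example):
wsl --install -d Ubuntu
Inside the WSL shell, the distribution's own package manager then installs Linux ELF binaries that can actually run there.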
I have a VM running CentOS Linux on my Windows 10 machine. Yesterday I built GCC from source, and saw an option where you could build it to cross-compile. My question is this: is it possible (and if so, how is it done) to build GCC so that it is capable of producing Windows executables on Linux (which I can then run on my computer)? I would like to avoid using MinGW if at all possible so that I won't have to use its special libraries.
I have an ARM Cortex-A8 development board from Freescale (the i.MX53) running Ubuntu Linux. It boots up just fine and I can access the system with mouse/keyboard/terminal.
To get started I would like to make an application that runs on the board inside the host OS, just as you would run an application on your PC.
My problem is compiling my test program: using toolchains like YAGARTO, which is based on gcc, I end up in trouble with the linking because I have not defined any startup script.
I find lots of information on building "bare metal" configurations (including compiling the kernel and making load and link scripts), but nothing useful for making an application that runs on a host OS.
My development environment is running on Windows 7. I also have the option to run on x86 Linux, but I doubt this would help me make ARM applications.
For ARM-Linux application development the preferable choice is a Linux host (x86) machine with an ARM toolchain installed. On an Ubuntu desktop machine you can use the following command to install the toolchain:
apt-get install gcc-arm-linux-gnueabi
After the toolchain is installed you can use the following command for cross-compilation (note that the package installs the compiler as arm-linux-gnueabi-gcc):
arm-linux-gnueabi-gcc -o hello hello.c
Using this toolchain you can cross-compile your C program against the standard C library without needing any startup code. Applications are cross-compiled on your host Linux (x86) platform and run on the target Linux (ARM) platform.
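Putting the pieces together, here is a minimal end-to-end sketch; the board's address and user are placeholders, and it assumes the board's glibc is compatible with the toolchain's:
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello from the board\n"); return 0; }
EOF
arm-linux-gnueabi-gcc -o hello hello.c
file hello
scp hello user@<board-ip>:
ssh user@<board-ip> ./hello
file should report an ELF 32-bit ARM executable before you copy it over.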
A Windows version of the ARM-Linux toolchain is also available.
The Linaro Developers Wiki (Linaro is an open organization focused on improving Linux on ARM) will be a good reference for your work.
When compiling code with VC++, MSDN gives you the choice between using the x86_amd64 toolset or the amd64 toolset (when calling vcvarsall.bat).
How do I choose between those two when compiling x64 code? Will the amd64 option churn out more efficient x64 machine code than the cross-compiler?
It has nothing to do with efficiency. The native and cross-compiler both generate the same machine code. You will, however, gain some benefits by running a native 64-bit compiler process on a 64-bit workstation (larger registers, larger memory space, etc.).
The native compiler will only run on a 64-bit copy of Windows, so if your workstation is 32-bit this compiler won't even run.
The cross-compiler is meant to run on x86 machines, and although it will also run on a 64-bit copy of Windows via WOW64, there is no reason to do that.
The page you link says it quite well:
x64 on x86 (x64 cross-compiler)
Allows you to create output files for x64. This version of cl.exe runs as a 32-bit process, native on an x86 machine and under WOW64 on a 64-bit Windows operating system.
x64 on x64
Allows you to create output files for x64. This version of cl.exe runs as a native process on an x64 machine.
Thanks to Brian R. Bondy for the quote formatting
From what you linked:
x64 on x86 (x64 cross-compiler)
Allows you to create output files for x64. This version of cl.exe runs as a 32-bit process, native on an x86 machine and under WOW64 on a 64-bit Windows operating system.
x64 on x64
Allows you to create output files for x64. This version of cl.exe runs as a native process on an x64 machine.
Paraphrased:
If you use x86_amd64, then you are typically developing on an x86 machine and you want to create x64 files that run natively on x64. You can also use this option on an x64 machine, but then the compiler runs under WOW64 emulation.
If you use amd64, then you are developing on an x64 machine and you want to create x64 files that run natively on x64. The compiler runs natively on x64, which makes it the more efficient way to build x64 programs.
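Concretely, the choice is just the argument you pass to vcvarsall.bat. A sketch, with the Visual Studio install path as a placeholder: the first call sets up the 32-bit cross-compiler, the second the native 64-bit compiler.
call "C:\path\to\VC\vcvarsall.bat" x86_amd64
call "C:\path\to\VC\vcvarsall.bat" amd64
cl main.cpp
Either way the resulting main.exe is an x64 binary; only the compiler process that produced it differs.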
You may wonder why you would ever develop an x64 program on an x86 computer: since you can't run it there, you can't debug it there either. Well, it's still useful, for example, if you have an x86 build server that needs to generate both x86 and x64 outputs.
How is it possible for the compiler to run under x64 if it is an x86-based program (x86_amd64)? For the same reason you can run any x86 program on your x64 machine: WOW64 emulation.
What is WOW64 emulation:
WOW64 emulation happens when you run an x86 program on an x64 (or IA64) computer. WOW64 stands for Windows 32-bit on Windows 64-bit. It is an emulation layer on top of x64 machines which allows you to execute x86 programs.
Your file system operations will be redirected to WOW64 folders, and your registry accesses will be redirected to a subnode as well. For example, when you ask for the Program Files folder you will get c:\program files (x86)\ under WOW64, but c:\program files\ when running natively on x64.
Another example: in the registry, if you try to write to HKLM\Software\Something, it will really be redirected to HKLM\SOFTWARE\Wow6432Node\Something without your x86 program's knowledge.
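You can watch the redirection happen from a 32-bit cmd.exe on 64-bit Windows; the output values here are illustrative:
echo %ProgramFiles%
reg query "HKLM\Software\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir
Under WOW64 the echo prints C:\Program Files (x86), and the reg query is transparently answered from the Wow6432Node copy of the key.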
Running a native x64 build will be more efficient than running through WOW64 emulation. Why? Because you don't have that extra emulation layer transforming your 32-bit calls into 64-bit ones.
By the way, if you are running the x64 version of Windows you can see which processes are running under WOW64 because they have *32 appended to their name in the Task Manager process list.
My .deb package, built on 32-bit Ubuntu and containing executables compiled with gcc, won't install on the 64-bit version of the OS (the error message says 'Wrong architecture i386'). This is confusing to me because I thought that in general 32-bit software worked on 64-bit hardware, but not vice-versa.
Will it be possible for me to produce a .deb file that I can install on a 64-bit OS, using my 32-bit machine? Is it just a matter of using the appropriate compiler flags to produce the executables (and if so what are they), or is the .deb file itself somehow specific to one processor architecture?
The deb installer is probably refusing to install your package because it was (correctly) labeled with a conflicting Architecture: field. i386 code can be executed on an amd64 machine, but it requires that all the appropriate dependencies (32-bit libraries, etc.) be present. It's better to build separate packages for each architecture.
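You can check the field dpkg is rejecting and compare it against what the target machine reports; the package file name is a placeholder:
dpkg --info mypackage_1.0_i386.deb | grep Architecture
dpkg --print-architecture
The first prints the Architecture field baked into the package; the second prints the architecture of the machine you are installing on.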
Yes, you can build for 64-bit on your 32-bit machine. It's called cross-compiling, and it requires that you create a build environment for that purpose. To get started, you might want to look up the dpkg-cross and apt-cross tools.
Alternatively, you can just install a virtual machine running a 64-bit OS, and build for your secondary architecture there.
The architecture is just a field in the Debian package's control file. By default it is taken from uname on the build machine. You can override it, but there is an easier way.
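For reference, it is the Architecture field in the package's control file; a minimal sketch in which every value except the Architecture line is a placeholder:
Package: mypackage
Version: 1.0-1
Architecture: amd64
Maintainer: Your Name <you@example.com>
Description: example package showing the Architecture field
Setting it to all marks the package as architecture-independent, which is only correct if it contains no compiled binaries.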
In general, most 32-bit programs will run fine on 64-bit systems. However, unless you have a very old PC, it is also very easy to install a minimal 64-bit Debian in a VirtualBox virtual machine. You probably only need the base system + build essentials + dev libraries, which will not take a lot of disk space. If you can spare 2 GB of disk space, just install a desktop Debian.
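Inside such a VM the build prerequisites amount to roughly the following, where libfoo-dev stands in for whatever your code actually links against:
sudo apt-get install build-essential
sudo apt-get install libfoo-dev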
There are more options for cross-compilation, with various degrees of automation.
I use the VirtualBox method regularly. It is easy and fast.
If you run 64-bit Linux, making a 32-bit environment is as easy as debootstrap + linux32 + chroot.
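A rough sketch of that recipe, using Debian's debootstrap; the suite name and target directory are examples:
sudo debootstrap --arch=i386 stable /srv/chroot32 http://deb.debian.org/debian
sudo linux32 chroot /srv/chroot32 /bin/bash
Inside the chroot, uname -m reports i686 thanks to linux32, so builds behave as on a native 32-bit system.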