My question is quite specific. I have to write a simple program which works with files and should run on 32-bit ARM (Android). The crucial point is that it MUST use the _llseek syscall. This syscall exists in a 32-bit Linux kernel but is absent in a 64-bit one.
Technically I can write and run this program on the ARM device (a phone in my case), but I normally test and debug on a 64-bit box. There my program fails to compile and run due to the lack of this syscall.
What could be a workaround? I need some friendly test/debug environment. Is installing a 32-bit Linux in a virtual machine the only option?
Thanks in advance, Alex
The -m32 flag worked and indeed the 64-bit kernel accepted the 32-bit binary. Silly of me not to think of this simple option myself, thanks.
To put it differently: I had not realized that the 64-bit kernel implements all the 32-bit syscalls in parallel, even those absent from the native 64-bit syscall table.
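For reference, here is a minimal sketch of the kind of program in question, calling _llseek through syscall(2). It assumes glibc exposes SYS__llseek in <sys/syscall.h> for a 32-bit build (it does not exist in a 64-bit build), and the file path is just an example; compile with gcc -m32 on the x86-64 box or with the 32-bit ARM toolchain for the phone:

/* llseek_test.c - minimal sketch: seek with the _llseek syscall.
 * SYS__llseek is only defined for 32-bit targets, so build with gcc -m32
 * on x86-64 (or with the 32-bit ARM toolchain for the device). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
    if (fd < 0) { perror("open"); return 1; }

    long long result = 0;                       /* the kernel writes a 64-bit offset here */
    /* _llseek(fd, offset_high, offset_low, &result, whence) */
    if (syscall(SYS__llseek, fd, 0UL, 16UL, &result, SEEK_SET) < 0) {
        perror("_llseek");
        return 1;
    }

    printf("new offset: %lld\n", result);
    close(fd);
    return 0;
}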
You can install an entire 32-bit Linux (x86/32, i.e. ia32) system in a partition, or simply in a subdirectory, and run it under chroot on a 64-bit x86-64 Linux kernel (64-bit kernels for x86-64 are generally configured to run 32-bit x86 code, that is, to execve(2) a 32-bit x86 ELF executable). The debootstrap command (on Debian and related distributions) documents such a use; see also schroot (you might not need to install every package in the 32-bit chroot, only the relevant ones).
You can also use gcc -m32 to compile a 32-bit x86 ELF binary on a 64-bit x86-64 machine. You may need additional packages (often with multilib, ia32, or x86 in their name).
Of course you need to recompile the application, since ARM is not the same as 32-bit x86.
However, you cannot run a 32-bit ARM application (only a 32-bit x86 one) on 64-bit x86-64 Linux. To run an ARM application on x86 you need an ARM emulator (e.g. QEMU).
So first debug your program on 32-bit x86, using a 32-bit chroot-ed environment on your 64-bit x86-64 Linux system.
Going through the ARMv8 manual, I have the following questions to help understand the big picture.
Can a legacy 32-bit application (ARMv7 or earlier) run as-is on an ARMv8 OS?
If legacy applications need to be rebuilt for ARMv8, and assuming I rebuild the application as 32-bit (AArch32), does this require underlying 32-bit OS support? (It would be interesting to know how the addressing mechanism works here.)
Please provide references wherever possible.
PS: I am targeting Linux with AArch64 support (kernel 3.7 and later).
An AArch64 platform may run 32-bit ARM code, but this compatibility is optional.
To run AArch32 binaries you also need 32-bit versions of all the libraries the application uses, the same as with i686 binaries on x86-64 systems.
There is also the Linux arm64 CONFIG_COMPAT option at https://github.com/torvalds/linux/blob/v4.17/arch/arm64/Kconfig#L1274, which says:
This option enables support for a 32-bit EL0 running under a 64-bit
kernel at EL1. AArch32-specific components such as system calls,
the user helper functions, VFP support and the ptrace interface are
handled appropriately by the kernel.
which will likely be required. An Arm employee mentioned in this thread, https://community.arm.com/processors/f/discussions/5535/running-armv7-binaries-on-armv8, that userland instructions are basically the same, with some exceptions:
For something like a Linux application, then yes. ARMv8-A includes AArch32, which provides backwards compatibility with ARMv7-A. There are some limitations, such as the SWP instruction no longer being supported. But these are types of things that applications are unlikely to be using (and were deprecated in ARMv7).
For baremetal, you have all the usual problems of using a binary from one platform on another. So you are going to need to do some degree of porting in most cases.
I then tried it for myself with this QEMU full-system setup, but my attempt failed: I compiled a C hello world with the ARMv7 compiler as:
arm-linux-gcc -static hello_world.c
and put the built file onto the AArch64 target, but when I tried to run it, it failed with:
a.out: line 1: syntax error: unexpected word (expecting ")")
even though /proc/config.gz says that CONFIG_COMPAT is set.
It seems that the Linux kernel is not identifying it as an ELF file but is instead falling back to /bin/sh; I get the same error if I do:
sh /mnt/9p/a.out
so it is trying to use the shell binfmt instead of the ELF one.
In particular, I know that the Linux kernel can choose between architectures based on the binfmt signature, because qemu-user relies on that: https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture
Is there a way to determine the Linux architecture dynamically, i.e. whether it is x86-64 or x86?
The POSIX standard uname function (implemented by the uname(2) syscall) gives you this information about the CPU dynamically. You probably want the machine field.
Be cautious about x86-64 kernels running a 32-bit program (e.g. a 32-bit Debian distribution chroot-ed into a 64-bit Debian, or a 32-bit ELF binary running on a 64-bit system); I have no idea what they report in that situation. I would imagine x86_64, since the kernel does not really know about the binaries and libc of the system.
See also the Linux specific personality(2) syscall.
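For example, a minimal sketch reading that field (the exact strings reported, such as x86_64 or i686, depend on the kernel):

/* uname_machine.c - print the machine field reported by uname(2). */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) { perror("uname"); return 1; }
    /* Typically "x86_64" on a 64-bit x86 kernel, "i686" or similar on 32-bit x86. */
    printf("machine: %s\n", u.machine);
    return 0;
}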
Google is your friend: http://sourceforge.net/p/predef/wiki/Architectures/
You want to test for the macros __amd64__ and __i386__. Ideally, you don't test the macros at all and write correct, portable code.
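For instance, a compile-time sketch using those predefined macros (GCC and Clang also define __x86_64__ alongside __amd64__); note that this tells you what the binary was compiled for, not what the kernel is running:

/* arch_macros.c - report the architecture this binary was compiled for. */
#include <stdio.h>

int main(void)
{
#if defined(__amd64__) || defined(__x86_64__)
    puts("compiled for x86-64");
#elif defined(__i386__)
    puts("compiled for 32-bit x86");
#else
    puts("compiled for some other architecture");
#endif
    return 0;
}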
You can use the lscpu command to list the characteristics of the CPU.
I encountered this while trying to understand ELF (Executable and Linkable Format).
Steps I followed
Wrote a simple application.
main.c containing
int main(int argc, char **argv){ return 0;}
Compiled it in a Linux environment using gcc (done on an Intel laptop).
Simplest command possible
gcc main.c
Now when I run a.out, it runs without any issue, so the build is fine.
I used the readelf tool to retrieve the ELF information, where the Machine field is reported as Advanced Micro Devices X86-64.
This part puzzled me.
So I checked the file header of a.out; it follows the ELF-64 specification, with e_machine value 62 (EM_X86_64).
Would anyone care to explain why the executable, built in 64-bit mode on Linux, shows the machine type as AMD x86-64?
The x86_64 platform was called the AMD64 platform back when AMD introduced it. Initially, it was far from clear that Intel would ever support it.
You notice how, long after i386 CPUs ceased to exist, a lot of software still carried the architecture tag i386? That is because the i386 introduced the instruction set that software uses. Similarly, AMD introduced the instruction set your program uses, so it gets an architecture tag that reflects the first CPUs that supported that instruction set. (Modern 32-bit code is still often tagged i686, which refers to the Pentium Pro, circa 1995.)
For a while, the IA-64 (Intel Architecture 64-bit) or Itanium chips were Intel's 64-bit offering, and the Pentium-class chips were the IA-32 chips. The IA-64 chip instruction set was sufficiently different from the Pentium code set that people did not pick it up in large numbers. Meanwhile, AMD came out with a 64-bit extension to the Pentium code set - and that got a lot of support. After a while, Intel bowed to the inevitable and made its own chips that were compatible with the AMD x86/64 chips. But it was AMD that specified the architecture, so it gets the credit in the name.
why does the executable ... show machine type as AMD x86 64?
Because the ELF machine code, used by file, was registered by AMD. There is an official list of registered codes at http://www.sco.com/developers/gabi/latest/ch4.eheader.html (see the e_machine table):
e_machine
This member's value specifies the required architecture for an individual file.
Name Value Meaning
EM_NONE 0 No machine
...
EM_X86_64 62 AMD x86-64 architecture
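As an illustration, a minimal sketch that reads e_machine straight from an ELF header using <elf.h> (it assumes a little-endian file on a little-endian host, which is the case for the a.out built above):

/* elf_machine.c - print the e_machine field of an ELF file, e.g. ./a.out.
 * e_machine sits at the same offset in ELF32 and ELF64 headers, so reading
 * the ELF64 header works for this purpose; endianness conversion is omitted. */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr hdr;
    if (fread(&hdr, sizeof hdr, 1, f) != 1) { fprintf(stderr, "short read\n"); fclose(f); return 1; }
    fclose(f);

    printf("e_machine = %u%s\n", (unsigned)hdr.e_machine,
           hdr.e_machine == EM_X86_64 ? " (EM_X86_64, AMD x86-64)" : "");
    return 0;
}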
My current understanding is:
No 64-bit GHC, ticket #1884
The 32-bit GHC and the binaries it builds work just fine, because the Windows OS loader converts OS calls and pointers to 64 bits. The same applies to DLLs.
No mixing of 32-bit and 64-bit code (i.e. your 32-bit Haskell DLL isn't going to work with a 64-bit program that wants to use it).
The latest discussion is a thread started in May 2011.
Is this correct? Are there any pitfalls to watch out for, particularly as an FFI user? For example, if I were to export some Haskell code as a 32-bit DLL to some Windows program, should I expect it to work?
Edit: it looks like you'd need a 64-bit DLL to go with a 64-bit process.
I don't know if anyone's actively working on a 64-bit codegen right now, but 32-bit Haskell will work just fine as long as you're only talking to 32-bit FFI libraries (and/or being embedded in 32-bit host programs). If you want to interact with 64-bit programs, you will need to use some form of IPC, as 32-bit and 64-bit code cannot coexist in one process.
64-bit Windows is supported now. There is a binary distribution of 64-bit GHC.
No 64-bit Haskell Platform yet though.
I'm writing a small utility that should run on 16-, 32-, and 64-bit systems.
My old utility ran on both 32-bit and 16-bit systems by embedding the 16-bit version into the 32-bit one with the /STUB switch in Visual Studio 2008 (/STUB - MS-DOS Stub File Name).
I'm looking for a way to do the same with my 64-bit executable.
The target 64-bit system is 64-bit WinPE, and it doesn't have WOW64 installed.
Is it possible?
The DOS stub of a Windows executable lives under the MZ header, whereas both 32-bit and 64-bit executables use the PE header. This allows the DOS stub to exist within either kind of Windows executable, but causes a collision when trying to combine 32- and 64-bit executables in one file.
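A sketch of that layout, for a Windows build environment: the MZ header's e_lfanew field points to the single PE header, which is why one file can carry only one PE image (the file name is whatever you pass on the command line):

/* pe_machine.c - follow the MZ header's e_lfanew to the PE header and
 * print its Machine field. Build with a Windows toolchain (needs windows.h). */
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <exe-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    IMAGE_DOS_HEADER dos;
    if (fread(&dos, sizeof dos, 1, f) != 1 || dos.e_magic != IMAGE_DOS_SIGNATURE) {
        fprintf(stderr, "not an MZ executable\n"); fclose(f); return 1;
    }

    /* e_lfanew points past the DOS stub to the one and only PE header. */
    fseek(f, dos.e_lfanew, SEEK_SET);
    DWORD sig = 0;
    WORD machine = 0;
    fread(&sig, sizeof sig, 1, f);
    fread(&machine, sizeof machine, 1, f);   /* IMAGE_FILE_HEADER.Machine */
    fclose(f);

    if (sig != IMAGE_NT_SIGNATURE) { fprintf(stderr, "no PE header\n"); return 1; }
    printf("Machine = 0x%04x (%s)\n", (unsigned)machine,
           machine == IMAGE_FILE_MACHINE_AMD64 ? "x64" :
           machine == IMAGE_FILE_MACHINE_I386  ? "x86" : "other");
    return 0;
}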
You should pack your 32-bit and 64-bit utilities into the resources of another exe, let's call it the launcher, built as 32-bit.
Your launcher should then detect what kind of system it has been started on, extract the proper binary from its resources, and start it; a sketch of the detection step follows.
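A minimal sketch of that detection step, assuming the launcher itself is built as 32-bit (so it only helps where WOW64 is present, as the next answer points out); extracting and starting the embedded resource is left out:

/* launcher_detect.c - decide which payload a 32-bit launcher should start.
 * A 32-bit process running under WOW64 means the OS is 64-bit Windows. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL isWow64 = FALSE;
    /* IsWow64Process exists since Windows XP SP2; a defensive launcher
     * would resolve it via GetProcAddress on older systems. */
    if (!IsWow64Process(GetCurrentProcess(), &isWow64)) {
        fprintf(stderr, "IsWow64Process failed: %lu\n", GetLastError());
        return 1;
    }

    if (isWow64)
        puts("64-bit Windows: extract and start the 64-bit payload");
    else
        puts("32-bit Windows: extract and start the 32-bit payload");
    return 0;
}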
32-bit Windows runs 16-bit applications via wowexec.exe, and 64-bit Windows runs 32-bit applications via WOW64. So without WOW64 it's impossible to create a universal launcher on Windows. (Note: Mac OS X supports multiple architectures in a single binary anyway.)
The best approach I can figure out is to create a single MSI installer package and put both the 32-bit and 64-bit exes into it.