I'm currently in the process of learning assembly language. I'm using GAS on Linux Mint (32-bit), working through the book Programming from the Ground Up.
The machine I'm using has an AMD Turion 64-bit processor, but I'm limited to 2 GB of RAM.
I'm thinking of upgrading my Linux installation to the 64-bit version of Linux Mint, but I'm worried that, because the book targets the 32-bit x86 architecture, the code examples won't work.
So two questions:
Are there likely to be any problems with the code samples?
Has anyone here noticed any benefits in general to using 64-bit Linux over 32-bit? (I've seen some threads on Stack Overflow about this, but they are mostly about Windows Vista vs. Windows XP.)
Your code examples should all still work. 64-bit processors and operating systems can still run 32-bit code in a sort of "compatibility mode", and your assembly examples are no different. You may have to provide an extra line of assembly or two (such as a .code32 directive), but that's all.
In general, using a 64-bit OS will be faster than using a 32-bit OS. x86_64 has more registers than i386, and since you're working in assembly, you already know what registers are used for: having more of them means less data has to be moved on and off the stack (and other temporary memory), so your program spends less time managing data and more time working on that data.
Edit: To compile 32-bit code on 64-bit Linux using GAS, you just use the command-line argument --32, as noted in the GAS manual.
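As a minimal sketch (the file name exit32.s is just a placeholder), assembling and linking a small 32-bit program on a 64-bit host might look like this, assuming 32-bit binutils support is installed:

    # exit32.s: a minimal 32-bit program in the style of the book.
    # Build on 64-bit Linux:
    #   as --32 -o exit32.o exit32.s
    #   ld -m elf_i386 -o exit32 exit32.o
    .section .text
    .globl _start
    _start:
        movl $1, %eax        # 32-bit Linux syscall 1 = exit
        movl $0, %ebx        # exit status 0
        int  $0x80           # enter the kernel via the 32-bit interface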
Even if you run 64-bit Linux, it is possible to compile and run 32-bit binaries on it. I don't know how good Mint's support for that is, so I'd suggest you check.
64-bit assembly, however, is not fully compatible with 32-bit: for example, you have different (and more) registers, and each platform has some instructions that aren't available on the other.
I would say the move to 64-bit is not a big deal. You can still write 32-bit assembly and then perhaps try to get it running as 64-bit as well (it shouldn't be too hard), as a source of even more programming and learning fun.
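To give a feel for how small the port can be for a simple program, here is the 64-bit counterpart of the minimal program sketched above (again only a sketch; note the different syscall number, argument register, and kernel-entry instruction):

    # exit64.s: the same minimal program ported to 64-bit.
    # Build: as -o exit64.o exit64.s && ld -o exit64 exit64.o
    .section .text
    .globl _start
    _start:
        movq $60, %rax       # 64-bit Linux syscall 60 = exit
        movq $0, %rdi        # exit status 0 (the first argument lives in %rdi)
        syscall              # enter the kernel via the 64-bit interface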
Usually 32 bits is plenty, so only use 64 bits (or more) if you really need it. It's best to decide before you start programming whether you want a 32-bit app or a 64-bit app, and then stick to it, as mixed-mode debugging can get tricky fast.
Related
Basically, what I wonder is how an x86-64 OS can run code compiled for an x86 machine. I know that when the first x64 systems were introduced, this wasn't a feature of any of them; somehow they managed to add it later.
Note that I know the x86 assembly language is a subset of the x86-64 assembly language, and that the ISAs are designed in such a way that they can support backward compatibility. But what confuses me here is stack calling conventions, which differ a lot depending on the architecture. For example, in x86, to back up the frame pointer, a process pushes its current value onto the stack (RAM) and pops it back when done. In x86-64, on the other hand, a process doesn't need to update the frame pointer at all, since all references can be made via the stack pointer. And secondly, while in the x86 architecture arguments to functions are passed on the stack, in x86-64 registers are used for that purpose.
Maybe these differences between the calling conventions of the x86 and x86-64 architectures don't affect the way the program stack grows, as long as the two conventions are not mixed, and this is mostly the case because 32-bit functions are called by other 32-bit functions, and the same goes for 64-bit. But at some point a function (probably a system function) will call, with some arguments, a function whose code was compiled for an x86-64 machine, and at that point I am curious how the OS (or some other control unit) manages to make that call work.
Thanks in advance.
Part of the way that the i386/x86-64 architecture is designed is that the CS and other segment registers refer to entries in the GDT. The GDT entries have a few special bits besides the base and limit that describe the operating mode and privilege level of the currently running task.
If the CS register refers to a 32-bit code segment, the processor will run in what is essentially i386 compatibility mode. Likewise 64-bit code requires a 64-bit code segment.
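As a rough illustration of those descriptor bits (a sketch using the canonical flat-model values: the L bit, bit 53 of a descriptor, selects 64-bit mode, while the D bit, bit 54, sets the default operand size):

    # Two simplified flat-model GDT code-segment descriptors (base 0, limit 4 GiB).
    gdt_code32: .quad 0x00CF9A000000FFFF   # 32-bit code segment: D=1, L=0
    gdt_code64: .quad 0x00AF9A000000FFFF   # 64-bit code segment: D=0, L=1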
So, putting this all together.
When the OS wants to run a 32-bit task, during the task switch into it, it loads a value into CS that refers to a 32-bit code segment. Interrupt handlers also have segment registers associated with them, so when a system call or an interrupt occurs, the handler switches back to the OS's 64-bit code segment (allowing the 64-bit OS code to run correctly), and the OS can then do its work and continue scheduling new tasks.
As a follow-up with regards to calling convention: neither i386 nor x86-64 requires the use of frame pointers. The code is free to do as it pleases. In fact, many compilers (gcc, clang, VS) offer the ability to compile 32-bit code without frame pointers. What is important is that the calling convention is implemented consistently. If all the code expects arguments to be passed on the stack, that's fine, but the called code had better agree with that. Likewise, passing via registers is fine too; everyone just has to agree (at least at the library interface level, internal functions can generally do as they please).
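As a sketch of that agreement, here is roughly how a call like add(2, 3) (a hypothetical function) looks under the two common Linux conventions:

    # i386 cdecl: arguments are pushed on the stack, right to left,
    # and the caller cleans up afterwards.
        pushl $3
        pushl $2
        call  add
        addl  $8, %esp

    # x86-64 System V: the first six integer arguments go in registers.
        movl  $2, %edi
        movl  $3, %esi
        call  add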
Beyond that, just keep in mind that the difference between the two isn't really an issue, because every process gets its own private view of memory. A side consequence, though, is that 32-bit apps can't load 64-bit DLLs, and 64-bit apps can't load 32-bit DLLs, because a process has either a 32-bit code segment or a 64-bit code segment; it can't be both.
The processor is switched into compatibility mode, which requires everything executing at that time to be 32-bit code. This switching is handled by the OS.
Windows: it uses WoW64. WoW64 is responsible for changing the processor mode, and it also provides the compatible DLL and registry functions.
Linux: until recently, Linux (like Windows) would switch the processor into compatibility mode whenever it started executing 32-bit code; you needed all the 32-bit glibc libraries installed, and it would break if it tried to work together with 64-bit code. Now the x32 ABI is being implemented, which should make everything run more smoothly and allow 32-bit applications to access x86-64 features like the increased number of registers. See this article on the x32 ABI.
PS: I am not very certain about the details, but this should give you a start.
Also, this answer combined with Evan Teran's answer should give a rough picture of everything that is happening.
My current understanding is:
No 64-bit GHC, ticket #1884
The 32-bit GHC and the binaries it builds work just fine, because the Windows OS loader converts OS calls and pointers to 64 bits. The same applies to DLLs.
No mixing 32-bit and 64-bit code (i.e. your 32-bit Haskell DLL isn't going to be friends with the 64-bit program that wants to use it).
The latest discussion is a thread started in May 2011.
Is this correct? Are there any pitfalls to watch out for, particularly as an FFI user? For example, if I were to export some Haskell code as a 32-bit DLL to some Windows program, should I expect it to work?
Edit: it looks like you'd need a 64-bit DLL to go with a 64-bit process.
I don't know if anyone's actively working on a 64-bit codegen right now, but 32-bit Haskell will work just fine as long as you're only talking to 32-bit FFI libraries (and/or being embedded in 32-bit host programs). If you want to interact with 64-bit programs, you will need to use some form of IPC, as 32-bit and 64-bit code cannot coexist in one process.
64-bit Windows is supported now. There is a binary distribution of 64-bit GHC.
No 64-bit Haskell Platform yet though.
I want to learn assembly programming for PowerPC and ARM, but I'm unable to buy real hardware for this purpose. I'm thinking about using QEMU for that. However, I'm not sure whether it emulates both architectures well enough that I'll be able to compile and run my programs in native assembly on it.
QEMU works well for testing program correctness (i.e. whether the code would run properly on an actual ARM or PowerPC), but it is not good for testing program efficiency: the emulation is not cycle-accurate, and speed measured with QEMU cannot be reliably (or even unreliably) correlated with speed on true hardware.
Also, QEMU will not trap unaligned memory accesses, which is not a problem for PowerPC emulation (the PowerPC tolerates unaligned accesses) but may be for ARM (an unaligned access, e.g. reading a 32-bit word in RAM from an address which is not a multiple of 4, will work fine with QEMU but would trigger an exception on a true ARM processor).
Apart from these points, QEMU is fine for assembly development on ARM or MIPS (haven't tried PowerPC, because I found an old iBook on eBay for that; but I have done ARM and MIPS assembly with QEMU and then ran the resulting code on true hardware, and this worked). You can either emulate a whole system and run Debian in it (in which case the compiler, linker, text editor... will also run in emulation), or use the "user-mode emulation" where the ARM/MIPS executable is run directly, with a wrapper which converts system calls into those for the host PC (this assumes that the host is a PC running Linux). The latter is more convenient (you have access to your normal home directory, programming tools are native...) but requires installing cross-development tools. See buildroot for that (and link with -static, this will avoid many headaches).
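To make the user-mode workflow concrete, here is a minimal ARM test program with the build-and-run commands in the comments (a sketch: the cross-toolchain prefix arm-linux-gnueabi- is an assumption and may differ on your system):

    @ exit_arm.s: a minimal ARM program for smoke-testing qemu-arm.
    @ Build and run:
    @   arm-linux-gnueabi-as -o exit_arm.o exit_arm.s
    @   arm-linux-gnueabi-ld -o exit_arm exit_arm.o
    @   qemu-arm ./exit_arm ; echo $?    (should print 0)
    .global _start
    _start:
        mov r0, #0           @ exit status 0
        mov r7, #1           @ ARM EABI syscall 1 = exit
        svc #0               @ enter the kernel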
Since I have found signs that Debian for PowerPC and for ARM can run on QEMU, I suppose this won't be a problem.
I have a 32-bit .so binary-only library, and I have to generate a 64-bit program that uses it.
Is there a way to wrap or convert it, so it can be used with 64-bit program?
No. You can't directly link to 32-bit code inside a 64-bit program.
The best option is to compile a 32-bit (standalone) program that can run on your 64-bit platform (using ia32), and then use a form of inter-process communication to talk to it from your 64-bit program.
For an example of using IPC to run 32-bit plugins from 64-bit code, look at the open source NSPluginWrapper.
It is possible, but not without some serious magic behind the scenes, and you will not like the answer: either emulate a 32-bit CPU (no, I am not kidding) or switch the main process back to 32-bit. Emulating may be slow, though.
This is a proof of concept of the technique.
Then keep a table of every memory access to and from the 32-bit library and keep them in sync. It is very hard to achieve theoretical completeness, but something workable should be pretty easy, if very tedious.
In most cases, I believe two processes and IPC between them may actually be easiest, as suggested in the other answers.
What are the guidelines for porting a 32-bit program to a 64-bit version?
Apart from the obvious issues with calling 32-bit libraries:
Don't assume a pointer is the same size as an integer (see the sketch after this list).
Don't assume subtracting one pointer from another yields a value that fits in an integer.
See also http://msdn.microsoft.com/en-us/library/aa384190(VS.85).aspx
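As a quick assembly-level illustration of the first point (a sketch; buffer is a hypothetical symbol): on x86-64, squeezing a pointer through a 32-bit register silently discards its upper half.

        leaq  buffer(%rip), %rax   # %rax holds a full 64-bit address
        movl  %eax, %ecx           # a 32-bit move zeroes the upper half of %rcx
        movq  %rcx, %rdx           # %rdx is now wrong for any address above 4 GiB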
Don't use hard-coded registry or file-system paths, as some are different on a 64-bit machine. For example, 32-bit apps get installed under 'Program Files (x86)'.
If you are developing in Windows using .NET, make sure you are using the System or Microsoft.Win32 libraries to access resources.