In MIPS, arguments are placed in registers $a0 to $a3 for faster access. Why do some x86 architectures make the design choice to place arguments on the stack instead of in registers? What are the advantages of doing this?
The real answer is that it depends more on the compiler than the processor, although I suspect the reason it is so common for x86 compilers to push arguments onto the stack is that the x86 CPU has always suffered from a scarcity of registers. By the time you eliminate the reserved registers, you are left with three - EAX, ECX, and EDX, which correspond to AX, CX, and DX in the original 16-bit x86 instruction set. So far as I know, the first processor in the Intel line to raise that limit is the 64-bit "AMD64" architecture, which adds eight more, numbered R8 through R15. The five reserved registers get new names, but they remain reserved.
To reinforce my assertion that it depends on the compiler, you need look no further than the native code generator that ships with the Microsoft .NET Framework, which exhibits a hybrid approach, where the first two arguments go into ECX (or RCX) and EDX (or RDX), while additional arguments go onto the stack.
Even within C/C++ code generated by the Microsoft Visual C++ compiler, although the most common calling conventions, __cdecl and __stdcall, use the stack, a third, __fastcall, uses registers. Moreover, I have seen assembly code that used both; if a routine needed to talk to C, it expected arguments on the stack, but private routines that received calls only from other routines in the library took their arguments in registers.
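To make that concrete, here is roughly what a call to a three-argument function looks like under each convention (a hand-written sketch, not actual compiler output; the function names and decorations are just for illustration):

        ; __cdecl: all arguments pushed right to left, caller cleans up
        push    3
        push    2
        push    1
        call    _cdecl_func
        add     esp, 12             ; caller removes the arguments

        ; __fastcall: first two arguments in ECX and EDX, the rest on the
        ; stack, and the callee cleans up what was pushed
        push    3
        mov     edx, 2
        mov     ecx, 1
        call    @fastcall_func@12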
Registers are naturally faster, considerably faster, but you have to have enough of them. x86 traditionally had very few registers, so a stack-based approach was the way to go; at that point in history it was in general the way to go. RISC and others came along with a lot more registers, and the idea of using registers for the first few parameters, and perhaps the return value, became something to consider. x86 has more registers now but has generally remained stack based, although I think Zortech or Watcom, or perhaps even gcc, had a command-line option to pass a few parameters in registers (I would have to confirm or deny that with research). Historically, though, it has used the stack for parameters and a register for the return value.
ARM, MIPS, etc. all have a finite number of registers, so eventually arguments spill onto the stack; if you keep the number, size, and at times the ordering of your parameters under control, you can try to limit this and improve performance.
At the end of the day, the bottom line is that someone or some team defines the calling convention. It is ultimately the choice of the compiler authors; it doesn't matter whether the chip/processor designer has a recommendation, the compiler defines what its calling convention is, be it following a recommendation or doing its own thing. There is no reason to create a MIPS or ARM compiler/toolchain that is mostly stack based (the instruction set itself might dictate stack- or register-based returns, or it could be optional); likewise, you are more than welcome to make an x86 compiler with a convention that starts with registers and then moves to the stack after some number of them are used.
So: a little bit of history, and a little bit because they chose to...
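In fact, that last option already exists: the 64-bit System V convention used by x86-64 Linux passes the first six integer arguments in RDI, RSI, RDX, RCX, R8, and R9 and only then spills to the stack. A rough, hand-written sketch of such a call site (some_function is just a placeholder, not real code from any project):

        mov     edi, 1              ; argument 1
        mov     esi, 2              ; argument 2
        mov     edx, 3              ; argument 3
        mov     ecx, 4              ; argument 4
        mov     r8d, 5              ; argument 5
        mov     r9d, 6              ; argument 6
        push    7                   ; argument 7 goes on the stack
        call    some_function
        add     rsp, 8              ; caller removes the stacked argument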
The simple answer is that you must always follow the ABI for the platform that you are running on. The longer answer is that you incorrectly assume that every 32 bit x86 platform will exclusively use the stack for argument passing. In fact, while each platform will adopt a standard, there are numerous approaches, any of which can be used. (fastcall, cdecl, etc.)
I am currently in the process of understanding what it takes for the Linux kernel to boot. I was browsing through the Linux kernel source tree, in particular for the ARM architecture, until I stumbled upon this assembly instruction retne lr in arch/arm/kernel/hyp-stub.S
Conceptually, it's easily understood that the instruction is supposed to return to the address stored in the link register if the Z flag is 0. What I am looking for is where this ARM assembly instruction is actually documented.
I searched in the ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition section A8.8 and could not find the description of the instruction.
Grepping the sources and seeing if it was an ARM specific GNU AS extension did not turn up anything in particular.
A Google search for "arm assembly ret instruction", "arm return instruction", and similar queries along those lines did not turn up anything useful either. Surely I must be looking in the wrong places or I must be missing something.
Any clarification will be much appreciated.
The architectural assembly language is one thing, real world code is another. Once assembler pseudo-ops and macros come into play, a familiarity with both the toolchain and the codebase in question helps a lot. Linux is particularly nasty as much of the assembly source contains multiple layers of both assembler macros and CPP macros. If you know what to look for, and follow the header trail to arch/arm/include/asm/assembler.h, you eventually find this complicated beast:
        .irp    c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
        .macro  ret\c, reg
#if __LINUX_ARM_ARCH__ < 6
        mov\c   pc, \reg
#else
        .ifeqs  "\reg", "lr"
        bx\c    \reg
        .else
        mov\c   pc, \reg
        .endif
#endif
        .endm
        .endr
The purpose of this is to emit the architecturally-preferred return instruction for the benefit of microarchitectures with a return stack, whilst allowing the same code to still compile for older architectures.
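So, for instance, on a kernel built for ARMv7 (where __LINUX_ARM_ARCH__ >= 6), the retne lr in hyp-stub.S simply expands to

        bxne    lr

whereas a build for an older architecture gets the traditional

        movne   pc, lr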
I have been writing a small kernel lately based on the x86-64 architecture. When taking care of some user-space code, I realized I am hardly using rbp at all. I then looked at some other things and found out that compilers are getting smarter these days and really don't use rbp any more. (I could be wrong here.)
I was wondering whether the conventional use of rbp/ebp as a frame pointer is no longer required in many instances, or whether I am wrong here. If that kind of usage is not required, can rbp be used like a general-purpose register?
Thanks
It is only really needed if you have variable-length arrays (or alloca) in your stack frames, because then the stack pointer moves by an amount that is not known at compile time, and addressing locals relative to it would require recording that length somewhere (more memory and more computation). It is no longer needed for unwinding because there is now metadata (DWARF call-frame information) for that.
It is still useful if you are hand-writing entire assembly functions, but who does that? Assembly should only be used as glue to jump into a C (or whatever) function.
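To make the difference concrete, here is a rough sketch (x86-64, AT&T syntax, hand-written rather than any particular compiler's output; the labels are made up) of a function that keeps a frame pointer versus one that addresses its locals relative to rsp and leaves rbp free:

        # with a frame pointer
with_fp:
        pushq   %rbp
        movq    %rsp, %rbp          # establish the frame
        subq    $16, %rsp           # room for locals
        movl    %edi, -4(%rbp)      # locals addressed via rbp
        leave                       # movq %rbp, %rsp ; popq %rbp
        ret

        # without a frame pointer (roughly what -fomit-frame-pointer,
        # the default at higher optimization levels, gives you)
no_fp:
        subq    $16, %rsp           # room for locals
        movl    %edi, 12(%rsp)      # locals addressed via rsp
        addq    $16, %rsp
        ret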
I'd like to know how to read integers from keyboard in assembly. I'm using Linux/x86 IA-32 architecture and GCC/GAS (GNU Assembler). The examples I found so far are for NASM or some other Windows/DOS related compiler.
I heard that it has something to do with the "int 16h" interrupt, but I don't know how it works (does it need parameters? Does the result go to %eax or any of its virtual registers [AX, AH, AL]?).
Thanks in advance,
Flayshon.
:D
Simple answer is that you don't read integers from the keyboard, you read characters from the keyboard. You don't print integers to the screen, either - you print characters. You will need routines to convert "ascii-to-integer" and "integer-to-ascii". You can "just call scanf" for the one, and "just call printf" for the other. "scanf" works okay if the user is well-behaved and confines input to characters representing decimal digits, but it's difficult to get rid of any "junk" entered! "printf" isn't too bad.
Although I'm a Nasm user (it works fine for Linux - not really "Windows/dos related"), I might have routines in (G)as syntax lying around. I'll see if I can find 'em if you can't figure it out.
As Brian points out, int 16h is a BIOS interrupt - 16-bit code - and is not useful in Linux.
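On Linux you go through the read(2) system call instead. Something along these lines, untested and typed from memory, so treat it as a sketch; the read_int and buf names are just for illustration, and it handles plain decimal digits only, with no sign or error checking:

        .data
buf:    .space  32                  # room for the typed characters

        .text
        .globl  read_int
read_int:
        pushl   %ebx                # ebx is callee-saved in the i386 ABI
        movl    $3, %eax            # sys_read
        movl    $0, %ebx            # fd 0 = stdin
        movl    $buf, %ecx          # destination buffer
        movl    $32, %edx           # maximum bytes to read
        int     $0x80

        movl    $buf, %ecx          # walk the characters we just read
        xorl    %eax, %eax          # result = 0
1:
        movzbl  (%ecx), %edx        # next character, zero-extended
        cmpl    $48, %edx           # below ASCII '0' (e.g. newline)? stop
        jb      2f
        cmpl    $57, %edx           # above ASCII '9'? stop
        ja      2f
        subl    $48, %edx           # ASCII digit -> 0..9
        imull   $10, %eax, %eax     # result = result * 10
        addl    %edx, %eax          #          + digit
        incl    %ecx
        jmp     1b
2:
        popl    %ebx
        ret                         # integer result in %eax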
Best,
Frank
In 2012, I don't recommend coding an entire program in assembly. Code only the most critical parts in assembly (if you absolutely want some assembly code). Compilers are optimizing better than humans. So use C or C++ for low-level software, and higher-level languages (e.g. OCaml) for the rest.
On Linux, you need to understand the role of the Linux kernel and of system calls, which are documented in section 2 of the man pages. You probably want at least read(2) and write(2) (if only handling stdin and stdout, which should already have been opened by the parent process, e.g. a shell), and you probably need many other syscalls (e.g. open(2) and close(2)). Don't forget to do your own buffering (for efficiency purposes).
I strongly recommend learning the Linux system interfaces by reading a good book such as Advanced Unix Programming.
How system calls are done at the machine level in assembly is documented in the Linux Assembly Howto (at least for x86 Linux in 32 bits).
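At the machine level, on 32-bit x86 it boils down to loading the syscall number into %eax, the arguments into %ebx, %ecx, %edx and so on, and executing int $0x80. For instance, a bare exit(0) is just:

        movl    $1, %eax            # __NR_exit
        xorl    %ebx, %ebx          # exit status 0
        int     $0x80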
If your goal is to "obtain" a program, I would agree entirely with Basile. If your goal is to "learn assembly language", these other languages aren't really going to help. If your goal is to learn the nitty-gritty details of the hardware, you probably want assembly language, but Linux (or any other "protected mode" OS) isolates us from the hardware, so you might want to use clunky old DOS or even "write your own OS". Flayshon doesn't actually say what his goal is, but since he's asking here, he's probably interested in assembly language...
Some of us have a mental illness that makes us think it's "fun" to write in assembly language. Humor us!
Best,
Frank
Well, SSE and MMX instructions bring (or at least should bring) a great increase in performance, don't they?
So, I haven't seen any of the Linux kernel sources, but I'd love to ask: are they used there somehow? (In that case, there must be some special fallback path for systems that don't have these instructions?)
The SSE and MMX instruction sets have limited value outside of audio/video and gaming work. You might find a few explicit uses in dark corners of the kernel, but I wouldn't count on it. The answer in the general case is "no, they are not used", nor are they used in most non-kernel/userspace applications.
The kernel does sometimes optionally use certain x86 instructions that are specific to certain CPUs (e.g. present on some AMD or Intel models but not all, nor vice-versa), such as syscall, but these are different from the SIMD instruction sets you're referring to, and are not part of some wider set of similarly-themed extensions.
After Mark's answer, I went looking. The only place I could easily identify them being used is in the RAID 6 library (which also has support for AltiVec, which is the PowerPC SIMD instruction set).
(Be wary just grepping the tree, there are a lot of spots where the kernel "knows" about SSE/MMX to support user-space applications, but isn't actually using it. Also a couple cases of unfortunate variable names that have absolutely nothing to do with SSE, e.g. in the SCTP implementation.)
There are severe restrictions on using vector registers and floating point registers in kernel code. See e.g. chapter 6.3 of "Calling conventions for different C++ compilers and operating systems". http://www.agner.org/optimize/#manuals
They are used in the kernel for a few things, such as
- Software RAID
- Encryption (possibly)
However, I believe it always checks their presence first.
"cpu simd instructions use FPU"
erm, no, not as I understand it. They're in part a modern and (much) more efficient replacement for FPU instructions, but a large part of the SIMD instruction set deals with integer operations.
I've never looked very hard at it, but I suppose (ok, hope) that SIMD code generated by a recent gcc version will not clobber any registers or state.
I'm learning assembly language. I started with Paul A. Carter's PC Assembly Language, which uses NASM (the Netwide Assembler). Then in the middle I switched and started reading Introduction to 80x86 Assembly Language and Computer Architecture, which uses MASM.
In NASM I used to write, for initializing a byte
db 110101b
In MASM I'm using
BYTE 110101b
I'm in the middle of reading it. Since these are assembler directives, they will be different for each assembler, right?
Don't these assembler developers follow a standard for these directives? They know that mnemonics are CPU specific, so it's already a pain in the ass to learn and code in assembly language.
Now if they follow different directives, it's even more pain if you change assembler or switch operating system (a MASM developer is in deep trouble if he goes to Linux).
My confusion is: should I acquaint myself with NASM or MASM? I'm a fan of Windows, but I may have to work on Linux too in the future.
Every book should be titled "_________ Assembly Language using __________ Assembler"
Unfortunately there has never been a standard for assembly language. You'll just have to learn the directives that your assembler supports. Fortunately most of the directives, while having different names, are semantically similar like db and BYTE.
But wait! It gets worse, especially for the x86. You have (at least) two forms of code that assemblers can accept: Intel and AT&T format. AT&T format reverses the order of most operands to instructions (or is it vice versa ;-).
NASM is probably a better choice for portability, but you could also look at the GNU assembler.
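For a concrete taste of the directive differences (a rough, hand-written comparison; the val label is made up), defining a single initialized byte looks like this in each assembler:

        ; NASM
val:    db      35h

        ; MASM
val     BYTE    35h

        # GNU as (GAS)
val:    .byte   0x35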
Intel Syntax / AT&T Syntax
With x86 in particular, the first assemblers were from Intel and then largely-compatible assemblers from Microsoft formed one branch.
These assemblers organize source and destination operands right to left and have an unusual (and to my eyes, kind of wacky) abstraction layer that uses a single mnemonic for 8, 16, and 32-bit ops and then derives the actual machine opcode to use based on properties of the operand. Modifiers exist (on operands) to force a particular size.
But Unix was also important and it had a completely different assembler line with different traditions and conventions.
The original Unix vendor was AT&T, which owned the intellectual property developed at Bell Labs. A series of BSD projects and then Linux continued with this tradition. These assemblers historically process operands left to right, have a spare design optimized for speed, and when used by humans they generally use cpp for macros and conditionals, even if the assembler also has parallel features.
These days you are probably using Visual Studio on Windows or the GNU tools on Linux or the Mac, but this is why we still say AT&T versus Intel. The GNU assembler has an option to assemble both ways, although it's still really in the AT&T camp.
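To see the two camps side by side, the same pair of instructions (a hand-written illustration, not from any real program) looks like this in each syntax:

        ; Intel syntax (MASM/NASM style): destination first
        mov     eax, 10
        mov     ebx, [esp+4]

        # AT&T syntax (GNU as default): source first, size suffix, % and $ sigils
        movl    $10, %eax
        movl    4(%esp), %ebx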
Generally yes. They are mostly feature-compatible though, so converting from one assembler syntax to another is usually not terribly difficult if you know both.
Processors are all documented in a manufacturer-supplied reference manual. This usually develops into the normative syntax (along with the assembler provided by the vendor) for assembly programs on a particular platform. Consequently, many processors from a single vendor have similar syntax.
The situation became more complex with second sourcing of processors and the eventual development of multi-targeting assemblers that, for historical reasons, use mostly consistent syntax across all platforms. This also provides some arguable advantages when porting code across platforms.
Your best choices are to: pick a notation you are comfortable with and accept books with a different syntax; see if you can locate cross-system macro libraries or translation tools; or bite the bullet and learn multiple dialects. The third is usually tolerable, although it makes building private libraries labour-intensive.