I have implemented a spinlock in NASM-64 on Windows. I'm using a spinlock to lock a shared memory buffer so that each core writes to the shared buffer one at a time, in order (core 0 first, core 1 second, core 2 third, etc.). As far as I know, a mutex or semaphore will not let me do the buffer writes in core order, and a spinlock is preferable because it doesn't use an OS call.
Here is the code section at issue. This is not a complete example because the problem is in this small section of a much larger assembly program. I used cmpxchg for atomicity.
On entry to this section, rax contains the core number and rbx contains a memory variable called spinlock_core, which is set to zero (first core) on entry to this section. After each core is finished, spinlock_core is incremented to the next core number.
mov rdi,Test_Array
movq rbx,xmm11
mov [rdi+rax],rbx ; rax contains the core number offset (0, 8, 16, 24)
push rax
; Spin Lock
spin_lock_01:
lock cmpxchg [spinlock_core],rax ; spinlock_core is set to zero on first entry
jnz spin_lock_01
; To test the result:
mov rdi,Test_Array
mov [rdi+rax+32],rax
jmp out_of_here
The results of this test are in Test_Array, which is populated with the number of bytes to write for each core. On return it contains:
40, 40, 40, 16, 0, 8, 16, 24
showing that cores 0-2 each have 40 bytes to write and core 3 has 16 bytes to write. However, the last four elements of Test_Array contain the core offset for each of the four cores. If the spinlock (the lock cmpxchg above) were working correctly, the last four elements should all contain zero, showing that only the first core was allowed through. But they show that all four cores were allowed through.
I assume that my cmpxchg is not atomic, and that's why other cores leak through -- they should only be allowed in when spinlock_core is incremented to the next core, but that doesn't happen before we exit with jmp out_of_here. According to the docs, cmpxchg should be preceded by a "lock" prefix, as in lock cmpxchg rax,rbx, but when I do that the NASM assembler says "warning: instruction is not lockable [-w+lock]." Felix Cloutier's site says the lock prefix is only needed if a memory operand is involved, but when I write "cmpxchg rax,[spinlock_core]" I get "invalid combination of opcodes and operands."
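For reference, the encoding rules explain both messages: cmpxchg only exists as cmpxchg r/m, reg with the accumulator as the implicit comparand, so the memory operand (when there is one) must be the destination, and NASM only accepts lock when a memory operand is present. A sketch of what does and doesn't assemble:
lock cmpxchg [spinlock_core],rbx ; legal: memory destination, lockable
cmpxchg rax,rbx ; legal register-register form, but not lockable
;cmpxchg rax,[spinlock_core] ; invalid: memory must be the destination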
To summarize, my questions are: why is cmpxchg not atomic as written above, and why does the NASM assembler not allow the use of the lock prefix?
There are a number of detailed posts on Stack Overflow and elsewhere on the issue of atomicity but I haven't found any that address this specific issue.
Thanks for any help.
Here is how I solved this, with the help of comments below from @Peter Cordes and @prl. The changes are (1) use bl, not al, as the explicit register operand (al is cmpxchg's implicit accumulator), and (2) use the low 8-bit registers al and bl, not rax and rbx.
xor rbx,rbx
spin_lock_01:
pop rax ; the core number is stored on the stack
mov bl,al ; put core number into bl
mov rcx,rax ; preserve core number for later use
push rax ; push core number back to stack
lock cmpxchg [spinlock_core],bl ; spinlock_core is set to zero on first entry
jnz spin_lock_01
; now test it
mov rdi,Test_Array
mov rbx,[spinlock_core]
mov rdx,[order_of_execution]
add rdx,1
mov [order_of_execution],rdx
mov [rdi+rcx+0],rcx
mov [rdi+rcx+32],rdx
add rbx,8
mov [spinlock_core],rbx
jmp out_of_here
The output in Test_Array shows as: 0, 8, 16, 24, 1, 2, 3, 4. That shows that each of the cores is let through in core order (execution order 1, 2, 3, 4). The memory location spinlock_core is incremented by each core when it completes, signalling the next core.
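Since the handoff here is strictly sequential (only the finishing core advances spinlock_core), a plain load-and-compare spin would work in place of the cmpxchg; here is a minimal sketch, assuming spinlock_core is a qword and rcx holds this core's offset as in the code above:
spin_wait:
pause ; spin-loop hint: saves power and pipeline resources
cmp [spinlock_core],rcx ; wait until it's this core's turn
jne spin_wait
; ... write this core's data to the shared buffer ...
add qword [spinlock_core],8 ; hand off to the next core's offset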
Yesterday I was looking at some 32-bit code generated by VC++ 2010 (most probably; I don't know the specific options, sorry) and I was intrigued by a curious recurring detail: in many functions, it zeroed out ebx in the prologue and always used it like a "zero register" (think $zero on MIPS). In particular, it often:
- used it to zero out memory; this is not unusual, as the encoding for a mov mem,imm is 1 to 4 bytes bigger than mov mem,reg (the full immediate value size has to be encoded even for 0), but usually (gcc) the necessary register is zeroed out "on demand" and kept for more useful purposes otherwise;
- used it for compares against zero, as in cmp reg,ebx. This is what struck me as really unusual, as it should be exactly the same as test reg,reg, but adds a dependency on an extra register. Now, keep in mind that this happened in non-leaf functions, with ebx often being pushed (by the callee) on and off the stack, so I would not trust this dependency to always be completely free. It also used test reg,reg in the exact same fashion (test/cmp => jg).
Most importantly, registers on "classic" x86 are a scarce resource, if you start having to spill registers you waste a lot of time for no good reason; why waste one through all the function just to keep a zero in it? (still, thinking about it, I don't remember seeing much register spillage in functions that used this "zero-register" pattern).
So: what am I missing? Is it a compiler blooper or some incredibly smart optimization that was particularly interesting in 2010?
Here's an excerpt:
; standard prologue: ebp/esp, SEH, overflow protection, ... then:
xor ebx, ebx
mov [ebp+4], ebx ; zero out some locals
mov [ebp], ebx
call function_1
xor ecx, ecx ; ebx _not_ used to zero registers
cmp eax, ebx ; ... but used for compares?! why not test eax,eax?
setnz cl ; what? it goes through cl to check if eax is not zero?
cmp ecx, ebx ; still, why not test ecx,ecx?
jnz function_body
push 123456
call throw_something
function_body:
mov edx, [eax]
mov ecx, eax ; it's not like it was interested in ecx anyway...
mov eax, [edx+0Ch]
call eax ; virtual method call; ebx is preserved but possibly pushed/popped
lea esi, [eax+10h]
mov [ebp+0Ch], esi
mov eax, [ebp+10h]
mov ecx, [eax-0Ch]
xor edi, edi ; again, registers are zeroed as usual
mov byte ptr [ebp+4], 1
mov [ebp+8], ecx
cmp ecx, ebx ; why not test ecx,ecx?
jg somewhere
label1:
lea eax, [esi-10h]
mov byte ptr [ebp+4], bl ; ok, uses bl to write a zero to memory
lea ecx, [eax+0Ch]
or edx, 0FFFFFFFFh
lock xadd [ecx], edx
dec edx
test edx, edx ; now it's using the regular test reg,reg!
jg somewhere_else
Notice: an earlier version of this question said that it used mov reg,ebx instead of xor ebx,ebx; this was just me not remembering stuff correctly. Sorry if anybody put too much thought into trying to understand that.
Everything you commented on as odd looks sub-optimal to me. test eax,eax sets all flags (except AF) the same as cmp against zero, and is preferred for performance and code-size.
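A quick sketch of that equivalence (ebx is assumed to hold zero, as in the excerpt):
test eax,eax ; 2 bytes; ZF/SF/PF set from eax, CF=OF=0
cmp eax,0 ; same flag results, but a longer encoding
cmp eax,ebx ; matches only because ebx happens to be zero, and reads a second register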
On P6 (PPro through Nehalem), reading long-dead registers is bad because it can lead to register-read stalls. P6 cores can only read 2 or 3 not-recently-modified architectural registers from the permanent register file per clock (to fetch operands for the issue stage: the ROB holds operands for uops, unlike on SnB-family where it only holds references to the physical register file).
Since this is from VS2010, Sandybridge wasn't released yet, so it should have put a lot of weight on tuning for Pentium II/III, Pentium-M, Core2, and Nehalem where reading "cold" registers is a possible bottleneck.
IDK if anything like this ever made sense for integer regs, but I don't know much about optimizing for CPUs older than P6.
The cmp / setnz / cmp / jnz sequence looks particularly braindead. Maybe it came from a compiler-internal canned sequence for producing a boolean value from something, and it failed to optimize a test of the boolean back into just using the flags directly? That still doesn't explain the use of ebx as a zero-register, which is also completely useless there.
Is it possible that some of that was from inline asm that returned a boolean integer (using a silly convention that wanted a zero in a register)?
Or maybe the source code was comparing two unknown values, and it was only after inlining and constant-propagation that it turned into a compare against zero? Which MSVC failed to optimize fully, so it still kept 0 as a constant in a register instead of using test?
(the rest of this was written before the question included code).
Sounds weird, or like a case of CSE / constant-hoisting run amok. i.e. treating 0 like any other constant that you might want to load once and then reg-reg copy throughout the function.
Your analysis of the data-dependency behaviour is correct: moving from a register that was zeroed a while ago essentially starts a new dependency chain.
When gcc wants two zeroed registers, it often xor-zeroes one and then uses a mov or movdqa to copy to the other.
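i.e. a pattern like this sketch:
xor eax,eax ; xor-zeroing idiom (dependency-breaking)
mov edx,eax ; second zeroed integer register via a plain copy
pxor xmm0,xmm0 ; xor-zero a vector register
movdqa xmm1,xmm0 ; second zeroed vector register via a copy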
This is sub-optimal on Sandybridge where xor-zeroing doesn't need an execution port, but a possible win on Bulldozer-family where mov can run on the AGU or ALU, but xor-zeroing still needs an ALU port.
For vector moves, it's a clear win on Bulldozer: handled in register rename with no execution unit. But xor-zeroing of an XMM or YMM register still needs an execution port on Bulldozer-family (or two for ymm, so always use xmm with implicit zero-extension).
Still, I don't think that justifies tying up a register for the duration of a whole function, especially not if it costs extra saves/restores. And not for P6-family CPUs where register-read stalls are a thing.
I'm trying to run a binary program that uses the CMPXCHG16B instruction in one place; unfortunately my Athlon 64 X2 3800+ doesn't support it. Which is great, because I see it as a programming challenge. The instruction doesn't seem that hard to implement with a code-cave jump, so that's what I did, but something didn't work: the program just froze in a loop. Maybe someone can tell me if I implemented my CMPXCHG16B wrong?
Firstly the actual piece of machine code that I'm trying to emulate is this:
f0 49 0f c7 08 lock cmpxchg16b OWORD PTR [r8]
Excerpt from the Intel manual describing CMPXCHG16B:
Compare RDX:RAX with m128. If equal, set ZF and load RCX:RBX into m128.
Else, clear ZF and load m128 into RDX:RAX.
First I replace all 5 bytes of the instruction with a jump to a code cave containing my emulation procedure; luckily the jump takes up exactly 5 bytes! The jump is actually a call instruction (e8), but it could be a jmp (e9); both work.
e8 96 fb ff ff call 0xfffffb96 (-1130)
This is a relative jump with a 32-bit signed offset encoded in two's complement; the offset is relative to the address of the next instruction and points to the code cave.
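To make the arithmetic concrete (the addresses here are made up for illustration):
; patch site = 0x00401000 (first byte of the call)
; next instruction = 0x00401005 (patch site + 5 bytes)
; disp32 = 0xfffffb96 = -1130
; target = 0x00401005 - 1130 = 0x00400b9b (the code cave)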
Next the emulation code I'm jumping to:
PUSH R10
PUSH R11
MOV r10, QWORD PTR [r8]
MOV r11, QWORD PTR [r8+8]
TEST R10, RAX
JNE ELSE
TEST R11, RDX
JNE ELSE
MOV QWORD PTR [r8], RBX
MOV QWORD PTR [r8+8], RCX
JMP END
ELSE:
MOV RAX, r10
MOV RDX, r11
END:
POP R11
POP R10
RET
Personally, I'm happy with it, and I think it matches the functional specification given in the manual. It restores the stack and the two registers r10 and r11 to their original values and then resumes execution. Alas, it does not work! That is, the code runs, but the program acts as if it's waiting for a tip and burning electricity. Which indicates my emulation was not perfect and I inadvertently broke its loop. Do you see anything wrong with it?
I notice that this is an atomic variant of it, owing to the lock prefix. I'm hoping it's something else besides contention that I did wrong. Or is there a way to emulate atomicity too?
It's not possible to emulate lock cmpxchg16b. It's sort of possible if all accesses to the target address are synchronised with a separate lock, but "all accesses" means every other instruction that touches any part of the 16-byte object, including non-atomic stores to either half and atomic read-modify-writes (like xchg, lock cmpxchg, lock add, lock xadd) on one half.
You can emulate cmpxchg16b (without lock) like you've done here, with the bugfixes from @Fifoernik's answer. That's an interesting learning exercise, but not very useful in practice, because real code that uses cmpxchg16b always uses it with a lock prefix.
A non-atomic replacement will work most of the time, because it's rare for a cache-line invalidate from another core to arrive in the small time window between two nearby instructions. This doesn't mean it's safe, it just means it's really hard to debug when it does occasionally fail. If you just want to get a game working for your own use, and can accept occasional lockups / errors, this might be useful. For anything where correctness is important, you're out of luck.
What about MFENCE? Seems to be what I need.
MFENCE before, after, or between the loads and stores won't prevent another thread from seeing a half-written value ("tearing"), or from modifying the data after your code has made the decision that the compare succeeded, but before it does the store. It might narrow the window of vulnerability, but it can't close it, because MFENCE only prevents reordering of the global visibility of our own stores and loads. It can't stop a store from another core from becoming visible to us after our loads but before our stores. That requires an atomic read-modify-write bus cycle, which is what locked instructions are for.
Doing two 8-byte atomic compare-exchanges would solve the window-of-vulnerability problem, but only for each half separately, leaving the "tearing" problem.
Atomic 16B loads/stores would solve the tearing problem, but not the atomicity problem between the load and the store. 16B loads/stores are possible with SSE on some hardware, but they're not guaranteed to be atomic by the x86 ISA the way 8B naturally-aligned loads and stores are.
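For example (a sketch; the x86 ISA doesn't guarantee that these 16-byte accesses are single atomic transactions):
movdqa xmm0,[r8] ; 16B aligned load: atomic on some CPUs, not guaranteed
movdqa [r8],xmm0 ; 16B aligned store: same caveat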
Xen's lock cmpxchg16b emulation:
The Xen virtual machine has an x86 emulator, I guess for the case where a VM starts on one machine and migrates to less-capable hardware. It emulates lock cmpxchg16b by taking a global lock, because there's no other way. If there were a way to emulate it "properly", I'm sure Xen would do that.
As discussed in this mailing list thread, Xen's solution still doesn't work when the emulated version on one core is accessing the same memory as the non-emulated instruction on another core. (The native version doesn't respect the global lock).
See also this patch on the Xen mailing list that changes the lock cmpxchg8b emulation to support both lock cmpxchg8b and lock cmpxchg16b.
I also found that KVM's x86 emulator doesn't support cmpxchg16b either, according to the search results for emulate cmpxchg16b.
I think all this is good evidence that my analysis is correct, and that it's not possible to emulate it safely.
I see these things wrong with your code to emulate the cmpxchg16b instruction:
You need to use cmp instead of test to get a correct comparison: test performs a bitwise AND, so it tells you whether the operands share any set bits, not whether they are equal (see the snippet after this list).
You need to save/restore all flags except the ZF. The manual mentions:
The CF, PF, AF, SF, and OF flags are unaffected.
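A minimal illustration of the first point, using the registers from the question:
test r10,rax ; ZF=1 only if (r10 AND rax) == 0 -- not an equality test
cmp r10,rax ; ZF=1 exactly when r10 == rax -- what the emulation needs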
The manual contains the following:
IF (64-Bit Mode and OperandSize = 64)
THEN
TEMP128 ← DEST
IF (RDX:RAX = TEMP128)
THEN
ZF ← 1;
DEST ← RCX:RBX;
ELSE
ZF ← 0;
RDX:RAX ← TEMP128;
DEST ← TEMP128;
FI;
FI
So to really write code that "matches the functional specification given in the manual", a write to the m128 is required. Although this particular write is part of the locked version lock cmpxchg16b, it of course does nothing for the atomicity of the emulation! A straightforward emulation of lock cmpxchg16b is thus not possible. See @Peter Cordes' answer.
This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically. To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison
ELSE:
MOV RAX, r10
MOV RDX, r11
MOV QWORD PTR [r8], r10
MOV QWORD PTR [r8+8], r11
END:
As a learning experience, I'm writing a boot-loader for BIOS in NASM, in 16-bit real mode, running under the Qemu x86 emulator.
BIOS loads your boot-sector at address 0x7C00. NASM assumes you start at 0x0, so your labels are useless unless you do something like specify the origin with [org 0x7C00] (or presumably other techniques). But, when you load the 2nd stage boot-loader, its RAM origin is different, which complicates the hell out of using labels in that newly loaded code.
What's the recommended way to deal with this? Is this linker territory? Should I be using segment registers instead of org?
Thanks in advance!
p.s. Here's the code that works right now:
[bits 16]
[org 0x7c00]
LOAD_ADDR: equ 0x9000 ; This is where I'm loading the 2nd stage in RAM.
start:
mov bp, 0x8000 ; set up the stack
mov sp, bp ; relatively out of the way
call disk_load ; load the new instructions
; at 0x9000
jmp LOAD_ADDR
%include "disk_load.asm"
times 510 - ($ - $$) db 0
dw 0xaa55 ;; end of bootsector
seg_two:
;; this is ridiculous. Better way?
mov cx, LOAD_ADDR + print_j - seg_two
jmp cx
jmp $
print_j:
mov ah, 0x0E
mov al, 'k'
int 0x10
jmp $
times 2048 db 0xf
You may be making this harder than it is (not that this is trivial by any means!)
Your labels work fine and will continue to work fine. Remember that, if you look under the hood at the machine code generated, your short jumps (everything after seg_two in what you've posted) are relative jumps. This means the assembler doesn't actually need to compute a real address; it simply calculates the offset from the current opcode. Absolute addresses are an entirely different story, however, which is why things get awkward when you load your code into RAM at 0x9000.
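A sketch of the difference, using the labels from your code:
jmp print_j ; relative: encodes only a displacement, works wherever the code is loaded
mov cx, LOAD_ADDR + print_j - seg_two ; absolute: must be computed by hand
jmp cx ; only correct if the code really sits at LOAD_ADDR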
Personally, when writing precisely the kind of code that you are, I would separate the code. The boot sector stops at the dw 0xaa55 and the 2nd stage gets its own file with an ORG 0x9000 at the top.
When you assemble these to flat binaries, you simply concatenate them together. Essentially, that's what you're doing now, except that you are getting the assembler to do it for you.
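For instance, a sketch of the split (the file names and the exact nasm/cat invocations are just an example):
; boot.asm -- assemble with: nasm -f bin boot.asm -o boot.bin
[bits 16]
[org 0x7c00]
start:
; ... set up the stack, call disk_load to read the 2nd stage to 0x9000 ...
jmp 0x9000
times 510 - ($ - $$) db 0
dw 0xaa55
; stage2.asm -- assemble with: nasm -f bin stage2.asm -o stage2.bin
[bits 16]
[org 0x9000]
seg_two:
mov ah, 0x0E ; labels here now resolve relative to 0x9000
mov al, 'k'
int 0x10
jmp $
Then concatenate the two flat binaries: cat boot.bin stage2.bin > boot_disk.img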
Hope this makes sense. :)
I was wondering what are "semantic NOPs" in assembly?
Code that isn't an actual nop but doesn't affect the behavior of the program.
In C, the following sequence could be thought of as a semantic NOP:
{
// Since none of these have side effects, they are effectively no-ops
int x = 5;
int y = x * x;
int z = y / x;
}
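An analogous sketch in x86 assembly, assuming eax, ecx, edx, and the flags are all dead at this point:
mov eax,5 ; x = 5
imul eax,eax ; y = x * x = 25
cdq ; sign-extend eax into edx:eax for the divide
mov ecx,5
idiv ecx ; z = y / x = 5; no result is ever read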
They are instructions that have no effect, like a NOP, but take more bytes. They're useful for aligning code to a cache-line boundary. An instruction like lea edi,[edi+0] is an example: it would take 7 NOPs to fill the same number of bytes, but it takes only 1 cycle instead of 7.
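For illustration, a rough sketch of the size trade-off (encodings shown for common forms; the exact bytes depend on the assembler):
; 1 byte: nop ; 90
; 3 bytes: lea edi,[edi+0x0] ; 8D 7F 00 (disp8 form)
; 7 bytes: lea edi,[edi*1+0x0] ; 8D 3C 3D 00 00 00 00 (SIB + disp32, no base)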
A semantic NOP is a collection of machine-language instructions that have no effect at all, or almost no effect (most instructions change condition codes), whose only purpose is obfuscation of what the program is actually doing.
Code that executes but doesn't do anything meaningful. These are also called "opaque predicates," and are used most often by obfuscators.
A true "semantic nop" is an instruction which has no effect other than taking some time and advancing the program counter. Many machines where register-to-register moves do not affect flags, for example, have numerous instructions that will move a register to itself. On the 8088, for example, any of the following would be semantic NOPs:
mov al,al
mov bl,bl
mov cl,cl
...
mov ax,ax
mov bx,bx
mov cx,cx
...
xchg ax,ax
xchg bx,bx
xchg cx,cx
...
Note that all of the above except for "xchg ax,ax" are two-byte instructions. Intel has therefore declared that "xchg ax,ax" should be used when a one-byte NOP is required. Indeed, if one assembles "xchg ax,ax" and disassembles it, it will disassemble as "NOP".
Note that in some cases an instruction or instruction sequence may have potential side-effects, but nonetheless be more desirable than the usual "nop". On the 6502, for example, if one needs a 7-cycle delay and the stack pointer is valid but the top-of-stack value is irrelevant, a PHP followed by a PLP will kill seven cycles using only two bytes of code. If the top-of-stack value isn't a spare byte of RAM, though, the sequence would fail.