NASM relocation truncated to fit: R_386_16 against `.data' [duplicate] - linux

Suppose I have the following declared:
section .bss
buffer resb 1
And these instructions follow in section .text:
mov al, 5 ; mov-immediate
mov [buffer], al ; store
mov bl, [buffer] ; load
mov cl, buffer ; mov-immediate?
Am I correct in understanding that bl will contain the value 5, and cl will contain the memory address of the variable buffer?
I am confused about the differences between
moving an immediate into a register,
moving a register into an immediate (what goes in, the data or the address?) and
moving an immediate into a register without the brackets
For example, mov cl, buffer vs mov cl, [buffer]
UPDATE: After reading the responses, I suppose the following summary is accurate:
mov edi, array puts the memory address of the zeroth array index in edi. i.e. the label address.
mov byte [edi], 3 puts the VALUE 3 into the zeroth index of the array
after add edi, 3, edi now contains the memory address of the 3rd index of the array
mov al, [array] loads the DATA at the zeroth index into al.
mov al, [array+3] loads the DATA at the third index into al.
mov [al], [array] is invalid because x86 can't encode 2 explicit memory operands, and because al is only 8 bits and can't be used even in a 16-bit addressing mode. Referencing the contents of a memory location. (x86 addressing modes)
mov array, 3 is invalid, because you can't say "Hey, I don't like the offset at which array is stored, so I'll call it 3". An immediate can only be a source operand.
mov byte [array], 3 puts the value 3 into the zeroth index (first byte) of the array. The byte specifier is needed to avoid ambiguity between byte/word/dword for instructions with memory, immediate operands. That would be an assemble-time error (ambiguous operand size) otherwise.
Please mention if any of these is false. (editor's note: I fixed syntax errors / ambiguities so the valid ones actually are valid NASM syntax. And linked other Q&As for details)

The square brackets essentially work like a dereference operator (e.g., like * in C).
So, something like
mov REG, x
moves the value of x into REG, whereas
mov REG, [x]
moves the value of the memory location where x points to into REG. Note that if x is a label, its value is the address of that label.
As for your question:
Am I correct in understanding that bl will contain the value 5, and cl
will contain the memory address of the variable buffer?
Yes, you are correct. But beware that, since CL is only 8 bits wide, it will only contain the least significant byte of the address of buffer.
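To make the distinction concrete, here is a minimal sketch (NASM, 32-bit, reusing the question's buffer label) showing both forms side by side:
section .bss
buffer resb 1
section .text
mov al, 5
mov [buffer], al        ; store 5 into the byte at buffer
mov ecx, buffer         ; ecx = the address of buffer (the label's value)
movzx edx, byte [ecx]   ; edx = the byte stored at that address, i.e. 5
mov bl, [buffer]        ; bl = 5 as well; the same load written with the label directly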

Indeed, your thought is correct. That is, bl will contain 5 and cl the memory address of buffer (in fact, the label buffer is a memory address itself).
Now, let me explain the differences between the operations you mentioned:
Moving an immediate into a register can be done with mov reg, imm. What may be confusing is that labels, e.g. buffer, are themselves immediate values that hold an address.
You cannot really move a register into an immediate, since immediate values are constants, like 2 or FF1Ah. What you can do is move a register to the place the constant points to, as in mov [const], reg.
You can also use indirect addressing, like mov reg2, [reg1]: provided reg1 points to a valid location, it transfers the value reg1 points to into reg2.
So, mov cl, buffer will move the address of buffer to cl (which may or may not give the correct address, since cl is only one byte wide), whereas mov cl, [buffer] will get the actual value.
Summary
When you use [a], you refer to the value at the place a points to. For example, if a is F5B1, then [a] refers to the value stored at address F5B1 in RAM.
Labels are addresses, i.e. values like F5B1.
Values stored in registers do not have to be referenced as [reg], because registers do not have memory addresses. In fact, a bare register operand can be thought of like an immediate: it supplies its value directly.

You are getting the idea. However, there are a few details worth bearing in mind:
Addresses can be, and usually are, larger than what 8 bits can hold (cl is 8-bit, cx is 16-bit, ecx is 32-bit, rcx is 64-bit). So cl will most likely not equal the address of the variable buffer; it will only hold the least significant 8 bits of that address.
If there are interrupt routines or threads that can preempt the above code and/or access buffer, the value in bl may differ from 5. Broken interrupt routines may actually affect any register when they fail to preserve register values.
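As an aside, this width issue is also where link errors like the one in the question's title come from. A rough sketch (32-bit NASM; the exact message depends on your toolchain):
mov ecx, buffer   ; ecx = the full 32-bit address of buffer
mov cx, buffer    ; only the low 16 bits fit; the linker will likely complain "relocation truncated to fit: R_386_16"
mov cl, buffer    ; only the low 8 bits fit; same kind of truncation error, and rarely what you want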

For all instructions that use an immediate value as the source operand when writing to a RAM location (or when computing with one), we have to specify how many bytes we want to access, because the assembler cannot know whether we want to access only one byte, a word, or a doubleword when the immediate is a small value, as the following instructions show.
array db 0FFh, 0FFh, 0FFh, 0FFh
mov byte [array], 3
results:
array db 03h, 0FFh, 0FFh, 0FFh
....
mov word [array], 3
results:
array db 03h, 00h, 0FFh, 0FFh
....
mov dword [array], 3
results:
array db 03h, 00h, 00h, 00h
Dirk

Related

Why is the last input number entered on the stack at each iteration? [duplicate]


Endianness followed in NASM 64 bit programming

I have 2 situations :
I.
arr dq 1234567887654321H
mov rsi,arr
mov rbx,[rsi]
Now, as we know, rsi always points to a byte location in memory, and x86 is little-endian. Does rsi point to 21H, so that only this 21H gets into rbx, or does the complete value in arr get transferred to rbx?
II.
tempbuff resb 16
arr resb 1234567887654321H
mov rbx,qword[arr]
mov rsi,tempbuff
mov [rsi],rbx
The statements above are taken from different sections and combined here so as to focus on the important details.
Now, from the above statements, rbx stores the entire contents of arr.
rsi points to the first memory location of tempbuff. Does mov [rsi],rbx then store the entire content of rbx to tempbuff, or does it simply store the lowest byte of rbx (here 21H) into the location pointed to by rsi (a 1-byte location)?
For case 1:
mov rbx,[rsi]
NASM implicitly resolves the size of the memory operand [rsi] from the other operand, which here is a 64-bit register. Hence rsi is a 64-bit address pointing at a 64-bit value [rsi], which is moved into the 64-bit rbx register.
This could be stated explicitly in NASM as
mov rbx,qword [rsi]
It implies that the statement in the question,
"rsi always points to 1 byte of location in memory",
is incorrect here: the size of the access is determined by the destination register, not fixed at one byte. A genuine single-byte load would look like
movzx rbx, byte [rsi]
where the first byte pointed to by the 64-bit rsi address becomes the least significant byte of rbx (and the upper bytes are zeroed). A plain mov rbx, byte [rsi] is rejected by NASM because the operand sizes don't match.
For case 2:
mov [rsi],rbx
The whole 64-bit rbx value is moved into the memory pointed to by the 64-bit rsi address.
From the chat discussion I can conclude that the OP's real doubt was about how little-endianness affects registers and memory.
exampleQuadWord dq 0102030405060708H ; this is represented in memory as 0807060504030201
exampleQWordWithByte dq 1 ; this is represented in memory as 0100000000000000
In a simplistic example
mov rbx,2
mov rsi, exampleQWordWithByte
mov [rsi],rbx
; exampleQWordWithByte in memory is now 0200000000000000
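A minimal sketch along the same lines (NASM, 64-bit; the label name is made up) showing the byte order versus a whole-register load:
section .data
val dq 1122334455667788h
section .text
mov rsi, val
mov al, [rsi]       ; al = 88h: the lowest-order byte sits at the lowest address
mov bl, [rsi+7]     ; bl = 11h: the highest-order byte is stored last
mov rbx, [rsi]      ; rbx = 1122334455667788h again: a qword load undoes the byte order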

Getting confused about usage of labels in assembly for X86_64 linux: Why should we write mov [digit], al, but not mov digit, al?

Here's my code:
section .data
digit db 0,10
section .text
global _start
_start:
call _printRAXDigit
mov rax, 60
mov rdx, 0
syscall
_printRAXDigit:
add rax, 48
mov [digit], al
mov rax, 1
mov rdi, 1
mov rsi, digit
mov rdx, 2
syscall
ret
I have a question about the difference between [digit] and digit.
I have learned that labels (like digit in the code), represent the memory address of the data, and the operator "[]" acts like something to dereference the pointer, so it will load the value that the label points at to the destination.
For instance, mov rax, [digit] will throw 0 to the rax register because digit points at the first element of the data (in this case, the integer 0).
However, in my code, it works when I write mov [digit], al, which means "store the value in al to the memory address digit", but I have no idea why we should use "[]" in this case. The first argument of mov must be a destination (like a register or a memory address), so I think it should be mov digit, al rather than mov [digit], al. It doesn't make sense to me why we use a value to get the value from another place rather than use a memory address to get the value.
So that's all of my question. Please give me any response about where my thinking is wrong or any correction about my concept of labels.
In NASM syntax (there are assemblers which use different notation, e.g. MASM/TASM use a different flavor of Intel syntax, and gas uses AT&T syntax) the following x86 instructions ...
mov esi, someAddress
mov esi, [someAddress]
mov [someAddress], esi
mov someAddress, esi ; see below
... (would) have the following meaning:
mov esi, someAddress
Write the number that represents the address where someAddress is stored to the register esi. So if someAddress is stored at address 1234 the value 1234 is written to esi.
mov esi, [someAddress]
Write the content of the memory to esi. So if someAddress is stored at address 1234 and the value stored at address 1234 is 5678 the value 5678 is written to esi.
You might also say: The value of the variable someAddress (a variable normally is nothing but the content of the memory at a certain address) is written to the esi register.
mov [someAddress], esi
Write the content of esi to the memory at address someAddress.
You might also say: Write the value of esi to the variable someAddress.
mov someAddress, esi
Would mean: change the constant number which represents the address someAddress to the value of esi.
So if someAddress is located at address 1234 and esi contains the value 5678 the instruction would mean:
Change the mathematical constant 1234 in a way that 1234 = 5678 after that change.
This is of course stupid because the mathematical constants 1234 and 5678 will never be equal. For this reason the x86 CPU has no such instruction.
(There are CPUs having similar instructions. On the SPARC CPUs for example instructions assigning a value to the zero register (which means: "assign a value to the constant zero") are used if you only want to have the instruction's side effects - like setting the flags - but you are not interested in the result itself.)
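Putting that together for the question's digit label, a minimal sketch (NASM, x86-64) of which forms are accepted and what they mean:
mov rsi, digit      ; rsi = the address of digit (what the write syscall expects in rsi)
mov al, [digit]     ; al = the byte stored at digit (0 in the question's data)
mov [digit], al     ; store al back into the byte at digit
; mov digit, al     ; rejected by NASM: a label/immediate cannot be a destination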

Optimizing a loop that prints a counter

I have a very small loop program that prints the numbers from 5000000 down to 1. I want to make it run as fast as possible.
I'm learning linux x86-64 assembly with NASM.
global main
extern printf
main:
push rbx
mov rax,5000000d
print:
push rax
push rcx
mov rdi, format
mov rsi, rax
call printf
pop rcx
pop rax
dec rax
jnz print
pop rbx
ret
format:
db "%ld", 10, 0
The call to printf completely dominates the run-time of even that highly inefficient loop. (Did you notice that you push/pop rcx even though you never use it anywhere? Perhaps that's left over from using the slow LOOP instruction).
To learn more about writing efficient x86 asm, see Agner Fog's Optimizing Assembly guide. (And his microarchitecture guide, too, if you want to really get into the details of specific CPUs and how they're different: what's optimal on one uarch might not be on another. e.g. IMUL r64 has much better throughput and latency on Intel CPUs than on AMD, but CMOV and ADC are 2 uops on Intel pre-Broadwell, with 2-cycle latency, vs. 1 uop on AMD, since 3-input ALU m-ops (FLAGS + both registers) aren't a problem for AMD.) Also see other links in the x86 tag wiki.
Purely optimizing the loop without changing the 5M calls to printf is useful only as an example of how to write a loop properly, not to actually speed up this code. But let's start with that:
; trivial fixes to loop efficiently while calling the same slow function
global main
extern printf
main:
push rbx
mov ebx, 5000000 ; don't waste a REX prefix for constants that fit in 32 bits
.print:
;; removed the push/pops from inside the loop.
; Use call-preserved regs instead of saving/restoring stuff inside a loop yourself.
mov edi, format ; static data / code always has a 32-bit address
mov esi, ebx
xor eax, eax ; The x86-64 SysV ABI requires al = number of FP args passed in FP registers for variadic functions
call printf
dec ebx
jnz .print
pop rbx ; restore rbx, the one call-preserved reg we actually used.
xor eax,eax ; successful exit status.
ret
section .rodata ; it's usually best to put constant data in a separate section of the text segment, not right next to code.
format:
db "%ld", 10, 0
To speed this up, we should take advantage of the redundancy in converting consecutive integers to strings. Since "5000000\n" is only 8 bytes long (including the newline), the string representation fits in a 64-bit register.
We can store that string into a buffer and increment a pointer by the string length. (Since it will get shorter for smaller numbers, just keep the current string length in a register, which you can update in the special-case branch where it changes.)
We can decrement the string representation in-place to avoid ever (re)doing the process of dividing by 10 to turn an integer into a decimal string.
Since carry/borrow doesn't naturally propagate inside a register, and the AAS instruction isn't available in 64-bit mode (and only worked on AX, not even EAX, and is slow), we have to do it ourselves. We're decrementing by 1 every time, so we know what's going to happen. We can handle the least-significant digit by unrolling 10 times, so there's no branching to handle it.
Also note that since we want the digits in printing order, carry goes in the wrong direction anyway, since x86 is little-endian. If there were a good way to take advantage of having our string in the other byte order, we could maybe use BSWAP or MOVBE. (But note that MOVBE r64 is 3 fused-domain uops on Skylake, 2 of them ALU uops. BSWAP r64 is also 2 uops.)
Perhaps we should be doing odd/even counters in parallel, in two halves of an XMM vector register. But that stops working well once the string is shorter than 8B. Storing one number-string at a time, we can easily overlap. Still, we could do the carry-propagation stuff in a vector reg and store the two halves separately with MOVQ and MOVHPS. Or since 4/5th of the numbers from 0 to 5M are 7 digits, it might be worth having code for the special case where we can store a whole 16B vector of two numbers.
A better way to handle shorter strings: SSSE3 PSHUFB to shuffle the two strings to left-packed in a vector register, then a single MOVUPS to store two at once. The shuffle mask only needs to be updated when the string length (number of digits) changes, so the infrequently-executed carry-handling special case code can do that, too.
Vectorization of the hot part of the loop should be very straightforward and cheap, and should just about double performance.
;;; Optimized version: keep the string data in a register and modify it
;;; instead of doing the whole int->string conversion every time.
section .bss
printbuf: resb 1024*128 + 4096 ; Buffer size ~= half L2 cache size on Intel SnB-family. Or use a giant buffer that we write() once. Or maybe vmsplice to give it away to the kernel, since we only run once.
global main
extern printf
main:
push rbx
; use some REX-only regs for values that we're always going to use a REX prefix with anyway for 64-bit operand size.
mov rdx, `5000000\n` ; (NASM string constants as integers work like little-endian, so AL = '5' = 0x35 and the high byte holds '\n' = 10). Note that YASM doesn't support back-ticks for C-style backslash processing.
mov r9, 1<<56 ; decrement by 1 in the 2nd-last byte: LSB of the decimal string
;xor r9d, r9d
;bts r9, 56 ; IDK if this code-size optimization outside the loop would help or not.
mov eax, 8 ; string length.
mov edi, printbuf
.storeloop:
;; rdx = "????x9\n". We compute the start value for the next iteration, i.e. counter -= 10 in rdx.
mov r8, rdx
;; r8 = rdx. We modify it to have each last digit from 9 down to 0 in sequence, and store those strings in the buffer.
;; The string could be any length, always with the first ASCII digit in the low byte; our other constants are adjusted correctly for it
;; narrower than 8B means that our stores overlap, but that's fine.
;; Starting from here to compute the next unrolled iteration's starting value takes the `sub r8, r9` instructions off the critical path, vs. if we started from r8 at the bottom of the loop. This gives out-of-order execution more to play with.
;; It means each loop iteration's sequence of subs and stores are a separate dependency chain (except for the store addresses, but OOO can get ahead on those because we only pointer-increment every 2 stores).
mov [rdi], r8
sub r8, r9 ; r8 = "xxx8\n"
mov [rdi + rax], r8 ; defer p += len by using a 2-reg addressing mode
sub r8, r9 ; r8 = "xxx7\n"
lea edi, [rdi + rax*2] ; if we had len*3 in another reg, we could defer this longer
;; our static buffer is guaranteed to be in the low 31 bits of address space so we can safely save a REX prefix on the LEA here. Normally you shouldn't truncate pointers to 32-bits, but you asked for the fastest possible. This won't hurt, and might help on some CPUs, especially with possible decode bottlenecks.
;; repeat that block 3 more times.
;; using a short inner loop for the 9..0 last digit might be a win on some CPUs (like maybe Core2), depending on their front-end loop-buffer capabilities if the frontend is a bottleneck at all here.
;; anyway, then for the last one:
mov [rdi], r8 ; r8 = "xxx1\n"
sub r8, r9
mov [rdi + rax], r8 ; r8 = "xxx0\n"
lea edi, [rdi + rax*2]
;; compute next iteration's RDX. It's probably a win to interleave some of this into the loop body, but out-of-order execution should do a reasonably good job here.
mov rcx, r9
shr rcx, 8 ; maybe hoist this constant out, too
; rcx = 1 in the second-lowest digit
sub rdx, rcx
; detect carry when '0' (0x30) - 1 = 0x2F by checking the low bit of the high nibble in that byte.
shl rcx, 5
test rdx, rcx
jz .carry_second_digit
; .carry_second_digit is some complicated code to propagate carry as far as it needs to go, up to the most-significant digit.
; when it's done, it re-enters the loop at the top, with eax and r9 set appropriately.
; it only runs once per 100 digits, so it doesn't have to be super-fast
; maybe only do buffer-length checks in the carry-handling branch,
; in which case the jz .carry can be jnz .storeloop
cmp edi, esi ; } while(p < endp)
jbe .storeloop
; write() system call on the buffer.
; Maybe need a loop around this instead of doing all 5M integer-strings in one giant buffer.
pop rbx
xor eax,eax ; successful exit status.
ret
This is not fully fleshed-out, but should give an idea of what might work well.
If vectorizing with SSE2, probably use a scalar integer register to keep track of when you need to break out and handle carry. i.e. a down-counter from 10.
Even this scalar version probably comes close to sustaining one store per clock, which saturates the store port. They're only 8B stores (and when the string gets shorter, the useful part is shorter than that), so we're definitely leaving performance on the table if we don't bottleneck on cache misses. But with a 3GHz CPU and dual channel DDR3-1600 (~25.6GB/s theoretical max bandwidth), 8B per clock is about enough to saturate main memory with a single core.
We could parallelize this, and break the 5M .. 1 range up into chunks. With some clever math, we can figure out what byte to write the first character of "2500000\n", or we could have each thread call write() itself in the correct order. (Or use the same clever math to have them call pwrite(2) independently with different file offsets, so the kernel takes care of all the synchronization for multiple writers to the same file.)
You're essentially printing a fixed string. I'd pre-generate that string into one long constant.
The program then becomes a single call to write (or a short loop to deal with incomplete writes).
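A minimal sketch of that idea, assuming the text has been pre-generated into a file named numbers.txt (e.g. with seq 5000000 -1 1 > numbers.txt; the label names are made up):
global _start
section .rodata
blob: incbin "numbers.txt"      ; roughly 39 MB of pre-generated "5000000\n" ... "1\n"
blob_len equ $ - blob
section .text
_start:
mov eax, 1                      ; write(1, blob, blob_len)
mov edi, 1
mov rsi, blob
mov edx, blob_len
syscall                         ; a robust version would loop on short writes
mov eax, 60                     ; exit(0)
xor edi, edi
syscall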

string comparison in 8086

I have a problem with this question. I don't know what it wants from me.
Question : Write a procedure that compares a source string at DS:SI to a destination string at ES:DI and sets the flags accordingly. If the source is less than the destination, carry flag is set. if string are equal , the zero flag is set. if the source is greater than the destination , the zero and carry flags are both cleared.
My Answer :
MOV ESI , STRING1
MOV EDI, STRING2
MOV ECX, COUNT
CLD
REPE CMPSB
Still I am not sure about it. Is it true or should I try something else ?
P.S.: I don't understand why people voted this question down. What is wrong with it? I think we are all here to learn, aren't we? Am I missing something?
If the problem statement says the pointers are already in SI and DI when you're called, you shouldn't clobber them.
16-bit code often doesn't stick to a single calling convention for all functions, and passing (the first few) args in registers is usually good (fewer instructions, and avoids store/reload). 32-bit x86 calling conventions usually use stack args, but that's obsolete. Both the Windows x64 and Linux/Mac x86-64 System V ABIs / calling conventions use register args.
The problem statement doesn't mention a count, though. So you're implementing strcmp for strings terminated by a zero-byte, rather than memcmp for known-length blocks of memory. You can't use a single rep instruction, since you need to check for non-equal and for end-of-string. If you just pass some large size, and the strings are equal, repe cmpsb would continue past the terminator.
repe cmpsb is usable if you know the length of either string. e.g. take a length arg in CX to avoid the problem of running past the terminator in both strings.
But for performance, repe cmpsb isn't fast anyway (like 2 to 3 cycles per compare, on Skylake vs. Ryzen. Or even 4 cycles per compare on Bulldozer-family). Only rep movs and rep stos are efficient on modern CPUs, with optimized microcode that copies or stores 16 (or 32 or 64) bytes at a time.
There are 2 major conventions for storing strings in memory: Explicit-length strings (pointer + length) like C++ std::string, and implicit length strings where you just have a pointer, and the end of string is marked by a sentinel / terminator. (Like C char* that uses a 0 byte, or DOS string-print functions that use '$' as a terminator.)
A useful observation is that you only need to check for the terminator in one of the strings. If the other string has a terminator and this one doesn't, it will be a mismatch.
So you want to load a byte into a register from one string, and check it for the terminator and against memory for the other string.
(If you need to actually use ES:DI instead of just DI with the default DS segment base, you can use cmp al, [es: bx + di] (NASM syntax, adjust as needed like maybe cmp al, es: [bx + di] I think). Probably the question intended for you to use lodsb and scasb, because scasb uses ES:DI.)
;; inputs: str1 pointer in DI, str2 pointer in SI
;; outputs: BX = mismatch index, or one-past-the-terminators.
;; FLAGS: ZF=1 for equal strings (je), ZF=0 for mismatch (jne)
;; clobbers: AL (holds str1's terminator or mismatching byte on return)
strcmp:
xor bx, bx
.innerloop: ; do {
mov al, [si + bx] ; load a source byte
cmp al, [di + bx] ; check it against the other string
jne .mismatch ; if (str1[i] != str2[i]) break;
inc bx ; index++
test al, al ; check for 0. Use cmp al, '$' for a $ terminator
jnz .innerloop ; }while(str1[i] != terminator);
; fall through (ZF=1) means we found the terminator
; in str1 *and* str2 at the same position, thus they match
.mismatch: ; we jump here with ZF=0 on mismatch
; sete al ; optionally create an integer in AL from FLAGS
ret
Usage: put pointers in SI/DI, call strcmp / je match, because the match / non-match state is in FLAGS. If you want to turn the condition into an integer, 386 and later CPUs allow sete al to create a 0 or 1 in AL according to the equals condition (ZF==1).
Using sub al, [mem] instead of cmp al, [mem], we'd get al = str1[i] - str2[i], giving us a 0 only if the strings matched. If your strings only hold ASCII values from 0..127, that can't cause signed overflow, so you can use it as a signed return value that actually tells you which string sorts before/after the other. (But if there could be high-ASCII 128..255 bytes in the string, we'd need to zero- or sign-extend to 16-bit first to avoid signed overflow for a case like (unsigned)5 - (unsigned)254 = (signed)+7 because of 8-bit wraparound.)
Of course, with our FLAGS return value, the caller can already use ja or jb (unsigned compare results), or jg / jl if they want to treat the strings as holding signed char. This works regardless of the range of input bytes.
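For example, a caller might consume the FLAGS result like this (a usage sketch; string_a and string_b are hypothetical 0-terminated strings, with the source string in SI as in the strcmp above):
mov si, string_a
mov di, string_b
call strcmp
je .equal           ; ZF=1: the strings matched
jb .a_sorts_first   ; CF=1: string_a's byte was below string_b's at the first mismatch (unsigned)
; fall through: string_a sorts after string_b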
Or inline this loop so jne mismatch jumps somewhere useful directly.
16-bit addressing modes are limited, but BX can be a base, and SI and DI can both be indices. I used an index increment instead of inc si and inc di. Using lodsb would also be an option, and maybe even scasb to compare it to the other string. (And then check the terminator.)
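As a rough sketch of that lodsb/scasb variant (assuming DS:SI = source string, ES:DI = destination string, and DF cleared with cld beforehand, as in the question's setup; shown only for structure, not tuned):
strcmp_stringops:
.loop:
lodsb               ; al = [ds:si], si++
scasb               ; compare al with [es:di], di++
jne .done           ; mismatch: return with ZF=0 (CF from the compare gives the ordering)
test al, al         ; was that byte the 0 terminator?
jnz .loop           ; no: keep comparing
; fall through with ZF=1: both strings ended at the same position, so they are equal
.done:
ret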
Performance
Indexed addressing modes can be slower on some modern x86 CPUs, but this does save instructions in the loop (so it's good for true 8086 where code-size matters). Although to really tune for 8086, I think lodsb / scasb would be your best bet, replacing the mov load and cmp al, [mem], and also the inc bx. Just remember to use cld outside the loop if your calling convention doesn't guarantee that.
If you care about modern x86, use movzx eax, byte [si+bx] to break the false dependency on the old value of EAX, for CPUs that don't rename partial registers separately. (Breaking the false dep is especially important if you use sub al, [str2] because that would turn it into a 2-cycle loop-carried dependency chain through EAX, on CPUs other than PPro through Sandybridge. IvyBridge and later doesn't rename AL separately from EAX, so mov al, [mem] is a micro-fused load+merge uop.)
If the cmp al,[bx+di] micro-fuses the load, and macro-fuses with jne into one compare-and-branch uop, the whole loop could be only 4 uops total on Haswell, and could run at 1 iteration per clock for large inputs. The branch mispredict at the end will make small-input performance worse, unless branching goes the same way every time for a small enough input. See https://agner.org/optimize/. Recent Intel and AMD can do 2 loads per clock.
Unrolling could amortize the inc bx cost, but that's all. With a taken + not-taken branch inside the loop, no current CPUs can run this faster than 1 cycle per iteration. (See Why are loops always compiled into "do...while" style (tail jump)? for more about the do{}while loop structure). To go faster we'd need to check multiple bytes at once.
Even 1 byte / cycle is very slow compared to 16 bytes per 1 or 2 cycles with SSE2 (using some clever tricks to avoid reading memory that might fault).
See https://www.strchr.com/strcmp_and_strlen_using_sse_4.2 for more about using x86 SIMD for string compare, and also glibc's SSE2 and later optimized string functions.
GNU libc's fallback scalar strcmp implementation looks decent (translated from AT&T to Intel syntax, but with the C preprocessor macros and stuff left in. L() makes a local label).
It only uses this when SSE2 or better isn't available. There are bithacks for checking a whole 32-bit register for any zero bytes, which could let you go faster even without SIMD, but alignment is a problem. (If the terminator could be anywhere, you have to be careful when loading multiple bytes at once to not read from any memory pages that you aren't sure contain at least 1 byte of valid data, otherwise you could fault.)
strcmp:
mov ecx,DWORD PTR [esp+0x4]
mov edx,DWORD PTR [esp+0x8] # load pointer args
L(oop): mov al,BYTE PTR [ecx] # movzx eax, byte ptr [ecx] would avoid a false dep
cmp al,BYTE PTR [edx]
jne L(neq)
inc ecx
inc edx
test al, al
jnz L(oop)
xor eax, eax
/* when strings are equal, pointers rest one beyond
the end of the NUL terminators. */
ret
L(neq): mov eax, 1
mov ecx, -1
cmovb eax, ecx
ret
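For reference, the zero-byte bithack mentioned above looks roughly like this in NASM (a sketch of the classic (v - 0x01010101) & ~v & 0x80808080 test, which is non-zero exactly when some byte of v is zero):
; eax holds 4 bytes loaded from the string
mov edx, eax
not edx                 ; edx = ~v
sub eax, 0x01010101     ; eax = v - 0x01010101
and eax, edx
and eax, 0x80808080     ; ZF=0 here <=> at least one byte of the original value was zero
jnz found_zero_byte     ; (hypothetical label: go handle/locate the terminator)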

Resources