Can I use Rust instead of C++ in OS development?

I want to know whether Rust-compiled code contains OS-dependent code or not (I'm not talking about things like print).
For example:
let x = (4i,2i,3i)
let y = (3i,4i,4i)
Now if I compare x == y, does that call into some library routine, and if so, is that routine platform-dependent?
Edit:
As in C++, where we should not use new, try/catch, or any of the standard library: what are the things we should avoid while writing in Rust?

You can see the code that the Rust compiler will generate for a snippet like that yourself, without even having to install Rust locally.
Just visit the web-based playpen and type your snippet in there. You can run the program (and thus observe what it does via print statements) or, more usefully in this case, compile the program down to the generated assembly and then inspect it to see whether it calls into underlying system routines.
At this link, http://is.gd/Be6YVJ, I have already put such a program into the playpen. (See the bottom of this post for the actual program text.)
If you hit the asm button, you can then see the assembly for each routine. (I have added inline(never) attributes to the relevant functions to ensure that they do not get optimized away by the compiler.)
Here is the generated assembly for bar below, a function that calls out to a higher-order function to get a pair of 3-tuples, and then compares them for equality:
.section .text._ZN3bar20h2bb2fd5b9c9e987beaaE,"ax",#progbits
.align 16, 0x90
.type _ZN3bar20h2bb2fd5b9c9e987beaaE,#function
_ZN3bar20h2bb2fd5b9c9e987beaaE:
.cfi_startproc
cmpq %fs:112, %rsp
ja .LBB0_2
movabsq $56, %r10
movabsq $0, %r11
callq __morestack
retq
.LBB0_2:
subq $56, %rsp
.Ltmp0:
.cfi_def_cfa_offset 64
movq %rdi, %rax
leaq 8(%rsp), %rdi
callq *%rax
movq 8(%rsp), %rcx
xorl %eax, %eax
cmpq 32(%rsp), %rcx
jne .LBB0_5
movq 40(%rsp), %rcx
cmpq %rcx, 16(%rsp)
jne .LBB0_5
movq 48(%rsp), %rax
cmpq %rax, 24(%rsp)
sete %al
.LBB0_5:
addq $56, %rsp
retq
.Ltmp1:
.size _ZN3bar20h2bb2fd5b9c9e987beaaE, .Ltmp1-_ZN3bar20h2bb2fd5b9c9e987beaaE
.cfi_endproc
So you can see that the only thing it calls out to is a helper routine, __morestack, which checks for stack overflow (or allocates more stack, on systems with segmented-stack support). So for an example like this, that is the only core functionality you would need to provide yourself; note that you could just have it halt the kernel.
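As a minimal sketch of that "halt the kernel" option (hedged: the real __morestack uses a custom calling convention and is normally written in assembly, but a stub that never returns sidesteps that concern):

#[no_mangle]
pub extern "C" fn __morestack() -> ! {
    // Hypothetical bare-metal stub: treat stack exhaustion as fatal.
    loop {}
}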
Here is the program I put into the playpen:
#[inline(never)]
fn bar(f: fn() -> ((int, int, int), (int, int, int))) -> bool {
    let (x, y) = f();
    x == y
}

#[inline(never)]
fn foo_1() -> ((int,int,int), (int,int,int)) {
    let x = (4i,2i,3i);
    let y = (3i,4i,4i);
    (x, y)
}

#[inline(never)]
fn foo_2() -> ((int,int,int), (int,int,int)) {
    let x = (4i,2i,3i);
    (x, x)
}

fn main() {
    println!("bar(foo_1): {}", bar(foo_1));
    println!("bar(foo_2): {}", bar(foo_2));
}

Rust was designed to allow one to implement an operating-system kernel, drivers, or an application that has no operating system at all and runs on bare-metal hardware.
Currently, Rust's standard runtime can be disabled with the #![no_std] attribute in the code. You can still use some libraries, such as libcore. One of the things you will not get without the runtime is the format! and println! macros, the sprintf() and printf() equivalents.
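For a sense of what that looks like in code, here is a minimal freestanding sketch (assuming a present-day toolchain; the _start entry point and the #[panic_handler] item are my additions for illustration, not part of the answer above):

#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Without the standard runtime you must provide a panic handler yourself.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// There is no main(); on a bare-metal target the linker looks for _start.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {}
}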
For an example of something you can do today, take a look at the Zinc project.

Related

Using pointers to return results in x86 Assembly

We've been given the following function to try to implement in C as part of a CS course. We are programming on x86 Linux.
function(float x, float y, float *z);
For a function such as example(int x, int y), I understand that the x value resides at [ebp+8] and y at [ebp+12] on the stack. Is the same convention used when pushing floats?
We also have to perform some masking and calculations on the float numbers. Do these float numbers behave the same as 32-bit integers, just in IEEE-754 format?
Here is a simple function and its asm code:
void function(float x, float y, float *z) {
    float sum = x + y;
    float neg = sum - *z;
}
The asm of the above function looks like this:
function:
pushl %ebp
movl %esp,%ebp
subl $8,%esp
pushl %ebx
flds 8(%ebp)
fadds 12(%ebp)
fstps -4(%ebp)
movl 16(%ebp),%ebx
flds -4(%ebp)
fsubs (%ebx)
fstps -8(%ebp)
leal -12(%ebp),%esp
popl %ebx
leave
ret
As you can see from the asm above, the code references 8(%ebp), 12(%ebp), and 16(%ebp) to fetch the parameters from the stack. So, as fuz pointed out in the comments, the float arguments are indeed passed on the stack, using the same convention as integer arguments.
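On the second sub-question: yes, once on the stack a float is just 32 bits in IEEE-754 layout, so you can mask its fields like any other 32-bit value. A small sketch (the sample value and field names are mine, not from the course material):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float x = -1.5f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);           /* well-defined type pun */
    uint32_t sign     = bits >> 31;           /* 1 bit            */
    uint32_t exponent = (bits >> 23) & 0xFFu; /* 8 bits, bias 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;     /* 23 bits          */
    printf("sign=%u exp=0x%02X mant=0x%06X\n", sign, exponent, mantissa);
    return 0;
}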

How can I set a breakpoint in an async function?

I have a struct with an async method that I'd like to debug. I used gdb to set a breakpoint in a debug build. Here is how the code looks when stopping at the async method Strct::async_method:
0x5555557f4a6a <bin::Strct::async_method+26> mov QWORD PTR [rsp+0x10],rsi
0x5555557f4a6f <bin::Strct::async_method+31> mov QWORD PTR [rsp+0x18],rdx
0x5555557f4a74 <bin::Strct::async_method+36> mov BYTE PTR [rsp+0x112],0x0
0x5555557f4a7c <bin::Strct::async_method+44> lea rsi,[rsp+0x10]
0x5555557f4a81 <bin::Strct::async_method+49> mov QWORD PTR [rsp+0x8],rax
0x5555557f4a86 <bin::Strct::async_method+54> call 0x5555557e6970 <core::future::from_generator>
The code calls core::future::from_generator which is not what I'd like to debug. What is the proper way to suspend the execution of the async method body?
Let's use this as our MRE
use futures::executor; // 0.3.5

pub fn exercise() {
    executor::block_on(example());
}

#[inline(never)]
async fn example() {
    canary()
}

#[inline(never)]
fn canary() {}
If you view the assembly for this, you'll see how the compiler implements async functions. The async function returns an opaque impl Future type, which is powered under the hood by a generator:
playground::example:
subq $24, %rsp
movb $0, 16(%rsp)
movzbl 16(%rsp), %edi
callq *core::future::from_generator@GOTPCREL(%rip)
movb %al, 23(%rsp)
movb 23(%rsp), %al
movb %al, 8(%rsp)
movb 8(%rsp), %al
addq $24, %rsp
retq
The actual body of the async function is moved into the generator, which happens to use the name {{closure}}:
playground::example::{{closure}}:
;; Lots of instructions removed
movq playground::canary@GOTPCREL(%rip), %rcx
callq *%rcx
jmp .LBB20_2
;; Even more removed
Thus, you can set a breakpoint on that generated function:
(lldb) br set -r '.*example.*closure.*'
(lldb) r
Process 28101 launched: '/tmp/f/target/debug/f' (x86_64)
Process 28101 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000ab0 f`f::example::_$u7b$$u7b$closure$u7d$$u7d$::h2b30ad777e7395f3((null)=(pointer = 0x00007ffeefbff328), (null)=ResumeTy @ 0x00007ffeefbff070) at main.rs:8:20
5 }
6
7 #[inline(never)]
-> 8 async fn example() {
9 canary()
10 }
11
Target 0: (f) stopped.
You could also set a breakpoint on the desired line:
(lldb) breakpoint set --file /private/tmp/f/src/main.rs --line 9
(lldb) r
Process 28113 launched: '/tmp/f/target/debug/f' (x86_64)
Process 28113 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000ad8 f`f::example::_$u7b$$u7b$closure$u7d$$u7d$::h2b30ad777e7395f3((null)=(pointer = 0x00007ffeefbff328), (null)=ResumeTy @ 0x00007ffeefbff070) at main.rs:9:5
6
7 #[inline(never)]
8 async fn example() {
-> 9 canary()
10 }
11
12 #[inline(never)]
Target 0: (f) stopped.
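Since the question used gdb: its counterpart to the lldb regex breakpoint above is rbreak (a sketch; adjust the pattern to the symbol names that info functions reports for your crate):

(gdb) rbreak .*example.*closure.*
(gdb) run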
See also:
What is the concrete type of a future returned from `async fn`?
What is the purpose of async/await in Rust?
Unable to set a breakpoint on main while debugging a program compiled with Rust 1.10 with GDB
How to use a debugger like GDB or LLDB to debug a crate in Rust?
Can I set an LLDB breakpoint when multiple Rust source files share the same name?

How to use unsafe to get a byte slice from a string without a memory copy

I have read https://github.com/golang/go/issues/25484 about no-copy conversion from []byte to string.
I am wondering if there is a way to convert a string to a byte slice without a memory copy?
I am writing a program which processes terabytes of data; if every string is copied twice in memory, it will slow the program down. I do not care about mutability/safety, this is for internal usage only, and I need it to be as fast as possible.
Example:
var s string
// some processing on s, for some reasons, I must use string here
// ...
// then output to a writer
gzipWriter.Write([]byte(s)) // !!! Here I want to avoid the memory copy, no WriteString
So the question is: is there a way to avoid the memory copy? I know I probably need the unsafe package, but I do not know how. I have searched for a while with no answer so far, and none of the related answers SO suggested works either.
Getting the content of a string as a []byte without copying is, in general, only possible using package unsafe, because strings in Go are immutable, and without a copy it would be possible to modify the contents of the string (by changing the elements of the byte slice).
So using unsafe, this is how it could look (corrected, working solution):
func unsafeGetBytes(s string) []byte {
    return (*[0x7fff0000]byte)(unsafe.Pointer(
        (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),
    )[:len(s):len(s)]
}
This solution is from Ian Lance Taylor.
One thing to note here: the empty string "" has no bytes, as its length is zero. This means there is no guarantee what the Data field may be; it may be zero or an arbitrary address shared among the zero-size variables. If an empty string may be passed, that case must be checked explicitly (although there's no need to get the bytes of an empty string without copying...):
func unsafeGetBytes(s string) []byte {
    if s == "" {
        return nil // or []byte{}
    }
    return (*[0x7fff0000]byte)(unsafe.Pointer(
        (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),
    )[:len(s):len(s)]
}
The original, wrong solution was:
func unsafeGetBytesWRONG(s string) []byte {
    return *(*[]byte)(unsafe.Pointer(&s)) // WRONG!!!!
}
See Nuno Cruces's answer below for reasoning.
Testing it:
s := "hi"
data := unsafeGetBytes(s)
fmt.Println(data, string(data))
data = unsafeGetBytes("gopher")
fmt.Println(data, string(data))
Output (try it on the Go Playground):
[104 105] hi
[103 111 112 104 101 114] gopher
BUT: You wrote you want this because you need performance. You also mentioned you want to compress the data. Please know that compressing data (using gzip) requires a lot more computation than copying a few bytes! You will not see any noticeable performance gain from using this!
Instead, when you want to write strings to an io.Writer, it's recommended to do it via the io.WriteString() function, which, when possible, will do so without making a copy of the string (it checks whether the writer has a WriteString() method and calls that, which most likely handles it better than copying). For details, see What's the difference between ResponseWriter.Write and io.WriteString?
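A quick sketch of that recommendation (os.Stdout stands in for the writer here; *os.File provides a WriteString method, so no intermediate []byte copy is made):

package main

import (
    "io"
    "os"
)

func main() {
    s := "some large string\n"
    // io.WriteString probes for a WriteString method and calls it when
    // present, avoiding the copy that a []byte(s) conversion would make.
    if _, err := io.WriteString(os.Stdout, s); err != nil {
        panic(err)
    }
}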
There are also ways to access the contents of a string without converting it to []byte, such as indexing, or using a loop where the compiler optimizes the copy away:
s := "something"
for i, v := range []byte(s) { // Copying s is optimized away
// ...
}
Also see related questions:
[]byte(string) vs []byte(*string)
What are the possible consequences of using unsafe conversion from []byte to string in go?
What is the difference between the string and []byte in Go?
Does conversion between alias types in Go create copies?
How does type conversion internally work? What is the memory utilization for the same?
After some extensive investigation, I believe I've discovered the most efficient way of getting a []byte from a string as of Go 1.17 (this is for i386/x86_64 gc; I haven't tested other architectures). The trade-off of efficient code here is being inefficient to code, though.
Before I say anything else, it should be made clear that the differences are ultimately very small and probably inconsequential -- the info below is for fun/educational purposes only.
Summary
With some minor alterations, the accepted answer illustrating the technique of slicing a pointer to array is the most efficient way. That being said, I wouldn't be surprised if unsafe.Slice becomes the (decisively) better choice in the future.
unsafe.Slice
unsafe.Slice currently has the advantage of being slightly more readable, but I'm skeptical about its performance. It looks like it makes a call to runtime.unsafeslice. The following is the gc amd64 1.17 assembly of the function provided in Atamiri's answer (FUNCDATA omitted). Note the stack check (lack of NOSPLIT):
unsafeGetBytes_pc0:
TEXT "".unsafeGetBytes(SB), ABIInternal, $48-16
CMPQ SP, 16(R14)
PCDATA $0, $-2
JLS unsafeGetBytes_pc86
PCDATA $0, $-1
SUBQ $48, SP
MOVQ BP, 40(SP)
LEAQ 40(SP), BP
PCDATA $0, $-2
MOVQ BX, ""..autotmp_4+24(SP)
MOVQ AX, "".s+56(SP)
MOVQ BX, "".s+64(SP)
MOVQ "".s+56(SP), DX
PCDATA $0, $-1
MOVQ DX, ""..autotmp_5+32(SP)
LEAQ type.uint8(SB), AX
MOVQ BX, CX
MOVQ DX, BX
PCDATA $1, $1
CALL runtime.unsafeslice(SB)
MOVQ ""..autotmp_5+32(SP), AX
MOVQ ""..autotmp_4+24(SP), BX
MOVQ BX, CX
MOVQ 40(SP), BP
ADDQ $48, SP
RET
unsafeGetBytes_pc86:
NOP
PCDATA $1, $-1
PCDATA $0, $-2
MOVQ AX, 8(SP)
MOVQ BX, 16(SP)
CALL runtime.morestack_noctxt(SB)
MOVQ 8(SP), AX
MOVQ 16(SP), BX
PCDATA $0, $-1
JMP unsafeGetBytes_pc0
Other unimportant fun facts about the above (easily subject to change): compiled size of 3326B; has an inline cost of 7; correct escape analysis: s leaks to ~r1 with derefs=0.
Carefully Modifying *reflect.SliceHeader
This method has the advantage/disadvantage of letting one modify the internal state of a slice directly. Unfortunately, due to its multiline nature and use of uintptr, the GC can easily mess things up if one is not careful about keeping a reference to the original string. (Here I avoided creating temporary pointers to reduce inline cost and to avoid needing to add runtime.KeepAlive):
func unsafeGetBytes(s string) (b []byte) {
    (*reflect.SliceHeader)(unsafe.Pointer(&b)).Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
    (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s)
    (*reflect.SliceHeader)(unsafe.Pointer(&b)).Len = len(s)
    return
}
The corresponding assembly on amd64 (FUNCDATA omitted):
TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16
SUBQ $32, SP
MOVQ BP, 24(SP)
LEAQ 24(SP), BP
MOVQ AX, "".s+40(SP)
MOVQ BX, "".s+48(SP)
MOVQ $0, "".b(SP)
MOVUPS X15, "".b+8(SP)
MOVQ "".s+40(SP), DX
MOVQ DX, "".b(SP)
MOVQ "".s+48(SP), CX
MOVQ CX, "".b+16(SP)
MOVQ "".s+48(SP), BX
MOVQ BX, "".b+8(SP)
MOVQ "".b(SP), AX
MOVQ 24(SP), BP
ADDQ $32, SP
RET
Other unimportant fun facts about the above (easily subject to change): compiled size of 3700B; has an inline cost of 20; subpar escape analysis: s leaks to {heap} with derefs=0.
Unsafer version of modifying SliceHeader
Adapted from Nuno Cruces' answer. This relies on the inherent structural similarity between StringHeader and SliceHeader, so in a sense it breaks "more easily". Additionally, it temporarily creates an illegal state where cap(b) (being 0) is less than len(b).
func unsafeGetBytes(s string) (b []byte) {
    *(*string)(unsafe.Pointer(&b)) = s
    (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s)
    return
}
Corresponding assembly (FUNCDATA omitted):
TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16
SUBQ $32, SP
MOVQ BP, 24(SP)
LEAQ 24(SP), BP
MOVQ AX, "".s+40(FP)
MOVQ $0, "".b(SP)
MOVUPS X15, "".b+8(SP)
MOVQ AX, "".b(SP)
MOVQ BX, "".b+8(SP)
MOVQ BX, "".b+16(SP)
MOVQ "".b(SP), AX
MOVQ BX, CX
MOVQ 24(SP), BP
ADDQ $32, SP
NOP
RET
Other unimportant details: compiled size 3636B, inline cost of 11, with subpar escape analysis: s leaks to {heap} with derefs=0.
Slicing a pointer to array
This is the accepted answer (shown here for comparison); its primary disadvantage is its ugliness (viz. the magic number 0x7fff0000). There's also the tiniest possibility of getting a string bigger than the array, and an unavoidable bounds check.
func unsafeGetBytes(s string) []byte {
    return (*[0x7fff0000]byte)(unsafe.Pointer(
        (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),
    )[:len(s):len(s)]
}
Corresponding assembly (FUNCDATA removed).
TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $24-16
SUBQ $24, SP
MOVQ BP, 16(SP)
LEAQ 16(SP), BP
PCDATA $0, $-2
MOVQ AX, "".s+32(SP)
MOVQ BX, "".s+40(SP)
MOVQ "".s+32(SP), AX
PCDATA $0, $-1
TESTB AL, (AX)
NOP
CMPQ BX, $2147418112
JHI unsafeGetBytes_pc54
MOVQ BX, CX
MOVQ 16(SP), BP
ADDQ $24, SP
RET
unsafeGetBytes_pc54:
MOVQ BX, DX
MOVL $2147418112, BX
PCDATA $1, $1
NOP
CALL runtime.panicSlice3Alen(SB)
XCHGL AX, AX
Other unimportant details: compiled size 3142B, inline cost of 9, with correct escape analysis: s leaks to ~r1 with derefs=0
Note the runtime.panicSlice3Alen -- this is the bounds check, verifying that len(s) is within 0x7fff0000.
Improved slicing pointer to array
This is what I've concluded to be the most efficient method as of Go 1.17. I basically modified the accepted answer to eliminate the bounds check, and found a "more meaningful" constant (math.MaxInt32) to use instead of 0x7fff0000. Using MaxInt32 preserves 32-bit compatibility.
func unsafeGetBytes(s string) []byte {
    const MaxInt32 = 1<<31 - 1
    return (*[MaxInt32]byte)(unsafe.Pointer((*reflect.StringHeader)(
        unsafe.Pointer(&s)).Data))[:len(s)&MaxInt32:len(s)&MaxInt32]
}
Corresponding assembly (FUNCDATA removed):
TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $0-16
PCDATA $0, $-2
MOVQ AX, "".s+8(SP)
MOVQ BX, "".s+16(SP)
MOVQ "".s+8(SP), AX
PCDATA $0, $-1
TESTB AL, (AX)
ANDQ $2147483647, BX
MOVQ BX, CX
RET
Other unimportant details: compiled size 3188B, inline cost of 13, and correct escape analysis: s leaks to ~r1 with derefs=0
In Go 1.17, I'd recommend unsafe.Slice as more readable:
unsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s))
I think that this also works (doesn't violate any unsafe.Pointer rules), with the benefit that it works for a const s:
*(*[]byte)(unsafe.Pointer(&struct{string; int}{s, len(s)}))
The commentary below concerns the accepted answer as it originally stood. The accepted answer now mentions an (authoritative) solution from Ian Lance Taylor; I'm keeping this because it points out a common error.
The accepted answer was wrong, and could produce the panic @RFC mentioned in the comments. The explanation by @icza about GC and keep-alive is misguided.
The reason the capacity is zero (or even an arbitrary value) is more prosaic.
A slice is:
type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}
A string is:
type StringHeader struct {
    Data uintptr
    Len  int
}
Converting a byte slice to a string can be "safely" done as the strings.Builder does it:
func (b *Builder) String() string {
    return *(*string)(unsafe.Pointer(&b.buf))
}
This will copy the Data pointer and Len from the slice to the string.
The opposite conversion is not "safe" because Cap doesn't get set to the correct value.
The following (originally by me) is also wrong, because it violates unsafe.Pointer rule #1. It was originally presented as the correct code that fixes the panic:
var buf = *(*[]byte)(unsafe.Pointer(&str))
(*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str)
Or perhaps:
var buf []byte
*(*string)(unsafe.Pointer(&buf)) = str
(*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str)
I should add that all these conversions are unsafe in the sense that strings are expected to be immutable, and byte arrays/slices mutable.
But if you know for sure that the byte slice won't be mutated, you won't get bounds (or GC) issues with the above conversions.
In Go 1.17, one can now use unsafe.Slice, so the accepted answer can be rewritten as follows:
func unsafeGetBytes(s string) []byte {
    return unsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s))
}
I managed to achieve the goal with this:
func TestString(t *testing.T) {
    b := []byte{'a', 'b', 'c', '1', '2', '3', '4'}
    s := *(*string)(unsafe.Pointer(&b))
    sb := *(*[]byte)(unsafe.Pointer(&s))
    addr1 := unsafe.Pointer(&b)
    addr2 := unsafe.Pointer(&s)
    addr3 := unsafe.Pointer(&sb)
    fmt.Print("&b=", addr1, "\n&s=", addr2, "\n&sb=", addr3, "\n")
    hdr1 := (*reflect.StringHeader)(unsafe.Pointer(&b))
    hdr2 := (*reflect.SliceHeader)(unsafe.Pointer(&s))
    hdr3 := (*reflect.SliceHeader)(unsafe.Pointer(&sb))
    fmt.Print("b.data=", hdr1.Data, "\ns.data=", hdr2.Data, "\nsb.data=", hdr3.Data, "\n")
    b[0] = 'X'
    sb[1] = 'Y' // if sb is from a string directly, this will cause nil panic
    fmt.Print("s=", s, "\nsb=")
    for _, c := range sb {
        fmt.Printf("%c", c)
    }
    fmt.Println()
}
Output:
=== RUN TestString
&b=0xc000218000
&s=0xc00021a000
&sb=0xc000218020
b.data=824635867152
s.data=824635867152
sb.data=824635867152
s=XYc1234
sb=XYc1234
These variables all share the same memory.
Go 1.20 (February 2023)
You can use unsafe.StringData to greatly simplify YenForYang's answer:
StringData returns a pointer to the underlying bytes of str. For an empty string the return value is unspecified, and may be nil.
Since Go strings are immutable, the bytes returned by StringData must not be modified.
func main() {
    str := "foobar"
    d := unsafe.StringData(str)
    b := unsafe.Slice(d, len(str))
    fmt.Printf("%T, %s\n", b, b) // []uint8, foobar (byte is an alias of uint8)
}
Go tip playground: https://go.dev/play/p/FIXe0rb8YHE?v=gotip
Remember that you can't assign to b[n]. The memory is still read-only.
Simple, no reflect, and I think it is portable. s is your string and b is your byte slice:
var b []byte
bb := (*[3]uintptr)(unsafe.Pointer(&b))[:]
copy(bb, (*[2]uintptr)(unsafe.Pointer(&s))[:])
bb[2] = bb[1]
// use b
Remember that the byte values must not be modified (doing so will panic). Re-slicing is OK (for example: bytes.Split(b, []byte{','})).

What is the point of atomic.Load and atomic.Store

In the Go memory model, nothing is stated about atomics and their relation to memory fencing.
Yet many internal packages seem to rely on the memory ordering that could be provided if atomics created memory fences around them. See this issue for details.
After not understanding how it really works, I went to the sources, in particular src/runtime/internal/atomic/atomic_amd64.go, and found the following implementations of Load and Store:
//go:nosplit
//go:noinline
func Load(ptr *uint32) uint32 {
    return *ptr
}
Store is implemented in asm_amd64.s in the same package.
TEXT runtime∕internal∕atomic·Store(SB), NOSPLIT, $0-12
MOVQ ptr+0(FP), BX
MOVL val+8(FP), AX
XCHGL AX, 0(BX)
RET
Both look as if they had nothing to do with parallelism.
I looked into other architectures, but the implementations seem equivalent.
However, if atomics are indeed weak and provide no memory-ordering guarantees, then the code below could fail, but it does not.
In addition, I tried replacing the atomic calls with plain assignments, and it still produces a consistent and "successful" result in both cases.
func try() {
    var a, b int32
    go func() {
        // atomic.StoreInt32(&a, 1)
        // atomic.StoreInt32(&b, 1)
        a = 1
        b = 1
    }()
    for {
        // if n := atomic.LoadInt32(&b); n == 1 {
        if n := b; n == 1 {
            if a != 1 {
                panic("fail")
            }
            break
        }
        runtime.Gosched()
    }
}

func main() {
    n := 1000000000
    for i := 0; i < n; i++ {
        try()
    }
}
The next thought was that the compiler does some magic to provide ordering guarantees. So below is the listing of the variant with the atomic Store and Load uncommented. The full listing is available on the pastebin.
// Anonymous function implementation with atomic calls inlined
TEXT "".try.func1(SB) gofile../path/atomic.go
atomic.StoreInt32(&a, 1)
0x816 b801000000 MOVL $0x1, AX
0x81b 488b4c2408 MOVQ 0x8(SP), CX
0x820 8701 XCHGL AX, 0(CX)
atomic.StoreInt32(&b, 1)
0x822 b801000000 MOVL $0x1, AX
0x827 488b4c2410 MOVQ 0x10(SP), CX
0x82c 8701 XCHGL AX, 0(CX)
}()
0x82e c3 RET
// Important "cycle" part of try() function
0x6ca e800000000 CALL 0x6cf [1:5]R_CALL:runtime.newproc
for {
0x6cf eb12 JMP 0x6e3
runtime.Gosched()
0x6d1 90 NOPL
checkTimeouts()
0x6d2 90 NOPL
mcall(gosched_m)
0x6d3 488d0500000000 LEAQ 0(IP), AX [3:7]R_PCREL:runtime.gosched_m·f
0x6da 48890424 MOVQ AX, 0(SP)
0x6de e800000000 CALL 0x6e3 [1:5]R_CALL:runtime.mcall
if n := atomic.LoadInt32(&b); n == 1 {
0x6e3 488b442420 MOVQ 0x20(SP), AX
0x6e8 8b08 MOVL 0(AX), CX
0x6ea 83f901 CMPL $0x1, CX
0x6ed 75e2 JNE 0x6d1
if a != 1 {
0x6ef 488b442428 MOVQ 0x28(SP), AX
0x6f4 833801 CMPL $0x1, 0(AX)
0x6f7 750a JNE 0x703
0x6f9 488b6c2430 MOVQ 0x30(SP), BP
0x6fe 4883c438 ADDQ $0x38, SP
0x702 c3 RET
As you can see, once again no fences or locks are in place.
Note: all tests were done on x86_64, on an i5-8259U.
The question:
So, is there any point in wrapping a simple pointer dereference in a function call, or is there some hidden meaning to it? And why do these atomics still work as memory barriers (if they do)?
I don't know Go at all, but it looks like the x86-64 implementations of .load() and .store() are sequentially consistent. Presumably on purpose / for a reason!
//go:noinline on the load means the compiler can't reorder around a black-box non-inline function, I assume. On x86, that's all you need for the load side of sequential consistency, or acq-rel; a plain x86 mov load is an acquire load.
The compiler-generated code gets to take advantage of x86's strongly ordered memory model, which is sequential consistency plus a store buffer (with store forwarding), i.e. acq/rel. To recover sequential consistency, you only need to drain the store buffer after a release store.
.store() is written in asm, loading its stack args and using xchg as a seq-cst store.
XCHG with memory has an implicit lock prefix which is a full barrier; it's an efficient alternative to mov+mfence to implement what C++ would call a memory_order_seq_cst store.
It flushes the store buffer before later loads and stores are allowed to touch L1d cache. Why does a std::atomic store with sequential consistency use XCHG?
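Tying this back to the question's Go code: with both sides atomic, the experiment is guaranteed to succeed rather than merely observed to succeed (the Go memory model, as documented since Go 1.19, gives sync/atomic operations sequentially consistent semantics). A minimal self-contained sketch, my rewrite of the question's try():

package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
)

func main() {
    var a, b int32
    go func() {
        atomic.StoreInt32(&a, 1) // ordered before the store to b below
        atomic.StoreInt32(&b, 1) // release: publishes the store to a
    }()
    for atomic.LoadInt32(&b) != 1 { // acquire: pairs with the store to b
        runtime.Gosched()
    }
    fmt.Println(atomic.LoadInt32(&a)) // guaranteed to print 1
}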
See:
https://bartoszmilewski.com/2008/11/05/who-ordered-memory-fences-on-an-x86/
C/C++11 mappings to processors, which describes the sequences of instructions that implement relaxed load/store, acq/rel load/store, seq-cst load/store, and various barriers on various ISAs, so you can recognize things like xchg with memory.
Does lock xchg have the same behavior as mfence? (TL;DR: yes, except for maybe some corner cases with NT loads from WC memory, e.g. from video RAM). You may see a dummy lock add $0, (SP) used as an alternative to mfence in some code.
IIRC, AMD's optimization manual even recommends this. It's good on Intel as well, especially on Skylake, where mfence was strengthened by a microcode update to fully block out-of-order exec even of ALU instructions (like lfence does), as well as memory reordering. (That was to fix an erratum with NT loads.)
https://preshing.com/20120913/acquire-and-release-semantics/

Ownership and conditionally executed code

I read the rust book over the weekend and I have a question about the concept of ownership. The impression I got is that ownership is used to statically determine where a resource can be deallocated. Now, suppose that we have the following:
{                                                     // 1
    let x;                                            // 2
    {                                                 // 3
        let y = Box::new(1);                          // 4
        x = if flip_coin() { y } else { Box::new(2) };// 5
    }                                                 // 6
}                                                     // 7
I was surprised to see that the compiler accepts this program. By inserting println!s and implementing the Drop trait for the boxed value, I saw that the box containing the value 1 will be deallocated at either line 6 or line 7, depending on the return value of flip_coin. How does the compiler know when to deallocate that box? Is this decided at run time using some run-time information (like a flag indicating whether the box is still in use)?
After some research I found out that Rust currently adds a flag to every type that implements the Drop trait, so that it knows whether the value has been dropped or not, which of course incurs a run-time cost.
There have been proposals to avoid that cost by using static drops or eager drops, but those solutions had problems with their semantics, namely that drops could occur at places you wouldn't expect (e.g. in the middle of a code block), especially if you are used to C++-style RAII. There is now consensus that the best compromise is a different solution where the flags are removed from the types. Instead, flags will be added to the stack, but only when the compiler cannot figure out statically when to do the drop (while keeping the same semantics as C++), which specifically happens when there are conditional moves like the example given in this question. For all other cases there will be no run-time cost. It appears, though, that this proposal will not be implemented in time for 1.0.
Note that C++ has similar run-time costs associated with unique_ptr. When the new Drop is implemented, Rust will be strictly better than C++ in that respect.
I hope this is a correct summary of the situation. Credit goes to u/dyoll1013, u/pcwalton, u/kibwen, and u/Kimundi on reddit, and Chris Morgan here on SO.
In non-optimized code, Rust uses dynamic checks, but it's likely that they will be eliminated in optimized code.
I looked at the behavior of the following code:
#[derive(Debug)]
struct A {
    s: String,
}

impl Drop for A {
    fn drop(&mut self) {
        println!("Dropping {:?}", &self);
    }
}

fn flip_coin() -> bool { false }

#[allow(unused_variables)]
pub fn test() {
    let x;
    {
        let y1 = A { s: "y1".to_string() };
        let y2 = A { s: "y2".to_string() };
        x = if flip_coin() { y1 } else { y2 };
        println!("leaving inner scope");
    }
    println!("leaving middle scope");
}
Consistent with your comment on the other answer, the call to drop for the String that was left alone occurs after the "leaving inner scope" println. That does seem consistent with one's expectation that the y's scopes extend to the end of their block.
Looking at the assembly language, compiled without optimization, it seems that the if statement not only copies either y1 or y2 to x, but also zeroes out whichever variable provided the source for the move. Here's the test:
.LBB14_8:
movb -437(%rbp), %al
andb $1, %al
movb %al, -177(%rbp)
testb $1, -177(%rbp)
jne .LBB14_11
jmp .LBB14_12
Here's the 'then' branch, which moves the "y1" String to x. Note especially the call to memset, which is zeroing out y1 after the move:
.LBB14_11:
xorl %esi, %esi
movl $32, %eax
movl %eax, %edx
leaq -64(%rbp), %rcx
movq -64(%rbp), %rdi
movq %rdi, -176(%rbp)
movq -56(%rbp), %rdi
movq %rdi, -168(%rbp)
movq -48(%rbp), %rdi
movq %rdi, -160(%rbp)
movq -40(%rbp), %rdi
movq %rdi, -152(%rbp)
movq %rcx, %rdi
callq memset@PLT
jmp .LBB14_13
(It looks horrible until you realize that all those movq instructions are just copying 32 bytes from %rbp-64, which is y1, to %rbp-176, which is x, or at least some temporary that'll eventually be x.) Note that it copies 32 bytes, not the 24 you'd expect for a Vec (one pointer plus two usizes). This is because Rust adds a hidden "drop flag" to the structure, indicating whether the value is live or not, following the three visible fields.
And here's the 'else' branch, doing exactly the same for y2:
.LBB14_12:
xorl %esi, %esi
movl $32, %eax
movl %eax, %edx
leaq -128(%rbp), %rcx
movq -128(%rbp), %rdi
movq %rdi, -176(%rbp)
movq -120(%rbp), %rdi
movq %rdi, -168(%rbp)
movq -112(%rbp), %rdi
movq %rdi, -160(%rbp)
movq -104(%rbp), %rdi
movq %rdi, -152(%rbp)
movq %rcx, %rdi
callq memset@PLT
.LBB14_13:
This is followed by the code for the "leaving inner scope" println, which is painful to behold, so I won't include it here.
We then call a "glue_drop" routine on both y1 and y2. This seems to be a compiler-generated function that takes an A, checks its String's Vec's drop flag, and if that's set, invokes A's drop routine, followed by the drop routine for the String it contains.
If I'm reading this right, it's pretty clever: even though it's the A that has the drop method we need to call first, Rust knows that it can use ... inhale ... the drop flag of the Vec inside the String inside the A as the flag that indicates whether the A needs to be dropped.
Now, when compiled with optimization, inlining and flow analysis should recognize situations where the drop definitely will happen (and omit the run-time check), or definitely will not happen (and omit the drop altogether). And I believe I have heard of optimizations that duplicate the code following a then/else clause into both paths, and then specialize them. This would eliminate all run-time checks from this code (but duplicate the println! call).
As the original poster points out, there's an RFC proposal to move drop flags out of the values and instead associate them with the stack slots holding the values.
So it's plausible that the optimized code might not have any run-time checks at all. I can't bring myself to read the optimized code, though. Why not give it a try yourself?
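If you do want to try, one way (a sketch using today's flags, not the pre-1.0 compiler quoted above) is to ask the compiler for optimized assembly directly:

rustc -O --emit=asm test.rs     # writes test.s next to the source
# or, inside a Cargo project:
cargo rustc --release -- --emit=asm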
