Why does pthread_exit use void*? - linux

I recently started using POSIX threads and the choice of argument types in the standard made me curious. I haven't been able to answer the question of why pthread_exit uses void* instead of int for the returning status of the thread (the way exit does).
The only advantage I see is that it lets the programmers define the status how they want (e.g. return a pointer to a complex structure), but I doubt it is widely used like this.
It seems that in most cases this choice has more overhead because of necessary casting.

This isn't just for status, it's the return value of the thread. Using a pointer allows the thread to return a pointer to a dynamically-allocated array or structure.
You can't really compare it with the exit() parameter, because that's for sending status to the operating system. This is intentionally very simple to allow portability with many OSes.

The only advantage I see is that it lets the programmers define the status how they want (e.g. return a pointer to a complex structure), but I doubt it is widely used like this.
Indeed, that's the reason. And it's probably not used that widely (e.g. you can communicate values via other means, such as a pointer passed to the thread function, a global variable with synchronisation, etc.). But if it were declared as void pthread_exit(int);, that would take away the ability to return pointers. So void pthread_exit(void*); is a more flexible design.
It seems that in most cases this choice has more overhead because of necessary casting.
In most cases, it's not used at all as the common way is to return nothing i.e. pthread_exit(NULL);. So it only matters when returning pointers (to structs and such) and in those cases a conversion to void * isn't necessary as any pointer type can be converted to void * without an explicit cast.
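For illustration, a minimal sketch (the Result struct is hypothetical) of a thread handing a heap-allocated structure back through pthread_exit and pthread_join; note that no cast to void * is needed on the way out:
#include <pthread.h>
#include <cstdio>

// Hypothetical result type, just for illustration.
struct Result {
    int status;
    double value;
};

void* worker(void*) {
    Result* r = new Result{0, 3.14};
    // Any object pointer converts to void* without an explicit cast.
    pthread_exit(r);            // same effect as `return r;` from the thread function
}

int main() {
    pthread_t tid;
    pthread_create(&tid, nullptr, worker, nullptr);

    void* raw = nullptr;
    pthread_join(tid, &raw);    // receives whatever the thread passed to pthread_exit

    Result* r = static_cast<Result*>(raw);
    std::printf("status=%d value=%f\n", r->status, r->value);
    delete r;
}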

Related

How can I safely share objects between Rust and C++?

One way to construct and destruct C++ objects from Rust is to call the constructor and return the pointer to Rust as an int64_t. Then Rust can call methods on the object by passing the int64_t back, which will be cast to the pointer again.
void do_something(int64_t pointer, char* methodName, ...) {
//cast pointer and call method here
}
However, this is extremely unsafe. Instead I tend to store the pointer into a map and pass the map key to Rust, so it can call C++ back:
void do_something(int id, char* methodName, ...) {
//retrieve pointer from id and call method on it
}
Now, imagine I create, from Rust, a C++ object that calls Rust back. I could do the same: give C++ an int64_t and then C++ calls Rust:
#[no_mangle]
pub fn do_something(pointer: i64, method_name: &CString, ...) {
}
but that's also insecure. Instead I'd do something similar as C++, using an id:
#[no_mangle]
pub fn do_something(id: u32, method_name: &CString, ...) {
//search id in map and get object
//execute method on the object
}
However, this isn't possible, as Rust does not have support for static variables like a map. And Rust's lazy_static is immutable.
The only way to do safe calls from C++ back into Rust is to pass the address of something static (the function do_something), so calling it will always point to something concrete. Passing pointers is insecure, as the object they point to could stop existing. However, there should be a way for this function to maintain a map of created objects and ids.
So, how to safely call Rust object functions from C++? (for a Rust program, not a C++ program)
Pointers or Handles
Ultimately, this is about object identity: you need to pass something which allows you to identify one instance of an object.
The simplest interface is to return a Pointer. It is the most performant interface, albeit it requires trust between the parties and clear ownership.
When a Pointer is not suitable, the fallback is to use a Handle. This is, for example, typically what kernels do: a file descriptor, in Linux, is just an int.
Handles do not preclude strong typing.
C and Linux are poor examples, here. Just because a Handle is, often, an integral ID does not preclude encapsulating said integer into a strong type.
For example, you could define struct FileDescriptor(i32); to represent a file descriptor handed over from Linux.
Handles do not preclude strongly typed functions.
Similarly, just because you have a Handle does not mean that you have a single syscall interface where the name of the function must be passed by ID (or worse string) and an unknown/untyped soup of arguments follow.
You can perfectly, and really should, use strongly typed functions:
int read(FileDescriptor fd, std::byte* buffer, std::size_t size);
Handles are complicated.
Handles are, to a degree, more complicated than pointers.
First of all, handles are meaningless without some repository: 33 has no intrinsic meaning, it is just a key to look up the real instance.
The repository need not be a singleton. It can perfectly be passed along in the function call.
The repository should likely be thread-safe and re-entrant.
There may be data-races between usage and deletion of a handle.
The latter point is maybe the most surprising, and means that care must be taken when using the repository: accesses to the underlying values must also be thread-safe, and re-entrant.
(Non thread-safe or non re-entrant underlying values leave you open to Undefined Behavior)
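As a rough illustration of what such a repository might look like (a sketch only; the Repository name is made up, and returning shared_ptr is one way to paper over the use-vs-delete race):
#include <cstdint>
#include <memory>
#include <mutex>
#include <unordered_map>

// Hypothetical thread-safe repository mapping integer handles to objects.
// Lookups on a stale handle simply return nullptr instead of crashing.
class Repository {
public:
    std::uint32_t insert(std::shared_ptr<void> object) {
        std::lock_guard<std::mutex> lock(mutex_);
        std::uint32_t id = next_id_++;
        objects_[id] = std::move(object);
        return id;
    }

    std::shared_ptr<void> find(std::uint32_t id) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = objects_.find(id);
        return it == objects_.end() ? nullptr : it->second;
    }

    void erase(std::uint32_t id) {
        std::lock_guard<std::mutex> lock(mutex_);
        objects_.erase(id);
    }

private:
    std::mutex mutex_;
    std::uint32_t next_id_ = 1;
    std::unordered_map<std::uint32_t, std::shared_ptr<void>> objects_;
};

int main() {
    Repository repo;
    std::uint32_t id = repo.insert(std::make_shared<int>(7));  // hand `id` across the FFI boundary
    if (auto obj = repo.find(id)) { /* use *std::static_pointer_cast<int>(obj) */ }
    repo.erase(id);
}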
Use Pointers.
In general, my recommendation is to use Pointers.
While Handles may feel safer, implementing a correct system is much more complicated than it looks. Furthermore, Handles do not intrinsically solve ownership issues. Instead of Undefined Behavior, you'll get Null Pointer Dangling Handle Exceptions... and have to reinvent the tooling to track them down.
If you cannot solve the ownership issues with Pointers, you are unlikely to solve them with Handles.

Is there a way in rust to mark a type as non-droppable?

I would like to make it a compiler error to allow a type to be dropped; instead it must be forgotten. My use case is for a type that represents a handle of sorts that must be returned to its source for cleanup. This way a user of the API cannot accidentally leak the handle. They would be required to either return the handle to its source or explicitly forget it. In the source, the associated resources would be cleaned up and the handle explicitly forgotten.
The article The Pain Of Real Linear Types in Rust mentions this. Relevant quote:
One extreme option that I've seen is to implement drop() as
abort("this value must be used"). All "proper" consumers then
mem::forget the value, preventing this "destructor bomb" from going
off. This provides a dynamic version of strict must-use values.
Although it's still vulnerable to the few ways destructors can leak,
this isn't a significant concern in practice. Mostly it just stinks
because it's dynamic and Rust users Want Static Verification.
Ultimately, Rust lacks "proper" support for this kind of type.
So, assuming you want static checks, the answer is no.
You could require the user to pass a function object that returns the handle (FnOnce(Handle) -> Handle), as long as there aren't any other ways to create a handle.

magic statics: similar constructs, interesting non-obvious uses?

C++11 introduced threadsafe local static initialization, aka "magic statics": Is local static variable initialization thread-safe in C++11?
In particular, the spec says:
If control enters the declaration concurrently while the variable is
being initialized, the concurrent execution shall wait for completion
of the initialization.
So there's an implicit mutex lock here. This is very interesting, and seems like an anomaly; that is, I don't know of any other implicit mutexes built into C++ (i.e. mutex semantics without any use of things like std::mutex). Are there any others, or is this unique in the spec?
I'm also curious whether magic static's implicit mutex (or other implicit mutexes, if there are any) can be leveraged to implement other synchronization primitives. For example, I see that they can be used to implement std::call_once, since this:
std::call_once(onceflag, some_function);
can be expressed as this:
static int dummy = (some_function(), 0);
Note, however, that the magic static version is more limited than std::call_once, since with std::call_once you could re-initialize onceflag and so use the code multiple times per program execution, whereas with magic statics, you really only get to use it once per program execution.
That's the only somewhat non-obvious use of magic statics that I can think of.
Is it possible to use magic static's implicit mutex to implement other synchronization primitives, e.g. a general std::mutex, or other useful things?
Initialization of block-scope static variables is the only place where the language requires synchronization. Several library functions require synchronization, but aren't directly synchronization functions (e.g. atexit).
Since the synchronization on the initialization of a local static is a one-time affair, it would be hard, if not impossible, to implement a general purpose synchronization mechanism on top of it, since every time you needed a synchronization point you would need to be initializing a different local static object.
Though they can be used in place of call_once in some circumstances, they can't be used as a general replacement for that, since a given once_flag object may be used from many places.
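For reference, the usual way the guarantee is exploited is lazily-initialised, function-local state (a Meyers-style singleton); expensive_init here is just a stand-in:
#include <cstdio>

// Hypothetical expensive setup we want to run exactly once, thread-safely.
int expensive_init() {
    std::printf("initialising...\n");
    return 42;
}

int& shared_value() {
    // The initializer runs exactly once; if several threads call shared_value()
    // concurrently before it has run, all but one block until it completes.
    static int value = expensive_init();
    return value;
}

int main() {
    std::printf("%d\n", shared_value());   // prints "initialising..." then 42
    std::printf("%d\n", shared_value());   // 42 only; the initializer is not re-run
}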

What is the fastcall keyword used for in Visual C?

I have seen the fastcall notation appended before many functions. Why is it used?
That notation before the function is called the "calling convention." It specifies how (at a low level) the compiler will pass input parameters to the function and retrieve its results once it's been executed.
There are many different calling conventions, the most popular being stdcall and cdecl.
You might think there's only one way of doing it, but in reality, there are dozens of ways you could call a function and pass variables in and out. You could place the input parameters on a stack (push, push, push to call; pop, pop, pop to read input parameters). Or perhaps you would rather stick them in registers (this is fastcall - it tries to fit some of the input params in registers for speed).
But then what about the order? Do you push them from left to right or right to left? What about the result - there's always only one (assuming no reference parameters), so do you place the result on the stack, in a register, at a certain memory address?
Also, let's assume you're using the stack for communication - whose job is it to actually clear the stack after the function is called - the caller or the callee?
What about backing up and then restoring the contents of (certain) CPU registers - should the caller do it, or will the callee guarantee that it'll return everything the way it was?
The most popular calling convention (by far) is cdecl, which is the standard calling convention in both C and C++. The WIN32 API uses stdcall, which means any code that calls the WIN32 API needs to use stdcall for those function calls (making it another popular choice).
fastcall is a bit of an oddball - people realized that for many functions with only one in/out parameter, pushing to and popping from a memory-based stack is quite a bit of overhead and makes function calls a little heavy, so different compilers introduced (different) calling conventions that place one or more parameters in registers before placing the rest on the stack, for better performance. The problem is that not all compilers used the same rules for what goes where and who does what with fastcall, so you have to be careful when using it because you'll never know who does what. Finally, see Is fastcall really faster? for info on fastcall performance benefits.
Complicated stuff.
Something important to keep in mind: don't add or change calling conventions if you don't know exactly what you're doing, because if the caller and the callee do not agree on the calling convention, you'll likely end up with stack corruption and a segfault. This usually happens when the function being called lives in a DLL/shared library and a program is written that depends on the DLL/SO/dylib using a certain calling convention (say, cdecl), and then the library is recompiled with a different calling convention (say, fastcall). Now the old program can no longer communicate with the new library.
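To guard against exactly that mismatch, MSVC lets you spell the convention out in the declaration that both the DLL and its clients compile against. A small MSVC-specific sketch (function names are made up; on x64 these keywords are accepted but ignored, since there is a single standard convention there):
// MSVC / x86 examples of explicit calling conventions:
int __cdecl    add_cdecl(int a, int b);     // caller cleans the stack (C default)
int __stdcall  add_stdcall(int a, int b);   // callee cleans the stack (WIN32 API)
int __fastcall add_fastcall(int a, int b);  // a and b go in ECX and EDX, callee cleans up

// Putting the convention in the shared header keeps the DLL and its users in agreement.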
Wikipedia states that
Conventions entitled fastcall have not been standardized, and have been implemented differently, depending on the compiler vendor. Typically fastcall calling conventions pass one or more arguments in registers which reduces the number of memory accesses required for the call.

What's going on in the 'offsetof' macro?

The Visual C++ 2008 C runtime offers an 'offsetof' operator, which is actually a macro defined as:
#define offsetof(s,m) (size_t)&reinterpret_cast<const volatile char&>((((s *)0)->m))
This allows you to calculate the offset of the member variable m within the class s.
What I don't understand in this declaration is:
Why are we casting m to anything at all and then dereferencing it? Wouldn't this have worked just as well:
&(((s*)0)->m)
?
What's the reason for choosing char reference (char&) as the cast target?
Why use volatile? Is there a danger of the compiler optimizing the loading of m? If so, in what exact way could that happen?
An offset is in bytes. So to get a number expressed in bytes, you have to cast the address to char, because char is the same size as a byte (on this platform).
The use of volatile is perhaps a cautious step to ensure that no compiler optimisations (either that exist now or may be added in the future) will change the precise meaning of the cast.
Update:
If we look at the macro definition:
(size_t)&reinterpret_cast<const volatile char&>((((s *)0)->m))
With the cast-to-char removed it would be:
(size_t)&((((s *)0)->m))
In other words, get the address of member m in an object at address zero, which does look okay at first glance. So there must be some way that this would potentially cause a problem.
One thing that springs to mind is that the operator & may be overloaded on whatever type m happens to be. If so, this macro would be executing arbitrary code on an "artificial" object that is somewhere quite close to address zero. This would probably cause an access violation.
This kind of abuse may be outside the applicability of offsetof, which is supposed to only be used with POD types. Perhaps the idea is that it is better to return a junk value instead of crashing.
(Update 2: As Steve pointed out in the comments, there would be no similar problem with operator ->)
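To make the operator& concern concrete, here's a small sketch (Evil and Holder are made-up names) showing how the cast-to-char& sidesteps an overloaded operator&, using a real object rather than a null pointer:
#include <cstddef>
#include <cstdio>

// Hypothetical member type that overloads unary operator&.
struct Evil {
    int payload;
    Evil* operator&() { return nullptr; }   // sabotages naive address-taking
};

struct Holder {
    int  before;
    Evil m;
};

int main() {
    Holder h{};

    // Naive &h.m calls Evil::operator& and yields nullptr, not the member's address.
    Evil* wrong = &h.m;

    // The macro's trick: view both object and member as char lvalues, where the
    // built-in & applies, then subtract to get the offset in bytes.
    const volatile char* base = &reinterpret_cast<const volatile char&>(h);
    const volatile char* memb = &reinterpret_cast<const volatile char&>(h.m);
    std::size_t offset = static_cast<std::size_t>(memb - base);

    std::printf("wrong=%p offset=%zu offsetof=%zu\n",
                static_cast<void*>(wrong), offset, offsetof(Holder, m));
}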
offsetof is something to be very careful with in C++. It's a relic from C. These days we are supposed to use member pointers. That said, I believe that member pointers to data members are overdesigned and broken - I actually prefer offsetof.
Even so, offsetof is full of nasty surprises.
First, for your specific questions, I suspect the real issue is that this has been adapted from the traditional C macro (which I thought was mandated by the C++ standard). They probably use reinterpret_cast for "it's C++!" reasons (so why the (size_t) cast?), and a char& rather than a char* to try to simplify the expression a little.
Casting to char looks redundant in this form, but probably isn't. (size_t) is not equivalent to reinterpret_cast, and if you try to cast pointers to other types into integers, you run into problems. I don't think the compiler even allows it, but to be honest, I'm suffering memory failure ATM.
The fact that char is a single byte type has some relevance in the traditional form, but that may only be why the cast is correct again. To be honest, I seem to remember casting to void*, then char*.
Incidentally, having gone to the trouble of using C++-specific stuff, they really should be using std::ptrdiff_t for the final cast.
Anyway, coming back to the nasty surprises...
VC++ and GCC probably won't use that macro. IIRC, they have a compiler intrinsic, depending on options.
The reason is to do what offsetof is intended to do, rather than what the macro does, which is reliable in C but not in C++. To understand this, consider what would happen if your struct uses multiple or virtual inheritance. In the macro, when you dereference a null pointer, you end up trying to access a virtual table pointer that isn't there at address zero, meaning that your app probably crashes.
For this reason, some compilers have an intrinsic that just uses the specified struct's layout instead of trying to deduce a run-time type. But the C++ standard doesn't mandate or even suggest this - it's only there for C compatibility reasons. And you still have to be careful if you're working with class hierarchies, because as soon as you use multiple or virtual inheritance, you cannot assume that the layout of the derived class matches the layout of the base class - you have to ensure that the offset is valid for the exact run-time type, not just a particular base.
If you're working on a data structure library, maybe using single inheritance for nodes, where apps cannot see or use your nodes directly, offsetof works well. But strictly speaking, even then, there's a gotcha. If your data structure is in a template, the nodes may have fields with types from template parameters (the contained data type). If that isn't POD, technically your structs aren't POD either. And all the standard demands for offsetof is that it works for POD. In practice, it will work - your type hasn't gained a virtual table or anything just because it has a non-POD member - but you have no guarantees.
If you know the exact run-time type when you dereference using a field offset, you should be OK even with multiple and virtual inheritance, but ONLY if the compiler provides an intrinsic implementation of offsetof to derive that offset in the first place. My advice - don't do it.
Why use inheritance in a data structure library? Well, how about...
class node_base { ... };
class leaf_node : public node_base { ... };
class branch_node : public node_base { ... };
The fields in the node_base are automatically shared (with identical layout) in both the leaf and branch, avoiding a common error in C with accidentally different node layouts.
BTW - offsetof is avoidable with this kind of stuff. Even if you are using offsetof for some jobs, node_base can still have virtual methods and therefore a virtual table, so long as it isn't needed to dereference member variables. Therefore, node_base can have pure virtual getters, setters and other methods. Normally, that's exactly what you should do. Using offsetof (or member pointers) is a complication, and should only be used as an optimisation if you know you need it. If your data structure is in a disk file, for instance, you definitely don't need it - a few virtual call overheads will be insignificant compared with the disk access overheads, so any optimisation efforts should go into minimising disk accesses.
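A sketch of that alternative, with pure virtual accessors instead of field offsets (names are illustrative):
class node_base {
public:
    virtual ~node_base() = default;
    virtual int key() const = 0;            // accessor instead of a raw field offset
};

class leaf_node : public node_base {
public:
    explicit leaf_node(int k) : key_(k) {}
    int key() const override { return key_; }
private:
    int key_;
};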
Hmmm - went off on a bit of a tangent there. Whoops.
char is guaranteed to be the smallest number of bits the architecture can "bite" off (a.k.a. a byte).
All pointers are really just numbers, so cast address 0 to the struct type, because it's the beginning.
Take the address of the member starting from 0 (resulting in 0 + location_of_m).
Cast that back to size_t.
1) I also do not know why it is done this way.
2) The char type is special in two ways.
No other type has weaker alignment restrictions than char. This matters for the reinterpret_cast between pointers, and between an expression and a reference.
It is also the only type (together with its unsigned variant) for which the specification defines the behaviour when it is used to access the stored value of variables of a different type. I do not know if this applies to this specific situation.
3) I think the volatile modifier is used to ensure that no compiler optimisation will result in an attempt to read the memory.
2. What's the reason for choosing char reference (char&) as the cast target?
If the type of m has operator& overloaded, then we can't get its address with a plain &(((s *)0)->m).
So we reinterpret_cast it to the primitive type char, which cannot have operator& overloaded,
and then we can take the address from that.
In C, the reinterpret_cast is not required.
3. Why use volatile? Is there a danger of the compiler optimizing the loading of m? If so, in what exact way could that happen?
Here volatile is not about compiler optimisation.
If the member's type has const or volatile qualifiers (or both), then
reinterpret_cast can't cast it to plain char&, because reinterpret_cast can't remove cv-qualifiers,
so casting to <const volatile char&> makes the cast work for any combination of qualifiers.
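A tiny sketch of that last point (the S struct is made up): with a const volatile member, only the fully qualified cast compiles:
struct S {
    int normal;
    const volatile int m;   // cv-qualified member, for illustration
};

int main() {
    S s{1, 2};
    // Ill-formed: reinterpret_cast<char&>(s.m) would cast away const/volatile.
    // OK for any combination of qualifiers:
    const volatile char& byte = reinterpret_cast<const volatile char&>(s.m);
    (void)byte;
}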
