I'm writing a C++11 class Foo, and I want to give each instance its own thread-local storage of type Bar. That is to say, I want one Bar to be allocated per thread and per Foo instance.
If I were using pthreads, Foo would have a nonstatic member of type pthread_key_t, which Foo's constructor would initialize with pthread_key_create() and Foo's destructor would free with pthread_key_delete(). Or if I were writing for Microsoft Windows only, I could do something similar with TlsAlloc() and TlsFree(). Or if I were using Boost.Thread, Foo would have a nonstatic member of type boost::thread_specific_ptr.
In reality, however, I am trying to write portable C++11. C++11's thread_local keyword does not apply to nonstatic data members. So it's fine if you want one Bar per thread, but not if you want one Bar per thread per Foo.
So as far as I can tell, I need to define a thread-local map from Foos to Bars, and then deal with the question of how to clean up appropriately whenever a Foo is destroyed. But before I undertake that, I'm posting here in the hope that someone will stop me and say "There's an easier way."
(Btw, the reason I'm not using either pthread_key_create() or boost::thread_specific_ptr is that, if I understand correctly, they assume all threads will be spawned using pthreads or Boost.Thread respectively. I don't want to make any assumptions about how the users of my code will spawn their threads.)
You would like Foo to contain a thread_local variable of type Bar. Since, as noted, thread_local cannot be applied to a non-static data member, we have to do something more indirect. The underlying behavior will be for N instances of Bar to exist for each instance of Foo, where N is the number of threads in existence.
Here is a somewhat inefficient way of doing it; with more code, it could be made faster. Basically, each thread gets one TLS map shared by all Foo instances, keyed by Foo*.
#include <unordered_map>

class Bar { ... };

class Foo {
private:
    static thread_local std::unordered_map<Foo*, Bar> tls;

public:
    // All internal member functions must go through this too.
    Bar *get_bar() {
        auto I = tls.find(this);
        if (I != tls.end())
            return &I->second;
        // emplace returns a pair<iterator, bool>; could use std::piecewise_construct here...
        auto II = tls.emplace(this, Bar());
        return &II.first->second;
    }
};

// The static thread_local member still needs an out-of-class definition.
thread_local std::unordered_map<Foo*, Bar> Foo::tls;
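One caveat, which connects to the cleanup question in the post: the map outlives any given Foo, so once a Foo is destroyed and its address is reused by a new Foo, a thread could be handed a stale Bar. Below is a hedged sketch (not part of the answer above) of a variant that keys the map on a never-reused id instead of the this pointer; the next_id counter and the destructor are my own additions for illustration.

#include <atomic>
#include <cstdint>
#include <unordered_map>

class Bar { /* ... */ };   // stand-in for the real Bar

class Foo {
private:
    static std::atomic<std::uint64_t> next_id;
    static thread_local std::unordered_map<std::uint64_t, Bar> tls;
    std::uint64_t id = next_id.fetch_add(1, std::memory_order_relaxed);

public:
    // operator[] default-constructs this thread's Bar on first use.
    Bar *get_bar() { return &tls[id]; }

    // Best effort: only the destroying thread's own entry can be erased here.
    ~Foo() { tls.erase(id); }
};

std::atomic<std::uint64_t> Foo::next_id{0};
thread_local std::unordered_map<std::uint64_t, Bar> Foo::tls;

Entries belonging to dead Foos still linger in other threads' maps until those threads exit, but because ids are never reused they can never be wrongly returned.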
Oftentimes in embedded settings we need to declare static structs (drivers, etc.) so that their memory is known and assigned at compile time.
Is there any way to achieve something similar in Rust?
For example, I want to have a uart driver struct
struct DriverUart {
    ...
}
and an associated impl block.
Now, I want to avoid having a function named new(), and instead I want to allocate this memory a priori somewhere (or have a new function that I can call statically, outside any code block).
In C I would simply put an instantiation of this struct in some header file and it will be statically allocated and globally available.
I haven't found anything similar in Rust.
If it is not possible, then why not? And what is the best way to achieve something similar?
Thanks!
https://doc.rust-lang.org/std/keyword.static.html
You can do the same in Rust, without the header, as long as all the elements are const:
struct DriverUart {
    whatever: u32
}
static thing: DriverUart = DriverUart { whatever: 5 };
If you need to evaluate non-const expressions, then that obviously will not work and you'll need to use something like lazy_static or once_cell to instantiate simili-statics.
And of course, with Rust being a safe language and statics being shared state, a static is always considered to be shared between threads, so mutable statics are wildly unsafe unless mitigated via thread-safe interior-mutability containers (e.g. an atomic, or a Mutex, though those are currently non-const and it's unclear whether they can ever be otherwise).
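A hedged sketch of both of those suggestions, assuming the once_cell crate is available; the compute_baud() helper and the DRIVER/ERROR_COUNT names are made up for illustration:

use once_cell::sync::Lazy;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Mutex;

struct DriverUart {
    whatever: u32,
}

fn compute_baud() -> u32 {
    115_200 // stand-in for a non-const expression
}

// Non-const initialization, run lazily (and exactly once) on first access.
static DRIVER: Lazy<Mutex<DriverUart>> =
    Lazy::new(|| Mutex::new(DriverUart { whatever: compute_baud() }));

// Mutable shared state goes through a thread-safe container, not `static mut`.
static ERROR_COUNT: AtomicU32 = AtomicU32::new(0);

fn main() {
    DRIVER.lock().unwrap().whatever = 9_600;
    ERROR_COUNT.fetch_add(1, Ordering::Relaxed);
}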
I just wanted to declare a pointer to a struct in a crate shared by several components of my project, all running in the same process. What I mean is that the aim is to get it initialized only once.
type Box = [u64; 64];
pub static mut mmaped: &mut Box;
which at compile time generates this error:
free static item without body
where mmaped is later assigned a value in the following way, only one time, from the top crate, and its value is used from the multiple crates it depends on.
mmaped = unsafe { std::mem::transmute(addr) };
So how do I provide a definition for mmaped without having to mmap it more than once?
This question isn't a duplicate of this one, as that one doesn't talk about exporting the singleton outside the crate, and I'm getting a compiler error specifically for doing that.
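For what it's worth, here is a hedged sketch (not taken from the thread) of one way to publish the mapping once without static mut: keep the pointer in an AtomicPtr that the top crate initializes a single time. The names BoxWords, MMAPED, init_mmaped and mmaped_ref are made up for illustration.

use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

pub type BoxWords = [u64; 64];   // stand-in for the `Box` alias above

// Starts out null; the top crate publishes the mmap'ed address exactly once.
pub static MMAPED: AtomicPtr<BoxWords> = AtomicPtr::new(ptr::null_mut());

/// Called once from the top crate after the mapping is created.
pub fn init_mmaped(addr: *mut BoxWords) {
    // compare_exchange ensures only the first initializer wins.
    let _ = MMAPED.compare_exchange(
        ptr::null_mut(),
        addr,
        Ordering::AcqRel,
        Ordering::Acquire,
    );
}

/// Used from the dependent crates; panics if called before initialization.
pub fn mmaped_ref() -> &'static BoxWords {
    let p = MMAPED.load(Ordering::Acquire);
    assert!(!p.is_null(), "MMAPED is not initialized yet");
    unsafe { &*p }   // caller's responsibility: the mapping must stay valid for 'static
}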
As far as I know, structs are thread-safe. But when a struct has a class property, is it still thread-safe?
import CoreLocation

struct UserLocation {
    let geocoder = CLGeocoder()
}
I asked this because I'm currently debugging random crashes which point to our struct object. Our struct object is being passed across multiple threads.
Due to the ownership of the code I can't post the exact code here, so I created a small snippet of it instead.
Based on the OP's request, he will accept this comment as the answer.
Well, no matter what is inside the struct, it's the value semantics that make it thread-safe, so a class instance inside it is totally fine because you are passing a new value of it. However, I am not 100% sure, so I would suggest making the property a lazy var.
I've noticed it's possible to set the cpumask of all unbound workqueues using workqueue_set_unbound_cpumask(), but I don't see anything that will target a specific workqueue. The real thorn in my side is that struct workqueue_struct is defined within the source file, so I can't access any of its members.
One solution I have is to allocate a new struct workqueue_attrs and call apply_workqueue_attrs(). This would be fine if not for the fact that I can't access the aforementioned workqueue's unbound_attrs member. There seem to be some functions for copying but they are all internal. So now I'm left to reselect values for nice and no_numa, which is irritating since I have to pay attention to whether the workqueues are ordered and if they were allocated with WQ_HIGHPRI.
I know it's possible to do this in userspace with sysfs, but I don't want the user to have to track every workqueue that gets created and pin them all manually, nor to have to pin every unbound workqueue.
Hopefully I'm missing something and there is an easier way. Code below for reference.
struct workqueue_struct *wq = alloc_workqueue(name, WQ_UNBOUND | WQ_HIGHPRI, 0);

struct workqueue_attrs *attrs = alloc_workqueue_attrs(GFP_KERNEL);
/* Yuck, setting WQ_HIGHPRI above was just declaring intent; restate it here. */
attrs->nice = MIN_NICE;
attrs->no_numa = 0;
cpumask_parse(..., attrs->cpumask);
apply_workqueue_attrs(wq, attrs);
free_workqueue_attrs(attrs); /* apply_workqueue_attrs() takes its own copy */
I have a template class similar to this one:
#include <iostream>
#include <mutex>
#include <string>
#include <vector>

using namespace std;

template <typename T>
class Foo {
public:
    static void show () {
        unique_lock<mutex> l {mtx};
        for (const auto& v : vec) {
            cout << v << endl;
        }
    }

    static void add (T s) {
        unique_lock<mutex> l {mtx};
        vec.push_back (s);
    }

private:
    static mutex mtx;
    static vector<T> vec;
};

template <typename T> mutex Foo<T>::mtx;
template <typename T> vector<T> Foo<T>::vec;
And usage of this class looks like this:
Foo<string>::add ("d");
Foo<string>::add ("dr");
Foo<string>::add ("dre");
Foo<string>::add ("drew");
Foo<string>::show ();
Could you tell me if this class is thread-safe? And if it is not, how can I make a thread-safe version?
If I understand it correctly, when we have a class with non-static member functions and a non-static mutex, we prevent a race condition on the single object that has been passed across threads, right? And with something like this we prevent a race condition not for an object but for the class instead, in this case for a particular type, or am I wrong?
Looks good to me. Up to a point.
mtx is 'guarding' vec to ensure that the iteration in show() cannot take place while the vector is being modified by add().
Consider calling it vec_mtx if that is its role.
Up to a point?
Firstly, your usage doesn't (in fact) show any threading taking place, so I don't (quite) know what you're trying to achieve.
For example, if all those adds were taking place in one thread and show in another, your code (obviously) won't ensure that they have all taken place before show.
It will only ensure that (logically) show takes place either before all of them, strictly between two of them, or after all of them.
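To make that concrete, here is a hedged sketch reusing the Foo<string> from the question (the thread bodies are made up): the mutex serializes each call, but it says nothing about how many adds happen before the show.

#include <thread>

int main () {
    thread writer ([] {
        Foo<string>::add ("d");
        Foo<string>::add ("dr");
        Foo<string>::add ("dre");
        Foo<string>::add ("drew");
    });
    thread reader ([] {
        Foo<string>::show ();   // may legitimately print 0, 1, 2, 3 or 4 lines
    });
    writer.join ();
    reader.join ();
}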
Although it's not applicable to your use case, if you retained references to the string objects you passed in, or used a type T with a 'shallow' copy constructor, then you might be in trouble.
Consider calling Foo::add(buffer); and then continuing to modify buffer, such that show() is executing in one thread while strcat(buffer, ...) is executing in another and buffer is temporarily not '\0'-terminated. Boom! Book a ticket to Seg Fault City (where the grass ain't green and the girls ain't pretty).
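A hedged sketch of that hazard, assuming Foo is instantiated with a raw pointer type such as const char*, so the vector 'copies' only the pointer and not the characters it points at:

#include <cstring>
#include <thread>

int main () {
    char buffer[16] = "dr";
    Foo<const char*>::add (buffer);        // vec now holds a pointer into buffer

    thread printer ([] {
        Foo<const char*>::show ();         // reads through that pointer...
    });
    strcat (buffer, "ew");                 // ...while this thread rewrites the bytes
    printer.join ();                       // data race on buffer: undefined behaviour
}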
You need to look hard (real hard) at what data is being shared and how.
Almost all unqualified statements 'X is thread-safe' are false for all X.
You should always (always) qualify what uses they are safe for and/or how far their safety extends.
That goes 10 times for templates.
Almost all template 'guarantees' can be blown by using some kind of C array (or a complex structure holding a pointer, or something referenced elsewhere), passing it into the template in a thread here while smacking it about over there. Your efforts to make an internal copy will themselves be unsafe!
There's a recurring fallacy that if you share data through some kind of thread-safe exchange structure, such as a queue, you get some kind of thread-safe guarantee, never need to think about it again, and can go back to your single-threaded thinking.
Here endeth the lesson.
NB: In theory, std::string could be implemented in a memory-starved environment to aggressively 'pool' copied strings in a way that exposes you to race conditions you haven't even imagined. In theory, you understand.