I have a function which is called from multiple places. Some callers call it with a spinlock held and some without any lock. How can I know whether my function was called with a spinlock held?
I have a big piece of code written some time back. It has some functions which are called with and without locks from different code paths. The functions allocate skbs with the GFP_KERNEL flag, considering only the cases without locks. This causes issues when they are called under spin_lock(). I need to handle both cases to avoid sleeping inside a spinlock.
How can I know if my function is called with spinlock held?
You cannot, not directly. You would need to set a flag in some structure yourself that indicates whether you hold the lock or not.
You are better off creating two functions: one that you call if you hold the lock, and one that you call if you don't.
// b->lck must already be held by the caller (lck is assumed here to be a spinlock_t member of struct bar)
void foo_unlocked(struct bar *b)
{
    // do your thing, assume the relevant lock is held
}

// b->lck must not be held by the caller
void foo(struct bar *b)
{
    spin_lock(&b->lck);
    foo_unlocked(b);
    spin_unlock(&b->lck);
}
I only need to check whether preemption or IRQs are disabled. Based on that, I can allocate memory with GFP_KERNEL or GFP_ATOMIC, so I don't need to know whether a spin_lock or another lock is held. I can achieve this using the in_atomic() and irqs_disabled() functions. Thanks
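A minimal sketch of that approach, assuming a hypothetical wrapper around alloc_skb() (the helper name and the len parameter are invented for illustration):

#include <linux/skbuff.h>    // alloc_skb(), struct sk_buff
#include <linux/preempt.h>   // in_atomic()
#include <linux/irqflags.h>  // irqs_disabled()

// Hypothetical helper: choose the allocation flag from the current context,
// using the in_atomic() and irqs_disabled() checks mentioned above.
static struct sk_buff *my_alloc_skb(unsigned int len)
{
    gfp_t flags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;

    return alloc_skb(len, flags);
}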
Basically, the title is self-explanatory.
I use it in the following way:
The code is in Objective-C++.
Objective-C classes make concurrent calls to functions that serve different purposes.
I use std::mutex to lock and unlock editing of a std::vector<T> across the entire class, as the C++ standard containers are not thread safe.
Using lock_guard means the mutex is automatically unlocked again when the guard goes out of scope. That makes it impossible to forget to unlock it when returning early or when an exception is thrown. You should always prefer to use lock_guard or unique_lock instead of calling mutex::lock() directly. See http://kayari.org/cxx/antipatterns.html#locking-mutex
lock_guard is an example of an RAII or SBRM type.
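For the vector case in the question, a minimal sketch (the class and member names here are invented for illustration) could look like this:

#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical wrapper: every member function that touches the vector
// locks the same mutex, and lock_guard releases it on every exit path.
class MessageStore {
public:
    void add(int value) {
        std::lock_guard<std::mutex> guard(mMutex); // unlocked automatically at '}'
        mValues.push_back(value);
    }
    std::size_t size() const {
        std::lock_guard<std::mutex> guard(mMutex);
        return mValues.size();
    }
private:
    mutable std::mutex mMutex;
    std::vector<int> mValues;
};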
std::lock_guard is used for only two purposes:
Automate mutex unlock during destruction (no need to call .unlock()).
Allow simultaneous locking of multiple mutexes to overcome the deadlock problem.
For the latter use case you will need the std::adopt_lock tag:
std::lock(mutex_one, mutex_two);
std::lock_guard<std::mutex> lockPurposeOne(mutex_one, std::adopt_lock);
std::lock_guard<std::mutex> lockPurposeTwo(mutex_two, std::adopt_lock);
On the other hand, you will need to create yet another guard instance every time you need to lock the mutex, as std::lock_guard has no member functions. If you need a guard with unlocking functionality, take a look at the std::unique_lock class. You may also consider using std::shared_lock for parallel reading of your vector.
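A small sketch of the extra flexibility std::unique_lock offers over std::lock_guard (the function and mutex names are illustrative):

#include <mutex>

std::mutex m;   // hypothetical mutex guarding some shared state

void worker()
{
    std::unique_lock<std::mutex> lock(m);   // locked here
    // ... touch the shared data ...
    lock.unlock();                          // can release early, unlike lock_guard
    // ... work that does not touch shared data ...
}                                           // if still locked, unlocked automatically here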
You may notice that the std::shared_lock class is commented out in the header files and will only be accessible with C++17. According to the header file you can use std::shared_timed_mutex, but when you try to build the app it will fail, as Apple has updated the header files but not libc++ itself.
So for an Objective-C app it may be more convenient to use GCD: allocate a couple of queues for all your C++ containers at the same time and put semaphores where needed. Take a look at this excellent comparison.
I have an stl::map<int, *msg> msg_container, where msg is a class (not relevant here).
There are multiple threads adding to the global msg_container, with locks in place for synchronised access.
In a separate thread, I need to take a local copy of msg_container at a particular time and perform checks on it. Pseudo-code as below:
map<int, *msg> msg_container;
map<int, *msg> msg_container_copy;
if (appropriate_time_is_reached)
{
    msg_container_copy = msg_container;
    // perform functions on msg_container_copy
}
As per my previous question, I know I will need to lock msg_container when reading, if there is a chance that other threads are adding to it.
Do I need to lock msg_container_copy when using it in this manner? It is local only to this thread, so there are no other threads that will be accessing it.
I do not see the necessity to lock the variable msg_container_copy if, as you describe, "It is local only to this thread, so there are no other threads that will be accessing it."
By the way, I think the definition "stl::map<int, *msg> msg_container;" should be written as "stl::map<int, msg *> msg_container;" if msg is a class, so that msg * is a pointer type. It must be a typo.
You don't need a lock to access msg_container_copy because no other thread can access it.
You might need a lock when dereferencing the pointers it contains, because they are shared with other threads. It depends what you do with those pointers.
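A sketch of that pattern, assuming a std::mutex named msg_container_mutex guards the shared map (the mutex and function names are invented for illustration):

#include <map>
#include <mutex>

class msg;                                  // as in the question, details not relevant

std::mutex msg_container_mutex;             // hypothetical: guards msg_container
std::map<int, msg *> msg_container;         // shared between threads
std::map<int, msg *> msg_container_copy;    // used by the checking thread only

void check_messages()
{
    {
        std::lock_guard<std::mutex> guard(msg_container_mutex);
        msg_container_copy = msg_container;  // copy while the shared map is locked
    }
    // The copy is only touched by this thread, so no lock is needed below,
    // though dereferencing the shared msg pointers may still need synchronisation.
    for (const auto &entry : msg_container_copy) {
        // perform checks on entry.second
    }
}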
I have the following code:
pthread_mutex_t lock_row[M], lock_culm[M];
FUNCTION SIGNATURE (..., int i, int j, ...) {
    pthread_mutex_lock(&lock_row[i]);
    pthread_mutex_lock(&lock_culm[j]);
    ...CRITICAL CODE...
    pthread_mutex_unlock(&lock_row[j]);
    pthread_mutex_unlock(&lock_row[i]);
}
Can I get a deadlock between the first lock and the second? Say we have a context switch after the first lock, and another thread tries to lock something. I don't really get this and would like to understand it a little further.
Apart from the probable typo where you unlock something twice, this example will never deadlock. Context switches between the two lock calls pose no threat to the mechanism involved here. Think of it as gaining a higher level of allowance: with each lock gained, this process or thread is allowed to do more. Each lock is a gate which may hold the process up until no other lock holder prevents it from entering the higher level. Whatever happens between the two lockings does not matter, as long as it does not change that level of allowance.
pthread_mutex_lock(&lock_row[i]);
pthread_mutex_lock(&lock_culm[j]);
This is fine as long as all of your code takes these locks in this order - the lock_row lock first, then the lock_culm lock second. If another part of the code takes these same locks in the opposite order, then it can deadlock.
For this reason it is usual in complex programs to define the locking order - a global ordering of all the locks in the program, defining the order in which they should be taken.
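One sketch of how to enforce such an ordering for the row/column locks above (the helper names are invented; the point is only that every caller acquires the locks in the same globally agreed order):

// Hypothetical helpers: all callers go through these, so the locks are
// always taken in the same global order (row lock first, then column lock).
static void lock_cell(int i, int j)
{
    pthread_mutex_lock(&lock_row[i]);
    pthread_mutex_lock(&lock_culm[j]);
}

static void unlock_cell(int i, int j)
{
    pthread_mutex_unlock(&lock_culm[j]);   // release in the reverse order
    pthread_mutex_unlock(&lock_row[i]);
}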
I would like to confirm here if I understood correctly how TCriticalSection and Synchronize operate.
As far as I know, Synchronize uses SendMessage (update: or at least used it in older VCL versions, as mentioned in a couple of comments below), which suspends the currently executing thread (as well as any other thread), unlike PostMessage, which doesn't, and then executes the required function (from the main thread). In a way, SendMessage "stops" multithreading while it executes.
But I am not sure about TCriticalSection. Let's say for example I create something like this:
// Global variables somewhere in my code any thread can access
boost::scoped_ptr<TCriticalSection> ProtectMyVarAndCallFnction(new TCriticalSection);
int MyVariable1;
void CallMyFunctionThatAlsoModifiesMoreStuff() { /* do even more here */ };
// Thread code within one of the threads
try {
    ProtectMyVarAndCallFnction->Acquire();
    MyVariable1++;
    CallMyFunctionThatAlsoModifiesMoreStuff();
}
__finally {
    ProtectMyVarAndCallFnction->Release();
}
Now, my question is: how does the critical section "know" that I am protecting MyVariable1 in this case, as well as whatever the called function may modify?
If I understood it correctly, it doesn't, and it is my responsibility to correctly call Acquire() in any thread that wants to change MyVariable1 or call this function (or do either of the two). In other words, I think of TCriticalSection as a user-defined block covering whatever I have logically assigned to it. It may be a set of variables or a particular function, as long as I call Acquire() within all of the threads that might write to this block or use this function. For example, "DiskOp" may be the name of a TCriticalSection that protects writes to disk, and "Internet" may be the name of a TCriticalSection that protects functions retrieving data from the Internet. Did I get that right?
Also, within this context, does a TCriticalSection therefore always need to be a global kind of variable?
SendMessage suspends the currently executing thread (as well as any other thread).
No, that is incorrect. SendMessage does not suspend anything. SendMessage merely delivers a message synchronously. The function does not return until the message has been delivered. That is, the window proc of the target window has been executed. And because the window proc is always called on the thread that owns the window, this means that the calling thread may need to be blocked to wait until the window's owning thread is ready to execute the window proc. It most definitely doesn't suspend all threads in the process.
How does the critical section know that I am protecting MyVariable1?
It doesn't. It's entirely up to you to make sure that all uses of MyVariable1 that need protection are given protection. A critical section is a form of mutex. A mutex ensures that only one thread of execution can hold the mutex at any instant in time.
As I call Acquire() within all of the threads that might write to this block or use this function.
That's not really it either. The "within all of the threads" part is the wrong way to think about it. You need to be thinking about "at all sections of code that use the variable".
Does a critical section therefore always need to be a global kind of variable?
No, a critical section can be a global variable. But it need not be.
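For instance, the critical section can be a member of the class whose data it protects. A sketch in C++Builder style (the class, member names, and include path are assumptions; older VCL versions declare TCriticalSection in <SyncObjs.hpp>):

#include <System.SyncObjs.hpp>   // assumed location of TCriticalSection
#include <memory>

// Hypothetical class: the critical section lives with the data it guards,
// so nothing about it needs to be global.
class TCounter {
public:
    TCounter() : FLock(new TCriticalSection), FValue(0) {}
    void Increment() {
        FLock->Acquire();
        try {
            ++FValue;
        }
        __finally {
            FLock->Release();
        }
    }
private:
    std::unique_ptr<TCriticalSection> FLock;
    int FValue;
};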
I have a method as below:
SomeStruct* abc;
void NullABC()
{
    abc = NULL;
}
This is just example and not very interesting.
Many threads could call this method at the same time.
Do I need to lock "abc = NULL" line?
I think it is just a pointer, so it could be done in one shot and there isn't really a need for it, but I just wanted to make sure.
Thanks
It depends on the platform on which you are running. On many platforms, as long as abc is correctly aligned, the write will be atomic.
However, if your platform does not have such a guarantee, you need to synchronize access to the variable, using a lock, an atomic variable, or an interlocked operation.
No, you do not need a lock, at least not on x86. A memory barrier is required in many real-world situations though, and locking is one way to get this (the other would be an explicit barrier). You may also consider using an interlocked operation, like Visual C++'s InterlockedExchangePointer, if you need access to the original pointer. There are equivalent intrinsics supported by most compilers.
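If C++11 is available, another option worth considering is std::atomic, which makes the intent explicit and portable. A sketch only (SomeStruct's contents and the UseABC wrapper are invented; the memory orders shown are one reasonable choice):

#include <atomic>

struct SomeStruct { /* ... */ };

std::atomic<SomeStruct *> abc{nullptr};

void NullABC()
{
    abc.store(nullptr, std::memory_order_release);  // atomic write, safe from any thread
}

void UseABC()
{
    // Take a local snapshot first, as in the "borderline case" below,
    // so the pointer cannot change between the check and the use.
    SomeStruct *my_abc = abc.load(std::memory_order_acquire);
    if (my_abc != nullptr) {
        // my_abc->DoSomething();
    }
}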
If no other threads are ever using abc for any other purpose, then the code as shown is fine... but of course it's a bit silly to have a pointer that never gets used except to set it to NULL.
If there is some other code somewhere that does something like this, OTOH:
if (abc != NULL)
{
    abc->DoSomething();
}
Then in this case both the code that uses the abc pointer (above) and the code that changes it (that you posted) need to lock a mutex before accessing abc. Otherwise the code above risks crashing if the value of abc gets set to NULL after the if statement but before the DoSomething() call.
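A sketch of what that looks like with a single mutex guarding both sides (the mutex name and the UseABC wrapper are invented for illustration):

#include <cstddef>
#include <mutex>

struct SomeStruct { void DoSomething(); };

SomeStruct *abc = NULL;
std::mutex abc_mutex;   // hypothetical: guards every access to abc

void NullABC()
{
    std::lock_guard<std::mutex> guard(abc_mutex);
    abc = NULL;
}

void UseABC()
{
    std::lock_guard<std::mutex> guard(abc_mutex);
    if (abc != NULL)
    {
        abc->DoSomething();
    }
}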
A borderline case would be if the other code does this:
SomeStruct * my_abc = abc;
if (my_abc != NULL)
{
    my_abc->DoSomething();
}
That will probably work, because at the time the abc pointer's value is copied over to my_abc, the value of abc is either NULL or it isn't... and my_abc is a local variable, so other threads won't be able to change it before DoSomething() is called. The above could theoretically break on some platforms where copying of pointers isn't atomic, though (in which case my_abc might end up being an invalid pointer, with half of abc's bits and half NULL bits)... but common PC hardware will copy pointers atomically, so it shouldn't be an issue there. It might be worthwhile to use a mutex anyway, just for paranoia's sake.