#define SW1 RB5
int IOFlag = 2; //while in out

void SW(){
    if(!RB5){           // switch pressed (active low)
        __delay_ms(50); // debounce
        while(!RB5);    // wait for release
        __delay_ms(50); // debounce
        IOFlag++;
    }
}

void main(){
    SW();
    while(IOFlag % 2 != 0){
        SW();
        //some routines..
    }
}
I am using a PIC16F73, with RB5 as the switch input.
While some of the routines are running, the switch does not respond properly.
I think it may be possible to fix this with an interrupt, but I don't know how to use one properly.
You need to understand the difference between polling and interrupts.
With polling (what you appear to be doing), you periodically check the state of some "thing" and act on it.
With interrupts, the "thing" happening causes your main thread of execution to be suspended, and an interrupt service routine (ISR) run.
Polling has the disadvantage of potentially long latency: the time between the thing happening and your finding out about it. In fact, you can even lose events if the thing is, for example, a momentary switch: you switch it on then off, and by the time the code checks for it, it's off.
Now you can still use polling if you wish, provided you understand these implications. Sometimes the easiest solution is to poll more often.
For example, if one of your //some routines.. jobs is a long running loop, you can poll from within there:
for (int i = 0; i < numThings; i++) {
    doSomethingQuickWith(thing[i]);
    SW(); // Poll here as well
}
// Rather than here.
However, for minimal latency, using interrupts is usually better, and it is reasonably simple once you wrap your head around the concept.
Your ISR (which will run on the given event, interrupting the main thread of execution) simply has to store the fact that the event has happened and communicate that to your main thread somehow.
For situations where you don't care how many times the event has happened, a flag will do the job. Your ISR simply sets the flag and your main thread of execution checks it periodically to see if it's been set, then clears it (with interrupts disabled so as to avoid race conditions). That would be something like (pseudo-code):
global val switchHit = false

main:
    interrupt (7, intFn)       // call intFn() on interrupt 7
    while true:
        disableInts()          // disallow interrupts for a short while
        if switchHit:
            handleSwitch()     // switch was hit, do something (quickly)
            switchHit = false  // mark as not hit
        enableInts()           // and re-allow interrupts
        doLotsOfOtherStuff()

intFn:
    switchHit = true           // notify main
Note that I'm not worried about race conditions within the ISR; interrupts are generally disabled automatically there.
More complicated information transfer may involve a count rather than a flag, or even a message queue of some sort, flowing from the ISR to the main thread of execution.
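For the counting case, a rough sketch might look like the following (not PIC-specific pseudo-C: disable_interrupts()/enable_interrupts(), handle_switch() and the ISR hookup are placeholders for whatever your compiler and device actually provide, such as clearing and setting the GIE bit on a PIC16):

volatile unsigned char switchEvents = 0;   // written by the ISR, read by main

void switch_isr(void)          // register this as your ISR using your toolchain's syntax
{
    // ... clear the interrupt flag for the switch pin here ...
    switchEvents++;            // just record that the event happened
}

void main_loop(void)
{
    for (;;) {
        unsigned char pending;

        disable_interrupts();  // placeholder: briefly stop the ISR from running
        pending = switchEvents;
        switchEvents = 0;      // consume the recorded events atomically
        enable_interrupts();   // placeholder: re-allow interrupts

        while (pending--)
            handle_switch();   // placeholder: act on each recorded press

        do_lots_of_other_stuff();
    }
}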
Related
TL;DR
Why does std::condition_variable::wait need a mutex as one of its variables?
Answer 1
You may look at the documentation and quote that:
wait... Atomically releases lock
But that's not a real reason. That just validates my question even more: why does it need it in the first place?
Answer 2
The predicate most likely queries the state of a shared resource, and that must be lock guarded.
OK, fair.
Two questions here:
Is it always true that the predicate queries the state of a shared resource? I assume yes; it doesn't make sense to me to implement it otherwise.
What if I do not pass any predicate (it is optional)?
Using a predicate - the lock makes sense:
int i = 0;
void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    cv.wait(lk, []{ return i == 1; });
    std::cout << i;
}
Not using a predicate - why can't we lock after the wait?
int i = 0;
void waits()
{
    cv.wait(lk);
    std::unique_lock<std::mutex> lk(cv_m);
    std::cout << i;
}
Notes
I know that there are no harmful implications to this practice. I just don't know how to explain to myself why it was designed this way.
Question
If predicate is optional and is not passed to wait, why do we need the lock?
When using a condition variable to wait for a condition, a thread performs the following sequence of steps:
It determines that the condition is not currently true.
It starts waiting for some other thread to make the condition true. This is the wait call.
For example, the condition might be that a queue has elements in it, and a thread might see that the queue is empty and wait for another thread to put things in the queue.
If another thread were to intercede between these two steps, it could make the condition true and notify on the condition variable before the first thread actually starts waiting. In this case, the waiting thread would not receive the notification, and it might never stop waiting.
The purpose of requiring the lock to be held is to prevent other threads from interceding like this. Additionally, the lock must be unlocked to allow other threads to do whatever we're waiting for, but it can't happen before the wait call because of the notify-before-wait problem, and it can't happen after the wait call because we can't do anything while we're waiting. It has to be part of the wait call, so wait has to know about the lock.
Now, you might look at the notify_* methods and notice that those methods don't require the lock to be held, so there's nothing actually stopping another thread from notifying between steps 1 and 2. However, a thread calling notify_* is supposed to hold the lock while performing whatever action it does to make the condition true, which is usually enough protection.
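As a concrete sketch of that queue example (illustrative only; the names here are made up, not taken from any particular codebase):

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;                            // the condition is "q is not empty"

int consume()
{
    std::unique_lock<std::mutex> lock(m);
    while (q.empty())                         // step 1: condition not currently true
        cv.wait(lock);                        // step 2: atomically unlock m and wait
    int item = q.front();                     // the lock is held again here
    q.pop();
    return item;
}

void produce(int item)
{
    {
        std::lock_guard<std::mutex> lock(m);  // hold the lock while making the condition true
        q.push(item);
    }
    cv.notify_one();
}

Because consume() holds m when it checks q.empty() and wait() releases m atomically, produce() cannot push and notify in the gap between the check and the wait.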
TL;DR
If predicate is optional and is not passed to wait, why do we need the lock?
condition_variable is designed to wait for a certain condition to come true, not to wait just for a notification. So to "catch" the "moment" when the condition becomes true you need to check the condition and wait for the notification. And to avoid a race condition you need those two to be a single atomic operation.
Purpose Of condition_variable:
Enable a program to implement this: do some action when a condition C holds.
Intended Protocol:
Condition producer changes state of the world from !C to C.
Condition consumer waits for C to happen and takes the action while/after condition C holds.
Simplification:
For simplicity (to limit the number of cases to think of) let's assume that C never switches back to !C. Let's also forget about spurious wakeups. Even with these assumptions we'll see that the lock is necessary.
Naive Approach:
Let's have two threads, with their essential code summarized like this:
void producer() {
    _condition = true;
    _condition_variable.notify_all();
}

void consumer() {
    if (!_condition) {
        _condition_variable.wait();
    }
    action();
}
The Problem:
The problem here is a race condition. A problematic interleaving of the threads is the following:
The consumer reads condition, checks it to be false and decides to wait.
A thread scheduler interrupts consumer and resumes producer.
The producer updates condition to become true and invokes notify_all().
The consumer is resumed.
The consumer actually does wait(), but is never notified and woken up (a liveness hazard).
So without locking the consumer may miss the event of the condition becoming true.
Solution:
Disclaimer: this code still does not handle spurious wakeups and possibility of condition becoming false again.
void producer() {
    {
        std::unique_lock<std::mutex> l(_mutex);
        _condition = true;
    }
    _condition_variable.notify_all();
}

void consumer() {
    {
        std::unique_lock<std::mutex> l(_mutex);
        if (!_condition) {
            _condition_variable.wait(l);
        }
    }
    action();
}
Here we check the condition, release the lock and start waiting as a single atomic operation, preventing the race condition mentioned before.
See Also
Why Lock condition await must hold the lock
You need a std::unique_lock when using std::condition_variable for the same reason you need a std::FILE* when using std::fwrite and for the same reason a BasicLockable is necessary when using std::unique_lock itself.
The feature std::fwrite gives you, the entire reason it exists, is to write to files. So you have to give it a file. The feature std::unique_lock provides you is RAII locking and unlocking of a mutex (or another BasicLockable, like std::shared_mutex, etc.), so you have to give it something to lock and unlock.
The feature std::condition_variable provides, the entire reason it exists, is atomically unlocking a lock and waiting (and re-locking it when the wait completes). So you have to give it something to lock.
Why would someone want that is a separate question that has been discussed already. For example:
When is a condition variable needed, isn't a mutex enough?
Conditional Variable vs Semaphore
Advantages of using condition variables over mutex
And so on.
As has been explained, the pred parameter is optional, but having some sort of a predicate and testing it isn't. Or, in other words, not having a predicate doesn't make any sense, in much the same way that having a condition variable without a lock doesn't make any sense.
The reason you have a lock is because you have shared state you need to protect from simultaneous access. Some function of that shared state is the predicate.
If you don't have a predicate and you don't have a lock you really don't need a condition variable just like if you don't have a file you really don't need fwrite.
A final point is that the second code snippet you wrote is very broken. Obviously it won't compile as you define the lock after you try to pass it as an argument to condition_variable::wait(). You probably meant something like:
std::mutex mtx_cv;
std::condition_variable cv;
...
{
    std::unique_lock<std::mutex> lk(mtx_cv);
    cv.wait(lk);
    lk.lock(); // throws std::system_error with an error code of std::errc::resource_deadlock_would_occur
}
The reason this is wrong is very simple. condition_variable::wait's effects are (from [thread.condition.condvar]):
Effects:
— Atomically calls lock.unlock() and blocks on *this.
— When unblocked, calls lock.lock() (possibly blocking on the lock), then returns.
— The function will unblock when signaled by a call to notify_one() or a call to notify_all(), or spuriously
After the return from wait() the lock is locked, and unique_lock::lock() throws an exception if it has already locked the mutex it wraps ([thread.lock.unique.locking]).
Again, why would someone want coupling waiting and locking the way std::condition_variable does is a separate question, but given that it does - you cannot, by definition, lock a std::condition_variable's std::unique_lock after std::condition_variable::wait has returned.
It's not stated in the documentation (and could be implemented differently), but conceptually you can imagine that the condition variable has another mutex of its own, used both to protect its own data and to coordinate the condition, the waiting and the notification with modification of the consumer's data (e.g. queue.size()) affecting the test.
So when you call wait(...) the following (logically) happens.
Precondition: The consumer code holds the lock (CCL) controlling the consumer condition data (CCD).
The condition is checked; if true, execution in the consumer code continues, still holding the lock.
If false, it first acquires the condition variable's own lock (CVL), adds the current thread to the waiting-thread collection, releases the consumer lock, puts itself into the waiting state and releases its own lock (CVL).
That final step is tricky because it needs to sleep the thread and release the CVL at the same time or in that order or in a way that threads notified just before going to wait are able to (somehow) not go to wait.
The step of acquiring the CVL before releasing the CCL is key. Any parallel thread trying to update the CCD and notify will be blocked by either the CCL or the CVL. If the CCL were released before acquiring the CVL, a parallel thread could acquire the CCL, change the data and then notify before the to-be-waiting thread has been added to the waiters.
A parallel thread acquires the CCL, modifies the data to make the condition true (or at least worth testing) and then notifies. Notification acquires the CVL and identifies a blocked thread (or threads), if any, to unwait. The unwaited threads then seek to acquire the CCL and may block there, but won't leave wait and re-perform the test until they've acquired it.
Notification must acquire the CVL to make sure threads that have found the test false have been added to the waiters.
It's OK (and possibly preferable for performance) to notify without holding the CCL, because the hand-off between the CCL and CVL in the wait code ensures the ordering.
It may be preferable because notifying while holding the CCL may mean all the unwaited threads just unwait only to block (on the CCL) while the thread modifying the data is still holding the lock.
Notice that even if the CCD is atomic you must modify it while holding the CCL, or that lock-CVL-then-unlock-CCL step won't ensure the total ordering required to make sure notifications aren't sent while threads are in the process of going to wait.
The standard only talks about the atomicity of operations, and another implementation may have a different way of blocking notification until the 'add to waiters' step has completed following a failed test. The C++ standard is careful not to dictate an implementation.
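To make the hand-off concrete, the sequence described above could be written as conceptual pseudo-code roughly like this (this is not how any particular implementation is required to work; the helper names are purely illustrative):

// Conceptual pseudo-code only.
// ccl = the caller's std::unique_lock, cvl = the condition variable's internal lock.
void wait(std::unique_lock<std::mutex>& ccl)
{
    cvl.lock();                      // acquire the CVL before touching internal data
    waiters.add(current_thread());   // register this thread as a waiter
    ccl.unlock();                    // only now release the consumer's lock (CCL)
    sleep_and_release(cvl);          // block this thread and drop the CVL together
                                     // ... later, woken by notify_one()/notify_all() ...
    ccl.lock();                      // re-acquire the consumer's lock before returning
}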
With all that in mind, to answer some of the specific questions:
Must the state be shared? Sort of. There could be an external condition like a file being in a directory and the wait is timed to re-try after a time-period. You can decide for yourself whether you consider the file system or even just the wall-clock to be shared state.
Must there be any state? Not necessarily. A thread can wait on notification.
That could be tricky to coordinate because there has to be enough sequencing to stop the other thread notifying out of turn. The commonest solution is to have some boolean flag set by the notifying thread so the notified thread knows if it missed it. The normal use of void wait(std::unique_lock<std::mutex>& lk) is when the predicate is checked outside:
std::unique_lock<std::mutex> ulk(ccd_mutex);
while (!condition) {
    cv.wait(ulk);
}
Where the notifying thread uses:
{
    std::lock_guard<std::mutex> guard(ccd_mutex);
    condition = true;
}
cv.notify_one();
The reason is that at times the waiting thread holds m_mutex:
#include <mutex>
#include <condition_variable>

void CMyClass::MyFunc()
{
    std::unique_lock<std::mutex> guard(m_mutex);
    // do something (on the protected resource)
    m_condition.wait(guard, [this]() { return !m_bSpuriousWake; });
    // do something else (on the protected resource)
    guard.unlock();
    // do something else than else
}
and a thread should never go to sleep while holding m_mutex. One doesn't want to lock everybody out while sleeping. So, atomically: {guard is unlocked and the thread goes to sleep}. Once it is woken up by the other thread (m_condition.notify_one(), let's say), guard is locked again, and then the thread continues.
Reference (video)
Because otherwise there's a race window between the waiting thread noticing the change of the shared state and the wait() call.
Assume we have a shared state of type std::atomic<int> named state_; there's still a fair chance for the waiting thread to miss a notification:
T1(waiting) | T2(notification)
---------------------------------------------- * ---------------------------
1) for (int i = state_; i != 0; i = state_) { |
2) | state_ = 0;
3) | cv.notify();
4) cv.wait(); |
5) }
6) // go on with the satisfied condition... |
Note that the wait() call failed to notice the latest value of state_ and may keep waiting forever.
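For completeness, here is a minimal sketch of a fixed version, where the check of state_ and the wait happen under the same mutex, so T2's update and notification can no longer slip into the gap:

#include <atomic>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
std::atomic<int> state_{1};

void waiting_thread()                           // T1
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return state_ == 0; });  // re-checks the state after every wakeup
    // go on with the satisfied condition...
}

void notifying_thread()                         // T2
{
    {
        std::lock_guard<std::mutex> lock(m);    // take the lock even though state_ is atomic
        state_ = 0;
    }
    cv.notify_one();
}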
Using module_init I have created and woken up a kthread. In order to keep it alive and also do my function's task, I used the following approach. That was the only approach I could get running, since I am changing the flag in an interrupt. Now I am facing an unbelievable drop in the performance of the code. I narrowed the problem down to the following piece of code:
while(1){
    // Do my tasks here after changing flag
    while(get_flag()){ // Waiting for a flag, to basically do my Func in the previous line.
        schedule();
    }
} // to keep a kthread alive after initial create.
Details about the performance drop: without the second while loop (the one that calls schedule()), the data transmission rate in my code is 35 MB/s, but with this little loop it drops to 5 MB/s.
Is there any other way that I can make a kthread sleep and wait for a flag change?
Ideally, this is not the way you should do this in the kernel. But if you have to do it this way:
Check whether you are doing a blocking check for the flag. If that is the case, change it to a non-blocking wait: just check the flag and call schedule(); that should be enough in most cases. The scheduling algorithm will make sure all processes get their fair share of CPU. Also, if you are doing a blocking check for the flag you are unnecessarily wasting CPU cycles, since you only do the processing on the next scheduler slice. By the same logic, if you want better performance, you should wake up your waiting process from your producer thread with wake_up_process().
-or-
If you just want to achieve the functionality, I feel the right way to do it is the following method: use a wait queue, with wait_event_interruptible() and wake_up_interruptible().
From your kernel thread above, you just need to call wait_event_interruptible().
See the pseudo code below:
while (1) {
    wait_event_interruptible(wq, your_flag);
    <do your task>
}
and from the place you are setting the flag
{
    <some event>
    <set flag>
    wake_up_interruptible(wq);
}
You don't have to call schedule() explicitly.
I am working on a user space app for an embedded Linux project using the 2.6.24.3 kernel.
My app passes data between two file nodes by creating two pthreads that each sleep until an asynchronous IO operation completes, at which point the thread wakes and runs a completion handler.
The completion handlers need to keep track of how many transfers are pending and maintain a handful of linked lists that one thread will add to and the other will remove from.
// sleep here until events arrive or time out expires
for (;;) {
    no_of_events = io_getevents(ctx, 1, num_events, events, &timeout);

    // Process each aio event that has completed or thrown an error
    for (i = 0; i < no_of_events; i++) {
        // Get pointer to completion handler
        io_complete = (io_callback_t) events[i].data;
        // Get pointer to data object
        iocb = (struct iocb *) events[i].obj;
        // Call completion handler and pass it the data object
        io_complete(ctx, iocb, events[i].res, events[i].res2);
    }
}
My question is this...
Is there a simple way I can prevent the currently active thread from yielding whilst it runs the completion handler rather than going down the mutex/spin lock route?
Or failing that can Linux be configured to prevent yielding a pthread when a mutex/spin lock is held?
You can use the sched_setscheduler() system call to temporarily set the thread's scheduling policy to SCHED_FIFO, then set it back again. From the sched_setscheduler() man page:
A SCHED_FIFO process runs until either it is blocked by an I/O request, it is preempted by a higher priority process, or it calls sched_yield(2).
(In this context, "process" actually means "thread").
However, this is quite a suspicious requirement. What is the problem you are hoping to solve? If you are just trying to protect your linked list of completion handlers from concurrent access, then an ordinary mutex is the way to go. Have the completion thread lock the mutex, remove the list item, unlock the mutex, then call the completion handler.
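For example, a minimal sketch of that pattern (Transfer, pending and handle_completion are placeholder names, not from your code):

#include <list>
#include <mutex>

struct Transfer;                        // whatever your per-transfer data is
void handle_completion(Transfer* t);    // your (possibly slow) completion handler

std::mutex pending_mutex;               // protects the shared list
std::list<Transfer*> pending;           // the other thread pushes onto this

void on_completion()
{
    Transfer* t = nullptr;
    {
        std::lock_guard<std::mutex> lock(pending_mutex); // hold the lock only while touching the list
        if (!pending.empty()) {
            t = pending.front();
            pending.pop_front();
        }
    }                                   // lock released here
    if (t)
        handle_completion(t);           // run the handler without the lock held
}

The lock is held only for the list manipulation, so neither thread is blocked for longer than a few instructions.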
I think you'll want to use mutexes/locks to prevent race conditions here. Mutexes are in no way voodoo magic and can even make your code simpler than using arbitrary system-specific features, which you'd potentially need to port across systems. I don't know whether the latter is an issue for you, though.
I believe you are trying to outsmart the Linux scheduler here, for the wrong reasons.
The correct solution is to use a mutex to prevent completion handlers from running in parallel. Let the scheduler do its job.
When myThread.Start(...) is called, do we have any assurance that the thread has started? The MSDN documentation isn't really specific about that. It says that the status of the thread is changed to Running.
I am asking because I've seen the following code a couple of times. It creates a thread, starts it and then loops until the status becomes Running. Is that loop necessary?
Thread t = new Thread(new ParameterizedThreadStart(data));
t.Start(data);
while (t.ThreadState != System.Threading.ThreadState.Running &&
       t.ThreadState != System.Threading.ThreadState.WaitSleepJoin)
{
    Thread.Sleep(10);
}
Thanks!
If you're set on not allowing your loop to continue until the thread has "started", then it will depend on what exactly you mean by "started". Does that mean that the thread has been created by the OS and signaled to run, but not necessarily that it's done anything yet? Does that mean that it's executed one or more operations?
While it's likely fine, your loop isn't bulletproof, since it's theoretically possible that the entire thread executes between the time you call Start and when you check the ThreadState; it's also not a good idea to check the property directly twice.
If you want to stick with checking the state, something like this would/could be more reliable:
ThreadState state = t.ThreadState;
while (state != ThreadState.Running && state != ThreadState.WaitSleepJoin)
{
    Thread.Sleep(10);
    state = t.ThreadState;
}
However, this is still subject to the possibility of the thread starting, running, then stopping before you even get the chance to check. Yes, you could expand the check to include other states, but I would recommend using a WaitHandle to signal when the thread "starts".
ManualResetEvent signal;

void foo()
{
    Thread t = new Thread(new ParameterizedThreadStart(ThreadMethod));
    signal = new ManualResetEvent(false);
    t.Start(data);
    signal.WaitOne();
    /* code to execute after the thread has "started" */
}

void ThreadMethod(object foo)
{
    signal.Set();
    /* do your work */
}
You still have the possibility of the thread ending before you check, but you're guaranteed to have that WaitHandle set once the thread starts. The call to WaitOne will block indefinitely until Set has been called on the WaitHandle.
I guess it depends on what you are doing after the loop. If whatever comes after it is critically dependent on the thread running, then checking is not a bad idea. Personally I'd use a ManualResetEvent or something similar that is set by the thread, rather than checking the ThreadState.
No. Thread.Start causes a "thread to be scheduled for execution". It will start, but it may take a (short) period of time before the code within your delegate actually runs. In fact, the code above doesn't do what (I suspect) the author intended, either. Setting the thread's threadstate to ThreadState.Running (which does happen in Thread.Start) just makes sure it's scheduled to run -- but the ThreadState can be "Running" before the delegate is actually executing.
As John Bergess suggested, using a ManualResetEvent to notify the main thread that the thread is running is a much better option than sleeping and checking the thread's state.
On constrained devices, I often find myself "faking" locks between 2 threads with 2 bools. Each is only read by one thread, and only written by the other. Here's what I mean:
bool quitted = false, is_paused = false;
bool should_quit = false, should_pause = false;

void downloader_thread() {
    quitted = false;
    while(!should_quit) {
        fill_buffer(bfr);
        if(should_pause) {
            is_paused = true;
            while(should_pause) sleep(50);
            is_paused = false;
        }
    }
    quitted = true;
}

void ui_thread() {
    // new Thread(downloader_thread).start();
    // ...
    should_pause = true;
    while(!is_paused) sleep(50);
    // resize buffer or something else non-thread-safe
    should_pause = false;
}
Of course on a PC I wouldn't do this, but on constrained devices, it seems reading a bool value would be much quicker than obtaining a lock. Of course I trade off for slower recovery (see "sleep(50)") when a change to the buffer is needed.
The question -- is it completely thread-safe? Or are there hidden gotchas I need to be aware of when faking locks like this? Or should I not do this at all?
Using bool values to communicate between threads can work as you intend, but there are indeed two hidden gotchas as explained in this blog post by Vitaliy Liptchinsky:
Cache Coherency
A CPU does not always fetch memory values from RAM. Fast memory caches on the die are one of the tricks CPU designers use to work around the Von Neumann bottleneck. On some multi-CPU or multi-core architectures (like Intel's Itanium) these CPU caches are not shared or automatically kept in sync. In other words, your threads may be seeing different values for the same memory address if they run on different CPUs.
To avoid this you need to declare your variables as volatile (C++, C#, java), or do explicit volatile read/writes, or make use of locking mechanisms.
Compiler Optimizations
The compiler or JITter may perform optimizations which are not safe if multiple threads are involved. See the linked blog post for an example. Again, you must make use of the volatile keyword or other mechanisms to inform your compiler.
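If this is C++ (C++11 or later), one way to address both gotchas is std::atomic, which provides the visibility and ordering guarantees that plain bools do not. A sketch of the same hand-shake (fill_buffer and bfr are assumed from the question; the higher-level ordering concerns raised in the other answers still apply):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> quitted{false}, is_paused{false};
std::atomic<bool> should_quit{false}, should_pause{false};

void downloader_thread() {
    quitted = false;
    while (!should_quit) {
        fill_buffer(bfr);                     // from the question
        if (should_pause) {
            is_paused = true;
            while (should_pause)
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
            is_paused = false;
        }
    }
    quitted = true;
}

void ui_thread() {
    should_pause = true;
    while (!is_paused)
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    // resize buffer or something else non-thread-safe
    should_pause = false;
}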
Unless you understand the memory architecture of your device in detail, as well as the code generated by your compiler, this code is not safe.
Just because it seems that it would work, doesn't mean that it will. "Constrained" devices, like the unconstrained type, are getting more and more powerful. I wouldn't bet against finding a dual-core CPU in a cell phone, for instance. That means I wouldn't bet that the above code would work.
Concerning the sleep call, you could always just do sleep(0), or the equivalent call that pauses your thread and lets the next one in line take a turn.
Concerning the rest, this is thread safe if you know the implementation details of your device.
Answering the questions.
Is this completely thread safe? I would answer no, this is not thread safe, and I would just not do this at all. Without knowing the details of your device and compiler: if this is C++, the compiler is free to reorder and optimize things away as it sees fit. For example, you wrote:
is_paused = true;
while(should_pause) sleep(50);
is_paused = false;
but the compiler may choose to reorder this into something like this:
sleep(50);
is_paused = false;
This probably won't work even on a single-core device, as others have said.
Rather than taking a lock, you may do better to simply do less on the UI thread, rather than yielding in the middle of processing UI messages. If you think you have spent too much time on the UI thread, find a way to cleanly exit and register an asynchronous callback.
If you call sleep on a UI thread (or try to acquire a lock, or do anything else that may block), you open the door to hangs and glitchy UIs. A 50 ms sleep is enough for a user to notice. And if you try to acquire a lock or do any other blocking operation (like I/O), you have to deal with waiting an indeterminate amount of time for it to complete, which tends to turn a glitch into a hang.
This code is unsafe under almost all circumstances. On multi-core processors you will not have cache coherency between cores, because bool reads and writes are not atomic operations. This means each core is not guaranteed to see the same value in its cache, or even in memory, if the cache holding the last write hasn't been flushed.
However, even on resource-constrained single-core devices this is not safe, because you do not have control over the scheduler. Here is an example; for simplicity I'm going to pretend these are the only two threads on the device.
When the ui_thread runs, the following lines of code could be run in the same timeslice.
// new Thread(downloader_thread).start();
// ...
should_pause = true;
The downloader_thread runs next, and in its time slice the following lines are executed:
quitted = false;
while(!should_quit)
{
fill_buffer(bfr);
The scheduler preempts the downloader_thread before fill_buffer returns and then activates the ui_thread, which runs:
while(!is_paused) sleep(50);
// resize buffer or something else non-thread-safe
should_pause = false;
The resize-buffer operation is done while the downloader_thread is in the process of filling the buffer. This means the buffer is corrupted and you'll likely crash soon. It won't happen every time, but the fact that you are filling the buffer before you set is_paused to true makes it more likely to happen. Even if you switched the order of those two operations on the downloader_thread you would still have a race condition; you'd just likely deadlock instead of corrupting the buffer.
Incidentally, this is a type of spinlock; it just doesn't work. Spinlocks aren't good for wait times that are likely to span many time slices, because they keep the processor spinning. Your implementation does sleep, which is a bit nicer, but the scheduler still has to run your thread, and thread context switches aren't cheap. If you are waiting on a critical section or semaphore, the scheduler doesn't activate your thread again until the resource has become free.
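For comparison, here is a sketch of the pause hand-shake built on a mutex and condition variable (C++11 names used for illustration; the question's code looks more like C, so treat this as the shape of the solution rather than a drop-in replacement):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool should_pause = false;   // protected by m
bool is_paused = false;      // protected by m

void downloader_pause_point() {          // called by the downloader between buffer fills
    std::unique_lock<std::mutex> lock(m);
    if (should_pause) {
        is_paused = true;
        cv.notify_all();                 // tell the UI thread we have actually stopped
        while (should_pause)
            cv.wait(lock);               // sleep until resumed; no spinning, no fixed 50 ms latency
        is_paused = false;
    }
}

void ui_pause_and_resize() {
    std::unique_lock<std::mutex> lock(m);
    should_pause = true;
    while (!is_paused)
        cv.wait(lock);                   // sleep until the downloader has parked itself
    // resize buffer or other non-thread-safe work here (downloader is parked)
    should_pause = false;
    cv.notify_all();                     // wake the downloader
}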
You might be able to get away with this in some form on a specific platform/architecture, but it is really easy to make a mistake that is very hard to track down.