I am running this very simple program on Ubuntu 13.04 Desktop. However, if I comment out the sleep_for line, it hangs after printing the cout from main. Can anyone explain why? As far as I understand, main is one thread and t is another, and in this case the mutex manages synchronization for the shared cout object.
#include <thread>
#include <iostream>
#include <mutex>
#include <chrono>
using namespace std;
std::mutex mu;
void show()
{
    std::lock_guard<mutex> locker(mu);
    cout << "this is from inside the thread" << endl;
}
int main()
{
    std::thread t(show);
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    std::lock_guard<mutex> locker(mu);
    cout << "This is from inside the main" << endl;
    t.join();
    return 0;
}
If you change the main function as follows, the code will work as expected:
int main()
{
    std::thread t(show);
    {
        std::lock_guard<mutex> locker(mu);
        cout << "This is from inside the main" << endl;
    } // automatically releases the lock
    t.join();
}
In your code there is an unfortunate race condition. If the thread t gets the lock first, everything works fine. But if the main thread gets the lock first, it holds the lock until the end of the main function. That means the thread t has no chance to get the lock and can never finish, so the main thread blocks forever on t.join().
That's just a classic deadlock: the main thread obtains the lock and then blocks on joining the other thread, but the other thread can only finish (and thus be joined) if it manages to obtain the lock.
Note: I'll give examples in C++ but I believe my question is language-agnostic. Correct me if I'm wrong.
Just so you really understand me - what I'm trying to learn here is what the tool does and nothing else. Not what it's usually used for, not what the conventions says, just what the blunt tool does. In this case - what the condition variable does.
So far it seems to me like it's a simple mechanism that allows threads to wait (block) until some other thread signals them (unblocks them). Nothing more, no dealing with critical section access or data access (of course they can be used for that but it's only a matter of programmer's choice). Also the signaling is usually only done when something important happens (e.g. data was loaded) but theoretically it could be called at any time. Correct so far?
Now, every example that I have seen uses a condition variable object (e.g. std::condition_variable) but also some additional variable to mark if something happened (e.g. bool dataWasLoaded). Take a look at this example from https://thispointer.com//c11-multithreading-part-7-condition-variables-explained/:
#include <iostream>
#include <thread>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <chrono>
using namespace std::placeholders;
class Application
{
    std::mutex m_mutex;
    std::condition_variable m_condVar;
    bool m_bDataLoaded;
public:
    Application()
    {
        m_bDataLoaded = false;
    }
    void loadData()
    {
        // Make this thread sleep for 1 second
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::cout << "Loading Data from XML" << std::endl;
        // Lock the data structure
        std::lock_guard<std::mutex> guard(m_mutex);
        // Set the flag to true, meaning the data is loaded
        m_bDataLoaded = true;
        // Notify the condition variable
        m_condVar.notify_one();
    }
    bool isDataLoaded()
    {
        return m_bDataLoaded;
    }
    void mainTask()
    {
        std::cout << "Do Some Handshaking" << std::endl;
        // Acquire the lock
        std::unique_lock<std::mutex> mlock(m_mutex);
        // Start waiting for the condition variable to get signaled.
        // wait() internally releases the lock and blocks the thread.
        // As soon as the condition variable gets signaled, the thread resumes and
        // reacquires the lock, then checks whether the condition is met.
        // If the condition is met it continues, otherwise it goes back to waiting.
        m_condVar.wait(mlock, std::bind(&Application::isDataLoaded, this));
        std::cout << "Do Processing On loaded Data" << std::endl;
    }
};
int main()
{
    Application app;
    std::thread thread_1(&Application::mainTask, &app);
    std::thread thread_2(&Application::loadData, &app);
    thread_2.join();
    thread_1.join();
    return 0;
}
Now, other than the std::condition_variable m_condVar it also uses an additional variable bool m_bDataLoaded. But it seems to me that the thread performing mainTask is already notified that the data was loaded by means of std::condition_variable m_condVar. Why also check bool m_bDataLoaded for the same information? Compare (the same code without bool m_bDataLoaded):
#include <iostream>
#include <thread>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <chrono>
using namespace std::placeholders;
class Application
{
    std::mutex m_mutex;
    std::condition_variable m_condVar;
public:
    Application()
    {
    }
    void loadData()
    {
        // Make this thread sleep for 1 second
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        std::cout << "Loading Data from XML" << std::endl;
        // Lock the data structure
        std::lock_guard<std::mutex> guard(m_mutex);
        // Notify the condition variable
        m_condVar.notify_one();
    }
    void mainTask()
    {
        std::cout << "Do Some Handshaking" << std::endl;
        // Acquire the lock
        std::unique_lock<std::mutex> mlock(m_mutex);
        // Start waiting for the condition variable to get signaled.
        // wait() internally releases the lock and blocks the thread.
        // As soon as the condition variable gets signaled, the thread resumes and
        // reacquires the lock.
        m_condVar.wait(mlock);
        std::cout << "Do Processing On loaded Data" << std::endl;
    }
};
int main()
{
    Application app;
    std::thread thread_1(&Application::mainTask, &app);
    std::thread thread_2(&Application::loadData, &app);
    thread_2.join();
    thread_1.join();
    return 0;
}
Now I know about spurious wakeups, and they alone necessitate the use of an additional variable. My question is - are they the only reason for it? If they didn't occur, could one just use condition variables without any additional variables (and, by the way, wouldn't that make the name "condition variable" a misnomer)?
Another thing is - isn't the usage of additional variables the only reason why condition variables also require a mutex? If not, what are the other reasons?
If additional variables are necessary (for spurious wakeups or other reasons), why doesn't the API require them? (In the second snippet I didn't have to use any for the code to compile.) (I don't know if it's the same in other languages, so this question might be C++-specific.)
It's not all about spurious wakeups.
When you call m_condvar.wait, how do you know the condition you're waiting for has not already happened?
Maybe 'loadData' has already been called in another thread. When it called notify_one(), nothing happened because there were no threads waiting.
Now if you call condvar.wait, you will wait forever because nothing will signal you.
The original version does not have this problem, because:
If m_bDataLoaded is false, then it knows that the data is not loaded, and that after m_bDataLoaded is set true, the caller will signal the condition;
The lock is held, and we know that m_bDataLoaded cannot be modified in another thread until it's released;
condvar.wait will put the current thread in the waiting queue before releasing the lock, so we know that m_bDataLoaded will be set true after we start waiting, and so notify_one will also be called after we start waiting.
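For reference, the predicate overload of wait() used in the original code is specified to behave like the following loop; this sketch (using the names from the example) is exactly why those guarantees line up:
// What m_condVar.wait(mlock, std::bind(&Application::isDataLoaded, this)) boils down to:
while (!isDataLoaded())      // checked while m_mutex is held, so a notify_one() that has
    m_condVar.wait(mlock);   // already happened is caught here and we never block at all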
To answer your other questions:
Yes, coordination with additional variables is the reason why condition variables are tied to mutexes.
The API doesn't require, say, a boolean variable, because that's not always the kind of condition you're waiting for.
This kind of thing is common, for example:
Task *getTask() {
    // anyone who uses m_taskQueue or m_shutdown must lock this mutex
    unique_lock<mutex> lock(m_mutex);
    while (m_taskQueue.isEmpty()) {
        if (m_shutdown) {
            return nullptr;
        }
        // this is signalled after a task is enqueued
        // or m_shutdown is asserted
        m_condvar.wait(lock);
    }
    return m_taskQueue.pop_front();
}
Here we require the same critical guarantee that the thread starts waiting before the lock is released, but the condition we're waiting for is more complex, involving a variable and separate data structure, and there are multiple ways to exit the wait.
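For completeness, the producer side of such a queue might look roughly like this. This is a sketch in the same pseudocode style as above; addTask and shutdown are illustrative names, not from the original post:
void addTask(Task *task) {
    {
        unique_lock<mutex> lock(m_mutex);   // same mutex that getTask() holds
        m_taskQueue.push_back(task);
    }
    m_condvar.notify_one();                 // wake one waiting worker
}
void shutdown() {
    {
        unique_lock<mutex> lock(m_mutex);
        m_shutdown = true;
    }
    m_condvar.notify_all();                 // wake every worker so each one can see m_shutdown
}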
Yes, the condition variable is just a tool for waiting for an event. From my point of view, you should not try to use it for controlling concurrent access to critical data structures.
I can only speak for C++. As you can see in the example at https://en.cppreference.com/w/cpp/thread/condition_variable/wait, they use the expression cv.wait(lk, []{return i == 1;});. The []{...} part is a lambda, i.e. a nameless function, so you can also write your own named function and pass it by name:
bool condFn()
{
    std::cout << "condFn" << std::endl; // debug output ;)
    return i == 1;
}
void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    std::cerr << "Waiting... \n";
    cv.wait(lk, condFn);
    std::cerr << "...finished waiting. i == 1\n";
}
Inside this function you can evaluate whatever you want. The thread sleeps until it gets notified; whenever that happens it runs the function that evaluates the condition for continuing. If the function returns true, the thread continues; if it returns false, the thread goes back to sleep.
I understand that when a new thread is spawned it must be joined or detached, or else terminate will be called. The piece of code below works fine if I join the threads, but crashes if I call detach instead. I am not able to understand what's going on under the hood.
#include "iostream"
#include "thread"
#include "vector"
#include "algorithm"
#include "iterator"
#include "string"
#include "memory"
using namespace std;
void func() {
cout << " func ";
}
int main(int argc , char** argv)
{
std::vector< std::thread> m_vec;
for(int i = 0; i < 100 ; i++){
m_vec.push_back( std::thread(func));
m_vec[i].detach();
}
return 0;
}
Just detaching a thread doesn't give it permission to outlive the main thread. Once the main thread returns from main, that's the ballgame: the runtime starts tearing things down, and things like cout are cleaned up. Any remaining threads stand a distinct chance of crashing if they do anything before the process as a whole terminates.
If you detach a thread, be prepared to provide your own mechanism for ensuring it does not outlive the main thread.
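For example, one possible mechanism (a sketch, not the only way to do it) is to count the outstanding detached threads and have main block until they have all signalled completion:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
std::mutex g_mtx;
std::condition_variable g_cv;
int g_active = 0;   // number of detached threads still running
void func()
{
    std::cout << " func ";
    std::lock_guard<std::mutex> lk(g_mtx);
    --g_active;
    // notify while still holding the lock: main may wake, see g_active == 0 and exit,
    // destroying g_cv, so we must not touch g_cv after the mutex has been released
    g_cv.notify_one();
}
int main()
{
    for (int i = 0; i < 100; i++)
    {
        {
            std::lock_guard<std::mutex> lk(g_mtx);
            ++g_active;
        }
        std::thread(func).detach();
    }
    std::unique_lock<std::mutex> lk(g_mtx);
    g_cv.wait(lk, [] { return g_active == 0; });   // block until every worker is done
}
In C++20 a std::latch expresses the same idea more directly, but the principle is the same: main must wait for the detached threads to report completion before it returns.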
This is an example from the Boost library.
int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}
boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
boost::future<int> fi = pt.get_future();
Instead of using boost::thread task(boost::move(pt)); to launch the task on a thread,
I now want to put the thread into a vector of shared_ptr and launch the task on it.
First I create the vector:
std::vector<std::shared_ptr<boost::thread>> vecThreads;
Is this the right way to put a thread into the vector?
vecThreads.push_back(std::make_shared<boost::thread>(boost::packaged_task<int> &pt));
Thank you all for your attention!
Packaged tasks are just that. They don't "have" threads.
They just run on a thread. Any thread.
In fact, it's an anti-pattern to start a thread for each task. But, of course, you can. I'd suggest using a
boost::thread_group tg;
tg.create_thread(std::move(pt));
So you can depend on
tg.join_all();
to wait for all pending threads to complete.
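If create_thread does not accept the packaged_task (in the Boost versions I have used it takes the callable by value, and packaged_task is move-only), a sketch of the same idea using add_thread, which takes ownership of a heap-allocated thread, could look like this:
boost::thread_group tg;
boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
boost::unique_future<int> fi = pt.get_future();
tg.add_thread(new boost::thread(boost::move(pt)));  // the group takes ownership of the thread
// ...
tg.join_all();           // wait for every thread in the group
int answer = fi.get();   // 42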
UPDATE
With shared pointers, here's an example:
Live On Coliru
#include <boost/thread.hpp>
#include <boost/make_shared.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <vector>
#include <cstdlib>
using namespace boost;
int ltuae(int factor) {
    this_thread::sleep_for(chrono::milliseconds(rand() % 1000));
    return factor * 42;
}
int main() {
    std::vector<unique_future<int> > futures;
    std::vector<shared_ptr<thread> > threads;
    for (int i = 0; i < 10; ++i)
    {
        packaged_task<int> pt(bind(ltuae, i));
        futures.emplace_back(pt.get_future());
        threads.emplace_back(make_shared<thread>(std::move(pt)));
    }
    for (auto& f : futures)
        std::cout << "Return: " << f.get() << "\n";
    for (auto& t : threads)
        if (t->joinable())
            t->join();
}
I am using the C++11 STL with VS2013 to implement an asynchronous print class.
I cannot get thread.join() to return; it blocks even though there is no obvious deadlock.
I tried to debug it and finally found that the issue seems to be caused by whether the class variable is declared global or local. Here are the details; I don't know why this happens.
#include <iostream>
#include <string>
#include <chrono>
#include <functional>
#include <mutex>
#include <thread>
#include <condition_variable>
#include "tbb/concurrent_queue.h"
using namespace std;
class logger
{
public:
    ~logger()
    {
        fin();
    }
    void init()
    {
        m_quit = false;
        m_thd = thread(bind(&logger::printer, this));
        //thread printer(bind(&logger::printer, this));
        //m_thd.swap(printer);
    }
    void fin()
    {
        //not needed
        //unique_lock<mutex> locker(m_mtx);
        if (m_thd.joinable())
        {
            m_quit = true;
            write("fin");
            //locker.unlock();
            m_thd.join();
        }
    }
    void write(const char *msg)
    {
        m_queue.push(msg);
        m_cond.notify_one();
    }
    void printer()
    {
        string msgstr;
        unique_lock<mutex> locker(m_mtx);
        while (1)
        {
            if (m_queue.try_pop(msgstr))
                cout << msgstr << endl;
            else if (m_quit)
                break;
            else
                m_cond.wait(locker);
        }
        cout << "printer quit" << endl;
    }
    bool m_quit;
    mutex m_mtx;
    condition_variable m_cond;
    thread m_thd;
    tbb::concurrent_queue<string> m_queue;
};
For convenience I placed thread.join() in the class's destructor, to ensure that m_thd exits normally.
I tested the whole class and something went wrong:
m_thd.join() never returns when logger is declared as a global variable,
like this:
logger lgg;
int main()
{
    lgg.init();
    for (int i = 0; i < 100; ++i)
    {
        char s[8];
        sprintf_s(s, 8, "%d", i);
        lgg.write(s);
    }
    //if I first call lgg.fin() here, m_thd can be joined normally
    //lgg.fin();
    system("pause");
    //deadlocks/blocks here, and I observed that printer() finished successfully
}
If logger is declared as a local variable, everything seems to work well:
int main()
{
    logger lgg;
    lgg.init();
    for (int i = 0; i < 100; ++i)
    {
        char s[8];
        sprintf_s(s, 8, "%d", i);
        lgg.write(s);
    }
    system("pause");
}
Update 2015/02/27
I tried removing the std::cout call in printer(), but the program still blocks in the same place, so it seems it is not a std::cout problem?
I also deleted the superfluous lock in fin().
Globals and statics are constructed and destructed just before or just after DllMain is called for DLL_PROCESS_ATTACH and DLL_PROCESS_DETACH respectively. The problem with this is that it happens inside the loader lock, which is the most dangerous place on the planet to be if you are dealing with kernel objects, as it may cause a deadlock or make the application randomly crash. As such you should never use thread primitives as statics on Windows, EVER. Thus dealing with threading in the destructor of a global object is doing exactly the things we're warned not to do in DllMain.
To quote Raymond Chen
The building is being demolished. Don't bother sweeping the floor and emptying the trash cans and erasing the whiteboards. And don't line up at the exit to the building so everybody can move their in/out magnet to out. All you're doing is making the demolition team wait for you to finish these pointless housecleaning tasks.
and again:
If your DllMain function creates a thread and then waits for the thread to do something (e.g., waits for the thread to signal an event that says that it has finished initializing), then you've created a deadlock. The DLL_PROCESS_ATTACH notification handler inside DllMain is waiting for the new thread to run, but the new thread can't run until the DllMain function returns so that it can send a new DLL_THREAD_ATTACH notification.
This deadlock is much more commonly seen in DLL_PROCESS_DETACH, where a DLL wants to shut down its worker threads and wait for them to clean up before it unloads itself. You can't wait for a thread inside DLL_PROCESS_DETACH because that thread needs to send out the DLL_THREAD_DETACH notifications before it exits, which it can't do until your DLL_PROCESS_DETACH handler returns.
This occurs even when using an EXE, because the Visual C++ runtime cheats and registers its constructors and destructors with the C runtime to be run when the runtime is loaded or unloaded, thus ending up with the same issue:
The answer is that the C runtime library hires a lackey. The hired lackey is the C runtime library DLL (for example, MSVCR80.DLL). The C runtime startup code in the EXE registers all the destructors with the C runtime library DLL, and when the C runtime library DLL gets its DLL_PROCESS_DETACH, it calls all the destructors requested by the EXE.
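For the code in the question this suggests a practical workaround, consistent with the commented-out call the asker already observed to work: shut the printer thread down explicitly before main returns instead of relying on the global's destructor. A sketch:
logger lgg;   // still a global
int main()
{
    lgg.init();
    for (int i = 0; i < 100; ++i)
    {
        char s[8];
        sprintf_s(s, 8, "%d", i);
        lgg.write(s);
    }
    lgg.fin();           // join the printer thread while main is still running
    system("pause");
}                        // ~logger() now finds m_thd already joined and does nothing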
I'm wondering how you're using m_mtx. The normal pattern is that both threads lock it and both threads unlock it. But fin() fails to lock it.
Similarly unexpected is m_cond.wait(m_mtx). This would release the mutex, except that it isn't locked in the first place!
Finally, as m_mtx isn't locked, I don't see how m_quit = true should become visible in m_thd.
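A sketch of what that pattern could look like for this logger. This is not the original code; it keeps the member names from the question and relies on the fact that printer() holds m_mtx from its checks until it waits:
void write(const char *msg)
{
    {
        lock_guard<mutex> locker(m_mtx); // the TBB queue is thread-safe on its own, but taking the
        m_queue.push(msg);               // lock ensures printer() is either before its checks or
    }                                    // already waiting, so this notification cannot be lost
    m_cond.notify_one();                 // notify after releasing the lock
}
void fin()
{
    if (m_thd.joinable())
    {
        {
            lock_guard<mutex> locker(m_mtx);
            m_quit = true;               // the flag change is now visible to printer() under the lock
        }
        m_cond.notify_one();
        m_thd.join();
    }
}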
One problem you have is that std::condition_variable::notify_one is called while the notifying thread holds the same std::mutex that the waiting thread needs (this happens when logger::write is called by logger::fin).
This causes the notified thread to immediately block again, and hence the printer thread may block, possibly indefinitely, upon destruction (or until a spurious wakeup).
You should never notify while holding the same mutex as the waiting thread(s).
Quote from en.cppreference.com:
The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s); in fact doing so is a pessimization, since the notified thread would immediately block again, waiting for the notifying thread to release the lock.
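As a generic sketch of the recommended ordering (m, cv and ready stand in for whatever mutex, condition_variable and condition you are using):
{
    std::lock_guard<std::mutex> lk(m);
    ready = true;        // change the shared state under the lock
}                        // release the lock first ...
cv.notify_one();         // ... then notify, so the woken thread can lock m immediately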
I am not sure if my question is correct, but I have the following example, where the main thread creates two additional threads.
Since I am not using a join at the end of main, it continues execution, and at the same time the two created threads work in parallel. But since main terminates before they finish their execution, I get the following output:
terminate called without an active exception
Aborted (core dumped)
Here's the code:
#include <iostream> // std::cout
#include <thread>   // std::thread
#include <chrono>
void foo()
{
    std::chrono::milliseconds dura( 2000 );
    std::this_thread::sleep_for( dura );
    std::cout << "Waited for 2Sec\n";
}
void bar(int x)
{
    std::chrono::milliseconds dura( 4000 );
    std::this_thread::sleep_for( dura );
    std::cout << "Waited for 4Sec\n";
}
int main()
{
    std::thread first (foo);
    std::thread second (bar, 0);
    return 0;
}
So my question is how to keep these two threads working even if the main thread terminated?
I am asking this because in my main program I have an event handler, and for each event I create a corresponding thread. The main problem is that when the handler creates a new thread, the handler continues execution until it is destroyed, which also causes the newly created thread to be destroyed. So my question is how to keep the thread alive in this case?
Also, if I use join, execution effectively becomes serialized again.
void ho_commit_indication_handler(message &msg, const boost::system::error_code &ec)
{
    .....
}
void event_handler(message &msg, const boost::system::error_code &ec)
{
    if (ec)
    {
        log_(0, __FUNCTION__, " error: ", ec.message());
        return;
    }
    switch (msg.mid())
    {
        case n2n_ho_commit:
        {
            boost::thread thrd(&ho_commit_indication_handler, boost::ref(msg), boost::ref(ec));
        }
        break;
    }
}
Thanks a lot.
Letting the threads outlive main without joining or detaching them is a bad idea, because destroying a joinable std::thread causes a call to std::terminate. You should definitely join the threads:
int main()
{
    std::thread first (foo);
    std::thread second (bar, 0);
    first.join();
    second.join();
}
An alternative is to detach the threads. However, you still need to ensure that the main thread lives longer (e.g. by using a mutex / condition_variable).
This excerpt from the C++11 standard is relevant here:
15.5.1 The std::terminate() function [except.terminate]
1 In some situations exception handling must be abandoned for less subtle error
handling techniques. [ Note: These situations are:
[...]
-- when the destructor or the copy assignment operator is invoked on an
object of type std::thread that refers to a joinable thread
Hence, you have to call either join or detach on threads before scope exit.
Concerning your edit: You have to store the threads in a list (or similar) and wait for every one of them before main is done. A better idea would be to use a thread pool (because this limits the total number of threads created).
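A minimal sketch of the "store them and join them" idea; worker and on_event are illustrative stand-ins for the handlers in the question:
#include <iostream>
#include <thread>
#include <vector>
void worker(int id)
{
    std::cout << "handler " << id << " done\n";
}
std::vector<std::thread> g_workers;   // lives at least as long as main
void on_event(int id)
{
    // keep the thread object instead of letting it be destroyed while joinable
    g_workers.emplace_back(worker, id);
}
int main()
{
    for (int i = 0; i < 5; ++i)
        on_event(i);
    for (auto &t : g_workers)         // join every spawned thread before main returns
        t.join();
}
A thread pool works the same way from the outside, except that a fixed set of threads is created once and the handlers are pushed onto a queue that those threads consume.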