Along with the main thread, I have one more thread that receives data and writes it to a file.
std::queue<std::vector<int>> dataQueue;
std::mutex mutex;
void setData(const std::vector<int>& data) {
std::lock_guard<std::mutex> lock(mutex);
dataQueue.push(data);
}
void write(const std::string& fileName) {
std::unique_ptr<std::ostream> ofs = std::unique_ptr<std::ostream>(new zstr::ofstream(fileName));
while (store) {
std::lock_guard<std::mutex> lock(mutex);
while (!dataQueue.empty()) {
std::vector<int>& data= dataQueue.front();
ofs->write(reinterpret_cast<char*>(data.data()), sizeof(data[0])*data.size());
dataQueue.pop();
}
}
}
setData is used by the main thread and write is run by the writing thread. I use std::lock_guard to avoid data races, but when the writing thread holds the lock it slows down the main thread, which has to wait for the queue to be unlocked. I guess I can avoid this, since the two threads never act on the same element of the queue at the same time.
So I would like to do it lock-free, but I don't really understand how to implement it. I mean, how can I do it without locking anything? Moreover, if the writing thread is faster than the main thread, the queue might be empty most of the time, so it should somehow wait for new data instead of looping infinitely to check for a non-empty queue.
EDIT: I replaced the simple std::lock_guard with a std::condition_variable so that the writing thread can wait when the queue is empty. But the main thread can still be blocked because, when cvQueue.wait(...) returns, it reacquires the lock. Moreover, what if the main thread calls cvQueue.notify_one() while the writing thread is not waiting?
std::queue<std::vector<int>> dataQueue;
std::mutex mutex;
std::condition_variable cvQueue;
void setData(const std::vector<int>& data) {
std::unique_lock<std::mutex> lock(mutex);
dataQueue.push(data);
cvQueue.notify_one();
}
void write(const std::string& fileName) {
    std::unique_ptr<std::ostream> ofs = std::unique_ptr<std::ostream>(new zstr::ofstream(fileName));
    while (store) {
        std::unique_lock<std::mutex> lock(mutex);
        while (dataQueue.empty()) {
            cvQueue.wait(lock);
        }
        std::vector<int>& data = dataQueue.front();
        ofs->write(reinterpret_cast<char*>(data.data()), sizeof(data[0])*data.size());
        dataQueue.pop();
    }
}
If you only have two threads, then you could use a lock-free single-producer-single-consumer (SPSC) queue.
A bounded version can be found here: https://github.com/rigtor/SPSCQueue
Dmitry Vyukov presented an unbounded version here: http://www.1024cores.net/home/lock-free-algorithms/queues/unbounded-spsc-queue (You should note though, that this code should be adapted to use atomics.)
Regarding a blocking pop operation - this is something that lock-free data structures do not provide, since such an operation is obviously not lock-free. However, it should be relatively straightforward to adapt the linked implementations in such a way that a push operation notifies a condition variable if the queue was empty before the push.
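A rough sketch of that adaptation (my own illustration, not code from the linked repositories; SpscQueue stands for any lock-free SPSC queue exposing push(const T&) and a bool-returning pop(T&), and all names here are made up):

#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <mutex>

template <typename T, typename SpscQueue>
class blocking_spsc {
    SpscQueue queue_;                        // the underlying lock-free SPSC queue
    std::atomic<std::size_t> size_{0};       // element count maintained by push/pop
    std::mutex wakeup_mutex_;                // used only for sleeping and waking
    std::condition_variable not_empty_;
public:
    void push(const T& value) {
        queue_.push(value);                                          // lock-free
        if (size_.fetch_add(1, std::memory_order_release) == 0) {
            // The queue was empty, so the consumer may be asleep: wake it up.
            std::lock_guard<std::mutex> lock(wakeup_mutex_);
            not_empty_.notify_one();
        }
    }
    void pop_blocking(T& out) {
        if (size_.load(std::memory_order_acquire) == 0) {
            std::unique_lock<std::mutex> lock(wakeup_mutex_);
            not_empty_.wait(lock, [this] { return size_.load() != 0; });
        }
        queue_.pop(out);                                             // lock-free
        size_.fetch_sub(1, std::memory_order_release);
    }
};

The mutex is only touched on the empty-to-non-empty transition and while the consumer is actually sleeping, so the common case stays lock-free.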
I think I have something that meets my needs. I wrote a LockFreeQueue that uses std::atomic, so I can manage the state of the head/tail of the queue atomically.
template<typename T>
class LockFreeQueue {
public:
void push(const T& newElement) {
fifo.push(newElement);
tail.fetch_add(1);
cvQueue.notify_one();
}
void pop() {
size_t oldTail = tail.load();
size_t oldHead = head.load();
if (oldTail == oldHead) {
return;
}
fifo.pop();
head.store(++oldHead);
}
bool isEmpty() {
return head.load() == tail.load();
}
T& getFront() {
return fifo.front();
}
void waitForNewElements() {
if (tail.load() == head.load()) {
std::mutex m;
std::unique_lock<std::mutex> lock(m);
cvQueue.wait_for(lock, std::chrono::milliseconds(TIMEOUT_VALUE));
}
}
private:
std::queue<T> fifo;
std::atomic<size_t> head = { 0 };
std::atomic<size_t> tail = { 0 };
std::condition_variable cvQueue;
};
LockFreeQueue<std::vector<int>> dataQueue;
std::atomic<bool> store(true);
void setData(const std::vector<int>& data) {
dataQueue.push(data);
// do other things
}
void write(const std::string& fileName) {
std::unique_ptr<std::ostream> ofs = std::unique_ptr<std::ostream>(new zstr::ofstream(fileName));
while (store.load()) {
dataQueue.waitForNewElements();
while (!dataQueue.isEmpty()) {
std::vector<int>& data= dataQueue.getFront();
ofs->write(reinterpret_cast<char*>(data.data()), sizeof(data[0])*data.size());
dataQueue.pop();
}
}
}
I still have one lock in waitForNewElements, but it is not blocking the whole process, as it is only used while waiting for things to do. The big improvement is that the producer can push while the consumer pops. It is only forbidden when LockFreeQueue::tail and LockFreeQueue::head are the same, meaning the queue is empty, and the consumer enters the waiting state.
The thing I'm not very satisfied with is cvQueue.wait_for(lock, TIMEOUT_VALUE). I wanted to do a simple cvQueue.wait(lock), but the problem is that when it comes to ending the thread, I do store.store(false) in the main thread. So if the writing thread is waiting, it would never end without a timeout. So I set a timeout big enough that most of the time the condition_variable is resolved by a notification, and when the thread ends it is resolved by the timeout.
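For what it's worth, here is a sketch of how the timeout could be replaced by a predicate wait. waitMutex, stopping and requestStop() do not exist in the class above; they are what I would add so that the consumer can be woken both for new data and for shutdown, and so that a notification cannot be lost between the emptiness check and the wait:

// additional members of LockFreeQueue (sketch only)
std::mutex waitMutex;                  // member mutex used only for sleeping/waking
std::atomic<bool> stopping{ false };

void push(const T& newElement) {
    fifo.push(newElement);
    tail.fetch_add(1);
    std::lock_guard<std::mutex> lock(waitMutex);   // prevents a lost wakeup
    cvQueue.notify_one();
}

void waitForNewElements() {
    std::unique_lock<std::mutex> lock(waitMutex);
    cvQueue.wait(lock, [this] {
        return head.load() != tail.load() || stopping.load();
    });
}

void requestStop() {                   // called by the main thread along with store.store(false)
    stopping.store(true);
    std::lock_guard<std::mutex> lock(waitMutex);
    cvQueue.notify_one();
}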
If you feel that something must be wrong or must be improved, feel free to comment.
I'm reading C++ Concurrency in Action.
It introduces how to implement an interruptible thread using std::condition_variable_any.
I have been trying to understand the code for more than a week, but I couldn't.
Below is the code and explanation in the book.
#include <condition_variable>
#include <future>
#include <iostream>
#include <thread>
class thread_interrupted : public std::exception {};
class interrupt_flag {
std::atomic<bool> flag;
std::condition_variable* thread_cond;
std::condition_variable_any* thread_cond_any;
std::mutex set_clear_mutex;
public:
interrupt_flag() : thread_cond(0), thread_cond_any(0) {}
void set() {
flag.store(true, std::memory_order_relaxed);
std::lock_guard<std::mutex> lk(set_clear_mutex);
if (thread_cond) {
thread_cond->notify_all();
} else if (thread_cond_any) {
thread_cond_any->notify_all();
}
}
bool is_set() const { return flag.load(std::memory_order_relaxed); }
template <typename Lockable>
void wait(std::condition_variable_any& cv, Lockable& lk);
};
thread_local static interrupt_flag this_thread_interrupt_flag;
void interruption_point() {
if (this_thread_interrupt_flag.is_set()) {
throw thread_interrupted();
}
}
template <typename Lockable>
void interrupt_flag::wait(std::condition_variable_any& cv, Lockable& lk) {
struct custom_lock {
interrupt_flag* self;
// (1) What is this lk for? Why should lk already be locked when it is used in the custom_lock constructor?
Lockable& lk;
custom_lock(interrupt_flag* self_, std::condition_variable_any& cond,
Lockable& lk_)
: self(self_), lk(lk_) {
self->set_clear_mutex.lock();
self->thread_cond_any = &cond;
}
void unlock() {
lk.unlock();
self->set_clear_mutex.unlock();
}
void lock() { std::lock(self->set_clear_mutex, lk); }
~custom_lock() {
self->thread_cond_any = 0;
self->set_clear_mutex.unlock();
}
};
custom_lock cl(this, cv, lk);
interruption_point();
cv.wait(cl);
interruption_point();
}
class interruptible_thread {
std::thread internal_thread;
interrupt_flag* flag;
public:
template <typename FunctionType>
interruptible_thread(FunctionType f) {
std::promise<interrupt_flag*> p;
internal_thread = std::thread([f, &p] {
p.set_value(&this_thread_interrupt_flag);
f();
});
flag = p.get_future().get();
}
void interrupt() {
if (flag) {
flag->set();
}
};
void join() { internal_thread.join(); };
void detach();
bool joinable() const;
};
template <typename Lockable>
void interruptible_wait(std::condition_variable_any& cv, Lockable& lk) {
this_thread_interrupt_flag.wait(cv, lk);
}
void foo() {
// (2) This is my implementation of how to use interruptible wait. Is it correct?
std::condition_variable_any cv;
std::mutex m;
std::unique_lock<std::mutex> lk(m);
try {
interruptible_wait(cv, lk);
} catch (...) {
std::cout << "interrupted" << std::endl;
}
}
int main() {
std::cout << "Hello" << std::endl;
interruptible_thread th(foo);
th.interrupt();
th.join();
}
Your custom lock type acquires the lock on the internal
set_clear_mutex when it’s constructed 1, and then sets the
thread_cond_any pointer to refer to the std::condition_variable_any
passed in to the constructor 2.
The Lockable reference is stored for later; this must already be
locked. You can now check for an interruption without worrying about
races. If the interrupt flag is set at this point, it was set before
you acquired the lock on set_clear_mutex. When the condition variable
calls your unlock() function inside wait(), you unlock the Lockable
object and the internal set_clear_mutex 3.
This allows threads that are trying to interrupt you to acquire the
lock on set_clear_mutex and check the thread_cond_any pointer once
you’re inside the wait() call but not before. This is exactly what you
were after (but couldn’t manage) with std::condition_variable.
Once wait() has finished waiting (either because it was notified or
because of a spurious wake), it will call your lock() function, which
again acquires the lock on the internal set_clear_mutex and the lock
on the Lockable object 4. You can now check again for interruptions
that happened during the wait() call before clearing the
thread_cond_any pointer in your custom_lock destructor 5, where you
also unlock the set_clear_mutex.
First, I couldn't understand the purpose of Lockable& lk at mark (1) and why it has to already be locked in the constructor of custom_lock. (It could be locked inside the custom_lock constructor itself.)
Second, there is no example in the book of how to use interruptible wait, so foo() at mark (2) is my guess at how to use it. Is it the correct way of using it?
You need a mutex-like object (lk in your foo function) to call the interruptible wait, just as you would need one for the plain std::condition_variable::wait function.
What's problematic (I also read the book and I have doubts about this example) is that the flag member points to a memory location inside the other thread, which could finish right before we call flag->set(). In this specific example the thread only exits after we set the flag, so that is okay, but otherwise this approach is limited in my opinion (correct me if I am wrong).
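If that lifetime concern matters in practice, one way around it (my own sketch, not from the book, and it only supports polling-style interruption, not interrupting a condition-variable wait) is to give the flag shared ownership, so interrupt() never dereferences storage belonging to a thread that has already finished:

#include <atomic>
#include <memory>
#include <thread>

class polling_interruptible_thread {
    std::shared_ptr<std::atomic<bool>> flag_;   // shared with the worker
    std::thread worker_;
public:
    template <typename FunctionType>
    explicit polling_interruptible_thread(FunctionType f)
        : flag_(std::make_shared<std::atomic<bool>>(false)),
          worker_([shared_flag = flag_, f] { f(*shared_flag); }) {}  // f polls the flag

    void interrupt() { flag_->store(true); }    // safe even if the worker already exited
    void join() { worker_.join(); }
};

Here f takes a std::atomic<bool>& and checks it at its own interruption points.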
I have a timer that will create a new thread and wait for the timer to expire before calling the notify function. It works correctly during the first execution, but when the timer is started a second time, an exception is thrown trying to create the new thread. The debug output shows that the previous thread has exited before attempting to create the new thread.
Timer.hpp:
class TestTimer
{
private:
std::atomic<bool> active;
int timer_duration;
std::thread thread;
std::mutex mtx;
std::condition_variable cv;
void timer_func();
public:
TestTimer() : active(false) {};
~TestTimer() {
Stop();
}
TestTimer(const TestTimer&) = delete; /* Remove the copy constructor */
TestTimer(TestTimer&&) = delete; /* Remove the move constructor */
TestTimer& operator=(const TestTimer&) & = delete; /* Remove the copy assignment operator */
TestTimer& operator=(TestTimer&&) & = delete; /* Remove the move assignment operator */
bool IsActive();
void StartOnce(int TimerDurationInMS);
void Stop();
virtual void Notify() = 0;
};
Timer.cpp:
void TestTimer::timer_func()
{
auto expire_time = std::chrono::steady_clock::now() + std::chrono::milliseconds(timer_duration);
std::unique_lock<std::mutex> lock{ mtx };
while (active.load())
{
if (cv.wait_until(lock, expire_time) == std::cv_status::timeout)
{
lock.unlock();
Notify();
Stop();
lock.lock();
}
}
}
bool TestTimer::IsActive()
{
return active.load();
}
void TestTimer::StartOnce(int TimerDurationInMS)
{
if (!active.load())
{
if (thread.joinable())
{
thread.join();
}
timer_duration = TimerDurationInMS;
active.store(true);
thread = std::thread(&TestTimer::timer_func, this);
}
else
{
Stop();
StartOnce(TimerDurationInMS);
}
}
void TestTimer::Stop()
{
if (active.load())
{
std::lock_guard<std::mutex> _{ mtx };
active.store(false);
cv.notify_one();
}
}
The error is being thrown from my code block here:
thread = std::thread(&TestTimer::timer_func, this);
during the second execution.
Specifically, the error is being thrown from the move_thread function: _Thr = _Other._Thr;
thread& _Move_thread(thread& _Other)
{ // move from _Other
if (joinable())
_XSTD terminate();
_Thr = _Other._Thr;
_Thr_set_null(_Other._Thr);
return (*this);
}
_Thrd_t _Thr;
};
And this is the exception: Unhandled exception at 0x76ED550B (ucrtbase.dll) in Sandbox.exe: Fatal program exit requested.
Stack trace:
thread::move_thread(std::thread &_Other)
thread::operator=(std::thread &&_Other)
TestTimer::StartOnce(int TimerDurationInMS)
If it's just a test
Make sure the thread handle is empty or joined when the destructor is called.
Make everything that can be accessed from multiple threads thread-safe (specifically, reading the active flag). Simply making it a std::atomic_flag should do.
It does seem like you are killing a thread handle pointing to a live thread, but it's hard to say without seeing the whole application.
If not a test
...then generally, when you need a single timer, recurring or not, you can get away with scheduling an alarm() signal to the process itself. You remain perfectly single-threaded and don't even need to link with the pthread library. Example here.
And when you expect to need more timers and to stay up for a while, it is worth dropping an instance of boost::asio::io_service (or asio::io_service if you need a Boost-free, header-only version) into your application, which has mature, production-ready timer support. Example here.
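For reference, a minimal Boost.Asio sketch of that second option (names and the duration are illustrative; recent Boost versions spell the service boost::asio::io_context, older ones use io_service):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
    boost::asio::io_context io;
    boost::asio::steady_timer timer(io, std::chrono::milliseconds(500));
    timer.async_wait([](const boost::system::error_code& ec) {
        if (!ec) std::cout << "timer expired\n";   // Notify() would be called here
    });
    io.run();   // single-threaded: blocks until the pending timer has fired
}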
You create the TestTimer and run it the first time via TestTimer::StartOnce, where you create a thread (at the line, which later throws the exception). When the thread finishes, it sets active = false; in timer_func.
Then you call TestTimer::StartOnce a second time. As active == false, Stop() is not called on the current thread, and you proceed to create a new thread with thread = std::thread(&TestTimer::timer_func, this);.
And then comes the big but:
You have not joined the first thread before creating the second one. And that's why it throws an exception.
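A tiny standalone illustration of that rule (not the questioner's code): assigning to a std::thread object that still owns a running thread calls std::terminate, so the old thread must be joined (or detached) before the member is reused.

#include <thread>

int main() {
    std::thread worker([] { /* pretend to time something */ });

    // worker = std::thread([] {});  // would std::terminate: worker is still joinable

    if (worker.joinable())
        worker.join();               // reap the previous thread first
    worker = std::thread([] {});     // now the assignment is safe

    worker.join();
}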
I have the following situation
std::mutex m;
void t() {
//lock the mutex m here
}
int main() {
//create thread t here
//lock the mutex m here
}
I would like the thread t() to acquire the mutex before main() does. How can I obtain this behaviour using the threading facilities provided by C++11?
Simply putting a std::lock_guard inside main() and t() would not work, because it can take a while before the thread is spawned, and so the mutex may be locked by main() first.
Regarding the condition variable that Sneftel mentioned in the comment section, and a somewhat similar solution to the one provided by Angew:
One possible solution:
std::condition_variable cv;
std::mutex m;
bool threadIsReady = false; //bool should be fine in this case
void t() {
std::unique_lock<std::mutex> g(m);
threadIsReady = true;
cv.notify_one();
}
int main() {
std::thread th(t);
//if main locks the mutex first, it will have to wait until threadIsReady becomes true
//if main locks the mutex later, wait will do nothing since threadIsReady would have already been true
std::unique_lock<std::mutex> g(m);
cv.wait(g, [] {return threadIsReady; });
}
Here's a quick & dirty way to achieve this effect:
std::atomic<bool> threadIsReady{false};
void t()
{
std::lock_guard<std::mutex> g(m);
threadIsReady = true;
}
int main()
{
std::thread th(t);
while (!threadIsReady) {}
std::lock_guard<std::mutex> g(m);
}
I am implementing a multi-threaded queue in C++ with timed capabilities, i.e. pop and push can take a timeout as an extra parameter.
The basic code looks like following.
template <typename T>
class Queue
{
public:
Queue() = default;
// Usage of a mutex makes the Queue class neither copyable nor movable
Queue(const Queue&) = delete;
Queue& operator=(const Queue&) = delete;
T Pop(const std::chrono::microseconds& micro_secs=std::chrono::microseconds::max())
{
std::unique_lock<std::mutex> lock(mutex_);
if (!cond_var_.wait_for(lock, micro_secs, [this]() { return !queue_.empty(); }))
{
// TODO: throw
}
auto item = queue_.front();
queue_.pop();
return item;
}
void Push(T& item, const std::chrono::microseconds& micro_secs=std::chrono::microseconds::max())
{
std::unique_lock<std::mutex> lock(mutex_, std::defer_lock);
if (!lock.try_lock_for(micro_secs)) // for this std::mutex should be std::timed_mutex.
{
// Couldn't acquire lock during the specified time.
// TODO: throw
}
queue_.push(item);
lock.unlock();
cond_var_.notify_one();
}
private:
std::queue<T> queue_;
std::mutex mutex_;
std::condition_variable cond_var_;
};
For the Pop() function, in order to have a timeout on the condition variable, the mutex has to be std::mutex.
But for the Push() function to have a timeout on acquiring the lock, the mutex has to be a std::timed_mutex: try_lock_for only works if the mutex wrapped in the unique_lock satisfies the TimedLockable requirements.
I would be pleased to hear any workarounds to solve this issue.
Pop() requires the mutex on the queue to be std::mutex, since condition_variable::wait_for() works on std::mutex. On the other hand, Push() requires the mutex on the queue to be std::timed_mutex. How can I solve this with a single lock on the queue, i.e. a single mutex in my Queue class?
Thanks in advance.
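One direction that might work (a sketch of my own, not an answer from the thread): keep a single std::timed_mutex and pair it with std::condition_variable_any, which can wait on any lockable type, so Pop() keeps its timed wait and Push() keeps try_lock_for:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <utility>

template <typename T>
class TimedQueue {
    std::queue<T> queue_;
    std::timed_mutex mutex_;
    std::condition_variable_any cond_var_;
public:
    T Pop(std::chrono::microseconds timeout) {
        std::unique_lock<std::timed_mutex> lock(mutex_);
        if (!cond_var_.wait_for(lock, timeout, [this] { return !queue_.empty(); }))
            throw std::runtime_error("Pop timed out");
        T item = std::move(queue_.front());
        queue_.pop();
        return item;
    }
    void Push(const T& item, std::chrono::microseconds timeout) {
        std::unique_lock<std::timed_mutex> lock(mutex_, std::defer_lock);
        if (!lock.try_lock_for(timeout))
            throw std::runtime_error("Push timed out");
        queue_.push(item);
        lock.unlock();
        cond_var_.notify_one();
    }
};

The price is that condition_variable_any is usually a bit heavier than condition_variable, but it keeps a single mutex in the class.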
I created two threads and use a mutex to synchronize them.
In the MainWindow program (which I regard as the main thread), where the other thread is created, I have to use the mutex in at least two functions: one is a slot that accepts signals from the UI when the user selects a menu and configures the data, and there is also a timer that fires once per second and triggers a slot function that reads the data.
My program often crashes even though I use the mutex. In the 'main thread' several different functions lock and unlock the mutex; one of them is the slot linked to the timer. Meanwhile, the other thread continuously writes the data.
I am so confused. Why?
(I really need a better phone to edit my question. :) )
My code:
In thread:
class Background : public QThread
{
Q_OBJECT
public:
void run(void)
{
initFile();
while(1)
{
Mutex->lock();
msleep(40);
rcv(); //writes map here
Mutex->unlock();
}
}
...
}
In thread's rcv():
void Background::rcv()
{
DEVMAP::iterator dev_r;
for(dev_r = DevMap.begin(); dev_r != DevMap.end(); dev_r++) // DevMap is a reference to the dev_map in mainwindow.
{
... ....//writes the map
}
}
In mainwindow:
void MainWindow::initTimer()
{
refreshTimer = new QTimer(this);
connect(refreshTimer, SIGNAL(timeout()), this, SLOT(refreshLogDisplay()));
refreshTimer->start(1000);
}
void MainWindow::refreshLogDisplay()
{
//MUTEX
mutex->lock();
......//read the map
//MUTEX
mutex->unlock();
}
In the thread's construction:
Background(DEVMap& map,...,QMutex* mutex):DevMap(map)...,Mutex(mutex){}
In mainwindow which creates the thread:
void MainWindow::initThread()
{
mutex = new QMutex;
back = new Background(dev_map,..., mutex);
back->start();
}
And:
void MainWindow::on_Create_triggered()//this function is a slot triggered by a menu item in the MainWindow UI
{
......//get information from a dialog
//MUTEX
mutex->lock();
BitState* bitState = new BitState(string((const char *)dlg->getName().toLocal8Bit()),
string((const char *)dlg->getNO().toLocal8Bit()),
dlg->getRevPortNo().toInt(), dlg->getSndPortNo().toInt());
dev_map.insert(DEVMAP::value_type (string((const char *)dlg->getPIN().toLocal8Bit()), *bitState));
//writes map here
//MUTEX
mutex->unlock();
}
You can use a mutex in any thread; it was designed for this purpose. But you should not create deadlocks, for instance by making 'nested' calls to lock().
Good:
mutex->lock();
//code
mutex->unlock();
//code
mutex->lock();
//code
mutex->unlock();
Bad:
mutex->lock();
//code
mutex->lock(); //dead lock
//code
mutex->unlock();
//code
mutex->unlock();
Be careful when using locks in functions:
void foo()
{
mutex->lock();
//code
mutex->unlock();
}
mutex->lock();
foo(); //dead lock
mutex->unlock()
Also, you should lock as little code as possible. Placing sleep() inside the lock is not a good idea, as other threads will have to wait while it sleeps.
Not good:
while(1)
{
Mutex->lock();
msleep(40);
rcv();
Mutex->unlock();
}
Better:
while(1)
{
msleep(40);
Mutex->lock();
rcv();
Mutex->unlock();
}
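As a side note (not part of the answer above), QMutexLocker is the RAII way to take a QMutex, so the lock is released even if rcv() throws or the loop body returns early:

while (1)
{
    msleep(40);
    {
        QMutexLocker locker(Mutex);   // locks Mutex here, unlocks at the end of the scope
        rcv();
    }
}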