I was asked this question in an interview. I was pretty clueless.
So I decided to learn some multithreading and hopefully find an answer to this question.
I need to use 3 threads to print the output: 01020304050607.....
Thread1: prints 0
Thread2: prints odd numbers
Thread3: prints even numbers
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
std::mutex m;
std::condition_variable cv1, cv2, cv3;
int count = 0;
void printzero(int end)
{
while (count <= end)
{
std::unique_lock<std::mutex> lock(m);
cv1.wait(lock);
std::cout << 0 << " ";
++count;
if (count % 2 == 1)
{
lock.unlock();
cv2.notify_one();
}
else
{
lock.unlock();
cv3.notify_one();
}
}
}
void printodd(int end)
{
while (count <= end)
{
std::unique_lock<std::mutex> lock(m);
cv2.wait(lock);
if (count % 2 == 1)
{
std::cout << count << " ";
++count;
lock.unlock();
cv1.notify_one();
}
}
}
void printeven(int end)
{
while (count <= end)
{
std::unique_lock<std::mutex> lock(m);
cv3.wait(lock);
if (count % 2 == 0)
{
std::cout << count << " ";
++count;
lock.unlock();
cv1.notify_one();
}
}
}
int main()
{
int end = 10;
std::thread t3(printzero, end);
std::thread t1(printodd, end);
std::thread t2(printeven, end);
cv1.notify_one();
t1.join();
t2.join();
t3.join();
return 0;
}
My solution seems to end up in a deadlock. I'm not even sure whether the logic is correct. Please help.
There are several issues with your code. Here is what you need to do in order to make it work:
Revise your while (count <= end) check. Reading count without synchronization is undefined behavior (UB).
Use a proper predicate with std::condition_variable::wait. Without a predicate, your code has the following problems:
If notify_one is called before wait, the notification is lost. In the worst case, main's call to notify_one is executed before the threads start running. As a result, all threads may wait indefinitely.
Spurious wakeups may disrupt your program flow. See also cppreference.com on std::condition_variable.
Use std::flush (just to be sure).
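As a quick illustration of the predicate point (a minimal, self-contained sketch of my own, not tied to the code above): the predicate overload of wait re-checks the condition after every wakeup, so an early notify_one is not lost and spurious wakeups are harmless.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
std::mutex m;
std::condition_variable cv;
bool ready = false; // the predicate, guarded by m
int main() {
    std::thread t([] {
        std::unique_lock<std::mutex> lock(m);
        // Equivalent to: while (!ready) cv.wait(lock);
        // Even if the notify below happens before this thread reaches wait,
        // the predicate already evaluates to true and the thread does not block.
        cv.wait(lock, [] { return ready; });
        std::cout << "worker saw ready == true\n";
    });
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true; // change the predicate while holding the mutex...
    }
    cv.notify_one(); // ...then notify
    t.join();
}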
I played around with your code quite a lot. Below you find a version where I applied my suggested fixes. In addition, I also experimented with some other ideas that came to my mind.
#include <cassert>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>
// see how `std::mutex` is passed by reference below for an example of how to avoid global variables
std::condition_variable cv_zero{};
std::condition_variable cv_nonzero{};
bool done = false;
int next_digit = 1;
bool need_zero = true;
void print_zero(std::mutex& mt) {
while(true) {// do not read shared state without holding a lock
std::unique_lock<std::mutex> lk(mt);
auto pred = [&] { return done || need_zero; };
cv_zero.wait(lk, pred);
if(done) break;
std::cout << 0 << "\t"
<< -1 << "\t"// prove that it works
<< std::this_thread::get_id() << "\n"// prove that it works
<< std::flush;
need_zero = false;
lk.unlock();
cv_nonzero.notify_all();// Let the other threads decide which one
// wants to proceed. This is probably less
// efficient, but preferred for
// simplicity.
}
}
void print_nonzero(std::mutex& mt, int end, int n, int N) {
// Example for `n` and `N`: Launch `N == 2` threads with this
// function. Then the thread with `n == 1` prints all odd numbers, and
// the one with `n == 0` prints all even numbers.
assert(N >= 1 && "number of 'nonzero' threads must be positive");
assert(n >= 0 && n < N && "rank of this nonzero thread must be valid");
while(true) {// do not read shared state without holding a lock
std::unique_lock<std::mutex> lk(mt);
auto pred = [&] { return done || (!need_zero && next_digit % N == n); };
cv_nonzero.wait(lk, pred);
if(done) break;
std::cout << next_digit << "\t"
<< n << "\t"// prove that it works
<< std::this_thread::get_id() << "\n"// prove that it works
<< std::flush;
// Consider the edge case of `end == INT_MAX && next_digit == INT_MAX`.
// -> You need to check *before* incrementing in order to avoid UB.
assert(next_digit <= end);
if(next_digit == end) {
done = true;
cv_zero.notify_all();
cv_nonzero.notify_all();
break;
}
++next_digit;
need_zero = true;
lk.unlock();
cv_zero.notify_one();
}
}
int main() {
int end = 10;
int N = 2;// number of threads for `print_nonzero`
std::mutex mt{};// example how to pass by reference (avoiding globals)
std::thread t_zero(print_zero, std::ref(mt));
// Create `N` `print_nonzero` threads with `n` in [0, `N`).
std::vector<std::thread> ts_nonzero{};
for(int n=0; n<N; ++n) {
// Note that it is important to pass `n` by value.
ts_nonzero.emplace_back(print_nonzero, std::ref(mt), end, n, N);
}
t_zero.join();
for(auto&& t : ts_nonzero) {
t.join();
}
}
Related
I am new to using condition_variables and unique_locks in C++. I am working on creating an event loop that polls two custom event-queues and a "boolean" (see integer acting as boolean), which can be acted upon by multiple sources.
I have a demo (below) that appears to work. I would greatly appreciate a review confirming whether it follows best practices for using unique_lock and condition_variable, and pointing out any problems you foresee (race conditions, thread blocking, etc.).
In ThreadSafeQueue::enqueue(...): are we unlocking twice by calling notify and having the unique_lock go out of scope?
In the method ThreadSafeQueue::dequeueAll(): we assume it is being called by a method that has been notified (cond.notify) and therefore already holds the lock. Is there a better way to encapsulate this to keep the caller cleaner?
Do we need to make our class members volatile similar to this?
Is there a better way to mockup our situation that allows us to test if we've correctly implemented the locks? Perhaps without the sleep statements and automating the checking process?
ThreadSafeQueue.h:
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <vector>
template <class T>
class ThreadSafeQueue {
public:
ThreadSafeQueue(std::condition_variable* cond, std::mutex* unvrsl_m)
: ThreadSafeQueue(cond, unvrsl_m, 1) {}
ThreadSafeQueue(std::condition_variable* cond, std::mutex* unvrsl_m,
uint32_t capacity)
: cond(cond),
m(unvrsl_m),
head(0),
tail(0),
capacity(capacity),
buffer((T*)malloc(get_size() * sizeof(T))),
scratch_space((T*)malloc(get_size() * sizeof(T))) {}
std::condition_variable* cond;
~ThreadSafeQueue() {
free(scratch_space);
free(buffer);
}
void resize(uint32_t new_cap) {
std::unique_lock<std::mutex> lock(*m);
check_params_resize(new_cap);
free(scratch_space);
scratch_space = buffer;
buffer = (T*)malloc(sizeof(T) * new_cap);
copy_cyclical_queue();
free(scratch_space);
scratch_space = (T*)malloc(new_cap * sizeof(T));
tail = get_size();
head = 0;
capacity = new_cap;
}
void enqueue(const T& value) {
std::unique_lock<std::mutex> lock(*m);
resize();
buffer[tail++] = value;
if (tail == get_capacity()) {
tail = 0;
} else if (tail > get_capacity())
throw("Something went horribly wrong TSQ: 75");
cond->notify_one();
}
// Assuming m has already been locked by the caller...
void dequeueAll(std::vector<T>* vOut) {
if (get_size() == 0) return;
scratch_space = buffer;
copy_cyclical_queue();
vOut->insert(vOut->end(), buffer, buffer + get_size());
head = tail = 0;
}
// Const functions because they shouldn't be modifying the internal variables
// of the object
bool is_empty() const { return get_size() == 0; }
uint32_t get_size() const {
if (head == tail)
return 0;
else if (head < tail) {
// 1 2 3
// 0 1 2
// 1
// 0
return tail - head;
} else {
// 3 _ 1 2
// 0 1 2 3
// capacity-head + tail+1 = 4-2+0+1 = 2 + 1
return get_capacity() - head + tail + 1;
}
}
uint32_t get_capacity() const { return capacity; }
//---------------------------------------------------------------------------
private:
std::mutex* m;
uint32_t head;
uint32_t tail;
uint32_t capacity;
T* buffer;
T* scratch_space;
uint32_t get_next_empty_spot();
void copy_cyclical_queue() {
uint32_t size = get_size();
uint32_t cap = get_capacity();
if (size == 0) {
return; // because we have nothing to copy
}
if (head + size <= cap) {
// _ 1 2 3 ... index = 1, size = 3, 1+3 = 4 = capacity... only need 1 copy
memcpy(buffer, scratch_space + head, sizeof(T) * size);
} else {
// 5 1 2 3 4 ... index = 1, size = 5, 1+5 = 6 = capacity... need to copy
// 1-4 then 0-1
// copy number of bytes: front = 1, to (5-1 = 4 elements)
memcpy(buffer, scratch_space + head, sizeof(T) * (cap - head));
// just copy the bytes from the front up to the first element in the old
// array
memcpy(buffer + (cap - head), scratch_space, sizeof(T) * tail);
}
}
void check_params_resize(uint32_t new_cap) {
if (new_cap < get_size()) {
std::cerr << "ThreadSafeQueue: check_params_resize: size(" << get_size()
<< ") > new_cap(" << new_cap
<< ")... data "
"loss will occur if this happens. Prevented."
<< std::endl;
}
}
void resize() {
uint32_t new_cap;
uint32_t size = get_size();
uint32_t cap = get_capacity();
if (size + 1 >= cap - 1) {
std::cout << "RESIZE CALLED --- BAD" << std::endl;
new_cap = 2 * cap;
check_params_resize(new_cap);
free(scratch_space); // free existing (too small) scratch space
scratch_space = buffer; // transfer pointer over
buffer = (T*)malloc(sizeof(T) * new_cap); // allocate a bigger buffer
copy_cyclical_queue();
// move over everything with memcpy from scratch_space to buffer
free(scratch_space); // free what used to be the too-small buffer
scratch_space =
(T*)malloc(sizeof(T) * new_cap); // recreate scratch space
tail = size;
head = 0;
// since we're done with the old array... delete for memory management->
capacity = new_cap;
}
}
};
// Event Types
// keyboard/mouse
// network
// dirty flag
Main.cpp:
#include <unistd.h>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <sstream>
#include <thread>
#include "ThreadSafeQueue.h"
using namespace std;
void write_to_threadsafe_queue(ThreadSafeQueue<uint32_t> *q,
uint32_t startVal) {
uint32_t count = startVal;
while (true) {
q->enqueue(count);
cout << "Successfully enqueued: " << count << endl;
count += 2;
sleep(count);
}
}
void sleep_and_set_redraw(int *redraw, condition_variable *cond) {
while (true) {
sleep(3);
__sync_fetch_and_or(redraw, 1);
cond->notify_one();
}
}
void process_events(vector<uint32_t> *qOut, condition_variable *cond,
ThreadSafeQueue<uint32_t> *q1,
ThreadSafeQueue<uint32_t> *q2, int *redraw, mutex *m) {
while (true) {
unique_lock<mutex> lck(*m);
cond->wait(lck);
q1->dequeueAll(qOut);
q2->dequeueAll(qOut);
if (__sync_fetch_and_and(redraw, 0)) {
cout << "FLAG SET" << endl;
qOut->push_back(0);
}
for (auto a : *qOut) cout << a << "\t";
cout << endl;
cout << "PROCESSING: " << qOut->size() << endl;
qOut->clear();
}
}
void test_2_queues_and_bool() {
try {
condition_variable cond;
mutex m;
ThreadSafeQueue<uint32_t> q1(&cond, &m, 1024);
ThreadSafeQueue<uint32_t> q2(&cond, &m, 1024);
int redraw = 0;
vector<uint32_t> qOut;
thread t1(write_to_threadsafe_queue, &q1, 2);
thread t2(write_to_threadsafe_queue, &q2, 1);
thread t3(sleep_and_set_redraw, &redraw, &cond);
thread t4(process_events, &qOut, &cond, &q1, &q2, &redraw, &m);
t1.join();
t2.join();
t3.join();
t4.join();
} catch (system_error &e) {
cout << "MAIN TEST CRASHED" << e.what();
}
}
int main() { test_2_queues_and_bool(); }
class test
{
void thread1()
{
int i = 0;
while(true){
for(unsigned int k = 0;k < mLD.size(); k++ )
{
mLD[k] = i++;
}
}
}
void thread2()
{
std::cout << "thread2 address : " << &mLD << "\n";
C();
}
void B()
{
std::cout << "B address : " << &mLD << "\n";
for(unsigned int k = 0;k < mLD.size(); k++ )
{
if(mLD[k]<=25)
{
}
}
}
void C()
{
B();
std::cout << "C address : " << &mLD << "\n";
double distance = mLD[0]; // <---- segmetation fault
}
std::array<double, 360> mLD;
};
cout result --->
thread2 address : 0x7e807660
B address : 0x7e807660
C address : 0x1010160 (sometimes 0x7e807660 )
Why did mLD's address change?
Even if I change std::array<double, 360> to std::array<std::atomic<double>, 360>, the result is the same.
Most probably, the object you refer to is already destroyed at the point of the call to C, which points to a synchronization issue. You need to extend the lifetime of the object referred to by the thread(s) until the threads are done executing their routines. To accomplish this, you can have something like this:
#include <thread>
#include <array>
#include <iostream>
struct foo{
void callback1(){
for(auto & elem: storage){
elem += 5;
}
}
void callback2(){
for(const auto & elem: storage){
std::cout << elem << std::endl;
}
}
std::array<double, 300> storage;
};
int main(void){
foo f;
std::thread t1 {[&f](){f.callback1();}};
std::thread t2 {[&f](){f.callback2();}};
// wait until both threads are done executing their routines
t1.join();
t2.join();
return 0;
}
The instance of foo, f, lives in the scope of the main() function, so its lifetime runs from the line where it is defined to the end of main's scope. By joining both threads, we block main from proceeding further until both threads are done executing their callback functions; hence the lifetime of f is extended until the callbacks are done.
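For contrast, here is a deliberately broken sketch (my own hypothetical reconstruction, not your exact code) of the failure mode described above: the owning object lives on the stack of a function that returns while a detached thread is still using it.
#include <array>
#include <chrono>
#include <thread>
struct bar {
    std::array<double, 360> data{};
    void work() { data[0] = 1.0; } // may run after *this has been destroyed
};
void start() {
    bar b; // lives only until start() returns
    std::thread([&b] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        b.work(); // dangling reference: undefined behavior, may crash like in the question
    }).detach(); // nobody waits for the thread, so b can die before it runs
} // b is destroyed here, while the detached thread may still be running
int main() {
    start();
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
Joining the threads, as in the example above, closes exactly this window.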
The second issue is that the code needs synchronization primitives, because the storage variable is shared between two independent execution paths. The final code with proper synchronization can look like this:
#include <thread>
#include <array>
#include <iostream>
#include <mutex>
struct foo{
void callback1(){
// RAII style lock, which invokes .lock() upon construction, and .unlock() upon destruction
// automatically.
std::unique_lock<std::mutex> lock(mtx);
for(auto & elem: storage){
elem += 5;
}
}
void callback2(){
std::unique_lock<std::mutex> lock(mtx);
for(const auto & elem: storage){
std::cout << elem << std::endl;
}
}
std::array<double, 300> storage;
// non-reentrant mutex
mutable std::mutex mtx;
};
int main(void){
foo f;
std::thread t1 {[&f](){f.callback1();}};
std::thread t2 {[&f](){f.callback2();}};
// wait until both threads are done executing their routines
t1.join();
t2.join();
return 0;
}
Is there an alternative way to be sure that the threads are ready to receive the broadcast signal? I want to replace the Sleep(1) calls in main.
#include <iostream>
#include <pthread.h>
#define NUM 4
using namespace std;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
pthread_t tid[NUM];
void *threads(void *arg){
int tid = (int)arg;
while(true){
pthread_mutex_lock(&mutex);
pthread_cond_wait(&cond,&mutex);
//do some work
cout<<"Thread: "<<tid<<endl;;
pthread_mutex_unlock(&mutex);
}
}
int main(){
for(int i=0;i<NUM;i++){
pthread_create(&tid[i],NULL,threads,(void*)i);
}
Sleep(1);
pthread_cond_broadcast(&cond);
Sleep(1);
pthread_cond_broadcast(&cond);
Sleep(1);
pthread_cond_broadcast(&cond);
return 0;
}
I tried memory barriers before pthread_cond_wait and I thought of using a counter, but nothing has worked for me yet.
Condition variables are usually connected to a predicate. In the other threads, check whether the predicate is already fulfilled (check while holding the mutex protecting the predicate); if so, do not wait on the condition variable. In main, acquire the mutex and change the predicate while holding it. Then release the mutex and signal or broadcast on the condition variable. Here is a similar question:
Synchronisation before pthread_cond_broadcast
Here is some example code:
#include <iostream>
#include <pthread.h>
#include <unistd.h>
#include <cassert>
#define NUM 4
#define SIZE 256
using std::cout;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
pthread_t tid[NUM];
int work_available;
void *threads(void *arg)
{
int tid = *((int*)arg);
while (1) {
pthread_mutex_lock(&mutex);
while (work_available == 0) {
// While loop since cond_wait can have spurious wakeups.
pthread_cond_wait(&cond, &mutex);
cout << "Worker " << tid << " woke up...\n";
cout << "Work available: " << work_available << '\n';
}
if (work_available == -1) {
cout << "Worker " << tid << " quitting\n";
pthread_mutex_unlock(&mutex); // Easy to forget, better to use C++11 RAII mutexes.
break;
}
assert(work_available > 0);
work_available--;
cout << "Worker " << tid << " took one item of work\n";
pthread_mutex_unlock(&mutex);
//do some work
sleep(2); // simulated work
pthread_mutex_lock(&mutex);
cout << "Worker " << tid << " done with one item of work.\n";
pthread_mutex_unlock(&mutex);
}
return NULL; // reached after the 'quit' break; a value-returning thread function must return something
}
int main()
{
work_available = 0;
int args[NUM];
for (int i=0; i<NUM; i++) {
args[i] = i;
pthread_create(&tid[i], NULL, threads, (void*)&args[i]);
}
const int MAX_TIME = 10;
for (int i = 0; i < MAX_TIME; i++)
{
pthread_mutex_lock(&mutex);
work_available++;
cout << "Main thread, work available: " << work_available << '\n';
pthread_mutex_unlock(&mutex);
pthread_cond_broadcast(&cond);
sleep(1);
}
pthread_mutex_lock(&mutex);
cout << "Main signalling threads to quit\n";
work_available = -1;
pthread_mutex_unlock(&mutex);
pthread_cond_broadcast(&cond);
for (int i = 0; i < NUM; i++)
{
pthread_join(tid[i], NULL);
}
return 0;
}
I am deliberately making a lot of mistakes in order to learn concurrency in C++11. I have to ask this:
Here is what this one is supposed to do:
One queue and three threads: one is supposed to put integers into the queue, and the other two are supposed to increase s1 and s2, respectively, by popping from the queue, so that I can get the total sum of the numbers that were in the queue. To make it simpler, I put the numbers 1 through 10 into the queue.
But sometimes it works and sometimes it seems like there is an infinite loop. What could be the reason?
#include <queue>
#include <memory>
#include <mutex>
#include <thread>
#include <iostream>
#include <condition_variable>
#include <string>
class threadsafe_queue {
private:
mutable std::mutex mut;
std::queue<int> data_queue;
std::condition_variable data_cond;
std::string log; //just to see what is going on behind
bool done;
public:
threadsafe_queue(){
log = "initializing queue\n";
done = false;
}
threadsafe_queue(threadsafe_queue const& other) {
std::lock_guard<std::mutex> lk(other.mut);
data_queue = other.data_queue;
}
void set_done(bool const s) {
std::lock_guard<std::mutex> lk(mut);
done = s;
}
bool get_done() {
std::lock_guard<std::mutex> lk(mut);
return done;
}
void push(int new_value) {
std::lock_guard<std::mutex> lk(mut);
log += "+pushing " + std::to_string(new_value) + "\n";
data_queue.push(new_value);
data_cond.notify_one();
}
void wait_and_pop(int& value) {
std::unique_lock<std::mutex> lk(mut);
data_cond.wait(lk, [this]{return !data_queue.empty();});
value = data_queue.front();
log += "-poping " + std::to_string(value) + "\n";
data_queue.pop();
}
std::shared_ptr<int> wait_and_pop() {
std::unique_lock<std::mutex> lk(mut);
data_cond.wait(lk, [this]{return !data_queue.empty();});
std::shared_ptr<int> res(std::make_shared<int>(data_queue.front()));
log += "- popping " + std::to_string(*res) + "\n";
data_queue.pop();
return res;
}
bool try_pop(int& value) {
std::lock_guard<std::mutex> lk(mut);
if (data_queue.empty()) {
log += "tried to pop but it was empty\n";
return false;
}
value = data_queue.front();
log += "-popping " + std::to_string(value) + "\n";
data_queue.pop();
return true;
}
std::shared_ptr<int> try_pop() {
std::lock_guard<std::mutex> lk(mut);
if (data_queue.empty()) {
log += "tried to pop but it was empty\n";
return std::shared_ptr<int>();
}
std::shared_ptr<int> res(std::make_shared<int>(data_queue.front()));
log += "-popping " + std::to_string(*res) + "\n";
data_queue.pop();
return res;
}
bool empty() const {
std::lock_guard<std::mutex> lk(mut);
//log += "checking the queue if it is empty\n";
return data_queue.empty();
}
std::string get_log() {
return log;
}
};
threadsafe_queue tq;
int s1, s2;
void prepare() {
for (int i = 1; i <= 10; i++)
tq.push(i);
tq.set_done(true);
}
void p1() {
while (true) {
int data;
tq.wait_and_pop(data);
s1 += data;
if (tq.get_done() && tq.empty()) break;
}
}
void p2() {
while (true) {
int data;
tq.wait_and_pop(data);
s2 += data;
if (tq.get_done() && tq.empty()) break;
}
}
int main(int argc, char *argv[]) {
std::thread pp(prepare);
std::thread worker(p1);
std::thread worker2(p2);
pp.join();
worker.join();
worker2.join();
std::cout << tq.get_log() << std::endl;
std::cout << s1 << " " << s2 << std::endl;
return 0;
}
Look at function p1, line 5:
if (tq.get_done() && tq.empty()) break;
So you checked whether the queue was empty. It was not, so you loop again and enter
tq.wait_and_pop(data);
where you'll find
data_cond.wait(lk, [this]{return !data_queue.empty();});
which is essentially
while (data_queue.empty()) {
wait(lk);
}
notice the missing '!'.
Now your thread sits there and waits for the queue to become non-empty, which will never happen, because the producer is done filling the queue. The thread will never join.
There are many ways to fix this. I'm sure you'll find one on your own.
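For what it's worth, here is one possible fix, sketched as replacements for just two members of the class you posted (everything else stays as posted; the log bookkeeping is omitted for brevity): let the wait predicate also wake up on done, make set_done notify, and have wait_and_pop tell the caller whether it actually popped a value.
void set_done(bool const s) {
    std::lock_guard<std::mutex> lk(mut);
    done = s;
    data_cond.notify_all(); // wake up consumers stuck waiting on an empty queue
}
bool wait_and_pop(int& value) {
    std::unique_lock<std::mutex> lk(mut);
    // Also stop waiting once the producer is done, not only when data arrives.
    data_cond.wait(lk, [this] { return !data_queue.empty() || done; });
    if (data_queue.empty()) return false; // woke up only because done == true
    value = data_queue.front();
    data_queue.pop();
    return true;
}
The consumer loops then become while (tq.wait_and_pop(data)) s1 += data; (and likewise for s2), so a thread that wakes up to an empty, finished queue exits instead of blocking forever.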
I am trying to change the behavior of a future object based on user input.
#include <iostream>
#include <future>
//=======================================================================================!
struct DoWork
{
DoWork(int cycles, int restTime) : _cycles(cycles), _restTime(restTime), _stop(false)
{
}
void operator () ()
{
for(int i = 0 ; i < _cycles; ++i)
{
std::this_thread::sleep_for(std::chrono::milliseconds(_restTime));
if(_stop)break;
doTask();
}
}
void stop()
{
_stop = true;
}
private:
void doTask()
{
std::cout << "doing task!" << std::endl;
}
private:
int _cycles;
int _restTime;
bool _stop;
};
//=======================================================================================!
int main()
{
DoWork doObj(50, 500);
std::future<int> f = std::async(std::launch::async, doObj);
std::cout << "Should I stop work ?" << std::endl;
std::cout << "('1' = Yes, '2' = no, 'any other' = maybe)" << std::endl;
int answer;
std::cin >> answer;
if(answer == 1) doObj.stop();
std::cout << f.get() << std::endl;
return 0;
}
//=======================================================================================!
However, this does not stop the execution of the future's task. How do I change the behavior of doObj after I have created the future object?
You have a few problems. First, your function object doesn't actually return int, so std::async will return a std::future<void>. You can fix this either by actually returning int from DoWork::operator(), or by storing the result from async in a std::future<void> and not trying to print it.
Second, std::async copies its arguments if they aren't in reference wrappers, so the doObj on the stack is not going to be the same instance of DoWork that is being used by the asynchronous thread. You can correct this by passing doObj in a reference wrapper a la std::async(std::launch::async, std::ref(doObj)).
Third, both the main thread and the asynchronous thread are simultaneously accessing DoWork::_stop. This is a data race and means the program has undefined behavior. The fix is to protect accesses to _stop with a std::mutex or to make it a std::atomic.
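To isolate the second point (copy vs. reference), here is a tiny standalone sketch of my own showing that, without std::ref, std::async works on a copy of the callable:
#include <future>
#include <iostream>
struct Counter {
    int n = 0;
    void operator()() { ++n; } // increments whichever instance actually gets called
};
int main() {
    Counter c;
    std::async(std::launch::async, c).wait();           // the task runs on a copy: c.n stays 0
    std::async(std::launch::async, std::ref(c)).wait(); // the task runs on c itself: c.n becomes 1
    std::cout << c.n << "\n";                           // prints 1
}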
Altogether, the program should look like this (Live at Coliru):
#include <iostream>
#include <future>
#include <atomic> // for std::atomic<bool>
//=======================================================================================!
struct DoWork
{
DoWork(int cycles, int restTime) : _cycles(cycles), _restTime(restTime), _stop(false)
{
}
int operator () ()
{
for(int i = 0 ; i < _cycles; ++i)
{
std::this_thread::sleep_for(std::chrono::milliseconds(_restTime));
if(_stop) return 42;
doTask();
}
return 13;
}
void stop()
{
_stop = true;
}
private:
void doTask()
{
std::cout << "doing task!" << std::endl;
}
private:
int _cycles;
int _restTime;
std::atomic<bool> _stop;
};
//=======================================================================================!
int main()
{
DoWork doObj(50, 500);
std::future<int> f = std::async(std::launch::async, std::ref(doObj));
std::cout << "Should I stop work ?" << std::endl;
std::cout << "('1' = Yes, '2' = no, 'any other' = maybe)" << std::endl;
int answer;
std::cin >> answer;
if(answer == 1) doObj.stop();
std::cout << f.get() << std::endl;
}
//=======================================================================================!