Sharing memory between two threads, but address of data structures changes - multithreading

I am using promises in C++, and have this really simple program. I create a promise, and pass it to a thread function by reference.
Inside the thread, I create a string and then set the promise's value to that string. Then in the main program, I retrieve the promise's value using a future. I'm confused about why the address of the string doesn't remain the same. Ideally it shouldn't vary between the threads, since it belongs to shared memory, right?
#include <iostream>
#include <string>
#include <future>
#include <thread>
using namespace std;

void tfunc(promise<string> & prms) {
    string str("Hello from thread");
    cout << (void *)str.data() << endl;
    prms.set_value(str);
}

int main() {
    promise<string> prms;
    future<string> ftr = prms.get_future();
    thread th(&tfunc, ref(prms));
    string str = ftr.get();
    // This address should be the same as that inside the thread, right?
    cout << (void *)str.data() << endl;
    th.join();
    return 0;
}
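For context on why the two printed addresses differ: prms.set_value(str) copies the string into the future's shared state, and ftr.get() then constructs yet another string in main from that stored value, so the buffer owned by the thread-local str is never the one main sees. Below is a minimal single-threaded sketch of the same effect; the variable names are purely illustrative, and note that small-string optimization can keep short strings inside the string object itself rather than on the heap.

#include <iostream>
#include <future>
#include <string>

int main() {
    std::promise<std::string> prms;
    std::future<std::string> ftr = prms.get_future();

    // Long enough to (very likely) defeat small-string optimization.
    std::string original("a string long enough to live in a heap buffer");
    std::cout << "original buffer:  " << (void *)original.data() << std::endl;

    // set_value(original) stores a *copy* in the shared state; that copy
    // owns its own buffer, distinct from original's.
    prms.set_value(original);

    // get() moves the stored copy out into a third string object. Its buffer
    // may be taken over from the shared-state copy, but it is still not the
    // buffer that original owns, so the two addresses differ.
    std::string retrieved = ftr.get();
    std::cout << "retrieved buffer: " << (void *)retrieved.data() << std::endl;
    return 0;
}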

Related

Scope block when using std::async in a function other than main

I have a problem with std::async when I use it in a function other than main.
Suppose I have functions like the following:
#include <iostream>
#include <thread>
#include <chrono>
#include <future>

void printData()
{
    for (size_t i = 0; i < 5; i++)
    {
        std::cout << "Test Function" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

void runningAsync()
{
    auto r = std::async(std::launch::async, printData);
}

int main()
{
    runningAsync();
    std::cout << "Main Function" << std::endl;
}
The output of this code is:
Test Function
Test Function
Test Function
Test Function
Test Function
Main Function
That is not what I want: the main thread waits for the other thread to end.
I want the runningAsync() function to run on another thread and, at the same time, have "Main Function" printed from the main thread; this is possible with std::thread.
Is there a way to run these functions at the same time (concurrently)?
The reason is that std::async returns a std::future which you store in an auto variable. As soon as your future goes out of scope (at the end of runningAsync()!), its destructor blocks until the task is finished. If you do not want that, you could, for example, store the future in a global container.
This question is answered in:
main thread waits for std::async to complete
Can I use std::async without waiting for the future limitation?
However, if you store the std::future object, its lifetime will be extended to the end of main and you get the behavior you want.
#include <iostream>
#include <thread>
#include <chrono>
#include <future>

void printData()
{
    for (size_t i = 0; i < 5; i++)
    {
        std::cout << "Test Function" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

std::future<void> runningAsync()
{
    return std::async(std::launch::async, printData);
}

int main()
{
    auto a = runningAsync();
    std::cout << "Main Function" << std::endl;
}
That's a problem because std::future's destructor may block and wait for the thread to finish; see this link for more details.
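The answer above also mentions keeping the future alive in a container; here is a rough sketch of that idea (the pendingTasks vector and its placement are my own illustration, not part of the original answer). runningAsync() returns immediately, "Main Function" prints right away, and blocking happens only where you explicitly wait.

#include <iostream>
#include <thread>
#include <chrono>
#include <future>
#include <vector>

void printData()
{
    for (size_t i = 0; i < 5; i++)
    {
        std::cout << "Test Function" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

// Keeps the futures alive so runningAsync() itself never blocks on completion.
std::vector<std::future<void>> pendingTasks;

void runningAsync()
{
    pendingTasks.push_back(std::async(std::launch::async, printData));
}

int main()
{
    runningAsync();
    std::cout << "Main Function" << std::endl; // prints without waiting
    // Wait once, explicitly, when we are actually ready to block.
    for (auto & f : pendingTasks)
        f.wait();
}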

Is passing parameters in a lambda's capture to boost asio post/dispatch thread safe?

I'm using a lambda capture to pass parameters to the boost::asio::io_context::post callback.
Is it thread safe?
Code
#include <iostream>
#include "boost/asio.hpp"
#include <thread>

int main() {
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service);

    std::thread t([&](){
        io_service.run();
    });

    auto var = 1;
    io_service.post([&io_service, var]() {
        std::cout << "v: " << var << std::endl;
        io_service.stop();
    });

    t.join();
    return 0;
}
As you can see, I pass var in the lambda's capture.
The main thread sets var's value, and thread t reads it.
I didn't use any memory ordering, for example std::memory_order_release after setting var to 1, and std::memory_order_acquire before reading var's value. What's more, I don't think I can, because the variable var is passed by value to the lambda.
Is it safe to do that?
If not, how should it be done?
It is thread-safe.
The closure object is created by the main thread (copying var's value) after var has been created and initialized.
Next, the closure object is passed as an argument to the post method, which queues the function object and returns immediately without calling it. The functor is called between the post and t.join calls; post guarantees it.
So your code is thread-safe.
You would need some synchronization method (for example, a mutex + lock_guard)
if var were passed by reference [1] and write operations on var [2]
were performed between the post and t.join calls:
auto var = 1;
io_service.post([&io_service, &var]() {        // [1] takes var by reference
    std::cout << "v: " << var << std::endl;    // lock mutex for printing
    io_service.stop();
});
var = 10;                                      // [2] lock mutex for writing
// synchronization must be added because reading and writing operations
// are executed between the post and t.join calls
t.join();
In this case you would have to protect var.
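A minimal sketch of what that mutex + lock_guard protection could look like for the by-reference case (the mutex name and its placement are illustrative only, not part of the original answer):

#include <iostream>
#include <mutex>
#include <thread>
#include "boost/asio.hpp"

int main() {
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service);

    std::thread t([&](){
        io_service.run();
    });

    std::mutex var_mutex; // protects var once it is shared by reference
    auto var = 1;

    io_service.post([&io_service, &var, &var_mutex]() {
        std::lock_guard<std::mutex> lock(var_mutex); // lock mutex for reading
        std::cout << "v: " << var << std::endl;
        io_service.stop();
    });

    {
        std::lock_guard<std::mutex> lock(var_mutex); // lock mutex for writing
        var = 10;
    }

    t.join();
    return 0;
}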

Get result of future without blocking

This question has been asked before and, if I am not wrong, the only way to read the result of a future is either to call get() and block until it is ready, or to use wait_for() with zero duration, as mentioned in the answer - Get the status of a std::future
But, if I just want a worker thread to return me a result that I want it to compute and not wait or block myself for it to complete, can I not just pass it a callback that the worker thread can call when it has computed the result for me? Something like below -
#include <iostream>
#include <thread>
#include <functional>

void foo(std::function<void(int)> callback)
{
    int result = 5;
    callback(result);
}

int main()
{
    int result = 0;
    std::thread worker(foo, [](int result)
    {
        std::cout << "Result from worker is " << result << std::endl;
    });
    worker.join();
}
Here, the worker thread would just execute the callback when it has computed the result for me. I don't have to wait for it to finish or block or check in a loop to know when it's ready.
Please advise whether this is a good approach to use, as currently there seems to be no way to do this without blocking or checking for it in a loop.
You can certainly create your own thread with a callback, but as soon as you move away from a toy example you will notice that you have potentially created a synchronization problem. This is because your callback is being invoked from a separate thread. So you may want to have the worker thread instead post a message to a queue which you will read later, unless there is no shared state or a mutex is already in place.
In your specific example, let's add one line of code:
int main()
{
    std::thread worker(foo, [](int result)
    {
        std::cout << "Result from worker is " << result << std::endl;
    });
    std::cout << "I am the main thread" << std::endl; // added
    worker.join();
}
You might think that there are only two possible outputs:
I am the main thread
Result from worker is 5
and
Result from worker is 5
I am the main thread
But in fact there are other possible outputs, such as:
Result from worker is I am the main thread
5
So you have created a bug. You either need synchronization on your shared state (which includes I/O), or you need to orchestrate everything from the main thread (which is what blocking or checking for a future result gives you).
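As a rough sketch of the "post a message to a queue which you will read later" suggestion (the queue and mutex names are made up for illustration), the callback can push into a mutex-protected queue that the main thread drains after the join, keeping all I/O on the main thread:

#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <functional>

std::mutex results_mutex;  // protects the shared queue
std::queue<int> results;   // worker pushes, main thread pops

void foo(std::function<void(int)> callback)
{
    int result = 5;
    callback(result);
}

int main()
{
    std::thread worker(foo, [](int result) {
        std::lock_guard<std::mutex> lock(results_mutex);
        results.push(result); // no I/O from the worker thread
    });

    std::cout << "I am the main thread" << std::endl;

    worker.join(); // after the join, no further pushes can happen

    while (!results.empty()) {
        std::cout << "Result from worker is " << results.front() << std::endl;
        results.pop();
    }
}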

c++11 threads vs async

Consider the following two snippets of code where I am trying to launch 10000 threads:
Snippet 1
std::array<std::future<void>, 10000> furArr_;
try
{
    size_t index = 0;
    for (auto & fut : furArr_)
    {
        std::cout << "Created thread # " << index++ << std::endl;
        fut = std::async(std::launch::async, fun);
    }
}
catch (std::system_error & ex)
{
    std::string str = ex.what();
    std::cout << "Caught : " << str.c_str() << std::endl;
}
// I will call get afterwards, still 10000 threads should be active by now assuming "fun" is time consuming
Snippet 2
std::array<std::thread, 10000> threadArr;
try
{
    size_t index = 0;
    for (auto & thr : threadArr)
    {
        std::cout << "Created thread # " << index++ << std::endl;
        thr = std::thread(fun);
    }
}
catch (std::system_error & ex)
{
    std::string str = ex.what();
    std::cout << "Caught : " << str.c_str() << std::endl;
}
The first case always succeeds, i.e. I am able to create 10000 threads and then I have to wait for all of them to finish. In the second case, I almost always end up getting an exception ("resource unavailable try again") after creating 1600+ threads.
With a launch policy of std::launch::async, I thought that the two snippets should behave the same way. How is std::async with a launch policy of async different from launching a thread explicitly using std::thread?
I am on Windows 10, VS2015, binary is built in x86 release mode.
Firstly, thanks to Igor Tandetnik for giving me the direction for this answer.
When we use std::async (with async launch policy), we are saying:
“I want to get this work done on a separate thread”.
When we use std::thread we are saying:
“I want to get this work done on a new thread”.
The subtle difference is that std::async is (usually) implemented using thread pools. This means that if we invoke a method using async multiple times, the thread id inside that method will often repeat, i.e. async allocates multiple jobs to the same set of threads from the pool, whereas with std::thread it never will.
This difference means that launching threads explicitly is potentially more resource intensive (hence the exception) than using async with the async launch policy.
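A quick way to probe this, with the caveat that whether thread ids actually repeat is implementation-specific (MSVC's std::async is known to reuse pooled threads, while other standard libraries may create a fresh thread per task); the snippet below is only a sketch:

#include <iostream>
#include <future>
#include <thread>
#include <vector>

void fun()
{
    // On implementations that back std::async with a thread pool, the same
    // id shows up for several of these calls. (Output lines may interleave;
    // this is only a probe.)
    std::cout << "running on thread " << std::this_thread::get_id() << std::endl;
}

int main()
{
    std::vector<std::future<void>> futures;
    for (int i = 0; i < 20; ++i)
        futures.push_back(std::async(std::launch::async, fun));
    for (auto & f : futures)
        f.get(); // each task has finished once get() returns
}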

C++ Locking stream operators with mutex

I need to lock stdout in my logging application to prevent string interleaving when multi-threaded applications log to stdout. I can't figure out how to use a move constructor, std::move, or something else to move a unique_lock into another object.
I created objects (called shards) for setting configs and encapsulation, and figured out how to lock stdout from these objects with a static std::mutex.
Something like this works for me:
l->log(1, "Test message 1");
While that is fine and could be implemented with templates and a variable number of parameters, I would like a more stream-like interface. I am looking for something like this:
*l << "Module id: " << 42 << "value: " << 42 << std::endl;
I don't want to force users to precompute the string with concatenation and to_string(42); I just want to find a way to lock stdout.
My approach so far was to create operator<< and another locked stream object, as suggested in other answers. The thing is, I can't figure out how to move the mutex into another object. My code:
locked_stream& shard::operator<<(int num)
{
    static std::mutex _out_mutex;
    std::unique_lock<std::mutex> lock(_out_mutex);
    //std::lock_guard<std::mutex> lock (_out_mutex);
    std::cout << std::to_string(num) << "(s)";
    locked_stream s;
    return s;
}
After outputting the input to std::cout, I would like to move the lock into the stream object.
In this case, I would be careful not to use static locks in functions, as you will get a different lock for each stream operator you create.
What you need is to lock some "output lock" when a stream is created, and unlock it when the stream is destroyed. You can piggyback on existing stream operations if you're just wrapping std::ostream. Here's a working implementation:
#include <mutex>
#include <iostream>

class locked_stream
{
    static std::mutex s_out_mutex;

    std::unique_lock<std::mutex> lock_;
    std::ostream* stream_; // can't make this reference so we can move

public:
    locked_stream(std::ostream& stream)
        : lock_(s_out_mutex)
        , stream_(&stream)
    { }

    locked_stream(locked_stream&& other)
        : lock_(std::move(other.lock_))
        , stream_(other.stream_)
    {
        other.stream_ = nullptr;
    }

    friend locked_stream&& operator << (locked_stream&& s, std::ostream& (*arg)(std::ostream&))
    {
        (*s.stream_) << arg;
        return std::move(s);
    }

    template <typename Arg>
    friend locked_stream&& operator << (locked_stream&& s, Arg&& arg)
    {
        (*s.stream_) << std::forward<Arg>(arg);
        return std::move(s);
    }
};

std::mutex locked_stream::s_out_mutex{};

locked_stream locked_cout()
{
    return locked_stream(std::cout);
}

int main (int argc, char * argv[])
{
    locked_cout() << "hello world: " << 1 << 3.14 << std::endl;
    return 0;
}
Here it is on ideone: https://ideone.com/HezJBD
