I'm new to D and I'm writing a simple multithreaded server for practice. A common paradigm for starting client handler threads in C is to pass the file descriptor of the newly-accept()ed socket into pthread_create(), but D's std.concurrency.spawn() will not allow me to pass the Socket because it's mutable and accessible by two threads.
Of course, I don't actually want an immutable socket (which is why I don't really want to cast it in the main thread unless I have to) - I want to pass a mutable one in and have it go out of scope in the main thread. How would I go about this? Should (or can) I use tid.send(s) to let the thread use the socket? For some reason that seems very clunky to me.
My code now:
void main() {
    Socket listener = new TcpSocket;
    ...
    for (;;) {
        Socket s = listener.accept();
        scope(exit) s.close();
        auto tid = spawn(&clientHandler, s);
    }
}

void clientHandler(Socket s) {
    ...
}
Which produces: Error: static assert "Aliases to mutable thread-local data not allowed." ... instantiated from here: spawn!(Socket)
You need to cast the socket to shared and back again in the clientHandler:
auto tid = spawn(&clientHandler, cast(shared) s);

void clientHandler(shared Socket s) {
    Socket sock = cast(Socket) s;
    scope(exit) sock.close();
}
The reason for this is that all local variables are implicitly thread-local unless declared shared, and only references to shared or immutable data can be passed as arguments to spawn() (or send()); values passed by copy (primitives and structs without references) are fine.
Also, you should put the close() in the handler: with your current implementation, the socket will likely be closed before the newly spawned thread has a chance to run.
The problem here isn't the socket, which is a local variable. It's the clientHandler, whose declaration you haven't shown, but it is clearly thread-local, as the error message says, when there should be a new one per accepted socket. The hint is the word 'alias', which refers to the & operator.
My question is about this implementation of a ThreadPool class in C++11. Following are the relevant parts of the code:
Whenever enqueue is called on the ThreadPool object, it binds the passed function with all passed arguments to create a shared_ptr to a std::packaged_task:
auto task = std::make_shared< std::packaged_task<return_type()> >(
    std::bind(std::forward<F>(f), std::forward<Args>(args)...)
);
It extracts the future from this std::packaged_task to return to the caller, and stores the task in a std::queue<std::function<void()>> tasks;.
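The extraction and storing steps look roughly like this (reconstructed from the description above, so the exact code may differ slightly):

std::future<return_type> res = task->get_future();  // returned to the caller
{
    std::unique_lock<std::mutex> lock(queue_mutex);
    // The shared_ptr is captured by value: what the queue stores is a
    // copyable lambda wrapped in a std::function, and that lambda keeps
    // the packaged_task (and its shared state) alive.
    tasks.emplace([task]() { (*task)(); });
}
condition.notify_one();
return res;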
In the constructor, each worker thread waits for a task in the queue and, if it finds one, executes it:
for(size_t i = 0; i < threads; ++i)
    workers.emplace_back(
        [this]
        {
            for(;;)
            {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    this->condition.wait(lock, [this]{ return !this->tasks.empty(); });
                    task = std::move(this->tasks.front());
                    this->tasks.pop();
                }
                task();
            }
        }
    );
Now, based on this, following are my questions:
If the std::packaged_task was stored in a std::queue<std::function<void()>>, then it just becomes a std::function object, right? Then how does it still write to the shared state of the std::future extracted earlier?
If the stored std::packaged_task was not just a std::function object but still a std::packaged_task, then when a std::thread executes task() through the lambda (the code inside the constructor), why doesn't it run on yet another thread? std::packaged_tasks are supposed to run on another thread, right?
As my questions suggest, I am unable to understand the conversion of std::packaged_task into std::function and the capability of std::function to write to the shared state of a std::future. Whenever I tested this code with n threads, the maximum number of thread IDs I could get was n, never more than n. Here is the complete code (including the ThreadPool and a main function which counts the number of threads created).
I wanted to know if I could do something like this with shared_futures.
Essentially I have two threads that receive a reference to a promise.
In case either thread produces an output by setting a value in the promise, I would like to process that output and go back to listening for another assignment to the promise from the remaining thread. Can I do something like this?
void tA(std::promise<std::string>& p)
{
    ....
    std::string r = "Hello from thread A";
    p.set_value(std::move(r));
}

void tB(std::promise<std::string>& p)
{
    ...
    std::string r = "Hello from thread B";
    p.set_value(std::move(r));
}

int main() {
    std::promise<std::string> inputpromise;
    std::shared_future<std::string> inputfuture(inputpromise.get_future());

    // start thread A
    std::thread ta(std::bind(&tA, std::ref(inputpromise)));

    // start thread B
    std::thread tb(std::bind(&tB, std::ref(inputpromise)));

    // Will this unblock when one thread sets a value on the promise,
    // and can I go back to listening for more assignments on the promise?
    std::string response = inputfuture.get();
    if (response == "b")
        response = inputfuture.get(); // listen for the assignment from the remaining thread
}
You cannot call promise::set_value (or any equivalent function like set_exception) more than once. Promises are not intended to be used in this way, shared across threads. You have one thread which owns the promise, and one or more locations that can tell if the promise has been satisfied, and if so retrieve the value.
A promise is not the right tool for doing what you want. A future/promise is really a special case of a more general tool: a concurrent queue. In a true concurrent queue, generating threads push values into the queue. Receiving threads can extract values from the queue. A future/promise is essentially a single-element queue.
You need a general concurrent queue, not a single-element queue. Unfortunately, the standard library doesn't have one.
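To illustrate, here is a minimal sketch of such a concurrent queue, built from std::mutex and std::condition_variable (the class name and interface here are mine, purely illustrative):

#include <condition_variable>
#include <mutex>
#include <queue>

// Producers push values; consumers block in pop() until a value is available.
template <typename T>
class concurrent_queue {
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
};

Each worker thread would push its result into the queue, and the main thread would simply call pop() once per expected result, which is exactly the "listen again" behaviour a single promise cannot provide.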
I need to pause the current thread in Rust and notify it from another thread. In Java I would write:
synchronized(myThread) {
    myThread.wait();
}

and from the second thread (to resume main thread):

synchronized(myThread) {
    myThread.notify();
}
Is it possible to do the same in Rust?
Using a channel that sends type () is probably easiest:
use std::sync::mpsc::channel;
use std::thread;

let (tx, rx) = channel();

// Spawn your worker thread, giving it `tx` and whatever else it needs
thread::spawn(move || {
    // Do whatever
    tx.send(()).expect("Could not send signal on channel.");
    // Continue
});

// Do whatever
rx.recv().expect("Could not receive from channel.");
// Continue working
The () type is used because it carries effectively zero information, which makes it clear you're only using the channel as a signal. The fact that it's zero-sized also means it's potentially faster in some scenarios (though realistically probably no faster than a normal machine-word write).
If you just need to notify the program that a thread is done, you can grab its join handle and wait for it to join:

let handle = thread::spawn( ... );
handle.join().expect("Could not join thread"); // blocks until the thread finishes
You can use std::thread::park() and std::thread::Thread::unpark() to achieve this.
In the thread you want to wait,
fn worker_thread() {
    std::thread::park();
}
in the controlling thread, which has a thread handle already,
fn main_thread(worker_thread: std::thread::Thread) {
    worker_thread.unpark();
}
Note that the parked thread can wake up spuriously, which means it can sometimes wake up without any other thread calling unpark on it. You should prepare for this in your code (for example, by parking in a loop that rechecks a condition), or use something like the std::sync::mpsc::channel suggested in the accepted answer.
There are multiple ways to achieve this in Rust.
The underlying model in Java is that each object contains both a mutex and a condition variable, if I remember correctly. So using a mutex and condition variable would work...
... however, I would personally switch to using a channel instead:
the "waiting" thread has the receiving end of the channel, and waits for it
the "notifying" thread has the sending end of the channel, and sends a message
It is easier to manipulate than a condition variable, notably because there is no risk of accidentally using a different mutex when locking the variable.
The std::sync::mpsc module has two kinds of channel (asynchronous and synchronous) depending on your needs. Here, the asynchronous one matches more closely: std::sync::mpsc::channel.
There is a monitor crate that provides this functionality by combining Mutex with Condvar in a convenience structure.
(Full disclosure: I am the author.)
Briefly, it can be used like this:
let mon = Arc::new(Monitor::new(false));
{
    let mon = mon.clone();
    let _ = thread::spawn(move || {
        thread::sleep(Duration::from_millis(1000));
        mon.with_lock(|mut done| { // done is a monitor::MonitorGuard<bool>
            *done = true;
            done.notify_one();
        });
    });
}
mon.with_lock(|mut done| {
    while !*done {
        done.wait();
    }
    println!("finished waiting");
});
Here, mon.with_lock(...) is semantically equivalent to Java's synchronized(mon) {...}.
I'm working on an IOCP server (overlapped I/O, 4 threads, CreateIoCompletionPort, GetQueuedCompletionStatus, WSASend, etc.), and my goal is to send a single reference-counted buffer to all connected sockets. (I followed Len Holgate's suggestion from this post: WSAsend to all connected socket in multithreaded iocp server.) After being sent to all connected clients, the buffer should be deleted.
This is the class with the buffer to be sent:
class refbuf
{
private:
    int m_nLength;
    int m_wsk;
    char *m_pnData;          // buffer to send
    mutable int mRefCount;
public:
    ...
    void grab() const
    {
        ++mRefCount;
    }
    void release() const
    {
        if (mRefCount > 0)
            --mRefCount;
        if (mRefCount == 0) { delete (refbuf *)this; }
    }
    ...
    char* bufadr() { return m_pnData; }
};
Sending the buffer to all sockets:
refbuf *refb = new refbuf(4);
...
EnterCriticalSection(&g_CriticalSection);
pTmp1 = g_pCtxtList;                  // start of linked list with sockets
while (pTmp1)
{
    pTmp2 = pTmp1->pCtxtBack;
    ovl = TakeOvl();                  // ovl - struct containing WSAOVERLAPPED
    ovl->wsabuf.buf = refb->bufadr(); // address of m_pnData from refbuf
    ovl->rcb = refb;                  // when GQCS gets the notification, rcb is used to decrease mRefCount
    ovl->wsabuf.len = 4;
    refb->grab();                     // mRefCount++
    WSASend(pTmp1->Socket, &(ovl->wsabuf), 1, &dwSendNumBytes, 0, &(ovl->Overlapped), NULL);
    pTmp1 = pTmp2;
}
LeaveCriticalSection(&g_CriticalSection);
And one of the four threads:
GetQueuedCompletionStatus(hIOCP, &dwIoSize, (PDWORD_PTR)&lpPerSocketContext,
    (LPOVERLAPPED *)&lpOverlapped, INFINITE);
...
lpIOContext = (PPER_IO_CONTEXT)lpOverlapped;
lpIOContext->rcb->release(); // mRefCount--; if mRefCount reaches 0, the object is deleted
I checked this with 5 connected clients and it seems to work. When GQCS receives all the notifications, mRefCount reaches 0 and the delete is executed.
And my questions: is this approach appropriate? What if there are, for example, 100 or more clients? Is the situation avoided where one thread deletes the object while another is still using it? How do I implement an atomic reference count in this scenario? Thanks in advance.
Obvious issues; in order of importance...
Your refbuf class doesn't use thread-safe ref count manipulation. Use InterlockedIncrement() etc.; a sketch follows below.
I assume that TakeOvl() obtains a new OVERLAPPED and WSABUF structure per operation.
Your naming could be better: why grab() rather than AddRef(), and what does TakeOvl() take from? Those Tmp variables are something, and the least important something about them is that they're 'temporary', so name them after a more important something. Go read Code Complete.
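As a minimal sketch of the first point: make the count a LONG and manipulate it with the Interlocked APIs (illustrative only, not a drop-in replacement for your full class):

#include <windows.h>

class refbuf
{
private:
    char *m_pnData;                  // buffer to send
    mutable volatile LONG mRefCount; // must be LONG for the Interlocked APIs
public:
    explicit refbuf(int len) : m_pnData(new char[len]), mRefCount(1) {}
    ~refbuf() { delete[] m_pnData; }

    void grab() const
    {
        InterlockedIncrement(&mRefCount); // atomic ++mRefCount
    }

    void release() const
    {
        // InterlockedDecrement returns the new value atomically, so exactly
        // one thread observes the count reach zero and performs the delete;
        // no other thread can still be using the buffer at that point,
        // because it would still hold a reference.
        if (InterlockedDecrement(&mRefCount) == 0)
            delete this;
    }

    char* bufadr() { return m_pnData; }
};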
I'm learning Boost::asio and all that async stuff. How can I asynchronously read to variable user_ of type std::string? Boost::asio::buffer(user_) works only with async_write(), but not with async_read(). It works with vector, so what is the reason for it not to work with string? Is there another way to do that besides declaring char user_[max_len] and using Boost::asio::buffer(user_, max_len)?
Also, what's the point of inheriting from boost::enable_shared_from_this<Connection> and using shared_from_this() instead of this in async_read() and async_write()? I've seen that a lot in the examples.
Here is a part of my code:
class Connection
{
public:
    Connection(tcp::acceptor &acceptor) :
        acceptor_(acceptor),
        socket_(acceptor.get_io_service(), tcp::v4())
    { }

    void start()
    {
        acceptor_.get_io_service().post(
            boost::bind(&Connection::start_accept, this));
    }

private:
    void start_accept()
    {
        acceptor_.async_accept(socket_,
            boost::bind(&Connection::handle_accept, this,
                placeholders::error));
    }

    void handle_accept(const boost::system::error_code& err)
    {
        if (err)
        {
            disconnect();
        }
        else
        {
            async_read(socket_, boost::asio::buffer(user_),
                boost::bind(&Connection::handle_user_read, this,
                    placeholders::error, placeholders::bytes_transferred));
        }
    }

    void handle_user_read(const boost::system::error_code& err,
        std::size_t bytes_transferred)
    {
        if (err)
        {
            disconnect();
        }
        else
        {
            ...
        }
    }

    ...

    void disconnect()
    {
        socket_.shutdown(tcp::socket::shutdown_both);
        socket_.close();
        socket_.open(tcp::v4());
        start_accept();
    }

    tcp::acceptor &acceptor_;
    tcp::socket socket_;
    std::string user_;
    std::string pass_;
    ...
};
The Boost.Asio documentation states:
A buffer object represents a contiguous region of memory as a 2-tuple consisting of a pointer and size in bytes. A tuple of the form {void*, size_t} specifies a mutable (modifiable) region of memory.
This means that in order for a call to async_read to write data to a buffer, it must be (in the underlying buffer object) a contiguous block of memory. Additionally, the buffer object must be able to write to that block of memory.
std::string does not allow arbitrary writes into its buffer, so async_read cannot write chunks of memory into a string's buffer. (Note that std::string does give the caller read-only access to the underlying buffer via the data() method, which guarantees that the returned pointer will be valid until the next call to a non-const member function. For this reason, Asio can easily create a const_buffer wrapping a std::string, and you can use it with async_write.)
The Asio documentation has example code for a simple "chat" program (see http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/examples.html#boost_asio.examples.chat) with a good method of overcoming this problem. Basically, you need the sending side to transmit the size of a message first, in a "header" of sorts, and your read handler must interpret the header to allocate a buffer of a fixed size suitable for reading the actual data, as sketched below.
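Here is a rough sketch of that header-then-body pattern in the style of your Connection class (the 4-byte length field, the members msg_len_ and body_, and the handle_body handler are my assumptions, not code from the chat example):

// Assumed members: boost::uint32_t msg_len_; std::vector<char> body_;
void read_header()
{
    async_read(socket_, boost::asio::buffer(&msg_len_, sizeof(msg_len_)),
        boost::bind(&Connection::handle_header, shared_from_this(),
            placeholders::error));
}

void handle_header(const boost::system::error_code& err)
{
    if (err) { disconnect(); return; }
    body_.resize(msg_len_);                         // size is known now
    async_read(socket_, boost::asio::buffer(body_), // vector is writable
        boost::bind(&Connection::handle_body, shared_from_this(),
            placeholders::error));
}

A real implementation would also convert msg_len_ from network byte order and sanity-check it before resizing.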
As far as the need for using shared_from_this() in async_read and async_write, the reason is that it guarantees that the method wrapped by boost::bind will always refer to a live object. Consider the following situation:
Your handle_accept method calls async_read and sends a handler "into the reactor" - basically you've asked the io_service to invoke Connection::handle_user_read when it finishes reading data from the socket. The io_service stores this functor and continues its loop, waiting for the asynchronous read operation to complete.
After your call to async_read, the Connection object is deallocated for some reason (program termination, an error condition, etc.)
Suppose the io_service now determines that the asynchronous read is complete, after the Connection object has been deallocated but before the io_service is destroyed (this can occur, for example, if io_service::run is running in a separate thread, as is typical). Now, the io_service attempts to invoke the handler, and it has an invalid reference to a Connection object.
The solution is to allocate Connection via a shared_ptr and use shared_from_this() instead of this when sending a handler "into the reactor" - this allows io_service to store a shared reference to the object, and shared_ptr guarantees that it won't be deallocated until the last reference expires.
So, your code should probably look something like:
class Connection : public boost::enable_shared_from_this<Connection>
{
public:
    Connection(tcp::acceptor &acceptor) :
        acceptor_(acceptor),
        socket_(acceptor.get_io_service(), tcp::v4())
    { }

    void start()
    {
        acceptor_.get_io_service().post(
            boost::bind(&Connection::start_accept, shared_from_this()));
    }

private:
    void start_accept()
    {
        acceptor_.async_accept(socket_,
            boost::bind(&Connection::handle_accept, shared_from_this(),
                placeholders::error));
    }

    void handle_accept(const boost::system::error_code& err)
    {
        if (err)
        {
            disconnect();
        }
        else
        {
            async_read(socket_, boost::asio::buffer(user_),
                boost::bind(&Connection::handle_user_read, shared_from_this(),
                    placeholders::error, placeholders::bytes_transferred));
        }
    }

    //...
};
Note that you now must make sure that each Connection object is allocated via a shared_ptr, e.g.:
boost::shared_ptr<Connection> new_conn(new Connection(...));
Hope this helps!
This isn't intended to be an answer per se, but just a lengthy comment: a very simple way to convert from an ASIO buffer to a string is to stream from it:
asio::streambuf buff;
asio::read_until(source, buff, '\r'); // for example
std::istream is(&buff);
is >> targetstring;
This is a data copy, of course, but that's what you need to do if you want it in a string.
You can use a std::string with async_read() like this:
async_read(socket_, boost::asio::buffer(&user_[0], user_.size()),
    boost::bind(&Connection::handle_user_read, this,
        placeholders::error, placeholders::bytes_transferred));
However, you'd better make sure the std::string is big enough to accept the packet you're expecting, and padded with zeros, before calling async_read(). For example:
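(A sketch; expected_len is an assumption that must come from your protocol, e.g. a header.)

user_.resize(expected_len, '\0'); // pre-size so &user_[0] is a writable
                                  // block of exactly expected_len bytes
async_read(socket_, boost::asio::buffer(&user_[0], user_.size()),
    boost::bind(&Connection::handle_user_read, this,
        placeholders::error, placeholders::bytes_transferred));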
And as for why you should NEVER bind a member function callback to a this pointer if the object can be deleted, a more complete description and a more robust method can be found here: Boost async_* functions and shared_ptr's.
Boost Asio has two styles of buffers. There's boost::asio::buffer(your_data_structure), which cannot grow, and is therefore generally useless for unknown input, and there's boost::asio::streambuf which can grow.
Given a boost::asio::streambuf buf, you turn it into a string with std::string(std::istreambuf_iterator<char>(&buf), {});.
This is not efficient, as you end up copying the data once more; avoiding that copy would require making boost::asio::buffer aware of growable containers, i.e. containers that have a .resize(N) method. You can't make it efficient without touching Boost code.
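For example (a sketch, with sock standing in for your connected socket):

boost::asio::streambuf buf;
boost::asio::read_until(sock, buf, '\n'); // the streambuf grows as needed
std::string line(std::istreambuf_iterator<char>(&buf), {});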