I have a single non-blocking socket sending UDP packets to multiple targets and receiving responses from all of them on the same socket. I'm reading in a dedicated thread, but writes (sendto) can come from several different threads.
Is this safe without any additional synchronization? Do I need to hold a mutex while writing? Or do writes need to come from the same thread, meaning I need a queue?
The kernel will synchronize access to the underlying file descriptor for you, so you don't need a separate mutex. This approach would be a problem if you were using TCP, but since we are talking about UDP it is safe, though not necessarily the best approach.
You can write to the socket from a single thread or from multiple threads. If you write to a socket from multiple threads, those writes should be synchronized with a mutex. If instead your threads place their messages in a queue and a single thread pulls from the queue to do the writes, reads from and writes to the queue should be protected by a mutex.
Reading and writing to the same socket from different threads won't interfere with each other.
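For illustration, here is a minimal sketch (POSIX C, not from the question) of the setup being described: one shared non-blocking UDP socket, a dedicated reader thread, and writes coming straight from several sender threads. The target address and port are placeholders.

```c
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sock;  /* one UDP socket shared by all threads */

static void *reader(void *arg) {
    char buf[2048];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n > 0)
            printf("got %zd bytes\n", n);
        else
            usleep(1000);            /* non-blocking: nothing to read yet */
    }
    return NULL;
}

static void *sender(void *arg) {
    struct sockaddr_in to = {0};
    to.sin_family = AF_INET;
    to.sin_port = htons(9000);                     /* placeholder target */
    inet_pton(AF_INET, "192.0.2.1", &to.sin_addr); /* placeholder target */
    const char msg[] = "ping";
    /* each sendto() is a single atomic system call, so datagrams
     * from different threads cannot interleave with each other */
    sendto(sock, msg, sizeof(msg), 0, (struct sockaddr *)&to, sizeof(to));
    return NULL;
}

int main(void) {
    sock = socket(AF_INET, SOCK_DGRAM, 0);
    fcntl(sock, F_SETFL, O_NONBLOCK);

    pthread_t rd, s1, s2;
    pthread_create(&rd, NULL, reader, NULL);
    pthread_create(&s1, NULL, sender, NULL);  /* writes come from ...   */
    pthread_create(&s2, NULL, sender, NULL);  /* ... several threads    */
    pthread_join(s1, NULL);
    pthread_join(s2, NULL);
    return 0;
}
```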
I am building an application to send a large amount of data from a client to a server via UDP. My questions are:
Should I use one thread or multiple threads to send the data?
If I should use multiple threads, should I use one socket for all threads or one socket per thread?
Thanks,
Should I use one thread or multiple threads to send the data?
Either way can work, so it's mostly a matter of personal preference. If it was me, I would use a single thread rather than multiple threads, because multiple threads are a lot harder to implement correctly, and in this case they won't buy you any additional performance, since your throughput bottleneck is almost certainly going to be either your hard disk or your network card, not the speed of your CPU core(s).
If I should use multiple threads, should I use one socket for all threads or one socket per thread?
Again, either way will work (for UDP), but if it was me, I would use one socket per thread, only because then you don't have to worry so much about race conditions during process setup and shutdown (i.e. each thread simply creates and destroys its own separate/private socket, so there's no worrying about who does what to the socket when).
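A minimal sketch of that socket-per-thread option (POSIX C; the target address, port, and chunk contents are placeholders): each thread creates, uses, and closes its own private socket, so nothing has to be coordinated at setup or shutdown.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *send_chunk(void *arg) {
    const char *chunk = arg;                       /* this thread's data */
    int s = socket(AF_INET, SOCK_DGRAM, 0);        /* private socket */

    struct sockaddr_in to = {0};
    to.sin_family = AF_INET;
    to.sin_port = htons(9000);                     /* placeholder */
    inet_pton(AF_INET, "192.0.2.1", &to.sin_addr); /* placeholder */

    sendto(s, chunk, strlen(chunk), 0, (struct sockaddr *)&to, sizeof(to));
    close(s);                                      /* no other thread touches it */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, send_chunk, "chunk one");
    pthread_create(&t2, NULL, send_chunk, "chunk two");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```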
I am trying to write a server program which, so far, supports a single client, and over the few days I have spent developing it I concluded that I needed threads. The reason for this decision is that I take input from a WiFi socket, process it, and finally write it to a file; since the processing is slow, I needed an input thread -> circular buffer -> output thread pattern with a producer-consumer model, which is quite common in network programming.
Now the situation becomes complicated, as I need to manage client disconnection and reconnection. I thought of using pthread_exit() and cleaning up all the semaphores, then re-initializing them each time the single client reconnects.
My question is whether this is an efficient approach, i.e. killing the threads and semaphores and re-creating them every time. Are there any better solutions?
Thanks.
My question is whether this is an efficient approach, i.e. killing the threads and semaphores and re-creating them every time. Are there any better solutions?
Learn how to use non-blocking sockets and an event loop, or use a library that provides TCP sessions for you using non-blocking sockets under the hood, such as boost::asio.
Learn how to use multi-threading without polluting your code with synchronization primitives by using message passing to communicate between threads, not shared state. The event loop library you use for non-blocking I/O should also provide a means for cross-thread message passing.
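As a rough illustration of the message-passing idea (not tied to any particular event loop library), here is a minimal bounded queue in C with a mutex and condition variables; the ring size and message type are made up. Producers only ever call queue_push() and the consumer only ever calls queue_pop(), so the locking stays inside the queue and the rest of the code shares no state.

```c
#include <pthread.h>
#include <string.h>

#define QSIZE 64

struct msg { char payload[256]; };      /* placeholder message type */

static struct msg ring[QSIZE];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

void queue_push(const struct msg *m) {          /* called by producer threads */
    pthread_mutex_lock(&lock);
    while (count == QSIZE)                      /* wait while the ring is full */
        pthread_cond_wait(&not_full, &lock);
    ring[tail] = *m;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

void queue_pop(struct msg *out) {               /* called by the consumer thread */
    pthread_mutex_lock(&lock);
    while (count == 0)                          /* wait while the ring is empty */
        pthread_cond_wait(&not_empty, &lock);
    *out = ring[head];
    head = (head + 1) % QSIZE;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
}
```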
Some comments and suggestions.
1. In TCP, detecting that the other side has silently disconnected is very difficult, if not impossible. A client could disconnect by sending a RST or FIN TCP segment to the server; this is the good case. Sometimes the client disconnects without notice (crash, cable disconnection, etc.).
One suggestion here is to consider the way client and server will communicate. For example, you can use select() to set a timeout for receiving a message from the client and so detect a silent client.
Additionally, depending on the programming language and operating system, you may need to handle the broken pipe (SIGPIPE) signal (on Linux, with C/C++) when the server tries to send a message through a connection closed by the client.
2. Regarding semaphores: you shouldn't need to clean up semaphores in any special way when a client disconnects. Applying the usual good practices for locking and unlocking mutexes should be enough. Also, with resources like file descriptors, you need to release them before ending the thread, either by returning from the thread start function or with pthread_exit(). Maybe I didn't understand this part of the question.
3. Regarding threads: if you work with multiple threads, the optimum is to have a pool of pre-created consumer/worker threads that check the circular buffer and consume the next available connection. Creating and destroying threads is costly for the operating system.
Threads are resource-consuming, and you may exhaust operating system resources if you need to create 1,000 threads, for example.
Another alternative is to have only one consumer thread that manages all connections (sockets) asynchronously: a) each connection has its own state; b) the main thread goes through all connections and uses select() to detect when a read or a write is ready on a connection; c) use non-blocking sockets, though this is not essential, because select() already tells you which sockets are ready and will not block.
You can use the functions select, poll, or epoll.
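For illustration, a minimal sketch of the select()-with-timeout idea from point 1, in C; client_fd is assumed to be an already-accepted connection, and the 30-second timeout is arbitrary.

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Returns the number of bytes read, or -1 if the client is gone/silent. */
int wait_for_client(int client_fd) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(client_fd, &readfds);

    struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };   /* arbitrary timeout */

    int ready = select(client_fd + 1, &readfds, NULL, NULL, &tv);
    if (ready == 0) {
        /* nothing arrived within the timeout: treat the client as silent */
        printf("client silent, closing connection\n");
        close(client_fd);
        return -1;
    }
    if (ready < 0)
        return -1;                 /* select() itself failed */

    char buf[1024];
    ssize_t n = read(client_fd, buf, sizeof(buf));
    if (n == 0) {
        /* orderly shutdown: the client sent FIN */
        close(client_fd);
        return -1;
    }
    return (int)n;                 /* n bytes of data are now in buf */
}
```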
One link about select and non-blocking sockets: Using select() for non-blocking sockets
Another link, with an example: http://linux.die.net/man/2/select
It's explicitly stated in the ZeroMQ guide that sockets must not be shared between threads. In the case of multiple producer threads that need to PUSH their output via zmq, I see two possible design patterns:
0mq socket per producer thread
single 0mq socket in a separate thread
In the first case, each thread handles its own affairs. In the latter, you need a thread-safe queue to which all producers write and from which the 0mq thread reads and then sends.
What are the factors for choosing between these two patterns? What are the pros/cons of each?
A lot depends on how many producers there are.
If there are only a handful, then having one socket per thread is manageable, and works well.
If there are many, then a producer-consumer queue with a single socket pushing (effectively being a consumer of the queue and a single producer for the downstream sockets) is probably going to be faster. Having lots of sockets running is not without cost.
The main pro of the first case is that it is much more easily scaled out to separate processes for each producer, each one single-threaded with its own socket.
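For what it's worth, here is a minimal sketch of the socket-per-producer-thread pattern using the libzmq C API; the inproc endpoint name and the messages are made up. The context is shared (contexts are thread-safe), while each PUSH socket is created, used, and closed by its own thread, as the guide requires.

```c
#include <pthread.h>
#include <string.h>
#include <zmq.h>

static void *ctx;                       /* one context, shared by all threads */

static void *producer(void *arg) {
    void *push = zmq_socket(ctx, ZMQ_PUSH);     /* private to this thread */
    zmq_connect(push, "inproc://results");      /* placeholder endpoint */

    const char *msg = arg;
    zmq_send(push, msg, strlen(msg), 0);

    zmq_close(push);                            /* destroyed by its own thread */
    return NULL;
}

int main(void) {
    ctx = zmq_ctx_new();

    void *pull = zmq_socket(ctx, ZMQ_PULL);     /* the consumer end */
    zmq_bind(pull, "inproc://results");         /* bind before the connects */

    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer, "from producer 1");
    pthread_create(&t2, NULL, producer, "from producer 2");

    char buf[64];
    for (int i = 0; i < 2; i++) {
        int n = zmq_recv(pull, buf, sizeof(buf), 0);
        (void)n;                                /* process the message here */
    }

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    zmq_close(pull);
    zmq_ctx_destroy(ctx);
    return 0;
}
```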
I've asked a similar question.
You can use a pool of worker threads, like this, where each worker has a dedicated 0mq socket via a ThreadLocal, ensuring sockets are used and destroyed in the threads that created them.
You can also use a pool of sockets, perhaps backed by an ArrayBlockingQueue, and just take/replace sockets whenever you need them. This approach is less safe than the dedicated-socket approach because it shares socket objects (synchronously) amongst different threads; you should be OK since Java handles the locking, but it's not the 0mq-recommended approach.
Hope it helps...
I am using Linux 3.2.0, x86_64.
Can I call accept() for one socket from several threads simultaneously?
Yes, you can call accept() on the same listening socket from multiple threads and from multiple processes, though there may not be as much point to it as you think: the kernel will only allow one of them to succeed for any given connection. When this is done with processes it is known as pre-forking, and it saves the expense of a fork() for every new connection. But when you are dealing with threads, you can more easily have an existing thread pool that waits on a queue of new connections: one thread does the accept and writes to the queue, and the worker threads read from the queue and do their thing. It's cleaner, it's a well-understood pattern, and you lose almost nothing.
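For illustration, a minimal sketch of several threads blocking in accept() on the same listening socket (POSIX C); the port and pool size are placeholders. The kernel hands each incoming connection to exactly one of the waiting accept() calls.

```c
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_fd;                 /* shared listening socket */

static void *acceptor(void *arg) {
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);   /* only one thread gets it */
        if (conn < 0)
            continue;
        /* handle the connection here, then close it */
        close(conn);
    }
    return NULL;
}

int main(void) {
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                    /* placeholder port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    pthread_t workers[4];                           /* placeholder pool size */
    for (int i = 0; i < 4; i++)
        pthread_create(&workers[i], NULL, acceptor, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```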
I'm using TClientSocket or Indy's TIdTCPClient (depending on the project).
I have a few threads, each processing items, that sometimes need to send data over the connected client socket. (Data read from the socket is NOT used in the processing threads.)
Basically my question is...
Is this possible?
Is it "safe"?
Or should I
have a client socket per thread, or
use some kind of marshalling/critical sections?
delphi-7 indy-9
Multiple threads can read and write to the same socket. Every time you call accept, it extracts the first connection from the queue of pending connections, creates a new socket with the same properties, and allocates a new file descriptor for that socket.
So only one thread per accepted connection.
If you are asking whether you can do multiple writes/reads on an accepted connection, you will need locking, and hence you lose some of the benefits of parallelism. If you want to run a long process in a thread and then write the result to the socket, use synchronization so the writes happen in the correct order.