I am writing a server application in which one thread is deployed to read/write many sockets connected to clients. My manager tells me that this is not a good design, because if the thread aborts for some unknown reason, then all the read/write work will stop forever.
So I wonder: under what conditions will a thread abort, other than the case where we return from the thread's Run() function? Do we need to consider the case where the thread stops running abnormally?
It depends. One thread per client can be a bad thing scalability-wise, especially if each thread doesn't do much work per client. In that circumstance it can be better to have one thread handle a number of clients, the idea being to strike a good balance between the number of threads and having each thread do a decent amount of work.
If, on the other hand, each thread is doing a lot of work per client, then one thread per client isn't such a bad idea, the overhead of the thread being insignificant in comparison to the workload.
So, setting that aside: a thread will abort if your code is written so that the thread returns or self-terminates. If another thread in your program knows this thread's handle/ID, then the library you're using may have a function with a name like thread_kill(). That would allow the other thread to kill this one, though that's almost always a bad idea.
So as far as I'm concerned your thread will only abort and disappear if you've written your code to make that happen deliberately.
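To make that concrete, here is a minimal sketch using POSIX threads (an assumption; the question doesn't say which thread library is in use). It shows the three ways a thread normally ends: returning from its start routine, self-terminating via pthread_exit(), and being killed by another thread via pthread_cancel(), which is the pthreads equivalent of the hypothetical thread_kill() above.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Ends by returning from the start routine. */
    static void *worker_returns(void *arg) {
        (void)arg;
        return NULL;               /* thread terminates here */
    }

    /* Ends by self-terminating explicitly. */
    static void *worker_exits(void *arg) {
        (void)arg;
        pthread_exit(NULL);        /* same effect as returning */
    }

    /* Loops forever until another thread cancels it. */
    static void *worker_loops(void *arg) {
        (void)arg;
        for (;;)
            sleep(1);              /* sleep() is a cancellation point */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2, t3;
        pthread_create(&t1, NULL, worker_returns, NULL);
        pthread_create(&t2, NULL, worker_exits, NULL);
        pthread_create(&t3, NULL, worker_loops, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_cancel(t3);        /* the "kill" case: almost always a bad idea */
        pthread_join(t3, NULL);
        puts("all three threads have terminated");
        return 0;
    }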
Handling exceptions is probably best done in its entirety within the thread where the exception arose. I've never tried to do otherwise (I still write in pure C), but the word is that it's difficult to handle them outside that thread. Irrespective of whether each thread handles one or many clients, you still have to handle all errors and events within the thread.
It may be simpler to get that correct if you write it so that a thread handles a single client. Getting it wrong could lead to a thread getting into a stalled state (e.g. waiting on a client that is itself waiting), and as those accumulate over time they will eventually kill your whole system.
I am writing a server application in which one thread is deployed to read/write many sockets connected to clients.
Not a good design. There should be at least one thread per client, in some circumstances two: one to read and one to write. If you're dealing in blocking I/O, servicing one client could block out all the others. (If you're dealing in non-blocking I/O you don't need threads at all.)
My manager tells me that this is not a good design, because if the thread aborts for some unknown reason, then all the read/write work will stop forever.
He's right, for more reasons than he is advancing.
I have a question about multiple sockets.
I know that I have to use select() for multiple sockets. select() waits for a fd ...
But why do we need to use select() when we can create a thread for each socket and perform accept() on each one separately? Is it a bad idea? Is it just about "too many sockets, too many threads", or what?
It's true, you can avoid multiplexing sockets by instead spawning one thread for each socket, and then using blocking I/O on each thread.
That saves you from having to deal with select() (or poll(), etc.); but now you have to deal with multiple threads instead, which is often worse.
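For illustration, a bare-bones version of that design in POSIX C might look like the following (an echo server; error handling is trimmed and the port number is an arbitrary choice for the sketch):

    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Each accepted socket gets its own thread doing plain blocking I/O. */
    static void *client_thread(void *arg) {
        int fd = (int)(intptr_t)arg;
        char buf[512];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            send(fd, buf, (size_t)n, 0);   /* echo back whatever arrived */
        close(fd);                         /* client hung up or errored out */
        return NULL;
    }

    int main(void) {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7777);       /* arbitrary example port */
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 16);
        for (;;) {                         /* one new thread per accepted socket */
            int fd = accept(listener, NULL, NULL);
            if (fd < 0)
                continue;
            pthread_t t;
            pthread_create(&t, NULL, client_thread, (void *)(intptr_t)fd);
            pthread_detach(t);             /* thread cleans up after itself */
        }
    }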
Whether threads will be more of a hassle to manage than socket-multiplexing in your particular program depends a lot on what your program is trying to do. For example, if the threads in your program don't need to communicate/co-operate with each other or share any resources, then a multithreaded design can work well (as would a multiprocess design).
On the other hand, if your threads all need to access a shared data structure or other resource, or if they need to interact with each other, then you've got a bit of a programming challenge on your hands, which you'll need to solve 100% perfectly or you'll end up with a program that "seems to work most of the time" but then occasionally deadlocks, crashes, or gives incorrect results due to incorrect/insufficient synchronization. This phenomenon of "meta-stability" is much more common/severe amongst buggy multithreaded programs than buggy single-threaded ones, since a multithreaded program's exact flow of execution is different every time you run it (due to the asynchronous nature of the threads with respect to each other).
Stability and code-correctness issues aside, there are a couple of other problems particular to multithreading that you avoid by using a single-threaded design:
Most OS's don't scale well above a few dozen threads. So if you're thinking one-thread-per-client, and you want to support hundreds or thousands of simultaneous clients, you're in for some performance problems.
It's hard to control a thread that is blocked in a blocking-socket call. Say the user has pressed Command-Q (or whatever the appropriate equivalent is), so it's now time for your program to quit. If you have one or more threads blocked inside a blocking-socket call, there's no straightforward way to make that happen:
You can't just unilaterally call exit(), because while the main thread is tearing down process-global resources, one or more threads might still be using them, leading to an occasional crash
You can't ask the threads to exit (via atomic-boolean or whatever) and then call join() to wait for them, because they are blocking inside I/O calls and thus might take minutes/hours/days before they respond
You can't send a signal to the threads and have them react in a signal-handler, because signals are per-process, and you can't control which thread will receive the signal.
You can't just unilaterally kill the threads, because they might be holding resources (like mutexes or file handles) that would then remain unreleased forever, potentially causing deadlocks or other problems
You can't close down the threads' sockets for them and hope that this will cause the threads to error out and terminate, as this leads to a race condition if the threads also try to close down those same resources.
So even in a multithreaded design, if you want a clean shutdown (or any other sort of local control of a network-thread) you usually end up having to use non-blocking I/O and/or socket multiplexing inside each thread anyway, so now you've got the worst of both worlds, complexity-wise.
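The usual way out is the classic "self-pipe" trick (my example, not something from the text above): each network thread blocks in poll()/select() on both its socket and the read end of a pipe, and the main thread wakes it by writing a byte to the pipe. A minimal runnable sketch, with a second pipe standing in for the real socket:

    #include <poll.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static int shutdown_pipe[2];    /* [0] = read end, [1] = write end */

    /* Network thread: blocks on its socket AND the shutdown pipe. */
    static void *net_thread(void *arg) {
        int sock_fd = (int)(intptr_t)arg;
        struct pollfd fds[2] = {
            { .fd = sock_fd,          .events = POLLIN },
            { .fd = shutdown_pipe[0], .events = POLLIN },
        };
        for (;;) {
            if (poll(fds, 2, -1) < 0)
                continue;                    /* e.g. EINTR: just retry */
            if (fds[1].revents & POLLIN) {
                puts("woken by shutdown pipe, exiting cleanly");
                return NULL;                 /* main can now join us safely */
            }
            if (fds[0].revents & POLLIN) {
                /* ... read and handle socket data here ... */
            }
        }
    }

    int main(void) {
        int fake_socket[2];
        pipe(shutdown_pipe);
        pipe(fake_socket);                   /* stand-in for a real socket */
        pthread_t t;
        pthread_create(&t, NULL, net_thread, (void *)(intptr_t)fake_socket[0]);
        sleep(1);                            /* pretend the server is running */
        write(shutdown_pipe[1], "x", 1);     /* request shutdown */
        pthread_join(t, NULL);               /* returns promptly */
        return 0;
    }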
Referring to the following discussion:
How is Node.js inherently faster when it still relies on Threads internally?
After having gone through all the responses, I still have a basic question: if a DB call is made, 'somebody' has to block for the call to return. Deep down it turns into a blocking call, and somebody has to make it; that 'somebody' has to be a thread. If there are 50 DB calls, then even though they appear non-blocking to the JavaScript, deep down they have all blocked, and for all 50 to be fired at the DB together, each must be sent by a thread. That means 50 threads have sent a DB call and are waiting for it to return, which is no different from having 50 threads like Apache does. Please rectify my understanding. What is Node.js doing cleverly, and how can fewer than 50 threads run in this case?
You are... partially correct. If there are 50 concurrent DB calls, then that means 50 threads, each dedicated to a DB call. (Actually, by default node provides only 4 concurrent threads in its thread pool; if you want more, you have to explicitly specify how many threads you're willing to allow node to spin up, see my answer here. Any excess requests are queued.)
What makes this more efficient than Apache is that each of those threads is dedicated to the smallest functional unit... it lives only for the life of that database call, and then it's relinquished (the thread goes back into the pool; new threads are created only while the pool is below its limit). This is in stark contrast to Apache, which spins up a thread for each new request, and that thread may have to service multiple database calls and other processing before the request is completed and the thread can be relinquished.
Ultimately, this results in each thread spending more of its time doing work or in the pool waiting for more work and less time being idle and unavailable.
Be aware that this is workload-dependent; there are workloads that work better in the Apache model, but in general most web-style workloads are better suited to the node model.
The difference is that, in something like Apache, I believe those threads are going to be handled in the order they are received. So if thread one is waiting for 10 TB of data and thread two is only waiting for 10 KB, thread two has to wait until thread one is done, even though its work could finish far faster and return sooner. With node, the idea is that each thread waiting on I/O returns as soon as its I/O is done. So, depending on the I/O, the same situation could allow thread two to return before thread one has finished its work. This is older, but I believe it's still an excellent write-up of threading in node: http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/
I've learned that a process has running, ready, blocked, and suspended states. Threads also have these states, except for suspended, because they live in the process's address space.
Most of the time, a process blocks when it is doing blocking I/O or waiting for an event.
I can easily picture a process getting blocked if it's single-threaded or if it follows a many-to-one model, but how does it work if the process is multi-threaded?
For example:
I have a process with two threads in a system that follows a one-to-one model. One handles the GUI and the other handles the blocking I/O. I know the process remains responsive because the other thread handles the I/O.
So is there any chance the process gets blocked, or should I just rule that out in this case?
I'm just getting into this stuff, so forgive me if I haven't understood some of the important details yet.
Let's say you have a work queue where the UI thread schedules work to be done and the I/O thread looks there for work to do. The work queue itself is data that is read and modified from both threads, therefore you must synchronize access somehow or race conditions will result.
The naive approach is to synchronize access to the queue using a lock (aka critical section). If the I/O thread acquires the lock and then blocks, the UI thread will only remain responsive until it decides it needs to schedule work and tries to acquire the lock. A better approach is to use a lock-free queue, about which much has been written; you can easily search for more info.
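To ground that, here is a minimal sketch of the naive locked queue in C (a fragment, not a full program; pthreads assumed, names invented for illustration). A condition variable lets the I/O thread sleep while the queue is empty, but the UI thread can still stall on the lock if the other side holds it too long, which is exactly the hazard described above:

    #include <pthread.h>

    #define QSIZE 64

    typedef struct {
        void *items[QSIZE];
        int head, tail, count;
        pthread_mutex_t lock;         /* init with pthread_mutex_init() */
        pthread_cond_t not_empty;     /* init with pthread_cond_init() */
    } work_queue;

    /* Called from the UI thread; only blocks while the lock is held. */
    void queue_push(work_queue *q, void *item) {
        pthread_mutex_lock(&q->lock);
        if (q->count < QSIZE) {       /* drop work if full, to keep the sketch simple */
            q->items[q->tail] = item;
            q->tail = (q->tail + 1) % QSIZE;
            q->count++;
            pthread_cond_signal(&q->not_empty);
        }
        pthread_mutex_unlock(&q->lock);
    }

    /* Called from the I/O thread; sleeps until work is available. */
    void *queue_pop(work_queue *q) {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        void *item = q->items[q->head];
        q->head = (q->head + 1) % QSIZE;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        return item;
    }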
But to answer your question: yes, it is still much easier than you might think to cause the UI to stutter or hang even when using multiple threads. There are various libraries that make it easier or harder to solve this problem, so depending on your OS and language of choice, there may be something better than just OS primitives. Win32 (from what I remember) doesn't make it very easy at all, despite having all sorts of synchronization primitives. Pthreads and Boost never seemed very straightforward to me either. Apple's GCD makes it semantically much easier to express what you want (in my opinion), though there are still pitfalls to be aware of (such as scheduling too many blocking operations on a single work queue to be done in parallel and causing the processor to thrash when they all wake up at the same time).
My advice is to just dive in and write lots of multithreaded code. It can be tough to debug but you will learn a lot and eventually it becomes second nature.
I came across node.js and Python's Tornado vs. Apache.
They say :
Apache makes a thread for every connection.
Node.js & Tornado actually do event looping on a thread, and a single thread can handle many connections.
I don't understand what would logically be a child of a thread.
In computer science terms:
Processes have isolated memory and share CPU with context switches.
Threads divide a process.
Therefore, a process with multiple control points is achieved by multiple threads.
Now,
How does an event loop work under a thread?
How can it handle different connections under the control of one thread?
Update:
I mean, if there is communication with 3 sockets under 1 thread, how can 1 thread communicate with 3 sockets without keeping any of them waiting?
An event loop at its basic level is something like:
Event event;                      /* assuming some Event type from the framework */
while (getNextEvent(&event)) {    /* pull the next event off the queue */
    dispatchEvent(&event);        /* hand it to its event-handling procedure */
}
In other words, it's nothing more than a loop which continuously retrieves events from a queue of some description, then dispatches the event to an event handling procedure.
It's likely you know that already but I'm just explaining it for context.
In terms of how the different servers handle it, it appears that every new connection being made in Apache has a thread created for it, and that thread is responsible for that connection and nothing else.
For the other two, it's likely that there are a "set" number of threads running (though this may actually vary based on load) and a connection is handed off to one of those threads. That means any one thread may be handling multiple connections at any point in time.
So the event in that case would have to include some details as to what connection it applies to, so the thread can keep the different connections isolated from each other.
There are no doubt pros and cons to both options. A one-connection-per-thread option would have simpler code in the thread function, since it doesn't have to deal with multiple connections, but it may end up using a lot of resources as the load gets high.
In a multiple-connection-per-thread scenario, the code is a little more complex but you can generally minimise thread creation and destruction overhead by simply having the maximum number of threads running all the time. Outside of high-load periods, they'll just be sitting around doing nothing, waiting on a connection event to be given to them.
And, even under high load, it may be that each thread can quite easily process five concurrent connections without falling behind, which would make the one-connection-per-thread option look a little wasteful.
Based on your update:
I mean, if there is communication with 3 sockets under 1 thread, how can 1 thread communicate with 3 sockets without keeping any of them waiting?
There are a great many ways to do this. For a start, it would generally all be abstracted behind the getNextEvent() call, which would probably be responsible for handling all connections and farming them out to the correct threads.
At the lowest levels, this could be done with something like a select call, a function that awaits activity on one of many file descriptors, and returns information relating to which file descriptor has something to say.
For example, you provide a file descriptor set of all currently open sockets and pass that to select. It will then give you back a modified set, containing only those that are of interest to you (such as ready-to-read-from).
You can then query that set and dispatch events to the corresponding thread.
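As an illustrative sketch (POSIX C assumed; handle_client() is an invented name for whatever processes a ready socket), a single thread watching three sockets with select() could look like this. No socket keeps the others waiting, because the thread only ever touches descriptors the kernel has already marked ready:

    #include <sys/select.h>

    #define MAX_CLIENTS 3

    extern void handle_client(int fd);    /* invented: process one ready socket */

    void serve(int client_fds[MAX_CLIENTS]) {
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            int maxfd = -1;
            for (int i = 0; i < MAX_CLIENTS; i++) {
                FD_SET(client_fds[i], &readable);
                if (client_fds[i] > maxfd)
                    maxfd = client_fds[i];
            }
            /* Blocks until at least one socket is ready to read; the kernel
               modifies the set to contain only the ready descriptors. */
            if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
                continue;                 /* e.g. EINTR: retry */
            for (int i = 0; i < MAX_CLIENTS; i++)
                if (FD_ISSET(client_fds[i], &readable))
                    handle_client(client_fds[i]);   /* no other socket waits */
        }
    }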
As a side project I'm currently writing a server for an age-old game I used to play. I'm trying to make the server as loosely coupled as possible, but I am wondering what would be a good design decision for multithreading. Currently I have the following sequence of actions:
Startup (creates) ->
Server (listens for clients, creates) ->
Client (listens for commands and sends periodic data)
I'm assuming an average of 100 clients, as that was the max at any given time for the game. What would be the right decision as for threading of the whole thing? My current setup is as follows:
1 thread on the server which listens for new connections, on new connection create a client object and start listening again.
Client object has one thread, listening for incoming commands and sending periodic data. This is done using a non-blocking socket, so it simply checks if there's data available, deals with that and then sends messages it has queued. Login is done before the send-receive cycle is started.
One thread (for now) for the game itself, as I consider that to be separate from the whole client-server part, architecturally speaking.
This would result in a total of 102 threads. I am even considering giving the client 2 threads, one for sending and one for receiving. If I do that, I can use blocking I/O on the receiver thread, which means that thread will be mostly idle in an average situation.
My main concern is that by using this many threads I'll be hogging resources. I'm not worried about race conditions or deadlocks, as that's something I'll have to deal with anyway.
My design is setup in such a way that I could use a single thread for all client communications, no matter if it's 1 or 100. I've separated the communications logic from the client object itself, so I could implement it without having to rewrite a lot of code.
The main question is: is it wrong to use over 200 threads in an application? Does it have advantages? I'm thinking about running this on a multi-core machine, would it take a lot of advantage of multiple cores like this?
Thanks!
Out of all these threads, most of them will be blocked usually. I don't expect connections to be over 5 per minute. Commands from the client will come in infrequently, I'd say 20 per minute on average.
Going by the answers I get here (the context switching was the performance hit I was thinking about, but I didn't know that until you pointed it out, thanks!) I think I'll go for the approach with one listener, one receiver, one sender, and some miscellaneous stuff ;-)
use an event stream/queue and a thread pool to maintain the balance; this will adapt better to other machines which may have more or fewer cores
in general, many more active threads than you have cores will waste time context-switching
if your game consists of a lot of short actions, a circular/recycling event queue will give better performance than a fixed number of threads
To answer the question simply, it is entirely wrong to use 200 threads on today's hardware.
Each thread takes up 1 MB of memory, so you're taking up 200MB of page file before you even start doing anything useful.
By all means break your operations up into little pieces that can be safely run on any thread, but put those operations on queues and have a fixed, limited number of worker threads servicing those queues.
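In pthreads terms, the shape being recommended is roughly this (a sketch; queue_pop() is assumed to be a blocking, thread-safe dequeue like the one sketched earlier, and the names are invented):

    #include <pthread.h>

    #define NUM_WORKERS 8                /* fixed and small: roughly one per core */

    typedef void (*task_fn)(void *);
    typedef struct { task_fn fn; void *arg; } task;

    extern task *queue_pop(void);        /* assumed: blocking, thread-safe dequeue */

    /* Each worker loops forever, pulling small operations off the queue. */
    static void *worker(void *unused) {
        (void)unused;
        for (;;) {
            task *t = queue_pop();       /* sleeps while there is no work */
            t->fn(t->arg);               /* run one small, self-contained operation */
        }
    }

    void start_pool(void) {
        pthread_t workers[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker, NULL);
        /* workers are long-lived, so per-task thread churn is avoided entirely */
    }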
Update: Does wasting 200MB matter? On a 32-bit machine, it's 10% of the entire theoretical address space for a process - no further questions. On a 64-bit machine, it sounds like a drop in the ocean of what could be theoretically available, but in practice it's still a very big chunk (or rather, a large number of pretty big chunks) of storage being pointlessly reserved by the application, and which then has to be managed by the OS. It has the effect of surrounding each client's valuable information with lots of worthless padding, which destroys locality, defeating the OS and CPU's attempts to keep frequently accessed stuff in the fastest layers of cache.
In any case, the memory wastage is just one part of the insanity. Unless you have 200 cores (and an OS capable of utilizing them), you don't really have 200 parallel threads. You have (say) 8 cores, each frantically switching between 25 threads. Naively you might think that, as a result, each thread experiences the equivalent of running on a core that is 25 times slower. But it's actually much worse than that: the OS spends more time taking one thread off a core and putting another one on it ("context switching") than it does actually allowing your code to run.
Just look at how any well-known successful design tackles this kind of problem. The CLR's thread pool (even if you're not using it) serves as a fine example. It starts off assuming just one thread per core will be sufficient. It allows more to be created, but only to ensure that badly designed parallel algorithms will eventually complete. It refuses to create more than 2 threads per second, so it effectively punishes thread-greedy algorithms by slowing them down.
I write in .NET and I'm not sure if the way I code is due to .NET limitations and their API design or if this is a standard way of doing things, but this is how I've done this kind of thing in the past:
A queue object that will be used for processing incoming data. This should be sync locked between the queuing thread and worker thread to avoid race conditions.
A worker thread for processing data in the queue. The thread that queues up the data uses a semaphore to notify this thread to process items in the queue. This thread starts before any of the other threads and contains a continuous loop that runs until it receives a shutdown request. The first instruction in the loop checks a pause/continue/terminate flag. The flag is initially set to pause, so that the thread sits in an idle state (instead of looping continuously) while there is no processing to be done. The queuing thread changes the flag when there are items in the queue to be processed. The thread then processes a single item in the queue on each iteration of the loop. When the queue is empty, it sets the flag back to pause, so that on the next iteration of the loop it waits until the queuing process notifies it that there is more work to be done. (A rough C analogue of this worker loop is sketched after this list.)
One connection listener thread which listens for incoming connection requests and passes these off to...
A connection processing thread that creates the connection/session. Having a separate thread from your connection listener thread means that you're reducing the potential for missed connection requests due to reduced resources while that thread is processing requests.
An incoming data listener thread that listens for incoming data on the current connection. All data is passed off to a queuing thread to be queued up for processing. Your listener threads should do as little as possible outside of basic listening and passing the data off for processing.
A queuing thread that queues up the data in the right order so everything can be processed correctly; this thread raises the semaphore for the worker thread to let it know there's data to be processed. Having this thread separate from the incoming data listener means that you're less likely to miss incoming data.
Some session object which is passed between methods so that each user's session is self contained throughout the threading model.
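The original is .NET, but here is a rough C analogue of the worker-thread loop from the second bullet above (a sketch using a POSIX semaphore in place of the semaphore/flag machinery described; queue_take() and process() are invented names):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdatomic.h>
    #include <stddef.h>

    extern void *queue_take(void);       /* assumed: thread-safe dequeue, NULL if empty */
    extern void process(void *item);     /* assumed: handle one queued item */

    static sem_t work_available;         /* sem_init(&work_available, 0, 0) at startup */
    static atomic_bool shutting_down;

    /* Worker: sits idle in sem_wait() instead of looping while the queue is empty. */
    static void *worker_thread(void *unused) {
        (void)unused;
        for (;;) {
            sem_wait(&work_available);   /* sleep until the queuing thread notifies us */
            if (atomic_load(&shutting_down))
                return NULL;             /* shutdown request received */
            void *item;
            while ((item = queue_take()) != NULL)
                process(item);           /* drain the queue, then go idle again */
        }
    }

    /* Queuing thread calls this after enqueuing each item. */
    void notify_worker(void) {
        sem_post(&work_available);
    }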
This keeps threads down to as simple but as robust a model as I've figured out. I would love to find a simpler model than this, but I've found that if I try to reduce the threading model any further, I start missing data on the network stream or missing connection requests.
It also assists with TDD (Test-Driven Development), in that each thread performs a single task and is much easier to write tests for. Having hundreds of threads can quickly become a resource-allocation nightmare, while having a single thread becomes a maintenance nightmare.
It's far simpler to keep one thread per logical task the same way you would have one method per task in a TDD environment and you can logically separate what each should be doing. It's easier to spot potential problems and far easier to fix them.
What's your platform? If Windows then I'd suggest looking at async operations and thread pools (or I/O Completion Ports directly if you're working at the Win32 API level in C/C++).
The idea is that you have a small number of threads that deal with your I/O and this makes your system capable of scaling to large numbers of concurrent connections because there's no relationship between the number of connections and the number of threads used by the process that is serving them. As expected, .Net insulates you from the details and Win32 doesn't.
The challenge of using async I/O and this style of server is that the processing of client requests becomes a state machine on the server and the data arriving triggers changes of state. Sometimes this takes some getting used to but once you do it's really rather marvellous;)
I've got some free code that demonstrates various server designs in C++ using IOCP here.
If you're using unix or need to be cross platform and you're in C++ then you might want to look at boost ASIO which provides async I/O functionality.
I think the question you should be asking is not if 200 as a general thread number is good or bad, but rather how many of those threads are going to be active.
If only several of them are active at any given moment, while all the others are sleeping or waiting or whatnot, then you're fine. Sleeping threads, in this context, cost you nothing.
However, if all of those 200 threads are active, your CPU is going to waste a great deal of time on context switches between all those ~200 threads.