How many tasks can a single thread execute simultaneously?
Concurrently: Zero or one. A thread is a thread. Not a magic yarn.
If by "in parallel" you mean "processed in parallel" and if you consider awaited Tasks, then there is no upper-bound limit on how many tasks are being awaited - but only one will actually be executed per a single CPU hardware-thread (usually 2x the CPU core count due to superscalar simultaneous multithreading, aka Hyper-Threading).
Also remember that Task is very abstract. It does not refer only to concurrently executing/executed (non-blocking) code, but can also refer to pending IO (e.g. disk IO, network IO, etc) that is being handled asynchronously by the host environment (e.g. the operating system) rather than it blocking the thread if it used a "traditional" (non-asynchronous) OS API call.
Re: comment
I just have a problem with handling multiple clients (it could be 5000, for instance) on the server, and for each of them I need to run a separate handling loop. But I'm concerned about the fact that a thread can handle either 0 or 1 tasks. Does it mean I should create a new thread for every new client? I know it does not matter how many threads I create; it won't change the speed. But speed does not matter - the loop just has to be executed independently for each client.
Ugh, this is not quite the same thing as your question - but I'll try my best to explain...
for each of them, I need to run a separate handling loop
Not necessarily. Just because you need to maintain state for each connected client does not mean you need a separate "loop" (i.e. a thread of execution).
Fundamentally, almost all network IO in computers today goes through the BSD Sockets API ("WinSock" on Windows; in .NET this is represented by System.Net.Sockets.Socket). Remember that all kinds of computers work with sockets, including simple single-threaded ones. They don't need a blocking loop for each connection: instead they use select to get information about socket status without blocking, and only read data from a socket's input buffer when it is safe to do so. Voila! Only a single thread is needed. You can do this in .NET by checking Socket.Available, using Socket.Select, or better yet, using the newer NetworkStream.ReadAsync method, for example.
If you're using the BSD Sockets API (System.Net.Sockets) directly, then you should use Socket.Select.
Does it mean I should create a new thread for every new client?
No, no, no - you do not. Creating and running a new Thread for each connected client (Socket, NetworkStream, TcpClient, etc.) is an anti-pattern that will quickly exhaust your available process memory (each Thread costs 1MB just for its default stack on desktop Windows, and ~250KB within IIS).
I know it does not matter how many threads I create
Yes, it does! Spawning lots of threads is a good way to torpedo your application's network performance and consume unnecessarily large amounts of memory.
the loop just should be executed independently for each client.
Please learn about asynchronous sockets. By using C#'s async/await with NetworkStream's or Socket's asynchronous methods, your code will use only as many threads as are actually needed to handle network data.
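The same single-thread-many-sockets model, sketched here in Node.js rather than C# purely to illustrate the principle (the port number is arbitrary): one asynchronous data callback per client replaces 5000 dedicated blocking loops.

const net = require('net');

const server = net.createServer((socket) => {
  // One 'data' callback per client replaces a dedicated blocking read loop.
  socket.on('data', (chunk) => {
    // handle the client's command here, then reply asynchronously
    socket.write('ack\n');
  });
  socket.on('error', () => socket.destroy());
});

// Thousands of concurrent clients are multiplexed onto this one thread.
server.listen(8080);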
I have started reading about Node.js. I have a few questions:
Is Node better than multi-threading just because it saves us from caring about deadlocks and reduces thread-creation overhead, or are there other factors too? Node does use threads internally, so we can't say that it saves thread-creation overhead - only that the threads are managed internally.
Why do we say that Node is not good for multi-core processors? It creates threads internally, so it must be getting the benefits of multiple cores. And why do we say it is not good for CPU-intensive applications? We can always fork new processes for CPU-intensive tasks.
Are only functions with callbacks dispatched as threads, or are there other cases too?
Non-blocking I/O can be achieved using threads too. A main thread can always be ready to receive new requests. So what is the benefit?
Correct.
Node.js does scale across cores, through child processes, the cluster module, and other mechanisms - a minimal sketch follows these points.
Callbacks are just a common convention developers use to implement asynchronous methods. There is no technical reason why you have to include them. You could, for example, have all your async methods use promises instead.
Everything Node does could be accomplished with threads, but there is less code/overhead involved with Node.js's asynchronous IO than there is with multi-threaded code. You do not, for example, need to create an instance of Thread or Runnable every time like you would in Java.
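For the point about scaling across cores, here is a minimal cluster sketch (the port number is arbitrary): the master forks one worker process per core, and all workers accept connections on the same port.

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker process per core; each worker gets its own event loop.
  os.cpus().forEach(() => cluster.fork());
} else {
  // All workers share the same listening port; incoming connections are distributed among them.
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8000);
}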
Using Node.js and one single CPU virtual instance, if I put a worker thread and a web thread on the same node, would one block the other? Would I require two CPUs for them to run perfectly in parallel?
Yes, one would block the other if it is all synchronous code.
Since you only have one virtual CPU instance (assuming it is not hyperthreaded), at the core level the CPU executes instructions sequentially: 1, then 2, then 3, then 4.
Therefore, in theory, if one worker had something like this, it would block:
while (true) {
  doSomething(); // never yields control back to the event loop
}
Disclaimer: I'm not sure whether the OS kernel would handle anything regarding blocking instructions.
However, Node.js runs all I/O in the event loop, along with tasks that you explicitly schedule on the event loop (process.nextTick(), setTimeout()...). The way the event loop works is explained well here, so I won't go into much detail - the only blocking part of Node.js is synchronously running code, like the example above.
So, long story short: since your web worker uses Node.js, and the http module is an async module, your web worker will not block. Since your worker thread also uses Node.js, assuming it executes code that is launched when an event occurs (a visit to your website, for example), it will not block.
To run synchronous code perfectly in parallel, you would need two CPUs. However, assuming it is asynchronous, it should work just fine on one CPU.
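To make the distinction concrete, here is a small sketch (the durations are arbitrary): the synchronous loop starves the event loop, while the version that yields lets timers and I/O callbacks run in between.

// Blocks: nothing else (timers, incoming requests) runs until this returns.
function blockingWork() {
  const end = Date.now() + 5000;
  while (Date.now() < end) { /* spin */ }
}

// Does not block: each step is rescheduled on the event loop,
// so I/O callbacks and timers can run in between.
function nonBlockingWork(i = 0) {
  if (i >= 100) return;
  setTimeout(() => nonBlockingWork(i + 1), 50);
}

setInterval(() => console.log('still responsive at', Date.now()), 1000);
nonBlockingWork();
// blockingWork(); // uncommenting this would freeze the interval above for 5 seconds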
Given a single core CPU, what is the benefit to coding using threads?
At least in the Java implementation (and it seems natural to extend this to any other language, given the single-core restriction), you may have several threads performing various actions, but each gets a limited time slice before being switched out.
Given process A and process B:
What is the benefit of performing half of process A, finishing process B, and then finishing the second half of process A, versus performing process A and then process B?
It seems that switching between the threads would introduce delays that prolong the overall completion time of both processes, versus not switching and just completing A and then B.
The reason to use threads on a single-core system is simply to allow processes that would otherwise use all the CPU to be preempted by other tasks that need to get done sooner. The most common reason to make a system multi-threaded is to have a responsive user interface even while performing long calculations.
Of course, any operation can take a long time (reading a file, accessing a database, resizing a photo, recalculating a spreadsheet), and those operations can be performed on a separate thread, allowing the thread that responds to user input to keep running the whole time.
Twenty years ago, for example, it was rare to have a multi-CPU system or an OS that allowed multi-threading, so nearly every program was single-threaded and there were many frameworks created to allow systems to have UIs and still do I/O. The standard mechanism for this is an event loop, where all events (UI, network, timers, etc.) are processed in a big loop.
This type of system means that the UI is held up during things like file I/O and calculations. In order to not hold up the UI too much, you have to do the I/O in chunks (say, read the file 4k at a time), processing any incoming UI events between chunks. This is really just a hack to keep the system running, but it's hard to make the system run smoothly like this because you don't know how often you need to process events.
The solution is to have a separate thread to recalculate your spreadsheet or write your file. That way the OS can give those threads fair timeslices while still preempting them to run the UI, allowing the UI to always be responsive.
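Here is the same idea sketched with Node's worker_threads, purely as an illustration (the answer itself is about UI frameworks in general, and the loop bound is arbitrary): the long calculation runs on a separate thread, so the main loop keeps servicing its events.

const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Main thread: stays free to service events while the calculation runs elsewhere.
  const timer = setInterval(() => console.log('event loop still responsive'), 500);
  const worker = new Worker(__filename);
  worker.on('message', (result) => {
    console.log('long calculation finished:', result);
    clearInterval(timer);
  });
} else {
  // Worker thread: a stand-in for "recalculate the spreadsheet".
  let total = 0;
  for (let i = 0; i < 1e9; i++) total += i;
  parentPort.postMessage(total);
}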
An executing thread is not necessarily doing anything useful. The canonical example is reading from disk -- that data isn't going to be there for another few milliseconds, during which time the processor would be sitting unused. Threads allow one piece of the program to use the CPU while other pieces of the program are waiting for operations to complete.
There are many reasons. Wikipedia gives a decent overview on its page about threads.
Here are a few, off the top of my head:
I/O bound tasks benefit from threading (especially network applications).
Hyperthreaded processors may speed up multithreaded applications even on a single core.
Threads can be instructed to wait (block) and wake up on specific events, enabling responsive event-driven programming.
If your program has to do several things "at the same time", then threads are a good way to go, particularly if some of those tasks are quite long-running. Otherwise you find yourself writing code that looks like an operating system scheduler inside your program, which is always a waste of time if the OS underneath you already has a perfectly good one. You'd find that your source code was mostly 'scheduler' and not much 'program', which is very inelegant. A good threaded program can be very elegant and economical in source code, which makes oneself look good and saves time.
Some runtimes get (or got) it wrong. In the early days of Ada, the runtime environment would do its own thread scheduling, and it was never very satisfactory. That was partly because, whilst the Ada language spec included the concept of threads, the OSes we had back then quite often didn't provide them. Ada got a lot better when the compiler writers started using the underlying OS threads instead.
Similarly, Python doesn't really make proper use of the underlying OS threads; the Global Interpreter Lock spoils it. Python has sidestepped the whole issue by going for multiprocessing instead (not necessarily a good thing on Windows hosts...).
Early versions of Windows didn't do threads either; they did cooperative multitasking. This depended on every process in the whole machine calling an OS routine at least now and then. Each OS routine would first consult the 'scheduler' to see if anything else was waiting to run before getting on with whatever it was supposed to be doing on behalf of the program. There were many terrible programs back then that wouldn't play ball and hogged the entire machine. You couldn't get on with playing a game of Solitaire while something else embarked on a lengthy calculation.
What's the mental model of your program?
IF it depends on multiple external inputs that can happen in unpredictable orders, and if what you want to do in response to those inputs is not simple and can overlap in time ...
THEN it makes sense to devote a separate thread to each input request, and have that thread perform the response needed by that request.
So, for example, if your program is waiting for input requests from an external channel, and each request must trigger its own protocol of outgoing and incoming messages, it can very much simplify the code to create a new thread (or re-use an old one) for each request.
Somehow people seem to enter the workforce thinking that threads are only there for speed (through parallelism).
That's one use, provided it allows multiple CPU chips to get cranking,
but it is by no means the only use.
In PHP-based (or Java/ASP.NET/Ruby-based) web servers, every client request is handled on a new thread. But in Node.js all the clients run on the same thread (they can even share the same variables!). I understand that I/O operations are event-based, so they don't block the main thread loop.
What I don't understand is WHY the author of Node chose it to be single-threaded. It makes things difficult. For example, I can't run a CPU-intensive function because it blocks the main thread (and new client requests are blocked), so I need to spawn a process (which means I need to create a separate JavaScript file and execute another Node process on it). However, in PHP, CPU-intensive tasks do not block other clients because, as I mentioned, each client is on a different thread. What are its advantages compared to multi-threaded web servers?
Note: I've used clustering to get around this, but it's not pretty.
Node.js was created explicitly as an experiment in async processing. The theory was that doing async processing on a single thread could provide more performance and scalability under typical web loads than the typical thread-based implementation.
And you know what? In my opinion that theory's been borne out. A node.js app that isn't doing CPU intensive stuff can run thousands more concurrent connections than Apache or IIS or other thread-based servers.
The single threaded, async nature does make things complicated. But do you honestly think it's more complicated than threading? One race condition can ruin your entire month! Or empty out your thread pool due to some setting somewhere and watch your response time slow to a crawl! Not to mention deadlocks, priority inversions, and all the other gyrations that go with multithreading.
In the end, I don't think it's universally better or worse; it's different, and sometimes it's better and sometimes it's not. Use the right tool for the job.
The issue with the "one thread per request" model for a server is that it doesn't scale well in several scenarios compared to the event-loop model.
Typically, in I/O-intensive scenarios the requests spend most of their time waiting for I/O to complete. During this time, in the "one thread per request" model, the resources tied to the thread (such as memory) sit unused, and memory becomes the limiting factor. In the event-loop model, the loop thread always selects the next event (e.g. I/O finished) to handle, so the thread is always busy (if you program it correctly, of course).
The event-loop model, like all new things, seems shiny and the solution for all issues, but which model to use depends on the scenario you need to tackle. If you have an I/O-intensive scenario (like a proxy), the event-based model will rule, whereas a CPU-intensive scenario with a low number of concurrent processes works best with the thread-based model.
In the real world, most scenarios will be somewhere in the middle. You will need to balance the real need for scalability against the development complexity to find the correct architecture (e.g. have an event-based front end that delegates CPU-intensive tasks to a backend; the front end uses very few resources while waiting for the task result). As with any distributed system, it requires some effort to make it work.
If you are looking for the silver bullet that will fit with any scenario without any effort, you will end up with a bullet in your foot.
Long story short, Node builds on V8, which executes JavaScript on a single thread. There are ways to work around this constraint for CPU-intensive tasks.
At one point (0.7) the authors tried to introduce isolates as a way of implementing multiple threads of computation, but they were ultimately removed: https://groups.google.com/forum/#!msg/nodejs/zLzuo292hX0/F7gqfUiKi2sJ
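A minimal sketch of the workaround both the question and this answer allude to - forking a separate Node process for the CPU-heavy part so the main event loop keeps serving requests. The file name cpu-task.js, the /heavy route, and the message shape are made up for illustration.

main.js:

const { fork } = require('child_process');
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/heavy') {
    // Offload the CPU-bound work to a separate process instead of blocking this loop.
    const child = fork('cpu-task.js');
    child.send({ n: 1e9 });
    child.on('message', (result) => {
      res.end('result: ' + result + '\n');
    });
  } else {
    res.end('fast path\n'); // served immediately, never blocked by /heavy requests
  }
}).listen(3000);

cpu-task.js:

process.on('message', ({ n }) => {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;
  process.send(sum);
  process.exit(0); // done with this one-off task
});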
As a side project I'm currently writing a server for an age-old game I used to play. I'm trying to make the server as loosely coupled as possible, but I am wondering what would be a good design decision for multithreading. Currently I have the following sequence of actions:
Startup (creates) ->
Server (listens for clients, creates) ->
Client (listens for commands and sends period data)
I'm assuming an average of 100 clients, as that was the max at any given time for the game. What would be the right decision as for threading of the whole thing? My current setup is as follows:
1 thread on the server which listens for new connections, on new connection create a client object and start listening again.
Client object has one thread, listening for incoming commands and sending periodic data. This is done using a non-blocking socket, so it simply checks if there's data available, deals with that and then sends messages it has queued. Login is done before the send-receive cycle is started.
One thread (for now) for the game itself, as I consider that to be separate from the whole client-server part, architecturally speaking.
This would result in a total of 102 threads. I am even considering giving the client 2 threads, one for sending and one for receiving. If I do that, I can use blocking I/O on the receiver thread, which means that thread will be mostly idle in an average situation.
My main concern is that by using this many threads I'll be hogging resources. I'm not worried about race conditions or deadlocks, as that's something I'll have to deal with anyway.
My design is setup in such a way that I could use a single thread for all client communications, no matter if it's 1 or 100. I've separated the communications logic from the client object itself, so I could implement it without having to rewrite a lot of code.
The main question is: is it wrong to use over 200 threads in an application? Does it have advantages? I'm thinking about running this on a multi-core machine, would it take a lot of advantage of multiple cores like this?
Thanks!
Out of all these threads, most of them will usually be blocked. I don't expect connections to exceed 5 per minute. Commands from the client will come in infrequently, I'd say 20 per minute on average.
Going by the answers I've gotten here (context switching was the performance hit I hadn't considered until you pointed it out, thanks!), I think I'll go for the approach with one listener, one receiver, one sender, and some miscellaneous stuff ;-)
use an event stream/queue and a thread pool to maintain the balance; this will adapt better to other machines, which may have more or fewer cores
in general, many more active threads than you have cores will waste time context-switching
if your game consists of a lot of short actions, a circular/recycling event queue will give better performance than a fixed number of threads
To answer the question simply, it is entirely wrong to use 200 threads on today's hardware.
Each thread takes up 1 MB of memory for its default stack, so you're taking up 200MB of page file before you even start doing anything useful.
By all means break your operations up into little pieces that can be safely run on any thread, but put those operations on queues and have a fixed, limited number of worker threads servicing those queues (a rough sketch follows at the end of this answer).
Update: Does wasting 200MB matter? On a 32-bit machine, it's 10% of the entire theoretical address space for a process - no further questions. On a 64-bit machine, it sounds like a drop in the ocean of what could be theoretically available, but in practice it's still a very big chunk (or rather, a large number of pretty big chunks) of storage being pointlessly reserved by the application, and which then has to be managed by the OS. It has the effect of surrounding each client's valuable information with lots of worthless padding, which destroys locality, defeating the OS and CPU's attempts to keep frequently accessed stuff in the fastest layers of cache.
In any case, the memory wastage is just one part of the insanity. Unless you have 200 cores (and an OS capable of utilizing them), you don't really have 200 parallel threads. You have (say) 8 cores, each frantically switching between 25 threads. Naively you might think that, as a result, each thread experiences the equivalent of running on a core that is 25 times slower. But it's actually much worse than that - the OS spends more time taking one thread off a core and putting another one on it ("context switching") than it does actually allowing your code to run.
Just look at how any well-known successful design tackles this kind of problem. The CLR's thread pool (even if you're not using it) serves as a fine example. It starts off assuming just one thread per core will be sufficient. It allows more to be created, but only to ensure that badly designed parallel algorithms will eventually complete. It refuses to create more than 2 threads per second, so it effectively punishes thread-greedy algorithms by slowing them down.
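To make "a fixed, limited number of worker threads servicing those queues" concrete, here is a rough sketch using Node's worker_threads - the answer itself is describing the CLR thread pool, so treat this as an analogy, and the task payloads are invented:

const { Worker, isMainThread, parentPort } = require('worker_threads');
const os = require('os');

if (isMainThread) {
  // The work queue: small, self-contained operations rather than one thread per task.
  const queue = Array.from({ length: 20 }, (_, i) => (i + 1) * 1e7);
  const poolSize = os.cpus().length; // fixed, limited number of workers

  for (let w = 0; w < poolSize; w++) {
    const worker = new Worker(__filename);
    const dispatch = () => {
      const task = queue.shift();
      if (task === undefined) { worker.terminate(); return; } // queue drained
      worker.postMessage(task);
    };
    worker.on('message', (result) => { console.log('done:', result); dispatch(); });
    dispatch();
  }
} else {
  // Each worker services the queue one task at a time.
  parentPort.on('message', (n) => {
    let sum = 0;
    for (let i = 0; i < n; i++) sum += i;
    parentPort.postMessage(sum);
  });
}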
I write in .NET and I'm not sure if the way I code is due to .NET limitations and their API design or if this is a standard way of doing things, but this is how I've done this kind of thing in the past:
A queue object that will be used for processing incoming data. This should be sync locked between the queuing thread and worker thread to avoid race conditions.
A worker thread for processing data in the queue. The thread that queues up the data uses a semaphore to notify this thread to process items in the queue. This thread starts before any of the other threads and contains a continuous loop that runs until it receives a shutdown request. The first instruction in the loop checks a flag to pause/continue/terminate processing. The flag is initially set to pause, so the thread sits in an idle state (instead of looping continuously) while there is no processing to be done. The queuing thread changes the flag when there are items in the queue to be processed. The worker thread then processes a single item from the queue on each iteration of the loop. When the queue is empty, it sets the flag back to pause, so that on the next iteration of the loop it waits until the queuing process notifies it that there is more work to do.
One connection listener thread which listens for incoming connection requests and passes these off to...
A connection processing thread that creates the connection/session. Keeping this separate from your connection listener thread reduces the potential for missed connection requests, since the listener is never tied up processing a request.
An incoming data listener thread that listens for incoming data on the current connection. All data is passed off to a queuing thread to be queued up for processing. Your listener threads should do as little as possible outside of basic listening and passing the data off for processing.
A queuing thread that queues up the data in the right order so everything can be processed correctly; this thread raises the semaphore to the processing queue to let it know there's data to be processed. Keeping this thread separate from the incoming-data listener means you're less likely to miss incoming data.
Some session object which is passed between methods so that each user's session is self-contained throughout the threading model.
This keeps the threads down to as simple but as robust a model as I've figured out. I would love to find a simpler model than this, but I've found that if I try to reduce the threading model any further, I start missing data on the network stream or missing connection requests.
It also assists with TDD (Test Driven Development) in that each thread processes a single task, which is much easier to write tests for. Having hundreds of threads can quickly become a resource-allocation nightmare, while having a single thread becomes a maintenance nightmare.
It's far simpler to keep one thread per logical task the same way you would have one method per task in a TDD environment and you can logically separate what each should be doing. It's easier to spot potential problems and far easier to fix them.
What's your platform? If Windows then I'd suggest looking at async operations and thread pools (or I/O Completion Ports directly if you're working at the Win32 API level in C/C++).
The idea is that you have a small number of threads that deal with your I/O and this makes your system capable of scaling to large numbers of concurrent connections because there's no relationship between the number of connections and the number of threads used by the process that is serving them. As expected, .Net insulates you from the details and Win32 doesn't.
The challenge of using async I/O and this style of server is that the processing of client requests becomes a state machine on the server and the data arriving triggers changes of state. Sometimes this takes some getting used to but once you do it's really rather marvellous;)
I've got some free code that demonstrates various server designs in C++ using IOCP here.
If you're using unix or need to be cross platform and you're in C++ then you might want to look at boost ASIO which provides async I/O functionality.
I think the question you should be asking is not whether 200 threads in general is good or bad, but rather how many of those threads are going to be active.
If only several of them are active at any given moment, while all the others are sleeping or waiting or whatnot, then you're fine. Sleeping threads, in this context, cost you nothing.
However, if all of those 200 threads are active, your CPU is going to waste a great deal of time doing thread context switches between them.