In your work, what specifically have you used threads for?
(Please give a description of the application and how the thread helped/enhanced the application.)
Threads are critical for most UI work. Otherwise, any time you want to do a calculation or anything that takes a while, you will freeze the UI.
Therefore, most GUI frameworks have a UI thread that handles the event loop (and some drawing activities), while most user code runs in another thread.
Threads are also useful for occasionally checking things or making episodic changes to the state of the system.
(Less serious answer) I like to use threads in any situation where I want the system to fall on its arse in interesting and unobvious ways, while still having plausible deniability as to how I could have let the problem slip through.
Or, in the words of Rasmus Lerdorf, "People aren't smart enough to write thread-safe code".
Handling concurrent client requests in a server.
Threads are fundamental for most I/O-bound applications and for any reasonably complex server-side application. Consider an application that acts as an exchange for information with multiple sources of data. You need to be able to deal with this information in independent threads, in particular if operations on this data are subject to latencies or require a significant amount of time to complete.
Threads also often help to decouple various concerns within the application. A single thread dispatching events to interested parties will not scale well in the vast majority of cases.
All but the simplest of applications will require threading to some degree.
The most common use is a responsive UI, such as showing a progress bar for a long-running background task.
Background tasks:
Handling network connections and protocols.
Sound synthesis running in the background of a multimedia application.
File loading in the background in a multimedia application (CD streaming).
Other uses:
Accelerating certain algorithms by running two instances of the same code in two different threads.
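To make that last item concrete, here is a minimal sketch in Node.js (an assumption on my part - none of these answers name a language) that splits a summation across two worker threads using the worker_threads module; the data and the summation are just placeholders for a real algorithm:

    // sum-with-workers.js - run with: node sum-with-workers.js
    const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

    if (isMainThread) {
      const numbers = Array.from({ length: 1000000 }, (_, i) => i);
      const half = Math.floor(numbers.length / 2);

      // Each worker re-runs this same file with a different slice of the data.
      const sumInWorker = (slice) =>
        new Promise((resolve, reject) => {
          const worker = new Worker(__filename, { workerData: slice });
          worker.on('message', resolve); // the partial sum comes back as a message
          worker.on('error', reject);
        });

      Promise.all([sumInWorker(numbers.slice(0, half)), sumInWorker(numbers.slice(half))])
        .then(([a, b]) => console.log('total:', a + b));
    } else {
      // Worker side: compute the partial sum and post it back to the parent.
      parentPort.postMessage(workerData.reduce((acc, n) => acc + n, 0));
    }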
I know that most of the time I use threads, what I actually want to do is to launch some asynchronous lump of work - i.e. I want something to happen in the mythical "background". Unfortunately thinking about threads isn't really the correct level of abstraction for "launch some lump of work", because you're not putting something into the background. With a thread API you're creating another place to run stuff as a sibling of the process's original thread, and you need to worry about what information is shared between them, and how, and so on. That's why I'm enjoying newer APIs such as Cocoa's NSOperation and NSOperationQueue. With that API, launching some lump of work is just a single line, and the library takes care of whether a new thread should be launched or an old one reused.
To scan directories looking for changed files. It is a lot quicker to spawn a thread per subdirectory than to do it all in a single thread.
We have been using threads for several applications where the main screen consisted of a workflow tailored to the currently logged-on user.
Getting the workflow can be time intensive. Various parts of the workflow get loaded by different threads. For our main application BP/GeNA, about 11 threads are fired, each running a database query.
I most commonly use threads when I want to do something with a bunch of resources that I know will take a long time, when there's no interdependence between the work to process the elements, and particularly if the bottleneck isn't a local resource (like CPU or disk). For example, if I've got a bunch of URLs to retrieve, each of those will go into a separate thread.
This is a very general question. I've used threads to take potentially blocking work off of the UI thread, whether that work is local or network I/O, or computationally intensive tasks that will tend to "block" depending on the hardware they run on.
I think it is more interesting to ask about a particular problem or pattern and how threads help alleviate it, i.e.:
How are threads relevant to model-view-controller?
How or why should I take work off of a UI thread to ensure that the UI doesn't even think of blocking?
How can a thread pool be useful for recursive (network) directory traversal, as someone else alluded to?
Should I be affinitizing threads for cooperative scheduling of computationally intensive tasks, or should I be using a ThreadPool and let the OS preemptively schedule the threads as it sees fit?
It's a pretty broad space and more clarity would likely help.
I build web applications, so all code that I write is executed in multiple threads.
Our app is a web service, so we spawn off a thread per request. Technically, JNI spawns off the thread, but the code has to be thread-safe regardless. We've run into some interesting (FSVO) issues with both Hibernate and our ESB-based infrastructure, but for the most part keeping things in ThreadLocals and synchronizing on subsystem entry points has worked pretty well. We haven't tried more than a couple dozen simultaneous requests, so there may well be some race conditions we haven't identified yet, but overall we're performant and give the right answers.
I wrote a function that spawns a thread that beeps (from the speaker) at regular intervals to alert a test operator that something needs attention. After the modal dialog is responded to, the thread is killed.
Not employment related, but I'm doing some side work on the Netflix Prize. My computer has 8 cores and 20 GB of RAM ... running only 1 thread would be an utter waste, so I typically start 16 threads or so.
Given a single-core CPU, what is the benefit of coding with threads?
At least with the Java implementation (and it seems natural to extend this to any other language, given the single-core restriction), you may have several threads performing various actions, but only one runs at a time; they are time-sliced and switched between.
Given process A and process B:
What is the benefit of performing half of process A, finishing process B, and then finishing the second half of process A, versus performing process A and then B?
It seems that switching between the threads would introduce time delays that would prolong the overall completion time of both processes, versus not switching and just completing A and then B.
The reason to use threads on a single-core system is simply to allow processes that would otherwise use all the CPU to be preempted by other tasks that need to get done sooner. The most common reason to make a system multi-threaded is to have a responsive user interface even while performing long calculations.
Of course, any operation can take a long time (reading a file, accessing a database, resizing a photo, recalculating a spreadsheet), and those operations can be performed on a separate thread to allow the thread responding to user input to operate the whole time.
Twenty years ago, for example, it was rare to have a multi-CPU system or an OS that allowed multi-threading, so nearly every program was single-threaded and there were many frameworks created to allow systems to have UIs and still do I/O. The standard mechanism for this is an event loop, where all events (UI, network, timers, etc.) are processed in a big loop.
This type of system means that the UI is held up during things like file I/O and calculations. In order to not hold up the UI too much, you have to do the I/O in chunks (say, read the file 4k at a time), processing any incoming UI events between chunks. This is really just a hack to keep the system running, but it's hard to make the system run smoothly like this because you don't know how often you need to process events.
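As a rough sketch of that chunking hack (shown here with Node.js and setImmediate purely as an assumed environment, since the answer describes the pattern generically), the trick is that the read loop must hand control back to the event loop between chunks:

    // chunked-read.js - process a file 4k at a time, yielding between chunks so
    // that timers, UI events, etc. queued on the event loop still get a turn.
    const fs = require('fs');

    function doSomeWorkOn(chunk) { /* placeholder for the real calculation */ }

    function processInChunks(fd, buffer, onDone) {
      fs.read(fd, buffer, 0, buffer.length, null, (err, bytesRead) => {
        if (err || bytesRead === 0) return onDone(err);
        doSomeWorkOn(buffer.slice(0, bytesRead));
        setImmediate(() => processInChunks(fd, buffer, onDone)); // yield, then continue
      });
    }

    fs.open('input.dat', 'r', (err, fd) => {
      if (err) throw err;
      processInChunks(fd, Buffer.alloc(4096), () => fs.close(fd, () => {}));
    });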
The solution is to have a separate thread to recalculate your spreadsheet or write your file. That way the OS can give those threads fair timeslices while still preempting them to run the UI, allowing the UI to always be responsive.
An executing thread is not necessarily doing anything useful. The canonical example is reading from disk -- that data isn't going to be there for another few milliseconds, during which time the processor would be sitting unused. Threads allow one piece of the program to use the CPU while other pieces of the program are waiting for operations to complete.
There are many reasons. Wikipedia gives a decent overview on its page about threads.
Here are a few off the top of my head:
I/O bound tasks benefit from threading (especially network applications).
Hyperthreaded processors may speed up multithreaded applications even on a single core.
Threads can be instructed to wait (block) and wake up on specific events, enabling responsive event-driven programming.
If your program has to do several things "at the same time", then threads are a good way to go, particularly if some of those tasks are quite long-running. Otherwise you find yourself writing code that looks like an operating system scheduler inside your program, which is always a waste of time if the OS underneath you already has a perfectly good one. You'd find that your source code was mostly 'scheduler' and not much 'program', which is very inelegant. A good threaded program can be very elegant and economical in source code, which makes you look good and saves time.
Some run times get/got it wrong. In the early days of Ada the runtime environment would do its own thread scheduling, and it was never very satisfactory. That was partly due to the fact that whilst the Ada language spec included the concept of threads, the OSes we had back then quite often didn't provide them. Ada got a lot better when the compiler writers started using the underlying OS threads instead.
Similarly Python doesn't really properly use the underlying OS threads; it spoils it with the Global Interpreter Lock. Python has sidestepped the whole issue by going for multiprocessing instead (not necessarily a good thing on Windows hosts...).
Early versions of Windows didn't do threads either; they did cooperative multitasking. This depended on each process in the whole machine calling an OS routine at least every now and then. Each OS routine would first consult the 'scheduler' to see if anything else was waiting to run before getting on with whatever it was supposed to be doing on behalf of the program. There were many terrible programs back then that wouldn't play ball and hogged the entire machine. You couldn't get on with playing a game of Solitaire when something else embarked on a lengthy calculation.
What's the mental model of your program?
IF it depends on multiple external inputs that can happen in unpredictable orders, and if what you want to do in response to those inputs is not simple and can overlap in time ...
THEN it makes sense to devote a separate thread to each input request, and have that thread perform the response needed by that request.
So, for example, if your program is waiting for input requests from an external channel, and each request must trigger its own protocol of outgoing and incoming messages, it can very much simplify the code to create a new thread (or re-use an old one) for each request.
Somehow people seem to enter the workforce thinking that threads are only there for speed (through parallelism).
That's one use, provided it allows multiple CPU chips to get cranking, but it is by no means the only use.
I stumbled over node.js some time ago and like it a lot. But soon I found out that it badly lacked the ability to perform CPU-intensive tasks. So, I started googling and got these answers to solve the problem: Fibers, Webworkers and Threads (thread-a-gogo). Now I'm confused about which one to use, and one of them definitely needs to be used - after all, what's the purpose of having a server which is just good at IO and nothing else? Suggestions needed!
UPDATE:
I've been thinking about an approach lately and just need suggestions on it. What I thought of was this: let's have some threads (using thread_a_gogo or maybe webworkers). Now, when we need more of them, we can create more, but there will be some limit on the creation process (not imposed by the system, but probably because of overhead). Now, when we exceed the limit, we can fork a new Node process and start creating threads on it. This way, it can go on till we reach some limit (after all, processes too have a big overhead). When this limit is reached, we start queuing tasks. Whenever a thread becomes free, it will be assigned a new task. This way, it can go on smoothly.
So, that was what I thought of. Is this idea good? I am a bit new to all this process and threads stuff, so don't have any expertise in it. Please share your opinions.
Thanks. :)
Node has a completely different paradigm and once it is correctly captured, it is easier to see this different way of solving problems. You never need multiple threads in a Node application(1) because you have a different way of doing the same thing. You create multiple processes; but it is very different from, for example, how Apache Web Server's prefork MPM works.
For now, let's think that we have just one CPU core and we will develop an application (in Node's way) to do some work. Our job is to process a big file running over its contents byte-by-byte. The best way for our software is to start the work from the beginning of the file, follow it byte-by-byte to the end.
-- Hey, Hasan, I suppose you are either a newbie or very old school from my Grandfather's time!!! Why don't you create some threads and make it much faster?
-- Oh, we have only one CPU core.
-- So what? Create some threads man, make it faster!
-- It does not work like that. If I create threads I will be making it slower. Because I will be adding a lot of overhead to the system for switching between threads, trying to give them a fair amount of time, and inside my process, trying to communicate between these threads. In addition to all these facts, I will also have to think about how I will divide a single job into multiple pieces that can be done in parallel.
-- Okay okay, I see you are poor. Let's use my computer, it has 32 cores!
-- Wow, you are awesome my dear friend, thank you very much. I appreciate it!
Then we turn back to work. Now we have 32 cpu cores thanks to our rich friend. Rules we have to abide have just changed. Now we want to utilize all this wealth we are given.
To use multiple cores, we need to find a way to divide our work into pieces that we can handle in parallel. If it was not Node, we would use threads for this; 32 threads, one for each cpu core. However, since we have Node, we will create 32 Node processes.
Threads can be a good alternative to Node processes, maybe even a better way; but only in a specific kind of job where the work is already defined and we have complete control over how to handle it. Other than this, for every other kind of problem where the job comes from outside in a way we do not have control over and we want to answer as quickly as possible, Node's way is unarguably superior.
-- Hey, Hasan, are you still working single-threaded? What is wrong with you, man? I have just provided you what you wanted. You have no excuses anymore. Create threads, make it run faster.
-- I have divided the work into pieces and every process will work on one of these pieces in parallel.
-- Why don't you create threads?
-- Sorry, I don't think it is usable. You can take your computer if you want?
-- No okay, I am cool, I just don't understand why you don't use threads?
-- Thank you for the computer. :) I already divided the work into pieces and I create processes to work on these pieces in parallel. All the CPU cores will be fully utilized. I could do this with threads instead of processes; but Node has this way and my boss Parth Thakkar wants me to use Node.
-- Okay, let me know if you need another computer. :p
If I create 33 processes, instead of 32, the operating system's scheduler will be pausing a thread, start the other one, pause it after some cycles, start the other one again... This is unnecessary overhead. I do not want it. In fact, on a system with 32 cores, I wouldn't even want to create exactly 32 processes, 31 can be nicer. Because it is not just my application that will work on this system. Leaving a little room for other things can be good, especially if we have 32 rooms.
I believe we are on the same page now about fully utilizing processors for CPU-intensive tasks.
-- Hmm, Hasan, I am sorry for mocking you a little. I believe I understand you better now. But there is still something I need an explanation for: What is all the buzz about running hundreds of threads? I read everywhere that threads are much faster and cheaper to create than forked processes. You fork processes instead of threads and you think it is the best you can get with Node. Then is Node not appropriate for this kind of work?
-- No worries, I am cool, too. Everybody says these things so I think I am used to hearing them.
-- So? Node is not good for this?
-- Node is perfectly good for this even though threads can be good too. As for thread/process creation overhead; on things that you repeat a lot, every millisecond counts. However, I create only 32 processes and it will take a tiny amount of time. It will happen only once. It will not make any difference.
-- When do I want to create thousands of threads, then?
-- You never want to create thousands of threads. However, on a system that is doing work that comes from outside, like a web server processing HTTP requests, if you use a thread for each request you will end up creating a lot of threads.
-- Node is different, though? Right?
-- Yes, exactly. This is where Node really shines. Just as a thread is much lighter than a process, a function call is much lighter than a thread. Node calls functions instead of creating threads. In the example of a web server, every incoming request causes a function call.
-- Hmm, interesting; but you can only run one function at the same time if you are not using multiple threads. How can this work when a lot of requests arrive at the web server at the same time?
-- You are perfectly right about how functions run, one at a time, never two in parallel. I mean in a single process, only one scope of code is running at a time. The OS Scheduler does not come and pause this function and switch to another one, unless it pauses the process to give time to another process, not another thread in our process. (2)
-- Then how can a process handle 2 requests at a time?
-- A process can handle tens of thousands of requests at a time as long as our system has enough resources (RAM, Network, etc.). How those functions run is THE KEY DIFFERENCE.
-- Hmm, should I be excited now?
-- Maybe :) Node runs a loop over a queue. In this queue are our jobs, i.e, the calls we started to process incoming requests. The most important point here is the way we design our functions to run. Instead of starting to process a request and making the caller wait until we finish the job, we quickly end our function after doing an acceptable amount of work. When we come to a point where we need to wait for another component to do some work and return us a value, instead of waiting for that, we simply finish our function adding the rest of work to the queue.
-- It sounds too complex?
-- No no, it might sound complex, but the system itself is very simple and it makes perfect sense.
Now I want to stop citing the dialogue between these two developers and finish my answer after a last quick example of how these functions work.
In this way, we are doing what the OS scheduler would normally do. We pause our work at some point and let other function calls (like other threads in a multi-threaded environment) run until we get our turn again. This is much better than leaving the work to the OS scheduler, which tries to give fair time to every thread on the system. We know what we are doing much better than the OS scheduler does, and we are expected to stop when we should stop.
Below is a simple example where we open a file and read it to do some work on the data.
Synchronous Way:

    Open File
    Repeat This:
        Read Some
        Do the work

Asynchronous Way:

    Open File and Do this when it is ready: // Our function returns
    Repeat this:
        Read Some and when it is ready: // Returns again
            Do some work
As you can see, our function asks the system to open a file and does not wait for it to be opened. It finishes itself, providing the next steps for when the file is ready. When we return, Node runs the other function calls in the queue. After running over all the functions, the event loop moves to the next turn...
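A minimal concrete version of that asynchronous pseudocode, using Node's fs module (the per-chunk handler is a hypothetical stand-in for the real work):

    const fs = require('fs');

    function doSomeWork(chunk) { /* placeholder for the real processing */ }

    // "Open File and Do this when it is ready" - this call returns immediately.
    const stream = fs.createReadStream('big-file.dat');

    // "Read Some and when it is ready: Do some work" - each chunk arrives as an event.
    stream.on('data', (chunk) => doSomeWork(chunk));

    stream.on('end', () => console.log('done')); // the event loop simply moves on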
In summary, Node has a completely different paradigm than multi-threaded development; but this does not mean that it lacks things. For a synchronous job (where we can decide the order and way of processing), it works as well as multi-threaded parallelism. For a job that comes from outside like requests to a server, it simply is superior.
(1) Unless you are building libraries in other languages like C/C++, in which case you still do not create threads for dividing jobs. For this kind of work you have two threads, one of which continues communication with Node while the other does the real work.
(2) In fact, every Node process has multiple threads for the same reasons I mentioned in the first footnote. However, this is in no way like 1000 threads doing similar work. Those extra threads are for things like accepting IO events and handling inter-process messaging.
UPDATE (As reply to a good question in comments)
@Mark, thank you for the constructive criticism. In Node's paradigm, you should never have functions that take too long to process unless all other calls in the queue are designed to be run one after another. In the case of computationally expensive tasks, if we look at the complete picture, we see that this is not a question of "Should we use threads or processes?" but a question of "How can we divide these tasks in a well-balanced manner into sub-tasks that we can run in parallel, employing multiple CPU cores on the system?" Let's say we will process 400 video files on a system with 8 cores. If we want to process one file at a time, then we need a system that will process different parts of the same file, in which case, maybe, a multi-threaded single-process system will be easier to build and even more efficient. We can still use Node for this by running multiple processes and passing messages between them when state-sharing/communication is necessary. As I said before, a multi-process approach with Node works as well as a multi-threaded approach for this kind of task, but not better than that. Again, as I said before, the situation where Node shines is when these tasks come as input to the system from multiple sources, since keeping many connections open concurrently is much lighter in Node compared to a thread-per-connection or process-per-connection system.
As for setTimeout(...,0) calls: sometimes giving a break during a time-consuming task to allow calls in the queue to have their share of processing can be required. Dividing tasks in different ways can save you from these; but still, this is not really a hack, it is just the way event queues work. Also, using process.nextTick for this aim is much better, since when you use setTimeout, calculating and checking the time passed will be necessary, while process.nextTick is simply what we really want: "Hey task, go back to the end of the queue, you have used your share!"
(Update 2016: Web workers are going into Node.js v7 - see below.)
(Update 2017: Web workers are not going into Node.js v7 or v8 - see below.)
(Update 2018: Web workers are going into Node.js v10.5.0 - see below.)
Some clarification
Having read the answers above, I would like to point out that there is nothing in web workers that is against the philosophy of JavaScript in general and Node in particular regarding concurrency. (If there were, it wouldn't even be discussed by the WHATWG, much less implemented in the browsers.)
You can think of a web worker as a lightweight microservice that is accessed asynchronously. No state is shared. No locking problems exist. There is no blocking. There is no synchronization needed. Just like when you use a RESTful service from your Node program you don't worry that it is now "multithreaded" because the RESTful service is not in the same thread as your own event loop. It's just a separate service that you access asynchronously and that is what matters.
The same is true with web workers. It's just an API to communicate with code that runs in a completely separate context, and whether it is in a different thread, a different process, a different cgroup, zone, container or a different machine is completely irrelevant, because of a strictly asynchronous, non-blocking API, with all data passed by value.
As a matter of fact, web workers are conceptually a perfect fit for Node, which - as many people are not aware - incidentally uses threads quite heavily; in fact "everything runs in parallel except your code" - see:
Understanding the node.js event loop by Mikito Takada
Understanding node.js by Felix Geisendörfer
Understanding the Node.js Event Loop by Trevor Norris
Node.js itself is blocking, only its I/O is non-blocking by Jeremy Epstein
But the web workers don't even need to be implemented using threads. You could use processes, green threads, or even RESTful services in the cloud - as long as the web worker API is used. The whole beauty of the message passing API with call by value semantics is that the underlying implementation is pretty much irrelevant, as the details of the concurrency model will not get exposed.
A single-threaded event loop is perfect for I/O-bound operations. It doesn't work that well for CPU-bound operations, especially long running ones. For that we need to spawn more processes or use threads. Managing child processes and the inter-process communication in a portable way can be quite difficult and it is often seen as an overkill for simple tasks, while using threads means dealing with locks and synchronization issues that are very difficult to do right.
What is often recommended is to divide long-running CPU-bound operations into smaller tasks (something like the example in the "Original answer" section of my answer to Speed up setInterval) but it is not always practical and it doesn't use more than one CPU core.
I'm writing this to clarify the comments that were basically saying that web workers were created for browsers, not servers (forgetting that the same can be said about pretty much everything in JavaScript).
Node modules
There are a few modules that are supposed to add Web Workers to Node:
https://github.com/pgriess/node-webworker
https://github.com/audreyt/node-webworker-threads
I haven't used either of them, but I have two quick observations that may be relevant: as of March 2015, node-webworker was last updated 4 years ago and node-webworker-threads was last updated a month ago. Also, I see in the node-webworker-threads usage example that you can use a function instead of a file name as an argument to the Worker constructor, which seems like it may cause subtle problems if it is implemented using threads that share memory (unless the function is used only for its .toString() method and is otherwise compiled in a different environment, in which case it may be fine - I have to look more deeply into it; I'm just sharing my observations here).
If there is any other relevant project that implements web workers API in Node, please leave a comment.
Update 1
I didn't know it at the time of writing, but incidentally, one day before I wrote this answer Web Workers were added to io.js.
(io.js is a fork of Node.js - see: Why io.js decided to fork Node.js, an InfoWorld interview with Mikeal Rogers, for more info.)
Not only does it prove the point that there is nothing in web workers that is against the philosophy of JavaScript in general and Node in particular regarding concurrency, but it may result in web workers being a first class citizen in server-side JavaScript like io.js (and possibly Node.js in the future) just as it already is in client-side JavaScript in all modern browsers.
Update 2
In Update 1 and my tweet I was referring to io.js pull request #1159, which now redirects to Node PR #1159, which was closed on Jul 8 and replaced with Node PR #2133 - which is still open.
There is some discussion taking place under those pull requests that may provide some more up to date info on the status of Web workers in io.js/Node.js.
Update 3
Latest info - thanks to NiCk Newman for posting it in the comments: there is the workers: initial implementation commit by Petka Antonov from Sep 6, 2015 that can be downloaded and tried out in this tree. See the comments by NiCk Newman for details.
Update 4
As of May 2016 the last comments on the still open PR #2133 - workers: initial implementation were 3 months old. On May 30 Matheus Moreira asked me to post an update to this answer in the comments below and he asked for the current status of this feature in the PR comments.
The first answers in the PR discussion were skeptical but later
Ben Noordhuis wrote that "Getting this merged in one shape or another is on my todo list for v7".
All other comments seemed to second that, and as of July 2016 it seems that Web Workers should be available in the next version of Node, version 7.0, which is planned to be released in October 2016 (not necessarily in the form of this exact PR).
Thanks to Matheus Moreira for pointing it out in the comments and reviving the discussion on GitHub.
Update 5
As of July 2016 there are a few modules on npm that were not available before - for a complete list of relevant modules, search npm for workers, web workers, etc. If anything in particular does or doesn't work for you, please post a comment.
Update 6
As of January 2017 it is unlikely that web workers will get merged into Node.js.
The pull request #2133 workers: initial implementation by Petka Antonov from July 8, 2015 was finally closed by Ben Noordhuis on December 11, 2016 who commented that "multi-threading support adds too many new failure modes for not enough benefit" and "we can also accomplish that using more traditional means like shared memory and more efficient serialization."
For more information see the comments to the PR 2133 on GitHub.
Thanks again to Matheus Moreira for pointing it out in the comments.
Update 7
I'm happy to announce that a few days ago, in June 2018, web workers appeared in Node v10.5.0 as an experimental feature activated with the --experimental-worker flag.
For more info, see:
Node v10.5.0 release blog post
Pull Request #20876 - worker: initial implementation by Anna Henningsen
My original tweet of happiness when I learned that this got into v10.5.0:
Finally! I can make the 7th update to my 3 year old Stack Overflow answer where I argue that threading a la web workers is not against Node philosophy, only this time saying that we finally got it!
I come from the old school of thought where we used multi-threading to make software fast. For the past 3 years I have been using Node.js and am a big supporter of it. hasanyasin has explained in detail how Node works and the concept of asynchronous functionality. But let me add a few things here.
Back in the old days of single cores and lower clock speeds, we tried various ways to make software work fast and in parallel. In DOS days we used to run one program at a time. Then in Windows we started running multiple applications (processes) together. Concepts like preemptive and non-preemptive (or cooperative) multitasking were tested. We now know that preemptive was the answer for better multi-processing on single-core computers. Along came the concepts of processes/tasks and context switching. Then came the concept of the thread to further reduce the burden of process context switching. Threads were coined as a lightweight alternative to spawning new processes.
So, like it or not, single thread or not, multi-core or single-core, your processes will be preempted and time-sliced by the OS.
Node.js is a single process and provides an async mechanism. Here, jobs are dispatched to the underlying OS to perform tasks while we wait in an event loop for the task to finish. Once we get a green signal from the OS, we perform whatever we need to do. Now, in a way this is cooperative/non-preemptive multitasking, so we should never block the event loop for a very long period of time, otherwise we will degrade our application very fast.
So if there is ever a task that is blocking in nature or is very time consuming, we will have to branch it out to the preemptive world of the OS and threads.
There are good examples of this in the libuv documentation. Also, if you read the documentation further, you will find that file I/O is handled in threads in Node.js.
So firstly, it's all in the design of our software. Secondly, context switching is always happening no matter what they tell you. Threads are there, and are still there for a reason: they are faster to switch between than processes.
Under the hood in Node.js it's all C++ and threads. And Node provides a C++ way to extend its functionality and to further speed things up by using threads where they are a must, i.e. blocking tasks such as reading from a source, writing to a source, large data analysis and so on.
I know hasanyasin's answer is the accepted one, but for me threads will exist no matter what you say or how you hide them behind scripts. Secondly, nobody breaks things into threads just for speed; it is mostly done for blocking tasks. And threads are in the backbone of Node.js, so completely bashing multi-threading is incorrect. Also, threads are different from processes, and the limitation of one Node process per core doesn't exactly apply to the number of threads; threads are like sub-tasks of a process. In fact, threads won't show up in your Windows task manager or Linux top command. Once again, they are more lightweight than processes.
I'm not sure webworkers are relevant in this case; they are client-side tech (they run in the browser), while node.js runs on the server. Fibers, as far as I understand, are also blocking, i.e. they are voluntary multitasking, so you could use them, but you would have to manage context switches yourself via yield. Threads might actually be what you need, but I don't know how mature they are in node.js.
worker_threads has been implemented and shipped behind a flag in Node 10.5.0. It's still an initial implementation, and more effort is needed to make it more efficient in future releases. It's worth giving it a try in the latest Node.
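For reference, a minimal sketch of the worker_threads API as it shipped behind the flag (run with node --experimental-worker on v10.5.x; later versions need no flag). The worker's source is passed as a string with eval: true only to keep the sketch in one file; normally you would point Worker at a separate .js file:

    const { Worker } = require('worker_threads');

    // A deliberately CPU-heavy function, run off the main thread so the
    // event loop stays free to serve other work in the meantime.
    const workerSource = `
      const { parentPort, workerData } = require('worker_threads');
      const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
      parentPort.postMessage(fib(workerData));
    `;

    const worker = new Worker(workerSource, { eval: true, workerData: 40 });
    worker.on('message', (result) => console.log('fib(40) =', result));
    worker.on('error', (err) => console.error(err));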
In many Node developers' opinions one of the best parts of Node is actually its single-threaded nature. Threads introduce a whole slew of difficulties with shared resources that Node completely avoids by doing nothing but non-blocking IO.
That's not to say that Node is limited to a single thread. It's just that the method for getting threaded concurrency is different from what you're looking for. The standard way to deal with threads is with the cluster module that comes standard with Node itself. It's a simpler approach to threads than manually dealing with them in your code.
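As a rough sketch of the cluster approach (one worker process per core, all accepting connections on the same port; the port number is arbitrary):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker process per CPU core; the master only supervises.
      os.cpus().forEach(() => cluster.fork());
      cluster.on('exit', () => cluster.fork()); // replace crashed workers
    } else {
      // Each worker runs its own single-threaded event loop on the shared port.
      http.createServer((req, res) => res.end(`handled by pid ${process.pid}\n`))
          .listen(8000);
    }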
For dealing with asynchronous programming in your code (as in, avoiding nested callback pyramids), the Future component in the Fibers library is a decent choice. I would also suggest you check out Asyncblock, which is based on Fibers. Fibers are nice because they allow you to hide callbacks by duplicating the stack and then jumping between stacks on a single thread as they're needed. It saves you the hassle of real threads while giving you the benefits. The downside is that stack traces can get a bit weird when using Fibers, but they aren't too bad.
If you don't need to worry about async stuff and are more just interested in doing a lot of processing without blocking, a simple call to process.nextTick(callback) every once in a while is all you need.
Maybe some more information on what tasks you are performing would help. Why would you (as you mentioned in your comment to genericdave's answer) need to create many thousands of them? The usual way of doing this sort of thing in Node is to start up a worker process (using fork or some other method) which always runs and can be communicated with using messages. In other words, don't start up a new worker each time you need to perform whatever task it is you're doing; simply send a message to the already-running worker and get a response when it's done. Honestly, I can't see that starting up many thousands of actual threads would be very efficient either; you are still limited by your CPUs.
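A minimal sketch of that long-lived worker pattern (the file names and message shape are made up for illustration):

    // parent.js - start one long-lived worker and send it tasks as messages.
    const { fork } = require('child_process');

    const worker = fork('./worker.js'); // started once, reused for every task
    worker.on('message', (result) => console.log('result:', result));

    function runTask(payload) {
      worker.send(payload); // no new process per task, just a message
    }

    runTask({ n: 40 });

    // worker.js would look something like:
    // process.on('message', (payload) => {
    //   const result = doExpensiveWork(payload); // hypothetical heavy function
    //   process.send(result);
    // });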
Now, after saying all of that, I have been doing a lot of work with Hook.io lately which seems to work very well for this sort of off-loading tasks into other processes, maybe it can accomplish what you need.
Separating different parts of a program into different processes seems (to me) to make a more elegant program than just threading everything. In what scenario would it make sense to make things run on a thread vs. separating the program into different processes? When should I use a thread?
Edit
Anything on how (or if) they act differently with single-core and multi-core would also be helpful.
You'd prefer multiple threads over multiple processes for two reasons:
Inter-thread communication (sharing data etc.) is significantly simpler to program than inter-process communication.
Context switches between threads are faster than between processes. That is, it's quicker for the OS to stop one thread and start running another than do the same with two processes.
Example:
Applications with GUIs typically use one thread for the GUI and others for background computation. The spellchecker in MS Office, for example, is a separate thread from the one running the Office user interface. In such applications, using multiple processes instead would result in slower performance and code that's tough to write and maintain.
Well, apart from the advantages of using threads over processes, such as:
Advantages:
It is much quicker to create a thread than a process.
It is much quicker to switch between threads than to switch between processes.
Threads share data easily.
Consider a few disadvantages too:
There is no security between threads.
One thread can stomp on another thread's data.
If one thread blocks, all threads in the task block.
As to the important part of your question "When should I use a thread?"
Well, you should consider the fact that threads should not alter the semantics of a program. They simply change the timing of operations. As a result, they are almost always used as an elegant solution to performance-related problems. Here are some examples of situations where you might use threads:
Doing lengthy processing: When a Windows application is calculating, it cannot process any more messages. As a result, the display cannot be updated.
Doing background processing: Some tasks may not be time-critical, but need to execute continuously.
Doing I/O work: I/O to disk or to the network can have unpredictable delays. Threads allow you to ensure that I/O latency does not delay unrelated parts of your application.
I assume you already know you need a thread or a process, so I'd say the main reason to pick one over the other would be data sharing.
Use of a process means you also need Inter Process Communication (IPC) to get data in and out of the process. This is a good thing if the process is to be isolated though.
You sure don't sound like a newbie. It's an excellent observation that processes are, in many ways, more elegant. Threads are basically an optimization to avoid too many transitions or too much communication between memory spaces.
Superficially, using threads may also seem to make your program easier to read and write, because you can share variables and memory between the threads freely. In practice, doing that requires very careful attention to avoid race conditions and deadlocks.
There are operating-system kernels (most notably L4) that try very hard to improve the efficiency of inter-process communication. For such systems one could probably make a convincing argument that threads are pointless.
I would like to answer this in a different way. "It depends on your application's working scenario and performance SLA" would be my answer.
For instance, threads may share the same address space, and communication between threads may be faster and easier, but it is also possible that under certain conditions threads deadlock - and then what do you think would happen to your process?
Even if you are a programming whiz and have used all the fancy thread synchronization mechanisms to prevent deadlocks, such guarantees really only hold when a deterministic model is followed. That may be the case with hard real-time systems running on real-time OSes, where you have a certain degree of control over thread priorities and can expect the OS to respect those priorities, but it is usually not the case with general-purpose OSes like Windows.
From a design perspective, too, you might want to isolate your functionality into independent, self-contained modules that do not really need to share the same address space or memory, or even talk to each other. This is a case where processes make sense.
Take the case of Google Chrome where multiple processes are spawned as opposed to most browsers which use a multi-threaded model.
Each tab in Chrome can be talking to a different server and rendering a different website. Imagine what would happen if one website stopped responding and if you had a thread stalled due to this, the entire browser would either slow down or come to a stop.
So Google decided to spawn multiple processes and that is why even if one tab freezes you can still continue using other tabs of your Chrome browser.
Read more about it here, and also look here.
I agree with most of the answers above. But speaking from a design perspective, I would rather go for a thread when I want a set of logically co-related operations to be carried out in parallel. For example, if you run a word processor there will be one thread running in the foreground as the editor, and another thread running in the background auto-saving the document at regular intervals; no one would design a separate process to do that auto-saving task.
In addition to the other answers, maintaining and deploying a single process is a lot simpler than having a few executables.
One would use multiple processes/executables to provide a well-defined interface/decoupling so that one part or the other can be reused or reimplemented more easily than keeping all the functionality in one process.
Came across this post. Interesting discussion, but I felt one point was missing or only indirectly made.
Creating a new process is costly because of all of the data structures that must be allocated and initialized. The process is subdivided into different threads of control to achieve multithreading inside the process.
Using a thread or a process to achieve the target is based on your program usage requirements and resource utilization.
I was recently working on an application that sent and received messages over Ethernet and Serial. I was then tasked with adding the monitoring of DIO discretes. I thought,
"No reason to interrupt the main thread, which is involved in message processing; I'll just create another thread that monitors DIO."
This decision, however, proved to be poor. Sometimes the main thread would be interrupted between a Send and a Receive serial message. This interruption would disrupt the timing and alas, messages would be lost (forever).
I found another way to monitor the DIO without using another thread and Ethernet and Serial communication were restored to their correct functionality.
The whole fiasco, however, got me thinking. Are there any general guidelines about when not to use multiple threads, and/or does anyone have any more examples of situations when using multiple threads is not a good idea?
EDIT: Based on your comments and after scouring the internet for information, I have composed a blog post entitled When is multi-threading not a good idea?
On a single-processor machine and a desktop application, you use multiple threads so you don't freeze the app, but not for much else.
On a single-processor server and a web-based app, there is no need for multithreading because the web server handles most of it.
On a multi-processor machine and a desktop app, you are advised to use multiple threads and parallel programming. Make as many threads as there are processors.
On a multi-processor server and a web-based app, there is again no need for multithreading because the web server handles it.
In short, if you use multiple threads for anything other than un-freezing desktop apps and the other cases above, you will make the app slower on a single-core machine due to the threads interrupting each other.
Why? Because of the hardware switches. It takes time for the hardware to switch between threads. On a multi-core box, go ahead and use one thread for each core and you will see a great ramp-up.
To paraphrase an old quote: A programmer had a problem. He thought, "I know, I'll use threads." Now the programmer has two problems. (Often attributed to JWZ, but it seems to predate his use of it talking about regexes.)
A good rule of thumb is "Don't use threads, unless there's a very compelling reason to use threads." Multiple threads are asking for trouble. Try to find a good way to solve the problem without using multiple threads, and only fall back to using threads if avoiding it is as much trouble as the extra effort to use threads. Also, consider switching to multiple threads if you're running on a multi-core/multi-CPU machine, and performance testing of the single threaded version shows that you need the performance of the extra cores.
Multi-threading is a bad idea if:
Several threads access and update the same resource (set a variable, write to a file), and you don't understand thread safety.
Several threads interact with each other and you don't understand mutexes and similar thread-management tools.
Your program uses static variables (threads typically share them by default).
You haven't debugged concurrency issues.
Actually, multithreading is not scalable and is hard to debug, so it should not be used in any case where you can avoid it. There are a few cases where it is mandatory: when performance on a multi-CPU system matters, or when you deal with a server that has a lot of clients taking a long time to answer.
In any other case, you can use alternatives such as queues + cron jobs.
You might want to take a look at Dan Kegel's "The C10K problem" web page about handling multiple data sources/sinks.
Basically it is best to use minimal threads, which with sockets can be done in most OSes with some event system (or asynchronously in Windows using IOCP).
When you run into the case where the OS and/or libraries do not offer a way to perform communication in a non-blocking manner, it is best to use a thread-pool to handle them while reporting back to the same event loop.
Example diagram of layout:
    Per CPU [*] EVENTLOOP ------ Handles nonblocking I/O using OS/library utilities
                   |
                   \___ Threadpool for various blocking events
                   \___ Threadpool for handling the I/O messages that would take long
Multithreading is bad except in the single case where it is good. This case is
The work is CPU-bound, or parts of it are CPU-bound.
The work is parallelisable.
If either or both of these conditions are missing, multithreading is not going to be a winning strategy.
If the work is not CPU-bound, then you are not waiting on threads to finish work, but rather for some external event, such as network activity, for the process to complete its work. Using threads, there is the additional cost of context switches between threads, the cost of synchronization (mutexes, etc.), and the irregularity of thread preemption. The alternative in most common use is asynchronous IO, in which a single thread listens to several IO ports and acts on whichever happens to be ready now, one at a time. If by some chance these slow channels all happen to become ready at the same time, it might seem like you will experience a slow-down, but in practice this is rarely true. The cost of handling each port individually is often comparable to or better than the cost of synchronizing state across multiple threads as each channel is emptied.
Many tasks may be compute bound, but still not practical to use a multithreaded approach because the process must synchronise on the entire state. Such a program cannot benefit from multithreading because no work can be performed concurrently. Fortunately, most programs that require enormous amounts of CPU can be parallelized to some level.
Multi-threading is not a good idea if you need to guarantee precise physical timing (like in your example). Other cons include intensive data exchange between threads. I would say multi-threading is good for really parallel tasks if you don't care much about their relative speed/priority/timing.
A recent application I wrote that had to use multithreading (although not an unbounded number of threads) was one where I had to communicate in several directions over two protocols, plus monitor a third resource for changes. Both protocol libraries required a thread to run their respective event loops, and when those were accounted for, it was easy to create a third loop for the resource monitoring. In addition to the event loop requirements, the messages going through the wires had strict timing requirements, and one loop couldn't risk blocking the other, something that was further alleviated by using a multicore CPU (SPARC).
There were further discussions on whether each message processing should be considered a job that was given to a thread from a thread pool, but in the end that was an extension that wasn't worth the work.
All in all, threads should, if possible, only be considered when you can partition the work into well-defined jobs (or series of jobs) such that the semantics are relatively easy to document and implement, and you can put an upper bound on the number of threads you use and on how they need to interact. Systems where this is best applied are almost message-passing systems.
In principle, every time there is no overhead for the caller to wait in a queue.
A couple more possible reasons to use threads:
Your platform lacks asynchronous I/O operations, e.g. Windows ME (no completion ports or overlapped I/O - a pain when porting XP applications that use them) or Java 1.3 and earlier.
A third-party library function that can hang, e.g. if a remote server is down, and the library provides no way to cancel the operation and you can't modify it.
Keeping a GUI responsive during intensive processing doesn't always require additional threads. A single callback function is usually sufficient.
If none of the above apply and I still want parallelism for some reason, I prefer to launch an independent process if possible.
I would say multi-threading is generally used to:
Allow data processing in the background while a GUI remains responsive
Split very big data analysis onto multiple processing units so that you can get your results quicker.
When you're receiving data from some hardware and need something to continuously add it to a buffer while some other element decides what to do with it (write to disk, display on a GUI etc.).
So if you're not solving one of those issues, it's unlikely that adding threads will make your life easier. In fact it'll almost certainly make it harder because, as others have mentioned, debugging multithreaded applications is considerably more work than a single-threaded solution.
Security might be a reason to avoid using multiple threads (over multiple processes). See Google chrome for an example of multi-process safety features.
Multi-threading is scalable, and will allow your UI to maintain its responsiveness while doing very complicated things in the background. I don't understand where other responses are getting their information on multi-threading.
"When you shouldn't multi-thread" is a misleading framing of your problem. Your problem is this: Why did multi-threading my application cause serial/ethernet communications to fail?
The answer to that question will depend on the implementation, which should be discussed in another question. I know for a fact that you can have both ethernet and serial communications happening in a multi-threaded application at the same time as numerous other tasks without causing any data loss.
The one reason to not use multi-threading is:
There is one task, and no user interface with which the task will interfere.
The reasons to use multi-threading are:
Provides superior responsiveness to the user
Performs multiple tasks at the same time to decrease overall execution time
Uses more of the current multi-core CPUs, and multi-multi-cores of the future.
There are three basic methods of multi-threaded programming that make thread safety easy to implement - you only need to use one for success:
Thread-safe data types passed between threads.
Thread-safe methods in the threaded object to modify the data passed between threads.
PostMessage capabilities to communicate between threads.
Are the processes parallel? Is performance a real concern? Are there multiple 'threads' of execution like on a web server? I don't think there is a finite answer.
A common source of threading issues is the usual approaches employed to synchronize data. Having threads share state and then implement locking at all the appropriate places is a major source of complexity for both design and debugging. Getting the locking right to balance stability, performance, and scalability is always a hard problem to solve. Even the most experienced experts get it wrong frequently. Alternative techniques to deal with threading can alleviate much of this complexity. The Clojure programming language implements several interesting techniques for dealing with concurrency.
How do you make your application multithreaded?
Do you use async functions?
Or do you spawn a new thread?
I think that async functions already spawn a thread, so if your job is just doing some file reading, being lazy and just spawning your job on a thread would just "waste" resources...
So is there some kind of design rule for when to use threads versus async functions?
If you are talking about .NET, then don't forget the ThreadPool. The thread pool is also what async functions often use. Spawning too many threads can actually hurt your performance. A thread pool is designed to spawn just enough threads to do the work the fastest. So do use a thread pool instead of spawning your own threads, unless the thread pool doesn't meet your needs.
PS: And keep an eye on the Parallel Extensions from Microsoft.
Spawning threads is only going to waste resources if you start spawning tons of them; one or two extra threads isn't going to affect the platform's performance. In fact, System currently has over 70 threads for me, and MSN is using 32 (I really have no idea how a messenger can use that many threads, especially when it's minimised and not really doing anything...).
Usually a good time to spawn a thread is when something will take a long time, but you need to keep doing something else.
E.g. say a calculation will take 30 seconds. The best thing to do is spawn a new thread for the calculation, so that you can continue to update the screen and handle any user input, because users will hate it if your app freezes until it's finished doing the calculation.
On the other hand, creating threads to do something that can be done almost instantly is nearly pointless, since the overhead of creating (or even just passing work to an existing thread using a thread pool) will be higher than just doing the job in the first place.
Sometimes you can break your app into a couple of separate parts which run in their own threads. For example, in games the updates/physics may be one thread, while graphics is another, sound/music a third, and networking another. The problem here is you really have to think about how these parts will interact, or else you may have worse performance, bugs that happen seemingly "randomly", or even deadlock.
I'll second Fire Lancer's answer - creating your own threads is an excellent way to process big tasks or to handle a task that would otherwise be "blocking" to the rest of a synchronous app, but you have to have a clear understanding of the problem that you must solve and develop in a way that clearly defines the task of a thread and limits the scope of what it does.
For an example I recently worked on - a Java console app runs periodically to capture data by essentially screen-scraping urls, parsing the document with DOM, extracting data and storing it in a database.
As a single-threaded application it took, as you would expect, an age, averaging around 1 URL a second for a 50kb page. Not too bad, but when you scale up to needing to process thousands of URLs in a batch, it's no good.
Profiling the app showed that most of the time the active thread was idle - it was waiting for I/O operations - opening of a socket to the remote URL, opening a connection to the database etc. It's this sort of situation that can easily be improved with multithreading. Rewriting to be multi-threaded and with just 5 threads instead of one, even on a single core cpu, gave an increase in throughput of over 20 times.
In this example, each "worker" thread was explicitly limited in what it did - open a remote URL, parse the data, store it in the DB. All the "high level" processing - generating the list of URLs to parse, working out which to do next, handling errors - remained under the control of the main thread.
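The same shape carries over to other stacks; for instance, a rough Node.js sketch of the idea (an assumption on my part - the original was Java) gets the same throughput win from a fixed number of concurrent async fetches, since the bottleneck is I/O rather than CPU:

    const https = require('https');

    const urls = [/* ...thousands of URLs to scrape... */];
    const CONCURRENCY = 5; // the "5 threads" of the original example

    function parse(html) { return html.length; }        // placeholder for DOM parsing
    function storeInDatabase(data) { /* placeholder */ }

    function fetchUrl(url) {
      return new Promise((resolve, reject) => {
        https.get(url, (res) => {
          let body = '';
          res.on('data', (chunk) => (body += chunk));
          res.on('end', () => resolve(body));
        }).on('error', reject);
      });
    }

    async function workerLoop() {
      // Each "worker" keeps pulling the next URL until the shared list is empty.
      while (urls.length > 0) {
        const page = await fetchUrl(urls.shift());
        storeInDatabase(parse(page));
      }
    }

    // Start CONCURRENCY loops; while one awaits the network, the others proceed.
    Promise.all(Array.from({ length: CONCURRENCY }, workerLoop))
      .then(() => console.log('batch done'));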
The use of threads makes you think more about the way your application needs threading and can in the long run make it easier to improve / control your performance.
Async methods are quicker to use, but they are a bit magic - a lot of things happen to make them possible - so it's probable that at some point you will need something they can't give you. Then you can try rolling some custom threading code.
It all depends on your needs.
The answer is "it depends".
It depends on what you're trying to achieve. I'm going to assume that you're aiming for more performance.
The simplest solution is to find another way to improve your performance. Run a profiler. Look for hot spots. Reduce unnecessary IO.
The next solution is to break your program into multiple processes, each of which can run in their own address space. This is easiest because there is no chance of the individual processes messing each other up.
The next solution is to use threads. At this point you're opening a major can of worms, so start small, and only multi-thread the critical path of the code.
The next solution is to use async IO. It is generally only recommended for people writing some very heavily loaded server, and even then I would rather reuse one of the existing frameworks that abstract away the details, e.g. the C++ framework ICE, or an EJB server under Java.
Note that each of these solutions has multiple sub-solutions - there are different breeds of threads and different kinds of async IO, each with slightly different performance characteristics - but again, it's generally best to let the framework handle it for you.