Why do modern OSes (Linux, Windows, Solaris) implement a one-to-one thread model? - linux

Reading my textbook for my OS class, which is Operating Systems Concepts, 8th edition, by Silberschatz, Galvin, and Gagne, I came across something interesting in the chapter on threads.
In introducing thread models, they start with:
Many-to-one
- Stating that, essentially, this does not provide true concurrency.
Next they move to:
One-to-one
- Stating that this provides true concurrency, but suffers from limits on the number of threads because of the overhead of creating a kernel thread for each user thread.
Lastly, they move on to the seemingly evident solution:
Many-to-many
Which apparently takes the best of both worlds.
Yet if you notice, in the one-to-one section it states that Linux, along with the Windows family of operating systems, implements the one-to-one model.
So if many-to-many is the best solution, why do Linux, Windows, and Solaris (and maybe others) implement one-to-one?
...and apologies if this should go in programmers SE.

Solaris changed models from MxN to 1:1 because the complexity of managing threads at two different levels did not produce the expected benefits, the lack of a direct mapping caused issues meeting POSIX threads requirements in areas such as signal handling, and the cost of creating a new kernel thread for each user space thread was not too high. Sun published a white paper at the time of the switch which provides more details: Multithreading in the Solaris™ Operating Environment.

Related

Why is the Many-to-Many threading model not used in more operating systems?

The book Operating System Concepts discusses the various multithreading models (section 4.3). It mentions that the one-to-one model is used in most operating systems: Linux, Windows, etc. However, it then describes the Many-to-Many model, which fixes some of the issues seen with the One-to-One model, but notes that the only widely used implementation of it was the two-level model (an extension of the Many-to-Many model) in an older version of Solaris (9), which is now no longer used and has been replaced by the one-to-one model. My question is: if the Many-to-Many model is better, then why is it not used more commonly? Is it due to complexity? I can imagine there could be issues with context switching if there were not some sort of a "sticky" mapping between user and kernel-level threads?
Thanks for any help with this.
It is used; goroutines in Go are precisely this, managed by the Go runtime. As kernel memory became cheaper (because memory became cheaper) and pthreads use became ubiquitous, the runtime cost of managing the two-level model, and the human time cost of supporting it, spelled its demise.
Goroutines are a programming model, and are meant to be extremely cheap, to the degree that Go programs shouldn't be ashamed to have thousands of them. The Go runtime very carefully keeps a virtual-CPU pool (constructed from real threads) that can adopt a goroutine very quickly.
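To get a feel for the same M:N idea outside Go: .NET's thread pool also multiplexes many cheap logical tasks onto a few OS threads. Here is a minimal, purely illustrative C# sketch (the task count is arbitrary):

```csharp
// Minimal sketch: 10,000 logical tasks, but only a handful of OS threads.
// The .NET thread pool, like the Go scheduler, maps many units of work
// onto few kernel threads.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class ManyToFew
{
    static void Main()
    {
        var tasks = Enumerable.Range(0, 10000)
                              .Select(i => Task.Run(() => i * i))
                              .ToArray();
        Task.WaitAll(tasks);

        // Despite 10,000 tasks, the process holds far fewer OS threads.
        Console.WriteLine(
            $"OS threads in process: {Process.GetCurrentProcess().Threads.Count}");
    }
}
```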

Advice on starting a large multi-threaded programming project

My company currently runs a third-party simulation program (natural catastrophe risk modeling) that sucks up gigabytes of data off a disk and then crunches for several days to produce results. I will soon be asked to rewrite this as a multi-threaded app so that it runs in hours instead of days. I expect to have about 6 months to complete the conversion and will be working solo.
We have a 24-proc box to run this. I will have access to the source of the original program (written in C++ I think), but at this point I know very little about how it's designed.
I need advice on how to tackle this. I'm an experienced programmer (~30 years, currently working in C# 3.5) but have no multi-processor/multi-threaded experience. I'm willing and eager to learn a new language if appropriate. I'm looking for recommendations on languages, learning resources, books, architectural guidelines, etc.
Requirements: Windows OS. A commercial grade compiler with lots of support and good learning resources available. There is no need for a fancy GUI - it will probably run from a config file and put results into a SQL Server database.
Edit: The current app is C++ but I will almost certainly not be using that language for the re-write. I removed the C++ tag that someone added.
Numerical process simulations are typically run over a single discretised problem grid (for example, the surface of the Earth or clouds of gas and dust), which usually rules out simple task farming or concurrency approaches. This is because a grid divided over a set of processors representing an area of physical space is not a set of independent tasks. The grid cells at the edge of each subgrid need to be updated based on the values of grid cells stored on other processors, which are adjacent in logical space.
In high-performance computing, simulations are typically parallelised using either MPI or OpenMP. MPI is a message passing library with bindings for many languages, including C, C++, Fortran, Python, and C#. OpenMP is an API for shared-memory multiprocessing. In general, MPI is more difficult to code than OpenMP, and is much more invasive, but is also much more flexible. OpenMP requires a memory area shared between processors, so is not suited to many architectures. Hybrid schemes are also possible.
This type of programming has its own special challenges. As well as race conditions, deadlocks, livelocks, and all the other joys of concurrent programming, you need to consider the topology of your processor grid - how you choose to split your logical grid across your physical processors. This is important because your parallel speedup is a function of the amount of communication between your processors, which itself is a function of the total edge length of your decomposed grid. As you add more processors, this surface area increases, increasing the amount of communication overhead. Decomposing the grid ever more finely eventually becomes prohibitive.
The other important consideration is the proportion of the code which can be parallelised. Amdahl's law then dictates the maximum theoretically attainable speedup. You should be able to estimate this before you start writing any code.
Both of these facts will conspire to limit the maximum number of processors you can run on. The sweet spot may be considerably lower than you think.
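To make that estimate concrete, here is a tiny C# sketch of Amdahl's law; the 95% parallel fraction is an invented example value, something you would measure by profiling the real code:

```csharp
// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
// fraction of the runtime that can be parallelised.
using System;

class AmdahlEstimate
{
    static double Speedup(double p, int n) => 1.0 / ((1.0 - p) + p / n);

    static void Main()
    {
        const double p = 0.95; // assumption: 95% of the work parallelises
        foreach (int n in new[] { 1, 4, 8, 24 })
            Console.WriteLine($"{n,3} processors -> {Speedup(p, n):F1}x");
        // Even with infinite processors the limit is 1/(1-p) = 20x here;
        // a 24-proc box yields only about 11x.
    }
}
```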
I recommend the book High Performance Computing, if you can get hold of it. In particular, the chapter on performance benchmarking and tuning is priceless.
An excellent online overview of parallel computing, which covers the major issues, is this introduction from Lawrence Livermore National Laboratory.
Your biggest problem in a multithreaded project is that too much state is visible across threads - it is too easy to write code that reads / mutates data in an unsafe manner, especially in a multiprocessor environment where issues such as cache coherency, weakly consistent memory etc might come into play.
Debugging race conditions is distinctly unpleasant.
Approach your design as you would if, say, you were considering distributing your work across multiple machines on a network: that is, identify what tasks can happen in parallel, what the inputs to each task are, what the outputs of each task are, and what tasks must complete before a given task can begin. The point of the exercise is to ensure that each place where data becomes visible to another thread, and each place where a new thread is spawned, are carefully considered.
Once such an initial design is complete, there will be a clear division of ownership of data, and clear points at which ownership is taken / transferred; and so you will be in a very good position to take advantage of the possibilities that multithreading offers you - cheaply shared data, cheap synchronisation, lockless shared data structures - safely.
If you can split the workload up into non-dependent chunks of work (i.e., the data set can be processed in bits, there aren't lots of data dependencies), then I'd use a thread pool / task mechanism. Presumably whatever C# has as an equivalent to Java's java.util.concurrent. I'd create work units from the data, and wrap them in a task, and then throw the tasks at the thread pool.
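As a rough illustration (not the actual simulation code), a C# version of that work-unit/thread-pool approach might look like the sketch below; ProcessChunk is a hypothetical stand-in for the real per-chunk computation:

```csharp
// Minimal task-farm sketch: wrap each independent work unit in a Task
// and let the runtime's thread pool schedule them across cores.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class TaskFarm
{
    // Hypothetical placeholder for the real simulation kernel.
    static double ProcessChunk(int chunkId) => chunkId * 2.0;

    static void Main()
    {
        var tasks = new List<Task<double>>();
        for (int chunk = 0; chunk < 100; chunk++)
        {
            int id = chunk; // capture a copy, not the loop variable
            tasks.Add(Task.Run(() => ProcessChunk(id)));
        }
        Task.WaitAll(tasks.ToArray());

        foreach (var t in tasks)
            Console.WriteLine(t.Result);
    }
}
```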
Of course, performance might be a concern here. If you can keep the original processing kernel as-is, you can call it from within your C# application.
If the code has lots of data dependencies, it may be a lot harder to break up into threaded tasks, but you might be able to break it up into a pipeline of actions. This means thread 1 passes data to thread 2, which passes data to threads 3 through 8, which pass data onto thread 9, etc.
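The pipeline variant could be sketched in C# with a blocking queue between stages, something like this (the stage contents are invented placeholders):

```csharp
// Minimal two-stage pipeline: stage 1 produces items, stage 2 consumes
// them, with BlockingCollection<T> as the thread-safe hand-off.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Pipeline
{
    static void Main()
    {
        var queue = new BlockingCollection<int>(boundedCapacity: 100);

        var stage1 = Task.Run(() =>
        {
            for (int i = 0; i < 10; i++)
                queue.Add(i);          // e.g. read and parse input data
            queue.CompleteAdding();    // signal end of the stream
        });

        var stage2 = Task.Run(() =>
        {
            foreach (int item in queue.GetConsumingEnumerable())
                Console.WriteLine(item * item); // e.g. crunch the numbers
        });

        Task.WaitAll(stage1, stage2);
    }
}
```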
If the code has a lot of floating point mathematics, it might be worth looking at rewriting in OpenCL or CUDA, and running it on GPUs instead of CPUs.
For a 6-month project I'd say it definitely pays off to start by reading a good book on the subject. I would suggest Joe Duffy's Concurrent Programming on Windows. It's the most thorough book I know on the subject, and it covers both .NET and native Win32 threading. I had been writing multithreaded programs for 10 years when I discovered this gem, and I still found things I didn't know in almost every chapter.
Also, "natural catastrophe risk modeling" sounds like a lot of math. Maybe you should have a look at Intel's IPP library: it provides primitives for many common low-level math and signal processing algorithms. It supports multi threading out of the box, which may make your task significantly easier.
There are a lot of techniques that can be used to deal with multithreading if you design the project for it.
The most general and universal is simply "avoid shared state". Whenever possible, copy resources between threads, rather than making them access the same shared copy.
If you're writing the low-level synchronization code yourself, you have to remember to make absolutely no assumptions. Both the compiler and CPU may reorder your code, creating race conditions or deadlocks where none would seem possible when reading the code. The only way to prevent this is with memory barriers. And remember that even the simplest operation may be subject to threading issues. Something as simple as ++i is typically not atomic, and if multiple threads access i, you'll get unpredictable results.
And of course, just because you've assigned a value to a variable, that's no guarantee that the new value will be visible to other threads. The compiler may defer actually writing it out to memory. Again, a memory barrier forces it to "flush" all pending memory I/O.
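A small C# demonstration of the "++i is not atomic" point (the counts are arbitrary; on most runs the unsafe counter comes up short):

```csharp
// Two threads increment a shared counter one million times each.
// Plain ++ loses updates; Interlocked.Increment does not.
using System;
using System.Threading;
using System.Threading.Tasks;

class AtomicityDemo
{
    static int unsafeCounter;
    static int safeCounter;

    static void Main()
    {
        Task Hammer(bool safe) => Task.Run(() =>
        {
            for (int i = 0; i < 1000000; i++)
            {
                if (safe) Interlocked.Increment(ref safeCounter);
                else unsafeCounter++; // read-modify-write: a data race
            }
        });

        Task.WaitAll(Hammer(false), Hammer(false));
        Task.WaitAll(Hammer(true), Hammer(true));

        Console.WriteLine($"unsafe: {unsafeCounter} (usually < 2000000)");
        Console.WriteLine($"safe:   {safeCounter} (always 2000000)");
    }
}
```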
If I were you, I'd go with a higher level synchronization model than simple locks/mutexes/monitors/critical sections if possible. There are a few CSP libraries available for most languages and platforms, including .NET languages and native C++.
This usually makes race conditions and deadlocks trivial to detect and fix, and allows a ridiculous level of scalability. But there's a certain amount of overhead associated with this paradigm as well, so each thread might get less work done than it would with other techniques. It also requires the entire application to be structured specifically for this paradigm (so it's tricky to retrofit onto existing code, but since you're starting from scratch it's less of an issue - though it'll still be unfamiliar to you).
Another approach might be Transactional Memory. This is easier to fit into a traditional program structure, but also has some limitations, and I don't know of many production-quality libraries for it (STM.NET was recently released, and may be worth checking out. Intel has a C++ compiler with STM extensions built into the language as well)
But whichever approach you use, you'll have to think carefully about how to split the work up into independent tasks, and how to avoid cross-talk between threads. Any time two threads access the same variable, you have a potential bug. And any time two threads access the same variable or just another variable near the same address (for example, the next or previous element in an array), data will have to be exchanged between cores, forcing it to be flushed from CPU cache to memory, and then read into the other core's cache. Which can be a major performance hit.
Oh, and if you do write the application in C++, don't underestimate the language. You'll have to learn the language in detail before you'll be able to write robust code, much less robust threaded code.
One thing we've done in this situation that has worked really well for us is to break the work to be done into individual chunks and the actions on each chunk into different processors. Then we have chains of processors and data chunks can work through the chains independently. Each set of processors within the chain can run on multiple threads each and can process more or less data depending on their own performance relative to the other processors in the chain.
Also breaking up both the data and actions into smaller pieces makes the app much more maintainable and testable.
There's plenty of specific bits of individual advice that could be given here, and several people have done so already.
However nobody can tell you exactly how to make this all work for your specific requirements (which you don't even fully know yourself yet), so I'd strongly recommend you read up on HPC (High Performance Computing) for now to get the over-arching concepts clear and have a better idea which direction suits your needs the most.
The model you choose to use will be dictated by the structure of your data. Is your data tightly coupled or loosely coupled? If your simulation data is tightly coupled then you'll want to look at OpenMP or MPI (parallel computing). If your data is loosely coupled then a job pool is probably a better fit... possibly even a distributed computing approach could work.
My advice is get and read an introductory text to get familiar with the various models of concurrency/parallelism. Then look at your application's needs and decide which architecture you're going to need to use. After you know which architecture you need, then you can look at tools to assist you.
A fairly highly rated book which works as an introduction to the topic is "The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Application".
Read about Erlang and the "Actor Model" in particular. If you make all your data immutable, you will have a much easier time parallelizing it.
Most of the other answers offer good advice regarding partitioning the project - look for tasks that can be cleanly executed in parallel with very little data sharing required. Be aware of non-thread safe constructs such as static or global variables, or libraries that are not thread safe. The worst one we've encountered is the TNT library, which doesn't even allow thread-safe reads under some circumstances.
As with all optimisation, concentrate on the bottlenecks first, because threading adds a lot of complexity you want to avoid it where it isn't necessary.
You'll need a good grasp of the various threading primitives (mutexes, semaphores, critical sections, conditions, etc.) and the situations in which they are useful.
One thing I would add, if you're intending to stay with C++, is that we have had a lot of success using the boost.thread library. It supplies most of the required multi-threading primitives, although does lack a thread pool (and I would be wary of the unofficial "boost" thread pool one can locate via google, because it suffers from a number of deadlock issues).
I would consider doing this in .NET 4.0 since it has a lot of new support specifically targeted at making writing concurrent code easier. Its official release date is March 22, 2010, but it will probably RTM before then and you can start with the reasonably stable Beta 2 now.
You can either use C# that you're more familiar with or you can use managed C++.
At a high level, try to break up the program into System.Threading.Tasks.Task's which are individual units of work. In addition, I'd minimize use of shared state and consider using Parallel.For (or ForEach) and/or PLINQ where possible.
If you do this, a lot of the heavy lifting will be done for you in a very efficient way. It's the direction that Microsoft is going to increasingly support.
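A minimal sketch of what that looks like in practice; Simulate is a hypothetical stand-in for the real per-item computation:

```csharp
// Data parallelism with Parallel.For and PLINQ: the runtime partitions
// the index range across cores and does the scheduling for you.
using System;
using System.Linq;
using System.Threading.Tasks;

class DataParallelSketch
{
    static double Simulate(int id) => Math.Sqrt(id); // placeholder work

    static void Main()
    {
        var results = new double[10000];
        Parallel.For(0, results.Length, i => results[i] = Simulate(i));

        // The same shape expressed with PLINQ:
        double total = Enumerable.Range(0, results.Length)
                                 .AsParallel()
                                 .Select(Simulate)
                                 .Sum();
        Console.WriteLine(total);
    }
}
```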
Sorry, I just want to add a pessimistic, or rather realistic, answer here.
You are under time pressure. A 6-month deadline, and you don't even know for sure what language this system is written in, what it does, or how it is organized. If it is not a trivial calculation, then it is a very bad start.
Most importantly: you say you have never done multithreaded programming before. This is where four alarm bells go off at once for me. Multithreading is difficult and takes a long time to learn when you want to do it right - and you need to do it right if you want a huge speed increase. Debugging is extremely nasty even with good tools like TotalView's debugger or Intel's VTune.
Then you say you want to rewrite the app in another language - well, this isn't as bad, since you have to rewrite it anyway. The chance of turning a single-threaded program into a well-working multithreaded one without a total redesign is almost zero.
But learning multithreading and a new language (how strong are your C++ skills?) with a timeline of 3 months (you'll have to write a throwaway prototype, so I cut the timespan in half) is extremely challenging.
My advice here is simple, and you will not like it: learn multithreading now - because it is a required skill set for the future - but leave this job to someone who already has experience. Well, unless you don't care about the program being successful and are just looking for 6 months' payment.
If it's possible to have all the threads working on disjoint sets of process data, and have other information stored in the SQL database, you can quite easily do it in C++, and just spawn off new threads to work on their own parts using the Windows API. The SQL server will handle all the hard synchronization magic with its DB transactions! And of course C++ will perform a lot faster than C#.
You should definitely revise C++ for this task, and understand the C++ code, and look for efficiency bugs in the existing code as well as adding the multi-threaded functionality.
You've tagged this question as C++ but mentioned that you're a C# developer currently, so I'm not sure if you'll be tackling this assignment from C++ or C#. Anyway, in case you're going to be using C# or .NET (including C++/CLI): I have the following MSDN article bookmarked and would highly recommend reading through it as part of your prep work.
Calling Synchronous Methods Asynchronously
Whatever technology you're going to use to write this, take a look at the must-read book on concurrency "Concurrent Programming in Java", and for .NET I highly recommend the Retlang library for concurrent apps.
I don't know if it was mentioned yet, but if I were in your shoes, what I would be doing right now (aside from reading every answer posted here) is writing a multiple threaded example application in your favorite (most used) language.
I don't have extensive multithreaded experience. I've played around with it in the past for fun but I think gaining some experience with a throw-away application will suit your future efforts.
I wish you luck in this endeavor and I must admit I wish I had the opportunity to work on something like this...

Programming on future hardware?

I want to practice programming code for future hardware. What would that be? The two main things that come to mind are 64 bits and multicore. I also note that cache is important, and GPUs have their own tech, but right now I am not interested in any graphics programming.
What else should I know about?
Edit: I know a lot of these are in the present, but pretty soon all CPUs will be multicore and threading will be more important. I considered endianness (big vs. little) but found it not to be important, and I already have a big-endian CPU to test on.
My recommendations for the future :)
nVidia CUDA
nVidia Tegra
Or you can focus on ray tracing.
If you'd like to dive into a "mainstream" OS that has full 64-bit support, I suggest you start coding against the beta of Mac OS X "Snow Leopard" (codename for 10.6). One of the big enhancements is Grand Central, which is a "facility" for developers to code for multicore systems. Grand Central should distribute workload not only between cores, but also to the GPU.
Also very important is the explosion of smart devices such as the iPhone, Android, etc. I strongly believe that some upcoming so-called "netbooks" will rely on OS such as Android and iPhone OS, and as such knowing how to code against their SDK, and knowing how to optimize code for mobile devices is very important (e.g. optimizing performance graphic or otherwise, battery usage).
I can't foretell the future, but one aspect to look into is something like the CELL processor used in the PS3, where instead of many identical general purpose cores, there is only one (although capable of symmetric multithreading) plus many cores that are more specific purpose.
In a simple analysis, the Cell processor can be split into four components: external input and output structures, the main processor called the Power Processing Element (PPE) (a two-way simultaneous multithreaded Power ISA v.2.03 compliant core), eight fully-functional co-processors called the Synergistic Processing Elements, or SPEs, and a specialized high-bandwidth circular data bus connecting the PPE, input/output elements and the SPEs, called the Element Interconnect Bus or EIB.
CUDA and OpenCL are similar in that you separate your general purpose code and high performance computations into separate parts that may run on different hardware and language/api.
64 bits and multicore are the present, not the future.
About the future:
Quantum computing or something like that?
How about learning OpenCL? It's a massively parallel processing language based on C. It's similar to nVidia's CUDA but is vendor agnostic. There are no major implementations yet, but expect to see some pretty soon.
As for 64 bit, don't really worry about it. Programming will not really be any different unless you're doing really low-level stuff (kernels). Higher-level frameworks such as Java and .NET allow you to run code on 32-bit and 64-bit machines. Even C/C++ allows you to do this (but not quite so transparently).
I agree with Oli's answer (+1) and would add that in addition to 64-bit environments, you should look at multicore environments. The industry is getting pretty close to the end of the cycle of improvements in raw speed. But we're seeing more and more multicore CPUs. So parallel or concurrent programming - which is really, really hard - is quickly becoming very much in demand.
How can you prepare for this and practice it? I've been asking myself the same question. So far, it seems to me that functional languages such as ML, Haskell, LISP, Arc, Scheme, etc. are a good place to begin, since truly functional languages are generally free of side effects and therefore very "parallelizable". Erlang is another interesting language.
Other interesting developments that I've found include
The Singularity Research OS
Transactional Memory and Software Isolated Processes
The many Software Engineering Podcast episodes on concurrency. (Here's the first one.)
This article from ACM Queue on "Real World Concurrency"
Of course this question is hard to answer, because nobody knows what future hardware will look like (at least in the long term), but multithreaded/parallel programming is important and will definitely be even more important for years to come.
I'd also suggest working with GPU computing like CUDA/Stream, but this could be a problem because it's very likely to change a lot in the next few years.

Why don't large programs (such as games) use loads of different threads? [closed]

I don't know how commercial games work inside very much, but the open source games I have come across don't seem to be massively into threading. Same goes for most other desktop applications, normally two or three threads seem to be used (eg program logic and GUI updates).
Why don't games have many threads? Eg separate threads for physics, sound, graphics, AI etc?
I don't know about the games that you have played, but most games run the sound on a separate thread. Networking code, at least the socket listeners run on a separate thread.
However, the rest of the game engine generally runs in a single thread. There are reasons for this. For example, most processing in a game runs a single chain of dependencies. Graphics depend on the state of the physics engine, as does the artificial intelligence. Designing for multiple threads means that you have to have frame latency between the various subsystems for concurrency. You get quicker response time and snappier game play if these subsystems are computed linearly each frame. The part of the game that benefits the most from parallelization is of course the rendering subsystem, which is offloaded to highly parallelized graphics accelerator cards.
You need to think, what are the actual benefits of threads? Remember that on a single core machine, threads don't actually allow concurrent execution, just the impression of it. Behind the scenes, the CPU is context-switching between the different threads, doing a little work on each every time. Therefore, if I have several tasks that involve no waiting, running them concurrently (on a single core) will be no quicker than running them linearly. In fact, it will be slower, due to the added overhead of the frequent context-switching.
If that is the case, why ever use threads on a single-core machine? Well, firstly, because sometimes tasks can involve long periods of waiting on some external resource, such as a disk or other hardware device, to become available. While a task is in a waiting state, threading allows other tasks to continue, thus using the CPU's time more efficiently.
Secondly, tasks may have a deadline of some sort in which to complete, particularly if they are responding to an event. The classic example is the user interface of an application. The computer should respond to user action events as quickly as possible, even if it is busy performing some other long-running task; otherwise the user will become agitated and may believe the application has crashed. Threading allows this to happen.
As for games, I am not a games programmer, but my understanding of the situation is this: 3D games create a programmatic model of the game world; players, enemies, items, terrain, etc. This game world is updated in discrete steps, based on the amount of time that has elapsed since the previous update. So, if 1ms has passed since the last time round the game loop, the position of an object is updated by using its velocity and the elapsed time to determine the delta (obviously the physics is a bit more complicated than that, but you get the idea). Other factors such as AI and input keys may also contribute to the update. When everything is finished, the updated game world is rendered as a new frame and the process begins again. This process usually occurs many times per second.
When we think about the game loop in this way, we can see that the engine is in fact achieving a very similar goal to that of threading. It has a number of long running tasks (updating the world's physics, handling user input, etc), and it gives the impression that they are happening concurrently by breaking them down into small pieces of work and interleaving these pieces, but instead of relying on the CPU or operating system to manage the time spent on each, it is doing it itself. This means it can keep all the different tasks properly synchronized, and avoid the complexities that come with real threading: locks, pre-emption, re-entrant code, etc. There is no performance implication to this approach either, because as we said a single core machine can only really execute code linearly anyway.
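A bare-bones C# sketch of that loop (the subsystem calls are illustrative comments only, not real engine code):

```csharp
// The classic single-threaded game loop: one thread interleaves all
// subsystems, stepping the world by the elapsed time each frame.
using System.Diagnostics;

class GameLoop
{
    static void Main()
    {
        var clock = Stopwatch.StartNew();
        double previous = 0.0;

        while (clock.Elapsed.TotalSeconds < 1.0) // demo: run for one second
        {
            double now = clock.Elapsed.TotalSeconds;
            double dt = now - previous; // time since the last frame
            previous = now;

            // Subsystems run in turn, in dependency order:
            // HandleInput();
            // UpdatePhysics(dt);  // e.g. position += velocity * dt
            // UpdateAI(dt);
            // Render();
        }
    }
}
```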
Things change when you have a multi-core system. Now, tasks can be running genuinely concurrently, and there may indeed be a benefit to using threading to handle different parts of the game world updates, so long as we can manage to synchronise the results to render consistent frames. We would expect, therefore, that with the advent of multi-core systems, game engine developers would be working on this. And so it turns out, they are. Valve, the makers of Half Life, have recently introduced multi-processor support into their Source Engine, and I imagine many other engine developers are following suit.
Well, that turned out a little longer than I expected. I'm not a threading or games expert, but I hope I haven't made any especially glaring errors. If I have I'm sure people will correct me :)
The main reason is that, as elegant as it sounds, using multiple threads in a program as complicated as a 3D game is really, really, really difficult. Also, before the fairly recent introduction of low cost multi-core systems, using multiple threads did not offer much of a performance incentive.
Many games these days are using "task" or "job" systems for parallel processing. That is, the game spawns a fixed number of worker threads which are used for multiple tasks. Work is divided up into small pieces and queued, then sent to be processed by the worker threads as they become available.
This is becoming especially common on consoles. The PS3 is based on Cell architecture so you need to use parallel processing to get the best performance out of the system. The Xbox 360 can emulate a task/job setup that was designed for PS3 as it has multiple cores. You would probably find for most games that a lot of the system design is shared among the 360, PS3, and PC codebases, so PC most likely uses the same sort of tactic.
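A minimal sketch of such a job system in C# (the job bodies are placeholders; real engines add priorities, dependencies, and per-frame batching):

```csharp
// Fixed worker threads pulling small jobs from a shared queue.
using System;
using System.Collections.Concurrent;
using System.Threading;

class JobSystem
{
    static void Main()
    {
        var jobs = new BlockingCollection<Action>();
        var workers = new Thread[Environment.ProcessorCount];

        for (int w = 0; w < workers.Length; w++)
        {
            workers[w] = new Thread(() =>
            {
                // Run jobs until the queue is marked complete.
                foreach (var job in jobs.GetConsumingEnumerable())
                    job();
            });
            workers[w].Start();
        }

        for (int i = 0; i < 100; i++)
        {
            int id = i;
            jobs.Add(() => Console.WriteLine(
                $"job {id} on thread {Thread.CurrentThread.ManagedThreadId}"));
        }
        jobs.CompleteAdding();

        foreach (var worker in workers)
            worker.Join();
    }
}
```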
While it is hard to write threadsafe code, as many of the other answers indicate, I think there are a few other reasons for the things you're seeing:
First, many open source games are a few years old. Especially with this generation of consoles parallel programming is becoming popular and even necessary as mentioned above.
Second, very few open source projects seem concerned about getting the highest possible performance. As John Carmack pointed out to the Utah GLX project, highly optimized code is often harder to maintain than unoptimized code, so the latter would generally be preferred in open source contexts.
Third, I wouldn't take a small number of threads created by a game to mean that it's not using parallel jobs well.
I was about to post the same thing as William, but I'd like to expand on it a little bit. It's very hard to write optimal code for the future. Given the choice between writing something that will scale to hardware you don't have vs. writing something that will work on hardware you do have, most people will chose to do the latter. Since the single-core paradigm has been with us for so long, most code that has been written (especially for games where there is extreme pressure to get it out the door) isn't that future proof.
x86 has been very kind to game programmers, since we haven't had to think about the ramifications of less forgiving hardware platforms.
The fact that everybody here is correctly claiming that multithreading is hard is very sad. We desperately need to make concurrency systems easy.
Personally I think we are going to need a paradigm shift and new tools.
Other than the technical challenges of programming for multiple cores, commercial games have to run well on low end systems w/o multiple cores to make money.
Now that multi-core processors have been out for a while and the major game consoles have multiple cores it's only a matter of time before dual core shows up on the minimum system requirements list for PC games.
Here's a link to an interview with Orion Granatir from Intel where he's talking about getting game developers to take advantage of multi-threading.
There are many issues with race conditions and data locking when using lots of threads. Since the different parts of games are fairly reliant on each other it doesn't make much sense to do all the extra engineering required to use loads of threads.
It's very difficult to use threads without problems, and most GUI APIs are based on event driven coding anyway. Threads mandate the use of locking mechanisms which add delay to the code, and often that delay is unpredictable depending on who is currently holding the lock.
It seems sensible to me to have a single (or perhaps very few) threads handling things in an event driven way rather than hundreds of threads all causing strange and unrepeatable bugs.
Threads are dead, baby.
Realistically, in game development, threads don't scale beyond offloading very dedicated tasks like networking and loading. Job-systems seem to be the only way forward, given 8 CPU systems are becoming more commonplace even on PCs. And you can pretty much guarantee that upcoming super-multicore systems like Intel's Larrabee will be job-system based.
This has been a somewhat painful realization on Playstation3 and XBOX360 projects, and it seems now even Apple has jumped on board with their "revolutionary" Grand Central Dispatch system in Snow Leopard.
Threads have their place, but the naive promise of "put everything in a thread and it will all run faster" simply doesn't work in practice.

Threading paradigm?

Are there any paradigms that give you a different mindset or a different take on writing multithreaded applications? Perhaps something that feels vastly different, like procedural programming versus functional programming.
Concurrency has many different models for different problems. The Wikipedia page for concurrency lists a few models and there's also a page for concurrency patterns which has some good starting point for different kinds of ways to approach concurrency.
The approach you take is very dependent on the problem at hand. Different models solve various different issues that can arise in concurrent applications, and some build on others.
In class I was taught that concurrency uses mutual exclusion and synchronization together to solve concurrency issues. Some solutions only require one, but with both you should be able to solve any concurrency issue.
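To illustrate the two ingredients together, here is a small C# sketch (my own example, not from any class): a one-slot producer/consumer hand-off where lock provides mutual exclusion and Monitor.Wait/Pulse provide synchronization:

```csharp
// Mutual exclusion (lock) plus synchronization (Monitor.Wait/Pulse)
// in a one-slot hand-off between a producer and a consumer.
using System;
using System.Threading;

class HandOff
{
    static readonly object gate = new object();
    static int? slot = null; // null means "empty"

    static void Main()
    {
        var consumer = new Thread(() =>
        {
            for (int n = 0; n < 5; n++)
            {
                lock (gate)                 // mutual exclusion
                {
                    while (slot == null)
                        Monitor.Wait(gate); // synchronization: sleep until full
                    Console.WriteLine($"got {slot}");
                    slot = null;
                    Monitor.Pulse(gate);    // wake the producer
                }
            }
        });
        consumer.Start();

        for (int i = 0; i < 5; i++)
        {
            lock (gate)
            {
                while (slot != null)
                    Monitor.Wait(gate);     // wait until the slot is empty
                slot = i;
                Monitor.Pulse(gate);        // wake the consumer
            }
        }
        consumer.Join();
    }
}
```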
For a vastly different concept you could look at immutability and concurrency. If all data is immutable then the conventional approaches to concurrency aren't even required. This article explores that topic.
I don't really understand the question, but if you start doing some coding using CUDA, it gives you a different way of thinking about multithreaded applications.
It differs from general multithreading techniques, like semaphores, monitors, etc., because you have thousands of threads running concurrently. So the problem of parallelism in CUDA resides more in partitioning your data and combining the chunks of data later.
Just a small example of a complete rethinking of a common serial problem is the SCAN algorithm. It is as simple as:
Given a SET {a,b,c,d,e}
I want the following set:
{a, a+b, a+b+c, a+b+c+d, a+b+c+d+e}
Where the symbol '+' in this case is any associative operator (not only plus, you can do multiplication also).
How to do this in parallel? It's a complete rethink of the problem, it is described in this paper.
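The paper describes GPU implementations, but the shape of the parallel algorithm can be sketched on the CPU too. Below is a rough C# version of the Hillis-Steele formulation, where each of the log2(n) passes updates all elements in parallel; on a GPU, each pass would be one kernel launch:

```csharp
// Parallel inclusive scan (prefix sum), Hillis-Steele style:
// log2(n) passes, each pass fully data-parallel.
using System;
using System.Threading.Tasks;

class ParallelScan
{
    static double[] InclusiveScan(double[] input)
    {
        var src = (double[])input.Clone();
        for (int offset = 1; offset < src.Length; offset *= 2)
        {
            var next = new double[src.Length];
            var prev = src; // capture this pass's source array
            Parallel.For(0, src.Length, i =>
                next[i] = i >= offset ? prev[i] + prev[i - offset] : prev[i]);
            src = next;
        }
        return src;
    }

    static void Main()
    {
        // {a, b, c, d, e} -> {a, a+b, a+b+c, a+b+c+d, a+b+c+d+e}
        var result = InclusiveScan(new double[] { 1, 2, 3, 4, 5 });
        Console.WriteLine(string.Join(", ", result)); // 1, 3, 6, 10, 15
    }
}
```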
Many more implementations of different algorithms in CUDA can be found in the NVIDIA website
Well, a very conservative paradigm shift is from thread-centric concurrency (share everything) towards process-centric concurrency (address-space separation). This way one can avoid unintended data sharing and it's easier to enforce a communication policy between different sub-systems.
This idea is old and was propagated (among others) by the Micro-Kernel OS community to build more reliable operating systems. Interestingly, the Singularity OS prototype by Microsoft Research shows that traditional address spaces are not even required when working with this model.
The relatively new idea I like best is transactional memory: avoid concurrency issues by making sure updates are always atomic.
Have a looksee at OpenMP for an interesting variation.

Resources