In-depth analysis of the difference between the CPU and GPU [closed] - multithreading

I've been searching for the major differences between a CPU and a GPU, more precisely the fine line that separates the two. For example, why not use multiple CPUs instead of a GPU, and vice versa? Why is the GPU "faster" at crunching calculations than the CPU? What are some things that one of them can do and the other can't do, or can't do efficiently, and why? Please don't reply with answers like "central processing unit" and "graphics processing unit". I'm looking for an in-depth technical answer.

GPUs are basically massively parallel computers. They work well on problems that can use large scale data decomposition and they offer orders of magnitude speedups on those problems.
However, individual processing units in a GPU cannot match a CPU for general purpose performance. They are much simpler and lack optimizations like long pipelines, out-of-order execution and instruction-level parallelism.
They also have other drawbacks. Firstly, your users have to have one, which you cannot rely on unless you control the hardware. There are also overheads in transferring data from main memory to GPU memory and back.
So it depends on your requirements: in some cases GPUs or dedicated processing units like Tesla are the clear winners, but in other cases, your work cannot be decomposed to make full use of a GPU and the overheads then make CPUs the better choice.

First watch this demonstration:
http://www.nvidia.com/object/nvision08_gpu_v_cpu.html
That was fun!
So what's important here is that a CPU can be controlled to perform basically any calculation on command. For calculations that are unrelated to each other, or where each computation depends strongly on its neighbors (rather than being merely the same operation), you usually need a full CPU. Take compiling a large C/C++ project as an example: the compiler has to read each token of each source file in sequence before it can understand the meaning of the next. Even though there are lots of source files to process, they all have different structure, so the same calculations don't apply across the source files.
You could speed that up by having several independent CPUs, each working on separate files. Improving the speed by a factor of X means you need X CPUs, which will cost X times as much as one CPU.
Some kinds of tasks involve doing exactly the same calculation on every item in a dataset. Some physics simulations look like this: in each step, each 'element' in the simulation moves a little bit, according to the 'sum' of the forces applied to it by its immediate neighbors.
Since you're doing the same calculation on a big set of data, you can repeat some of the parts of a CPU, but share others (in the linked demonstration, the air system, valves and aiming are shared; only the barrels are duplicated for each paintball). Doing X calculations requires less than X times the cost in hardware.
The obvious disadvantage is that the shared hardware means you can't tell a subset of the parallel processor to do one thing while another subset does something unrelated; the extra parallel capacity goes to waste while the GPU performs one task and then another, different task.
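To make that contrast concrete, here is a minimal sketch in Julia (the language used for code elsewhere on this page); the arrays, the 0.016 timestep and the hash-chain loop are all invented for illustration. The first loop applies one identical, independent update to every element, which is the shape of work a GPU thrives on; the second cannot be split up, because every step needs the previous result.
positions = rand(1_000_000)
velocities = rand(1_000_000)
dt = 0.016
positions .+= velocities .* dt            # the same update on a million independent elements: GPU-friendly

state = UInt64(0)
for token in rand(1:100, 1_000_000)
    global state = hash(token, state)     # each step depends on the previous result: inherently sequential
end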

Related

Difference between SIMD and Multi-threading [closed]

What is the difference between the SIMD and multi-threading concepts that one comes across in the parallel programming paradigm?
SIMD means "Single Instruction, Multiple Data" and is an umbrella term describing a method whereby many elements are loaded into extra-wide CPU registers at the same time and a single low-level instruction (such as ADD, MULTIPLY, AND, XOR) is applied to all the elements in parallel. Specific examples are MMX, SSE2/3 and AVX on Intel processors, NEON on ARM processors, or AltiVec on PowerPC. It is very low-level and typically operates over only a few clock cycles. An example might be that, rather than go into a for loop increasing the brightness of the pixels in an image one by one, you load 64 8-bit pixels into a single 512-bit wide register and multiply them all up at the same time in one or two clock cycles. SIMD is often implemented for you in high-performance libraries (like OpenCV) or is generated for you by your compiler when you compile with vectorisation enabled, typically at optimisation level 3 or higher (-O3 switch). Very experienced programmers may choose to write their own, using "intrinsics".
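To keep all the code on this page in one language, here is a minimal sketch of the same idea in Julia rather than C intrinsics; the brighten! function and the gain value are made-up illustrations, and the @inbounds/@simd hints merely ask the compiler to vectorise the loop when the hardware allows it.
function brighten!(pixels::Vector{Float32}, gain::Float32)
    @inbounds @simd for i in eachindex(pixels)
        pixels[i] *= gain          # intended to compile down to wide SIMD multiplies
    end
    return pixels
end

img = rand(Float32, 1_000_000)     # a fake "image" as a flat array of pixel values
brighten!(img, 1.2f0)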
Multi-threading refers to when you have multiple threads of execution, normally running on different CPU cores, at the same time. It is higher-level than SIMD, and threads typically live a lot longer. One thread might be acquiring images, another thread might be detecting objects, another might be tracking the objects and a final one might be displaying the results. A feature of multi-threading is that the threads all share the same address space, so data in one thread can be seen and manipulated by the others. This makes threads light-weight compared to multiple processes, but can make for harder debugging. Threads are called "light-weight" because they typically take much less time to create and start than full-blown processes.
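A hedged sketch of the multi-threading idea, again in Julia (run with several threads, e.g. julia -t 4; threaded_sum and its chunking scheme are invented for illustration): all threads read the same shared array, but each writes only its own slot of the partial-sums vector, so no locking is needed.
using Base.Threads

function threaded_sum(data)
    nchunks = nthreads()
    chunk = cld(length(data), nchunks)       # split the array into one chunk per thread
    partials = zeros(eltype(data), nchunks)
    @threads for c in 1:nchunks
        lo = (c - 1) * chunk + 1
        hi = min(c * chunk, length(data))
        s = zero(eltype(data))
        for i in lo:hi
            s += data[i]                     # every thread sees the same shared `data`
        end
        partials[c] = s                      # each thread writes only its own slot
    end
    return sum(partials)
end

threaded_sum(rand(10_000_000))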
Multi-processing is similar to multi-threading except each process has its own address space, so if you want to share data between the processes, you need to work harder to do it. It has the benefit over multi-threading that one process is unlikely to crash another or interfere with its data, making it somewhat easier to debug.
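And a corresponding multi-processing sketch using Julia's Distributed standard library (heavy is a made-up stand-in for real work): each worker is a separate OS process with its own address space, which is why the function has to be explicitly defined on every worker and the data shipped back and forth.
using Distributed

addprocs(4)                                # four separate worker processes, separate address spaces
@everywhere heavy(n) = sum(sin, 1:n)       # unlike threads, the code must be defined on each worker
results = pmap(heavy, fill(10^7, 8))       # inputs are sent to the workers, results are sent back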
If I make an analogy with cooking a meal, then SIMD is like lining up all your green beans and slicing them in one go. The single instruction is "slice", the multiple, repeated data are the beans. In fact, lining things up ("memory alignment") is an important aspect of SIMD.
Then multi-threading is like having multiple chefs all taking ingredients from a shared vegetable larder, preparing them and putting them in a big shared cook-pot. You get the job done faster because there are multiple chefs - analogous to CPU cores - working at once.
In this little analogy, multi-processing is more like each chef having his own vegetable larder and cook-pot, so if one chef runs out of vegetables, or cooking gas, the others are not affected - things are more independent. You get the job done faster, because there are more chefs, just you have to do a bit more organisation (or "synchronisation") to get all the chefs to serve their meals at the same time at the end.
There is nothing to prevent an application using SIMD as well as multi-threading and multi-processing at the same time. Going back to the cooking analogy, you can have multiple chefs (multi-threading or multi-processing) who are all slicing their green beans efficiently (SIMD). It is my impression that most applications either use SIMD and multi-threading, or SIMD and multi-processing, but relatively few use both multi-threading and multi-processing. YMMV on this bit!

What's the overhead of the different forms of parallelism in Julia v0.5?

As the title states, what is the overhead of the different forms of parallelism, at least in the current implementation of Julia (v0.5, in case the implementation changes drastically in the future)? I am looking for some "practical measures", some general heuristics or ballparks to keep in my head for when it can be useful. For example, it's pretty obvious that multiprocessing won't give you gains in a loop like:
addprocs(4)                    # start 4 worker processes
@parallel (+) for i = 1:4      # v0.5 parallel reduction across the workers
    rand()
end
since each process only takes one random number. But is there a general heuristic for knowing when it will be worthwhile? Also, what about a heuristic for threading? It's surely lower overhead than multiprocessing, but with 4 threads, for example, for what N is it a good idea to multithread:
A = rand(4)
Base.Threads.@threads (+) for i = 1:N    # hypothetical threaded reduction, as noted below
    A[i%4+1]
end
(I know there isn't a threaded reduction right now, but let's act like there is, or edit with a better example). Sure, I can benchmark every example, but some good rules to keep in mind would go a long way.
In more concrete terms: what are some good rules of thumb?
How many numbers do you need to be adding/multiplying before threading gives performance enhancements, or before multiprocessing gives performance enhancements?
How much does this depend on Julia's current implementation?
How much does it depend on the number of threads/processes?
How much does it depend on the architecture? Are there good rules for knowing when the threshold should be higher/lower on a particular system?
What kinds of applications violate these heuristics?
Again, I'm not looking for hard rules, just general guidelines to guide development.
A few caveats: 1. I'm speaking from experience with version 0.4.6 (and prior) and haven't played with 0.5 yet (but, as I hope my answer below demonstrates, I don't think this is essential to the response I give). 2. This isn't a fully comprehensive answer.
Nevertheless, from my experience, the overhead for multiple processes itself is very small provided that you aren't dealing with data movement issues. In other words, in my experience, any time that you ever find yourself in a situation of wishing something were faster than a single process on your CPU can manage, you're well past the point where parallelism will be beneficial. For instance, in the sum of random numbers example that you gave, I found through testing just now that the break-even point was somewhere around 10,000 random numbers. Anything more and parallelism was the clear winner. Generating 10,000 random numbers is trivial for modern computers, taking a tiny fraction of a second, and is well below the threshold where I'd start getting frustrated by the slowness of my scripts and want parallelism to speed them up.
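For what it's worth, here is a rough sketch of how one might locate that break-even point on one's own machine, using the v0.5-era syntax from the question; the exact crossover will depend heavily on your hardware, and the loop bounds are only illustrative.
addprocs(4)
for n in (10^3, 10^4, 10^5, 10^6)
    print("serial   n=$n: ")
    @time begin
        s = 0.0
        for i in 1:n
            s += rand()
        end
    end
    print("parallel n=$n: ")
    @time @parallel (+) for i in 1:n
        rand()
    end
end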
Thus, I at least am of the opinion, that although there are probably even more wonderful things that the Julia developers could do to cut down on the overhead even more, at this point, anything pertinent to Julia isn't going to be so much of your limiting factor, at least in terms of the computation aspects of parallelism. I think that there are still improvements to be made in terms of enhancing both the ease and the efficiency of parallel data movement (I like the package that you've started on that topic as a good step. You and I would probably both agree there's still a ways more to go). But, the big limiting factors will be:
How much data do you need to be moving around between processes?
How much read/write to your memory do you need to be doing during your computations? (e.g. flops per read/write)
Aspect 1. might at times lean against using parallelism. Aspect 2. is more likely just to mean that you won't get so much benefit from it. And, at least as I interpret "overhead," neither of these really fall so directly into that specific consideration. And, both of these are, I believe, going to be far more heavily determined by your system hardware than by Julia.

Do limitations of CPU speed and memory prevent us from creating AI systems?

Many technology optimists say that in 15 years the speed of computers will be comparable with the speed of the human brain. This is why they believe that computers will achieve the same level of intelligence as humans.
If Moore's law holds, then every 18 months we should expect a doubling of CPU speed. 15 years is 180 months, so we will get 10 doublings, which means that in 15 years computers will be 2^10 = 1024 times faster than they are now.
But is speed really the root of the problem? If it were, we would be able to build an AI system NOW; it would just be 1024 times slower than in 15 years. That means answering a question would take 1024 seconds (about 17 minutes) instead of an acceptable 1 second. But do we have a strong (but slow) AI system now? I think not. Even if today (2015) we gave a system 1 hour instead of 17 minutes, or 1 day, 1 month, or even 1 year, it would still be unable to answer complex questions formulated in natural language. So it is not speed that causes the problem.
It means that in 15 years our artificial intelligence will not be 1024 times faster than now (because we have no artificial intelligence to begin with). Instead, our artificial "stupidity" will be 1024 times faster than now.
We need both faster hardware and better algorithms. Of course speed alone is not enough as you pointed out.
We need self-modifying, meta-learning algorithms capable of creating hypotheses and performing experiments to verify them (like humans do). Systems that are learning to learn and self-improving. Algorithms that can prove that a given self-modification is optimal in a certain sense and will lead to even better self-modifications in the future. Systems that can reflect on and inspect their own software (can you call it consciousness?). Such research is being done and may create superhuman intelligence in the future, or even a technological singularity as some believe.
There is one problem with this approach, though. People doing this research usually assume that consciousness is computable, that it is all about intelligence. They don't take into account experiences like pleasure and pain, which have nothing to do (in my opinion) with computation or intellect. You can understand pain only through experience (not intellectual speculation). Setting a variable pleasure to 5, or behaving as if one feels pleasure, is very different from experiencing pleasure. Some people say that feelings originate in the brain, so it is enough to understand the brain. Not necessarily. A child can ask: "How did they put the small people inside the TV box?" Of course the TV is just a receiver and there are no small people inside. The brain might be a receiver too. Do we need higher knowledge for feelings and other experiences?
The question has to be answered in the context of computation and complexity.
Every algorithm has its own complexity and running time (see Big O notation). There are problems which are non-computable, such as the halting problem. For these problems it is proven that no algorithm exists, independent of the hardware.
Computable problems are described by the number of steps an algorithm requires with respect to the size of the input. As the input size increases, the execution time of the algorithm also increases. These algorithms can be roughly categorized into two groups: exponential-time and non-exponential-time algorithms. The running time of an exponential-time algorithm grows drastically with the input size and quickly becomes intractable.
The execution time of these problems can be improved with better hardware, but the complexity will always be the same. This means that no matter what CPU you use, the algorithm will always require the same number of steps; the hardware helps you get an answer in less time, but the hardness of the problem remains the same. Thus, hardware limitations are not what prevents us from creating an AI system. For instance, you can use parallel programming (e.g. a GPU) to improve the execution time of an algorithm drastically, but the algorithm is still the same as a normal CPU algorithm.
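As a small illustration of that point (written in Julia like the other code on this page; the brute-force subset_sum function is made up for the example): the loop below inspects all 2^n subsets of its input, so faster hardware only shrinks the constant factor, while adding a single extra element still doubles the work.
function subset_sum(xs::Vector{Int}, target::Int)
    n = length(xs)
    for mask in 0:(2^n - 1)                # 2^n candidate subsets: exponential in the input size
        s = 0
        for i in 1:n
            if (mask >> (i - 1)) & 1 == 1  # include element i when its bit is set in the mask
                s += xs[i]
            end
        end
        s == target && return true
    end
    return false
end

subset_sum([3, 34, 4, 12, 5, 2], 9)        # true, because 4 + 5 == 9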
I would say no. As you showed, speed is not the only factor of intelligence. I for one would think Language is, yes language. Language is the primary skill we learn as humans, so why not for computers? Language gives an understanding that can be understood across the globe, given you know that language. Humans use nonverbal and verbal language to communicate. But I honestly think it really works something like this:
Humans go through experiences. These experiences have a bigger impact on our lives the closer we are to our birth date, or the more emotional they are. For example, the first time we are told "no" means a lot more to us as an infant than as a 70-year-old adult. These get stored as either long-term or short-term memory and correlated to that event later on in life for reference. We mainly store events to learn from them, to prevent negative experiences or promote positive ones.
Think of it as a tag cloud. The more often you do task A, the bigger the cloud is in memory. We then store crucial details such as type of emotion, location, smells etc. Now when we reference them again from memory we pick out those details and create a logical sentence:
Touching that stove hurt me when I was at grandma's house.
All of the key words in that sentence would have to be stored to have a complete memory.
Now inside of this sentence we have learned a lot more things than just being hurt by the stove at grandma's house. We have learned that stoves can be hot and dangerous, and that grandma allows one in her house. We also learned how long it takes to heal from such an event, emotionally and physically, to gauge how important the event is. And so much more. So we also store this sub-event information inside of other knowledge bubbles. And these bubbles continue to grow exponentially.
Now when asked: Are stoves dangerous?
You can identify the words in the sentence:
are, stoves, dangerous, question
and reference the definition of dangerous as: hurt, bad
and then provide more evidence that this is true, such as personal experience to result in:
Yes, stoves are dangerous because I was hurt by one at grandma's house.
So intelligence seems to be a mix of events, correlation and data retrieval to solve some solution. I'm sure there's a lot more to it than that but this is just my understanding of intelligence.

Threads and processes? [closed]

In computer science, a thread of execution is the smallest unit of processing that can be scheduled by an operating system.
This is very abstract!
What would be real world/tangible/physical interpretation of threads?
How could I tell if an application (by looking at its code) is single-threaded or multi-threaded?
Are there any benefits in making an application multi threaded? In which cases could one do that?
Is there anything like multi process application too?
Is technology a limiting factor in deciding whether an application could be made multi-threaded, or is it just a design choice? E.g. is it possible to make multi-threaded applications in Flex?
If anyone could give me an example or an analogy to explain these things, it would be very helpful.
What would be real world/tangible/physical interpretation of threads?
Think about a thread as an independent unit of execution that can be executed concurrently (at the same time) on a given CPU(s). A good analogy would be multiple cars driving around independently on the same road. Here a "car" is a thread, and a road is that CPU. So the function of all these cars is somewhat the same: "drive people around", but the kicker is that people should not stand in line to wait for a single car => they can drive at the same time in different cars (threads).
Technically, however, depending on the number of CPU cores and the overall hardware/OS architecture, there will be some context switching, where the CPU makes it seem as though everything happens simultaneously, but in reality it switches from one thread to another.
How could I tell if an application (by looking at its code) is single-threaded or multi-threaded?
This depends on several things: the language the code is written in, your understanding of that language, what the code is trying to accomplish, etc. Usually you can tell, but I do not believe this will solve anything. If you already have access to the code, it's a lot simpler to just ask the developer or, in the case of an open source product, read the documentation or post on user forums to figure it out.
Are there any benefits in making an application multi threaded? In which cases could one do that?
Yes, think about the car example above. The benefit is concurrent and decoupled execution of things. For example, you need to calculate how many stars are in the known universe. You can either have a single process go over all the stars and count them, or you can "spawn" multiple threads and give each thread a galaxy to solve: "thread 1 counts all the stars in the Milky Way, thread 2 counts all the stars in Andromeda, etc."
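A hedged sketch of that star-counting idea in Julia (start with several threads, e.g. julia -t 4; the random arrays stand in for per-galaxy data and count(>(0.5), ...) stands in for the expensive counting work):
using Base.Threads

galaxies = [rand(10^6) for _ in 1:8]        # one made-up dataset per "galaxy"
counts = zeros(Int, length(galaxies))

@threads for g in eachindex(galaxies)       # each thread takes whole galaxies to count
    counts[g] = count(>(0.5), galaxies[g])  # each iteration writes only its own slot
end

total = sum(counts)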
Is there anything like multi process application too?
That is a matter of terminology, but the cleanest answer would be yes. For example, in Erlang, the VM is capable of starting many very lightweight processes very fast, where each process does its own thing. On Unix servers, if you run "ps -aux / ps -ef", for example, you'd see multiple "processes" executing, where each process may in fact have many threads doing its job.
Is technology a limiting factor in deciding whether an application could be made multi-threaded, or is it just a design choice? E.g. is it possible to make multi-threaded applications in Flex?
A 2-threaded application is already multithreaded. You most likely already have 2 or more cores on your laptop / PC, so technology will pretty much always encourage you to utilize those cores rather than limit you. Having said that, the problem and requirements should drive the decision, not the technology or tools. But if you do decide to write a multithreaded application, make sure you understand all the gotchas and the solutions to them. The best language I have used so far to solve concurrency is Erlang, since concurrency is just built into it. However, other languages like Scala, Java, C#, and mostly functional languages, where shared state is not a problem, would also be a good choice.

Why don't large programs (such as games) use loads of different threads? [closed]

I don't know how commercial games work inside very much, but the open source games I have come across don't seem to be massively into threading. The same goes for most other desktop applications; normally two or three threads seem to be used (e.g. program logic and GUI updates).
Why don't games have many threads? E.g. separate threads for physics, sound, graphics, AI etc.?
I don't know about the games that you have played, but most games run the sound on a separate thread. Networking code, at least the socket listeners run on a separate thread.
However, the rest of the game engine generally runs in a single thread. There are reasons for this. For example, most processing in a game runs a single chain of dependencies: graphics depend on the state of the physics engine, as does the artificial intelligence. Designing for multiple threads means that you have to have frame latency between the various subsystems for concurrency. You get quicker response time and snappier game play if these subsystems are computed linearly each frame. The part of the game that benefits the most from parallelization is of course the rendering subsystem, which is offloaded to highly parallelized graphics accelerator cards.
You need to think, what are the actual benefits of threads? Remember that on a single core machine, threads don't actually allow concurrent execution, just the impression of it. Behind the scenes, the CPU is context-switching between the different threads, doing a little work on each every time. Therefore, if I have several tasks that involve no waiting, running them concurrently (on a single core) will be no quicker than running them linearly. In fact, it will be slower, due to the added overhead of the frequent context-switching.
If that is the case then, why ever use threads on a single core machine? Well firstly, because sometimes tasks can involve long periods of waiting on some external resource, such as a disk or other hardware device, to become available. Whilst a task is in a waiting stage, threading allows other tasks to continue, thus using the CPU's time more efficiently.
Secondly, tasks may have a deadline of some sort in which to complete, particularly if they are responding to an event. The classic example is the user interface of an application. The computer should respond to user action events as quickly as possible, even if it is busy performing some other long running task, otherwise the user will become agitated and may believe the application has crashed. Threading allows this to happen.
As for games, I am not a games programmer, but my understanding of the situation is this: 3D games create a programmatic model of the game world; players, enemies, items, terrain, etc. This game world is updated in discrete steps, based on the amount of time that has elapsed since the previous update. So, if 1ms has passed since the last time round the game loop, the position of an object is updated by using its velocity and the elapsed time to determine the delta (obviously the physics is a bit more complicated than that, but you get the idea). Other factors such as AI and input keys may also contribute to the update. When everything is finished, the updated game world is rendered as a new frame and the process begins again. This process usually occurs many times per second.
When we think about the game loop in this way, we can see that the engine is in fact achieving a very similar goal to that of threading. It has a number of long running tasks (updating the world's physics, handling user input, etc), and it gives the impression that they are happening concurrently by breaking them down into small pieces of work and interleaving these pieces, but instead of relying on the CPU or operating system to manage the time spent on each, it is doing it itself. This means it can keep all the different tasks properly synchronized, and avoid the complexities that come with real threading: locks, pre-emption, re-entrant code, etc. There is no performance implication to this approach either, because as we said a single core machine can only really execute code linearly anyway.
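A minimal sketch of such a loop in Julia (the World struct with its single moving object, the frame count and the crude 60-updates-per-second pacing are all invented for illustration; a real engine would also fold input handling, AI and rendering into each pass):
mutable struct World
    x::Float64                       # position of one object
    v::Float64                       # its velocity
end

function update!(w::World, dt)
    w.x += w.v * dt                  # physics step: move by velocity times elapsed time
end

function game_loop(w::World; frames = 120)
    last = time()
    for _ in 1:frames                # a real loop runs until the player quits
        now = time()
        update!(w, now - last)       # advance the world by however much time has passed
        last = now
        # input handling, AI updates and rendering of the new frame would happen here too
        sleep(1 / 60)                # crude pacing at roughly 60 updates per second
    end
end

game_loop(World(0.0, 1.0))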
Things change when we have a multi-core system. Now, tasks can be running genuinely concurrently and there may indeed be a benefit to using threading to handle different parts of the game world updates, so long as we can manage to synchronise the results to render consistent frames. We would expect, therefore, that with the advent of multi-core systems, game engine developers would be working on this. And so it turns out, they are. Valve, the makers of Half Life, have recently introduced multi-processor support into their Source Engine, and I imagine many other engine developers are following suit.
Well, that turned out a little longer than I expected. I'm not a threading or games expert, but I hope I haven't made any especially glaring errors. If I have I'm sure people will correct me :)
The main reason is that, as elegant as it sounds, using multiple threads in a program as complicated as a 3D game is really, really, really difficult. Also, before the fairly recent introduction of low cost multi-core systems, using multiple threads did not offer much of a performance incentive.
Many games these days are using "task" or "job" systems for parallel processing. That is, the game spawns a fixed number of worker threads which are used for multiple tasks. Work is divided up into small pieces and queued, then sent to be processed by the worker threads as they become available.
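A hedged sketch of that task/job pattern in Julia (run with several threads for real parallelism; the Channel sizes and the throwaway closures used as jobs are made up for illustration): a fixed pool of worker tasks keeps pulling small pieces of work from a shared queue until it is closed.
using Base.Threads

jobs = Channel{Function}(128)               # queue of small pieces of work
results = Channel{Any}(128)

workers = map(1:nthreads()) do _
    Threads.@spawn for job in jobs          # each worker pulls jobs until the queue is closed
        put!(results, job())
    end
end

for k in 1:10                               # enqueue ten small, independent jobs
    put!(jobs, () -> k * sum(abs2, rand(100_000)))
end
close(jobs)

out = [take!(results) for _ in 1:10]        # collect the ten results
foreach(wait, workers)                      # make sure every worker has finished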
This is becoming especially common on consoles. The PS3 is based on Cell architecture so you need to use parallel processing to get the best performance out of the system. The Xbox 360 can emulate a task/job setup that was designed for PS3 as it has multiple cores. You would probably find for most games that a lot of the system design is shared among the 360, PS3, and PC codebases, so PC most likely uses the same sort of tactic.
While it is hard to write threadsafe code, as many of the other answers indicate, I think there are a few other reasons for the things you're seeing:
First, many open source games are a few years old. Especially with this generation of consoles parallel programming is becoming popular and even necessary as mentioned above.
Second, very few open source projects seem concerned about getting the highest possible performance. As John Carmack pointed out to the Utah GLX project, highly optimized code is often harder to maintain than unoptimized code, so the latter would generally be preferred in open source contexts.
Third, I wouldn't take a small number of threads created by a game to mean that it's not using parallel jobs well.
I was about to post the same thing as William, but I'd like to expand on it a little bit. It's very hard to write optimal code for the future. Given the choice between writing something that will scale to hardware you don't have vs. writing something that will work on hardware you do have, most people will choose to do the latter. Since the single-core paradigm has been with us for so long, most code that has been written (especially for games, where there is extreme pressure to get it out the door) isn't that future proof.
x86 has been very kind to game programmers, since we haven't had to think about the ramifications of less forgiving hardware platforms.
The fact that everybody here is correctly claiming that multithreading is hard is very sad. We desperately need to make concurrency systems easy.
Personally I think we are going to need a paradigm shift and new tools.
Other than the technical challenges of programming for multiple cores, commercial games have to run well on low end systems w/o multiple cores to make money.
Now that multi-core processors have been out for a while and the major game consoles have multiple cores it's only a matter of time before dual core shows up on the minimum system requirements list for PC games.
Here's a link to an interview with Orion Granatir from Intel where he's talking about getting game developers to take advantage of multi-threading.
There are many issues with race conditions and data locking when using lots of threads. Since the different parts of games are fairly reliant on each other it doesn't make much sense to do all the extra engineering required to use loads of threads.
It's very difficult to use threads without problems, and most GUI APIs are based on event driven coding anyway. Threads mandate the use of locking mechanisms which add delay to the code, and often that delay is unpredictable depending on who is currently holding the lock.
It seems sensible to me to have a single (or perhaps very few) threads handling things in an event driven way rather than hundreds of threads all causing strange and unrepeatable bugs.
Threads are dead, baby.
Realistically, in game development, threads don't scale beyond offloading very dedicated tasks like networking and loading. Job-systems seem to be the only way forward, given 8 CPU systems are becoming more commonplace even on PCs. And you can pretty much guarantee that upcoming super-multicore systems like Intel's Larrabee will be job-system based.
This has been a somewhat painful realization on Playstation3 and XBOX360 projects, and it seems now even Apple has jumped on board with their "revolutionary" Grand Central Dispatch system in Snow Leopard.
Threads have their place, but the naive promise of "put everything in a thread and it will all run faster" simply doesn't work in practice.
