Is Visual Studio optimised for hyper-threaded microprocessors? [closed] - multithreading

I would like to know whether the most common software development suites, such as Microsoft Visual Studio and its compilers, are optimized to take full advantage of the Hyper-Threading feature. Is it worth getting a hyper-threaded processor for working with this software?
I have read many reviews saying that hyper-threading is only useful for intensive multi-threaded applications like video editors, etc. Some reviews say that software which is not optimized for hyper-threading can suffer a decrease in performance, and many people run their systems with hyper-threading turned off.
As I am a novice programmer, I would like to know whether those arguments and reviews hold true in the field of programming.
Again, I am talking about the compilers and the IDE, not the applications that I am going to create! (As of now I don't know how to create multi-threaded applications!)

Since you have not made up your mind on which IDE/development platform to use, there may be other factors to consider besides threading. Most high-level languages and compilers do support thread pooling, which is probably what you are looking for. I can't speak for compilers I have not used, so I will leave a link to the article below:
.Net and hyper threading
It appears to be a bit dated, but the basic concepts are explained.
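To make the thread-pooling idea concrete, here is a minimal sketch in Go rather than .NET, so treat it as an illustration of the concept and not of the linked article: a fixed pool of workers is sized from runtime.NumCPU(), which on a hyper-threaded machine reports logical cores, and jobs are handed to the pool over a channel.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // On a hyper-threaded CPU, NumCPU reports logical cores,
    // so the pool is sized to what the OS scheduler can actually run at once.
    workers := runtime.NumCPU()

    jobs := make(chan int, 64)
    var wg sync.WaitGroup

    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for j := range jobs {
                // Placeholder for real work (e.g. compiling one file).
                fmt.Printf("worker %d processed job %d\n", id, j)
            }
        }(w)
    }

    for j := 0; j < 16; j++ {
        jobs <- j
    }
    close(jobs)
    wg.Wait()
}

The point of the sketch is only that a pool reuses a small, CPU-sized set of workers instead of creating a thread per task; whether the extra logical (hyper-threaded) cores help depends on how much the workload keeps them busy.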

Related

What is extreme programming and when is it used? [closed]

I am new to programming and I try to research as much as possible in this field. I recently came across the expression "extreme and pair programming". Pair programming is an easy term, and I found quite clear documentation about it. But extreme programming... I found some articles about it, but the explanations weren't very good. All I understood is that extreme programming is an Agile development framework. But why should I use it, and what is the difference between it and other programming styles?
Can anyone explain to me very clearly what extreme programming is?
Extreme programming (often called XP) is an agile framework that was developed by Kent Beck in the 1990s.
There aren't too many people who use the whole XP framework these days, but a lot of the engineering practices it popularised are very common.
Examples include:
Pair programming
Test driven development
Continuous integration
Frequent releases
Constant refactoring
XP favours an approach of writing the minimum amount of code to solve the problem at hand. Things like optimisation and forward planning are generally a low priority. This is the 'extreme' part of extreme programming.
The idea is that you write code to solve the current requirement. If you then find you need the code to be faster, or more scalable, etc., then you refactor it.
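As a hedged illustration of one of those practices, test-driven development, here is a minimal Go sketch. The package name calc and the Add function are hypothetical and exist only for this example: the test is written first, then the minimum code needed to make it pass, with any further polish left to later refactoring.

// calc_test.go — written first; it fails until Add exists and behaves as expected.
// (calc and Add are hypothetical names used only for illustration.)
package calc

import "testing"

func TestAdd(t *testing.T) {
    if got := Add(2, 3); got != 5 {
        t.Fatalf("Add(2, 3) = %d, want 5", got)
    }
}

// calc.go — written second: the minimum code that makes the test pass.
package calc

func Add(a, b int) int { return a + b }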

What are the disadvantages of an M:N threading model (e.g. goroutines)? [closed]

M:N threading is a model which maps M user threads onto N kernel threads. This enables a large number (M) of user threads to be created, due to their light weight, while still allowing (N-way) parallelism.
This seems like a win-win to me, so why do so few languages/implementations use this threading model? The only examples I am aware of are Go's "goroutines" and Erlang's processes.
What are the disadvantages of M:N threading? Why do other languages not use this threading model that, on the surface, seems so promising?
Partly it's because "it's what everyone else is doing". While M:N threading did exist before Go, all mainstream languages (C, C++, Perl, Java, C#, Python, Ruby, PHP) used threads, and many of them (Python, Ruby) did that poorly. Go is the first popular language that shows M:N threading can work well.
Partly it's because threads are the native primitive of the OS.
Implementing M:N threading makes interop with OS code/C libraries harder and a bit slower. When calling C/OS code, Go has to switch from a small goroutine stack to a regular OS stack.
Many other popular languages (Python, Ruby) rely more heavily on the ability to call C code than Go does, so it's more important for them to optimize for that.
Good M:N threading interop with OS/C code is not impossible (Go does it decently), but it's easier to achieve if you do what the OS does.
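For a concrete picture of what M:N scheduling buys you, here is a minimal Go sketch: many goroutines (the M user threads) are multiplexed by the runtime onto a small number of OS threads (the N, bounded by GOMAXPROCS), which is exactly the lightweight-but-still-parallel trade-off described above.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // N: the number of OS threads executing Go code in parallel.
    fmt.Println("GOMAXPROCS (kernel-level parallelism):", runtime.GOMAXPROCS(0))

    // M: far more goroutines than you could cheaply create as kernel threads.
    const m = 100000
    var wg sync.WaitGroup
    counts := make([]int, m)

    for i := 0; i < m; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            counts[i]++ // trivial work; each goroutine starts with only a few KB of stack
        }(i)
    }
    wg.Wait()
    fmt.Println("goroutines run:", m)
}

Creating 100,000 kernel threads this way would exhaust memory or scheduler capacity on most systems; the runtime-managed mapping is what makes the goroutine count cheap.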

What exactly does it mean for a programming language to be simple? [closed]

What factors are important? How do you know if a given programming language is "simple" or "simpler" than another language?
I'm not sure if this is a fair question to ask, since different languages serve different purposes and it might not really be comparing apples to apples.
However, with that said, memory management would come to mind. One can argue that Java is a "simpler" language than C++, since it has a garbage collector that can deal with some of the complexities around memory management, instead of forcing you to do it yourself.
From my perspective, these are the points that define the complexity of a language:
Variation of syntax from common pseudocode and constructs
Ease of developing a structure for real-life entities like objects
Methods of structure enforcement at compile time.
Memory management strategy (allocation/deallocation)
Code reusability
Ease of managing code headers and directives
Inbuilt libraries
Relative installation package sizes
Data exchange capabilities, such as over a network or via files
Process handling like thread management
Relative brevity of the code
Speed of compilation
Developer community size and documentation
OpenSource implementations
Platform dependence
And many more could be added to this list.

CUDA, Immutability vs. Mutability [closed]

I am a firm believer in using immutability where possible, so that classical synchronization is not needed for multi-threaded programs. This is one of the core concepts used in functional languages.
I was wondering what people think of this for CUDA programs. I know developing for GPUs is different from developing for CPUs, and being a GPU n00b, I'd like more knowledgeable people to give me their opinion on the matter at hand.
Thanks,
Gabriel
In CUDA programming, immutability is also beneficial, and sometimes even necessary.
For block-wise communication, immutability may allow you to skip some __syncthreads() calls.
For grid-wise communication, there is no whole-grid synchronization instruction at all. That is why, in the general case, guaranteeing that a change made by one block is visible to another block requires kernel termination: blocks may be scheduled in such a way that they actually run in sequence (e.g. on a weak GPU that is unable to run more blocks in parallel).
Partial communication is, however, possible through atomic operations and __threadfence(). You can implement, for example, task queues that permit blocks to fetch new assignments safely. These kinds of operations should be done rarely, though, as atomics may be time consuming (although with global L2 caching this is now better than on older GPUs).
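The principle the question leans on, that immutable shared data needs no classical synchronization while shared mutable data does, is language-agnostic. Here is a minimal sketch of it in Go rather than CUDA (the CUDA-specific mechanisms such as __syncthreads() and __threadfence() are covered above); the variable names are illustrative only.

package main

import (
    "fmt"
    "sync"
)

func main() {
    // Immutable shared input: written once before the workers start and
    // only read afterwards, so no lock or atomic is needed to read it.
    input := []int{1, 2, 3, 4, 5, 6, 7, 8}

    // Mutable shared output: concurrent writes do need synchronization.
    var (
        mu  sync.Mutex
        sum int
        wg  sync.WaitGroup
    )

    for _, v := range input {
        wg.Add(1)
        go func(v int) {
            defer wg.Done()
            squared := v * v // reads the immutable data freely

            mu.Lock() // only the mutable accumulator needs a lock
            sum += squared
            mu.Unlock()
        }(v)
    }
    wg.Wait()
    fmt.Println("sum of squares:", sum)
}

The more of your shared state you can keep on the "written once, read many" side, the less synchronization (and the fewer atomics or fences on a GPU) the rest of the program needs.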

How does Tir compare to other Lua web frameworks? [closed]

How does Zed Shaw's Lua web framework, Tir, compare to other Lua web frameworks such as Kepler, LuCI, etc.?
A comparison on things like:
maturity of code base
features/functionality
performance
ease of use
UPDATE:
Since Tir is based on the use of Lua's coroutines, doesn't this imply that Tir will never be able to scale well? The reason being that Lua's coroutines cannot take advantage of multi-core/multi-processor systems, given that coroutines are implemented in Lua as cooperative/collaborative threads (as opposed to pre-emptive ones).
Tir is much newer than Kepler or LuCI, so the code isn't nearly as mature. I would rank Tir as experimental, right now. The same factor also means that it has significantly fewer features.
It does have a very pleasant continuation passing style of development available though, through its coroutine based flow stuff.
I would rate it, personally, as fun for experimentation, but probably not ready for heavy lifting until Zed stabilizes it more :-)
This video from PyCon 2011 basically says that you scale on a multi-core or multi-processor system by running more workers; under high-load conditions the memory advantage gives better performance.
In the video it's said that at Meebo they have been using this approach for months under huge load.
The video is Python-specific, so it only addresses the coroutine-scaling part of the question. The video is about thirty minutes long.
