I don't know how to implement multithreading in Scala. Can anyone suggest how to implement it and provide some samples of multithreading? Thank you.
You have several options.
Scala Akka actor system
Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM.
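A minimal sketch using the classic akka-actor API (the actor name Greeter, the system name "demo", and the message are placeholders, not anything Akka prescribes):

    import akka.actor.{Actor, ActorSystem, Props}

    // A tiny actor that handles one kind of message.
    class Greeter extends Actor {
      def receive = {
        case name: String => println(s"Hello, $name")
      }
    }

    object Main extends App {
      val system  = ActorSystem("demo")
      val greeter = system.actorOf(Props[Greeter], "greeter")
      greeter ! "Scala"   // fire-and-forget; the actor processes messages one at a time
      system.terminate()  // shut the actor system down when done
    }

Each actor processes its mailbox sequentially, so you get concurrency between actors without locks inside them.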
Futures and Promises
Futures provide a way to reason about performing many operations in parallel, in an efficient and non-blocking way. A Future is a placeholder object for a value that may not yet exist. Generally, the value of the Future is supplied concurrently and can subsequently be used. Composing concurrent tasks in this way tends to result in faster, asynchronous, non-blocking parallel code.
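A minimal sketch with the standard library (the numbers are arbitrary, and Await appears only to make the demo observable; real code would keep composing instead of blocking):

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    val f: Future[Int] = Future { 21 * 2 }   // runs on the global thread pool
    val g: Future[Int] = f.map(_ + 1)        // compose without blocking
    println(Await.result(g, 1.second))       // prints 43; blocking is for the demo only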
Java Concurrency Model
Scala concurrency is built on top of the Java concurrency model. On Sun JVMs, with an IO-heavy workload, we can run tens of thousands of threads on a single machine. A Thread takes a Runnable, and you have to call start on the Thread in order for it to run the Runnable.
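For example, a bare JVM thread in Scala looks like this (just a sketch; the println stands in for real work):

    // A plain JVM thread wrapping a Runnable; start() begins execution.
    val t = new Thread(new Runnable {
      def run(): Unit = println(s"running on ${Thread.currentThread.getName}")
    })
    t.start()
    t.join()  // wait for the thread to finish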
Related
To better understand concurrent computing, I would like to see concrete examples of multithreading in projects. Could you list some examples that you came across and describe what responsibilities each thread has?
Please be patient. I'm still learning. :-)
I have seen examples where several threads are used for different purposes: one for handling audit logging, one for handling messaging with external systems, and one for the applicative routine (where the actual transaction happens). This is not, however, a concurrent system per se, as the threads are handling separate tasks. A sketch of the first pattern follows below.
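Here is that pattern sketched in Scala on the JVM (the [audit] tag and the message are made up for illustration): a dedicated thread drains an audit-log queue while the main thread keeps working.

    import java.util.concurrent.LinkedBlockingQueue

    // Dedicated audit-logging thread: it consumes queued messages,
    // so the main thread never blocks on logging.
    val logQueue = new LinkedBlockingQueue[String]()
    val logger = new Thread(new Runnable {
      def run(): Unit = while (true) println(s"[audit] ${logQueue.take()}")
    })
    logger.setDaemon(true)  // don't keep the JVM alive just for the logger
    logger.start()

    logQueue.put("transaction started")  // hand off and continue immediately
    Thread.sleep(100)                    // demo only: give the daemon a moment to drain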
One can use threads to divide I/O-heavy work: imagine an application processing a lot of files. The basic approach would be to process files one after the other, but then the process would be waiting on I/O for every file. Using a pool of threads and assigning one file to each thread allows the process to keep running: some threads are waiting for I/O, but the others can still keep doing their job. Again, this approach is non-concurrent, as long as you don't process the same file on two different threads (one writing to the file and the other one reading, for example). A sketch follows below.
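A minimal Scala (2.13+) sketch of that pool-per-file idea; the directory path and pool size are arbitrary placeholders:

    import java.nio.file.{Files, Paths}
    import java.util.concurrent.Executors
    import scala.jdk.CollectionConverters._

    val pool = Executors.newFixedThreadPool(8)           // N worker threads
    val dir  = Paths.get("/tmp/inbox")                   // hypothetical input directory

    Files.list(dir).iterator().asScala.foreach { path =>
      pool.submit(new Runnable {
        def run(): Unit = {
          val bytes = Files.readAllBytes(path)           // this thread blocks on I/O...
          println(s"$path: ${bytes.length} bytes")       // ...while the others keep working
        }
      })
    }
    pool.shutdown()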
Running multiple trackers concurrently is also commonly done with threading.
Related
M:N threading is a model which maps M user threads onto N kernel threads. This enables a large number (M) of user threads to be created, due to their light weight, while still allowing (N-way) parallelism.
This seems like a win-win to me, so why do so few languages/implementations use this threading model? The only examples I am aware of are Go's "goroutines" and Erlang's processes.
What are the disadvantages of M:N threading? Why do other languages not use this threading model that, on the surface, seems so promising?
Partly it's because "it's what everyone else is doing". While M:N threading did exist before Go, all mainstream languages (C, C++, Perl, Java, C#, Python, Ruby, PHP) used native 1:1 threads, and many of them (Python, Ruby) did that poorly. Go is the first popular language that shows M:N threading can work well.
Partly it's because threads are the native primitive of the OS. Implementing M:N threading makes interop with OS code and C libraries harder and a bit slower: when calling C or OS code, Go has to switch from the small goroutine stack to a regular OS stack.
Many other popular languages (Python, Ruby) rely more heavily on the ability to call C code than Go does, so it's more important for them to optimize for that.
Good M:N threading interop with OS/C code is not impossible (Go does it decently), but it's easier if you simply do what the OS does.
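The flavor of M:N can be sketched on the JVM (Scala here purely for illustration; the counts 10,000 and 4 are arbitrary). This only approximates the model, since queued tasks can't be preempted mid-run the way goroutines can, but it shows many user-level units of work multiplexed onto a few kernel threads:

    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._

    // M:N in miniature: 10,000 user-level tasks (M) scheduled onto 4 kernel threads (N).
    implicit val ec: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4))

    val tasks = (1 to 10000).map(i => Future(i.toLong * i))  // each task is a queued job, not an OS thread
    val total = Await.result(Future.sequence(tasks), 10.seconds).sum
    println(total)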
Related
I'm worried that the approach to asynchronous exceptions in GHC might be a net loss for many applications.
While the paper explains the design in great detail, it's too complex for average programmers to tell whether this approach provides any benefits at all in their daily work.
Section 2 lists four reasons for the current approach (speculative computation, timeouts, user interrupt and resource exhaustion). In my opinion, three are about the ability to cancel computations, and one is about the ability to recover from resource exhaustion, which I find questionable (is there any publicly available code that demonstrates this?).
In particular, as mentioned in the paper, Java deprecated Thread.stop() because an aborted computation would leave state undefined. Aren't IO actions in GHC subject to the same problem? Add laziness, and the API becomes much more complex in comparison, for no clear benefit to most applications.
To summarize: if GHC used the same approach as Java (safe points, interrupt polling), what would be the consequences for the ecosystem?
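For reference, the Java-style alternative the question mentions looks roughly like this (sketched in Scala on plain JVM threads; the empty work loop is a placeholder for real bounded chunks of work):

    // Cooperative cancellation via interrupt polling:
    // the worker checks its flag at safe points it chooses itself.
    val worker = new Thread(new Runnable {
      def run(): Unit = {
        while (!Thread.currentThread.isInterrupted) {
          // do a bounded chunk of work, then re-check the flag
        }
        println("interrupt observed; cleaning up")
      }
    })
    worker.start()
    Thread.sleep(100)
    worker.interrupt()  // a request, not preemption: the worker stops at its next poll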
Related
I am a firm believer in using immutability where possible, so that classical synchronization is not needed for multi-threaded programs. This is one of the core concepts used in functional languages.
I was wondering what people think of this for CUDA programs. I know developing for GPUs is different from developing for CPUs, and being a GPU n00b, I'd like more knowledgeable people to give me their opinion on the matter at hand.
Thanks,
Gabriel
In CUDA programming, immutability is also beneficial, and sometimes even necessary.
For block-wise communication, immutability may allow you to skip some __syncthreads() calls.
For grid-wise communication, there is no whole-grid synchronization instruction at all. That is why, in the general case, guaranteeing that a change made by one block is visible to another block requires kernel termination. This is because blocks may be scheduled in such a way that they actually run in sequence (e.g., on a weak GPU that cannot run many blocks in parallel).
Partial communication is, however, possible through atomic operations and __threadfence(). You can implement, for example, task queues, permitting blocks to fetch new assignments from them in a safe way. These kinds of operations should be done rarely, though, as atomics may be time-consuming (although with global L2 caching they are now faster than on older GPUs).
Related
How does Zed Shaw's Lua web framework, Tir, compare to other Lua web frameworks such as Kepler, LuCI, etc.?
A comparison on things like:
maturity of code base
features/functionality
performance
ease of use
UPDATE:
Since Tir is based on the use of Lua's coroutines, doesn't this imply that Tir will never be able to scale well? The reason being that Lua's coroutines cannot take advantage of multi-core/multi-processor systems, given that coroutines are implemented in Lua as cooperative/collaborative threads (as opposed to pre-emptive ones)?
Tir is much newer than Kepler or LuCI, so the code isn't nearly as mature. I would rank Tir as experimental, right now. The same factor also means that it has significantly fewer features.
It does have a very pleasant continuation-passing style of development available, though, through its coroutine-based flow stuff.
I would rate it, personally, as fun for experimentation, but probably not ready for heavy lifting until Zed stabilizes it more :-)
This video from PyCon 2011 basically says that you scale on multicore or multiprocessor systems by running more workers, and that under high-load conditions the memory advantage of coroutines gives better performance.
In the video it's said that at Meebo they have used this approach for the last few months under huge load.
The video is Python-specific, so it only addresses the scaling-of-coroutines part of the question. It is about thirty minutes long.