To better understand concurrent computing, I would like to see concrete examples of multithreading in real projects. Could you list some examples you have come across and describe what responsibilities each thread has?
Please be patient. I'm still learning. :-)
I have seen examples where several threads are used for different purposes: one for handling audit logging, one for handling messaging with external systems, one for the applicative routine (where the actual transaction happens). This is not, however, a concurrent system per se, as the threads handle separate tasks.
One can use threads to divide I/O-heavy work: imagine an application processing a lot of files. The basic approach would be to process files one after the other, but then the process would be waiting for I/O for every file processed. Using a pool of threads and assigning one file to each thread allows the process to keep running: while some threads are waiting for I/O, the others can still keep doing their job. Again, this approach is non-concurrent, as long as you don't process the same file on two different threads (one writing to the file and the other one reading, for example).
Running multiple trackers concurrently is also commonly done with threading.
I'm running both a uni-threaded and a multi-threaded version of an application. There is no speed advantage. That said, what is the best way to access an Arc<Mutex<Vec>> and process each entry concurrently?
You cannot process an Arc<Mutex<Vec<T>>> concurrently: the mutex wraps the entire vector, so no thread other than the one holding the lock can access it.
If you know the number of elements up front, you can use an Arc<Vec<Mutex<T>>> instead. This has one mutex per element, so threads lock only the elements they are working on. However, you won't be able to grow or shrink the Vec, since it is shared.
There are also more specialized structures in the Concurrency section of http://lib.rs, with varying semantics, that may fit your needs.
OK, so one can write a custom-made event loop instead of using asyncio's event loop (Writing an EventLoop without using asyncio).
Now the question that arises is: why? Why prefer writing a custom-made event loop over asyncio's?
Why prefer writing a custom made over asyncio's eventloop?
Usually you invent something new when the existing approach doesn't fit your needs, or when you think you can do things more efficiently or conveniently.
First of all, it's worth noting that asyncio itself provides multiple event loop implementations. The reason is that they are built on top of different low-level OS APIs and can behave differently. You can select (or write) the event loop that fits your task best.
Sometimes people create their own event loop implementations for better performance. A good example of such a case is uvloop.
Sometimes people create event loops on top of other, non-asyncio event loops. For example, quamash provides an event loop on top of Qt, which allows writing asynchronous programs using PyQt.
I'm writing a multi-threaded program and all these threads should write their data to a single file.
These threads only write different strings, as a kind of append-only logging.
What's the best practice for sharing a file between threads for output?
For logging (for future questions, make sure you put that information into the question rather than just a comment), there's a strong preference for not having the threads do file access they don't have to, since that would mean logging negatively impacts the performance of everything else on that thread.
For that reason, NathanOliver's suggestion of having the threads write to a shared container, with one dedicated thread dumping that container to the file, would probably be the best option for you.
I don't know how to implement multithreading in Scala. Can anyone suggest how to implement it and provide some samples of multithreading? Thank you.
You have several options.
Scala Akka actor system
Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM.
Futures and Promises
Futures provide a way to reason about performing many operations in parallel, in an efficient and non-blocking way. A Future is a placeholder object for a value that may not yet exist. Generally, the value of the Future is supplied concurrently and can subsequently be used. Composing concurrent tasks in this way tends to result in faster, asynchronous, non-blocking parallel code.
Java Concurrency Model
Scala concurrency is built on top of the Java concurrency model. On Sun JVMs, with an I/O-heavy workload, we can run tens of thousands of threads on a single machine. A Thread takes a Runnable, and you have to call start on the Thread in order for it to run the Runnable.
I am a firm believer in using immutability where possible, so that classical synchronization is not needed in multi-threaded programs. This is one of the core concepts of functional languages.
I was wondering what people think of this for CUDA programs, I know developing for GPUs is different from developing for CPUs and being a GPU n00b I'd like more knowledgeable people to give me their opinion on the matter at hand.
Thanks,
Gabriel
In CUDA programming, immutability is also beneficial, and sometimes even necessary.
For block-wise communication, immutability may allow you to skip some __syncthreads() calls.
For grid-wise communication, there is no whole-grid synchronization instruction at all. That is why, in the general case, guaranteeing that a change made by one block is visible to another block requires kernel termination: blocks may be scheduled in such a way that they actually run in sequence (e.g. on a weak GPU unable to run more blocks in parallel).
Partial communication is, however, possible through atomic operations and __threadfence(). You can implement, for example, task queues, permitting blocks to fetch new assignments from them in a safe way. These kinds of operations should be done rarely, though, as atomics may be time-consuming (although with global L2 caching this is now better than on older GPUs).