The code below has two threads starting at the same time. How can I start and stop two threads at the same time? The first thread must finish executing, and the second should then stop its work.
For example, I want to process a large file with one thread and show a loading GIF with another thread in JavaFX.
I can use a latch to start the two threads, but how do I stop them at the same time?
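One way to do this with plain threads (outside the JavaFX Task/Service machinery, which is usually the better fit for UI work) is to pair a start latch with a done latch. A minimal sketch, where processLargeFile() is a placeholder standing in for the real file work:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class StartStopTogether {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch startSignal = new CountDownLatch(1); // released once to start both threads
            CountDownLatch doneSignal  = new CountDownLatch(1); // released when the file work is done

            Thread fileWorker = new Thread(() -> {
                try {
                    startSignal.await();
                    processLargeFile();              // placeholder for the real work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    doneSignal.countDown();          // tells the animation thread to stop
                }
            });

            Thread gifAnimator = new Thread(() -> {
                try {
                    startSignal.await();
                    // keep "animating" until the worker signals completion
                    while (!doneSignal.await(100, TimeUnit.MILLISECONDS)) {
                        System.out.println("showing loading GIF frame...");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            fileWorker.start();
            gifAnimator.start();
            startSignal.countDown();                 // both threads begin at (almost) the same time

            fileWorker.join();
            gifAnimator.join();
        }

        private static void processLargeFile() throws InterruptedException {
            Thread.sleep(1000);                      // stand-in for reading/parsing a large file
        }
    }

When the worker counts down doneSignal, the animation loop's await() returns true and the second thread stops on its own rather than being killed.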
Related
Say we have a program that creates threads on a parallel multi-threaded system to count words in a file by splitting the file into sections and having each thread count the words in its section. If the main thread runs an event-based system that should accept these partial sums (to add them up), how would the system handle multiple threads finishing at the same time and each generating an "event"? I understand that if the threads finished at different times, their results could just be added to a queue to be processed on the main thread, but if multiple parallel threads finish at the same time, how can they push to this queue without interfering with or overwriting each other?
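A thread-safe queue already handles exactly this case: concurrent producers are serialized by the queue's own synchronization, so "simultaneous" finishes simply take turns enqueueing. A minimal Java sketch of the word-count scenario, with made-up section strings:

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PartialSums {
        public static void main(String[] args) throws InterruptedException {
            List<String> sections = List.of("one two three", "four five", "six seven eight nine");
            BlockingQueue<Integer> results = new LinkedBlockingQueue<>();

            // Each worker counts words in its section and offers the partial sum.
            // The queue's internal lock serializes concurrent offers, so threads
            // finishing "at the same time" cannot overwrite each other.
            for (String section : sections) {
                new Thread(() -> results.offer(section.split("\\s+").length)).start();
            }

            // The main (event) thread drains one result per section and adds them up.
            int total = 0;
            for (int i = 0; i < sections.size(); i++) {
                total += results.take();   // blocks until a partial sum is available
            }
            System.out.println("total words = " + total);
        }
    }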
If I start two threads, one immediately after another, why is there no guarantee that the first thread will be started first?
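Thread.start() only makes a thread runnable; the scheduler decides which runnable thread actually gets the CPU first. A tiny Java demo of that non-determinism (the output order can vary from run to run):

    public class StartOrder {
        public static void main(String[] args) {
            Thread first  = new Thread(() -> System.out.println("first thread running"));
            Thread second = new Thread(() -> System.out.println("second thread running"));

            first.start();   // start() only makes the thread runnable;
            second.start();  // the scheduler decides which one runs first

            // Run this a few times: either line may be printed first.
        }
    }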
I am coding a 5-state process model (new, ready, running, blocked, exit). For this I created a LinkedList which contains the processes that are ready to run. For example, if I have processes 1, 2, 3, 4, 5, it runs the 1st, then the 2nd, and while the 3rd is running the user presses a button and blocks that process for 5 seconds. In the meantime the following process (the 4th) runs (it doesn't wait until the third process is unblocked). The problem I have is that I don't know whether I should use two threads for this, one for the processes that are running and another for the blocked process, or whether it is possible to use only one thread.
You could use only a single thread if you use cooperative multitasking, where your process code periodically yields to let other processes run, or if you want each task to run to completion (or until it blocks) before letting another process in or back in.
If it's important that the 3rd process restart after exactly 5 seconds, and if it's okay for it to run in parallel with another process that is already running, you might want to use two or more threads.
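As a rough illustration of the single-threaded cooperative approach, here is a round-robin sketch in Java; the Task interface, its step() method, and the slice counts are invented for the example:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class CooperativeScheduler {
        // Hypothetical cooperative task: does one small slice of work per call
        // and reports whether it has finished.
        interface Task {
            boolean step();   // returns true when the task is done
        }

        public static void main(String[] args) {
            Deque<Task> ready = new ArrayDeque<>();
            for (int id = 1; id <= 5; id++) {
                int taskId = id;
                int[] remaining = {3};                 // each task needs 3 slices
                ready.add(() -> {
                    System.out.println("task " + taskId + " running a slice");
                    return --remaining[0] == 0;
                });
            }

            // Single-threaded round-robin: each task runs one slice, then "yields"
            // by going back to the end of the ready queue.
            while (!ready.isEmpty()) {
                Task t = ready.poll();
                if (!t.step()) {
                    ready.add(t);
                }
            }
        }
    }

A blocked process would simply be kept off the ready queue until its 5 seconds elapse, then re-added.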
I am working on a project in Prolog where one thread is running the GUI and a few threads (say 10) are running in the background. All background threads add elements to a list. When a request comes from the GUI, the system needs to show an element from the list.
My approach is that every background thread applies for a lock on the list, and the thread that acquires the lock starts its work. If a request comes from the GUI, it also applies for the lock and waits for the current thread to finish. My problem is how to give priority to the GUI thread so that once the current thread releases the lock, only the GUI thread gets it, and not one of the other 9 threads that have also applied for the lock.
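One language-agnostic pattern for this (sometimes called the low-priority or "triple mutex" pattern) is to make the background threads take an extra outer lock that the GUI path skips, so a waiting GUI request only ever queues behind the one worker that currently holds the data lock. A sketch in Java rather than Prolog, with invented class and method names:

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class PriorityAccess {
        private final Lock data = new ReentrantLock();  // protects the shared list
        private final Lock next = new ReentrantLock();  // "next to get the data lock"
        private final Lock low  = new ReentrantLock();  // serializes background threads

        // Background threads: only one of them competes for `next`/`data` at a
        // time, so a waiting GUI request waits behind at most one worker.
        public void backgroundWork(Runnable work) {
            low.lock();
            try {
                next.lock();
                data.lock();
                next.unlock();
                try {
                    work.run();
                } finally {
                    data.unlock();
                }
            } finally {
                low.unlock();
            }
        }

        // GUI thread: skips the `low` lock, so it gets ahead of the other workers.
        public void guiRequest(Runnable work) {
            next.lock();
            data.lock();
            next.unlock();
            try {
                work.run();
            } finally {
                data.unlock();
            }
        }
    }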
For example, let us assume that in my operating system a context switch to another process occurs after 100 μs of execution time. Furthermore, my computer has only one processor, with only one thread able to execute at a time.
If I have Process A, which contains only one thread of execution, and Process B, which has four threads of execution, does this mean that the thread in Process A runs for 100 μs, and Process B also runs for 100 μs but splits that execution time among its threads before the context switch?
Process A: ran for 100 μs
    Thread 1 in Process A execution time: 100 μs
Process B: ran for 100 μs
    Thread 1 in Process B execution time: ~25 μs
    Thread 2 in Process B execution time: ~25 μs
    Thread 3 in Process B execution time: ~25 μs
    Thread 4 in Process B execution time: ~25 μs
Would the above be correct?
Moreover, would this be different if I had a quad-core processor? With a quad-core processor, could each thread potentially run for 100 μs at the same time, one on each core?
It really depends on what you are doing within the process and in each thread. If the work can benefit from being split across threads, for example making calls to a web service (since a web service can accept multiple calls at once and execute them separately), then no: the single-threaded process will take longer than the four-threaded one, simply because it executes the calls sequentially instead of concurrently.
On the other hand, if you are executing code that does not benefit from being split across threads, then on a single core the time to finish all four threads will be about the same as doing the work on one thread.
However, in most cases, splitting the processing across threads should take less time than executing it on a single thread, if you do it right.
The number of cores doesn't factor in here unless you are trying to run more threads than one core can handle, in which case the OS will schedule the extra threads on other cores.
This link explains the situation with cores and Hyper-Threading in a bit more detail:
http://www.howtogeek.com/194756/cpu-basics-multiple-cpus-cores-and-hyper-threading-explained/
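To make the "if you do it right" part concrete, here is a small Java sketch that splits a CPU-bound job into one chunk per available core; the summing loop is just a stand-in for real work:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SplitWork {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // Split a CPU-bound job (summing 1..n) into one chunk per core.
            long n = 100_000_000L;
            long chunk = n / cores;
            List<Future<Long>> parts = new ArrayList<>();
            for (int i = 0; i < cores; i++) {
                long from = i * chunk + 1;
                long to   = (i == cores - 1) ? n : (i + 1) * chunk;
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (long v = from; v <= to; v++) sum += v;
                    return sum;
                }));
            }

            long total = 0;
            for (Future<Long> part : parts) total += part.get();
            pool.shutdown();

            // On a single core the chunks just take turns; with multiple cores
            // they can genuinely run in parallel and finish sooner.
            System.out.println("sum = " + total + " using " + cores + " threads");
        }
    }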
Thread switches always happen on the same interval regardless of process ownership, so if the quantum is 100 μs then it's always 100 μs, unless of course the thread itself surrenders execution. When a given thread will run again is where things get complicated.