How can I parallelize the fork/wait method using OpenMP?

I have an array of 20 tasks that I want to execute in a parallel way.
Instead of using OpenMP tasking at the master-thread level, I decided to create 4 child processes with c_fork and to use !$OMP TASK inside each child process, so that each child process uses a number of threads to execute a block of 4 tasks (4 child processes × 4 tasks = 16 tasks).
Here is the declaration of the variables:
INTEGER ::ff !< the counter
type(tcb)::self !< self
type(variables),intent(inout)::var !< the variables
integer::STATUS !< STATUS
integer::any_child !< child process
integer::rc !< return code
!OpenMP variables
integer::num_thread !< the rank of the thread
integer::num_thread_cp !< the rank of a thread in the child process
integer::nthreads !< the number of threads
integer:: OMP_GET_THREAD_NUM !< function to get the rank of the thread
integer::OMP_GET_NUM_THREADS !< function to get the number of threads
!!$ type(tcb),dimension(20)::tasklist_GRAD !< the array of tasks
type(tcb),dimension(20),intent(inout)::tasks_ready_master !< the master array of tasks
!Variables for thread management
!!$ INTEGER(KIND=omp_lock_kind), SAVE :: lock
integer::number_children !< the number of child processes
integer::pid !< the pid
Here is the code I implemented:
!=======================================================================================================================================================
!$OMP PARALLEL PRIVATE(num_thread,nthreads,ff) &
!$OMP SHARED(tasks_ready_master)
num_thread=OMP_GET_THREAD_NUM() ! the rank of the thread
nthreads=OMP_GET_NUM_THREADS() ! the number of threads
if (num_thread==0) then !< Thread Master (number 0)
number_children=4
do ff =5,20,number_children
pid=c_fork() !< spawn a child process
if (pid < 0) then
call perror('fork()' // c_null_char) !< Error
else if (pid == 0) then !< Child process
!$OMP TASK SHARED(tasks_ready_master) IF ((num_thread .ne. 0) .and. (num_thread .ne. 1))
call tasks_ready_master(ff)%f_ptr(self,var) !< execute the task
!$OMP END TASK
call exit(STATUS)
else
any_child=c_wait(STATUS) !< wait for the child process
if (any_child<0) then
stop 'error' !< Error
end if
end if
end do
end if
!$OMP END PARALLEL
My problem is with this line: call tasks_ready_master(ff)%f_ptr(self,var) (executing the task).
Suppose we have 4 threads, of which only 2 are available as workers.
How can I be sure that each worker will execute a task from a block of 4 tasks and then move on to the next task (if it has not been executed yet)?
I hope my question is clear. All I want is to parallelize the fork/wait method.
PS: Don't worry about the tcb and variables types.
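Since the question is about the fork/wait pattern itself, here is a minimal C sketch of it (the function name run_block and the exit-status stand-in for a task are illustrative, not from the posted code). Note that it forks the whole block of children before waiting for any of them; the posted loop calls c_wait immediately after each c_fork, which serializes the children.

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a block of nchildren processes, each "running" one task,
   then wait for all of them. Returns how many children completed. */
int run_block(int first_task, int nchildren) {
    for (int i = 0; i < nchildren; i++) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;                      /* fork failed */
        if (pid == 0)
            _exit((first_task + i) % 256);  /* child: stand-in for the task */
    }
    int completed = 0;
    for (int i = 0; i < nchildren; i++) {
        int status;
        if (wait(&status) > 0 && WIFEXITED(status))
            completed++;                    /* child exited normally */
    }
    return completed;
}
```

Because all children exist before the first wait, the four "tasks" of a block can run concurrently.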

Related

Why am I getting a bigger execution time when I increased the number of threads executing the task?

I'm trying to improve the performance of my OpenMP code written in Fortran. To do that, I changed the variable threads_list from private to shared, as you can see in the code below. My idea was to compute the elements of the threads_list array not only with the master thread (number 0) but with multiple threads (exactly nthreads-2 of them). I expected a smaller execution time, but instead it became about 10 times larger. I know that shared variables can decrease performance, but I also thought that increasing the number of threads computing a task would increase it.
Here is the code of the subroutine where I use OpenMP:
subroutine management_16_tasks(tasklist_GRAD,ww,pas,cpt ,nb_element,cpt1,dt,dx,p_element,u_prime,u_prime_moins,u_prime_plus,&
&taux,grad_x_u,grad_t_u,grad_x_f,grad_t_f,ax_plus,ax_moins,ux_plus,ux_moins,sm,flux,tab0,tab)
INTEGER ::ff,pas
INTEGER,intent(inout)::cpt,cpt1,nb_element,ww
real(kind=REAL64) :: dt,dx
integer ,allocatable, dimension(:),intent(inout) ::p_element
REAL(KIND=REAL64) ,allocatable, dimension(:),intent(inout) :: u_prime,u_prime_moins, u_prime_plus,taux,grad_x_u,&
&grad_t_u,grad_t_f,grad_x_f,flux,sm
real(kind=REAL64),allocatable,dimension(:),intent(inout) :: ax_plus,ax_moins,ux_moins,ux_plus
REAL(KIND=REAL64) ,allocatable, dimension(:,:),intent(inout) ::tab0,tab
type(tcb)::self
!OpenMP variables
integer::num_thread,nthreads
integer, external :: OMP_GET_THREAD_NUM, OMP_GET_NUM_THREADS
type(tcb),dimension(20)::tasklist_GRAD,tasks_ready_master
!Variables for thread management
integer,allocatable,dimension(:)::threads_list ! list of the worker thread numbers
integer,dimension(100)::threads_list_all ! worker thread numbers in task order
integer,dimension(16)::threads_list_part3 ! the remaining tasks
!=======================================================================================================================================================
!$OMP PARALLEL PRIVATE(num_thread,threads_list_all,nthreads,ff) &
!$OMP SHARED(tasklist_GRAD,tasks_ready_master) &
!$OMP SHARED(threads_list_part3,threads_list)
num_thread=OMP_GET_THREAD_NUM() ! the rank of the thread
nthreads=OMP_GET_NUM_THREADS() ! the number of threads
!Application master thread (number 1)
if (num_thread==1) then
do ff=5,20 ! 16 tasks
if (associated(tasklist_GRAD(ff)%f_ptr)) then
tasks_ready_master(ff) = tasklist_GRAD(ff)
tasks_ready_master(ff)%state=STATE_READY
end if
end do
end if
!!$ !$OMP BARRIER
if (num_thread==0) then
allocate(threads_list(nthreads-2)) ! list of worker threads
end if
if ((num_thread .ne. 0) .and. (num_thread .ne. 1)) then
!$OMP DO SCHEDULE(STATIC,1)
do ff=1,nthreads-2
threads_list(ff)=ff+1 ! 2, 3, ..., nthreads-1
end do
!$OMP END DO
end if
!Master thread (number 0)
if (num_thread==0) then
do ff=5,20,nthreads-2
if (tasks_ready_master(ff)%state==STATE_READY) then
threads_list_all(ff-4:ff+nthreads-7)=threads_list(:)
end if
end do
threads_list_part3=threads_list_all(1:16) ! 16 tasks
deallocate(threads_list)
end if
!$OMP BARRIER
!Worker threads
do ff=5,20
if (num_thread==threads_list_part3(ff-4)) then
call tasks_ready_master(ff)%f_ptr(self,ww,pas,cpt ,nb_element,cpt1,dt,dx,p_element,u_prime,u_prime_moins,u_prime_plus,&
&taux,grad_x_u,grad_t_u,grad_x_f,grad_t_f,ax_plus,ax_moins,ux_plus,ux_moins,sm,flux,tab0,tab)
tasks_ready_master(ff)%state=STATE_RUNNING
end if
!$OMP BARRIER
end do
!Master thread (number 0)
if (num_thread==0) then
do ff=5,20
if (tasks_ready_master(ff)%state==STATE_RUNNING) then
tasklist_GRAD(ff)%state=STATE_RUNNING
end if
end do
end if
!$OMP END PARALLEL
end subroutine management_16_tasks
I moved the allocation of threads_list from inside if (num_thread==0) then to outside, and changed threads_list from private to shared. The allocation is still done by only one thread. I added the filter if ((num_thread .ne. 0) .and. (num_thread .ne. 1)) then, and also !$OMP DO SCHEDULE(STATIC,1), so that each thread computes one element of the array.

Code takes much more time to finish with more than 1 thread

I want to benchmark some Fortran code with OpenMP threads and a critical section. To simulate a realistic environment, I tried to generate some load before the critical section.
!Compile with: gfortran -fopenmp -o minExample.x minExample.f90
PROGRAM minExample
USE omp_lib
IMPLICIT NONE
INTEGER :: n_chars, real_alloced
INTEGER :: nx,ny,nz,ix,iy,iz, idx
INTEGER :: nthreads, lasteinstellung,i
INTEGER, PARAMETER :: dp = kind(1.0d0)
REAL (KIND = dp) :: j
CHARACTER(LEN=32) :: arg
nx = 2
ny = 2
nz = 2
lasteinstellung= 10000
CALL getarg(1, arg)
READ(arg,*) nthreads
CALL OMP_SET_NUM_THREADS(nthreads)
!$omp parallel
!$omp master
nthreads=omp_get_num_threads()
!$omp end master
!$omp end parallel
WRITE(*,*) "Running OpenMP benchmark on ",nthreads," thread(s)"
n_chars = 0
idx = 0
!$omp parallel do default(none) collapse(3) &
!$omp shared(nx,ny,nz,n_chars) &
!$omp private(ix,iy,iz, idx) &
!$omp private(lasteinstellung,j) !&
DO iz=-nz,nz
DO iy=-ny,ny
DO ix=-nx,nx
! WRITE(*,*) ix,iy,iz
j = 0.0d0
DO i=1,lasteinstellung
j = j + real(i)
END DO
!$omp critical
n_chars = n_chars + 1
idx = n_chars
!$omp end critical
END DO
END DO
END DO
END PROGRAM
I compiled this code with gfortran -fopenmp -o test.x test.f90 and executed it with time ./test.x THREAD
Executing this code gives some strange behaviour depending on the thread count (set with OMP_SET_NUM_THREADS): compared with one thread (6 ms), execution with more threads costs a lot more time (2 threads: 16000 ms; 4 threads: 9000 ms) on my multicore machine.
What could cause this behaviour? Is there a better (but still easy) way to generate load without running into cache effects or related things?
edit: more strange behaviour: if I leave the WRITE in the nested loops, execution speeds up dramatically with 2 threads. If it is commented out, execution with 2 or 3 threads takes forever (the WRITE shows very slow incrementation of the loop variables), but not with 1 or 4 threads. I also tried this code on another multicore machine; there it takes forever with 1 and 3 threads, but not with 2 or 4 threads.
If the code you are showing is really complete, you are missing a definition of lasteinstellung inside the parallel section, in which it is private. It is undefined, so the loop
DO i=1,lasteinstellung
j = j + real(i)
END DO
can take a completely arbitrary number of iterations.
If the value is defined somewhere earlier in code you do not show, you probably want firstprivate instead of private.
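A minimal C sketch of that difference (the helper name load_sum is hypothetical, and load stands in for lasteinstellung): with firstprivate, each thread's copy is initialized from the value set before the parallel region, whereas plain private would leave each copy uninitialized and the inner loop bound arbitrary.

```c
/* Each thread gets its own copy of `load`, initialized from the caller's
   value via firstprivate; the reduction sums the per-iteration busy-work. */
long load_sum(int load) {
    long total = 0;
    #pragma omp parallel for firstprivate(load) reduction(+:total)
    for (int iter = 0; iter < 4; iter++) {
        long j = 0;
        for (int i = 1; i <= load; i++)
            j += i;              /* the busy-work loop from the question */
        total += j;
    }
    return total;
}
```

With firstprivate the result is deterministic regardless of the thread count: 4 iterations times the sum 1..load.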

Is there a timed signal similar to pthread_cond_timedwait?

I have created many threads, each waiting on its own condition. Each thread, when it runs, signals the next thread's condition and goes back into the wait state.
However, I want the currently running thread to signal the next condition only after some specified (very short) period of time. How can I achieve that?
void *threadA(void *t)
{
while(i<100)
{
pthread_mutex_lock(&mutex1);
while (state != my_id )
{
pthread_cond_wait(&cond[my_id], &mutex1);
}
// processing + busy wait
/* Set state to i+1 and wake up thread i+1 */
pthread_mutex_lock(&mutex1);
state = (my_id + 1) % NTHREADS;//usleep(1);
// (Here I don't want this sleep. I want that this thread completes it processing and signals next thread a bit later.)
/*nanosleep(&zero, NULL);*/
pthread_cond_signal(&cond[(my_id + 1) % NTHREADS]); // Send signal to Thread (i+1) to awake
pthread_mutex_unlock(&mutex1);
i++;
}
}
Signalling a condition does nothing if nothing is waiting on it. So, if pthread 'x' signals condition 'cx' and then waits on it, it will wait for a very long time... unless some other thread also signals 'cx'!
I'm not really sure I understand what you mean by the pthread signalling its "next condition", but it occurs to me that there is not much difference between waiting before signalling a waiting thread and having the signalled thread sleep after it is signalled.

OPENMP running the same job on threads

In my OpenMP code, I want all threads to do the same job and at the end take the average (basically, to calculate the error). (How do I calculate the error? Each thread generates different random numbers, so the result from each thread is different.)
Here is simple code
program ...
..
!$OMP PARALLEL
do i=1,Nstep
!.... some code goes here
result=...
end do
!$OMP END PARALLEL
sum = result(from thread 0)+result(from thread 1)+...
sum = sum/(number of threads)
Simply, I want every thread to execute the whole do loop, not to split the loop among the threads.
I can do what I want using MPI and MPI_Reduce, but I want to write a hybrid OpenMP + MPI code. I haven't figured out the OpenMP part, so suggestions please?
It is as simple as applying sum reduction over result:
USE omp_lib ! for omp_get_num_threads()
INTEGER :: num_threads
result = 0.0
num_threads = 1
!$OMP PARALLEL REDUCTION(+:result)
!$OMP SINGLE
num_threads = omp_get_num_threads()
!$OMP END SINGLE
do i = 1, Nstep
...
result = ...
...
end do
!$OMP END PARALLEL
result = result / num_threads
Here num_threads is a shared INTEGER variable that is assigned the actual number of threads used to execute the parallel region. The assignment is put in a SINGLE construct since it suffices for one thread - no matter which one - to execute it.
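For comparison, the same pattern can be sketched in C (deterministic per-thread work stands in for the random-number computation here, so the averaged result is checkable; with real RNG each thread's contribution would differ):

```c
#ifdef _OPENMP
#include <omp.h>
#endif

/* Every thread executes the full loop; the reduction sums the per-thread
   results, and dividing by the thread count gives the average. */
double average_over_threads(int nstep) {
    double result = 0.0;
    int num_threads = 1;            /* shared; set once inside the region */
    #pragma omp parallel reduction(+:result)
    {
    #ifdef _OPENMP
        #pragma omp single
        num_threads = omp_get_num_threads();
    #endif
        for (int i = 1; i <= nstep; i++)
            result += (double)i;    /* stand-in for the per-thread work */
    }
    return result / num_threads;
}
```

Since each thread contributes the same deterministic sum, the average equals one thread's result whatever the thread count.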

Perl Queue and Threads abnormal exit

I am quite new to Perl, especially Perl Threads.
I want to accomplish:
Have 5 threads that will enqueue data (random numbers) into a Thread::Queue.
Have 3 threads that will dequeue data from the Thread::Queue.
The complete code that I wrote in order to achieve above mission:
#!/usr/bin/perl -w
use strict;
use threads;
use Thread::Queue;
my $queue = new Thread::Queue();
our @Enquing_threads;
our @Dequeuing_threads;
sub buildQueue
{
my $TotalEntry=1000;
while($TotalEntry-- >0)
{
my $query = rand(10000);
$queue->enqueue($query);
print "Enque thread with TID " .threads->tid . " got $query,";
print "Queue Size: " . $queue->pending . "\n";
}
}
sub process_Queue
{
my $query;
while ($query = $queue->dequeue)
{
print "Dequeu thread with TID " .threads->tid . " got $query\n";
}
}
push @Enquing_threads,threads->create(\&buildQueue) for 1..5;
push @Dequeuing_threads,threads->create(\&process_Queue) for 1..3;
Issues that I am Facing:
The threads are not running as concurrently as expected.
The entire program exits abnormally with the following console output:
Perl exited with active threads:
8 running and unjoined
0 finished and unjoined
0 running and detached
Enque thread with TID 5 got 6646.13585023883,Queue Size: 595
Enque thread with TID 1 got 3573.84104215917,Queue Size: 595
Any help on code-optimization is appreciated.
This behaviour is to be expected: When the main thread exits, all other threads exit as well. If you don't care, you can $thread->detach them. Otherwise, you have to manually $thread->join them, which we'll do.
The $thread->join waits for the thread to complete, and fetches the return value (threads can return values just like subroutines, although the context (list/void/scalar) has to be fixed at spawn time).
We will detach the threads that enqueue data:
threads->create(\&buildQueue)->detach for 1..5;
Now for the dequeueing threads, we put them into a lexical variable (why are you using globals?) so that we can join them later:
my @dequeue_threads = map threads->create(\&process_Queue), 1 .. 3;
Then wait for them to complete:
$_->join for @dequeue_threads;
We know that the detached threads will finish execution before the program exits, because the only way for the dequeueing threads to exit is to exhaust the queue.
Except for one and a half bugs. You see, there is a difference between an empty queue and a finished queue. If the queue is just empty, the dequeueing threads will block on $queue->dequeue until they get some input. The traditional solution is to dequeue while the value they get is defined. We can break the loop by supplying as many undef values in the queue as there are threads reading from it. More modern versions of Thread::Queue have an end method that makes dequeue return undef for all subsequent calls.
The problem is when to end the queue. We should do this after all enqueueing threads have exited, which means we have to wait for them manually. Sigh.
my @enqueueing = map threads->create(\&enqueue), 1..5;
my @dequeueing = map threads->create(\&dequeue), 1..3;
$_->join for @enqueueing;
$queue->enqueue(undef) for 1..3;
$_->join for @dequeueing;
And in sub dequeue: while(defined( my $item = $queue->dequeue )) { ... }.
Using the defined test fixes another bug: rand can return zero, although this is quite unlikely and will slip through most tests. The contract of rand is that it returns a pseudo-random floating point number between including zero and excluding some upper bound: A number from the interval [0, x). The bound defaults to 1.
If you don't want to join the enqueueing threads manually, you could use a semaphore to signal completion. A semaphore is a multithreading primitive that can be incremented and decremented, but never below zero. If a decrement operation would let the count drop below zero, the call blocks until another thread raises the count. If the start count is 1, this can be used as a flag to block resources.
We can also start with a negative value 1 - $NUM_THREADS, and have each thread increment the value, so that only when all threads have exited, it can be decremented again.
use threads; # make a habit of importing `threads` as the first thing
use strict; use warnings;
use feature 'say';
use Thread::Queue;
use Thread::Semaphore;
use constant {
NUM_ENQUEUE_THREADS => 5, # it's good to fix the thread counts early
NUM_DEQUEUE_THREADS => 3,
};
sub enqueue {
my ($out_queue, $finished_semaphore) = @_;
my $tid = threads->tid;
# iterate over ranges instead of using the while($maxval --> 0) idiom
for (1 .. 1000) {
$out_queue->enqueue(my $val = rand 10_000);
say "Thread $tid enqueued $val";
}
$finished_semaphore->up;
# try a non-blocking decrement. Returns true only for the last thread exiting.
if ($finished_semaphore->down_nb) {
$out_queue->end; # for sufficiently modern versions of Thread::Queue
# $out_queue->enqueue(undef) for 1 .. NUM_DEQUEUE_THREADS;
}
}
sub dequeue {
my ($in_queue) = @_;
my $tid = threads->tid;
while(defined( my $item = $in_queue->dequeue )) {
say "thread $tid dequeued $item";
}
}
# create the queue and the semaphore
my $queue = Thread::Queue->new;
my $enqueuers_ended_semaphore = Thread::Semaphore->new(1 - NUM_ENQUEUE_THREADS);
# kick off the enqueueing threads -- they handle themself
threads->create(\&enqueue, $queue, $enqueuers_ended_semaphore)->detach for 1..NUM_ENQUEUE_THREADS;
# start and join the dequeuing threads
my @dequeuers = map threads->create(\&dequeue, $queue), 1 .. NUM_DEQUEUE_THREADS;
$_->join for @dequeuers;
Don't be surprised if the threads do not seem to run in parallel but sequentially: this task (enqueuing a random number) is very fast and not well suited to multithreading (enqueueing is more expensive than creating a random number).
Here is a sample run where each enqueuer only creates two values:
Thread 1 enqueued 6.39390993005694
Thread 1 enqueued 0.337993319585337
Thread 2 enqueued 4.34504733960242
Thread 2 enqueued 2.89158054485114
Thread 3 enqueued 9.4947585773571
Thread 3 enqueued 3.17079715055542
Thread 4 enqueued 8.86408863197179
Thread 5 enqueued 5.13654995317669
Thread 5 enqueued 4.2210886147538
Thread 4 enqueued 6.94064174636395
thread 6 dequeued 6.39390993005694
thread 6 dequeued 0.337993319585337
thread 6 dequeued 4.34504733960242
thread 6 dequeued 2.89158054485114
thread 6 dequeued 9.4947585773571
thread 6 dequeued 3.17079715055542
thread 6 dequeued 8.86408863197179
thread 6 dequeued 5.13654995317669
thread 6 dequeued 4.2210886147538
thread 6 dequeued 6.94064174636395
You can see that 5 managed to enqueue a few things before 4. The threads 7 and 8 don't get to dequeue anything, 6 is too fast. Also, all enqueuers are finished before the dequeuers are spawned (for such a small number of inputs).
