I'm splitting a do loop across threads using an OpenMP parallel do with a private clause. Inside the loop I add a variable to itself. Why do I get compile errors, given that I shouldn't need a critical block or an atomic statement in this case?
How can I fix it?
program trap
use omp_lib
implicit none
double precision::suma=0.d0 ! sum is a scalar
double precision:: h,x,lima,limb
integer::n,i, istart, iend, thread_num=4, total_threads, ppt
integer(kind=8):: tic, toc, rate
double precision:: time
double precision, dimension(4):: pi= 0.d0
call system_clock(count_rate = rate)
call system_clock(tic)
lima=0.0d0; limb=1.0d0; suma=0.0d0; n=100000000
h=(limb-lima)/n
suma=h*(f(lima)+f(limb))*0.5d0 !first and last points
ppt= n/total_threads
!$ call omp_set_num_threads(total_threads)
!$omp parallel do private (istart, iend, thread_num, i)
thread_num = omp_get_thread_num()
!$ istart = thread_num*ppt +1
!$ iend = min(thread_num*ppt + ppt, n)
do i=istart,iend ! this will control the loop in different threads
x=lima+i*h
suma=suma+f(x)
pi(thread_num+1)=suma
enddo
!$omp end parallel do
suma=sum(pi)
suma=suma*h
print *,"The value of pi is= ",suma ! print once from the first image
call system_clock(toc)
time = real(toc-tic)/real(rate)
print*, 'Time ', time, 's'
contains
double precision function f(y)
double precision:: y
f=4.0d0/(1.0d0+y*y)
end function f
end program trap
I get the following errors:
test.f90:23:35:
23 | thread_num = omp_get_thread_num()
Error: Unexpected assignment statement at (1)
test.f90:24:31:
24 | !$ istart = thread_num*ppt +1
Error: Unexpected assignment statement at (1)
test.f90:25:40:
25 | !$ iend = min(thread_num*ppt + ppt, n)
Error: Unexpected assignment statement at (1)
Compiled with:
gfortran -fopenmp -Wall -Wextra -O2 -Wall -o prog.exe test.f90
./prog.exe
I don't understand why you are manually splitting up the loop when the worksharing constructs in OpenMP, such as !$omp do, can do this automatically for you. Below is how I would do it:
ian@eris:~/work/stack$ cat thread.f90
program trap
Use, Intrinsic :: iso_fortran_env, Only : wp => real64, li => int64
use omp_lib
implicit none
Real( wp ) ::suma=0.0_wp ! sum is a scalar
Real( wp ) :: h,x,lima,limb
integer(li):: tic, toc, rate
Real( wp ) :: time
Real( wp ) :: pi
Integer :: i, n
call system_clock(count_rate = rate)
call system_clock(tic)
lima=0.0_wp; limb=1.0_wp; suma=0.0_wp; n=100000000
h=(limb-lima)/n
suma=h*(f(lima)+f(limb))*0.5_wp !first and last points
pi = 0.0_wp
!$omp parallel default( None ) private( i, x ) &
!$omp shared( pi, n, h, lima )
!$omp do reduction( +:pi )
do i= 1, n
x = lima + i * h
pi = pi + f( x )
enddo
!$omp end do
!$omp end parallel
print *,"The value of pi is= ", pi / n
call system_clock(toc)
time = real(toc-tic)/real(rate)
print*, 'Time ', time, 's on ', omp_get_max_threads(), ' threads'
contains
function f(y)
Real( wp ) :: f
Real( wp ) :: y
f=4.0_wp/(1.0_wp+y*y)
end function f
end program trap
ian@eris:~/work/stack$ gfortran --version
GNU Fortran (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
ian@eris:~/work/stack$ export OMP_NUM_THREADS=1
ian@eris:~/work/stack$ ./a.out
The value of pi is= 3.1415926435902248
Time 1.8548842668533325 s on 1 threads
ian@eris:~/work/stack$ export OMP_NUM_THREADS=2
ian@eris:~/work/stack$ ./a.out
The value of pi is= 3.1415926435902120
Time 0.86763000488281250 s on 2 threads
ian@eris:~/work/stack$ export OMP_NUM_THREADS=4
ian@eris:~/work/stack$ ./a.out
The value of pi is= 3.1415926435898771
Time 0.54704123735427856 s on 4 threads
ian@eris:~/work/stack$
Instead of
!$omp parallel do private (istart, iend, thread_num, i)
thread_num = omp_get_thread_num()
!$ istart = thread_num*ppt +1
!$ iend = min(thread_num*ppt + ppt, n)
try the following:
!$omp parallel private (istart, iend, thread_num, i)
thread_num = omp_get_thread_num()
!$ istart = thread_num*ppt +1
!$ iend = min(thread_num*ppt + ppt, n)
....
!$omp end parallel
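For completeness, here is a minimal sketch of how the manually split version could look once the setup statements live inside a plain parallel region. It assumes total_threads is given a value before ppt is computed (the original code never sets it), accumulates into pi(thread_num+1) instead of the shared suma to avoid a data race, and makes x private for the same reason:
total_threads = 4
ppt = n/total_threads
call omp_set_num_threads(total_threads)
!$omp parallel private(istart, iend, thread_num, i, x) shared(pi, ppt, n, h, lima)
thread_num = omp_get_thread_num()
istart = thread_num*ppt + 1
iend = min(thread_num*ppt + ppt, n)
do i = istart, iend ! each thread works on its own index range
   x = lima + i*h
   pi(thread_num+1) = pi(thread_num+1) + f(x) ! one array slot per thread, no race
end do
!$omp end parallel
suma = suma + sum(pi)*h ! endpoint term plus the interior sum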
I have a parallel section of the code where I write out n large arrays (representing a numerical mesh) in blocks that are later read back in different-sized blocks. To do this I used stream access so each processor writes its block independently, but I've seen inconsistent timings ranging from 0.5-4 seconds in this section when testing with 2 processor groups.
I am aware you can do something similar with MPI-IO, but I'm not sure what the benefits would be since no synchronization is necessary. I would like to know if there is a way to improve the performance of my writes, or whether there is a reason MPI-IO would be a better choice for this section.
Here is a sample of the code section where I create the files to write norb arrays using two groups (mygroup = 0 or 1):
do irbsic=1,norb
[various operations]
blocksize=int(nmsh_tot/ngroups)
OPEN(unit=iunit,FILE='ZPOT',STATUS='UNKNOWN',ACCESS='STREAM')
mypos = 1 + (IRBSIC-1)*nmsh_tot*8 ! starting point for writing IRBSIC
mypos = mypos + mygroup*(8*blocksize) ! starting point for mesh group
WRITE(iunit,POS=mypos) POT(1:nmsh)
CLOSE(iunit)
OPEN(unit=iunit,FILE='RHOI',STATUS='UNKNOWN',ACCESS='STREAM')
mypos = 1 + (IRBSIC-1)*nmsh_tot*8 ! starting point for writing IRBSIC
mypos = mypos + mygroup*(8*blocksize) ! starting point for mesh group
WRITE(iunit,POS=mypos) RHOG(1:nmsh,1,1)
CLOSE(iunit)
[various operations]
end do
(As discussed in the comments) I would strongly recommend against using Fortran stream access for this. Standard Fortran I/O is only guaranteed to work if the file is being accessed by a single process, and in my own work I have seen random corruptions of files when multiple processes try to write to them at once, even if the processes are writing to different parts of the file. MPI-I/O, or a library such as HDF5 or NetCDF which uses MPI-I/O, is the only sensible way to achieve this. Below is a simple program illustrating the use of mpi_file_write_at_all:
ian@eris:~/work/stack$ cat at.f90
Program write_at
Use mpi
Implicit None
Integer, Parameter :: n = 4
Real, Dimension( 1:n ) :: a
Real, Dimension( : ), Allocatable :: all_of_a
Integer :: me, nproc
Integer :: handle
Integer :: i
Integer :: error
! Set up MPI
Call mpi_init( error )
Call mpi_comm_size( mpi_comm_world, nproc, error )
Call mpi_comm_rank( mpi_comm_world, me , error )
! Provide some data
a = [ ( i, i = n * me, n * ( me + 1 ) - 1 ) ]
! Open the file
Call mpi_file_open( mpi_comm_world, 'stuff.dat', &
mpi_mode_create + mpi_mode_wronly, mpi_info_null, handle, error )
! Describe how the processes will view the file - in this case
! simply a stream of mpi_real
Call mpi_file_set_view( handle, 0_mpi_offset_kind, &
mpi_real, mpi_real, 'native', &
mpi_info_null, error )
! Write the data using a collective routine - generally the most efficient,
! but as it is collective all processes within the communicator must call the routine
Call mpi_file_write_at_all( handle, Int( me * n,mpi_offset_kind ) , &
a, Size( a ), mpi_real, mpi_status_ignore, error )
! Close the file
Call mpi_file_close( handle, error )
! Read the file on rank zero using Fortran to check the data
If( me == 0 ) Then
Open( 10, file = 'stuff.dat', access = 'stream' )
Allocate( all_of_a( 1:n * nproc ) )
Read( 10, pos = 1 ) all_of_a
Write( *, * ) all_of_a
End If
! Shut down MPI
Call mpi_finalize( error )
End Program write_at
ian@eris:~/work/stack$ mpif90 --version
GNU Fortran (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
ian@eris:~/work/stack$ mpif90 -Wall -Wextra -fcheck=all -std=f2008 at.f90
ian@eris:~/work/stack$ mpirun -np 2 ./a.out
0.00000000 1.00000000 2.00000000 3.00000000 4.00000000 5.00000000 6.00000000 7.00000000
ian@eris:~/work/stack$ mpirun -np 5 ./a.out
0.00000000 1.00000000 2.00000000 3.00000000 4.00000000 5.00000000 6.00000000 7.00000000 8.00000000 9.00000000 10.0000000 11.0000000 12.0000000 13.0000000 14.0000000 15.0000000 16.0000000 17.0000000 18.0000000 19.0000000
ian@eris:~/work/stack$
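To connect this back to the question: the stream-access positions translate directly into MPI-I/O offsets once the file view is set to double precision elements. Below is a hedged sketch of what the ZPOT write might look like with mpi_file_write_at_all; the variable names mirror the question, the values are dummies, and all ranks must make the call collectively.
Program zpot_mpiio
  Use mpi
  Implicit None
  ! Dummy stand-ins for the question's mesh sizes and group layout
  Integer, Parameter :: nmsh = 100, nmsh_tot = 200, ngroups = 2
  Integer :: blocksize, mygroup, irbsic
  Integer :: handle, error, me
  Integer( mpi_offset_kind ) :: offset
  Real( Kind( 1.0d0 ) ) :: pot( 1:nmsh )
  Call mpi_init( error )
  Call mpi_comm_rank( mpi_comm_world, me, error )
  blocksize = nmsh_tot / ngroups
  mygroup = Mod( me, ngroups )
  irbsic = 1
  pot = Real( me, Kind( 1.0d0 ) )
  Call mpi_file_open( mpi_comm_world, 'ZPOT', &
       mpi_mode_create + mpi_mode_wronly, mpi_info_null, handle, error )
  ! View the file as a stream of double precision values, so the offsets
  ! below are counted in elements rather than bytes - no *8 factor needed
  Call mpi_file_set_view( handle, 0_mpi_offset_kind, &
       mpi_double_precision, mpi_double_precision, 'native', &
       mpi_info_null, error )
  ! Same layout as the question: block irbsic first, then this group's slice
  offset = Int( irbsic - 1, mpi_offset_kind ) * nmsh_tot + mygroup * blocksize
  Call mpi_file_write_at_all( handle, offset, pot, nmsh, &
       mpi_double_precision, mpi_status_ignore, error )
  Call mpi_file_close( handle, error )
  Call mpi_finalize( error )
End Program zpot_mpiio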
I'm using OpenMP to speed up Fortran code in a MATLAB MEX-file. However, I find that OpenMP seems not to work on Linux, although it does work on Windows. I attach the code as follows:
1) MATLAB script that builds and runs the MEX file:
clc; clear all; close all; tic
FLAG_SYS = 0; % 0 for Windows; 1 for Linux
%--------------------------------------------------------------------------
% Mex Fortran code
%--------------------------------------------------------------------------
if FLAG_SYS == 0
mex COMPFLAGS="-Qopenmp $COMPFLAGS"...
LINKFLAGS="/Qopenmp $LINKFLAGS"...
OPTIMFLAGS="/Qopenmp $OPTIMFLAGS"...
'-IC:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2017.5.267\windows\mkl\include'...
'-LC:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2017.5.267\windows\mkl\lib\intel64'...
-lmkl_intel_ilp64.lib -lmkl_intel_thread.lib -lmkl_core.lib libiomp5md.lib...
Test_OpenMP_Mex.f90...
-output Test_OpenMP_Mex
elseif FLAG_SYS == 1
mex COMPFLAGS="-fopenmp $COMPFLAGS"...
LINKFLAGS="-fopenmp $LINKFLAGS"...
FFLAGS='$FFLAGS -fdec-math -cpp' ...
'-I${MKLROOT}/include'...
'-L${MKLROOT}/lib'...
-lmkl_avx2 -lmkl_gf_ilp64 -lmkl_core -lmkl_intel_thread -liomp5 -lpthread -lm -ldl...
Test_OpenMP_Mex.f90...
-output Test_OpenMP_Mex
end
Test_OpenMP_Mex;
2) Fortran code
#include "fintrf.h"
!GATEWAY ROUTINE
SUBROUTINE MEXFUNCTION(NLHS, PLHS, NRHS, PRHS)
!DECLARATIONS
IMPLICIT NONE
!MEXFUNCTION ARGUMENTS:
MWPOINTER PLHS(*), PRHS(*)
INTEGER NLHS, NRHS
!FUNCTION DECLARATIONS:
MWPOINTER MXCREATEDOUBLEMATRIX
MWPOINTER MXGETM, MXGETN
INTEGER MXISNUMERIC
!POINTERS TO INPUT MXARRAYS:
MWPOINTER MIV1, MIV2
!POINTERS TO OUTPUT MXARRAYS:
MWPOINTER MOV1, MOV2
!CALL FORTRAN CODE
CALL TEST_OPENMP
RETURN
END
!-----------------------------------------------------------------------
SUBROUTINE TEST_OPENMP
USE OMP_LIB
IMPLICIT NONE
INTEGER I, J, K, STEP
REAL*8 STARTTIME, ENDTIME,Y
OPEN(1,FILE='1.TXT')
!COUNT ELAPSED TIME START
STARTTIME = OMP_GET_WTIME()
DO I = 1,1000000
DO J = 1,50000
DO K = 1,1000
Y=(I+10)*J-SQRT(789.1)+SQRT(789.1)-(I+10)*J
END DO
END DO
END DO
ENDTIME = OMP_GET_WTIME()
WRITE(1,*) ENDTIME-STARTTIME
!COUNT ELAPSED TIME START
STARTTIME = OMP_GET_WTIME()
!$OMP PARALLEL
!$OMP DO PRIVATE(I,J)
DO I = 1,1000000
DO J = 1,50000
DO K = 1,1000
Y=(I+10)*J-SQRT(789.1)+SQRT(789.1)-(I+10)*J
END DO
END DO
END DO
!$OMP END DO
!$OMP END PARALLEL
ENDTIME = OMP_GET_WTIME()
WRITE(1,*) ENDTIME-STARTTIME
!$OMP PARALLEL
! GET THE NUMBER OF THREADS
WRITE(1,*) OMP_GET_THREAD_NUM(), OMP_GET_NUM_THREADS()
!$OMP END PARALLEL
CLOSE(1)
RETURN
END SUBROUTINE TEST_OPENMP
The output on Windows is:
1.09620520001044
4.50355500000296
0 6
1 6
3 6
5 6
2 6
4 6
and the output on Linux is:
0.0000
0.0000
0 1
It's obvious that OpenMP works on Windows, since the calculation time reduces from 4.5s to 1.0s. I can find that there are 6 threads being used for calculation. However, on Linux, no calculation seems to be executed, and there are only 2 threads (the number of threads on Linux is 36, but only 2 of them are used).
Any suggestions are welcome!
You can directly download code from this link:
https://www.dropbox.com/sh/crkuwhu22407sjs/AAAQrtzAvTmFOmAxv_jpTCBaa?dl=0
When compiling MEX-files under Linux (and MacOS) the COMPFLAGS variable is ignored. It is a Windows-specific environment variable. You need to use CFLAGS for C, CXXFLAGS for C++, or FFLAGS for Fortran, and LDFLAGS for the linker. These are the standard Unix environment variables to control compilation.
Your compile command will look like this:
mex LDFLAGS='-fopenmp $LDFLAGS'...
FFLAGS='-fopenmp -fdec-math -cpp $FFLAGS' ...
'-I${MKLROOT}/include'...
'-L${MKLROOT}/lib'...
-lmkl_avx2 -lmkl_gf_ilp64 -lmkl_core -lmkl_intel_thread -liomp5 -lpthread -lm -ldl...
Test_OpenMP_Mex.f90...
-output Test_OpenMP_Mex
Reference:
MATLAB documentation
GCC environment variables
There is one note you shouldn't miss when linking against the Intel MKL ILP64 versions of the libraries:
you need to build with 64-bit default integers (-i8 for ifort, -fdefault-integer-8 for gfortran), otherwise you may see some kind of unexpected segfault. Please refer to the MKL link line advisor for more details: https://software.intel.com/content/www/us/en/develop/articles/intel-mkl-link-line-advisor.html
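As a hedged illustration of the pitfall (dnrm2 stands in for any BLAS/MKL routine; the example only behaves correctly when actually linked against the ILP64 libraries):
program ilp64_demo
  implicit none
  ! 64-bit integer kind, matching what the ILP64 interface expects;
  ! alternatively compile with -fdefault-integer-8 (gfortran) or -i8 (ifort)
  integer, parameter :: ik = selected_int_kind(18)
  integer(ik) :: n, incx
  real(8) :: x(4), nrm
  real(8), external :: dnrm2
  n = 4_ik
  incx = 1_ik
  x = [1.0d0, 2.0d0, 3.0d0, 4.0d0]
  ! Passing default 4-byte integers to an ILP64 library here would make
  ! the routine read garbage argument values and can segfault
  nrm = dnrm2(n, x, incx)
  print *, 'norm =', nrm
end program ilp64_demo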
Please help me understand how I can improve sequential, unformatted I/O throughput with (G)Fortran, especially when working on NVMe SSDs.
I wrote a little test program, see the bottom of this post. What it does is open one or more files in parallel (OpenMP) and write an array of random numbers into each. Then it flushes the system caches (root required, otherwise the read test will most likely read from memory), opens the files, and reads from them. Time is measured in wall time (trying to include only I/O-related times), and performance numbers are given in MiB/s. The program loops until aborted.
The hardware I am using for testing is a Samsung 970 Evo Plus 1TB SSD, connected via 2 PCIe 3.0 lanes. So in theory, it should be capable of ~1500MiB/s sequential reads and writes.
Testing beforehand with "dd if=/dev/zero of=./testfile bs=1G count=1 oflag=direct" results in ~750MB/s. Not too great, but still better than what I get with Gfortran. And depending on who you ask, dd should not be used for benchmarking anyway. This is just to make sure that the hardware is in theory capable of more.
Results with my code tend to get better with larger file size, but even with 1GiB it caps out at around 200MiB/s write, 420MiB/s read. Using more threads (e.g. 4) increases write speeds a bit, but only to around 270MiB/s.
I made sure to keep the benchmark runs short, and give the SSD time to relax between tests.
I was under the impression that it should be possible to saturate 2 PCIe 3.0 lanes worth of bandwidth, even with only a single thread. At least when using unformatted I/O.
The code does not seem to be CPU limited, top shows less than 50% usage on a single core if I move the allocation and initialization of the "values" field out of the loop. Which still does not bode well for overall performance, considering that I would like to see numbers that are at least 5 times higher.
I also tried to use access=stream for the open statements, but to no avail.
So what seems to be the problem?
Is my code wrong/unoptimized? Are my expectations too high?
Platform used:
Opensuse Leap 15.1, Kernel 4.12.14-lp151.28.36-default
2x AMD Epyc 7551, Supermicro H11DSI, Samsung 970 Evo Plus 1TB (2xPCIe 3.0)
gcc version 8.2.1, compiler options: -ffree-line-length-none -O3 -ffast-math -funroll-loops -flto
MODULE types
implicit none
save
INTEGER, PARAMETER :: I8B = SELECTED_INT_KIND(18)
INTEGER, PARAMETER :: I4B = SELECTED_INT_KIND(9)
INTEGER, PARAMETER :: SP = KIND(1.0)
INTEGER, PARAMETER :: DP = KIND(1.0d0)
END MODULE types
MODULE parameters
use types
implicit none
save
INTEGER(I4B) :: filesize ! file size in MiB
INTEGER(I4B) :: nthreads ! number of threads for parallel execution
INTEGER(I4B) :: alloc_size ! size of the allocated data field
END MODULE parameters
PROGRAM iometer
use types
use parameters
use omp_lib
implicit none
CHARACTER(LEN=100) :: directory_char, filesize_char, nthreads_char
CHARACTER(LEN=40) :: dummy_char1
CHARACTER(LEN=110) :: filename
CHARACTER(LEN=10) :: filenumber
INTEGER(I4B) :: thread, tunit, n
INTEGER(I8B) :: counti, countf, count_rate
REAL(DP) :: telapsed_read, telapsed_write, mib_written, write_speed, mib_read, read_speed
REAL(SP), DIMENSION(:), ALLOCATABLE :: values
call system_clock(counti,count_rate)
call getarg(1,directory_char)
dummy_char1 = ' directory to test:'
write(*,'(A40,A)') dummy_char1, trim(adjustl(directory_char))
call getarg(2,filesize_char)
dummy_char1 = ' file size (MiB):'
read(filesize_char,*) filesize
write(*,'(A40,I12)') dummy_char1, filesize
call getarg(3,nthreads_char)
dummy_char1 = ' number of parallel threads:'
read(nthreads_char,*) nthreads
write(*,'(A40,I12)') dummy_char1, nthreads
alloc_size = filesize * 262144
dummy_char1 = ' allocation size:'
write(*,'(A40,I12)') dummy_char1, alloc_size
mib_written = real(alloc_size,kind=dp) * real(nthreads,kind=dp) / 1048576.0_dp
mib_read = mib_written
CALL OMP_SET_NUM_THREADS(nthreads)
do while(.true.)
!$OMP PARALLEL default(shared) private(thread, filename, filenumber, values, tunit)
thread = omp_get_thread_num()
write(filenumber,'(I0.10)') thread
filename = trim(adjustl(directory_char)) // '/' // trim(adjustl(filenumber)) // '.temp'
allocate(values(alloc_size))
call random_seed()
call RANDOM_NUMBER(values)
tunit = thread + 100
!$OMP BARRIER
!$OMP MASTER
call system_clock(counti)
!$OMP END MASTER
!$OMP BARRIER
open(unit=tunit, file=trim(adjustl(filename)), status='replace', action='write', form='unformatted')
write(tunit) values
close(unit=tunit)
!$OMP BARRIER
!$OMP MASTER
call system_clock(countf)
telapsed_write = real(countf-counti,kind=dp)/real(count_rate,kind=dp)
write_speed = mib_written/telapsed_write
!write(*,*) 'write speed (MiB/s): ', write_speed
call execute_command_line ('echo 3 > /proc/sys/vm/drop_caches', wait=.true.)
call system_clock(counti)
!$OMP END MASTER
!$OMP BARRIER
open(unit=tunit, file=trim(adjustl(filename)), status='old', action='read', form='unformatted')
read(tunit) values
close(unit=tunit)
!$OMP BARRIER
!$OMP MASTER
call system_clock(countf)
telapsed_read = real(countf-counti,kind=dp)/real(count_rate,kind=dp)
read_speed = mib_read/telapsed_read
write(*,'(A29,2F10.3)') ' write / read speed (MiB/s): ', write_speed, read_speed
!$OMP END MASTER
!$OMP BARRIER
deallocate(values)
!$OMP END PARALLEL
call sleep(1)
end do
END PROGRAM iometer
The mistake in your code is that in your calculation of mib_written you have forgotten to take into account the size of a real(sp) variable (4 bytes), so your results are a factor of 4 too low: alloc_size = filesize * 262144 is a count of elements, and at 4 bytes each that is filesize * 1048576 bytes, i.e. exactly filesize MiB per thread, whereas the code divides the element count by 1048576. E.g. calculate it as
mib_written = filesize * nthreads
Some minor nits, some specific to GFortran:
Don't repeatedly call random_seed, particularly not from each thread. If you want to call it, call it once in the beginning of the program.
You can use open(newunit=tunit, ...) to let the compiler runtime allocate a unique unit number for each file.
If you want the 'standard' 64-bit integer/floating point kinds, you can use the variables int64 and real64 from the iso_fortran_env intrinsic module.
For testing with larger files, you need to make alloc_size of kind int64.
Use the standard get_command_argument intrinsic instead of the nonstandard getarg.
access='stream' is slightly faster than the default (sequential) as there's no need to handle the record length markers.
Your test program with these fixes (and the parameters module folded into the main program) below:
PROGRAM iometer
use iso_fortran_env
use omp_lib
implicit none
CHARACTER(LEN=100) :: directory_char, filesize_char, nthreads_char
CHARACTER(LEN=40) :: dummy_char1
CHARACTER(LEN=110) :: filename
CHARACTER(LEN=10) :: filenumber
INTEGER :: thread, tunit
INTEGER(int64) :: counti, countf, count_rate
REAL(real64) :: telapsed_read, telapsed_write, mib_written, write_speed, mib_read, read_speed
REAL, DIMENSION(:), ALLOCATABLE :: values
INTEGER :: filesize ! file size in MiB
INTEGER :: nthreads ! number of threads for parallel execution
INTEGER(int64) :: alloc_size ! size of the allocated data field
call system_clock(counti,count_rate)
call get_command_argument(1, directory_char)
dummy_char1 = ' directory to test:'
write(*,'(A40,A)') dummy_char1, trim(adjustl(directory_char))
call get_command_argument(2, filesize_char)
dummy_char1 = ' file size (MiB):'
read(filesize_char,*) filesize
write(*,'(A40,I12)') dummy_char1, filesize
call get_command_argument(3, nthreads_char)
dummy_char1 = ' number of parallel threads:'
read(nthreads_char,*) nthreads
write(*,'(A40,I12)') dummy_char1, nthreads
alloc_size = filesize * 262144_int64
dummy_char1 = ' allocation size:'
write(*,'(A40,I12)') dummy_char1, alloc_size
mib_written = filesize * nthreads
dummy_char1 = ' MiB written:'
write(*, '(A40,g0)') dummy_char1, mib_written
mib_read = mib_written
CALL OMP_SET_NUM_THREADS(nthreads)
!$OMP PARALLEL default(shared) private(thread, filename, filenumber, values, tunit)
do while (.true.)
thread = omp_get_thread_num()
write(filenumber,'(I0.10)') thread
filename = trim(adjustl(directory_char)) // '/' // trim(adjustl(filenumber)) // '.temp'
if (.not. allocated(values)) then
allocate(values(alloc_size))
call RANDOM_NUMBER(values)
end if
open(newunit=tunit, file=filename, status='replace', action='write', form='unformatted', access='stream')
!$omp barrier
!$omp master
call system_clock(counti)
!$omp end master
!$omp barrier
write(tunit) values
close(unit=tunit)
!$omp barrier
!$omp master
call system_clock(countf)
telapsed_write = real(countf - counti, kind=real64)/real(count_rate, kind=real64)
write_speed = mib_written/telapsed_write
call execute_command_line ('echo 3 > /proc/sys/vm/drop_caches', wait=.true.)
!$OMP END MASTER
open(newunit=tunit, file=trim(adjustl(filename)), status='old', action='read', form='unformatted', access='stream')
!$omp barrier
!$omp master
call system_clock(counti)
!$omp end master
!$omp barrier
read(tunit) values
close(unit=tunit)
!$omp barrier
!$omp master
call system_clock(countf)
telapsed_read = real(countf - counti, kind=real64)/real(count_rate, kind=real64)
read_speed = mib_read/telapsed_read
write(*,'(A29,2F10.3)') ' write / read speed (MiB/s): ', write_speed, read_speed
!$OMP END MASTER
call sleep(1)
end do
!$OMP END PARALLEL
END PROGRAM iometer
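A possible way to build and run it (a sketch; the directory, size, and thread count are just example arguments, and root is needed for the drop_caches step):
gfortran -O3 -fopenmp iometer.f90 -o iometer
sudo ./iometer /mnt/nvme 1024 4   ! test directory, file size in MiB, threads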
I want to benchmark some Fortran code, parallelised with OpenMP threads, that contains a critical section. To simulate a realistic environment I tried to generate some load before this critical section.
! Compile with: gfortran -fopenmp -o minExample.x minExample.f90
PROGRAM minExample
USE omp_lib
IMPLICIT NONE
INTEGER :: n_chars, real_alloced
INTEGER :: nx,ny,nz,ix,iy,iz, idx
INTEGER :: nthreads, lasteinstellung,i
INTEGER, PARAMETER :: dp = kind(1.0d0)
REAL (KIND = dp) :: j
CHARACTER(LEN=32) :: arg
nx = 2
ny = 2
nz = 2
lasteinstellung= 10000
CALL getarg(1, arg)
READ(arg,*) nthreads
CALL OMP_SET_NUM_THREADS(nthreads)
!$omp parallel
!$omp master
nthreads=omp_get_num_threads()
!$omp end master
!$omp end parallel
WRITE(*,*) "Running OpenMP benchmark on ",nthreads," thread(s)"
n_chars = 0
idx = 0
!$omp parallel do default(none) collapse(3) &
!$omp shared(nx,ny,nz,n_chars) &
!$omp private(ix,iy,iz, idx) &
!$omp private(lasteinstellung,j) !&
DO iz=-nz,nz
DO iy=-ny,ny
DO ix=-nx,nx
! WRITE(*,*) ix,iy,iz
j = 0.0d0
DO i=1,lasteinstellung
j = j + real(i)
END DO
!$omp critical
n_chars = n_chars + 1
idx = n_chars
!$omp end critical
END DO
END DO
END DO
END PROGRAM
I compiled this code with gfortran -fopenmp -o test.x test.f90 and executed it with time ./test.x THREAD
Executing this code gives some strange behaviour depending on the thread-count (set with OMP_SET_NUM_THREADS): compared with one thread (6ms) the execution with more threads costs a lot more time (2 threads: 16000ms, 4 threads: 9000ms) on my multicore machine.
What could cause this behaviour? Is there a better (but still easy) way to generate load without running in some cache-effects or related things?
Edit: strange behaviour: if I leave the write in the nested loops, execution speeds up dramatically with 2 threads. If it's commented out, the execution with 2 or 3 threads takes forever (the write shows very slow incrementation of the loop variables), but not with 1 or 4 threads. I also tried this code on another multicore machine. There it takes forever for 1 and 3 threads, but not for 2 or 4 threads.
If the code you are showing is really complete, you are missing a definition of lasteinstellung inside the parallel region, in which it is private. It is therefore undefined there, and the loop
DO i=1,lasteinstellung
j = j + real(i)
END DO
can take a completely arbitrary number of iterations.
If the value is defined somewhere earlier in code you do not show, you probably want firstprivate instead of private.
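A minimal sketch of the corrected directive, assuming the loop body stays exactly as in the question:
! firstprivate gives every thread a copy of lasteinstellung initialised
! with the value (10000) assigned before the region, instead of an
! undefined private copy
!$omp parallel do default(none) collapse(3) &
!$omp shared(nx,ny,nz,n_chars) &
!$omp private(ix,iy,iz,idx,j) &
!$omp firstprivate(lasteinstellung)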
I have a simple Fortran code, and despite the use of the omp_set_num_threads() subroutine, I cannot set the number of threads, i.e. the output says that I use only 1 thread. I also tried export OMP_NUM_THREADS=4 - no result.
I have no idea what is wrong in that piece of code:
program test
use omp_lib
implicit none
integer :: i, tnr,t
call omp_set_num_threads( 4 )
t = omp_get_num_threads()
write(*,*)'t:',t
!$omp parallel
!$omp do
do i = 1, 20
tnr = omp_get_thread_num()
write( *, * ) 'Thread', tnr, ':', i
end do
!$omp end do
!$omp end parallel
end program test
The output of that code is:
t: 1
Thread 0 : 1
Thread 0 : 2
Thread 0 : 3
Thread 0 : 4
Thread 0 : 5
Thread 0 : 6
Thread 0 : 7
Thread 0 : 8
Thread 0 : 9
Thread 0 : 10
Thread 0 : 11
Thread 0 : 12
Thread 0 : 13
Thread 0 : 14
Thread 0 : 15
Thread 0 : 16
Thread 0 : 17
Thread 0 : 18
Thread 0 : 19
Thread 0 : 20
Thanks for any kind of tip!
I use Gentoo Linux; the gcc-4.5.4 compiler has the OpenMP flag activated. The CPU is a 2nd-generation mobile Core i7.
ldd test:
linux-vdso.so.1 (0x00007fff85fce000)
libgfortran.so.3 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libgfortran.so.3 (0x00007fe310460000)
libm.so.6 => /lib64/libm.so.6 (0x00007fe310169000)
libgomp.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libgomp.so.1 (0x00007fe30ff5b000)
libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libgcc_s.so.1 (0x00007fe30fd45000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe30fb28000)
libc.so.6 => /lib64/libc.so.6 (0x00007fe30f77d000)
librt.so.1 => /lib64/librt.so.1 (0x00007fe30f574000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe310749000)
gfortran -v
Using built-in specs.
COLLECT_GCC=/usr/x86_64-pc-linux-gnu/gcc-bin/4.5.4/gfortran
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-pc-linux-gnu/4.5.4/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: /var/tmp/portage/sys-devel/gcc-4.5.4/work/gcc-4.5.4/configure --prefix=/usr --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/4.5.4 --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/include --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.5.4 --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.5.4/man --infodir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.5.4/info --with-gxx-include-dir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/include/g++-v4 --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu --disable-altivec --disable-fixed-point --without-ppl --without-cloog --disable-lto --enable-nls --without-included-gettext --with-system-zlib --enable-obsolete --disable-werror --enable-secureplt --enable-multilib --enable-libmudflap --disable-libssp --enable-libgomp --with-python-dir=/share/gcc-data/x86_64-pc-linux-gnu/4.5.4/python --enable-checking=release --disable-libgcj --enable-languages=c,c++,fortran --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu --enable-targets=all --with-bugurl=http://bugs.gentoo.org/ --with-pkgversion='Gentoo 4.5.4 p1.0, pie-0.4.7'
Thread model: posix
gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7)
output of testmp.f.140t.optimized (the one before the *.statistics):
;; Function test (MAIN__)
test ()
{
struct __st_parameter_dt dt_parm.1;
logical(kind=4) D.1545;
struct __st_parameter_dt dt_parm.0;
integer(kind=4) tnr;
integer(kind=4) t;
integer(kind=4) i;
integer(kind=4) i.8;
integer(kind=4) i.7;
integer(kind=4) i.6;
integer(kind=4) tnr.5;
integer(kind=4) i.4;
integer(kind=4) t.3;
<bb 2>:
omp_set_num_threads (&C.1537);
t.3_1 = omp_get_max_threads ();
t = t.3_1;
dt_parm.0.common.filename = &"testmp.f"[1]{lb: 1 sz: 1};
dt_parm.0.common.line = 11;
dt_parm.0.common.flags = 128;
dt_parm.0.common.unit = 6;
_gfortran_st_write (&dt_parm.0);
_gfortran_transfer_character (&dt_parm.0, &"t:"[1]{lb: 1 sz: 1}, 2);
_gfortran_transfer_integer (&dt_parm.0, &t, 4);
_gfortran_st_write_done (&dt_parm.0);
i = 1;
i.4_2 = i;
if (i.4_2 <= 20)
goto <bb 3>;
else
goto <bb 5>;
<bb 3>:
tnr.5_3 = omp_get_thread_num ();
tnr = tnr.5_3;
dt_parm.1.common.filename = &"testmp.f"[1]{lb: 1 sz: 1};
dt_parm.1.common.line = 16;
dt_parm.1.common.flags = 128;
dt_parm.1.common.unit = 6;
_gfortran_st_write (&dt_parm.1);
_gfortran_transfer_character (&dt_parm.1, &"Thread"[1]{lb: 1 sz: 1}, 6);
_gfortran_transfer_integer (&dt_parm.1, &tnr, 4);
_gfortran_transfer_character (&dt_parm.1, &":"[1]{lb: 1 sz: 1}, 1);
_gfortran_transfer_integer (&dt_parm.1, &i, 4);
_gfortran_st_write_done (&dt_parm.1);
i.6_4 = i;
D.1545_5 = i.6_4 == 20;
i.7_6 = i;
i.8_7 = i.7_6 + 1;
i = i.8_7;
if (D.1545_5 != 0)
goto <bb 5>;
else
goto <bb 4>;
<bb 4>:
goto <bb 3>;
<bb 5>:
return;
}
;; Function main (main)
main (integer(kind=4) argc, character(kind=1) * * argv)
{
static integer(kind=4) options.2[8] = {68, 255, 0, 0, 0, 1, 0, 1};
integer(kind=4) D.1552;
<bb 2>:
_gfortran_set_args (argc_1(D), argv_2(D));
_gfortran_set_options (8, &options.2[0]);
test ();
D.1552_3 = 0;
return D.1552_3;
}
Setting OMP_NUM_THREADS or calling omp_set_num_threads() sets the nthreads-var ICV (Internal Control Variable). To retrieve back its value, one should call omp_get_max_threads() and not omp_get_num_threads().
Second, there is a data race in your code. By default OpenMP treats both tnr and t as shared variables. In that case the value of tnr displayed by the write statement will be the value obtained in the last thread to execute the assignment (note that GCC suppresses register optimisation when it comes to shared variables).
The correct code would be as follows:
program test
use omp_lib
implicit none
integer :: i, tnr,t
call omp_set_num_threads( 4 )
t = omp_get_max_threads()
write(*,*)'t:',t
!$omp parallel do private(tnr)
do i = 1, 20
tnr = omp_get_thread_num()
write( *, * ) 'Thread', tnr, ':', i
end do
!$omp end parallel do
end program test
Note that when a do construct is the only thing nested immediately inside a parallel region, one can use the combined parallel do construct and save two lines of code.
You have stored Fortran 90 code in a .f file, which is therefore recognised as fixed source form. In this case the OpenMP directives must obey the following rules:
The following sentinels are recognized in fixed form source files:
!$omp | c$omp | *$omp
Sentinels must start in column 1 and appear as a single word with no intervening characters. Fortran fixed form line length, white space, continuation, and column rules apply to the directive line. Initial directive lines must have a space or zero in column 6, and continuation directive lines must have a character other than a space or a zero in column 6. (emphasis mine)
I guess your directives start in the same column as the rest of the program code and therefore are treated simply as comments and not as OpenMP directives, evident by the content of the testmp.f.140t.optimized file.
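For illustration, a hedged fixed-form fragment with a correctly placed sentinel (this is not your code, just the column layout):
c     sentinel in columns 1-5, blank in column 6, statements from column 7
!$omp parallel do private(tnr)
      do i = 1, 20
         tnr = omp_get_thread_num()
         write( *, * ) 'Thread', tnr, ':', i
      end do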
First, you have to compile this code with
gfortran -fopenmp FILE
Second, omp_get_num_threads() returns the number of threads at the point where it is called. As you called this function in a serial region, the answer is always 1.
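A small sketch that shows both points at once (compile with gfortran -fopenmp):
program num_threads_demo
  use omp_lib
  implicit none
  call omp_set_num_threads(4)
  ! In a serial region omp_get_num_threads() is always 1;
  ! omp_get_max_threads() reports the nthreads-var ICV instead
  print *, 'serial region:  ', omp_get_num_threads()
  print *, 'max threads:    ', omp_get_max_threads()
  !$omp parallel
  !$omp master
  print *, 'parallel region:', omp_get_num_threads()   ! now 4
  !$omp end master
  !$omp end parallel
end program num_threads_demo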