assign two MPI processes per core - openmpi

How do I assign 2 MPI processes per core?
For example, if I do mpirun -np 4 ./application then it should use 2 physical cores to run 4 MPI processes (2 processes per core). I am using Open MPI 1.6. I did mpirun -np 4 -nc 2 ./application but wasn't able to run it.
It complains that mpirun was unable to launch the specified application, as it could not find an executable:

orterun (the Open MPI SPMD/MPMD launcher; mpirun/mpiexec are just symlinks to it) has some support for process binding but it is not flexible enough to allow you to bind two processes per core. You can try with -bycore -bind-to-core but it will err when all cores already have one process assigned to them.
But there is a workaround - you can use a rankfile where you explicitly specify which slot to bind each rank to. Here is an example: in order to run 4 processes on a dual-core CPU with 2 processes per core, you would do the following:
mpiexec -np 4 -H localhost -rf rankfile ./application
where rankfile is a text file with the following content:
rank 0=localhost slot=0:0
rank 1=localhost slot=0:0
rank 2=localhost slot=0:1
rank 3=localhost slot=0:1
This will place ranks 0 and 1 on core 0 of processor 0 and ranks 2 and 3 on core 1 of processor 0. Ugly but works:
$ mpiexec -np 4 -H localhost -rf rankfile -tag-output cat /proc/self/status | grep Cpus_allowed_list
[1,0]<stdout>:Cpus_allowed_list: 0
[1,1]<stdout>:Cpus_allowed_list: 0
[1,2]<stdout>:Cpus_allowed_list: 1
[1,3]<stdout>:Cpus_allowed_list: 1
Edit: From your other question it becomes clear that you are actually running on a hyperthreaded CPU. Then you have to figure out the physical numbering of your logical processors (it's a bit confusing, but the physical numbering corresponds to the value of processor: as reported in /proc/cpuinfo). The easiest way to obtain it is to install the hwloc library. It provides the hwloc-ls tool that you can use like this:
$ hwloc-ls --of console
...
NUMANode L#0 (P#0 48GB) + Socket L#0 + L3 L#0 (12MB)
L2 L#0 (256KB) + L1 L#0 (32KB) + Core L#0
PU L#0 (P#0) <-- Physical ID 0
PU L#1 (P#12) <-- Physical ID 12
...
Physical IDs are listed after P# in the brackets. In your 8-core case the second hyperthread of the first core (core 0) would most likely have ID 8 and hence your rankfile would look something like:
rank 0=localhost slot=p0
rank 1=localhost slot=p8
rank 2=localhost slot=p1
rank 3=localhost slot=p9
(note the p prefix - don't omit it)
If you don't have hwloc or you cannot install it, then you would have to parse /proc/cpuinfo on your own. Hyperthreads would have the same values of physical id and core id but different processor and apicid. The physical ID is equal to the value of processor.
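If you do end up parsing /proc/cpuinfo by hand, a short pipeline along these lines can do it. This is only a sketch and assumes the standard Linux field names (processor, physical id, core id); lines that print the same "socket/core" pair are hyperthread siblings.

```shell
# For each logical CPU, print which socket and physical core it lives on.
# Logical CPUs that share the same "socket S core C" key are hyperthreads
# of the same physical core.
awk '/^processor/{p=$3} /^physical id/{s=$4} /^core id/{print "socket " s " core " $4 ": cpu " p}' /proc/cpuinfo
```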

I'm not sure if you have multiple machines or not, and the exact details of how you want the processes distributed, but I'd consider reading up:
mpirun man page
The manual indicates that it has ways of binding processes to different things, including nodes, sockets, and cpu cores.
Note that you may get this effect simply by running twice as many processes as you have CPU cores, since the OS scheduler will tend to distribute them evenly over the cores to share the load.
I'd try something like the following, though the manual is somewhat ambiguous and I'm not 100% sure it will behave as intended, assuming you have a dual-core machine:
mpirun -np 4 -npersocket 4 ./application

If you use PBS or a similar batch system, I would suggest this kind of submission:
qsub -l select=128:ncpus=40:mpiprocs=16 -v NPROC=2048 ./pbs_script.csh
This submission selects 128 compute nodes that each expose 40 CPUs and starts 16 MPI processes on each of them (128 × 16 = 2048). In my case, each node has 20 physical cores (40 hardware threads).
The submission also reserves all 40 CPUs of every node, so nobody else can use those resources; this keeps other people's jobs from landing on the same nodes and competing with yours.
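For completeness, a minimal sketch of what pbs_script.csh might contain; the NPROC variable comes from the qsub -v option above, and the application name is hypothetical:

```shell
#!/bin/csh
# NPROC is passed in via "qsub -v NPROC=2048" (128 nodes x 16 MPI ranks)
mpirun -np $NPROC ./application
```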

Using Open MPI 4.0, the two commands:
mpirun --oversubscribe -c 8 ./a.out
and
mpirun -map-by hwthread:OVERSUBSCRIBE -c 8 ./a.out
worked for me (I have a Ryzen 5 processor with 4 cores and 8 logical cores).
I tested with a do loop performing operations on real numbers. All logical threads are used, though there seems to be no speedup benefit: the computation takes twice as long as with the -c 4 option (no oversubscription).

You can run
mpirun --use-hwthread-cpus ./application
In this case, Open MPI will consider that a processor is a hardware thread provided by Hyperthreading; by default, it considers that a processor is a CPU core.
With this option, Open MPI refers to the threads provided by Hyperthreading as "hardware threads" and allocates one process slot per hardware thread.

Related

Understanding `stress` command in Linux

I'm trying to understand what stress command actually does in Linux, in particular -c option. My background is Physics, so I'm struggling with some concepts.
Does stress -c 3 launch 3 processes that each consume 100% of 3 bound CPU cores (for example cores 0, 1, 3)? The output of htop is confusing since I don't see 3 CPU cores at 100% all the time. Note: by "bound" I mean that these processes cannot run on the other CPU cores (in this case 4 to N).
For example, after running stress -c 3, htop sometimes shows exactly 3 CPU cores at 100%, which makes sense to me.
But most of the time it shows the load moving between cores, with no 3 cores steadily at 100%, which doesn't make sense to me.

openMp and the number of cores vs cpus

I'm wondering about how openmp figures out how many threads it can run via the omp_get_max_threads() library call. I'm running on a centOS linux machine using gcc -fopenmp. My machine has 16 AMD Opteron(tm) Processor 6136 CPUs, each with 8 cores, all according to /proc/cpuinfo. If I run omp_get_num_procs() it returns 16. But omp_get_max_threads() also returns 16. Why isn't the max threads number 16*8?
When I run a program that uses 16 threads I see the program in top running at ~1600% of CPU and if I toggle 'Last used cpu (SMP)' that number moves around a bit. So the 1600% makes sense but is there any way to know which cores of which CPUs the threads are running on?
I'm pretty new to openmp so sorry if these questions seem naive.
You can use the hwloc tool set to see how the threads of any running application are bound to hardware threads/cores. You only need the name or the PID of the target process. Here is an example:
$ hwloc-ps --pid 2038168 --threads --get-last-cpu-location
2038168 Machine:0 ./a.out
2038168 Core:5 a.out
2038169 Core:3 a.out
2038170 Core:1 a.out
2038171 Core:4 a.out
2038172 Core:0 a.out
2038173 Core:2 a.out
Here we can see that the process a.out (with PID 2038168) uses 6 threads, each mapped to a different core.
However, the mapping of threads on cores over time can change if you do not configure OpenMP properly (a starting point is to set the environment variables OMP_PROC_BIND and OMP_PLACES).
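For instance, a minimal way to pin the threads (the binary name ./a.out is taken from the example above):

```shell
# Keep each OpenMP thread on its own physical core, close to the master thread
export OMP_PROC_BIND=close
export OMP_PLACES=cores
# ./a.out   # now hwloc-ps should report a stable thread-to-core mapping
```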
Additionally, you can use hwloc-ls (or its graphical front end lstopo) to understand the topology of your machine (how many cores and hardware threads there are, how they are connected, etc.).
I am very surprised you can have 16 "AMD Opteron(tm) Processor 6136" CPUs. Indeed, this processor uses the G34 socket, which is available in up to 4-socket arrangements (8 dies). So please check this with hwloc-ls!
An alternative way is to use a profiling tool (such as Intel VTune).

Linux - Need to limit a 16 core system to 4 to test performance running multiple programs

I have a couple of programs that will be deployed on a system with a limited number of cores.
I want to test performance of these programs on my current system that is a lot more powerful than the one it will be on.
Is the only way to properly limit the resources through a virtual machine on my system, or can I just restrict my system to match the core limit of the system my programs will run on?
taskset might help you.
Start your application your_command as follows:
taskset -ac 0-3 your_command
# -c 0-3: your_command may only run on cores 0 to 3
# -a: apply the affinity mask to all of the command's threads
If the application is already running:
taskset -acp 0-3 PID
# PID = process ID
See this answer to 'Limit process to one cpu core' at Unix & Linux for further details.
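You can also verify that the restriction is effective, e.g. with nproc, which honors the affinity mask:

```shell
# Restricted to a single core, the process sees exactly one usable CPU
taskset -c 0 nproc    # prints 1
```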

Curious about how to specify the number of core for MPI in order to get the fastest scientific computation

I have been running several scientific program package in conjunction with MPI by using the following command
nohup mpirun -np N -x OMP_NUM_THREADS=M program.exe < input > output &
where the value of N and M depend on the physical CPU cores of my machine. For example, my machine has the specification like this
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2440 0 @ 2.40GHz
Stepping: 7
In this case, I first tried N = 24 and M = 1, and the calculation ran very slowly. Then I changed N and M to 12 and 2, respectively, and found that the latter clearly gave the fastest computation.
I was wondering why setting N and M to 12 and 2 provides higher performance than the first case?
There is no absolute rule on how to run an MPI+OpenMP application.
The only advice is not to run an OpenMP process on more than one socket
(OpenMP was designed for SMP machines with flat memory access, but today most systems are NUMA).
Then just experiment.
Some apps run best in flat MPI (e.g. one thread per task), while others work best with one MPI task per socket and all available cores for OpenMP.
Last but not least, if you run more than one OpenMP thread per MPI task, make sure your MPI library binds the MPI tasks as expected.
For example, if you run with 12 OpenMP threads but MPI binds each task to one core, you will end up time-sharing the core and performance will be horrible.
And if you run with 12 OpenMP threads and the MPI task is bound to 12 cores, make sure those 12 cores are on the same socket (and not 6 on each socket).
There is no general rule about this because, most of the time, the performance depends on the computational properties of the application itself.
Applications with coarse synchronization granularity may scale well using plain MPI code (no multithreading).
If the synchronization granularity is fine, then using shared memory multithreading (such as OpenMP) and placing all the threads in a process close to each other (in the same socket) becomes more important: synchronization is cheaper and memory access latency is critical.
Finally, compute-bound applications (performance is limited by the processor) are likely not to benefit from hyper-threading at all, since two threads sharing a core contend for the functional units it contains. In this case, you may find applications that perform better using N=2 and M=6 than using N=2 and M=12.
Indeed, there is no absolute rule on how to run an MPI+OpenMP application,
and I agree with all Gilles said.
So I want to talk about the CPU in your case.
The specification you give shows that the system has Hyper-Threading enabled,
but this does not always help: your computer has 12 physical cores in fact.
So I advise you to try some combinations with M * N between 12 and 24,
like 12*1, 6*2, 6*3.
Which one is best depends on your application.

Spawning Multiple Jobs for building a kernel on Quad Core processor?

If my Kernel's Makefiles have correct dependency information,
to spawn multiple jobs, I should use this:
$ make -jn
where n is the number of jobs to spawn.
If I use Quad Core Machine like i7 4700MQ with 8 GB RAM, what should be the ideal and the maximum value of n ?
grep '^processor' /proc/cpuinfo | wc -l
It should give you the number of CPUs in the system (as seen by the OS). You can use this number; n+1 usually won't hurt either (it depends on the project you are building, the scheduler in use, etc.).
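On modern systems, nproc from coreutils is a shorter equivalent of that grep pipeline (a sketch; note that nproc counts the CPUs actually available to the process, which can be fewer than the cpuinfo total if an affinity mask is set):

```shell
# Usually the same count as: grep '^processor' /proc/cpuinfo | wc -l
nproc
# e.g. build with one job per CPU:
# make -j"$(nproc)"
```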