How can I run Open MPI under Slurm?

I am unable to run Open MPI under Slurm from a Slurm batch script.
Outside of Slurm, I can obtain the hostname and run Open MPI on my machine:
$ mpirun hostname
myHost
$ cd NPB3.3-SER/ && make ua CLASS=B && mpirun -n 1 bin/ua.B.x inputua.data # Works
But if I do the same operations through the Slurm script, mpirun hostname returns an empty string, and consequently I am unable to run mpirun -n 1 bin/ua.B.x inputua.data.
slurm-script.sh:
#!/bin/bash
#SBATCH -o slurm.out # STDOUT
#SBATCH -e slurm.err # STDERR
#SBATCH --mail-type=ALL
export LD_LIBRARY_PATH="/usr/lib/openmpi/lib"
mpirun hostname > output.txt # Returns empty
cd NPB3.3-SER/
make ua CLASS=B
mpirun --host myHost -n 1 bin/ua.B.x inputua.data
$ sbatch -N1 slurm-script.sh
Submitted batch job 1
The error I am receiving:
There are no allocated resources for the application
bin/ua.B.x
that match the requested mapping:
------------------------------------------------------------------
Verify that you have mapped the allocated resources properly using the
--host or --hostfile specification.
A daemon (pid unknown) died unexpectedly with status 1 while attempting
to launch so we are aborting.
There may be more information reported by the environment (see above).
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
------------------------------------------------------------------
An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back
to mpirun due to a lack of common network interfaces and/or no
route found between them. Please check network connectivity
(including firewalls and network routing requirements).
------------------------------------------------------------------

If Slurm and OpenMPI are recent versions, make sure that OpenMPI is compiled with Slurm support (run ompi_info | grep slurm to find out) and just run srun bin/ua.B.x inputua.data in your submission script.
Alternatively, mpirun bin/ua.B.x inputua.data should work too.
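A minimal submission script along those lines (paths taken from the question; assumes a Slurm-aware Open MPI build, so srun inherits the allocation and no host mapping is needed):

```shell
#!/bin/bash
#SBATCH -o slurm.out            # STDOUT
#SBATCH -e slurm.err            # STDERR
#SBATCH -N 1

cd NPB3.3-SER/
make ua CLASS=B
# With Slurm support compiled into Open MPI, srun launches the ranks
# directly inside the allocation; no --host/--hostfile is required.
srun bin/ua.B.x inputua.data
```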
If OpenMPI is compiled without Slurm support the following should work:
srun hostname > output.txt
cd NPB3.3-SER/
make ua CLASS=B
mpirun --hostfile output.txt -n 1 bin/ua.B.x inputua.data
Make sure also that by running export LD_LIBRARY_PATH="/usr/lib/openmpi/lib" you do not overwrite other library paths that are necessary. Better would be export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/openmpi/lib" (or a slightly more complex version if you want to avoid a leading : when the variable is initially empty).
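A small sketch of that shell idiom (append_libpath is a hypothetical helper; the directory names are illustrative). The ${VAR:+...} expansion emits the separating colon only when the variable is already non-empty:

```shell
# Append a directory to LD_LIBRARY_PATH without producing a stray
# leading colon when the variable starts out unset or empty.
append_libpath() {
  local dir=$1
  LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}${dir}"
  export LD_LIBRARY_PATH
}

unset LD_LIBRARY_PATH
append_libpath /usr/lib/openmpi/lib
echo "$LD_LIBRARY_PATH"   # /usr/lib/openmpi/lib
append_libpath /opt/other/lib
echo "$LD_LIBRARY_PATH"   # /usr/lib/openmpi/lib:/opt/other/lib
```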

What you need is: 1) to run mpirun, 2) from Slurm, 3) with --host.
To determine what is responsible for this failure (Problem 1), you can test a few things.
Whatever you test, you should test exactly the same thing via the command line (CLI) and via Slurm (S).
It is understood that some of these tests will produce different results between CLI and S.
A few notes are:
1) You are not testing exactly the same things in CLI and S.
2) You say that you are "unable to run mpirun -n 1 bin/ua.B.x inputua.data", while the problem is actually with mpirun --host myHost -n 1 bin/ua.B.x inputua.data.
3) The fact that mpirun hostname > output.txt returns an empty file (Problem 2) does not necessarily have the same origin as your main problem, see paragraph above. You can overcome this problem by using scontrol show hostnames
or with the environment variable SLURM_NODELIST (on which scontrol show hostnames is based), but this will not solve Problem 1.
To work around Problem 2, which is not the most important, try a few things via both CLI and S.
The Slurm script below may be helpful.
#!/bin/bash
#SBATCH -o slurm_hostname.out # STDOUT
#SBATCH -e slurm_hostname.err # STDERR
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/usr/lib64/openmpi/lib"
mpirun hostname > hostname_mpirun.txt # 1. Returns values ok for me
hostname > hostname.txt # 2. Returns values ok for me
hostname -s > hostname_slurmcontrol.txt # 3. Returns values ok for me
scontrol show hostnames > hostname_scontrol.txt # 4. Returns values ok for me
echo ${SLURM_NODELIST} > hostname_slurm_nodelist.txt # 5. Returns values ok for me
(The ${LD_LIBRARY_PATH:+...} expansion in the export line avoids a stray leading colon when the variable is initially empty.)
From what you say, I understand 2, 3, 4 and 5 work ok for you, and 1 does not.
So you could now use mpirun with suitable options --host or --hostfile.
Note the different format of the output of scontrol show hostnames (e.g., for me cnode17<newline>cnode18) and echo ${SLURM_NODELIST} (cnode[17-18]).
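For illustration, the compressed SLURM_NODELIST form can be expanded by hand in the simplest case. expand_nodelist below is a hypothetical helper handling only a single numeric range (no comma lists, no zero padding); in practice scontrol show hostnames "$SLURM_NODELIST" does this properly:

```shell
# Expand e.g. "cnode[17-18]" to one hostname per line.
# Handles only "prefix[lo-hi]"; real Slurm nodelists can be far more complex.
expand_nodelist() {
  local list=$1
  if [[ $list =~ ^([^[]+)\[([0-9]+)-([0-9]+)\]$ ]]; then
    local prefix=${BASH_REMATCH[1]} lo=${BASH_REMATCH[2]} hi=${BASH_REMATCH[3]} i
    for ((i = lo; i <= hi; i++)); do
      echo "${prefix}${i}"
    done
  else
    echo "$list"   # plain hostname, nothing to expand
  fi
}

expand_nodelist "cnode[17-18]"   # prints cnode17 and cnode18, one per line
```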
The host names could perhaps also be obtained in file names set dynamically with %h and %n in slurm.conf, look for e.g. SlurmdLogFile, SlurmdPidFile.
To diagnose/work around/solve Problem 1, try mpirun with/without --host, in CLI and S.
From what you say, assuming you used the correct syntax in each case, this is the outcome:
1) mpirun, CLI (original post): "Works".
2) mpirun, S (comment?): same error as item 4 below? Note that mpirun hostname in S should have produced similar output in your slurm.err.
3) mpirun --host, CLI (comment): error
There are no allocated resources for the application bin/ua.B.x that match the requested mapping:
...
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
4) mpirun --host, S (original post): error (same as item 3 above?)
There are no allocated resources for the application
bin/ua.B.x
that match the requested mapping:
------------------------------------------------------------------
Verify that you have mapped the allocated resources properly using the
--host or --hostfile specification.
...
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
As per comments, you may have a wrong LD_LIBRARY_PATH path set.
You may also need to use mpirun --prefix ...
Related?
https://github.com/easybuilders/easybuild-easyconfigs/issues/204


Change salloc behavior to run all commands on remote node

By default, salloc runs any shell commands on the node where salloc was called, while srun commands issued from that salloc job shell run on the allocated node. Does anyone know of a way to get salloc to interactively run all commands in a job shell on the remote node?
Below is an example of the current default behaviour I'm seeing. Ideally, the first hostname command would run on slurm-node02 and return that hostname. Thanks!
[testyboi@slurm-node01 ~]$ salloc --nodelist=slurm-node02
salloc: Granted job allocation 890
[testyboi@slurm-node01 ~]$ hostname
slurm-node01
[testyboi@slurm-node01 ~]$ cat salloctest.sh
#!/bin/bash
echo "I am running on "; hostname;
[testyboi@slurm-node01 ~]$ srun -N1 salloctest.sh
I am running on
slurm-node02
The former FAQ, relevant for versions prior to 20.11, suggested setting
SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --cpu-bind=no --mpi=none $SHELL"
in slurm.conf. For the current version, 20.11, there is a new option: set
LaunchParameters=use_interactive_step
in slurm.conf.
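With that option set, salloc should drop you straight into a shell on the allocated node. A hypothetical session (hostnames from the question; the job ID is made up):

```shell
[testyboi@slurm-node01 ~]$ salloc --nodelist=slurm-node02
salloc: Granted job allocation 891
[testyboi@slurm-node02 ~]$ hostname
slurm-node02
```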

Why doesn't Torque qsub create an output file?

I am trying to start a task on a cluster via Torque PBS with the command
qsub -o a.txt a.sh
The file a.sh contains a single line:
hostname
After qsub I run qstat, which gives the following output:
Job ID Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
302937.voms a.sh user 00:00:00 E long
After 5 seconds, qstat returns empty output (no jobs in the queue).
The command
qsub --version
gives: version: 2.5.13
The command
which qsub
gives: /usr/bin/qsub
The problem is that the file a.txt (from qsub -o a.txt a.sh) is not created! Only the job ID is returned in the terminal; there are no errors. The command
qsub a.sh
behaves the same way. How can I fix this? Where are the qsub log files with errors?
If I use the command
qsub -l nodes=node36:ppn=1 -o a.txt a.sh
then I can find the output files in the folder
/var/spool/pbs/undelivered
on node36 (after logging in to it via ssh).
The output file contains the string "node36"; the error file is empty.
Why are my files "undelivered"?
The output and error log files are kept in a spool directory on the execution node and copied back to the head node after the job has completed. The location of the spool directory may vary, but you should look for it under /var/torque/spool on the first node from the list of nodes the job has been allocated.
There are multiple reasons that might cause Torque to fail to deliver the output files:
1) The user submitting the job does not exist on the node, their home directory is not accessible, or there is a user ID mismatch between the nodes of the cluster.
2) Torque uses ssh to copy files to the head node, but passwordless public-key authentication for the user has not been set up consistently on all the nodes.
3) A node failed during the execution of the job.
This list is by no means complete. Here on Stack Overflow one can already find a number of questions dealing with such failures. Check whether any of the above applies to your case.
You (or anyone else finding this thread) should also check out the solution given here:
PBS, refresh stdout
If you have admin access, you can set
$spool_as_final_name true
which causes the output to be written directly to the final destination.
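For reference, $spool_as_final_name is a pbs_mom parameter, so it goes in the MOM config on each compute node. A sketch (the config path is a common default and may differ on your install):

```shell
# On every compute node, as root; adjust the path to your site's layout:
echo '$spool_as_final_name true' >> /var/spool/torque/mom_priv/config
# Restart the MOM (or your init system's equivalent) so it rereads its config:
service pbs_mom restart
```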

How to run an MPI task?

I am a newbie in Linux and recently started working with our university supercomputer. I need to install my program (GAMESS Quantum Chemistry Software) in my own allocated space. I have installed and run it successfully under 'sockets', but I actually need to run it under 'mpi' (otherwise there is little advantage to using a supercomputer).
System Setting:
OS: Linux64 , Redhat, intel
MPI: impi
compiler: ifort
modules: slurm , intel/intel-15.0.1 , intel/impi-15.0.1
This software is run via 'rungms' and receives arguments as:
rungms [fileName] [Version] [CPU count] (for example: ./rungms Opt 00 4)
Here is my bash script (my feeling is that this is the main culprit for my problem!):
#!/bin/bash
#Based off of Monte's Original Script for Torque:
#https://gist.github.com/mlunacek/6306340#file-matlab_example-pbs
#These are SBATCH directives specifying name of file, queue, the
#Quality of Service, wall time, Node Count, #of CPUS, and the
#destination output file (which appends node hostname and JobID)
#SBATCH -J OptMPI
#SBATCH --qos janus-debug
#SBATCH -t 00-00:10:00
#SBATCH -N2
#SBATCH --ntasks-per-node=1
#SBATCH -o output-OptMPI-%N-JobID-%j
#NOTE: This Module Will Be Replaced With Slurm Specific:
module load intel/impi-15.0.1
mpirun /projects/augenda/gamess/rungms Opt 00 2 > OptMPI.out
As I said before, the program is compiled for mpi ( and not 'sockets' ) .
My problem is that when I run sbatch Opt.sh, I receive this error:
srun: error: PMK_KVS_Barrier duplicate request from task 1
When I change the -N number, I sometimes receive an error saying (4 != 2).
With an odd -N number I receive an error saying it expects an even number of processes.
What am I missing?
My script above is based on the bash file example from our supercomputer's website.
The Slurm Workload Manager has a few ways of invoking an Intel MPI process. Likely, all you have to do is use srun rather than mpirun in your case. If errors persist, check your site's documentation for alternative ways to invoke Intel MPI jobs; it is rather dependent on how the HPC admins configured the system.
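A sketch of the submission script with that one change (everything else as in the question; srun starts one task per allocated slot, two here):

```shell
#!/bin/bash
#SBATCH -J OptMPI
#SBATCH --qos janus-debug
#SBATCH -t 00-00:10:00
#SBATCH -N2
#SBATCH --ntasks-per-node=1
#SBATCH -o output-OptMPI-%N-JobID-%j

module load intel/impi-15.0.1
# srun replaces mpirun and uses Slurm's own process-management interface
# to launch the Intel MPI ranks inside the allocation.
srun /projects/augenda/gamess/rungms Opt 00 2 > OptMPI.out
```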

Torque+MAUI PBS submitted job strange startup

I am using a Torque+MAUI cluster.
The cluster's utilization right now is ~10 of 40 nodes, with many jobs queued but unable to start.
I submitted the following PBS script using qsub:
#!/bin/bash
#
#PBS -S /bin/bash
#PBS -o STDOUT
#PBS -e STDERR
#PBS -l walltime=500:00:00
#PBS -l nodes=1:ppn=32
#PBS -q zone0
cd /somedir/workdir/
java -Xmx1024m -Xms256m -jar client_1_05.jar
The job gets R(un) status immediately, but I got this abnormal output from qstat -n:
8655.cluster.local user zone0 run.sh -- 1 32 -- 500:00:00 R 00:00:31
z0-1/0+z0-1/1+z0-1/2+z0-1/3+z0-1/4+z0-1/5+z0-1/6+z0-1/7+z0-1/8+z0-1/9
+z0-1/10+z0-1/11+z0-1/12+z0-1/13+z0-1/14+z0-1/15+z0-1/16+z0-1/17+z0-1/18
+z0-1/19+z0-1/20+z0-1/21+z0-1/22+z0-1/23+z0-1/24+z0-1/25+z0-1/26+z0-1/27
+z0-1/28+z0-1/29+z0-1/30+z0-1/31
The abnormal part is the -- in run.sh -- 1 32: the session ID is missing, and evidently the script does not run at all, i.e. the java program never shows any trace of being started.
After running strangely like this for ~5 minutes, the job is set back to Q(ueue) status and seemingly never runs again (I monitored this for ~1 week; it did not run even when queued as the topmost job).
I tried submitting the same job 14 times and monitored its nodes in qstat -n: 7 copies ran successfully on various nodes, but every job allocated to z0-1/* got stuck with this strange startup behavior.
Anyone know a solution to this issue?
For a temporary workaround, how can I specify NOT to use those strange nodes in PBS script?
It sounds like something is wrong with those nodes. One solution would be to offline the nodes that aren't working: pbsnodes -o <node name> and allow the cluster to continue to work. You may need to release the holds on any jobs. I believe you can run releasehold ALL to accomplish this in Maui.
Once you take care of that I'd investigate the logs on those nodes (start with the pbs_mom logs and the syslogs) and figure out what is wrong with them. Once you figure out and correct what is wrong with them, you can put the nodes back online: pbsnodes -c <node_name>. You may also want to look into setting up some node health scripts to proactively detect and handle these situations.
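A hypothetical admin session for that procedure (the node name is taken from the question's output):

```shell
# Mark the suspect node offline so the scheduler stops placing jobs on it:
pbsnodes -o z0-1
# ... inspect the pbs_mom logs and syslog on z0-1, fix the problem ...
# Then clear the offline mark so the node accepts jobs again:
pbsnodes -c z0-1
```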
For users: contact your administrator, and in the meantime run the job using this workaround:
1) Use pbsnodes to check for free and healthy nodes.
2) Modify the PBS directive: #PBS -l nodes=<freenode1>:ppn=<ppn1>+<freenode2>:ppn=<ppn2>+...
3) Submit the job using qsub.

Run Hydra (mpiexec) locally gives strange SSH error

I am trying to run example code from this question: MPI basic example doesn't work but when I do:
$ mpirun -np 2 mpi_test
I get this:
ssh: Could not resolve hostname wvxvw-laptop: Name or service not known
And then the program hangs until interrupted.
wvxvw-laptop is the "host name" of my laptop, which is just that, really, a laptop...
All I want is to run the example code, not to set up a network cluster or anything like that.
What did I miss? I'm reading the wiki page http://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager but I can't understand the reason.
Sorry, I'm very new to this.
Some more verbose output:
/usr/bin/ssh -x wvxvw-laptop "/usr/lib64/mpich/bin/hydra_pmi_proxy" \
--control-port wvxvw-laptop:54320 --debug --rmk user --launcher ssh \
--demux poll --pgid 0 --retries 10 --usize -2 --proxy-id 0
Formatted for readability. I'm not quite sure why this is even supposed to work (I've never used ssh -x and am not sure what it is supposed to do).
mpirun executes your program on every node registered in your MPI cluster.
MPI uses the computer name, so you can edit your /etc/hosts to add an entry for wvxvw-laptop.
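For example, mapping the hostname to a loopback address in /etc/hosts is usually enough for a single-machine run (127.0.1.1 is the convention some distributions use for the machine's own name; 127.0.0.1 works too):

```shell
# /etc/hosts -- add a line resolving the laptop's hostname locally:
127.0.0.1   localhost
127.0.1.1   wvxvw-laptop
```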
