Behaviour of "--cores" when using Snakemake with the slurm profile - slurm

I've been doing some tests on my cluster with this Snakefile:
rule all:
    input: [f"test.{id}.txt" for id in [1,2,3]]

rule test:
    output: temp(touch("test.{id}.txt"))
    threads: 10
    shell: "sleep 5"
When not using the slurm profile, --cores behaves as I'd expect and there's one instance that gets executed for every 10 cores specified.
When you do specify --profile slurm, --cores instead behaves as if it were limiting the number of jobs submitted: --cores 1 executes 1 job with 10 cores, --cores 2 executes 2 jobs with 10 cores each, and --cores 3 runs all three of them with 10 cores each.
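For reference, the invocations being compared are along these lines (the exact command lines are assumed, not quoted from my logs):

# without the profile: --cores caps the local cores, so one 10-thread instance runs per 10 cores given
snakemake --cores 10
# with the slurm profile: --cores appears to cap the number of submitted jobs instead
snakemake --profile slurm --cores 2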
I've always found it confusing that -j (for "jobs") and --cores are equivalent, and it would seem like my nightmares came true in this use case.
I'd appreciate any help understanding what's going on, and whether it could be an issue with my setup or with the way I'm using the profile.

Related

How to know what kind of work each Spark task/executor runs

When my application runs on a Spark cluster, I know the following
1) the execution plan
2) the DAG with nodes as RDD or operations
3) all jobs/stages/executors/tasks
However, I cannot find out, given a task ID, what kind of work (which RDD or operation) the task performs.
From a task, I can get its executor ID and the machine it runs on. On that machine, if we grep for Java and that ID, we get
/bin/bash -c /usr/lib/jvm/jdk1.8.0_192/bin/java -server -Xmx12288m '-XX:MaxMetaspaceSize=256M' '-Djava.library.path=/opt/hadoop/lib/native' '-Djava.util.logging.config.file=/opt/spark2/conf/parquet.logging.properties' -Djava.io.tmpdir=/tmp/hadoop-root/nmlocaldir/usercache/appcache/application_1549756402460_92964/container_1549756402460_92964_01_000012/tmp '-Dspark.driver.port=35617' '-Dspark.network.timeout=3000s' -Dspark.yarn.app.container.log.dir=/mnt/yarn-logs/userlogs/application_1549756402460_92964/container_1549756402460_92964_01_000012 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler#10.0.72.160:35617 --executor-id 11 --hostname abc --cores 3 --app-id application_1549756402460_92964 --user-class-path file:/tmp/hadoop-root/nm-local-dir/usercache/appcache/application_1549756402460_92964/container_1549756402460_92964_01_000012/__app__.jar 1>/mnt/yarn-logs/userlogs/application_1549756402460_92964/container_1549756402460_92964_01_000012/stdout 2> /mnt/yarn-logs/userlogs/application_1549756402460_92964/container_1549756402460_92964_01_000012/stderr
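(The grep step was along the lines of the following; the exact command is an assumption, the executor ID 11 comes from the output above.)

# on the worker machine: find the executor JVM for executor ID 11
ps -ef | grep CoarseGrainedExecutorBackend | grep -- "--executor-id 11"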
But it does not tell me what it does... Does Spark expose the information?

How to distribute slurm tasks evenly over the nodes?

I want to run a script on a cluster ~200 times using srun commands in one sbatch script. Since executing the script takes some time it would be great to distribute the tasks evenly over the nodes in the cluster. Sadly, I have issues with that.
Now, I created an example script ("hostname.sh") to test different parameters in the sbatch script:
echo `date +%s` `hostname`
sleep 10
This is my sbatch script:
#!/bin/bash
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
for i in `seq 200`; do
srun -n1 -N1 bash hostname.sh &
done
wait
I would expect hostname.sh to be executed 200 times (the for loop) but with only 15 tasks running at the same time (--ntasks=15). Since my biggest node has 56 cores, only three jobs should be able to run on that node at the same time (--cpus-per-task=16, and 56 / 16 rounds down to 3).
From the output of the script I can see that the first nine tasks are distributed over nine nodes of the cluster, but all the other tasks (191!) are executed on one node at the same time. The whole sbatch script execution took only about 15 seconds.
I think I misunderstand some of slurm's parameters but looking at the official documentation did not help me.
You need to use the --exclusive option of srun in that context:
srun -n1 -N1 --exclusive bash hostname.sh &
From the srun manpage:
By default, a job step has access to every CPU allocated to the job.
To ensure that distinct CPUs are allocated to each job step, use the
--exclusive option.
See also the last-but-one example in said documentation.
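Putting it together, a minimal sketch of the corrected sbatch script (counts taken from the question, everything else unchanged):

#!/bin/bash
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
# --exclusive makes each job step claim its own CPUs instead of the whole allocation,
# so srun blocks once 15 steps are running and the steps spread over the allocated nodes
for i in `seq 200`; do
    srun -n1 -N1 --exclusive bash hostname.sh &
done
wait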

Slurm: select nodes with specified number of CPUs

I'm using Slurm on a cluster where a single partition contains dissimilar nodes. Specifically, the nodes have varying numbers of CPUs. My code is a single-core application being used for a parameter sweep, and thus I want to fully use a (e.g.) 32-CPU node by sending it 32 jobs.
How can I select nodes (within a named partition) that have a specified number of CPUs?
I know my Partition configuration via
sinfo -e -p <partition_name> -o "%9P %3c %.5D %6t " -t idle,mix
PARTITION CPU NODES STATE
<partition_name> 16 63 mix
<partition_name> 32 164 mix
But if I use a submission script like
[snip preamble]
#SBATCH --partition <partition_name> # resource to be used
#SBATCH --nodes 1 # Num nodes
#SBATCH -N 1 # Num cores per job
#SBATCH --cores-per-socket=32 # Cores per node
the slurm scheduler says
sbatch: error: Socket, core and/or thread specification can not be satisfied
PS. A minor correction: my code to get partition info isn't the best. Just in case anyone looks up this question later, here is a better query (using X,Y for socket, core counts) that helps identify the problem that damien's excellent answer solved
sinfo -e -p <partition_name> -o "%9P %3c %.3D %6t %2X %2Y %N" -t idle,mix
To strictly answer your question: With
#SBATCH --cores-per-socket=32
you request 32 cores per socket, that is, per physical CPU. I guess those machines have two CPUs, so you should request something like
#SBATCH --sockets-per-node=2
#SBATCH --cores-per-socket=16
Another way of requesting the same is to ask for
#SBATCH --nodes 1
#SBATCH --tasks-per-node 32
But please note that, if your cluster allows node sharing, what you are doing seems better suited to a job array:
#SBATCH --ntasks 1
#SBATCH --array=1-32

# RUN_ID_FIRST and RUN_ID_LAST are the bounds of the parameter sweep (32 values in total)
IDS=($(seq RUN_ID_FIRST RUN_ID_LAST))
# array tasks are numbered 1..32, while bash arrays are 0-indexed
RUN_ID=${IDS[$SLURM_ARRAY_TASK_ID-1]}
matlab -nojvm -singleCompThread -r "try myscript(${RUN_ID}); catch me; disp(' *** error'); end; exit" > ./result_${RUN_ID}
This will launch 32 independent jobs, each taking care of running the Matlab script for one value of the parameter sweep.
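For illustration (the script name is made up), saving the directives and lines above as sweep.sh and submitting it once is enough; Slurm then starts one array task per index:

sbatch --partition=<partition_name> sweep.sh
# each array task sees its own SLURM_ARRAY_TASK_ID (1..32) and writes its own result_<RUN_ID> file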
To answer your additional question: if a 32-process job is scheduled on a 16-CPU node, the node will be overloaded, and depending on the containment solution set up by the administrators, your processes might impact other users' jobs and slow them down.

How to increase the performance of spark in hpc (shared memory cluster)

I have been dealing with running Spark on an HPC system for a while. Due to my limited experience and knowledge, I am stuck with a performance-related issue.
First of all, when I run my code on a server using 36 cores in local mode (
.setMaster("local[*]")
), I get my results in 1 hr 30 mins. Now, I want to improve on this by running my code on the HPC system. This HPC system has shared memory, and I was told that I do not need to set up YARN or anything else, since all assigned cores can use the same CPU. To this end, I have prepared my jar file and invoke it from a bash script. My Spark configuration is as below:
val conf = new SparkConf()
.setAppName("LSH-Cosine")
.setMaster("local[*]")
.set("spark.driver.maxResultSize", "0")
.set("spark.local.dir", "/local_scratch/tmp");
And my bash file:
#!/bin/bash -l
#PBS -l nodes=5:ppn=10
#PBS -l walltime=00:45:00
#PBS -l pmem=6gb
module load cerebro/2014a
module load Spark/1.4.1
cd $PBS_O_WORKDIR
spark-submit \
--class "com.soundcloud.lsh.MainCerebro" \
--master local[*] \
--executor-memory 256G \
--driver-memory 256g \
cosine-lsh.jar
So based on these settings, there will be 5 nodes, each with 10 cores, for 50 cores running in total. In addition, each core is assigned 6 GB of memory, which is quite high. When I ran the code, I observed that about 100 GB was consumed, and there was no difference in performance compared to the server. I then changed #PBS -l nodes=5:ppn=10 to #PBS -l nodes=10:ppn=10, and again about 100 GB was consumed with no increase in speed.
So what should I do to exploit the HPC system and get my results faster?
Thanks in advance.

Torque+MAUI PBS submitted job strange startup

I am using a Torque+MAUI cluster.
The cluster's current utilization is ~10 nodes out of the 40 available; a lot of jobs are queued but cannot be started.
I submitted the following PBS script using qsub:
#!/bin/bash
#
#PBS -S /bin/bash
#PBS -o STDOUT
#PBS -e STDERR
#PBS -l walltime=500:00:00
#PBS -l nodes=1:ppn=32
#PBS -q zone0
cd /somedir/workdir/
java -Xmx1024m -Xms256m -jar client_1_05.jar
The job gets R(un) status immediately, but I got this abnormal output from qstat -n:
8655.cluster.local user zone0 run.sh -- 1 32 -- 500:00:00 R 00:00:31
z0-1/0+z0-1/1+z0-1/2+z0-1/3+z0-1/4+z0-1/5+z0-1/6+z0-1/7+z0-1/8+z0-1/9
+z0-1/10+z0-1/11+z0-1/12+z0-1/13+z0-1/14+z0-1/15+z0-1/16+z0-1/17+z0-1/18
+z0-1/19+z0-1/20+z0-1/21+z0-1/22+z0-1/23+z0-1/24+z0-1/25+z0-1/26+z0-1/27
+z0-1/28+z0-1/29+z0-1/30+z0-1/31
The abnormal part is the -- in run.sh -- 1 32: the sessionId is missing, and evidently the script does not run at all, i.e. the java program never shows any trace of having started.
After this strange "running" state has lasted for ~5 minutes, the job is set back to Q(ueue) status and seemingly never runs again (I monitored this for ~1 week and it does not run even after being queued as the topmost job).
I tried submitting the same job 14 times and monitored its nodes with qstat -n: 7 copies ran successfully on various nodes, but all the jobs allocated to z0-1/* got stuck with this strange startup behaviour.
Anyone know a solution to this issue?
As a temporary workaround, how can I specify NOT to use those strange nodes in the PBS script?
It sounds like something is wrong with those nodes. One solution would be to offline the nodes that aren't working: pbsnodes -o <node name> and allow the cluster to continue to work. You may need to release the holds on any jobs. I believe you can run releasehold ALL to accomplish this in Maui.
Once you take care of that I'd investigate the logs on those nodes (start with the pbs_mom logs and the syslogs) and figure out what is wrong with them. Once you figure out and correct what is wrong with them, you can put the nodes back online: pbsnodes -c <node_name>. You may also want to look into setting up some node health scripts to proactively detect and handle these situations.
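As a sketch of what such a node health script can look like in TORQUE (the paths and the check itself are illustrative assumptions): pbs_mom runs a script named in mom_priv/config and, when down_on_error is enabled, marks the node down if the script prints a line starting with ERROR.

# in mom_priv/config (illustrative values)
$node_check_script /var/spool/torque/mom_priv/nodecheck.sh
$node_check_interval 5

# /var/spool/torque/mom_priv/nodecheck.sh
#!/bin/bash
# example check: flag the node if the shared work directory is not mounted
if ! mountpoint -q /somedir; then
    echo "ERROR /somedir is not mounted"
fi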
As a user, contact your administrator and, in the meantime, run the job with this workaround:
Use pbsnodes to check for free and healthy nodes.
Modify the PBS directive: #PBS -l nodes=<freenode1>:ppn=<ppn1>+<freenode2>:ppn=<ppn2>+...
Submit the job using qsub.
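For example, with hypothetical free nodes z0-2 and z0-3 (substitute whatever pbsnodes reports as free and healthy):

#PBS -l nodes=z0-2:ppn=32+z0-3:ppn=32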
