Setting up slurm.conf file for single computer - slurm

Hi, I am attempting to use a processing pipeline that is written to run on multi-node clusters with Slurm, but I would prefer to run it on a single computer. I am on Ubuntu 18 and have installed slurm-wlm, but I have not been able to get the pipeline to read the slurm.conf file I generated with the online Slurm Version 18.08 Configuration Tool, with the goal of running everything as a single node so I don't have to rewrite the pipeline code.
Every time I attempt to run the pipeline's sh script, the log file gives this error:
sbatch: error: _parse_next_key: Parsing error at unrecognized key: SlurmctldHost
sbatch: error: Parse error in file /etc/slurm-llnl/slurm.conf line 2: "SlurmctldHost=charlie-Z370M-D3H"
sbatch: fatal: Unable to process configuration file
charlie-Z370M-D3H is the hostname.
Below is my slurm.conf; I hope someone can see what I need to do to get this working.
#
SlurmctldHost=charlie-Z370M-D3H
#SlurmctldHost=
#
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/cgroup
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/spool
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/affinity
TaskPluginParam=Sched
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
#SlurmctldLogFile=
SlurmdDebug=3
#SlurmdLogFile=
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP

I have had the same issue: the conf file generated on that web page is only valid for Slurm 18.08. If you look at the web page where you created the slurm.conf file, you may notice that it says so.
Thus, please verify that your version of Slurm is at least 18.x, since the key "SlurmctldHost" was only introduced in 18.08.
You can check your installed version by typing "dpkg -l | grep slurm" and noting which version is installed. On Ubuntu 18.x the default package is Slurm 17.11.9. (You might have to download the source code for your installed version from https://www.schedmd.com/archives.php and unpack it on your local machine; in its "doc/html/" directory you will find the configurator HTML script corresponding to your version.)
For example, if your version is 17.11.9, the key corresponding to "SlurmctldHost" (introduced in 18.08) is "ControlMachine". So use the configurator script from your local Slurm doc directory to generate a slurm.conf that is valid for the version of Slurm you actually have installed.
I did that and it works fine.
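For reference, a minimal sketch of that check and the key rename on a 17.11 installation (the hostname is the one from this question; regenerating the whole file with the matching configurator is still the safer route):

# See which Slurm version the Ubuntu packages actually installed
dpkg -l | grep slurm
# On 17.11.x, replace the 18.08-only key with its older equivalent in
# /etc/slurm-llnl/slurm.conf:
#   SlurmctldHost=charlie-Z370M-D3H  ->  ControlMachine=charlie-Z370M-D3H
sudo sed -i 's/^SlurmctldHost=/ControlMachine=/' /etc/slurm-llnl/slurm.conf
# Other keys can also differ between versions, which is why regenerating the
# file with the configurator shipped in your version's doc/html is preferable.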

Related

Slurmd crashes when emulating a larger cluster in versions 21 and 22

I have been maintaining a Slurm simulator for ages. I have everything automated in order to try new features and keep my configuration up to date, version after version.
Unfortunately, starting with version 21, front-end mode makes the slurmd daemon crash with the following error message:
slurmd: error: _find_node_record: lookup failure for node "slurm-simulator"
slurmd: error: _find_node_record: lookup failure for node "slurm-simulator", alias "slurm-simulator"
slurmd: error: slurmd initialization failed
The exact same container, with the same configuration but using version 20.11.9, works just fine. I reproduced the same steps manually in a VM to remove the noise introduced by the container, but the result is the same.
The attached configuration is available in the container.
[root@slurm-simulator /]# cat /etc/slurm/slurm.conf
ClusterName=simulator
SlurmctldHost=slurm-simulator
FrontendName=slurm-simulator
MpiDefault=none
ProctrackType=proctrack/linuxproc
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=root
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
AccountingStorageType=accounting_storage/slurmdbd
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurmd.log
SlurmdParameters=config_overrides
include /etc/slurm/nodes.conf
include /etc/slurm/partitions.conf
[root@slurm-simulator /]# cat /etc/slurm/nodes.conf
NodeName=node[001-10] RealMemory=248000 Sockets=2 CoresPerSocket=32 ThreadsPerCore=1 State=UNKNOWN NodeAddr=slurm-simulator NodeHostName=slurm-simulator
[root@slurm-simulator /]# cat /etc/slurm/partitions.conf
PartitionName=long Nodes=node[001-10] Default=YES State=UP OverSubscribe=NO MaxTime=14-00:00:00
The error can be reproduced by running the following commands:
docker run --rm --detach \
--name "${USER}_simulator" \
-h "slurm-simulator" \
--security-opt seccomp:unconfined \
--privileged -e container=docker \
-v /run -v /sys/fs/cgroup:/sys/fs/cgroup \
--cgroupns=host \
hpcnow/slurm_simulator:21.08.8-2 /usr/sbin/init
docker exec -ti ${USER}_simulator /bin/bash
slurmd -D -vvvvv
If you try the same command with v20.11.9 it will work. I have tried using the new SlurmdParameters=config_overrides option, but I still get the same problem.
Any ideas or suggestions?
Thanks!
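In case it helps narrow this down, a quick way to compare the name slurmd resolves against the names in the configuration (a sketch using standard commands; the paths match the container layout above):

hostname -s                   # the short hostname slurmd looks up at start-up
grep -riE 'FrontendName|NodeName|NodeHostName' /etc/slurm/
slurmd -C                     # prints the NodeName and hardware slurmd detects locally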

Problems reading slurm configuration file with Singularity

I'm trying to run an application in Singularity across nodes (864 MPI tasks) on an HPC system, namely the S4 machine at the University of Wisconsin's Space Science and Engineering Center (SSEC).
I'm using what Singularity describes as the hybrid model, meaning that I use the native (system) MPI but also have MPI installed in the container. The MPI versions are compatible: Intel MPI 17.0.6 outside the container and Intel MPI 17.0.1 inside. The code in the container is compiled with the Intel 17.0.1 compilers (C++, C, and Fortran).
So here's the problem. When I first ran the code, it complained about not finding the slurm configuration file:
fv3jedi_var.x: error: s_p_parse_file: unable to status file /etc/slurm-llnl/slurm.conf: No such file or directory, retrying in 1sec up to 60sec
So I found the system slurm.conf file in /etc/slurm and mounted this directory in the container as /etc/slurm-llnl. It now finds the configuration file but it does not understand the site-specific configuration:
fv3jedi_var.x: error: "ALL" is not a valid option for "EnforcePartLimits"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
fv3jedi_var.x: error: Parse error in file /etc/slurm-llnl/slurm.conf line 225: " Features=ivy"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
fv3jedi_var.x: error: Parse error in file /etc/slurm-llnl/slurm.conf line 226: " Features=ivy"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
[...]
So I'm stuck. I'm guessing this might be a PMI issue. I currently have Slurm's libpmi.so installed in the container, and that is what I specify with the I_MPI_PMI_LIBRARY variable. But I wonder whether the native (system) PMI (I know it is PMI as opposed to PMI2 or PMIx) is somehow configured to process the system slurm.conf file properly. I have tried to use the native PMI library by bind-mounting the appropriate directory into the container and changing my I_MPI_PMI_LIBRARY variable, but the native PMI library is in the same directory as glibc, and when I mount that directory there is a conflict between the glibc libraries inside and outside the container:
/bin/sh: relocation error: /usr/lib64/libc.so.6: symbol _dl_starting_up, version GLIBC_PRIVATE not defined in file ld-linux-x86-64.so.2 with link time reference
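For reference, the bind-and-point attempt described above looks roughly like this (a sketch; the host library path is an assumption and depends on where the system keeps Slurm's PMI library):

# Hypothetical host path; on this system the native libpmi.so sits next to glibc
export SINGULARITY_BINDPATH="$SINGULARITY_BINDPATH,/usr/lib64"
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
# Binding the whole directory also exposes the host glibc inside the container,
# which is what triggers the relocation error shown above.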
Any ideas on how to proceed? My slurm batch script is below. Thanks!
#!/usr/bin/bash
# --mem-per-cpu=8192M
#SBATCH --job-name=bm_con14
#SBATCH --partition=ivy
#SBATCH --ntasks=864
#SBATCH --cpus-per-task=1
#SBATCH --time=2:00:00
#SBATCH --mail-user=miesch@ucar.edu
source /etc/bashrc
module purge
module load license_intel
module load intel/17.0.6
ulimit -s unlimited
cd /data/users/mmiesch/runs/con-benchmark/con
JEDICON=/data/users/mmiesch
JEDIBUILD=/data/users/mmiesch/jedi/fv3-bundle/build-con
JEDIBIN=/data/users/mmiesch/jedi/fv3-bundle/build-con/bin
export SINGULARITY_BINDPATH="$JEDIBUILD,/etc/slurm:/etc/slurm-llnl"
srun --ntasks=864 --cpu_bind=cores --distribution=block:block --verbose singularity exec --home=$PWD $JEDICON/jedi-intel17-impi-hpc-dev.sif ${JEDIBIN}/fv3jedi_var.x Config/3dvar_bump.yaml
exit 0

slurmd unable to communicate with slurmctld

I followed the steps to troubleshoot here: https://slurm.schedmd.com/troubleshoot.html.
When running scontrol show slurmd, I get:
Active Steps = NONE
Actual CPUs = 1
Actual Boards = 1
Actual sockets = 1
Actual cores = 1
Actual threads per core = 1
Actual real memory = 984 MB
Actual temp disk space = 492 MB
Boot time = 2019-03-27T17:53:56
Hostname = fedora2
Last slurmctld msg time = NONE
Slurmd PID = 1549
Slurmd Debug = 4
Slurmd Logfile = /var/log/slurmd.log
Version = 17.11.13-2
I don't know why slurmd on fedora2 can't communicate with the controller on fedora1. slurmctld daemon is running fine on fedora1.
The slurm.conf is as follows:
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
#SlurmctldHost=fedora1
#
ControlMachine=fedora1
ControlAddr=192.168.1.4
MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurm/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
SlurmdUser=root
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/affinity
#
#
# TIMERS
#KillWait=30
#MinJobAge=300
#SlurmctldTimeout=120
#SlurmdTimeout=300
#
#
# SCHEDULING
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
#
# LOGGING AND ACCOUNTING
AccountingStorageType=accounting_storage/none
ClusterName=fedora
#JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=verbose
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=verbose
SlurmdLogFile=/var/log/slurmd.log
#
#
# COMPUTE NODES
NodeName=fedora1 NodeAddr=192.168.1.4 CPUs=1 State=UNKNOWN
NodeName=fedora2 NodeAddr=192.168.1.5 CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=fedora[1-2] Default=YES MaxTime=INFINITE State=UP
The output of tail /var/log/slurmd.log on fedora2 shows this error repeated on multiple lines:
error: Unable to register: Unable to contact slurm controller (connect failure)
Make sure that (the commands sketched below can help verify each point):
no firewall prevents the slurmd daemon from talking to the controller
munge is running on each server
the dates are in sync
the Slurm versions are identical
the name fedora1 can be resolved to the correct IP
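A sketch of commands that can verify those points from fedora2 (standard munge and Slurm utilities; adjust the hostnames to your setup):

systemctl status munge                         # munge must be running on every node
munge -n | ssh fedora1 unmunge                 # a credential created here must decode on the controller
date; ssh fedora1 date                         # clocks must agree
sinfo --version; ssh fedora1 sinfo --version   # Slurm versions must match
getent hosts fedora1                           # fedora1 must resolve to 192.168.1.4
scontrol ping                                  # reports whether the controller answers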

slurm: frontend as compute node not responding

Similar to slurm: use a control node also for computing.
I would like to use the frontend as a compute node. I made the following entries in slurm.conf:
NodeName=gisc RealMemory=63000 Sockets=1 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN Weight=2
NodeName=c[0-2] RealMemory=126000 Sockets=1 CoresPerSocket=16 ThreadsPerCore=2 State=UNKNOWN Weight=1
PartitionName=normal Nodes=gisc,c[0-2] Default=YES MaxTime=INFINITE State=UP
And restarted both slurmd and slurmctld.
However, the frontend node never responds, as shown by the asterisk in its state:
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
normal* up infinite 1 idle* gisc
normal* up infinite 2 alloc c[0-1]
normal* up infinite 1 idle c2
Also, I cannot start slurmd on the frontend node. The logs do not help.
Could it be that slurmd and slurmctld are conflicting on the frontend node?
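When the log files are not helpful, running slurmd on the frontend in the foreground usually shows why it refuses to start (a sketch; -D keeps it in the foreground and -vvvv raises the verbosity):

sudo slurmd -D -vvvv      # run the compute daemon in the foreground with verbose output
scontrol show node gisc   # the controller's view of the node, including any Reason it records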
My /etc/hosts looks as follows
192.168.1.1 gisc.localdomain gisc gisc-eth0.localdomain gisc-eth0
### ALL ENTRIES BELOW THIS LINE WILL BE OVERWRITTEN BY WAREWULF ###
#
# See provision.conf for configuration paramaters
# Node Entry for node: c0 (ID=22)
192.168.1.2 c0.localdomain c0 c0-eth0.localdomain c0-eth0
# Node Entry for node: c1 (ID=23)
192.168.1.3 c1.localdomain c1 c1-eth0.localdomain c1-eth0
# Node Entry for node: c2 (ID=24)
192.168.1.4 c2.localdomain c2 c2-eth0.localdomain c2-eth0
Facepalm. The slurm-client library was missing on the frontend; only the slurm-server library was installed...
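A quick way to spot that kind of gap is to compare the installed Slurm packages on the frontend with those on a working compute node (a sketch; package names vary by distribution, so slurm-client below is only an example):

# On an RPM-based system:
rpm -qa | grep -i slurm
# On Debian/Ubuntu:
dpkg -l | grep slurm
# Then install whichever client/daemon package is missing on the frontend, e.g.
#   yum install slurm-client    (the exact package name is distribution-specific)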

Emulating SLURM on Ubuntu 16.04

I want to emulate SLURM on Ubuntu 16.04. I don't need serious resource management, I just want to test some simple examples. I cannot install SLURM in the usual way, and I am wondering if there are other options. Other things I have tried:
A Docker image. Unfortunately, docker pull agaveapi/slurm; docker run agaveapi/slurm gives me errors:
/usr/lib/python2.6/site-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-10-29 15:27:45,436 CRIT Supervisor running as root (no user in config file)
2017-10-29 15:27:45,437 INFO supervisord started with pid 1
2017-10-29 15:27:46,439 INFO spawned: 'slurmd' with pid 9
2017-10-29 15:27:46,441 INFO spawned: 'sshd' with pid 10
2017-10-29 15:27:46,443 INFO spawned: 'munge' with pid 11
2017-10-29 15:27:46,443 INFO spawned: 'slurmctld' with pid 12
2017-10-29 15:27:46,452 INFO exited: munge (exit status 0; not expected)
2017-10-29 15:27:46,452 CRIT reaped unknown pid 13)
2017-10-29 15:27:46,530 INFO gave up: munge entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,531 INFO exited: slurmd (exit status 1; not expected)
2017-10-29 15:27:46,535 INFO gave up: slurmd entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,536 INFO exited: slurmctld (exit status 0; not expected)
2017-10-29 15:27:47,537 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-10-29 15:27:47,537 INFO gave up: slurmctld entered FATAL state, too many start retries too quickly
This guide to start a SLURM VM via Vagrant: I tried it, but copying over my munge key timed out.
sudo scp /etc/munge/munge.key vagrant@server:/home/vagrant/
ssh: connect to host server port 22: Connection timed out
lost connection
So ... we have an existing cluster here but it runs an older Ubuntu version which does not mesh well with my workstation running 17.04.
So on my workstation, I just made sure I had slurmctld (backend) and slurmd installed, and then set up a trivial slurm.conf with
ControlMachine=mybox
# ...
NodeName=DEFAULT CPUs=4 RealMemory=4000 TmpDisk=50000 State=UNKNOWN
NodeName=mybox CPUs=4 RealMemory=16000
after which I restarted slurmctld and then slurmd (restart commands sketched below the sinfo output). Now all is fine:
root@mybox:/etc/slurm-llnl$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
demo up infinite 1 idle mybox
root@mybox:/etc/slurm-llnl$
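The restart step would be something like this (a sketch; the unit names assume the standard Ubuntu slurm packaging):

sudo systemctl restart slurmctld   # controller first
sudo systemctl restart slurmd      # then the compute daemon on the same box
sinfo                              # the node should come back as idle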
This is a degenerate setup; our real one has a mix of dev and prod machines and appropriate partitions. But this should answer your "can the backend really be a client" question. Also, my machine is not really called mybox, but that is not pertinent to the question either way.
Using Ubuntu 17.04, all stock, with munge to communicate (which is the default anyway).
Edit: To wit:
me@mybox:~$ COLUMNS=90 dpkg -l '*slurm*' | grep ^ii
ii slurm-client 16.05.9-1ubun amd64 SLURM client side commands
ii slurm-wlm-basic- 16.05.9-1ubun amd64 SLURM basic plugins
ii slurmctld 16.05.9-1ubun amd64 SLURM central management daemon
ii slurmd 16.05.9-1ubun amd64 SLURM compute node daemon
me@mybox:~$
I would still prefer to run SLURM natively, but I caved and spun up a Debian 9.2 VM. See here for my efforts to troubleshoot a native installation. The directions here worked smoothly, but I needed to make the following changes to slurm.conf. Below, Debian64 is the hostname, and wlandau is my user name.
ControlMachine=Debian64
SlurmUser=wlandau
NodeName=Debian64
Here is the complete slurm.conf. An analogous slurm.conf did not work on my native Ubuntu 16.04.
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=Debian64
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=Debian64 CPUs=1 RealMemory=744 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=Debian64 Default=YES MaxTime=INFINITE State=UP
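Once a configuration like the one above is in place, a quick sanity check could look like this (a sketch; it assumes the slurm-llnl packaging used in the paths above):

sudo systemctl restart slurmctld slurmd
sinfo                       # the debug partition should list Debian64 as idle
srun -N1 hostname           # a trivial interactive job; should print Debian64
sbatch --wrap="sleep 30"    # queue a small batch job ...
squeue                      # ... and watch it run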
