Slurmd crashes when emulating a larger cluster in versions 21 and 22 - slurm

I have been maintaining a Slurm simulator for ages. I have everything automated in order to try new features and keep my configuration up to date, version after version.
Unfortunately, starting with version 21, front-end mode makes the slurmd daemon crash with the following error message:
slurmd: error: _find_node_record: lookup failure for node "slurm-simulator"
slurmd: error: _find_node_record: lookup failure for node "slurm-simulator", alias "slurm-simulator"
slurmd: error: slurmd initialization failed
The exact same container, with the same configuration but using version 20.11.9, works just fine. I reproduced the same steps manually in a VM to remove the noise introduced by the container, but the result is the same.
The configuration below is the one available in the container.
[root@slurm-simulator /]# cat /etc/slurm/slurm.conf
ClusterName=simulator
SlurmctldHost=slurm-simulator
FrontendName=slurm-simulator
MpiDefault=none
ProctrackType=proctrack/linuxproc
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=root
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
AccountingStorageType=accounting_storage/slurmdbd
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurmd.log
SlurmdParameters=config_overrides
include /etc/slurm/nodes.conf
include /etc/slurm/partitions.conf
[root@slurm-simulator /]# cat /etc/slurm/nodes.conf
NodeName=node[001-10] RealMemory=248000 Sockets=2 CoresPerSocket=32 ThreadsPerCore=1 State=UNKNOWN NodeAddr=slurm-simulator NodeHostName=slurm-simulator
[root@slurm-simulator /]# cat /etc/slurm/partitions.conf
PartitionName=long Nodes=node[001-10] Default=YES State=UP OverSubscribe=NO MaxTime=14-00:00:00
The error can be reproduced by running the following commands:
docker run --rm --detach \
--name "${USER}_simulator" \
-h "slurm-simulator" \
--security-opt seccomp:unconfined \
--privileged -e container=docker \
-v /run -v /sys/fs/cgroup:/sys/fs/cgroup \
--cgroupns=host \
hpcnow/slurm_simulator:21.08.8-2 /usr/sbin/init
docker exec -ti ${USER}_simulator /bin/bash
slurmd -D -vvvvv
If you try the same commands with v20.11.9 they work just fine. I have tried using the new SlurmdParameters=config_overrides option, but I still get the same problem.
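For reference, a quick diagnostic sketch from inside the container (these checks are my own addition, not part of the original report) to see which name slurmd is trying to look up:
slurmd -V                                      # confirm the exact version in the image
hostname -s                                    # the short hostname slurmd resolves; compare with FrontendName / NodeHostName
scontrol show config | grep -i slurmctldhost   # the configuration as the daemons see it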
Any ideas or suggestions?
Thanks!

Related

Getting error while running "runqemu qemux86-64"

I want to run the graphical image for qemux86-64, which I have built with Yocto. When I run "runqemu qemux86-64", "runqemu", or "runqemu core-image-minimal", I get the following error.
nikita@ubuntu:~/yocto/poky/build$ runqemu qemux86-64
runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
runqemu - INFO - Continuing with the following parameters:
KERNEL: [/home/nikita/yocto/poky/build/tmp/deploy/images/qemux86-64/bzImage--5.4.205+gitAUTOINC+aaaf9f090d_8a59dfded8-r0-qemux86-64-20220803111012.bin]
MACHINE: [qemux86-64]
FSTYPE: [ext4]
ROOTFS: [/home/nikita/yocto/poky/build/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220803111012.rootfs.ext4]
CONFFILE: [/home/nikita/yocto/poky/build/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220803111012.qemuboot.conf]
runqemu - INFO - Setting up tap interface under sudo
[sudo] password for nikita:
runqemu - INFO - Network configuration: ip=192.168.7.2::192.168.7.1:255.255.255.0
runqemu - INFO - Running /home/nikita/yocto/poky/build/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64 -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -drive file=/home/nikita/yocto/poky/build/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220803111012.rootfs.ext4,if=virtio,format=raw -show-cursor -usb -device usb-tablet -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 -cpu core2duo -m 256 -serial mon:vc -serial null -kernel /home/nikita/yocto/poky/build/tmp/deploy/images/qemux86-64/bzImage--5.4.205+gitAUTOINC+aaaf9f090d_8a59dfded8-r0-qemux86-64-20220803111012.bin -append 'root=/dev/vda rw mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 '
runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not available) - exiting
runqemu - INFO - Cleaning up
Set 'tap0' nonpersistent
The only solution I found on the internet was to run the nographic image, which has worked fine for me, but my requirement is the graphical image. Please help me find a way to run the graphical image.
Your replies would be appreciated.
I was facing the same issue recently and fixed it with the step below:
Change
PACKAGECONFIG_append_pn-qemu-system-native = " sdl"
to
PACKAGECONFIG_append_pn-qemu-system-native = " gtk+"
in conf/local.conf
Now recompile and start runqemu again.
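As a concrete sketch of the rebuild step (my own addition; the image name core-image-minimal is taken from the question, and the older "_append" override syntax matches the answer above):
# conf/local.conf
PACKAGECONFIG_append_pn-qemu-system-native = " gtk+"
# rebuild the native qemu and the image, then boot again
bitbake qemu-system-native -c cleansstate
bitbake core-image-minimal
runqemu qemux86-64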

Gitlab runner doesn't start on Apple M1

On my instance,
I added a runner on an Apple Silicon M1, but this runner doesn't start.
That's why I assigned a project to it, in the hope of getting it started, but I see this:
How can I check why there is a red "!"?
What prevents it from starting?
This is what I did.
Create docker runner:
docker stop gitlab-runner && docker rm gitlab-runner
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /Users/Shared/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
register runner:
docker run --rm -v /Users/Shared/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
--non-interactive \
--executor "docker" \
--docker-image hannesa2/android-ndk:api28_emu \
--url "http://latitude:8083/" \
--registration-token "<TOKEN>" \
--description "M1 pro Android NDK + Emu" \
--tag-list "android,android-ndk,android-emu" \
--run-untagged="true" \
--locked="false" \
--access-level="not_protected"
and I see this in the Docker log:
Runtime platform arch=arm64 os=linux pid=8 revision=4b9e985a version=14.4.0
Starting multi-runner from /etc/gitlab-runner/config.toml... builds=0
Running in system-mode.
Configuration loaded builds=0
listen_address not defined, metrics & debug endpoints disabled builds=0
[session_server].listen_address not defined, session endpoints disabled builds=0
ERROR: Checking for jobs... forbidden runner=Jc2yrs_v
ERROR: Checking for jobs... forbidden runner=Jc2yrs_v
ERROR: Checking for jobs... forbidden runner=Jc2yrs_v
ERROR: Runner http://latitude:8083/Jc2yrs_v8JkJyMJAGUd_ is not healthy and will be disabled!
Configuration loaded builds=0
Thank you
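One thing that can be checked here (a sketch of my own, not from the original post) is whether the stored registration is still accepted by the GitLab instance, using the runner's built-in verify/list commands from inside the container:
docker exec gitlab-runner gitlab-runner verify   # re-checks each registered runner token against http://latitude:8083/
docker exec gitlab-runner gitlab-runner list     # shows what is registered in /etc/gitlab-runner/config.toml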

Setting up slurm.conf file for single computer

Hi, I am attempting to use a processing pipeline that is written to run on multiple compute clusters using Slurm, but I would prefer to run it on a single computer. I am on Ubuntu 18 and have installed slurm-wlm, but I have not been able to get the pipeline to read my slurm.conf file, which I generated with the online Slurm Version 18.08 Configuration Tool, with the goal of running everything as a single node so I don't have to rewrite the pipeline code.
Every time I attempt to run the pipeline's sh script, the log file gives this error:
sbatch: error: _parse_next_key: Parsing error at unrecognized key: SlurmctldHost
sbatch: error: Parse error in file /etc/slurm-llnl/slurm.conf line 2: "SlurmctldHost=charlie-Z370M-D3H"
sbatch: fatal: Unable to process configuration file
charlie-Z370M-D3H is the hostname.
Below is my slurm.conf text; I hope someone can see what I need to do to get this to work.
#
SlurmctldHost=charlie-Z370M-D3H
#SlurmctldHost=
#
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/cgroup
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/spool
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/affinity
TaskPluginParam=Sched
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
#SlurmctldLogFile=
SlurmdDebug=3
#SlurmdLogFile=
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP
I have had the same issue, and it turns out that the conf file generated on that webpage is only valid for Slurm 18.08.
If you look at the webpage where you created the slurm.conf file, you may notice that it is only valid for version 18.08.
Thus, please verify that your version of Slurm is at least 18.x, since the key "SlurmctldHost" was introduced in that release.
You can check your version of Slurm by simply typing "dpkg -l | grep slurm" and noting which version is installed. On Ubuntu 18.x the default package is Slurm version 17.11.9. (You may have to download the source code from https://www.schedmd.com/archives.php by selecting the version you have installed, unpack it, and look in the "/doc/html/" directory, where you'll find the corresponding configurator HTML script for your version.)
For example, if your version is 17.11.9, the key corresponding to "SlurmctldHost" (introduced in 18.08) is "ControlMachine". So use the configurator HTML script from your local Slurm doc directory to generate a valid slurm.conf for the version of Slurm you have installed.
I did that and it works fine.
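A minimal sketch of the check and the edit described above (my own addition; the hostname is taken from the question):
# confirm the installed Slurm version
dpkg -l | grep slurm                 # e.g. 17.11.9 on stock Ubuntu 18.x
# in /etc/slurm-llnl/slurm.conf, use the pre-18.08 key instead:
#   SlurmctldHost=charlie-Z370M-D3H   ->   ControlMachine=charlie-Z370M-D3H
ControlMachine=charlie-Z370M-D3H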

Emulating SLURM on Ubuntu 16.04

I want to emulate SLURM on Ubuntu 16.04. I don't need serious resource management, I just want to test some simple examples. I cannot install SLURM in the usual way, and I am wondering if there are other options. Other things I have tried:
A Docker image. Unfortunately, docker pull agaveapi/slurm; docker run agaveapi/slurm gives me errors:
/usr/lib/python2.6/site-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-10-29 15:27:45,436 CRIT Supervisor running as root (no user in config file)
2017-10-29 15:27:45,437 INFO supervisord started with pid 1
2017-10-29 15:27:46,439 INFO spawned: 'slurmd' with pid 9
2017-10-29 15:27:46,441 INFO spawned: 'sshd' with pid 10
2017-10-29 15:27:46,443 INFO spawned: 'munge' with pid 11
2017-10-29 15:27:46,443 INFO spawned: 'slurmctld' with pid 12
2017-10-29 15:27:46,452 INFO exited: munge (exit status 0; not expected)
2017-10-29 15:27:46,452 CRIT reaped unknown pid 13)
2017-10-29 15:27:46,530 INFO gave up: munge entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,531 INFO exited: slurmd (exit status 1; not expected)
2017-10-29 15:27:46,535 INFO gave up: slurmd entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,536 INFO exited: slurmctld (exit status 0; not expected)
2017-10-29 15:27:47,537 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-10-29 15:27:47,537 INFO gave up: slurmctld entered FATAL state, too many start retries too quickly
This guide to start a SLURM VM via Vagrant. I tried, but copying over my munge key timed out.
sudo scp /etc/munge/munge.key vagrant@server:/home/vagrant/
ssh: connect to host server port 22: Connection timed out
lost connection
So ... we have an existing cluster here but it runs an older Ubuntu version which does not mesh well with my workstation running 17.04.
So on my workstation, I just made sure I had slurmctld (the backend) and slurmd installed, and then set up a trivial slurm.conf with
ControlMachine=mybox
# ...
NodeName=DEFAULT CPUs=4 RealMemory=4000 TmpDisk=50000 State=UNKNOWN
NodeName=mybox CPUs=4 RealMemory=16000
after which I restarted slurmctld and then slurmd. Now all is fine:
root@mybox:/etc/slurm-llnl$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
demo up infinite 1 idle mybox
root@mybox:/etc/slurm-llnl$
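The restart step mentioned above, as a concrete sketch (my addition; the service names assume the stock Ubuntu slurm-llnl packaging):
sudo systemctl restart slurmctld    # controller first
sudo systemctl restart slurmd       # then the node daemon
sinfo                               # the node should now show up as idle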
This is a degenerate setup; our real one has a mix of dev and prod machines and appropriate partitions. But this should answer your "can the backend really be the client" question. Also, my machine is not really called mybox, but that is not pertinent to the question either way.
Using Ubuntu 17.04, all stock, with munge to communicate (which is the default anyway).
Edit: To wit:
me@mybox:~$ COLUMNS=90 dpkg -l '*slurm*' | grep ^ii
ii slurm-client 16.05.9-1ubun amd64 SLURM client side commands
ii slurm-wlm-basic- 16.05.9-1ubun amd64 SLURM basic plugins
ii slurmctld 16.05.9-1ubun amd64 SLURM central management daemon
ii slurmd 16.05.9-1ubun amd64 SLURM compute node daemon
me@mybox:~$
I would still prefer to run SLURM natively, but I caved and spun up a Debian 9.2 VM. See here for my efforts to troubleshoot a native installation. The directions here worked smoothly, but I needed to make the following changes to slurm.conf. Below, Debian64 is the hostname, and wlandau is my user name.
ControlMachine=Debian64
SlurmUser=wlandau
NodeName=Debian64
Here is the complete slurm.conf. An analogous slurm.conf did not work on my native Ubuntu 16.04.
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=Debian64
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=Debian64 CPUs=1 RealMemory=744 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=Debian64 Default=YES MaxTime=INFINITE State=UP
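A short verification sketch after dropping in the file above (my own addition; Debian64 is the hostname from the answer):
sudo systemctl restart slurmctld slurmd
sinfo                                # partition "debug" with node Debian64 should be idle
srun -N1 hostname                    # should print Debian64 if the single-node setup works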

Puppet-Docker Service Error undefined method '[]' for nil:NilClass

I have a fresh install of the garethr-docker module, installed with the command
puppet module install garethr/docker.
The puppet nodes.pp I am running is very simple:
include 'docker'
The logs look like the conf is initialized correctly; however, Puppet is unable to refresh the service. Please see the logs below.
debug: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]/content: Executing 'diff -u /etc/init/docker.conf /tmp/puppet-file20140305-9166-j634yb-0'
notice: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]/content:
--- /etc/init/docker.conf 2014-03-05 18:00:12.141549000 +0000
+++ /tmp/puppet-file20140305-9166-j634yb-0 2014-03-05 18:08:46.997549000 +0000
@@ -6,6 +6,6 @@
respawn
script
- /usr/bin/docker -d -g /dap-home/docker -H unix:///var/run/docker.sock
+ /usr/bin/docker -d -H unix:///var/run/docker.sock
end script
debug: Finishing transaction 70136334948320
info: FileBucket got a duplicate file {md5}35cd6455aae3a3bc020b4db1e9839271
info: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]: Filebucketed /etc/init/docker.conf to puppet with sum 35cd6455aae3a3bc020b4db1e9839271
notice: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]/content: content changed '{md5}35cd6455aae3a3bc020b4db1e9839271' to '{md5}e6ce3c01ccf99456fc57176f1895f808'
info: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]: Scheduling refresh of Service[docker]
debug: /Stage[main]/Docker::Service/File[/etc/init/docker.conf]: The container Class[Docker::Service] will propagate my refresh event
debug: Puppet::Type::Service::ProviderUpstart: Executing '/sbin/status docker'
debug: Puppet::Type::Service::ProviderUpstart: Executing '/sbin/initctl --version'
err: /Stage[main]/Docker::Service/Service[docker]: Could not evaluate: undefined method `[]' for nil:NilClass
It should be noted (not that it really matters) that the machine Docker is being installed on is in fact itself a Docker container. The container that is the puppet agent is being run with --privileged.
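A hedged way to narrow this down (my own addition): run by hand the same commands the Upstart service provider executes in the log above, since a Docker container is usually not running Upstart as its init:
/sbin/initctl --version     # same call the provider makes; check it prints a version inside the container
/sbin/status docker         # same call the provider makes; check it reports a job state rather than an error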
