/etc/init.d/puppet doesn't exist in puppet agent - puppet

I've followed these instructions to install the puppet agent in a Docker container with Ubuntu 16.04:
https://puppet.com/docs/puppet/5.5/install_linux.html
So I've executed this:
wget https://apt.puppetlabs.com/puppet5-release-xenial.deb
dpkg -i puppet5-release-xenial.deb
apt update
apt-get install puppet-agent
/opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
The last line, which is supposed to start the service, fails with:
Error: Could not find init script for 'puppet'
Error: /Service[puppet]/ensure: change from 'stopped' to 'running' failed: Could not find init script for 'puppet'
service { 'puppet':
ensure => 'stopped',
enable => 'false',
}
I think the problem is that /etc/init.d/puppet doesn't exist.
The installed puppet-agent version is 5.5.1.
Can you help me?
Thanks

Systemd, along with other init processes, is not installed by design, as you should be running your processes via an entrypoint or command option. In other words, the container should be running the command you are interested in, not a wrapper or bootstrapping application.
In your case, puppet actually has a container that you can run out of the box for smoke-testing and such. You can find it here:
https://github.com/puppetlabs/puppet-in-docker
If, though, you are hell-bent on running puppet agent jobs via systemd, you can attempt this with an example from a Red Hat blog here:
https://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
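If the goal is simply to keep the agent running inside the container, the usual Docker pattern is to run it in the foreground as the container's main process rather than via an init script. A minimal sketch, assuming an image that already has puppet-agent installed (the image name here is a placeholder):

```shell
# Run the agent as the container's main process instead of as a service.
# --no-daemonize keeps puppet in the foreground so the container stays up.
# 'myimage/puppet-agent' is a placeholder image name.
docker run -d myimage/puppet-agent \
  /opt/puppetlabs/bin/puppet agent --verbose --no-daemonize
```

When the agent process exits, the container exits with it, which is exactly the lifecycle Docker expects.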

Related

Unable to use terraform's remote exec syntax to customize VM on proxmox

I recently encountered some problems when writing a script for Terraform automation.
In my case the VM runs on the Proxmox platform, not a cloud platform, so I use the Telmate/proxmox provider to create VMs (CentOS 7).
The VM builds smoothly, but when I want to customize the VM(CentOS7), there are some problems
Terraform's remote-exec provisioner has an inline mode which, according to the official documentation, runs a list of commands one by one.
I followed this approach in my provisioning script; the script did execute, the VM was spawned, and the installation script ran.
The content of the install script is
yum -y install <something package>
install web service
copy web.conf, web program to /path/to/dir
restart web service
But the most important service does not come up. When I run the same commands over SSH on the VM, the service starts normally; in other words, this cannot be achieved through Terraform's remote-exec.
So I want to ask: is Terraform unsuitable for customizing services such as a web server, and only suitable for generating resources such as VMs?
Does such custom scripting need to be done with something like Ansible?
Here is the sample code:
provisioner "remote-exec" {
  inline = [
    "yum -y install tar",
    "tar -C / -xvf /tmp/product.tar",
    "sh install.sh",
  ]
}
I later found a way to make sense of this. I am not sure whether the problem is in the program written by the developer or something else, but I cannot start the service (process) via the script. It is possible, however, to enable the service by rebooting and using the built-in service manager (systemctl).
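One thing worth checking, assuming the web service is systemd-managed: remote-exec runs its commands in a non-interactive shell, so start-ups that depend on a login environment can fail silently. Explicitly enabling and starting the unit inside the script sometimes helps; a sketch (web.service is a placeholder for the actual unit name):

```shell
# Inside install.sh, or as extra inline commands after "sh install.sh".
# 'web.service' is a placeholder for the real unit name.
sudo systemctl daemon-reload
sudo systemctl enable --now web.service

# Print the unit state so the Terraform log shows what happened.
systemctl is-active web.service
```

Echoing the unit state into the provisioner output makes the difference between "script ran" and "service is up" visible in the Terraform log.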

Installing gitlab on Ubuntu 22.04 docker :: "warning: logrotate: unable to open supervise/ok: file does not exist"

I am following this tutorial for setting up GitLab on Ubuntu 22.04 -- https://computingforgeeks.com/how-to-install-gitlab-ce-on-ubuntu-linux/
While running the gitlab package using this command
# gitlab-ctl reconfigure
the process is stuck on this step --
* ruby_block[wait for logrotate service socket] action run
I get the following output when running the # gitlab-ctl status and # gitlab-ctl stop commands --
warning: logrotate: unable to open supervise/ok: file does not exist
How can I resolve this?
Check if restarting gitlab-runsvdir.service, as shown here, would help.
Maybe, as commented here, you would need to delete /var/opt/gitlab/redis/dump.rdb first.
The alternative is to use a GitLab-CE Docker image, which would not have any of those issues (since everything is already installed in said image).
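Sketched as commands, the two suggestions above would look like this (the path is the GitLab Omnibus default; consider backing up the dump file before deleting it):

```shell
# Restart the runit supervisor that gitlab-ctl's services run under.
sudo systemctl restart gitlab-runsvdir.service

# If reconfigure still hangs, remove the stale Redis dump first.
sudo rm /var/opt/gitlab/redis/dump.rdb

# Then retry.
sudo gitlab-ctl reconfigure
```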

Puppet can't use config set server

I've installed Puppet (version 4.10.1) via Ruby Gems.
I then use:
sudo puppet config set server mysite.org
Which returns the following error (same error without sudo).
Error: No such file or directory @ rb_sysopen - /etc/puppetlabs/puppet/puppet.conf
Error: Try 'puppet help config set' for usage
The gem install does not create the configuration files; the packages do.
Puppet is best installed with a package for the operating system you're on, rather than the gem.
The steps for installing are documented here:
https://docs.puppet.com/puppet/4.10/install_linux.html
If you're feeling lazy, I even wrote a script that will do all the work for you!
https://github.com/petems/puppet-install-shell
I'm not 100% sure what led to the /etc/puppetlabs/puppet folder not being created during the install process.
I found that creating the folder manually with sudo mkdir -p /etc/puppetlabs/puppet before running sudo puppet config set server mysite.org fixed the issue.
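Put together, the workaround is just two commands (/etc/puppetlabs/puppet is the default confdir for Puppet 4+):

```shell
# Create the confdir the gem install skipped, then retry the config set.
sudo mkdir -p /etc/puppetlabs/puppet
sudo puppet config set server mysite.org
```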

uninstall a module and its installed service using foreman puppet in Ubuntu

I have been playing with Foreman for quite a while and am now facing a problem with uninstalling modules.
When I try to uninstall a module using the "Delete" button in the Managing Classes screen, I get an error that says "Module already used by host". I understand: it's already mapped to a host. I then revoked the module from the host and deleted the module. This deletion happens only in Foreman.
But I also want to uninstall the service installed by Foreman/Puppet via a Puppet run (puppet agent -t).
Do I have to use the module's uninstall configuration, set appropriate values for its params, and do a Puppet run to uninstall the service before I delete the module from Foreman?
For example: if I install Apache with the apache Puppet module, it manages the Apache service, conf files, etc. Now I want to completely remove the Apache service from all machines connected to my network.
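Deleting the class in Foreman only stops Puppet from managing the resources; it does not remove what was already installed. One route is to set the module's uninstall parameters (if it exposes them) and let one more Puppet run clean up before unassigning the class. As a rough one-off alternative, you can remove the package directly with the puppet resource command; a sketch, assuming Debian/Ubuntu package naming:

```shell
# One-off removal on each node. 'apache2' is the Debian/Ubuntu package
# name; on Red Hat systems it would be 'httpd'.
sudo puppet resource service apache2 ensure=stopped enable=false
sudo puppet resource package apache2 ensure=absent
```

Note this does not remove config files the module dropped in place; those would need the module's own purge/absent parameters or manual cleanup.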

How can I distribute the service discovery tool consul to linux hosts?

Consul isn't currently published in a package manager format. What's a good way to distribute it across many systems and ensure it's installed in a consistent manner?
I found that you can easily create a package from the consul binary using fpm:
fpm --verbose -s dir -t rpm -n consul -v 0.4 --url=http://consul.io --vendor=HashiCorp --description "A distributed service discovery tool" ./consul_/consul=/usr/local/bin/consul
That command will create an rpm file in your current working directory. You can also use 'deb' with the -t flag to create a deb package instead.
If you don't already have fpm installed, you can install it with rubygems:
gem install fpm
FPM requires the tools needed to create the package type you choose, so it's best to install it on a like system (a Red Hat or Debian variant for RPM and DEB respectively).
Deliver it as a docker container.
The Dockerfile would:
wget the zip file, unzip it
Make directories for data and config
Add consul.json configuration file
Create volumes for the configuration and data
Expose consul port(s)
Define entrypoint
The Dockerfile would look approximately like this (a FROM line is required, and wget/unzip must be available in the base image; ubuntu:14.04 is just an example base):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y wget unzip
RUN wget 'https://dl.bintray.com/mitchellh/consul/0.3.1_linux_amd64.zip' -O consul.zip && unzip -d /usr/bin consul.zip
RUN mkdir -p /opt/consul/data /opt/consul/config
ADD consul.json /opt/consul/config/
VOLUME ["/opt/consul/data","/opt/consul/config"]
EXPOSE 8500
ENTRYPOINT ["/usr/bin/consul", "agent", "-config-dir=/opt/consul/config"]
CMD ["-server", "-bootstrap"]
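Building and running such an image would look roughly like this (the image tag my/consul is arbitrary):

```shell
# Build the image from the directory containing the Dockerfile
# and consul.json.
docker build -t my/consul .

# Run a bootstrap server, publishing the HTTP API port from EXPOSE.
docker run -d -p 8500:8500 my/consul
```

Because the Dockerfile already sets ENTRYPOINT and CMD, extra arguments to docker run replace only the "-server -bootstrap" defaults, which makes the same image reusable for client agents.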
consul is a single binary, perfectly suited for easy (re-)distribution and handling.
Packaging as .deb is just a three-liner with fpm.
Prerequisite: install fpm with gem install fpm
Full working example for consul 0.6 (current version as of January 2016):
wget -N https://releases.hashicorp.com/consul/0.6.0/consul_0.6.0_linux_amd64.zip
unzip -o consul_0.6.0_linux_amd64.zip
fpm --force --verbose -s dir -t deb -n consul -v 0.6 \
--url=http://consul.io --vendor=HashiCorp \
--description "A distributed service discovery tool" ./consul=/usr/local/bin/consul
There is a puppet module (https://github.com/solarkennedy/puppet-consul) which can help with this. It pulls the binary from dl.bintray.com and also helps out with configuring the system.
Install a server, joining it to 172.20.20.10. We are "expecting" a 3-node cluster, so this snippet will work for all three server nodes (even the first, as long as it's the "172.20.20.10" copy):
class { 'consul':
  join_cluster => '172.20.20.10',
  config_hash  => {
    'datacenter'       => 'dc1',
    'data_dir'         => '/opt/consul',
    'log_level'        => 'INFO',
    'node_name'        => $::hostname,
    'bind_addr'        => $::ipaddress_eth1,
    'bootstrap_expect' => 3,
    'server'           => true,
  },
}
That snippet will also work for the client agents (just flip the "server" bit to false). The last step is to create a service definition and register it with the local consul client agent:
consul::service { 'foo':
  tags           => ['service'],
  port           => 8080,
  check_script   => '/opt/foo-health-checker.sh',
  check_interval => '5s',
}
Here is an example Vagrantfile to build up a demo stack, complete with a 3 node consul cluster: https://github.com/benschw/consul-cluster-puppet
... and a blog post walking through how it was built: http://txt.fliglio.com/2014/10/consul-with-puppet/
Another option is to reuse one of the existing docker images.
e.g. progrium/consul is a great container designed to work in the Docker ecosystem
If you are interested in Ubuntu packages, I started maintaining a Launchpad PPA at https://launchpad.net/~bcandrea/+archive/ubuntu/consul. It currently targets LTS releases (12.04/14.04) which are the ones I need, but I might consider adding intermediate releases as well.
You can install it with the usual steps:
$ sudo apt-add-repository ppa:bcandrea/consul
$ sudo apt-get update
$ sudo apt-get install consul consul-web-ui
If you want to make a Debian/Ubuntu package for it and distribute it yourself, you might want to look at my Makefile for creating the packages: https://github.com/bcandrea/consul-deb.
Another alternative, which only assumes that the Linux target machines have an SSH daemon running and that you can SSH from the source machine using keys:
Install Ansible on the source machine, then use a simple command line as described in http://docs.ansible.com/ansible/latest/intro_adhoc.html#file-transfer
File Transfer: Here's another use case for the /usr/bin/ansible command line. Ansible can SCP lots of files to multiple machines in parallel.
To transfer a file directly to many servers:
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
So, assuming you already have the consul binary, prepare a file containing the list of target machines; this is called an inventory in Ansible jargon - http://docs.ansible.com/ansible/latest/intro_inventory.html
Then run: ansible my_linux_machines -m copy -a "src=consul dest=/usr/bin/consul"
You can also have Ansible download the zip and untar it before copying; search for "ansible untar".
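A minimal end-to-end sketch, assuming SSH key access is already in place (the hostnames and inventory filename are placeholders):

```shell
# inventory.ini -- list of target machines (placeholder hostnames).
cat > inventory.ini <<'EOF'
[my_linux_machines]
host1.example.com
host2.example.com
EOF

# Push the consul binary to every host in parallel; -b escalates
# privileges so we can write to /usr/bin, and mode makes it executable.
ansible my_linux_machines -i inventory.ini -b \
  -m copy -a "src=consul dest=/usr/bin/consul mode=0755"
```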
There is a multi-platform Ansible role that helps create a Consul cluster with clients:
https://github.com/brianshumate/ansible-consul
Another role of his can add Vault on top of Consul.