How can I distribute the service discovery tool Consul to Linux hosts?

Consul isn't currently published in a package manager format. What's a good way to distribute it across many systems and ensure it's installed in a consistent manner?

I found that you can easily create a package from the consul binary using fpm:
fpm --verbose -s dir -t rpm -n consul -v 0.4 --url=http://consul.io --vendor=HashiCorp --description "A distributed service discovery tool" ./consul=/usr/local/bin/consul
That command will create an rpm file in your current working directory. You can also use 'deb' with the -t flag to create a deb package instead.
If you don't already have fpm installed, you can install it with rubygems:
gem install fpm
fpm requires the native tooling for whichever package type you choose, so it's best to build on a matching system (a Red Hat variant for RPM, a Debian variant for DEB).
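Once built, you can inspect and install the result on a target host (the exact filename depends on fpm's defaults; the one below is typical for the command above):

rpm -qpi consul-0.4-1.x86_64.rpm
sudo rpm -ivh consul-0.4-1.x86_64.rpm

From there, distribution is whatever you already use for RPMs: a yum repository, or plain scp plus rpm.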

Deliver it as a Docker container.
The Dockerfile would:
wget the zip file, unzip it
Make directories for data and config
Add consul.json configuration file
Create volumes for the configuration and data
Expose consul port(s)
Define entrypoint
The Dockerfile would look approximately like this (the FROM line and tool installation are added here so the file actually builds; the base image is just an example, any image with wget and unzip works):
FROM ubuntu:14.04
# wget and unzip are needed to fetch and unpack the release archive
RUN apt-get update && apt-get install -y wget unzip ca-certificates
RUN wget 'https://dl.bintray.com/mitchellh/consul/0.3.1_linux_amd64.zip' -O consul.zip && unzip -d /usr/bin consul.zip
RUN mkdir -p /opt/consul/data /opt/consul/config
ADD consul.json /opt/consul/config/
VOLUME ["/opt/consul/data","/opt/consul/config"]
EXPOSE 8500
ENTRYPOINT ["/usr/bin/consul", "agent", "-config-dir=/opt/consul/config"]
CMD ["-server", "-bootstrap"]
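Assuming a consul.json sits next to the Dockerfile, building and running it would look roughly like this (the image tag is illustrative):

docker build -t my-consul .
docker run -p 8500:8500 my-consul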

Consul is a single binary, perfectly suited for easy (re-)distribution and handling.
Packaging it as a .deb is just a three-liner with fpm.
Prerequisite: install fpm with gem install fpm
Full working example for consul 0.6 (current version as of January 2016):
wget -N https://releases.hashicorp.com/consul/0.6.0/consul_0.6.0_linux_amd64.zip
unzip -o consul_0.6.0_linux_amd64.zip
fpm --force --verbose -s dir -t deb -n consul -v 0.6 \
--url=http://consul.io --vendor=HashiCorp \
--description "A distributed service discovery tool" ./consul=/usr/local/bin/consul
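The resulting package can then be installed with dpkg (fpm typically names it consul_0.6_amd64.deb, though the exact name may vary):

sudo dpkg -i consul_0.6_amd64.deb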

There is a puppet module (https://github.com/solarkennedy/puppet-consul) which can help with this. It pulls the binary from dl.bintray.com and also helps out with configuring the system.
Install a server, joining it to 172.20.20.10. We are "expecting" a 3-node cluster, so this snippet will work for all three server nodes (even the first, as long as it is the node at 172.20.20.10):
class { 'consul':
  join_cluster => '172.20.20.10',
  config_hash  => {
    'datacenter'       => 'dc1',
    'data_dir'         => '/opt/consul',
    'log_level'        => 'INFO',
    'node_name'        => $::hostname,
    'bind_addr'        => $::ipaddress_eth1,
    'bootstrap_expect' => 3,
    'server'           => true,
  }
}
That snippet will also work for the client agents (just flip the "server" bit to false). The last step is to create a service definition and register it with the local Consul client agent:
consul::service { 'foo':
  tags           => ['service'],
  port           => 8080,
  check_script   => '/opt/foo-health-checker.sh',
  check_interval => '5s',
}
Here is an example Vagrantfile to build up a demo stack, complete with a 3 node consul cluster: https://github.com/benschw/consul-cluster-puppet
... and a blog post walking through how it was built: http://txt.fliglio.com/2014/10/consul-with-puppet/

Another option is to reuse one of the existing docker images.
e.g. progrium/consul is a popular image designed to work well in the Docker ecosystem.
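Its documentation suggests invocations along these lines (flags as described in the progrium/consul README; the host name is illustrative):

docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap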

If you are interested in Ubuntu packages, I started maintaining a Launchpad PPA at https://launchpad.net/~bcandrea/+archive/ubuntu/consul. It currently targets LTS releases (12.04/14.04) which are the ones I need, but I might consider adding intermediate releases as well.
You can install it with the usual steps:
$ sudo apt-add-repository ppa:bcandrea/consul
$ sudo apt-get update
$ sudo apt-get install consul consul-web-ui
If you want to make a Debian/Ubuntu package for it and distribute it yourself, you might want to look at my Makefile for creating the packages: https://github.com/bcandrea/consul-deb.

Another alternative, which only assumes that the Linux target machines have an SSH daemon running and that you can SSH from the source machine using keys:
Install Ansible on the source machine, then use a simple command line as described in http://docs.ansible.com/ansible/latest/intro_adhoc.html#file-transfer
File Transfer: Here's another use case for the /usr/bin/ansible command line. Ansible can SCP lots of files to multiple machines in parallel.
To transfer a file directly to many servers:
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
So, assuming you already have the consul binary, prepare a file containing the list of target machines. This is called an inventory in Ansible jargon: http://docs.ansible.com/ansible/latest/intro_inventory.html
Then: ansible my_linux_machines -m copy -a "src=consul dest=/usr/bin/consul"
You can also have Ansible download the zip and unpack it before copying, as in the sketch below.
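A minimal sketch with the unarchive module (this form needs Ansible 2.2+; the URL and destination are illustrative):

ansible my_linux_machines -b -m unarchive -a "src=https://releases.hashicorp.com/consul/0.6.0/consul_0.6.0_linux_amd64.zip dest=/usr/local/bin remote_src=yes"

This downloads the zip on each target and unpacks the consul binary into /usr/local/bin in one step.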

There is a multi-platform Ansible role that helps create a Consul cluster with clients:
https://github.com/brianshumate/ansible-consul
Another role by the same author can layer Vault on top of Consul.

Related

Install Alpine in diskless mode on a dedicated server without VNC

I'm trying to figure out how to install Alpine Linux in diskless mode on my remote dedicated server, without VNC access.
The hosting provider only offers a few images and a rescue system, with no VNC option.
I already tried to boot the ISO via GRUB image boot, but the Alpine Linux installation image doesn't have openssh installed, so I couldn't connect to the server to run alpine-setup.
So I thought I could maybe edit the squashfs image.
On a Debian live CD it's easy to unsquash the image, enable "PermitRootLogin yes" in sshd_config, and squash it again, but with Alpine Linux I have absolutely no clue.
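Roughly, the Debian procedure I mean is this (standard squashfs-tools commands; file names are illustrative):

unsquashfs -d squashfs-root filesystem.squashfs
echo 'PermitRootLogin yes' >> squashfs-root/etc/ssh/sshd_config
mksquashfs squashfs-root filesystem-new.squashfs -comp xz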
After this I tried to build a custom Alpine ISO with mkimage, but I just don't know how to build it properly; I get "unable to load key file" and "$apks: unable to select package (or its dependencies)" errors after the build.
(https://wiki.alpinelinux.org/wiki/How_to_make_a_custom_ISO_image_with_mkimage)
I used this code for the mkimage profile:
profile_nas() {
    profile_standard
    kernel_cmdline="unionfs_size=512M console=tty0 console=ttyS0,115200"
    syslinux_serial="0 115200"
    kernel_addons="zfs"
    apks="\$apks openssh"
    local _k _a
    for _k in \$kernel_flavors; do
        apks="\$apks linux-\$_k"
        for _a in \$kernel_addons; do
            apks="\$apks \$_a-\$_k"
        done
    done
    apks="\$apks linux-firmware"
}
and this command to build it:
sh mkimage.sh --tag edge \
--outdir ~/iso \
--arch x86_64 \
--repository https://dl-cdn.alpinelinux.org/alpine/edge/main/ \
--profile nas
Even if I'm able to generate the custom Alpine Linux ISO, I don't understand
this part of the guide (and even if I did understand it, I still wouldn't know how to enable remote root access, i.e. "PermitRootLogin yes" in sshd_config):
Making packages available on boot
A package may be made available in the live system by defining the generation of an apkovl which contains a corresponding /etc/apk/world file, and adding that overlay definition to the mkimg-profile, e.g. with `apkovl="genapkovl-mkimgoverlay.sh"`
The definition may be done as in the genapkovl-dhcp.sh example. Copy the relevant parts (including the rc_add lines) into a `genapkovl-mkimgoverlay.sh` file and add the package(s) that should be installed in the live system on separate lines in the file contents for /etc/apk/world.
After this I tried to get SSH into the initramfs with dropbear-initramfs, but that doesn't work either. It has always worked for me with encrypted filesystems, but for this task I can't get a connection.
Does someone have a different idea how I can accomplish this?

/etc/init.d/puppet doesn't exist in puppet agent

I've followed these instructions to install the puppet agent in a Docker container running Ubuntu 16.04:
https://puppet.com/docs/puppet/5.5/install_linux.html
So I executed this:
wget https://apt.puppetlabs.com/puppet5-release-xenial.deb
dpkg -i puppet5-release-xenial.deb
apt update
apt-get install puppet-agent
/opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
The last line, which starts the service, fails for this reason:
Error: Could not find init script for 'puppet'
Error: /Service[puppet]/ensure: change from 'stopped' to 'running' failed: Could not find init script for 'puppet'
service { 'puppet':
ensure => 'stopped',
enable => 'false',
}
The problem, I think, is that /etc/init.d/puppet doesn't exist.
The installed puppet-agent version is 5.5.1.
Can you help me?
Thanks
Systemd, along with other init systems, is not installed by design: you should be running your processes via an entrypoint or command option. In other words, the container should run the command you are interested in, not a wrapper or bootstrapping application.
In your case, puppet actually has a container that you can run out of the box for smoke-testing and such. You can find it here:
https://github.com/puppetlabs/puppet-in-docker
If, though, you are hell-bent on running puppet agent jobs via systemd, you can attempt this with an example from a Red Hat blog here:
https://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
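Since the container itself should run your process, a common alternative is to run the agent in the foreground as the container's command instead of as a service; these are standard puppet agent flags:

/opt/puppetlabs/bin/puppet agent --verbose --no-daemonize --onetime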

Install/Update cifs-utils before mount smb

I'm currently trying to get Vagrant to provision a working CentOS 7 image on Windows 10, using Hyper-V. Vagrant 1.8.4, the current latest.
I encounter a problem where the provisioning fails and I need to work around it each time. The CentOS 7 image is a minimal image and does not include cifs-utils, therefore the mount won't work. So I need cifs-utils installed before the mount.
Error:
==> default: Mounting SMB shared folders...
default: C:/Programs/vagrant_stuff/centos7 => /vagrant
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,sec=ntlm,credentials=/etc/smb_creds_4d99b2d500a1bcb656d5a1c481a47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
mount -t cifs -o uid=`id -u vagrant`,gid=`id -g vagrant`,sec=ntlm,credentials=/etc/smb_creds_4d99b2d500a1bcb656d5a1c481a47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
The error output from the last command was:
mount: wrong fs type, bad option, bad superblock on //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
As it is now, the provisioning has to fail, and I need to:
vagrant ssh (powershell)
(connect to instance via putty/ssh)
sudo yum install cifs-utils -y (putty/ssh)
(wait for install...)
exit (putty/ssh)
vagrant reload --provision (powershell)
This is obviously a pain and I am trying to streamline the process.
Does anyone know a better way?
You can install the missing package in your box and repackage it, so you can distribute a new version of the box containing the missing package.
To build a Vagrant box you create it from an ISO, and while preparing it you can install all the packages you need. In your case it is Hyper-V: https://www.vagrantup.com/docs/hyperv/boxes.html
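As a rough sketch of the repackaging flow (note that vagrant package targets VirtualBox; for Hyper-V the box must be assembled by hand as the link above describes, so these commands are illustrative):

vagrant ssh -c 'sudo yum install -y cifs-utils'
vagrant package --output centos7-cifs.box
vagrant box add centos7-cifs centos7-cifs.box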
Best Regards
Apparently my original question was downvoted for some reason. #whatever
As I mentioned in one of the comments above:
I managed to repackage and upload an updated version. Thanks for the advice. It's available in Atlas as "KptnKMan/bluefhypervalphacentos7repack".
Special thanks to @frédéric-henri :)

How can I execute 'apt-get install' in a Docker Ubuntu container?

First, I installed and ran a Docker container using the command below.
docker run -i -t ubuntu /bin/bash
Then I executed the commands below.
root@d444a77039e7:/# apt-get update
0% [Connecting to archive.ubuntu.com (91.189.92.200)]
It hung there indefinitely.
Then I ran the command below, but hit issues.
root@d444a77039e7:/# apt-get install nodejs
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nodejs
Then I set the HTTP and HTTPS proxy as below, but it failed as well.
root@d444a77039e7:/# export HTTP_PROXY=http://proxy.xxx.com
root@d444a77039e7:/# export HTTPS_PROXY=http://proxy.xxx.com
Could you tell me how I can fix this issue? Thanks. My host machine is Red Hat 5.9, which does not support the latest version of Node.js, so I plan to install it inside Docker.
That means your docker build was not started with the docker 1.9+ build-arg arguments, which avoid putting the full proxy (which can sometimes include your credentials) in the Dockerfile:
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate or final images like ENV values do.
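A minimal sketch of the Dockerfile side (http_proxy is one of Docker's predefined build args; the package installed is just an example):

FROM ubuntu
# declaring the arg is optional for the predefined proxy vars,
# but makes the build-time dependency explicit
ARG http_proxy
RUN apt-get update && apt-get install -y curl

built with:

docker build --build-arg http_proxy=http://10.20.30.2:1234 .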
Try lowercase variable names:
export http_proxy=http://proxy.xxx.com
export https_proxy=http://proxy.xxx.com
A cleaner way to handle this issue is to create a new image with the Dockerfile below; with that, you needn't set the proxy manually any more.
FROM ubuntu
ENV http_proxy http://proxy.xxx.com
ENV https_proxy http://proxy.xxx.com
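Build and run it roughly like this (the image tag is illustrative):

docker build -t ubuntu-proxied .
docker run -i -t ubuntu-proxied /bin/bash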

How to install the check_procs plugin in Nagios

I am new to Nagios and I have installed Nagios 3 on my Linux machine.
I want to install the Nagios check_procs plugin. Can anyone suggest how? Thanks.
You can install it from a package; which one depends on the Linux distribution you use.
If it is RPM-based, install the "nagios-plugins" package.
rpm -qf /usr/lib64/nagios/plugins/check_procs
nagios-plugins-1.4.15-2.el6.rf.x86_64
From the tags on your question, I assume you are using Ubuntu as the operating system for your Nagios server.
First of all, verify where your resource file is:
# find /* -name resource.cfg
The answer should be something like '/usr/local/nagios/etc/resource.cfg'.
Then find where your plugins are, pointed to in the resource file by the $USER1$ variable (the command below assumes your resource.cfg is in /usr/local/nagios/etc/):
# grep '\$USER1\$' /usr/local/nagios/etc/resource.cfg
You'll get the folder of your scripts (in my case it is /usr/local/nagios/libexec/):
$USER1$=/usr/local/nagios/libexec
If you do not find check_procs in that folder, then you'll need to install a newer version of the Nagios plugins:
- you can either run the command below
apt-get install nagios-plugins
- or go to the official Nagios site and download/install the plugins package: http://www.nagios.org/download/plugins; inside the nagios-plugins .tar.gz archive there is a README file with good instructions for the manual installation process.
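Once check_procs is available, hooking it into Nagios is a standard command definition in your object configuration; the thresholds here are illustrative:

define command {
    command_name check_total_procs
    command_line $USER1$/check_procs -w 150 -c 200
}

You can then reference check_total_procs as the check_command of a service definition.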
