How can I customize the output while 'vagrant up' runs? - linux

I want to inform the vagrant user which IP address the machine in use has. My first idea was to use '/etc/rc.local' and print the output of ifconfig to a file in the '/vagrant' directory, but it seems this directory is mounted only after rc.local is called. So I need another way to inform the user without any SSH login.
My second idea is to write the ifconfig output to some "place" where it shows up in the vagrant start-up output, like in the sample below.
...
default: Guest Additions Version: 4.3.10
default: VirtualBox Version: 5.0
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /home/user/vagrant/test
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
# start of desired output
Adresses found:
10.0.2.15
172.28.128.3
# end of desired output
==> default: flag to force provisioning. Provisioners marked to run always will still run.
...
All ideas are welcome.

You may be interested in this SO answer, which attempts the same thing: printing the network interface's IP to the terminal on vagrant up.
Relevant bits from the answer -
On the Guest:
/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'
Should get you something like this:
10.0.2.15
Which could then be used in your Vagrantfile like so:
config.vm.provision "shell", inline: <<-SHELL
sudo -i /vagrant/my_provisioning_script.sh $(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
SHELL
The trick here is knowing which interface (eth0 in the example above) has the IP you want. Of course if you are great with grep or awk you could modify that first command to check the IPs on all the interfaces... but that's beyond my abilities.
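If you do want every interface, here is a minimal sketch that lists all of the guest's IPv4 addresses (assuming the box has iproute2; adapt to taste):
# Print every global-scope IPv4 address assigned to the guest, one per line
ip -4 -o addr show scope global | awk '{print $4}' | cut -d/ -f1
# On most Linux guests this shorter form works too
hostname -I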
# content of Vagrantfile
$infoScript = <<SCRIPT
echo 'IP-addresses of the vm ...'
ifconfig | grep 'inet addr' | grep Bcast | awk '{print $2}' | sed 's/addr://'
SCRIPT
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.box_check_update = false
  config.vm.network "private_network", type: "dhcp"
  config.vm.network "forwarded_port", guest: 80, host: 8888
  config.vm.provider "virtualbox" do |vb|
    vb.name = "demo"
    vb.customize ["modifyvm", :id, "--cpuexecutioncap", "50"]
    vb.memory = "2048"
    vb.cpus = 2
  end
  # normal provision to set up the vm
  config.vm.provision "shell", path: "scripts/bootstrap.sh"
  # extra provision to print all the ip's of the vm
  config.vm.provision "shell", inline: $infoScript,
    run: "always"
end
EDIT
You may also be interested in the vagrant-hostmanager plugin. If the purpose of echoing the IP is just so you can connect to the box, this plugin can modify the /etc/hosts on your guest or host, so you don't need to worry about the IP and can instead use something like http://mydevbox.local

First you will need to determine how the IP address is acquired (i.e. DHCP or static). Then, based on that, you could essentially just add the private networking code to the Vagrantfile like so:
DHCP:
Vagrant.configure("2") do |config|
config.vm.network "private_network", type: "dhcp"
end
Static:
Vagrant.configure("2") do |config|
config.vm.network "private_network", ip: "192.168.50.4"
end
Then you could add a mixture of shell and ruby:
$script = <<SCRIPT
echo 'private_network ip: 192.168.50.4'
SCRIPT
Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end
Hope that helps enough to get you going.

Because the other answers don't describe how to publish dynamic data from inside the VM, I'm writing up my solution.
Step 1: Put a web server in the Vagrant VM and publish guest port 80 to host port 8888.
Step 2: Prepare /etc/rc.local on the guest so it gathers all the needed dynamic VM information and writes the output to a file 'info.txt' in the web server's document root.
Step 3: Add a second provision entry to the Vagrantfile that runs on every 'vagrant up' and tells the user where to find more information.
# Vagrantfile
...
config.vm.provision "shell", path: "scripts/bootstrap.sh"
config.vm.provision "shell", inline: "echo 'check http://localhost:8888/info.txt for more information'",
run: "always"
...
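For step 2, a rough sketch of the rc.local side, assuming the web server's document root is /var/www/html (use whatever path your bootstrap.sh actually sets up):
# /etc/rc.local (guest) -- runs at boot, before /vagrant is mounted,
# which is fine because we write into the web server's document root instead
ip -4 -o addr show scope global | awk '{print $4}' | cut -d/ -f1 > /var/www/html/info.txt
exit 0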

Related

Docker doesn't apply changes to daemon.json

I have created the file /etc/docker/daemon.json on ubuntu with the following contents:
{
"ipv6": false
}
Afterward I rebooted the machine and Docker is still looking for IPv6 addresses, giving me the following error on docker swarm init --advertise-addr enp0s3:
Error response from daemon: interface enp0s3 has more than one IPv6 address (2a00:c98:2060:a000:1:0:1d1e:ca75 and fe80::a00:27ff:fe7e:d9c4)
How do I apply the changes to the daemon so that I stop encountering this error? I can't advertise a specific IP address since the machine is using DHCP.
Thanks.
The problem was solved using the following command:
sudo docker swarm init --advertise-addr "$(ip addr show $MAIN_ETH_INTERFACE | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)"
This way I don't need to specify an ipv4 address.
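If you want to avoid hard-coding $MAIN_ETH_INTERFACE as well, here is a hedged sketch that derives it from the default route (assumes iproute2 and a single default route):
# The interface carrying the default route is normally the one to advertise on
MAIN_ETH_INTERFACE=$(ip route show default | awk '/^default/ {print $5; exit}')
sudo docker swarm init --advertise-addr "$(ip addr show $MAIN_ETH_INTERFACE | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)"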

How can I set up my desktop user inside a Linux Vagrant VM so that it behaves like a standard tools console with my usual username, ssh keys, etc.?

I am a Windows user who uses Linux a lot for development work.
In my company, developers' tastes for desktops vary (Mac, Windows, Arch Linux) but we use Vagrant VMs to make sure that everyone has a common environment for development.
One specific annoyance on Windows is the lack of Linux-compatible tools. There are various ways around it, like msys or cygwin, but nothing works better than a fully compatible tools console (Ubuntu 14.04 in our case).
So I made a Vagrant VM for it, but discovered that my standard login id and the ssh keys inside C:/Users/devang/.ssh had to be created manually inside the VM.
Is there a standard way to build it in Vagrant?
There were some useful answers on Stack Overflow related to configuring the default vagrant user, but in the end I had to write my own inline provisioner.
I am posting my solution here. It does not make any assumption about host OS but it does assume a Debian/Ubuntu Vagrant VM as the guest.
The key part is the inline shell provisioner, which provides access to the variables Dir.home and ENV['USER'] from the host OS.
AFAIK, to have access to those variables from the host OS, one has to use an inline provisioner.
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.hostname = "tools"
  config.vm.provision "shell" do |s|
    ssh_pub_key = File.open("#{Dir.home}/.ssh/id_rsa.pub", "rb").read
    ssh_key = File.open("#{Dir.home}/.ssh/id_rsa", "rb").read
    user_name = ENV['USER'].downcase
    home_dir = "/home/#{user_name}"
    s.inline = <<-SHELL
      sudo useradd -d "#{home_dir}" -m "#{user_name}" -s /bin/bash
      sudo chmod 755 "#{home_dir}"
      sudo mkdir "#{home_dir}/.ssh"
      sudo chmod 700 "#{home_dir}/.ssh"
      sudo echo "#{ssh_pub_key}" > "#{home_dir}/.ssh/id_rsa.pub"
      sudo echo "#{ssh_key}" > "#{home_dir}/.ssh/id_rsa"
      sudo echo "#{ssh_pub_key}" > "#{home_dir}/.ssh/authorized_keys"
      sudo chown -R "#{user_name}.#{user_name}" "#{home_dir}/.ssh"
      sudo chmod -R 600 "#{home_dir}/.ssh/id_rsa"
      sudo usermod -a -G sudo "#{user_name}"
      echo "#{user_name} ALL=(ALL) NOPASSWD: ALL" | sudo cat > "/etc/sudoers.d/#{user_name}"
    SHELL
  end
  config.vm.provision :shell, path: "contrib/build-server.sh"
  config.vm.box = "debian/contrib-jessie64"
  config.vm.network "private_network", ip: "192.168.70.150"
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    v.cpus = 4
  end
end
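Once it is provisioned, a quick way to check that the account works from a unix-like host shell (assuming your host username is already lowercase so it matches the user the provisioner created, and using the private network IP from the Vagrantfile above):
# Log in directly with your own key instead of 'vagrant ssh'
ssh -i ~/.ssh/id_rsa "$USER"@192.168.70.150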

How to pass host ip address to docker in mac OS?

I am working on a Linux machine and I wrote a script that passes the local host's IP address to a Docker container as a parameter. It works fine on Ubuntu.
Will the same script run on macOS and work as expected (pass the IP address of the local host to the Docker container)?
docker run -t -i -e "DOCKER_HOST=$(ip -4 addr show eth0 | grep -Po 'inet \K[\d.]+')" $IMAGE_NAME
On OSX use this command line:
docker run -it -e "DOCKER_HOST=$(ifconfig en0 | awk '/ *inet /{print $2}')" $IMAGE_NAME
On mac, you will be using a VM, so you might want to pass the IP of the docker machine you have declared:
(image from "docker on Mac OS X")
eval $(docker-machine env default)
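If you keep using docker-machine (as in the answer above), an alternative sketch is to let docker-machine report the VM's address itself; 'default' is the machine name from the eval line above:
# Passes the docker-machine VM's IP into the container under the same variable name
docker run -it -e "DOCKER_HOST=$(docker-machine ip default)" $IMAGE_NAME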

How can I set a static IP address in a Docker container?

I'm perfectly happy with the IP range that docker is giving me by default 176.17.x.x, so I don't need to create a new bridge, I just want to give my containers a static address within that range so I can point client browsers to it directly.
I tried using
RUN echo "auto eth0" >> /etc/network/interfaces
RUN echo "iface eth0 inet static" >> /etc/network/interfaces
RUN echo "address 176.17.0.250" >> /etc/network/interfaces
RUN echo "netmask 255.255.0.0" >> /etc/network/interfaces
RUN ifdown eth0
RUN ifup eth0
from a Dockerfile, and it properly populated the interfaces file, but the interface itself didn't change. In fact, running ifup eth0 within the container gets this error:
RTNETLINK answers: Operation not permitted Failed to bring up eth0
I have already answered this here:
https://stackoverflow.com/a/35359185/4094678
but I see now that this question is actually older than the aforementioned one, so I'll copy the answer here as well:
Easy with Docker version 1.10.1, build 9e83765.
First you need to create your own docker network (mynet123):
docker network create --subnet=172.18.0.0/16 mynet123
then simply run the image (I'll take ubuntu as an example):
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
then in the ubuntu shell run:
ip addr
Additionally you could use
--hostname to specify a hostname
--add-host to add more entries to /etc/hosts
Docs (and why you need to create a network) at https://docs.docker.com/engine/reference/commandline/network_create/
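To double-check the assigned address from the host without attaching to the container, something like this standard inspect template should work (replace <container> with the name or ID shown by docker ps):
# Prints the container's IP address on its user-defined network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container>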
I'm using the method described here, from the official Docker documentation, and I have confirmed it works:
# At one shell, start a container and
# leave its shell idle and running
$ sudo docker run -i -t --rm --net=none base /bin/bash
root@63f36fc01b5f:/#
# At another shell, learn the container process ID
# and create its namespace entry in /var/run/netns/
# for the "ip netns" command we will be using below
$ sudo docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
2778
$ pid=2778
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
# Check the bridge's IP address and netmask
$ ip addr show docker0
21: docker0: ...
inet 172.17.42.1/16 scope global docker0
...
# Create a pair of "peer" interfaces A and B,
# bind the A end to the bridge, and bring it up
$ sudo ip link add A type veth peer name B
$ sudo brctl addif docker0 A
$ sudo ip link set A up
# Place B inside the container's network namespace,
# rename to eth0, and activate it with a free IP
$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set dev B name eth0
$ sudo ip netns exec $pid ip link set eth0 up
$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
$ sudo ip netns exec $pid ip route add default via 172.17.42.1
Using this approach I always run my containers with --net=none and set IP addresses with an external script.
Actually, despite my initial failure, @MarkO'Connor's answer was correct. I created a new interface (docker0) in my host's /etc/network/interfaces file, ran sudo ifup docker0 on the host, and then ran
docker run --net=host -i -t ...
which picked up the static IP and assigned it to docker0 in the container.
Thanks!
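For reference, a rough sketch of what that host-side stanza might look like in /etc/network/interfaces (Debian/Ubuntu ifupdown with bridge-utils installed; the address is only an example and should match the range you want Docker to use):
# /etc/network/interfaces (host) -- pre-created docker0 bridge
auto docker0
iface docker0 inet static
    address 172.17.42.1
    netmask 255.255.0.0
    bridge_ports none
    bridge_stp off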
This worked for me:
docker run --cap-add=NET_ADMIN -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
Explained:
--cap-add=NET_ADMIN grants the rights needed to administer the network (i.e. for the /sbin/ip command)
myimages/image1 image for the container
/bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0 ; bash"
Inside the container, run ip addr add 172.17.0.8 dev eth0 to add the new IP address 172.17.0.8 to this container (caution: use an address that is free now and will stay free). Then run bash, just so the container is not stopped automatically.
Bonus:
My target scenario: set up a distributed app with containers playing different roles. A "conductor container" is able to run docker commands by itself (from inside) so as to start and stop containers as needed.
Each container is configured to know where to connect to reach a particular role/container in the distributed app (so the set of IPs for each role must be known by each partner).
To do this:
"conductor container"
image created with this Dockerfile
FROM pin3da/docker-zeromq-node
MAINTAINER Foobar
# install docker software
RUN apt-get -yqq update && apt-get -yqq install docker.io
# export /var/run/docker.sock so we can connect it in the host
VOLUME /var/run/docker.sock
image build command:
docker build --tag=myimages/conductor --file=Dockerfile .
container run command:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name=conductor1 -d -it myimages/conductor bash
Run containers with different roles.
First (not absolutely necessary), add entries to /etc/hosts to locate partners by IP or name (option --add-host).
Second (obviously required), assign an IP to the running container (use /sbin/ip inside it):
docker run --cap-add=NET_ADMIN --add-host worker1:172.17.0.8 --add-host worker2:172.17.0.9 --name=worker1 -h worker1.example.com -d -it myimages/image1 /bin/sh -c "/sbin/ip addr add 172.17.0.8 dev eth0; bash"
Docker containers by default do not have sufficient privileges to manipulate the network stack. You can try adding --cap-add=NET_ADMIN to the run command to allow this specific capability. Or you can try --privileged=true (grants all rights) for testing.
Another option is to use pipework from the host.
Set up your own bridge (e.g. br0)
Start docker with: -b=br0
& with pipework (192.168.1.1 below being the default gateway ip address):
pipework br0 container-name 192.168.1.10/24#192.168.1.1
Edit: do not start with --net=none : this closes container ports.
See further notes
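For the "set up your own bridge" step above, a minimal sketch with iproute2, assuming a standalone bridge where the host itself acts as the 192.168.1.1 gateway (if a real LAN router already owns that address, enslave your physical NIC to the bridge instead and skip the address assignment):
# Create br0 on the host, give it the gateway address, and bring it up
sudo ip link add name br0 type bridge
sudo ip addr add 192.168.1.1/24 dev br0
sudo ip link set br0 up
# then restart the Docker daemon with -b=br0 as described above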
I understood that you are not looking at multi-host networking of containers at this stage, but I believe you are likely to need it soon. Weave would allow you to first define multiple container networks on one host, and then potentially move some containers to another host without losing the static IP you have assigned to them.
pipework is also great, but if you can use hostnames instead of IPs then you can try this script:
#!/bin/bash
# This function will list all ip of running containers
function listip {
for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
do
ip=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' $vm`;
echo "$ip $vm";
done
}
# This function will copy hosts file to all running container /etc/hosts
function updateip {
for vm in `docker ps|tail -n +2|awk '{print $NF}'`;
do
echo "copy hosts file to $vm";
docker exec -i $vm sh -c 'cat > /etc/hosts' < /tmp/hosts
done
}
listip > /tmp/hosts
updateip
You just need to run this script every time you boot up your docker labs.
You can find my scripts, with additional functions, here: dockerip
For completeness: there's another method suggested on the Docker forums. (Edit: and mentioned in passing by the answer from Андрей Сердюк).
Add the static IP address on the host system, then publish ports to that ip, e.g. docker run -p 192.0.2.1:80:80 -d mywebserver.
Of course that syntax won't work for IPv6 and the documentation doesn't mention that...
It sounds wrong to me: the usual wildcard binds (*:80) on the host theoretically conflict with the container. In practice the Docker port takes precedence and doesn't conflict, because of how it's implemented using iptables. But your public container IP will still respond on all the non-conflicting ports, e.g. ssh.
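If you want to see that iptables precedence for yourself, the DNAT rules Docker installs for a published port can be listed on the host; the DOCKER chain name is standard, but the exact output varies by setup:
# Typically shows lines like "DNAT tcp ... dpt:80 to:172.17.0.2:80"
sudo iptables -t nat -L DOCKER -n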
I discovered that --net=host might not always be the best option, as it might allow users to shut down the host from the container! In any case, it turns out that the reason I couldn't properly do it from inside was because network configuration was designed to be restricted to sessions that began with the --privileged=true argument.
You can set up SkyDNS with a service discovery tool - https://github.com/crosbymichael/skydock
Or: simply create a network interface and publish the docker container's ports on it, as shown here: https://gist.github.com/andreyserdjuk/bd92b5beba2719054dfe

Vagrant /etc/sysconfig/network and /etc/resolv.conf

I have the following Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.define "admin" , primary: true do |node|
node.vm.box = "centos-6.5-x86_64"
node.vm.hostname = "admin.example.com"
node.vm.network :private_network, ip: "10.10.10.10"
end
end
and I get the following:
$ vagrant up && vagrant ssh
$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=admin.example.com
$ cat /etc/resolv.conf
search example.com
nameserver 10.0.2.3
I need to have HOSTNAME=admin, instead of HOSTNAME=admin.example.com.
How to achieve that?
If I set node.vm.hostname = "admin",
then /etc/resolv.conf does not have search example.com.
I could add a Vagrant shell provisioner to create the /etc/sysconfig/network and /etc/resolv.conf files myself, but that does not look nice (for instance, I would need to know the nameserver).
And what is the proper way to set also domain example.com in /etc/resolv.conf?
Add the following lines to your Vagrantfile:
config.vm.provider :virtualbox do |vb|
  vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
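That helps DNS resolution through the host, but not the HOSTNAME=admin part of the question. One hedged workaround, building on the observation above that the FQDN hostname is what makes 'search example.com' appear, is to keep node.vm.hostname = "admin.example.com" and then shorten the stored hostname with a small 'run always' shell provisioner (CentOS 6 paths as in the question):
# Inline shell provisioner body -- trims the stored hostname to its short form
sed -i 's/^HOSTNAME=.*/HOSTNAME=admin/' /etc/sysconfig/network
hostname admin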
