Provision a Vagrant Linux VM with another Vagrant Linux VM running Ansible

I know Ansible has issues running on Windows, which is why I want to avoid using it on my host. Instead, I want to provision from a local Linux VM running in VirtualBox.
I was wondering if anyone can tell me whether it is possible to use Vagrant to bring up two independent VMs on the same box, install Ansible on one of them, and then SSH into that VM. From there, use that Linux VM as the Ansible control machine to provision the other Linux VM, which was also created from the Windows host. So this is not a VM inside a VM; it is just two VMs running on Windows via Vagrant, where I SSH into one and use Ansible to provision the other.
Steps:
Vagrant up VM 1 and install Ansible
Vagrant up VM 2
SSH into VM 1
From VM 1, use Ansible to provision VM 2.
Can that be done? Sorry if that sounded confusing.

There is a new ansible_local provisioner in Vagrant 1.8.0, which you can use in your scenario.
In particular, look at the "Tips and Tricks" section of the documentation; it describes an exact solution (which worked for me).
Below is my Vagrantfile for this scenario (slightly different from the one in the documentation), which also works around potential problems with SSH key permissions and an "executable" inventory file (if you're using Cygwin):
Vagrant.configure(2) do |config|
  config.vm.synced_folder "./", "/vagrant",
    owner: "vagrant",
    mount_options: ["dmode=775,fmode=600"]

  config.vm.define "vm2" do |machine|
    machine.vm.box = "box-cutter/ubuntu1404-desktop"
    machine.vm.network "private_network", ip: "172.17.177.21"
  end

  config.vm.define "vm1" do |machine|
    machine.vm.box = "ubuntu/trusty64"
    machine.vm.network "private_network", ip: "172.17.177.11"

    machine.vm.provision :ansible_local do |ansible|
      ansible.provisioning_path = "/vagrant"
      ansible.playbook = "provisioning/playbook.yml"
      ansible.limit = "vm2"
      ansible.inventory_path = "inventory"
      ansible.verbose = "vvv"
      ansible.install = true
    end
  end
end
and inventory file:
vm1 ansible_connection=local
vm2 ansible_ssh_host=172.17.177.21 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/vm2/virtualbox/private_key
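Bring-up order matters here: the target machine has to exist before the controller provisions it. With the machine names above, usage looks like this:

vagrant up vm2   # bring up the target first so its private key exists
vagrant up vm1   # vm1 then provisions vm2 via ansible_local

A plain vagrant up also works, since vm2 is defined before vm1 in the Vagrantfile and Vagrant brings machines up in definition order.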

In order to provision a box you don't necessarily need another box; in this Windows scenario you could simply write your playbooks, share them with your guest, and run them with ansible-playbook via shell provisioning.
VAGRANTFILE_API_VERSION = "2"

$script = <<SCRIPT
sudo apt-get install -y software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
ansible-playbook /home/vagrant/provisioning/playbook.yml
SCRIPT

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder "./provisioning", "/home/vagrant/provisioning"
  config.vm.provision "shell", inline: $script
end
The first lines install Ansible on your box; the last line then targets the playbook you shared with the box and runs it.
This is just an example; I once used this approach to provision my working Vagrant box. I hope the idea helps you.
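For reference, a playbook for this kind of local run might look like the following minimal sketch (the hosts/connection settings and the git task are illustrative assumptions, not from the question):

# provisioning/playbook.yml -- a hypothetical minimal example
- hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Install git (placeholder task)
      apt:
        name: git
        state: present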

Related

How to share directories (config.vm.synced_folder) between Windows 10 and a CentOS 7 virtual machine created using Vagrant and VirtualBox

I'm trying to create a CentOS 7 VM using Vagrant (2.2.3) and VirtualBox (6.0.4) on Windows 10, using the following Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "bento/centos-7"
config.vm.network "private_network", ip: "192.168.56.3"
config.vm.synced_folder "D://SharedWithVM//CentOS7-Work", "/media/sf_CentOS7-Work", type: "virtualbox"
config.vm.provider "virtualbox" do |vb|
vb.name = "Test"
end
config.vm.provision "shell", path: "./scripts/InstallGuestAdditions.sh"
end
and the InstallGuestAdditions.sh shell script is as follows:
#!/bin/bash
# Download the Guest Additions ISO, mount it, run the installer, then clean up.
curl -C - -O http://download.virtualbox.org/virtualbox/6.0.4/VBoxGuestAdditions_6.0.4.iso
sudo mkdir /media/VBoxGuestAdditions
sudo mount -o loop,ro VBoxGuestAdditions_6.0.4.iso /media/VBoxGuestAdditions
sudo sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run
rm VBoxGuestAdditions_6.0.4.iso
sudo umount /media/VBoxGuestAdditions
sudo rmdir /media/VBoxGuestAdditions
All works fine and the CentOS 7 VM is created.
If I check the machine's properties for shared directories, I see the path \\?\D:\SharedWithVM\CentOS7-Work, which surprises me.
How should I change my Vagrantfile to obtain a correct path?
I've tried connecting to my CentOS 7 VM using the vagrant ssh command and everything works. The command cd /media/sf_CentOS7-Work also works fine, but no file or directory can be listed or shared between the two systems. I've tried creating files and directories both in Windows 10 and in the CentOS 7 VM.
Any suggestion or example will be appreciated.
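(For what it's worth, host paths in a Vagrantfile are normally written with single forward slashes; this is only a guess at the culprit, not a verified fix:)

config.vm.synced_folder "D:/SharedWithVM/CentOS7-Work", "/media/sf_CentOS7-Work", type: "virtualbox"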

vagrant synced folders not working real-time on virtualbox

My synced folders are not working properly: they are synced once at startup, but when I make changes on the host machine, Vagrant does not sync them in real time.
First some details on my system:
OS: Linux Mint 18 Sarah
Virtualbox version: 5.0.24-dfsg-0ubuntu1.16.04.1
Vagrant version: 1.9.0
vagrant-hostmanager (1.8.5)
vagrant-share (1.1.6)
vagrant-vbguest (0.13.0)
Before we start discussing: I am not using the newest version of VirtualBox, since it is not in the repository and a simple vagrant up fails with it.
My Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.network "private_network", ip: "192.168.88.88"
config.vm.hostname = "my.centos.dev"
end
vagrant up brings the machine up without errors.
Now when I create a file on the host machine:
falnyr@mint:~/centos-vagrant $ ls
ansible  Vagrantfile
falnyr@mint:~/centos-vagrant $ touch file.txt
falnyr@mint:~/centos-vagrant $ ls
ansible  file.txt  Vagrantfile
And SSH to the guest machine:
falnyr@mint:~/centos-vagrant $ vagrant ssh
[vagrant@my ~]$ ls /vagrant/
ansible  Vagrantfile
As you can see, the file is not created on the guest. When I perform vagrant reload, the sync is executed again during machine boot.
Note: I cannot use NFS sync, since I need cross-platform ready environment.
Any ideas on how to enable real-time sync?
The box owner has set rsync as the default sync type. If you look at the Vagrantfile of your box (in my case it's ~/.vagrant.d/boxes/centos-VAGRANTSLASH-7/0/vmware_fusion, but yours is probably under the virtualbox provider) you'll see a Vagrantfile with this content:
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/vagrant", type: "rsync"
end
Just remove this file from the box directory and it will work.
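(A sketch of that removal, assuming your box lives under the virtualbox provider; adjust the path to whatever ls ~/.vagrant.d/boxes/centos-VAGRANTSLASH-7/0/ shows:)

rm ~/.vagrant.d/boxes/centos-VAGRANTSLASH-7/0/virtualbox/Vagrantfile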
Note: if you plan to use NFS, you can change the sync type in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.network "private_network", ip: "192.168.88.88"
config.vm.hostname = "my.centos.dev"
config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
You can use the rsync-auto command:
vagrant rsync-auto
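Note that rsync-auto stays in the foreground watching the host folder, so it is typically left running in a second terminal:

vagrant up           # the usual one-time sync happens at boot
vagrant rsync-auto   # keep running; re-syncs on every host-side change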
Actually, when I had a problem with sync, adding type: nfs helped me:
config.vm.synced_folder ".", "/home/ubuntu/qb-online", type: "nfs"
You can read more in the documentation:
https://www.vagrantup.com/docs/synced-folders/rsync.html
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: ".git/"
end
Just use the 2nd and 3rd lines (the synced_folder setting) inside:
Vagrant.configure("2") do |config|
#place here
end
If anyone is facing this VirtualBox syncing/mount issue: enter the machine with vagrant ssh from the directory containing the Vagrantfile and run sudo yum upgrade. That will take some time; once it finishes, exit the guest and run vagrant reload. The issue should be resolved.
Also, if you are using CentOS, make sure you use a bento/centos box (e.g. bento/centos-7) in your Vagrantfile.
vagrant ssh
sudo yum upgrade
vagrant reload

Run ansible-playbook with a user-data script on an EC2 instance

I am using Packer with Ansible to create an AWS EC2 image (AMI). Ansible is used to install Java 8, install the database (Cassandra), install Ansible itself, and upload an Ansible playbook (I know I should push the playbook to git and pull it from there, but I will do that once this is working). I install Ansible and upload the playbook because I have to change some Cassandra properties when an instance is launched from the AMI (for example, adding the current instance IP to the Cassandra options). To accomplish this I wrote a simple bash script that is added via the user-data-file property. This is the script:
#cloud-boothook
#!/bin/bash
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
ansible-playbook -i "localhost," -c local /usr/local/etc/replace_cassandra.yaml
As you can see, I am executing ansible-playbook in local mode.
The problem is that when I start the instance, I find an error in the /var/log/cloud-init.log file stating that ansible-playbook could not be found. So I added an ls line to the user-data script to check the contents of the /usr/bin/ folder (where Ansible is installed), and Ansible was not there; yet when I access the instance over SSH, Ansible is present in /usr/bin/ and ansible-playbook runs without any problem.
Has anyone encountered a similar problem? I think this should be quite a popular use case for Ansible with EC2.
EDIT
After some logging I found out that during the execution of the user data, not only is Ansible missing, but the database is missing as well.
Is it possible that some of the code (or all of it) in the Packer Ansible provisioner is executed when the instance is launched?
EDIT2
I have found out what is happening here. When I add the user data via Packer through the user_data_file property, the user data is executed when Packer launches the instance it uses to build the AMI. The script runs before the Ansible provisioner, which is why Ansible is missing.
What I want is to automatically attach user data to the AMI itself, so that the user data is executed when an instance is launched from the AMI, not when Packer builds it.
Any ideas on how to do this?
Just run multiple provisioners and don't try to run Ansible via cloud-init.
I'm assuming here that your playbook and roles are stored locally, where you start the Packer run from. Instead of shoehorning the Ansible stuff into user data, run a shell provisioner to install Ansible, then run the ansible-local provisioner to apply the playbook/role you want.
Below is a simplified example of what I'm talking about. It won't run without some more values in the builder config, but I left those out for the sake of brevity.
In the example JSON, install-prereqs.sh just adds the Ansible PPA apt repo, runs apt-get update, and installs Ansible:
#!/bin/bash
sudo apt-get install -y software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
The second provisioner will then copy the playbook and roles you specify to the target host and run them.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ssh_username": "ubuntu",
      "image_name": "some-name",
      "source_image": "some-ami-id",
      "ssh_pty": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/install-prereqs.sh"
    },
    {
      "type": "ansible-local",
      "playbook_file": "path/to/playbook.yml",
      "role_paths": ["path/to/roles"]
    }
  ]
}
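Assuming the template above is saved as template.json (the file name is arbitrary), the image is then built with:

packer build template.json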
This is possible! Please make sure of the following:
An Ansible server (install Ansible via CloudFormation user data if it is not built into the AMI) and your target have SSH access in the security groups you create in CloudFormation.
After you install Ansible on the Ansible server, your ansible.cfg file points to a private key on the Ansible server.
The matching public key for that private key is copied to the authorized_keys file in the root user's .ssh directory on the servers you wish to run playbooks on.
You have enabled root SSH access between the Ansible server and the target server(s); this can be done by editing the /etc/ssh/sshd_config file and making sure nothing prevents SSH access for the root user in the root authorized_keys file on the target server(s). A sketch of this is shown below.
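(A sketch of the sshd_config side of that last point; prohibit-password permits key-based root logins only, and the restart command assumes a systemd host:)

# /etc/ssh/sshd_config on the target server(s)
PermitRootLogin prohibit-password

# then reload the SSH daemon
sudo systemctl restart sshd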

Configuring a Vagrantfile for multiple machines

I have set up VirtualBox with two VMs: a) Ubuntu, b) Windows 10. I created Vagrant boxes for each of these VMs from scratch, and each box runs fine individually. But I want to launch both VMs at once, so I created a Vagrantfile (shown below) with the help of this documentation: https://docs.vagrantup.com/v2/multi-machine/
With the following Vagrantfile, the box declared first gets launched while the other doesn't. Is there an error in my Vagrantfile?
Any solutions or hints on how to fix this problem? How do I launch both VMs?
-----------Vagrantfile---------------
Vagrant.configure(2) do |config|
  config.vm.define "linux" do |linux|
    linux.vm.box = "ubuntu"
    linux.vm.box_url = "/Users/xyz/Desktop/vagrant/linux_package.box"
  end

  config.vm.define "win" do |win|
    win.vm.box = "Windows10"
    win.vm.box_url = "/Users/xyz/Desktop/vagrant/win_package.box"
  end

  config.vm.provider "virtualbox" do |v|
    v.gui = true
  end
end
Output on terminal:
When the linux machine is launched first, I get this message on the terminal:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
The fix: log in to the guest (through GUI mode) and make sure the vagrant user is set up like so in /etc/sudoers:
vagrant ALL=(ALL) NOPASSWD: ALL
Run visudo as root in order to edit this file.
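As for launching both machines: with a multi-machine Vagrantfile like the one above, vagrant up with no arguments brings up every defined machine, and individual machines can be named explicitly:

vagrant up          # brings up both "linux" and "win"
vagrant up linux    # only the Ubuntu box
vagrant up win      # only the Windows 10 box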

SSH Fails Due to Key File Permissions When I Try to Provision a Vagrant VM with Ansible on Windows/Cygwin

I'm using Cygwin (CYGWIN_NT-6.3-WOW64) under Windows 8. I'm also running Vagrant (1.7.2) and Ansible (1.8.4). To be complete, my VirtualBox is 4.3.22.
Cygwin and Vagrant were installed from their respective Windows install packages. I'm running Python 2.7.8 under Cygwin and used pip install ansible to install Ansible.
All of these applications work fine in their own right. Cygwin works wonderfully; I use it as my shell all day, every day with no problems.
Vagrant and VirtualBox also work with no problems when I run Vagrant under Cygwin. Ansible works fine under Cygwin as well when I run plays or modules against the servers on my network.
The problem comes when I try to use Ansible to provision a Vagrant VM running locally.
For example, I vagrant up a VM and then draft a simple playbook to provision it. Here is the Vagrantfile:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define :drupal1 do |config|
    config.vm.box = "centos65-x86_64-updated"
    config.vm.hostname = "drupal1"
    config.vm.network "forwarded_port", guest: 80, host: 10080
    config.vm.network :private_network, ip: "192.168.56.101"

    config.vm.provider "virtualbox" do |v|
      v.name = "Drupal Server 1"
      v.memory = 1024
    end

    config.vm.provision :ansible do |ansible|
      ansible.playbook = "provisioning/gather_facts.yml"
    end
  end
end
and playbook:
---
- hosts: all
  gather_facts: yes
However, when I run vagrant provision drupal1, I get the following error:
vagrant provision drupal1
==> drupal1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false
ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook
--private-key=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key
--user=vagrant --connection=ssh --limit='drupal1'
--inventory-file=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/provisioners/ansible/inventory
provisioning/gather_facts.yml

PLAY [all]

GATHERING FACTS
fatal: [drupal1] => private_key_file (C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key) is group-readable or world-readable and thus insecure - you will probably get an SSH failure

PLAY RECAP
to retry, use: --limit @/home/mjenkins/gather_facts.retry
drupal1 : ok=0 changed=0 unreachable=1 failed=0

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Looking at the error, it's plainly obvious that it has something to do with Ansible's interpretation of my key and the file permissions on either it or the folder it's in.
Here are a few observations and steps I've tried:
I tried setting the permissions on the file and all the directories leading up to it in Cygwin, i.e. chmod -R 700 .vagrant in the project directory. Still got the same error.
The key file is being referenced using a Windows path, not a Cygwin path (odd, though, that the retry file in the output has a Cygwin path). So I checked the permissions from the Windows side and changed them so that 'Everyone' has no access to .vagrant and all files/folders under it. Still got the same error.
Then I thought there might still be some problem with the file permissions/paths in my Cygwin-based Ansible, so I installed Python for Windows, used its pip to install Ansible, set my paths to that location, created an ansible-playbook.bat file, and ran Vagrant from a Windows cmd shell. Glad to say that tool chain worked... but I still got the same problem.
At this point I'm just about out of ideas, so I turn to you, friends of Stack Overflow, for your input.
Any thoughts on solving this problem?
Your private key is too open and accessible by anyone; a check in the SSH client prevents using such keys.
Try changing the permissions with chmod, from your Cygwin or Git Bash, on your private and public keys.
On C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key:
chmod 700 private_key, and ensure you have -rwx------ with ls -la.
BAAAH! I just commented out the check in lib/ansible/runner/connection.py.
Then I had to add this to ansible.cfg:
[ssh_connection]
control_path = /tmp
My solution to this was to override the synced folder's permission settings in the Vagrantfile with the following:
Vagrant.configure(2) do |config|
  config.vm.synced_folder "./", "/vagrant",
    owner: "vagrant",
    mount_options: ["dmode=775,fmode=600"]
  ...
I had a similar issue and figured out a solution. I added the following entries to my Vagrantfile:
config.ssh.insert_key = false
config.ssh.private_key_path = "~/.vagrant.d/insecure_private_key"
and copied the insecure_private_key from my Windows user folder to my Cygwin home at the path above. Afterwards I did
chmod 700 ~/.vagrant.d/insecure_private_key
and as a last step I emptied the following file in my Cygwin home:
~/.ssh/known_hosts
Once I reran the ansible-playbook command, I confirmed adding my localhost back to known_hosts and the SSH connection worked.
Truly speaking, it is much simpler if you understand what is happening.
Vagrant keeps one folder for sharing files with the host and other VMs: /vagrant. Anything in it has mode 777, and nothing can be done about that; even sudo chmod will not change the mode.
Ansible is asking you to reduce the mode so that the key is not readable by group or others.
So it is as simple as copying the private key from /vagrant/.vagrant/machines/yourmachine/virtualbox (or whichever provider) to somewhere like home (~) or /root, then changing its mode with chmod and using that copy in the inventory list in the hosts file.
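(A minimal sketch of that copy, reusing the vm2 machine name from the first answer; the destination filename is arbitrary, and 600 is stricter than the 700 mentioned above while still satisfying Ansible's check:)

# run inside the controller VM
cp /vagrant/.vagrant/machines/vm2/virtualbox/private_key ~/vm2_key
chmod 600 ~/vm2_key
# then point the inventory at the copy:
# vm2 ansible_ssh_host=172.17.177.21 ansible_ssh_private_key_file=~/vm2_key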
You could use the ansible_local provisioner for Vagrant; that will install Ansible into the VM. If you work with multiple Vagrant virtual machines, it is useful to let one be the Ansible controller, which then needs the private SSH key. That can be done in the Vagrantfile with:
config.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
config.vm.provision "shell", inline: "chmod 600 /home/vagrant/.ssh/id_rsa"
