The server is a fresh CentOS 7.2 install whose Vagrant/VirtualBox setup is pretty much dead in the water. I have the Vagrant 1.8.6 RPM installed alongside VirtualBox 5.1.8r111374. The box boxcutter/centos72 comes up, but with this error:
SSH auth method: private key
gocd: Warning: Remote connection disconnect. Retrying...
And yet... vagrant ssh works. The Vagrantfile is as basic as it gets:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "boxname" do |boxname|
    boxname.vm.box = "boxcutter/centos72"
    boxname.vm.hostname = "test"
    boxname.vm.network "private_network", ip: "192.168.111.10"
    boxname.vm.provision :shell,
      path: "prov.sh"
  end
end
It never gets past SSH setup, so it can't run the provisioning script, and vagrant provision fails with the same error. I've clearly specified a private network, yet once on the box the ifcfg-enp* file looks like this:
TYPE=Ethernet
BOOTPROTO=dhcp
And the IP is a 10.x address.
VirtualBox 5.1.x seems to be having major issues. Revert to 5.0.26 (5.0.28 seems to have major networking issues, too).
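If downgrading is not an option, a workaround some users report for the 5.1.x series (an assumption on my part, not part of the original answer) is to force the virtual NIC's cable into the connected state, since these SSH retry loops have been tied to the adapter coming up unplugged:

Vagrant.configure("2") do |config|
  config.vm.provider :virtualbox do |vb|
    # Reportedly works around 5.1.x guests booting with the virtual network
    # cable "unplugged", which leaves Vagrant retrying the SSH connection.
    vb.customize ["modifyvm", :id, "--cableconnected1", "on"]
  end
end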
Related
I have the latest version of Jenkins (running under Tomcat), plus Vagrant and an LXC container. Tomcat runs as the jenkins user. I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "arjenvrielink/xenial64-lxc"
  config.vm.provider :lxc do |lxc|
    lxc.backingstore = 'dir'
  end
end
When I launch the LXC container from bash with vagrant up, everything is fine, and vagrant ssh works. But if I run it via a Jenkins job I get this:
Started by user admin
[EnvInject] - Loading node environment variables.
Building in workspace /home/jenkins/workspaces/server
[server] $ /bin/bash /opt/tomcat/temp/jenkins204809790857124992.sh
Bringing machine 'default' up with 'lxc' provider...
==> default: Importing base box 'arjenvrielink/xenial64-lxc'...
==> default: Checking if box 'arjenvrielink/xenial64-lxc' is up to date...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/jenkins/workspaces/server/vagrant
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 10.0.3.29:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
Build was aborted
Aborted by admin
Finished: ABORTED
The Jenkins job contains only these commands:
#!/bin/bash
cd vagrant
vagrant up
In the course of investigating I found the following difference. When I run vagrant ssh-config from bash, it outputs this:
Host default
  HostName 10.0.3.212
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/jenkins/workspaces/server/vagrant/.vagrant/machines/default/lxc/private_key
  IdentitiesOnly yes
  LogLevel FATAL
But when I run it from the Jenkins job, I get this:
Host default
  HostName 10.0.3.217
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/jenkins/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
What did I do wrong?
EDIT:
arjenvrielink/xenial64-lxc is an official box
So I'm still pretty sure your problem is with Vagrant's insecure-key replacement mechanism, but my solution won't help you.
Is arjenvrielink/xenial64-lxc a custom box?
If so, make sure to leave the insecure key in it, so that any new user (Jenkins included) has access to the box: on first up, Vagrant connects using the insecure key and then creates a new one.
If you want to include your own key in the box make sure to add the following lines to your Vagrantfile:
Vagrant.configure("2") do |config|
  config.ssh.private_key_path = File.expand_path("<path of the key relative to Vagrantfile>", __FILE__)
end
The caveat is that you'll have to make the key available everywhere your Vagrant environment will run.
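If the goal is simply to share one box between several users (the Jenkins case above), another option, my suggestion rather than part of the original answer, is to disable the key rotation entirely so every account authenticates with the well-known insecure key:

Vagrant.configure(2) do |config|
  config.vm.box = "arjenvrielink/xenial64-lxc"
  # Skip replacing Vagrant's insecure key on first up; any account that has
  # the standard ~/.vagrant.d/insecure_private_key can then connect.
  config.ssh.insert_key = false
end

The obvious trade-off is that the machine stays reachable with a publicly known key, so this only makes sense for throwaway development boxes.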
My synced folders are not working properly: they are synced once at startup, but when I make changes on the host machine, Vagrant does not sync them in real time.
First some details on my system:
OS: Linux Mint 18 Sarah
Virtualbox version: 5.0.24-dfsg-0ubuntu1.16.04.1
Vagrant version: 1.9.0
vagrant-hostmanager (1.8.5)
vagrant-share (1.1.6)
vagrant-vbguest (0.13.0)
Before we start discussing: I am not using the newest version of VirtualBox, since it is not in the repository and with it a simple vagrant up fails.
My Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.88.88"
  config.vm.hostname = "my.centos.dev"
end
Now when I create a file on the host machine:
falnyr@mint:~/centos-vagrant $ ls
ansible  Vagrantfile
falnyr@mint:~/centos-vagrant $ touch file.txt
falnyr@mint:~/centos-vagrant $ ls
ansible  file.txt  Vagrantfile
And ssh to guest machine:
falnyr@mint:~/centos-vagrant $ vagrant ssh
[vagrant@my ~]$ ls /vagrant/
ansible  Vagrantfile
As you can see, the file is not created. When I perform vagrant reload the sync is executed again during machine boot.
Note: I cannot use NFS sync, since I need cross-platform ready environment.
Any ideas on how to enable real-time sync?
The owner of the box has made rsync the default sync type. If you look at the Vagrantfile of your box (in my case it's ~/.vagrant.d/boxes/centos-VAGRANTSLASH-7/0/vmware_fusion, but yours is probably under the virtualbox provider) you'll see a Vagrantfile with this content:
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
end
Just remove this file from the box directory and it will work.
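Alternatively (my suggestion, not part of the original answer), you can override the box's default from your project Vagrantfile instead of editing the box directory, since a project-level synced_folder entry for the same guest path takes precedence. With the VirtualBox provider and working Guest Additions (the vagrant-vbguest plugin listed in the question can install them), something like this should sync in near real time:

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # Use VirtualBox's native shared folders instead of the box's rsync
  # default; host-side changes then appear in the guest immediately.
  config.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end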
Note: if you plan to use NFS you can change the sync type in your Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.88.88"
  config.vm.hostname = "my.centos.dev"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
You can use the rsync-auto command:
vagrant rsync-auto
Actually, when I had a problem with sync, adding type: nfs helped me:
config.vm.synced_folder ".", "/home/ubuntu/qb-online", type: "nfs"
You can read more information from the documentation:
https://www.vagrantup.com/docs/synced-folders/rsync.html
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: ".git/"
end
Just use the 2nd and 3rd lines (the config.vm.synced_folder setting) from the block above inside your own block:
Vagrant.configure("2") do |config|
  # place the synced_folder lines here
end
If anyone is facing this VirtualBox syncing/mount issue: go to the directory containing the Vagrantfile and enter the machine with vagrant ssh (no fresh vagrant up needed), then run sudo yum upgrade inside the guest. That will take a while; once it finishes, exit the guest and bring it back up, and the issue will be resolved. Also, if you are using CentOS, consider a Bento CentOS box in your Vagrantfile (see the sketch below). In short:
vagrant ssh
sudo yum upgrade
vagrant reload
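For that box swap, the change is a single line; the exact box name below is only an example (pick whichever Bento CentOS release you need):

Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7.2"  # Bento boxes ship with up-to-date Guest Additions
end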
I have a preconfigured RHEL image; I don't know how it was originally set up.
By default it is configured with a local network interface on the IP 192.168.50.50. What I am trying to do is configure its IP from the Vagrantfile.
This doesn't seem to do anything:
config.vm.network "private_network", ip: "192.168.50.10"
This does change the ip:
sudo nmcli con mod bond0 ipv4.addresses 192.168.50.10/24
service network restart
But after that, apparently Vagrant doesn't automatically detect the IP to connect to, so I need to add:
config.ssh.host = LOCAL_IP
But here's the problem: the first time, the IP is the default one (.50.50), so I can't set config.ssh.host to my desired IP up front. If I omit the config.ssh.host line, it runs the first time but not afterwards, and vagrant ssh fails as well.
Is there a way to set the box IP without editing the Vagrantfile between the first and second vagrant up?
Edit: Result of vagrant up --debug command: http://pastebin.com/BTccc4NT
Edit: The problem was that the Vagrant file from the default box (on Windows, it's at C:\Users\user\.vagrant.d\boxes\nameofbox\virtualbox\Vagrantfile) itself had this line:
config.vm.network "private_network", ip: "192.168.50.50", auto_config: false
Hmm, it's weird: it creates two interfaces.
DEBUG network: Normalized configuration: {:adapter_ip=>"192.168.50.1", :auto_config=>false, :ip=>"192.168.50.50", :mac=>nil, :name=>nil, :netmask=>"255.255.255.0", :nic_type=>nil, :type=>:static, :adapter=>2}
INFO network: Searching for matching hostonly network: 192.168.50.50
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "list", "hostonlyifs"]
......
DEBUG network: Normalized configuration: {:adapter_ip=>"192.168.50.1", :auto_config=>true, :ip=>"192.168.50.10", :mac=>nil, :name=>nil, :netmask=>"255.255.255.0", :nic_type=>nil, :type=>:static, :adapter=>3}
INFO network: Searching for matching hostonly network: 192.168.50.10
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "list", "hostonlyifs"]
So on adapter 2 you have 192.168.50.50 and on adapter 3 you have 192.168.50.10.
The likely reason is that the box you're using ships a Vagrantfile of its own which already defines a network on that static address.
I am not fully familiar with Windows, but on Mac the box definition is under ~/.vagrant.d/boxes/<yourbox>/<theprovider>/Vagrantfile (note this is not the Vagrantfile from your project; it is a Vagrantfile that gets applied to any VM built from this box). Check the file and remove the network configuration if you see it.
As documented, the Vagrantfiles from the different locations are merged:
At each level, settings set will be merged with previous values. What this exactly means depends on the setting. For most settings, this means that the newer setting overrides the older one. However, for things such as defining networks, the networks are actually appended to each other. By default, you should assume that settings will override each other. If the behavior is different, it will be noted in the relevant documentation section.
So by default Vagrant creates an additional network interface and does not replace the one coming from the box Vagrantfile.
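The question's EDIT confirms exactly this diagnosis: the box-level Vagrantfile contained its own network stanza. A minimal sketch of the cleanup, using the path and line from the EDIT above:

# C:\Users\user\.vagrant.d\boxes\nameofbox\virtualbox\Vagrantfile
Vagrant.configure("2") do |config|
  # Delete this line so the box stops pre-claiming 192.168.50.50 and the
  # project Vagrantfile's private_network becomes the only one configured:
  config.vm.network "private_network", ip: "192.168.50.50", auto_config: false
end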
I have a Gentoo (Linux) host machine, on which I have VirtualBox 4.3.28 and Vagrant 1.4.3 installed (these are the latest versions available for Gentoo).
On vagrant up, Ubuntu 14.04 gets launched and I'm able to SSH into it. But as soon as it launches I get the following error. Below are my Vagrantfile and the error output.
P.S. I created the Ubuntu 14.04 base box from scratch.
-----------Vagrantfile-------------
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "Ubuntu"
  config.vm.boot_timeout = 700
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
  end
end
-----------Output in terminal------------
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Any solution to fix this problem?
"P.S I have created Ubuntu 14.04 base box from scratch"
That could be the missing piece. When you package a box, you need to run a few commands first, as explained below:
It is very common for Linux-based boxes to fail to boot initially.
This is often a very confusing experience because it is unclear why it
is happening. The most common case is because there are persistent
network device udev rules in place that need to be reset for the new
virtual machine. To avoid this issue, remove all the persistent-net
rules. On Ubuntu, these are the steps necessary to do this:
$ rm /etc/udev/rules.d/70-persistent-net.rules
$ mkdir /etc/udev/rules.d/70-persistent-net.rules
$ rm -rf /dev/.udev/
$ rm /lib/udev/rules.d/75-persistent-net-generator.rules
Make sure to run the commands above before packaging the box.
I’m using Cygwin (CYGWIN_NT-6.3-WOW64) under Windows 8. I’m also running Vagrant (1.7.2) and Ansible (1.8.4). To be complete, my Virtualbox is 4.3.22.
Cygwin and Vagrant have been installed from their respective Windows install packages. I’m running Python 2.7.8 under Cygwin and used ‘pip install ansible’ to install Ansible.
All of these applications work fine in their own right. Cygwin works wonderfully; I use it as my shell all day, every day with no problems.
Vagrant and Virtualbox also work with no problems when I run Vagrant under Cygwin. Ansible works fine under Cygwin as well when I run plays or modules against the servers on my network.
The problem I run into is when I try to use Ansible to provision a Vagrant VM running locally.
For example, I vagrant up a VM and then draft a simple playbook to provision it. Here is the Vagrantfile:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define :drupal1 do |config|
    config.vm.box = "centos65-x86_64-updated"
    config.vm.hostname = "drupal1"
    config.vm.network "forwarded_port", guest: 80, host: 10080
    config.vm.network :private_network, ip: "192.168.56.101"
    config.vm.provider "virtualbox" do |v|
      v.name = "Drupal Server 1"
      v.memory = 1024
    end
    config.vm.provision :ansible do |ansible|
      ansible.playbook = "provisioning/gather_facts.yml"
    end
  end
end
and the playbook:
---
- hosts: all
  gather_facts: yes
However, when I run ‘vagrant provision drupal1’, I get the following error:
vagrant provision drupal1
==> drupal1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false
ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPersist=60s'
ansible-playbook
  --private-key=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key
  --user=vagrant --connection=ssh --limit='drupal1'
  --inventory-file=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/provisioners/ansible/inventory
  provisioning/gather_facts.yml

PLAY [all]

GATHERING FACTS
fatal: [drupal1] => private_key_file
(C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key)
is group-readable or world-readable and thus insecure - you will probably get an SSH failure

PLAY RECAP
to retry, use: --limit @/home/mjenkins/gather_facts.retry
drupal1 : ok=0 changed=0 unreachable=1 failed=0

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Looking at the error, it's plainly obvious that it has something to do with Ansible's interpretation of my key and the file permissions on either it or the folder it's in.
Here are a few observations and steps I’ve tried:
I tried setting the permissions on the file and all the directories leading up to the file in Cygwin. That is chmod -R 700 .vagrant in the project directory. Still got the same error.
The key file is being referenced using a Windows path, not a Cygwin path (odd, though, that the file in the limit output has a Cygwin path). So I checked the permissions from the Windows side and changed it so that ‘Everyone’ has no access to .vagrant and all files/folders under it. Still got the same error.
Then I thought there might still be some problems with the file permissions/paths between my Cygwin-based Ansible, so I installed Python for Windows; used that pip to install Ansible, set my paths to that location, created an ansible-playbook.bat file, and ran Vagrant from a Windows cmd shell. Glad to say that toolchain worked... but I still got the same problem.
At this point I’m just about out of ideas so I turn to you, friends of Stackoverflow, for your input.
Any thoughts on solving this problem?
Your private key is too open and accessible by anyone, and Ansible's preflight check (as the error above shows) refuses to use such keys.
Try changing the permissions on your private and public keys with chmod from your Cygwin or Git Bash:
on C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key
run chmod 700 private_key and ensure you have -rwx------ with ls -la.
BAAAH! I just commented out the check in lib/ansible/runner/connection.py
Then I had to add this to ansible.cfg:
[ssh_connection]
control_path = /tmp
My solution to this was to override the synced folder's permission settings in the Vagrantfile with the following:
Vagrant.configure(2) do |config|
  config.vm.synced_folder "./", "/vagrant",
    owner: "vagrant",
    mount_options: ["dmode=775,fmode=600"]
  ...
I had a similar issue and figured out a solution. I added the following entries to my Vagrantfile:
config.ssh.insert_key = false
config.ssh.private_key_path = "~/.vagrant.d/insecure_private_key"
and copied the insecure_private_key from my Windows user folder to my Cygwin home at the path above. Afterwards I did a
chmod 700 ~/.vagrant.d/insecure_private_key
and as a last step I removed the contents of this file in my Cygwin home:
~/.ssh/known_hosts
Once I reran the ansible-playbook command, I confirmed adding my localhost back to known_hosts and the SSH connection worked.
Honestly, it is much simpler once you understand what is happening.
Vagrant keeps one folder for sharing files between the host and the VM: /vagrant. Everything in it gets mode 777 and nothing can be done about that; even sudo chmod will not help, you cannot change the mode.
Ansible is asking you to reduce the mode so the key is not readable by group or world.
So it is as simple as making a copy of the private key from
/vagrant/.vagrant/machines/yourmachine/virtualbox (or whichever provider)
to, say, home (~) or /root,
and then chmod it to 700 and use it in the inventory list in your hosts file.
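For illustration (the host alias and key destination are hypothetical; only the IP comes from the question's Vagrantfile), the inventory entry would then look like this, using Ansible 1.x variable names:

# hosts (Ansible inventory); the key was copied to ~ and chmod'ed to 700
drupal1 ansible_ssh_host=192.168.56.101 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/private_key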
You could use the ansible_local provisioner for Vagrant. That will install Ansible into the VM. If you work with multiple Vagrant virtual machines, it is useful to let one be the Ansible controller; that machine then needs the private SSH key. This can be done in the Vagrantfile with:
config.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
config.vm.provision "shell", inline: "chmod 600 /home/vagrant/.ssh/id_rsa"
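For the single-machine case in this question, a minimal sketch reusing the box and playbook names from above (note that ansible_local requires Vagrant 1.8+, newer than the 1.7.2 in the question):

Vagrant.configure(2) do |config|
  config.vm.box = "centos65-x86_64-updated"
  # ansible-playbook runs inside the guest, so Windows/Cygwin file
  # permissions on the host never enter the picture.
  config.vm.provision :ansible_local do |ansible|
    ansible.playbook = "provisioning/gather_facts.yml"  # synced to /vagrant in the guest
  end
end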