Ubuntu vivid box not running with vagrant - linux

I am using Ubuntu vivid with Vagrant:
https://vagrantcloud.com/ubuntu/boxes/vivid64
When I do vagrant up I get this:
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
service hostname start
Stdout from the command:
Stderr from the command:
stdin: is not a tty
Failed to start hostname.service: Unit hostname.service is masked.
Is there any way to use vivid64? I even tried
https://atlas.hashicorp.com/larryli/vivid64
but got the same result.

It seems as though Vagrant is throwing an error relating to the hostname. Try adding this to your Vagrantfile:
#host.vm.hostname = "[HOSTNAMEVM]"
host.vm.provision :shell, inline: "hostnamectl set-hostname [HOSTNAMEVM]"
Of course, set [HOSTNAMEVM] to your hostname.
What we are doing here is manually asking Vagrant to provision with a specific hostname, to attempt to fix the issue with the hostname service failing to start.
If this doesn't work, a pastebin with your Vagrantfile might help us see the actual cause here.
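For reference, a minimal sketch of how the workaround might look in a complete Vagrantfile, assuming the ubuntu/vivid64 box from the question; [HOSTNAMEVM] is still a placeholder for your own hostname:

Vagrant.configure(2) do |config|
  # Box name taken from the vagrantcloud.com link in the question.
  config.vm.box = "ubuntu/vivid64"
  # Leave config.vm.hostname unset so Vagrant does not try to start the
  # masked hostname.service, and set the hostname via systemd instead.
  config.vm.provision :shell, inline: "hostnamectl set-hostname [HOSTNAMEVM]"
end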

First, try disabling the line with "hostname" in your Vagrantfile.
Change the line
config.vm.hostname = "abcd"
to
# config.vm.hostname = "abcd"

Related

Jenkins and Vagrant: a very strange situation

I have the latest version of Jenkins (running under Tomcat), Vagrant, and an LXC container.
Tomcat runs under the jenkins user. I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "arjenvrielink/xenial64-lxc"
  config.vm.provider :lxc do |lxc|
    lxc.backingstore = 'dir'
  end
end
When I ran the LXC container from bash with vagrant up, everything was fine, and vagrant ssh worked. But if I run it via a Jenkins job I get this:
Started by user admin
[EnvInject] - Loading node environment variables.
Building in workspace /home/jenkins/workspaces/server
[server] $ /bin/bash /opt/tomcat/temp/jenkins204809790857124992.sh
Bringing machine 'default' up with 'lxc' provider...
==> default: Importing base box 'arjenvrielink/xenial64-lxc'...
==> default: Checking if box 'arjenvrielink/xenial64-lxc' is up to date...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/jenkins/workspaces/server/vagrant
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 10.0.3.29:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
Build was aborted
Aborted by admin
Finished: ABORTED
The Jenkins job contains only these commands:
#!/bin/bash
cd vagrant
vagrant up
In the process of investigating I found the following difference. When I ran vagrant ssh-config from bash, the output was:
Host default
  HostName 10.0.3.212
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/jenkins/workspaces/server/vagrant/.vagrant/machines/default/lxc/private_key
  IdentitiesOnly yes
  LogLevel FATAL
But when I ran it from the Jenkins job I got this:
Host default
  HostName 10.0.3.217
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/jenkins/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL
What did I do wrong?
EDIT:
arjenvrielink/xenial64-lxc is an official box
I'm still pretty sure your problem is with the Vagrant insecure key replacement mechanism, but my solution won't help you.
Is arjenvrielink/xenial64-lxc a custom box?
If so, make sure to leave the insecure key in it so any new user (Jenkins included) will have access to the box, because on the first up Vagrant connects to the box using the insecure key and then creates a new one.
If you want to include your own key in the box, make sure to add the following lines to your Vagrantfile:
Vagrant.configure("2") do |config|
config.ssh.private_key_path = File.expand_path("<path of the key relative to Vagrantfile>", __FILE__)
end
The caveat is that you'll have to make the key available everywhere your Vagrant environment will run.
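If the box does keep the insecure key, another hedged option is to tell Vagrant not to replace it at all, so every user of the box (Jenkins included) keeps connecting with the shared insecure key; a minimal sketch:

Vagrant.configure(2) do |config|
  config.vm.box = "arjenvrielink/xenial64-lxc"
  # Keep Vagrant's well-known insecure key instead of generating a
  # per-machine key under .vagrant/; less secure, but predictable for
  # CI users such as Jenkins.
  config.ssh.insert_key = false
end

The trade-off is the usual one with a shared key: anyone who can reach the VM's SSH port can log in.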

The following SSH command responded with a non-zero exit status

Each time I run the vagrant up command I get this error:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
chown `id -u vagrant`:`id -g vagrant` /vagrant
Stdout from the command:
Stderr from the command:
chown: changing ownership of ‘/vagrant’: Not a directory
I can't find any solution (I already tried to change the sudoers file but don't know exactly what to change).
chown: changing ownership of ‘/vagrant’: Not a directory
This sounds like /vagrant is not a directory; it is probably a file, so remove the file and try again.
Or try to re-create your VM, and also double-check your Vagrantfile to make sure such a file is not being created.
To investigate the issue further, run Vagrant in debug mode, e.g.
vagrant up --debug
Using version 0.21 of vagrant-vbguest helped me fix mine:
vagrant plugin uninstall vagrant-vbguest
vagrant plugin install vagrant-vbguest --plugin-version 0.21
I have been trying to get a Vagrant 1.9.1-VirtualBox 5.1.10-Fedora 25 x64-Atomic host image running on my Windows 10 x64 Host.
I thought the Vagrant plugin vbguest didn't work well with the Atomic host type, as was mentioned during provisioning.
Turns out the error still occurred for me, and I found this bug report: Vagrant cannot create synced folder.
dustymabe suggests a temporary workaround until the bug is fixed, using this line of code:
config.vm.synced_folder "/tmp", "/vagrant", disabled: 'true'
jorti, a user who seems to be having the same issue as I am, used these lines of code both to work around the bug and to set up their own path so they could keep using the same feature:
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder ".", "/home/vagrant/provision", type: "rsync"
The issue was reported on Nov 25 2016, and at the time of writing the most recent comments were from about three days later.
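For context, a hedged sketch of where those two lines would sit in a complete Vagrantfile; the box name below is only a placeholder, not something taken from the bug report:

Vagrant.configure(2) do |config|
  # Placeholder box name; use whichever Atomic host box you are testing.
  config.vm.box = "fedora-atomic-host"
  # Disable the default /vagrant mount that triggers the bug, then sync
  # the project into the guest over rsync instead.
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder ".", "/home/vagrant/provision", type: "rsync"
end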
This is not a permission problem, just an error message saying that the expected directory "/vagrant" does not exist. It may be a file, or simply missing.
In any case this command has to be run as root.
Just create that directory:
mkdir /vagrant
as user root.

Configuring Vagrantfile for multiple machines - Vagrant

I have set up VirtualBox with two VMs: a) Ubuntu, b) Windows 10. I created Vagrant boxes for each of these VMs from scratch. Each of the Vagrant boxes runs fine individually, but I want to launch both VMs at once, so I created a Vagrantfile (shown below) with the help of this documentation: https://docs.vagrantup.com/v2/multi-machine/
With the following Vagrantfile, the box declared first gets launched while the other doesn't. Is there any error in my Vagrantfile?
Any solutions, hints on how to fix this problem? How do I launch both the VMs?
-----------Vagrantfile---------------
Vagrant.configure(2) do |config|
  config.vm.define "linux" do |linux|
    linux.vm.box = "ubuntu"
    linux.vm.box_url = "/Users/xyz/Desktop/vagrant/linux_package.box"
  end
  config.vm.define "win" do |win|
    win.vm.box = "Windows10"
    win.vm.box_url = "/Users/xyz/Desktop/vagrant/win_package.box"
  end
  config.vm.provider "virtualbox" do |v|
    v.gui = true
  end
end
Output on terminal:
When the linux machine is launched first, I get the message below in the terminal:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
Logging in to the guest (through GUI mode) and making sure the vagrant user was set up in /etc/sudoers like so fixed it:
vagrant ALL=(ALL) NOPASSWD: ALL
Run visudo as root in order to edit this file.

Timeout while waiting for the machine to boot! Vagrant-VirtualBox

I have a Gentoo (Linux) host machine, on which I have VirtualBox 4.3.28 and Vagrant 1.4.3 installed (these are the latest versions available for Gentoo).
On vagrant up, Ubuntu 14.04 gets launched, and I'm able to ssh to it. But as soon as it gets launched I get the following error. Below are my Vagrantfile and the error output.
P.S. I created the Ubuntu 14.04 base box from scratch.
-----------Vagrantfile-------------
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "Ubuntu"
  config.vm.boot_timeout = 700
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
  end
end
-----------Output in terminal------------
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Any solution to fix this problem?
"P.S. I have created Ubuntu 14.04 base box from scratch"
That could be the missing piece: when you package a box, you need to run a few commands, as explained below.
It is very common for Linux-based boxes to fail to boot initially.
This is often a very confusing experience because it is unclear why it
is happening. The most common case is because there are persistent
network device udev rules in place that need to be reset for the new
virtual machine. To avoid this issue, remove all the persistent-net
rules. On Ubuntu, these are the steps necessary to do this:
$ rm /etc/udev/rules.d/70-persistent-net.rules
$ mkdir /etc/udev/rules.d/70-persistent-net.rules
$ rm -rf /dev/.udev/
$ rm /lib/udev/rules.d/75-persistent-net-generator.rules
Make sure to run the commands above before packaging the box.

SSH Fails Due to Key File Permissions When I Try to Provision a Vagrant VM with Ansible on Windows/Cygwin

I’m using Cygwin (CYGWIN_NT-6.3-WOW64) under Windows 8. I’m also running Vagrant (1.7.2) and Ansible (1.8.4). To be complete, my Virtualbox is 4.3.22.
Cygwin and Vagrant have been installed from their respective Windows install packages. I’m running Python 2.7.8 under Cygwin and used ‘pip install ansible’ to install Ansible.
All of these applications work fine in their own right. Cygwin works wonderfully; I use it as my shell all day, every day with no problems.
Vagrant and Virtualbox also work with no problems when I run Vagrant under Cygwin. Ansible works fine under Cygwin as well when I run plays or modules against the servers on my network.
The problem I run into is when I try to use Ansible to provision a Vagrant VM running locally.
For example, I vagrant up a VM and then draft a simple playbook to provision it. Here is the Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define :drupal1 do |config|
    config.vm.box = "centos65-x86_64-updated"
    config.vm.hostname = "drupal1"
    config.vm.network "forwarded_port", guest: 80, host: 10080
    config.vm.network :private_network, ip: "192.168.56.101"
    config.vm.provider "virtualbox" do |v|
      v.name = "Drupal Server 1"
      v.memory = 1024
    end
    config.vm.provision :ansible do |ansible|
      ansible.playbook = "provisioning/gather_facts.yml"
    end
  end
end
and playbook:
---
- hosts: all
  gather_facts: yes
However, when I run ‘vagrant provision drupal1’, I get the following error:
vagrant provision drupal1
==> drupal1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false
ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPersist=60s'
ansible-playbook
  --private-key=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key
  --user=vagrant --connection=ssh --limit='drupal1'
  --inventory-file=C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/provisioners/ansible/inventory
  provisioning/gather_facts.yml
PLAY [all]
GATHERING FACTS
fatal: [drupal1] => private_key_file (C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key) is group-readable or world-readable and thus insecure - you will probably get an SSH failure
PLAY RECAP
  to retry, use: --limit #/home/mjenkins/gather_facts.retry
drupal1 : ok=0 changed=0 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Looking at the error, it's plainly obvious that it has something to do with Ansible's interpretation of my key and the file permissions on either it or the folder it's in.
Here are a few observations and steps I’ve tried:
I tried setting the permissions on the file and all the directories leading up to the file in Cygwin. That is chmod -R 700 .vagrant in the project directory. Still got the same error.
The key file is being referenced using a Windows path, not a Cygwin path (odd, though, that the file in the limit output has a Cygwin path). So I checked the permissions from the Windows side and changed it so that ‘Everyone’ has no access to .vagrant and all files/folders under it. Still got the same error.
Then I thought there might still be some problems with the file permissions/paths with my Cygwin-based Ansible, so I installed Python for Windows, used that pip to install Ansible, set my paths to that location, created an ansible-playbook.bat file, and ran Vagrant from a Windows cmd shell. Glad to say that tool chain worked… but I still got the same problem.
At this point I’m just about out of ideas so I turn to you, friends of Stackoverflow, for your input.
Any thoughts on solving this problem?
Your private key is wide open and accessible by anyone, and a check in the SSH client prevents using such keys.
Try changing the permissions on your private and public keys with chmod from your Cygwin or Git Bash shell.
On C:/Users/mjenkins/workspace/Vagrant_VMs/Drupal1/.vagrant/machines/drupal1/virtualbox/private_key,
run chmod 700 private_key and ensure you have -rwx------ with ls -la.
BAAAH! I just commented out the check in lib/ansible/runner/connection.py.
Then I had to add this to ansible.cfg:
[ssh_connection]
control_path = /tmp
My solution to this was to override the synced folder's permission settings in the Vagrantfile with the following ones:
Vagrant.configure(2) do |config|
  config.vm.synced_folder "./", "/vagrant",
    owner: "vagrant",
    mount_options: ["dmode=775,fmode=600"]
...
I had a similar issue and figured out a solution. I added the following entries to my Vagrantfile:
config.ssh.insert_key = false
config.ssh.private_key_path = "~/.vagrant.d/insecure_private_key"
and copied the insecure_private_key from my Windows user folder to my Cygwin home at the path above. Afterwards I did a
chmod 700 ~/.vagrant.d/insecure_private_key
and as a last step I removed the contents of this file in my Cygwin home:
~/.ssh/known_hosts
Once I reran the ansible-playbook command, I confirmed adding my localhost back to known_hosts and the SSH connection worked.
Truly speaking, it is much simpler if you understand what is happening.
Vagrant keeps one folder for sharing files with the host and other VMs, which is /vagrant. Anything in it will have mode 777 and nothing can be done about that; sudo chmod will not help either, you simply cannot change the mode.
Ansible is asking you to reduce the mode so the key is not readable by group or all.
So it is as simple as making a copy of the private key from
/vagrant/.vagrant/machines/yourmachine/virtualbox (or whichever provider directory applies)
to somewhere like home, i.e. ~ or /root,
and then chmod it to 700 and use that copy in the inventory list in the hosts file.
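If you do run Ansible from inside the guest, a hedged sketch of automating that copy from the Vagrantfile; the machine name "yourmachine" and the destination path are illustrative placeholders:

Vagrant.configure(2) do |config|
  # Copy the key out of the world-readable /vagrant mount and tighten its
  # mode so Ansible's permission check passes.
  config.vm.provision "shell", privileged: false, inline: <<-SHELL
    cp /vagrant/.vagrant/machines/yourmachine/virtualbox/private_key ~/private_key
    chmod 700 ~/private_key
  SHELL
end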
You could use the ansible_local provisioner for Vagrant. That will install Ansible into the VM. If you work with multiple Vagrant virtual machines, then it is useful to let one be the Ansible controller, which would then need the private SSH key. That can be done in the Vagrantfile with:
config.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
config.vm.provision "shell", inline: "chmod 600 /home/vagrant/.ssh/id_rsa"
