I have a Gentoo (Linux) host machine on which I have VirtualBox 4.3.28 and Vagrant 1.4.3 installed (these are the latest versions available for Gentoo).
On vagrant up, Ubuntu 14.04 gets launched and I'm also able to SSH into it. But as soon as it launches I get the following error. Below are my Vagrantfile and the terminal output.
P.S. I have created the Ubuntu 14.04 base box from scratch.
-----------Vagrantfile-------------
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "Ubuntu"
  config.vm.boot_timeout = 700
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
  end
end
-----------Output in terminal------------
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Any solution to fix this problem?
P.S. I have created the Ubuntu 14.04 base box from scratch.
That could be the missing piece. When you package a box, you need to run a few commands, as explained below.
It is very common for Linux-based boxes to fail to boot initially.
This is often a very confusing experience because it is unclear why it
is happening. The most common case is because there are persistent
network device udev rules in place that need to be reset for the new
virtual machine. To avoid this issue, remove all the persistent-net
rules. On Ubuntu, these are the steps necessary to do this:
$ rm /etc/udev/rules.d/70-persistent-net.rules
$ mkdir /etc/udev/rules.d/70-persistent-net.rules
$ rm -rf /dev/.udev/
$ rm /lib/udev/rules.d/75-persistent-net-generator.rules
Make sure to run the commands above before packaging the box.
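After running those cleanup commands inside the guest and shutting it down, the box is repackaged and re-added on the host, roughly like this (the box name Ubuntu matches config.vm.box above; the VirtualBox VM name ubuntu-base is just a placeholder for whatever your VM is called):
# On the host: package the VirtualBox VM into a box file, register it, and boot it
vagrant package --base ubuntu-base --output ubuntu.box
vagrant box add Ubuntu ubuntu.box
vagrant up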
Related
Hi folks, I'm using a Vagrant box provisioned with Ansible on the Oracle VirtualBox provider, and it was working fine for me.
But one day I installed Android Studio with its emulator, and Minikube with KVM.
Afterwards Vagrant with VirtualBox just stopped working. Now whenever I run vagrant up I get the error below.
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/bionic64' version '20200416.0.0' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
I need to run all three on Ubuntu. How can I fix this?
1) Stop the VM
VBoxManage controlvm vm_123 poweroff
2) Then check the settings.
VirtualBox will likely tell you there are some incompatible settings; correct those.
It could be nested virtualisation, 32 vs 64 bit, the amount of video RAM, the virtual VGA display type, etc.
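A rough command sequence for those two steps looks like this (vm_123 is a placeholder; use the list command to find your actual VM name or UUID):
# Find the exact VM name/UUID of the running machine
VBoxManage list runningvms
# Force it off
VBoxManage controlvm vm_123 poweroff
# Review its settings for anything incompatible (CPUs, RAM, VRAM, graphics controller, ...)
VBoxManage showvminfo vm_123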
I had a gurumeditation issue with a fresh debian/buster64 box, and a reboot of my laptop fixed it (VirtualBox was only telling me in the log that the state was invalid).
Some time lost for nothing, but maybe it helps someone.
The server is a fresh CentOS 7.2 install with a pretty much dead-in-the-water Vagrant/VirtualBox setup.
I have the Vagrant 1.8.6 RPM installed on a server with VirtualBox 5.1.8r111374. The box boxcutter/centos72 comes up with this error:
SSH auth method: private key
gocd: Warning: Remote connection disconnect. Retrying...
And yet... vagrant ssh works. The config file is about as basic as it gets.
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.define "boxname" do |boxname|
    boxname.vm.box = "boxcutter/centos72"
    boxname.vm.hostname = "test"
    boxname.vm.network "private_network", ip: "192.168.111.10"
    boxname.vm.provision :shell,
      path: "prov.sh"
  end
end
This can't run the prov script as it never gets past SSH setup. And vagrant provision won't work either because of the error above. I've obviously specified a private network; however, once on the box the ifcfg-enp file looks like this:
TYPE=Ethernet
BOOTPROTO=dhcp
And the IP is a 10.x address.
VirtualBox 5.1.x seems to be having major issues. Revert to 5.0.26 (5.0.28 seems to have major networking issues, too).
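Independently of the VirtualBox version, it can help to confirm inside the guest where the private network actually ended up; adapter 1 is always Vagrant's NAT interface (typically a 10.0.2.x DHCP address), so the static IP is expected on a second interface. The interface names below are common CentOS 7 defaults and may differ on your box:
# Inside the guest: list all interfaces and their addresses
ip addr
# The private_network normally lands on the second adapter, e.g. enp0s8
cat /etc/sysconfig/network-scripts/ifcfg-enp0s8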
Problem
I was working with the bento/centos7.2 box. I did a vagrant up and, while it was booting up, I noticed the box had an update and I instinctively cancelled the operation (which I suggest never doing!). So I went ahead and did vagrant destroy and rm -rf .vagrant just to be sure (again, I suggest never doing this!). I removed my box with vagrant box remove bento/centos7.2, did vagrant up, and ended up with this:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Environment
Ubuntu 16.04
Vagrant 1.8.1
CentOS 7.2 box
Things I tried
Following are the threads I have tried:
vagrant + virtualbox Timed out while waiting for the machine to boot
Timed out while waiting for the machine to boot when vagrant up
Vagrant "Timed out while waiting for the machine to boot."
When I enabled the GUI, I realized the box is booting up properly; it's just stuck at the login screen (a bug in the box's SSH setup?).
Any help is much appreciated.
There are multiple possibilities that cause this issue:
Try running:
vagrant reload
This restarts the box and re-applies its configuration (synced folders, networks, etc.).
Try opening VirtualBox (the GUI) and then open the VM's console. The box might, for example, be waiting for fsck (a filesystem check) if it was shut down uncleanly. You can also log in to the box through the VirtualBox GUI using the default username/password (typically vagrant/vagrant) and check whether the SSH server is running on the box or not.
Run
vagrant ssh-config
and see which port and which SSH key it is trying to use. Use them manually, e.g.:
ssh -i <identity_key_location> -p 2222 vagrant@localhost
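As a concrete example, vagrant ssh-config usually reports port 2222 on 127.0.0.1 with a per-machine key under .vagrant/ (your values may differ):
vagrant ssh-config
# then connect manually with the same values, e.g.
ssh -i .vagrant/machines/default/virtualbox/private_key -p 2222 vagrant@127.0.0.1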
I have a RHEL image that is already preconfigured; I don't know how it was originally set up.
By default, it is configured with a local network interface on the ip 192.168.50.50. What I am trying to do is configure its ip from the Vagrant script.
This doesn't seem to do anything:
config.vm.network "private_network", ip: "192.168.50.10"
This does change the ip:
sudo nmcli con mod bond0 ipv4.addresses 192.168.50.10/24
service network restart
But after that apparently Vagrant doesn't automatically detect the ip to connect to, so I need to add:
config.ssh.host = LOCAL_IP
But here's the problem: the first time, the IP is the default one (.50.50), so I can't set config.ssh.host to my desired IP yet. If I omit the config.ssh.host line, it runs the first time but not afterwards, and vagrant ssh fails as well.
Is there a way to set the box ip without editing the Vagrant script between the first and second vagrant up?
Edit: Result of vagrant up --debug command: http://pastebin.com/BTccc4NT
Edit: The problem was that the Vagrantfile from the default box (on Windows, it's at C:\Users\user\.vagrant.d\boxes\nameofbox\virtualbox\Vagrantfile) itself had this line:
config.vm.network "private_network", ip: "192.168.50.50", auto_config: false
Hmm, it's weird, it creates 2 interfaces:
DEBUG network: Normalized configuration: {:adapter_ip=>"192.168.50.1", :auto_config=>false, :ip=>"192.168.50.50", :mac=>nil, :name=>nil, :netmask=>"255.255.255.0", :nic_type=>nil, :type=>:static, :adapter=>2}
INFO network: Searching for matching hostonly network: 192.168.50.50
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "list", "hostonlyifs"]
......
DEBUG network: Normalized configuration: {:adapter_ip=>"192.168.50.1", :auto_config=>true, :ip=>"192.168.50.10", :mac=>nil, :name=>nil, :netmask=>"255.255.255.0", :nic_type=>nil, :type=>:static, :adapter=>3}
INFO network: Searching for matching hostonly network: 192.168.50.10
INFO subprocess: Starting process: ["C:/Program Files/Oracle/VirtualBox/VBoxManage.exe", "list", "hostonlyifs"]
So on adapter 2 you have 192.168.50.50 and on adapter 3 you have 192.168.50.10.
The likely reason for this is that the box you're using ships its own Vagrantfile which already defines a network on that static address.
I am not fully familiar with Windows, but on Mac the box definition is under ~/.vagrant.d/boxes/<yourbox>/<theprovider>/Vagrantfile (note this is not the Vagrantfile from your project; it is a Vagrantfile that is applied to any VM built from this box). Check the file and remove the network configuration if you see it.
As documented, Vagrantfiles are merged from the different locations:
At each level, settings set will be merged with previous values. What this exactly means depends on the setting. For most settings, this means that the newer setting overrides the older one. However, for things such as defining networks, the networks are actually appended to each other. By default, you should assume that settings will override each other. If the behavior is different, it will be noted in the relevant documentation section.
So by default Vagrant will create an additional network interface and will not replace the one coming from the box's Vagrantfile.
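A quick way to spot such a definition is to grep the Vagrantfiles shipped inside your installed boxes (the path layout below is the usual default; recent Vagrant versions put a version directory between the box name and the provider):
grep -n "config.vm.network" ~/.vagrant.d/boxes/*/*/*/Vagrantfile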
Vagrant creates a development environment using VirtualBox and then provisions it using Ansible. As part of the provisioning, Ansible runs a reboot and then waits for SSH to come back up. This works as expected, but because the Vagrant machine is not being started by a "vagrant up" command, the synced folders are not mounted properly when the box comes back up from the reboot.
Running "vagrant reload" fixes the machine and mounts the shares again.
Is there a way of either telling vagrant to reload the server or to do all the bits 'n bobs that vagrant would have done after a manual restart?
Simply running "sudo reboot" when SSH-ed into the vagrant box also produces the same problem.
There is no way for Vagrant to know that the machine is being rebooted during the provisioning.
If possible, the best would be to avoid rebooting here altogether. For example kernel updates should be already done when building the base box.
Another easy (but not very convenient) way is to handle it with log output or documentation, or with a wrapper script which invokes vagrant up && vagrant reload.
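Such a wrapper can be as small as this (a hypothetical up.sh on the host):
#!/bin/sh
# Bring the machine up (runs the provisioner, which reboots the guest),
# then reload once so Vagrant re-mounts the synced folders.
vagrant up && vagrant reload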
And finally, you could write a plugin which injects all the needed mounting etc. actions into the Vagrant middleware stack after the provisioning, but you would still need to think about how to let the plugin know that the machine has been rebooted. Another challenge is that this easily gets provider-specific.
You should be able to add the filesystems to /etc/fstab to mount on boot.
Here's my example:
vagrant /vagrant vboxsf defaults 0 0
home_vagrant_src /home/vagrant/src vboxsf defaults 0 0
home_vagrant_presenter-src /home/vagrant/presenter-src vboxsf defaults 0 0
Your vagrant directory should have a .vagrant hidden directory in it, and in there you should find a path to the "synced_folders" file (in my case: /vagrant/.vagrant/machines/default/virtualbox/synced_folders).
That file should help you figure out what the labels are and their mount points:
{"virtualbox":{"/home/vagrant/src":{"guestpath":"/home/vagrant/src","hostpath":"/home/rkomorn/src","disabled":false,"__vagrantfile":true},"/home/vagrant/presenter-src":{"guestpath":"/home/vagrant/presenter-src","hostpath":"/home/presenter/src","disabled":false,"__vagrantfile":true},"/vagrant":{"guestpath":"/vagrant","hostpath":"/home/rkomorn/vagrant","disabled":false,"__vagrantfile":true}}}
It's not the easiest to read but, using python terminology, the labels appear to be the inner dictionary's keys, with / translated to _ (eg: the /home/vagrant/presenter-src key became the home_vagrant_presenter-src label).
I'm actually not sure why vagrant doesn't just use /etc/fstab for shared folders but I'm guessing there's a good reason.
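Before relying on /etc/fstab, you can check that a label mounts cleanly by hand; the share name and mount point below are taken from the example entries above:
# Inside the guest: mount one share by its VirtualBox label
sudo mount -t vboxsf -o defaults home_vagrant_src /home/vagrant/src
# Confirm the vboxsf mounts
mount | grep vboxsf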
Split your provisioners into two separate steps and use the vagrant-reload plugin as additional provisioner between.
Example Vagrantfile:
config.vm.provision "Step 1 - requires reboot", type: "shell", path: "scripts/part1.sh"
config.vm.provision :reload
config.vm.provision "Step 2 - happens after reboot", type: "shell", path: "scripts/part2.sh"
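The plugin itself is installed once on the host with the standard plugin command (vagrant-reload is a third-party plugin):
vagrant plugin install vagrant-reload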
In case anyone else runs into this issue and finds this question like I did here's how I worked around the issue:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "..."

  # create a shared folder for the top-level project directory at /vagrant
  # normally already configured but for some reason it isn't on these boxes
  # https://www.vagrantup.com/docs/synced-folders/virtualbox.html#automount
  # http://www.virtualbox.org/manual/ch04.html#sf_mount_auto
  config.vm.synced_folder ".", "/mnt/vagrant", id: "vagrant", automount: true
  config.vm.provision "shell", inline: "usermod -a -G vboxsf vagrant"
  config.vm.provision "shell", inline: "ln -sfT /media/sf_vagrant /vagrant"

  # More settings omitted...
end
There are a few parts to this solution:
The first line assigns a specific id of vagrant to the shared folder. This is important because the automatic mount functionality in VirtualBox uses /media/sf_<id> by default. It also mounts the folder at /mnt/vagrant to keep it out of the way. Ideally you'd pick a more obscure location that's present on all of your VMs, or just document not to use it there.
The third line creates a symbolic link from the automatic mount location at /media/sf_vagrant to the usual place users expect the shared folder, /vagrant.
The second line adds the vagrant user in the virtual machine to the vboxsf group. This is necessary to access files inside /media/sf_vagrant because the guest utilities mount the folder with root:vboxsf ownership. They also set appropriate file and directory modes, so it works fine in practice, but you do need to be a member of the vboxsf group.
This solution has the following benefits:
The share at /media/sf_vagrant is automatically mounted by the VirtualBox guest utilities after a reboot, so /vagrant should always be available.
It does not require installing plugins or using any outside tools.
It has the following drawbacks:
Potential for unexpected behavior if users find and use the /mnt/vagrant mount. That mount will only be present if the virtual machine was most recently booted or rebooted through the Vagrant CLI; otherwise it will not be there.
It requires a relatively recent version of VirtualBox and Vagrant.
EDIT: Added -T option to ln to avoid the corner case where it creates /vagrant/sf_vagrant as a symlink.
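To check that the workaround took effect, a few commands inside the guest are enough (ids and paths as in the Vagrantfile above):
groups vagrant          # should include vboxsf
mount | grep sf_vagrant # the automounted share under /media
ls -l /vagrant          # should be a symlink to /media/sf_vagrant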
I had the same issue. This is what I had in my /etc/fstab:
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
vagrant_data /vagrant_data vboxsf uid=1000,gid=1000,_netdev 0 0
vagrant /vagrant vboxsf uid=1000,gid=1000,_netdev 0 0
#VAGRANT-END
So if you see the fstab entries are still there, all you have to do is run sudo mount -a to trigger the mounts again. Or you can copy these lines.
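In other words, inside the guest:
# Re-mount everything still listed in /etc/fstab and confirm the shares are back
sudo mount -a
df -h -t vboxsf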