I have the latest version of Jenkins (running under Tomcat), plus Vagrant and an LXC container. Tomcat runs under the jenkins user. I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "arjenvrielink/xenial64-lxc"
  config.vm.provider :lxc do |lxc|
    lxc.backingstore = 'dir'
  end
end
So, when I ran the LXC container from bash with vagrant up, everything was fine, and vagrant ssh worked. But if I run it via a Jenkins job I get this:
Started by user admin
[EnvInject] - Loading node environment variables.
Building in workspace /home/jenkins/workspaces/server
[server] $ /bin/bash /opt/tomcat/temp/jenkins204809790857124992.sh
Bringing machine 'default' up with 'lxc' provider...
==> default: Importing base box 'arjenvrielink/xenial64-lxc'...
==> default: Checking if box 'arjenvrielink/xenial64-lxc' is up to date...
==> default: Setting up mount entries for shared folders...
default: /vagrant => /home/jenkins/workspaces/server/vagrant
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 10.0.3.29:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
Build was aborted
Aborted by admin
Finished: ABORTED
The Jenkins job contains only these commands:
#!/bin/bash
cd vagrant
vagrant up
In the process of investigating, I found the following difference. When I ran vagrant ssh-config from bash, the output was:
Host default
HostName 10.0.3.212
User vagrant
Port 22
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/jenkins/workspaces/server/vagrant/.vagrant/machines/default/lxc/private_key
IdentitiesOnly yes
LogLevel FATAL
But when I ran it from the Jenkins job, I got this:
Host default
HostName 10.0.3.217
User vagrant
Port 22
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/jenkins/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
What did I do wrong?
EDIT:
arjenvrielink/xenial64-lxc is an official box
So I'm still pretty sure your problem is with the Vagrant insecure-key replacement mechanism, but my solution won't help you.
Is arjenvrielink/xenial64-lxc a custom box?
If so, make sure to leave the insecure key in it so that any new user (Jenkins included) will have access to the box, because on the first up Vagrant connects to the box using the insecure key and then creates a new one.
If you want to include your own key in the box make sure to add the following lines to your Vagrantfile:
Vagrant.configure("2") do |config|
  config.ssh.private_key_path = File.expand_path("<path of the key relative to Vagrantfile>", __FILE__)
end
The caveat is that you'll have to make the key available everywhere your Vagrant environment will run.
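A note on the File.expand_path idiom above: when __FILE__ is passed as the second argument, the relative path is resolved against the Vagrantfile's own path as if it were a directory, so the path usually needs a leading "../". A minimal sketch, with hypothetical paths and a hypothetical key name (deploy_key):

```ruby
# Simulate __FILE__ for a Vagrantfile in a hypothetical project directory.
vagrantfile = "/home/jenkins/workspaces/server/vagrant/Vagrantfile"

# The base's last component ("Vagrantfile") is treated like a directory,
# so the leading "../" strips it before "keys/deploy_key" is appended.
key = File.expand_path("../keys/deploy_key", vagrantfile)
puts key  # => /home/jenkins/workspaces/server/vagrant/keys/deploy_key
```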
Related
So I have a vagrant machine
m@m-ThinkPad-L15-Gen-2:~/Desktop/estudos/shellclass/localuser$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'jasonc/centos7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'jasonc/centos7' version '1.4.4' is up to date...
==> default: Setting the name of the VM: localuser_default_1635771672344_69157
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 5.2.6
default: VirtualBox Version: 6.1
==> default: Mounting shared folders...
default: /vagrant => /home/m/Desktop/estudos/shellclass/localuser
When I try to create a test file in the folder:
m@m-ThinkPad-L15-Gen-2:~/Desktop/estudos/shellclass/localuser$ vagrant ssh
[vagrant@localhost ~]$ ls
[vagrant@localhost ~]$ cd vagrant
-bash: cd: vagrant: No such file or directory
[vagrant@localhost ~]$ cd ..
[vagrant@localhost home]$ ls
vagrant
[vagrant@localhost home]$ cd vagrant
[vagrant@localhost ~]$ touch teste
[vagrant@localhost ~]$ ls -l
total 0
-rw-rw-r-- 1 vagrant vagrant 0 Nov 1 09:02 teste
[vagrant@localhost ~]$ ls -l
total 0
-rw-rw-r-- 1 vagrant vagrant 0 Nov 1 09:02 teste
[vagrant@localhost ~]$ vagrant rsync-auto
-bash: vagrant: command not found
[vagrant@localhost ~]$ rsync-auto
-bash: rsync-auto: command not found
[vagrant@localhost ~]$
And when I try to access it from a normal Linux terminal, I can't find the files:
m@m-ThinkPad-L15-Gen-2:~/Desktop/estudos/shellclass/localuser$ ls -l
ls: /lib/x86_64-linux-gnu/libselinux.so.1: no version information available (required by ls)
total 4
-rw-rw-r-- 1 m m 3020 nov 1 10:00 Vagrantfile
What is going on?
I had created this machine before, and it was working properly until last Friday. But now, for some reason, I just can't find the files anymore. When I open the folder in a graphical interface, I also see none of the files I created.
Without seeing your Vagrantfile it is difficult to answer your question. However, after vagrant ssh you should probably do cd /vagrant instead of cd vagrant (with a leading forward slash).
Furthermore, your guest additions are for a different version of VirtualBox. You could install the vbguest plugin to update them: vagrant plugin install vagrant-vbguest
Finally, VirtualBox has been updated from version 6.1.26 to 6.1.28, which has also broken a number of configurations. See also Vagrant up failing for VirtualBox provider on Ubuntu.
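If you do install the vbguest plugin, you can also have it keep the guest additions in sync on every boot. A minimal Vagrantfile sketch, assuming the jasonc/centos7 box from the log above; the guard makes it a no-op when the plugin is absent:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "jasonc/centos7"
  # With vagrant-vbguest installed, rebuild the guest additions at boot so
  # they match the host's VirtualBox version.
  config.vbguest.auto_update = true if Vagrant.has_plugin?("vagrant-vbguest")
end
```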
My SSH config was okay and working fine; however, recently my GitHub SSH connection stopped working, and I also wasn't able to connect to my private server over SSH. When I try to ssh, I get the following error:
/home/hacku/.ssh/config: line 9: Bad configuration option: Identityfile
/home/hacku/.ssh/config: line 16: Bad configuration option: Identityfile
/home/hacku/.ssh/config: terminating, 2 bad configuration options
And here is my config file:
Host github.com
    User git
    Port 22
    Hostname github.com
    IdentityFile ~/.ssh/github_ssh
    TCPKeepAlive yes

Host linode
    HostName serv_ip_address
    User hackU
    Port 22
    IdentityFile ~/.ssh/private_key
I copied the exact same config file and my private key to another machine and it worked great (Termux, ssh version OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021).
I checked my ssh package version; it was OpenSSH_8.7p1, so I thought maybe the update broke it. I downgraded it to OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021, but that didn't work either. Additionally, I tried to restart sshd with
sudo systemctl restart sshd
But none of the above worked.
I'm using manjaro gnome edition as my daily driver.
Thanks beforehand.
Everything theoretically seemed okay, yet it kept throwing this error. After doing some reading, I found this information:
if you use an ssh-agent, ssh will automatically try to use the keys in the agent, even if you have not specified them in ssh_config's IdentityFile (or -i) option. This is a common reason you might get the Too many authentication failures for user error. Using the IdentitiesOnly yes option will disable this behavior.
So I completely deleted the IdentityFile option. My final config file is below, and both connections work just fine.
Host github.com
    User git
    Port 22
    Hostname github.com
    TCPKeepAlive yes

Host linode
    HostName server_ip_address
    User hackU
    Port 22
However, the reason for the problem is still unknown to me. I would be glad to hear it if someone figures it out.
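Based on the quoted ssh-agent behavior, an alternative to deleting IdentityFile entirely is to keep it and add IdentitiesOnly yes, which stops ssh from offering every key the agent holds. A sketch of the first Host block from the config above, assuming the client accepts a freshly retyped IdentityFile line (the original parse error hints at a stray character on that line):

```
Host github.com
    User git
    Port 22
    Hostname github.com
    IdentityFile ~/.ssh/github_ssh
    IdentitiesOnly yes
    TCPKeepAlive yes
```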
The server is a fresh CentOS 7.2 install, with a pretty much dead-in-the-water Vagrant/VirtualBox setup. I have the Vagrant 1.8.6 RPM installed on a server with VirtualBox 5.1.8r111374, and the box boxcutter/centos72 comes up with this error:
SSH auth method: private key
gocd: Warning: Remote connection disconnect. Retrying...
And yet... vagrant ssh works. The config file is basic af.
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.define "boxname" do |boxname|
    boxname.vm.box = "boxcutter/centos72"
    boxname.vm.hostname = "test"
    boxname.vm.network "private_network", ip: "192.168.111.10"
    boxname.vm.provision :shell,
      path: "prov.sh"
  end
end
This can't run the provisioning script, since it never gets past SSH setup, and vagrant provision won't work either because of the error above. I've obviously specified a private network; however, once on the box, the ifcfg-enp file looks like this:
TYPE=Ethernet
BOOTPROTO=dhcp
And the IP is a 10.x address.
VirtualBox 5.1.x seems to be having major issues. Revert to 5.0.26 (5.0.28 seems to have major networking issues, too).
I have a Gentoo (Linux) host machine, on which I have VirtualBox 4.3.28 and Vagrant 1.4.3 installed (these are the latest versions available for Gentoo).
On vagrant up, Ubuntu 14.04 gets launched, and I'm also able to ssh into it. But as soon as it launches, I get the following error. Below are my Vagrantfile and the error output.
P.S. I have created the Ubuntu 14.04 base box from scratch.
-----------Vagrantfile-------------
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "Ubuntu"
  # boot_timeout expects an integer number of seconds, not a string
  config.vm.boot_timeout = 700
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
  end
end
-----------Output in terminal------------
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Any solution to fix this problem?
P.S. I have created the Ubuntu 14.04 base box from scratch
That could be the missing piece. When you package a box, you need to run a few commands, as explained below.
It is very common for Linux-based boxes to fail to boot initially.
This is often a very confusing experience because it is unclear why it
is happening. The most common case is because there are persistent
network device udev rules in place that need to be reset for the new
virtual machine. To avoid this issue, remove all the persistent-net
rules. On Ubuntu, these are the steps necessary to do this:
$ rm /etc/udev/rules.d/70-persistent-net.rules
$ mkdir /etc/udev/rules.d/70-persistent-net.rules
$ rm -rf /dev/.udev/
$ rm /lib/udev/rules.d/75-persistent-net-generator.rules
Make sure to run the commands above before packaging the box.
I am using Ubuntu vivid with Vagrant:
https://vagrantcloud.com/ubuntu/boxes/vivid64
When I do vagrant up, I get this:
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
service hostname start
Stdout from the command:
Stderr from the command:
stdin: is not a tty
Failed to start hostname.service: Unit hostname.service is masked.
Is there any way to use vivid64? I even tried
https://atlas.hashicorp.com/larryli/vivid64
but got the same result.
It seems as though Vagrant is throwing an error relating to the hostname... try adding this to your Vagrantfile:
#host.vm.hostname = "[HOSTNAMEVM]"
host.vm.provision :shell, inline: "hostnamectl set-hostname [HOSTNAMEVM]"
Of course, set [HOSTNAMEVM] to your hostname.
What we are doing here is manually asking Vagrant to provision with a specific hostname, to attempt to fix the issue with the hostname service failing to start.
If this doesn't work, a pastebin with your Vagrantfile might help us see what might be the actual cause here.
First, try disabling the "hostname" line in your Vagrantfile.
Change the line
config.vm.hostname = "abcd"
to
# config.vm.hostname = "abcd"
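Putting the two suggestions together, a minimal Vagrantfile sketch for the vivid64 box might look like this ("myhost" is a placeholder hostname):

```ruby
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/vivid64"
  # Leave config.vm.hostname unset: hostname.service is masked on vivid,
  # so set the hostname through systemd's hostnamectl instead.
  config.vm.provision :shell, inline: "hostnamectl set-hostname myhost"
end
```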