Can't understand vagrant ssh implementation - linux

I recently started using Vagrant (and recently moved from Windows to Ubuntu, too). My goal is to understand the fundamentals of vagrant ssh.
So, I'm trying to understand what vagrant ssh actually does.
I've read What does vagrant ssh actually do?, but I haven't understood anything.
I'll try to explain with an example:
The first time, I connect to the vagrant machine with ssh vagrant@192.168.0.x and type the password.
Next, I configure a keypair and connect to the guest with ssh vagrant@192.168.0.x without typing a password.
Next, I try to understand how vagrant implements SSH into its own guest machine:
In /etc/ssh/sshd_config, I set PasswordAuthentication no, but vagrant ssh still works.
I delete insecure_private_key in ~/.vagrant.d on the host machine, but vagrant restores it and vagrant ssh still works.
I remove openssh-server in the vagrant machine, and now vagrant ssh really doesn't work :)
Could anybody please explain in plain English how Vagrant implements vagrant ssh?
Update: Vagrant Docs: SSH actually explains what I need.

Maybe I didn't get the point of your question, but I'll try to explain the main differences between vagrant ssh and ssh.
vagrant ssh is essentially a normal ssh, but there are several differences between them:
the port the SSH client connects to;
the private key the SSH client uses for authentication;
host-key checking is switched off for Vagrant, so you will not get the initial "The authenticity of host ... can't be established" message;
other minor differences.
If you know the port Vagrant forwards SSH to, and where the private key Vagrant uses is located,
you can use plain ssh instead of vagrant ssh.
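For example, here is a rough manual equivalent of vagrant ssh for a typical VirtualBox machine (a sketch only: port 2222 and the insecure_private_key path are common defaults, not guarantees; run vagrant ssh-config to see the actual values for your machine):
vagrant ssh-config
ssh vagrant@127.0.0.1 -p 2222 \
    -i ~/.vagrant.d/insecure_private_key \
    -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no
The two -o options reproduce the switched-off host-key checking mentioned above.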

Related

Difference between connecting through Jenkins SSH plugin and normal ssh

I have a remote server.
If I use ssh to connect to the server as the jenkins user, it works perfectly:
ssh jenkins@remoteserver.com
The jenkins user is allowed to change to user jboss WITHOUT being asked for a password:
sudo su jboss
This works perfectly: no need to enter a password. Everything as expected.
If I run a Jenkins build that connects to the remote server through the SSH plugin, the connection works fine. I can also run a test script, and that works too!
But if I run sudo su jboss through Jenkins on my remote server, it doesn't work.
Jenkins doesn't throw any error; there is just a spinning circle
that never stops unless I cancel the job.
Anyone got an idea what the difference is between running ssh in Jenkins and connecting through the plugin?
Is the connection lost when changing the username? (It looks like it.)
The SSH plugin and the ssh command provide two completely different implementations of the SSH protocol:
Your ssh command will probably run the OpenSSH client
The SSH plugin uses the SSH protocol implementation provided by JSch
I'm not familiar with JSch, but I'd suspect there's either a problem in the way the plugin configures JSch's terminal handling, or a problem related to terminal handling in JSch itself. Either may break the behaviour of sudo:
sudo is somewhat sensitive to terminal/tty settings; see e.g. this discussion, which also contains a few hints which may help to work around the issue.
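As a quick test sketch (assuming sudo's tty requirement is the culprit; the user, host, and command come from the question), compare behaviour with a forced pseudo-terminal:
# Force pseudo-terminal allocation so sudo sees a tty:
ssh -tt jenkins@remoteserver.com 'sudo su - jboss -c "whoami"'
If that unblocks it and you control the remote server, the tty requirement can also be relaxed for the jenkins user in sudoers (edit with visudo):
Defaults:jenkins !requiretty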

Vagrant "Timed out while waiting for the machine to boot" after deleting /project/.vagrant

Problem
I was working with the bento/centos7.2 box. I did a vagrant up and, while it was booting up, I noticed the box had an update and I instinctively cancelled the operation (which I suggest not to do, ever!). So I went ahead and did vagrant destroy and rm -rf .vagrant just to be sure (again, I suggest not to do this, ever!). I removed my box with vagrant box remove bento/centos7.2, did vagrant up, and ended up with this:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Environment
Ubuntu 16.04
Vagrant 1.8.1
Centos 7.2 Box
Things I tried
Following are the threads I have tried:
vagrant + virtualbox Timed out while waiting for the machine to boot
Timed out while waiting for the machine to boot when vagrant up
Vagrant "Timed out while waiting for the machine to boot."
When I enabled the GUI, I realized the box is booting up properly; it's just stuck at the login screen (a bug in the box's ssh?).
Any help is much appreciated.
There are multiple possibilities that can cause this issue:
Try running:
vagrant reload
This re-installs the guest-additions on the box.
Try opening VirtualBox (the GUI) and then open the VM's console. The box might, for example, be:
i) waiting for fsck (a filesystem check) if it was shut down uncleanly;
ii) sitting at a login prompt: log in to the box via the VirtualBox GUI using the default username/password (typically vagrant/vagrant) and check whether the ssh server is running on the box or not.
Run
vagrant ssh-config
and see which port and which ssh key it is trying to use. Then use them manually, e.g.:
ssh -i <identity_key_location> vagrant@localhost -p 2222
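For reference, the output of vagrant ssh-config looks roughly like this (values vary per machine; the IdentityFile path below is just a typical VirtualBox layout):
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key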

The usage of scp and ssh

I'm a newbie to Linux and am trying to set up passphrase-less ssh. I'm following the instructions in this link: http://wiki.hands.com/howto/passphraseless-ssh/.
The above link says: "One often sees people using passphrase-less ssh keys for things like cron jobs that do things like this:"
scp /etc/bind/named.conf* otherdns:/etc/bind/
ssh otherdns /usr/sbin/rndc reload
which is dangerous because the key that's being used here is being offered root write access, when it need not be.
I'm kind of confused by the above commands.
I understand the usage of scp. But for ssh, what does "ssh otherdns /usr/sbin/rndc reload" mean?
"the key that's being used here is being offered root write access."
Can anyone also help explain this sentence in more detail? Based on my understanding, the key is the public key generated by one server and copied
to otherdns. What does "being offered root write access" mean?
It means to run a command on a remote server.
the syntax is
ssh <remote> <cmd>
so in your case
ssh otherdns /usr/sbin/rndc reload
is basically 4 parts:
ssh: run the ssh executable
otherdns: is the remote server; no user information is given, so the default user is used (the same one currently logged in, or the one configured in ~/.ssh/config for this remote machine)
/usr/sbin/rndc is a program on the remote server to be run
reload is an argument to the program to be run on the remote machine
so in plain words, your command means:
run the program /usr/sbin/rndc with the argument reload on the remote machine otherdns
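As for the "root write access" danger: a passphrase-less key grants whatever it is authorized to do to anyone who copies it. A common mitigation (standard OpenSSH authorized_keys options; the key material and comment below are placeholders) is to pin the key on otherdns to a single forced command:
# In ~/.ssh/authorized_keys on otherdns, restrict this key to one command:
command="/usr/sbin/rndc reload",no-port-forwarding,no-pty,no-agent-forwarding ssh-rsa AAAA... dns-reload-key
With such a restriction, a stolen key can only trigger an rndc reload; it cannot scp files into /etc/bind or open a shell.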

Define a set keyfile for Ubuntu to use when SSHing into a server

I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still using the .pem given by Amazon.
How can I tell the shell to always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering if this plays into that.
You could solve this in 3 ways:
By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are ssh'ing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600, as in the sketch below.
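A shell sketch of this option (paths taken from the question; back up any existing ~/.ssh/id_rsa first):
cp ~/.ssh/mykey.pem ~/.ssh/id_rsa   # make the .pem the default key ssh looks for
chmod 600 ~/.ssh/id_rsa             # ssh refuses private keys with loose permissions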
Using ssh-agent (ssh-agent will manage the keys for you)
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
Using the ssh config file:
You could use the ssh config file. On the machine from which you are trying to ssh, keep the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance without specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make it permanent, see this SO question.

CoreOS Vagrant Virtual box SSH password

I'm trying to SSH into a CoreOS VirtualBox instance using PuTTY. I know the username appears in the output when I do vagrant up, but I don't know what the password is.
I've also tried overriding it with config.ssh.password settings in the Vagrantfile, but when I do vagrant up again it comes up with an Authentication failure warning and retries endlessly.
How do we use PuTTY to log into this box instance?
By default there is no password set for the core user, only key-based authentication. If you'd like to set a password, this can be done via cloud-config.
Place the cloud-config file in a user-data file within the cloned repository. View user-data.sample for an example; a minimal sketch follows.
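This is only an illustrative sketch of such a cloud-config (the password hash is a placeholder; generate a real one with a tool such as openssl passwd -1):
#cloud-config
users:
  - name: core
    passwd: $1$placeholder$replace.with.a.real.hash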
A better method would be to follow this guide so that you can use vagrant ssh as it was designed.
By default for Vagrant:
user: vagrant
password: vagrant
"...vagrant up again it comes up with Authentication failure warning and retries endlessly."
I think that's because it connects with the wrong ssh public key.
To change it read this: https://stackoverflow.com/a/23554973/3563993
