How do I remove default ssh host from ssh configuration? - linux

I used to connect to Amazon Web Services using the ssh command and an application.pem key. Now when I try to connect to other platforms such as GitHub, my SSH client looks for the same application.pem key and tries to connect to AWS. How do I connect to GitHub or change the default host and key configuration? I am using an Ubuntu 13.10 system, and the following is my ssh output.
pranav@pranav-SVF15318SNW:~/.ssh$ ssh
Warning: Identity file application.pem not accessible: No such file or directory.

You need the identity file to log in to the box. Use the command:
ssh -i (identity_file) username@hostname
This worked for me. Write just the filename (without any slashes), unlike the Amazon EC2 tutorial, which asks you to enter:
ssh -i /path/key_pair.pem ec2-user@public_dns_name
Also check the key file's permissions.
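To address the original question about the default key: if a shell alias or a catch-all entry in ~/.ssh/config points at application.pem, every connection (including GitHub) will try that key. Pinning the key per host in ~/.ssh/config avoids this; the entries below are only a sketch, and the AWS hostname is a placeholder:

Host my-aws-box
HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com   # placeholder, use your instance's DNS name
User ec2-user
IdentityFile ~/.ssh/application.pem

Host github.com
User git
IdentityFile ~/.ssh/id_rsa
IdentitiesOnly yes

With this in place, ssh my-aws-box uses the .pem key, while ssh -T git@github.com uses your GitHub key.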

Related

Git clone gives "ssh: connect to host github.com port 22: Connection timed out" Linux /opt directory Amazon EC2 Instance

Issue
I am trying to use git in the /opt/jamf2snipe directory on an EC2 instance. I have tried the following command:
sudo git clone git@github.com:MYUSERNAME/jamf2snipe-school.git
It says connection timed out:
Cloning into 'jamf2snipe-school'...
ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
If I try to run this in my home directory it works fine. It seems to be a permission issue with /opt. I am wary of changing permissions for /opt.
Setup
I am trying to do this on an Amazon EC2 instance. Currently, SSH is limited to certain IP addresses (not including GitHub). I followed this article from GitHub to use SSH over the HTTPS port. I tested that I had things set up correctly by using:
$ ssh -T git@github.com
and received:
Hi USERNAME! You've successfully authenticated, but GitHub does not provide shell access.
I did this in /opt/jamf2snipe and the home directory successfully.
First, if at all possible, do not use sudo.
In addition to executing commands as root (which is dangerous), it uses its own environment variables and SSH settings (in /root/.ssh), which differ from those of your normal EC2 user.
Likewise, /opt, which might be writable only by root, is not the best spot to clone a repository.
Second, using SSH over the HTTPS port is the usual solution (like this one from 2018) on EC2, where the firewall may block outgoing SSH traffic by default.
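For reference, a minimal ~/.ssh/config entry (for your normal EC2 user, not root) that routes GitHub SSH traffic over port 443 could look like the following; this is a sketch based on GitHub's SSH-over-HTTPS documentation, not part of the original answer:

Host github.com
Hostname ssh.github.com
Port 443
User git

You can then verify it with ssh -T git@github.com and re-run the clone without sudo.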

Scp connection timed out ubuntu VM

So I'm trying to copy a file from my directory to an Azure Ubuntu VM. SSH works just fine, but the scp command takes a long time and then I get this message:
connect to host 10.x.x.x port 22: Connection timed out lost connection
This is the command I used:
scp -vvv -i .ssh/id_rsa BaltimoreCyberTrustRoot.crt.pem azureuser@10.x.x.x:/var/www/html
• AFAIK, the scp command you are using might not be correct. The correct command to copy files from your local machine to your Ubuntu Linux VM is as follows:
scp -r ./tmp/ azureuser@10.xxx.xxx.xxx:/home/file/user/local
In the above command, once the SCP connection is established and you have authenticated, the files in the local './tmp' directory are recursively copied to the '/home/file/user/local' directory on the Azure Ubuntu VM. Thus, the whole directory tree is copied from the local system to the VM.
• Also, if you want to pass the private key to scp explicitly, use the command below to copy files from the local system to the Azure Ubuntu VM:
sudo scp -i ~/.ssh/id_rsa /path/cert.pem azureuser@10.xxx.xxx.xxx:/home/file/user/local
When you use 'sudo' to access a root-owned file, scp looks for the identity file 'id_rsa' in '/root/.ssh/' instead of '/home/user/.ssh/'. That's why you have to specify the identity file (private key) in the scp command when connecting to the Azure Ubuntu VM and transferring files from the local system to the VM.
Other than this, kindly ensure that port 22 is open in the inbound NSG rules on the Azure Ubuntu VM, and that the VM's default page is accessible on ports 80/443 over the public IP address and the assigned Azure FQDN.
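As a quick sanity check before retrying scp, you can test whether port 22 on the VM is reachable at all from your machine; the IP below mirrors the placeholder from the question:

nc -zv 10.x.x.x 22

If this also times out, the problem is the NSG or another firewall rather than the scp syntax.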
For more information, kindly refer to the links below:
Can't scp to Azure's VM
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp#scp-a-directory-from-a-linux-vm

SSH to AKS Windows node: azureuser@(windows node internal ip address)'s password

I'm trying to SSH into an AKS Windows node using this reference, which creates a debugging Linux node and then SSHes into the Windows node from it. Once I enter the Linux node and try to SSH into the Windows node, it asks me for the azureuser password like below:
azureuser@10.240.0.128's password:
Permission denied, please try again.
What is azureuser@(windows node internal IP address)'s password? Is it my Azure service password, or is it the WindowsProfileAdminUserPassword that I pass in when I create an AKS cluster using the New-AzAksCluster cmdlet? Or is it my SSH key pair passphrase? If I do not know what it is, is there a way I can reset it? Or is there a way I can create a Windows node free from credentials? Any help is appreciated. Thanks ahead!
It looks like you're trying to log in with your password, not your SSH key. These are two different authentication methods. If you want to SSH to your node, you need to choose key authentication. You can do this by running the command:
ssh -i <id_rsa> azureuser@<your.ip.address>
But before this, you need to create a key pair, which is well described in this section. Then you can create the SSH connection to a Linux node. Everything is described in detail, step by step, in the documentation you linked.
When you configure everything correctly, you will be able to log into the node using the SSH key pair and you won't need a password. When you execute the command
ssh -i <id_rsa> azureuser@<your.ip.address>
you should see an output like this:
The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
Are you sure you want to continue connecting (yes/no)? yes
[...]
Microsoft Windows [Version 10.0.17763.1935]
(c) 2018 Microsoft Corporation. All rights reserved.
When you see Are you sure you want to continue connecting (yes/no)?, type yes and confirm with Enter.
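For completeness, the key pair referenced in that documentation is typically generated with ssh-keygen; the file name below is just an example:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

This produces the private key (~/.ssh/id_rsa) you pass to ssh -i and the public key (~/.ssh/id_rsa.pub) that has to be provided to the cluster, as described in the linked docs.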

Issues with using Jump Host

How do I transfer a file from my local machine to a remote host to which I need to get through a jump host? These are the steps I follow to connect to the remote host:
1. ssh myname@jump-host
2. enter password
3. sudo su - another-random-name
4. ssh name@remote-host
Now I want to transfer a file from my local machine to the remote host. How would I achieve this? I have already tried scp -o ProxyCommand, but I don't quite know where step 3 fits into that command.
Use port forwarding to expose the remote host's SSH port on your localhost, like this:
ssh -L 2222:remote-host:22 myname@jump-host
then (in another tab/shell on your local machine):
scp -P 2222 file name@localhost:
will copy the file directly to the remote host.
On the jump host, under another-random-name, run
ssh -L 2222:remote-host:22 myname@jump-host
then on your local computer you can run
scp -P 2222 file name@jump-host:
SCP will try to connect to jump-host, but the connection will in fact be forwarded to remote-host, and it uses name because it is really connecting to remote-host.
You are probably still facing a problem with the key for another-random-name. You can create a key pair on your machine for your local user and put the public key into the authorized keys of that user on remote-host.
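If your OpenSSH client supports ProxyJump (version 7.3 or later), the ProxyCommand approach from the question can be replaced by a single hop. Note that this bypasses step 3 (sudo su - another-random-name), so the key you present has to be accepted directly by name@remote-host; hostnames below mirror the question:

scp -o ProxyJump=myname@jump-host file name@remote-host:

Here scp first authenticates to jump-host as myname and then to remote-host as name through the tunneled connection.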

Define a set keyfile for Ubuntu to use when SSHing into a server

I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still using the .pem given by Amazon.
How can I tell the shell to always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering if this plays into that.
You could solve this in 3 ways:
By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are SSHing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
Using ssh-agent (ssh-agent will manage the keys for you)
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
Using the SSH config file:
From the machine where you are trying to ssh, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance without specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make that permanent, see this SO question.
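As a variation that combines options 2 and 3 (assuming OpenSSH 7.2 or newer), the AddKeysToAgent option in the same config entry loads the key into a running agent on first use, so you do not have to re-run ssh-add after every login; this snippet is a sketch, not part of the original answer:

Host host2
HostName 54.987.6.54
User ubuntu
IdentityFile ~/.ssh/mykey.pem
AddKeysToAgent yes

An ssh-agent still has to be running (many desktop Ubuntu sessions start one automatically) for the key to be cached.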
