Reliable (cryptographic) way to verify a device's public IP address behind a NAT - Linux

I am writing a relatively small bash script that is supposed to update DNS records for a server behind a NAT which might change its external IP address. Essentially a free DynDNS using my DNS provider's API.
I am retrieving the server's IP address using a simple query to an external service. But for the sake of security, before pointing my DNS A record to a new arbitrary IP address given to me by an external service, I first need to verify that this is indeed the server's IP address. This check needs to involve a cryptographic step, since an active MITM attack could be taking place that simply forwards traffic to the server's real IP address.
So what would be the simplest way (if possible through bash) to verify that this is indeed the server's IP address?

I presume you mean that the bash script is running somewhere other than the server whose IP you need to determine?
The obvious solution would be to connect using ssh with strict host checking (and a remembered server key) or via SSL with certificate verification (you could use a self-signed certificate). The former is a bit easier to do out of the box.
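If you go the SSL route instead, pinning the certificate is enough since you control both ends. A minimal sketch, where the port, the SNI name and the PINNED_FPR variable (assumed to hold the previously recorded SHA-256 fingerprint of the server's self-signed certificate) are all illustrative:
# PINNED_FPR is assumed to have been recorded ahead of time on the server
# itself using the same two openssl commands.
REMOTE_FPR=$(openssl s_client -connect "$IP:443" -servername dyndns.example </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256)
if [ "$REMOTE_FPR" = "$PINNED_FPR" ]; then echo "valid"; else echo "invalid"; fi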

Assuming that $IP is the server's new external IP address, this works by first acquiring the server's SSH host keys by running ssh-keyscan against localhost and generating a temporary known_hosts file. It then substitutes 127.0.0.1 with the given $IP and initiates an ssh session to the remote IP address using the temporary known_hosts file. If the session is established and the key verification succeeds, the command exits cleanly; otherwise it prints a "Host key verification failed." message. This works even if authentication with the server fails, because host key verification is done before authentication. The script finally checks whether the ssh output includes that error message and prints valid or invalid accordingly.
# Scan this host's own SSH keys and build a temporary known_hosts file
# that maps them to the candidate external address $IP.
TMP_KNOWN_HOSTS=$(mktemp)
ssh-keyscan 127.0.0.1 > "$TMP_KNOWN_HOSTS"
sed -i "s/127\.0\.0\.1/$IP/" "$TMP_KNOWN_HOSTS"
# Connect to $IP with strict checking against that file; host key verification
# happens before authentication, so a login failure does not matter here.
RESPONSE=$(ssh -n -o "UserKnownHostsFile $TMP_KNOWN_HOSTS" -o "StrictHostKeyChecking yes" "$IP" true 2>&1)
if ! [[ $RESPONSE = *"Host key verification failed."* ]]; then
    echo "valid"
else
    echo "invalid"
fi
rm -f "$TMP_KNOWN_HOSTS"

SSH interception - Linux

Really hoping someone here can point me in the right direction,
Expected result: SSH successfully into a remote device.
Challenge/Back story:
We have devices out in remote places around the country,
These devices do not have a fixed public IP address
(Using GSM as its internet breakout)
These devices are able to SSH and break out.
My thought, with regard to maintaining these devices, is to (if possible) use a server in the cloud as a middleman: have these devices create some sort of reverse tunnel to our middleman server, then have us as admins attach to it, or something to that effect.
Again, to summarize: the device cannot be SSH'd into directly, but it can break out.
The aim is to be able to reach their terminal from the office.
I have been looking at mitmssh but am not coming right on that front.
Server A (no fixed address, cannot SSH into it directly, but has breakout)
Server B (standard server which can be used as a middleman)
Server C (us admins)
Tried something along the lines of "ssh user@serverA -R serverB:12345:ServerA:22",
which creates the tunnel, but I'm struggling with grabbing hold of that SSH connection.
I think I regularly use something very similar. My target machine connects to the machine with a stable address with:
ssh midpoint -R 2022:localhost:22
My ~/.ssh/config file knows the real HostName for midpoint. The config file on my work machine then defines a ProxyCommand option that uses this tunnelled TCP connection, like:
Host target
ProxyCommand ssh -q midpoint nc localhost 2022
The reason for using netcat was to get ssh-agent forwarding behaving.
I've just been searching around and it seems OpenSSH now has specific handling for this (the -W command-line option, and ProxyJump in the config file). E.g. https://stackoverflow.com/a/29176698/1358308
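For completeness, a minimal sketch of that newer -W approach on the admin machine, assuming the device still opens the reverse tunnel with ssh midpoint -R 2022:localhost:22 and that midpoint is already defined in ~/.ssh/config (the name target and port 2022 are just the illustrative values from above):
Host target
    ProxyCommand ssh -q -W localhost:2022 midpoint
With that in place, ssh target connects to midpoint and is handed straight through the reverse tunnel to the device's own sshd, so host key checking and agent forwarding behave as usual.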

How to pass private key as text to ssh?

I'm using a service which connects to a remote host via ssh. I don't want to store or write SSH keys on that service; I want to pass the keys to the service and then execute the SSH connection to the other host using the passed keys.
To connect to the host I used: ssh user@host -i /path/to/key.
How can I use the key as text rather than a specific file?
I tried ssh user@host -i "key-text-example". It doesn't work like that.
Not as a literal answer to your question, but as the best way to meet your actual need (of connecting via SSH to a remote machine via a system you don't trust to store your private key) -- you should use SSH agent forwarding.
When you pass your private key to a remote system, even transiently, it can be captured; if an attacker is recording everything that goes on on the system with Sysdig, for example, the writes over the FIFO from the process substitution (or the reads done by the SSH client process) will show up plain as day.
Instead of passing the private key to the remote system, agent forwarding sends the request for a signature back from the remote system to your origin machine. (There are even SSH agents for Android, so you can have the request forwarded to your phone -- presumably a device you trust -- such that the private key never leaves it). Similarly, a hardware device such as a YubiKey can store your private key and perform signature operations on behalf of a SSH client -- on behalf of a remote machine when agent forwarding is requested.
For the simple case:
local$ [[ $SSH_AUTH_SOCK ]] || eval "$(ssh-agent -s)"
local$ ssh-add # load the key into your local agent
local$ ssh -A host1 # connects to host1 with agent forwarding enabled
host1$ ssh host2 # asks the ssh agent on "local" to authenticate to host2
host2$
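If you always want agent forwarding toward that intermediate host, the same effect can be set from ~/.ssh/config on the local machine (a sketch; host1 is the name from the example above). Scope it narrowly, because anyone with root on the forwarding host can use the forwarded agent while you are connected:
Host host1
    ForwardAgent yes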

Allowing hostname access in pg_hba.conf, won't work unless I also add the resolved ip address?

I want to allow postgres access from a hostname rather than an IP. I added access from the hostname to my pg_hba.conf, but the error log shows that DNS resolves this hostname to an IP, and connections from this IP are not allowed unless I explicitly allow that address too. This defeats the whole purpose of using the hostname, as the hostnames for my services will NEVER change, whereas the IP addresses can change daily.
What is the solution to this problem? Maybe my conf is just incorrect?
error:
test@test FATAL: no pg_hba.conf entry for host "10.81.128.90", user "test", database "test", SSL on[1]:
test@test DETAIL: Client IP address resolved to "cannablrv2-locationserver-1.kontena.local", forward lookup not checked.
shell script that adds access to pg_hba.conf
# Restrict subnet to docker private network
echo "host all all 172.17.0.0/16 md5" >> /etc/postgresql/9.5/main/pg_hba.conf
# Allow access from locationserver
echo "host all all cannablrv2-locationserver.test.kontena.local md5" >> /etc/postgresql/9.5/main/pg_hba.conf
# And allow access from Docker Toolbox / Boot2Docker on OSX
echo "host all all 192.168.0.0/16 md5" >> /etc/postgresql/9.5/main/pg_hba.conf
# Listen on all ip addresses
echo "listen_addresses = '*'" >> /etc/postgresql/9.5/main/postgresql.conf
echo "port = 5432" >> /etc/postgresql/9.5/main/postgresql.conf
You see that the client IP address resolves to a different name than the one you entered in pg_hba.conf, which is why the connection fails.
Did you read the documentation? It explains in detail how host names are handled.
You might get away with using .kontena.local to match name suffixes.
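For example, a suffix entry in the style of the script above (a sketch; adjust the path to your installation) would be:
echo "host all all .kontena.local md5" >> /etc/postgresql/9.5/main/pg_hba.conf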
This answer assumes that you are using a DNS server for hostname resolution. According to https://www.postgresql.org/docs/current/auth-pg-hba-conf.html, if a host name is given in pg_hba.conf, a reverse DNS lookup is performed on the client's IP address. In your case, the reverse lookup of 10.81.128.90 resolves to cannablrv2-locationserver-1.kontena.local instead of the cannablrv2-locationserver.test.kontena.local you have provided in your pg_hba.conf. Also, both the reverse and the forward DNS lookups must give the expected results: the name obtained from the reverse lookup must resolve back to the client's address.
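A quick way to check whether the two lookups agree (a sketch using dig; the names and address are the ones from the error above):
# Reverse lookup: which name does the client IP map to?
dig +short -x 10.81.128.90
# Forward lookup: does the pg_hba.conf entry resolve back to that IP?
dig +short cannablrv2-locationserver.test.kontena.local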

linux command to connect to another server using hostname and port number

What is the Linux command to connect to another server using a host name and port number?
How do I connect to another server using only a host name and port number and then check whether an existing process is running? The only way I see it working is to log in to the server and run the ps command. But is there a way to do it without logging in directly to the other server, connecting only with a host name and port number, and check the running process?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet or nc (netcat).
Note that you can't necessarily determine whether or not a process is running remotely - it might be running on that port, but simply ignore anything it sees over the port. To really be sure, you will need to run ps or netstat (or similar) via ssh.
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
If you have no access at all to the server whose processes you want to list, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier - you can always probe public ports without authentication, although you might make people angry if you do this to servers you don't own). This is a feature, not a problem.
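As a concrete sketch of the probe-a-port idea (the host name and port are illustrative; -z only tests whether the connection can be made and -w 3 gives up after three seconds):
nc -z -w 3 db.example.com 5432 && echo "port open" || echo "port closed or filtered"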
telnet connects to most services. With it you can make sure the port is open and see the hello message (if any). nc is more low-level.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol you can talk to the service from the keyboard. If the connection is secured, try openssl.
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is not known you may see a lot of gibberish or just a Connected to ... message.
Try this :
ssh <YOUR_HOST_NAME> 'ps auxwww'
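To check for a specific process rather than scanning the full listing (a sketch; nginx is just an illustrative process name):
ssh <YOUR_HOST_NAME> 'pgrep -fl nginx'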
Like Dark Falcon said in the comments, you need a protocol to communicate with the server; a port alone is useless in this case.
By default on unix (and unix-like) servers, ssh is the way to go.
Remote shell (rsh) can do this. The example cats a file on the remote machine:
rsh host 'cat remotefile' >> localfile
host: the remote machine (rsh uses its standard service port; it is not given on the command line)
remotefile: name of a file, relative to the home directory, on the machine you are remotely logging in to
localfile: name of the local file the output is appended to
Use monitoring software (like Nagios). It looks at your processes, sensors, load and whatever else you configure it to watch. It continuously stores logs. It alerts you by email/SMS/Jabber if something fails. You can access it with a browser or via an HTTP API.
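For instance, a port check of the kind such a monitor would run (a sketch; the plugin path is the typical Debian location and the host/port are illustrative):
/usr/lib/nagios/plugins/check_tcp -H db.example.com -p 5432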

SSH on Linux: Disabling host key checking for hosts on local subnet (known_hosts)

I work on a network where the systems at an IP address will change frequently. They are moved on and off the workbench and DHCP determines the IP they get.
It doesn't seem straightforward how to disable host key caching/checking so that I don't have to edit ~/.ssh/known_hosts every time I need to connect to a system.
I don't care about the host authenticity, they are all on the 10.x.x.x network segment and I'm relatively certain that nobody is MITM'ing me.
Is there a "proper" way to do this? I don't care if it warns me, but halting and causing me to flush my known_hosts entry for that IP every time is annoying and in this scenario it does not really provide any security because I rarely connect to the systems more than once or twice and then the IP is given to another system.
I looked in the ssh_config file and saw that I can set up groups so that the security of connecting to external machines could be preserved and I could just ignore checking for local addresses. This would be optimal.
From searching I have found some very strong opinions on the matter, ranging from "Don't mess with it, it is for security, just deal with it" to "This is the stupidest thing I have ever had to deal with, I just want to turn it off" ... I'm somewhere in the middle. I just want to be able to do my job without having to purge an address from the file every few minutes.
Thanks.
This is the configuration I use for our ever-changing EC2 hosts:
maxim#maxim-desktop:~$ cat ~/.ssh/config
Host *amazonaws.com
IdentityFile ~/.ssh/keypair1-openssh
IdentityFile ~/.ssh/keypair2-openssh
User ubuntu
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
This disables host confirmation (StrictHostKeyChecking no) and also uses a nice hack to prevent ssh from saving the host identity to a persistent file (UserKnownHostsFile /dev/null). Note that as an added value I've set the default user with which to connect to the host and the option to try several different identity private keys.
Assuming you're using OpenSSH, I believe you can set the
CheckHostIP no
option to prevent host IPs from being checked in known_hosts. From the man page:
CheckHostIP
If this flag is set to 'yes', ssh(1) will additionally check the host IP address in the known_hosts file. This allows ssh to detect if a host key changed due to DNS spoofing. If the option is set to 'no', the check will not be executed. The default is 'yes'.
This took me a while to find. The most common use-case I've seen is when you've got SSH tunnels to remote networks. All the solutions here produced warnings which broke my Nagios scripts.
The option I needed was:
NoHostAuthenticationForLocalhost yes
Which, as the name suggests, also only applies to localhost.
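In ~/.ssh/config that might look like this (a sketch; scope it to the addresses your tunnels actually use):
Host localhost 127.0.0.1
    NoHostAuthenticationForLocalhost yes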
Edit your ~/.ssh/config
nano ~/.ssh/config (if there isn't one already, don't worry, nano will create a new file)
Add the following config:
Host 192.168.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
If you want to disable this temporarily or without needing to change your SSH configuration files, you can use:
ssh -o UserKnownHostsFile=/dev/null username@hostname
Since every other answer explains how to disable the key checking, here are two ideas that preserve the key checking, but avoid the problem:
Use hostnames. This is easy if you control the DHCP server and can assign proper names. After that you can just use the known hostnames; the changing IPs don't matter.
Use hostnames. Even if you don't control the DHCP server, you can use a service like avahi, which will broadcast the name of the server on your local network. It takes care of resolving collisions and other issues.
Use host key signing. After you build a machine, sign its host key with a local CA (you don't need a globally trusted CA for that). After that, you don't need to trust each host separately on your machine. It's enough that you trust the signing CA in the known_hosts file. More information is in the ssh-keygen man page or in many blog posts (https://www.digitalocean.com/community/tutorials/how-to-create-an-ssh-ca-to-validate-hosts-and-clients-with-ubuntu). A minimal sketch of the workflow follows.
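A hedged sketch of that host-key signing workflow (all names, domains and paths are illustrative; the signed host additionally needs a HostCertificate line in its sshd_config pointing at the generated certificate):
# On the CA machine: create the CA key pair once.
ssh-keygen -f host_ca

# Sign a host's public key (-s = CA key, -h = host certificate,
# -I = key identifier, -n = allowed host names).
ssh-keygen -s host_ca -I web01 -h -n web01.example.lan /etc/ssh/ssh_host_ed25519_key.pub

# On each client: trust the CA once instead of each individual host key.
echo "@cert-authority *.example.lan $(cat host_ca.pub)" >> ~/.ssh/known_hosts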
