Linux (bash): how to list available SSH authentication methods for local host?

Is there a way to list all available SSH authentication methods for the local host using the command line? Basically, I want to see the same list that the server would announce when a (remote) client tries to connect.
Note: I do not want to search through /etc/ssh/sshd_config, as that would require knowing which authentication methods exist in general (e.g. sshd_config may just be empty).

Run ssh -v server and look for the first "Authentications that can continue" line.
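In the verbose output that line looks like this (the methods listed depend on the server's configuration):
debug1: Authentications that can continue: publickey,password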

There is a better way (from here), modified by me for the localhost case:
ssh -o PreferredAuthentications=none -o NoHostAuthenticationForLocalhost=yes localhost
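Since no acceptable authentication method is offered, the server rejects the attempt and the client prints the advertised list, e.g. (the methods shown depend on your configuration):
Permission denied (publickey,password).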

nmap can do this too using ssh-auth-methods:
nmap -p 22 --script ssh-auth-methods localhost
Example output:
Starting Nmap 7.91 ( https://nmap.org ) at 2021-08-16 12:07 +07
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000087s latency).

PORT   STATE SERVICE
22/tcp open  ssh
| ssh-auth-methods:
|   Supported authentication methods:
|     publickey
|_    password

Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
What it does behind the scenes is in ssh-auth-methods.nse: it uses libssh2 to connect with a random username and lists the possible authentication methods.
(It's possible to print the effective sshd configuration options with sshd -T; this shows what options are in effect even if sshd_config is empty. For this use case, running sshd -T | egrep 'permitrootlogin|authentication' would be useful.)
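Representative output lines from that pipeline (run as root; the exact keywords and values depend on your OpenSSH version and configuration):
permitrootlogin without-password
passwordauthentication yes
pubkeyauthentication yes
challengeresponseauthentication no
hostbasedauthentication no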

Related

what is the bash command-line to connect to a network?

I am working in a Linux/unix sh shell. I am following instructions to download data from a repository which consists of two mirrored systems.
Instructions say I must be able to make a http (Port 80) connection to both systems (or at least one) in order to download the data. What bash command-line should I use?
Use wget:
wget http://url/file.tar.gz
Or, if you want to check whether port 80 is really open on the host, simply use telnet:
telnet server_ip 80
You should get some kind of reply. If no HTTP service is running, you will get 'connection refused'.
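On success, telnet reports the connection before any HTTP traffic is exchanged, e.g.:
Trying xx.xx.xx.xx...
Connected to server_ip.
Escape character is '^]'.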
I'd say nmap is the best option here, since it gives a clear answer:
sudo nmap -sS -p80 <ip address>
The output is either something like
Host is up (0.027s latency).
PORT STATE SERVICE
80/tcp open http
or, in the negative case
Host is up (0.024s latency).
PORT STATE SERVICE
80/tcp filtered http
That is something you can easily pipe into a grep command:
sudo nmap -sS -p80 xx.xx.xx.xx | grep "/tcp"
This will result in either
80/tcp open http
or
80/tcp filtered http
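If neither nmap nor telnet is installed, bash itself can attempt the connection through its /dev/tcp pseudo-device; a minimal sketch, assuming bash and the coreutils timeout command:
timeout 1 bash -c '</dev/tcp/xx.xx.xx.xx/80' && echo "open" || echo "closed/filtered"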

How can I find available but unoccupied ports on a Linux box?

Specifically RHEL 6.5
It's a Dev box and we have certain port ranges we are permitted to use for development.
...unfortunately, getting a tech's attention to find out what ports are available is like pulling teeth. Would prefer a script or alias that does this so that we don't have to ask all the time. Clues? Is this an iptables command or is it a netstat command or some weird combo? nmap is not available on this machine.
Please don't say this is a Server Fault question. They say it's a programming question. :-|
Definitely a SF question but here we go. From the dev box itself (command line) you should be able to see what ports are in use with the netstat tool.
To see the list of listening ports both UDP and TCP, complete with the program names:
# preferably as root
netstat --listening --program --numeric-ports --protocol=ip -6 -4
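Going one step further, the listening list can be inverted to print unoccupied ports in your permitted range; a sketch, where 8000-8100 is a made-up example range (substitute the range you were allotted):
# extract the port numbers netstat reports as listening
used=$(netstat -lnt | awk 'NR>2 {sub(/.*:/, "", $4); print $4}' | sort -un)
# print every port in the permitted range that has no listener
for port in $(seq 8000 8100); do
    echo "$used" | grep -qx "$port" || echo "$port is free"
done
Note this only proves no local listener exists; firewall policy may still block the port.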
From another machine, you can use nmap or a similar tool to see what ports are open/listening by scanning the IP address assigned to the dev box. Before trying this, maybe you should ask for permission. Also, you should consider that the box in question might have firewall rules in place that can thwart your scanning attempts.
To see what firewall rules are in place in the dev box try:
# as root
iptables -nvxL -t filter
# maybe there are NAT rules, redirects to other addresses, etc.
iptables -nvxL -t nat
To see what these iptables options do, try man iptables.
As an example, assuming 172.16.0.1 is the IP address assigned to the dev box, to run nmap in the simplest way possible:
# preferably as root
nmap -v 172.16.0.1
In a few minutes you should see a list of listening ports/services on the box in question.
Try man nmap and read the documentation for more details.
If you really think this is a programming issue, you can use the netcat tool and program a simple script to do something roughly equivalent to what nmap does.
#!/bin/bash
#
# DISCLAIMER: NOT TESTED -- just an example
# NOTE: with a 1-second timeout per port, a full sweep of all
# 65535 ports can still take many hours
HOST=172.16.0.1
for port in $(seq 1 65535)
do
    echo "Trying ${port}..."
    netcat -vvv -w 1 -z "${HOST}" "${port}"
done
For every open TCP port you should see a line similar to this:
Connection to 172.16.0.1 23 port [tcp/telnet] succeeded!
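Because -z makes netcat signal the result through its exit status, a quieter single-port test also works in scripts, e.g.:
netcat -w 1 -z 172.16.0.1 22 && echo "port 22 open"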

ssh port forwarding (tunneling) in linux

I have a specific scenario that I want to solve. I currently connect to a host via port forwarding:
laptop -> gateway -> remote_server_1
and another host:
laptop -> remote_server_2
with passwordless login working on both. Neither of the remote servers is visible to the outside world. Now I'm running a service on remote_server_2 that I'd like to be able to access on remote_server_1. I presume I have to set up reverse port forwarding from remote_server_1 to my laptop, and then on to remote_server_2, but I'm not sure how to do this. Anyone come across this situation before?
Edit:
The full solution in case anyone else needs it:
mylaptop$ ssh -L 3001:localhost:3000 server_2
server_2$ netcat -l 3000
Then set up the tunnel via the gateway to server_1:
ssh -t -t -L 3003:server_1:22 gateway
Then access it from server_1:
ssh -R 3002:localhost:3001 -p3003 localhost
echo "bar" | nc localhost 3002`
and hey presto, server_2 shows bar :-) (The resulting path: server_1:3002 → reverse tunnel to laptop:3001 → forward tunnel to server_2:3000.)
You have to do exactly as you've described. Set up the server on server_2.
mylaptop$ ssh -L 3001:localhost:3000 server_2
server_2$ netcat -l 3000
Then access to it from server_1.
mylaptop$ ssh -R 3002:localhost:3001 server_1
server_1$ echo "foo" | netcat localhost 3002
server_2 will show foo.
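On OpenSSH 7.3 and later, the separate tunnel to the gateway can be collapsed with the -J (ProxyJump) option; a sketch assuming the same host names as above:
mylaptop$ ssh -J gateway -R 3002:localhost:3001 server_1
This reaches server_1 through the gateway in a single command and establishes the same reverse forward.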

How to purge connections left open by SSH ProxyCommand?

I have a webserver WWW1 and a front-facing proxy PRX. I use SSH ProxyCommand to connect to WWW1's internal IP (private IP) via PRX (private+public IP). For some connections (not all) I see a network connection left open after I'm finished. These add up!
~/.ssh/config
Host *
ServerAliveInterval 5
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
Host WWW1 WWW2 WWW3
User foo
ProxyCommand ssh -q -a -x PRX nc %h 22
IdentityFile ~/.ssh/id_foo_WWWx
On PRX, lsof | grep WWW1:ssh shows 124 open connections at the moment. On WWW1, the same command shows 243 open connections. There are similar open connections for WWW2, WWW3 etc.
WWW1 and PRX are Debian. Client connections are coming from a mix of Debian, Ubuntu and OSX10.6. I use Emacs Tramp but this has no special configuration (AFAIK) outside of my ~/.ssh/config.
I'm concerned about running out of internal ports, and ideally I want these connections to clean themselves up without intervention, by configuring them to kill themselves off; failing that, a command I can use to kill old processes is fine!
A better way would be to use the -W option of SSH, so you could put
ProxyCommand ssh -q -a -x PRX -W %h:22
instead of
ProxyCommand ssh -q -a -x PRX nc %h 22
This way you get rid of dependence on nc too.
Don't know whether it matters but I use nc -w 1 %h %p
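If some of the lingering connections come from multiplexed masters rather than the nc processes, adding ControlPersist (available since OpenSSH 5.6) makes idle masters exit on their own; a sketch for the Host * block above, where the 10-minute value is an arbitrary example:
Host *
ServerAliveInterval 5
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
ControlPersist 10m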

linux execute command remotely

How do I execute a command/script on a remote Linux box?
Say I want to do service tomcat start on box b from box a.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS are chosen according to your specific needs (for example, binding to ipv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted for a password at every ssh run, also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is... to understand the ssh key exchange process. Take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration filenames depend on your system; anyway you'll find them somewhere like /etc/sshd_config. Ideally, do not run ssh as root but as a specific user on both sides, server and client.
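A minimal passwordless-login setup might look like this (a sketch; the key type and paths are just common defaults):
# generate a key pair, install the public half remotely, load it into an agent
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@remote_server
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519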
Some extra docs beyond the source project's main pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) which might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user@machine "remote command to run"
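To illustrate the bash -s form with the question's example (a sketch; user, machine and the script body are placeholders):
cat > local_script.sh <<'EOF'
#!/bin/bash
hostname
service tomcat start
EOF
ssh user@machine 'bash -s' < local_script.sh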
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users, or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into the fifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
