I have a webserver WWW1 and a front-facing proxy PRX. I use SSH ProxyCommand to connect to WWW1's internal IP (private IP) via PRX (private+public IP). For some connections (not all) I see a network connection left open after I'm finished. These add up!
~/.ssh/config
Host *
    ServerAliveInterval 5
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p

Host WWW1 WWW2 WWW3
    User foo
    ProxyCommand ssh -q -a -x PRX nc %h 22
    IdentityFile ~/.ssh/id_foo_WWWx
On PRX, lsof | grep WWW1:ssh shows 124 open connections at the moment. On WWW1, the same command shows 243 open connections. There are similar open connections for WWW2, WWW3 etc.
WWW1 and PRX are Debian. Client connections come from a mix of Debian, Ubuntu and OS X 10.6. I use Emacs TRAMP, but it has no special configuration (AFAIK) outside of my ~/.ssh/config.
I'm concerned about running out of internal ports, and I want these connections to clean themselves up without intervention, ideally by configuring them to kill themselves off; failing that, a command I can use to kill off old processes is fine!
A better way would be to use the -W option of SSH, so you could put
ProxyCommand ssh -q -a -x -W %h:22 PRX
instead of
ProxyCommand ssh -q -a -x PRX nc %h 22
This way you get rid of dependence on nc too.
Don't know whether it matters but I use nc -w 1 %h %p
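Putting the -W suggestion together with the config from the question, the Host block might look like this (a sketch; -W needs a reasonably recent OpenSSH on the client, and PRX must allow TCP forwarding):

Host WWW1 WWW2 WWW3
    User foo
    ProxyCommand ssh -q -a -x -W %h:22 PRX
    IdentityFile ~/.ssh/id_foo_WWWx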
I wrote a script to execute a couple of remote ssh commands relating to Apache Storm. When I execute the script it says:
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: No route to host
ssh: connect to host XXX.XXX.XXX.XXX port 22: Connection refused
If I execute the commands manually it works fine, and I can ping the machine. So there has to be something wrong with this code:
while [ $i -le $numVM ]
do
    if [ $i -eq 1 ]; then
        ssh -i file root@IP1 './zookeeper-3.4.6/bin/zkServer.sh start'
    else
        ssh -i file root@IP2 'sed -i '"'"'s/#\ storm.zookeeper.servers.*/storm.zookeeper.servers:/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 'sed -i '"'"'0,/^#[[:space:]]*-[[:space:]]*\"server1\".*/s//" - \"'${IParray[1]}'\""/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 'sed -i '"'"'s/#\ nimbus.host:.*/"nimbus.host: \"'${IParray[2]}'\""/'"'"' /root/apache-storm-0.9.3/conf/storm.yaml'
        ssh -i file root@IP2 './zookeeper-3.4.6/bin/zkCli.sh -server ${IParray[1]} &'
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm nimbus &' &
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm ui &' &
        sleep 10
        ssh -i file root@IP2 './apache-storm-0.9.3/bin/storm supervisor &' &
    fi
    ((i++))
done
I'm starting several processes on 2 virtual machines that are deployed from the same image, so they are essentially identical. The confusing part is that the first ssh command (zkServer.sh start) works fine, but when the script tries to execute the three "sed" ssh commands I get the error messages above. But then the last four ssh commands work fine again. That does not make any sense to me...
Several things I can think of:
Most sshd daemons won't allow root access. Heck, many versions of Unix/Linux no longer allow root login. If you need root access, you need to use sudo.
The sshd daemon on the remote machine isn't running. Although rare, some sites may never have had it set up, or purposely shut it off as a security measure.
Your ssh commands themselves are incorrect.
Instead of executing the ssh commands in your shell script, modify the shell script just to print out what you're attempting to execute. Then, see if you can execute the command outside of the shell script. This way you can determine whether the problem is the shell script, or the problem is with the ssh command itself.
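For example, a quick sketch of that idea applied to the commands from the script above: put echo in front of each ssh invocation so the exact command line is printed instead of executed, then copy one line and run it by hand:

echo ssh -i file root@IP1 './zookeeper-3.4.6/bin/zkServer.sh start'
echo ssh -i file root@IP2 'sed -i ... /root/apache-storm-0.9.3/conf/storm.yaml'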
If your ssh commands don't work from the command line either, you can then simplify them and see if you can determine what the issue could be. You have ssh -i file root@IP2. Is this supposed to be ssh -i $file root@$IP2? (i.e., you're missing the leading sigil).
$ ssh -i file root@$IP2 ls    # Can't get simpler than this...
$ ssh -i file root@$IP2       # See if you can remotely log on...
$ ssh root@IP2                # Try it without an 'identity file'
$ ssh bob@IP2                 # Try it as a user other than 'root'
$ telnet IP2 22               # Is port 22 even open on the remote machine?
If these don't work, then you have some very basic issue with the setup of your remote machine's sshd command.
Is there a way to list all available SSH authentication methods for the local host using the command line? Basically, I want to see the same list which the server would announce when trying to connect from a (remote) client.
Note: I do not want to search through /etc/ssh/sshd_config, as that would require too much knowledge of which authentication methods exist in general (e.g. sshd_config may just be empty).
Run ssh -v server and look for the first "Authentications that can continue" line.
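For example, a non-interactive way to pull out just that line (a sketch; ssh -v writes its debug output to stderr, and BatchMode keeps it from prompting for a password):

ssh -v -o BatchMode=yes server exit 2>&1 | grep -m1 'Authentications that can continue'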
There is a better way (from here, modified by me for the localhost case):
ssh -o PreferredAuthentications=none -o NoHostAuthenticationForLocalhost=yes localhost
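With no authentication methods offered, the server rejects the attempt, and the resulting "Permission denied (...)" message lists the methods it would have accepted (the exact list depends on the server's configuration).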
nmap can do this too using ssh-auth-methods:
nmap -p 22 --script ssh-auth-methods localhost
Example output:
Starting Nmap 7.91 ( https://nmap.org ) at 2021-08-16 12:07 +07
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000087s latency).
PORT STATE SERVICE
22/tcp open ssh
| ssh-auth-methods:
| Supported authentication methods:
| publickey
|_ password
Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
What it does behind the scenes is in ssh-auth-methods.nse: it uses libssh2 to connect with a random username and lists the possible authentication methods.
(It's possible to print out the effective sshd configuration options with sshd -T; this will show which options are in effect even if sshd_config is empty. For this use case, running sshd -T | egrep 'permitrootlogin|authentication' would be useful.)
I have to access my server this way: localhost -> remote1 -> remote2 (my server)
[xxxx@localhost] $ ssh yyyy@remote1
[yyyy@remote1] $ ssh zzzz@remote2
[zzzz@remote2] $ echo "now I am logged into my server..."
I know how to transfer files with scp. However, I have no read or write permissions on remote1. How can I transfer a file to remote2?
Another alternative could be to use a ProxyCommand:
scp -o ProxyCommand='ssh yyyy@remote1 netcat %h %p 2> /dev/null' zzzz@remote2:fromfile tofile
if remote1 has netcat installed. Other viable options could be nc or socat (the latter has a different syntax).
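If your local OpenSSH is recent enough (and remote1 allows TCP forwarding), the same idea works without netcat on remote1 at all, using the -W option (a sketch, reusing the usernames from the question):

scp -o ProxyCommand='ssh -W %h:%p yyyy@remote1' zzzz@remote2:fromfile tofile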
Try this,
ssh -L localhost:8022:remote2:22 remote1
Now you can use local port 8022 to reach port 22 of remote2 via remote1. This session should remain active whenever you need to transfer files. Use
scp -P 8022 /path/local/file 127.0.0.1:/path/on/remote2
This is commonly called SSH tunneling. You can search for it and learn a lot more about it.
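Put together, the whole sequence might look like this (a sketch, reusing the usernames from the question; -f -N backgrounds the tunnel without running a remote command):

ssh -f -N -L 8022:remote2:22 yyyy@remote1
scp -P 8022 /path/local/file zzzz@127.0.0.1:/path/on/remote2

If ssh complains about the host key for [127.0.0.1]:8022, adding -o NoHostAuthenticationForLocalhost=yes to the scp command works around it.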
I am using the netcat utility on Linux to receive output from a program on a Windows machine. My problem is that the program on the Windows machine does not always produce output.
How can I check whether a connection has been made to netcat?
What I am doing so far is "nc -l -v 9103 > output", then I check the size of output. The problem this poses is that netcat only writes to the file after a certain buffer size has been reached or a newline character is encountered, so in some cases, even though a connection has been made, the file size is detected as zero.
How can I check if someone has made a connection to netcat?
I tried using
nc -l -v -e someprog.exe 9103 > output
but my netcat doesn't seem to support this.
Below are the options I have:
$ nc -h
usage: nc [-46DdhklnrStUuvzC] [-i interval] [-p source_port]
[-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_version]
[-x proxy_address[:port]] [hostname] [port[s]]
Command Summary:
-4 Use IPv4
-6 Use IPv6
-D Enable the debug socket option
-d Detach from stdin
-h This help text
-i secs Delay interval for lines sent, ports scanned
-k Keep inbound sockets open for multiple connects
-l Listen mode, for inbound connects
-n Suppress name/port resolutions
-p port Specify local port for remote connects
-r Randomize remote ports
-s addr Local source address
-T ToS Set IP Type of Service
-C Send CRLF as line-ending
-t Answer TELNET negotiation
-U Use UNIX domain socket
-u UDP mode
-v Verbose
-w secs Timeout for connects and final net reads
-X proto Proxy protocol: "4", "5" (SOCKS) or "connect"
-x addr[:port] Specify proxy address and port
-z Zero-I/O mode [used for scanning]
Port numbers can be individual or ranges: lo-hi [inclusive]
Verbose mode will write connection information to stderr, and you can redirect stderr to a file. The verbose log will have something like
connect to [xxx] from [xxxx]
Try
nc -l -v -p 9103 -k 1> output 2>connect.log
and monitor connect.log for connectivity.
If you don't use -k, netcat quits after the first connection.
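A sketch of how that could be automated, based on the "connect to ... from ..." line quoted above (the exact wording varies between nc versions):

nc -l -v -p 9103 -k 1> output 2> connect.log &
# poll the verbose log until at least one inbound connection has been logged
while ! grep -q 'connect to' connect.log; do
    sleep 1
done
echo "netcat has accepted at least one connection"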
If you can upgrade your copy of netcat: modern versions (1.10, for one) have an option to execute a program (or a shell command) on connect. Otherwise, you can make netcat think it is running in a terminal (to disable buffering of stdout), for example by using script (it simply saves everything from stdin/stdout/stderr to the given file). Or use the logging features of screen and tmux.
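A sketch of the script idea, assuming the util-linux version of script (which accepts -c) and the listener from the question:

# run nc under a pseudo-terminal so its output is not block-buffered;
# everything it prints, including the received data, is also copied to nc_session.log
script -c 'nc -l -v 9103' nc_session.log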
I need to collect tcpdump captures from different interfaces on a remote host.
Currently I do it in the following way:
ssh remotehost "tcpdump -i iface1 -w - " > iface1_dump.pcap &
ssh remotehost "tcpdump -i iface2 -w - " > iface2_dump.pcap
I wonder if there is a way to do this over a single ssh connection (maybe some complex shell redirection is the cure) to minimize packet loss while the second ssh command is on its way.
Two conditions:
the remote host has very limited disk space, so I can't write the capture to a file locally on that host
tcpdump -i any does not preserve the Ethernet headers (it captures in Linux "cooked" mode), so I can't use it
You create a file "SEND_DATA" on the target machine, in which you put these 2 lines:
tcpdump -i iface1 -w - | nc <IP> <PORT1> &
tcpdump -i iface2 -w - | nc <IP> <PORT2>
Almost every machine has netcat, so it works. Mark SEND_DATA executable.
Next, you open the listening ports on your PC, and run the script on the remote machine:
nc -l -p PORT1 > OUT1 &
nc -l -p PORT2 > OUT2 &
ssh remotehost SEND_DATA
At that moment, the files OUT1 and OUT2 will start receiving data.
You may also need to consult the manuals for the two versions of nc, because the parameters sometimes differ.
The two-ssh approach is probably the nicest method, but you could also pipe the first one through sed 's/^/one /', in the background, and the second through sed 's/^/two /', and then pull them apart on the local host with "egrep '^one ' | sed 's/one //'", for example.
You could also save the output in a pair of files, and scp them back when "done".
ssh remotehost "tcpdump -iiface1 -w- &
tcpdump -iiface2 -w- >&2 2>/dev/null" >iface1_dump.pcap 2>iface2_dump.pcap
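The idea: on the remote host the first tcpdump writes its capture to stdout (and is backgrounded), the second writes its capture to stderr, and locally ssh's stdout and stderr are redirected into the two pcap files. The 2>/dev/null redirections keep tcpdump's own status messages out of the second capture file.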