Use ssh to launch remote process and exit - linux

I am trying to use ssh from the command line to launch a python server on a remote server using the following command:
$ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
-o ConnectTimeout=5 -f -i mykey.pem user@99.99.99.99 \
'python -m SimpleHTTPServer 3000 & echo $! > /home/user/pysrv.pid'
After the launch my ssh session goes to the background but does not exit as long as the python server is running. Is there a way I can set up the command so that ssh does not stick around as a background process on my current machine?

You can do
nohup python -m SimpleHTTPServer 3000 & echo $! > /home/user/pysrv.pid
It will create a detached task that doesn't need the parent (ssh).
OTOH, if you kill the ssh server process, you won't be able to connect again. Is this what you want, or are you just trying to kill the session? If it's just the session, it should go away by itself after the connection is dropped.
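Putting that together with the invocation from the question (a sketch reusing the same key, address, and pid path, with output redirected so nothing keeps the connection open), the full command would look something like:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
-o ConnectTimeout=5 -f -i mykey.pem user@99.99.99.99 \
'nohup python -m SimpleHTTPServer 3000 > /dev/null 2>&1 & echo $! > /home/user/pysrv.pid'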

The ssh command exits when the TCP connection is closed. If you redirect stdin, stdout, and stderr of the remote command, nothing on the remote side stays attached to the TCP connection and it will close.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=5 -f -i mykey.pem user@99.99.99.99 'python -m SimpleHTTPServer 3000 >/dev/null 2>&1 </dev/null & echo $! > /home/user/pysrv.pid'
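Later on, the saved pid file can be reused from another one-off ssh call to stop the server, for example (a sketch assuming the same key and pid path as above):
ssh -i mykey.pem user@99.99.99.99 'kill $(cat /home/user/pysrv.pid)'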

Related

Can't get script to run in the background

I would like my script to run in the background, SSH into another computer, run tcpdump, produce a pcap file, and save it to my local computer. I have all of this working save for the running-in-the-background portion.
I have looked at several solutions on Stack Overflow (example) but they don't seem to work for me. Admittedly I am a novice with bash however so it is entirely possible that I am reading them incorrectly.
ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" &>/dev/null &disown \ > /root/Destop/BashPcap/01Bash.pcap
Check your quotation endings; maybe that's the problem...
Or you can save the file remotely and download it back using scp (SecureCoPy).
E.g.:
scp root@ipaddress:/path/to/file ~/Documents/path-where-you-want-to-save.pcap
As far as I understood your task this is what you want:
nohup ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" &> /root/Destop/BashPcap/01Bash.pcap &
In simple words:
nohup - it will allow you to close your terminal and the script will continue to run
ssh ... - this is the command to execute
&> - redirect both stdout and stderr to file (Bash 4)
& - sends command to the background
Note: &> will send both stdout and stderr to the file; you need this if you want the summary lines from tcpdump in your file. They are written to stderr:
N packets captured
X packets received by filter
Y packets dropped by kernel
If you do not want these lines, send stderr to /dev/null instead:
nohup ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" 2>/dev/null > /root/Destop/BashPcap/01Bash.pcap &
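If you would rather keep the summary lines without mixing them into the capture, one option is to send stderr to a separate log file; the .log path here is only an example:
nohup ssh root@ipaddress "tcpdump -c 400000 -s 0 -U -n -w - -i eth0 not arp" 2> /root/Destop/BashPcap/01Bash.log > /root/Destop/BashPcap/01Bash.pcap &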

Issues when spawn'ing ssh on expect ( {$var} vs. "$var" )

I have an expect script that so far is working fine... it spawns an ssh session over an existing ssh tunnel like this:
spawn ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o {ProxyCommand=nc -X 5 -x localhost:8888 %h %p} "$user_ip"
$user_ip has the username and destination IP, as expected by ssh, like username@IP
The problem is that sometimes, port 8888 is being used by another ssh tunnel, and every time this happens I have to tweak the code and change the tunnel port.
So, I am trying to pass the port in a variable, to avoid touching the code all the time.
I am getting the tunnel port from command line, as:
set proxy_port [lindex $argv 2]
and then
spawn ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o {ProxyCommand=nc -X 5 -x localhost:"$proxy_port" %h %p} "$conn"
I see that the proxy_port variable is properly set, but when the script tries to spawn ssh, I get:
spawn ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o {ProxyCommand=nc -X 5 -x localhost:8888 %h %p} USERID@10.0.0.1
command-line line 0: garbage at end of line; "-o".
send: spawn id exp5 not open
while executing
"send "$pwd\r""
(file "./amm-qev.new.exp" line 36)
In the error above, I see that the port was properly replaced, but it is complaining about ssh syntax.
What is wrong there that my weak eyes are not catching?
Tcl's {...} syntax is like a single-quoted string ('...') in the shell, where $var would not be expanded. You should use double quotes instead:
spawn ssh -o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-o "ProxyCommand=nc -X 5 -x localhost:$proxy_port %h %p" \
"$conn"

Skip password prompt using sh script

I have a script that reads a list of server IPs and uses ssh with a .pem key to run commands, but some servers prompt for a password. I want to skip those so that the script moves on to the next IP.
Below is the script:
cat privateiptest-ss | while read LINE
do
echo $LINE >> ss-prodcht1.txt
stackname=$LINE
ssh -o "PasswordAuthentication=no" -o "StrictHostKeyChecking no" -t -t -i key.pem ec2-user#$stackname "bash -s" < sh.sh
done
If you use the option BatchMode=yes with ssh, i.e.
ssh -o "BatchMode=yes" -o "StrictHostKeyChecking=no" -t -t -i key.pem ec2-user#$stackname "bash -s" < sh.sh
then ssh will never prompt for a password. For servers that do require a password, ssh will fail.
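If you also want a record of which hosts were skipped, you can test the exit status inside the loop. This is only a sketch of the question's script with that check added; skipped-hosts.txt is an example name:
cat privateiptest-ss | while read LINE
do
echo $LINE >> ss-prodcht1.txt
stackname=$LINE
if ! ssh -o "BatchMode=yes" -o "StrictHostKeyChecking=no" -t -t -i key.pem ec2-user@$stackname "bash -s" < sh.sh
then
# ssh exits non-zero when it cannot authenticate or connect
echo "$stackname skipped" >> skipped-hosts.txt
fi
done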

Ksh script: How to remain in ssh and continue the script

For my script I want to ssh into a remote host and remain on the remote host after the script ends, with the directory on the remote host already changed when the script ends.
#!/bin/ksh
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X ssh mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD'
The PWD gets changed correctly before the second ssh. The reason for the second ssh is that it leaves the script on the correct remote host, but it does not retain the directory change; I attempted that by putting commands after it, but they won't execute.
Does anyone have any ideas?
Just launch a shell at the end of the command list:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X ssh mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD; ksh'
If you want the shell to be a login one (i.e. one that reads .profile), use exec -l:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec -l ksh'
If the remote server uses an old ksh release that doesn't support the exec -l builtin and if bash or ksh93 is available, here is a workaround:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec bash -c "exec -l ksh"'
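The same pattern works for the second directory from the question, for example:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder2; exec -l ksh'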

how to start nc over a remote machine in a screen

I have a requirement to start nc on a remote machine inside screen and start a transfer of a file from another remote machine inside screen. I am trying to run this from a deploy machine (jenkins) with a bash script.
On remote machine 1, i.e. tester1:
ssh -tt mysql@tester1 'screen -d -m nc -l -w 60 5555 | tar xvif -'
On remote machine 2, i.e. tester2:
ssh -tt tester2 'screen -d -m sudo -u mysql innobackupex --stream=tar --databases="sampledb" /mysql-backup/prodfullbkp | nc -w 30 tester 5555'
The two commands above are not working when run from the deploy machine. Could someone please suggest a better way of doing this?
Thanks in advance =)
You can have a better solution like
ssh user@host << EOF
# command to execute
EOF
i.e. tester1 would be:
ssh -tt mysql@tester1 << EOF
screen -d -m nc -l -w 60 5555 | tar xvif -
EOF
tester2 would be
ssh -tt tester2 << EOF
screen -d -m sudo -u mysql innobackupex --stream=tar --databases="sampledb" /mysql-backup/prodfullbkp | nc -w 30 tester 5555
EOF
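After running both from the deploy machine, you can check that the detached screen sessions are actually up, for example:
ssh mysql@tester1 'screen -ls'
ssh tester2 'screen -ls'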
