Automatic reverse SSH tunneling - Linux

I need to connect to my office computer through SSH, but all inbound ports there are blocked and there is nothing I can do about that. I'd like to connect with reverse SSH tunneling. For that I want to use an external server that is always on, and I want to set up my office computer to run the ssh command right at boot (before login).
I tried by modifying /etc/rc.local. These are the permissions:
-rwxr-xr-x 1 root root 385 nov 2 17:27 /etc/rc.local
The file:
#!/bin/sh -e
#
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sleep 1
sshpass -p 'pass' ssh -N -R 9091:localhost:22 user@server &
exit 0
Running /etc/rc.local manually lets me connect from my home computer to my office computer, so the command does what it's supposed to, but it doesn't seem to run during boot.
Any ideas on how to make the script run during boot?
Thanks.
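As a sketch of one commonly recommended alternative (not from the question itself): instead of sshpass in rc.local, set up key-based authentication for a dedicated user and let systemd keep the tunnel alive, restarting it whenever it drops. The unit name, user name, and host below are placeholders.
# /etc/systemd/system/reverse-tunnel.service (hypothetical name)
[Unit]
Description=Reverse SSH tunnel to the external server
After=network-online.target

[Service]
# Assumes a passwordless SSH key for this user is already authorized on the server
ExecStart=/usr/bin/ssh -N -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -R 9091:localhost:22 user@server
Restart=always
RestartSec=10
User=tunnel

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now reverse-tunnel.service; unlike rc.local, the tunnel then comes back automatically after network hiccups.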

Related

Can rc.local wait for Bash script to finish before booting

I am running the rolling release of Kali Linux and have started to write a script, executed by rc.local upon booting, that allows the user to update the hostname of the computer.
rc.local:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/root/hostnameBoot
exit 0
hostnameBoot Script:
#!/bin/bash
# /etc/init.d/hostnameBoot
# Solution Added
exec < /dev/tty0
echo "Enter desired hostname:"
read hostname
echo "New hostname: $hostname"
#exit 0
As you can see, hostnameBoot currently prompts the user to enter a new hostname and then echoes the hostname back.
Upon booting, rc.local executes the script but does not prompt the user to enter a new hostname.
Sample Boot Output:
- misc boot info -
Enter desired hostname:
New hostname:
The sample boot output appears all at once, without giving the user a chance to enter a new hostname; once the lines are shown, the system continues to the login screen. The desired behavior would give the user time to enter a new hostname and then show the input they submitted.
Note: the script is not the end product; it was just a proof of concept for using rc.local to trigger the script.
Boot scripts, including rc.local, are usually not executed in interactive mode (i.e. with a fully functioning terminal where the user can enter data). Their output is redirected to the console (so you can see the boot messages) but the input is most likely /dev/null (so read returns immediately with nothing to read).
You will need either to redirect the read explicitly so that it always uses a fixed terminal (e.g. read </dev/tty0) or to open a virtual console for the user input (e.g. openvt -s -w /root/hostnameBoot). See this answer for more details.
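For illustration, a minimal sketch of the second option, assuming the script stays at /root/hostnameBoot as above: run it on its own virtual console from rc.local so that read gets a real terminal.
# in /etc/rc.local, before exit 0
# -s switches to the new console, -w waits for the script to finish before boot continues
openvt -s -w -- /root/hostnameBoot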

Background shell script can't reach directories after ssh logout, even with nohup

I want to run a shell script in the background on a server machine, starting it from an SSH connection. Even though I start the background script with nohup, it fails with a directory-unreachable error as soon as I close my SSH connection (and no sooner).
runInBackground.sh:
#!/bin/bash
...
nohup ./run.sh > /dev/null 2> local/errorLog.txt < /dev/null &
run.sh:
#!/bin/bash
...
while true ; do
    ...
    cd optaplanner-examples
    mvn exec:exec    # calls the Java process
    cd ..
done
So when I run runInBackground.sh, everything works fine for hours, until I disconnect my ssh connection.
As soon as I log out, errorLog.txt fills up with:
java.io.FileNotFoundException: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/output/
./run.sh: line 64: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/processed/failed_machineReassignmentBenchmarkConfig.xml: No such file or directory
fatal: Could not change back to '(unreachable)/server/optaplanner-simple-benchmark-daemon/local/optaplannerGitClone/optaplanner': No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
... // 1000+ more of that ls error
(Full source code)
Well, it's not necessarily an encrypted home directory; more likely it's an auto-mounted home directory (e.g. over NFS). It's mounted at session startup and unmounted on exit. An encrypted home dir is only one of the possible reasons to use such a technique.
The main question is what rule determines whether a user needs the home dir or not. I would expect it to be an allocated pty. You could test whether that's actually true by starting a non-interactive SSH session without a pseudo-terminal: ssh -T user@host ls /home/myUser/server. I would expect that in this case you won't get a proper directory listing.
Then I would use a program like screen to prolong the interactive session's lifetime beyond the SSH session limits.
The server might use some other mechanism to provide the home directory for interactive SSH sessions, e.g. monitoring interactive sessions listed in utmp. In that case you will need a program that keeps the record alive as long as your service needs it. Perhaps you could use an automatically re-established SSH session. For example, I use the following systemd unit to keep an SSH tunnel open from one of my workstations across different private networks:
[Unit]
Description=A tunnel to SOME_HOST
PartOf=sshd.service
Requires=network.service
[Service]
ExecStart=/usr/bin/ssh -N -q -R 2222:localhost:22 SOME_HOST
Restart=on-failure
RestartSec=5
User=tunnel
Group=tunnel
[Install]
WantedBy=sshd.service
WantedBy=network.service
When a failure occurs, systemd automatically restarts the unit and SSH session is re-established.
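For completeness, installing and activating such a unit looks roughly like this (the unit file name is illustrative):
# after saving the unit as /etc/systemd/system/ssh-tunnel.service
systemctl daemon-reload
systemctl enable ssh-tunnel.service
systemctl start ssh-tunnel.service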
I always use the screen utility to run my scripts instead of nohup.
With screen your process will keep running even if your current ssh session times out or gets disconnected.
Use as follows -
apt-get install screen (On Debian based Systems)
OR
yum install screen (On RedHat based Systems)
To run your application and watch its output live (provided your script does not start a background process and writes to stdout and/or stderr):
cd your_app_directory_path
screen ./your_script.sh
Once you are done and want to leave (without stopping the process), use CTRL + A + D to detach the screen.
To list the sessions running under the screen utility:
screen -ls
To reattach a running session:
screen -r <screen id or name>
Hope this was useful.
One workaround is to use screen to keep the ssh session open. You can use screen -r to reconnect to the session if you get disconnected.

how to send different commands to multiple hosts to run programs in Linux

I am an R user. I always run programs on multiple campus computers. For example, if I need to run 10 different programs, I have to open PuTTY 10 times to log into 10 different computers (their OS is Linux) and submit one program to each. Is there a way to log in to 10 different computers and send them commands at the same time? I use the following commands to submit programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that uses ssh to run the command on each remote host. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            # quote the remote command so the output file is written on the remote host
            ssh "$h" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
            ;;
        host2)
            ssh "$h" 'nohup Rscript L_2_cc.R > L_2_sh.txt'
            ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case).
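For example, here is a sketch of that simplification, assuming the host names can be numbered to match the L_<n>_cc.R files from the question (the pairing scheme is an assumption):
#!/bin/bash
# Hypothetical variant: pair host N with script L_N_cc.R
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
n=1
for h in $host_list
do
    # quoting the remote command makes the output file land on the remote host;
    # the trailing & starts all jobs without waiting for each one to finish
    ssh "$h" "nohup Rscript L_${n}_cc.R > L_${n}_sh.txt" &
    n=$((n+1))
done
wait    # block until every remote job has returned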
Based on your topic tags I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are using is *nix based so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you to input a password each time you log into every target. You can then script the command execution on each target with something like the following:
#!/bin/bash
# kill the script if we throw an error code during execution
set -e
# define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1 )
# define the associated user names for each host
users=( joe bob steve )
# counter to track iteration for the correct user name
j=0
# iterate through each host and ssh into each with a user@host combo
for i in ${hosts[*]}
do
    # modify the ssh command string as necessary to get your script to execute properly
    # you could even add commands to transfer the file into which you seem to be dumping your results
    ssh ${users[$j]}@$i 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
# exit with no error
exit 0
If you set up public key authentication, you should just have to execute the script to make every remote host do its thing. You could even look into loading the users/hosts data from a file to avoid hard-coding that information into the arrays.
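A sketch of that file-driven variant, assuming a whitespace-separated hosts.txt with one user host script triple per line (the file name and format are made up for illustration):
#!/bin/bash
set -e
# hosts.txt (hypothetical), e.g.:
#   joe    192.168.0.10    L_1_cc.R
#   bob    192.168.0.11    L_2_cc.R
while read -r user host script
do
    # -n stops ssh from swallowing the rest of hosts.txt on stdin
    ssh -n "${user}@${host}" "nohup Rscript ${script} > ${script%.R}_sh.txt"
done < hosts.txt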

start a python script as soon as power is supplied

I work on a Raspberry Pi (Model B, Raspbian) via an SSH session.
I want the Python script to run as soon as I plug in the power supply to my Raspberry Pi, without connecting the Ethernet cable.
I have found people asking about starting a script on boot, and the suggested solution was to add the command to rc.local, so I did.
It looks like this now:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
sudo python tt3.py --cascade=s.xml 0
exit 0
but it doesn't work, either when I plug in the power supply or when I start an SSH session.
I think you are headed in the correct direction, but the issue probably revolves around tt3.py and s.xml not being found in the directory where rc.local is run (its cwd, i.e. current working directory).
Try making the paths to the files explicit. Also check /var/log/messages for any error messages relevant to your script.
Also remember, rc.local is just another file which can be executed, so to test whether this will work you can always run ./rc.local from its directory.
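As a concrete sketch of that advice (the paths below are guesses at where the question's files might live):
# in /etc/rc.local, before exit 0; sudo is unnecessary since rc.local already runs as root
# the trailing & keeps a long-running script from blocking the rest of the boot
/usr/bin/python /home/pi/tt3.py --cascade=/home/pi/s.xml 0 >> /var/log/tt3.log 2>&1 &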

How to run a Linux Terminal command at startup

I'd like to start my Siriproxy server on my Raspberry Pi at startup. I have to type
cd siriproxy
rvmsudo siriproxy server
in the Terminal to start the Siriproxy.
Is there a way to run these commands at startup?
Thanks a lot,
David
This is the script I edited:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
printf "My IP address is %s\n" "$_IP"
fi
#I added this line
/home/pi/siriproxy server
exit 0
You can add commands that are run as root to the /etc/rc.local script, and they will then be run at boot-up. (http://ubuntuforums.org/showthread.php?t=1822137)
From a terminal on your raspberry pi, run:
sudo nano /etc/rc.local
Add the following before the exit 0 line:
/path/to/siriproxy server
You can get the path of siriproxy by typing
which siriproxy
or, depending on how siriproxy is installed on your Pi, it could be the full path of the directory you cd into, with "siriproxy" appended.
Save the file and reboot to see it work! Hope this helped.
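One caveat worth noting: rc.local blocks until each command returns, so a long-running server is best sent to the background with its output captured somewhere. A sketch, reusing the /path/to placeholder from above:
# in /etc/rc.local, before exit 0
/path/to/siriproxy server > /var/log/siriproxy.log 2>&1 &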
Try
screen -S ttlp
cd /home/pi/siriproxy
then
rvmsudo siriproxy server
I haven't tried this yet, I will install it on one of my Pi's and help you.
Regards,
IC0NIC
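If you would rather have the screen approach happen at boot instead of by hand, a detached session can be started from rc.local. A sketch, assuming siriproxy lives in /home/pi/siriproxy as above:
# -dmS starts a detached session named ttlp; reattach later with: screen -r ttlp
screen -dmS ttlp bash -c 'cd /home/pi/siriproxy && rvmsudo siriproxy server'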
