Sending Ctrl+C over ssh - Linux

On many of my servers autofs is hung and a process is trying to access a mount. When I ssh to the remote host, the session hangs there until I press Ctrl+C on my keyboard:
dew:~ # ssh dew00922
but when I press Ctrl+C it goes to the prompt:
dew:~ # ssh dew00922
^C-bash-3.2#
-bash-3.2#
I'm trying to send a trap signal via ssh, but that also hangs:
dew:~ # ssh dew00922 'trap 2 && service autofs restart'
Any help in fixing this issue would be appreciated.

When you ssh in, it's trying to run /etc/profile and then ~root/.profile (or ~root/.bash_profile or ~root/.bash_login) and ~root/.bashrc. One of those scripts runs a command that is trying to access a hung filesystem. When you press ^C, you're killing that script and getting a prompt from the bash on dew00922. At that point you are logged into dew00922 and can run commands. Any command you run that tries to access a hung filesystem will hang.
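If the hang really does come from one of those startup files, one workaround (a sketch only; on some setups a remote bash may still read ~/.bashrc even for a plain command) is to request a shell that skips them:
ssh -t dew00922 bash --noprofile --norc
From that shell you can try to clean up, keeping in mind that anything touching the hung mount will still block.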

timeout is the tool made for such problems:
ssh dew00922 'timeout 2 service autofs restart'
See man timeout for more information
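If two seconds turns out to be too short for a service restart, timeout also accepts longer durations and a follow-up kill; assuming the GNU coreutils version of timeout, something like this is possible:
ssh dew00922 'timeout -k 5 30 service autofs restart'
Here the restart gets 30 seconds, receives SIGTERM at the deadline and SIGKILL 5 seconds later; timeout normally exits with status 124 when it had to cut the command short, so the ssh exit code tells you whether the restart completed.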

Related

Linux - shutdown-script with SSH

I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over ssh.
The script works when I run it by itself, but when it is called from the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable:
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink in rc0.d:
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know that the script works outside of the shutdown routine when I call it directly, and that the shutdown symlink I made is also partially working, because I see the changes in test.txt every time I shut down.
Can anyone help me solve this problem?
Have you tried with single quotes?
The first result on Google has it:
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
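For example, applied to the host from the question, the single-quoted form of the remote command would look like this (a sketch, assuming key-based login is already set up):
ssh pi@10.0.0.98 'sudo shutdown -h now'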
What about sudo, how do you solve entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
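A common approach from answers like those is to let sudo run that one command without a password on the remote Pi, via a sudoers rule edited with visudo; a minimal sketch (the user name and path are assumptions):
pi ALL=(ALL) NOPASSWD: /sbin/shutdown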
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it works now, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will note them anyway. In the meantime, because I had not managed to get the script working, I made a button to shut the system down manually. I also made a script that backs up the MySQL database (which is on the Raspberry Pi I want to switch off) and copies the backup via scp to the Raspberry Pi that is supposed to switch the other one off automatically with the shutdown script; a key is generated so scp does not ask for a password.
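For reference, that key-based login is usually set up roughly like this, run as the account that will later call ssh/scp (a sketch; the target user and address are taken from the script below):
ssh-keygen -t rsa
ssh-copy-id pi@10.0.0.99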
I have also changed my script to get a log message out of it:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my script worked on the first day but not on the following days up to now.
I structured a command to suspend or shut down a remote host over ssh. You may find it useful. It can suspend or shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to give the remote user permission to shut down or suspend via sudo without a password. Additionally, the local and remote machines should be set up so SSH works without an interactive login (e.g. key-based authentication). The approach is most useful for suspending a machine, since a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)

Background shell script can't reach directories after ssh logout, even with nohup

I want to run a shell script in the background on a server machine, starting it from an ssh connection. Even though I run the background script with nohup, it fails with a directory-unreachable error as soon as I close my ssh connection (and no sooner).
runInBackground.sh:
#!/bin/bash
...
nohup ./run.sh > /dev/null 2> local/errorLog.txt < /dev/null &
run.sh:
#!/bin/bash
...
while [ true ] ; do
...
cd optaplanner-examples
mvn exec:exec  # calls the java process
cd ..
done
So when I run runInBackground.sh, everything works fine for hours, until I disconnect my ssh connection.
As soon as I log out, errorLog.txt fills up with:
java.io.FileNotFoundException: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/output/
./run.sh: line 64: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/processed/failed_machineReassignmentBenchmarkConfig.xml: No such file or directory
fatal: Could not change back to '(unreachable)/server/optaplanner-simple-benchmark-daemon/local/optaplannerGitClone/optaplanner': No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
... // 1000+ more of that ls error
(Full source code)
Well, it's not necessarily an encrypted home directory; more likely it's an auto-mounted home directory (e.g. over NFS). It's mounted upon session startup and unmounted upon exit. An encrypted home dir is only one of the possible reasons to use such a technique.
The main question is what rule determines whether a user's home dir gets mounted or not. I would expect it to be tied to an allocated pty. You could test whether that is actually the case by starting a non-interactive SSH session without a pseudo-terminal: ssh -T user@host ls /home/myUser/server. I would expect that in this case you won't get a proper directory listing.
Then I would use a program like screen to prolong the interactive session's lifetime beyond the SSH session limits.
The server might use some other mechanism to provide the home directory for interactive SSH sessions, e.g. monitoring interactive sessions listed in utmp. In that case you will need a program that keeps such a record alive for as long as your service needs it. Perhaps you could use an automatically re-established SSH session. For example, I use the following systemd unit to keep an ssh tunnel up from one of my workstations in a different private network:
[Unit]
Description=A tunnel to SOME_HOST
PartOf=sshd.service
Requires=network.service
[Service]
ExecStart=/usr/bin/ssh -N -q -R 2222:localhost:22 SOME_HOST
Restart=on-failure
RestartSec=5
User=tunnel
Group=tunnel
[Install]
WantedBy=sshd.service
WantedBy=network.service
When a failure occurs, systemd automatically restarts the unit and the SSH session is re-established.
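To try a unit like this, it would typically be saved under /etc/systemd/system/ and enabled with systemctl (the file name below is an assumption):
sudo cp ssh-tunnel.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now ssh-tunnel.service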
I always use the screen utility to run my scripts instead of nohup.
With screen your process will keep running even if your current ssh session times out or gets disconnected.
Use it as follows:
apt-get install screen (On Debian based Systems)
OR
yum install screen (On RedHat based Systems)
To run your application and check the output live (provided your script does not start a background process and writes its output to stdout and/or stderr):
cd your_app_directory_path
screen ./your_script.sh
Once you are done and want to leave (without stopping the process), press Ctrl-A followed by D to detach the screen.
To list the sessions that are running under the screen utility:
screen -ls
To reattach to a running session:
screen -r <screen id or name>
Hope this was useful.
One workaround is to use screen to keep the ssh session open. You can use screen -r to reconnect to the session if you get disconnected.

bash remote command with sudo does not work

I already had this question answered here, bash - running remote script from local machine, about how I could run a remote command with sudo involved. I thought it was working, because I got a message saying my server was successfully restarting, so I accepted the answer. But today I saw that the server was actually not restarting, but being killed for some reason.
This is what I did when I ran the command the first time:
ssh user@host.com -t 'sudo /etc/init.d/script restart' # or -t just after ssh, same thing
I got this message:
Restarting openerp-server: Stopping openerp-server: openerp-server.
Starting openerp-server: openerp-server.
Connection to host.com closed.
So then I thought everything was good. But when I actually went to check whether the process had restarted, I saw that it was not running.
And when I tried to restart it again (with the same command), I got this message:
Restarting openerp-server: Stopping openerp-server: start-stop-daemon: warning: failed to kill 25205: No such process
openerp-server.
Starting openerp-server: openerp-server.
Connection to host.com closed.
But if I use the same command while directly connected to the remote server, everything works fine and my script restarts the server normally.
By the same command I mean this:
ssh user@host.com
sudo /etc/init.d/script restart
So what the heck is going on here?
It seems likely that the script that starts it up is doing something naughty that relies on the TTY staying alive briefly after the command returns. It is probably the immediate exit that is causing trouble. If it starts a background job that's attached to the shell inside the TTY, and detaches shortly afterwards, then closing the connection might kill the shell and kill the job inside it. That would explain why, when you restart, the script thinks there's a process number for the service, but then can't find it: maybe the process number gets logged somewhere, but then the process gets killed off before it can get going.
To confirm, you might try a couple of things. Log in remotely, run your sudo command, then exit immediately:
ssh user@host.com
sudo /etc/init.d/script restart; exit
and see if this immediate exit also hits the same problem.
Also try
ssh user@host.com -t 'sudo /etc/init.d/script restart; sleep 30'
to force it to wait for a bit, and see if that gets you anywhere.
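If the sleep test confirms that theory, one possible workaround (a sketch; the log path is an assumption) is to detach the restart from the terminal on the remote side, so the disappearing TTY no longer takes the service down with it:
ssh -t user@host.com 'sudo nohup /etc/init.d/script restart > /tmp/restart.log 2>&1'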

Execute script on remote host - output given in local host

I am trying to execute two scripts which are available as .sh files on a remote host and have 755 permissions.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
The above lines are in a script on the local host.
While running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly. It kills the running process it was intended to kill without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it is intended to start does not start on the remote host.
Can anyone please let me know:
Is this the correct way to execute scripts on a remote host?
Should I also see the output of the script running on the remote host locally, i.e. in the local host's window?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n   Redirects stdin from /dev/null (actually, prevents reading from stdin).
Prefacing your startScript.sh line with 'nohup' may help. Oftentimes, commands you execute remotely will die when your ssh session ends; nohup allows your process to live on after the session has ended. It would be helpful to know whether your process is starting at all, or whether it starts and then dies.
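A sketch of that change applied to the script from the question (the output redirection and trailing & are assumptions, added so the remote process is fully detached from the session):
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} > /dev/null 2>&1 &
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"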
I think cyber-monk is right: you should launch the processes with nohup to create a new, independent process. Also check whether your stop script is killing the right process (the new one included).

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" printed, and five seconds later "bar". In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host..., since that only puts the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine, which will show you that the ssh call is still active, just placed in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
You could use screen to run your process in a screen session, detach from it with Ctrl-A :detach, and exit your current session without problems. Then you can reconnect over SSH and attach to the screen again to continue your task or check whether it has finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove the entry.
You can then safely exit ssh, even shut down your computer while the script is running.
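If you prefer not to edit the crontab interactively, the same entry can be added and removed from the command line (a sketch using the path from the example above):
(crontab -l; echo "4 15 * * * /path/to/your/script.sh") | crontab -
crontab -l | grep -v '/path/to/your/script.sh' | crontab -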
