I am using rclone to mount a folder from my cloud storage on my local computers. However, on one machine I only connect via terminal, and I want to mount the volume on startup.
So I set up a small shell script with the following contents:
rclone mount remoterep:/examplefolder ~/Documents/examplefolder
and I call it from my .bashrc with exec ~/mount_examplefolder
When I ssh into said computer, the mount itself works and I do not get any errors, but the shell refuses to take any further commands while the mount command is running.
If I open a second ssh session, I get an error prompt, because it can't mount over the folder that is already mounted by the other session.
So how do I fix this so that rclone is executed in the background and I get my shell back?
Or am I restricted to mounting it manually and then using another ssh session to perform the desired actions?
There are a couple of things here that are causing problems.
First, when you use exec to spawn a process in the shell, you're asking to replace the existing shell process with the program you've named. When you do that in an SSH session, you replace the shell process that the SSH daemon started for your login. SSH then waits for that process to exit (which it won't until the volume is unmounted), which is why you see the hang. You'll want to drop the exec from your shell configuration, which will spawn the process without replacing your shell.
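As a rough sketch of the difference (using the script name from the question), the ~/.bashrc line would change from the first form to the second:
exec ~/mount_examplefolder    # replaces the login shell; nothing else can run until rclone exits
~/mount_examplefolder         # runs the script as a child of the shell instead (backgrounding is covered below)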
Second, the reason you see the error is that the mount process is designed to be run once, as you've noticed. If you want to skip mounting the folder if it's already mounted, you can use something like the following as your shell script:
#!/bin/sh
if ! grep " $HOME/Documents/examplefolder " /proc/mounts
then
    rclone mount remoterep:/examplefolder ~/Documents/examplefolder
fi
Note the spaces inside the quotes; they make sure you don't match some other path by accident. This check keeps your script from trying to mount the folder more than once.
Third, you'll probably want to run this command in the background and detached from the shell so that the shell exiting doesn't cause it to receive SIGHUP and exit (or restart, depending on how it's configured). You can do this by writing the invocation in your shell configuration as nohup ~/mount_examplefolder >/dev/null 2>&1 &. nohup prevents the program from receiving SIGHUP, and redirecting the output prevents it from printing messages or creating nohup.out files all over the place.
Finally, you may (or may not) want to configure this to run only when you're using an interactive shell; that is, when you're logging in to start a shell for interactive use rather than for scripting use. If so, you can make the invocation of nohup conditional on PS1 being set, like so:
if [ -n "$PS1" ]
then
nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi
Related
I am building a REST API with Java. When the endpoint gets called, I spawn a shell with Runtime.getRuntime().exec("My_script.sh").
My script then proceeds to ssh into a server, create some files, and, most importantly, it needs to run a script that sits on that server to process the files.
I run the command with
nohup myscript.sh > /dev/null 2>&1 &
I also added sleep 5 because I read that nohup may take a second to get things started.
The script runs until the shell disconnects and dies.
So as of now:
the Java application launches a shell
the bash script initiates an ssh connection and creates a subshell that runs a secondary script
in that subshell, the second script creates some files and then calls nohup to start the final script
the subshell exits and terminates
the main shell exits and terminates
the script we called with nohup disconnects and never finishes running
Some things I have tried
I have tried calling the script from the ssh connection,
pseudo example:
ssh user@host "nohup script.sh > /dev/null 2>&1 &"
I also tried with different quote placement:
ssh user@host "nohup script.sh > /dev/null" 2>&1 &
Another thing I can potentially do is keep the initial shell open, so that the script won't die when it disconnects... because it never will.
I have looked into other options like screen, but I don't think it will be useful if the machine that initiated the script gets completely terminated after it starts.
Things to note
I don't have control over the Linux box I ssh into, so I cannot install packages on that machine. I can, however, install packages on my own machine.
I have a command that will SSH and run a script after SSH'ing. The script runs a binary file.
Once the script is done, I can type any key and my local terminal goes back to its normal state. However, since the process is still running on the machine I SSH'ed into, any time it logs to stdout I see it in my local terminal.
How can I ignore this output without monkey-patching it on my local machine by piping it to /dev/null? I want to keep the output on the machine I am SSH'ing into and just leave the SSH session altogether after the process starts. I can redirect it to /dev/null on that machine, however.
This is an example of what I'm running:
cat ./sh/script.sh | ssh -i ~/.aws/example.pem ec2-user@11.111.11.111
The contents of script.sh looks something like this:
# Some stuff...
# Run binary file
./bin/binary &
Solved it with ./bin/binary &>/dev/null &
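With that change, the tail of script.sh would look roughly like this:
# Some stuff...
# Run binary file, discarding its output on the remote side so nothing
# is streamed back over the SSH session
./bin/binary &>/dev/null &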
Copy the script to the remote machine and then run it remotely. The following commands are executed on your local machine.
$ scp -i /path/to/sshkey /some/script.sh user@remote_machine:/path/to/some/script.sh
# Run the script in the background on the remote machine and pipe the output to a logfile. This will also exit from the SSH session right away.
$ ssh -i /path/to/sshkey \
user@remote_machine "/path/to/some/script.sh &> /path/to/some/logfile &"
Note, logfile will be created on the remote machine.
# View the log file while the process is executing
$ ssh -i /path/to/sshkey user@remote_machine "tail -f /path/to/some/logfile"
BACKGROUND
I would like to explain the scenario properly here.
I am running Jenkins 2.73.3 on my cloud server with Ubuntu 16.04.
Currently, there are 3 users in the server:
root
develop-user (which I created for many purposes such as testing, deploying, etc.)
jenkins (which was created by Jenkins, of course; I also added this jenkins user to the sudoers group)
PROBLEM
I have a bash script that I am calling from a build step in Jenkins. Within this bash script, there is a nohup command for calling a separate deployScript.sh in the background, such as:
#!/bin/bash
nohup deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
After the build step is completed, I see that an id has been generated inside save_pid.txt, but app.log is surprisingly empty. I can't kill any process with this generated pid, which means no process was created in the first place. Also, deployScript.sh does not seem to have any effect at all. It's just not working. This happens every time I run the build in Jenkins. I can assure you that there is nothing wrong with deployScript.sh.
I have tried running this bash script manually as the develop-user, without Jenkins, and it works perfectly. Contents are written to the log file, and I can also use the generated pid to kill the process. I have also tested this in my local environment and it works.
QUESTION
I have been looking at this for days. What might be the root cause here? Where can I look for some logs or other info? How can the pid be generated while the log file stays empty? Is it a permission issue with the jenkins user? Please help.
You can use the line below inside the Execute shell build step in Jenkins to run the command in the background without the process being killed.
BUILD_ID=dontKillMe <command> &
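Applied to the nohup call from the question, the Execute shell step might look something like this (the relative path to deployScript.sh is an assumption; adjust it to wherever the script lives in the workspace):
#!/bin/bash
# BUILD_ID=dontKillMe tells the Jenkins ProcessTreeKiller to leave this process
# alone after the build step finishes
BUILD_ID=dontKillMe nohup ./deployScript.sh > "$WORKSPACE/app.log" 2>&1 &
echo $! > save_pid.txt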
So, it turned out to be a permission issue, and the script also wasn't executable, as pointed out in the comments above.
So now the bash script looks like this:
#!/bin/bash
sudo chmod a+x deployScript.sh
sudo nohup ./deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
This works.
I want to run a shell script in the background on a server machine, starting it from an ssh connection. Even though I run the background script with nohup, it fails with a "directory unreachable" error as soon as I close my ssh connection (and no sooner).
runInBackground.sh:
#!/bin/bash
...
nohup ./run.sh > /dev/null 2> local/errorLog.txt < /dev/null &
run.sh:
#!/bin/bash
...
while true ; do
    ...
    cd optaplanner-examples
    mvn exec:exec   # calls the java process
    cd ..
done
So when I run runInBackground.sh, everything works fine for hours, until I disconnect my ssh connection.
As soon as I log out, errorLog.txt fills up with:
java.io.FileNotFoundException: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/output/
./run.sh: line 64: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/processed/failed_machineReassignmentBenchmarkConfig.xml: No such file or directory
fatal: Could not change back to '(unreachable)/server/optaplanner-simple-benchmark-daemon/local/optaplannerGitClone/optaplanner': No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
... // 1000+ more of that ls error
Well, it's not necessarily an encrypted home directory; more likely it's an auto-mounted home directory (e.g. over NFS or similar). It's mounted upon session startup and unmounted upon exit. An encrypted home dir is only one of the possible reasons to use such a technique.
The main question is what rule determines whether a user needs the home dir or not. I would expect that it could be an allocated pty. You could test whether that's actually the case by starting a non-interactive SSH session without a pseudo-terminal: ssh -T user@host ls /home/myUser/server. I would expect that in this case you won't get a proper directory listing.
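A quick way to compare the two cases might be the following (the second run forces a pseudo-terminal, so the home directory should appear there if the pty theory is right):
ssh -T user@host ls /home/myUser/server    # no pty: listing expected to fail
ssh -t user@host ls /home/myUser/server    # pty forced: listing expected to work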
Then I would use a program like screen to prolong the interactive session's lifetime beyond the SSH session limits.
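For example (a sketch only, using the script from the question), the benchmark could be started inside a detached screen session instead of with nohup:
screen -dmS benchmark ./run.sh    # -d -m: start detached; -S: arbitrary session name
screen -r benchmark               # reattach later to check on it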
The server might use some other mechanism to provide the home directory for interactive SSH sessions, e.g. monitoring the interactive sessions listed in utmp. In this case you will need a program that keeps that record alive for as long as your service needs it. Perhaps you could use an automatically re-established SSH session. For example, I use the following systemd unit to automatically keep an ssh tunnel from one of my workstations in a different private network:
[Unit]
Description=A tunnel to SOME_HOST
PartOf=sshd.service
Requires=network.service
[Service]
ExecStart=/usr/bin/ssh -N -q -R 2222:localhost:22 SOME_HOST
Restart=on-failure
RestartSec=5
User=tunnel
Group=tunnel
[Install]
WantedBy=sshd.service
WantedBy=network.service
When a failure occurs, systemd automatically restarts the unit and the SSH session is re-established.
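Assuming the unit file is saved as something like /etc/systemd/system/ssh-tunnel.service (the name is arbitrary), it would be activated in the usual way:
systemctl daemon-reload                      # pick up the new unit file
systemctl enable --now ssh-tunnel.service    # start it now and on every boot
systemctl status ssh-tunnel.service          # verify the tunnel is up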
I always use the screen utility to run my scripts instead of nohup.
With screen, your process will keep running even if your current ssh session times out or gets disconnected.
Use it as follows -
apt-get install screen (On Debian based Systems)
OR
yum install screen (On RedHat based Systems)
To run your application and watch its output live (provided your script file does not start a background process and it writes to stdout and/or stderr):
cd your_app_directory_path
screen ./your_script.sh
Once you are done and want to leave (without stopping the process), press Ctrl+A followed by D to detach from the screen session.
To list the sessions that are running under the screen utility -
screen -ls
To reattach to a running session -
screen -r <screen id or name>
Hope this was useful.
One workaround is to use screen to keep the ssh session open. You can use screen -r to reconnect to the session if you get disconnected.
I am trying to execute two scripts which are available as .sh files on a remote host and have 755 permissions.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
The above lines are in a script on the local host.
While running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly. It kills the running process it was intended to kill without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it was intended to start does not start on the remote host.
Can anyone please let me know:
Is this the correct way of executing scripts on a remote host?
Should I see the output of the script running on the remote host locally as well, i.e. in the window of the local host?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin).
Prefacing your startScript.sh line with nohup may help. Often, if you execute commands remotely, they will die when your ssh session ends; nohup allows your process to live after the session has ended. It would be helpful to know whether your process is starting at all, or whether it starts and then dies.
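Applied to your snippet, that might look roughly like this (only startScript.sh gains the nohup prefix and an output redirect; everything else is unchanged):
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} > /dev/null 2>&1 &
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"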
I think cyber-monk is right: you should launch the processes with nohup to create a new independent process. Check whether your stop script is killing the right process (the new one included).