Shell script for Sauce Connect proxy - Linux

I have created the shell script below and run it via crontab. The script creates a proxy/tunnel in Saucelabs. After adding the script to a cron job it does not run, and the proxy/tunnel is not created in Saucelabs. The script works as expected when executed manually.
#!/bin/bash
cd /SauceLABS/sc-4.8.1-linux/bin/
./sc -u <username> -k <password> --tunnel-pool --region eu-central -T -B all --tunnel-name Saucelabs_Tunnel -s &
sleep 30
echo "Sauce-Connect is up and running"
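A common reason for exactly this symptom is that cron runs jobs with a minimal environment: a bare PATH, no login profile, and a different working directory. A minimal sketch of a crontab entry that calls the script through an absolute path and logs its output for debugging, assuming the script is saved as /SauceLABS/start-sc.sh (a hypothetical path; shown with @reboot, though any schedule works the same way):

# Log stdout/stderr so a silent failure under cron becomes visible
@reboot /bin/bash /SauceLABS/start-sc.sh >> /tmp/sauce-connect.log 2>&1

Inspecting /tmp/sauce-connect.log after the next scheduled run should show whether sc started at all, or why it exited.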

Related

When I try to execute a shell script from within a shell script via an `@reboot` cron job it does not run correctly; it only works when executed manually on the command line

When I try to execute a shell script from within a shell script, it works when executed manually in a terminal. However, when it is executed via an @reboot cron job (added with sudo crontab -e) on Raspberry Pi OS, everything runs apart from the sh /home/pi/script.sh line within the shell script.
My shell script:
#!/bin/sh
clear
sleep 5
python /home/pi/Desktop/Relay-Script-On.py
sleep 3
sh /home/pi/script.sh
sleep 5
python /home/pi/Desktop/Relay-Script-Off.py
sleep 3
I have made the other shell file executable using sudo chmod +x.
Note that I am still new to shell scripting (apologies if there is an obvious error here).
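The usual suspect for this symptom is again cron's minimal environment: @reboot jobs start with a bare PATH, no terminal, and / as the working directory. A sketch of the same wrapper rewritten with absolute paths and logging; the fix is an assumption, and the /usr/bin/python and log locations are hypothetical:

#!/bin/sh
# Cron jobs have no terminal, so log output instead of clearing a screen
exec >> /home/pi/cron-debug.log 2>&1
sleep 5
/usr/bin/python /home/pi/Desktop/Relay-Script-On.py
sleep 3
/bin/sh /home/pi/script.sh
sleep 5
/usr/bin/python /home/pi/Desktop/Relay-Script-Off.py
sleep 3

If script.sh itself calls programs by bare name, the same absolute-path treatment applies inside it as well.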

My Bash script ends after entering chroot environment

My question:
After the following lines in my script, the script ends unexpectedly. I am trying to enter a chroot from inside a bash script. How can I make this work?
I am writing a script that installs Gentoo.
echo " Entering the new environment"
chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) ${PS1}"
The chroot command starts a new child bash process, so the rest of your script will not be executed until you quit that child bash process.
So instead of /bin/bash, just run your script inside the chroot:
chroot /mnt/gentoo myscript.sh
myscript.sh:
#!/bin/bash
echo " Entering the new environment"
source /etc/profile
export PS1="(chroot) ${PS1}"
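Note that myscript.sh has to exist inside /mnt/gentoo (and be executable) for the chroot call above to find it. If you would rather keep everything in one file, a heredoc can feed the same commands to the chrooted shell inline; this is a common alternative sketch, not part of the original answer:

# Run the commands inside the chroot without a separate helper script;
# quoting 'EOF' stops the outer shell from expanding ${PS1} too early
chroot /mnt/gentoo /bin/bash <<'EOF'
echo " Entering the new environment"
source /etc/profile
export PS1="(chroot) ${PS1}"
# ...remaining installation steps run here, inside the chroot
EOF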

How to execute a local bash script on remote server via ssh with nohup

I can run a local script on a remote server using the -s option, like so:
# run a local script without nohup...
ssh $SSH_USER@$SSH_HOST "bash -s" < myLocalScript.sh;
And I can run a remote script using nohup, like so:
# run a script on server with nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash myRemoteScript.sh > results.out 2>&1 &"
But can I run my local script with nohup on the remote server? I expect the script to take many hours to complete, so I need something like nohup. I know I could copy the script to the server and execute it, but then I would have to make sure I delete it once the script completes; I would rather avoid that if possible.
I've tried the following but it's not working:
# run a local script without nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash -s > results.out 2>&1 &" < myLocalScript.sh;
You shouldn't have to do anything special: once you kick off a script on another machine, it should finish running even if you terminate the connection.
For example:
ssh $SSH_USER@$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
# Please wait a few seconds for the connection to be established
kill $! # Optional: Kill the last process
If you want to test it, try a simple script like this:
# myLocalScript.sh
echo 'File created - sleeping'
sleep 30
echo 'Finally done!'
The results.out file should be created immediately on the other machine with "File created - sleeping" in it. You can then kill the local ssh process with kill <your_pid>; the script will keep running on the other machine, and after 30 seconds it will print "Finally done!" into the file and exit.
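To confirm the job really survived the disconnect, a quick check from the local machine (using the same variables as above) might look like:

# Re-connect and inspect the output file the remote job is writing
ssh $SSH_USER@$SSH_HOST "cat results.out"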

Start shell script via plink in a Windows batch job

I've written a small batch job that should kill and restart a process on a Linux host. Killing the process works fine, but executing the shell script to start the job again does not.
plink -v -pw password root@192.168.1.63 "pgrep -f jobname | xargs kill"
plink -v -pw password root@192.168.1.63 "cd /data/server && /bin/bash runsrv.sh"
The second line shows no error, but no job is started either, and I've no idea why.
EDIT 1:
Here is the content of the runsrv.sh file:
JBOSS_CLASSPATH=.
export JBOSS_CLASSPATH
JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=utf8 -Xms3072M -Xmx3072M -XX:MaxPermSize=512m -XX:+AggressiveOpts -XX:+DoEscapeAnalysis"
export JAVA_OPTS
nohup ../../bin/run.sh -c idx -b 192.168.1.63 > log/serverstdout.log 2>&1 &
Thanks for any hint in advance!
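Two things stand out here, as observations rather than a confirmed answer: runsrv.sh reaches run.sh and its log directory through relative paths that only resolve from /data/server, and the nohup'd child can die when plink's non-interactive session closes before it has fully detached. A debugging sketch that detaches stdin and captures everything the script prints, so the actual failure becomes visible (the /tmp/runsrv.out path is hypothetical):

plink -pw password root@192.168.1.63 "cd /data/server && /bin/bash runsrv.sh </dev/null >/tmp/runsrv.out 2>&1"
plink -pw password root@192.168.1.63 "cat /tmp/runsrv.out"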

Cron Job Killing and Restarting Python Script

I set up a cron job on a Linux server to kill and restart a python script (run.py) every other day. I set the job to run as root, but I find that sometimes it doesn't kill the old process properly (and ends up with two instances of the script running at once).
Is there a better way to do this?
My cron job parameters:
0 8 * * 1,4,7 cd /home/myUser && ./start.sh
start.sh:
#!/bin/bash
echo "Running..."
sudo pkill -f run.py
sudo python run.py &
I guess run.py runs as python, not as run.py, so you won't find the process you expect with pkill -f run.py.
You should echo the PID of the process to a file and use that value to kill the previous process if it's still running. Just add echo $! >/path/to/pid.file as the last line in your start.sh script to do so.
Read more:
https://serverfault.com/questions/205498/how-to-get-pid-of-just-started-process
How to read a file into a variable in shell?
http://www.cyberciti.biz/faq/kill-process-in-linux-or-terminate-a-process-in-unix-or-linux-systems/
Example to get you started:
#!/bin/bash
echo "Running..."
# Kill the process whose PID is stored in the pid file from the last run
sudo pkill -F /path/to/pid.pid
sudo python /path/to/run.py &
# Remember the new background process's PID for the next run
echo $! > /path/to/pid.pid
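One caveat with the example above (an observation, not part of the original answer): on the very first run /path/to/pid.pid does not exist yet and pkill -F will complain, so guarding the kill keeps the script quiet:

#!/bin/bash
PIDFILE=/path/to/pid.pid
# Only try to kill if a previous run actually left a PID file behind
if [ -f "$PIDFILE" ]; then
    sudo pkill -F "$PIDFILE"
fi
sudo python /path/to/run.py &
echo $! > "$PIDFILE"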
Another alternative is to run the python script under upstart, if you are on a system that supports it. Then you can just do sudo /sbin/start job_name at the beginning and sudo /sbin/stop job_name instead of the pkill; upstart manages the PIDs for you. A minimal sketch of such a job follows the links below.
Python upstart script
Upstart python script
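For reference, a minimal sketch of such an upstart job, assuming a hypothetical job name run-py and the script path from the question:

# /etc/init/run-py.conf  (hypothetical job name)
description "run.py worker"
start on runlevel [2345]
stop on runlevel [016]
# Restart the script automatically if it ever dies
respawn
exec /usr/bin/python /home/myUser/run.py

sudo /sbin/start run-py and sudo /sbin/stop run-py would then replace the pkill logic in start.sh entirely.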
