Sh script runs only the first two commands - Linux

I'm trying to write a simple script that loops every 2 minutes.
This is what I've got:
#!/bin/sh
while true
do
    echo Start
    sleep 5
    sudo wget 'downloadlink' -O /var/lib/squidguard/db/'blocklist'
    sleep 20
    sudo squidGuard -C all
    sleep 25
    sudo chown -R proxy:proxy /var/lib/squidguard/db
    sleep 10
    sudo systemctl reload squid
    echo End
    sleep 120
done
I run the script by hand in a terminal: the first echo pops up, then the download starts. After the file is downloaded, the script sleeps, echoes End, and proceeds to loop, without the other commands like sudo squidGuard or sudo chown ever running.
echo -> download -> waits -> echo LOOP echo -> download -> waits -> echo
I tried using & at the end of each line, but that didn't help either. I know I'm missing something; I just don't know what.
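One quick way to narrow this down is shell tracing, which prints every command before it runs; the last command printed before the loop skips ahead is the one to look at. A minimal sketch (myscript.sh is a placeholder for whatever the file is actually called):
sh -x ./myscript.sh
Alternatively, add set -x near the top of the script itself; both forms are plain sh.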

Related

How to write a script that restarts a job when it exceeds a time limit in Linux?

I'm writing a script that does the same job 100 times, each in a different directory (named run1 to run100). However, the job sometimes gets stuck for a long time, and I have to delete the directory containing that run and restart it.
Is it possible to write a script that could 1. stop a run and delete its directory (e.g., run13) if that run exceeds 6 hours, and 2. restart that run again?
Here is my original shell script:
PREFIX=earlymigration
for i in {1..100}
do
    mkdir run$i
    cp ${PREFIX}.tpl ${PREFIX}.est ${PREFIX}_jointMAFpop1_0.obs run$i/
    cd run$i
    fsc26 -t ${PREFIX}.tpl -e ${PREFIX}.est -m -0 -C 10 -n 200000 -L 40 -s0 -M -c 10
    cd ..
done
So do exactly that. Timeout the command, and if it times out, restart it.
while true; do
    timeout $((6 * 60 * 60)) fsc26 ....
    ret=$?
    # timeout(1) exits with status 124 when the time limit was hit
    if ((ret == 124)); then
        rm -r the_directory_containing_that_run
        continue
    fi
    break
done
See man timeout.
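Folded into the original loop, run by run, it might look something like this (an untested sketch; the fsc26 flags are copied verbatim from the question):
PREFIX=earlymigration
for i in {1..100}
do
    while true; do
        mkdir run$i
        cp ${PREFIX}.tpl ${PREFIX}.est ${PREFIX}_jointMAFpop1_0.obs run$i/
        cd run$i
        # kill the run if it is still going after 6 hours
        timeout $((6 * 60 * 60)) fsc26 -t ${PREFIX}.tpl -e ${PREFIX}.est -m -0 -C 10 -n 200000 -L 40 -s0 -M -c 10
        ret=$?
        cd ..
        if ((ret == 124)); then
            # timed out: delete the directory for this run and start it over
            rm -r run$i
            continue
        fi
        break
    done
done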

How do I set up two curl commands to execute at different times forever?

For example, I want to run one command every 10 seconds and the other command every 5 minutes. I can only get the first one to log properly to a text file. Below is the shell script I am working on:
echo "script Running. Press CTRL-C to stop the process..."
while sleep 10;
do
curl -s -I --http2 https://www.ubuntu.com/ >> new.txt
echo "------------1st command--------------------" >> logs.txt;
done
||
while sleep 300;
do
curl -s -I --http2 https://www.google.com/
echo "-----------------------2nd command---------------------------" >> logs.txt;
done
I would advise you to go with @Marvin Crone's answer, but researching cron jobs and background processes doesn't seem like the kind of hassle I would go through for this little script. Instead, try putting both loops into separate scripts, like so:
script1.sh
echo "job 1 Running. Type fg 1 and press CTRL-C to stop the process..."
while sleep 10;
do
echo $(curl -s -I --http2 https://www.ubuntu.com/) >> logs.txt;
done
script2.sh
echo "job 2 Running. Type fg 2 and press CTRL-C to stop the process..."
while sleep 300;
do
echo $(curl -s -I --http2 https://www.google.com/) >> logs.txt;
done
adding executable permissions:
chmod +x script1.sh
chmod +x script2.sh
and last but not least running them:
./script1.sh & ./script2.sh &
this creates two separate jobs in the background that you can call by typing:
fg (1 or 2)
and stop them with CTRL-C, or send them to the background again by typing CTRL-Z.
I think what is happening is that you start the first loop, and the first loop must complete before the second loop starts. But the first loop is designed to be infinite.
I suggest you put each curl loop in a separate script file.
Then, you can run each script separately, in the background.
I offer two suggestions for you to investigate for your solution.
One, research the use of crontab and set up a cron job to run the scripts.
Two, research the use of nohup as a means of running the scripts.
I strongly suggest you also research how to monitor the jobs and how to terminate them if anything goes wrong. You are setting up infinite loops; a simple Ctrl-C will not terminate jobs running in the background. You are treading in areas that can get out of control, so you need to know what you are doing.
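For the nohup route, a minimal sketch (output is redirected so that nohup does not litter the directory with nohup.out files):
nohup ./script1.sh > /dev/null 2>&1 &
nohup ./script2.sh > /dev/null 2>&1 &
For the cron route, keep in mind that cron's finest granularity is one minute, so it fits the 5-minute command but not the 10-second one. An entry for the former might look like:
*/5 * * * * curl -s -I --http2 https://www.google.com/ >> "$HOME/logs.txt" 2>&1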

How to automatically terminate ssh connection after starting a script in tmux window?

I'm trying to run a script in a tmux environment on another computer using ssh, but the ssh connection won't terminate until the script has finished. Let me explain this in detail:
This is test_ssh.sh:
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
cd /scratch
mkdir test
cd test
cp /home/user/test_tmux3.sh .
tmux -c ./test_tmux3.sh &
echo 1 # at this point it waits until test_tmux3.sh is finished, instead of exiting :(
EOF
This is test_tmux3.sh (as a test to see if anything happens):
#!/bin/bash
mkdir 0min
sleep 60
mkdir 1min
sleep 60
mkdir 2min
In the end I would like to loop over multiple computers ($name) to start a script on each of them. The problem I am having right now is that test_ssh.sh waits after the echo 1 and only exits after tmux -c test_tmux3.sh & is finished (after 2 minutes). If I manually press Ctrl-C, test_ssh.sh stops and tmux -c test_tmux3.sh & continues running on the computer $name (which is what I want). How can I automate that last step and get ssh to exit on its own?
Start the command in a detached tmux session.
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
mkdir /scratch/test
cd /scratch/test
cp /home/user/test_tmux3.sh .
tmux new-session -d ./test_tmux3.sh
echo 1
EOF
Now, the tmux command will exit as soon as the new session is created and the script is started in that session.
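Since the stated goal is to start the script on several computers, the same pattern extends to a loop; a sketch, where computer1 to computer3 are placeholder host names:
#!/bin/bash
for name in computer1 computer2 computer3
do
    ssh "$name" /bin/bash <<\EOF
mkdir /scratch/test
cd /scratch/test
cp /home/user/test_tmux3.sh .
tmux new-session -d ./test_tmux3.sh
EOF
done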
Have you tried using the nohup command to tell the process to keep running after exit?
#!/bin/bash
name="computername"
ssh $name /bin/bash <<\EOF
cd /scratch
mkdir test
cd test
cp /home/user/test_tmux3.sh .
nohup tmux -c ./test_tmux3.sh &
echo 1
EOF

Why won't this Debian Linux autostart netcat script autostart?

I placed a link to my script in rc.local to autostart it on Debian boot. It starts and then stops at the while loop. It's a netcat script that listens permanently on port 4001.
echo "Start"
while read -r line
do
#some stuff to do
done < <(nc -l -p 4001)
When I start this script as root with ./myscript, it works 100% correctly. Does nc (netcat) need root-level access, or is it something else?
EDIT:
rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/etc/samba/SQLScripts
exit 0
rc.local starts my script "SQLScripts"
SQLScripts
#! /bin/sh
# The following part always gets executed.
echo "Starting SQL Scripts" >> /var/log/SQLScriptsStart
/etc/samba/PLCCheck >> /var/log/PLCCheck &
"SQLScripts" starts "PLCCheck" (for example only one)
PLCCheck
#!/bin/bash
echo "before SLEEP" >> /var/log/PLCCheck
sleep 5
echo "after SLEEP" >> /var/log/PLCCheck
echo "before While" >> /var/log/PLCCheck
while read -r line
do
    echo "in While" >> /var/log/PLCCheck
done < <(netcat -u -l -p 6001)
In an rc script you have root-level access by default. What does "it stops at the while loop" mean? Does it quit after a while? I guess you need to run your loop in the background in order to achieve the behavior usual in autostart scripts:
echo "Starting"
( while read -r line
do
#some stuff to do
done << (nc -l -p 4001) ) &
echo "Started with pid $( jobs -p )"
I tested approximately the same thing yesterday, and I discovered that you can bypass the system and execute your netcat script with the following cron task (every minute, but you can adjust that as you want):
* * * * * /home/kali/script-netcat.sh    # working for me
#@reboot /home/kali/script-netcat.sh    # this is blocked by the system
As far as I can tell, by default Debian (and maybe other Linux distributions) blocks every script that tries to execute a netcat command at boot.
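One caveat with the every-minute line: if a previous run is still listening on the port, cron will start another copy on top of it each minute. A possible guard is flock(1), something like:
* * * * * flock -n /tmp/script-netcat.lock /home/kali/script-netcat.sh
With -n, each new run exits immediately if the lock is still held, so at most one instance of the script is alive at a time.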

Server Startup Script Problems

I am trying to set up a Minecraft server. However, the basic startup scripts provided do not fit my needs. I want a script that will:
Start a new screen running the jarfile and (pretty much) only the jarfile (so I can ^C it if needed without killing other things like screen or my gzip commands)
Gzip any logs that weren't gzipped automatically by the jarfile (for if/when I ^C the server, or if it crashed)
Run a command with sudo to set the process in the first argument to a high priority (/usr/bin/oom-priority)
Run an http-server on the resource-pack directory in a different screen and send ^C to it when the server closes
I have these three commands. I run startserver to start the server.
startserver:
#!/bin/bash
set -m
cd /home/minecraftuser/server/
echo
screen -dm -S http-server http-server ./resource-pack
screen -dm -S my-mc-server startserver_command
(sleep 1; startserver_after) &
screen -S my-mc-server
startserver_command:
#!/bin/bash
set -m
cd /home/minecraftuser/server/
echo
java -Xmx768M -Xms768M -jar ./craftbukkit.jar "$@" &
env MC_PID=$! > /dev/null
(sleep 0.5; sudo /usr/bin/oom-priority $MC_PID) &
fg 1
echo
read -n 1 -p 'Press any key to continue...'
and startserver_after:
#!/bin/bash
cd /home/minecraftuser/server/
wait $MC_PID
find /home/minecraftuser/server/logs -type f -name "*.log" -print | while read file; do
    gzip $file &
done
screen -S http-server -p 0 -X stuff \^c\\r
Edit: When I run startserver, I get a command prompt and then a bunch of gzip errors about files that already exist (I am expecting these errors, but when I run startserver I'm supposed to get the java program). Somehow I am inside a screen, because when I press ^A d, I am brought to a new prompt.
Once I am out of the screen, screen -ls returns two instances of my-mc-server. One is a blank command prompt; the other is the server running successfully.
Edit 2: I changed startserver_command to remove the asterisk from env MC_PID=$! & (not needed there) and added it to (sleep 1; startserver_after) (makes it faster), and redirected the env line to /dev/null (this removes the entire environment listing at the beginning of the output). That still didn't fix the entire problem.
Instead of starting each screen session from the scripts, you can just use a custom .screenrc to specify some startup windows (and to run commands/scripts):
#$HOME/mc-server.screenrc
screen -t http-server 0 'startserver'
screen -t my-mc-server 1 'startserver_command'
screen -t gzip-logs 2 'startserver_after'
Then simply start screen (specifying the config file to use, if it's not the default ~/.screenrc)
screen -dm -c mc-server.screenrc
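For completeness, a usage sketch (assuming the three scripts are executable and on your PATH):
screen -dm -c "$HOME/mc-server.screenrc"    # start all three windows, detached
screen -ls                                  # verify the session exists
screen -r                                   # reattach; Ctrl-A n / Ctrl-A p cycle through the windows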
