Why is this file not deleted? - linux

I'm having a hard time understanding why the following case will leave the file "lock" present on the file system.
file script:
#!/bin/sh
set -e
run-parts sub &> /dev/null
file sub/subscript:
#!/bin/sh
set -e
# Exit if another job is running.
[ -e ./lock ] && exit 0
trap "rm -f ./lock; exit" 0 1 2 5 15
touch ./lock
echo "test"
When I now run run-parts --report . in the folder where script is, why does it exit with the lock file still present?
What I think it should do:
1) look for runnable scripts in the current folder (finds script)
2) it will run script and inside it will run another run-parts instance which finds the script sub/subscript and run this one
3) sub/subscript will create the lock file and should remove it again on exit (trap on 0) or when signal 1, 2, 5 or 15 occurs.
What I found out so far:
1) Removing &> /dev/null results in the lock file being cleaned up
2) Removing --report also results in the lock file being cleaned up
However, in my case neither of these changes is an option, as I am not the maintainer of the code.
From what I understand, neither of the mentioned signals will be triggered. Why is this?
OS: Debian Jessie

So...
#!/bin/sh
set -e
# Exit if another job is running.
[ -e ./lock ] && exit 0
First, change #!/bin/sh to #!/bin/bash, since this is a bash question; on Debian, /bin/sh is typically dash rather than bash, so bash-specific syntax such as &> is not guaranteed to behave the way you expect. Next, if you run this and the lock file exists, it will exit and leave the file.
trap "rm -f ./lock; exit" 0 1 2 5 15
touch ./lock
echo "test"
Then the above runs if lock wasn't there: you touch the file, echo "test", and the trap removes the file on exit. You haven't mentioned whether you see "test" echoed back yet. It seems to be working as intended.
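A quick way to check (my sketch, assuming subscript is executable and you start in the folder containing script): run the subscript directly and confirm the trap cleans up.
cd sub
./subscript    # should print "test"
ls lock        # should report "No such file or directory" if the trap ran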

Related

Is it possible to auto reboot for 5 loops through mint?

I am currently using the following command to run reboot
sudo shutdown -r now
however, I would need to run it for 5 loops, before and after executing some other programs. I was wondering if it is possible to do this in a Linux Mint environment?
First a disclaimer: I haven't tried this because I don't want to reboot my machine right now...
Anyway, the idea is to make a script that can track its iteration progress in a file, as #david-c-rankin suggested. The bash script could look like this (I did test this):
#!/bin/sh
ITERATIONS="5"
TRACKING_FILE="/path/to/bootloop.txt"
touch "$TRACKING_FILE"
N=$(wc -c < "$TRACKING_FILE")
if [ "$N" -lt "$ITERATIONS" ]; then
    printf "." >> "$TRACKING_FILE"
    echo "rebooting (iteration $N)"
    # TODO: this is where you put the reboot command
    # and anything you want to run before rebooting each time
else
    rm "$TRACKING_FILE"
    # TODO: other commands to resume anything required
fi
Then add a call to this script somewhere where it will be run on boot, e.g. cron (@reboot) or systemd. Don't forget to remove it from the startup/boot command when you're finished, or next time you reboot, it will reboot N times.
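For example, the cron route is a single @reboot line in your crontab (a sketch, assuming the script is saved at /path/to/reboot_five_times.sh and made executable with chmod +x):
# crontab -e
@reboot /path/to/reboot_five_times.sh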
Not sure exactly how you are planning on using it, but the general workflow would look like:
save script to /path/to/reboot_five_times.sh
add script to run on boot (cron, etc.)
do stuff (manually or in a script)
call the script
computer reboots 5 times
anything in the second TODO section of the script is then run
go back to step 3, or if finished remove from cron/systemd so it won't reboot when you don't want it to.
First create a text document wherever you want (I created one on the Desktop),
then use this file as a physical counter and write a daemon file to run things at startup.
For example:
#!/bin/sh
var=$(cat a.txt)
echo "$var"
if [ "$var" != 5 ]
then
    var=$((var+1))
    echo "$var" > a.txt
    echo "restart here"
    sudo shutdown -r now
else
    echo "stop restart"
    echo 0 > a.txt
fi
Hope this helps
I found a way to create a file at startup for my reboot script. I incorporated it with the answers provided by swalladge and also shubh. Thank you so much!
#!/bin/bash
#testing making a startup application
echo "
[Desktop Entry]
Type=Application
Exec=notify-send success
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_CA]=This is a Test
Name=This is a Test
Comment[en_CA]=
Comment=" > ~/.config/autostart/test.desktop
I created an /etc/rc.local file to execute user3089519's script, and this works for my case. (And for bootloop.txt, I put it here: /usr/local/share/bootloop.txt )
First: sudo nano /etc/rc.local
Then edit this:
#!/bin/bash
#TODO: things you want to execute when the system boots.
/path/to/reboot_five_times.sh
exit 0
Then it works.
Don't forget to edit /etc/rc.local and remove /path/to/reboot_five_times.sh when you're done reboot cycling.

bash script flock() locking and starting service

I want to use flock to make sure only one instance of the script is running at any given time.
The script skeleton looks like this:
ME=$(basename "$0")
LOCK="/tmp/${ME}.LCK"
exec 8>"$LOCK"
if flock -n -x 8; then
    # do things
    if [ condition ]; then
        /path/asterisk_restart.sh
    fi
else
    echo "$(date) script already running" >> "$log_file"
fi
Now, the script /path/asterisk_restart.sh does many things, but in the end asterisk is stopped and the last command is service asterisk start.
The problem is this: since file handles and locks are inherited across fork()/exec(), file descriptor 8 remains locked in the asterisk process, so the script will not run again once /path/asterisk_restart.sh has been executed (and asterisk is not stopped/restarted by other means outside this script).
So my approach is to start a subshell and close file descriptor 8 just before executing /path/asterisk_restart.sh.
It looks like this:
ME=$(basename "$0")
LOCK="/tmp/${ME}.LCK"
exec 8>"$LOCK"
if flock -n -x 8; then
    # do things
    if [ condition ]; then
        (
            exec 8>&-
            /path/asterisk_restart.sh
        )
    fi
else
    echo "$(date) script already running" >> "$log_file"
fi
Is this a sound approach?
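An alternative I considered (a sketch, assuming a util-linux flock new enough to have -o/--close) is to let flock itself close the lock descriptor before running the command, so children never inherit it:
#!/bin/bash
ME=$(basename "$0")
LOCK="/tmp/${ME}.LCK"
# -o/--close drops the lock fd before running the command, so a
# long-lived child (like the restarted asterisk) does not inherit it.
flock -n -x -o "$LOCK" -c '/path/asterisk_restart.sh' \
    || echo "$(date) script already running" >> "$log_file"
But with -c everything has to run in a child shell, and the || branch also fires if the command itself fails, so the explicit fd-closing subshell seems more flexible when "do things" must happen under the lock first.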
To protect scripts against parallel runs, I would suggest something like this.
if mkdir "$LockDir"; then
    echo "Locking succeeded" >&2
    # Your script here.
    rmdir "$LockDir"
else
    echo "Lock failed - exit" >&2
    exit 1
fi
Using a directory instead of a file is better because mkdir is an atomic operation and hence would eliminate the race condition.
Also don't put your LockDir inside /tmp. If it gets removed, the lock is gone.
The only problem with the above implementation is that it does not work if LockDir is removed by some other script.
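A related failure mode is the script dying mid-run and leaving LockDir behind; a trap mitigates that. A minimal sketch (LockDir is a placeholder path):
#!/bin/sh
LockDir="/var/lock/myscript.lock"   # placeholder; kept out of /tmp as advised above
if mkdir "$LockDir" 2>/dev/null; then
    trap 'rmdir "$LockDir"' EXIT    # cleanup on normal exit
    trap 'exit 1' INT TERM          # convert signals into an exit so the trap runs
    # Your script here.
else
    echo "Lock failed - exit" >&2
    exit 1
fi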

Why this Debian-Linux autostart netcat script won't autostart?

I placed a link to my script in rc.local to autostart it on Debian Linux boot. It starts and then stops at the while loop. It's a netcat script that listens permanently on port 4001.
echo "Start"
while read -r line
do
    #some stuff to do
done < <(nc -l -p 4001)
When I start this script as root with the command ./myscript it works 100% correctly. Does nc (netcat) need root-level access, or is it something else?
EDIT:
rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/etc/samba/SQLScripts
exit 0
rc.local starts my script "SQLScripts"
SQLScripts
#! /bin/sh
# The following part always gets executed.
echo "Starting SQL Scripts" >> /var/log/SQLScriptsStart
/etc/samba/PLCCheck >> /var/log/PLCCheck &
"SQLScripts" starts "PLCCheck" (for example only one)
PLCCheck
#!/bin/bash
echo "before SLEEP" >> /var/log/PLCCheck
sleep 5
echo "after SLEEP" >> /var/log/PLCCheck
echo "before While" >> /var/log/PLCCheck
while read -r line
do
    echo "in While" >> /var/log/PLCCheck
done < <(netcat -u -l -p 6001)
In an rc script you have root-level access by default. What does "it stops at the while loop" mean? Does it quit after a while? I guess you need to run your loop in the background in order to achieve the behaviour usual in autostart scripts:
echo "Starting"
( while read -r line
  do
      #some stuff to do
  done < <(nc -l -p 4001) ) &
echo "Started with pid $( jobs -p )"
I tested approximately the same thing yesterday, and I discovered that you can bypass the system and execute your netcat script with the following cron task
(every minute, but you can adjust that as you want):
* * * * * /home/kali/script-netcat.sh // working for me
#reboot /home/kali/script-netcat.sh // this is blocked by the system.
It seems to me that by default Debian (and maybe other Linux distributions) blocks every script that tries to execute a netcat command at boot.
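If you do run it from cron every minute, note that each run would start another listener on the same port; a guard like this (my sketch, borrowing the flock pattern from the question above) keeps it to a single instance:
#!/bin/bash
# script-netcat.sh (sketch): exit quietly if another instance holds the lock.
exec 9>/tmp/script-netcat.lock
flock -n 9 || exit 0
while read -r line
do
    : # some stuff to do
done < <(nc -l -p 4001)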

CRON JOB running during copying the files

I have a script that runs every 2 minutes, looking into a folder and checking if new files have been delivered.
The problem is that sometimes the script starts while the files are still being copied (the files being quite big), so an email is sent to the customer saying "file x is empty", although the file is actually good; the correct email is received afterwards.
To avoid overlap between different processing runs I have set up a CRON_RUN file like this: if files are currently being processed, my script exits; when the program finishes, the CRON_RUN file is removed, allowing the next process to start (if one exists, of course).
Now the processing no longer overlaps, but I have observed that if I deliver a file to the input folder and the processing starts during the copy, the program still identifies an EMPTY file, so it is not yet working as expected.
Is there a command like this?
if copy process is in progress or checksum file is not complete then
    exit
else
    do my program
fi
my current implementation so far is:
#!/bin/bash
MYDIR=$(dirname "$0")
ext_dir="/server/oracle/apps/delivery"
if test -f $ext_dir/CRON_RUN
then
    echo "$0 CRON RUN" >&2
    exit 1
fi
touch $ext_dir/CRON_RUN
export CRON_RUN=CRON_RUN
for......
my program....
export CRON_RUN=""
rm -f $ext_dir/CRON_RUN
Could you please tell me if I understood correctly how to use the flock command?
#!/bin/bash
MYDIR=$(dirname "$0")
ext_dir="/server/oracle/apps/delivery"
if test -f $ext_dir/CRON_RUN
then
    echo "$0 CRON RUN" >&2
    exit 1
fi
touch $ext_dir/CRON_RUN
export CRON_RUN=CRON_RUN
# Run the program for each input folder.
# Inside the input, some files might still be copying;
# if a copy is in progress the program should exit,
# otherwise the load-input script should start.
for input_folder in electro food comp rof
do
    (
        # Wait up to 10 seconds for a lock on
        # /server/oracle/apps/delivery/$input_folder/.load-input.exclusivelock (fd 200)
        flock -x -w 10 200 || exit 1
        # Do stuff
        $MYDIR/load-input $input_folder $mylog > $mylog 2>&1
    ) 200>/server/oracle/apps/delivery/$input_folder/.load-input.exclusivelock
done
export CRON_RUN=""
rm -f $ext_dir/CRON_RUN
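For the empty-file problem itself, one heuristic I could combine with this (a sketch with placeholder paths, not part of my current script) is to treat a file as ready only when its size has stopped changing:
#!/bin/bash
# Sketch: skip files whose size is still changing (likely still being copied).
is_stable() {
    local f=$1 s1 s2
    s1=$(stat -c %s "$f")
    sleep 5
    s2=$(stat -c %s "$f")
    [ "$s1" -eq "$s2" ]
}
for f in /server/oracle/apps/delivery/electro/*
do
    is_stable "$f" || continue   # still in flight; pick it up on the next run
    # process "$f" here
done
A more robust fix, if the delivery side can be changed, is to copy to a temporary name and mv the file into place once complete, since a rename within the same filesystem is atomic.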

Test a weekly cron job [closed]

I have a #!/bin/bash file in the cron.weekly directory.
Is there a way to test that it works? I can't wait 1 week.
I am on Debian 6 with root.
Just do what cron does, run the following as root:
run-parts -v /etc/cron.weekly
... or the next one if you receive the "Not a directory: -v" error:
run-parts /etc/cron.weekly -v
Option -v prints the script names before they are run.
A wee bit beyond the scope of your question... but here's what I do.
The "how do I test a cron job?" question is closely connected to "how do I test scripts that run in non-interactive contexts launched by other programs?" In cron, the trigger is some time condition, but lots of other *nix facilities launch scripts or script fragments in non-interactive ways, and often the conditions in which those scripts run contain something unexpected and cause breakage until the bugs are sorted out. (See also: https://stackoverflow.com/a/17805088/237059 )
A general approach to this problem is helpful to have.
One of my favorite techniques is to use a script I wrote called 'crontest'. It launches the target command inside a GNU screen session from within cron, so that you can attach with a separate terminal to see what's going on, interact with the script, even use a debugger.
To set this up, you would use "all stars" in your crontab entry, and specify crontest as the first command on the command line, e.g.:
* * * * * crontest /command/to/be/tested --param1 --param2
So now cron will run your command every minute, but crontest will ensure that only one instance runs at a time. If the command takes time to run, you can do a "screen -x" to attach and watch it run. If the command is a script, you can put a "read" command at the top to make it stop and wait for the screen attachment to complete (hit enter after attaching).
If your command is a bash script, you can do this instead:
* * * * * crontest --bashdb /command/to/be/tested --param1 --param2
Now, if you attach with "screen -x", you'll be facing an interactive bashdb session, and you can step through the code, examine variables, etc.
#!/bin/bash
# crontest
# See https://github.com/Stabledog/crontest for canonical source.
# Test wrapper for cron tasks. The suggested use is:
#
# 1. When adding your cron job, use all 5 stars to make it run every minute
# 2. Wrap the command in crontest
#
#
# Example:
#
# $ crontab -e
# * * * * * /usr/local/bin/crontest $HOME/bin/my-new-script --myparams
#
# Now, cron will run your job every minute, but crontest will only allow one
# instance to run at a time.
#
# crontest always wraps the command in "screen -d -m" if possible, so you can
# use "screen -x" to attach and interact with the job.
#
# If --bashdb is used, the command line will be passed to bashdb. Thus you
# can attach with "screen -x" and debug the remaining command in context.
#
# NOTES:
# - crontest can be used in other contexts, it doesn't have to be a cron job.
# Any place where commands are invoked without an interactive terminal and
# may need to be debugged.
#
# - crontest writes its own stuff to /tmp/crontest.log
#
# - If GNU screen isn't available, neither is --bashdb
#
crontestLog=/tmp/crontest.log
lockfile=$(if [[ -d /var/lock ]]; then echo /var/lock/crontest.lock; else echo /tmp/crontest.lock; fi )
useBashdb=false
useScreen=$( if which screen &>/dev/null; then echo true; else echo false; fi )
innerArgs="$@"
screenBin=$(which screen 2>/dev/null)

function errExit {
    echo "[-err-] $@" | tee -a $crontestLog >&2
    exit 1
}

function log {
    echo "[-stat-] $@" >> $crontestLog
}

function parseArgs {
    while [[ ! -z $1 ]]; do
        case $1 in
            --bashdb)
                if ! $useScreen; then
                    errExit "--bashdb invalid in crontest because GNU screen not installed"
                fi
                if ! which bashdb &>/dev/null; then
                    errExit "--bashdb invalid in crontest: no bashdb on the PATH"
                fi
                useBashdb=true
                ;;
            --)
                shift
                innerArgs="$@"
                return 0
                ;;
            *)
                innerArgs="$@"
                return 0
                ;;
        esac
        shift
    done
}

if [[ -z $sourceMe ]]; then
    # Lock the lockfile (no, we do not wish to follow the standard
    # advice of wrapping this in a subshell!)
    exec 9>$lockfile
    flock -n 9 || exit 1

    # Zap any old log data:
    [[ -f $crontestLog ]] && rm -f $crontestLog

    parseArgs "$@"

    log "crontest starting at $(date)"
    log "Raw command line: $@"
    log "Inner args: $innerArgs"
    log "screenBin: $screenBin"
    log "useBashdb: $( if $useBashdb; then echo YES; else echo no; fi )"
    log "useScreen: $( if $useScreen; then echo YES; else echo no; fi )"

    # We're building a command line.
    cmdline=""

    # If screen is available, put the task inside a pseudo-terminal
    # owned by screen. That allows the developer to do a "screen -x" to
    # interact with the running command:
    if $useScreen; then
        cmdline="$screenBin -D -m "
    fi

    # If bashdb is installed and --bashdb is specified on the command line,
    # pass the command to bashdb. This allows the developer to do a "screen -x" to
    # interactively debug a bash shell script:
    if $useBashdb; then
        cmdline="$cmdline $(which bashdb) "
    fi

    # Finally, append the target command and params:
    cmdline="$cmdline $innerArgs"
    log "cmdline: $cmdline"

    # And run the whole schlock:
    $cmdline
    res=$?

    log "Command result: $res"
    echo "[-result-] $(if [[ $res -eq 0 ]]; then echo ok; else echo fail; fi)" >> $crontestLog

    # Release the lock:
    exec 9>&-
fi
After messing about with some stuff in cron which wasn't instantly compatible I found that the following approach was nice for debugging:
crontab -e
* * * * * /path/to/prog var1 var2 >> /tmp/cron_debug_log.log 2>&1
This will run the task once a minute and you can simply look in the /tmp/cron_debug_log.log file to figure out what is going on.
It is not exactly the "fire job" you might be looking for, but this helped me a lot when debugging a script that didn't work in cron at first.
I'd use a lock file and then set the cron job to run every minute. (use crontab -e and * * * * * /path/to/job) That way you can just keep editing the files and each minute they'll be tested out. Additionally, you can stop the cronjob by just touching the lock file.
#!/bin/sh
if [ -e /tmp/cronlock ]
then
    echo "cronjob locked"
    exit 1
fi
touch /tmp/cronlock
<...do your regular cron here ....>
rm -f /tmp/cronlock
What about putting it into cron.hourly, waiting until the next run of hourly cron jobs, then removing it? That would run it once within an hour, and in the cron environment. You can also run ./your_script, but that won't have the same environment as under cron.
Aside from that you can also use:
http://pypi.python.org/pypi/cronwrap
to wrap up your cron to send you an email upon success or failure.
None of these answers fit my specific situation, which was that I wanted to run one specific cron job, just once, and run it immediately.
I'm on an Ubuntu server, and I use cPanel to set up my cron jobs.
I simply wrote down my current settings, and then edited them to be one minute from now. When I fixed another bug, I just edited it again to one minute from now. And when I was all done, I just reset the settings back to how they were before.
Example: It's 4:34pm right now, so I put 35 16 * * *, for it to run at 16:35.
It worked like a charm, and the most I ever had to wait was a little less than one minute.
I thought this was a better option than some of the other answers because I didn't want to run all of my weekly crons, and I didn't want the job to run every minute. It takes me a few minutes to fix whatever the issues were before I'm ready to test it again. Hopefully this helps someone.
The solution I am using is as follows:
1) Edit the crontab (use the command crontab -e) to run the job as frequently as needed (every 1 minute or 5 minutes)
2) Modify the shell script which should be executed by cron to print its output into some file (e.g.: echo "Working fine" >> output.txt)
3) Check the output.txt file using the command tail -f output.txt, which will print the latest additions to this file, and thus you can track the execution of the script
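Concretely, that workflow might look like this (paths are placeholders):
# crontab -e entry: run the script every minute while testing
* * * * * /path/to/script.sh
# inside the script, log progress:
echo "Working fine $(date)" >> /tmp/output.txt
# in a terminal, watch the log grow:
tail -f /tmp/output.txt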
I normally test by running the job I created like this:
It is easier to use two terminals to do this.
Run the job in the first terminal:
# ./jobname.sh
In the second terminal, go to /var/log and follow the cron log:
# tailf /var/log/cron
This allows me to see the cron logs update in real time. You can also review the log after you run it, but I prefer watching in real time.
Here is an example of a simple cron job. Running a yum update...
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
Here is the breakdown:
First command will update yum itself and next will apply system updates.
-R 120 : Sets the maximum amount of time yum will wait before performing a command
-e 0 : Sets the error level to 0 (range 0 - 10). 0 means print only critical errors about which you must be told.
-d 0 : Sets the debugging level to 0 - turns up or down the amount of things that are printed. (range: 0 - 10).
-y : Assume yes; assume that the answer to any question which would be asked is yes
After I built the cron job I ran the below command to make my job executable.
#chmod +x /etc/cron.daily/jobname.sh
Hope this helps,
Dorlack
sudo run-parts --test /var/spool/cron/crontabs/
Files in that crontabs/ directory need to be executable by their owner (octal 700).
Source: man cron and NNRooth's answer.
I'm using Webmin because it's a productivity gem for someone who finds command-line administration a bit daunting and impenetrable.
There is a "Save and Run Now" button in the "System > Scheduled Cron Jobs > Edit Cron Job" web interface.
It displays the output of the command and is exactly what I needed.
