pkill with -f flag in crontab not running command after semicolon - Linux

I wanted to kill a process and remove a lock file that flags the process as running. My cron entry:
00 22 * * 1-5 pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
This works perfectly when I run it in a terminal, but in crontab the rm never runs. All I can think of is that the whole line after the -f flag is being taken as arguments for pkill.
Any reason why this is happening?
Keeping them as separate cron entries works. Also, pkill without the -f flag runs (though it doesn't kill the process, since I need the pattern to be matched against the whole command line).

Ran into this problem today and just wanted to post a working example for those who run into this:
pkill -f ^'python3 /Scripts/script.py' > /dev/null 2>&1 ; python3 /Scripts/script.py > /tmp/script.log 2>&1
This runs pkill and searches the whole command line (-f) for one that starts with (regex ^) python3 /Scripts/script.py. As such, it will never kill itself, because its own command line does not start with that string (it starts with pkill).

The short answer: it simply killed itself!
My answer explained:
If you let a command be started by crond, it is executed in a subshell. Most probably the line you'll find in ps or htop will look like this:
/bin/sh -c pkill -f script.sh >log 2>&1 ; rm lock >log 2>&1
(details may vary; e.g. you might have bash instead of sh)
The point is that the whole line got one PID (process ID) and is one of the command lines which pgrep/pkill parses when the -f parameter is used. As stated in the man page:
-f, --full
The pattern is normally only matched against the process name. When -f is set, the full command line is used.
Now your pkill looks for any command line in your running process list which somehow contains the expression 'script.sh', and eventually it finds that very line. As a result, it gets that PID and terminates it. Unfortunately the very same PID holds the rest of your command chain, which just got killed by itself.
So you basically wrote a 'suicide line of commands' ;)
BTW: I just did the same thing today, and that's how I found your question.
Hope this answer helps, even if it comes a little late.
Kind regards

3.141592 and nanananananananananananaBATMAN's answers are correct.
I worked around this problem like this.
00 22 * * 1-5 pkill -f script.[s][h] >log 2>&1 ; rm lock >log 2>&1
This works because the regex script.[s][h] still matches the string script.sh, but it does not match the literal text script.[s][h] that appears in pkill's own command line.
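The self-match and the character-class workaround can be sketched with grep, which applies the same kind of regex matching that pkill -f does (the cron line text below is illustrative):

```shell
# The cron job's own command line contains the plain pattern...
line='/bin/sh -c pkill -f script.sh >log 2>&1 ; rm lock'
echo "$line" | grep -q 'script.sh' && echo "self-match"

# The regex script.[s]h still matches the target command line...
echo '/bin/sh /path/script.sh' | grep -q 'script.[s]h' && echo "matches target"

# ...but not the literal text script.[s]h in pkill's own command line.
line2='/bin/sh -c pkill -f script.[s]h >log 2>&1 ; rm lock'
echo "$line2" | grep -q 'script.[s]h' || echo "no self-match"
```

This prints self-match, matches target, no self-match: exactly why the character-class version kills the script but not the cron job's shell.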

Related

Running a process with the TTY detached

I'd like to run a Linux console command from a terminal, preventing it from accessing the TTY by itself (which often happens, for example, when the console command tries to request a password from the user - this should just fail). The closest I got to a solution is this wrapper:
temp=`mktemp -d`
echo "$@" > "$temp/run.sh"     # write the requested command to a script
mkfifo "$temp/out" "$temp/err" # FIFOs to carry stdout/stderr back
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &  # new session: no controlling TTY
cat "$temp/err" 1>&2 &         # forward stderr in the background
cat "$temp/out"                # forward stdout (blocks until the command finishes)
rm -f "$temp/out" "$temp/err" "$temp/run.sh"
rmdir "$temp"
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. Turns out that the script already contained a working approach. It just contained a typo which caused it to fail. I corrected it in the question so it may serve for future reference.
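For reference, the FIFO plumbing the wrapper depends on can be exercised in isolation. Opening a FIFO blocks until both a reader and a writer are attached, which is why one of the cat processes must be backgrounded (paths here are illustrative):

```shell
d=$(mktemp -d)
mkfifo "$d/out"
# The writer blocks on open() until a reader attaches...
( echo hello > "$d/out" ) &
# ...and this reader unblocks it, then prints "hello".
cat "$d/out"
wait
rm -r "$d"
```

If both cats ran in the foreground, the second FIFO would never get a reader and the pipeline would deadlock.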

How to run a script in background (linux openwrt)?

I have this script:
#!/bin/sh
while [ true ] ; do
    urlfile=$( ls /root/wget/wget-download-link.txt | head -n 1 )
    dir=$( cat /root/wget/wget-dir.txt )
    if [ "$urlfile" = "" ] ; then
        sleep 30
        continue
    fi
    url=$( head -n 1 "$urlfile" )
    if [ "$url" = "" ] ; then
        mv "$urlfile" "$urlfile.invalid"
        continue
    fi
    mv "$urlfile" "$urlfile.busy"
    wget -b "$url" -P "$dir" -o /www/wget.log -c -t 100 -nc
    mv "$urlfile.busy" "$urlfile.done"
done
The script checks wget-download-link.txt for new URLs every 30 seconds, and if there is a new URL it downloads it with wget. The problem is that when I try to run this script from PuTTY like this:
/root/wget/wget_download.sh --daemon
it still runs in the foreground and I can still see the terminal output. How do I make it run in the background?
In OpenWRT neither nohup nor screen is available by default, so a solution using only builtin commands is to start a subshell with parentheses and put it in the background with &:
(/root/wget/wget_download.sh >/dev/null 2>&1 )&
you can test this structure easily on your desktop for example with
(notify-send one && sleep 15 && notify-send two)&
... and then close your console before those 15 seconds are over, you will see the commands in the brackets continue execution after closing the console.
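That the backgrounded subshell outlives its parent can be sketched without a router: here the outer sh exits immediately, while the subshell keeps running and writes its result a second later (the file name is illustrative):

```shell
f=$(mktemp)
# The outer sh exits at once; the backgrounded subshell is reparented
# and keeps running.
sh -c "( sleep 1; echo done > $f ) &"
sleep 2
cat "$f"    # prints "done"
rm "$f"
```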
The following command will also work:
((/root/wget/wget_download.sh)&)&
This way you don't have to install the 'nohup' command in the tight memory space of the router running OpenWrt.
I found this somewhere several years ago. It works.
The & at the end of the script should be enough. If you still see output from the script, it means that stdout and/or stderr is not closed or not redirected to /dev/null.
You can use this answer:
How to redirect all output to /dev/null
I am using Asuswrt-Merlin, and the only way to get it working was using the cru cron manager [1]. nohup and screen are not available as solutions.
cru a pinggw "0 * * * * /bin/ping -c 10 -q 192.168.2.254"
Works like a charm.
[1][https://www.cyberciti.biz/faq/how-to-add-cron-job-on-asuswrt-merlin-wifi-router/]
https://openwrt.org/packages/pkgdata/coreutils-nohup
opkg update
opkg install coreutils-nohup
nohup yourscript.sh &
You can use nohup.
nohup yourscript.sh
or
nohup yourscript.sh &
Your script will keep running even if you close your PuTTY session, and all the output will be written to nohup.out in the current directory.
nohup is often used in combination with the nice command to run processes on a lower priority.
nohup nice yourscript.sh &
See: http://en.wikipedia.org/wiki/Nohup
For BusyBox on an Asuswrt-Merlin system, I found a better solution that combines cru and the date command:
cru a YOUR_UNIQUE_CRON_NAME "`date -D '%s' +'%M %H %d %m *' -d $(( \`date +%s\`+2*60 ))` YOUR_CMD_HERE"
which adds a cron job that runs 2 minutes later, and only once.
Inspired by PlagTag's idea.
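The date arithmetic can be checked on its own. Note that BusyBox date on the router takes the timestamp as -D '%s' -d TIMESTAMP, whereas GNU date on a desktop takes it as -d @TIMESTAMP; a fixed timestamp and UTC are used here so the output is reproducible:

```shell
# Format "timestamp + 2 minutes" as cron minute/hour/day/month fields.
# 1000000000 is 2001-09-09 01:46:40 UTC, so +120s gives 01:48:40.
date -u -d "@$(( 1000000000 + 2*60 ))" +'%M %H %d %m *'
# -> 48 01 09 09 *
```

In the real command `date +%s` supplies the current time instead of the fixed value.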
Another way to do it:
ssh admin@192.168.1.1 "/jffs/your_script.sh &"
Simple, and without any extra programs like nohup or screen...
(BTW: worked on Asus-Merlin firmware)
Try this:
nohup /root/wget/wget_download.sh >/dev/null 2>&1 &
It will go into the background, so when you close your PuTTY session it will still be running, and it won't send messages to the terminal.

How to get the process id of command executed in bash script?

I have a script where I want to run two programs at the same time. One is a C program and the other is cpulimit. I want to start the C program in the background first with &, then get the PID of the C program and hand it to cpulimit, which will also run in the background with &.
I tried doing this as shown below, and it just starts the first program and never starts cpulimit.
Also, I am running this as a startup script as root using systemd on Arch Linux.
#!/bin/bash
/myprogram &
PID=$!
cpulimit -z -p $PID -l 75 &
exit 0
I think I have this solved now. According to this here: link, I need to wrap the commands like this (command) to create a subshell.
#!/bin/bash
(myprogram &)
mypid=$!
(cpulimit -z -p $mypid -l 75 &)
exit 0
I just found this while googling and wanted to add something.
While your solution seems to work (though see the comments about subshells: $! is not set in the parent by a job backgrounded inside a subshell), in this case you don't need to get the PID at all. Just run the command like this:
cpulimit -z -l 75 myprogram &
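A sketch of the $! behavior at issue, using sleep as a stand-in for the C program: a job backgrounded in the current shell sets $!, but one backgrounded inside a (...) subshell does not set it in the parent, which is why the accepted workaround's $mypid is actually empty.

```shell
# Backgrounding in the current shell sets $! ...
sleep 2 &
pid=$!
[ -n "$pid" ] && echo "parent sees PID"
kill "$pid"

# ...but a job started inside (...) leaves the parent's $! untouched:
sh -c '( sleep 2 & ); [ -z "$!" ] && echo "subshell PID not visible"'
```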

passing control+C in linux shell script

In a shell script I have a command like pid -p PID, followed by some more commands. But as soon as the pid -p PID command runs, I have to press Ctrl+C to exit it, and only then do the further commands execute. I want to do this periodically: I have everything in a shell script and I want to put it into crontab. The only thing that bothers me is: once this script is scheduled in crontab, after the pid -p PID command starts, how will I supply the Ctrl+C needed to let the further commands execute? Please help.
my script is like this.. very simple one
top -p $1
free -m
netstat -antp|grep 3306|grep $1
jmap -dump:file=my_stack$RANDOM.bin $1
You can send signals with kill. In your case, however, you can just restrict top to a single iteration:
top -p $1 -n 1
Update:
You can redirect the output of a command to a file. Either overwrite the file each time
command.sh >file.txt 2>&1
or append to a file
command.sh >>file.txt 2>&1
If you don't want the error output, leave out the 2>&1 part.
pid -p PID &
some_pid=$!
kill -s INT $some_pid
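A runnable sketch of this answer, with sleep standing in for pid -p PID. One caveat I'm adding here: non-interactive shells start background jobs with SIGINT ignored unless job control is enabled, so inside a script (or a cron job) you need set -m for the kill -s INT to land:

```shell
set -m                   # enable job control so the child keeps default SIGINT handling
sleep 30 &
some_pid=$!
kill -s INT "$some_pid"  # the script's stand-in for pressing Ctrl+C
wait "$some_pid"
echo "exit status: $?"   # 128 + 2 (SIGINT) = 130
```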

How dangerous is this bash script?

WARNING: Dangerous script. Do not run from your command line!
Saw this in a company joke email. Can someone explain to me why this bash script is more dangerous than a normal 'rm -rf' command?:
nohup cd /; rm -rf * > /dev/null 2>&1 &
Particularly, why is nohup used and what are the elements at the end for?
WARNING: Dangerous script. Do not run from your command line!
You can try something less "dangerous":
nohup cd /; find * >/dev/null 2>&1 &
I'm getting this:
nohup: ignoring input and appending output to `nohup.out'
nohup: cannot run command `cd': No such file or directory
[2] 16668
So the nohup part does nothing; it only triggers an error. The second part (of the original script) tries to remove everything in your current directory, and it cannot be stopped with Ctrl-C because it runs in the background. All its output is redirected to the void, so you do not see any 'Permission denied' progress messages.
2>&1 takes stderr (file handle 2) and redirects to stdout (file handle 1). & by itself places the rm command in the background. nohup allows a job to keep running even after the person who started it logs out.
In other words, this command does its best to wipe out the entire file system, even if the user ragequits their terminal/shell.
The joke is kind of broken; obviously it has not been tested. He meant
nohup sh -c "cd / ; rm -rf *" > /dev/null 2>&1 &
or
nohup rm -rf / > /dev/null 2>&1 &
otherwise the nohup cd /; part is treated as a separate command by the shell, and the second part just spawns rm -rf *, which recursively removes your current directory (minus the files whose names start with .).
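The parsing mistake can be demonstrated harmlessly: the shell splits the line at ; before nohup sees anything, so only cd / belongs to nohup, and the working directory of the rest of the line is unchanged:

```shell
cd /tmp
# nohup gets only "cd /" (and fails, since cd is a shell builtin, not an
# executable); pwd runs as a separate command, still in /tmp:
nohup cd / 2>/dev/null ; pwd
# prints /tmp, not /
```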
nohup means that it will ignore the hangup signal, meaning that it will keep running even if the user is no longer signed in.
cd / moves the user to the root directory
rm -rf * removes all files recursively (traverses all directories) and forcefully (doesn't care if files are in use)
The piece at the end redirects all output to nowhere. The net effect is an attempt to silently delete everything on the filesystem.
nohup [..] & makes it run in the background even after the user has logged out (making it harder to stop, I suppose)
2>&1 redirects stderr to stdout
> /dev/null discards anything coming from stdout
The command would basically appear to do nothing, as your filesystem slowly gets destroyed in the background.
