How to kill long-running jobs at one-hour intervals? - linux

I want to find all jobs that have been running for more than one hour and kill them, then sleep for 60 minutes, then search again for any job running more than 60 minutes, and loop.

If you want to find the PIDs of the processes that have been running for more than 60 minutes on your Linux box, you can use a very simple and basic bash script like the one below:
#!/bin/sh
MIN=60
SEC=$((MIN*60))
ps -eo etimes=,pid= | while read -r sec pid; do
    if [ "${sec}" -gt "${SEC}" ]; then
        echo "${pid}"
        #kill -9 "${pid}"  # remove the # at the beginning of the line to actually kill those processes
    fi
done
This will display the PIDs of the matching processes, one per line.
Assuming you name this script 60min.sh, you can run it every 60 minutes using a cron job:
0 * * * * /bin/bash /path_to/60min.sh
This cron job runs your 60min.sh script every 60 minutes (i.e. every hour, on the hour).
Please keep in mind that you might accidentally kill system processes, and your system might become unstable or unusable, forcing you to reboot.
If your processes run under a specific Linux user, I would recommend searching only for processes belonging to that user, and never for those belonging to root.
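As a sketch of that safer variant (the user name appuser is an assumption; substitute your own), you can restrict the search to a single user like this:

```shell
#!/bin/sh
# Print PIDs of one user's processes that exceed a given age.
# "appuser" is a placeholder user name -- substitute your own.

# filter_older: reads "etimes pid" pairs on stdin, prints PIDs whose
# elapsed time (in seconds) exceeds the limit given as $1
filter_older() {
    limit=$1
    while read -r sec pid; do
        if [ "$sec" -gt "$limit" ]; then
            echo "$pid"
        fi
    done
}

# list processes belonging to appuser that have run for more than an hour
ps -eo user=,etimes=,pid= | awk -v u="appuser" '$1 == u { print $2, $3 }' | filter_older 3600
```

As before, pipe the result into kill only once you are confident the list contains nothing you need.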

Related

Run process in background of shell for 2 minutes, then terminate it before 2 minutes is up BASH

I am using the Peppermint distro. I'm new to Linux, but I need to display the system processes, then create a new process to run in the background for 2 minutes, prove it's running, and then terminate it before the 2 minutes is up.
So far I'm using xlogo to test that my process is working. I have:
ps
xlogo &
TASK_PID=$!
if pgrep -x xlogo >/dev/null 2>&1
then
    ps
    sleep 15
    kill $TASK_PID
    ps
fi
I can't seem to figure out a way to give it an initial time limit of 2 minutes but then kill it after 15 seconds anyway.
Any help appreciated!
If you want the command to have an overall time limit of 2 minutes, you could do
timeout 2m xlogo &
Of course, $! will then be the PID of the timeout command. If you're using pgrep and are satisfied it only finds the process you care about, you could use pkill instead of the PID to kill the xlogo.
Then again, killing the timeout PID will also kill xlogo, so you may be able to keep the rest as-is if you're happy with how that works.
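Putting that together as a sketch (using sleep 120 as a stand-in for xlogo, since xlogo needs an X display):

```shell
#!/bin/bash
# Give the background job an overall 2-minute ceiling with timeout,
# but terminate it early after 15 seconds anyway.
# "sleep 120" stands in for xlogo here.
timeout 2m sleep 120 &
TASK_PID=$!              # PID of the timeout process, not of the job itself
ps                       # prove it's running
sleep 15
kill "$TASK_PID"         # GNU timeout forwards the signal to its child
wait "$TASK_PID" 2>/dev/null || true
ps                       # prove it's gone
```

If the early kill never happens, timeout itself terminates the job when the 2 minutes expire.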

Linux - Run script after time period expires

I have a small NodeJS script that does some processing. Depending on the amount of data needing to be processed, this can take a couple of seconds to hours.
What I want to do is schedule this command to run every hour after the previous attempt has completed. I'm wary of using something like cron because I need to ensure that two instances of the script aren't running at the same time.
If you really don't like cron (or at) you can just use a simple bash script:
#!/bin/bash
while true
do
    # Do something
    echo Invoke long-running node.js script
    # Wait an hour
    sleep 3600
done
The (obvious) drawback is that you will have to make it run in the background somehow (e.g. via nohup or screen) and add proper error handling (given that your script might fail, and you still want it to run again in an hour).
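A minimal sketch of that error handling ("node process.js" is a placeholder for the actual long-running script):

```shell
#!/bin/bash
# run_once: run the job given as arguments; log a failure instead of
# letting it abort the surrounding loop.
run_once() {
    if "$@"; then
        echo "job succeeded"
    else
        echo "job failed, will retry next hour" >&2
    fi
}

# The real loop would be started with nohup or inside screen/tmux:
# while true; do run_once node process.js; sleep 3600; done
```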
A bit more elaborate "custom script" solution might be like that:
#!/bin/bash

# Settings
LAST_RUN_FILE=/var/run/lock/hourly.timestamp
FLOCK_LOCK_FILE=/var/run/lock/hourly.lock
FLOCK_FD=100

# Minimum time to wait between two job runs
MIN_DELAY=3600

# Welcome message, parameter check
if [ -z "$1" ]
then
    echo "Please specify the command (job) to run, as follows:"
    echo "./hourly COMMAND"
    exit 1
fi
echo "[$(date)] MIN_DELAY=$MIN_DELAY seconds, JOB=$*"

# Set an exclusive lock, or skip execution if it is already set
eval "exec $FLOCK_FD>$FLOCK_LOCK_FILE"
if ! flock -n $FLOCK_FD
then
    echo "Lock is already set, skipping execution."
    exit 0
fi

# Last run timestamp
if ! [ -e "$LAST_RUN_FILE" ]
then
    echo "Timestamp file ($LAST_RUN_FILE) is missing, creating a new one."
    echo 0 > "$LAST_RUN_FILE"
fi

# Compute delay, and wait
let DELAY="$MIN_DELAY-($(date +%s)-$(cat $LAST_RUN_FILE))"
if [ "$DELAY" -gt 0 ]
then
    echo "Waiting for $DELAY seconds, before proceeding..."
    sleep "$DELAY"
fi

# Proceed with the actual task
echo "[$(date)] Running the task..."
echo
"$@"

# Update the last run timestamp
echo
echo "Done, going to update the last run timestamp now."
date +%s > "$LAST_RUN_FILE"
This will do two things:
It sets an exclusive execution lock (with flock), so that no two instances of the job run at the same time, regardless of how you start them (manually, via cron, etc.);
If the last job completed less than MIN_DELAY seconds ago, it sleeps for the remaining time before running the job again.
Now, if you schedule this script to run, say, every 15 minutes with cron, like this:
*/15 * * * * /home/myuser/hourly my_periodic_task and its arguments
it is guaranteed to execute with a delay of at least MIN_DELAY (one hour) since the last job completed, and any intermediate runs will be skipped.
In the worst case, it will execute in MIN_DELAY + 15 minutes (as the scheduling period is discrete), but never earlier than that.
Other non-cron scheduling methods should work too (e.g. just running this script in a loop, or re-scheduling each run with at).
Alternatively, you can use cron and add process.exit(0) to your Node script.

Stopping a running bash script from another script

I have a script called first.sh and this script calls another script using "./second.sh". In second.sh there are commands to play songs. For example, the content of second.sh could be:
play song1.mp3
play song2.mp3
...
I want to stop the script second.sh at certain times during the day. The problem is that killall (and similar commands) do not help, because the script name "second.sh" does not appear in the list of commands when I use "ps aux"; I only see "play song1.mp3", and then "play song2.mp3" once song2 starts playing.
What can I do to stop second.sh using a command in the terminal? Or at least tie all the commands in it to a single process so I can kill that particular process?
Any help is appreciated, I've tried many ideas I found online but nothing seems to work.
Because you said :
at certain times during the day,
I would recommend crontab.
Use crontab -e and append the below line
0 12 * * * kill -9 $(ps aux | awk '/play/{print $2}')
This kills, at 12:00 each day, every process whose command line matches play.
The syntax for the crontab file is
m h dom mon dow command
where:
m - minute
h - hours
dom - day of month
mon - month
dow - day of week
command - the command that you wish to execute.
Edit
Or you could do something like this :
0 12 * * * killall -s SIGSTOP play
0 16 * * * killall -s SIGCONT play
which will suspend all the play processes at 12:00 and resume them at 16:00.
Requirement
You need to have the cron daemon up and running on your system.
You can save the PGID (process group ID) of the process explicitly, and then use the signals SIGSTOP and SIGCONT to suspend and resume the process group.
first.sh
#!/bin/bash
nohup ./second.sh > /dev/null 2>&1 &
echo $$ > /tmp/play.pid    # save the process group id
second.sh
#!/bin/bash
play ...
play ...
third.sh
#!/bin/bash
case $1 in
    (start)
        kill -CONT -$(cat /tmp/play.pid)
        ;;
    (stop)
        kill -STOP -$(cat /tmp/play.pid)
        ;;
esac
Now you can launch and control the play as follows:
./first.sh
./third.sh stop
./third.sh start
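The group-signalling trick above can be demonstrated in isolation. This sketch uses setsid and a plain sleep as a stand-in for second.sh and its play children:

```shell
#!/bin/bash
# Minimal demo of suspending/resuming a whole process group.
# "sleep 60" stands in for second.sh and the play commands it spawns.
setsid sleep 60 &    # setsid makes the child the leader of a new group,
PGID=$!              # so its PID equals its PGID
sleep 1

kill -STOP -"$PGID"                   # negative PID targets the whole group
ps -o pid,pgid,stat,comm -p "$PGID"   # STAT contains "T" (stopped)

kill -CONT -"$PGID"  # resume the group
kill -TERM -"$PGID"  # clean up
```

Note that setsid only behaves this way when the caller is not already a process group leader, which holds inside a non-interactive script.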
Alternatively, you can stop second.sh itself:
killall second.sh
Note, however, that the play process currently running is a separate child and may keep playing until the current song ends, so you may want to kill it as well.

Linux: replace sleep calls with a process status check

Linux bash command:
I have a .sh script as follows which has a couple of sleep calls. Is there a way I can replace the sleep calls, since the sleep time may not be accurate? I want to check the process's running time and continue once the previous process finishes.
./deploy.sh
sleep 60
./stop-tomcat.sh
sleep 60
./start-tomcat.sh stop
Help appreciated.
Find the PID:
$ pgrep gcalctool
15435
Then wait for a process that is not a child of the current shell to exit:
while [ -e /proc/15435 ]; do sleep 0.1; done
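That polling loop generalizes into a small helper you can drop in place of the sleep calls. The pgrep pattern for finding Tomcat's PID is an assumption; adapt it to your setup:

```shell
#!/bin/sh
# wait_for_pid: block until the process with the given PID has exited.
# Useful for processes that are not children of this shell, where the
# builtin "wait" cannot be used.
wait_for_pid() {
    while kill -0 "$1" 2>/dev/null; do
        sleep 0.1
    done
}

# Usage sketch -- the pgrep pattern is an assumption:
# ./deploy.sh
# ./stop-tomcat.sh
# wait_for_pid "$(pgrep -f tomcat)"   # instead of: sleep 60
# ./start-tomcat.sh
```

kill -0 sends no signal; it only checks whether the PID still exists (for processes you are allowed to signal).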

Instance limited cron job

I want to run a cron job every minute that will launch a script. Simple enough there. However, I need to make sure that not more than X number (defined in the script) of instances are ever running. These are queue workers, so if at any minute interval 6 workers are still active, then I would not launch another instance. The script simply launches a PHP script which exits if no job available. Right now I have a shell script that perpetually launches itself every 10 seconds after exit... but there are long periods of time where there are no jobs, and a minute delay is fine. Eventually I would like to have two cron jobs for peak and off-peak, with different intervals.
Make sure your script has a unique name.
Then check whether 6 instances are already running:
if [ $(pgrep -c '^UNIQUE_SCRIPT_NAME$') -lt 6 ]
then
# start my script
else
# do not start my script
fi
I'd say that if you want to iterate as often as every minute, then a process like your current shell script that relaunches itself is what you actually want to do. Just increase the delay from 10 seconds to a minute.
That way, you can also more easily control your delay for peak and off-peak, as you wanted. It would be rather elegant to simply use a shorter delay if the script found something to do the last time it was launched, or a longer delay if it did not find anything.
You could use a script like OneAtATime to guard against multiple simultaneous executions.
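A minimal flock-based guard works the same way (the lock-file path here is an arbitrary assumption):

```shell
#!/bin/bash
# Skip this run entirely if another instance still holds the lock.
# /tmp/worker.lock is an arbitrary choice of lock-file path.
LOCKFILE=/tmp/worker.lock

exec 9>"$LOCKFILE"           # open the lock file on fd 9
if ! flock -n 9; then        # try to lock it without blocking
    echo "another instance is running, exiting"
    exit 0
fi

# ... the actual queue-worker invocation would go here ...
echo "lock acquired, doing work"
```

The lock is released automatically when the script exits, so a crashed worker never leaves a stale lock behind.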
This is what I am using in my shell scripts:
echo -n "Checking if job is already running... "
me=$(basename "$0")
running=$(ps aux | grep "${me}" | grep -v .log | grep -v grep | wc -l)
if [ $running -gt 1 ]
then
    echo "already running, stopping job"
    exit 1
else
    echo "OK."
fi
The command you're looking for is in line 3. Just replace ${me} with your PHP script name. In case you're wondering about the grep -v .log part: I'm piping the output into a log file whose name partially contains the script name, so this avoids it being double-counted.