How to kill a process after a given real running time in Bash? - linux

To kill a process after a given wall-clock timeout in Bash, there is a nice command called timeout. However, I'm running my program on a multi-user server, and I don't want the performance of my program to be distorted by other users' load. Is there a way in Bash to kill a process after a given amount of time that the program has actually been running, i.e. CPU time?

On Bash+Linux, you can use ulimit -t. Here's the relevant line from help ulimit:
-t the maximum amount of cpu time in seconds
Here's an example:
$ time bash -c 'ulimit -t 5; while true; do true; done'
Killed
real 0m8.549s
user 0m4.983s
sys 0m0.008s
The infinite loop process was scheduled (i.e. actually ran) for a total of 5 seconds before it was killed. Due to other processes competing for the CPU at the same time, this took 8.5 seconds of wall time.
A command like sleep 3600 would never be killed, since it doesn't use any CPU time.
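If you also need a wall-clock backstop for processes that mostly sleep, the two approaches can be combined; a minimal sketch, where my-program stands in for your command and the 60-second outer limit is an arbitrary choice:
$ timeout 60 bash -c 'ulimit -t 5; exec my-program'
The program is then killed after 5 seconds of CPU time or 60 seconds of real time, whichever comes first.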

Related

Get average CPU Usage percentage of single process until I kill it in bash script

I have a bash script where I would like to find out what percentage of CPU time a process uses up to the point where I kill it in the last line of the script.
I know that I could normally do this via /usr/bin/time -v, but my process is Erlang/OTP based, so the time command only measures the statistics of the startup process.
Therefore I'd like to use the process PID, which I can get easily, and use it to get the CPU time percentage until the end of the script.
Currently I'm using pidstat, but it only gives me statistics for fixed time intervals.
I want to measure the exact time interval from when the process started until it gets killed.
Peak RAM statistics would also be nice.
Could you recommend a command I could use in this case?
This is my bash script:
sudo emqx start
sleep 10
mypid=$(sudo emqx pid)
echo $mypid
sudo pidstat -h -r -u -v -p "$mypid" 5 > $local/server_results/test1/emqxstats_$b.txt &
# process for load testing
# jthreads = amount of publishing users
sleep 5
until sudo ~/Downloads/apache-jmeter-5.2.1/bin/jmeter -n -t $local/testplans/csv.jmx -Jport=$a | grep -m 1 "... end of run"; do : ; done
sudo emqx stop
kill %!
So I want to measure the CPU percentage over the interval from starting the broker (emqx) until the Apache JMeter test finishes, i.e. when the script reaches the last 2 lines.
Kind regards
I found the command I was looking for after some more research.
ps -o pcpu -p $pid
This was exactly the command I needed, because it calculates the percentage over the entire lifetime of the process.
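For completeness, the same PID also yields the peak RAM asked about above; a small sketch, assuming $mypid still holds the PID from the script:
# %CPU averaged over the entire lifetime of the process (no header line)
ps -o pcpu= -p "$mypid"
# peak resident set size (RAM high-water mark), as reported by the kernel
grep VmHWM "/proc/$mypid/status"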

Limit CPU time of process group

Is there a way to limit the absolute CPU time (in CPU seconds) spent in a process group?
ulimit -t 10; ./my-process looks like a good option but if my-process forks then each process in the process group gets its own limit. The whole process group can use an arbitrary amount of time by forking every 9 seconds.
The accepted answer on a similar question is to use cgroups but doesn't explain how. However, there are other answers (Limit total CPU usage with cgroups) saying that this is not possible in cgroups and only relative cpu usage can be limited (for example, 0.2 seconds out of every 1 second).
Liran Funaro suggested using a long period for cpu.cfs_period_us (https://stackoverflow.com/a/43660834/892961) but the parameter for the quota can be at most 1 second. So even with a long period I don't see how to set a CPU time limit of 10 seconds or an hour.
If ulimit and cgroups cannot do this, is there another way?
You can do it with cgroups. As root:
# Create cgroup
cgcreate -g cpu:/limited
# set shares (cpu limit)
cgset -r cpu.shares=256 limited
# run your program
cgexec -g cpu:limited /my/hungry/program
Alternatively, you can use the cpulimit program, which can freeze your code periodically. cgroups is the most advanced method, though.
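For instance, a quick sketch of the cpulimit route (the 25% figure is an arbitrary example):
# throttle the program to roughly 25% of one CPU by periodically
# stopping and resuming it
cpulimit -l 25 /my/hungry/program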
To set a fixed CPU share:
cgcreate -g cpu:/fixedlimit
# allow a fixed 25% CPU usage (of 1 CPU)
cgset -r cpu.cfs_quota_us=25000 -r cpu.cfs_period_us=100000 fixedlimit
cgexec -g cpu:fixedlimit /my/hungry/program
It turned out that the goal is to limit the runtime to a certain number of seconds while measuring it. After setting the desired cgroup limits (in order to get a fair sandbox), you can achieve this by running:
((time -p timeout 20 cgexec -g cpu:fixedlimit /program/to/test ) 2>&1) | grep user
After 20 seconds the program is stopped no matter what, and we can parse the user time (or system or real time) to evaluate its performance.
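If you want the measured user time in a variable instead of on the screen, something like this should work (a sketch; it assumes the tested program's own output has no lines starting with "user"):
# bash prints `time -p` results on stderr, so merge it into the pipe
usertime=$( { time -p timeout 20 cgexec -g cpu:fixedlimit /program/to/test ; } 2>&1 | awk '/^user/ { print $2 }' )
echo "user time: ${usertime}s"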
This does not directly answer the question, but it addresses the discussion about the OP's actual need.
If your competition ignores everything except CPU time, it may be fundamentally flawed. One can simply, for example, cache results on the primary storage device. Since you do not count storage access time, a submission may use the fewest CPU cycles yet have the worst actual performance.
A perfect crime would be to simply send the data over the Internet to another computer, which computes the task and returns the answer. This would finish the task with what appears to be zero cycles.
You actually want to measure real time and give the process the highest priority in your system (or actually run it in isolation).
When checking students' homework, we simply used an unrealistically generous time limit (e.g., 5 minutes for what should be a 10-second program), then killed the process if it had not finished in time and failed that submission.
If you want to pick a winner, simply re-run the best competitors multiple times to ensure the validity of their results.
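For reference, a minimal sketch of the homework-checking policy described above, assuming each submission is an executable named ./submission (a hypothetical name):
# fail the submission if it exits non-zero or has not finished after
# 5 minutes of wall-clock time (timeout itself then returns 124)
if ! timeout 300 ./submission; then
    echo "submission failed or exceeded the time limit"
fi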
I found a solution that works for me. It is still far from perfect (read the caveats before using it). I'm somewhat new to bash scripting so any comments about this are welcome.
#!/bin/bash
#
# This script tries to limit the CPU time of a process group similar to
# ulimit but counting the time spent in spawned processes against the
# limit. It works by creating a temporary cgroup to run the process in
# and checking on the used CPU time of that process group. Instead of
# polling in regular intervals, the monitoring process assumes that no
# time is lost to I/O (i.e., wall clock time = CPU time) and checks in
# after the time limit. It then updates its assumption by comparing the
# actual CPU usage to the time limit and waiting again. This is repeated
# until the CPU usage exceeds its limit or the monitored process
# terminates. Once the main process terminates, all remaining processes
# in the temporary cgroup are killed.
#
# NOTE: this script still has some major limitations.
# 1) The monitored process can exceed the limit by up to one second
# since every iteration of the monitoring process takes at least that
# long. It can exceed the limit by an additional second by ignoring
# the SIGXCPU signal sent when hitting the (soft) limit but this is
# configurable below.
# 2) It assumes there is only one CPU core. On a system with n cores
# waiting for t seconds gives the process n*t seconds on the CPU.
# This could be fixed by figuring out how many CPUs the process is
# allowed to use (using the cpuset cgroup) and dividing the remaining
# time by that. Since sleep has a resolution of 1 second, this would
# still introduce an error of up to n seconds.
set -e

if [ "$#" -lt 2 ]; then
    echo "Usage: $(basename "$0") TIME_LIMIT_IN_S COMMAND [ ARG ... ]"
    exit 1
fi
TIME_LIMIT=$1
shift

# To simulate a hard time limit, set KILL_WAIT to 0. If KILL_WAIT is
# non-zero, TIME_LIMIT is the soft limit and TIME_LIMIT + KILL_WAIT is
# the hard limit.
KILL_WAIT=1

# Update as necessary. The script needs permissions to create cgroups
# in the cpuacct hierarchy in a subgroup "timelimit". To create it use:
#   sudo cgcreate -a $USER -t $USER -g cpuacct:timelimit
CGROUPS_ROOT=/sys/fs/cgroup
LOCAL_CPUACCT_GROUP=timelimit/timelimited_$$
LOCAL_CGROUP_TASKS=$CGROUPS_ROOT/cpuacct/$LOCAL_CPUACCT_GROUP/tasks

kill_monitored_cgroup() {
    SIGNAL=$1
    kill -$SIGNAL $(cat $LOCAL_CGROUP_TASKS) 2> /dev/null
}

get_cpu_usage() {
    cgget -nv -r cpuacct.usage $LOCAL_CPUACCT_GROUP
}

# Create a cgroup to measure the CPU time of the monitored process.
cgcreate -a $USER -t $USER -g cpuacct:$LOCAL_CPUACCT_GROUP

# Start the monitored process. In case it fails, we still have to clean
# up, so we disable exiting on errors.
set +e
(
    set -e
    # In case the process doesn't fork, a ulimit is more exact. If the
    # process forks, the ulimit still applies to each child process.
    ulimit -t $(($TIME_LIMIT + $KILL_WAIT))
    ulimit -S -t $TIME_LIMIT
    cgexec -g cpuacct:$LOCAL_CPUACCT_GROUP --sticky "$@"
)&
MONITORED_PID=$!

# Start the monitoring process
(
    REMAINING_TIME=$TIME_LIMIT
    while [ "$REMAINING_TIME" -gt "0" ]; do
        # Wait $REMAINING_TIME seconds for the monitored process to
        # terminate. On a single CPU the CPU time cannot exceed the
        # wall clock time. It might be less, though. In that case, we
        # will go through the loop again.
        sleep $REMAINING_TIME
        CPU_USAGE=$(get_cpu_usage)
        REMAINING_TIME=$(($TIME_LIMIT - $CPU_USAGE / 1000000000))
    done
    # Time limit exceeded. Kill the monitored cgroup.
    if [ "$KILL_WAIT" -gt "0" ]; then
        kill_monitored_cgroup XCPU
        sleep $KILL_WAIT
    fi
    kill_monitored_cgroup KILL
)&
MONITOR_PID=$!

# Wait for the monitored job to exit (either on its own or because it
# was killed by the monitor).
wait $MONITORED_PID
EXIT_CODE=$?

# Kill all remaining tasks in the monitored cgroup and the monitor.
kill_monitored_cgroup KILL
kill -KILL $MONITOR_PID 2> /dev/null
wait $MONITOR_PID 2>/dev/null

# Report actual CPU usage.
set -e
CPU_USAGE=$(get_cpu_usage)
echo "Total CPU usage: $(($CPU_USAGE / 1000000))ms"

# Clean up and exit with the return code of the monitored process.
cgdelete cpuacct:$LOCAL_CPUACCT_GROUP
exit $EXIT_CODE
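A hypothetical invocation, assuming the script was saved as limit_cpu_time.sh (the name is made up) and the one-time setup from the header comment has been done:
$ sudo cgcreate -a $USER -t $USER -g cpuacct:timelimit   # one-time setup
$ ./limit_cpu_time.sh 10 sh -c 'while true; do :; done'  # killed after ~10s of CPU time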

Run process in background of shell for 2 minutes, then terminate it before 2 minutes is up BASH

I am using the Peppermint distro. I'm new to Linux; however, I need to display system processes, then create a new process that runs in the background for 2 minutes. I need to prove it's running and then terminate it before the 2 minutes are up.
So far I'm using xlogo to test that my process is working. I have:
ps
xlogo &
TASK_PID=$!
if pgrep -x xlogo > /dev/null 2>&1
then
    ps
    sleep 15
    kill $TASK_PID
    ps
fi
I can't seem to figure out a way to give it an initial time limit of 2 minutes but then kill it after 15 seconds anyway.
any help appreciated!
If you want the command to start out with a time limit of 2 minutes, you could do
timeout 2m xlogo &
Of course, $! will then be the PID of the timeout command. If you're using pgrep and are satisfied that it only finds the process you care about, you could use pkill instead of the PID to kill the xlogo.
Of course, killing the timeout PID will also kill xlogo, so you might be able to keep the rest as-is if you're happy with how that works.
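For example, a sketch of both pieces together (using the names from the question):
timeout 2m xlogo &   # xlogo runs for at most 2 minutes on its own
ps                   # prove it is running
sleep 15
pkill -x xlogo       # ...but terminate it early anyway, after 15 seconds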

Briefly run / Restart sublime text from terminal

I don't want to run an external program (subl, i.e. Sublime Text) at a certain point in time; I want to run it for a certain amount of time. I basically need to boot the program up for 10 seconds and then kill it, multiple times, because of its install and update process.
How can I do this?
You may have a timeout command on your system, which uses a standard alarm signal to terminate a process. I've never quite understood why no shell provides access to this feature as a builtin. If you don't have timeout on your system, you can simulate it with
my_program & pid=$!
sleep 10
kill "$pid"
Use timeout:
timeout 5s <program>
You can also specify the signal to be sent to terminate the process.
timeout -s9 5s <program>
(OR)
timeout --signal=KILL 5s <program>
Test:
$ time timeout 5s sleep 40
real 0m5.002s
user 0m0.001s
sys 0m0.001s
I ended up using this in my script:
/usr/bin/subl
SPID="$(ps -A | grep sublime_text | awk '{print $1}')"
sleep 5
kill "$SPID"
unset SPID
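A shorter variant of the same idea, assuming exactly one sublime_text process is running, would be to kill by name:
/usr/bin/subl
sleep 5
pkill -x sublime_text   # kill by exact process name instead of parsing ps output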

Run a time-constrained shell command [duplicate]

This question already has answers here:
Command line command to auto-kill a command after a certain amount of time
(15 answers)
Closed 9 years ago.
I'm looking for the simplest way to run a command in a shell and kill it if it doesn't finish within one second of CPU time. Something like:
$ deadline -- slow-foo
Started fooing...
[deadline] 1 sec deadline hit, killing and returning -1!
$ deadline -- quick-foo
Started fooing...
Finished fooing!
A linux-based solution is more than enough, but more portable ones are welcome.
Coreutils has a timeout utility that does just that; it should be available on most Linux distributions:
timeout - run a command with a time limit
It has options for which signal to use and a few other things.
In addition to timeout(1) given in Mat's answer, if you want to limit CPU time (not idle or real time), you could use the setrlimit(2) syscall with RLIMIT_CPU (when the CPU time limit is exceeded, your process first gets a SIGXCPU signal, which it can catch and handle, and later an uncatchable SIGKILL). This syscall is available in bash(1) through the ulimit builtin.
So to limit CPU time to 90 seconds (i.e. 1 minute and 30 seconds), type
ulimit -t 90
in your terminal (assuming your shell is bash; with zsh use limit cputime 90, etc.). All further commands are then constrained by that limit.
Also read the instructive time(7) and signal(7) man pages, and Advanced Linux Programming.
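Putting this together, a hypothetical deadline wrapper matching the question's interface could look like this (a sketch built on the ulimit approach, not a polished tool):
#!/bin/bash
# deadline -- CMD [ARG ...]: run CMD under a 1-second CPU-time limit
[ "$1" = "--" ] && shift
# the inner bash applies the limit, then replaces itself with CMD;
# "--" becomes $0 of the inner shell, the remaining args become its "$@"
bash -c 'ulimit -t 1; exec "$@"' -- "$@"
status=$?
# a process killed by a signal exits with 128 + signal number
if [ "$status" -ge 128 ]; then
    echo "[deadline] 1 sec deadline hit, killed (exit status $status)!" >&2
fi
exit "$status"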
This is quick and dirty and doesn't require any software packages to be installed, so it is portable:
TIMEOUT=1
YOURPROGRAM & PID=$! ; (sleep $TIMEOUT && kill $PID 2> /dev/null & ) ; wait $PID
