How to make a process run slower in linux?

I would like to keep a process running for a long time (e.g., more than half an hour). My program is gpg. If I encrypt a 500 MB file using gpg ElGamal encryption, it takes around 1-2 minutes (compression is turned off). The only way I can think of to increase the running time is to feed it a file of several GB, which is not desirable. Is there any other way to make gpg run for a longer time?

I believe, by default, gpg takes its input from standard input and sends output to standard output.
So you could make it run forever with something like:
gpg --encrypt </dev/urandom >/dev/null
To use that for consuming CPU for an hour (for example), you could create a script like:
gpg --encrypt </dev/urandom >/dev/null &   # encrypt an endless stream in the background
pid=$!                                     # remember its PID
sleep 3600                                 # let it run for an hour
kill -9 ${pid}                             # then kill it
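If you just want the timed run without the bookkeeping, coreutils timeout does the same thing in one line (it sends SIGTERM by default; -s KILL gives the kill -9 behaviour):
timeout 3600 gpg --encrypt </dev/urandom >/dev/null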

https://unix.stackexchange.com/questions/18869/slow-down-a-process-without-affecting-other-processes
CPULimit might do what you need without affecting other processes:
http://www.cyberciti.biz/faq/cpu-usage-limiter-for-linux/
You start the program, then run cpulimit against the program name or PID, specifying the percentage you want it limited to. Note the percentage scale covers all cores; so if you have 4 cores, you could use 400%.
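For example (the PID and percentages here are placeholders):
cpulimit -p 1234 -l 50   # throttle PID 1234 to 50% of one core
cpulimit -e gpg -l 100   # or target the process by executable name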

Related

Get average CPU Usage percentage of single process until I kill it in bash script

I have a bash script where I would like to find out what percentage of CPU time a process uses up until the moment I kill it in the last line of the script.
I know that I could normally do this via the time -v command; however, my process is Erlang/OTP based, and the time command only measures the statistics of the startup process.
Therefore I'd like to use the process PID, which I can get easily, to obtain the CPU time percentage up to the end of the script.
Currently I'm using pidstat, but it only gives me statistics for fixed time intervals.
I want to measure the exact interval from when the process started until it gets killed.
Peak RAM statistics would also be nice.
Could you recommend a command I could use in this case?
This is my bash script:
sudo emqx start
sleep 10
mypid=$(sudo emqx pid)
echo $mypid
sudo pidstat -h -r -u -v -p "$mypid" 5 > $local/server_results/test1/emqxstats_$b.txt &
# process for load testing
# jthreads = amount of publishing users
sleep 5
until sudo ~/Downloads/apache-jmeter-5.2.1/bin/jmeter -n -t $local/testplans/csv.jmx -Jport=$a | grep -m 1 "... end of run"; do : ; done
sudo emqx stop
kill $!   # stop the background pidstat
So I want to measure the CPU percentage over the interval from when the broker starts until the Apache JMeter test finishes, i.e. when the script reaches the last two lines.
Kind regards
I found the command I was looking for after some more research.
ps -o pcpu -p $pid
This was exactly the command I needed, because it calculates the percentage over the lifetime of the process.
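As a sketch, this slots into the script above just before sudo emqx stop; reading VmHWM from /proc/<pid>/status also covers the peak-RAM wish (VmHWM is the peak resident set size, per proc(5)):
cpu=$(ps -o pcpu= -p "$mypid")                              # lifetime CPU %, header suppressed
peak=$(awk '/VmHWM/{print $2, $3}' /proc/"$mypid"/status)   # peak resident set size
echo "CPU%: $cpu  peak RSS: $peak"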

How to kill a process after a given real running time in Bash?

For killing a process after a given timeout in Bash, there is a nice command called timeout. However, I'm running my program on a multi-user server, and I don't want the performance of my program to be influenced by others. Is there a way to kill a process in Bash after a given amount of time that the program has actually been running, i.e. CPU time rather than wall-clock time?
On Bash+Linux, you can use ulimit -t. Here's from help ulimit:
-t the maximum amount of cpu time in seconds
Here's an example:
$ time bash -c 'ulimit -t 5; while true; do true; done'
Killed
real 0m8.549s
user 0m4.983s
sys 0m0.008s
The infinite loop process was scheduled (i.e. actually ran) for a total of 5 seconds before it was killed. Due to other processes competing for the CPU at the same time, this took 8.5 seconds of wall time.
A command like sleep 3600 would never be killed, since it doesn't use any CPU time.
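Since ulimit affects the shell that sets it, the usual pattern is to set the limit in a subshell and exec the real workload there, leaving your interactive shell untouched (my_program is a placeholder):
( ulimit -t 3600; exec ./my_program )   # killed after one hour of CPU time, not wall time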

How to measure IOPS for a command in linux?

I'm working on a simulation model where I want to determine when storage IOPS capacity becomes a bottleneck (e.g. an HDD has ~150 IOPS, while an SSD can have 150,000). So I'm trying to come up with a way to benchmark the IOPS of a command (git) for some of its different operations (push, pull, merge, clone).
So far I have found tools like iostat; however, I am not sure how to limit the report to what a single command does.
The best idea I can come up with is to determine my HDD's IOPS capacity, run time on the actual command to see how long it lasts, and multiply the two, giving an estimate of the I/O operations performed:
HDD -> 150 IOPS
time df -h
real 0m0.032s
150 * 0.032 = 4.8 I/O operations
But, this is of course very stupid, because the duration of the execution may have been related to CPU usage rather than HDD usage, so unless usage of HDD was 100% for that time, it makes no sense to measure things like that.
So, how can I measure the IOPS for a command?
There are multiple time(1) commands on a typical Linux system; the default is a bash(1) builtin which is somewhat basic. There is also /usr/bin/time which you can run by either calling it exactly like that, or telling bash(1) to not use aliases and builtins by prefixing it with a backslash thus: \time. Debian has it in the "time" package which is installed by default, Ubuntu is likely identical, and other distributions will be quite similar.
Invoking it in a similar fashion to the shell builtin is already more verbose and informative, albeit perhaps more opaque unless you're already familiar with what the numbers really mean:
$ \time df
[output elided]
0.00user 0.00system 0:00.01elapsed 66%CPU (0avgtext+0avgdata 864maxresident)k
0inputs+0outputs (0major+261minor)pagefaults 0swaps
However, I'd like to draw your attention to the man page which lists the -f option to customise the output format, and in particular the %w format which counts the number of times the process gave up its CPU timeslice for I/O:
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=184
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=1
Note that the first run stopped for I/O 184 times, but the second run stopped just once. The first figure is credible, as there are 124 directories in my ~/Maildir: the reading of the directory and the inode gives roughly two IOPS per directory, less a bit because some inodes were likely next to each other and read in one operation, plus some extra again for mapping in the du(1) binary, shared libraries, and so on.
The second figure is of course lower due to Linux's disk cache. So the final piece is to flush the cache. sync(1) is a familiar command which flushes dirty writes to disk, but doesn't flush the read cache. You can flush that one by writing 3 to /proc/sys/vm/drop_caches. (Other values are also occasionally useful, but you want 3 here.) As a non-root user, the simplest way to do this is:
echo 3 | sudo tee /proc/sys/vm/drop_caches
Combining that with /usr/bin/time should allow you to build the scripts you need to benchmark the commands you're interested in.
As a minor aside, tee(1) is used because this won't work:
sudo echo 3 >/proc/sys/vm/drop_caches
The reason? Although the echo(1) runs as root, the redirection is as your normal user account, which doesn't have write permissions to drop_caches. tee(1) effectively does the redirection as root.
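Putting the pieces together, a sketch of a cold-cache measurement of one git operation (the repository paths here are placeholders):
sync                                                    # flush dirty writes to disk first
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # drop page, dentry and inode caches
\time -f 'ios=%w elapsed=%es' git clone /path/to/repo.git /tmp/clone-test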
The iotop command collects I/O usage information about processes on Linux. By default it is an interactive command, but you can run it in batch mode with -b / --batch. Also, you can pass a list of processes with -p / --pid. Thus, you can monitor the activity of a git command with:
$ sudo iotop -p $(pidof git) -b
You can change the delay with -d / --delay.
You can use pidstat:
pidstat -d 2
More specifically pidstat -d 2 | grep COMMAND or pidstat -C COMMANDNAME -d 2
The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task selected with option -p or for every task managed by the Linux kernel if option -p ALL has been used. Not selecting any tasks is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
The pidstat command can also be used for monitoring the child processes of selected tasks.
-C comm
Display only tasks whose command name includes the string comm. This string can be a regular expression.

Pause a shell script every x (milli)seconds to decrease immediate CPU usage

I have a shell script which backups my MySQL database every hour. It's a basic script:
mysqldump --user=$USER --password=$PASS $DB > /$PATH/$DATE.sql &&
7z a -t7z -mx=9 /$PATH/$DATE.sql.7z /$PATH/$DATE.sql &&
rm /$PATH/$DATE.sql
I'm using a 7z compression, because:
file permissions and owner/group aren't required to be kept
the space saved by 7z compared to gzip is essential to me
What's troubling me is that the 7z part (line 2) takes about 30 seconds and uses quite a lot of CPU during that time. There are 50% peaks on my server load graphs every hour from this and I'd like to get rid of those peaks.
I'm already executing this script with nice:
0 * * * * /usr/bin/nice -n 19 /path/to/backup.sh > /dev/null
The only solution I can come up with at this point is to somehow pause the execution of the 7z part, let's say every second for 1 second. This could free up the CPU time for other processes.
Is this possible?
I'm doing something similar in my .php scripts, although the pauses are in loops, for example:
while($d=mysqli_fetch_assoc($q)){
usleep(250000);
// process $d
}
I'd like the 7z execution to sleep every X (something) for 1 second.
You can use
prlimit --cpu=10 7z a -t7z -mx=9 /$PATH/$DATE.sql.7z /$PATH/$DATE.sql
but note that --cpu sets RLIMIT_CPU, a hard cap on total CPU time in seconds: here 7z would be killed after 10 CPU-seconds, not throttled to 10%. For a percentage-style limit, a throttling tool such as cpulimit is the better match.
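cpulimit in particular works by alternately sending SIGSTOP and SIGCONT to the process, which is precisely the pause-and-resume behaviour you ask for. A sketch, assuming a cpulimit build that can launch the command itself (older versions only accept -p/-e targets):
cpulimit -l 25 -- 7z a -t7z -mx=9 /$PATH/$DATE.sql.7z /$PATH/$DATE.sql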
How about configuring a limit for a user/group via the /etc/security/limits.conf file, and running the script/compression as that specific user? After setting the limit you can test it with a fork bomb.
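For reference, such an entry looks like the line below; note that the cpu item in limits.conf is a CPU-time cap in minutes (the backup user name is a placeholder), so like prlimit it kills rather than throttles:
backup  hard  cpu  5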

Is there a variable in Linux that shows me the last time the machine was turned on?

I want to create a script that does something once my machine has been powered on for at least 7 hours.
Is this possible? Is there a system variable or something like that which tells me when the machine was last turned on?
The following command placed in /etc/rc.local:
echo 'touch /tmp/test' | at -t $(date -d "+7 hours" +%m%d%H%M)
will create a job that will run a touch /tmp/test in seven hours.
To protect against frequent reboots and to prevent multiple jobs from piling up, you could dedicate one at queue to this type of job (e.g. the c queue). Adding -q c to the list of at parameters will place the job in the c queue. Before adding a new job you can delete all jobs from the c queue:
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm $job; done
You can parse the output of uptime, I suppose.
As Pavel and thkala point out below, this is not a robust solution. See their comments!
The uptime command shows you how long the system has been running.
To accomplish your task, you can make a script that first does sleep 25200 (25200 seconds = 7 hours), and then does something useful. Have this script run at startup, for example by adding it to /etc/rc.local. This is a better idea than polling the uptime command to see if the machine has been up for 7 hours (which is comparable to a kid in the backseat of a car asking "are we there yet?" :-))
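A minimal version of such a startup script (the job path is a placeholder):
#!/bin/sh
sleep 25200     # 7 hours = 25200 seconds
/path/to/job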
Just wait for uptime to equal seven hours.
http://linux.die.net/man/1/uptime
I don't know if this is what you are looking for, but the uptime command will tell you how long the computer has been running since the last reboot.
$ cut -d ' ' -f 1 </proc/uptime
This will give you the current system uptime in seconds, in floating point format.
The following could be used in a bash script (with HOURS set to the threshold you care about):
HOURS=7
if [[ "$(cut -d . -f 1 </proc/uptime)" -gt "$((HOURS * 3600))" ]]; then
...
fi
Add the following to your crontab:
@reboot sleep 7h; /path/to/job
Either /etc/crontab, /etc/cron.d/, or your user's crontab, depending on whether you want to run it as root or as your user -- don't forget to put "root" after "@reboot" if you put it in /etc/crontab or cron.d.
This has the benefit that if you reboot multiple times, pending jobs are cancelled at shutdown, so you won't get a bunch of them stacking up if you reboot several times within 7 hours. The "@reboot" time specification triggers the job to be run once when the system boots. "sleep 7h;" waits for 7 hours before running "/path/to/job".
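For example, as a file in /etc/cron.d/ (system crontabs take a user field before the command):
@reboot root sleep 7h; /path/to/job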
