How to get overall CPU usage (e.g. 57%) on Linux [closed]

I am wondering how you can get the system CPU usage and present it in percent using bash, for example.
Sample output:
57%
In case there is more than one core, it would be nice if an average percentage could be calculated.

Take a look at cat /proc/stat
grep 'cpu ' /proc/stat | awk '{usage=($2+$4)*100/($2+$4+$5)} END {print usage "%"}'
EDIT: please read the comments before copy-pasting this or using it for any serious work. It was not tested or used; it's an idea for people who do not want to install a utility, or who need something that works on any distribution. Not every system lets you simply "apt-get install" whatever you like.
NOTE: this is not the current CPU usage, but the overall CPU usage across all cores since system bootup. This can be very different from the current CPU usage. To get the current value, top (or a similar tool) must be used.
Current CPU usage can be potentially calculated with:
awk '{u=$2+$4; t=$2+$4+$5; if (NR==1){u1=u; t1=t;} else print (u-u1) * 100 / (t-t1) "%"; }' \
<(grep 'cpu ' /proc/stat) <(sleep 1;grep 'cpu ' /proc/stat)
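If you also want the since-boot figure to count nice, iowait, irq and the other columns (see man 5 proc for the field layout), a sketch that sums every field and treats idle ($5) plus iowait ($6) as non-busy:
grep 'cpu ' /proc/stat | awk '{t=0; for (i=2; i<=NF; i++) t+=$i; idle=$5+$6; printf "%.1f%%\n", (t-idle)*100/t}'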

You can try:
top -bn1 | grep "Cpu(s)" | \
sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
awk '{print 100 - $1"%"}'
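Note that on many versions the first batch iteration of top reports averages since boot, much like /proc/stat above. Asking for two iterations and keeping the second gives a current reading; a sketch, assuming procps-style top output:
top -bn2 -d1 | grep "Cpu(s)" | tail -n1 | \
sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
awk '{print 100 - $1"%"}'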

Try mpstat from the sysstat package
> sudo apt-get install sysstat
Linux 3.0.0-13-generic (ws025) 02/10/2012 _x86_64_ (2 CPU)
03:33:26 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
03:33:26 PM all 2.39 0.04 0.19 0.34 0.00 0.01 0.00 0.00 97.03
Then use some cut or grep to parse the info you need:
mpstat | grep -A 5 "%idle" | tail -n 1 | awk -F " " '{print 100 - $12}'

Might as well throw up an actual response with my solution, which was inspired by Peter Liljenberg's:
$ mpstat | awk '$12 ~ /[0-9.]+/ { print 100 - $12"%" }'
0.75%
This will use awk to print out 100 minus the 12th field (idle), with a percentage sign after it. awk will only do this for a line where the 12th field has numbers and dots only ($12 ~ /[0-9.]+/).
You can also average five samples, one second apart:
$ mpstat 1 5 | awk 'END{print 100-$NF"%"}'
Test it like this:
$ mpstat 1 5 | tee /dev/tty | awk 'END{print 100-$NF"%"}'

EDITED: I noticed that in another user's reply %idle was field 12 instead of field 11. The awk has been updated to account for the %idle field being variable.
This should get you the desired output:
mpstat | awk '$3 ~ /CPU/ { for(i=1;i<=NF;i++) { if ($i ~ /%idle/) field=i } } $3 ~ /all/ { print 100 - $field }'
If you want a simple integer rounding, you can use printf:
mpstat | awk '$3 ~ /CPU/ { for(i=1;i<=NF;i++) { if ($i ~ /%idle/) field=i } } $3 ~ /all/ { printf("%d%%",100 - $field) }'

Do this to see the overall CPU usage. This calls python3 and uses the cross-platform psutil module.
printf "%b" "import psutil\nprint('{}%'.format(psutil.cpu_percent(interval=2)))" | python3
The interval=2 part says to measure the total CPU load over a blocking period of 2 seconds.
Sample output:
9.4%
The python program it contains is this:
import psutil
print('{}%'.format(psutil.cpu_percent(interval=2)))
Placing time in front of the call shows that it takes about the specified interval to run (2 seconds in this case). Here is the call and output:
$ time printf "%b" "import psutil\nprint('{}%'.format(psutil.cpu_percent(interval=2)))" | python3
9.5%
real 0m2.127s
user 0m0.119s
sys 0m0.008s
To view the output for individual cores as well, let's use this python program below. First, I obtain a python list (array) of "per CPU" information, then I average everything in that list to get a "total % CPU" type value. Then I print the total and the individual core percents.
Python program:
import psutil
cpu_percent_cores = psutil.cpu_percent(interval=2, percpu=True)
avg = sum(cpu_percent_cores)/len(cpu_percent_cores)
cpu_percent_total_str = ('%.2f' % avg) + '%'
cpu_percent_cores_str = [('%.2f' % x) + '%' for x in cpu_percent_cores]
print('Total: {}'.format(cpu_percent_total_str))
print('Individual CPUs: {}'.format(' '.join(cpu_percent_cores_str)))
This can be wrapped up into an incredibly ugly 1-line bash script like this if you like. I had to be sure to use only single quotes (''), NOT double quotes ("") in the Python program in order to make this wrapping into a bash 1-liner work:
printf "%b" \
"\
import psutil\n\
cpu_percent_cores = psutil.cpu_percent(interval=2, percpu=True)\n\
avg = sum(cpu_percent_cores)/len(cpu_percent_cores)\n\
cpu_percent_total_str = ('%.2f' % avg) + '%'\n\
cpu_percent_cores_str = [('%.2f' % x) + '%' for x in cpu_percent_cores]\n\
print('Total: {}'.format(cpu_percent_total_str))\n\
print('Individual CPUs: {}'.format(' '.join(cpu_percent_cores_str)))\n\
" | python3
Sample output: notice that I have 8 cores, so there are 8 numbers after "Individual CPUs:":
Total: 10.15%
Individual CPUs: 11.00% 8.50% 11.90% 8.50% 9.90% 7.60% 11.50% 12.30%
For more information on how the psutil.cpu_percent(interval=2) python call works, see the official psutil.cpu_percent(interval=None, percpu=False) documentation here:
psutil.cpu_percent(interval=None, percpu=False)
Return a float representing the current system-wide CPU utilization as a percentage. When interval is > 0.0 compares system CPU times elapsed before and after the interval (blocking). When interval is 0.0 or None compares system CPU times elapsed since last call or module import, returning immediately. That means the first time this is called it will return a meaningless 0.0 value which you are supposed to ignore. In this case it is recommended for accuracy that this function be called with at least 0.1 seconds between calls. When percpu is True returns a list of floats representing the utilization as a percentage for each CPU. First element of the list refers to first CPU, second element to second CPU and so on. The order of the list is consistent across calls.
Warning: the first time this function is called with interval = 0.0 or None it will return a meaningless 0.0 value which you are supposed to ignore.
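If you poll in a loop, interval=None is cheaper because it never blocks; each call reports usage since the previous call. A minimal sketch of that pattern, discarding the meaningless first value per the warning above:
python3 -c '
import time
import psutil
psutil.cpu_percent(interval=None)  # first call returns a meaningless 0.0; ignore it
time.sleep(1)                      # let some CPU time elapse between samples
print("{}%".format(psutil.cpu_percent(interval=None)))
'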
Going further:
I use the above code in my cpu_logger.py script in my eRCaGuy_dotfiles repo.
References:
Stack Overflow: How to get current CPU and RAM usage in Python?
Stack Overflow: Executing multi-line statements in the one-line command-line?
How to display a float with two decimal places?
Finding the average of a list
Related
https://unix.stackexchange.com/questions/295599/how-to-show-processes-that-use-more-than-30-cpu/295608#295608
https://askubuntu.com/questions/22021/how-to-log-cpu-load

Related

How to speed up sed that uses a regex on a very large single-cell BAM file

I have the following simple script that tries to count
the tag encoded with "CB:Z" in a SAM/BAM file:
samtools view -h small.bam | grep "CB:Z:" |
sed 's/.*CB:Z:\([ACGT]*\).*/\1/' |
sort |
uniq -c |
awk '{print $2 " " $1}'
Typically it needs to process 40 million lines. That code takes around 1 hour to finish.
The line sed 's/.*CB:Z:\([ACGT]*\).*/\1/' is very time-consuming.
How can I speed it up?
The reason I used the regex is that the "CB" tag's column position
is not fixed. Sometimes it's at column 20 and sometimes at column 21.
Example BAM file can be found HERE.
Update
Speed comparison on complete 40 million lines file:
My initial code:
real 21m47.088s
user 26m51.148s
sys 1m27.912s
James Brown's with AWK:
real 1m28.898s
user 2m41.336s
sys 0m6.864s
James Brown's with MAWK:
real 1m10.642s
user 1m41.196s
sys 0m6.484s
Another awk, pretty much like @tripleee's, I'd assume:
$ samtools view -h small.bam | awk '
match($0,/CB:Z:[ACGT]*/) {                 # use match() for the regex match
    a[substr($0,RSTART+5,RLENGTH-5)]++     # len("CB:Z:")==5, hence the +/-5
}
END {
    for(i in a)
        print i,a[i]                       # sample output; tweak to your liking
}'
Sample output:
...
TCTTAATCGTCC 175
GGGAAGGCCTAA 190
TCGGCCGATCGG 32
GACTTCCAAGCC 76
CCGCGGCATCGG 36
TAGCGATCGTGG 125
...
Notice: your sed 's/.*CB:Z:... matches the last instance, whereas my awk 'match($0,/CB:Z:[ACGT]*/)... matches the first.
Notice 2: quoting @Sundeep in the comments: using LC_ALL=C mawk '..' will give even better speed.
With perl
perl -ne '$h{$&}++ if /CB:Z:\K[ACGT]++/; END{print "$_ $h{$_}\n" for keys %h}'
CB:Z:\K[ACGT]++ will match any sequence of ACGT characters preceded by CB:Z:. \K is used here to prevent CB:Z: from being part of the matched portion, which is available via the $& variable
Sample times with the small.bam input file. mawk is fastest for this input, but that might change for a larger input file.
# script.awk is the one mentioned in James Brown's answer
# result here shown with GNU awk
$ time LC_ALL=C awk -f script.awk small.bam > f1
real 0m0.092s
# mawk is faster compared to GNU awk for this use case
$ time LC_ALL=C mawk -f script.awk small.bam > f2
real 0m0.054s
$ time perl -ne '$h{$&}++ if /CB:Z:\K[ACGT]++/; END{print "$_ $h{$_}\n" for keys %h}' small.bam > f3
real 0m0.064s
$ diff -sq <(sort f1) <(sort f2)
Files /dev/fd/63 and /dev/fd/62 are identical
$ diff -sq <(sort f1) <(sort f3)
Files /dev/fd/63 and /dev/fd/62 are identical
Better to avoid parsing the output of samtools view in the first place. Here's one way to get what you need just using python and the pysam library:
import pysam
from collections import defaultdict

counts = defaultdict(int)
tag = 'CB'

with pysam.AlignmentFile('small.bam') as sam:
    for aln in sam:
        if aln.has_tag(tag):
            counts[aln.get_tag(tag)] += 1

for k, v in counts.items():
    print(k, v)
Following your original pipeline approach:
pcre2grep -o 'CB:Z:\K[^\t]*' small.bam |
awk '{++c[$0]} END {for (i in c) print i,c[i]}'
In case you're interested in trying to speed up sed (although it's not likely to be the fastest):
sed 't a;s/CB:Z:/\n/;D;:a;s/\t/\n/;P;d' small.bam |
awk '{++c[$0]} END {for (i in c) print i,c[i]}'
The above syntax is compatible with GNU sed.
Regarding the AWK-based solutions, I've noticed few taking advantage of FS.
I'm not too familiar with the BAM format. If CB only shows up once per line, then:
mawk/mawk2/gawk -b 'BEGIN { FS = "CB:Z:";
} $2 ~ /^[ACGT]/ {   # if FS never matches, $2 would be beyond
                     # end of line, so this would just match
                     # against the null string & eval to false
    seen[substr($2, 1, -1 + match($2, /[^ACGT]|$/))]++
} END { for (x in seen) { print seen[x] " " x } }'
If it shows up more than once, change that to a loop over every field greater than 1 (see the sketch below). This version uses the laziest evaluation model possible to speed things up, then does all the uniq -c style counting in END.
While this is rather similar to the best answer above, having FS pre-split the fields makes match() and substr() do a lot less work. I'm simply matching one single character after the genetic sequence, directly using its return value minus 1 as the substring length, and skipping RSTART and RLENGTH altogether.
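A sketch of that multi-occurrence variant, under the same assumption as above (tag values are runs of ACGT):
mawk 'BEGIN { FS = "CB:Z:" }
NF > 1 {
    for (i = 2; i <= NF; i++) {          # one field per CB:Z: occurrence
        n = match($i, /[^ACGT]|$/) - 1   # length of the leading ACGT run
        if (n > 0) seen[substr($i, 1, n)]++
    }
}
END { for (x in seen) { print seen[x] " " x } }'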
Regarding:
$ diff -sq <(sort f1) <(sort f2)
Files /dev/fd/63 and /dev/fd/62 are identical
$ diff -sq <(sort f1) <(sort f3)
Files /dev/fd/63 and /dev/fd/62 are identical
there's absolutely no need to have them physically written to disk just to run a diff. Simply pipe the output of each to a very high-speed hashing algorithm that adds close to no time (when the output is gigantic enough, you might even save time versus going to disk).
My personal favorite is xxHash in 128-bit mode, available via python pip. It's NOT a cryptographic hash, but it's much faster than even something like MD5. This method also allows hassle-free comparison, since timing the benchmark also performs the accuracy check.
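A sketch of that idea, reusing script.awk from above and assuming the xxhash pip package is installed (sorting first makes the digest independent of output order):
LC_ALL=C mawk -f script.awk small.bam | sort | \
python3 -c 'import sys, xxhash; print(xxhash.xxh128(sys.stdin.buffer.read()).hexdigest())'
Run the same pipeline once per candidate command; matching digests mean identical output, with nothing written to disk.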

Extract the uptime value from "w" command output

How can I get the value of up from the command below on Linux?
# w
01:16:08 up 20:29, 1 user, load average: 0.50, 0.34, 0.30
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 00:57 0.00s 0.11s 0.02s w
# w | grep up
01:16:17 up 20:29, 1 user, load average: 0.42, 0.33, 0.29
On Linux, the easiest way to get the uptime in (fractional) seconds is via the 1st field of /proc/uptime (see man proc):
$ cut -d ' ' -f1 /proc/uptime
350735.47
To format that number the same way that w and uptime do, using awk:
$ awk '{s=int($1);d=int(s/86400);h=int(s % 86400/3600);m=int(s % 3600 / 60);
printf "%d days, %02d:%02d\n", d, h, m}' /proc/uptime
4 days, 01:25 # 4 days, 1 hour, and 25 minutes
To answer the question as asked - parsing the output of w (or uptime, whose output is the same as w's 1st output line, which contains all the information of interest), which also works on macOS/BSD, with a granularity of integral seconds:
A perl solution:
<(uptime) is a Bash process substitution that provides uptime's output as input to the perl command - see bottom.
$ perl -nle 'print for / up +((?:\d+ days?, +)?[^,]+)/' <(uptime)
4 days, 01:25
This assumes that days is the largest unit ever displayed.
perl -nle tells Perl to process the input line by line, without printing any output by default (-n), automatically stripping the trailing newline from each input line on input, and automatically appending one on output (-l); -e tells Perl to treat the next argument as the script (expression) to process.
print for /.../ tells Perl to output what each capture group (...) inside regex /.../ captures.
up + matches literal up, preceded by (at least) one space and followed by 1 or more spaces (+)
(?:\d+ days?, +)? is a non-capturing subexpression - due to ?: - that matches:
1 or more digits (\d+)
followed by a single space
followed by literal day, optionally followed by a literal s (s?)
the trailing ? makes the entire subexpression optional, given that a number-of-days part may or may not be present.
[^,]+ matches 1 or more (+) subsequent characters up to, but not including a literal , ([^,]) - this is the hh:mm part.
The overall capture group - the outer (...) - therefore captures the entire up-time expression, whether composed of hh:mm only or preceded by <n> day(s), and prints that.
<(uptime) is a Bash process substitution (<(...))
that, loosely speaking, presents uptime's output as a (temporary, self-deleting) file that perl can read via stdin.
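If your shell lacks process substitution, an ordinary pipe works just as well here, since perl reads stdin either way:
$ uptime | perl -nle 'print for / up +((?:\d+ days?, +)?[^,]+)/'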
Something like this with GNU sed:
$ w |head -n1
02:06:19 up 3:42, 1 user, load average: 0.01, 0.05, 0.13
$ w |sed -r '1 s/.*up *(.*),.*user.*/\1/g;q'
3:42
$ echo "18:35:23 up 18 days, 9:08, 6 users, load average: 0.09, 0.31, 0.41" \
|sed -r '1 s/.*up *(.*),.*user.*/\1/g;q'
18 days, 9:08
Given that the format of the uptime output depends on whether it is less or more than 24 hours, the best I could come up with is a double awk:
$ w
18:35:23 up 18 days, 9:08, 6 users,...
$ w | awk -F 'user|up ' 'NF > 1 {print $2}' \
| awk -F ',' '{for(i = 1; i < NF; i++) {printf("%s ",$i)}} END{print ""}'
18 days 9:08

How can I check the last 5 min overall cpu usage using SAR

I know this sar example, sar -u 1 3, which gives statistics for the next 3 seconds at 1-second intervals.
However, sar also keeps collecting information in the background (my cron is set to collect stats every minute). Is there any way I can simply query sar for the last 5 minutes' statistics and their average?
Right now I am using the command below
interval=5; sar -f /var/log/sysstat/sa22 | tail -n $interval | head -n -1 | awk '{print $4+$6}' | awk '{s+=$1} END {print s/NR}'
to check the overall CPU usage in the last 5 minutes.
Is there a better way?
Unfortunately, when using the -f option in sar together with interval and count, it doesn't return the average value for the given interval (as you would expect). Instead, it always returns the first recorded value in the sar file.
The only way to work around that is to use the -s option which allows you to specify a time at which to start your sampling period. I've provided a perl script below that finishes with a call to sar that is constructed in a way that will return what you're looking for.
Hope this helps.
Peter Rhodes.
#!/usr/bin/perl
$interval = 300;                        # seconds
$epoch = `date +%s`;
$epoch -= $interval;
$time = `date -d \@$epoch +%H:%M:00`;   # start of the sampling window
$dom = `date +%d`;                      # day of month selects the sar file
chomp($time,$dom);
system("sar -f /var/log/sysstat/sa$dom -B -s $time 300 1");

Find out how many users have connected to my computer in the last week, and for how long each one was connected

I need a script which shows a summary of which users have connected to my computer during the last week and how often.
I know I can use last and filter the time columns with awk, but how?
I would have to get each user connected in the last week and calculate the number of connections plus the total time of all connections.
This is what I have come up with so far:
for USER in $(last | awk '{print $1}' | sort -u); do
    echo "Connections for $USER:"
    last | grep "$USER" | wc -l
    # BUT I NEED TO COUNT ONLY LAST WEEK AND PRINT TOTAL TIME
done
I strongly advise against parsing the output of last, as its output may differ from implementation to implementation, and parsing the login/logout dates is prone to error. Also, it seems that nearly all implementations don't support -F or similar, without which you are completely out of luck, as you need the year information. In theory you could check for a leap from one month to a more recent one on two consecutive lines (e.g. Jan->Dec would indicate a year change), but this heuristic is flawed - you just cannot guess the correct year(s) reliably. For example, take the rare case that nobody logged in for a year.
If you absolutely have to/want to parse its output either way, don't do it with just bash/awk/cut/... for the reasons above. To get the session duration you would either have to parse the prettyprinted login/logout dates yourself, or the already calculated duration, which is also prettyprinted and probably varies from implementation to implementation (as in, it's not just hours and minutes - how do days/weeks/years get represented in that column?).
Doing this with just bash/awk would be a nightmare and even more prone to breakage than my script below - please don't do it.
The best and least hacky solution would involve writing a small C program or script that operates on the wtmp data directly (man wtmp), but then you would have to calculate the session durations yourself based on login/logout pairs (you don't get this for free; login is one record, logout is a second one). See busybox' last implementation for a reference on how it reads its stuff. This is the way to go if you want to do it the right way.
That being said, I came up with the quick'n'dirty (perl) solution below. It doesn't run the last command, you have to feed it proper input yourself, otherwise it will explode. If your last output looks different than mine, doesn't support -F or Date::Parse cannot parse the format your last command prints, it will also explode. There is lots of room for improvement, but this should get you started.
Notes
-F is required for last to print full dates (we need this to get the year, otherwise we cannot determine proper timestamps from its output)
-i tells last to output IP addresses, which just makes its output easier to parse
it does not parse the session duration column but rather both login/logout dates, converts them to epoch time and calculates the diff to get the session duration. There is no other magic involved in parsing the dates other than using Date::Parse, which means that it has to exclude all sessions that don't have a proper login/logout date (i.e., they are still logged in or their session got terminated due to a reboot, crash, etc.), so these sessions won't be part of the calculated output!
it defaults to 7 days, but you can change this on the command line with the -d switch
Code
#!/usr/bin/perl
use strict;
use warnings;
use Date::Parse;
use Getopt::Std;

our $opt_d;
getopt('d');
my $days  = $opt_d || 7;
my $since = time() - (60 * 60 * 24 * $days);
my %data;

while (<>)
{
    chomp;
    next if /ssh|reboot|down|crash|still logged in/;
    # last -Fi gives this on my box:
    # username line ip Mon Apr 1 18:17:49 2013 - Tue Apr 2 01:00:45 2013 (06:42)
    my ($user, undef, undef, $date_from, $date_to) =
        /^(\S+)\s+(\S+)\s+([0-9.]+)\s+([[:alnum:]:\s]+)\s+-\s+([[:alnum:]:\s]+[^\s])\s+\(.+\)/;
    my $time_from = str2time($date_from);
    last if $time_from < $since;
    my $time_to = str2time($date_to);
    $data{$user}{"count"}++;
    $data{$user}{"duration"} += $time_to - $time_from;
    # print "$user|$line|$ip|$date_from|$date_to\n";
}

print "login history for the last $days day(s):\n\n";
if (keys %data > 0)
{
    foreach my $user (keys %data)
    {
        my $duration = $data{$user}{"duration"};
        printf "%s was logged in %d time(s) for a total of %d day(s), %d hour(s) and %d minute(s)\n",
            $user,
            $data{$user}{"count"},
            ($duration / (24 * 60 * 60)),
            ($duration / (60 * 60)) % 24,
            ($duration / 60) % 60;
    }
}
else
{
    print "no logins during the specified time period\n";
}
Example usage
$ last -Fi | ./last_parse.pl -d 700
login history for the last 700 day(s):
root was logged in 25 time(s) for a total of 36 day(s), 12 hour(s) and 35 minute(s)
foobar was logged in 362 time(s) for a total of 146 day(s), 17 hour(s) and 17 minute(s)
quux was logged in 3 time(s) for a total of 0 day(s), 0 hour(s) and 4 minute(s)
$
Try this. Steps:
1. last > login.txt
2. last | awk '{print $1}' | sort -u > names.txt
3. Iterate over each line of names.txt and apply grep -c "$line" login.txt
You will get a count for each user (each user is one line in names.txt).
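Alternatively, the same counting can be done in one pass with sort and uniq -c; a sketch, subject to the same caveats about parsing last output noted above:
last | awk 'NF && $1 != "reboot" && $1 != "wtmp" {print $1}' | sort | uniq -c | sort -rn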

Random number between range in shell

How can I generate a random number between 0-60 in sh (/bin/sh, not bash)? This is a satellite box; there is no $RANDOM variable, nor other goodies such as cksum or od (od -vAn -N4 -tu4 < /dev/urandom).
I want to randomize a crontab job's time.
If you have tr, head and /dev/urandom, you can write this:
tr -cd 0-9 </dev/urandom | head -c 3
Then you have to use the remainder operator to put it in the 0-60 range.
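Putting it together in plain sh, a sketch assuming a POSIX shell: the sed strips leading zeros so the arithmetic doesn't parse the value as octal, and ${n:-0} covers the all-zeros case. The modulo bias discussed in a later answer still applies.
n=$(tr -cd 0-9 </dev/urandom | head -c 3 | sed 's/^0*//')
echo $(( ${n:-0} % 61 ))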
How about using the nanoseconds of system time?
date +%N
It isn't like you need cryptographically useful numbers here.
Depending on which version of /bin/sh it is, you may be able to do:
$(( $(date +%N) % 60 ))
(beware that a value with a leading zero may be parsed as octal - see the last answer below).
If it doesn't support the $(()) syntax, but you have dc, you could try:
dc -e `date +%N`' 60 % p'
Without knowing which operating system, version of /bin/sh or what
tools are available it is hard to come up with a solution guaranteed to work.
Do you have awk? You can call awk's rand() function. For instance:
awk 'BEGIN { printf("%d\n",rand()*60) }' < /dev/null
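One caveat: without srand(), awk's rand() returns the same sequence on every run. A sketch that seeds first (by default srand() seeds from the time of day, so two runs within the same second will still repeat):
awk 'BEGIN { srand(); printf("%d\n", rand()*60) }' < /dev/null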
I know this post is old, but the suggested answers are not generating uniform, unbiased random numbers. The accepted answer is essentially this:
% echo $(( $(tr -cd 0-9 </dev/urandom | head -c 3) % 60))
The problem with this suggestion is that choosing a 3-digit number from /dev/urandom gives a range of 0-999, a total of 1,000 numbers. But 1,000 does not divide evenly by 60 (1,000 = 16 x 60 + 40), so after the modulo the minutes 0-39 come up slightly more often than 40-59.
The second answer, while creative in using nanoseconds from your clock, suffers from the same bias:
% echo $(( $(date +%N) % 60 ))
The range for nanoseconds is 0-999,999,999, which is 1 billion numbers. 1 billion = 16,666,666 x 60 + 40, so after the modulo the minutes 0-39 are again slightly favored over 40-59.
All the rest of the answers are the same: biased, non-uniform generation.
To generate unbiased uniform random numbers in the range of 0-59 (is what I assume he means rather than 0-60, if he's attempting to randomize a crontab(1) entry), we need to force the output to be a multiple of 60.
First, we'll generate a random 32-bit number between 0 and 4294967295:
% RNUM=$(od -An -N4 -tu4 /dev/urandom | awk '{print $1}')
Now we'll force our range to be between $MIN and 4294967295, a span whose size is an exact multiple of 60:
% MIN=$((2**32 % 60)) # 16
This means:
4294967296 - 16 = 4294967280
4294967280 / 60 = 71582788.0
In other words, the size of my range [16, 4294967295] is exactly a multiple of 60. So every number generated in that range, reduced modulo 60, is as likely as any other remainder. Thus, I have an unbiased generator of numbers 0-59 (or 1-60 if you add 1).
The only thing left to do is make sure that my number is between 16 and 4294967295. If my number is less than 16, then I'll need to generate a new number:
% while [ $RNUM -lt $MIN ]; do RNUM=$(od -An -N4 -tu4 /dev/urandom | awk '{print $1}'); done
% MINUTE=$(($RNUM % 60))
Everything put together for copy/paste goodness:
#!/bin/bash
RNUM=$(od -An -N4 -tu4 /dev/urandom | awk '{print $1}')
MIN=$((2**32 % 60))
while [ $RNUM -lt $MIN ]; do RNUM=$(od -An -N4 -tu4 /dev/urandom | awk '{print $1}'); done
MINUTE=$(($RNUM % 60))
value=`od -An -N2 -tu2 /dev/urandom`
minutes=`expr $value % 60`
The seed will be between 0 and 65535, which is not an even multiple of 60, so minutes 0-15 have a slightly greater chance of being chosen, but the discrepancy is probably not important.
If you want to achieve perfection, use "od -An -N1 -tu1" and loop until value is less than 240.
Tested with busybox od.
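A sketch of that rejection loop in plain sh (240 = 4 x 60, so accepted values 0-239 reduce to a uniform 0-59):
value=240
while [ $value -ge 240 ]; do
    value=$(od -An -N1 -tu1 /dev/urandom)
done
minutes=$((value % 60))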
Beware of errors when the generated number starts with 0 and has other digits greater than 7, as it will be interpreted as octal. I would propose:
tr -cd 0-9 </dev/urandom | head -c 4 | sed -e 's/^00*//'
especially in case you want to process it further, for example to establish a range:
RANDOM=`tr -cd 0-9 </dev/urandom | head -c 4 | sed -e 's/^00*//'`
RND50=$((($RANDOM%50)+1))   # random number between 1 and 50
