We have multiple computers in our company that are sometimes used as workstations and sometimes as servers (running user-defined jobs).
I would like to harness all the available computing power of the workstations and make them part of the grid (in addition to the dedicated servers).
Each grid client can work in one of two modes, low (30%) and high (100%) - the maximum percentage of CPU and RAM allocated to the grid client.
The user should not be affected by it: the moment a user starts using the computer (locally or remotely), the client switches to low mode (30%).
After the user has been idle for a configured time (for example 5 minutes) and CPU usage is low (no running tasks), the client should switch to high mode.
Here is the solution I put together, based on a few examples I found on Stack Overflow. It tracks three signals:
time since the screen went idle
time since the last console action, based on "who"
CPU usage of the user, based on top
To enter the idle state and set the mode to high, I wait for both the console and screen idle times to pass the threshold, and require CPU usage to stay low.
To exit high mode, any action via the console or the screen sets the client back to low mode.
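The ./getIdle helper called below just prints how long the graphical session has been idle, in seconds. It isn't shown here, but a minimal sketch, assuming an X session and the xprintidle utility (which reports idle time in milliseconds), could look like this:
#!/bin/bash
# getIdle - print X-session idle time in whole seconds
# (sketch; assumes xprintidle is installed and DISPLAY/XAUTHORITY point at the user's session)
ms=$(xprintidle) || exit 1
echo $(( ms / 1000 ))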
#!/bin/bash
idle=false
idleAfter=300    # consider idle after 300 seconds
idleCpuAfter=100 # max cpu usage (%) allowed while entering high mode
idleCpuCount=10  # seconds to keep the idle state (without interruptions) before starting high mode
count=0          # idle-state counter; when it reaches idleCpuCount the client initiates high mode
while true; do
    # seconds since the most recent activity on any terminal session, based on "who"
    idleInSecondsCmd=$( who -s | perl -lane 'print "$F[0]\t" . int(86400 * -A "/dev/$F[1]")' | sort -k2 -n | head -1 | cut -f2 )
    # screen (X session) idle time, via the helper script
    idleInSecondsScreen=$(./getIdle)
    # total cpu usage (%) of the current user's processes
    cpuLoad=$(top -b -n 1 -u "$USER" | awk 'NR>7 { sum += $9 } END { print sum+0 }')
    echo "idleInSecondsCmd    $idleInSecondsCmd"    # just for debug purposes
    echo "idleInSecondsScreen $idleInSecondsScreen" # just for debug purposes
    echo "cpuLoad             $cpuLoad"             # just for debug purposes
    if [[ $idleInSecondsCmd -gt $idleAfter &&
          $idleInSecondsScreen -gt $idleAfter &&
          $(bc <<< "$cpuLoad <= $idleCpuAfter") -eq 1 &&
          $idle = false ]]; then
        count=$((count + 1))
        echo $count
    else
        count=0
    fi
    if [[ $idle = false && $count -gt $idleCpuCount ]]; then
        idle=true
        setupLoad.sh 100
    fi
    if [[ ($idleInSecondsCmd -lt $idleAfter || $idleInSecondsScreen -lt $idleAfter) && $idle = true ]]; then
        idle=false
        setupLoad.sh 30
    fi
    sleep 1 # polling interval
done
Is this the best approach?
Hi, I have made an automatic light switch program for my Pi. It tracks whether my phone is on the wifi network. It works, but it's not very fast, and if I speed it up I either get lots of false positives or it stops working. I found the code online and have only changed a tiny bit from other examples; I'm still learning.
So I've got two lots of code, one bash and one python:
the python uses arp-scan
the bash uses ip neighbor and ping, which seem to be more reliable
Can someone help me merge the bash code into the python and make it use arp-scan, ip neighbor, ping, and maybe bluetooth as well?
I would like to add some settings like:
ensure the lights don't get turned on during daytime hours
somehow track the GPIO high/low toggle in a better way
add a bluetooth scan that actually works when the Android device is sleeping (screen off)
At the moment I have it running on the Pi as a website for testing; here is a screenshot of my Pi site.
Python code example:
#!/usr/bin/python3
import RPi.GPIO as GPIO
import subprocess
from time import sleep

is_home = False
home_run_count = 0
out_run_count = 0

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)

if __name__ == '__main__':
    while True:
        p = subprocess.Popen("arp-scan -l -r 6 | grep MAC:MAC:MAC", stdout=subprocess.PIPE, shell=True)
        (output, err) = p.communicate()
        p_status = p.wait()
        if output:
            #print("Android device is connected to your network!")
            is_home = True
        if p.returncode != 0:
            #print("The device is not present!")
            is_home = False
            #home_run_count = 0
            #out_run_count -= 1
        if is_home is True and home_run_count < 1:
            #print("lights on!")
            #GPIO.setmode(GPIO.BCM)
            #GPIO.setup(17, GPIO.OUT)
            GPIO.output(17, True)
            sleep(0.5)
            GPIO.output(17, False)
            home_run_count += 1
            out_run_count = 0
            is_home = True
        if is_home is False and out_run_count < 1:
            #print("lights off!")
            #GPIO.setmode(GPIO.BCM)
            #GPIO.setup(17, GPIO.OUT)
            GPIO.output(17, True)
            sleep(0.5)
            GPIO.output(17, False)
            out_run_count += 1
            home_run_count = 0
            is_home = False
Bash script code:
#!/bin/bash
# A script to do something when Phone returns Home.
mac="some mac addy"      # Your phone mac address
ip_addr=""               # Leave blank or ip for test
network="some ip addy"   # Your network (Class C only)
range="90 100"           # ip address possible range

pgm() {
    echo "switching lights "
    echo "1" > /sys/class/gpio/gpio17/value
    sleep 0.5s
    echo "0" > /sys/class/gpio/gpio17/value
}

#-----Rpi-Mod----
echo "17" > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio17/direction
#-----End of Rpi mod first section-------

start=$(echo "$range" | cut -d " " -f1)
stop=$(echo "$range" | cut -d " " -f2)
network=$(echo "$network" | cut -d. -f1-3)
echo "Start $(date)"

while [ 1 ]; do
    cnt=0
    fail=0
    [ "$ip_addr" ] || while [ ! "$ip_addr" ]; do
        for x in $(seq "$start" "$stop"); do
            (junk=$(ping -c1 -W1 "$network"."$x") & )
            wait
        done
        sleep 8
        ip_addr=$(ip neighbor | grep "$mac" | cut -d " " -f1)
        ((cnt++))
        if (( $cnt > 15 )); then
            cnt=0
            echo "--- Phone not Home $(date)"
            #sleep 300 # 5 minutes
        fi
        if [ "$ip_addr" ]; then
            echo "--- Phone is Home, Count = $cnt, Date = $(date)"
            echo "Phone ip = $ip_addr mac = $mac"
        fi
    done
    while [ "$ip_addr" ]; do
        junk="$(ping -c1 -W1 $ip_addr)"
        sleep 8
        home_nw="$(ip neighbor | grep $ip_addr | cut -d ' ' -f 1,5,6)"
        echo "$home_nw - $(date)"
        is_home=$(echo "$home_nw" | cut -d " " -f3)
        if [ "$is_home" == "REACHABLE" ] && (( "$fail" >= 3 )); then
            echo "--- Phone returned Home - $(date)"
            pgm
        fi
        [ "$is_home" == "REACHABLE" ] && fail=0
        mac_stat=$(echo "$home_nw" | cut -d " " -f2)
        if [ "$mac_stat" == "FAILED" ]; then
            (( "$fail" < 10 )) && ((fail++))
            ip_test="$(ip neighbor | grep $mac | cut -d ' ' -f1)"
            if [ "$ip_test" ]; then
                [ "$ip_test" == "$ip_addr" ] || ip_addr=""
            fi
            if (( "$fail" == 3 )); then
                echo "--- Phone not at Home $(date)"
                pgm
            fi
        else
            if [ "$mac_stat" != "$mac" ]; then
                ip_addr=""
            fi
        fi
        #sleep 300 # 5 minutes
    done
done
I don't know enough to implement these changes myself yet, but I'm still learning, so I would appreciate some working code to try and learn from in the process.
------------------------------------------------------ update -------------------------------------------------------------------
I tried to put code in a comment but it didn't work. I've sort of managed to get what I wanted: I modified the python script to call a new script (see below), because I was unable to get Popen to run my function. Maybe it needs to be a class? I'm open to suggestions.
def present():
    with urllib.request.urlopen("http://blah.ip.blah/some.json") as url:
        sleep(0.5)
        data = json.loads(url.read().decode())
        #print(data)
        'MAC:MAC:MAC:' in data   # note: the result of this expression is discarded, so the function returns None
So, because I could not make the rest of the logic work, I just stuck it in a new file like this:
#!/usr/bin/python3
import urllib.request, json
from time import sleep
import os

try:
    with urllib.request.urlopen("http://blah.ip.blah/some.json") as url:
        sleep(0.5)
        data = json.loads(url.read().decode())
        #print(data)
        'MAC:MAC:MAC:' in data
except KeyboardInterrupt:
    os._exit(1)
But I would love to combine it with the main light script as a function.
Why does this not work?
(output, err) = ( present() )
p_status = ( present() )
----------------------------------------------------------- end of update one -----------------------------------------------
I had some trouble getting the python loop to keep running when called from the site, but it was a silly incorrect-path issue; that's why I want to have it all in one file.
----------------------------------------------------------- end of update two ---------------------------------------------
I've asked a similar question, if anyone is interested.
I wrote a bash script that uses fping to detect devices entering/exiting a LAN. It could easily be adapted to turn lights on and off.
The code is here: https://grymoire.wordpress.com/2019/12/09/using-bash-to-monitor-devices-entering-exiting-a-lan/
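The linked post has the full script; a very rough sketch of the idea (this is not the linked code - the phone IP is a placeholder and the gpio17 pulse is borrowed from the bash script above) is to ping the phone's address with fping and toggle the GPIO only when its reachability changes:
#!/bin/bash
# rough sketch: fping-based presence detection
# assumptions: fping is installed, the phone has a fixed/reserved IP,
# and gpio17 has already been exported as in the bash script above
phone_ip="192.168.1.50"   # placeholder - your phone's address
was_home="unknown"
while true; do
    if fping -c1 "$phone_ip" >/dev/null 2>&1; then
        is_home="yes"
    else
        is_home="no"
    fi
    if [ "$is_home" != "$was_home" ]; then        # act only on a state change
        echo "phone home: $is_home  $(date)"
        echo "1" > /sys/class/gpio/gpio17/value   # pulse the relay, as in pgm()
        sleep 0.5
        echo "0" > /sys/class/gpio/gpio17/value
    fi
    was_home=$is_home
    sleep 8
done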
I'm trying to build a script that asks the user running it for a time clock number and a DC number, which I intend to use to fill in the Xs in
/u/applic/tna/shell/tc_software_update.sh tmcxx.s0xxxx.us REFURBISHED
However, I am stumped as to how to have the user's input fill in those Xs in that command within the script. This script is in its earliest stages, so it's very rough right now, lol. Thank you for responding. Here's the script's skeleton I'm working on:
#!/bin/bash
#This script is intended to speed up the process of setting up timeclocks from DC tickets
#Defines time clock numbers
timeclocks="01|02|03|04|05|06|07|08|09|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35"
#Defines DC number
echo "What is the DC number?"
read dc
#Defines TMC number
echo "What is the Time Clock number?"
read number
if $number == $timeclocks && $dc == ???; then
/u/applic/tna/shell/tc_software_update.sh tmcxx.s0xxxx.us REFURBISHED
Do you mean invoking $ /u/applic/tna/shell/tc_software_update.sh tmc${number}.s0${dc}.us REFURBISHED?
Consider the following snippet:
[test.sh]
read x
read y
echo "x=${x}, y=${y}"
$ sh test.sh
5
4
x=5, y=4
Going further, you can use command-line arguments ($1, $2, etc.) instead of prompting for user input; see the sketch below.
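For example, a small sketch that takes the two values as positional arguments (the wrapper name is hypothetical; the update command is the one from your question):
#!/bin/bash
# usage: ./tc_setup.sh <time-clock-number> <dc-number>
tmc=$( printf '%02d' "$(( 10#$1 ))" )   # zero-pad the clock number, e.g. 7 -> 07 (10# forces base 10)
dc=$( printf '%04d' "$(( 10#$2 ))" )    # zero-pad the DC number to four digits
/u/applic/tna/shell/tc_software_update.sh "tmc${tmc}.s0${dc}.us" REFURBISHED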
Modelling this on your script:
timeclocks=( {1..35} )
printf '%s' "DC number: "; read dc
printf '%s' "Time Clock number: "; read tmc
tmc=$( printf '%02d' "$tmc" )
dc=$( printf '%04d' "$dc" )
tmc_valid=$( for t in "${timeclocks[@]}"; do
                 # 10# forces base 10, so zero-padded values like 08 don't trip octal parsing
                 [[ $(( 10#$tmc )) -eq $t ]] && echo true && break
             done )
[[ "$tmc_valid" = "true" && "$dc" = "???" ]] && \
    /u/applic/tna/shell/tc_software_update.sh tmc${tmc}.s0${dc}.us REFURBISHED
I have a DB error log file that grows continuously.
I want to set up error monitoring on that file every 5 minutes.
The problem is that I don't want to scan the whole file every 5 minutes (when the monitoring cron job runs), because it may grow very big in the future, and scanning the whole (big) file every 5 minutes would consume considerably more resources.
So I just want to scan only the lines that were inserted/written to the log during the last 5-minute interval.
Each error recorded in the log has a timestamp prepended to it, like below:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search for the pattern [ERROR] only in the lines added during the last 5 minutes (not the whole file) and write the output to another file.
Please help me here.
Feel free to ask if you need more clarification on my question.
I'm using RHEL 7 and I'm trying to implement the above monitoring in a bash shell script.
Serializing the Byte Offset
This picks up where the last instance left off. If you run it every 5 minutes, it'll scan the last 5 minutes' worth of data.
Note that this implementation can knowingly scan data added during an invocation's run twice. This is a little sloppy, but it's much safer to scan overlapping data twice than to never read it at all, which is a risk you run if you rely on cron to start your program exactly on schedule (likewise, sleeps can run over the requested time if the system is busy).
#!/usr/bin/env bash
file=$1; shift # first input: filename
grep_opts=( "$@" ) # remaining inputs: grep options
dir=$(dirname -- "$file") # extract directory name to use for offset storage
basename=${file##*/} # pick up file name w/o directory
size_file="$dir/.$basename.size" # generate filename to use to store offset
if [[ -s $size_file ]]; then # ...if we already have a file with an offset...
old_size=$(<"$size_file") # ...read it from that file
else
old_size=0 # ...otherwise start at the front.
fi
new_size=$(stat --format=%s -- "$file") || exit # Figure out current size
if (( new_size < old_size )); then
old_size=0 # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
exit 0 # no new contents, so no point in trying to search
fi
# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" ); grep_retval=${pipe_status[1]}  # capture both exit statuses before they are reset
# if the read (dd) failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1
# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit
# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }
# ...and if that write succeeded, overwrite the last place where we serialized output.
mv -- "$tempfile" "$size_file" || exit
exit "$grep_retval"
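Saved as, say, scan_new_lines.sh (the name and the paths below are placeholders), it can then be run from cron every five minutes, appending only the newly added [ERROR] lines to a separate file:
# hypothetical crontab entry - adjust the script path, log path and output path
*/5 * * * * /path/to/scan_new_lines.sh /var/log/mysql/error.log -F '[ERROR]' >> /var/log/mysql/recent_errors.log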
Alternate Mode: Bisect For The Timestamp
Note that this can miss content if you're relying on, say, cron to invoke your code every 5 minutes on-the-dot; storing byte offsets can thus be more accurate.
Using the bsearch tool by Ole Tange:
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date")
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
STATE_DIR="SOME_PATH_OF_YOUR_CHOICE"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)
if [[ ! -e "$STATE_DIR"/last_size_db_file ]] ; then
    tail --bytes "$current_db_size" "$DB_FILE" > "$STATE_DIR"/log-file_$(date +%Y-%m-%d_%H-%M-%S)
else
    if [[ $(cat "$STATE_DIR"/last_size_db_file) -gt $current_db_size ]] ; then
        previously_read_bytes=0
    else
        previously_read_bytes=$(cat "$STATE_DIR"/last_size_db_file)
    fi
    new_bytes=$(( current_db_size - previously_read_bytes ))
    tail --bytes "$new_bytes" "$DB_FILE" > "$STATE_DIR"/log-file_$(date +%Y-%m-%d_%H-%M-%S)
fi
printf '%s' "$current_db_size" > "$STATE_DIR"/last_size_db_file
This writes every byte of DB_FILE that was not previously seen to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S).
Note that $(date +%Y-%m-%d_%H-%M-%S) will be the current 'full' date at the time the log file is created.
You can make this a script and use cron to execute it every five minutes, with a crontab entry like this:
*/5 * * * * PATH_TO_YOUR_SCRIPT
Here is my approach:
First, read the whole log as written so far.
When you reach the end, collect new lines for a timespan (in my example 9 seconds, for faster testing, while my dummy server appends to the logfile every 3 seconds).
After the timespan, echo the cache, clear the cache (an array, arr), then loop and sleep for a short while so that this process doesn't consume all the CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
    s=$(( $(date +%s) % 3600))
    echo $s server msg
    sleep 3
done >> seconds.log
Started via ./seconds-out.sh &.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=0
while true
do
    read line
    printf -v secnow '%(%s)T' -1
    now=$(( secnow % (3600*24*365) ))
    # consume every line created in the past
    if (( ! init ))
    then
        # assume reading a line might not take longer than a second (rounded to whole seconds)
        while (( ${#line} > 0 && (now - start) < 2 ))
        do
            read line
            start=$now
            echo -n "." # for debugging purpose, remove
            printf -v secnow '%(%s)T' -1
            now=$(( secnow % (3600*24*365) ))
        done
        init=1
        echo "init=$init" # for debugging purpose, remove
    # collect new lines, display them every $interval seconds
    else
        if (( ${#line} > 0 ))
        then
            echo -n "-" # for debugging purpose, remove
            arr+=("read: $line \n")
        fi
        if (( (now - start) > interval ))
        then
            echo -e "${arr[@]}"
            arr=()
            start=$now
        fi
    fi
    sleep .1
done < seconds.log
Output with the logfile generator writing every 3 seconds and running for some time before the read-seconds.sh script is started, with debugging output activated:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
Every dot represents a logfile line from the past, and is therefore skipped.
Every dash represents a newly collected logfile line.
I am launching a bunch of copies of the same script (generate_records.php) into screens. I am doing this to easily parallelize the processes. I would like to write the output of each PHP process to its own log file using something like &> log_$i (stdout and stderr).
My shell scripting is weak sauce, and I can't get the syntax correct. I keep getting the output of the screen command, which is empty.
Example: launch_processes_in_screens.sh
max_record_id=300000000
# number of parallel processors to run
total_processors=10
# max staging companies per processor
(( num_records_per_processor = $max_record_id / $total_processors ))
i=0
while [ $i -lt $total_processors ]
do
    (( starting_id = $i * $num_records_per_processor + 1 ))
    (( ending_id = $starting_id + $num_records_per_processor - 1 ))
    printf "\n - Starting processor #%s starting at ID:%s and ending at ID: %s" "$i" "$starting_id" "$ending_id"
    screen -d -m -S "process_$i" php generate_records.php "$starting_id" "$num_records_per_processor" "FALSE"
    ((i++))
done
If the only reason you're using screen is to launch many processes in parallel, you can avoid it entirely and use & to start them in the background:
php generate_records.php "$starting_id" "$num_records_per_processor" FALSE &
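For instance, here's a sketch of the question's loop without screen (same variables as launch_processes_in_screens.sh), sending each worker's stdout and stderr to its own log file:
i=0
while [ $i -lt $total_processors ]
do
    (( starting_id = i * num_records_per_processor + 1 ))
    # &> "log_$i" captures both stdout and stderr; the trailing & backgrounds the worker
    php generate_records.php "$starting_id" "$num_records_per_processor" FALSE &> "log_$i" &
    ((i++))
done
wait   # block until every background worker has finished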
You may also be able to remove some code by using parallel.
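If GNU parallel is installed, the whole launcher can shrink to roughly this (a sketch; log_{#} uses parallel's 1-based job number for the log names, and the variables are the ones defined in the question's script):
# seq emits the starting IDs (1, 30000001, ...); {} is each starting ID
seq 1 "$num_records_per_processor" "$max_record_id" |
    parallel -j "$total_processors" "php generate_records.php {} $num_records_per_processor FALSE > log_{#} 2>&1"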
In a bash script I need to wait until CPU usage gets below a threshold.
In other words, I'd need a command wait_until_cpu_low which I would use like this:
# Trigger some background CPU-heavy command
wait_until_cpu_low 40
# Some other commands executed when CPU usage is below 40%
How could I do that?
Edit:
target OS is: Red Hat Enterprise Linux Server release 6.5
I'm considering the average CPU usage (across all cores)
A much more efficient version just calls mpstat and awk once each, and keeps them both running until done; no need to explicitly sleep and restart both processes every second (which, on an embedded platform, could add up to measurable overhead):
wait_until_cpu_low() {
    awk -v target="$1" '
        $13 ~ /^[0-9.]+$/ {
            current = 100 - $13
            if (current <= target) { exit(0); }
        }' < <(LC_ALL=C mpstat 1)
}
I'm using $13 here because that's where idle % is for my version of mpstat; substitute appropriately if yours differs.
This has the extra advantage of doing floating point math correctly, rather than needing to round to integers for shell-native math.
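Usage then matches the pseudocode in the question (some_cpu_heavy_command is just a placeholder):
some_cpu_heavy_command &                    # trigger the background CPU-heavy work
wait_until_cpu_low 40
echo "average CPU usage is now below 40%"   # commands to run once usage has dropped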
wait_for_cpu_usage()
{
    current=$(mpstat 1 1 | awk '$12 ~ /[0-9.]+/ { print int(100 - $12 + 0.5) }')
    while [[ "$current" -ge "$1" ]]; do
        current=$(mpstat 1 1 | awk '$12 ~ /[0-9.]+/ { print int(100 - $12 + 0.5) }')
        sleep 1
    done
}
Note that it requires the sysstat package to be installed.
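On the RHEL system mentioned in the question, installing it is typically just:
yum install sysstat   # provides mpstat; run as root or via sudo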
You might use a function based on the top utility. Note, though, that doing so is not very reliable, because CPU utilization can change rapidly at any time. Just because the check succeeded, there is no guarantee that the CPU utilization will stay low for as long as the following code runs. You have been warned.
The function:
function wait_for_cpu_usage {
    threshold=$1
    while true ; do
        # Get the current CPU usage
        usage=$(top -n1 | awk 'NR==3{print $2}' | tr ',' '.')
        # Compare the current usage against the threshold
        result=$(bc -l <<< "$usage <= $threshold")
        [ "$result" == "1" ] && break
        # Feel free to sleep less than a second. (with GNU sleep)
        sleep 1
    done
    return 0
}
# Example call
wait_for_cpu_usage 25
Note that I'm using bc -l for the comparison since top prints the CPU utilization as a float value.
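For reference, bc prints 1 when a relational expression is true and 0 when it is false, which is what the comparison against "1" in the function relies on:
$ bc -l <<< "37.5 <= 40"
1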
As noted by "Mr. Llama" in a comment above, I've used uptime to write my simple function:
function wait_cpu_low() {
    threshold=$1
    while true; do
        current=$(uptime | awk '{ gsub(/,/, ""); print $10 * 100; }')
        if [ "$current" -lt "$threshold" ]; then
            break
        else
            sleep 5
        fi
    done
}
In the awk expression:
$10 is the load average over the last minute
$11 is the load average over the last 5 minutes
$12 is the load average over the last 15 minutes
(Note that uptime reports load averages, not CPU percentages, so this is only an approximation of CPU usage.)
And here is a usage example:
wait_cpu_low 20
It waits until the one-minute load average is below 0.20, i.e. roughly 20% of one CPU core.
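On a multi-core machine you may want to scale the threshold by the number of cores; a small sketch, assuming nproc is available (as it is on most Linux systems):
cores=$(nproc)
# allow a load of up to 0.20 per core, e.g. a threshold of 160 on an 8-core box
wait_cpu_low $(( 20 * cores ))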