Detect data flowing through a port in a bash script - linux

I have data flowing through a Linux box and a custom command that prints the data as it flows to STDOUT (the screen). I want to detect if data is flowing and restart some processes if it's not.
Let's say my test file is "flowchk.sh". How do I use that in a conditional statement in a shell script? My plan so far has been to push the data to a file then check to see if the file has any data in it:
timeout 5s flowchk.sh > anythinghere
FILENAME=./anythinghere
MAXSIZE=5000
FILESIZE=$(stat -c%s "$FILENAME")
if (( FILESIZE > MAXSIZE )); then
    echo "all ok"
else
    restarteverything!
fi
This has run into problems because the timeout command doesn't terminate properly when using my flowchk script (never returns to the command prompt). So I either need help figuring out how to stop flowchk's execution after a period of time (or it will run forever) so I can test the temp file to see if there's anything there OR I need to know if there's a better way to approach this problem and I'm wasting time.
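One possible workaround, assuming GNU coreutils timeout, is to let timeout escalate to SIGKILL if flowchk.sh ignores the initial SIGTERM, and to test whether any byte arrived at all instead of inspecting a temp file; a rough sketch:
# Force-kill flowchk.sh 2 seconds after the SIGTERM from timeout if it hangs,
# and consider data "flowing" if at least one byte shows up within 5 seconds.
bytes=$(timeout -k 2 5s flowchk.sh | head -c 1 | wc -c)
if [ "$bytes" -gt 0 ]; then
    echo "all ok"
else
    restarteverything   # placeholder for the actual restart logic
fi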

Trying to use SCP to copy multiple files from remote to local using script

So I'll start with the fact that I'm relatively new to linux scripting, so if I am going about it the wrong way, let me know.
I am creating a script that is meant to copy logs from many different hosts onto the local machine depending on user input.
One of the functions I am writing requires the use of scp. Each time you use the scp command at a particular remote host, you have to enter your password. So to save time for the user, I want to copy any file that the particular host may have on it that the user wants.
I know I can do this using scp user@Remoteipaddress:'directory/file1 directory/file2' local/machine/directory
I have it running a bunch of loops (what I feel is too many, so if there is a better way let me know).
The portion with the scp command is my main issue. The code looks fine if I quote it and echo it. I can even copy and paste the echoed result and it will work, but if I let the script do it I receive bash: -c: line 0: unexpected EOF while looking for matching `''
edit: $app is a static number created in another portion of the program
I added a couple of things that seemed to be missing. I'm trying to piece this together from multiple areas of the program without making it messier than it already is.
# assigns different remote host paths to an array variable
until [ $scriptCounter == $app ]
do
    scpScript[$scriptCounter]="user@${ipAddress[$ipCounter]}:'"
    ((++ipCounter))
    ((++scriptCounter))
done
# $app value gets set by another function - typically 3 if that matters
scpCount=0
DayCounter=0
ipScriptCounter=0
until [ $scpCount == $app ]
do
    ((++scpCount))
    mkdir ~/MyDocuments/Logs/$3/app$scpCount
    echo "Creating ~/MyDocuments/Logs/${3}/app${scpCount}"
    # there is one log for each day, $totalDiffDays is the total amount of days
    # $DayCounter is set and gets incremented every time it goes through the loop
    # until it matches the total days
    until [ $DayCounter == $totalDiffDays ]
    do
        scpPath[$DayCounter]="/var/log/docker/theLog*${datePath[$DayCounter]}*"
        noSpaceSCP[$DayCounter]=${scpPath[$DayCounter]//[[:blank:]]/}
        ((++DayCounter))
    done
    fullSCPscript[$scpCount]="${scpScript[$ipScriptCounter]}${noSpaceSCP[*]}'"
    # this portion I have an issue with.
    scp ${fullSCPscript[$scpCount]} ~/MyDocuments/Logs/$3/app$scpCount
    # this ups the array counter for my ipaddress array
    ((++ipScriptCounter))
    # how I'm zeroing out the $DayCounter so it will run through again for
    # other nodes but with a different IP address
    until [ $DayCounter == "0" ]
    do
        ((--DayCounter))
    done
done
Example output I get when I echo the line with the scp command:
scp user@10.10.200.100:'/var/log/docker/theLog*2018-07-26* /var/log/docker/theLog*2018-07-27*' /home/mobaxterm/MyDocuments/Logs/care3/app1
I'm sorry that this looks messy, but overall I'm trying to build the directory that it's grabbing the log from, and if there are multiple days, just add onto the scp command. I'm doing this as opposed to running a whole separate command so that a user who needs 5 files doesn't have to enter their password 5 times; instead they would only have to enter it once.
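For what it's worth, a likely cause of the error is that single quotes stored inside a variable are not treated as quoting when the variable is expanded, so scp receives a literal ' and forwards it to the remote shell, which then complains about the unmatched quote. A rough sketch of one possible workaround, reusing the variable names from the loops above and relying on classic scp letting the remote shell expand a space-separated path list:
# Build the remote spec without embedded single quotes; double-quoting the
# expansion keeps "user@host:path1 path2 ..." as a single argument to scp,
# so the globs are expanded remotely in one connection (one password prompt).
fullSCPscript[$scpCount]="user@${ipAddress[$ipScriptCounter]}:${noSpaceSCP[*]}"
scp "${fullSCPscript[$scpCount]}" ~/MyDocuments/Logs/"$3"/app"$scpCount"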

Linux Read - Timeout after x seconds *idle*

I have a (bash) script on a server that I have inherited the administration aspect of, and have recently discovered a flaw in the script that nobody has brought to my attention.
After discovering the issue, others have told me that it has been irritating them, but never told me (great...)
So, the script follows this concept
#!/bin/bash
function refreshscreen(){
    # This function refreshes a "statistics screen"
    ...
    echo "Enter command to override update"
    read -t 10 variable
}
This script refreshes a statistics screen, and allows the user to stall the update in order to enter commands handled by a case statement. However, the read times out (read -t 10) after 10 seconds, regardless of whether the user is typing.
Long story short, is there a way to prevent read from timing out if the user is actively typing a command? Best case scenario would be a "Time out of SEC idle/inactive seconds" opposed to just timeout after x seconds.
I have thought about running a background script at the end of the cycle before the read command pauses the screen to check for inactivity, but have not found a way to make that command work.
You can use read in a loop, reading one character at a time, and adding it to a final read string. This would then give the user some timeout amount of time per character rather than per command. Here's a sample function you might be able to incorporate into your script that shows what I'm talking about:
read_with_idle_timeout() {
    local input=""
    # Read one character at a time; each read gets its own 10-second timeout.
    read -t 10 -N 1 variable
    while [ -n "$variable" ]
    do
        input+=$variable
        read -t 10 -N 1 variable
    done
    echo "Read: $input"
}
This will give the user 10 seconds to type each character. If they stop typing, you'll get as much of the command as they had started typing before the timeout occurred, and then your case statement can handle it. Perhaps you can store the final string in a global variable, or just put this code directly into your other function.
If you need more than one word, since read breaks on $IFS, you could call this function multiple times until you get all the input you're expecting.
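As a hypothetical example of wiring it in, the caller could capture the function's output, strip the "Read: " prefix, and hand the result to the existing case statement:
user_command=$(read_with_idle_timeout)
user_command=${user_command#Read: }

case "$user_command" in
    "") refreshscreen ;;                    # nothing typed before the idle timeout
    *)  echo "override: $user_command" ;;   # hand off to the existing command handling
esac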
I have searched for a simple solution that will do the following:
timeout after 10 seconds, if there is no user input at all
the user has infinite time to finish his answer if the first character was typed within the first 10 sec.
This can be implemented in two lines as follows:
read -N 1 -t 10 -p "What is your name? > " a
[ "$a" != "" ] && read b && echo "Your name is $a$b" || echo "(timeout)"
In case the user waits 10 sec before he enters the first character, the output will be:
What is your name? > (timeout)
If the user types the first character within 10 sec, he has unlimited time to finish this task. The output will look as follows:
What is your name? > Oliver
Your name is Oliver
Caveat: once typed, the first character cannot be edited, while all other characters can be edited (backspace and re-type). Any ideas for a simple solution?

using awk and bash for monitoring exec output to log

I am looking for some help with awk and bash commands.
My project has embedded (so very limited) hardware, and I need to run a specific command called "digitalio show".
The command output is:
Input=0x50ff <-- last char only change
Output=0x7f
OR
Input=0x50fd <-- last char only change
Output=0x7f
I need to extract the input parameter, convert it into either Active or Passive, and log the changes to a file with a timestamp.
The log file should look like this:
YYMMDDhhmmss;Active
YYMMDDhhmmss;Passive
YYMMDDhhmmss;Active
YYMMDDhhmmss;Passive
while logging only changes.
The command "digitalio show" is an embedded-specific command that gives the I/O state at the time of execution, so I basically need to log every change in the I/O to a file using the minimal tools I have on the embedded H/W.
I can run the command every 500 msec, but if I log all the outputs I will fill the flash very quickly, so I need to log only changes.
In the end this will run as a background daemon.
Thanks!
Rotem
As far as I understand, a single run of the digitalio show command outputs two lines in the following format:
Input=HEX_NUMBER
Output=0x7f
where HEX_NUMBER is either 0x50ff or 0x50fd. Suppose the former stands for "Active" and the latter for "Passive".
Running the command once every 500 milliseconds requires keeping state between iterations. The most obvious implementation is a loop with a sleep.
However, sleep implementations vary. Some of them support a floating point argument (fractional seconds), and some don't. For example, the GNU implementation accepts arbitrary floating point numbers, but the standard UNIX implementation guarantees to suspend execution for at least the integral number of seconds. There are many alternatives, though. For instance, usleep from killproc accepts microseconds. Alternatively, you can write your own utility.
Let's pick the usleep command. Then the Bash script may look like the following:
#!/bin/bash -
last_state=
while true ; do
    i=$(digitalio show | awk -F= '/Input=0x[a-zA-Z0-9]+/ {print $2}')
    if test "$i" = "0x50ff" ; then
        state="Active"
    else
        state="Passive"
    fi
    if test "$state" != "$last_state" ; then
        printf '%s;%s\n' "$(date '+%Y%m%d%H%M%S')" "$state"
    fi
    last_state="$state"
    usleep 500000
done
Sample output
20161019103534;Active
20161019103555;Passive
The script launches digitalio show command in an infinite loop, then extracts the hex part from Input lines with awk.
The $state variable is assigned either "Active" or "Passive" depending on the value of the hex string.
The $last_state variable keeps the value of $state in the last iteration. If $state is not equal to $last_state, then the state is printed to the standard output in the specific format.
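Since the question mentions running this as a background daemon in the end, one hedged way to launch it (the script name and log path below are just placeholders) could be:
# Detach from the terminal and append the printed state changes to a log file.
nohup /usr/local/bin/io_monitor.sh >> /var/log/io_state.log 2>&1 &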

Run bash shell in parallel and wait

I have 100 files in a directory, and want to process each one with several steps, while step1 is time-consuming. So the pseudocode is like:
for filename in ~/dir/*; do
    run_step1 "$filename" > "${filename}.out" &
done
for outfile in ~/dir/*.out; do
    run_step2 "$outfile" > "${outfile}.result"
done
My question is how I can check whether step1 is complete for a given input file. I used to use threads.join in C#, but I am not sure whether bash has an equivalent.
It looks like you want:
for filename in ~/dir/*
do
    (
        run_step1 "$filename" > "${filename}.out"
        run_step2 "${filename}.out" > "${filename}.result"
    ) &
done
wait
This processes each file in a separate sub-shell, running first step 1 then step 2 on each file, but processing multiple files in parallel.
About the only issue you'll need to worry about is ensuring you don't try running too many processes in parallel. You might want to consider GNU parallel.
You might want to write a trivial script (doit.sh, perhaps):
run_step1 "$1" > "$1.out"
run_step2 "$1.out" > "$1.result"
and then invoke that script from parallel, one file per invocation.
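For example, one possible invocation (assuming doit.sh has been made executable) would be:
# Run doit.sh once per file, with as many jobs in parallel as there are CPU cores.
parallel ./doit.sh ::: ~/dir/*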
Try this:
declare -a PROCNUMS
ITERATOR=0
for filename in ~/dir/*; do
    run_step1 "$filename" > "${filename}.out" &
    PROCNUMS[$ITERATOR]=$!
    let "ITERATOR=ITERATOR+1"
done
ITERATOR=0
for outfile in ~/dir/*.out; do
    wait ${PROCNUMS[$ITERATOR]}
    run_step2 "$outfile" > "${outfile}.result"
    let "ITERATOR=ITERATOR+1"
done
This will make an array of the created processes, then wait for them in order as they need to be completed. Note that it relies on there being a one-to-one relationship between in and out files, and on the directory not changing while it is running.
Now, for a small performance boost, you can run the second loop asynchronously too if you like, assuming each file is independent.
I hope this helps, but if you have any questions please comment.
The Bash builtin wait can wait for a specific background job or all background jobs to complete. The simple approach would be to just insert a wait in between your two loops. If you'd like to be more specific, you could save the PID for each background job and wait PID directly before run_step2 inside the second loop.
After the loop that executes step1, you could write another loop that executes the fg command, which moves the most recently backgrounded process into the foreground.
You should be aware that fg could return an error if a process has already finished.
After the loop of fg calls, you can be sure that all step1 jobs have finished.
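A rough sketch of that idea; note that fg only works when job control is enabled, which non-interactive shells normally have off, so set -m is assumed here:
#!/bin/bash
set -m   # enable job control so fg is available inside a script

for filename in ~/dir/*; do
    run_step1 "$filename" > "${filename}.out" &
done

# One fg per launched job; ignore the error if a job has already finished.
for filename in ~/dir/*; do
    fg 2>/dev/null || true
done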

Handle "race-condition" between 2 cron tasks. What is the best approach?

I have a cron task that runs periodically. This task depends on a condition to be valid in order to complete its processing. In case it matters this condition is just a SELECT for specific records in the database. If the condition is not satisfied (i.e the SELECT does not return the result set expected) then the script exits immediately.
This is bad as the condition would be valid soon enough (don't know how soon but it will be valid due to the run of another script).
So I would like somehow to make the script more robust. I thought of 2 solutions:
1. Put a while loop in the script and sleep constantly until the condition is valid. This should work, but it has the downside that once the script is in the loop, it is out of control. So I thought to additionally check, after waking up, whether a specific file exists; if it does, the script "understands" that the user wants to force-stop it.
2. Once the script figures out that the condition is not valid yet, it appends a second script to the crontab and stops. That second script continually polls for the condition and, once the condition is valid, restarts the first script so it can resume its processing. This seems to work, but I am not sure it is a good solution. E.g. perhaps programmatically modifying the crontab is a bad idea?
Anyway, I thought that perhaps this problem is common and has a standard solution, much better than the two I came up with. Does anyone have a better proposal? Which of my ideas would be best? I am not very experienced with cron tasks, so there could be things/problems I am overlooking.
Instead of programmatically appending to the crontab, you might want to consider using at to schedule the job to run again at some time in the future. If the script determines that it cannot do its job now, it can simply schedule itself to run again a few minutes (or a few hours, as the case may be) later by way of an at command.
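For example, the script might reschedule itself along these lines (condition_is_met and the script path are just placeholders for the real check and location):
# If the precondition isn't satisfied yet, try again in 5 minutes and exit.
if ! condition_is_met; then
    echo "/path/to/this-script.sh" | at now + 5 minutes
    exit 0
fi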
Following up from our conversation in comments, you can take advantage of conditional execution in a cron entry. Supposing you want to branch based on time of day, you might use the output from date.
For example: this would always invoke the first command, then invoke the second command only if the clock hour is currently 11:
echo 'ScriptA running' ; [ $(date +%H) == 11 ] && echo 'ScriptB running'
More examples!
To check the return value from the first command:
echo 'ScriptA' ; [ $? == 0 ] && echo 'ScriptB'
To instead check the STDOUT, you can use a colon as a no-op and branch by capturing output with the same $() construct we used with date:
: ; [ $(echo 'ScriptA') == 'ScriptA' ] && echo 'ScriptB'
One downside of the last example: STDOUT from the first command won't be printed to the console. You could capture it into a variable which you echo out, or write it to a file with tee, if that's important.
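For instance, a hedged variant of the last example that still shows the first command's output by capturing it into a variable first:
out=$(echo 'ScriptA') ; echo "$out" ; [ "$out" == 'ScriptA' ] && echo 'ScriptB'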
