How to tail output from adb logcat and execute a command on each new line - linux

I'm trying to do something very similar to this post, but instead of reading from a file I want to "subscribe" to the output of adb logcat, and every time a new line is logged I will run some code on that line.
I tried the following, but neither worked:
tail -f $(adb logcat) | while read; do
echo $read;
processLine $read;
done
or
adb logcat >> logcat.txt &
tail -f logcat.txt | while read; do
echo $read;
processLine $read;
done
What is the simplest way to do this? Thanks in advance.

The following two solutions should work. I generally prefer the second form, as the while loop runs in the current process, so variables set inside the loop are still visible after it finishes. The first form runs the while loop in a child process.
While loop in child process:
#!/bin/bash
adb logcat |
while read -r line; do
echo "${line}"
processLine "${line}"
done
While loop in current process:
#!/bin/bash
while read -r line; do
echo "${line}"
processLine "${line}"
done < <(adb logcat)
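To illustrate the difference, here is a minimal sketch (the "FATAL EXCEPTION" filter and the use of logcat's -d flag, which dumps the current log and exits, are my own additions, not part of the answer): a counter set inside the loop is still visible afterwards only because the loop runs in the current shell.
#!/bin/bash
count=0
while read -r line; do
    # hypothetical filter: count lines mentioning a fatal exception
    [[ "${line}" == *"FATAL EXCEPTION"* ]] && count=$((count + 1))
done < <(adb logcat -d)    # -d dumps the existing log and exits, so the loop terminates
echo "count=${count}"       # works here; with the piped form this would print 0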

Related

Trying to make a live /proc/ reader using bash script for live process monitoring

I'm trying to make a little side project script to sit and monitor all of the /proc/ directories; for the most part I have the concept running and it works (to a degree). What I'm aiming for is to scan through all the directories, cat their status files, and pull out the appropriate info, and then run this process in an infinite loop to give me live updates of when something is running on and dropping off of the scheduler. Right now, every time you run the script it prints 50+ blank lines, and every time it hits the proper regex it prints it correctly, but I'm aiming for it not to roll down the screen the way it does. Any help at all would be appreciated.
regex="[0-9]"
temp=""
for f in /proc/*; do
if [[ -d $f && $f =~ /proc/$regex ]]; then
output=$(cat $f/status | grep "^State") #> /dev/null
process_id=$(cut -b 7- <<< $f)
state=$(cut -b 10-19 <<< $output)
tabs 4
if [[ $state =~ "(running)" ]]; then
echo -e "$process_id:$state\n" | sort >> temp
fi
fi
done
cat temp
rm temp
To get the PID and status of all running processes, try:
awk -F':[[:space:]]*' '/State:/{s=$2} /Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status
To get this output updated every second:
while sleep 1; do awk -F':[[:space:]]*' '/State:/{s=$2} /Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status; done
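If the scrolling itself is the main annoyance, one option (a sketch layered on top of the command above, not something the answer requires) is to clear the terminal before each refresh so the listing redraws in place:
while sleep 1; do
    clear
    awk -F':[[:space:]]*' '/State:/{s=$2} /Pid:/{p=$2} ENDFILE{if (s~/running/) print p,s; p="X"; s="X"}' OFS=: /proc/*/status
done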

Execute and delete command from a file

I have multiple files with an insanely long list of commands. I can't run them all in one go, so I need a smart way to read and execute from file as well as delete the command after completion.
So far I have tried
for i in filename.txt ; do ; execute $i ; sed -s 's/$i//' ; done ;
but it doesn't work. Before I introduced sed, $i was executing. Now even that is not working.
I thought of a workaround where I read the first line and delete the first line until the file is empty.
Any better ideas or commands?
This should work for you; list.txt is your file containing the commands.
Make sure you back up the command file before running.
while read line; do $line; sed -i '1d' list.txt; done < "list.txt"
sed -i edits in place, so list.txt is modified as the loop runs and you will end up with an empty file.
I think what you want to do is something like this:
while read -r -- i; do $i; sed -i "0,/$i/s/$i//;/^$/d" filename.txt; done < filename.txt
The file is read into the loop. Each line is executed, and the sed command will delete only the first entry it finds, then delete the empty line.
I think that one way to do it is to keep a source file of all the commands to be executed, and have the script that executes the commands also write a second log file that lists the commands as they are executed.
If you need to resume the process, you work on the lines in the source file that are not present in the log file.
logfile=commands.log
srcfile=commands.src
oldfile=commands.old
trap "mv $oldfile $logfile; exit 1" 0 1 2 3 13 15
[ -f $logfile ] || cp /dev/null $logfile
cp $logfile $oldfile
comm -23 $srcfile $logfile |
while read -r line
do
echo "$line" >> $oldfile
($line) < /dev/null
done
mv $oldfile $logfile
trap 0
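One caveat the answer does not mention: comm expects both of its inputs to be sorted, which a list of commands usually is not. If sorting the files is not an option, a rough alternative (assuming GNU grep) is to let grep pick out the source lines that are absent from the log, with -F for literal matching, -x for whole-line matching, -v to invert, and -f to read the patterns from the log file:
grep -Fxv -f "$logfile" "$srcfile" |
while read -r line
do
    echo "$line" >> $oldfile
    ($line) < /dev/null
done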

Getting out of tail -f in shell script

I can't seem to make this work.
This is the script:
tail -fn0 nohup.out | while read line; do
if [[ "${line}" =~ ".*ERIKA.*" ]]; then
echo "match found"
break
fi
done
echo "Search done"
The code echo "Search done" does not run even after a match has been found.
I just want the rest of the code to run when a match has been found.
I have not managed to make that happen yet.
Sorry, I am new to log monitoring.
Is there any workaround for this?
I am going to run the script via Jenkins, so the code should run unattended
and should not require any user interaction.
Please help, thanks.
You've got a couple of issues here:
tail is going to keep running until it fails to write to its output pipeline, and thus your pipeline won't complete until tail exits. tail won't do that until the reading side of the pipe (your while loop) has exited AND another line (or possibly 4K if buffering, see below) is written to the log file, causing tail's next write to its output pipe to fail. (Re buffering: most programs are switched to 4K buffering when writing through pipes. Unless tail explicitly sets its buffering, this would affect the above behaviour.)
your regex: "${line}" =~ ".*ERIKA.*" does not match for me, because quoting the right-hand side of =~ makes bash treat the pattern as a literal string rather than a regular expression. However, "${line}" =~ "ERIKA" does match (a literal substring test).
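A quick way to see the difference (a standalone snippet, not part of the original script):
line="some ERIKA text"
[[ "${line}" =~ ".*ERIKA.*" ]] && echo "quoted pattern matched"    # prints nothing: the pattern is taken literally
[[ "${line}" =~ ERIKA ]] && echo "unquoted pattern matched"        # prints: unquoted pattern matched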
You can use tail's --pid option as a solution to the first issue. Here's an example, reworking your script to use that option:
while read line; do
if [[ "${line}" =~ "ERIKA" ]]; then
echo "match found"
break
fi
done < <(tail --pid=$$ -f /tmp/out)
echo "Search done"
Glenn Jackman's pkill solution is another approach to terminating the tail.
Perhaps consider doing this in something other than bash: perl has a nice File::Tail module that implements the tail behaviour.
There are many more questions related to this problem; you may find something you prefer in their answers:
Ending tail -f started in a shell script
Do a tail -F until matching a pattern
https://superuser.com/questions/275827/how-to-read-one-line-from-tail-f-through-a-pipeline-and-then-terminate
https://unix.stackexchange.com/questions/45941/tail-f-until-text-is-seen
https://unix.stackexchange.com/questions/12075/best-way-to-follow-a-log-and-execute-a-command-when-some-text-appears-in-the-log?rq=1
Here's one way, doesn't feel very elegant though.
tail -fn0 nohup.out |
while IFS= read -r line; do
if [[ $line == *ERIKA* ]]; then
echo "match found"
pkill -P $$ tail
fi
done
echo "Search done"
You can use awk to exit:
tail -fn0 nohup.out | awk '/ERIKA/{print "match found ", $0; exit}'

bash: loop through process output and terminate the process

I need some help with the following:
I use linux to script commands sent to a device. I need to submit a grep logcat command to the device and then iterate over its output as it is being generated, looking for a particular string. Once this string is found I want my script to move on to the next command.
In pseudocode:
for line in "adb shell logcat | grep TestProccess"
do
if "TestProccess test service stopped" in line:
print line
print "TestService finished \n"
break
else:
print line
done
adb shell logcat | grep TestProcess | while read line
do
echo "$line"
if [ "$line" = "TestProces test service stopped" ]
then echo "TestService finished"
break
fi
done
adb shell logcat | grep -Fqm 1 "TestProcess test service stopped" && echo "Test Service finished"
The grep flags:
-F - treat the string literally, not as a regular expression
-q - don't print anything to standard output
-m 1 - stop after the first match
The command after && only executes if grep finds a match. As long as you "know" grep will eventually match and want to unconditionally continue once it returns, just leave off the && ...
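In other words, the unconditional version is simply (a sketch):
adb shell logcat | grep -Fqm 1 "TestProcess test service stopped"
echo "TestService finished"
# ...continue with whatever comes next here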
You could use an until loop.
adb shell logcat | grep TestProccess | until read line && [[ "$line" =~ "TestProccess test service stopped" ]]; do
echo "$line";
done && echo -e "$line\nTestService finished"

Bash script does not continue to read the next line of file

I have a shell script that saves the output of a command that is executed to a CSV file. It reads the commands it has to execute from a file which is in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once line is read it will run processLine() function
#Function processLine
processLine(){
line="$#"
START=$(date +%s.%N)
eval $line > /dev/null 2>&1
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
echo "It took $DIFF seconds"
echo $line
}
# Store file name
FILE=""
# get file name as command line argument
# Else read it from standard input device
if [ "$1" == "" ]; then
FILE="/dev/stdin"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b" as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$#" | while read line; do
echo "Executing '$line'"
START=$(date +%s)
eval $line &> /dev/null
END=$(date +%s)
let DIFF=$END-$START
echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
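Applied to the script in the question, that is a one-line change inside processLine (a sketch; only the redirection, or the -nostdin flag, is new):
# before: eval $line > /dev/null 2>&1
eval $line < /dev/null > /dev/null 2>&1    # stop ffmpeg from swallowing the loop's stdin
# or, with ffmpeg >= 1.0, add -nostdin to each command in the command file instead:
# ffmpeg -nostdin -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv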
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line
2) Since the script will now not wait for the ffmpeg process to complete,
we have to prevent our script from starting several ffmpeg processes at once.
We achieve this by delaying the next loop pass while there is at least
one ffmpeg process still running.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
<place your ffmpeg command line here> &
FFMPEGStillRunning="true"
while [ "$FFMPEGStillRunning" = "true" ]; do
Process=$(ps -C ffmpeg | grep -o -e "ffmpeg" )
if [ -n "$Process" ]; then
FFMPEGStillRunning="true"
else
FFMPEGStillRunning="false"
fi
sleep 2s
done
done
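For what it is worth, the same "background ffmpeg, then hold the loop until it finishes" idea can usually be written with the shell's built-in wait instead of polling ps (a sketch, not part of the answer above):
cat FileList.txt |
while read VideoFile; do
    <place your ffmpeg command line here> &
    wait    # blocks until the backgrounded ffmpeg (and any other background jobs) have finished
done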
I would add echoes before and after the eval to see what it's about to eval (in case it's treating the whole file as one big long line) and afterwards (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
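For instance (a throwaway example, not taken from the script):
echo "one two three four" | { read var1 var2 var3; echo "$var3"; }    # prints: three four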
Inside the process line function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the error. Is the command 'chatty'; if so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
START=$(date +%s.%N)
eval "$#" > /dev/null
STATUS=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
echo "${DIFF}s: $STATUS: $line"
}
cat "$#" |
while read line
do
processLine "$line"
done
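Usage would then be along the lines of (the script name here is hypothetical):
./time_commands.sh commands.txt      # times each command and appends one CSV row per command to file.csv
./time_commands.sh < commands.txt    # or read the command list from standard input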
