Scanning the output of a program in the shell - Linux

Hello, I am currently trying to write a script in Linux that takes as input the output of a program and scans it to count how many occurrences of certain words exist. To be clearer, I want to scan the output and store in variables how many times certain words appear in it. I am new to scripting in Linux. I tried storing the output in a file and then scanning it line by line to find the words, but for some reason the loop I use to parse it never ends. Can you help me?
./program > buffer.txt
while read LINE
do
echo "$LINE" | grep word1 #when I use the grep command the loop never ends
done < buffer.txt
Edit: In C, an equivalent program would be
char *word = "word1";
char buffer[256];
while (fgets(buffer, sizeof buffer, file_a) != NULL) /* parse all the lines of the text */
{
    if (strstr(buffer, word) != NULL)
        occurrences++; /* continue the search with this */
}

The easiest thing to do is to skip writing to a file altogether:
if ./program | grep -q word1 &>/dev/null; then
echo "TRUE"
fi
-q tells grep to be quiet, but it can still produce error messages occasionally, which you can suppress with the &>/dev/null. If you'd prefer to see any error messages, just remove that part.
If you want grep to analyze both ./program's stderr and stdout, then you'll need to redirect stderr to stdout first: ./program 2>&1 | grep -q word1
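Since the original goal was to store occurrence counts in variables, here is a minimal sketch of doing that without a loop at all. The word1/word2 names and the simulated output are placeholders, not part of the original question:

```shell
# ./program is simulated with printf so the example is self-contained;
# replace it with the real command in practice.
output=$(printf '%s\n' "word1 here" "both word1 and word2" "nothing")
# grep -o -w prints every whole-word match on its own line; wc -l counts them.
count1=$(grep -o -w word1 <<< "$output" | wc -l)
count2=$(grep -o -w word2 <<< "$output" | wc -l)
echo "word1: $count1, word2: $count2"
```

Note that `grep -c` would count matching *lines*, not occurrences, which is why `-o | wc -l` is used here.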

Your code is working fine; the while loop will exit.
Try adding the following line after the while loop to verify:
echo "While loop Exited!"

Related

Get the last line from a grep search on multiple files, and write them to an output file

I have multiple files located in multiple directories. I search them for the keyword 'ENERGY' with grep. In each file I get multiple matches. I want to take the last matching line from each file and save the results in the output.txt file. I wrote the following code:
labl=SubDir
ENERGY=`grep 'ENERGY' MyDir*${labl}*/*.txt`
cat > output.txt << EOF
${ENERGY}
EOF
This code saves all matches from every file. But as mentioned, I need the last match from each file. For that I modified the grep command as:
ENERGY=`grep 'ENERGY' MyDir*${labl}*/*.txt|tail -1`
Unfortunately this doesn't do the job either. Instead, it saves all the match cases from the last file only.
How to solve it?
Please don't run multiple processes/pipes to achieve this.
gawk '/ENERGY/{last=$0} ENDFILE{if(last!="") print last; last=""}' MyDir*"$labl"*/*.txt
/ENERGY/{last=$0}: On lines which match the regex ENERGY, set variable last to the contents of the entire line $0
ENDFILE{...}: Run this {action} at the end of every input file supplied by the glob.
if(last!="") print last: print last if it's not null
last="": reset this variable to null, avoiding duplication
MyDir*"${labl}"*/*.txt: Quoted variable in glob will match directory names that include spaces
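ENDFILE is a gawk extension. If only a plain awk (e.g. mawk) is available, the same idea can be written portably by treating FNR==1 as the file boundary. This is a rewritten sketch, not the answer's command, and the demo directories and files below are made up:

```shell
# Made-up sample data matching the question's layout:
mkdir -p awkdemo/MyDir_SubDir_x awkdemo/MyDir_SubDir_y
printf 'ENERGY a\nfiller\nENERGY b\n' > awkdemo/MyDir_SubDir_x/r.txt
printf 'no match here\n' > awkdemo/MyDir_SubDir_y/r.txt
labl=SubDir
# FNR==1 fires at the start of each new file: flush the previous file's
# last match there, and flush the final file's match in END.
result=$(awk 'FNR==1{if(last!="")print last; last=""}
              /ENERGY/{last=$0}
              END{if(last!="")print last}' awkdemo/MyDir*"$labl"*/*.txt)
echo "$result"
```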
Use a for loop:
for f in MyDir*"$labl"*/*.txt; do
grep ENERGY "$f" | tail -1 >> output.txt
done
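A self-contained run of the loop above, with made-up directory and file names. Redirecting the whole loop once, rather than appending with >> inside it, avoids stale output.txt contents across runs:

```shell
# Made-up sample data matching the question's layout:
mkdir -p demo/MyDir_SubDir_a demo/MyDir_SubDir_b
printf 'ENERGY 1\nother\nENERGY 2\n' > demo/MyDir_SubDir_a/run.txt
printf 'ENERGY 3\nENERGY 4\n' > demo/MyDir_SubDir_b/run.txt
labl=SubDir
for f in demo/MyDir*"$labl"*/*.txt; do
    grep ENERGY "$f" | tail -1   # keep only the last match per file
done > demo/output.txt
cat demo/output.txt
```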
Yet another possible approach is to use parallel, like this. You can probably achieve the same with xargs, but I personally prefer parallel as it is simpler and makes it possible to scale the process.
ls -1 file* | parallel -j1 "grep ENERGY {} | tail -n 1" > output.txt

Redirecting linux cout to a variable and the screen in a script

I am currently trying to make a script file that runs multiple other scripts on a server. I would like to display the output of these scripts on the screen IN ADDITION to passing it into grep so I can do error testing. Currently I have written this:
status=$(SOMEPROCESS | grep -i "SOMEPROCESS started completed correctly")
I do further error handling below this using the variable status, so I would like to display SOMEPROCESS's output on the screen for error reference. This is a read-only server and I cannot save the output to a log file.
You need to use the tee command. It will be slightly fiddly, since tee outputs to a file handle; however, you could create a file descriptor using a pipe.
Or (simpler) for your use case: start the script without grep and pipe it through tee with SOMEPROCESS | tee /my/safely/generated/filename. Then use tail -f /my/safely/generated/filename | grep -i "my grep pattern" separately.
You can use process substitution together with tee:
SOMEPROCESS | tee >(grep ...)
This will use an anonymous pipe and pass /dev/fd/... as file name to tee (or a named pipe on platforms that don't support /dev/fd/...).
Because SOMEPROCESS is likely to buffer its output when not talking to a terminal, you might see significant lag in screen output.
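If you want the on-screen copy and the captured grep result at the same time, tee can send one copy to stderr, which stands in for the screen here. The someprocess function below is a dummy stand-in, not a real command:

```shell
# Dummy process producing a couple of lines:
someprocess() { printf '%s\n' "starting up" "SOMEPROCESS started completed correctly"; }
# stdout goes through tee: one copy to stderr (visible on screen), one copy
# on to grep, whose output is captured by the command substitution.
status=$(someprocess | tee /dev/stderr | grep -i "started completed correctly")
echo "captured: $status"
```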
I'm not sure whether I understood your question exactly.
I think you want to get the output of SOMEPROCESS, test it, and print it out when there are errors. If so, the code below may help you:
s=$(SOMEPROCESS)
grep -q 'SOMEPROCESS started completed correctly' <<< "$s"
if [[ $? -ne 0 ]]; then
# specified string not found in the output; it means SOMEPROCESS failed to start correctly
echo "$s"
fi
But this code stores all the output in memory; if the output is big enough, there is an OOM risk.

Send multiple outputs to sed

When there is a program which, upon execution, prints several lines on stdout, how can I redirect all those lines to sed and perform some operations on them while they are being generated?
For example:
7zip a -t7z output_folder input_folder -mx9 > sed 's/.*[ \t][ \t]*\([0-9][0-9]*\)%.*/\1/'
7zip generates a series of lines as output, each including a percentage value, and I would like sed to display only those values while they are being generated. The above script unfortunately does not work...
What is the best way to do this?
You should use the pipe | instead of the redirection > so that sed takes the first command's output as its input.
The above script line will have created a file named sed in the current directory.
Furthermore, 7zip may output these lines to stderr instead of stdout. If that is the case, first redirect standard error to standard output before piping: 2>&1 |
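Putting both fixes together, the pipeline becomes 7zip a -t7z output_folder input_folder -mx9 2>&1 | sed 's/.*[ \t][ \t]*\([0-9][0-9]*\)%.*/\1/'. The sed extraction itself can be sanity-checked on canned lines (the sample progress lines below are invented; real 7zip output may differ):

```shell
# Invented 7zip-style progress lines piped through the percentage filter:
percentages=$(printf '%s\n' "  4% 12 + file1" " 37% 80 + file2" |
    sed 's/.*[ \t][ \t]*\([0-9][0-9]*\)%.*/\1/')
echo "$percentages"
```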

Special Piping/redirection in bash

(Firstly I've been looking for an hour so I'm pretty sure this isn't a repeat)
I need to write a script that executes 1 command, 1 time, and then does the following:
Saves both the stdout and stderr to a file (while maintaining their proper order)
Saves stderr only to a variable.
Elaboration on point 1, if I have a file like so
echo "one"
thisisanerrror
echo "two"
thisisanotherError
I should expect to see output, followed by error, followed by output, followed by more error (thus concatenating is not sufficient).
The closest I've come is the following, which seems to corrupt the log file:
errs=`((./someCommand.sh 2>&1 1>&3) | tee /dev/stderr ) 3>file.log 2>&3 `
This might be a starting point:
How do I write stderr to a file while using "tee" with a pipe?
Edit:
This seems to work:
((./foo.sh) 2> >(tee >(cat) >&2)) > foo.log
Split stderr with tee, write one copy to stdout (cat) and the other to stderr. Afterwards you can grab all of stdout and write it to a file.
Edit: to store the output in a variable
varx=`((./foo.sh) 2> >(tee >(cat) >&2))`
I have also seen the command enclosed in additional double quotes, but I have no clue what that might be good for.

Grep filtering output from a process after it has already started?

Normally when one wants to look at specific output lines from running something, one can do something like:
./a.out | grep IHaveThisString
but what if IHaveThisString is something that changes every time, so you need to first run it, watch the output to catch what IHaveThisString is on that particular run, and then grep for it? I could just dump to a file and grep it later, but is it possible to do something like backgrounding the process and then bringing it to the foreground, now piped into some grep? Something akin to:
./a.out
Ctrl-Z
fg | grep NowIKnowThisString
just wondering..
No; it is only in your screen buffer, unless you saved it some other way.
Short form: You can do this, but you need to know that you need to do it ahead-of-time; it's not something that can be put into place interactively after-the-fact.
Write your script to determine what the string is. We'd need a more detailed example of the output format to give a better example of usage, but here's one for the trivial case where the entire first line is the filter target:
run_my_command | { read string_to_filter_for; fgrep -e "$string_to_filter_for"; }
Replace the read string_to_filter_for with as many commands as necessary to read enough input to determine what the target string is; this could be a loop if necessary.
For instance, let's say that the output contains the following:
Session id: foobar
and thereafter, you want to grep for lines containing foobar.
...then you can pipe through the following script:
re='Session id: (.*)'
while read; do
if [[ $REPLY =~ $re ]] ; then
target=${BASH_REMATCH[1]}
break
else
# if you want to print the preamble; leave this out otherwise
printf '%s\n' "$REPLY"
fi
done
[[ $target ]] && grep -F -e "$target"
If you want to manually specify the filter target, this can be done by having the loop check for a file being created with filter contents, and using that when starting up grep afterwards.
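For reference, a self-contained run of the script above under bash, with canned lines standing in for a real program's output:

```shell
# Canned output: an id line followed by a mix of matching and other lines.
gen() { printf '%s\n' 'Session id: foobar' 'foobar event 1' 'unrelated' 'foobar event 2'; }
out=$(
  gen | {
    re='Session id: (.*)'
    while read -r; do
      if [[ $REPLY =~ $re ]]; then
        target=${BASH_REMATCH[1]}
        break
      fi
      printf '%s\n' "$REPLY"   # print the preamble before the id line
    done
    # hand the rest of the stream to grep, filtering for the extracted id
    [[ $target ]] && grep -F -e "$target"
  }
)
echo "$out"
```

Bash's read consumes the pipe one byte at a time, so the lines after the break are still available for grep to read.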
What you need is a little unusual, but you can do it this way:
first, go into a script session;
then use the shell as usual;
then start and interrupt your program;
then run grep over the typescript file.
Example:
$ script
$ ./a.out
Ctrl-Z
$ fg
$ grep NowIKnowThisString typescript
You could use a stream editor such as sed instead of grep. Here's an example of what I mean:
$ cat list
Name to look for: Mike
Dora 1
John 2
Mike 3
Helen 4
Here we find the name to look for in the first line and want to grep for it. Now piping the command to sed:
$ cat list | sed -ne '1{s/Name to look for: //;h}' \
> -e ':r;n;G;/^.*\(.\+\).*\n\1$/P;s/\n.*//;br'
Mike 3
Note: sed itself can take a file as a parameter, but since you're working with a stream rather than a file here, that's how you'd use it.
Of course, you'd need to modify the command for your case.
