Tee and Grep, filtering and appending the output of a script several times [duplicate] - linux

This question already has answers here:
How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)?
(6 answers)
Closed 2 years ago.
I have a script that outputs logs every 30 minutes. These logs are appended to a file that stores all logs; then I filter out the logs that contain the 'Maas' string and store those in another file.
(script output) | tee -a alldata.log | grep 'Maas' >> filterMaas.log
What I need to do is add more filters and output to several files. The following line doesn't work; the file filterCCSA.log stays empty.
(script output) | tee -a alldata.log | grep 'Maas' >> filterMaas.log | grep 'CCSA' >> filterCCSA.log
Any idea how I can make this work?

You could do something like this (in your attempt, the first grep's stdout is redirected into filterMaas.log, so nothing is left in the pipe for the second grep, which is why filterCCSA.log stays empty):
(script output) | tee >(grep 'Maas' >> filterMaas.log) >(grep 'CCSA' >> filterCCSA.log) >> alldata.log
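
As a side note, >( ... ) process substitution is a bash/zsh feature, so this needs to run under bash rather than plain sh. A sketch of the same pattern extended with one more filter (the 'XYZ' pattern and filterXYZ.log name are made up for illustration):

(script output) | tee >(grep 'Maas' >> filterMaas.log) \
                      >(grep 'CCSA' >> filterCCSA.log) \
                      >(grep 'XYZ' >> filterXYZ.log) >> alldata.log

tee writes a copy of the stream to each >( ... ) process and to its own stdout, which the final >> appends to alldata.log.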

Related

How to "tail -f" with "grep" save outputs to an another file which name is time varying?

STEP1
Like I said in the title, I would like to save the output of
tail -f example | grep "DESIRED"
to a different file.
I have tried
tail -f example | grep "DESIRED" | tee -a different
tail -f example | grep "DESIRED" >> different
but none of them works.
I have searched similar questions and read several experts suggesting buffering fixes, but I cannot use them.
Is there any other way I can do it?
STEP2
Once the above is done, I would like to make "different" (the filename from above) time-varying. I want to change its name every 30 minutes.
For example like
20221203133000
20221203140000
20221203143000
...
I have tried
tail -f example | grep "DESIRED" | tee -a $(date +%Y%m%d%H)$([ $(date +%M) -lt 30 ] && echo 00 || echo 30)00
The problem is that since I have not solved the first step, I could not test the second step. But I think this command will only create one file, named for the time I run the command. Could I please get some advice?
The code below should do what you want.
Some explanations: since you want bash to execute some "code" (in your case, dumping to a different file name), you need two things running in parallel: the tail + grep, and the code that decides where to dump.
To connect the two processes I use a named fifo (created with mkfifo): what is written to it by tail + grep (using > tmp_fifo) is read by the while loop (using < tmp_fifo). Then, once inside the while loop, you are free to output to whatever file name you want.
Note: without --line-buffered (as in your question) grep still works, but it waits until it has accumulated enough data (probably 8 KB) before writing to the file. So if "example" does not generate a lot of data, nothing is dumped until the buffer fills.
rm -f tmp_fifo
mkfifo tmp_fifo
(tail -f input | grep --line-buffered TEXT_TO_CHECK > tmp_fifo &)
# Open the fifo once for the whole loop; reopening it per line can drop data.
while IFS= read -r LINE; do
    CURRENT_NAME=$(date +%Y%m%d%H)
    # or any other code that determines which file to dump to ...
    echo "$LINE" >> "${CURRENT_NAME}"
done < tmp_fifo
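
If you also want the 30-minute bucket names from STEP2, the CURRENT_NAME line can reuse the rounding expression from your own attempt (a sketch, untested):

CURRENT_NAME=$(date +%Y%m%d%H)$([ "$(date +%M)" -lt 30 ] && echo 00 || echo 30)00

This produces names like 20221203133000 and 20221203140000, and since it is re-evaluated inside the loop, the target file switches automatically as lines keep arriving.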

Execute a grep within script | grep terminal output [duplicate]

This question already has answers here:
Assign output to variable in Bash [duplicate]
(2 answers)
Closed 2 years ago.
Can I execute a grep in my script below?
echo 5
echo 4
result='cat output.txt | grep flag'
echo $result
The script gets used like
./script | ./program > output.txt
The script is used as input for the program, and the output of the program gets put into output.txt, which I want to be able to grep instantly. At the moment, the script seems to finish without running the grep command.
I have the impression you are using single quotes where there should be backticks. Luckily there is something easier to use:
result=$(cat output.txt | grep "flag")
The $(some_command) construct captures the output of some_command.
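
As a side note, the cat is not needed here; grep can read the file directly, which behaves the same (a minimal sketch):

result=$(grep "flag" output.txt)
echo "$result"

Quoting $result in the echo preserves the newlines between multiple matching lines.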

Is it possible to use tail and grep in combination? [duplicate]

This question already has answers here:
How to 'grep' a continuous stream?
(13 answers)
Closed 8 years ago.
I am trying to tail a user in a production log.
Is it possible to use
tail -f grep "username"
Yes - you use a pipe, i.e.
tail -f <some filename> | grep 'username'
Yes, you can just use a pipe
tail -f fileName | grep username
The ack command, which is a grep-like text finder, has a --passthru flag that is designed specifically for this.
Since ack automatically color codes matches for you, you can use it to search the output of a tailed log file, and highlight the matches, but also see the lines that don't match.
tail -f error.log | ack --passthru whatever
All the lines of the tailed log will show up, but the matches will be highlighted.
ack is at http://beyondgrep.com/
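
If ack is not available, GNU grep can approximate the same passthru highlighting by adding an always-true alternative to the pattern (assumes GNU grep with color support):

tail -f error.log | grep --color=always -E 'whatever|$'

Every line matches the empty $ alternative, so all lines pass through, while real 'whatever' matches are highlighted.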
Actually, I have found it to be more efficient to use:
grep username filename | tail
(though note this prints the last matches and exits; it does not follow the file like tail -f).

How do I grep multiple lines (output from another command) at the same time?

I have a Linux driver running in the background that is able to return the current system data/stats. I view the data by running a console utility (let's call it dump-data). All the data is dumped every time I run dump-data. The output of the utility looks like this:
Output:
- A=reading1
- B=reading2
- C=reading3
- D=reading4
- E=reading5
...
- variableX=readingX
...
The list of readings returned by the utility can be really long. Depending on the scenario, certain readings would be useful while everything else would be useless.
I need a way to grep only the useful readings, whose names might have nothing in common, via a bash script. I.e., sometimes I'll need to collect A, D, E; and other times I'll need C, D, E.
I'm attempting to graph the readings over time to look for trends, so I can't run something like this:
# forgive my pseudocode
Loop
dump-data | grep A
dump-data | grep D
dump-data | grep E
End Loop
to collect A, D, E, as that would actually give me readings from 3 separate calls of dump-data, which would not be accurate.
If you want to save all the grep results in the same file, you can just join all the expressions into one:
grep -E 'expr1|expr2|expr3'
But if you want the results (for expr1, expr2 and expr3) in separate files, things get more interesting.
You can do this using tee >(command).
For example, here I process the same pipe with three different commands:
$ echo abc | tee >(sed s/a/_a_/ > file1) | tee >(sed s/b/_b_/ > file2) | sed s/c/_c_/ > file3
$ grep "" file[123]
file1:_a_bc
file2:a_b_c
file3:ab_c_
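Mapped onto the question, the same pattern would be something like this (A.log, D.log, E.log are illustrative names):

dump-data | tee >(grep A > A.log) >(grep D > D.log) | grep E > E.log

All three greps see the output of a single dump-data call.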
But the command seems too complex. I would rather save the dump-data results to a file and then grep it:
TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
dump-data > ${TEMP}
grep A ${TEMP}
grep B ${TEMP}
grep C ${TEMP}
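
To collect the readings over time, that can be wrapped in a loop so each iteration greps a single consistent snapshot (a sketch combining this answer with the pseudocode from the question; the 60-second interval is made up):

while true; do
    TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
    dump-data > "${TEMP}"       # one call per iteration, one consistent snapshot
    grep A "${TEMP}" >> A.log
    grep D "${TEMP}" >> D.log
    grep E "${TEMP}" >> E.log
    rm -f "${TEMP}"             # clean up the temp file
    sleep 60
done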
You can use dump-data | grep -E "A|D|E". Note the -E option of grep. Alternatively you could use egrep without the -E option.
you can simply use:
dump-data | grep -E 'A|D|E'
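
Given the "- name=value" output format shown in the question, anchoring the patterns avoids accidental matches on letters inside the reading values (a sketch assuming that exact format):

dump-data | grep -E '^- (A|D|E)='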
If your readings are in files rather than a stream, awk can also split the matches into per-input-file outputs:
awk '/MY PATTERN/{print > "matches-"FILENAME;}' myfile{1,3}
Thanks to Guru at Stack Exchange.

How can I print intermediate results from a pipeline to the screen? [duplicate]

This question already has answers here:
How can I gzip standard in to a file and also print standard in to standard out?
(4 answers)
Closed 7 years ago.
I'm trying to count the lines from a command and I'd also like to see the lines as they go by. My initial thought was to use the tee command:
complicated_command | tee - | wc -l
But that simply doubles the line count using GNU tee or copies output to a file named - on Solaris.
complicated_command | tee /dev/tty | wc -l
But keep in mind that if you put it in a script and redirect the output, it won't do what you expect.
The solution is to tee to the console directly as opposed to STDOUT:
tty=`tty`
complicated_command | tee $tty | wc -l
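
Another common variant sends the copy to stderr, which also survives stdout redirection (assumes /dev/stderr is available, as on Linux):

complicated_command | tee /dev/stderr | wc -l

The lines reach the terminal via stderr while wc -l counts them on stdout.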
