How can I print intermediate results from a pipeline to the screen? [duplicate] - linux

Duplicate of: How can I gzip standard in to a file and also print standard in to standard out?
I'm trying to count the lines from a command and I'd also like to see the lines as they go by. My initial thought was to use the tee command:
complicated_command | tee - | wc -l
But that simply doubles the line count with GNU tee, or copies the output to a file literally named - on Solaris.

complicated_command | tee /dev/tty | wc -l
But keep in mind that if you put it in a script and redirect the script's output, the copy written to /dev/tty still goes to the terminal rather than following the redirection (and under cron there may be no terminal at all).

The solution is to tee to the console directly, as opposed to STDOUT:
tty=$(tty)
complicated_command | tee "$tty" | wc -l
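For example, wrapped in a script (a sketch; count.sh and seq stand in for the real script and command):
#!/bin/sh
# count.sh - copies the lines to the terminal even when stdout is redirected
tty=$(tty)
seq 3 | tee "$tty" | wc -l
Running ./count.sh > result.txt shows 1, 2, 3 on the terminal, while result.txt receives only the count.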

Related

Execute a grep within script | grep terminal output [duplicate]

Duplicate of: Assign output to variable in Bash
Can I execute a grep in my script below?
echo 5
echo 4
result='cat output.txt | grep flag'
echo $result
The script gets used like
./script | ./program > output.txt
The script is used as input for the program, and the output of the program gets put into output.txt, which I want to be able to grep immediately. At the moment, the script seems to finish without running the grep command.
I have the impression you are using single quotes where it should be backticks. Luckily, there is something easier to use:
result=$(cat output.txt | grep "flag")
The $(some_command) form is used for capturing the output of some_command.
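For illustration, both substitution forms capture a command's output (the file name comes from the question; cat is dropped since grep can read the file itself):
result=`grep "flag" output.txt`      # legacy backtick form
result=$(grep "flag" output.txt)     # preferred form; nests and quotes more cleanly
echo "$result"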

Tee and Grep, filtering and appending the output of a script several times [duplicate]

Duplicate of: How can I send the stdout of one process to multiple processes using (preferably unnamed) pipes in Unix (or Windows)?
I have a script that outputs logs every 30 minutes. These logs are appended to a file that stores all the logs; then I filter the logs that contain the 'Maas' string and store those logs in another file.
(script output) | tee -a alldata.log | grep 'Maas' >> filterMaas.log
What I need to do is add more filters and output to several files. The following line doesn't work; the file filterCCSA.log ends up empty.
(script output) | tee -a alldata.log | grep 'Maas' >> filterMaas.log | grep 'CCSA' >> filterCCSA.log
Any idea how can make this work?
You could do something like this (your line above fails because >> filterMaas.log redirects the first grep's stdout to the file, so nothing is left for the second grep to read):
(script output) | tee >(grep 'Maas' >> filterMaas.log) >(grep 'CCSA' >> filterCCSA.log) >> alldata.log
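Note that this needs bash, since process substitution is not plain POSIX sh. As a quick sanity check, with printf standing in for the script (file names as in the question):
printf 'Maas one\nCCSA two\n' | tee >(grep 'Maas' >> filterMaas.log) >(grep 'CCSA' >> filterCCSA.log) >> alldata.log
alldata.log receives both lines; each filter file receives only its matching line.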

Tracking a program's progress reading through a file?

Is there a way to figure out where in a file a program is reading from? It seems like it might be doable with strace or dtrace.
To clarify the question and give motivation, say I have a 10GB log file and am counting the number of unique lines:
$ cat log.txt | sort | uniq | wc -l
Can I check where in the file cat currently is, effectively giving the progress of the command? Using lsof, I can't seem to get the offset of the last read, which I think is what would do the trick:
$ lsof log.txt
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cat 16021 erik 3r REG 0,22 13416118210 1078133219
Edit: I apologize, the example I gave is too narrow and misses the point. Ideally, for an arbitrary program, I would like to see where in the file reads are occurring (regardless of pipe).
You can do what you want with the progress command. It shows the progress of coreutils commands such as cat (and other programs) as they read through their files.
File and offset information is available in Linux in /proc/<PID>/fd and /proc/<PID>/fdinfo.
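A hedged sketch of both approaches on Linux (the PID is illustrative, taken from the lsof output above; the -c flag is progress's documented way to watch a named command):
# In another terminal, while 'cat log.txt | sort | uniq | wc -l' runs:
progress -c cat                  # show % progress for any running cat

# Or read the offset by hand: pos is the current byte offset into the file
PID=16021
for fd in /proc/"$PID"/fd/*; do
    case $(readlink "$fd") in
        */log.txt) grep '^pos:' "/proc/$PID/fdinfo/${fd##*/}" ;;
    esac
done
Dividing pos by the file size (the SIZE/OFF column in lsof) gives the fraction read so far.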
Instead of cat:
pv log.txt | sort | uniq | wc -l
Piping with pv (it can't see the file size through a pipe, so pass it with -s):
SIZE=$(ls -l log.txt | awk '{print $5}'); cat log.txt | sort | pv -s "$SIZE" | uniq | wc -l
If the example is truly your use case, then I'd recommend pipe viewer.

Is it possible to use tail and grep in combination? [duplicate]

Duplicate of: How to 'grep' a continuous stream?
I am trying to tail for a user in a production log.
Is it possible to use
tail -f grep "username"
Yes - you use a pipe, i.e.
tail -f <some filename> | grep 'username'
Yes, you can just use a pipe
tail -f fileName | grep username
The ack command, which is a grep-like text finder, has a --passthru flag that is designed specifically for this.
Since ack automatically color-codes matches for you, you can use it to search the output of a tailed log file, highlighting the matches while still seeing the lines that don't match.
tail -f error.log | ack --passthru whatever
All the lines of the tailed log will show up, but the matches will be highlighted.
ack is at http://beyondgrep.com/
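If ack is not available, plain grep can approximate the same pass-through highlighting (an aside, not from the original answer; 'whatever' is the placeholder pattern from above):
tail -f error.log | grep --color=always -E 'whatever|$'
The $ alternative matches the empty string at the end of every line, so nothing is filtered out, but only 'whatever' gets colored.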
Actually, I have found it to be more efficient to use:
grep username filename | tail
(Note that this shows the last matches already in the file rather than following new output the way tail -f does.)

How do I grep multiple lines (output from another command) at the same time?

I have a Linux driver running in the background that is able to return the current system data/stats. I view the data by running a console utility (let's call it dump-data) in a console. All data is dumped every time I run dump-data. The output of the utility is like below
Output:
- A=reading1
- B=reading2
- C=reading3
- D=reading4
- E=reading5
...
- variableX=readingX
...
The list of readings returned by the utility can be really long. Depending on the scenario, certain readings would be useful while everything else would be useless.
I need a way to grep only the useful readings, whose names might have nothing in common (via a bash script), i.e. sometimes I'll need to collect A, D, E, and other times C, D, E.
I'm attempting to graph the readings over time to look for trends, so I can't run something like this:
# forgive my pseudocode
while true; do
    dump-data | grep A
    dump-data | grep D
    dump-data | grep E
done
to collect A, D, E, as that would actually give me readings from three separate calls of dump-data, which would not be accurate.
If you want to save all the grep results in the same file, you can just join all the expressions into one:
grep -E 'expr1|expr2|expr3'
But if you want the results (for expr1, expr2 and expr3) in separate files, things get more interesting.
You can do this using tee >(command).
For example, here I process the same pipe with three different commands:
$ echo abc | tee >(sed s/a/_a_/ > file1) | tee >(sed s/b/_b_/ > file2) | sed s/c/_c_/ > file3
$ grep "" file[123]
file1:_a_bc
file2:a_b_c
file3:ab_c_
But that command seems too complex. I would rather save the dump-data results to a file and then grep it:
TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
dump-data > "${TEMP}"
grep A "${TEMP}"
grep B "${TEMP}"
grep C "${TEMP}"
rm -f "${TEMP}"
You can use dump-data | grep -E "A|D|E". Note the -E option of grep. Alternatively you could use egrep without the -E option.
You can simply use:
dump-data | grep -E 'A|D|E'
A related awk trick splits the matches into one output file per input file:
awk '/MY PATTERN/ {print > ("matches-" FILENAME)}' myfile{1,3}
Thanks to Guru at Stack Exchange.
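Applied to this question's output format, a single awk pass over one dump-data run could likewise split the wanted readings into separate files (a sketch; the patterns and file names are assumptions):
dump-data | awk '
    /^- A=/ {print > "A.log"}    # each wanted reading goes to its own file
    /^- D=/ {print > "D.log"}
    /^- E=/ {print > "E.log"}
'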
