How to pipe a process's output to a logger command and to the head command? - linux

What I'm trying to do is:
Start a process that outputs text continuously.
Pipe that output to two commands:
A logger script.
The head command, so I can save the first lines of the initial process output.
What I tried so far (unsuccessfully) is:
./myProgram | tee >(myLogger log.txt) | head > firstLines.txt
The problem is that myProgram exits as soon as head finishes: when head has read its lines and exits, tee gets SIGPIPE on its stdout and dies, and myProgram follows. Even if I use tee's -i option, I can't get myProgram to keep running.
Since the logger may append the incoming text to an existing file, executing
head log.txt > firstLines.txt
afterwards won't work in this case.

You can use awk as an alternative to do both:
./myProgram |
awk 'NR<=10{print > "firstLines.txt"} NR>10{close("firstLines.txt")} 1' > log.txt
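To see it in action, here is a minimal sketch with seq standing in for myProgram; firstLines.txt ends up with the first 10 lines and log.txt with all 100:
seq 1 100 | awk 'NR<=10{print > "firstLines.txt"} NR>10{close("firstLines.txt")} 1' > log.txt
wc -l firstLines.txt log.txt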

Like this maybe:
yes | awk 'FNR<4 {print >>"file"; close("file")} 1' | more
where yes stands in for your program, file is where the output you wanted from head goes, and more stands in for your logger.

Related

Why is this command not working? (pipe, redirection)

bash-3.2$ ls | grep Makefile > a.txt | cat a.txt
Why doesn't this work? The file "Makefile" does exist.
There is no output from the grep command, since you are redirecting it to a file: nothing travels through the last pipe. On top of that, all parts of a pipeline start concurrently, so cat a.txt can run before grep has finished writing the file. As per my comment, use && instead of that last |.
To do what you want, which is to save the output of a command to a file but still print it to stdout, use tee. (I also like to use ls -1 to make sure only one item is printed per line; this is not necessary, just force of habit.)
ls -1 | grep Makefile | tee a.txt
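For comparison, the && version suggested above waits for grep to finish before reading the file:
ls -1 | grep Makefile > a.txt && cat a.txt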

Make a variable containing all digits from the stdout of the command run directly before it

I am trying to make a bash shell script that launches some jobs on a queuing system. After a job is launched, the launch command prints the job ID to stdout, which I would like to capture and then use in the next command. The job ID digits are the only digits in the stdout message.
#!/bin/bash
./some_function
>>> this is some stdout text and the job number is 1234...
and then I would like to get to:
echo $job_id
>>> 1234
My current method uses tee to copy the original command's stdout to a tmp.txt file, and then builds the variable by grepping that file with a regex filter; something like:
echo 'pretend this is some dummy output from a function 1234' 2>&1 | tee tmp.txt
job_id=`cat tmp.txt | grep -o '[0-9]'`
echo $job_id
>>> pretend this is some dummy output from a function 1234
>>> 1 2 3 4
...but I get the feeling this is not really the most elegant or 'standard' way of doing this. What is a better way to do it?
And for bonus points, how do I remove the spaces from the grep+regex output?
You can extract the number directly in the pipeline with grep -o; matching one or more digits as a single anchored token (-E with [0-9]+$) gives one match, so there are no spaces to remove:
jobid=$(echo 'pretend this is some dummy output from a function 1234' 2>&1 |
tee tmp.txt | grep -Eo '[0-9]+$')
echo "$jobid"
1234
Something like this should work:
$ JOBID=`./some_function | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/'`
$ echo $JOBID
1234
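If you'd rather avoid the temporary file altogether, here is a minimal sketch using bash's built-in regex matching (this assumes, as stated in the question, that the job ID is the only run of digits in the output):
out=$(./some_function)                             # capture all of stdout
[[ $out =~ [0-9]+ ]] && job_id=${BASH_REMATCH[0]}  # first run of digits
echo "$job_id"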

bash standard output cannot be redirected into file

I am reading the 'Advanced Bash-Scripting Guide', and in Chapter 31 there is a problem I can't figure out:
tail -f /var/log/msg | grep 'error' >> logfile
Why does nothing get written to logfile?
Can you offer me an explanation?
Thank you in advance.
As @chepner comments, grep is using a larger buffer (perhaps 4 KB or more) for its stdout. Most of the standard utilities do this when writing to a pipe or a file; they typically switch to line buffering only when writing directly to the terminal.
You can use the stdbuf utility to force grep to do line buffering of its output:
tail -f /var/log/msg | stdbuf -oL grep 'error' >> logfile
As an easily observable demonstration of this effect, you can try the following two commands:
for ((i=0;;i++)); do echo $i; sleep 0.001; done | grep . | cat
and
for ((i=0;;i++)); do echo $i; sleep 0.001; done | stdbuf -oL grep . | cat
In the first command, the output from grep . (i.e. match all lines) will be buffered going into the pipe to cat. On my system the buffer appears to be about 4 KB. You will see the ascending numbers output in chunks as the buffer gets filled and then flushed.
In the second command, grep's output into the pipe to cat is line-buffered, so you should see terminal output for every line, i.e. more-or-less continuous output.
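If stdbuf isn't available, grep has its own flag for line buffering; the following should behave equivalently for the original pipeline:
tail -f /var/log/msg | grep --line-buffered 'error' >> logfile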

Perl script log to file, output lag

I have a Perl script which does some log file parsing and sometimes executes a bash command:
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
I start my Perl script like this: ./perlscript.pl > logfile.log
Now I do a tail on the logfile to watch the progress, but the output gets stuck every time at the line described above.
It stops there for some seconds and then continues. Why?
To profile the problem I wrapped it like this:
print `date`;
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
print `date`;
The output shows that the command does not consume a lot of time:
So 6. Okt 22:35:04 CEST 2013
So 6. Okt 22:35:04 CEST 2013
If I run the script without redirecting the output to a file, there is no lag.
Any idea why?
I haven't tried to duplicate your behaviour, but it might be a stdout buffering problem. Try this:
$| = 1;  # enable autoflush on STDOUT
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
Update
I have tried to duplicate the behaviour you observe: I've had to make some assumptions but I believe my suspicion was correct. Here I'm piping, but it's the same as redirecting to a file and tailing that file:
./test.pl | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'
Without $| = 1, output is buffered and aggregated:
2013-10-06 23:08:27 Saluton, mondo: /home/lserni/test.sh
2013-10-06 23:08:27
2013-10-06 23:08:27 Waiting 10s...
2013-10-06 23:08:27 Saluton denove!
With the modification, each line is printed as it is generated:
2013-10-06 23:09:09 Saluton, mondo: /home/lserni/test.sh
2013-10-06 23:09:09
2013-10-06 23:09:09 Waiting 10s...
2013-10-06 23:09:19 Saluton denove!
I expect that your script is doing something that takes a few seconds and that is not generating that messagePath; the output is simply delayed until Perl has a sizeable chunk of data to send along, giving the impression that it's that line that's stalling.
I forgot: the timing pipe comes from here.
In situations like yours, I've had some success using the unbuffer command. It runs a command in an environment that looks to the command like it's outputting to a tty so it doesn't buffer its output. I don't know how to apply it exactly in your case, so if you want to try it, you will have to experiment a little.
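For this case that might look like the following (unbuffer ships with the expect package; treat this as a sketch, not something I've tested against your script):
unbuffer ./perlscript.pl > logfile.log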

Why is no output shown when using grep twice?

Basically I'm wondering why this doesn't output anything:
tail --follow=name file.txt | grep something | grep something_else
You can assume that it should produce output; I have run another pipeline to confirm:
cat file.txt | grep something | grep something_else
It seems like you can't pipe the output of tail more than once!? Does anyone know what the deal is, and is there a solution?
EDIT:
To answer the questions so far: the file definitely has contents that should be matched by the grep. As evidence, if the grep is done like so:
tail --follow=name file.txt | grep something
Output shows up correctly, but if this is used instead:
tail --follow=name file.txt | grep something | grep something
No output is shown.
If at all helpful, I am running Ubuntu 10.04.
You might also run into a problem with grep buffering when inside a pipe.
i.e., you don't see the output from
tail --follow=name file.txt | grep something > output.txt
since grep will buffer its own output.
Use the --line-buffered switch for grep to work around this:
tail --follow=name file.txt | grep --line-buffered something > output.txt
This is useful if you want to get the results of the follow into the output.txt file as rapidly as possible.
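Alternatively, the stdbuf approach from the earlier answer on this page should work here too, and applies to any filter that buffers, not just grep:
tail --follow=name file.txt | stdbuf -oL grep something > output.txt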
Figured out what was going on here. It turns out that the command was working; it's just that the output takes a long time to reach the console (approx. 120 seconds in my case). This is because grep's standard output, when not a terminal, is flushed per block rather than per line. So instead of getting every line from the file as it was being written, I would get a giant block every two minutes or so.
It should be noted that this works correctly:
tail file.txt | grep something | grep something
It is following the file with --follow=name that is problematic.
For my purposes I found a way around it, what I was intending to do was capture the output of the first grep to a file, so the command would be:
tail --follow=name file.txt | grep something > output.txt
A way around this is to use the script command like so:
script -c 'tail --follow=name file.txt | grep something' output.txt
Script captures the output of the command and writes it to file, thus avoiding the second pipe.
This has effectively worked around the issue for me, and I have explained why the command wasn't working as I expected. Problem solved.
FYI, these other Stack Overflow questions are related:
Trick an application into thinking its stdin is interactive, not a pipe
Force another program's standard output to be unbuffered using Python
You do know that tail starts by default with the last ten lines of the file? My guess is everything the cat version found is well into the past. Try tail -n+1 --follow=name file.txt to start from the beginning of the file.
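Putting that together with the pipeline from the question (the buffering caveat from the other answers still applies):
tail -n+1 --follow=name file.txt | grep something | grep something_else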
Works for me on a Mac, without --follow=name:
bash-3.2$ tail delme.txt | grep po
position.bin
position.lrn
bash-3.2$ tail delme.txt | grep po | grep lr
position.lrn
grep pattern filename | grep pattern | grep pattern | grep pattern ...
