Filter followed tail to file using grep and redirect - linux

I want to get the output of
tail -f /var/log/apache2/error.log | grep "trace1"
into a file. But
tail -f /var/log/apache2/error.log | grep "trace1" > output.txt
does not work, while the first command gives an output in my terminal window as expected.
I guess it has to do with the follow parameter, because if I omit the -f, the output file is created as expected.
But why is this so and how can I achieve my goal?
Regards,
Axel

Can you please try:
tail -f /var/log/apache2/error.log | grep "trace1" | tee -a output.txt
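The reason the plain redirect seems not to work is buffering: when grep's stdout is a terminal it flushes after every line, but when it is a file or pipe it block-buffers, so nothing reaches output.txt until several kilobytes have accumulated (which may never happen with a quiet log). If you prefer a redirect over tee, GNU grep's --line-buffered flag forces a flush per line:
tail -f /var/log/apache2/error.log | grep --line-buffered "trace1" > output.txt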

Related

linux bash and grep bug?

Doing the following:
First console
touch /tmp/test
Second console
tail -f /tmp/test | grep propo | grep -v miles
Third console
echo propo >> /tmp/test
The second console should show "propo", but it doesn't show anything. If you run this in the second console instead:
tail -f /tmp/test | grep propo
and then do echo propo >> /tmp/test, it shows propo. But the grep -v filters out miles, not propo.
Why?
Test it in your own environment if you want; it looks obvious, but it doesn't work.
Most probably because the output of a command is fully buffered when it is piped to another command, not line buffered. Here the buffering happens in the first grep, whose stdout is a pipe rather than a terminal.
Use stdbuf -oL to force line buffering and grep --line-buffered for line buffered grep.
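For example, with stdbuf (part of GNU coreutils) the pipeline above becomes:
tail -f /tmp/test | stdbuf -oL grep propo | grep -v miles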
The problem is that grep does not use line buffering when its output is a pipe, so the output of the first grep sits in a block buffer. You can use grep --line-buffered:
tail -f /tmp/test | grep --line-buffered propo | grep -v miles

Redirecting tail output into a program

I want to send a program the most recent lines from a text file using tail as stdin.
First, I echo some input to the program that will be the same every time, then send in tail input from an input file, which should first be processed through sed. The following is the command line that I expect to work, but when the program runs it only receives the echo input, not the tail input.
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
However, the following works exactly as expected, printing everything out to the terminal:
echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat
So I tried another type of output, and again, while the echoed text appears, the tail text does not show up anywhere:
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') | tee out.txt
This made me think it is a problem with buffering, but I tried the unbuffer program and all other advice here (https://superuser.com/questions/59497/writing-tail-f-output-to-another-file) without results. Where is the tail output going and how can I get it to go into my program as expected?
The buffering problem was resolved when I prefixed the sed command with the following:
stdbuf -i0 -o0 -e0
Much preferable to unbuffer, which didn't even work for me. Dave M's suggestion of using sed's relatively new -u option also seems to do the trick.
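For reference, the full pipeline with the stdbuf prefix looks something like this ('some regex' stays as the question's placeholder):
(echo "new" && tail -f ~/inputfile 2> /dev/null | stdbuf -i0 -o0 -e0 sed -n -r 'some regex' && cat) | ./program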
One thing you may be getting confused by: | (pipeline) has higher precedence than && (sequential execution). So when you say
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
that is equivalent to
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') && cat) | ./program
So the cat isn't really doing anything, and the sed output is probably buffered. You can try using the -u option to sed to get unbuffered output:
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -u -r 'some regex')) | ./program
I believe some versions of sed default to -u when the output is a terminal and not when it is a pipe, so that may be the source of the difference you're seeing.
You can use the i command in sed (see the command list in the manpage for details) to insert text at the beginning:
tail -f inputfile | sed -e '1inew file' -e 's/this/that/' | ./program

grep and tee to identify errors during installation

In order to identify whether my installation has errors I should notice, I am running grep on the log file and writing the output with tee, because I need elevated permissions.
sudo grep -inw ${LOGFOLDER}/$1.log -e "failed" | sudo tee -a ${LOGFOLDER}/$1.errors.log
sudo grep -inw ${LOGFOLDER}/$1.log -e "error" | sudo tee -a ${LOGFOLDER}/$1.errors.log
The thing is that the file is created even if grep didn't find anything.
Is there any way I can create the file only if grep found a match?
Thanks
You may replace tee with awk; it won't create the file if there is nothing to write to it:
... | sudo awk "{print; print \$0 >> \"errors.log\";}"
But this feature of awk is rarely used. I'd rather remove the error file afterwards if it turned out empty:
test -s error.log || rm -f error.log
And, by the way, you may grep for multiple words simultaneously:
grep -E 'failed|error' ...
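Putting both suggestions together, the two grep/tee lines could collapse into something like this (an untested sketch reusing the question's variables; the awk variable out is my own naming, not from the original answer):
sudo grep -Einw ${LOGFOLDER}/$1.log -e "failed|error" | sudo awk -v out="${LOGFOLDER}/$1.errors.log" '{print; print >> out}'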

echo $variable in cron not working

I'm having trouble printing the result of the following when it is run by cron. I have a script named /usr/local/bin/test:
#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ARAW=`date +%y%m%d`
NAME=`hostname`
TODAY=`date '+%D %r'`
cd /directory/bar/foo/
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
echo "Resolve2 Backup" > /home/user/result.txt
echo " " >> /home/user/result.txt
echo "$VARR" >> /home/user/result.txt
mail -s "Result $TODAY" email#email.com < /home/user/result.txt
I configured it in /etc/cron.d/test to run daily at 1 AM:
00 1 * * * root /usr/local/bin/test
When I run it manually on the command line
# /usr/local/bin/test
I get the complete value. But when I let cron do the work, it never displays the part from echo "$VARR" >> /home/user/result.txt
Any ideas?
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
ls -ltr /path/to/dir will not include the directory in the filename part of the output. Then, you call ls again with this output, and this will look in your current directory, not in /path/to/dir.
In cron, your current directory is likely to be /, and in your manual testing, I bet your current directory is /path/to/dir
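A quick way to see the failure mode (a hypothetical illustration, not from the original post):
cd /                                # roughly the working directory cron gives you
name=$(ls -lrt /directory/bar/foo/ | tail -1 | awk '{print $8}')
ls -lrt "$name"                     # bare filename: this looks under /, not /directory/bar/foo/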
Here's another approach to finding the newest file in a directory that emits the full path name:
stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-
Requires GNU stat; check your man page for the correct invocation on your system.
I think your VARR invocation can be:
latest_dir=$(stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-)
interesting_files=$(ls -ltr "$latest_dir"/*something*)
Then, no need for a temp file:
{
echo "Resolve2 Backup"
echo
echo "$interesting_files"
} |
mail -s "Result $TODAY" email#email.com
Thanks for all your tips and responses. I solved my problem. The problem was the output of $8 and $9 in cron; I don't know which field gets read when the script runs under cron. I'm just a newbie at scripting, so sorry for my bad script =)

How many open files for each process running for a specific user in Linux

Running Apache and JBoss on Linux, sometimes my server halts unexpectedly, saying that the problem was Too Many Open Files.
I know that we might set a higher limit for nproc and nofile in /etc/security/limits.conf to fix the open-files problem, but I am trying to get better visibility, such as using watch to monitor them in real time.
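For reference, such entries take the form "domain type item value"; the numbers here are only illustrative:
apache soft nofile 8192
apache hard nofile 16384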
With this command line I can see how many open files per PID:
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
Output (column 1 is the number of open files; column 2 is the PID of a process owned by the user apache):
1 PID
1335 13880
1389 13897
1392 13882
If I could just add the watch command it would be enough, but the code below isn't working:
watch lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
You should put the whole command inside quotes, like this (each '\'' sequence closes the single-quoted string, adds a literal quote, and reopens it):
watch 'lsof -u apache | awk '\''{print $2}'\'' | sort | uniq -c | sort -n'
or you can put the command into a shell script, say test.sh, and then use watch:
chmod +x test.sh
watch ./test.sh
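where test.sh simply wraps the original pipeline:
#!/bin/sh
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n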
This command will tell you how many files Apache has opened:
ps -A x | grep apache | awk '{print $1}' | xargs -I '{}' ls /proc/{}/fd | wc -l
You may have to run it as root in order to access the process fd directories. It sounds like you've got a web application that isn't closing its file descriptors; I would focus my efforts on that area.
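If you want that count refreshed in real time, the same quoting trick from above applies; the [a] in the pattern is a common trick (my addition, not in the original) that stops grep from matching its own process:
watch 'ps ax | grep "[a]pache" | awk '\''{print $1}'\'' | xargs -I{} ls /proc/{}/fd 2>/dev/null | wc -l'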
