Redirecting tail output into a program - linux

I want to send a program the most recent lines from a text file using tail as stdin.
First, I echo to the program some input that will be the same every time, then send in tail output from an input file, which should first be processed through sed. The following is the command line that I expect to work, but when the program runs it only receives the echo input, not the tail input.
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
However, the following works exactly as expected, printing everything out to the terminal:
echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat
So I tried with another type of output, and again while the echoed text posted, the tail text does not appear anywhere:
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') | tee out.txt
This made me think it is a problem with buffering, but I tried the unbuffer program and all other advice here (https://superuser.com/questions/59497/writing-tail-f-output-to-another-file) without results. Where is the tail output going and how can I get it to go into my program as expected?

The buffering problem was resolved when I prefixed the sed command with the following:
stdbuf -i0 -o0 -e0
Much preferable to using unbuffer, which didn't even work for me. Dave M's suggestion of sed's relatively new -u option also seems to do the trick.
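For reference, the full pipeline with that fix would look roughly like this (a sketch based on the command above; 'some regex' is still a placeholder for the real sed script):
(echo "new" && tail -f ~/inputfile 2> /dev/null | stdbuf -i0 -o0 -e0 sed -n -r 'some regex' && cat) | ./program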

One thing you may be getting confused by -- | (pipeline) is higher precedence than && (consecutive execution). So when you say
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
that is equivalent to
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') && cat) | ./program
So the cat isn't really doing anything, and the sed output is probably buffered a bit. You can try using the -u option to sed to get it to use unbuffered output:
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -u -r 'some regex')) | ./program
I believe some versions of sed default to -u when the output is a terminal and not when it is a pipe, so that may be the source of the difference you're seeing.
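A quick way to observe the difference, assuming the input file is still being appended to:
tail -f ~/inputfile | sed -n -r 'some regex' | cat     # sed block-buffers when writing to a pipe, so output lags
tail -f ~/inputfile | sed -n -u -r 'some regex' | cat  # with -u, sed flushes after every line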

You can use the i command in sed (see the command list in the manpage for details) to do the inserting at the beginning:
tail -f inputfile | sed -e '1inew file' -e 's/this/that/' | ./program

Related

Linux: Tail -f multiple options

I want to add multiple tail scripts in one.
First one:
tail -f /var/script/log/script-log.txt | if grep -q "Text1"; then echo "0:$?:AAC32 ONLINE"
fi
I want to add 5 more lines, each with a different word. Is this possible?
else if, if etc. etc.
Thanks!
tail -f /var/script/log/script-log.txt | if grep -E "Text1|Text2|Text3"; then echo "0:$?:AAC32 ONLINE"; fi
In your case it's enough to use the logical AND operator:
tail -f /var/script/log/script-log.txt | grep -q "text1\|text2\|text3" && echo "0:$?:AAC32 ONLINE"
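If each word should produce a different message, one option (a sketch, not from the original answer; the device names other than AAC32 are made-up placeholders) is to read the stream line by line and dispatch with a case statement:
tail -f /var/script/log/script-log.txt | while read -r line; do
    case "$line" in
        *Text1*) echo "0:AAC32 ONLINE" ;;
        *Text2*) echo "0:BBC01 ONLINE" ;;
        *Text3*) echo "0:CCD02 ONLINE" ;;
    esac
done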
#!/bin/sh
PIPENAME="`mktemp -u "/tmp/something-XXXXXX"`"
mkfifo -m 600 "$PIPENAME"
tail -f /tmp/log.txt >"$PIPENAME" &
while read line < "$PIPENAME"
do
    echo "$line" # Whatever you want goes here
done
rm -f "$PIPENAME"
If you want something Bash-specific, you can use the -u option to read, and then you can rm the named pipe before the loop even starts, which does a better job of guaranteeing that things are left clean when you're done.
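A rough sketch of that Bash-specific variant (assuming the same log file; file descriptor 3 is an arbitrary choice):
#!/bin/bash
PIPENAME="$(mktemp -u /tmp/something-XXXXXX)"
mkfifo -m 600 "$PIPENAME"
tail -f /tmp/log.txt >"$PIPENAME" &
exec 3<"$PIPENAME"  # open the FIFO for reading on fd 3
rm -f "$PIPENAME"   # the open descriptor keeps it usable, so nothing is left behind
while read -r -u 3 line
do
    echo "$line" # Whatever you want goes here
done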

how to use && with grep in bash

I want to check if multiple lines in a file exist in bash.
so for that I use grep -q which works with only one line:
if grep -q string1 "/path/to/file";then
echo 'exists'
else
echo 'does not exist'
fi
I tried many things in various combinations, for example:
if grep -q [ string1 ] && grep -q [ string2 ] "path/to/file";then
I also tried it with -E:
grep -E 'pattern1' filename | grep -E 'pattern2'
but nothing seems to work. Any ideas?
Rather than running multiple grep commands you can use this gnu-awk command to assert presence of multiple strings in a file:
awk -v RS='\\Z' '/string1/ && /string2/ && /string3/{e=1} END{exit !e}' file &&
echo 'exists' || echo 'does not exist'
RS='\Z' makes awk read all the input as a single record
Using && between the multiple search terms makes sure all the search words exist in the input file
This will print exists only if all 3 search terms exist in the input file.
since @iruvar hasn't posted his comment as an answer, i'll put it here:
grep -q string_1 file && grep -q string_2 file
now, here is my contribution. is @anubhava's more computationally complex awk answer, which reads the file only once, any faster than @iruvar's simpler answer, which reads the file three times?
awk 11.730 s
grep && grep 0.258 s
no.
this surely will depend on the speed of the filesystem vs the cpu, and on how much caching goes on, but on my system, which is probably a typical B+/A- workstation, grep kw1 file && grep kw2 file && grep kw3 file is ~50x as fast as @anubhava's awk solution. this held true both on ssd and spindle raid. (details: test file was 5,000,000 lines, 160M, and had kw1 on the first line, kw2 on the 2.5 millionth, and kw3 on the 5 millionth.)
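for anyone who wants to reproduce this, a rough sketch of how such a test file could be built (the keyword names and padding are assumptions matching the description above):
awk 'BEGIN {
    print "kw1"
    for (i = 2; i < 2500000; i++) print "padding line " i
    print "kw2"
    for (i = 2500001; i < 5000000; i++) print "padding line " i
    print "kw3"
}' > testfile.txt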
some easy optimization is possible, for example, if you can solve your problem by matching whole lines, do so (with grep -x); it's twice as fast in this case.
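for example, the whole-line version of the grep && grep answer would be (assuming each keyword occupies an entire line):
grep -qx kw1 file && grep -qx kw2 file && grep -qx kw3 file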
for many (e.g., >1,000) files, it is faster to use grep -l and xargs:
grep -l kw1 *.txt | xargs grep -l kw2 | xargs grep -q kw3
as opposed to a loop:
for f in *.txt; do
    grep -q kw1 $f && grep -q kw2 $f && grep -q kw3 $f
done
with the same test file, grep -l | xargs grep took 0.258 s, just like grep && grep. with two test files, it was still no faster than grep && grep. with 2000 test files of 5,000 lines each, none of which contained any matches, grep -l | xargs grep was ~10x as fast as grep && grep.
There are a couple of ambiguities in your question, but assuming you want pattern_1 and pattern_2 to exist in a file (not necessarily on the same line), then you can do this:
for file in *; do
    egrep -q pattern_1 "$file" && egrep -q pattern_2 "$file" && echo "$file"
done
With grep -P you can match multiple patterns on the same line:
grep -P '(?=.*string1)(?=.*string2)' file
The above will print lines that match both string1 and string2.
(?=...) is a positive lookahead, which matches a pattern without making it part of the match.
And -z will slurp the whole file:
% seq 1 100 | grep -qzP '(?=.*1)(?=.*5)'; echo $?
0
% seq 1 100 | grep -qzP '(?=.*1)(?=.*a)'; echo $?
1
You can do it like this:
if grep -q 'string1' /path/to/file; then
    if grep -q 'string2' /path/to/file; then
        echo exists
    else
        echo 'does not exist'
    fi
else
    echo 'does not exist'
fi
Or:
grep -q 'string1' /path/to/file &&
grep -q 'string2' /path/to/file &&
echo exists ||
echo 'does not exist'
You can use grep's -q (quiet) option for the test:
if grep -q string1 "/path/to/file" && grep -q string2 "/path/to/file"; then
    echo 'exists'
else
    echo 'does not exist'
fi

Concatenating xargs with the use of if-else in bash

I've got two test files, namely ttt.txt and ttt2.txt, the content of which is shown below:
#ttt.txt
(132) 123-2131
543-732-3123
238-3102-312
#ttt2.txt
1
2
3
I've already tried the following commands in bash and it works fine:
if grep -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt1.txt ; then echo "found"; fi
# with output 'found'
if grep -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt2.txt ; then echo "found"; fi
But when I combine the above command with xargs, it complains error '-bash: syntax error near unexpected token `then''. Could anyone give me some explanation? Thanks in advance!
ll | awk '{print $9}' | grep ttt | xargs -I $ if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" $; then echo "found"; fi
$ is a special character in bash (it marks variables) so don't use it as your xargs marker, you'll only get confused.
The real problem here though is that you are passing if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" $ as the argument to xargs, and then the remainder of the line is being treated as a new command, because it breaks at the ;.
You can wrap the whole thing in a sub-invocation of bash, so that xargs sees the whole command:
$ ll | awk '{print $9}' | grep ttt | xargs -I xx bash -c 'if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" xx; then echo "found"; fi'
found
Finally, ll | awk '{print $9}' | grep ttt is a needlessly complicated way of listing the files that you're looking for. You actually don't need any of the code above; just do this:
$ if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt*; then echo "found"; fi
found
Alternatively, if you want to process each file in turn (which you don't need here, but you might want when this gets more complicated):
for file in ttt*
do
    if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" "$file"
    then
        echo "found"
    fi
done

Executing a string as a command in bash that contains pipes

I'm trying to list some ftp directories. I can't work out how to make bash execute a command that contains pipes correctly.
Here's my script:
#!/bin/sh
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
    cmd='echo "ls /mydir/'"$d"'/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1'
    $cmd
done
This just outputs:
"ls /mydir/dir1/*.tar*" | sftp -b - -i ~/mykey user#example.com 2>&1 | tail -n1
"ls /mydir/dir2/*.tar*" | sftp -b - -i ~/mykey user#example.com 2>&1 | tail -n1
How can I make bash execute the whole string including the echo? I also need to be able to parse the output of the command.
I don't think that you need to be using the -b switch at all. It should be sufficient to specify the commands that you would like to execute as a string:
#!/bin/bash
dirs=("/dir1" "/dir2")
for d in "${dirs[@]}"
do
    printf -v d_str '%q' "$d"
    sftp -i ~/mykey user@example.com "ls /mydir/$d_str/*.tar*" 2>&1 | tail -n1
done
As suggested in the comments (thanks @Charles), I've used printf with the %q format specifier to protect against characters in the directory name that may be interpreted by the shell.
First, you need to use /bin/bash as the shebang in order to use Bash arrays.
Then remove echo and use command substitution to capture the output:
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
    output=$(ls /mydir/"$d"/*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
    echo "$output"
done
I will, however, advise you not to use ls's output in the sftp command. You can replace that with:
output=$(echo "/mydir/$d/"*.tar* | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1)
Don't store the command in a string; just use it directly.
#!/bin/bash
declare -a dirs=("/dir1" "/dir2") # ... and lots more
for d in "${dirs[@]}"
do
    echo "ls /mydir/$d/*.tar*" | sftp -b - -i ~/mykey user@example.com 2>&1 | tail -n1
done
Usually, people store the command in a string so they can both execute it and log it, as a misguided form of factoring. (I'm of the opinion that it's not worth the trouble required to do correctly.)
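If you really do want to factor the command out, a safer pattern (a sketch, not part of this answer) is to store it as a Bash array rather than a string, since array expansion preserves word boundaries; here $d is the loop variable from above:
sftp_cmd=(sftp -i ~/mykey user@example.com)
echo "ls /mydir/$d/*.tar*" | "${sftp_cmd[@]}" 2>&1 | tail -n1  # execute it
printf '%q ' "${sftp_cmd[@]}"; echo                            # log a copy-pasteable form of it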
Note that sftp reads from standard input by default, so you can just use
echo "ls ..." | sftp -i ~/mykey user#example.com 2>&1 | tail -n1
You can also use a here document instead of a pipeline.
sftp -i ~/mykey user@example.com 2>&1 <<EOF | tail -n1
ls /mydir/$d/*.tar.*
EOF

dynamically run linux shell commands

I have a command that should be executed by a shell script.
Actually the command itself does not matter; the only important things are the subsequent command execution and the right escaping of the critical parts.
The command that is usually run directly in PuTTY is something like this (maybe with some additional flags for ls):
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
but now I have a batch of such commands, so I would like to execute them in a loop,
like
for i in {0..100}
do
    str=str$i
    ${!str}
done
where str is:
str0="rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`"
str1="rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`"
and that gives me a lot of headache, because the execution done by ${!str} breaks the quoting and the inline shell commands between the `...` marks.
my_rm() { rm -r `ls /test/$1 | awk ... | grep ... `; }
for i in `whatevr`; do
    my_rm $i
done;
Getting this right is surprisingly tricky, but it can be done:
for i in $(seq 0 100)
do
    str=str$i
    eval "eval \"\$$str\""
done
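Equivalently, keeping the indirect expansion from the question, a single eval also works (a sketch, assuming Bash):
for i in {0..100}
do
    str=str$i
    eval "${!str}" # expand the variable named by $str, then evaluate its contents
done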
You can also do:
for i in {0..10}
do
<whatevercommand>
done
It's actually simpler to place them in arrays and use glob patterns:
#!/bin/bash
shopt -s nullglob
DIRS=("/test/parse_first/" "/test/parse_second/")
for D in "${DIRS[@]}"; do
    for T in "$D"/*trash*; do
        rm -r -- "$T"
    done
done
And since rm accepts multiple arguments, you don't actually need the extra loop:
for D in "${DIRS[@]}"; do
    rm -r -- "$D"/*trash*
done
UPDATE:
#!/bin/bash
readarray -t COMMANDS <<'EOF'
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`
EOF
for C in "${COMMANDS[@]}"; do
    eval "$C"
done
Or you could just read commands from another file:
readarray -t COMMANDS < somefile.txt
