BASH: can grep on command line, but not in script - linux

I've done this a million times already, but this time it's not working.
I try to grep "$TITLE" from a file. On the command line it works and the "$TITLE" variable is not empty, but when I run the script it finds nothing.
*The title contains more than one word.
echo "$TITLE"
cat PAGE.$TEMP.2 | grep "$TITLE"
What I've tried:
echo "cat PAGE.$TEMP.2 | grep $TITLE"
to see whether the title is not empty and the file name is actually there.

Are you sure that $TITLE does not have leading or trailing whitespace which is not in the file? An unquoted expansion would strip that whitespace out before execution, so grep would not see it.
For example, with a file containing 'Line one':
/home/user1> TITLE=' one '
/home/user1> grep "$TITLE" text.txt
/home/user1> cat text.txt | grep $TITLE
Line one
Try echo "<$TITLE>", or echo "$TITLE" | od -xc, which should enable you to spot errant characters.
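If stray whitespace does turn out to be the problem, one way to strip it before the grep is bash parameter expansion (a sketch, reusing the variable names from the question):
TITLE="${TITLE#"${TITLE%%[![:space:]]*}"}"   # strip leading whitespace
TITLE="${TITLE%"${TITLE##*[![:space:]]}"}"   # strip trailing whitespace
grep "$TITLE" "PAGE.$TEMP.2"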

This command
echo "cat PAGE.$TEMP.2 | grep $TITLE"
echoes a string that starts with 'cat'. It does not run a command. You would want
echo "$( cat PAGE.$TEMP.2 | grep $TITLE )"
although that is identical in functionality to the simpler
cat PAGE.$TEMP.2 | grep $TITLE
And as pointed out by others, there is no need to pipe a single file using cat; grep can read from files just fine:
grep "$TITLE" "PAGE.$TEMP.2"
(Your default behavior should be to quote parameter expansions, unless you can show it is incorrect to do so.)
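To see why that matters here, a small sketch with a hypothetical two-word title:
TITLE='Some Page'
grep $TITLE "PAGE.$TEMP.2"     # word splitting: searches for 'Some' in a file named 'Page' and in PAGE.$TEMP.2
grep "$TITLE" "PAGE.$TEMP.2"   # searches for the whole phrase 'Some Page' in PAGE.$TEMP.2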

Works for me:
~> cat test.dat
abc
cda
xyz
~> export GRP=cda
~> cat test.dat | grep $GRP
cda
Edit:
Also the proper way to use grep is:
~> grep $GRP test.dat

Related

sed works differently for a tail -f vs tail?

I have a log file whose records are separated by \n, with multi-line logs (ones containing SQL) being separated by \r\n. In order to pull the SQL statements out of the file I need to convert the \r\n to a space. Not knowing sed, I googled and found a good solution that works very well, but it fails when I switch to tail -f.
eg. these work:
tail -1000 /mylogfile.log | sed -e ':a;N;$!ba;s/\r\n/ /g' | grep "Executing command"
cat /mylogfile.log | sed -e ':a;N;$!ba;s/\r\n/ /g' | grep "Executing command"
but this returns no data at all
tail -f /mylogfile.log | sed -e ':a;N;$!ba;s/\r\n/ /g' | grep "Executing command"
EDIT: For the person who added "This question already has an answer here:": no, that does not answer the question at all. First, that other question wasn't even resolved for the person who asked it. Second, it talks only about grep; the problem is with sed. I can have 10 greps and it still works fine if I switch from sed to perl, e.g.
tail -f /mylog.log | perl -ne 's/\r\n/ /g; print;' | grep "Executing command" | grep -vi ANALYZE | grep -vi DESCRIBE | grep -vi "SHOW PARTITION"
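A likely explanation: the :a;N;$!ba loop keeps appending input lines until the last one ($ only matches at end of input), so that sed program prints nothing until the stream ends, and tail -f never ends. A streaming sketch that joins \r-terminated lines as they arrive (the fflush() and --line-buffered calls are there to defeat pipe buffering; adjust as needed):
tail -f /mylogfile.log |
  awk '{ if (sub(/\r$/, " ")) printf "%s", $0; else { print; fflush() } }' |
  grep --line-buffered "Executing command"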

using sed to set a variable works on command line, but not bash script

I have looked quite a bit for answers but I am not finding any suggestions that have worked so far.
On the command line, this works:
$ myvar=$( cat -n /usr/share/dict/cracklib-small | grep $myrand | sed -e "s/$myrand//" )
$ echo $myvar
commonness
However, inside a bash script the exact same lines just echo out a blank line.
Notes: $myrand is a number, like 10340, generated with $RANDOM.
cat prints out a dictionary with line numbers.
grep grabs the line with $myrand in it; e.g. 10340 commonness.
sed is intended to remove the $myrand part of the line and replace it with nothing. Here is my sample script:
#!/bin/bash
# prints out a random word
myrand=$RANDOM
export myrand
myword=$( cat -n /path/to/dict/cracklib-small | grep myrand | sed -e "s/$myrand//g" <<<"$myword" )
echo $myword
Your command line code is running:
grep $myrand
Your script is running:
grep myrand
These are not the same thing; the latter is looking for a word that contains "myrand" within it, not a random number.
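For reference, a minimal corrected sketch of the original pipeline (same dictionary path as the question; -w is my addition so the random number only matches the line-number field, and note that $RANDOM can exceed the number of lines, in which case nothing is found):
#!/bin/bash
# prints out a random word
myrand=$RANDOM
myword=$( cat -n /path/to/dict/cracklib-small | grep -w "$myrand" | sed -e "s/$myrand//" )
echo "$myword"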
By the way -- I'd suggest a different way to get a random line. If you have GNU coreutils, the shuf tool is built-to-purpose:
myword=$(shuf -n 1 /path/to/dict/cracklib-small)
#!/bin/bash
# prints out a random word
myrand=$RANDOM
export myrand
myword=$( cat -n /path/to/dict/cracklib-small | grep myrand | sed -e "s/$myrand//g" <<<"$myword" )
echo $myword
Where is the $ sign in grep myrand?
You must put in some work before posting it here.

passing grep into a variable in bash

I have a file named email.txt like these one :
Subject:My test
From:my email <myemail#gmail.com>
this is third test
I want to take out only the email address from this file using a bash script. So I put this in my bash script, named myscript:
#!/bin/bash
file=$(myscript)
var1=$(awk 'NR==2' $file)
var2=$("$var1" | (grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b'))
echo $var2
But I failed to run this script. When I run this command manually in bash, I can obtain the email address:
echo $var1 | grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b'
I need to store the email address in a variable so I can use it in another function. Can someone show me how to solve this problem?
Thanks.
I think this is an overly complicated way to go about things, but if you just want to get your script to work, try this:
#!/bin/bash
file="email.txt"
var1=$(awk 'NR==2' $file)
var2=$(echo "$var1" | grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b')
echo $var2
I'm not sure what file=$(myscript) was supposed to do, but on the next line you want a file name as argument to awk, so you should just assign email.txt as a string value to file, not execute a command called myscript. $var1 isn't a command (it's just a line from your text file), so you have to echo it to give grep anything useful to work with. The additional parentheses around grep are redundant.
What is happening is this:
var2=$("$var1" | (grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b'))
^^^^^^^ Execute the program named (what is in variable var1).
You need to do something like this:
var2=$(echo "$var1" | grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b')
or even
var2=$(awk 'NR==2' $file | grep -Eio '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+\.[A-Z]{2,4}\b')
There are very helpful flags for bash: -xv
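For example (script name is hypothetical), they can be enabled for a single run or from inside the script:
bash -xv myscript.sh    # -x traces each command as it runs, -v echoes input lines as read
# or, near the top of the script:
set -xv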
The line with
var2=$("$var1" | (grep...
should be
var2=$(echo "$var1" | (grep...
Also, my version of grep doesn't have the -o flag.
And since grep patterns are "greedy", even when the following code runs, its output is not exactly what you want.
#!/bin/bash -xv
file=test.txt
var1=$(awk 'NR==2' $file)
var2=$(echo "$var1" | (grep -Ei '\b[A-Z0-9._%+-]+#[A-Z0-9.-]+.[A-Z]{2,4}\b'))
echo $var2
Use Bash parameter expansion,
var2="${var1#*:}"
There's a cruder way:
cat $file | grep '#' | tr '<>' '\012\012' | grep '#'
That is, extract the line(s) with # signs, turn the angle brackets into newlines, then grep again for anything left with an # sign.
Refine as needed...

Returning Numbers From Text File - BASH

I have a command that gets the current SVN revision and stores it in a file; is there any way I can select the "53413" from the file to use elsewhere?
Revision: 53413
Thanks
echo "Revision: 53413" | cut -d " " -f2
cut usage: Using space as delimiter, select the second field.
This is a bit more precise, in case the file contains more than one line of data:
rev=`awk '$1=="Revision:"{print $2}' <filename>`
Then, you can use the ${rev} elsewhere in your bash script.
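If the file really holds just that single Revision line, plain read works too (a sketch; "filename" is the same placeholder as above):
read -r _ rev < filename    # 'Revision:' goes into _, '53413' ends up in rev
echo "${rev}"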
You could use grep:
echo "Revision: 53413" | grep -o -P "\d+"
Or if your file has more lines you could use:
cat file | grep Revision | grep -o -P "\d+"
With file data containing
dddd 2345
try the following lines:
$ REV=`cat data| awk '{print $2}' `
$ echo $REV
Output is
2345

xargs with multiple arguments

I have a source input, input.txt
a.txt
b.txt
c.txt
I want to feed these input into a program as the following:
my-program --file=a.txt --file=b.txt --file=c.txt
So I try to use xargs, but with no luck.
cat input.txt | xargs -i echo "my-program --file="{}
It gives
my-program --file=a.txt
my-program --file=b.txt
my-program --file=c.txt
But I want
my-program --file=a.txt --file=b.txt --file=c.txt
Any idea?
Don't listen to all of them. :) Just look at this example:
echo argument1 argument2 argument3 | xargs -l bash -c 'echo this is first:$0 second:$1 third:$2'
Output will be:
this is first:argument1 second:argument2 third:argument3
None of the solutions given so far deal correctly with file names containing spaces. Some even fail if the file names contain ' or ". If your input files are generated by users, you should be prepared for surprising file names.
GNU Parallel deals nicely with these file names and gives you (at least) 3 different solutions. If your program takes 3 and only 3 arguments then this will work:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo a2.txt; echo b2.txt; echo c2.txt;) |
parallel -N 3 my-program --file={1} --file={2} --file={3}
Or:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo a2.txt; echo b2.txt; echo c2.txt;) |
parallel -X -N 3 my-program --file={}
If, however, your program takes as many arguments as will fit on the command line:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo d1.txt; echo e1.txt; echo f1.txt;) |
parallel -X my-program --file={}
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
How about:
echo $'a.txt\nb.txt\nc.txt' | xargs -n 3 sh -c '
echo my-program --file="$1" --file="$2" --file="$3"
' argv0
It's simpler if you use two xargs invocations: the 1st to transform each line into --file=..., the 2nd to actually do the xargs thing:
$ cat input.txt | xargs -I# echo --file=# | xargs echo my-program
my-program --file=a.txt --file=b.txt --file=c.txt
You can use sed to prefix --file= to each line and then call xargs:
sed -e 's/^/--file=/' input.txt | xargs my-program
Here is a solution using sed for three arguments, but is limited in that it applies the same transform to each argument:
cat input.txt | sed 's/^/--file=/g' | xargs -n3 my-program
Here's a method that will work for two args, but allows more flexibility:
cat input.txt | xargs -n 2 | xargs -I{} sh -c 'V="{}"; my-program -file=${V% *} -file=${V#* }'
I stumbled on a similar problem and found a solution which I think is nicer and cleaner than those presented so far.
The syntax for xargs that I ended up with would be (for your example):
xargs -I X echo --file=X
with a full command line being:
my-program $(cat input.txt | xargs -I X echo --file=X)
which will work as if
my-program --file=a.txt --file=b.txt --file=c.txt
was done (providing input.txt contains data from your example).
Actually, in my case I needed to first find the files and also needed them sorted so my command line looks like this:
my-program $(find base/path -name "some*pattern" -print0 | sort -z | xargs -0 -I X echo --files=X)
A few details that might not be clear (they were not for me):
some*pattern must be quoted since otherwise the shell would expand it before passing it to find.
-print0, then -z, and finally -0 use null-separation to ensure proper handling of files with spaces or other weird names.
Note however that I didn't test it deeply yet. Though it seems to be working.
xargs doesn't work that way. Try:
myprogram $(sed -e 's/^/--file=/' input.txt)
It's because echo prints a newline. Try something like
echo my-program `xargs --arg-file input.txt -i echo -n " --file "{}`
I was looking for a solution for this exact problem and came to the conclusion of coding a script in the middle.
To transform the standard output for the next example, use the -n option with a '\n'-delimited input.
example:
user#mybox:~$ echo "file1.txt file2.txt" | xargs -n1 ScriptInTheMiddle.sh
Inside ScriptInTheMiddle.sh:
#!/bin/bash
var1=`echo $1 | cut -d ' ' -f1 `
var2=`echo $1 | cut -d ' ' -f2 `
myprogram "--file1="$var1 "--file2="$var2
For this solution to work you need a space between the arguments file1.txt and file2.txt (or whatever delimiter you choose). One more thing: inside the script, make sure you check -f1 and -f2, as they mean "take the first word" and "take the second word" depending on where the first delimiter is found (delimiters could be ' ', ';', '.', whatever you wish, between single quotes).
Add as many parameters as you wish.
Problem solved using xargs, cut , and some bash scripting.
Cheers!
If you want to drop by, I have some useful tips at http://hongouru.blogspot.com
Actually, it's relatively easy:
... | sed 's/^/--prefix=/g' | xargs echo | xargs -I PARAMS your_cmd PARAMS
The sed 's/^/--prefix=/g' is optional, in case you need to prefix each param with some --prefix=.
The xargs echo turns the list of param lines (one param in each line) into a list of params in a single line and the xargs -I PARAMS your_cmd PARAMS allows you to run a command, placing the params where ever you want.
So cat input.txt | sed 's/^/--file=/g' | xargs echo | xargs -I PARAMS my-program PARAMS does what you need (assuming all lines within input.txt are simple and qualify as a single param value each).
There is another nice way of doing this, if you do not know the number of files upfront:
my-program $(find . -name '*.txt' -printf "--file=%p ")
Nobody has mentioned echoing out from a loop yet, so I'll put that in for completeness' sake (it would be my second approach, the sed one being the first):
for line in $(< input.txt) ; do echo --file=$line ; done | xargs echo my-program
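A while-read variant of the same idea (just a sketch; like the loop above, the later xargs still splits on whitespace, so file names with spaces remain a problem):
while IFS= read -r line; do
  printf -- '--file=%s\n' "$line"
done < input.txt | xargs echo my-program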
Old, but this is a better answer:
cat input.txt | gsed "s/\(.*\)/\-\-file=\1/g" | tr '\n' ' ' | xargs my_program
# i like clean one liners
gsed is just GNU sed, to ensure the syntax matches across versions; brew install gsed, or just use sed if you're on GNU/Linux already...
test it:
cat input.txt | gsed "s/\(.*\)/\-\-file=\1/g" | tr '\n' ' ' | xargs echo my_program
