Difference between awk -FS and awk -f in shell scripting - Linux

I am new to shell scripting and I'm confused about the difference between the awk -FS and awk -f options. I've tried reading multiple pages on the difference between these two but was not able to understand it clearly. Kindly help.
Here is an example:
Let's say a text file, data.txt, contains the following:
S.No Product Qty Price
1-Pen-2-10
2-Pencil-1-5
3-Eraser-1-2
Now, when I try to use the following command:
$ awk -f'-' '{print $1,$2} data.txt
I get the below output:
1 Pen
2 Pencil
3 Eraser
But when I use the command:
$ awk -FS'-' '{print $1,$2} data.txt
the output is:
1-Pen-2-10
2-Pencil-1-5
3-Eraser-1-2
I don't understand what difference using -FS makes. Could somebody explain what exactly happens with these two commands? Thanks!

You are more confused than you think. There is no -FS.
FS is a variable that contains the field separator.
-F is an option that sets FS to its argument.
-f is an option whose argument is the name of a file that contains the script to execute.
The scripts you posted would have produced syntax errors, not the output you say they produced, so I don't know what to tell you...

-FS is not an argument to awk. -F is, as is -f.
The -F argument tells awk what value to use for FS (the field separator).
The -f argument tells awk to use its argument as the script file to run.
This command (I fixed your quoting):
awk -f'-' '{print $1,$2}' data.txt
tells awk to use standard input (that's what - means) for its argument. This should hang when run in a terminal. And should be an error after that as awk then tries to use '{print $1,$2}' as a filename to read from.
This command:
awk -FS'-' '{print $1,$2}' data.txt
tells awk to use S- as the value of FS, which you can see by running this command:
awk -FS'-' 'BEGIN {print "["FS"]"}'
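Putting the pieces together, here is a sketch of the behaviour with the quoting fixed, using the sample data from the question:

```shell
# Recreate the sample file from the question
printf '%s\n' '1-Pen-2-10' '2-Pencil-1-5' '3-Eraser-1-2' > data.txt

# -F'-' sets FS to "-", so fields split on the dashes
awk -F'-' '{print $1, $2}' data.txt
# 1 Pen
# 2 Pencil
# 3 Eraser

# -FS'-' sets FS to "S-"; no line contains "S-", so $1 is the whole line
awk -FS'-' '{print $1}' data.txt
# 1-Pen-2-10
# 2-Pencil-1-5
# 3-Eraser-1-2
```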

Related

bash: awk print with in print

I need to grep for a pattern and then print part of the matching line. Currently I am using the command below, which works fine, but I would like to eliminate the multiple pipes and use a single awk command to achieve the same output. Is there a way to do it using awk?
root#Server1 # cat file
Jenny:Mon,Tue,Wed:Morning
David:Thu,Fri,Sat:Evening
root#Server1 # awk '/Jenny/ {print $0}' file | awk -F ":" '{ print $2 }' | awk -F "," '{ print $1 }'
Mon
I want to get this output using single awk command. Any help?
You can try something like:
awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
Try this
awk -F'[:,]+' '/Jenny/{print $2}' file.txt
It uses multiple separator characters inside the [ ].
The + means one or more, since the value is treated as a regex.
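To see how the bracket expression splits a record, one can print every field with its index (using a sample line from the question):

```shell
printf '%s\n' 'Jenny:Mon,Tue,Wed:Morning' > file.txt

# FS='[:,]+' treats any run of ":" or "," as one field separator
awk -F'[:,]+' '{for (i = 1; i <= NF; i++) print i, $i}' file.txt
# 1 Jenny
# 2 Mon
# 3 Tue
# 4 Wed
# 5 Morning
```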
For this particular job, I find grep to be slightly more robust.
Unless your company has a policy not to hire people named Eve.
(Try it out if you don't understand.)
grep -oP '^[^:]*Jenny[^:]*:\K[^,:]+' file
Or to do a whole-word match:
grep -oP '^[^:]*\bJenny\b[^:]*:\K[^,:]+' file
Or when you are confident that "Jenny" is the full name:
grep -oP '^Jenny:\K[^,:]+' file
Output:
Mon
Explanation:
The stuff up until \K speaks for itself: it selects the line(s) with the desired name.
[^,:]+ captures the day of week (in this case Mon).
\K cuts off everything preceding Mon.
-o cuts off anything following Mon.

Strip text from output in rhel

I'm trying to strip away some of the text from this output using awk.
This is my output:
href="/warning:understand-how-this-works!/5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo">+</a>
href="/warning:understand-how-this-works!/5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo">+</a>
href="/warning:understand-how-this-works!/5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo">+</a>
Basically, from that output in a text file, I am trying to remove this part:
href="/warning:understand-how-this-works!/
and this part
">+</a>
So it only shows:
5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo
and outputs just that string.
Running on CentOS 6.
Could you please try the following and let me know if it helps:
awk '{sub(/.*!\//,X,$0);sub(/\".*/,X,$0);print}' Input_file
You can use grep if you want:
grep -oP '!/\K.*?(?=")' inputfile
Or awk, by playing around with FS:
awk -F'!/|">' '{print $2}' input
Or use sed backreferencing:
sed -r 's/(^.*!\/)(.*)(">.*)/\2/' input
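All of the approaches above should yield the same string; a quick check of the FS-based one with a sample line from the question:

```shell
line='href="/warning:understand-how-this-works!/5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo">+</a>'

# With FS set to the regex '!/|">', the wanted string is field 2
printf '%s\n' "$line" | awk -F'!/|">' '{print $2}'
# 5HpHagT65TZzG1PH3CSu63k8DbpvD8s5ip4nEB3kEsrePxLM2Uo
```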

cat passwd | awk -F':' '{printf $1}' Is this command correct?

I'd like to know how cat passwd | awk -F':' '{printf $1}' works. cat /etc/passwd lists users with IDs and home folders, from root to the current user (I don't know if that has something to do with cat passwd). -F is some kind of input file, and {printf $1} prints the first column. That's what I've found searching so far, but it seems confusing to me.
Can anyone help me or explain to me if it's right or wrong, please?
This is equivalent to awk -F: '{print $1}' passwd. The cat command is superfluous as all it does is read a file.
The -F option determines the field separator for awk. The quotes around the colon are also superfluous since colon is not special to the shell in this context. The print invocation tells awk to print the first field using $1. You are not passing a format string, so you probably mean print instead of printf.
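To make the print/printf difference concrete, a small sketch (a one-line sample file is assumed; on a real system you would read /etc/passwd):

```shell
printf 'root:x:0:0:root:/root:/bin/bash\n' > passwd

# printf treats $1 as a format string and adds no newline,
# which also breaks if a field ever contains "%"
awk -F: '{printf $1}' passwd      # prints "root" with no trailing newline

# print appends the output record separator (a newline) for you
awk -F: '{print $1}' passwd

# if printf is wanted, always pass an explicit format string
awk -F: '{printf "%s\n", $1}' passwd
```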

Pipe 'tail -f' into awk without hanging

Something like this will hang:
tail -f textfile | awk '{print $0}'
while grep won't hang when used instead of awk.
My actual intention is to add color to some log output using merely standard commands; however it seems that piping tail -f into awk won't work. I don't know if it's a buffer problem, but I tried some approaches that haven't worked, like:
awk '{print $0;fflush()}'
and also How to pipe tail -f into awk
Any ideas?
I ran into almost exactly the same problem with mawk. I think it is due to the way mawk flushes its buffer; the problem went away when I switched to gawk. Hope this helps (a bit late, I know).
I tried this command :
tail -f test | awk '{print $0;}'
And it doesn't hang: awk prints the new values each time I add something to the test file.
echo "test" >> test
I think you just forgot a quote in your command, because you wrote (edit: well, before your post was edited):
tail -f textfile | awk {print $0}'
Instead of :
tail -f textfile | awk '{print $0}'
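For completeness: when awk's output goes into yet another pipe (rather than straight to the terminal) it gets block-buffered, and the usual workarounds are awk's fflush() or GNU coreutils' stdbuf. An illustrative sketch (logfile and pattern are placeholder names):

```shell
# Flush awk's output after every record:
tail -f logfile | awk '{print $0; fflush()}'

# Or force line-buffered output with GNU stdbuf:
tail -f logfile | stdbuf -oL awk '{print $0}' | grep --color 'pattern'
```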

Assigning the output of a command to a variable (Bash)

I need to assign the output of a command to a variable. The command I tried is:
grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
I try this code to assign a variable:
UUID=$(grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}')
However, it gives a syntax error. In addition I want it to work in a bash script.
The error is:
./upload.sh: line 12: syntax error near unexpected token ENE=$( grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
)'
./upload.sh: line 12: ENE=$( grep UUID fstab | awk '/ext4/ {print $1}' | awk '{print substr($0,6)}'
)'
Well, using the $() command substitution operator is a common way to capture the output of a command. Since it spawns a subshell, it is not the most efficient approach.
I tried :
UUID=$(grep UUID /etc/fstab|awk '/ext4/ {print $1}'|awk '{print substr($0,6)}')
echo $UUID # writes e577b87e-2fec-893b-c237-6a14aeb5b390
it works perfectly :)
EDIT:
Of course you can shorten your command :
# First step : Only one awk
UUID=$(grep UUID /etc/fstab|awk '/ext4/ {print substr($1,6)}')
One more step:
# Second step : awk has a powerful regular expression engine ^^
UUID=$(cat /etc/fstab|awk '/UUID.*ext4/ {print substr($1,6)}')
You can also use awk with a file argument:
# Third step : awk reads fstab directly
UUID=$(awk '/UUID.*ext4/ {print substr($1,6)}' /etc/fstab)
Just for troubleshooting purposes, as something else to try to see if you can get this to work, you could also use "backticks", e.g.,
cur_dir=`pwd`
would save the output of the pwd command in your variable cur_dir, though using the $() approach is generally preferable.
To quote from a page given to me on http://unix.stackexchange.com:
The second form `COMMAND` (using backticks) is more or less obsolete for Bash, since it
has some trouble with nesting ("inner" backticks need to be escaped)
and escaping characters. Use $(COMMAND), it's also POSIX!
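A small illustration of the nesting difference (paths chosen arbitrarily for the example):

```shell
# $() nests without any escaping
dir=$(basename "$(dirname /usr/local/bin)")
echo "$dir"    # local

# the backtick form needs the inner backticks escaped
dir=`basename \`dirname /usr/local/bin\``
echo "$dir"    # local
```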
