command to find words and display columns - linux

I want to search for some words in a log file and display only the given columns from the matching lines.
For example, I want to search for "word" in abc.log and print columns 4 and 11:
grep "word" abc.log | awk '{print $4}' | awk '{print $11}'
but this doesn't work. Can someone please help?

You need to print $4 and $11 together rather than piping $4 into another awk.
Also, you don't need grep because awk can grep.
Try it like this:
awk '/word/{print $4,$11}' abc.log
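For example, with a made-up log line (the contents below are purely illustrative), fields 4 and 11 are printed together:
$ cat abc.log
2015-01-02 10:11:12 host1 word a b c d e f OK
$ awk '/word/{print $4,$11}' abc.log
word OK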

Related

bash: awk print within print

I need to grep for a pattern and then print some output from within the matching lines. Currently I am using the command below, which works fine, but I would like to eliminate the multiple pipes and use a single awk command to achieve the same output. Is there a way to do it using awk?
root@Server1 # cat file
Jenny:Mon,Tue,Wed:Morning
David:Thu,Fri,Sat:Evening
root@Server1 # awk '/Jenny/ {print $0}' file | awk -F ":" '{ print $2 }' | awk -F "," '{ print $1 }'
Mon
I want to get this output using a single awk command. Any help?
You can try something like:
awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
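With the sample file above, this prints:
$ awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
Mon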
Try this:
awk -F'[:,]+' '/Jenny/{print $2}' file.txt
It uses multiple field-separator characters inside the [ ].
The + means one or more, since the -F value is treated as a regex.
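To see how that separator splits the sample line, here is a quick illustration that prints each field with its number:
$ echo 'Jenny:Mon,Tue,Wed:Morning' | awk -F'[:,]+' '{for(i=1;i<=NF;i++) print i, $i}'
1 Jenny
2 Mon
3 Tue
4 Wed
5 Morning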
For this particular job, I find grep to be slightly more robust.
Unless your company has a policy not to hire people named Eve.
(Try it out if you don't understand; a demonstration follows the explanation below.)
grep -oP '^[^:]*Jenny[^:]*:\K[^,:]+' file
Or to do a whole-word match:
grep -oP '^[^:]*\bJenny\b[^:]*:\K[^,:]+' file
Or when you are confident that "Jenny" is the full name:
grep -oP '^Jenny:\K[^,:]+' file
Output:
Mon
Explanation:
The stuff up until \K speaks for itself: it selects the line(s) with the desired name.
[^,:]+ captures the day of week (in this case Mon).
\K cuts off everything preceding Mon.
-o cuts off anything following Mon.
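To demonstrate the "Eve" remark above: the awk pattern matches anywhere in the line, so David's shift Evening triggers a false positive, while the grep anchors the name before the first colon and correctly finds nothing:
$ awk -F'[:,]+' '/Eve/{print $2}' file
Thu
$ grep -oP '^[^:]*\bEve\b[^:]*:\K[^,:]+' file
(no output)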

How Can I Perform Awk Commands Only On Certain Fields

I have CSV columns that I'm working with:
info,example-string,super-example-string,otherinfo
I would like to get:
example-string super example string
Right now, I'm running the following command:
awk -F ',' '{print $3}' | sed "s/-//g"
But, then I have to paste the lines together to combine $2 and $3.
Is there any way to do something like this?
awk -F ',' '{print $2" "$3}' | sed "s/-//g"
except where the sed command is only performed on $3 and $2 stays in place? I'm concerned that if the lines don't match up later on, the data could become misaligned.
Please note: I need to keep the pipe to the sed command. I used a simple example here, but I end up running a lot of commands after it as well.
Try:
$ awk -F, '{gsub(/-/," ",$3); print $2,$3}' file
example-string super example string
How it works
-F,
This tells awk to use a comma as the field separator.
gsub(/-/," ",$3)
This replaces all - in field 3 with spaces.
print $2,$3
This prints fields 2 and 3.
Examples using pipelines
$ echo 'info,example-string,super-example-string,otherinfo' | awk -F, '{gsub(/-/," ",$3); print $2,$3}'
example-string super example string
In a pipeline with sed:
$ echo 'info,example-string,super-example-string,otherinfo' | awk -F, '{gsub(/-/," ",$3); print $2,$3}' | sed 's/string/String/g'
example-String super example String
Though the best solution would be a single sed or a single awk, you asked for an awk-and-sed solution, so here is one. It assumes your actual data is in the same format as the sample Input_file shown.
awk -F, '{print $2,$3}' Input_file | sed 's/\([^ ]*\) \([^-]*\)-\([^-]*\)-\([^-]*\)/\1 \2 \3 \4/'
Output will be as follows.
example-string super example string

How to print all the lines associated with my sorted result in Linux

I have the following command that takes a log, sorts it based on column $6, and removes duplicates. At the end, I have just one column as a result.
zgrep 'send_sms_*' logs_new_2015-11.gz |zgrep '^2015-'| zgrep '+1' | awk '{print $6}' | sort | uniq
I need to see the rest of each line instead of just the one column after all those commands run, and I don't know how to do it.
Thank you for any help.
I have found the solution. awk extracts only what you ask it to extract and loses the rest of the line, so the trick is to keep what we want within awk itself:
zgrep 'txt_*' logs_new_2015-11.gz |zgrep '^2015-'| zgrep '+1' | awk '{print $6 " " $0}' | sort | uniq | awk '{$1="";print $0}'
At the end I remove the first column and keep the rest of the line.
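One caveat (my observation, not from the original post): assigning $1="" leaves a leading space on each line, because awk rejoins the remaining fields with the output separator. If that matters, strip it explicitly:
awk '{$1=""; sub(/^ /,""); print}'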
One awk is enough after the unzip (keeping the first zgrep for that purpose):
zgrep 'send_sms_*' logs_new_2015-11.gz \
|awk '/^2015-/&&/\+1/{u[$6]++}END{for(U in u)print U}'
Add | sort after it if sorting is mandatory with basic awk.
With gawk, you can instead add BEGIN{PROCINFO["sorted_in"]="@ind_str_asc"} as the first action in the awk script.
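If the goal is instead to keep the first full matching line for each unique value of $6, a common awk idiom is the following sketch (the seen array name is arbitrary):
zgrep 'send_sms_*' logs_new_2015-11.gz | awk '/^2015-/ && /\+1/ && !seen[$6]++'
Here !seen[$6]++ is true only the first time a given $6 value appears, so each whole line is printed once per unique value.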

Unix (ksh) script to read file, parse and output certain columns only

I have an input file that looks like this:
"LEVEL1","cn=APP_GROUP_ABC,ou=dept,dc=net","uid=A123456,ou=person,dc=net"
"LEVEL1","cn=APP_GROUP_DEF,ou=dept,dc=net","uid=A123456,ou=person,dc=net"
"LEVEL1","cn=APP_GROUP_ABC,ou=dept,dc=net","uid=A567890,ou=person,dc=net"
I want to read each line, parse and then output like this:
A123456,ABC
A123456,DEF
A567890,ABC
In other words, retrieve the user id from "uid=" and then the identifier from "cn=APP_GROUP_". Repeat for each input record, writing to a new output file.
Note that the column positions aren't fixed, so I can't rely on them; I'm guessing I have to search for the "uid=" string and somehow use its position?
Any help much appreciated.
You can do this easily with sed:
sed 's/.*cn=APP_GROUP_\([^,]*\).*uid=\([^,]*\).*/\2,\1/'
The regex captures the two desired strings and outputs them in reverse order with a comma between them. You might need to change the context of the captures, depending on the precise nature of your data, because uid= will match the last uid= in the line if there is more than one.
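Applied to the sample input (file here stands for your input file), this produces the desired output:
$ sed 's/.*cn=APP_GROUP_\([^,]*\).*uid=\([^,]*\).*/\2,\1/' file
A123456,ABC
A123456,DEF
A567890,ABC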
You can use awk to split into columns: first split by ',', then split by '=', and grab the result. You can do it easily as awk -F, '{ print $5 }' | awk -F= '{ print $2 }'
Take a look at this, using the example you provided:
cat file | awk -F, '{ print $5}' | awk -F= '{print $2}'
A123456
A123456
A567890
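That extracts only the user id. For the full uid,group output in a single awk command, here is a sketch using POSIX match() and substr() (the offsets 13 and 4 are the lengths of the prefixes cn=APP_GROUP_ and uid=):
awk '{
  # group: text after "cn=APP_GROUP_" up to the next comma
  match($0, /cn=APP_GROUP_[^,]*/); grp = substr($0, RSTART+13, RLENGTH-13)
  # user id: text after "uid=" up to the next comma
  match($0, /uid=[^,]*/); uid = substr($0, RSTART+4, RLENGTH-4)
  print uid "," grp
}' file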

using awk on a string

Can I use awk to extract the first column, or any column, from a string?
Actually, I am reading a file into a variable, and I want to use awk on that variable to do my job.
How is that possible? Any suggestions?
Print first column*:
<some output producing command> | awk '{print $1}'
Print second column:
<some output producing command> | awk '{print $2}'
etc.
Where <some output producing command> is like cat filename.txt or echo $VAR, etc.
e.g. ls -l | awk '{print $9}' extracts the ninth column, which is like an ... awkward way of ls -1
*Columns are defined by the separating whitespace.
EDIT: If your text is already in a variable, something like:
VAR2=$(echo "$VAR" | awk '{print $9}')
would work, provided you change 9 to the desired column.
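In bash or ksh93 you can also skip the echo entirely with a here-string, which avoids quoting pitfalls:
VAR2=$(awk '{print $9}' <<< "$VAR")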
