Printing columns in the output file - Linux

I got the output of the last command using the command below:
last -w -F | awk '{print $1","$3","$5$6$7$8","$11$12$13$14","$15}' | tac | tr ',' '\t'
Now, for the same output, I want to add the column names below and then copy the result to a CSV or XLS file.
Can someone help me out here?
Column Names
USERNAME
HOSTNAME
LOGIN_TIME
LOGOUT_TIME
DURATION
The output looks like this:
oracle localhost 2015 2.30
root localhost 2014 2.30
Appreciate your help on this.

Try this:
last -w -F | awk '{print $1,$3,$5$6$7$8,$11$12$13$14,$15} END{print "USERNAME\tHOSTNAME\tLOGIN_TIME\tLOGOUT_TIME\tDURATION"}' OFS='\t' | tac
I added the headings to the END statement in awk. This way, after tac is run, the headings will be at the beginning.
I also set awk's OFS to a tab so that the tr step should no longer be needed.
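A side note on OFS: it only takes effect when the fields are printed with commas between them, as in the command above. A minimal illustration on a throwaway line:
printf 'a b c\n' | awk '{print $1,$2,$3}' OFS='\t'
a	b	c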
I couldn't thoroughly test this because my last command apparently produces a different format than yours.
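If the field numbers don't line up on your system, a quick way to map them is to number the fields of the first line of output (a generic sketch, independent of any particular last format):
last -w -F | head -n 1 | awk '{for (i = 1; i <= NF; i++) print i, $i}'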
Writing to a file
To write the above output to a file, we use redirection: stdout is sent to a file:
last -w -F | awk '{print $1,$3,$5$6$7$8,$11$12$13$14,$15} END{print "USERNAME\tHOSTNAME\tLOGIN_TIME\tLOGOUT_TIME\tDURATION"}' OFS='\t' | tac >new.tsv
The above code produces a tab-separated file. After selecting the options for tab-separated format, Excel should be able to read this file.
If one wants a comma-separated file, then all we need to do is replace each \t with ,:
last -w -F | awk '{print $1,$3,$5$6$7$8,$11$12$13$14,$15} END{print "USERNAME,HOSTNAME,LOGIN_TIME,LOGOUT_TIME,DURATION"}' OFS=',' | tac >new.csv
If I recall correctly, one can open this in Excel via File -> Open -> Text file.

Related

bash: awk print within print

I need to grep for a pattern and then print part of the matching line. Currently I am using the command below, which works fine, but I would like to eliminate the multiple pipes and achieve the same output with a single awk command. Is there a way to do it using awk?
root@Server1 # cat file
Jenny:Mon,Tue,Wed:Morning
David:Thu,Fri,Sat:Evening
root@Server1 # awk '/Jenny/ {print $0}' file | awk -F ":" '{ print $2 }' | awk -F "," '{ print $1 }'
Mon
I want to get this output using a single awk command. Any help?
You can try something like:
awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
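For reference, split($2, a, ",") breaks the second colon-separated field on commas, fills the array a with the pieces, and returns their count. A quick illustration on a throwaway string:
echo 'Mon,Tue,Wed' | awk '{n = split($0, a, ","); print n, a[1], a[3]}'
3 Mon Wed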
Try this
awk -F'[:,]+' '/Jenny/{print $2}' file.txt
It uses multiple field-separator characters inside the [ ].
The + means 'one or more occurrences', since the separator is treated as a regex.
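To see how the combined separator carves up a line, here is a quick check against the sample data:
echo 'Jenny:Mon,Tue,Wed:Morning' | awk -F'[:,]+' '{print NF; print $1; print $2; print $5}'
5
Jenny
Mon
Morning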
For this particular job, I find grep to be slightly more robust.
Unless your company has a policy not to hire people named Eve.
(Try it out if you don't understand.)
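Or, to spell it out: with the sample file above, a bare pattern match for Eve hits David's line, because "Evening" contains "Eve":
awk -F'[:,]+' '/Eve/{print $2}' file
Thu
Anchoring the name to the first field, as the commands below do, avoids this: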
grep -oP '^[^:]*Jenny[^:]*:\K[^,:]+' file
Or to do a whole-word match:
grep -oP '^[^:]*\bJenny\b[^:]*:\K[^,:]+' file
Or when you are confident that "Jenny" is the full name:
grep -oP '^Jenny:\K[^,:]+' file
Output:
Mon
Explanation:
The stuff up until \K speaks for itself: it selects the line(s) with the desired name.
[^,:]+ captures the day of week (in this case Mon).
\K cuts off everything preceding Mon.
-o cuts off anything following Mon.
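A quick way to watch -o and \K work together on a throwaway string:
echo 'name=Jenny day=Mon' | grep -oP 'day=\K\w+'
Mon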

zcat file not working for gzip file

I have a .gz file which I need to merge and do other manipulations with (without decompressing it), but I am having trouble just using zcat or gzip -dc or awk, for example when I pipe their output to less -S like this:
awk '{print $1}' <(gzip -dc file.gz) | less -S
I get the wrong column printed. When I use just less -S to view the file, only the last few columns are shown. So I thought it was a problem with the delimiter, but I have tried importing some lines in R (the file is too big to import whole), and it seems to be space-delimited, since all the columns show up when I do this:
x=read.table("file.gz", header=T, nrows=100)
But how do I read the lines correctly to use this file with zcat?
Thank you so much for your help!
If you want the whole line to be printed, try $0.
awk '{print $0}' <(gzip -dc file.gz) | less -S
If you want specific columns to be printed, use -F to specify the field separator. For example, if you want the first of the ':'-separated fields on each line (as in /etc/passwd), try this command:
awk -F':' '{print $1}' <(gzip -dc passwd.gz) | less -S
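A self-contained way to try this on a scratch file (the file name and contents are just for illustration):
printf 'root:x:0:0\nbin:x:1:1\n' | gzip > passwd.gz
awk -F':' '{print $1}' <(gzip -dc passwd.gz)
root
bin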

grep command not working as expected

I have a text file like the one below, and I will pass an input for which I want the corresponding output.
Input file: test.txt
abc:abc_1
abcd:abcd_1
1_abcd:1_abcd_bkp
xyz:xyz_2
So if I pass abc against the above test.txt, I want abc_1; and if I pass abcd, I need abcd_1 as output.
I tried cat test.txt | grep abc | cut -d":" -f2,2, but I am getting the output
abc_1
abcd_1
1_abcd_bkp
when I want only abc_1.
With GNU grep:
grep -Po "^abc:\K.*" file
Output:
abc_1
\K keeps the text matched so far out of the overall regex match.
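You can verify on the sample data that the anchored pattern skips the abcd line:
printf 'abc:abc_1\nabcd:abcd_1\n' | grep -Po '^abc:\K.*'
abc_1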
You want to use a regular expression with the -e switch.
In particular, regular expressions allow you to use caret (^) to express the start of a line.
Since you only care about abc when it's at the start of a line and it's followed by :, you want:
cat test.txt | grep -e "^abc:" | cut -d":" -f2,2
Output:
abc_1
awk to the rescue!
awk -F: -v key="abc" '$1==key{print $2}' file
Using : as the delimiter, this looks up key in field 1 and prints field 2.
Or, moving the key into the script:
awk -F: '$1=="abc"{print $2}'
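Run against the sample test.txt, the exact string comparison on $1 keeps the abcd and 1_abcd lines out:
awk -F: '$1=="abc"{print $2}' test.txt
abc_1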
You can try excluding matches with -v:
cat test.txt | grep abc | grep -vi 'abc[a-z]'
I'm not sure that works exactly as written; try something along those lines.
Without specifying that only the second field should be printed, the whole line (or, in other cases, several lines) would be printed.
awk -F: '/abc_/{print $2}' file
abc_1
awk -F: 'NR==1,/abc/{print $2}' file
abc_1
(The second version uses a range pattern; because the first line satisfies both the start condition NR==1 and the end condition /abc/, the range covers just that one line.)

extracting the column using AWK

I am trying to extract a column using AWK.
The source file is a .CSV file, and below is the command I am using:
awk -F ',' '{print $1}' abc.csv > test1
The data in abc.csv looks like this:
xyz#yahoo.com,160,1,2,3
abc#ymail.com,1,2,3,160
But the data obtained in test1 looks like:
abc#ymail.comxyz#ymail.com
when the file is opened in Notepad after downloading it from the server.
Notepad doesn't treat Unix newlines as line breaks. If you want to add carriage returns, try
awk -F ',' '{print $1"\r"}' abc.csv > test1
Since you're using a Windows tool to read the output, you just need to tell awk to use Windows line endings as the Output Record Separator:
awk -v ORS='\r\n' -F',' '{print $1}' file
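If you want to confirm the line endings, one option is to inspect the output with od -c; each record should end in \r \n:
awk -v ORS='\r\n' -F',' '{print $1}' file | od -c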

using awk on a string

Can I use awk to extract the first column, or any column, from a string?
I am reading a file into a variable, and I want to use AWK on that variable to do the job.
How is that possible? Any suggestions?
Print first column*:
<some output producing command> | awk '{print $1}'
Print second column:
<some output producing command> | awk '{print $2}'
etc.
Where <some output producing command> is like cat filename.txt or echo $VAR, etc.
e.g. ls -l | awk '{print $9}' extracts the ninth column, which is like an ... awkward way of ls -1
*Columns are defined by the separating whitespace.
EDIT: If your text is already in a variable, something like:
VAR2=$(echo "$VAR" | awk '{print $9}')
would work, provided you change 9 to the desired column.
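Putting it together on a throwaway string (the variable names are just for illustration):
LINE='one two three'
COL2=$(echo "$LINE" | awk '{print $2}')
echo "$COL2"
two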
