How to print all the lines associated with my sorted result in Linux - linux

I have the following command that takes a log, sorts it on column $6, and removes duplicates with uniq. At the end, I have just one column as a result.
zgrep 'send_sms_*' logs_new_2015-11.gz |zgrep '^2015-'| zgrep '+1' | awk '{print $6}' | sort | uniq
I need to see the full lines, not just the one column, after all those commands run, and I don't know how to do it.
Thank you for any help.

I have found the solution: awk extracts only what you ask it to extract and discards the rest of the line, so the trick is to keep the whole line alongside the sort key in awk.
zgrep 'txt_*' logs_new_2015-11.gz |zgrep '^2015-'| zgrep '+1' | awk '{print $6 " " $0}' | sort | uniq | awk '{$1="";print $0}'
At the end I remove the first column and keep the rest of the line.
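This is the classic decorate-sort-undecorate pattern. One caveat: awk '{$1="";print $0}' leaves a leading space on every line. A minimal sketch of the same pipeline (same file and patterns as above) that strips the key and its separator cleanly with sed instead:
# Decorate with the sort key ($6), sort and deduplicate,
# then undecorate by stripping the key and the space after it.
zgrep 'txt_*' logs_new_2015-11.gz | zgrep '^2015-' | zgrep '+1' \
  | awk '{print $6 " " $0}' | sort | uniq | sed 's/^[^ ]* //'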

One awk is enough after unzipping (keeping the first zgrep for that purpose):
zgrep 'send_sms_*' logs_new_2015-11.gz \
  | awk '/^2015-/ && /\+1/ { u[$6]++ } END { for (U in u) print U }'
Add | sort at the end if sorted output is mandatory with a basic awk.
With gawk, you can instead add BEGIN{PROCINFO["sorted_in"]="@ind_str_asc"} as the first rule, so the for (U in u) loop iterates over the keys in ascending order.
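Putting the gawk variant together, a sketch against the same log file:
# gawk-only: the BEGIN rule makes "for (U in u)" walk the array
# keys in ascending string order, so no external sort is needed.
zgrep 'send_sms_*' logs_new_2015-11.gz \
  | gawk 'BEGIN { PROCINFO["sorted_in"] = "@ind_str_asc" }
          /^2015-/ && /\+1/ { u[$6]++ }
          END { for (U in u) print U }'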

Related

Sed, Awk for combining the output of two cut statements

I'm trying to combine the outputs below into one command. The issue is that the field I'm trying to grab is in reverse order. I was told that cut doesn't support a "reverse" option and to use awk for this purpose, but it didn't end up working for me. I'm taking the output of ls -l against /dev/block to return the partitions, and automatically building a dd if= / of= line for each row of the output.
I tried piping the output to awk:
cut -d' ' -f23,25 ... | awk '{print $2,$1}'
However, when I then used sed to add the prefix and suffix, the result wasn't in the appropriate order.
I built the two statements below, which individually return the expected output; I'm just looking for the "right" way to combine them in the most efficient manner using sed/awk.
ls -l /dev/block/platform/msm_sdcc.1/by-name/ | cut -d' ' -f 25 | sed "s/^/dd if=/"
ls -l /dev/block/platform/msm_sdcc.1/by-name/ | cut -d' ' -f 23 | sed "s/.*/of=\/external_sd\/&.dsk/"
Any assistance will be appreciated.
Thank you.
If you're already using awk, I don't think you'll need cut or sed. You can probably do something like the following, though I'll have to trust you on the field numbers:
ls -l /dev/block/platform/msm_sdcc.1/by-name | awk '{print "dd if=/"$25 " of=/" $23 ".dsk"}'
awk splits on any run of whitespace, not just the space character, so it's possible the field numbers will shift some, though that may also make it more reliable.
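Because of that whitespace splitting, the cut-based field numbers 23 and 25 are unlikely to carry over. A sketch that avoids counting fields altogether, assuming the by-name entries are symlinks (so each ls -l line ends with name -> target) and reusing the question's /external_sd/ output path:
# $NF is the symlink target, $(NF-2) the partition name.
ls -l /dev/block/platform/msm_sdcc.1/by-name |
  awk '/->/ { print "dd if=" $NF " of=/external_sd/" $(NF-2) ".dsk" }'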

cut or awk command to print first field of first row

I am trying to print the first field of the first row of an output. Here is the case: I just need to print only SUSE from this output.
# cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
I tried cat /etc/*release | awk '{print $1}' but that prints the first string of every row:
SUSE
VERSION
PATCHLEVEL
Specify NR if you want to capture output from selected rows:
awk 'NR==1{print $1}' /etc/*release
An alternative (ugly) way of achieving the same would be:
awk '{print $1; exit}'
An efficient way of getting the first string from a specific line, say line 42, in the output would be:
awk 'NR==42{print $1; exit}'
Specify the line number using the NR built-in variable.
awk 'NR==1{print $1}' /etc/*release
try this:
head -1 /etc/*release | awk '{print $1}'
df -h | head -4 | tail -1 | awk '{ print $2 }'
Change the numbers to tweak it to your liking.
Or use a while loop, but that's probably a bad way to do it.
You could use the head instead of cat:
head -n1 /etc/*release | awk '{print $1}'
sed -n 1p /etc/*release |cut -d " " -f1
if tab delimited:
sed -n 1p /etc/*release |cut -f1
Try
sed 'NUMq;d' /etc/*release | awk '{print $1}'
where NUM is the line number,
e.g. sed '1q;d' /etc/*release | awk '{print $1}'
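For completeness, GNU sed can also do both the line selection and the field extraction on its own (with a glob like /etc/*release, sed reads all matching files as one stream, so line 1 means the first line of the first file):
# On line 1: delete everything from the first whitespace on, print, quit.
sed -n '1{s/[[:space:]].*//p;q}' /etc/*release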
awk, sed, pipe, that's heavy
set `cat /etc/*release`; echo $1
The most code-golfy way I could think of to print only the first line in awk:
awk '_{exit}--_'
(Skip the quotes and make it just awk _{exit}--_ if you're feeling adventurous.)
On the first pass through the exit block, _ is undefined, so the pattern fails and row 1 skips over it. The decrement in the second pattern, --_, then makes that same counter "true" in awk's eyes (anything other than an empty string or numeric zero is considered true in its agile boolean sense), which triggers the default action of print for row 1. Incrementing or decrementing is the same thing; merely the direction and sign are inverted. Then finally, at the start of row 2, _ is non-zero, so input hits the criteria to enter the action block, which instructs awk to instantly exit, performing essentially the same functionality as
awk '{ print; exit }'
… in a slightly less verbose manner. For a single-line print, it's not even worth setting FS to skip the field-splitting part.
Using that concept to print just the 1st row's 1st field:
awk '_{exit} NF=++_'
awk '_++{exit} NF=_'
awk 'NR==1&&NF=1' file
grep -om1 '^[^ ]\+' file
# multiple files
awk 'FNR==1&&NF=1' file1 file2

Display users on Linux with tabbed output

I am working with Linux and trying to display and count the users on the system. I am currently using who -q, which gives me a count and the users, but I don't want to list anyone more than once. At the same time I would like the users output on separate lines, or at least tabbed better than it currently is.
The following will show the number of unique users logged in, ignoring the number of times they are each logged in individually:
who | awk '{ print $1; }' | sort -u | awk '{print $1; u++} END{ print "users: " u}'
If the output of who | awk '{ print $1 }' is:
joe
bunty
will
sally
will
bunty
Then the one-liner will output:
bunty
joe
sally
will
users: 4
Previous answers have involved uniq, but that command only removes duplicates when they are adjacent, which who does not guarantee; hence we use sort -u to achieve the same.
The awk command at the end prints the results while counting the number of unique users, outputting this total at the end.
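The same job can be done in a single awk process, at the cost of sorted output; !seen[$1]++ is the usual awk idiom for keeping only the first occurrence:
# Print each user the first time they appear, then the total count.
who | awk '!seen[$1]++ { print $1; u++ } END { print "users: " u }'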
I think you want
who | awk '{print $1}' | uniq && who -q | grep "\# " | cut -d' ' -f2

command to find words and display columns

I want to search for some words in a log file and display only the given column numbers from the matching lines.
E.g., I want to search for "word" in abc.log and print columns 4 and 11:
grep "word" abc.log | awk '{print $4}' | awk '{print $4}'
but this doesn't work. Can someone please help?
You need to print $4 and $11 together rather than piping $4 into another awk.
Also, you don't need grep because awk can grep.
Try it like this:
awk '/word/{print $4,$11}' abc.log
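And if "word" should only match when it is an exact field rather than a substring anywhere on the line, a hedged variant (assuming, for illustration, it has to be field 2):
# Match only when field 2 equals "word" exactly, then print columns 4 and 11.
awk '$2 == "word" { print $4, $11 }' abc.log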

using awk on a string

Can I use awk to extract the first column, or any column, from a string?
Actually, I am reading a file into a variable, and I want to use awk on that variable to do the job.
How is that possible? Any suggestions?
Print first column*:
<some output producing command> | awk '{print $1}'
Print second column:
<some output producing command> | awk '{print $2}'
etc.
Where <some output producing command> is like cat filename.txt or echo $VAR, etc.
e.g. ls -l | awk '{print $9}' extracts the ninth column, which is like an ... awkward way of ls -1
*Columns are defined by the separating whitespace.
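When the separator isn't whitespace, pass -F to set it; a small sketch using /etc/passwd, which is colon-delimited:
# Print the first (username) field of each colon-separated line.
awk -F: '{print $1}' /etc/passwd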
EDIT: If your text is already in a variable, something like:
VAR2=$(echo "$VAR" | awk '{print $9}')
would work, provided you change 9 to the desired column (quote "$VAR" so a multi-line value isn't collapsed onto one line before awk sees it).
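In bash you can also skip the echo subprocess entirely with a here-string (bash-specific, not POSIX sh):
# Read the file into a variable, then let awk read the variable directly.
VAR=$(cat filename.txt)
VAR2=$(awk '{print $9}' <<< "$VAR")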
