md5sum file.png | awk '{print $1}' | wc -m
I get: 33
I expect it to return 32, the length of an MD5 hash. After reading the man page and googling, I still couldn't find out why.
TL;DR
Use awk's length() function:
md5sum file.png | awk '{print length($1)}'
32
That's because awk appends a line feed character (its output record separator) to the output, and wc -m counts it. You can check:
md5sum file.png | awk '{print $1}' | xxd
You can tell awk not to do that by setting ORS, the output record separator variable, to an empty string:
md5sum file.png | awk '{print $1}' ORS='' | wc -m
32
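You can see the same counting behaviour with printf, which only emits a newline when you explicitly include one:
printf 'abc' | wc -m      # 3 -- no trailing newline
printf 'abc\n' | wc -m    # 4 -- the newline is counted as a character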
Related
I am trying to print the output of an awk command comma-delimited, and also trying to get the same output using cut.
$ cat file1.csv
dot|is-big|a
dot|is-round|a
dot|is-gray|b
cat|is-big|a
hot|in-summer|a
dot|is-big|a
dot|is-round|b
dot|is-gray|a
cat|is-big|a
hot|in-summer|a
Command tried:
$ awk 'BEGIN{FS="|"; OFS=","} {print $1,$3}' file1.csv | sort | uniq -c
Output Got:
2 cat,a
4 dot,a
2 dot,b
2 hot,a
Desired Output:
2,cat,a
4,dot,a
2,dot,b
2,hot,a
A couple of other commands tried:
$ cat file1.csv | cut --output-delimiter="|" -d'|' -f1,3 | sort | uniq -c
You need to change the delimiter to , after running uniq -c, since uniq -c prepends the count as a new first column:
awk -F'|' '{print $1, $3}' file1.csv | sort | uniq -c | awk 'BEGIN{OFS=","} {$1=$1; print}'
(The $1=$1 assignment forces awk to rebuild the record so the new OFS takes effect.)
But you don't need sort | uniq -c at all if you're using awk; it can do the counting itself:
awk 'BEGIN{FS="|";OFS=","} {a[$1 OFS $3]++} END{for(k in a) print a[k], k}' file1.csv
$ grep HxH 20170213.csv | awk -F',' '{print $13}' | cut -b 25-27 | sort -u
868
881
896
904
913
914
918
919
920
Question: How do I pipe the sorted results and feed them into grep?
Right now I have to run the following commands manually:
grep 868 /tmp/aaa/*.csv
grep 881 /tmp/aaa/*.csv
...
grep 920 /tmp/aaa/*.csv
Since your output is numeric (output lines do not contain spaces), you can use a for loop with command substitution:
for id in $(grep HxH 20170213.csv | awk -F',' '{print $13}' \
| cut -b 25-27 | sort -u); do
grep $id /tmp/aaa/*.csv
done
Another option is to use xargs:
grep HxH 20170213.csv | awk -F',' '{print $13}' | cut -b 25-27 | sort -u \
| xargs -n1 grep /tmp/aaa/*.csv -e
The xargs variant requires jumping through a couple of hoops to get right:
by default xargs would pass more than one pattern to the same grep invocation, which is prevented with -n1;
xargs specifies the stdin contents as the last argument in the command line, which is a problem because grep expects pattern then file name. Fortunately, grep PATTERN FILES... can be spelled as grep FILES... -e PATTERN, which is why grep must be followed by -e.
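If your xargs supports -I (it is specified by POSIX, so most do), a replacement string sidesteps the argument-order problem, since each pattern is substituted wherever {} appears:
grep HxH 20170213.csv | awk -F',' '{print $13}' | cut -b 25-27 | sort -u \
| xargs -I{} grep {} /tmp/aaa/*.csv
Note that -I also implies one input line per grep invocation, so -n1 is no longer needed.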
I'm trying to run a shell script using cron at 15 and 45 minutes past the hour. For some reason it always produces empty output when executed by cron, whereas when I run it from a terminal with ./my_script.sh it produces the expected results. I have read many answers related to this question, but none solved my problem.
Code:
#!/bin/bash
PATH=/bin:/home/mpadmin/bin:/opt/ant/bin:/opt/cc/bin:/opt/cvsconsole:/opt/cvsconsole:/opt/cvsconsole:/sbin:/usr/bin:/usr/lib/qt-3.3/bin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/var/mp/95930/scripts:/var/mp/95930/opt/bin:/opt/prgs/dlc101c/bin:/opt/cvsconsole
tail -n 1000000 conveyor2.log | grep -P 'curingresult OK' | sed 's/FT\ /FT/g' |awk '{print $5 $13}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/master_list.txt
tail -n 1000000 registration.log | grep -P 'TirePresent: 8' | sed 's/GT\ /GT/g' |awk '{print $7 $15}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/TBM_list.txt
My crontab entry:
# run 15 and 45 mins every hour, every day
15,47 * * * * sh /var/mp/95910/log/update_master_list.sh
Permissions:
All files have read, write, and execute permissions for all users.
I hope I have given all the relevant and necessary information.
You probably need to change to the /var/mp/95910/log/ directory first; cron runs jobs from the user's home directory, so the relative paths conveyor2.log and registration.log don't point to the files you expect:
#!/bin/bash
cd /var/mp/95910/log/
PATH=/bin:/home/mpadmin/bin:/opt/ant/bin:/opt/cc/bin:/opt/cvsconsole:/opt/cvsconsole:/opt/cvsconsole:/sbin:/usr/bin:/usr/lib/qt-3.3/bin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/var/mp/95930/scripts:/var/mp/95930/opt/bin:/opt/prgs/dlc101c/bin:/opt/cvsconsole
tail -n 1000000 conveyor2.log | grep -P 'curingresult OK' | sed 's/FT\ /FT/g' |awk '{print $5 $13}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/master_list.txt
tail -n 1000000 registration.log | grep -P 'TirePresent: 8' | sed 's/GT\ /GT/g' |awk '{print $7 $15}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/TBM_list.txt
Or specify the file paths explicitly:
#!/bin/bash
PATH=/bin:/home/mpadmin/bin:/opt/ant/bin:/opt/cc/bin:/opt/cvsconsole:/opt/cvsconsole:/opt/cvsconsole:/sbin:/usr/bin:/usr/lib/qt-3.3/bin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/var/mp/95930/scripts:/var/mp/95930/opt/bin:/opt/prgs/dlc101c/bin:/opt/cvsconsole
tail -n 1000000 /var/mp/95910/log/conveyor2.log | grep -P 'curingresult OK' | sed 's/FT\ /FT/g' |awk '{print $5 $13}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/master_list.txt
tail -n 1000000 /var/mp/95910/log/registration.log | grep -P 'TirePresent: 8' | sed 's/GT\ /GT/g' |awk '{print $7 $15}' |sed 's/\"//g' | uniq | sort -n |uniq > /var/www/html/95910/TBM_list.txt
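If the output files are still empty, capture the script's stdout and stderr from cron so the failure itself becomes visible, e.g. by redirecting to a log file in the crontab entry (the log path here is just an example):
15,47 * * * * sh /var/mp/95910/log/update_master_list.sh >> /tmp/update_master_list.cron.log 2>&1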
I would like to get the name of the person with the maximum age from a Unix data file. How can I do this?
Rob,20
Tom,30
I tried the command below, but it only gives me the maximum age.
awk -F"," '{print $2}' age.txt | sort -r | head -1
$ cat file | awk -F, '{print $2,$1;}' | sort -n | tail -n1
30 Tom
$ cat file | awk -F, '{print $2,$1;}' | sort -n | tail -n1 | awk '{print $2;}'
Tom
Perhaps try:
awk -F, '{if (maxage<$2) { maxage= $2; name=$1; };} END{print name}' \
age.txt
traditional:
sort -t, -nr +1 age.txt | head -1 | cut -d, -f1
POSIXy:
sort -t, -k2,2nr age.txt | head -n 1 | cut -d, -f1
I think you can easily do this using the command below:
echo -e "Rob,20\nTom,30\nMin,10\nMax,50" | sort -t ',' -rk 2 | head -n 1
Please comment in case of any issues.
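Two caveats with that one-liner: without -n the sort on field 2 is lexical (an age of 9 would sort above 50), and it prints the whole line rather than just the name. A variant that addresses both, using the same sample data:
echo -e "Rob,20\nTom,30\nMin,10\nMax,50" | sort -t, -k2,2nr | head -n 1 | cut -d, -f1
Max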
Here is the output of my netstat command. I want to total the numbers in the first field, i.e. 7+8+1+1+1+1+3+1+2 and so on. How do I use bc or any other command to sum them?
[root@example httpd]# netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c
7 209.139.35.xxx
8 209.139.35.xxx
1 209.139.35.xxx
1 209.139.35.xxx
1 208.46.149.xxx
3 96.17.177.xxx
1 96.17.177.xxx
2 96.17.177.xxx
You need to get the first column with awk (you don't actually need this step, but I'm leaving it as a monument to my eternal shame):
awk '{print $1}'
and then use awk again to sum the column of numbers and print the result
awk '{ sum+=$1} END {print sum}'
All together:
netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c | awk '{print $1}' | awk '{ sum+=$1 } END { print sum }'
I know this doesn't use bc, but it gets the job done, so hopefully that's enough.
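If you do want bc specifically, one way (assuming a paste that accepts - for stdin, as GNU and POSIX paste do) is to join the counts with + and let bc evaluate the resulting expression:
netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c | awk '{print $1}' | paste -sd+ - | bc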