I am trying to set titles for the columns in my table, and I used the code below:
$ awk 'BEGIN {print "Name\tDescription\tType\tPrice";}
> {print $1,"\t",$2,"\t",$3,"\t",$4,"\t",$NF;}
> END{print "Report Generated\n--------------";
> }' toys | column -s '/' -t
I have also tried this code, but I don't understand the syntax or how it works:
$ awk 'BEGIN {print "Name\tDescription\tType\tPrice";}
> {printf "%-24s %s\n", $1,$2,$3,$NF;}
> }' toys
The data and results that I want to have:
The table contains:
Mini car Kids under 5 Wireless 2000
Barbie Kids 6 - 12 Electronic 3000
Horse Kids 6 -8 Electronic 4000
Gitar 12 -16 ELectronic 45000
When I run the first command, it gives me this output:
Name Description Type Price
Mini car Kids under 5 Wireless 2000
Barbie Kids 6 - 12 Electronic 3000
Horse Kids 6 -8 Electronic 4000
Gitar 12 -16 ELectronic 45000
I want help printing them like this:
Name      Description   Type        Price
Mini car  Kids under 5  Wireless    2000
Barbie    Kids 6 - 12   Electronic  3000
Horse     Kids 6 -8     Electronic  4000
Gitar     12 -16        ELectronic  45000
You need one formatting operator for each value being printed. Since you're printing four columns, you need four formatting operators, each specifying the width of its column.
$ awk 'BEGIN {printf("%-10s %-15s %-10s %s\n", "Name", "Description", "Type", "Price");}
> {printf("%-10s %-15s %-10s %d\n", $1, $2, $3, $4)}
> ' toys
`%s` is for printing strings, `%d` is for printing integers.
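Here is the same idea as a self-contained check. The toys file is rebuilt inline; the multi-word fields are hyphenated here as an assumption, because awk's default field separator would otherwise split "Mini car" into two fields:

```shell
# Build a small sample file; multi-word fields are hyphenated because
# awk's default FS splits on whitespace (an assumption about the real data).
cat > toys <<'EOF'
Minicar Kids-under-5 Wireless 2000
Barbie Kids-6-12 Electronic 3000
Horse Kids-6-8 Electronic 4000
Gitar 12-16 Electronic 45000
EOF

# One %-Ns operator per column: a negative width left-justifies and pads.
awk 'BEGIN {printf("%-10s %-15s %-10s %s\n", "Name", "Description", "Type", "Price")}
           {printf("%-10s %-15s %-10s %d\n", $1, $2, $3, $4)}' toys
```

The widths (10, 15, 10) only need to exceed the longest value in each column for everything to line up.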
I have a file with several columns, and I want to count, for each value in one column, the number of distinct values that appear in a second column.
For example:
column 10 column 15
-------------------------------
orange New York
green New York
blue New York
gold New York
orange Amsterdam
blue New York
green New York
orange Sweden
blue Tokyo
gold New York
I am fairly new to using commands like awk and am looking to gain more practical knowledge.
I've tried some different variations of
awk '{A[$10 OFS $15]++} END {for (k in A) print k, A[k]}' myfile
but, not quite understanding the code, I did not get the output I expected.
I am expecting output of
orange 3
blue 2
green 1
gold 1
With GNU awk, assuming tab is your field separator:
awk '{count[$10 FS $15]++}END{for(j in count) print j}' FS='\t' file | cut -d $'\t' -f 1 | sort | uniq -c | sort -nr
Output:
3 orange
2 blue
1 green
1 gold
I suppose it could be more elegant.
Single GNU awk invocation version (works with non-GNU awk too; it just doesn't sort the output):
$ gawk 'BEGIN{ OFS=FS="\t" }
NR>1 { names[$2,$1]=$1 }
END { for (n in names) colors[names[n]]++;
PROCINFO["sorted_in"] = "#val_num_desc";
for (c in colors) print c, colors[c] }' input.tsv
orange 3
blue 2
gold 1
green 1
Adjust column numbers as needed to match real data.
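A runnable sketch of the same count-distinct-pairs idea, with the sample rebuilt inline as a two-column TSV (the file name and the use of columns 1 and 2 instead of 10 and 15 are assumptions):

```shell
# Recreate the sample as a 2-column TSV (color, city), header included.
printf 'color\tcity\norange\tNew York\ngreen\tNew York\nblue\tNew York\ngold\tNew York\norange\tAmsterdam\nblue\tNew York\ngreen\tNew York\norange\tSweden\nblue\tTokyo\ngold\tNew York\n' > input.tsv

# Record each distinct (color, city) pair once, then count pairs per color.
awk 'BEGIN { FS = OFS = "\t" }
     NR > 1 { seen[$1 FS $2] }      # referencing the element dedups pairs
     END    { for (p in seen) { split(p, a, FS); n[a[1]]++ }
              for (c in n) print c, n[c] }' input.tsv |
  sort -k2,2nr -k1,1
```

Because each (color, city) pair lands in the array only once, a city repeated under the same color is counted a single time.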
Bonus solution that uses sqlite3:
$ sqlite3 -batch -noheader <<EOF
.mode tabs
.import input.tsv names
SELECT "column 10", count(DISTINCT "column 15") AS total
FROM names
GROUP BY "column 10"
ORDER BY total DESC, "column 10";
EOF
orange 3
blue 2
gold 1
green 1
I have a csv file with
value name date sentence
0000 name1 date1 I want apples
0021 name2 date1 I want bananas
0212 name3 date2 I want cars
0321 name1 date3 I want pinochio doll
0123 name1 date1 I want lemon
0100 name2 date1 I want drums
1021 name2 date1 I want grape
2212 name3 date2 I want laptop
3321 name1 date3 I want Pot
4123 name1 date1 I want WC
2200 name4 date1 I want ramen
1421 name5 date1 I want noodle
2552 name4 date2 I want film
0211 name6 date3 I want games
0343 name7 date1 I want dvd
I want to find the unique values in the name column (I know I have to use -f 2), but I also want to know how many times each appears, i.e. how many sentences they made.
eg: name1,5
name2,3
name3,2
name4,2
name5,1
name6,1
name7,1
Afterwards, I want another data set showing how many people there are per appearance count:
1 appearance, 3
2 appearance, 2
3 appearance, 1
4 appearance, 0
5 appearance, 1
The first part can be answered with awk as below:
awk -F" " 'NR>1 { print $2 } ' jerome.txt | sort | uniq -c
For the second part, you can pipe it through Perl and get the results as below
> awk -F" " 'NR>1 { print $2 } ' jerome.txt | sort | uniq -c | perl -lane '{$app{$F[0]}++} END {@c=sort keys %app; foreach($c[0] ..$c[$#c]) {print "$_ appearance,",defined($app{$_})?$app{$_}:0 }}'
1 appearance,3
2 appearance,2
3 appearance,1
4 appearance,0
5 appearance,1
EDIT1:
Second part using a Perl one-liner
> perl -lane '{$app{$F[1]}++ if $.>1} END {$app2{$_}++ for(values %app);@c=sort keys %app2;foreach($c[0] ..$c[$#c]) {print "$_ appearance,",$app2{$_}+0}}' jerome.txt
1 appearance,3
2 appearance,2
3 appearance,1
4 appearance,0
5 appearance,1
For the 1st report, you can use:
tail -n +2 file | awk '{print $2}' | sort | uniq -c
5 name1
3 name2
2 name3
2 name4
1 name5
1 name6
1 name7
For the 2nd report, you can use:
tail -n +2 file | awk '{print $2}'| sort | uniq -c | awk 'BEGIN{max=0} {map[$1]+=1; if($1>max) max=$1} END{for(i=1;i<=max;i++){print i" appearance,",(i in map)?map[i]:0}}'
1 appearance, 3
2 appearance, 2
3 appearance, 1
4 appearance, 0
5 appearance, 1
The extra complexity here comes from wanting a 0 for missing counts and the custom "appearance" text in the output.
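The zero-fill idiom `(i in map) ? map[i] : 0` can be seen on its own with a minimal run (the inline list of counts is made up for illustration):

```shell
# Count how often each value appears, then print every i from 1..max,
# substituting 0 where a count never occurred.
echo '5 1 1 3 2' |
  awk '{ for (j = 1; j <= NF; j++) { map[$j]++; if ($j > max) max = $j } }
       END { for (i = 1; i <= max; i++)
               print i " appearance,", ((i in map) ? map[i] : 0) }'
```

Here 4 never occurs in the input, so the fourth line reads "4 appearance, 0" instead of being skipped.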
What you are after is a classic example of combining a set of Linux core tools in a pipeline.
This solves your first problem:
$ awk '(NR>1){print $2}' file | sort | uniq -c
5 name1
3 name2
2 name3
2 name4
1 name5
1 name6
1 name7
This solves your second problem:
$ awk '(NR>1){print $2}' file | sort | uniq -c | awk '{print $1}' | uniq -c
1 5
1 3
2 2
3 1
You'll notice the formatting is a bit off, but this essentially solves your problem.
Of course, in awk you can do it in one go, but I do believe you should try to understand the pipeline above. Have a look at man sort and man uniq. The awk solution is:
Problem 1:
awk '(NR>1){a[$2]++}END{ for(i in a) print i "," a[i] }' file
name6,1
name7,1
name1,5
name2,3
name3,2
name4,2
name5,1
Problem 2:
awk '(NR>1){a[$2]++; m=(a[$2]<m?m:a[$2])}
END{ for(i in a) c[a[i]]++;
for(i=1;i<=m;++i) print i, "appearance,", c[i]+0
}' foo.txt
1 appearance, 3
2 appearance, 2
3 appearance, 1
4 appearance, 0
5 appearance, 1
Hello, I have a file containing these lines:
apple
12
orange
4
rice
16
How can I use bash to sort it by the numbers?
Suppose each number is the price of the object above it.
I want them formatted like this:
12 apple
4 orange
16 rice
or
apple 12
orange 4
rice 16
Thanks
A solution using paste + sort to get each product sorted by its price:
$ paste - - < file | sort -k 2nr
rice 16
apple 12
orange 4
Explanation
From the paste man page:
Write lines consisting of the sequentially corresponding lines from
each FILE, separated by TABs, to standard output. With no FILE, or
when FILE is -, read standard input.
paste reads the stream coming from stdin (your < file) and treats each line as belonging to the fictional file represented by -, so - - gives us two columns.
sort uses the flag -k 2nr to sort paste's output by the second column in reverse numerical order.
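Put together as a self-contained run (the file contents are rebuilt inline):

```shell
# Rebuild the sample file: name on one line, price on the next.
printf '%s\n' apple 12 orange 4 rice 16 > file

# paste - - joins every two input lines with a tab; sort -k 2nr orders
# the result by the numeric second field, highest price first.
paste - - < file | sort -k 2nr
```

The output columns are tab-separated, which is also what makes the -k 2 key in sort line up with the price.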
You can use awk:
awk '!(NR%2){printf "%s %s\n", $0, p}{p=$0}' inputfile
(slightly adapted from this answer)
If you want to sort the output afterwards, you can use sort (quite logically):
awk '!(NR%2){printf "%s %s\n", $0, p}{p=$0}' inputfile | sort -n
this would give:
4 orange
12 apple
16 rice
Another solution using awk
$ awk '/[0-9]+/{print prev, $0; next} {prev=$0}' input
apple 12
orange 4
rice 16
while read -r line1 && read -r line2;do
printf '%s %s\n' "$line1" "$line2"
done < input_file
If you want lines to be sorted by price, pipe the result to sort -k2:
while read -r line1 && read -r line2;do
printf '%s %s\n' "$line1" "$line2"
done < input_file | sort -k2
You can do this using paste and awk
$ paste - - <lines.txt | awk '{printf("%s %s\n",$2,$1)}'
12 apple
4 orange
16 rice
An awk-based solution that needs no external paste / sort, no regex, no modulo (%) arithmetic, and no awk/bash loops:
{m,g}awk '(_*=--_) ? (__ = $!_)<__ : ($++NF = __)_' FS='\n'
12 apple
4 orange
16 rice
So I'm doing some work on a shell script. I have this code:
echo "5 Matt male"
echo "8 Sarah female"
echo "9 Paul male"
I am meant to set a threshold number of 6, which will only output the lines whose numbers are above 6, hence the lines containing Sarah and Paul. But I have no idea how to do this. Sorry, it is also meant to print only the lines that contain "female".
Your data needs to be stored in a file.
file.txt:
5 Matt male
8 Sarah female
9 Paul male
awk '{ if ($1 > 6 && $3 == "female") print $0 }' file.txt
If you don't know how to use awk, take a look at this: http://cm.bell-labs.com/cm/cs/awkbook/
This sed command is described as follows
Delete the cars that are $10,000 or more. Pipe the output of the sort into a sed to do this, by quitting as soon as we match a regular expression representing 5 (or more) digits at the end of a record (DO NOT use repetition for this):
So far the command is:
$ grep -iv chevy cars | sort -nk 5
I think I have to add another pipe to the end of that command, one that "quits as soon as we match a regular expression representing 5 or more digits at the end of a record".
I tried things like
$ grep -iv chevy cars | sort -nk 5 | sed "/[0-9][0-9][0-9][0-9][0-9]/ q"
and other variations within the //, but nothing works! What is the command that matches a regular expression representing 5 or more digits at the end of a record and quits, as the question asks?
Nominally, you should add a $ before the second / to match 5 digits at the end of the record. If you omit the $, then any sequence of 5 digits will cause sed to quit, so if there is another number (a VIN, perhaps) before the price, it might match when you didn't intend it to.
grep -iv chevy cars | sort -nk 5 | sed '/[0-9][0-9][0-9][0-9][0-9]$/q'
On the whole, it's safer to use single quotes around the regex, unless you need to substitute a shell variable into it (or unless the regex contains single quotes itself). You can also specify the repetition:
grep -iv chevy cars | sort -nk 5 | sed '/[0-9]\{5,\}$/q'
The \{5,\} part matches 5 or more digits. If for any reason that doesn't work, you might find you're using GNU sed and you need to do something like sed --posix to get it working in the normal mode. Or you might be able to just remove the backslashes. There certainly are options to GNU sed to change the regex mechanism it uses (as there are with GNU grep too).
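A quick way to see the \{5,\} version in action (the car data below is invented, and note that q also prints the matching line before exiting):

```shell
# Invented sample, already sorted by price (field 5).
printf '%s\n' 'ford 1 2 3 9000' 'plym 1 2 3 9500' 'volvo 1 2 3 10000' 'bmw 1 2 3 12000' > cars.sorted

# Quit at the first record whose last field has 5+ digits; sed prints the
# matched line too, so the 10000 line still appears in the output.
sed '/[0-9]\{5,\}$/q' cars.sorted
```

If you want the first $10,000+ line suppressed as well, `sed -n '/[0-9]\{5,\}$/q;p'` is one way to do it.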
Another way.
As you didn't post a file sample, I did this as a guess.
Here I'm looking for lines with the word "chevy" where field 5 is less than 10000.
awk '/chevy/ {if ( $5 < 10000 ) print $0} ' cars
I forgot the -i flag from grep, so the correct version is (note that IGNORECASE is a GNU awk extension):
awk 'BEGIN{IGNORECASE=1} /chevy/ {if ( $5 < 10000 ) print $0} ' cars
$ cat > cars
Chevy 2 3 4 10000
Chevy 2 3 4 5000
chEvy 2 3 4 1000
CHEVY 2 3 4 10000
CHEVY 2 3 4 2000
Prevy 2 3 4 1000
Prevy 2 3 4 10000
$ awk 'BEGIN{IGNORECASE=1} /chevy/ {if ( $5 < 10000 ) print $0} ' cars
Chevy 2 3 4 5000
chEvy 2 3 4 1000
CHEVY 2 3 4 2000
Alternatively, delete every matching line instead of quitting:
grep -iv chevy cars | sort -nk 5 | sed '/[0-9][0-9][0-9][0-9][0-9]$/d'
Note that /regex/q prints the matched line before exiting, so the q versions above still output the first $10,000-or-more car; the d version does not.