printf format specifiers in awk do not work for multiple parameters - linux

I'm trying to write a Bash script named example7
which accepts as parameters a file name (let's call it file 1) and a list of
numbers (below we'll call it list 1). The program needs to print as output the columns of
file 1 after aligning them to the right or left by the numbers in list 1 (this can be done with awk's printf).
Example
Suppose the contents of an F1 file are:
A abcd ddd eee zz tt
ab gggwe 12 88 iii jjj
yaara yyzz 12abcd xyz x y z
After running the program with the command:
example7 F1 -8 -7 6 4
Output:
A       abcd      ddd eee
ab      gggwe      12  88
yaara   yyzz   12abcd xyz
In the example above there are 7 spaces between A and abcd, 6 spaces between abcd and ddd, and one space between ddd and eee.
Another example:
After running the program with the command:
example7 F1 -8 -7 6 4 5
Output:
A       abcd      ddd eee   zz
ab      gggwe      12  88  iii
yaara   yyzz   12abcd xyz    x
In the example above there are 7 spaces between A and abcd, 6 spaces between abcd and ddd, one space between ddd and eee, 3 spaces between eee and zz, two spaces between 88 and iii, and 4 spaces between xyz and x.
I've tried doing something like this:
file=$1
shift
awk '{printf "%'$1's\n" ,$1}' $file
but it only works for one number and one column, and I don't know how to extend it to multiple columns and multiple parameters.
Any help will be appreciated.

Set an awk variable to all the remaining parameters, then split it and loop over them.
file=$1
shift
awk -v sizes="$*" '{words = split(sizes, s); for(i = 1; i <= words; i++) printf("%" s[i] "s", $i); print ""; }' "$file"
It's generally wrong to try to substitute a shell variable directly into an awk script. You should prefer to set an awk variable using -v, and then use awk's own string concatenation operation, as I did with s[i].
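Putting that together, a complete example7 could look like the sketch below (the script and file names follow the question; any number of width arguments is accepted):
#!/bin/bash
# example7: print the columns of a file, each padded to the given printf width.
# Usage: example7 FILE width1 [width2 ...]   (negative widths left-align)
file=$1
shift
# Pass all remaining widths to awk in one variable, then build the format
# for each column by concatenating "%", the width, and "s".
awk -v sizes="$*" '{
    n = split(sizes, s)
    for (i = 1; i <= n; i++)
        printf("%" s[i] "s", $i)
    print ""
}' "$file"
Running ./example7 F1 -8 -7 6 4 on the sample file should then produce the output shown in the first example.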

sed: filter string subset from lines matching regexp

I have a file of the following format:
abc: A B C D E
abc: 1 2 3 4 5
def D E F G H
def: 10 11 12 23 99
...
That is, a line with strings after ':' is a header for the next line, which contains numbers. I'd like to use sed to extract only the line starting with the PATTERN string that has numbers in it.
The number of numbers in a line is variable, but assume that I know exactly how many I'm expecting, so I tried this command:
% sed 's/^abc: \([0-9]+ [0-9]+ [0-9]+\)$/\1/g' < file.txt
But it dumps all entries from the file. What am I doing wrong?
sed does substitutions and prints each line, whether a substitution happens or not.
Your regular expression is also wrong. It would match only three numbers separated by spaces, and only if the extended regex flag (-E) were given. Without it, it would not match even that, because the + sign is interpreted literally.
The best here is to use addresses and only print lines that have a match:
sed -nE '/^abc: [0-9]+ [0-9]+ [0-9]+ [0-9]+ [0-9]+$/p' < file.txt
or better,
sed -nE '/^abc:( [0-9]+){5}$/p' < file.txt
The -n flag disables the "print all lines" behavior of sed described above. Only the lines that reach the p command will be printed.
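Run against the sample file above (assuming five numbers per line, as in the data shown), either command should print just the numeric line:
$ sed -nE '/^abc:( [0-9]+){5}$/p' file.txt
abc: 1 2 3 4 5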
Extracting only a line that starts with the pattern and has numbers in it, where the number of numbers is variable, means at least one number, so:
$ sed -n '/abc: \([0-9]\+\)/p' file
Output:
abc: 1 2 3 4 5
With exactly 5 numbers, use:
$ sed -n '/abc: \([0-9]\+\( \|$\)\)\{5\}/p' file
Regarding #Mark's additional question in a comment, "If I want to just extract the matched numbers (and remove prefix, e.g., abc)…", this is the pattern I came up with:
sed -En 's/^abc: (([0-9]+[ \t]?)+)[ \t]*$/\1/gp' file.txt
I'm using the -E flag for extended regular expressions to avoid all the escaping that would be needed.
Given this file:
abc: A B C D E
abc: 1 2 3 4 5
abc: 1 c9 A 7f
def D E F G H
def: 10 11 12 23 99
This regex matches abc: 1 2 3 4 5 while excluding abc: 1 c9 A 7f, and it also allows variable whitespace between and after the numbers.
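On that file, the substitution should print only the captured numbers:
$ sed -En 's/^abc: (([0-9]+[ \t]?)+)[ \t]*$/\1/gp' file.txt
1 2 3 4 5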
With any sed:
$ sed -n 's/^abc: \([0-9 ]*\)$/\1/p' file
1 2 3 4 5

linux/unix convert delimited file to fixed width

I have a requirement to convert a delimited file to a fixed-width file, details as follows.
Input file sample:
AAA|BBB|C|1234|56
AA1|BB2|DD|12345|890
Output file sample:
AAA  BBB   C   1234  56
AA1  BB2   DD  12345 890
Details of field positions
Field 1 starts at position 1 and length should be 5
Field 2 starts at position 6 and length should be 6
Field 3 starts at position 12 and length should be 4
Field 4 starts at position 16 and length should be 6
Field 5 starts at position 22 and length should be 3
One awk solution:
echo -e "AAA|BBB|C|1234|56\nAA1|BB2|DD|12345|890" |
awk -F '|' '{printf "%-5s%-6s%-4s%-6s%-3s\n",$1,$2,$3,$4,$5}'
Note the - in each format specifier (e.g. %-3s) of the printf statement, which left-aligns the fields, as required in the question. Output:
AAA  BBB   C   1234  56
AA1  BB2   DD  12345 890
With the following awk command you can achieve your goal:
awk 'BEGIN { RS=" "; FS="|" } { printf "%5s%6s%4s%6s%3s\n",$1,$2,$3,$4,$5 }' your_input_file
Your record separator (RS) is a space and your field separator (FS) is a pipe (|) character. In order to parse your data correctly we set them in the BEGIN statement (before any data is read). Then using printf and the desired format characters we output the data in the desired format.
Output:
  AAA   BBB   C  1234 56
  AA1   BB2  DD 12345890
Update:
I just saw your edits on the input file format (previously it seemed different). If your input records are separated by newlines, then simply remove the RS=" "; part from the above one-liner and apply the - modifier to the format specifiers to left-align your fields:
awk 'BEGIN { FS="|" } { printf "%-5s%-6s%-4s%-6s%-3s\n",$1,$2,$3,$4,$5 }' your_input_file
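With the sample input, this should produce fields starting at columns 1, 6, 12, 16 and 22, matching the requested layout:
AAA  BBB   C   1234  56
AA1  BB2   DD  12345 890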

add a column with different label

I want to add a column that contains two different labels. Let's say I have this text
aa bb cc
dd ee ff
gg hh ii
ll mm nn
oo pp qq
and I want to add 1 at the first column of the first two lines and 2 at the first column of the remaining lines, so that eventually I will get this text:
1 aa bb cc
1 dd ee ff
2 gg hh ii
3 ll mm nn
4 oo pp qq
Do you know how to do it?
thanks
Assuming you are processing a text file in a Linux shell, you could use awk for this. Your problem description says you want two labels, 1 and 2; that would be
cat input.txt | awk '{print (NR<=2 ? "1 ":"2 ") $0}'
Your expected output says you want label 1 for the first two lines, and to start counting from 2 beginning with the third line; that would be
cat input.txt | awk '{print (NR<=2 ? "1 ":NR-1" ") $0}'
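Assuming the sample text is in input.txt (as the commands above use), the two variants should give:
$ cat input.txt | awk '{print (NR<=2 ? "1 ":"2 ") $0}'
1 aa bb cc
1 dd ee ff
2 gg hh ii
2 ll mm nn
2 oo pp qq
$ cat input.txt | awk '{print (NR<=2 ? "1 ":NR-1" ") $0}'
1 aa bb cc
1 dd ee ff
2 gg hh ii
3 ll mm nn
4 oo pp qq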
I'm assuming that you want to do this using the shell. If your data is in a file called input.txt, you can use either cat -n or nl.
% tail -n+2 input.txt | cat -n
1 dd ee ff
2 gg hh ii
3 ll mm nn
4 oo pp qq
% tail -n+2 input.txt | nl
1 dd ee ff
2 gg hh ii
3 ll mm nn
4 oo pp qq
The first line can be added back manually.
The two commands will behave differently if you have empty lines in your input file.
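For completeness, one way to put the first line back with its label might be the following sketch (note that nl and cat -n pad their numbers, so a hand-written prefix will not line up exactly):
{ printf '1 %s\n' "$(head -n1 input.txt)"; tail -n+2 input.txt | nl; }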
Could you please try the following and let me know if this helps you.
Solution 1st: Use a variable named count with an initial value of 1. If the line number is 1 or 2, simply prepend 1 to $1; otherwise increment count and prepend it to $1.
awk -v count=1 '{$1=NR==1||NR==2?1 FS $1:++count FS $1} 1' Input_file
Solution 2nd: If the line number is 1 or 2, simply prepend 1 to $1; otherwise, if the line is not empty, prepend NR-1 (the line number minus 1) to $1.
awk '{$1=NR==1||NR==2?1 FS $1:(NF?FNR-1 FS $1:"")} 1' Input_file

compare columns from different files and print those that DO NOT match

I have two files, file1 and file2. I want to compare several columns - $1, $2, $3 and $4 of file1 - with columns $1, $2, $3 and $4 of file2, and print those rows of file2 that do not match any row in file1.
E.g.
file1
aaa bbb ccc 1 2 3
aaa ccc eee 4 5 6
fff sss sss 7 8 9
file2
aaa bbb ccc 1 f a
mmm nnn ooo 1 d e
aaa ccc eee 4 a b
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
fff sss sss 7 5 6
I want to have as output:
mmm nnn ooo 1 d e
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
I have seen questions asked here for finding those that do match and printing them, but not vice versa, those that DO NOT match.
Thank you!
Use the following script:
awk '{k=$1 FS $2 FS $3 FS $4} NR==FNR{a[k]; next} !(k in a)' file1 file2
k is the concatenated value of columns 1, 2, 3 and 4, delimited by FS (see comments), and will be used as a key in a lookup array a later. NR==FNR is true while reading file1, so that is where I create the array a indexed by k.
For the remaining lines of input I check with !(k in a) whether the key is absent from a. If that evaluates to true, awk prints the line.
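Applied to the sample files, this should print exactly the rows of file2 whose first four columns never appear in file1:
$ awk '{k=$1 FS $2 FS $3 FS $4} NR==FNR{a[k]; next} !(k in a)' file1 file2
mmm nnn ooo 1 d e
ppp qqq rrr 4 e a
sss ttt uuu 7 m n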
Here is another approach, if the files are sorted and you know the character set used:
$ function f(){ sed 's/ /~/g;s/~/ /4g' $1; }; join -v2 <(f file1) <(f file2) |
sed 's/~/ /g'
mmm nnn ooo 1 d e
aaa ccc eee 4 a b
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
fff sss sss 7 5 6
Create a key field by concatenating the first four fields (with a ~ character, but any character not present in the data will do), use join to find the unmatched entries from file2, and then split the synthetic key field back apart.
However, the best way is to use the awk solution with a slight fix:
$ awk 'NR==FNR{a[$1,$2,$3,$4]; next} !(($1,$2,$3,$4) in a)' file1 file2
No doubt that the awk solution from #hek2mgl is better than this one, but for information this is also possible using uniq, sort, and rev:
rev file1 file2 | sort -k3 | uniq -u -f2 | rev
rev reverses each line of both files character by character, so the original last two columns come first.
sort -k3 sorts the lines starting at the 3rd field, i.e. skipping the first 2 (reversed) fields.
uniq -u -f2 prints only the lines that are unique, again skipping the first 2 fields while comparing, so the comparison effectively covers the original first four columns.
Finally, rev turns the surviving lines back around.
This solution sorts the lines of both files. That might be desired or not.

In AWK, how to split consecutive rows that have the same string as a "record"?

Let's say I have the text below.
aaaaaaa
aaaaaaa
bbb
bbb
bbb
ccccccccccccc
ddddd
ddddd
Is there a way to modify the text as follows?
1 aaaaaaa
1 aaaaaaa
2 bbb
2 bbb
2 bbb
3 ccccccccccccc
4 ddddd
4 ddddd
You could use something like this in awk:
$ awk '{print ($0!=p?++i:i),$0;p=$0}' file
1 aaaaaaa
1 aaaaaaa
2 bbb
2 bbb
2 bbb
3 ccccccccccccc
4 ddddd
4 ddddd
i is incremented whenever the current line differs from the previous line. p holds the value of the previous line (it is set to $0 after printing).
Alternatively, as suggested by JID:
awk '$0!=p{p=$0;i++}{print i,$0}' file
When the current line differs from p, replace p and increment i. See the comments for discussion of the pros and cons of either approach :)
A further contribution (and even shorter!) by NeronLeVelu:
$ awk '{print i+=($0!=p),p=$0}' file
This version performs the addition assignment and basic assignment within the print statement. This works because the return value of each assignment is the value that has been assigned.
As pointed out in the comments, if the first line of the file is empty, the behaviour changes slightly. Assuming that the first line should always begin with a 1, the following block can be added to the start of any of the one-liners:
NR==1{p=$0;i=1}
i.e. on the first line, initialise p to the contents of the line (empty or not) and i to 1. Thanks to Wintermute for this suggestion.
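Combined with the first one-liner, that would look something like this:
awk 'NR==1{p=$0;i=1} {print ($0!=p?++i:i),$0;p=$0}' file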
