Rearrange column with empty values using awk or sed - linux

I want to rearrange the columns of a txt file, but there are empty values, which cause a problem. For example:
testfile:
Name    ID      Count   Date    Other
A       1       10      513     x
        6       15      312     x
        3       18      314     x
B       19      31      942     x
        8       29      722     x
When I tried $ more testfile | awk '{print $2"\t"$1"\t"$3"\t"$4"\t"$5}'
it becomes:
ID      Name    Count   Date    Other
1       A       10      513     x
15      6       312     x
18      3       314     x
19      B       31      942     x
29      8       722     x
which is not what I want. Please help; I want it to be:
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
Moreover, I am not sure which columns might contain empty values, and the column widths are not fixed. Thank you.

Assuming your input file is not tab-separated and you have (or can get) GNU awk then I recommend:
$ awk -v FIELDWIDTHS="8 8 8 8 8" -v OFS='\t' '{
    for (i=1; i<=NF; i++) {
        gsub(/^\s+|\s+$/,"",$i)     # trim leading/trailing blanks from each fixed-width field
    }
    t=$1; $1=$2; $2=t               # swap the first two fields
}1' file
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
If your file is tab-separated then all you need is:
awk 'BEGIN{FS=OFS="\t"} {t=$1; $1=$2; $2=t}1' file

Another awk alternative is to use the number of fields. If you know your data and the only possible missing value is in the first column, you can try this.
awk -v OFS="\t" 'NF==4{$5=$4;$4=$3;$3=$2;$2=$1;$1=""} {print $2,$1,$3,$4,$5}'
However, the output will be tab-separated instead of fixed-width format. You can achieve the same thing using printf and changing OFS, but perhaps tab-separated is what you really need for a tabular representation.
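For instance, a minimal sketch of that printf variant (my own, not from the answer; the 8-character column width is an assumption based on the sample data):
awk 'NF==4 { $5=$4; $4=$3; $3=$2; $2=$1; $1="" }               # shift fields right when Name is missing
     { printf "%-8s%-8s%-8s%-8s%-8s\n", $2, $1, $3, $4, $5 }' file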

The most natural model for awk to use is columns as defined by the transitions from white-space to non-white-space and back. Since you have columns that may themselves be white-space, the natural model won't work.
However, you can revert to using a model based on column positions rather than transitions, meaning that a file containing only spaces (the presence of tabs will complicate things):
Name    ID      Count   Date    Other
A       1       10      513     x
        6       15      312     x
        3       18      314     x
B       19      31      942     x
        8       29      722     x
can still be rearranged, though not as succinctly as transition-based columns.
The following awk script will do the trick, swapping name and id:
{
    name  = substr($0,  1, 7);
    id    = substr($0,  9, 7);
    count = substr($0, 17, 7);
    date  = substr($0, 25, 7);
    other = substr($0, 33);
    print id" "name" "count" "date" "other;
}
If the original file is called pax.in and the awk script is stored in pax.awk, the command awk -f pax.awk pax.in will give you, as desired:
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
Keep in mind I've written that script to be relatively flexible, allowing you to change the order of the columns quite easily. If all you want is to swap the first two columns, you could use:
awk '{print substr($0,9,8)substr($0,1,8)substr($0,17)}' qq.in
or the slightly shorter (if you're allowed to use other tools):
sed -E 's/^(.{8})(.{8})/\2\1/' qq.in

Related

Datamash: Transposing the column into rows based on group in bash

I have a tab-delimited file with 2 columns, like the following:
A 123
A 23
A 45
A 67
B 88
B 72
B 50
B 23
C 12
C 14
I want to transpose the above data based on the first column, like the following:
A 123 23 45 67
B 88 72 50 23
C 12 14
I tried datamash transpose < input-file.txt but it didn't yield the output I expected.
One awk version:
awk '{printf ($1!=f?"\n%s":" "$2),$0;f=$1}' file
A 123 23 45 67
B 88 72 50 23
C 12 14
With this version you get one leading blank line, but it should be fast and handle large data, since no loops or array variables are used.
$1!=f?"\n%s":" "$2 means: if the first field is not equal to f, print a newline followed by all the fields ($0);
if $1 == f, only print field 2.
f=$1 sets f to the first field for comparison with the next line.
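If the leading blank line is unwanted, one possible variation (my own sketch along the same lines, not part of the original answer) handles the first record separately and adds the final newline in an END block:
awk 'NR==1 { printf "%s", $0; f=$1; next }          # first record: no leading newline
     { printf ($1!=f ? "\n%s" : " "$2), $0; f=$1 }  # same logic as the answer above
     END { if (NR) print "" }' file                 # terminate the last output line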
datamash --group=1 --field-separator=' ' collapse 2 <file | tr ',' ' '
Output:
A 123 23 45 67
B 88 72 50 23
C 12 14
Input must be sorted, as in the question.
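If it is not already grouped, a small sketch that groups it first (assuming GNU sort; the -s flag keeps the original order of lines within each group) would be:
sort -s -k1,1 file | datamash --group=1 --field-separator=' ' collapse 2 | tr ',' ' '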
This might work for you (GNU sed):
sed -E ':a;N;s/^((\S+)\s+.*)\n\2/\1/;ta;P;D' file
Append the next line and if the first field of the first line is the same as the first field of the second line, remove the newline and the first field of the second line. Print the first line in the pattern space and then delete it and the following newline and repeat.

Finding if a column is in a range

I have two files, and I want to find out whether a column of file1 falls within a range defined by columns of file2.
file1.txt
1 19
1 21
1 24
2 22
4 45
file2.txt
1 19 23 A
1 20 28 A
4 42 45 A
I am trying to see if the 1st column of file1.txt is the same as the 1st column of file2.txt and whether the second column of file1.txt is between the 2nd and 3rd columns of file2.txt, and to append it if it is in the range.
So the output should be :
output.txt
1 19 23 A 1 19
1 19 23 A 1 21
1 20 28 A 1 24
4 42 45 A 4 45
What I am trying is to find out whether the first columns are the same:
awk 'NR==FNR{c[$1]++;next};c[$1] > 0' file1.txt file2.txt
1 19 23 A
1 20 28 A
4 42 45 A
But I am not able to add the greater-than/less-than conditions.
How do I add them?
The following may also help you here.
while read first second
do
    awk -v fir="$first" -v sec="$second" '$1==fir && ($2<=sec && $3>=sec){print $0,fir,sec}' file2
done < "file1"
Using join + awk:
join file2.txt file1.txt | awk '{if ($2 <= $5 && $5 <= $3) { print $1,$2,$3,$4,$1,$5 } }'
First, the two files are joined on the first column; then the columns are compared and matching lines are printed (with the first column printed twice, since join outputs the join field only once).
Using awk:
$ awk 'NR==FNR{a[$1]=a[$1]" "$2;next} {split(a[$1],b);for(i in b) if(b[i]>=$2 && b[i]<=$3) print $0,$1,b[i]}' file1 file2
1 19 23 A 1 19
1 19 23 A 1 21
1 20 28 A 1 21
1 20 28 A 1 24
4 42 45 A 4 45
The first block statement stores the elements of file1 in the array a. The array index is the first column of the file, and the array element is the concatenation of all numbers from the second column that share the same first-column value.
The second block statement loops over the elements of a with the same index as the first column and checks whether each number in the array lies within the range.
Another approach is to use join:
$ join -o 1.1 1.2 1.3 1.4 1.1 2.2 file2 file1 | awk '$6 >= $2 && $6 <= $3'
1 19 23 A 1 19
1 19 23 A 1 21
1 20 28 A 1 21
1 20 28 A 1 24
4 42 45 A 4 45
join -o generates the expected output format. The awk statement filters the lines that are in range.
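Note that some join implementations want the output format as a single argument; a hedged equivalent using the comma-separated form that POSIX allows would be:
join -o 1.1,1.2,1.3,1.4,1.1,2.2 file2 file1 | awk '$6 >= $2 && $6 <= $3'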

How to find a pattern and perform an operation on another field in awk?

I have a file with 4 columns separated by spaces, like this below:
1_86500000 50 1_87500000 19
1_87500000 13 1_89500000 42
1_89500000 25 1_90500000 10
1_90500000 3 1_91500000 11
1_91500000 23 1_92500000 29
1_92500000 34 1_93500000 4
1_93500000 39 1_94500000 49
1_94500000 35 1_95500000 26
2_35500000 1 2_31500000 81
2_31500000 12 2_4150000 50
The first and third columns are not in phase, so I cannot simply divide the value of one by the other.
Since a key may appear in $1, in $3, or in both, a solution would be to look for each key, divide the value found in one column by the value found in the other, or set the missing side to 0, as this expected result shows:
P.S. the second field in this expected result is just illustrative, to show the division.
1_86500000 0/50 0
1_87500000 19/13 1.46154
1_89500000 42/25 1.68
1_90500000 10/3 3.333
1_91500000 11/23 0.47826
1_92500000 29/34 0.85294
1_93500000 4/39 0.10256
1_94500000 49/35 1.4
2_35500000 0/1 0
2_31500000 81/12 6.75
2_4150000 50/0 50
I have not achieved anything by myself other than this, so I do not have a starting point right now.
I tried separating the fields merged with _ to see if I could match by subtracting the coordinates. If I got 0, it would mean that the columns were in phase and correct. But I could not go further.
awk '{if( ($5-$2)==0) print $1,$2,$3,$4,$5,$6}' file
I tried to match both columns, but I only got the in-phase results:
awk '{if(($1==$3)) print $1,$4/$2}' file
Can you help me?
awk to the rescue!
$ awk '{d[$1]=$2; n[$3]=$4}
       END {for (k in n)
                if (k in d) {print k, n[k]"/"d[k], n[k]/d[k]; delete d[k]}
                else print k, n[k]"/0", n[k];
            for (k in d) print k, "0/"d[k], 0}' file | sort
1_86500000 0/50 0
1_87500000 19/13 1.46154
1_89500000 42/25 1.68
1_90500000 10/3 3.33333
1_91500000 11/23 0.478261
1_92500000 29/34 0.852941
1_93500000 4/39 0.102564
1_94500000 49/35 1.4
1_95500000 26/0 26
2_31500000 81/12 6.75
2_35500000 0/1 0
2_4150000 50/0 50
Your division-by-zero result is a little strange, though!
Explanation: keep two arrays, one for the numerators and one for the denominators. Once the file has been scanned, go over the numerator array, find the corresponding denominator, and do the division. For the denominators that were not used, apply the convention given.

Use part of a column in one file as search term in other file

I have two files. The output file I am searching has earthquake locations and has the following format:
19090212 1323 30.12 36 19.41 103 28.24 7.29 0.00 4 149 25.8 0.02 5.7 9.8 D - 0
19090216 1828 49.61 36 13.27 101 35.38 10.94 0.00 13 54 38.5 0.07 0.3 0.7 B 0
19090711 2114 54.11 35 1.07 99 56.42 7.00 0.00 7 177 18.7 4.00 63.3 53.2 D # 0
I want to use the last 6 digits of the first column (i.e. '090418' out of '19090418') with the first 3 digits of the second column (i.e. '072' out of '0728') as my search term. The file I am searching has the following format:
SC17 P 090212132329.89
X25A P 090212132330.50
AMTX P 090216182814.12
X29A P 090216182813.70
Y28A P 090216182822.36
MSTX P 090216182826.80
Y27A P 090216182831.43
After I search the second file for the term, I need to figure out how many lines are in that section. So for this example, if I were searching the terms shown for the second file above, I want to know there are 2 lines for 090212132 and 5 lines for 090216182.
This is my first post, so please let me know how I can improve clarity or conciseness in my posts. Thanks for the help!
awk to the rescue!
$ awk 'NR==FNR {a[substr($1,3) substr($2,1,3)]; next}
       {k = substr($3,1,9)}
       k in a {a[k]++}
       END {for (k in a) if (a[k] > 0) print k, a[k]}' file1 file2
with your input files, there is no output as expected.
The answer karakfa suggested worked! My output looks like this:
100224194 7
100117172 18
091004005 11
090520220 10
090526143 21
090122033 20
Thanks for the help!
karakfa's answer with explanation:
awk 'NR==FNR {                      # for the first file
        $1 = substr($1, 3);         # keep the last 6 characters of the first column
        $2 = substr($2, 1, 3);      # keep the first 3 characters of the second column
        a[$1 $2];                   # add the combined key to an array
        next }                      # move to the next record of the first file
     # start processing the second file
     {k = substr($3, 1, 9)}         # take the first 9 characters of the third column
     k in a {a[k]++}                # if the key is in a, increment its count
     END {
        for (k in a)                # iterate over the array
            if (a[k] > 0)           # if the pattern was matched
                print k, a[k]       # print the pattern and the number of occurrences
     }'

How to extract one column from multiple files, and paste those columns into one file?

I want to extract the 5th column from multiple files, named in a numerical order, and paste those columns in sequence, side by side, into one output file.
The file names look like:
sample_problem1_part1.txt
sample_problem1_part2.txt
sample_problem2_part1.txt
sample_problem2_part2.txt
sample_problem3_part1.txt
sample_problem3_part2.txt
......
Each problem file (1,2,3...) has two parts (part1, part2). Each file has the same number of lines.
The content looks like:
sample_problem1_part1.txt
1 1 20 20 1
1 7 21 21 2
3 1 22 22 3
1 5 23 23 4
6 1 24 24 5
2 9 25 25 6
1 0 26 26 7
sample_problem1_part2.txt
1 1 88 88 8
1 1 89 89 9
2 1 90 90 10
1 3 91 91 11
1 1 92 92 12
7 1 93 93 13
1 5 94 94 14
sample_problem2_part1.txt
1 4 330 30 a
3 4 331 31 b
1 4 332 32 c
2 4 333 33 d
1 4 334 34 e
1 4 335 35 f
9 4 336 36 g
The output should look like this (in the sequence problem1_part1, problem1_part2, problem2_part1, problem2_part2, problem3_part1, problem3_part2, etc.):
1 8 a ...
2 9 b ...
3 10 c ...
4 11 d ...
5 12 e ...
6 13 f ...
7 14 g ...
I was using:
paste sample_problem1_part1.txt sample_problem1_part2.txt > \
sample_problem1_partall.txt
paste sample_problem2_part1.txt sample_problem2_part2.txt > \
sample_problem2_partall.txt
paste sample_problem3_part1.txt sample_problem3_part2.txt > \
sample_problem3_partall.txt
And then:
for i in `find . -name "sample_problem*_partall.txt"`
do
    l=`echo $i | sed 's/sample/extracted_col_/'`
    `awk '{print $5, $10}' $i > $l`
done
And:
paste extracted_col_problem1_partall.txt \
extracted_col_problem2_partall.txt \
extracted_col_problem3_partall.txt > \
extracted_col_problemall_partall.txt
It works fine with a few files, but it's a crazy method when the number of files is large (over 4000).
Could anyone help me with simpler solutions that are capable of dealing with multiple files, please?
Thanks!
Here's one way using awk and a sorted glob of files:
awk '{ a[FNR] = (a[FNR] ? a[FNR] FS : "") $5 } END { for(i=1;i<=FNR;i++) print a[i] }' $(ls -1v *)
Results:
1 8 a
2 9 b
3 10 c
4 11 d
5 12 e
6 13 f
7 14 g
Explanation:
For each line of each input file:
Add the file's line number to the array, with column 5 as the value.
(a[FNR] ? a[FNR] FS : "") is a ternary operation, set up to build the array's value into a record. It simply asks whether the file's line number is already in the array. If so, append the array's existing value, followed by the default field separator, before adding the fifth column. Otherwise, if the line number is not in the array, don't prepend anything; just let it equal the fifth column.
At the end of the script:
Use a C-style loop to iterate through the array, printing each of the array's values.
For only ~4000 files, you should be able to do:
find . -name "sample_problem*_part*.txt" | xargs paste
If find is giving names in the wrong order, pipe it to sort:
find . -name "sample_problem*_part*.txt" | sort ... | xargs paste
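Note that this pastes whole files side by side rather than only their fifth columns. Assuming every input file has exactly five whitespace-separated columns with no empty fields (as in the samples), a sketch that keeps just every fifth column of the combined output, using GNU sort's -V for natural ordering of the numbered names, could be:
find . -name "sample_problem*_part*.txt" | sort -V | xargs paste |
awk '{ out=""; for (i=5; i<=NF; i+=5) out = out (out ? OFS : "") $i; print out }'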
# print filenames in sorted order
find -name sample\*.txt | sort |
# extract 5-th column from each file and print it on a single line
xargs -n1 -I{} sh -c '{ cut -s -d " " -f 5 $0 | tr "\n" " "; echo; }' {} |
# transpose
python transpose.py ?
where transpose.py:
#!/usr/bin/env python
"""Write lines from stdin as columns to stdout."""
import sys
from itertools import izip_longest
missing_value = sys.argv[1] if len(sys.argv) > 1 else '-'
for row in izip_longest(*[column.split() for column in sys.stdin],
                        fillvalue=missing_value):
    print " ".join(row)
Output
1 8 a
2 9 b
3 10 c
4 11 d
5 ? e
6 ? f
? ? g
Assuming the first and second files have fewer lines than the third one (missing values are replaced by '?').
Try this one. My script assumes that every file has the same number of lines.
# get number of lines
lines=$(wc -l sample_problem1_part1.txt | cut -d' ' -f1)

for ((i=1; i<=$lines; i++)); do
    for file in sample_problem*; do
        # get line number $i and delete everything except the last column,
        # then print it
        # echo -n means that no newline is appended
        echo -n $(sed -n ${i}'s%.*\ %%p' $file)" "
    done
    echo
done
This works. For 4800 files, each 7 lines long, it took 2 minutes 57.865 seconds on an AMD Athlon(tm) X2 Dual Core Processor BE-2400.
PS: The time for my script increases linearly with the number of lines. It would take a very long time to merge files with 1000 lines; you should consider learning awk and using the script from steve. I tested it: for 4800 files, each with 1000 lines, it took only 65 seconds!
You can pass awk output to paste and redirect it to a new file as follows:
paste <(awk '{print $3}' file1) <(awk '{print $3}' file2) <(awk '{print $3}' file3) > file.txt
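Adapted to this question's fifth column (the output name extracted.txt is just a placeholder), a usage sketch for a few of the sample files would be:
paste <(awk '{print $5}' sample_problem1_part1.txt) \
      <(awk '{print $5}' sample_problem1_part2.txt) \
      <(awk '{print $5}' sample_problem2_part1.txt) > extracted.txt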
