I want to filter a matrix file using a column from another file.
I have two tab-separated files, one of which contains a matrix. I want to filter the matrix file based on the first column of FileB: if a header (column name) of the matrix file (FileA) is present in the first column of FileB, I want to keep that column and write it to a new file. All the solutions I could find were based on filtering rows, not fields. Any help is appreciated. Thanks!
FileA
A B C D E F G H I J K L M N
R1 0 0 0 0 0 0 0 0 0 1 0 0 1 1
R2 1 1 0 1 0 0 0 0 1 0 1 0 0 0
R3 0 0 0 0 0 0 0 0 0 0 0 0 0 1
R4 1 1 0 1 0 0 0 1 0 1 0 1 0 0
R5 0 0 0 0 1 0 1 0 1 0 1 0 1 0
FileB
A Green
B Purple
K Blue
L Blue
Z Green
M Purple
N Red
O Red
U Red
My expected output is:
ExpectedOutput
A B K L M N
R1 0 0 0 0 1 1
R2 1 1 1 0 0 0
R3 0 0 0 0 0 1
R4 1 1 0 1 0 0
R5 0 0 1 0 1 0
Oh, what the heck. I'm not sure having you post an R script is really going to make any difference other than satisfying my need to be pedantic, so here y'go:
$ cat tst.awk
NR == FNR {                              # first file (fileB): record the wanted field names
    outFldNames2Nrs[$1] = ++numOutFlds
    next
}
FNR == 1 {                               # header line of fileA
    $0 = "__" FS $0                      # pad the header so it lines up with the data rows
    for (inFldNr=1; inFldNr<=NF; inFldNr++) {
        outFldNr = outFldNames2Nrs[$inFldNr]
        out2inFldNrs[outFldNr] = inFldNr # map output field position -> input field position
    }
}
{
    printf "%s", $1
    for (outFldNr=1; outFldNr<=numOutFlds; outFldNr++) {
        inFldNr = out2inFldNrs[outFldNr]
        if (inFldNr) {                   # skip names from fileB that fileA does not have
            printf "%s%s", OFS, $inFldNr
        }
    }
    print ""
}
$ awk -f tst.awk fileB fileA
__ A B K L M N
R1 0 0 0 0 1 1
R2 1 1 1 0 0 0
R3 0 0 0 0 0 1
R4 1 1 0 1 0 0
R5 0 0 1 0 1 0
I'm using the term "field name" to refer to the letter at the top of each column ("field" in awk). Try to figure the rest out for yourself from the man pages, adding print statements if/when useful, and then feel free to ask questions if you have any.
I added __ at the front of your header line so you'd have the same number of columns in every line of output - that makes it easier to pass along to other tools for further manipulation, but it's easy to tweak the code (or the output, see below) to not do that if you don't like it.
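For example, one possible way (an untested sketch) is to strip the pad from the output instead of touching the script; this just deletes the marker from the first output line:
$ awk -f tst.awk fileB fileA | sed '1s/^__ //'
A B K L M N
R1 0 0 0 0 1 1
...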
As @EdMorton mentions, bash may not be a suitable tool for manipulating a complex data structure such as a table, from a maintainability and robustness point of view.
Here is a bash script example just for information:
#!/bin/bash

declare -A seen
declare -a ary include

# remember every name in the first column of FileB
while read -r alpha color; do
    seen["$alpha"]=1
done < FileB

while read -r -a ary; do
    if (( nr++ == 0 )); then                    # handle header line
        echo -n " "
        for (( i=0; i<${#ary[@]}; i++ )); do
            alpha="${ary[$i]}"
            if [[ ${seen["$alpha"]} = 1 ]]; then
                echo -n " $alpha"
                include[$((i+1))]=1             # remember which fields to keep
            fi
        done
    else
        echo -n "${ary[0]}"
        for (( i=1; i<${#ary[@]}; i++ )); do
            if [[ ${include[$i]} = 1 ]]; then
                echo -n " ${ary[$i]}"
            fi
        done
    fi
    echo
done < FileA
If Python is an option, you could instead do something like:
import pandas as pd

# names in the first column of FileB
dfb = pd.read_csv("./FileB", sep=r"\s+", header=None)
vb = [x[0] for x in dfb.values.tolist()]

# keep the FileA columns whose names also appear in FileB
dfa = pd.read_csv("./FileA", sep=r"\s+")
va = dfa.columns.tolist()
print(dfa[sorted(set(va) & set(vb))])  # sorted() gives alphabetical order, which matches the original order here
Output:
A B K L M N
R1 0 0 0 0 1 1
R2 1 1 1 0 0 0
R3 0 0 0 0 0 1
R4 1 1 0 1 0 0
R5 0 0 1 0 1 0
I have a large text file: 400k rows and 10k columns, with all numeric data values being 0, 1, or 2. The file size ranges from 5 to 10 GB. The file has a few missing values (NAs), and I want to replace each NA with its column mean, i.e. an NA in column 'x' must be replaced by the mean value of column 'x'. These are the steps I want to take:
Compute the mean of each column of my text file (excluding the header and starting from the 7th column)
Replace the NAs in each column with their respective column mean
Write the modified file back out as a txt file
Data subset:
IID FID PAT MAT SEX PHENOTYPE X1 X2 X3 X4......
1234 1234 0 0 1 -9 0 NA 0 1
2346 2346 0 0 2 -9 1 2 NA 1
1334 1334 0 0 2 -9 2 NA 0 2
4566 4566 0 0 2 -9 2 2 NA 0
4567 4567 0 0 1 -9 NA NA 1 1
# total 400k rows and 10k columns
Desired Output:
# Assuming only 5 rows as given in the above example.
# Mean of column X1 = (0 + 1 + 2 + 2)/4 = 1.25
# Mean of column X2 = (2 + 2)/2 = 2
# Mean of column X3 = (0 + 0 + 1)/3 = 0.33
# Mean of column X4 = No NAs, so no replacements
# Replacing NAs with respective means:
IID FID PAT MAT SEX PHENOTYPE X1 X2 X3 X4......
1234 1234 0 0 1 -9 0 2 0 1
2346 2346 0 0 2 -9 1 2 0.33 1
1334 1334 0 0 2 -9 2 2 0 2
4566 4566 0 0 2 -9 2 2 0.33 0
4567 4567 0 0 1 -9 1.25 2 1 1
I tried this:
file="path/to/data.txt"
#get total number of columns
number_cols=$(awk -F' ' '{print NF; exit}' $file)
for ((i=7; i<=$number_cols; i=i+1))
do
echo $i
# getting the mean of each column
mean+=$(awk '{ total += $i } END { print total/NR }' $file)
done
# array of column means
echo ${mean[#]}
# find and replace (newstr must be replaced by respective column means)
find $file -type f -exec sed -i 's/NA/newstr/g' {} \;
However, this code is incomplete. The for loop is very slow since my data is huge. Is there a way to do this faster? I tried this in Python and R, but it was too slow. I am open to getting this done in any programming language as long as it is fast. Can someone please help me write the script?
Thanks
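One approach that avoids re-reading the file once per column is a single program that makes two passes over the data: the first pass accumulates per-column sums and counts (ignoring NAs), the second substitutes the means. Below is a minimal awk sketch of that idea; the file names data.txt and fixed.txt are placeholders, and it assumes whitespace-separated fields and the literal marker NA:
awk '
NR == FNR {                       # first pass: accumulate sums and counts per column
    if (FNR > 1)                  # skip the header line
        for (i = 7; i <= NF; i++)
            if ($i != "NA") { sum[i] += $i; cnt[i]++ }
    next
}
FNR > 1 {                         # second pass: substitute the column means
    for (i = 7; i <= NF; i++)     # (columns that are all NA are left untouched)
        if ($i == "NA" && cnt[i]) $i = sum[i] / cnt[i]
}
{ print }                         # the header passes through unchanged
' data.txt data.txt > fixed.txt
Note that assigning to $i rebuilds the line with OFS (a single space by default), so for a tab-separated file you would add -F'\t' -v OFS='\t', and awk prints the means with its default precision (0.333333 rather than 0.33), which can be adjusted via the CONVFMT variable. The file is still streamed twice, but that is two passes total instead of one pass per column.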
How can I remove the outermost logic from a formula?
For example, the input is column D, whose result is
AND(OR(A,B),C)
and the output should go in column E as a binary number, computed from
OR(A,B)
A B C result (D) after extract (E)
0 0 0 0 0
0 0 1 0 0
0 1 0 0 1
0 1 1 1 1
1 0 0 0 1
1 0 1 1 1
1 1 0 0 1
1 1 1 1 1
I tried this in Excel:
=IF(NOT(AND(D2,C2))=TRUE,1,0)
but it does not remove the outermost logic.
Formula view (result, after extract, and my attempt):
0 0 0 =IF(AND(OR(A2,B2),C2)=TRUE,1,0) =IF(OR(A2,B2)=TRUE,1,0) =IF(NOT(AND(D2,C2))=TRUE,1,0)
0 0 1 =IF(AND(OR(A3,B3),C3)=TRUE,1,0) =IF(OR(A3,B3)=TRUE,1,0) =IF(NOT(AND(D3,C3))=TRUE,1,0)
0 1 0 =IF(AND(OR(A4,B4),C4)=TRUE,1,0) =IF(OR(A4,B4)=TRUE,1,0) =IF(NOT(AND(D4,C4))=TRUE,1,0)
0 1 1 =IF(AND(OR(A5,B5),C5)=TRUE,1,0) =IF(OR(A5,B5)=TRUE,1,0) =IF(NOT(AND(D5,C5))=TRUE,1,0)
1 0 0 =IF(AND(OR(A6,B6),C6)=TRUE,1,0) =IF(OR(A6,B6)=TRUE,1,0) =IF(NOT(AND(D6,C6))=TRUE,1,0)
1 0 1 =IF(AND(OR(A7,B7),C7)=TRUE,1,0) =IF(OR(A7,B7)=TRUE,1,0) =IF(NOT(AND(D7,C7))=TRUE,1,0)
1 1 0 =IF(AND(OR(A8,B8),C8)=TRUE,1,0) =IF(OR(A8,B8)=TRUE,1,0) =IF(NOT(AND(D8,C8))=TRUE,1,0)
1 1 1 =IF(AND(OR(A9,B9),C9)=TRUE,1,0) =IF(OR(A9,B9)=TRUE,1,0) =IF(NOT(AND(D9,C9))=TRUE,1,0)
By "remove the outermost logic", I assume you want to remove the IF function.
One thing to note is that in a formula like =IF(AND(OR(A2,B2),C2)=TRUE,1,0) you never need the =TRUE test. =IF(AND(OR(A2,B2),C2),1,0) will work exactly the same.
There are a couple of ways to convert a boolean (i.e. a TRUE/FALSE value) into an integer without the explicit IF. One is --AND(OR(A2,B2),C2). Another is INT(AND(OR(A2,B2),C2)).
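For example, the "after extract" column in the sheet above could then be computed without any IF at all (using the same cell layout as the question):
=--OR(A2,B2)
The double unary minus coerces TRUE/FALSE to 1/0.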
I have thousands of files ending in *.tab. The first column in each file is a header (row labels). Every file has its own labels (so they differ between files), and I don't mind keeping the labels from any one of the files.
The number of rows is equal in all the files and the rows have a fixed order; my desired output keeps that same order.
Example files in a directory
test_1.tab
test_2.tab
.
.
.
.
test_1990.tab
test_2000.tab
test_1.tab
Pro_01 0 0 0 0 0 1 1 1 0 1 1 0 .....0
Pro_02 0 0 0 0 0 1 1 0 0 0 0 0 .....1
Pro_03 1 1 1 1 1 0 0 1 0 1 1 0 .....1
.
.
.
Pro_200 0 0 0 0 1 1 1 1 1 1 0 .....0
test_2000.tab
Pro_1901 1 1 1 1 0 1 1 0 0 0 0 1 .....0
Pro_1902 1 1 1 0 0 0 1 0 0 0 0 0 .....1
Pro_1903 1 1 0 1 0 1 0 0 0 0 0 1 .....1
.
.
.
Pro_2000 1 0 0 0 0 1 1 1 1 1 0 .....0
desired output
Pro_01 0 0 0 0 0 1 1 1 0 1 1 0 0 ..... 1 1 1 1 0 1 1 0 0 0 0 1 0
Pro_02 0 0 0 0 0 1 1 0 0 0 0 0 1 ..... 1 1 1 0 0 0 1 0 0 0 0 0 1
Pro_03 1 1 1 1 1 0 0 1 0 1 1 0 1 ..... 1 1 0 1 0 1 0 0 0 0 0 1 1
.
.
.
Pro_200 0 0 0 0 1 1 1 1 1 1 0 0 ..... 1 0 0 0 0 1 1 1 1 1 0 0
My code
touch allCol.tab   # the accumulator file must exist before the first paste
for i in *.tab; do paste allCol.tab <(cut -f 2- "$i") > intermediate.csv; mv intermediate.csv allCol.tab; done
paste <(cut -f1 test_1.tab) allCol.tab > final.tab
rm allCol.tab
It takes quite a long time, about 3 hours. Is there a better way?
Also, is there another command to cross-check this output file against all the input files, like diff or wc?
Try this.
#!/bin/bash

TMP=tmp
mkdir "$TMP"
RESULT=result

# read each file and append each of its lines to a per-line file
# in the tmp directory ("$TMP/3" collects line 3 of every input)
for f in *.tab; do
    i=1
    while read -r l; do
        echo "$l" >> "$TMP"/"$i"
        ((i++))
    done < <(cut -f2- "$f")
done

# integrate each file in the tmp dir into a single line of the $RESULT file
exec 1>>"$RESULT"
lines=$((i-1))
for ((n=1; n<=lines; n++)); do    # iterate numerically: the glob $TMP/* would put 10 before 2
    while read -r l; do
        printf '%s\t' "$l"
    done < "$TMP/$n"
    echo
done

rm -r "$TMP"
This algorithm can be split across a number of processors so the task gets done faster.
You can also add things like checking whether $TMP was created successfully; a minimal guard is shown below.
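For instance, one possible form of that check (the error message wording is just an example):
mkdir "$TMP" || { echo "cannot create $TMP" >&2; exit 1; }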
A recursive function is a good tool. As a first cut -- short, but simple:
pasteAll() {
    first=$1; shift
    case $# in
        0) cut -f 2- "$first" ;;
        *) paste <(cut -f 2- "$first") <(pasteAll "$@") ;;
    esac
}

set -- *.tab
paste <(cut -f 1 "$1") <(pasteAll "$@")
Checking that all files and lines were included -- if every input file contains an identical number of lines -- is as simple as checking the output file's line count and the number of columns in its last line; see the commands below.
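A quick sketch of that cross-check, assuming the combined result was written to final.tab (the file name comes from the question's own script):
# the output should have exactly as many lines as any one input file
wc -l test_1.tab final.tab

# every output line should have the same field count: 1 label column plus
# the sum of (columns - 1) over all inputs; this prints each distinct count
# found, so a single number means all lines agree
awk -F'\t' '{print NF}' final.tab | sort -nu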
I have this kind of file:
1 0 1

2 0 3
2 1 2

3 0 3

4 0 1
4 1 1
4 2 1
4 3 1

5 0 1



8 0 1


10 0 1

11 0 1
The records are separated by an empty line (the RS separator is an empty line).
If there is a double blank line, one of them has to be substituted by a record of the pattern $1 0 0, where $1 means the increased "number" of the $1 0 * record before the gap.
If the separator is an empty line + 1 extra empty line, we have to increase $1 by 1.
If the separator is an empty line + 2 extra empty lines, we have to increase $1 by 2.
...
and I need to get this output:
1 0 1

2 0 3
2 1 2

3 0 3

4 0 1
4 1 1
4 2 1
4 3 1

5 0 1

6 0 0

7 0 0

8 0 1

9 0 0

10 0 1

11 0 1
Thanks in advance!
awk 'NF{f=0;n=$1;print;next}f{print ++n " 0 0"}{print;f=1}' ./infile
Output
$ awk 'NF{f=0;n=$1;print;next}f{print ++n " 0 0"}{print;f=1}' ./infile
1 0 1

2 0 3
2 1 2

3 0 3

4 0 1
4 1 1
4 2 1
4 3 1

5 0 1

6 0 0

7 0 0

8 0 1

9 0 0

10 0 1

11 0 1
Explanation
NF{f=0;n=$1;print;next}: if the current line has data, unset flag f, save the number in the first field to n, print the line and skip the rest of the script
{print;f=1}: We only reach this action if the current line is blank. If so, print the line and set the flag f
f{print ++n " 0 0"}: We only execute this action if the flag f is set, which only happens if the previous line was also blank. If we enter this action, print the missing record with an incremented n
You can try something like this. The benefit of this approach is that your input file need not have extra empty lines marking the missing numbers.
awk -v RS="" -v ORS="\n\n" -v OFS="\n" '
BEGIN{getline; col=$1;line=$0;print line}
$1==col{print $0;next }
($1==col+1){print $0;col=$1;next}
{x=$1;y=$0; col++; while (col < x) {print col" 0 0";col++};print y;next}' file
Input File:
[jaypal:~/Temp] cat file
1 0 1

2 0 3
2 1 2

3 0 3

4 0 1
4 1 1
4 2 1
4 3 1

5 0 1



8 0 1


10 0 1

11 0 1
Script Output:
[jaypal:~/Temp] awk -v RS="" -v ORS="\n\n" -v OFS="\n" '
BEGIN{getline; col=$1;line=$0;print line}
$1==col{print $0;next }
($1==col+1){print $0;col=$1;next}
{x=$1;y=$0; col++; while (col < x) {print col" 0 0";col++};print y;next}' file
1 0 1

2 0 3
2 1 2

3 0 3

4 0 1
4 1 1
4 2 1
4 3 1

5 0 1

6 0 0

7 0 0

8 0 1

9 0 0

10 0 1

11 0 1