Fast removal of duplicate rows across multiple files - linux

I have 10k files with 80k rows each and need to compare them, and either delete the duplicate lines or replace them with "0". It has to be ultrafast since I have to do it more than 1000 times.
The following script is fast enough for files with fewer than 100 rows; it is in tcsh:
foreach file ( `ls -1 *` )
split -l 1 ${file} ${file}.
end
find *.* -type f -print0 | xargs -0 sha512sum | awk '($1 in aa){print $2}(!($1 in aa)){aa[$1]=$2}' | xargs -I {} cp rowzero {}
cat ${file}.* > ${file}.filtered
where "rowzero" is just a file with a... zero. I have tried python but haven't found a fast way. I have tried pasting them and doing all nice fast things (awk, sed, above commands, etc.) but the i/o slows to incredible levels when the file has over more than e.g. 1000 columns. I need help, thanks a million hours!.

OK, this is so far the fastest code that I could make; it works on input that has been transposed and "cat"-ed together. As explained before, "cat"-ed (">>") input works fine, but "paste" or "pr" becomes a nightmare when pasting another column into files of 1 GB or more, and that is why we need to transpose. E.g.,
each original file looks like
1
2
3
4
...
If we transpose each file and cat the first file together with the others, the input for the code will look like:
1 2 3 4 ..
1 1 2 4 ..
1 1 1 4 ..
The code returns the original ("re-transposed, pasted") format, with the minor detail that the rows come out shuffled:
1
1 2
1 2 3
2 3 4
..
The repeated rows were effectively removed. The code is below.
HOWEVER, THE CODE IS NOT GENERAL! It only works with 1-digit integers, since awk array indexes are not traversed in sorted order. Could someone help to generalize it? Thanks!
{ for (ii = 1; ii <= NF; ii++) aa[ii, $ii] = $ii }                  # record each distinct (column, value) pair
END { mm = 1
      for (n in aa) { split(n, bb, SUBSEP)                          # traversal order is arbitrary -- the weak spot
          if (bb[1] == mm) printf "%2s", bb[2]
          else { printf "\n%2s", bb[2]; mm = bb[1] } } }
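A possible generalization (a sketch in plain awk, not the original code): key a lookup array on (column, value) only to detect repeats, and build each output row as a string, so values of any width work and the output rows stay in column order instead of being shuffled (values appear in first-seen order):
{
    if (NF > maxnf) maxnf = NF                   # remember the widest record
    for (ii = 1; ii <= NF; ii++)
        if (!((ii, $ii) in seen)) {              # first time this value shows up in column ii
            seen[ii, $ii] = 1
            row[ii] = (ii in row) ? row[ii] " " $ii : $ii
        }
}
END {
    for (ii = 1; ii <= maxnf; ii++)              # one output row per input column, in order
        print row[ii]
}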

Related

Randomly choose files and add them to the data present in another folder

I have two folders (DATA1 and DATA2), and inside each there are 3 subfolders (folder1, folder2 and folder3), as shown below:
DATA1
folder1/*.txt contain 5 files
folder2/*.txt contain 4 files
folder3/*.txt contain 10 files
DATA2
folder1/*.txt contain 8 files
folder2/*.txt contain 9 files
folder3/*.txt contain 10 files
As depicted above, there is a varying number of files with different names in each subfolder, and each file contains two columns of data, as shown below:
1 -2.4654174805e+01
2 -2.3655626297e+01
3 -2.2654634476e+01
4 -2.1654865265e+01
5 -2.0653873444e+01
6 -1.9654104233e+01
7 -1.8654333115e+01
8 -1.7653341293e+01
9 -1.6654792786e+01
10 -1.5655022621e+01
I just want to add the data folder-wise, choosing the second columns of files randomly.
I mean that a random file's data (only the second column) from DATA2/folder1/*.txt should be added to the second column of DATA1/folder1/*.txt; similarly DATA2/folder2/*.txt should be added to DATA1/folder2/*.txt, and so on.
Most importantly, I don't want to disturb the first column of any file; only the second column should be manipulated. And finally I want to save the data.
Can anybody suggest a solution for this?
My directory and data structure is attached here
https://i.fluffy.cc/2RPrcMxVQ0RXsSW1lzf6vfQ30jgJD8qp.html
I want to add the data folder-wise (from DATA2 to DATA1). First enter DATA2/folder1, randomly choose any file and select its second column (each file consists of two columns). Then add the selected second column to the second column of any file inside DATA1/folder1 and save the result to the OUTPUT folder.
Since there's no code to start from, this won't be a ready-to-use answer but rather a few building blocks that may come in handy.
I'll show how to find all files, select a random file, select a random line and extract the value from its second column. Duplicating and adapting this for selecting a random file and column to add the value to is left as an exercise to the reader.
#!/bin/bash
IFS=
# a function to generate a random number
prng() {
# You could use $RANDOM instead but it gives a narrower range.
echo $(( $(od -An -N4 -t u4 < /dev/urandom) % $1 ))
}
# Find files
readarray -t files < <(find DATA2/folder* -mindepth 1 -maxdepth 1 -name '*.txt')
# Debug print-out of the files array
declare -p files
echo "Found ${#files[@]} files"
# List files one-by-one
for file in "${files[#]}"
do
echo "$file"
done
# Select a random file
fileno=$(prng "${#files[@]}")
echo "Selecting file number $fileno"
filename=${files[$fileno]}
echo "which is $filename"
lines=$(wc -l < "$filename")
echo "and it has $lines lines"
# Add 1 since awk numbers its lines from 1 and up
rndline=$(( $(prng $lines) + 1 ))
echo "selecting value in column 2 on line $rndline"
value=$(awk -v rndline=$rndline '{ if(NR==rndline) print $2 }' "$filename")
echo "which is $value"
# now pick a random file and line in the other folder using the same technique
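To finish the job the question describes (this part is my assumption, not the original answer): pick a target file in the matching DATA1 folder with the same technique, then add the whole second column of the chosen DATA2 file to its second column with paste and awk, writing the result into the OUTPUT folder mentioned in the question. A sketch, continuing the script above, assuming both files have the same number of rows as in the sample data and using folder1 as the example:
readarray -t targets < <(find DATA1/folder1 -mindepth 1 -maxdepth 1 -name '*.txt')
target=${targets[$(prng "${#targets[@]}")]}
mkdir -p OUTPUT
# column 1 is taken from the DATA1 file untouched; column 2 becomes the sum of both second columns
paste "$target" "$filename" |
    awk '{ printf "%s %.10e\n", $1, $2 + $4 }' > "OUTPUT/$(basename "$target")"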

How to count the unique values in 2 fields without concatenating them?

I am trying to use the basic shell in unix to count the unique values in 2 fields. I have data with 5 columns but just want to count the unique values in the first 2, WITHOUT concatenating them.
So far I have successfully used cut -f 1 | sort | uniq | wc -l to count the unique values in column one, and I can do the same for column two, but because some of the values appear in both columns I need a way to run this treating columns 1 and 2 as one field. Can anyone help me please?
Your question can be interpreted in two ways, so I'll answer both.
Given the input file:
2 1
2 1
1 2
1 1
2 2
If you want the result to output 4 because the unique pairs are 1 1, 1 2, 2 1 and 2 2, then you need:
cat test|cut -f1,2|sort|uniq|wc -l
What we do here: we pick only the first two columns (keeping the delimiter between them) and pass them to sort|uniq, which does the job.
If you, on the other hand, want the result to output 2 because there are only two unique elements: 1 and 2, then you can tweak the above like this:
cat test|cut -f1,2|tr "\t" "\n"|sort|uniq|wc -l
This time, after selecting the first two columns, we split each line into two lines (one value per line) so that sort|uniq sees the individual values.
These work as long as the columns are separated with a TAB character, not spaces. Since you didn't pass the -d option to cut in your question, and cut uses tabs by default, I assumed your input uses tabs too.
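Both counts can also be done in a single awk pass, which sidesteps the tab-vs-space question since awk splits on any whitespace by default (a sketch):
awk '!seen[$1,$2]++ { n++ } END { print n }' test                          # unique pairs (prints 4 here)
awk '{ if (!($1 in u)) { u[$1] = 1; n++ }
       if (!($2 in u)) { u[$2] = 1; n++ } } END { print n }' test          # unique values (prints 2 here)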

Script to compare two large 900 x 900 comma-delimited files

I have tried awk but haven't been able to perform a diff for every cell, one at a time, on both files.
If you just want a rough answer, possibly the simplest thing is to do something like:
tr , \\n file1 > /tmp/output
tr , \\n file2 | diff - /tmp/output
That will convert each file to one column and run diff. You can compute the cells that differ from the line numbers of the output.
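To turn diff's line numbers back into cells: with N columns per row, line n of the flattened stream is row int((n-1)/N)+1, column (n-1)%N+1. For example, with the 900 columns from the question (a sketch):
awk -v n=1801 -v cols=900 'BEGIN { printf "line %d = row %d, column %d\n", n, int((n - 1) / cols) + 1, (n - 1) % cols + 1 }'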
The simplest way with awk, without accounting for newlines inside fields, quoted commas, etc.
Print the cells that are the same:
awk 'BEGIN{RS=",|"RS}a[FNR]==$0;{a[NR]=$0}' file{,2}
Print the differences:
awk 'BEGIN{RS=",|"RS}FNR!=NR&&a[FNR]!=$0;{a[NR]=$0}' file{,2}
Print, for each cell, whether it is the same or different:
awk 'BEGIN{RS=",|"RS}FNR!=NR{print "cell"FNR (a[FNR]==$0?"":" not")" the same"}{a[NR]=$0}' file{,2}
Input
file
1,2,3,4,5
6,7,8,9,10
11,12,13,14,15
file2
1,2,3,4,5
2,7,1,9,12
1,1,1,1,12
Output
Same
1
2
3
4
5
7
9
Different
2
1
12
1
1
1
1
12
Same / different
cell1 the same
cell2 the same
cell3 the same
cell4 the same
cell5 the same
cell6 not the same
cell7 the same
cell8 not the same
cell9 the same
cell10 not the same
cell11 not the same
cell12 not the same
cell13 not the same
cell14 not the same
cell15 not the same
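If you also need the row and column of each differing cell (handy in a 900 x 900 grid), a similar comparison can keep the comma as a field separator instead of a record separator (a sketch, not part of the answer above):
awk -F, 'NR == FNR { for (i = 1; i <= NF; i++) a[FNR, i] = $i; next }       # remember every cell of the first file
         { for (i = 1; i <= NF; i++)
               if (a[FNR, i] != $i) printf "row %d col %d: %s -> %s\n", FNR, i, a[FNR, i], $i }' file file2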

How to sort by column and break ties randomly

I have a tab-delimited file with three columns like this:
joe W 4
bob A 1
ana F 1
roy J 3
sam S 0
don R 2
tim L 0
cyb M 0
I want to sort this file by decreasing values in the third column, but to break ties I do not want to use some other column to do so (i.e. not use the first column to sort rows with the same entry in the third column).
Instead, I want rows with the same third column entries to either preserve the original order, or be sorted randomly.
Is there a way to do this using the sort command in unix?
sort -k3 -r -s file
This should give you the required output.
-k3 sorts by the third column, -r sorts in decreasing order, and -s forces a stable sort, which stops sort from using the rest of the line as a last-resort tie-breaker, so rows with equal keys keep their original order.
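Two optional refinements, not part of the answer above: add a numeric key so that multi-digit values in the third column compare by value rather than lexicographically, and shuffle the input first if you want ties broken randomly instead of preserving the original order (shuf is in GNU coreutils):
sort -k3,3nr -s file          # numeric, decreasing, ties keep their input order
shuf file | sort -k3,3nr -s   # numeric, decreasing, ties end up in random order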

Extract rows and substrings from one file conditional on information of another file

I have a file 1.blast with coordinate information like this
1 gnl|BL_ORD_ID|0 100.00 33 0 0 1 3
27620 gnl|BL_ORD_ID|0 95.65 46 2 0 1 46
35296 gnl|BL_ORD_ID|0 90.91 44 4 0 3 46
35973 gnl|BL_ORD_ID|0 100.00 45 0 0 1 45
41219 gnl|BL_ORD_ID|0 100.00 27 0 0 1 27
46914 gnl|BL_ORD_ID|0 100.00 45 0 0 1 45
and a file 1.fasta with sequence information like this
>1
TCGACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>2
GCATCTGGGCTACGGGATCAGCTAGGCGATGCGAC
...
>100000
TTTGCGAGCGCGAAGCGACGACGAGCAGCAGCGACTCTAGCTACTG
I am now looking for a script that takes the first column from 1.blast, extracts those sequence IDs (= first column, $1) plus their sequences from 1.fasta, and then removes from each sequence the positions between $7 and $8. From the first two matches the output would be:
>1
ACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>27620
GTAGATAGAGATAGAGAGAGAGAGGGGGGAGA
...
(please notice that the first three bases of >1 are not in this sequence)
The IDs are consecutive, meaning I can extract the required information like this:
awk '{print 2*$1-1, 2*$1, $7, $8}' 1.blast
This gives me a matrix whose first column is the line number of the ID row in 1.fasta and whose second column is the line number of the sequence row (= the one after the ID row), followed by the two coordinates that should be excluded. So basically a matrix with all the information needed to decide which elements of 1.fasta shall be extracted.
Unfortunately I do not have much experience with scripting, hence I am now a bit lost: how do I feed these values into, e.g., a suitable sed command?
I can get specific rows like this:
sed -n 3,4p 1.fasta
and the string that I want to remove e.g. via
sed -n 5p 1.fasta | awk '{print substr($0,2,5)}'
But my problem is how to pipe the information from the first awk call into the other commands so that they extract the right rows and then remove the given coordinates from the sequence rows. Also, substr isn't the right command here; I would need something like remstr(string,start,stop) that removes everything between two positions from a given string, though I think I could write that myself. It is mainly the correct piping that is the problem for me.
If you do bioinformatics and work with DNA sequences (or even more complicated things like sequence annotations), I would recommend having a look at Bioperl. This obviously requires knowledge of Perl, but has quite a lot of functionality.
In your case you would want to generate Bio::Seq objects from your fasta-file using the Bio::SeqIO module.
Then you would need to read the wanted fasta entry numbers and positions into a hash, with the fasta name as the key and, as the value, an array of the two positions for each subsequence you want to extract. If there can be more than one such subsequence per fasta entry, you would have to use an array of arrays as the value for each key.
With this data structure, you could then go ahead and extract the sequences using the subseq method from Bio::Seq.
I hope this is a way to go for you, although I'm sure that this is also feasible with pure bash.
This isn't an answer, it is an attempt to clarify your problem; please let me know if I have gotten the nature of your task correct.
foreach row in blast:
    get the proper (blast[$1]) sequence from fasta
    drop bases (blast[$7..$8]) from sequence
    print blast[$1], shortened_sequence
If I've got your task correct, you are being hobbled by your programming language (bash) and the peculiar format of your data (a record split across rows). Perl or Python would be far more suitable to the task; indeed Perl was written in part because multiple file access in awk of the time was really difficult if not impossible.
You've come pretty far with the tools you know, but it looks like you are hitting the limits of their convenient expressibility.
As both thunk and msw have pointed out, more suitable tools are available for this kind of task, but here is a script that can teach you something about how to handle it with awk:
Content of script.awk:
## Process first file from arguments.
FNR == NR {
    ## Save ID and the range of characters to remove from sequence.
    blast[ $1 ] = $(NF-1) " " $NF
    next
}

## Process second file. For each FASTA id...
$1 ~ /^>/ {
    ## Get number.
    id = substr( $1, 2 )

    ## Read next line (the sequence).
    getline sequence

    ## If the ID is one found in the other file, get ranges and
    ## extract those characters from sequence.
    if ( id in blast ) {
        split( blast[id], ranges )
        sequence = substr( sequence, 1, ranges[1] - 1 ) substr( sequence, ranges[2] + 1 )

        ## Print both lines with the shortened sequence.
        printf "%s\n%s\n", $0, sequence
    }
}
Assuming the 1.blast of the question and a customized 1.fasta to test it:
>1
TCGACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>2
GCATCTGGGCTACGGGATCAGCTAGGCGATGCGAC
>27620
TTTGCGAGCGCGAAGCGACGACGAGCAGCAGCGACTCTAGCTACTGTTTGCGA
Run the script like:
awk -f script.awk 1.blast 1.fasta
That yields:
>1
ACTAGCTACGACTCGGACTGACGAGCTACGACTACGG
>27620
TTTGCGA
Of course I'm assuming some things, the most important being that fasta sequences are not longer than one line.
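If that assumption does not hold, the FASTA half of script.awk could be replaced with something along these lines (a sketch that accumulates multi-line sequences, reusing the blast array filled by the first rule; not part of the original answer):
## On a header line, emit the previous record, then start collecting a new one.
$1 ~ /^>/ { emit(); id = substr( $1, 2 ); seq = ""; next }
## Any other line is part of the current sequence.
{ seq = seq $0 }
END { emit() }
function emit() {
    if ( id != "" && id in blast ) {
        split( blast[id], ranges )
        printf ">%s\n%s\n", id, substr( seq, 1, ranges[1] - 1 ) substr( seq, ranges[2] + 1 )
    }
}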
Updated answer:
awk '
    ## First file (1.fasta): store each sequence under its ID.
    NR==FNR && NF {
        id = substr($1, 2)
        getline seq
        a[id] = seq
        next
    }
    ## Second file (1.blast): cut positions $7..$8 out of the matching sequence.
    ($1 in a) && NF {
        x = substr(a[$1], $7, $8 - $7 + 1)   # length is $8-$7+1, not $8
        sub(x, "", a[$1])                    # removes the first occurrence of that substring
        print ">" $1 "\n" a[$1]
    }' 1.fasta 1.blast
