Replace a full line every Nth line in a text file

I'm trying to perform something that should be simple but I'm not quite understanding how to do it. I have a file that looks like this:
#A01182:104:HKNG5DSX3:3:1101:3947:1031 1:N:0:CATTGCCT+NATCTCAG
CNTCATAGCTGGTTGCACAGTTAACGTCGTTCAGGCCACGTTCCAGACCGTAGTTTGCCAGCGTCAGATCATAAACGGTGGTCACCAGGGCGGTGCTGCCA
+
F#FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFF,FFFFFFFFFFFFFFFFFFFFFFFFFF
#A01182:104:HKNG5DSX3:3:1101:7997:1031 1:N:0:CATTGCCT+NATCTCAG
GNCGATCCCTTCGCTGCTGCTGGCAATTATCGTTGTAGCGTTTGCCGGACCGAGTTTGTCTCACGCCATGTTTGCTGTCTGGCTGGCGCTGCTGCCGCGTA
+
F#FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
#A01182:104:HKNG5DSX3:3:1101:5547:1047 1:N:0:CATTGCCT+NATCTCAG
GGTGATGATTGTCTTTGGCGCAACGTTAATGAAAGATGCGCCGAAGCAGGAAGTGAAAACCAGCAATGGTGTGGTGGAGAAGGACTACACCCTGGCAGAGT
+
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFF
#A01182:104:HKNG5DSX3:3:1101:20726:1063 1:N:0:CATTGCCT+GATCTCAG
GGGACGCCCATTACGCTGGTGAATCTGGCAACCCATACCAGCGCCCTGCCCCGTGAACAGCCCGGTGGCGCGGCACATCGTCCGGTATTTGTCTGGCCAAC
+
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFF
and goes on for a lot more lines (the actual file is 2.5 GB). What I want to do is replace every fourth line (all those that have a lot of F's) with another string, the same string for all of them.
I have tried with sed but I don't seem to be able to get the script right, since I produce an output without changes.
Any help would be really appreciated!

This might work for you (GNU sed):
sed -i '4~4s/.*/another string/' file(s)
Starting at the 4th line and every 4 lines thereafter, replace the whole line with another string.
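Note that the first~step address is a GNU sed extension, so this needs GNU sed. A quick way to see which lines it selects, using seq as throwaway input (just an illustration):
$ seq 8 | sed '4~4s/.*/another string/'
1
2
3
another string
5
6
7
another string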

I'd use awk for this
awk '
NR % 4 == 0 {print "new string"; next}
{print}
' file > file.new && mv file.new file
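Whichever tool you use, with a 2.5 GB file it may be worth previewing the change on the first few records before rewriting the whole thing; a minimal sketch, assuming the file really is strict 4-line records and "another string" is your replacement text:
$ head -12 file | awk 'NR % 4 == 0 {$0 = "another string"} {print}'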

Related

extract sequences from multifasta file by ID in file using awk

I would like to extract from the multifasta file the sequences that match the IDs given in a separate list of IDs.
FASTA file seq.fasta:
>7P58X:01332:11636
TTCAGCAAGCCGAGTCCTGCGTCGTTACTTCGCTT
CAAGTCCCTGTTCGGGCGCC
>7P58X:01334:11605
TTCAGCAAGCCGAGTCCTGCGTCGAGAGTTCAAGTC
CCTGTTCGGGCGCCACTGCTAG
>7P58X:01334:11613
ACGAGTGCGTCAGACCCTTTTAGTCAGTGTGGAAAC
>7P58X:01334:11635
TTCAGCAAGCCGAGTCCTGCGTCGAGAGATCGCTTT
CAAGTCCCTGTTCGGGCGCCACTGCGGGTCTGTGTC
GAGCG
>7P58X:01336:11621
ACGCTCGACACAGACCTTTAGTCAGTGTGGAAATCT
CTAGCAGTAGAGGAGATCTCCTCGACGCAGGACT
IDs file id.txt:
7P58X:01332:11636
7P58X:01334:11613
I want to get the fasta file with only those sequences matching the IDs in the id.txt file:
>7P58X:01332:11636
TTCAGCAAGCCGAGTCCTGCGTCGTTACTTCGCTTT
CAAGTCCCTGTTCGGGCGCC
>7P58X:01334:11613
ACGAGTGCGTCAGACCCTTTTAGTCAGTGTGGAAAC
I really like the awk approach I found in answers here and here, but the code given there is still not working perfectly for the example I gave. Here is why:
(1)
awk -v seq="7P58X:01332:11636" -v RS='>' '$1 == seq {print RS $0}' seq.fasta
this code works well for the multiline sequences, but the IDs have to be inserted into the code one by one.
(2)
awk 'NR==FNR{n[">"$0];next} f{print f ORS $0;f=""} $0 in n{f=$0}' id.txt seq.fasta
this code can take the IDs from the id.txt file but returns only the first line of the multiline sequences.
I guess that the right thing would be to modify the RS variable in code (2), but all of my attempts have failed so far. Can anybody please help me with that?
$ awk -F'>' 'NR==FNR{ids[$0]; next} NF>1{f=($2 in ids)} f' id.txt seq.fasta
>7P58X:01332:11636
TTCAGCAAGCCGAGTCCTGCGTCGTTACTTCGCTT
CAAGTCCCTGTTCGGGCGCC
>7P58X:01334:11613
ACGAGTGCGTCAGACCCTTTTAGTCAGTGTGGAAAC
The following awk may help you with the same.
awk 'FNR==NR{a[$0];next} /^>/{val=$0;sub(/^>/,"",val);flag=val in a?1:0} flag' ids.txt fasta_file
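As an aside (a sketch of my own, not part of the answer above): since the question asks about making the RS idea from code (2) work, the record separator can also be set to > for the second file only, assuming the IDs never contain whitespace so that the header is always field 1 of each record:
$ awk 'NR==FNR{ids[$1]; next} $1 in ids{printf ">%s", $0}' id.txt RS='>' seq.fasta
This should reproduce the output shown in the question.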
I'm facing a similar problem. The size of my multi-fasta file is ~ 25G.
I use sed instead of awk, though my solution is an ugly hack.
First, I extracted the line number of the title of each sequence to a data file.
grep -n ">" multi-fasta.fa > multi-fasta.idx
What I got is something like this:
1:>DM_0000000004
5:>DM_0000000005
11:>DM_0000000007
19:>DM_0000000008
23:>DM_0000000009
Then, I extracted the wanted sequence by its title, e.g. DM_0000000004, using the script below.
seqnm=$1
# find this sequence's entry in the index; gives "idx0:idx1:>name"
idx0_idx1=`grep -n $seqnm multi-fasta.idx`
idx0=`echo $idx0_idx1 | cut -d ":" -f 1`    # line number of the entry within multi-fasta.idx
idx0plus1=`expr $idx0 + 1`
idx1=`echo $idx0_idx1 | cut -d ":" -f 2`    # line number of the title within multi-fasta.fa
# line number (in multi-fasta.fa) of the next sequence's title
idx2=`head -n $idx0plus1 multi-fasta.idx | tail -1 | cut -d ":" -f 1`
idx2minus1=`expr $idx2 - 1`
# keep only lines idx1 .. idx2-1, i.e. this title and its sequence
sed ''"$idx1"','"$idx2minus1"'!d' multi-fasta.fa > ${seqnm}.fasta
For example, I want to extract the sequence of DM_0000016115. The idx0_idx1 variable gives me:
7507:42520:>DM_0000016115
7507 (idx0) is the line number of line 42520:>DM_0000016115 in multi-fasta.idx.
42520 (idx1) is the line number of line >DM_0000016115 in multi-fasta.fa.
idx2 is the line number of the sequence title right beneath the wanted one (>DM_0000016115).
At last, using sed, we extract the lines between idx1 and idx2 minus 1, which are the title and the sequence. (If every sequence occupied a fixed number of lines, you could simply use grep -A instead.)
The advantage of this ugly hack is that it does not require a specific number of lines for each sequence in the multi-fasta file.
What bothers me is that this process is slow. For my 25G multi-fasta file, such an extraction takes tens of seconds. However, it's much faster than using samtools faidx.
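For comparison, samtools faidx (mentioned above) extracts sequences by name from an indexed FASTA; a sketch of the usual invocation, assuming samtools is installed:
$ samtools faidx multi-fasta.fa
$ samtools faidx multi-fasta.fa DM_0000016115 > DM_0000016115.fasta
The first call builds the .fai index once; after that, individual sequences can be pulled out by name.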

Replace text between two strings in file using linux bash

I have a file "acl.txt":
192.168.0.1
192.168.4.5
#start_exceptions
192.168.3.34
192.168.6.78
#end_exceptions
192.168.5.55
and another file "exceptions"
192.168.88.88
192.168.76.6
I need to replace everything between #start_exceptions and #end_exceptions with content of exceptions file. I have tried many solutions from this forum but none of them works.
EDITED:
Ok, if you want to retain the #start and #stop, I will revert to awk:
awk '
BEGIN {p=1}
/^#start/ {print;system("cat exceptions");p=0}
/^#end/ {p=1}
p' acl.txt
Thanks to @fedorqui for tweaks in comments below.
Output:
192.168.0.1
192.168.4.5
#start_exceptions
192.168.88.88
192.168.76.6
#end_exceptions
192.168.5.55
p is a flag that says whether or not to print lines. It starts out as 1, so all lines are printed until I find a line starting with #start. Then I cat the contents of the exceptions file and stop printing lines until I find a line starting with #end, at which point I set the p flag back to 1 so the remaining lines get printed.
If you want output to a file, add "> newfile" to the very end of the command like this:
awk '
BEGIN {p=1}
/^#start/ {print;system("cat exceptions");p=0}
/^#end/ {p=1}
p' acl.txt > newfile
YET ANOTHER VERSION IF YOU REALLY WANT TO USE SED
If you really, really want to do it with sed, you can use nested addresses: the outer address selects the lines between #start_exceptions and #end_exceptions; within that range, one inner block handles the #start_exceptions line (print it and read in the exceptions file) and another deletes every line other than the #end_exceptions line:
sed '
/^#start/,/^#end/{
/^#start/{
n
r exceptions
}
/^#end/!d
}
' acl.txt
Output:
192.168.0.1
192.168.4.5
#start_exceptions
192.168.88.88
192.168.76.6
#end_exceptions
192.168.5.55
ORIGINAL ANSWER
I think this will work:
sed -e '/^#end/r exceptions' -e '/^#start/,/^#end/d' acl.txt
When it finds /^#end/ it reads in the exceptions file. And it also deletes everything between /#start/ and /#end/.
I have left the matching slightly "loose" for clarity of expressing the technique.
You can use the following, based on Replace string with contents of a file using sed:
$ sed $'/end/ {r exceptions\n} ; /start/,/end/ {d}' acl.txt
192.168.0.1
192.168.4.5
192.168.88.88
192.168.76.6
192.168.5.55
Explanation
sed $'one_thing; another_thing' acl.txt performs the two actions.
/end/ {r exceptions\n} if the line contains end, then read the file exceptions and append its contents.
/start/,/end/ {d} from a line containing start to a line containing end, delete all the lines.
I had a problem with Mark Setchell's solution in MINGW: the caret was not picking up the beginning of the line. Indeed, does the detection of the separator depend on it being at the beginning of the line?
I came up with this awk alternative...
$ awk -v data="$(<exceptions)" '
BEGIN {p=1}
/#start_exceptions/ {print; print data;p=0}
/#end_exceptions/ {p=1}
p
' acl.txt
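To update acl.txt itself rather than print to stdout, the same write-then-move pattern shown earlier applies; a sketch using the variant above (acl.txt.new is just a scratch name):
awk -v data="$(<exceptions)" '
BEGIN {p=1}
/#start_exceptions/ {print; print data; p=0}
/#end_exceptions/ {p=1}
p
' acl.txt > acl.txt.new && mv acl.txt.new acl.txt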

print specific line if it matches the line after it

I have a log file containing the following info:
<msisdn>37495989804</msisdn>
<address>10.14.14.26</address>
<msisdn>37495371855</msisdn>
<address>10.14.0.172</address>
<msisdn>37495989832</msisdn>
<address>10.14.14.29</address>
<msisdn>37495479810</msisdn>
<address>10.14.1.11</address>
<msisdn>37495429157</msisdn>
<address>10.14.0.213</address>
<msisdn>37495275824</msisdn>
<msisdn>37495739176</msisdn>
<address>10.14.2.86</address>
<msisdn>37495479840</msisdn>
<address>10.14.1.12</address>
<msisdn>37495706059</msisdn>
<msisdn>37495619889</msisdn>
<address>10.14.1.198</address>
<msisdn>37495574341</msisdn>
<address>10.14.1.148</address>
<msisdn>37495391624</msisdn>
<address>10.14.0.188</address>
<msisdn>37495989796</msisdn>
<address>10.14.14.24</address>
<msisdn>37495835940</msisdn>
<address>10.14.2.164</address>
<msisdn>37495743249</msisdn>
<address>10.14.2.94</address>
<msisdn>37495674117</msisdn>
<address>10.14.1.236</address>
<msisdn>37495754536</msisdn>
<address>10.14.2.120</address>
<msisdn>37495576434</msisdn>
<msisdn>37495823889</msisdn>
<address>10.14.2.159</address>
There are some lines where the 'msisdn' line is not followed by an 'address' line, like this:
<msisdn>37495576434</msisdn>
<msisdn>37495823889</msisdn>
I would like to write a script which will output only those 'msisdn' lines that aren't followed by an 'address' line. Expected output:
<msisdn>37495275824</msisdn>
<msisdn>37495706059</msisdn>
<msisdn>37495576434</msisdn>
If it could be done with awk/sed, that would be perfect.
Thanks.
One way with awk:
awk '/address/{p=0}p{print a;p=0}/msisdn/{a=$0;p=1}' log
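One caveat (my addition, not part of the original answer): if the log ends with a lone msisdn line, nothing follows it to trigger the print, so a variant with an END block covers that case:
awk '/address/{p=0} p{print a; p=0} /msisdn/{a=$0; p=1} END{if(p) print a}' log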
You can use pcregrep to match an msisdn line whose next line is another msisdn line (rather than an address), then use awk to keep the first line of each matched pair:
pcregrep -M '(.*</msisdn>)\n.*<msi' log | awk 'NR % 2 == 1'
This might work for you (GNU sed):
sed -r '$!N;/(<msisdn>).*\n.*\1/P;D' file
This reads 2 lines into the pattern space and tries to match the pattern <msisdn> in both of the 2 lines. If the pattern matches, it prints out the first line. The first line is then deleted and the process begins again; however, since the pattern space already contains the second line (now the first), the automatic reading of a line is forgone and the process begins as of $!N.
Perl has its own way to do this:
perl -lne 'if($prev && $_!~/\./){print $prev}unless(/\./){$prev=$_}else{undef $prev}' your_file
Tested Below:
> perl -lne 'if($prev && $_!~/\./){print $prev}unless(/\./){$prev=$_}else{undef $prev}' temp
<msisdn>37495275824</msisdn>
<msisdn>37495706059</msisdn>
<msisdn>37495576434</msisdn>
>

Bash: How to keep lines in a file that have fields that match lines in another file?

I have two big files with a lot of text, and what I have to do is keep all lines in file A that have a field that matches a field in file B.
file A is something like:
Name (tab) # (tab) # (tab) KEYFIELD (tab) Other fields
For file B, I managed to use cut and sed and other things to basically get it down to one field that is a list.
So the goal is to keep all lines in file A whose 4th field (it says KEYFIELD) matches one of the lines in file B. (It does NOT have to be an exact match, so if file B has Blah and file A says Blah_blah, that's OK.)
I tried to do:
grep -f fileBcutdown fileA > outputfile
EDIT: OK, I give up. I just force-killed it.
Is there a better way to do this? File A is 13.7 MB and file B, after cutting it down, is 32.6 MB, for anyone that cares.
EDIT: This is an example line in file A:
chr21 33025905 33031813 ENST00000449339.1 0 - 33031813 33031813 0 3 1835,294,104, 0,4341,5804,
example line from file B cut down:
ENST00000111111
Here's one way using GNU awk. Run like:
awk -f script.awk fileB.txt fileA.txt
Contents of script.awk:
FNR==NR {
    array[$0]++
    next
}

{
    line = $4
    sub(/\.[0-9]+$/, "", line)
    if (line in array) {
        print
    }
}
Alternatively, here's the one-liner:
awk 'FNR==NR { array[$0]++; next } { line = $4; sub(/\.[0-9]+$/, "", line); if (line in array) print }' fileB.txt fileA.txt
GNU awk can also perform the pre-processing of fileB.txt that you described using cut and sed. If you would like me to build this into the above script, you will need to provide an example of what this line looks like.
UPDATE using files HumanGenCodeV12 and GenBasicV12:
Run like:
awk -f script.awk HumanGenCodeV12 GenBasicV12 > output.txt
Contents of script.awk:
FNR==NR {
    gsub(/[^[:alnum:]]/,"",$12)
    array[$12]++
    next
}

{
    line = $4
    sub(/\.[0-9]+$/, "", line)
    if (line in array) {
        print
    }
}
This successfully prints lines in GenBasicV12 that can be found in HumanGenCodeV12. The output file (output.txt) contains 65340 lines. The script takes less than 10 seconds to complete.
You're hitting the limit of using the basic shell tools. Assuming about 40 characters per line, File A has 400,000 lines in it and File B has about 1,200,000 lines in it. You're basically running grep for each line in File A and having grep plow through 1,200,000 lines with each execution. That's 480 BILLION lines you're parsing through. Unix tools are surprisingly quick, but even something fast done 480 billion times will add up.
You would be better off using a full programming scripting language like Perl or Python. You put all lines in File B in a hash. You take each line in File A, check to see if that fourth field matches something in the hash.
Reading in a few hundred thousand lines? Creating a 10,000,000 entry hash? Perl can parse both of those in a matter of minutes.
Something -- off the top of my head. You didn't give us much in the way of specs, so I didn't do any testing:
#! /usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);
# Create your index
open my $file_b, "<", "file_b.txt";
my %index;
while (my $line = <$file_b>) {
    chomp $line;
    $index{$line} = $line;    # Or however you do it...
}
close $file_b;

#
# Now check against file_a.txt
#
open my $file_a, "<", "file_a.txt";
while (my $line = <$file_a>) {
    chomp $line;
    my @fields = split /\s+/, $line;
    if (exists $index{$fields[3]}) {
        say "Line: $line";
    }
}
close $file_a;
The hash means you only have to read through file_b once instead of 400,000 times. Start the program, go grab a cup of coffee from the office kitchen. (Yum! non-dairy creamer!) By the time you get back to your desk, it'll be done.
grep -f seems to be very slow even for medium sized pattern files (< 1MB). I guess it tries every pattern for each line in the input stream.
A solution, which was faster for me, was to use a while loop. This assumes that fileA is reasonably small (it is the smaller one in your example), so iterating multiple times over the smaller file is preferable over iterating the larger file multiple times.
while read line; do
  grep -F "$line" fileA
done < fileBcutdown > outputfile
Note that this loop will output a line several times if it matches multiple patterns. To work around this limitation use sort -u, but this might be slower by quite a bit. You have to try.
while read line; do
  grep -F "$line" fileA
done < fileBcutdown | sort -u > outputfile
If you depend on the order of the lines, then I don't think you have any other option than using grep -f. But basically it boils down to trying m*n pattern matches.
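One thing that may be worth trying before giving up on grep entirely (my addition): with -F the patterns are treated as fixed strings rather than regular expressions, which often speeds grep -f up considerably:
grep -F -f fileBcutdown fileA > outputfile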
Use the command below:
awk 'FNR==NR{a[$0];next}($4 in a)' <your filtered fileB with single field> fileA
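Note that this one-liner needs an exact match on the 4th field. If, as in the sample line above, that field carries a version suffix (ENST00000449339.1) while the filtered fileB holds bare IDs, here is a sketch that strips the suffix first, along the lines of the longer script above:
awk 'FNR==NR{a[$0]; next} {k=$4; sub(/\.[0-9]+$/, "", k)} k in a' fileBcutdown fileA > outputfile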

Extract K-th Line from Chunks Using Sed/AWK/Perl

I have some data that looks like this. It comes in chunks of four lines. Each chunk starts with a # character.
#SRR037212.1 FC30L5TAA_102708:7:1:741:1355 length=27
AAAAAAAAAAAAAAAAAAAAAAAAAAA
+SRR037212.1 FC30L5TAA_102708:7:1:741:1355 length=27
::::::::::::::::::::::::;;8
#SRR037212.2 FC30L5TAA_102708:7:1:1045:1765 length=27
TATAACCAGAAAGTTACAAGTAAACAC
+SRR037212.2 FC30L5TAA_102708:7:1:1045:1765 length=27
88888888888888888888888888
What I want to do is extract the last line of each chunk, yielding:
::::::::::::::::::::::::;;8
888888888888888888888888888
Note that the last line of the chunk may contain any standard ASCII character, including #.
Is there an effective one-liner to do it?
The following sed command will print the 3rd line after the pattern:
sed -n '/^#/{n;n;n;p}' file.txt
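Because each n consumes a line, the intervening lines are never tested against /^#/ themselves, so a last line that happens to start with # (the caveat in the question) does not trip this up. On the sample above it should give:
$ sed -n '/^#/{n;n;n;p}' file.txt
::::::::::::::::::::::::;;8
88888888888888888888888888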
If there are no blank lines:
perl -ne 'print if $. % 4 == 0' file
$ awk 'BEGIN{RS="#";FS="\n"}{print $4 } ' file
::::::::::::::::::::::::;;8
88888888888888888888888888
If you always have those 4 lines in a chunk, some other ways
$ ruby -ne 'print if $.%4==0' file
::::::::::::::::::::::::;;8
88888888888888888888888888
$ awk 'NR%4==0' file
::::::::::::::::::::::::;;8
88888888888888888888888888
It also seems like your line is always after the line that starts with "+", so
$ awk '/^\+/{getline;print}' file
::::::::::::::::::::::::;;8
88888888888888888888888888
$ ruby -ne 'gets && print if /^\+/' file
::::::::::::::::::::::::;;8
88888888888888888888888888
This prints the lines before lines that start with #, and also the last line. It can work with non-uniformly sized chunks, but assumes that only a chunk's leading line starts with #.
sed -ne '1d;$p;/^#/!{x;d};/^#/{x;p}' file
Some explanation is in order:
First, you don't need the first line, so delete it: 1d
Next, you always need the last line, so print it: $p
If you don't have a match, swap it into the hold buffer and delete it: x;d
If you do have a match, swap it out of the hold buffer and print it: x;p
This works similarly to dogbane's answer
awk '/^#/ {mark = NR} NR == mark + 3 {print}' inputfile
And, like that answer, will work regardless of the number of lines in each chunk (as long as there are at least 4).
The direct analog to that answer, however, would be:
awk '/^#/ {getline; getline; getline; print}' inputfile
This can be done easily using grep:
grep -A 1 '^#' ./infile
This might work for you (GNU sed):
sed '/^#/,+2d' file
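The addr,+N form is a GNU sed extension: each line matching /^#/ starts a range covering itself and the next two lines, and deleting those ranges leaves only every fourth line. On the sample above it should give:
$ sed '/^#/,+2d' file
::::::::::::::::::::::::;;8
88888888888888888888888888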
