Insert a file into the middle of another file in bash - linux

I need to insert the contents of a file at a specific location in another file.
I already have the line number. My first file is:
file1.txt:
I
am
Cookie
While the second one is
file2.txt:
a
black
dog
named
So, after the solution, file1.txt should look like this:
I
am
a
black
dog
named
Cookie
The solution should be compatible with the presence of characters like " and / in both files.
Any tool is ok as long as it's native (I mean, no new software installation).

Another option, apart from what RavinderSingh13 suggested, using sed:
To insert the text of file2.txt into file1.txt after a specific line:
sed -i '2 r file2.txt' file1.txt
Output:
I
am
a
black
dog
named
Cookie
Further, to insert the file after a matched pattern:
sed -i '/^YourPattern/ r file2.txt' file1.txt
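For the sample files above, for instance, to insert file2.txt after the line "am" (a GNU sed sketch; the pattern here is just an example anchor):
sed -i '/^am$/ r file2.txt' file1.txt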

Could you please try the following and let me know if it helps.
awk 'FNR==3{system("cat file2.txt")} 1' file1.txt
Output will be as follows.
I
am
a
black
dog
named
Cookie
Explanation: while reading file1.txt, check whether the current line number (FNR) is 3. If so, use awk's system() function, which lets us call shell commands, to print file2.txt with cat. The trailing 1 is an always-true condition, so awk prints every line of file1.txt. This way the lines of file2.txt are spliced into the output of file1.txt.
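Note that, unlike the sed solution, this awk command writes to standard output rather than editing file1.txt in place. To store the result back, you could redirect to a temporary file first, e.g.:
awk 'FNR==3{system("cat file2.txt")} 1' file1.txt > tmp && mv tmp file1.txt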

How about
head -2 file1 && cat file2 && tail -1 file1
You can count the number of lines in file1, to decide the head and tail parameters, using
wc -l file1
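To save the combined output back into file1, group the commands and redirect to a temporary file first (writing directly to file1 while reading from it would truncate it); a minimal sketch:
{ head -2 file1; cat file2; tail -1 file1; } > tmp && mv tmp file1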

Related

delete lines in one file based on the contents of another

I'm trying to find a way to speed up a delete process.
Currently I have two files, file1.txt and file2.txt.
file1 contains records of 20 digits, nearly 10,000 lines.
file2 contains records 6,500 characters long, nearly 2 million lines.
My goal is to delete the lines in file2 that match records in file1.
To do this I create a sed script file from the record lines of the first file, like this:
File1:
/^20606516000100070004/d
/^20630555000100030001/d
/^20636222000800050001/d
Command used: sed -i -f file1 file2
The command works fine but it takes about 4 hours to delete the 10,000 matching lines from file2.
I'm looking for a solution that can speed up the delete process.
Additional information:
each record of file1 is in file2 for sure!
every line of file2 starts with a 20-digit number that may or may not match one of the records in file1.
to illustrate the point above, here is a line from file2 (this is not the entire line; as explained, each record of file2 is 6,500 characters long):
20606516000100070004XXXXXXX19.202107.04.202105.03.202101.11.202001.11.2020WWREABBXBOU
Thanks in advance.
All you need is this, using any awk in any shell on every Unix box:
awk 'NR==FNR{a[$0]; next} !(substr($0,1,20) in a)' file1 file2
and with files such as you described on a reasonable processor it'll run in a couple of seconds rather than 4 hours.
Just make sure file1 only contains the numbers you want to match on, not a sed script using those numbers, e.g.:
$ head file?
==> file1 <==
20606516000100070004
20630555000100030001
20636222000800050001
==> file2 <==
20606516000100070004XXXXXXX19.202107.04.202105.03.202101.11.202001.11.2020WWREABBXBOU
99906516000100070004XXXXXXX19.202107.04.202105.03.202101.11.202001.11.2020WWREABBXBOU
$ awk 'NR==FNR{a[$0]; next} !(substr($0,1,20) in a)' file1 file2
99906516000100070004XXXXXXX19.202107.04.202105.03.202101.11.202001.11.2020WWREABBXBOU
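For reference, here is the same one-liner spread out with comments (the behavior is identical):
awk '
NR==FNR { a[$0]; next }         # 1st file (file1): store each 20-digit key in array a
!(substr($0,1,20) in a)         # 2nd file (file2): print lines whose first 20 chars are not a stored key
' file1 file2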
You can read the first file (containing the first 20 digits of the records to suppress) like this:
while IFS= read -r code; do
    # ... process the current code ...
done < first_file.txt
And to process the current code, you should read only the first 20 characters of every file. To read those first characters you could use:
var=$(head -c 20 "$curfile")
Then, you can test whether the code you read from the first file ($code) matches the first 20 characters you read from $curfile:
if [ "$code" = "$var" ]; then rm -v "$curfile"; fi
Reading only the first 20 characters of every big file is likely to be much faster.
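Putting those pieces together, a rough sketch (assuming, as this answer does, that each big record lives in its own file, here under a hypothetical directory bigfiles/):
while IFS= read -r code; do
    for curfile in bigfiles/*; do
        var=$(head -c 20 "$curfile")
        # delete the file whose first 20 characters match the current code
        if [ "$code" = "$var" ]; then
            rm -v "$curfile"
        fi
    done
done < first_file.txt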
With GNU awk, you could try the following solution too.
awk 'FNR==NR{arr[$0];next} !($1 in arr)' file1 FPAT="^.{20}" file2
Explanation: this prints the lines of file2 that are not present in file1, comparing only the first 20 characters of each line of file2 (captured as $1 via FPAT) against complete lines from file1.

grep between two files

I want to find matching lines from file 2 when compared to file 1.
file2 contains multiple columns and column one contains information that could match file1.
I tried the commands below and they didn't give any matching results (the contents of file1 are definitely in file2). I have used these commands previously to compare other files and they worked.
grep -f file1 file2
grep -Fwf file1 file2
When I tried to grep for whatever is not matching, I got results:
grep -vf file1 file2
file1 contains a list of genes (754 genes), one per line:
ATM
ATP5B
ATR
ATRIP
ATRX
I have a feeling the problem is with my file1. When I typed several items manually into file1 just to test, and grepped against file2, I got the matching lines from file2.
When I copied the contents of file1 (originally in Excel) into Notepad to make a .txt file, I didn't get any matching results.
I can't see any problem with my file1. Any suggestions?
You said,
I copied the contents of file1 (originally in excel) into notepad making a .txt file
It's likely that the txt file contains carriage-return/linefeed pairs which are screwing up the grep. As I suggested in a comment, try this:
tr -d '\015' < file1 > file1a
grep -Fwf file1a file2
The tr invocation deletes all the carriage returns, giving you a proper Unix/Linux text file with only newlines (\n) as line terminators.
You said:
I can't see any problem with my file1.
Here's how to see the extra-carriage-return problem:
cat -v file1
Those little ^M markers at the end of each line are cat -v's way of showing you the carriage return control codes.
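With the gene list from the question, the output would look something like this:
$ cat -v file1
ATM^M
ATP5B^M
ATR^M
ATRIP^M
ATRX^M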
Addendum:
Carriage Return (CR) is decimal 13, hex 0x0d, octal 015, \r in C.
Line Feed (LF) is decimal 10, hex 0x0a, octal 012, \n in C.
Because it's an old-school utility, tr accepts octal (base 8) notation for control characters.
(I think in some versions tr -d '\r' would work, but I'm not sure, and anyway I'm not sure what version you have. tr -d '\015' should be universal.)
A simple shell script that performs a grep for every line in file1.txt:
#!/bin/bash
# grep file2.txt for every line of file1.txt
while IFS= read -r content; do
    if grep -q "$content" file2.txt; then
        echo "$content was found in file2" >> results.txt
    fi
done < file1.txt
Let's suppose this is file2:
$ cat file2
a b ATM
c d e
f ATR g
Using grep and process substitution
We can get lines from file1 that match any of the columns in file2 via:
$ grep -wFf <(sed 's/[[:space:]]/\n/g' file2) file1
ATM
ATR
This works because it converts file2 to a form that grep understands:
$ sed 's/[[:space:]]/\n/g' file2
a
b
ATM
c
d
e
f
ATR
g
Using awk
$ awk 'FNR==NR{for (i=1;i<=NF;i++) seen[$i]; next} $0 in seen' file2 file1
ATM
ATR
Here, awk keeps track of every column that it sees in file2 and then prints only those lines of file1 that match one of those columns.
Try using the comm command. It is, roughly speaking, the reverse of diff: it reports the lines the two (sorted) files have in common, along with the lines unique to each.
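Note that comm requires both inputs to be sorted. A minimal sketch that prints only the lines common to both files (-1 and -2 suppress the lines unique to file1 and file2, respectively):
comm -12 <(sort file1) <(sort file2)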

using grep -n (data) file1.txt > file2.txt

I'm trying to understand a shell code which includes a line like this:
grep -n data file1.txt > file2.txt
Where data is the text I want to search for.
What does this command mean?
You can get a detailed answer here: http://explainshell.com/explain?cmd=%20grep%20-n%20data%20file1.txt%20%3E%20file2.txt
To sum it up:
grep will look for the string data in file1.txt and will output both the matching lines and their line number (because of the -n flag).
You could read the manual (man grep) to have a better understanding of what grep does.
The output will be redirected into file2.txt; that's what > is used for.
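For example, with a hypothetical file1.txt:
$ cat file1.txt
nothing here
some data here
$ grep -n data file1.txt > file2.txt
$ cat file2.txt
2:some data here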

how to subtract two files in linux

I have two files like below:
file1
"Connect" CONNECT_ID="12"
"Connect" CONNECT_ID="11"
"Connect" CONNECT_ID="122"
"Connect" CONNECT_ID="109"
file2
"Quit" CONNECT_ID="12"
"Quit" CONNECT_ID="11"
The file contents are not exactly the same but are similar to the above, and each file has at least 100,000 records.
Now I want to get the result shown below into file1 (meaning the final result should end up in file1):
"Connect" CONNECT_ID="122"
"Connect" CONNECT_ID="109"
I have used a while loop something like below:
awk '{ print $2 }' file2 | sed "s/CONNECT_ID=//g" > sample.txt
while read actual; do
    grep -w -v "$actual" file1 > file1_tmp
    mv -f file1_tmp file1
done < sample.txt
Here I have adjusted my code to match the example, so it may or may not work as shown.
My problem is that the loop takes more than an hour to complete.
So can anyone suggest how to achieve the same result another way, for example with diff, comm, sed, awk, or any other Linux command that runs faster?
Mainly I want to eliminate this big while loop.
Most UNIX tools are line-based, and since you don't have whole-line matches, grep, comm, and diff are out of the window. To extract field-based information like you want, awk is perfect:
$ awk 'NR==FNR{a[$2];next}!($2 in a)' file2 file1
"Connect" CONNECT_ID="122"
"Connect" CONNECT_ID="109"
To store the results back in file1 you'll need to redirect the output to a temporary file and then move that file onto file1, like so:
$ awk 'NR==FNR{a[$2];next}!($2 in a)' file2 file1 > tmp && mv tmp file1
Explanation:
The awk variable NR increments for every record read, that is each line in every file. The FNR variable increments for every record but gets reset for every file.
NR==FNR     # This condition is only true while reading the first file given, file2
a[$2]       # Add the second field of file2 to the array as a lookup table
next        # Get the next line of file2 (skips any following blocks)
!($2 in a)  # We are now reading file1; if its second field is not in the lookup
            # array, execute the default block, i.e. print the line
To modify this command you just need to change the fields that are matched. In your real case, if you want to match field 1 of file1 against field 4 of file2, then you would do:
$ awk 'NR==FNR{a[$4];next}!($1 in a)' file2 file1
This might work for you (GNU sed):
sed -r 's|\S+\s+(\S+)|/\1/d|' file2 | sed -f - -i file1
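To see what the first sed produces, here is its output for the sample file2 from the question:
$ sed -r 's|\S+\s+(\S+)|/\1/d|' file2
/CONNECT_ID="12"/d
/CONNECT_ID="11"/d
The second sed then reads this generated delete script from stdin (-f -) and applies it to file1 in place.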
The tool best suited to this job is join(1). It joins two files based on values in a given column of each file. Normally it just outputs the lines that match across the two files, but it also has a mode to output the lines from one of the files that do not match the other file.
join requires that the files be sorted on the field(s) you are joining on, so either pre-sort the files, or use process substitution (a bash feature - as in the example below) to do it on the one command line:
$ join -j 2 -v 1 -o "1.1 1.2" <(sort -k2,2 file1) <(sort -k2,2 file2)
"Connect" CONNECT_ID="122"
"Connect" CONNECT_ID="109"
-j 2 says to join the files on the second field for both files.
-v 1 says to only output lines from file 1 that do not match any in file 2
-o "1.1 1.2" says to order the output with the first field of file 1 (1.1) followed by the second field of file 1 (1.2). Without this, join will output the join column first followed by the remaining columns.
You may need to analyze file2 first and add every ID that appears to a cache (e.g. an in-memory set).
Then scan file1 line by line and check whether each ID is in the cache.
Python code like this:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re

p = re.compile(r'CONNECT_ID="(.*)"')

# collect the IDs that appear in file2
quit_ids = set([])
for line in open('file2'):
    m = p.search(line)
    if m:
        quit_ids.add(m.group(1))

# write out only the lines of file1 whose ID is not in the cache
output = open('output_file', 'w')
for line in open('file1'):
    m = p.search(line)
    if m and m.group(1) not in quit_ids:
        output.write(line)
output.close()
The main bottleneck is not really the while loop, but the fact that you rewrite the output file thousands of times.
In your particular case, you might be able to get away with just this:
cut -f2 file2 | grep -Fwvf - file1 >tmp
mv tmp file1
(I don't think the -w option to grep is useful here, but since you had it in your example, I retained it.)
This presupposes that file2 is tab-delimited; if not, the awk '{ print $2 }' file2 you had there is fine.
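If file2 is space-delimited, the same idea becomes:
awk '{ print $2 }' file2 | grep -Fwvf - file1 > tmp
mv tmp file1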

Comparing two text files with each other

If I had two text files, for example:
file1.txt
apple
orange
pear
banana
file2.txt
banana
pear
How would I take all the phrases on the lines of file2.txt away from file1.txt?
So file1.txt would be left with:
apple
orange
grep -v -F -f file2.txt file1.txt
-v means listing only the lines of file1.txt that do not match the pattern; -f means taking the patterns from a file, in this case file2.txt; and -F means interpreting the patterns as a list of fixed strings, separated by newlines, any of which is to be matched.
The grep command comes preinstalled on OS X and Linux. On Windows you'll have to install it, for example via Cygwin.
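With the sample files above:
$ grep -v -F -f file2.txt file1.txt
apple
orange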
combine file1 not file2
On Debian and derivatives, combine can be found in the moreutils package.
If the files are huge (note they must also be sorted), comm may be preferable to the more general grep solution proposed by Ivan, since it operates line by line and thus does not need to load the entirety of file2.txt into memory (or search it for each line).
comm -3 file1-sorted.txt file2-sorted.txt | sed 's/^\t//'
The sed command is needed to remove the leading tab that comm inserts before lines unique to the second file. (Alternatively, comm -23 prints only the lines unique to the first file and needs no sed.)
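With the sample data, after sorting both files:
$ sort file1.txt > file1-sorted.txt
$ sort file2.txt > file2-sorted.txt
$ comm -3 file1-sorted.txt file2-sorted.txt | sed 's/^\t//'
apple
orange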
