Linux file compare: strings not included in the other file

How can I compare 2 files in Linux containing, for example:
file1
1
2
3
4
5
file2
1
2
3
and get the result
file3
4
5

How about using comm ("select or reject lines common to two files")?
comm -3 file1 file2 > file3
would work for your simple example.
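Note that comm expects both inputs to be sorted, and -3 on its own still prints any lines unique to file2 in a second, tab-indented column. If you only ever want the lines that are in file1 but not in file2, a slightly safer variant (assuming bash for the process substitution) is:
comm -23 <(sort file1) <(sort file2) > file3
Here -2 suppresses the lines unique to file2 and -3 suppresses the lines common to both, leaving only the file1-only lines.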

If you want to list all the lines that are in file1, but not in file2, you can do this:
diff file1 file2 | grep "^<" | sed "s/^< //" > file3
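For the sample files above the pipeline behaves like this:
$ diff file1 file2
4,5d3
< 4
< 5
$ diff file1 file2 | grep "^<" | sed "s/^< //"
4
5
The grep keeps only the lines diff marks as present in file1 (the "<" lines), and sed strips the marker.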

How to print only words that don't match between two files? [duplicate]

FILE1:
cat
dog
house
tree
FILE2:
dog
cat
tree
I need only this to be printed:
house
$ cat file1
cat
dog
house
tree
$ cat file2
dog
cat
tree
$ grep -vF -f file2 file1
house
The -v flag shows only the non-matches, -f names a file whose lines are used as the patterns, and -F treats those patterns as fixed strings rather than regular expressions (so it isn't slowed down by any pattern matching).
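One caveat: without -x each pattern is matched as a substring, so the pattern cat would also filter out a (hypothetical) line like catalog. For strict whole-line filtering you can add -x:
$ grep -vxF -f file2 file1
house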
Using awk
awk 'FNR==NR{arr[$0]=1; next} !($0 in arr)' FILE2 FILE1
First build an associative array with the words from FILE2, then loop over FILE1 and print only the lines that are not already in the array.
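For the sample files this prints just the word missing from FILE2:
$ awk 'FNR==NR{arr[$0]=1; next} !($0 in arr)' FILE2 FILE1
house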
Using comm
comm -2 -3 <(sort FILE1) <(sort FILE2)
-2 suppresses lines unique to FILE2 and -3 suppresses lines found in both.
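Again, for the sample files:
$ comm -2 -3 <(sort FILE1) <(sort FILE2)
house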
If you want just the words, you can sort the files, diff them, then use sed to filter out diff's symbols:
diff <(sort file1) <(sort file2) | sed -n '/^</s/^< //p'
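With the sample files this produces:
$ diff <(sort file1) <(sort file2)
3d2
< house
$ diff <(sort file1) <(sort file2) | sed -n '/^</s/^< //p'
house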
Awk is an option here:
awk 'NR==FNR { arr[$1]="1" } NR != FNR { if (arr[$1] == "") { print $0 } } ' file2 file1
Create an array called arr, using the contents of file2 as indexes. Then, for each line of file1, check whether an entry for it exists in arr; if it doesn't, print the line.
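Run against the sample files it again prints only the unmatched word:
$ awk 'NR==FNR { arr[$1]="1" } NR != FNR { if (arr[$1] == "") { print $0 } } ' file2 file1
house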

AWK to filter two files if their columns match

I am basically working with two files (file1 and file2). The goal is to write a script that pulls rows from file1 if columns 1, 2, 3 match between file1 and file2. Here's the code I have been playing with:
awk -F'|' 'NR==FNR{c[$1$2$3]++;next};c[$1$2$3] > 0' file1 file2 > filtered.txt
file1 and file2 both look like this (but have many more columns):
name1 0 c
name1 1 c
name1 2 x
name2 3 x
name2 4 c
name2 5 c
The awk code I provided isn't producing any output. Any help would be appreciated!
Your delimiter isn't a pipe; try this:
$ awk 'NR==FNR {c[$1,$2,$3]++; next} c[$1,$2,$3]' file1 file2 > filtered.txt
or
$ awk 'NR==FNR {c[$0]++; next} c[$0]' file1 file2 > filtered.txt
However, if you're matching the whole line, it's perhaps easier with grep:
$ grep -xFf file1 file2 > filtered.txt
awk '{key=$1 FS $2 FS $3} NR==FNR{file2[key];next} key in file2' file2 file1
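As a quick sanity check, suppose a hypothetical file2 that contains only two of those key combinations (with extra columns), while file1 holds the six sample rows above; the last command then prints the matching rows of file1:
$ cat file2
name1 0 c foo
name2 4 c bar
$ awk '{key=$1 FS $2 FS $3} NR==FNR{file2[key];next} key in file2' file2 file1
name1 0 c
name2 4 c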

Extracting difference values between two files [duplicate]

Working in a Linux/shell environment, how can I accomplish the following:
text file 1 contains:
1
2
3
4
5
text file 2 contains:
6
7
1
2
3
4
I need to extract the entries in file2 which are not in file1, so '6' and '7' in this example, and also know where it found them.
For example: 6, 7 in file2.
I already work with this awk command
awk 'FNR==NR{a[$0]++;next}!a[$0]' file1 file2
But this command only shows the difference, so 6 and 7, but not where it found them.
How can I do this from the command line?
many thanks!
Using awk you can do this:
awk 'FNR==NR { seen[$0]=FILENAME; next }
{if ($1 in seen) delete seen[$1]; else print $1, FILENAME}
END { for (i in seen) print i, seen[i] }' file{1,2}
6 file2
7 file2
5 file1
While traversing file1 we store each line as a key in the array seen, with the FILENAME as its value. Then, while iterating over file2, we print each entry that is missing from the array and delete the entries that are found (the common ones). Finally, in the END block we print all remaining entries of seen, i.e. the lines unique to file1.
The comm program will tell you which lines two files have in common (or which are unique to one file). comm requires its inputs to be sorted lexically, which is what the sort calls below take care of.
$ echo "only in file1"; comm -2 -3 <(sort file1) <(sort file2)
only in file1
5
$ echo "only in file2"; comm -1 -3 <(sort file1) <(sort file2)
only in file2
6
7
$ echo "common to file1 and file2"; comm -1 -2 <(sort file1) <(sort file2)
common to file1 and file2
1
2
3
4

script to join 2 separate text files and also add specified text

Hi there, I want to create a bash script that does the following:
I have 2 text files: one is adf.ly links and the other recipe names.
I want to create a batch script that takes each line from each text file and does the following:
<li><a href="**adf.ly link line 1 of txt file**">**Recipe name line 1 of txt file**</a></li>
<li><a href="**adf.ly link line 2 of txt file**">**Recipe name line 2 of txt file**</a></li>
etc., and save all the results to another text file called LINKS.txt.
Someone please help or point me in the direction of a Linux bash script.
This awk one-liner will do the job:
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
A clearer version (same script):
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}
NR==FNR{a[NR]=$0;next}
{printf l, a[FNR],$0}' file1 file2
example:
kent$ seq -f"%g from file1" 7 >file1
kent$ seq -f"%g from file2" 7 >file2
kent$ head file1 file2
==> file1 <==
1 from file1
2 from file1
3 from file1
4 from file1
5 from file1
6 from file1
7 from file1
==> file2 <==
1 from file2
2 from file2
3 from file2
4 from file2
5 from file2
6 from file2
7 from file2
kent$ awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"};NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
<li><a href="1 from file1">1 from file2</a></li>
<li><a href="2 from file1">2 from file2</a></li>
<li><a href="3 from file1">3 from file2</a></li>
<li><a href="4 from file1">4 from file2</a></li>
<li><a href="5 from file1">5 from file2</a></li>
<li><a href="6 from file1">6 from file2</a></li>
<li><a href="7 from file1">7 from file2</a></li>
EDIT for the comment of OP:
if you have only one file (the foo here is just dummy anchor text):
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1
output from the same file1 example:
<li><a href="1 from file1">foo</a></li>
<li><a href="2 from file1">foo</a></li>
<li><a href="3 from file1">foo</a></li>
<li><a href="4 from file1">foo</a></li>
<li><a href="5 from file1">foo</a></li>
<li><a href="6 from file1">foo</a></li>
<li><a href="7 from file1">foo</a></li>
if you want to save the output to a file:
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1 > newfile
Try doing this :
$ cat file1
aaaa
bbb
ccc
$ cat file2
111
222
333
$ paste file1 file2 | while read a b; do
printf '<li><a href="%s">%s</a></li>\n' "$a" "$b"
done | tee newfile
Output
<li><a href="aaaa">111</a></li>
<li><a href="bbb">222</a></li>
<li><a href="ccc">333</a></li>

how to sort a file according to another file?

Is there a Unix one-liner or some other quick way on Linux to sort a file according to the permutation set by the sorting of another file?
i.e.:
file1: (separated by CRLFs, not spaces)
2
3
7
4
file2:
a
b
c
d
sorted file1:
2
3
4
7
so the result of this one-liner should be
sorted file2:
a
b
d
c
paste file1 file2 | sort | cut -f2
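paste joins the two files line by line with a tab, sort then orders those pairs by the file1 column, and cut -f2 keeps only the file2 column:
$ paste file1 file2 | sort | cut -f2
a
b
d
c
Note that plain sort compares lexically; if file1 held multi-digit numbers you would want sort -n (or sort -k1,1n) so that 10 does not sort before 2.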
Below is a perl one-liner that will print the contents of file2 based on the sorted input of file1.
perl -n -e 'BEGIN{our($x,$t,@a)=(0,1,)}if($t){$a[$.-1]=$_}else{$a[$.-1].=$_ unless($.>$x)};if(eof){$t=0;$x=$.;close ARGV};END{foreach(sort @a){($j,$l)=split(/\n/,$_,2);print qq($l)}}' file1 file2
Note: If the files are different lengths, the output will only print up to the shortest file length.
For example, if file-A has 5 lines and file-B has 8 lines then the output will only be 5 lines.
