Copy differences between two files in unix - linux

Firstly, which is the best and fastest unix command to get only the differences between two files? I tried using diff to do it (below).
I tried the answer given by Neilvert Noval over here - Compare two files line by line and generate the difference in another file
code -
diff -a --suppress-common-lines -y file1.txt file2.txt >> file3.txt
But I get a lot of spaces and a > symbol before the different lines. How do I fix that? I was thinking of stripping the spaces and the leading '>', but I'm not sure that is a neat fix.
My file1.txt has -
Hello World!
Its such a nice day!
#this is a newline and not a line of text#
My file2.txt has -
Hello World!
Its such a nice day!
Glad to be here:)
#this is a newline and not a line of text#
Output - " #Many spaces here# > Glad to be here:)"
Expected output - Glad to be here:)

Another way to get the differences is by using awk:
awk 'FNR==NR{a[$0];next}!($0 in a)' file1 file2
Though I must admit that I haven't run any benchmarks and can't say which is the fastest solution.
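For the sample files in the question that would be run as below (file1.txt must come first, since its lines are remembered and then any line of file2.txt not among them is printed):
awk 'FNR==NR{a[$0];next}!($0 in a)' file1.txt file2.txt
which should print just:
Glad to be here:)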

The -y option to diff makes it produce a "side by side" diff, which is why you have the spaces. Try -U0 (or --unified=0) for the unified format with zero lines of context. That should print:
+Glad to be here:)
The plus means the line was added, whereas a minus means it was removed.
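For the two sample files that would be roughly:
diff -U0 file1.txt file2.txt
--- file1.txt
+++ file2.txt
@@ -2,0 +3 @@
+Glad to be here:)
(the --- and +++ header lines also carry timestamps, trimmed here).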

If you want to keep the side-by-side format, strip the left column and the > marker rather than trying to delete individual characters with tr:
diff -a --suppress-common-lines -y file1.txt file2.txt | sed -n 's/^.*>[[:space:]]*//p' >> file3.txt

Related

Using Sed or Awk to divide a file into two based on whether a line contains a numeric value

I have used sed and awk for a little while now, but I am having a challenge with the problem below. I am asking an experienced sed/awk guru for help.
I have a file where some lines have numbers and some lines do not, like:
afjjdjfj.uihuihi
trfg.rtyhd
0rtgfd.tjbghhh
hbvfd4.rtgbvdgf
00fhfg.fdrgf
rtygfd.ijhniuh
etc.
I would like to have exactly two files out of this one, where every line is represented in one of the two files (none are deleted).
One containing all lines with any digits 0-9 in them, so given the above file the result would be:
0rtgfd.tjbghhh
hbvfd4.rtgbvdgf
00fhfg.fdrgf
and another file containing the rest of the lines that do not have any digits 0-9 in them, so given the above file it would be:
afjjdjfj.uihuihi
trfg.rtyhd
rtygfd.ijhniuh
I've tried different strategies in both sed and awk and nothing is giving me exactly what I need.
What would be the best sed or awk one liner to solve this problem?
Thank you for your time,
Tom
Easily with Awk:
awk '/[0-9]/{print > "file1"; next} {print > "file2"}' inputfile
With a single GNU sed command:
sed -ne '/[0-9]/w with_digits.txt' -e '//!w no_digits.txt' input
Results:
> cat no_digits.txt
afjjdjfj.uihuihi
trfg.rtyhd
rtygfd.ijhniuh
> cat with_digits.txt
0rtgfd.tjbghhh
hbvfd4.rtgbvdgf
00fhfg.fdrgf
(From the sed documentation: w filename - Write the pattern space to filename.)
If you don't mind running twice over the input, you can use just grep:
grep '[0-9]' input > with_digits
grep -v '[0-9]' input > without_digits
perl -MFile::Slurp -lne '/\d/ ? append_file("digits.txt", "$_\n") : append_file("no_digits.txt", "$_\n")' input.txt

How to input a command's result as a string argument in sed

I want to execute a command as follows in my bash terminal:
sed -i '6i `sed '1!d' input.in`' out
with which I can insert at line 6 of file out (editing it in place via the -i option) the result of the sed '%1!d' input.in command. I haven't found anything useful, and have tried `com`, $(com) and com | sed -i '6i ' out, where com stands for sed '%1!d' input.in. I don't have any problem changing the syntax of the whole command, but I want it written as a single command line in the terminal using sed.
Thanks for listening,
awaiting your answer.
For EdMorton:
Example Input:
input.in:
into a lake.
out:
Mary was runing around a pond and fell
into a lake.
Mary fell into a what?
Desired Output:
Mary was runing around a pond and fell
into a lake.
Mary fell into a what?
into a lake.
Try using r on standard input instead of i.
sed '%1!d' input.in |
sed -i '6r /dev/stdin' out
If your platform doesn't support /dev/stdin or /dev/fd/0, see if your sed supports - to mean standard input ... or, in the worst case, resort to a temporary file.
As commenters have already pointed out, %1!d does not appear to be a valid command in most sed dialects, but that is basically unimportant here. (If you mean to print just the first line, maybe you mean sed '1!d', although sed -n 'p;q' or plain sed q does that more efficiently.)
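If you do have to fall back to a temporary file, a minimal sketch of the same idea (assuming mktemp is available and using the corrected 1!d address) would be:
tmp=$(mktemp) &&
sed '1!d' input.in > "$tmp" &&
sed -i "6r $tmp" out
rm -f "$tmp"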
sed is for simple substitutions on individual lines, that is all. For anything else you should be using awk.
Given this modified input file
$ cat input.in
a Windows folder C:\Windows\Temp
Here is what the sed solution you posted in your comments does:
$ sed '1!d' input.in > temp.of.in && sed "6i `cat temp.of.in`" out
Mary was runing around a pond and fell
into a lake.
Mary fell into a what?
a Windows folder C:WindowsTemp
and here is what an awk solution does more efficiently and accurately and without a temp file:
$ awk 'NR==1{x=$0;nextfile} FNR==6{print x} 1' input.in out
Mary was runing around a pond and fell
into a lake.
Mary fell into a what?
a Windows folder C:\Windows\Temp
Notice the awk solution preserved the path-separator backslashes while the sed one stripped them. Also note that you should really add && rm temp.of.in to the end of your sed command line to clean up the temp file and you should be using $(..) to execute your command, not obsolete backticks.
The awk solution uses GNU awk for nextfile; with other awks you'd replace ;nextfile with }NR==FNR{next or similar, but since you are using GNU sed I assume you have GNU awk too.
Note that if you DID have a burning desire to use sed and accept it won't exactly reproduce the input, there are simpler, more efficient ways to do what your current script does, e.g.:
sed "6i $(head -1 input.in)" out
or even your original idea, just rewritten to remove the obsolete backticks and negative logic of 1!d:
sed "6i $(sed -n '1p' input.in)" out
But seriously - just use awk. For anything other than simple substitutions on individual lines it's much more robust, efficient, clear, portable, extensible, etc. etc. than sed.
EDIT To address the questions in your comments:
Can you explain the arguments on awk.
There are no arguments, just a script that says: if this is the first line read from the first file, save it in variable x then move on to the next file. If this is line 6 of the 2nd file, print the contents of variable x. For every line of the 2nd file, print it (the 1 is idiomatic but a bit tricky at first glance - it's a true condition so it invokes the default action of printing the current input line, equivalent to just writing {print}).
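Spelled out with comments, the same one-liner is:
awk '
NR==1  { x = $0; nextfile }   # line 1 overall, i.e. line 1 of input.in: save it and skip to the next file
FNR==6 { print x }            # just before line 6 of out is printed, print the saved line
1                             # true for every remaining line, so the default action prints it
' input.in out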
how can i replace the out file with the output (without using '>') as the option -i does on sed and avoid printing it to stdout? Just like GNU sed has -i, GNU awk has -i inplace. Be careful though because, just like with sed, it applies to every input file, so if you don't print the contents of the first file then when the script is done the first file will be empty. There are various ways to deal with that, including simply printing the lines from file 1 or turning inplace editing on/off in BEGINFILE/ENDFILE blocks, see https://www.gnu.org/software/gawk/manual/gawk.html#Extension-Sample-Inplace, but IMHO awk 'script' file1 file2 > temp && mv temp file2 is the simplest and clearest as well as being portable to all awks/seds/whatever.
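For this specific case that portable form would be something like:
awk 'NR==1{x=$0; nextfile} FNR==6{print x} 1' input.in out > temp && mv temp out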
Also, is there a multiline solution, like "take lines 1 to 4" from "input.in" and drop them on line 6 of "out"? No problem:
awk '
NR==FNR { if (NR<=4) x=x $0 ORS; else nextfile }
FNR==6 { printf "%s", x }
{ print }
' input.in out
I changed the 1 from the previous script to { print } for clarity.

remove \n and keep space in linux

I have a file that contains \n hidden at the end of each line:
input:
s3741206\n
s2561284\n
s4411364\n
s2516482\n
s2071534\n
s2074633\n
s7856856\n
s11957134\n
s682333\n
s9378200\n
s1862626\n
I want to remove the \n at the end of each line.
desired output:
s3741206
s2561284
s4411364
s2516482
s2071534
s2074633
s7856856
s11957134
s682333
s9378200
s1862626
However, I tried this:
tr -d '\n' < file1 > file2
but the output comes out like below, with no spaces or newlines at all:
s3741206s2561284s4411364s2516482s2071534s2074633s7856856s11957134s682333s9378200s1862626
I also tried sed $'s/\n//g' -i file1 and it doesn't work on macOS.
Thank you.
This is a possible solution using sed:
sed 's/\\n/ /g'
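Run against the file, that would be something like:
sed 's/\\n/ /g' file1 > file2
(the doubled backslash makes sed match a literal backslash followed by n, not an actual newline).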
with awk
awk '{sub(/\\n/,"")} 1' < file1 > file2
What you are describing so far in your question+comments doesn't make sense. How can you have a multi-line file with a hidden newline character at the end of each line? What you show as your input file:
s3741206\n
s2561284\n
s4411364\n
etc.
where each "\n" above according to your comment is a single newline character "\n" is impossible. If those "\n"s were newline characters then your file would simply look like:
s3741206
s2561284
s4411364
etc.
There are really only 2 possibilities I can think of:
You are wrongly interpreting what you are seeing in your input file and/or using the wrong terminology and you actually DO have \r\n at the end of every line. Run cat -v file to see the \rs as ^Ms (sketched below) and run dos2unix or similar (e.g. sed 's/\r$//' file) to remove the \rs - you do not want to remove the \ns or you will no longer have a POSIX text file and so POSIX tools will exhibit undefined behavior when run on it. If that doesn't work for you then copy/paste the output of cat -v file into your question so we can see for sure what is in your file.
Or:
It's also entirely possible that your file is a perfectly fine POSIX text file as-is and you are incorrectly assuming you will have a problem for some reason, so also include in your question a description of the actual problem you are having, an example of the command you are executing on that input file, the output you are getting and the output you expected to get.
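If the first possibility applies, the cat -v output would look something like this (an illustration assuming DOS \r\n line endings, not the OP's actual output):
$ cat -v file
s3741206^M
s2561284^M
s4411364^M
and sed 's/\r$//' file (or dos2unix file) would then produce the desired output shown above.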
You could use bash-native string substitution
$ cat /tmp/newline
s3741206\n
s2561284\n
s4411364\n
s2516482\n
s2071534\n
s2074633\n
s7856856\n
s11957134\n
s682333\n
s9378200\n
s1862626\n
$ for LINE in $(cat /tmp/newline); do echo "${LINE%\\n}"; done
s3741206
s2561284
s4411364
s2516482
s2071534
s2074633
s7856856
s11957134
s682333
s9378200
s1862626
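A variant of the same idea that avoids the word-splitting pitfalls of looping over $(cat ...) is to read line by line instead (a sketch, same file path assumed):
while IFS= read -r LINE; do printf '%s\n' "${LINE%\\n}"; done < /tmp/newline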

renaming files using loop in unix

I have a situation here.
I have a lot of files in linux like the ones below:
SIPTV_FIPTV_ID00$line_T20141003195717_C0000001000_FWD148_IPV_001.DATaac
SIPTV_FIPTV_ID00$line_T20141003195717_C0000001000_FWD148_IPV_001.DATaag
I want to remove the $line and put a counter from 0001 to 6000 in its place for my 6000 such files.
Also I want to remove the trailing 3 characters from each file name after this is done.
After the fix the files should look like
SIPTV_FIPTV_ID0000001_T20141003195717_C0000001000_FWD148_IPV_001.DAT
SIPTV_FIPTV_ID0000002_T20141003195717_C0000001000_FWD148_IPV_001.DAT
Please help.
With some assumptions, I think this should do it:
1. list of the files is in a file named input.txt, one file per line
2. the code is running in the directory the files are in
3. bash is available
awk '{i++;printf "mv \x27"$0"\x27 ";printf "\x27"substr($0,1,16);printf "%05d", i;print substr($0,22,47)"\x27"}' input.txt | bash
from the command prompt give the following command
% printf '%s\n' *.DAT??? | awk '{
old=$0;
sub("\\$line",sprintf("%4.4d",++n));
sub("...$","");
print "mv \x27" old "\x27 \x27" $0 "\x27"}'
%
and check the output, if it looks OK
% printf '%s\n' *.DAT??? | awk '{
old=$0;
sub("\\$line",sprintf("%4.4d",++n));
sub("...$","");
print "mv \x27" old "\x27 \x27" $0 "\x27"}' | sh
%
A commentary: printf '%s\n' *.DAT??? is meant to give awk, one filename per line, the list of all the files that you want to modify (echo would put them all on a single line); you may want something more articulated if the example names you gave aren't representative of the whole spectrum... Regarding the awk script itself, I used sprintf to generate a string with the correct number of zeroes for the replacement of $line; the idiom "\\$..." with two backslashes to quote the dollar sign is required by gawk and does no harm in mawk; the single quotes (\x27) around each name keep the literal $line in the old names from being expanded by the shell; and as a last remark I have to say that in similar cases I prefer to make at least a dry run before passing the commands to the shell...
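Since the question asks for a loop, here is a plain bash sketch of the same renaming (it assumes the fixed-width name layout shown in the question and that the files are in the current directory; drop the echo once the generated commands look right):
n=0
for f in *.DAT???; do
  n=$((n+1))
  printf -v new '%s%05d%s' "${f:0:16}" "$n" "${f:21:47}"   # 16-char prefix + 5-digit counter + middle part, dropping "$line" and the last 3 characters
  echo mv -- "$f" "$new"
done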

How to do something like grep -B to select only one line?

Everything is in the title. Basically, let's say I have this input:
some text lalala
another line
much funny wow grep
I grep funny and I want my output to be "lalala"
Thank you
One possible answer is to use either ed or ex to do this (it is trivial in them):
ed - yourfile <<< 'g/funny/.-2p'
(Or replace ed with ex. You might have red, the restricted editor, too; it can't modify files.) This looks for the pattern /funny/ globally, and whenever it is found, prints the line 2 before the matching line (that's the .-2p part). Or, if you want the most recent line containing 'lalala' before the line matching 'funny':
ed - yourfile <<< 'g/funny/?lalala?p'
The only problem is if you're trying to process standard input rather than a file; then you have to save the standard input to a file and process that file, which spoils the concurrency.
You can't do negative offsets in sed (though GNU sed allows you to do positive offsets, so you could use sed -n '/lalala/,+2p' file to get the 'lalala' to 'funny' lines (which isn't quite what you want) based on finding 'lalala', but you cannot find the 'lalala' lines based on finding 'funny'). Standard sed does not allow offsets at all.
If you need to print just the IP address found on a line 8 lines before the pattern-matching line, you need a slightly more involved ed script, but it is still doable:
ed - yourfile <<< 'g/funny/.-8s/.* //p'
This uses the same basic mechanism to find the right line, then runs a substitute command to remove everything up to the last space on the line and print the modified version. Since there isn't a w command, it doesn't actually modify the file.
Since grep -B prints the full number of context lines before each match, you'll have to pipe the output into something like grep or Awk to pick out just the one line you want.
grep -B 2 "funny" file|awk 'NR==1{print $NF; exit}'
You could also just use Awk.
awk -v s="funny" '/[[:space:]]lalala$/{n=NR+2; o=$NF}NR==n && $0~s{print o}' file
For the specific example of an IP address 8 lines before the match as mentioned in your comment:
awk -v s="funny" '
/[[:space:]][0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$/ {
n=NR+8
ip=$NF
}
NR==n && $0~s {
print ip
}' file
These Awk solutions first find the output field you might want, then print the output only if the word you want exists in the nth following line.
Here's an attempt at a slightly generalized Awk solution. It maintains a circular queue of the last q lines and prints the line at the head of the queue when it sees a match.
#!/bin/sh
: ${q=8}
e=$1
shift
awk -v q="$q" -v e="$e" '{ m[(NR%q)+1] = $0 }
$0 ~ e { print m[((NR+1)%q)+1] }' "${@--}"
Adapting to a different default (I set it to 8) or proper option handling (currently, you'd run it like q=3 ./qgrep regex file), as well as remembering (and hence printing) just one field instead of the entire line, should be easy enough.
(I also didn't bother to make it work correctly if you see a match in the first q-1 lines. It will just print an empty line then.)
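With the sample from the top of this question saved in file, and the script saved as qgrep and made executable, that would be run as:
q=3 ./qgrep funny file
which should print "some text lalala".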
