Linux compare diff / meld

I have a strange issue. I created an algorithm that compresses inverted files. I have the original file (in my example it's 198.3 MB) and the decompressed file (which is 198.0 MB). File sizes are as shown by Nautilus. I ran meld and it reports the files as identical. The format of both files is exactly the same. What is wrong?!
Example (I ran sdiff in -s mode and got this, showing the exact same data):
170832 | 170832
170833 | 170833
170834 | 170834
170835 | 170835
170836 | 170836
How are these not identical according to sdiff?

Use e.g. od -c to analyze the lines that are reported as different.
Every character is displayed, including \r, \t and the like, so you can see exactly where the differences are.
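For example, a sketch of how to inspect one of the reported lines byte by byte (the file names are placeholders for the original and the decompressed file):
sed -n '170832p' original.dat | od -c
sed -n '170832p' decompressed.dat | od -c
Running cmp original.dat decompressed.dat also reports the offset of the first differing byte, which often reveals a trailing \r or a missing final newline that a visual diff can hide.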


Is it possible to partially unzip a .vcf file?

I have a ~300 GB zipped vcf file (.vcf.gz) which contains the genomes of about 700 dogs. I am only interested in a few of these dogs and I do not have enough space to unzip the whole file at this time, although I am in the process of getting a computer to do this. Is it possible to unzip only parts of the file to begin testing my scripts?
I am trying to extract a specific SNP at a position in a subset of the samples. I have tried using bcftools to no avail. (If anyone can identify what went wrong with that I would also really appreciate it. I created an empty file for the output (722g.990.SNP.INDEL.chrAll.vcf.bgz), but it returns the following error.)
bcftools view -f PASS --threads 8 -r chr9:55252802-55252810 -o 722g.990.SNP.INDEL.chrAll.vcf.gz -O z 722g.990.SNP.INDEL.chrAll.vcf.bgz
The output type "722g.990.SNP.INDEL.chrAll.vcf.bgz" not recognised
I am planning on trying awk, but need to unzip the file first. Is it possible to partially unzip it so I can try this?
Double check your command line for bcftools view.
The error message 'The output type "something" not recognised' is printed by bcftools when you specify an invalid value for the -O (upper-case O) command line option, as in -O something. Based on the error message you are getting, it seems that you might have put a file name there.
Check that you don't have your input and output file names the wrong way around in your command. Note that the -o (lower-case o) command line option specifies the output file name, and the file name at the end of the command line is the input file name.
Also, you write that you created an empty file for the output. You don't need to do that, bcftools will create the output file.
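Putting those fixes together, the corrected command would look something like this (a sketch using the file names from the question; the output name is just an example, and using -r also requires the input VCF to be bgzip-compressed and indexed, e.g. with bcftools index or tabix):
bcftools view -f PASS --threads 8 -r chr9:55252802-55252810 -O z -o 722g.990.SNP.INDEL.chrAll.subset.vcf.gz 722g.990.SNP.INDEL.chrAll.vcf.gz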
I don't have that much experience with bcftools, but generically, if you want to use awk to manipulate a gzipped file you can pipe into it, so the file is only decompressed as needed. You can also pipe the result directly through gzip so that it, too, is compressed, e.g.
gzip -cd largeFile.vcf.gz | awk '{ <some awk> }' | gzip -c > newfile.txt.gz
Also, zcat is equivalent to gzip -cd: -c writes to standard output, -d decompresses.
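As a rough sketch of how that could handle the region query from the question with awk alone (assuming the usual VCF layout where column 1 is the chromosome and column 2 is the position, and header lines start with #; the output file name is just an example):
gzip -cd 722g.990.SNP.INDEL.chrAll.vcf.gz | awk '/^#/ || ($1 == "chr9" && $2 >= 55252802 && $2 <= 55252810)' | gzip -c > chr9_region.vcf.gz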
As a side note, if you are trying to perform operations on just a part of a large file, you may also find the excellent tool less useful. It can view your large file while loading only the needed parts. The -S option is particularly useful for wide formats with many columns, as it stops line wrapping, and -N is handy for showing line numbers.
less -S largefile.vcf.gz
Quit the view with q; g takes you to the top of the file.

Grep show filename and found line for binary files (PDF)

I have a folder with lots of PDF files. I need to get the filename of each matching file as well as the specific text in it - Rotate 270, which defines a page rotation. Grep's -anH arguments or the /dev/null method seem not to work, nor can pdftotext or pdfgrep help, as the text I need is not visible or searchable on the page.
I can either get "Binary file aaa.pdf matches" or a line like this (which is not text visible on a page!):
<</Filter/FlateDecode/Length 61>>stream4 595.19995]/MediaBox[0 0 841.92004 595.19995]/Parent 5 0 R/Resources<</ProcSet[/PDF/Text/ImageB/ImageC/ImageI]/XObject<</img3 11 0 R>>>>/Rotate 270/Type/Page>>
I suspect there is a way to lose the non-printable bytes before grep gets them, or to split off the filename before the grep part and reassemble it after grep has found the line; or maybe sed has an easy way to achieve this?
How do I get both the filename and the matching line, approximately like grep does on regular text files?
I don't have a PDF file with that string inside, but you can try
identify -verbose somefile.pdf | grep 'Rotate 270'
identify is part of the ImageMagick package.
You can also try a brute force method :-)
strings somefile.pdf | grep 'Rotate 270'
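To get both the filename and the matching line across a whole folder, as the question asks, something along these lines should work (a sketch; --label is a GNU grep option that names standard input, and the *.pdf glob is an assumption about the folder layout):
for f in *.pdf; do
    strings "$f" | grep -H --label="$f" 'Rotate 270'
done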

How to use sed command to delete lines without backup file?

I have large file with size of 130GB.
# ls -lrth
-rw-------. 1 root root 129G Apr 20 04:25 syslog.log
So I need to reduce the file size by deleting the lines which start with "Nov 2", so I gave the following command:
sed -i '/Nov 2/d' syslog.log
I can't edit the file using the vim editor either.
When I run the sed command, it creates a backup file as well. But I don't have much space left in root. Please suggest an alternate solution that deletes the particular lines from this file without using additional space on the server.
It does not create a real backup file. sed is a stream editor: when applied to a file with option -i, it streams that file through the sed process, writes the output to a new (temporary) file, and when everything is done, renames the new file to the original name.
(There are options to create backup files also, but you didn't give them, so I won't mention that further.)
In your case you have a very large file and don't want to create any copy, however temporary. For this you need to open the file for reading and writing at the same time, then your sed process can overwrite the original. After this, you will have to truncate the file at the end of the writing.
To demonstrate how this can be done, we first perform a test case.
Create a test file, containing lots of lines:
seq 0 999999 > x
Now, let's say we want to remove all lines containing the digit 4:
grep -v 4 1<>x <x
This will open the file for reading and writing as STDOUT (1), and for reading as STDIN. The grep command will read all lines and will output only the lines not containing a 4 (option -v).
This will effectively overwrite the beginning of the original file.
You will not know how long the output is, so after the output the original contents of the file will appear:
…
999991
999992
999993
999995
999996
999997
999998
999999
537824
537825
537826
537827
537828
537829
…
You can use the Unix tool truncate to shorten your file manually afterwards. In a real scenario you will have trouble finding the right spot for this, so it makes sense to count the number of bytes written (using wc):
(Don't forget to recreate the original x for this test.)
(grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c
This will perform the step above and additionally print out the number of bytes written to the terminal; in this example the output will be 3653658. Now use truncate:
truncate -s 3653658 x
Now you have the result you want.
If you want to do this in a script, i. e. without interaction, you can use this:
length=$( (grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c )
truncate -s "$length" x
I cannot guarantee that this will work for files >2GB or >4GB on your machine; depending on your operating system (32bit?) and the versions of the installed tools you might run into largefile issues. I'd perform tests with large files first (>4GB as this is typically a limit for many things) and then cross your fingers and give it a try :)
Some caveats you have to keep in mind:
Of course, nobody is supposed to append log entries to that log file while the procedure is running.
Also, any abort while the process is running (power failure, a caught signal, etc.) will leave the file in an undefined state. But re-running the command after such a mishap will in most cases produce the correct output; some lines might be doubled, but no more than a single line should be corrupted.
The output must be smaller than the input, of course, otherwise the writing will overtake the reading, corrupting the whole result so that lines which should be there will be missing (or truncated at the start).
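Applied to the syslog.log case from the question, the whole procedure would look roughly like this (a sketch; the ^Nov 2 pattern is taken from the question, but note that syslog usually pads single-digit days with an extra space, so verify the exact format in your file first):
length=$( (grep -v '^Nov 2' <syslog.log | tee /dev/stderr 1<>syslog.log) |& wc -c )
truncate -s "$length" syslog.log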

bash script zip filename parsing strangely

I'm trying to zip various files together (one of the included files is actually a zip itself) and name the resulting zip based on a handful of bash variables defined earlier. One of the variables used in the zip file name is being parsed from a #define in a config.h file. I successfully parsed together a .zip with the correct name, but when I tried to implement the same zip script in a slightly different situation I get erroneous zip names.
In Windows explorer, the erroneous zip name looks something like X1276N~E.ZIP
In Linux the zip appears with the intended name except with a question mark (which I've come to understand to be some sort of placeholder), e.g. foo-stuff-bar-9.1b?.zip
My current code trying to zip a file with name foo-stuff-bar-9.1b.zip:
foo_name=$1
bar_name=$2
rev_number=$(grep define[[:space:]]*SOME_NUMBER $directory/config.h | awk '{print $3;}'| tr -d '/"')
archive_name="$foo_name"-stuff-"$bar_name"-9."$rev_number"
zip "$archive_name".zip file1 file2 backup1.zip file3
So "foo_name" and "bar_name" are strings coming from the terminal when the script is run, "rev_number" is being parsed from config.h, and I'm formatting it all into "archive_name" before using it in the zip command.
I've tried all sorts of variations of quotation marks and brackets and I get the same weird name no matter what I try. I'm not sure where the error is coming from, as I'm parsing from many sources. Any advice is much appreciated.
Per Marc B's suggestion, I piped the string to xxd -b to look at it byte by byte. It turned out I was accidentally picking up a stray control character at the end of $archive_name when scraping the config.h file.
I was able to fix this by piping my string through tr -d "[:cntrl:]" to remove any control characters that would give weird file names.
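A sketch of what the fixed parsing might look like (the macro name, config.h path, and file list are the ones from the question; the extra tr strips a trailing carriage return or any other control character picked up from config.h):
rev_number=$(grep 'define[[:space:]]*SOME_NUMBER' "$directory/config.h" | awk '{print $3}' | tr -d '/"' | tr -d '[:cntrl:]')
archive_name="$foo_name"-stuff-"$bar_name"-9."$rev_number"
zip "$archive_name".zip file1 file2 backup1.zip file3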

Manually merge two files using diff

I'd like to merge two files by doing the following:
Output the diff of the two files into a temp file and
Manually select the lines I want to copy/save.
The problem here is that diff -u only gives me a few lines of context, while I want to output the entire file in a unified format.
Is there any way diff can do this?
One option that might fit the bill for you,
sdiff: side-by-side diff of files.
sdiff -o merged.file left.file right.file
Once there, it will prompt you about which lines you want to keep from which file. Hit ? and then Enter for a little help. See also man sdiff for the detailed goods.
(In my distro, these come packaged in the "diffutils" package [fedora,centos])
If you need to automate the process, you might want to try the util merge, which will mark conflicts in the files. However, that might put you back at square one.
"I want to output the entire file in a unified format. Is there any way diff can do this?"
Yes.
diff -U 9999999 file1.txt file2.txt > diff.txt
This should work, provided your files are less than 10 million lines long.
You can merge/combine the two files with diff using --
diff --line-format %L file1 file2
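If you also want to see which file each line came from, the long-form variants of that option can be used (a sketch; the prefixes are arbitrary labels):
diff --old-line-format='file1: %L' --new-line-format='file2: %L' --unchanged-line-format='       %L' file1 file2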
The easy answer is to use the -D flag to merge the files and surround the differences with C-style #ifdef statements.
From the documentation:
-D NAME --ifdef=NAME
Output merged file to show `#ifdef NAME' diffs.
You can use it as follows:
$ diff -D NEWSTUFF file1 file2 > merged_file
I usually then just open the merged file in an editor and resolve the merge conflicts by hand.
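For two small files that differ in one place, the merged file has roughly this shape (an illustrative sketch, not output from real files):
#ifndef NEWSTUFF
old line as it appears in file1
#else /* NEWSTUFF */
new line as it appears in file2
#endif /* NEWSTUFF */
Lines common to both files appear once, unchanged, so resolving the merge is mostly a matter of deleting the branch you don't want along with the preprocessor markers.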
You also can use options to output an ed script, etc.
If you are an emacs user, you can do this directly in emacs using the "emerge" tool:
https://www.gnu.org/software/emacs/manual/html_node/emacs/Emerge.html
Issuing M-x emerge-files will open an interactive prompt with a view of files A, B, and the merged file to allow choosing text that differs between files A & B, inserting part of A into B, and more.
