I need a binary comparison tool for Win/Linux [closed]

First of all, I don't need a textual comparison so Beyond Compare doesn't do what I need.
I'm looking for a utility that can report the differences between two files at the byte level. At a bare minimum, I need to see the percentage change in the file, or a report on the affected bytes/sectors.
Is there anything available to save me the trouble of doing this myself?

I found VBinDiff. I haven't used it, but it probably does what you want.

I guess it depends on what exactly is contained in the file, but here's a quick one:
hexdump file1 > file1.tmp
hexdump file2 > file2.tmp
diff file1.tmp file2.tmp
Since each line of hexdump output typically covers 16 bytes, this won't give you an exact count of the bytes changed, but it will give you a rough idea of where in the file the changes occurred.
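If all you really need is a changed-byte count or a rough percentage, cmp can give you that directly. Here's a minimal sketch with standard tools (the file names are placeholders, stat -c is the GNU form, and cmp -l only counts bytes in the overlapping portion if the files differ in length):
changed=$(cmp -l file1 file2 | wc -l)          # cmp -l prints one line per differing byte
total=$(stat -c %s file1)                      # size of the first file in bytes (GNU stat)
echo "scale=2; 100 * $changed / $total" | bc   # rough percentage of bytes changed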

UltraCompare is the best for binary comparison. It has a smart comparator that is really useful.

ECMerge recently introduced a binary differ. It can compare files of several gigabytes (the limit is somewhere above a terabyte), and it works on Linux, Windows, Mac OS X and Solaris.
It gives you byte-by-byte or block-by-block statistics.
You can configure the synchronization window (if desired) and the minimal match length.

You can use xdelta. It is an open-source binary diff tool that you can then use to make binary patches, and I think it also reports information about the differences it finds.
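For reference, a minimal sketch of the usual workflow with the xdelta3 variant of the tool (file names are placeholders):
xdelta3 -e -s old.bin new.bin changes.vcdiff      # encode: produce a delta describing the differences
xdelta3 -d -s old.bin changes.vcdiff rebuilt.bin  # decode: apply the delta to reconstruct the new file
The size of changes.vcdiff gives at least a rough sense of how much of the file actually changed.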

Araxis Merge is available for Windows; their site has a page describing its binary comparison feature.

Related

Is there any Linux alternative of windows desktop search tool "everything"? [closed]

Is there any Linux alternative to the popular Windows desktop search tool "Everything" (by voidtools)? "Everything" is the only reason I have to stay with Windows and can't switch to Linux as my primary OS. I have been looking for an alternative for quite some time. I guess only someone who has already used "Everything" on Windows can understand what I am looking for. Any help is appreciated.
Take a look at Recoll.
Recoll finds keywords inside documents as well as file names. It can search most document formats. It can reach any storage place: files, archive members, email attachments, transparently handling decompression. One click will open the document inside a native editor or display an even quicker text preview. The software is free, open source, and licensed under the GPL.
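A rough idea of the workflow, in case it helps; the -t flag for terminal-mode queries is my recollection of Recoll's command line, so treat it as an assumption and check the man page:
recollindex                  # build or refresh the index for the configured directories
recoll -t "project report"   # query from the command line instead of the GUI (assumed flag)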
I don't really know what your use case is.
To keep an index of all file names and search it, use:
sudo updatedb           # build or refresh the file-name database
locate <file-name>      # query it
To search inside files manually, use (only as an example):
grep -R "<search-string>" .
find . -type f -exec grep -H "<search-string>" {} +
Indexing source code:
ctags & etags
More information about text indexing:
Command-line fulltext indexing?
Hope this helps

How can I use Tcl to create an xls file with multiple sheets and colored boxes as per requirement? [closed]

I am using the Linux operating system, and most of my application is written in Tcl. I am thinking of adding a module to it for creating xls files with multiple sheets and colored boxes as required. Is there a way I can create an xls file? CSV will not help me for this task. Any help/suggestion/keyword will be appreciated. Thanks in advance.
The Excel formats are formidably tricky (only CSV is remotely easy to handle, and that's because it doesn't do much). I'd use Apache POI for this, even bearing in mind that it is Java code and so likely to be a bit awkward to integrate with your Tcl code.
If you were able to run on Windows instead, the TCOM package would let you talk to a running Excel instance to do the work more directly. That package is platform-specific though…
I got this done through simple trial and error with Perl and half a day spent learning Spreadsheet::WriteExcel:
http://homepage.tinet.ie/~jmcnamara/perl/WriteExcel.html
I create a Perl script from the Tcl data and execute it at the end, which generates the final xls output. Thanks everyone for your effort.
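In other words, the flow looks something like this (the script and file names are hypothetical, just to illustrate the two-step pattern described above):
tclsh build_report_script.tcl   # the Tcl side writes make_report.pl from the application's data
perl make_report.pl             # Spreadsheet::WriteExcel in the generated script emits report.xls with sheets and cell formats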

Dreamweaver equivalent for Linux [closed]

I am looking for an equivalent software to Dreamweaver in Linux.
It is not an exact match, but it is built on Eclipse, which means super cross-platform funky Java love.
http://www.aptana.com/
Aptana Studio is actually what I replaced Dreamweaver with after Adobe bought Macromedia; I use it on Windows and Linux without trouble. Along with that suggestion you also get my 2 cents about Wysiwtf... it is almost never what you get. Some of the best code I have ever written was done in SciTE (also available on Linux); it supports multiple coding languages and offers enough features to be useful without becoming bloated.
If you want something reasonably non-technical, then perhaps Kompozer?
Or, if you want more technical stuff, then you probably want Aptana.
Another option worth mentioning is Bluefish.
Depending on what desktop environment you use, I can recommend Quanta+. It's part of the KDE SC but can also be used in other DEs.
You could also use KompoZer; it seems to be nice as well. I didn't test this one, though.
I've also researched this for myself, and the answer is that, in my opinion, there is nothing comparable.
Most people choose Dreamweaver for its WYSIWYG editing (as good as that can be with HTML) and its ease of use. If you're looking for database connectivity, PHP debugging and the like, then Eclipse beats Dreamweaver by a lot, but chances are the original poster is looking for ease of use, so neither Bluefish nor Eclipse is going to satisfy him.

Can anyone recommend a good book or other resource on NTFS semantics? [closed]

I'd like to improve my understanding of NTFS semantics; ideally, I'd like some kind of specification document(s).
I could (in theory) figure out the basics by experimentation, but there's always the possibility that I'd be ignoring some important variable.
For example, I'm having difficulty finding definitive information on the following:
(1) When do file times (created/modified/accessed) get set/updated? For example, does copying and/or moving a file affect any or all of these times? What about if the file is being copied/moved between volumes? What about alternate streams?
(2) How do sharing modes and read/write access interact?
(3) What happens to security information (SACL, DACL, ownership etc.) when a file is copied and/or moved?
As I said, I could probably "answer" these questions by writing some code, but that would only tell me how the specific operations I tested behaved across any machines that I ran the code on. I'd like to find a resource that can tell me how this stuff is supposed to behave, identifying all the variables that could affect the behaviour.
TIA!
Apparently there are no public non-NDA specifications. Projects such as NTFS-3G would greatly benefit from them, but they don't mention anything.
A predecessor of NTFS-3G, called linux-ntfs, has made some documentation on its own here. Maybe that's good enough for you, maybe not.

Could you recommend an unstructured data indexing software? [closed]

I am collecting logs from several custom-made applications. Each application has its own log format. What I'm looking for is a central tool that would allow me to search through all of my logs. This means the tool would have to be able to define a different regex (or similar) for each log file, marking where a record begins, where it ends, and what the fields are. I've been trying Splunk, but I'm not happy with it: performance is slow, the free version limits the amount of data indexed per day, and it's not as flexible as I want it to be.
Could you recommend a software (preferably free or cheap) for the task?
You can try Lucene. It is free, it is written in Java, and it allows full-text search over large amounts of data. It is not a complete application but rather a library, so you have to write code that uses it to index and to search your logs. You may have to define different document types, or at least different indexing functions, for each log format, but then search works beautifully.
If you can use Windows, try out Microsoft's best tool ever, Logparser. I wish there was such a simple tool for Unix. But there isn't. And although I've kept wanting to get around to making a Unix version of Logparser, I just haven't had the time.
Note: This would be a great project for someone with time on their hands or for a grad-student somewhere!
http://www.splunk.com/
Never used it, but have heard of it.
