How to compare two exe files with x64dbg

I have two exe files: a patched one and the original (the patched one was produced from the original using x64dbg). I want to know what the differences between the two files are, using x64dbg or OllyDbg; in other words, I want to know where the original file was modified.
Thanks.

You shouldn't use a debugger for this; use a diff tool instead. These are some tools that can compare binary files:
HxD: freeware, closed-source.
radiff2 (part of radare2): free and open-source; see the docs.
These tools only compare the raw bytes without disassembling them, but once you know the offsets of the differences, you can easily look at the corresponding opcodes in x64dbg.
There is also an IDA plugin for this: patchdiff2, although I haven't tried that one. You could also consider writing your own x64dbg plugin; it shouldn't be too hard.
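If you don't want to install a separate tool, a byte-level diff is only a few lines of Python. Here is a minimal sketch (the filenames in the usage comment are placeholders); it returns the file offset of every differing byte, which you can then look up in x64dbg:

```python
def diff_bytes(path_a, path_b):
    """Return a list of (offset, byte_a, byte_b) for every differing byte."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if len(a) != len(b):
        print(f"warning: sizes differ ({len(a)} vs {len(b)} bytes)")
    # Compare up to the length of the shorter file.
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Example (hypothetical filenames):
#   for off, old, new in diff_bytes("original.exe", "patched.exe"):
#       print(f"file offset {off:#x}: {old:02X} -> {new:02X}")
```

Note that these are raw file offsets, not virtual addresses; to find the patched instruction in x64dbg you still have to map them through the section headers.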

I still couldn't find an exact solution. Let's assume I patched File1.exe with x64dbg and saved it; I want to compare File1.exe and PatchedFile.exe by looking through the disassembled sections (jmp, je, mov, and so on).

Related

Removing assembly instructions from ELF

I am working on an obfuscated binary as part of a crackme challenge. It has a sequence of push, pop and nop instructions (repeated thousands of times). Functionally, these chunks have no effect on the program, but they make generating CFGs and the process of reversing very hard.
There are solutions for changing the instructions to nop so that they are neutralized, but in my case I would like to strip those instructions out completely, so that I can get a better view of the CFG. If instructions are stripped out, I understand that the memory offsets must be modified too. As far as I could see, there are no tools available to achieve this directly.
I am using IDA Pro evaluation version. I am open to solutions using other reverse engineering frameworks too. It is preferable, if it is scriptable.
I went through a similar question but, the proposed solution is not applicable in my case.
I would like to completely strip off those instructions ... I understand that the memory offsets must be modified too ...
In general, this is practically impossible:
If the binary exports any dynamic symbols, you would have to update the .dynsym (these are probably the offsets you are thinking of).
You would have to find every statically-assigned function pointer, and update it with the new address, but there is no effective way to find such pointers.
Computed gotos and switch statements create jump tables of code pointers even when none are present in the program source.
As Peter Cordes pointed out, it's possible to write programs that use the delta between two assembly labels (a small immediate value encoded directly into an instruction) to control program flow.
It's possible that your target program is free from all of the above complications, but spending much effort on a technique that only works for that one program seems wasteful.

Find duplicate Files

I used to use a program called finddupe on Windows (XP) which checked for duplicate files and offered to replace them with hardlinks.
It calculated a hash of the first 32 KiB of each file, and only checked the remainder on a match. I have the source (for VC++ 6), but I was wondering if there is a Linux/OSX equivalent before I try to port it, although I suspect it may be better to write a new program in a higher-level language.
I've found fdupes to be helpful for me.
If you are looking to write your own quick script, I would suggest looping over files and using cmp as it allows you to easily stop comparison after the first mismatched byte.
There are many similar tools; see here.
They may not be part of a standard distribution.
I have used fslint before and found it to be sufficient for my needs.
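The finddupe strategy described above (hash only the first 32 KiB, and fully compare files only when those quick hashes collide) is small enough to sketch in a higher-level language. A rough Python version, with the hardlinking step left out:

```python
import hashlib
import os
from collections import defaultdict

CHUNK = 32 * 1024  # hash only the first 32 KiB, like finddupe

def quick_hash(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read(CHUNK)).hexdigest()

def same_contents(path_a, path_b):
    # Full comparison, done only for candidate pairs.
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return fa.read() == fb.read()

def find_duplicates(root):
    """Yield (original, duplicate) pairs under the directory `root`."""
    by_key = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            # Group by (size, quick hash) first to avoid full reads.
            key = (os.path.getsize(path), quick_hash(path))
            by_key[key].append(path)
    for group in by_key.values():
        for candidate in group[1:]:
            if same_contents(group[0], candidate):
                yield group[0], candidate
```

Replacing a duplicate with a hardlink would then be `os.link(original, duplicate)` after deleting the duplicate; the full comparison here reads whole files into memory, which a real port would do in chunks.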

CMP command not working properly

I am using the cmp command on an x86 processor and it works properly (the binaries are generated using gcc),
but when I use it on an ARM Cortex-A9, it does not give the proper output (those binaries are generated using a cross gcc).
Comparing the board-specific binaries on the x86 machine using cmp produces the proper output.
x86 machine:
say I have 2 files a.bin and b.bin (which should be identical when compared using cmp)
cmp a.bin b.bin
and the result is correct.
ARM Cortex-A9:
a.bin, b.bin
cmp a.bin b.bin
These should also be identical,
but cmp reports a mismatch.
Any clue, please?
Your question isn't very clear and is a little vague so I'll take a stab in the dark and assume that you're asking why the same source code compiles to different files.
Although a compiled program (assuming no UB or portability issues) will be functionally the same no matter which compiler is used, it won't necessarily be identical at the binary level.
Different optimization levels will generate different files, for example. The compiler may embed build dates into the file, and different compilers will arrange the code differently.
These are all reasons why you may be getting different outputs for the 'same' program.

"Function-level linking" (i.e. COMDAT generation) in MASM assembly?

Is there any way to make MASM generate COMDATs for functions, so that unused functions are removed by the linker?
(i.e. I'm looking for the equivalents of /Gy for MASM.)
Not straightforward, but doable; discussed here and here.
The first step involves putting each function into a separate segment with names like .text$a, .text$b, etc. This way, the assembler won't merge them into a single .text section, but the linker eventually will; there's a special rule in Microsoft linkers regarding the part of the section name past the $ character. The assembler will emit an .obj file with multiple code sections. I've tried that, and I can confirm that it works; at least one flavor of MASM does it. :)
Then they suggest running a utility over the object file that marks the sections as COMDATs. Said utility seems to be lost to time and bit decay, but its action can be roughly deduced: it reads and parses a COFF .obj file, goes through the sections, and slaps a COMDAT flag on all .text sections. I assume it's just a flag; it could be more. As a first step toward recreating it, I'd suggest compiling a C file with /Gy and then without, and comparing the two .obj files in some low-level PE/COFF browser. I didn't go that far, since my scenario was rather different.
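As a starting point for that comparison, here's a rough Python sketch that walks a COFF .obj file's section table and reports which sections carry the IMAGE_SCN_LNK_COMDAT characteristic (0x00001000, per the PE/COFF spec). It only reads the flag; setting it and emitting the required associated COMDAT symbol-table entries is the harder part the lost utility presumably handled:

```python
import struct

IMAGE_SCN_LNK_COMDAT = 0x00001000

def comdat_sections(path):
    """Return [(section_name, is_comdat)] for a COFF object file."""
    with open(path, "rb") as f:
        data = f.read()
    # COFF file header: Machine(2) NumberOfSections(2) TimeDateStamp(4)
    # PointerToSymbolTable(4) NumberOfSymbols(4) SizeOfOptionalHeader(2)
    # Characteristics(2) -- 20 bytes total.
    num_sections, = struct.unpack_from("<H", data, 2)
    opt_hdr_size, = struct.unpack_from("<H", data, 16)
    offset = 20 + opt_hdr_size  # section table follows the headers
    result = []
    for _ in range(num_sections):
        name = data[offset:offset + 8].rstrip(b"\0").decode("ascii")
        # Characteristics is the last dword of the 40-byte section header.
        characteristics, = struct.unpack_from("<I", data, offset + 36)
        result.append((name, bool(characteristics & IMAGE_SCN_LNK_COMDAT)))
        offset += 40
    return result
```

Running it over the /Gy and non-/Gy objects of the same C file should show the flag difference directly.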

Verifying two different build architectures (one a re-write of the other) are functionally equivalent

I'm re-writing a build that produces a number of things (shared/static libraries, jars, executables, etc). The question came up whether there's a way to verify that the results are functionally equivalent without doing a full top-to-bottom test of the resulting software.
However, that is proving to be more difficult to do than I anticipated.
As an example, I expected that two objects produced from the same source (Sun Studio C++ compiler) with the same command-line parameters would have the same MD5 hash, but that isn't the case. I can build the file, rename it, build again, and the two have different hashes.
With that said ... is there a way do a quick check to verify that two files produced from separate build architectures of the same source tree (eg, two shared objects) are functionally equivalent?
Edit: I'm sorry, I neglected to mention this is for a debug build. When debugging flags aren't used, the binaries are identical, but they've been building with debugging flags by default for so many years that their stuff breaks when you remove them (part of the reason I'm re-writing the build is to take that particular 'feature' out so we can get some proper testing going).
Windows DLLs have a link timestamp (TimeDateStamp) as part of the PE image.
Looking at the linker options, I don't see one to suppress it, so re-linking a DLL (or an EXE) will always produce a different binary.
You could write a tool to zero out these timestamps (the field sits at a fixed offset from the PE signature, which the e_lfanew field at file offset 0x3C points to) and compare MD5s afterwards. But you'll likely discover lots of other differences as well; in particular, any program that uses the __DATE__ or __TIME__ builtins will give you trouble.
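Such a timestamp-zeroing tool is short. A Python sketch that patches the COFF TimeDateStamp field of a PE file in place (assuming a well-formed PE; it does not touch other timestamps, such as those in the debug or export directories, or fix up the optional-header checksum):

```python
import struct

def zero_pe_timestamp(path):
    """Zero the TimeDateStamp field in a PE file's COFF header, in place."""
    with open(path, "r+b") as f:
        data = f.read(0x40)
        if data[:2] != b"MZ":
            raise ValueError("not a PE file (missing MZ signature)")
        # e_lfanew at offset 0x3C points at the "PE\0\0" signature.
        pe_off, = struct.unpack_from("<I", data, 0x3C)
        f.seek(pe_off)
        if f.read(4) != b"PE\0\0":
            raise ValueError("not a PE file (missing PE signature)")
        # COFF header follows the signature: Machine(2), NumberOfSections(2),
        # then TimeDateStamp(4).
        f.seek(pe_off + 4 + 4)
        f.write(b"\0\0\0\0")
```

Run it on both binaries before hashing; any remaining MD5 difference then points at real content differences rather than link times.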
We've had to work quite hard to achieve bit-identical rebuilds (using GNU toolchain). It's possible (at least for open-source tools, on Linux), but not easy (as you've discovered).
I forgot about this question; I'm revisiting so I can give the answer I came up with.
objcopy can be used to produce a new binary file in different formats. It's been a few years since I worked on this, so the specifics escape me, but here's what I recall:
objcopy can strip various things out (debug info, symbol information, etc), but even after stripping stuff out I was still seeing different hashes between objects.
In the end I found I could convert it from ELF to other formats. I ended up dumping it to another format (I think I chose SREC) that consistently provided the same MD5 for objects built at different times with identical source/flags.
I'm betting I could have done this a better way with objcopy (or perhaps another binutils tool), but it was good enough to satisfy our concerns.
