I used Reflector 6.8 to disassemble a binary. It shows the class tree view and even the declarations of the classes' methods, but "Expand Methods" fails with an error like "Block statement count of 0 during conditional expression translation".
Then I tried Telerik's JustDecompile (in beta). It worked fine for one of the 10-15 assemblies I have, but on another assembly its memory usage simply shoots up to 1.5 GB and it hangs.
Is there any other stable decompiler I can use to generate C# code?
The only other one that I know of is ILSpy.
You should report errors in Reflector to the guys at Red Gate.
The no-op loops were probably added by some obfuscator.
Based upon the available information, I believe you may be using an obfuscated assembly.
The current Telerik JustDecompile beta (2011.1.728.1) does not offer support for decompiling obfuscated assemblies. It is very efficient at decompiling non-obfuscated assemblies, though, and its memory footprint is getting smaller with every update. The memory usage you observed is unusual. If you can share more detail over email about the assembly you’re using, we’ll try to reproduce and fix this specific case (chris.eargle [at] telerik.com).
Meanwhile, if you’d like to see more support in future JustDecompile updates for obfuscated assemblies, please share your feedback on the JustDecompile UserVoice so others can vote for the idea: http://justdecompile.uservoice.com.
I've just seen the library ByteNode; it's like Java bytecode, but for Node.js.
This library compiles your JavaScript code into V8 bytecode, which protects your source code. I'm wondering: is there any way to decompile ByteNode output, meaning it's not secure enough? I'm asking because I would like to protect my source code using this library.
TL;DR: It'll raise the bar for someone copying the code and trying to pass it off as their own. It won't prevent a dedicated person from doing so. But the primary way to protect your work isn't technical, it's legal.
This library compiles your JavaScript code into V8 bytecode, which protects your source code...
Well, we don't know it's V8 bytecode, but it's "compiled" in some sense. All we know is that it creates a "code cache" via the built-in vm.Script.prototype.createCachedData API, which is officially just a cache used to speed up recompiling the code a second time, third time, etc. In theory, you're supposed to also provide the original source code as a string to the vm.Script constructor. But if you go digging into Node.js's vm.Script and V8 far enough it seems to be the actual code in some compiled form (whether actual V8 bytecode or not), and the code string you give it when running is ignored. (The ByteNode library provides a dummy string when running the code from the code cache, so clearly the actual code isn't [always?] needed.)
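For the curious, here's a minimal sketch of that round trip using only the built-in vm module (no ByteNode); the script source is just for illustration:

    'use strict';
    const vm = require('vm');

    // Compile once and serialize V8's "code cache" for the script.
    const source = 'function greet(name) { return "Hi " + name; } greet("world");';
    const compiled = new vm.Script(source);
    const cachedData = compiled.createCachedData(); // a Buffer

    // Rehydrate elsewhere. V8 sanity-checks the supplied source against the
    // cache; on a mismatch it silently recompiles the source you passed instead.
    const restored = new vm.Script(source, { cachedData });
    console.log(restored.cachedDataRejected); // false => the cache was actually used
    console.log(restored.runInThisContext()); // "Hi world"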
I'm wondering: is there any way to decompile ByteNode output, meaning it's not secure enough?
Naturally, otherwise it would be useless because Node.js wouldn't be able to run it. I didn't find a tool to do it that already exists, but since V8 is open source, it would presumably be possible to find the necessary information to write a decompiler for it that outputs valid JavaScript source code which someone could then try to understand.
Experimenting with it, local variable names appear to be lost, although function names don't. Comments appear to get lost (this may not be as obvious as it seems, given that Function.prototype.toString is required to either return the original source text or a synthetic version [details]).
So if you run the code through a minifier (particularly one that renames functions), then run it through ByteNode (or just do it with vm.Script yourself; ByteNode is a fairly thin wrapper), it will be feasible for someone to decompile it into something resembling source code, but that source code will be very hard to understand. This is very similar to shipping Java class files, which can be decompiled (there's even a standard tool to do it in the JDK, javap), except that the format of Java class files is well-documented and doesn't change from one dot release to the next (it can change between major releases, but newer releases always support the older format), whereas the format of this data is not documented (though it's an open source project) and is subject to change from one dot release to the next.
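A rough sketch of that minify-then-cache pipeline, assuming the terser package is installed (file names are placeholders):

    'use strict';
    const fs = require('fs');
    const vm = require('vm');
    const { minify } = require('terser'); // assumes terser is installed

    (async () => {
      const original = fs.readFileSync('app.js', 'utf8');

      // Step 1: rename top-level functions/variables so any decompiled
      // output is much harder to read.
      const { code } = await minify(original, { mangle: { toplevel: true } });

      // Step 2: persist only V8's code cache, which is essentially what
      // ByteNode writes to its .jsc files.
      fs.writeFileSync('app.jsc', new vm.Script(code).createCachedData());
    })();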
Certain changes, such as changing the copyright message, are probably fairly easy to make to said source code. More meaningful changes will be harder.
Note that the code cache appears to have a checksum or other similar integrity mechanism, since directly editing the .jsc file to swap one letter for another in a literal string makes the code cache fail to load. So someone tampering with it (for instance, to change a copyright notice) would either need to go the decompilation/recompilation route, or dive into the V8 source to find out how to correct the integrity check.
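You can observe that rejection yourself. A quick sketch; the byte offset chosen here is arbitrary and the exact behavior is V8-version dependent:

    'use strict';
    const vm = require('vm');

    const source = 'const notice = "Copyright ACME"; notice;';
    const cache = new vm.Script(source).createCachedData();

    // Flip one byte somewhere in the serialized payload.
    cache[cache.length - 8] ^= 0xff;

    // V8 detects the corruption and falls back to compiling `source` instead.
    const tampered = new vm.Script(source, { cachedData: cache });
    console.log(tampered.cachedDataRejected); // true (on the Node builds I tried)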
Fundamentally, the way to protect your work is to ensure that you've put all the relevant notices in the relevant places such that the fact that copying it is a violation of copyright is clear, then pursue your legal recourse should you find out about someone passing it off as their own.
is there any way
You could get a hundred answers here saying "I don't know a way", but that still won't guarantee that there isn't one.
not secure enough
Secure enough for what? What's your deployment scenario? What kind of scenario/attack are you trying to defend against?
FWIW, I don't know of an existing tool that "decompiles" V8 bytecode (i.e. produces JavaScript source code with the same behavior). That said, considering that the bytecode is a fairly straightforward translation of the source code, I'm sure it wouldn't be very hard to write such a tool, if someone had a reason to spend some time on it. After all, V8's JS-to-bytecode compiler is open source, so one would only have to look at those sources and implement the reverse direction. So I would assume that shipping as bytecode provides about as much "protection" as shipping as uglified JavaScript, i.e. none that I would trust.
Before you make any decisions, please also keep in mind that bytecode is considered an internal implementation detail of V8; in particular it is not versioned and can change at any time, so it has to be created by exactly the same V8 version that consumes it. If you want to update your Node.js you'll have to recreate all the bytecode, and there is no checking or warning in place that will point out when you forgot to do that.
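If you do ship code caches anyway, it may be worth adding your own fail-fast guard, since Node won't warn you. A sketch, with a made-up sidecar file that you'd write at build time:

    'use strict';
    const fs = require('fs');

    // Record process.versions.v8 next to the .jsc at build time, then verify
    // it before loading, instead of letting a stale cache fail mysteriously.
    const builtFor = fs.readFileSync('app.jsc.v8', 'utf8').trim();
    if (process.versions.v8 !== builtFor) {
      throw new Error(
        `app.jsc was built for V8 ${builtFor}, but this Node runs ${process.versions.v8}`
      );
    }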
The Node.js source already contains code for disassembling binary bytecode.
You can get a text dump of your V8 bytecode, which you would then need to analyze.
But that text dump is very long and misses some important information, such as the constant pool, so you need to modify the Node.js source.
Please check https://github.com/3DGISKing/pkg10.17.0
I have attached an exported XML file there.
If you study V8, it should be possible to analyze the dump and recover source code from it.
Keeping it short and sweet: you can try the Ghidra node.js package, which is based on the Ghidra reverse engineering framework open-sourced by the NSA in 2019. Ghidra is capable of disassembling and decompiling V8 bytecode. The inner workings of the disassembly are quite complex; this answer is short but sufficient.
I have a C++/CLI solution here that isn't mixed with native C++ (although we have that type too). It consists of three projects, two of which are relevant to my question.
The first is a static library (.lib) that deals with Active Directory matters.
The second is the main executable project (.exe), which depends on the other projects.
I'm new to Visual Studio 2012 and want to take advantage of tools like code analysis. Running code analysis over the solution reveals several CA2122 warnings:
CA2122: Do not indirectly expose methods with link demands
I understand the security concerns related to this warning, and I think I understand how to deal with it, although I'm also new to this security stuff. These warnings relate to the Active Directory code when the whole solution is examined; when only the lib project is examined, they do not appear and everything seems fine.
Now to the core of the problem:
1. I tried to mark all methods I'm warned about with the SecuritySafeCritical attribute --> no change, same warnings.
2. I've solved this warning in another project by marking the whole assembly as SecurityCritical and adding SecuritySafeCritical to the problematic method. That won't work here, because adding an AssemblyInfo.cpp that marks the assembly as SecurityCritical has no effect on the problem. (I know that *.cpp files seem to be obsolete in managed static libraries, since the code apparently has to live entirely in the header files, which makes this kind of project obsolete... but we don't want a DLL for every small part, and we also want this stuff encapsulated in its own project instead of having loose header files or mixing it with other regions.)
3. After that I tried marking the whole assembly of the main project as SecurityTransparent, because as far as I understand it, SecuritySafeCritical code can be called by SecurityTransparent or SecurityCritical code (which for me covers every kind of security). --> My SecuritySafeCritical methods are now flagged with CA2141 warnings, and many other methods produce new warnings (most of them related to exception handling):
CA2141: Transparent methods must not satisfy LinkDemands
CA2140: Transparent code must not reference security critical items
4. So I decided to try marking this assembly as SecurityCritical too. --> My SecuritySafeCritical methods finally produce no warnings, but all the other warnings from methods with exception handling remain.
So I don't know how to solve this problem. I assume the managed static library is the problem, and that with a DLL project I could maybe solve it as mentioned in 2., but I want to avoid shipping another *.dll project with our programs.
I searched for a solution but found nothing that helps in this case. Information on this topic is also rare, out of date (it relates to .NET Framework 2.0, while the whole security model seems to have changed massively with .NET Framework 4.0), or hard for me to understand. So I hope someone has an idea of what I could try or what I should do.
Is there an option in the Visual Studio 2012 C++ compiler to make it warn if you use uninitialized class members?
The RTC checks are not compatible with managed C++ (/clr).
What kind of data member? A pointer member variable, or one that gets its constructor called automatically?
It is really up to the author to be experienced enough to be paranoid about pointers and watch their initialization, assignment and dereferencing like a hawk to make sure it's safe. No compiler or static analyzer can take the place of a competent programmer in making sure pointers are used safely.
You basically want to find these issues at compile time if possible, and at run-time only as a last resort.
For compile-time checking, you do have some options that might help:
The static analyzer that comes with Visual Studio can warn if a pointer is being used without being checked first, but it does not give the same emphasis to pointer class members. I've seen a third-party static analyzer called Cppcheck that does perform that check.
Coverity (Another static analyzer) would also probably do that too. Ah, but wait, Coverity doesn't work for managed code (last I checked). And it's so expensive you probably have to sell your house, and your neighbors house to pay for it, and have a coverity engineer come to your office to take 3 days to get it installed, and then it will take 24 hours to run the analysis.
For runtime checking, I have no idea what alternative you might have to RTC with managed code. But it would be very Very VERY wise to minimize the amount of pure native code you expose to the /clr switch. Some programmer years ago turned that switch on for our largest project (it had hundreds of files). Even though only 4 or 5 of those files used managed code, he turned the switch on for the hundreds of other purely native files anyway.
As a result, there were thousands of crashes over the years until we reversed that stupidity.
So put your code in clear, manageable layers: separate the managed C++ code from the pure native C++ code, and in Visual Studio turn on the /clr switch only for the managed files.
And by all means, use static analysis tools as much as possible.
Since I'm a big fan of Dapper and am using it for a couple of SQL Azure projects, I would like to use it on MonoTouch as well, against the built-in Mono.Data.SQLite.
I realize that Dapper's speed comes from dynamic code generation, which unfortunately is a big no-no on iOS, where everything has to be compiled ahead of time by MonoTouch.
First question: has anyone made any effort to provide a reflection-based implementation of the relevant parts of Dapper? (I know it will be a LOT slower.) If not, how hard would it be to implement (I've only glanced over the Dapper source)?
Second question: I hope I'm not sounding naive here, but would it be remotely possible to write a little utility that materializes the dynamically generated IL for your entity POCOs into an IL assembly source file that could be added to your MonoTouch project and thus get AOT-compiled at build time? Or is this impossible due to joins, QueryMultiple, etc.?
Note: I realize there is at least one attempt to port Dapper to MonoTouch, but glancing over the source, I have no idea how that's supposed to fly, since all the dynamic method generation stuff is still in there.
So my company uses a delightfully buggy program called Rational Purify (as a plugin to Microsoft Visual Studio) to manage memory leaks. The program is designed to let you click on a memory leak after you have encountered it and then jump to the line where the leak occurs.
Unfortunately, Purify is malfunctioning: it will not jump to the place the leak occurred, it only mentions the class and method the leak occurs in. Sometimes this is about as useful as hiring a guide to help you hunt bears and having him point at the forest and tell you there are bears in there.
Does anyone with Purify experience have any idea how I might fix this problem, or know of a good manual to look through?
Generally you have two options. One, exclude modules (DLLs) from instrumentation in Purify; that sometimes helps. Two, get BoundsChecker; it does compile-time instrumentation, which is much slower, but the level of detail is an order of magnitude better.
We generally use Purify for check-ins and sanity checking, and BoundsChecker when we know a bug/crash exists.
BoundsChecker has some nice features, like instrumenting only files A.cpp and B.cpp and excluding all the rest.
Be aware that neither of these two applications works on 64-bit operating systems, and BoundsChecker will not even install on a 64-bit OS. Most frustrating if you make the switch to native 64-bit development with a 32-bit back port!
Purify is like a Swiss Army knife. If you know how to use it, you will get some results, not the best, but still results. If you don't, it will crash, because it is just another program running on Windows.
In the end you will need a lot of patience, rebuilds and a bit of luck.
Purify comes with a script called ScanVSSolutionForPurifyPlus.pl which will ensure that your project files have all the right settings for Purify to work properly. If you haven't run it, give it a go.
(I've personally used ScanVSSolutionForPurifyPlus.pl on a large solution, and it worked like a charm. One caveat: when you give it the name of your .sln file, you might need to give it the full pathname.)
Are you sure you have a debug build? Or rather, do you have all PDBs enabled? Try WinDbg on your executable and check with the !lmi command what is visible.
Is the whole codebase properly instrumented?
Also consider using something else, like the free Visual Leak Detector or Microsoft's LeakDiag tool.
I used Purify about 5 years ago. It was really flaky then. They kept promising to fix all the bugs in the 'next release'. We gave up on it in the end. One can only wonder whether they used their own QA tools on their own products. Oh, the irony...