How do you lock a dll? - security

I'm producing a DLL for a business partner of mine that he is going to integrate into his app. But I also want to somehow lock the DLL so it cannot be used by anyone else. The API of the DLL is quite straightforward, so it would be easy to reverse-engineer and use elsewhere.
How do I do that? My only idea so far would be to add a function to the DLL that unlocks it when the right parameter is passed in. But it can't be a static value, which would be too easy to intercept, so I am looking for something semi-dynamic.
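To make it concrete, something along the lines of this challenge-response sketch is what I mean by "semi-dynamic" (Java used purely for illustration here; the secret and the names are placeholders):

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Instead of a fixed magic value, the host app must answer a random
    // challenge with an HMAC over it, so there is no constant to sniff.
    public class Unlock {
        private static final byte[] SHARED_SECRET = "replace-me".getBytes(); // placeholder

        public static byte[] newChallenge() {
            byte[] nonce = new byte[16];
            new SecureRandom().nextBytes(nonce);
            return nonce;
        }

        public static boolean unlock(byte[] challenge, byte[] response) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SHARED_SECRET, "HmacSHA256"));
            byte[] expected = mac.doFinal(challenge);
            return MessageDigest.isEqual(expected, response); // constant-time compare
        }
    }

Of course the secret still ships inside the DLL, so I realize this only raises the bar.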
Any ideas? Thanks in advance.

For .NET libraries, this is already built into the framework; you just need to set it up. Here is an MSDN article about it:
How to: License Components and Controls
Beyond licensing, you should also obfuscate your code using a tool such as Dotfuscator.
PreEmptive's Dotfuscator

How likely do you think it is that you'll actually suffer any ill effects (lost income, etc.) from this? How significant would such ill effects be? Weigh that against the cost of doing this in the first place. You could use obfuscation (potentially; it depends on what kind of DLL it is, native or .NET) but that will only give a certain measure of protection.
You need to accept that it's unlikely (or impossible) that you'll find a solution which is 100% secure. There are shades of grey, and the harder you make it for miscreants, the more effort (or money) you're likely to have to put in too. It may well also make it harder to diagnose issues (e.g. obfuscators munge stack traces; some provide a mapping back to the original, but you're likely to lose some information).

It looks like you need to create and use license keys:
http://www.google.com/search?q=creating+license+keys+for+applications&rls=com.microsoft:pt&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1
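The usual scheme behind those search results: a license key is the licensee's name plus a signature over it, made with a private key you keep, so the application can verify keys but not mint them. A minimal sketch in Java (the key material and names are hypothetical):

    import java.nio.charset.StandardCharsets;
    import java.security.KeyFactory;
    import java.security.PublicKey;
    import java.security.Signature;
    import java.security.spec.X509EncodedKeySpec;
    import java.util.Base64;

    // Verifies that a license key (name + Base64 signature) was issued by
    // the vendor. Only the public key ships with the application.
    public class LicenseKey {
        public static boolean isValid(String licensee, String signatureB64,
                                      byte[] publicKeyDer) throws Exception {
            PublicKey pub = KeyFactory.getInstance("RSA")
                    .generatePublic(new X509EncodedKeySpec(publicKeyDer));
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initVerify(pub);
            sig.update(licensee.getBytes(StandardCharsets.UTF_8));
            return sig.verify(Base64.getDecoder().decode(signatureB64));
        }
    }

As other answers here note, a patched binary can simply skip this check; it only deters casual copying.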

Quick and dirty in .NET: strong-name all your assemblies and all assemblies that will access your "locked" DLL. Mark all your API classes as internal instead of public. Then, on your "locked" DLL, specify the assemblies that should have access to your internal API with the InternalsVisibleTo attribute.
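For comparison, the Java module system (Java 9+) has a close analogue of this trick: a qualified exports clause naming the only modules allowed to use a package. A minimal sketch with hypothetical module names:

    // module-info.java for the "locked" library. Only the partner's module
    // can compile against or access the API package; other code is refused.
    module com.example.lockedlib {
        exports com.example.lockedlib.api to com.partner.app;
    }

Like InternalsVisibleTo, this only raises the bar; reflection switches and bytecode editing can still get around it.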

Are you trying to protect against casual pirates, or something else? Whatever you do, if the software is remotely useful it is going to be cracked, patched and whatnot; just ask any of the third-party controls vendors.
Any solution that you come up with is going to be cracked. Someone might just open the DLL in a hex editor and patch the function that does your checks, validation and verification.


How can I package my Java application into an executable jar that cannot be decompiled (for example, by JadClipse)?
You can't. If the JRE can run it, an application can decompile it.
The best you can hope for is to make it very hard to read (replace all symbols with combinations of 'l' and '1' and 'O' and '0', put in lots of useless code and so on). You'd be surprised how unreadable you can make code, even with a relatively dumb translation tool.
This is called obfuscation and, while not perfect, it's sometimes adequate.
Remember, you can't stop the determined hacker any more than the determined burglar. What you're trying to do is make things very hard for the casual attacker. When presented with the symbols O001l1ll10O, O001llll10O, OO01l1ll10O, O0Ol11ll10O and O001l1ll1OO, and code that doesn't seem to do anything useful, most people will just give up.
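A contrived Java illustration of how far simple renaming goes (the identifiers are made up to match the example above):

    // Before/after illustration of confusable-name obfuscation.
    // Both methods carry identical logic; only the names differ.
    public class ObfuscationDemo {
        // What the author wrote:
        static int yearsUntilRetirement(int age) {
            return 65 - age;
        }

        // What a reader of the decompiled output sees after renaming:
        static int O001l1ll10O(int OO01l1ll10O) {
            return 65 - OO01l1ll10O;
        }

        public static void main(String[] args) {
            System.out.println(yearsUntilRetirement(30)); // 35
            System.out.println(O001l1ll10O(30));          // 35
        }
    }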
First, you can't prevent people from reverse-engineering your code. The JVM bytecode has to be plain to be executed, and there are several programs to reverse-engineer it (the same applies to the .NET CLR). You can only make it progressively harder, raising the barrier (i.e. the cost) of seeing and understanding your code.
The usual way is to obfuscate the source with some tool. Classes, methods and fields are renamed throughout the codebase, even with invalid identifiers if you choose to, making the code next to impossible to comprehend. I had good results with JODE in the past. After obfuscating, use a decompiler to see what your code looks like...
Beyond obfuscation, you can encrypt your class files (all but a small starter class) with some method and use a custom class loader to decrypt them. Unfortunately the class loader itself can't be encrypted, so people may figure out the decryption algorithm by reading the decompiled code of your class loader. But the window for attacking your code gets smaller. Again, this does not prevent people from seeing your code; it just makes it harder for the casual attacker.
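A minimal sketch of such a decrypting class loader, assuming a made-up file layout (.class.enc files on disk) and deliberately naive key handling; anyone who decompiles the loader can recover the key:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    // Loads encrypted .class.enc files, decrypts them in memory and defines
    // the class. The default AES/ECB mode used here is weak; it's only a sketch.
    public class DecryptingClassLoader extends ClassLoader {
        private final SecretKeySpec key;
        private final Path classDir;

        public DecryptingClassLoader(byte[] keyBytes, Path classDir, ClassLoader parent) {
            super(parent);
            this.key = new SecretKeySpec(keyBytes, "AES");
            this.classDir = classDir;
        }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                byte[] encrypted = Files.readAllBytes(
                        classDir.resolve(name.replace('.', '/') + ".class.enc"));
                Cipher cipher = Cipher.getInstance("AES");
                cipher.init(Cipher.DECRYPT_MODE, key);
                byte[] bytecode = cipher.doFinal(encrypted);
                return defineClass(name, bytecode, 0, bytecode.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }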
You could also try to convert the Java application to a Windows EXE, which would hide the fact that it's Java at all (to some degree), or really compile it into machine code, depending on how much you need JVM features. (I have not tried this.)
GCJ is a free tool that can compile to either bytecode or native code. Keep in mind that this does sort of defeat the purpose of Java.
A little late I know, but the answer is no.
Even if you write in C and compile to native code, there are disassemblers and debuggers which will allow people to step through your code. Granted, debugging optimized code without symbolic information is a pain, but it can be done; I've had to do it on occasion.
There are steps that you can take to make this harder, e.g. on Windows you can call the IsDebuggerPresent API in a loop to see if somebody is debugging your process and, if so and it is a release build, terminate the process. Of course a sufficiently determined attacker could intercept your call to IsDebuggerPresent and always return false.
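The same cat-and-mouse game exists on the JVM. A rough Java analogue, using a swapped-in technique (checking for a JDWP debug agent rather than calling IsDebuggerPresent) that is just as easy to defeat:

    import java.lang.management.ManagementFactory;

    // Looks for a JDWP agent among the JVM's startup arguments, which is how
    // Java debuggers usually attach. Trivially bypassed, like the Win32 check.
    public class DebuggerCheck {
        public static boolean debuggerAttached() {
            for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
                if (arg.contains("-agentlib:jdwp") || arg.contains("-Xrunjdwp")) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            if (debuggerAttached()) {
                System.err.println("Debugger detected, exiting.");
                System.exit(1);
            }
        }
    }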
A whole variety of techniques has cropped up; between the people who want to protect something and the people who are out to crack it wide open, it is a veritable arms race! Once you go down this path you will have to constantly keep updating and upgrading your defenses; there is no stopping point.
This is not my own practical solution, but here is what I think is a good collection of resources and tutorials for taking this as far as it will go.
A suggestion from this website (the Oracle community):
1. (Clean way) Obfuscate your code. There are many open-source and free obfuscator tools; here is a simple list of them: [Open source obfuscators list]. These tools make your code unreadable (though you can still decompile it) by changing names. This is the most common way to protect your code.
2. (Not so clean way) If you have a specific target platform (like Windows), or you can ship different versions for different platforms, you can write a sophisticated part of your algorithms in a low-level language like C (which is very hard to decompile and understand) and use it as a native library in your Java application; see the sketch after this list. It is not clean, because many of us use Java for its cross-platform abilities, and this method erodes that ability.
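The Java side of option 2 might look like this minimal sketch (the library name and method are hypothetical; the C implementation is not shown):

    // Declares a method whose body lives in a compiled C library loaded at
    // startup. A Java decompiler sees only the declaration, not the algorithm.
    // The matching C header can be generated with `javac -h`.
    public class NativeAlgo {
        static {
            System.loadLibrary("secretalgo"); // libsecretalgo.so / secretalgo.dll
        }

        public static native long computeSecretHash(String input);
    }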
And below is a step-by-step guide to follow:
ProtectYourJavaCode
Enjoy!
Keep adding your solutions; we need more of them.

NSIS - Compile with opcodes rearranged to prevent access to source code

I am trying to reduce, and make as difficult as possible, the ability to access my source code after it has been compiled by NSIS. I have read that the only way to reduce the chance of unzipping is to modify the order of the opcodes in Source\fileform.h in the NSIS source code and then compile the new version.
This is a bit over my head. I was wondering if anyone has done this before and is willing to post one they have done. (Or create one for me?)
The main reason for this is that I have info that I encrypt using Blowfish within NSIS, and I do not want the chance of someone finding out what the encryption keys are. (They are used for licensing the software.) I understand nothing is foolproof, but I just want it to be as difficult as possible.
I know it's asking a lot, but I could really use this.
Thanks!
I don't believe there are any publicly available modified builds like that. And if there were and it got popular, the decompilers would just add support for it.
I have a complete step-by-step guide to building NSIS here.
If you know C/C++, Delphi or C# you could build your own private NSIS plug-in that handles the encryption details.
No matter what you do, somebody who knows how to use a debugger can easily set a breakpoint on the Blowfish plug-in and view your key. The only way around that is a custom plug-in or an external application that handles the cryptography internally...

Is it or should it be possible to modify the GUI of an application after it's compiled?

I'm a Linux user, and I have been very hesitant to use Glade to design GUIs, since the XML files it produces can easily be modified. I know it doesn't sound like a major issue, but what if it's a commercial app that you just don't want people changing?
I use Mac OS X every once in a while, and I figured out that it uses files called ".nib"s for GUIs. I think they're essentially the same type used in NeXTSTEP and OpenStep (there's even a Linux app which lets you edit these files). Anyway, these files are included in the application bundle and, according to some people, are completely editable. One person claims he even successfully edited Keynote's interface.
Now, why would that be possible? Is it completely okay for the end user to change the interface? Or is it better to have the GUI directly in the compiled application code, like traditional GTK apps?
OS X nib files are one option; the other option is to do things programmatically. On Android, XML files can define the GUI, or program code can do it. In Windows WPF, the UI is defined in XAML, an XML-based language. Firefox/Mozilla? XUL, another XML-based UI language.
Most modern GUI toolkits offer both of these options, and some only support defining UIs in files.
But even binaries are modifiable. With a good binary reverse engineering tool, it's wide open. The only way to be really certain is to do what Apple did with iOS, and run signed code; the entire bundle is signed by a key and can't be run if modified.
This isn't a problem for almost anyone, though. Why do you care if the UI is modified? The underlying code isn't, so functionality can't be added or modified.
As a corollary (and a little off-topic), something that you might have a valid concern about is stuff a little more like this.
I don't really see a problem with it. If a user messes up his UI, then it's his problem. Think of it like moddable games. Users always loved them, and in the end, most games benefit from it. There is usually nothing secret about an application's user interface. If there is, you could always do some sort of encryption.
As others have said, you can also add checksums if you just want to disallow editing.
The XML specifies little more than what the interface looks like. Without the compiled-in event-handling code, it's pretty much useless. My opinion is that customers change it at their own risk, and you might actually get some free, useful improvements out of their hacks.
If you're really paranoid about people changing it, you could always add an MD5 digest verification step or something when you load the XML, or compile the XML string into a header file, but that defeats many of the benefits.
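A minimal sketch of that verification step, in Java for consistency with the other examples on this page, with SHA-256 swapped in for MD5 (MD5 is too easy to forge); the expected digest is a placeholder you would bake in at build time:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    // Refuses to load the UI definition if it no longer matches the hash
    // recorded at build time.
    public class UiIntegrityCheck {
        private static final byte[] EXPECTED_SHA256 = new byte[32]; // placeholder

        public static boolean uiFileUnmodified(String path) throws Exception {
            byte[] data = Files.readAllBytes(Paths.get(path));
            byte[] actual = MessageDigest.getInstance("SHA-256").digest(data);
            return MessageDigest.isEqual(EXPECTED_SHA256, actual);
        }
    }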
The theming engine can make substantial-looking changes to your GUI, as can tools like Parasite. Updating the Glade layout — at their own risk — is much safer than either of those.
What's wrong with users customizing the UI anyway?

Terms for modern system

This may not be for this forum, but...
We are writing a new system, and people are used to the older system, where components are called "modules". So they talk about the accounting module, the auditing module, etc.
This feels very old, like COBOL/mainframe talk. What would be a better term for functional components in a modern distributed Java system? Would you say the accounting component? The accounting service? Not sure. They refer to the function in the system (and all the components behind it) that allows you to perform accounting functions.
If it ain't broke, don't fix it.
The fact that you are asking SO for advice on this suggests that you don't have a better nomenclature ready to use. Spend your time doing something more productive than fretting about this.
I think that "module" is a perfectly reasonable way to refer to a set of functionality. It's still widely used in many languages and frameworks. If it sounds "old" it's only because of your own frame of reference.
Besides, the customer is always right. You should be adopting their verbiage instead of trying to force them to use yours. Do what you want internally but stick with "modules" for the customers' sakes.
Python, a thoroughly modern programming language, has "modules." I don't think there is anything archaic about the word.
If users are used to calling things in the old system "modules," then it will make the new system easier to learn if you use the same terminology.
"Bundle" is the new term ;-) coined, of course, by OSGi. But when you say "bundle", a lot of assumptions are made about your code, so whether you want to use it or not is left to you.

Should Programmers Use Decompilers?

Lately I've been listening to Jeff Atwood and Joel Spolsky's radio show, and they have been talking about dogfooding (the process of reusing your own code; see Jeff Atwood's blog post). So my question is: should programmers use decompilers to see how another programmer's code is implemented and works, to make sure it won't break your own code? Or should you just trust that programmer's code and adapt to it, because using decompilers goes against everything we as programmers have ever learned about hiding data (well, OO programmers at least)?
Note: I wasn't sure which tags this would go under so feel free to retag it.
Edit: Just to clarify, I was asking about decompilers as a last resort, say when you can't get the source code for some reason. Sorry, I should have said this in the original question.
Yes, it can be useful to use the output of a decompiler, but not for what you suggest. The output of a decompiler doesn't ever look much like what a human would write (except when it does). It can't tell you why the code does what it does, or what a particular variable should mean. It's unlikely to be worth the trouble unless you already have the source.
If you do have the source, then there are lots of good reasons to use a decompiler in your development process.
Most often, the reason for using the output of a decompiler is to better optimize code. Sometimes, with high optimization settings, a compiler will just get it wrong. This can be almost impossible to sort out in some cases without comparing the compiler's output at different levels of optimization.
Other times, when trying to squeeze the most performance out of a very hot code path, a developer can try arranging their code in a few different ways and compare the compiled results. As a last resort, this may be the simplest way to start when implementing a code block in assembly language, by duplicating the compiler's output.
Dogfooding is the process of using the code that you write, not necessarily re-using code.
However, code reuse typically means you have the source, hence "code reuse"; otherwise it's just using a library supplied by someone else.
Decompiling is hard to get right, and the output is typically very hard to follow.
You should use a decompiler if it is the tool that's required to get the job done. However, I don't think it's the proper use of a decompiler to get an idea of how well the code which is being decompiled was written. Depending on the language you use, the decompiled code can be very different from the code which was actually written. If you want to see some real code, look at open source code. If you want to see the code of some particular product, it's probably better to try to get access to the actual code through some legal means.
I'm not sure what exactly it is you are asking, what you expect "decompilers" to show you, or what this has to do with Atwood and Spolsky. If you're programming to public interfaces, why would you need to see the original source of the third-party code to check whether it will "break" your code? You could determine this more effectively by building tests. Also, what the decompiler will tell you depends largely on the language/platform the software was written in, whether Java, .NET, C and so forth. It's not the same as having the original source to read, even in the case of .NET assemblies. Anyway, if you are worried about third-party code not working for you, then you should really be writing typical unit tests against it rather than trying to decompile it. As for whether you "should" in some sense other than it being the best use of your time, I'm not sure what you mean.
Should Programmers Use Decompilers?
Use the right tool for the right job. Decompilers don't often produce results that are easy to understand, but sometimes they are what's needed.
should programmers use decompilers to see how another programmer's code is implemented and works, to make sure it won't break your code?
No, not unless you find a problem and need support. In general you don't use code you don't trust, and if you have to use it even when you don't trust it, you develop tests to prove the functionality and verify that later upgrades still work as expected.
Don't use functionality you don't test, unless you have very good support or a relationship of trust.
-Adam
Or should you just trust that programmer's code and adapt to it, because using decompilers goes against everything we as programmers have ever learned about hiding data (well, OO programmers at least)?
This is not true at all. You would use a decompiler not because you want to get around abstraction or encapsulation, or to defeat OO principles, but because you want a better understanding of why the code is behaving the way it is.
Sometimes you need to use a decompiler (or in the Java world, a bytecode viewer) when you are troubleshooting an annoying bug with a 3rd party library where an exception is thrown with no useful error message, no logging, etc.
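For example, with the offending jar on the classpath, the JDK's own javap will show you the bytecode (the jar and class names here are made up):

    javap -classpath thirdparty.jar -c -p com.thirdparty.FlakyParser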
Use of a decompiler has nothing to do with OO principles.
The short answer to this... Program to a public and documented specification, not to an implementation. Relying on implementation specifics and side-effects will burn you.
Decompilation is not a tool to help you program correctly, though it might, in a pinch, assist you in understanding a problem with someone else's code for which you don't have source.
Also, beware of the possible legal risk of decompiling; many software companies have no-decompile clauses which could expose you and your employer to legal consequences.
