Anyone visiting a torrent tracker is sure to find droves of "cracked" programs ranging from simple shareware to software suites costing thousands of dollars. It seems that as long as the program does not rely on a remote service (e.g., an MMORPG), any built-in copy protection or user authentication is useless.
Is it effectively not possible to prevent a cracker from circumventing the copy protection? Why?
No, it's not really possible to prevent it. You can make it extremely difficult - some StarForce versions apparently accomplished that, at the expense of seriously pissing off a number of "users" (victims might be more accurate).
Your code is running on their system and they can do whatever they want with it. Attach a debugger, modify memory, whatever. That's just how it is.
Spore appears to be an elegant example of how draconian efforts in this direction have not only totally failed to prevent it from being shared around P2P networks etc., but have significantly harmed the image of the product and almost certainly its sales.
Also worth noting that users may need to crack copy protection for their own use; I recall playing Diablo on my laptop some years back, which had no internal optical drive. So I dropped in a no-CD crack, and was then entertained for several hours on a long plane flight. Forcing that kind of check, and hence forcing users to work around it, is a misfeature of the stupidest kind.
It is impossible to stop it without breaking your product. The proof:
Given: The people you are trying to prevent from hacking/stealing will inevitably be much more technically sophisticated than a large portion of your market.
Given: Your product will be used by some members of the public.
Given: Using your product requires access to its data on some level.
Therefore, you have to release your encryption key / copy-protection method / program data to the public in enough of a fashion that the data can be seen in its usable, unencrypted form.
Therefore, you have in some fashion made your data accessible to pirates.
Therefore, your data will be more easily accessible to the hackers than to your legitimate audience.
Therefore, ANYTHING past the most simplistic protection method will end up treating your legitimate audience like pirates and alienating them.
Or in short, the way the end user sees it:
Because it's a fixed defense against a thinking opponent.
The military theorists beat this one to death how many millennia ago?
Copy-protection is like security -- it's impossible to achieve 100% perfection but you can add layers that make it successively more difficult to crack.
Most applications have some point where they ask (themselves), "Is the license valid?" The cracker just needs to find that point and alter the compiled code to return "yes." Alternatively, crackers can use brute force to try different license keys until one works. There are also social factors -- once one person buys the tool, they might post a valid license code on the Internet.
So, code obfuscation makes it more difficult (but not impossible) to find the code to alter. Digital signing of the binaries makes it more difficult to change the code, but still not impossible. Brute-force methods can be combated with long license codes with lots of error-correction bits. Social attacks can be mitigated by requiring a name, email, and phone number that is part of the license code itself. I've used that method to great effect.
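To sketch that last idea (an illustration of mine, not the author's exact scheme): deriving the license code from the registered identity with an HMAC makes any posted code traceable to its buyer, and unforgeable without the vendor's secret key.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LicenseCodes {
    // Hypothetical vendor secret; in practice it stays on the vendor's machines.
    private static final byte[] VENDOR_KEY =
            "hypothetical-vendor-secret".getBytes(StandardCharsets.UTF_8);

    // Derive the license code from the identity the customer registered with.
    static String issue(String name, String email, String phone) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(VENDOR_KEY, "HmacSHA256"));
        byte[] tag = mac.doFinal((name + "|" + email + "|" + phone)
                .getBytes(StandardCharsets.UTF_8));
        // Truncated for typing convenience; longer codes resist brute force better.
        return Base64.getEncoder().encodeToString(tag).substring(0, 20);
    }
    // Note: verifying this client-side means shipping VENDOR_KEY, which enables
    // keygens; an asymmetric signature avoids that at the cost of longer codes.
}
```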
Good luck!
Sorry to bust in on an ancient thread, but this is what we do for a living and we're really really good at it. It's all we do. So some of the information here is wrong and I want to set the record straight.
Theoretically uncrackable protection is not only possible, it's what we sell. The basic model the major copy protection vendors (including us) follow is to encrypt the exe and DLLs and use a secret key to decrypt at runtime.
There are three components:
Very strong encryption: we use AES 128-bit encryption, which is effectively immune to a brute-force attack. Some day, when quantum computers are common, it might be possible to break it, but it's unreasonable to assume anyone will crack encryption of this strength to copy software, as opposed to national secrets.
Secure key storage: if a cracker can get the key to the encryption, you're hosed. The only way to GUARANTEE a key can't be stolen is to store it on a secure device. We use a dongle (it comes in many flavors, but the OS always just sees it as a removable flash drive). The dongle stores the key on a smart card chip which is hardened against side-channel attacks like DPA (differential power analysis). The key generation is tied to multiple factors which are non-deterministic and dynamic, so no single-key/master crack is possible. The communication between the key storage and the runtime on the computer is also encrypted, so a man-in-the-middle attack is thwarted.
Debugger detection: Basically you want to stop a cracker from taking a snapshot of memory (after decryption) and making an executable out of that. Some of the stuff we do to prevent this is secret, but in general we allow for debugger detection and lock the license when a debugger is present (this is an optional setting). We also never completely decrypt the entire program in memory so you can never get all the code by "stealing" memory.
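Their actual checks are secret, but as a naive illustration of the debugger-detection idea (my own sketch, unrelated to this vendor's product), a JVM process can at least notice the JDWP debug agent in its launch arguments:

```java
import java.lang.management.ManagementFactory;

public class DebuggerCheck {
    // Naive check: the JDWP agent shows up in the JVM's input arguments.
    // A native-code protector would use OS-level checks instead; real
    // products layer many such tests and hide them from the disassembly.
    static boolean debuggerLikelyAttached() {
        return ManagementFactory.getRuntimeMXBean().getInputArguments()
                .stream()
                .anyMatch(arg -> arg.contains("-agentlib:jdwp"));
    }

    public static void main(String[] args) {
        if (debuggerLikelyAttached()) {
            System.err.println("License locked: debugger detected.");
            System.exit(1);
        }
    }
}
```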
We have a full-time cryptologist who can crack just about anybody's protection system. He spends all his time studying how to crack software so we can prevent it. And so you don't think this is just a cheap shill for what we do: we're not unique; other companies such as SafeNet and Arxan Technologies can do some very strong protection as well.
A lot of software-only or obfuscation schemes are easy to crack, since the cracker can just identify the program entry point and branch around any license checking or other stuff the ISV has put in to try to prevent piracy. Some people, even with dongles, will throw up a dialog when the license isn't found -- setting a breakpoint on that error will give the cracker a nice place in the assembly code to apply a patch. Again, this requires unencrypted machine code to be available -- something you don't get if you do strong encryption of the .exe.
One last thing: I think we're unique in that we've had several open contests where we provided a system to people and invited them to crack it. We've had some pretty hefty cash prizes but no one has yet cracked our system. If an ISV takes our system and implements it incorrectly it's no different from putting a great padlock on your front door attached to a cheap hasp with wood screws--easy to circumvent. But if you use our tools as we suggest we believe your software cannot be cracked.
HTH.
The difference between security and copy-protection is that with security, you are protecting an asset from an attacker while allowing access by an authorized user. With copy protection, the attacker and the authorized user are the same person. That makes perfect copy protection impossible.
I think given enough time a would-be cracker can circumvent any copy-protection, even ones using callbacks to remote servers. All it takes is redirecting all outgoing traffic through a box that will filter those requests, and respond with the appropriate messages.
On a long enough timeline, the survival rate of copy protection systems is 0. Everything is reverse-engineerable with enough time and knowledge.
Perhaps you should focus on ways of making your software be more attractive with real, registered, uncracked versions. Superior customer service, perks for registration, etc. reward legitimate users.
Basically, history has shown us that the most you can buy with copy protection is a little time. Fundamentally, since there is data you want someone to see one way, there is a way to get to that data, and since there is a way, someone can exploit it to get at the data.
The only thing that any copy protection, or encryption for that matter, can do is make it very hard to get at something. If someone is motivated enough, there is always the brute-force way of getting around things.
But more importantly, in the computer software space we have tons of tools that let us see how things are working, and once you understand how the copy protection works, it's a very simple matter to get what you want.
The other issue is that copy protection for the most part just frustrates the users who are paying for your software. Take a look at the open-source model: they don't bother, and some folks are making a ton of money encouraging people to copy their software.
"Trying to make bits uncopyable is like trying to make water not wet." -- Bruce Schneier
Copy protection and other forms of digital restrictions management are inherently breakable, because it is not possible to make a stream of bits visible to a computer while simultaneously preventing that computer from copying them. It just can't be done.
As others have pointed out, copy protection only serves to punish legitimate customers. I have no desire to play Spore, but if I did, I'd likely buy it but then install the cracked version because it's actually a better product for its lack of the system-damaging SecuROM or property-depriving activation scheme.
"Why?"
You can buy the most expensive safe in the world, and use it to protect something. Once you give away the combination to open the safe, you have lost your security.
The same is true for software: if you want people to use your product, you must give them the ability to open the proverbial safe and access the contents. Obfuscating the method to open the lock doesn't help; you have granted them the ability to open it.
You can either trust your customers/users, or you can waste inordinate amounts of time and resource trying to defeat them instead of providing the features they want to pay for.
It just doesn't pay to bother. Really. If you don't protect your software, and it's good, undoubtedly someone will pirate it. The barrier will be low, of course. But the time you save from not bothering will be time you can invest in your product, marketing, customer relationships, etc., building your customer base for the long term.
If you do spend the time on protecting your product instead of developing it, you'll definitely reduce piracy. But now your competitors may be able to develop features that you didn't have time for, and you may very well end up selling less, even in the short term.
As others point out, you can easily end up frustrating real and legitimate users more than you frustrate the crooks. Always keep your paying users in mind when you develop a circumvention technique.
If your software is wanted, you have no hope against the army of bored 17-year-olds. :)
In the case of personal copying/non-commercial copyright infringement, the key factor would appear to be the relationship between the price of the item and the ease of copying it. You can increase the difficulty to copy it, but with diminishing returns as highlighted by some of the previous answers. The other tack to take would be to lower the price until even the effort to download it via bittorrent is more cumbersome than simply buying it.
There are actually many successful examples where an author has found a sweet spot of pricing that has certainly resulted in a large profit for themselves. Trying to chase 100% prevention of unauthorized copying is a lost cause; you only need to get a large group of customers willing to pay instead of downloading illegally. The very thing that makes pirating software inexpensive is also what makes it inexpensive to publish software.
There's an easy way; I'm amazed no one has said so in the answers above.
Move the copy protection to a secured area (that is, your server, in your secure lab).
Your server receives a random number from each client (check that the number wasn't used before), encrypts some ever-evolving binary code or computation results with the client's number and your private key, and sends it back.
No hacker can circumvent this since they don't have access to your server code.
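A minimal sketch of this challenge-response idea (the class and key handling here are my own illustrative assumptions): the server refuses replayed nonces and signs fresh ones with a key the client never sees.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class LicenseServer {
    private final PrivateKey privateKey;  // never leaves the secure lab
    private final Set<String> seenNonces = ConcurrentHashMap.newKeySet();

    LicenseServer(PrivateKey privateKey) { this.privateKey = privateKey; }

    // The client sends a random nonce; the server signs it once and only once,
    // so recorded responses can't be replayed and nothing can be forged offline.
    byte[] respond(String clientNonce) throws GeneralSecurityException {
        if (!seenNonces.add(clientNonce))
            throw new SecurityException("Replay detected: nonce already used");
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(privateKey);
        sig.update(clientNonce.getBytes(StandardCharsets.UTF_8));
        return sig.sign();  // the client verifies with the matching public key
    }
}
```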
What I'm describing is basically a web service over SSL, which is where most companies are going nowadays.
Con: a competitor may develop an offline version of a similarly featured product in the time it takes you to finish your crypto code.
On protections that don't require network:
According to notes floating around, it took two years to crack a popular application which used a scheme similar to the one described in John's answer (custom hardware dongle protection).
Another scheme, which doesn't involve a dongle, is "expansive protection". I coined the term just now, but it works like this: there's an application which saves user data, and for which users can buy expansions and such from third parties. When the user loads the data or uses a new expansion, the expansions and the saved data also contain code which performs checks. And of course these checks are themselves protected by checksum checks. It's not as secure on paper as the other scheme, but in practice this application has been only half-cracked all along, so that it mostly functions as a trial despite being cracked, because the cracks always miss some checks and have to patch the expansions as well.
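To make that concrete, here's one hypothetical shape such an embedded check could take (my sketch, not the actual application's scheme): the expansion's data file carries an expected checksum of the host binary, so a patched executable breaks the expansion until the cracker patches that too.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class ExpansionCheck {
    // Shipped inside the expansion's data file, not in the main binary.
    static final long EXPECTED_CRC = 0x1234ABCDL; // hypothetical value baked in at build time

    static boolean hostBinaryIntact(Path executable) throws Exception {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(executable));
        // If a crack patched the license checks, the checksum no longer matches
        // and this expansion quietly refuses to load.
        return crc.getValue() == EXPECTED_CRC;
    }
}
```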
The key point is, while these can be cracked, if enough software vendors used such schemes it would overwork the few people in the warez scene who are willing to dedicate themselves to them. If you do the maths, the protections don't even have to be that great: as long as enough vendors used custom protections that changed constantly, it would simply overwhelm the crackers, and the warez scene would end then and there. *
The only reason this hasn't happened is that publishers buy a single protection that they use everywhere, making it a huge target, just as Windows is a target for malware; any protection used in more than a single app is a bigger target. So everyone needs to be doing their own custom, unique, multi-layered expansive protection. The number of warez releases would drop to maybe a dozen per year if it took months for the very best crackers to crack a single release.
Now for some theorycrafting in marketing software:
If you believe that warez provides worthwhile marketing value, then that should be factored into the business plan. This could entail a very, very (too) basic lite version that still costs a few dollars, to ensure it gets cracked. Then you'd hook in the users with regular "limited-time upgrade cheaply from the lite version" offers and other upselling tactics. The lite version should really have at most one buy-worthy feature and otherwise be very crippled. The price should probably be under $10. The full version should probably be twice the upgrade price from the $10 lite pay-demo version; e.g., if the full version is $80, you'd offer upgrades from the lite version to the full version for $40, or something that really seems like a killer bargain. Of course you'd avoid revealing these bargains to purchasers who went directly for the $80 edition.
It would be critical that the full version share no code with the lite version. The intent is that the lite version gets warezed and the full version is either time-intensive to crack or has network-dependent functionality that is hard to mimic locally. Crackers are probably more specialized in cracking than in trying to code up or replicate parts of the functionality that the application has on the web server.
* Addendum: for apps/games the scene might end under such unlikely and theoretical circumstances; for other things like music/movies, and in practice, I'd look at making it cheap for digital-download buyers to get additional collectible physical items or online-only value. Many people are collectors (especially the pirates), and they could be enticed into buying if it gains them something desirable enough over just a digital copy.
Beware, though: there's something called "the law of rising expectations". An example from games: the Ultima 4-6 standard box included a map made of cloth, while the Skyrim collector's edition has a map made of paper. Expectations had risen, and some people aren't going to be happy with a paper map. You want to either keep the quality of the product or service constant or manage expectations ahead of time. I believe this is critical when considering these value-add items, as you want them to be desirable but not increasingly expensive to make, and not turn into something that seems so worthless that it defeats the purpose.
This is one occasion where quality software is a bad thing, because if no one wants your software, then they will not spend time trying to crack it; on the other hand, things like Adobe's Master Collection CS3 were available just days after release.
So the moral of this story is if you don't want someone to steal your software there is one option: don't write anything worth stealing.
I think someone will come up with a dynamic AI way of defeating all the currently standard methods of copy protection; heck, I'd sure love to get paid to work on that problem. Once they get there, new methods will be developed, but it'll slow things down.
The second best way for society to stop theft of software, is to penalize it heavily, and enforce the penalties.
The best way is to reverse the moral decline, and thereby increase the level of integrity in society.
A lost cause if ever I heard one... of course that doesn't mean you shouldn't try.
Personally, I like Penny Arcade's take on it: "A Cyclical Argument With A Literal Strawman" (http://sonicloft.net/im/52).
I am currently working on a project where I need to create some architecture, framework, or standards by which I can at least increase the difficulty of cracking a piece of software, i.e., add to software security. There are already different ways to activate software, including online activation, keys, etc. I am currently studying a few research papers as well. But there are still a lot of things that I want to discuss.
Could someone point me to a decent forum, mailing list, or something like that? Any other help would be appreciated.
I'll tell you the closest thing to "crackproof": a web application.
Desktop applications are doomed, for many other reasons, but making your application run "in the cloud", in a browser, gives you a lot more control over security.
Desktop software runs on the client's computer, so the client has full access to it. A web app runs on your server, so the client only sees a tiny bit of it.
You need to begin by infiltrating the local hacking gang, posing as an 11 year old who wants to "hack it up". Once you've earned their trust you can learn what features they find hardest to crack. As you secretly release "uncrackable" software to the local message boards, you can see what they do with it. Build upon your inner knowledge until they can no longer crack your software. When that is done, let your identity be known. Ideally, this will be seen as a sign of betrayal, that you're working against them. Hopefully this will lead them to contact other hackers outside the local community to attack your software.
Continue until you've reached the top of the hacker mafia. Write your thesis as a book, sell to HBO.
Isn't it a sign of success when your product gets cracked? :)
Seriously though - one approach is to use License objects that are serialized to XML and then encrypted using public/private key pairs. They are then read back in at runtime, de-serialized and processed to ensure they are valid.
But there is still the ubiquitous "IsValid()" method which can be cracked to always return true.
You could even put that method into a signed assembly to prevent tampering, but all you've done then is create another layer of "IsValid()" which too can be cracked.
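For reference, the verification half of such a scheme is only a few lines; here's the same shape sketched in Java (XML parsing and key distribution elided), and the returned boolean is exactly the "IsValid()" a cracker would target:

```java
import java.security.PublicKey;
import java.security.Signature;

public class LicenseVerifier {
    // licenseXml: the serialized License object; signature: produced with the
    // vendor's private key at purchase time. Both are read from the license file.
    static boolean isValid(byte[] licenseXml, byte[] signature, PublicKey vendorKey)
            throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(vendorKey);
        sig.update(licenseXml);
        return sig.verify(signature); // the single point a cracker patches to "true"
    }
}
```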
We use licenses to turn on or off various features in our software, and to validate support/upgrade periods. But this is only for our legitimate customers. Anyone who wants to bypass it probably could.
We trust our legitimate customers to not try to bypass the licensing, and we accept that our illegitimate customers will find a way.
We would waste more money attempting to improve the 'tamper-proof' nature of our solution than we lose to people who pirate software.
Plus you've got to consider the pain to our legitimate customers, and asking them to paste a license string from their online account page is as much pain as I'd want to put them through. Why create additional barriers to entry for potential customers?
Anyway, depending on which solution you've got in place already, my description above might give you some ideas that could decrease the likelihood someone will crack your product.
As nute said, any code you release to a customer's machine is crackable.
Don't try for "uncrackable." Try for "there's enough deterrent to reasonably protect my assets."
There are a lot of ways you can try and increase the cost of cracking. Most of them cost you but there is one thing you can do that actually reduces your costs while increasing the cost of cracking: deliver often.
There is a finite cost to cracking any given binary. That cost is increased by the number of binaries being cracked. If you release new functionality every week, you essentially bifurcate your users into two groups:
Those who don't need the latest features and can wait for a crack.
Those who do need the latest features and will pay for your software.
By engaging in the traditional anti-cracking techniques, you can multiply the cost of cracking one binary and, consequently, widen the gap between when a new feature is released and when it is available on the black market. To top it all off, your costs will go down and the amount of value you deliver in a period of time will go up - that's what makes it free.
The more often you release, the more you will find that quality and value go up, cost goes down, and the less likely people will be to steal your software.
As others have mentioned, once you release the bits to users you have given up control of them. A dedicated hacker can change the code to do whatever they want. If you want something that is closer to crack-proof, don't release the bits to users. Keep it on the server. Provide access to the application through the Internet or, if the user needs a desktop client, keep critical bits on the server and provide access to them via web services.
Like others have said, there is no way of creating a complete crack-proof software, but there are ways to make cracking the software more difficult; most of these techniques are actually used by bad guys to hide the malware inside binaries and by game companies to make cracking and copying the games more difficult.
If you are really serious about doing this, you could check, for example, what executable packers like UPX do. But then you need to implement the unpacker as well. I do not actually recommend doing this, but studying game protectors and binary obfuscation might help you in your quest.
First of all, in what language are you writing this?
It's true that a crack-proof program is impossible to achieve, but you can always make it harder. A naive approach to application security means that a program can be cracked in minutes. Some tips:
If you're deploying to a virtual machine, that's too bad. There aren't many alternatives there. All popular VMs (Java, CLR, etc.) are very simple to decompile, and no obfuscator or signature is enough.
Try to decouple the UI code from the underlying program as much as possible. This is also a great design principle, and it will make it harder for the cracker to trace from the GUI (e.g., your "enter your serial" window) to the code where you actually perform the check.
If you're compiling to actual native machine code, you can always set the build as a release (crucially, do not include any debug information), with optimization as high as possible. Also, in the critical parts of your application (e.g., when you validate the software), be sure to make the check an inline function call, so you don't end up with a single point of failure. And call this function from many different places in your app.
As was said before, packers always add another layer of protection. And while there are many reliable choices now, you can end up being flagged as a false positive by some anti-virus programs, and all the famous choices (e.g., UPX) already have pretty straightforward unpackers.
There are some anti-debugging tricks you can also look for. But this is a hassle for you, because at some point you might also need to debug the release application!
Keep in mind that your priority is to make the critical part of your code as untraceable as possible. Clear-text strings, library calls, GUI elements, etc. are all points an attacker may use to trace the critical parts of your code.
I've read quite a few times how I shouldn't use cryptography if I'm not an expert. Basically both Jeff and Eric tell you the same:
Cryptography is difficult, better buy the security solution from experts than doing it yourself.
I completely agree; for a start, it's incredibly difficult to perceive all possible paths a scenario might take, all the possible attacks against it and against your solution... but then: when should we use it?
In a few months I will face the task of providing a security solution for a preexisting system we have. That is, we exchange data between servers; the second phase of the project is providing good security for it. Buying a third-party solution would eat up the budget anyway, so... when is it good to use cryptography for a security solution, even if you are not a TOP expert?
Edit: To clarify due to some comments.
The project is based on data transport across network locations; the current implementation allows for a security layer to be placed before transport, and we can make any changes in implementation we like (assuming reasonable changes; the architecture is well designed, so changes should have an acceptable impact). The question revolves around this phrase from Eric Lippert:
I don’t know nearly enough about cryptography to safely design or implement a crypto-based security system.
We're not talking about reinventing the wheel; I had a certain schema in mind when I designed the system, which implied secure key exchange, encryption and decryption, and some other "countermeasures" (man in the middle, etc.) using C# .NET and the included cryptography primitives. But I'm by no means an expert in the field, so when I read that, I of course start doubting myself. Am I even capable of implementing a secure system? Will there always be parts of the system that are insecure unless I subcontract them?
I think this blog posting (not mine!) gives some good guidelines.
Other than that there are some things you should never do unless you're an expert. This is stuff like implementing your own crypto algorithm (or your own version of a published algorithm). It's just crazy to do that yourself! (When there's CAPI, JCE, OpenSSL, ....)
Beyond that, though, if you're 'inventing' anything it's almost certainly wrong. In the Coding Horror post you linked to, the main mistake to my mind is that he's working at a very low level, and you just don't need to. If you were encrypting things in Java (I'm not so familiar with .NET) you could use Jasypt, which uses strong default algorithms and parameters and doesn't require you to know about ECB and CBC (though, arguably, you should anyway just because...).
There is going to be a prebuilt system for just about anything you're going to want to do with crypto. If you're storing keys then there's KeyCzar; in other cases there's Jasypt. The point is: if you're doing anything 'unusual' with crypto, you shouldn't be; if you're doing something not 'unusual', then you don't need to do the crypto yourself. Don't invent a new way to store keys, generate keys from passwords, verify signatures, etc. It's not necessary, it's complicated, and you'll almost certainly make a mistake unless you're very, very careful...
So... I don't think you necessarily need to be afraid of encrypting things but be aware that if you're specifying algorithms and parameters to those algorithms directly in your code it is probably not good. There are exceptions to any rule but as in the blog post I linked above - if you type AES into your code you're doing it wrong!
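For instance, a typical Jasypt call looks something like this (a minimal sketch; the hard-coded password is for illustration only). Notice that no algorithm name appears anywhere:

```java
import org.jasypt.util.text.BasicTextEncryptor;

public class JasyptExample {
    public static void main(String[] args) {
        BasicTextEncryptor textEncryptor = new BasicTextEncryptor();
        textEncryptor.setPassword("a-password-from-somewhere-safe"); // not hard-coded in real use
        String encrypted = textEncryptor.encrypt("the plaintext");
        String decrypted = textEncryptor.decrypt(encrypted);
        // Algorithm, salt, and iteration count are chosen by the library,
        // so "AES" (or worse, "DES/ECB") never appears in your code.
        System.out.println(decrypted);
    }
}
```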
The key "take-away" from the Matasano blog post is right at the end (note that TLS is a more precise name for SSL):
THOMAS PTACEK: GPG for data at rest. TLS for data in motion.

NATE LAWSON: You can also use Guttman's cryptlib, which has a sane API. Or Google Keyczar. They both have really simple interfaces, and they try to make it hard to do the wrong thing. What we need are fewer libraries with higher level interfaces. But we also need more testing for those libraries.
The rule of thumb with cryptography isn't that you shouldn't use it if you're not an expert; rather, it's that you shouldn't re-invent the wheel unless you're an expert. In other words, use existing implementations / libraries / algorithms as much as possible. For example, don't write your own cryptographic authentication algorithm, or come up with yet another way to store keys.
As for when to use it: whenever you have data that needs to be protected from having others see it. Beyond that, it comes down to which algorithms / approaches are best: SSL vs. IPsec vs. symmetric vs. PKI, etc.
Also, a word of advice: key management is often the most challenging part of any comprehensive cryptographic solution.
You have things backwards: first you must specify your actual requirements in detail ("provide a security solution" is meaningless marketing drivel). Then you look for ways to satisfy those specific requirements; cryptography will satisfy some of them.
Example of requirements that cryptography can satisfy:
Protect data sent over public channels from eavesdropping
Protect data against tampering (or rather, detect manipulated data)
Allow servers and clients as well as users to prove their identity to each other
You need to go through the same process as for any other requirement. What is the problem being solved, what is the outcome the users are looking for, how is the solution proposed going to be supported going forward, what are the timescales involved. Sometimes there is an off the shelf solution that does the job, sometimes what you want needs to be developed as a custom solution, and sometimes you'll choose a custom solution as it will work out more cost effective than an off the shelf one.
The same is true with security requirements, but the added complexity is that to do any sort of custom solution requires additional expertise in the technical teams (development, support etc). There is also the issue that the solution may need to be not only secure but recognised as secure. This may be far easier to achieve with an off the shelf solution.
And RickNZ is absolutely right - don't forget key management. Consider this right at the outset as part of the decision making process.
The question I would start by asking is: what are you trying to achieve?
If you are just trying to secure the transmission of the data from server A to server B, then there are a number of mechanisms you could use which would require little work, such as SSL.
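In Java, for instance, that little work can be as small as swapping in the SSL socket factory (a sketch; keystore/truststore configuration is elided):

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SecureTransport {
    static SSLSocket connect(String host, int port) throws Exception {
        // Uses the JVM's default trust store; real deployments configure
        // their own keystore/truststore, possibly with mutual authentication.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        socket.startHandshake(); // fail fast if the certificate doesn't check out
        return socket;
    }
}
```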
However, if you are trying to secure all of the data stored in the application, that is far more difficult, although if it is a requirement, then I would suggest that any cryptography, regardless of how easy it is to break, is better than none.
As someone who has been asked to do similar things, you face a daunting number of questions in implementing your system. There are major difference between securing a system and implementing cryptography systems.
Implementing a cryptography system is very difficult, and experts routinely get it wrong, both in theory and in practice. A famous theoretical failure was the knapsack cryptosystem, which has been largely abandoned due to the Lenstra–Lenstra–Lovász lattice basis reduction algorithm. On the other side, we saw in the last year how an incorrect seed in Debian's random number generator opened up any key generated by the OS. You want to use a prepackaged cryptosystem, not because it's an "experts-only" field, but because you want a community-tested and supported system. Almost every cryptographic algorithm I know of has bounds that assume certain tasks to be hard, and if those tasks turn out to be computable (as in the LLL algorithm), the whole system becomes useless overnight.
But, I believe, the real heart of the question is how to use things in order to make a secure system. While there are many libraries out there to generate keys, cipher the text, and so on, there are very few systems that implement the entire package. But as always security boils down to two concepts: worth of protection and circle of trust.
If you are guarding the Hope diamond, you spend a lot of money designing a system to protect it, employ a constant force to watch it, and hire crackers to continually try to break in. If you are just discouraging bored teenagers from reading your email, you hack something up in an hour and you don't use that address for secret company documents.
Additionally, managing the circle of trust is just as difficult a task. If your circle includes tech-savvy, like-minded friends, you make a system and give them a large amount of trust with the system. If it includes many levels of trust, such as users, admins, and so on, you have a tiered system. Since you have to manage more and more interactions with a larger circle, the bugs in the larger system become more weaknesses to hack, and thus you must be extremely careful in designing this system.
Now to answer your question. You hire a security expert the moment the item you're protecting is valuable enough and your circle of trust includes those you cannot trust. You don't design cryptography systems unless you do it for a living and have a community to break them, it is a full time academic discipline. If you want to hack for fun, remember that it is only for fun and don't let the value of what you are protecting get too high.
Pay for security (of which cryptography is a part, but only a part) what it is worth, but no more. So your first task is to decide what your security is worth, or how much various states of security are worth. Then invite whoever holds the budget to select which state to aim for and therefore how much to spend.
No absolutes here, it's all relative.
Why buy cryptography? It's one of the most developed areas of open source software, with projects of great quality :) See, for example, TrueCrypt or OpenSSL.
There is a good chance that whatever you need cryptography for, there is already a good-quality, reputable open source project for it! (And if you can see the source, you can see what they did; I once saw an article about a commercial program supposed to "encrypt" a file that simply XORed every byte with a fixed value!)
And, also, why would you want to re-invent the wheel? It's unlikely that with no cryptography background you will do better or even come close to the current algorithms such as AES.
I think it totally depends on what you are trying to achieve.
Does the data need to be stored encrypted at either end or does it just need to be encrypted whilst in transit?
How are you transferring the data? FTP, HTTP etc?
Probably not a good idea to have security as a second phase as by that point presumably you've been moving data around insecurely for a period of time?
I recently came across a system where all of the DB connections were managed by routines obscured in various ways, including base-64 encoding, MD5 sums, and various other techniques.
Why is security through obscurity a bad idea?
Security through obscurity is like burying your money under a tree. The only thing that makes it safe is that no one knows it's there. Real security is putting it behind a lock or combination, say in a safe. You can put the safe on the street corner, because what makes it secure is that no one can get inside it but you.
As mentioned by @ThomasPadron-McCarty in a comment below:
If someone discovers the password, you can just change the password, which is easy. If someone finds the location, you need to dig up the money and move it somewhere else, which is much more work. And if you use security by obscurity in a program, you would have to rewrite the program.
Security through obscurity can be said to be bad because it often implies that the obscurity is being used as the principal means of security. Obscurity is fine until it is discovered, but once someone has worked out your particular obscurity, then your system is vulnerable again. Given the persistence of attackers, this equates to no security at all.
Obscurity should never be used as an alternative to proper security techniques.
Obscurity as a means of hiding your source code to prevent copying is another subject. I'm rather split on that topic; I can understand why you might wish to do that, personally I've never been in a situation where it would be wanted.
Security through obscurity is an interesting topic. It is (rightly) maligned as a substitute for effective security. A typical principle in cryptography is that the message is unknown but the method is not. Algorithms for encryption are typically widely published and analyzed by mathematicians, and, after a time, some confidence is built up in their effectiveness, but there is never a guarantee that they're effective.
Some people hide their cryptographic algorithms but this is considered a dangerous practice because then such algorithms haven't gone through the same scrutiny. Only organisations like the NSA, which has a significant budget and staff of mathematicians, can get away with this kind of approach.
One of the more interesting developments in recent years has been the rise of steganography, which is the practice of hiding messages in images, sound files, or some other medium. The biggest problem in steganalysis is identifying whether or not a message is there at all, making this security through obscurity.
Last year I came across a story, "Researchers Calculate Capacity of a Steganographic Channel", but the really interesting thing about it is this:
Studying a stego-channel in this way leads to some counter-intuitive results: for example, in certain circumstances, doubling the number of algorithms looking for hidden data can increase the capacity of the steganographic channel.
In other words, the more algorithms you use to identify messages, the less effective the search becomes, which goes against the normal criticism of security through obscurity.
Interesting stuff.
The main reason it is a bad idea is that it does not FIX the underlying problems, just attempts to hide them. Sooner or later, the problems will be discovered.
Also, extra encryption will incur additional overhead.
Finally, excessive obscurity (like using checksums) makes maintenance a nightmare.
A better security alternative is to eliminate potential weaknesses in your code, such as enforcing input validation to prevent injection attacks.
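For example, on the injection point: a parameterized query eliminates the weakness itself rather than obscuring it (a standard JDBC sketch):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SafeQuery {
    // The user-supplied value is bound as data, never spliced into the SQL text,
    // so no amount of crafted input can change the statement's structure.
    static ResultSet findUser(Connection conn, String username) throws Exception {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```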
One factor is the ability to recover from a security breach. If someone discovers your password, just reset it. But if someone uncovers your obscure scheme, you're hosed.
Using obscurity, as all these people agree, is not security; it's buying yourself time. That said, having a decent security system implemented and then adding an extra layer of obscurity is still useful. Let's say tomorrow someone finds an unbeatable crack/hole in the SSH service that can't be patched immediately.
As a rule I've implemented in house: all public-facing servers expose only the ports needed (HTTP/HTTPS) and nothing more. One public-facing server then has SSH exposed to the internet on some obscure high-numbered port, with a port-scanning trigger set up to block any IPs that try to find it.
Obscurity has its place in the world of security, but not as the first and last line of defense. In the example above, I don't get any script/bot attacks on SSH because they don't want to spend the time searching for a non-standard SSH service port, and if they do, they're unlikely to find it before another layer of security steps in and cuts them off.
All of the forms of security available are actually forms of security through obscurity. Each method increases in complexity and provides better security but they all rely on some algorithm and one or more keys to restore the encrypted data. "Security through obscurity" as most call it is when someone chooses one of the simplest and easiest to crack algorithms.
Algorithms such as character shifting are easy to implement and easy to crack; that's why they are a bad idea. It's probably better than nothing, but it will, at most, only stop a casual glance at the data from being easily read.
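To see just how easy, a shift cipher falls to a 25-try brute force; the attacker simply reads off the candidate that looks like language (a small illustrative sketch of mine):

```java
public class ShiftCrack {
    // Try every possible shift of a Caesar-style cipher over A-Z.
    static void bruteForce(String ciphertext) {
        for (int shift = 1; shift < 26; shift++) {
            StringBuilder candidate = new StringBuilder();
            for (char c : ciphertext.toCharArray()) {
                if (c >= 'A' && c <= 'Z')
                    candidate.append((char) ('A' + (c - 'A' + shift) % 26));
                else
                    candidate.append(c);
            }
            System.out.println(shift + ": " + candidate); // eyeball the readable one
        }
    }

    public static void main(String[] args) {
        // "THIS IS NOT SECURE" encrypted with a shift of 3;
        // the plaintext pops out at shift 23 (= 26 - 3).
        bruteForce("WKLV LV QRW VHFXUH");
    }
}
```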
There are excellent resources on the Internet you can use to educate yourself about all of the available encryption methods and their strengths and weaknesses.
Security is about letting people in or keeping them out depending on what they know, who they are, or what they have. Currently, biometrics aren't good at finding who you are, and there's always going to be problems with it (fingerprint readers for somebody who's been in a bad accident, forged fingerprints, etc.). So, actually, much of security is about obfuscating something.
Good security is about keeping the stuff you have to keep secret to a minimum. If you've got a properly encrypted AES channel, you can let the bad guys see everything about it except the password, and you're safe. This means you have a much smaller area open to attack, and can concentrate on securing the passwords. (Not that that's trivial.)
In order to do that, you have to have confidence in everything but the password. This normally means using industry-standard crypto that numerous experts have looked at. Anybody can create a cipher they can't break, but not everybody can make a cipher Bruce Schneier can't break. Since there's a thorough lack of theoretical foundations for cipher security, the security of a cipher is determined by having a lot of very smart and knowledgeable people try to come up with attacks, even if they're not practical (attacks on ciphers always get better, never worse). This means the crypto algorithm needs to be widely known. I have very strong confidence in the Advanced Encryption Standard, and almost none in a proprietary algorithm Joe wrote and obfuscated.
However, there have been problems with implementations of crypto algorithms. It's easy to inadvertently leave holes whereby the key can be found, or other mischief done. It happened with an alternate signature field for PGP, and with weaknesses in SSL as implemented on Debian Linux. It's even happened to OpenBSD, which is probably the most secure operating system readily available (I think it's up to two exploits in ten years). Therefore, these should be done by a reputable company, and I'd feel better if the implementations were open source. (Closed source won't stop a determined attacker, but it'll make it harder for random good guys to find holes to be closed.)
Therefore, if I wanted security, I'd try to have my system as reliable as possible, which means as open as possible except for the password.
Layering security by obscurity on top of an already secure system might help some, but if the system's secure it won't be necessary, and if it's insecure the best thing is to make it secure. Think of obscurity like the less reputable forms of "alternative medicine" - it is very unlikely to help much, and while it's unlikely to hurt much by itself it may make the patient less likely to see a competent doctor or computer security specialist, whichever.
Lastly, I'd like to make a completely unsolicited and disinterested plug for Bruce Schneier's blog, as nothing more than an interested reader. I've learned a lot about security from it.
One of the best ways of evaluating, testing or improving a security product is to have it banged on by a large, clever peer group.
Products that rely for their security on being a "black box" can't have the benefit of this kind of test. Of course, being a "black box" always invites the suspicion (often justified) that they wouldn't stand up to that kind of scrutiny anyway.
I argued in one case that password protection is really security through obscurity. The only security I can think of that wouldn't be STO is some sort of biometric security.
Besides that bit of semantics and nitpicking, STO (security through obscurity) is obviously bad in any case where you need real security. However, there might be cases where it doesn't matter. I'll often XOR-pad a text file I don't want anyone reading. But I don't really care if they do; I'd just prefer that it not be read. In that case, it doesn't matter, and an XOR pad is a perfect example of easily broken STO.
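As a toy illustration of that kind of XOR padding (my sketch, with a made-up key): the same function both scrambles and restores the file, which is exactly why it deters only the casual reader.

```java
public class XorPad {
    // Applying the same repeating key twice restores the original bytes,
    // which is precisely why this is obscurity rather than security.
    static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++)
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "hunter2".getBytes();
        byte[] scrambled = xor("do not read this".getBytes(), key);
        System.out.println(new String(xor(scrambled, key))); // prints the original
    }
}
```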
It is almost never a good idea. It's like asking: is it a good idea to drive without a seatbelt? Of course you can find some cases where it fits, but the answer, from experience, seems obvious.
Weak encryption will only deter the least motivated hackers, so it isn't valueless, it just isn't very valuable, especially when strong encryption, like AES, is available.
Security through obscurity is based on the assumption that you are smart and your users are stupid. If that assumption is based on arrogance and not empirical data, then your users -- and hackers -- will determine how to invoke the hidden method, bring up the unlinked page, decompile and extract the plain-text password from the .dll, etc.
That said, providing comprehensive metadata to users is not a good idea, and obscuring is a perfectly valid technique as long as you back it up with encryption, authorization, authentication, and all those other principles of security.
If the OS is Windows, look at using the Data Protection API (DPAPI). It is not security by obscurity, and is a good way to store login credentials for an unattended process. As pretty much everyone is saying here, security through obscurity doesn't give you much protection.
http://msdn.microsoft.com/en-us/library/ms995355.aspx
http://msdn.microsoft.com/en-us/library/ms998280.aspx
The one point I have to add which hasn't been touched on yet is the incredible ability of the internet to smash security through obscurity.
As has been shown time and time again, if your only defense is that "nobody knows the back door/bug/exploit is there", then all it takes is for one person to stumble across it and, within minutes, hundreds of people will know. The next day, pretty much everyone who wants to know, will. Ouch.
Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize the damages of code leaking outside your company/group?
We obviously can't block Internet access (to prevent emailing the code) because programmers need their references. We also can't block peripheral devices (USB, FireWire, etc.).
The code matters most when it has some proprietary algorithms and in-house developed knowledge (as opposed to regular routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but will take years to write from scratch. This is where leaked code will come in handy to competitors.
As far as I see it, preventing leakage relies almost entirely on human process. What do you think? What precautions and measures are you taking? And has code leakage affected you before?
You can't stop it getting out. So, two solutions: stop people from wanting to hurt you, and take legal precautions. To stop people from hating you, treat them right (saying more is probably off-topic for Stack Overflow).
I'm not a lawyer, but to give yourself legal protection, if you believe in it: patent the ideas, put a copyright notice in the code, and make sure the contracts for your programmers carefully specify intellectual property rights.
But at the end of the day, the answer is run quicker than the competition.
Unless you're working with something highly classified (and given that you can't block email and USB devices, I guess you aren't), there's really not too much damage to be had even if the source code leaks. The thing is, what is the code, or parts of it, worth without the knowledge of how it works and the organization around it?
In general the value of "source" is much less than is commonly touted; basically, the source without the people or the organization isn't worth the storage it occupies to a competitor.
Also, you're missing the most likely attack vector, and it's also the one you can't stop no matter what. If someone really, really wants to know how you made your magic, they'll try to hire your developers away, and since you can't stop them from having information inside their skulls, even if they turn in all their possessions, their knowledge and domain expertise leave with them. Basically, employee retention and trust is the only way. Sorry.
I don't know how much actual help this is going to be, but:
Don't p*ss your programmers off. Don't get them in a position where they want to give the source to a competitor. Most places undervalue their developers. Given where you are (SO), I guess you are less likely to. Nothing got to me more than seeing the sales folks out for games of golf - paid, and paid for, by the company - while we had to fight to get pizza once a month.
Really, if your direct competitors got your code today, what would it do? Is your product or vertical market that stagnant that you wouldn't release newer, better versions before they could react? Is there no room for innovation? Most companies overvalue their "proprietary algorithms and in-house developed knowledge". Sure, it may cut some time off, but it's only about 10% of the problem.
If you got all the source for all your competitors products, how much actual use would it be? I'd guess it would set you back months. Not forward. Back.
If you had a clean system, and little external/internal knowledge, how long would it take you to get your own product into a buildable state? How long would it take to drill down into the code and work out what is going on? How much time and money would you waste trying to work something out, rather than spending time and money on making your product work better?
I've actually been in the position of having all the source - 1million lines+ of code - to a competitor's product. We did nothing with it - aside from a bit of a poke-around and then delete it, which was more than I was comfortable with - but I would expect that we'd have chewed up months of time just to get to where they were then.
So we nuked it, slapped the id10t who got it (yes, a developer/PM who came over from the other company), and thought about how to make our product kick so much butt that it didn't matter what they did. Much better use of time. Worked well, too. We had differentiators, not just re-hashing the same features in the same way they did them.
Sorry, but there is no way you can stop people getting stuff out, and still be able to actually work. You can stop them wanting to do it, or make it so there is no value to them having it.
We were worried about people decompiling our code too. We stopped worrying when we realised that WE had enough trouble working out what was going on inside 500K+ lines of C#, C++ and HTML code talking to MAPI/Exchange. If someone can decompile it and work it out, then we want to hire them......
BTW, for clarity, and given who I now work for, I should point out this is not my current employer. This was quite a while ago.
The code does not leak out on its own. It takes people to take it. There are obviously some security measures you might use, like traffic analysis and a lock-down on the repositories so only authorized developers can connect.
But at the end of the day your best option is to make sure that no one WANTS to steal from you. Your team has to be happy; they have to be proud to work for you; they have to be loyal to the company and to each other. If you have such a team, it's a simple question of explaining to everyone that the code has to be protected from outsiders. It will not stop a dedicated mole, but it will prevent accidents.
P.S. And yes, proper clauses in the contracts wouldn't hurt either; at the least they will make sure that the developers are AWARE that taking code outside is morally wrong.
Follow these guidelines and it shouldn't matter if the contents of your entire source code repository is posted all over stackoverflow:
http://geocities.com/mdetting/unmaintainable.html
Oh, and show your developers that you don't trust them by blocking access to parts of the source code, scanning outgoing/incoming email etc. That is a surefire way to make them want to stay around... ...nothing improves morale like a bit of mistrust in the workplace.
Another cool way is to tell one half that they are "team A" and name the other half the untrustworthy "team B". Then reverse it and say the same thing to the "team B" members. Encourage them to keep an eye on the "bad guys" in the other team and to report any signs of disloyalty to you. Sprinkle in a few "conflict inducers" (e.g., tell "Joe": 'do you know what Ed says about you behind your back?'), etc. It works wonders if you set the developers against each other and create a few [invented-by-you] conflicts here and there...
(Eh, and no, I don't actually recommend any of the above. Just kidding. But I have seen people use all of the tactics above. And it didn't work.)
Okay, I am going to be a little practical here.
Being nice to everybody and hoping they won't hurt you doesn't work.
Every programmer knows from the day he joins a company that he'll not stay there forever. He will change when he's learned enough to get a better opportunity.
The programmers who write the code believe that they have ownership of it, even if they wrote it on time they rented out to somebody else. So many of them will usually try to get their hands on the source code, even if they don't intend to hurt anybody.
Once they leave the company and they've carried the source code with them and lost contact with their colleagues, the conscience settles down and goes on a vacation and after a while bits and pieces from the code start showing up everywhere.
That's what I KNOW happens, because I've witnessed it happen at my company.
So what does one do?
Sign an NDA which specifically mentions that the programmer WILL NOT take copies.
Distribute your product between programmers, and if possible get modules coded individually and integrated by a chief whose responsibility is to ensure that no programmer gets all the code.
At the time of termination get a written undertaking from the coders that they do not possess any IP of the company and they understand the penalties of violation.
If somebody violates your IP, sue them! No exceptions. It'll serve as an example for the present team.
Do I sound extreme?
I remember this happening to Valve when they were developing HL-2. Interesting link here: http://www.shacknews.com/onearticle.x/28619
Most of the answers are based on moral and ethical values. I wonder if Google, Facebook, etc. just rely on their employees' goodwill. Give me a break; that's totally utopian. Don't be a fool. Be realistic.
YES, it is possible to prevent code leaking:
Use a virtual server hosting virtual machines; programmers can only access these virtual machines locally (intranet) via Remote Desktop. The repository is managed locally, and private keys are required to access it. Copy/paste from the virtual machine to the client is disabled; only copy/paste from the client to the virtual machine is allowed.
Companies like Facebook do that.
The only way to still take the code is by photographing the actual screen, which is totally impractical, and since there are surveillance cameras everywhere, you would have to go to the bathroom to take those pictures.
I've worked somewhere where there was a real culture of secrecy about this sort of thing (historically there had been a number of times when the company was small where "customers" had, shall we say, abused their access to our product).
While at the top the management were very protective, I see it slightly differently. I think our code, while not entirely irrelevant, isn't as key as you'd expect it to be in a software company.
The reasons we are successful are:
1) The code is essentially the solution to a bunch of problems. If you get our code, you get those solutions, but we still have the smart people who solved those problems. They understand those problems better than you do and are better able to solve the next set of problems than you are.
2) Because they really understand the problems (and the solutions), we can do things faster than our competitors, which translates to cheaper (or more profitable).
3) Also because of those people and the attitude within the company we've delivered well to our clients and provided good support.
4) And because of that we have a good reputation and reference-able customers.
A small number of companies have code which is genuinely worth keeping secret - proprietary algorithms and that sort of thing - but for the vast majority of us our products are very easily replicable by smart people.
What I'm saying is: do the basics - write it into people's contracts that they can't take it, keep it secure and so on - but don't obsess over it. Unless you're in a very specific market, it's unlikely to be what really makes your business succeed or fail.
The best step starts with recruiting guys with strong ethical behaviour.
Various other steps can be taken, like scanning all communication. There are places where email and all outgoing information is scanned. The desktop/laptop has no hard disk (or access to it is restricted) and all work is on network folders; even when working from home, one has to be connected to the internet, and offline work gets synchronized. USB ports and drives are disabled.
Another policy is to provide access only on a need-to-know basis.
These measures will only slow things down and hinder to some extent; if someone is very determined, he will find ways to get around them.
The other way, if the code is really that important, is to have the idea legally protected by copyright (or patent).
To be honest, it's almost impossible. Here is what a company that would shortly appear on The Daily WTF might do:
Disconnect the "work computer" from the internet, but because they need internet access for reference, buy everyone a webbook.
Stuff the developers' USB slots with epoxy and require that they load/unload everything from a centralised server, which scans all the data that goes through it for code-like syntax.
Or you could just trust your employees and make them sign an NDA...
I personally never tested this on any real case, but I would suggest using code fragmentation:
basically you split your project into a number of libraries, define interfaces and unit tests for each of them, then separate the SVN repositories so that each group has access to only a limited part of your precious source code.
This is also a good practice no matter what and should help if you are outsourcing abroad.
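A minimal sketch of what that separation could look like in C - all file, library, and function names here are invented for illustration. One team codes against a public header, while the implementation lives in a repository only the owning team and the integration chief can access:

    /* license_check.h - the public interface everyone sees.
     * The implementation (license_check.c) lives in a separate
     * repository that only the licensing team and the integration
     * chief can access. All names here are hypothetical. */
    #ifndef LICENSE_CHECK_H
    #define LICENSE_CHECK_H

    /* Returns 1 if the serial is valid, 0 otherwise. */
    int license_check(const char *serial);

    #endif

    /* A different team codes against the interface without ever
     * seeing the implementation: */
    #include <stdio.h>

    int main(void)
    {
        /* linked against a library delivered by the licensing team */
        printf("%s\n", license_check("ABCD-1234") ? "valid" : "invalid");
        return 0;
    }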
The previous answers all seem to center on building trust and employing ethical people.
Another possibility might be to create your own domain-specific language and tools. That will make any leaked code harder to use. It might still be possible to steal useful ideas from it, but it would not be possible to simply compile a competing product unless the whole toolchain is leaked.
Trust your developers. People tend to live up or down to expectations. Treat them well, and remember that loyalty goes both ways. After all, if you can't cut off thumb drives, you can't stop anybody from leaking code, no matter how much you don't trust them.
That being said, find yourself a lawyer with trade secret expertise, probably expertise in other parts of IP law, and ask how to legally safeguard stuff. You do want to make sure that, if a competitor gets your stuff, it's not legal for the competitor to benefit from it.
My question is pretty straightforward: You are an executable file that outputs "Access granted" or "Access denied" and evil persons try to understand your algorithm or patch your innards in order to make you say "Access granted" all the time.
After this introduction, you might be wondering what I am up to. Is he going to crack Diablo 3 once it is out? I can allay your worries: I am not one of those crackers. My goal is crackmes.
Crackmes can be found on - for example - www.crackmes.de. A crackme is a little executable that (most of the time) contains a little algorithm to verify a serial and outputs "Access granted" or "Access denied" depending on the serial. The goal is to make this executable output "Access granted" for any input. The methods you are allowed to use might be restricted by the author - no patching, no disassembling - or might involve anything you can do with a binary, objdump and a hex editor. Cracking crackmes is definitely part of the fun; however, as a programmer, I am wondering how you can create crackmes that are genuinely difficult.
Basically, I think a crackme consists of two major parts: the serial verification itself and the surrounding code.
Making the serial verification hard to follow in raw assembly is very possible; for example, I have the idea of feeding the serial to a simulated microprocessor that must end up in a certain state for the serial to be accepted. Alternatively, one could take the easy route and rely on cryptographically strong methods to secure this part. Either way, making this piece hard enough that the attacker gives up and tries to patch the executable instead should not be that hard.
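To make the microprocessor idea concrete, here is a toy sketch in C - the opcodes, the bytecode program, and the accept condition are all invented for illustration. The serial drives a tiny bytecode machine, and only input that leaves the accumulator in the right final state is accepted:

    #include <stdio.h>
    #include <string.h>

    enum { OP_LOADCH, OP_ADD, OP_XOR, OP_HALT };

    /* Run the bytecode program against the serial; return the
     * accumulator when the machine halts. */
    static int run(const unsigned char *prog, const char *serial)
    {
        int acc = 0, pc = 0;
        size_t len = strlen(serial);
        for (;;) {
            switch (prog[pc++]) {
            case OP_LOADCH:                 /* acc = serial[operand] */
                acc = ((size_t)prog[pc] < len) ? serial[prog[pc]] : 0;
                pc++;
                break;
            case OP_ADD: acc += prog[pc++]; break;   /* acc += operand */
            case OP_XOR: acc ^= prog[pc++]; break;   /* acc ^= operand */
            case OP_HALT: return acc;
            }
        }
    }

    int main(void)
    {
        /* The machine must halt with 0x13, i.e. (serial[0] + 7) ^ 0x5A
         * == 0x13, which only a serial starting with 'B' satisfies. */
        const unsigned char prog[] =
            { OP_LOADCH, 0, OP_ADD, 7, OP_XOR, 0x5A, OP_HALT };
        char serial[64] = "";
        if (fgets(serial, sizeof serial, stdin) && run(prog, serial) == 0x13)
            puts("Access granted");
        else
            puts("Access denied");
        return 0;
    }

A real crackme would of course use a much longer program and a less obvious machine; the point is that the reverser has to understand the whole virtual machine before the check means anything.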
However, the more difficult part is securing the binary itself. Let us assume a perfectly secure serial verification that cannot be reversed (of course I know it can be reversed; if in doubt, you rip the verification out of the binary and throw random serials at it until one is accepted). How can we prevent an attacker from simply overwriting jumps in the binary to make it accept anything?
I have been searching on this topic a bit, but most results on binary security, self-verifying binaries and such end up in articles about preventing attacks on an operating system via compromised binaries, e.g. by signing certain binaries and validating those signatures in the kernel.
My thoughts currently consist of:
checking that explicit locations in the binary contain the expected jump instructions.
checksumming parts of the binary and comparing checksums computed at runtime against the stored values (see the sketch after this list).
having positive and negative runtime checks for your functions in the code, with side effects on the serial verification. :)
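Here is a minimal sketch of the checksumming point above, with the caveats spelled out in the comments: the expected sum (EXPECTED_SUM, a placeholder) has to be patched into the binary after linking, and reading a function's own bytes relies on non-portable assumptions about function layout.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define EXPECTED_SUM 0xDEADBEEFu  /* placeholder: patch in the real
                                         value after linking */

    /* The function we want to protect from patching. */
    int check_serial(const char *s) { return strcmp(s, "SECRET-42") == 0; }

    /* End marker; assumes the compiler emits functions in source order,
     * which is NOT guaranteed - real implementations use linker sections. */
    void check_serial_end(void) {}

    /* Simple rolling hash over a code range. */
    static uint32_t sum_code(const void *begin, const void *end)
    {
        const unsigned char *p = begin;
        const unsigned char *q = end;
        uint32_t h = 0;
        while (p < q)
            h = h * 31 + *p++;
        return h;
    }

    int main(void)
    {
        /* Casting function pointers to data pointers is non-portable,
         * but it is the usual trick in this kind of check. */
        uint32_t h = sum_code((const void *)check_serial,
                              (const void *)check_serial_end);
        if (h != EXPECTED_SUM) {
            puts("Access denied");    /* someone patched check_serial */
            return 1;
        }
        puts(check_serial("SECRET-42") ? "Access granted" : "Access denied");
        return 0;
    }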
Can you think of more ways to annoy a possible attacker for longer? (Of course, you cannot keep him away forever; at some point all checks will be broken - unless you manage to break a checksum generator by embedding the correct checksum for a program in the program itself, hehe.)
You're getting into "anti-reversing techniques", and it's basically an art. Worse, even if you stomp the newbies, there are anti-anti-reversing plugins for OllyDbg and IDA Pro that they can download to bypass most of your countermeasures.
Countermeasures include debugger detection - trapping debugger APIs or detecting single-stepping - and inserting code that, after detecting a debugger break-in, continues to function but starts acting up at random times much later in the program. It's really a cat-and-mouse game, and the crackers have a significant upper hand.
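As one concrete example of such a countermeasure, here is the classic Linux self-trace check - one of many, and exactly the kind of thing those anti-anti-reversing plugins defeat. (On Windows, a comparably quick check is IsDebuggerPresent().)

    #include <stdio.h>
    #include <sys/ptrace.h>

    /* Linux-only sketch. A process can be traced by at most one tracer,
     * so if we cannot trace ourselves, a debugger is already attached. */
    static int being_debugged(void)
    {
        return ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1;
    }

    int main(void)
    {
        if (being_debugged()) {
            /* Per the advice above: rather than exiting here, a real
             * protection would set a flag and misbehave much later. */
            fprintf(stderr, "debugger detected\n");
            return 1;
        }
        puts("running normally");
        return 0;
    }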
Check out...
http://www.openrce.org/reference_library/anti_reversing - Some of what's out there.
http://www.amazon.com/Reversing-Secrets-Engineering-Eldad-Eilam/dp/0764574817/ - This book has really good anti-reversing info and steps through the techniques. A great place to start if you're getting into reversing in general.
I believe these things are generally more trouble than they're worth.
You spend a lot of effort writing code to protect your binary. The bad guys spend less effort cracking it (they're generally more experienced than you) and then release the crack so everyone can bypass your protection. The only people you'll annoy are those honest ones who are inconvenienced by your protection.
Just view piracy as a cost of business - the incremental cost of pirated software is zero if you ensure all support is done only for paying customers.
There's TPM technology: TPM on Wikipedia.
It allows you to store the cryptographic checksums of a binary on a special chip, which can act as one-way verification.
Note: TPM has a bit of a bad rap because it can be used for DRM. But to experts in the field that's somewhat unfair, and there's even an open-TPM group allowing Linux users to control exactly how their TPM chip is used.
One of the strongest solutions to this problem is Trusted Computing. Basically, you would encrypt the application and transmit the decryption key to a special chip (the Trusted Platform Module). The chip would only decrypt the application once it has verified that the computer is in a "trusted" state: no memory viewers/editors, no debuggers, etc. Basically, you would need special hardware just to be able to view the decrypted program code.
So, you want to write a program that accepts a key at the beginning and stores it in memory, subsequently retrieving it from disk. If it's the correct key, the software works; if it's the wrong key, the software crashes. The goal is to make it hard for pirates to generate a working key and hard to patch the program to work with an unlicensed key.
This can actually be achieved without special hardware. Consider our genetic code. It works based on the physics of this universe. We try to hack it, create drugs, etc., and we fail miserably, usually creating tons of undesirable side effects, because we haven't yet fully reverse engineered the complex "world" in which the genetic "code" evolved to operate. Basically, if you're running everything on a common processor (a common "world") which everyone has access to, then it's virtually impossible to write such secure code, as demonstrated by current software being so easily cracked.
To achieve security in software, you essentially would have to write your own sufficiently complex platform, which others would have to completely and thoroughly reverse engineer in order to modify the behavior of your code without unpredictable side effects. Once your platform is reverse engineered, however, you'd be back to square one.
The catch is, your platform is probably going to run on common hardware, which makes your platform easier to reverse engineer, which in turn makes your code a bit easier to reverse engineer. Of course, that may just mean the bar is raised a bit for the level of complexity required of your platform to be sufficiently difficult to reverse engineer.
What would a sufficiently complex software platform look like? For example, perhaps after every 6 addition operations, the 7th addition returns the result multiplied by PI divided by the square root of the log of the modulus 5 of the difference between the total number of subtract and multiply operations performed since system initialization. The platform would have to keep track of those numbers independently, as would the code itself, in order to decode correct results. So, your code would be written based on knowledge of the complex underlying behavior of a platform you engineered. Yes, it would eat processor cycles, but someone would have to reverse engineer that little surprise behavior and re-engineer it into any new code to have it behave properly. Furthermore, your own code would be difficult to change once written, because it would collapse into irreducible complexity, with each line depending on everything that happened prior. Of course, there would be much more complexity in a sufficiently secure platform, but the point is that someone would have to reverse engineer your platform before they could reverse engineer and modify your code without debilitating side effects.
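A toy version of that idea in C, with the warp formula simplified from the PI/log example purely for illustration: all arithmetic is routed through a stateful ALU whose hidden counters silently change the result of every 7th addition.

    #include <stdio.h>

    static unsigned add_count;     /* additions since initialization        */
    static long     sub_mul_delta; /* #subtractions minus #multiplications  */

    static long alu_add(long a, long b)
    {
        long r = a + b;
        if (++add_count % 7 == 0)             /* every 7th addition...     */
            r ^= sub_mul_delta * 0x5bd1e995L; /* ...is silently warped     */
        return r;
    }

    static long alu_sub(long a, long b) { sub_mul_delta++; return a - b; }
    static long alu_mul(long a, long b) { sub_mul_delta--; return a * b; }

    int main(void)
    {
        long x = alu_sub(10, 3);      /* bumps the hidden counter */
        for (int i = 1; i <= 8; i++)
            x = alu_add(x, i);        /* the 7th addition is warped */
        x = alu_mul(x, 2);
        printf("%ld\n", x);           /* meaningless unless you know the ALU state */
        return 0;
    }

Code written against this platform only produces correct results if the author accounted for the hidden counters, which is exactly the irreducible-complexity property described above.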
Great article on copy protection and on protecting the protection: Keeping the Pirates at Bay: Implementing Crack Protection for Spyro: Year of the Dragon.
The most interesting idea mentioned in there that hasn't yet come up here is cascading failures: you have checksums that modify a single byte, which causes another checksum to fail. Eventually one of the checksums causes the system to crash or do something strange. This makes attempts to pirate your program look like instability and makes the cause occur a long way from the crash.
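A minimal sketch of how such a cascade could be wired up (all names, values, and triggers invented for illustration): the first check never reacts visibly; it silently corrupts the second check's input, and the crash surfaces much later, far from the patched code.

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t region_b_seed = 42;  /* check B validates this          */
    static uint8_t damage_key    = 7;   /* game data is decoded with this  */

    static void check_a(uint32_t code_sum)
    {
        if (code_sum != 0x1234u)        /* the code was patched...          */
            region_b_seed ^= 0x55;      /* ...so quietly corrupt B's input  */
    }

    static void check_b(void)
    {
        if (region_b_seed != 42)        /* poisoned by check A...           */
            damage_key = 0;             /* ...so poison the game data too   */
    }

    int main(void)
    {
        check_a(0x9999u);               /* simulate a patched binary        */
        /* ... unrelated code runs for a long time ... */
        check_b();
        /* ... much later, gameplay decodes a value with the key; in the
         * patched case this divides by zero and crashes, nowhere near
         * the code the cracker actually touched. */
        printf("damage: %d\n", 350 / damage_key);
        return 0;
    }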