What protection mechanisms are used to protect binaries from decompilation in commercial software?

I'm wondering to what extent one can recover source code from binaries and what can be done to protect their content. In particular, I'm interested in whether expensive commercial software, e.g. CAD or ERP systems, uses such protection.
I know that disassembly can give you assembler code, and from that you can get an idea of what the original C or C++ code looked like. I also know that there will be no function or variable names, so data structures will look very different.
But are additional protection mechanisms applied to expensive software?

In practice you can't do much with the disassembled code of a large application, so only the parts that are genuinely worth hiding get any protection effort.
That can be reduced to:
Algorithms (even though many algorithms come from academic research, you don't want your competition to know which one you're using).
DRM enforcement related code.
Embedded devices, where you don't deal with consumer piracy but with hardware clones.
Malware and backdoors.
If you control the hardware running the code in some way (as with a UEFI chain of trust, or with embedded devices), then to some extent you can control how the code is used. In that case, encrypting the code is an option, with the keys stored somewhere on the hardware that is difficult to access.
Otherwise, the only deterrent you have is code obfuscation.

Related

Using built-in functions

I am developing a Windows Forms application in C#. I have heard that one should not use built-in methods and functions in code, since hackers have a deep understanding of such built-in methods and know how to make them fail; instead, one should always write one's own functions and methods, or at least call the built-in ones intelligently from those newly written functions. How much of that is true?
A supporting example in favour of this claim: I have seen developers write their own encryption and hash algorithms in place of AES, DES, RC4 and standard hash functions, because they believe the built-in algorithms often have backdoors in them.
What?! No, no, no! Whoever told you this is just wrong.
There is a common fallacy that published source code is more vulnerable to "h4ckerz" because it is available for anyone to spot the flaws in. However, I'm glad you mentioned crypto, because this is an area where this line of reasoning really stands out as the fallacy it is.
One of the most popular questions of all time on https://security.stackexchange.com/ is about a developer (in the OP he was given the pseudonym "Dave") who shared this fear of published code. Dave, like the developer you saw, was trying to homebrew his own encryption algorithm. Here's one of the most popular comments in that thread:
Dave has a fundamentally false premise, that the security of an algorithm relies on (even partially) its obscurity - that's not the case. The security of a hashing algorithm relies on the limits of our understanding of mathematics, and, to a lesser extent, the hardware ability to brute-force it. Once Dave accepts this reality (and it really is reality, read the Wikipedia article on hashing), it's a question of who is smarter - Dave by himself, or a large group of specialists devoted to this very particular problem. (emphasis added)
As a matter of fact, as it stands now, the top two memes on Security.SE are "Don't roll your own" and "Don't be a Dave".
While this has all been about crypto, it applies in general to most open-source software. The chance that a backdoor will get found and fixed goes up with each new set of eyes laid on the code. This should be a simple and uncontroversial premise: the more people are looking for something, the higher the chance it will be found. Yes, this applies to malicious users looking for exploits. However, it also applies to power users, white hat hackers, security researchers, cryptographers, professional developers, and others working for "good", who generally (hopefully) outnumber those working for "evil".

The fear also implicitly relies on the false premise that hackers need to see the source code to do bad things. This should be obviously false given the sheer number of proprietary systems whose source code has never been published (various Microsoft and Adobe programs come to mind) which have been inundated with vulnerabilities for years. Maybe having source code to read makes the hacker's job easier, but maybe not -- is it easier to pore over source code looking for an attack vector, or to just run scanning tools and scripts against a compiled binary?
tl;dr Don't be a Dave. Rolling your own means you have to be the best at everything to succeed, instead of taking a sampling of the best the community has to offer.
Heartbleed
In your comment, you rebut:
Then why was the Heartbleed bug in openSSL not found and corrected [earlier]?
Because no one was looking at it. That's the sad truth. Here's the difference -- what happened once someone did find it? Now tens of thousands of security researchers, crypto experts, and others are looking at it. Suppose the same kind of vulnerability existed in one of the proprietary products I mentioned earlier, which it very well could. Once it's caught (if it's caught), ask yourself:
Could the team of programmers at the company responsible benefit from the help of the entire worldwide community of security experts, cryptographers, and other analysts right now?
If a bug this critical were discovered (and that's a big if!) in your software, would you be prepared to deal with the fallout caused by your custom implementation?
Unless you know of specific failure modes or weaknesses of the built-in methods your application would use and know how to minimize or eliminate them, it is probably better to use the methods provided by the language or library designers, which will often be both more efficient and more secure than what an average programmer would come up with on the fly for a particular project.
Your example absolutely does not support your view: developing your own encryption algorithm without some serious background in the domain and review by cryptanalysts, and then employing it in security-critical code, is a recipe for disaster. Even developing your own custom implementation of an industry standard encryption algorithm can present problems, and almost certainly will if you are inexperienced at it.
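To make the point concrete, here is a minimal sketch in C (the question is about C#, but the same principle applies there with the framework's built-in crypto classes). It assumes OpenSSL is available and uses its vetted EVP interface to compute a SHA-256 digest instead of a homebrew hash; the input string is purely illustrative.

    /* Sketch: hash data with a vetted library primitive (OpenSSL assumed
       installed), rather than with a homebrew algorithm. */
    #include <openssl/evp.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *msg = "example input";               /* illustrative data */
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int len = 0;

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);      /* reviewed algorithm */
        EVP_DigestUpdate(ctx, msg, strlen(msg));
        EVP_DigestFinal_ex(ctx, digest, &len);
        EVP_MD_CTX_free(ctx);

        for (unsigned int i = 0; i < len; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }

The point is not this particular API, but that the primitive behind it has had thousands of expert eyes on it, which no in-house design can match.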

Protecting shared library

Is there any way to protect a shared library (.so file) against reverse engineering?
Is there any free tool to do that?
The obvious first step is to strip the library of all symbols, except the ones required for the published API you provide.
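A minimal sketch of that first step, assuming GCC or Clang on Linux (the file and function names are made up): compile with -fvisibility=hidden so nothing is exported by default, mark only the published API as visible, and strip the rest.

    /* widget.c -- hypothetical library source
       Build:  gcc -fvisibility=hidden -fPIC -shared -o libwidget.so widget.c
               strip --strip-unneeded libwidget.so                          */
    #define API __attribute__((visibility("default")))

    /* internal helper: with -fvisibility=hidden it never reaches the dynamic
       symbol table, and strip removes the local symbol as well */
    static int internal_helper(int x)
    {
        return x * 3;
    }

    /* published API: explicitly exported, so its symbol survives */
    API int widget_frob(int x)
    {
        return internal_helper(x) + 1;
    }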
The "standard" disassembly-prevention techniques (such as jumping into the middle of instruction, or encrypting some parts of code and decrypting them on-demand) apply to shared libraries.
Other techniques, e.g. detecting that you are running under debugger and failing, do not really apply (unless you want to drive your end-users completely insane).
Assuming you want your end-users to be able to debug the applications they are developing using your library, obfuscation is a mostly lost cause. Your efforts are really much better spent providing features and support.
Reverse engineering protection comes in many forms; here are just a few:
Detecting reversing environments, such as being run in a debugger or a virtual machine, and aborting. This prevents an analyst from figuring out what is going on, and is usually used by malware. A common trick is to run undocumented instructions that behave differently in VMware than on a real CPU (a minimal debugger-detection sketch in C appears after this list).
Formatting the binary so that it is malformed, e.g. missing ELF sections. You're trying to prevent normal analysis tools from being able to open the file. On Linux, this means doing something that libbfd doesn't understand (but other libraries like capstone may still work).
Randomizing the binary's symbols and code blocks so that they don't look like what a compiler would produce. This makes decompiling (guessing at the original source code) more difficult. Some commercial programs (like games) are deployed with this kind of protection.
Polymorphic code that changes itself on the fly (e.g. decompresses into memory when loaded). The most sophisticated ones are designed for use by malware and are called packers (avoid these unless you want to get flagged by anti-malware tools). There are also benign open-source ones like UPX (http://upx.sourceforge.net/), which even provides a tool to undo the packing.
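For the debugger-detection item above, a minimal Linux-only sketch: a process can have only one tracer, so if ptrace(PTRACE_TRACEME) fails, something like gdb or strace is already attached. (As noted, this mostly just annoys legitimate users and is trivially patched out.)

    #include <stdio.h>
    #include <sys/ptrace.h>

    int main(void)
    {
        /* only one tracer per process: failure here means a debugger
           (gdb, strace, ...) is already attached */
        if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
            fprintf(stderr, "debugger detected\n");
            return 1;
        }
        puts("no debugger attached (or the check was patched out)");
        return 0;
    }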

Are Harvard architecture computers immune to arbitrary code injection and execution attacks?

Harvard architecture computers have separate code and data memories. Does this make them immune to code injection attacks (as data cannot be executed as code)?
They are somewhat more immune than Von Neumann architectures, but not entirely. Every architecture has a conversion point where data starts being treated as code. In a Von Neumann machine it happens immediately inside the CPU, while in a Harvard machine it happens before the memory is reserved and declared for the module (or sometimes even earlier, when a file is being prepared by the build system). This means that on a Harvard architecture a successful code injection attack needs to be a bit more complicated and far-reaching, but it is not necessarily impossible.
If one can place a file containing malicious code in the machine's storage (e.g. the file system) and then cause, say, a buffer overrun that redirects a return into existing (valid, non-malicious) code which loads this malicious file as code, and the architecture allows that file to start executing (e.g. via a self-initialization routine), that is an example of successful code injection.
It partly depends on what you count as a "code injection attack".
Take a SQL injection attack, for example. The SQL query itself would never need to be in an executable part of memory, because it's converted into native code (or interpreted, or whatever terminology you wish to use) by the database engine. However, that SQL could still be broadly regarded as "code".
If you only include an attacker inserting native code to be executed directly by the processor (e.g. via a buffer overrun), and if the process is prevented from copying data into a "code area", then it provides protection against this sort of attack, yes. (I'm reluctant to claim 100% protection even if I can't think of any attack vectors; it sounds foolproof, but security's a tricky business.)
Apparently, there are some researchers who were able to accomplish a permanent code injection attack on a Harvard architecture. So maybe not as secure as people thought.
Most Harvard architecture machines still use a common shared memory space for both data and instructions outside of the core. So, it would still be possible to inject code and get it executed as instructions. In fact, most processors today are internally Harvard architecture, even if they look Von Neumann externally.
My university had an MS defense recently that discussed this very thing. Unfortunately, I wasn't able to attend. I'm sure if you contacted Mr. Watts he'd be willing to discuss it.
http://www.cs.uidaho.edu/Defenses/KrisWatts.html
x86 has a segmentation architecture that does this, and it has been used by some projects to try to stop data from being executed as code (an effort which is now mostly wasted given the NX bit), and it never came close to stemming the flow of new exploits. Consider the amazing number of remote file inclusions still exploitable in the wild.

Preventing copy protection circumvention

Anyone visiting a torrent tracker is sure to find droves of "cracked" programs ranging from simple shareware to software suites costing thousands of dollars. It seems that as long as the program does not rely on a remote service (e.g. an MMORPG) that any built-in copy protection or user authentication is useless.
Is it effectively not possible to prevent a cracker from circumventing the copy protection? Why?
No, it's not really possible to prevent it. You can make it extremely difficult - some StarForce versions apparently accomplished that, at the expense of seriously pissing off a number of "users" (victims might be more accurate).
Your code is running on their system and they can do whatever they want with it. Attach a debugger, modify memory, whatever. That's just how it is.
Spore appears to be an elegant example of where draconian efforts in this direction not only totally failed to prevent it from being shared around P2P networks etc., but also significantly harmed the image of the product and almost certainly the sales.
Also worth noting that users may need to crack copy protection for their own use; I recall playing Diablo on my laptop some years back, which had no internal optical drive. So I dropped in a no-CD crack and was then entertained for several hours on a long plane flight. Forcing that kind of check, and hence forcing users to work around it, is a misfeature of the stupidest kind.
It is impossible to stop it without breaking your product. The proof:
Given: The people you are trying to prevent from hacking/stealing will inevitably be much more technically sophisticated than a large portion of your market.
Given: Your product will be used by some members of the public.
Given: Using your product requires access to its data on some level.
Therefore, you have to release your encryption key / copy-protection method / program data to the public in enough of a fashion that the data can be seen in its usable, unencrypted form.
Therefore, you have in some fashion made your data accessible to pirates.
Therefore, your data will be more easily accessible to the hackers than to your legitimate audience.
Therefore, ANYTHING past the most simplistic protection method will end up treating your legitimate audience like pirates and alienating them.
Because it's a fixed defense against a thinking opponent.
The military theorists beat this one to death how many millennia ago?
Copy-protection is like security -- it's impossible to achieve 100% perfection but you can add layers that make it successively more difficult to crack.
Most applications have some point where they ask (themselves), "Is the license valid?" The cracker just needs to find that point and alter the compiled code to return "yes". Alternatively, crackers can use brute force to try different license keys until one works. There are also social factors -- once one person buys the tool, they might post a valid license code on the Internet.
So, code obfuscation makes it more difficult (but not impossible) to find the code to alter. Digital signing of the binaries makes it more difficult to change the code, but still not impossible. Brute-force methods can be combated with long license codes with lots of error-correction bits. Social attacks can be mitigated by requiring a name, email, and phone number that is part of the license code itself. I've used that method to great effect.
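To illustrate how thin that single decision point is, here is a deliberately naive sketch in C (the key format and "secret" condition are made up). The whole defense boils down to one conditional branch; a cracker only has to find the corresponding jump in the binary and invert it.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* hypothetical check: real products use long keys with error-correction bits */
    static bool license_is_valid(const char *key)
    {
        unsigned sum = 0;
        for (const char *p = key; *p; p++)
            sum += (unsigned char)*p;
        return sum % 97 == 11;          /* the "secret" condition */
    }

    int main(void)
    {
        char key[64];
        printf("Enter license key: ");
        if (!fgets(key, sizeof key, stdin))
            return 1;
        key[strcspn(key, "\n")] = '\0';

        /* this one branch is the entire protection */
        if (license_is_valid(key))
            puts("Access granted");
        else
            puts("Access denied");
        return 0;
    }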
Good luck!
Sorry to bust in on an ancient thread, but this is what we do for a living and we're really really good at it. It's all we do. So some of the information here is wrong and I want to set the record straight.
Theoretically uncrackable protection is not only possible, it's what we sell. The basic model the major copy-protection vendors (including us) follow is to encrypt the exe and dlls and use a secret key to decrypt them at runtime.
There are three components:
Very strong encryption: we use 128-bit AES, which is effectively immune to brute-force attack. Some day, when quantum computers are common, it might become possible to break it, but it's unreasonable to assume anyone will crack encryption of this strength to copy software, as opposed to national secrets.
Secure key storage: if a cracker can get the key to the encryption, you're hosed. The only way to GUARANTEE a key can't be stolen is to store it on a secure device. We use a dongle (it comes in many flavors but the OS always just sees it as a removable flash drive). The dongle stores the key on a smart card chip which is hardened against side channel attacks like DPA. The key generation is tied to multiple factors which are non-deterministic and dynamic so no single key/master crack is possible. The communication between the key storage and the runtime on the computer is also encrypted so a man-in-the-middle attack is thwarted.
Debugger detection: Basically you want to stop a cracker from taking a snapshot of memory (after decryption) and making an executable out of that. Some of the stuff we do to prevent this is secret, but in general we allow for debugger detection and lock the license when a debugger is present (this is an optional setting). We also never completely decrypt the entire program in memory so you can never get all the code by "stealing" memory.
We have a full-time cryptologist who can crack just about anybody's protection system. He spends all his time studying how to crack software so we can prevent it. And so you don't think this is just a cheap shill for what we do: we're not unique; other companies such as SafeNet and Arxan Technologies can provide very strong protection as well.
A lot of software-only or obfuscation schemes are easy to crack, since the cracker can just identify the program entry point and branch around any license checking or other stuff the ISV has put in to try to prevent piracy. Some products, even with dongles, will throw up a dialog when the license isn't found -- setting a breakpoint on that error gives the cracker a nice place in the assembly code to apply a patch. Again, this requires unencrypted machine code to be available -- something you don't get if you strongly encrypt the .exe.
One last thing: I think we're unique in that we've had several open contests where we provided a system to people and invited them to crack it. We've had some pretty hefty cash prizes but no one has yet cracked our system. If an ISV takes our system and implements it incorrectly it's no different from putting a great padlock on your front door attached to a cheap hasp with wood screws--easy to circumvent. But if you use our tools as we suggest we believe your software cannot be cracked.
HTH.
The difference between security and copy-protection is that with security, you are protecting an asset from an attacker while allowing access by an authorized user. With copy protection, the attacker and the authorized user are the same person. That makes perfect copy protection impossible.
I think given enough time a would-be cracker can circumvent any copy-protection, even ones using callbacks to remote servers. All it takes is redirecting all outgoing traffic through a box that will filter those requests, and respond with the appropriate messages.
On a long enough timeline, the survival rate of copy protection systems is 0. Everything is reverse-engineerable with enough time and knowledge.
Perhaps you should focus on ways of making your software more attractive in its real, registered, uncracked version. Superior customer service, perks for registration, etc. reward legitimate users.
Basically, history has shown us that the most you can buy with copy protection is a little time. Fundamentally, since there is data you want someone to be able to see, there is a way to get at that data; and since there is a way, someone can exploit it to get at the data.
The only thing that any copy protection or encryption for that matter can do is make it very hard to get at something. If someone is motivated enough there is always the brute force way of getting around things.
But more importantly, in the computer software space we have tons of tools that let us see how things are working, and once you understand how the copy protection works, it's a very simple matter to get what you want.
The other issue is that copy protection for the most part just frustrates the users who are paying for your software. Take a look at the open-source model: they don't bother, and some folks are making a ton of money while encouraging people to copy their software.
"Trying to make bits uncopyable is like trying to make water not wet." -- Bruce Schneier
Copy protection and other forms of digital restrictions management are inherently breakable, because it is not possible to make a stream of bits visible to a computer while simultaneously preventing that computer from copying them. It just can't be done.
As others have pointed out, copy protection only serves to punish legitimate customers. I have no desire to play Spore, but if I did, I'd likely buy it but then install the cracked version because it's actually a better product for its lack of the system-damaging SecuROM or property-depriving activation scheme.
Why?
You can buy the most expensive safe in the world and use it to protect something. Once you give away the combination to open the safe, you have lost your security.
The same is true for software: if you want people to use your product, you must give them the ability to open the proverbial safe and access its contents; obfuscating how the lock opens doesn't help, because you have already granted them the ability to open it.
You can either trust your customers/users, or you can waste inordinate amounts of time and resource trying to defeat them instead of providing the features they want to pay for.
It just doesn't pay to bother. Really. If you don't protect your software, and it's good, undoubtedly someone will pirate it. The barrier will be low, of course. But the time you save from not bothering will be time you can invest in your product, marketing, customer relationships, etc., building your customer base for the long term.
If you do spend the time on protecting your product instead of developing it, you'll definitely reduce piracy. But now your competitors may be able to develop features that you didn't have time for, and you may very well end up selling less, even in the short term.
As others point out, you can easily end up frustrating real and legitimate users more than you frustrate the crooks. Always keep your paying users in mind when you develop a circumvention technique.
If your software is wanted, you have no hope against the army of bored 17-year-olds. :)
In the case of personal copying/non-commercial copyright infringement, the key factor would appear to be the relationship between the price of the item and the ease of copying it. You can increase the difficulty to copy it, but with diminishing returns as highlighted by some of the previous answers. The other tack to take would be to lower the price until even the effort to download it via bittorrent is more cumbersome than simply buying it.
There are actually many successful examples where an author has found a sweet spot of pricing that has certainly resulted in a large profit for themselves. Chasing 100% prevention of unauthorized copying is a lost cause; you only need a large enough group of customers willing to pay instead of downloading illegally. The very thing that makes pirating software inexpensive is also what makes it inexpensive to publish software.
There's an easy way; I'm amazed it hasn't been mentioned in the answers above.
Move the copy protection to a secured area (that is, your server in your secure lab).
Your server receives a random number from each client (and checks that the number wasn't used before), encrypts some ever-evolving binary code or computation results with the client's number and your private key, and sends it back.
No hacker can circumvent this since they don't have access to your server code.
What I'm describing is basically a web service over SSL, which is where most companies are going nowadays.
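A minimal sketch of the server-side idea, in C with OpenSSL, using an HMAC in place of the private-key signature described above (the key, buffer sizes and request handling are all made up): the server checks that the nonce is fresh, computes the licensed-only result, and tags it with a keyed MAC that a client -- or a cracker -- cannot produce offline.

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <string.h>

    /* hypothetical server-side secret; it never ships inside the client */
    static const unsigned char SERVER_KEY[] = "server-only-secret";

    /* Answer one request: nonce-freshness check elided, result assumed small
       enough to fit in buf together with the nonce. */
    static void answer_request(const unsigned char *nonce, size_t nonce_len,
                               const char *result,
                               unsigned char tag[EVP_MAX_MD_SIZE],
                               unsigned int *tag_len)
    {
        unsigned char buf[256];
        size_t n = 0;

        memcpy(buf, nonce, nonce_len);            n += nonce_len;
        memcpy(buf + n, result, strlen(result));  n += strlen(result);

        /* only the holder of SERVER_KEY can produce a valid tag over
           (nonce || result), so responses can't be forged or replayed */
        HMAC(EVP_sha256(), SERVER_KEY, (int)sizeof SERVER_KEY,
             buf, n, tag, tag_len);
    }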
Cons: a competitor may develop an offline version of an equally featured product in the time it takes you to finish your crypto code.
On protections that don't require network:
According to notes floating around, it took two years to crack a popular application that used a scheme similar to the one described in John's answer (custom hardware dongle protection).
Another scheme, which doesn't involve a dongle, is "expansive protection". I coined the term just now, but it works like this: there's an application that saves user data and for which users can buy expansions and the like from third parties. When the user loads saved data or a new expansion, the expansions and the saved data also contain code that performs checks. And of course these checks are themselves protected by checksum checks. It's not as secure on paper as the other scheme, but in practice this application has only ever been half-cracked: the cracked copies mostly function as trials, because the cracks always miss some checks and would have to patch the expansions as well.
The key point is that, while these can be cracked, if enough software vendors used such schemes it would overwork the few people in the warez scene who are willing to dedicate themselves to them. If you do the maths, the protections don't even have to be that great; as long as enough vendors used custom protections that changed constantly, it would simply overwhelm the crackers and the warez scene would end then and there. *
The only reason this hasn't happened is that publishers buy a single protection that they use everywhere, making it a huge target, just as Windows is a target for malware: any protection used in more than a single app is a bigger target. So everyone would need to build their own custom, unique, multi-layered, expansive protection. The number of warez releases would drop to maybe a dozen per year if it took the very best crackers months to crack a single release.
Now for some theorycrafting in marketing software:
If you believe that warez provides worthwhile marketing value, then that should be factored into the business plan. This could entail a very (perhaps too) basic lite version that still costs a few dollars, to ensure it gets cracked. Then you'd hook in users with regular "limited-time cheap upgrade from the lite version" offers and other upselling tactics. The lite version should really have at most one buy-worthy feature and otherwise be very crippled. The price should probably be under $10. The full version should probably cost about twice the upgrade price from the $10 lite pay-demo version: e.g. if the full version is $80, you'd offer upgrades from the lite version to the full version for $40, or something that really seems like a killer bargain. Of course, you'd avoid revealing these bargains to purchasers who went straight for the $80 edition.
It would be critical that the full version share no code with the lite version. The intent is that the lite version gets warezed, while the full version is either time-intensive to crack or depends on network functionality that is hard to mimic locally. Crackers are probably more specialized in cracking than in coding up or replicating parts of the functionality that the application keeps on the web server.
* Addendum: for apps and games the scene might end under such an unlikely and theoretical circumstance; for other things like music and movies, and in practice, I'd look at making it cheap for digital-download buyers to get additional collectible physical items or online-only value. Many people are collectors of stuff (especially the pirates), and they could be enticed into buying if the purchase gains them something desirable enough over a bare digital copy.
Beware, though - there's something called "the law of rising expectations". An example from games: the Ultima 4-6 standard boxes included a map made of cloth, while the Skyrim Collector's Edition has a map made of paper. Expectations had risen, and some people aren't going to be happy with a paper map. You want to either keep the quality of the product or service constant, or manage expectations ahead of time. I believe this is critical when considering these value-add items, as you want them to be desirable but not increasingly expensive to make, and not to turn into something that seems so worthless it defeats the purpose.
This is one occasion where quality software is a bad thing: if no one wants your software, then no one will spend time trying to crack it; on the other hand, things like Adobe's Master Collection CS3 were available cracked just days after release.
So the moral of this story is if you don't want someone to steal your software there is one option: don't write anything worth stealing.
I think someone will come up with a dynamic AI way of defeating all the currently standard methods of copy protection; heck, I'd sure love to get paid to work on that problem. Once they get there then new methods will be developed, but it'll slow things down.
The second best way for society to stop theft of software, is to penalize it heavily, and enforce the penalties.
The best way is to reverse the moral decline, and thereby increase the level of integrity in society.
A lost cause if ever I heard one... of course that doesn't mean you shouldn't try.
Personally, I like Penny Arcade's take on it: "A Cyclical Argument With A Literal Strawman" (http://sonicloft.net/im/52).

Self validating binaries?

My question is pretty straightforward: You are an executable file that outputs "Access granted" or "Access denied" and evil persons try to understand your algorithm or patch your innards in order to make you say "Access granted" all the time.
After this introduction, you might well be wondering what I am up to. Is he going to crack Diablo 3 once it is out? I can pacify your worries: I am not one of those crackers. My interest is crackmes.
Crackmes can be found on - for example - www.crackmes.de. A crackme is a little executable that (most of the time) contains a small algorithm to verify a serial and outputs "Access granted" or "Access denied" depending on that serial. The goal is to make the executable output "Access granted" all the time. The methods you are allowed to use might be restricted by the author - no patching, no disassembling - or might involve anything you can do with a binary, objdump and a hex editor. Cracking crackmes is definitely one part of the fun; however, as a programmer, I am wondering how you can create crackmes that are difficult.
Basically, I think the crackme consists of two major parts: a certain serial verification and the surrounding code.
Making the serial verification hard to follow in bare assembly is very possible; for example, I have the idea of feeding the serial into a simulated microprocessor that must end up in a certain state for the serial to be accepted. Alternatively, one could take the cheap route and learn about cryptographically strong ways to secure this part. Either way, making this part hard enough that the attacker resorts to patching the executable should not be that hard.
However, the more difficult part is securing the binary. Let us assume a perfectly secure serial verification that cannot be reversed (of course I know it can be reversed; if in doubt, you rip parts out of the binary you are trying to crack and throw random serials at it until one is accepted). How can we prevent an attacker from simply overriding jumps in the binary in order to make it accept anything?
I have been searching on this topic a bit, but most results on binary security, self-verifying binaries and the like end up in articles about preventing attacks on an operating system via compromised binaries, by signing certain binaries and validating those signatures in the kernel.
My thoughts currently consist of:
checking that explicit locations in the binary contain the expected jumps.
checksumming parts of the binary and comparing checksums computed at runtime against the expected values (a minimal sketch follows this list).
having positive and negative runtime checks for your functions in the code, with side effects on the serial verification. :)
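A minimal sketch of the checksum idea (x86-64 Linux assumed; the length and expected sum are hypothetical values a post-link step would have to compute, and casting a function pointer to a data pointer is technically non-portable but works on mainstream platforms):

    #include <stdint.h>

    /* the function whose code we want to tamper-proof */
    static int check_serial(const char *s)
    {
        unsigned v = 0;
        while (*s)
            v = v * 31 + (unsigned char)*s++;
        return v == 0xDEADu;                 /* illustrative condition */
    }

    #define CHECK_LEN    64                  /* hypothetical: measured after linking */
    #define EXPECTED_SUM 0x1234u             /* hypothetical: recorded after linking */

    /* Sum the first CHECK_LEN bytes of check_serial's machine code at runtime;
       a patched jump inside it changes the sum. */
    static int code_is_intact(void)
    {
        const unsigned char *p = (const unsigned char *)(void *)check_serial;
        unsigned sum = 0;
        for (unsigned i = 0; i < CHECK_LEN; i++)
            sum += p[i];
        return sum == EXPECTED_SUM;
    }

Of course, the attacker can also patch code_is_intact itself, which is why you scatter many such checks, have them cover each other, and feed their results into the serial computation rather than into a single visible branch.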
Are you able to think of more ways to annoy a possible attacker for longer? (Of course, you cannot keep him away forever; at some point all checks will be broken - unless you manage to break a checksum generator by embedding the correct checksum for a program in the program itself, hehe.)
You're getting into "anti-reversing techniques", and it's basically an art. Worse, even if you stump newbies, there are "anti-anti-reversing" plugins for OllyDbg and IDA Pro that they can download to bypass many of your countermeasures.
Countermeasures include debugger detection via debugger-trap APIs, or detecting single-stepping. You can insert code that, after detecting a debugger break-in, continues to function but starts acting up at random times much later in the program. It's really a cat-and-mouse game, and the crackers have a significant upper hand.
Check out...
http://www.openrce.org/reference_library/anti_reversing - Some of what's out there.
http://www.amazon.com/Reversing-Secrets-Engineering-Eldad-Eilam/dp/0764574817/ - This book has really good anti-reversing info and steps through the techniques. A great place to start if you're getting into reversing in general.
I believe these things are generally more trouble than they're worth.
You spend a lot of effort writing code to protect your binary. The bad guys spend less effort cracking it (they're generally more experienced than you) and then release the crack so everyone can bypass your protection. The only people you'll annoy are those honest ones who are inconvenienced by your protection.
Just view piracy as a cost of business - the incremental cost of pirated software is zero if you ensure all support is done only for paying customers.
There's TPM technology (see the Trusted Platform Module article on Wikipedia).
It allows you to store the cryptographic checksums of a binary on a special chip, which could act as one-way verification.
Note: TPM has a somewhat bad reputation because it could be used for DRM. But to experts in the field that's somewhat unfair, and there's even an open-TPM group allowing Linux users to control exactly how their TPM chip is used.
One of the strongest solutions to this problem is Trusted Computing. Basically you would encrypt the application and transmit the decryption key to a special chip (the Trusted Platform Module). The chip would only decrypt the application once it has verified that the computer is in a "trusted" state: no memory viewers/editors, no debuggers, etc. Basically, you would need special hardware just to be able to view the decrypted program code.
So, you want to write a program that accepts a key at the beginning and stores it in memory, subsequently retrieving it from disc. If it's the correct key, the software works. If it's the wrong key, the software crashes. The goal is that it's hard for pirates to generate a working key, and it's hard to patch the program to work with an unlicensed key.
This can actually be achieved without special hardware. Consider our genetic code. It works based on the physics of this universe. We try to hack it, create drugs, etc., and we fail miserably, usually creating tons of undesirable side effects, because we haven't yet fully reverse engineered the complex "world" in which the genetic "code" evolved to operate. Basically, if you're running everything on a common processor (a common "world"), which everyone has access to, then it's virtually impossible to write such secure code, as demonstrated by current software being so easily cracked.
To achieve security in software, you essentially would have to write your own sufficiently complex platform, which others would have to completely and thoroughly reverse engineer in order to modify the behavior of your code without unpredictable side effects. Once your platform is reverse engineered, however, you'd be back to square one.
The catch is, your platform is probably going to run on common hardware, which makes your platform easier to reverse engineer, which in turn makes your code a bit easier to reverse engineer. Of course, that may just mean the bar is raised a bit for the level of complexity required of your platform to be sufficiently difficult to reverse engineer.
What would a sufficiently complex software platform look like? For example, perhaps after every 6 addition operations, the 7th addition returns the result multiplied by PI divided by the square root of the log of the modulus 5 of the difference of the total number of subtract and multiply operations performed since system initialization. The platform would have to keep track of those numbers independently, as would the code itself, in order to decode correct results. So, your code would be written based on knowledge of the complex underlying behavior of a platform you engineered. Yes, it would eat processor cycles, but someone would have to reverse engineer that little surprise behavior and re-engineer it into any new code to have it behave properly. Furthermore, your own code would be difficult to change once written, because it would collapse into irreducible complexity, with each line depending on everything that happened prior. Of course, there would be much more complexity in a sufficiently secure platform, but the point is that someone would have to reverse engineer your platform before they could reverse engineer and modify your code without debilitating side effects.
A great article on copy protection, and on protecting the protection itself, is Keeping the Pirates at Bay: Implementing Crack Protection for Spyro: Year of the Dragon.
The most interesting idea mentioned in there that hasn't been mentioned here yet is cascading failures: you have checksums that, when they fail, modify a single byte that causes another checksum to fail. Eventually one of the checksums causes the system to crash or do something strange. This makes a cracked copy of your program seem unstable and puts the cause a long way from the crash.
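A tiny sketch of the cascading idea (the regions, sums, and reaction are all made up): a failed check does not react visibly, it just poisons a byte that a later, unrelated piece of code depends on, so the symptom shows up far from the patch.

    /* flipped silently when a checksum fails */
    static unsigned char poison = 0;

    static unsigned sum_region(const unsigned char *p, unsigned len)
    {
        unsigned s = 0;
        while (len--)
            s += *p++;
        return s;
    }

    /* check #1: no error message, no branch to patch -- just poison state */
    static void check_protection_code(const unsigned char *region, unsigned len)
    {
        if (sum_region(region, len) != 0x4242u)   /* hypothetical expected sum */
            poison ^= 0x01;
    }

    /* check #2, run much later: the poisoned byte quietly skews behaviour,
       so the failure surfaces a long way from whatever was patched */
    static unsigned apply_gravity(unsigned velocity)
    {
        return velocity + 10u + poison;
    }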

Resources