In my project (a Windows desktop application) I use a symmetric key to encrypt/decrypt some configuration data that needs to be protected. The key is hardcoded in my C++ code.
What is the risk that my key will be exposed by reverse engineering? (The customers will receive the compiled DLL only.)
Is there a more secure way to manage the key?
Are there open source or commercial products I can use?
Windows provides a key storage mechanism as part of the Crypto API. This would only be useful for you if you have your code generate a unique random key for each user. If you are using a single key for all installations, it will obviously have to be in your code (or be derived from constants that are in your code), and thus can't really be secure.
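If you do go the per-user route, the Data Protection API (DPAPI) side of the Crypto API is the easiest way to keep that per-user key out of your binary entirely. Here is a minimal sketch; the key size, the description string, and the lack of error handling are all illustrative choices, not a recommended implementation:

```cpp
#include <windows.h>
#include <dpapi.h>
#include <vector>

#pragma comment(lib, "crypt32.lib")

// Sketch: wrap a freshly generated per-user key with DPAPI so the wrapped
// blob can be stored on disk. CryptUnprotectData reverses this, and only
// succeeds for the same Windows user.
std::vector<BYTE> protectKey(const std::vector<BYTE>& key) {
    DATA_BLOB in{ static_cast<DWORD>(key.size()), const_cast<BYTE*>(key.data()) };
    DATA_BLOB out{};
    // Description string and flags are illustrative; error handling omitted.
    CryptProtectData(&in, L"config key", nullptr, nullptr, nullptr, 0, &out);
    std::vector<BYTE> blob(out.pbData, out.pbData + out.cbData);
    LocalFree(out.pbData);
    return blob;   // safe to persist; it is bound to the current user
}

int main() {
    // In practice generate this with a CSPRNG (e.g. BCryptGenRandom);
    // a fixed value is used only to keep the sketch short.
    std::vector<BYTE> key(32, 0xAB);
    std::vector<BYTE> wrapped = protectKey(key);
    return wrapped.empty() ? 1 : 0;
}
```

CryptUnprotectData performs the reverse call, and (unless you pass CRYPTPROTECT_LOCAL_MACHINE) only succeeds under the same user account, which is what makes the stored blob useless to someone who simply copies the file.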
What is the risk that my key will be exposed by reverse engineering? (The customers will receive the compiled DLL only.)
100%. Assuming of course that the key protects something useful and interesting. If it doesn't, then lower.
Is there a more secure way to manage the key?
There's no security tool you could use, but there are obfuscation and DRM tools (which are a different problem than security). Any approach you use will need to be updated regularly to deal with new attacks that defeat your old approach. But fundamentally this is the same as DRM for music or video or games or whatever. I would shop around. Anything worthwhile will be regularly updated, and likely somewhat pricey.
Are there open source or commercial products I can use?
Open source solutions for this particular problem are... probably unhelpful. The whole point of DRM is obfuscation (making things confusing and hidden rather than secure). If you share "the secret sauce" then you lose the protection. This is how DRM differs from security. In security, I can tell you everything but the secret, and it's still secure. But DRM, I have to hide everything. That said, I'm sure there are some open source tools that try. There are open source obfuscation tools that try to make it hard to debug the binary by scrambling identifiers and the like, but if there's just one small piece of information that's needed (the configuration), it's hard to obfuscate that sufficiently.
If you need this, you'll likely want a commercial solution, which will be imperfect and will likely require patching as it's broken (again, assuming that it protects something that anyone really cares about). Recommending specific solutions is off-topic for Stack Overflow, but Google can help you. There are some things specific to Windows that may help, but it depends on your exact requirements.
Keep in mind that the "attacker" (it's hard to consider an authorized user an "attacker") doesn't have to actually get your keys. They just have to wait until your program decrypts the configurations, and then read the configurations out of memory. So you'll need obfuscation around that as well. It's a never-ending battle that you'll have to decide how hard you want to fight.
Related
We have some computer code which requires anyone who has access to it to pay a license fee. We will pay the fee for our developers, but the vendor wants our sysadmins to be licensed too, as they can see the code archives. But if the code were stored encrypted in the archives, the sysadmins could see the files but not their contents.
So, does any version control system support encryption such that only the people checking out the code need the key, and only they can see the files decrypted?
I was thinking it wouldn't be hard to add this to pserver and CVS, but if it has already been done elsewhere, why reinvent the wheel?
Any insight would be helpful.
There is no way to set up a source control system that can perform server-side diffs in a way that would prevent a sysadmin from at least theoretically accessing the contents. (i.e.: The source control system would not be able to store the decryption key in a place that the sysadmin couldn't access.) Unless your sysadmins habitually browse the source control database contents, such a system should have no practical difference from an unencrypted system from the perspective of your vendor.
The only way to make the source control database illegible to a server admin is to encrypt files on the client before submitting them to the server. For this to meet the desired goal, the decryption keys would need to be inaccessible to the admins, which is unlikely to be practical in most organizations since server admins typically have admin access on all client machines as well. Ignoring this picky detail, it would also mean that all your source control system would ever see is encrypted binaries, which means no server-side diff or blame. It also means potentially horrible bloat of your database size since every file will require complete replacement on each commit. Are you really willing to sacrifice usability of your source control system in order to save licensing fees and/or placate this vendor?
Basically, you want to give all your developers some secret key that they plug into the encryption/decryption routines of git's smudge and clean filters. And you want an encryption scheme that is capable of performing deltas.
First, see Encrypted version control for some examples in git. As written, this can dramatically increase disk usage. However, there are ways to make more "diff-friendly" encryption at the cost of some security. See diph for an example of how you might attack that. Also, any system that uses AES-ECB mode would diff quite well. (You generally shouldn't use AES-ECB mode because of its security flaws... one of those security flaws is that it can diff quite well... hey, that's what you wanted, so this seems a reasonable exception.)
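To make the smudge/clean approach concrete, here is roughly what the wiring looks like. The filter name, the file pattern, the .git/secret-key path, and the choice of openssl as the encryption command are all illustrative assumptions; the "Encrypted version control" question linked above covers real variants:

```
# .gitattributes -- route matching files through a filter named "encrypt"
secrets/*.cfg filter=encrypt

# per-clone setup; each developer needs the shared key file
git config filter.encrypt.clean  "openssl enc -aes-256-ecb -nosalt -pass file:.git/secret-key"
git config filter.encrypt.smudge "openssl enc -d -aes-256-ecb -nosalt -pass file:.git/secret-key"
```

ECB with no salt is deliberately chosen here so that the clean output is deterministic and diffs reasonably, which is exactly the security trade-off described above; with CBC and a random salt, every run of the clean filter would produce a different blob.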
What are some effective and secure methods of securing SQL queries?
In short, I would like to ensure that programmers do not see the passwords used by the application to perform queries. Something like RSA or PGP comes to mind, but I don't know how one could implement a changing password without it being encoded in the application somewhere.
Our environment is a typical Linux/MySQL setup.
This might be more of a process issue and less of a coding issue.
You need to strictly separate the implementation process from the roll-out process during software development. The configuration files containing the passwords must be filled with the real passwords during roll-out, not before. The programmers can work with the passwords for the development environment, and the roll-out team changes those passwords once the application is complete. That way the real passwords are never disclosed to the people coding the application.
If you cannot ensure that programmers do not get access to the live system, you need to encrypt the configuration files. The best way to do this depends on the programming language. I am currently working on a Java application that encrypts the .properties files with the appropriate functions from the ESAPI project and I can recommend that. If you are using other languages, you have to find equivalent mechanisms.
Any time you want to change passwords, an administrator generates a new file and encrypts it, before copying the file to the server.
If you want maximum security and do not want to store the key that decrypts the configuration on your system, an administrator can supply it whenever the system reboots. But this might take things too far, depending on your needs.
If programmers don't have access to the configuration files that contain the login credentials and can't get to them through the debug or JMX interfaces then that should work. Of course that introduces other problems but that would potentially satisfy your requirement. (I am not a Qualified Security Assessor - so check with yours to be sure for PCI compliance.)
I've got a little program that I want to send to some other people.
But I want to prevent them from easily sharing it with others.
Is there some easy protection I can use? It doesn't need to be unhackable, just a little protection so that you can't simply pass the app around.
It can't be uncrackable anyway :) There are lots of different protections that you can use, but it always comes down to the skill of the reverse engineer.
A pretty standard technique is to pack your software with a packer like ASProtect, Armadillo, ASPack, or UPX; there are tons of options. This makes it more difficult to hex-edit, debug, and disassemble your software.
If you want to use a serial protection, there are lots of things you could do. One of my favourites is using the key to dynamically decrypt pre-encrypted blocks of code and execute them. This is called polymorphism, and along with self-modifying code it can be a pretty frustrating protection.
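A toy illustration of that idea: the encrypted bytes below are x86/x64 machine code for "mov eax, 42; ret", XOR-ed with a one-byte key. Both the payload and the key are made-up example values; a real protection would use larger blocks, a real cipher, and far more indirection.

```cpp
#include <windows.h>
#include <iostream>

int main() {
    const unsigned char key = 0x5A;   // the "serial" in this toy example
    unsigned char encrypted[] = { 0xB8 ^ key, 0x2A ^ key, 0x00 ^ key,
                                  0x00 ^ key, 0x00 ^ key, 0xC3 ^ key };

    // Allocate an executable buffer and decrypt the code into it at runtime.
    void* buf = VirtualAlloc(nullptr, sizeof(encrypted),
                             MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    auto* code = static_cast<unsigned char*>(buf);
    for (size_t i = 0; i < sizeof(encrypted); ++i)
        code[i] = encrypted[i] ^ key;

    auto fn = reinterpret_cast<int(*)()>(buf);
    std::cout << fn() << '\n';   // prints 42 only if the right key was used

    VirtualFree(buf, 0, MEM_RELEASE);
}
```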
If you want to keep things really simple, you could just create an XOR protection where correct_serial XOR constant == another_constant. Using constant XOR another_constant, you can simply generate a valid key.
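In code, that check is only a few lines. The two constants here are made-up example values; anyone who finds them in the binary can derive a valid serial, so this is obfuscation, not security:

```cpp
#include <cstdint>
#include <iostream>

constexpr std::uint32_t kConstant        = 0xA5A5A5A5u;   // example value
constexpr std::uint32_t kAnotherConstant = 0x3C3C3C3Cu;   // example value

// The XOR protection described above.
bool isSerialValid(std::uint32_t serial) {
    return (serial ^ kConstant) == kAnotherConstant;
}

int main() {
    // Generating a key is just the same relation solved for the serial.
    std::uint32_t serial = kConstant ^ kAnotherConstant;
    std::cout << std::boolalpha << isSerialValid(serial) << '\n';  // true
}
```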
Really tons of things to do here, it's always a matter of taste and knowledge.
There are lots of free solutions, most are crackable. In spite of popular opinion, modern dongles can be 1) trouble-free and 2) uncrackable. But they can cost $25-$100 each, so not a good choice for low-value software.
The use of keys is frequently tied to symmetric-key encryption of the .exe so it can't easily be copied. The key is unique to the installation and can be created by tying it to machine characteristics such as the CPU serial number, MAC address, HD serial number, etc. You can also build a small table of those fingerprints and register that user/serial number against it; then have the app "phone home" from time to time to compare against a server DB. Both of these methods are crackable, but you said you weren't looking for something unhackable. The downside of hardware fingerprinting is that it can fail when the user upgrades the network card or hard drive; then you have an unhappy customer who paid for the license and can't run the app.
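For illustration, here is a rough sketch of deriving an installation fingerprint from machine characteristics. Which characteristics to combine (here, the computer name and the C: volume serial) is an assumption for the example; a real scheme would feed the result through a proper hash or key-derivation function and handle the API failure cases:

```cpp
#include <windows.h>
#include <iostream>
#include <string>

// Builds a simple machine fingerprint string from two hardware/OS values.
std::string machineFingerprint() {
    char name[MAX_COMPUTERNAME_LENGTH + 1] = {};
    DWORD nameLen = sizeof(name);
    GetComputerNameA(name, &nameLen);

    DWORD volumeSerial = 0;
    GetVolumeInformationA("C:\\", nullptr, 0, &volumeSerial,
                          nullptr, nullptr, nullptr, 0);

    return std::string(name) + "-" + std::to_string(volumeSerial);
}

int main() {
    std::cout << machineFingerprint() << '\n';
}
```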
There are MANY approaches to this; this is one:
Create an authentication web service.
Get your app to generate a unique key from something that identifies the machine.
This gets sent to you, and you generate a companion key that your app can verify against its unique key.
As you can imagine, this is not something you add in quickly. It requires infrastructure and management, which is tricky. A rough sketch of just the client-side check follows.
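For illustration only, this assumes the companion key is a keyed transformation of the machine key that both the server and the app can compute. The deriveCompanionKey function, the FNV-1a stand-in, and the embedded vendor secret are all hypothetical placeholders for a real HMAC or, better, a server-side signature the app verifies with a public key:

```cpp
#include <cstdint>
#include <string>
#include <iostream>

// Placeholder derivation shared by the activation server and the app.
// FNV-1a is used purely to keep the sketch short; it is NOT a secure MAC.
std::string deriveCompanionKey(const std::string& machineKey,
                               const std::string& vendorSecret) {
    std::uint32_t h = 2166136261u;
    for (unsigned char c : machineKey + vendorSecret) {
        h ^= c;
        h *= 16777619u;
    }
    return std::to_string(h);
}

// Client-side check: does the server-issued companion key match this machine?
bool isActivated(const std::string& machineKey,
                 const std::string& companionKeyFromServer) {
    const std::string vendorSecret = "example-secret";   // assumption: embedded secret
    return deriveCompanionKey(machineKey, vendorSecret) == companionKeyFromServer;
}

int main() {
    std::string machineKey = "PC-NAME-123456";            // e.g. from a fingerprint
    std::string fromServer = deriveCompanionKey(machineKey, "example-secret");
    std::cout << std::boolalpha << isActivated(machineKey, fromServer) << '\n';  // true
}
```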
I've been asked to develop a key generation/validation system for some software. They would also be open to an existing open source or commercial system, but would prefer one built from scratch. Online activation would have to be optional, since it is likely that some installations would be on isolated servers. I know there is something of a usability-versus-security tension with a lot of anti-piracy techniques. So I guess I'm asking: what software, libraries, and techniques are out there? I would appreciate personal knowledge, web sites, or books.
If you take the hash of something, it will result (ideally) in an unpredictable string of characters.
One algorithm would be to take the SHA1 of something predictable (like sequential numbers) concatenated with a sufficiently long salt. Your keys stay secure as long as your salt remains a permanent secret and SHA1 is never broken.
For example, if you take the SHA1 of "1" (your first license key) and a super secret salt "stackoverflow8as7f98asf9sa78f7as9f87a7", you get the key "95d78a6331e01feca457762a092bdd4a77ef1de1". You could prepend this with version numbers if you want.
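Here is a rough sketch of that scheme using the legacy Windows CryptoAPI (any SHA1 implementation would do). The exact concatenation order isn't pinned down above, so the output won't necessarily match the sample digest; error handling is omitted for brevity:

```cpp
#include <windows.h>
#include <wincrypt.h>
#include <string>
#include <iostream>

#pragma comment(lib, "advapi32.lib")

// License key = hex(SHA1(sequence number + secret salt)).
std::string licenseKeyFor(unsigned sequence, const std::string& salt) {
    std::string input = std::to_string(sequence) + salt;

    HCRYPTPROV prov = 0;
    HCRYPTHASH hash = 0;
    CryptAcquireContext(&prov, nullptr, nullptr, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);
    CryptCreateHash(prov, CALG_SHA1, 0, 0, &hash);
    CryptHashData(hash, reinterpret_cast<const BYTE*>(input.data()),
                  static_cast<DWORD>(input.size()), 0);

    BYTE digest[20];                       // SHA1 produces 20 bytes
    DWORD digestLen = sizeof(digest);
    CryptGetHashParam(hash, HP_HASHVAL, digest, &digestLen, 0);

    CryptDestroyHash(hash);
    CryptReleaseContext(prov, 0);

    static const char* hex = "0123456789abcdef";
    std::string key;
    for (DWORD i = 0; i < digestLen; ++i) {
        key += hex[digest[i] >> 4];
        key += hex[digest[i] & 0xF];
    }
    return key;
}

int main() {
    std::cout << licenseKeyFor(1, "stackoverflow8as7f98asf9sa78f7as9f87a7") << '\n';
}
```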
If you want online authorization, you need three things:
To ensure that the response cannot be forged
To ensure that the request cannot be forged
To ensure that if the Internet is unavailable, you take appropriate action
Public key cryptography can help with items one and two. Even Photoshop CS4 has problems with item three; that one is tricky.
I'm biased - given that the company I co-founded developed the Cobalt software licensing solution for .NET - but I'd suggest you go with a third-party solution rather than rolling your own.
Take a look at the article Developing for Software Protection and Licensing, which makes the following point:
We believe that most companies would be better served by buying a high-quality third-party licensing system. This approach will free your developers to work on core functionality, and will alleviate maintenance and support costs. It also allows you to take advantage of the domain expertise offered by licensing specialists, and avoid releasing software that is easy to crack.
Another advantage to buying a third-party solution is that you can quickly and easily evaluate it for suitability; with an in-house system you have to pay in advance for the development of a system that may not prove adequate for your needs. Choosing a high-quality third-party system dramatically reduces the risk involved in developing a solution in-house.
If you're dead set on rolling your own, a word of advice: test on the widest range of client systems possible. Real-world hardware is weird, and Windows behaviour varies quite dramatically in some ways between versions.
You'll almost certainly have to spend a lot of time ironing the creases out of whatever hardware identification system you implement.
There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
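A small illustration (not from the original answer) of the object-capability style in ordinary C++: instead of handing code a path plus the ambient authority to open anything, hand it a reference to the one resource it is allowed to use.

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Ambient-authority style: this function can open *any* path the process
// can reach, so a bug here can touch files the caller never intended.
void summarizeByPath(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    std::getline(in, line);
    std::cout << line << '\n';
}

// Capability style: the caller decides what may be read and passes only
// that; the callee's authority is exactly the reference it received.
void summarize(std::istream& in) {
    std::string line;
    std::getline(in, line);
    std::cout << line << '\n';
}

int main() {
    std::ifstream config("app.cfg");   // the one file we intend to expose
    summarize(config);
}
```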
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, e.g., processes, so that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions behind crypto building blocks can be important. For example, stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (e.g., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
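For example (with purely illustrative numbers), an event with a 1-in-1,000 annual probability and $500,000 of potential harm carries an annual risk of $500 -- a figure you can weigh directly against the cost of the control that would prevent it.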
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.