Best place to hide secret keys? - security

I am looking for advice on where to store encryption keys and other sensitive application data. Is a certificate on a USB stick really the way to go here? What can you do to keep your secret keys safe?

Keep them on a smart card, or use the Trusted Platform Module (TPM) that is present in many machines sold these days.

A keystore (see getKey()) is often where a secret, like a private key, is kept. Access to the keystore requires a password, and its entries are typically protected with a symmetric cipher.

If it's a secret and you have to store it somewhere, then at some point it can't really be considered a secret anymore, because one way or another somebody will be able to find it. Security is always best considered on a case-by-case basis; what is acceptable for one solution is not for another, so there is no one-size-fits-all answer. However, where possible (or rather, always), make sure you use a tried and tested method rather than rolling your own. Hopefully that helps, but it is such a wide-open question.

Related

How does PasswordVault protect passwords? [duplicate]

I'd like to use Windows.Security.Credentials.PasswordVault in my desktop app (WPF-based) to securely store a user's password. I managed to access this Windows 10 API using this MSDN article.
I did some experiments and it appears that any data written to PasswordVault from one desktop app (not a native UWP app) can be read from any other desktop app. Even packaging my desktop app with Desktop Bridge technology and thus having a Package Identity does not fix this vulnerability.
Any ideas how to fix that and be able to store the app's data securely, protected from other apps?
UPDATE: It appeared that PasswordVault adds no extra security over DPAPI. The case is closed with a negative result.
(this is from what I can understand of your post)
There is no real way of preventing data access between desktop apps when using these kinds of APIs; http://www.hanselman.com/blog/SavingAndRetrievingBrowserAndOtherPasswords.aspx tells more about it. You'd probably just want to encrypt your information.
Restricting memory access is difficult; code executed by the user is always retrievable by the user, so there is little you can do to prevent this.
Have you considered using the Windows Data Protection API:
https://msdn.microsoft.com/en-us/library/ms995355.aspx
grabbed straight from the source
DPAPI is an easy-to-use service that will benefit developers who must provide protection for sensitive application data, such as passwords and private keys
DPAPI uses keys generated by the operating system and Triple DES to encrypt/decrypt your data, which means your application doesn't have to generate these keys itself, which is always nice.
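For reference, here is a minimal sketch of what calling DPAPI from C# looks like, via the managed ProtectedData wrapper (the literal data is just a placeholder; add a reference to System.Security):

    using System.Security.Cryptography;
    using System.Text;

    byte[] secret = Encoding.UTF8.GetBytes("my sensitive data");   // placeholder
    byte[] entropy = null;  // optional extra entropy; storing it safely is its own problem

    // Encrypt so that only the current Windows user can decrypt it.
    byte[] ciphertext = ProtectedData.Protect(secret, entropy, DataProtectionScope.CurrentUser);

    // Later, under the same user account:
    byte[] plaintext = ProtectedData.Unprotect(ciphertext, entropy, DataProtectionScope.CurrentUser);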
You could also use the Rfc2898DeriveBytes class; this uses a pseudo-random function to derive bytes from a password (PBKDF2). It's safer than storing the password itself, since there is no practical way to go back from the result to the password. This is only really useful for verifying an input password (or deriving an encryption key from it), not for retrieving it again. I have never actually used this myself, so I would not be able to help you further.
https://msdn.microsoft.com/en-us/library/system.security.cryptography.rfc2898derivebytes(v=vs.110).aspx
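A minimal sketch of how the class is typically used (the iteration count and sizes are illustrative):

    using System.Security.Cryptography;

    // PBKDF2: derive bytes from a password and a random salt.
    byte[] salt = new byte[16];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(salt);

    using (var kdf = new Rfc2898DeriveBytes("user password", salt, 100000, HashAlgorithmName.SHA256))
    {
        byte[] derived = kdf.GetBytes(32);
        // Store salt + iteration count + derived bytes; to verify a password later,
        // re-derive with the same salt and compare the results.
    }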
see also this post which gives a way better explanation than I can.
How to securely save username/password (local)?
If I misunderstood the question in some way, tell me, I will try to update the answer.
Note that modern/Metro apps do not have this problem, although their data is still accessible in other ways.
The hard truth is that storing a password in a desktop application 100% securely is simply not possible. However, you can get close to 100%.
Regarding your original approach, PasswordVault uses the Credential Locker service which is built into Windows to securely store data. Credential Locker is bound to the user's profile. Therefore, storing your data via PasswordVault is essentially equivalent to the master password approach to protecting data, which I talk about in detail further down. The only difference is that the master password in that case is the user's credentials. This allows applications running during the user's session to access the data.
Note: To be clear, I'm strictly talking about storing it in a way that allows you access to the plain text. That is to say, storing it in an encrypted database of any sort, or encrypting it yourself and storing the ciphertext somewhere. This kind of functionality is necessary in programs like password managers, but not in programs that just require some sort of authentication. If this is not a necessity then I strongly recommend hashing the password, ideally per the instructions laid out in this answer by zaph. (Some more information in this excellent post by Thomas Pornin).
If it is a necessity, things get a bit more complicated: If you want to prevent other programs (or users I suppose) from being able to view the plaintext password, then your only real option is to encrypt it. Storing the ciphertext within PasswordVault is optional since, if you use good encryption, your only weak point is someone discovering your key. Therefore the ciphertext itself can be stored anywhere. That brings us to the key itself.
Depending on how many passwords you're actually trying to store for each program instance, you might not have to worry about generating and securely storing a key at all. If you want to store multiple passwords, then you can simply ask the user to input one master password, perform some salting and hashing on that, and use the result as the encryption key for all other passwords. When it is time for decryption, then ask the user to input it again. If you are storing multiple passwords then I strongly urge you to go with this approach. It is the most secure approach possible. For the rest of my post however, I will roll with the assumption that this is not a viable option.
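A minimal sketch of that master-password scheme, assuming PBKDF2 for the key derivation and AES for the encryption (all names and parameters here are illustrative):

    using System;
    using System.Security.Cryptography;

    // Derive an AES key from the master password (which is never stored) and
    // encrypt one of the managed passwords with it. Salt and IV are not secret
    // and are stored alongside the ciphertext.
    static byte[] EncryptWithMasterPassword(string masterPassword, byte[] salt, byte[] plaintext)
    {
        using (var kdf = new Rfc2898DeriveBytes(masterPassword, salt, 100000, HashAlgorithmName.SHA256))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);
            aes.GenerateIV();
            using (var enc = aes.CreateEncryptor())
            {
                byte[] ct = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
                byte[] result = new byte[aes.IV.Length + ct.Length];   // IV prepended
                Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
                Buffer.BlockCopy(ct, 0, result, aes.IV.Length, ct.Length);
                return result;
            }
        }
    }

Decryption is the mirror image: re-derive the key from the master password the user types in, read the IV from the front of the blob, and decrypt the rest.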
First off I urge you not to have the same key for every installation. Create a new one for every instance of your program, based on securely generated random data. Resist the temptation to "avoid having to store the key" by having it be generated on the fly every time it is needed, based on information about the system. That is just as secure as hardcoding string superSecretKey = "12345"; into your program. It won't take attackers long to figure out the process.
Now, storing it is the real tricky part. A general rule of infosec is the following:
Nothing is secure once you have physical access
So, ideally, nobody would have physical access to it at all. Storing the encryption keys on a properly secured remote server minimizes the chances of them being recovered by attackers. Entire books have been written regarding server-side security, so I will not discuss this here.
Another good option is to use an HSM (Hardware Security Module). These nifty little devices are built for the job. Accessing the keys stored in an HSM is pretty much impossible. However, this option is only viable if you know for sure that every user's computer has one of these, such as in an enterprise environment.
.Net provides a solution of sorts, via the configuration system. You can store your key in an encrypted section of your app.config. This is often used for protecting connection strings. There are plenty of resources out there on how to do this. I recommend this fantastic blog post, which will tell you most of what you need to know.
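As a rough sketch, protecting a section programmatically with the built-in DPAPI provider looks like this (requires a reference to System.Configuration; aspnet_regiis.exe does the equivalent for web.config):

    using System.Configuration;

    // One-time step (e.g. at install or first run): encrypt the connectionStrings section.
    var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    var section = config.GetSection("connectionStrings");
    if (section != null && !section.SectionInformation.IsProtected)
    {
        section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
        config.Save(ConfigurationSaveMode.Modified);
    }
    // At runtime, ConfigurationManager still hands back plaintext values;
    // the decryption is transparent to the application.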
The reason I said earlier not to go with simply generating the key on the fly is because, like storing it as a variable in your code, you rely exclusively on obfuscation to keep it secure. The thing about this approach is that it usually doesn't. However, sometimes you have no other option. Enter White Box cryptography.
White box cryptography is essentially obfuscation taken to the extreme. It is meant to be effective even in a white-box scenario, where the attacker both has access to and can modify the bytecode. It is the epitome of security through obscurity. As opposed to mere constant hiding (infosec speak for the string superSecretKey approach) or generating the key when it is needed, white box cryptography essentially relies on generating the cipher itself on the fly.
Entire papers have been written on it, it is difficult to write a proper implementation, and your mileage may vary. You should only consider this if you really, really, really want to do this as securely as possible.
Obfuscation, however, is still obfuscation. All it can really do is slow the attackers down. The final solution I have to offer might seem backwards, but it works: do not hide the encryption key digitally. Hide it physically. Have the user insert a USB drive when it is time for encryption, (securely) generate a random key, then write it to the USB drive. Then, whenever it is time for decryption, the user only has to put the drive back in, and your program reads the key off it.
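A minimal sketch of that flow; the drive path is a hypothetical placeholder, and a real implementation would locate the removable drive via DriveInfo and handle it being absent:

    using System.IO;
    using System.Security.Cryptography;

    string keyPath = @"E:\myapp.key";   // hypothetical USB drive path

    // At setup time: generate a random key and write it to the USB drive.
    byte[] key = new byte[32];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(key);
    File.WriteAllBytes(keyPath, key);

    // At decryption time: ask the user to insert the drive and read the key back.
    byte[] loadedKey = File.ReadAllBytes(keyPath);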
This is a bit similar to the master password approach, in that it leaves it up to the user to keep the key safe. However, it has some notable advantages. For instance, this approach allows for a massive encryption key. A key that can fit in a mere 1 megabyte file can take literally billions of years to break via a brute force attack. Plus, if the key ever gets discovered, the user has only themselves to blame.
In summary, see if you can avoid having to store an encryption key. If you can't, avoid storing it locally at all costs. Otherwise, your only option is to make it as hard for hackers to figure it out as possible. No matter how you choose to do that, make sure that every key is different, so even if attackers do find one, the other users' keys are safe.
The only alternative is to encrypt the password with your own key stored somewhere in your code (someone can easily disassemble your code and get the key) and then store the encrypted password inside PasswordVault. The only security you gain is that other apps will not have direct access to the password.
This gives you two layers: on a compromised machine, an attacker can get access to the PasswordVault but not to your password, because they also need the key that is hidden somewhere in your code to decrypt it.
To make it more secure, keep the key on your server and expose an API that encrypts and decrypts the password before it is stored in the vault. I think this is the reason people have moved on to OAuth (storing an OAuth token in PasswordVault) and similar schemes rather than storing the password itself in the vault.
Ideally, I would recommend not storing the password at all; instead, get a token from the server, use that token for authentication, and store that token in PasswordVault.
It is always possible to push security further with various encryption and storage strategies, but making something harder only makes data retrieval take longer, never impossible. Hence you need to decide on the most appropriate level of protection, weighing execution cost and time (human and machine) against development cost and time.
If I consider strictly your request, I would simply add a layer (class, interface) to encrypt your passwords, preferably with asymmetric encryption (and not RSA). Supposing other software is not accessing your program's data (program, files or process), this is sufficient. You can use SSH.NET (https://github.com/sshnet/SSH.NET) to achieve this quickly.
If you would like to push the security further and get some level of protection against binary reverse engineering (including retrieval of the private key), I recommend a small, process-limited, encrypted VM-based solution (like Docker, https://blogs.msdn.microsoft.com/mvpawardprogram/2015/12/15/getting-started-with-net-and-docker/) such as Denuvo (https://www.denuvo.com/). The encryption is unique per customer and per machine. You'll have to encapsulate your C# program in a C/C++ program (which acts as a container) that does all the in-memory encryption and decryption.
You can implement your own strategy, depending on the kind of investment and the guarantees you require.
If your program is a backend program, you can pick the best strategy of all (the only one I really recommend): store the private key on the client side and the public key on the backend side, with decryption happening locally, so every transmitted password is encrypted. I would also remark that passwords and keys are really just different strategies for the same goal: checking that the program is talking to the right party without knowing that party's identity. In other words, instead of storing passwords, it is better to store public keys directly.
Revisiting this rather helpful issue and adding a bit of additional information which might be helpful.
My task was to extend a Win32 application that uses passwords to authenticate with an online service with a "save password" functionality. The idea was to protect the password using Windows Hello (UserConsentVerifier). I was under the impression that Windows surely has something comparable to the macOS keychain.
If you use the Windows Credential Manager APIs (CredReadA, CredWriteA), another application can simply enumerate the credentials and if it knows what to look for (the target name), it will be able to read the credential.
I also explored using DPAPI where you are in charge of storing the encrypted blob yourself, typically in a file. Again, there seems to be no way (except obfuscation) to prevent another application from finding and reading that file. Supplying additional entropy to CryptProtectData and CryptUnprotectData again poses the question of where to store the entropy (typically I assume it would be hard-coded and perhaps obfuscated in the application: this is security by obscurity).
As it turns out, neither DPAPI (CryptProtectData, CryptUnprotectData) nor Windows Credential Manager APIs (CredRead, CredWrite) can prevent another application running under the same user from reading a secret.
What I was actually looking for was something like the macOS keychain, which allows applications to store secrets, define ACLs on those secrets, enforce biometric authentication on accessing the secret, and critically, prevents other applications from reading the secrets.
As it turns out, Windows has a PasswordVault which claims to isolate apps from each other, but it's only available to UWP apps:
Represents a Credential Locker of credentials. The contents of the locker are specific to the app or service. Apps and services don't have access to credentials associated with other apps or services.
Is there a way for a Win32 desktop application to access this functionality? I realize that if a user can be tricked into installing and running a random app, that app could probably mimic the original application and just prompt the user to enter the secret, but still, it's a little disappointing that there is no app-level separation by default.

What is the best way to secure your program

I have searched a lot for the best way to secure a program, and among the many results I found two promising approaches.
The first is to hash the computer's MAC address and link it to an activation code, but this is still vulnerable.
The second is to use a USB device, but I couldn't find any details. Can anybody explain in detail which way is best and how to implement it, please?
First of all, you need to accept that no matter what you do, someone will be able to crack it. Because of this, you need to strike a balance between the security of your application and how hard you make things for legitimate users (you don't want to punish a user who has already paid for your product just to protect your application from the people who don't want to pay).
Having this in mind, you could go with digital signatures using asymmetric cryptography, where you sign your license "activation" with your private key, and then your application uses the corresponding public key to verify that the received license was issued by you. You should also take a look at this discussion (I recommend focusing on the 2nd answer, not the selected one) and this one.
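As a rough sketch, the sign/verify split with RSA signatures in .NET looks like this (key storage, distribution and serialization are left out, and the license string is illustrative):

    using System.Security.Cryptography;
    using System.Text;

    byte[] license = Encoding.UTF8.GetBytes("customer=alice;expires=2026-01-01");

    // On your side only: sign the license data with the private key.
    using (var rsa = RSA.Create(2048))
    {
        byte[] signature = rsa.SignData(license, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        // Inside the shipped application: verify using the public key only.
        using (var verifier = RSA.Create())
        {
            verifier.ImportParameters(rsa.ExportParameters(false));   // public parameters only
            bool valid = verifier.VerifyData(license, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }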
But again, your objective should be to just make things hard for bad guys, but without punishing your legitimate users, because for an attacker, it could be as easy as de-compiling your program and removing your logic to validate the license (unless you're creating an "always online" application, but usually users don't like that, and I'm saying this as a user).

Alternative login system with file upload

I was wondering whether a login system in which you have to upload a certain file, which the server then verifies against the copy it has stored, would be useful.
One advantage, I was thinking, is that the "password" (the file) could be quite large, without you having to remember it.
Also it would mean that you would have to require a login name.
On the other hand, one disadvantage would be that you would have to "carry around" the file everywhere in order to be able to log in.
I don't want to turn this into a philosophical question rather than a programming one.
I'm trying to assess the usability, safety/vulnerabilities, etc.
Is this or something similar done?
I am definitely not a security expert, but here are some thoughts.
This sounds somewhat similar to public key encryption. If you look into how that works, I think you will get a sense of the same sort of issues. For example, see http://en.wikipedia.org/wiki/Public-key_encryption
In addition to the challenge of users having to carry the file around with them, another issue is how to keep that file secure. What if somebody's computer or thumb drive is stolen? A common approach with public-key encryption is to encrypt the private key itself, and require a password to use it. Unless you provide the file in a form which requires this, you are counting on your users to protect the file. Even if you are willing to count on them, there is the question of how to give them the tools they need so they can protect the file.
Note that just like passwords, these files would be vulnerable if a user used one to login from a public machine (which might have all sorts of spyware on it). It's an open question whether a file-based system might slip under the spyware since they might not be looking for it. However, that is not so different from security by obscurity.
Also you would want to make sure that you hashed or encrypted the files on your system. Otherwise, you would be doing the equivalent of storing passwords in plain text which would open the possibility of someone hacking your system, and then being able to log in as any user.
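As a rough sketch of that server-side step, you could store only a salted hash of the uploaded file, just as you would for a password (the helper name is made up):

    using System;
    using System.Security.Cryptography;

    // Store only Hash(salt || fileContents); at login, hash the uploaded file
    // with the stored salt and compare.
    static byte[] HashKeyFile(byte[] fileContents, byte[] salt)
    {
        using (var sha = SHA256.Create())
        {
            byte[] salted = new byte[salt.Length + fileContents.Length];
            Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
            Buffer.BlockCopy(fileContents, 0, salted, salt.Length, fileContents.Length);
            return sha.ComputeHash(salted);
        }
    }

For a large random key file a plain salted hash is fine; if users might upload low-entropy files (a photo, a document), a slow KDF such as PBKDF2 would be the safer choice, exactly as with passwords.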
What you are describing matches the physical factor of a two-factor (password + physical factor) authentication system. But it cannot be a replacement for the password, because a password is something you know and a file is something you have. If you turn the password into a file, you are losing one factor and somehow you have to compensate for that :-) maybe by using something you are.

saving passwords inside your application code

I have a question about how to store a password for use in my application. I need to encrypt/decrypt data on the fly, so the password will need to be somewhere. The options would be to hard-code it in my app or load it from a file.
I want to encrypt a license file for an application, and one of the security steps involves the app being able to decrypt the license (other steps follow after). The password is never known to the user, only to me, as the user really doesn't need it!
What I am concerned about is hackers going through my code, retrieving the password I have stored there, and using it to break the license, defeating the first security barrier.
At this point I am not considering code obfuscation (eventually I will), so this is an issue.
I know that any solution that stores passwords is a security hazard but there's no way around it!
I considered assembling the password from multiple pieces just before it is really needed, but at some point the password is complete, so a debugger and a well-placed breakpoint are all that is needed.
What approaches do you guys (and gals) use when you need to store passwords hard-coded in your app?
Cheers
My personal opinion is the same as GregS above: it is a waste of time. The application will be pirated, no matter how much you try to prevent it. However...
Your best bet is to cut down on casual-piracy.
Consider that you have two classes of users. The normal user and the pirate. The pirate will go to great lengths to crack your application. The normal user just wants to use your application to get something done. You can't do anything about the pirate.
A normal user isn't going to know anything about cracking code ("uh...what's a hex editor?"). If it is easier for this type of person to buy the application than it is to pirate it, then they are more likely to buy it.
It looks like the solutions you have already considered will be effective against the normal user. And that's about all that you can do.
Decide now how much time/effort you want to spend on preventing piracy. If someone is determined, they're probably going to get your application to work anyway.
I know you don't want to hear it, but it's a waste of time, and if your app needs a hardcoded password then that is a flaw.
I don't know that there is any approach to solving this problem that would deter a hacker in any meaningful way. Keeping the secret a secret is one of cryptography's great problems.
An approach I have used in the past was to generate a unique ID during the install: it would take the HDD and MCU serial numbers and combine them in a complex structure, then the user would send this number to our automated system and we would reply with another block derived from it. The app then decrypts and compares this data on the fly during use.
Yes, it works, but it still has the hard-coded password; we have some layers of protection (i.e. techniques that prevent a mid-level hacker from understanding our security system).
I would just recommend that you build a fairly complex system and try to hack it yourself; see whether disassembly leads to an easy path. Add some random calls to random subroutines, make it unpredictable, fake the use of registry keys and global variables, and make the hacker's life hell so he eventually gives up.
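For reference, a minimal sketch of the install-time machine ID described a few paragraphs up; the inputs here (machine name and NIC MAC addresses) are my own stand-ins for the HDD/MCU serial numbers the answer mentions:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;
    using System.Security.Cryptography;
    using System.Text;

    // Combine some machine-specific identifiers and hash them into an activation ID.
    string raw = Environment.MachineName + "|" + string.Join(",",
        NetworkInterface.GetAllNetworkInterfaces()
            .Select(nic => nic.GetPhysicalAddress().ToString()));

    using (var sha = SHA256.Create())
    {
        byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
        string machineId = BitConverter.ToString(digest).Replace("-", "");
        // The user sends machineId to the activation service; the reply is bound to it.
    }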

Is using a GUID security through obscurity?

If you use a GUID as a password for a publicly facing application as a means to gain access to a service, is this security through obscurity?
I think the obvious answer is yes, but the level of security seems very high to me, since the chances of guessing a GUID are very, very low, correct?
Update
The GUID will be stored on a device which, when plugged in, will send the GUID over an SSL connection.
Maybe I could generate a GUID, then apply AES-128 encryption to it and store that value on the device?
In my opinion, the answer is no.
If you set a password to be a newly created GUID, then it is a rather safe password: more than 8 characters, contains numbers, letters and special characters, etc.
Of course, in a GUID the position of '{', '}' and '-' are known, as well as the fact that all letters are in uppercase. So as long as nobody knows that you use a GUID, the password is harder to crack. Once the attacker knows that he is seeking a GUID, the effort needed for a brute force attack reduces. From that point of view, it is security by obscurity.
Still, consider this GUID: {91626979-FB5C-439A-BBA3-7715ED647504}. If you assume the attacker knows the positions of the special characters, his problem is reduced to finding the string 91626979FB5C439ABBA37715ED647504. Brute forcing a 32-character password? That will only happen in your lifetime if someone invents a working quantum computer.
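A quick back-of-the-envelope check of that claim (the guess rate is an arbitrary assumption):

    using System;

    // 32 free hex characters give 16^32 = 2^128 possible values.
    double keyspace = Math.Pow(2, 128);              // ≈ 3.4e38 candidates
    double guessesPerSecond = 1e12;                  // assumed attacker speed
    double years = keyspace / guessesPerSecond / (3600.0 * 24 * 365);
    Console.WriteLine($"{years:E2} years to exhaust the keyspace");   // ≈ 1.1e19 years

(A random version-4 GUID actually has 122 free bits rather than 128, which doesn't change the conclusion.)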
This is security by using a very, very long password, not by obscurity.
EDIT:
After reading the answer from Denis Hennessy, I have to revise my answer. If the GUID really contains this info (specifically the MAC address) in a decryptable form, an attacker can reduce the keyspace considerably. In that case it would definitely be security by obscurity, read: rather insecure.
And of course MusiGenesis is right: there are lots of tools that generate (pseudo) random passwords. My recommendation is to stick with one of those.
Actually, using a GUID as a password is not a good idea (compared to coming up with a truly random password of equivalent length). Although it appears long, it's actually only 16 bytes which typically includes the user's MAC address, the date/time and a smallish random element. If a hacker can determine the users MAC address, it's relatively straightforward to guess possible GUID's that he would generate.
If one can observe the GUID being sent (e.g. via HTTP Auth), then it's irrelevant how guessable it is.
Some sites, like Flickr, employ an API key and a secret key. The secret key is used to create a signature via MD5 hash. The server calculates the same signature using the secret key and does auth that way. The secret never needs to go over the network.
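A rough sketch of that shared-secret signing idea, using HMAC-SHA256 rather than the plain MD5 construction mentioned (the names are illustrative, not Flickr's actual API):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Client and server both hold secretKey; only the signature crosses the network.
    static string SignRequest(string secretKey, string canonicalRequest)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonicalRequest));
            return BitConverter.ToString(mac).Replace("-", "");
        }
    }
    // The server recomputes SignRequest(secretKey, canonicalRequest) and compares.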
GUID is to prevent accidental collisions, not intentional ones. In other words, you are unlikely to guess a GUID, but it is not necessarily hard to find out if you really want to.
At first I was ready to give an unqualified yes, but it got me thinking about whether that meant that ALL password based authentication is security by obscurity. In the strictest sense I suppose it is, in a way.
However, assuming you have users logging in with passwords and you aren't posting that GUID anywhere, I think the risks are outweighed by the less secure passwords the users have, or even the sysadmin password.
If you had said the URL to an admin page that wasn't otherwise protected included a hard coded GUID, then the answer would be a definite yes.
I agree with most other people that it is better than a weak password but it would be preferable to use something stronger like a certificate exchange that is meant for this sort of authentication (if the device supports it).
I would also ensure that you do some sort of mutual authentication (i.e. have the device verify the server's SSL certificate to ensure it is the one you expect). It would be easy enough for me to grab the device, plug it into my system, read the GUID off it, and then replay that back to the target system.
In general, you introduce security vulnerabilities if you embed the key in your device, or if you transmit the key during authentication. It doesn't matter whether the key is a GUID or a password, as the only cryptographic difference is in their length and randomness. In either case, an attacker can either scan your product's memory or eavesdrop on the authentication process.
You can mitigate this in several ways, each of which ultimately boils down to increasing the obscurity (or level of protection) of the key:
Encrypt the key before you store it. Of course, now you need to store that encryption key, but you've introduced a level of indirection.
Calculate the key, rather than storing it. Now an attacker must reverse-engineer your algorithm, rather than simply searching for a key.
Transmit a hash of the key during authentication, rather than the key itself, as others have suggested, or use challenge-response authentication. Both of these methods prevent the key from being transmitted in plaintext. SSL will also accomplish this, but then you're depending on the user to maintain a proper implementation; you've lost control over the security.
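A minimal sketch of the challenge-response variant from the last point, with HMAC over a server-supplied nonce (all names are illustrative):

    using System;
    using System.Linq;
    using System.Security.Cryptography;

    // 1. Server sends a fresh random challenge (nonce) to the device.
    byte[] challenge = new byte[16];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(challenge);

    // 2. Device answers with HMAC(key, challenge); the key itself never leaves the device.
    byte[] deviceKey = new byte[32];   // provisioned into the device beforehand
    byte[] response;
    using (var hmac = new HMACSHA256(deviceKey))
        response = hmac.ComputeHash(challenge);

    // 3. Server, which knows the same key, recomputes the HMAC and compares
    //    (use a constant-time comparison in real code).
    byte[] expected;
    using (var hmac = new HMACSHA256(deviceKey))
        expected = hmac.ComputeHash(challenge);
    bool authenticated = expected.SequenceEqual(response);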
As always, whenever you're addressing security, you need to consider various tradeoffs. What is the likelihood of an attack? What is the risk if an attack is successful? What is the cost of security in terms of development, support, and usability?
A good solution is usually a compromise that addresses each of these factors satisfactorily. Good luck!
It's better than using "password" as the password, at least.
I don't think a GUID would be considered a strong password, and there are lots of strong password generators out there that you could use just as easily as Guid.NewGuid().
It really depends on what you want to do. Using a GUID as password is not in itself security through obscurity (but beware the fact that a GUID contains many guessable bits out of the 128 total: there is a timestamp, some include the MAC address of the machine that generated it, etc.) but the real problem is how you will store and communicate that password to the server.
If the password is stored in a server-side script that is never shown to the end user, there is not much risk. If the password is embedded in some application that the user downloads to their own machine, then you will have to obfuscate the password in the application, and there is no way to do that securely. By running a debugger, a user will always be able to access the password.
Sure it is security by obscurity. But is this bad? Any "strong" password is security by obscurity. You count on the authentication system to be secure, but in the end if your password is easy to guess then it doesn't matter how good the authentication system is. So you make a "strong" and "obscure" password to make it hard to guess.
It's only security through obscurity to the extent that that's what passwords are. Probably the primary problem with using a GUID as a password is that only letters and numbers are used. However, a GUID is pretty long compared to most passwords. No password is secure against an exhaustive search; that's pretty obvious. Whether a GUID may or may not be based on some sort of timestamp or perhaps a MAC address is somewhat irrelevant.
The difference in the probability of guessing it versus something else is pretty minimal. Some GUIDs might be "easier" (read: quicker) to break than others. Longer is better. However, more diversity in the alphabet is also better. But again, an exhaustive search reveals all.
I recommend against using a GUID as a password (except maybe as an initial one to be changed later). Any password that has to be written down to be remembered is inherently unsafe. It will get written down.
Edit: "inherently" is inaccurate. see conversation in comments

Resources