I've been reading about the various ways to store secrets securely at the machine level, and a lot of sources mention that secrets can be stored in environment variables, but warn that they're not encrypted. My question is: why can't we store an encrypted secret in an environment variable?
For example, we could use DPAPI to encrypt the secret and store the ciphertext in the environment variable; the web service running on that server then grabs the ciphertext from the environment variable and uses DPAPI to decrypt it in code. What is stopping us from doing that in a production environment?
Is it possible, just not secure? If not secure, what makes it not secure?
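To make the idea concrete, here is roughly what I have in mind at startup, sketched with JCE since DPAPI itself is a Windows/.NET facility; the SECRET_CIPHERTEXT variable name and the key-loading step are just placeholders (with real DPAPI, the machine or user profile key plays the role of that key):

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.Base64;

    public class EnvSecret {
        // Hypothetical layout: base64(IV || ciphertext) is placed in the
        // SECRET_CIPHERTEXT environment variable at deploy time.
        // Where `key` comes from is exactly the part DPAPI answers on Windows.
        static String loadSecret(SecretKey key) throws Exception {
            byte[] blob = Base64.getDecoder().decode(System.getenv("SECRET_CIPHERTEXT"));
            byte[] iv = Arrays.copyOfRange(blob, 0, 12);            // 96-bit GCM nonce
            byte[] ct = Arrays.copyOfRange(blob, 12, blob.length);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return new String(c.doFinal(ct), StandardCharsets.UTF_8);
        }
    }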
Related
My application has a lot of secret keys: the database authentication password, AWS secret keys, private decryption keys for data encrypted by clients, and so on. Although the code lives in a private repository within the org, I still need to move the keys out of the code to a safer place, to limit any chance of the production keys being exposed. I am aware of the options of storing them in a config file with limited access, or exporting them as environment variables, but I see a lot of feedback about the downsides of these approaches as well. What is the enterprise standard / best way to store the keys and limit access to them?
What are the best practices for securing passwords that are used in an application? For example, passwords for the database and other services.
plain text password in VCS
encrypted password in VCS, with the decryption key provided at application launch
a tool for securely managing secrets, like Vault
???
It all comes down to two options:
configure the application with passwords at deployment time, e.g. via a properties file, command-line parameters, or environment variables
pull passwords from a secure secrets repository such as Vault (see the sketch below)
And one rule:
Accessing production passwords should require the same privileges as modifying the production application that uses those passwords. E.g. passwords should be available only to the application and to that application's deployment script.
Details really depend on your infrastructure and requirements.
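As a rough illustration of the second option, pulling a secret from a tool like Vault over its HTTP KV API can look like this. This is only a sketch using Java's built-in HttpClient; the mount path secret/data/myapp/db is made up, the response JSON still needs parsing, and in practice you would use the official client library and proper error handling:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class VaultExample {
        public static void main(String[] args) throws Exception {
            String addr  = System.getenv("VAULT_ADDR");   // e.g. https://vault.internal:8200
            String token = System.getenv("VAULT_TOKEN");  // injected at deployment, never in VCS

            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create(addr + "/v1/secret/data/myapp/db"))  // KV v2 read; path is hypothetical
                    .header("X-Vault-Token", token)
                    .GET()
                    .build();

            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());

            // The password sits under data.data in the JSON response;
            // parse it with a JSON library rather than printing it like this.
            System.out.println(resp.body());
        }
    }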
I want to increase the safety of my web app in case of an attack.
The following components are present in my system:
Azure Web App
Azure Blob Storage
Azure SQL Database
Azure KeyVault
Now there is the scenario where the app encrypts and stores uploaded documents.
It works as follows (a rough code sketch is given after the list):
1) A user uploads a doc to the web app
2) A random encryption key is generated
3) The random encryption key is stored in Azure Key Vault
4) Azure SQL stores the blob URL and the key URL
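Roughly sketched in code (the Key Vault, blob storage, and SQL calls are placeholders, not the real Azure SDK):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.security.SecureRandom;

    public class UploadFlow {
        public static void handleUpload(byte[] document) throws Exception {
            // 2) generate a random per-document key
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey docKey = kg.generateKey();

            // encrypt the document before it goes to blob storage
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, docKey, new GCMParameterSpec(128, iv));
            byte[] ciphertext = c.doFinal(document);

            // 3) store docKey in Key Vault (placeholder - use the Azure SDK here)
            String keyUrl = storeKeyInVault(docKey);

            // 4) store the blob URL and the key URL in SQL (placeholders)
            String blobUrl = uploadToBlobStorage(iv, ciphertext);
            saveUrls(blobUrl, keyUrl);
        }

        // Hypothetical helpers standing in for Azure SDK / data-access code.
        static String storeKeyInVault(SecretKey k)            { return "https://myvault.vault.azure.net/..."; }
        static String uploadToBlobStorage(byte[] iv, byte[] ct) { return "https://..."; }
        static void saveUrls(String blobUrl, String keyUrl)    { }
    }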
Now my question is:
How does using the Key Vault make things safer if the web app instance is hacked? I mean, the client ID and client secret for accessing the Key Vault are in app.config, since we need them to read and write keys. So whether I use Key Vault or not doesn't increase safety with respect to the web app being hacked, right?
The Key Vault is an API wrapped around an HSM. What makes the Key Vault or HSM secure is that the keys cannot be extracted from them once imported or created. Also, the crypto operations (encrypt/decrypt in your case) happen inside the vault, so the keys are never exposed, even in memory.
If someone were able to hack your web application and get the credentials to your Key Vault, they could use the vault to decrypt the data. But in that case you could regenerate the credentials for the Key Vault and continue to use the same keys that are in the vault, because they were never exposed. That means any encrypted data the attacker hadn't already decrypted is still safe.
Typically HSMs aren't designed to store a large number of keys, only a few really important ones. You might want to consider a key wrapping solution where you keep one master key in the vault.
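Key wrapping in plain JCE terms looks roughly like this (a sketch; with Key Vault the wrap/unwrap would be performed by the vault itself, so the master key never leaves it):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.security.Key;

    public class KeyWrapDemo {
        public static void main(String[] args) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey masterKey = kg.generateKey();  // in practice this lives in the vault/HSM
            SecretKey dataKey   = kg.generateKey();  // per-record or per-document key

            // Wrap (encrypt) the data key under the master key...
            Cipher wrap = Cipher.getInstance("AESWrap");
            wrap.init(Cipher.WRAP_MODE, masterKey);
            byte[] wrappedKey = wrap.wrap(dataKey);  // safe to store next to the data

            // ...and unwrap it again when the data needs to be decrypted.
            Cipher unwrap = Cipher.getInstance("AESWrap");
            unwrap.init(Cipher.UNWRAP_MODE, masterKey);
            Key recovered = unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        }
    }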
You probably want to encrypt the client ID and client secret in your config and decrypt them at runtime; this adds another layer of security. Now the attacker either needs to read the keys out of your application's memory while it is running on your Cloud Service / VM (not an easy task), or needs to obtain the config file and the private key of the certificate used to encrypt your config values (easier than reading memory, but still requiring a lot of access to your system).
So whether I use Key Vault or not doesn't increase safety with respect to the web app being hacked, right?
It all depends on what level they were able to hack the site at. In the case you describe, if they obtained your source code then yes, it's game over. But it doesn't have to be that way; it truly comes down to your configuration.
However, most of the time developers forget that security is a layered approach. When you're talking about encryption of data and related measures, they are generally a last line of defense. So if a malicious actor has acquired access to the encrypted sensitive data, they have already breached other vulnerable areas.
The problem is not Key Vault but your choice of using a client secret. A client secret is a constant string, which is not considered safe. You can use a certificate and its thumbprint instead of a client secret: your application reads the .pfx file deployed with the web app and uses it to obtain the thumbprint, and once the thumbprint is retrieved successfully, the Key Vault secret becomes retrievable. Moreover, Key Vault gives you the ability to use your own certificate rather than just a masked string as a secret. This is so-called "nested encryption".
If a hacker gets access to your app.config, he gets nothing but the path of the .pfx file; he does not have the file itself, does not know where it is actually stored, or even what it looks like. Generating the same .pfx file is effectively impossible; if he could, he would break the entire crypto world.
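A rough sketch of the .pfx / thumbprint step in plain Java (the file path, alias, and password are made up, and the actual Azure AD authentication built on top of the certificate is omitted):

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.MessageDigest;
    import java.security.cert.X509Certificate;

    public class PfxThumbprint {
        public static void main(String[] args) throws Exception {
            // Load the PKCS12 (.pfx) file deployed alongside the app.
            KeyStore ks = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("/certs/keyvault-client.pfx")) {
                ks.load(in, "pfx-password".toCharArray());
            }
            X509Certificate cert = (X509Certificate) ks.getCertificate("keyvault-client");

            // The "thumbprint" is just the SHA-1 hash of the DER-encoded certificate.
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
            StringBuilder thumbprint = new StringBuilder();
            for (byte b : digest) thumbprint.append(String.format("%02X", b));
            System.out.println(thumbprint);
        }
    }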
In chapter 9 of the book Programming Grails, Burt Beckwith gives some really good insights into how to develop Grails applications that follow the OWASP Top 10 recommendations.
Specifically, I'm trying to implement the recommendation for Insecure Cryptographic Storage, which reads as follows:
Do not store passwords in config files, or even in files on the filesystem. Instead, create a web page that you use to initialize the system where people trusted with passwords enter the passwords (using SSL!) when the application starts up. Ideally, you shouldn't trust any one person with all of the information to start the system. For example, to use JCE encryption, you will need to load a java.security.KeyStore, and this requires a password, and you use this to create a javax.crypto.SecretKey, which also requires a password. Use different passwords. If two people know the key store password and two other people know the key password (it's a good idea to have backup users in case someone isn't available), then no one person can decrypt the data or be coerced into giving someone else access.
I want to secure the Amazon AWS [1] access credentials that will be used by the application to call the KMS [2] API to securely encrypt and decrypt information.
I would like an example of how this can be achieved. My initial idea is to use a service in singleton scope that holds the credentials, where those credentials are set by a controller responsible for loading the KeyStore and the SecretKey used to decrypt the previously encrypted and stored AWS access credentials.
[1] http://en.wikipedia.org/wiki/Amazon_Web_Services
[2] http://aws.amazon.com/en/kms/
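To make my initial idea more concrete, this is roughly what I picture the decryption side looking like in plain JCE; the keystore path, alias, and the Grails service/controller wiring around it are assumptions on my part:

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.io.FileInputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.KeyStore;
    import java.util.Arrays;
    import java.util.Base64;

    public class CredentialLoader {
        // storePassword and keyPassword are entered by two different people
        // through the startup page, so no single person knows both.
        public static String decryptAwsCredentials(char[] storePassword,
                                                   char[] keyPassword,
                                                   String ciphertextBase64) throws Exception {
            KeyStore ks = KeyStore.getInstance("JCEKS");
            try (FileInputStream in = new FileInputStream("/etc/myapp/keys.jceks")) {
                ks.load(in, storePassword);                                         // first password
            }
            SecretKey key = (SecretKey) ks.getKey("aws-credentials-key", keyPassword); // second password

            // Ciphertext layout assumed to be base64(IV || ciphertext), AES-GCM.
            byte[] blob = Base64.getDecoder().decode(ciphertextBase64);
            byte[] iv = Arrays.copyOfRange(blob, 0, 12);
            byte[] ct = Arrays.copyOfRange(blob, 12, blob.length);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return new String(c.doFinal(ct), StandardCharsets.UTF_8);
        }
    }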
I am creating a web application that involves logging into servers via SSH and storing the details. Credentials will include root login details.
What are the best practices for storing this data safely and securely? Authentication using asymmetric keys will be used, but that is not the concern here.
The plan is to use MongoDB and Node.js.
The best way to encrypt extremely sensitive data like that is to use AES-256.
You'll basically want to AES-256-encrypt the login credentials in some sort of file (like a CSV), and make sure the encryption key is stored somewhere extremely safe (on a computer not connected to the internet, for instance).
This is really the only way to handle that sort of information.
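The mechanics themselves are simple; here is a rough sketch in Java (your application is Node.js, so treat this as pseudocode for whichever crypto library you end up using there, and the file names are made up). Note that this still leaves the key-management question of where that key actually lives:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.SecureRandom;

    public class CredentialFileEncryptor {
        public static void main(String[] args) throws Exception {
            // One-off: generate the key and keep it off the server (offline machine, HSM, ...).
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey key = kg.generateKey();

            byte[] plaintext = Files.readAllBytes(Paths.get("credentials.csv"));

            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = c.doFinal(plaintext);

            // Store the IV alongside the ciphertext; it is not secret, only the key is.
            Files.write(Paths.get("credentials.csv.enc"), concat(iv, ciphertext));
        }

        static byte[] concat(byte[] a, byte[] b) {
            byte[] out = new byte[a.length + b.length];
            System.arraycopy(a, 0, out, 0, a.length);
            System.arraycopy(b, 0, out, a.length, b.length);
            return out;
        }
    }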