Does DigitalOcean have something similar to Credstash or AWS Secrets Manager (both AWS-based)?
Trying to decide on the most secure way to store environment variables with sensitive information (database credentials, for example).
Locally, I have a .env file that is listed in my .gitignore to keep it out of version control.
If the secrets just live in a .env file or in environment variables, what's the best way to keep them secure while still letting the app run properly?
Much appreciated :)
Cheers
ADDITIONAL INFO:
I have a full-stack SPA (MongoDB, Node, React, Express) on a DigitalOcean droplet.
Doing this right is quite a loaded question, and the implementation cost depends on your risk tolerance and a variety of other factors (i.e. threat modeling). Below are just some things to consider.
At a minimum, you're going to want to ensure your sensitive configuration is encrypted at rest on disk, and assess whether it will end up in your backups as well, as part of infrastructure management, and so on. Even if you use a third-party system for credential management, you're still going to be maintaining API credentials on your host to connect to that system, or the secrets will still be accessible locally via a mount, in memory, etc.
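To make "encrypted at rest" concrete, here is a minimal sketch of decrypting an AES-GCM-encrypted configuration file at startup, with the key supplied through an environment variable. It's written in Java purely for illustration (the same idea applies to a Node app), and the file layout, environment variable name, and class name are assumptions, not a recommendation of a specific design.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ConfigDecryptor {
    // Assumed layout: a 12-byte IV followed by the AES-GCM ciphertext.
    public static String loadConfig(Path encryptedFile) throws Exception {
        // The key is injected at deploy time via the environment, so the
        // plaintext secrets never sit unencrypted on disk.
        byte[] key = Base64.getDecoder().decode(System.getenv("CONFIG_KEY"));
        byte[] blob = Files.readAllBytes(encryptedFile);

        byte[] iv = new byte[12];
        System.arraycopy(blob, 0, iv, 0, iv.length);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(128, iv));

        byte[] plaintext = cipher.doFinal(blob, iv.length, blob.length - iv.length);
        return new String(plaintext, StandardCharsets.UTF_8);
    }
}

Of course, the key now lives in the environment instead, which is exactly the trade-off described above.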
You also have to consider how strings might or might not be garbage-collected or copied around in memory, and what your risk tolerance is there.
It also goes without saying that part of your post-commit/CI process should explicitly ensure that file permissions are set as intended on your sensitive configuration files (e.g. chmod 0400).
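If that deploy or CI step happens to run on the JVM rather than in a shell, the chmod 0400 equivalent might look like this sketch (the path is a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class LockDownConfig {
    public static void main(String[] args) throws IOException {
        Path config = Paths.get("/opt/myapp/.env");  // placeholder path

        // "r--------" is the POSIX string form of chmod 0400: owner read-only.
        Set<PosixFilePermission> ownerReadOnly =
                PosixFilePermissions.fromString("r--------");
        Files.setPosixFilePermissions(config, ownerReadOnly);
    }
}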
You also want to run through an application-compromise scenario in which an attacker is able to read the file system (not memory, and not via application code injection). Use su to impersonate your application process's user on the host and see what that user can actually do with your config file. If the file is encrypted at rest and the attacker can't discover the decryption keys or otherwise decrypt it, well, you're probably in a better place than 99% of WordPress websites out there.
There's definitely more to discuss, but that should get you started. You might also consider popping over to https://security.stackexchange.com/ with this question.
Best wishes!
Related
Let's say I have an application that should run on a VPS. The app uses a configuration file that contains very important private keys, in the sense that no one should ever have access to them! I know VPS providers can easily access my files. So, how can I "hide" the sensitive data from malicious actors while still keeping it usable by the app?
I believe encryption will be of no help, since the decryption has to be done on the same machine! Also, I know running my own private server is a no-brainer; but that's not an option, unfortunately.
You cannot solve this problem. Whatever workaround you find, there will be a way for someone with access to repeat the same steps. You can only solve this if you have full control over the server (both hardware and software); otherwise, it's a lost battle.
Some links:
https://cheatsheetseries.owasp.org/cheatsheets/Key_Management_Cheat_Sheet.html
https://owaspsamm.org/model/implementation/secure-deployment/stream-b/
https://security.stackexchange.com/questions/223457/how-to-store-api-keys-when-algo-trading
You can browse Security SE for some direction and ask a more targeted question.
This problem is mitigated by using your own servers, using specialized hardware for key storage, trusting your hosting provider or cloud, and using well-designed security protocols.
But the VPS provider doesn't know how your app will decrypt the keys in the file, do they? Perhaps your app has a decryption key embedded in it, or maybe it is something even simpler. Without decompiling your app they are no closer to learning the secrets. Of course, if your "app" is just a few scripts, then they can work it out.
For example, if the first key in the file is customerID, they don't know that all the other keys are simply XORed against a hash of your customerID - they don't even know which hashing algorithm you used.
OK, that might be too simplistic if you used one of the few well-known hashes, but if there are only a few clients, it can be enough.
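For what it's worth, a minimal sketch of the kind of scheme being described (a hypothetical customerID field used to derive a keystream): this is obfuscation rather than real cryptographic protection, and the hash choice and names are just assumptions for illustration.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class XorObfuscation {
    // XOR a stored value against a hash of the (hypothetical) customerID field.
    static byte[] xorWithHash(String customerId, byte[] value) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(customerId.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[value.length];
        for (int i = 0; i < value.length; i++) {
            out[i] = (byte) (value[i] ^ digest[i % digest.length]);
        }
        return out;  // XOR is symmetric, so the same call also "decrypts"
    }
}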
Obviously, they could be listening to the network traffic your app is sending, but then that should be end-to-end encrypted already, if you are that paranoid.
The official ASP.NET Core documentation lists the following practices for appsettings.json:
Never store passwords or other sensitive data in configuration provider code or in plain text configuration files.
Don't use production secrets in development or test environments.
Specify secrets outside of the project so that they can't be accidentally committed to a source code repository.
As far as I know, appsettings.json isn't served when you host the app on IIS and therefore can't be accessed from the web. We also host the source code ourselves (i.e. on our own servers). So as far as I can tell, the only real danger is somebody managing to compromise the whole system and gaining actual access to appsettings.json itself.
But are there other reasons for keeping sensitive data outside of appsettings.json? Are there other security aspects I'm overlooking?
I know there are several questions asking how to keep the appsettings.json secure, but not what the actual risks are.
There are many reasons, but the main one you've already mentioned:
it's usually much, much easier to get access to source code than it is to get to well-guarded secrets (e.g. Azure Key Vault)
it's much easier to leak the secrets, possibly accidentally (via logs, someone looking over your shoulder, or someone with access to the CI server)
you typically won't know you've leaked them, as there's little or no auditing compared with proper systems for keeping secrets
there's no way to limit which people have access to specific secrets for specific environments
personally, I also dislike having production secrets anywhere near my development setup. If I run code as a developer, I want to be 100% sure I'm never accidentally running against a production environment ("oops, I tested that mass-delete feature...vs production"). If the prod secrets are simply not there, then there's no mistake to make
and probably many more reasons...
Basically, limiting the surface area for mistakes and security leaks limits the chance of a problem, even if there is currently no reasonable combination of factors where such a mistake or leak would happen.
Hi security-aware people,
I have recently scanned my application with a static code analysis tool, and one of the high-severity findings is a hardcoded username and password used to create a connection:
dm.getConnection(databaseUrl,"server","revres");
Why does the scanner consider this a risk for the application? I can see some downsides, such as not being able to change the password easily if it's compromised. Theoretically, someone could reverse-engineer the binaries to learn the credentials. But I don't see the advantage of storing the credentials in a config file instead, where they are easy to locate and read, unless they are encrypted. And if I encrypt them, I'll just be solving the same problem again for the encryption key...
Are there any more risks that I cannot see? Or should I use a completely different approach?
Thank you very much.
A fixed password embedded in the code will be the same for every installation, and accessible to anyone with access to the source code or binary (including the installation media).
A password read from a file can be different for each installation, and known only to those who can read the password file.
Typically, your installer will generate a unique password per site, and write that securely to the file to be read by your application. (By "securely", I mean using O_CREAT|O_EXCL to prevent symlink attacks, and with a correct selection of file location and permissions before anyone else can open it).
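On the JVM, a rough sketch of that installer step might look like the following, where StandardOpenOption.CREATE_NEW plays the role of O_CREAT|O_EXCL and the owner-only permissions are applied atomically at creation time (the path, permissions, and password format are illustrative assumptions):

import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.EnumSet;
import java.util.Set;

public class PasswordFileInstaller {
    public static void writePasswordFile(Path target) throws Exception {
        // Generate a unique, random password for this installation.
        byte[] raw = new byte[32];
        new SecureRandom().nextBytes(raw);
        String password = Base64.getEncoder().encodeToString(raw);

        // Owner read/write only, set atomically when the file is created.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
        FileAttribute<Set<PosixFilePermission>> attr =
                PosixFilePermissions.asFileAttribute(perms);

        // CREATE_NEW fails if the path already exists, so a pre-planted
        // symlink at the target location cannot redirect the write.
        try (SeekableByteChannel ch = Files.newByteChannel(
                target,
                EnumSet.of(StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE),
                attr)) {
            ch.write(ByteBuffer.wrap(password.getBytes(StandardCharsets.UTF_8)));
        }
    }
}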
This is an interesting one; I can give you examples for a .NET application (you haven't specified the runtime environment or technologies used), although my guess is Java. I hope this is still relevant and helps you.
My main advice would be to read this article and go from there: Protecting Connection Information - MSDN
There is also a page here that describes working with encrypted configuration files.
I've seen this solved using both encrypted configuration files and Windows authentication. I think that running your application as a user that is granted access only to the relevant stored procedures, folders, and so on (as little as possible, i.e. the principle of least privilege) is a good route.
I would recommend using both techniques, because then you can give the IIS application pool access to the relevant local folders and split out your user access in SQL, etc. This also makes for better auditing!
This depends on your application's needs, though. The main reason to make this configurable via a config file or environment-specific user account, I would say, is that when you come to publish your application to production, your developers do not need access to the production account information and can just work with local / system-test / UAT credentials instead.
And of course the credentials are then not stored in plain text in your source control history either; if you host in a private distributed system like Git, that could be compromised and an attacker would gain access to the credentials.
I think it depends on how accessible / secure your source code or compiled code is. Developers usually have copies of the code on their dev boxes, which are usually not nearly as secure as production servers, and so are much more easily hacked. Generally, a test user / pw is configured on the dev box, and in production, the "real" pw is stored in much more secure config files. Yes, if someone hacked into the server they could easily get the credentials, but that is much more difficult than getting into a dev box in most cases. But like I said it depends. If there is only one dev, and they have a super secure machine they work with, and the repo for their code is also super secure, then there is no effective difference.
What I do is ask the end user for the credentials initially, then encrypt and store them in a file. This way, as a dev, I don't know their connection details and passwords. The key is a hashed binary, and I store it with extra bytes poked in between. Anyone who wants to crack it would have to figure out the algorithm used, the key and initialization-vector lengths, their location, and the start and end positions of the byte sequence holding the values. A genius who also reverse-engineered my code to get all this information could break into it (but it might be easier to directly crack the end user's credentials).
What are some effective and secure methods of securing SQL queries?
In short, I would like to ensure that programmers do not see the passwords used by the application to perform queries. Something like RSA or PGP comes to mind, but I don't know how one could implement a changing password without it being encoded in the application somewhere.
Our environment is a typical Linux/MySQL.
This might be more of a process issue and less of a coding issue.
You need to strictly separate the implementation process and the roll-out process during software development. The configuration files containing the passwords must be filled with the real passwords during roll-out, not before. The programmers can work with the password for the developing environment and the roll-out team changes those passwords once the application is complete. That way the real passwords are never disclosed to the people coding the application.
If you cannot ensure that programmers do not get access to the live system, you need to encrypt the configuration files. The best way to do this depends on the programming language. I am currently working on a Java application that encrypts the .properties files with the appropriate functions from the ESAPI project and I can recommend that. If you are using other languages, you have to find equivalent mechanisms.
Any time you want to change passwords, an administrator generates a new file and encrypts it, before copying the file to the server.
If you want maximum security and do not want to store the key that decrypts the configuration on the system, an administrator can supply it whenever the system reboots. But this might take things too far, depending on your needs.
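This is not the ESAPI API itself, but a generic sketch of the pattern described above, assuming a .properties file encrypted with AES-GCM under a key derived from a passphrase that an administrator types in at startup (the file layout and KDF parameters are assumptions):

import java.io.ByteArrayInputStream;
import java.io.Console;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Properties;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class EncryptedPropertiesLoader {
    // Assumed file layout: 16-byte salt, 12-byte IV, then the AES-GCM
    // ciphertext of an ordinary java.util.Properties file.
    public static Properties load(Path file) throws Exception {
        Console console = System.console();
        char[] passphrase = console.readPassword("Configuration passphrase: ");

        byte[] blob = Files.readAllBytes(file);
        byte[] salt = Arrays.copyOfRange(blob, 0, 16);
        byte[] iv = Arrays.copyOfRange(blob, 16, 28);
        byte[] ciphertext = Arrays.copyOfRange(blob, 28, blob.length);

        // Derive the AES key from the administrator-supplied passphrase.
        SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = kdf.generateSecret(
                new PBEKeySpec(passphrase, salt, 100_000, 256)).getEncoded();

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),
                new GCMParameterSpec(128, iv));

        Properties props = new Properties();
        props.load(new ByteArrayInputStream(cipher.doFinal(ciphertext)));
        return props;
    }
}

With this pattern the decrypted values only ever exist in memory, and nothing stored on disk is enough on its own to recover them.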
If programmers don't have access to the configuration files that contain the login credentials and can't get to them through the debug or JMX interfaces then that should work. Of course that introduces other problems but that would potentially satisfy your requirement. (I am not a Qualified Security Assessor - so check with yours to be sure for PCI compliance.)
Just a general architecture question.
I know that for web sites, one can use the features built in to IIS to encrypt the connection string section. However, what I am not certain of is this... If I do this and then copy the web.config to another project, will the new project still be able to decrypt the connection strings section in the config file?
Where this becomes an issue is production database access. We don't want anyone to be able to copy the config file from production into their project and have carte blanche access to the production database.
Currently the way my company does it is to store the encrypted connection string in the registry of the server, then use a home-grown tool to read the registry and decrypt the value on the fly. This prevents someone from just looking into the registry or web config to see the connection string.
Further, for thick-client (WinForms, WPF, etc.) applications, this could be a little more problematic because, once again, I am unsure whether the IIS encryption trick will work, since the applications would not be running on IIS. We currently have a kludgy solution for this which involves the same home-grown application, but reading the encrypted string from a binary file and decrypting on the fly.
It just seems very patched together, and we are looking for a better way to do it (i.e., industry standard, current technology, etc.)
So, a more general question is this...
What approaches have you used for securing your connection strings? Especially when it comes to multiple application types accessing it, encryption, etc.
A quick Google search will show you other people's attempts at encrypting some or all of an application configuration file (e.g. Google "encrypting application configuration files").
But more often than not, I find that the better answer is properly securing the resource that you are concerned about (usually a database). Windows authentication is always preferred over SQL authentication, so that passwords do not need to be stored in the config file, though this may not always be an option. If you want to prevent access to a resource (especially if it's usually accessed through some sort of web layer, like a web service or the website itself), then host the resource on a different server (which is preferred anyway) and don't allow access to it from outside your internal network. If an attacker has access to your internal network, there are usually bigger concerns than the one resource you are trying to protect.
If you are concerned about a malicious person performing an action that even your application can't perform (like dropping a database), then ensure that the credentials the application is using doesn't have that type of permission either. This obviously doesn't prevent an attack, but it can reduce the amount of damage that is done from it.
Securing information stored in a configuration file that is located on the user's machine is generally not worth the time, IMHO. At the end of the day, the machine itself will need to be able to decrypt the information, and if the machine has the means to do it, then so does the user. You can make it hard for the user to do it, but it's usually still doable.
This isn't really a direct answer to your question, but I hope it gets you thinking down a different path that may lead to an acceptable solution.
From my understanding, the protection of encrypted connection strings, as presented for example in the article Importing and Exporting Protected Configuration RSA Key Containers, protects the connection string at the user level.
This means that only the account running IIS (NT AUTHORITY\NETWORK SERVICE) can access the cryptographic keys for decrypting the connection string. Therefore this protects only against users who are able to log on to the server holding the web.config file. But it can be extended to limit access to certain applications.
Regarding the fat client, there may be a way to narrow down the interface a bit:
Define all SQL commands as stored procedures on the server and configure the user account being used so that it is only allowed to execute those stored procedures. This limits what anyone holding the SQL login credentials can do in the database to exactly those operations.
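A minimal JDBC sketch of what the client side of that can look like, assuming a hypothetical GetCustomerOrders procedure and a login that has been granted EXECUTE on it and nothing else:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class StoredProcOnlyClient {
    public static void main(String[] args) throws Exception {
        // Credentials come from protected configuration, not from source code.
        String url = System.getenv("DB_URL");
        String user = System.getenv("DB_USER");
        String password = System.getenv("DB_PASSWORD");

        try (Connection con = DriverManager.getConnection(url, user, password);
             // The account may only EXECUTE whitelisted procedures, so ad-hoc
             // SELECT/UPDATE/DROP statements are rejected by the server.
             CallableStatement stmt = con.prepareCall("{call GetCustomerOrders(?)}")) {
            stmt.setInt(1, 42);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("order_id"));
                }
            }
        }
    }
}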
I would use the SQL DB account-management features, with specific permissions only (e.g., at its most abstract, allow the execution of read-only SQL commands) and only from allowed hosts and/or realms.