GAE: best practices for storing secret keys?

Are there any non-terrible ways of storing secret keys for Google App Engine? Or, at least, less terrible than checking them into source control?

In the meantime, Google added a Key Management Service: https://cloud.google.com/kms/
You could use it to encrypt your secrets before storing them in a database, or store them in source control encrypted. Only people with both 'decrypt' access to the KMS key and access to the encrypted secrets would be able to use them.
The fact remains that people who can deploy code will always be able to get at your secrets (assuming your GAE app needs to be able to use them), but there's no way around that as far as I can tell.
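As a rough illustration (not part of the answer above), here is what encrypting and decrypting a secret might look like with the Python KMS client; the project, location, key ring, and key names are placeholders:

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()
# Hypothetical names -- substitute your own project, key ring, and key.
key_name = client.crypto_key_path("my-project", "global", "my-keyring", "secrets-key")

# Encrypt a secret before storing it (in the datastore or in source control).
ciphertext = client.encrypt(
    request={"name": key_name, "plaintext": b"my-api-key"}
).ciphertext

# Decrypt at runtime; this requires 'decrypt' access to the KMS key.
secret = client.decrypt(
    request={"name": key_name, "ciphertext": ciphertext}
).plaintext
```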

Not exactly an answer:
If you keep keys in the model, anyone who can deploy can read the keys from the model, and deploy again to cover their tracks. While Google lets you download code (unless you disable this feature), I think it only keeps the latest copy of each numbered version.
If you keep keys in a not-checked-in config file and disable code downloads, then only people with the keys can successfully deploy, but nobody can read the keys without sneaking a backdoor into the deployment (potentially not that difficult).
At the end of the day, anyone who can deploy can get at the keys, so the question is whether you think the risk is minimized by storing keys in the datastore (which you might make backups of, for example) or on deployers' machines.
A viable alternative might be to combine the two: Store encrypted API keys in the datastore and put the master key in a config file. This has some potentially nice features:
Attackers need both access to a copy of the datastore and a copy of the config file (and presumably developers don't make backups of the datastore on a laptop and lose it on the train).
By specifying two keys in the config file, you can do key-rollover (so attackers need a datastore/config of similar age).
With asymmetric crypto, you can make it possible for developers to add an API key to the datastore without needing to read the others.
Of course, then you're uploading crypto to Google's servers, which may or may not count as "exporting" crypto with the usual legal issues (e.g. what if Google sets up an Asia-Pacific data centre?).
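A minimal sketch of the symmetric variant of that hybrid approach (the asymmetric variant would swap in a public-key scheme): the master keys live in the not-checked-in config file, and only ciphertext goes into the datastore. The `app_config` module and the entity kind here are made up:

```python
from cryptography.fernet import Fernet, MultiFernet
from google.cloud import datastore

import app_config  # the not-checked-in config file (hypothetical module)

# Listing two keys enables rollover: the first key encrypts,
# and all keys are tried in turn when decrypting.
master = MultiFernet([Fernet(k) for k in app_config.MASTER_KEYS])
client = datastore.Client()

def store_api_key(name, api_key):
    # Only the ciphertext is ever written to the datastore.
    entity = datastore.Entity(client.key("ApiKey", name))
    entity["ciphertext"] = master.encrypt(api_key.encode())
    client.put(entity)

def load_api_key(name):
    # Requires both the datastore entity and the config-file master keys.
    entity = client.get(client.key("ApiKey", name))
    return master.decrypt(entity["ciphertext"]).decode()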

There's no easy solution here. Checking keys into the repository is bad both because it checks in irrelevant configuration details and because it potentially exposes sensitive data. I generally create a configuration model for this, with exactly one entity, and set the relevant configuration options and keys on it after the first deployment (or whenever they change).
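A minimal sketch of such a singleton configuration entity, here with the App Engine ndb library (the property names are illustrative):

```python
from google.appengine.ext import ndb

class Config(ndb.Model):
    """Singleton entity holding keys set after the first deployment."""
    api_key = ndb.StringProperty()
    mail_password = ndb.StringProperty()

    @classmethod
    def get_instance(cls):
        # Creates an empty entity on first access; fill in the real
        # values afterwards (e.g. via the admin console or a one-off handler).
        return cls.get_or_insert("singleton")
```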
Alternatively, you can check in a sample configuration file, exclude the real configuration file from version control, and keep the actual keys locally. This requires some way to distribute the keys, though, and makes it impossible for a developer to deploy unless they have the production keys (and makes it all too easy to accidentally deploy the sample configuration file over the live one).

Three ways I can think of:
1. Store it in the datastore (maybe base64-encoded, for one more level of indirection).
2. Pass it as an environment variable through command-line parameters during deployment.
3. Keep a configuration file, git-ignore it, and read it on the server. If you're using a Python deployment, this file can itself be a .py file, so there's no reading and parsing of .json files; see the sketch below.
NOTE: If you take the config-file route, don't store the file in the static public folders!
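For option 3, the git-ignored file can be a plain Python module, so reading it is just an import. The file and variable names here are made up:

```python
# app_secrets.py -- git-ignored; keep it out of any static/public folder.
API_KEY = "replace-me"
DB_PASSWORD = "replace-me"
```

Application code then simply does `from app_secrets import API_KEY`, with no JSON parsing involved.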

If you are using Laravel and want to store your keys in Datastore, this package can make that easy while managing performance using caching: https://github.com/tommerrett/laravel-GAE-secret-manager

Google App Engine by default creates a credential for the app and injects it into the environment.
Google Cloud client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
So point 2 means that if you grant the permissions to your app's default service account using IAM Admin, you do not have to worry about passing JSON keys; it will work automatically.
e.g.
Suppose your application runs in App Engine Standard and wants access to Google Cloud Storage. You do not have to create a new service account; just grant the access to the default service account that ADC picks up.
REF https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
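For example, once the default service account has a Storage role, the client needs no explicit key file; it finds credentials through ADC on its own (the bucket name is a placeholder):

```python
from google.cloud import storage

# No key file passed anywhere: on App Engine the client falls back to
# the default service account via Application Default Credentials.
client = storage.Client()
for blob in client.list_blobs("my-bucket"):
    print(blob.name)
```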

Related

How can I securely store account data in an open source app?

I need to store user account data for my open source app such as email, hashed password, favourites etc.
The two options I have considered for this are: Storing info on a MongoDB Atlas database or storing in a JSON file.
Since the app is open source on GitHub, these two options create some problems. If MongoDB is used, then my API key will be exposed in the source code, which isn't that great. It will also make it difficult for people to test the project locally. If a local JSON file is used, users will be able to see all of a user's sensitive info stored inside the repository, which definitely isn't good. What are my options here to be able to securely and easily control account creation and data storage? Cheers.
Since the app is open source on GitHub, these two options create some problems. If MongoDB is used, then my API key will be exposed in the source code...
This is a good case to use environment variables on your local machine. In the code you can do something like
const apiKey = process.env.<APPLICATION_NAME>_API_KEY;
If the developers you're working with are trusted enough to handle your database credentials, then you can share them behind closed doors.
It will also make it difficult for people to test the project locally...
This is good practice for whatever future endeavors you might have with Node.js or other software development. Testing should never be done against production databases. If the database you're testing on isn't the production one, that's great, but there's still the issue of using sensitive user data for testing purposes. Banks hopefully aren't using real accounts to test transactions. What if there's an error and the account ends up drained, with no log of how much was in it from the start?
I would recommend you set up a way to fill a database with dummy data for testing purposes. This could be a SQL script which you can then commit to your repo, updating your instructions so new developers and contributors can spin up their own database for testing.
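A minimal sketch of such a seed script, here in Python with SQLite standing in for whatever database the project actually uses; the schema and dummy rows are made up:

```python
import sqlite3

# Dummy accounts only -- never real user data.
DUMMY_USERS = [
    ("alice@example.com", "fake-password-hash-1"),
    ("bob@example.com", "fake-password-hash-2"),
]

conn = sqlite3.connect("test.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (email TEXT UNIQUE, password_hash TEXT)"
)
conn.executemany("INSERT OR IGNORE INTO users VALUES (?, ?)", DUMMY_USERS)
conn.commit()
conn.close()
```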

Where to store complex configurations for an Azure Functions app?

I already know about the Azure App Configuration for storing application configurations such as connection strings for my Azure apps. However, I am now working on an Azure Functions app where I have to store a more complex configuration for my application.
The configuration consists of mappings where for each entry I have a key/id and multiple values associated with it. Ideally, I'd like to store this in a database table, but setting up a whole database just to store this configuration seems a bit excessive to me. There will be about 200 entries in this table and I don't expect this number to grow much in the future.
Is there a way to store this in a way how it can easily be edited later using an Azure App Configuration, or do I really need to create a new database just for this purpose? Is there maybe another alternative which I didn't consider so far?
The following suggestion assumes you are not going to edit that data frequently.
One way to do it is to serialize the mapping (a hash table, e.g. as JSON) and store it in the configuration section of the Function App. At runtime you can read the data from there; to edit it, copy the whole value out of the config section, edit it (e.g. in Notepad++), and write it back.
Though this is not an ideal approach, it's far better than having a dedicated DB just for this purpose (plus the DB cost).
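A sketch of what reading that looks like from a Python function, assuming the mapping is stored as JSON in an app setting; the setting name is made up:

```python
import json
import os

# App settings surface as environment variables inside a Function App.
# "ENTRY_MAPPINGS" is a hypothetical setting holding the ~200 entries as JSON.
MAPPINGS = json.loads(os.environ.get("ENTRY_MAPPINGS", "{}"))

def lookup(entry_id):
    """Return the values associated with an entry, or an empty list."""
    return MAPPINGS.get(entry_id, [])
```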

How can I use a CF-defined destination during local development?

For CAP/FaaS, can the SAP Cloud SDK be used with a destination defined on SCP CF, by means of a proxy through that destination? This would allow a single destination setup to be used for both local development and the eventual cloud runtime.
https://sap.github.io/cloud-sdk/docs/js/features/connectivity/destination-js-sdk/#service-instance
I would expect there to be an example of how to provide the credentials in VCAP_SERVICES so that the Cloud SDK could access the destination service instance, which would in turn provide access to the destination. However, that is not what is being described in that section.
First of all, the link is broken; here you can find some relevant documentation regarding the destination service.
In general, when running your application locally, it is recommended to use a convenient function mockDestinationsEnv to set destination information. You can find more details here.
Of course, you can copy the VCAP environment variables from CF and save them to your local VCAP_SERVICES to mock the runtime. But this is not recommended, for several reasons. First of all, the VCAP variables might contain lots of sensitive information from the different services bound to your application, which should be treated carefully. Secondly, it will not work for some authentication types (e.g. those related to principal propagation) that rely on a JWT. Therefore, you should never use this approach unless you are using, say, basic authentication, so that no JWT is needed.
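For reference, the local destinations environment variable that mocking helpers like mockDestinationsEnv populate is a JSON array; a minimal, made-up example with basic authentication (the only case where this shortcut is safe, as noted above):

```
destinations='[{"name": "MY_DESTINATION", "url": "https://my-backend.example.com", "username": "user", "password": "pass"}]'
```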

How to simply edit a local config file through an API

On all our servers we have .env files, which set the configs for the server (Node.js) on start.
Now I want to edit these files from an admin panel (another web service, working with the main server through an API).
Are there any best practices, or just good ideas, for how I can implement that?
First idea: create another web server on the instance, which would have only two API endpoints (read, write) and which would restart the server after editing the configs. This idea looks too heavyweight.
The second idea is to create a bash script which sends requests to the admin servers to fetch the current configs and rewrites the local .env file if it finds changes. But that would produce a lot of unnecessary requests (a request every minute, while the configs change about once a month).
What do you think? Any ideas?
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (e.g. running multiple Docker containers, rotating keys, etc.), I'd highly recommend using a K/V store and reading configuration(s) dynamically during application start. Check out HashiCorp Vault, etcd or even MongoDB.
If your configuration contains sensitive data, definitely use something like HashiCorp Vault. If you use a configuration tool like Ansible, it has ansible-vault, which will encrypt your secret(s) at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as api keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case scenario use environment variables. Almost all CI/CD tooling supports these and you can maintain separation of concerns.
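As an illustration of the K/V-store approach, a minimal sketch with the Python hvac client against Vault's KV v2 engine; the secret path and environment variable names are assumptions:

```python
import os

import hvac

# Address and token come from the environment, never from version control.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read the app's configuration once at startup ("myapp/config" is made up).
secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
config = secret["data"]["data"]  # KV v2 nests the payload under data.data
```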

Right way to store sensitive credentials for web app

I have a Java web app running on EC2 under Tomcat (a WAR) that requires various sensitive configuration parameters - for example, the credentials associated with various other AWS services. I had been setting these as environment variables, but then discovered that running Tomcat as a service removes almost all environment variables. So currently I use a simple configuration file to store these values.
I don't believe this is a wise choice going forward, however, and would like to find an alternative. What is the right way to handle this kind of sensitive information?
IAM Roles are going to be your best friend here. The official docs here will point you in the right direction. There's also a post on the AWS security blog about it here.
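With an IAM role attached to the instance, the SDK's default credential provider chain picks up temporary credentials from the instance metadata service, so nothing sensitive lands in a config file. The same idea sketched in Python with boto3 (the Java SDK's default chain behaves equivalently):

```python
import boto3

# No access keys configured anywhere: with an IAM role attached to the
# EC2 instance, boto3's default credential chain fetches temporary
# credentials from the instance metadata service automatically.
s3 = boto3.client("s3")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
```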
