I am creating a CI/CD pipeline in Cloud Build for a very basic Node.js app, with deployment to GCP App Engine standard.
Non-secret environment variables are stored in the app.yaml file. But of course I don't want to put my secrets there. In fact, I don't want to put them in any file anywhere (encrypted or not), since this file will end up on the App Engine instance and can be "viewed" by a "bad admin". There are many samples out there that suggest encrypting/decrypting complete files (and sometimes even code), but I don't want to go down that path.
I am looking for a way to set secret environment variables "in memory" as part of the CI/CD pipeline. Anyone?
I added non-secrets in the app.yaml file (env_variables) - works fine
Added encrypted secrets into my cloudbuild.yaml file (secrets) - no error
Added secretEnv: to a build step, but the values don't end up as process.env.[KEY] in App Engine
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
  dir: "appengine/hello-world/standard"
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "test-app.yaml"]
  dir: "appengine/hello-world/standard"
  secretEnv: ['API_KEY', 'API_URL']
secrets:
- kmsKeyName: projects/XXXXXXXX/locations/global/keyRings/customintegrations-secrets/cryptoKeys/integration-secrets
  secretEnv:
    API_KEY: XXQAoHgKKoHBKOURrUU2RqU+ki8XyqmTjz+ns+MEWp5Kx3hQBpgSQgATFQ5yRdW4m1TLNqNRIdHIqVJi8tn8jFrtlHIEouOzNDe/ASlOT0ZQBfl9Rf7xlvOHAa667poBq2hEoMNvOclxUQ==
    API_URL: YYQAoHgKKklo08ZsQF+/8M2bmi9nhWEtb6klyY4rNthUhSIhQ8oSQQATFQ5ywKOxaM/TLwGDmvMtCpl/1stXOOK0kgy42yipYbw/J/QZL68bMat1u4H3Hvp/GMbUVIKEb9jwUtN2xvbL
I was hoping that the secretEnv: ['API_KEY', 'API_URL'] would make the decrypted values accessible in code (process.env.API_KEY) in App Engine.
Here is a full tutorial on how to securely store env vars in your Cloud Build (trigger) settings and import them into your app.
Basically there are three steps:
Add your env vars to the 'variables' section in one of your build trigger settings
Screenshot of where to add variables in build triggers
By convention, variables set in the build trigger must begin with an underscore (_)
Configure cloudbuild.yaml (in the second step of the code example) to read in variables from your build trigger, set them as env vars, and write all env vars to a local .env file
Add the cloudbuild.yaml (below) to your project root directory
steps:
- name: node:10.15.1
  entrypoint: npm
  args: ["install"]
- name: node:10.15.1
  entrypoint: npm
  args: ["run", "create-env"]
  env:
  - 'MY_SECRET_KEY=${_MY_SECRET_KEY}'
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"
Add create-env script to package.json
"scripts": {
"create-env": "printenv > .env"
},
Read env vars from .env to your app (config.js)
Install dotenv package
npm i dotenv -S
Add a config.js to your app
// Import all env vars from .env file
require('dotenv').config()
export const MY_SECRET_KEY = process.env.MY_SECRET_KEY
console.log(MY_SECRET_KEY) // => Hello
Done! Now you may deploy your app by triggering the cloud build and your app will have access to the env vars.
Using secrets from Secret Manager
Your sample would become:
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
  dir: "appengine/hello-world/standard"
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "test-app.yaml"]
  dir: "appengine/hello-world/standard"
  secretEnv: ['API_KEY', 'API_URL']
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/<secret name>/versions/latest
    env: API_KEY
  - versionName: projects/$PROJECT_ID/secrets/<secret name 2>/versions/latest
    env: API_URL
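For this sample to work, the secrets must already exist in Secret Manager and the Cloud Build service account needs read access. A one-time setup sketch (the secret name, value, and project number are placeholders):
# Create the secret with an initial version:
echo -n "my-api-key-value" | gcloud secrets create API_KEY --replication-policy="automatic" --data-file=-
# Grant Cloud Build's service account read access:
gcloud secrets add-iam-policy-binding API_KEY \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"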
Add a step in your cloudbuild.yaml (run from a trigger) that appends the secret values to your app.yaml file before deploying:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  secretEnv: ['API_KEY', 'API_URL']
  entrypoint: 'bash'
  args:
  - -c
  - |
    echo $'\n API_KEY: '$$API_KEY >> app.yaml
    echo $'\n API_URL: '$$API_URL >> app.yaml
    gcloud app deploy
availableSecrets:
  secretManager:
  - versionName: projects/012345678901/secrets/API_KEY
    env: 'API_KEY'
  - versionName: projects/012345678901/secrets/API_URL
    env: 'API_URL'
See the following reference app.yaml:
runtime: nodejs
service: serviceone
env_variables:
  PROJECT_ID: demo
  PORT: 8080
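After the echo step runs, two lines are appended, so the deployed file would end roughly like this (decrypted values shown as placeholders). One caveat: the echoed lines begin with a single leading space, so they must match the indentation of the existing env_variables entries for the YAML to remain valid:
env_variables:
  PROJECT_ID: demo
  PORT: 8080
  API_KEY: <value of the API_KEY secret>
  API_URL: <value of the API_URL secret>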
Reference by: https://stackoverflow.com/users/13763858/cadet
Related
I am trying to get a Google Cloud Build pipeline running for a Node.js application that uses Google Cloud Build, Cloud SQL (PostgreSQL) and Prisma as the ORM. I started with the default YAML that GCP Cloud Build provides when clicking the Setup Continuous Integration button in the Cloud Run UI for an existing application. The part that is missing is the Prisma migrations for the Cloud SQL instance.
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - api/Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  args:
  - run
  - services
  - update
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  id: Deploy
  entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- api
I solved the issue with the following Cloud Build YAML. It is hosted within my git repo, so any code changes are tracked. I selected Repository as the source and pointed the trigger at the location of the cloudbuild.yaml file in my repo, rather than using the inline option in the Google Cloud Build trigger. This solution works as long as there are no breaking changes from the previous API version to the new one (for example, renaming a database field that the old application code relies on will break the old code for the short period until the new application code receives all the traffic). The way to manage that is to avoid breaking changes and migrate the data from the old column to the new column before removing the old column. Another option is to schedule downtime for DB maintenance.
Keep in mind that there is a race condition while the database migrations run but the previous version of the code is still accepting traffic before the cutover, and people using the application may receive 500 errors.
This is the updated cloudbuild.yaml with the Prisma migration step (note: This also uses Google Cloud Secret Manager for the DB):
steps:
- name: 'node:$_NODE_VERSION'
  entrypoint: 'yarn'
  id: yarn-install
  args: ['install']
  waitFor: ["-"]
- id: migrate
  name: gcr.io/cloud-builders/yarn
  env:
  - NODE_ENV=$_NODE_ENV
  entrypoint: sh
  args:
  - "-c"
  - |
    wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
    chmod +x cloud_sql_proxy
    ./cloud_sql_proxy -instances=$$_DB_HOST=tcp:$$_DB_PORT & sleep 3
    export DATABASE_URL=postgresql://$$_DB_USER:$$_DB_PASS@localhost/$$_DB_NAME?schema=public
    yarn workspace api run migrate
  secretEnv: ['_DB_USER', '_DB_PASS', '_DB_HOST', '_DB_NAME', '_DB_PORT']
  timeout: "1200s"
  waitFor: ["yarn-install"]
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - api/Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  args:
  - run
  - services
  - update
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  id: Deploy
  entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/DB_NAME/versions/latest
    env: '_DB_NAME'
  - versionName: projects/$PROJECT_ID/secrets/DB_PASS/versions/latest
    env: '_DB_PASS'
  - versionName: projects/$PROJECT_ID/secrets/DB_PORT/versions/latest
    env: '_DB_PORT'
  - versionName: projects/$PROJECT_ID/secrets/DB_USER/versions/latest
    env: '_DB_USER'
  - versionName: projects/$PROJECT_ID/secrets/DB_HOST/versions/latest
    env: '_DB_HOST'
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- api
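For context, the migrate step above assumes that Prisma reads its connection string from the DATABASE_URL environment variable, which is the standard Prisma setup; a typical schema.prisma datasource block (shown for reference, not part of the original answer) would be:
// schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}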
I'm breaking my head over this: I had the same environment working with the variables 100% (and also in the local env, of course), but I've created another App Service on Azure with the same workflow, and all of the env variables defined under App Settings (Configuration tab) are undefined when running the job in the workflow. I'm using the default YML file that Azure created when you deploy it using the Deployment Center. The start command is very simple:
"build": "node app.js",
And this is the YML file:
name: Build and deploy Node.js app to Azure Web App - xxxxxxx
on:
  push:
    branches:
      - master
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js version
        uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm run test --if-present
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: node-app
          path: .
  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: node-app
      - name: 'Deploy to Azure Web App'
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'xxxxxxxxx'
          slot-name: 'Production'
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_74C0CC726E3C4567B0FXXXXXXXXXXC }}
          package: .
No matter what I do, the process.env.X variables are all undefined, and if I list all variables over SSH on the same instance, I see the variables there, which drives me even more crazy!
Any idea?
As suggested by @Shinoy Babu, we can try adding the environment variables in the pipeline while deploying, which will be reflected in our App Service in Azure after deployment.
Also, if you want to configure it through the Azure portal, you can refer to this.
For more information, please refer to the links below:
SO THREAD | How to use environment variables in React app hosted in Azure
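For example, app settings can also be set from a pipeline or shell with the Azure CLI; a sketch with hypothetical resource names and values:
az webapp config appsettings set \
  --name xxxxxxxxx \
  --resource-group my-resource-group \
  --settings MY_SECRET_KEY=Hello ANOTHER_KEY=World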
I was trying to create a .env file in GCP Cloud Build.
The cloudbuild.yaml file:
steps:
- name: 'python:3.8'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.8'
  entrypoint: python3
  args: ['touch', '.env']
  env:
  - 'DATABASE_HOST=$$DB_HOST'
  - 'DATABASE_USER=$$DB_USER'
  - 'DATABASE_PASSWORD=$$DB_PASS'
  - 'DATABASE_NAME=$$DB_NAME'
  - 'FIREBASE_CREDENTIALS=$$FIRE_CRED'
- name: 'python:3.8'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"
I've tried several ways to do this, but it isn't solved yet.
I created substitution variables in the GCP trigger and used them in env.
The problem is this portion of the code:
- name: 'python:3.8'
  entrypoint: python3
  args: ['touch', '.env']
  env:
  - 'DATABASE_HOST=$$DB_HOST'
  - 'DATABASE_USER=$$DB_USER'
  - 'DATABASE_PASSWORD=$$DB_PASS'
  - 'DATABASE_NAME=$$DB_NAME'
  - 'FIREBASE_CREDENTIALS=$$FIRE_CRED'
Thank you in advance.
Update
I have used args: ['./create-env.py'] instead of args: ['touch', '.env'] and wrote the environment variables to .env.
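The create-env.py script itself is not shown in the question; a minimal sketch of what it could look like, assuming the env var names from the build step above:
#!/usr/bin/env python3
# Hypothetical create-env.py: write the build step's env vars to a .env file.
import os

KEYS = [
    "DATABASE_HOST",
    "DATABASE_USER",
    "DATABASE_PASSWORD",
    "DATABASE_NAME",
    "FIREBASE_CREDENTIALS",
]

with open(".env", "w") as f:
    for key in KEYS:
        f.write(f"{key}={os.environ.get(key, '')}\n")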
OK, let's start on a correct basis. In Cloud Build, each step runs a container. This runtime is based on an image (the name) and several parameters (entrypoint, args, env, ...).
env allows you to define environment variables in the runtime environment. For example:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "echo"
  args: ["$$ENV_VAR"]
  env:
  - 'ENV_VAR=WORKS'
will display WORKS. The entrypoint echo receives $$ENV_VAR as its argument; the value of this environment variable in the runtime environment equals WORKS.
Note the double $: it tells Cloud Build not to look in the substitution variables (single $) but in the runtime environment variables.
A final word: from one step to the next, the runtime is destroyed and recreated. Only the /workspace directory is kept; all other files and env vars are destroyed.
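To make that concrete, here is a small two-step sketch: a file written under /workspace in step 1 is still readable in step 2, while an env var set in step 1 is gone.
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args: ['-c', 'echo "persisted" > /workspace/note.txt && export GONE=yes']
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  # note.txt survives; GONE does not ($$ escapes the $ for the runtime shell)
  args: ['-c', 'cat /workspace/note.txt; echo "GONE=$${GONE:-unset}"']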
In the end, I'm not sure what you want to achieve:
Create a .env file from your env vars? I don't understand the entrypoint and the args of your step.
Load env vars from a .env file? If so, it's useless, because the context is discarded between steps.
So, if you need more guidance, explain the ultimate goal of your code and I will update this answer accordingly.
How should I edit my cloudbuild.yaml file so that I can pass multiple environment variables as secrets?
I have stored two authentication tokens in two separate files, SECRET1.txt and SECRET2.txt, in my local machine's current working directory.
I want to pass both of these authentication tokens as secrets to Google Cloud Build using KMS.
What should the cloudbuild.yaml file look like so that my tokens are safely accessed by Cloud Build?
I tried to use the encrypted secrets approach described here: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials
Here is what I tried for cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  secretEnv: ['SECRET1', 'SECRET2']
  timeout: "1600s"
secrets:
- kmsKeyName: projects/<Project-Name>/locations/global/keyRings/<Key-Ring-Name>/cryptoKeys/<Key-Name>
  secretEnv:
    SECRET1: <encrypted-key-base64 here>
    SECRET2: <encrypted-key-base64 here>
I am getting an error message (screenshot omitted): Cloud Build is able to read the token (I have struck it out in red in the screenshot), yet it outputs an error saying 'Error: ENOENT: no such file or directory'.
Can anyone tell me what went wrong in my approach and why Cloud Build is not able to access these authentication tokens (secrets)?
If you are decrypting a value to use as an env var for a build step, you could use the following setup as you described.
steps:
- name: "gcr.io/cloud-builders/gcloud"
  secretEnv: ['SECRET1', 'SECRET2', ...]
  timeout: "1600s"
secrets:
- kmsKeyName: projects/[Project-Name]/locations/global/keyRings/[Key-Ring-Name]/cryptoKeys/[Key-Name]
  secretEnv:
    SECRET1: [encrypted-base64-encoded-secret]
    SECRET2: [encrypted-base64-encoded-secret]
However, if you are decrypting files, you would need to decrypt them in build steps prior to where they are used, like so:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args:
  - kms
  - decrypt
  - --ciphertext-file=SECRET1.txt.enc
  - --plaintext-file=SECRET1.txt
  - --project=$PROJECT_ID
  - --location=global
  - --keyring=[KEYRING-NAME]
  - --key=[KEY-NAME]
- name: "gcr.io/cloud-builders/gcloud"
  args:
  - kms
  - decrypt
  - --ciphertext-file=SECRET2.txt.enc
  - --plaintext-file=SECRET2.txt
  - --project=$PROJECT_ID
  - --location=global
  - --keyring=[KEYRING-NAME]
  - --key=[KEY-NAME]
- name: "gcr.io/cloud-builders/gcloud"
  args:
  - [something that uses SECRET1.txt and SECRET2.txt]
timeout: "1600s"
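For completeness, the encrypted inputs in both variants are produced with gcloud kms encrypt; a sketch (keyring and key names are placeholders):
# Inline variant: encrypt a token and base64-encode it for the secrets block
echo -n "my-token" | gcloud kms encrypt \
  --plaintext-file=- --ciphertext-file=- \
  --location=global --keyring=[KEYRING-NAME] --key=[KEY-NAME] \
  | base64 -w 0   # on macOS, plain base64
# File variant: produce SECRET1.txt.enc for the decrypt steps above
gcloud kms encrypt \
  --plaintext-file=SECRET1.txt --ciphertext-file=SECRET1.txt.enc \
  --location=global --keyring=[KEYRING-NAME] --key=[KEY-NAME]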
Is there any way to inject environment variables from Cloud Build into the App Engine Standard environment?
I do not want to push my environment variables to GitHub inside the app.yaml or .env. Thus, when Cloud Build pulls and deploys, the .env file is missing and the server is unable to complete some requests.
I am trying to avoid using Datastore, as the async nature of Datastore would make the code a lot messier. I tried to use encrypted secrets found here, but that doesn't seem to work: I added the secrets to app deploy and they do not make their way into the deployment, so I assume this is not the use case for Cloud Build.
I also tried the tutorial here, to import the .env file into App Engine Standard from storage, but since Standard does not have local storage I assume it goes into the void.
So is there any way to inject the .env into the App Engine Standard environment without using Datastore, or committing app.yaml or .env to change control? Potentially using Cloud Build, KMS, or some type of storage?
Here is what I tried for cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
  secretEnv: ['SECRET1', 'SECRET2', 'SECRET3', 'SECRET4', 'SECRET5']
  timeout: "1600s"
secrets:
- kmsKeyName: projects/<Project-Name>/locations/global/keyRings/<Key-Ring-Name>/cryptoKeys/<Key-Name>
  secretEnv:
    SECRET1: <encrypted-key-base64 here>
    SECRET2: <encrypted-key-base64 here>
    SECRET3: <encrypted-key-base64 here>
    SECRET4: <encrypted-key-base64 here>
    SECRET5: <encrypted-key-base64 here>
Here is a tutorial on how to securely store env vars in your Cloud Build (trigger) settings and import them into your app.
Basically there are three steps:
Add your env vars to the 'variables' section in one of your build trigger settings
Screenshot of where to add variables in build triggers
By convention, variables set in the build trigger must begin with an underscore (_)
Configure cloudbuild.yaml (in the second step of the code example) to read in variables from your build trigger, set them as env vars, and write all env vars to a local .env file
Add the cloudbuild.yaml (below) to your project root directory
steps:
- name: node:10.15.1
  entrypoint: npm
  args: ["install"]
- name: node:10.15.1
  entrypoint: npm
  args: ["run", "create-env"]
  env:
  - 'MY_SECRET_KEY=${_MY_SECRET_KEY}'
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1600s"
Add create-env script to package.json
"scripts": {
"create-env": "printenv > .env"
},
Read env vars from .env to your app (config.js)
Install dotenv package
npm i dotenv -S
Add a config.js to your app
// Import all env vars from .env file
require('dotenv').config()
export const MY_SECRET_KEY = process.env.MY_SECRET_KEY
console.log(MY_SECRET_KEY) // => Hello
Done! Now you may deploy your app by triggering the cloud build and your app will have access to the env vars.
I have another solution, if someone is still interested in this. It should work for all languages, because the environment variables are added directly into the app.yaml file.
Add a substitution variable in the build trigger (as described in this answer).
Add environment variables to app.yaml in a way that lets them be easily substituted with the build trigger variables. Like this:
env_variables:
  SECRET_KEY: %SECRET_KEY%
Add a step in cloudbuild.yaml to substitute all %XXX% variables inside app.yaml with their values from the build trigger.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args:
  - '-c'
  - |
    sed -i 's/%SECRET_KEY%/'${_SECRET_KEY}'/g' app.yaml
    gcloud app deploy app.yaml
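One caveat worth noting: with / as the sed delimiter, this step breaks if the secret value itself contains a slash. Using a different delimiter avoids that:
sed -i 's|%SECRET_KEY%|'${_SECRET_KEY}'|g' app.yaml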
The highfivebrian answer is great, but I'm adding my slightly different solution.
1). In the root project folder we need the cloudbuild.yaml file, but I'll call it buildsettings.yaml, because the first name caused a problem for me.
In buildsettings.yaml I added this code:
steps:
- name: node
  entrypoint: npm
  args: ['install']
- name: node
  entrypoint: npm
  env:
  - 'DB_URL=${_DB_URL}'
  - 'SENDGRID_API_KEY=${_SENDGRID_API_KEY}'
  - 'CLIENT_ID=${_CLIENT_ID}'
  args: ['run', 'create-app-yaml']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
buildsettings.yaml will create the app.yaml file in Cloud Build, using the npm create-app-yaml script.
Tip: we will then use the app.yaml file to deploy our app to GCP App Engine.
2). In the root folder (next to buildsettings.yaml) we need to create create-app-yaml.js, which will run in Cloud Build when it is called from buildsettings.yaml.
In create-app-yaml.js I added this code:
require('dotenv').config();
const fs = require('fs');

const appYamlContent = `runtime: nodejs14
env_variables:
  DB_URL: ${process.env.DB_URL}
  SENDGRID_API_KEY: ${process.env.SENDGRID_API_KEY}
  CLIENT_ID: ${process.env.CLIENT_ID}`;

fs.writeFileSync('./app.yaml', appYamlContent);
This code uses the npm package dotenv (add it to package.json), takes the variables from the Cloud Build trigger variables, and creates the app.yaml file with them.
3). The app.yaml file is created in Cloud Build, and our last step (name: 'gcr.io/cloud-builders/gcloud') in buildsettings.yaml uses that app.yaml file to deploy the project to Google Cloud App Engine.
Success!
In short, it works like this: buildsettings.yaml runs create-app-yaml.js in Cloud Build, which dynamically creates an app.yaml file from the Cloud Build trigger variables, and then the deployment to App Engine happens.
Notes:
Delete the app.yaml file from your project, because it will be created dynamically in Cloud Build. Also delete the cloudbuild.yaml file, because we use buildsettings.yaml instead.
package.json: (screenshot omitted)
Cloud Build Trigger Variables: (screenshot omitted)
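The package.json screenshot is not reproduced here, but for the npm run create-app-yaml step above to work, it presumably contains a script along these lines:
"scripts": {
  "create-app-yaml": "node create-app-yaml.js"
}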
As of 2020/11/13, it seems the .env file only works in the step that creates it; in the next step the .env will no longer be there.
If you get stuck, try consuming the printed .env in a single step, like this:
in cloudbuild.yaml
# [START cloudbuild_yarn_node]
steps:
# Install
- name: node
  entrypoint: yarn
  args: ["install"]
# Build
- name: node
  entrypoint: yarn
  env:
  - "FOO=${_FOO}"
  args: ["env-build"]
and in package.json add this
{
  "scripts": {
    "env-build": "printenv > .env && yarn build"
  }
}
in index.js
require('dotenv').config();
console.log(process.env.FOO);
Took me an hour to figure this out.
First, I created a secret using GCP Secret Manager and uploaded my env file there.
Second, I referenced the secret in cloudbuild.yaml at run time and created a file named .env using echo.
Example
steps:
- id: "Injecting ENV"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args:
  - '-c'
  - |
    # Quotes around $$ENV preserve the newlines of a multi-line env file
    echo "$$ENV" > .env
  secretEnv: ['ENV']
availableSecrets:
  secretManager:
  - versionName: projects/<Project-Name>/secrets/environment-variables/versions/1
    env: 'ENV'
timeout: 900s
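The one-time setup for this approach is to upload the whole .env file as a single Secret Manager secret, matching the versionName above:
gcloud secrets create environment-variables --replication-policy="automatic"
gcloud secrets versions add environment-variables --data-file=.env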
Based on the preferences you have highlighted (Cloud Build, KMS): the Google secrets link you mentioned involves storing sensitive data at build or run time using Cloud KMS (KeyRing and CryptoKey). However, Google offers other secret management solutions using Cloud KMS as well.
Here are a couple of other options you can use while storing secrets:
Option 1: You can store secrets in code that are encrypted with a key from Cloud KMS. (This is typically done by encrypting your secret at the application layer; a sketch follows this list.)
Benefit: Provides a layer of security against insider threats, because it restricts access to the code to those holding a corresponding key.
[You can find some additional information about these options in the Google documentation here.]
Option 2: You can store secrets inside a Google Cloud Storage bucket, where your data is encrypted at rest. (Similar to option 1, this can limit access to secrets to a small group of developers.)
Benefit: Storing your secrets in a separate location ensures that if a breach of your code repository occurs, your secrets may still be protected.
[Note: Google recommends that you use two projects for proper separation of duties: one project uses Cloud KMS to manage the keys, and the other uses Cloud Storage to store the secrets.]
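As an illustration of option 1 (all names and paths below are hypothetical), decrypting an application-layer secret at startup with the Cloud KMS Node.js client could look like this:
// Hypothetical sketch: decrypt a secret that is committed to the repo encrypted.
const {KeyManagementServiceClient} = require('@google-cloud/kms');
const fs = require('fs');

async function loadSecret() {
  const client = new KeyManagementServiceClient();
  // Placeholder project/keyring/key names:
  const name = client.cryptoKeyPath('my-project', 'global', 'my-keyring', 'my-key');
  const ciphertext = fs.readFileSync('secret.enc'); // encrypted blob in the repo
  const [result] = await client.decrypt({name, ciphertext});
  return result.plaintext.toString('utf8');
}

loadSecret().then((secret) => {
  // Expose it like an env var for the rest of the app:
  process.env.MY_SECRET = secret;
});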
If the options listed above still do not meet your needs, I have found a StackOverflow question that shares a similar objective to your project (i.e., storing environment variables in GAE without Datastore).
The solution provided at that link illustrates storing keys in a client_secrets.json file that is excluded from git by listing it in .gitignore. You can find some Google examples (Python) of its usage here.