reading .env file from node - env file is not published - node.js

I am trying to read a .env file using the "dotenv" package, but process.env.DB_HOST returns undefined after deploying to Cloud Run. When I log all files in the root directory, I see everything except the .env file, even though the .env file is in the root of my project. I'm not sure why it isn't being pushed to gcloud, or whether it is. Locally, process.env.DB_HOST returns the expected value.
I used this command to deploy to Cloud Run:
gcloud builds submit --tag gcr.io/my-project/test-api:1.0.0 .

If you don't have a .gcloudignore file in your project, the gcloud CLI uses your .gitignore by default.
Create a .gcloudignore and list the files that you don't want to upload when you run gcloud CLI commands. So, don't put .env in it!
EDIT 1
When you add a .gcloudignore, the gcloud CLI no longer reads the .gitignore file and uses the .gcloudignore instead.
Therefore, you can define two different logics:
.gitignore lists the files that you don't want to push to the repository. Put the .env file in it so it is NOT committed.
.gcloudignore lists the files that you don't want to send with the gcloud CLI. DON'T put the .env file in it, so it IS included when you send your code with the gcloud CLI.
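A minimal sketch of those two lists (the entries shown are illustrative, not exhaustive):

```shell
# .gitignore: keep the secret out of the repository
cat > .gitignore <<'EOF'
node_modules/
.env
EOF

# .gcloudignore: .env is deliberately absent, so gcloud uploads it
cat > .gcloudignore <<'EOF'
.git
.gitignore
node_modules/
EOF
```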

Related

How to load external config file in Docker?

I am using Docker to build a Node app image. I keep my configuration in a YAML file located at source_folder/config.yaml.
When I do await readFile(new URL('../config.yaml', import.meta.url), 'utf8') in my index file, it says the file was not found at runtime. Adding COPY config.yaml ./ to the Dockerfile solves it, but I don't want to bake my credentials into the image at build time.
Is there any way to load the config file after building the image?
I am using ESM.
I use dotenv to load my env variables, and I understand the need to keep the file out of builds. Docker provides a runtime solution: you can pass variables individually, or a whole file, as arguments to docker run. This is what I do to load my env when using docker run:
docker run -e VARIABLE_NAME=variable_value image_to_be_executed
# or
docker run --env-file path_to_env_file image_to_be_executed
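The file passed to --env-file is a plain list of KEY=value lines (the values below are made up). Note that Docker reads each line verbatim, without the quote-stripping that dotenv performs:

```shell
cat > .env <<'EOF'
DB_HOST=localhost
DB_USER=app
EOF
# docker run --env-file .env image_to_be_executed
# makes DB_HOST and DB_USER visible inside the container
```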

Gcloud app deploy cannot ignore Dockerfile

I usually deploy my NodeJS app to Google App Engine and ignore all Docker assets when deploying via a .gcloudignore file as below:
.git
.gitignore
Dockerfile
docker-compose.yml
nginx/
redis-data/
.vscode/
.DS_Store
.prettierrc
README.md
node_modules/
.env
Last week I successfully deployed my app to App Engine without any problems. But today (with no changes except source code), it failed and threw this error:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /Users/tranphongbb/Works/unstatic/habitify-annual-report-backend/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Even after removing the .gcloudignore file and going with the skip_files option in app.yaml, it still failed.
My source tree:
.dockerignore
.eslintrc.json
.gcloudignore
.gitignore
.prettierrc
.vscode
Dockerfile
README.md
app.yaml
docker-compose.yml
nginx
package.json
src
I reproduced your issue by cloning the Node.js App Engine Flex Quickstart and adding a Dockerfile to the same folder as the app.yaml file.
Indeed, I received the same error message as you did. But I was able to see that if I move the Dockerfile to a different directory, the deploy succeeds. It seems that gcloud app deploy doesn't respect the .gcloudignore file.
For node.js in the Flexible Environment, there’s no skip_files entry in the App Engine Official Documentation.
To ignore the files listed in your .gcloudignore file, run the command gcloud beta app deploy, which worked for me to ignore the Dockerfile when using the Node.js runtime in app.yaml. Alternatively, you can use gcloud app deploy but move your Dockerfile to another directory.
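The second workaround can be scripted in a couple of lines. The docker-assets directory name here is arbitrary, and the touch merely stands in for an existing Dockerfile in this sketch:

```shell
touch Dockerfile          # stand-in for the real Dockerfile in this sketch
mkdir -p docker-assets
mv Dockerfile docker-assets/
# `gcloud app deploy` no longer sees a Dockerfile next to app.yaml;
# alternatively, `gcloud beta app deploy` skips the check entirely
```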
The purpose of the .gcloudignore file is to prevent certain files from being uploaded during App Engine, Cloud Functions, etc. deployments, which is documented here. When you run gcloud app deploy, the command checks whether there is a Dockerfile and, if so, whether app.yaml sets runtime: custom. If that condition is not met, you'll get an error message like the following:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in path/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Now the last question, why does this work with gcloud beta app deploy and not with gcloud app deploy?
Checking the source code of the Cloud SDK, which can be viewed by anyone, gcloud app deploy has the following code, which performs the verification mentioned before:
if info.runtime == 'custom':
  if has_dockerfile and has_cloudbuild:
    raise CustomRuntimeFilesError(
        ('A custom runtime must have exactly one of [{}] and [{}] in the '
         'source directory; [{}] contains both').format(
            config.DOCKERFILE, runtime_builders.Resolver.CLOUDBUILD_FILE,
            source_dir))
  elif has_dockerfile:
    log.info('Using %s found in %s', config.DOCKERFILE, source_dir)
    return False
  elif has_cloudbuild:
    log.info('Not using %s because cloudbuild.yaml was found instead.',
             config.DOCKERFILE)
    return True
  else:
    raise NoDockerfileError(
        'You must provide your own Dockerfile when using a custom runtime. '
        'Otherwise provide a "runtime" field with one of the supported '
        'runtimes.')
else:
  if has_dockerfile:
    raise DockerfileError(
        'There is a Dockerfile in the current directory, and the runtime '
        'field in {0} is currently set to [runtime: {1}]. To use your '
        'Dockerfile to build a custom runtime, set the runtime field to '
        '[runtime: custom]. To continue using the [{1}] runtime, please '
        'remove the Dockerfile from this directory.'.format(info.file,
                                                            info.runtime))
On the other hand, gcloud beta app deploy does not do this verification at all (assuming I reviewed the correct code):
if runtime == 'custom' and self in (self.ALWAYS,
                                    self.WHITELIST_BETA,
                                    self.WHITELIST_GA):
  return needs_dockerfile
In conclusion, .gcloudignore will prevent some files/folders from being uploaded, but it is not taken into account during this command's pre-checks. In this case the Dockerfile is considered because it could be part of the deployment.

Using dotenv in cloud run

I have created a .env file on my local system while developing a project. If I upload my project along with the .env file, will it work fine, or do I have to assign the env variables separately?
According to the documentation, it won't work like that.
You can set environment variables in the Console, provide them with the --set-env-vars flag during deployment from the command line, or set them in the Dockerfile with the ENV instruction.
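A sketch of the command-line route; the service name, image tag, and values here are hypothetical:

```shell
# At deploy time:
#   gcloud run deploy my-service \
#     --image gcr.io/my-project/test-api:1.0.0 \
#     --set-env-vars DB_HOST=10.0.0.5,DB_USER=app
# At run time the values arrive as ordinary environment variables,
# so process.env.DB_HOST works without dotenv. Simulated locally:
DB_HOST=10.0.0.5 sh -c 'printf "%s\n" "$DB_HOST"'   # prints 10.0.0.5
```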

gcloud app deploy failed with error - gcloud crashed FileNotFoundError - python3 app

I am trying to deploy a sample Python app which I got from another tutorial. However, the deployment fails as below:
gcloud app deploy
Beginning deployment of service [default]...
ERROR: gcloud crashed (FileNotFoundError): [Errno 2] No such file or directory:
'/Users/nileshdeshmukh/Desktop/Training/Python/FlaskIntroduction-master/env/.Python'
My app.yaml file is as below:
runtime: python3
env: standard
runtime_config:
  python_version: 3
I have all the dependencies copied into env/bin, but the build process is looking at env only.
I think the problem would be solved if the deployment process looked at env/bin, but I don't know how to force it to use that path.
The runtime_config setting is for App Engine flex only and isn't needed for App Engine Standard. You can safely remove it.
As per the error, you should ensure that all your dependencies are self-contained and shipped with your app or listed in your requirements.txt file.
Be careful: some gcloud commands use the .gitignore file to avoid sending useless files to the cloud when building your app.
You can override this behavior by creating a .gcloudignore file. It uses the same syntax as .gitignore but is taken into account only by gcloud commands, not by git. That way, you can differentiate the files sent to the cloud from the files committed to git.
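For example, a .gcloudignore that keeps a local virtualenv directory like env/ out of the upload (the entries are illustrative); dependencies are then installed from requirements.txt instead:

```shell
cat > .gcloudignore <<'EOF'
.git
.gitignore
__pycache__/
env/
EOF
```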

Can we run multiple yml files using serverless offline plugins

I have multiple yml files in different folders; how can I run all of them locally using the serverless-offline plugin?
If I'm understanding your question correctly, you have a structure something like this:
./
  serverless.yml
  /more-yml
    /functions
      lambda-x.yml
      lambda-y.yml
      lambda-z.yml
    /resources
      resource-a.yml
      resource-b.yml
You can write a script which parses all these files, runs any validations you may want on the items within, and returns a file for serverless.yml to use, so that your serverless.yml might look like this:
service: your-service
provider:
  ...

resources: ${file(./scripts/serverless/join-resources.js)}
functions: ${file(./scripts/serverless/join-lambda-functions.js)}
All this script (or scripts) needs to do is loop over a given directory, load the yml, concatenate each file's yml into a temp file, then resolve with that temp file!
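One shell-only variant of that idea, trading the JS resolver for a pre-deploy concatenation step (the file names and contents below are hypothetical); serverless.yml would then reference ${file(./build/functions.yml)} instead of the script:

```shell
# Hypothetical per-function files, one top-level key each
mkdir -p more-yml/functions build
printf 'lambda-x:\n  handler: handlers.x\n' > more-yml/functions/lambda-x.yml
printf 'lambda-y:\n  handler: handlers.y\n' > more-yml/functions/lambda-y.yml
# Because each file defines distinct top-level keys, plain concatenation
# yields one valid YAML map of all the functions
cat more-yml/functions/*.yml > build/functions.yml
```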
