Application cannot access env variables although they are available at the container level - node.js

A Node.js application, built and deployed using docker-compose, doesn't see any custom-set variables: console.log(process.env.VAR) logs undefined for every one of them.
Variables are set using the env_file property in the YAML file; only env_file is used. An ENV value set in the Dockerfile is accessible by the application.
docker exec -it <container-id> env does return all custom values. docker exec -it <container-id> sh returns only those set in the base image, node-alpine (wiped out by exec?).
What can be wrong with the setup?
What can be wrong with the setup?

I've found that the issue is neither with the compose file nor with the usage of the env_file field.
The issue was with the env file itself: it used spaces around the equals sign, like VAR = VAL instead of VAR=VAL.
While the dotenv npm package tolerates this (I had used a sample that came with the project as the base for deployment), Docker's env-file parser does not.
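For reference, a minimal env file in the form Docker's env_file parser expects (the variable names are examples):

```shell
# correct – parsed identically by Docker's env_file handling and by dotenv
VAR=VAL
DB_HOST=10.0.0.5

# broken under Docker's env_file parser (dotenv tolerates the spaces):
# VAR = VAL
```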

Related

How to load external config file in Docker?

I am using Docker to build a Node app image. My configuration lives in a YAML file located at source_folder/config.yaml.
When I do await readFile(new URL('../config.yaml', import.meta.url), 'utf8') in my index file, it says file not found at runtime. Adding COPY config.yaml ./ to the Dockerfile fixes it, but I don't want to copy my credentials into the image at build time.
Is there any way to load the config file after building the image?
I'm using ESM.
I use dotenv to load my env variables, and I understand the need to not include them in builds. Docker provides a runtime solution: pass the variables (or a file of them) as arguments to docker run. This is what I do to load my env:
docker run -e VARIABLE_NAME=variable_value image_to_be_executed
# or
docker run --env-file path_to_env_file image_to_be_executed
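Another runtime option, if the app needs the YAML file itself rather than individual variables: bind-mount the file when the container starts. A sketch; the /app path and the image name are assumptions:

```shell
# mount the config file from the host at container start instead of
# copying it into the image at build time
docker run -v "$PWD/config.yaml:/app/config.yaml" image_to_be_executed
```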

How to build a Node.js application with Docker for different staging environments

This should have been routine, but I haven't been able to find a way. I am using Node with Docker for packaging. I have the usual three environments: dev, qa, and prod, and three configuration files with numerous variables: dev-config.json, qa-config.json, prod-config.json. I need Docker to pick up the right file and package it as config.json inside the Docker image. How do I go about it? Thanks.
For building an image with only the correct config file included, you can use --build-arg.
Add
ARG CONFIG_FILE
...
COPY $CONFIG_FILE config.json
in your Dockerfile, and then use
docker build --build-arg CONFIG_FILE=prod-config.json .
to build your image
EDIT
The other possibility is to keep all your config files in the image and decide which one to use when the container starts. For instance, you could read the desired config file name from an environment variable (set at container runtime, not to be confused with ARG and --build-arg at image build time), which you set when you start your container.
I.e., somewhere in your Node app:
// read the config file name from the environment variable
// and have a fallback if the environment variable is not defined
const configfilename = process.env.CONFIG_FILE || "config.json";
and when you start your container you can do
docker run --env CONFIG_FILE=prod-config.json YOURIMAGE
to set the environment variable. This way, you will have only one image.
A third possibility would be to not add your configs to the image at all, but to load them from an external volume that you mount when you run the container. If you have different volumes for different configs, you can again decide at startup which volume to mount. Since you can give the config file the same name on every volume, your app does not need to be aware of any environment variables; you just have to use the correct path to your config file and keep the same file structure on all volumes.
I.e., in your Node app:
const configfile = '/config/config.json';
and then you start your container mounting the correct config directory
docker run -v /host/path/to/prod-config:/config YOURIMAGE

How to pass env variable to a json file when executing the docker run command

I'm running my Node.js-based Docker container with the docker run command below:
docker run -p 8080:7000 --env db_url=10.155.30.13 automation:v1.0.3
I'm trying to access this env variable through a separate config file in the container. The config file is JSON, as below:
{
  "db_host": "${process.env.db_url}"
}
In my Node.js code I read this db_host value to add the host IP to the listener. But when the code above runs, the container goes down as soon as it comes up. If I hard-code the value in the JSON file as below, it works fine and my container listens. Could someone please help me pass the value and access it within my JSON file?
{
  "db_host": "10.155.30.13"
}
You can get the value in the app with
const db_host = process.env.db_url || "10.155.30.13";
instead of reading it from the JSON file.
You cannot substitute environment variables in a JSON file. Instead, use dotenv or config, which let you keep default values in a config file and override them with environment variables.
Create a default config via config/default.json:
{
"db_host": "10.155.30.13"
}
Now read from the environment first, falling back to the default value from config.
app.js
const config = require('config');
const dbConfig = process.env.DB_HOST || config.get('db_host');
console.log(dbConfig)
Now run the docker container
build the container
docker build -t app .
run the container
docker run -it app
console output
10.155.30.13
Now override the default value:
docker run -e DB_HOST=192.168.0.1 -it app
console output
192.168.0.1

Setting environmental variables for node from host machine for build server

I'm using bitbucket pipelines as a build server.
I need to pass environment variables from the host machine into a .env file, which will then provide the values used in the build.
For example, let's say an environment variable in the Docker container running the build is AWS_ACCESS_KEY_ID.
In my .env file I'd like something like the following:
ACCESS_KEY=${AWS_ACCESS_KEY_ID}
I would then run the build, and the ACCESS_KEY variable would take the value of the env var in the Docker container.
My current idea for a solution involves replacing values with sed, but that feels pretty hacky. Example:
.env file contains the following line:
ACCESS_KEY=<_access_key_replace_me_>
sed -i "s/<_access_key_replace_me_>/${AWS_ACCESS_KEY_ID}/g" .env
Any better solution than this?

How can I get the environment variable from docker file

How can I get an environment variable from the Dockerfile? For example, I am adding
ENV URL_PATH="google.com"
in my Dockerfile. Can I read this URL_PATH in my Jmeter.jmx file with the help of a User Defined Variable?
On Windows it works fine with ${__env(URL_PATH)},
but in Docker it's not working. How can I solve this problem?
You can use the -e option to pass environment variables into the container when running it.
docker run -e URL_PATH=google.com ...
Docs: https://docs.docker.com/engine/reference/run/#env-environment-variables
As far as I can see, __env() is a Custom JMeter Function, therefore it is not available in vanilla JMeter, so the options are:
Amend your Dockerfile to download http://repo1.maven.org/maven2/kg/apc/jmeter-plugins-functions/2.0/jmeter-plugins-functions-2.0.jar into "lib/ext". This way you will be able to use the __env() function in the Docker environment normally. See Make Use of Docker with JMeter - Learn How for an example Docker configuration using JMeter with plugins.
Switch to the __groovy() function. Replace all occurrences of ${__env(URL_PATH)} with the following expression:
${__groovy(System.getenv('URL_PATH'),)}
