Why doesn't docker-compose env_file work but environment does? - node.js

When I am using env_file in docker-compose.yml it builds correctly, but when I am trying to use docker-compose my node app can't find the env_file variables inside the process.env object.
Here is my docker-compose file:
node1:
  container_name: node01
  env_file: ./env/node1.production.env
  #environment:
  #  - SOME_VALUE=9599
  build:
    context: ./node1
    dockerfile: dockerfile
  ports:
    - "3000:3000"
  networks:
    - dev_net
Here is my node1.production.env file:
SOME_VALUE=9599
When I use environment instead, my node app works fine:
DOCKER Version : 17.03
DOCKER COMPOSE Version : 1.14
OS : CentOS

It should work. I guess that you might have defined variables more than once in the node1.production.env file. Verify that the env file is correct.
From the code you gave, it seems there are no errors in the syntax you are using, and if there were, they would have been reported before the build could even start. In my case, I use an env file as follows:
env_file:
  - .env
where .env named file is present in the base directory.
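A quick way to follow this advice and check the env file for duplicate keys is a small Node script (a sketch; the parsing here is simplified compared to what docker-compose actually does):

```javascript
// Sketch of a duplicate-key check for an env file, following the advice
// above (the parsing is simplified compared to docker-compose's own).
function findDuplicateKeys(envText) {
  const seen = new Set();
  const duplicates = [];
  for (const line of envText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks and comments
    const key = trimmed.split('=')[0].trim();
    if (seen.has(key)) duplicates.push(key);
    seen.add(key);
  }
  return duplicates;
}

// A file defining SOME_VALUE twice would be reported:
console.log(findDuplicateKeys('SOME_VALUE=9599\nSOME_VALUE=1')); // [ 'SOME_VALUE' ]
```

Run it against the contents of node1.production.env; an empty array means no key is defined twice.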

Related

Docker Compose is using the env that was baked in by the Dockerfile, instead of the env_file located in the docker-compose.yml directory, because of webpack

First, I want to use my application's .env variables in my webpack.config.prod.js, so I did this in my webpack file.
I am successfully able to access process.env.BUILD variables.
My application's .env has this configuration -
My nodejs web app runs fine locally, no problem at all. I want to build a docker image of this application and need to use docker-compose to create the container.
I built my docker image and everything is good so far.
Now, to create the container, instead of docker run I am using a separate folder which contains the docker-compose.yml and .env files. (Screenshot omitted.)
My docker-compose.yml has this code -
version: '3.9'
services:
  api:
    image: 'api:latest'
    ports:
      - '17000:17000'
    env_file:
      - .env
    volumes:
      - ./logs:/app/logs
    networks:
      - default
My docker-compose .env has these Redis details -

My application has these logs -
I started my docker container with docker-compose up. The containers are created, up, and running, but the problem is:
In the console, after connecting to redis, process.env.REDIS_HOST contains the value 'localhost' (which came from the first env I used to build the docker image). The docker-compose .env is not being read.
After spending 5+ hours, I found the culprit: it was webpack. In my initial code I added some env-related things in my webpack config, right? Once I commented those out and took a new build, everything worked fine.
But my problem is how I can actually use process.env in webpack while docker-compose also uses the .env from its directory.
Update -
My Dockerfile looks like this:
It just copies the dist folder, which contains bundle.js; npm start runs pm2 run bundle.js.
From what I know, webpack picks up the .env at build time, not at runtime. This means that it needs the environment variables when the image is built.
The one you pass in docker-compose.yml is not used, because by then your application is already built. Is that correct? In order to use your .env, you should build the image with docker-compose and pass the env variables as build arguments to your Dockerfile.
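As an illustration of that build-time pickup, here is a simplified sketch of the kind of substitution webpack's DefinePlugin performs (the variable name is just an example, not taken from the asker's config):

```javascript
// Simplified sketch of build-time env inlining, similar in spirit to what
// webpack's DefinePlugin does: occurrences of `process.env.NAME` in the
// source are replaced by the literal value present at build time, so
// changing the variable at container start has no effect on the bundle.
function inlineEnv(source, name, value) {
  return source.split(`process.env.${name}`).join(JSON.stringify(value));
}

const src = 'const client = connect(process.env.REDIS_HOST);';
console.log(inlineEnv(src, 'REDIS_HOST', 'localhost'));
// const client = connect("localhost");
```

Once the value is inlined like this, the bundle no longer reads process.env at all, which is why the compose-time .env has no effect.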
In order to build the image using your docker-compose.yml, you should do something like this:
version: '3.9'
services:
  api:
    image: 'api:latest'
    build:
      context: .
      args:
        - REDIS_HOST
        - REDIS_PORT
    ports:
      - '17000:17000'
    volumes:
      - ./logs:/app/logs
Note: the context above points to the current folder. You can change it to point to the folder where your Dockerfile (and the rest of the project) is, or you can put your docker-compose.yml directly together with the rest of the project, in which case context stays ..
In your Dockerfile you need to specify these arguments:
FROM node:14 as build
ARG REDIS_HOST
ARG REDIS_PORT
...
With these changes you can build and run with docker-compose:
docker-compose up -d --build

Dockerfile and Docker Compose for NestJS app with PSQL DB where env vars are expected at runtime

I'm Dockerizing a simple Node/JS (NestJS -- but I don't think that matters for this question) web service and have some questions. This service talks to a Postgres DB. I would like to write a Dockerfile that can be used to build an image of the service (let's call it my-service) and then write a docker-compose.yml that defines a service for the Postgres DB as well as a service for my-service that uses it. That way I can build images of my-service but also have a Docker Compose config for running the service and its DB at the same time together. I think that's the way to do this (keep me honest though!). Kubernetes is not an option for me, just FYI.
The web service has a top-level directory structure like so:
my-service/
  .env
  package.json
  package-lock.json
  src/
  <lots of other stuff>
It's critical to note that in its present, non-containerized form, you have to set several environment variables ahead of time, including the Postgres DB connection info (host, port, database name, username, password, etc.). The application code fetches the values of these env vars at runtime and uses them to connect to Postgres.
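That runtime lookup typically looks something like the sketch below (the PG_* names and the defaults are assumptions for illustration, not taken from the actual service):

```javascript
// Hypothetical runtime lookup: these values come from whatever environment
// the container was started with (-e flags, --env-file, or compose).
// The PG_* names and the defaults are assumptions for illustration.
function dbConfigFromEnv(env = process.env) {
  return {
    host: env.PG_HOST || 'localhost',
    port: Number(env.PG_PORT || 5432),
    database: env.PG_DATABASE,
    user: env.PG_USER,
    password: env.PG_PASSWORD,
  };
}

console.log(dbConfigFromEnv({ PG_HOST: 'postgres', PG_DATABASE: 'my-service-db' }));
```

Because the values are read only when the process starts, nothing about them needs to be baked into the image.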
So, I need a way to write a Dockerfile and docker-compose.yml such that:
if I'm just running a container of the my-service image by itself, and want to tell it to connect to any arbitrary Postgres DB, I can pass those env vars in as (ideally) runtime arguments on the Docker CLI command (however remember the app expects them to be set as env vars); and
if I'm spinning up the my-service and its Postgres together via the Docker Compose file, I need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments, and then the container needs to set them as env vars for web service to use
Again, I think this is the correct way to go, but keep me honest!
So my best attempt -- a total WIP so far -- looks like this:
Dockerfile
FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# creates "dist" to run out of
RUN npm run build
# ideally the env vars are already set at this point via
# docker CLI arguments, so nothing to pass in here (???)
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    image: ??? anyway to say "build whats in the repo?"
    environment:
      ??? do I need to set anything here so it gets passed to the my-service
      container as env vars?
volumes:
  pgdata:
Can anyone help nudge me over the finish line here? Thanks in advance!
??? do I need to set anything here so it gets passed to the my-service
container as env vars?
Yes, you should pass the variables there. This is a principle of twelve-factor design.
need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments
If you don't put them directly in the YAML, will this option work for you?
docker-compose --env-file app.env up
Ideally, you also put
depends_on:
  - postgres
So that when you start your service, the database will also start up.
If you want to connect to a different database instance, then you can either create a separate compose file without that database, or use a different set of variables (written out, or using env_file, as mentioned)
Or you can use the NPM dotenv or config packages and load different .env files for different database environments at runtime, based on other variables such as NODE_ENV.
??? anyway to say "build whats in the repo?"
Use the build directive instead of the image directive.
Kubernetes is not an option for me, just FYI
You could use Minikube instead of Compose... Doesn't really matter, but kompose exists to convert a Docker Compose into k8s resources.
Your Dockerfile is correct. You can specify the environment variables while doing docker run like this:
docker run --name my-service -it -e PG_USER='user' -e PG_PASSWORD='pass' \
  -e PG_HOST='dbhost' -e PG_DATABASE='dbname' --expose <port> <image>
Note that the options must come before <image>; anything after the image name is passed as arguments to the container instead.
Or you can specify the environment variables with the help of an env file. Let's call it app.env. Its content would be:
PG_USER=user
PG_PASSWORD=pass
PG_DATABASE=dbname
PG_HOST=dbhost
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Now, instead of specifying multiple -e options to the docker run command, you can simply give the name of the file from which the environment variables should be picked up.
docker run --name my-service -it --env-file app.env --expose <port> <image>
In order to run postgres and your service with a single docker compose command, a few modifications need to be done in your docker-compose.yml. Let's first see the full YAML.
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: . # instead of the image directive, use build to tell docker what folder to build
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres # note: the name of the postgres service in the compose yaml
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
Now you can use the docker compose up command to run the services. If you wish to rebuild the my-service image each time, you can pass the optional --build flag: docker compose up --build.
In order to pass the environment variables from the CLI, there's only one way, which is by the use of an env file. In your case the app.env for docker-compose.yml would look like:
PG_USER=user
PG_PASSWORD=pass
#PG_DATABASE=dbname #not required as you're using 'my-service-db' as db name in compose file
#PG_HOST=dbhost #not required as service name of postgres in compose file is being used as db host
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval
Passing this app.env file using docker compose CLI command would look like this:
docker compose --env-file app.env up --build
PS: If you're rebuilding my-service each time just so that code changes are reflected in the docker container, you could use a bind mount instead. The updated docker-compose.yml in that case would look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: .
    volumes:
      - .:/usr/src/app # note the use of volumes here
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:
This way, you don't need to run docker compose build each time; making a code change in the source folder is reflected in the docker container.
You just need to add the path of your Dockerfile's folder to the build parameter in the docker-compose.yaml file, and list all the environment variables under environment.
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: path_to_your_dockerfile_folder
    environment:
      your_environment_variables_here
volumes:
  pgdata:
I am guessing that you have a folder structure like this:
project_folder/
  docker-compose.yaml
  my-service/
    Dockerfile
    .env
    package.json
    package-lock.json
    src/
    <lots of other stuff>
and your .env contains the following:
API_PORT=8082
Environment_var1=Environment_var1_value
Environment_var2=Environment_var2_value
So in your case your docker-compose file should look like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    build: ./my-service/
    environment:
      - API_PORT=8082
      - Environment_var1=Environment_var1_value
      - Environment_var2=Environment_var2_value
volumes:
  pgdata:
FYI: for this docker configuration your database connection host should be postgres (as per the service name), not localhost.

docker-compose : variables from .env not available in the service

I'm new to nodejs and I can't figure out why environment variables are not always available.
Env vars in my .env are parsed by docker-compose automatically, so I'm not using the dotenv package.
console.log(process.env.API_URL) // outputs undefined
But in an asynchronous function it is available (maybe it doesn't work in all async functions):
async function foo() {
  console.log(process.env.API_URL) // outputs http://example.com
}
After moving the console.log around in my index.ts, I've found the line where env vars become available:
// index.ts
import {createConnection} from 'typeorm'
console.log(process.env.API_URL) // outputs undefined
createConnection()
console.log(process.env.API_URL) // outputs http://example.com
Any idea why this is happening?
Edit :
I'm using this docker-compose.yml :
version: '3.7'
services:
  server:
    build:
      context: ./docker-build
      target: server-develop
    ports:
      - "3200:3000"
    volumes:
      - .:/var/app
    command: "yarn run dev"
Docker docs seem to say that the .env file is parsed by default when env_file is not specified. (Screenshot from the Docker documentation omitted.)
However I don't see env vars when using docker-compose exec server env.
But if I specify the env_file in the docker-compose.yml (see below), environment vars are correctly loaded.
version: '3.7'
services:
  server:
    build:
      context: ./docker-build
      target: server-develop
    env_file: .env # the only change
    ports:
      - "3200:3000"
    volumes:
      - .:/var/app
Maybe I misunderstood what Docker actually does when it says:
The .env file is loaded by default
You can use dotenv from here: https://www.npmjs.com/package/dotenv
Probably you are using nodemon, and as far as I know it only loads the NODE_ENV variable. By using dotenv you can just:
create a new file .env in the root folder where you put your env variables
load require('dotenv').config() as early as possible
You'll find more information on the npmjs package page itself, like changing your startup command so it automatically loads the .env file, and more.
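Independently of nodemon, the sync-vs-async timing the asker observed can be reproduced in plain Node: a value set later in the same synchronous phase is invisible to an earlier synchronous read but visible after an await, because microtasks run once the synchronous code has finished. A self-contained demo (the variable name is made up):

```javascript
// Demonstrates the timing: a value set later in the same synchronous phase
// is invisible to an earlier synchronous read, but visible after an await.
delete process.env.DEMO_URL; // make sure the variable starts unset
const reads = [];

reads.push(process.env.DEMO_URL); // read during the synchronous phase

async function readLater() {
  await Promise.resolve(); // defer the rest of the function to the microtask queue
  reads.push(process.env.DEMO_URL); // read after the synchronous phase
}
const done = readLater();

// Later in the same synchronous phase something sets the variable
// (in the question, createConnection() loading its config plays this role).
process.env.DEMO_URL = 'http://example.com';

done.then(() => console.log(reads)); // [ undefined, 'http://example.com' ]
```

This matches the asker's symptom: a top-level console.log before the env-setting call sees undefined, while code that runs after an await sees the value.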

How to pass environment variables from docker-compose into the NodeJS project?

I have a NodeJS application which I want to dockerize.
The application consists of two parts:
server part, running an API which takes data from a DB. This runs on port 3000;
client part, which makes calls to the API endpoints of the server part. This runs on port 8080;
With this, I have a variable named "server_address" in my client part, and it has the value "localhost:3000". But here is the thing: both projects should be dockerized in separate Dockerfiles and combined in one docker-compose.yml file.
So for some reasons, I have to run the docker containers via the docker-compose.yml file. Is it possible to connect these things somehow and pass the server address in externally to the NodeJS project?
docker-compose.yml
version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"
Both Dockerfiles look like this:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Having these files, I have one concern:
will I be able to use the variable BACKEND_SERVER somehow in the project? And if yes, how? I'm not referring to the Dockerfile, but to the project itself.
Use process.env in node.js code, like this
process.env.BACKEND_SERVER
Mention your variable in docker-compose file.
version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"
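On the client side, the variable can then be read like this (a sketch; the fallback URL is an assumption for running outside Docker, and inside the compose network the value would typically be http://server-side-app:3000, i.e. the service name, not localhost):

```javascript
// Sketch of reading BACKEND_SERVER in the client service. The fallback URL
// is an assumption for running outside Docker; inside the compose network
// the variable would typically hold http://server-side-app:3000.
const backend = process.env.BACKEND_SERVER || 'http://localhost:3000';

// Joins the base URL and an endpoint without doubling the slash.
function apiUrl(base, endpoint) {
  return `${base.replace(/\/$/, '')}/${endpoint.replace(/^\//, '')}`;
}

console.log(apiUrl(backend, '/users'));
```

Keeping the base URL in one place like this means only the compose file changes between local and containerized runs.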
In addition to the previous answer, you can alternatively define variables and their values when running a container:
docker run --env variable1=value1 --env variable2=value2 <image>
Two other different ways are: (1) referring to environment variables which you've already exported to your local environment, and (2) loading the variables from a file:
(1)
# In your ~/.bashrc (or ~/.zshrc) file: export VAR1=value1
docker run --env VAR1 <image>
(2)
cat env.list
# Add the following in the env.list
# variable1=value1
# variable2=value2
# variable3=value3
docker run --env-file env.list <image>
Those options are useful in case you don't want to mention your variables in the docker-compose file.

Docker Compose throws invalid type error

I'm having an issue running docker compose. Specifically, I'm getting this error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.login-service.environment contains {"REDIS_HOST": "redis-server"}, which is an invalid type, it should be a string
And here's my yml file:
version: '3'
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      - my-network
  login-service:
    tty: true
    build: .
    volumes:
      - ./:/usr/src/app
    ports:
      - 3001:3001
    depends_on:
      - redis
    networks:
      - my-network
    environment:
      - REDIS_HOST: redis
    command: bash -c "./wait-for-it.sh redis:6379 -- npm install && npm run dev"
networks:
  my-network:
Clearly the issue is where I set my environment variable, even though I've seen multiple tutorials that use the same syntax. The purpose of it is to set REDIS_HOST to whatever IP address Docker assigns to Redis when building the image. Any insights on what I may need to change to get this working?
There are two different ways of writing it: one with the = sign (list style) and one with the : sign (mapping style). Check the following examples for more information.
Docker compose environment with the = sign:
version: '3'
services:
  webserver:
    environment:
      - USER=john
      - EMAIL=john@gmail.com
Docker compose environment with the : sign:
version: '3'
services:
  webserver:
    environment:
      USER: john
      EMAIL: john@gmail.com
It happens because of the leading dash. You can try without it, like below:
environment:
  REDIS_HOST: redis
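To see why Compose complains, compare what the YAML parser hands over in each case; written out as the equivalent JavaScript structures (no YAML library needed):

```javascript
// What the YAML parser hands to Compose in each case:
//   environment:              environment:
//     - REDIS_HOST=redis        - REDIS_HOST: redis
const listOfStrings = ['REDIS_HOST=redis'];       // valid: the item is a string
const listOfMappings = [{ REDIS_HOST: 'redis' }]; // invalid: the item is an object

console.log(typeof listOfStrings[0]);  // string
console.log(typeof listOfMappings[0]); // object -> "which is an invalid type, it should be a string"
```

The `- KEY: value` form turns each list item into a one-entry mapping, which is exactly the `{"REDIS_HOST": "redis"}` object named in the error message.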
For me, when I had two environment variables for a service in my docker-compose.yml file, I was getting the error services.web.environment.1 must be a string, because one of the environment variables was
- REDIS_DATABASE_PASSWORD: ${REDIS_DATABASE_PASSWORD}
This syntax mixes the list form with the mapping form: the list item parses as a mapping instead of a string, which Compose does not accept.
With - REDIS_DATABASE_PASSWORD: ${REDIS_DATABASE_PASSWORD}, your IDE's syntax highlighting will typically color the key and the value differently, showing that the item is a key/value mapping rather than a string, and a list item under environment must be a string. The way to solve this problem is to set the environment variable using the = syntax:
- REDIS_DATABASE_PASSWORD=${REDIS_DATABASE_PASSWORD}
Now your IDE should highlight the whole item as a single string.
