Docker : React App doesn't read environment variable - node.js

I have a React App running using Docker. I want to deploy the same docker image on dev, staging and prod environment.
The value of my React environment variable (REACT_APP_PARAM) is different for each environment.
My Dockerfile looks like this:
…
COPY . .
ARG REACT_APP_PARAM
ENV REACT_APP_PARAM=$REACT_APP_PARAM
…
I use this docker-compose file to build my image:
version: "3"
services:
  frontend:
    image: my-image
    build:
      context: .
      args:
        - REACT_APP_PARAM=BuildParam
    environment:
      - REACT_APP_PARAM=${REACT_APP_PARAM}
    ports:
      - "80"
      - "443"
When I deploy and run my image on the dev environment, I use this docker-compose file:
version: "3"
services:
  frontend:
    image: my-image
    restart: always
    environment:
      - REACT_APP_PARAM=DevelopmentParam
    ports:
      - "80"
      - "443"
When I debug my application, the value of REACT_APP_PARAM is still BuildParam.
But when I list my container's environment variables with the command docker exec my-image env, the value of REACT_APP_PARAM is DevelopmentParam.
Any ideas how I can solve this, or what is the best approach to achieve that?
Thanks

After several searches, I found that environment variables are embedded into the build.
From the official React documentation:
The environment variables are embedded during the build time. Since Create React App produces a static HTML/CSS/JS bundle, it can't possibly read them at runtime. To read them at runtime, you would need to load HTML into memory on the server and replace placeholders in runtime, as described here. Alternatively you can rebuild the app on the server anytime you change them.
https://create-react-app.dev/docs/adding-custom-environment-variables/
I don't like the idea of building a different image for each environment.
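The "replace placeholders at runtime" option the docs mention is what keeps a single image per deployment. A minimal sketch of that idea, assuming the built index.html contains literal %REACT_APP_PARAM%-style tokens (the token format and names here are illustrative, not from the question):

```javascript
// Sketch of the "replace placeholders at runtime" approach from the CRA docs.
// Assumes the built index.html contains literal %NAME% tokens to be filled
// from the container's environment when the page is served.

// Replace every %NAME% token with the value from the given env map.
function injectEnv(html, env) {
  return html.replace(/%([A-Z0-9_]+)%/g, (match, name) =>
    name in env ? env[name] : match // leave unknown tokens untouched
  );
}

// A small server in the container would read the built index.html once and
// serve the injected version on each request, e.g. with Express:
//   app.get('*', (req, res) => res.send(injectEnv(indexHtml, process.env)));

module.exports = { injectEnv };
```

Because the substitution happens when the container serves the page, the same image picks up DevelopmentParam on dev and a different value on staging or prod from the compose `environment:` section.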

Related

Docker compose is using the env which was built by Dockerfile, instead of using env_file located in docker-compose.yml directory because of webpack

First, I want to use my application's .env variables in my webpack.config.prod.js, so I did this in my webpack file.
I am successfully able to access process.env.BUILD variables.
My application's .env has this configuration -
My Node.js web app runs fine locally, no problem at all. I want to build a Docker image of this application and use docker-compose to create the container.
I built my Docker image and everything is good so far.
Now, to create the container, instead of docker run I am using a separate folder which contains the docker-compose.yml and .env files. Attached the screenshot below.
My docker-compose.yml has this code -
version: '3.9'
services:
  api:
    image: 'api:latest'
    ports:
      - '17000:17000'
    env_file:
      - .env
    volumes:
      - ./logs:/app/logs
    networks:
      - default
My docker-compose .env has these Redis details:

My application has these logs -
I started my container with docker-compose up. The containers were created and are up and running, but the problem is:
In the console, after connecting to Redis, process.env.REDIS_HOST contains the value 'localhost' (which came from the first .env I used when building the Docker image). The docker-compose .env is not being read.
After spending 5+ hours, I found the culprit: it was webpack. In my initial code I added some env-related things to my webpack config, right? Once I commented those out and took a new build, everything worked fine.
But my problem is: how can I actually use process.env in webpack while docker-compose still uses the .env from its own directory?
Updated -
My Dockerfile looks like this:
It just copies the dist folder, which contains bundle.js; npm start runs pm2 run bundle.js.
From what I know, webpack picks up the .env at build time, not at runtime. This means that it needs the environment variables when the image is built.
The one you pass in docker-compose.yml is not used because by then your application is already built. Is that correct? In order to use your .env, you should build the image with docker-compose and pass the env variables as build arguments to your Dockerfile.
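The build-time baking described above usually comes from webpack's DefinePlugin, which replaces each process.env.X expression with a string literal while bundling. A sketch of the replacement map such a config would pass to the plugin (the key names here mirror the question; the helper itself is illustrative):

```javascript
// Why webpack "wins" over docker-compose: DefinePlugin replaces process.env.X
// with a literal string at build time. This helper builds the replacement map
// that webpack.config.prod.js would hand to new webpack.DefinePlugin(...).
function buildDefineMap(env, keys) {
  const map = {};
  for (const key of keys) {
    // JSON.stringify so the value lands in the bundle as a quoted string literal
    map[`process.env.${key}`] = JSON.stringify(env[key]);
  }
  return map;
}

// Usage in a webpack config (sketch):
//   plugins: [new webpack.DefinePlugin(
//     buildDefineMap(process.env, ['REDIS_HOST', 'REDIS_PORT']))]
// Whatever REDIS_HOST is when webpack runs (i.e. at docker build) is frozen
// into bundle.js; the value docker-compose sets later is never consulted.

module.exports = { buildDefineMap };
```

That is why the container's env listing shows the compose value while the bundled code still sees the build-time one.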
In order to build the image using your docker-compose.yml, you should do something like this:
version: '3.9'
services:
  api:
    image: 'api:latest'
    build:
      context: .
      args:
        - REDIS_HOST
        - REDIS_PORT
    ports:
      - '17000:17000'
    volumes:
      - ./logs:/app/logs
Note: the context above points to the current folder. You can change it to point to the folder where your Dockerfile (and the rest of the project) is, or you can put your docker-compose.yml directly alongside the rest of the project, in which case context stays ".".
In your Dockerfile you need to specify these arguments:
FROM node:14 as build
ARG REDIS_HOST
ARG REDIS_PORT
...
With these changes you can build and run with docker-compose:
docker-compose up -d --build

Accessing docker-compose arg variable in Dockerfile [duplicate]

Having the following docker-compose file:
db:
  build: .
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - ENV=test
  env_file: .env
Is there any way to use the env variables declared in docker-compose.yml (either as environment or declared in the env_file) as part of Dockerfile without declaring them in the Dockerfile? Something like this:
FROM java:7
ADD ${ENV}/data.xml /data/
CMD ["run.sh"]
Although this question was asked long ago, there is an answer to a similar question here: Pass environment variables from docker-compose to container at build stage
Basically, to use variables at the container's build time one has to define the variable in docker-compose.yml:
build:
  context: .
  args:
    MYSQL_ROOT_PASSWORD: password
    ENV: test
and then reference it in the Dockerfile using ARG:
ARG MYSQL_ROOT_PASSWORD
ARG ENV
ADD ${ENV}/data.xml /data/
Concerning environment variables defined in an *.env file, I believe that they can't be passed to the container at build time.
It works ok this way:
docker-compose.yml
version: '3.5'
services:
  container:
    build:
      context: .
      args:
        ENV: ${ENV} # from .env file
    env_file:
      - .env
Dockerfile
# from compose args
ARG ENV
ADD ${ENV}/data.xml /data/
.env
ENV=myenv
Thus all the values are taken from the .env file.
This approach goes against the 'build once, run anywhere' philosophy behind Docker and most DevOps approaches. With it, you need to build a container image for every environment you expect to use, so you can't safely say that if a container works in the dev environment it will work in staging and production, since you aren't running the same container.
You'd be better off adding all the config files you need onto the container and writing an entrypoint script that selects/copies the data for that environment to the correct location when the container starts. You can also apply this approach to other config on the container, like templated Apache config using jinja2 templates etc.

Spawn mongo server and express asynchronous sever setup script for local and docker deployment

I'm new to MEAN stack development and was wondering what's the ideal way to spin up a mongo+express environment.
Running synchronous bash script commands makes the mongo server stop further execution and listen for connections. What would be a local- and docker-compatible script to initiate the environment?
Many people use docker-compose for a situation like this. You can set up a docker-compose configuration file where you describe services that you would like to run. Each service defines a docker image. In your case, you could have mongodb, your express app and your angular app defined as services. Then, you can launch the whole stack with docker-compose up.
A sample docker-compose config file would look something like:
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: angular-client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
  express: # name of the second service
    build: express-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port forwarding
which comes from an article here: https://scotch.io/tutorials/create-a-mean-app-with-angular-2-and-docker-compose
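Within the compose network above, containers reach each other by service name, so the express service would connect to the mongo container at host "database" rather than "localhost". A sketch of building the connection string that way (the MONGO_* variable names and fallbacks are illustrative):

```javascript
// Inside a docker-compose network, services resolve each other by service
// name, so the express service connects to "database", not "localhost".
// The MONGO_* variable names and fallback values here are illustrative.
function mongoUrl(env) {
  const host = env.MONGO_HOST || 'database'; // compose service name
  const port = env.MONGO_PORT || '27017';
  const db = env.MONGO_DB || 'app';
  return `mongodb://${host}:${port}/${db}`;
}

// The express service would then connect with, e.g., mongoose or the
// official mongodb driver: mongoose.connect(mongoUrl(process.env));

module.exports = { mongoUrl };
```

Running the same code locally (outside compose) only requires setting MONGO_HOST=localhost, so one script works for both setups.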

How to pass environment variables from docker-compose into the NodeJS project?

I have a NodeJS application which I want to dockerize.
The application consists of two parts:
server part, running an API which takes data from a DB. This runs on port 3000;
client part, which makes calls to the API end-points of the server part. This runs on port 8080;
With this, I have a variable named "server_address" in my client part and it has the value "localhost:3000". But here is the thing: both projects should be dockerized with separate Dockerfiles and combined in one docker-compose.yml file.
So, for some reasons, I have to run the docker containers via the docker-compose.yml file. Is it possible to connect these things somehow and pass the server address externally from the Docker configuration into the NodeJS project?
docker-compose.yml
version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"
Both Dockerfiles look like this:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
By having these files, I have this concern:
will I be able to use the variable BACKEND_SERVER somehow in the project? And if yes, how? I'm not referring to the Dockerfile, but to the project itself.
Use process.env in your Node.js code, like this:
process.env.BACKEND_SERVER
and mention your variable in the docker-compose file:
version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"
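A small sketch of reading that compose-set variable in the Node.js client, with a local fallback (the fallback URL is illustrative). One pitfall worth noting: with the list form `- BACKEND_SERVER="..."`, the double quotes become part of the value, so the code below also strips accidental surrounding quotes:

```javascript
// Reading the value docker-compose sets via `environment:`.
// Note: in the compose list form `- BACKEND_SERVER="..."`, the quotes are
// kept as part of the value, so an unquoted assignment is usually safer.
function backendServer(env) {
  const raw = env.BACKEND_SERVER || 'http://localhost:3000'; // local fallback
  return raw.replace(/^"|"$/g, ''); // strip accidental surrounding quotes
}

// The client would then call the API with, e.g.:
//   fetch(`${backendServer(process.env)}/api/items`)

module.exports = { backendServer };
```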
In addition to the previous answer, you can alternatively define variables and their values when running a container:
docker run --env variable1=value1 --env variable2=value2 <image>
Two other ways are: (1) referring to environment variables you've already exported to your local environment and (2) loading the variables from a file:
(1)
# In your ~/.bashrc (or ~/.zshrc) file: export VAR1=value1
docker run --env VAR1 <image>
(2)
cat env.list
# Add the following in the env.list
# variable1=value1
# variable2=value2
# variable3=value3
docker run --env-file env.list <image>
Those options are useful in case you don't want to mention your variables in the docker-compose file.

Why doesn't docker-compose env_file work but environment does?

When I use env_file in docker-compose.yml it builds correctly, but when I run with docker-compose, my node app can't find the env_file variables inside the process.env object.
Here is my docker-compose file:
node1:
  container_name: node01
  env_file: ./env/node1.production.env
  #environment:
  #  - SOME_VALUE=9599
  build:
    context: ./node1
    dockerfile: dockerfile
  ports:
    - "3000:3000"
  networks:
    - dev_net
Here is my node1.production.env file:
SOME_VALUE=9599
When I use environment instead, my node app works fine:
DOCKER Version : 17.03
DOCKER COMPOSE Version : 1.14
OS : CentOS
It should work. I guess that you might have defined variables more than once in the node1.production.env file. Verify that the env file is correct.
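To check the duplicate-definition hypothesis above, an env file can be scanned for keys defined more than once. A minimal sketch (the parsing rules here are deliberately simple and only cover KEY=value lines and # comments):

```javascript
// Quick check for the duplicate-definition problem the answer suspects:
// parse env-file contents and report any key defined more than once.
function duplicateKeys(contents) {
  const seen = new Set();
  const dupes = new Set();
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const key = trimmed.split('=')[0].trim();
    if (seen.has(key)) dupes.add(key);
    seen.add(key);
  }
  return [...dupes];
}

// e.g. duplicateKeys(
//   require('fs').readFileSync('./env/node1.production.env', 'utf8'))

module.exports = { duplicateKeys };
```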
From the code you gave, there seem to be no errors in the syntax you are using, and if there were, they would have been reported before the build could even start. In my case, I use an env file as follows:
env_file:
  - .env
where .env named file is present in the base directory.
