How to pass environment variables from docker-compose into the NodeJS project?

I have a NodeJS application which I want to dockerize.
The application consists of two parts:
a server part, running an API that takes data from a DB. This runs on port 3000;
a client part, which makes calls to the API endpoints of the server part. This runs on port 8080;
With this, I have a variable named "server_address" in my client part, and it has the value "localhost:3000". But here is the thing: both projects should be dockerized with separate Dockerfiles and combined in one docker-compose.yml file.
So for various reasons I have to run the containers via the docker-compose.yml file. Is it possible to connect these things somehow and pass the server address externally from docker-compose into the NodeJS project?
docker-compose.yml
version: "3"
services:
client-side-app:
image: my-client-side-docker-image
environment:
- BACKEND_SERVER="here we need to enter backend server"
ports:
- "8080:8080"
server-side-app:
image: my-server-side-docker-image
ports:
- "3000:3000"
Both Dockerfiles look like:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Having these files, my concern is:
will I be able to use the variable BACKEND_SERVER somehow in the project? And if yes, how? I'm not referring to the Dockerfile, but to the project itself.

Use process.env in your node.js code, like this:
process.env.BACKEND_SERVER
and declare the variable in your docker-compose file:
version: "3"
services:
client-side-app:
image: my-client-side-docker-image
environment:
- BACKEND_SERVER="here we need to enter backend server"
ports:
- "8080:8080"
server-side-app:
image: my-server-side-docker-image
ports:
- "3000:3000"

In addition to the previous answer, you can alternatively define variables and their values when running a container:
docker run --env variable1=value1 --env variable2=value2 <image>
Two other ways are: (1) referring to environment variables which you’ve already exported to your local environment, and (2) loading the variables from a file:
(1)
# In your ~/.bashrc (or ~/.zshrc) file: export VAR1=value1
docker run --env VAR1 <image>
(2)
cat env.list
# Add the following to env.list:
# variable1=value1
# variable2=value2
# variable3=value3
docker run --env-file env.list <image>
Those options are useful in case you don't want to mention your variables in the docker-compose file.
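However the variable reaches the container, it is read the same way in Node; a minimal sketch:
// variable1 may come from docker-compose, --env, or --env-file;
// Node sees all of them identically
console.log(process.env.variable1); // -> value1, or undefined if unset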

Related

Docker and NodeJS: could not connect to the container

I'm trying to dockerize a simple NodeJS API. I've tested it standalone and it's working, but after dockerizing it I can't connect to the container. There are two important facts: the container is permanently restarting, and I could not connect to it.
After I try to establish a connection using a GET request, the container begins to restart, and a minute later it is up for a few short seconds.
This is my Dockerfile:
FROM node:lts-buster-slim
# Create app directory
WORKDIR /opt/myapps/noderest01
COPY package.json /opt/myapps/noderest01/package.json
COPY package-lock.json /opt/myapps/noderest01/package-lock.json
RUN npm ci
COPY . /opt/myapps/noderest01
EXPOSE 3005
CMD [ "npm", "run", "dev" ]
And this is my yaml file:
services:
  rest01:
    container_name: rest01
    ports:
      - "3005:3005"
    restart: always
    build: .
    volumes:
      - rest01:/opt/myapps/noderest01
      - rest01nmodules:/opt/myapps/noderest01/node_modules
    networks:
      - node-rest01
volumes:
  rest01:
  rest01nmodules:
networks:
  node-rest01:
I used this command to create the image: docker-compose -f docker-compose.yaml up -d
Surely I need to update my yaml or Dockerfile to fix this. I've been searching for a while but can't find the origin of the problem, so I want to ask for your advice on how to fix and update my Docker files and connect to the container. If you have any suggestions, please let me know.
Best.

Docker Multi-container connection with docker compose

I am trying to create a composition where two or more docker services can connect to each other in some way.
Here is my composition.
# docker-compose.yaml
version: "3.9"
services:
  database:
    image: "strapi-postgres:test"
    restart: "always"
    ports:
      - "5435:5432"
  project:
    image: "strapi-project:test"
    command: sh -c "yarn start"
    restart: always
    ports:
      - "1337:1337"
    env_file: ".env.project"
    depends_on:
      - "database"
    links:
      - "database"
Services
database
This uses an image built from the official Postgres image.
Here is the Dockerfile:
FROM postgres:alpine
ENV POSTGRES_USER="root"
ENV POSTGRES_PASSWORD="password"
ENV POSTGRES_DB="strapi-postgres"
It uses the default exposed port 5432, forwarded to 5435 as defined in the composition.
So the database service starts at some IP address that can be found using docker inspect.
project
This is an image running a Node application (a strapi project configured to use the postgres database).
Here is the Dockerfile:
FROM node:lts-alpine
WORKDIR /project
ADD package*.json ./
ADD yarn.lock .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
and I am building the image using docker build. That gives me an image with no foreground process.
Problems
When I ran the composition, the strapi-project container exited with code 0.
Solution: so I added the command yarn start to run a foreground process.
As the project starts, it cannot connect to the database, since it tries to connect to 127.0.0.1:5432 (5432 because it should connect to the container port of the database service, not 5435). This is not possible, since it tries to connect to port 5432 inside the strapi-project container, where no process is listening.
Solution: so I took the IP address found via docker inspect, put it in a .env.project file, and passed this file to the project service of the composition.
For every docker compose up there is an incremental pattern for the composition's IP address (n'th time 172.17.0.2, n+1'th time 172.18.0.2, and so on), so every time I run the composition I need to edit .env.project.
All of these are hacky ways to patch things together. I want the Postgres database service to start first and then the project to configure itself, connect to the database, and start automatically.
Suggest me any edits, or other ways to configure them.
You've forgotten to put the CMD in your Dockerfile, which is why you get the "exited (0)" status when you try to run the container.
FROM node:lts-alpine
...
CMD yarn start
Compose automatically creates a Docker network and each service is accessible using its Compose container name as a host name. You never need to know the container-internal IP addresses and you pretty much never need to run docker inspect. (Other answers might suggest manually creating networks: or overriding container_name: and these are also unnecessary.)
You don't show where you set the database host name for your application, but an environment: variable is a common choice. If your database library doesn't already honor the standard PostgreSQL environment variables then you can reference them in code like process.env.PGHOST. Note that the host name will be different running inside a container vs. in your normal plain-Node development environment.
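For example, a minimal connection sketch with the node-postgres (pg) client, assuming that library is in use; the credentials come from the database Dockerfile above:
const { Client } = require('pg');

// PGHOST comes from the Compose environment: block (PGHOST=database).
// The fallbacks keep plain local development working.
const client = new Client({
  host: process.env.PGHOST || 'localhost',
  port: Number(process.env.PGPORT || 5432),
  user: process.env.PGUSER || 'root',
  password: process.env.PGPASSWORD || 'password',
  database: process.env.PGDATABASE || 'strapi-postgres',
});

client.connect()
  .then(() => console.log('connected to postgres'))
  .catch((err) => console.error('connection failed', err));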
A complete Compose file might look like
version: "3.8"
services:
database:
image: "strapi-postgres:test"
restart: "always"
ports:
- "5435:5432"
project:
image: "strapi-project:test"
restart: always
ports:
- "1337:1337"
environment:
- PGHOST=database
env_file: ".env.project"
depends_on:
- "database"

Docker Compose is using the env built by the Dockerfile instead of the env_file in the docker-compose.yml directory, because of webpack

First, I want to use my application's .env variables in my webpack.config.prod.js, so I referenced them in my webpack file.
I am successfully able to access the process.env.BUILD variables.
My application's .env holds the local configuration.
My Node.js web app runs fine locally, no problem at all. I want to build a Docker image of this application and use docker-compose to create the container.
I built my Docker image and everything is good so far.
Now, to create the container, instead of docker run I am using a separate folder which contains the docker-compose.yml and .env files.
My docker-compose.yml has this code:
version: '3.9'
services:
  api:
    image: 'api:latest'
    ports:
      - '17000:17000'
    env_file:
      - .env
    volumes:
      - ./logs:/app/logs
    networks:
      - default
My docker-compose .env holds the Redis details (REDIS_HOST and REDIS_PORT).
My application logs the values it uses to connect to Redis.
I started my Docker container by doing docker-compose up. The containers are created and up and running, but here is the problem:
in the console, after connecting to Redis, process.env.REDIS_HOST contains the value 'localhost' (which came from the first .env used when building the Docker image). The docker-compose .env is not being picked up.
After spending 5+ hours I found the culprit: it was webpack. In my initial code I added some env-related things to my webpack config, remember? Once I commented those out and made a new build, everything worked fine.
But my problem is how I can actually use process.env in webpack while docker-compose still uses the .env from its own directory.
Updated:
My Dockerfile just copies the dist folder, which contains bundle.js; npm start runs pm2 with bundle.js.
From what I know, webpack picks up the .env at build time, not at runtime. This means it needs the environment variables when the image is built.
The one you pass in docker-compose.yml is not used because by then your application is already built. Is that correct? In order to use your .env, you should build the image with docker-compose and pass the env variables as build arguments to your Dockerfile.
In order to build the image using your docker-compose.yml, you should do something like this:
version: '3.9'
services:
  api:
    image: 'api:latest'
    build:
      context: .
      args:
        - REDIS_HOST
        - REDIS_PORT
    ports:
      - '17000:17000'
    volumes:
      - ./logs:/app/logs
Note: the context above points to the current folder. You can change it to point to the folder where your Dockerfile (and the rest of the project) lives, or you can put your docker-compose.yml together with the rest of the project, in which case the context stays ".".
In your Dockerfile you need to specify these arguments:
FROM node:14 as build
ARG REDIS_HOST
ARG REDIS_PORT
...
With these changes you can build and run with docker-compose:
docker-compose up -d --build
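As for using the variables inside webpack itself, one common approach is webpack's DefinePlugin, which inlines the values into the bundle at build time, i.e. while RUN npm run build executes with the ARGs above in scope. A sketch, assuming an otherwise standard webpack configuration:
// webpack.config.prod.js (sketch)
const webpack = require('webpack');

module.exports = {
  // ...rest of the existing config (entry, output, loaders, etc.)...
  plugins: [
    new webpack.DefinePlugin({
      // Read at build time, so they must be set when `npm run build`
      // runs inside `docker build`; the ARG lines above provide that.
      'process.env.REDIS_HOST': JSON.stringify(process.env.REDIS_HOST),
      'process.env.REDIS_PORT': JSON.stringify(process.env.REDIS_PORT),
    }),
  ],
};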

Using Docker with Node image to develop a VuejS (NuxtJs) app

The situation
I have to work on a VueJS (NuxtJS) SPA, so I'm trying to use Docker with a Node image to avoid installing Node on my PC, but I can't figure out how to make it work.
The project
The source code is in its own application folder, since it is versioned, and at the root level there is the docker-compose.yaml file.
The folder structure
my-project-folder
├ application
| └ ...
└ docker-compose.yaml
The docker-compose.yaml
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
image: node:lts-alpine
working_dir: /app
volumes:
- ./application:/app
The problem
The container starts but quits immediately with exit status 0 (so it executed correctly), but this way I can't use it to work on the project.
Probably there is something I'm missing about the Node image or Docker in general; what I would like to do is connect to the Docker container to run npm commands like install, run start, etc., and then check the application in the browser on localhost:3000 or whatever it is.
I would suggest using a Dockerfile with node as the base image and then creating your entrypoint, which runs the application. That eliminates the need for volumes, which are used when we want to maintain some state for our containers.
Your Dockerfile may look something like this:
FROM node:lts-alpine
RUN mkdir /app
COPY application/ /app/
EXPOSE 3000
CMD npm start --prefix /app
You can then either run it directly through the docker run command or use a docker-compose.yaml like the following:
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
build:
context: .
ports:
- 3000:3000
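As a usage note: if you still want to run npm commands by hand the way the question describes, docker-compose can also run one-off commands in a service container (with the original volume-mounted compose file, the results land in ./application on the host):
docker-compose run --rm node npm install
docker-compose run --rm node npm run dev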

Run multiple Docker containers at once using docker-compose

The Problem
Currently I've created a Dockerfile and a docker-compose.yml to run my rest-api and database using docker-compose up.
What I want to do now is add another container, namely the web application (built with React). I'm a little bit confused about how to do that, since I just started learning Docker 2 days ago.
Folder Structure
This is my current folder structure:
Folder: rest-api (NodeJS)
    Dockerfile
    docker-compose.yml
The Question
In the end I want to be able to run docker-compose up to fire up both the rest-api and the web-app.
Do I need to create a separate Dockerfile in every folder and create a 'global' docker-compose.yml to link everything together?
New folder structure:
docker-compose.yml
Folder: rest-api (NodeJS)
    Dockerfile
Folder: web-app (React)
    Dockerfile
My current setup to run the rest-api and database
Dockerfile
FROM node:13.10
# The destination of the app in the container
WORKDIR /usr/src/app
# Copies package.json, package-lock.json and tsconfig.json to the specified workdir
COPY package*.json ./
COPY tsconfig.json ./
# Postgres user and database settings
ENV POSTGRES_USER root
ENV POSTGRES_PASSWORD 12345
ENV POSTGRES_DB postgres
ENV POSTGRES_URI 'postgres://postgres:12345@postgres:5432/postgres'
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  node:
    container_name: rest-api
    restart: always
    build: .
    environment:
      PORT: 3000
    ports:
      - '80:3000'
    links:
      - postgres
  postgres:
    container_name: postgres-database
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
OK, so there are quite a few ways to approach this, and it is pretty much based on your preference.
If you want to go with your proposed folder structure (which is fine), then you can, for example, do it like so:
have a Dockerfile in the root of each of your applications which builds the specific application (as you already suggested), place your docker-compose.yml file in the parent folder of both applications (exactly as you proposed), and then just make some changes to your docker-compose.yml (I only left the essential parts; note that links are no longer necessary: the internal networking will resolve the service names to the corresponding service IP addresses):
version: '3'
services:
  node:
    build:
      context: rest-api
    environment:
      PORT: 3000
    ports:
      - '3000:3000'
  web:
    image: web-app
    build:
      context: web-app
    ports:
      - 80:80
  postgres:
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
So the context is what tells Docker that what you are building is actually in a different directory, and all of the commands executed in the Dockerfile will be relative to that folder.
I also changed the port mappings, because you will probably want to access your web app via the standard HTTP port. Note that the web-app will be able to communicate with the rest-api container by using the node hostname, as long as the node service binds to 0.0.0.0:3000 (not 127.0.0.1:3000).
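The binding detail matters because 127.0.0.1 inside a container is not reachable from other containers. A minimal sketch, assuming the rest-api uses Express:
const express = require('express');
const app = express();

// '/health' is a hypothetical route used only for illustration
app.get('/health', (req, res) => res.send('ok'));

// Bind to 0.0.0.0 so other containers (e.g. the web service)
// can reach this one as http://node:3000
const port = process.env.PORT || 3000;
app.listen(port, '0.0.0.0', () => {
  console.log(`rest-api listening on ${port}`);
});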
