How to update configuration file (.env file) while running docker container - node.js

I have created a Node.js app and built a docker image as per this link:
Dockerizing a Node.js web app
But I am also using a configuration file (.env) where I maintain all the environment variables and access them with process.env.<Variable_name>.
My server.js file looks like this.
'use strict';
const express = require('express');
require('dotenv').config()
// Constants
const PORT = process.env.PORT || 8080;
const HOST = process.env.HOST;
// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
And my .env file is this:
HOST=10.20.30.40
PORT=8080
I can change the IP address and port to anything without changing any code in server.js. Similarly, I want to be able to update the .env values when building the docker image.
This is my Dockerfile
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
COPY .env .
EXPOSE 8080
CMD [ "npm", "start" ]
I know I can update the .env file while building the image by passing --build-arg values, but then every time I need to change something in .env I have to rebuild the image and redeploy it. So I want to update the .env file when running the image as a container.
Is there any way to pass some arguments to the command below so that it updates the .env file inside the docker container?
docker run -p 49160:8080 -d <your username>/node-web-app

You can mount a directory into the container using -v:
docker run -v /Path/To/The/Env/file:/env-file-directory -p ...
Then target the file inside the container through the mounted directory, /env-file-directory.
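For example, a minimal sketch of how the app could load the mounted file with dotenv (the mount path /env-file-directory comes from the -v flag above; the file name .env inside it is an assumption):
// server.js -- read the variables from the mounted file instead of the one baked into the image
require('dotenv').config({ path: '/env-file-directory/.env' });
const PORT = process.env.PORT || 8080;
const HOST = process.env.HOST;
This way the image never has to be rebuilt; editing the mounted file and restarting the container is enough for the new values to be picked up.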

Related

Error: connect ECONNREFUSED 0.0.0.0:8000 when hitting Docker containerised Node.js app endpoint

I'm just starting with Docker, and though I succeed in creating an image and a container from it, I'm not succeeding in connecting to the container's port with Postman and I get Error: connect ECONNREFUSED 0.0.0.0:8000.
In my server.js file I have:
const app = require('./api/src/app');
const port = process.env.PORT || 3000; // PORT is set to 5000
app.listen(port, () => {
  console.log('App executing to port ', port);
});
In my index.js I have:
const express = require('express');
const router = express.Router();
router.get('/api', (req, res) => {
  res.status(200).send({
    success: 'true',
    message: 'Welcome to fixit',
    version: '1.0.0',
  });
});
module.exports = router;
So if I run my app with either npm start or nodemon server.js, the localhost:3000/api endpoint works as expected.
I then build a docker image for my app with the command docker build . -t fixit-server with this Dockerfile:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 5000
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
and run the container with the command docker run -d -p 8000:5000 --name fixit-container fixit-server tail -f /dev/null
and listing the containers with docker ps -a shows it running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da0e4ef12402 fixit-server "docker-entrypoint.s…" 9 seconds ago Up 8 seconds 0.0.0.0:8000->5000/tcp fixit-container
but when I hit the endpoint 0.0.0.0:8000/api I get the ECONNREFUSED error.
I tried both CMD ["npm", "start"] and CMD npm start, but I get the error both ways.
Can you see what I'm doing wrong?
Update:
@Vincenzo was using docker-machine, and to be able to check whether the app was working properly, we needed to execute the following command in the terminal:
docker-machine env
The result was:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.102:2376"
export DOCKER_CERT_PATH="/Users/vinnytwice/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Then based on the DOCKER_HOST value, we hit 192.168.99.102:8000/api and it was working.
I believe the problem is that you're never setting the PORT environment variable to 5000.
The EXPOSE instruction is effectively a no-op at runtime: it does nothing except document, for whoever uses the image, that the container listens on port 5000. You can read about it in the Docker documentation.
You need to either set the environment variable in the Dockerfile or pass it to the container at runtime to explicitly tell it that PORT is 5000.
Method 1:
You can change your Dockerfile like below:
FROM node:15.14.0
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
ENV PORT=5000
EXPOSE $PORT
# CMD ["npm", "start"]
CMD npm start
# CMD ["nodemon", "server.js"]
Method 2:
Simply use the following command to run your container:
docker run -d -p 8000:5000 --name fixit-container --env PORT=5000 fixit-server
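With either method you can confirm that the variable reached the container and that the mapped endpoint responds (the container name and the 8000:5000 mapping are taken from the command above):
docker exec fixit-container printenv PORT
curl http://localhost:8000/api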

node js docker is not running on heroku

A Node.js project in a Docker container is not running on Heroku.
Here is the source code.
Dockerfile
FROM node:14
WORKDIR /home/tor/Desktop/work/docker/speech-analysis/build
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
server.js
'use strict';
const express = require('express');
const PORT = process.env.port||8080;
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});
app.listen(PORT);
console.log("Running on http://:${PORT}");
You don't need to expose anything when building a container for Heroku; it takes care of that automatically. If you are running the same image locally, you can do:
docker build -t myapp:latest .
docker run -e PORT=8080 -p 8080:8080 -t myapp:latest
I think the environment variables are case-sensitive on Linux systems, so you need to change
const PORT = process.env.port||8080;
... to:
const PORT = process.env.PORT||8080;
... as Heroku sets an environment variable named PORT (and not port).
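A minimal sketch of the relevant lines with that fix applied (the log line is also switched to backticks, since ${PORT} is not interpolated inside ordinary double quotes):
const PORT = process.env.PORT || 8080; // Heroku injects PORT, upper case
app.listen(PORT);
console.log(`Running on port ${PORT}`); // backticks so ${PORT} is interpolated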
According to this answer, you just need to use port 80 in your EXPOSE or inside your Node.js app:
app.listen(80)
At run time, Heroku will generate a random port and bind it to 80:
docker run ... -p 46574:80 ...
So if your Node.js app is listening on port 80 inside the container, everything will be fine.

Docker container can't find path reference

I'm attempting to run a node.js server with a React frontend using a Docker container on my local Synology NAS. I was able to get the node.js server functioning using this guide.
I then attempted to add the React front end, however I'm getting this error:
ReferenceError: path is not defined ... at /app/lib/app.js:7
app.use(express.static(path.join(__dirname, 'client/build)));
I'm able to run the server locally, so it seems that this would be an issue related to Docker, but I'm not quite sure where to look to resolve the issue.
For reference, the Dockerfile I'm using:
# test using the latest node container
FROM node:latest AS teststep
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY lin ./lib
COPY test ./test
RUN npm ci --development
# test
RUN npm test
# build production packages with the latest node container
FROM node:latest AS buildstep
# Copy in package.json, install
# and build all node modules
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm ci --production
# This is our runtime container that will end up
# running on the device.
FROM node:alpine
WORKDIR /app
# Copy our node_modules into our deployable container context.
COPY --from=buildstep /app/node_modules node_modules
COPY lib ./lib
# Launch our App.
CMD ["node", "lib/app.js"]
App.js:
const express = require('express')
const app = express()
const path = require('path');
const port = 3000
app.use(express.static(path.join(__dirname, 'client/build')));
app.get('/', function(req, res) {
  res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
});
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
The problem was fixed by correcting the "lin" typo in the Dockerfile (sketched below), then deleting the existing container and running the run.sh script again.
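For reference, a minimal sketch of the corrected test stage (only the COPY line changes; the directory name lib is taken from the rest of the Dockerfile):
FROM node:latest AS teststep
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY lib ./lib
COPY test ./test
RUN npm ci --development
RUN npm test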

Console.log not working in a dockerized Node.js / Express app

I have built a Node.js app within a docker container with the following Dockerfile:
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
I am trying to console.log within an Express route; however, after docker run -p 49160:8080 -d, the console is not interactive and logs are not being echoed at all.
'use strict';
// Requires
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
// Routes
app.get('/', (req, res) => {
  // This isn't being printed anywhere
  console.log(req);
});
// Start
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
What am I doing wrong?
Remove the -d from the command you're using to run the container:
docker run -p 49160:8080
The -d option runs the container in the background, so you won't be able to see its output in your console.
If you want to keep the container running in the background and you want to access that container's shell, you can run the following command once your container is up and running:
docker exec -it <container-name> bash
Adding this comment as I was searching for the same thing and this was the first result on Google.
docker logs <container-name>
https://docs.docker.com/engine/reference/commandline/logs/
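If you want to keep -d and still watch the output live, docker logs can also follow the stream (the container name is whatever you passed to --name, or the generated name shown by docker ps):
docker logs -f <container-name>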

Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke SpringBoot REST API from my VueJS application using http://backend:8080/hello and it is failing with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application in docker-compose, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()
app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)
router.get('/*', (req, res, next) => {
res.sendFile(`${__dirname}/dist/index.html`)
})
app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is the application not able to resolve the hostname when the terminal can? Am I missing any Docker or Node.js configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a docker container and access it from the browser, the HTML and JS files are downloaded to my browser machine, which is my host, so the REST API call is made from the host machine. From my host, the docker container hostname (backend) is not resolved.
The solution is: instead of using the internal docker hostname and port (backend:8080), I need to use my host's name and the mapped port (localhost:28080) when making REST calls.
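For example, assuming the Vue code uses the browser's fetch API, the request targets the host-mapped port from docker-compose.yml rather than the service name:
// runs in the browser, so it must use the published port (28080 -> backend's 8080)
fetch('http://localhost:28080/hello')
  .then(res => res.text())
  .then(data => console.log(data));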
I would suggest:
docker ps to get the names/IDs of the running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get the backend container's IP address from the host.
Now put this IP in the frontend app and it should be able to connect to your backend.
