DRY Config for Docker build and App - node.js

First off, this became a much longer post than I expected. tl;dr: how can I make my config DRY with Docker? (I'm assuming the answer involves env variables.)
I am developing a node.js server with a MongoDB database, and I am switching over to Docker to (hopefully) simplify my environment and keep it consistent for my server. Let me start out by saying that I definitely do not understand the entire docker-container life cycle, but I am learning as I go. After a lot of searching, I cannot find a solution to what I am trying to do. I am using Docker for Windows with Hyper-V (docker version output at the bottom).
So, I would like to be able to configure:
Database name
Database user
Database password
App port
and a few other things
and allow these configured values to be used in several places, including:
Docker build phase (to set up the database)
Node App (to connect to the database)
What is the best way to set the database name, user, and password once, use them to initialize the database in the container, and then connect to the database whenever the container is spun up?
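From what I can tell from the compose docs, variable substitution reads a .env file in the project directory, so a single file could feed both the compose file and the app; a minimal sketch of what I'm after (all values illustrative):

# .env -- picked up automatically by docker-compose from the project directory
DB_NAME=myapp
DB_USER=myapp_user
DB_PASS=changeme
APP_PORT=9021

# docker-compose.yml (fragment)
services:
  myapp:
    env_file: .env                  # makes the same values visible to the node app via process.env
    ports:
      - "${APP_PORT}:${APP_PORT}"   # and they are usable in the compose file itself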
Current Setup
Currently I have a docker-compose file that is the following (./docker-compose.yml)
version: "2"
services:
myapp:
image: node:boron
volumes:
- ./dist:/usr/src/app
working_dir: /usr/src/app
command: sh -c 'npm install; npm install -g nodemon; nodemon -e js app.js'
ports:
- "9021:9021"
depends_on:
- mongo
networks:
- all
mongo:
build: ./build/mongo
networks:
- all
networks:
all:
Mongo Dockerfile (./build/mongo/Dockerfile)
# Image to make this from
FROM mongo:3
# Add files into the container for db setup
ADD initDB.js /tmp/
ADD createDBAdmin.js /tmp/
ADD mongod.conf /data/configdb
# Map the host directory mongo/myapp to /data
VOLUME /mongo/myapp:/data
RUN ls /data/*
RUN mongod -f /data/configdb/mongod.conf && sleep 5 && mongo ${DB_NAME} /tmp/createDBAdmin.js && sleep 5 && mongo ${DB_NAME} /tmp/initDB.js
CMD mongod --smallfiles --storageEngine wiredTiger
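As far as I understand, the ${DB_NAME} in that RUN line is only populated if it is declared as a build ARG and passed in from compose; a sketch of what I mean (untested):

# Dockerfile: declare the build-time variable before the RUN line that uses it
ARG DB_NAME

# docker-compose.yml: pass the value through at build time
mongo:
  build:
    context: ./build/mongo
    args:
      DB_NAME: ${DB_NAME}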
The initDB.js script (./build/mongo/initdb.js)
db.createUser(
  {
    user: "${DB_USER}",
    pwd: "${DB_PASS}",
    roles: [
      { role: "dbOwner", db: "${DB_NAME}" }
    ]
  },
  {
    w: "majority",
    wtimeout: 5000
  }
);
// For some reason this is needed to save the user
db.createCollection("test");
docker version
Client:
 Version:       1.13.0
 API version:   1.25
 Go version:    go1.7.3
 Git commit:    49bf474
 Built:         Tue Jan 17 21:19:34 2017
 OS/Arch:       windows/amd64

Server:
 Version:       1.13.0
 API version:   1.25 (minimum version 1.12)
 Go version:    go1.7.3
 Git commit:    49bf474
 Built:         Wed Jan 18 16:20:26 2017
 OS/Arch:       linux/amd64
 Experimental:  true
Extra credit
How can I persist database information outside of the container for backup purposes etc., and view the files? Do I need to map a directory on my host machine to /data/db in the container, and am I doing this correctly?
EDIT: This seems to be a good solution to the "Extra credit" question. How to deal with persistent storage (e.g. databases) in docker
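In compose terms that seems to boil down to either a named volume or a host bind mount on /data/db; a sketch:

services:
  mongo:
    build: ./build/mongo
    volumes:
      - mongodata:/data/db      # named volume: survives container removal, managed by Docker
      # or, to browse the files directly from the host:
      # - ./mongo/myapp:/data/db
volumes:
  mongodata: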

Related

How can I guarantee a docker host volume has the right permissions after multiple `docker-compose up` & `docker-compose down`

I am developing a Node.js app that uses Postgres as its database, and I wrote a "docker-compose.yml" file for Node.js & Postgres.
I created a pgdata directory, then wrote my Dockerfile to build the Node.js image:
FROM node:14.15.0
WORKDIR "/usr/src/app"
CMD [ "npm", "start" ]
And this is how I write my "docker-compose.yml" file:
version: "3.7"
services:
db:
image: postgres:13
env_file:
- ./.env.postgres
volumes:
- type: bind
source: ./pgdata
target: /var/lib/postgresql/data
volume:
nocopy: true
environment:
PGDATA: /var/lib/postgresql/data/pgdata
container_name: ${APP_NAME}-db
express-app:
image: ${APP_NAME}-express-app:1
build:
context: .
dockerfile: ./Dockerfile
volumes:
- type: bind
source: .
target: /usr/src/app
volume:
nocopy: true
env_file:
- ./.env
- ./.env.postgres
depends_on:
- db
ports:
- "${APP_PORT}:${DOCKER_CONTAINER_APP_PORT}"
environment:
POSTGRES_HOST: db
And then I start my app by running this command: docker-compose up --build, and everything was OK: docker built the image and created its containers, and my app worked fine. But when I remove its image (the node.js image, for some reason) and run the docker-compose up --build command again, it throws this error:
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Building express-app
error checking context: 'can't stat '/home/project1/pgdata/pgdata''.
ERROR: Service 'express-app' failed to build
And after that, I realized that docker had created a directory inside the mounted directory (pgdata).
So I checked its permissions with `ls -ltrha`, and this is the output:
total 12K
drwxr-xr-x  3 mjb              mjb  4.0K Feb 20 12:34 .
drwx------ 19 systemd-coredump root 4.0K Feb 20 12:34 pgdata
drwxr-xr-x  6 mjb              mjb  4.0K Feb 21 12:19 ..
When you build a Docker image, the context directory you give (typically the current directory) is sent to the Docker daemon to build. The goal of this is to be able to COPY files from the context directory into the Docker image, so you can have a self-contained image that can run without separately needing the application code mounted in.
In your setup, the database content is stored in ./pgdata, i.e. in a subdirectory of the directory you're using as the build context. The data in that directory will be owned by the user in the database container, which frequently won't be the same user as the host user (and that's okay). But since a different user now owns the ./pgdata directory, the image build sequence can't send the build context directory to the Docker daemon, hence the error you get.
You can create a .dockerignore file to tell Docker to exclude the database data from the image build (you will never want this in your image, no matter what). Put this in the same directory as the Dockerfile and docker-compose.yml file. It can contain just:
# .dockerignore
# Do not copy host's node_modules into the image; we will RUN npm install
node_modules
# Do not copy database storage into the image at all
pgdata
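A named volume instead of the bind mount would also keep the database data out of the build context (and out of host-permission trouble) entirely; a minimal sketch:

services:
  db:
    image: postgres:13
    env_file:
      - ./.env.postgres
    volumes:
      - pgdata:/var/lib/postgresql/data   # managed by Docker, never part of the build context
volumes:
  pgdata: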

Docker MongoDB immediately closes connection after receiving metadata

I'm trying to compose a docker app with two containers:
mongo
app
The mongo container works just fine; meanwhile, the app cannot connect to mongo. Neither the node.js app nor mongostat can. The weird part is, I tried to run this project on two computers with Win10, and it works normally on the other one.
These are the logs from the mongo container when I run node app.js or mongostat --uri "mongodb://mongo:27017/project" from the app container:
2019-05-22T09:33:52.225+0000 I NETWORK [conn17] received client metadata from 192.168.96.2:42916 conn17: { driver: { name: "nodejs", version: "3.1.10" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.9.125-linuxkit" }, platform: "Node.js v10.15.3, LE, mongodb-core: 3.1.9" }
2019-05-22T09:33:52.231+0000 I NETWORK [conn17] end connection 192.168.96.2:42916 (0 connections now open)
This means both containers can see each other, so the .yml file should be fine. And if the problem were with the code, then it wouldn't work on either computer.
Dockerfile:
FROM node:10.15.3-alpine
RUN apk update && apk --no-cache --virtual build-dependencies add python make g++ && apk del build-dependencies
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
COPY --chown=node:node . .
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]
docker-compose.yml:
version: "3.5"
services:
app:
container_name: app
restart: always
build: .
ports:
- "3000:3000"
networks:
- mongo
mongo:
restart: always
container_name: mongo
image: mongo
expose:
- 27017
volumes:
- mongodata:/data/db
ports:
- '27017:27017'
networks:
- mongo
volumes:
mongodata:
networks:
mongo:
external: true
snippet from app.js:
MongoClient.connect('mongodb://mongo:27017/project', { useNewUrlParser: true }, (err, client) => {
  if (err) throw err; // throws MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongo mongo:27017]
  console.log("connected");
  client.close(); // at the moment this line is not being reached because of throw err;
});
Does it help if you insert a "sleep 10" in your application before connecting to the mongo db? If so, adding something like waitforit (https://github.com/maxcnunes/waitforit) might help.
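For instance, a crude wait loop in the compose file would delay the app until mongo actually answers; a sketch, assuming the service name mongo and the busybox nc that ships in the alpine image:

app:
  build: .
  command: sh -c 'until nc -z mongo 27017; do echo waiting for mongo; sleep 1; done; node app.js'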
Since you are getting a getaddrinfo ENOTFOUND error, the mongo hostname isn't resolving. Usually, that happens for one of two reasons: 1) your containers aren't on the same network or 2) the other container isn't up and running yet. Seeing that they are on the same network, it sounds like it's something with the container being up.
To troubleshoot, I would start another container, put it on the network, and validate the mongo hostname resolves.
docker container run --rm -ti --network mongo ubuntu
$ apt update && apt install -y dnsutils
$ dig mongo
At this point, you should see the A record resolve to the database. If not, validate the mongo database container is up and running.
You can also try doing this within your app container as well. If that's working, then using something like waitforit should work. This is a common issue, as apps may start up before the database is either running or ready to accept connections.
As one other item of feedback, you don't need to expose the mongo port. This is making it accessible to the world, which most likely isn't what you want. You can still do container-to-container communication without exposing the port.
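For example, the mongo service could drop both the expose and ports entries (a sketch):

mongo:
  restart: always
  image: mongo
  volumes:
    - mongodata:/data/db
  networks:
    - mongo
  # no ports/expose needed: the app reaches it at mongo:27017 over the shared network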
After hours of trying multiple things, I found the solution: turn off the Windows Firewall. That's it.
Thanks, I appreciate your help.

Can't access localhost:8080 with docker on Windows 10

When running my docker-compose-development.yaml on my computer, I can't connect to http://localhost:8080.
Also, I can run docker-compose -f docker-compose-development.yaml exec web curl http://localhost:8080 and I got a result. So it seems to not be a code problem.
What I've already done:
Connected directly to the container IP found with $ docker inspect ...
Tried on another Windows 10 laptop (it works there)
Changed localhost to 127.0.0.1 or 0.0.0.0
Tried a port other than 8080
This is my `docker version`:

Client:
 Version:       17.11.0-ce
 API version:   1.34
 Go version:    go1.8.4
 Git commit:    1caf76c
 Built:         Mon Nov 20 18:30:11 2017
 OS/Arch:       windows/amd64

Server:
 Version:       17.11.0-ce
 API version:   1.34 (minimum version 1.12)
 Go version:    go1.8.5
 Git commit:    1caf76c
 Built:         Mon Nov 20 18:39:28 2017
 OS/Arch:       linux/amd64
 Experimental:  true
This is my Dockerfile:
FROM node:9.1-alpine
RUN npm install -g nodemon
WORKDIR /opt/webserver/
COPY . /opt/webserver
RUN npm install
CMD ["npm","run","start"]
EXPOSE 8080
RUN rm -rf /tmp/* /var/tmp/*
This is my docker-compose-development.yaml:
version: "3"
services:
web:
image: registry.gitlab.com/soundtrack/webapp
ports:
- "8080:8080"
links:
- database
volumes:
- ".:/opt/webserver:rw"
database:
image: mongo:3.4.10
ps command from docker-compose:
$ docker-compose -f .\docker-compose-development.yaml ps
Name                Command                       State   Ports
--------------------------------------------------------------------------------
webapp_database_1   docker-entrypoint.sh mongod   Up      27017/tcp
webapp_web_1        npm run start                 Up      0.0.0.0:8080->8080/tcp
I ran my container:
docker run -d -it -p 10080:80 --name=container-cool-name <container-id>
And I could see my running app with curl (inside the container):
docker exec -ti container-cool-name bash
# curl localhost:80
Here I read:
If you’re using Docker Toolbox
"docker-machine ip will tell you"
My app was correctly displaying at 192.168.99.100:10080
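i.e., assuming the default machine name:

$ docker-machine ip default
192.168.99.100

and then the app is reachable at http://192.168.99.100:10080.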
Try the following steps:
1 - List all your docker containers
docker ps -a
After you run this command you should be able to view all your docker containers that currently exist, and you should see a container with the name webapp_web_1 listed there.
2 - Get the IP address where your webserver container is running. To do that run the following command.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" webapp_web_1
Try accessing the IP shown by the above command instead of localhost to reach your running container.
I have answered a similar question related to this exact problem on Windows. Please refer to it via this link to get a detailed idea of why this issue prevails.
Hope this helps you out.

Exposing docker container on local server

I have a Node.js + MongoDB app that is running inside docker containers.
I can access it on localhost etc...
But now I would like to install/deploy these containers on a local server, so that I can access this application outside my network.
Please note my server is Ubuntu 16.04.1 LTS (xenial)
Also, I would like to access the DB from outside, so that I can send data to it using a custom script.
I am a newbie in both networking and Docker. As a result, I am struggling to understand what needs to be done. Any pointers would be appreciated.
Here is the docker-compose.yml
version: "2"
services:
web:
build: .
volumes: # Use this to mount app from local disk
- ./:/usr/src/app
ports:
- "8080:8080"
- "5858:5858"
entrypoint: node --debug=5858 app.js
links:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
Here is my Dockerfile
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
The usual command I give to start my containers is docker-compose up --build -d.
Here is the docker ps result:
"node --debug=5858 ap" 13 minutes ago Up 13 minutes 0.0.0.0:5858->5858/tcp, 0.0.0.0:8080->8080/tcp nodemongochart_web_1
"/entrypoint.sh mongo" 13 minutes ago Up 13 minutes 0.0.0.0:27017->27017/tcp nodemongochart_mongo_1
If by local server, you mean your own machine at home, you will need to do something called "Port Forwarding".
It's usually configurable in your router, and you'll then be able to access your application via a link like this:
http://your-personal-ip:8080
If you'll be using another host, not in your own network, they should provide you with an IP address; in that case, there is no need for port forwarding.
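On the Ubuntu server itself, the published ports also need to be reachable; if the ufw firewall is enabled, something like this would open them (a sketch; only open 27017 if the DB really must be reachable from outside):

sudo ufw allow 8080/tcp
sudo ufw allow 27017/tcp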

Docker: how to start a nodejs app with redis in the container?

I have a simple but curious question: I have based my image on the nodejs image and installed redis on the image; now I want to start redis and the nodejs app, both running in the container, when I do docker-compose up. However, I can only get one working; node always gives me an error. Does anyone have any idea:
How to start the nodejs application on docker-compose up?
How to start redis running in the background in the same image/container?
My Dockerfile is as below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
The docker-compose.yml is as below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A container runs a single entrypoint process; when a Dockerfile has both ENTRYPOINT and CMD, the CMD is passed as arguments to the ENTRYPOINT, which is exactly what the error above shows: redis-server is being handed 'node' as its config file. But you can run multiple processes in a single docker image using a process manager like supervisord. There are countless recipes for doing this all over the internet. You might use this docker image as a base:
https://github.com/million12/docker-centos-supervisor
However, I don't see why you wouldn't use docker compose to spin up a separate redis container, just like you seem to want to do with mysql. BTW where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile looking like the one you have, minus trying to start up redis.
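Roughly, something like this (a sketch based on the Dockerfile above, with redis left to its own container):

# Set the base image to node; no redis installed in the image itself
FROM node:0.12.13
WORKDIR /root/chat/
EXPOSE 8400
CMD ["node", "/root/www/helloworld.js"]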
