Run mocha unit-test on docker-compose images - node.js

I have an API in Node.js, which runs as micro-services.
So far it works; what I want to do next is be able to run unit tests (mocha) against those images.
docker-compose.yml:
version: '3'
services:
  db:
    image: mongo
    ports:
      - 27017:27017
  db-seeds:
    build: ./services/mongo-seeds
    links:
      - db
  gateway:
    build: ./services/gateway
    ports:
      - 4000:4000
    depends_on:
      - A
      - B
      - C
  A:
    build: ./services/A
    ports:
      - 4003:4000
    depends_on:
      - db
  B:
    build: ./services/B
    ports:
      - 4001:4000
    depends_on:
      - db
  C:
    build: ./services/C
    ports:
      - 4002:4000
    depends_on:
      - db
One of the Dockerfiles:
FROM node:latest
COPY . /src
WORKDIR /src
RUN yarn
EXPOSE 4000
CMD yarn start
What I did so far is to make another docker-compose file which runs a different Dockerfile (Dockerfile.test):
FROM node:latest
COPY . /src
WORKDIR /src
RUN yarn
EXPOSE 4000
CMD yarn test
I also tried:
FROM node:latest
COPY . /src
WORKDIR /src
RUN yarn
EXPOSE 4000
CMD yarn start
RUN sleep 240
CMD yarn test
They both fail: at this point yarn test is launched before my gateway and servers are up. What I want to do is launch my servers and then run my unit tests against the images, but I'm new to docker and lack the knowledge of how to implement that.
yarn test:
"test": "NODE_ENV=test ./node_modules/.bin/mocha"

You have to run mocha on the same host as your docker-compose file. This host can be a docker image too (which can itself be started with docker-compose if you want ;)
You have to bring your services up with docker-compose before you run yarn test:
- Create/use a Dockerfile with docker, docker-compose and node (for your case)
- Copy your sources onto it (docker-compose.yml and other relevant sources)
- Run docker-compose up -d on it
- Run yarn test on it
Tools like CloudFoundry or OpenStack might do this job.
If you are with CloudFoundry I can give you concrete details.
To chain several commands, use the usual unix shell syntax:
RUN docker-compose up -d && \
    service1 && \
    yarn test
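A concrete way to script those steps is a small wrapper run wherever docker-compose is available (a sketch only: it assumes the gateway answers plain HTTP on localhost:4000, so adjust the URL to a real endpoint):
#!/bin/sh
# bring the whole stack up in the background
docker-compose up -d
# poll the gateway until it accepts requests (assumed endpoint)
until curl -sf http://localhost:4000/ > /dev/null; do
  echo "waiting for gateway..."
  sleep 2
done
# run the mocha suite against the running services
yarn test
status=$?
# tear the stack down and propagate the test result
docker-compose down
exit $status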

Related

Nodemon not restarting in docker image

I am very new to docker and trying to get a working environment for local development. My issue is that I can't seem to get nodemon to trigger correctly when there are changes. The nodemon config works outside of docker, so I know that isn't the issue.
Here is what I have in my node app folder.
# Dockerfile.local
FROM node:16
WORKDIR /app
COPY package*.json .
RUN yarn install
COPY . .
RUN yarn build
CMD ["yarn", "watch"]
# docker-compose.yml
version: "3"
services:
db:
image: postgres:12.3
restart: always
volumes:
- db_data:/var/lib/postgresql/data
environment:
# ... my config
actions:
build:
context: ./action-handlers
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "hasura"
volumes:
- actions:/./app
environment:
# ...my confg
hasura:
ports:
- 8080:8080
- 9691:9691
build:
context: ./hasura
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "db"
environment:
# ...my config
volumes:
db_data:
actions:
The docker image works perfectly when I run docker-compose build to build the image, then run it using docker-compose up. Nodemon runs, and the other two services run as expected, so there aren't any issues there. It just doesn't restart when I make code changes. I have a feeling I am not using volumes correctly.
I have a feeling that - "./actions:/app/action-handers" is incorrect.
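For reference, a development bind mount for this layout usually looks like the sketch below (hypothetical paths: it assumes the source lives in ./action-handlers on the host and the container's WORKDIR is /app, as in the Dockerfile above):
actions:
  build:
    context: ./action-handlers
    dockerfile: .docker/${DOCKERFILE}
  volumes:
    # bind-mount the host source over /app so nodemon sees edits...
    - ./action-handlers:/app
    # ...but mask node_modules with an anonymous volume to keep
    # the dependencies installed in the image
    - /app/node_modules
Note that the named volume actions: in the original file is populated from the image once and never updated afterwards, which would explain why nodemon never fires; file watching inside a container may also need polling (nodemon --legacy-watch).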

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers - a mysql db, a serverless app and test (to test the serverless app with mocha). On executing docker-compose up --build, it seems like the order of execution is messed up and not in the correct sequence. Both the mysql db and the serverless app need to be in a working state for test to run correctly (depends_on barely works).
Hence I tried using the dockerize module to set a timeout and listen on tcp ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to docker, so any help would be welcome.
Dockerfile.yml
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the Dockerfile. You only need to specify image: with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are related to first-generation Docker networking and aren't necessary; overwriting the image code with volumes: results in ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
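As a usage note (not from the original answer): with this layout you can let Compose build everything, run the stack, and propagate the test container's exit status, which is handy in CI; --exit-code-from implies --abort-on-container-exit:
docker-compose up --build --exit-code-from test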

Docker network communication issue with multiple node server container-intercommunication

I'm trying to set up a docker environment in which multiple containers communicate with each other via REST. My simplified system architecture is as follows:
Let's say I have two services, foo and bar, that run on a node server (NestJS projects). The Dockerfile for both looks something like this:
FROM node:14-alpine
ENV NODE_ENV=production \
    NPM_CONFIG_PREFIX=/home/node/.npm-global \
    PATH=$PATH:/home/node/.npm-global/bin:/home/node/node_modules/.bin:$PATH
RUN apk add --no-cache tini
RUN apk add iputils
RUN mkdir -p /usr/src/app/node_modules
RUN chown -R node:node /usr/src/app
USER node
WORKDIR /usr/src/app
COPY --chown=node:node package*.json ./
RUN npm ci --only=production
RUN npm cache clean --force
COPY --chown=node:node . ./
RUN ls -l
EXPOSE 80
ENTRYPOINT ["/sbin/tini", "--"]
CMD [ "npm", "start" ]
The docker-compose looks like this:
version: "3.8"
services:
foo:
image: "node:14-slim"
container_name: foo-service.${ENV}
build:
context: .
dockerfile: Dockerfile
args:
env: ${ENV}
port: ${PORT}
user: "node"
working_dir: /usr/src/app
environment:
- NODE_ENV=production
- VERSION=1.0
volumes:
- .:/usr/src/app
- /usr/app/node_modules
ports:
- ${PORT}:80
tty: true
command: "npm start"
networks:
my-network:
aliases:
- foo-service.${ENV}
networks:
my-network:
driver: "bridge"
name: my-network
I connect the containers via my-network and made sure (docker network inspect my-network) that both containers share the same network. I can even ping one container from the other (docker exec [foo] ping [bar]).
When I run the application (making a REST call from the web interface) against foo, foo is then unable to "connect" to bar. I call bar from foo using the alias, like this:
this.httpService.get('http://bar-service.dev:3002/...')
The request fails with an error message.
I guess the containers still know each other, since the DNS is resolved automatically by docker (I made sure that the IP address of bar is correct).
After some hours of trial and error and research, I'd like to ask you guys. It might be an issue with alpine (some people have had issues pinging node servers before). But it's just as likely that I missed something important all along and can't seem to realize it...
Thanks in advance!
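One detail worth checking against the files above (an observation, not from the original post): the compose file publishes ${PORT}:80 and the Dockerfile EXPOSEs 80, so on my-network each peer listens on port 80; host-published ports like 3002 don't exist inside the network. An inter-container call would then look like this sketch ('/health' is a placeholder path):
// inside foo: target bar's container port (80), not the host-published port
this.httpService.get('http://bar-service.dev:80/health')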

Dockerfile env dependent starting point

Picking up docker, a little late to the show, but better late than never.
Following a few online tutorials, I landed at a Dockerfile and a docker-compose file for my first microservice, node+mongo.
It's a terrible setup for dev, so now I will implement trusty pm2: https://dev.to/itmayziii/step-by-step-guide-to-setup-node-with-docker-2mc9
Production would want the setup below, but for dev I would want the pm2 instance manager to reboot on file changes.
The obvious question I now have: how do I differentiate between dev and prod in the Dockerfile?
Dockerfile
FROM node:12-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm i
COPY . /usr/src/app
EXPOSE 3000
CMD node ./build/server.js
docker-compose
version: "3"
services:
ms-authentication-service:
image: "ms-authentication-image"
depends_on:
- mongodb
build:
dockerfile: Dockerfile
context: .
links:
- mongodb
networks:
- default
ports:
- "8080:8000"
restart: always
mongodb:
image: mongo:4.2
container_name: "ms-authentication-mongo-image"
environment:
MONGO_INITDB_ROOT_USERNAME: bob
MONGO_INITDB_ROOT_PASSWORD: bob
networks:
- default
ports:
- 27017:27017
In general, managing environments like staging or production via an ENV variable is common practice, but in the case of Docker the best approach is tags.
It's better to use tags for dev, stage and production with Docker. There are many reasons; one is that mounting code into a development environment is fine, but it is not recommended in a production environment.
When building images, always tag them with useful tags which codify version information, intended destination (prod or test, for instance), stability, or other information that is useful when deploying the application in different environments. Do not rely on the automatically-created latest tag.
Docker App development best-practices
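For instance, a tagging scheme along these lines (image names and versions are illustrative, not from the original answer) keeps the environments apart; in practice the dev and prod builds would differ via separate Dockerfiles or build args:
# tag builds per destination instead of relying on :latest
docker build -t ms-authentication-image:1.0.0-dev .
docker build -t ms-authentication-image:1.0.0-prod .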
But if you still want to go with the ENV approach, you can use a docker-entrypoint script.
Dockerfile
FROM node:alpine
RUN npm install pm2 -g
COPY . /app
WORKDIR /app
ENV NODE_ENV=development
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["sh","docker-entrypoint.sh"]
Docker-entrypoint
#!/bin/sh
if [ "$NODE_ENV" = "development" ]; then
  # pm2-runtime stays in the foreground so the container keeps running;
  # plain "pm2 start" daemonizes and the container would exit
  pm2-runtime start server.js
else
  node server.js
fi
So you are good to go, and you can change this in the Dockerfile or at run time:
docker run --env NODE_ENV=production -it --rm node:production
or
docker run --env NODE_ENV=development -it --rm dev

Avoid restarting docker when code is changed

I'm using docker for the first time, so this could be a stupid question.
I configured a Dockerfile and docker-compose.
Every time I change my code (Node.js code), I need to run the following command if I want to see the changes:
docker-compose up --build
I want to know if there is a way to update my code without having to run that command every time.
Thanks.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV NODE_PATH=/app/node_modules
EXPOSE 1337
CMD ["npm", "start"]
docker-compose:
version: '3'
services:
  node_api:
    container_name: app
    restart: always
    build: .
    ports:
      - "1337:1337"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:latest
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
      - MONGO_INITDB_DATABASE=test
    ports:
      - "27017:27017"
    restart: always
You can mount the directory containing your source code in the container and use a tool such as nodemon, which will watch files and restart the application on changes.
See the article Docker Tips: Development With Nodemon for details.
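A minimal sketch of that approach for the compose file above (assuming nodemon is installed as a dependency and the entry point is the main script from package.json):
node_api:
  build: .
  command: npx nodemon --legacy-watch
  volumes:
    # bind-mount the source so edits on the host appear in the container
    - .:/usr/src/app
    # keep the node_modules installed in the image
    - /usr/src/app/node_modules
  ports:
    - "1337:1337"
--legacy-watch makes nodemon poll for changes, which tends to be more reliable across the container filesystem boundary.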
