Why is my docker node container exiting - node.js

I'm trying to run a node container with docker-compose -
services:
  node:
    build:
      context: nodejs/
    ports:
      - "3000:3000"
    volumes:
      - ../nodejs:/usr/src/app
    working_dir: '/usr/src/app'
My Dockerfile:
FROM node:6.10
EXPOSE 3000
The problem is it exits immediately -
$ docker-compose up
Starting docker_node_1
Attaching to docker_node_1
docker_node_1 exited with code 0
And there's nothing in the logs - docker logs docker_node_1 returns nothing.
There's a package.json referencing the main script -
{
  ...
  "main": "server.js",
  ...
}
And my main script is just a simple express server -
const express = require('express');
const app = express();
const port = 3000;

app.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err);
  }
  console.log(`server is listening on ${port}`);
});
I guess I'm missing something obvious but I can't see what it is...

You're missing the container command. The command is the key concept behind a container: a container is essentially an isolated process, and that process is your program.
You can do it in Dockerfile:
CMD npm start
Or in docker-compose.yml:
services:
  node:
    command: npm start
    build:
      context: nodejs/
    ports:
      - "3000:3000"
    volumes:
      - ../nodejs:/usr/src/app
    working_dir: '/usr/src/app'
Both approaches are equivalent, but adjust the command to your needs (npm run, npm run build, etc.).
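As a side note, the exec form of CMD avoids wrapping the command in /bin/sh, so npm runs as PID 1 and receives signals such as SIGTERM directly. A sketch of the Dockerfile with that form (the WORKDIR line is an assumption, since in this setup the code arrives via the bind mount rather than a COPY):

```dockerfile
FROM node:6.10
# assumed working directory; matches the compose bind mount target
WORKDIR /usr/src/app
EXPOSE 3000
# exec form: no shell wrapper, npm receives signals directly
CMD ["npm", "start"]
```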

Related

econnrefused rabbitMQ between docker containers

I am trying to set up a simple docker-compose file which includes a rabbitmq container and a simple express server which connects to rabbitmq. I'm getting the following error when trying to connect to rabbitmq from my express application:
Error: connect ECONNREFUSED 172.19.0.2:5672
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '172.19.0.2',
  port: 5672
}
I checked the IP-adress of the docker network manually to verify that 172.19.0.2 is indeed the rabbitmq process, which it is.
Here is my docker-compose:
version: '3'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    ports:
      - 5672:5672
      - 15672:15672
  producerexpress:
    build: ./service1
    container_name: producerexpress
    ports:
      - 3000:3000
    environment:
      - PORT=3000
    depends_on:
      - rabbitmq
and the express app and its docker file:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
const amqp = require('amqplib');
const amqpUrl = process.env.AMQP_URL || 'amqp://admin:pass@172.19.0.2:5672';

let channel;
let connection;

connect();

async function connect() {
  try {
    connection = await amqp.connect(amqpUrl);
    channel = await connection.createChannel();
    await channel.assertQueue('chatExchange', { durable: false });
  } catch (err) {
    console.log(err);
  }
}

function sendRabbitMessage(msg) {
  channel.sendToQueue('chatExchange', Buffer.from(msg));
}

app.get('/', (req, res) => {
  let msg = 'Triggered by get request';
  sendRabbitMessage(msg);
  res.send('Sent rabbitmq message!');
});

app.listen(port, () => {
  console.log(`Server started on port ${port}`);
});
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=3000
ENV AMQP_URL=amqp://admin:pass@172.19.0.2:5672
EXPOSE 3000
CMD ["npm", "start"]
This is my first time using docker compose and all the fixes I found on here seem to suggest I did everything correctly. What am I missing?
TL;DR
The depends_on option guarantees the order in which the services start up, but it guarantees nothing about the processes they initiate.
In these cases, you should expand the depends_on statement to take the health status of the process of interest into account.
Firstly, since you are using docker compose, you should avoid making container communication depend on IP addresses; rely on the service names instead.
Meaning, instead of amqp://admin:pass@172.19.0.2:5672
you should use amqp://admin:pass@rabbitmq:5672
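For example, the compose file can inject the service-name URL through the AMQP_URL variable the code already reads (a sketch of just the relevant service):

```yaml
producerexpress:
  build: ./service1
  environment:
    - PORT=3000
    - AMQP_URL=amqp://admin:pass@rabbitmq:5672
```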
Moving on to the core issue: your producerexpress relies on rabbitmq in order to function.
As a result, you added the depends_on statement to producerexpress. But this is not enough; quoting from https://docs.docker.com/compose/startup-order/:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, ....
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running.
As a result, you need to add a health check to guarantee that the rabbitmq process has started successfully, not just the container.
To achieve that, alter your compose file:
version: '3'
services:
  rabbitmq:
    build: ./rabbitmq
    container_name: 'rabbitmq'
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    ports:
      - 5672:5672
      - 15672:15672
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
  producerexpress:
    build: ./service1
    restart: on-failure
    container_name: producerexpress
    ports:
      - 3000:3000
    environment:
      - PORT=3000
    depends_on:
      rabbitmq:
        condition: service_healthy
To make the healthcheck work, the curl package is needed in the rabbitmq image, so add the following Dockerfile:
FROM rabbitmq:3-management-alpine
RUN apk update
RUN apk add curl
EXPOSE 5672 15672
Finally, to make this change work, create the following directory structure:
./docker-compose.yml
./rabbitmq/
--- Dockerfile
./service1/
--- Dockerfile
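An application-side alternative (or complement) to the healthcheck is to retry the connection from Node itself, so the app tolerates rabbitmq coming up late. This is a minimal sketch; retryConnect is a hypothetical helper, not part of amqplib:

```javascript
// Sketch: retry an async connect with a fixed delay between attempts.
// In the question's app you would call it as:
//   connection = await retryConnect(() => amqp.connect(amqpUrl));
async function retryConnect(connectFn, { attempts = 5, delayMs = 1000 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await connectFn(); // success: hand back the connection
    } catch (err) {
      lastErr = err; // remember the failure and wait before retrying
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // all attempts failed
}
```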

Run rest API tests from docker compose

I'm trying to implement tests in my node.js project. I decided to use mocha / chai / chai-http to test my rest API.
I just have 1 simple file in my test folder:
movie.ts:
import chai from 'chai';
import chaiHttp from 'chai-http';

let app = require('../app');

chai.should();
chai.use(chaiHttp);

declare var process: {
  env: {
    API_KEY: string
  }
}

describe('/GET movies', () => {
  it('it should GET all the movies', (done) => {
    chai.request(app)
      .get('/v1/movies')
      .set("Authorization", process.env.API_KEY)
      .end((err, res) => {
        res.should.have.status(200);
        res.body.should.be.a('array');
        done();
      });
  });
});
Here is my dockerfile:
FROM node:latest
WORKDIR /app/
COPY package.json .
RUN npm install
COPY . .
Docker compose:
version: '3.8'
services:
  mariadb:
    image: mariadb
    env_file: ./.env
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASSWORD: $MYSQL_PASSWORD
      MYSQL_DATABASE: $MYSQL_DATABASE
    ports:
      - $MYSQL_LOCAL_PORT:$MYSQL_DOCKER_PORT
    volumes:
      - mysql:/var/lib/mysql
      - mysql_config:/etc/mysql
      - ./sql/:/docker-entrypoint-initdb.d/
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOSTS: mariadb
  web:
    build: .
    env_file: ./.env
    command: npm start
    volumes:
      - .:/app/
      - /app/node_modules
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    depends_on:
      - mariadb
    environment:
      MYSQL_HOST: mariadb
  web-tests:
    image: hypescript_web
    command: npm test
    environment:
      MYSQL_HOST: mariadb
    depends_on:
      - mariadb
      - web
volumes:
  mysql:
  mysql_config:
The test works, but I have two problems when running it during docker-compose up.
As you can see, I created another container just for the tests, and I'm not sure that's the best practice for testing my API. The web-tests container uses my node.js image built by docker compose, so if the image isn't already built on my machine I have to comment out the whole web-tests service; once the image is built I can uncomment web-tests and use it to run the mocha tests.
The second problem is that I need a delay (e.g. 10 seconds) before running my tests, because the database is built at the same time, and on the first run I get database errors: the tests run before the database init has finished. I tried adding a timeout to mocha in package.json, but the timeout is ignored.
"scripts": {
  "start": "nodemon -L app.ts",
  "debug": "export DEBUG=* && npm run start",
  "test": "mocha --timeout 10000 -r ts-node/register test/**/*.ts"
}
I think I'm doing something wrong, because running my API tests is very cumbersome.
I'd like to run my tests without commenting/uncommenting services on the first run, and without a hardcoded timeout.
How do you run your API tests in docker?
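For the wait problem, the healthcheck technique shown in the rabbitmq answer above applies to mariadb as well: gate the test container on the database being healthy instead of sleeping. A sketch of just the relevant services (the mysqladmin ping command is an assumption about the client tools shipped in the mariadb image):

```yaml
services:
  mariadb:
    image: mariadb
    healthcheck:
      # assumes mysqladmin is available in the mariadb image
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 10
  web-tests:
    image: hypescript_web
    command: npm test
    depends_on:
      mariadb:
        condition: service_healthy
```

With this in place the mocha timeout workaround is no longer needed, since web-tests only starts once mariadb actually accepts connections.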

How to access Docker container app on local?

I have a simple Node.js/Express app:
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
It works fine when I start it like: node src/app.js
Now I'm trying to run it in a Docker container. Dockerfile is:
FROM node:8
WORKDIR /app
ADD src/. /app/src
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app
EXPOSE 3000
CMD [ "node", "src/app.js" ]
It starts fine: docker run <my image>:
Listening on port 3000
But now I cannot access it in my browser:
http://localhost:3000
This site can’t be reached localhost refused to connect.
Same happen if I try to run it within docker-compose:
version: '3.4'
services:
  service1:
    image: xxxxxx
    ports:
      - 8080:8080
    volumes:
      - xxxxxxxx
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000
    command:
      node src/app.js
I'm not sure I'm dealing with the ports correctly in either file.
When you work with Docker, you must bind your app to 0.0.0.0 instead of localhost.
For an Express application, you can set the host in the app.listen call.
Check the documentation:
app.listen([port[, host[, backlog]]][, callback])
Your express code should be updated to:
const port = 3000
const host = '0.0.0.0'
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, host, () => console.log(`Example app listening on ${port}!`))
It's also important to publish the Docker ports:
Running docker: docker run -p 3000:3000 <my image>
Running docker-compose:
services:
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000:3000
    command:
      node src/app.js
try this:
services:
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000:3000 # this is the change: you need to map the host port to the container port
    command:
      node src/app.js
You need to publish ports
docker run -p 3000:3000 <my image>
-p stands for publish

Can't Access to localhost:3030 - NodeJS Docker

Wanted Behavior
When my dockerized nodejs server is launched, I can access the address http://localhost:3030 from my local machine.
Docker console should then print "Hello World"
Problem Description
I have a nodejs server contained in a Docker container. I can't access http://localhost:3030/ from my browser.
server.js File
const port = require('./configuration/serverConfiguration').port
const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('Hello World')
})

app.listen(port)
The Dockerfile exposes port 3000, which is the port used by server.js.
DockerFile
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
I use a docker-compose.yml file because I am linking my container with a mongodb service.
docker-compose.yml File
version: '3'
services:
  node_server:
    build: .
    volumes:
      - "./app:/src/app"
    ports:
      - "3030:3000"
    links:
      - "mongo:mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
The file publishes my container's port 3000 to my host's port 3030.
New Info
I tried to execute it on OSX, and it worked. It seems to be a problem with Windows.
I changed localhost to the machine IP, since I was using Docker Toolbox. My bad for not reading the documentation in depth.

How can I work with NodeJS and Docker Compose with volumes?

I'm trying to get nodejs working with docker-compose and volumes. I want to edit my code, which is why I need volumes.
When I don't put volumes in the docker-compose.yml file, it works! But with volumes it doesn't.
Any idea?
This is my docker-compose.yml:
version: "3.2"
services:
  node:
    container_name: node_app
    build:
      context: ./
      dockerfile: dockerfiles/Dockerfile
    user: "node"
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/app
    ports:
      - "3000:3000"
This is my Dockerfile:
FROM node:carbon-stretch
WORKDIR /home/app
COPY package*.json ./
RUN npm i express -S
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
This is my package.json:
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <test@gmail.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
This is my server.js:
'use strict'

const express = require('express')

// Constants
const PORT = 3000
const HOST = '0.0.0.0'

// App
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world\n')
});

app.listen(PORT, HOST)
console.log(`Running on http://${HOST}:${PORT}`)
Thanks!
The problem is that when you mount the folder . onto /home/app, all the content in /home/app gets overshadowed by the content of . on the host. This effectively removes the things installed by RUN npm i express -S.
To solve this, isolate the code you want to edit into a separate folder (if that is not already the case):
volumes:
  - ./code:/home/app/code
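Alternatively, a common pattern (already used in one of the compose files above) is to keep the bind mount but shadow node_modules with an anonymous volume, so the host folder never hides the modules installed at build time. A sketch of just the volumes section:

```yaml
volumes:
  - .:/home/app
  # anonymous volume: keeps the image's node_modules visible
  - /home/app/node_modules
```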
Make sure your docker-compose file is valid YAML, then try again:
version: "3.2"
services:
  node:
    container_name: node_app
    build:
      context: ./
      dockerfile: dockerfiles/Dockerfile
    user: "node"
    environment:
      - NODE_ENV=production
    volumes:
      - .:/home/app
    ports:
      - "3000:3000"
