I can't make my Docker Compose setup work.
Here's my Dockerfile:
FROM node:0.12
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir /myapp
WORKDIR /myapp
ADD . /myapp
RUN npm install
And my docker-compose.yml:
db:
  image: mongo
  ports:
    - 27017
web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - 3000:3000
  links:
    - db
  environment:
    PORT: 3000
And in server.js:
var MONGO_DB;
var DOCKER_DB = process.env.DB_1_PORT;
if ( DOCKER_DB ) {
  MONGO_DB = DOCKER_DB.replace( "tcp", "mongodb" ) + "/dev_db";
} else {
  MONGO_DB = process.env.MONGODB;
}
mongoose.connect(MONGO_DB);
This was duplicated from this repo: https://github.com/projectweekend/Node-Backend-Seed
but process.env.DB_1_PORT is empty. How can I get it set?
Thanks
Sorry @gettho.child, I accepted your answer too quickly. I thought it was working, but it wasn't. I'll report my final solution here, since I struggled quite a bit to get it right.
Dockerfile:
FROM node:0.12
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev libkrb5-dev
RUN mkdir /myapp
WORKDIR /myapp
ADD package.json /myapp/package.json
RUN npm install
ADD . /myapp
docker-compose.yml:
db:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: node app.js
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  links:
    - db
  environment:
    PORT: 3000 # this is optional, allows express to use process.env.PORT instead of a raw 3000
And the relevant app.js extracts:
var MONGO_DB;
var DOCKER_DB = process.env.DB_PORT;
if ( DOCKER_DB ) {
  MONGO_DB = DOCKER_DB.replace( 'tcp', 'mongodb' ) + '/myapp';
} else {
  MONGO_DB = process.env.MONGODB;
}
var retry = 0;
mongoose.connect(MONGO_DB);
app.listen(process.env.PORT || 3000);
Regarding process.env.DB_PORT, I tried many things. If it doesn't work out of the box, I suggest running console.log(process.env); and looking for Mongo's IP.
The final URL should look like: mongodb://172.17.0.76:27017/myapp
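If dumping the whole environment is too noisy, a minimal sketch like this (assuming the service is named db as above, so Docker injects variables prefixed with DB_) prints only the link-related entries:
// minimal sketch: print only the variables injected by the `db` link
Object.keys(process.env).filter(function (key) {
  return key.indexOf('DB_') === 0;
}).forEach(function (key) {
  console.log(key + '=' + process.env[key]);
});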
Good luck, it is worth it, Docker's awesome!
EDIT:
Even if the above works for you, I have since found a technology-agnostic workflow: run
docker-compose run web /bin/bash
and, inside the container, run printenv.
I hope this is not too much self-promotion, but I wrote a two-part article on the topic which may help some readers: https://augustin-riedinger.fr/en/resources/using-docker-as-a-development-environment-part-1/
Cheers
Make sure that the MongoDB IP (and port) in your server.js file is taken from 'PORT_27017_TCP_ADDR' (check the port too); it can be found by running 'docker exec {web app container id} env'.
Since we use docker-compose.yml, the reliable way to connect the web service to the db service (MongoDB in this case) is a connection string that looks like the following (without authorization and additional args):
mongodb://db:27017/myapp
where:
db - the name of the docker-compose MongoDB service
myapp - the name of the MongoDB database
See here for more.
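For example, with Mongoose the connection then becomes (a minimal sketch, assuming the service and database names above):
var mongoose = require('mongoose');
// 'db' is resolved to the MongoDB container by docker-compose's internal DNS
mongoose.connect('mongodb://db:27017/myapp');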
Your docker-compose.yml should be:
environment:
  - MONGODB=3000
See this link for more information on mapping environment variables. You are declaring the environment variable as PORT instead of MONGODB.
Related
I have been working on a Symfony 4 project for months, and I want to Dockerize it.
I got everything working except Webpack, which I use to compile my .scss and .js files with the npm run watch or npm run dev commands.
Currently webpack does not pick up the changes I make to a .scss or .js file, for example.
Here is my config; I am surely missing something in my files.
My docker-compose.yml:
version: '3.8'
services:
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: on-failure
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: on-failure
    depends_on:
      - mysql
    ports:
      - '8004:80'
    environment:
      PMA_HOSTS: mysql
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    volumes:
      - '../.:/usr/src/app'
    restart: on-failure
    env_file:
      - .env
  nginx:
    image: nginx:1.19.0-alpine
    restart: on-failure
    volumes:
      - '../public:/usr/src/app'
      - './nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - '80:80'
    depends_on:
      - php
  node:
    build:
      context: .
      dockerfile: node/Dockerfile
    volumes:
      - '../.:/usr/src/app'
    command: npm run watch
My Dockerfile for the Node image:
FROM node:12.10.0
RUN apt-get update && \
    apt-get install -y \
    curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
WORKDIR /usr/src/app
CMD ["npm", "run", "watch"]
My webpack.config.js:
var Encore = require('@symfony/webpack-encore');
var CopyWebpackPlugin = require('copy-webpack-plugin');
if (!Encore.isRuntimeEnvironmentConfigured()) {
    Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev');
}
Encore
    .setOutputPath('public/build/')
    .setPublicPath('/build')
    .addEntry('app', './assets/js/app.js')
    .splitEntryChunks()
    .disableSingleRuntimeChunk()
    .enableSassLoader()
    .cleanupOutputBeforeBuild()
    .enableBuildNotifications()
    .enableSourceMaps(!Encore.isProduction())
    .enableVersioning(Encore.isProduction())
    .configureBabel(() => {}, {
        useBuiltIns: 'usage',
        corejs: 3
    })
    .addPlugin(new CopyWebpackPlugin([
        { from: './assets/pictures', to: 'pictures' }
    ]))
;
module.exports = Encore.getWebpackConfig();
// module.exports = {
//     mode: 'development',
//     devServer: {
//         port: 80,
//         host: '0.0.0.0',
//         disableHostCheck: true,
//         watchOptions: {
//             ignored: /node_modules/,
//             poll: 1000,
//             aggregateTimeout: 1000
//         }
//     }
// }
As you can see, I have already tried a few things in webpack.config.js; I saw watchOptions mentioned in many places, but I could not get it to work.
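For reference, one suggestion I came across (an untested sketch on my side, assuming the root cause is that file-change events from the mounted Windows volume never reach the container, so webpack has to poll) is to replace the final module.exports line with:
var config = Encore.getWebpackConfig();
// poll the mounted volume every second instead of relying on filesystem
// events, which do not cross the Windows-host/Linux-container boundary
config.watchOptions = {
    poll: 1000,
    ignored: /node_modules/
};
module.exports = config;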
And here is my project's organisation:
[screenshot of the project's directory structure]
I want to be able to launch my Docker environment with Webpack listening for any change I make, in real time.
Here is the console output after running docker-compose up:
[screenshot of the docker-compose up console output]
If you have any advice to improve my Docker environment, I will take it all!
Thank you!
I just use this:
docker-compose.yml:
node:
  image: node:16-alpine3.13
  working_dir: /var/www/app
  user: "$USERID"
  volumes:
    - .:/var/www/app
  tty: true
and then run docker-compose exec node yarn watch.
It works as expected.
Okay, I think I solved my issue.
I followed @Rufinus' answer; I had to run docker-compose up in a first console, open a second console and execute winpty docker-compose exec node yarn watch, but for some reason I had an issue with node-sass compatibility: I had mounted my node_modules folder (Windows 10) into the container (Linux).
So I opened a CLI in my node container and executed npm rebuild node-sass to solve this, and finally it worked!
But I don't know why; my current solution is to execute npm run watch on my local folders (like I used to do before Dockerizing my application), and it re-builds the assets when I change a .scss or .js file.
I am trying to develop an Express API. It works on my local machine as expected. I am using Docker, but in production with Docker and Heroku, Redis is not working.
Dockerfile
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
My docker-compose.yml file:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
  redis:
    container_name: redis
    image: redis
  app:
    container_name: password-manager-docker
    image: app
    restart: always
    build: .
    ports:
      - '80:5000'
    links:
      - mongo
      - redis
    environment:
      MONGODB_URI: ${MONGODB_URI}
      REDIS_URL: ${REDIS_URL}
      clientID: ${clientID}
      clientSecret: ${clientSecret}
      PORT: ${PORT}
      REDIS_HOST: ${REDIS_HOST}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      JWT_EXPIRE: ${JWT_EXPIRE}
      REFRESH_TOKEN: ${REFRESH_TOKEN}
      JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
      JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
      JWT_COOKIE: ${JWT_COOKIE}
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USER: ${SMTP_USER}
      SMTP_PASS: ${SMTP_PASS}
The Redis file:
const asyncRedis = require('async-redis');
// process.env.REDIS_HOST's value is "redis"
const redisClient = asyncRedis.createClient({port:6379,host:process.env.REDIS_HOST || "127.0.0.1"});
redisClient.on("connect", () => {
  // host and port are not variables in scope here, so log the values the client was built with
  console.log(`Redis: ${process.env.REDIS_HOST || "127.0.0.1"}:6379`);
});
redisClient.on('error', function(err) {
  // eslint-disable-next-line no-console
  console.log(`[Redis] Error ${err}`);
});
The error on Heroku is "[Redis] Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379". It worked on Heroku without Docker, but now it is not working. Thanks for your help.
I finally figured it out. I changed the Redis add-on in Heroku from Heroku Redis to Redis To Go, and changed this line:
const redisClient = asyncRedis.createClient({port:6379,host:process.env.REDIS_HOST || "127.0.0.1"});
to
const redisClient = asyncRedis.createClient(process.env.REDISTOGO_URL);
Install Redis:
sudo apt-get install redis-server
Run this command to check that everything is fine:
sudo service redis-server status
If you get the message "redis-server is running", then it should resolve the issue.
The environment variable REDIS_HOST isn't set - you have the answer already in your question; the error is ECONNREFUSED 127.0.0.1:6379.
The host address comes from your fallback expression: {port:6379,host:process.env.REDIS_HOST || "127.0.0.1"}.
If process.env.REDIS_HOST is not set, 127.0.0.1 is used as the host address. You have to pass a config file, set the variable via -e KEY=VAL when you run docker-compose, or simply edit the line REDIS_HOST: ${REDIS_HOST} and replace ${REDIS_HOST} with the service name (redis) from the docker-compose.yml file - there are a lot of possibilities. If you want to know the real IP of the Redis container, check with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER_ID/NAME>
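In other words, a minimal sketch of the fix on the Node side (assuming the compose service is named redis, as in the docker-compose.yml above) is to fall back to the service name instead of localhost:
var asyncRedis = require('async-redis');
// 'redis' resolves to the redis container inside the compose network;
// 127.0.0.1 would point at the app container itself
var redisClient = asyncRedis.createClient({
  port: 6379,
  host: process.env.REDIS_HOST || 'redis'
});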
https://docs.docker.com/compose/networking/
In the official Docker documentation above, I found this part:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
So I understood from this paragraph that I can connect Docker containers to each other without an explicit links: or networks: section, because the docker-compose.yml snippet above has neither, and the document says web's application code could connect to the URL postgres://db:5432.
So I tried to test a simple docker-compose setup with a Node.js Express app and MongoDB in the same way. I thought I could connect to MongoDB from the Express app with just mongodb://mongo:27017/myapp, but I cannot. I think I followed Docker's official manual, but I don't know why it's not working. Of course I can connect to MongoDB using links: or networks:, but I heard links: is deprecated, and I cannot find the proper way to use networks:.
I think I might have misunderstood something; please correct me.
Below is my docker-compose.yml:
version: '3'
services:
  app:
    container_name: node
    restart: always
    build: .
    ports:
      - '3000:3000'
  mongo:
    image: mongo
    ports:
      - '27017:27017'
In the Express app, I connect to MongoDB with:
mongoose.connect('mongodb://mongo:27017/myapp', {
  useMongoClient: true
});
// also doesn't work with mongodb://mongo/myapp
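A small diagnostic sketch that may help here (independent of this particular setup) is an explicit error listener, since the message distinguishes a DNS failure from a refused connection:
var mongoose = require('mongoose');
mongoose.connection.on('error', function (err) {
  // ENOTFOUND means the hostname 'mongo' did not resolve;
  // ECONNREFUSED means it resolved but nothing was listening yet
  console.error('Mongo connection error:', err.message);
});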
Plus, the Dockerfile:
FROM node:10.17-alpine3.9
ENV NODE_ENV development
WORKDIR /usr/src/app
COPY ["package*.json", "npm-shrinkwrap.json*", "./"]
RUN rm -rf node_modules
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    g++ \
    && npm install \
    && apk del build-dependencies
COPY . .
EXPOSE 3000
CMD npm start
If you want to connect to a Mongo instance running on the local host, then you have to select host network mode.
docker-compose.yml file content:
version: '2.1'
services:
  z2padmin_docker:
    image: z2padmin_docker
    build: .
    environment:
      NODE_ENV: production
    volumes: [/home/ankit/Z2PDATAHUB/uploads:/mnt/Z2PDATAHUB/uploads]
    ports:
      - 5000:5000
    network_mode: host
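Note that with network_mode: host the container shares the host's network namespace, so there is no Docker DNS name to use. A sketch of the matching connection code (assuming MongoDB runs directly on the host; the database name is illustrative):
var mongoose = require('mongoose');
// under host networking, the host's 127.0.0.1 is also the container's 127.0.0.1
mongoose.connect('mongodb://127.0.0.1:27017/myapp');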
I am trying to set up a Docker network with simple Node.js and MongoDB services by following this guide; however, when the nodejs service starts, it fails because it can't connect to mongodb.
docker-compose.yml
version: "3"
services:
nodejs:
container_name: nodejs # How the container will appear when listing containers from the CLI
image: node:10 # The <container-name>:<tag-version> of the container, in this case the tag version aligns with the version of node
user: node # The user to run as in the container
working_dir: "/app" # Where to container will assume it should run commands and where you will start out if you go inside the container
networks:
- app # Networking can get complex, but for all intents and purposes just know that containers on the same network can speak to each other
ports:
- "3000:3000" # <host-port>:<container-port> to listen to, so anything running on port 3000 of the container will map to port 3000 on our localhost
volumes:
- ./:/app # <host-directory>:<container-directory> this says map the current directory from your system to the /app directory in the docker container
command: # The command docker will execute when starting the container, this command is not allowed to exit, if it does your container will stop
- ./wait-for.sh
- --timeout=15
- mongodb:27017
- --
- bash
- -c
- npm install && npm start
env_file: ".env"
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=mongodb
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
depends_on:
- mongodb
mongodb:
image: mongo:4.1.8-xenial
container_name: mongodb
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app
networks:
app:
driver: bridge
volumes:
dbdata:
app.js
const express = require('express');
var server = express();
var bodyParser = require('body-parser');
// getting-started.js
var mongoose = require('mongoose');
mongoose.connect('mongodb://simpleUser:123456@mongodb:27017/simpleDb', {useNewUrlParser: true});
server.listen(3000, function() {
  console.log('Example app listening on port 3000');
});
Here is the common wait-for.sh script that I was using. https://github.com/eficode/wait-for/blob/master/wait-for
docker logs -f nodejs gives:
Operation timed out
Thanks for your help!
In this case I believe the issue is that you are using the wait-for.sh script, which makes use of the netcat command (see https://github.com/eficode/wait-for/blob/master/wait-for#L24), but the node:10 image does not have netcat installed.
I would suggest either creating a custom image based on node:10 that adds netcat, or using a different approach (preferably a Node.js-based solution; see the sketch after the Dockerfile below) to check whether MongoDB is accessible.
A sample Dockerfile for creating your own custom image would look something like this
FROM node:10
RUN apt update && apt install -y netcat
Then you can build this image by replacing image: node:10 with
build:
  dockerfile: Dockerfile
  context: .
and you should be fine
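Alternatively, a Node.js-based check could look something like the sketch below (my own untested suggestion; wait-for-mongo.js is a hypothetical filename, and the hostname, port, and 15-second budget mirror the wait-for.sh arguments in the compose file). You would run it in place of ./wait-for.sh, e.g. bash -c "node wait-for-mongo.js && npm install && npm start".
// wait-for-mongo.js: poll a TCP port until it accepts connections
var net = require('net');
var host = 'mongodb';
var port = 27017;
var deadline = Date.now() + 15000; // same 15s budget as --timeout=15
function attempt() {
  var socket = net.connect(port, host, function () {
    // connection succeeded: mongodb is accepting TCP connections
    socket.end();
    process.exit(0);
  });
  socket.on('error', function () {
    socket.destroy();
    if (Date.now() > deadline) {
      console.error('Operation timed out');
      process.exit(1);
    }
    setTimeout(attempt, 1000); // retry once per second
  });
}
attempt();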
I found the problem: the node:10 image doesn't have the nc command installed, so the script was failing. I switched to the node:10-alpine image and it worked.
I'm trying to compose a Docker app with two containers:
mongo
app
The mongo container works just fine; meanwhile, the app cannot connect to mongo. Neither the node.js app nor mongostat can. The weird part is, I ran this project on two computers with Win10, and it works normally on the other one.
These are the logs from the mongo container when I run node app.js or mongostat --uri "mongodb://mongo:27017/project" from the app container:
2019-05-22T09:33:52.225+0000 I NETWORK [conn17] received client metadata from 192.168.96.2:42916 conn17: { driver: { name: "nodejs", version: "3.1.10" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.9.125-linuxkit" }, platform: "Node.js v10.15.3, LE, mongodb-core: 3.1.9" }
2019-05-22T09:33:52.231+0000 I NETWORK [conn17] end connection 192.168.96.2:42916 (0 connections now open)
This means both containers can see each other, so the .yml file should be fine. If the problem were in the code, then it shouldn't work on either computer.
Dockerfile:
FROM node:10.15.3-alpine
RUN apk update && apk --no-cache --virtual build-dependencies add python make g++ && apk del build-dependencies
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
COPY --chown=node:node . .
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]
docker-compose.yml:
version: "3.5"
services:
app:
container_name: app
restart: always
build: .
ports:
- "3000:3000"
networks:
- mongo
mongo:
restart: always
container_name: mongo
image: mongo
expose:
- 27017
volumes:
- mongodata:/data/db
ports:
- '27017:27017'
networks:
- mongo
volumes:
mongodata:
networks:
mongo:
external: true
snippet from app.js:
MongoClient.connect('mongodb://mongo:27017/project', {useNewUrlParser: true}, (err, client) => {
  if (err) throw err; // throws MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongo mongo:27017]
  console.log("connected");
  client.close(); // at the moment this line is not being reached because of throw err;
});
Does it help if you insert a sleep 10 in your application before connecting to the Mongo DB? If so, adding something like waitforit (https://github.com/maxcnunes/waitforit) might help.
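Instead of a fixed sleep, a retry loop is usually more robust. A sketch along those lines (using the same MongoClient call and connection string as in the question; the attempt count and delay are arbitrary):
var MongoClient = require('mongodb').MongoClient;
// retry a few times, since mongo may not be ready when this container starts
function connectWithRetry(attemptsLeft) {
  MongoClient.connect('mongodb://mongo:27017/project', {useNewUrlParser: true}, function (err, client) {
    if (err) {
      if (attemptsLeft <= 0) throw err;
      setTimeout(function () {
        connectWithRetry(attemptsLeft - 1);
      }, 2000); // wait two seconds between attempts
      return;
    }
    console.log('connected');
    client.close();
  });
}
connectWithRetry(10);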
Since you are getting a getaddrinfo ENOTFOUND error, the mongo hostname isn't resolving. Usually, that happens for one of two reasons: 1) your containers aren't on the same network or 2) the other container isn't up and running yet. Seeing that they are on the same network, it sounds like it's something with the container being up.
To troubleshoot, I would start another container, put it on the network, and validate the mongo hostname resolves.
docker container run --rm -ti --network mongo ubuntu
$ apt update && apt install -y dnsutils
$ dig mongo
At this point, you should see the A record resolve to the database. If not, validate the mongo database container is up and running.
You can also try doing this within your app container as well. If that's working, then using something like waitforit should work. This is a common issue, as apps may start up before the database is either running or ready to accept connections.
As one other item of feedback, you don't need to expose the mongo port. This is making it accessible to the world, which most likely isn't what you want. You can still do container-to-container communication without exposing the port.
After hours of trying multiple things, I found the solution: turn off the Windows Firewall. That's it.
Thanks, I appreciate your help.