Connect mongo and sapper server with docker - node.js

I am working on a Sapper / Svelte project and I need to build the Sapper project and connect it to MongoDB (I need to start Mongo from docker-compose.yml).
At the moment I am trying to connect the app to the local Mongo instance on localhost:27017, but it can't establish the connection. What should I do?
Here is my docker-compose:
version: "3.4"
services:
myapp:
image: my_image
deploy:
update_config:
delay: 30s
parallelism: 1
failure_action: rollback
ports:
- "3000:3000"
and here is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY static static
COPY emails emails
COPY package.json .
ENV NODE_ENV production
RUN npm install
COPY __sapper__/build __sapper__/build
EXPOSE 3000
CMD ["node", "__sapper__/build/index.js"]
Also, what should I do to start the Mongo deployment directly from Compose? I have Mongo in Docker, but I need to start both services directly from Compose.

I think a mongo service should be added to the services section of docker-compose.yml, for example:
services:
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
Then the Node application can reach MongoDB by the service name (e.g. mongodb:27017).
I think this URL will help:
https://hub.docker.com/_/mongo
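For example, the connection from the Node side could look roughly like this (a minimal sketch, assuming the app uses mongoose and the MONGO_INITDB_* credentials above; the database name myapp is just a placeholder):

const mongoose = require('mongoose');

// "mongodb" is the compose service name, resolvable from the app container
const url = 'mongodb://root:example@mongodb:27017/myapp?authSource=admin';

mongoose.connect(url)
  .then(() => console.log('connected to mongo'))
  .catch((err) => console.error('mongo connection failed', err));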

version: "3.4"
services:
app:
image: yourimage
ports:
- "3000:3000"
environment:
- MONGODB_URL=mongodb://yourip/yourdb
mongodb:
image: mongo
restart: always
ports:
- "yourportsdb:yourportsdb"
It is not necessary to authenticate to Mongo with a user and password; if you want to, you can pass the environment variables as suggested by @Jihoon Yeo.
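If you go the environment-variable route, the app would read MONGODB_URL at startup, something like this (a sketch, assuming mongoose; the variable name matches the compose file above):

const mongoose = require('mongoose');

// MONGODB_URL is injected by docker-compose; the fallback uses the service name
const url = process.env.MONGODB_URL || 'mongodb://mongodb:27017/yourdb';

mongoose.connect(url)
  .then(() => console.log('connected to', url))
  .catch((err) => {
    console.error('could not connect:', err.message);
    process.exit(1);
  });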

Related

Docker NodeJS app can't connect to postgres database

I am fairly new to Docker and docker-compose. I tried to spin up a few services using Docker, consisting of a Node.js (Nest.js) API, a Postgres DB and pgAdmin. Before the API (Node.js) app was dockerized I could connect to the Docker database containers, but now that I have also dockerized the Node app, it is not connecting anymore and I am clueless why. Is there anything wrong with the way I have set it up?
Here is my docker-compose file
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
This is the Node.js app Dockerfile, which builds successfully; in the logs I see the app is trying to connect to the database but it can't (no specific error), it just doesn't find the DB.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
    apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
I have 2 env files in my project's root directory.
.env
docker.env
As mentioned above, when I remove the "nftapi" service from Docker and run the Node.js app with a simple npm start, it connects to the postgres container.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),
The host from the .env file that is used in the TypeORM module is localhost.
When using networks with docker-compose you should use the name of the service as the hostname. So in your case the hostname should be postgres, not localhost.
You can read more about it here:
https://docs.docker.com/compose/networking/
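A quick way to verify this is a tiny connectivity check run inside the nftapi container (a sketch, assuming the pg package is available; the env variable names mirror the TypeORM config above):

const { Client } = require('pg');

const client = new Client({
  host: process.env.POSTGRES_HOST || 'postgres', // compose service name, not localhost
  port: Number(process.env.POSTGRES_PORT) || 5432,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
});

client.connect()
  .then(() => client.query('SELECT 1'))
  .then(() => { console.log('postgres reachable'); return client.end(); })
  .catch((err) => { console.error('connection failed:', err.message); process.exit(1); });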

NodeJS 14 in a Docker container can't connect to Postgres DB (in/out docker)

I'm making a React Native app using a REST API (Node.js, Express) and PostgreSQL.
Everything works fine when hosted on my local machine.
Everything works fine when the API is hosted on my machine and PostgreSQL is in a Docker container.
But when both the backend and the frontend are in Docker, the database is reachable from any computer on my local network, but not by the backend.
I'm using docker-compose.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - "8080:8080"
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
volumes:
  db-data:
.env_docker and .env have the same parameters (just the name changes).
Here are my Dockerfiles:
Backend
FROM node:14.1
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Database
FROM postgres:alpine
COPY ./wallnerdb.sql /docker-entrypoint-initdb.d/
I tried changing the hostname in the connection URL to the name of the Docker container, to my host IP address, and to localhost, but with no results.
It's also the same .env (the file in my Node repo with db_name, password, etc.) that I use locally to connect my backend to the DB.
Since you are using Node.js 14 in the Docker container, make sure that you have the latest pg dependency installed:
https://github.com/brianc/node-postgres/issues/2180
Alternatively: downgrade to Node 12.
Also make sure that both the database and the "backend" are on the same network, and that the backend "depends_on" the database.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - '8080:8080'
    networks:
      - default
    depends_on:
      - wallnerdatabase
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
    networks:
      - default
volumes:
  db-data:
networks:
  default:
This should not be necessary in your case - as pointed out in the comments - since Docker Compose already creates a default network.
The container name "wallnerdatabase" is the host name of your database - if not configured otherwise.
I expect the issue to be in the database connection URL since you did not share it.
Containers in the same network in a docker-compose.yml can reach each other using the service name. In your case the service name of the database is wallnerdatabase so this is the hostname that you should use in the database connection URL.
The database connection URL that you should use in your backend service should be similar to this:
postgres://user:password@wallnerdatabase:5432/dbname
Also make sure that the backend code is calling the database using the hostname wallnerdatabase as it is defined in the docker-compose.yml file.
Here is the reference on Networking in Docker Compose.
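Put together, the backend's connection code would look roughly like this (a sketch, assuming the pg package and an illustrative DATABASE_URL variable; user, password and dbname are placeholders from the URL above):

const { Pool } = require('pg');

// "wallnerdatabase" is the compose service name used as the host
const connectionString =
  process.env.DATABASE_URL ||
  'postgres://user:password@wallnerdatabase:5432/dbname';

const pool = new Pool({ connectionString });

pool.query('SELECT NOW()')
  .then((res) => console.log('db time:', res.rows[0].now))
  .catch((err) => console.error('cannot reach wallnerdatabase:', err.message));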
You should access your DB using the service name as the hostname. Here is my working example - https://gitlab.com/gintsgints/vue-fullstack/-/blob/master/docker-compose.yml

Node docker not working for feathersjs - Container running but localhost not accessible

I have a docker-compose file with a Mongo and a Node container. Mongo works great, but the Node Feathers container is not accessible from localhost:3030 (I have also tried 127.0.0.1:3030 and 0.0.0.0:3030).
version: "3"
services:
app:
image: node:lts-alpine
volumes:
- ./feathers-full:/app
working_dir: /app
depends_on:
- mongo
environment:
NODE_ENV: development
command: npm run dev
ports:
- 3030:3030
expose:
- "3030"
mongo:
image: mongo
ports:
- 27017:27017
expose:
- "27017"
volumes:
- ./data/db:/data/db
Are you binding to 127.0.0.1 in the Feathers server? If so, you won't be able to access the server from outside the container. You need to bind to 0.0.0.0. See https://pythonspeed.com/articles/docker-connection-refused/ for an explanation of why.
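For illustration, the difference is only the host argument passed to listen (a sketch using plain Express, since a Feathers app wraps an Express server; 3030 matches the port mapping above):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

// 0.0.0.0 listens on all interfaces; 127.0.0.1 would only be reachable
// from inside the container itself
app.listen(3030, '0.0.0.0', () => {
  console.log('listening on 0.0.0.0:3030');
});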

Connect docker compose containers without links

https://docs.docker.com/compose/networking/
In the official Docker documentation above, I found this part:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
So I understood from this paragraph that I can connect Docker containers to each other without links: or networks: explicitly, because the docker-compose.yml snippet above doesn't have a links: or networks: part, and the document says web's application code could connect to the URL postgres://db:5432.
So I tried to test a simple docker-compose setup with Node.js Express and MongoDB together, following the approach above. I thought I could connect to MongoDB in the Express app with just mongodb://mongo:27017/myapp, but I cannot connect to MongoDB from the Express container. I think I followed Docker's official manual, but I don't know why it's not working. Of course I can connect to MongoDB using links: or networks:, but I heard links is deprecated and I cannot find the proper way to use networks:.
I think I might have misunderstood something. Please correct me.
Below is my docker-compose.yml
version: '3'
services:
  app:
    container_name: node
    restart: always
    build: .
    ports:
      - '3000:3000'
  mongo:
    image: mongo
    ports:
      - '27017:27017'
In the Express app, I connect to MongoDB with:
mongoose.connect('mongodb://mongo:27017/myapp', {
  useMongoClient: true
});
// also doesn't work with mongodb://mongo/myapp
Plus, the Dockerfile:
FROM node:10.17-alpine3.9
ENV NODE_ENV development
WORKDIR /usr/src/app
COPY ["package*.json", "npm-shrinkwrap.json*", "./"]
RUN rm -rf node_modules
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    g++ \
    && npm install \
    && apk del build-dependencies
COPY . .
EXPOSE 3000
CMD npm start
If you want to connect to a Mongo instance running locally on the host, then you have to select host network mode.
docker-compose.yml file content:
version: '2.1'
services:
  z2padmin_docker:
    image: z2padmin_docker
    build: .
    environment:
      NODE_ENV: production
    volumes: [/home/ankit/Z2PDATAHUB/uploads:/mnt/Z2PDATAHUB/uploads]
    ports:
      - 5000:5000
    network_mode: host
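With network_mode: host the container shares the host's network stack (this generally only behaves this way on Linux hosts), so a Mongo instance running on the host is reachable at localhost:27017. A minimal sketch, assuming mongoose and a placeholder database name:

const mongoose = require('mongoose');

// host networking: localhost here is the Docker host itself
mongoose.connect('mongodb://localhost:27017/myapp')
  .then(() => console.log('connected via host network'))
  .catch((err) => console.error(err));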

Mongo service with specific port exposed in a bridge network docker

I have a sample app which consists of three parts:
mongo database
node api (server side)
angular web app (client side)
The goal is to containerize those three parts and run the app.
To get there, I've created a docker-compose.yml file like below:
# docker-compose -f docker-compose.prod.yml build
# docker-compose -f docker-compose.prod.yml up -d
# docker-compose -f docker-compose.prod.yml down
version: '3'
services:
  mongodb:
    image: mongo
    container_name: mongodb-instance-microservices
    ports:
      - "27020:27017"
    networks:
      - microservices-network
  client:
    container_name: client-instance-microservices
    image: client-microservices
    build:
      context: ./client
      dockerfile: prod.dockerfile
    ports:
      - "8080:80"
      - "443:443"
    depends_on:
      - api
    networks:
      - microservices-network
  api:
    container_name: api-instance-microservices
    image: api-microservices
    build:
      context: ./server
      dockerfile: server.dockerfile
    environment:
      - NODE_ENV=production
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    networks:
      - microservices-network
networks:
  microservices-network:
    driver: bridge
On the server side I am running the main app.js, which is trying to connect to MongoDB using this connection string:
mongodb://mongodb-instance-microservices:27020/TestDatabase
The problem is that the server cannot connect to the MongoDB container.
I tried to expose the default port for Mongo like below:
mongodb:
  image: mongo
  container_name: mongodb-instance-microservices
  ports:
    - "27017:27017"
  networks:
    - microservices-network
and update the connection string in the app.js file like this:
mongodb://mongodb-instance-microservices:27017/TestDatabase
and it works fine.
The question is: how do I expose a different port for the Mongo container and make it work?
When you connect between services using a Docker-internal network, you always connect to the internal port of the service. You don't explicitly need to publish ports (in Compose, with a ports: directive); if you do, the port you'd connect to is the one on the right.
Style-wise, also note that you don't need to manually declare a private network with default options that will only be used within this Docker Compose file (Compose will do something very similar to that for you), and you don't need to declare container_name: just for inter-container connectivity (Compose will add a network alias that matches the name of the service).
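In other words, with the original 27020:27017 mapping the API container should still connect to port 27017; 27020 only matters from the host machine. A sketch of the two URLs, assuming mongoose:

const mongoose = require('mongoose');

// from another container on microservices-network: service/container name + internal port
const urlInsideCompose = 'mongodb://mongodb-instance-microservices:27017/TestDatabase';

// from the host machine it would instead be the published port:
// const urlFromHost = 'mongodb://localhost:27020/TestDatabase';

mongoose.connect(urlInsideCompose)
  .then(() => console.log('connected'))
  .catch((err) => console.error(err));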
