I am fairly new to Docker and docker-compose. I tried to spin up a few services using Docker, consisting of a Node.js (NestJS) API, a Postgres DB, and pgAdmin. Before the API (Node.js) app was dockerized I could connect to the Docker database containers, but now that I have also dockerized the Node app, it no longer connects and I am clueless why. Is there anything wrong with the way I have set it up?
Here is my docker-compose file:
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
This is the Node.js app's Dockerfile, which builds successfully. In the logs I can see that the app is trying to connect to the database but can't (no specific error, it just doesn't find the db).
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
    apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
I have 2 env files in my project's root directory.
.env
docker.env
As mentioned above, when I remove the "nftapi" service from docker-compose and run the Node.js app with a simple npm start, it connects to the postgres container.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),
The host from the .env file that is used in the TypeORM module is localhost.
When using networks with docker-compose you should use the name of the service as your hostname.
So in your case the hostname should be postgres, not localhost.
You can read more about it here:
https://docs.docker.com/compose/networking/
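As a minimal sketch of the fix (assuming your .env currently sets POSTGRES_HOST=localhost), you can either change that value in .env or override it just for the container in docker-compose:

services:
  nftapi:
    env_file:
      - .env
    environment:
      # environment entries take precedence over env_file values;
      # "postgres" is the Compose service name, which resolves to the
      # database container on the shared network (hypothetical override)
      POSTGRES_HOST: postgres

Note that localhost inside a container refers to the container itself, not to your host machine or to the other containers.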
Related
I created a JS app with Docker Compose, with a front end, a back end, and a common component, using Yarn Workspaces. It works on Linux, but I am out of ideas how to make it work on WSL.
The Docker Compose file:
# Use postgres/example user/password credentials
version: '3.1'
services:
  postgres:
    image: postgres:latest
    # restart: always
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: caddie_app
    ports:
      - '5432:5432'
  backend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3001:3001' # Nest
    depends_on:
      - postgres
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/backend", "start:dev"]
    environment:
      # with docker we listen to the postgres network, but it is reachable at localhost on our host
      DATABASE_URL: postgresql://postgres:password@postgres:5432/caddie_app?schema=public
  frontend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3000:3000' # React
    depends_on:
      - backend
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/frontend", "start"]
I can reach the database with DBeaver and I can fetch the React JS scripts on localhost:3000, but I cannot request the NestJS server on localhost:3001.
The NestJS server is listening on 0.0.0.0
await app.listen(3001, '0.0.0.0');
I allowed ports 3000 and 3001 on the firewall. I tried to request the Node.js server directly through the IP of WSL found in ipconfig, but the problem remains. I can't figure out what's wrong.
Thanks!
I have a dockerized Node.js API running on an EC2 instance. The app itself runs inside the container on port 5000 and is mapped via docker-compose to 5000:5000.
I actually wanted the API to listen on a port I can access via HTTPS, so I tried mapping the ports in docker-compose like this: "443:5000". I set an inbound rule in the EC2 security group allowing SSL (443) access from anywhere, but when I enter the IP or DNS name in my browser it does not respond.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
    apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
Is there anything I am missing?
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '443:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
I am working on a Sapper / Svelte project and I need to build the Sapper project and connect it to a MongoDB (I need to start the mongo container from docker-compose.yml).
At the moment I am trying to connect the app to the local mongo on localhost:27017, but it can't establish the connection. What should I do?
Here is my docker-compose file:
version: "3.4"
services:
myapp:
image: my_image
deploy:
update_config:
delay: 30s
parallelism: 1
failure_action: rollback
ports:
- "3000:3000"
and here is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY static static
COPY emails emails
COPY package.json .
ENV NODE_ENV production
RUN npm install
COPY __sapper__/build __sapper__/build
EXPOSE 3000
CMD ["node", "__sapper__/build/index.js"]
Also, what should I do to start the mongo deployment directly from compose? I have mongo in Docker, but I need to start both directly from compose.
I think a mongo service should be added to the services section of your docker-compose.yml.
For example:
services:
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
Then the node application can access mongodb by the service name (e.g. mongodb:27017).
I think this URL will help.
https://hub.docker.com/_/mongo
version: "3.4"
services:
app:
image: yourimage
ports:
- "3000:3000"
environment:
- MONGODB_URL=mongodb://yourip/yourdb
mongodb:
image: mongo
restart: always
ports:
- "yourportsdb:yourportsdb"
It is not necessary to authenticate to mongo with a user and password; eventually you can pass those as environment variables, as suggested by @Jihoon Yeo.
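As a minimal sketch under the same assumptions (yourimage and yourdb are placeholders), the connection URL should use the Compose service name as the hostname instead of an IP:

services:
  app:
    image: yourimage
    environment:
      # "mongodb" is the Compose service name from the snippet above,
      # resolvable from other containers on the same Compose network
      - MONGODB_URL=mongodb://mongodb:27017/yourdb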
I have been trying to dockerize an API, but Redis crashes; Node.js and MongoDB work.
The docker-compose.yaml file:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo
    networks:
      - webnet
    ports:
      - '27017:27017'
  redis:
    image: redis
    container_name: redis
    command: ["redis-server", "--bind", "redis", "--port", "6379"]
    ports:
      - '6379:6379'
    hostname: redis
  app:
    container_name: password-manager-docker
    restart: always
    build: .
    networks:
      - webnet
    ports:
      - '80:5000'
    links:
      - mongo
      - redis
    environment:
      MONGODB_URI: ${MONGODB_URI}
      clientID: ${clientID}
      clientSecret: ${clientSecret}
      PORT: ${PORT}
      NODE_ENV: ${NODE_ENV}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      JWT_EXPIRE: ${JWT_EXPIRE}
      REFRESH_TOKEN: ${REFRESH_TOKEN}
      JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
      JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
      JWT_COOKIE: ${JWT_COOKIE}
networks:
  webnet:
The Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
The error is: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379.
How can I fix this?
By default, all containers in a compose file join a default network where they can communicate with each other. But if you specify a network for some services, then you have to specify it for all of them. So you can either remove the network declaration from the app and mongo services, or specify it on the redis service.
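For example, a minimal sketch of the second option, attaching the existing redis service to webnet (the rest of the file stays as it is):

  redis:
    image: redis
    container_name: redis
    command: ["redis-server", "--bind", "redis", "--port", "6379"]
    networks:
      # join the same network as the app service so that the
      # hostname "redis" is resolvable from the app container
      - webnet
    ports:
      - '6379:6379'
    hostname: redis

The app would then need to connect to redis:6379 rather than 127.0.0.1:6379, assuming its Redis client reads the host from configuration instead of defaulting to localhost.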
Docker newbie here. The docker-compose file builds without any issues, but when I try to run my app on localhost:4200 I get the message "localhost didn't send any data" in Chrome and "the server unexpectedly dropped the connection" in Safari. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
  my-portal:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: host.docker.internal
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
      POSTGRES_HOST: host.docker.internal
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Dockerfiles:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I created the Angular image on its own and ran my app on localhost:4200, it worked fine. Please let me know if I am missing anything.
Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That listens on port 80, so that's the port number you need to use in the ports: directive:
services:
  quickcoms-portal:
    build: .
    ports:
      - "4200:80" # <-- second port must match nginx image's port
    depends_on:
      - backend
The EXPOSE directive in the first stage is completely ignored and you can delete it. The FROM nginx line causes docker build to essentially start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
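Putting that together, a trimmed sketch of the Angular Dockerfile (same stages as yours, with the ignored EXPOSE removed):

# Stage 1: build the Angular bundle; nothing from this stage ships
# except what the second stage explicitly copies out of it
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod

# Stage 2: serve the built files with the stock Nginx config,
# which listens on port 80 inside the container
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/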