docker-compose network error: can't connect to other host - node.js

I'm new to Docker and I'm having trouble connecting to my managed database cluster on a cloud service, which is separate from the Docker machine and network.
Recently I switched to docker-compose, because manually writing a docker run command for every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I get this error when connecting to the database:
Unhandled error event: Error: connect ENOENT %22rediss://default:password#test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENV values set in the Dockerfile, then everything works fine:
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential data to the code repository by keeping those details in the Dockerfile.
Here are my Dockerfile and docker-compose file.
Dockerfile:
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose.yml:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"

You should not include the " characters in the values of your environment variables in your docker-compose file; they become part of the value itself, which is why the URL in your error message is wrapped in %22 (a URL-encoded double quote).
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
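For completeness: the values themselves can live in a .env file next to docker-compose.yml, which Compose reads automatically for ${...} substitution. A hypothetical example (variable names taken from the compose file above, values are placeholders):
# .env (same directory as docker-compose.yml; placeholder values)
PRODUCTION=true
DB_SSL=true
DB_URL=postgres://user:password@db-host:25060/defaultdb
REDIS_URL=rediss://default:password@test.ondigitalocean.com:25061
SESSION_KEY=some-long-random-string
Note that the values here are unquoted as well; some Compose versions and docker --env-file treat quotes in a .env file as part of the value, which would reintroduce the same %22 problem.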

Related

Docker run exits with code 0 - Node.js & Postgres

I've built a small Node.js app with a Postgres database. I wanted to Dockerize it, and everything worked fine when I was using docker-compose. Now I'm trying to put it on AWS EC2 using docker run, but it doesn't work.
Docker Run command:
docker run -d -p 80:4000 --name appname hubname/appname
The container starts and stops immediately.
I'm fairly new to Docker, so I guess I'm missing something.
Please find my Dockerfile:
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
RUN npm start
My docker-compose:
version: '3.7'
services:
  appname-db:
    image: postgres
    container_name: appname-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    volumes:
      - appname-db:/var/lib/postgresql/data
  ts-node-docker:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    volumes:
      - ./src:/home/node/app/src
      - ./nodemon.json:/home/node/app/nodemon.json
    container_name: ts-node-docker
    environment:
      DB_HOST: ${DB_HOST}
      DB_SCHEMA: ${DB_SCHEMA}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      appname-db:
        condition: service_healthy
    expose:
      - '4000'
    ports:
      - '80:4000'
    stdin_open: true
    command: npm run start
volumes:
  appname-db:
When I do docker ps -a, I see that the container started and exited right after.
I can't find any logs whatsoever, which is very frustrating.
Any help/pointer would be really appreciated.
Edit 1 - Some additional information:
When I'm using docker-compose, we can see 2 containers; with docker run, however, there is only the one container, which has already exited.
You should use CMD, not RUN, when you execute the npm run start command. CMD runs every time a container is started, while RUN is executed just once, at build time.
This means your container starts, but nothing else is started inside it, so it exits because there's nothing more to execute. RUN was already executed the last time you built the image.
Keep RUN for npm install, otherwise every build would reinstall the node_modules you already have. But for the start command, do this instead: CMD ["npm", "run", "start"]
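Applied to the Dockerfile from the question, the production stage would end up looking something like this (a sketch; only the last instruction changes):
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
# RUN happens once, at image build time
RUN npm install
COPY . .

FROM base as production
ENV NODE_PATH=./build
# CMD runs every time a container starts from this image
CMD ["npm", "run", "start"]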
In case it could help you, I'll show you my working docker-compose.yml and Dockerfile for an Express API with a Postgres DB.
This is tested and works both on an Ubuntu server and on a dev machine (with docker-compose up for both environments). Note that there are two Dockerfiles (dev/prod).
You don't need to add password/user stuff to your docker-compose; you can just include your env_file, and Docker/docker-compose sets everything up correctly for you.
version: "3.8"
services:
# D A T A B A S E #
db:
env_file:
- ./.env
image: postgres
container_name: "db"
restart: always
ports:
- "${POSTGRES_PORT}:${POSTGRES_PORT}"
volumes:
- ./db-data/:/var/lib/postgresql/data/
# A P I #
api:
container_name: "api"
env_file:
- ./.env
build:
context: .
dockerfile: "DOCKERFILE.${NODE_ENV}"
ports:
- ${API_PORT}:${API_PORT}
depends_on:
- db
restart: always
volumes:
- /app/node_modules
- .:/usr/src/service
working_dir: /usr/src/service
# A D M I N E R #
adminer:
env_file:
- ./.env
container_name: "adminer"
image: adminer:latest
restart: always
ports:
- ${ADMINER_PORT}:${ADMINER_PORT}
depends_on:
- db
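For reference, the .env this compose file expects would contain roughly the following (variable names are taken from the file above; values are placeholders):
# .env (placeholder values; adjust to your setup)
NODE_ENV=dev
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_PORT=5432
API_PORT=8080
ADMINER_PORT=8081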
Dockerfile.dev:
FROM node:14.17.3
ENV NODE_ENV=dev
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
Dockerfile.prod is pretty much the same, just change the environment to prod.
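A sketch of what that Dockerfile.prod would look like under that description (hypothetical, not the author's exact file):
FROM node:14.17.3
# the only intended difference from Dockerfile.dev
ENV NODE_ENV=prod
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]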
One more thing I see is that you include the full path from your home directory when copying; try without that, and just copy relative to the Dockerfile's location at /.

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with mocha). When executing docker-compose up --build, the order of execution seems messed up and not in the correct sequence. Both the MySQL db and the serverless app need to be in a working state for the test to run correctly (depends_on barely works).
Hence I tried using the dockerize tool to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile:
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry. You don't need to override container_name:. The default command: should come from the Dockerfile CMD. expose: and links: are related to first-generation Docker networking and aren't necessary. And overwriting the image code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
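As an alternative to dockerize for the "depends_on barely works" problem, the long-form depends_on in the Compose specification can wait on a health check. A hedged sketch (the mysqladmin ping probe is a common pattern for the mysql image; this needs a reasonably current docker compose):
services:
  mysql-coredb:
    image: mysql:5.6
    healthcheck:
      # succeeds once the MySQL server is accepting connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  test:
    build: .
    depends_on:
      mysql-coredb:
        condition: service_healthy
    command: npm test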

My Docker container for a MEAN website dashboard does not work: Node.js keeps restarting

For a university project I am trying to get a MEAN stack website up and running via Docker images and containers. However, when I run the command:
docker-compose up --build
it results in nodejs permanently restarting.
When the command is run, I get these messages at various points, which look like errors to me:
failed to get console mode for stdout: The handle is invalid.
and
nodejs exited with code 0
and then it seems like the connection to the MongoDB keeps starting and ending with these errors:
db | 2021-03-30T08:50:22.519+0000 I NETWORK [listener] connection accepted from 172.21.0.2:39627 #27 (1 connection now open)
db | 2021-03-30T08:50:22.519+0000 I NETWORK [conn27] end connection 172.21.0.2:39627 (0 connections now open)
Prior to running the above command, I have tested that the website works without a connection to MongoDB by running docker build . in the Angular root folder containing the Dockerfile; the Express API aspect of it works, as I can visit the dashboard at http://localhost:3000/.
The full command sequence I run to reach the failed state above is:
docker-compose pull → docker-compose build → docker-compose up --build.
I am using Docker Desktop and running the commands in PowerShell on Windows 10 Pro.
My Dockerfile is as follows:
# We use the official image as a parent image.
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
# Set the working directory.
WORKDIR /home/node/app
# Copy the file(s) from your host to your current location.
COPY package*.json ./
# Change the user to node. This will apply to both the runtime user and the following commands.
USER node
# Run the command inside your image filesystem.
RUN npm install
COPY --chown=node:node . .
# Build the website
RUN ./node_modules/.bin/ng build
# Add metadata to the image to describe which port the container is listening on at runtime.
EXPOSE 3000
# Run the specified command within the container.
CMD [ "node", "server.js" ]
And my docker-compose.yml is:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon server.js
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
These are the same files that were provided by the university and supposedly work.
I am new to Docker and containerization, but I shall try to provide any additional information, should you need it.

How to solve docker-entrypoint.sh executable file not found in $PATH: unknown?

I'm having trouble trying to start the backend for my application on Docker. I get the following error when I select Compose Up in VS Code. I read that I need to add RUN chmod +x docker-entrypoint.sh to my Dockerfile, but it didn't solve the problem.
Successfully built some_id_abc123
mongodb is up-to-date
cache is up-to-date
Recreating some_project_server ... error
ERROR: for some_project_server Cannot start service server: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown
ERROR: for server Cannot start service server: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown
ERROR: Encountered errors while bringing up the project.
The terminal process "/bin/zsh '-c', 'docker-compose -f "docker-compose.yml" up -d --build'" terminated with exit code: 1.
Dockerfile:
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run dev
docker-compose.yml:
version: "3"
services:
database:
image: mongo
container_name: mongodb
restart: always
ports:
- "27017:27017"
environment:
- MONGO_INITDB_DATABASE=some_db
- MONGO_INITDB_ROOT_USERNAME=some_user
- MONGO_INITDB_ROOT_PASSWORD=some_password
volumes:
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- ./mongo-volume:/data/db
networks:
- server-network
redis:
image: redis
container_name: cache
command: redis-server --requirepass some_password
restart: always
volumes:
- ./redis-data:/var/lib/redis
ports:
- "6379:6379"
networks:
- server-network
server:
build: .
depends_on:
- database
- redis
environment:
- ENV_VARIABLES=some_env_variables
ports:
- "3000:3000"
volumes:
- .:/usr/app
networks:
- server-network
networks:
server-network:
driver: bridge
Finally, here's the file structure of the project:
project
  - src
  - test
  - .dockerignore
  - docker-compose.yml
  - Dockerfile
  - mongo-init.js
  - package-lock.json
  - package.json
What is this problem's origin, and how can I solve it? I'm starting out with Docker, and this is the first professional project I'm working on where Docker is necessary.
Try
docker compose build
and then
docker compose up
Found it HERE.
Worked for me.
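The idea is to force a rebuild before starting, so you're not starting containers from a stale or partially built image. A slightly fuller sketch using standard docker compose flags:
# stop and remove the existing containers
docker compose down
# rebuild from the current Dockerfile, ignoring cached layers
docker compose build --no-cache
# start again from the freshly built image
docker compose up -d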

Docker is not building container with changes in source code

I'm relatively new to Docker, and I just created a Node.js application that should connect to other services also running on Docker.
So I got the source code and a Dockerfile to set up this image, and a docker-compose file to orchestrate the environment.
I had a few problems in the beginning, so I updated my source code and found out that it's not picked up by the next docker-compose build.
For example, I commented out all the lines that connect to Redis and MongoDB. If I run the application locally, it's fine. But when I create it again in a container, I still get the "Connection refused..." errors.
I've tried many things and this is what I have at the moment:
Dockerfile
FROM node:9
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8090
docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8090:8090"
    container_name: app
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    container_name: redis
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
up.sh
sudo docker stop app
sudo docker rm app
docker-compose build --no-cache app
sudo docker-compose up --force-recreate
Any ideas on what could be the problem? Why doesn't it use the current source code? It seems to be using some sort of cache.
