I'm trying to understand Node containers within a docker-compose environment.
I've built a simple Docker image from a Node.js Alpine base image. It runs a simple hello-world Express server, and it has been pushed to the public Docker Hub repo, zipzit/node-web-app
The container works great from the command line, via docker run -p 49160:8080 -d zipzit/node-web-app
Function verified via browser at http://localhost:49160
After the command above is run, I can shell into the running container via $ docker exec -it <container_id> sh
I can look at the directories within the running container and see my server.js, package.json files, etc.
From what I can tell from the command-line testing, everything seems to be working just fine.
Not sure why, but this fails completely when I try to use the image in a docker-compose test.
version: "3.8"
services:
nginx-proxy:
# http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
mongo_db:
image: mongo
ports:
- "27017:27017"
volumes:
- ./data:/data/db
restart: on-failure:8
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: mongo_password
react_frontend:
image: "zipzit/node-web-app"
## working_dir: /usr/src/app ## remarks on / off here testing all the combinations
## environment:
## - NODE_ENV=develop
ports:
- "3000:3000"
volumes:
- ./frontend:/usr/src/app
backend_server:
## image: "node:current-alpine3.10"
image: "zipzit/node-web-app"
user: "node"
working_dir: /usr/src/app
environment:
- NODE_ENV=develop
ports:
- "8080:8080"
volumes:
- ./backend:/usr/src/app
- ./server_error_log:/usr/src/app/error.log
links:
- mongo_db
depends_on:
- mongo_db
volumes:
frontend: {}
backend: {}
data: {}
server_error_log: {}
When I run docker-compose up -d and let things settle, the two containers based on zipzit/node-web-app start and then immediately exit.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92710df9aa89 zipzit/node-web-app "docker-entrypoint.s…" 6 seconds ago Exited (1) 5 seconds ago root_react_frontend_1
48a8abdf02ca zipzit/node-web-app "docker-entrypoint.s…" 8 minutes ago Exited (1) 5 seconds ago root_backend_server_1
27afed70afa0 jwilder/nginx-proxy "/app/docker-entrypo…" 20 minutes ago Up 20 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
b44344f31e52 mongo "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:27017->27017/tcp root_mongo_db_1
When I go to docker logs <container_id> I see:
node:internal/modules/cjs/loader:903
throw err;
^
Error: Cannot find module '/usr/src/app/server.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:900:15)
at Function.Module._load (node:internal/modules/cjs/loader:745:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:72:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
When I check out the frontend volume, it's totally blank. No server.js file, nothing.
Note: I'm running the docker-compose stuff on an Ubuntu Virtual Private Server (VPS). I built the container and did my command-line testing on a Mac laptop with Docker. Edit: I went back and did complete docker-compose up -d testing on the laptop, with exactly the same errors as observed on the VPS.
I don't understand what's happening here. Why does that container work from the command line but not via docker-compose?
You are bind-mounting ./frontend and ./backend into containers that run the same image, whose command is:
CMD ["node", "server.js"]
Are you sure this is the right command? Are you sure there is a server.js in each folder mounted from your host?
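One quick way to check what actually ends up in the container is to override the command so it just lists the mount; a minimal sketch against the compose file above (service and paths as in the question):
backend_server:
  image: "zipzit/node-web-app"
  working_dir: /usr/src/app
  volumes:
    - ./backend:/usr/src/app
  # Override the image's CMD so the container lists what the bind
  # mount actually contains, then exits.
  command: ["ls", "-la", "/usr/src/app"]
If ./backend on the host is empty, the listing is empty too, which matches the MODULE_NOT_FOUND error.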
Hope this helps.
Looks like you're trying to mount local paths into your containers, but you're then declaring those paths as volumes in the compose file.
The simple solution is to remove the volumes section of your compose file.
Here's the reference. It's not very clear from the docs, but basically: when you mount a volume, docker-compose first looks for it in the top-level volumes section. If it is present, compose uses that volume; the name is not treated as a local path in this scenario, it is the ID of a volume that is created on the fly (and will be empty). If the volume is not declared in volumes, it is treated as a local path and mounted as such.
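To make the two forms concrete, a minimal sketch (service and volume names illustrative):
services:
  react_frontend:
    image: "zipzit/node-web-app"
    volumes:
      # Named volume: created and managed by Docker, starts out empty.
      - frontend:/usr/src/app
  backend_server:
    image: "zipzit/node-web-app"
    volumes:
      # Bind mount: the ./backend directory on the host, contents and all.
      - ./backend:/usr/src/app
volumes:
  frontend: {} # declaration that makes "frontend" a named volume
With the top-level volumes block removed, the ./-prefixed entries are plain bind mounts of host directories.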
Related
My Dockerfile is:
# syntax=docker/dockerfile:1
FROM python:3.7.2
RUN mkdir my_app
COPY . my_app
WORKDIR /my_app
ARG DB_URL=postgresql://postgres:postgres@my_app_db:5432/appdb
ARG KEY=abcdfg1
ENV DATABASE_URL=$DB_URL
EXPOSE 8080:8080
EXPOSE 5432:5432
RUN pip install -r requirements.txt
WORKDIR /my_app/app
RUN python manage.py db init
CMD ["python", "my_app.py" ]
My docker-compose.yml is:
version: '3'
services:
postgres_db:
image: postgres:11.1
container_name: my_app_db
ports:
- 5432:5432
restart: always
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: appdb
volumes:
- pgdata:/var/lib/postgresql/data/
app_api:
build:
context: .
dockerfile: Dockerfile
container_name: my_app
ports:
- 8080:8080
volumes:
- ./:/app
depends_on:
- postgres_db
volumes:
pgdata:
The Flask my_app.py:
if __name__ == '__main__':
create_app(app).run(host='0.0.0.0')
I run the commands below:
docker login
docker-compose up -d
Two containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f495fd1f7b38 my_project_my_app "python my_app.py…" 3 hours ago Up 3 hours 5432/tcp, 0.0.0.0:8080->8080/tcp my_app
2ae314034656 postgres:11.1 "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:5432->5432/tcp my_app_db
I can see the terminal output of my_app, running as below:
Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 602-498-436
I can run curl in that terminal (via docker exec -it my_app bash) and successfully call my APIs in the app and get responses. The app and the db container are communicating successfully.
But if I try to call my app from Postman on my PC and hit the URLs below:
http://127.0.0.1:8080/myapp/api/v1/endpoint/1
http://localhost:8080/myapp/api/v1/endpoint/1
nothing happens!!!
OS: Windows 10
1. How can I call my APIs from Postman or my local machine?
The problem seems to be between my PC and my_app on local Docker.
In the end, I did not know about the WSL 2 features. Once that was enabled, I re-installed Docker Desktop, configured WSL version 2, and installed the Ubuntu terminal for Windows.
DB
ports:
- 5433:5432
APP
ports:
- 5001:5000
I called http://localhost:5001/myapp/api and it worked.
Well, the actual question is how you run your Docker engine. With a Linux VM inside Hyper-V?
Are you using Docker Desktop?
The answer to that would be to find out what IP your VM is running/listening on; then you can call your API with:
http://yourVMIP:8080/myapp/api/v1/endpoint/1
And it will probably work. I know I had to find out my Hyper-V Linux IP to link it up for a test drive.
Fix for this case
In this case the actual issue was the port: the Dockerfile and the compose file were exposing port 8080, but the Flask application was running and binding itself to its default port, 5000.
Solution was:
In your docker-compose file you need to adjust the ports to:
ports:
- 8080:5000
and it will connect the right port on the host to the right one inside the application.
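For clarity, the mapping is always HOST:CONTAINER; a short annotated sketch of the corrected service (other keys omitted):
app_api:
  build: .
  ports:
    # host:container. Requests to localhost:8080 on your PC are
    # forwarded to port 5000 inside the container, where Flask
    # listens by default.
    - 8080:5000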
For a university project I am working on, I am trying to get a MEAN stack website up and running via Docker images and containers. However, when I run the command:
docker-compose up --build
It results in this: nodejs permanently restarting.
When the command is run, I get these messages, at various points, which look like errors to me:
failed to get console mode for stdout: The handle is invalid.
and
nodejs exited with code 0
and then it seems like the connection to MongoDB keeps starting and ending, with these errors:
db | 2021-03-30T08:50:22.519+0000 I NETWORK [listener] connection accepted from 172.21.0.2:39627 #27 (1 connection now open)
db | 2021-03-30T08:50:22.519+0000 I NETWORK [conn27] end connection 172.21.0.2:39627 (0 connections now open)
Prior to running the above command, I tested that the website works without a connection to MongoDB by running docker build . in the Angular root folder containing the Dockerfile; the Express API aspect of it works, as I can visit the dashboard at http://localhost:3000/.
The full command process I run to achieve the failed state image linked above is as follows:
docker-compose pull → docker-compose build → docker-compose up --build.
I am using Docker Desktop and running the commands in Powershell on Windows 10 Pro.
My Dockerfile is as follows:
# We use the official image as a parent image.
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
# Set the working directory.
WORKDIR /home/node/app
# Copy the file(s) from your host to your current location.
COPY package*.json ./
# Change the user to node. This will apply to both the runtime user and the following commands.
USER node
# Run the command inside your image filesystem.
RUN npm install
COPY --chown=node:node . .
# Building the website
RUN ./node_modules/.bin/ng build
# Add metadata to the image to describe which port the container is listening on at runtime.
EXPOSE 3000
# Run the specified command within the container.
CMD [ "node", "server.js" ]
And my docker-compose.yml is:
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "3000:3000"
networks:
- app-network
command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon server.js
db:
image: mongo:4.1.8-xenial
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
dbdata:
These are the same files that have been provided by the University and supposedly work.
I am new to Docker and containerization but shall try and provide you with any additional information, should you need it.
Ok I give up! I spent far too much time on this:
So I want my app inside a docker container to talk to my postgres which is inside another container.
Docker-compose.yml
version: "3.8"
services:
foodbudget-db:
container_name: foodbudget-db
image: postgres:12.4
restart: always
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: foodbudget
PGDATA: /var/lib/postgresql/data
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- 5433:5432
web:
image: node:14.10.1
env_file:
- .env
depends_on:
- foodbudget-db
ports:
- 8080:8080
build:
context: .
dockerfile: Dockerfile
Dockerfile
FROM node:14.10.1
WORKDIR /src/app
ADD https://github.com/palfrey/wait-for-db/releases/download/v1.0.0/wait-for-db-linux-x86 /src/app/wait-for-db
RUN chmod +x /src/app/wait-for-db
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
EXPOSE 8080
But I keep getting this error when I build the Dockerfile, even though the database is up and running according to docker ps. I tried connecting to the postgres database from my host machine, and it connected successfully...
Temporary error (pausing for 3 seconds): PostgresError { error: Error(Io(Custom { kind: Other, error: "failed to lookup address information: Name does not resolve" })) }
Has anyone created an app that talks to a db in another Docker container before?
This line is the issue:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5433 -t 1000000
Within a Docker network, you must use the container's internal port (5432) instead of the one published to the host:
RUN ./wait-for-db -m postgres -c postgresql://user:pass@foodbudget-db:5432 -t 1000000
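The reason: the ports entry 5433:5432 is HOST:CONTAINER, so 5433 only applies to clients connecting from the host machine; containers on the same compose network reach the database by service name on its container port. A short annotated sketch of the relevant part:
foodbudget-db:
  image: postgres:12.4
  ports:
    # 5433 (host) -> 5432 (container). Port 5433 is only for
    # connections from the host; other containers on the compose
    # network connect to foodbudget-db:5432 directly.
    - 5433:5432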
I'm new to Docker and I'm having issues connecting to my managed database cluster on a cloud service, which is separate from the Docker machine and network.
So recently I attempted to use docker-compose, because manually writing the docker run command on every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I'm having issues connecting to the database, with this error:
Unhandled error event: Error: connect ENOENT %22rediss://default:password@test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENV in the Dockerfile, then everything works fine.
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential details to the code repository by putting them in the Dockerfile.
Here is my dockerfile and docker-compose
dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
app:
container_name: kibblepaw-graphql
restart: always
build: .
ports:
- '4000:4000'
environment:
- PRODUCTION="${PRODUCTION}"
- DB_SSL="${DB_SSL}"
- DB_CERT="${DB_CERT}"
- DB_URL="${DB_URL}"
- REDIS_URL="${REDIS_URL}"
- SESSION_KEY="${SESSION_KEY}"
- AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
- AWS_BUCKET="${AWS_BUCKET}"
- AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
- AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the " characters around the values of your environment variables in your docker-compose file.
This should work:
version: '3.8'
services:
app:
container_name: kibblepaw-graphql
restart: always
build: .
ports:
- '4000:4000'
environment:
- PRODUCTION=${PRODUCTION}
- DB_SSL=${DB_SSL}
- DB_CERT=${DB_CERT}
- DB_URL=${DB_URL}
- REDIS_URL=${REDIS_URL}
- SESSION_KEY=${SESSION_KEY}
- AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
- AWS_BUCKET=${AWS_BUCKET}
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
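The reason is that compose passes each list item through literally: the quotes are not stripped by a shell, so they end up inside the value itself, which is the %22 (a URL-encoded quote character) in the error above. A two-line sketch:
environment:
  # Wrong: the value literally starts and ends with a quote character
  # - REDIS_URL="${REDIS_URL}"
  # Right: the substituted value is passed through as-is
  - REDIS_URL=${REDIS_URL}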
I'm using docker-compose to set up nginx and node
services:
nginx:
container_name: nginx
build: ./nginx/
ports:
- "80:80"
- "443:443"
links:
- node:node
volumes_from:
- node
volumes:
- /etc/nginx/ssl:/etc/nginx/ssl
node:
container_name: node
build: .
env_file: .env
volumes:
- /usr/src/app
- ./logs:/usr/src/app/logs
expose:
- "8000"
environment:
- NODE_ENV=production
command: npm run package
I have node and nginx share the same volume so that nginx can serve the static content generated by node.
When I update the source code in node, I remove the node container and rebuild it via the commands below:
docker rm node
docker-compose -f docker-compose.prod.yml up --build -d node
I can see that the new node container has the updated source code, with the properly updated static content:
docker exec -it node bash
root#e0cd1b990cd2:/usr/src/app# cat public/style.css
This shows the updated content I want to see:
.project_detail .owner{color:#ccc;padding:10px}
However, when I log in to the nginx container:
docker exec -it nginx bash
root#a459b271e787:/# cat /usr/src/app/public/style.css
.project_detail .owner{padding:10px}
As you can see, nginx is not able to see the newly updated static files served by node, despite the node update. It does work, however, if I restart the nginx container as well.
Am I doing something wrong? Do I have to restart both the nginx and node containers to see the updated content?
Instead of sharing one container's volume with another, share a common directory on the host with both containers. For example, if the directory is at /home/user/app, then it should appear in the volumes section like:
volumes:
- /home/user/app:/usr/src/app
This should be done for both containers.
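Applied to this compose file, a minimal sketch (the host path /home/user/app is illustrative):
services:
  nginx:
    build: ./nginx/
    volumes:
      - /home/user/app:/usr/src/app # same host directory...
  node:
    build: .
    volumes:
      - /home/user/app:/usr/src/app # ...mounted into both containers
Because both containers now read the same host directory, nginx picks up whatever static files node regenerates, without restarting either container.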