I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started up and looked fine, but my Node.js application wasn't installed correctly. It seems npm install didn't complete, and I had to run docker exec -it api bash and then npm i manually inside the container.
Here's my docker-compose.yml:
version: '2'
services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh
  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"
  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001
  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid
  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000

# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash

# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
  ln -s /usr/lib/app/node_modules ${modules_link}
fi

if [ -n "$STDIN_OPEN" ]; then
  # if we want to be interactive with our app container, it needs to run in
  # the background
  tail -f /dev/null
else
  nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest

RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
  && npm install -g mocha \
  && npm install -g nodemon \
  && rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
  && npm install -g mocha \
This will cause a non-zero exit code, and because of the && the following commands are not executed. That should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. To rebuild the image, use docker-compose build or simply docker-compose up --build. By default, docker-compose up only builds the image if it did not exist yet.
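For example, after changing the Dockerfile (flags shown for the classic docker-compose CLI; the newer docker compose accepts the same ones):
# rebuild the image, then (re)create and start the containers
docker-compose build
docker-compose up -d

# or do both in one step
docker-compose up -d --build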
I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with Mocha). When I execute docker-compose up --build, the order of execution is messed up and not in the correct sequence. Both the MySQL db and the serverless app need to be in a working state for the tests to run correctly (depends_on barely helps here, since it only waits for containers to start, not for the services inside them to be ready).
Hence I tried using the dockerize tool to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile.yml
FROM node:12

# Create app directory
RUN mkdir -p /app
WORKDIR /app

# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
  && tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
  && rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz

# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install

# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are relics of first-generation Docker networking and aren't necessary; and overwriting the image's code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
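With that layout you can run the whole test sequence in one shot and have Compose return the test container's exit status (the --exit-code-from flag implies --abort-on-container-exit, so everything stops once the tests finish):
docker-compose up --build --exit-code-from test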
Using docker-compose I want to build 3 containers: backend (Node.js), frontend (React), and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'

volumes:
  data:
My file structure:
Everything worked fine until I added the new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend's dependencies!
Naturally I have two Dockerfiles, one for each service; they are almost identical.
Backend:
Frontend:
Do you have any idea where the problem could be?
RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
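A less destructive alternative that is often enough when a stale image keeps being reused is to force a rebuild and recreate the containers; exact flags may vary slightly with your docker-compose version:
$ docker-compose build --no-cache frontend
$ docker-compose up --force-recreate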
The app builds locally, but when I run docker-compose up -d in my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
It has to do with my docker-compose.yml file, I think. The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website

networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge

volumes:
  postgres:
  redis:
My Dockerfile
FROM python:3.7.5-slim-buster

RUN apt-get update \
  && apt-get install -qq -y \
    build-essential libpq-dev --no-install-recommends

ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH

COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt

COPY . .

RUN pip install --editable .

CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird with ENV in my Dockerfile, so that a path is hard-coded to my local machine rather than the root directory on DigitalOcean?
I'm new to hosting Docker images, so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root, with /root as your working directory.
You appear to have copied the docker-compose.yml to the Droplet correctly, but you have not copied the .env file on which it depends: since the compose file sits in /root, its relative env_file path '.env' resolves to /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
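A minimal sketch of that copy step, run from your local project directory (droplet-ip is a placeholder for your Droplet's address):
# copy the local .env next to the docker-compose.yml on the Droplet
scp .env root@droplet-ip:/root/.env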
I'm running Docker on an Ubuntu server. For a while everything was working well; then Docker stopped, and when I tried to bring it up again I got this error:
ERROR: Service 'back' failed to build: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
Here is my docker-compose.yml
version: "2"
services:
back:
build: ./crypto_feed_back
restart: always
command: node server.js
volumes:
- .:/usr/app/
front:
build: ./crypto_feed_front
restart: always
command: npm start
volumes:
- .:/usr/app/
depends_on:
- back
fetcher:
build: ./crypto_feed_fetch
restart: always
command: /usr/sbin/crond -l 2 -f
depends_on:
- postgres
postgres:
image: postgres:12-alpine
restart: always
environment:
POSTGRES_USER: crypto_feed
POSTGRES_DB: crypto_feed
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
caddy:
image: elswork/arm-caddy:1.0.0
restart: always
environment:
ACME_AGREE: "true"
volumes:
- ./caddy/Caddyfile:/etc/Caddyfile
- ./caddy/certs:/root/.caddy
ports:
- 80:80/tcp
- 443:443/tcp
depends_on:
- front
And back's Dockerfile:
FROM node:alpine
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install --production
# If you are building your code for production
# RUN npm ci --only=production
COPY ./api/ ./api/
COPY ./server.js .
EXPOSE 3001
CMD npm start
If someone has an idea of how I can solve this, that would be really nice!
I'm running a Docker host on my Windows dev machine and have 2 identical images exposing different ports (3000 and 3001). Using the following docker-compose file I build and run the containers, but the container on port 3001 isn't reachable via localhost or my IP address.
Dockerfile (testdirectory)
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /testdirectory
WORKDIR /testdirectory
COPY package.json /testdirectory
RUN npm install --loglevel=warn
COPY . /testdirectory
EXPOSE 3000
CMD ["node", "index.js"]
Dockerfile (test2directory)
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /test2directory
WORKDIR /test2directory
COPY package.json /test2directory
RUN npm install --loglevel=warn
COPY . /test2directory
EXPOSE 3001
CMD ["node", "index.js"]
Docker-Compose file
version: '3'
services:
  testdirectory:
    container_name: testdirectory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=testdirectory
      - NODE_HOST=localhost
      - NODE_PORT=3000
      - DB_HOST=mongodb://mongo:27017/testdirectory
      - DB_PORT=27017
    build:
      context: ./test-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    command: npm start
  test2directory:
    container_name: test2directory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=test2directory
      - NODE_HOST=localhost
      - NODE_PORT=3001
      - DB_HOST=mongodb://mongo:27017/test2directory
      - DB_PORT=27017
    build:
      context: ./test2-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3001:3001"
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    image: mongo:3.4.4
    container_name: mongo
    ports:
      - 27017:27017
    volumes:
      - /data/db:/data/db
Is there anything obvious I'm missing? When I run
docker container port test2directory
it returns
3001/tcp -> 0.0.0.0:3001
Found the problem! Setting the HOST to localhost in the container caused the problem, and changing it to 0.0.0.0 got it working. Inside a container, localhost refers to the container's own loopback interface, so a server bound to it can't be reached through a published port; 0.0.0.0 binds to all interfaces.
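In the compose file above, that's a one-line change in each service's environment block (this assumes index.js actually passes NODE_HOST to its listen() call; if the host is hard-coded in the code, change it there instead):
environment:
  - NODE_HOST=0.0.0.0
  # ...rest unchanged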