I have a docker-compose file spinning up 3 apps: MySQL, phpMyAdmin and a Node.js app. You will find the compose file below.
The Node.js app uses Sequelize, which requires running a migration and a seed command when it initializes.
When I run docker-compose up --build, the build fails because MySQL returns the error getaddrinfo ENOTFOUND mysql.
I cannot figure out what I did wrong in the compose file, as it looks okay to me.
Both phpmyadmin and the auth app require mysql, so I have added mysql to their depends_on sections. It seems from the log that Compose tries to build auth before creating mysql.
Logs
Creating network "updials-auth_default" with the default driver
Building auth
Step 1/8 : FROM node:12.14.0
---> 6b5991bf650f
Step 2/8 : WORKDIR /var/www
---> Using cache
---> 21c89e8b8059
Step 3/8 : COPY . .
---> 73072a4bddb5
Step 4/8 : COPY package.json /usr/share/app
---> 886992b71802
Step 5/8 : EXPOSE 3001
---> Running in cd7c14183427
Removing intermediate container cd7c14183427
---> b93bcdf8c653
Step 6/8 : RUN npm install
---> Running in 4b6d75b77bab
npm WARN deprecated mkdirp@0.5.1: Legacy versions of mkdirp are no longer supported. Please update to mkdirp 1.x. (Note that the API surface has changed to use Promises in 1.x.)
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN rm not removing /var/www/node_modules/.bin/rimraf as it wasn't installed by /var/www/node_modules/rimraf
> bcrypt@3.0.8 install /var/www/node_modules/bcrypt
> node-pre-gyp install --fallback-to-build
node-pre-gyp WARN Using request for node-pre-gyp https download
[bcrypt] Success: "/var/www/node_modules/bcrypt/lib/binding/bcrypt_lib.node" is installed via remote
> ejs@2.7.4 postinstall /var/www/node_modules/ejs
> node ./postinstall.js
Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)
npm notice created a lockfile as package-lock.json. You should commit this file.
added 18 packages from 3 contributors, removed 9 packages, updated 467 packages and audited 1563 packages in 51.173s
22 packages are looking for funding
run `npm fund` for details
found 5 low severity vulnerabilities
run `npm audit fix` to fix them, or `npm audit` for details
Removing intermediate container 4b6d75b77bab
---> f3c15392ccc2
Step 7/8 : RUN npm run migrate && npm run seed
---> Running in cd58f889c907
> updials-auth@0.0.2 migrate /var/www
> npx sequelize-cli db:migrate
npx: installed 81 in 8.76s
Sequelize CLI [Node: 12.14.0, CLI: 5.5.1, ORM: 5.21.5]
Loaded configuration file "config/config.js".
Using environment "development".
ERROR: getaddrinfo ENOTFOUND mysql
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! updials-auth@0.0.2 migrate: `npx sequelize-cli db:migrate`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the updials-auth@0.0.2 migrate script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2020-03-29T15_36_20_888Z-debug.log
ERROR: Service 'auth' failed to build: The command '/bin/sh -c npm run migrate && npm run seed' returned a non-zero code: 1
Dockerfile
FROM node:12.14.0
#USER node
WORKDIR /var/www
COPY . .
COPY package.json /usr/share/app
#COPY package.lock.json /usr/share/app
EXPOSE 3001
RUN npm install
RUN npm run migrate && npm run seed
CMD ["npm", "start"]
docker-compose.yml
version: '3.7'
services:
  mysql:
    container_name: updials-auth-mysql
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: 'password'
      MYSQL_DATABASE: 'updials'
      MYSQL_USER: 'updials'
      MYSQL_PASSWORD: 'password'
    volumes:
      - database:/var/lib/mysql
  auth:
    container_name: updials-auth
    restart: always
    depends_on:
      - mysql
    build: .
    ports:
      - '3001:5002'
    environment:
      DB_HOST: 'mysql'
      DB_USER: 'updials'
      DB_PASS: 'password'
      DB_NAME: 'updials'
  phpmyadmin:
    container_name: phpmyadmin-updials-auth
    restart: always
    image: phpmyadmin/phpmyadmin:5.0.2
    depends_on:
      - mysql
    environment:
      MYSQL_USER: updials
      MYSQL_PASSWORD: password
    ports:
      - '4000:8080'
volumes:
  database:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/home/sisir/docker-databases/updials-auth'
The Dockerfile can never access a database, volumes, or other resources declared in the docker-compose.yml (outside that service's immediate build: block). The build runs as a separate stage; it doesn't get attached to the Compose network.
(Imagine running docker build and docker push on one system, and on a second system referencing the matching image:. In that setup the build-time system can't access the run-time database, and that's the basic model you should have in mind. More directly, you can delete and recreate your mysql container without rebuilding your auth image.)
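As a rough sketch (these commands just illustrate the model; your project doesn't need to run them in this order):
docker-compose build auth      # build time: no Compose network, so the name "mysql" cannot resolve
docker-compose up -d mysql     # run time: start (or recreate) the database container
docker-compose up -d auth      # run time: the auth container can now reach "mysql"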
The typical pattern to make this work is to write an entrypoint script. This becomes the main command your container runs; it gets passed the Dockerfile CMD (or Docker Compose command:) as command-line arguments. Since this runs at the point the container starts up, it does have access to the database, networks, environment variables, etc.
#!/bin/sh
set -e              # Stop on any error
npm run migrate     # Run migrations
npm run seed        # Preload initial data
exec "$@"           # Run the command as the main container process
In your Dockerfile, set this script as the ENTRYPOINT. You must use the JSON-array form of ENTRYPOINT here.
FROM node:12.14.0
WORKDIR /var/www
# Install dependencies first to save time on rebuild
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3001
RUN chmod +x entrypoint.sh # if required
ENTRYPOINT ["./entrypoint.sh"]
CMD ["npm", "start"]
Related
I have been working on my local machine with no issues using the Dockerfile below and docker-compose.yml.
Dockerfile
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
compose file
...
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: always
    image: nodeapp
    ports:
      - 5000:5000
    volumes:
      - ./:/app
      - /app/node_modules
    networks:
      - pnet
...
When I use docker context to switch to a remote host
docker context create --docker "host=ssh://root@hostname" remote
and
docker context use remote
and run docker compose up, I end up with the error:
npm ERR! code ENOENT
api | npm ERR! syscall open
api | npm ERR! path /app/package.json
paylign_api | npm ERR! errno -2
api | npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
api | npm ERR! enoent This is related to npm not being able to find a file.
api | npm ERR! enoent
api |
api | npm ERR! A complete log of this run can be found in:
api | npm ERR! /root/.npm/_logs/2022-07-01T14_33_58_429Z-debug.log
How can I fix this so I can deploy to the remote host?
The volumes: are always interpreted by the Docker daemon running the container. If you're using contexts to point at a remote Docker daemon, the volumes: named volumes and file paths are interpreted by that remote daemon, and point at files on the remote host.
While docker build will send the local build context to a remote Docker daemon, volumes: cannot be used to copy files between hosts.
The straightforward solution here is to delete the volumes: from your Compose setup:
version: '3.8'
services:
  app:
    build: .
    restart: always
    ports:
      - 5000:5000
    # volumes: will cause problems
    # none of the other options should be necessary
    # (delete every networks: block in the file)
This same principle applies in other contexts that don't necessarily involve a second host; for example, if you're launching a container from inside another container using the host's Docker daemon, volume mounts are in the host and not the container filesystem. In general I'd recommend not trying to overwrite your application code with volumes: mounts; run the code that's actually built into the image instead, and use local non-Docker tooling for live development.
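Concretely, the deploy flow would look something like this (the context name remote comes from the question; the rest is standard docker-compose usage):
docker context use remote      # later commands talk to the remote daemon
docker-compose build           # the local source tree is sent to the remote daemon as the build context
docker-compose up -d           # the remote daemon runs the code baked into the image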
Background:
I'm on Ubuntu 20.04.
I'm trying to build images using docker-compose.
This is my docker-compose file
version: "3.8"
services:
web:
build: ./frontend
network_mode: "host"
ports:
- 3000:3000
api:
build: ./backend
network_mode: "host"
ports:
- 3001:3001
environment:
DB_URL: mongodb://db/foo-app
db:
image: mongo:4.0-xenial
ports:
- 27017:27017
volumes:
- foo-app-data:/data/db
volumes:
foo-app-data:
And below are my two Dockerfiles:
# ./backend file
FROM node:16.3.0-alpine3.13
RUN addgroup app && adduser -S -G app app
WORKDIR /app
USER root
COPY package*.json ./
# --- debug try
RUN npm cache clean --force
RUN npm config set registry https://registry.npmjs.org/
RUN npm install -g @angular/cli
# ---
RUN npm install
COPY . .
RUN chown app:app /app
USER app
EXPOSE 3001
CMD ["npm", "start"]
# ./frontend file
FROM node:16.3.0-alpine3.13
RUN addgroup app && adduser -S -G app app
WORKDIR /app
USER root
COPY package*.json ./
# --- debug try
RUN npm cache clean --force
RUN npm config set registry https://registry.npmjs.org/
# ---
RUN npm install
COPY . .
RUN chown app:app /app
USER app
EXPOSE 3000
CMD ["npm", "start"]
Error:
When I run docker-compose build, the below error is thrown:
Step 8/14 : RUN npm install -g @angular/cli
---> Running in ce221adb18f6
npm ERR! code EAI_AGAIN
npm ERR! syscall getaddrinfo
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/@angular%2fcli failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-06-13T10_19_13_600Z-debug.log
The command '/bin/sh -c npm install -g @angular/cli' returned a non-zero code: 1
ERROR: Service 'web' failed to build : Build failed
What I have tried so far:
When I was building each Dockerfile manually, I was getting the same error until I used --network=host. That is, when I build the images with docker build --network=host, it works fine. So I tried to add network_mode: "host" to my docker-compose file, but it doesn't solve the issue.
And for God's sake, read this before marking this question as a duplicate:
This post here proposes a solution for docker, not docker-compose. When I ping registry.npmjs.org, the connection works fine.
This post here proposes docker-compose up, which throws the exact same error as I have here.
This post here doesn't work; I have already restarted Docker multiple times. On top of that, I clean all Docker images after the error is thrown to make sure nothing is used from the cache on the next build.
This post here doesn't work either. I tried to (1) remove the proxies from the npm config, and also add the additional lines npm cache clean --force and npm config set registry https://registry.npmjs.org/ to my Dockerfile. Nothing works.
This post here not only doesn't solve the problem, it also doesn't really explain why the solution is being proposed.
And this post here: I don't even know how that answer is allowed on Stack Overflow.
If you use network_mode with the host value, you can't mix it with port mappings: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
So either change your network_mode to bridge, or drop the port mapping for each service.
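For example, a sketch of the two alternatives for the web service from the question (pick one; do not combine them):
  # Option 1: default bridge network, keep the port mapping
  web:
    build: ./frontend
    ports:
      - 3000:3000

  # Option 2: host networking, no port mapping (the container shares the host's ports directly)
  # web:
  #   build: ./frontend
  #   network_mode: "host"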
Add the following variable to your .zshrc file on macOS:
export NODE_OPTIONS=--openssl-legacy-provider
I'm trying to set up a unit-test runtime through docker compose. When I try to run an npm script through docker-compose, Node acts like it can't find the modules on the path:
➜ docker-compose run --rm server npm run test
Starting redis ... done
Starting mongodb ... done
> evolved@1.0.0 test /server
> mocha --recursive tests
sh: mocha: not found
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn
npm ERR! evolved@1.0.0 test: `mocha --recursive tests`
npm ERR! spawn ENOENT
I've confirmed that the files are being mounted into the container, so why can't Node find them?
➜ dc run --rm --service-ports server ls node_modules/.bin | grep "mocha"
Starting redis ... done
_mocha mocha
The scripts in my package.json are very basic:
"test": "mocha --recursive tests",
"build": "gulp default:dev",
docker-compose.yml
version: '3' # compose version
services:
  server:
    build:
      context: .
      dockerfile: Dockerfile.test
    ports:
      - "3000:3000"
    volumes:
      - ".:/server"
    working_dir: "/server"
    depends_on:
      - mongodb
      - redis
    environment:
      PORT: 3000
      NODE_ENV: test
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./localdata/db:/data/db
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  redis:
    container_name: redis
    command: redis-server --requirepass testredispassword
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - ./localdata/redis:/data
    entrypoint: redis-server --appendonly yes
    restart: always
The Dockerfile.test is different from the prod Dockerfile in that it doesn't install or build the front end of the app or pass in any versioning info. I'm trying to make it build quicker with just what it needs to run the unit tests for the server:
FROM node:8-alpine
RUN apk update \
&& apk --update --no-cache add \
git \
python \
build-base
ADD ./ /server
WORKDIR /server
RUN rm -rf node_modules && npm install && npm run build
I think this is all pretty straightforward and I've done similar setups before, but on Docker for Mac. This time I'm running Docker for Windows and running the commands through WSL. I've got the drives shared and bound /mnt/c to /c.
For another reference: on this project I can run the unit tests on Docker for Mac, but I get the same sh: mocha: not found when running it through WSL connected to Docker for Windows on Windows 10. It seems to be just the path to the binaries in node_modules/.bin that's not found, because I can start the project up without any errors; it just can't find any binaries like mocha, nsp, gulp, etc.
Sounds like this is similar to a path issue I've experienced with a Windows/WSL environment. Try changing your volume definition to the full path and see if that solves it.
volumes:
  - /c/Users/username/Code/server:/server
Even though you've copied the files into the image with the Dockerfile, mounting a volume with docker-compose doesn't care about that: the mount changes where the directory's contents come from, so the host path replaces whatever was built into the image.
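To illustrate (a sketch only, reusing the host path from the snippet above): whatever npm install put into /server in the image is hidden behind the bind mount at run time:
services:
  server:
    build:
      context: .
      dockerfile: Dockerfile.test
    volumes:
      # the host directory replaces the image's /server, including node_modules
      # if the host copy doesn't have one
      - /c/Users/username/Code/server:/server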
I encountered a similar situation and it had to do with symlink creation in WSL/Docker. According to this GitHub issue reported to MS, all you should have to do is enable Developer Mode in the system settings.
Edit:
This Microsoft article describes how to enable developer mode on your Windows 10 machine.
The build folder is ready to be deployed.
You may serve it with a static server:
serve -s build
---> b252a9088991
Removing intermediate container cb5c1e2629c9
Step 16/16 : RUN serve -s build
---> Running in c27b54b31108
serve: Running on port 5000
I created and dockerized a React application using react-app and I get output like the above on the docker-compose up command, but nothing is showing on http://0.0.0.0:5000/ or http://localhost:5000/.
version: '3'
services:
  web:
    build: .
    image: react-cli
    container_name: react-cli
    volumes:
      - .:/app
    ports:
      - '3000:3000'
Above is my docker-compose.yml file.
FROM scratch
FROM mhart/alpine-node:6.12.0
RUN npm install -g npm --prefix=/usr/local
RUN ln -s -f /usr/local/bin/npm /usr/bin/npm
CMD [ "/bin/sh" ]
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm install -g serve
CMD serve -s build
EXPOSE 3000
COPY package.json package.json
COPY semantic.json semantic.json
COPY npm-shrinkwrap.json npm-shrinkwrap.json
RUN npm install gulp-header --save-dev
RUN npm install --no-optional
COPY . .
RUN npm run build --production
RUN serve -s build
And this is my Dockerfile.
Most likely you are not exposing the container port on the host. I don't see what your compose file looks like, but you probably need to add
ports:
  - "5000:5000"
to your container definition in the compose file.
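A sketch of how the web service from the question might look with the port published (assuming serve really is listening on port 5000 inside the container):
services:
  web:
    build: .
    image: react-cli
    ports:
      - '5000:5000'   # host port 5000 -> container port 5000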
From this documentation, it seems that I can execute a single command from a service like this:
docker-compose run SERVICE CMD
But when I run
docker-compose up pwa npm test
I get the error
ERROR: No such service: npm
With my configuration, it will execute npm start, but I'd like to know how to execute other commands.
Files
Dockerfile:
From node:8
WORKDIR /app
copy package.json /app/
RUN npm install --quiet
CMD npm start
docker-compose.yml:
version: '3'
services:
  pwa:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - ./src:/app/src
      - ./public:/app/public
Versions
Docker version: 17.03
Docker compose version: 1.11.2
As the docs say, the command is docker-compose run, not docker-compose up. The latter only expects service names.
Do this:
docker-compose run pwa npm test
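For completeness, a short usage sketch (the --rm flag just removes the one-off container when the command finishes):
docker-compose run --rm pwa npm test   # one-off command in a fresh container for the pwa service
docker-compose up pwa                  # up only accepts service names, not a command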