I am using the following docker compose:
dns_to_redis:
  build:
    context: ./DNS_to_redis/
  image: dns_to_redis
  depends_on:
    - redis
  environment:
    - REDIS_HOST=redis
    - REDIS_PORT=6379
  networks:
    sensor:
      ipv4_address: 172.24.1.4
to build and run an image. Inside the Dockerfile I use the following ADD:
ADD home/new_prototypes/dns_to_redis/dns_redis.R /home/
However, when I run sudo docker-compose up, I get the following error:
ERROR: Service 'dns_to_redis' failed to build: ADD failed: file not found in build context or excluded by .dockerignore: stat home/new_prototypes/dns_to_redis/dns_redis.R: file does not exist
The file is located in /home/new_prototypes/dns_to_redis. I suspect this is somehow the problem, but I haven't been able to modify the path in any way that makes it work.
How can I run this from docker compose?
Thank you.
As stated in the error message:
file not found in build context
The build context is a copy of the path you set for dns_to_redis.build.context.
Your file needs to be in the ./DNS_to_redis/ directory.
Note that it is generally preferred to use COPY instead of ADD.
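For example, a minimal sketch, assuming you move dns_redis.R into the build context directory:

# Assumed layout: the file now lives inside the build context
# DNS_to_redis/
#   Dockerfile
#   dns_redis.R

# Then, inside DNS_to_redis/Dockerfile:
COPY dns_redis.R /home/

Paths in COPY (and ADD) are resolved relative to the build context, so once the file is inside ./DNS_to_redis/ it can be referenced directly.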
Related
As far as I know, a volume in Docker is persistent data for the container, which can map a local folder to a container folder.
Early on, I faced the Error: Cannot find module 'winston' issue in Docker, which is described in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory that contains node_modules in the container.
After I removed volumes: - ./:/server, the above problem was solved.
However, another problem occurs.
[solved but want explanation] nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason for it.
Question
What is the cause and explanation for the above 2 issues?
What happens between build and volumes, and what is the relationship between build and volumes in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: . # <-- It takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server # <-- how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
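A sketch of that pattern, using the same /server paths as this question:

services:
  test-web:
    build: .
    volumes:
      - ./:/server            # bind mount: host code replaces the image's code
      - /server/node_modules  # anonymous volume: keeps the image's installed
                              # node_modules visible instead of the host copy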
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build entirely, just bind-mounting the host directory over an unmodified node image. There's not really a benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.
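For illustration, that no-build pattern looks roughly like this (a sketch of the anti-pattern, not a recommendation; the image tag and command are assumptions):

services:
  app:
    image: node:lts        # unmodified image; no build: block at all
    working_dir: /server
    volumes:
      - ./:/server         # everything comes from the host bind mount
    command: npm run dev
    ports:
      - "3000:3000"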
I need to create a pipeline in Azure for my autotests using a Docker container. I made it work successfully on my local machine using the following algorithm:
Create a Selenium node with the following command:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:4.0.0-beta-1-20210215
Build the image using the command: docker build -t my_tests .
Here is my Dockerfile:
FROM maven:onbuild
COPY src /home/bns_bdd_automation/src
COPY pom.xml /home/bns_bdd_automation
COPY .gitignore /home/bns_bdd_automation
CMD mvn -f /home/bns_bdd_automation/pom.xml clean test
Everything works fine, but only locally.
In the cloud I face an issue: I need to run the Selenium node first, and only after that build and run my image.
As I understood from some articles, I need to use docker-compose (to run the first image), but I don't know how. Can you help me with that?
Well, here is my docker-compose.yml file:
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  bns_bdd_automation:
    depends_on:
      - selenium-hub
      - chrome
    build: .
But it does not work as I expected: it builds and runs the tests BEFORE the hub and chrome have started. And after that it shows me this in the terminal:
WARNING: Image for service bns_bdd_automation was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Starting selenium-hub ... done
Starting bns_bdd_automation_chrome_1 ... done
Recreating bns_bdd_automation_bns_bdd_automation_1 ... error
ERROR: for bns_bdd_automation_bns_bdd_automation_1 no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: for bns_bdd_automation no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
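Note that depends_on only waits for the dependency containers to start, not for Selenium to be ready to accept sessions. One possible sketch of the missing ordering, under two assumptions: that your Compose version supports depends_on conditions (the modern Compose spec and file format 2.1 do; the legacy 3.x format does not), and that curl is available in the hub image:

services:
  selenium-hub:
    image: selenium/hub
    ports:
      - "4444:4444"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4444/wd/hub/status"]
      interval: 5s
      retries: 10
  bns_bdd_automation:
    build: .
    depends_on:
      selenium-hub:
        condition: service_healthy   # start the tests only once the hub is ready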
I am using Docker Compose for my local development environment for a Full Stack Javascript project.
Part of my Docker Compose file looks like this:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - ../my-shared-module:/code/node_modules/my-shared-module
I am trying to develop a custom Node module called my-shared-module, which is why I added - ../my-shared-module:/code/node_modules/my-shared-module to the Docker Compose file. The node module is hosted in a private Git repo, and is defined like this in package.json:
"dependencies": {
  "my-shared-module": "http://gitlab+deploy-token....#gitlab.com/.....git",
My problem is,
When I update my node modules in the Docker container using npm install, it downloads my-shared-module from my private Git repo into /code/node_modules/my-shared-module, and that overwrites the files in ../my-shared-module on the host, because they are synced.
So my question is, is it possible to have 1 way volume sync in Docker?
when host changes, update container
when container changes, don't update host ?
Unfortunately I don't think this is possible in Docker. Mounting a host volume is always two-way, unless you consider a read-only mount to be one-way, but that prevents you from being able to modify the file system with things like npm install.
Your best options here would be either to rebuild the image with the new files each time, or to bake into your CMD a step that copies the mounted files into a new folder outside of the mounted volume. That way any file changes won't be persisted back to the host machine.
You can script something to do this. Mount your host node_modules to another directory inside the container, and in the entrypoint, copy the directory:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - /code/node_modules/my-shared-module
      - ../my-shared-module:/host/node_modules/my-shared-module:ro
Then add an entrypoint script to your Dockerfile with something like:
#!/bin/sh
# Seed the writable module directory from the read-only host mount, if present.
if [ -d /host/node_modules/my-shared-module ]; then
  cp -r /host/node_modules/my-shared-module/. /code/node_modules/my-shared-module/.
fi
# Hand off to the container's CMD.
exec "$@"
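To wire this up, the Dockerfile also has to install the script as the entrypoint; a minimal sketch, assuming the script is saved as entrypoint.sh next to dev.Dockerfile:

# In dev.Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]   # becomes "$@" inside the entrypoint script (assumed command)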
I am aiming to configure docker so that when I modify a file on the host the change is propagated inside the container file system.
You can think of this as hot reloading for server side node code.
The nodemon file watcher should restart the server in response to file changes.
However, these file changes on the host volume don't seem to be reflected inside the container. When I inspect the container using docker exec pokerspace_express_1 bash and look at a modified file, the changes have not been propagated from the host into the container.
Dockerfile
FROM node:8
MAINTAINER therewillbecode
# Create app directory
WORKDIR src/app
RUN npm install nodemon -g
# Install app dependencies
COPY package.json .
# For npm#5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
CMD [ "npm", "start" ]
docker-compose.yml
version: '2'
services:
  express:
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongo:27017/test
      - SERVER_PORT=3000
    volumes:
      - ./:/src/app
    ports:
      - '3000:3000'
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - '27017:27017'
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
.dockerignore
.git
.gitignore
README.md
docker-compose.yml
How can I ensure that host volume file changes are reflected in the container?
Try something like this in your Dockerfile:
CMD ["nodemon", "-L"]
Some people had a similar issue and were able to resolve it by passing -L (which means “legacy watch”) to nodemon.
References:
https://github.com/remy/nodemon/issues/419
http://fostertheweb.com/2016/02/nodemon-inside-docker-container/#why-isnt-nodemon-reloading
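Since the Dockerfile above ends with CMD [ "npm", "start" ], the flag can also live in the npm script instead; a sketch, assuming the server's entry point is app.js:

{
  "scripts": {
    "start": "nodemon -L app.js"
  }
}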
Right, so with Docker we need to re-build the image or figure out some clever solution.
You probably do not want to rebuild the image every time you make a change to your source code.
Let's figure out a clever solution. Let's generalize the Dockerfile a bit to solve your problem and also help others.
So this is the boilerplate Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Remember, during the image building process we are creating a temporary container. When we make the copies we are essentially taking a snapshot of the contents of /src and /public. It's a snapshot that is locked in time and by default will not be updated when you change the code.
So in order to pick up changes to the files in /src and /public, we need to abandon doing a straight copy; instead we are going to adjust the docker run command that we use to start up our container.
We are going to make use of a feature called volumes.
With a Docker volume we set up a placeholder inside our Docker container, so instead of copying over our entire /src directory we put a reference to those files, which gives us access to the files and folders on the local machine.
We are setting up a mapping from a folder inside the container to a folder outside the container. The command to use is a bit painful, but once it's documented here you can bookmark this answer.
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
-v $(pwd):/app is used to set up a volume mapping the present working directory. This is a shortcut. So we are saying: get the present working directory, get everything inside of it, and map it up to our running container. It's long-winded, I know.
To implement this you will have to first rebuild your docker image by running:
docker build -f Dockerfile.dev .
Then run:
docker run -p 3000:3000 -v $(pwd):/app <image_id>
Then you are going to very quickly get an error message: the react-scripts not found error. You will see that message because I skipped the -v /app/node_modules.
So what's up with that?
The volume command sets up a mapping, and when we do that, we are saying take everything inside of our present working directory and map it up to our /app folder; but the issue is that there is no node_modules folder in it, which is where all our dependencies exist.
So the node_modules folder in the container got overwritten.
So we are essentially pointing to nothing, and that's why we need that -v /app/node_modules with no colon, because the colon is used to map a folder inside the container to a folder outside the container. Without the colon we are saying we want it to be a placeholder; don't map it against anything.
Now, go ahead and run: docker run -p 3000:3000 -v $(pwd):/app <image_id>
Once done, you can make all the changes you want to your project and see them "hot reload" in your browser. No need to figure out how to implement Nodemon.
So what's happening there is that any change made to your local file system gets propagated into your container; the server inside your container sees the change and updates.
Now, I know it's hard and annoying to remember such a long command; enter Docker Compose.
We can make use of Docker Compose to dramatically simplify the command we have to run to start up the container.
So to implement that you create a Docker Compose file and inside of it you will include the port setting and the two volumes that you need.
Inside your root project, make a new file called docker-compose.yml.
Inside there you will add this:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
Then run: docker-compose up
Daniel's answer partially worked for me, but the hot reloading still doesn't work. I'm using a Windows host and had to change his docker-compose.yml to
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App
(I changed the volumes arguments from /app/node_modules to /App/node_modules and from .:/app to .:/App. This enables changes to be passed to the container; however, hot reloading still doesn't work, and I have to run docker-compose up --build each time I want to refresh the app.)
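A common cause on Windows hosts is that file-change events are not delivered across the bind mount, so the watcher has to fall back to polling. Since this thread is a react-scripts app, a sketch of the usual workaround (CHOKIDAR_USEPOLLING is the polling switch react-scripts honors):

version: '3'
services:
  web:
    build: .
    environment:
      - CHOKIDAR_USEPOLLING=true   # force the watcher to poll; inotify events
                                   # don't cross the Windows bind mount
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App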
I'm using docker compose to create and run several services. One of the services is a nodeJS project in which I added a Makefile to run some custom build tasks.
That's how I define the service in the docker-compose.yml file:
web:
  container_name: web
  image: myApp/web
  build:
    context: .
    dockerfile: operations/.docker/web/${APP_ENV}.dockerfile
  ports:
    - "8080"
  volumes:
    - ./src/js/web:/var/www/myApp/
  working_dir: /var/www/myApp
  env_file:
    - operations/env/web/${APP_ENV}.env
  networks:
    - myApp_network
As you can see, there is a volume mapping my local filesystem into the container. The Makefile is under the ./src/js/web directory.
And this is the Dockerfile:
FROM node:latest
WORKDIR /var/www/myApp
RUN npm i
RUN make -f /var/www/myApp/Makefile build
RUN mkdir -p /var/log/pm2
EXPOSE 8080
ENTRYPOINT ["pm2", "start", "dist/server/bin/www.js","--name","web","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
I had the same setup on my MacBook and I managed to build and run the services.
However, I recently tried to run docker compose on Linux Mint and I get the following error.
make: /var/www/myApp/Makefile: No such file or directory
make: *** No rule to make target '/var/www/myApp/Makefile'. Stop.
ERROR: Service 'web' failed to build: The command '/bin/sh -c make -f /var/www/myApp/Makefile build' returned a non-zero code: 2
I also tried to build the image with RUN make -f /var/www/myApp/Makefile build removed, and it worked fine. When I entered the container, I could see the Makefile sitting in /var/www/myApp.
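This is the same build-vs-volumes behaviour explained earlier on this page: volumes are only mounted when the container runs, so during docker build /var/www/myApp is empty and make cannot find the Makefile. A minimal sketch of a Dockerfile that copies the sources into the image before building (paths assumed from the compose file above):

FROM node:latest
WORKDIR /var/www/myApp
# The bind mount only exists at run time; copy the sources (including the
# Makefile) into the image so they are available at build time.
COPY src/js/web/ /var/www/myApp/
RUN npm i
RUN make -f /var/www/myApp/Makefile build
RUN mkdir -p /var/log/pm2
EXPOSE 8080
ENTRYPOINT ["pm2", "start", "dist/server/bin/www.js", "--name", "web", "--log", "/var/log/pm2/pm2.log", "--watch", "--no-daemon"]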