I have a monorepo that holds various Go services and libraries. The directory structure looks like the following:
monorepo
├── services
│   └── service-a
│       └── Dockerfile
├── go.mod
└── go.sum
The go.mod file resides in the root of the monorepo directory, and all services use the dependencies declared in that file.
I build the Docker image with this command:
docker build -t some:tag ./services/service-a/
When I try to build my Docker image from the root of the monorepo with the above command, I get the following error:
COPY failed: Forbidden path outside the build context: ../../go.mod ()
Below is my Dockerfile
FROM golang:1.14.1-alpine3.11
RUN apk add --no-cache ca-certificates git
# Enable Go Modules
ENV GO111MODULE=on
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY ../../go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o service-a
ENTRYPOINT ["/app/service-a"]
Is there something I have to do to be able to add files into my Docker image that aren't in the current directory without having to have a separate go.mod and go.sum in each service within the monorepo?
Docker only allows adding files to the image from the context, which is by default the directory containing the Dockerfile. You can specify a different context when you build, but again, it won't let you include files outside that context:
docker build -f ./services/service-a/Dockerfile .
This uses the current directory (the monorepo root) as the build context, so go.mod and go.sum are inside it. Note that every COPY path in the Dockerfile must then be written relative to that new context, not the service directory.
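A minimal sketch of the adjusted Dockerfile, assuming the build is run from the monorepo root as above and that the services import shared code from the root module (which is why the whole repo is copied in):

FROM golang:1.14.1-alpine3.11
RUN apk add --no-cache ca-certificates git
ENV GO111MODULE=on
WORKDIR /app
# Paths are now relative to the monorepo root, not the service directory
COPY go.mod go.sum ./
RUN go mod download
# Copy the whole module so shared libraries resolve
COPY . .
RUN CGO_ENABLED=0 go build -o /service-a ./services/service-a
ENTRYPOINT ["/service-a"]

Since COPY . . now pulls in the entire repo, a .dockerignore at the root helps keep the context small.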
Alternatively, you can create a temp directory, copy all the artifacts there and use that as the build context. This can be automated by a makefile or build script.
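A sketch of such a script, assuming the layout from the question; note that the Dockerfile's COPY paths must match the assembled layout (go.mod at the context root):

#!/bin/sh
set -e
# Assemble a throwaway build context containing only what the image needs
ctx=$(mktemp -d)
cp go.mod go.sum "$ctx"
cp -r services/service-a "$ctx/service-a"
docker build -t some:tag -f "$ctx/service-a/Dockerfile" "$ctx"
rm -rf "$ctx"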
You can also build and manage your Docker containers with docker-compose; then this problem can be solved with the help of the context directive, for example:
project_folder
├── src
│   └── folder1
│       └── folder2
│           └── Dockerfile
├── docker-compose.yaml
└── copied_file.ext
docker-compose.yaml
version: '3'
services:
  your_service_name:
    build:
      context: ./  # project_folder in this case
      dockerfile: ./src/folder1/folder2/Dockerfile
Dockerfile
FROM xxx
COPY copied_file.ext /target_folder/
build or rebuild services:
docker-compose build
run a one-off command on a service:
docker-compose run your_service_name <command> [arguments]
Related
Below is my dockerfile. After the dependencies are installed, I want to delete a specific binary (ffmpeg) from the node_modules folder on the container, and then reinstall it using the install.js file that exists under the same folder in node_modules.
FROM node:16-alpine
WORKDIR /web
COPY package.json package-lock.json ./
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV:-development}
ARG _ENV
ENV _ENV ${_ENV:-dev}
RUN npm install
RUN rm /web/node_modules/ffmpeg-static/ffmpeg
RUN node /web/node_modules/ffmpeg-static/install.js
COPY . .
EXPOSE 8081
ENTRYPOINT [ "npm" ]
I want the rm and node commands to take effect in the container after startup, once all the dependencies are installed and copied, but that doesn't happen. It seems that commands after RUN are only executed during the build process, not after the container starts up.
After the container is up, when I shell into it and execute the two commands directly (the rm ... and node ... from above), my changes take effect and everything works perfectly. I basically want these commands to run automatically once the container is up and running.
The docker-compose.yaml file looks like this:
version: '3'
services:
  serverless:
    build: web
    user: root
    ports:
      - '8081:8081'
    env_file:
      - env/web.env
    environment:
      - _ENV=dev
    command:
      - run
      - start
    volumes:
      - './web:/web'
      - './env/mount:/secrets/'
If this Dockerfile builds, you have a package-lock.json, which is evidence that npm install was executed locally. That means a node_modules folder exists on the host and is being copied into the image by the final COPY . ., overwriting the one your RUN steps modified.
You can avoid this by creating a .dockerignore file that lists the files and directories (such as node_modules) you'd like to exclude from the Docker build context, which makes sure they don't get copied into your final image.
You can follow this example:
https://stackoverflow.com/a/43747867/3669093
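For instance, a minimal .dockerignore for a Node project might contain (typical entries, not taken from the question):

node_modules
npm-debug.log
.git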
Update
Dockerfile: replace the ENTRYPOINT with the following (and remove the command: entry from docker-compose.yaml, since it would override this CMD and bypass the script):
ADD ./start.sh /start.sh
CMD ["/start.sh"]
start.sh (make it executable with chmod +x start.sh):
#!/bin/sh
rm /web/node_modules/ffmpeg-static/ffmpeg
node /web/node_modules/ffmpeg-static/install.js
exec npm run start
Try rm -rf /web/node_modules/ffmpeg-static/ffmpeg
I assume that it is a directory, not a file.
Please follow this reasoning and tell me where I am wrong.
I want to develop a web app with Typescript. This is the folder structure:
├── src
│   └── index.ts
├── package.json
└── tsconfig.json
I want to compile and run the app inside a container so that it is not environment-dependent.
I run docker build -t my_image . with the following Dockerfile in order to create an image:
FROM node:16
WORKDIR /app
COPY package.json /app
RUN npm install
CMD ["npx", "dev"]
This will create a node_modules folder inside the container.
Now I create a container like so:
docker create --name my_container -v $(pwd)/src/:/app/src my_image
I created a bind mount so that when I change files under src/ on the host, the same files change in the container.
I start the container like so:
docker start -i my_container
Now everything is working. The problem, though, is the linters.
When I open a file from src/ in my text editor on the host, the linters don't work because there is no node_modules installed on the host.
If I also run npm install on the host machine I will have two different node_modules installations, and what gets compiled might differ between the host and the container.
Is there a way to point the host node_modules to the container node_modules?
If I create a docker volume for node_modules like so
docker create -v $(pwd)/node_modules:/app/node_modules ...
I will wipe out everything that was installed and compiled in the container, because the empty host folder shadows it.
What am I doing wrong?
Thank you.
For those who might have a similar problem, this is what I have done and it is working.
First I build an image:
docker build -t my_image .
Then I copy the node_modules folder from a temporary container to my local folder:
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker rm tmp_container
Then I create the final container with a volume:
docker create -v $(pwd)/node_modules:/app/node_modules --name my_container my_image
And now I have locally exactly what was built and compiled in the container.
This can of course be extended to any folder with build artifacts. For example, in my case I needed another folder as well to make the linters work in my text editor; I just copied that folder exactly like I did with node_modules.
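A small helper script for this pattern might look like the following (the /app path matches the image above; the script name is just a suggestion):

#!/bin/sh
# copy-out.sh -- copy a build-artifact folder out of an image into the local directory
set -e
image=$1   # e.g. my_image
folder=$2  # e.g. node_modules
docker create --name tmp_container "$image"
docker cp "tmp_container:/app/$folder" "./$folder"
docker rm tmp_container

Run it as ./copy-out.sh my_image node_modules.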
I have a projectA that references a common package that my other Node.js services will reference as well. The issue is how to pack this up in a Dockerfile. Ideally, the tsc build would build both the project and the dependent common package into /dist, but that's not the case. (I tried using prepend, but it has to be used with AMD or System and cannot be used with node.) What are my options for sharing a TypeScript package between multiple Node.js services and packing each service separately into a Docker image? I got it working with project references (I can use the imports from the shared package), but I don't know how to write the Dockerfile for this setup.
My Dockerfile looks something like this:
FROM node:10-alpine
# update packages
RUN apk update
# create root application folder
WORKDIR /app
# copy configs to /app folder
COPY package*.json ./
COPY tsconfig.json ./
# copy source code to /app/src folder
COPY src /app/src
# check files list
RUN ls -a
RUN npm install
RUN npm run build
EXPOSE 7777
CMD [ "node", "./dist/main.js" ]
But with this setup I'll be missing the shared package (project)
root
├── serviceA (has Dockerfile)
├── serviceB (has Dockerfile)
└── shared (some MongoDB models)
Both services depend on shared via project references, and all three have a tsconfig.json.
Did you ever find a clean way to do this? My current approach is to run the docker build from the parent directory and copy the common project as well, but that feels messy
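One way to make that approach reproducible is to keep the build context at the repo root and point -f at the service's Dockerfile. A minimal sketch, assuming each package has its own package.json and that npm run build runs tsc -b so the project reference to ../shared is compiled too (neither is confirmed in the question):

# serviceA/Dockerfile -- build from the repo root with:
#   docker build -t service-a -f serviceA/Dockerfile .
FROM node:10-alpine
WORKDIR /app
# Paths are relative to the repo root, so the shared package is in the context
COPY shared ./shared
COPY serviceA ./serviceA
RUN cd shared && npm install
RUN cd serviceA && npm install && npm run build
WORKDIR /app/serviceA
CMD ["node", "./dist/main.js"]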
I am trying to run a Node.js in a Docker container via Docker Compose.
The node_modules should be created in the image and the source code should be synced from the host.
Therefore, I use 2 volumes in docker-compose.yml. One for my project source and other for the node_modules in the image.
Everything seems to be working. The node_modules are installed and nodemon starts the app. In my docker container I have a node_modules folder with all dependencies. On my host an empty node_modules is created (I am not sure if this is expected).
But when I change a file in the project, the nodemon process detects the change and restarts the app. Now the app crashes because it can't find modules: the node_modules folder in the Docker container is now empty.
What am I doing wrong?
My folder structure looks like this
/
├── docker-compose.yml
└── project/
    ├── package.json
    └── Dockerfile
docker-compose.yml
version: '3'
services:
  app:
    build: ./project
    volumes:
      - ./project/:/app/
      - /app/node_modules/
project/Dockerfile
# base image
FROM node:9
ENV APP_ROOT /app
# set working directory
RUN mkdir $APP_ROOT
WORKDIR $APP_ROOT
# install and cache app dependencies
COPY package.json $APP_ROOT
COPY package-lock.json $APP_ROOT
RUN npm install
# add app
COPY . $APP_ROOT
# start app
CMD ["npm", "run", "start"]
project/package.json
...
"scripts": {
"start": "nodemon index.js"
}
...
Mapping a volume makes host files available to the container, not the other way round.
You can fix your issue by running npm install as part of the CMD. You can achieve this with a startup script (e.g. start.sh) that runs npm install && npm run start. The script should be copied into the container with a normal COPY command, as sketched below.
When you start your container you should see files in the node_modules folder (on host).
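A minimal sketch under those assumptions, with start.sh living in project/ next to the Dockerfile so the existing COPY . $APP_ROOT (and the bind mount) picks it up; invoking it via sh sidesteps the executable bit:

#!/bin/sh
# project/start.sh -- install dependencies at container start, then launch the app
npm install
exec npm run start

And in project/Dockerfile, replace the last line with:

CMD ["sh", "./start.sh"]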
You could use either of these two solutions:
npm install on the host and make a volume that the container can use and that references the host node_modules folder.
Use npm install in the image/container build process, which can be a pain for a development setup since it will run npm i every time you rebuild the container (if you change some files).
I am moving an application to a new build pipeline. On CI I am not able to install Node to complete the npm install step.
My idea is to move the npm install step into a Docker image that has Node, install the node modules there, and then copy the node_modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory into which it has just installed the node modules.
According to the documentation, the COPY instruction copies files from the host (the build context) into the container, never in the other direction.
If you want files from the container to be available outside it, you can use volumes. Volumes give your container storage that is independent of the container itself, so it can also be used by other containers in the future.
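For example, a bind mount lets the container write the installed modules to a host directory (the image name my-deps-image is a placeholder; the path matches the WORKDIR /usr/src/app from the question):

docker run --rm -v "$(pwd)/out":/out my-deps-image \
    cp -r /usr/src/app/node_modules /out/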
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for a slimmer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY dashboard-folder/package.json .
RUN npm i --production
This assumes that your project structure is like so:
root
├── Dockerfile
└── dashboard-folder
    └── package.json
where root is your working directory that will receive node_modules.
Build the image with docker build . -t NAME and subsequently use it like so:
docker run -it --rm -v ${PWD}:/app/root NAME mv node_modules ./root
Should do the trick.
The simple and sure way is to do volume mapping; for example, the docker-compose YAML file will have a volumes section that looks like this:
...
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
For the docker run command, use:
-v $(pwd):/usr/src/app
and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But first confirm that running npm install actually created the node_modules directory on the host system.
Whether you hit this problem depends mainly on the OS running on your host. If your host runs Linux, there will be no problem, but if your host is macOS or Windows, Docker actually runs inside a hidden VM, and hence paths cannot always be mapped directly to the host system. Instead, you can use a volume.
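For example, a named volume keeps node_modules in Docker-managed storage, independent of host-path mapping (the volume and image names here are illustrative):

docker volume create app_node_modules
docker run -v app_node_modules:/usr/src/app/node_modules \
    -v "$(pwd)":/usr/src/app my-app-image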