Command in Dockerfile doesn't take effect - Linux

Below is my Dockerfile. After the dependencies are installed, I want to delete a specific binary (ffmpeg) from the node_modules folder in the container, and then reinstall it using the install.js file that exists under the same folder in node_modules.
FROM node:16-alpine
WORKDIR /web
COPY package.json package-lock.json ./
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV:-development}
ARG _ENV
ENV _ENV ${_ENV:-dev}
RUN npm install
RUN rm /web/node_modules/ffmpeg-static/ffmpeg
RUN node /web/node_modules/ffmpeg-static/install.js
COPY . .
EXPOSE 8081
ENTRYPOINT [ "npm" ]
I want the rm and node commands to take effect in the container after startup, once all the dependencies are installed and copied in, but that doesn't happen. It seems that the commands after RUN are only executed during the build process, not after the container starts up.
After the container is up, when I ssh into it and execute the two commands above directly (the rm ... and node ... parts), my changes take effect and everything works perfectly. I basically want these commands to run automatically once the container is up and running.
The docker-compose.yaml file looks like this:
version: '3'
services:
  serverless:
    build: web
    user: root
    ports:
      - '8081:8081'
    env_file:
      - env/web.env
    environment:
      - _ENV=dev
    command:
      - run
      - start
    volumes:
      - './web:/web'
      - './env/mount:/secrets/'

If this Dockerfile builds, it means you have a package-lock.json. That is evidence that npm install was executed locally in the project's root directory, which means that node_modules exists locally and is being copied into the image by the final COPY . . step.
You can avoid this by creating a .dockerignore file that lists the files and directories (notably node_modules) you'd like to exclude from the Docker build context, which makes sure they don't get copied into your final image.
You can follow this example:
https://stackoverflow.com/a/43747867/3669093
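For instance, a minimal .dockerignore at the root of the build context might look like this (the exact entries depend on your project):
node_modules
npm-debug.log
.git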
Update
Dockerfile: Replace the ENTRYPOINT with the following
ADD ./start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
start.sh:
#!/bin/sh
rm /web/node_modules/ffmpeg-static/ffmpeg
node /web/node_modules/ffmpeg-static/install.js
exec npm run start
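Note that the docker-compose.yaml above passes command: [run, start], and Compose's command: overrides CMD. If you want to keep driving npm from the compose file, one alternative (a sketch, not the only way) is to make the script an ENTRYPOINT wrapper that forwards its arguments:
#!/bin/sh
# start.sh: run the one-time fix-up, then hand control to npm with
# whatever arguments the container was started with (e.g. "run start")
rm /web/node_modules/ffmpeg-static/ffmpeg
node /web/node_modules/ffmpeg-static/install.js
exec npm "$@"
With ENTRYPOINT ["/start.sh"] in the Dockerfile, command: [run, start] from the compose file becomes /start.sh run start, which ends up running npm run start after the fix-up.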

Try rm -rf /web/node_modules/ffmpeg-static/ffmpeg
I assume that it is a directory, not a file.
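You can check which one it is without modifying anything. A quick probe, using the service name from the compose file above (--entrypoint is needed because the image's entrypoint is npm; with the bind mount active this inspects the container as it will actually run):
docker-compose run --rm --entrypoint ls serverless -ld /web/node_modules/ffmpeg-static/ffmpeg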

Related

How to prevent a non-empty host folder from overriding a mounted container folder?

I am trying to compile my assets using Docker for a Laravel project.
So, I have created a service called npm which is built from the following Dockerfile:
FROM node:16-alpine as node
WORKDIR /usr/src
ADD ./resources ./resources
COPY ["package.json", "package-lock.json", "vite.config.js", "./"]
RUN npm install --global cross-env
RUN npm install
RUN npm run build
Also, I am using the following docker-compose configuration:
node:
  build:
    context: ./
    dockerfile: ./services/nodejs/Dockerfile
  working_dir: /var/www
  container_name: "nodejs"
  volumes:
    - ./:/var/www
  tty: true
  depends_on:
    - php
Although the service builds successfully, it seems that my non-empty host directory is overriding the content of my node container. So I eventually end up with no node_modules directory, and my compiled assets and resources get lost.
So, what should I do?
I could first copy the content of my host folder into the container, then delete the content of the host folder, run my scripts, and then copy everything back, but that seems like a very time-consuming thing to do. What is the best practice for cases like this? I'm sure I'm not the first one to dockerize a full-stack Laravel project. Thanks in advance.
It doesn't look like you COPY your application code into the container. That leaves you in a position where you need to bind-mount things in, but since the host directory is missing some built parts, the final container image is incomplete.
There's an especially important detail here around using this container as part of your build sequence which I'll get back to.
The first thing to do is to make sure all of the things you need are in the image itself.
FROM node:16-alpine as node
WORKDIR /usr/src
RUN npm install --global cross-env
# run `npm install` first to avoid rebuilds if `package.json` is unchanged
COPY ["package.json", "package-lock.json", "vite.config.js", "./"]
RUN npm install
# then copy in the rest of the application
# (make sure `node_modules` and `build` are in `.dockerignore`)
COPY ./ ./
RUN npm run build
# and explain how to run the container
CMD npm run start
Then in your docker-compose.yml file, you don't need to specify volumes: because the code is already in your image. You similarly don't need to declare a working_dir: because the Dockerfile sets WORKDIR already. This section of your Compose file can probably be trimmed down to
node:
  build:
    context: ./
    dockerfile: ./services/nodejs/Dockerfile
  tty: true # actually needed?
  depends_on:
    - php
I don't think you actually want this container, though, since
I am trying to compile my assets using Docker for a Laravel project.
In this case you can use this Dockerfile (and it must be the complete self-contained Dockerfile) as part of a multi-stage build. Put this Dockerfile at the start of your PHP application Dockerfile and give it a name, and then you can COPY the built assets from one stage to another.
# `AS somename` on the FROM line below is important
FROM node:16-alpine AS node
...
RUN npm run build
# no specific need for a CMD here
FROM php:fpm
...
COPY --from=node /usr/src/build ./static/
Again, since this will get filled in at Docker image build time, it's important to not overwrite the image content with a volumes: mount.
If you do this then there's not a "Node container" at all, and you can just outright remove this from your docker-compose.yml file. For this setup you don't need a long-running server, you just need to have Node available to run a compilation step at build time, and including it in a Dockerfile stage is enough for that.
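If you keep Compose around for the PHP side, its service then just points at the combined multi-stage Dockerfile. A sketch, where the dockerfile path is hypothetical and should match wherever you keep the merged file:
php:
  build:
    context: ./
    dockerfile: ./services/php/Dockerfile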

docker-compose for a node project getting cannot find package.json on windows

I'm trying to get docker-compose working on my Windows 8 box. I have the following docker-compose file
version: '3'
services:
  testweb:
    build: .
    command: npm run install
    volumes:
      - .:/usr/app/
    working_dir: /app
    ports:
      - "3000:3000"
However, when I run this using docker-compose, I get an error saying it cannot find package.json. I know this is something to do with how the paths are mapped. So I moved my folder to c:\users and tried, with the same issue. I then moved it to c:\users\ and tried, and ended up with the same issue. The mapping in my VirtualBox is as follows: [screenshot omitted]
Does anyone know how to fix this?
Attached is my Dockerfile
FROM node:7.7.2-alpine
WORKDIR /usr/app
RUN apk update && apk add postgresql
COPY package.json .
RUN npm install --quiet
COPY . .
It may be because you set working_dir: /app in your docker-compose file, but your package.json is in /usr/app (the WORKDIR set in the Dockerfile). So the container runs npm run install from the wrong directory.
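A minimal sketch of the corrected compose file, assuming /usr/app (the image's WORKDIR) is where the code should live; note also npm install rather than npm run install, which would look for a script named install in package.json:
version: '3'
services:
  testweb:
    build: .
    command: npm install
    volumes:
      - .:/usr/app/
    working_dir: /usr/app
    ports:
      - "3000:3000"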
For a Docker container to stay up, you need a process running in the foreground. In this case it's the command npm start, which will keep running as long as the container is up. So you need a CMD or ENTRYPOINT that starts such a process. Please try something like this, and let me know if you have any questions:
FROM node:7.7.2-alpine
WORKDIR /usr/app
RUN apk update && apk add postgresql
COPY package.json .
RUN npm install --quiet
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
Put the docker-compose.yml file in the project/app directory.
Then run docker compose up in the project/app directory.
reference: https://forums.docker.com/t/use-docker-compose-in-get-started-tutorial-solved/129415

How can I copy node_modules folder out of my docker container onto the build machine?

I am moving an application to a new build pipeline. On CI I am not able to install node to complete the NPM install step.
My idea is to move the npm install step into a Docker image that has Node, install the node modules there, and then copy the node_modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
# I thought this would copy the new node_modules back to the host
COPY node_modules ./dashboard-interface/node_modules
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host, I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory into which it has just installed the node modules.
According to the documentation of the COPY instruction, COPY copies files from the build context on the host into the image, not the other way around.
If you want files from the container to be available outside your container, you can use volumes. Volumes give your container storage that is independent of the container itself, and you can reuse it for other containers in the future.
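Another common pattern, separate from what this answer proposes: build the image, create a container from it without starting it, and use docker cp to pull the directory out. A sketch, with myapp-deps as a stand-in image tag and the path from the question's Dockerfile:
docker build -t myapp-deps .
docker create --name deps-tmp myapp-deps
docker cp deps-tmp:/usr/src/app/node_modules ./node_modules
docker rm deps-tmp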
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for a slimmer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY /dashboard-folder/package.json .
RUN npm i --production
# node_modules now lives at /app/node_modules inside the image
This assumes that your project structure looks like this:
|root
| Dockerfile
|
\---dashboard-folder
package.json
Where root is your working directory that will receive node_modules.
Build the image with docker build . -t NAME, then use it like so:
docker run -it --rm -v ${PWD}:/app/root NAME mv node_modules ./root
That should do the trick.
The simple and sure way is to do volume mapping. For example, the docker-compose.yml file will have a volumes section that looks like this:
….
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
The second, anonymous volume prevents the bind mount from hiding the node_modules directory that npm install created inside the image at build time.
For the docker run command, use:
-v "$(pwd)":/usr/src/app
(docker run -v needs an absolute host path, so expand the current directory with $(pwd))
and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But confirm first that running npm install did create the node_modules directory on the host system.
The main reason you are hitting the problem depends on the OS your host is running. If your host runs Linux, there will be no problem at all, but if your host is macOS or Windows, Docker is actually running inside a hidden VM, and hence paths cannot be mapped directly to the host system. Instead, you can use a volume.

Docker-compose is dependent on the locally installed packages outside the Docker environment

I am using docker-compose to create build my image in my local environment but I have been having a few issues.
It looks like docker-compose up does not work unless I have installed all the packages locally in my project, outside of Docker.
Here is the Scenario.
I have a simple index.js file with one package (express)
const app = require('express')();
app.get('/health', (req, res) => res.status(200).send('ok'));
app.listen(8080, () => console.log('magic happens here'));
A Dockerfile to build the image
FROM node:6.9.5
RUN mkdir -p /var/log/applications/test_dc
RUN mkdir /src
WORKDIR /src
COPY package.json /src
COPY . /src
RUN npm install
VOLUME ["/src", "/var/log/applications/test_dc"]
EXPOSE 8080
CMD ["npm", "start"]
And a docker-compose.yml file to start the project up (I have removed the linked images to keep it minimal)
version: "2.1"
services:
api:
build: .
container_name: test_dc
ports:
- "8080:8080"
volumes:
- .:/src
The idea is that anyone can clone the repository and run the docker-compose command to:
Build the image from the Dockerfile (Install deps and start up)
Spin up the containers.
Hit the health route.
This means I should not need to run npm install in the local project outside of Docker.
This, however, fails on docker-compose up, and is only solved when I run npm install locally.
Isn't this against the whole idea of Docker? That a project should be isolated from the rest of the system?
Is there something I am missing?
External volume mounts can mask image contents
You appear to want your project code to live in /src, as seen in the Dockerfile:
RUN mkdir /src
WORKDIR /src
COPY package.json /src
COPY . /src
But in docker-compose.yml, you are mounting an external path as /src.
volumes:
  - .:/src
The external mount takes precedence and obscures everything that was built into your image by the Dockerfile (as well as the volume that you defined in the Dockerfile).
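You can see the masking for yourself. A quick check, using the api service name from the compose file:
# with the bind mount active, this lists your host files,
# not the files that were COPY'd into the image
docker-compose run --rm api ls /src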
Running startup tasks using a custom entrypoint
The usual way to handle this is by using an entrypoint script that runs your installer at container startup.
entrypoint.sh:
#!/bin/sh
npm install
exec "$#"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
At container startup, the entrypoint script is run, with the external volume already mounted. Thus, npm install can run and install modules there. When that is done, the CMD is executed. In your case, that is npm start.
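Concretely, Docker concatenates ENTRYPOINT and CMD, so with ENTRYPOINT ["/entrypoint.sh"] and CMD ["npm", "start"] the container effectively runs:
/entrypoint.sh npm start
entrypoint.sh runs npm install against the mounted volume, and exec "$@" then replaces the shell with npm start, which becomes the container's main process.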
Why you would want to use a custom entrypoint
One reason to use a container is to create isolation. That is, you want the image to be isolated and not rely on your external environment. So why would you want to use an external mount at all?
The answer is that during development you often want to avoid frequent rebuilds. Since images are immutable, every single change would require a rebuild and restart of the container. So every code change... yeah, it would be pretty slow.
A good compromise is to build your images to be fully contained, but during development you externally mount the code so you can make changes very quickly without a rebuild step. Once you are ready to release code, you can remove the external mount and do one final build. Since you're copying the files you were mounting into the container, it's all the same. And the npm install step in the build is the same one you run in the custom ENTRYPOINT.
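One way to structure that compromise, assuming the Compose setup from the question: keep docker-compose.yml free of source mounts, and put the development-only mount in a docker-compose.override.yml, which docker-compose merges in automatically on docker-compose up:
# docker-compose.override.yml (development only)
version: "2.1"
services:
  api:
    volumes:
      - .:/src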

Node.js + Docker Compose: node_modules disappears

I'm attempting to use Docker Compose to bring together a number of Node.js apps in my development environment. I'm running into an issue, however, with node_modules.
Here's what's happening:
npm install is run as part of the Dockerfile.
I do not have node_modules in my local directory. (I shouldn't because the installation of dependencies should happen in the container, right? It seems to defeat the purpose otherwise, since I'd need to have Node.js installed locally.)
In docker-compose.yml, I'm setting up a volume with the source code.
docker-compose build runs fine.
When I docker-compose up, the node_modules directory disappears in the container — I'm assuming because the volume is mounted and I don't have it in my local directory.
How do I ensure that node_modules sticks around?
Dockerfile
FROM node:0.10.37
COPY package.json /src/package.json
WORKDIR /src
RUN npm install -g grunt-cli && npm install
COPY . /src
EXPOSE 9001
CMD ["npm", "start"]
docker-compose.yml
api:
  build: .
  command: grunt
  links:
    - elasticsearch
  ports:
    - "9002:9002"
  volumes:
    - .:/src
elasticsearch:
  image: elasticsearch:1.5
Node.js resolves modules by looking for node_modules in the current directory and then walking up through the parent directories, so you can simply place node_modules higher in the source code path. For example, put your source at /app/src and your package.json in /app, so /app/node_modules is where the modules are installed; a bind mount at /app/src then no longer hides them.
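Applied to the Dockerfile from the question, that might look like this sketch (the /app and /app/src locations are this example's choice, not from the original post); the compose volume would then be .:/app/src:
FROM node:0.10.37
WORKDIR /app
COPY package.json /app/package.json
# modules land in /app/node_modules, one level above the mounted source
RUN npm install -g grunt-cli && npm install
WORKDIR /app/src
COPY . /app/src
EXPOSE 9001
CMD ["npm", "start"]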
I tried your fix, but the issue is that most people run npm install in the /usr/src/app directory, so that is where node_modules ends up. After moving the install a level up, the node_modules folder ends up in both the /usr/src and /usr/src/app directories in the container, and you end up with the same issue you started with.
