Docker - Override or remove ENTRYPOINT from a base image - node.js

I'm using Docker (version 1.12.2, build bb80604) to set up a simple image/container with Gatling (a load-testing tool) + NodeJS. So, I pulled this Docker/Gatling base image and created my own Dockerfile to install NodeJS on it.
However, the Docker/Gatling base image above already has an ENTRYPOINT defined to call Gatling straight away, after which the container automatically exits. It looks like this:
ENTRYPOINT ["gatling.sh"]
What I'm trying to achieve is: I want to run a second command (my own NodeJS script to parse the test results), but I haven't found a solution so far (I tried overriding the ENTRYPOINT and different combinations of ENTRYPOINT and CMD, but no success).
Here is what my current Dockerfile looks like:
FROM denvazh/gatling:2.2.3
RUN apk update \
&& apk add -U bash \
&& apk add nodejs=6.7.0-r0
COPY simulations /opt/gatling/user-files/simulations
COPY trigger-test-and-parser.sh /opt/gatling/
RUN chmod +x /opt/gatling/trigger-test-and-parser.sh
ENTRYPOINT ["bash", "/opt/gatling/trigger-test-and-parser.sh"]
Here is the command I'm using to build my image based on my Dockerfile:
docker build --no-cache -t gatling-nodejs:v8 .
And this is the command I'm using to run my container:
docker run -i -v "$PWD/results":/opt/gatling/results -v "$PWD":/opt/gatling/git.campmon.com/rodrigot/platform-hps-perf-test gatling-nodejs:v8
And this is the shell script (trigger-test-and-parser.sh) I'd like to execute once the container starts (it should trigger Gatling and then run my NodeJS parser):
gatling.sh -s MicroserviceHPSPubSubRatePerfTest.scala
node publish-rate-to-team-city.js
Any ideas or tweaks so I can run both commands once my container starts?
Thanks a lot!

Set ENTRYPOINT to /usr/bin/env. Then set CMD to be what you want to run.

Graham's idea above worked pretty well. Thanks again!
For future reference, here are the two lines I had to add to my Dockerfile:
ENTRYPOINT ["/usr/bin/env"]
CMD ["bash", "/opt/gatling/trigger-test-and-parse-result.sh"]

Related

Docker image "node" terminates before I can access it (using azure cli)

I'm trying to deploy my image, which is based on node (node:latest), to Azure. When I do, it terminates automatically and does not let me do what I need to do with it.
My docker file:
WORKDIR /usr/src/app
COPY package.json .
COPY artillery-scripts.sh .
COPY images images
COPY src src
EXPOSE 80
RUN npm install -g artillery && \
npm install faker && \
npm install worker && \
npm install -g node-fetch -save && \
npm install -g https://github.com/preguica/artillery-plugin-metrics-by-endpoint.git
I have tried adding && \ while true; do echo SLEEP; sleep 10; done at the end so it wouldn't terminate automatically, but that produces an error.
Does anyone know what this problem is?
It's probably good to first try it all locally; it seems you misunderstand some fundamental parts of Docker.
Writing something that will pause in your Dockerfile makes no sense, since that file is for building the image, not running the container.
Once you have the image, you can run one or more containers based on this image.
Usually you will want to put a CMD or ENTRYPOINT at the end that will tell the container what command to run. Read this article which gives a pretty good explanation of both.
If you want to interact with the container, look into the -i and -t (or short -it) flags of the run command. When you run your container, you can also provide a command; this will override any command given in CMD or be appended to anything in ENTRYPOINT.
If you do not write an ENTRYPOINT or CMD it will default to running a shell.
However, if you run it without -it it will start the shell, consider its work done and stop immediately.
Again, if you want to start a specific script, for instance, you can add a line to the end of your Dockerfile such as
CMD ["node", "somefile.js"]
So first build your image based on the Dockerfile, then run the container based on the image (note that image names must be lowercase):
docker build -t some-image-name:some-tag .
docker run -it some-image-name:some-tag        # will run CMD, i.e. node somefile.js, or:
docker run -it some-image-name:some-tag node   # will override it and just run node
You can install Docker locally and do all of that on your local machine; once you get a feel for it and are sure your Dockerfile is correct, see how to deploy it to Azure. That way it is easier to debug and learn.
Extra tip: you wrote EXPOSE 80. Read the docs on EXPOSE and publishing, because it can be confusing when you start out. EXPOSE is just there for documentation; it does NOT actually publish anything. If you want to connect to the container from the outside world, you have to publish the port. This is done in the run command (options go before the image name):
docker run -it -p 80:80 some-image-name:some-tag   # the first 80 is the host port, the second is the container port
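To make that concrete for the Dockerfile in the question, a minimal sketch of the missing pieces; app.js is only a guess at what the app actually starts, and my-node-app is just an example tag:
# At the end of the Dockerfile: tell the container what to run
CMD ["node", "app.js"]

# Build, then run with the port published (not just EXPOSEd)
docker build -t my-node-app .
docker run -it -p 80:80 my-node-app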

Store NodeJS script outside of Docker container, so it can be modified

I'm interested in running a NodeJS script inside a Docker container, because that seems to be the easiest way to run stuff in unRAID (small scripts at least).
My current Dockerfile looks like this:
FROM node:12.9.1
COPY app.js /home/node/
COPY package*.json /home/node/
RUN mkdir /home/node/saves
WORKDIR /home/node
RUN npm install
CMD ["node", "app.js"]
Not perfect, but works well enough. What my script does, is that it scrapes certain websites for data, then places it in a folder called saves.
But because I do COPY app.js /home/node/, every time I want to make a tiny change to my app.js file, I need to rebuild the whole image, delete the container, and start a new one. Kind of irritating, but it has worked for me... for now.
When I start my container, I want that volume to stay persistent, so I do this:
docker run --net=bridge -h scraper --name scraper -d -v /mnt/user/scripts/scraper/saves:/home/node/saves scraper
This works, but as I said, if I want to change my app.js (like add a new site to scrape), I have to rebuild the image and run the above command again. Every single time.
What's a better approach than this? I could solve this by not copying the files, but instead running npm install and then node app.js every time, but this script runs every 3 minutes, so that would be a huge waste of resources.
I could also store the appropriate data inside my /saves/ folder, then read that in the NodeJS script every time, but I feel like that's kind of a hack.
Use environment variables.
If the kind of change to the app.js file is known in advance, you could use environment variables to supply such changes to docker run, and an entrypoint script to apply them using those variables.
The content of the entrypoint script will depend on what you want to do.
# Add to your Dockerfile
ENV SITE_TO_SCRAP=example.com
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
# Run docker with
-e SITE_TO_SCRAP=abc.com
The entrypoint script may look like:
#!/bin/bash
# Modify app.js
# Assuming changing the line SCRAP_URL="someurl"
sed -i -e "s#SCRAP_URL=\"someurl\"#SCRAP_URL=\"${SITE_TO_SCRAP}\"#" /home/node/app.js
# This will execute the CMD
exec "$#"
Make the node cache folder persistent
To run npm install, do it in the entrypoint script, not in the Dockerfile. Make the Node.js cache folder persistent by mounting -v host/path/to/cache:/root/.npm; this way npm install uses cached files whenever possible. Use docker run --rm node:{version} npm config get cache to get the container's cache directory.
Manually mount modified files.
# To your docker run command add required file/directory mount(s)
-v /path/to/modified/app.js:/home/node/app.js
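Putting those pieces together, a sketch of a single run command combining the environment variable, the persistent npm cache, and the mounted app.js; the host paths are only examples based on the question:
docker run -d --name scraper \
  -e SITE_TO_SCRAP=abc.com \
  -v /mnt/user/scripts/scraper/saves:/home/node/saves \
  -v /mnt/user/scripts/scraper/npm-cache:/root/.npm \
  -v /mnt/user/scripts/scraper/app.js:/home/node/app.js \
  scraper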

Access raspistill / pi camera inside a Docker container

I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any troubles.
The problem comes when an app dependency (raspicam) requires raspistill to make use of the camera to take a photo. The Raspberry is running Debian Stretch and the Pi camera is configured and tested, but I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a win10 64bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then in the Raspberry, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some researching, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes do I have to make to the Dockerfile so that I'm able to access the pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread, I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to your config file so that raspistill can find all dependencies it needs.
Reload the config file (ldconfig) at the container's runtime rather than at build time.
The last point is the main reason why Anton's solution didn't work. ldconfig needs to be executed in a running container so either use similar approach to mine or go with entrypoint.sh file instead.
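For reference, a sketch of that entrypoint.sh variant, assuming the app lives at /usr/src/app/app.js as in the Dockerfile above:
#!/bin/sh
# entrypoint.sh: refresh the dynamic linker cache at runtime, then run the CMD
ldconfig
exec "$@"

# In the Dockerfile, instead of the combined CMD above:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "/usr/src/app/app.js"]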
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vmcs.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vmcs.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try a Raspbian-based Docker image instead, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian
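One approach sometimes combined with a Raspbian base image (the answer above doesn't spell this out) is to bind-mount the whole /opt/vc directory from the host, so both the binaries and the shared libraries are available inside the container; the ld.so.conf entry and the runtime ldconfig from the earlier answer are still needed. A sketch, using the image name from the question:
docker run --privileged \
  --device=/dev/vchiq \
  -v /opt/vc:/opt/vc \
  -p 3000:3000 my/image:latest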

Create docker container with Node.js/NPM preinstalled but no package.json

I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file is present: the Docker build uses COPY to copy the package.json file over, and it also looks for a Node.js script to start when the build is run.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script, all I want is
a container image
Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js / NPM that does not require a package.json file? Perhaps I should just use a plain old container image and add the code to install Node myself?
Alright, I tried to make this a simple question; unfortunately nobody could provide a simple answer... until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
The below code is in our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
You build the Docker image like so:
docker build -t foo .
Then you run the image like so, which will "run the entrypoint":
docker run -it --rm foo
The container stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).
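As an aside, for the docker exec workflow from the question, the container only has to stay alive; a minimal sketch, with mycontainer as the name used in the question:
# Start a long-lived container from the plain node image
docker run -d --name mycontainer node:5 tail -f /dev/null
# Run an arbitrary script that exists inside the container
docker exec mycontainer /path/to/test.sh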

How should I Accomplish a Better Docker Workflow?

Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows down my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory in your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start docker without the -v option.
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount; I fixed it with Node's module path traversal (node_modules is installed in /usr/src, the parent of the mounted /usr/src/app, and Node also searches parent directories for modules).
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist the node_modules
# even after we mount the volume (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also structure the Dockerfile so that it first looks in the directory specified by the build context argument of docker build, finds the package.json file, copies only that into the container's working directory, then runs npm install, and afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes as you want and it will not invalidate the cache for any of these steps here.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
So we can test this by running docker build -t <tagname>/<project-name> .
Now I have made a change to the Dockerfile, so you will see some steps re-run and eventually our successfully tagged and built image.
Docker detected the change to that step and every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum.
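A quick way to see the caching behaviour for yourself (a sketch; myuser/myproject is just an example tag):
# First build: every step runs
docker build -t myuser/myproject .
# Change any source file other than package.json, then rebuild:
docker build -t myuser/myproject .
# The COPY ./ ./ step and everything after it re-run,
# but the RUN npm install step reports "Using cache"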
