I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file is present, and the Docker build uses COPY to copy the package.json file over, and it also looks for a Node.js script to start when the build is run.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script, all I want is
a container image
Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js / NPM that does not require a package.json file? Perhaps I should just use a plain old container image and add the code to install Node myself?
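For reference, the alternative mentioned in that last sentence would look roughly like this (just a sketch on a plain Debian base, using the distribution's unpinned nodejs/npm packages):
# plain Debian image with Node.js and npm installed from the distro repos
FROM debian:stable
RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*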
Alright, I tried to make this a simple question, unfortunately nobody could provide a simple answer...until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
The code below is our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
You build the Docker image like so:
docker build -t foo .
Then you run the image like so, which will "run the entrypoint":
docker run -it --rm foo
The container's stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).
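If you still want the docker exec technique from the original question with this image, a rough sketch (the container name and the tail trick to keep the container alive are just illustrative):
# start a long-lived container from the image, overriding the entrypoint so it just idles
docker run -d --name mycontainer --entrypoint tail foo -f /dev/null
# then run the script inside the running container, as in the question
docker exec mycontainer /bin/bash /usr/src/app/script.sh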
Related
I am trying to deploy my image that is based on Node (node:latest) on Azure. When I do, it terminates automatically and does not let me do what I need to do with it.
My docker file:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
COPY artillery-scripts.sh .
COPY images images
COPY src src
EXPOSE 80
RUN npm install -g artillery && \
npm install faker && \
npm install worker && \
npm install -g node-fetch -save && \
npm install -g https://github.com/preguica/artillery-plugin-metrics-by-endpoint.git
I have tried adding && \ while true; do echo SLEEP; sleep 10; done at the end so it wouldn't terminate automatically, but that produces an error.
Anyone know what this problem is?
Probably good to first try it all locally. It seems you misunderstand some fundamental parts of Docker.
Writing something that will pause in your Dockerfile makes no sense at all, since that file is for building the image, not running the container.
Once you have the image, you can run one or more containers based on this image.
Usually you will want to put a CMD or ENTRYPOINT at the end that will tell the container what command to run. Read this article which gives a pretty good explanation of both.
If you want to interact with the container, look into the -i and -t (or short -it) flags of the run command. When you run your container you can also provide a command; this will override any command given in CMD or be appended to anything in ENTRYPOINT.
If you do not write an ENTRYPOINT or CMD it will default to running a shell.
However, if you run it without -it it will start the shell, consider its work done, and stop immediately.
Again, if you want to start a specific script, for instance, you can add a line to the end of your Dockerfile such as
CMD ["node", "somefile.js"]
So first build your image based on the Dockerfile, then run the container based on the image:
docker build -t someimagename:sometag .
docker run -it someimagename:sometag        # will run CMD, i.e. node somefile.js, or:
docker run -it someimagename:sometag node   # will override it and just run node
You can install Docker locally and do all of that on your local machine; once you get a feel for it and are sure your Dockerfile is correct, see how to deploy it to Azure. That way it is easier to debug and learn.
Extra tip: you wrote EXPOSE 80. Read the docs on EXPOSE and publishing because it can be confusing when you start out. EXPOSE is just there for documentation, it does NOT actually expose anything. If you would like to connect to the container from the outside world you have to publish the port. This is done in the run command:
docker run -it -p 80:80 someimagename:sometag   # the first port is the host port, the second is the container port
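Putting that together for the image in the question, a sketch of the missing pieces (the image name and the choice of artillery-scripts.sh as the foreground process are assumptions):
# at the end of the Dockerfile, give the container a foreground process so it doesn't exit immediately
CMD ["bash", "artillery-scripts.sh"]
# then build and run locally, publishing the container port to the host
docker build -t artillery-test .
docker run -it -p 80:80 artillery-test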
I have an AngularJS application that I'm running using Docker.
The Dockerfile looks like this:
FROM node:6.2.2
RUN npm install --global gulp-cli && \
npm install --global bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
COPY bower.json /usr/src/app/
RUN npm install && \
bower install -F --allow-root --config.interactive=false
COPY . /usr/src/app
ENV GULP_COMMAND serve:dist
ENTRYPOINT ["sh", "-c"]
CMD ["gulp $GULP_COMMAND"]
Now when I make any changes in, say, an HTML file, it doesn't dynamically load up on the web page. I have to stop the container, remove it, build the image again, remove the earlier image, and then restart the container from the new image. Do I have to do this every time? (I'm new to Docker, and I guess this issue is because my source code is not put into a volume, but I don't know how to do that using the Dockerfile.)
You are correct, you should use volumes for stuff like this. During development, mount volumes over the same paths as your COPY directories. The volume will override the directory with whatever is on your machine, with no need to rebuild the image or even restart the container. Perfect for development.
When actually baking your images for production, you remove the volumes, leave the COPY in, and you'll get a deterministic container. I would recommend you read through this article here: https://docs.docker.com/storage/volumes/.
In general, there are 3 ways to do volumes.
Define them in your dockerfile using VOLUME.
Personally, I've never done this. I don't really see the benefits of this against the other two methods. I believe it would be more common to do this when your volume is meant to act as a permanent data-store. Not so much when you're just trying to use your live dev environment.
Define them when calling docker run.
docker run ... -v $(pwd)/src:/usr/src/app ...
This is great, because if the COPY in your Dockerfile is ./src /usr/src/app, the volume temporarily overrides that directory while the container is running, but the copied files are still there for deployment when you don't use -v.
Use docker-compose.
My personal recommendation. Docker Compose massively simplifies running containers. For the sake of simplicity, think of it as calling docker run ... for you, with the arguments filled in from a given docker-compose.yml config.
Create a dev service specifying the volumes you want to mount, other containers you want it linked to, etc. Then bring it up using docker-compose up ... or docker-compose run ... depending on what you need.
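A minimal docker-compose.yml sketch for such a dev service (the service name, paths, and port are illustrative):
# docker-compose.yml
version: "2"
services:
  dev:
    build: .
    volumes:
      - ./src:/usr/src/app
    ports:
      - "3000:3000"
Then docker-compose up dev starts the container with the volume mounted.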
Smart use of volumes will DRAMATICALLY reduce your development cycle. Would really recommend looking into them.
Yes, you need to rebuild every time the files change, since you only modify the files that are outside of the container. In order to apply the changes to the files IN the container, you need to rebuild the image and recreate the container.
Depending on the use case, you could either make the Docker Container dynamically load the files from another repository, or you could mount an external volume to use in the container, but there are some pitfalls associated with either solution.
If you want to keep your container running while you add your files, you could also use a variation, sketched after these steps.
Mount a volume to any other location e.g. /usr/src/staging.
While the container is running, if you need to copy new files into the container, copy them into the location of the mounted volume.
Run docker exec -it <container-name> bash to open a bash shell inside the running container.
Run a cp /usr/src/staging/* /usr/src/app command to copy all new files into the target folder.
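Sketching that variation end to end (the container and image names are illustrative):
# run the container with a staging volume mounted from the host
docker run -d --name myapp -v $(pwd)/staging:/usr/src/staging myimage
# on the host: drop new files into ./staging whenever they change
cp some-changed-file.js staging/
# open a shell inside the running container
docker exec -it myapp bash
# inside the container: copy the new files into the target folder
cp /usr/src/staging/* /usr/src/app/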
I have created a Docker image which has an executable Node.js app.
I have multiple modules which are independent of each other. These modules are created as packages inside Docker using the npm link command, and hence can be required in my Node.js index file.
The directory structure is as follows:
|-node_modules
|-src
  |-app
    |-index.js
  |-independent_modules
    |-some_independent_task
    |-some_other_independent_task
While building the image I have created an npm link for every independent module in the root node_modules. This creates a node_modules folder inside every independent module, which is not present locally; it is only created inside the container.
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
docker build -t demoapp .
To run the index.js in the dev environment I need to mount the local src directory to docker src directory to reflect the changes without rebuilding the image.
For mounting and running I use the command
docker run -v $(pwd)/src:/src demoapp node src/index.js
The problem here is that locally no dependencies are installed, i.e. no node_modules folder is present. Hence, when the local directory is mounted into Docker, it replaces the container's directory with an empty one, and the dependencies installed inside Docker in node_modules vanish.
I tried using .dockerignore to avoid mounting the node_modules folder, but it didn't work. Keeping an empty node_modules locally also doesn't work.
I also tried using docker-compose to keep volumes synced and hide node_modules from them, but I think this only syncs while the container keeps running, i.e. when it is started with some server process.
This is the docker-compose.yml I used
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    container_name: demoapp_container
    command: echo 'ready'
    environment:
      - NODE_ENV=development
I read here that using this will skip node_modules from syncing.
But this also doesn't work for me.
I need to execute this index.js each time from a stopped Docker container, with the local code synced to the Docker workdir but skipping the dependencies folder, i.e. node_modules.
One more thing would be somewhat helpful, if it is possible. Every time I do docker-compose up or docker-compose run it prints ready. Can I override the command in docker-compose with a command passed from the CLI?
Something like docker-compose run | {some command}.
You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
docker run \
  -v $(pwd)/src:/src \
  -v "/src/independent_modules/some_independent_task/node_modules" \
  -v "/src/independent_modules/some_other_independent_task/node_modules" \
  demoapp \
  node src/index.js
If you want to use docker-compose, you should change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run ... command.
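On the command-override part of the question: docker-compose run accepts an optional command after the service name, which overrides command: from the file. Using the service name from the compose file above, that would look something like:
docker-compose run demoapp_container node src/index.js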
I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is placed in a 'flows.json' file. And when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However when I try to do this the flow ends up empty.
I suspect this has something to do with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions for how to customise a Docker image and add extra nodes. You can either do it by logging into the running container using docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Or you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
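Then run the extended image the same way as before, making sure the flow file ends up in the container's user directory (/data), for example by bind-mounting it there (the mount is an assumption; adjust the path to wherever your flows.json lives):
docker run -d -p 1880:1880 -v $(pwd)/flows.json:/data/flows.json -e FLOWS=flows.json --name mynodered mynodered:<tag>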
I'm using Docker (version 1.12.2, build bb80604) to set up a simple image/container with Gatling (a load-testing tool) + NodeJS. So, I pulled this Docker/Gatling base image and created my own Dockerfile to install NodeJS on it.
However, the Docker/Gatling base image above already defines an ENTRYPOINT that calls Gatling straight away and then automatically exits the container. It looks like this:
ENTRYPOINT ["gatling.sh"]
What I'm trying to achieve is: I want to run a second command (my own NodeJS script to parse the test results), but I couldn't find a solution so far (I tried overriding the ENTRYPOINT and different combinations of ENTRYPOINT and CMD, but with no success).
Here is what my current Dockerfile looks like:
FROM denvazh/gatling:2.2.3
RUN apk update \
&& apk add -U bash \
&& apk add nodejs=6.7.0-r0
COPY simulations /opt/gatling/user-files/simulations
COPY trigger-test-and-parser.sh /opt/gatling/
RUN chmod +x /opt/gatling/trigger-test-and-parser.sh
ENTRYPOINT ["bash", "/opt/gatling/trigger-test-and-parser.sh"]
Here is the command I'm using to build my image based on my Dockerfile:
docker build --no-cache -t gatling-nodejs:v8 .
And this is the command I'm using to run my container:
docker run -i -v "$PWD/results":/opt/gatling/results -v "$PWD":/opt/gatling/git.campmon.com/rodrigot/platform-hps-perf-test gatling-nodejs:v8
And this is the shell script (trigger-test-and-parser.sh) I'd like to execute once the container starts (it should trigger Gatling and then run my NodeJS parser):
gatling.sh -s MicroserviceHPSPubSubRatePerfTest.scala
node publish-rate-to-team-city.js
Any ideas or tweaks so I can run both commands once my container starts?
Thanks a lot!
Set ENTRYPOINT to /usr/bin/env. Then set CMD to be what you want to run.
Graham's idea above worked pretty well. Thanks again!
For future reference, here are the two lines I had to add to my Dockerfile:
ENTRYPOINT ["/usr/bin/env"]
CMD ["bash", "/opt/gatling/trigger-test-and-parse-result.sh"]