I have a problem with a Node.js container that shuts down a few seconds after the docker run command starts executing.
To start, I created a React project by running create-react-app <my_project_name>. Then, in the project folder, I created a Docker file named Dockerfile.dev that looks like this:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
I built it with the command docker build -f Dockerfile.dev -t emy .
The build was a success, but when I ran it with the command docker run -p 127.0.0.1:3000:3000 emy, the container shut itself down after a few seconds.
This is the structure of the generated project:
And this is the output of the docker run command in the Git Bash terminal:
And in the Windows terminal:
The container exit code is 0. So it's normal...
Steps to reproduce the problem:
1) Install Node.js.
2) Install the React project generator (create-react-app).
3) Create a project by running the command create-react-app <my_project_name> in the Windows terminal.
4) Step into the newly created project, with cd <my_project_name>
5) Create the Dockerfile.dev file with the content you can find above.
6) Build the image by running the command docker build -f Dockerfile.dev -t emy .
7) Now let's reproduce the problem by running the container with the command docker run -p 127.0.0.1:3000:3000 emy
8) Wait 5 seconds (max) and you should see the same problem as me.
The solution to this problem is to add the -it flags to the docker run command, both for the Git Bash terminal and for the Windows terminal; sketches of the commands are below.
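A sketch of the fixed commands, assuming the image is tagged emy as above (the winpty prefix is an assumption; Git Bash/mintty usually needs it to allocate a TTY):
# Git Bash terminal
winpty docker run -it -p 127.0.0.1:3000:3000 emy
# Windows terminal (CMD/PowerShell)
docker run -it -p 127.0.0.1:3000:3000 emy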
Hope this will help others
Related
I am trying to deploy my image, which is based on node (node:latest), on Azure. When I do, it terminates automatically and does not let me do what I need to do with it.
My Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
COPY artillery-scripts.sh .
COPY images images
COPY src src
EXPOSE 80
RUN npm install -g artillery && \
npm install faker && \
npm install worker && \
npm install -g node-fetch -save && \
npm install -g https://github.com/preguica/artillery-plugin-metrics-by-endpoint.git
I have tried adding && \ while true; do echo SLEEP; sleep 10; done at the end so it wouldn't terminate automatically, but that produces an error.
Anyone know what this problem is?
It would probably be good to first try it all locally. It seems you misunderstand some fundamental parts of Docker.
Writing something that will pause in your Dockerfile makes no sense at all, since that file is for building the image, not running the container.
Once you have the image, you can run one or more containers based on this image.
Usually you will want to put a CMD or ENTRYPOINT at the end that will tell the container what command to run. Read this article which gives a pretty good explanation of both.
If you want to interact with the container, look into the -i and -t (or short -it) flags of the run command. When you run your container, you can also provide a command; this will override any command given in CMD, or be appended to anything in ENTRYPOINT.
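As a rough illustration (the image and file names here are placeholders), a Dockerfile that ends with
ENTRYPOINT ["node"]
CMD ["somefile.js"]
will run node somefile.js by default, while docker run myimage other.js replaces only the CMD part and runs node other.js instead.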
If you do not write an ENTRYPOINT or CMD it will default to running a shell.
However, if you run it without -it it will start the shell, consider its work done, and stop immediately.
Again, if you want to start a specific script, for instance, you can add a line to the end of your Dockerfile such as
CMD ["node", "somefile.js"]
So first build your image based on the Dockerfile, then run the container based on the image:
docker build -t some-image-name:some-tag .
docker run -it some-image-name:some-tag    # will run the CMD: node somefile.js, or:
docker run -it some-image-name:some-tag node    # will override it and just run node
You can install Docker locally and do all of that on your local machine; once you get a feel for it and are sure your Dockerfile is correct, look at how to deploy it to Azure. That way it is easier to debug and learn.
Extra tip: you wrote EXPOSE 80. Read the docs on EXPOSE and publishing ports, because it can be confusing when you start out. EXPOSE is mostly there for documentation; it does NOT actually publish anything. If you want to connect to the container from the outside world, you have to publish the port. This is done in the run command:
docker run -it -p 80:80 some-image-name:some-tag    # the first 80 is the host port, the second is the container port
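For the question above, a minimal sketch of the missing ending of the Dockerfile, assuming the app's entry script is src/index.js (a hypothetical name):
# ... existing instructions from the question ...
EXPOSE 80
# Tell the container what to run; src/index.js is a placeholder for the real entry script
CMD ["node", "src/index.js"]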
I have an Express application, and I use docker-compose to run it. To run my app I use the command:
docker-compose up
If I run it the first time and don't have any node_modules, I get an error in the terminal, something like "The module 'express' was not found, please install it and try again...". So I just open one more terminal and run the next command:
docker-compose exec backend npm i
The modules install in a few seconds, and my app starts working in the previous terminal. I always used this method, but now I found the run command for docker-compose. It allows you to execute a command in a container that is not up yet. So I wanted to try this command: I deleted the ./node_modules directory, stopped all containers, closed all terminals, then opened a terminal and ran the command:
docker-compose run backend npm i
The modules start to install; I waited for about 10 minutes but it stops in the middle. I don't understand why. If I run up and then npm i in a second terminal it works, but with the run command it doesn't. What am I doing wrong?
You should not install your node modules in a running container. Instead, you should install them in your image via your Dockerfile and then run it via docker or docker-compose.
Your Dockerfile should look something like this:
# Use the version of node you are using
FROM node:10
# Replace this with your app code path
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
# Again, use your own path
COPY app-code/ /usr/src/app/app-code
EXPOSE 3000
CMD ["npm", "start"]
You have to run npm install from your Dockerfile and not copy your local node_modules folder, because the environment in the container may differ from your development environment.
Then you can just run it from your docker-compose file.
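For example, a minimal docker-compose.yml sketch, assuming the service is called backend as in the commands above (the port mapping is an assumption):
version: "3"
services:
  backend:
    build: .          # build the image from the Dockerfile above
    ports:
      - "3000:3000"   # host:container, adjust to your app's port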
I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any troubles.
The problem comes when an app dependency (raspicam) requires raspistill to make use of the camera to take a photo. The Raspberry is running Debian Stretch and the Pi camera is configured and tested, but I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a win10 64bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then in the Raspberry, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some researching, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes do I have to make to the Dockerfile so that I'm able to access the pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread, I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to your config file so that raspistill can find all dependencies it needs.
Reload config file (ldconfig) during container's runtime rather than build-time.
The last point is the main reason why Anton's solution didn't work. ldconfig needs to be executed in a running container, so either use a similar approach to mine or go with an entrypoint.sh file instead.
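A rough sketch of that entrypoint.sh variant (the app path is taken from the Dockerfile above; the rest is an assumption):
#!/bin/sh
# entrypoint.sh: refresh the linker cache at runtime, then start the app
ldconfig
exec node /usr/src/app/app.js
and in the Dockerfile:
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]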
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vcms.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vcms.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try basing your image on a Raspbian Docker image, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian
I'm a newbie with Docker and I'm trying to start with Node.js, so here is my question.
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container it works fine..
But when I try to map my host project with the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont) it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions but nothing seems to work for me (or, I know, I'm just too much of a rookie at this).
Thank you !!
When you run your container with the -v flag, which means mounting a directory from your Docker engine's host into the container, it will hide what you did in /home/Documents/node-app, such as npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This mounts a host directory as a data volume. As the docs say, the pre-existing content of the host directory will not be removed, but they give no information about what happens to the existing content of the directory inside the container.
Here is an example to support my point.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I create a test.t file in the same directory as the Dockerfile.
Proving
Run the command docker build -t test-1 .
Run the command docker run --name test-c-1 -it test-1 /bin/sh; your container will open a shell.
Run ls -l in the container shell; it will show the test.t file.
Just use the same image.
Run the command docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You will not find the file test.t in your test-c-2 container.
That's all. I hope it will help you.
I recently faced a similar issue.
Upon digging into the Docker docs, I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) will be mounted on the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory, and therefore non-empty,
"the directory's existing contents are obscured by the bind mount."
I faced a similar problem recently. It turns out the problem was my package-lock.json: it was outdated relative to the package.json, and that was causing my packages not to be downloaded when running npm install.
I just deleted it and the build went OK.
Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
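A rough example of such a run command (the image name my-node-app, the container path and the port are placeholders):
docker run -v $(pwd):/usr/src/app -p 3000:3000 my-node-app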
Once you are done developing your program, build the image and start the container without the -v option.
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount - I fixed it by installing node_modules one directory up, so that Node's module resolution (which walks up parent directories) still finds it.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist the node_modules
# even after we mount the volume (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
#Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also have changed the instructions so that Docker looks in the directory specified by the build context argument of docker build, finds the package.json file, copies just that into the working directory of the container, runs npm install, and only afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes as you want to your source code, and it will not invalidate the cache for any of these earlier steps.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
So we can test this by running docker build -t <tagname>/<project-name> .
Now if I make a change to a later step in the Dockerfile, you will see some steps re-run, and eventually the image is successfully built and tagged.
Docker detects the change to that step and re-runs it and every step after it, but not the npm install step.
The lesson here is that, yes, the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum at each step.
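For contrast, a sketch of a less cache-friendly ordering (not from the answer above): copying all the source before npm install means any source change invalidates the install layer, so dependencies are reinstalled on every build.
FROM node:alpine
WORKDIR /usr/app
# Copying all source first means any file change busts the cache here...
COPY ./ ./
# ...so this npm install re-runs on every source change
RUN npm install
CMD ["npm", "start"]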