I'm a newbie with Docker and I'm trying to get started with Node.js, so here is my question.
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container it works fine.
But when I try to map my host project directory to the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont), it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions but nothing seems to work for me (or maybe I'm just too much of a rookie for this).
Thank you !!
When you run your container with the -v flag, which means mounting a directory from your Docker engine's host into the container, the mount overwrites what you did in /home/Documents/node-app during the build, such as npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
That passage is about mounting a host directory as a data volume. As the docs say, the pre-existing content of the host directory will not be removed, but there is no information about what happens to the existing content of the directory inside the container.
Here is an example to support my point.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I created a test.t file in the same directory as the Dockerfile.
Proving
Run the command docker build -t test-1 .
Run the command docker run --name test-c-1 -it test-1 /bin/sh; your container will open a shell.
Run ls -l in that shell; it will show the test.t file.
Now use the same image.
Run the command docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You will not find the file test.t in your test-c-2 container.
That's all. I hope it helps.
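As a side note, a common workaround (just a sketch, reusing the image name node-container from the question) is to declare an extra anonymous volume for node_modules so the bind mount cannot hide it:
docker run -d -p 49160:8080 \
  -v ~/Documentos/nodeApp:/home/Documents/node-app \
  -v /home/Documents/node-app/node_modules \
  node-container
Docker seeds the anonymous volume from the image's node_modules when the container is created, so the dependencies installed during the build stay visible even though the parent directory is bind-mounted.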
I recently faced a similar issue.
Upon digging into the Docker docs I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) will be mounted onto the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory and therefore non-empty,
"the directory’s existing contents are obscured by the bind mount."
I faced a similar problem recently. It turns out the problem was my package-lock.json: it was out of date relative to package.json, and that caused my packages not to be downloaded when running npm install.
I just deleted it and the build went ok.
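For reference, a minimal sketch of that fix (the image tag node-container is just reused from the question above):
rm package-lock.json              # remove the stale lock file
npm install                       # regenerates package-lock.json in sync with package.json
docker build -t node-container .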
I'm trying to run tacotron2 in Docker within Ubuntu WSL2 (20.04) on a Win10 2004 build. Docker is installed and running, and I can run hello-world successfully.
(There's a nearly identical question here, but nobody has answered it.)
When I try to run docker build -t tacotron-2_image docker/ I get the error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/nate/docker/Dockerfile: no such file or directory
So then I navigated in bash to where Docker is installed (/var/lib/docker) and tried to run it there, and got the same error. In both cases I created a docker directory, but I kept getting that error.
How can I get this to work?
As mentioned here, the error might have nothing to do with "symlinks" and everything to do with the lack of a Dockerfile, which should be in the Tacotron-2/docker folder.
docker build does mention:
The docker build command builds Docker images from a Dockerfile and a “context”.
A build’s context is the set of files located in the specified PATH or URL.
In your case, docker build -t tacotron-2_image docker/ is supposed to be executed in the path you have cloned the Rayhane-mamah/Tacotron-2 repository.
To be sure, you could specify said Dockerfile, but that should not be needed:
docker build -t tacotron-2_image -f docker/Dockerfile docker/
Or:
cd
git clone https://github.com/Rayhane-mamah/Tacotron-2
cd Tacotron-2
cd docker
docker build -t tacotron-2_image .
"I thought these commands I'm executing are for the purpose of installing it"
To build the image, you need the sources (the repository to clone).
If the name of your Dockerfile contains a stray capital F (e.g. DockerFile instead of Dockerfile), rename it.
For others like me who somehow couldn't get it to work because of a symlink:
just copy your files out to a new directory that hasn't been symlinked and build your image from there,
but only after you've confirmed that your Dockerfile isn't named dockerfile, .Dockerfile, DockerFile or dockerfile.txt.
My OS is elementary OS, which is based on Ubuntu.
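As a rough sketch of that workaround (the paths here are made up for illustration):
mkdir ~/tacotron-build                           # a fresh directory with no symlinks in its path
cp -r ~/Tacotron-2/docker/. ~/tacotron-build/    # copy the build context out of the symlinked tree
cd ~/tacotron-build
docker build -t tacotron-2_image .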
I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any troubles.
The problem comes when an app dependency (raspicam) requires raspistill to make use of the camera to take a photo. The Raspberry is running Debian Stretch and the Pi camera is configured and tested. But I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a Win10 64-bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then in the Raspberry, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some research, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes I have to make to the Dockerfile so that I'm able to access the Pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to your config file so that raspistill can find all dependencies it needs.
Reload config file (ldconfig) during container's runtime rather than build-time.
The last point is the main reason why Anton's solution didn't work. ldconfig needs to be executed in the running container, so either use an approach similar to mine or go with an entrypoint.sh file instead.
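For illustration, a rough sketch of the entrypoint.sh variant (the file name and app path just mirror the Dockerfile above and are assumptions):
#!/bin/sh
# entrypoint.sh: refresh the linker cache at runtime, then start the app
ldconfig
exec node /usr/src/app/app.js
and in the Dockerfile, replace the CMD line with:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]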
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vmcs.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vmcs.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try a Raspbian Docker base image, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian
I have created a Docker image which has an executable Node.js app.
I have multiple modules which are independent of each other. These modules are created as packages inside Docker using the npm link command, and hence can be required in my Node.js index file.
The directory structure is as follows:
|-node_modules
|-src
  |-app
    |-index.js
  |-independent_modules
    |-some_independent_task
    |-some_other_independent_task
While building the image I have created an npm link for every independent module in the root node_modules. This creates a node_modules folder inside every independent module, which is not present locally; it is only created inside the container.
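Roughly, the linking done during the image build looks like this (simplified; I'm using the directory name as the package name here):
cd src/independent_modules/some_independent_task
npm link                          # registers the package globally as a symlink
cd ../../..                       # back to the project root
npm link some_independent_task    # symlinks it into the root node_modules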
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
docker build -t demoapp .
To run index.js in the dev environment I need to mount the local src directory onto the Docker src directory, to reflect changes without rebuilding the image.
For mounting and running I use the command
docker run -v $(pwd)/src:/src demoapp node src/index.js
The problem here is that locally there are no dependencies installed, i.e. no node_modules folder is present. Hence, when the local directory is mounted into Docker, it replaces the container directory with the empty one, and the dependencies installed inside Docker in node_modules vanish.
I tried using .dockerignore to avoid mounting the node_modules folder, but it didn't work. Keeping an empty node_modules locally also doesn't work.
I also tried using docker-compose to keep volumes synced and hide node_modules from them, but I think this only syncs while the container keeps running, i.e. when Docker is running some server.
This is the docker-compose.yml I used
# docker-compose.yml
version: "2"
services:
demoapp_container:
build: .
image: demoapp
volumes:
- "./src:/src"
- "/src/independent_modules/some_independent_task/node_modules"
- "/src/independent_modules/some_other_independent_task/node_modules"
container_name: demoapp_container
command: echo 'ready'
environment:
- NODE_ENV=development
I read here that using this approach will skip node_modules from syncing.
But this also doesn't work for me.
I need to execute this index.js every time within a stopped Docker container, with the local code synced to the Docker workdir and the dependencies folder, i.e. node_modules, skipped.
One more thing that would be somewhat helpful if possible: every time I do docker-compose up or docker-compose run it prints ready. Can I have something where I can override the command in docker-compose with a command passed from the CLI?
Something like docker-compose run | {some command}.
You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
docker run \
-v $(pwd)/src:/src \
-v "/src/independent_modules/some_independent_task/node_modules"
-v "/src/independent_modules/some_other_independent_task/node_modules"
demoapp \
node src/index.js
If you want to use docker-compose, you should change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run ... invocation.
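For the last part of the question (overriding the command from the CLI): docker-compose run also accepts a command after the service name, which replaces the command from the compose file, for example:
docker-compose run --rm demoapp_container node src/index.js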
I'm trying to run a Spark instance using Docker (on Windows) following this explanation: https://github.com/sequenceiq/docker-spark
I was able to:
Pull the image
Build the image
I had to download the GitHub repository with the Dockerfile, though, and specify that in the build command. So instead of docker build --rm -t sequenceiq/spark:1.6.0 . I had to run docker build --rm -t sequenceiq/spark:1.6.0 /path/to/dockerfile.
However when I try to run the following command to run the container:
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0
I get the error:
Error response from daemon: Container command '/etc/bootstrap.sh' not found or does not exist.
I tried copying the bootstrap.sh file from the Github repository to the /etc directory on the VM but that didn't help.
I'm not sure what went wrong, any advice would be more than welcome!
It is probably an issue with the build context because you changed the path to the Dockerfile in your build command.
Instead of changing the path to the Dockerfile in the build command, try cd'ing into that directory first and then running the command. Like so:
cd /path/to/dockerfile
docker build --rm -t sequenceiq/spark:1.6.0 .
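If you do want to keep running the build from a different directory, you can also point docker build at the Dockerfile with -f while still passing that directory as the context, roughly:
docker build --rm -t sequenceiq/spark:1.6.0 -f /path/to/dockerfile/Dockerfile /path/to/dockerfile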
Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows down my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start the container without the -v option.
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount - I fixed it by relying on Node's module resolution, which walks up parent directories looking for node_modules (so they are installed in /usr/src rather than the mounted /usr/src/app).
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist the node_modules
# even after we use the volume (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
#Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also change the instructions so that Docker looks in the directory specified by the build context argument of docker build, finds the package.json file, copies just that into the container's current working directory, runs npm install, and only afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY . .
# Setup default command
CMD ["npm", "start"]
You can make as many changes as you want to your source files and it will not invalidate the cache for any of these steps.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
So we can test this by running docker build -t <tagname>/<project-name> . again.
Now I have made a change to the Dockerfile, so you will see some steps re-run and eventually our successfully tagged and built image.
Docker detected the change to the step and every step after it, but not the npm install step.
The lesson here is that yes, the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum.
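In the same spirit, a .dockerignore file (just a sketch, not something from the answer above) keeps local clutter such as node_modules out of the build context, so COPY . . copies the bare minimum and the cache isn't busted by files the image doesn't need:
# .dockerignore
node_modules
npm-debug.log
.git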