I have a very simple website built with ExpressJS. I run the Docker Quickstart Terminal and go to the working directory.
This is the result of the ls command:
app.js bin/ node_modules/ package.json public/ routes/ views/
When I issue the command below, I get a "No such file or directory" error:
docker run -p 8080:3000 -v $(pwd):/var/www -w "/var/www" node npm start
I am using Windows 8.1 Pro 64-bit.
What am I missing here?
Make sure you have check-marked your Windows drives to make them accessible to the Docker Engine, by going to Docker settings => Shared Drives.
Also, use an absolute path instead of $(pwd), e.g. d:\express:/var/www. I ran into the same issue a couple of weeks ago and resolved it with this approach.
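For example, a sketch of the corrected command, assuming the project lives at d:\express (with the Docker Quickstart Terminal, MinGW-style paths such as //d/express tend to work better than d:\express, and the drive must actually be shared):

docker run -p 8080:3000 -v //d/express:/var/www -w /var/www node npm start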
I'm trying to run tacotron2 on Docker within Ubuntu WSL2 (20.04) on a Win10 2004 build. Docker is installed and running, and I can run hello-world successfully.
(There's a nearly identical question here, but nobody has answered it.)
When I try to run docker build -t tacotron-2_image docker/ I get the error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/nate/docker/Dockerfile: no such file or directory
So then I navigated in bash to where Docker is installed (/var/lib/docker) and tried to run it there, and got the same error. I created a docker directory in both places, but kept getting that error in all cases.
How can I get this to work?
As mentioned here, the error might have nothing to do with "symlinks", and everything to do with the lack of a Dockerfile, which should be in the Tacotron-2/docker folder.
docker build does mention:
The docker build command builds Docker images from a Dockerfile and a “context”.
A build’s context is the set of files located in the specified PATH or URL.
In your case, docker build -t tacotron-2_image docker/ is supposed to be executed in the path where you have cloned the Rayhane-mamah/Tacotron-2 repository.
To be sure, you could specify said Dockerfile, but that should not be needed:
docker build -t tacotron-2_image -f docker/Dockerfile docker/
Or:
cd
# clone the repository to get the sources, including docker/Dockerfile
git clone https://github.com/Rayhane-mamah/Tacotron-2
cd Tacotron-2/docker
# build from inside the folder that contains the Dockerfile
docker build -t tacotron-2_image .
I thought these commands I'm executing were for the purpose of installing it.
To build the image, you need the sources (the repository to clone).
If your Dockerfile is named with a different casing (e.g. DockerFile, with a capital F in the middle), rename it to Dockerfile.
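For instance, a quick way to check and fix the name (assuming the file sits in the docker/ directory used above):

ls docker/                                # see what the file is actually called
mv docker/DockerFile docker/Dockerfile    # rename if the casing is off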
For others like me who somehow couldn't get it to work because of a symlink:
just copy your files out to a new directory that isn't symlinked and build your image from there, as in the sketch below,
but only after you've confirmed that your Dockerfile isn't named dockerfile, .Dockerfile, DockerFile, or dockerfile.txt.
My OS is elementary OS, which is based on Ubuntu.
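A sketch of that workaround, with hypothetical paths (adjust to wherever your sources actually live):

mkdir ~/tacotron-build                         # fresh directory with no symlinks
cp -rL ~/Tacotron-2/docker/. ~/tacotron-build  # copy the build context, dereferencing symlinks
cd ~/tacotron-build
docker build -t tacotron-2_image .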
I wrote a Dockerfile for a Node application. This is the Dockerfile:
FROM node:10.15.0
COPY frontend/ frontend/
WORKDIR frontend/
RUN npm install
RUN npm start
When I try to build this Dockerfile, I get this error: ERROR in ./app/main.js Module not found: Error: Can't resolve './ResetPwd' in '/frontend/app'
So I added RUN ls and RUN ls /app to the Dockerfile. Both of the files are there! I'm not familiar with NodeJS and its build process at all. Can anybody help me with this?
Point: I'm not sure if it helps or not, but I'm using Webpack too.
The problem was that our front-end developer assumed that Node imports are case-insensitive, because he was working on Windows. I tried to build the Dockerfile on a Mac, and that's why it couldn't find the modules. The module name was resetPass!
This question saved me!
Hope this helps somebody else.
I have an Angular app and I was trying to containerize it using Docker.
I built the app on a Windows machine, and I was trying to build it inside a Linux container.
The app was building fine on my Windows machine, but failing with the following error in the Docker environment:
ERROR in folder1/folder2/name.component.ts: - error TS2307: Cannot find module '../../../folder1/File.name'.
import { Interface1} from '../../../folder1/File.name';
Cannot find module '../../../node_modules/rxjs/Observable.d.ts'.
import { Observable } from 'rxjs/observable';
It was driving me nuts.
I saw this question and at first did not think that it was what was going on. The next day I decided to build the same app in a Linux environment just to make sure. I used WSL 2 and boom:
the real problem!
ERROR in error TS1149: File name '/../../node_modules/rxjs/observable.d.ts' differs from already included file name '/../../node_modules/rxjs/Observable.d.ts' only in casing.
6 import { Observable } from 'rxjs/observable';
So it was a casing issue. I corrected the casing and it builds fine!
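If you want to hunt down such mismatches before the Linux build fails, a rough sketch (the src/ path is an assumption):

grep -rn "from 'rxjs/" src/ | sort           # list the casings your imports actually use
ls node_modules/rxjs/ | grep -i observable   # compare with the real file names on disk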
I can't say if this will work for sure, since I don't know whether npm start actually triggers webpack; but if it doesn't, you'll have to add an extra RUN line after the COPY frontend/ ./ line.
There are a few issues here; try using this Dockerfile instead:
FROM node:10.15.0

# Copy the dependency manifests and install packages first, so this layer is cached
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install

# Copy the source down and other stuff
COPY frontend/ ./

# Command that executes when the container starts up
CMD ["npm", "start"]
Make sure that you also update your .dockerignore to include node_modules. You'll then have to build and run the container with the following commands:
docker build -t frontendApp .
docker run -p 8080:8080 frontendApp
The -p 8080:8080 option exposes an internal port to the outside world so you can view the app in a browser; just change it to whatever port webpack is using to serve your stuff.
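For the .dockerignore mentioned above, a minimal version might look like this:

node_modules
npm-debug.log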
I had to rebuild the disruptive package, like in this issue for node-sass
The command would be npm rebuild <package-name>
For me, this was npm rebuild node-sass
I have created a Docker image which has an executable Node.js app.
I have multiple modules which are independent of each other. These modules are created as packages inside Docker using the npm link command, and hence can be required in my Node.js index file.
The directory structure is as follows:

|- node_modules
|- src
   |- app
      |- index.js
   |- independent_modules
      |- some_independent_task
      |- some_other_independent_task
While building the image, I created an npm link for every independent module in the root node_modules. This creates a node_modules folder inside every independent module, which is not present locally; it is only created inside the container.
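For reference, a sketch of what that npm link step could look like in the Dockerfile (the paths and package name are assumptions based on the tree above, not the asker's verbatim file):

# register the module globally, then link it into the root node_modules
WORKDIR /src/independent_modules/some_independent_task
RUN npm link
WORKDIR /
RUN npm link some_independent_task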
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
docker build -t demoapp .
To run index.js in the dev environment, I need to mount the local src directory onto the Docker src directory to reflect changes without rebuilding the image.
For mounting and running I use the command
docker run -v $(pwd)/src:/src demoapp node src/index.js
The problem here is that locally no dependencies are installed, i.e. no node_modules folder is present. Hence, while mounting the local directory into Docker, it replaces the container's directory with one that has no modules, and the dependencies installed inside Docker in node_modules vanish.
I tried using .dockerignore to keep the node_modules folder from being mounted, but it didn't work. Keeping an empty node_modules locally doesn't work either.
I also tried using docker-compose to keep volumes synced and hide node_modules from the sync, but I think this only syncs while the container keeps running, i.e. when Docker is running some server process.
This is the docker-compose.yml I used
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    container_name: demoapp_container
    command: echo 'ready'
    environment:
      - NODE_ENV=development
I read here that with this setup, node_modules will be skipped from syncing.
But this also doesn't work for me.
I need to execute this index.js each time in a container that does not keep running, with the local code synced to the Docker workdir and the dependencies folder, i.e. node_modules, skipped from the sync.
One more thing that would be somewhat helpful: every time I do docker-compose up or docker-compose run, it prints ready. Can I have something where I can override the command in docker-compose with a command passed from the CLI?
Something like docker-compose run | {some command}.
You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
docker run \
  -v $(pwd)/src:/src \
  -v /src/independent_modules/some_independent_task/node_modules \
  -v /src/independent_modules/some_other_independent_task/node_modules \
  demoapp \
  node src/index.js
If you want to use docker-compose, you should change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run ... invocation.
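As for overriding the command from the CLI (the last part of the question), docker-compose run already accepts a command after the service name, e.g.:

docker-compose run --rm demoapp_container node src/index.js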
Update 2: I have created a sample project on GitHub to reproduce this issue. Upon further testing, the test case is slightly different than what I've described in my original post.
I am including the contents of the README I wrote on the github repo below.
Use Case
One simple nodejs project with a Dockerfile.
One local NPM dependency used by the above project (copied to container via Dockerfile). The project refers to the dependency via a local path.
The nodejs project has one web route (/) that prints the version of the local npm dependency from its package.json. This is used to verify the results of the test case procedure.
docker-compose uses this volume technique to overlay the host machine's source tree on top of the container's source tree, and then to overlay the node_modules from the container on top of the first volume, as sketched below.
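The relevant volume declarations look roughly like this (a sketch; the container path is inferred from the exec output further below):

volumes:
  - .:/home/app/service1
  - /home/app/service1/node_modules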
Steps to Reproduce
1. Clone this repo.
2. Clean up any previous containers and images related to this repo's project via docker rm and docker rmi.
3. Check out the test2_run1 tag. This state represents the project using version 1.0.0 of the local NPM dependency.
4. Do a docker-compose build. All steps should run without any cache usage if step 2 was followed correctly.
5. Note the version of the local NPM dependency during the npm install command, e.g. +-- my-npm@1.0.0.
6. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.0. Stop the running containers (Ctrl-C on the terminal from which the up command was issued).
7. Check out the test2_run2 tag. This introduces a small change to the NPM's index.js file, and a version bump in its package.json to 1.0.1.
8. Do a docker-compose build. Only the instructions up to COPY ./my-npm ... should use a cache. (E.g., the docker output prints ---> Using cache for that instruction.) All subsequent steps should be run by docker. This is because the changes introduced in step 7 to the NPM package should have invalidated the cache for the COPY ./my-npm ... command and, as a result, for subsequent steps too. Confirm that during the npm install command, the new version of the NPM is printed in the summary tree output, e.g. +-- my-npm@1.0.1.
9. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.1.
Expected behavior: The page in step 9 should report 1.0.1. That is, a change to the local NPM should be reflected in the container via docker-compose up.
Actual behavior: The page in step 9 reports 1.0.0.
Note that docker itself is re-building images as expected. The observed issue is not that docker is re-using a cached image, as the output shows it re-running npm install and showing the new version of the local NPM dependency. The issue is that docker-compose is not seeing that the underlying images that comprise the dctest_service1 container have been updated.
In fact, running bash in the container allows us to see that the container has the updated my-npm module files, but the node_modules version is stale:
# docker exec -it dctest_service1_1 bash
app@6bf2671b75c6:~/service1$ grep version my-npm/package.json node_modules/my-npm/package.json
my-npm/package.json: "version": "1.0.1",
node_modules/my-npm/package.json: "version": "1.0.0"
app@6bf2671b75c6:~/service1$
Workaround: Use docker rm to remove the dctest_service1 container. Then re-run docker-compose up, which will re-create the container using the existing images. Notably, no underlying images are re-built in this step. In re-creating the container, docker-compose seems to figure out that it should use the newer volume that has the updated node_modules.
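Concretely, using the container name from the exec output above:

docker rm dctest_service1_1
docker-compose up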
See the output directory for the output printed during the first run (steps 4 and 5) and the second run (steps 8 and 9).
Original Post
I've got a nodejs Dockerfile based on this tutorial ("Lessons from Building a Node App in Docker"). Specifically, note that this tutorial uses a volume trick to mount the node_modules directory from the container itself to overlay on top of the equivalent one from the host machine. E.g.:
volumes:
  - .:/home/app/my-app
  - /home/app/my-app/node_modules
I am running into a problem where an update to package.json triggers an npm install as expected (as opposed to using the docker cache), but when starting the service with docker-compose up, the resulting container somehow ends up with an older version of the node_modules data: the newly added NPM package is missing from the directory. Yet, if I run the specified CMD by hand via docker-compose run --rm, then I do see the updated volume!
I can confirm this a few ways:
node_modules Timestamp
Container started via "up":
app@88614c5599b6:~/my-app$ ls -l
...
drwxr-xr-x 743 app app 28672 Dec 12 16:41 node_modules
Container started via "run":
app@bdcbfb37b4ba:~/my-app$ ls -l
...
drwxr-xr-x 737 app app 28672 Jan 9 02:25 node_modules
Different docker inspect "Mount" entry Id
Container started via "up":
"Name": "180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a",
"Source": "/var/lib/docker/volumes/180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a/_data",
"Destination": "/home/app/my-app/node_modules",
Container started via "run":
"Name": "8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720",
"Source": "/var/lib/docker/volumes/8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720/_data",
"Destination": "/home/app/my-app/node_modules",
HostConfig -> Binds
I am unsure if this is related, but I also noticed (again in docker inspect) that the Binds section under HostConfig differs between the two cases:
Container started via "up":
"Binds": [
"180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a:/home/app/my-app/node_modules:rw",
"/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
Container started via "run":
"Binds": [
"/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
(Both show the host source mounted into the container, but only the "up" case shows the secondary overlay volume with node_modules, which seems like another odd wrinkle.)
Theory
Per the docker-compose CLI reference:
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers.
Thus, it appears docker-compose up does not think the configuration or image was changed. I am just not sure how to debug this to confirm. Of course, I could use --force-recreate to work around this, but I wish to figure out what about my configuration is incorrect and causing the problem.
Update: If I do an explicit docker-compose build prior to docker-compose up, the problem still persists. Thus I am feeling less confident about this theory at the moment.
Here is the entire Dockerfile:
FROM node:6.9.1
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
ENV APP=my-app
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm install --global gulp-cli
COPY ./package.json $HOME/$APP/package.json
RUN chown -R app:app $HOME/*
USER app
WORKDIR $HOME/$APP
RUN npm install && npm cache clean
USER root
COPY . $HOME/$APP
RUN chown -R app:app $HOME/* && chmod u+x ./node_modules/.bin/* ./bin/*
USER app
ENTRYPOINT ["/home/app/my-app/bin/entrypoint.sh"]
CMD ["npm", "run", "start:dev"]
The docker-compose command uses the docker-compose.yml file as the configuration file of your containerized service. By default, it looks for this .yml config file in the directory where you run docker-compose.
So it could be that your docker-compose.yml file is not up to date, seeing that one of your volumes is not mounted when you run docker-compose up.
docker run ignores the docker-compose.yml file and just creates the container from the Docker image. So there could be configuration differences between what the Dockerfile bakes into the image and what the docker-compose.yml file sets up.
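One quick way to check which configuration docker-compose actually sees (my suggestion, not part of the original answer) is to print the resolved config:

docker-compose config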
I'm a newbie with Docker and I'm trying to start with NodeJS, so here is my question...
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container, it works fine.
But when I try to map my host project onto the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont), it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions, but nothing seems to work for me (or, I know, I'm just too much of a rookie at this).
Thank you !!
When you run your container with the -v flag, which means mounting a directory from your Docker engine's host into the container, the mount will hide what you did in /home/Documents/node-app during the build, such as npm install.
That is why you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This is about mounting a host directory as a data volume. As the docs say, the pre-existing content of the host directory will not be removed, but they give no information about what happens to the existing content of the target directory inside the container.
Here is an example to support my point.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I created a test.t file in the same directory as the Dockerfile.
Proof:
Run the command docker build -t test-1 .
Run the command docker run --name test-c-1 -it test-1 /bin/sh; your container will open a shell.
Run the command ls -l in your container's shell; it will show the test.t file.
Now use the same image.
Run the command docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You cannot find the file test.t in your test-c-2 container.
That's all. I hope it will help you.
I recently faced a similar issue.
Upon digging into the Docker docs, I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) is mounted onto the target directory in the container (/home/Documents/node-app),
and since your target directory is the working directory and therefore non-empty,
"the directory's existing contents are obscured by the bind mount."
I faced a similar problem recently. It turned out the problem was my package-lock.json: it was outdated relative to package.json, and that caused my packages not to be downloaded while running npm install.
I just deleted it and the build went fine.
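In other words, something like this on the host before rebuilding (the npm install step to regenerate the lockfile is my addition; deleting alone also worked, as described):

rm package-lock.json   # drop the stale lockfile
npm install            # regenerate it to match package.json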