Syncing local code inside docker container without having container service running - node.js

I have created a Docker image which has an executable Node.js app.
I have multiple modules which are independent of each other. These modules are created as packages inside the container using the npm link command, so they can be required in my Node.js index file.
The directory structure is as follows:
|- node_modules
|- src
   |- app
      |- index.js
   |- independent_modules
      |- some_independent_task
      |- some_other_independent_task
While building the image I create an npm link for every independent module in the root node_modules. This creates a node_modules folder inside every independent module, which is not present locally; it is only created inside the container.
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
docker build -t demoapp .
To run index.js in the dev environment, I need to mount the local src directory onto the container's src directory so that changes are reflected without rebuilding the image.
For mounting and running I use the command
docker run -v $(pwd)/src:/src demoapp node src/index.js
The problem here is that locally no dependencies are installed, i.e. no node_modules folder is present. So when the local directory is mounted into the container, it replaces the container's src directory, and the dependencies installed in the container's node_modules folders vanish.
I tried using .dockerignore to keep the node_modules folder from being mounted, but it didn't work. Keeping an empty node_modules folder locally doesn't work either.
I also tried using docker-compose to keep the volumes synced while hiding node_modules from the mount, but I think this only syncs while the container keeps running, e.g. when it is running a server.
This is the docker-compose.yml I used
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    container_name: demoapp_container
    command: echo 'ready'
    environment:
      - NODE_ENV=development
I read here that this should exclude node_modules from syncing,
but that doesn't work for me either.
I need to execute index.js each time inside a container that does not stay running, with the local code synced to the container's workdir while skipping the dependency folders, i.e. node_modules.
One more thing, if possible, would be somewhat helpful. Every time I do docker-compose up or docker-compose run it prints ready. Can I override the command defined in docker-compose with a command passed from the CLI?
Something like docker-compose run | {some command}.

You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
docker run \
  -v $(pwd)/src:/src \
  -v /src/independent_modules/some_independent_task/node_modules \
  -v /src/independent_modules/some_other_independent_task/node_modules \
  demoapp \
  node src/index.js
If you want to use docker-compose, you should change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run ... invocation.
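For example, a minimal sketch of that change, reusing the compose file from the question (the anonymous node_modules volumes are kept so the dependencies baked into the image survive the bind mount):

# docker-compose.yml (sketch)
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    command: node src/index.js
    environment:
      - NODE_ENV=development

As for overriding the command from the CLI: docker-compose run already supports that, e.g. docker-compose run --rm demoapp_container node src/index.js runs a one-off container with whatever command you pass.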

Related

Unable to deploy React Application to Kubernetes

I am trying to deploy an application created using Create React App to Kubernetes through Docker.
When the Jenkins pipeline uses the Dockerfile to create the container, it fails with the error below:
"Starting the development server...
Failed to compile.
./src/index.js
Module not found: Can't resolve './App.js' in '/app/src'
The folder structure is exactly the same as the default create-react-app folder structure.
Also below is the Dockerfile:
FROM node:10.6.0-jessie
# set working directory
RUN mkdir /app
WORKDIR /app
COPY . .
# add `/usr/src/app/node_modules/.bin` to $PATH
#ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN npm install
#RUN npm install react-scripts -g --silent
# start app
CMD ["npm", "start"]
I am unable to understand where I might be going wrong.
Edit 1: I would also like to mention that I am able to run the docker container on my local machine using this config.
So any help would be appreciated.
Update 1 :
I was able to do a kubectl exec -it pod_name -- bash into the container inside the pod. I found out that, for some reason, the "App.js" file was getting copied into the container as "app.js". Since Linux is case-sensitive, it was not able to find the file. Changing the import statement in index.js fixed the problem. But I still have no idea what might have caused the file to be copied with a lower-case name, since locally the file exists as "App.js".
The problem you're having will go away once you adjust your deployment process to a more production-ready setup.
What you're doing currently is installing all (development) dependencies on every Kubernetes node, compiling your application, and then starting a development webserver. This makes your deployed builds inconsistent and increases load and bloat on the deployment nodes.
Instead what you want to do is create a production-ready build by running npm run build on a build machine, which will compile your application and output to the build folder in your project. You then want to transfer this folder to your server in a .zip file, which will need a production-ready webserver installed (Nginx is highly recommended and industry standard) to serve the static files from your build.
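If you'd prefer to keep the build inside Docker rather than on a separate build machine, a multi-stage Dockerfile can achieve the same thing: one stage compiles the app, a second stage serves the build folder with Nginx. A minimal sketch, assuming the standard create-react-app layout:

# build stage: install dependencies and compile the static assets
FROM node:10.6.0-jessie AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# serve stage: Nginx serves the compiled files from the build folder
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Either way, the Kubernetes pods end up running a small, deterministic image instead of the development server.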

How to dynamically change content in node project run through docker

I have an AngularJS application which I'm running using Docker.
The Dockerfile looks like this:
FROM node:6.2.2
RUN npm install --global gulp-cli && \
npm install --global bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
COPY bower.json /usr/src/app/
RUN npm install && \
bower install -F --allow-root --config.interactive=false
COPY . /usr/src/app
ENV GULP_COMMAND serve:dist
ENTRYPOINT ["sh", "-c"]
CMD ["gulp $GULP_COMMAND"]
Now when I make any change, say in an HTML file, it doesn't dynamically load up on the web page. I have to stop the container, remove it, build the image again, remove the earlier image, and then restart the container from the new image. Do I have to do this every time? (I'm new to Docker, and I guess this issue is because my source code is not put into a volume, but I don't know how to do that using the Dockerfile.)
You are correct, you should use volumes for stuff like this. During development, give it the same volumes as the COPY directories. It'll override it with whatever is on your machine, no need to rebuild the image, or even restart the container. Perfect for development.
When actually baking your images for production, you remove the volumes, leave the COPY in, and you'll get a deterministic container. I would recommend you read through this article here: https://docs.docker.com/storage/volumes/.
In general, there are 3 ways to do volumes.
Define them in your dockerfile using VOLUME.
Personally, I've never done this. I don't really see the benefits of this against the other two methods. I believe it would be more common to do this when your volume is meant to act as a permanent data-store. Not so much when you're just trying to use your live dev environment.
Define them when calling docker run.
docker run ... -v $(pwd)/src:/usr/src/app ...
This is great, because if the COPY in your Dockerfile is COPY ./src /usr/src/app, it temporarily overrides that directory while running the image, but the copied content is still there for deployment when you don't use -v.
Use docker-compose.
My personal recommendation. Docker Compose massively simplifies running containers. For the sake of simplicity, think of it as calling docker run ... for you, automating the arguments from a given docker-compose.yml config.
Create a dev service specifying the volumes you want to mount, other containers you want it linked to, etc. Then bring it up using docker-compose up ... or docker-compose run ... depending on what you need (see the sketch below).
Smart use of volumes will DRAMATICALLY reduce your development cycle. Would really recommend looking into them.
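As a concrete illustration for the Dockerfile in the question, a minimal docker-compose.yml sketch for development might look like this (the service name and the bower_components path are assumptions):

# docker-compose.yml (dev sketch)
version: "2"
services:
  dev:
    build: .
    volumes:
      # host source overlays the code that was COPY'd into the image
      - .:/usr/src/app
      # anonymous volumes keep the dependencies installed during the image build
      - /usr/src/app/node_modules
      - /usr/src/app/bower_components

With that, docker-compose up dev (or docker-compose run dev) picks up local file changes without a rebuild, while gulp still finds the dependencies baked into the image.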
Yes, you need to rebuild every time the files change, since you only modify the files that are outside of the container. In order to apply the changes to the files IN the container, you need to rebuild the container.
Depending on the use case, you could either make the Docker Container dynamically load the files from another repository, or you could mount an external volume to use in the container, but there are some pitfalls associated with either solution.
If you want to keep your container running as you add files, you could also use a variation (put together in the sketch after these steps):
Mount a volume to any other location e.g. /usr/src/staging.
While the container is running, if you need to copy new files into the container, copy them into the location of the mounted volume.
Run docker exec -it <container-name> bash to open a bash shell inside the running container.
Run a cp /usr/src/staging/* /usr/src/app command to copy all new files into the target folder.
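Put together, a minimal shell sketch of that flow (the container name myapp and image name myimage are hypothetical):

# start the container with a staging directory mounted from the host
docker run -d --name myapp -v "$(pwd)/staging:/usr/src/staging" myimage
# ...edit or add files in ./staging on the host, then sync them into the app folder
docker exec -it myapp bash -c 'cp -r /usr/src/staging/* /usr/src/app/'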

docker-compose "up" vs. "run" yields different mounted volume

Update 2: I have created a sample project on GitHub to reproduce this issue. Upon further testing, the test case is slightly different from what I've described in my original post.
I am including the contents of the README I wrote on the github repo below.
Use Case
One simple nodejs project with a Dockerfile.
One local NPM dependency used by the above project (copied to container via Dockerfile). The project refers to the dependency via a local path.
The nodejs project has one web route (/) that prints the version of the local npm dependency from its package.json. This is used to verify the results of the test case procedure.
docker-compose uses this volume technique to overlay the host machine's source tree on top of the container's source tree, and then overlays the node_modules from the container on top of the first volume.
Steps to Reproduce
1. Clone this repo.
2. Clean up any previous containers and images related to this repo's project via docker rm and docker rmi.
3. Check out the test2_run1 tag. This state represents the project using version 1.0.0 of the local NPM dependency.
4. Do a docker-compose build. All steps should run without any cache usage if step 2 was followed correctly. Note the version of the local NPM dependency during the npm install command, e.g. +-- my-npm@1.0.0.
5. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.0.
6. Stop the running containers. (Ctrl-C on the terminal from which the up command was issued.)
7. Check out the test2_run2 tag. This introduces a small change to the NPM's index.js file, and a version bump in its package.json to 1.0.1.
8. Do a docker-compose build. Only the instructions up to COPY ./my-npm ... should use a cache. (E.g., the docker output prints ---> Using cache for that instruction.) All subsequent steps should be run by docker. This is because the changes introduced in step 7 to the NPM package should have invalidated the cache for the COPY ./my-npm ... command and, as a result, for subsequent steps too. Confirm that during the npm install command, the new version of the NPM is printed in the summary tree output, e.g. +-- my-npm@1.0.1.
9. Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.1.
Expected behavior: The page in step 9 should report 1.0.1. That is, a change in the local NPM should be reflected in the container via docker-compose up.
Actual behavior: The page in step 9 reports 1.0.0.
Note that docker itself is re-building images as expected. The observed issue is not that docker is re-using a cached image, as the output
shows it re-running NPM install and showing the new version of the local NPM dependency. The issue is that docker-compose is not seeing
that the underlying images that comprise the dctest_service1 container have been updated.
In fact, running bash in the container allows us to see that the container has the updated my-npm module files, but the node_modules
version is stale:
# docker exec -it dctest_service1_1 bash
app@6bf2671b75c6:~/service1$ grep version my-npm/package.json node_modules/my-npm/package.json
my-npm/package.json: "version": "1.0.1",
node_modules/my-npm/package.json: "version": "1.0.0"
app@6bf2671b75c6:~/service1$
Workaround: Use docker rm to remove the dctest_service1 container. Then re-run docker-compose up, which will re-create the container using the existing images. Notably, no underlying images are re-built in this step. In re-creating the container, docker-compose seems to figure out that it should use the newer volume that has the updated node_modules.
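In shell form, the workaround amounts to (using the container name from the exec example above):

docker rm dctest_service1_1
docker-compose up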
See the output directory for the output printed during the first run (steps 4 and 5) and the second run (steps 8 and 9).
Original Post
I've got a nodejs Dockerfile based on this tutorial ("Lessons from Building a Node App in Docker"). Specifically, note that this tutorial uses a volume trick to mount the node_modules directory from the container itself to overlay on top of the equivalent one from the host machine. E.g.:
volumes:
  - .:/home/app/my-app
  - /home/app/my-app/node_modules
I am running into a problem where an update to package.json triggers an npm install as expected (as opposed to using the Docker cache), but when starting the service with docker-compose up, the resulting container somehow ends up with an older version of the node_modules data: the newly added NPM package is missing from the directory. Yet, if I run the specified CMD by hand via docker-compose run --rm, then I do see the updated volume!
I can confirm this a few ways:
node_modules Timestamp
Container started via "up":
app@88614c5599b6:~/my-app$ ls -l
...
drwxr-xr-x 743 app app 28672 Dec 12 16:41 node_modules
Container started via "run":
app@bdcbfb37b4ba:~/my-app$ ls -l
...
drwxr-xr-x 737 app app 28672 Jan 9 02:25 node_modules
Different docker inspect "Mount" entry Id
Container started via "up":
"Name": "180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a",
"Source": "/var/lib/docker/volumes/180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a/_data",
"Destination": "/home/app/my-app/node_modules",
Container started via "run":
"Name": "8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720",
"Source": "/var/lib/docker/volumes/8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720/_data",
"Destination": "/home/app/my-app/node_modules",
HostConfig -> Binds
I am unsure if this is related, but I also did notice (also in docker inspect) that the Binds section under HostConfig differs between both cases:
Container started via "up":
"Binds": [
"180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a:/home/app/my-app/node_modules:rw",
"/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
Container started via "run":
"Binds": [
"/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
(Both show the host source mounted on the image, but only the "up" shows the secondary overlay volume with node_modules, which seems like another odd wrinkle.)
Theory
Per the docker-compose CLI reference:
If there are existing containers for a service, and the service’s
configuration or image was changed after the container’s creation,
docker-compose up picks up the changes by stopping and recreating the
containers
Thus, it appears docker-compose up does not think the configuration or image has changed. I am just not sure how to debug this to confirm. Of course, I could use --force-recreate to work around this, but I want to figure out what about my configuration is incorrect and causing the problem.
Update: If I do an explicit docker-compose build prior to docker-compose up, the problem still persists. Thus I am feeling less confident about this theory at the moment.
Here is the entire Dockerfile:
FROM node:6.9.1
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
ENV APP=my-app
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm install --global gulp-cli
COPY ./package.json $HOME/$APP/package.json
RUN chown -R app:app $HOME/*
USER app
WORKDIR $HOME/$APP
RUN npm install && npm cache clean
USER root
COPY . $HOME/$APP
RUN chown -R app:app $HOME/* && chmod u+x ./node_modules/.bin/* ./bin/*
USER app
ENTRYPOINT ["/home/app/my-app/bin/entrypoint.sh"]
CMD ["npm", "run", "start:dev"]
The docker-compose command uses the docker-compose.yml file as the configuration file for your containerized service. By default it looks for this .yml config file in the directory from which you run docker-compose.
So it could be that your docker-compose.yml file is not up to date, given that one of your volumes is not mounted when you run docker-compose up.
docker run ignores the docker-compose.yml file and just creates the container from the Docker image. So there could be a configuration difference between the Dockerfile and the docker-compose.yml file.

How to restart Node on a docker container without restarting the whole container?

I have a container ==> FROM node:5
Node should restart after each change in the code.
But there is no way for me to restart the Node server without restarting the whole Docker container.
I have many npm install steps in my Dockerfile that run each time I restart the container, and it's annoying to wait for all of them after each change in the code.
I'm already using shared folder to have the latest code in my container.
If you just need the Node.js server to restart after changes, instead of starting the server with the node command you can use a tool like node-dev or nodemon. To leverage these, you'd need to update the command in your Dockerfile, e.g. CMD [ "node-dev", "server.js" ].
Since you already have your local files as a volume in the Docker container, any change to those files will restart the server.
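A minimal Dockerfile sketch of that approach; the node:5 base matches the question, while the server.js entry point and pulling in nodemon are assumptions:

FROM node:5
WORKDIR /usr/src/app
# install dependencies once at build time (nodemon added for auto-restart)
COPY package.json .
RUN npm install && npm install --save-dev nodemon
COPY . .
# nodemon watches the mounted source and restarts node on changes
CMD ["./node_modules/.bin/nodemon", "server.js"]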
Here's how I did it. In your docker-compose.yml, in the entry for the appropriate service, add an entrypoint to run npx nodemon /path/to/your/app
This will work even if you don't have nodemon installed in the image.
e.g.
services:
  web:
    image: foo
    entrypoint: ["npx", "nodemon", "/usr/src/app/app.js"]
I think that's not the optimal way to use Docker. You should build your own Docker image which includes your code changes. In your own image you can make the npm install step part of the image build, so you do not need to run this command on container startup.
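A minimal sketch of baking the install into the image build; the server.js entry point is an assumption:

FROM node:5
WORKDIR /usr/src/app
# copying only the manifest first keeps `npm install` cached until package.json changes
COPY package.json .
RUN npm install
# then copy the rest of the source
COPY . .
CMD ["node", "server.js"]

With this layering, editing application code only re-runs the cheap COPY step on rebuild; npm install re-runs only when package.json changes.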

Docker container files overwritten by host volume share

I am building an application in python which has javascript files. I want to use browserify, so I want to install some node modules which I can use in my js files with require calls. I want these node modules only in my container and not host machine. Here is my Docker setup of node specific container.
### Dockerfile
FROM node:5.9.1
RUN npm install -g browserify
RUN mkdir /js_scripts
ADD package.json /js_scripts/
WORKDIR /js_scripts
RUN npm install # This installs required packages from package.json
RUN ls # lists the node_modules directory indicating successful install.
Now I want to share the js files from my host machine with this container, so that I can run the browserify main.js -o bundle.js command in the container. Here is my docker-compose.yml, which mounts host_js_files at my js_scripts directory.
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts
Now when I run a container with docker-compose run node bash and ls the js_scripts directory, I only see my js files from the host volume; the node_modules directory is not visible. This makes sense, based on how volumes are set up in Docker.
However, I want to have these node_modules in the container so browserify (which looks for them) runs successfully. Is there a good way to do this without globally installing the node modules in the container or having to install them on the host machine?
Thanks for your input.
Containers should be stateless. If you destroy the container, all data inside it is destroyed. You can mount the node_modules directory as a volume to avoid downloading all dependencies every time you create a new container.
See this example that installs browserify once:
### docker-compose.yml
node:
  image: node:5.9.1
  working_dir: /js_scripts
  command: npm install browserify
  volumes:
    - $PWD/host_js_files:/js_scripts
First, you should run docker-compose up and wait until all packages are installed. After that, you can run the browserify command as:
docker-compose run node /js_scripts/node_modules/.bin/browserify /js_scripts/main.js -o /js_scripts/bundle.js
It's a bad idea to mix your host files with Docker container files by sharing a folder over them. When a container is removed, Docker deletes all of the container's data; Docker has to know which files belong to the container and which to the host (it removes everything inside the container except volumes). You have two options for mixing host and container files together:
1. Put the container's files into a volume after the container has started. (A bad idea: those files will not be removed when the container is removed.)
2. Place your host scripts in a subfolder of /js_scripts, or declare each of your scripts separately (see the sketch below):
-v ./host_js_files/script1.js:/js_scripts/script1.js
-v ./host_js_files/script2.js:/js_scripts/script2.js
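For the sub-folder variant, a minimal compose sketch (the src sub-folder name is an assumption); mounting the host scripts below /js_scripts leaves the node_modules installed in the image visible to browserify:

node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts/src

browserify can then be run against /js_scripts/src/main.js, and Node's module resolution will walk up from there and find /js_scripts/node_modules.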
