Docker + node.js: can't spawn phantomjs (ENOENT)

I'm running a node.js application that uses the html-pdf module, which in turn relies on phantomjs, to generate PDF files from HTML. The app runs within a Docker container.
Dockerfile:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
RUN npm install -g html-pdf --unsafe-perm
VOLUME /mydirectory
ENTRYPOINT ["node"]
Which builds an image just fine.
app.js
const witch = require('witch');
const pdf = require('html-pdf');
const phantomPath = witch('phantomjs-prebuilt', 'phantomjs');
function someFunction() {
pdf.create('some html content', { phantomPath });
}
// ... and then some other stuff that eventually calls someFunction()
I then call docker run <the image name> app.js
When someFunction gets called, the following error message is thrown:
Error: spawn /mydirectory/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs ENOENT
This happens both when deploying the container on a cloud linux server or locally on my machine.
I have tried adding RUN npm install -g phantomjs-prebuilt --unsafe-perm to the Dockerfile, to no avail (this makes docker build fail because the installation of html-pdf cannot validate the installation of phantomjs).
I'm also obviously not a fan of using the --unsafe-perm argument of npm install, so if anybody has a solution that allows bypassing that, it would be fantastic.
Any help is greatly appreciated!

This is what ended up working for me, in case this is helpful to anyone:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-${PHANTOMJS_VERSION}-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
USER node
RUN npm install -g html-pdf
VOLUME /mydirectory
ENTRYPOINT ["node"]

I had a similar problem; the only workaround for me was to download and copy PhantomJS manually. This is an example from my Dockerfile; it should be the last thing before the EXPOSE command. By the way, I use a node:10.15.3 image.
RUN wget -O /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN mkdir /tmp/phantomjs && mkdir -p /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN tar xvjf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/phantomjs
RUN mv /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64/* /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN rm -rf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm -rf /tmp/phantomjs
Don't forget to update your paths. It's only a workaround; I haven't had time to figure out the root cause yet.

I came to this question in March 2021 and had the same issue dockerizing highcharts: it worked on my machine but failed on docker run (same spawn phantomjs error). In the end, the solution was to find a FROM node version that worked. This Dockerfile works using a recent Node image and an almost-latest highcharts npm version (always pin specific npm versions):
FROM node:15.12.0
ENV ACCEPT_HIGHCHARTS_LICENSE YES
# see available versions of highcharts at https://www.npmjs.com/package/highcharts-export-server
RUN npm install highcharts-export-server@2.0.30 -g
EXPOSE 7801
# run the container using: docker run -p 7801:7801 -t CONTAINER_TAG
CMD [ "highcharts-export-server", "--enableServer", "1" ]

Related

html-pdf does not work in Docker to create a PDF

I have a Node.js app that is deployed with Docker and uses the html-pdf library.
When I try it in a local Docker container, it returns this error:
Error: spawn Unknown system error -8
at ChildProcess.spawn (node:internal/child_process:415:11)
at Object.spawn (node:child_process:698:9)
at PDF.PdfExec [as exec] (/srv/node_modules/html-pdf/lib/pdf.js:89:28)
at PDF.PdfToFile [as toFile] (/srv/node_modules/html-pdf/lib/pdf.js:85:8)
at exports.renderPdf (/srv/service/PdfRenderer.js:27:14)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /srv/routes/Report.js:63:16 {
errno: -8,
code: 'Unknown system error -8',
syscall: 'spawn'
}
This is my Dockerfile:
FROM mhart/alpine-node:16.4.2
WORKDIR /srv
ADD . .
RUN npm install
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
mkdir -p /usr/share && \
cd /usr/share \
&& curl -L https://github.com/Overbryd/docker-phantomjs-alpine/releases/download/2.11/phantomjs-alpine-x86_64.tar.bz2 | tar xj \
&& ln -s /usr/share/phantomjs/phantomjs /usr/bin/phantomjs \
&& phantomjs --version
EXPOSE 3000
CMD ["node", "index.js"]
And this is how I render the PDF in Node.js:
const content = await compile(template, context)
pdf.create(content,
{ format: 'Letter',
footer: { contents: footer, height: '20mm' },
header: { content: '', height: '6mm' },
timeout: 540000 })
.toFile(path, (err, response) => {
if (err) {
fs.unlinkSync(path)
return console.log(err);
}
const data = fs.readFileSync(path)
res.setHeader('Content-Type', 'application/pdf')
res.setHeader('Content-Length', fs.statSync(path).size + 200)
res.send(data)
return fs.unlinkSync(path)
});
The call that fails is .toFile(), and I'm not sure why; I need it to return the PDF file.
I've tried installing phantomjs-prebuilt and adding phantomPath, but it still returns the same error.
Is there anything I can do to fix this problem?
FROM node:8-alpine
WORKDIR /srv
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-${PHANTOMJS_VERSION}-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
COPY package.json package-lock.json ./
RUN npm install phantomjs-prebuilt --s
RUN npm install -g html-pdf
RUN npm install
RUN chmod -R a+rwx /srv
RUN apk --update add ttf-ubuntu-font-family fontconfig && rm -rf /var/cache/apk/*
COPY . ./
VOLUME /srv
USER node
EXPOSE 3000
CMD ["node", "index.js"]
This is how I fixed my issue: install phantomjs-prebuilt and html-pdf after copying the code and reinstalling the other packages. Also install the font family back so that the report has text in it; otherwise the text in the PDF will be blank.
This error seems related to phantomjs; it's also documented in their docs:
https://www.npmjs.com/package/phantomjs#installation-fails-with-spawn-enoent
Installation fails with spawn ENOENT
This is NPM's way of telling you that it was not able to start a process. It usually means:
node is not on your PATH, or otherwise not correctly installed.
tar is not on your PATH. This package expects tar on your PATH on Linux-based platforms.
I had the same problem.
You'll need to install PhantomJS manually, since phantomjs-prebuilt won't work on Alpine Linux, as you can see from this.

dumb-init npm install: No such file or directory when running in docker-compose

dockerfile
FROM node:${NODE_VERSION}-buster-slim
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
RUN apt-get update && \
apt-get install -qqy --no-install-recommends \
ca-certificates \
dumb-init \
build-essential && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV HOME=/home/node
WORKDIR $HOME/app
COPY --chown=node:node . .
RUN set -xe && \
chown -R node /usr/local/lib /usr/local/include /usr/local/share /usr/local/bin && \
npm install && npm cache clean --force
EXPOSE 4200
CMD ["node"]
docker-compose
webapp :
container_name : webapp
hostname : webapp
build :
dockerfile : Dockerfile
context : ${PWD}/app
image : webapp:development
command :
- npm install
- npm run start
volumes :
- ${PWD}/webapp:/app
networks :
- backend
ports :
- 4200:4200
restart : on-failure
tty : true
stdin_open : true
env_file :
- variables.env
I can run the image with
docker run webapp bash -c "npm install; npm run start"
but when I run the compose file it says
webapp | [dumb-init] npm install: No such file or directory
I tried changing the docker-compose command to prefix it with "node", but I get the same error, just with node npm install: no such file or directory.
Can someone tell me where things are going wrong?
When you use the list form of command: in the docker-compose.yml file (or the JSON-array form of Dockerfile CMD) you are providing a list of words in a single command, not a list of separate commands. Once this gets combined with the ENTRYPOINT in the Dockerfile, the container command is
/usr/bin/dumb-init -- 'npm install' 'npm run start'
and when there isn't a file named npm install (including the space in the file name) on the PATH, you get that error.
Since you COPY the application code in the Dockerfile and run npm install there, you don't need to repeat this step at application start time. You should be able to delete the volumes: and command: part of the docker-compose.yml file to use what's built in to the image.
If you really need to repeat this command:, do it in exactly the form you specified in the docker run command, without list syntax
command: bash -c 'npm install; npm run start'

Why is my npm dockerfile looping?

I'm containerizing a nodejs app. My Dockerfile looks like this:
FROM node:4-onbuild
ADD ./ /egp
RUN cd /egp \
&& apt-get update \
&& apt-get install -y r-base python-dev python-matplotlib python-pil python-pip \
&& ./init.R \
&& pip install wordcloud \
&& echo "ABOUT TO do NPM" \
&& npm install -g bower gulp \
&& echo "JUST FINISHED ALL INSTALLATION"
EXPOSE 5000
# CMD npm start > app.log
CMD ["npm", "start", ">", "app.log"]
When I DON'T use the Dockerfile, and instead run
docker run -it -p 5000:5000 -v $(pwd):/egp node:4-onbuild /bin/bash
I can then paste the value of the RUN command and it all works perfectly, then execute the npm start command and I'm good to go. However, upon attempting docker build . instead, it seems to run into an endless loop attempting to install npm packages (and never displaying my echo commands), until it crashes with an out-of-memory error. Where have I gone wrong?
EDIT
Here is a minimal version of the EGP folder that exhibits the same behavior: logging in and pasting the whole RUN command works, but docker build does not. It is a .tar.gz file (though the name might download without one of the dots):
http://orys.us/egpbroken
The node:4-onbuild image contains the following Dockerfile
FROM node:4.4.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
The three ONBUILD commands run before your ADD or RUN commands kick off, and the endless loop appears to come from the npm install command that runs. When you launched the container directly, the ONBUILD commands were skipped, since you didn't build a child image. Change your FROM line to:
FROM node:4
and you should have your expected results.

Private node modules on Google App Engine

How do I include my node_modules or specify a npm login / auth token for private npm modules?
It appears that GAE no longer lets the node_modules folder be included at all (see this issue) and there doesn't appear to be a hook to allow npm to login or set a token.
If you include a .npmrc file local to the application you want to deploy, it will get copied into the app source and be used during the npm install. You can have a build step create this file or copy it from your home directory. See this npm article.
The .npmrc file should look like this:
//registry.npmjs.org/:_authToken=<token here>
The Dockerfile I used looks like so:
# Use the base App Engine Docker image, based on debian jessie.
FROM gcr.io/google_appengine/base
# Install updates and dependencies
RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl python build-essential git ca-certificates libkrb5-dev && \
apt-get clean && rm /var/lib/apt/lists/*_*
# Install the latest release of nodejs
RUN mkdir /nodejs && curl https://nodejs.org/dist/v6.2.1/node-v6.2.1-linux-x64.tar.gz | tar xvzf - -C /nodejs --strip-components=1
ENV PATH $PATH:/nodejs/bin
COPY . /app/
WORKDIR /app
# NODE_ENV to production so npm only installs needed dependencies
ENV NODE_ENV production
RUN npm install --unsafe-perm || \
((if [ -f npm-debug.log ]; then \
cat npm-debug.log; \
fi) && false)
# start
CMD ["npm", "start"]

Node.js dev environment in Docker on Windows

I have tried everything I can think of. I have read the docs, blogs and tried following samples on github.
But I can't seem to get it to work.
What I want to do is simple. I want to write my node.js code on my windows 8.1 machine, and I also want to run the code from within a Docker container without having to rebuild the container all the time. So I want to map a directory on my Windows Host to a directory inside the container.
I have created this Dockerfile
FROM node:0.10.38
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
RUN mkdir -p /usr/src/app
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install
EXPOSE 3000
EXPOSE 35729
CMD ["npm", "start"]
I have this simple server.js file
var express = require('express');
var app = express();
var zmq = require('zmq');
app.get('/', function (req, res) {
res.send('ZMQ: ' + zmq.version);
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
And this simple package.json
{
"name": "docker-node-hello-world",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.13.3",
"zmq": "^2.14.0"
}
}
I have installed the latest Docker Toolbox, and can run the Docker Hello World example fine. I try to build my docker image like this when I am in the directory where my Dockerfile is.
docker build -t dockernodetest:dockerfile .
I then try to run it, also from the same location inside the Docker Quickstart Terminal. I use the image ID since the tag doesn't take for some reason:
docker run -v //c/Bitbucket/docker-node-hello-world/:/usr/src/app -p 3000:3000 -i -t 9cfd34e046a5 ls ./usr/src/app
The result of this is an empty directory. I was hoping I could just invoke
docker run -v //c/Bitbucket/docker-node-hello-world/:/usr/src/app -p 3000:3000 -i -t 9cfd34e046a5 npm start
But since the host directory isn't available it fails. I have the feeling that I have misunderstood something very basic. I just don't know what.
First let's start from the Dockerfile
FROM node:0.10.38-onbuild
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
EXPOSE 3000 35729
From line 1 I've used the 0.10.38-onbuild tag because I want to take advantage of onbuild scripts that will run to create the /usr/src/app directory and run npm install
Then server.js and package.json are as you have written them. These are both in the same working directory as the Dockerfile as well.
Next we build the image
docker build -t dockernodetest .
I've omitted the dockerfile tag as it seemed unnecessary; the client will automatically add a latest tag anyway. To see what images you have locally, run docker images.
At this point we should have an image ready to run, but let us first check that the files we wanted to load are there and that npm install created the node_modules directory:
$ docker run dockernodetest ls /usr/src/app
Dockerfile
node_modules
package.json
server.js
We're ready at this point to run our little nodejs app
$ docker run -it -p 8080:3000 dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
Example app listening at http://0.0.0.0:3000
In this instance I've used the -p 8080:3000 flag to map the container's 3000 port to port 8080 on my host machine. Note that I didn't have any other commands at the end, because the -onbuild image I've pulled from has a CMD [ "npm", "start" ], so the default action is to run the start package script.
So to make the development cycle even faster, you want to mount your working directory into the container via the -v option:
$ docker run -it -p 8080:3000 -v "$PWD":/usr/src/app dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
module.js:340
throw err;
^
Error: Cannot find module 'express'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/usr/src/app/server.js:1:77)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
npm ERR! Linux 4.1.7-15.23.amzn1.x86_64
npm ERR! argv "node" "/usr/local/bin/npm" "start"
npm ERR! node v0.10.38
npm ERR! npm v2.11.1
npm ERR! code ELIFECYCLE
npm ERR! docker-node-hello-world@1.0.0 start: `node server.js`
npm ERR! Exit status 8
But what's happened here? Because we mounted the current working directory, it shadowed what was previously in /usr/src/app, including our node_modules directory.
So the quick fix here is to run npm install in our current working directory and rerun the docker run command:
$ npm install
docker-node-hello-world@1.0.0 /home/ec2-user/dockernode
└─┬ express@4.13.3
├─┬ accepts@1.2.13
│ ├─┬ mime-types@2.1.7
...
$ docker run -it -p 8080:3000 -v "$PWD":/usr/src/app dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
Example app listening at http://0.0.0.0:3000
Now if you make an update to your server.js file, just hit Ctrl+C and restart your docker image, though you might want to use something like nodemon to make this even more seamless.
Thank you so much for your answer. It definitely led me down the right path. It took me longer than I had thought, but I got it working.
A few things didn't work at first; here is a rundown of what I went through.
I wasn't able to use the onbuild scripts as you suggested, because when I base my image on the onbuild image, it runs npm install before I install ZeroMQ.
Running npm install outside my docker image, as you suggested for when I mapped a drive, wasn't an option either. One of the reasons I want to run this in Docker is that I want an environment where all dependencies (such as ZeroMQ) are installed and available. This is difficult on some environments, and I want to run this on both Windows and Linux hosts.
I really want to use nodemon while I develop, so I also had to install it globally in the image and invoke nodemon when starting the container. So I had to extend your example as you suggested.
I had a lot of trouble getting the mapped volumes to work. It turns out that on a Windows host you have to be under C:\users\<username> to map host directories into the container. Once I figured that out, and undid all the weird experiments I had gone through to make it all work, it worked. Using "$PWD" as you suggested when invoking docker run is also odd on a Windows host: you have to prefix it with a forward slash, like this: /"$PWD".
When I figured it all out I started looking at docker compose, partly because I didn't want to keep typing out the long docker run commands (that I kept getting wrong) but also because in my real projects I want multiple containers for my database and other services I need.
So this is what it looks like now. It works exactly the way I want it to. I now have a container where all dependencies are installed inside the container, and whenever I run my container it first invokes npm install and then nodemon server.js. All files, including the installed npm modules, are on the host but mapped inside the container, where everything runs.
File 1 - docker-compose.yml (notice I don't need the $PWD variable, but can just use a . relative path for the host path)
web:
build: .
volumes:
- ".:/usr/src/app"
ports:
- "3000:3000"
File 2 - Dockerfile
FROM node:0.10.40
RUN mkdir /usr/src/app
RUN mkdir /usr/src/node_modules
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
RUN npm install -g nodemon
WORKDIR /usr/src/app
CMD export NODE_PATH=/usr/src/node_modules && cp package.json /usr/src && npm install --prefix /usr/src && npm start
EXPOSE 3000 35729
File 3 - package.json (notice I use the -L flag when invoking nodemon. This is needed when running inside a container)
{
"name": "docker-node-hello-world",
"version": "1.0.0",
"description": "node hello world",
"main": "server.js",
"scripts": {
"start": "nodemon -L server.js"
},
"author": "Author",
"license": "ISC",
"dependencies": {
"express": "^4.13.3",
"zmq": "^2.14.0"
}
}
File 4 - server.js
var express = require('express');
var app = express();
var zmq = require('zmq');
app.get('/', function (req, res) {
res.send('ZMQ Version: ' + zmq.version);
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
To build use the following command
docker-compose build
To run use the following command
docker-compose up
At this time I can develop my app on my host, and whenever I change a file my node server is restarted. When I add a new dependency to my package.json I have to restart my container to run npm install.
The only thing I don't have working yet is how to make rebuilding my image a smooth operation. After I build a new image, I have to delete the old container before I can run a new one; the mapping is locked by the old container.
I would really like to give you the credit, @jeedo; I would not have reached this solution without your help. I don't know how to do that and also mark this as the solution.
Edit: I just edited this to add something to my Dockerfile. I moved the node_modules folder off the host filesystem, because I had trouble with paths becoming too long on Windows. These changes make sure the node_modules are always installed inside the container.
