I have a Node.js app that is deployed with Docker and uses the html-pdf library. When I run it in a local Docker container, it returns this error:
Error: spawn Unknown system error -8
at ChildProcess.spawn (node:internal/child_process:415:11)
at Object.spawn (node:child_process:698:9)
at PDF.PdfExec [as exec] (/srv/node_modules/html-pdf/lib/pdf.js:89:28)
at PDF.PdfToFile [as toFile] (/srv/node_modules/html-pdf/lib/pdf.js:85:8)
at exports.renderPdf (/srv/service/PdfRenderer.js:27:14)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /srv/routes/Report.js:63:16 {
errno: -8,
code: 'Unknown system error -8',
syscall: 'spawn'
}
This is my Dockerfile:
FROM mhart/alpine-node:16.4.2
WORKDIR /srv
ADD . .
RUN npm install
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
mkdir -p /usr/share && \
cd /usr/share \
&& curl -L https://github.com/Overbryd/docker-phantomjs-alpine/releases/download/2.11/phantomjs-alpine-x86_64.tar.bz2 | tar xj \
&& ln -s /usr/share/phantomjs/phantomjs /usr/bin/phantomjs \
&& phantomjs --version
EXPOSE 3000
CMD ["node", "index.js"]
And this is how I render the PDF in Node.js:
const content = await compile(template, context)
pdf.create(content, {
    format: 'Letter',
    footer: { contents: footer, height: '20mm' },
    header: { content: '', height: '6mm' },
    timeout: 540000
  })
  .toFile(path, (err, response) => {
    if (err) {
      fs.unlinkSync(path)
      return console.log(err);
    }
    const data = fs.readFileSync(path)
    res.setHeader('Content-Type', 'application/pdf')
    res.setHeader('Content-Length', fs.statSync(path).size + 200)
    res.send(data)
    return fs.unlinkSync(path)
  });
The call that fails is .toFile(), and I'm not sure why; I need to return the PDF file to the client.
I've tried installing phantomjs-prebuilt and setting phantomPath (roughly as in the sketch below), but it still returns the same error.
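Roughly how I wired in phantomPath when trying phantomjs-prebuilt (an illustrative sketch, not the exact code in PdfRenderer.js; the HTML and output path are placeholders):

// Sketch of what I tried: pass the phantomjs-prebuilt binary path to html-pdf.
const pdf = require('html-pdf')
const phantomjs = require('phantomjs-prebuilt')

pdf.create('<p>test</p>', {
  format: 'Letter',
  phantomPath: phantomjs.path,  // still fails with "Unknown system error -8"
  timeout: 540000
}).toFile('/tmp/test.pdf', (err, response) => {
  if (err) return console.log(err)
  console.log('wrote', response.filename)
})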
Is there anything I can do to fix this problem?
FROM node:8-alpine
WORKDIR /srv
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
COPY package.json package-lock.json ./
RUN npm install phantomjs-prebuilt --s
RUN npm install -g html-pdf
RUN npm install
RUN chmod -R a+rwx /srv
RUN apk --update add ttf-ubuntu-font-family fontconfig && rm -rf /var/cache/apk/*
COPY . ./
VOLUME /srv
USER node
EXPOSE 3000
CMD ["node", "index.js"]
This is how I fixed my issue: install phantomjs-prebuilt and html-pdf after copying package.json, then reinstall the other packages. Also install the font family again so that the report has text in it; otherwise the text in the PDF will be blank.
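For context, errno -8 corresponds to ENOEXEC ("exec format error"): the kernel refused to execute the PhantomJS binary that html-pdf spawned, which typically happens when the binary was built for a different libc or CPU architecture than the container. With the Dockerfile above, the working binary ends up at /usr/local/bin/phantomjs, and you can point html-pdf at it explicitly via its phantomPath option. A minimal sketch (the HTML and output path are placeholders):

const pdf = require('html-pdf')

// Use the PhantomJS binary installed by the Dockerfile above instead of the
// phantomjs-prebuilt download, which does not run on Alpine/musl.
const options = {
  format: 'Letter',
  phantomPath: '/usr/local/bin/phantomjs',
  timeout: 540000
}

pdf.create('<h1>Hello from PhantomJS</h1>', options).toFile('/tmp/out.pdf', (err, result) => {
  if (err) return console.error(err)
  console.log('PDF written to', result.filename)
})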
This error seems related to PhantomJS, and it is also documented in their docs:
https://www.npmjs.com/package/phantomjs#installation-fails-with-spawn-enoent
Installation fails with spawn ENOENT
This is NPM's way of telling you that it was not able to start a process. It usually means:
node is not on your PATH, or otherwise not correctly installed.
tar is not on your PATH. This package expects tar on your PATH on Linux-based platforms.
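If you want to confirm what, if anything, actually resolves on PATH inside the container, here is a small check using only Node built-ins (a diagnostic sketch, not part of the fix itself):

// Print what the shell resolves for each binary, or report it as missing.
const { execSync } = require('child_process')

for (const bin of ['node', 'tar', 'phantomjs']) {
  try {
    console.log(bin, '->', execSync(`which ${bin}`).toString().trim())
  } catch (err) {
    console.log(bin, 'is not on PATH')
  }
}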
I had the same problem. You'll need to install PhantomJS manually, since phantomjs-prebuilt won't work on Alpine Linux, as you can see through this.
Related
I am trying to add PhantomJS to my Dockerfile to generate PDFs from HTML. Below is my Dockerfile:
FROM node:alpine
WORKDIR /
COPY package*.json ./
# Add support for https on wget
RUN apk update && apk add --no-cache wget && apk --no-cache add openssl wget && apk add ca-certificates && update-ca-certificates
# Add phantomjs
RUN wget -qO- "https://github.com/dustinblackman/phantomized/releases/download/2.1.1a/dockerized-phantomjs.tar.gz" | tar xz -C / \
&& npm config set user 0 \
&& npm install -g phantomjs-prebuilt
# Add fonts required by phantomjs to render html correctly
RUN apk add --update ttf-dejavu ttf-droid ttf-freefont ttf-liberation && rm -rf /var/cache/apk/*
RUN npm install --production
COPY . /
RUN npm run build
EXPOSE 8080
CMD ["npm", "run", "start:prod"]
But when I try to build it, it gives me the error below.
npm ERR! `user` is not a valid npm option
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2022-12-19T11_55_55_953Z-debug-0.log
The command '/bin/sh -c wget -qO- "https://github.com/dustinblackman/phantomized/releases/download/2.1.1a/dockerized-phantomjs.tar.gz" | tar xz -C / && npm config set user 0 && npm install -g phantomjs-prebuilt' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
This seemed to work fine a few days back, but all of a sudden it stopped working and started giving me this error.
I installed a Node server in Docker and am using LibreOffice. It works normally in a Windows environment, but in the Docker environment the characters come out broken and I cannot find the cause. Below are my Dockerfile and the conversion code.
FROM node:12-alpine
WORKDIR "/app"
COPY package*.json ./
RUN apk update && apk add libreoffice
# RUN npm install
RUN npm ci --only=production
COPY . .
CMD ["npm", "start"]
The conversion code:
const fs = require('fs')
const libre = require('libreoffice-convert')

// Read the input file (enterPath, pdfExtend and outputPath are defined elsewhere)
const file = fs.readFileSync(enterPath);

// Convert it to PDF format with an undefined filter (see the LibreOffice docs about filters)
const status = await new Promise((resolve, reject) => {
  libre.convert(file, pdfExtend, undefined, (err, done) => {
    if (err) {
      console.log(err)
      return reject(err)
    }
    // `done` holds the converted PDF, which you can save or pipe into another stream
    fs.writeFileSync(outputPath, done);
    resolve(200)
  });
})
Result: the text in the generated PDF is rendered as broken characters.
Please help me.
I think fonts are missing in Alpine Linux (docker image: node:12-alpine), and two workarounds come to mind right now.
1. Change the base image (there is no guarantee that it will work):
FROM node:12-slim
2. Install fonts (I recommend this solution, though the image size grows up to 1 GB):
ref: stackoverflow/alpine-linux-fonts
FROM node:12-alpine
WORKDIR "/app"
COPY package*.json ./
RUN apk update && apk add libreoffice
RUN apk add --no-cache msttcorefonts-installer fontconfig
RUN update-ms-fonts
# Google fonts
RUN wget https://github.com/google/fonts/archive/main.tar.gz -O gf.tar.gz --no-check-certificate
RUN tar -xf gf.tar.gz
RUN mkdir -p /usr/share/fonts/truetype/google-fonts
RUN find $PWD/fonts-main/ -name "*.ttf" -exec install -m644 {} /usr/share/fonts/truetype/google-fonts/ \; || return 1
RUN rm -f gf.tar.gz
RUN fc-cache -f && rm -rf /var/cache/*
# RUN npm install
RUN npm ci --only=production
COPY . .
CMD ["npm", "start"]
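After rebuilding with the fonts installed, you can verify from inside the container that fontconfig (and therefore LibreOffice) actually sees them. A small diagnostic sketch using Node built-ins and the fc-list tool that ships with the fontconfig package:

// List the font families fontconfig can see inside the container; if this is
// (nearly) empty, LibreOffice will render boxes instead of text.
const { execSync } = require('child_process')

const families = execSync('fc-list : family').toString().trim().split('\n')
console.log(`fontconfig sees ${families.length} font families`)
console.log(families.slice(0, 10).join('\n'))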
I'm running a Node.js application that uses the html-pdf module, which in turn relies on PhantomJS, to generate PDF files from HTML. The app runs within a Docker container.
Dockerfile:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
RUN npm install -g html-pdf --unsafe-perm
VOLUME /mydirectory
ENTRYPOINT ["node"]
Which builds an image just fine.
app.js
const witch = require('witch');
const pdf = require('html-pdf');
const phantomPath = witch('phantomjs-prebuilt', 'phantomjs');
function someFunction() {
  pdf.create('some html content', { phantomPath: phantomPath });
}
// ... and then some other stuff that eventually calls someFunction()
And then call docker run <the image name> app.js
When someFunction gets called, the following error message is thrown:
Error: spawn /mydirectory/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs ENOENT
This happens both when deploying the container on a cloud linux server or locally on my machine.
I have tried adding RUN npm install -g phantomjs-prebuilt --unsafe-perm to the Dockerfile, to no avail (this makes docker build fail because the installation of html-pdf cannot validate the installation of phantomjs).
I'm also obviously not a fan of using the --unsafe-perm argument of npm install, so if anybody has a solution that avoids it, that would be fantastic.
Any help is greatly appreciated!
This is what ended up working for me, in case this is helpful to anyone:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
USER node
RUN npm install -g html-pdf
VOLUME /mydirectory
ENTRYPOINT ["node"]
I had a similar problem; the only workaround for me was to download and copy PhantomJS manually. This is the relevant part of my Dockerfile; it should be the last thing before the EXPOSE command. By the way, I use a node:10.15.3 image.
RUN wget -O /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN mkdir /tmp/phantomjs && mkdir -p /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN tar xvjf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/phantomjs
RUN mv /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64/* /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN rm -rf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm -rf /tmp/phantomjs
Don't forget to update your paths. It's only a workaround; I didn't have time to figure out a proper fix yet.
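As a hedged sketch of what "update your paths" can look like on the application side: point html-pdf's phantomPath option at the binary that the snippet above copies under /usr/local/lib/node_modules/phantomjs/lib/phantom/ (adjust the path if you copied it elsewhere):

const pdf = require('html-pdf')

// The path matches where the Dockerfile snippet above places the PhantomJS binary.
const options = {
  phantomPath: '/usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs'
}

pdf.create('<p>hello</p>', options).toFile('/tmp/out.pdf', (err, result) => {
  if (err) throw err
  console.log('PDF written to', result.filename)
})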
I came to this question in March 2021 and had the same issue dockerizing Highcharts: it worked on my machine but failed on docker run (the same spawn phantomjs error). In the end, the solution was to find a FROM node version that worked. This Dockerfile works using the latest Node docker image at the time and an almost-latest highcharts-export-server npm version (always pin specific npm versions):
FROM node:15.12.0
ENV ACCEPT_HIGHCHARTS_LICENSE YES
# see available versions of highcharts at https://www.npmjs.com/package/highcharts-export-server
RUN npm install highcharts-export-server@2.0.30 -g
EXPOSE 7801
# run the container using: docker run -p 7801:7801 -t CONTAINER_TAG
CMD [ "highcharts-export-server", "--enableServer", "1" ]
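Once the container is up (docker run -p 7801:7801 -t CONTAINER_TAG), you can POST a chart configuration to the export server. A minimal sketch using Node's built-in http module; the body shape (infile plus type) follows the highcharts-export-server HTTP API, but double-check the exact fields against that version's README:

// POST a small chart config to the export server and save the returned PNG.
const http = require('http')
const fs = require('fs')

const body = JSON.stringify({
  infile: { title: { text: 'Test chart' }, series: [{ data: [1, 2, 3] }] },
  type: 'png'
})

const req = http.request(
  {
    host: 'localhost',
    port: 7801,
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(body) }
  },
  res => {
    const chunks = []
    res.on('data', c => chunks.push(c))
    res.on('end', () => fs.writeFileSync('chart.png', Buffer.concat(chunks)))
  }
)
req.end(body)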
How do I include my node_modules or specify a npm login / auth token for private npm modules?
It appears that GAE no longer lets the node_modules folder be included at all (see this issue) and there doesn't appear to be a hook to allow npm to login or set a token.
If you include a .npmrc file local to the application you want to deploy, it will get copied into the app source and be used during the npm install. You can have a build step create this file or copy it from your home directory. See this npm article.
The .npmrc file should look like this:
//registry.npmjs.org/:_authToken=<token here>
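If you'd rather not commit the token itself, one option is a small build step that writes .npmrc from an environment variable. A sketch (NPM_TOKEN is an assumed name that your CI or build environment would provide):

// Write .npmrc from an environment variable so the auth token never lands in
// source control. Run this before `npm install` in your build step.
const fs = require('fs')

const token = process.env.NPM_TOKEN  // assumed to be provided by the build environment
if (!token) throw new Error('NPM_TOKEN is not set')

fs.writeFileSync('.npmrc', `//registry.npmjs.org/:_authToken=${token}\n`)
console.log('.npmrc written')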
The Dockerfile I used looks like so:
# Use the base App Engine Docker image, based on debian jessie.
FROM gcr.io/google_appengine/base
# Install updates and dependencies
RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl python build-essential git ca-certificates libkrb5-dev && \
apt-get clean && rm /var/lib/apt/lists/*_*
# Install the latest release of nodejs
RUN mkdir /nodejs && curl https://nodejs.org/dist/v6.2.1/node-v6.2.1-linux-x64.tar.gz | tar xvzf - -C /nodejs --strip-components=1
ENV PATH $PATH:/nodejs/bin
COPY . /app/
WORKDIR /app
# NODE_ENV to production so npm only installs needed dependencies
ENV NODE_ENV production
RUN npm install --unsafe-perm || \
((if [ -f npm-debug.log ]; then \
cat npm-debug.log; \
fi) && false)
# start
CMD ["npm", "start"]
I have tried everything I can think of. I have read the docs, blogs and tried following samples on github.
But I can't seem to get it to work.
What I want to do is simple. I want to write my node.js code on my windows 8.1 machine, and I also want to run the code from within a Docker container without having to rebuild the container all the time. So I want to map a directory on my Windows Host to a directory inside the container.
I have created this Dockerfile
FROM node:0.10.38
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
RUN mkdir -p /usr/src/app
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install
EXPOSE 3000
EXPOSE 35729
CMD ["npm", "start"]
I have this simple server.js file
var express = require('express');
var app = express();
var zmq = require('zmq');
app.get('/', function (req, res) {
res.send('ZMQ: ' + zmq.version);
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
And this simple package.json
{
"name": "docker-node-hello-world",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.13.3",
"zmq": "^2.14.0"
}
}
I have installed the latest Docker Toolbox, and can run the Docker Hello World example fine. I try to build my docker image like this when I am in the directory where my Dockerfile is.
docker build -t dockernodetest:dockerfile .
I then try to run it, also from the same location inside the Docker Quickstart Terminal. I use the hash since the tag doesn't take for some reason:
docker run -v //c/Bitbucket/docker-node-hello-world/:/usr/src/app -p 3000:3000 -i -t 9cfd34e046a5 ls ./usr/src/app
The result of this is an empty directory. I was hoping I could just invoke
docker run -v //c/Bitbucket/docker-node-hello-world/:/usr/src/app -p 3000:3000 -i -t 9cfd34e046a5 npm start
But since the host directory isn't available it fails. I have the feeling that I have misunderstood something very basic. I just don't know what.
First, let's start with the Dockerfile:
FROM node:0.10.38-onbuild
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
EXPOSE 3000 35729
From line 1 I've used the 0.10.38-onbuild tag because I want to take advantage of the onbuild scripts that will run to create the /usr/src/app directory and run npm install.
Then server.js and package.json are as you have written them. These are both in the same working directory as the Dockerfile as well.
Next we build the image
docker build -t dockernodetest .
I've omitted the dockerfile tag as it seemed unnecessary; the client will automatically add a latest tag anyway. To see what images you have locally, run docker images.
At this point we should have an image ready to run, but let us first check that the files we wanted to load are there and that npm install created the node_modules directory:
$ docker run dockernodetest ls /usr/src/app
Dockerfile
node_modules
package.json
server.js
We're ready at this point to run our little nodejs app
$ docker run -it -p 8080:3000 dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
Example app listening at http://0.0.0.0:3000
In this instance I've used the -p 8080:3000 flag to map the container's port 3000 to port 8080 on my host machine. Note that I didn't add any other commands at the end because the -onbuild image I've pulled from has a CMD [ "npm", "start" ], so the default action is to run the start package script.
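To confirm the mapping from the host, a quick check with Node's http module (or simply a browser at http://localhost:8080) should return the ZMQ version string; with Docker Toolbox you may need the docker-machine VM's IP instead of localhost:

// Hit the app through host port 8080, which Docker forwards to the container's port 3000.
const http = require('http')

http.get('http://localhost:8080/', res => {
  let body = ''
  res.on('data', chunk => { body += chunk })
  res.on('end', () => console.log(res.statusCode, body))  // expect 200 and "ZMQ: ..."
})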
So, to make the development cycle even faster, you want to mount your working directory into the container via the -v option:
$ docker run -it -p 8080:3000 -v "$PWD":/usr/src/app dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
module.js:340
throw err;
^
Error: Cannot find module 'express'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/usr/src/app/server.js:1:77)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
npm ERR! Linux 4.1.7-15.23.amzn1.x86_64
npm ERR! argv "node" "/usr/local/bin/npm" "start"
npm ERR! node v0.10.38
npm ERR! npm v2.11.1
npm ERR! code ELIFECYCLE
npm ERR! docker-node-hello-world@1.0.0 start: `node server.js`
npm ERR! Exit status 8
But what happened here? Because we mounted the current working directory, it overwrote what was previously in /usr/src/app, including our node_modules directory.
So the quick fix is to run npm install in our current working directory and rerun the docker run command.
$ npm install
docker-node-hello-world@1.0.0 /home/ec2-user/dockernode
└─┬ express@4.13.3
├─┬ accepts@1.2.13
│ ├─┬ mime-types@2.1.7
...
$ docker run -it -p 8080:3000 -v "$PWD":/usr/src/app dockernodetest
> docker-node-hello-world@1.0.0 start /usr/src/app
> node server.js
Example app listening at http://0.0.0.0:3000
Now if you make an update to your server.js file, just hit Ctrl+C and restart your docker image, though you might want to use something like nodemon to make this even more seamless.
Thank you so much for your answer. It definitely led me on the right path. It took me longer than I had thought, but I got it working.
There were a few things that didn't work, but I got it working at last. Here is a rundown of what I went through.
I wasn't able to use the onbuild scripts as you suggested. The reason being that when I base my image on the onbuild script it runs npm install before I install ZeroMQ.
Running npm install outside my Docker image, as you suggested for when I map a drive, wasn't an option either. One of the reasons I want to run this in Docker is that I want an environment where all dependencies (such as ZeroMQ) are installed and available. That is difficult to achieve in some environments, and I want to run this on both Windows and Linux hosts.
I really want to use nodemon during development, so I also had to install it globally in the image and invoke nodemon when starting the container. So I had to extend your example as you suggested.
I had a lot of trouble getting the mapped volumes to work. It turns out that on a Windows host you have to be in the context of C:\users\<username> to be able to map host volumes into the container. Once I figured that out and undid all the weird experiments I had been through to make it all work, it worked. Using "$PWD" as you suggested when invoking docker run is also odd on a Windows host: you have to prefix it with a forward slash, like this: /"$PWD".
When I figured it all out I started looking at docker compose, partly because I didn't want to keep typing out the long docker run commands (that I kept getting wrong) but also because in my real projects I want multiple containers for my database and other services I need.
So this is what it looks like now, and it works exactly the way I want. I now have a container where all dependencies are installed inside the container, and whenever I run my container it first invokes npm install and then nodemon server.js. All files, including the installed npm modules, are on the host but mapped inside the container, where everything is run from.
File 1 - docker-compose.yml (notice I don't need the $PWD variable, but can just use a . relative path for the host path)
web:
build: .
volumes:
- ".:/usr/src/app"
ports:
- "3000:3000"
File 2 - Dockerfile
FROM node:0.10.40
RUN mkdir /usr/src/app
RUN mkdir /usr/src/node_modules
RUN apt-get update -qq && apt-get install -y build-essential
ENV ZMQ_VERSION 4.1.3
ENV LIBSODIUM_VERSION 1.0.3
RUN curl -SLO "https://download.libsodium.org/libsodium/releases/libsodium-$LIBSODIUM_VERSION.tar.gz" \
&& tar xvf libsodium-$LIBSODIUM_VERSION.tar.gz \
&& cd libsodium-$LIBSODIUM_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r libsodium-$LIBSODIUM_VERSION \
&& rm libsodium-$LIBSODIUM_VERSION.tar.gz
RUN curl -SLO "http://download.zeromq.org/zeromq-$ZMQ_VERSION.tar.gz" \
&& tar xvf zeromq-$ZMQ_VERSION.tar.gz \
&& cd zeromq-$ZMQ_VERSION \
&& ./configure \
&& make \
&& make install \
&& cd .. \
&& rm -r zeromq-$ZMQ_VERSION \
&& rm zeromq-$ZMQ_VERSION.tar.gz
RUN ldconfig
RUN npm install -g nodemon
WORKDIR /usr/src/app
CMD export NODE_PATH=/usr/src/node_modules && cp package.json /usr/src && npm install --prefix /usr/src && npm start
EXPOSE 3000 35729
File 3 - package.json (notice I use the -L flag when invoking nodemon. This is needed when running inside a container)
{
"name": "docker-node-hello-world",
"version": "1.0.0",
"description": "node hello world",
"main": "server.js",
"scripts": {
"start": "nodemon -L server.js"
},
"author": "Author",
"license": "ISC",
"dependencies": {
"express": "^4.13.3",
"zmq": "^2.14.0"
}
}
File 4 - server.js
var express = require('express');
var app = express();
var zmq = require('zmq');
app.get('/', function (req, res) {
res.send('ZMQ Version: ' + zmq.version);
});
var server = app.listen(3000, function () {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
To build use the following command
docker-compose build
To run use the following command
docker-compose up
At this time I can develop my app on my host, and whenever I change a file my node server is restarted. When I add a new dependency to my package.json I have to restart my container to run npm install.
The only thing I don't have working yet is how I make rebuilding my image a smooth operation. What happens is that after I build a new image I have to delete my old container before I can run a new container. The mapping is locked by the old container.
I would really like to give the credit to @jeedo; I would not have reached this solution without your help. I don't know how to do that and also mark this as the solution.
Edit: I just edited this to add something to my Dockerfile. I moved the node_modules folder away from the host filesystem because I had trouble with paths becoming too long on Windows. These changes make sure the node_modules are always installed inside the container.
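For anyone wondering how the app still finds its dependencies after that change: NODE_PATH=/usr/src/node_modules (set in the CMD above) adds the directory populated by npm install --prefix /usr/src to Node's module lookup path. A quick sanity check from inside the container (the printed path is approximate):

// With NODE_PATH pointing at /usr/src/node_modules, require() resolves
// packages that live outside the mounted /usr/src/app directory.
console.log(require.resolve('express'))
// -> something like /usr/src/node_modules/express/index.js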