Docker with npm install adds unwanted symlink - node.js

I'm trying to build a Node.js container for my project, which requires a local module. In my package.json I have a relative link to a folder one level up, since that is where the local module is located. Everything seems to work correctly, except that inside the container the local module is added as a symlink pointing to the host machine (Windows).
This behavior only happens when I build using the Dockerfile; if I run npm install manually inside the container, the module is copied into node_modules as expected.
package.json entry:
"app-lib": "file:../app_lib"
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["./Provider/package.json", "./Provider/package-lock.json*", "./Provider/npm-shrinkwrap.json*", "./"]
COPY ["./app_lib/package.json", "./app_lib/package-lock.json*", "./app_lib/npm-shrinkwrap.json*", "../app_lib/"]
RUN cd ../app_lib && npm install
COPY ./app_lib .
RUN cd ../app && npm install
COPY ./Provider .
EXPOSE 3001
Annoying symlink:
app-lib -> E:\work\app_server\app_lib\
Does anyone have a suggestion on how to make this work correctly at build time, or an idea of what the underlying cause might be?

Make sure you have node_modules in .dockerignore, otherwise COPY ./app_lib . will overwrite the modules installed in the previous step and you will get the behaviour you see.
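For reference, a minimal .dockerignore next to the Dockerfile might look something like this (the exact entries depend on your layout; **/node_modules also covers the nested Provider and app_lib copies):
node_modules
**/node_modules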

Related

Unable to run (Linux container) or create image (Windows container) a Gatsby React site (win binaries error, matching manifest error) through Docker

I have my website wrapped up and wanted to containerize it for experience as I've never used Docker before. It's built on Gatsby. I did a fresh install of Docker and am running into two issues:
If I try to create an image in a Linux container, it seems to work, but I can't actually run it. I get the following error: "Error in "/app/node_modules/gatsby-transformer-sharp/gatsby-node.js": 'win32-x64' binaries cannot be used on the 'linuxmusl-x64' platform. Please remove the 'node_modules/sharp' directory and run 'npm install' on the 'linuxmusl-x64' platform."
I tried the above, uninstalling and reinstalling sharp in my project, to no avail. I'm not even using sharp, nor do I know what it is, though.
If I switch to Windows containers, I can't even create an image as I get the following:
"no matching manifest for windows/amd64 10.0.18363 in the manifest list entries"
My Dockerfile is as follows:
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
and my .dockerignore contains
node_modules
build
Dockerfile
Dockerfile.prod
.git
Things I've tried:
This tutorial: https://mherman.org/blog/dockerizing-a-react-app/ (where I got the Dockerfile text)
This tutorial: https://www.robinwieruch.de/docker-create-react-app-development (and its Dockerfile at one point)
Changing the FROM node version to 14.4.0 or 14, with or without -alpine.
Uninstalling and re-installing sharp
Uninstalling sharp entirely and trying to run it that way (I still get the sharp error for some reason)
Reading the documentation, which for whatever reason only tells you how to launch a default application (such as create-react-app) or one pulled from somewhere, but not how to do so for your own website.
Thanks

Docker build from Dockerfile hangs indefinitely and occasionally crashes with error 'failed to start service utility VM'

I am currently using Docker Desktop for Windows and following this tutorial for using Docker and VSCode (https://scotch.io/tutorials/docker-and-visual-studio-code), and when I attempt to build the image, the daemon is able to complete the first step of the Dockerfile but then hangs indefinitely on the second step. Sometimes, but very rarely, after an indeterminate amount of time, it will error out and give me this error:
failed to start service utility VM (createreadwrite): CreateComputeSystem 97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm: The operation could not be started because a required feature is not installed.
(extra info: {"SystemType":"container","Name":"97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Program Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"ContainerType":"linux","TerminateOnLastHandleClosed":true})
I have made sure that virtualization is enabled on my machine, uninstalled and reinstalled Docker, uninstalled Docker and deleted all files related to it before reinstalling, as well as making sure that the experimental features are enabled. These are fixes that I have found from various forums while trying to find others who have had the same issue.
Here is the Dockerfile that I am trying to build from. I have double-checked against the tutorial that it is correct, though it's still possible that I missed something (outside of the version number in the FROM line).
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD npm start
I would expect the image to build correctly, as I have followed the tutorial to a T. I have even fully reset and started the tutorial over again, and I'm still getting this same issue where it hangs indefinitely.
Well, you copy some files twice; I would not do that.
So, as the minimal change to your Dockerfile, I would try:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent && mv node_modules ../
EXPOSE 3000
CMD npm start
I would also reconsider the && mv node_modules ../ part and whether it is really needed.
If you don't do it already, I advise you to write a .dockerignore file right next to your Dockerfile with the minimum content of:
/node_modules
so that your local node_modules directory does not also get copied while building the image (saves time).
Hope this helps.

How should I Accomplish a Better Docker Workflow?

Every time I change a file in the Node.js app, I have to rebuild the Docker image.
This feels redundant and slows down my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and then start the container without the -v option.
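A rough sketch of what that could look like (the image name and paths here are placeholders, not taken from the question):
docker run --rm -v "$(pwd)":/usr/src/app -p 3000:3000 my-node-image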
What I ended up doing was:
1) Using volumes with the docker run command, so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount; I fixed it with Node's module resolution, which walks up parent directories looking for node_modules.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist the node_modules
# even after we are using the volume (which overwrites the app directory)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also change the instructions so that Docker looks in the directory specified by the build context argument of docker build, finds the package.json file, copies it into the current working directory of the container, and runs npm install; afterwards we COPY over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes to your source code as you want, and it will not invalidate the cache for any of these steps.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> .
If I now make a change to the Dockerfile, you will see some steps re-run and eventually the image is successfully built and tagged.
Docker detects the change at that step and re-runs every step after it, but not the npm install step.
The lesson here is that yes, the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum at each step.

ELF Header or installation issue with bcrypt in Docker container

Kind of a long shot, but has anyone had any problems using bcrypt in a Linux container (specifically Docker) and know of an automated workaround? I have the same issue as these two:
Invalid ELF header with node bcrypt on AWSBox
bcrypt invalid elf header when running node app
My Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD ["npm", "start"]
I get the previously mentioned invalid ELF header error if I have bcrypt already installed in my node_modules, but if I remove it (either just itself or all my packages), it isn't installed for some reason when I build the container. I have to manually enter the container after the build and install it inside.
Is there an automated workaround?
Or maybe, just, what would be a good alternative to bcrypt with a Node stack?
Liam's comment is on the money, just expanding on it for future travellers on the internets.
The issue is that you've copied your node_modules folder into your container. The reason that this is a problem is that bcrypt is a native module. It's not just javascript, but also a bunch of C code that gets compiled at the time of installation.
The binaries that come out of that compilation get stored in the node_modules folder and they're customised to the place they were built. Transplanting them out of their OSX home into a strange Linux land causes them to misbehave and complain about ELF headers and fairy feet.
The solution is to echo node_modules >> .dockerignore and run npm install as part of your Dockerfile. This means that the native modules will be compiled inside the container rather than outside it on your laptop.
With this in place, there is no need to run npm install before your start CMD. Just having it in the build phase of the Dockerfile is fine.
protip: the official node images set NODE_ENV=production by default, which npm treats the same as the --production flag. Most of the time this is a good thing. It is not a good thing when your Dockerfile also contains some build steps that rely on dev dependencies (webpack, etc). In that case you want NODE_ENV=null npm install
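As a minimal sketch of that protip in a Dockerfile (the build script name here is just a placeholder):
# override the image's NODE_ENV=production for this one command so devDependencies get installed
RUN NODE_ENV=null npm install
# hypothetical build step that needs the dev dependencies (webpack, etc.)
RUN npm run build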
pro protip: you can take better advantage of Docker's caching by copying in your package.json separately from the rest of your code. Make your Dockerfile look like this:
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Set working directory to /data
WORKDIR /data
# Copy package.json into /data
COPY package.json /data
# Install dependencies from package.json
RUN npm install
# Add current directory into path /data in image
ADD . /data
# Run index.js
CMD npm start
And that way Docker will only re-run npm install when you change your package.json, not every time you change a line of code.
Okay, so I have a working automated workaround:
Call npm install --production in the CMD instruction. I'm going to wave my hands at figuring out why I have to install bcrypt at the time of executing the container, but it works.
Updated Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD npm install --production; npm start
Add this command before RUN npm install in your Dockerfile:
RUN apk --no-cache add --virtual builds-deps build-base python3
It worked for me. Maybe it will work for you :)

npm package.json and docker (mounting it...)

I am using Docker, so this case might look weird. But I want my whole /data directory to be mounted inside my docker container when developing.
My /data folder contains my package.json file, an app directory, and a bunch of other stuff.
The problem is that I want my node_modules folder to NOT be persistent, only the package.json file.
I have tried a couple of things, but package.json and npm are giving me a hard time here:
Mounting the package.json file directly will break npm; npm tries to rename the file on save, which is not possible when it's a mounted file.
Mounting the parent folder (/data) will also mount the node_modules folder.
I can't find any configuration option to put node_modules in another folder outside /data, for example /dist.
Putting package.json in /data/conf and mounting /data/conf as a volume instead won't work; I can't find any way to specify the package.json path in npmrc.
Putting package.json in /data/conf and symlinking it to /data/package.json won't work; npm breaks the symlink and replaces it with a file.
Copying data back and forth to/from inside the Docker container is how I am doing it now. A little tedious. I also want a cleaner solution.
As you have already answered, I think that might be the only solution right now.
When you are building your Docker image, do something like:
COPY data/package.json /data/
RUN mkdir /dist/node_modules && ln -s /dist/node_modules /data/node_modules && cd /data && npm install
And for other stuff (like bower), do the same thing:
COPY data/.bowerrc /data/
COPY data/bower.json /data/
RUN mkdir /dist/vendor && ln -s /dist/vendor /data/vendor && cd /data && bower install --allow-root
And COPY data/ /data at the end (so you are able to use Docker's caching and avoid re-running the npm/bower installation when there is only a change to data).
You will also need to create the symlinks you need and store them in your git repo. They will be invalid on the outside, but will happily work on the inside of your container.
Using this solution, you are able to mount your $PWD/data:/data without getting the npm/bower "junk" outside your container. And you will still be able to build your image as a standalone deployment of your service.
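For instance, a hypothetical development run with that mount could look like:
# mount only the source directory; node_modules stays in /dist inside the image (image name is a placeholder)
docker run --rm -v "$PWD/data:/data" my-service-image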
A similar alternative is to use the NODE_PATH variable instead of creating a symlink.
RUN mkdir -p /dist/node_modules
RUN cp -r node_modules/* /dist/node_modules/
ENV NODE_PATH /dist/node_modules
Here you first create a new directory for node_modules, copy all modules there, and have Node read the modules from there.
I've been having this problem for some time now, and the accepted solution didn't work for me*
I found this link, which had an edit pointing here, and this indeed worked for me:
volumes:
- ./:/data
- /data/node_modules
In this case the Engine creates a volume (see Compose reference on volumes) which is not mounted to your source directory. This was the easiest solution and didn't require me to do any symlinking, setting paths, etc.
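For context, a fuller (but still hypothetical) docker-compose.yml using that trick might look like:
version: "3"
services:
  app:
    build: .
    volumes:
      - ./:/data            # bind-mount the source for live editing
      - /data/node_modules  # anonymous volume keeps the image's node_modules in place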
For reference, my simple Dockerfile just looks like this:
# install node requirements
WORKDIR /data
COPY ./package.json ./package.json
RUN npm install -qq
# add source code
COPY ./ ./
# run watch script
CMD npm run watch
(The watch script is just webpack --watch -d)
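In other words, something along these lines in package.json (shown only for illustration):
{
  "scripts": {
    "watch": "webpack --watch -d"
  }
}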
Hope this is able to help someone and save hours of time like it did for me!
'*' = I couldn't get webpack to work from my package.json scripts, and installing anything while inside the container created the node_modules folder with whatever I had just installed (I run npm i --save [packages] from inside the container to get the package and update package.json until the next rebuild).
The solution I went with was placing the node_modules folder in /dist/node_modules and making a symlink to it from /data/node_modules. I do this in my Dockerfile so it is used when building, and I can commit the symlinks to my git repo. Everything worked out nicely.
Maybe you can save your container, and then rebuild it regularly with a minimal Dockerfile:
FROM my_container
and a .dockerignore file containing
/data/node_modules
See the doc
http://docs.docker.com/reference/builder/#the-dockerignore-file
