Node.js in Docker (Ubuntu): cannot find module /usr/src/app/index.js

I'm trying to deploy an application I wrote to my Unraid server, so I had to Docker-ize it. It's written with Node.js and depends on ImageMagick and Ghostscript, so I had to include a build step to install those dependencies. I'm seeing an error when running this image, though.
Here's my Dockerfile:
FROM node
RUN mkdir -p /usr/src/app
RUN chmod -R 777 /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm i --only=production
FROM ubuntu
RUN apt-get update
RUN apt-get install -y imagemagick ghostscript nodejs
ENTRYPOINT node /usr/src/app/index.js
Console output
internal/modules/cjs/loader.js:638
throw err;
^
Error: Cannot find module '/usr/src/app/index.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
at Function.Module._load (internal/modules/cjs/loader.js:562:25)
at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
Originally, my entrypoint was configured as ENTRYPOINT node ./index.js. I thought I might be in the wrong directory or something, but switching to an absolute path didn't work either, so here I am.

By using a second FROM instruction, you are introducing a second stage. Nothing from the first stage is available to the second stage by default. If you need some artefacts, you need to copy them explicitly.
# give the stage a name to
# be able to reference it later
FROM node as builder
RUN mkdir -p /usr/src/app
RUN chmod -R 777 /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm i --only=production
# this is a new stage
FROM ubuntu
RUN apt-get update
RUN apt-get install -y imagemagick ghostscript nodejs
# you need to copy the things you need
COPY --from=builder /usr/src/app /usr/src/app
ENTRYPOINT node /usr/src/app/index.js
That said, it seems pointless for a Node app to do that. I would suggest using a single stage, since the Node runtime is required to run your app anyway. Multi-stage builds make sense if you were to use Node to build static assets with something like webpack and then copy the produced assets into a second stage that doesn't need the Node runtime.
Also note that using an ENTRYPOINT over a simple CMD only makes sense if your application takes additional arguments and flags, and you want the user of your image to be able to provide those arguments without needing to know how to start the actual app.
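For illustration, a minimal sketch of that interplay (the --port flag is a made-up argument of a hypothetical app): ENTRYPOINT fixes how the app starts, while CMD supplies default arguments that a user can override at docker run time.
# ENTRYPOINT always runs; CMD holds overridable default arguments
ENTRYPOINT ["node", "/usr/src/app/index.js"]
CMD ["--port=3000"]
# a user can then override the defaults: docker run my-image --port=8080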
Another thing to improve is using npm ci rather than npm i, so you install exactly what your lockfile specifies and avoid untested behaviour in production.
The use of two RUN instructions to create the folder and change its permissions also seems somewhat redundant: if you use WORKDIR, that folder is created automatically.
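Putting those suggestions together, a single-stage version could look roughly like this (a sketch, assuming index.js and package.json sit at the root of the build context):
FROM node
# the default node image is Debian-based, so apt-get is available
RUN apt-get update && apt-get install -y imagemagick ghostscript
# WORKDIR creates the directory automatically; no mkdir/chmod needed
WORKDIR /usr/src/app
# copy the manifests first so the npm ci layer is cached
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["node", "index.js"]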

Related

How to install Node in a Dockerfile?

I want to install Node v18 on AWS Linux.
Prerequisite
I have a Django backend and a React frontend, so I want to use Node when building the frontend.
If I make the Dockerfile start with FROM node:18 it works, but I want to use FROM python:3.9 so that Django works.
Is it not a good idea to put Django and React in the same container?
Right now my Dockerfile looks like this:
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get update
WORKDIR /usr/src/app
RUN apt-get install -y npm
RUN pip install pipenv
WORKDIR /usr/src/app/frontend_react
RUN npm install --force
RUN node -v //version 12 is installed
RUN dnf module install nodejs:18/common
RUN node -v
RUN npm run build
However, there is no dnf.
How can I do this?
If you can use prebuilt Docker Hub images then this is much easier. I would generally avoid trying to put components with different use cases, build systems, and runtimes into the same image if possible.
In the specific case of a Django application with a React frontend, you might be compiling the frontend to static files that you then serve directly via Django. In this setup you don't need Node to run the application: as long as the static files exist, Django can serve them up. Docker's multi-stage build feature will let you build the frontend using a node image, then COPY it into your application image. A typical example might look like:
FROM node:18 AS react
WORKDIR /app
COPY frontend_react/package*.json ./
RUN npm ci
COPY frontend_react/ ./
RUN npm run build
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
COPY --from=react /app/dist/ frontend_react/dist/
EXPOSE 8000
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
The first half should look like a normal React image build, except that it doesn't have a CMD. The second half should look like a normal Django image build, plus the COPY --from=react line to get the built application from the first build stage. We don't need node or npm in the final image, only the static files, and so we don't invoke a package manager to try to install them.
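As a usage sketch (the image name is arbitrary), the react stage exists only during the build, so the final image is built and run like any single-stage image:
docker build -t django-react-app .
docker run -p 8000:8000 django-react-app
Note that the dist/ directory name depends on your frontend tooling (Vite emits dist/, Create React App emits build/), so adjust the COPY --from=react path to match.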

Docker build from Dockerfile hangs indefinitely and occasionally crashes with error 'failed to start service utility VM'

I am currently using Docker Desktop for Windows and following this tutorial for using Docker and VS Code (https://scotch.io/tutorials/docker-and-visual-studio-code). When I attempt to build the image, the daemon completes the first step of the Dockerfile but then hangs indefinitely on the second step. Sometimes, though very rarely, after an indeterminate amount of time it will error out and give me this error:
failed to start service utility VM (createreadwrite): CreateComputeSystem 97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm: The operation could not be started because a required feature is not installed.
(extra info: {"SystemType":"container","Name":"97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Program Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"ContainerType":"linux","TerminateOnLastHandleClosed":true})
I have made sure that virtualization is enabled on my machine, uninstalled and reinstalled Docker, uninstalled Docker and deleted all files related to it before reinstalling, as well as making sure that the experimental features are enabled. These are fixes that I have found from various forums while trying to find others who have had the same issue.
Here is the Dockerfile that I am trying to build from. I have double-checked with the tutorial that it is correct, though it's still possible that I missed something (outside of the version number in the FROM line).
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD npm start
I would expect the image to build correctly, as I have followed the tutorial to a T. I have even fully reset and started the tutorial over again, and I'm still getting this same issue where it hangs indefinitely.
Well, you copy some files twice; I would not do that.
So, for the minimum change to your Dockerfile, I would try:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent && mv node_modules ../
EXPOSE 3000
CMD npm start
I would also think about the && mv node_modules ../ part and whether it is really needed.
If you don't do it already I advise you to write a .dockerignore file right next to your Dockerfile with the minimum content of:
/node_modules
so that your local node_modules directory does not also get copied while building the image (saves time).
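A slightly fuller .dockerignore for a typical Node project might look like this (only node_modules is strictly needed; the rest are common additions that keep the build context small):
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore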
Hope this helps.

How should I accomplish a better Docker workflow?

Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image again, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into the container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and run it without the -v option.
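A minimal sketch of that, with made-up names (my-node-app as the image tag, the app living in the current directory):
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 my-node-app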
What I ended up doing was:
1) Using volumes with the docker run command, so I could change the code without rebuilding the Docker image every time.
2) Fixing an issue where node_modules got overwritten (because a volume acts like a mount) by relying on Node's module resolution, which also looks for node_modules in parent directories.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist node_modules (installed in /usr/src)
# even when a volume is mounted over /usr/src/app
COPY package.json /usr/src/
RUN cd /usr/src && npm install
#Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also change the instructions so that Docker looks in the directory specified by the build context argument of docker build, finds the package.json file, copies just that into the working directory of the container, runs npm install, and only afterwards COPYs over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over the rest of the source
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes to your source code as you want, and it will not invalidate the cache for any of these steps.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, the npm install will not be executed again.
So we can test this by running the docker build -t <tagname>/<project-name> .
Now I have made a change to the source code, so you will see some steps re-run and eventually our successfully tagged and built image.
Docker detected the change to the step and every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile really does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum at each step.

ELF Header or installation issue with bcrypt in Docker container

Kind of a long shot, but has anyone had problems using bcrypt in a Linux container (specifically Docker) and know of an automated workaround? I have the same issue as these two:
Invalid ELF header with node bcrypt on AWSBox
bcrypt invalid elf header when running node app
My Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD ["npm", "start"]
I get the previously mentioned invalid ELF header error if I already have bcrypt installed in my node_modules, but if I remove it (either just bcrypt or all my packages), it isn't installed for some reason when I build the container. I have to manually enter the container after the build and install it there.
Is there an automated workaround?
Or maybe, just, what would be a good alternative to bcrypt with a Node stack?
Liam's comment is on the money, just expanding on it for future travellers on the internets.
The issue is that you've copied your node_modules folder into your container. The reason that this is a problem is that bcrypt is a native module. It's not just javascript, but also a bunch of C code that gets compiled at the time of installation.
The binaries that come out of that compilation get stored in the node_modules folder and they're customised to the place they were built. Transplanting them out of their OSX home into a strange Linux land causes them to misbehave and complain about ELF headers and fairy feet.
The solution is to echo node_modules >> .dockerignore and run npm install as part of your Dockerfile. This means that the native modules will be compiled inside the container rather than outside it on your laptop.
With this in place, there is no need to run npm install before your start CMD. Just having it in the build phase of the Dockerfile is fine.
protip: the official node images set NODE_ENV=production by default, which npm treats the same as the --production flag. Most of the time this is a good thing. It is not a good thing when your Dockerfile also contains some build steps that rely on dev dependencies (webpack, etc). In that case you want NODE_ENV=null npm install
pro protip: you can take better advantage of Docker's caching by copying in your package.json separately from the rest of your code. Make your Dockerfile look like this:
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Set working directory to /data
WORKDIR /data
# Copy package.json into /data
COPY package.json /data
# Install dependencies from package.json
RUN npm install
# Add current directory into path /data in image
ADD . /data
# Run index.js
CMD npm start
And that way Docker will only re-run npm install when you change your package.json, not every time you change a line of code.
Okay, so I have a working automated workaround:
Call npm install --production in the CMD instruction. I'm going to wave my hands at figuring out why I have to install bcrypt at the time of executing the container, but it works.
Updated Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD npm install --production; npm start
Add this command before RUN npm install in your Dockerfile:
RUN apk --no-cache add --virtual builds-deps build-base python3
It worked for me. Maybe it will work for you :)
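For context, a hedged sketch of where that line fits in an Alpine-based image (base tag and paths are illustrative):
FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json ./
# build-base and python3 give node-gyp what it needs to compile bcrypt's native code
RUN apk --no-cache add --virtual builds-deps build-base python3
RUN npm install
COPY . .
CMD ["npm", "start"]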

Why is npm not available in my Docker container?

I am very new to Docker and playing with it. I am trying to run a Node.js app in a Docker container. I took ubuntu:14.04 as the base image and built my own Node.js-baked image. My Dockerfile content looks like this:
FROM ubuntu:14.04
MAINTAINER nmrony
#install packages, nodejs and npm
RUN apt-get -y update && \
apt-get -y install build-essential && \
curl -sL https://deb.nodesource.com/setup | bash - && \
apt-get install -y nodejs
#Copy the sources to Container
COPY ./src /src
CMD ["cd /src"]
CMD ["npm install"]
CMD ["nodejs", "/src/server.js"]
I run the container using the following command:
docker run -p 8080:8080 -d --name nodejs_expreriments nmrony/exp-nodejs
It runs fine, but when I try to browse to http://localhost:8080 it does not work.
When I run docker logs nodejs_expreriments, I get the following error:
Error: Cannot find module 'express'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/src/server.js:1:77)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
I ran another container with an interactive shell and found that npm is not installed. Can someone help me understand why npm is not installed in the container? Am I doing something wrong?
Your fundamental problem is that you can only have exactly one CMD in a Dockerfile. Each RUN/COPY command builds up a layer during docker build, so you can have as many of those as you want. However, exactly one CMD gets executed during docker run. Since you have three CMD statements, only the last one actually takes effect.
(IMO, if the Dockerfile team had chosen the word BUILD instead of RUN and RUN instead of CMD, so that docker build ran BUILD statements and docker run ran RUN statements, this might have been less confusing to new users. Oh, well.)
You either want to convert your first two CMDs to RUNs (if you expect them to happen during docker build and be baked into the image) or perhaps put all three CMDs in a script that you run. Here are a few solutions:
(1) The simplest change is probably to use WORKDIR instead of cd and make your npm install a RUN command. If you want npm install to happen during the build so that your server starts up quickly when you run it, you'll want to do:
#Copy the sources to Container
COPY ./src /src
WORKDIR /src
RUN npm install
CMD nodejs server.js
(2) If you're doing active development, you may want to consider something like:
#Copy the sources to Container
WORKDIR /src
COPY ./src/package.json /src/package.json
RUN npm install
COPY /src /src
CMD nodejs server.js
So that you only have to do the npm install if your package.json changes. Otherwise, every time anything in your image changes, you rebuild everything.
(3) Another option that's useful if you're changing your package file often and don't want to be bothered with both building and running all the time is to keep your source outside of the image on a volume, so that you can run without rebuilding:
...
WORKDIR /src
VOLUME /src
CMD build_and_serve.sh
Where the contents of build_and_serve.sh are:
#!/bin/bash
npm install && nodejs server.js
And you run it like:
docker run -v /path/to/your/src:/src -p 8080:8080 -d --name nodejs_expreriments nmrony/exp-nodejs
Of course, that last option doesn't give you a portable docker image that you can give someone with your server, since your code is outside the image, on a volume.
Lots of options!
For me this worked:
RUN apt-get update \
&& apt-get upgrade -y \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g react-tools
My Debian image's apt-get was getting a broken/old version of npm, so installing Node from the NodeSource setup script fixed it.
