How to install Node in a Dockerfile? - node.js

I want to install Node v18 on AWS Linux.
Prerequisite
I have a Django backend and a React frontend, so I want to use Node when building the frontend.
If I make the Dockerfile with FROM node:18 it works, but I want to use FROM python:3.9 so Django works.
Is it not a good idea to put Django and React in the same container?
Right now my Dockerfile looks like this:
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get update
WORKDIR /usr/src/app
RUN apt-get install -y npm
RUN pip install pipenv
WORKDIR /usr/src/app/frontend_react
RUN npm install --force
RUN node -v  # version 12 is installed
RUN dnf module install nodejs:18/common
RUN node -v
RUN npm run build
However, there is no dnf in this image.
How can I do this?

If you can use prebuilt Docker Hub images then this is much easier. I would generally avoid trying to put components with different use cases, build systems, and runtimes into the same image if possible.
In the specific case of a Django application with a React frontend, you might be compiling the frontend to static files that you then serve directly via Django. In this setup you don't need Node to run the application: as long as the static files exist, Django can serve them up. Docker's multi-stage build feature lets you build the front-end using a node image and then COPY the result into your application image. A typical example might look like:
FROM node:18 AS react
WORKDIR /app
COPY frontend_react/package*.json ./
RUN npm ci
COPY frontend_react/ ./
RUN npm run build
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
COPY --from=react /app/dist/ frontend_react/dist/
EXPOSE 8000
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
The first half should look like a normal React image build, except that it doesn't have a CMD. The second half should look like a normal Django image build, plus the COPY --from=react line to get the built application from the first build stage. We don't need node or npm in the final image, only the static files, and so we don't invoke a package manager to try to install them.
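With that Dockerfile, a single docker build runs both stages, and only the python:3.9 stage ends up in the tagged image. Usage is then the ordinary build-and-run pair (the image name here is illustrative):
# build both stages; only the final (python) stage is tagged
docker build -t django-react .
# run the combined image
docker run --rm -p 8000:8000 django-react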

Related

When to build Typescript Node.js app with docker

I am currently working on a full-stack TypeScript app: Express for the server and React for the client. The folder structure looks something like this:
.
├──client/ <-- React app
├──server/ <-- Express server
├──dist/ <-- build result goes into this folder
├──package.json <-- top-level files
Obviously, my app needs an extensive build phase to produce the dist folder: first transpile the server, then build the React app, and finally copy the React build output into the dist folder so Express can serve it as static files.
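For reference, a build phase like that often ends up as npm scripts along these lines (a sketch only; it assumes server/tsconfig.json points its outDir at dist/, and the script names and paths are illustrative, not from the original project):
{
  "scripts": {
    "build:server": "tsc -p server",
    "build:client": "cd client && npm run build",
    "build": "npm run build:server && npm run build:client && cp -r client/build dist/public"
  }
}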
My question is when should I put this build phase if I want to deploy this app using Docker.
First, I can build the app in my environment, and then make a docker image that contains the dependencies and dist folder only. But it feels like I am not truly containerizing my app.
Or I could copy the client and server folders into my Docker image and then build the app inside the image. But that way I have to install all the dev dependencies (like @babel/... or @types/... modules) inside the image, and it doesn't feel right either.
So I want to ask which of the above two is the better way to build and deploy my app with Docker. It'll also be great if you think both are wrong and can suggest a better strategy.
Thanks in advance!
Separate the two apps if possible, React and Express; however, if it is one monolithic application you can put both folders inside a single Docker image.
In Docker you can create a multi-stage image. In the first stage, install the dev dependencies and generate the bundles; in the second, just copy the bundles from the first stage and use them. If Express plays no role in producing dist, you can copy the server directly into the second stage.
# Stage 1: build the frontend bundles with Node
FROM node:10-alpine AS node-build
WORKDIR /my-app/app/static
COPY app/static/package.json ./
RUN npm install && npm install -g --unsafe-perm node-sass
COPY app/static .
RUN npm run build
# Stage 2: the final Python image; only the built assets are carried over
FROM python:3.5-slim
WORKDIR /my-app-final
COPY requirements.txt ./
RUN apt-get update -yq \
&& apt-get install -yq python3-dev build-essential curl \
&& pip install -r requirements.txt \
&& apt-get purge -y --auto-remove gcc python3-dev build-essential
# Copy the built frontend out of the node-build stage
COPY --from=node-build /my-app ./
COPY ./server .
CMD python3 run.py
Above is a small example of a multi-stage build; hope it is helpful.

cannot build docker image

I have been trying to build a Docker image by using this Dockerfile:
FROM mhart/alpine-node:base-6
MAINTAINER techhadmin
COPY ./package.json src/
RUN cd src && npm install
COPY . /src
WORKDIR /src
EXPOSE 3000
CMD ["npm", "start"]
But I receive this error:
/bin/sh: npm: not found
The command '/bin/sh -c cd src && npm install' returned a non-zero code: 127
Any idea how I can solve this?
Read the docs:
https://hub.docker.com/r/mhart/alpine-node/
It says:
# If you need npm, don't use a base tag
# RUN npm install
So don't use the base-6 tag; change the FROM image to something like 7:
FROM mhart/alpine-node:7
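With that one change, the Dockerfile from the question builds as-is (a sketch, assuming the same project layout):
# base tag that includes npm
FROM mhart/alpine-node:7
COPY ./package.json src/
RUN cd src && npm install
COPY . /src
WORKDIR /src
EXPOSE 3000
CMD ["npm", "start"]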
You are seeing this error message because when you tried to run npm install, there was no copy of npm available.
You are using alpine as the base image.
alpine is deliberately a small image, so it has a limited set of default programs inside of it. What programs are in the alpine image? Not many.
So if you are trying to run an alpine image with Node.js, you need to do additional work.
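You can see this for yourself by listing what a bare alpine container ships with (a quick check, not part of the original answer):
# list the programs available in a stock alpine image
docker run --rm alpine ls /bin /usr/bin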
To solve it, you have two options:
Find a different base image - one that already has Node and npm inside of it.
Run alpine with some additional commands that install npm inside of it.
In other words, use someone else's work, or build it up from scratch yourself.
I recommend finding an image preconfigured with npm inside of it. You can navigate to Docker Hub, which is a repository of images.
There is an official Node repository on Docker Hub.
https://hub.docker.com/_/node
So you could do something like this:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Copy the package manifest and install dependencies
COPY package.json ./
RUN npm install
# Copy the rest of the source
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
The nice thing about node:alpine is that you will not get any additional unnecessary packages, just the absolutely stripped-down version of Node.js and nothing else aside from the basics such as the ping command, cat, ls and so on.

How should I Accomplish a Better Docker Workflow?

Every time I change a file in the nodejs app I have to rebuild the docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the nodejs app files without rebuilding the whole image again, or is this a normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start the container without the -v option.
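In docker-compose terms the same idea looks like this (a minimal sketch; the service name and container path are assumptions):
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      # mount the host source over the image's app directory for live edits
      - .:/usr/src/app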
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the docker image every time.
2) I had an issue with node_modules being hidden because a volume acts like a mount - fixed it by relying on Node's module resolution, which walks up parent directories looking for node_modules.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist node_modules one directory up,
# even after the volume is mounted over /usr/src/app
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also have changed the instructions: look in the directory specified by the build-context argument of docker build, find the package.json file, copy that into the current working directory of the container, RUN npm install, and afterwards COPY over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes to the source code as you want, and they will not invalidate the cache for the steps above the final COPY.
The only time that npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> . again.
If I make a change to the Dockerfile, you will see some steps re-run, and eventually our successfully tagged and built image.
Docker detected the change to that step and re-ran it and every step after it, but not the npm install step.
The lesson here is that the order in which instructions are placed in a Dockerfile really does matter.
It's nice to segment out these operations to ensure each step only copies the bare minimum it needs.

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, NPM and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker but I keep running into the Grunt error Fatal error: Unable to find local grunt. no matter how I try to do it inside Docker.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in docker-compose I create a volume linked to the host app, and ask Grunt to watch that volume using Grunt's --base option. It still won't work; I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
#globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
#locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, install the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way. My Django app can live-reload in Docker with no problems, as per Docker's own Django quickstart instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.

ELF Header or installation issue with bcrypt in Docker container

Kind of a longshot, but has anyone had any problems using bcrypt in a linux container (specifically docker) and know of an automated workaround? I have the same issue as these two:
Invalid ELF header with node bcrypt on AWSBox
bcrypt invalid elf header when running node app
My Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD ["npm", "start"]
I get the previously mentioned invalid ELF header error if I have bcrypt already installed in my node_modules, but if I remove it (either just itself or all my packages), it isn't installed for some reason when I build the container. I have to manually enter the container after the build and install it inside.
Is there an automated workaround?
Or maybe, just, what would be a good alternative to bcrypt with a Node stack?
Liam's comment is on the money, just expanding on it for future travellers on the internets.
The issue is that you've copied your node_modules folder into your container. The reason this is a problem is that bcrypt is a native module. It's not just JavaScript, but also a bunch of C code that gets compiled at installation time.
The binaries that come out of that compilation get stored in the node_modules folder and they're customised to the place they were built. Transplanting them out of their OSX home into a strange Linux land causes them to misbehave and complain about ELF headers and fairy feet.
The solution is to echo node_modules >> .dockerignore and run npm install as part of your Dockerfile. This means that the native modules will be compiled inside the container rather than outside it on your laptop.
With this in place, there is no need to run npm install before your start CMD. Just having it in the build phase of the Dockerfile is fine.
protip: the official node images set NODE_ENV=production by default, which npm treats the same as the --production flag. Most of the time this is a good thing. It is not a good thing when your Dockerfile also contains some build steps that rely on dev dependencies (webpack, etc). In that case you want NODE_ENV=null npm install
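In a Dockerfile, that override sits directly on the install step (a minimal sketch):
# "null" is just a non-"production" string, so npm installs devDependencies too
RUN NODE_ENV=null npm install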
pro protip: you can take better advantage of Docker's caching by copying in your package.json separately to the rest of your code. Make your Dockerfile look like this:
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Set working directory to /data
WORKDIR /data
# Copy package.json into /data
COPY package.json /data
# Install dependencies from package.json
RUN npm install
# Add current directory into path /data in image
ADD . /data
# Run index.js
CMD npm start
And that way Docker will only re-run npm install when you change your package.json, not every time you change a line of code.
Okay, so I have a working automated workaround:
Call npm install --production in the CMD instruction. I'm going to wave my hands at figuring out why I have to install bcrypt at the time of executing the container, but it works.
Updated Dockerfile
# Pull base image
FROM node:0.12
# Expose port 8080
EXPOSE 8080
# Add current directory into path /data in image
ADD . /data
# Set working directory to /data
WORKDIR /data
# Install dependencies from package.json
RUN npm install --production
# Run index.js
CMD npm install --production; npm start
If you are on an Alpine-based image (apk is Alpine's package manager), add this command before RUN npm install in your Dockerfile:
RUN apk --no-cache add --virtual builds-deps build-base python3
It worked for me. Maybe it will work for you :)
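For context, here is roughly where that line slots into an Alpine-based Dockerfile (a sketch; it gives node-gyp the compiler toolchain it needs to build bcrypt's native code):
FROM node:alpine
# compiler toolchain for native addons such as bcrypt
RUN apk --no-cache add --virtual builds-deps build-base python3
WORKDIR /data
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]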
