Problem
Currently I can only create simple Dockerfiles and have no experience with docker-compose. I have taken over a project where a static page is built with Nuxt.js. I want to set up a development environment where I can work with hot reload, or at least one that immediately copies changes over to the nginx service.
Right now the dist folder is only copied into the nginx/html folder when I build the container; only then can I view the page.
Question:
What can I do so that the nginx/html folder is updated (hot reload) whenever I save a Vue file?
I found an interesting approach on SO (option 4): https://stackoverflow.com/a/64010631/8466673
But I don't know how to set up a docker-compose configuration from my single Dockerfile.
My Dockerfile:
# build project
FROM node:14 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
# create nginx server
FROM nginx:1.19.0-alpine AS prod-stage
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
Related
Quite recently I have found many websites proposing a solution that encapsulates an npm build and NGINX in a single Dockerfile using so-called multi-stage Docker builds.
# first stage builds vue
FROM mhart/alpine-node:12 as build-stage
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run build
# second stage copies only the static dist files to nginx html dir
FROM nginx:stable-alpine as production-stage
VOLUME /var/log/nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
But it's not clear to me. After all, a Docker container should host only one process, while in these examples it runs the npm server and, separately, NGINX - am I reading the instructions in the Dockerfile correctly?
Isn't it more reasonable to take a "side-car" approach when hosting this on a service like Kubernetes or AWS ECS?
When the below line of code runs
COPY --from=build-stage /app/dist /usr/share/nginx/html
You are just copying the compiled JS/HTML and then hosting it via nginx, so there are not two processes here. When you run npm start it runs a dev server, which you don't need for production builds.
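If in doubt, you can check this yourself by listing the processes in a running container; only nginx shows up (the image and container names here are just examples):

docker build -t my-vue-app .
docker run -d --name vue-test -p 8080:80 my-vue-app
docker top vue-test    # only the nginx master and worker processes appear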
Currently I have a Dockerfile which runs a non-optimized React app (it says 'Note that the development build is not optimized. To create a production build, use npm run build.'). The Dockerfile is:
FROM node:16
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 3000
# Finally runs the application
CMD [ "npm", "start" ]
With the above I can hit my service at http://localhost:3000/.
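For reference, I build and run this dev image roughly like so (the port mapping is presumably along these lines):

docker build -t my-ui-app .
docker run -p 3000:3000 my-ui-app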
I tried the following (from https://medium.com/geekculture/dockerizing-a-react-application-with-multi-stage-docker-build-4a5c6ca68166) but I could not access my service:
The Dockerfile I tried is:
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
EXPOSE 3000
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 3000
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Does anyone know what to do to fix this (or how to create an optimized build)?
The root issue was that I was not aware that nginx was serving on port 80. The following Dockerfile works and is run like this: docker run -p 80:80 my-ui-app
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 80
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I would like to put my Node.js app into a Docker container. When I deploy it via npm run build and npm start, I can send requests to it.
But when creating a Docker image I run into problems:
First, I have EXPOSE 8080 in my Dockerfile. Then I run docker run -p=3000:8080 --env-file .env my-docker-file. After that I get the message that the server is running on http://localhost:3000.
I know localhost:3000 is just inside the Docker container. But at least the container is running.
When I use the command http localhost:3000 (or the browser) I get http: error: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) while doing a GET request to URL: http://localhost:3000/.
Does someone have an idea what's going wrong? I have no clue.
Thanks for any hints that point me in the right direction.
My Dockerfile:
## this is stage one, also known as the build step
FROM node:12.17.0-alpine as builder
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
COPY tsconfig.json .
COPY src ./src/
COPY tests ./tests/
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
## this is stage two, where the app actually runs
FROM node:12.17.0-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD npm start
If you use a Dockerfile, you first have to build your image.
FROM node:12.17.0-alpine as builder
WORKDIR /app
COPY . .
RUN npm install
RUN npx prisma generate
RUN npm run build
## this is stage two, where the app actually runs
FROM node:12.17.0-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD ["npm","start"]
From the location where your Dockerfile is located, run:
docker build -t your-image-name .
docker run -p 3000:8080 --env-file .env your-image-name
Did you check the IP address?
When I first deployed my Node project to Docker, I couldn't access it either, because my Node project was listening only for localhost requests. But unless you specify your network as host, your Docker container will have some other IP address in your subnet.
I changed my Node project's listening address to 0.0.0.0, and after that I could connect to the Node project running in the Docker container.
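In plain Node that is just the second argument to listen. A minimal sketch (port 8080 assumed, to match the Dockerfile above):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// '0.0.0.0' binds to all interfaces, so Docker's port mapping can reach
// the server; 'localhost' would only accept connections originating
// inside the container itself.
server.listen(8080, '0.0.0.0');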
I am not sure whether the situation has changed, but it seems I am stuck with the versions I am using.
Previously, in Angular 7, we were able to generate the server files for Angular Universal at the root level, so we could put node main.js in app.yaml and Google App Engine just found the way to run our web application. This no longer seems possible with Angular 9.
We are using Angular SSR for our production web site. It compiles all the server files into the /dist-server folder. There is a Dockerfile to deploy it on Google App Engine:
FROM node:12-alpine as buildContainer
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm install
COPY . /app
# This will create the dist/ and dist-server/ folders in the image
RUN npm run build:ssr
FROM node:12-alpine
WORKDIR /app
COPY --from=buildContainer /app/package.json /app
COPY --from=buildContainer /app/dist /app/dist
COPY --from=buildContainer /app/dist-server /app/dist-server
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]
In package.json we have:
"serve:ssr": "node dist-server/main.js",
In order to start the deployment, we type gcloud app deploy in the terminal, and everything works fine in this process. The main problem is that it takes almost 25 minutes to finish; the main bottleneck is the compilation step.
I thought we could compile the repo on our local dev machine and copy only the dist/ and dist-server/ folders into the image, adding node dist-server/main.js to the Dockerfile to run our web application. But whenever I tried to copy only the dist and dist-server folders, I got the error message below:
COPY failed: stat /var/lib/docker/tmp/docker-builder{random numbers}/dist: no such file or directory
I also tried to compile main.js, the main server file for Angular Universal, at the same level as app.yaml. I assumed this is required by the Google App Engine Node.js deployment rules, since there is an example repo from Google. But I cannot compile our main.js file into the root folder; it gives the error message below:
An unhandled exception occurred: Output path MUST not be project root directory!
So I am looking for a solution that does not require Google App Engine to rebuild our repo (since we can do this on our dev machine and upload the compiled files to save time), making the deployment process faster.
Thanks for your help
I found that the .dockerignore file had the dist and dist-server folders in it. I removed those entries, and I am now able to compile and deploy the Dockerfile on Google App Engine.
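For anyone hitting the same COPY failed ... no such file or directory error: paths listed in .dockerignore never reach the build context, so COPY cannot see them. A sketch of what the corrected .dockerignore might look like (entries are examples):

# .dockerignore
node_modules
.git
# dist and dist-server were listed here before; removing them lets the
# prebuilt output into the build context so the COPY steps can find it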
I am trying to run an Angular application in development mode inside a Docker container. When I build it with docker-compose build it completes correctly, but when I try to bring up the container I get the error below:
ERROR: for sypgod Cannot start service sypgod: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"npm\": executable file not found in $PATH
The real problem is that it doesn't recognize the command npm serve, but why?
The setup is as follows:
Docker container (Nginx reverse proxy -> Angular running on port 4000)
I know there are better ways of deploying this, but at the moment I need this setup for personal reasons.
Dockerfile:
FROM node:10.9
COPY package.json package-lock.json ./
RUN npm ci && mkdir /angular && mv ./node_modules ./angular
WORKDIR /angular
RUN npm install -g @angular/cli
COPY . .
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
CMD ["npm", "serve", "--port 4000"]
NginxServerSite
server {
    listen 80;
    server_name sypgod;

    location / {
        proxy_read_timeout 5m;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:4000/;
    }
}
Docker Compose file (the important part, where I have the problem):
sypgod:                    # The name of the service
  container_name: sypgod   # Container name
  build:
    context: ../angular
    dockerfile: Dockerfile # Location of our Dockerfile
The image that's finally getting run is this:
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["npm", "serve", "--port 4000"]
The first stage doesn't have any effect (you could COPY --from=... files out of it), and if there are multiple CMDs, only the last one has an effect. Since you're running this in a plain nginx image, there's no npm command, leading to the error you see.
I'd recommend using Node on the host for a live development environment. When you've built and tested your application and are looking to deploy it, then use Docker if that's appropriate. In your Dockerfile, run ng build in the first stage to compile the application to static files, add a COPY --from=... in the second stage to get the built application into the Nginx image, and delete all the CMD lines (nginx has an appropriate default CMD). @VikramJakhar's answer has a more complete Dockerfile showing this.
It looks like you might be trying to run both Nginx and the Angular development server in Docker. If that's your goal, you need to run these in two separate containers. To do this:
1. Split this Dockerfile into two. Put the CMD ["npm", "serve"] line at the end of the first (Angular-only) Dockerfile.
2. Add a second block in the docker-compose.yml file to run the second container. The backend npm serve container doesn't need to publish ports:.
3. Change the host name of the backend server in the Nginx config from localhost to the Docker Compose name of the other container (see the sketch below).
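A sketch of what that two-container docker-compose.yml could look like (the ports and names are assumptions based on the files above; the dev server must also listen on 0.0.0.0 to be reachable from the proxy container):

services:
  sypgod:
    build:
      context: ../angular
      dockerfile: Dockerfile   # the Angular-only Dockerfile ending in the npm serve CMD
    # no ports: needed; only the proxy talks to this container
  proxy:
    image: nginx:alpine
    volumes:
      - ./toborFront.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"

In toborFront.conf the proxy_pass target then becomes http://sypgod:4000/ instead of http://localhost:4000/.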
It would appear that npm can't be accessed from within the container.
Try defining where it tries to execute it from:
docker run -v "$PWD":/usr/src/app -w /usr/src/app node:10.9 npm serve --port 4000
source: https://gist.github.com/ArtemGordinsky/b79ea473e8bc6f67943b
Also make sure that npm is installed on the computer running the docker container.
You can do something like this:
### STAGE 1: Build ###
# We label our stage as ‘builder’
FROM node:alpine as builder
RUN apk --no-cache --virtual build-dependencies add \
git \
python \
make \
g++
RUN mkdir -p /ng-app/dist
WORKDIR /ng-app
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm install
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN npm run ng build -- --prod --output-path=dist
### STAGE 2: Setup ###
FROM nginx:1.14.1-alpine
## Copy our default nginx config
COPY toborFront.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From ‘builder’ stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
If you have Portainer.io installed for managing your Docker setup, you can open the console for a particular container from a browser.
This is useful if you want to run a reference command like "npm list" to show what versions of dependencies have been loaded.
I found this useful for diagnosing issues where an update to a dependency had broken something: the app worked fine in a test environment, but the Docker build had installed newer minor versions which broke the application.
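Without Portainer, the same check can be done with docker exec against the running container (the container name is an example):

docker exec -it my-container sh -c "npm list --depth=0"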