I am using the following Dockerfile to build and host my Angular project in a serverless environment.
When Angular creates a build, the index.html references all the minified JavaScript files, e.g. runtime.fc6cabb48741575b657e.js.
I want to replace that file name with runtime.fc6cabb48741575b657e.js?v=1.25. How can I do this in the Dockerfile after the build?
# Name the node stage "builder"
FROM node:10 AS builder
# Set working directory
WORKDIR /app
# Copy all files from current directory to working dir in image
COPY . .
# install node modules and build assets
RUN npm i && npm run build -- --prod
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=builder /app/dist/Demoproj /usr/share/nginx/html
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
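One way this could be done (my suggestion, not something stated in the original post) is to add a sed step in the nginx stage, right after the COPY --from=builder line, that appends the query string to every .js reference in index.html. The version value 1.25 is hard-coded here purely for illustration:
# append a cache-busting query string to every .js reference in index.html
RUN sed -i 's/\.js"/.js?v=1.25"/g' /usr/share/nginx/html/index.html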
My Dockerfile
FROM node:15.9.0-alpine
ENV NODE_ENV production
# Add a work directory
WORKDIR /
# Cache and Install dependencies
COPY package*.json ./
COPY tsconfig*.json ./
RUN npm install --production
# Copy app files
COPY . .
# Build the app
RUN npm run build
# Bundle static assets with nginx
FROM nginx:1.21.0-alpine as production
ENV NODE_ENV production
# Copy built assets from builder
COPY --from=build /build /usr/share/nginx/html
# Add your nginx.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
When I try to run "docker-compose -f docker-compose.prod.yml build" in the terminal, I get an error:
invalid from flag value builder: pull access denied for builder, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
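For context (my note, not part of the original question): this error usually means the stage name referenced by COPY --from (or by a compose build target) was never declared with AS on a FROM line, so Docker treats it as an external image and tries to pull it from a registry. A minimal sketch of the likely fix, assuming the intent was to copy out of the Node stage above, is to name that stage and reference it consistently:
FROM node:15.9.0-alpine AS build
...
COPY --from=build /build /usr/share/nginx/html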
I have an Angular app that I am trying to dockerize. With the Dockerfile below it builds an image; how do I now run this app locally on the port I exposed (4200)? I am new to Docker, so any help will be appreciated. This will be without nginx.
Dockerfile
# --------------------------------------------------------------------------
FROM node:14 as builder
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install --production
# --------------------------------------------------------------------------
FROM gcr.io/distroless/nodejs:14
USER 9000:9000
# create the base directory
WORKDIR /apps/nodejs/gcp/
ENV HOME=/apps/nodejs/gcp/
# set the home directory
COPY --from=builder node_modules ./node_modules
COPY package.json ./
# copy readme.md
COPY README.md ./
# copy the dist to the home dir
COPY dist ./dist
# DO NOT COPY THE CERTS AND CONFIG FOLDER IN THIS IMAGE. THESE WILL BE INJECTED BY KUBERNETES.
# IN ORDER TO RUN THIS IMAGE IN LOCAL MOUNT THE HOST NODECERT AND CONFIG FOLDER TO THE DOCKER
# docker run -p 9082:9082 --rm \
#--env "NO_UPDATE_NOTIFIER=true NODE_ENV=production PORT=9082 \
#LOGCONSOLE=true CONFIGBASEPATH=/apps/nodejs/gcp/config/ CERTSBASEPATH=/apps/nodejs/gcp/nodecerts" \
#-v /apps/nodejs/gcp/nodecerts:/apps/nodejs/gcp/nodecerts -v /apps/nodejs/gcp/config/:/apps/nodejs/gcp/config/ <image name>
# TO GO INSIDE THE RUNNING CONTAINER
# docker container exec -it <container id> sh
#BUILDING Docker
# docker build -t <image name> .
# <image name>: all lowercase and, if needed, separated by hyphens (-). e.g. redis-service
# port the server will be listening to.
EXPOSE 4200
CMD ng serve --host 0.0.0.0 --port 4200
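For what it is worth, a minimal sketch of how an image like this is usually built and run locally (the image name my-angular-app is just an example, and this assumes the CMD really does start the dev server on port 4200):
docker build -t my-angular-app .
docker run --rm -p 4200:4200 my-angular-app
Then open http://localhost:4200 in the browser. Note that the distroless base image has no shell, so a shell-form CMD like the one above may need to be rewritten in exec form before it can start at all.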
Generate the build of the application
Use this command:
npm run build
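With a default Angular CLI project the output usually lands in a folder such as dist/your-project-name (the exact name comes from angular.json; this is my note, not the answerer's), so the remaining steps are run from inside that folder:
cd dist/your-project-name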
Create a Dockerfile inside the build output folder and add this code to it:
FROM nginx:latest
MAINTAINER yournick@winter.com
COPY ./ /usr/share/nginx/html/
EXPOSE 80
Finally build the docker image using this command:
docker build -t angular-dist-project:v1 .
Now run the image using this command:
docker run -d --name angular-app-container -p 2021:80 angular-dist-project:v1
Now go to your browser and navigate to http://your-ip:2021, for example:
http://localhost:2021
Result: the Angular app is successfully dockerized.
NOTE: Do not forget that this is only one alternative; there are many other ways to do it.
I hope this is clear.
All the best 🌟
Currently I have a Dockerfile which runs a non-optimized React app (it says 'Note that the development build is not optimized. To create a production build, use npm run build.'). The Dockerfile is:
FROM node:16
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 3000
# Finally runs the application
CMD [ "npm", "start" ]
With the above I can hit my service at http://localhost:3000/.
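For reference, and as my assumption about how it is being run rather than something stated in the question, that setup would typically be started with the container port published:
docker build -t my-react-dev .
docker run -p 3000:3000 my-react-dev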
I tried the following (from https://medium.com/geekculture/dockerizing-a-react-application-with-multi-stage-docker-build-4a5c6ca68166), but I could not access my service.
The Dockerfile I tried is:
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
EXPOSE 3000
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 3000
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Does anyone know what to do to fix this (or how to create an optimized build)?
The root issue was that I was not aware that nginx was serving on port 80. The following Dockerfile works and is run like this: docker run -p 80:80 my-ui-app
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 80
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I am confused about how to create a Dockerfile for a Node.js or Angular application. I have searched a lot to achieve this, but without success. I don't understand what is wrong with my Dockerfile below. How can I improve it?
FROM node:12.18-alpine as build
# set working directory
WORKDIR /usr/src
COPY src/ ./src/
# install and cache app dependencies
COPY package.json /app/package.json
RUN cd src/app && npm install @angular/cli && npm install && npm run build
# # start app
# CMD ng serve --host 0.0.0.0 --port 80 --disableHostCheck true
# generate build
RUN ng build --output-path=dist --configuration=production
############
### prod ###
############
# base image
FROM nginx:1.16.0-alpine
COPY ./nginx-config.conf /etc/nginx/conf.d/default.conf
# copy artifact build from the 'build environment'
COPY --from=build /app/dist /usr/share/nginx/html
# expose port 80
EXPOSE 80
# run nginx
CMD ["nginx", "-g", "daemon off;"]
nginx-config.conf:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
Error after the build in the Azure pipeline:
How can I improve my Dockerfile to build a Docker image for an Angular application properly?
I could reproduce this issue with your Dockerfile. It happens because you set the working directory with WORKDIR /usr/src but copy package.json to the absolute path /app/package.json, so it never ends up under the working directory.
Meanwhile, the command RUN cd src/app is resolved relative to the working directory and switches to /usr/src/src/app, which is a different path from where package.json actually exists, so npm install cannot find it.
So, to resolve this issue, you need to change this line:
COPY package.json /app/package.json
to:
COPY package.json ./src/app/package.json
As a test, I used RUN cd src/app && ls to list the files in /usr/src/src/app:
To be brief: I want to build a Docker image for a web project whose configuration can be modified by a parameter that is passed in later, when the image is run.
The project tries to read a file called "environment.json" with the custom properties.
My Dockerfile is this:
# NodeJS
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# I have changed this to a lot of possible solutions
# that I have found on the Internet; I tried everything.
# ARG APP_ENV
# ENV ENVIRONMENT $APP_ENV
RUN node define-env.js ${APP_ENV}
# NGINX
FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
What I am doing is copying three things: the built web project, a Node script responsible for writing the correct environment, and a JSON file with all the possible environment configurations.
The Node script is this:
#!/usr/bin/env node
// we will need file system module to write the values
var fs = require("fs");
// this script is going to receive one parameter (see DockerFile)
var option = process.argv[2];
// taking all the possible configurations
var json = require("./environments.json");
// taking the chosen option (by default dev)
var environmentCfg = json[option] || json.dev;
// writing... It is important to do this task synchronously,
// because we need to be sure it is finished before any later step
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
And the environments.json is something like this:
{
"dev": {
"title": "This is the dev environment",
"anotherAttribute": "Hello dev, I was configured with Docker!"
},
"uat": {
"title": "This is the uat environment",
"anotherAttribute": "Hello uat, I was configured with Docker!"
},
"prod": {
"title": "This is the prod environment",
"anotherAttribute": "Hello prod, I was configured with Docker!"
}
}
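For example, running the script locally with the uat parameter (and assuming a dist/ folder exists next to it) leaves dist/environment.json containing the uat block:
node define-env.js uat
# dist/environment.json now contains:
# {"title":"This is the uat environment","anotherAttribute":"Hello uat, I was configured with Docker!"}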
I do not know how to pass the variable when I run my Docker image. I am trying this:
docker build . -t docker-env
and then, once I have created my image, I try to run it using this command:
docker run -e APP_ENV=uat -it --rm -p 1337:80 docker-env
When I open the project I always see the "dev" configuration.
I have checked by removing the NGINX stage from the Dockerfile, and it works fine when I pass the parameters.
I think something strange (or something I do not understand) is happening when I change the base image from Node to nginx.
EDIT: as @babakabadkheir suggested in the comments, I have tried the following approach, but it is still failing:
FROM hoosin/alpine-nginx-nodejs:latest
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
RUN node define-env.js ${APP_ENV}
RUN mv ./dist/* /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
If I open a shell in the running container, I can see it has written the environment.json as well:
EDIT 2:
I have modified my Dockerfile like this but it is still not working:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# I think this instruction is failing, but when I swap RUN for CMD
# and add a console.log it works without trouble
RUN node define-env.js
# Once I have all the files I move them to the Nginx web root
RUN mv ./dist /usr/share/nginx/html
# Setup Nginx server
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now my Node script looks like:
#!/usr/bin/env node
var fs = require("fs");
// NOTE: this line is already taken via environment
var option = process.env.APP_ENV;
var json = require("./environments.json");
var environmentCfg = json[option] || json.dev;
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
LAST EDIT!
Finally I have it!
The solution was to put everything I need into the CMD Docker instruction:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
You made two mistakes: you mixed up ENV with ARG, and you never declared the ARG, so the variable was empty at build time.
When you define an ARG you can pass a value when building the image; if none is passed, the default value is used. The value can then be read in the Dockerfile like any other variable:
(Note: I used slightly different Docker images.)
Dockerfile
FROM node:12-alpine as build
# Note: I specified a default value
ARG APP_ENV=dev
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# Read the value of the ARG declared above
RUN node define-env.js $APP_ENV
# NGINX
FROM nginx:1.19
COPY --from=build /app/dist /usr/share/nginx/html
# Not necessary to specify the port or CMD here; the nginx base image already provides them
Now, to build the image, specify the APP_ENV argument value:
Build
docker build --build-arg APP_ENV=uat -t docker-uat .
Run
docker run -d -p 8888:80 docker-uat
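Since the ARG declares dev as its default, building without --build-arg bakes in the dev configuration instead (the image name docker-dev is just for illustration):
docker build -t docker-dev .
docker run -d -p 8888:80 docker-dev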
TL;DR
Your request is to specify the configuration at runtime; this problem has two solutions:
Use an ENV variable when running the container
Use an ARG when building the image
The first solution has a security drawback but the advantage of distributing a single image: you copy the entire environments file from the host into the container and pass the correct ENV value when running it.
The second solution is slightly more complex and does not have the security problem, but you must build a separate image for each configuration.
The second solution is the answer already written above; for the first solution you simply read the env variable at runtime.
Dockerfile
# NodeJS
FROM node:12-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY environments.json /app/dist
# NGINX
FROM nginx:1.19
# Define the environment variable
ENV APP_ENV=dev
COPY --from=build /app/dist /usr/share/nginx/html
Build
docker build -t docker-generic .
Run
docker run -d -p 8888:80 -e APP_ENV=uat docker-generic
Code
// In your Node.js code you simply read the ENV value
// ....
var env = process.env.APP_ENV
References
Dockerfile ARG
Docker build ARG
Dockerfile ENV
Docker run ENV
The solution for this is to put everything that is modifiable at runtime into the CMD instruction.
As I have experienced, RUN only executes once, at build time, whereas CMD runs every time the container starts.
So I have put this in my Dockerfile:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
It runs my script, moves all the files, and then serves the application.
I hope this helps other developers, because for me it has been a pain.
TL;DR
In summary, the Dockerfile should be something like this:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# Setup Nginx server
EXPOSE 80
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
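Built that way, the environment is chosen when the container starts, using the same commands as earlier in the question (ports and image name are illustrative):
docker build -t docker-env .
docker run -e APP_ENV=uat -it --rm -p 1337:80 docker-env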