How to set azure-function app port from docker -e variable? - node.js

I'm trying to create an Azure queue listener in Docker and deploy it as an Azure Function.
Azure runs my docker with command similar to following:
docker run -d -p 16506:8081 --name queue-listener_0 -e PORT=8081 ...
The only thing I need to do is get that port variable and pass it to func start --port $PORT in the entrypoint script, but the problem is that bash doesn't see variables passed with the -e flag.
Dockerfile:
FROM tarampampam/node:10.10-alpine as buildContainer
COPY package.json package-lock.json entrypoint.sh host.json extensions.csproj proxies.json /app/
COPY /QueueTrigger/function.json /app/
#COPY /app/dist /app/dist
### only for local launch
#COPY /local.settings.json /app
WORKDIR /app
RUN npm install
COPY . /app
RUN npm run build
FROM mcr.microsoft.com/azure-functions/node:2.0
WORKDIR /app
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobs_ExtensionsPath=/app/bin
# Copy dependency definitions
COPY --from=buildContainer /app/package.json /app/
# Get all the code needed to run the app
COPY --from=buildContainer /app/dist /app/
COPY --from=buildContainer /app/function.json /app/QueueTrigger/
COPY --from=buildContainer /app/bin /app/bin
COPY --from=buildContainer /app/entrypoint.sh /app
COPY --from=buildContainer /app/host.json /app
COPY --from=buildContainer /app/extensions.csproj /app
COPY --from=buildContainer /app/proxies.json /app
COPY --from=buildContainer /app/resources /app/resources
### only for local launch
#COPY --from=buildContainer /app/local.settings.json /app
RUN chmod 755 /app/entrypoint.sh
COPY --from=buildContainer /app/node_modules /app/node_modules
RUN npm i -g azure-functions-core-tools@core --unsafe-perm true
RUN apt-get update && apt-get install -y ghostscript && gs -v
# Serve the app
ENTRYPOINT ["sh", "entrypoint.sh"]
Entrypoint:
#!/bin/bash
func start --port $PORT
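(For reference, a more defensive version of this entrypoint, just a sketch and not the fix recommended in the answer below, would log what actually arrived and fall back to a default when PORT is unset:)
#!/bin/sh
# Show whether PORT actually made it into the container's environment
env | grep -i '^port'
# Fall back to port 80 if PORT was not passed via -e
func start --port "${PORT:-80}"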

func is more for local development.
The mcr.microsoft.com/azure-functions/node:2.0 image already has the runtime packaged with the default entrypoint set to start it. You really don't need func here.
But, if ever required, you can customize the port even with just the runtime.
You would have to remove these last few lines from the Dockerfile:
RUN chmod 755 /app/entrypoint.sh
RUN npm i -g azure-functions-core-tools#core --unsafe-perm true
# Serve the app
ENTRYPOINT ["sh", "entrypoint.sh"]
And run your container like this
docker run -d -p 16506:8081 --name queue-listener_0 -e ASPNETCORE_URLS=http://+:8081 ...
Note that local.settings.json won't be picked up by the runtime. Your App Settings would have to be set manually as environment variables.
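To make that concrete, here is a minimal sketch of the final stage under that approach (the COPY paths come from the Dockerfile above; AzureWebJobsStorage is just an example App Setting):
FROM mcr.microsoft.com/azure-functions/node:2.0
ENV AzureWebJobsScriptRoot=/app
WORKDIR /app
# Copy the built app as before, but ship no entrypoint.sh and skip the
# core-tools install; the image's default entrypoint starts the runtime
COPY --from=buildContainer /app/dist /app/
COPY --from=buildContainer /app/host.json /app
Run it with the App Settings passed as environment variables:
docker run -d -p 16506:8081 --name queue-listener_0 -e ASPNETCORE_URLS=http://+:8081 -e AzureWebJobsStorage="<connection-string>" <image>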

Related

Docker Build Error: executor failed running [/bin/sh -c npm run build]: exit code: 1 with NextJS and npm

I have a NextJS app with the following Dockerfile.production:
FROM node:16-alpine AS deps
ENV NODE_ENV=production
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
FROM node:16-alpine AS builder
ENV NODE_ENV=production
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
CMD ["node", "server.js"]
This is my docker-compose.production.yml:
services:
  app:
    image: wayve
    build:
      dockerfile: Dockerfile.production
    ports:
      - 3000:3000
When I run docker-compose -f docker-compose.production.yml up --build --force-recreate in my terminal (at the root) I get the following build error:
failed to solve: executor failed running [/bin/sh -c npm run build]: exit code: 1
I do not see any issues with my docker-compose file or my Dockerfile. How can I fix this issue? TYA
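The compose output hides the underlying npm error. A first debugging step (a sketch, not a guaranteed fix) is to build only the failing stage with plain BuildKit output so the full npm log becomes visible; note also that NODE_ENV=production in the deps stage makes npm ci and yarn skip devDependencies, which next build usually needs:
# Build just the "builder" stage and print the unabridged build log
docker build --progress=plain --target builder -f Dockerfile.production .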

Docker: hiding the content of bash script file when running a docker container

I am building a docker image for a React front-end application that needs to be deployed within a server owned by a client.
So, I have created a Dockerfile that contains two stages (build and production). I am using Vite to build the web app:
FROM node:18-alpine as build
WORKDIR /app
COPY --chown=node:node package*.json ./
RUN npm install
COPY --chown=node:node . .
RUN npm run build
ENV NODE_ENV=production
USER node
FROM node:18-alpine as production
WORKDIR /app/build
ENV APPLICATION_PORT=3001
EXPOSE ${APPLICATION_PORT}
COPY --chown=node:node --from=build /app/node_modules ./node_modules
COPY --chown=node:node --from=build /app/dist ./dist
COPY --chown=node:node --from=build /app/package*.json ./
COPY --chown=node:node --from=build /app/docker-entrypoint.sh ./
COPY --chown=node:node --from=build /app/.env ./
RUN npm install serve -g
RUN npm install @import-meta-env/cli --save-dev
RUN chmod +x ./docker-entrypoint.sh
ENTRYPOINT ["sh", "docker-entrypoint.sh" ]
Within the docker-entrypoint.sh, I am doing some validation to make sure that the environment variables are set:
#!/bin/sh
set -x
function validateEnvVariables {
  echo "validating the front-end env variables."
  if [[ -z "${API_BASE_URL}" ]]; then
    echo "The API_BASE_URL is not set."
    exit 1
  fi
  if [[ -z "${MATERIAL_UI_KEY}" ]]; then
    echo "The MATERIAL_UI_KEY is not set."
    exit 1
  fi
}
validateEnvVariables
echo "Setting the variables"
npx import-meta-env --example .env
echo "Start the server"
npm run serve
With that, I am able to create the image. However, when I create a container from the built image, the code of docker-entrypoint.sh is always displayed in the terminal (with a + sign beside each line of code).
I am also able to see the environment variables that are passed on the docker run command line:
docker run -e API_BASE_URL=http://localhost:3000 -e MATERIAL_UI_KEY=SOME_KEY -p 3001:3001 --name front-end someImageName
Terminal Display:
+ validateEnvVariables
+ echo 'validating the front-end env variables.'
+ '[[' -z http://localhost:3000 ]]
+ '[[' -z SOME_KEY ]]
+ echo 'Setting the variables'
+ npx import-meta-env --example .env
validating the front-end env variables.
Setting the variables
Start the server
+ echo 'Start the server'
+ npm run serve
> consentement-municipal@0.0.0 serve
> serve -n -s dist -p $APPLICATION_PORT
INFO: Accepting connections at http://localhost:3001
So, is there a way to tell Docker not to display the script's source and only print the values from the echo statements?
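The + lines are shell tracing: set -x at the top of the script tells the shell to echo every command before running it. Dropping that line (and, since the image runs the script with sh, using POSIX-style functions and [ ] tests) leaves only the explicit echo output. A minimal sketch:
#!/bin/sh
# No "set -x", so commands are not echoed back; only echo output is printed
validateEnvVariables() {
  echo "validating the front-end env variables."
  if [ -z "${API_BASE_URL}" ]; then
    echo "The API_BASE_URL is not set."
    exit 1
  fi
  if [ -z "${MATERIAL_UI_KEY}" ]; then
    echo "The MATERIAL_UI_KEY is not set."
    exit 1
  fi
}
validateEnvVariables
echo "Setting the variables"
npx import-meta-env --example .env
echo "Start the server"
npm run serve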

Passing environment variables in Dockerfile not working when I use 2 baseImages

To be brief, I want to build a container in Docker with a web project whose configuration can be modified by a parameter that is passed later, when running the image in Docker.
The project tries to read a file called "environment.json" with the custom properties.
My Dockerfile is this:
# NodeJS
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# I have changed this with a lot of possible solutions
# that I have found in Internet, I tried everything.
# ARG APP_ENV
# ENV ENVIRONMENT $APP_ENV
RUN node define-env.js ${APP_ENV}
# NGINX
FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
What I am doing is copying the built web project, a Node script responsible for writing the correct environment, and a JSON file with all the possible configuration environments.
The Node script is this:
#!/usr/bin/env node
// we will need file system module to write the values
var fs = require("fs");
// this script is going to receive one parameter (see DockerFile)
var option = process.argv[2];
// taking all the possible configurations
var json = require("./environments.json");
// taking the chosen option (by default dev)
var environmentCfg = json[option] || json.dev;
// writing... It is important to do this task synchronously,
// because we need to be sure it has finished before any later step
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
And the environments.json is something like this:
{
  "dev": {
    "title": "This is the dev environment",
    "anotherAttribute": "Hello dev, I was configured with Docker!"
  },
  "uat": {
    "title": "This is the uat environment",
    "anotherAttribute": "Hello uat, I was configured with Docker!"
  },
  "prod": {
    "title": "This is the prod environment",
    "anotherAttribute": "Hello prod, I was configured with Docker!"
  }
}
I do not know how to pass the variable when I run my Docker image. I am trying this:
docker build . -t docker-env
and then, once I have created my image I try to run it using this command:
docker run -e APP_ENV=uat -it --rm -p 1337:80 docker-env
When I go to the project I always see the "dev" configuration.
I checked by removing the NGINX part from the Dockerfile, and it works fine when I pass the parameter.
I think something odd (that I do not understand) happens when I change the base image from Node to Nginx.
EDIT: as @babakabadkheir suggested in the comments, I have tried the following approach, but it is still failing:
FROM hoosin/alpine-nginx-nodejs:latest
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
RUN node define-env.js ${APP_ENV}
RUN mv ./dist/* /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
If I open a shell in the built image, I can see that it has written the environment.json as well.
EDIT 2:
I have modified my Dockerfile like this but it is still not working:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# I think this instruction is failing, but when I swap RUN for
# CMD and add a console.log, it works without trouble
RUN node define-env.js
# Once I have all the files I move it to the Nginx workspace
RUN mv ./dist /usr/share/nginx/html
# Setup Nginx server
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now my Node script looks like:
#!/usr/bin/env node
var fs = require("fs");
// NOTE: the option is now taken from the environment
var option = process.env.APP_ENV;
var json = require("./environments.json");
var environmentCfg = json[option] || json.dev;
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
LAST EDIT!
Finally I have it!
The solution was to put everything I need in the CMD Docker instruction:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
You made two mistakes: you mixed up ENV with ARG, and there is a syntax error.
When you define an ARG you can pass a value when building the image; if none is specified, the default value is used. Then read the value in the Dockerfile without the braces:
(Note: I used slightly different docker images)
Dockerfile
FROM node:12-alpine as build
# Note: I specified a default value
ARG APP_ENV=dev
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# Note $APP_ENV and NOT ${APP_ENV} without braces
RUN node define-env.js $APP_ENV
# NGINX
FROM nginx:1.19
COPY --from=build /app/dist /usr/share/nginx/html
# Not necessary to specify port and CMD
Now, to build the image, you must specify the APP_ENV argument value.
Build
docker build --build-arg APP_ENV=uat -t docker-uat .
Run
docker run -d -p 8888:80 docker-uat
TL;DR
Your request is to specify the configuration at runtime; this problem has two solutions:
Use an ENV var when running the container
Use an ARG when building the image
The first solution has a security drawback but the advantage of distributing a single image: you copy the entire environments file into the container and pass the correct ENV value when running it.
The second solution is slightly more complex and does not have the security problem, but you must build a separate image for each config.
The second solution is the answer already written above; for the first solution you simply read the env variable at runtime.
Dockerfile
# NodeJS
FROM node:12-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY environments.json /app/dist
# NGINX
FROM nginx:1.19
# Define the environment variable
ENV APP_ENV=dev
COPY --from=build /app/dist /usr/share/nginx/html
Build
docker build -t docker-generic .
Run
docker run -d -p 8888:80 -e APP_ENV=uat docker-generic
Code
// In your Node.js code you simply read the ENV value
// ....
var env = process.env.APP_ENV
References
Dockerfile ARG
Docker build ARG
Dockerfile ENV
Docker run ENV
The solution for this is to put all the things that are modifiable at runtime in the CMD instruction.
As I have experienced, RUN only executes at build time, while CMD runs when the container starts.
So I have put in my Dockerfile:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
It runs my script, moves all the files and then serves my application.
I hope this helps other developers, because for me it has been a pain.
TL;DR
In summary, the Dockerfile should be something like this:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# Setup Nginx server
EXPOSE 80
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
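The distinction that bit the author here, shown as a tiny standalone illustration (not part of the original question): RUN executes once at docker build, before any -e value exists, while CMD executes at docker run, where -e overrides are visible.
FROM alpine
ENV WHEN=buildtime
# Runs during "docker build"; a later "docker run -e" can never reach it
RUN echo "RUN sees: $WHEN"
# Runs during "docker run"; -e WHEN=runtime overrides the baked-in value
CMD echo "CMD sees: $WHEN"
Building prints "RUN sees: buildtime", while docker run -e WHEN=runtime <image> prints "CMD sees: runtime".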

NGINX fails at COPY --from=node /app/dist/comp-lib /usr/share/nginx/html,

COPY failed: stat /var/lib/docker/overlay2/1e9a0e53a11b406c13d4fc790336f37285927a1b87d1bac4d0e889c6d3cfed9b/merged/app/dist/comp-lib: no such file or directory
I tried running docker system prune and restarted Docker a bunch of times. I also gave rm -rf /var/lib/docker a shot in the Docker VM; somehow that doesn't remove the directory.
Node version: v10.15.1
Docker version: 18.09.2, build 6247962
Dockerfile:
# stage-1
FROM node as builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
# stage -2
FROM nginx:alpine
COPY --from=node /app/dist/comp-lib /usr/share/nginx/html
I expect the build to be successful but the above mentioned is the error I'm experiencing.
In your stage 2
COPY --from=node /app/dist/comp-lib /usr/share/nginx/html
should be
COPY --from=builder /app/dist/comp-lib /usr/share/nginx/html
since stage 1 is called builder and not node.
This is the dockerfile that I use for my Angular apps:
FROM johnpapa/angular-cli as angular-built
WORKDIR /usr/src/app
COPY package.json package.json
RUN npm install --silent
COPY . .
RUN ng build --prod
FROM nginx:alpine
LABEL author="Preston Lamb"
COPY --from=angular-built /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD [ "nginx", "-g", "daemon off;" ]
I've never had any issues with this configuration. There's more information as well in this article.

Passing NODE_ENV to docker to run package.json scripts

This is my Dockerfile:
FROM node:6-onbuild
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 80
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
and in package.json I have this:
"scripts": {
  "start": "node start.js",
  "stagestart": "NODE_ENV=content-staging node start.js"
}
The start script is for production; now I want a way to run the staging script in the Dockerfile. Is there a way to read NODE_ENV inside the Dockerfile, so I can have one Dockerfile which handles both staging and production?
Here are two possible implementations.
FYI: you don't need to set NODE_ENV in package.json if you already set NODE_ENV at the system level, or set it at build time or runtime in Docker.
Here the Dockerfile is the same, but I used the alpine base image:
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
ARG DOCKER_ENV
ENV NODE_ENV=${DOCKER_ENV}
RUN if [ "$DOCKER_ENV" = "stag" ] ; then echo your NODE_ENV for stage is $NODE_ENV; \
else echo your NODE_ENV for dev is $NODE_ENV; \
fi
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
When you build this Dockerfile with this command:
docker build --build-arg DOCKER_ENV=stag -t test-node .
You will see this in the build output at that layer:
---> Running in a6231eca4d0b your NODE_ENV for stage is stag
When you run this Docker container and execute this command, the output will be:
/usr/src/app # echo $NODE_ENV
stag
Simplest approach: use the same image, but set the environment variable at run time.
Your Dockerfile
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
Build and run this Docker image with these commands:
docker build -t test-node .
docker run --name test -e NODE_ENV=content-staging -p 3000:3000 --rm -it test-node ash
So when you run this command in the container, you will see:
/usr/src/app # echo $NODE_ENV
content-staging
So this is how you can start your Node application with NODE_ENV without setting the environment variable in package.json. If your Node.js configuration is keyed on NODE_ENV, it will pick the configuration accordingly.
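For instance, a configuration module keyed on NODE_ENV could look like this (a sketch; the names and URLs are illustrative, not from the question):
// config.js: pick settings based on NODE_ENV, falling back to production
var configs = {
  production: { apiUrl: "https://api.example.com" },
  "content-staging": { apiUrl: "https://staging.example.com" }
};
module.exports = configs[process.env.NODE_ENV] || configs.production;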
You can use the ENV instruction to expose the value as an environment variable inside the container. Then have an entrypoint script that injects the variable (perhaps with something as simple as sed) in place of a placeholder in your package.json file, and then starts your Node application. Obviously this will require a few changes to your Dockerfile with regard to the entrypoint script, etc.
That is how I have achieved such things in the past.
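A sketch of that entrypoint approach, assuming a hypothetical NODE_ENV_PLACEHOLDER token sitting in package.json (the token name is illustrative, not a real convention):
#!/bin/sh
# docker-entrypoint.sh: substitute the placeholder with the real NODE_ENV,
# then hand control to npm (exec keeps signal handling intact)
sed -i "s/NODE_ENV_PLACEHOLDER/${NODE_ENV}/g" package.json
exec npm run start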
