Pass variable to Dockerfile CMD, Node.js

I'm getting used to Docker. Here is my current Dockerfile:
FROM node:12-alpine AS builder
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV}
RUN npm run build
CMD ["sh","-c","./start-docker.sh ${NODE_ENV}"]
I'm using pm2 to manage the Node.js cluster; here is my start-docker.sh:
NODE_PATH=. pm2-runtime ./ecosystem.config.js --env $NODE_ENV
In my ecosystem.config.js, I define an env:
env_dev: {
NODE_ENV: 'development'
}
Everything runs, but on my server NODE_ENV is ''. I think something goes wrong when I pass it in my CMD, but I can't figure out what.

Okay, in my mind there is another way to do this; please try it. This will not be actual code, just the idea.
ecosystem.config.js
module.exports = {
apps : [{
name: "app",
script: "./app.js",
env: {
NODE_ENV: "development",
},
env_production: {
NODE_ENV: "production",
}
}]
}
And your Dockerfile:
FROM node:12-alpine
RUN npm run build
CMD ["pm2","start","ecosystem.config.js"]
As described in the PM2 CLI documentation, you just need to run pm2 start ecosystem.config.js to start the application; it automatically picks up the env variables declared in ecosystem.config.js:
https://pm2.keymetrics.io/docs/usage/application-declaration/#cli
Please try this. You might hit new problems, but hopefully ones with error logs so we can debug further. I am fairly confident this will resolve your problem.
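One caveat with this idea: inside a container, a plain pm2 start daemonizes, PID 1 exits, and the container stops, which is why PM2's documented entrypoint for Docker is pm2-runtime. Below is a minimal sketch combining pm2-runtime with the original build-arg approach (paths and the fallback default are illustrative assumptions, not the asker's actual project):
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN npm install && npm run build
RUN npm install -g pm2
# Passed at build time: docker build --build-arg NODE_ENV=production .
# The question's Dockerfile declared ARG NODE_ENV with no default, so
# omitting --build-arg leaves it empty, which matches the NODE_ENV=''
# symptom on the server.
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
# Shell form so ${NODE_ENV} expands at runtime; pm2-runtime stays in the
# foreground and applies the matching env_<name> block from the config.
CMD ["sh", "-c", "pm2-runtime start ecosystem.config.js --env ${NODE_ENV}"]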

Related

Node/Express server deploy with Heroku and PM2

I'm attempting to add clustering via PM2 to my Node/Express application and deploy it.
I've set up the following command:
pm2 start build/server/app.js -i max
The above works fine locally. I'm testing the functionality on a staging environment on Heroku via Performance 1X.
On Heroku, the log for this command shows it attempting 1 instance rather than max. It shows the typical info after a successful pm2 start, but you can see the app immediately crashes afterward.
Any advice or guidance is appreciated.
I ended up using the following documentation: https://pm2.keymetrics.io/docs/integrations/heroku/
Using an ecosystem.config.js with the following:
module.exports = {
apps : [
{
name: `app-name`,
script: 'build/server/app.js',
instances: "max",
exec_mode: "cluster",
env: {
NODE_ENV: "localhost"
},
env_development: {
NODE_ENV: process.env.NODE_ENV
},
env_staging: {
NODE_ENV: process.env.NODE_ENV
},
env_production: {
NODE_ENV: process.env.NODE_ENV
}
}
],
};
Then the following package.json script handles deployment for whichever environment I am targeting, e.g. production:
"deploy:cluster:prod": "pm2-runtime start ecosystem.config.js --env production --deep-monitoring",
I got the same error but I fixed it by adding
{
"preinstall":"npm I -g pm2",
"start":"pm2-runtime start build/server/app.js -i 1"
}
to my package.json file.
This is advised for a production environment, whereas running
pm2 start build/server/app.js -i max
is for development purposes.

Environment variables are undefined during Cloud Run Build

I use Google Cloud Run to containerize a Node.js app. I added environment variables to Cloud Run by following this guide and expected to use them inside my application code. But whenever the build runs (Cloud Build), process.env.NODE_ENV and the other environment variables are undefined.
Could you help me find the root cause of the issue?
Dockerfile
FROM node:14.16.0
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
# Copy local code to the container image.
COPY . ./
RUN yarn install
RUN yarn build
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
# Use the official lightweight Node.js 14 image.
# https://hub.docker.com/_/node
FROM node:14.16.0
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# Copying this first prevents re-running npm install on every code change.
COPY package.json yarn.lock ./
# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN yarn install --production --frozen-lockfile
COPY --from=0 /usr/src/app/dist ./dist
EXPOSE 8080
# Run the web service on container startup.
CMD [ "yarn", "prod" ]
This line of code throws the error:
RUN npx knex --knexfile=./src/infrastructure/knex/knex.config.ts migrate:latest --env production
This is knex.config.ts
import 'dotenv/config'
import { Knex } from 'knex'
import { envConfig, NodeEnvEnum } from '../../configs/env.config'
console.log('ASDASD', process.env.NODE_ENV, envConfig.environment, process.env.CLOUD_SQL_CONNECTION_NAME, envConfig.databaseCloudSqlConnection)
export const knexConfig: Record<NodeEnvEnum, Knex.Config> = {
[NodeEnvEnum.Development]: {
client: 'pg',
connection: envConfig.databaseUrl,
migrations: {
extension: 'ts'
}
},
[NodeEnvEnum.Production]: {
client: 'pg',
connection: {
database: envConfig.databaseName,
user: envConfig.databaseUser,
password: envConfig.databasePassword,
host: `/cloudsql/${envConfig.databaseCloudSqlConnection}`
}
}
}
export default knexConfig
This is env.config.ts
export enum NodeEnvEnum {
Production = 'production',
Development = 'development'
}
interface EnvConfig {
serverPort: string
environment: NodeEnvEnum
// Database
databaseCloudSqlConnection: string
databaseUrl: string
databaseUser: string
databasePassword: string
databaseName: string
}
export const envConfig: EnvConfig = {
serverPort: process.env.SERVER_PORT as string,
environment: process.env.NODE_ENV as NodeEnvEnum,
// Database
databaseUrl: process.env.DATABASE_URL as string,
databaseCloudSqlConnection: process.env.CLOUD_SQL_CONNECTION_NAME as string,
databaseName: process.env.DB_NAME as string,
databaseUser: process.env.DB_USER as string,
databasePassword: process.env.DB_PASSWORD as string
}
Example of the error from the Cloud Run logs (logs are shown from bottom to top).
You are mixing contexts here. There are three contexts you need to be aware of:
1. The observer that launches the Cloud Build process based on a Git push.
2. The Cloud Build job triggered by the observer. It runs in a sandboxed environment and is purely a build process. Your step/command fails here because you have not defined the env variables for this context. When the build finishes, it pushes the image to the GCR repository.
3. Cloud Run, which takes "the image" and runs it as a service. Here you define the env variables for the service itself, i.e. for your application code, not for your build process.
For context 2, you need to use substitution variables (read more here and here).
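As a rough sketch of what that can look like (the substitution name _NODE_ENV and image path are illustrative assumptions): forward a substitution as a --build-arg in cloudbuild.yaml, and receive it with ARG/ENV in the Dockerfile so the variable exists while RUN steps execute.
# cloudbuild.yaml (sketch)
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'NODE_ENV=${_NODE_ENV}', '-t', 'gcr.io/$PROJECT_ID/app', '.']
substitutions:
  _NODE_ENV: 'production'

# Dockerfile (sketch)
ARG NODE_ENV
ENV NODE_ENV=${NODE_ENV}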
I had the same problem and the cause turned out to be that my .env files weren't getting copied into the Docker container upon deployment. Fixed it by adding .gcloudignore and .dockerignore files in the root of the repository.
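A detail worth knowing if you try this fix: when no .gcloudignore exists, gcloud generates one from your .gitignore, which typically excludes .env, so the file never reaches the build context. A minimal .gcloudignore that still uploads it could look like this (illustrative; adjust to your repository):
# .gcloudignore (sketch): .env is deliberately not listed, so it gets uploaded
.git
node_modules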

How to run node server using DEBUG command in docker file?

What is the command to run a Node server in Docker with DEBUG enabled? I tried the following commands in my Dockerfile, but no luck.
CMD [ "npm", "DEBUG=* start" ]
CMD [ "DEBUG=*", "npm", "start" ]
I am using the debug npm package for logging.
Could you please help me?
According to the documentation of the debug package, DEBUG has to be an environment variable, e.g. set DEBUG=*,-not_this. You can set it in several ways:
Using the ENV instruction of the Dockerfile:
ENV DEBUG=*
or, in the legacy space-separated syntax:
ENV DEBUG *
If you want to change the DEBUG variable dynamically, you can put it into CMD and override it on container start, but in this case you have to follow your OS's rules for defining environment variables.
For Windows containers it can be:
CMD ["cmd.exe", "/c", "set DEBUG=* && npm start"]

pm2 script execution path is incorrect, doesn't match the one in ecosystem.config.js

My ecosystem.config.js looks like this:
module.exports = {
apps: [{
name: 'production',
script: '/home/username/sites/Website/source/server.js',
env: { NODE_ENV: 'PRODUCTION' },
args: '--run-server'
}, {
name: 'staging',
script: '/home/username/sites/WebsiteStaging/source/server.js',
env: { NODE_ENV: 'STAGING' },
args: '--run-server'
}],
deploy: {
production: {
user: 'username',
host: ['XXX.XXX.XX.XXX'],
ref: 'origin/production',
repo: 'git@github.com:ghuser/Website.git',
path: '/home/username/sites/Website',
'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only production',
env: { NODE_ENV: 'PRODUCTION' }
},
staging: {
user: 'username',
host: ['XXX.XXX.XX.XXX'],
ref: 'origin/staging',
repo: 'git@github.com:ghuser/Website.git',
path: '/home/username/sites/WebsiteStaging',
'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only staging',
env: { NODE_ENV: 'STAGING' }
}
}
};
When I deploy the application, I expect to see two processes - one called 'production' and one called 'staging'. These run code from the same repo, but from different branches.
I do see two processes, however, when I run pm2 desc production I can see that the script path is /home/username/sites/WebsiteStaging/source/server.js. This path should be /home/username/sites/Website/source/server.js as per the config file.
I've tried setting the script to ./server.js and using the cwd parameter but the result has been the same.
The deploy commands I am using are pm2 deploy production and pm2 deploy staging and I have verified that both the Website and the WebsiteStaging folders are present on my server.
Is there something I'm missing here? Why would it be defaulting to the staging folder like this?
What worked for me was to delete the pm2 application and start it again:
pm2 delete production
pm2 start ecosystem.config.js --only production
When I ran pm2 desc production, I saw that the script path was incorrect, and nothing I did seemed to correct that path, short of the above.
I had the same issue.
It seems it happened due to an old dump.pm2 that was not updated after the changes to ecosystem.config.js were made. Updating the startup script solved the issue:
pm2 save
pm2 unstartup
pm2 startup
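After re-saving the process list, the corrected path can be verified with the same command used earlier in the question:
pm2 desc production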

Reload code inside a Docker container. PM2. Node.js

I'm trying to reload my Node.js app code inside a Docker container. I use pm2 as the process manager. Here are my configurations:
Dockerfile
FROM node:6.9.5
LABEL maintainer "denis.ostapenko2@gmail.com"
RUN mkdir -p /usr/src/koa2-app
COPY . /usr/src/koa2-app
WORKDIR /usr/src/koa2-app
RUN npm i -g pm2
RUN npm install
EXPOSE 9000
CMD [ "npm", "run", "production"]
ecosystem.prod.config.json (aka pm2 config)
{
"apps" : [
{
"name" : "koa2-fp",
"script" : "./bin/www.js",
"watch" : true,
"merge_logs" : true,
"log_date_format": "YYYY-MM-DD HH:mm Z",
"env": {
"NODE_ENV": "production",
"PROTOCOL": "http",
"APP_PORT": 3000
},
"instances": 4,
"exec_mode" : "cluster_mode",
"autorestart": true
}
]
}
docker-compose.yaml
version: "3"
services:
web:
build: .
volumes:
- ./:/koa2-app
ports:
- "3000:3000"
And npm run production runs:
pm2 start --attach ecosystem.prod.config.json
I run docker-compose up in the CLI and it works; I can interact with my app on localhost:3000. But if I make a change to the code, it does not show up in the browser. How can I configure code reloading inside Docker?
P.S. A best-practices question: is it really OK to develop using Docker, or are Docker containers preferable mainly for production use?
It seems that you COPY your code to one place, but mount the volume at another.
Try:
version: "3"
services:
web:
build: .
volumes:
- ./:/usr/src/koa2-app
ports:
- "3000:3000"
Then, when you change the JS code outside the container (in your IDE), PM2 is able to see the changes and reload the application (your "watch": true setting takes care of that part).
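One caveat the answer does not mention: bind-mounting ./ over /usr/src/koa2-app also shadows the node_modules directory installed in the image. A common workaround, sketched here under the same compose layout, is an anonymous volume for that path:
version: "3"
services:
  web:
    build: .
    volumes:
      - ./:/usr/src/koa2-app
      # anonymous volume so the image's node_modules are not hidden by the bind mount
      - /usr/src/koa2-app/node_modules
    ports:
      - "3000:3000"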
Regarding the use of Docker in a development environment: it is a really good practice for many reasons. For instance, you keep the same app installation across environments, which eliminates a lot of bugs.
