Next.js env not exposed to client - Node.js

I use Next.js 12.1 and I define some environment variables in .env.local. Everything works fine locally, but when I deploy to the server I find that the environment variables I defined are undefined: when I log them, they are undefined in the browser but present in the server terminal.
Here is my next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    outputStandalone: true,
  },
  compiler: { styledComponents: true },
  env: {
    NEXT_PUBLIC_SITE_URL: process.env.NEXT_PUBLIC_SITE_URL,
  },
};
module.exports = nextConfig;
Dockerfile
FROM node:16-alpine AS deps
ENV http_proxy=http://fodev.org:8118
ENV https_proxy=http://fodev.org:8118
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --force
# If using yarn with a `yarn.lock` comment out above and use below instead
# COPY package.json yarn.lock ./
# RUN yarn install --frozen-lockfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# If using yarn comment out above and use below instead
# RUN yarn build
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
And this is one of my components, named Footer:
import React from "react";
import Link from "next/link";

const Footer = () => {
  console.log(process.env.NEXT_PUBLIC_COMPANY_NAME);
  return (
    <footer
      className="md:px-10 lg:p-14 text-gray-600 relative flex flex-wrap justify-center mx-auto"
      style={{ background: "var(--gray)" }}
      dir="rtl"
    >
      <div>{process.env.NEXT_PUBLIC_COMPANY_NAME || "CmpName"}{" "}</div>
    </footer>
  );
};

export default Footer;
[Browser console screenshot: the variable is undefined]
[Server logs screenshot: the variable is present]

Update your question with an example of how you use this env.
From the docs:
The value will be inlined into JavaScript sent to the browser because of the NEXT_PUBLIC_ prefix. This inlining occurs at build time, so your various NEXT_PUBLIC_ envs need to be set when the project is built.
Read this page: https://nextjs.org/docs/basic-features/environment-variables
There is a difference between using process.env.NEXT_PUBLIC_ANALYTICS_ID directly and writing const env = process.env and then env.NEXT_PUBLIC_ANALYTICS_ID: only the direct reference is replaced at build time.
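A minimal sketch of that difference (the variable name is illustrative):

// Inlined at build time: the literal value is substituted into the bundle,
// so this works in the browser.
console.log(process.env.NEXT_PUBLIC_ANALYTICS_ID);

// NOT inlined: dynamic lookups are left as-is, so this is undefined in the browser.
const env = process.env;
console.log(env.NEXT_PUBLIC_ANALYTICS_ID);

And since the inlining happens during npm run build, the variable must also be present in the Docker builder stage, not only in the runner. One way, assuming you pass it as a build argument (the ARG name mirrors the Footer component above):

# In the builder stage, before RUN npm run build:
ARG NEXT_PUBLIC_COMPANY_NAME
ENV NEXT_PUBLIC_COMPANY_NAME=$NEXT_PUBLIC_COMPANY_NAME

# Then build the image with:
# docker build --build-arg NEXT_PUBLIC_COMPANY_NAME="CmpName" .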

Related

How to perform healthcheck for an Angular container based on node.js?

I'm trying to perform a healthcheck for an Angular container. Here is the Dockerfile for it:
FROM node:18-alpine as builder
RUN npm install -g @angular/cli
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build:ssr
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/package.json /app
COPY --from=builder /app/dist /app/dist
ENV NODE_ENV=production
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]
So far, what I have done is add this to the server.ts file:
server.get('/health', (req, res) => {
  res.status(200).json({ status: 'OK' });
  console.log('Received GET request to /health endpoint');
});
And here is my healthcheck from docker-compose.yml:
test: "curl -f http://localhost:4000/health || exit 1"
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
When I open http://localhost:4000/health in my browser I receive 200 OK as I should (and there are logs proving the request was received). However, when the check runs via docker compose up it does not work. What may be the reason for that?
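One thing worth checking (no answer is recorded here, so this is only an assumption): node:18-alpine does not ship with curl, so a curl-based healthcheck fails inside the container regardless of what the endpoint returns. A sketch of the same check using busybox wget, which the alpine base image does include:

test: "wget -qO- http://localhost:4000/health || exit 1"
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s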

Puppeteer / docker (Target.setAutoAttach): Target is closed

There is a project in NestJS; the build is produced in Docker.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM --platform=linux/amd64 node:18-alpine As development
# Create app directory
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./
# Install app dependencies using the `npm ci` command instead of `npm install`
ENV PUPPETEER_SKIP_DOWNLOAD=true
RUN npm ci
# Bundle app source
COPY --chown=node:node . .
# Use the node user from the image (instead of the root user)
USER node
###################
# BUILD FOR PRODUCTION
###################
FROM --platform=linux/amd64 node:18-alpine As build
WORKDIR /usr/src/app
COPY --chown=node:node package*.json ./
# In order to run `npm run build` we need access to the Nest CLI which is a dev dependency. In the previous development stage we ran `npm ci` which installed all dependencies, so we can copy over the node_modules directory from the development image
COPY --chown=node:node --from=development /usr/src/app/node_modules ./node_modules
COPY --chown=node:node . .
# Run the build command which creates the production bundle
RUN npm run build
# Set NODE_ENV environment variable
ENV NODE_ENV production
# Running `npm ci` removes the existing node_modules directory and passing in --only=production ensures that only the production dependencies are installed. This ensures that the node_modules directory is as optimized as possible
ENV PUPPETEER_SKIP_DOWNLOAD=true
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM --platform=linux/amd64 node:18-alpine As production
# Copy the bundled code from the build stage to the production image
COPY --chown=node:node --from=build /usr/src/app/node_modules ./node_modules
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
RUN apk add chromium
EXPOSE 3010
ENV PORT 3010
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
An error appears in the console when starting the container
[Nest] ERROR [ExceptionHandler] Protocol error (Target.setAutoAttach): Target closed.
ProtocolError: Protocol error (Target.setAutoAttach): Target closed.
at /node_modules/puppeteer-core/lib/cjs/puppeteer/common/Connection.js:104:24
at new Promise (<anonymous>)
at Connection.send (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Connection.js:100:16)
at ChromeTargetManager.initialize (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ChromeTargetManager.js:247:82)
at CDPBrowser._attach (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Browser.js:156:76)
at CDPBrowser._create (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Browser.js:49:23)
at ChromeLauncher.launch (/node_modules/puppeteer-core/lib/cjs/puppeteer/node/ChromeLauncher.js:130:53)
at async InstanceWrapper.useFactory [as metatype] (/node_modules/@noahqte/nest-puppeteer/dist/puppeteer-core.module.js:38:24)
at async Injector.instantiateClass (/node_modules/@nestjs/core/injector/injector.js:355:37)
at async callback (/node_modules/@nestjs/core/injector/injector.js:56:34)
I've been trying to solve this error for quite some time now and have tried different approaches.
Puppeteer is connected via the nestjs-puppeteer library.
Service:
constructor(
  private appService: AppService,
  @InjectBrowser() private readonly browser: Browser,
) {}

async onSendForm1(data, @Ctx() ctx?: Context) {
  const page = await this.browser.newPage()
  await page.setRequestInterception(true)
  page.once('request', (interceptedRequest) => {
    interceptedRequest.continue({ method: 'POST', postData: JSON.stringify(data), headers: { 'Content-Type': 'application/json' } })
  })
  await page.goto('http://localhost:3010', { waitUntil: 'networkidle2' })
  const fileName = Guid.newGuid().toString()
  await page.pdf({
    path: `${this.appService.pathFile}${fileName}.pdf`,
    scale: 0.9,
    format: 'A4',
    landscape: false,
    pageRanges: '1,3',
  })
  await page.close()
  await this.browser.close()
}
app.module
@Module({
  imports: [
    PuppeteerModule.forRoot({ pipe: true, isGlobal: true }),
  ],
  controllers: [AppController],
  providers: [AppService],
  exports: [AppService],
})
Chromium in the container works if you specify the env
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
Can you help with this?
I found a solution to my problem
Step 1:
Delete nestjs-puppeteer
npm uninstall nestjs-puppeteer
Step 2:
I had to change the Dockerfile
Dockerfile
FROM node:18-alpine As development
# Create app directory
WORKDIR /usr/src/app
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY --chown=node:node . .
# Use the node user from the image (instead of the root user)
USER node
###################
# BUILD FOR PRODUCTION
###################
FROM node:18-alpine As build
WORKDIR /usr/src/app
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
COPY --chown=node:node package*.json ./
# In order to run `npm run build` we need access to the Nest CLI which is a dev dependency. In the previous development stage we ran `npm ci` which installed all dependencies, so we can copy over the node_modules directory from the development image
COPY --chown=node:node --from=development /usr/src/app/node_modules ./node_modules
COPY --chown=node:node . .
# Run the build command which creates the production bundle
RUN npm run build
# Set NODE_ENV environment variable
ENV NODE_ENV production
# Install dependencies for the production image and clean the npm cache
RUN npm install && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:18-alpine As production
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
EXPOSE 3010
RUN apk add --no-cache \
    chromium \
    nss \
    freetype \
    harfbuzz \
    ca-certificates \
    ttf-freefont \
    nano \
    sudo \
    bash
# Copy the bundled code from the build stage to the production image
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
COPY --chown=node:node --from=build /usr/src/app/node_modules ./dist/node_modules
# Start the server using the production build
USER node
CMD [ "node", "dist/main.js" ]
Step 3: change the browser initialization
service.ts
import puppeteer from 'puppeteer'

const browser = await puppeteer.launch({
  headless: true,
  args: ['--disable-gpu', '--disable-dev-shm-usage', '--disable-setuid-sandbox', '--no-sandbox'],
})
const page = await browser.newPage()
await page.setRequestInterception(true)
page.once('request', (interceptedRequest) => {
  interceptedRequest.continue({
    method: 'POST',
    postData: JSON.stringify(data),
    headers: { 'Content-Type': 'application/json' },
  })
})
await page.goto('http://localhost:3010', { waitUntil: 'networkidle2' })
await page.waitForSelector('body')
const fileName = Guid.newGuid().toString()
await page.pdf({
  path: `${this.appService.pathFile}${fileName}.pdf`,
  scale: 0.9,
  format: 'A4',
  landscape: false,
  pageRanges: '1,2',
})
await browser.close()
The error occurs if the container is run as the root user: Chromium's sandbox will not start under root, which is why the launch args above include --no-sandbox and --disable-setuid-sandbox.

Build Nest.js in Docker got a Prisma error

I am building an application in Nest.js, and I want to dockerize it using Docker. This is my Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
COPY . .
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
Then when I run:
docker build -t medicine-api .
I got this error from Prisma:
Module '"#prisma/client"' has no exported member 'User'.
3 import { User } from '#prisma/client';
And this is my schema.prisma file:
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

generator client {
  provider = "prisma-client-js"
}

generator prismaClassGenerator {
  provider = "prisma-class-generator"
  dryRun   = false
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id               Int                @id @default(autoincrement())
  phoneNumber      String             @unique
  lastName         String
  firstName        String
  role             Role
  bio              String?
  certificate      String?
  pic              String?
  verified         Boolean            @default(false)
  medicine         Medicine[]
  pharmacyMedicine PharmacyMedicine[]
  medicineCategory MedicineCategory[]
  pharmacyPackage  PharmacyPackage[]
  pharmacistOrder  Order[]            @relation("pharmacistOrder")
  userOrder        Order[]            @relation("userOrder")
}
I tried to fix this by searching through different resources and websites; they recommended putting npx prisma generate in my Dockerfile. But then I get another error:
Error: Generator at prisma-class-generator could not start:
/bin/sh: 1: prisma-class-generator: not found
If you have any solutions, I would be happy to try them. Thanks in advance.
You have to generate the Prisma client by running the command
yarn prisma generate
This should come before the build step, once the schema is available in the image, so I would suggest changing the Dockerfile to:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
COPY . .
RUN yarn prisma generate
COPY prisma ./prisma/
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:prod" ]
The prisma generate step makes sure the generated Prisma client is present in node_modules, so the model types (such as User) are available when the app is built.
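Regarding the follow-up error (prisma-class-generator: not found): Prisma resolves generators from node_modules/.bin, so the generator package must be installed before prisma generate runs. A sketch of the builder stage with that ordering (assuming prisma-class-generator is listed in package.json):

FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
# npm install brings in prisma, @prisma/client and prisma-class-generator,
# so both generators are on PATH for the next step
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build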

Passing environment variables in Dockerfile not working when I use 2 baseImages

To be brief, I want to build a Docker container with a web project whose configuration is modifiable depending on a parameter that is passed to it later, when running the image in Docker.
This project tries to read a file called "environment.json" with the custom properties.
My Dockerfile is this:
# NodeJS
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# I have changed this with a lot of possible solutions
# that I have found in Internet, I tried everything.
# ARG APP_ENV
# ENV ENVIRONMENT $APP_ENV
RUN node define-env.js ${APP_ENV}
# NGINX
FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
What I am doing is copying three things: the built web project, a Node script responsible for writing the correct environment, and a JSON file with all the possible configuration environments.
The Node script is this:
#!/usr/bin/env node
// we will need file system module to write the values
var fs = require("fs");
// this script is going to receive one parameter (see DockerFile)
var option = process.argv[2];
// taking all the possible configurations
var json = require("./environments.json");
// taking the chosen option (by default dev)
var environmentCfg = json[option] || json.dev;
// writing... It is important to do this task sync,
// because we need to be sure is finished before any step
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
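Run by hand, the script behaves like this (given the environments.json shown below):

# picks the "uat" block and writes it to ./dist/environment.json
node define-env.js uat
# an unknown or missing option falls back to the "dev" block
node define-env.js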
And the environments.json is something like this:
{
  "dev": {
    "title": "This is the dev environment",
    "anotherAttribute": "Hello dev, I was configured with Docker!"
  },
  "uat": {
    "title": "This is the uat environment",
    "anotherAttribute": "Hello uat, I was configured with Docker!"
  },
  "prod": {
    "title": "This is the prod environment",
    "anotherAttribute": "Hello prod, I was configured with Docker!"
  }
}
I do not know how to pass the variable when I run my Docker image. I am trying this:
docker build . -t docker-env
and then, once I have created my image I try to run it using this command:
docker run -e APP_ENV=uat -it --rm -p 1337:80 docker-env
When I go to the project I always see the "dev" configuration.
I have checked by removing the NGINX stage from the Dockerfile, and it works fine when I pass the parameter.
I think something weird (or something I do not know) happens when I change the base image from Node to Nginx.
EDIT: as @babakabadkheir suggested in the comments, I have tried the following approach, but it is still failing:
FROM hoosin/alpine-nginx-nodejs:latest
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
RUN node define-env.js ${APP_ENV}
RUN mv ./dist/* /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
If I open a shell in the image, I can see it has written environment.json as well.
EDIT 2:
I have modified my Dockerfile like this but it is still not working:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# This instruction is the one I think is failing; when I swap RUN for
# CMD and add a console.log, it works without trouble
RUN node define-env.js
# Once I have all the files I move it to the Nginx workspace
RUN mv ./dist /usr/share/nginx/html
# Setup Nginx server
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now my Node script looks like:
#!/usr/bin/env node
var fs = require("fs");
// NOTE: the option is now taken from the environment
var option = process.env.APP_ENV;
var json = require("./environments.json");
var environmentCfg = json[option] || json.dev;
fs.writeFileSync("./dist/environment.json", JSON.stringify(environmentCfg));
LAST EDIT!
Finally I have it!
The solution was to put everything I need in the CMD Docker instruction:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
You made two mistakes: you mixed up ENV with ARG, and the ARG was never declared.
When you define an ARG you can pass a value when building the image; if none is specified, the default value is used. Declare the ARG in the Dockerfile before reading it:
(Note: I used slightly different docker images)
Dockerfile
FROM node:12-alpine as build
# Note: I specified a default value
ARG APP_ENV=dev
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# Note: $APP_ENV is available here because of the ARG declared above
RUN node define-env.js $APP_ENV
# NGINX
FROM nginx:1.19
COPY --from=build /app/dist /usr/share/nginx/html
# Not necessary to specify port and CMD
Now, to build the image, you must specify the APP_ENV argument value:
Build
docker build --build-arg APP_ENV=uat -t docker-uat .
Run
docker run -d -p 8888:80 docker-uat
TL;DR
Your request is to specify the configuration at runtime; this problem has two solutions:
Use an ENV var when running the container
Use ARGs when building the container
The first solution has a security drawback but the advantage of distributing a single image: you copy the entire environments file from host to container, and when running the container you pass the desired ENV value.
The second solution is slightly more complex and does not have the security problem, but you must build a separate image for each config.
The second solution is the answer already written above; for the first solution you simply read the env variable at runtime.
Dockerfile
# NodeJS
FROM node:12-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY dist /app/dist
COPY environments.json /app/dist
# NGINX
FROM nginx:1.19
# Define the environment variable
ENV APP_ENV=dev
COPY --from=build /app/dist /usr/share/nginx/html
Build
docker build -t docker-generic .
Run
docker run -d -p 8888:80 -e APP_ENV=uat docker-generic
Code
// In your Node.js code you simply read the ENV value
// ....
var env = process.env.APP_ENV
References
Dockerfile ARG
Docker build ARG
Dockerfile ENV
Docker run ENV
The solution for this is to put everything that is modifiable at runtime into the CMD instruction.
As I have experienced, RUN only executes once, at build time, while CMD runs each time the container starts.
So I have put in my Dockerfile:
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
It runs my script, moves all the files, and then serves my application.
I hope this helps other developers, because for me it has been a pain.
TL;DR
In summary, the Dockerfile should be something like this:
# I am taking a base image with NodeJS and Nginx
FROM hoosin/alpine-nginx-nodejs:latest
# Telling Docker I like to work in the app workdir
WORKDIR /app
# Copying all files I need
COPY dist /app/dist
COPY define-env.js /app
COPY environments.json /app
# My environment variable for runtime purposes (dev by default)
ENV APP_ENV dev
# Setup Nginx server
EXPOSE 80
CMD node define-env && mv ./dist/* /usr/share/nginx/html && nginx -g 'daemon off;'
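With this image the configuration is chosen at run time, reusing the build and run commands from the question:

docker build . -t docker-env
docker run -e APP_ENV=uat -it --rm -p 1337:80 docker-env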

How to set azure-function app port from docker -e variable?

I'm trying to create an Azure queue listener in Docker and deploy it as an Azure Function.
Azure runs my Docker image with a command similar to the following:
docker run -d -p 16506:8081 --name queue-listener_0 -e PORT=8081 ...
The only thing that I need to do is get that PORT variable and pass it to func start --port $PORT in the entrypoint script, but the problem is that bash doesn't see variables passed with the -e flag.
Dockerfile:
FROM tarampampam/node:10.10-alpine as buildContainer
COPY package.json package-lock.json entrypoint.sh host.json extensions.csproj proxies.json /app/
COPY /QueueTrigger/function.json /app/
#COPY /app/dist /app/dist
### only for local launch
#COPY /local.settings.json /app
WORKDIR /app
RUN npm install
COPY . /app
RUN npm run build
FROM mcr.microsoft.com/azure-functions/node:2.0
WORKDIR /app
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobs_ExtensionsPath=/app/bin
# Copy dependency definitions
COPY --from=buildContainer /app/package.json /app/
# Get all the code needed to run the app
COPY --from=buildContainer /app/dist /app/
COPY --from=buildContainer /app/function.json /app/QueueTrigger/
COPY --from=buildContainer /app/bin /app/bin
COPY --from=buildContainer /app/entrypoint.sh /app
COPY --from=buildContainer /app/host.json /app
COPY --from=buildContainer /app/extensions.csproj /app
COPY --from=buildContainer /app/proxies.json /app
COPY --from=buildContainer /app/resources /app/resources
### only for local launch
#COPY --from=buildContainer /app/local.settings.json /app
RUN chmod 755 /app/entrypoint.sh
COPY --from=buildContainer /app/node_modules /app/node_modules
RUN npm i -g azure-functions-core-tools@core --unsafe-perm true
RUN apt-get update && apt-get install -y ghostscript && gs -v
# Serve the app
ENTRYPOINT ["sh", "entrypoint.sh"]
Entrypoint:
#!/bin/bash
func start --port $PORT
func is more for local development.
The mcr.microsoft.com/azure-functions/node:2.0 image already has the runtime packaged, with the default entrypoint set to start it. You really don't need func here.
But, if ever required, even with just the runtime, you can customize the port.
You would have to remove these last few lines from the Dockerfile:
RUN chmod 755 /app/entrypoint.sh
RUN npm i -g azure-functions-core-tools@core --unsafe-perm true
# Serve the app
ENTRYPOINT ["sh", "entrypoint.sh"]
And run your container like this:
docker run -d -p 16506:8081 --name queue-listener_0 -e ASPNETCORE_URLS=http://+:8081 ...
Note that local.settings.json won't be picked up by the runtime. Your App Settings would have to be set manually as environment variables.
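For example, a sketch of passing a few typical Functions app settings as environment variables (the values and image name are placeholders):

docker run -d -p 16506:8081 --name queue-listener_0 \
  -e ASPNETCORE_URLS=http://+:8081 \
  -e FUNCTIONS_WORKER_RUNTIME=node \
  -e AzureWebJobsStorage="<storage-connection-string>" \
  queue-listener-image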
