How to perform a healthcheck for an Angular container based on Node.js?

I'm trying to perform a healthcheck for an Angular container. Here is the Dockerfile for it:
FROM node:18-alpine as builder
RUN npm install -g @angular/cli
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build:ssr
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/package.json /app
COPY --from=builder /app/dist /app/dist
ENV NODE_ENV=production
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]
What I have done so far is add this to the server.ts file:
server.get('/health', (req, res) => {
  res.status(200).json({ status: 'OK' });
  console.log('Received GET request to /health endpoint');
});
And here is my healthcheck from docker-compose.yml:
test: "curl -f http://localhost:4000/health || exit 1"
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
When I open http://localhost:4000/health in my browser I receive 200 OK as I should (and the logs confirm the request was received), but when the healthcheck runs via docker compose up it fails. What may be the reason for that?
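A likely cause to check first: node:18-alpine is a minimal image that does not ship curl, so the healthcheck command itself fails before it ever reaches the endpoint (the browser test bypasses the container's tooling entirely). A minimal sketch of two workarounds, assuming the compose file otherwise matches the snippet above — either switch the test to BusyBox wget, which the Alpine image does include:

healthcheck:
  test: "wget -qO- http://localhost:4000/health || exit 1"
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

or install curl in the runtime stage of the Dockerfile:

RUN apk add --no-cache curl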

Related

Puppeteer / docker (Target.setAutoAttach): Target is closed

I have a NestJS project whose build is produced in Docker.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM --platform=linux/amd64 node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./
# Install app dependencies using the `npm ci` command instead of `npm install`
ENV PUPPETEER_SKIP_DOWNLOAD=true
RUN npm ci
# Bundle app source
COPY --chown=node:node . .
# Use the node user from the image (instead of the root user)
USER node
###################
# BUILD FOR PRODUCTION
###################
FROM --platform=linux/amd64 node:18-alpine AS build
WORKDIR /usr/src/app
COPY --chown=node:node package*.json ./
# In order to run `npm run build` we need access to the Nest CLI which is a dev dependency. In the previous development stage we ran `npm ci` which installed all dependencies, so we can copy over the node_modules directory from the development image
COPY --chown=node:node --from=development /usr/src/app/node_modules ./node_modules
COPY --chown=node:node . .
# Run the build command which creates the production bundle
RUN npm run build
# Set NODE_ENV environment variable
ENV NODE_ENV production
# Running `npm ci` removes the existing node_modules directory and passing in --only=production ensures that only the production dependencies are installed. This ensures that the node_modules directory is as optimized as possible
ENV PUPPETEER_SKIP_DOWNLOAD=true
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM --platform=linux/amd64 node:18-alpine AS production
# Copy the bundled code from the build stage to the production image
COPY --chown=node:node --from=build /usr/src/app/node_modules ./node_modules
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
RUN apk add chromium
EXPOSE 3010
ENV PORT 3010
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
An error appears in the console when starting the container
[Nest] ERROR [ExceptionHandler] Protocol error (Target.setAutoAttach): Target closed.
ProtocolError: Protocol error (Target.setAutoAttach): Target closed.
at /node_modules/puppeteer-core/lib/cjs/puppeteer/common/Connection.js:104:24
at new Promise (<anonymous>)
at Connection.send (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Connection.js:100:16)
at ChromeTargetManager.initialize (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ChromeTargetManager.js:247:82)
at CDPBrowser._attach (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Browser.js:156:76)
at CDPBrowser._create (/node_modules/puppeteer-core/lib/cjs/puppeteer/common/Browser.js:49:23)
at ChromeLauncher.launch (/node_modules/puppeteer-core/lib/cjs/puppeteer/node/ChromeLauncher.js:130:53)
at async InstanceWrapper.useFactory [as metatype] (/node_modules/@noahqte/nest-puppeteer/dist/puppeteer-core.module.js:38:24)
at async Injector.instantiateClass (/node_modules/@nestjs/core/injector/injector.js:355:37)
at async callback (/node_modules/@nestjs/core/injector/injector.js:56:34)
I've been thinking about this error for quite some time now and have tried different approaches to solve it.
Puppeteer is connected via the nestjs-puppeteer library
Service
constructor(
  private appService: AppService,
  @InjectBrowser() private readonly browser: Browser,
) {}

async onSendForm1(data, @Ctx() ctx?: Context) {
  const page = await this.browser.newPage()
  await page.setRequestInterception(true)
  page.once('request', (interceptedRequest) => {
    interceptedRequest.continue({ method: 'POST', postData: JSON.stringify(data), headers: { 'Content-Type': 'application/json' } })
  })
  await page.goto('http://localhost:3010', { waitUntil: 'networkidle2' })
  const fileName = Guid.newGuid().toString()
  await page.pdf({
    path: `${this.appService.pathFile}${fileName}.pdf`,
    scale: 0.9,
    format: 'A4',
    landscape: false,
    pageRanges: '1,3',
  })
  await page.close()
  await this.browser.close()
}
app.module
@Module({
  imports: [
    PuppeteerModule.forRoot({ pipe: true, isGlobal: true }),
  ],
  controllers: [AppController],
  providers: [AppService],
  exports: [AppService],
})
Chromium itself works in the container if you specify the env var
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
Can anyone help with this?
I found a solution to my problem
Step 1:
Delete nestjs-puppeteer
npm uninstall nestjs-puppeteer
Step 2:
I had to change the dockerfile
Dockerfile
FROM node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./
# Install app dependencies (note: this stage now uses `npm install` rather than `npm ci`)
RUN npm install
# Bundle app source
COPY --chown=node:node . .
# Use the node user from the image (instead of the root user)
USER node
###################
# BUILD FOR PRODUCTION
###################
FROM node:18-alpine AS build
WORKDIR /usr/src/app
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
COPY --chown=node:node package*.json ./
# In order to run `npm run build` we need access to the Nest CLI, which is a dev dependency. The previous development stage installed all dependencies, so we can copy the node_modules directory over from the development image
COPY --chown=node:node --from=development /usr/src/app/node_modules ./node_modules
COPY --chown=node:node . .
# Run the build command which creates the production bundle
RUN npm run build
# Set NODE_ENV environment variable
ENV NODE_ENV production
# Reinstall dependencies (note: this runs a full `npm install` rather than `npm ci --only=production`)
RUN npm install && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:18-alpine AS production
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
EXPOSE 3010
RUN apk add --no-cache \
    chromium \
    nss \
    freetype \
    harfbuzz \
    ca-certificates \
    ttf-freefont \
    nano \
    sudo \
    bash
# Copy the bundled code from the build stage to the production image
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
COPY --chown=node:node --from=build /usr/src/app/node_modules ./dist/node_modules
# Start the server using the production build
USER node
CMD [ "node", "dist/main.js" ]
Step 3: change the browser initialization
service.ts
import puppeteer from 'puppeteer'

const browser = await puppeteer.launch({
  headless: true,
  args: ['--disable-gpu', '--disable-dev-shm-usage', '--disable-setuid-sandbox', '--no-sandbox'],
})
const page = await browser.newPage()
await page.setRequestInterception(true)
page.once('request', (interceptedRequest) => {
  interceptedRequest.continue({
    method: 'POST',
    postData: JSON.stringify(data),
    headers: { 'Content-Type': 'application/json' },
  })
})
await page.goto('http://localhost:3010', { waitUntil: 'networkidle2' })
await page.waitForSelector('body')
const fileName = Guid.newGuid().toString()
await page.pdf({
  path: `${this.appService.pathFile}${fileName}.pdf`,
  scale: 0.9,
  format: 'A4',
  landscape: false,
  pageRanges: '1,2',
})
await browser.close()
The error occurs if the container runs as the root user: Chromium's sandbox refuses to start as root, which is why the --no-sandbox flag is needed.
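If you still want constructor injection after removing nestjs-puppeteer, a minimal sketch of a hand-rolled Nest provider is below; the token name and wiring are assumptions for illustration, not part of the original solution:

import puppeteer, { Browser } from 'puppeteer'

// Hypothetical injection token standing in for nestjs-puppeteer's @InjectBrowser().
export const BROWSER = 'PUPPETEER_BROWSER'

export const browserProvider = {
  provide: BROWSER,
  // Launch one shared browser when the module initializes, with the same flags as above.
  useFactory: (): Promise<Browser> =>
    puppeteer.launch({
      headless: true,
      args: ['--disable-gpu', '--disable-dev-shm-usage', '--disable-setuid-sandbox', '--no-sandbox'],
    }),
}

// In a service: constructor(@Inject(BROWSER) private readonly browser: Browser) {}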

nextjs env not exposed to client

I use Next.js 12.1 and define some env variables in .env.local. Everything is fine locally, but when I deploy to the server I find that the environment variables I define there are undefined: when I log them, they are undefined in the browser but present in the server terminal.
here is my next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    outputStandalone: true,
  },
  compiler: { styledComponents: true },
  env: {
    NEXT_PUBLIC_SITE_URL: process.env.NEXT_PUBLIC_SITE_URL,
  },
};

module.exports = nextConfig;
Dockerfile
FROM node:16-alpine AS deps
ENV http_proxy=http://fodev.org:8118
ENV https_proxy=http://fodev.org:8118
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --force
# If using yarn with a `yarn.lock` comment out above and use below instead
# COPY package.json yarn.lock ./
# RUN yarn install --frozen-lockfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# If using yarn comment out above and use below instead
# RUN yarn build
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
and this is one of my components, named Footer:
import React from "react";
import Link from "next/link";

const Footer = () => {
  console.log(process.env.NEXT_PUBLIC_COMPANY_NAME);
  return (
    <footer
      className="md:px-10 lg:p-14 text-gray-600 relative flex flex-wrap justify-center mx-auto"
      style={{ background: "var(--gray)" }}
      dir="rtl"
    >
      <div>{process.env.NEXT_PUBLIC_COMPANY_NAME || "CmpName"}{" "}</div>
    </footer>
  );
};

export default Footer;
(Screenshots: the browser console shows the variable as undefined, while the server logs show its value.)
Update your question with an example of how you use this env.
From docs:
The value will be inlined into JavaScript sent to the browser because of the NEXT_PUBLIC_ prefix. This inlining occurs at build time, so your various NEXT_PUBLIC_ envs need to be set when the project is built.
Read this page https://nextjs.org/docs/basic-features/environment-variables
There is a difference between using process.env.NEXT_PUBLIC_ANALYTICS_ID directly and doing const env = process.env and then env.NEXT_PUBLIC_ANALYTICS_ID — only the fully static form is inlined.
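To make the inlining concrete, a small sketch (the variable name is just the example from the docs):

// Inlined: Next.js replaces the full static expression with its value at
// build time, so this works in the browser bundle.
console.log(process.env.NEXT_PUBLIC_ANALYTICS_ID);

// NOT inlined: the lookup is dynamic, so in the browser this reads from an
// empty object and logs undefined.
const env = process.env;
console.log(env.NEXT_PUBLIC_ANALYTICS_ID);

For the Docker setup above this also means the NEXT_PUBLIC_ values must be available in the builder stage, before RUN npm run build (for example passed in as build ARGs); setting them only on the running container is too late.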

nestjs Docker build error: can not find tsconfig.build.json

here is my Dockerfile:
FROM node AS builder
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "npm", "run", "start:dev" ]
and here is my docker-compose.yml:
version: "3.7"
services:
web:
image: Dockerfile
build:
context: ./
dockerfile: Dockerfile.development
volumes:
- ./:/app:z
environment:
NODE_ENV : development
TZ: "${TZ:-America/Los_Angeles}"
ports:
- "3000:3000"
After I run docker-compose up -d, I get this error in the console:
Error Could not find TypeScript configuration file "tsconfig.build.json". Please, ensure that you are running this command in the appropriate directory (inside Nest workspace).
I have tried copying tsconfig.build.json into the image, but it still does not work. Please help.
In your last step, you never copy over the tsconfig.build.json or tsconfig.json files. That said, I don't see why you're using start:dev when you've already built the server in the Docker image; you should just be calling node dist/main.
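A sketch of that final stage, assuming a default Nest layout where the build emits dist/main.js:

FROM node
WORKDIR /app
# Copy only what the compiled server needs at runtime
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD [ "node", "dist/main" ]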
I don't use Docker, but I've got the same error:
Error Could not find TypeScript configuration file "tsconfig.build.json". Please, ensure that you are running this command in the appropriate directory (inside Nest workspace).
I solved this by deleting the dist folder and running npm run start:dev again.
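That is, from the project root:

rm -rf dist
npm run start:dev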

Can't connect to SSH from docker container ECONNREFUSED 127.0.0.1:22 - NodeJS

I'm using the node-ssh module in Node.js. When I start the SSH connection it throws an error. I'm on WSL Ubuntu 18 and have a docker-compose file. I set PasswordAuthentication to 'yes' in /etc/ssh/sshd_config. I can connect over SSH from WSL Ubuntu, but when I try to connect from my dockerized Node.js project it fails with ECONNREFUSED 127.0.0.1:22.
In Node.js I'm making a request for user authentication, running some commands, etc.
const Client = require('node-ssh').NodeSSH;
var client = new Client();

client.connect({
  host: 'localhost',
  port: 22,
  username: req.body.username,
  password: req.body.password,
  keepaliveInterval: 30 * 1000, // 30 seconds between keepalives (milliseconds)
  keepaliveCountMax: 1,
}).then(() => {
  // LOGIN SUCCESS
}).catch((e) => {
  console.log(e); // ECONNREFUSED error
  // LOGIN FAILED
});
docker-compose.yml
version: '3.8'
services:
api:
build:
dockerfile: Dockerfile
context: "./server"
ports:
- "3030:3030"
depends_on:
- mysql_db
volumes:
- /app/node_modules
- ./server:/app
...
And my api's Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apk update \
&& apk add openssh-server
COPY sshd_config /etc/ssh/
EXPOSE 22
CMD ["npm", "run", "start"]
[UPDATE]
Dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i \
&& apk add --update openssh \
&& rm -rf /tmp/* /var/cache/apk/*
COPY sshd_config /etc/ssh/
# add entrypoint script
ADD ./docker-entrypoint.sh /usr/local/bin
# make sure we get fresh keys
RUN rm -rf /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/sshd","-D"]
[UPDATE 2]
Dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apk update && \
apk add openssh-client \
&& rm -rf /tmp/* /var/cache/apk/*
EXPOSE 22
CMD ["npm", "run", "start"]
[SOLUTION]
I changed my Dockerfile and my Node.js code. I connected to WSL's SSH server from the Docker container by using host.docker.internal, as Stefan Golubović suggested, and switched from node:alpine to node:latest. Thanks to @StefanGolubović and @Etienne Dijon.
[FIXED]
const Client = require('node-ssh').NodeSSH;
var client = new Client();

client.connect({
  host: 'host.docker.internal', // works on WSL2
  port: 22,
  username: req.body.username,
  password: req.body.password,
  keepaliveInterval: 30 * 1000, // 30 seconds between keepalives (milliseconds)
  keepaliveCountMax: 1,
}).then(() => {
  // LOGIN SUCCESS
}).catch((e) => {
  console.log(e); // LOGIN FAILED
});
Dockerfile [FIXED]
FROM node:latest
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
RUN apt-get update
EXPOSE 22
CMD ["npm", "run", "start"]
Short answer
The sshd server is not started automatically by default on Alpine.
You may use another Node image to run your application, like node:latest
https://hub.docker.com/_/node
which is based on Debian and is an equivalent alternative to node:alpine.
Try to avoid SSH in a Docker container; you can use an entrypoint script to configure your container at runtime instead (a minimal sketch follows the links below).
Documentation: https://docs.docker.com/engine/reference/builder/#entrypoint
Best practices with an example script: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint
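A minimal sketch of such an entrypoint script, assuming your real CMD stays the main process:

#!/bin/sh
# docker-entrypoint.sh -- do one-time runtime setup, then hand off to CMD.
set -e
# ... runtime configuration (generate keys, render config files) goes here ...
# exec replaces this shell with the CMD so signals reach the main process.
exec "$@"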
Test your Dockerfile step by step
To make sure everything works fine, you can run it manually:
docker run -it --rm --name testalpine -v $PWD:/app/ node:alpine /bin/sh
Then:
cd /app/
npm i
apk update && apk add openssh-server
# show listening services, openssh is not displayed
netstat -tlpn
As you can see, openssh is not started automatically.
Alpine has a wiki page about setting up an ssh server, which relies on rc-update:
https://wiki.alpinelinux.org/wiki/Setting_up_a_ssh-server
but rc-update is not available in the Alpine Docker image.
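The usual workaround is to generate the host keys and run sshd in the foreground yourself (a sketch, assuming openssh is already installed as above; dedicated sshd images automate essentially these steps):

# generate any missing host keys, then run sshd in the foreground
ssh-keygen -A
/usr/sbin/sshd -D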
Running sshd server in an alpine container
This image is all about running an SSH server on Alpine:
https://github.com/danielguerra69/alpine-sshd
As you can see in its Dockerfile, more steps are involved (check the repository for an updated version):
FROM alpine:edge
MAINTAINER Daniel Guerra <daniel.guerra69@gmail.com>
# add openssh and clean
RUN apk add --update openssh \
&& rm -rf /tmp/* /var/cache/apk/*
# add entrypoint script
ADD docker-entrypoint.sh /usr/local/bin
# make sure we get fresh keys
RUN rm -rf /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
EXPOSE 22
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/sshd","-D"]
EDIT: If you need to run commands within your container
You can use docker exec once your container is started:
docker exec -it <container name/id> /bin/sh
documentation here:
https://docs.docker.com/engine/reference/commandline/exec/
Updated dockerfile
FROM node:alpine
WORKDIR /app
COPY ./ ./
RUN npm i
ENTRYPOINT ["npm", "run", "start"]

NGINX fails at COPY --from=node /app/dist/comp-lib /usr/share/nginx/html

COPY failed: stat /var/lib/docker/overlay2/1e9a0e53a11b406c13d4fc790336f37285927a1b87d1bac4d0e889c6d3cfed9b/merged/app/dist/comp-lib: no such file or directory
I tried running docker system prune and restarted Docker a bunch of times. I also took a shot at rm -rf /var/lib/docker in the Docker VM, but somehow that doesn't remove the directory.
Node version: v10.15.1
Docker version: 18.09.2, build 6247962
Dockerfile:
# stage-1
FROM node as builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
# stage -2
FROM nginx:alpine
COPY --from=node /app/dist/comp-lib /usr/share/nginx/html
I expect the build to be successful but the above mentioned is the error I'm experiencing.
In your stage 2,
COPY --from=node /app/dist/comp-lib /usr/share/nginx/html
should be
COPY --from=builder /app/dist/comp-lib /usr/share/nginx/html
since stage 1 is called builder, not node. Because no stage is named node, Docker resolves --from=node as the plain node image and tries to copy /app/dist/comp-lib out of it; that path doesn't exist there, which is exactly the stat ... no such file or directory error you're seeing.
This is the dockerfile that I use for my Angular apps:
FROM johnpapa/angular-cli as angular-built
WORKDIR /usr/src/app
COPY package.json package.json
RUN npm install --silent
COPY . .
RUN ng build --prod
FROM nginx:alpine
LABEL author="Preston Lamb"
COPY --from=angular-built /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD [ "nginx", "-g", "daemon off;" ]
I've never had any issues with this configuration. There's more information as well in this article.
