Docker Health Check and Node.js Application

I've been trying to add a healthcheck to my container, but no matter what I do, the container never seems to work. To be precise, I have the following structure:
Traefik Proxy
Node.js Application behind that proxy
All the labels for Traefik are included in the docker-compose.yml file.
Whenever I try to add a healthcheck, either in the Dockerfile or in docker-compose.yml, the application builds and listens for connections on port 443; however, when I try to access the address from the browser, it always shows a 404 error (which Traefik returns when it is unable to proxy to the container).
Here is the relevant service configuration:
frontend:
  restart: always
  build:
    context: ./configuration/frontend
    dockerfile: Dockerfile
  environment:
    - application_environment=development
    - FRONTEND_DOMAIN=HOST_HERE
  volumes:
    - ./volumes/frontend:/app:rw
    - ./volumes/backend/.env:/.env:ro
    - ./volumes/backend/resources/lang:/backend-lang:rw
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:HOST_HERE
    - traefik.port=3000
    - traefik.docker.network=traefik_proxy
    - traefik.frontend.redirect.entryPoint=https
    - traefik.frontend.passHostHeader=true
    - traefik.frontend.headers.SSLRedirect=true
    - traefik.frontend.headers.browserXSSFilter=true
    - traefik.frontend.headers.contentTypeNosniff=true
    - traefik.frontend.headers.customFrameOptionsValue=SAMEORIGIN
    - traefik.frontend.headers.STSPreload=true
    - traefik.frontend.headers.STSSeconds=31536000
  healthcheck:
    test: ["CMD", "cd /app && yarn healthcheck"]
    interval: 10s
    timeout: 5s
    start_period: 60s
  networks:
    - traefik_proxy
And here is the healthcheck.js file, which is run by the yarn healthcheck command:
// Note: the 'http' identifier actually holds the https module
const http = require('https');

const options = {
  host: process.env.FRONTEND_DOMAIN,
  port: 443,
  path: '/',
  method: 'GET',
  timeout: 2000
};

// Exit 0 on HTTP 200 so Docker marks the container healthy, 1 otherwise
const healthCheck = http.request(options, (response) => {
  console.log(`STATUS: ${response.statusCode}`);
  if (response.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

healthCheck.on('error', function (error) {
  console.error('ERROR', error);
  process.exit(1);
});

healthCheck.end();
When I start the container without the HEALTHCHECK options (in either the Dockerfile or the compose file), it works just fine: the page is displayed, and when I manually execute yarn healthcheck it prints STATUS: 200 in the console. However, with the automated healthcheck enabled, Traefik has no access to the container.
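Two details in the setup above may be worth noting. First, the exec-form test ["CMD", "cd /app && yarn healthcheck"] is not run through a shell, so the whole string cd /app && yarn healthcheck is treated as a single executable name; a compound command like that needs the shell form (["CMD-SHELL", "cd /app && yarn healthcheck"], or a plain string for test:). Second, the script probes the public FRONTEND_DOMAIN on port 443 from inside the container, so it also depends on DNS and TLS being reachable from there. A minimal alternative sketch that probes the app directly on its internal port, assuming (per the traefik.port=3000 label) that the app serves plain HTTP on port 3000 inside the container:

// healthcheck-local.js - hedged sketch, not the original script
// Assumes the app listens on plain HTTP port 3000 inside the container; adjust if it does not.
const http = require('http');

const request = http.request(
  { host: '127.0.0.1', port: 3000, path: '/', method: 'GET', timeout: 2000 },
  (response) => {
    // healthy only on HTTP 200
    process.exit(response.statusCode === 200 ? 0 : 1);
  }
);

request.on('error', () => process.exit(1));
request.on('timeout', () => {
  request.destroy();
  process.exit(1);
});
request.end();

Such a script could then be wired in exec form as test: ["CMD", "node", "/app/healthcheck-local.js"], which avoids needing a shell at all.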

Related

net::ERR_FAILED api call to backend from nextjs with nginx docker

I have 3 Docker containers running on macOS:
backend - port 5055
frontend (Next.js) - port 3000
nginx - port 80
I am getting net::ERR_FAILED for backend API requests when I access the site from the browser (http://localhost:80). I can make a request to the backend (http://localhost:5055) in Postman and it works well.
Sample API request - GET http://backend:5055/api/category/show
What is the reason for this behaviour?
Thanks.
docker-compose.yml
version: '3.9'
services:
  backend:
    image: backend-image
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    ports:
      - '5055:5055'
  frontend:
    image: frontend-image
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    ports:
      - '3000:3000'
    depends_on:
      - backend
  nginx:
    image: nginx-image
    build:
      context: ./nginx
    ports:
      - '80:80'
    depends_on:
      - backend
      - frontend
backend - Dockerfile.prod
FROM node:19.0.1-slim
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . ./
ENV PORT 5055
EXPOSE $PORT
CMD ["npm", "run", "start"]
frontend - Dockerfile.prod
FROM node:19-alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
nginx - Dockerfile
FROM public.ecr.aws/nginx/nginx:stable-alpine
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
nginx - default.conf
upstream frontend {
    server frontend:3000;
}

upstream backend {
    server backend:5055;
}

server {
    listen 80 default_server;
    ...
    location /api {
        ...
        proxy_pass http://backend;
        proxy_redirect off;
        ...
    }
    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://frontend;
    }
    location /static {
        proxy_cache STATIC;
        proxy_ignore_headers Cache-Control;
        proxy_cache_valid 60m;
        proxy_pass http://frontend;
    }
    location / {
        proxy_pass http://frontend;
    }
}
frontend - .env.local
NEXT_PUBLIC_API_BASE_URL=http://backend:5055/api
frontend - httpServices.js
import axios from 'axios'
import Cookies from 'js-cookie'

const instance = axios.create({
  baseURL: `${process.env.NEXT_PUBLIC_API_BASE_URL}`,
  timeout: 500000,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
})

...

const responseBody = (response) => response.data

const requests = {
  get: (url, body) => instance.get(url, body).then(responseBody),
  post: (url, body, headers) =>
    instance.post(url, body, headers).then(responseBody),
  put: (url, body) => instance.put(url, body).then(responseBody),
}

export default requests
Edit
nginx logs (docker logs -f nginx 2>/dev/null)
172.20.0.1 - - [14/Nov/2022:17:02:39 +0000] "GET /_next/image?url=%2Fslider%2Fslider-1.jpg&w=1080&q=75 HTTP/1.1" 304 0 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /service-worker.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /fallback-B639VDPLP_r91l2hRR104.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /workbox-fbc529db.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
A curl request works fine from the nginx container to the backend container (curl backend:5055/api/category/show).
Edit 2
const CategoryCarousel = () => {
  ...
  const { data, error } = useAsync(() => CategoryServices.getShowingCategory())
  ...
}

import requests from './httpServices'

const CategoryServices = {
  getShowingCategory() {
    return requests.get('/category/show')
  },
}
Edit 3
When NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
Error: connect ECONNREFUSED 127.0.0.1:5055
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1283:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 5055,
  config: {
    ...
    baseURL: 'http://localhost:5055/api',
    method: 'get',
    url: '/products/show',
    data: undefined
  },
  ...
    _options: {
      ...
      protocol: 'http:',
      path: '/api/products/show',
      method: 'GET',
      ...
      pathname: '/api/products/show'
    },
    ...
  },
  _currentUrl: 'http://localhost:5055/api/products/show',
  _timeout: null,
  },
  response: undefined,
  isAxiosError: true,
}
Docker configuration is all correct
I've read the question again, and with the given logs it seems your configuration has been correct all along. However, what you're doing in the browser amounts to an access violation.
Docker Compose service (host) access
Docker services are connected to each other, which means one service can make a request to another service by its service name. What is not possible is for a user outside the Docker network (for example, your browser) to reach a service by that name. Here's a simpler explanation:
# docker-compose.yaml
service-a:
  ports:
    - 3000:3000 # exposed to outside
  # ...
service-b:
  ports:
    - 5000:5000 # exposed to outside
  # ...
# ✅ this works! (case A)
(service-a) $ curl http://service-b:5000
(service-b) $ curl http://service-a:3000
(local machine) $ curl http://localhost:5000
(local machine) $ curl http://localhost:3000

# ❌ this does not work! (case B)
(local machine) $ curl http://service-a:3000
(local machine) $ curl http://service-b:5000
When I read the question initially, I missed the part where the frontend code accesses the unexposed backend service name from the browser. This clearly falls into case B, which will not work.
Solution: with server-side rendering...
A frontend app is merely JavaScript code running in your browser. It cannot reach Docker-internal host names. Therefore the request should be corrected:
# change from http://backend:5055/api
NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
But here's a better way to solve it: access the API inside server-side code. Since your frontend is Next.js, it is possible to fetch the backend result on the server and inject it into the page.
export async function getServerSideProps(context) {
  const req = await fetch('http://backend:5055/api/...');
  const data = await req.json(); // json() returns a promise, so await it
  return {
    props: { data }, // will be passed to the page component as props
  }
}
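For completeness, a hypothetical page component that receives the server-fetched result (the data prop name matches the sketch above; the markup is purely illustrative):

// pages/example.js - hypothetical page consuming the props returned by getServerSideProps
export default function ExamplePage({ data }) {
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}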
Edit
(Edit 3) contains frontend logs when NEXT_PUBLIC_API_BASE_URL is changed to 'localhost' (as you mentioned in the answer). Now the error comes from a different API, i.e. 'localhost:5055/api/products/show', which is called inside a getServerSideProps(). Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Here's a more practical example:
// outside getServerSideProps / getStaticProps (runs in the browser)
// ❌ this will fail (1-a): 'backend' is a Docker service name the browser cannot resolve
await fetch('http://backend:5055/...')
// ✅ this will work (1-b)
await fetch('http://localhost:5055/...')

// inside getServerSideProps or getStaticProps (runs on the Docker-internal network)
export async function getServerSideProps() {
  // ✅ this will work (2-a)
  await fetch('http://backend:5055/...');
  // ❌ this will fail (2-b)
  await fetch('http://localhost:5055/...');
}
In short, the request has to be either 1-b or 2-a.
Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Yes. There are several ways to deal with it.
1. Programmatically differentiating the host
// during SSR there is no window object, so use the Docker-internal host name there
const isServer = typeof window === 'undefined';
const HOST_URL = isServer ? 'http://backend:5055' : 'http://localhost:5055';
fetch(HOST_URL);
2. Manually differentiating the host
// server side
export async function getServerSideProps() {
  // this seems unnecessary and confusing at first sight, but it comes in very handy later in terms of security.
  await fetch('http://backend:5055');
}

// client side
fetch('http://localhost:5055');
3. Use a separate domain for the backend (modify the hosts file)
This is what I usually resort to when testing services with a domain name in a local environment.
Modifying the hosts file means exampleurl.com will be resolved to localhost by the OS. In this case, the production environment must use a separate domain, the hosts-file setup is required on each development machine, and the service must be exposed publicly. Please refer to this document on modifying the hosts file.
# docker-compose.yaml
services:
  backend:
    ports:
      - 5050:5050
    # ...

# hosts file
127.0.0.1 exampleurl.com
# ...
// in this case, development == local development
const IS_DEV = process.env.NODE_ENV === 'development';

// the hosts file does not resolve ports, so the port is only needed for local development
const BACKEND_HOST = 'http://exampleurl.com';
const BACKEND_URL = IS_DEV ? `${BACKEND_HOST}:5050` : BACKEND_HOST;

// client-side
fetch(BACKEND_URL);

// server-side
export async function getServerSideProps() {
  await fetch(BACKEND_URL);
}
There are many clever ways to solve this problem, but there is no "always right" answer. Take your time to think about which method best fits your case.
In nginx.conf you need to specify the backend port too, and also the base path (/api):
location /api {
    ...
    proxy_pass http://backend:5055/api;
}

Cannot Proxy Requests From React Node.js App to Express Backend (Docker)

I'm having issues understanding how to proxy requests to the Express routes in my backend while accounting for both the local development use case and the Docker containerized use case. What I'm trying to set up is a situation in which I have the "proxy" configured for http://localhost:8080 in my local env and http://api:8080 configured for my container. What I have thus far is createProxyMiddleware configured like so...
module.exports = function(app) {
  console.log(process.env.API_URL);
  app.use(
    '/api',
    createProxyMiddleware({
      target: process.env.API_URL,
      changeOrigin: true,
    })
  );
};
And my docker-compose file is configured like so...
version: "3.7"
services:
client:
image: webapp-client
build: ./client
restart: always
environment:
- API_URL=http://api:8080
volumes:
- ./client:/client
- /client/node_modules
labels:
- "traefik.enable=true"
- "traefik.http.routers.client.rule=PathPrefix(`/`)"
- "traefik.http.routers.client.entrypoints=web"
- "traefik.port=3000"
depends_on:
- api
networks:
- webappnetwork
api:
image: webapp-api
build: ./api
restart: always
ports:
- "8080:8080"
volumes:
- ./api:/api
- /api/node_modules
networks:
- webappnetwork
traefik:
image: "traefik:v2.5"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- webappnetwork
networks:
webappnetwork:
external: true
volumes:
pg-data:
pgadmin:
Upon startup, the container logs out...
[HPM] Proxy created: / -> http://api:8080
My axios calls look like this...
const <config_name> = {
  method: 'post',
  url: '/<route>',
  headers: {
    'Content-Type': 'application/json'
  },
  data: dataInput
}
As you can see, I set the environment variable and pass it into the createProxyMiddleware call, but for some reason this config doesn't work and gives a 404 when I try to hit a route. Any help with this would be greatly appreciated!
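For reference, one common way to express the "localhost:8080 locally, api:8080 in the container" split is to give the proxy target a fallback. A minimal sketch, assuming the same http-proxy-middleware setup shown above (the fallback URL and the file location are assumptions, not taken from the question):

// src/setupProxy.js - hedged sketch, not the asker's exact file
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // Use API_URL when it is set (e.g. inside the container); otherwise assume a local backend on 8080
  const target = process.env.API_URL || 'http://localhost:8080';
  app.use('/api', createProxyMiddleware({ target, changeOrigin: true }));
};

Note that with Create React App this setupProxy file is only used by the development server, so the environment variable has to be present when that server starts.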

ampqlib Error:"Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. So there are 2 containers: a rabbitmq container and a backend container with 2 servers running - producer and consumer. Now I am trying to get access to the rabbitmq server, but I get this error: "Frame size exceeds frame max".
The full code of my producer server is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
    if (error0) {
      console.log('Some error...')
      throw error0
    }
  })
}

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send("Ok")
})

export { producer };
It is imported into the main file index.ts and run from there.
Maybe I also have some bad configuration inside Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose file includes this code:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would very much appreciate your help.
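For reference, two things stand out in the snippet above: the callback signature used with amqplib.connect belongs to amqplib/callback_api (the main amqplib module exposes a promise-based connect), and from inside the backend container amqp://localhost points at the backend container itself rather than at the rabbitmq service. A minimal promise-based sketch using the compose service name and the credentials defined above (purely illustrative, not the asker's code):

// producer-sketch.js - hedged sketch using amqplib's promise API
const amqplib = require('amqplib');

async function sendRabbitMq() {
  // 'rabbitmq' is the compose service name; user/user come from RABBITMQ_DEFAULT_USER/PASS above
  const connection = await amqplib.connect('amqp://user:user@rabbitmq:5672');
  const channel = await connection.createChannel();
  await channel.assertQueue('tasks');
  channel.sendToQueue('tasks', Buffer.from('hello'));
  await channel.close();
  await connection.close();
}

sendRabbitMq().catch((err) => console.error('AMQP error', err));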

(Docker-Compose) UnhandledPromiseRejectionWarning when connecting node and postgres

I am trying to connect the containers for postgres and node. Here is my setup:
yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
connect to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
  host: "postgresDB",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
  console.log("Reached Here");
  postgres
    .query('SELECT * FROM public."People"')
    .then((results) => {
      console.log(results);
      resp.send({ allData: results.rows });
    })
    .catch((e) => console.log(e));
};
Whenever I try to hit the endpoint above, I get the UnhandledPromiseRejectionWarning error in the container. Any reasons why?
Note: I am able to have everything functioning on my local machine (without Docker) simply by changing the host to "localhost".
Your postgres database name and username should be the same
You can use docker-compose-wait to make sure interdependent services are launched in the proper order.
See below for how to use it in your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana Docker configuration, and I want to connect to elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search")... and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response; only one console.log of "Connecting to Elasticsearch" is printed.
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment it will default to localhost; outside of Docker, where localhost means "the current host", that will reach the published port of the Elasticsearch container.
