net::ERR_FAILED api call to backend from nextjs with nginx docker - node.js

I have 3 Docker containers running on macOS.
backend - port 5055
frontend (next.js) - port 3000
nginx - port 80
I am getting net::ERR_FAILED for backend API requests when I access the app from the browser (http://localhost:80). I can make a request to the backend (http://localhost:5055) in Postman and it works well.
Sample API request - GET http://backend:5055/api/category/show
What is the reason for this behaviour?
Thanks.
docker-compose.yml
version: '3.9'
services:
  backend:
    image: backend-image
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    ports:
      - '5055:5055'
  frontend:
    image: frontend-image
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    ports:
      - '3000:3000'
    depends_on:
      - backend
  nginx:
    image: nginx-image
    build:
      context: ./nginx
    ports:
      - '80:80'
    depends_on:
      - backend
      - frontend
backend - Dockerfile.prod
FROM node:19.0.1-slim
WORKDIR /app
COPY package.json .
RUN yarn install
COPY . ./
ENV PORT 5055
EXPOSE $PORT
CMD ["npm", "run", "start"]
frontend - Dockerfile.prod
FROM node:19-alpine
WORKDIR /usr/app
RUN npm install --global pm2
COPY ./package*.json ./
RUN npm install
COPY ./ ./
RUN npm run build
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "start", "npm", "--", "start" ]
nginx - Dockerfile
FROM public.ecr.aws/nginx/nginx:stable-alpine
RUN rm /etc/nginx/conf.d/*
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD [ "nginx", "-g", "daemon off;" ]
nginx - default.conf
upstream frontend {
  server frontend:3000;
}
upstream backend {
  server backend:5055;
}
server {
  listen 80 default_server;
  ...
  location /api {
    ...
    proxy_pass http://backend;
    proxy_redirect off;
    ...
  }
  location /_next/static {
    proxy_cache STATIC;
    proxy_pass http://frontend;
  }
  location /static {
    proxy_cache STATIC;
    proxy_ignore_headers Cache-Control;
    proxy_cache_valid 60m;
    proxy_pass http://frontend;
  }
  location / {
    proxy_pass http://frontend;
  }
}
frontend - .env.local
NEXT_PUBLIC_API_BASE_URL=http://backend:5055/api
frontend - httpServices.js
import axios from 'axios'
import Cookies from 'js-cookie'
const instance = axios.create({
  baseURL: `${process.env.NEXT_PUBLIC_API_BASE_URL}`,
  timeout: 500000,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
})
...
const responseBody = (response) => response.data
const requests = {
  get: (url, body) => instance.get(url, body).then(responseBody),
  post: (url, body, headers) =>
    instance.post(url, body, headers).then(responseBody),
  put: (url, body) => instance.put(url, body).then(responseBody),
}
export default requests
Edit
nginx logs (docker logs -f nginx 2>/dev/null)
172.20.0.1 - - [14/Nov/2022:17:02:39 +0000] "GET /_next/image?url=%2Fslider%2Fslider-1.jpg&w=1080&q=75 HTTP/1.1" 304 0 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /service-worker.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /fallback-B639VDPLP_r91l2hRR104.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
172.20.0.1 - - [14/Nov/2022:17:02:41 +0000] "GET /workbox-fbc529db.js HTTP/1.1" 304 0 "http://localhost/service-worker.js" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
curl request is working well from nginx container to backend container (curl backend:5055/api/category/show)
Edit 2
const CategoryCarousel = () => {
  ...
  const { data, error } = useAsync(() => CategoryServices.getShowingCategory())
  ...
}

import requests from './httpServices'
const CategoryServices = {
  getShowingCategory() {
    return requests.get('/category/show')
  },
}
Edit 3
When NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
Error: connect ECONNREFUSED 127.0.0.1:5055
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1283:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5055,
config: {
...
baseURL: 'http://localhost:5055/api',
method: 'get',
url: '/products/show',
data: undefined
},
...
_options: {
...
protocol: 'http:',
path: '/api/products/show',
method: 'GET',
...
pathname: '/api/products/show'
},
...
},
_currentUrl: 'http://localhost:5055/api/products/show',
_timeout: null,
},
response: undefined,
isAxiosError: true,
}

Docker configuration is all correct
I've read the question again; with the given logs, it seems your configuration has been correct all along. However, what you're asking the browser to do simply cannot work.
Docker Compose service (host name) access
Services in a Docker Compose project can reach each other, which means a request from one service to another service is possible. What is not possible is for a client outside those services (such as your browser) to reach a service by its service name. Here's a simpler explanation:
# docker-compose.yaml
service-a:
  ports:
    - 3000:3000 # exposed to outside
  # ...
service-b:
  ports:
    - 5000:5000 # exposed to outside
  # ...

# ✅ this works! case A
(service-a) $ curl http://service-b:5000
(service-b) $ curl http://service-a:3000
(local machine) $ curl http://localhost:5000
(local machine) $ curl http://localhost:3000

# ❌ this does not work! case B
(local machine) $ curl http://service-a:3000
(local machine) $ curl http://service-b:5000
When I was reading the question initially, I missed that the frontend code is calling the backend by its Docker service name from the browser. This clearly falls into case B, which will not work.
Solution: With server side rendering...
A frontend app is merely JavaScript running in your browser; it cannot resolve Docker-internal hostnames. Therefore the request should be corrected:
# change from http://backend:5055/api
NEXT_PUBLIC_API_BASE_URL=http://localhost:5055/api
But here's a better way to solve it: access the API from server-side code. Since your frontend is Next.js, it is possible to inject the backend result into the frontend.
export async function getServerSideProps(context) {
  const req = await fetch('http://backend:5055/api/...');
  const data = await req.json();

  return {
    props: { data }, // will be passed to the page component as props
  }
}
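For completeness, the page component then receives that fetched result via props. A minimal sketch (the Home component name and the rendering are placeholders, not from the original posts):
// pages/index.js - hypothetical page consuming the data fetched in getServerSideProps above
export default function Home({ data }) {
  return <pre>{JSON.stringify(data, null, 2)}</pre>
}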
Edit
(Edit 3) contains frontend logs when NEXT_PUBLIC_API_BASE_URL is changed to 'localhost' (as you mentioned in the answer). Now the error comes from a different API, i.e. 'localhost:5055/api/products/show', which is inside a getServerSideProps(). Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Here's a more practical example:
// outside getServerSideProps, getStaticProps - browser
// ❌this will fail 1-a
await fetch('http://backend:5055/...') // 'backend' should be service name
// ✅this will work 1-b
await fetch('http://localhost:5055/...')
// inside getServerSideProps or getStaticProps - internal network
export async function getServerSideProps() {
  // ✅ this will work 2-a
  await fetch('http://backend:5055/...');
  // ❌ this will fail 2-b
  await fetch('http://localhost:5055/...');
}
In short, the request has to be either 1-b or 2-a.
Is this happening because some APIs are called from the client side and some from the server side? If that is the case, how should I fix this? Thanks
Yes. There are several ways to deal with it.
1. Programmatically differentiating the host
const isServer = typeof window === 'undefined';
const HOST_URL = isServer ? 'http://backend:5055' : 'http://localhost:5055';
fetch(HOST_URL);
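Applied to the asker's httpServices.js, option 1 could look roughly like this (a sketch, assuming the backend stays published on host port 5055 as in the compose file above; only the baseURL selection changes):
import axios from 'axios'

const isServer = typeof window === 'undefined'

// Server-side code runs inside the Docker network and can use the Compose
// service name; the browser must use an address reachable from the host.
const baseURL = isServer
  ? 'http://backend:5055/api'
  : 'http://localhost:5055/api'

const instance = axios.create({
  baseURL,
  timeout: 500000,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
})

// ...the requests helpers (get/post/put) stay the same as in the question.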
2. Manually differentiating the host
// server side
export async function getServerSideProps() {
  // this seems unnecessary and confusing at first sight, but it comes in very handy later in terms of security.
  fetch('http://backend:5055');
}

// client side
fetch('http://localhost:5055');
3. Use a separate domain for the backend (modify the hosts file)
This is what I usually resort to when testing services with a domain name in a local environment.
Modifying the hosts file means exampleurl.com will resolve to localhost on the OS. In this case the production environment must use a separate domain, the hosts-file setup is required, and the service must be exposed to the public. Please refer to this document on modifying the hosts file.
# docker-compose.yaml
services:
  backend:
    ports:
      - 5050:5050
    # ...

# hosts file
127.0.0.1 exampleurl.com
# ...

// in this case,
// development == local development
const IS_DEV = process.env.NODE_ENV === 'development';
// the hosts file does not resolve ports, so the port is only needed for local development.
const BACKEND_HOST = 'http://exampleurl.com';
const BACKEND_URL = IS_DEV ? `${BACKEND_HOST}:5050` : BACKEND_HOST;

// client-side
fetch(BACKEND_URL);

// server-side
export async function getServerSideProps() {
  fetch(BACKEND_URL);
}
There are many clever ways to solve this problem, but there is no "always right" answer. Take your time to decide which method best fits your case.

In nginx.conf you also need to specify the backend port and the base path (/api):
location /api {
  ...
  proxy_pass http://backend:5055/api;
}
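If you go that route, the browser can reach the API through the same nginx origin that serves the page. A sketch of the matching client-side change (the relative '/api' base is an assumption, not taken from the original posts; server-side code would still need an absolute URL such as http://backend:5055/api, since relative URLs mean nothing in Node):
// httpServices.js - browser requests then go to http://localhost/api/... and are
// forwarded by the `location /api` block above
const instance = axios.create({
  baseURL: '/api',
  timeout: 500000,
})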

Related

Weird behaviour of the fetch when running npm install

My setup is very simple and small:
C:\temp\npm [master]> tree /f
Folder PATH listing for volume OSDisk
Volume serial number is F6C4-7BEF
C:.
│ .gitignore
│ 1.js
│ package.json
│
└───.vscode
launch.json
C:\temp\npm [master]> cat .\package.json
{
  "name": "node-modules",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "emitter": "http://github.com/component/emitter/archive/1.0.1.tar.gz",
    "global": "https://github.com/component/global/archive/v2.0.1.tar.gz"
  },
  "author": "",
  "license": "ISC"
}
C:\temp\npm [master]> npm config list
; cli configs
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/6.14.12 node/v14.16.1 win32 x64"
; userconfig C:\Users\p11f70f\.npmrc
https-proxy = "http://127.0.0.1:8888/"
proxy = "http://127.0.0.1:8888/"
strict-ssl = false
; builtin config undefined
prefix = "C:\\Users\\p11f70f\\AppData\\Roaming\\npm"
; node bin location = C:\Program Files\nodejs\node.exe
; cwd = C:\temp\npm
; HOME = C:\Users\p11f70f
; "npm config ls -l" to show all defaults.
C:\temp\npm [master]>
Notes:
The proxy addresses correspond to Fiddler.
Notice that the emitter dependency url uses http whereas the global uses https.
When I run npm install it starts and then hangs very quickly. And I know why, because Fiddler tells me:
The request is:
GET http://github.com:80/component/emitter/archive/1.0.1.tar.gz HTTP/1.1
connection: keep-alive
user-agent: npm/6.14.12 node/v14.16.1 win32 x64
npm-in-ci: false
npm-scope:
npm-session: 74727385b32ebcbf
referer: install
pacote-req-type: tarball
pacote-pkg-id: registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz
accept: */*
accept-encoding: gzip,deflate
Host: github.com:80
And the response is:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com:80/component/emitter/archive/1.0.1.tar.gz
Now this is BS, pardon my French, because the returned Location value of https://github.com:80/component/emitter/archive/1.0.1.tar.gz is invalid. But I suppose the server is not very smart - it just redirects to https without changing anything else, including the port, which remains 80 - good for http, wrong for https. This explains the hanging - the fetch API used by npm seems to retry at progressively longer delays which creates an illusion of hanging.
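To put numbers on that "illusion of hanging": assuming the conventional exponential-backoff formula (delay = min(minTimeout * factor^(attempt-1), maxTimeout), an assumption not verified against npm's internals), the retry options shown further down (retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000) would wait roughly like this:
// rough backoff sketch, not npm's actual code
const opts = { retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000 };

for (let attempt = 1; attempt <= opts.retries; attempt++) {
  const delay = Math.min(opts.minTimeout * Math.pow(opts.factor, attempt - 1), opts.maxTimeout);
  console.log(`retry ${attempt}: wait ~${delay / 1000}s`);
}
// retry 1: wait ~10s
// retry 2: wait ~60s (100s capped at maxTimeout)
// => over a minute of silence between attempts, which easily reads as a hang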
Debugging npm brings me to the following code inside C:\Program Files\nodejs\node_modules\npm\node_modules\npm-registry-fetch\index.js:
return opts.Promise.resolve(body).then(body => fetch(uri, {
  agent: opts.agent,
  algorithms: opts.algorithms,
  body,
  cache: getCacheMode(opts),
  cacheManager: opts.cache,
  ca: opts.ca,
  cert: opts.cert,
  headers,
  integrity: opts.integrity,
  key: opts.key,
  localAddress: opts['local-address'],
  maxSockets: opts.maxsockets,
  memoize: opts.memoize,
  method: opts.method || 'GET',
  noProxy: opts['no-proxy'] || opts.noproxy,
  Promise: opts.Promise,
  proxy: opts['https-proxy'] || opts.proxy,
  referer: opts.refer,
  retry: opts.retry != null ? opts.retry : {
    retries: opts['fetch-retries'],
    factor: opts['fetch-retry-factor'],
    minTimeout: opts['fetch-retry-mintimeout'],
    maxTimeout: opts['fetch-retry-maxtimeout']
  },
  strictSSL: !!opts['strict-ssl'],
  timeout: opts.timeout
}).then(res => checkResponse(
  opts.method || 'GET', res, registry, startTime, opts
)))
And when I stop at the right moment, this boils down to the following values:
uri
'http://github.com/component/emitter/archive/1.0.1.tar.gz'
agent:undefined
algorithms:['sha1']
body:undefined
ca:null
cache:'default'
cacheManager:'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache'
cert:null
headers:{
npm-in-ci:false
npm-scope:''
npm-session:'413f9b25525c452a'
pacote-pkg-id:'registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz'
pacote-req-type:'tarball'
referer:'install'
user-agent:'npm/6.14.12 node/v14.16.1 win32 x64'
}
integrity:undefined
key:null
localAddress:undefined
maxSockets:50
method:'GET'
noProxy:null
proxy:'http://127.0.0.1:8888/'
referer:'install'
retry:{retries: 2, factor: 10, minTimeout: 10000, maxTimeout: 60000}
strictSSL:false
timeout:0
I have omitted two values and I truly do not know their significance - opt.Promise and memoize. It is possible that they are crucial, I do not know.
Anyway, when I step over this statement, the aforementioned session appears in Fiddler with the bogus url of http://github.com:80/component/emitter/archive/1.0.1.tar.gz and I do not understand - how come? The debugger clearly shows that the uri parameter passed to fetch does not specify the port number.
I thought maybe it is some kind of a non string type, but typeof uri returns 'string'.
I have even written a tiny reproduction to execute just this request using the same arguments, except for the opt.Promise and memoize:
const fetch = require('make-fetch-happen')

fetch('http://github.com/component/emitter/archive/1.0.1.tar.gz', {
  algorithms: ['sha1'],
  cache: 'default',
  cacheManager: 'C:\\Users\\p11f70f\\AppData\\Roaming\\npm-cache\\_cacache',
  headers: {
    "npm-in-ci": false,
    "npm-scope": "",
    "npm-session": "00b5bb97075e3c35",
    "user-agent": "npm/6.14.12 node/v14.16.1 win32 x64",
    "referer": "install",
    "pacote-req-type": "tarball",
    "pacote-pkg-id": "registry:undefined#http://github.com/component/emitter/archive/1.0.1.tar.gz"
  },
  maxSockets: 50,
  method: 'GET',
  proxy: 'http://127.0.0.1:8888',
  referer: 'install',
  retry: {
    retries: 2,
    factor: 10,
    minTimeout: 10000,
    maxTimeout: 60000
  },
  strictSSL: false,
  timeout: 0
}).then(res => console.log(res))
But it shows up correctly in Fiddler - no port is added and hence the redirection works fine.
When there is no Fiddler (and hence no proxy) everything works correctly too, but I am very much curious to know why it does not work with Fiddler.
What is going on here?

amqplib Error: "Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. So there are 2 containers: a rabbitmq container and a backend container with 2 servers running - producer and consumer. Now I am trying to get access to the rabbitmq server, but I get the error "Frame size exceeds frame max".
The full code is:
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';
const producer = express();
const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function (error0: any, connection: any) {
    if (error0) {
      console.log('Some error...')
      throw error0
    }
  })
}

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send("Ok")
})
export { producer };
It is imported into the main file index.ts and run from there.
Also, maybe I have some bad configuration inside Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose includes this code:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.

Nginx reverse proxy to node.js not working

I have an Angular and Node app with Postgres as the DB. I am deploying them to Docker containers on an EC2 instance. The nginx reverse proxy on EC2 is not routing requests to Node. The same setup works on my local machine. The error message I receive is: "XMLHttpRequest cannot load http://localhost:3000/api/user/login due to access control checks". Here is my docker-compose file:
version: '3.3'
networks:
  network1:
services:
  api-service:
    image: backend
    environment:
      PORT: 3000
    volumes:
      - ./volumes/logs:/app/logs
    ports:
      - 3000:3000
    restart: always
    networks:
      - network1
  nginx:
    image: frontend
    environment:
      USE_SSL: "false"
    volumes:
      - ./volumes/logs/nginx:/var/log/nginx/
      - ./volumes/ssl/nginx:/etc/ssl/
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - network1
and my Nginx.conf is as follows:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  server {
    listen 80;
    location / {
      root /myapp/app;
      try_files $uri $uri/ /index.html;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $http_connection;
      proxy_set_header Host $host;
    }
    location /api/ {
      proxy_pass http://api-service:3000;
    }
  }
}
and my API url for the backend is (the commented lines are all that I have tried):
export const environment = {
  production: false,
  //apiUrl: 'http://api-service'
  // apiUrl: '/api-service/api'
  // apiUrl : 'http://' + window.location.hostname + ':3000'
  //apiUrl: 'http://localhost:3000/api'
  apiUrl: 'http://localhost:3000'
};
and in my angular service I am using:
const BACKEND_URL = environment.apiUrl + "/api/user/";
and in my backend app.js:
app.use("/api/user/", userRoutes);
What am I doing wrong?
Posting what worked for me in case it helps someone. Here is what worked when I moved my code to the EC2 instance:
export const environment = {
  production: false,
  //apiUrl: 'http://localhost:3000/api'
  apiUrl: '/api'
};
So, on localhost apiUrl: 'http://localhost:3000/api' works, but it does not work on the server: in the browser, localhost points at the user's machine rather than the EC2 host, so the relative /api path, which resolves against the same nginx origin that serves the page, is what works there.

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES: curl -XGET "localhost:9200" returns "You Know, for Search"... And Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
cluster_name: 'es-docker-cluster',
status: 'yellow',
timed_out: false,
number_of_nodes: 1,
number_of_data_nodes: 1,
active_primary_shards: 7,
active_shards: 7,
relocating_shards: 0,
initializing_shards: 0,
unassigned_shards: 3,
delayed_unassigned_shards: 0,
number_of_pending_tasks: 0,
number_of_in_flight_fetch: 0,
task_max_waiting_in_queue_millis: 0,
active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
warnings: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 3,
aborted: false
}
}
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just "trial and error". Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
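Putting that together with the asker's connection.js, a minimal sketch (assuming ES_HOST is set to elasticsearch by docker-compose, as in the question's environment block):
// connection.js - sketch combining the env-var host with the original health check
const { Client } = require("@elastic/elasticsearch");

// "elasticsearch" inside Compose; falls back to localhost outside Docker
const esHost = process.env.ES_HOST || "localhost";
const client = new Client({ node: `http://${esHost}:9200` });

async function health() {
  try {
    const health = await client.cluster.health({});
    console.log(health.body);
  } catch (err) {
    console.log("ES Connection Failed", err);
  }
}

health();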

Docker Health Check and Node.js Application

I've been trying to add a health check to my container; however, no matter what I do, the container never seems to work. To be precise, I have the following structure:
Traefik Proxy
Node.js Application behind that proxy
All the labels for Traefik are included in the docker-compose.yml file.
Whenever I try to add the health check, either in the Dockerfile or in docker-compose.yml, the application is built and listens for connections on port 443; however, when I try to access the address from the browser, it always shows a 404 error (which Traefik serves when it is unable to proxy to the container).
Here is the simple service configuration:
frontend:
  restart: always
  build:
    context: ./configuration/frontend
    dockerfile: Dockerfile
  environment:
    - application_environment=development
    - FRONTEND_DOMAIN=HOST_HERE
  volumes:
    - ./volumes/frontend:/app:rw
    - ./volumes/backend/.env:/.env:ro
    - ./volumes/backend/resources/lang:/backend-lang:rw
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:HOST_HERE
    - traefik.port=3000
    - traefik.docker.network=traefik_proxy
    - traefik.frontend.redirect.entryPoint=https
    - traefik.frontend.passHostHeader=true
    - traefik.frontend.headers.SSLRedirect=true
    - traefik.frontend.headers.browserXSSFilter=true
    - traefik.frontend.headers.contentTypeNosniff=true
    - traefik.frontend.headers.customFrameOptionsValue=SAMEORIGIN
    - traefik.frontend.headers.STSPreload=true
    - traefik.frontend.headers.STSSeconds=31536000
  healthcheck:
    test: ["CMD", "cd /app && yarn healthcheck"]
    interval: 10s
    timeout: 5s
    start_period: 60s
  networks:
    - traefik_proxy
And here is the healthcheck.js file, which is run via the command yarn healthcheck:
const http = require('https');

const options = {
  host: process.env.FRONTEND_DOMAIN,
  port: 443,
  path: '/',
  method: 'GET',
  timeout: 2000
};

const healthCheck = http.request(options, (response) => {
  console.log(`STATUS: ${response.statusCode}`);
  if (response.statusCode === 200) {
    process.exit(0);
  }
  else {
    process.exit(1);
  }
});

healthCheck.on('error', function (error) {
  console.error('ERROR', error);
  process.exit(1);
});

healthCheck.end();
When I start the container without the HEALTHCHECK options (in either the Dockerfile or the compose file), it works just fine: the page is displayed, and when I manually execute yarn healthcheck it shows that everything is fine (I mean in the console, STATUS: 200). However, with the automated health check, Traefik has no access to the container.
