Node.js + Prometheus - Target Down Connection Refused

I am running a Node.js application locally. It runs on http://localhost:3002, and using prom-client I can see the metrics at http://localhost:3002/metrics.
I've set up Prometheus in a Docker container and run it.
Dockerfile
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:3002']
        labels:
          service: 'my-service'
          group: 'production'
rule_files:
  - 'alert.rules'
docker build -t my-prometheus .
docker run -p 9090:9090 my-prometheus
When I navigate to http://localhost:9090/targets it shows:
Get http://localhost:3002/metrics: dial tcp 127.0.0.1:3002: connect: connection refused
Can you please tell me what I'm doing wrong here? The Node.js app is definitely running on localhost at that port, because when I go to http://localhost:3002/metrics I can see the metrics.

When Prometheus runs inside a container, localhost refers to the container itself, not to your host machine, so it cannot reach an app running on the host. On Docker for Mac, use the special hostname docker.for.mac.localhost in your prometheus.yml file. See below:
Your job in the prometheus.yml file:
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']
      - targets: ['docker.for.mac.localhost:3002']

And for Windows, it would be:
  - job_name: 'spring-actuator'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['docker.for.win.localhost:8082']
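On recent Docker Desktop versions (both macOS and Windows), the cross-platform hostname host.docker.internal also works, so a single job could cover both platforms. A sketch, where the job name is made up and the port is the one from the original question:

  - job_name: 'node-app'
    static_configs:
      - targets: ['host.docker.internal:3002']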

The applications are not on the same network. First, build a Docker image of your Node.js application as well. Then pass the same network (--net) parameter when running both containers.
Run prometheus app:
docker run --net basic -p 9090:9090 my-prometheus
Run nodejs app:
docker run --net basic -p 8080:8080 my-node-app
Now both applications run in the same network, called basic, so the Prometheus container can reach the Node.js container by its container name (for example http://my-node-app:3002/metrics) instead of localhost, as sketched below.
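For reference, a sketch of the full sequence, assuming the Node.js container is named my-node-app and the app listens on port 3002 as in the question:

# create the shared network once
docker network create basic
# run the node app with an explicit name so Docker's embedded DNS can resolve it
docker run --net basic --name my-node-app -p 3002:3002 my-node-app
# run prometheus on the same network
docker run --net basic -p 9090:9090 my-prometheus

and point the scrape target in prometheus.yml at the container name:

      - targets: ['my-node-app:3002']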

This is what I did to scrape a service on localhost... success:

global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 1s
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['127.0.0.1:9090']
  - job_name: class
    honor_timestamps: true
    scrape_interval: 5s
    scrape_timeout: 5s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['host.docker.internal:8080']
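Note that host.docker.internal resolves automatically on Docker Desktop (macOS and Windows). On plain Linux it usually does not; assuming Docker 20.10 or newer, you can map it to the host gateway yourself when starting the container:

docker run --add-host=host.docker.internal:host-gateway -p 9090:9090 my-prometheus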

Related

nodejs app can't connect to redis on docker swarm

I'm having a tough time connecting my Node.js backend service to the Redis service (both running in swarm mode).
The backend server fails to start because it complains that it was not able to reach Redis with the following error:
On the contrary, if I try to reach the Redis service from within my backend container using curl redis-dev.localhost:6379, I get the response curl: (52) Empty reply from server, which implies that the Redis service is reachable from the backend container:
Below is the backend's docker-compose yaml:
version: "3.8"
services:
backend:
image: backend-app:latest
command: node dist/main.js
ports:
- 4000:4000
environment:
- REDIS_HOST='redis-dev.localhost'
- APP_PORT=4000
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
labels:
- "traefik.enable=true"
- "traefik.http.routers.backend-app.rule=Host(`backend-app.localhost`)"
- "traefik.http.services.backend-app.loadbalancer.server.port=4000"
- "traefik.docker.network=traefik-public"
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
And the following is my Redis compose file:
version: "3.8"
services:
redis:
image: redis:7.0.4-alpine3.16
hostname: "redis-dev.localhost"
ports:
- 6379:6379
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
The issue was with the backend's docker-compose file. The REDIS_HOST environment variable should have been redis-dev.localhost instead of 'redis-dev.localhost'; the quotes were passed through literally as part of the hostname, which is why the backend complained that it could not reach the Redis host.
It's funny how elementary these things can be.
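For reference, the corrected environment section of the backend service (only the quotes change; the rest is as in the question):

    environment:
      - REDIS_HOST=redis-dev.localhost
      - APP_PORT=4000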

reading spark cluster metrics with grafana

I have a standalone Spark cluster, and I'm able to get all the built-in metrics for the workers and the driver, following this guide: https://dzlab.github.io/bigdata/2020/07/03/spark3-monitoring-1/
I have a Prometheus server, and I set up my targets like this:
  - job_name: 'X_master'
    metrics_path: '/metrics/master/prometheus'
    static_configs:
      - targets: ['X:8080']
        labels:
          instance_type: 'master'
          spark_cluster: 'X_CLUSTER'
  - job_name: 'X_spark-workers'
    metrics_path: '/metrics/prometheus'
    static_configs:
      - targets: ['X1:8081','X2:8081']
        labels:
          instance_type: 'worker'
          spark_cluster: 'X_CLUSTER'
  - job_name: 'X_spark-driver'
    metrics_path: '/metrics/prometheus'
    static_configs:
      - targets: ['X:4040']
        labels:
          instance_type: 'driver'
          spark_cluster: 'X_CLUSTER'
I have Grafana on the same server, and I tried this dashboard: https://grafana.com/grafana/dashboards/7890
But how can I load data into it?
I've also followed https://grafana.com/docs/grafana-cloud/integrations/integrations/integration-apache-spark/?pg=blog&plcmt=body-txt
But how can I create the Grafana agent and add that yml?

Very slow queries between server and database in docker

I'm trying to set up some integration tests with Docker Compose. I'm using Docker Compose to spin up a Postgres database and a Node.js server, and then jest to run HTTP requests against the server.
For some reason that I can't explain, all SQL queries (even the simplest ones) are extremely slow (1s+).
It looks like a communication problem between the two containers, but I can't spot it. Am I doing something wrong?
Here's my docker-compose.yml file. The server is just a simple Express app:
version: "3.9"
services:
database:
image: postgres:12
env_file: .env
volumes:
- ./db-data:/var/lib/postgresql/data
healthcheck:
test: pg_isready -U test_user -d test_database
interval: 1s
timeout: 10s
retries: 3
start_period: 0s
server:
build: .
ports:
- "8080:8080"
depends_on:
database:
condition: service_healthy
env_file: .env
environment:
POSTGRES_HOST: database
NODE_ENV: test
init: true
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/healthcheck"]
interval: 1s
timeout: 10s
retries: 3
start_period: 0s
EDIT
I'm using:
Docker version 20.10.2, build 2291f61
macOS Big Sur 11.1 (20C69)
Try using a volume instead of a bind mount for the data folder by changing this:
- ./db-data:/var/lib/postgresql/data
to this:
- db-data:/var/lib/postgresql/data
and adding this section to the end of the compose file:
volumes:
  db-data:
You can read more about bind mounts vs volumes here
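Putting that together, the relevant parts of the compose file would look roughly like this (a sketch; image, env_file and healthcheck stay as in the question):

services:
  database:
    image: postgres:12
    env_file: .env
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume instead of the ./db-data bind mount

volumes:
  db-data: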
I found it. The issue is not in Docker, but in the compiler: I'm using ncc and it breaks my Postgres client.
I opened an issue in their repo with a minimal reproducible example:
https://github.com/vercel/ncc/issues/646
Thanks a lot for your help.

How to monitor Fastify app with Prometheus and Grafana?

I am learning to monitor my Fastify app with Prometheus and Grafana. First, I installed the fastify-metrics package and registered it in the Fastify app.
// app.ts
import metrics from 'fastify-metrics'
...
app.register(metrics, {
  endpoint: '/metrics',
})
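For context, a minimal self-contained version of that registration might look like this (a sketch; the plain fastify() setup, the Fastify v3-style listen call, and port 8081 are assumptions based on the rest of the question):

// app.ts (sketch)
import fastify from 'fastify'
import metrics from 'fastify-metrics'

const app = fastify()

// expose Prometheus metrics at /metrics, as in the snippet above
app.register(metrics, { endpoint: '/metrics' })

// start the HTTP server on the port Prometheus will scrape
app.listen(8081, '0.0.0.0')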
Then I set up Prometheus and Grafana in docker-compose.yml:
version: "3.7"
services:
prometheus:
image: prom/prometheus:latest
volumes:
- prometheus_data:/prometheus
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
command:
- '--config.file=/etc/prometheus/prometheus.yml'
network_mode: host
ports:
- '9090:9090'
grafana:
image: grafana/grafana:latest
volumes:
- grafana_data:/var/lib/grafana
# - ./grafana/provisioning:/etc/grafana/provisioning
# - ./grafana/config.ini:/etc/grafana/config.ini
# - ./grafana/dashboards:/var/lib/grafana/dashboards
environment:
- GF_SECURITY_ADMIN_PASSWORD=ohno
depends_on:
- prometheus
network_mode: host
ports:
- '3000:3000'
volumes:
prometheus_data: {}
grafana_data: {}
I added network_mode: host because the Fastify app will be running at localhost:8081.
Here's the Prometheus config:
# prometheus.yml
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
  - job_name: 'prometheus'
    # metrics_path: /metrics
    static_configs:
      - targets: ['app:8081']
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:8081']
After docker-compose up and npm run dev, the Fastify app is up and running and the target localhost:8081 is UP in the Prometheus dashboard at localhost:9090, where I tried executing some metrics.
I imported the Node Exporter Full and Node Exporter Server Metrics dashboards. I added a Prometheus datasource pointing at localhost:9090, named it Fastify, and saved it successfully; it showed "Data source is working".
However, when I go to the Node Exporter Full dashboard, it shows no data. I selected Fastify as the datasource, but the other selectors in the upper left corner show None.
Please help, what am I doing wrong?
It looks like you're using a dashboard intended for Linux host stats. In order to use Prometheus/Grafana with your Fastify app, you'll need a dashboard that's meant for Node.js apps, for example:
https://grafana.com/grafana/dashboards/11159
https://grafana.com/grafana/dashboards/12230
Plugging one of those in should do the trick.
You should specify the metrics_path in the job as defined by your fastify-metrics endpoint, and also update the targets param:
  - job_name: 'node_exporter'
    scrape_interval: 5s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['localhost:8081']
        labels:
          group: 'node_exporter'

prometheus is not able to access metrics from localhost

I have been trying to configure Prometheus to show metrics in Grafana for my Node.js application. For metrics, I am using prom-client. However, on localhost I always get the following error:
Get http://localhost:5000/metrics: dial tcp 127.0.0.1:5000: connect: connection refused
Moreover, if I use a local tunneling service such as ngrok, Prometheus is able to read the metrics. What am I missing? Do I need to add some special config somewhere?
This is my prometheus.yml file:
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'my-app'
    static_configs:
      - targets: ['localhost:5000']
I am running the default prometheus image with docker-compose, same for grafana.
Because you're running Prometheus with docker-compose, localhost or 127.0.0.1 inside the Prometheus container refers to the container itself, so it won't reach your app running on the host.
You can replace localhost with your machine's IP address, or use ngrok as you did, so that Docker can resolve it to your host.
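Another option, as used in an answer above, is the special hostname host.docker.internal. A sketch, assuming Docker Desktop (or Docker 20.10+ on Linux with an extra_hosts: ["host.docker.internal:host-gateway"] entry on the Prometheus service):

  - job_name: 'my-app'
    static_configs:
      - targets: ['host.docker.internal:5000']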
Thanks for reading :D
