I am learning to monitor my Fastify app with Prometheus and Grafana. First, I installed the fastify-metrics package and registered it in my Fastify app.
// app.ts
import metrics from 'fastify-metrics'
...
app.register(metrics, {
  endpoint: '/metrics',
})
Then I set up Prometheus and Grafana in docker-compose.yml:
version: "3.7"
services:
prometheus:
image: prom/prometheus:latest
volumes:
- prometheus_data:/prometheus
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
command:
- '--config.file=/etc/prometheus/prometheus.yml'
network_mode: host
ports:
- '9090:9090'
grafana:
image: grafana/grafana:latest
volumes:
- grafana_data:/var/lib/grafana
# - ./grafana/provisioning:/etc/grafana/provisioning
# - ./grafana/config.ini:/etc/grafana/config.ini
# - ./grafana/dashboards:/var/lib/grafana/dashboards
environment:
- GF_SECURITY_ADMIN_PASSWORD=ohno
depends_on:
- prometheus
network_mode: host
ports:
- '3000:3000'
volumes:
prometheus_data: {}
grafana_data: {}
I added network_mode: host because the Fastify app will be running at localhost:8081.
Here's the Prometheus config:
# prometheus.yml
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
  - job_name: 'prometheus'
    # metrics_path: /metrics
    static_configs:
      - targets: [
          'app:8081',
        ]
  - job_name: 'node_exporter'
    static_configs:
      - targets: [
          'localhost:8081',
        ]
After docker-compose up and npm run dev, the Fastify app is up and running, and the target localhost:8081 shows as UP in the Prometheus dashboard at localhost:9090, where I tried executing some metrics queries.
I imported the Node Exporter Full and Node Exporter Server Metrics dashboards, and added a Prometheus data source pointing at localhost:9090, named Fastify. It saved successfully and showed "Data source is working".
However, when I open the Node Exporter Full dashboard, it shows no data. I selected Fastify as the data source, but the other selectors in the upper-left corner all show None.
Please help: what am I doing wrong?
It looks like you're using a dashboard intended for Linux host stats. In order to use Prometheus/Grafana with your Fastify app, you'll need a dashboard that's meant for Node.js apps. For example:
https://grafana.com/grafana/dashboards/11159
https://grafana.com/grafana/dashboards/12230
Plugging one of those in should do the trick.
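Once you have a suitable dashboard, you can also wire the data source up automatically through the provisioning volume you have commented out in docker-compose.yml. A minimal sketch, assuming Grafana can reach Prometheus at localhost:9090 (true with your host networking):
# grafana/provisioning/datasources/prometheus.yml
# mounted via: ./grafana/provisioning:/etc/grafana/provisioning
apiVersion: 1
datasources:
  - name: Fastify
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true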
You should specify the metrics_path in the job to match your fastify-metrics endpoint, and also update the targets parameter:
- job_name: 'node_exporter'
  scrape_interval: 5s
  metrics_path: /metrics
  scheme: http
  static_configs:
    - targets: ['localhost:8081']
      labels:
        group: 'node_exporter'
I'm having a tough time connecting my nodejs backend service to the redis service (both running in swarm mode).
The backend server fails to start, complaining that it was not able to reach redis.
On the contrary, if I try to reach the redis service from within my backend container using curl redis-dev.localhost:6379, I get the response curl: (52) Empty reply from server, which implies that the redis service is reachable from the backend container.
Below is the backend's docker-compose yaml:
version: "3.8"
services:
backend:
image: backend-app:latest
command: node dist/main.js
ports:
- 4000:4000
environment:
- REDIS_HOST='redis-dev.localhost'
- APP_PORT=4000
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
labels:
- "traefik.enable=true"
- "traefik.http.routers.backend-app.rule=Host(`backend-app.localhost`)"
- "traefik.http.services.backend-app.loadbalancer.server.port=4000"
- "traefik.docker.network=traefik-public"
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
And the following is my redis compose file:
version: "3.8"
services:
redis:
image: redis:7.0.4-alpine3.16
hostname: "redis-dev.localhost"
ports:
- 6379:6379
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
The issue was with the backend's docker-compose file. The REDIS_HOST environment variable should have been redis-dev.localhost instead of 'redis-dev.localhost'; the quotes were passed through as part of the value, which is why the backend app was complaining that it was not able to reach the redis host.
It's funny how elementary these things can be.
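For reference, the corrected environment block would look like this; in the list form, Compose treats everything after the = as the literal value, quotes included:
environment:
  - REDIS_HOST=redis-dev.localhost   # no quotes: they would become part of the hostname
  - APP_PORT=4000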
I've got a node process running on port 3000 using pm2.
I want to configure Traefik so that it reverse proxies this service on port 80.
Following this excellent blog post, I was able to quickly start Traefik using docker compose and set up a skeleton config for the node-server.
However, that example assumes the node process is hosted inside Docker as well. I couldn't get this to work for my node process (*), so I just want to be able to configure Traefik by pointing it at port 3000 in some way. It seems straightforward, but I couldn't get it to work.
I'm stuck with the following config (which is a mix of various blog posts, without my actually knowing what I'm doing):
services:
  reverse-proxy:
    image: traefik:v2.4
    container_name: "traefik"
    command:
      - "--api.insecure=true"
      - "--api.dashboard=true"
      - "--api.debug=true"
      - "--providers.docker=true"
      - "--log.LEVEL=DEBUG"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.websecure.address=:443"
      - "--providers.docker.exposedbydefault=false"
      - "--certificatesresolvers.myresolver.acme.httpchallenge=true"
      - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.myresolver.acme.email=xxxx@xxx.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
      - "80:80"
      - "8080:8080"
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  node-server:
    loadBalancer:
      servers:
        - url: http://127.0.0.1:3000/
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.node-server.rule=Host(`xxxxxx.com`)"
      - "traefik.http.routers.node-server.entrypoints=websecure"
      - "traefik.http.routers.node-server.tls.certresolver=myresolver"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      - "traefik.http.routers.redirs.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.redirs.entrypoints=web"
      - "traefik.http.routers.redirs.middlewares=redirect-to-https"
This gives the error: "Unsupported config option for services.node-server: 'loadBalancer'".
Long story short: how would I configure Traefik to just reverse proxy a service running on port 3000?
*) I'm a total newbie to Docker and couldn't get the setup to work where the node process depends on custom JavaScript modules in a parent directory. Perhaps there's a way to do this, and then I could use the 'host node in docker' approach instead. I'm all ears.
A few months ago I configured a reverse proxy; here is my configuration:
version: '3'
services:
  reverse-proxy:
    image: traefik:v2.5
    container_name: selling-point-reverse-proxy
    ports:
      - 80:80
      - 8080:8080
    volumes:
      # Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      # Enables the web UI
      - --api.insecure=true
      # Tells Traefik to listen to docker
      - --providers.docker
      # Creates a new entrypoint called web
      - --entrypoints.web.address=:80
      # Disable container exposition
      - --providers.docker.exposedByDefault=false
      # Traefik matches against the container's labels to determine whether to create any route for that container
      - --providers.docker.constraints=Label(`traefik.scope`,`selling-point`)
      # Enable tracing (using jaeger by default)
      - --tracing=true
      # Name of the tracing service on Jaeger
      - --tracing.serviceName=reverse-proxy
      # Host and port of the Jaeger agent
      - --tracing.jaeger.localAgentHostPort=jaeger:6831
    labels:
      # Matcher for creating a route
      - traefik.scope=selling-point
      # Exposes container
      - traefik.enable=true
      # Creates circuit breaker middleware
      - traefik.http.middlewares.latency.circuitbreaker.expression=LatencyAtQuantileMS(50.0) > 10000
      # Creates a forward auth middleware
      - traefik.http.middlewares.auth.forwardauth.address=http://auth:3000/auth/authorize
      # Enables cross origin requests
      - traefik.http.middlewares.cors.headers.accesscontrolalloworiginlist=*
      # Enables forwarding of the request headers
      - traefik.http.middlewares.cors.headers.accessControlAllowHeaders=*
    networks:
      - selling-point
  api:
    image: selling-point-api
    container_name: selling-point-api
    build:
      context: ./selling-point-api
    labels:
      # Tells Traefik where to route the request if the url has the specified prefix
      - traefik.http.routers.api.rule=PathPrefix(`/api`)
      # Attaches the cors, auth and latency middlewares
      - traefik.http.routers.api.middlewares=cors,auth,latency
      # Attaches entrypoints
      - traefik.http.routers.api.entrypoints=web
      # Exposes container
      - traefik.enable=true
      # Matcher for creating a route
      - traefik.scope=selling-point
      # Creates a service called selling-point-api
      - traefik.http.services.selling-point-api.loadbalancer.server.port=3000
      # Attaches the router to the service
      - traefik.http.routers.api.service=selling-point-api
    volumes:
      - ./selling-point-api/src:/app/src
    networks:
      - selling-point
    environment:
      WAIT_HOSTS: mysql:3306
      DATABASE_URL: mysql://root:huachinango@mysql:3306/selling_point
      NODE_ENV: development
  auth:
    image: selling-point-auth
    container_name: selling-point-auth
    build:
      context: ./selling-point-auth
    labels:
      # Tells Traefik where to route the request if the url has the specified prefix
      - traefik.http.routers.auth.rule=PathPrefix(`/auth`)
      # Attaches the cors and circuit breaker middlewares
      - traefik.http.routers.auth.middlewares=cors,latency
      # Attaches entrypoints
      - traefik.http.routers.auth.entrypoints=web
      # Exposes container
      - traefik.enable=true
      # Matcher for creating a route
      - traefik.scope=selling-point
      # Creates a service called selling-point-auth
      - traefik.http.services.selling-point-auth.loadbalancer.server.port=3000
      # Attaches the router to the service
      - traefik.http.routers.auth.service=selling-point-auth
    environment:
      WAIT_HOSTS: mysql:3306
      IGNORE_ENV_FILE: 'true'
      DATABASE_URL: mysql://root:huachinango@mysql:3306/selling_point
      PASSWORD_SALT: $$2b$$10$$g0OI8KtIE3j6OQqt1ZUDte
      NODE_ENV: development
    volumes:
      - ./selling-point-auth/src:/app/src
    networks:
      - selling-point
  mysql:
    image: mysql:5
    container_name: selling-point-mysql
    environment:
      MYSQL_ROOT_PASSWORD: huachinango
      MYSQL_DATABASE: selling_point
    networks:
      - selling-point
    volumes:
      - mysql-db:/var/lib/mysql
  jaeger:
    image: jaegertracing/all-in-one:1.29
    container_name: selling-point-tracing
    environment:
      COLLECTOR_ZIPKIN_HOST_PORT: :9411
    ports:
      - 16686:16686
    networks:
      - selling-point
volumes:
  mysql-db:
networks:
  selling-point:
    name: selling-point
    driver: bridge
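That setup assumes every upstream runs as a container. If you'd rather keep the node process on the host, one option is Traefik's file provider: the loadBalancer block from the question belongs in a dynamic configuration file, not in docker-compose.yml (which is what the "Unsupported config option" error is complaining about). A minimal sketch, assuming Traefik v2 in a container and the node process on host port 3000:
# dynamic.yml, loaded with --providers.file.filename=/etc/traefik/dynamic.yml
http:
  routers:
    node-server:
      rule: Host(`xxxxxx.com`)
      entryPoints:
        - web
      service: node-server
  services:
    node-server:
      loadBalancer:
        servers:
          # host.docker.internal resolves to the host from inside the Traefik
          # container; on Linux, also add
          # extra_hosts: ["host.docker.internal:host-gateway"] to the Traefik service
          - url: http://host.docker.internal:3000/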
Thanks in advance for this awesome stack platform that is JHipster.
I have a question: I am trying to run a microservice directly with:
./mvnw -Pdev -DskipTests
And I am getting an UnknownHostException for 'http://admin:admin@jhipster-registry:8761/eureka/':
2021-09-16 10:06:26.225 INFO 6762 --- [ restartedMain] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://admin:admin@jhipster-registry:8761/eureka/}, exception=I/O error on GET request for "http://admin:admin@jhipster-registry:8761/eureka/apps/": jhipster-registry: Name or service not known; nested exception is java.net.UnknownHostException: jhipster-registry: Name or service not known stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://admin:admin@jhipster-registry:8761/eureka/apps/": jhipster-registry: Name or service not known; nested exception is java.net.UnknownHostException: jhipster-registry: Name or service not known
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:602)
at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplic
My doubt is: why is it trying to use the host jhipster-registry:8761 instead of what I have in the dev configuration, localhost?
eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/
Right now I am using docker-compose in order to run the needed services, like the registry:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v6.8.0
    volumes:
      - ./central-server-config:/central-config
    # By default the JHipster Registry runs with the "dev" and "native"
    # Spring profiles.
    # "native" profile means the filesystem is used to store data, see
    # http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html
    environment:
      - _JAVA_OPTIONS=-Xmx512m -Xms256m
      - JHIPSTER_SLEEP=20
      - SPRING_PROFILES_ACTIVE=dev,oauth2
      - SPRING_SECURITY_USER_PASSWORD=admin
      - JHIPSTER_REGISTRY_PASSWORD=admin
      - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=native
      - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS=file:./central-config
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=git
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_URI=https://github.com/jhipster/jhipster-registry/
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_PATHS=central-config
      # For Keycloak to work, you need to add '127.0.0.1 keycloak' to your hosts file
      - SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=http://keycloak:9080/auth/realms/jhipster
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID=jhipster-registry
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET=jhipster-registry
    ports:
      - 8761:8761
  keycloak:
    image: jboss/keycloak:12.0.4
    command:
      [
        "-b",
        "0.0.0.0",
        "-Dkeycloak.migration.action=import",
        "-Dkeycloak.migration.provider=dir",
        "-Dkeycloak.migration.dir=/opt/jboss/keycloak/realm-config",
        "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING",
        "-Djboss.socket.binding.port-offset=1000",
        "-Dkeycloak.profile.feature.upload_scripts=enabled",
      ]
    volumes:
      - ./realm-config:/opt/jboss/keycloak/realm-config
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=h2
    ports:
      - 9080:9080
      - 9443:9443
      - 10990:10990
  test-mysql:
    container_name: test-mysql
    restart: always
    image: mysql:8.0.25
    environment:
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      # <Port exposed> : <MySQL Port running inside container>
      - '3306:3306'
    expose:
      # Opens port 3306 on the container
      - '3306'
    volumes:
      - test-datavolume:/var/lib/mysql
volumes:
  test-datavolume:
I know that if I add the entry "127.0.0.1 jhipster-registry" to /etc/hosts it is going to work, but I can't find/understand why it is trying to use jhipster-registry instead of localhost.
Thanks!
I have a microservice-based node app. I am using docker, docker-compose and traefik for service discovery.
I have 2 microservices at this moment:
the server app: running at node-app.localhost:8000
the search microservice running at search-microservice.localhost:8002
The issue: I can't make a request from one microservice to another.
Here is my docker-compose config:
# all variables used in this file are defined in the .env file
version: "2.2"
services:
  node-app-0:
    container_name: node-app
    restart: always
    build: ./backend/server
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8000:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:node-app.localhost"
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  search-microservice:
    container_name: ${CONTAINER_NAME_SEARCH}
    restart: always
    build: ./backend/search-service
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8002:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:search-microservice.localhost"
volumes:
  node-ts-app-volume:
    external: true
Both node-app and search-microservice expose port 3000.
Why can't I call http://search-microservice.localhost:8002 from the node app? Calling it from the browser works, though.
Because node-app is a container, and to access other containers it has to use the service name and the internal port.
In your case that is search-microservice:3000.
To access the host PC and its exposed ports, you have to use the host.docker.internal name and the external port.
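For example, you could pass the internal address to the app through the environment; a sketch, assuming both services sit on the same default compose network (the variable name is yours to choose):
services:
  node-app-0:
    environment:
      # service name + internal port, resolvable via Docker DNS on the shared network
      - SEARCH_SERVICE_URL=http://search-microservice:3000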
If you want to access other services from a different container by their hostnames, you can use the extra_hosts parameter in your docker-compose.yml file. You also have to set the ipv4_address parameter under the networks key for each service.
For example:
services:
  node-app-1:
    container_name: node-app
    networks:
      apps:
        ipv4_address: 10.1.3.1
    extra_hosts:
      - "search-microservice.localhost:10.1.3.2"
  node-app-2:
    container_name: search-microservice
    networks:
      apps:
        ipv4_address: 10.1.3.2
    extra_hosts:
      - "node-app.localhost:10.1.3.1"
Extra hosts in docker-compose
I am running a node application locally. It runs on http://localhost:3002, and using prom-client I can see the metrics at the endpoint http://localhost:3002/metrics.
I've set up Prometheus in a docker container and run it.
Dockerfile
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:3002']
        labels:
          service: 'my-service'
          group: 'production'
rule_files:
  - 'alert.rules'
docker build -t my-prometheus .
docker run -p 9090:9090 my-prometheus
When I navigate to http://localhost:9090/targets it shows:
Get http://localhost:3002/metrics: dial tcp 127.0.0.1:3002: connect:
connection refused
Can you please tell me what I'm doing wrong here? The node app is running on localhost at that port, because when I go to http://localhost:3002/metrics I can see the metrics.
When you are inside a container, you cannot access the host's localhost directly. You will need to use docker.for.mac.localhost in your prometheus.yml file. See below:
Your job in the prometheus.yml file:
- job_name: 'prometheus'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9090']
    - targets: ['docker.for.mac.localhost:3002']
and for Windows, it would be:
- job_name: 'spring-actuator'
  metrics_path: '/actuator/prometheus'
  scrape_interval: 5s
  static_configs:
    - targets: ['docker.for.win.localhost:8082']
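On recent Docker Desktop versions, both of those platform-specific names have been superseded by host.docker.internal, so a more portable variant of the same job might look like this (a sketch, using the port from the question):
- job_name: 'node-app'
  static_configs:
    # host.docker.internal works on current Docker Desktop for macOS and Windows;
    # on Linux, start the container with --add-host=host.docker.internal:host-gateway
    - targets: ['host.docker.internal:3002']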
The applications are not in the same network. First, you should create a docker image from your node application too. When running the containers, the network (--net) parameter should be passed to both.
Run prometheus app:
docker run --net basic -p 9090:9090 my-prometheus
Run nodejs app:
docker run --net basic -p 8080:8080 my-node-app
Now both applications run in the same network, called basic, so the Prometheus container can reach the node app's /metrics endpoint through the container's name and internal port instead of localhost.
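The matching scrape config would then target the container by name; a sketch, assuming the node container was started with --name my-node-app and listens on 8080:
scrape_configs:
  - job_name: 'node-app'
    static_configs:
      # the container name is resolvable via Docker's embedded DNS on the shared network
      - targets: ['my-node-app:8080']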
I did this for localhost... success:
global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 1s
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['127.0.0.1:9090']
  - job_name: class
    honor_timestamps: true
    scrape_interval: 5s
    scrape_timeout: 5s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['host.docker.internal:8080']