I have been trying to configure Prometheus to show metrics in Grafana for my Node.js application. For metrics, I am using prom-client. However, on localhost I always get the following error:
Get http://localhost:5000/metrics: dial tcp 127.0.0.1:5000: connect: connection refused
However, if I use a local tunneling service such as ngrok, Prometheus is able to read the metrics. What am I missing? Do I need to add some special config somewhere?
This is my prometheus.yml file:
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'my-app'
    static_configs:
      - targets: ['localhost:5000']
I am running the default Prometheus image with docker-compose, and the same for Grafana.
Because you're running Prometheus with docker-compose, localhost (or 127.0.0.1) inside the Prometheus container refers to the container itself, not to your host machine, so the scrape fails.
You can replace localhost with your machine's IP address, or keep using ngrok as you did, since the ngrok hostname resolves to an address the container can reach.
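As a minimal sketch (assuming your Node.js app is defined in the same docker-compose file as a service named my-app, a hypothetical name, listening on port 5000), you can also scrape it by service name, which Docker's embedded DNS resolves on the compose network:

scrape_configs:
  - job_name: 'my-app'
    static_configs:
      # 'my-app' is the compose service name, resolvable from the Prometheus container
      - targets: ['my-app:5000']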
Thanks for reading :D
Related
I am running a Node application locally. It runs on http://localhost:3002, and using prom-client I can see the metrics at the following endpoint: http://localhost:3002/metrics.
I've set up Prometheus in a Docker container and run it.
Dockerfile
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:3002']
        labels:
          service: 'my-service'
          group: 'production'

rule_files:
  - 'alert.rules'
docker build -t my-prometheus .
docker run -p 9090:9090 my-prometheus
When I navigate to http://localhost:9090/targets, it shows:
Get http://localhost:3002/metrics: dial tcp 127.0.0.1:3002: connect:
connection refused
Can you please tell me what I'm doing wrong here? The Node app is definitely running on localhost at that port, because when I go to http://localhost:3002/metrics I can see the metrics.
When Prometheus runs inside a container, localhost refers to the container itself, so you cannot reach the host directly. On Docker for Mac you will need to use docker.for.mac.localhost in your prometheus.yml file instead. See below:
Your job in the prometheus.yml file:
- job_name: 'prometheus'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9090']
    - targets: ['docker.for.mac.localhost:3002']
and for Windows, it would be:
- job_name: 'spring-actuator'
  metrics_path: '/actuator/prometheus'
  scrape_interval: 5s
  static_configs:
    - targets: ['docker.for.win.localhost:8082']
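Note that on current Docker Desktop releases both of these platform-specific names have been replaced by host.docker.internal, which works on Mac and Windows alike (and on Linux with Docker 20.10+ if the container is started with --add-host=host.docker.internal:host-gateway). A sketch of the same job with the newer name:

- job_name: 'my-app'
  static_configs:
    # host.docker.internal resolves to the host from inside the container
    - targets: ['host.docker.internal:3002']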
The applications are not on the same network. First, build a Docker image from your Node application too. Then, when running the two containers, pass the network parameter (--net) to both.
Run the Prometheus app:
docker run --net basic -p 9090:9090 my-prometheus
Run the Node.js app:
docker run --net basic -p 8080:8080 my-node-app
Now the applications run in the same network, called basic, so the Prometheus container can reach the Node app's /metrics endpoint over that network (by container name, not localhost).
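Two details this glosses over, as a sketch (my-node-app is used as an explicit container name here, an assumption): the network must exist before the containers join it, and Prometheus should scrape the container name, which Docker's embedded DNS resolves on the shared network.

docker network create basic
docker run --net basic --name my-node-app -p 8080:8080 my-node-app

Then in prometheus.yml:

static_configs:
  # container name on the shared 'basic' network; use the port your app listens on
  - targets: ['my-node-app:3002']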
I got this working against localhost with the following config:

global:
  scrape_interval: 5s
  scrape_timeout: 5s
  evaluation_interval: 1s

scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['127.0.0.1:9090']

  - job_name: class
    honor_timestamps: true
    scrape_interval: 5s
    scrape_timeout: 5s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets: ['host.docker.internal:8080']

The host.docker.internal target is what lets the Prometheus container reach a service running on the host.
For a couple of weeks now I've been trying to fix an issue on my new laptop running Fedora 28 with the KDE desktop!
I have two issues:
The container can't connect to the internet.
The container doesn't see my hosts in /etc/hosts.
I tried many solutions: disabling firewalld, flushing iptables, accepting all connections in iptables, re-enabling firewalld and changing network zones to "trusted", and even disabling iptables via daemon.json. It still doesn't work!
Please, can anyone help? It's becoming a nightmare for me!
UPDATE #1:
Even when I try to build an image, it can't access the internet for some reason! It seems the problem is at the level of Docker itself, not only the containers.
I tried disabling the firewall and changing zones, and I also set all connections to the "trusted" zone.
Can anyone help?
UPDATE #2:
When I turn the firewalld service on and set the Wi-Fi connection zone to 'external', the containers and Docker itself can access the internet, but the services can't reach each other.
Here is my yml file :
version: "3.4"
services:
nginx:
image: nginx
ports:
- "80:80"
- "443:443"
deploy:
mode: replicated
replicas: 1
networks:
nabed: {}
volumes:
- "../nginx/etc/nginx/conf.d:/etc/nginx/conf.d"
- "../nginx/etc/nginx/ssl:/etc/nginx/ssl"
api:
image: nabed_backend:dev
hostname: api
command: api
extra_hosts:
- "nabed.local:172.17.0.1"
- "cms.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .api.env
networks:
nabed: {}
cms:
image: nabedd/cms:master
hostname: cms
extra_hosts:
- "nabed.local:172.17.0.1"
- "api.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .cms.env
volumes:
- "../admin-panel:/admin-panel"
networks:
nabed: {}
networks:
nabed:
driver: overlay
Inside the API container:
$ curl cms.nabed.local
curl: (7) Failed to connect to cms.nabed.local port 80: Connection timed out
Inside the CMS container:
$ curl api.nabed.local
curl: (7) Failed to connect to api.nabed.local port 80: Connection timed out
UPDATE #3:
I was able to fix the issue by putting my hosts in the extra_hosts option of my YAML file, then switching all my networks to the 'trusted' zone, and then restarting Docker and NetworkManager.
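For reference, a sketch of the firewalld side of that fix (docker0 is the default bridge interface; the interface names on your machine may differ, especially with overlay networks):

sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload
sudo systemctl restart docker NetworkManager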
Note: for people who voted to close this question, please try to help instead.
Try a very dirty solution: start your container on the host network, with the docker run argument --net=host.
I guess there will also be a better solution, but you didn't provide details on how you are starting your containers or which networks are available to them.
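If you go that route with docker-compose, the equivalent is network_mode (a sketch with hypothetical service and image names; note that host networking bypasses Docker's network isolation, so published ports are ignored):

services:
  my-service:
    image: my-image
    network_mode: host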
I wrote a docker-compose.yml file which downloads the images from the Docker Store. I had already subscribed to those images in the Docker Store and I am able to pull them. The following are the services I am using in my compose file:
store/datastax/dse-server:5.1.6
datastax/dse-studio
The link I followed to write the compose file is datastax/docker-images.
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'

services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1

  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1

  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I go to http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following errors:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of your host that's running the containers, not the internal IP addresses where the containers can communicate with each other on that default internal network. Port 9091 where you're running Studio is available on that external IP address because you exposed it in the studio service of your yaml:
ports:
- 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.
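A quick way to sanity-check that resolution from inside the Studio container (an assumption: getent is available in the image; substitute whatever tool the image ships):

docker-compose exec studio getent hosts node

If that resolves, use node as the host (with the default native transport port 9042, the port shown in the error above) when creating the connection in Studio.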
I’m using Docker for Mac for my development environment.
The problem is that anybody within our local network can access the server and the MySQL database running on my machine. Of course, they need to know the credentials, which they could possibly brute-force.
For example, if my local IP is 10.10.100.22, somebody can access my local site by typing https://10.10.100.22:8300, or the database with mysql -h 10.10.100.22 -P 8301 -u root -p (port 8300 maps to Docker port 443, port 8301 maps to Docker port 3306).
Currently, I use the Mac firewall and block incoming connections for vpnkit, which is used by Docker. It works, but I'm not sure this is the best approach.
UPDATE
The problem with the firewall approach is that you have to coordinate it with all the developers on your team. I was hoping to achieve my goal using Docker configuration alone, similar to private networks in Vagrant: https://www.vagrantup.com/docs/networking/private_network.html.
What is the best way to restrict access to my docker dev environment within the local network?
I found a very simple solution for my problem. In docker-compose.yml, instead of:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "8301:3306"
which leaves port 8301 wide open to the local network, I did the following:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "127.0.0.1:8301:3306"
which binds port 8301 to 127.0.0.1 (the host's loopback interface) only, so the port is not accessible from outside the host.
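If you want that behavior for every published port without repeating the 127.0.0.1 prefix, the Docker daemon also accepts a default binding address. A sketch for /etc/docker/daemon.json (restart the daemon afterwards):

{
  "ip": "127.0.0.1"
}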
I'm trying to set up monitoring for MySQL as described in this Percona link.
I'm setting this up for the first time.
This is my Prometheus config file:
global:
  scrape_interval: 5s
  evaluation_interval: 5s

scrape_configs:
  - job_name: linux
    static_configs:
      - targets:
          - '172.19.36.189:3306'
        labels:
          alias: db1
Prometheus version:
prometheus, version 1.1.2 (branch: master, revision: 36fbdcc)
build user: root@a74d279
build date: 20160908-13:12:43
go version: go1.6.3
When I check the Prometheus targets page, the state of the target is DOWN, and when I click the metrics link the metrics page does not open. No errors are reported in the Prometheus logs.
I have started mysqld and node exporters properly as well.
Where is the issue?
You need to scrape the mysqld exporter (usually port 9104), not mysqld itself.
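A corrected scrape config sketch, assuming mysqld_exporter runs on the same host at its default port 9104:

scrape_configs:
  - job_name: mysql
    static_configs:
      - targets:
          # mysqld_exporter's port, not the MySQL server port 3306
          - '172.19.36.189:9104'
        labels:
          alias: db1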
Three things to check when scrape targets are not reachable:
Networking:
Is the scrape target http://172.19.36.189:9104/metrics reachable from where you've opened the Prometheus GUI in the browser? Check with curl -vvv http://172.19.36.189:9104/metrics, and check any proxy sitting in front of it.
Prometheus logs:
Start Prometheus with debugging turned on by using the --log.level flag:
$ /bin/prometheus -h
...
--log.level=info    [debug, info, warn, error]
Then, check the container's logs using:
docker logs <name of prometheus container>
kubectl logs <name of prometheus pod>
Port exposure:
Is the scrape target's port 9104 exposed in the container/pod/service?
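For the plain Docker case, a sketch of running mysqld_exporter with its port published (the DATA_SOURCE_NAME credentials and address are placeholders):

docker run -d -p 9104:9104 \
  -e DATA_SOURCE_NAME="user:password@(172.19.36.189:3306)/" \
  prom/mysqld-exporter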