Prometheus Monitoring Issue for MySQL - percona

I'm trying to set up monitoring for MySQL as described in this Percona link.
I'm setting this up for the first time.
This is my Prometheus config file:
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
  - job_name: linux
    static_configs:
      - targets:
          - '172.19.36.189:3306'
        labels:
          alias: db1
Prometheus version:
prometheus, version 1.1.2 (branch: master, revision: 36fbdcc)
build user: root@a74d279
build date: 20160908-13:12:43
go version: go1.6.3
When I check the Prometheus targets page, I see an error for the target.
There are no errors reported in the Prometheus logs.
When I click the metrics link, the metrics page does not open, and the state of the target is DOWN.
I have started the mysqld and node exporters properly as well.
Where is the issue?

You need to scrape the mysqld exporter (usually port 9104), not mysqld itself. Prometheus scrapes over HTTP; it cannot talk to the MySQL wire protocol on port 3306.
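For illustration, a corrected scrape config would point at the exporter's port instead of MySQL's (this sketch assumes mysqld_exporter runs on the same host on its default port 9104):

```yaml
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
  - job_name: mysql
    static_configs:
      - targets:
          # mysqld_exporter's default listen port, not MySQL's 3306
          - '172.19.36.189:9104'
        labels:
          alias: db1
```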

Three things to check when scrape targets are not reachable:
Networking:
Is the scrape target http://172.19.36.189:9104/metrics reachable from the Prometheus server (for scraping) and from the machine where you open the Prometheus GUI in the browser (for the metrics link)? Check with curl -vvv http://172.19.36.189:9104/metrics, and check any proxy between you and http://172.19.36.189:9104/metrics.
Prometheus logs:
Start prometheus with debugging turned on by using the flag --log.level:
$ /bin/prometheus -h
...
--log.level=info    [debug, info, warn, error]
Then, if Prometheus runs in a container or pod, check its logs using:
docker logs <name of prometheus container>
kubectl logs <name of prometheus pod>
Is the scrape target's port 9104 exposed in the container/pod/service?

Related

Gitlab-runner with podman can't run containers inside properly

Description of the problem:
Hello,
We have a pod YAML file with the specifications of our containers, their setup, etc. If we run this pod locally with the command 'podman play kube pod.yaml', it starts successfully and our application runs. But if we run the same command inside GitLab-runner (with podman as the executor), networking problems occur.
Gitlab runner version: 14.9.2 (CI runner), 15.0.0 (Local GitLab runner)
Description of the problem and differences between running containers in GitLab-runner vs. localhost.
We can't access the external network from the containers inside GitLab-runner (no packages can be installed, etc.). The GitLab-runner container itself can access the external network.
If I try to access POD_NAME:8080/ inside GitLab-runner, it says connection refused. If I try to access POD_NAME:8080/ from localhost with directly running application containers from the host, it says connection refused too.
If I try to access localhost:8080 from the host, it loads the page. If I try to access localhost:8080 from the runner, it says 'No route to host'.
We run the gitlab-runner command with the '--privileged' and '--network=host' flags. Also, in our pod, we use 'hostNetwork: true' and 'privileged: true'.
Containers from GitLab-runner and localhost have different /etc/hosts files:
Containers started from localhost:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.113 cqe dd88a3440675-infra
127.0.1.1 cqe cqe-dispatcher
127.0.1.1 cqe cqe-umbsender
127.0.1.1 cqe cqe-frontend
127.0.1.1 cqe cqe-db
127.0.1.1 cqe cqe-umbreader
Containers started from runner:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.117 runner--project-0-concurrent-0 runner--project-0-concurrent-0-d244dbca3614d5aa-build-2
10.88.0.2 cqe 4a46f7216e30-infra
10.88.0.1 host.containers.internal
10.88.0.117 host.containers.internal
Our pod file (I kept only network information and deleted env variables etc.):
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: cqe
spec:
  hostNetwork: true
  privileged: true
  restartPolicy: Always
  containers:
    - name: db
    - name: frontend
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
      ports:
        - containerPort: 8080
          hostPort: 8080
          protocol: TCP
    - name: dispatcher
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
    - name: umbreader
      workingDir: /clusterqe-django
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
    - name: umbsender
      workingDir: /clusterqe-django
      securityContext:
        runAsUser: 5000
        runAsGroup: 5000
Containers have access to each other via all ports. For example, I can access db from the frontend by 'curl POD_NAME:3306'. This also works in containers from GitLab-runner.
I think the problem is probably related to the fact that we are running a container within a container. But despite using all the different flags and settings, we have not been able to solve this problem for a long time. I will be happy to add more information and reproduction steps.
The main problems that I don't understand:
Containers inside gitlab-runner don't have access to the external network.
I can't access the pod name and its exposed port, regardless of whether it's running in the runner or on localhost.
There are differences in /etc/hosts and other network settings depending on whether I'm inside the runner or on localhost.
Check if GitLab 15.1 (June 2022) helps, considering podman is officially supported there.
GitLab Runner 15.1
We’re also releasing GitLab Runner 15.1 today!
GitLab Runner is the lightweight, highly-scalable agent that runs your CI/CD jobs and sends the results back to a GitLab instance.
GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
What's new:
Use Podman as the container runtime in the Runner Docker executor <===
Add Software Attestations (metadata) to enable SLSA 2 in GitLab CI
Bug Fixes:
Kubernetes executor ignores Docker ENTRYPOINT for the runner helper image
Docker Hub image for GitLab Runner 15.0.0 has only amd64, not arm64 or ppc64le
See Documentation.
And with GitLab 15.3 (August 2022):
GitLab Runner 15.3
We’re also releasing GitLab Runner 15.3 today! GitLab Runner is the lightweight, highly-scalable agent that runs your CI/CD jobs and sends the results back to a GitLab instance. GitLab Runner works in conjunction with GitLab CI/CD, the open-source continuous integration service included with GitLab.
Use Podman for the Docker executor on Linux <====
The list of all changes is in the GitLab Runner CHANGELOG.
See Documentation.

prometheus is not able to access metrics from localhost

I have been trying to configure Prometheus to show metrics in Grafana for my Node.js application. For metrics, I am using prom-client. However, on localhost I always get the following error:
Get http://localhost:5000/metrics: dial tcp 127.0.0.1:5000: connect: connection refused
However, if I use a local tunneling service such as ngrok, Prometheus is able to read the metrics. What am I missing? Do I need to add some special config somewhere?
This is my prometheus.yml file:
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'my-app'
    static_configs:
      - targets: ['localhost:5000']
I am running the default prometheus image with docker-compose, same for grafana.
Because you're running Prometheus with docker-compose, localhost (or 127.0.0.1) inside the Prometheus container refers to the container itself, not to your host, so the scrape fails.
You can replace localhost with your machine's IP, or use ngrok as you did, since Docker can resolve that hostname to your IP.
Thanks for reading :D
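Another option worth sketching: run the app as a compose service alongside Prometheus and scrape it by service name, which resolves on the shared compose network. The service name my-app below is a hypothetical; use whatever name your docker-compose.yml actually defines:

```yaml
# prometheus.yml: scrape the app via its compose service name instead
# of localhost. Both services must be on the same compose network for
# this DNS name to resolve.
scrape_configs:
  - job_name: 'my-app'
    static_configs:
      - targets: ['my-app:5000']
```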

Docker container on Fedora 28 KDE have no internet connection

Since a couple of weeks ago, I've been trying to fix an issue on my new laptop with the Fedora 28 KDE desktop.
I have two issues :
The container can't connect to the internet
The container doesn't see my hosts in /etc/hosts
I tried many solutions: disabling firewalld, flushing iptables, accepting all connections in iptables, enabling firewalld and changing network zones to "trusted", and disabling iptables via daemon.json. It is still not working.
Please, can anyone help? It's becoming a nightmare for me!
UPDATE #1:
Even when I try to build an image, it can't access the internet for some reason; it seems the problem is at the level of Docker itself, not only the containers.
I tried disabling the firewall and changing zones, and I also set all connections to the "trusted" zone.
Can anyone help?
UPDATE #2:
When I turn on the firewalld service and set the WiFi connection zone to 'external', the container/Docker is able to access the internet, but the services can't access each other.
Here is my yml file :
version: "3.4"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    deploy:
      mode: replicated
      replicas: 1
    networks:
      nabed: {}
    volumes:
      - "../nginx/etc/nginx/conf.d:/etc/nginx/conf.d"
      - "../nginx/etc/nginx/ssl:/etc/nginx/ssl"
  api:
    image: nabed_backend:dev
    hostname: api
    command: api
    extra_hosts:
      - "nabed.local:172.17.0.1"
      - "cms.nabed.local:172.17.0.1"
    deploy:
      mode: replicated
      replicas: 1
    env_file: .api.env
    networks:
      nabed: {}
  cms:
    image: nabedd/cms:master
    hostname: cms
    extra_hosts:
      - "nabed.local:172.17.0.1"
      - "api.nabed.local:172.17.0.1"
    deploy:
      mode: replicated
      replicas: 1
    env_file: .cms.env
    volumes:
      - "../admin-panel:/admin-panel"
    networks:
      nabed: {}
networks:
  nabed:
    driver: overlay
inside API container:
$ curl cms.nabed.local
curl: (7) Failed to connect to cms.nabed.local port 80: Connection timed out
inside CMS container:
$ curl api.nabed.local
curl: (7) Failed to connect to api.nabed.local port 80: Connection timed out
UPDATE #3:
I was able to fix the issue by putting my hosts in the extra_hosts option in my YAML file, then switching all my networks to the 'trusted' zone, then restarting Docker and NetworkManager.
Note: for people who voted to close this question, please try to help instead.
Try a very dirty solution: start your container on the host network with the docker run argument --net=host.
I suspect there is also a better solution, but you didn't provide details on how you are starting your containers and which network is available to them.
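If the containers are started with docker-compose, the same workaround can be sketched in the compose file (the api service below is taken from the file in the question; host networking bypasses Docker's bridge and its firewalld/iptables interactions):

```yaml
# Workaround sketch: run a service on the host network so it shares
# the host's interfaces, routes, and name resolution directly.
# Note: network_mode: "host" is incompatible with `ports:` mappings
# and with attaching the service to the overlay network used above.
services:
  api:
    image: nabed_backend:dev
    network_mode: "host"
```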

Docker With DataStax Connection Not Working

I wrote a docker-compose.yml file which downloads the images from the Docker Store. I have already subscribed to those images in the Docker Store and I am able to pull them. The following are the services I am using in my compose file:
store/datastax/dse-server:5.1.6
datastax/dse-studio
The link which I followed to write the compose file is datastax/docker-images.
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'
services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I go to http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following errors:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of your host that's running the containers, not the internal IP addresses where the containers can communicate with each other on that default internal network. Port 9091 where you're running Studio is available on that external IP address because you exposed it in the studio service of your yaml:
ports:
- 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.

How can i view my dockerized container app that i just set up on Azure?

I seem to be missing something here.
I've set up a free account on Azure. Using their tutorials as a reference, I managed to get the public/private keys set up and I ssh'd into the server (this was done after creating a container resource on the portal). Once on that server, I successfully ran this command:
docker run -d -p 80:80 dallascaley/get-started:part1
I can also verify that it is running by using the docker ps command. I also know that this works fine on my local system, so I feel that I am very close to getting my first containerized Docker app up and running, but I just can't seem to figure out what address I need to go to to view the live app.
According to one post on Stack Overflow, it should be the DNS name listed on the resource that starts with 'swarm-master-ip-', but that doesn't work, nor does the IP address listed. I've looked at every record in my resources list and tried all of the DNS names and IP addresses (most of which are duplicates), and none of them seem to work.
Any suggestions?
The root cause is that you created your container on the Swarm master; containers should be created on a Swarm agent. By default, we can't create containers on the Swarm master.
Once we connect to the Swarm master and run docker info, the information looks like this:
root@swarm-master-784816DA-0:~# docker info
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 2
Server Version: 17.06.0-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 3.19.0-65-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 6.805GiB
Name: swarm-master-784816DA-0
ID: IKDF:RSRH:CXT2:M6ER:KI4R:DYAR:2CZH:FFQX:MCRT:4NZB:CBS4:LNRK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Then, to access the Docker Swarm cluster, set your DOCKER_HOST environment variable to the local port you configured for the tunnel with the command export DOCKER_HOST=:2375, and run docker info:
root@swarm-master-784816DA-0:~# export DOCKER_HOST=:2375
root@swarm-master-784816DA-0:~# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
swarm-agent-784816DA000001: 10.0.0.5:2375
└ Status: Healthy
└ Containers: 0
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.145 GiB
└ Labels: executiondriver=<not supported>, kernelversion=3.19.0-65-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=overlay
└ Error: (none)
└ UpdatedAt: 2017-07-03T01:57:18Z
Plugins:
Volume:
Network:
Log:
Swarm:
NodeID:
Is Manager: false
Node Address:
Kernel Version: 3.19.0-65-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 7.145GiB
Name: 2076070ddfd8
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Live Restore Enabled: false
WARNING: No kernel memory limit support
Then we run the container and use docker ps to get the information:
root@swarm-master-784816DA-0:~# docker run -d -p 80:80 yeasy/simple-web
0226e9ab3cadf20701f64c02f1f4a42f5fd57fd297722f268db47db1b124ab5c
root@swarm-master-784816DA-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0226e9ab3cad yeasy/simple-web "/bin/sh -c 'pytho..." 6 seconds ago Up 5 seconds 10.0.0.5:80->80/tcp swarm-agent-784816DA000001/stupefied_snyder
Then we can use the public IP address or DNS name of the swarm-agent-lb load balancer to access the website.
For more information about setting up the environment, please refer to this link.
I don't know Azure and the Azure Container Service so I'm trying to answer this based on what I can gather from the documentation.
Since you mentioned swarm-master-ip I'm assuming you launched a Swarm cluster in the container service. The documentation at https://learn.microsoft.com/en-us/azure/container-service/container-service-docker-swarm suggests that there is a Azure Load Balancer automatically set up for the Swarm agent nodes that will route requests to the applications:
You can now access the application that is running in this container through the public DNS name of the Swarm agent load balancer. You can find this information in the Azure portal.
If it isn't shown anywhere in the Azure Container Service portal, then somewhere in Azure there should be a management page for the Azure Load Balancer that lists its public DNS name.
The Docker container is bound to host port 80, so from the host you should be able to hit the running instance: curl localhost should get a positive response.
If that works, then assuming this Azure instance has a public IP (a network interface bound to a public address), you can get your IP address with ifconfig, and from anywhere in the world you can reach your server at http://12.34.56.78 (whatever your external interface IP address is).
