Pushing metrics to a Prometheus server via Prometheus remote write from Netdata - Linux

I have Netdata installed on one of my computers and I want to export its data to my Prometheus server (both machines run Ubuntu).
But I can't use Prometheus' pull system; I need the metrics to be pushed from Netdata to Prometheus.
Netdata has Prometheus remote write implemented in its exporting engine, and I am able to configure it to send metrics to my server PC just fine.
But I can't see the metrics in Prometheus at all, although I know the metrics are being sent to the server PC, because I can see them by listening on the port I'm pushing to with netcat.
So I think my Prometheus config is wrong.
This is my netdata exporting config:
[prometheus_remote_write:prometheus_receiver]
    enabled = yes
    destination = 192.168.5.45:9090
    remote write URL path = /write
    #username = admin
    #password = admin
    data source = average
    prefix = netdata
    # hostname = my_hostname
    # update every = 10
    # buffer on failures = 10
    # timeout ms = 20000
    # send names instead of ids = yes
    # send charts matching = *
    send hosts matching = *
And this is my prometheus config:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

remote_read:
  - url: http://localhost/api/v1/write
    remote_timeout: 30s
I expected that opening the page localhost:9090/api/v1/write would let me see the metrics pushed from Netdata, but instead I get a blank page that says "Method Not Allowed".
I execute Prometheus with the flags --web.enable-admin-api --web.enable-remote-write-receiver.
Any clue as to what I'm doing wrong?

Try executing Prometheus with the flag --enable-feature=remote-write-receiver.
You may have an older version of Prometheus, and on older versions it is this feature flag that works.
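A minimal sketch of how either flag is passed on the command line (the config file path is an assumption; which flag applies depends on your Prometheus version):
# Older Prometheus releases gate the receiver behind a feature flag:
./prometheus --config.file=prometheus.yml --enable-feature=remote-write-receiver
# Newer releases (roughly 2.33 and later) use a dedicated flag instead:
./prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver
Note that even with the receiver enabled, Prometheus only accepts POSTs on /api/v1/write, so opening that URL in a browser (a GET) is still expected to return "Method Not Allowed".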

Related

Prometheus - Scraping metrics from different endpoints inside an Azure VM

I have Prometheus running inside a Kubernetes cluster in Azure, and I'm trying to use it to also monitor a few VMs inside the same resource group.
I have set up Azure SD for the VMs, and it's discovering them correctly, but in these VMs there is more than one service exposing metrics on different ports.
Is there a way to tell Prometheus to scrape multiple ports under the azure_service_discovery job?
Or at least to have these metrics aggregated, so Prometheus can scrape them from one single port?
The job definition that I'm using is:
azure_sd_configs:
  - authentication_method: "OAuth"
    subscription_id: AZURE_SUBSCRIPTION_ID
    tenant_id: AZURE_TENANT_ID
    client_id: AZURE_CLIENT_ID
    client_secret: AZURE_CLIENT_SECRET
    port: 9100
    refresh_interval: 300s
You can't have two different ports in the same sd config.
However, you can:
Either have multiple jobs with different azure_sd_configs. This way you can have a different configuration for each job (drop some targets, customize the sample limit, etc.):
- job_name: azure_exporters_a
  sample_limit: 1000
  azure_sd_configs:
    - port: 9100
      ...
- job_name: azure_exporters_b
  sample_limit: 5000
  azure_sd_configs:
    - port: 9800
      ...
Or have multiple azure_sd_configs for a single job. In that case (the second one), all of your exporters are regrouped in the same job, so they share the same configuration (sample_limit, scrape_timeout, ...):
- job_name: azure_exporters
  sample_limit: 5000
  azure_sd_configs:
    - port: 9100
      ...
    - port: 9800
      ...
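If you go with the second option and still want to tell the two exporter types apart in queries, one hedged sketch is to copy the scrape port into its own label with relabeling (the exporter_port label name is purely illustrative, not something Azure SD provides):
- job_name: azure_exporters
  sample_limit: 5000
  azure_sd_configs:
    - port: 9100
    - port: 9800
  relabel_configs:
    # __address__ is "host:port", so capture the port into a custom label
    - source_labels: [__address__]
      regex: '.*:(\d+)'
      target_label: exporter_port
      replacement: '$1'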

K8S - using Prometheus to monitor another Prometheus instance in a secure way

I've installed Prometheus Operator 0.34 (which works as expected) on cluster A (the main Prometheus).
Now I want to use the federation option, i.e. collect metrics from another Prometheus which is located on another K8S cluster, B.
Scenario:
In cluster A I have the MAIN Prometheus Operator v0.34 config.
In cluster B I have the SLAVE Prometheus 2.13.1 config.
Both were installed successfully via Helm; I can access localhost via port-forwarding and see the scraping results on each cluster.
I did the following steps:
Use additionalScrapeConfigs on the operator (main cluster A).
I added the following to the values.yaml file and updated it via Helm:
additionalScrapeConfigs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 101.62.201.122:9090 # The External-IP and port from the target prometheus on Cluster B
I obtained the target as follows:
On the Prometheus inside cluster B (from which I want to collect the data) I ran:
kubectl get svc -n monitoring
I took the EXTERNAL-IP from the output and put it inside the additionalScrapeConfigs config entry.
Then I switched to cluster A and ran kubectl port-forward svc/mon-prometheus-operator-prometheus 9090:9090 -n monitoring.
Opening the browser at localhost:9090 I can see the graphs, and under Status -> Targets
I can see the new target with the job federate.
Now my main questions/gaps (security & verification):
To get that target into a green state (see the pic), I configured the Prometheus server in cluster B to use type: LoadBalancer instead of type: NodePort, which exposes the metrics outside the cluster. This can be good for testing, but I need to secure it. How can that be done?
How do I make the end-to-end flow work in a secure way?
tls
https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config
Inside cluster A (the main cluster) we use certificates for our services with Istio, like the following, which works:
tls:
  mode: SIMPLE
  privateKey: /etc/istio/oss-tls/tls.key
  serverCertificate: /etc/istio/oss-tls/tls.crt
I see that in the docs there is an option to configure tls_config:
additionalScrapeConfigs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 101.62.201.122:9090 # The External-IP and port from the target
    # tls_config:
    #   ca_file: /opt/certificate-authority-data.pem
    #   cert_file: /opt/client-certificate-data.pem
    #   key_file: /sfp4/client-key-data.pem
    #   insecure_skip_verify: true
But I'm not sure which certificate I need to use inside the Prometheus Operator config: the certificate of the main Prometheus A or of the slave B?
You should consider using Additional Scrape Configuration
AdditionalScrapeConfigs allows specifying a key of a Secret
containing additional Prometheus scrape configurations. Scrape
configurations specified are appended to the configurations generated
by the Prometheus Operator.
I am afraid this is not officially supported. However, you can update your prometheus.yml section within the Helm chart. If you want to learn more about it, check out this blog.
I see two options here:
Connections to Prometheus and its exporters are not encrypted and authenticated by default. This is one way of fixing that with TLS certificates and stunnel.
Or specify Secrets which you can add to your scrape configuration, as sketched below.
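A minimal sketch of what that scrape entry could look like with TLS, assuming you verify cluster B's endpoint with the CA that signed its serving certificate (the secret name, file path, and server_name below are placeholders, not something the chart creates for you):
additionalScrapeConfigs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: /federate
    scheme: https
    params:
      'match[]':
        - '{job="prometheus"}'
    static_configs:
      - targets:
          - 101.62.201.122:9090
    tls_config:
      # CA of the target (cluster B) endpoint, mounted from a Secret
      # listed in the operator's spec.secrets field; the path is illustrative.
      ca_file: /etc/prometheus/secrets/cluster-b-ca/ca.crt
      # Must match the CN/SAN in cluster B's serving certificate.
      server_name: prometheus.cluster-b.example.com
In other words, for plain server-side TLS it is typically the CA of the slave (cluster B) endpoint that the main Prometheus A needs to trust; a client certificate/key pair only comes into play if cluster B enforces mutual TLS.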
Please let me know if that helped.
A couple of options spring to mind:
Put the two clusters in the same network space and put a firewall in front of them.
Set up a VPN tunnel between the clusters.
Use Istio multicluster routing (but this could get complicated): https://istio.io/docs/setup/install/multicluster

collectd data not showing in influxdb container

I'm trying to put in place global resource monitoring of a small cluster. The chosen stack:
- collectd on the nodes for data gathering
- influxdb as the backend, using the official docker container
- grafana as the frontend, again using the official container
The containers are launched on a central server. Grafana is able to connect to the influxdb source, and I updated the collectd agent (network plugin in collectd.conf) and influxdb (influxdb.conf with the collectd plugin) to enable them to talk to each other.
But no data is showing up... There isn't much log output to check, but the influxdb data files are definitely empty and nothing comes up when querying.
Has anyone experienced such a setup? Any idea where to dig?
collectd conf extract:
# /etc/collectd/collectd.conf
<Plugin network>
    Server "<public_IP_of_the_docker_host>" "25826"
</Plugin>
influxdb conf:
[input_plugins.collectd]
enabled = true
address = "public_IP_of_the_docker_host"
port = 25826
database = "collectd"
typesdb = "/usr/share/collectd/types.db"
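One thing worth double-checking in this setup: collectd's network plugin sends data over UDP, so when influxdb runs in a container the collectd port has to be published as a UDP port, otherwise nothing ever reaches the input plugin. A hedged sketch for the official influxdb image (the image name, ports, and types.db mount are assumptions to adapt to your setup):
# Publish the HTTP API (TCP) and the collectd listener (UDP),
# and mount a types.db for the collectd input to parse packets.
docker run -d --name influxdb \
  -p 8086:8086 \
  -p 25826:25826/udp \
  -v /usr/share/collectd/types.db:/usr/share/collectd/types.db:ro \
  influxdb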

Puppet clients applying empty classes (with default parameters)

The problem:
Servers running the puppet agent in my environment are receiving empty classes (without parameters) instead of the expected parameters stored in their Hiera document. This causes puppet modules to run with null parameters, which in turn causes them to run successfully with default values instead of the actual expected values (which is obviously unwanted and destructive behavior).
What causes the problem to trigger?
Our Hiera is based on a CouchDB document database (as I will expand on later). When the CouchDB service is down, the puppet agents (upon asking the puppet masters for a new catalog) receive empty classes (without parameters) instead of the expected parameters stored in their Hiera document.
My environment architecture:
4 Puppet master servers behind a network load balancer (Cisco ACE)
1 Puppet CA server
2 Hiera servers (CouchDB 1.6.0) behind a network load balancer (Cisco ACE)
All servers run RedHat 6.3
Puppet version 3.7.4
Puppet masters communicate with the Hiera servers via Http_Backend v1.0.1
Using PuppetDB with PostgreSQL to store the server inventory
How are we able to simulate the problem?
By stopping the CouchDB service on one of the two Hiera servers (hiera01) we are able to trigger the problem:
The puppet masters' logs show "connection refused ..." errors for the sessions that were already open to the hiera01 server, for more than 20 minutes;
those sessions are not closed when the CouchDB service is stopped.
New requests are routed to hiera02.
Client servers whose catalog request went through hiera01 got default params for their classes!
Puppet master's main configuration file:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = %vardir/ssl {group = service, mode = 640}
ca = false
certname = master_server_01.domain
dns_alt_names = puppet-master-ace.domain, puppet-master-ace
use_srv_records = true
pluginsource = puppet:///plugins
pluginfactsource = puppet:///pluginfacts
reports = log, foreman
enviromentpath = $confdir/enviroments
basemodulepath = $confdir/modules
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
[master]
storeconfigs = true
storeconfigs_backend = puppetdb
always_cache_features = true
Hiera.yaml (master server)
---
:backends:
  - http
  - yaml
:hierarchy:
  - "%{fqdn}"
  - "%{enviroment}"
  - common
:http:
  :host: hieraserverace.domain
  :use_auth: true
  :auth_user: admin
  :auth_pass: Passowrd
  :api_user: apiUser
  :api_pass: apipassword
  :merge_behavior: deeper
  :port: 5984
  :output: json
  :failure: graceful
  :path:
    - "/%{environment}/%{fqdn}"
    - "/%{environment}/%{osfamily}"
    - "/%{environment}/%{enviroment}"
    - "/%{environment}/common"
:yaml:
  :datadir: /etc/puppet/hieradata
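As an aside, one hedged way to check what this configuration resolves for a given node is to run the hiera CLI in debug mode on a master (the key name and fact values below are placeholders):
hiera -d -c /etc/puppet/hiera.yaml some::class::param environment=production fqdn=client01.domain
With CouchDB stopped on hiera01, the debug output shows whether the http backend reports an error or silently falls through to the (empty) yaml backend.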
Notes
The environment is production and updating versions is almost impossible.
The hieradata directory is empty (we are not using the yaml backend).
Thank you!
PS: Since we are running a classified environment disconnected from the internet, uploading log files is a very complicated process.

Remote ArangoDB access

I am trying to access a remote ArangoDB install (on a Windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the Arango server dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html)
It's been a long time since I came back to this. Thanks to the previous comments I was able to sort it out.
The file to edit is arangod.conf. On a Windows machine it is located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to be the IP address of my server (the bottom line below):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
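For the second goal (connecting from a local arangosh shell), a minimal sketch assuming the endpoint above and the default root user (adjust the username and database to your setup):
arangosh --server.endpoint tcp://192.168.0.14:8529 --server.username root --server.database _system
The web dashboard (the third goal) should then also be reachable from other machines at http://192.168.0.14:8529/ rather than only via 127.0.0.1.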
Please have a look at the managing endpoints documentation. It explains how to bind an endpoint and how to check whether it worked.
