I'm trying to get a few metrics from a Cassandra node that has a Cassandra Exporter running on it (https://github.com/criteo/cassandra_exporter/). I don't want to go into the details, but using Prometheus is not an option at this time.
I'd like to access the data with HTTP requests or something similar. With a simple HTTP GET I can access all the cached information, but I would like to do more sophisticated operations on it, such as filtering for certain messages. Is there a way to do this? I could not find any information on it. Or do I have to get the entire log and then do the filtering on my local machine?
I'm using the jmx-exporter tag because cassandra-exporter used to be a fork of it and I couldn't find a more fitting tag.
I would suggest using Telegraf + Jolokia.
It is easy to set up and it will expose the metrics via HTTP.
I wrote a post about it (in my case I saved the results into InfluxDB and used them in Grafana); it might be useful:
cassandra-performance-monitoring-by-using-jolokia-agent-telegraf-influxdb-and-grafana
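For reference, a minimal sketch of what the Telegraf side might look like, assuming the Jolokia JVM agent is attached to Cassandra on its default port 8778. The MBean, metric name, and the prometheus_client output (instead of the InfluxDB output used in the post) are illustrative only:

```toml
# Poll Cassandra JMX through the Jolokia agent...
[[inputs.jolokia2_agent]]
  urls = ["http://localhost:8778/jolokia"]

  [[inputs.jolokia2_agent.metric]]
    name  = "cassandra_client_request_read_latency"
    mbean = "org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency"

# ...and re-expose everything Telegraf collects over HTTP at :9273/metrics
[[outputs.prometheus_client]]
  listen = ":9273"
```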
Using Prometheus exporters without the Prometheus server itself is a perfectly valid approach if you don't care about historical data and just want an immediate snapshot of the metrics (the state of the system), or want to record a short period manually.
One of the tools you might look at is the Metricat application (https://metricat.dev/); it lets you filter by metrics and record how they change over the period you are interested in.
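If you just want to poll the exporter yourself and filter on your side, a minimal sketch along these lines may be enough. The host, port, and filter expression are assumptions; adjust them to your exporter configuration:

```python
# Minimal sketch: scrape the exporter's /metrics endpoint directly and keep
# only the lines you care about.
import re
import urllib.request

EXPORTER_URL = "http://cassandra-node:8080/metrics"   # adjust to your exporter
FILTER = re.compile(r"read.?latency", re.IGNORECASE)  # example filter

with urllib.request.urlopen(EXPORTER_URL) as resp:
    text = resp.read().decode("utf-8")

for line in text.splitlines():
    # Skip the "# HELP" / "# TYPE" comment lines and keep matching samples only
    if not line.startswith("#") and FILTER.search(line):
        print(line)
```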
Hello, I have Filebeat collecting logs and it is connected to Logstash.
My idea is to show the logs from Logstash in Grafana.
Is there any option to send Logstash logs directly to Prometheus or Grafana?
In my solution I don't want to use Elasticsearch. I found a Logstash exporter, but that is for the status of Logstash, not for the logs.
Grafana is a visualization tool that reads data from a data source; you will need to store your logs in one of the supported data sources. Prometheus and Elasticsearch are just two of them.
To send your logs from Logstash to Prometheus you would need an output plugin, but there isn't an official one for it. A third-party plugin seems to exist, but it is currently in beta and may not yet have all the features you want.
Grafana, by itself, doesn't store any data (besides users, dashboards, etc.). Storing raw logs in Prometheus is not recommended: Prometheus doesn't handle high-cardinality labels well, and each different log line would generate a new value for the label. And this is assuming you transform your log line into a set of labels and send that to Prometheus (again, don't do this).
That being said, you might want to give Loki a try. This is a newish system that is (as described by its authors) "like Prometheus, but for your logs". It even supports a query language, LogQL, which is a subset of PromQL, and you can extract metrics from the logs while still storing the log line. Ingestion is usually done through Promtail, but Loki also has an HTTP endpoint that can be used to push logs.
The data model of Loki is quite similar: a set of labels, a timestamp, and a log line. Grafana ships with out-of-the-box support for Loki, and it is improving with each release.
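For illustration, a rough sketch of pushing a log line straight to Loki's HTTP push endpoint. The Loki URL and the label set are assumptions; in practice Promtail (or a Logstash output plugin for Loki) would normally do this for you:

```python
# Minimal sketch: push one log line to Loki over HTTP.
import json
import time
import urllib.request

LOKI_URL = "http://loki:3100/loki/api/v1/push"

payload = {
    "streams": [
        {
            "stream": {"job": "logstash", "host": "app-01"},  # label set
            "values": [
                # [timestamp in nanoseconds as a string, log line]
                [str(time.time_ns()), "something interesting happened"]
            ],
        }
    ]
}

req = urllib.request.Request(
    LOKI_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # Loki answers 204 No Content on success
```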
I'd like to build the following system: once an event occurs in Cloud Foundry, it is loaded into Elasticsearch. Using Logstash would be fine, but I explored its input plugins and couldn't find anything I could use. What is the best solution for this scenario? At the moment I can only think of writing a script that continuously pulls the data using the CF API and loads it into Elasticsearch. Is there a better way of doing it?
I can think of two solutions:

1. Create a "drain" (e.g., via the drain CLI) for the app you would like to see events for and drain it to your ELK deployment (see the sketch after this list). This should forward each event (formatted as RFC 5424 syslog) to Elasticsearch.

2. If you are using the Loggregator Firehose to write data into Elasticsearch (e.g., via firehose-to-syslog), then you will get the events (as log messages). This has the downside that everything ends up in your ELK deployment.
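As an illustration of the first option, setting up a syslog drain as a user-provided service might look roughly like this; the hostname, port, and names are placeholders for your own ELK/Logstash syslog input:

```
# Create a user-provided service that drains the app's logs to an external
# syslog endpoint, bind it to the app, and restage so the binding takes effect.
cf create-user-provided-service my-elk-drain -l syslog://logstash.example.com:5000
cf bind-service my-app my-elk-drain
cf restage my-app
```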
I'm trying to debug why the remote caching doesn't work for my use case.
I wanted to inspect the cache entries related to bazel, but realized that I don't really know and can't find what map names are used.
I found one "hazelcast-build-cache" - this seems to keep some of the build and test actions. I've set up a listener to see what gets put there, but I can't see any of the success actions.
For example, I run a test and want to verify that the success of this test gets cached remotely, but I have no idea how to do this. I would either like to know how to find that out, or which map names I can inspect in Hazelcast to find it out.
Hazelcast Management Center can show you all the maps/caches that you create or that get created in the cluster, how the data is distributed, etc. You can also make use of the various types of listeners within Hazelcast: EntryListener, MapListener, etc.
Take a look at the documentation:
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#management-center
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#distributed-events
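As a sketch of the listener approach, here is what watching that map could look like with the Hazelcast Python client (a Java EntryListener would be analogous). The cluster address is an assumption, and "hazelcast-build-cache" is the map name you already found:

```python
import hazelcast

# Connect to the cluster Bazel's remote cache points at and watch the map.
client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])
build_cache = client.get_map("hazelcast-build-cache").blocking()

def on_added(event):
    # Keys are the cache digests written by Bazel; values are binary blobs
    print("cache entry added:", event.key)

build_cache.add_entry_listener(include_value=False, added_func=on_added)
print("current entry count:", build_cache.size())

input("Listening for cache writes, press Enter to stop...\n")
client.shutdown()
```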
I've built some custom middleware on Node.js for a client which runs great in user space, but I want to make it a service.
I've accomplished this using node-windows, which works great, but the client has occasional large bursts of data so I'd like to allocate a little more memory using the --max-old-space-size command line parameter. Unfortunately, I don't see how to configure that in my service set-up wrapper for node-windows.
Any suggestions?
FWIW, I'm also thinking about changing how I parse the data, e.g. treating it more as a stream, but since this is the first time I've used Node and the project is going live in a couple of days, I'm hoping to find a quick and dirty option that'll get us to an up-and-running status easily, to be adjusted later.
Thanks!
Use node-windows v0.1.14 or higher. The ability to add flags was merged in this version. The more appropriate issue related to this is https://github.com/coreybutler/node-windows/issues/159.
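A sketch of what the service wrapper might look like with that option; the paths, service name, and heap size are examples only:

```js
// node-windows >= 0.1.14: pass V8 flags to the service via nodeOptions.
var Service = require('node-windows').Service;

var svc = new Service({
  name: 'My Middleware',
  description: 'Custom Node.js middleware running as a Windows service.',
  script: 'C:\\middleware\\server.js',
  nodeOptions: [
    '--max-old-space-size=4096'   // extra heap for the large bursts of data
  ]
});

svc.on('install', function () {
  svc.start();
});

svc.install();
```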
I want to store the logs of applications like uWSGI ("/var/log/uwsgi/uwsgi.log") on a device that can be accessed from multiple instances, so that each instance saves its logs to that device under its own instance-name directory.
So does AWS provide any solution to do that?
There are a number of approaches you can take here. If you want an experience that is like writing directly to the filesystem, then you could look at using something like s3fs to mount a common S3 bucket on each of your instances. This would give you a more or less real-time log merge, though honestly I would be concerned about the performance of such a setup in a high-volume application.
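For example, a rough sketch of the s3fs approach; the bucket name, mount point, and credential handling are assumptions (here an instance IAM role is used):

```
# Mount a shared S3 bucket on every instance and give each instance its own directory.
s3fs my-log-bucket /mnt/shared-logs -o iam_role=auto -o allow_other
mkdir -p /mnt/shared-logs/$(hostname)
# then point uWSGI's log file (or a periodic copy job) at /mnt/shared-logs/$(hostname)/uwsgi.log
```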
You could process the logs at some regular interval to push the data to some common store. This would not be real time, but would likely be a pretty simple solution. The problem here is that it may be difficult to interleave your log entries from different servers if you need to have them arranged in time order.
Personally, I set up a Graylog server for each instance cluster I have, to which I log all my access logs, error logs, etc. It is UDP based, so it is fire and forget from the application servers' standpoint. It provides nice search/querying tools as well. Personally I like this approach as it removes log management from the application servers altogether.
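As an illustration, fire-and-forget GELF logging to Graylog from Python might look like this with the graypy package. The host and port are assumptions, and older graypy releases name the handler GELFHandler rather than GELFUDPHandler:

```python
# Minimal sketch: send application logs to Graylog over UDP (GELF).
import logging
import graypy

logger = logging.getLogger("uwsgi-app")
logger.setLevel(logging.INFO)
logger.addHandler(graypy.GELFUDPHandler("graylog.example.com", 12201))

logger.info("request handled", extra={"instance": "web-01"})
```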
Two options that I've used:
Use syslog (or Syslog-NG) to log to a centralized location. We do this to ship our AWS log data offsite to our datacenter. Syslog-NG is more reliable than plain ole' Syslog and allows us to use MongoDB as a backing store.
Use logrotate to push your logs to S3 (a sketch follows below). It's not real-time like the syslog solution, but it's a lot easier to set up and manage, especially if you have a lot of instances and aren't using a VPC.
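A rough sketch of the logrotate option; the bucket name, paths, and schedule are assumptions, and your rotated-file naming may differ:

```
# Rotate the uWSGI log daily and push the rotated file to a per-instance prefix in S3.
/var/log/uwsgi/uwsgi.log {
    daily
    rotate 7
    missingok
    dateext
    postrotate
        aws s3 cp /var/log/uwsgi/uwsgi.log-$(date +%Y%m%d) s3://my-log-bucket/$(hostname)/ || true
    endscript
}
```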
Loggly and Splunk Storm are also two interesting SaaS products intended to solve this problem.