Any UI log monitoring tool for cassandra system.log? - cassandra

I am just looking for a UI-based tool to monitor the Cassandra system.log so that we can analyze and extract errors efficiently. If there is one, please let me know the steps to configure it.

People usually use the ELK stack - Elasticsearch, Logstash, Kibana - where Kibana is the UI.
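If you go that route, the usual pattern is to ship system.log with Filebeat (or Logstash directly) and then search it in Kibana. Here is a minimal sketch of a Filebeat configuration, assuming a default package install path and a local Elasticsearch (both assumptions; adjust to your setup):

```yaml
# filebeat.yml (sketch) - tail Cassandra's system.log and index it into Elasticsearch
filebeat.inputs:
  - type: log
    paths:
      - /var/log/cassandra/system.log        # assumed default install path
    # Join Java stack traces onto the preceding log line:
    # any line that does not start with a log level is appended to the previous event.
    multiline.pattern: '^(INFO|WARN|ERROR|DEBUG|TRACE)'
    multiline.negate: true
    multiline.match: after

output.elasticsearch:
  hosts: ["localhost:9200"]                  # assumed local Elasticsearch
```

In Kibana you can then filter on ERROR/WARN lines to pull out problems quickly.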

Related

How to monitor Spark with Kibana

I want to have a view of Spark in Kibana, such as decommissioned nodes per application, shuffle read and write per application, and more.
I know I can get all this information about these metrics here.
But I don't know how to send them to Elasticsearch, or how to do it the correct way. I know I can do it with Prometheus, but I don't think that helps me.
Is there a way of doing so?

Exporting metrics from Prometheus to Elasticsearch for better monitoring capabilities

I want to use Prometheus to monitor Spark (using the Spark driver API), but I also want to use Kibana for better investigation capabilities.
So I want to export those metrics from Prometheus to Elasticsearch as well, as records to show in Kibana.
Is it somehow possible?
You can check this blog, where they have shown various ways to export Prometheus metrics to Elasticsearch.
You can use Metricbeat as well to get data from Prometheus, as it provides a module for this (a rough sketch follows below).
Also, if you are using the latest version of Elasticsearch, you can explore Elastic Agent and Fleet as well, which have an integration for Prometheus.
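As a rough sketch of the Metricbeat option (the Prometheus address and the metric filter are assumptions), the Prometheus module can scrape the server's /federate endpoint and index the series into Elasticsearch:

```yaml
# metricbeat.yml (sketch) - pull series out of a Prometheus server via its federation endpoint
metricbeat.modules:
  - module: prometheus
    metricsets: ["collector"]
    period: 10s
    hosts: ["prometheus-server:9090"]        # assumed Prometheus address
    metrics_path: '/federate'
    query:
      'match[]': '{__name__=~"spark_.*"}'    # assumed filter: only forward Spark-related series

output.elasticsearch:
  hosts: ["localhost:9200"]                  # assumed Elasticsearch endpoint
```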

Configure Elasticsearch APM with Apache Storm

I am trying to get real-time data for topologies running in Apache Storm using Elastic APM.
During topology submission, I am passing the required arguments such as -Delastic.apm.service_name, -Delastic.apm.server_url, -Delastic.apm.application_packages, and the -javaagent.
The topologies are running and I can see the required parameters in the topology process. The topologies are processing data, but no data is arriving in APM.
Can someone help me with this? Am I missing some arguments, is Storm not supported, or is it something else?
Has anyone configured APM on Apache Storm or Spark?
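For reference, one thing worth double-checking in a setup like this is that the agent options reach the worker JVMs, not only the JVM that submits the topology. A hedged sketch using Storm's topology.worker.childopts setting (the agent path, service name, and server URL below are placeholders, not values from the question):

```bash
# Sketch: attach the Elastic APM Java agent to the worker JVMs via topology.worker.childopts.
# All paths and values are placeholders; the same key can also be set in storm.yaml.
storm jar my-topology.jar com.example.MyTopology \
  -c topology.worker.childopts="-javaagent:/opt/elastic-apm-agent.jar -Delastic.apm.service_name=my-storm-topology -Delastic.apm.server_url=http://apm-server:8200 -Delastic.apm.application_packages=com.example"
```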

ELK Apache Spark application log

How do I configure Filebeat to read Apache Spark application logs? The generated logs are moved to the history server in a non-readable format as soon as the application completes. What is the ideal way here?
You can configure Spark logging via Log4J. For a discussion around some edge cases for setting up log4j configuration, see SPARK-16784, but if you simply want to collect all application logs coming off a cluster (vs logs per job) you shouldn't need to consider any of that.
On the ELK side, there was a log4j input plugin for Logstash, but it is deprecated.
Thankfully, the documentation for the deprecated plugin describes how to configure log4j to write data locally for Filebeat, and how to set up Filebeat to consume this data and send it to a Logstash instance. This is now the recommended way to ship logs from systems using log4j.
So in summary, the recommended way to get logs from Spark into ELK is (a minimal sketch follows below):
Set the Log4J configuration for your Spark cluster to write to local files
Run Filebeat to consume from these files and send the data to Logstash
Logstash will send the data into Elasticsearch
You can then search through your indexed log data using Kibana
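A minimal sketch of those steps, assuming the cluster writes its logs to /var/log/spark and Logstash listens on the default Beats port 5044 (both assumptions):

```properties
# log4j.properties (sketch) - have Spark write to a local rolling file
# (keep Spark's default console appender alongside this if you still want stdout logs)
log4j.rootCategory=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/spark/spark.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

```yaml
# filebeat.yml (sketch) - ship the local Spark log files to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/spark/*.log

output.logstash:
  hosts: ["logstash-host:5044"]    # assumed Logstash address and Beats port
```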

How to aggregate logs for all Docker containers in Mesos

I have multiple microservices written in Node; the microservices are installed into Docker containers, and we are using Mesos+Marathon for clustering.
How can I aggregate the logs of all the containers (microservices) on different instances?
We're using Docker+Mesos as well and are shipping all logs to a log analytics service (it's the service the company I work for offers, http://logz.io). There are a couple of ways to achieve that:
Have a log shipper agent within each Docker container - an agent like rsyslog, nxlog, logstash, or logstash-forwarder - that ships the data to a central logging solution
Create a Docker container that runs the shipper agent (like rsyslog, nxlog, logstash, or logstash-forwarder) and reads the logs from all the Docker containers on each machine, shipping them to a central location - this is the path we're taking (a rough sketch follows below)
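As a rough illustration of that second option (the image name and mount paths are assumptions, not a specific product), one shipper container per Mesos agent can read every container's JSON log files from the Docker host and forward them:

```bash
# Sketch: run a single log-shipper container per host (Marathon can pin one instance per agent)
# and mount the Docker log directory read-only so it can tail every container's logs.
docker run -d \
  --name log-shipper \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  some-shipper-image:latest    # placeholder: an image running rsyslog/logstash/filebeat pointed at your central endpoint
```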
This is a broad question, but I suggest you set up an Elasticsearch, Logstash, Kibana (ELK) stack:
https://www.elastic.co/products/elasticsearch
https://www.elastic.co/products/logstash
https://www.elastic.co/products/kibana
Then on each one of your containers you can run the Logstash forwarder/shipper to send logs to your Logstash frontend.
Logs get stored in Elasticsearch, and then you can search for them using Kibana or the Elasticsearch API.
Hope it helps.
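On the Logstash side, a minimal pipeline that receives events from Beats-style shippers and writes them into Elasticsearch might look like this (the host and index name are assumptions; the older logstash-forwarder would use the lumberjack input instead of beats):

```
# logstash.conf (sketch) - receive events from shippers and index them
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]          # assumed Elasticsearch endpoint
    index => "docker-logs-%{+YYYY.MM.dd}"    # assumed daily index naming
  }
}
```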
I am also doing some Docker + Mesos + Marathon work, so I guess I am going to run into the same question you have.
I don't know if there's a native solution yet, but there's a blog post by the folks at elastic.io on how they went about solving this issue.
Here's the link - Log aggregation for Docker containers in Mesos / Marathon cluster
