Spark metrics - Disable all metrics

I'm building a monitoring system for our Spark applications and I send the metrics through Spark's Graphite sink. I want the ability to stop all metrics dynamically, which means I need to set this programmatically, with something like sc.set.
How can I disable all metrics in the Spark configuration? I couldn't find anything like a spark.metrics.enable property.

I couldn't find a way to disable them either. What I do instead is configure the sink only when I want to monitor a given application, by setting it on the SparkConf before the context is created:
conf.set("spark.metrics.conf.*.sink.graphite.class", "org.apache.spark.metrics.sink.GraphiteSink")
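
For completeness, a minimal sketch of what that per-application setup could look like in Scala; the app name, host, port, and period values below are placeholders, not taken from this thread:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("monitored-job")  // hypothetical app name
      // Add the sink only when monitoring is wanted; leaving these keys out means nothing gets reported.
      .set("spark.metrics.conf.*.sink.graphite.class", "org.apache.spark.metrics.sink.GraphiteSink")
      .set("spark.metrics.conf.*.sink.graphite.host", "graphite.example.com")  // placeholder host
      .set("spark.metrics.conf.*.sink.graphite.port", "2003")
      .set("spark.metrics.conf.*.sink.graphite.period", "10")
      .set("spark.metrics.conf.*.sink.graphite.unit", "seconds")

    val sc = new SparkContext(conf)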

Related

Monitoring Spark with the ability to get specific information on every execution

Spark exposes many metrics for monitoring the work of the driver and the executors.
Let's say I use Prometheus. Can the metrics be used to see information about a specific Spark run, for example to investigate the memory usage of one particular execution rather than in general? I don't just want big-picture graphs in Grafana (as an example), and I don't see how to do this with Prometheus or Graphite.
Is there a tool better suited to what I need?
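
One detail worth noting (it is not stated in the thread, but it is documented Spark behaviour): by default the metric names are prefixed with the application ID, so a single run can be isolated by filtering on that prefix, and spark.metrics.namespace can change the prefix if you prefer grouping by application name. A rough sketch, assuming Spark 2.1+:

    import org.apache.spark.SparkConf

    // By default metrics arrive as <app-id>.<instance>.<source>.<metric>, e.g.
    //   app-20201012123456-0001.driver.BlockManager.memory.memUsed_MB
    // so filtering on the app id in Graphite/Prometheus isolates one run.
    // Setting spark.metrics.namespace replaces that prefix (here: group by app name instead).
    val conf = new SparkConf()
      .setAppName("my-job")                                 // hypothetical app name
      .set("spark.metrics.namespace", "${spark.app.name}")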

How to monitor Spark with Kibana

I want to have a view of Spark in Kibana, such as decommissioned nodes per application, shuffle read and write per application, and more.
I know I can get all of this information from the metrics described here.
But I don't know how to send them to Elasticsearch, or what the correct way to do it is. I know I can do it with Prometheus, but I don't think that helps me.
Is there a way of doing so?
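
The thread has no answer; as a hedged sketch of one possible starting point (assuming plain Apache Spark rather than a managed service), the built-in CsvSink can dump every metric to files in a directory, and those files can then be indexed into Elasticsearch with whatever shipper you already run:

    import org.apache.spark.SparkConf

    // Sketch only: the directory and period are placeholders.
    val conf = new SparkConf()
      .set("spark.metrics.conf.*.sink.csv.class", "org.apache.spark.metrics.sink.CsvSink")
      .set("spark.metrics.conf.*.sink.csv.directory", "/tmp/spark-metrics")  // placeholder path
      .set("spark.metrics.conf.*.sink.csv.period", "10")
      .set("spark.metrics.conf.*.sink.csv.unit", "seconds")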

How to estimate the Graphite (Whisper) database size when using it to monitor Apache Spark

I'm going to set up monitoring of a Spark application via $SPARK_HOME/conf/metrics.properties.
I've decided to use Graphite.
Is there any way to estimate the database size of Graphite, especially when monitoring a Spark application?
Regardless of what you are monitoring, Graphite has its own configuration for retention and rollup of metrics. It stores one file (called a Whisper file) per metric, and you can use this calculator to estimate how much disk space they will take: https://m30m.github.io/whisper-calculator/
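
A rough back-of-the-envelope version of the same estimate, assuming standard Whisper files (about 12 bytes per archived datapoint, i.e. a 4-byte timestamp plus an 8-byte double, plus a small header); the retention scheme and metric counts below are placeholders:

    // Rough Whisper sizing sketch; all inputs are example values.
    val bytesPerPoint = 12L
    val pointsPerMetric =
      (60L * 60 * 24 / 10) +       // 10-second resolution kept for 1 day
      (60L * 60 * 24 * 30 / 60)    // 1-minute resolution kept for 30 days
    val metricsPerExecutor = 100L  // depends on which sources/sinks are enabled
    val executors = 20L

    val approxBytes = bytesPerPoint * pointsPerMetric * metricsPerExecutor * (executors + 1) // +1 for the driver
    println(f"approx. ${approxBytes / 1e9}%.2f GB")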

Bluemix Apache Spark Metrics

I have been looking for a way to monitor performance in Spark on Bluemix. I know that the Apache Spark project provides a metrics service based on the Coda Hale Metrics Library, which allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV files. Details here: http://spark.apache.org/docs/latest/monitoring.html
Does anyone know of a way to do this in the Bluemix Spark service? Ideally, I would like to save the metrics to a CSV file in Object Storage.
Appreciate the help.
Thanks
Saul
Currently, I do not see an option for using the Coda Hale Metrics Library, reporting the job history, or accessing the information via a REST API.
However, on the main page of the Spark history server you can see the event log directory. It refers to the following user directory: file:/gpfs/fs01/user/USER_ID/events/
There I saw JSON(-like) formatted files.
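
Since the answer stops at "JSON-like files": a standard Spark event log contains one JSON event per line, so it can be parsed with Spark itself to pull out task-level metrics. A minimal sketch, assuming a Spark 2.x+ SparkSession and using the placeholder directory from the answer (the field names come from the standard event log format, not from anything Bluemix-specific):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("event-log-reader").getOrCreate()
    import spark.implicits._

    // One JSON event per line; field names contain spaces, hence the backticks.
    val events = spark.read.json("file:/gpfs/fs01/user/USER_ID/events/")

    events
      .filter($"Event" === "SparkListenerTaskEnd")
      .select($"`Stage ID`", $"`Task Info`.`Task ID`", $"`Task Metrics`.`Executor Run Time`")
      .show()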

Custom metrics in the Apache Spark UI

I'm using Apache Spark, and the metrics UI (on port 4040) is very useful.
I wonder if it's possible to add custom metrics to this UI: custom task metrics, but maybe custom RDD metrics too (like execution time for just one RDD transformation).
It would be nice to have custom metrics grouped by streaming batch, job, and task.
I have seen the TaskMetrics object, but it is marked as a developer API, and it looks useful only for input and output sources and does not support custom values.
Is there a Spark way to do that? Or an alternative?
You could use the shared variables support [1] built into Spark. I have often used them for implementing something like that.
[1] http://spark.apache.org/docs/latest/programming-guide.html#shared-variables
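
A hedged illustration of that suggestion: named accumulators (one of the shared-variable types) are displayed in the Spark UI on the stage detail page, so they can serve as simple custom counters. The metric name and the "slow record" logic below are made up for the example, and the API shown assumes Spark 2.x:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("custom-metrics-sketch"))

    // A *named* accumulator shows up in the UI for every stage that updates it.
    val slowRecords = sc.longAccumulator("slowRecords")   // hypothetical metric name

    val data = sc.parallelize(1 to 1000000)
    val processed = data.map { x =>
      val start = System.nanoTime()
      val result = x * 2                                           // stand-in for real work
      if (System.nanoTime() - start > 1000000L) slowRecords.add(1) // flag records slower than 1 ms
      result
    }
    processed.count()                                     // the action triggers the stage

    println(s"slow records: ${slowRecords.value}")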
