I am using the Spark operator to run Spark on Kubernetes. (https://github.com/GoogleCloudPlatform/spark-on-k8s-operator)
I am trying to run a Java agent in Spark driver and executor pods and send the metrics through a Kubernetes service to Prometheus operator.
I am using this example
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/examples/spark-pi-prometheus.yaml
The Java agent exposes the metrics on port 8090 for a short time (I can validate that with port-forwarding: kubectl port-forward <spark-driver-pod-name> 8090:8090), and the service also exposes the metrics for a few minutes (I can validate that with kubectl port-forward svc/<spark-service-name> 8090:8090).
Prometheus is able to register these pods' URLs as targets, but by the time it tries to scrape the metrics (every 30 seconds), the pod's URL is already down.
How can I keep the JMX exporter Java agent running until the driver and executors have completed the job? Could anyone who has come across this scenario before please guide or help me here?
Either Prometheus needs to scrape the metrics every 5 seconds or so (and even then the metrics may not be accurate), or you need to use Pushgateway, as described in this blog (https://banzaicloud.com/blog/spark-monitoring/), to push the metrics to Prometheus.
Pushing the metrics (via Pushgateway) is the best practice for batch jobs.
Having Prometheus pull the metrics is the better approach for long-running services (e.g. REST services).
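For reference, the Pushgateway approach in that blog is driven entirely from the Spark metrics configuration. Below is a minimal sketch of a metrics.properties for it, assuming the Banzai Cloud spark-metrics sink jar is on the driver and executor classpath; the Pushgateway address is a placeholder, and the property names follow that project's README, so verify them against the version you actually use:
# Route all Spark metric sources to the Banzai Cloud Prometheus sink (assumed to be on the classpath).
*.sink.prometheus.class=com.banzaicloud.spark.metrics.sink.PrometheusSink
# Placeholder Pushgateway endpoint; replace with your own service address.
*.sink.prometheus.pushgateway-address-protocol=http
*.sink.prometheus.pushgateway-address=pushgateway.monitoring.svc:9091
# How often metrics are pushed, in seconds.
*.sink.prometheus.period=10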
Related
I have a Spark application running on Kubernetes with Spark Operator v1beta2-1.1.2-2.4.5 and Spark 2.4.5.
I have application-level metrics that I would like to expose through the jmx-exporter port so that a Prometheus PodMonitor can scrape them.
I am able to get driver system level metrics via the exposed port.
I am using Groupon's open-source spark-metrics to publish metrics to the JMX sink. Even after pointing spark.metrics.conf at the file below, as well as shipping it to each of the executors (via sparkContext.addFile(<path to metrics.properties>)), I am still not able to see these metrics reported on my jmx-exporter port (8090/metrics). I am able to see them from the Spark UI endpoint (4040/metrics/json).
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
I would appreciate any pointers on where I should look.
Below is the monitoring-related part of the deployment config:
monitoring:
  exposeDriverMetrics: true
  exposeExecutorMetrics: true
  metricsPropertiesFile: /opt/spark/metrics.properties
  prometheus:
    jmxExporterJar: "/prometheus/jmx_prometheus_javaagent.jar"
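One thing worth checking in this kind of setup (an assumption on my part, not something confirmed in the post above) is the configuration file the jmx_prometheus_javaagent is started with: if its rules only map Spark's built-in MBeans, custom JmxSink metrics will not show up on 8090/metrics even though they appear under 4040/metrics/json. A deliberately permissive config like the sketch below exposes every MBean, which at least tells you whether the spark-metrics beans are registered in the driver JVM at all:
# Catch-all jmx_exporter config, for debugging only: exposes every MBean the JVM registers.
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  - pattern: ".*"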
Okay. Where to start? I am deploying a set of Spark applications to a Kubernetes cluster. I have one Spark Master, 2 Spark Workers, MariaDB, a Hive Metastore (that uses MariaDB - and it's not a full Hive install - it's just the Metastore), and a Spark Thrift Server (that talks to Hive Metastore and implements the Hive API).
So this setup is working pretty well for everything except the Thrift Server job (start-thriftserver.sh in the Spark sbin directory on the thrift server pod). By "working well" I mean that from outside my cluster I can create Spark jobs and submit them to the master, and then, using the Web UI, I can see my test app run to completion utilizing both workers.
Now the problem. When you launch start-thriftserver.sh it submits a job to the cluster with itself as the driver (I believe, which is correct behavior). And when I look at the related Spark job via the Web UI I see it has workers, and they repeatedly get hatched and then exit shortly thereafter. When I look at the workers' stderr logs I see that every worker launches and tries to connect back to the thrift server pod at spark.driver.port, which I also believe is correct behavior. The gotcha is that the connection fails with an unknown host exception: it uses the raw Kubernetes pod name of the thrift server pod (not a service name, and with no IP in the name) and says it can't find the thrift server that initiated the connection. Kubernetes DNS registers service names, and pod names only when prefaced with their private IP; the raw name of a pod (without an IP) is never registered with DNS. That is not how Kubernetes works.
So my question: I am struggling to figure out why the Spark worker pod is using a raw pod name to try to find the thrift server. It seems it should never do this, and that it should be impossible to ever satisfy that request. I have wondered if there is some Spark config setting that would tell the workers that the (thrift) driver they need to be searching for is actually spark-thriftserver.my-namespace.svc, but despite much searching I can't find anything.
There are so many settings that go into a cluster like this that I don't want to barrage you with info. One thing that might clarify my setup: the following string is dumped at the top of a worker log that fails. Notice the raw pod name of the thrift server in the --driver-url. If anyone has any clue what steps to take to fix this, please let me know. I'll edit this post and share settings etc. as people request them. Thanks for helping.
Spark Executor Command: "/usr/lib/jvm/java-1.8-openjdk/jre/bin/java" "-cp" "/spark/conf/:/spark/jars/*" "-Xmx512M" "-Dspark.master.port=7077" "-Dspark.history.ui.port=18081" "-Dspark.ui.port=4040" "-Dspark.driver.port=41617" "-Dspark.blockManager.port=41618" "-Dspark.master.rest.port=6066" "-Dspark.master.ui.port=8080" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#spark-thriftserver-6bbb54768b-j8hz8:41617" "--executor-id" "12" "--hostname" "172.17.0.6" "--cores" "1" "--app-id" "app-20220408001035-0000" "--worker-url" "spark://Worker#172.17.0.6:37369"
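For reference, the driver address that executors dial back to is controlled by Spark's spark.driver.host setting (spark.driver.bindAddress separately controls the address the driver binds locally). Below is a hedged sketch of the kind of override the question is asking about; the service name comes from the question itself, but whether this resolves this particular setup is an assumption, not something the post confirms:
# Sketch: start the Thrift Server advertising the Kubernetes service name to executors.
# spark-thriftserver.my-namespace.svc is the service name mentioned in the question;
# the bind address override keeps the driver listening on the pod's own interfaces.
./sbin/start-thriftserver.sh \
  --conf spark.driver.host=spark-thriftserver.my-namespace.svc \
  --conf spark.driver.bindAddress=0.0.0.0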
I am new to Cassandra and trying to set up a monitoring tool for a Cassandra production cluster. I have set up Graphite-Grafana on one of the Cassandra nodes and I'm able to get metrics of that particular Cassandra node in Grafana, but now I want to fetch metrics from all the Cassandra nodes and display them in Grafana.
Can anyone point me to the structure I should follow, or how to set up Graphite-Grafana for monitoring multiple nodes in production? What changes need to be made to the configuration files, etc.?
I think it is better for Graphite-Grafana to be on a separate machine or cluster.
You could send metrics from all your Cassandra nodes to that machine/cluster, and make sure the Cassandra node is identified in the metric key (for example, use the key cassandra.nodes.machine01.blahblahblah for a metric from machine01).
After that, you could use the Graphite API to fetch the metrics of all your Cassandra nodes from that Graphite machine/cluster.
I got my answer after trial and error. For example, I edited metrics_reporter_graphite.yaml like below:
graphite:
  -
    period: 30
    timeunit: 'SECONDS'
    prefix: 'cassandra-clustername-node1'
    hosts:
      - host: 'localhost'
        port: 2003
    predicate:
      color: 'white'
      useQualifiedName: true
      patterns:
        - '^org.apache.cassandra.+'
        - '^jvm.+'
Replace localhost with your Graphite-Grafana server/VM IP address.
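If metrics still don't show up after that change, it can be worth confirming (a generic connectivity check, not something from the original answer) that each Cassandra node can actually reach the Carbon plaintext receiver on port 2003:
# From a Cassandra node: verify the Graphite/Carbon plaintext port is reachable.
nc -zv <graphite-server-ip> 2003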
We are experiencing unexplained behaviour with our new EMR setup, which includes:
EMR 5.16 (3 nodes - c4.8xlarge and 1 master - c4.8xlarge)
Kafka Cluster based on ECS
We are running a simple streaming job that reads from a Kafka topic, applies some logic, and uses writeStream to write back to a Kafka topic (using an HDFS path as the checkpointLocation).
The "problem" is that in Ganglia I can see increasing network traffic that came out from the driver (that runs on one of the slaves) to the Master server.
I can see from a simple pcap file that the traffic goes to port 50010 (Hadoop data transfer), and here I'm at a dead end.
Some help needed, thanks!
After some investigation and a look at the payload of the traffic, it turned out to be the logs being sent to the master! They were delivered to the Spark history server and stored in HDFS.
I just needed to add this config to my spark-submit: --conf spark.eventLog.enabled=false
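For completeness, a sketch of where the flag goes on the command line; the class and jar names below are placeholders, not from the original post:
# Disable Spark event logging so nothing is shipped to HDFS for the history server.
# Class and jar names are placeholders.
spark-submit \
  --conf spark.eventLog.enabled=false \
  --class com.example.MyStreamingJob \
  my-streaming-job.jar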
I was trying to get the job count up front, and I tried to get it from JobProgressListener, but I only get stage and task information, not jobs. We know a Spark application generates its jobs "as it goes".
But is there a component or a class that records the job information up front?
It's possible, but I would recommend the Spark REST API.
Steps:
Get the applicationId from the SparkContext.applicationId property.
Query http://context-url:port/api/v1/applications/[app-id]/jobs, where context-url is the address of your Spark driver and port is the Web UI port (normally 4040). Here is the documentation.
Count the jobs that are returned in the response from the REST API, as in the sketch below.
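A quick sketch of that last step from a shell, assuming the driver is reachable on localhost:4040 (for example through kubectl port-forward) and that jq is installed:
# List the jobs for the application and count the entries in the returned JSON array.
curl -s http://localhost:4040/api/v1/applications/<app-id>/jobs | jq 'length'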