Monitor Spark standalone with Prometheus

How can I monitor Spark standalone with Prometheus?
How do I find out the IPs of the drivers, given that the drivers can change?
What is the best practice for doing this?

Related

Can we use the repartitionByCassandraReplica functionality of spark-cassandra-connector in a Kubernetes environment?

I am trying to understand how to use the repartitionByCassandraReplica functionality of spark-cassandra-connector in a Kubernetes environment.
My initial thought is that hosting the executor on the same host on which the Cassandra pod is running will solve my problem. Am I right in my thinking?
Data locality can only be achieved with repartitionByCassandraReplica if both the Spark worker/executor and the Cassandra JVM run in the same OS instance (OSI). This applies to physical servers, VMs, containers, pods, etc.
Unless you have a way of running both the Spark and Cassandra image in the same container/pod, it won't be possible to achieve data locality.
For what it's worth, there's an open spark-cassandra-connector ticket to look into how this can be achieved (SPARKC-655). It's just a stub right now and there has not been any work done on it yet. Cheers!
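For reference, here is a minimal sketch of how repartitionByCassandraReplica is typically invoked; the keyspace, table, and key names are hypothetical, and whether it actually buys you locality depends entirely on the co-location constraint described above.

    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical key type matching the partition key of my_keyspace.users.
    case class UserKey(user_id: Int)

    val sc = new SparkContext(new SparkConf().setAppName("locality-sketch"))
    val keys = sc.parallelize(1 to 100).map(UserKey)

    // Moves each key to a Spark partition on the node holding its Cassandra
    // replica. This only pays off when an executor runs in the same OS
    // instance as the Cassandra JVM, per the answer above.
    val localKeys = keys.repartitionByCassandraReplica("my_keyspace", "users", partitionsPerHost = 10)
    val rows = localKeys.joinWithCassandraTable("my_keyspace", "users")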

Does Spark Streaming need HDFS with Kafka?

I have to design a setup to read incoming data from Twitter (streaming). I have decided to use Apache Kafka with Spark Streaming for real-time processing. The analytics must be shown in a dashboard.
Now, being a newbie in this domain, I assume a maximum data rate of 10 Mb/sec. I have decided to use one machine with 12 cores and 16 GB of memory for Kafka; ZooKeeper will also run on the same machine. I am confused about Spark: it will only have to perform the streaming analysis, after which the analyzed output is pushed to a DB and the dashboard.
My questions:
Should I run Spark on a Hadoop cluster or on the local file system?
Can Spark's standalone mode fulfill my requirements?
Is my approach appropriate, or what would be best in this case?
Attempted answer:
Should I run Spark on a Hadoop cluster or on the local file system?
I recommend HDFS: it can store more data and ensures high availability. It also gives Spark Streaming a fault-tolerant place to checkpoint, as sketched below.
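A minimal sketch of that checkpointing setup (the HDFS URI and app name are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("twitter-analytics")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Checkpointing (needed for stateful operations and driver recovery)
    // wants fault-tolerant storage such as HDFS; a local path would be
    // lost if the node dies.
    ssc.checkpoint("hdfs://namenode:8020/spark/checkpoints")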
Can Spark's standalone mode fulfill my requirements?
Standalone mode is the easiest to set up and will provide almost all the same features as the other cluster managers if you are only running Spark.
YARN, on the other hand, allows you to dynamically share and centrally configure the same pool of cluster resources between all frameworks that run on YARN, does not need to run a separate ZooKeeper Failover Controller, and will likely be preinstalled in many Hadoop distributions, such as CDH.
So I recommend YARN.
Useful links:
spark yarn doc
spark standalone doc
another helpful answer
Is my approach appropriate, or what would be best in this case?
If your data is no more than about 10 million, I think a local cluster can handle it. Local mode avoids shuffling between many nodes, and shuffles between processes are faster than shuffles between nodes. Otherwise I recommend at least 3 nodes, that is, a real Hadoop cluster. The sketch below shows the corresponding master settings.
As a Spark beginner, this is my understanding; I hope more experienced users will correct me.
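To make the local / standalone / YARN choice concrete, here is a sketch of the three master settings (host names are placeholders; the same choice can be made with spark-submit's --master flag):

    import org.apache.spark.SparkConf

    // Pick exactly one master URL for your deployment:
    val localConf      = new SparkConf().setMaster("local[*]")             // all cores of one machine, no cluster
    val standaloneConf = new SparkConf().setMaster("spark://master:7077")  // Spark standalone cluster manager
    val yarnConf       = new SparkConf().setMaster("yarn")                 // resources managed by YARN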

Zeppelin isolated per-user mode

My question is about the isolated per-user mode in Zeppelin.
Given that Zeppelin does not yet support remote deploy mode for the Spark driver (more info in ZEPPELIN-2040),
how do you deal with multiple users, and therefore multiple Spark drivers, each taking 4 GB of memory?
I am looking for a horizontally scalable solution that uses the slaves' memory for the drivers instead of increasing the Spark master instance's memory.

How to use Spark Streaming from another VM with Kafka

I have Spark Streaming on a virtual machine, and I would like to connect it to another VM that runs Kafka. I want Spark to get the data from the Kafka machine.
Is it possible to do that?
Thanks
Yes, it is definitely possible. In fact, this is the reason why we have distributed systems in place :)
When writing your Spark Streaming program with Kafka, you will have to create a Kafka configuration structure (the exact syntax varies with your programming language and client). In that configuration you specify the Kafka brokers' addresses; this would be the IP of your Kafka VM.
You then just need to run the Spark Streaming application on your Spark VM.
It's possible and makes perfect sense to have them on separate VMs. That way there is a clear separation of roles.
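For illustration, a minimal Spark Streaming sketch using the spark-streaming-kafka-0-10 direct stream; the broker address 10.0.0.2:9092 and the topic name tweets are placeholders for your Kafka VM and topic:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    val conf = new SparkConf().setAppName("kafka-from-another-vm")
    val ssc = new StreamingContext(conf, Seconds(5))

    // "10.0.0.2:9092" stands in for the Kafka VM's IP; the two VMs only
    // need network connectivity on the broker port.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "10.0.0.2:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "spark-streaming-example",
      "auto.offset.reset" -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("tweets"), kafkaParams)
    )

    stream.map(_.value).print() // replace with your real processing
    ssc.start()
    ssc.awaitTermination()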

Cassandra production monitoring

I am new to Cassandra and am trying to set up monitoring for a production Cassandra cluster.
Apart from running nodetool commands from crontab, what else is recommended?
Is it general practice to use Ganglia for monitoring?
Can you direct me to a good resource on setting up monitoring in production?
We are using Apache Cassandra, so OpsCenter was not very useful.
The free version of OpsCenter works with OSS Cassandra and most monitoring capabilities are available. You do miss a good amount of cluster management capabilities if you don't have DSE:
http://www.datastax.com/what-we-offer/products-services/datastax-opscenter/compare
