Databricks Streaming and Cluster updates?

Does anyone know if there's any documentation about how Structured Streaming in Databricks interacts with cluster maintenance updates (runtime, OS, etc.)?
I would like to know more about how it works, and whether we (the users) need to handle it manually or whether there is some mechanism that handles it internally.

See recovery from failures in the Structured Streaming documentation. For any infrastructure problem that could occur, the guidance is:
Hence, to make your queries fault tolerant, you must enable query checkpointing and configure Databricks jobs to restart your queries automatically after a failure.
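For concreteness, here is a minimal Structured Streaming sketch with checkpointing enabled; the Kafka source, Delta sink, and paths are illustrative assumptions, not from the docs. With a checkpointLocation set, a Databricks job configured with automatic retries can restart the query and resume where it left off:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("resilient-stream").getOrCreate()

// Hypothetical source: a Kafka topic named "events"
val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")
  .load()

// The checkpointLocation is what makes the query recoverable: offsets and
// state are persisted there, so a restarted job picks up where it stopped.
events.writeStream
  .format("delta")
  .option("checkpointLocation", "/mnt/checkpoints/events")
  .start("/mnt/delta/events")
  .awaitTermination()
```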

Related

Kubernetes vs Spark vs Spark on Kubernetes

So I have a use case where I will stream about 1,000 records per minute from Kafka. I just need to dump these records in raw form into a NoSQL DB, or something like a data lake for that matter.
I tried two approaches.
Approach 1
——————————
Create Kafka consumers in Java and run them as three different containers in Kubernetes. Since all the containers are in the same Kafka consumer group, they all contribute to reading from the same Kafka topic and dumping the data into the data lake. This works pretty quickly for the volume of workload I have.
Approach 2
——————————
I then created a Spark cluster and used the same Java logic to read from Kafka and dump the data into the data lake.
Observations
——————————
The performance of the Kubernetes approach, if not better, was equal to that of a Spark job running in cluster mode.
So my question is: what is the real use case for using Spark over Kubernetes the way I am using it, or even Spark on Kubernetes?
Is Spark only going to rise and shine with much, much heavier workloads, say something on the order of 50,000 records per minute, or in cases where some real-time processing needs to be done on the data before dumping it to the sink?
Spark has more cost associated with it, so I need to make sure I use it only if it would scale better than the Kubernetes solution.
If your use case is only to archive/snapshot/dump records, I would recommend you look into Kafka Connect (see the config sketch below).
If you need to process the records you stream, e.g. aggregate or join streams, then Spark comes into the game. For that case you may also look into Kafka Streams.
Each of these frameworks has its own tradeoffs and performance overheads, but in any case you save a lot of development effort by using tools made for the job rather than developing your own consumers. These frameworks already handle most failure scenarios, scaling, and configurable delivery semantics, and they have enough config options to tune the behaviour for most cases you can imagine. Just choose the available integration and you're good to go! And of course, beware of open source bugs ;).
Hope it helps.
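To illustrate the Kafka Connect route for the archive/dump case: a sink connector is wired up with configuration alone, no consumer code. The sketch below follows the Confluent S3 sink connector's option names; the topic, bucket, region, and flush size are illustrative assumptions:

```properties
# Hypothetical standalone sink config (e.g. connect-s3-sink.properties)
name=raw-records-s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
# Three tasks mirror the three consumers from Approach 1
tasks.max=3
topics=raw-records
s3.bucket.name=my-data-lake
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
# Number of records written per S3 object; tune for your volume
flush.size=1000
```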
Running Kafka inside Kubernetes is only recommended when you have a lot of expertise doing it. Likewise, Kubernetes doesn't know it's hosting Spark, and Spark doesn't know it's running inside Kubernetes, so you will need to double-check every feature you decide to run.
For your workload, I'd recommend sticking with Kubernetes. The elasticity, performance, monitoring tools, and scheduling features, plus the huge community support, add up well in the long run.
Spark is an open source, scalable, massively parallel, in-memory execution engine for analytics applications, so it will really spark when your workload becomes more processing-intensive. It simply doesn't have much room to rise and shine if you are only dumping data, so keep it simple.

Spark Streaming + Kafka vs just Kafka

Why and when one would choose to use Spark streaming with Kafka?
Suppose I have a system getting a thousand messages per second through Kafka. I need to apply some real-time analytics on these messages and store the results in a DB.
I have two options:
Create my own worker that reads messages from Kafka, runs the analytics algorithm, and stores the results in the DB. In a Docker era it is easy to scale this worker through my entire cluster with just a scale command. I just need to make sure I have an equal or greater number of partitions than workers, and all is good: I have true concurrency.
Create a Spark cluster with Kafka streaming input. Let the Spark cluster do the analytics computations and then store the results.
Is there any case where the second option is a better choice? It sounds to me like it is just extra overhead.
In a Docker era it is easy to scale this worker through my entire cluster
If you already have that infrastructure available, then great, use that. Bundle your Kafka libraries in some minimal container with health checks and whatnot, and for the most part that works fine. Adding a Kafka client dependency plus a database dependency is all you really need, right?
If you're not using Spark, Flink, etc., you will need to handle Kafka errors, retries, and offset/commit handling in your own code rather than letting the framework handle those for you.
I'll add here that if you want Kafka + database interactions, check out the Kafka Connect API. There are existing connectors for JDBC, Mongo, Couchbase, Cassandra, etc. already.
If you need more complex processing, I'd go for Kafka Streams rather than needing to separately maintain a Spark cluster, and so that's "just Kafka".
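As a sketch of what that "just Kafka" option can look like, here is a minimal Kafka Streams application (Scala DSL, Kafka 2.4+); the topic names and the count aggregation are stand-ins for the actual analytics, which the question doesn't specify:

```scala
import java.util.Properties
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

object AnalyticsApp extends App {
  val props = new Properties()
  // application.id doubles as the consumer group and state-store prefix
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "message-analytics")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val builder = new StreamsBuilder
  builder
    .stream[String, String]("messages")   // hypothetical input topic
    .groupByKey
    .count()                              // stand-in for "the analytics algorithm"
    .toStream
    .to("message-counts")                 // results topic; a Connect sink could push this to a DB

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.ShutdownHookThread { streams.close() }
}
```

Scaling works much like the Docker worker: start more instances with the same application.id, and the topic's partitions are rebalanced across them.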
Create a Spark cluster
Let's assume you don't want to maintain that, or rather you aren't able to pick between YARN, Mesos, Kubernetes, or Standalone. And if you are running the first three, it might be worth looking at running Docker on those anyway.
You're exactly right that it is extra overhead, so I find it's all up to what you have available (for example, an existing Hadoop / YARN cluster with idle memory resources), or what you're willing to support internally (or pay for as vendor services, e.g. Kafka & Databricks in some hosted solution).
Plus, Spark isn't running the latest Kafka client library (it wasn't until Spark 2.4.0 that it updated to the Kafka 2.0 client, I believe), so you'll need to determine if that's a selling point.
For actual streaming libraries, rather than Spark batches, Apache Beam or Flink would probably let you do the same types of workloads against Kafka.
In general, in order to scale a producer / consumer, you need some form of resource scheduler. Installing Spark may not be difficult for some, but knowing how to use it efficiently and tune it for appropriate resources can be.

How to change Spark Streaming application with checkpointing?

Please consider the following scenario:
created the initial pipeline via Spark Streaming
enabled checkpointing
ran the application for a while
stopped the streaming application
made tiny changes to the pipeline, e.g. the business logic remained untouched but I did some refactoring, renaming, moving classes, etc.
restarted the streaming application
got an exception, as the pipeline stored in the checkpoint directory differs at the class level from the new one
What are the best practices to deal with such a scenario? How can we seamlessly upgrade a streaming application with checkpointing enabled? What are the best practices for versioning streaming applications?
tl;dr Checkpointing is for recovery situations, not for upgrades.
From the official documentation about checkpointing:
A streaming application must operate 24/7 and hence must be resilient to failures unrelated to the application logic (e.g., system failures, JVM crashes, etc.). For this to be possible, Spark Streaming needs to checkpoint enough information to a fault-tolerant storage system such that it can recover from failures.
So, to answer your question about using checkpointing (which is meant for fault tolerance) while changing your application code: you should not expect it to work, since that goes against the design.
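To make the failure mode concrete, here is a minimal sketch of classic DStream checkpoint recovery (the path and the socket source are illustrative assumptions). On restart, getOrCreate deserializes the entire pipeline, including your classes, from the checkpoint, which is exactly why renaming or moving classes breaks it:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointedApp {
  val checkpointDir = "hdfs:///checkpoints/my-app"  // assumed location

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("checkpointed-app")
    val ssc  = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    // Stand-in pipeline: this DStream graph is serialized into the checkpoint
    ssc.socketTextStream("localhost", 9999).count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recovers the serialized pipeline if a checkpoint exists,
    // otherwise builds a fresh one via createContext
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```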

Failure recovery in spark running on HDinsight

I was trying to get Apache Spark running on Azure HDInsight by following the steps from http://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-spark-install/
I was wondering if I have to manage master/slave failure recovery myself, or whether HDInsight will take care of it.
I'm also working on Spark Streaming applications on Azure HDInsight. Within a Spark job, Spark and YARN can provide some fault tolerance for the master and slaves.
But sometimes the driver and workers will also crash due to user-code errors, Spark internal issues, or Azure HDInsight issues. So we need to run our own monitoring/daemon process and handle the recovery ourselves.
For streaming scenarios it's even harder: a Spark Streaming job needs to keep running 24/7, so there is the concern of how to recover the job after machine reboots and reimages.
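For the Spark-on-YARN side of that recovery story, two well-known settings control how YARN restarts a failed driver for a long-running job. This is only a sketch of the relevant knobs, with illustrative values, not a full recovery setup:

```scala
import org.apache.spark.SparkConf

// Illustrative resilience settings for a long-running Spark Streaming job on YARN
val conf = new SparkConf()
  .setAppName("hdinsight-streaming")
  // Allow YARN to restart the application master (the driver, in cluster mode)
  // up to 4 times after a failure
  .set("spark.yarn.maxAppAttempts", "4")
  // Only failures within this window count toward the attempt limit, so a
  // 24/7 job isn't permanently killed by occasional failures spread over weeks
  .set("spark.yarn.am.attemptFailuresValidityInterval", "1h")
```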

Big Data Analytics using Redshift vs Spark, Oozie Workflow Scheduler with Redshift Analytics

We want to do Big Data Analytics on our data stored in Amazon Redshift (currently in Terabytes, but will grow with time).
Currently, it seems that all our analytics can be done through Redshift queries (and hence no distributed processing might be required at our end), but we are not sure if that will remain the case in the future.
In order to build a generic system that should be able to cater to our future needs as well, we are looking to use Apache Spark for data analytics.
I know that data can be read into Spark RDDs from HDFS, HBase, and S3, but does Spark support reading data from Redshift directly?
If not, we can look at transferring our data to S3 and then reading it into Spark RDDs.
My question is whether we should carry out our data analytics through Redshift queries directly, or whether we should go with the approach above and do the analytics through Apache Spark (the problem there being that data-locality optimization might not be available).
In case we do the analytics through Redshift queries directly, can anyone please suggest a good workflow scheduler to write our analytics jobs with? Our requirement is to be able to execute jobs as a DAG (Job2 should execute only if Job1 succeeds, etc.) and to schedule our workflows through the proposed workflow engine.
Oozie seems like a good fit for our requirements, but it turns out that Oozie cannot be used without Hadoop. Does it make sense to set up Hadoop on our machines just to use the Oozie workflow scheduler for scheduling our data analysis jobs through Redshift queries?
You cannot access data stored on Redshift nodes directly (e.g. via Spark); you can only reach it via SQL queries submitted to the cluster as a whole.
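As a rough sketch of the S3 route mentioned in the question (the JDBC URL, credentials, bucket, IAM role, and table name are all illustrative assumptions): have Redshift UNLOAD the data to S3 in parallel, then read the files with Spark. The spark-redshift connector, if you later adopt it, automates essentially this pattern.

```scala
import java.sql.DriverManager
import org.apache.spark.sql.SparkSession

// 1) Ask Redshift to UNLOAD the table to S3 (runs inside Redshift, in parallel)
val conn = DriverManager.getConnection(
  "jdbc:redshift://example.redshift.amazonaws.com:5439/analytics", "user", "pass")
conn.createStatement().execute(
  """UNLOAD ('SELECT * FROM events')
    |TO 's3://my-bucket/unload/events_'
    |IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
    |DELIMITER '|' GZIP""".stripMargin)
conn.close()

// 2) Read the unloaded files into Spark for distributed processing
val spark = SparkSession.builder.appName("redshift-offload").getOrCreate()
val events = spark.read.option("delimiter", "|").csv("s3://my-bucket/unload/events_*")
events.printSchema()
```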
My suggestion would be to use Redshift as long as possible and only take on the complexity of Spark/Hadoop when you absolutely need it.
If, in the future, you move to Hadoop then Cascading Lingual gives you the option of running your existing Redshift analytics more or less unchanged.
Regarding workflow, Oozie is not a good fit for Redshift. I would suggest you look at Azkaban (true DAG) or Luigi (uses a Python DSL).
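For illustration, the "Job2 runs only if Job1 succeeds" requirement maps onto a pair of Azkaban .job files; the commands, host, and file names below are hypothetical:

```properties
# job1.job - hypothetical first step (runs a Redshift query via psql)
type=command
command=psql -h example.redshift.amazonaws.com -p 5439 -d analytics -f job1.sql

# job2.job - a separate file; runs only after job1 succeeds
type=command
dependencies=job1
command=psql -h example.redshift.amazonaws.com -p 5439 -d analytics -f job2.sql
```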
