How do I configure a Spark job checkpoint in Azure Blob Storage?
I was able to add checkpoints in Databricks but not in an Azure Kubernetes cluster.
Can you please help me overcome this issue?
You can refer to Spark Checkpointing:
Checkpointing can be enabled by setting a directory in a
fault-tolerant, reliable file system (e.g., HDFS, S3, etc.) to which
the checkpoint information will be saved. This is done by using
streamingContext.checkpoint(checkpointDirectory).
From
https://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing
This should be implemented in your code, in whichever language you use, with StreamingContext.getOrCreate, which recreates a StreamingContext from checkpoint data or creates a new one.
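For reference, here is a minimal Scala sketch of that pattern. The wasbs:// container, account, and paths are placeholders, and it assumes the hadoop-azure and azure-storage jars plus the storage account key are already configured on the cluster:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointedJob {
  // Hypothetical checkpoint location in Azure Blob Storage
  val checkpointDir = "wasbs://mycontainer@myaccount.blob.core.windows.net/checkpoints"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("CheckpointedStreamingJob")
    val ssc  = new StreamingContext(conf, Seconds(10))
    // ... define your input DStreams and transformations here ...
    ssc.checkpoint(checkpointDir) // enable checkpointing to the fault-tolerant store
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recover the context from checkpoint data if it exists, otherwise create a new one
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}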
I am new to Spark Structured Streaming and its concepts. I was reading through the documentation for an Azure HDInsight cluster here, and it mentions that Structured Streaming applications run on the HDInsight cluster and connect to streaming data from .. Azure Storage, or Azure Data Lake Storage. I was looking at how to get started with streaming that listens for new-file-created events from the storage or ADLS. The Spark documentation does provide an example, but I am looking for how to tie streaming to the blob/file creation event, so that I can store the file content in a queue from my Spark job. It would be great if anyone could help me out on this.
Happy to help you on this, but can you be more precise about the requirement? Yes, you can run Spark Structured Streaming jobs on Azure HDInsight. Basically, mount the Azure Blob Storage to the cluster and then you can directly read the data available in the blob.
val df = spark.read.option("multiLine", true).json("PATH OF BLOB")
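If you want the job to pick up new files as they arrive rather than doing a one-off batch read, a minimal Structured Streaming sketch could look like the following. The schema, container, account, and paths here are placeholder assumptions; streaming file sources require the schema up front:

import org.apache.spark.sql.types.{StringType, StructType}

// Placeholder schema for the incoming JSON files
val schema = new StructType().add("id", StringType).add("payload", StringType)

// Each new file that lands in this directory is picked up by the stream
val streamDf = spark.readStream
  .schema(schema)
  .json("wasbs://mycontainer@myaccount.blob.core.windows.net/input/")

val query = streamDf.writeStream
  .format("console")
  .option("checkpointLocation", "wasbs://mycontainer@myaccount.blob.core.windows.net/checkpoints/")
  .start()

query.awaitTermination()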
Azure Data Lake Gen2 (ADL2) has been released for Hadoop 3.2 only. Open-source Spark 2.4.x supports Hadoop 2.7, and Hadoop 3.1 if you compile it yourself. Spark 3 will support Hadoop 3.2, but it has not been released yet (only a preview release).
Databricks offers native support for ADL2.
My solution to this problem was to manually patch and compile Spark 2.4.4 with Hadoop 3.2 so that I could use the ADL2 libraries from Microsoft.
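For what it's worth, once a Hadoop 3.2 build is in place, accessing ADL2 typically comes down to setting the account key on the Hadoop configuration and using the abfss:// scheme. A sketch with placeholder account, container, and key values:

// Assumes a Spark build with Hadoop 3.2 and the hadoop-azure (ABFS) libraries on the classpath;
// account, container, and key below are placeholders
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.myaccount.dfs.core.windows.net",
  "<storage-account-access-key>")

val adl2Df = spark.read.json("abfss://mycontainer@myaccount.dfs.core.windows.net/data/")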
I have created a small application that submits a Spark job at certain intervals and creates some analytical reports. These jobs can read data from a local filesystem or a distributed filesystem (the fs could be HDFS, ADLS, or WASB). Can I run this application on an Azure Databricks cluster?
The application works fine on an HDInsight cluster because I was able to access the nodes: I kept my deployable jar in one location and started it with a start script, and similarly I could stop it with a stop script that I prepared.
One thing I found is that Azure Databricks has its own file system, ADFS. I can add support for this file system too, but then will I be able to deploy and run my application as I did on the HDInsight cluster? If not, is there a way to submit jobs from an edge node, my HDInsight cluster, or any other on-prem cluster to an Azure Databricks cluster?
Have you looked at Jobs? https://docs.databricks.com/user-guide/jobs.html. You can submit jars to be run via spark-submit, just like on HDInsight.
The Databricks file system is DBFS - ABFS is used for Azure Data Lake. You should not need to modify your application for these - the file paths will be handled by Databricks.
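As a rough illustration (the paths and account names are placeholders), the same read code works against either file system; only the path scheme changes:

// DBFS path managed by Databricks
val reportsDf = spark.read.parquet("dbfs:/mnt/reports/input/")

// ADLS Gen2 path via ABFS
val adlsDf = spark.read.parquet("abfss://mycontainer@myaccount.dfs.core.windows.net/input/")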
Has anyone tried using Azure Databricks as the Spark cluster for CDAP job processing? The CDAP documentation details how to add it to Azure HDInsight, but I am wondering whether there is a way to configure CDAP to point to a Databricks Spark cluster - is that even possible? Or does this kind of integration need a specific Databricks client connector jar? If anyone has any insights, that would be helpful.
There is no out-of-the-box support for Databricks Spark on Azure. That said, you can develop a new cloud runtime that is capable of submitting jobs to a Databricks Spark cluster. There are examples of how to write a runtime extension for Cloud Dataproc and EMR.
I'm trying to access a WASB (Azure Blob Storage) file in Spark and need to specify the account key.
How do I specify the account in the spark-env.sh file?
fs.azure.account.key.test.blob.core.windows.net
EC5sNg3qGN20qqyyr2W1xUo5qApbi/zxkmHMo5JjoMBmuNTxGNz+/sF9zPOuYA==
When I try this, it throws the following error:
fs.azure.account.key.test.blob.core.windows.net: command not found
From your description, it is not clear whether the Spark you are using is running on Azure or locally.
For Spark running locally, refer to this blog post, which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add two jars, hadoop-azure & azure-storage, to your classpath so you can access HDFS via the wasb[s] protocol.
For Spark running on Azure, the only difference is that you access HDFS with wasb; all of that configuration was already done by Azure when the HDInsight cluster with Spark was created.
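As an alternative to core-site.xml, here is a minimal Scala sketch that sets the key on the Hadoop configuration at runtime. It assumes the hadoop-azure and azure-storage jars are on the classpath and reuses the account name "test" from your snippet; the container and file path are placeholders. (spark-env.sh is a shell script, which is why putting the property there produces "command not found".)

// Set the account key on the Hadoop configuration instead of spark-env.sh
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.test.blob.core.windows.net",
  "<your-storage-account-key>")

val blobDf = spark.read.text("wasbs://mycontainer@test.blob.core.windows.net/path/to/file.txt")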
How do I get a list of files from Azure Blob Storage in Spark and Scala?
I have no idea how to approach this.
I don't know whether the Spark you are using is running on Azure or locally, so there are two cases, but they are similar.
For Spark running locally, there is an official blog which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add two jars, hadoop-azure & azure-storage, to your classpath so you can access HDFS via the wasb[s] protocol. You can refer to the official tutorial to learn about HDFS-compatible storage with wasb, and to the blog about configuring HDInsight for more details.
For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparation was already done by Azure when the HDInsight cluster with Spark was created.
The methods for listing files are listFiles and wholeTextFiles on SparkContext.
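For example, a minimal sketch using wholeTextFiles (the container, account, and directory names are placeholders, and it assumes the wasb[s] configuration above is already in place):

// wholeTextFiles returns (filePath, fileContent) pairs; the keys give the file list
val files = spark.sparkContext
  .wholeTextFiles("wasbs://mycontainer@myaccount.blob.core.windows.net/data/")
  .keys
  .collect()

files.foreach(println)

Note that wholeTextFiles also reads the file contents, so for large containers the Hadoop FileSystem API may be a lighter-weight way to list paths.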
Hope it helps.
If you are using Databricks, try the below:
dbutils.fs.ls("blob_storage_location")