Mounting an on-premises datastore to Azure Databricks DBFS - azure

I am looking for a way to mount storage that lives on an on-premises Hadoop cluster directly onto dbfs:/// in Azure Databricks, instead of first loading it into Azure Blob Storage and then mounting that to Databricks. Any advice here would be helpful. Thank you.
I am still at the research stage and have not figured out a solution. I am not even sure whether it is possible without an Azure storage account.

Unfortunately, mounting an on-premises datastore to Azure Databricks is not supported.
You can try these alternative methods:
Method 1:
Connect to local files from a remote Databricks Spark cluster by accessing them through DBFS. Refer to this MS document.
Method 2:
Alternatively, use the Azure Databricks CLI or REST API to push local data to a location on DBFS, where it can be read into Spark from within a Databricks notebook (see the sketch after the blog link below).
For more information, refer to this blog by Vikas Verma.
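A minimal sketch of Method 2, assuming the Databricks CLI is installed and configured, and that dbfs:/tmp/sales.csv is a hypothetical target path:

# Run on the local machine, in a shell where the Databricks CLI is configured:
#   databricks fs cp /local/data/sales.csv dbfs:/tmp/sales.csv

# Then, inside a Databricks notebook (spark is the predefined SparkSession),
# read the uploaded file into Spark:
df = spark.read.option("header", "true").csv("dbfs:/tmp/sales.csv")
df.show(5)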

Related

Azure Synapse spark read from default storage

We are working on an Azure Synapse Analytics project with a CI/CD pipeline. I want to read data with a serverless Spark pool from a storage account, but without specifying the storage account name. Is this possible? We are using the default storage account but a separate container for the data lake data.
I can read data with spark.read.parquet('abfss://{container_name}@{account_name}.dfs.core.windows.net/filepath.parquet'), but since the name of the storage account differs between dev, test and prod, this would need to be parameterized, and I would like to avoid that if possible. Is there a native Spark way to do this? I found some documentation about doing this with pandas and fsspec, but not with Spark alone.
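One common workaround, sketched below, is to inject the account and container names from the environment (for example via a CI/CD pipeline variable) rather than hard-coding them; STORAGE_ACCOUNT and CONTAINER are hypothetical variable names, and spark is the session provided by the Synapse notebook:

import os

# Hypothetical environment variables supplied per environment (dev/test/prod).
account_name = os.environ["STORAGE_ACCOUNT"]
container_name = os.environ["CONTAINER"]

path = f"abfss://{container_name}@{account_name}.dfs.core.windows.net/filepath.parquet"
df = spark.read.parquet(path)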

What kind of data encryption does the default Databricks file system (default DBFS, or DBFS root) use?

Azure Databricks allows you to mount storage objects, so I can easily mount Azure Storage (Blob, Data Lake), and I know Azure Storage uses 256-bit AES encryption.
But my question is: when I store or save data in the default Databricks file system, the DBFS root (not a mount point), does it use any kind of encryption or not?
Any help is appreciated, thanks in advance.
Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption; the Azure Databricks DBFS root resides in a Blob storage account that is created in the workspace's managed resource group when the workspace is deployed.
The Databricks File System (DBFS) is an abstraction layer on top of that Azure Blob Storage account in the managed resource group, and it lets you access data as if it were a local file system.
By default, when you deploy Databricks it creates an Azure Blob Storage account that is used for storage and can be accessed via DBFS. When you mount to DBFS, you are essentially mounting an Azure Blob Storage, ADLS Gen1 or ADLS Gen2 container to a path on DBFS, as in the sketch below.
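For illustration, a minimal sketch of mounting an ADLS Gen2 container with a service principal; the account, container, tenant, secret scope and key names below are hypothetical placeholders:

# Hypothetical placeholders: replace the container, account, tenant, scope and key names.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("my-scope", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("my-scope", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://mycontainer@myaccount.dfs.core.windows.net/",
    mount_point="/mnt/mydata",
    extra_configs=configs,
)

# Data written under /mnt/mydata lands in the mounted container; data written to
# other DBFS paths (e.g. dbfs:/tmp) lands in the workspace's DBFS root storage.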
Hope this helps. Do let us know if you have any further queries.

Can't connect to Azure Data Lake Gen2 using PySpark and Databricks Connect

Recently, Databricks launched Databricks Connect that
allows you to write jobs using Spark native APIs and have them execute remotely on an Azure Databricks cluster instead of in the local Spark session.
It works fine except when I try to access files in Azure Data Lake Storage Gen2. When I execute this:
spark.read.json("abfss://...").count()
I get this error:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
Does anybody know how to fix this?
Further information:
databricks-connect version: 5.3.1
If you mount the storage (using a service principal) rather than accessing it directly, you should find this works: https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake-gen2.html
I posted some instructions around the limitations of Databricks Connect here: https://datathirst.net/blog/2019/3/7/databricks-connect-limitations
Likely too late, but for completeness' sake, there is one issue to look out for here. If you have this Spark conf set, you will see that exact error (which is pretty hard to unpack):
fs.abfss.impl org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem
So double-check the Spark configs, and make sure you have permission to access ADLS Gen2 directly using the storage account access key, as sketched below.
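A minimal sketch of that direct-access path, setting the storage account access key in the Spark conf from a Databricks notebook; the account, container and secret scope/key names are hypothetical placeholders:

# Hypothetical account, container and secret scope/key names.
spark.conf.set(
    "fs.azure.account.key.myaccount.dfs.core.windows.net",
    dbutils.secrets.get("my-scope", "storage-account-key"),
)

df = spark.read.json("abfss://mycontainer@myaccount.dfs.core.windows.net/events/")
print(df.count())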

Specify Azure key in Spark 2.x version

I'm trying to access a WASB (Azure Blob Storage) file in Spark and need to specify the account key.
How do I specify the account key in the spark-env.sh file?
fs.azure.account.key.test.blob.core.windows.net
EC5sNg3qGN20qqyyr2W1xUo5qApbi/zxkmHMo5JjoMBmuNTxGNz+/sF9zPOuYA==
When I try this, it throws the following error:
fs.azure.account.key.test.blob.core.windows.net: command not found
From your description, it is not clear whether the Spark you are using runs on Azure or locally.
For Spark running locally, refer to this blog post, which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two jars hadoop-azure and azure-storage to your classpath, so that HDFS can be accessed via the wasb[s] protocol.
For Spark running on Azure, the only difference is that you access HDFS with wasb; all of the configuration is done by Azure when the HDInsight cluster with Spark is created. A programmatic alternative is sketched below.
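Note that the property cannot live in spark-env.sh (a shell script, hence the "command not found" error). Besides core-site.xml, it can also be set programmatically; a minimal PySpark sketch, assuming the account name test and placeholder container and key values:

from pyspark.sql import SparkSession

# Requires the hadoop-azure and azure-storage jars on the classpath, as described above.
spark = SparkSession.builder.appName("wasb-example").getOrCreate()

# Set the account key on the Hadoop configuration (placeholder value shown).
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.test.blob.core.windows.net",
    "<your-storage-account-key>",
)

# mycontainer is a hypothetical container name.
df = spark.read.text("wasbs://mycontainer@test.blob.core.windows.net/sample.txt")
df.show(5)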

How to get list of file from Azure blob using Spark/Scala?

How do I get the list of files from Azure Blob Storage in Spark and Scala?
I have no idea how to approach this.
I don't know whether the Spark you are using runs on Azure or locally, so there are two cases, but they are similar.
For Spark running locally, there is an official blog which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two jars hadoop-azure and azure-storage to your classpath, so that HDFS can be accessed via the wasb[s] protocol. You can refer to the official tutorial to learn about HDFS-compatible storage with wasb, and to the blog about configuration for HDInsight for more details.
For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparation has already been done by Azure when the HDInsight cluster with Spark was created.
The methods for listing files are listFiles or wholeTextFiles on the SparkContext, as sketched below.
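A minimal sketch in PySpark (the equivalent wholeTextFiles call also exists on the Scala SparkContext); the container and account names are hypothetical, spark is an existing SparkSession, and the wasb[s] configuration described above is assumed to be in place:

# Hypothetical container/account names. wholeTextFiles returns (path, content) pairs,
# so the keys give the file list; it reads contents too, so it suits small files best.
path = "wasbs://mycontainer@myaccount.blob.core.windows.net/data/"
file_paths = spark.sparkContext.wholeTextFiles(path).keys().collect()

for p in file_paths:
    print(p)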
Hope it helps.
If you are using Databricks, try the below:
dbutils.fs.ls("blob_storage_location")
