I have a new HDInsight Spark cluster that I spun up.
I want to upload a file via the Ambari portal, but I don't see the HDFS option.
What am I missing? How can I get my .csv up to the server so I can start using it in the Python notebook?
HDInsight clusters do not work off local HDFS; they use Azure Blob Storage as their file system instead. Upload your file to the storage account that was attached to the cluster during its creation.
More info:
https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-use-blob-storage
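A minimal sketch of one way to do the upload from your local machine with the azure-storage-blob Python SDK; the connection string, container, and file names below are hypothetical placeholders:

from azure.storage.blob import BlobClient

# Upload a local CSV to the storage account attached to the HDInsight cluster.
blob = BlobClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # placeholder, not a real value
    container_name="mycontainer",
    blob_name="data/mydata.csv",
)
with open("mydata.csv", "rb") as f:
    blob.upload_blob(f, overwrite=True)

Once the blob is in the attached container, the Python notebook on the cluster can read it with a wasbs:// path such as wasbs://mycontainer@youraccount.blob.core.windows.net/data/mydata.csv.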
Related
I am looking for a way to mount local storage from an on-premises Hadoop cluster directly onto dbfs:/ in Databricks, instead of loading the data into Azure Blob Storage and then mounting it to Databricks. Any advice here would be helpful. Thank you.
I am still in the research phase and have not figured out a solution. I am not even sure whether it is possible without an Azure storage account.
Unfortunately, mounting an on-premises datastore to Azure Databricks is not supported.
You can try these alternative methods:
Method 1:
Connect local files to a remote Databricks Spark cluster and access them through DBFS. Refer to this MS document.
Method 2:
Alternatively, use the Azure Databricks CLI or REST API to push local data to a location on DBFS, where it can be read into Spark from within a Databricks notebook (see the sketch after this list).
For more information, refer to this blog by Vikas Verma.
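As a hedged sketch of Method 2: push the file with the Databricks CLI, then read it back inside a notebook. The DBFS path below is a hypothetical example, and spark/display are the objects Databricks predefines in a notebook cell:

# On the local machine, push the file to DBFS first, e.g.:
#   databricks fs cp ./mydata.csv dbfs:/tmp/mydata.csv
# Then, in a Databricks notebook cell, read it back into Spark:
df = spark.read.csv("dbfs:/tmp/mydata.csv", header=True)
display(df)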
I am trying to read a file from Azure Blob Storage into an Azure HDInsight cluster using PySpark and I am getting this error:
From the Azure portal, try opening the session again. If you are using a Jupyter notebook, try reopening it and then rerunning the commands; it should work fine.
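For reference, a minimal sketch of the read itself once the session is back up; the account, container, and path are hypothetical placeholders, and spark is the session the HDInsight PySpark kernel creates for you:

# Read a CSV straight from the cluster's Blob Storage over wasbs://
df = spark.read.csv(
    "wasbs://mycontainer@myaccount.blob.core.windows.net/data/sample.csv",
    header=True,
)
df.show(5)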
I'm trying to access a wasb(Azure blob storage) file in Spark and need to specify the account key.
How do I specify the account in the spark-env.sh file?
fs.azure.account.key.test.blob.core.windows.net
EC5sNg3qGN20qqyyr2W1xUo5qApbi/zxkmHMo5JjoMBmuNTxGNz+/sF9zPOuYA==
When I try this, it throws the following error:
fs.azure.account.key.test.blob.core.windows.net: command not found
From your description, it is not clear whether the Spark you are using is on Azure or running locally.
For Spark running locally, refer to this blog post, which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two JARs hadoop-azure and azure-storage to your classpath so that HDFS can be accessed via the wasb[s] protocol.
For Spark running on Azure, the only difference is that you access HDFS with wasb; all of the configuration is done by Azure when the HDInsight cluster with Spark is created.
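Note that spark-env.sh is a shell script, which is why pasting a property name into it produces "command not found". A hedged sketch of one way to pass the account key instead, using Spark's spark.hadoop.* property prefix from PySpark; the account name "test" is taken from the question, and the container, path, and key are placeholders:

from pyspark.sql import SparkSession

# spark.hadoop.* properties are copied into the Hadoop configuration,
# which is where fs.azure.* settings belong.
spark = (
    SparkSession.builder
    .appName("wasb-access")
    .config(
        "spark.hadoop.fs.azure.account.key.test.blob.core.windows.net",
        "<storage-account-key>",  # placeholder; never hard-code a real key
    )
    .getOrCreate()
)

df = spark.read.csv("wasbs://mycontainer@test.blob.core.windows.net/data/sample.csv", header=True)
df.show(5)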
How can I get a list of files from Azure Blob Storage in Spark and Scala?
I have no idea how to approach this.
I don't know whether the Spark you are using is on Azure or running locally, so there are two cases, but they are similar.
For Spark running locally, there is an official blog post which introduces how to access Azure Blob Storage from Spark. The key is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two JARs hadoop-azure and azure-storage to your classpath so that HDFS can be accessed via the wasb[s] protocol. You can refer to the official tutorial on HDFS-compatible storage with wasb, and to the blog about configuration for HDInsight, for more details.
For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparation is done by Azure when the HDInsight cluster with Spark is created.
For listing files, you can use wholeTextFiles on SparkContext (the keys of the returned pairs are the file paths), or go through the Hadoop FileSystem API (see the sketch below).
Hope it helps.
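Although the question asks for Scala, here is the same pattern sketched in PySpark (the equivalent Hadoop FileSystem calls work from Scala as well); the account, container, and prefix are hypothetical placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("list-blobs").getOrCreate()

# Reach the Hadoop FileSystem API through the JVM gateway and list a wasbs:// path.
path = "wasbs://mycontainer@myaccount.blob.core.windows.net/data/"
jvm = spark._jvm
conf = spark._jsc.hadoopConfiguration()
fs = jvm.org.apache.hadoop.fs.FileSystem.get(jvm.java.net.URI(path), conf)
for status in fs.listStatus(jvm.org.apache.hadoop.fs.Path(path)):
    print(status.getPath().toString())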
If you are using databricks, try the below
dbutils.fs.ls("blob_storage_location")
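Each entry returned by dbutils.fs.ls is a FileInfo with path, name, and size fields, so the file names can be collected like this (the location is a placeholder):

# Collect just the file names under a mounted or wasbs location.
names = [f.name for f in dbutils.fs.ls("dbfs:/mnt/mydata/")]
print(names)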
I created a cluster on HDInsight (Azure), created an IPython notebook, and uploaded a text file to blob storage. How can I load it into the notebook? I hope there is some URL for this file in blob storage.
Jupyter notebooks are a web-based UI to run queries against HDInsight (and other data sources). They do not have a mechanism to upload data into blob storage; they are only able to tell Apache Spark (for example) to query data within blob storage.
To work with data within HDInsight on Azure, you will need to upload the files into HDFS or into blob storage (the latter is the more common mechanism). To upload data into blob storage, here are some good references, followed by a short notebook sketch for reading the file back:
Upload data for Hadoop jobs in HDInsight
Use HDFS-compatible Azure Blob storage with Hadoop in HDInsight
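Once the text file is in the cluster's blob container, it can be read from the notebook over wasbs:// (or, for the cluster's default container, by a plain relative path). A minimal sketch; the account, container, and file name are hypothetical, and sc is the SparkContext the HDInsight PySpark kernel provides:

# Read the uploaded text file as an RDD and show the first few lines.
rdd = sc.textFile("wasbs://mycontainer@myaccount.blob.core.windows.net/example/data/sample.txt")
print(rdd.take(5))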