We could use some help on how to send Spark driver and worker logs to a destination outside Azure Databricks, e.g. Azure Blob storage or Elasticsearch using Elastic Beats.
When configuring a new cluster, the only option we get for the log delivery destination is DBFS, see
https://docs.azuredatabricks.net/user-guide/clusters/log-delivery.html.
Any input much appreciated, thanks!
Maybe the following could be helpful:
First, you specify a DBFS location for your Spark driver and worker logs.
https://docs.databricks.com/user-guide/clusters/log-delivery.html
Then, you create a mount point that links your DBFS folder to a Blob Storage container (a sketch follows the link below).
https://docs.databricks.com/spark/latest/data-sources/azure/azure-storage.html#mount-azure-blob-storage-containers-with-dbfs
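A minimal sketch of that mount call, assuming the storage key is kept in a secret scope; the container, storage account, secret scope, and mount point names below are placeholders:
dbutils.fs.mount(
  source = "wasbs://cluster-logs@mystorageaccount.blob.core.windows.net/",
  mountPoint = "/mnt/cluster-logs",
  extraConfigs = Map(
    // Storage account access key, looked up from a (hypothetical) secret scope.
    "fs.azure.account.key.mystorageaccount.blob.core.windows.net" ->
      dbutils.secrets.get(scope = "my-scope", key = "storage-account-key")))
You could then point the cluster's log delivery at dbfs:/mnt/cluster-logs so the logs end up in the Blob Storage container.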
Hope this helps!
Related
I am looking for a solution to mount local storage from an on-premises Hadoop cluster onto dbfs:/// directly, so it can be used with Databricks, instead of loading it to Azure Blob Storage and then mounting that to Databricks. Any advice here would be helpful. Thank you.
I am still in the research phase and have not figured out a solution. I am not even sure whether it is possible without an Azure storage account.
Unfortunately, mounting an on-premises datastore to Azure Databricks is not supported.
You can try these alternative methods:
Method 1:
Connecting local files to a remote Databricks Spark cluster and accessing the local file with DBFS. Refer to this MS document.
Method 2:
Alternatively, use the Azure Databricks CLI or REST API to push local data to a location on DBFS, where it can be read into Spark from within a Databricks notebook (see the CLI sketch after this list).
For more information, refer to this blog by Vikas Verma.
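For Method 2, a minimal sketch with the Databricks CLI; the local and DBFS paths are placeholders:
databricks configure --token
databricks fs cp ./mydata.csv dbfs:/data/mydata.csv
The first command is the one-time setup with your workspace URL and a personal access token; the second pushes the local file to DBFS, where a notebook can read it, e.g. with spark.read.csv("dbfs:/data/mydata.csv").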
I am working on a Spark project where the storage sink is Azure Blob Storage. I write data in Parquet format. I need some metrics around storage, e.g. numberOfFilesCreated, writtenBytes, etc. On searching for it online I came across a metrics class in the hadoop-azure package called AzureFileSystemInstrumentation. I am not sure how to access it from Spark and can't find any resources on it. How would one access this instrumentation for a given Spark job?
Based on my experience, I think there are three solutions that can be used in your current scenario, as below.
Directly use the Hadoop API for HDFS to get HDFS metrics data in Spark, because hadoop-azure just implements the HDFS APIs on top of Azure Blob Storage. Please see the official Hadoop documentation for Metrics to find the particular metrics you want, such as CreateFileOps or FilesCreated for numberOfFilesCreated. There is also a similar SO thread, How do I get HDFS bytes read and write for Spark applications?, which you can refer to (see the Spark listener sketch after this list).
Directly use the Azure Storage SDK for Java (or another language you use) to write a program that computes the statistics for files stored in Azure Blob Storage as blobs, ordered by creation timestamp or other criteria. Please refer to the official document Quickstart: Azure Blob storage client library v8 for Java to learn how to use the SDK.
Use an Azure Function with a Blob Trigger to monitor the events of files created in Azure Blob Storage, then write the statistics code for every blob-created event. Please refer to the official document Create a function triggered by Azure Blob storage to learn how to use the Blob Trigger. You can even send the metrics you want to Azure Table Storage, Azure SQL Database, or another service from within the Blob Trigger function for later analysis.
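For the first solution, here is a minimal Scala sketch that collects write metrics from Spark's task-level output metrics (one of the approaches discussed in the linked thread); the listener class name is made up for illustration, and it reports bytes/records written rather than a file count:
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

class WriteMetricsListener extends SparkListener {
  var bytesWritten = 0L
  var recordsWritten = 0L
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    // Guard against tasks that finished without metrics.
    Option(taskEnd.taskMetrics).foreach { tm =>
      bytesWritten += tm.outputMetrics.bytesWritten
      recordsWritten += tm.outputMetrics.recordsWritten
    }
  }
}

val listener = new WriteMetricsListener()
spark.sparkContext.addSparkListener(listener)
// ... run the Parquet write to the wasb(s) path, then inspect
// listener.bytesWritten and listener.recordsWritten
For numberOfFilesCreated you would still rely on the Hadoop/Azure-side metrics above, or simply list the output path after the write.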
I think I have a fairly straightforward question. I'm using a Hadoop cluster hosted on a Unix machine. And I need to transfer ORC files to an Azure BLOB storage from this Hadoop cluster.
If I install AzCopy on the Unix machine that hosts the Hadoop cluster, will I then be able to transfer the files to and from HDFS and Azure?
All the information I see on the internet only talks about transfers between local storage to Azure. So, I'm still left confused about transfers from HDFS to Azure.
Can someone please help me in resolving this confusion about AzCopy? Would be a great help. Thanks in advance.
You can just use distcp for this. See the solution here (I just reversed the order of the parameters): https://stackoverflow.com/a/54514602/1537195
hadoop distcp -D fs.azure.account.key.<account name>.blob.core.windows.net=<Key> hdfs://<hdfs path> wasb://<container>@<account>.blob.core.windows.net/<path to wasb file>
How do I get a list of files from Azure Blob storage in Spark and Scala?
I have no idea how to approach this.
I don't know whether the Spark you use runs on Azure or locally, so there are two cases, but they are similar.
For Spark running locally, there is an official blog which introduces how to access Azure Blob Storage from Spark. The key point is that you need to configure the Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two jars hadoop-azure & azure-storage to your classpath for accessing HDFS via the wasb[s] protocol. You can refer to the official tutorial on HDFS-compatible storage with wasb, and the blog about configuration for HDInsight, for more details.
For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparation has already been done by Azure when creating the HDInsight cluster with Spark.
To list files, use listFiles or listStatus from the Hadoop FileSystem API, or read them all at once with wholeTextFiles on SparkContext (a sketch follows).
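A minimal Scala sketch of the FileSystem approach; the container, account, and key below are placeholders:
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// If the account key is not already in core-site.xml, it can be set on the
// Hadoop configuration (placeholder account name and key):
spark.sparkContext.hadoopConfiguration.set(
  "fs.azure.account.key.mystorageaccount.blob.core.windows.net", "<storage account key>")

val root = "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/"
val fs = FileSystem.get(new URI(root), spark.sparkContext.hadoopConfiguration)

// List the entries directly under the container root and print their paths.
fs.listStatus(new Path(root)).map(_.getPath.toString).foreach(println)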
Hope it helps.
If you are using Databricks, try the below:
dbutils.fs.ls("blob_storage_location")
I have a new HDInsight Spark cluster that I spun up.
I want to upload a file via the Ambari portal, but I don't see the HDFS option.
What am I missing? How can I get my .csv up to the server so I can start using it in the Python notebook?
HDInsight clusters do not work off local HDFS. They use Azure Blob Storage instead. So upload to the storage account that got attached to the cluster during its creation.
More info:
https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-use-blob-storage
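As an example, the upload could be done with the Azure CLI from your own machine; the account, container, and key below are placeholders, and a SAS token or Azure Storage Explorer would work just as well:
az storage blob upload --account-name mystorageaccount --container-name mycontainer --name data/mydata.csv --file ./mydata.csv --account-key <storage account key>
If mycontainer is the default container attached to the cluster, the file then appears to the cluster at /data/mydata.csv (i.e. wasb://mycontainer@mystorageaccount.blob.core.windows.net/data/mydata.csv) and can be read from the Python notebook.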