Azure Data Lake Gen2 vs Storage account

I have a requirement to process some big data and am planning to deploy a Databricks cluster along with a storage technology. I am currently evaluating Data Lake Gen2, which supports both object and file storage. The storage account (blob, file, table, queue) has similar capabilities and can also handle both file-based and object-based storage requirements. These similarities leave me a bit puzzled about which option to choose. Can someone clarify the following questions please?
Apart from HDFS support, what other significant features would make me choose Data Lake Gen2 over a Storage Account?
Does Storage Account v2 with hierarchical namespace enabled == Data Lake Gen2? If so, can I use the File service to create file shares and mount them in my VM, like a Storage account's file shares?
For accessing data from Databricks, which of these two is better for big data workloads? I can see that a Storage account can also be mounted as DBFS, which can still leverage distributed processing.

Apart from HDFS support, what other significant features would make me choose Data Lake Gen2 over a Storage Account?
Answer: There are other benefits as well. In short, they come down to performance, management, security, and cost. For more details, you can refer to this official article.
Does Storage Account v2 with hierarchical namespace enabled == Data Lake Gen2? If so, can I use the File service to create file shares and mount them in my VM, like a Storage account's file shares?
Answer: Of course; ADLS Gen2 supports mounting file shares just as Blob storage does.
For accessing data from Databricks, which of these two is better for big data workloads? I can see that a Storage account can also be mounted as DBFS, which can still leverage distributed processing.
Answer: ADLS Gen2 can also be mounted as DBFS, and as per Answer 1, the better choice is ADLS Gen2.

Related

blob vs adls gen2 || things that can be done through adls2 but not possible through blob storage

I am new to the Azure world. I am going through Blob storage and ADLS Gen2. I observed that, apart from the space constraints and the hierarchical namespace, everything ADLS Gen2 can do can also be done through Blob storage.
But people still recommend ADLS Gen2 for analytical workloads. Please advise what can be done through ADLS Gen2 but not through Blob storage (apart from space and the hierarchical namespace).
Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput, Data Lake Storage Gen2 allows you to easily manage massive amounts of data.
Data Lake Storage Gen2 builds on Blob storage and enhances performance, management, and security in the following ways:
Performance is optimized because you do not need to copy or transform data as a prerequisite for analysis. Compared to the flat namespace on Blob storage, the hierarchical namespace greatly improves the performance of directory management operations, which improves overall job performance.
Management is easier because you can organize and manipulate files through directories and subdirectories.
Security is enforceable because you can define POSIX permissions on directories or individual files (a short sketch follows below).
Also, Data Lake Storage Gen2 is very cost-effective because it is built on top of the low-cost Azure Blob Storage. The additional features further lower the total cost of ownership for running big data analytics on Azure.
No limits on account sizes, file sizes, or number of files in ADLS
For more information, refer to this article by Ashish Patel.
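To make the POSIX security point concrete, here is a minimal sketch using the azure-storage-file-datalake Python SDK to set an ACL on a directory. The account name, filesystem, directory path, and credential are all placeholders, not values from the question.

    # Minimal ACL sketch; "myaccount", "myfilesystem", "raw/sales" and the
    # account key are placeholders for your own resources.
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://myaccount.dfs.core.windows.net",
        credential="<account-key>",
    )
    fs = service.get_file_system_client("myfilesystem")

    # Create (or get) a directory; real directories only exist on
    # accounts with the hierarchical namespace enabled.
    directory = fs.create_directory("raw/sales")

    # POSIX-style ACL: owner rwx, group r-x, no access for others.
    directory.set_access_control(acl="user::rwx,group::r-x,other::---")
    print(directory.get_access_control())

On a flat-namespace Blob storage account there is no real directory object to attach such permissions to, which is the practical difference described above.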

Azure Databricks with Storage Account as data layer

I have just started working on a data analysis project that requires analyzing high-volume data using Azure Databricks. While planning to use Databricks notebooks for the analysis, I have come across different storage options for loading the data: a) DBFS, the default file system from Databricks; b) Azure Data Lake (ADLS); and c) Azure Blob Storage. It looks like options (b) and (c) can be mounted into the workspace to retrieve the data for our analysis.
With the above understanding, may I get the following questions clarified please?
What's the difference between these storage options when used in the context of Databricks? Do DBFS and ADLS incorporate HDFS's file management principles under the hood, like breaking files into chunks, name nodes, data nodes, etc.?
If I mount an Azure Blob Storage container to analyze the data, would I still get the same performance as with the other storage options? Given that Blob storage is an object-based store, does it still break files into blocks and load those chunks as RDD partitions into the Spark executor nodes?
DBFS is just an abstraction on top of scalable object storage like S3 on AWS, ADLS on Azure, or Google Cloud Storage on GCP.
By default, when you create a workspace, you get an instance of DBFS, the so-called DBFS root. You can additionally mount storage accounts under the /mnt folder; data written to mount point paths (/mnt) is stored outside of the DBFS root. Even though the DBFS root is writeable, it's recommended that you store data in mounted object storage rather than in the DBFS root. The DBFS root is not intended for production customer data, as it has limitations: there is no fine-grained access control, you can't access the storage account backing the DBFS root from outside the workspace, etc.
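As an illustration, here is a minimal sketch of mounting an ADLS Gen2 container under /mnt from a notebook, following the documented OAuth/service-principal pattern. The secret scope, secret keys, tenant ID, container, and account names are all placeholders, and dbutils is the object a Databricks notebook provides.

    # Hypothetical names throughout; replace with your own scope/keys/account.
    configs = {
        "fs.azure.account.auth.type": "OAuth",
        "fs.azure.account.oauth.provider.type":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        "fs.azure.account.oauth2.client.id": dbutils.secrets.get("my-scope", "sp-client-id"),
        "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("my-scope", "sp-client-secret"),
        "fs.azure.account.oauth2.client.endpoint":
            "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
    }

    # Data written under /mnt/mydata lands in your own storage account,
    # not in the DBFS root.
    dbutils.fs.mount(
        source="abfss://mycontainer@myaccount.dfs.core.windows.net/",
        mount_point="/mnt/mydata",
        extra_configs=configs,
    )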
The actual implementation details of the storage service, like namenodes and so on, are abstracted away: you work with an HDFS-compatible API, but under the hood the implementation differs depending on the cloud and the flavor of storage. For Azure, you can find some details about their implementation in this blog post.
Regarding the second question: yes, you should still get the splitting of files into chunks, etc. There are differences between Blob Storage and Data Lake Storage, especially for ADLS Gen2, which has a better security model and may be better optimized for big data workloads. This blog post describes the differences between them.
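If you want to see that splitting from a notebook, a quick hypothetical check is to read a mounted file and count the partitions Spark created from its input splits; the path below is a placeholder and spark is the notebook-provided session.

    # A file much larger than the input split size should yield several
    # partitions, each processed by a separate task on the executors.
    df = spark.read.csv("/mnt/mydata/large_file.csv", header=True)
    print(df.rdd.getNumPartitions())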

What is the Data size limit of DBFS in Azure Databricks

I read here that the storage limit on AWS Databricks is 5 TB for an individual file and that we can store as many files as we want.
So does the same limit apply to Azure Databricks? Or is there some other limit applied on Azure Databricks?
Update:
@CHEEKATLAPRADEEP Thanks for the explanation, but can someone please share the reason behind "we recommend that you store data in mounted object storage rather than in the DBFS root"?
I need to use DirectQuery (because of the huge data size) in Power BI, and ADLS doesn't support that as of now.
From Azure Databricks Best Practices: Do not Store any Production Data in Default DBFS Folders
Important Note: Even though the DBFS root is writeable, we recommend that you store data in mounted object storage rather than in the DBFS root.
The reasons for recommending that you store data in a mounted storage account, rather than in the storage account located in the ADB workspace, are:
Reason 1: You don't have write permission when you use the same storage account externally via Storage Explorer.
Reason 2: You cannot use the same storage account for another ADB workspace, nor use the same storage account's linked service for Azure Data Factory or an Azure Synapse workspace.
Reason 3: In the future, you may decide to use Azure Synapse workspaces rather than ADB.
Reason 4: What if you want to delete the existing workspace? Data in the DBFS root is deleted along with it.
Databricks File System (DBFS) is a distributed file system mounted into an Azure Databricks workspace and available on Azure Databricks clusters. DBFS is an abstraction on top of scalable object storage, i.e., ADLS Gen2.
There is no restriction on amount of data you can store in Azure Data Lake Storage Gen2.
Note: Azure Data Lake Storage Gen2 is able to store and serve many exabytes of data.
For the Azure Databricks Filesystem (DBFS) – it supports only files smaller than 2 GB.
Note: If you use local file I/O APIs to read or write files larger than 2 GB, you might see corrupted files. Instead, access files larger than 2 GB using the DBFS CLI, dbutils.fs, or Spark APIs, or use the /dbfs/ml folder.
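To make that concrete, here is a hedged sketch contrasting the two access paths; the paths are placeholders, and dbutils and spark are the objects a Databricks notebook provides.

    # Local file I/O through the /dbfs FUSE mount is subject to the 2 GB limit:
    with open("/dbfs/mnt/mydata/huge.parquet", "rb") as f:
        header = f.read(4)  # risky pattern for files larger than 2 GB

    # dbutils.fs and Spark APIs go through the distributed filesystem layer
    # instead, so they handle large files safely:
    dbutils.fs.cp("/mnt/mydata/huge.parquet", "/mnt/mydata/copy/huge.parquet")
    df = spark.read.parquet("/mnt/mydata/huge.parquet")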
For Azure Storage – the maximum storage account capacity is 5 PiB (pebibytes).
Default limits apply to Azure general-purpose v1, v2, Blob storage, and block blob storage accounts. The ingress limit refers to all data that is sent to a storage account; the egress limit refers to all data that is received from a storage account.
Note: A single block blob is limited to 4.75 TB (50,000 blocks × 100 MiB per block).
Databricks documentation states:
Supports only files less than 2 GB in size. If you use local file I/O APIs to read or write files larger than 2 GB you might see corrupted files. Instead, access files larger than 2 GB using the DBFS CLI, dbutils.fs, or Spark APIs.
You can read more here: https://learn.microsoft.com/en-us/azure/databricks/data/databricks-file-system

Azure Data lake VS Azure HDInsight

I was going through the Microsoft documents:
https://learn.microsoft.com/en-us/azure/data-lake-store/data-lake-store-overview
I'm new to Azure Data Lake and HDInsight. There is a statement at that URL which says:
"Azure Data Lake Store can be accessed from Hadoop (available with HDInsight cluster) using the WebHDFS-compatible REST APIs."
As per my initial understanding, Data Lake Store is a store in which any kind of data can be stored. I think HDInsight does much the same thing.
My question is: what is the difference between Azure Data Lake and Azure HDInsight? If HDInsight can be used for file storage or any kind of storage, then why use Data Lake? It would be great if someone could clarify this in detail. Thanks.
The easiest way to think of a Data Lake is as a large container, like a real lake with rivers flowing into it: you never know where the rivers are coming from (or what "type" of river). Azure Data Lake was introduced to make big data easy for developers, data scientists, and analysts to store data of any size. It removes the complexities of ingesting and storing all your data while making it faster to get up and running with big data. A Data Lake can store many different types of data (structured data, unstructured data, log files, real-time data, images, etc.) and blend them together to correlate many different data types. The key point here is that we are moving from traditional tools to modern ones (like Hadoop, Cassandra, NoSQL databases, etc.). Azure Data Lake includes three services:
Azure Data Lake Store, a no-limits data lake that powers big data analytics
Azure Data Lake Analytics, a massively parallel on-demand job service
Azure HDInsight, a fully managed cloud Hadoop and Spark offering
Azure Data Lake Store is like a cloud-based file service or file system that is pretty much unlimited in size. We can run services on top of the data in that store. So you could use Hadoop or Spark in an HDInsight cluster, or you could use the Azure Data Lake Analytics service, which is a complement to the Azure Data Lake Store. That service lets you run jobs that effectively query the data you have stored in the Azure Data Lake Store and generate output results.
In a nutshell:
HDInsight is a managed Hadoop service (it provides compute support).
Azure Data Lake (ADL) is a managed storage service (it provides large amounts of storage).
(Instead of ADL, you can alternatively choose to use Blobs in HDInsight, but Blobs have some limitations, such as no support for file streaming to storage via the HDInsight cluster.)
Here is the definition from the Azure documentation: Azure uses a "decomposed hardware method". You can think of HDInsight as the Hadoop cluster and Azure Data Lake (ADL) as HDFS, but they are detached.
If you want to relate this to AWS, HDInsight is equivalent to EMR and ADL is equivalent to EMRFS or S3.
If you terminate the cluster, the ADL storage stays, with the files stored in it. You can access the storage directly using another service or tool (like Azure Databricks), or you can create another HDInsight cluster on top of the data.
HDInsight accesses ADL using adl://, and HDInsight never stores the file blocks on its nodes (the way Hadoop does); rather, it keeps mappings to the storage service.
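As an illustration of that separation, here is a minimal sketch of reading a file from ADL via the adl:// scheme in a Spark session on an HDInsight cluster configured for Data Lake Store; the account and path are placeholders.

    # The file blocks live in the ADL storage service, not on cluster nodes;
    # "myadlaccount" and the path are hypothetical.
    df = spark.read.csv(
        "adl://myadlaccount.azuredatalakestore.net/data/sample.csv",
        header=True,
    )
    df.show(5)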
Azure Data Lake Store is just that: a data store. HDInsight can also store data in the cluster that you spin up; however, when you stop that cluster, the data goes away with it.
It is common for customers to use either Azure Data Lake Store or Azure Storage to provide permanent storage separate from the cluster (compute) used to process the data.
Guy
HDInsight is the analytics service, whereas Azure Data Lake Storage is the storage service. You most likely need both to have a functional analytics cluster.
HDInsight provides the cluster and fully manages the open-source analytics packages (Hadoop, Spark, etc.), and you set up your cluster to use Azure Data Lake Storage, which supports the HDFS API (Hadoop FileSystem) on top of cloud storage.
Azure Data Lake Storage Gen2 is what you should start looking at; it merges the benefits of both Azure Storage and ADLS into one service.
ADLS Gen 2 documentation - https://learn.microsoft.com/en-us/azure/storage/data-lake-storage/introduction
Azure Data Lake Analytics provides serverless compute while using Azure Data Lake Store for data storage, whereas with HDInsight we need to specify and design the compute (virtual machine) nodes according to processing requirements. It may be advantageous for developers to work with serverless compute in Azure Data Lake Analytics, as the scaling needs of an analytics job are taken care of out of the box.

Use Data Lake or Blob on HDInsights cluster on Azure

When creating an HDInsight Hadoop cluster in Azure, there are two storage options: Azure Data Lake Store (ADLS) or Azure Blob Storage.
What are the real differences between these two options and how do they affect the performance?
I found this page https://learn.microsoft.com/en-us/azure/data-lake-store/data-lake-store-comparison-with-blob-storage
But it is not very specific; it only uses very general terms like "ADLS is optimized for analytics".
Does that mean it's better for storing the HDInsight file system? And if ADLS is indeed faster, then why not use it for non-analytics data as well?
As per this document, an Azure Storage account can hold up to 4.75 TB, though individual blobs (or files from an HDInsight perspective) can only go up to 195 GB. Azure Data Lake Store can grow dynamically to hold trillions of files, with individual files greater than a petabyte. For more information, see Understanding blobs and Data Lake Store.
Also, check Benefits of Azure Storage and Use Data Lake Store for more details and comparisons.
Hope this helps.
In addition to Ashok's answer: ADLS is currently only available in a few regions, compared to Azure Storage. So if you need your HDInsight account in a specific region, you should make sure your storage is in the same region.
Another benefit of ADLS over Azure Storage is its POSIX-based security model at the file/folder level that uses AAD security principals instead of Shared Access Keys.
The reason why you may not want to use ADLS for non-analytics data is primarily cost. Because of some of the additional capabilities, it is currently a bit more expensive.
In addition to the other answers, it's not possible to use the Spark Data Factory activity on HDInsight clusters that use Data Lake as the primary storage. This limitation applies to both ADF v1 and v2, as seen here: https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-spark and https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-spark
