How does an HDInsight cluster map to Azure Storage as HDFS? - azure-hdinsight

I have a fair idea of how Hadoop works, as I have studied the on-premises model, since that's how everyone learns. In that sense the top-level idea is fairly straightforward: we have a set of machines (nodes), we run certain processes on each of them, and we then configure those processes so that the whole thing behaves as a single logical entity that we call a Hadoop (YARN) cluster. Here HDFS is a logical layer on top of the individual storage of all the machines in the cluster.

But when we start thinking of the same cluster in the cloud, this becomes a little confusing. Take the case of an HDInsight Hadoop cluster: let's say I already have an Azure Storage account with lots of text data and I want to do some analysis, so I go ahead and spin up a Hadoop cluster in the same region as the storage account. Now the whole idea behind Hadoop is processing closest to where the data exists. In this case, when we create the Hadoop cluster, a bunch of Azure virtual machines start behind the scenes with their own underlying storage (though in the same region). But while creating the cluster we also specify a default storage account, and a few other storage accounts to be attached, where the data to be processed lies. So ideally the data that is to be processed needs to exist on the disks of the virtual machines. How does this work in Azure? I guess the virtual machines create disks that are actually pointers to the Azure storage accounts (default + attached)?

This part is not really explained well and is quite cloudy, so a lot of people, including myself, are in the dark when they learn the classic on-premises Hadoop model academically and then start using cloud-based clusters in the real world. If we could see more information about these virtual machines right from the cluster Overview page in the Azure portal, it would help the understanding. I know it's visible from Ambari, but Ambari is blind to Azure; it's an independent component, so that is not very helpful.

There is an underlying driver that works as a bridge, mapping Azure Storage as HDFS for the other services running in HDInsight.
You can read more about this driver's functionality on the official page below.
https://hadoop.apache.org/docs/current/hadoop-azure/index.html
If your Azure Storage account is of type ADLS Gen2 (Azure Data Lake Storage Gen2), then a different driver is used, documented on the following official page. It offers some advanced capabilities of ADLS Gen2 to beef up your HDInsight performance.
https://hadoop.apache.org/docs/current/hadoop-azure/abfs.html
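For a feel of how the mapping surfaces in practice, both drivers expose Azure Storage through ordinary HDFS-style URIs. A minimal sketch; the container, file-system, and account names below are placeholders, not values from the question:
# Classic blob storage through the wasb driver:
hdfs dfs -ls wasb://mycontainer@myaccount.blob.core.windows.net/example/data
# ADLS Gen2 through the abfs driver:
hdfs dfs -ls abfs://myfilesystem@myaccount.dfs.core.windows.net/example/data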
Finally, just like an on-premises Hadoop installation, HDInsight also has a local HDFS that is deployed across the hard drives of the cluster VMs. You can access this local HDFS using the following URI:
hdfs://mycluster/
For example, you can issue the following to view the root-level content of your local HDFS:
hdfs dfs -ls hdfs://mycluster/
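Note that on HDInsight the default file system is the attached Azure Storage, so a path without a scheme resolves there; the hdfs:// scheme is what forces the VM-local HDFS. A quick way to see the contrast:
# Resolves against the default file system, i.e. the attached Azure Storage:
hdfs dfs -ls /
# Explicitly targets the HDFS on the cluster VMs' local disks:
hdfs dfs -ls hdfs://mycluster/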

Related

Configuration of Azure for Storage and Hadoop Clustered Processing

I hope someone can offer advice. At the moment I have been asked to scope out a possible infrastructure for a new Azure platform. We are also going to be using HDFS/Hadoop for our ETL and storage.
Can anyone offer advice on the following:
Set up a storage-optimised server (e.g. L4: 4 cores, 32 GB RAM, 678 GB storage) to hold our raw data, reference tables, and final cleansed data within HDFS. This server could be running 24/7 to feed our analytics platforms.
Then, to utilise the power of Hadoop, could we spin up a set of processing servers (e.g. once a week) to read from the storage server, process and write back to the storage server, and then shut down until the next load-and-process task?
Would really appreciate anyone's thoughts or advice on this, or any possible configurations we could think of.
Many thanks
Fiorano
Whether your Hadoop cluster is on-premises or in the cloud, it contains two main resources: compute resources to process jobs, and storage resources to hold data. In an on-premises cluster, the storage and compute resources are combined in the same hardware, tying them together. With HDInsight, the storage is wholly separated from the compute resource. This is a very important distinction of HDInsight: it means that you can completely turn off the compute portion of the cluster and the data will remain accessible.
Note: To analyze data in HDInsight cluster, you can store the data either in Azure Storage, Azure Data Lake Store, or both.
For more details, refer "Azure HDInsight Documentation".
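To make that lifecycle concrete, here is a rough sketch with the Azure CLI. This is an illustration, not part of the original answer: the resource names and credentials are placeholders, and the exact az hdinsight flags should be checked against your CLI version.
# Create a cluster whose default storage is an existing account (names are placeholders):
az hdinsight create --name mycluster --resource-group myrg --type hadoop \
  --storage-account mystorageaccount \
  --http-user admin --http-password '<password>' \
  --ssh-user sshuser --ssh-password '<password>'
# Run jobs, then delete only the compute; the data in mystorageaccount remains accessible:
az hdinsight delete --name mycluster --resource-group myrg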

Microsoft Azure: difference between cluster and storage account

I am learning from this course. It asks to create a new HDInsight cluster (the options are Hadoop, HBase, Storm or Spark) and also a storage account. What is the difference between a cluster and a storage account? Does a cluster include processors to process my jobs, and does a storage account mean space to store my data? Why can't I connect the same storage account with different clusters?
Also, under Microsoft Azure >> New >> Data + Analytics, I see two options that deal with big data: HDInsight and Data Lake Analytics. What is the difference between those two? Both of them look similar.
HDInsight
Microsoft's cloud-based Big Data service. Apache Hadoop and other popular Big Data solutions.
Data Lake Analytics
Big data analytics made easy
There are a lot of questions in here, so let me answer them one by one.
What is Blob Storage vs HDInsight Cluster?
Blob storage is a distributed file store, very similar to HDFS, and is used to store data/videos/things. An HDInsight cluster is a number of Hadoop virtual machines created to run MapReduce code over a DFS (HDFS or blob storage). Having two separate services allows you to scale each independently, saving money in the long term. Data storage is cheap, but a 500-node VM cluster can get pricey quickly. Being able to kill the cluster but keep your data is helpful.
Why can't I connect the same storage account with different clusters?
You can have multiple clusters pointed at the same storage account, but it's an anti-pattern. Storage accounts have data and IO limits, and if you have multiple clusters pulling against a single storage account, it's more probable you'll hit them. Also, storage accounts only cost money if you have data in them, so having multiple accounts isn't a cost increase.
What is Azure Data Lake(ADL) and ADL storage?
Azure Data Lake is another option for both storage and compute. ADL storage can be thought of as blob storage v2: you get an increase in some of the IO and file-size limits of blob storage, while still being able to use Hadoop for compute. ADL is a second option for compute that is completely different than Hadoop: you don't have to worry about cluster creation or clusters in general. You write a query, specify the amount of parallelization you'd like, and the data is returned.
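As a quick illustration of pointing a cluster at an additional (non-default) storage account: Hadoop's Azure driver looks up the access key from the fs.azure.account.key.<account>.blob.core.windows.net property. A minimal sketch with placeholder account, container, and key values, using Hadoop's generic -D option to pass the key for a one-off command:
# Supply the access key for an extra account inline, then list a container on it:
hdfs dfs \
  -D fs.azure.account.key.otheraccount.blob.core.windows.net=<ACCESS_KEY> \
  -ls wasb://somecontainer@otheraccount.blob.core.windows.net/
On a long-lived cluster you would normally set that property in core-site.xml instead of repeating it on every command.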
References:
https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storage-limits
https://azure.microsoft.com/en-us/services/hdinsight/
https://azure.microsoft.com/en-us/solutions/data-lake/

HDInsight - Azure blob storage

I have some basic clarifications about Azure HDInsight.
The following article gives some basic input on using HDInsight:
https://azure.microsoft.com/en-in/documentation/articles/hdinsight-hadoop-emulator-get-started/
It says that HDInsight internally uses Azure blob storage.
Having this in mind, my question is as follows:
I have an HDInsight cluster hd1 which uses storage account stg1.
If I just want to upload and download files to stg1 using Azure Storage Explorer, then what's the use of having hd1? I can do that without even creating an HDInsight cluster, which costs heavily.
So, is Hadoop/HDInsight only used for processing some data stored in stg1 to produce some results, like a word count? Is that the only reason why we use HDInsight?
If you want to understand HDInsight and blob storage better, you should read https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-use-blob-storage/.
HDInsight is Microsoft's implementation of Hadoop. So far there are four different base types, which include Hadoop, HBase, Storm, and Spark. You can always install additional components on top of the base types.
Your question is really about why one would use Hadoop at all. Hadoop shines when you need to process a lot of data - big data.
One of the differences between HDInsight and other Hadoop implementations is the separation of storage (blob storage) from compute (HDInsight clusters). You still need to copy the data in (or store the data directly in Azure blob storage). When you are ready to process, you create an HDInsight cluster, submit a job, and then delete the cluster. You delete the cluster so you don't need to pay for it anymore. Even after the cluster is deleted, your data stored in blob storage is retained.
HDInsight is a family of products, including Hadoop, Spark, HBase, and Storm. They all do different things, and storage is only one aspect.
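To make the "processing" part concrete: a word count (the example mentioned in the question) is a one-line job submission once the cluster exists. The jar path and sample paths below are assumptions based on a typical Linux-based HDInsight layout and may differ on your cluster:
# Run the bundled MapReduce word-count example against data in blob storage:
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  wordcount wasb:///example/data/input.txt wasb:///example/results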

How to create a new HDInsight cluster on an existing storage container

I am using HDInsight on Azure to research the scalability of ranking machine learning methods (learning to rank, for the insiders) on Hadoop. I managed to test-run my implementation of a learning-to-rank algorithm on an HDInsight cluster and clocked its time to complete the operation.
Now I want to run the same code over and over again with different numbers of cores to see how the running time scales as a function of the number of cores. From other questions on this forum I understood that HDInsight does not allow changing the number of cores of a cluster. Would it instead be possible to delete the current cluster and then create a new cluster that makes use of the exact same container on my Azure Storage? I tried to do this by simply giving the new cluster the same name as the previous one (since the container created for a new cluster is automatically named after the cluster at creation time), but that doesn't work, as the new container created for the new cluster will have "-1" appended to the cluster name. The data file that I am trying to process is around 15 GB in size, so it would be a real pain in the ass if I had to upload it to the cluster container for each cluster that I create.
Any help on how I can run my algorithms on HDInsight with varying numbers of cores without having to re-upload my input data for each point of measurement would be very much appreciated!
Kind Regards,
Niek Tax
You should be able to link your existing storage container to an HDInsight cluster. According to http://azure.microsoft.com/en-us/documentation/articles/hdinsight-use-blob-storage/#benefits:
Using the custom create, you have one of the following options for the default storage account:
Use existing storage
Create new storage
Use storage from another subscription.
You also have the option to create your own Blob container or use an existing one.
The link shows how you can do that through the Windows Azure Portal.
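The question predates the current portal, but the same idea can be sketched with today's Azure CLI: point each new, differently sized cluster at the container that already holds the 15 GB input. Everything below is a placeholder-laden assumption, not a verified recipe; in particular, check that the --storage-container and --workernode-count flags match your CLI version.
# Re-create the cluster with a different core count on the same existing container:
az hdinsight create --name rank-test --resource-group myrg --type hadoop \
  --workernode-count 8 \
  --storage-account mystorageaccount \
  --storage-container existingcontainer \
  --http-user admin --http-password '<password>' \
  --ssh-user sshuser --ssh-password '<password>'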

Accessing Raw Data for Hadoop

I am looking at the data.seattle.gov data sets and I'm wondering, in general, how all of this large raw data can be sent to Hadoop clusters. I am using Hadoop on Azure.
It looks like data.seattle.gov is a self-contained data service, not built on top of the public cloud.
They have their own RESTful API for data access.
Therefore I think the simplest way is to download the data you are interested in to your Hadoop cluster, or to S3 and then use EMR or your own clusters on Amazon EC2.
If they (data.seattle.gov) have relevant query capabilities, you can query the data on demand from your Hadoop cluster, passing data references as input. That would only work if you are doing very serious data reduction in those queries - otherwise network bandwidth will limit the performance.
In Windows Azure you can place your data sets (unstructured data etc.) in Windows Azure Storage and then access them from the Hadoop cluster.
Check out the blog post "Apache Hadoop on Windows Azure: Connecting to Windows Azure Storage from Hadoop Cluster":
http://blogs.msdn.com/b/avkashchauhan/archive/2012/01/05/apache-hadoop-on-windows-azure-connecting-to-windows-azure-storage-your-hadoop-cluster.aspx
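A minimal sketch of that flow, run from the cluster head node. The dataset id is hypothetical (data.seattle.gov exposes CSV endpoints through its REST API), and any Azure Storage upload tool would work equally well, since the container is plain blob storage:
# Download a CSV extract from the data service's REST API (dataset id is a placeholder):
curl -o seattle_data.csv "https://data.seattle.gov/resource/<dataset-id>.csv"
# Copy it into the blob container backing the cluster, where jobs see it as an HDFS path:
hdfs dfs -put seattle_data.csv wasb:///example/data/seattle_data.csv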
You can also get your data from the Azure Marketplace, e.g. government data sets, etc.:
http://social.technet.microsoft.com/wiki/contents/articles/6857.how-to-import-data-to-hadoop-on-windows-azure-from-windows-azure-marketplace.aspx
