I would like some advice/tips on choosing the right Azure technology to store forecast data.
My team and I scrape weather forecast data every day from various sources and store it as-is in Azure File Storage. The files are in "grib2" format, which is a standard format for weather forecast data.
We are able to extract the data from those "grib2" files using a Python script running on an Azure VM.
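For reference, the extraction step looks roughly like the minimal sketch below using the pygrib library (the file name and the flattening into one table are illustrative, not our exact script):

```python
import pygrib
import pandas as pd

def grib2_to_dataframe(path):
    """Flatten every message of a grib2 file into one long table."""
    frames = []
    grbs = pygrib.open(path)
    for grb in grbs:                      # one message per parameter/forecast step
        lats, lons = grb.latlons()        # 2-D coordinate grids
        values = grb.values               # 2-D array of the forecast field
        frames.append(pd.DataFrame({
            "parameter": grb.name,
            "valid_date": grb.validDate,
            "lat": lats.ravel(),
            "lon": lons.ravel(),
            "value": values.ravel(),
        }))
    grbs.close()
    return pd.concat(frames, ignore_index=True)

df = grib2_to_dataframe("forecast.grib2")   # illustrative file name
```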
We now have many files representing hundreds of gigabytes of data to store, and I'm struggling to figure out which Azure data store best suits our needs in terms of practicality and cost.
We started with "Azure Table Storage" first because it's a cheap solution,
but I've read in many posts that it is a bit dated and not well suited to our use case, as it, for example, returns no more than 1,000 entities per query and offers no aggregation on the data.
I considered using Azure SQL Database, but it seems that it can become very expensive very fast.
I also considered Azure Data Lake Storage Gen2 (and HDInsight), but I'm not very comfortable with those blob-based stores and can't really say whether they suit my needs in terms of practicality and whether they are "easy to query".
For now, our plan is to do the following:
1) Extract data from the grib2 files using a Python script running on an Azure VM
2) Insert the transformed data into [Azure storage]
3) Query the [Azure storage] from Azure Machine Learning Service or a local R script (for example)
4) Insert the computed data into [Azure storage]
where the [Azure storage] technology is still to be determined.
Any help or advice would be much appreciated, thanks.
A couple of things I would suggest here:
To store the downloaded files in raw format (grib2 in your case), place them on good ol' Azure Blob Storage. It's cheap storage, exactly for your needs.
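A rough sketch of that upload step with the azure-storage-blob SDK (the container name, connection string variable, and file name are placeholders):

```python
import os
from azure.storage.blob import BlobServiceClient

# Connection string, container and blob names are placeholders.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("raw-grib2")

local_file = "forecast_20240101.grib2"   # illustrative file name
with open(local_file, "rb") as data:
    container.upload_blob(name=local_file, data=data, overwrite=True)
```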
Use Azure Databricks to load the data from the storage account and unpack it into memory (Python or Scala).
With the data in memory - still in Databricks - run your ML inferencing. You could also use SparkR if you really want to.
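A minimal PySpark sketch of that load-and-score step inside a Databricks notebook (the storage path, model file, and column names are assumptions, not a definitive implementation):

```python
# Runs in a Databricks notebook, where `spark` is already defined.
# The abfss path, model file and column names are illustrative.
import joblib

df = spark.read.parquet(
    "abfss://weather@mystorageaccount.dfs.core.windows.net/transformed/"
)

model = joblib.load("/dbfs/models/forecast_model.pkl")   # previously trained model

features = ["lat", "lon", "temperature", "pressure"]
pdf = df.select(*features).toPandas()
pdf["prediction"] = model.predict(pdf[features])

scored = spark.createDataFrame(pdf)   # back to a Spark DataFrame for writing out
```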
Store the computed results in a serving layer. Which one really depends on what you want to do with them later. Azure SQL Database is often an obvious choice; there is a native Spark connector which efficiently writes data from Databricks to SQL DB.
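Continuing the sketch above, a hedged example of that write using Spark's built-in JDBC writer (the native Microsoft connector is used the same way via its own format name; server, database, table, and secret scope are placeholders):

```python
# Server, database, table and secrets below are placeholders.
jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=weatherdb;encrypt=true"
)

(scored.write
    .format("jdbc")   # or the native connector's format, if installed on the cluster
    .option("url", jdbc_url)
    .option("dbtable", "dbo.forecast_scores")
    .option("user", dbutils.secrets.get("kv-scope", "sql-user"))
    .option("password", dbutils.secrets.get("kv-scope", "sql-password"))
    .mode("append")
    .save())
```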
In addition to using Databricks as your inferencing environment, it's also a good choice for ML training (e.g. utilizing Azure ML Service).
Related
I'm trying to get my head around Databricks.
I've found documentation stepping through importing data from S3 or Azure Data Lake, and then outputting into Azure Synapse Analytics or another data warehouse solution.
After a quick play, I've recognised that you can simply save a table in Databricks, access it using SQL, and even pull it into Power BI as a source.
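For example, something as simple as this seems to work (the database, table, and column names are just my test case):

```python
# In a Databricks notebook; database, table and column names are a toy example.
spark.sql("CREATE DATABASE IF NOT EXISTS datamart")

facts = spark.read.parquet("/mnt/raw/fact_sales/")
facts.write.mode("overwrite").saveAsTable("datamart.fact_sales")

# ...and then query it with SQL (or point Power BI at the cluster):
spark.sql("""
    SELECT store_id, SUM(amount) AS total_sales
    FROM datamart.fact_sales
    GROUP BY store_id
""").show()
```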
So my question: for a small data mart (10 dims, 5 facts), why would I choose to pay for an additional database solution like Azure SQL, Synapse, RDS or another when I could simply leave the data in a table in Databricks and access it directly from my reporting tool?
Thank you in advance.
Andy
Yes, this is very much possible. Just to let you know, SQL Azure and Synapse are both Microsoft offerings, but they serve different purposes: Synapse supports MPP, so it is geared towards big-data implementations. Also, it's not only how many dimension and fact tables you have that matters; how much data you have, what kind of aggregations it needs, etc., also become decisive.
I have several external data APIs that I access using some Python scripts. My scripts run from an on-premises server, transform the data, and store it in a SQL Server database on the same server. I suppose it's a rudimentary ETL system run with Python and T-SQL.
The system is about to grow quite a bit with new APIs and will require more complex data pipelines (for example, some of the API data will be spun off to more than one table). I think this would be a good time to move the system onto Azure (we are heavily integrated with Microsoft so it will have to be Azure!).
I have spent a few days researching the Azure products that would let me run Python scripts to access data from web APIs and store the processed data in a cloud database. I'm looking for advice on what sort of Azure products other people have used for similar jobs. At the moment it seems I will need:
Azure SQL Database to hold the processed data that can be accessed by various colleagues.
Azure Data Factory to manage, log, and schedule the pipeline jobs and to run my custom Python scripts (is this even possible?).
Azure Batch to run the aforementioned Python scripts, but I'm not sure about this.
I want to put together a proposal basically and start thinking about costs but it would be good to hear from someone who has done something similar - am I on the right track or completely off? Should I just stay on-premises? Thank you in advance.
Azure SQL Database and Azure SQL Data Warehouse are good for relational data. If you want to use NoSQL, you could go with Azure Cosmos DB. If you want to store the data as files, you could use Azure Data Lake.
For Python scripts, you could use a custom activity or Databricks with Azure Data Factory.
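Whichever of those runs it, the script itself can stay close to what you already run on-premises; a hedged sketch of the pull-transform-load step (the API endpoint, target table, and connection string are placeholders):

```python
import os
import requests
import pyodbc

# API endpoint, target table and connection string are placeholders.
resp = requests.get("https://api.example.com/v1/readings", timeout=30)
resp.raise_for_status()
rows = [(r["id"], r["timestamp"], r["value"]) for r in resp.json()]

conn = pyodbc.connect(os.environ["AZURE_SQL_CONNECTION_STRING"])
with conn:   # commits on successful exit
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO dbo.readings (source_id, reading_ts, reading_value) VALUES (?, ?, ?)",
        rows,
    )
```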
Azure SQL Data Warehouse should be used if the amount of data you want to load is in the petabyte range. Also, Azure SQL Data Warehouse is not meant for complex transformations; I would recommend it for plain data loads with PolyBase.
I work for an energy provider company. Currently, we generate 1 GB of data per day in the form of flat files. We have decided to use Azure Data Lake Store to store our data, on which we want to do daily batch processing. My question is: what is the best way to transfer the flat files into Azure Data Lake Store? And after the data is pushed into Azure, is it a good idea to process the data with HDInsight Spark, i.e. the DataFrame API or Spark SQL, and finally visualize it with Azure?
For a daily load from a local file system I would recommend using Azure Data Factory Version 2. You have to install the Integration Runtime on premises (more than one for high availability), and you have to consider several security topics (local firewalls, network connectivity, etc.). Detailed documentation can be found here, and there are also some good tutorials available. With Azure Data Factory you can trigger your upload to Azure with a Get Metadata activity and use, for example, an Azure Databricks Notebook activity for further Spark processing.
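Once the files have landed in the lake, the daily Spark batch itself can be a simple job along these lines (the paths, storage account, and column names are assumptions):

```python
# Paths, storage account and column names are illustrative (ADLS Gen1 URI shown).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-batch").getOrCreate()

raw = spark.read.csv(
    "adl://myadls.azuredatalakestore.net/raw/2024-01-01/*.csv",
    header=True,
    inferSchema=True,
)

daily = (raw.groupBy("meter_id")
            .agg(F.sum("consumption_kwh").alias("total_kwh")))

daily.write.mode("overwrite").parquet(
    "adl://myadls.azuredatalakestore.net/curated/daily/2024-01-01/"
)
```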
Has anybody ever moved Google Analytics data into Azure? I have seen a handful of ways to do it, but I am not sure what I am getting myself into. The Google Analytics data is becoming quite large, and I am wondering whether it is best to leave it in Google storage and access it from Azure, or to move it to something like HDInsight or Data Lake. I need to join the data across several disparate data stores: SQL Azure, Blob, and Table Storage. I was also looking into Apache Drill and Presto as a possible solution to unify the data access. Just looking to see if anybody out there has dealt with this same issue and has any experience to share. Thanks!
Preface
I don't have experience with Presto, so I can only comment on the feasibility of doing this with Drill. Also, I have not used Azure services, so my advice is theoretical.
Drill Storage Plugins
Drill will allow you to perform any SQL queries you want on data originating from different sources, provided that each data source has a storage plugin. A storage plugin is simply a piece of code in Drill that allows you to interface with a data source. Since you are concerned with performing queries on 3 data sources, we need to determine whether each of those 3 data sources has a storage plugin.
SQL Azure
I assume SQL Azure has a JDBC driver for Java. If so, then Drill can be configured to use SQL Azure by following these instructions.
Azure Blob
Azure Blob storage has an implementation of the Hadoop filesystem API, which Drill uses to read data from file systems. So you could theoretically add the hadoop-azure jar and its dependencies (https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure/2.7.0) to Drill's classpath and configure Drill's DFS storage plugin to use it.
Additionally, the data in Azure Blob would have to be stored in a supported file format such as JSON, Parquet, CSV, or Hadoop sequence files.
Azure Table
This looks like Microsoft's custom NoSQL database. Currently Drill does not support it.
Conclusion
With a bit of work you could use Drill to query data on both Azure SQL and Blob, but not Azure Table.
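If you go that route, cross-source joins can then be submitted from Python through Drill's REST API; a purely theoretical sketch (the Drill host, the storage-plugin/workspace names `azuresql` and `dfs.ga`, and the columns are all assumptions):

```python
import requests

# Drill host, storage-plugin/workspace names and columns are assumptions.
query = """
    SELECT s.customer_id, s.revenue, g.sessions
    FROM azuresql.dbo.sales s
    JOIN dfs.ga.`analytics/2024/*.json` g
      ON s.customer_id = g.customer_id
"""

resp = requests.post(
    "http://drill-host:8047/query.json",
    json={"queryType": "SQL", "query": query},
    timeout=120,
)
resp.raise_for_status()
for row in resp.json()["rows"]:
    print(row)
```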
Currently my team is creating a solution that would use HDInsight. We will be getting 5 TB of data daily and will need to run some MapReduce jobs on this data. Would there be any performance/cost difference if our data were stored in Azure Table Storage instead of Azure HBase?
The main differences will be in both functionality and cost.
Azure Table Storage doesn't have a MapReduce engine attached to it, though of course you could take the MapReduce approach and write your own.
You can use Azure HDInsight to connect MapReduce to Table Storage. There are a couple of connectors around, including one written by me which is Hive-focused, requires some configuration, and may not suit your partition scheme (http://www.simonellistonball.com/technology/hadoop-hive-inputformat-azure-tables/), and a less performance-focused but more complete version from someone at Microsoft (http://blogs.msdn.com/b/mostlytrue/archive/2014/04/04/analyzing-azure-table-storage-data-with-hdinsight.aspx).
The main advantage of Table Storage is that you aren't constantly paying for processing.
If you use HBase, you will need to run a full cluster all the time, so there is a cost disadvantage. However, you will get some functionality and performance gains, plus you will have something a bit more portable should you wish to use other Hadoop platforms. You would also have access to a much greater range of analytic functionality with the HBase option.
HDInsight (HBase/Hadoop) uses Azure Blob storage, not Azure Table Storage. For your data storage you will be charged only the applicable Blob storage cost, based on your subscription.
P.S. Don't forget to delete your cluster once the job has completed, to avoid charges. Your data will persist in Blob storage and can be used by the next cluster you build.