What is meant by an 'integration dataset' in Azure Synapse Analytics?

I'm new to Synapse. I am using Azure Synapse and I have noticed that there is an option to import an 'integration dataset'.
I'm not sure exactly what it means or how it differs from some of the other options. I can't find anything about it in the Microsoft documentation. Can anyone please explain what it means?

Integration Datasets in Synapse are the equivalent of Datasets in Azure Data Factory.
An integration dataset is a named reference that specifies the location and structure of your data within a data store, and it can then be used by your pipeline activities and data flows. There are many data store options and connectors available for creating datasets, both internal to Azure and external.
Please read this link to know more about datasets: https://learn.microsoft.com/en-us/azure/data-factory/concepts-datasets-linked-services?tabs=data-factory
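Under the hood, an integration dataset is just a small JSON definition that points at a linked service (the connection) and describes the shape and location of the data. Below is a minimal sketch of what such a definition might look like for a delimited file in ADLS Gen2; the dataset name, linked service name, container, and file path are hypothetical placeholders, not values from the question.

```python
# Illustrative shape of an integration dataset definition (all names are hypothetical).
# The dataset references a linked service and adds location/format details on top of it.
dataset_definition = {
    "name": "SalesCsvDataset",                           # hypothetical dataset name
    "properties": {
        "type": "DelimitedText",                         # format of the underlying data
        "linkedServiceName": {
            "referenceName": "MyDataLakeLinkedService",  # hypothetical linked service
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",           # ADLS Gen2 location
                "fileSystem": "raw",                     # hypothetical container
                "folderPath": "sales/2023",
                "fileName": "sales.csv",
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": True,
        },
    },
}
```

Pipeline activities (for example a Copy activity) and mapping data flows then refer to this dataset by name instead of repeating the connection and path details.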

Related

Power Apps: Using a Form to enter data in DataLake, Data Factory or Synapse?

Is it possible to create an app or web form app using Power Apps that saves and retrieves information from Azure Data Lake, Synapse, or Data Factory?
Could you give any suggestions about this implementation, please?
I appreciate any help you can share!
Thanks so much!
There are multiple ways to import and export data into Microsoft Dataverse. You can use dataflows, Power Query, Azure Data Factory, Azure Logic Apps, and Power Automate. See Importing and exporting data and Import by bringing your own source file.
You can also configure dataflows to store their data in your organization's Azure Data Lake Storage Gen2 account. That article describes the general steps necessary to do so and provides guidance and best practices along the way.
Although I am only getting started with Power Apps myself, you can check out Create, edit, or configure forms using the form designer for the form side of things.
In short: use Dataverse as the source/sink in ADF, and add Dataverse as the data source for your forms.

Why would I not use Databricks as my data mart?

I'm trying to get my head around Databricks.
I've found documentation stepping through importing data from S3 or Azure Datalake, and then outputting into Azure Synapse Analytics or another Data Warehouse solution.
After a quick play, I've recognised that you can simply save a table in Databricks, access it using SQL, and even pull it into PowerBI as a source.
So my question: for a small Datamart (10 dims, 5 facts), why would I choose to pay for an additional database solution like Azure SQL, Synapse, RDS or other when I could simply leave the data in a table in Databricks and then access it directly from my reporting tool from there?
Thank you in advance.
Andy
Yes, this is very much possible. Just note that SQL Azure and Synapse may both be Microsoft offerings, but they serve different purposes: Synapse supports MPP, so it is aimed at big-data-scale implementations. Also, it is not only about how many dimension and fact tables you have; how much data you have, what kind of aggregations you run, and so on are just as decisive.
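For reference, the pattern the question describes (persisting a table in Databricks and then querying it with SQL, just as a reporting tool would) can be sketched roughly as below; the lake path, database, and table names are hypothetical placeholders.

```python
# Minimal PySpark sketch: persist a dimension table in Databricks and query it with SQL.
# The path, database, and table names are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Load raw data from the lake (path is a placeholder).
raw = spark.read.parquet("abfss://raw@mylake.dfs.core.windows.net/customers/")

# Save it as a managed table so BI tools can reach it through the SQL endpoint.
spark.sql("CREATE DATABASE IF NOT EXISTS retail_mart")
raw.write.mode("overwrite").saveAsTable("retail_mart.dim_customer")

# Query it back with plain SQL, exactly as a reporting tool such as Power BI would.
spark.sql("SELECT COUNT(*) AS row_count FROM retail_mart.dim_customer").show()
```

Whether that is enough for a small data mart then comes down mostly to concurrency, query latency, and the cost of keeping compute available for interactive reporting.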

MERGE (UPSERT) - AZURE SQL DWH

I have been working for a long time with on-premises DWH solutions and am now moving to Azure DWH.
Right now I am doing most of the processing/transformation in Azure Databricks and writing the result set to Azure SQL DWH staging tables.
Now I want to MERGE (UPSERT) the dimensions and load the fact tables.
As MERGE is not supported in Azure SQL DWH, what is the best way to accomplish this?
MERGE is not supported in Azure SQL DWH; the Azure SQL DWH team has said they are planning to support this feature.
Reference: MERGE statement support.
I found this blog where MSFT gives an example of using UPDATE/INSERT statements instead of MERGE.
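As a rough sketch of that UPDATE/INSERT pattern driven from Python with pyodbc: the connection string, schemas, table names, and key column below are hypothetical placeholders, and SQL DWH has had restrictions on some UPDATE join forms, so treat this only as an illustration of the two-step idea.

```python
# Sketch of an UPSERT into Azure SQL DWH without MERGE: UPDATE matching rows first,
# then INSERT the rows that do not exist yet. All object names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydw;UID=myuser;PWD=mypassword"
)
cursor = conn.cursor()

# 1) Update dimension rows that already exist, using the staging table.
cursor.execute("""
    UPDATE dbo.DimCustomer
    SET    CustomerName = stg.CustomerName,
           City         = stg.City
    FROM   stg.DimCustomer AS stg
    WHERE  dbo.DimCustomer.CustomerKey = stg.CustomerKey
""")

# 2) Insert rows that are in staging but not yet in the target.
cursor.execute("""
    INSERT INTO dbo.DimCustomer (CustomerKey, CustomerName, City)
    SELECT stg.CustomerKey, stg.CustomerName, stg.City
    FROM   stg.DimCustomer AS stg
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS tgt
                       WHERE tgt.CustomerKey = stg.CustomerKey)
""")

conn.commit()
```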
Hope this helps.

Google Analytics data in Azure

Has anybody ever moved Google Analytics data into Azure? I have seen a handful of ways to do it but I am not sure what I am getting myself into. The Google Analytics data is becoming quite large and I am wondering if it is best suited to leave it in google storage and access it from Azure or move it to something like HDInsight or Data Lake. I need to join the data across several disparate data stores, SQL Azure, Blob, and Table Storage. I was also looking into Apache Drill and Presto as a possible solution to unify the data access. Just looking to see if anybody out there has dealt with this same issue and has any experience to share. Thanks!
Preface
I don't have experience with Presto, so I can only comment on the feasibility of doing this with Drill. Also, I have not used Azure services, so my advice is theoretical.
Drill Storage Plugins
Drill will allow you to perform any SQL queries you want on data originating from different sources, provided that each data source has a storage plugin. A storage plugin is simply a piece of code in Drill that allows you to interface with a data source. Since you are concerned with performing queries on 3 data sources, we need to determine whether each of those 3 data sources has a storage plugin.
SQL Azure
I assume SQL Azure has a JDBC driver for Java. If so, then Drill can be configured to use SQL Azure by following these instructions.
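For illustration only: registering a JDBC storage plugin in Drill comes down to supplying a small configuration, either through the Storage tab of the Drill web UI or its REST endpoint. The plugin name, JDBC URL, driver class, and credentials below are hypothetical, and this assumes the SQL Server JDBC driver jar has been placed on Drill's classpath.

```python
# Sketch: register a hypothetical JDBC storage plugin for SQL Azure via Drill's REST API
# (default web port 8047). The same JSON can be pasted into the web UI's Storage tab.
import requests

plugin_config = {
    "type": "jdbc",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",   # assumes MS JDBC driver on Drill's classpath
    "url": "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb",  # hypothetical
    "username": "myuser",
    "password": "mypassword",
    "enabled": True,
}

resp = requests.post(
    "http://localhost:8047/storage/sqlazure.json",   # assumed Drill storage-plugin endpoint
    json={"name": "sqlazure", "config": plugin_config},
)
print(resp.status_code, resp.text)
```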
Azure Blob
Azure Blob storage has an implementation of the Hadoop FileSystem API, which Drill uses to read data from file systems. So you could theoretically add the hadoop-azure jar and its dependencies https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure/2.7.0 to Drill's classpath and configure Drill's DFS storage plugin to use it.
Additionally, the data in Azure Blob would have to be stored in a supported file format such as JSON, Parquet, CSV, or Hadoop sequence files.
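As a sketch of what that DFS plugin configuration might look like (the container and account names are hypothetical, and this assumes the hadoop-azure jar is on Drill's classpath and the storage account key is set via fs.azure.account.key.<account>.blob.core.windows.net in Drill's core-site.xml):

```python
# Sketch of a dfs-type storage plugin pointed at Azure Blob through the hadoop-azure (WASB) driver.
# Hypothetical names throughout; paste the resulting JSON into Drill's Storage tab
# (or post it to the same /storage/<name>.json endpoint shown above).
import json

blob_plugin = {
    "type": "file",
    "connection": "wasbs://mycontainer@myaccount.blob.core.windows.net",  # hypothetical container/account
    "workspaces": {
        "root": {"location": "/", "writable": False, "defaultInputFormat": None}
    },
    "formats": {
        "json": {"type": "json"},
        "parquet": {"type": "parquet"},
        "csv": {"type": "text", "extensions": ["csv"]},
    },
    "enabled": True,
}

print(json.dumps(blob_plugin, indent=2))
```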
Azure Table
This looks like Microsoft's custom NoSQL database. Currently Drill does not support it.
Conclusion
With a bit of work you could use Drill to query data on both Azure SQL and Blob, but not Azure Table.

Azure Data Factory - moving data from On-Premise SQL to Azure SQL

A simple question: can this be achieved directly? I mean without Azure Blob storage in between (as shown in all the examples)? Can someone provide a code example, please?
Yes, you can do this directly. In fact, you can do direct copies between any of our supported sources/sinks; you don't have to pass through Blob. To go from on-premises SQL Server to SQL Azure, you will need to set up a Data Management Gateway connector on your on-premises server. Then you use a linked service of type AzureSqlDatabase and an output dataset of type AzureSqlTable, instead of AzureBlob as shown in the example. The exact steps to set up the DMG and the JSON code for the linked services, datasets, and pipelines can be found in our documentation. We are also improving our UI in the near future to make these kinds of copy setups an easy, code-free experience.
https://azure.microsoft.com/en-us/documentation/articles/data-factory-sqlserver-connector/
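To give a feel for the pieces involved, here is a rough sketch of the ADF v1-era JSON (expressed as Python dicts) for the output side of such a copy. All names, connection strings, and table names are hypothetical placeholders; the linked documentation has the complete, authoritative examples, including the on-premises input dataset and the pipeline with its Copy activity (SqlSource to SqlSink).

```python
# Hypothetical sketch of the linked services and output dataset for on-prem SQL -> SQL Azure.

onprem_sql_linked_service = {
    "name": "OnPremSqlLinkedService",
    "properties": {
        "type": "OnPremisesSqlServer",
        "typeProperties": {
            "connectionString": "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True",
            "gatewayName": "MyDataManagementGateway",   # the DMG installed on the on-prem server
        },
    },
}

azure_sql_linked_service = {
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=myuser;Password=***"
        },
    },
}

output_dataset = {
    "name": "AzureSqlOutputTable",
    "properties": {
        "type": "AzureSqlTable",                        # instead of AzureBlob
        "linkedServiceName": "AzureSqlLinkedService",
        "typeProperties": {"tableName": "dbo.Customers"},
        "availability": {"frequency": "Hour", "interval": 1},
    },
}
```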
