Is it possible to use Visual Studio for Azure Data Factory? - azure

I am new to Azure. I would like to learn the architecture deployed in my company, which is shown in the diagram below. Can anyone point me to a video example or something similar that reflects this diagram? I also have access to the Azure portal with some credit, so if possible I could create a test environment based on that diagram.
P.S. Is it possible to use Visual Studio for any kind of work based on that diagram, or does everything have to be created and developed from the Azure portal?
Datasource Oracle DB --> on prem gateway --> ADF--> Azure DB --> AAS --> PowerBI
SQL EDP --------------------------------------^

You've got a fairly straightforward BI architecture there with the following logical components:
raw / source data
integration
data mart / dimensional model
semantic
visualisation
The physical components can be described like this:
Oracle database - former market leader database product. I would guess your employers have rejected OBIEE for some reason
Self-hosted Integration Runtime (SHIR) / On-premises data gateway - these components enable the movement of data from on-prem data sources to the cloud. A SHIR must be used when moving data from on-prem to Azure SQL DB using Data Factory; use the SHIR with Data Factory, and the gateway with Power BI and Azure Analysis Services.
Data Factory - Azure's ELT tool for moving data from place to place. Its ETL feature, Data Flow, is currently in preview.
Azure SQL DB - PaaS SQL database, scalable via service tiers. If your data in Oracle is not already in a data mart / dimensional format, then it can be made so here
Azure Analysis Services (AAS) - PaaS OLAP in-memory engine, scalable for fast slice-and-dice, drill down and semantic modelling. Tabular only.
Power BI - increasingly powerful visualisation tool. Run dashboards in DirectQuery / Live Connection mode to avoid entirely duplicating the AAS tabular model in Power BI.
In answer to some of your questions: you can have one Azure Data Factory with many pipelines. The Visual Studio Azure Data Factory project type is now defunct.
As to "why" for certain technologies:
why Oracle - Who knows.
why SHIR - SHIR is compulsory when moving data from on-prem to cloud with ADF
why Azure SQL DB - lightweight and powerful PaaS DB requiring no infra and low TCO; scalable. Might be location for restructuring of data from raw / relational structure to dimensional in readiness for semantic layer if your data is not already in that format in Oracle
why AAS - fast, in-memory slice-and-dice; scalable, can pause, can be interrogated by Excel, Power BI Desktop, SSMS, VS, other clients etc. Optionally has row-level security (RLS)
Power BI - online service Power BI.com offers easy sharing within organisation, even externally.
why all the components together - you could (in theory) go straight from Oracle to Power BI with a Power BI gateway (I think), BUT you would then have to do all the modelling in Power BI, and your model would only really be accessible from Power BI. In the architecture above, users with SQL skills can query the data mart, users with DAX (or Excel, or Power BI Desktop) skills can query the AAS tabular model, AAS is a very scalable component, etc.
These opinions are strictly my own personal ones and the value of them may go down, as well as up.
HTH

Azure Data Factory has a 1:M relationship with data sources: one instance of Azure Data Factory supports multiple data movement capabilities (see Data movement activities).
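To make that 1:M shape concrete, here is a rough sketch of adding one more copy pipeline to a single factory via the azure-mgmt-datafactory Python SDK. All resource, dataset, and pipeline names are placeholders, and the linked services and datasets are assumed to already exist in the factory.

```python
# A rough sketch, assuming the linked services and the two datasets
# ("OracleSourceDataset", "AzureSqlSinkDataset") already exist in the factory.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineResource, CopyActivity, DatasetReference, OracleSource, AzureSqlSink,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One factory, many pipelines: each create_or_update call adds another
# pipeline to the same Data Factory instance.
copy_step = CopyActivity(
    name="CopyOracleToAzureSql",
    inputs=[DatasetReference(reference_name="OracleSourceDataset")],
    outputs=[DatasetReference(reference_name="AzureSqlSinkDataset")],
    source=OracleSource(),
    sink=AzureSqlSink(),
)
adf.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "OracleToSqlPipeline",
    PipelineResource(activities=[copy_step]),
)
```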
Information about the On-Premises Data Gateway:
The on-premises data gateway acts as a bridge, providing secure data transfer between on-premises data sources and your Azure Analysis Services servers in the cloud. In addition to working with multiple Azure Analysis Services servers in the same region, the latest version of the gateway also works with Azure Logic Apps, Power BI, Power Apps, and Microsoft Flow. You can associate multiple services in the same subscription and same region with a single gateway.
Connecting to on-premises data sources with Azure On-premises Data Gateway

Related

Can we use Microsoft Purview and Unity Catalog together

Unity Catalog is the Azure Databricks data governance solution for the Lakehouse, whereas Microsoft Purview provides a unified data governance solution to help manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data.
Question: In our same Azure Cloud project, can we use Unity Catalog for the Azure Databricks Lakehouse, and use Microsoft Purview for the rest of our Azure project?
Update: In our current Azure subscription, we have divided the workload as follows:
Scenario 1 - SQL-related workload: we do all our SQL database work using Databricks only (no Azure SQL databases are involved). That is, we use the Databricks Lakehouse, Delta Lake, Databricks SQL etc. to perform ETL and all data analytics work.
Scenario 2 - All non-SQL workload: all other assets (Excel files, CSV files, PDFs, media files etc.) are stored in various Azure storage accounts.
MS Purview is doing a good job of scanning assets in scenario 2 above, and it easily creates a holistic, up-to-date map of our data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. It also enables our data consumers to access valuable, trustworthy data.
However, almost 50% of our work (SQL, ETL, data analytics etc.) is done in Azure Databricks, where we have significant challenges with Purview. We were wondering if it's possible to keep Purview and Unity Catalog separate as follows: Purview does its Data Governance work for scenario 2 only, and Unity Catalog does its Data Governance work for scenario 1 only.
This recently released update may resolve our issue of making Purview work better with Azure Databricks but we have not tried it yet: Connect to and manage Azure Databricks in Microsoft Purview (Preview)
As of right now there is no official integration between Unity Catalog and Purview yet, but it may come in the future. You may join the Azure Databricks roadmap webinar that will be held tomorrow to get more information.
Regarding the actual question - imho, nothing prevents you from using UC & Purview in the same Azure project.
P.S. You can get metadata & lineage information into Purview by loading data from information schema tables and using Purview APIs to store it in Purview.
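As a rough illustration of that P.S. (not an official integration), one could pull table names from Unity Catalog's information_schema and push them to Purview using the pyapacheatlas wrapper over the Purview Atlas APIs. The entity type and qualified-name scheme below are assumptions made for the sketch.

```python
# A hedged sketch only: the qualified-name scheme and the generic "DataSet"
# Atlas type below are assumptions, not an official UC-to-Purview mapping.
from pyapacheatlas.auth import ServicePrincipalAuthentication
from pyapacheatlas.core import PurviewClient, AtlasEntity

auth = ServicePrincipalAuthentication(
    tenant_id="<tenant-id>", client_id="<client-id>", client_secret="<secret>",
)
purview = PurviewClient(account_name="<purview-account>", authentication=auth)

# Table metadata as you might have pulled it from Unity Catalog's
# system.information_schema.tables (hard-coded here for brevity).
tables = [("main", "sales", "orders"), ("main", "sales", "customers")]

entities = [
    AtlasEntity(
        name=tbl,
        typeName="DataSet",  # generic built-in Atlas type; a real setup would define a custom type
        qualified_name=f"databricks://{catalog}.{schema}.{tbl}",
        guid=f"-{i}",        # negative guid => create as a new entity
    )
    for i, (catalog, schema, tbl) in enumerate(tables, start=1)
]
purview.upload_entities(batch=[e.to_json() for e in entities])
```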

Azure Synapse, Azure Analysis Services and Power BI

Could someone clarify whether Azure Analysis Services is still an architecture component we have to consider if we decide to adopt Azure Synapse as our DWH environment?
The question is meant to establish whether there is a best practice in place for connecting Power BI to Synapse while avoiding the maintenance of another layer (e.g. Analysis Services).
The feature set isn't 100% identical, but at a high level a Power BI dataset = an AAS database. They use the same engine, so you only need to maintain a separate AAS instance if there is a feature currently available in AAS that is not yet implemented in Power BI.
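One quick way to see the shared engine in action: the same DAX you would send to an AAS database can be sent to a Power BI dataset, for example via the documented executeQueries REST endpoint. A minimal sketch, assuming you already have a dataset id, an AAD access token with the appropriate Power BI scope, and a 'Sales' table in the model (all placeholders here):

```python
# Minimal sketch: run a DAX query against a Power BI dataset over REST,
# just as you would run DAX against an AAS tabular database.
import requests

dataset_id = "<dataset-guid>"
token = "<aad-access-token>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries",
    headers={"Authorization": f"Bearer {token}"},
    json={"queries": [{"query": "EVALUATE TOPN(5, 'Sales')"}]},
)
resp.raise_for_status()
print(resp.json()["results"][0]["tables"][0]["rows"])
```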

Distinct difference between Azure Databricks and Azure Synapse Analytics

Can someone explain the distinct difference between these two products in all major aspects? As far as I am aware from reading the official documents, both can host database systems and provide data cleaning pipelines? Both are in the cloud?
Databricks:
Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. Designed with the founders of Apache Spark, Databricks is integrated with Azure to provide one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.
Synapse Analytics:
Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.
They do overlap to some extent, but they are not the same thing. Databricks is pretty much managed Apache Spark, whereas Synapse Analytics is managed SQL Data Warehouse.
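To make the contrast concrete, here is a rough sketch of the two programming surfaces: a Spark DataFrame job as you would write it in Databricks, versus the T-SQL you would submit to a Synapse dedicated SQL pool. The table name is invented.

```python
# Sketch only; 'sales.orders' is an invented table.
from pyspark.sql import SparkSession, functions as F

# Databricks: the Spark DataFrame API (a SparkSession named `spark` is
# pre-created for you in a Databricks notebook).
spark = SparkSession.builder.getOrCreate()
daily_revenue = (
    spark.table("sales.orders")
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"))
)

# Synapse (dedicated SQL pool): the same aggregation expressed as T-SQL,
# submitted through whatever SQL client you use.
synapse_sql = """
    SELECT order_date, SUM(amount) AS revenue
    FROM sales.orders
    GROUP BY order_date;
"""
```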

Azure Data Factory architecture with Azure SQL database to Power BI

I'm no MS expert - recently hopped onto the Azure train and apologies in advance if I get some information wrong.
Basically I need some input on an Azure architecture utilising Azure Data Factory (as the ETL/ELT tool) and Azure SQL Database (as the storage), feeding a BI output - Power BI. My situation is this:
I have on-premise data sources such as Oracle DB, Oracle Cloud SSAS, MS SQL server db
I'd like to have a MS cloud infrastructure solution for reporting purposes.
No data migration needed - merely pumping on-prem data onto cloud and producing a BI reporting solution
Based on my limited knowledge and Google research, Azure Data Factory caters for all my on-prem sources, as well as the future cloud Azure SQL database. If future analysis is needed, Azure Storage and Azure Databricks can be added in to this architecture. I have sketched out the architecture of my proposed solution.
Just confirming my understanding
Without Azure Storage & Databricks (the 2 pink boxes), the 2 Azure components (DF & SQL database) are sufficient to take data from on-premise sources, process it in the cloud & output it to Power BI.
With Azure Storage & Databricks (the 2 pink boxes), processing will be more efficient, as their function in summary is to store data and trained models & to act as an analytics processing engine.
Azure SQL Database is more suitable than Azure SQL Data Warehouse, as my data sources do not exceed 1 TB; it is cheaper cost-wise, AND one of my data sources contains data from call centres, hence OLTP is more suitable. Plus I have Azure Databricks to support the analytical side that SQL Data Warehouse covers (OLAP).
Any other comments to help me understand this whole architecture will be great!
Taking data from on-prem or IaaS sources like SQL Server on a VM, Oracle, etc. requires a Self-Hosted Integration Runtime (SHIR).
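For illustration, a SHIR can be registered against a factory with the azure-mgmt-datafactory Python SDK before installing the agent on the on-prem machine; a sketch with placeholder names follows.

```python
# A sketch with placeholder names; after this you still download the SHIR
# agent onto the on-prem machine and register it with one of the keys printed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource, SelfHostedIntegrationRuntime,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

adf.integration_runtimes.create_or_update(
    "my-resource-group", "my-data-factory", "OnPremSHIR",
    IntegrationRuntimeResource(
        properties=SelfHostedIntegrationRuntime(
            description="SHIR for Oracle / SQL-on-VM sources",
        )
    ),
)

# Keys used to register the on-prem agent with this runtime:
keys = adf.integration_runtimes.list_auth_keys(
    "my-resource-group", "my-data-factory", "OnPremSHIR"
)
print(keys.auth_key1)
```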
Please review the Modern Data Warehouse pattern which sounds similar to what you are proposing.

tutorials on migrating SQL 2008 BI-stack to Azure SQL Data Warehouse?

Are there any tutorials available on the subject of migrating from an existing BI stack based on SQL Server 2008 to Azure SQL Data Warehouse? I'm specifically interested in best practices for handling cross-database joins on non-premium tiers (our existing procedures and UDFs are full of joins across multiple database objects) and for migrating existing SSAS cubes and their related programmability and ETL.
What BI-stack are you using? This will determine your next steps for the actual BI tools.
Specifically for cross-database queries when moving to the cloud, the guidance is to move the databases into schemas and then update your scripts to use schema-based (two-part names) rather than database-based (three-part names) references when addressing objects. For example, if you have staging and production databases, you can simply move your staging objects into a [staging] schema within a single database.
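As a sketch of that rewrite (object names invented), the cross-database three-part-name join becomes a schema-qualified two-part-name join inside the single database, runnable from any client, e.g. pyodbc:

```python
# Sketch with invented object names: the former cross-database join
# (Production.dbo.FactOrders x Staging.dbo.DimCustomer) becomes a
# schema-qualified join inside one database.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydw;"
    "UID=<user>;PWD=<password>"
)

# Before (on-prem, three-part names across databases):
#   SELECT f.order_id, d.customer_name
#   FROM Production.dbo.FactOrders AS f
#   JOIN Staging.dbo.DimCustomer AS d ON d.customer_id = f.customer_id;
#
# After (one database, two-part schema-qualified names):
rows = conn.execute("""
    SELECT f.order_id, d.customer_name
    FROM production.FactOrders AS f
    JOIN staging.DimCustomer AS d ON d.customer_id = f.customer_id;
""").fetchall()
```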
Azure SQL Data Warehouse is commonly used as a backing store for SSAS cubes (MOLAP/ROLAP/Tabular mode). In the Azure cloud, customers have created IaaS SQL Server VMs to host ETL process (SSIS) and cubes (SSAS) with direct connections to SQL Data Warehouse.
