I am evaluating how to implement a data governance solution with Azure Data Catalog for a Data Lake batch transformation pipeline. Below is my approach. Any insights, please?
Two constraints I am aware of: Data Factory can't capture lineage from the source to the Data Lake, and Data Catalog can't maintain business rules for data curation on the Data Lake.
First, the data feed is onboarded manually in Azure Data Catalog under a given business glossary term, etc. Alternatively, when a raw data feed is ingested into Data Lake Storage, the asset is created automatically under that business glossary term (if it does not already exist).
The raw data is cleaned, classified, and tagged during a light transformation on the lake, so the related tags need to be created in Data Catalog (custom code calling the Azure Data Catalog REST APIs).
Then comes the ETL processing, which is Spark based. New data assets are created and tagged in Data Catalog (again custom code calling the Azure Data Catalog REST APIs; see the sketch below). Finally, Data Catalog displays all data assets created by the Data Lake batch transformation pipeline under the specific business glossary term with the right tags.
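For reference, here is a minimal sketch of what that custom registration/tagging code could look like. It assumes Python with the requests library, an Azure AD bearer token acquired elsewhere, and the default catalog; the endpoint, api-version, and payload shape are illustrative and should be checked against the Azure Data Catalog REST API documentation.

```python
import requests

# Assumed/illustrative values: the catalog name, api-version, and payload shape
# should be verified against the Azure Data Catalog REST API docs.
ADC_BASE = "https://api.azuredatacatalog.com/catalogs/DefaultCatalog"
API_VERSION = "2016-03-30"


def register_lake_asset(token, asset_name, adls_folder_url, tags):
    """Register (or update) a Data Lake folder as a catalog asset with tags."""
    payload = {
        "properties": {
            "fromSourceSystem": False,
            "name": asset_name,
            "dataSource": {"sourceType": "Azure Data Lake Store", "objectType": "Folder"},
            "dsl": {
                "protocol": "webhdfs",
                "authentication": "oauth",
                "address": {"url": adls_folder_url},
            },
        },
        "annotations": {
            # Tags produced by the light transformation / ETL classification steps.
            "tags": [{"properties": {"tag": t}} for t in tags],
        },
    }
    response = requests.post(
        f"{ADC_BASE}/views/tables",
        params={"api-version": API_VERSION},
        headers={"Authorization": f"Bearer {token}"},
        json=payload,
    )
    response.raise_for_status()
    # The service returns the asset's URI in the Location header on success.
    return response.headers.get("Location")
```

The same pattern covers both steps above: register the asset when the raw feed lands, then post additional tag annotations against the returned asset URI as the light transformation and ETL jobs classify the data.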
I am skipping operational metadata and full lineage, as there is no out-of-the-box solution for them in the Azure offerings; this would need to be a custom solution again.
I am looking for the best practice. Appreciate your thoughts.
Many thanks
Cengiz
Related
We've designed a data architecture for our client on Azure in which we ingest the sources into a Raw Layer consisting of an Azure SQL Database. This Azure SQL Database acts as a source mirror and has near-real-time sync enabled.
We also have an ODS layer, which is populated from the previously mentioned Azure SQL Database (source mirror) as per the given data model. This layer should ideally take anywhere between 30 minutes and 1 hour to load.
How do I handle the concurrent writes and reads on the Raw Layer (the Azure SQL Database source mirror)? It syncs with the sources every 5 minutes but is also read every 30 minutes to 1 hour to load the ODS layer.
I have to use Azure Data Factory to implement my data loads.
Yes, Azure Data Factory is a good fit for such scenarios. It's a cloud-based ETL and data integration service that lets you build data-driven workflows for orchestrating data movement and transforming data at scale. You can use Azure Data Factory to design and schedule data-driven workflows (also known as pipelines) that can consume data from a variety of sources. With data flows or compute services such as Azure HDInsight Hadoop, Azure Databricks, and Azure SQL Database, you can build sophisticated ETL processes that transform data visually.
When using control flow, you can use a Get Metadata activity to get a list of files in a storage account, then pass that list to a ForEach activity with the Sequential setting set to false. The files are then processed concurrently (in parallel), up to the maximum batch size, by the activities defined inside the ForEach loop.
Here is the official Microsoft documentation: Azure Data Factory Connectors Overview | Docs
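To make the Get Metadata / ForEach pattern above concrete, here is a trimmed-down sketch of such a pipeline definition, written as a Python dictionary for readability (the real artifact is ADF pipeline JSON). The activity and dataset names are placeholders; the property names (fieldList, childItems, isSequential, batchCount, items) follow the ADF schema.

```python
# Sketch of an ADF pipeline: Get Metadata lists the files in a folder,
# then a ForEach with isSequential=False copies them in parallel.
# "ListSourceFiles", "SourceFolderDataset", and "CopyOneFile" are placeholders.
pipeline = {
    "name": "ParallelFileLoad",
    "properties": {
        "activities": [
            {
                "name": "ListSourceFiles",
                "type": "GetMetadata",
                "typeProperties": {
                    "dataset": {"referenceName": "SourceFolderDataset", "type": "DatasetReference"},
                    "fieldList": ["childItems"],  # returns the list of files in the folder
                },
            },
            {
                "name": "CopyEachFile",
                "type": "ForEach",
                "dependsOn": [{"activity": "ListSourceFiles", "dependencyConditions": ["Succeeded"]}],
                "typeProperties": {
                    "items": {
                        "value": "@activity('ListSourceFiles').output.childItems",
                        "type": "Expression",
                    },
                    "isSequential": False,  # process the files concurrently
                    "batchCount": 20,       # maximum parallel iterations
                    "activities": [
                        {"name": "CopyOneFile", "type": "Copy"}  # source/sink omitted for brevity
                    ],
                },
            },
        ]
    },
}
```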
I am performing ETL in Azure Data Factory and I just wanted to confirm my understanding of it before going further. Please find the image attached below.
I am collecting data from multiple sources and storing it in Azure Blob Storage, then performing transformation and loading. What I am confused about is whether Azure Blob Storage is a landing area or a staging area in my case. Some people use these terms interchangeably, and I can't see the fine line between the two.
Also, can anyone explain which part is Extract, which is Transform, and which is Load? In my understanding, collecting the data from multiple sources and storing it in Azure Blob Storage is Extract, Azure Data Factory is Transform, and copying the transformed data into the Azure database is Load. Am I correct, or is there something I am misunderstanding here?
What I am confused about is whether Azure Blob Storage is a landing or staging area in my case.
In your case, Azure Blob Storage is both the landing area and the staging area. A landing area is an area that collects data from different places. A staging area only holds data for a short time; staged data should be deleted during the ETL process.
Also, can anyone explain which part is Extract, Transform, and Load?
The Copy activity is a typical ETL building block. Looking only at the Copy activity of Azure Data Factory: after you specify the copy source, ADF reads the data from it, which is 'Extract'. The part where ADF transfers the data to the specified sink according to your settings is 'Load', and the mapping and conversion applied during the copy is the 'Transform'. Looking at your entire process, collecting data into Blob Storage is also 'Extract'.
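As an illustration of that mapping, here is a stripped-down Copy activity definition, again expressed as a Python dictionary rather than the actual ADF JSON. The dataset and column names are placeholders; the source side corresponds to 'Extract', the sink side to 'Load', and the translator (column mapping) is where the limited 'Transform' happens.

```python
# Hypothetical Copy activity: Blob Storage (landing/staging) -> Azure SQL Database.
copy_activity = {
    "name": "CopyBlobToAzureSql",
    "type": "Copy",
    "inputs": [{"referenceName": "LandingBlobDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "AzureSqlTargetDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "DelimitedTextSource"},  # Extract: read the files from Blob Storage
        "sink": {"type": "AzureSqlSink"},           # Load: write the rows to Azure SQL Database
        "translator": {                             # Transform: column mapping / type conversion
            "type": "TabularTranslator",
            "mappings": [
                {"source": {"name": "cust_id"}, "sink": {"name": "CustomerId"}},
                {"source": {"name": "cust_name"}, "sink": {"name": "CustomerName"}},
            ],
        },
    },
}
```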
We've been reviewing the Modern Data Warehouse architectures from Microsoft (link here), which reference using Azure Data Factory to pull structured and unstructured data into Azure Data Lake. I've attended a lot of presentations on the subject as well, but most people are split on whether the Data Lake is a good home for structured data. What I am trying to determine is whether importing data into the Data Lake is a good strategy if the only sources we will be utilizing are on-prem SQL Server databases, and what the advantages and disadvantages of that strategy would be.
For context's sake, we're looking for a single pane of glass for consumption, whether it's end users reporting with Power BI or fodder for Azure Data Warehouse / an on-prem data warehouse. We want one container that is the source for all of these systems, which is not the source OLTP system (i.e., OLTP database --> (Azure Data Factory) --> Data Lake --> everything else).
I appreciate any guidance on the subject. Thank you.
You have not mentioned the data size, and I think for moving to Azure Data Lake (ADL) the data volume is a very strong parameter. In your case the data is very much structured. If you had massive, unstructured data and wanted to use Azure Databricks (ADB), Hadoop, or any other technology to process it later, I think ADL would be a good candidate.
You should also consider that the data is encrypted in motion using SSL/TLS. You can authorize users and groups with fine-grained POSIX-based ACLs for all data in the store, enabling role-based access control.
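For example, a minimal sketch of setting such an ACL, assuming ADLS Gen2 and the azure-storage-file-datalake Python SDK (the answer above refers to Azure Data Lake Store Gen1, whose SDK differs); the account, filesystem, folder, and group object ID are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholders: storage account, filesystem, folder, and AAD group object ID.
service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
directory = service.get_file_system_client("raw").get_directory_client("sales/orders")

# Grant an analysts AAD group read+execute on the folder via a POSIX ACL entry,
# keep full access for the owning user, and deny access to everyone else.
directory.set_access_control(
    acl="user::rwx,group::r-x,group:<analysts-group-object-id>:r-x,other::---"
)
```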
The only real value in taking structured data, flattening it, and loading it into a data lake is to save cost and decouple the data from any proprietary tool/compute. In your scenario, it will be less expensive to store the data in a Data Lake Store than in an Azure SQL Database.
However, there is a complexity cost to flattening the data. You will need to restructure the data (i.e., load it back into a database, or wrap a logical structure around it) when you need to consume it. Formats such as Parquet will help with this, but it is more complex for users to query data in a data lake than it is to connect to a relational database. Almost all analysts and data consumers will know how to query a relational database, especially if the data is already in SQL Server.
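As a concrete illustration of the flattening step, here is a minimal PySpark sketch that copies one SQL Server table into Parquet files in a data lake. The JDBC URL, table, credentials, and lake path are placeholders, and Spark is only one option here (the ADF Copy activity can also write Parquet).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver-to-lake").getOrCreate()

# Placeholders: server, database, table, credentials, and lake path.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-sql01:1433;databaseName=Sales")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Flatten into columnar Parquet in the lake; consumers either query the files
# directly (Spark, data warehouse external tables) or load them back into a database.
orders.write.mode("overwrite").parquet(
    "abfss://raw@<account>.dfs.core.windows.net/sales/orders"
)
```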
Look at the volume of data and the consumption use cases to make that decision. A "logical data lake" can include structured data in a relational database, semi-structured data flattened into a storage account, and unstructured data saved to a storage account.
I have two custom-code DLLs for images coming from IP cams.
DLL one: extracts images from the IP cams and stores them in Azure Data Lake Store, like:
/adls/clinic1/patientimages
/adls/clinic2/patientimages
DLL two: uses those images, extracts information from them, and loads the data into RDBMS tables.
So, for instance, in the RDBMS say there are entities dimpatient, dimclinic, and factpatientVisit.
To start, a one-time export can be written to defined locations in Azure Data Lake Store, like:
/adls/dimpatient
/adls/dimclinic
/adls/factpatientVisit
Question:
How do I push incremental data into the same files, or how can I handle this incremental load in Azure Data Lake Analytics? This is like implementing a warehouse in Azure Data Lake Analytics.
Note: Azure SQL DB or any other storage offered by Azure is not what I want. I mean, why spend on other Azure services if one type of storage is capable of holding all types of data?
(adls is the name of my ADLS account.)
I am not sure I completely understand your question, but you can organize your data files in Azure Data Lake Store, or your rows in partitioned U-SQL tables, along a time dimension, so you can add a new partition/file for each increment. In general, though, we recommend that such increments be of substantial size to preserve the ability to scale.
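Here is a minimal sketch of the file-organization option, assuming PySpark and a daily increment; the staging path, dataframe, and partition column are illustrative, and the same idea maps to a date-partitioned U-SQL table if you prefer tables over raw files.

```python
from datetime import date
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-visits").getOrCreate()

# Hypothetical daily increment of patient visits staged by dll-two.
load_date = date.today().isoformat()
visits_increment = spark.read.csv(
    "adl://adls.azuredatalakestore.net/staging/factpatientVisit", header=True
)

# Append the increment as a new time partition instead of rewriting one big file,
# e.g. /factpatientVisit/load_date=2019-05-01/part-*.parquet
(
    visits_increment
    .withColumn("load_date", F.lit(load_date))
    .write.mode("append")
    .partitionBy("load_date")
    .parquet("adl://adls.azuredatalakestore.net/factpatientVisit")
)
```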
Can anyone tell me how Azure Data Factory datasets are cleaned up (removed, deleted, etc.)? Is there any policy or setting to control it?
From what I can see, all the time-series dataset slices are left behind intact.
Say I want to develop an activity that overwrites data daily in the destination folder in Azure Blob or Data Lake storage (for example, a folder that is mapped to an external table in Azure Data Warehouse and holds a full data extract). How can I achieve this with just the Copy activity? Should I add a custom .NET activity to clean up the no-longer-needed datasets myself?
Yes, you would need a custom activity to delete pipeline output.
You could have pipeline activities that overwrite the same output, but you must be careful to understand how the ADF slicing model and activity dependencies work, so that anything using the output gets a clear and repeatable set.
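For what it's worth, here is a minimal sketch of the cleanup logic such a custom activity could run. The original context is an ADF custom .NET activity; this uses Python with the azure-storage-blob SDK purely to illustrate the idea, and the connection string, container, and folder prefix are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Placeholders: connection string, container, and the destination folder (prefix)
# that the next full extract is about to overwrite.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("warehouse-extracts")


def clear_destination(prefix: str) -> None:
    """Delete every blob under the destination folder before the next full load."""
    for blob in container.list_blobs(name_starts_with=prefix):
        container.delete_blob(blob.name)


clear_destination("full-extract/")
```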