The Azure Databricks Delta Live Tables tab is missing from my Databricks notebooks. Why?
Delta Live Table access control is available only in the Premium plan or above.
Enabling access control for Delta Live Tables allows pipeline owners to control pipeline access, including permissions to view pipeline details, start and stop pipeline updates, and manage pipeline settings.
This article describes individual permissions and how to configure pipeline access control.
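For what it's worth, once your plan supports it, those pipeline permissions can also be set programmatically. A minimal sketch against the Databricks Permissions REST API (the workspace URL, token, pipeline ID, and user are placeholders, and the endpoint path and permission levels are assumptions worth verifying in the Databricks docs):

```python
import requests

# Placeholders - substitute your own workspace URL, PAT, and pipeline ID.
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"
PIPELINE_ID = "<pipeline-id>"

# Grant a user view-only access to a Delta Live Tables pipeline.
# The Permissions API also accepts CAN_RUN, CAN_MANAGE, and IS_OWNER levels.
resp = requests.patch(
    f"{WORKSPACE_URL}/api/2.0/permissions/pipelines/{PIPELINE_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "access_control_list": [
            {"user_name": "analyst@example.com", "permission_level": "CAN_VIEW"}
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```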
We have different resources (storage account, logic app, SQL database, SQL server, Synapse workspace) under a directory and a subscription (let's call them directory_1 and subscription_1).
The resources are used to perform simple ETL pipelines.
We want to move all these resources to a new directory and subscription (directory_2, subscription_2). Everything moves correctly except the Synapse workspace.
When we try to access it, it shows this error:
Failed to load one or more resources due to no access, error code 403.
Pipeline
Related service
Trigger
Data flow
Dataset
Credentials
SQL script
Spark job definition
Synapse KQL Scripts
Notebook
Lake databases
Both accounts (from directory_1 and directory_2) have the Owner and Contributor roles on the Azure Synapse workspace and on the resource group as well.
Any idea how to fix this?
Unfortunately, you cannot transfer an entire Azure Synapse Analytics workspace to another subscription.
Before moving Azure resources to another subscription, check whether the resource type supports the move operation by checking this Microsoft doc.
According to it, Microsoft does not support moving an Azure Synapse workspace to another resource group, subscription, or region.
This may be the reason you are getting that error.
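If you want to check programmatically before attempting a move, Resource Manager exposes a validateMoveResources operation. A rough sketch, with placeholder IDs and an api-version you should verify (the call is asynchronous: it returns a 202 plus a Location header to poll, not an immediate verdict):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders - substitute your own subscription, resource group, and resource IDs.
SOURCE_SUB = "<subscription_1-id>"
SOURCE_RG = "<source-resource-group>"
TARGET_RG_ID = "/subscriptions/<subscription_2-id>/resourceGroups/<target-resource-group>"
SYNAPSE_WS_ID = (
    f"/subscriptions/{SOURCE_SUB}/resourceGroups/{SOURCE_RG}"
    "/providers/Microsoft.Synapse/workspaces/<workspace-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Ask Resource Manager to validate the move without actually moving anything.
resp = requests.post(
    f"https://management.azure.com/subscriptions/{SOURCE_SUB}"
    f"/resourceGroups/{SOURCE_RG}/validateMoveResources",
    params={"api-version": "2021-04-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={"resources": [SYNAPSE_WS_ID], "targetResourceGroup": TARGET_RG_ID},
)
# 202 means validation was accepted; poll the Location header for the result.
print(resp.status_code, resp.headers.get("Location"))
```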
I recommend upvoting the request submitted by another Azure customer in the forum below.
Transfer an entire Azure Synapse Analytics workspace to another subscription · Community
Reference:
Transfer an entire Azure Synapse Analytics workspace to another subscription - Microsoft Q&A
Whenever I create a Source in an activity in a Synapse pipeline, the Linked Service tab gives me the option either to create a new linked service or to select one from the dropdown (as shown below). That dropdown includes a default linked service (shown below) named MySynapseWorkspaceName-WorkspaceDefaultStorage (where MySynapseWorkspaceName is the name of the Synapse workspace that you create).
It seems that MySynapseWorkspaceName-WorkspaceDefaultStorage is the linked service that gets created when you specify an Azure Data Lake Storage Gen2 (ADLSGen2) account for your Synapse workspace.
Question: If a dataset for the source or destination (sink) of an activity in a Synapse pipeline is an ADLS Gen2 storage account, can we just select the default linked service MySynapseWorkspaceName-WorkspaceDefaultStorage for that dataset? Or could choosing this linked service (created for the Synapse workspace) for other datasets cause an issue, meaning we should avoid using it for other datasets inside our Synapse workspace?
From your comment, I understood that you want to know whether the same linked service can be used in both the source and sink datasets.
Unfortunately, you cannot use the same linked service in both the source and the sink. It may cause an issue, and hence you should avoid reusing the same linked service.
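If you do end up creating a dedicated linked service for the sink instead of reusing the workspace default, it can also be done programmatically. A minimal sketch against the Synapse Artifacts REST API (the names, payload shape, and api-version are assumptions to check against the docs):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders - substitute your own workspace, linked service name, and storage URL.
WORKSPACE = "<my-synapse-workspace>"
LS_NAME = "SinkAdlsGen2LinkedService"
DEV_ENDPOINT = f"https://{WORKSPACE}.dev.azuresynapse.net"

token = DefaultAzureCredential().get_token("https://dev.azuresynapse.net/.default").token

# Define an ADLS Gen2 linked service; this shape relies on the workspace
# managed identity for auth (an assumption - add explicit credentials as needed).
payload = {
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {"url": "https://<storage-account>.dfs.core.windows.net"},
    }
}

resp = requests.put(
    f"{DEV_ENDPOINT}/linkedservices/{LS_NAME}",
    params={"api-version": "2020-12-01"},
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())
```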
I am trying to test out Azure Purview and connect it to an Azure SQL server. Since the SQL server is hosted in the cloud, I want to use the default AutoResolve integration runtime to get connected, but there is not one set up, nor an option to set up a new one. Has anyone else using Purview been able to set up (or needed to set up) an AutoResolve IR?
To connect to Azure SQL DB/MI, you can go directly to the Azure Purview portal, register new data sources, and select Azure SQL DB/MI.
In this article - Manage data sources in Azure Purview (Preview), you learn how to register new data sources, manage collections of data sources, and view sources in Azure Purview (Preview).
You only need to set up a self-hosted integration runtime to scan an on-premises SQL Server data source.
If the data source is located on Azure, you don't need any integration runtime to scan the data source.
Reference: Register and scan an Azure SQL Database.
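For completeness, the same registration can be scripted against the Purview scanning endpoint instead of clicking through the portal. A rough sketch, with placeholder names and an api-version that may differ from the preview-era API described in the article, so verify both against the current docs:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders - substitute your own Purview account, source name, and server.
PURVIEW_ACCOUNT = "<my-purview-account>"
SOURCE_NAME = "MyAzureSqlDb"
ENDPOINT = f"https://{PURVIEW_ACCOUNT}.purview.azure.com"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

# Register an Azure SQL Database data source; no integration runtime is
# needed because the source lives in Azure.
resp = requests.put(
    f"{ENDPOINT}/scan/datasources/{SOURCE_NAME}",
    params={"api-version": "2022-02-01-preview"},
    headers={"Authorization": f"Bearer {token}"},
    json={
        "kind": "AzureSqlDatabase",
        "properties": {"serverEndpoint": "<my-server>.database.windows.net"},
    },
)
resp.raise_for_status()
print(resp.json())
```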
CHEEKATLAPRADEEP-MSFT is absolutely correct. To go a step further: since you know what an AutoResolve integration runtime is, you are probably using Azure Data Factory, so in addition to registering your SQL server, you can also link your Azure Data Factory for data lineage purposes. Based on the pipelines that are executed, it will automatically create the data lineage.
[Screenshot: navigation to Link Data Factory]
[Screenshot: data lineage created by linking Data Factory]
Keep in mind, you will have to execute pipelines after linking for it to pick up the data lineage. Also, for sources or destinations that are not yet supported, it will not capture the data lineage.
I've debugged my ADF pipeline.
The pipeline contains four Copy activities and two Data Flows.
After the debug run finished, I switched to Azure Purview to look at the changes made to the Data Factory, and I was able to see the pipeline.
But when I go into the pipeline in Azure Purview, all the activities and the Data Flows appear with lineage except one Data Flow.
This Data Flow sinks into a SQL table that doesn't exist, so it auto-creates the table.
Is this the reason why it isn't appearing in Purview?
At this time, lineage into auto-created tables from a data flow is not a supported scenario. This is on the Azure Purview roadmap, but unfortunately we do not have an ETA right now.
I am attempting to spin up an Azure Synapse pool in Terraform. From the documentation at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/synapse_sql_pool, it appears you have to use a Synapse workspace, which also includes Data Factory integration, Power BI, etc.
Right now we just want the data warehouse, not all the other bells and whistles. As you can see in the Azure portal, you are free to spin up a Synapse Analytics DW with or without a workspace (see the right image in the box, "formerly SQL DW"):
When you spin that up, you simply have a standalone DW...
Any insight on getting just the data warehouse, as you can in the portal, without the workspace and related pieces?
I am not a Terraform guy. As for Synapse, you are referring to the new one that is in preview. The new one has the workspace, which supports SQL pools, Spark clusters, and pipelines. Although they are supported, they are not created when you deploy a Synapse workspace.
So you can go ahead and create the workspace and one SQL pool, and you will get what you're looking for: the data warehouse engine, named SQL Pool.
Some extra notes: there are two types of SQL data warehouse in Synapse Analytics: SQL Pools and SQL on-demand. The first is provisioned compute and is the traditional one with all the features. SQL on-demand is still in preview, doesn't have all the features, and is charged per terabyte processed by your queries.
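I can't give you the Terraform itself, but to illustrate the shape of "workspace plus one SQL pool", here is a rough sketch with the azure-mgmt-synapse Python SDK (method names and parameter shapes are assumptions that vary across SDK versions; every name here is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient

# Placeholders - substitute your own subscription, resource group, and names.
client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The workspace is required, but only what you create inside it
# (here, a single dedicated SQL pool) actually runs and bills as compute.
ws = client.workspaces.begin_create_or_update(
    "<resource-group>",
    "<workspace-name>",
    {
        "location": "eastus",
        "identity": {"type": "SystemAssigned"},
        "default_data_lake_storage": {
            "account_url": "https://<adls-account>.dfs.core.windows.net",
            "filesystem": "<filesystem>",
        },
        "sql_administrator_login": "sqladmin",
        "sql_administrator_login_password": "<password>",
    },
).result()

# One dedicated SQL pool = the data warehouse engine (formerly SQL DW).
pool = client.sql_pools.begin_create(
    "<resource-group>",
    "<workspace-name>",
    "<pool-name>",
    {"location": "eastus", "sku": {"name": "DW100c"}},
).result()
print(pool.status)
```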
Happy data crunching!