Databricks SQL Editor "Failure to initialize configuration" - databricks

When I'm trying to select something from one specific table in SQL Editor, I'm getting an error "Failure to initialize configuration".
The query is as simple as select * from table_name. I also tried with limits and/or selecting specific columns, but got the same error.
If I switch to "Data Science & Engineering" and execute the same query using a regular cluster in a notebook, everything works.

Edit the Spark Config by entering the connection information for your Azure Storage account.
This will allow your cluster to access the files. Enter the following:
spark.hadoop.fs.azure.account.key.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net <ACCESS_KEY>, where <STORAGE_ACCOUNT_NAME> is your Azure Storage account name, and <ACCESS_KEY> is your storage access key.
If using Azure Key Vault, you can create a Key Vault-backed secret scope (https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes) and reference the values with the following syntax in your Spark config: {{secrets/<scope-name>/<secret-name>}}
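For example, assuming a hypothetical storage account mystorageaccount and a secret scope my-scope containing a secret named storage-account-key, the Spark config entry would look like either of the following (use only one):
spark.hadoop.fs.azure.account.key.mystorageaccount.blob.core.windows.net <ACCESS_KEY>
spark.hadoop.fs.azure.account.key.mystorageaccount.blob.core.windows.net {{secrets/my-scope/storage-account-key}}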

Related

Azure Data Factory Copy Snowflake to Azure blob storage ErrorCode 2200 User configuration issue ErrorCode=SnowflakeExportCopyCommandValidationFailed

Need help with this error.
ADF copy activity, moving data from Snowflake to Azure blob storage as delimited text.
I am able to preview the snowflake source data. I am also able to browse the containers via sink browse. This doesn't look like an issue with permissions.
ErrorCode=SnowflakeExportCopyCommandValidationFailed,
'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,
Message=Snowflake Export Copy Command validation failed:
'The Snowflake copy command payload is invalid.
Cannot specify property: column mapping,
Source=Microsoft.DataTransfer.ClientLibrary,'
Thanks for your help
Clearing the mapping from the copy activity worked.
To resolve the above error, please check the following:
Check the copy command you are using; the error means the copy command is not valid.
To get more detail about the error, try the following:
COPY INTO MYTABLE VALIDATION_MODE = 'RETURN_ERRORS' FILES=('result.csv');
Check if you have any issue with the data files before loading the data again.
Try checking the storage account you are currently using and note that Snowflake doesn't support Data Lake Storage Gen1.
Use the COPY INTO command to copy the data from the Snowflake database table into an Azure blob storage container.
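If you are not using a storage integration, a direct unload with a SAS token looks roughly like the following sketch (the account, container, table, and token here are placeholders, not values from the original question):
copy into 'azure://myaccount.blob.core.windows.net/mycontainer/unload/'
from mytable
credentials = (azure_sas_token = '<sas-token>')
file_format = (type = csv);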
Note:
Use the blob.core.windows.net endpoint for all supported types of Azure blob storage accounts, including Data Lake Storage Gen2.
Make sure to have either ACCOUNTADMIN role or a role with the global CREATE INTEGRATION privilege to run the below sample command:
copy into 'azure://myaccount.blob.core.windows.net/mycontainer/unload/' from mytable storage_integration = myint;
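The storage integration referenced above (myint) has to exist first; a minimal sketch of creating it, with the tenant ID as a placeholder and the allowed location matching the unload path (an illustration, not the exact command from the original answer):
create storage integration myint
  type = external_stage
  storage_provider = 'AZURE'
  enabled = true
  azure_tenant_id = '<your-azure-ad-tenant-id>'
  storage_allowed_locations = ('azure://myaccount.blob.core.windows.net/mycontainer/unload/');
After creating it, run DESC STORAGE INTEGRATION myint to get the consent URL and grant Snowflake access to the storage account in Azure.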
When the copy command uses a storage integration like this, you do not need to include credentials to access the storage.
For more detail, please refer to the links below:
Unloading into Microsoft Azure — Snowflake Documentation
COPY INTO <table> — Snowflake Documentation

Azure Synapse serverless SQL pool - query execution fails

After completing tutorial 1, I am working on tutorial 2 from the Microsoft Azure team to run the following query (shown in step 3). But the query execution gives the error shown below:
Question: What may be the cause of the error, and how can we resolve it?
Query:
SELECT
TOP 100 *
FROM
OPENROWSET(
BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
FORMAT='PARQUET'
) AS [result]
Error:
Warning: No datasets were found that match the expression 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet'. Schema cannot be determined since no files were found matching the name pattern(s) 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet'. Please use WITH clause in the OPENROWSET function to define the schema.
NOTE: The path of the file in the container is correct; in fact, I generated the query above just by right-clicking the file inside the container and generating the script.
Remarks:
Azure Data Lake Storage Gen2 account name: contosolake
Container name: users
Firewall settings used on the Azure Data Lake account:
Azure Data Lake Storage Gen2 account is allowing public access (ref).
Container has the required access level (ref).
UPDATE:
The owner of the subscription is someone else, and I did not get the option to check the "Assign myself the Storage Blob Data Contributor role on the Data Lake Storage Gen2 account" box described in item 3 of the Basics tab > Workspace details section of tutorial 1. I also do not have permissions to add roles, although I am the owner of the Synapse workspace. So I am using the workaround described in Configure anonymous public read access for containers and blobs from the Azure team.
--Workaround
If you are unable to grant Storage Blob Data Contributor, use ACLs to grant permissions.
All users that need access to some data in this container also need to have the EXECUTE permission on all parent folders up to the root (the container). Learn more about how to set ACLs in Azure Data Lake Storage Gen2.
Note:
Execute permission at the container level needs to be set within Azure Data Lake Gen2. Permissions on the folder can be set within Azure Synapse.
Go to the container holding NYCTripSmall.parquet.
--Update
As per your update in the comments, it seems you would have to do the following.
Contact the owner of the storage account, and ask them to perform the following tasks:
Assign the workspace MSI the Storage Blob Data Contributor role on the storage account.
Assign you the Storage Blob Data Contributor role on the storage account.
--
I was able to get the query results by following the tutorial doc you mentioned, for the same dataset.
Since you confirm that the file is present and the path is right, refresh the linked ADLS source and publish the query before running, in case it is a transient issue.
Two things I suspect are:
Try setting Microsoft network routing in the Network routing settings of the ADLS account.
Check if the built-in pool is online and you have at least Contributor roles on both the Synapse workspace and the storage account (if the credentials used to run the query did not create the resources).

Empty error while executing SSIS package in Azure Data Factory

I have created a simple SSIS project and in this project, I have a package that will delete a particular file in the Downloads folder.
I deployed this project to Azure, and when I try to execute this package using Azure Data Factory, the pipeline fails with an empty error (I am attaching the screenshot here).
What I have done so far to try to fix this error:
I have added self-hosted IR to Azure-SSIS IR as the proxy to access the data on-premise.
Set the ConnectByProxy as True.
Converted the project to Project Deployment Model.
Please help me out to fix this error and if you need more details then just leave a comment.
Windows Authentication:
To access data stores such as SQL Servers/file shares on-premises or Azure Files, check the Windows authentication check box.
If this check box is selected, fill in the Domain, Username, and Password fields with the values for your package execution credentials. For example, to access Azure Files, the domain is Azure, the username is <storage account name>, and the password is <storage account key>.
Using the secrets stored in your Azure Key Vault
As a substitute, you can leverage secrets from your Azure Key Vault as values. Select the AZURE KEY VAULT check box next to them to do so. Create a new key vault linked service, or choose or update an existing one. Then choose the secret name and version for your value. When creating or editing your key vault linked service, you can pick or update an existing key vault or create a new one. If you haven't previously done so, allow Data Factory managed identity access to your key vault. You may also directly enter your secret in the format <key vault linked service name>/<secret name>/<secret version>.
Note: If you are using Windows Authentication, there are four methods to access data stores with Windows authentication from SSIS packages running on your Azure-SSIS IR: Access data stores and file shares with Windows authentication from SSIS packages in Azure | Docs
Make sure it falls under one of these methods, else it could potentially fail at run time.

shell.azure.com is failing in configuration

While doing something, I got the option to execute shell commands from the Azure portal. It required configuring shell.azure.com the first time.
In the first step, it gives the option of selecting a subscription and creating storage. When I select the required subscription and click on create storage, it gives the error:
Error: 409
{"error":{"code":"StorageAccountAlreadyTaken", "message":"The storage account named ... is already taken"}}
Can't create a storage account. Please try again.
I tried multiple times but to no avail.
I opened Show advanced settings and tried to play with combinations, but there, using an existing storage account is disabled (in advanced settings) and create storage is also disabled.
PS: I have rights to create a storage account on the subscription, so that is not an issue.
I also faced the same issue before. You need to directly edit (manually type the name of) the existing storage account in the box; just ignore the "use existing" checkbox. It seems like a UI bug.
When you add the existing storage account in the UI, please make sure that the Cloud Shell region matches the storage account region. You can see the supported storage regions at https://learn.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage.
Refer to these similar threads:
Unable to open Cloud Shell because of Storage Account error
Azure Cloud shell requires storage account

Can't access database in Azure Portal - "This resource was not found"

I have an SQL database in Azure that I use with my API. I can access the database from SQL Server Management Studio, and it seems to be alright - I can select data, make modifications, and whatever. However, I can't access it from the Azure portal.
It doesn't appear when I list all resources in the Azure portal.
When I select the database server in the Azure portal, I can see it under the list of available databases. When I click the specific database, I get the following error message:
"This resource was not found, it may have been deleted. /subscriptions/aee8966e-5891-40fb-8fff-8c359f43baee/resourceGroups/TestResourceGroup/providers/Microsoft.Sql/servers/testdbserver/databases/testdb"
Any ideas?
Update 2018-05-16:
Jerry Liu suggested renaming the database, so I logged into SQL Server Management Studio, and this is what I found:
I wasn't allowed to rename any of the databases on the server using right click --> Rename. The option is disabled.
ALTER DATABASE carbonate_prod_180508 Modify Name = carbonate_renamed;
Running this command, I get the error message "The source database 'carbonate_prod_180508' does not exist.". But I do see the database listed in the Object Explorer.
I also tried renaming it using the Azure CLI. I can find the database using "az sql db list" with the resource group name and server name, but when I run "az sql db rename" I get an error message stating the database was not found.
If you have done some operation like deleting and recreating with the same name, or just moving to another resource group, please wait a while for the operation to complete.
Or if you have done nothing related, please try to rename your database in Management Studio and visit the portal to check whether it's "back".
How to Rename SQL DB
Two methods for you to try:
Click on the database name twice slowly, or click once and press F2, just like renaming a file locally on Windows.
Create a query on the master database to execute the alter operation (see the example after this list).
Wait for about 30 seconds and refresh the portal.
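For example (this is the same ALTER statement from the question, but run from a query window connected to the master database, which is where Azure SQL Database expects it):
ALTER DATABASE carbonate_prod_180508 MODIFY NAME = carbonate_renamed;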
If none of them works, I recommend you open a new support request to the Azure team. It's the last blade in your Azure SQL server panel.
