We're migrating from Blob Storage to ADLS Gen2, and we want to test access to the Data Lake from Databricks. I created a service principal which has the Storage Blob Data Reader and Storage Blob Data Contributor roles on the Data Lake.
My notebook sets the below spark config:
spark.conf.set("fs.azure.account.auth.type","OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type","org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id","<clientId")
spark.conf.set("fs.azure.account.oauth2.client.secret","<secret>")
spark.conf.set("fs.azure.account.oauth2.client.endpoint","https://login.microsoftonline.com/<endpoint>/oauth2/token")
// I replaced the placeholder values in my notebook with the correct values from my service principal
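As an aside, the same settings can also be scoped to a single storage account rather than set globally. Here is a hedged Python sketch of that variant; the secret-scope name "my-scope", the key "sp-secret", and the <tenant-id> placeholder are assumptions, not taken from the question:

# Hedged sketch: the same OAuth settings, scoped to one storage account.
account = "<storage account name>.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", "<clientId>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}",
               dbutils.secrets.get(scope="my-scope", key="sp-secret"))  # keeps the secret out of the notebook
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")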
When I run the code below, the contents of the directory are shown correctly:
dbutils.fs.ls("abfss://ado-raw@<storage account name>.dfs.core.windows.net")
I can read a small text file from my data lake, which is only 3 bytes,
but when I try to show its contents, the cell gets stuck at "Running command..." and nothing happens.
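For context, the read/show step being described would typically look something like this; a hedged sketch, where the exact file path under the ado-raw container is an assumption:

# Hedged sketch of the read/show step; the file path is an assumption.
path = "abfss://ado-raw@<storage account name>.dfs.core.windows.net/test/small.txt"
df = spark.read.text(path)   # defines the read
display(df)                  # forces the file contents to be fetched; this is the step that hangs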
What do you think the issue is, and how do I resolve it?
Thanks in advance
The issue was that the private and public subnets had been deleted by mistake and then recreated using a different IP range. They need to be on the same range as the management subnet; otherwise the private endpoint set up for the storage account won't work.
Hey everyone, I am trying to connect Power BI to my Data Lake Gen2 on Azure. I am assigned the Storage Blob Data Contributor as well as the Storage Blob Data Reader role at the storage account level. I am not sure if I am doing something wrong; I also followed the MS docs, yet nothing works:
https://learn.microsoft.com/en-us/power-query/connectors/datalakestorage
The URL is formatted according to this format: https://<accountname>.dfs.core.windows.net/<container>/<filename>
To resolve the "Access to the resource is forbidden" error, try the following:
As suggested by Etienne Oosthuysen, check the date-time settings of your system.
As per documentation:
Only this format is supported: https://<accountname>.dfs.core.windows.net/<container>
It does not support a filename or subfolder, like https://<accountname>.dfs.core.windows.net/<container>/<filename> or https://<accountname>.dfs.core.windows.net/<container>/<subfolder>
You can refer to Get Data from Azure Data Lake Gen 2 : Access to the resource is forbidden
After disabling both "Enable soft delete for blobs" and "Enable soft delete for containers" in the Azure storage account, the connection issue was resolved.
To change the settings:
Go to the Azure storage account ---> Data protection ---> untick "Enable soft delete for blobs" and "Enable soft delete for containers".
I am new to the Azure Data Lake Storage Gen2 service. I have a storage account with the "Hierarchical namespace" option enabled.
I am using AzCopy to move some files and folders. From the command line I can, within the address string, use either the "blob" or the "dfs" string token:
'https://myaccount.blob.core.windows.net/mycontainer/myfolder'
or
'https://myaccount.dfs.core.windows.net/mycontainer/myfolder'
again within the .\azcopy.exe copy command.
"Apparently" both ways succeed giving the same result. My question is: is there any difference if I use blob or adf in the address string? If yes, what is it?
Also, whatever string token I choose, in the Azure portal a file address is always given with the blob string token..
thanks
In the storage account's Endpoints page, you can see all the available endpoints for you to use for its services.
Both blob and dfs work for you because both of them are supported in Azure Data Lake Storage Gen2. However, in Gen1 you may only have the blob service but not the dfs service available, in which case you won't be able to use the dfs endpoint.
blob and dfs represent the resource type (the service) in the endpoint URL.
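As an illustration, here is a hedged Python sketch showing that the two endpoints address the same underlying data through different services (the account name "myaccount" and the use of DefaultAzureCredential are assumptions):

# Hedged sketch: one ADLS Gen2 account reached through its two endpoints.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()

# Blob endpoint: the flat, blob-oriented view of the account
blob_service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential=credential)
print([c.name for c in blob_service.list_containers()])

# DFS endpoint: the hierarchical-namespace (directory-aware) view of the same data
dfs_service = DataLakeServiceClient("https://myaccount.dfs.core.windows.net", credential=credential)
print([fs.name for fs in dfs_service.list_file_systems()])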
After completing tutorial 1, I am working on tutorial 2 from the Microsoft Azure team to run the following query (shown in step 3), but the query execution gives the error shown below:
Question: What may be the cause of the error, and how can we resolve it?
Query:
SELECT
TOP 100 *
FROM
OPENROWSET(
BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
FORMAT='PARQUET'
) AS [result]
Error:
Warning: No datasets were found that match the expression 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet'. Schema cannot be determined since no files were found matching the name pattern(s) 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet'. Please use WITH clause in the OPENROWSET function to define the schema.
NOTE: The path of the file in the container is correct; in fact I generated the above query just by right-clicking the file inside the container and generating the SELECT script.
Remarks:
Azure Data Lake Storage Gen2 account name: contosolake
Container name: users
Firewall settings used on the Azure Data Lake account:
The Azure Data Lake Storage Gen2 account allows public access (ref)
The container has the required access level (ref)
UPDATE:
The owner of the subscription is someone else, and I did not get the option to check the "Assign myself the Storage Blob Data Contributor role on the Data Lake Storage Gen2 account" box described in item 3 of the Basics tab > Workspace details section of tutorial 1. I also do not have permission to add roles, although I am the owner of the Synapse workspace. So I am using the workaround described in Configure anonymous public read access for containers and blobs from the Azure team.
--Workaround
If you are unable to grant the Storage Blob Data Contributor role, use ACLs to grant permissions.
All users that need access to some data in this container also need
to have the EXECUTE permission on all parent folders up to the root
(the container). Learn more about how to set ACLs in Azure Data Lake
Storage Gen2.
Note:
Execute permission on the container level needs to be set within the
Azure Data Lake Gen2. Permissions on the folder can be set within
Azure Synapse.
Go to the container holding NYCTripSmall.parquet.
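For illustration, here is a hedged Python sketch of setting such ACLs with the azure-storage-file-datalake SDK; the principal's object id is a placeholder, and granting r-x recursively from the root is broader than the minimum EXECUTE-on-parents-plus-READ-on-file described above:

# Hedged sketch: grant a principal read/execute ACLs on the "users" container
# so serverless SQL can reach NYCTripSmall.parquet. The object id is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    "https://contosolake.dfs.core.windows.net", credential=DefaultAzureCredential()
)
users_fs = service.get_file_system_client("users")

# "r-x" on a folder gives list + traverse (EXECUTE); "r-x" on a file gives READ.
acl_entry = "user:00000000-0000-0000-0000-000000000000:r-x"
users_fs.get_directory_client("/").update_access_control_recursive(acl=acl_entry)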
--Update
As per your update in the comments, it seems you will have to do the following.
Contact the Owner of the storage account, and ask them to perform the following tasks:
Assign the workspace MSI to the Storage Blob Data Contributor role on the storage account
Assign you to the Storage Blob Data Contributor role on the storage account
--
I was able to get the query results following the tutorial doc you have mentioned for the same dataset.
Since you confirm that the file is present and the path is right, refresh the linked ADLS source and publish the query before running, just in case it is a transient issue.
Two things I suspect are:
Try setting Microsoft network routing in the Network routing settings of the ADLS account.
Check if the built-in pool is online and you have at least Contributor roles on both the Synapse workspace and the storage account (if the credentials used to run the query did not create the resources).
I uploaded my CSV file into my Azure Data Lake Storage Gen2 using the Azure Synapse portal. Then I tried "Select TOP 100 rows" and got an error after running the auto-generated SQL.
Auto-generated SQL:
SELECT
TOP 100 *
FROM
OPENROWSET(
BULK 'https://accountname.dfs.core.windows.net/filesystemname/test_file/contract.csv',
FORMAT = 'CSV',
PARSER_VERSION='2.0'
) AS [result]
Error:
File 'https://accountname.dfs.core.windows.net/filesystemname/test_file/contract.csv'
cannot be opened because it does not exist or it is used by another process.
This error in Synapse Studio has a link underneath it (which leads to a self-help document) that explains the error itself.
Do you have the rights needed on the storage account?
You must have the Storage Blob Data Contributor or Storage Blob Data Reader role in order for this query to work.
Summary from the docs:
You need to have a Storage Blob Data Owner/Contributor/Reader role to
use your identity to access the data. Even if you are an Owner of a
Storage Account, you still need to add yourself into one of the
Storage Blob Data roles.
Check out the full documentation for Control Storage account access for serverless SQL pool
If your storage account is protected with firewall rules then take a look at this stack overflow answer.
Reference full docs article.
I just took your code, updated the path to what I have, and it worked just fine:
SELECT
TOP 100 *
FROM
OPENROWSET(
BULK 'https://XXX.dfs.core.windows.net/himanshu/NYCTaxi/PassengerCountStats.csv',
FORMAT = 'CSV',
PARSER_VERSION='2.0'
) AS [result]
Please check that the path to which you have uploaded the file and the one used in the script are the same.
You can do this to check:
Navigate to WS -> Data -> ADLS Gen2 -> go to the file -> right-click, open the properties, copy the URI from there, and paste it into the script.
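If you want to verify the path outside Synapse as well, here is a hedged Python sketch; the account, filesystem, and path are the placeholders from the question, and DefaultAzureCredential is an assumption:

# Hedged sketch: confirm the file actually exists at the URL used in OPENROWSET.
from azure.core.exceptions import ResourceNotFoundError
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeFileClient

file_client = DataLakeFileClient(
    account_url="https://accountname.dfs.core.windows.net",
    file_system_name="filesystemname",
    file_path="test_file/contract.csv",
    credential=DefaultAzureCredential(),
)

try:
    props = file_client.get_file_properties()
    print("Found:", props.name, props.size, "bytes")
except ResourceNotFoundError:
    print("The path in the script does not match where the file was uploaded.")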
I'm connecting ADF to blob storage v2 using a managed identity following this doc: Doc1
When it comes to testing the connection with my first dataset, I am successful when I test the connection to the linked service. When I try by file path and enter "testfolder" (which exists in the blob container), it fails, returning the generic forbidden error displayed at the end of this post.
However, when I opt to "browse" the folders in the dataset portal, the folder "testfolder" does show up. But when I select it, it will not show me anything within that folder.
The Data Factory managed identity is given the role of Contributor, granting full access to manage all resources. Is there some other hidden issue, or a possible way to narrow down the problem? My instinct is that this is something within the blob container, since I can view the containers but not their contents.
Error message:
It seems that you haven't granted the data factory a role on the Azure Blob Storage account.
Please follow this:
1. Click IAM in the Azure Blob Storage account, navigate to Role assignments, and add a role assignment.
2. Choose a role according to your need (for data access this is typically one of the Storage Blob Data roles) and select your data factory.
3. A few minutes later, you can retry choosing the file path.
Hope this can help you.