Checking if a queue exists - Azure

I have a very basic question about Windows Azure Storage Queue errors/access.
I am trying to find out whether a given storage account already contains a queue with a given name, say "queue1". I do not want to create the queue if it does not exist, so I am not keen on using the CreateIfNotExist method. The permissions I have given the SAS token are Process and Add (since all I want to do is add a new message to the queue only if it already exists, and throw an error otherwise).
The problem is that when I get a reference to a queue with a made-up name and add a message to it, I get a 403. A 403 can also occur when the SAS token does not have the required permissions, so I cannot be sure what is causing the error.
Is there a way I could explicitly check whether the queue exists or not?
I have tried the BeginExists and EndExists methods, but they always return false even when I can see that the queue is there.
Any suggestions?

The Get Queue Metadata REST API operation will return status code 200 if the queue exists or a Queue Service Error Code otherwise.
Regarding authorization:
This operation can be performed by the account owner and by anyone with a shared access signature that has permission to perform this operation.
A GET request to
https://myaccount.queue.core.windows.net/myqueue?comp=metadata
will return a response like:
Response Status:
HTTP/1.1 200 OK
Response Headers:
Transfer-Encoding: chunked
x-ms-approximate-messages-count: 0
Date: Fri, 16 Sep 2011 01:27:38 GMT
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
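If you are calling the REST API directly rather than through the SDK, a minimal sketch of that probe with HttpClient could look like the following. The account name, queue name, and SAS query string are placeholders, and your SAS must permit the Get Queue Metadata operation; this is an illustration, not the only way to wire it up.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: returns true on 200 (queue exists), false on 404 (queue missing), throws otherwise.
static async Task<bool> QueueExistsViaRestAsync(string sasQueryString)
{
    using var http = new HttpClient();
    // comp=metadata selects the Get Queue Metadata operation; the SAS is appended as query parameters.
    string url = "https://myaccount.queue.core.windows.net/myqueue?comp=metadata&"
                 + sasQueryString.TrimStart('?', '&');
    using HttpResponseMessage response = await http.GetAsync(url);
    if (response.StatusCode == HttpStatusCode.OK) return true;
    if (response.StatusCode == HttpStatusCode.NotFound) return false;
    throw new InvalidOperationException($"Unexpected status code: {(int)response.StatusCode}");
}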

Are you sure you're getting a 403 error even if the queue does not exist? Based on what you described above, I created a simple console app. The queue does not exist in my storage account. When I try to add a message with a valid SAS token, I get a 404 error:
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account", "key"), false);
CloudQueueClient client = storageAccount.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference("non-existent-queue");
var queuePolicy = new SharedAccessQueuePolicy();
var sas = queue.GetSharedAccessSignature(new SharedAccessQueuePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
    Permissions = SharedAccessQueuePermissions.Add | SharedAccessQueuePermissions.ProcessMessages | SharedAccessQueuePermissions.Update
}, null);
StorageCredentials creds = new StorageCredentials(sas);
var queue1 = new CloudQueue(queue.Uri, creds);
try
{
    queue1.AddMessage(new CloudQueueMessage("This is a test message"));
}
catch (StorageException excep)
{
    //Get 404 error here
}
Next, I made the SAS token invalid by setting its expiry to 30 minutes before the current time. Now when I run the application, I get a 403 error as expected.
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account", "key"), false);
CloudQueueClient client = storageAccount.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference("non-existent-queue");
var queuePolicy = new SharedAccessQueuePolicy();
var sas = queue.GetSharedAccessSignature(new SharedAccessQueuePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(-30), // -30 to ensure SAS is invalid
    Permissions = SharedAccessQueuePermissions.Add | SharedAccessQueuePermissions.ProcessMessages | SharedAccessQueuePermissions.Update
}, null);
StorageCredentials creds = new StorageCredentials(sas);
var queue1 = new CloudQueue(queue.Uri, creds);
try
{
    queue1.AddMessage(new CloudQueueMessage("This is a test message"));
}
catch (StorageException excep)
{
    //Get 403 error here
}
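Either way, you can tell the two failure modes apart programmatically by inspecting the HTTP status code on the exception. A minimal sketch reusing the queue1 object from the snippets above (the classic SDK surfaces the status via StorageException.RequestInformation):

try
{
    queue1.AddMessage(new CloudQueueMessage("This is a test message"));
}
catch (StorageException excep)
{
    // RequestInformation carries the HTTP status code returned by the service.
    int status = excep.RequestInformation.HttpStatusCode;
    if (status == 404)
    {
        // Queue does not exist.
    }
    else if (status == 403)
    {
        // SAS token is invalid, expired, or lacks the required permission.
    }
    else
    {
        throw;
    }
}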

There are now Exists and ExistsAsync methods (with various overloads).
Example of the former in use:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference(queueName);
bool doesExist = queue.Exists();
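For the async variant, a minimal sketch under the same assumptions (connectionString and queueName are placeholders):

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference(queueName);
// ExistsAsync performs the same check without blocking the calling thread.
bool doesExist = await queue.ExistsAsync();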
You will want a reference to the Microsoft.Azure.Storage.Queue package. (I believe the older 'cloud' assemblies may not have had these methods; initially I could only see ExistsAsync before I had referenced the right package. Once I had added the package above via NuGet, Exists was also available.)
For more details see the following links:
Exists
ExistsAsync

There is no Exists method in v12 either. I wrote a simple helper method to do the check:
// Requires: Azure, Azure.Storage.Queues, System.Net, System.Threading.Tasks
private async Task<bool> QueueExistsAsync(QueueClient queue)
{
    try
    {
        await queue.GetPropertiesAsync();
        return true;
    }
    catch (RequestFailedException ex)
    {
        if (ex.Status == (int)HttpStatusCode.NotFound)
        {
            return false;
        }
        throw;
    }
}
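A usage sketch under the same assumptions (the connection string is a placeholder, and "queue1" is the queue name from the original question); the message is only sent if the queue already exists:

// Hypothetical usage: add a message only when the queue already exists.
var queue = new QueueClient(connectionString, "queue1");
if (await QueueExistsAsync(queue))
{
    await queue.SendMessageAsync("This is a test message");
}
else
{
    // Surface an error instead of creating the queue.
    throw new InvalidOperationException("Queue 'queue1' does not exist.");
}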

Related

Authentication Failure when uploading to Azure Blob Storage using Azure.Storage.Blob v12.9.0

I get this error when trying to upload files to blob storage. The error is present both when I run on localhost and when I run in an Azure Function.
My connection string looks like:
DefaultEndpointsProtocol=https;AccountName=xxx;AccountKey=xxx;EndpointSuffix=core.windows.net
Authentication information is not given in the correct format. Check the value of the Authorization header.
Time:2021-10-14T15:56:26.7659660Z
Status: 400 (Authentication information is not given in the correct format. Check the value of Authorization header.)
ErrorCode: InvalidAuthenticationInfo
This used to work in the past, but recently it started throwing this error for a new storage account I created. My code looks like below:
public AzureStorageService(IOptions<AzureStorageSettings> options)
{
    _connectionString = options.Value.ConnectionString;
    _containerName = options.Value.ImageContainer;
    _sasCredential = new StorageSharedKeyCredential(options.Value.AccountName, options.Value.Key);
    _blobServiceClient = new BlobServiceClient(new BlobServiceClient(_connectionString).Uri, _sasCredential);
    _containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
}

public async Task<string> UploadFileAsync(IFormFile file, string location, bool publicAccess = true)
{
    try
    {
        await _containerClient.CreateIfNotExistsAsync(publicAccess
            ? PublicAccessType.Blob
            : PublicAccessType.None);
        var blobClient = _containerClient.GetBlobClient(location);
        await using var fileStream = file.OpenReadStream();
        // throws Exception here
        await blobClient.UploadAsync(fileStream, true);
        return blobClient.Uri.ToString();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
// To be able to do this, I have to create the container client via the BlobServiceClient, which was created along with the StorageSharedKeyCredential
public Uri GetSasContainerUri()
{
    if (_containerClient.CanGenerateSasUri)
    {
        // Create a SAS token that's valid for one hour.
        var sasBuilder = new BlobSasBuilder()
        {
            BlobContainerName = _containerClient.Name,
            Resource = "c"
        };
        sasBuilder.ExpiresOn = DateTimeOffset.UtcNow.AddHours(1);
        sasBuilder.SetPermissions(BlobContainerSasPermissions.Write);
        var sasUri = _containerClient.GenerateSasUri(sasBuilder);
        Console.WriteLine("SAS URI for blob container is: {0}", sasUri);
        Console.WriteLine();
        return sasUri;
    }
    else
    {
        Console.WriteLine(@"BlobContainerClient must be authorized with Shared Key
            credentials to create a service SAS.");
        return null;
    }
}
Please change the following line of code:
_blobServiceClient = new BlobServiceClient(new BlobServiceClient(_connectionString).Uri, _sasCredential);
to
_blobServiceClient = new BlobServiceClient(_connectionString);
Considering your connection string has all the necessary information, you don't really need to do all of this; you will be using the BlobServiceClient(String) constructor, which accepts the connection string directly.
You can also delete the following line of code:
_sasCredential = new StorageSharedKeyCredential(options.Value.AccountName, options.Value.Key);
and can probably get rid of AccountName and Key from your configuration settings if they are not used elsewhere.
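Put together, the constructor could be reduced to something like the sketch below. The field and property names are taken from the question's code; when the connection string contains the account key, the client can still generate SAS URIs, so the explicit credential is not needed:

// Simplified constructor: the connection string alone is enough for shared key auth.
public AzureStorageService(IOptions<AzureStorageSettings> options)
{
    _connectionString = options.Value.ConnectionString;
    _containerName = options.Value.ImageContainer;
    _blobServiceClient = new BlobServiceClient(_connectionString);
    _containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
}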

Azure Functions is not authorized to list blobs from a folder in a container in Data Lake

I would like to list blobs in a folder in a container in Azure Data Lake from an Azure Function.
For authenticating, I would like to use the system-assigned managed identity of the Azure Function. I have activated it on my Azure Function and, on the Data Lake side, given it the Storage Blob Data Contributor role.
Here is my code:
string dfsUri = "https://<myDataLake>.dfs.core.windows.net";
DataLakeClientOptions options = new DataLakeClientOptions(DataLakeClientOptions.ServiceVersion.V2019_07_07);
DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClient(new Uri(dfsUri), new Azure.Identity.DefaultAzureCredential(), options);
DataLakeFileSystemClient dataLakeFileSystemClient = dataLakeServiceClient.GetFileSystemClient("my-file-system");
IAsyncEnumerator<PathItem> enumerator = dataLakeFileSystemClient.GetPathsAsync("testfolder").GetAsyncEnumerator();
await enumerator.MoveNextAsync();
PathItem item = enumerator.Current;
while (item != null)
{
    log.LogInformation($"File Name {item.Name}.");
    if (!await enumerator.MoveNextAsync())
    {
        break;
    }
    item = enumerator.Current;
}
If I run my code, I get this error message:
This request is not authorized to perform this operation using this permission.
RequestId:cd00a570-401f-0024-4d21-35badb000000
Time:2021-04-19T13:39:30.8429070Z
Status: 403 (This request is not authorized to perform this operation using this permission.)
ErrorCode: AuthorizationPermissionMismatch
Headers:
Server: Windows-Azure-HDFS/1.0,Microsoft-HTTPAPI/2.0
x-ms-error-code: AuthorizationPermissionMismatch
x-ms-request-id: cd00a570-401f-0024-4d21-35badb000000
x-ms-version: 2019-07-07
x-ms-client-request-id: b0510f6a-5798-476c-a95e-6f206bf2a9cc
Date: Mon, 19 Apr 2021 13:39:29 GMT
Content-Length: 227
Content-Type: application/json; charset=utf-8
But at the container level, the following code works fine for me:
string dfsUri = "https://<myDataLake>.dfs.core.windows.net";
DataLakeClientOptions options = new DataLakeClientOptions(DataLakeClientOptions.ServiceVersion.V2019_07_07);
DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClient(new Uri(dfsUri), new Azure.Identity.DefaultAzureCredential(), options);
DataLakeFileSystemClient dataLakeFileSystemClient = dataLakeServiceClient.GetFileSystemClient("my-file-system");
IAsyncEnumerator<PathItem> enumerator = dataLakeFileSystemClient.GetPathsAsync("").GetAsyncEnumerator();
await enumerator.MoveNextAsync();
PathItem item = enumerator.Current;
while (item != null)
{
    log.LogInformation($"File Name {item.Name}.");
    if (!await enumerator.MoveNextAsync())
    {
        break;
    }
    item = enumerator.Current;
}
Can someone tell me what I should do to list blobs from a folder in a container?
PART 1
I have done something similar with a user assigned identity, but it should be the same for both. It sounds like your configuration should be correct, but you can do the following to confirm. In the Function App's Identity section, click on the "Azure Role Assignments":
Do you see anything listed? If not, then your current configuration isn't being recognized. You can try the "Add role assignment" feature, which is a pretty straightforward wizard.
PART 2
Again, not exactly your scenario, but it may help. I recently wrote some Azure Functions that connect to Synapse via Managed Identity. I used a ManagedIdentityCredential, not a DefaultAzureCredential:
// Build the credentials
var clientId = SettingsHelper.Get(Settings.ManagedIdentityClientId);
var credential = new ChainedTokenCredential(new ManagedIdentityCredential(clientId),
                                            new AzureCliCredential());
As you can see, I stored the Managed Identity Client ID value in the Function App Settings. The ChainedTokenCredential is there to support local development via the AzureCliCredential. I used the exact same code to connect to Key Vault.
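Applied to your scenario, a sketch of wiring that kind of credential into the Data Lake client might look like this. The "ManagedIdentityClientId" app setting name is hypothetical, and the endpoint is the placeholder from your question:

// Sketch: managed identity first, Azure CLI as a fallback for local development.
// "ManagedIdentityClientId" is a hypothetical Function App setting name.
var clientId = Environment.GetEnvironmentVariable("ManagedIdentityClientId");
var credential = new ChainedTokenCredential(
    new ManagedIdentityCredential(clientId),
    new AzureCliCredential());
var serviceClient = new DataLakeServiceClient(
    new Uri("https://<myDataLake>.dfs.core.windows.net"), credential);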
Update:
Tested; the documentation's description is indeed misleading, and Storage Blob Data Contributor does grant the required permissions:
var uri = new Uri("https://datalakename.dfs.core.windows.net/");
var tokenCredential = new ManagedIdentityCredential();
DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClient(uri, tokenCredential);
var fileSystemClient = dataLakeServiceClient.GetFileSystemClient("test");
Pageable<PathItem> items = fileSystemClient.GetPaths();
foreach (var item in items)
{
    Console.WriteLine(item.Name);
}
After adding the Storage Blob Data Contributor RBAC role for the function app identity to the data lake, I still got the error at first.
But a few minutes later, I could fetch the blobs with no problem. I suspect you just need to wait a few minutes for the role assignment to propagate and it will be OK.

How to implement blob storage access timeout and show message

I want to show a message to the end user when a blob upload or download is taking too much time. I found a useful blog post here.
A simple linear retry policy:
public static RetryPolicy LinearRetry(int retryCount, TimeSpan intervalBetweenRetries)
{
    return () =>
    {
        return (int currentRetryCount, Exception lastException, out TimeSpan retryInterval) =>
        {
            // Do custom work here
            // Set backoff
            retryInterval = intervalBetweenRetries;
            // Decide if we should retry, return bool
            return currentRetryCount < retryCount;
        };
    };
}
But here I don't see how to send a response back to the user while retrying. Is this the right way, or is there something else? Please suggest.
The OperationContext class in the Storage Client Library has an event called Retrying that you can consume to send a message back to the client.
For example, I created a simple console application which tries to create a blob container. When I ran this application, I deliberately turned off Internet access so that I could simulate a situation where the operation would be retried. Then in the event handler, I simply write something back to the console. You could raise another event from there that would send a message back to your client.
var requestOptions = new BlobRequestOptions()
{
    RetryPolicy = new ExponentialRetry(),
};
var operationContext = new OperationContext();
operationContext.Retrying += (sender, args) =>
{
    Console.WriteLine("I'm retrying ....");
};
var cloudStorageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
var blobClient = cloudStorageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");
container.CreateIfNotExists(requestOptions, operationContext);
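To surface this to an end user, one option is to re-raise your own event from the Retrying handler and let the UI (or API) layer subscribe to it. A minimal sketch with a hypothetical notifier class:

// Sketch: a hypothetical notifier the UI layer can subscribe to for "still retrying" messages.
public static class TransferNotifier
{
    public static event EventHandler<string> Retrying;

    public static void Notify(string message) => Retrying?.Invoke(null, message);
}

// Wire it up where the OperationContext is created:
operationContext.Retrying += (sender, args) =>
{
    TransferNotifier.Notify("The storage operation is taking longer than expected; retrying...");
};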

How do I find out whether a blob URL's shared access token has expired?

I have written the code below to get the blob URL with an expiry token; I have set the blob URL to expire in 2 hours.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(containerName);
CloudBlockBlob blockBlob = container.GetBlockBlobReference("blobname");
//Create an ad-hoc Shared Access Policy with read permissions which will expire in 2 hours
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2),
};
SharedAccessBlobHeaders headers = new SharedAccessBlobHeaders()
{
    ContentDisposition = string.Format("attachment;filename=\"{0}\"", "blobname"),
};
var sasToken = blockBlob.GetSharedAccessSignature(policy, headers);
blobUrl = blockBlob.Uri.AbsoluteUri + sasToken;
Using the above code I get the blob URL with a valid expiry token; now I want to check whether the blob URL is still valid from a client application.
I tried a WebRequest/HttpClient approach: pass the URL and look at the response status code. If the response code is 404 I assume the URL has expired; if not, the URL is still valid. But this approach takes too much time.
Please suggest another way.
I tried running code very similar to yours, and I got a 403 error, which is actually what is expected in this case. Based on your question, I am not sure whether the 403 is more helpful to you than the 404. Here is code running in a console application that returns a 403:
class Program
{
    static void Main(string[] args)
    {
        string blobUrl = CreateSAS();
        CheckSAS(blobUrl);
        Console.ReadLine();
    }

    //This method returns a reference to the blob with the SAS, and attempts to read it.
    static void CheckSAS(string blobUrl)
    {
        CloudBlockBlob blob = new CloudBlockBlob(new Uri(blobUrl));
        //If the DownloadText() method is run within the two minute period that the SAS is valid, it succeeds.
        //If it is run after the SAS has expired, it returns a 403 error.
        //Sleep for 3 minutes to trigger the error.
        System.Threading.Thread.Sleep(180000);
        Console.WriteLine(blob.DownloadText());
    }

    //This method creates the SAS on the blob.
    static string CreateSAS()
    {
        string containerName = "forum-test";
        string blobName = "blobname";
        string blobUrl = "";
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference(containerName);
        container.CreateIfNotExists();
        CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName + DateTime.Now.Ticks);
        blockBlob.UploadText("Blob for forum test");
        //Create an ad-hoc Shared Access Policy with read permissions which will expire in 2 minutes
        SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(2),
        };
        SharedAccessBlobHeaders headers = new SharedAccessBlobHeaders()
        {
            ContentDisposition = string.Format("attachment;filename=\"{0}\"", blobName),
        };
        var sasToken = blockBlob.GetSharedAccessSignature(policy, headers);
        blobUrl = blockBlob.Uri.AbsoluteUri + sasToken;
        return blobUrl;
    }
}
There are cases in which SAS failures do return a 404, which can create problems for troubleshooting operations using SAS. The Azure Storage team is aware of this issue and in future releases SAS failures may return a 403 instead. For help troubleshooting a 404 error, see http://azure.microsoft.com/en-us/documentation/articles/storage-monitoring-diagnosing-troubleshooting/#SAS-authorization-issue.
I also ran into the same issue a few days back. I was actually expecting the storage service to return a 403 error code when the SAS token has expired, but the storage service returns a 404 error.
Given that we don't have any other option, the way you're doing it is the only viable way, but it is still not entirely correct, because you could also get a 404 error if the blob is simply not present in the storage account.
Maybe you can parse the "se" parameter from the generated SAS, which holds the expiry time, e.g. "se=2013-04-30T02%3A23%3A26Z". However, since the server time might not be the same as the client time, the solution may be unstable.
http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
You're using UTC time for SharedAccessExpiryTime (see "Expiry Time" in https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1#parameters-common-to-account-sas-and-service-sas-tokens).
The expiry time is then registered under the se claim in the token, and its value can be checked against the current UTC time on the client side before actually using the token. This way you save yourself an extra call to Blob storage just to find out whether the token has expired.
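A minimal sketch of that client-side check, assuming a SAS-appended blob URL like the one built in the question (the "se" parameter name comes from the SAS token format):

using System;
using System.Globalization;

// Sketch: compare the "se" (signed expiry) parameter of a SAS URL against the current UTC time.
static bool SasLooksExpired(string blobUrlWithSas)
{
    string query = new Uri(blobUrlWithSas).Query.TrimStart('?');
    foreach (string pair in query.Split('&'))
    {
        if (pair.StartsWith("se=", StringComparison.OrdinalIgnoreCase))
        {
            string value = Uri.UnescapeDataString(pair.Substring(3));
            DateTimeOffset expiresOn = DateTimeOffset.Parse(value, CultureInfo.InvariantCulture,
                DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);
            return expiresOn <= DateTimeOffset.UtcNow;
        }
    }
    return true; // no expiry parameter found: treat the token as unusable
}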

Create Shared Access Token with Microsoft.WindowsAzure.Storage returns 403

I have a fairly simple method that uses the NEW Storage API to create a SAS and copy a blob from one container to another.
I am trying to use this to copy a blob BETWEEN STORAGE ACCOUNTS. So I have two storage accounts, with the exact same containers, and I am trying to copy a blob from one storage account's container to another storage account's container.
I don't know if the SDK is built for that, but it seems like it would be a common scenario.
Some additional information:
I create the token on the Destination Container.
Does that token need to be created on both the source and destination? Does it take time to register the token? Do I need to create it for each request, or only once per token "lifetime"?
I should mention that a 403 is a "Forbidden" HTTP status code.
private static string CreateSharedAccessToken(CloudBlobClient blobClient, string containerName)
{
    var container = blobClient.GetContainerReference(containerName);
    var blobPermissions = new BlobContainerPermissions();
    // The shared access policy provides read/write access to the container for 1 hour:
    blobPermissions.SharedAccessPolicies.Add("SolutionPolicy", new SharedAccessBlobPolicy()
    {
        // To ensure SAS is valid immediately we don’t set start time
        // so we can avoid failures caused by small clock differences:
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
        Permissions = SharedAccessBlobPermissions.Write |
                      SharedAccessBlobPermissions.Read
    });
    blobPermissions.PublicAccess = BlobContainerPublicAccessType.Blob;
    container.SetPermissions(blobPermissions);
    return container.GetSharedAccessSignature(new SharedAccessBlobPolicy(), "SolutionPolicy");
}
Down the line I use this token to call a copy operation, which returns a 403:
var uri = new Uri(srcBlob.Uri.AbsoluteUri + blobToken);
destBlob.StartCopyFromBlob(uri);
My version of Azure.Storage is 2.1.0.2.
Here is the full copy method in case that helps:
private static void CopyBlobs(
    CloudBlobContainer srcContainer, string blobToken,
    CloudBlobContainer destContainer)
{
    var srcBlobList
        = srcContainer.ListBlobs(string.Empty, true, BlobListingDetails.All); // set to none in prod (4perf)
    //// get the SAS token to use for all blobs
    //string token = srcContainer.GetSharedAccessSignature(
    //    new SharedAccessBlobPolicy(), "SolutionPolicy");
    bool pendingCopy = true;
    foreach (var src in srcBlobList)
    {
        var srcBlob = src as ICloudBlob;
        // Determine BlobType:
        ICloudBlob destBlob;
        if (srcBlob.Properties.BlobType == BlobType.BlockBlob)
        {
            destBlob = destContainer.GetBlockBlobReference(srcBlob.Name);
        }
        else
        {
            destBlob = destContainer.GetPageBlobReference(srcBlob.Name);
        }
        // Determine Copy State:
        if (destBlob.CopyState != null)
        {
            switch (destBlob.CopyState.Status)
            {
                case CopyStatus.Failed:
                    log.Info(destBlob.CopyState);
                    break;
                case CopyStatus.Aborted:
                    log.Info(destBlob.CopyState);
                    pendingCopy = true;
                    destBlob.StartCopyFromBlob(destBlob.CopyState.Source);
                    return;
                case CopyStatus.Pending:
                    log.Info(destBlob.CopyState);
                    pendingCopy = true;
                    break;
            }
        }
        // copy using only Policy ID:
        var uri = new Uri(srcBlob.Uri.AbsoluteUri + blobToken);
        destBlob.StartCopyFromBlob(uri);
        //// copy using src blob as SAS
        //var source = new Uri(srcBlob.Uri.AbsoluteUri + token);
        //destBlob.StartCopyFromBlob(source);
    }
}
And finally the account and client (vetted) code:
var credentials = new StorageCredentials("BAR", "FOO");
var account = new CloudStorageAccount(credentials, true);
var blobClient = account.CreateCloudBlobClient();
var sasToken = CreateSharedAccessToken(blobClient, "content");
When I use a REST client this seems to work... any ideas?
Consider also this problem:
var uri = new Uri(srcBlob.Uri.AbsoluteUri + blobToken);
Probably you are calling the ToString method of Uri, which produces a "human readable" version of the URL. If the blobToken contains special characters, for example "+", this will cause a malformed-token error on the storage server, which will refuse to give you access.
Use this instead:
String uri = srcBlob.Uri.AbsoluteUri + blobToken;
Shared Access Tokens are not required for this task. I ended up with two accounts and it works fine:
var accountSrc = new CloudStorageAccount(credsSrc, true);
var accountDest = new CloudStorageAccount(credsSrc, true);
var blobClientSrc = accountSrc.CreateCloudBlobClient();
var blobClientDest = accountDest.CreateCloudBlobClient();
// Set permissions on the container.
var permissions = new BlobContainerPermissions {PublicAccess = BlobContainerPublicAccessType.Blob};
srcContainer.SetPermissions(permissions);
destContainer.SetPermissions(permissions);
//grab the blob
var sourceBlob = srcContainer.GetBlockBlobReference("FOO");
var destinationBlob = destContainer.GetBlockBlobReference("BAR");
//create a new blob
destinationBlob.StartCopyFromBlob(sourceBlob);
Since both CloudStorageAccount objects point to the same account, copying without a SAS token would work just fine as you also mentioned.
On the other hand, you need either a publicly accessible blob or a SAS token to copy from another account. So what you tried was correct, but you established a container-level access policy, which can take up to 30 seconds to take effect as also documented in MSDN. During this interval, a SAS token that is associated with the stored access policy will fail with status code 403 (Forbidden), until the access policy becomes active.
One more thing that I would like to point out: when you call Get*BlobReference to create a new blob object, the CopyState property will not be populated until you perform a GET/HEAD operation such as FetchAttributes.
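For example, a sketch of that refresh step against the classic SDK, reusing the destContainer and the "BAR" blob name from the snippet above:

// Sketch: refresh the destination blob's properties so CopyState reflects the service-side state.
var destBlob = destContainer.GetBlockBlobReference("BAR");
if (destBlob.Exists())
{
    destBlob.FetchAttributes(); // HEAD call that populates Properties and CopyState
    if (destBlob.CopyState != null)
    {
        Console.WriteLine(destBlob.CopyState.Status);
    }
}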
