I am publishing to and subscribing from an Azure Event Hub, which uses a blob in a container in a storage account. Messages are not published when using this storage account, but everything works with another storage account.
I can see the blob with a lease status of "leased". I thought deleting it and creating it again might solve the issue, so I tried to delete it and create a new one, but I was not able to delete it. I also tried breaking the lease, but the lease status is set back to "leased" again.
Is there any way to solve this issue?
I tried to reproduce your exact scenario: I created a blob container, uploaded a blob to it, acquired a lease on the blob through the REST API, broke the lease, and finally deleted the blob through the REST API, all successfully. I used Postman as the REST client, along with an application registered in Azure AD through which the token required for the blob operations was retrieved. Please find the snapshots below for your reference:
a) The blob 'ACMx7.pdf' was acquired on lease with the appropriate blob owner/user authorization and header parameters.
b) The lease on blob 'ACMx7.pdf' was broken by passing the appropriate header, i.e. x-ms-lease-action: break.
c) The blob 'ACMx7.pdf' was deleted after the lease was broken, by passing the headers in Postman as below.
Please note that the lease on the blob was acquired for an infinite period. See the documentation links below for the headers required for each action on the blob:
https://learn.microsoft.com/en-us/rest/api/storageservices/lease-blob
https://learn.microsoft.com/en-us/rest/api/storageservices/delete-blob
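Note that for an Event Hub checkpoint blob, the event processor typically re-acquires the lease almost immediately after you break it, which would explain why the status flips back to "leased": stop the consumers first, then break the lease and delete the blob. As a minimal sketch of the same break-then-delete flow in code (assuming the Azure.Storage.Blobs SDK; the connection string, container, and blob names are placeholders):

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

// Placeholder values: replace with your own storage account details.
var blobClient = new BlobClient("<connection-string>", "<container>", "ACMx7.pdf");

// Break the current lease immediately (break period of zero)...
var leaseClient = blobClient.GetBlobLeaseClient();
await leaseClient.BreakAsync(breakPeriod: TimeSpan.Zero);

// ...then delete the blob before anything re-acquires the lease.
await blobClient.DeleteAsync();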
I am setting up an event trigger on a Blob Storage v2 account in a Data Factory pipeline. When I publish the pipeline, I keep getting the error below. I have only set up the storage recently and I can't see anything out of place. Do I need to set up an event subscription in Blob Storage and create the event from the storage account itself, given that there are options to set up automation in there?
The attempt to configure storage notifications for the provided storage account hmtest1 failed. Please ensure that your storage account meets the requirements described at https://aka.ms/storageevents. The error is Failed to retrieve credentials for request=RequestUri=https://management.azure.com/subscriptions
{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}
{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}
AFAIK, in ADF this error occurs when Data Factory is not registered among the subscription's resource providers.
To resolve this, we need to register Data Factory in the resource providers:
Go to Subscriptions -> your subscription -> Resource providers and check whether Microsoft.DataFactory is registered.
If it shows as NotRegistered, select it and click Register.
After it has registered successfully, create a new Data Factory workspace and check the storage event trigger again.
If it still gives the same error, register Microsoft.EventGrid as well and re-check.
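If you prefer scripting this instead of clicking through the portal, the equivalent Azure CLI commands (assuming you are logged in to the affected subscription) are az provider register --namespace Microsoft.DataFactory to register the provider, and az provider show --namespace Microsoft.DataFactory --query registrationState to confirm it reports Registered; the same commands with Microsoft.EventGrid cover the Event Grid provider.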
While deleting old blobs using a Logic App by giving the container path, we ran into the error: Status code 409, "message": "This operation is not permitted because the blob has snapshots". This subsequently fails the run of the Logic App. I tried to delete the blob by providing its Id and filename, but the error persists. Is there any way to delete a blob and its corresponding snapshots using a Logic App? Approaches to solving the issue are welcome. Blob lifecycle management policies do not work for us.
You can use an Azure Function to delete your blob, including this header in your request:
x-ms-delete-snapshots: {include, only}
Required if the blob has associated snapshots. Specify one of the following two options:
include: Delete the base blob and all of its snapshots.
only: Delete only the blob's snapshots and not the blob itself.
This header should be specified only for a request against the base blob resource. If this header is specified on a request to delete an individual snapshot, the Blob service returns status code 400 (Bad Request).
If this header is not specified on the request and the blob has associated snapshots, the Blob service returns status code 409 (Conflict).
Check the Delete Blob documentation: https://learn.microsoft.com/en-us/rest/api/storageservices/delete-blob
Alternatively, you can filter and order your blobs within your Logic App and delete the snapshots first, before removing the base blob.
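As a minimal sketch of the Azure Function approach (assuming the Azure.Storage.Blobs SDK; the function trigger/bindings are omitted, and the connection string, container, and blob names are placeholders):

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Placeholder values: replace with your own storage account details.
var container = new BlobContainerClient("<connection-string>", "<container>");
var blob = container.GetBlobClient("<blob-name>");

// IncludeSnapshots corresponds to the header x-ms-delete-snapshots: include;
// use DeleteSnapshotsOption.OnlySnapshots for x-ms-delete-snapshots: only.
await blob.DeleteAsync(DeleteSnapshotsOption.IncludeSnapshots);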
I am trying to run the sample that reads messages from Event Hub, but I am getting the following error:
Sample URL: https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/SampleEphReceiver
Error:
Microsoft.Azure.EventHubs.Processor.EventProcessorRuntimeException:
'Out of retries creating lease for partition'
I can see that a container was created, using the Azure portal's Storage Explorer.
And I know some messages were written successfully to the Event Hub I am trying to read from.
Any idea what might be causing this?
My storage account is of type "Storage (general purpose v1)"!
This seems to be an issue with the storage account you created.
I also stumbled upon this issue while following this guide. I created the storage account (Account kind: "Storage (general purpose v1)", Performance: "Premium") and created a new container (note: the container access options could not be changed). I tested with the simple consumer code in the guide and it failed with the same "Out of retries creating lease for partition" error you received.
I eventually found this GitHub issue, which suggested using "Blob storage" instead. I created a new storage account with "Blob storage" selected as the account kind and it worked. Out of curiosity, I made two more storage accounts, one as "StorageV2 (general purpose v2)" and another as "Storage (general purpose v1)" again (note: the container access options were now available). Both worked, so I was confused.
After some further playing around, I found that this is likely an issue with the Performance option (which would also explain the container access issue). Pick "Standard" with any sub-option instead of "Premium". My original storage account was "Premium", and every subsequent failing test was also "Premium". Also, it seems you can never create a storage account with the same name again, as the containers always have "Forbidden" names...
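In short: when creating the storage account for the lease/checkpoint container, pick Performance: "Standard" (Premium general-purpose v1 accounts only support page blobs, while the Event Processor Host needs block blobs for its leases). If you script account creation, e.g. with the Azure CLI, that corresponds to passing a Standard SKU such as --sku Standard_LRS to az storage account create.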
I have a .NET app which uses WebClient and a SAS token to upload a blob to the container. The default behaviour is that a blob with the same name is replaced/overwritten.
Is there a way to change this on the server, i.e. to prevent replacing an already existing blob?
I've seen Avoid over-writing blobs AZURE, but it is about the client side.
My goal is to secure the server from overwriting blobs.
AFAIK the file is uploaded directly to the container without a chance to intercept the request and check, e.g., the existence of the blob.
Edit:
Let me clarify: my client app receives a SAS token to upload a new blob. However, an attacker could intercept the token and upload a blob with an existing name. Because of the default behaviour, the new blob would replace the existing one (effectively deleting the good one).
I am aware of different approaches to deal with the replacement on the client. However, I need to do it on the server, somehow even against the client (which could be compromised by the attacker).
You can issue the SAS token with "create" permissions and without "write" permissions. This will allow the user to upload blobs up to 64 MB in size (the maximum allowed for a single Put Blob) as long as they are creating a new blob and not overwriting an existing one. See the explanation of SAS permissions for more information.
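A minimal sketch of issuing such a create-only token with the Azure.Storage.Blobs SDK (the account name, key, container, and blob names are placeholders):

using Azure.Storage;
using Azure.Storage.Sas;

// Placeholder credentials: replace with your own account name and key.
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "<container>",
    BlobName = "<blob-name>",
    Resource = "b", // token scoped to a single blob
    ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15),
};

// Create-only: the token can create a new blob but not overwrite an existing one.
sasBuilder.SetPermissions(BlobSasPermissions.Create);

string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();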
There is no configuration on the server side, but you can implement something like the following using the storage client SDK:
// Retrieve a reference to a previously created container.
var container = blobClient.GetContainerReference(containerName);
// Retrieve a reference to a blob.
var blobReference = container.GetBlockBlobReference(blobName);
// If the blob already exists, do nothing; otherwise upload it.
if (!await blobReference.ExistsAsync())
{
    await blobReference.UploadFromFileAsync(filePath);
}
You could do something similar using the REST API:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/blob-service-rest-api
Get Blob Properties will return 404 if the blob does not exist.
Is there a way to change it on the server, i.e. prevents from replacing the already existing blob?
Azure Storage exposes the Blob service REST API for performing operations against blobs. To upload or update a blob (file), you need to invoke the Put Blob REST API, which states the following:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob.
In order to avoid overwriting existing blobs, you need to explicitly specify conditional headers in your blob operations. The simplest approach is to leverage the Azure Storage SDK for .NET (essentially a wrapper over the Azure Storage REST API) and upload your blob (file) as follows:
try
{
    var container = new CloudBlobContainer(new Uri($"https://{storageName}.blob.core.windows.net/{containerName}{containerSasToken}"));
    var blob = container.GetBlockBlobReference("{blobName}");
    // Upload only if the blob does not exist yet (generates an If-None-Match: * condition).
    blob.UploadFromFile("{filepath}", accessCondition: AccessCondition.GenerateIfNotExistsCondition());
}
catch (StorageException se)
{
    var requestResult = se.RequestInformation;
    if (requestResult != null)
    {
        // 409: The specified blob already exists.
        Console.WriteLine($"HttpStatusCode:{requestResult.HttpStatusCode},HttpStatusMessage:{requestResult.HttpStatusMessage}");
    }
}
Alternatively, you could combine the blob name with the MD5 hash of the blob file before uploading to Azure Blob Storage, so that uploads with different content never collide on the same name.
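A quick sketch of that naming scheme (the helper name and hash-prefix format are just an illustration):

// Hypothetical helper: prefix the blob name with the MD5 hash of the file's content.
static string BlobNameWithMd5(string filePath, string blobName)
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    using (var stream = System.IO.File.OpenRead(filePath))
    {
        var hash = md5.ComputeHash(stream);
        return BitConverter.ToString(hash).Replace("-", "") + "-" + blobName;
    }
}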
As far as I know, there is no configuration on the Azure portal or in the Storage tools to achieve this on the server side. You could post your feedback to the Azure Storage team.
I've run a process to delete approximately 1500 blobs from my Azure storage service. The code I used to do this (in a loop) is essentially this:
var blob = BlobStorageContainer.GetBlockBlobReference(blobName);
if (await blob.ExistsAsync(cancellationToken))
{
await blob.DeleteAsync(cancellationToken);
}
I went through both the Azure Portal and Azure Storage Explorer, and it looks like all the blobs that should have been deleted are still there. However, when I try to actually access the file via the URL, I get a ResourceNotFound error. So it seems the data has been deleted, but the storage service seems to think that the blob should still be there. Am I doing something wrong, or does the storage service need time to catch up, in a sense, to all the delete operations I performed?
You can try doing a List Blobs operation on the container; that will give you an up-to-date view of which blobs are still present in your account. Accessing the blob from the internet URI will come back as ResourceNotFound if the blob isn't public, even if it is still present in the container. Is it possible your calls are failing and your code is eating the exceptions?
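As a sketch (assuming the same WindowsAzure.Storage SDK as your snippet), a flat listing that prints every blob actually left in the container would look like this:

BlobContinuationToken token = null;
do
{
    // Flat listing, so blobs nested under virtual "directories" are included too.
    var segment = await BlobStorageContainer.ListBlobsSegmentedAsync(
        null, true, BlobListingDetails.None, null, token, null, null);
    token = segment.ContinuationToken;
    foreach (var item in segment.Results)
    {
        Console.WriteLine(((CloudBlob)item).Name);
    }
} while (token != null);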