Avoid over-writing blobs AZURE on the server

I have a .NET app which uses WebClient and a SAS token to upload a blob to a container. The default behaviour is that a blob with the same name is replaced/overwritten.
Is there a way to change this on the server, i.e. prevent it from replacing an already existing blob?
I've seen Avoid over-writing blobs AZURE, but it is about the client side.
My goal is to secure the server against overwriting blobs.
AFAIK the file is uploaded directly to the container without a chance to intercept the request and check e.g. the existence of the blob.
Edited
Let me clarify: My client app receives a SAS token to upload a new blob. However, an evil hacker can intercept the token and upload a blob with an existing name. Because of the default behavior, the new blob will replace the existing one (effectively deleting the good one).
I am aware of different approaches to deal with the replacement on the client. However, I need to do it on the server, somehow even against the client (which could be compromised by the hacker).

You can issue the SAS token with "create" permissions and without "write" permissions. This will allow the user to upload blobs up to 64 MB in size (the maximum size allowed for a single Put Blob operation) as long as they are creating a new blob and not overwriting an existing one. See the explanation of SAS permissions for more information.
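As a minimal sketch using the classic WindowsAzure.Storage SDK (blobClient and containerName are illustrative names), issuing such a token could look like this:
var container = blobClient.GetContainerReference(containerName);
var policy = new SharedAccessBlobPolicy
{
    // Create allows Put Blob only for blobs that do not exist yet;
    // without Write, an existing blob cannot be overwritten with this token.
    Permissions = SharedAccessBlobPermissions.Create,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
};
string sasToken = container.GetSharedAccessSignature(policy);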

There is no configuration on the server side, but you can implement some code using the storage client SDK:
// Retrieve a reference to a previously created container.
var container = blobClient.GetContainerReference(containerName);
// Retrieve a reference to the blob.
var blobReference = container.GetBlockBlobReference(blobName);
// If the blob already exists, do nothing; otherwise upload it.
if (!blobReference.Exists())
{
    blobReference.UploadFromFile(filePath);
}
You could do something similar using the REST API:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/blob-service-rest-api
Get Blob Properties will return 404 if the blob does not exist.
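As a rough sketch of that check over REST (storageName, containerName, blobName, and sasToken are assumptions), Get Blob Properties is a HEAD request against the blob URL:
// using System.Net; using System.Net.Http;
using (var client = new HttpClient())
{
    var url = $"https://{storageName}.blob.core.windows.net/{containerName}/{blobName}{sasToken}";
    var request = new HttpRequestMessage(HttpMethod.Head, url);
    // Get Blob Properties returns 404 when the blob does not exist.
    var response = client.SendAsync(request).Result;
    bool blobExists = response.StatusCode != HttpStatusCode.NotFound;
}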

Is there a way to change it on the server, i.e. prevents from replacing the already existing blob?
Azure Storage Services expose the Blob service REST API for you to do operations against blobs. To upload or update a blob (file), you need to invoke the Put Blob REST API, which states the following:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob.
In order to avoid over-writing existing blobs, you need to explicitly specify conditional headers for your blob operations. The simplest way is to leverage the Azure Storage SDK for .NET (which is essentially a wrapper over the Azure Storage REST API) and upload your blob (file) as follows:
try
{
    var container = new CloudBlobContainer(new Uri($"https://{storageName}.blob.core.windows.net/{containerName}{containerSasToken}"));
    var blob = container.GetBlockBlobReference("{blobName}");
    // GenerateIfNotExistsCondition sends "If-None-Match: *", so the request
    // fails with 409 Conflict if the blob already exists. This is atomic,
    // unlike checking blob.Exists() first and then uploading.
    blob.UploadFromFile("{filepath}", accessCondition: AccessCondition.GenerateIfNotExistsCondition());
}
catch (StorageException se)
{
    var requestResult = se.RequestInformation;
    if (requestResult != null)
    {
        // 409, "The specified blob already exists."
        Console.WriteLine($"HttpStatusCode:{requestResult.HttpStatusCode},HttpStatusMessage:{requestResult.HttpStatusMessage}");
    }
}
Also, you could incorporate the MD5 hash of the file into the blob name before uploading to Azure Blob Storage, which makes accidental name collisions unlikely.
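A hypothetical sketch of that naming scheme (filePath and blobName are illustrative):
using (var md5 = System.Security.Cryptography.MD5.Create())
using (var stream = System.IO.File.OpenRead(filePath))
{
    var hash = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
    // e.g. "report.pdf" becomes "report-9E107D9D372BB6826BD81D3542A419D6.pdf"
    blobName = System.IO.Path.GetFileNameWithoutExtension(filePath)
               + "-" + hash + System.IO.Path.GetExtension(filePath);
}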
As far as I know, there is no configuration on the Azure Portal or in the Storage tools to achieve this purpose on the server side. You could post your feedback to the Azure Storage team.

Related

Azure Blob Storage Temporary File URL

I have saved PDF files in an Azure Blob Storage blob. I want to show these files on my website, but when a file is rendered in HTML its link should be deactivated, meaning no one can use that link to download the file again. Is this possible in Azure Blob Storage?
You can use a shared access policy on the blob to get close to this:
CloudStorageAccount account = CloudStorageAccount.Parse("yourStringConnection");
CloudBlobClient serviceClient = account.CreateCloudBlobClient();

var container = serviceClient.GetContainerReference("yourContainerName");
container.CreateIfNotExistsAsync().Wait();

CloudBlockBlob blob = container.GetBlockBlobReference("test/helloworld.txt");
blob.UploadTextAsync("Hello, World!").Wait();

SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy();
// Define the expiration time.
policy.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1);
// Define the permission.
policy.Permissions = SharedAccessBlobPermissions.Read;

// Create the signature.
string signature = blob.GetSharedAccessSignature(policy);

// Get the full temporary URI.
Console.WriteLine(blob.Uri + signature);
If I understand correctly, you're looking for single-use links to Azure blobs. Natively this feature is not available in Azure Storage. You would need to write code to implement it yourself, keeping track of the number of times a link has been used and refusing to process a link once the limit is exceeded.
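For instance, a hypothetical ASP.NET MVC sketch of that idea (the _linkStore and _container members are assumptions standing in for your own persistence layer and container setup):
public ActionResult Download(string token)
{
    // _linkStore is a hypothetical repository mapping one-time tokens to
    // blob names plus a "used" flag; any persistent store would do.
    var entry = _linkStore.Find(token);
    if (entry == null || entry.Used)
        return new HttpStatusCodeResult(410); // Gone: link already consumed

    entry.Used = true;
    _linkStore.Save(entry);

    // Redirect to a very short-lived read-only SAS for the blob.
    var blob = _container.GetBlockBlobReference(entry.BlobName);
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1)
    });
    return Redirect(blob.Uri + sas);
}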

Issue while copying existing blob to Azure Media Services

We are trying to copy an existing blob to AMS, but it is not getting copied. The blob resides in storage account 1 and AMS is associated with storage account 2. All the accounts, including AMS, are in the same location.
await destinationBlob.StartCopyAsync(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
When visualizing the AMS storage account using a blob storage explorer, asset folders are getting created but with no blobs in them. Also, within the Media explorer, we can see the assets listed in AMS, but when clicked, a not-found exception is thrown. Basically, they are not getting fully copied into AMS.
However, when we use the same code and attach a new AMS to the blob storage account (storage account 1) where the actual blob resides, the copy works fine.
I have not reproduced your issue, but there is a code sample for copying an existing blob to Azure Media Services via the .NET SDK. Please try to copy the blob using StartCopyFromBlob or StartCopyFromBlobAsync (Azure Storage Client Library 4.3.0). Below is the snippet from the code sample:
destinationBlob.StartCopyFromBlob(new Uri(sourceBlob.Uri.AbsoluteUri + signature));

while (true)
{
    // StartCopyFromBlob is an async (server-side) operation, so we check
    // whether the copy has completed before proceeding. To do that, we
    // call FetchAttributes on the blob and check the CopyState.
    destinationBlob.FetchAttributes();
    if (destinationBlob.CopyState.Status != CopyStatus.Pending)
    {
        break;
    }
    // It's still not completed, so wait for some time.
    System.Threading.Thread.Sleep(1000);
}

Check if Blob of unknown Blob type exists

I've inherited a project built using the Azure Storage Client 1.7 and am upgrading it, as Microsoft has announced that this version will no longer be supported from December this year.
References to the files in Blob storage are stored in a database with the following fields:
FilePath - a string in the form of uploadfiles/xxx/yyy/Image-20140117170146.jpg
FileURI - A string in the form of https://zzz.blob.core.windows.net/uploadfiles/xxx/yyy/Image-20140117170146.jpg
GetBlobReferenceFromServer will throw an exception if the file doesn't exist, so it seems you should use GetBlockBlobReference if you know the container and the Blob type.
So my question(s):
Can I assume any Blobs currently uploaded (using StorageClient 1.7) will be BlockBlobs?
As I need to know the container name to call GetBlockBlobReference can I reliably say that in the examples above my container would always be uploadfiles
Can I assume any Blobs currently uploaded (using StorageClient 1.7) will be BlockBlobs?
You can't be 100% sure that blobs uploaded via Storage Client Library 1.7 are block blobs, because 1.7 also supported page blobs; however, you can make some intelligent guesses. For example, if the files are images or other commonly used file types (PDFs, documents, etc.), you can assume that they are block blobs. Typically you would see VHD files uploaded as page blobs. And if these were uploaded by the users of your application, they are more than likely block blobs.
Having said this, I think you should use the GetBlobReferenceFromServer method. What you could do is list all blobs from the database and call GetBlobReferenceFromServer for each of them. If the blob exists, you will get back its blob type; if it doesn't, the method will give you an error. This is the quickest way to identify the blob type of the existing entries in the database. If you find both block and page blobs, you may want to store the blob type back in the database along with each existing record, so that if you later need to decide between creating a CloudBlockBlob or CloudPageBlob reference, you can look at this field.
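A minimal sketch of that check (container and blobName are assumed to come from your database records):
try
{
    // Round-trips to the service; returns a CloudBlockBlob or CloudPageBlob.
    ICloudBlob blob = container.GetBlobReferenceFromServer(blobName);
    Console.WriteLine($"{blobName}: {blob.BlobType}");
}
catch (StorageException se) when (se.RequestInformation?.HttpStatusCode == 404)
{
    Console.WriteLine($"{blobName}: not found in storage");
}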
As I need to know the container name to call GetBlockBlobReference can I reliably say that in the examples above my container would always be uploadfiles
Yes. In the examples you listed above, you can say that the blob container is uploadfiles.

Azure CDN per Blob SAS

As far as I know, in Azure Storage we can delegate access to our storage to a single person using a SAS on a CONTAINER basis.
I need to delegate access on a per-BLOB basis to prevent hotlinking.
We are using ASP.NET MVC. Sorry for my English :)
Edit: And how can a new Azure user create a CDN?
You can create a SAS on a blob; the approach is similar to the way you create a SAS on a blob container. Since you're using ASP.NET MVC, I'm assuming you would want to use the .NET Storage Client API to create the SAS. To create a SAS on a blob, just call the GetSharedAccessSignature method on the blob object you have created.
For example, the code below would give you a SAS URL where user has permission to download a blob:
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15),
});
return string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas);
I wrote a blog post some time ago which describes SAS functionality on blobs and containers in more details: http://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/
Regarding your question about CDN, I believe the functionality to create CDN nodes was taken away from the Windows Azure Portal when the new portal was announced. I guess you would need to wait for the functionality to come back on the portal.

Verifying CloudBlob.UploadFromStream completed with no errors?

I want to save files users upload to my site into my Azure blob, and I am using the CloudBlob.UploadFromStream method to do so, but I want to make sure the file finished saving to the blob with no problems before doing some more work. I am currently just uploading the blob and then checking whether a reference to the new blob exists using GetBlockBlobReference inside an if statement. Is there a better way of verifying that the upload completed fine?
If there's any problem while uploading the blob, the CloudBlob.UploadFromStream method will throw an error, so that would be the first place to check whether the upload went fine.
I don't think creating a reference for a blob using GetBlockBlobReference does you any good, as it just creates an instance of CloudBlockBlob; it doesn't check whether the blob exists in storage. If you want to check whether the blob exists in storage, you could either fetch the blob's attributes using the CloudBlockBlob.FetchAttributes method or create an instance of the blob using CloudBlobContainer.GetBlobReferenceFromServer or CloudBlobClient.GetBlobReferenceFromServer. All three methods fetch information about the blob from storage and will throw appropriate errors if something is not right (e.g. a Not Found error if the blob does not exist).
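Putting it together, a minimal sketch (container, blobName, and fileStream are assumptions):
try
{
    var blob = container.GetBlockBlobReference(blobName);
    blob.UploadFromStream(fileStream); // throws StorageException on failure
    blob.FetchAttributes();            // round-trips to the server; throws if the blob is missing
    Console.WriteLine($"Uploaded {blob.Properties.Length} bytes.");
}
catch (StorageException se)
{
    Console.WriteLine($"Upload failed: {se.RequestInformation?.HttpStatusMessage}");
}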
