Azure - get_blob_client - blob creation

What's the exact functionality of get_blob_client()?
get_blob_client(container, blob, snapshot=None)
I understood it as: it automatically creates the blob if it is not yet available.
My issue:
I used get_blob_client(container, blob, snapshot=None) for creating NEW blobs before. Now it neither shows an error nor creates the blob.
Note: when I try download_blob() it says
No blob found.
What's the issue here?

A note on the get_blob_client() function: it won't create the blob, it only creates a reference (a client object).
I tried this with the Python sample. I don't have an example.txt blob in the container (test). When I call get_blob_client(), it creates the reference, not the blob.
In Azure Storage the blob is not created,
so when you try to download the blob it throws an error. Create the blob first, then get the reference for the blob and download it, as in the sketch below.
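For reference, here is a minimal sketch of that create-then-download flow using the azure-storage-blob v12 package; the connection string is a placeholder, and the container (test) and blob (example.txt) names are taken from the question.

from azure.storage.blob import BlobServiceClient

# Placeholder connection string - replace with your own.
service = BlobServiceClient.from_connection_string("<connection-string>")

# get_blob_client() only builds a client object (a reference); no request is sent yet.
blob_client = service.get_blob_client(container="test", blob="example.txt")

# The blob exists in the container only after an explicit upload.
with open("example.txt", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)

# Now download_blob() succeeds; before the upload it raises ResourceNotFoundError.
content = blob_client.download_blob().readall()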

Related

Azure: Unable to copy Archive blobs from one storage account to another?

Whenever I try to copy Archive blobs to a different storage account and change their tier in the destination, I get the following error:
Copy source blob has been modified. ErrorCode: CannotVerifyCopySource
I have tried copying Hot/Cool blobs to Hot/Cool/Archive. I am facing the issue only while copying Archive to Hot/Cool/Archive. Also, there is no issue while copying within the same storage account.
I am using Azure python SDK:
blob_url = source_block_blob_service.make_blob_url(copy_from_container, blob_name, sas_token = sas)
dest_blob_service.copy_blob(copy_to_container, blob_name, blob_url, requires_sync = True, standard_blob_tier = 'Hot')
The reason you're getting this error is that copying an archived blob is only supported within the same storage account, and you're trying it across different storage accounts.
From the REST API documentation page:
Copying Archived Blob (version 2018-11-09 and newer)
An archived blob can be copied to a new blob within the same storage account. This will still leave the initially archived blob as is. When copying an archived blob as source the request must contain the header x-ms-access-tier indicating the tier of the destination blob. The data will be eventually copied to the destination blob.
While a blob is in the archive access tier, it's considered offline and can't be read or modified.
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration
To read the blob, you either need to rehydrate it first or, as described in the link above, use the Copy Blob operation. I am not sure if the Python SDK's copy_blob() operation uses that API behind the scenes - maybe not, if it did not work that way for you.
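As an illustration (not the code from the question, which uses the older azure-storage SDK), here is a sketch of both options with the v12 azure-storage-blob package; the connection string, container names, and blob name are placeholder assumptions, and both blobs live in the same storage account.

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
src = service.get_blob_client("archive-container", "data.csv")
dst = service.get_blob_client("hot-container", "data.csv")

# Option 1: rehydrate the archived blob in place, then read or copy it
# once it is back online (rehydration can take hours).
src.set_standard_blob_tier("Hot")

# Option 2: copy the archived blob to a new blob in the SAME storage account,
# passing the destination tier (sent as the x-ms-access-tier header).
dst.start_copy_from_url(src.url, standard_blob_tier="Hot")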

Can you output an Azure Logic App Variable to file and store on blob storage?

I have searched Google and MSDN and it's not clear whether you can write a variable to blob storage. Searching the available steps/actions does not yield anything obvious either.
I have constructed an array variable of file names from an SFTP, per the following documentation, but I can't figure out if this can be stored or saved in any capacity.
Right now it seems these variables are essentially internal to the logic app and can't be made external. Or is there a way to export them?
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-create-variables-store-values
If you simply want to save the variable's value in a blob, you can do so with the Azure Blob Storage - Create Blob action.

How to download the Azure blob content with the same name as the file

I have an Azure storage account where I have created a folder to upload and download files. I am also performing a rename operation on the file; e.g. when I perform the rename operation and upload the file into the blob, all blob metadata gets updated successfully.
Please suggest the changes.
How to download the Azure blob content with the same name as the file
As Gaurav Mantri said, you could specify the ContentDisposition property for your blob. Using Azure Storage Explorer, you can quickly set the ContentDisposition property as follows:
But when downloading the image, the ContentDisposition did not seem to work at all. Then I found a similar issue: you need to set the DefaultServiceVersion for your blob storage service. You need to set the DefaultServiceVersion in your code; for more details you could refer to here and choose your development language.
Additionally, if you upload/download your blob files programmatically, you could refer to issue1 and issue2.
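If you prefer to set these programmatically rather than through Storage Explorer, a rough sketch with the Python v12 SDK follows; the connection string, container, blob name, and service version string are placeholder assumptions.

from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<connection-string>")

# DefaultServiceVersion: the version used when a request does not specify one.
# It must be at least 2013-08-15 for Content-Disposition to be returned on download.
service.set_service_properties(target_version="2019-07-07")

blob_client = service.get_blob_client("images", "photo.jpg")
with open("photo.jpg", "rb") as data:
    blob_client.upload_blob(
        data,
        overwrite=True,
        content_settings=ContentSettings(
            content_disposition='attachment; filename="photo.jpg"'
        ),
    )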

Check if Blob of unknown Blob type exists

I've inherited a project built using the Azure Storage Client 1.7 and am upgrading it as Microsoft have announced that this will no longer be supported from December this year.
References to the files in Blob storage are stored in a database with the following fields:
FilePath - a string in the form of uploadfiles/xxx/yyy/Image-20140117170146.jpg
FileURI - A string in the form of https://zzz.blob.core.windows.net/uploadfiles/xxx/yyy/Image-20140117170146.jpg
GetBlobReferenceFromServer will throw an exception if the file doesn't exist, so it seems you should use GetBlockBlobReference if you know the container and the Blob type.
So my question(s):
Can I assume any Blobs currently uploaded (using StorageClient 1.7) will be BlockBlobs?
As I need to know the container name to call GetBlockBlobReference, can I reliably say that in the examples above my container would always be uploadfiles?
Can I assume any Blobs currently uploaded (using StorageClient 1.7) will be BlockBlobs?
You can't be 100% sure that the blobs uploaded via Storage Client library 1.7 are Block Blobs, because 1.7 also supported Page Blobs; however, you can make some intelligent guesses. For example, if the files are image files or other commonly used files (PDFs, documents, etc.), you can assume they are block blobs. Typically you would see VHD files uploaded as page blobs. Again, if these were uploaded by the users of your application, they are more than likely block blobs.
Having said this, I think you should use the GetBlobReferenceFromServer method. What you could do is list all blobs from the database and call GetBlobReferenceFromServer for each of them. If the blob exists, you will get the blob type; if it doesn't exist, the method will give you an error. This would be the quickest way to identify the blob type of the existing entries in the database. If you find both block and page blobs when you check, you could store the blob type back in the database along with the existing record, so that if you later need to decide between creating a CloudBlockBlob or a CloudPageBlob reference, you can look at this field.
As I need to know the container name to call GetBlockBlobReference, can I reliably say that in the examples above my container would always be uploadfiles?
Yes. In the examples you listed above, you can say that the blob container is uploadfiles.
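The thread above is about the .NET Storage Client, but purely as an illustration, the same "ask the service" check with the current Python SDK looks roughly like this; the connection string is a placeholder and the container and blob names are taken from the example paths.

from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob_client = service.get_blob_client("uploadfiles", "xxx/yyy/Image-20140117170146.jpg")

try:
    # One round trip to the service: fails if the blob is missing,
    # otherwise reports whether it is a block, page, or append blob.
    props = blob_client.get_blob_properties()
    print(props.blob_type)
except ResourceNotFoundError:
    print("Blob does not exist")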

Verifying CloudBlob.UploadFromStream completed with no errors?

I want to save files users upload to my site into my Azure Blob and I am using the CloudBlob.UploadFromStream method to do so but I want to make sure the file completed saving to the blob with no problems before doing some more work. I am currently just uploading the blob then checking to see if a reference to the new blob exists using GetBlockBlobReference inside an if statement. Is there a better way of verifying the upload completed fine?
If there's any problem while uploading the blob, CloudBlob.UploadFromStream method would throw an error so that would be the first place to check if the upload went fine.
I don't think creating a reference for a blob using GetBlockBlobReference would do you any good, as it just creates an instance of CloudBlockBlob; it doesn't check whether the blob exists in storage. If you want to check that the blob exists in storage, you could either fetch the blob's attributes using the CloudBlockBlob.FetchAttributes method or create an instance of CloudBlob using CloudBlobContainer.GetBlobReferenceFromServer or CloudBlobClient.GetBlobReferenceFromServer. All three of these methods fetch information about the blob from storage and will throw appropriate errors if something is not right (e.g. a Not Found error if the blob does not exist).
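The answer above is in .NET terms; as a sketch of the same idea in Python with the v12 SDK (connection string, container, and file name are placeholder assumptions): rely on the exception raised by the upload itself, and optionally fetch properties afterwards as the FetchAttributes-style double check.

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob_client = service.get_blob_client("uploads", "report.pdf")

try:
    with open("report.pdf", "rb") as data:
        # upload_blob raises an exception if anything goes wrong,
        # so returning without error already means the upload completed.
        blob_client.upload_blob(data, overwrite=True)
except Exception as err:
    print(f"Upload failed: {err}")
else:
    # Optional double check: ask the service for the blob's properties.
    props = blob_client.get_blob_properties()
    print(f"Uploaded {props.size} bytes")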
