How to choose blob block size in Azure

I want to use Append blobs in Azure storage.
When I'm uploading a blob, I have to choose a block size.
What should I consider when choosing the block size?
I see no difference if I'm uploading a file whose size is bigger than the block size.
How do I choose the right block size?

Based on your description, I did some research that you can refer to for a better understanding of how blocks work in an append blob:
I checked CloudAppendBlob.AppendText and CloudAppendBlob.AppendFromFile. If the file or text content is smaller than 4 MB, it is uploaded as a single new block. Here I used CloudAppendBlob.AppendText to append text content (under 4 MB in size) three times; as the network traces showed, each call produced a separate Append Block request.
For content larger than 4 MB, the client SDK divides the content into 4 MB pieces and uploads each piece as its own block. Here I uploaded a file of about 48.8 MB; the network traces showed a sequence of Append Block requests, one per 4 MB piece.
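A minimal sketch of both behaviors, assuming the legacy WindowsAzure.Storage .NET SDK; the connection string, container, and blob names are placeholders:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholder connection string and names, for illustration only.
var account = CloudStorageAccount.Parse("<your-connection-string>");
var container = account.CreateCloudBlobClient().GetContainerReference("logs");
container.CreateIfNotExists();

CloudAppendBlob blob = container.GetAppendBlobReference("app.log");
if (!blob.Exists())
    blob.CreateOrReplace();

// Each AppendText call below is under 4 MB, so each call should become a
// single Append Block request (one new block per call).
blob.AppendText("first entry\n");
blob.AppendText("second entry\n");
blob.AppendText("third entry\n");

// For a file larger than 4 MB, the SDK splits the content into 4 MB
// chunks and issues one Append Block request per chunk.
blob.AppendFromFile(@"C:\data\large-file.bin");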
As Gaurav Mantri mentioned, you could choose a small block size for a slow network. Moreover, with a small block size you get better performance on write requests, but when you read the data it spans multiple separate blocks, which slows down your read requests. It depends on the write/read ratio your application expects. For optimal reads, I recommend batching writes to be as near 4 MB as possible; this makes write requests slower but reads much faster.
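If your individual writes are small, one way to batch them is to buffer in memory and append only when the buffer approaches 4 MB. A sketch with a hypothetical BatchingAppender helper (the flush threshold mirrors the 4 MB append block limit; individual entries are assumed to be well under 4 MB each):

using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

// Hypothetical helper: batches small writes into ~4 MB blocks before
// appending, keeping the block count low so later reads span fewer blocks.
class BatchingAppender
{
    const int FlushThreshold = 4 * 1024 * 1024; // 4 MB append block limit
    readonly CloudAppendBlob _blob;
    readonly MemoryStream _buffer = new MemoryStream();

    public BatchingAppender(CloudAppendBlob blob) { _blob = blob; }

    public void Write(string entry)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(entry);
        if (_buffer.Length + bytes.Length > FlushThreshold)
            Flush();                       // flush before overflowing 4 MB
        _buffer.Write(bytes, 0, bytes.Length);
    }

    public void Flush()
    {
        if (_buffer.Length == 0) return;
        _buffer.Position = 0;
        _blob.AppendBlock(_buffer);        // one near-4 MB block per flush
        _buffer.SetLength(0);
    }
}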

A few things to consider when deciding on the block size:
In case of an Append Blob, the maximum size of a block is 4 MB, so you can't go beyond that number.
Also, a maximum of 50,000 blocks can be uploaded, so you would need to divide the blob size by 50,000 to decide on the size of a block. For example, if you're uploading a 100 MB file and choose 100-byte blocks, you would end up with 1,048,576 (100x1024x1024/100) blocks, which is more than the allowed limit of 50,000, so it is not allowed; the sketch after this list shows the arithmetic.
Most importantly, I believe it depends on your Internet speed. If you have a really good Internet connection, you can go up to the 4 MB block size. For not-so-good Internet connections, you can reduce it. For example, I always try to use a 256-512 KB block size as the Internet connection I have is not good.
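For illustration, here is the arithmetic from the 100 MB example above as a quick check (C#):

using System;

long blobSize = 100L * 1024 * 1024;   // 100 MB file
long blockSize = 100;                 // candidate: 100-byte blocks

long blockCount = (blobSize + blockSize - 1) / blockSize;   // ceiling division
Console.WriteLine(blockCount);        // 1048576 -> exceeds the 50,000-block limit

long minBlockSize = (blobSize + 50000 - 1) / 50000;
Console.WriteLine(minBlockSize);      // 2098 bytes: smallest block size that fits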

Related

Is it possible to increase the size of a cell in Cassandra?

I want to insert a 16 MB image with the blob type in Cassandra.
However, I noticed that the practical limit on blob size is less than 1 MB.
(The description of the blob type is here.)
Other than splitting the image into multiple 1 MB pieces, I'm wondering if it is possible to increase the size of the cell to handle my requirement.
Thanks a lot.
The 1 MB limit specified in the documentation is a recommendation, not a hard limit. And it's a good recommendation, because otherwise you can get problems with maintenance operations, like repair, bootstrapping of new nodes, etc. I've seen cases (on older Cassandra versions) where people stored 1 MB blobs and couldn't add a new data center because the bootstrap failed. Nowadays it shouldn't be a problem, but the recommendation is still valid.
The usual recommendation is to store the file content on the file system and store the metadata, including the file path, in Cassandra. By doing that it's easier to host your images, especially if you're in the cloud; this will be more performant and cheaper.
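A sketch of that metadata pattern, assuming the DataStax C# driver; the keyspace, table, columns, and paths are hypothetical:

using System;
using System.IO;
using Cassandra; // DataStax C# driver (CassandraCSharpDriver package)

// Assumed schema, created beforehand:
// CREATE TABLE media.images (id uuid PRIMARY KEY, path text, size_bytes bigint);

var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect("media");

var id = Guid.NewGuid();
var path = $"/var/images/{id}.jpg";
File.Copy("photo.jpg", path);  // the 16 MB payload stays on the file system

// Only the metadata, including the file path, goes into Cassandra.
session.Execute(new SimpleStatement(
    "INSERT INTO images (id, path, size_bytes) VALUES (?, ?, ?)",
    id, path, new FileInfo(path).Length));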

Azure Blob Storage - What exactly does "Write Operations" mean?

I'm using the Azure Pricing Calculator for estimating storage costs for files (more specifically, SQL backups).
I'm currently selecting Block Blob Storage with Blob Storage account type.
There's a section in the pricing calculator that shows the cost of Write Operations and describes which API calls are Write Ops:
The following API calls are considered Write Operations: PutBlob, PutBlock, PutBlockList, AppendBlock, SnapshotBlob, CopyBlob and SetBlobTier (when it moves a Blob from Hot to Cool, Cool to Archive or Hot to Archive).
I looked at the docs for PutBlob and PutBlock, but neither really seems to mention "file" anywhere (except for PutBlob, which mentions a filename).
The PutBlob description says:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob.
The PutBlock description says:
The Put Block operation creates a new block to be committed as part of a blob.
Is it 1 block per file or is a file multiple blocks?
Are those 2 Put commands used for uploading files?
Does a write operation effectively mean 1 operation per 1 file?
For example, if I have 100 files, is that 100 write operations?
Or can 1 write operation write multiple files in a single op?
Let me try to explain it with a couple of scenarios. Considering you are using block blobs, I will explain using them only.
Uploading a 1 MB File: Assuming you have a 1 MB local file that you wish to save as a block blob. Considering the file size is relatively small, you can upload this file into blob storage using the Put Blob operation. Since you're calling this operation only once, you will be performing one write operation.
Uploading a 1 GB File: Now let's assume that you have a 1 GB local file that you wish to save as a block blob. Considering the file size is big, you decide to logically split the file into 1 MB chunks (i.e. you logically split your 1 GB local file into 1024 chunks). BTW, these chunks are also known as blocks. Now you upload each of these blocks using the Put Block operation and then finally stitch the blocks together using the Put Block List operation to create your blob. Since you're calling 1024 Put Block operations (one for each block) and then 1 Put Block List operation, you will be performing 1025 write operations (1024 + 1).
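A sketch of scenario 2, assuming the legacy WindowsAzure.Storage .NET SDK; the connection string, container, blob, and file path are placeholders:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse("<your-connection-string>");
var container = account.CreateCloudBlobClient().GetContainerReference("backups");
CloudBlockBlob blob = container.GetBlockBlobReference("db.bak");

const int BlockSize = 1 * 1024 * 1024; // 1 MB per block, as in the scenario
var blockIds = new List<string>();

using (var file = File.OpenRead(@"C:\backups\db.bak"))
{
    var buffer = new byte[BlockSize];
    int read, index = 0;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Block IDs must be Base64-encoded and equal length within a blob.
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
        blob.PutBlock(blockId, new MemoryStream(buffer, 0, read), null); // one write operation each
        blockIds.Add(blockId);
    }
}

blob.PutBlockList(blockIds); // one final write operation to commit the blob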
Now to answer your specific questions:
Is it 1 block per file or is a file multiple blocks?
It depends on whether you used the Put Blob operation or the Put Block operation to upload the file. In scenario 1 above, it is just 1 block per file (or blob) because you used the Put Blob operation; however, in scenario 2 it is 1024 blocks per file (or blob) because you used the Put Block operation.
Are those 2 Put commands used for uploading files?
Yes. Again, depending on the file size, you may decide to use either the Put Blob or the Put Block/Put Block List operations to upload files. The maximum size of a file that can be uploaded by a single Put Blob operation is 100 MB. That means if the file size is greater than 100 MB, you must use Put Block/Put Block List to upload it; if it is less than 100 MB, you can use either approach.
Does a write operation effectively mean 1 operation per 1 file? For example, if I have 100 files, is that 100 write operations?
At the minimum, yes. If each of the 100 files is uploaded using the Put Blob operation, it would amount to 100 write operations.
Or can 1 write operation write multiple files in a single op?
No, that's not possible.
Operations are metered at the REST level, so you may see more than one operation for a given blob being written, especially if its total size exceeds the maximum payload of a single Put Block/Put Page operation (either 4 MB or 100 MB for a block, 4 MB for a page).
For a block blob, there's a follow-on Put Block List call, after all of the Put Block calls, resulting in yet another metered operation.
There are similar considerations for Append blobs.

Used space too close to allocated space for Azure SQL Database

The following is a screenshot of what I see on the dashboard of my Azure SQL database. I did check the documentation on this, so I understand what each data point means. What I'd like to understand is whether it's problematic for the used space to be that close to the allocated space.
The only data point I set is the max storage space and the rest are managed by Azure SQL Database so it should be OK but I don't want to make assumptions.
Your allocated space will grow automatically; you don't have to worry about it, that is normal. You will always see used space close to allocated space. The key is when allocated space and used space are getting close to the maximum storage size.
If you see the database size reaching the maximum size, you may need to run the following statement to increase the maximum size, or adjust it via the Azure portal.
ALTER DATABASE AzureDB2 MODIFY (EDITION = 'STANDARD', MAXSIZE = 50 GB);
From on-premises data warehouse setups, I know it's better to enlarge the allocated space in larger chunks than in small increments each time. IMO you would want to tune the autogrowth when you plan to insert large chunks of data.

Understanding Azure Block Blob Storage Limitations

I am a bit confused about how block blob storage works, and therefore about how the limitations apply (from what I read, most people will not get anywhere close to the limits, but I would still like to understand how they are applied). I have been reading this post.
The limitations seem to be as follows:
Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data that can be managed individually.
Azure Blob storage limits (Resource: Target):
Max size of single blob container: Same as max storage account capacity
Max number of blocks in a block blob or append blob: 50,000 blocks
Max size of a block in a block blob: 100 MB
I understand the limits quoted above, but I don't understand what the "block blob" in them actually is.
If I store, say, all my pictures in 1 container, will I be reaching the limit?
Say I had something super crazy, like 10 million photos in this pictures container, each photo 100 MB; would I have gone over the limit?
Or does "block blob" mean that if I had img001 and it was 1 GB, it would get separated into blocks, and the limit would be 50,000 blocks?
Then img002 would have its own 50,000-block limit, and so forth (i.e. the storage limits apply to each image, not to the total container size)?
The 50,000-block limit is per object (blob) in the container, not for the container as a whole. You can have multiple objects, each of up to 50,000 x 100 MB in size; that product is roughly 4.77 TB, which is where the "about 4.7 TB" maximum blob size comes from.
Normally you don't hit these limits. At least I have never been able to hit them. :-)
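As a quick sanity check on those numbers (illustrative):

using System;

long maxBlocks = 50_000;
long maxBlockBytes = 100L * 1024 * 1024;              // 100 MB per block
long maxBlobBytes = maxBlocks * maxBlockBytes;
Console.WriteLine(maxBlobBytes / Math.Pow(1024, 4));  // ~4.77 TB per blob, the "about 4.7 TB" figure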
You can find more information at https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs.

Is UploadFromStream function of CloudBlockBlob atomic for sizes less than 64 MB?

I am reading a file from a BlockBlob. The file size is guaranteed to be less than 64 MB, therefore it is a single-block operation. After reading the file, I change some parts of it and re-upload it via the UploadFromStream function of CloudBlockBlob. My question is: is the UploadFromStream function of CloudBlockBlob atomic for sizes less than 64 MB? Is there a possibility that I end up with a corrupt file on Azure storage after an exception during the write process?
Note: I've asked a similar question for AppendBlobs and got an answer that it is atomic.
Yes, it's atomic if it's under 64 MB, unless you parallelize, as parallelizing will chunk the data. Even for data greater than 64 MB with block blobs, there's a two-step commit process, so if the upload fails in the middle you're still in relatively good shape. If we upload chunks of data in 4 MB blocks, we also have to commit these blocks. So if uploading fails, we won't commit, and all you'll have is some extra uncommitted blocks only accessible via the Get Block List operation (i.e., the uncommitted blocks are not downloadable). So for block blobs, an upload failing in the middle won't overwrite your existing data or corrupt it in general.
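If you want the all-or-nothing Put Blob path to be explicit, the legacy WindowsAzure.Storage .NET SDK lets you pin it down via request options. A sketch; the connection string and names are placeholders:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse("<your-connection-string>");
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("data")
                  .GetBlockBlobReference("file.json");

var options = new BlobRequestOptions
{
    // Uploads up to this size go as one Put Blob call (atomic); larger
    // uploads fall back to Put Block + Put Block List.
    SingleBlobUploadThresholdInBytes = 64 * 1024 * 1024,
    // Avoid parallel chunking, which would split even small uploads.
    ParallelOperationThreadCount = 1
};

using (var stream = File.OpenRead("file.json"))
{
    blob.UploadFromStream(stream, null, options);
}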
