Windows Azure: Can't upload a 34 MB file onto the blob

I am trying to upload a 34 MB file to a blob, but I get the following error:
XML Parsing Error: no element found
Location: http://127.0.0.1:83/Default.aspx
Line Number 1, Column 1:
How can I solve this? I am able to upload small files of around 500 KB, but I need to upload a 34 MB file into my blob container.
I tried it using the following code:
protected void ButUpload_click(object sender, EventArgs e)
{
    // store the uploaded file as a blob
    if (uplFileUpload.HasFile)
    {
        name = uplFileUpload.FileName;
        // get a reference to the cloud blob container
        CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");
        // set the name for the uploaded file
        string UploadDocName = name;
        // get the blob reference and set the metadata properties
        CloudBlob blob = blobContainer.GetBlobReference(UploadDocName);
        blob.Metadata["FILETYPE"] = "text";
        blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;
        // upload the blob to storage
        blob.UploadFromStream(uplFileUpload.FileContent);
    }
}
But the upload fails. Can anyone tell me how to do this?

Blobs larger than 64 MB must be uploaded as blocks: you break the file into blocks, upload each block with a unique string identifier (block ID), and at the very end you commit the list of block IDs to assemble the entire blob in one go.
Uploading in blocks is also recommended for large blobs under 64 MB. It is very easy for a hiccup in the network connection or in routing across the internet to lose a frame or two during a very large upload, which corrupts or invalidates the entire upload. Use smaller blocks to reduce your exposure to cosmic events.
More info in this discussion thread: http://social.msdn.microsoft.com/Forums/en-NZ/windowsazure/thread/f4575746-a695-40ff-9e49-ffe4c99b28c7
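For reference, here is a minimal sketch of the block-upload pattern with the classic Microsoft.WindowsAzure.Storage SDK; the helper name, the 1 MB block size and the block-ID scheme are illustrative assumptions, not part of the original answer.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploader
{
    public static void UploadInBlocks(CloudBlockBlob blob, Stream source, int blockSize = 1 * 1024 * 1024)
    {
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];
        int read, index = 0;

        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // block IDs must be base64-encoded strings of equal length within a blob
            string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));
            blob.PutBlock(blockId, new MemoryStream(buffer, 0, read), null);
            blockIds.Add(blockId);
            index++;
        }

        // commit the uploaded blocks, in order, to assemble the final blob
        blob.PutBlockList(blockIds);
    }
}

In the button handler above you would get a CloudBlockBlob via blobContainer.GetBlockBlobReference(UploadDocName) and call UploadInBlocks(blob, uplFileUpload.FileContent) instead of UploadFromStream.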

I would start by dropping some logging into the project to try to track the problem down; it may not be happening where you think. There might also be a permissions error. Try uploading some small dummy data first; if that also fails, it points to a permissions or configuration problem.
Track it down yourself with some debugging, logging and code review; I bet you will get to the bottom of the problem sooner that way, and it will also help make your code more robust.

You can use blobs here; I think it is an issue with your web request size limit. You can change this setting in web.config by increasing the maxRequestLength attribute of the <httpRuntime> element (see the sketch below). Also, if you are sending chunks of 500 KB you are wasting bandwidth and bringing down performance; send bigger chunks of data, such as 1-2 MB per chunk. See my Silverlight or HTML5 based upload control for chunked uploads: Pick Your Azure File Upload Control: Silverlight and TPL or HTML5 and AJAX
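For what it's worth, a minimal web.config sketch for raising the request size limit; the 50 MB values are assumptions and should be sized to your largest expected upload (maxRequestLength is in KB, maxAllowedContentLength is in bytes and only applies on IIS 7+).

<system.web>
  <httpRuntime maxRequestLength="51200" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="52428800" />
    </requestFiltering>
  </security>
</system.webServer>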

Use the Blob Transfer Utility to download and upload all your blob files.
It's a tool for handling thousands of (small or large) blob transfers in an effective way.
Binaries and source code, here: http://bit.ly/blobtransfer

Related

Does Azure Blob Storage support partial content (206) by default?

I am using Azure Blob storage to store all my images and videos. I have implemented the upload and fetch functionality and it's working quite well. I am facing one issue while loading videos: when I use the URL generated after uploading a video to blob storage, the browser downloads all the content before rendering it to the user. So if the video size is 100 MB, it downloads the entire 100 MB and until then the user can't see the video.
I have done a lot of R&D and learned that while rendering the video I need to fetch partial content (status 206) rather than fetching the whole video at once. After adding the request header "Range:bytes-500", I tried to hit the blob URL, but it still downloaded the whole content. I then checked some open source video URLs, hit those with the "Range" request header, and they returned a 206 response status, meaning they properly returned partial content instead of the full video.
I read some forums which say Azure storage supports partial content and that it needs to be enabled from the properties, but I have checked all the options under the Azure storage account and didn't find anything to enable this functionality.
Can anyone please help me resolve this, or point out anything on the Azure portal that I need to enable? I have been doing R&D on this for a week now. Any help would be really appreciated.
Thank you! Stay safe.
If the Accept-Ranges header is not enabled, then (as I learned from a blog post) you need to set the default service version of the storage account.
Below is a sample to implement it.
// set the default service version so that range requests (206) are honoured
var credentials = new StorageCredentials("account name", "account key");
var account = new CloudStorageAccount(credentials, true);
var client = account.CreateCloudBlobClient();
var properties = client.GetServiceProperties();
properties.DefaultServiceVersion = "2019-07-07";
client.SetServiceProperties(properties);
After setting the property, the response includes the Accept-Ranges header (before/after response-header screenshots omitted).
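To double-check that range requests are now honoured, you can issue a GET with a Range header yourself and look for a 206 status. A minimal sketch, assuming a publicly readable blob at a hypothetical URL:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RangeCheck
{
    static async Task Main()
    {
        // hypothetical blob URL - replace with your own
        var blobUrl = "https://myaccount.blob.core.windows.net/videos/sample.mp4";

        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, blobUrl);
        // ask for the first 1024 bytes only
        request.Headers.Range = new RangeHeaderValue(0, 1023);

        var response = await http.SendAsync(request);
        // expect 206 (PartialContent) when range support is active
        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(response.Content.Headers.ContentRange);
    }
}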
Assuming the video content is MPEG-4, the issue may be the media itself: the moov atom may need to be moved from the end of the file to the beginning. The browser won't render the video until it finds the moov atom, so you want to make sure the atom is at the start of the file, which can be accomplished with ffmpeg's faststart option (-movflags +faststart). Here's a good article with more detail: HERE
You just need to update your Azure Storage default service version; it will work automatically after the update.
Using Azure CLI
Just run:
az storage account blob-service-properties update --default-service-version 2021-08-06 -n yourStorageAccountName -g yourStorageResourceGroupName
List of available versions:
https://learn.microsoft.com/en-us/rest/api/storageservices/previous-azure-storage-service-versions
To see your current version, open a file and inspect the x-ms-version response header.
The following is the SDK code I used to download the contents:
var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
await container.CreateIfNotExistsAsync();
BlobClient blobClient = container.GetBlobClient(fileName);
Stream stream = new MemoryStream();
var result = await blobClient.DownloadToAsync(stream, cancellationToken: ct);
which DOES download the whole file right away. Unfortunately, the solutions in the other answers seem to reference a different SDK, so for the SDK I use (Azure.Storage.Blobs) the solution is to use the OpenReadAsync method:
long kBytesToReadAtOnce = 300;
long bytesToReadAtOnce = kBytesToReadAtOnce * 1024;
// open a lazy, seekable read stream over the blob; data is fetched in bufferSize chunks
var result = await blobClient.OpenReadAsync(0, bufferSize: (int)bytesToReadAtOnce, cancellationToken: ct);
By default it fetches 4 MB of data, so you have to override the value with a smaller amount if you want your app to have a smaller memory footprint.
I think that internally the SDK sends the requests with the byte range already set, so all you have to do is enable partial content support in Web API like this:
return new FileStreamResult(result, contentType)
{
EnableRangeProcessing = true,
};
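Putting the pieces together, a minimal ASP.NET Core controller action might look like the sketch below; the route, container name, and video/mp4 content type are assumptions, not part of the original code.

[HttpGet("videos/{fileName}")]
public async Task<IActionResult> GetVideo(string fileName, CancellationToken ct)
{
    var container = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
    BlobClient blobClient = container.GetBlobClient(fileName);

    // stream the blob lazily in ~300 KB chunks instead of buffering the whole file
    Stream stream = await blobClient.OpenReadAsync(0, bufferSize: 300 * 1024, cancellationToken: ct);

    return new FileStreamResult(stream, "video/mp4")
    {
        EnableRangeProcessing = true   // lets clients request byte ranges (206)
    };
}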

Azure blob storage file being accessed by multiple azure nodes

I have multiple JSON files being pushed to an Azure storage account under a specific container; there are n files in the container.
There are 4 to 8 nodes which access the storage container to download the files locally; the download code is written in Java.
Since there are n files and multiple nodes accessing the container at the same time, how do I avoid the same file being downloaded by more than one server?
Example:
The Azure container has 1.json, 2.json, 3.json, etc., each > 35 MB in size.
batch-process-node1 -> starts downloading 1.json
batch-process-node2 -> starts downloading 2.json
batch-process-node3 -> should not start downloading the 1.json
Is there any logic that can be built into each node's Java process so that each file is downloaded by exactly one node?
Is there any setting that can be set in the Azure storage container?
--
I am trying to use the Camel azure-blob component with the block blob blobType.
I am new to Azure Blob storage; any help is appreciated.
Since we are already using Apache Camel in our code, we tried the Camel azure-blob component to address the issue. Below is the approach we used; a small race window remains, but that is acceptable for our scenario.
The Camel route starts with a timer consumer, and a producer gets the list of blobs from the container using the endpoint below:
azure-blob://<account>/<container>?credentials=#storagecredentials&blobType=blockBlob&operation=listBlobs
Note: storagecredentials is a bean of type StorageCredentialsAccountAndKey.
We created a Java class implementing Camel's Processor; in its process() method, exchange.getIn().getBody() provides an iterable of ListBlobItem objects.
First, we set the metadata of the blob using the endpoint below:
azure-blob://<account>/<container>/*<blobName>*?credentials=#storagecredentials&blobType=blockBlob&operation=updateBlockBlob&blobMetadata=#blobMetaData1
Note: blobMetaData1 is a bean created in the context file:
<util:map id="blobMetaData1" map-class="java.util.HashMap">
<entry key="someKey" value="someValue"/>
</util:map>
Key thing: in this processor's process() method we check whether the metadata is already set; if it is, another process has already picked up the blob, so it won't be picked again even when the processes run on different servers. We get the blob name from each individual ListBlobItem using getURI() and form the endpoint inside this processor class; to invoke that custom endpoint, we set it as a custom header value on the In message.
We then use Camel's recipientList option, which invokes the metadata endpoint to update the specific blob.
Another processor then forms the download-blob endpoint
azure-blob://<account>/<container>/*<blobName>*?credentials=#storagecredentials&blobType=blockBlob&operation=getBlob
and recipientList again takes the endpoint from the message header to download the blob.
Finally, another delete endpoint removes the blob once it has been downloaded.
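For comparison, here is the same claim-via-metadata idea sketched in C# with the classic Microsoft.WindowsAzure.Storage SDK rather than Camel; the metadata key, node id, and the If-Match ETag condition (which closes the race window by making the claim atomic) are additions of this sketch, not part of the answer above.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobClaimer
{
    public static bool TryClaimAndDownload(CloudBlockBlob blob, string nodeId, string localPath)
    {
        blob.FetchAttributes();                      // load current metadata and ETag
        if (blob.Metadata.ContainsKey("claimedBy"))
            return false;                            // another node already picked this blob

        blob.Metadata["claimedBy"] = nodeId;
        try
        {
            // succeeds only if nothing changed the blob since FetchAttributes
            blob.SetMetadata(AccessCondition.GenerateIfMatchCondition(blob.Properties.ETag));
        }
        catch (StorageException e) when (e.RequestInformation.HttpStatusCode == 412)
        {
            return false;                            // lost the race to another node
        }

        blob.DownloadToFile(localPath, FileMode.Create);
        blob.Delete();                               // remove the blob after a successful download
        return true;
    }
}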

How to create a blob in node (server side) from a stream, a file or a base64 string?

I am trying to create a blob from a PDF I am generating with pdfmake, so that I can send it to a remote API that only handles blobs.
This is how I get my PDF file:
var docDefinition = { content: 'This is an sample PDF printed with pdfMake' };
// pdfDoc is the PDFKit document created from docDefinition (e.g. via pdfmake's createPdfKitDocument)
pdfDoc.pipe(fs.createWriteStream('./pdfs/test.pdf'));
pdfDoc.end();
The above lines of code do produce a readable pdf.
Now how can I get a blob from there? I have tried many options (creating the blob from the stream with the blob-stream module, creating it from the file with fs, creating it from a base64 string with b64toBlob), but all of them at some point require the Blob constructor, for which I always get an error even when I require the blob module:
TypeError: Blob is not a constructor
After some research I found that it seems that the Blob constructor is only supported client-side.
All the npm packages that I have found and which seem to deal with this issue seem to only work client-side: blob-stream, blob, blob-util, b64toBlob, etc.
So, how can I create a blob server-side on Node?
I don't understand why almost nobody else seems to need to create a blob server-side. The only thread I could find on the subject is this one.
According to that thread, apparently:
The Solution to this problem is to create a function which can convert between Array Buffers and Node Buffers. :)
Unfortunately this does not help me much as I clearly seem to lack some important knowledge here to be able to comprehend this.
Use the node-blob npm package:
const Blob = require('node-blob');
let myBlob = new Blob(["something"], { type: 'text/plain' });

How to make a block blob temporarily unavailable

I use Azure blob storage to serve images. Sometimes I need to make an image temporarily unavailable (e.g. someone reported the image) and later recover or delete it. How do I achieve this?
My container is public:
container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
This is how I made the blob:
CloudBlockBlob block_blob = ImagesContainer.GetBlockBlobReference(blob_name);
block_blob.Properties.ContentType = file.ContentType;
block_blob.Properties.CacheControl = "public, max-age=2592000"; // 30 days
block_blob.UploadFromStream(file.InputStream);
Is there something like:
block_blob.Properties.AccessType = private;
block_blob.SetProperties();
so that I can make it unavailable to everyone? And later I might recover it by setting the property to "public". I can't find any properties related to this usage.
Thanks very much.
There is no way to do this in a public container. I would recommend copying the blob to a private container and deleting it from the public container; you can then copy it back whenever you want to make it available again. Using the Copy Blob API (http://msdn.microsoft.com/en-us/library/azure/dd894037.aspx) between two containers in a single storage account is very fast, even for large files.
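A minimal sketch of that approach with the classic storage SDK; the container names and blobClient variable are assumptions, and older SDK versions expose the copy as StartCopyFromBlob rather than StartCopy.

// hide: copy the blob into a private container, then delete the public copy
CloudBlobContainer publicContainer = blobClient.GetContainerReference("images");
CloudBlobContainer privateContainer = blobClient.GetContainerReference("images-quarantine");
privateContainer.CreateIfNotExists();   // private by default (no public access policy set)

CloudBlockBlob source = publicContainer.GetBlockBlobReference(blobName);
CloudBlockBlob hidden = privateContainer.GetBlockBlobReference(blobName);
hidden.StartCopy(source);               // StartCopyFromBlob(source) on older SDKs
// in production, poll hidden.CopyState until it reports Success before deleting the source
source.Delete();

// recover: copy it back and delete the quarantined copy
source.StartCopy(hidden);
hidden.Delete();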

does azure blob storage use gzip across the wire

I want to know if there is a benefit to zipping files before sending them to Azure Blob Storage - strictly for transfer purposes. Put another way, will pre-zipping files make file transfers any faster when going to/from blob storage? Or does this automatically happen at the transport level by using gzip?
As of 12 August 2015, Azure Blob storage (when fronted by the CDN) supports automatic gzip compression:
Compression method - Supported compression methods are gzip/deflate/bzip2; a supported method must be set in the Accept-Encoding request header.
(From: Improve performance by compressing files)
UPDATE
I'm unsure of what and how I originally did this, but all I can think is that I was looking at the results incorrectly. Everything I can read about azure (from MSDN, to the code itself) is now telling me that Azure does not support gzip for transfer purposes. I do not know under what circumstances I was able to get the following results and am unable to reproduce them now. Needless to say, I'm very disappointed.
(THIS ANSWER IS INCORRECT, SEE THE UPDATE ABOVE) The answer is no, there is no transfer-speed benefit to zipping a file before sending it to blob storage. By turning on Fiddler, you can see that the transport level automatically gzips content across the wire (the confirming Fiddler screenshots are omitted here).
Edit 1 - Quick Clarifications for Gaurav
The byte array that comes back in code has a length of 386803, but the network card only saw 23505 bytes go by, because it was gzipped by Azure in the response. I didn't have to do anything for that to happen.
Here is the code I'm using to initiate the request from Blob Storage
public Byte[] Read(string containerName, string filename)
{
    CheckContainer(containerName);
    Initialize();
    // Retrieve a reference to a previously created container.
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    // Retrieve a reference to the requested blob.
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
    byte[] buffer;
    // Download the blob contents into a memory stream and copy them to a byte array.
    using (var stream = new MemoryStream())
    {
        blockBlob.DownloadToStream(stream);
        stream.Seek(0, SeekOrigin.Begin);
        buffer = new byte[stream.Length];
        stream.Read(buffer, 0, (int)stream.Length);
    }
    return buffer;
}
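If the goal is simply smaller transfers to browsers, one common workaround (an addition here, not part of the answer above) is to gzip the content yourself before uploading and set the blob's ContentEncoding property; clients that send Accept-Encoding: gzip will then decompress transparently. A rough sketch, reusing the _blobClient field from the Read method and assuming System.IO.Compression is available; note that blob storage does no content negotiation, so every client receives the gzipped bytes.

public void WriteCompressed(string containerName, string filename, byte[] content)
{
    CheckContainer(containerName);
    Initialize();
    CloudBlobContainer container = _blobClient.GetContainerReference(containerName);
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);

    using (var compressed = new MemoryStream())
    {
        // gzip the payload into the memory stream
        using (var gzip = new GZipStream(compressed, CompressionMode.Compress, leaveOpen: true))
        {
            gzip.Write(content, 0, content.Length);
        }
        compressed.Seek(0, SeekOrigin.Begin);

        // tell clients the stored bytes are gzip-encoded
        blockBlob.Properties.ContentEncoding = "gzip";
        blockBlob.UploadFromStream(compressed);
    }
}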
