I created a Web API that allows users to send files and upload them to Azure Storage. The way it works is: the client app connects to the API and sends one or more files to the file upload controller, and the controller takes care of the rest, such as:
Upload file to Azure storage
Update database
It works great, but I don't think it is the right way to do this, because now there are two different processes:
Upload file from the client's file system to my web API (server)
Upload file to the Azure storage from API (server)
It gives me the feeling that I am duplicating the upload process, as the same file first travels from the client (file system) to the API (server) and then to Azure (destination). I feel the need to show the client two progress bars for file upload progress (from client to server and then from server to Azure). That just doesn't make sense to me, and I feel that my approach is incorrect.
My API accepts files up to 250 MB, so you can imagine the load.
What do you guys think?
//// API Controller
if (!Request.Content.IsMimeMultipartContent("form-data"))
{
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}

var provider = new RestrictiveMultipartMemoryStreamProvider();
var contents = await Request.Content.ReadAsMultipartAsync(provider);

int Total_Files = contents.Contents.Count();

foreach (HttpContent ctnt in contents.Contents)
{
    await storageManager.AddBlob(ctnt);
}
////// Stream
#region StreamHelper
public class RestrictiveMultipartMemoryStreamProvider : MultipartMemoryStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var extensions = new[] { "pdf", "doc", "docx", "cab", "zip" };
        var filename = headers.ContentDisposition.FileName.Replace("\"", string.Empty);

        // Reject files without an extension.
        if (filename.IndexOf('.') < 0)
            return Stream.Null;

        // Only buffer content whose extension is in the whitelist; everything else is discarded.
        var extension = filename.Split('.').Last();
        return extensions.Any(i => i.Equals(extension, StringComparison.InvariantCultureIgnoreCase))
            ? base.GetStream(parent, headers)
            : Stream.Null;
    }
}
#endregion StreamHelper
///// AddBlob
public async Task<string> AddBlob(HttpContent _Payload)
{
    CloudStorageAccount cloudStorageAccount = KeyVault.AzureStorage.GetConnectionString();
    CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
    CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("SomeContainer");
    cloudBlobContainer.CreateIfNotExists();
    try
    {
        // Await the read instead of blocking on .Result inside an async method.
        byte[] fileContentBytes = await _Payload.ReadAsByteArrayAsync();

        CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference("SomeBlob");
        blob.Properties.ContentType = _Payload.Headers.ContentType.MediaType;
        await blob.UploadFromByteArrayAsync(fileContentBytes, 0, fileContentBytes.Length);

        var snapshot = await blob.CreateSnapshotAsync();
        await snapshot.FetchAttributesAsync();
        return "Snapshot ETAG: " + snapshot.Properties.ETag.Replace("\"", "");
    }
    catch (Exception X)
    {
        return "Error : " + X.Message;
    }
}
It gives me the feeling that I am duplicating the upload process, as the same file first travels from the client (file system) to the API (server) and then to Azure (destination).
I think you're correct. One possible solution would be to have your API generate a Shared Access Signature (SAS) token and return that SAS token/URI to the client whenever the client wishes to upload a file.
Using this SAS URI, the client can upload the file directly to Azure Storage without sending it to your API first. Once the file is uploaded successfully, the client can send a message to the API to update the database (a rough sketch of both halves follows the links below).
You can read more about SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1.
I have also written a blog post long time back on using SAS that you may find useful: https://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/.
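For illustration, here is a minimal sketch of both halves using the same classic WindowsAzure.Storage types the question already uses. The container name, the 15-minute expiry, and the method names are assumptions, not part of the original code:

// Server side (API): create a write-only SAS for a single blob and hand it to the client.
// "somecontainer" and the 15-minute window are illustrative assumptions.
public string GetUploadSasUri(string blobName)
{
    CloudStorageAccount account = CloudStorageAccount.Parse("<your connection string>");
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("somecontainer");
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Create | SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
    };

    // The SAS token is appended to the blob URI; the client never sees the account key.
    return blob.Uri + blob.GetSharedAccessSignature(policy);
}

// Client side: upload straight to storage with the SAS URI, then tell the API to update the database.
public async Task UploadDirectlyAsync(string sasUri, Stream fileStream)
{
    var blob = new CloudBlockBlob(new Uri(sasUri));
    await blob.UploadFromStreamAsync(fileStream);
    // e.g. call a hypothetical API endpoint here to record the upload in the database.
}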
Related
My code below uploads an Excel file from a user's form submission. However, I'd like to work with the Excel file using EPPlus to write it to the database. For that, I need the file's address on the disk, rather than the web address. How can I get that?
Relevant code:
{
    public class ExcelService
    {
        public async Task<string> UploadExcelAsync(HttpPostedFileBase upload)
        {
            string excelFullPath = null;
            if (upload == null || upload.ContentLength == 0)
            {
                return null;
            }
            try
            {
                CloudStorageAccount cloudStorageAccount = ConnectionString.GetConnectionString();
                CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
                CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("excel");
                if (await cloudBlobContainer.CreateIfNotExistsAsync())
                {
                    await cloudBlobContainer.SetPermissionsAsync(
                        new BlobContainerPermissions
                        {
                            PublicAccess = BlobContainerPublicAccessType.Blob
                        }
                    );
                }
                string excelName = Guid.NewGuid().ToString() + "-" + Path.GetExtension(upload.FileName);
                CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(excelName);
                cloudBlockBlob.Properties.ContentType = upload.ContentType;
                await cloudBlockBlob.UploadFromStreamAsync(upload.InputStream);
                excelFullPath = cloudBlockBlob.Uri.ToString();
            }
            catch (Exception ex)
            {
            }
            return excelFullPath;
        }
    }
}
You can't. Azure Blob Storage is a cloud service that only exposes web access to the stored blobs, so there is no physical location on a disk you can access. Unless you download the file, edit it, and upload it again, there is no way to edit it directly.
An alternative could be to use Azure Files. It is still an Azure cloud-based storage service, but it allows you to map a file share to your machine so you can access it like any network share. See the docs. This, of course, only works if you are able to mount a file share on the web server.
Unfortunately, you cannot mount Azure Files shares from Azure Web Apps (source), so if that is how your web app is hosted, then you are out of luck there.
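That said, EPPlus can open a workbook from a stream, so for the scenario in the question a disk path may not be needed at all. A rough sketch, reusing the ConnectionString helper and "excel" container from the question (the blob-name parameter and the processing step are hypothetical):

// Download the blob into memory and hand the stream to EPPlus; no local file path required.
public async Task ProcessExcelBlobAsync(string blobName)
{
    CloudStorageAccount account = ConnectionString.GetConnectionString();
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("excel");
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    using (var stream = new MemoryStream())
    {
        await blob.DownloadToStreamAsync(stream);
        stream.Position = 0;

        using (var package = new ExcelPackage(stream))
        {
            ExcelWorksheet sheet = package.Workbook.Worksheets.First();
            // Read cells from the worksheet and write them to the database here.
        }
    }
}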
I want File storage specifically, not Blob storage (I think). This is the code for my Azure Function, and I just have a bunch of stuff in my node_modules folder.
What I would like to do is zip up the entire app, upload that single archive, and have Azure unpack it into a given folder. Is this possible?
Right now I'm essentially iterating over all of my files and calling:
var fileStream = new stream.Readable();
fileStream.push(myFileBuffer);
fileStream.push(null);

fileService.createFileFromStream('taskshare', 'taskdirectory', 'taskfile', fileStream, myFileBuffer.length, function(error, result, response) {
  if (!error) {
    // file uploaded
  }
});
And this works, it's just too slow. So I'm wondering if there is a faster way to upload a bunch of files for use in apps.
And this works, it's just too slow. So I'm wondering if there is a faster way to upload a bunch of files for use in apps.
If the Microsoft Azure Storage Data Movement Library is acceptable, please give it a try. The Microsoft Azure Storage Data Movement Library is designed for high-performance uploading, downloading, and copying of Azure Storage blobs and files. This library is based on the core data movement framework that powers AzCopy.
We can also get the demo code from the GitHub documentation:
string storageConnectionString = "myStorageConnectionString";
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("mycontainer");
blobContainer.CreateIfNotExists();
string sourcePath = "path\\to\\test.txt";
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob");

// Set up the number of concurrent operations
TransferManager.Configurations.ParallelOperations = 64;

// Set up the transfer context and track the upload progress
SingleTransferContext context = new SingleTransferContext();
context.ProgressHandler = new Progress<TransferStatus>((progress) =>
{
    Console.WriteLine("Bytes uploaded: {0}", progress.BytesTransferred);
});

// Upload a local blob
var task = TransferManager.UploadAsync(
    sourcePath, destBlob, null, context, CancellationToken.None);
task.Wait();
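Since the question is about File storage and a whole folder of files, here is a rough sketch of pointing the same library at an Azure file share instead of a blob container. It assumes DMLib's directory-upload overload for CloudFileDirectory and reuses the 'taskshare'/'taskdirectory' names from the question; check the library's documentation for the exact signatures:

CloudStorageAccount account = CloudStorageAccount.Parse("myStorageConnectionString");
CloudFileClient fileClient = account.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("taskshare");
share.CreateIfNotExists();

// Destination directory inside the share ("taskdirectory" comes from the question).
CloudFileDirectory destDir = share.GetRootDirectoryReference()
                                  .GetDirectoryReference("taskdirectory");
destDir.CreateIfNotExists();

// Let DMLib parallelize the transfer instead of uploading files one at a time.
TransferManager.Configurations.ParallelOperations = 64;

var options = new UploadDirectoryOptions { Recursive = true };
var context = new DirectoryTransferContext
{
    ProgressHandler = new Progress<TransferStatus>(p =>
        Console.WriteLine("Files transferred: {0}", p.NumberOfFilesTransferred))
};

// Upload the whole local folder (e.g. the app including node_modules) in one call.
TransferStatus status = TransferManager.UploadDirectoryAsync(
    @"path\to\localApp", destDir, options, context).Result;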
I've been trying to create a Windows Azure blob containing an image file. I followed these tutorials: http://www.nickharris.net/2012/11/how-to-upload-an-image-to-windows-azure-storage-using-mobile-services/ and http://www.windowsazure.com/en-us/develop/mobile/tutorials/upload-images-to-storage-dotnet/. The following code is the result of merging them. On the last line, however, an exception is raised:
An exception of type 'System.TypeLoadException' occurred in
mscorlib.ni.dll but was not handled in user code
Additional information: A binding for the specified type name was not
found. (Exception from HRESULT: 0x80132005)
Even the container and the table entry are created, but it doesn't work properly.
private async void SendPicture()
{
    StorageFile media = await StorageFile.GetFileFromPathAsync("fanny.jpg");
    if (media != null)
    {
        //add todo item to trigger insert operation which returns item.SAS
        var todoItem = new Imagem()
        {
            ContainerName = "mypics",
            ResourceName = "Fanny",
            ImageUri = "uri"
        };
        await imagemTable.InsertAsync(todoItem);

        //Upload image direct to blob storage using SAS and the Storage Client library for Windows CTP
        //Get a stream of the image just taken
        using (var fileStream = await media.OpenStreamForReadAsync())
        {
            //Our credential for the upload is our SAS token
            StorageCredentials cred = new StorageCredentials(todoItem.SasQueryString);
            var imageUri = new Uri(todoItem.SasQueryString);

            // Instantiate a Blob store container based on the info in the returned item.
            CloudBlobContainer container = new CloudBlobContainer(
                new Uri(string.Format("https://{0}/{1}",
                    imageUri.Host, todoItem.ContainerName)), cred);

            // Upload the new image as a BLOB from the stream.
            CloudBlockBlob blobFromSASCredential =
                container.GetBlockBlobReference(todoItem.ResourceName);
            await blobFromSASCredential.UploadFromStreamAsync(fileStream.AsInputStream());
        }
    }
}
Please use the Assembly Binding Log Viewer (Fuslogvw.exe) to see which load is failing. As the article also mentions, the common language runtime's failure to locate an assembly typically shows up as a TypeLoadException in your application.
After I use the CloudBlob.BeginUploadFromStream() method to upload a file, I later get a StorageClientException with StorageErrorCode.ResourceNotFound when trying to retrieve the file for download. If I upload the same file using the CloudBlob.UploadFromStream() method, then the blob DOES exist and I can download it.
Here's my download code:
var client = _storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference(BLOB_CONTAINER_DOCUMENTS_ADDRESS);
container.CreateIfNotExist();

string blobName = id.ToString();
var newBlob = container.GetBlobReference(blobName);

if (newBlob.Exists())
{
    var stream = newBlob.OpenRead();
    return stream;
}
else
{
    throw new Exception("Blob does not exist!");
}
Exists is an extension method. I'm getting the StorageClientException with the error code ResourceNotFound when I use the BeginUploadFromStream() method.
public static bool Exists(this CloudBlob blob)
{
    try
    {
        blob.FetchAttributes();
        return true;
    }
    catch (StorageClientException e)
    {
        if (e.ErrorCode == StorageErrorCode.ResourceNotFound)
        {
            return false;
        }
        else
        {
            throw;
        }
    }
}
And my call to upload
var blob = container.GetBlobReference(blobName);
This will NOT throw an exception when I later check if the blob exists:
blob.UploadFromStream(fileStream);
This will
AsyncCallback uploadCompleted = new AsyncCallback(OnUploadCompleted);
blob.BeginUploadFromStream(fileStream, uploadCompleted, documentId);
EDIT
As suggested, I didn't have a call to the EndUploadFromStream() method. Here is my updated call to upload:
blob.BeginUploadFromStream(fileStream, uploadCompleted, blob);
And my handler
private void OnUploadCompleted(IAsyncResult result)
{
    var blob = (CloudBlob)result.AsyncState;
    blob.EndUploadFromStream(result);
}
Running this, the EndUploadFromStream() method throws a WebException with the message: "The request was aborted: The request was canceled." The InnerException is "Cannot close stream until all bytes are written."
Anyone have any idea what's going on here?
BeginUploadFromStream uploads the blob asynchronously, so your method proceeds while the blob uploads on a thread in the background. If the blob hasn't finished uploading -- or if Azure hasn't been told that the upload has completed -- you won't see the blob in storage. Only blobs uploaded through successfully completed transactions are visible.
Could you post the code for OnUploadCompleted?
It looks at first glance as if either the blob is still uploading -- or you've forgotten to call EndUploadFromStream() in your OnUploadCompleted method.
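For illustration, a minimal sketch of one way to pair the calls and keep the request alive until the upload commits, by wrapping the Begin/End pair in a Task (blob and fileStream are the variables from the question; blocking the request like this is an assumption about what's acceptable in your app):

// Wrap the APM pair so EndUploadFromStream is always called, and block until
// the upload transaction actually commits before the request ends.
Task uploadTask = Task.Factory.FromAsync(
    blob.BeginUploadFromStream,   // begin: starts the asynchronous upload
    blob.EndUploadFromStream,     // end: completes the upload and surfaces any exception
    fileStream,
    state: null);

uploadTask.Wait(); // the blob is only visible in storage after this returns successfully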
What it sounds like is happening is that IIS is cancelling the thread that was initiated to make the BeginUploadFromStream call. Since the storage API is really just making a bunch of REST calls under the hood, you can think of these storage calls as web service calls and not as traditional I/O.
Check out this article on HTTP keep-alives; it might solve your problem, but as the article points out, it may impact the performance of your site. So you may want to add logic to enable keep-alive only for the requests that perform the upload.
http://www.jaxidian.org/update/2007/05/05/8/
I'm converting a standard ASP.NET website over to Azure. The website previously took an Excel file uploaded by an administrative user and saved it on the file system. As part of the migration, I'm saving this file to Azure Storage. It works fine when running against my local storage through the Azure SDK. (I'm using version 1.3 since I didn't want to upgrade during the development process.)
When I point the code to run against Azure Storage itself, though, the process usually fails. The error I get is:
System.IO.IOException occurred
Message=Unable to read data from the transport connection: The connection was closed.
Source=Microsoft.WindowsAzure.StorageClient
StackTrace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
at Framework.Common.AzureBlobInteraction.UploadToBlob(Stream stream, String BlobContainerName, String fileName, String contentType) in C:\Development\RateSolution2010\Framework.Common\AzureBlobInteraction.cs:line 95
InnerException:
The code is as follows:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
    string contentType)
{
    // Setup the connection to Windows Azure Storage
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());

    DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
    dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    DiagnosticMonitor.Start(storageAccount, dmc);

    CloudBlobClient BlobClient = null;
    CloudBlobContainer BlobContainer = null;
    BlobClient = storageAccount.CreateCloudBlobClient();

    // For large file copies you need to set up a custom timeout period
    // and using parallel settings appears to spread the copy across multiple threads
    // if you have big bandwidth you can increase the thread number below
    // because Azure accepts blobs broken into blocks in any order of arrival.
    BlobClient.Timeout = new System.TimeSpan(1, 0, 0);
    Role serviceRole = RoleEnvironment.Roles.Where(s => s.Value.Name == "OnlineRates.Web").First().Value;
    BlobClient.ParallelOperationThreadCount = serviceRole.Instances.Count;

    // Get and create the container
    BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
    BlobContainer.CreateIfNotExist();

    //delete prior version if one exists
    BlobRequestOptions options = new BlobRequestOptions();
    options.DeleteSnapshotsOption = DeleteSnapshotsOption.None;
    CloudBlob blobToDelete = BlobContainer.GetBlobReference(fileName);
    Trace.WriteLine("Blob " + fileName + " deleted to be replaced by newer version.");
    blobToDelete.DeleteIfExists(options);

    //set stream to starting position
    stream.Position = 0;
    long totalBytes = 0;

    //Open the stream and read it back.
    using (stream)
    {
        // Create the Blob and upload the file
        CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
        try
        {
            BlobClient.ResponseReceived += new EventHandler<ResponseReceivedEventArgs>((obj, responseReceivedEventArgs) =>
            {
                if (responseReceivedEventArgs.RequestUri.ToString().Contains("comp=block&blockid"))
                {
                    totalBytes += Int64.Parse(responseReceivedEventArgs.RequestHeaders["Content-Length"]);
                }
            });

            blob.UploadFromStream(stream);

            // Set the metadata into the blob
            blob.Metadata["FileName"] = fileName;
            blob.SetMetadata();

            // Set the properties
            blob.Properties.ContentType = contentType;
            blob.SetProperties();
        }
        catch (Exception exc)
        {
            Logging.ExceptionLogger.LogEx(exc);
        }
    }
}
I've tried a number of different alterations to the code: deleting a blob before replacing it (although the problem exists on new blobs as well), setting container permissions, not setting permissions, etc.
Your code looks like it should work, but it has lots of extra functionality that is not strictly required. I would cut it down to an absolute minimum and go from there. It's really only a gut feeling, but I think it might be the using statement giving you grief. This entire function could be written (presuming the container already exists) as:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
    string contentType)
{
    // Setup the connection to Windows Azure Storage
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
    CloudBlobClient BlobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
    CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
    stream.Position = 0;
    blob.UploadFromStream(stream);
}
Notes on the stuff that I've removed:
You should set up diagnostics just once when your app starts, not every time a method is called, usually in RoleEntryPoint.OnStart() (a minimal sketch follows these notes).
I'm not sure why you're trying to set ParallelOperationThreadCount higher if you have more instances. Those two things seem unrelated.
It's not good form to check for the existence of a container/table every time you save something to it. It's more usual to do that check once when your app starts or to have a process external to the website to make sure all the required containers/tables/queues exist. Of course if you're trying to dynamically create containers this is not true.
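A rough sketch of what the first and third notes might look like in practice. The WebRole class name, the "DiagnosticsConnectionString" setting, and the "mycontainer" container are illustrative assumptions; GetConnStr() is the helper from the question:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Configure diagnostics once, at role start-up, instead of inside UploadToBlob.
        DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

        // Create the required containers once, rather than on every upload.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        blobClient.GetContainerReference("mycontainer").CreateIfNotExist();

        return base.OnStart();
    }
}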
The problem turned out to be the firewall settings on my laptop. It's my personal laptop, originally set up at home, so the firewall rules weren't set up for a corporate environment, resulting in slow performance on uploads and downloads.