I want to download a file from storage - Azure

File path:
https://ubj10edustgcdn.azureedge.net/userdata/documents/a09b372d-cffe-438a-b98d-123d118ac0a5/provincial management service batch.pdf
How can I download the file from this path in C#?
var filepath = "https://ubj10edustgcdn.azureedge.net/userdata/documents/a09b372d-cffe-438a-b98d-123d118ac0a5/provincial management service batch.pdf";
return File(filepath, "application/force-download", Path.GetFileName(filepath));
This code throws an exception saying the virtual path is not valid.

The URL contains azureedge.net, which means the file is served from a CDN. That also means you will not be downloading from storage, but 'just' a file from the internet.
Have a look at How to download a file from a URL in C#:
using (var client = new WebClient())
{
    client.DownloadFile("http://example.com/file/song/a.mpeg", "a.mpeg");
}
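As a side note, WebClient is marked obsolete in current .NET, so here is a minimal sketch of the same idea with HttpClient, wired into a controller action the way the question attempts (hypothetical action name; assumes ASP.NET Core, where File(bytes, contentType, fileName) returns the download):
// Hedged sketch: fetch the bytes over HTTP, then let File() serve them,
// instead of passing the URL to File() as if it were a virtual path.
private static readonly HttpClient httpClient = new HttpClient();

public async Task<IActionResult> DownloadPdf()
{
    var filepath = "https://ubj10edustgcdn.azureedge.net/userdata/documents/a09b372d-cffe-438a-b98d-123d118ac0a5/provincial management service batch.pdf";
    byte[] bytes = await httpClient.GetByteArrayAsync(filepath);
    return File(bytes, "application/pdf", Path.GetFileName(filepath));
}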
If you actually want to download a file from Azure Storage, you will need to know the (internal) path to the file and you can use code like this to download the file:
// CloudConfigurationManager comes from the Microsoft.Azure.ConfigurationManager
// package; a plain ConfigurationManager app setting works just as well
string connectionString = CloudConfigurationManager.GetSetting("StorageConnection");
// Azure.Storage.Blobs
BlobContainerClient container = new BlobContainerClient(connectionString, "<CONTAINER-NAME>");
var blockBlob = container.GetBlobClient("<FILE-NAME>");
using (var fileStream = System.IO.File.OpenWrite("<LOCAL-FILE-NAME>"))
{
    blockBlob.DownloadTo(fileStream);
}
Source: Uploading and Downloading a Stream into an Azure Storage Blob

Related

C# Azure Storage download issue

I'm working on moving our files from the database to Azure Storage for the files themselves. We are keeping the folder structure and file information in our SQL DB.
The control we are using is an ASPxFileManager from DevExpress, which does not allow async functionality because it isn't built in.
The code I'm using to pull the blob from Azure and return its contents is below:
using (var ms = new MemoryStream())
{
    // Buffers the entire blob into server memory before returning it
    var result = blobItem.DownloadStreaming().GetRawResponse();
    result.ContentStream.CopyTo(ms);
    return ms.ToArray();
}
It appears that on large files it downloads the whole file to the server to get the stream AND then sends that stream to the client, so I think it is processing the content twice.
BlobContainerClient container = GetBlobContainer(containerInfo);
BlobClient blobItem = container.GetBlobClient(fileSystemFileDataId);
using (var ms = new MemoryStream())
{
    blobItem.DownloadTo(ms);
    return ms.ToArray();
}
I looked at using their CloudFileSystemProviderBase with GetDownloadUrl, but it only allows you to download single files. That works fine for single files, as we can return a URL with a SAS token etc.
We still need to support downloading multiple files though.
Is there a way in Azure to NOT download to the file system and just obtain ONE stream to send back for the DevExpress ASPxFileManager to process?
I liked the GetDownloadUrl call from their CloudFileSystemProviderBase because it doesn't block the main UI thread and lets the user continue to work in the app while large files are downloading.
Main question: is there a way to return a stream from Azure where it does not have to download to the server first?
(Note: I've already been talking to DevExpress about this issue)
UPDATE 2:
The code below obtains a stream, but does it download the file on the server and then send it to the client, or does it obtain the stream only once? I think the code above does the work twice.
Also, this code uses WindowsAzure.Storage, which is deprecated.
So what would be the NuGet package / C# code for the correct way nowadays?
if (!String.IsNullOrWhiteSpace(contentDisposition.FileName))
{
    string connectionString = ConfigurationManager.AppSettings["my-connection-string"];
    string containerName = ConfigurationManager.AppSettings["my-container"];
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer blobContainer = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference(contentDisposition.FileName);
    // Note: OpenWrite opens a stream for writing to the blob, not for reading it
    stream = blob.OpenWrite();
}
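For reference, a minimal sketch of the same thing on the current Azure.Storage.Blobs NuGet package, assuming the goal is a read stream that pulls data from the service as it is consumed rather than buffering the whole blob on the server first:
// Hedged sketch using Azure.Storage.Blobs; names mirror the question's code
string connectionString = ConfigurationManager.AppSettings["my-connection-string"];
string containerName = ConfigurationManager.AppSettings["my-container"];
var containerClient = new BlobContainerClient(connectionString, containerName);
BlobClient blob = containerClient.GetBlobClient(contentDisposition.FileName);
// OpenReadAsync returns a stream that fetches ranges on demand, so the blob
// is not downloaded to the server in full before being handed to the caller
Stream stream = await blob.OpenReadAsync();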

Is there a way to write a SpreadsheetLight Excel file directly to blob storage?

I am porting some old application services code into Microsoft Azure and need a little help.
I am pulling in some data from a stored procedure and creating a SpreadsheetLight document (this was brought in from the old code, since my users want to keep the extensive formatting that was already built into this process). That code works fine, but I need to write the file directly into our Azure blob container without saving a local copy first, as this process will run in a pipeline. Using some sample code I found as a guide, I was able to get it working in debug by saving locally and then uploading that out to blob storage... but now I need to find a way to remove that local save prior to the upload.
Well I actually stumbled on the solution.
string connectionString = "YourConnectionString";
string containerName = "YourContainerName";
string strFileNameXLS = "YourOutputFile.xlsx";
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
BlobContainerClient blobContainerClient = blobServiceClient.GetBlobContainerClient(containerName);
BlobClient blobClient = blobContainerClient.GetBlobClient(strFileNameXLS);
SLDocument doc = YourSLDocument();
using (MemoryStream ms = new MemoryStream())
{
    doc.SaveAs(ms);
    // Rewind the stream so the upload starts from the beginning
    ms.Position = 0;
    await blobClient.UploadAsync(ms, true);  // true = overwrite if the blob exists
}
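If you also want to avoid buffering the whole workbook in a MemoryStream, a variation (an untested sketch, using the same variables as above plus the Azure.Storage.Blobs.Specialized namespace) is to save straight into a blob write stream via BlockBlobClient.OpenWriteAsync:
// Hedged sketch: SLDocument.SaveAs(Stream) writes the workbook directly
// into the blob's write stream, so no local copy and no full in-memory copy
BlockBlobClient blockBlob = blobContainerClient.GetBlockBlobClient(strFileNameXLS);
using (Stream blobStream = await blockBlob.OpenWriteAsync(overwrite: true))
{
    doc.SaveAs(blobStream);
}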

How to get lease on Azure storage file?

I have a multi-instance web application in the Azure cloud. I want to make sure that only one of the instances reads and processes a file from Azure file storage. I thought I could make that happen using the lease option.
However, I am unable to find the proper methods for this operation. I found the REST API option, but I am looking for an SDK method like the linked blob storage example.
That example is for blob storage, though. What are the file storage options? I am using the latest file storage NuGet package.
EDIT:
This is how I am getting the data from the storage file and processing it:
var storageAccount = CloudStorageAccount.Parse("FileStorageConnectionString");
var fileClient = storageAccount.CreateCloudFileClient();
var share = fileClient.GetShareReference("StorageFileShareName");
var file = share.GetRootDirectoryReference().GetFileReference("file.txt");
// --- process the file and rename it after processing ---
How do I implement a lease on this file so that no other cloud instance gets access to it? After processing the file, I will also rename it.
Please try something like below:
string connectionString = "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net;";
var shareClient = new ShareClient(connectionString, "test");
shareClient.CreateIfNotExists();
var fileClient = shareClient.GetRootDirectoryClient().GetFileClient("test.txt");
var bytes = Encoding.UTF8.GetBytes("This is a test");
fileClient.Create(bytes.Length);
fileClient.Upload(new MemoryStream(bytes));
// GetShareLeaseClient is an extension method in Azure.Storage.Files.Shares.Specialized
var leaseClient = fileClient.GetShareLeaseClient();
Console.WriteLine("Acquiring lease...");
var lease = leaseClient.Acquire();
Console.WriteLine("Lease Id: " + lease.Value.LeaseId);
Console.WriteLine("Breaking lease...");
leaseClient.Break();
Console.WriteLine("Lease broken...");
It makes use of the Azure.Storage.Files.Shares NuGet package.
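To apply that to the multi-instance scenario, a rough sketch (same package; it assumes an HTTP 409 from Acquire means another instance already holds the lease):
// Hedged sketch: let exactly one instance process the file
var leaseClient = fileClient.GetShareLeaseClient();
try
{
    var lease = leaseClient.Acquire();  // file leases last until released or broken
    try
    {
        // ... only this instance reads and processes the file ...
    }
    finally
    {
        leaseClient.Release();
    }
}
catch (RequestFailedException ex) when (ex.Status == 409)
{
    // Another instance holds the lease; skip this file
}
RequestFailedException lives in the Azure namespace. Note that while the lease is held, write operations on the file (including the rename) must supply the lease ID via request conditions.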

What happens to files downloaded in WebJob

I am working with some sensitive files (mostly images) in my WebJob. My WebJob downloads the files from Azure Blob (container 1), does some processing and uploads to Azure Blob (container 2).
Because these files are sensitive in nature, I want to be 100% sure that the WebJob deletes them once the job has finished running.
Can someone tell me what happens to files downloaded in a WebJob?
My download code looks like this ...
// The object is copied into a MemoryStream, so nothing is written to disk here
var stream = new MemoryStream();
using (StorageService storage = CreateStorageClient())
{
    var bucketname = "container1";
    var objectToDownload = storage.Objects.Get(bucketname, "files/img1.jpg").Execute();
    var downloader = new MediaDownloader(storage);
    downloader.Download(objectToDownload.MediaLink, stream);
}
Here CreateStorageClient() is my utility method which creates a StorageService object.
Solved using @lopezbertoni's comment.
Also found relevant question which also helped - Azure Webjob - accessing local file system
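If any later step does write the files to disk, a defensive sketch (illustrative only) is to use temp files and delete them in a finally block, so they are removed even when processing throws:
// Hedged sketch: guarantee cleanup of a sensitive temp file
string tempPath = Path.GetTempFileName();
try
{
    using (var fileStream = File.Create(tempPath))
    {
        stream.Position = 0;        // 'stream' is the MemoryStream from above
        stream.CopyTo(fileStream);
    }
    // ... process the sensitive file on disk ...
}
finally
{
    if (File.Exists(tempPath))
    {
        File.Delete(tempPath);
    }
}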

Lucene.NET and storing data on Azure Blob Storage

The question I am asking is specifically because I don't want to use AzureDirectory project. I am just trying something on my own.
cloudStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=xxxx;AccountKey=xxxxx");
blobClient = cloudStorageAccount.CreateCloudBlobClient();
List<CloudBlobContainer> containerList = new List<CloudBlobContainer>();
IEnumerable<CloudBlobContainer> containers = blobClient.ListContainers();
if (containers != null)
{
    foreach (var item in containers)
    {
        Console.WriteLine(item.Uri);
    }
}
// (The listing above was just used to test connectivity)
// State the file location of the index
string indexLocation = containers.Last().Name;
Lucene.Net.Store.Directory dir = Lucene.Net.Store.FSDirectory.Open(indexLocation);
// Create an analyzer to process the text
Lucene.Net.Analysis.Analyzer analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
// Create the index writer with the directory and analyzer defined
bool indexExists = Lucene.Net.Index.IndexReader.IndexExists(dir);
Lucene.Net.Index.IndexWriter indexWriter = new Lucene.Net.Index.IndexWriter(dir, analyzer, !indexExists, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED);
// Create a document, add in a single field
Lucene.Net.Documents.Document doc = new Lucene.Net.Documents.Document();
string path = "D:\\try.html";
// FilterReader appears to be a custom reader; a plain StreamReader would also work
TextReader reader = new FilterReader("D:\\try.html");
doc.Add(new Lucene.Net.Documents.Field("url", path, Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.NOT_ANALYZED));
doc.Add(new Lucene.Net.Documents.Field("content", reader.ReadToEnd(), Lucene.Net.Documents.Field.Store.YES, Lucene.Net.Documents.Field.Index.ANALYZED));
indexWriter.AddDocument(doc);
indexWriter.Optimize();
indexWriter.Commit();
indexWriter.Close();
Now the issue is that after the indexing completes, I am not able to see any files created inside the container. Can anybody help me out?
You're using FSDirectory there, which is going to write files to the local disk.
You're passing it the name of a container in blob storage. Blob storage is a service made available over a REST API and is not addressable directly from the file system, so FSDirectory cannot write your index to storage; it just creates a local folder with the container's name and writes the index there.
Your options are (a manual work-around is also sketched after this list):
Mount a VHD disk on the machine, and store the VHD in blob storage. There are some instructions on how to do this here: http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/15/mount-a-page-blob-vhd-in-any-windows-azure-vm-outside-any-web-worker-or-vm-role.aspx
Use the Azure Directory, which you refer to in your question. I have rebuilt the AzureDirectory against the latest storage SDK: https://github.com/richorama/AzureDirectory
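As the manual work-around (an untested sketch, shown against the newer Azure.Storage.Blobs package with illustrative names): build the index on local disk with FSDirectory, then copy the generated index files into the container yourself:
// Hedged sketch: index locally, then push the index files to blob storage
string localIndexPath = Path.Combine(Path.GetTempPath(), "lucene-index");
// ... run the indexing code from the question against
//     FSDirectory.Open(localIndexPath) instead of the container name ...
var container = new BlobContainerClient("<connection-string>", "lucene-index");
container.CreateIfNotExists();
foreach (string file in System.IO.Directory.GetFiles(localIndexPath))
{
    BlobClient blob = container.GetBlobClient(Path.GetFileName(file));
    using (FileStream fs = System.IO.File.OpenRead(file))
    {
        blob.Upload(fs, overwrite: true);
    }
}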
Another alternative for people looking around: I wrote up a directory implementation that uses the Azure shared cache (preview), which can be an alternative to AzureDirectory (albeit for bounded search sets):
https://github.com/ajorkowski/AzureDataCacheDirectory
