Excel files are stored in Azure blob containers. They download without incident in IE, but in Chrome the page displays this message (and in Canary it crashes):
This file appears corrupt
and provides a link to download the file, and all is well from that point. I've tried setting the content type to different Excel formats, but the result is the same.
Here's the blob code:
MemoryStream memoryStream = new MemoryStream();
CreateFile(memoryStream, grid);
memoryStream.Position = 0;
var blockBlob = container.GetBlockBlobReference(randomFileName);
blockBlob.Properties.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
blockBlob.DeleteIfExists();
var options = new BlobRequestOptions()
{
ServerTimeout = TimeSpan.FromMinutes(10)
};
try
{
blockBlob.UploadFromStream(memoryStream, null, options);
}
catch (Exception e)
{
_logger.Error("Uploading excel file: Error: {0}", e.Message);
}
return new Uri("https://myblobs.blob.core.windows.net/" + "containername/" + randomFileName);
You missed blockBlob.SetProperties();
Try this:
var blockBlob = container.GetBlockBlobReference(randomFileName);
blockBlob.DeleteIfExists();
blockBlob.Properties.ContentType =
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
blockBlob.SetProperties(); // This commits the property changes.
Note that for .xls files you need to set the content type to application/vnd.ms-excel.
FYI: if you want to update property values on an existing blob, you need to fetch the current values, set the property you want to update, and call SetProperties on the blob.
Example:
blob.FetchAttributes();
blob.Properties.ContentType = "image/png";
blob.SetProperties();
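Putting the two together for a blob that has just been uploaded, the flow would look something like this. This is a minimal sketch only, assuming the classic WindowsAzure.Storage SDK (Microsoft.WindowsAzure.Storage.Blob namespace) and the container, memoryStream, and randomFileName variables from the question:
// Sketch: upload first so the blob exists, then commit the content type.
var blockBlob = container.GetBlockBlobReference(randomFileName);
blockBlob.DeleteIfExists();
memoryStream.Position = 0;
blockBlob.UploadFromStream(memoryStream);
blockBlob.Properties.ContentType =
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"; // or application/vnd.ms-excel for .xls
blockBlob.SetProperties(); // commits the property change to the service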
I am trying to download a blob from an Azure storage account container. When I run the application locally, I get the correct Downloads folder, C:\Users\xxxx\Downloads. When I publish the application to Azure and try to download the file, I get an error. I have tried various KnownFolders; some return empty strings, others return folders on the Azure server. I can upload files fine and list the files in a container, but I am struggling with downloading a file.
string conn =
configuration.GetValue<string>("AppSettings:AzureContainerConn");
CloudStorageAccount storageAcct = CloudStorageAccount.Parse(conn);
CloudBlobClient blobClient = storageAcct.CreateCloudBlobClient();
CloudBlobContainer container =
blobClient.GetContainerReference(containerName);
Uri uriObj = new Uri(uri);
string filename = Path.GetFileName(uriObj.LocalPath);
// get block blob reference
CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
Stream blobStream = await blockBlob.OpenReadAsync();
string _filepath = _knownfolder.Path + "\\projectfiles\\";
Directory.CreateDirectory(_filepath);
_filepath = _filepath + filename;
Stream _file = new MemoryStream();
try
{
_file = File.Open(_filepath, FileMode.Create, FileAccess.Write);
await blobStream.CopyToAsync(_file);
}
finally
{
_file.Dispose();
}
The expected end result is that the file ends up in the user's Downloads folder.
Since you're talking about publishing to Azure, the code is probably from a web application, right? A web application's code runs on the server, which means this code is trying to download the blob to the server running the web application, not to the user's machine.
To present a download link to the user so they can download the file, use the FileStreamResult, which
Represents an ActionResult that when executed will write a file from a stream to the response.
A (pseudo-code) example:
[HttpGet]
public FileStreamResult GetFile()
{
var stream = new MemoryStream();
CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
blockBlob.DownloadToStream(stream);
stream.Seek(0, SeekOrigin.Begin); // rewind the stream (not the blob) before returning it
return new FileStreamResult(stream, new MediaTypeHeaderValue("text/plain"))
{
FileDownloadName = "someFile.txt"
};
}
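For completeness, an async variant of the same idea might look like this. This is just a sketch: container and filename are assumed to be set up as in the question, and MVC disposes the stream once the response is written, so there is no using block around it:
// requires Microsoft.WindowsAzure.Storage.Blob and Microsoft.AspNetCore.Mvc
[HttpGet]
public async Task<FileStreamResult> GetFile(string filename)
{
    var stream = new MemoryStream();
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
    await blockBlob.DownloadToStreamAsync(stream); // pull the blob into memory on the server
    stream.Seek(0, SeekOrigin.Begin);              // rewind before handing the stream to MVC
    return new FileStreamResult(stream, "application/octet-stream")
    {
        FileDownloadName = filename
    };
}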
I have a storage account with a blob container with public access, but when I try to write a document, live telemetry shows the Dependency error below:
1:21:57 PM | Dependency | 404 | 65
HEAD imagesa| LogLevel=Information | Blob=255274.jpg
Time: 1:21:57 PM
Duration: 65 ms
Outgoing Command: HEAD imagesa
Result code: 404
fileName = imageURL.Substring(imageURL.LastIndexOf(@"/") + 1);
var req = System.Net.WebRequest.Create(imageURL);
using (Stream filestream = req.GetResponse().GetResponseStream())
{
// Get the reference to the block blob from the container
CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(fileName);
//create a snapshot
bool existsTask = await blockBlob.ExistsAsync();
if (existsTask == true)
{
// the base blob's metadata is copied to the snapshot.
await blockBlob.CreateSnapshotAsync();
blockBlob.Metadata.Clear();
}
}
I cannot reproduce your issue with the same code in a console app (and if you run your code with some special setting/environment, please point it out).
Please make sure of a few things:
1. Check that you have set the blob access level to public in the Azure portal, and check that your code uses the same blob/container.
2. Please use the latest version of the WindowsAzure.Storage package, 9.3.3.
One more thing you need to know: after calling blockBlob.Metadata.Clear(), you need to call blockBlob.SetMetadata(); otherwise the metadata will not actually be cleared.
The code I used:
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("account", "key"), true);
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
var cloudBlobContainer = cloudBlobClient.GetContainerReference("test-2");
var imageURL = "https://xx.blob.core.windows.net/test-2/sample.JPG";
var fileName = imageURL.Substring(imageURL.LastIndexOf(@"/") + 1);
var req = System.Net.WebRequest.Create(imageURL);
using (Stream filestream = req.GetResponse().GetResponseStream())
{
CloudBlockBlob blockBlob = cloudBlobContainer.GetBlockBlobReference(fileName);
bool existsTask = await blockBlob.ExistsAsync();
if (existsTask == true)
{
await blockBlob.CreateSnapshotAsync();
blockBlob.Metadata.Clear();
blockBlob.SetMetadata(); // add this line to ensure the metadata changes are committed.
}
}
Please let me know if you have more issues.
I'm trying to convert an application that uses NPOI to create xls documents on the server into an Azure-hosted application. I have little experience with NPOI or Azure, so that's two strikes right there. I have the app uploading the xls to a blob container, but it is always blank (9 bytes). From what I understand, NPOI uses a FileStream to write the file, so I just changed that to write to the blob container.
Here is what I think are the relevant portions:
internal void GenerateExcel(DataSet ds, int QuoteID, string ReportFileName)
{
string ExcelFileName = string.Format("{0}_{1}.xls",ReportFileName,QuoteID);
try
{
//these 2 strings will get deleted but left here for now to run side by side at the moment
string ReportDirectoryPath = HttpContext.Current.Server.MapPath(".") + "\\Reports";
if (!Directory.Exists(ReportDirectoryPath))
{
Directory.CreateDirectory(ReportDirectoryPath);
}
string ExcelReportFullPath = ReportDirectoryPath + "\\" + ExcelFileName;
if (File.Exists(ExcelReportFullPath))
{
File.Delete(ExcelReportFullPath);
}
// Create a new workbook.
var workbook = new HSSFWorkbook();
//Rest of the NPOI XLS rows cells etc. etc. all works fine when writing to disk////////////////
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference("pricingappreports");
// Create the container if it doesn't already exist.
if (container.CreateIfNotExists())
{
container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
}
// Retrieve reference to a blob with the same name.
CloudBlockBlob blockBlob = container.GetBlockBlobReference(ExcelFileName);
// Write the output to a file on the server
String file = ExcelReportFullPath;
using (FileStream fs = new FileStream(file, FileMode.Create))
{
workbook.Write(fs);
fs.Close();
}
// Write the output to a file on Azure Storage
String Blobfile = ExcelFileName;
using (FileStream fs = new FileStream(Blobfile, FileMode.Create))
{
workbook.Write(fs);
blockBlob.UploadFromStream(fs);
fs.Close();
}
}
I'm uploading to the blob and the file exists, so why doesn't the data get written to the xls?
Any help would be appreciated.
Update: I think I found the problem. It doesn't look like you can write to a file in Blob Storage directly. I found this blog post, which pretty much answers my question; it doesn't use NPOI, but the concept is the same: http://debugmode.net/2011/08/28/creating-and-updating-excel-file-in-windows-azure-web-role-using-open-xml-sdk/
Thanks
Can you install Fiddler and check the request and response packets? You may also need to seek back to 0 between the two writes, so the correct fix here would be to add the line below before trying to write the stream to the blob:
workbook.Write(fs);
fs.Seek(0, SeekOrigin.Begin);
blockBlob.UploadFromStream(fs);
fs.Close();
I also noticed that you are using String Blobfile = ExcelFileName instead of String Blobfile = ExcelReportFullPath.
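Alternatively, the temp file can be skipped entirely by serializing the workbook to a MemoryStream and uploading that, which is essentially what the linked blog post does with the Open XML SDK. A sketch, assuming the workbook and blockBlob variables from the question (note: depending on the NPOI version, Write may close the stream; if so, upload the buffer via ms.ToArray() instead):
using (var ms = new MemoryStream())
{
    workbook.Write(ms);            // serialize the workbook into memory
    ms.Seek(0, SeekOrigin.Begin);  // rewind before uploading
    blockBlob.UploadFromStream(ms);
}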
I am trying to rename a blob in Azure storage via the .NET API, and after a day I am still unable to rename a blob file :(
Here is how I am doing it: by creating a new blob and copying from the old one.
var newBlob = blobContainer.GetBlobReferenceFromServer(filename);
newBlob.StartCopyFromBlob(blob.Uri);
blob.Delete();
There is no new blob on the server, so I am getting an HTTP 404 Not Found exception.
Here is a working example that I found, but it is for the old .NET storage API.
CloudBlob blob = container.GetBlobReference(sourceBlobName);
CloudBlob newBlob = container.GetBlobReference(destBlobName);
newBlob.UploadByteArray(new byte[] { });
newBlob.CopyFromBlob(blob);
blob.Delete();
Currently I am using the 2.0 API. Where am I making a mistake?
I see that you're using the GetBlobReferenceFromServer method to create an instance of the new blob object. For this function to work, the blob must already be present, which will not be the case when you're trying to rename the blob.
What you could do is call GetBlobReferenceFromServer on the old blob, get its type, and then create an instance of either a block blob or a page blob and perform the copy operation on that. So your code would be something like:
CloudBlobContainer blobContainer = storageAccount.CreateCloudBlobClient().GetContainerReference("container");
var blob = blobContainer.GetBlobReferenceFromServer("oldblobname");
ICloudBlob newBlob = null;
if (blob is CloudBlockBlob)
{
newBlob = blobContainer.GetBlockBlobReference("newblobname");
}
else
{
newBlob = blobContainer.GetPageBlobReference("newblobname");
}
//Initiate blob copy
newBlob.StartCopyFromBlob(blob.Uri);
//Now wait in the loop for the copy operation to finish
while (true)
{
newBlob.FetchAttributes();
if (newBlob.CopyState.Status != CopyStatus.Pending)
{
break;
}
//Sleep for a second, maybe
System.Threading.Thread.Sleep(1000);
}
blob.Delete();
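The polling loop matters because StartCopyFromBlob only initiates a server-side copy; the service completes it asynchronously, so the source blob should only be deleted once CopyState.Status is no longer Pending.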
The code in the OP was almost fine, except that an async copy method was called. The simplest code with the newer API should be:
var oldBlob = cloudBlobClient.GetBlobReferenceFromServer(oldBlobUri);
var newBlob = container.GetBlobReference("newblobname");
newBlob.CopyFromBlob(oldBlob);
oldBlob.Delete();
I use the following method to upload a document into a SharePoint document library.
However, upon executing the query, I get the following error:
Message = "The remote server returned an error: (400) Bad Request."
The files are failing over 1 MB, so I tested it via the SharePoint UI, and the same file uploaded successfully.
Any thoughts on what the issue is? Is it possible to stream the file over rather than sending one large chunk? The file in question is only 3 MB in size.
private ListItem UploadDocumentToSharePoint(RequestedDocumentFileInfo requestedDoc, ClientContext clientContext)
{
try
{
var uploadLocation = string.Format("{0}{1}/{2}", SiteUrl, Helpers.ListNames.RequestedDocuments,
Path.GetFileName(requestedDoc.DocumentWithFilePath));
//Get Document List
var documentslist = clientContext.Web.Lists.GetByTitle(Helpers.ListNames.RequestedDocuments);
var fileCreationInformation = new FileCreationInformation
{
Content = requestedDoc.ByteArray,
Overwrite = true,
Url = uploadLocation //Upload URL,
};
var uploadFile = documentslist.RootFolder.Files.Add(fileCreationInformation);
clientContext.Load(uploadFile);
clientContext.ExecuteQuery();
var item = uploadFile.ListItemAllFields;
item["Title"] = requestedDoc.FileNameParts.FileSubject;
item["FileLeafRef"] = requestedDoc.SharepointFileName;
item.Update();
}
catch (Exception exception)
{
throw new ApplicationException(exception.Message);
}
return GetDocument(requestedDoc.SharepointFileName + "." + requestedDoc.FileNameParts.Extention, clientContext);
}
EDIT: I did find the following MS page regarding my issue (which seems identical to the issue they describe), http://support.microsoft.com/kb/2529243, but it appears not to provide a solution.
OK, found the solution here:
http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
I'll need to store the document on the server hosting the file, then upload it using the FileStream-based process I've added to my code below:
private ListItem UploadDocumentToSharePoint(RequestedDocumentFileInfo requestedDoc, ClientContext clientContext)
{
try
{
using(var fs = new FileStream(string.Format(@"C:\[myfilepath]\{0}", Path.GetFileName(requestedDoc.DocumentWithFilePath)), FileMode.Open))
{
File.SaveBinaryDirect(clientContext, string.Format("/{0}/{1}", Helpers.ListNames.RequestedDocuments, requestedDoc.FileName), fs, true);
}
}
catch (Exception exception)
{
throw new ApplicationException(exception.Message);
}
return GetDocument(requestedDoc.SharepointFileName + "." + requestedDoc.FileNameParts.Extention, clientContext);
}
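If writing a temp file to disk is undesirable, SaveBinaryDirect accepts any readable Stream, so the byte array from the request could presumably be wrapped in a MemoryStream instead. A sketch under that assumption, reusing the question's fields:
using (var ms = new MemoryStream(requestedDoc.ByteArray))
{
    // SaveBinaryDirect streams the upload instead of sending one large CSOM request,
    // which is what avoids the 400 Bad Request on larger files.
    File.SaveBinaryDirect(clientContext,
        string.Format("/{0}/{1}", Helpers.ListNames.RequestedDocuments, requestedDoc.FileName),
        ms, true);
}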