Custom FileResult on Azure: Browser waits forever

I have an action that returns an Excel file as a custom FileResult. My solution is based on the ClosedXml library (internally using OpenXml).
My XlsxResult class uses a read-only .xlsx file on the server as a template. It copies the template into a memory stream, manipulates the stream with ClosedXml and saves it back. In the end the memory stream gets written to the response.
This works fine both on Cassini and IIS Express but fails when deployed to Azure with no error whatsoever. The only effect I am experiencing is that the request sent to the server never gets any response. I am still waiting for something to happen after 60 minutes or so...
My action:
[OutputCache(Location = System.Web.UI.OutputCacheLocation.None, Duration = 0)]
public FileResult Export(int year, int month, int day) {
    var date = new DateTime(year, month, day);
    var filename = string.Format("MyTemplate_{0:yyyyMMdd}.xlsx", date);
    //return new FilePathResult("~/Content/templates/MyTemplate.xlsx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
    var result = new XlsxExportTemplatedResult("MyTemplate.xlsx", filename, (workbook) => {
        var ws = workbook.Worksheets.Worksheet("My Export Sheet");
        ws.Cell("B3").Value = date;
        // Using one of OpenXML's predefined formats (15 stands for date)
        ws.Cell("B3").Style.NumberFormat.NumberFormatId = 15;
        ws.Columns().AdjustToContents(); // You can also specify the range of columns to adjust, e.g.
        return workbook;
    });
    return result;
}
My FileResult:
public class XlsxExportTemplatedResult : FileResult
{
    // default buffer size as defined in BufferedStream type
    private const int BufferSize = 0x1000;

    public static readonly string TEMPLATE_FOLDER_LOCATION = @"~\Content\templates";

    public XlsxExportTemplatedResult(string templateName, string fileDownloadName, Func<XLWorkbook, XLWorkbook> generate)
        : base("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet") {
        this.TempalteName = templateName;
        this.FileDownloadName = fileDownloadName;
        this.Generate = generate;
    }

    public string TempalteName { get; protected set; }
    public Func<XLWorkbook, XLWorkbook> Generate { get; protected set; }

    protected string templatePath = string.Empty;

    public override void ExecuteResult(ControllerContext context) {
        templatePath = context.HttpContext.Server.MapPath(System.IO.Path.Combine(TEMPLATE_FOLDER_LOCATION, this.TempalteName));
        base.ExecuteResult(context);
    }

    //http://msdn.microsoft.com/en-us/library/office/ee945362(v=office.11).aspx
    protected override void WriteFile(System.Web.HttpResponseBase response) {
        FileStream fileStream = new FileStream(templatePath, FileMode.Open, FileAccess.Read);
        using (MemoryStream memoryStream = new MemoryStream()) {
            CopyStream(fileStream, memoryStream);
            using (var workbook = new XLWorkbook(memoryStream)) {
                Generate(workbook);
                workbook.Save();
            }
            // At this point, the memory stream contains the modified document.
            // grab chunks of data and write to the output stream
            Stream outputStream = response.OutputStream;
            byte[] buffer = new byte[BufferSize];
            while (true) {
                int bytesRead = memoryStream.Read(buffer, 0, BufferSize);
                if (bytesRead == 0) {
                    // no more data
                    break;
                }
                outputStream.Write(buffer, 0, bytesRead);
            }
        }
        fileStream.Dispose();
    }

    static private void CopyStream(Stream source, Stream destination) {
        byte[] buffer = new byte[BufferSize];
        int bytesRead;
        do {
            bytesRead = source.Read(buffer, 0, buffer.Length);
            destination.Write(buffer, 0, bytesRead);
        } while (bytesRead != 0);
    }
}
So am I missing something? (Apparently I am.)
Please note:
There are no DLLs missing on Azure, because I checked using the Remote Access feature of the Windows Azure Tools 1.7.
My export is not a heavy, long-running task.
When I changed the action to just return a FilePathResult with the template xlsx, it worked on Azure. But I need to process the file before returning it, as you might suspect :-)
Thanks.
UPDATE:
After logging extensively in my code, the execution hangs, with no error, at the ClosedXml Save method call. Excerpt from the WADLogsTable:

Opening template file from path: E:\sitesroot\0\Content\templates\MyTemplate.xlsx
Opened template from path: E:\sitesroot\0\Content\templates\MyTemplate.xlsx
Just copied template to editable memory stream. Bytes copied: 15955, Position: 15955
Modified the excel document in memory.

It hangs right after that, at the call to workbook.Save(), which is a ClosedXml method.
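A side note for anyone experimenting with this code: ClosedXml can also write the result to a separate stream via SaveAs instead of saving back over the in-memory template copy. Below is a minimal sketch of WriteFile restructured that way, offered as an illustration only, not a verified fix for the hang:

protected override void WriteFile(System.Web.HttpResponseBase response) {
    // Sketch only: load the template straight from disk, apply the changes,
    // then save the finished workbook into a fresh stream and copy it out.
    using (var workbook = new XLWorkbook(templatePath)) {
        Generate(workbook);
        using (var output = new MemoryStream()) {
            workbook.SaveAs(output);   // write the result to a new stream
            output.Position = 0;       // rewind before copying to the response
            output.CopyTo(response.OutputStream);
        }
    }
}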

I was facing the exact same error situation as you. I can't offer a fix for your specific situation, and I know you switched tracks, but after going through the same frustrating steps you had faced, I'd like to "pave the way" to an answer for you (or others).
Drop into your package manager console in Visual Studio and install Elmah with the MVC goodies (routing):
Install-Package elmah.MVC
Now, in your root web.config, update your Elmah entry. It's likely at the end of the file, looking like this:
<elmah></elmah>
Update that bad boy to allow remote access and set up your log path:
<elmah>
  <security allowRemoteAccess="1" />
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/app_data/elmah" />
</elmah>
Now, push that up to Azure.
Finally, visit your site, force the error then navigate to http://your-site-here.azurewebsites.net/elmah and you'll see the exact cause of the error.
Elmah is so the awesome.
Sheepish confession: the error for me wasn't in the third-party code; it turned out to be in my connection string, for which I hadn't set MultipleActiveResultSets to true. The other fix I had to do was pass my entities into one of the internal methods of that library after calling ToList(); leaving it as IQueryable borked the method up.
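For reference, that connection-string fix amounts to adding MultipleActiveResultSets to the connection string in web.config; a minimal sketch, with placeholder name, server, and database values:

<connectionStrings>
  <add name="MyDbContext"
       connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True;MultipleActiveResultSets=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>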

Related

Blazor server side upload file to Azure Blob Storage

I am using a file upload component that posts the file to an API controller, and this works OK, but I need to get the progress of the upload.
[HttpPost("upload/single")]
public async Task SingleAsync(IFormFile file)
{
try
{
// Azure connection string and container name passed as an argument to get the Blob reference of the container.
var container = new BlobContainerClient(azureConnectionString, "upload-container");
// Method to create our container if it doesn’t exist.
var createResponse = await container.CreateIfNotExistsAsync();
// If container successfully created, then set public access type to Blob.
if (createResponse != null && createResponse.GetRawResponse().Status == 201)
await container.SetAccessPolicyAsync(Azure.Storage.Blobs.Models.PublicAccessType.Blob);
// Method to create a new Blob client.
var blob = container.GetBlobClient(file.FileName);
// If a blob with the same name exists, then we delete the Blob and its snapshots.
await blob.DeleteIfExistsAsync(Azure.Storage.Blobs.Models.DeleteSnapshotsOption.IncludeSnapshots);
// Create a file stream and use the UploadSync method to upload the Blob.
uploadFileSize = file.Length;
var progressHandler = new Progress<long>();
progressHandler.ProgressChanged += UploadProgressChanged;
using (var fileStream = file.OpenReadStream())
{
await blob.UploadAsync(fileStream, new BlobHttpHeaders { ContentType = file.ContentType },progressHandler:progressHandler);
}
Response.StatusCode = 400;
}
catch (Exception e)
{
Response.Clear();
Response.StatusCode = 204;
Response.HttpContext.Features.Get<IHttpResponseFeature>().ReasonPhrase = "File failed to upload";
Response.HttpContext.Features.Get<IHttpResponseFeature>().ReasonPhrase = e.Message;
}
}
private double GetProgressPercentage(double totalSize, double currentSize)
{
return (currentSize / totalSize) * 100;
}
private void UploadProgressChanged(object sender, long bytesUploaded)
{
uploadPercentage = GetProgressPercentage(uploadFileSize, bytesUploaded);
}
I am posting this file and it does upload, but the file upload progress event is inaccurate: it says the upload is complete after a few seconds, when in reality the file takes ~90 seconds on my connection to appear in the Azure Blob Storage container.
So in the code above I have the progress handler, which works (I can put a breakpoint on it and see it increasing), but how do I return this value to the UI?
I found one solution that used Microsoft.AspNetCore.SignalR, but I can't manage to integrate it into my own code and I'm not even sure if I'm on the right track.
using BlazorReportProgress.Server.Hubs;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using System.Threading;

namespace BlazorReportProgress.Server.Controllers
{
    [ApiController]
    [Route("api/[controller]")]
    public class SlowProcessController : ControllerBase
    {
        private readonly ILogger<SlowProcessController> _logger;
        private readonly IHubContext<ProgressHub> _hubController;

        public SlowProcessController(
            ILogger<SlowProcessController> logger,
            IHubContext<ProgressHub> hubContext)
        {
            _logger = logger;
            _hubController = hubContext;
        }

        [HttpGet("{ClientID}")]
        public IEnumerable<int> Get(string ClientID)
        {
            List<int> retVal = new();
            _logger.LogInformation("Incoming call from ClientID : {ClientID}", ClientID);
            _hubController.Clients.Client(ClientID).SendAsync("ProgressReport", "Starting...");
            Thread.Sleep(1000);
            for (int loop = 0; loop < 10; loop++)
            {
                _hubController.Clients.Client(ClientID).SendAsync("ProgressReport", loop.ToString());
                retVal.Add(loop);
                Thread.Sleep(500);
            }
            _hubController.Clients.Client(ClientID).SendAsync("ProgressReport", "Done!");
            return retVal;
        }
    }
}
I read the Steve Sanderson blog, but it says not to use that code as it has been superseded by built-in Blazor functionality.
My application is only for a few users, so I'm not too worried about backend APIs etc. If the upload component used a service instead of a controller I could more easily get the progress, but the components all seem to post to controllers.
Can anyone please enlighten me as to the best way to solve this?
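For what it's worth, here is a rough sketch of how the upload controller's progress handler could push updates through the SignalR hub from the sample above. The ProgressHub type, the injected _hubContext field (an IHubContext<ProgressHub>), the connection id route parameter, and the "ProgressReport" event name are all assumptions borrowed from that sample, not a verified solution:

[HttpPost("upload/single/{connectionId}")]
public async Task SingleAsync(IFormFile file, string connectionId)
{
    // _hubContext is assumed to be an injected IHubContext<ProgressHub>,
    // as in the SlowProcessController above.
    var container = new BlobContainerClient(azureConnectionString, "upload-container");
    await container.CreateIfNotExistsAsync();
    var blob = container.GetBlobClient(file.FileName);

    long totalBytes = file.Length;
    var progressHandler = new Progress<long>(bytesUploaded =>
    {
        // Push the running percentage to the caller as each chunk is confirmed.
        double percentage = (double)bytesUploaded / totalBytes * 100;
        _hubContext.Clients.Client(connectionId).SendAsync("ProgressReport", percentage);
    });

    using (var fileStream = file.OpenReadStream())
    {
        await blob.UploadAsync(fileStream,
            new BlobHttpHeaders { ContentType = file.ContentType },
            progressHandler: progressHandler);
    }
}

The Blazor page would then register a handler for "ProgressReport" on its HubConnection and pass its connection id along with the upload request.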

Big JSON stream on Azure

I'm working on a big data export API, but I'm having issues when it needs to transport big data as JSON. An example of such is a transfer of over 4 million records. When saved as a text file, the data should be about 380 MB, but for some reason the stream is cut short at about 250-280 MB (always different), and when I check the file in Notepad, it has just cut off the data in the middle of a record.
This behaviour only happens on the Azure server; I can download the full file through my local IIS. Also weird: exporting the data as XML, which results in an even bigger file of 600+ MB, did not have this issue.
Our Azure App Service plan is S3 (4 cores, 7 GB memory), which I believe should be enough. The code that actually transfers the data is the following function:
public IActionResult ResponseConvert(IList data)
{
    return new Microsoft.AspNetCore.Mvc.JsonResult(data);
}
The data parameter is a List<dynamic> object containing the 4+ million records.
At first glance it seems like Azure terminates the stream prematurely; any idea why, and how this can be prevented?
In the end I've written my own JsonResult class that uses a JsonTextWriter to transfer the data. This seems to work fine with larger objects, even on Azure.
Here's the full class:
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using System.Collections;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Linq;
using System.Text;

namespace MyProject.OutputFormat
{
    public class JsonResult : ActionResult
    {
        private readonly IList _data;

        public Formatting Formatting { get; set; }
        public string MimeType { get; set; }

        public JsonResult(IList data)
        {
            _data = data;
            // Default values
            MimeType = "application/json";
            Formatting = Formatting.None;
        }

        public override void ExecuteResult(ActionContext context)
        {
            context.HttpContext.Response.ContentType = MimeType;
            using (var sw = new StreamWriter(context.HttpContext.Response.Body, Encoding.UTF8))
            {
                using (var writer = new JsonTextWriter(sw) { Formatting = Formatting })
                {
                    writer.WriteStartArray();
                    if (_data != null)
                    {
                        foreach (var item in _data)
                        {
                            writer.WriteStartObject();
                            if (item is ExpandoObject)
                            {
                                foreach (KeyValuePair<string, object> prop in item as ExpandoObject)
                                {
                                    writer.WritePropertyName(prop.Key);
                                    writer.WriteValue(prop.Value != null ? prop.Value.GetType().Name != "Byte[]" ? prop.Value.ToString() : ((byte[])prop.Value).BinaryToString() : null);
                                }
                            }
                            else
                            {
                                var props = item.GetType().GetProperties().Where(i => i.Name != "Item");
                                foreach (var prop in props)
                                {
                                    var val = prop.GetValue(item);
                                    writer.WritePropertyName(prop.Name);
                                    writer.WriteValue(val != null ? val.GetType().Name != "Byte[]" ? val.ToString() : ((byte[])val).BinaryToString() : null);
                                }
                            }
                            writer.WriteEndObject();
                        }
                    }
                    writer.WriteEndArray();
                }
            }
        }
    }
}
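The original ResponseConvert action would then presumably just return this result type instead of the built-in one, for example:

public IActionResult ResponseConvert(IList data)
{
    // Stream the records through the writer-based result instead of buffering them all up front.
    return new MyProject.OutputFormat.JsonResult(data);
}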
The BinaryToString() method you see is a self-written extension on byte[] that converts a byte array to a base64 string.
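The extension itself isn't shown in the answer; based on that description it presumably looks something like this (my sketch, not the author's code):

public static class ByteArrayExtensions
{
    // Converts a byte array to a base64 string, as described above.
    public static string BinaryToString(this byte[] data)
    {
        return data == null ? null : System.Convert.ToBase64String(data);
    }
}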
Small note: though this works for bigger data, the JsonResult in Microsoft.AspNetCore.Mvc downloads faster. Getting the response to the client starts just as quickly, but since this method only serializes while the response is being streamed, it takes a bit longer until the stream is fully downloaded. If you do not have any issues in your environment, I'd advise using the one in Microsoft.AspNetCore.Mvc.

UWP apps accessing files from random location on system

In UWP there are file and permission restrictions, so we can only access files directly from a few folders, or we can use the file picker to access files from anywhere on the system.
How can I use the files picked with the file picker and use them again any time the app runs? I tried to use them again by path, but it gives a permission error. I know about the FutureAccessList, but its limit is 1000 entries, and also it will make the app slow, if I am not wrong?
Is there a better way to do this? Or can we store storage file links somehow in a local SQLite database?
If you need to access lots of files, asking the user to select the parent folder and then storing that is probably a better solution (unless you want to store 1,000 individually-picked files from different locations). You can store StorageFolders in the access list as well.
I'm not sure why you think it will make your app slow, but the only real way to know if this will affect your performance is to try it and measure against your goals.
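For illustration, a minimal sketch of picking a folder once and reopening it in a later session through the FutureAccessList (the settings key name is just an example):

// Pick the parent folder once and remember it across sessions.
var folderPicker = new Windows.Storage.Pickers.FolderPicker();
folderPicker.FileTypeFilter.Add("*");
Windows.Storage.StorageFolder folder = await folderPicker.PickSingleFolderAsync();
if (folder != null)
{
    // Add returns a token that stays valid between app runs.
    string token = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList.Add(folder);
    Windows.Storage.ApplicationData.Current.LocalSettings.Values["pickedFolderToken"] = token;
}

// On a later run, redeem the token to get the folder (and its files) back.
string savedToken = (string)Windows.Storage.ApplicationData.Current.LocalSettings.Values["pickedFolderToken"];
Windows.Storage.StorageFolder restored =
    await Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList.GetFolderAsync(savedToken);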
Considering this method..
public async static Task<byte[]> ToByteArray(this StorageFile file)
{
    byte[] fileBytes = null;
    using (IRandomAccessStreamWithContentType stream = await file.OpenReadAsync())
    {
        fileBytes = new byte[stream.Size];
        using (DataReader reader = new DataReader(stream))
        {
            await reader.LoadAsync((uint)stream.Size);
            reader.ReadBytes(fileBytes);
        }
    }
    return fileBytes;
}
This class..
public class AppFile
{
    public string FileName { get; set; }
    public byte[] ByteArray { get; set; }
}
And this variable
List<AppFile> _appFiles = new List<AppFile>();
Just..
var fileOpenPicker = new FileOpenPicker();
IReadOnlyList<StorageFile> files = await fileOpenPicker.PickMultipleFilesAsync();

foreach (var file in files)
{
    var byteArray = await file.ToByteArray();
    _appFiles.Add(new AppFile { FileName = file.DisplayName, ByteArray = byteArray });
}
UPDATE
using Newtonsoft.Json;
using System.Linq;
using Windows.Security.Credentials;
using Windows.Storage;

namespace Your.Namespace
{
    public class StateService
    {
        public void SaveState<T>(string key, T value)
        {
            var localSettings = ApplicationData.Current.LocalSettings;
            localSettings.Values[key] = JsonConvert.SerializeObject(value);
        }

        public T LoadState<T>(string key)
        {
            var localSettings = ApplicationData.Current.LocalSettings;
            if (localSettings.Values.ContainsKey(key))
                return JsonConvert.DeserializeObject<T>((string)localSettings.Values[key]);
            return default(T);
        }

        public void RemoveState(string key)
        {
            var localSettings = ApplicationData.Current.LocalSettings;
            if (localSettings.Values.ContainsKey(key))
                localSettings.Values.Remove(key);
        }

        public void Clear()
        {
            ApplicationData.Current.LocalSettings.Values.Clear();
        }
    }
}
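Presumably the point of the update is to persist the picked files between runs; a minimal usage sketch under that assumption:

// Persist the picked files across sessions. Note that LocalSettings values are size-limited,
// so this only suits small files; larger payloads belong in ApplicationData.Current.LocalFolder.
var stateService = new StateService();
stateService.SaveState("appFiles", _appFiles);

// On a later run:
List<AppFile> restored = stateService.LoadState<List<AppFile>>("appFiles");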
A bit late, but yes, the future access list will slow down your app in that it returns StorageFile, StorageFolder, or StorageItem objects. These run via the runtime broker, which hits a huge performance barrier at about 400 objects, regardless of the host capability.

How do I display a PDF using PdfSharp in ASP.Net MVC?

We're making an ASP.Net MVC app that needs to be able to generate a PDF and display it to the screen or save it somewhere easy for the user to access. We're using PdfSharp to generate the document. Once it's finished, how do we let the user save the document or open it up in a reader? I'm especially confused because the PDF is generated server-side but we want it to show up client-side.
Here is the MVC controller to create the report that we have written so far:
public class ReportController : ApiController
{
    private static readonly string filename = "report.pdf";

    [HttpGet]
    public void GenerateReport()
    {
        ReportPdfInput input = new ReportPdfInput()
        {
            //Empty for now
        };
        var manager = new ReportPdfManagerFactory().GetReportPdfManager();
        var documentRenderer = manager.GenerateReport(input); //Returns a PdfDocumentRenderer
        documentRenderer.PdfDocument.Save(filename);
        Process.Start(filename);
    }
}
When this runs, I get an UnauthorizedAccessException at documentRenderer.PdfDocument.Save(filename); that says, Access to the path 'C:\Program Files (x86)\Common Files\Microsoft Shared\DevServer\10.0\report.pdf' is denied. I'm also not sure what will happen when the line Process.Start(filename); is executed.
This is the code in manager.GenerateReport(input):
public class ReportPdfManager : IReportPdfManager
{
    public PdfDocumentRenderer GenerateReport(ReportPdfInput input)
    {
        var document = CreateDocument(input);
        var renderer = new PdfDocumentRenderer(true, PdfSharp.Pdf.PdfFontEmbedding.Always);
        renderer.Document = document;
        renderer.RenderDocument();
        return renderer;
    }

    private Document CreateDocument(ReportPdfInput input)
    {
        //Put content into the document
    }
}
Using Yarx's suggestion and PDFsharp Team's tutorial, this is the code we ended up with:
Controller:
[HttpGet]
public ActionResult GenerateReport(ReportPdfInput input)
{
    using (MemoryStream stream = new MemoryStream())
    {
        var manager = new ReportPdfManagerFactory().GetReportPdfManager();
        var document = manager.GenerateReport(input);
        document.Save(stream, false);
        return File(stream.ToArray(), "application/pdf");
    }
}
ReportPdfManager:
public PdfDocument GenerateReport(ReportPdfInput input)
{
    var document = CreateDocument(input);
    var renderer = new PdfDocumentRenderer(true, PdfSharp.Pdf.PdfFontEmbedding.Always);
    renderer.Document = document;
    renderer.RenderDocument();
    return renderer.PdfDocument;
}

private Document CreateDocument(ReportPdfInput input)
{
    //Creates a Document and puts content into it
}
I'm not familiar with PdfSharp, but for MVC this is mostly done via built-in functionality. You need to get your PDF document represented as an array of bytes. Then you'd simply use MVC's File method to return it to the browser and let it handle the download. Are there any methods on their classes to do that?
public class PdfDocumentController : Controller
{
    public ActionResult GenerateReport(ReportPdfInput input)
    {
        //Get document as byte[]
        byte[] documentData;
        return File(documentData, "application/pdf");
    }
}

File Read/Write Locks

I have an application where I open a log file for writing. At some point (while the application was running), I opened the file with Excel 2003, which said the file should be opened as read-only. That's OK with me.
But then my application threw this exception:
System.IO.IOException: The process cannot access the file because another process has locked a portion of the file.
I don't understand how Excel could lock the file (to which my app has write access) and cause my application to fail to write to it!
Why did this happen?
(Note: I didn't observe this behavior with Excel 2007.)
Here is a logger which will take care of sync locks. (You can modify it to fit to your requirements)
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;

namespace Owf.Logger
{
    public class Logger
    {
        private static object syncContoller = string.Empty;

        private static Logger _logger;

        public static Logger Default
        {
            get
            {
                if (_logger == null)
                    _logger = new Logger();
                return _logger;
            }
        }

        private Dictionary<Guid, DateTime> _starts = new Dictionary<Guid, DateTime>();

        private string _fileName = "Log.txt";

        public string FileName
        {
            get { return _fileName; }
            set { _fileName = value; }
        }

        public Guid LogStart(string mesaage)
        {
            lock (syncContoller)
            {
                Guid id = Guid.NewGuid();
                _starts.Add(id, DateTime.Now);
                LogMessage(string.Format("0.00\tStart: {0}", mesaage));
                return id;
            }
        }

        public void LogEnd(Guid id, string mesaage)
        {
            lock (syncContoller)
            {
                if (_starts.ContainsKey(id))
                {
                    TimeSpan time = (TimeSpan)(DateTime.Now - _starts[id]);
                    LogMessage(string.Format("{1}\tEnd: {0}", mesaage, time.TotalMilliseconds.ToString()));
                }
                else
                    throw new ApplicationException("Logger.LogEnd: Key doesn't exisits.");
            }
        }

        public void LogMessage(string message)
        {
            lock (syncContoller)
            {
                string filePath = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
                if (!filePath.EndsWith("\\"))
                    filePath += "\\owf";
                else
                    filePath += "owf";
                if (!Directory.Exists(filePath))
                    Directory.CreateDirectory(filePath);
                filePath += "\\Log.txt";
                lock (syncContoller)
                {
                    using (StreamWriter sw = new StreamWriter(filePath, true))
                    {
                        sw.WriteLine(DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.sss") + "\t" + message);
                    }
                }
            }
        }
    }
}
How do you write the log? Do you have your own open/close routines, or do you use some third-party product?
I think the log is opened and locked only while it writes something. Once the data writing is finished, the code closes the file and, of course, releases the lock.
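The sharing behaviour also depends on how the writer opens the file; for example, a writer opened like this explicitly allows other processes to read the log while it is being appended (logPath and message are placeholder names, not from the original code):

// Append to the log while explicitly allowing concurrent readers such as Excel.
using (var fs = new FileStream(logPath, FileMode.Append, FileAccess.Write, FileShare.Read))
using (var sw = new StreamWriter(fs))
{
    sw.WriteLine(message);
}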
This seems like a .NET issue (well, a bug if you ask me).
Basically, I have replicated the problem using the following multi-threaded code:
Dim FS As System.IO.FileStream
Dim BR As System.IO.BinaryReader
Dim FileBuffer(-1) As Byte

If System.IO.File.Exists(FileName) Then
    Try
        FS = New System.IO.FileStream(FileName, System.IO.FileMode.Open, IO.FileAccess.Read, IO.FileShare.Read)
        BR = New System.IO.BinaryReader(FS)
        Do While FS.Position < FS.Length
            FileBuffer = BR.ReadBytes(&H10000)
            If FileBuffer.Length > 0 Then
                '... do something with the file here...
            End If
        Loop
        BR.Close()
        FS.Close()
    Catch
        ErrorMessage = "Error(" & Err.Number & ") while reading file:" & Err.Description
    End Try
End If
Basically, the bug is that trying to READ the file with all the different share modes (READ, WRITE, READ_WRITE) has absolutely no effect on the file locking, no matter what you try; you always end up with the same result: the file is LOCKED and not available for any other user.
Microsoft won't even admit to this problem.
The solution is to use the Kernel32 CreateFile APIs directly to get the proper access, as this ensures that the OS listens to your request when you ask to read files that are share-locked or locked.
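For illustration, a minimal sketch of what calling CreateFile directly from .NET might look like (my sketch, not the answerer's code; the flag values shown are the standard Win32 constants for read access with full sharing):

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class NativeFileReader
{
    private const uint GENERIC_READ = 0x80000000;
    private const uint FILE_SHARE_READ = 0x1;
    private const uint FILE_SHARE_WRITE = 0x2;
    private const uint FILE_SHARE_DELETE = 0x4;
    private const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Opens a file for reading while allowing other processes to keep it open.
    public static FileStream OpenShared(string path)
    {
        SafeFileHandle handle = CreateFile(path, GENERIC_READ,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid)
            throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());
        return new FileStream(handle, FileAccess.Read);
    }
}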
I believe I'm having the same type of locking issue, reproduced as follows:
User 1 opens an Excel 2007 file from the network (read-write) (Windows Server, version unknown).
User 2 opens the same Excel file (it opens as read-only, of course).
User 1 successfully saves the file many times.
At some point, User 1 is UNABLE to save the file due to a message saying "file is locked".
Close down User 2's read-only session... the lock is released, and User 1 can now save again.
How could opening the file in read-only mode put a lock on that file?
So it seems to be either an Excel 2007 issue or a server issue.
