I'm using Azure Storage Queues to send a message to a WebJob. The WebJob then creates a PDF and stores it in a blob container. This works fine on my dev machine: the message is received, the object is instantiated, and the PDF is created and stored in blob storage. When I deploy the WebJob to Azure, I get an Out of Memory exception the moment it receives a message.
What are the memory limits and how do I stay below them?
public static void HandleNewRegistration(
[QueueInput("pdf")] Models.Registration registration,
[BlobOutput("pdf/{Name}.txt")] TextWriter writer,
[BlobOutput("pdf/{Name}.pdf")] Stream pdfWriter)
{
try
{
// Store received registration in database (using EF)
AppContext db = new AppContext();
db.Registrations.Add(registration);
db.SaveChanges();
// Create PDF document (nothing fancy, just a section with a paragraph)
var pdf = CreatePdf(registration);
var renderer = new MigraDoc.Rendering.PdfDocumentRenderer(true, PdfSharp.Pdf.PdfFontEmbedding.Always);
renderer.Document = pdf;
renderer.RenderDocument();
renderer.Save(pdfWriter,true);
}
catch (Exception e)
{
writer.WriteLine(e.Message);
writer.WriteLine(e.StackTrace);
}
writer.WriteLine(registration.Name);
}
Using this I end up with only a text file in my blob storage with the stack trace:
Out of memory.
at System.Drawing.Graphics.FromHwndInternal(IntPtr hwnd)
at System.Drawing.Graphics.FromHwnd(IntPtr hwnd)
at PdfSharp.Drawing.XGraphics..ctor(Graphics gfx, XSize size, XGraphicsUnit pageUnit, XPageDirection pageDirection)
at MigraDoc.Rendering.DocumentRenderer.PrepareDocument()
at MigraDoc.Rendering.PdfDocumentRenderer.PrepareDocumentRenderer(Boolean prepareCompletely)
at MigraDoc.Rendering.PdfDocumentRenderer.PrepareRenderPages()
at MigraDoc.Rendering.PdfDocumentRenderer.RenderDocument()
at WebJob.Program.HandleNewRegistration(Registration registration, TextWriter writer, Stream pdfWriter) in d:\Source\Workspaces\[...]\WebJob\Program.cs:line 43
Evidently you are using the GDI+ build of MigraDoc, and there is no GDI+ on the Azure server, so Graphics.FromHwnd() fails.
Use the WPF build of MigraDoc and things should run fine on the Azure server.
I have an Azure Function that is triggered when a zip file is uploaded to an Azure Blob Storage container. I unzip the file in memory, process the contents, and add/update the results in a database. For the database part I can use the in-memory DB option, but I'm not sure how to simulate the blob trigger for unit testing this Azure Function.
All the official samples and most blog posts only cover HTTP triggers (mocking HttpRequest) and queue triggers (using IAsyncCollector).
[FunctionName("AzureBlobTrigger")]
public void Run([BlobTrigger("logprocessing/{name}", Connection = "AzureWebJobsStorage")]Stream blobStream, string name, ILogger log)
{
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {blobStream.Length} Bytes");
//processing logic
}
There is a project on GitHub with unit tests and integration tests for Azure Functions, including a blob trigger; please give it a try on your side. Note that the unit test code is in the FunctionApp.Tests folder.
Here is a code snippet for the blob trigger from that GitHub project:
Unit test code of BlobFunction.cs:
namespace FunctionApp.Tests
{
    public class BlobFunction : FunctionTest
    {
        [Fact]
        public async Task BlobFunction_ValidStreamAndName()
        {
            // Build an in-memory stream that stands in for the blob content.
            Stream s = new MemoryStream();
            // leaveOpen: true keeps the MemoryStream usable after the writer is disposed.
            using (StreamWriter sw = new StreamWriter(s, System.Text.Encoding.UTF8, 1024, leaveOpen: true))
            {
                await sw.WriteLineAsync("This is a test");
            }
            // Rewind so the function under test reads the content from the beginning.
            s.Position = 0;
            BlobTrigger.Run(s, "testBlob", log);
        }
    }
}
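If you are not using the FunctionTest base class from that sample (which is what supplies the log field above), a logger for tests can be created directly. A minimal sketch, assuming the Microsoft.Extensions.Logging.Abstractions package; the TestLoggerFactory helper name is made up for illustration:
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

// Hypothetical helper for tests that do not inherit from the sample's FunctionTest base class.
public static class TestLoggerFactory
{
    // NullLogger discards all output, which is usually enough for a unit test
    // that only cares about the blob-processing logic.
    public static ILogger Create() => NullLoggerFactory.Instance.CreateLogger("BlobFunctionTests");
}

// Usage inside a test:
// ILogger log = TestLoggerFactory.Create();
// BlobTrigger.Run(s, "testBlob", log);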
My function stores data in Azure Data Lake Storage Gen1.
But I get the error "An error occurred while sending the request."
When I investigated, I found that the number of connections opened by my Azure Function exceeds 8K, and then it breaks.
Here is my code (appending to a file in Azure Data Lake Storage Gen1):
// Authorize against Azure Data Lake Storage Gen1
await InitADLInfo(adlsAccountName);
DataLakeStoreFileSystemManagementClient _adlsFileSystemClient;
// Append data to a file in Data Lake Storage Gen1
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(buffer)))
{
await _adlsFileSystemClient.FileSystem.AppendAsync(_adlsAccountName, path, stream);
}
How do I dispose of the connection after each append finishes?
I tried calling
_adlsFileSystemClient.Dispose();
but it didn't release anything and the connection count keeps going up.
I read this
https://www.troyhunt.com/breaking-azure-functions-with-too-many-connections/1
and I brought the connection count down by following its advice: do not create a new client with every function invocation.
Example code:
// Create a single, static HttpClient
private static HttpClient httpClient = new HttpClient();
public static async Task Run(string input)
{
var response = await httpClient.GetAsync("http://example.com");
// Rest of function
}
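Applying the same idea to the Data Lake client would presumably look roughly like this: create the client once per process and reuse it for every append. This is only a sketch, assuming a service principal login via ApplicationTokenProvider; the tenantId/clientId/clientSecret values are placeholders, not values from the original code.
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Management.DataLake.Store;
using Microsoft.Rest.Azure.Authentication;

public static class AdlsAppender
{
    // Placeholders - supply your own service principal details.
    private static readonly string tenantId = "<tenant-id>";
    private static readonly string clientId = "<client-id>";
    private static readonly string clientSecret = "<client-secret>";

    // Created once per process and reused by every function invocation,
    // so each append does not open a brand-new connection.
    private static readonly Lazy<DataLakeStoreFileSystemManagementClient> client =
        new Lazy<DataLakeStoreFileSystemManagementClient>(() =>
        {
            var creds = ApplicationTokenProvider
                .LoginSilentAsync(tenantId, clientId, clientSecret)
                .GetAwaiter().GetResult();
            return new DataLakeStoreFileSystemManagementClient(creds);
        });

    public static async Task AppendAsync(string accountName, string path, string buffer)
    {
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(buffer)))
        {
            await client.Value.FileSystem.AppendAsync(accountName, path, stream);
        }
    }
}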
I want to know if it is possible to create an Azure queue (Service Bus or Storage Queue) which can be placed in front of a web application and receives HTTP requests first.
Updates
Thanks for the comments and answers.
I want to process the request without burdening IIS. I need to make it possible to process a request in a queue before it reaches IIS.
if it is possible to create an Azure queue (Service Bus or Storage Queue) which can be placed in front of a web application and receives HTTP requests first
We can save the request message to a queue before handling the request in an Azure Web App by adding some code. I wrote a C# sample which records the request message to an Azure Storage queue. The steps below are for your reference.
Step 1. Add an HTTP module to your project. In this module, I register the BeginRequest event of HttpApplication and do the message recording there.
public class RequestToQueueModeule : IHttpModule
{
#region IHttpModule Members
public void Dispose()
{
//clean-up code here.
}
public void Init(HttpApplication context)
{
// Register the BeginRequest event so each incoming request
// can be recorded to the queue before it is handled
context.BeginRequest += new EventHandler(OnBeginRequest);
}
#endregion
public void OnBeginRequest(Object source, EventArgs e)
{
HttpApplication context = source as HttpApplication;
AddMessageToQueue(context.Request);
}
public void AddMessageToQueue(HttpRequest request)
{
StringBuilder sb = new StringBuilder();
sb.AppendLine(request.HttpMethod + " " + request.RawUrl + " " + request.ServerVariables["SERVER_PROTOCOL"]);
for (int i = 0; i < request.Headers.Count; i++)
{
sb.AppendLine(request.Headers.Keys[i] + ":" + request.Headers[i]);
}
sb.AppendLine();
if (request.InputStream != null)
{
using (StreamReader sr = new StreamReader(request.InputStream))
{
sb.Append(sr.ReadToEnd());
}
}
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("connection string of your azure storage");
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("queue name which is used to store the request message");
// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();
// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage(sb.ToString());
queue.AddMessage(message);
}
}
Step 2. Register the module above in the system.webServer node of web.config. Please modify the namespace name to match where your module is placed.
<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
<add name="RequestToQueueModeule" type="[your namespace name].RequestToQueueModeule" />
</modules>
</system.webServer>
I want to process the request without burdening IIS. I need to make it possible to process a request in a queue before it reaches IIS.
If you want to process the request in a queue before it reaches IIS, you need to add a proxy in front of the Azure Web App. Azure Application Gateway works as a proxy and can be placed in front of a Web App. If you only want to log the main information of the HTTP request, you could use Azure Application Gateway and turn on its access log. For more information, the link below is for your reference.
Diagnostic logs of Application Gateway
If you want to save the entire request message, I'm afraid you need to build a custom proxy and log the requests yourself.
After I use the CloudBlob.BeginUploadFromStream() method to upload a file, I later get a StorageClientException with StorageErrorCode.ResourceNotFound when trying to retrieve the file for download. If I upload the same file using the CloudBlob.UploadFromStream() method, the blob DOES exist and I can download it.
Here's my download code:
var client = _storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference(BLOB_CONTAINER_DOCUMENTS_ADDRESS);
container.CreateIfNotExist();
string blobName = id.ToString();
var newBlob = container.GetBlobReference(blobName);
if (newBlob.Exists())
{
var stream = newBlob.OpenRead();
return stream;
}
else
{
throw new Exception("Blob does not exist!");
}
Exists is an extension method. I'm getting the StorageClientException with the error code ResourceNotFound when I use the BeginUploadFromStream() method.
public static bool Exists(this CloudBlob blob)
{
try
{
blob.FetchAttributes();
return true;
}
catch (StorageClientException e)
{
if (e.ErrorCode == StorageErrorCode.ResourceNotFound)
{
return false;
}
else
{
throw;
}
}
}
And my call to upload
var blob = container.GetBlobReference(blobName);
This will NOT throw an exception when I later check if the blob exists:
blob.UploadFromStream(fileStream);
This will:
AsyncCallback uploadCompleted = new AsyncCallback(OnUploadCompleted);
blob.BeginUploadFromStream(fileStream, uploadCompleted, documentId);
EDIT
As suggested, I didn't have a call to the EndUploadFromStream() method. Here is my updated call to upload:
blob.BeginUploadFromStream(fileStream, uploadCompleted, blob);
And my handler
private void OnUploadCompleted(IAsyncResult result)
{
var blob = (CloudBlob) result.AsyncState;
blob.EndUploadFromStream(result);
}
Running this, the EndUploadFromStream() method throws a WebException with the message "The request was aborted: The request was canceled." The InnerException is "Cannot close stream until all bytes are written."
Does anyone have any idea what's going on here?
BeginUploadFromStream uploads the blob asynchronously, so your method proceeds while the blob uploads on a thread in the background. If the blob hasn't finished uploading -- or if Azure hasn't been told that the upload has completed -- you won't see the blob in storage. Only blobs uploaded through successfully completed transactions are visible.
Could you post the code for OnUploadCompleted?
It looks at first glance as if either the blob is still uploading -- or you've forgotten to call EndUploadFromStream() in your OnUploadCompleted method.
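For reference, one way to guarantee the upload has completed before the source stream is closed is to wrap the Begin/End pair in a Task and wait on it. This is only a minimal sketch against the old StorageClient 1.x API used in the question; the BlobUploadHelper name is made up for illustration:
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobUploadHelper
{
    // Wrap BeginUploadFromStream/EndUploadFromStream in a Task so the caller
    // can wait for completion before disposing the source stream.
    public static Task UploadAsync(CloudBlob blob, Stream source)
    {
        return Task.Factory.FromAsync(
            blob.BeginUploadFromStream,
            blob.EndUploadFromStream,
            source,
            null); // no async state needed
    }
}

// Usage: keep fileStream open until the returned task completes.
// BlobUploadHelper.UploadAsync(blob, fileStream).Wait();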
It sounds like IIS is cancelling the thread that initiated the BeginUploadFromStream call. Since the storage API is really just a set of REST calls under the hood, you can think of these storage calls as web service calls rather than traditional IO.
Check out this topic on HTTP keep-alives; it might solve your problem, but as the article points out, it may impact the performance of your site, so you may want to add logic that enables keep-alive only for the requests that perform the upload.
http://www.jaxidian.org/update/2007/05/05/8/
I'm converting a website from a standard ASP.NET website over to use Azure. The website had previously taken an Excel file uploaded by an administrative user and saved it on the file system. As part of the migration, I'm saving this file to Azure Storage. It works fine when running against my local storage through the Azure SDK. (I'm using version 1.3 since I didn't want to upgrade during the development process.)
When I point the code to run against Azure Storage itself, though, the process usually fails. The error I get is:
System.IO.IOException occurred
Message=Unable to read data from the transport connection: The connection was closed.
Source=Microsoft.WindowsAzure.StorageClient
StackTrace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
at Framework.Common.AzureBlobInteraction.UploadToBlob(Stream stream, String BlobContainerName, String fileName, String contentType) in C:\Development\RateSolution2010\Framework.Common\AzureBlobInteraction.cs:line 95
InnerException:
The code is as follows:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
string contentType)
{
// Setup the connection to Windows Azure Storage
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
DiagnosticMonitor.Start(storageAccount, dmc);
CloudBlobClient BlobClient = null;
CloudBlobContainer BlobContainer = null;
BlobClient = storageAccount.CreateCloudBlobClient();
// For large file copies you need to set up a custom timeout period
// and using parallel settings appears to spread the copy across multiple threads
// if you have big bandwidth you can increase the thread number below
// because Azure accepts blobs broken into blocks in any order of arrival.
BlobClient.Timeout = new System.TimeSpan(1, 0, 0);
Role serviceRole = RoleEnvironment.Roles.Where(s => s.Value.Name == "OnlineRates.Web").First().Value;
BlobClient.ParallelOperationThreadCount = serviceRole.Instances.Count;
// Get and create the container
BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
BlobContainer.CreateIfNotExist();
//delete prior version if one exists
BlobRequestOptions options = new BlobRequestOptions();
options.DeleteSnapshotsOption = DeleteSnapshotsOption.None;
CloudBlob blobToDelete = BlobContainer.GetBlobReference(fileName);
Trace.WriteLine("Blob " + fileName + " deleted to be replaced by newer version.");
blobToDelete.DeleteIfExists(options);
//set stream to starting position
stream.Position = 0;
long totalBytes = 0;
//Open the stream and read it back.
using (stream)
{
// Create the Blob and upload the file
CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
try
{
BlobClient.ResponseReceived += new EventHandler<ResponseReceivedEventArgs>((obj, responseReceivedEventArgs)
=>
{
if (responseReceivedEventArgs.RequestUri.ToString().Contains("comp=block&blockid"))
{
totalBytes += Int64.Parse(responseReceivedEventArgs.RequestHeaders["Content-Length"]);
}
});
blob.UploadFromStream(stream);
// Set the metadata into the blob
blob.Metadata["FileName"] = fileName;
blob.SetMetadata();
// Set the properties
blob.Properties.ContentType = contentType;
blob.SetProperties();
}
catch (Exception exc)
{
Logging.ExceptionLogger.LogEx(exc);
}
}
}
I've tried a number of different alterations to the code: deleting a blob before replacing it (although the problem exists on new blobs as well), setting container permissions, not setting permissions, etc.
Your code looks like it should work, but it has lots of extra functionality that is not strictly required. I would cut it down to an absolute minimum and go from there. It's really only a gut feeling, but I think it might be the using statement giving you grief. This entire function could be written (presuming the container already exists) as:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
string contentType)
{
// Setup the connection to Windows Azure Storage
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
CloudBlobClient BlobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
stream.Position = 0;
blob.UploadFromStream(stream);
}
Notes on the stuff that I've removed:
You should set up diagnostics just once when your app starts, not every time a method is called, usually in RoleEntryPoint.OnStart() (see the sketch after these notes).
I'm not sure why you're trying to set ParallelOperationThreadCount higher if you have more instances. Those two things seem unrelated.
It's not good form to check for the existence of a container/table every time you save something to it. It's more usual to do that check once when your app starts, or to have a process external to the website make sure all the required containers/tables/queues exist. Of course, if you're creating containers dynamically this doesn't apply.
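On the first point, a rough sketch of what the one-time diagnostics setup might look like in RoleEntryPoint.OnStart(), assuming the standard Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting (adjust to your own configuration):
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Configure and start the diagnostic monitor once, when the role starts,
        // instead of on every call to UploadToBlob.
        var dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
        dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

        return base.OnStart();
    }
}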
The problem turned out to be firewall settings on my laptop. It's my personal laptop, originally set up at home, so the firewall rules weren't configured for a corporate environment, resulting in slow performance on uploads and downloads.