HTTP trigger Azure Function returns 404 randomly

We have a .NET Core Azure Function on Functions runtime 3. It works perfectly fine when run locally, and most of the time on our deployed app service. However, we've experienced intermittent 404 responses for requests that go through fine at other times.
No entries appear in our logs or Application Insights telemetry for the failing requests.
It feels a lot like this issue on the azure-functions-host GitHub project:
https://github.com/Azure/azure-functions-host/issues/5247
though that targets Functions runtime 1 or 2.
Has anyone had similar issues, or does anyone know of a way to get additional log info that might highlight what problem we're running into?

That can happen during scale-out, or when your function app is about to be moved from one server to another (for whatever reason). That's one of the actual downsides of serverless applications.
What I can suggest is this: create a HeartBeat function similar to the following:
public class HeartBeat
{
    private readonly IAsyncRepository<Business> _businessAsyncRepository;

    public HeartBeat(IAsyncRepository<Business> businessAsyncRepository)
    {
        // All of your DI injections go here
        _businessAsyncRepository = businessAsyncRepository;
    }

    [FunctionName(nameof(HeartBeat))]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
        ILogger log)
    {
        // A GET heartbeat has no body to read; just confirm the app is responsive
        return new OkObjectResult("OK");
    }
}
Then create an availability test in Application Insights that calls the HeartBeat endpoint.
This also keeps a warm instance of your Azure Functions app at all times. The trade-off is that each heartbeat call counts against the 1M free executions on the Consumption plan, so the cost depends on how frequently the availability test fires.

Related

Waiting for an azure function durable orchestration to complete

Currently working on a project where I'm using a storage queue to pick up items for processing. The storage-queue-triggered function picks up an item from the queue and starts a durable orchestration. According to the documentation, the queue trigger picks up 16 messages in parallel by default (https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue), but since starting the orchestration is a simple and quick operation, if I have a lot of messages in the queue I end up with a lot of orchestrations running at the same time. I would like to start an orchestration and wait for it to complete before the next batch of messages is picked up, to avoid overloading my downstream systems. The solution I came up with, and which seems to work, is:
public class QueueTrigger
{
    [FunctionName(nameof(QueueTrigger))]
    public async Task Run(
        [QueueTrigger("queue-processing-test", Connection = "AzureWebJobsStorage")] Activity activity,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {activity.ActivityId}");
        string instanceId = await starter.StartNewAsync<Activity>(nameof(ActivityProcessingOrchestrator), activity);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

        // Poll until the orchestration finishes, so the trigger does not
        // complete and release the next batch of messages
        DurableOrchestrationStatus status;
        do
        {
            status = await starter.GetStatusAsync(instanceId);
        } while (status.RuntimeStatus == OrchestrationRuntimeStatus.Running
              || status.RuntimeStatus == OrchestrationRuntimeStatus.Pending);
    }
}
which basically picks up the message, starts the orchestration, and then waits in a do/while loop while the status is Pending or Running.
Am I missing something here, or is there a better way of doing this? (I could not find much online.)
Thanks in advance for your comments or suggestions!
This might not work: you could either hit timeouts, causing duplicate orchestration runs, or simply force your function app to scale out, defeating the purpose of your code altogether.
Instead, you can rely on the concurrency throttles that come with Durable Functions. The queue trigger still queues up orchestration runs, but only the configured maximum run at any time on a single instance of the function app; see the host.json sketch below.
This can still cause your function app to scale out, so consider that as well when setting the limit. You can also set the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting to control how many instances your function app can scale out to.
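For reference, here is a minimal host.json sketch of those throttles, assuming Durable Functions 2.x; the limits shown are illustrative, not recommendations:
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentOrchestratorFunctions": 5,
      "maxConcurrentActivityFunctions": 5
    }
  }
}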
It could be that the function app's built-in scaling throttling does not reduce load on downstream services, because it is per app and the app will just scale out more. What's needed then is a distributed max instance count that all app instances adhere to. I have built this functionality into my Durable Functions orchestration app with a scaleGroupId and its max instance count. It has an API call to save this info, and the scaleGroupId is a string that can be set to anything that describes the resource you want to protect from overloading. Here is my app that can do this:
Microflow

Azure App Insights issue with end to end tracing

I seem to be having some issues with tracing end to end using Azure Functions and a Service Bus queue.
I have an HTTP trigger that puts the JSON message body onto the Service Bus queue as a Message, with a generated correlation ID (a GUID).
I have the Service Bus set up as a queue; this is fine.
I then have a Service Bus queue trigger function that pulls the message off the queue. This is working fine.
However, the Service Map and the telemetry for the request do not provide the end-to-end trace. It is split into two: my HTTP trigger and my Service Bus trigger.
The operation IDs are different.
What am I doing wrong?
Here is the HTTP Trigger SEND function
[FunctionName("SendQueueMessage")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
ILogger log,
[ServiceBus("busqueue", Connection = "xxxxx", EntityType = EntityType.Queue)] ICollector<Message> outputQueueItem)
{
log.LogInformation("Processing a request.");
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
var correlationId = Guid.NewGuid().ToString();
outputQueueItem.Add(new Message(Encoding.UTF8.GetBytes(requestBody))
{
CorrelationId = correlationId
});
return new OkObjectResult($"Successful CorrelationId: {correlationId}");
}
Here is the Queue Trigger:
[FunctionName("RetrieveFromBusQueue")]
public static void Run([ServiceBusTrigger("busqueue",
Connection = "xxxxxxxx")]Message messageItem,
ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message {System.Text.Encoding.UTF8.GetString(messageItem.Body)}");
log.LogInformation($"Message ID: {messageItem?.CorrelationId}");
}
Is there some setting I'm missing?
Both of these functions are under the same function app.
Strangely, the queue never shows up in the Map either. I'm so confused here.
I am invoking the HTTP trigger through the Code + Test page in Azure against the trigger function. Is it something to do with HTTP headers?
Can someone point me in the right direction with this, please?
UPDATE:
So... it turns out that if I use a much older version of the Microsoft.ApplicationInsights NuGet package, 2.10.0 instead of 2.16.0, it actually traces my call to the Service Bus and off the queue, with the correct display in the Map.
This is absolutely bizarre, to say the least; I was pulling my hair out for hours trying to figure out what I had done wrong. I am still unsure why the latest version causes this issue.
It seems to be a bug in the new version, 2.16.0.
As per the comment, you'd better raise an issue about this in its GitHub repo here.
I had this issue recently with 3.1.6.
Upgrading to 3.1.13 (the latest) didn't help.
There is a compatibility problem in versions >= 3.1.4; downgrading to 3.1.3 fixed the problem for me.

Sequence processing with Azure Function & Service Bus

I have an issue with an Azure Function Service Bus trigger.
The issue is that the function does not wait for one message to finish before processing the next one. Messages are processed in parallel; the function does not wait 5 seconds before getting the next message, but I need them processed sequentially (as in the image below).
How can I do that?
[FunctionName("HttpStartSingle")]
public static void Run(
[ServiceBusTrigger("MyServiceBusQueue", Connection = "Connection")]string myQueueItem,
[OrchestrationClient] DurableOrchestrationClient starter,
ILogger log)
{
Console.WriteLine($"MessageId={myQueueItem}");
Thread.Sleep(5000);
}
I resolved my problem by using this config in my host.json, which limits the trigger to one message at a time:
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 1
      }
    }
  }
}
There are two approaches you can take to accomplish this; a sketch of the singleton pattern from (1) follows after this list.
(1) Use a Durable Function with function chaining:
For background jobs you often need to ensure that only one instance of a particular orchestrator runs at a time. This can be done in Durable Functions by assigning a specific instance ID to an orchestrator when creating it.
(2) Partition the data you write to the queue. Partitioning handles the ordering of messages automatically, so you do not need to manage it manually in the Azure Function.
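Here is a minimal sketch of the singleton pattern from (1), assuming the Durable Functions 2.x client API; the orchestrator name and instance ID are illustrative:
// A fixed instance ID enforces the singleton: only start a new run
// if no instance with this ID is currently active
const string instanceId = "SingletonProcessingOrchestrator";
var existing = await starter.GetStatusAsync(instanceId);
if (existing == null
    || existing.RuntimeStatus == OrchestrationRuntimeStatus.Completed
    || existing.RuntimeStatus == OrchestrationRuntimeStatus.Failed
    || existing.RuntimeStatus == OrchestrationRuntimeStatus.Terminated)
{
    await starter.StartNewAsync(nameof(ProcessingOrchestrator), instanceId, input);
}
else
{
    log.LogInformation($"Instance '{instanceId}' already running; skipping start.");
}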
In general, ordered messaging is not something I'd strive to implement, since the order can, and at some point will, be disturbed. That said, in some scenarios it's required. For that, you should either use a Durable Function to orchestrate your messages or use Service Bus message sessions.
Azure Functions has recently added support for ordered message delivery (with the accent on the delivery part, as processing can still fail). It's almost the same as a normal function, with the slight change that you need to instruct the SDK to utilize sessions.
public async Task Run(
    [ServiceBusTrigger("queue",
        Connection = "ServiceBusConnectionString",
        IsSessionsEnabled = true)] Message message, // enable sessions on the trigger
    ILogger log)
{
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {Encoding.UTF8.GetString(message.Body)}");
    await _cosmosDbClient.Save(...);
}
Here's a post with more details.
Warning: using sessions requires messages to be sent with a session ID, which potentially means a change on the sending side, as the sketch below shows.
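A minimal sketch of that sending-side change, assuming the Microsoft.Azure.ServiceBus SDK used above; the queue client setup and the session key are illustrative:
// Messages that share a SessionId are delivered in order to a single consumer
var message = new Message(Encoding.UTF8.GetBytes(payload))
{
    SessionId = "order-12345" // e.g. an order or customer ID
};
await queueClient.SendAsync(message); // queueClient: a Microsoft.Azure.ServiceBus QueueClient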

how to use structured logging in Azure Functions

I am using the relatively new ILogger (vs. TraceWriter) option in Azure Functions and trying to understand how logs are captured.
Here's my function:
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req,
    ILogger log)
{
    // "{level}" is a structured-logging placeholder, not string interpolation
    log.LogTrace("Function 1 {level}", "trace");
    log.LogWarning("Function 1 {level}", "warning");
    log.LogError("Function 1 {level}", "error");
    return req.CreateResponse(HttpStatusCode.OK, "Success!!!!");
}
When I look at the server logs, the LogFiles directory has a hierarchy.
The yellow highlighted file includes my log statements:
2017-08-19T13:58:31.814 Function started (Id=d40f2ca6-4cb6-4fbe-a05f-006ae3273562)
2017-08-19T13:58:33.045 Function 1 trace
2017-08-19T13:58:33.045 Function 1 warning
2017-08-19T13:58:33.045 Function 1 error
2017-08-19T13:58:33.075 Function completed (Success, Id=d40f2ca6-4cb6-4fbe-a05f-006ae3273562, Duration=1259ms)
The structured directory contains nothing here, but it does contain various "codeddiagnostic" log statements in my real function application's directory.
What should I expect here? Ultimately, I would like to have a single sink for logging from all of my application components and to take advantage of structured logging across the board.
The best way to collect structured logging from Azure Functions is to use Application Insights. Once your logger is based on ILogger, you can define a message template that specifies the properties you want to log. Then, in the Application Insights traces, using the Application Insights Query Language (a.k.a. Kusto), you can access the value of each of those properties under the name customDimensions.prop__{name}, as in the query below.
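For example, a minimal Kusto sketch against the function above; the property name level comes from the {level} placeholder in the message template:
traces
| where customDimensions.prop__level == "warning"
| project timestamp, message, customDimensions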
You can find a sample of how to do this with Azure Functions v2 on this post https://platform.deloitte.com.au/articles/correlated-structured-logging-on-azure-functions
Just FYI: Azure Functions running in isolated mode (.NET 5 and .NET 6) don't support structured logging using the ILogger from DI or the one provided in the FunctionContext. As of November 2021, there is an open bug about it: https://github.com/Azure/azure-functions-dotnet-worker/issues/423
As I understand it, the bug happens because all ILogger calls go through the gRPC connection between the host and the isolated worker, and in that process the message gets formatted instead of being sent as the original template plus arguments. The Application Insights connection that would record the structured log runs on the host and only receives the final message.
I plan on investigating some kind of workaround, playing with direct access to Application Insights inside the isolated process. If it works, I'll post the workaround in a comment on the bug linked above.
I had the same question. The log-file logger doesn't really respect structured logging, but if you use Application Insights for Azure Functions, it will in fact add custom properties for your structured logging.
I had a conversation going about it here:
https://github.com/Azure/azure-webjobs-sdk-script/issues/1675

Azure Socket Leaks?

I have an ASP.NET Core website with a lot of simultaneous users, and it crashes many times during the day. I have scaled up and out, but no luck.
I have been told by numerous Azure support staff that the issue is that I'm sending out a lot of database calls, although database utilization improved after creating indexes. Can you kindly advise what you think the problem is, as I have done my best...
I was told that I have "socket leaks".
Please note:
I don't have any external service calls except to sendgrid
I have not used ConfigureAwait(false)
I'm not using "using" statements or explicitly disposing contexts
This is my connection string, if it helps:
Server=tcp:sarahah.database.windows.net,1433;Initial Catalog=SarahahDb;Persist Security Info=False;User ID=********;Password=******;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=400;
These are some code examples:
In Startup.CS:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
Main class:
private readonly ApplicationDbContext _context;
public MessagesController(ApplicationDbContext context, IEmailSender emailSender, UserManager<ApplicationUser> userManager)
{
_context = context;
_emailSender = emailSender;
_userManager = userManager;
}
This is an important method, for example:
string UserId = _userManager.GetUserId(User);
var user = await _context.Users
    .Where(u => u.Id.Equals(UserId))
    .Include(u => u.Messages)
    .FirstOrDefaultAsync();
// some other code
return View(user.Messages);
Please advise, as I have tried my best, but this is very embarrassing to me in front of my customers.
Without the error messages that you're seeing, here are a few ideas you can check.
I'd start by going to your Web App's Overview blade in the Azure Portal. Update the monitoring graph to a time period when you're experiencing problems. Are you CPU bound? Have you exhausted memory? Also check the HTTP queue length: if your HTTP queue is really long, it's because your server is choking trying to service the requests, and users are experiencing timeout issues.
Next, jump over to your SQL Server's Overview blade in the Azure Portal and look at the resource utilization chart, with the time period set to when you're experiencing problems. Have you maxed out the DTUs for your database? If so, it's a sign of poor indexing, poor schema design, or simply being undersized and needing to scale up.
Turn on Application Insights if you haven't already. You can use the Application Insights API to insert your own trace statements into your code (a short sketch follows below), or you might be able to see the exceptions causing the issue without doing your own tracing.
Check the Kudu logs for your Web Apps.
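A minimal sketch of such manual tracing, assuming the Microsoft.ApplicationInsights package; in a real app the TelemetryClient would come from DI and its configuration from app settings:
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

var telemetry = new TelemetryClient(TelemetryConfiguration.CreateDefault());
telemetry.TrackTrace("Entering SendMessage", SeverityLevel.Information);
try
{
    // ... the database call being investigated ...
}
catch (Exception ex)
{
    telemetry.TrackException(ex); // surfaces handled exceptions in App Insights
    throw;
}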
I agree with Tseng - your usage of EF and .NET Core's DI framework looks correct.
Let us know how the troubleshooting goes and provide additional information on exactly what kind of errors you're seeing. Best of luck!
It looks like a DI issue to me. You are injecting an ApplicationDbContext context, which means the ApplicationDbContext is resolved from the DI container and stays open for the entire request (transient), as Tseng pointed out. It should be scoped.
You can inject an IServiceScopeFactory scopeFactory in your controller and do something like:
using (var scope = _scopeFactory.CreateScope())
{
    // This context lives only for this scope and is disposed when the scope ends
    var context = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
}
Note that if you are using ASP.NET Core 1.1 and want to be sure that all your services are resolved correctly, change the ConfigureServices method in your Startup to:
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Register services
    return services.BuildServiceProvider(validateScopes: true);
}
