We have this host.json file in our Azure Function:
{
  "version": "2.0",
  "functions": [ "xxx" ],
  "logging": {
    "applicationInsights": {
      "samplingExcludedTypes": "Request",
      "samplingSettings": {
        "isEnabled": true
      }
    }
  }
}
Now we want to exclude exceptions from being automatically logged as well (we handle exceptions ourselves in a try/catch block, so we don't want them logged twice).
However, I am not sure samplingExcludedTypes is the right property to use; according to this issue:
https://github.com/MicrosoftDocs/azure-docs/issues/47219, excludedTypes is the one to use.
Should I just do this:
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request;Exception"
}
}
}
Normally I would say that what you need is a telemetry processor or telemetry initializer, depending on the language, that drops all exception telemetry. Unfortunately, a telemetry processor does not work in an Azure Function; it is not supported there.
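For comparison, a telemetry processor that drops exceptions would look roughly like this in a regular ASP.NET Core app (a minimal sketch, shown only to illustrate the approach that is not available in the Functions host):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch of a processor that drops all exception telemetry.
// The Functions host does not pick up custom telemetry processors,
// which is why the initializer approach below is used instead.
public class DropExceptionsProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public DropExceptionsProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is ExceptionTelemetry) return; // swallow exception telemetry
        _next.Process(item);                    // pass everything else down the chain
    }
}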
We can, however, leverage the sampling settings to prevent telemetry from being sent by forcing it to be sampled out. This works when adaptive sampling is enabled (I did not try other types of sampling), which is the default sampling behavior of an Azure Function. To do that we can set the ProactiveSamplingDecision property like this:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class DropExceptionTelemetry : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Only act on exception telemetry; leave everything else untouched.
        if (!(telemetry is ExceptionTelemetry item)) return;
        // Mark the item so adaptive sampling discards it.
        item.ProactiveSamplingDecision = SamplingDecision.SampledOut;
    }
}
Also, don't forget to add this initializer using DI:
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

// The assembly attribute is required for the Functions host to pick up the startup class.
[assembly: FunctionsStartup(typeof(Startup))]

public class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        builder.Services.AddSingleton<ITelemetryInitializer, DropExceptionTelemetry>();
        builder.Services.AddLogging();
    }
}
For this to work, make sure Exception is not listed in the excludedTypes property:
"applicationInsights": {
"samplingSettings": {
"excludedTypes": "Event;Request"
}
}
Related
I am new to Azure Durable Functions. I have created a sample Azure Durable Function using VS 2019. I am running the default generated Azure Durable Function template code locally with the Azure Storage emulator, and when I run the durable function, the OrchestrationTrigger gets stuck and is not able to resume.
The hub name is samplehubname. There are pending records in the samplehubnameInstances Azure table, but there are no records in the samplehubnameHistory Azure table.
There are no exceptions and no errors in the code.
SampleFunction.cs
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SampleFunction
{
    [FunctionName("SampleFunction")]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var outputs = new List<string>();

        // Replace "hello" with the name of your Durable Activity Function.
        outputs.Add(await context.CallActivityAsync<string>("SampleFunction_Hello", "Tokyo"));

        // returns ["Hello Tokyo!"]
        return outputs;
    }

    [FunctionName("SampleFunction_Hello")]
    public static string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return $"Hello {name}!";
    }

    [FunctionName("SampleFunction_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        // Function input comes from the request content.
        string instanceId = await starter.StartNewAsync("SampleFunction", null);

        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsSecretStorageType": "files", //files
    "MyTaskHub": "samplehubname"
  }
}
host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensions": {
    "durableTask": {
      "hubName": "%MyTaskHub%"
    }
  }
}
samplehubname-control-03 Message Queue
{"$type":"DurableTask.AzureStorage.MessageData","ActivityId":"72b75a34-e403-4772-aed0-fbb10039795a","TaskMessage":{"$type":"DurableTask.Core.TaskMessage","Event":{"$type":"DurableTask.Core.History.ExecutionStartedEvent","OrchestrationInstance":{"$type":"DurableTask.Core.OrchestrationInstance","InstanceId":"f8d0499a4297480c8bdf4a56954861d3","ExecutionId":"2e46b87e4cf74c2dab572d92e012bded"},"EventType":0,"ParentInstance":null,"Name":"Function1","Version":"","Input":"null","Tags":null,"EventId":-1,"IsPlayed":false,"Timestamp":"2021-09-21T15:41:35.0156514Z"},"SequenceNumber":0,"OrchestrationInstance":{"$type":"DurableTask.Core.OrchestrationInstance","InstanceId":"f8d0499a4297480c8bdf4a56954861d3","ExecutionId":"2e46b87e4cf74c2dab572d92e012bded"}},"CompressedBlobName":null,"SequenceNumber":1,"Sender":{"$type":"DurableTask.Core.OrchestrationInstance","InstanceId":"","ExecutionId":""}}
Any help will be appreciated.
If your orchestration code contains too many loops, follow the guidance in the Eternal orchestrations documentation.
In your code there is not much looping, so you can use the TerminateAsync (.NET) method of the orchestration client binding to stop a stuck instance, as sketched below.
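A rough sketch of such a termination call, assuming a hypothetical HTTP-triggered helper function (the route and the reason text are illustrative):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class TerminateInstance
{
    // Hypothetical helper: terminates a stuck orchestration instance by its ID.
    [FunctionName("TerminateInstance")]
    public static Task Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "terminate/{instanceId}")] HttpRequestMessage req,
        string instanceId,
        [DurableClient] IDurableOrchestrationClient client)
    {
        return client.TerminateAsync(instanceId, "Terminated manually while investigating the stuck orchestration");
    }
}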
Connect your durable function to Application Insights to get a clearer view of the issue; it may help you find the cause.
Check the similar issue here.
Also try the steps from the fan out/fan in pattern in Durable Functions patterns: fan out/fan in, sketched below.
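Assuming a hypothetical "DoWork" activity that returns an int, the pattern looks roughly like this:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FanOutFanIn
{
    [FunctionName("FanOutFanIn")]
    public static async Task<int> Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Fan out: schedule several activities in parallel.
        var tasks = new List<Task<int>>();
        for (int i = 0; i < 5; i++)
        {
            tasks.Add(context.CallActivityAsync<int>("DoWork", i)); // "DoWork" is a hypothetical activity
        }

        // Fan in: wait for all of them and aggregate the results.
        int[] results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}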
In my Azure Function I am trying to log using the DI container; however, I am unable to get the logger to display messages in the Log Stream or the traces in Application Insights. I have created a minimal code sample project which I am testing using the Azure portal (Code + Test), shown here.
I've tried using Serilog with the same result.
I've tried removing the log parameter from Function1 - then I get no errors and no messages.
I assume I'm missing something simple/obvious but I'm stuck.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace LoggingApiTest
{
    public class Function1
    {
        private readonly ILogger<Function1> logger;

        public Function1(ILogger<Function1> logger)
        {
            this.logger = logger;
            logger.LogInformation("In Function1 ctor using DI created Logger");
        }

        [FunctionName("Function1")]
        public IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "Test")] HttpRequest req,
            ILogger log)
        {
            logger.LogInformation("Hello from DI Logger");
            log.LogInformation("Hello from logger passed as parameter");
            return new OkObjectResult("Hello");
        }
    }
}
Startup.cs
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

namespace LoggingApiTest
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddLogging();
        }
    }
}
host.json
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
},
"logLevel": {
"default": "Information"
}
}
}
}
The Log Stream shows:
2021-07-16T12:59:50.705 [Information] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.', Id=b8238346-7ff7-450d-a7f0-abc8f1a210fa)
2021-07-16T12:59:50.705 [Information] Hello from logger passed as parameter
2021-07-16T12:59:50.705 [Information] Executed 'Function1' (Succeeded, Id=b8238346-7ff7-450d-a7f0-abc8f1a210fa, Duration=1ms)
In the logging section of your host.json file, try adding your solution namespace to the logLevel settings. This should enable logging at a level of Information and above for that namespace.
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "LoggingApiTest": "Information"
    }
  }
}
https://github.com/Azure/azure-functions-host/issues/4425#issuecomment-492678381
I'm using Azure Functions on .NET Core 3.1 and I cannot make my custom logs work as expected.
In the Values section of the local.settings.json file I put the APPINSIGHTS_INSTRUMENTATIONKEY key, and I read in the docs that the Functions runtime adds the logging automatically.
I can see the default logs in the Application Insights panel, but my custom logs are not being written there. In my class I did this:
private readonly ILogger<CustomService> _logger;

public CustomService(ILogger<CustomService> logger)
{
    _logger = logger;
}

public void Test()
{
    _logger.LogError("TestLog");
}
What is the proper way of injecting the default logger into the constructor of other classes that are not the function method itself?
ILogger is indeed not a valid injection. ILogger<T> does work and creates a filter of type T. However, you need to manually add the ability to listen to custom filters (in this case your class) for them to show up. You can do that in host.json. Here's a sample.
host.json:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingExcludedTypes": "Request",
      "samplingSettings": {
        "isEnabled": true
      }
    },
    "logLevel": {
      "Hollan.Functions.HttpTrigger": "Information"
    }
  }
}
HttpTrigger.cs
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace Hollan.Functions
{
    public class HttpTrigger
    {
        private readonly ILogger<HttpTrigger> _log;

        public HttpTrigger(ILogger<HttpTrigger> log)
        {
            _log = log;
        }

        [FunctionName("HttpTrigger")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req)
        {
            _log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            string responseMessage = string.IsNullOrEmpty(name)
                ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
                : $"Hello, {name}. This HTTP triggered function executed successfully.";

            return new OkObjectResult(responseMessage);
        }
    }
}
Source: https://github.com/Azure/Azure-Functions/issues/1484
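Applied to the class in the question, the same pattern would look roughly like this (a sketch; the MyApp namespace and the service registration are assumptions based on the code shown above):

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp // assumed namespace; use your real one
{
    public class CustomService
    {
        private readonly ILogger<CustomService> _logger;

        public CustomService(ILogger<CustomService> logger)
        {
            _logger = logger;
        }

        public void Test()
        {
            // Shows up once the "MyApp.CustomService" log category is allowed in host.json.
            _logger.LogError("TestLog");
        }
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register the service so the constructor injection above is resolved.
            builder.Services.AddSingleton<CustomService>();
        }
    }
}

The corresponding host.json logLevel entry would then use your real namespace plus class name as the category, analogous to "Hollan.Functions.HttpTrigger" above.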
I have a .NET Core WebAPI app. The app is deployed on Azure as an App Service.
In the code, I have enabled Application Insights like so
public static IWebHost BuildWebHost(string[] args) =>
    WebHost
        .CreateDefaultBuilder(args)
        .UseApplicationInsights()
        .UseStartup<Startup>()
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging")).SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Error);
            logging.AddApplicationInsights(" xxxxx-xxxx-xxxx-xxxx--xxxxxxxxxxx").SetMinimumLevel(LogLevel.Trace);
        })
        .Build();
In the constructor of a controller and inside a method of a controller I have these logging statements.
_logger.LogInformation("ApiController, information");
_logger.LogWarning("ApiController, warning");
_logger.LogCritical("ApiController, critical");
_logger.LogWarning("ApiController, error");
_logger.LogDebug("ApiController, debug");
In the Azure Portal I have Application Insights for my App Service enabled. Here is a picture from the portal.
App Insights in Azure Portal
But where do I see the logging statements in the Azure portal?
When I go to Application Insights -> Logs and I query by
search *
I can see the requests made to the API but not the logging statements.
Application Insights Log
Where are the log statements?
First, it is not good practice to configure the log level in code. You can easily configure the log level in the appsettings.json file. So in Program.cs -> public static IWebHost BuildWebHost method, change the code to the following:
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights()
        .UseStartup<Startup>()
        .Build();
Then in appsettings.json (also right-click the file -> Properties -> set "Copy to Output Directory" to "Copy if newer"):
{
  "Logging": {
    "IncludeScopes": false,
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Trace"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  },
  "ApplicationInsights": {
    "InstrumentationKey": "the key"
  }
}
In Controller, like ValuesController:
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class ValuesController : Controller
{
    private readonly ILogger _logger;

    public ValuesController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ValuesController>();
    }

    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        _logger.LogInformation("ApiController, information");
        _logger.LogWarning("ApiController, warning");
        _logger.LogCritical("ApiController, critical");
        _logger.LogWarning("ApiController, error");
        _logger.LogDebug("ApiController, debug");

        return new string[] { "value1", "value2" };
    }
}
Run the project and wait for a few minutes (Application Insights always takes 3 to 5 minutes or more to display the data). Then navigate to the Azure portal -> Application Insights -> Logs, and remember that all the logs written by ILogger are stored in the "traces" table. Just write a query like "traces", specify a proper time range, and you should see all the logs like below:
My scenario: with a message coming from an Azure Storage Queue, I want to either create or update a document in DocumentDB.
I set the DocumentDB consistency level to Strong, so that it is guaranteed that the document is updated.
I use a Singleton/Listener, so that only one queue entry is processed at a time.
Here is my code:
function.json
{
  "disabled": false,
  "bindings": [
    {
      "name": "updateEntry",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "update-log",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "documentDB",
      "name": "inputDocument",
      "databaseName": "logging",
      "collectionName": "messages",
      "id": "{id}",
      "connection": "documentDB",
      "direction": "in"
    },
    {
      "type": "documentDB",
      "name": "outputDocument",
      "databaseName": "logging",
      "collectionName": "messages",
      "createIfNotExists": true,
      "connection": "documentDB",
      "direction": "out"
    }
  ]
}
run.csx
#r "Newtonsoft.Json"
using System;
using System.Collections.Generic;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
[Singleton(Mode = SingletonMode.Listener)]
public static void Run(UpdateEntryType updateEntry, TraceWriter log, DocumentType inputDocument, out DocumentType outputDocument)
{
log.Info($"update entry:{updateEntry.id} {updateEntry.created} {updateEntry.Event.ToString()}");
outputDocument = new DocumentType();
outputDocument.Events = new List<JObject>();
if (inputDocument == null)
{
outputDocument.id = updateEntry.id;
outputDocument.created = updateEntry.created;
}
else
{
log.Info($"input document:{inputDocument.id} {inputDocument.created} {inputDocument.Events.Count}");
outputDocument.id = inputDocument.id;
outputDocument.created = updateEntry.created.CompareTo(inputDocument.created) < 0 ? updateEntry.created : inputDocument.created;
outputDocument.Events.AddRange(inputDocument.Events);
}
outputDocument.Events.Add(updateEntry.Event);
log.Info($"output document:{outputDocument.id} {outputDocument.created} {outputDocument.Events.Count}");
}
public class UpdateEntryType
{
public string id { get; set; }
public string created { get; set; }
public JObject Event { get; set; }
}
public class DocumentType
{
public string id { get; set; }
public string created { get; set; }
public List<JObject> Events { get; set; }
}
My problem: most of the time an actually existing document is found by its id and hence updated - but at least 5% of the time it is not.
My questions (before I open a case with MSFT support):
What am I missing? Is this the right approach or is it destined to fail anyway?
not for at least 5% of the time
From my experience, if it is a dedicated plan, we need to turn on the Always On setting for our Function App.
The Function runtime will go idle after a few minutes of inactivity, so only HTTP triggers will actually "wake up" your functions. This is similar to how WebJobs must have Always On enabled.
For more detailed info about the Consumption plan and the dedicated App Service plan, and how to set the app setting, please refer to Enable Always On when running on dedicated App Service Plan.
Using Queue batchSize 1 instead of SingletonMode.Listener made the problem disappear.
host.json
{
  "queues": {
    "batchSize": 1
  }
}