Serilog logging only part of the message in Azure Application Insights

After searching a lot on Google and trying to find the cause, I logged an issue in the GitHub repo from which I had read about the Serilog implementation in a .NET Core function app: https://github.com/serilog/serilog-sinks-applicationinsights/issues/179
Serilog is not logging the complete message in Azure Application Insights, and I have no idea what the reason could be. On the console, however, it logs the complete message. Below is the code snippet in Startup.cs:
public override void Configure(IFunctionsHostBuilder builder)
{
    var logger = ConfigureLogging();
    builder.Services.AddLogging(lb => lb.AddSerilog(logger));
}

private Logger ConfigureLogging()
{
    var telemetryConfiguration = TelemetryConfiguration.CreateDefault();
    telemetryConfiguration.InstrumentationKey =
        Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY");

    // Note: if a setting is missing, TryParse sets the out variable to 0 (Verbose),
    // not the Warning default assigned above.
    int defaultLoggingSwitch = 3; // Warning
    int tloggingSwitch = 3;       // Warning
    int tSloggingSwitch = 3;      // Warning
    Int32.TryParse(Environment.GetEnvironmentVariable("DefaultLogging"), out defaultLoggingSwitch);
    Int32.TryParse(Environment.GetEnvironmentVariable("TMPLoggingSwitch"), out tloggingSwitch);
    Int32.TryParse(Environment.GetEnvironmentVariable("TESLoggingSwitch"), out tSloggingSwitch);

    LoggingLevelSwitch SeriLogLevelSwitch = new LoggingLevelSwitch((LogEventLevel)defaultLoggingSwitch);
    LoggingLevelSwitch TMPLoggingSwitch = new LoggingLevelSwitch((LogEventLevel)tloggingSwitch);
    LoggingLevelSwitch TESLoggingSwitch = new LoggingLevelSwitch((LogEventLevel)tSloggingSwitch);

    var logger = new LoggerConfiguration()
        .MinimumLevel.ControlledBy(SeriLogLevelSwitch)
        .MinimumLevel.Override("ClassName", TMPLoggingSwitch)
        .MinimumLevel.Override("IEventsService", TESLoggingSwitch)
        .Enrich.FromLogContext()
        .WriteTo.ApplicationInsights(telemetryConfiguration, TelemetryConverter.Events)
        .CreateLogger();

    return logger;
}
It is consumed in an Event Hub triggered function app as shown below.
Injecting the logger in the function app class:
private readonly ILogger<ISampleClass> log; // backing field for the injected logger

public EventHubProcessing(ITypeService teService, IConfiguration configuration, IServiceScopeFactory serviceScopeFactory, ILogger<ISampleClass> logger)
{
    log = logger;
}
The Run method:
public async Task Run([EventHubTrigger("%EVENTHUB-RECIEVE%", Connection = "EVENTHUB-RECIEVE-CONN", ConsumerGroup = "%ConsumerGroup%")] EventData[] events, Microsoft.Azure.WebJobs.ExecutionContext executionContext, CancellationToken cancellationToken)
{
    foreach (var eventData in events)
    {
        string json = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
        log.LogInformation($"Event Hub trigger function processed a message: {json}");
    }
}
Below are the NuGet package versions (screenshot: Serilog NuGet package versions).
Please let me know if anything else is required.

There are two places in Azure Application Insights that show the log message. One of them was not showing the complete message, which I believe is by design and only shows the initial part of the log message, but the other field contained the complete message. So it is not an issue at all.
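As a side note, if the goal is to see the complete rendered text in the main message field, one option is to send the events to the traces table instead of custom events, and to log the payload as a named property rather than via string interpolation so it also shows up under customDimensions. This is only a sketch based on the code above; TelemetryConverter.Traces is part of the same Serilog sink, and the property name MessageJson is made up for illustration.
// Sketch: write to traces instead of events, reusing telemetryConfiguration from ConfigureLogging().
var logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.ApplicationInsights(telemetryConfiguration, TelemetryConverter.Traces)
    .CreateLogger();
// In the Run method, a message template keeps the payload as a structured property:
log.LogInformation("Event Hub trigger function processed a message: {MessageJson}", json);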

Related

How to distinguish traces from different instances of a .NET Core application in Application Insights

I work on a .NET Core 2.2 console application that uses Microsoft.Extensions.Logging and is configured to send logs to Azure Application Insights using Microsoft.ApplicationInsights.Extensibility:
services.AddSingleton(x =>
    new TelemetryClient(
        new TelemetryConfiguration
        {
            InstrumentationKey = "xxxx"
        }));
...
var loggerFactory = serviceProvider.GetService<ILoggerFactory>();
loggerFactory.AddApplicationInsights(serviceProvider, logLevel);
It works fine: I can read the logs in Application Insights. But the application can be started simultaneously in several instances (in different Docker containers). How can I distinguish traces from different instances? I could use the source FileName, but I don't know how to inject it.
I tried to use Scope:
var logger = loggerFactory.CreateLogger<Worker>();
logger.BeginScope(dto.FileName);
logger.LogInformation($"Start logging.");
Interestingly, my configuration is almost identical to the example in https://github.com/MicrosoftDocs/azure-docs/issues/12673,
but in my case I can't see the property "FileName" in Application Insights.
For a console project, if you want to use a custom ITelemetryInitializer, you should register it with: configuration.TelemetryInitializers.Add(new CustomInitializer());
The official doc is here.
I tested it on my side, and it works; the role name is set.
Sample code is below:
static void Main(string[] args)
{
    TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
    configuration.InstrumentationKey = "xxxxx";
    configuration.TelemetryInitializers.Add(new CustomInitializer());

    var client = new TelemetryClient(configuration);

    ServiceCollection services = new ServiceCollection();
    services.AddSingleton(x => client);
    var provider = services.BuildServiceProvider();

    var loggerFactory = new LoggerFactory();
    loggerFactory.AddApplicationInsights(provider, LogLevel.Information);
    var logger = loggerFactory.CreateLogger<Program>();
    logger.LogInformation("a test message 111...");

    Console.WriteLine("Hello World!");
    Console.ReadLine();
}
Check the role name in the Azure portal:
If you really have no way to distinguish them you can use a custom telemetry initializer like this:
public class CustomInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleName = Environment.MachineName;
    }
}
and/or you can add a custom property:
public class CustomInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is ISupportProperties)
        {
            ((ISupportProperties)telemetry).Properties["MyIdentifier"] = Environment.MachineName;
        }
    }
}
In this example I used Environment.MachineName, but you can of course use something else if needed, like that work Id parameter of yours.
Then wire it up using:
services.AddSingleton<ITelemetryInitializer, CustomInitializer>();
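Building on the initializer above, here is a minimal sketch of stamping every telemetry item with a per-instance value supplied at startup; the class name InstanceInitializer, the "FileName" property, and the dto.FileName registration are illustrative assumptions, not part of the original answer.
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical initializer: attaches a per-instance identifier to every telemetry item.
public class InstanceInitializer : ITelemetryInitializer
{
    private readonly string _fileName;

    public InstanceInitializer(string fileName)
    {
        _fileName = fileName;
    }

    public void Initialize(ITelemetry telemetry)
    {
        // Only telemetry types that support custom properties get the extra dimension.
        if (telemetry is ISupportProperties props && !props.Properties.ContainsKey("FileName"))
        {
            props.Properties["FileName"] = _fileName;
        }
    }
}
// Registration (assuming dto.FileName is known when the TelemetryConfiguration is built):
// configuration.TelemetryInitializers.Add(new InstanceInitializer(dto.FileName));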

Azure WebJobs and ApplicationInsights

I have the following code:
public static void Main()
{
    _servicesBusConnectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
    _namespaceManager = NamespaceManager.CreateFromConnectionString(_servicesBusConnectionString);
    _applicationInsightsInstrumentationKey = ConfigurationManager.AppSettings["appInsightsInstrumentationKey"];

    JobHostConfiguration config = new JobHostConfiguration();
    ServiceBusConfiguration serviceBusConfig = new ServiceBusConfiguration
    {
        ConnectionString = _servicesBusConnectionString
    };
    config.UseServiceBus(serviceBusConfig);

    config.LoggerFactory = new LoggerFactory()
        .AddApplicationInsights(_applicationInsightsInstrumentationKey, null)
        .AddConsole();
    config.Tracing.ConsoleLevel = TraceLevel.Off;

    var host = new JobHost(config);
    host.RunAndBlock();
}
and in the function:
public static async Task ProcessMessages([ServiceBusTrigger(ServiceBusQueueNames.SomeQueueName)] BrokeredMessage brokeredMessage, TextWriter log)
{
    try
    {
        _log = log;
        _log.WriteLine("WebJob started processing of a message");
        await _log.FlushAsync();
    }
It logs the message to the console, but not to Application Insights.
The instrumentation key is set properly.
Can you please tell me why it does not log to Application Insights?
I've written this code according to https://github.com/Azure/azure-webjobs-sdk/wiki/Application-Insights-Integration
First thing to check: is your instrumentation key valid at runtime?
Then, if the ikey is what you expect it to be, is it for the resource you expect (you aren't looking at a dev Application Insights instance while the telemetry is going to a prod instance)?
If you debug this locally, do you see the Application Insights assemblies getting loaded?
Do you see console output from Application Insights with the debugger running?
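A quick way to answer the first check is a small sketch like the one below (it assumes the same appInsightsInstrumentationKey app setting used in the question, plus using System and System.Configuration; the console output is only for local debugging):
// Sketch: confirm the instrumentation key is actually present at runtime.
var ikey = ConfigurationManager.AppSettings["appInsightsInstrumentationKey"];
Console.WriteLine(string.IsNullOrWhiteSpace(ikey)
    ? "Instrumentation key is MISSING at runtime."
    : "Instrumentation key in use: " + ikey);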

How to get a notification when a WebJob is aborted in Azure

Azure WebJob: how can I be notified if it is aborted?
(1) "Always On" availability is enabled for the service.
(2) SCM_COMMAND_IDLE_TIMEOUT = 2000.
WEBJOBS_IDLE_TIMEOUT = 2000.
But as I'm new to this, can you please help me with where I can put this logic?
You could add the logic in the Functions.cs file. For more information, refer to the detailed steps below.
Steps:
1. Follow the official document to create a WebJob project.
2. Add Functions.cs to the project:
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions;
using SendGrid;

public class Functions
{
    // Demo WebJob trigger
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }

    // Error monitor: fires when 10 errors occur within a 30-minute sliding window,
    // throttled to at most one notification per hour.
    public static void ErrorMonitor([ErrorTrigger("0:30:00", 10, Throttle = "1:00:00")] TraceFilter filter, [SendGrid] SendGridMessage message)
    {
        message.Subject = "WebJobs Error Alert";
        message.Text = filter.GetDetailedMessage(5);
    }
}
3. To use ErrorTrigger and SendGrid, we need to configure them in the Program.cs file:
static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    config.UseCore();
    config.UseSendGrid(new SendGridConfiguration
    {
        ApiKey = "xxxxx",
        FromAddress = new Email("emailaddress", "name"),
        ToAddress = new Email("emailaddress", "name")
    });
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
4. To test it locally, we need to add the storage connection string to the connection strings in the configuration file:
<connectionStrings>
  <add name="AzureWebJobsStorage" connectionString="{storage connection string}" />
</connectionStrings>

ETW events in Azure diagnostics (SDK 2.5) are logged with incorrect / missing schema

I upgraded to Azure SDK 2.5 and switched to semantic logging with EventSources.
Logging works locally with a custom EventListener.
When deployed, logs are written to a storage table, but only the EventId, Pid, Tid, etc. are populated; the really interesting fields (Message, Task, Keyword, Opcode) are left blank.
The diagnostics infrastructure log is full of errors with regard to ETW, but I don't know what to make of them:
Failed to load backup EventSource manifest file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf. something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.backup.xml;
EventSource events will be logged without a proper schema until provider sends the manifest packets
Load manifest file failed for C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.xml
Failed to manage manifest version for file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Pid_3436.xml
Failed to process EventSource manifest event GUID:fe06b63d-39aa-5419-0529-18c4dacf4f68, event id:0xFFFE
Change in the number of events lost since the last sample: EventsCaptured=2 EventsLogged=1 EventsLost=0
I do not use a manifest file and specify the EventSource via class / attribute name:
<EtwEventSourceProviderConfiguration scheduledTransferPeriod="PT3M" scheduledTransferLogLevelFilter="Information" provider="something.Core">
  <DefaultEvents eventDestination="CoreEvents" />
</EtwEventSourceProviderConfiguration>
I must be missing something, but I do not know what.
The remaining diagnostic services all work (infrastructure logs, performance counter etc.).
The EventId that is being logged is the correct one, but all the important information of the log is missing, presumably because of an incomplete configuration?
Edit: here is my EventSource code. I won't post the entire thing because it's quite large. I use another type that calls the EventSource methods and handles formatting of parameters (if the source is enabled at that level). Most method arguments are of type string; no objects or other complex types are passed around (the wrapper type handles that).
[EventSource(Name = "something.Core")]
public sealed class CoreEventSource : EventSource {
    private static readonly CoreEventSource SoleInstance = new CoreEventSource();

    static CoreEventSource() {}
    private CoreEventSource() {}

    public static CoreEventSource Instance {
        get { return SoleInstance; }
    }

    public static EventKeywords AllKeywords = (EventKeywords)(-1);

    public class Keywords {
        public const EventKeywords None = (EventKeywords)(1 << 1);
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 2);
        [...]
    }

    public class Tasks {
        public const EventTask None = EventTask.None;
        // generic operations
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
        public const EventTask Delete = (EventTask)13;
        public const EventTask Get = (EventTask)14;
        public const EventTask Put = (EventTask)15;
        public const EventTask Remove = (EventTask)16;
        public const EventTask Process = (EventTask)17;
    }

    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void CriticalInitializationFailure(string component, string details, string exception) {
        this.WriteEvent(1, component, details, exception);
    }

    [Event(2, Message = "[Role '{0}'] Startup: {1}", Level = EventLevel.Informational, Keywords = Keywords.Infrastructure)]
    public void RoleStartup(string roleName, string message) {
        this.WriteEvent(2, roleName, message);
    }

    [Event(3, Message = "[Role '{0}'] Stop failed: {1}.", Level = EventLevel.Error, Keywords = Keywords.Infrastructure)]
    public void RoleStopFailed(string roleName, string details, string exception) {
        this.WriteEvent(3, roleName, details, exception);
    }

    [Event(4, Message = "An unhandled exception occurred.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnhandledException(string exception) {
        this.WriteEvent(4, exception);
    }

    [Event(5, Message = "An unobserved exception occurred in a faulted task.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnobservedTaskException(string exception) {
        this.WriteEvent(5, exception);
    }

    [...]
}
Turns out there were quite a few problems with my EventSource. The first thing I'd recommend to anyone working with ETW is to use the Microsoft TraceEvent Library from NuGet, even if you use System.Diagnostics.Tracing, because it comes with a tool that will verify your EventSource code and notify you about problems.
I had to fix the following:
- EventSource names must not contain a period (.)
- Task/Opcode pairs must be unique within an EventSource
- One must not declare a None field in a custom Keywords or Tasks enumeration
Hope this is of some use to anyone who encounters a similar problem.
Another thing to take care of (which fixed our case):
- EventSources should only have a Name or a Guid, not both.
In our case, having both caused:
- the EtwEventSourceProvider to not log anything, and
- the EtwEventManifestProvider to log the same way you outlined, with empty data points.
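Taken together, a minimal sketch of what those rules imply for the EventSource above; the renamed source "Something-Core" and the keyword value are illustrative, and only the points called out in the two answers are changed:
using System.Diagnostics.Tracing;

// Sketch only: a trimmed-down EventSource with the reported problems removed.
[EventSource(Name = "Something-Core")] // no '.' in the name, and no explicit Guid alongside it
public sealed class CoreEventSource : EventSource
{
    public static readonly CoreEventSource Instance = new CoreEventSource();
    private CoreEventSource() { }

    public class Keywords
    {
        // no "None" member in the custom Keywords/Tasks classes
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 1);
    }

    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void CriticalInitializationFailure(string component, string details, string exception)
    {
        this.WriteEvent(1, component, details, exception);
    }
}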

How do we integrate Elmah logging in ServiceStack

I am new to ServiceStack and Elmah logging.
Can anybody suggest how to integrate Elmah in ServiceStack applications?
Thank you...
If you have an existing logging solution then you can use the ServiceStack.Logging.Elmah project. It is available via NuGet.
Exceptions, errors and fatal calls will be logged to Elmah in addition to the originally intended logger. For all other log types, only the original logger is used.
So if you are already using Log4Net, you can configure Elmah like this:
ElmahLogFactory factory = new ElmahLogFactory(new Log4NetFactory());
If you don't want to wrap it over an existing logger, you can just research adding Elmah to any ASP.NET website. There is no reason it wouldn't work just because you are using ServiceStack.
using System.Web;
using Elmah;
using Funq;
using ServiceStack.Logging;
using ServiceStack.Logging.Elmah;
using ServiceStack.Logging.NLogger;

public class AppHost : AppHostBase
{
    public AppHost()
        : base(
            "description",
            typeof(MyService).Assembly)
    {
        LogManager.LogFactory = new ElmahLogFactory(new NLogFactory());
    }

    public override void Configure(Container container)
    {
        this.ServiceExceptionHandler += (request, exception) =>
        {
            // log your exceptions here
            HttpContext context = HttpContext.Current;
            ErrorLog.GetDefault(context).Log(new Error(exception, context));

            // call default exception handler or prepare your own custom response
            return DtoUtils.HandleException(this, request, exception);
        };

        // rest of your config
    }
}
Now your ServiceStack errors appear in Elmah (assuming you've set up web.config etc.).
Actually, kampsj's answer is better than Gavin's, as Gavin's causes double-logging to Elmah by calling the Elmah logger explicitly and then the default ServiceStack error handling, which itself already does the logging.
So really all you need is this (assuming you want to wrap NLog with Elmah):
public class YourAppHost : AppHostBase
{
    // Tell ServiceStack the name and where to find your web services
    public YourAppHost()
        : base("YourAppName", typeof(YourService).Assembly)
    {
        LogManager.LogFactory = new ElmahLogFactory(new NLogFactory());
    }

    //...just normal stuff...
}
You could just have this above:
ElmahLogFactory factory = new ElmahLogFactory();
...but you probably should wrap another type of logger for non-error logging, like Debug and Warn.
See this section on configuring Elmah and the Logging.Elmah UseCase for a working example of ServiceStack and Elmah configured together.
The ElmahLogFactory can be configured in your Global.asax before initializing the ServiceStack AppHost, e.g.:
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        var debugMessagesLog = new ConsoleLogFactory();
        LogManager.LogFactory = new ElmahLogFactory(debugMessagesLog, this);

        new AppHost().Init();
    }
}
