Capturing server event logs in Seq

I'm looking at installing Seq to view all my application exceptions in one place. This is easy to do for exceptions in my C# code, by using Serilog. However, I would like to send server event logs to Seq as well (that is, the events that show up in Event Viewer). How can I do that?

At present, there's no Event Log → Serilog or Event Log → Seq bridge that I'm aware of.
If it's an option for you, you can tail the event logs in C# using EventLog.EntryWritten and pipe the entries through Serilog; a sketch follows.
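A minimal sketch of that bridge, assuming the Serilog and Serilog.Sinks.Seq packages; the Seq address and the severity mapping are illustrative:

using System;
using System.Diagnostics;
using Serilog;
using Serilog.Events;

class EventLogToSeqBridge
{
    static void Main()
    {
        var log = new LoggerConfiguration()
            .WriteTo.Seq("http://localhost:5341") // illustrative Seq address
            .CreateLogger();

        var eventLog = new EventLog("Application");
        eventLog.EntryWritten += (sender, e) =>
        {
            // Map the Event Log severity onto a rough Serilog equivalent.
            var level = e.Entry.EntryType == EventLogEntryType.Error ? LogEventLevel.Error
                      : e.Entry.EntryType == EventLogEntryType.Warning ? LogEventLevel.Warning
                      : LogEventLevel.Information;

            log.Write(level, "{Source}: {Message}", e.Entry.Source, e.Entry.Message);
        };
        eventLog.EnableRaisingEvents = true; // raise EntryWritten for new entries

        Console.ReadLine(); // keep the process alive while tailing
    }
}

Note that EntryWritten only fires for entries written on the local machine.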


Azure Service Bus message receiver - loop vs stream

I'm working on standing up the Azure Service Bus messaging infrastructure for my team, and I'm trying to establish best practices for developing Service Bus message receivers. We are standing up a new service to consume the Service Bus messages; the start up script will instantiate the message receivers and start their message reception.
The pattern I'm setting up for my team is to extend a base receiver class and implement an abstract function that starts the message receiver in streaming fashion.
Are there any notable differences between receiving messages with ServiceBusReceiver::subscribe vs ServiceBusReceiver::receiveMessages (stream vs loop)? I'm suggesting that my team use ServiceBusReceiver::subscribe, since it registers the receiver indefinitely and seems to handle errors more gracefully.
I've noticed two differences between the stream and loop approaches:
ServiceBusReceiver::receiveMessages is asynchronous, so my startup script would need Promise.all or Promise.allSettled to start the receivers in parallel. Error handling in the loop approach is also limited: if a receiver hits an error, it halts message processing, so our team would have to restart the service whenever any receiver errors, which is a con for us.
The streaming method is synchronous, so my startup script can register the subscriptions, save the return values, and close the subscriptions on shutdown.
One more thing: if I refer to the object's properties via this in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose the object's this context?
Thanks in advance
Both ways of receiving work just fine with the Service Bus JS SDK, but streaming is definitely the intended way of receiving messages.
receiveMessages (the loop) is more for the convenience of users who just want to receive messages simply, without dealing with callbacks, handlers, etc.
Internally, receiveMessages also uses streaming to receive the messages, and it waits for the given duration before returning the array of messages.
Hope that clarifies your doubts.
If I refer to the object's properties via this in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose the object's this context?
You can perhaps use arrow functions. For reference, please check this part of an unrelated subscribe test...
https://github.com/Azure/azure-sdk-for-js/blob/d417e93b53450b2660c34965ffa177f3d4d2f947/sdk/servicebus/perf-tests/service-bus/test/subscribe.spec.ts#L72

Is it possible to view custom ETW events, raised in C# with EventSource, in PerfMon in real time?

I want to raise ETW events from inside a server application to monitor performance. I would like to consume these events in PerfMon or a similar tool so as to view them graphically. Is this possible? (PerfView is not available in my work environment, and in any case does not display events graphically.)
I can raise events easily enough; I've been using the sample from Ben Watson's book "Writing High-Performance .NET Code", but I have been unable to view these events in PerfMon.exe when adding a new data collector set.
I added code to the sample to create an event source:
if (!EventLog.SourceExists("EtlDemo"))
{
    // Registers a Windows Event Log source named "EtlDemo" in the EtlDemoLog log
    EventLog.CreateEventSource("EtlDemo", "EtlDemoLog");
}
I suspect something more needs to be done for the EtlDemo "event trace provider" to be visible to PerfMon (and probably Windows Performance Analyzer), but documentation seems sparse. Any ideas?
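Worth noting: EventLog.CreateEventSource registers a Windows Event Log source, not an ETW provider. An ETW provider is declared with System.Diagnostics.Tracing.EventSource, along the lines of this sketch (the class and event names are illustrative, not the book's sample):

using System.Diagnostics.Tracing;

[EventSource(Name = "EtlDemo")]
public sealed class EtlDemoEventSource : EventSource
{
    public static readonly EtlDemoEventSource Log = new EtlDemoEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void RequestStarted(string url)
    {
        WriteEvent(1, url); // emits ETW event id 1 with the url as payload
    }
}

// Raising an event from application code:
// EtlDemoEventSource.Log.RequestStarted("/orders");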

Azure Functions notification on failure

I have timer-triggered Azure functions running in production, but now I want to be notified if the function fails.
In my case, access to various connected services can cause crashes, and there are many to troubleshoot. The crash is the type of error I need notification for.
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
I know that blob and queue bindings, for instance, support the creation of poison queue entries, but the timer trigger binding documentation doesn't mention any trigger outputs of that nature.
I see that functions can pass their $return status as input to other functions, but that operation is not explained in depth in the docs. Also, in that case, I would need to write another function to process the error status, and I was looking for something built-in.
I have inquired with #AzureSupport on this, but their answer had nothing to do with Azure Functions; they referred me to DLL notification hooks and then recommended I file on UserVoice.
I'm sure there must be people here who have implemented some sort of error status notification. I prefer a solution that doesn't require code.
The recommended way to monitor and alert on failures is to use Application Insights, which now integrates fully with Azure Functions:
https://blogs.msdn.microsoft.com/appserviceteam/2017/04/06/azure-functions-application-insights/
Since all the logs are available in App Insights, it's easy to monitor for failures and set up alerts based on your own criteria.
However, if you only care about alerting and not things like monitoring, you could use Azure Monitor instead: https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
...
I prefer a solution that doesn't require code.
This is a zero-code solution:
I poked #AzureFunctions once before on this topic, and a suggested response was to use Application Insights. It can handle alerts upon failure and can also use webhooks.
See the Azure Functions App-Insights documentation on how to link your function app to App Insights. Then set up any alerts you want.
Unfortunately this hook doesn't exist.
Can you switch from a timer trigger to a queue trigger?
You can get retries (if you want them), and after the specified number of attempts the message is sent to a poison queue.
To schedule executions, you can add queue messages with a visibility timeout that matches your schedule; see the sketch below.
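For illustration, a minimal sketch of that scheduling idea using the current Azure.Storage.Queues client (a newer SDK than this answer originally assumed; the connection string and queue name are placeholders):

using System;
using Azure.Storage.Queues;

class NightlyScheduler
{
    static void Main()
    {
        var queue = new QueueClient("<storage-connection-string>", "scheduled-work");
        // The message stays invisible for six hours, then appears on the queue
        // and fires the queue-triggered function. Failed attempts are retried,
        // and after the configured retry count the message lands in the poison queue.
        queue.SendMessage("run-nightly-job", visibilityTimeout: TimeSpan.FromHours(6));
    }
}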
In order to get alerts on failure you have two options:
A timer trigger that scans the execution logs (via SFTP) for failures.
Wrap the whole function in a try/catch block, and in the catch block write a few lines to send yourself an email with the error details (see the sketch below).
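A rough sketch of that second option, assuming the in-process C# model; the schedule, SMTP host, and addresses are placeholders:

using System;
using System.Net.Mail;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyJob
{
    [FunctionName("NightlyJob")]
    public static void Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer, ILogger log)
    {
        try
        {
            // ... the function's real work goes here ...
        }
        catch (Exception ex)
        {
            // Email the error details, then rethrow so the run is still recorded as failed.
            log.LogError(ex, "NightlyJob failed");
            using (var smtp = new SmtpClient("smtp.example.com"))
            {
                smtp.Send("alerts@example.com", "me@example.com",
                    "NightlyJob failed", ex.ToString());
            }
            throw;
        }
    }
}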
Hope this helps.
No code:
Go to your Azure account.
From the menu, select Monitor.
Then select Add New Rule.
Then select your condition and action, and add the alert details.

How to log - the 12 factor application way

I want to know the best practice behind logging my node application. I was reading the 12 factor app guidelines at https://12factor.net/logs and it states that logs should always be sent to stdout. Cool, but then how would someone manage logs in production? Is there an application that scoops up whatever is sent to stdout? In addition, is it recommended that I log only to stdout and not stderr? I would appreciate a perspective on this matter.
Is there an application that scoops up whatever is sent to stdout?
The page you linked to provides some examples of log management tools, but the simplest version of this would be just redirecting the output of your application to a file. So in bash: node app.js > app.out. You could also split your stdout and stderr, like node app.js 2> app.err 1> app.out.
You could additionally have some sort of service that collects the logs from this file and then indexes them for searching somewhere else.
The idea behind the suggestion to only log to stdout is to let the environment control what to do with the logs because the application doesn't necessarily know the environment that it will eventually run within. Furthermore, by treating all logs as an event stream, you leave the choice of what to do with this stream up to the environment. You may want to send the log stream directly to a log aggregation service for instance, or you may want to first preprocess it, and then stream the result somewhere else. If you mandate a specific output such as logging to a file, you reduce the portability of your service.
Two of the primary goals of the 12 factor guidelines are to be "suitable for deployment on modern cloud platforms" and to offer "maximum portability between execution environments". On a cloud platform where you might have ephemeral storage on your instance, or many instances running the same service, you'd want to aggregate your logs into some central store. By providing a log stream, you leave it up to the environment to coordinate how to do this. If you put them directly into a file, then you would have to tailor your environment to wherever each application has decided to put the logs in order to then redirect them to the central store. Using stdout for logs is thus primarily a useful convention.
I think it's a mistake to categorically say "[web] applications should write logs to stdout".
Rather, I would suggest:
a) Professional-quality, robust web apps should HAVE logs
b) The application should treat the "log" as an abstract, "stream" object
c) Ideally, the logger implementation MAY be configured to write to stdout, to stderr, to a file, to a date-stamped file, to a rotating file, filter by severity level, etc. etc. as appropriate.
I would strongly argue that hard-coded writes to stdout, without any intervening "logger" abstraction, are POOR practice.
Here is a good article:
https://blog.risingstack.com/node-js-logging-tutorial/
Cool, but then how would someone manage logs in production?
The log sink is what you're looking for.
Is there an application that scoops up whatever is sent to stdout?
Yes and no. It's the log ship (or log router). It could be an application, but it's really just some process within the execution or runtime environment that your app doesn't really know about.
Another way to look at this is separation of concern. As it was stated in a different answer, it's about letting the environment own what happens to the log and only expecting the application to concern itself with emitting log events at all. I think what's missing from the 12FA documentation is that they don't try to complete the puzzle for you because there will be different opinions on where to go from stdout, so I'll help by adding in those missing pieces based on my personal experience and what I'm seeing all over the cloud space.
Logger sends log event to log stream (aka 'the log')
It goes without saying that your application should have some sort of "logger" abstraction, but that's really just an entry point for emitting a log event to stdout. That abstraction's responsibility is to get your log event onto the log stream (stdout) in the desired format and then your application's responsibility is done. In fact, the 12FA documentation ends here.
The 12 Factor App is about creating cloud-friendly, portable applications, so you have to assume you don't know what the executing/runtime environment even is; not knowing "the environment" is the whole point. From here, it is the responsibility of the executing/runtime environment to process the stream and move it to the sink.
Log ship/router realizes log stream to log sink
So the way we solve for this now is to have some sort of listener for the stdout stream that will take the output and send it downstream to the log sink.
The "ship" (also known as the log router or scraper) might be something in the environment or the runtime, or honestly it could be something running the background of your application (a stream listener); it could be some other custom process; it could be even be Kafka -- I think GCP uses fluentd to scoop up logs from various sources and put them in stackdriver. The point is that it should be a separate "class" in your application that your application doesn't really know about. It just listens to the stream and sends it to the sink. In some solutions, this is something you need to build, in other solutions, it's handled by your platform. Put simply "how do I get the stream to the sink?"
The "sink" is the destination. This can be the console (hello it's literally a stream reader), it can be a file, it can be Splunk, Application Insights, Stack Driver, etc. There are simple solutions and there are larger more complex enterprise solutions, but the concept stays the same.
So, in short, if we're writing to stdout, this is the answer to your question of "how do we manage logs in production": it's the log sink or log aggregator that you're looking for. In 12FA vernacular, something like Splunk isn't the "log". The log is the stream itself (stdout). In terms of 12FA, your application doesn't know what the sink is, and ideally it shouldn't, because that sink could change, in which case all of your applications would break; or there could be many different sinks, and that could bog your application down, particularly if you're writing straight to the sinks instead of stdout first. It's just another decoupling exercise if nothing else.
You can send to a single sink, multiple sinks at once, or you can send to a single sink and have some other component 'ship' your logs from that sink to another (e.g. write to a rolling file and have a router scrape that into splunk). Just depends on your needs.
You can actually see this popping up more and more in cloud providers by default. For example, on GCP, all logs to stdout automatically get picked up and sent to Stackdriver. In Azure, so long as you add the instrumentation to your .NET application (the application diagnostics package), it will emit events to stdout and they'll get picked up by Azure Monitor. There are also more and more packages out there that are beginning to implement this pattern, so in .NET you could use Serilog to abstract most of these concepts.
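As a small illustration of the pattern in .NET, a sketch assuming the Serilog and Serilog.Sinks.Console packages:

using Serilog;

class Program
{
    static void Main()
    {
        // The app only writes structured events to the log stream (stdout);
        // shipping them to a sink is the environment's job.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        Log.Information("Processed order {OrderId}", 42); // a single log event
        Log.CloseAndFlush();
    }
}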
Logger -> Log Event -> Log [stream] (stdout) -> Sink -> Your eyeballs
Logger: The thing you use to emit the log, typically an abstraction (e.g. Serilog, NLog, Log4net)
Log Event: The individual log itself
Log Stream (or 'the log'): stdout; the unbuffered, time-ordered aggregation of all events, with no beginning or end.
Log Ship/Router: The transport that sends the stream to one or more sinks. (e.g. in process like log4net, out of process like fluentd)
Log Sink: The thing that you're actually looking at like a console, file, or index/search engine, or analytics/monitoring platform (e.g. splunk, datadog, appinsights, stackdriver, etc.)
There are packages and platforms that provide one or more of these pieces, but all of those pieces are always there. It makes 12FA logging make more sense when you're aware of them.

Something making NServiceBus lose messages

I have an NServiceBus configuration that is working great on developers machines and in my Development Environment.
However, when I move it to my Test Environment my messages just start getting tossed.
Here is the system:
An app gets a TCP message from a Mainframe system and sends it to a MSMQ (call it FromMainframe).
An application hosted in IIS has a "Handle" method for that MSMQ and processes the messages from the mainframe.
In my Test Environment, step two only halfway happens. The message is popped off the MSMQ, but not processed by my application.
Effectively my data is LOST! NServiceBus removes the messages from the queue but I never get to process them. They are not even in the error queue!
These are the things I have tried in an attempt to figure out what is happening:
Check the Config files
Attach a remote debugger to the process to see what the Handle method is doing
The Handle method is never called (but when I attach to the Development Environment my breakpoint in my Handle method is hit and it all works flawlessly).
Redeploy my Dev version to the Test Environment and try step 2 again (just in case the versions were not exactly the same).
Check the Config files again
Check that the Error queue is not filling up
The error queue stays empty (I wish it would fill up, then my data would not be LOST).
Check for any other process that may be pulling stuff from my MSMQs
I turned off my IIS website and the messages in the FromMainframe queue start to back up.
When I turn it back on, the messages disappear fairly fast (but still not all at once). The speed at which they disappear is too fast for them to be processed by my Handle method.
Check Config files yet again.
Run the NServiceBusTools\MsmqUtils\Runner.exe \i
I ran it, rebooted, ran it again and again for good measure!
Check the Configs again (I must have missed SOMETHING right?)
Check the Development Environment Configs are not pointing to the Test Environment
I don't think it is possible to use another computer's MSMQ as your input queue, but it does not hurt to check.
Look for any catch blocks that could be silently killing my message.
One last check of the Config files.
Recreate my Test Environment on another machine (it worked flawlessly)
Run my stuff outside of IIS.
When I host outside of IIS (using NServiceBus.Host.exe) it all works fine. So it has to be an IIS thing, right?
Go crazy and hope that Stack Overflow can offer any kind of insight.
So I know enough about what happened to throw out an "Answer".
When I set up my NServiceBus self-hosting, I had a call that loaded the message handlers:
NServiceBus.Configure.With().LoadMessageHandlers()
(There are more configurations, but I omitted them for brevity)
When you call this, NServiceBus scans the assemblies for a class that implements IHandleMessages<T>.
So, somehow, on my Test Environment machine, the NServiceBus scan of the directory for a class that implements IHandleMessages was failing to find my class (even though the assembly was absolutely there).
Turns out that if NServiceBus does not find something that handles a message it will THROW IT AWAY!!!
This is a total design bug in my opinion. The whole idea of NServiceBus is to not lose your data, but in this case it does just that!
Now, once you know about this pitfall, there are several ways around it.
Expressly state what your handler(s) should be:
NServiceBus.Configure.With().LoadMessageHandlers<First<MyMessageType>>()
Even further protection is to add another handler that will handle "everything else". IMessage is the base for all message payloads, so if you put a handler on it, it will pick up everything.
If you set the IMessage handler to run after your messages get handled, it will handle everything that NServiceBus can't find a handler for. If you throw an exception in that Handle method, that will cause NServiceBus to move the message to the error queue. (What I think should be the default behavior.) A sketch of the idea follows.
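A hedged sketch of that catch-all handler, in the classic IHandleMessages style this era of NServiceBus used. The class name is illustrative, and it assumes the specific handlers run first and stop further dispatch (e.g. via Bus.DoNotContinueDispatchingCurrentMessageToHandlers()), so this only fires for unclaimed messages:

using System;
using NServiceBus;

public class CatchAllHandler : IHandleMessages<IMessage>
{
    public void Handle(IMessage message)
    {
        // Reaching this handler means no specific handler claimed the message;
        // throwing makes NServiceBus move it to the error queue instead of
        // silently discarding it.
        throw new InvalidOperationException(
            "No specific handler found for " + message.GetType().FullName);
    }
}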
