I'm running Seq on an Azure instance (Windows Server 2012 R2 Datacenter) and logging with Serilog from a console application on my local workstation. I have three sinks configured: File, Console, and Seq. I'm also running on dnxcore50 (just in case you were thinking my setup wasn't dodgy enough). All my events show up in the console and the file 100% of the time. Seq, however, captures events in only about one run in five: a run will either capture all of its events or none of them. I am using the free single-user license to evaluate Seq, and I haven't found anything to suggest that license has limitations that would cause this behavior.
I've set SelfLog.Out on the loggers, but it logs nothing at all beyond the test line I added to confirm that the self-log could at least write to the specified file.
public abstract class SerilogBaseLogger<T> : SerilogTarget
{
    protected SerilogBaseLogger(SerilogOptions options, string loggerType)
    {
        LoggerConfiguration = new LoggerConfiguration();
        Options = options;
        LevelSwitch.MinimumLevel = options.LogLevel.ToSerilogLevel();
        var file = File.CreateText(Path.Combine(options.LogFilePath, string.Format("SelfLog - {0}.txt", loggerType)));
        file.AutoFlush = true;
        SelfLog.Out = file;
        SelfLog.WriteLine("Testing self log.");
    }
    // ... snip ....
}

public class SerilogSeqTarget<T> : SerilogBaseLogger<T>
{
    public string Server => Options.Seq.Server;

    public SerilogSeqTarget(SerilogOptions options)
        : base(options, string.Format("SerilogSeqTarget[{0}]", typeof(T)))
    {
        LoggerConfiguration
            .WriteTo
            .Seq(Server);
        InitializeLogger();
    }

    public override string ToString()
    {
        return string.Format("SerilogSeqTarget<{0}>", typeof(T).Name);
    }
}

public class SerilogLoggerFactory<TType> : LoggerFactory<SerilogTarget>
{
    // .... snip ....
    public override IMyLoggerInterface GetLogger()
    {
        var myLogger = new SerilogDefaultLogger()
            .AddTarget(SerilogTargetFactory<TType>.GetFileTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetConsoleTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetSeqTarget(Options));
        myLogger.Info("Logger initialized with {#options} and targets: {targets}", Options, ((SerilogDefaultLogger)myLogger).Targets);
        return myLogger;
    }
}

public class SerilogTargetFactory<TType>
{
    // ... snip ...
    public static SerilogTarget GetSeqTarget(SerilogOptions options)
    {
        return !string.IsNullOrEmpty(options.Seq.Server)
            ? new SerilogSeqTarget<TType>(options)
            : null;
    }
}
Any suggestions? Is this just a side effect of being on the bleeding edge, working with pre-release everything? (Although in that case I'd expect things to fail more consistently.)
When targeting dnxcore50/CoreCLR, Serilog can't hook into AppDomain shutdown to guarantee that any buffered messages are always flushed. (AppDomain doesn't exist in that profile :-)
There are a couple of options:
Dispose the loggers (especially the Seq one) on shutdown:
(logger as IDisposable)?.Dispose();
Use the static Log class and call its CloseAndFlush() method on shutdown:
Log.CloseAndFlush();
The latter - using the static Log class instead of the various individual ILogger instances - is probably the quickest and easiest to get going with, but it has exactly the same effect as disposing the loggers so either approach should do it.
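For example, in a console app's Main the pattern might look like this (a minimal sketch; the Seq URL and configuration details are placeholders, not the poster's actual setup):

public static class Program
{
    public static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Seq("http://my-seq-server:5341") // placeholder URL
            .CreateLogger();
        try
        {
            Log.Information("Doing work");
            // ... application code ...
        }
        finally
        {
            // Posts any events still buffered in the Seq sink before the process exits.
            Log.CloseAndFlush();
        }
    }
}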
I upgraded to Azure SDK 2.5 and switched to semantic logging with EventSources.
Logging works locally with a custom EventListener.
When deployed, logs are written to a storage table, but only the EventId, Pid, Tid etc. are populated, the really interesting fields (Message, Task, Keyword, Opcode) are left blank.
The diagnostics infrastructure log is full of errors with regards to ETW, but I don't know what to make of them:
Failed to load backup EventSource manifest file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.backup.xml; EventSource events will be logged without a proper schema until provider sends the manifest packets
Load manifest file failed for C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.xml
Failed to manage manifest version for file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Pid_3436.xml
Failed to process EventSource manifest event GUID:fe06b63d-39aa-5419-0529-18c4dacf4f68, event id:0xFFFE
Change in the number of events lost since the last sample: EventsCaptured=2 EventsLogged=1 EventsLost=0
I do not use a manifest file; I specify the EventSource via class/attribute name:
<EtwEventSourceProviderConfiguration scheduledTransferPeriod="PT3M" scheduledTransferLogLevelFilter="Information" provider="something.Core">
<DefaultEvents eventDestination="CoreEvents" />
</EtwEventSourceProviderConfiguration>
I must be missing something, but I do not know what.
The remaining diagnostic services all work (infrastructure logs, performance counter etc.).
The EventId that is being logged is the correct one, but all the important information of the log is missing, I suppose because of an incomplete configuration?
Edit: here is my EventSource code. I won't post the entire thing because it's quite large. Another type calls the EventSource methods and handles formatting of the parameters (when the source is enabled at that level). Most method arguments are strings; no objects or other complex types are passed around (the wrapper type takes care of that).
[EventSource(Name = "something.Core")]
public sealed class CoreEventSource : EventSource
{
    private static readonly CoreEventSource SoleInstance = new CoreEventSource();

    static CoreEventSource() {}
    private CoreEventSource() {}

    public static CoreEventSource Instance
    {
        get { return SoleInstance; }
    }

    public static EventKeywords AllKeywords = (EventKeywords)(-1);

    public class Keywords
    {
        public const EventKeywords None = (EventKeywords)(1 << 1);
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 2);
        [...]
    }

    public class Tasks
    {
        public const EventTask None = EventTask.None;
        // generic operations
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
        public const EventTask Delete = (EventTask)13;
        public const EventTask Get = (EventTask)14;
        public const EventTask Put = (EventTask)15;
        public const EventTask Remove = (EventTask)16;
        public const EventTask Process = (EventTask)17;
    }

    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void CriticalInitializationFailure(string component, string details, string exception)
    {
        this.WriteEvent(1, component, details, exception);
    }

    [Event(2, Message = "[Role '{0}'] Startup: {1}", Level = EventLevel.Informational, Keywords = Keywords.Infrastructure)]
    public void RoleStartup(string roleName, string message)
    {
        this.WriteEvent(2, roleName, message);
    }

    [Event(3, Message = "[Role '{0}'] Stop failed: {1}.", Level = EventLevel.Error, Keywords = Keywords.Infrastructure)]
    public void RoleStopFailed(string roleName, string details, string exception)
    {
        this.WriteEvent(3, roleName, details, exception);
    }

    [Event(4, Message = "An unhandled exception occurred.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnhandledException(string exception)
    {
        this.WriteEvent(4, exception);
    }

    [Event(5, Message = "An unobserved exception occurred in a faulted task.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnobservedTaskException(string exception)
    {
        this.WriteEvent(5, exception);
    }

    [...]
}
Turns out there were quite a few problems with my EventSource. The first thing I'd recommend to anyone working with ETW is to use the Microsoft TraceEvent Library from NuGet, even if you use System.Diagnostics.Tracing, because it comes with a tool that will verify your EventSource code and notify you about problems.
I had to fix the following:
EventSource names must not contain a period (.)
Task/Opcode pairs must be unique within an EventSource
One must not declare a None field in a custom Keywords or Tasks enumeration
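For illustration, here is a sketch of the source above with those three fixes applied (the hyphenated name, keyword value, and task assignment are illustrative assumptions, not the poster's exact fix):

[EventSource(Name = "Something-Core")] // fix 1: no period in the name
public sealed class CoreEventSource : EventSource
{
    public class Keywords
    {
        // fix 3: no None field; start with the first real keyword bit
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 0);
    }

    public class Tasks
    {
        // fix 3: no None field here either
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
    }

    // fix 2: each event gets its own Task so Task/Opcode pairs stay unique
    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical,
        Keywords = Keywords.Infrastructure, Task = Tasks.Create)]
    public void CriticalInitializationFailure(string component, string details, string exception)
    {
        WriteEvent(1, component, details, exception);
    }
}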
Hope this is of some use to anyone who encounters a similar problem.
Another thing that should be taken care of (this is what fixed our case): EventSources should have only a Name or a Guid, never both.
In our case, having both caused:
- the EtwEventSourceProvider to not log anything, and
- the EtwEventManifestProvider to log the same way you outlined, with empty data points.
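In other words (an illustrative sketch):

// OK: name only; ETW derives the provider GUID from the name.
[EventSource(Name = "Something-Core")]
public sealed class CoreEventSource : EventSource { /* ... */ }

// Not OK: specifying both a Name and a Guid produced the behavior above.
// [EventSource(Name = "Something-Core", Guid = "...")]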
Rather than roll a log after a date/time or specified maximum size, I want to be able to call a method like "ResetLog" that copies my "log.txt" to "log.txt.1" and then clears log.txt.
I've tried to implement that by doing something like this with a FileAppender, rather than a RollingFileAppender:
var appenders = log4net.LogManager.GetRepository().GetAppenders();
foreach (var appender in appenders)
{
    var fa = appender as log4net.Appender.FileAppender;
    if (fa != null)
    {
        string logfile = fa.File;
        fa.Close();
        string new_file_path = CreateNextLogFile(logfile);
        fa.File = new_file_path;
        fa.ActivateOptions();
    }
}
The file is closed and CreateNextLogFile() renames it. I then create a new log file and point the FileAppender at it. However, ActivateOptions doesn't reconfigure the FileAppender with the desired settings the way I expected. I've looked over the log4net documentation and don't see any other public methods that would let me reopen the FileAppender after closing it. Can anyone recommend a way to implement the rollover? It would be nice if the RollingFileAppender had something like this, but I didn't see anything useful in its documentation, either.
If we look at the RollingFileAppender we can see that the mechanism for rolling over consists of closing the file, renaming existing files (optional) and opening it again:
// log4net.Appender.RollingFileAppender
protected void RollOverSize()
{
    base.CloseFile();
    // debug info removed
    this.RollOverRenameFiles(this.File);
    if (!this.m_staticLogFileName && this.m_countDirection >= 0)
    {
        this.m_curSizeRollBackups++;
    }
    this.SafeOpenFile(this.m_baseFileName, false);
}
Unfortunately the CloseFile/SafeOpenFile methods are protected, which means you cannot access them from the outside (not easily). So your best bet is to write an appender that inherits from RollingFileAppender and overrides the virtual AdjustFileBeforeAppend method, which is called before any logging event is added to the appender.
There you can decide under which conditions a roll should occur. One idea is to create a static event that your custom rolling appender subscribes to: when the event is triggered, the appender makes a note of it (rollBeforeNextAppend = true;). As soon as you try to log the next entry, the appender rolls.
public class CustomRollingAppender : RollingFileAppender
{
    public CustomRollingAppender()
    {
        MyStaticCommandCenter.RollEvent += Roll;
    }

    public void Roll()
    {
        rollBeforeNextAppend = true;
    }

    public bool rollBeforeNextAppend { get; set; }

    // AdjustFileBeforeAppend is protected virtual on RollingFileAppender,
    // so the override must be protected rather than public.
    protected override void AdjustFileBeforeAppend()
    {
        if (rollBeforeNextAppend)
        {
            CloseFile();
            RollOverRenameFiles(File);
            SafeOpenFile(File, false); // reopen (and truncate) the base log file
            rollBeforeNextAppend = false;
        }
    }
}
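MyStaticCommandCenter is not a log4net type; a minimal sketch of such a trigger might be:

// Hypothetical command center: a static event the appender subscribes to.
public static class MyStaticCommandCenter
{
    public static event Action RollEvent;

    // Call this (e.g. from your "ResetLog" method) to request a roll
    // before the next logging event is appended.
    public static void RequestRoll()
    {
        var handler = RollEvent;
        if (handler != null) handler();
    }
}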
I want to send messages back to a client via a stream. I want the client to start processing these messages as soon as possible (before the server has completed the streaming on the server side).
I have implemented IStreamWriter and I have a service which returns the IStreamWriter implementation.
public class StreamingService : Service
{
    public object Any(MyStreamRequest request)
    {
        return new MyStreamWriter(request);
    }
}
Where MyStreamRequest is defined like this:
[DataContract]
public class MyStreamRequest : IReturn<Stream>
{
    [DataMember]
    public int HowManySecondsToProduceData { get; set; }
}
When I test my implementation in a self-hosted environment it works perfectly. However, when I host this in IIS, the Get call from the client
var client = new ProtoBufServiceClient("");
Stream stream = client.Get(new MyStreamRequest { HowManySecondsToProduceData = 20 });
does not return until the IStreamWriter.WriteTo call returns (20 seconds in the sample above). This prevents my client from processing the stream right away and will also cause failures in high-volume cases. I do call responseStream.Flush() inside my IStreamWriter.WriteTo implementation.
Does anybody have any insight into why this works in the self-hosted case but not under IIS? What do I need to do differently?
It seems like a likely cause of this problem is that the ServiceStack response stream is set to use buffering. I cannot find a way to change this, though. Is it possible?
You just need to disable ASP.NET's response buffering:
public class NoBufferAttribute : RequestFilterAttribute
{
    public override void Execute(IHttpRequest req, IHttpResponse res, object requestDto)
    {
        var originalResponse = (System.Web.HttpResponse)res.OriginalResponse;
        originalResponse.BufferOutput = false;
    }
}
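The filter then presumably gets applied to the streaming service (or its request DTO); a sketch:

[NoBuffer]
public class StreamingService : Service
{
    public object Any(MyStreamRequest request)
    {
        return new MyStreamWriter(request);
    }
}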
John
I found a solution myself, and it is quite simple: call IHttpResponse.Flush() inside the IStreamWriter.WriteTo implementation whenever you want to push data to the client. You get the IHttpResponse by calling base.Response inside the Service implementation.
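Putting it together, a minimal sketch of the idea (the two-argument writer constructor and the ProduceChunks helper are assumptions for illustration, not ServiceStack API):

public class StreamingService : Service
{
    public object Any(MyStreamRequest request)
    {
        // Hand the service's IHttpResponse to the writer so it can flush.
        return new MyStreamWriter(request, base.Response);
    }
}

public class MyStreamWriter : IStreamWriter
{
    private readonly MyStreamRequest request;
    private readonly IHttpResponse response;

    public MyStreamWriter(MyStreamRequest request, IHttpResponse response)
    {
        this.request = request;
        this.response = response;
    }

    public void WriteTo(Stream responseStream)
    {
        // ProduceChunks is a hypothetical generator of payload chunks.
        foreach (byte[] chunk in ProduceChunks(request))
        {
            responseStream.Write(chunk, 0, chunk.Length);
            response.Flush(); // push the chunk through IIS's output buffer now
        }
    }
}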
Is it possible/easy to mock NLog log methods, using Rhino Mocks or similar?
Using NuGet: install-package NLog.Interface
Then: ILogger logger = new LoggerAdapter([logger-from-NLog]);
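For example, something like this should work (assuming the package's LoggerAdapter wraps a concrete NLog Logger; check the adapter's constructor for the exact signature):

ILogger logger = new LoggerAdapter(NLog.LogManager.GetCurrentClassLogger());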
You can only mock virtual methods. But if you create an interface for logging and implement it with NLog, you can use dependency injection, and in your tests use a mocked interface to verify that the system under test (SUT) logs what you expect.
public class SUT
{
    private readonly ILogger logger;

    public SUT(ILogger logger) { this.logger = logger; }

    public void MethodUnderTest()
    {
        // ...
        logger.Log("Expected log message");
        // ...
    }
}

// and in tests
var mockLogger = new MockLogger();
var sut = new SUT(mockLogger);
sut.MethodUnderTest();
Assert.That(mockLogger.LastLoggedMessage, Is.EqualTo("Expected log message"));
The simple answer is 'no'. Looking at the code, dependency injection is not supported, which seems rather an oversight, especially as it doesn't look difficult to implement (at first glance).
The only interfaces in the project are there to support COM interop objects and a few other things. The main Logger concrete class neither implements an interface nor provides virtual methods.
You could either provide an interface yourself, or use Moles, TypeMock, or another isolation framework to mock the dependency.
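If you provide the interface yourself (like the ILogger in the answer above), a Rhino Mocks version is straightforward (a sketch using the AAA syntax):

// using Rhino.Mocks;
var logger = MockRepository.GenerateMock<ILogger>();
var sut = new SUT(logger);

sut.MethodUnderTest();

logger.AssertWasCalled(l => l.Log("Expected log message"));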
I've used code like this to stub out the NLog logging code. You can make use of NLog's MemoryTarget, which just keeps messages in memory until it's disposed of. You can query the content of the log using LINQ or whatever (this example uses FluentAssertions):
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NLog;
using NLog.Config;
using NLog.Targets;
...
private MemoryTarget _stubLogger;

[TestInitialize]
public void Setup()
{
    ConfigureTestLogging();
}

protected virtual LoggingConfiguration GetLoggingConfiguration()
{
    var config = new NLog.Config.LoggingConfiguration();

    this._stubLogger = new MemoryTarget();
    _stubLogger.Layout = "${level}|${message}";
    config.AddRule(LogLevel.Debug, LogLevel.Fatal, this._stubLogger);

    return config;
}

protected virtual void ConfigureTestLogging()
{
    var config = GetLoggingConfiguration();
    NLog.LogManager.Configuration = config;
}

[TestMethod]
public void ApiCallErrors_ShouldNotThrow()
{
    // arrange
    var target = new Thing();

    // act
    target.DoThing();

    // assert
    this._stubLogger.Logs.Should().Contain(l =>
        l.Contains("Error|") &&
        l.Contains("Expected Message"));
}
I wrote about this topic in another question.
However, I've since refactored my code to get rid of configuration access, thus allowing the specs to pass. Or so I thought. They run fine from within Visual Studio using TestDriven.Net, but when I run them during rake using the mspec.exe tool, they still fail with a serialization exception. So I've created a completely self-contained example that does basically nothing except set up fake security credentials on the thread. This test passes just fine in TD.Net but blows up in mspec.exe. Does anybody have any suggestions?
Update: I've discovered a work-around. After researching the issue, it seems the cause is that the assembly containing my principal object is not in the same folder as mspec.exe. When mspec creates a new AppDomain to run my specs, that new AppDomain has to load the assembly with the principal object in order to deserialize it. That assembly is not in the same folder as the mspec EXE, so it fails. If I copy my assembly into the same folder as mspec.exe, it works fine.
What I still don't understand is why ReSharper and TD.Net can run the test just fine? Do they not use mspec.exe to actually run the tests?
using System;
using System.Security.Principal;
using System.Threading;
using Machine.Specifications;

namespace MSpecTest
{
    [Subject(typeof(MyViewModel))]
    public class When_security_credentials_are_faked
    {
        static MyViewModel SUT;

        Establish context = SetupFakeSecurityCredentials;

        Because of = () =>
            SUT = new MyViewModel();

        It should_be_initialized = () =>
            SUT.Initialized.ShouldBeTrue();

        static void SetupFakeSecurityCredentials()
        {
            Thread.CurrentPrincipal = CreatePrincipal(CreateIdentity());
        }

        static MyIdentity CreateIdentity()
        {
            return new MyIdentity(Environment.UserName, "None", true);
        }

        static MyPrincipal CreatePrincipal(MyIdentity identity)
        {
            return new MyPrincipal(identity);
        }
    }

    public class MyViewModel
    {
        public MyViewModel()
        {
            Initialized = true;
        }

        public bool Initialized { get; set; }
    }

    [Serializable]
    public class MyPrincipal : IPrincipal
    {
        private readonly MyIdentity _identity;

        public MyPrincipal(MyIdentity identity)
        {
            _identity = identity;
        }

        public bool IsInRole(string role)
        {
            return true;
        }

        public IIdentity Identity
        {
            get { return _identity; }
        }
    }

    [Serializable]
    public class MyIdentity : IIdentity
    {
        private readonly string _name;
        private readonly string _authenticationType;
        private readonly bool _isAuthenticated;

        public MyIdentity(string name, string authenticationType, bool isAuthenticated)
        {
            _name = name;
            _isAuthenticated = isAuthenticated;
            _authenticationType = authenticationType;
        }

        public string Name
        {
            get { return _name; }
        }

        public string AuthenticationType
        {
            get { return _authenticationType; }
        }

        public bool IsAuthenticated
        {
            get { return _isAuthenticated; }
        }
    }
}
Dan, thank you for providing a reproduction.
First off, the console runner works differently than the TestDriven.NET and ReSharper runners. Basically, the console runner has to perform a lot more setup work in that it creates a new AppDomain (plus configuration) for every assembly that is run. This is required to load the .dll.config file for your spec assembly.
Per spec assembly, two AppDomains are created:
The first AppDomain (Console) is created implicitly when mspec.exe is executed;
a second AppDomain is created by mspec.exe for the assembly containing the specs (Spec).
Both AppDomains communicate with each other through .NET Remoting: For example, when a spec is executed in the Spec AppDomain, it notifies the Console AppDomain of that fact. When Console receives the notification it acts accordingly by writing the spec information to the console.
This communication between Spec and Console is realized transparently through .NET Remoting. One property of .NET Remoting is that some properties of the calling AppDomain (Spec) are automatically included when sending notifications to the target AppDomain (Console). Thread.CurrentPrincipal is such a property. You can read more about that here: http://sontek.vox.com/library/post/re-iprincipal-iidentity-ihttpmodule-serializable.html
The context you provide runs in the Spec AppDomain. You set Thread.CurrentPrincipal in the Establish. After the Because has run, a notification is issued to the Console AppDomain. The notification includes your custom MyPrincipal, which the receiving Console AppDomain tries to deserialize. It cannot do that, since it doesn't know about your spec assembly (it is not in its private bin path).
This is why you had to put your spec assembly in the same folder as mspec.exe.
There are two possible workarounds:
Derive MyPrincipal and MyIdentity from MarshalByRefObject so that they can take part in cross-AppDomain communication through a proxy (instead of being serialized)
Set Thread.CurrentPrincipal transiently in the Because
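For the first option, only the base class list changes; a sketch (the member bodies stay exactly as in the reproduction above):

// Cross-AppDomain calls to these objects now go through a transparent proxy,
// so the Console AppDomain never has to deserialize types from the spec assembly.
public class MyPrincipal : MarshalByRefObject, IPrincipal
{
    // ... same members as above ...
}

public class MyIdentity : MarshalByRefObject, IIdentity
{
    // ... same members as above ...
}

For the second option, set and restore the principal transiently inside the Because: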
Because of = () =>
{
    var previousPrincipal = Thread.CurrentPrincipal;
    try
    {
        Thread.CurrentPrincipal = new MyPrincipal(...);
        SUT = new MyViewModel();
    }
    finally
    {
        Thread.CurrentPrincipal = previousPrincipal;
    }
};
ReSharper, for example, handles all the communication work for us. MSpec's ReSharper Runner can hook into the existing infrastructure (that, AFAIK, does not use .NET Remoting).