I couldn't find any information on how to do this. Basically, FluentFTP uses System.Diagnostics to log its messages.
FluentFTP exposes the following static method:
FtpTrace.AddListener(TraceListener listener);
However, I don't know whether there is a way to implement a TraceListener (or use an existing implementation, and if so, which one?) so that it relays everything to the log4net engine.
Any hints or ideas?
Thanks, Radek
You can attach a handler to the OnLogEvent action that FluentFTP exposes on FtpClient.
private static readonly log4net.ILog Log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

public static void UploadFTP(FileInfo localFile, string remoteFileLocation, string remoteServer, NetworkCredential credentials)
{
    FtpClient client = new FtpClient(remoteServer, credentials);
    client.RetryAttempts = 3;
    client.OnLogEvent = OnFTPLogEvent;
    client.Connect();

    if (!client.UploadFile(localFile.FullName, remoteFileLocation, FtpExists.Overwrite, false, FtpVerify.Retry | FtpVerify.Throw))
    {
        throw new Exception($"Could not Upload File {localFile.Name}. See Logs for more information");
    }
}

private static void OnFTPLogEvent(FtpTraceLevel ftpTraceLevel, string logMessage)
{
    switch (ftpTraceLevel)
    {
        case FtpTraceLevel.Error:
            Log.Error(logMessage);
            break;
        case FtpTraceLevel.Verbose:
            Log.Debug(logMessage);
            break;
        case FtpTraceLevel.Warn:
            Log.Warn(logMessage);
            break;
        case FtpTraceLevel.Info:
        default:
            Log.Info(logMessage);
            break;
    }
}
The OnFTPLogEvent method will be called every time FluentFTP invokes the OnLogEvent action, allowing you to route its messages into any logging you have already built into your application.
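For example, a call to the helper might look like this (the server name, paths, and credentials below are placeholders, not values from the original question):

// Placeholder values for illustration only.
var localFile = new FileInfo(@"C:\temp\report.csv");
var credentials = new NetworkCredential("ftpUser", "ftpPassword");
UploadFTP(localFile, "/inbound/report.csv", "ftp.example.com", credentials);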
Basically, FluentFTP uses System.Diagnostics.TraceListener, so to make it log to your log4net log you need to write a small class that redirects those trace messages to a log4net logger, like the following:
using System.Diagnostics;
using log4net;

namespace YourApp.Logging
{
    public class Log4NetTraceListener : TraceListener
    {
        private readonly ILog _log;

        public Log4NetTraceListener(string provider)
        {
            _log = LogManager.GetLogger(provider);
        }

        public override void Write(string message)
        {
            if (_log == null)
                return;

            if (!string.IsNullOrWhiteSpace(message))
                _log.Info(message);
        }

        public override void WriteLine(string message)
        {
            if (_log == null)
                return;

            if (!string.IsNullOrWhiteSpace(message))
                _log.Info(message);
        }
    }
}
Then, in your app.config file add the following entry:
<system.diagnostics>
  <trace autoflush="true"></trace>
  <sources>
    <source name="FluentFTP">
      <listeners>
        <clear />
        <add name="FluentLog" />
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add name="FluentLog" type="YourApp.Logging.Log4NetTraceListener, YourApp" initializeData="FluentLog" />
  </sharedListeners>
</system.diagnostics>
That should enable FluentFTP logging and merge it into your application's log4net log.
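Note that log4net itself still has to be configured (appenders, layout, etc.) before the listener produces any output. As a quick sanity check you can configure it programmatically; in a real application you would normally keep your existing XML configuration instead:

// Quick sanity check only: route log4net output to a default console appender.
log4net.Config.BasicConfigurator.Configure();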
Using Glimpse, I'm able to access the session information except when using RuntimeEvent.ExecuteResource. Without it, the axd file is exposed, and I'd rather have it disabled unless specific users are logged in. The session is null in both examples below. I've also tried having the class implement IRequiresSessionState, but that didn't help either.
namespace Glimpse
{
    public class GlimpseSecurityPolicy : IRuntimePolicy
    {
        public RuntimePolicy Execute(IRuntimePolicyContext policyContext)
        {
            try
            {
                var name = HttpContext.Current.Session["username"];
                var name2 = policyContext.GetHttpContext().Session["username"];
            }
            catch (Exception)
            {
            }

            // You can perform a check like the one below to control Glimpse's permissions within your application.
            // More information about RuntimePolicies can be found at http://getglimpse.com/Help/Custom-Runtime-Policy
            // var httpContext = policyContext.GetHttpContext();
            // if (!httpContext.User.IsInRole("Administrator"))
            // {
            //     return RuntimePolicy.Off;
            // }
            return RuntimePolicy.On;
        }

        public RuntimeEvent ExecuteOn
        {
            // The RuntimeEvent.ExecuteResource is only needed in case you create a security policy
            // Have a look at http://blog.getglimpse.com/2013/12/09/protect-glimpse-axd-with-your-custom-runtime-policy/ for more details
            get { return RuntimeEvent.EndRequest | RuntimeEvent.ExecuteResource; }
        }
    }
}
The reason for this is that the Glimpse HttpHandler which processes the requests for Glimpse.axd does not implement the IRequiresSessionState interface.
It is that HttpHandler that will eventually execute all IRuntimePolicy instances that have RuntimeEvent.ExecuteResource configured as part of the ExecuteOn property value.
I think the easiest solution for you is to create your own IHttpHandler that implements the IRequiresSessionState interface and forwards all calls to the Glimpse HttpHandler, as shown below.
public class SessionAwareGlimpseHttpHandler : IHttpHandler, IRequiresSessionState
{
    private readonly Glimpse.AspNet.HttpHandler _glimpseHttpHandler =
        new Glimpse.AspNet.HttpHandler();

    public void ProcessRequest(HttpContext context)
    {
        _glimpseHttpHandler.ProcessRequest(context);
    }

    public bool IsReusable
    {
        get { return _glimpseHttpHandler.IsReusable; }
    }
}
Don't forget to update your web.config to use that handler instead of the original one:
...
<system.webServer>
  ...
  <handlers>
    <add name="Glimpse" path="glimpse.axd" verb="GET" type="YourNamespace.SessionAwareGlimpseHttpHandler, YourAssembly" preCondition="integratedMode" />
  </handlers>
  ...
</system.webServer>
...
Once all this is in place, you should be able to access the Session inside your IRuntimePolicy.
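For example, a policy that keys off a session value could look roughly like this (the "username"/"admin" check is purely illustrative):

public class SessionAwareGlimpsePolicy : IRuntimePolicy
{
    public RuntimePolicy Execute(IRuntimePolicyContext policyContext)
    {
        // With the session-aware handler registered, Session is no longer null here.
        var session = policyContext.GetHttpContext().Session;
        if (session == null || !Equals(session["username"], "admin"))
        {
            return RuntimePolicy.Off;
        }
        return RuntimePolicy.On;
    }

    public RuntimeEvent ExecuteOn
    {
        get { return RuntimeEvent.EndRequest | RuntimeEvent.ExecuteResource; }
    }
}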
I want to convert an INFO log level to a WARN if the INFO log message contains an exception. Is there any way I can accomplish this? (I am integrating log4net in a .NET application.)
Unless you already wrap your logging calls (in which case you could intercept the messages before passing them to log4net), your best bet is to create your own appenders that promote log events as appropriate. Since each appender subclass needs exactly the same code, I've put the actual promotion into an extension method:
using log4net.Appender;
using log4net.Core;

public static class AppenderExtensions
{
    public static LoggingEvent Promote(this LoggingEvent loggingEvent)
    {
        if (loggingEvent.Level != Level.Info
            || loggingEvent.ExceptionObject == null)
        {
            return loggingEvent;
        }

        var data = loggingEvent.GetLoggingEventData(FixFlags.All);
        data.Level = Level.Warn;
        return new LoggingEvent(data);
    }
}

public class PromotingAdoNetAppender : AdoNetAppender
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        base.Append(loggingEvent.Promote());
    }
}

public class PromotingRollingFileAppender : RollingFileAppender
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        base.Append(loggingEvent.Promote());
    }
}
Then all you need to do is declare these appender types in your config:
<appender name="DatabaseAppender"
type="Your.Namespace.Here.PromotingAdoNetAppender">
…
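With those appenders configured, an INFO entry that carries an exception is stored as WARN, while plain INFO entries are left untouched. A quick illustration (the logger name is arbitrary):

var log = log4net.LogManager.GetLogger("PromotionDemo");
log.Info("Nothing unusual here"); // stays INFO

try
{
    throw new InvalidOperationException("boom");
}
catch (Exception ex)
{
    log.Info("Operation failed", ex); // promoted to WARN by the appender
}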
I've bumped into the following problem with the Azure Diagnostics Monitor:
When I create a new AppDomain in the OnStart() event of the WorkerRole entry point, diagnostics works only in the parent AppDomain. I've tried initializing the Diagnostics Monitor in the child AppDomain, but it doesn't help (traces are collected only from the parent domain).
Example repro code:
public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        InitializeDiagnostics();
        Trace.TraceInformation("WorkerRole1 entry point called", "Information");

        while (true)
        {
            Thread.Sleep(10000);
            Trace.TraceInformation("Parent domain working", "Information");
        }
    }

    public override bool OnStart()
    {
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;
        InitializeDiagnostics();

        var setup = new AppDomainSetup();
        setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
        setup.ConfigurationFile = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;

        var newDomain = System.AppDomain.CreateDomain("NewApplicationDomain", null, setup);
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies().Where(x => !x.GlobalAssemblyCache))
        {
            newDomain.Load(assembly.GetName());
        }
        newDomain.Load(typeof(Worker).Assembly.FullName);

        var worker = newDomain.CreateInstanceAndUnwrap(this.GetType().Assembly.FullName, typeof(Worker).FullName) as Worker;
        worker.DoWork();

        return base.OnStart();
    }

    public void InitializeDiagnostics()
    {
        var roleInstanceDiagnosticManager = new RoleInstanceDiagnosticManager(
            RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"),
            RoleEnvironment.DeploymentId,
            RoleEnvironment.CurrentRoleInstance.Role.Name,
            RoleEnvironment.CurrentRoleInstance.Id);
        var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();
        var dictionaryConfiguration = new DirectoryConfiguration();

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);
    }
}

public class Worker : MarshalByRefObject
{
    public void DoWork()
    {
        new Task(() =>
        {
            while (true)
            {
                Thread.Sleep(1000);
                Trace.TraceInformation(AppDomain.CurrentDomain.FriendlyName + " Worker working...", "Information");
            }
        }).Start();
    }
}
App config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
Expected output:
Lots of logged messages:
"{Domain Name} Wokrer working..."
Some
"Parent domain working"
Actual output:
"Parent domain working"
I'm using Azure SDK 2.0. Has anyone come across a similar issue?
OK, finally solved it. Upgrading the Azure SDK to 2.3 did the trick. Interestingly, the messages still don't appear in the Compute Emulator console, but after the upgrade they are correctly logged to the WADLogs table.
I am using Anotar.Catel.Fody for logging in my application.
In NLog.config I want to use different levels for certain classes. Example config:
<logger name="SpaceA.*"
minlevel="Info"
writeTo="file"
final="true" />
<logger name="*"
minlevel="Debug"
writeTo="file" />
I have created an NLogListener class which derives from Catel's LogListenerBase.
public class NLogListener : LogListenerBase
{
    private static readonly NLog.Logger Log = NLog.LogManager.GetCurrentClassLogger();

    protected override void Debug(ILog log, string message, object extraData)
    {
        Log.Debug(message);
    }

    protected override void Info(ILog log, string message, object extraData)
    {
        Log.Info(message);
    }

    protected override void Warning(ILog log, string message, object extraData)
    {
        Log.Warn(message);
    }

    protected override void Error(ILog log, string message, object extraData)
    {
        Log.Error(message);
    }
}
In my code I use Anotar.Catel.Fody:
LogTo.Debug("Starting something...");
Now, no matter where I use the logging, every entry is displayed as coming from the namespace where I have defined the LogListener.
What am I doing wrong, and what do I have to change so that I can filter NLog by class name as it normally would?
The problem is that you get the current class logger in the LogListener:
private static readonly NLog.Logger Log = NLog.LogManager.GetCurrentClassLogger();
That way, every entry is logged under the NLogListener type. What you should do instead is resolve the correct logger for each entry:
protected override void Debug(ILog log, string message, object extraData)
{
    var nlog = NLog.LogManager.GetLogger(log.TargetType.FullName);
    nlog.Debug(message);
}
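Applied to the full listener, the idea looks something like this (a sketch, assuming Catel's ILog exposes the originating class via TargetType, as used above):

public class NLogListener : LogListenerBase
{
    private static NLog.Logger GetLogger(ILog log)
    {
        // Resolve an NLog logger named after the class that produced the entry,
        // so the name-based rules in NLog.config apply as expected.
        return NLog.LogManager.GetLogger(log.TargetType.FullName);
    }

    protected override void Debug(ILog log, string message, object extraData)
    {
        GetLogger(log).Debug(message);
    }

    protected override void Info(ILog log, string message, object extraData)
    {
        GetLogger(log).Info(message);
    }

    protected override void Warning(ILog log, string message, object extraData)
    {
        GetLogger(log).Warn(message);
    }

    protected override void Error(ILog log, string message, object extraData)
    {
        GetLogger(log).Error(message);
    }
}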
I've created a simple WF4 console app and set up log4net identically to my other apps. However, when I fire up the console and use the ILog object inside WF4 (I actually pass it into the workflow), no information is presented using my ColoredConsoleAppender. What am I doing wrong?
Workflow trace output is written to trace listeners, and as far as I am aware log4net doesn't capture output written to a trace listener by default. I am no expert on log4net, so there might be an easier way, but creating a TraceListener that just passes all data on to log4net is not hard; the following code worked just fine in a quick test.
using System.Diagnostics;
using System.Reflection;
using log4net;

public class Log4netTraceListener : TraceListener
{
    private static readonly ILog _log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

    public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, params object[] data)
    {
        base.TraceData(eventCache, source, eventType, id, data);
    }

    public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data)
    {
        var logger = LogManager.GetLogger(source);
        switch (eventType)
        {
            case TraceEventType.Critical:
                logger.Fatal(data);
                break;
            case TraceEventType.Error:
                logger.Error(data);
                break;
            case TraceEventType.Information:
                logger.Info(data);
                break;
            case TraceEventType.Verbose:
                logger.Debug(data);
                break;
            case TraceEventType.Warning:
                logger.Warn(data);
                break;
            default:
                base.TraceData(eventCache, source, eventType, id, data);
                break;
        }
    }

    public override void Write(string message)
    {
        _log.Info(message);
    }

    public override void WriteLine(string message)
    {
        _log.Info(message);
    }
}
Next, you need to make sure the activity trace information is sent to this TraceListener, using the following configuration in your app.config:
<system.diagnostics>
  <sources>
    <source name="System.Activities"
            switchValue="Verbose">
      <listeners>
        <add name="Test"
             type="WorkflowConsoleApplication17.Log4netTraceListener, WorkflowConsoleApplication17"/>
      </listeners>
    </source>
  </sources>
</system.diagnostics>
Create an Extension for your workflow that your activities can get from the context.
var wf = new WorkflowApplication(myActivity);
var log = new MyLogForNetExtensionLol();
wf.Extensions.Add(log);
Then, within the activity:
var log = context.GetExtension<ILog>();
log.Info("Worked!");
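A minimal sketch of an activity consuming that extension (the class name is illustrative, and it assumes the extension you registered is log4net's ILog):

public sealed class LoggingActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // Pull the log4net ILog instance registered via wf.Extensions.Add(...).
        var log = context.GetExtension<log4net.ILog>();
        if (log != null)
        {
            log.Info("Worked!");
        }
    }
}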