How to write an NLog target using SignalR

I'm trying to write a target for NLog to send messages out to connected clients using SignalR.
Here's what I have now. What I'm wondering is: should I be resolving the ConnectionManager like this, or should I somehow obtain a reference to the hub (SignalrTargetHub) and call a SendMessage method on it?
Are there performance ramifications for either?
[Target("Signalr")]
public class SignalrTarget:TargetWithLayout
{
public SignalR.IConnectionManager ConnectionManager { get; set; }
public SignalrTarget()
{
ConnectionManager = AspNetHost.DependencyResolver.Resolve<IConnectionManager>();
}
protected override void Write(NLog.LogEventInfo logEvent)
{
dynamic clients = GetClients();
var logEventObject = new
{
Message = this.Layout.Render(logEvent),
Level = logEvent.Level.Name,
TimeStamp = logEvent.TimeStamp.ToString("yyyy-MM-dd HH:mm:ss.fff")
};
clients.onLoggedEvent(logEventObject);
}
private dynamic GetClients()
{
return ConnectionManager.GetClients<SignalrTargetHub>();
}
}

I ended up with essentially the same basic structure that I started with, with just a few tweaks to get the information I needed:
Added exception details.
HTML-encoded the final message.
[Target("Signalr")]
public class SignalrTarget:TargetWithLayout
{
protected override void Write(NLog.LogEventInfo logEvent)
{
var sb = new System.Text.StringBuilder();
sb.Append(this.Layout.Render(logEvent));
if (logEvent.Exception != null)
sb.AppendLine().Append(logEvent.Exception.ToString());
var message = HttpUtility.HtmlEncode(sb.ToString());
var logEventObject = new
{
Message = message,
Logger = logEvent.LoggerName,
Level = logEvent.Level.Name,
TimeStamp = logEvent.TimeStamp.ToString("HH:mm:ss.fff")
};
GetClients().onLoggedEvent(logEventObject);
}
private dynamic GetClients()
{
return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<SignalrTargetHub>();
}
}
In my simple testing it's working well. It still remains to be seen whether this adds any significant load under stress.
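For completeness, here is a minimal sketch of how a custom target like this might be registered and exercised from code; it assumes NLog's standard programmatic configuration API, and the target name, rule pattern and layout string are only illustrative:

// Minimal sketch: register the custom SignalR target in code and log through it.
// Assumes the SignalrTarget class shown above is compiled into the application.
using NLog;
using NLog.Config;

public static class SignalrLoggingSetup
{
    public static void Configure()
    {
        var config = new LoggingConfiguration();

        var signalrTarget = new SignalrTarget
        {
            Layout = "${longdate}|${level:uppercase=true}|${logger}|${message}"
        };

        config.AddTarget("signalr", signalrTarget);
        config.LoggingRules.Add(new LoggingRule("*", LogLevel.Debug, signalrTarget));

        LogManager.Configuration = config;
        LogManager.GetCurrentClassLogger().Info("SignalR logging target is active.");
    }
}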

Related

Spring-Integration: external routing slip

I would like to allow callers to pass an external routing slip, e.g. by posting:
POST http://localhost:8080/transform?routing-slip=capitalize&routing-slip=lowercase
Content-Type: text/plain
camelCase
It should be possible to use the given routing-slip array as an external routing slip from a POJO:
@Bean
public IntegrationFlow transformerChain(RoutingSlipRouteStrategy routeStrategy) {
    return IntegrationFlows.from(
            Http.inboundGateway("/transform")
                .headerExpression("routingSlipParam",
                        "#requestParams['routing-slip']")
                .requestPayloadType(String.class))
        .enrichHeaders(spec -> spec.header(
                IntegrationMessageHeaderAccessor.ROUTING_SLIP,
                new RoutingSlipHeaderValueMessageProcessor(
                        "@routePojo.get(request, reply)")
            )
        )
        .logAndReply();
}
The POJO can access the routingSlipParam header, and you would think it could then hold the slip as internal state, or at least that is what TestRoutingSlipRoutePojo led me to believe, so I built this (with a slight doubt, given that there is only one instance of the POJO):
public class ExternalRoutingSlipRoutePojo {

    private List<String> routingSlip;
    private int i = 0;

    public String get(Message<?> requestMessage, Object reply) {
        if (routingSlip == null) {
            routingSlip = (LinkedList) requestMessage.getHeaders()
                    .get("routingSlipParam");
        }
        try {
            return this.routingSlip.get(i++);
        } catch (Exception e) {
            return null;
        }
    }
}
It turns out that this only works exactly once, which is not surprising after all - the index is incremented for every incoming message and the routing slip is never updated.
So I thought, sure, I have to hold the internal state for all incoming messages, and came up with this RouteStrategy:
public class ExternalRoutingSlipRouteStrategy implements RoutingSlipRouteStrategy {

    private Map<UUID, LinkedList<String>> routingSlips = new WeakHashMap<>();

    private static final LinkedList EMPTY_ROUTINGSLIP = new LinkedList<>();

    @Override
    public Object getNextPath(Message<?> requestMessage, Object reply) {
        MessageHeaders headers = requestMessage.getHeaders();
        UUID id = headers.getId();
        if (!routingSlips.containsKey(id)) {
            @SuppressWarnings("unchecked")
            List<String> routingSlipParam =
                    headers.get("routingSlipParam", List.class);
            if (routingSlipParam != null) {
                routingSlips.put(id, new LinkedList<>(routingSlipParam));
            }
        }
        LinkedList<String> routingSlip = routingSlips.getOrDefault(id, EMPTY_ROUTINGSLIP);
        String nextPath = routingSlip.poll();
        if (nextPath == null) {
            routingSlips.remove(id);
        }
        return nextPath;
    }
}
That does not work either because the strategy is not only called for the incoming message but also for all the new messages which are created by the dynamic routing, which of course have different IDs.
But it is only called twice for the original message, so the routing slip never gets exhausted and the application runs in an endless loop.
How can I make spring-integration use an external routing slip?
UPDATE:
As suggested by Gary Russell, neither the external routing slip index nor the external routing slip itself should be stored in the Spring bean; instead, message headers can be used to maintain them separately for each request:
Http.inboundGateway("/transform")
        .headerExpression("routingSlipParam",
                "#requestParams['routing-slip']")
        .requestPayloadType(String.class))
    .enrichHeaders(spec -> spec
        .headerFunction("counter", h -> new AtomicInteger())
        .header(IntegrationMessageHeaderAccessor.ROUTING_SLIP,
                new RoutingSlipHeaderValueMessageProcessor(externalRouteStrategy)
        )
    )
The externalRouteStrategy is an instance of the following class:
public class ExternalRoutingSlipRouteStrategy implements RoutingSlipRouteStrategy {

    @Override
    public Object getNextPath(Message<?> requestMessage, Object reply) {
        List<String> routingSlip = (List<String>)
                requestMessage.getHeaders().get("routingSlipParam");
        int routingSlipIndex = requestMessage.getHeaders()
                .get("counter", AtomicInteger.class)
                .getAndIncrement();
        String routingSlipEntry;
        if (routingSlip != null && routingSlipIndex < routingSlip.size()) {
            routingSlipEntry = routingSlip.get(routingSlipIndex);
        } else {
            routingSlipEntry = null;
        }
        return routingSlipEntry;
    }
}
For reference, I have published the example on GitHub.
Go back to your first version and store i in a message header (AtomicInteger) in the header enricher.
.headerExpression("counter", "new java.util.concurrent.atomic.AtomicInteger()")
then
int i = requestMessage.getHeaders().get("counter", AtomicInteger.class).getAndIncrement();

Unable to use multiple instances of MobileServiceClient concurrently

I structured my project into multiple mobile services, grouped by application type, e.g.:
my-core.azure-mobile.net (user, device)
my-app-A.azure-mobile.net (sales, order, invoice)
my-app-B.azure-mobile.net (inventory & parts)
I'm using custom authentication for all my services, and I implemented my own SSO by setting the same master key to all 3 services.
Things went well when I tested using a REST client, e.g. a user who "logged in" via the custom API at my-core.azure-mobile.net is able to use the returned JWT token to access the restricted APIs of the other mobile services.
However, in my Xamarin project, only the first MobileServiceClient object (first in order of creation) works properly (e.g. returns results from a given table). The client objects are created using their own URL and key respectively, and stored in a dictionary.
If I create the client object for app-A first and then the one for app-B, I am able to perform CRUD+Sync on the sales/order/invoice entities, while CRUD+Sync operations on the inventory/part entities just hang. The situation is reversed if I swap the client object creation order.
I wonder whether there are any internal static variables used within MobileServiceClient that cause this behavior, or whether this is a genuine bug?
=== code snippet ===
public class AzureService
{
    IDictionary<String, MobileServiceClient> services = new Dictionary<String, MobileServiceClient>();

    public MobileServiceClient Init(String key, String applicationURL, String applicationKey)
    {
        return services[key] = new MobileServiceClient(applicationURL, applicationKey);
    }

    public MobileServiceClient Get(String key)
    {
        return services[key];
    }

    public void InitSyncContext(MobileServiceSQLiteStore offlineStore)
    {
        // Uses the default conflict handler, which fails on conflict
        // To use a different conflict handler, pass a parameter to InitializeAsync.
        // For more details, see http://go.microsoft.com/fwlink/?LinkId=521416
        var syncHandler = new MobileServiceSyncHandler();
        foreach (var client in services)
        {
            client.Value.SyncContext.InitializeAsync(offlineStore, syncHandler);
        }
    }

    public void SetAuthenticationToken(String uid, String token)
    {
        var user = new MobileServiceUser(uid);
        foreach (var client in services)
        {
            client.Value.CurrentUser = user;
            client.Value.CurrentUser.MobileServiceAuthenticationToken = token;
        }
    }

    public void ClearAuthenticationToken()
    {
        foreach (var client in services)
        {
            client.Value.CurrentUser = null;
        }
    }
}
=== more code ===
public class DatabaseService
{
    public static MobileServiceSQLiteStore LocalStore = null;
    public static string Path { get; set; }
    public static ISet<IEntityMappingProvider> Providers = new HashSet<IEntityMappingProvider>();

    public static void Init(String dbPath)
    {
        LocalStore = new MobileServiceSQLiteStore(dbPath);
        foreach (var provider in Providers)
        {
            var types = provider.GetSupportedTypes();
            foreach (var t in types)
            {
                JObject item = null;
                // omitted detail to create JObject using reflection on given type
                LocalStore.DefineTable(tableName, item);
            }
        }
    }
}
=== still code ===
public class AzureDataSyncService<T> : IAzureDataSyncService<T>
{
    public MobileServiceClient ServiceClient { get; set; }

    public virtual Task<List<T>> GetAll()
    {
        try
        {
            var theTable = ServiceClient.GetSyncTable<T>();
            return theTable.ToListAsync();
        }
        catch (MobileServiceInvalidOperationException msioe)
        {
            Debug.WriteLine("GetAll<{0}> EXCEPTION TYPE: {1}, EXCEPTION:{2}", typeof(T).ToString(), msioe.GetType().ToString(), msioe.ToString());
        }
        catch (Exception e)
        {
            Debug.WriteLine("GetAll<{0}> EXCEPTION TYPE: {1}, EXCEPTION:{2}", typeof(T).ToString(), e.GetType().ToString(), e.ToString());
        }

        List<T> theCollection = Enumerable.Empty<T>().ToList();
        return Task.FromResult(theCollection);
    }
}
=== code ===
public class UserService : AzureDataSyncService<User>
{
}

public class PartService : AzureDataSyncService<Part>
{
}

const string coreApiURL = @"https://my-core.azure-mobile.net/";
const string coreApiKey = @"XXXXX";
const string invApiURL = @"https://my-inventory.azure-mobile.net/";
const string invApiKey = @"YYYYY";

public async void Foo()
{
    DatabaseService.Providers.Add(new CoreDataMapper());
    DatabaseService.Providers.Add(new InvDataMapper());
    DatabaseService.Init(DatabaseService.Path);

    var coreSvc = AzureService.Instance.Init("Core", coreApiURL, coreApiKey);
    var invSvc = AzureService.Instance.Init("Inv", invApiURL, invApiKey);
    AzureService.Instance.InitSyncContext(DatabaseService.LocalStore);
    AzureService.Instance.SetAuthenticationToken("AAA", "BBB");

    UserService.Instance.ServiceClient = coreSvc;
    PartService.Instance.ServiceClient = invSvc;

    var x = await UserService.GetAll(); // this will work
    var y = await PartService.GetAll(); // but not this
}
It's OK to use multiple MobileServiceClient objects, but not with the same local database. The offline sync feature uses particular system tables to keep track of table operations and errors, and it is not supported to use the same local store across multiple sync contexts.
I'm not totally sure why it is hanging in your test, but it's possible that there is a lock on the local database file and the other sync context is waiting to get access.
You should instead use a different local database file for each service and do push and pull on each sync context. With your particular example, you just need to move LocalStore out of DatabaseService and into a dictionary in AzureService.
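As a rough illustration of that suggestion (a minimal sketch, not a drop-in replacement: the stores dictionary, the file-naming scheme and the async initialization method are assumptions, and namespaces are omitted as in the question's snippets):

public class AzureService
{
    IDictionary<string, MobileServiceClient> services = new Dictionary<string, MobileServiceClient>();
    // Hypothetical: one local store per service so each sync context owns its own database file.
    IDictionary<string, MobileServiceSQLiteStore> stores = new Dictionary<string, MobileServiceSQLiteStore>();

    public MobileServiceClient Init(string key, string applicationURL, string applicationKey)
    {
        // e.g. "Core.db" and "Inv.db" instead of one shared file; tables still need to be
        // defined on each store (as DatabaseService.Init did) before the sync context is initialized.
        stores[key] = new MobileServiceSQLiteStore(key + ".db");
        return services[key] = new MobileServiceClient(applicationURL, applicationKey);
    }

    public async Task InitSyncContextsAsync()
    {
        var syncHandler = new MobileServiceSyncHandler();
        foreach (var entry in services)
        {
            // Each client is initialized against its own store, never a shared one.
            await entry.Value.SyncContext.InitializeAsync(stores[entry.Key], syncHandler);
        }
    }
}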
In general, it seems like an unusual design to use multiple services from the same client app. Is there a particular reason that the services need to be separated from each other?

NLog MemoryTarget maximum size

Since I have a global exception handler that reports uncaught errors via e-mail, the next step is to add some context by including the last 10-20 lines of collected log output.
So I am using MemoryTarget like so:
MemoryTarget _logTarget;
_logTarget = new MemoryTarget();
_logTarget.Layout = "${longdate}|${level:uppercase=true}|${logger}|${message}${exception}";
LoggingRule loggingRule = new LoggingRule("*", LogLevel.Debug, _logTarget);
LogManager.Configuration.AddTarget("exceptionMemory", _logTarget);
LogManager.Configuration.LoggingRules.Add(loggingRule);
LogManager.Configuration.Reload();
Apps containing this should run forever, and if I leave the logs in memory, unchecked, I'll have a neatly designed memory leak.
How can I address this? How can I truncate MemoryTarget.Logs to at most, say, 100 lines?
Your best bet is probably to write your own MemoryTarget... Something like this (untested) should work.
namespace NLog.Targets
{
    using System.Collections.Generic;
    using System.ComponentModel;

    [Target("LimitedMemory")]
    public sealed class LimitedMemoryTarget : TargetWithLayout
    {
        // Backing queue; the oldest entry is dequeued once Limit is exceeded.
        private readonly Queue<string> logs = new Queue<string>();

        public LimitedMemoryTarget()
        {
            // [DefaultValue] is documentation only, so set the actual default here;
            // otherwise Limit would be 0 and every message would be dropped immediately.
            Limit = 100;
        }

        public IEnumerable<string> Logs
        {
            get { return logs; }
        }

        [DefaultValue(100)]
        public int Limit { get; set; }

        protected override void Write(LogEventInfo logEvent)
        {
            string msg = this.Layout.Render(logEvent);
            logs.Enqueue(msg);
            if (logs.Count > Limit)
            {
                logs.Dequeue();
            }
        }
    }
}
This example is based on the NLog MemoryTarget, the source code for which you can find here:
https://github.com/NLog/NLog
NLog docs are here:
http://nlog-project.org/documentation/v2.0.1/
I didn't see anything like what you are asking about in either location.
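To connect this back to the original e-mail scenario, here is a hedged sketch of how the buffered lines might be pulled into the error report; the target name matches the "exceptionMemory" name used in the question, while ReportWithRecentLog and SendErrorMail are hypothetical helpers:

using System;
using NLog;
using NLog.Targets;

static class ErrorReporting
{
    // Pulls the buffered lines out of the custom target when building the exception e-mail.
    public static void ReportWithRecentLog(Exception ex)
    {
        var memoryTarget = LogManager.Configuration.FindTargetByName("exceptionMemory") as LimitedMemoryTarget;
        string recentLog = memoryTarget != null
            ? string.Join(Environment.NewLine, memoryTarget.Logs)
            : "(no in-memory log available)";

        SendErrorMail("Unhandled exception: " + ex.Message, recentLog);
    }

    static void SendErrorMail(string subject, string body)
    {
        // Placeholder for the application's existing e-mail reporting.
    }
}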

Debugging Package Manager Console Update-Database Seed Method

I wanted to debug the Seed() method in my Entity Framework database configuration class when I run Update-Database from the Package Manager Console but didn't know how to do it. I wanted to share the solution with others in case they have the same issue.
Here is a similar question with a solution that works really well.
It does NOT require Thread.Sleep.
It just launches the debugger using this code, clipped from that answer:
if (!System.Diagnostics.Debugger.IsAttached)
    System.Diagnostics.Debugger.Launch();
The way I solved this was to open a second instance of Visual Studio with the same solution, and then attach its debugger to the original instance (devenv.exe) while running the update-database command. This allowed me to debug the Seed method.
Just to make sure I didn't miss the breakpoint by not attaching in time, I added a Thread.Sleep before the breakpoint.
I hope this helps someone.
If you need to get a specific variable's value, a quick hack is to throw an exception:
throw new Exception(variable);
A cleaner solution (I guess this requires EF 6) would IMHO be to call update-database from code:
var configuration = new DbMigrationsConfiguration<TContext>();
var databaseMigrator = new DbMigrator(configuration);
databaseMigrator.Update();
This allows you to debug the Seed method.
You may take this one step further and construct a unit test (or, more precisely, an integration test) that creates an empty test database, applies all EF migrations, runs the Seed method, and drops the test database again:
var configuration = new DbMigrationsConfiguration<TContext>();
Database.Delete("TestDatabaseNameOrConnectionString");
var databaseMigrator = new DbMigrator(configuration);
databaseMigrator.Update();
Database.Delete("TestDatabaseNameOrConnectionString");
But be careful not to run this against your development database!
I know this is an old question, but if all you want is messages, and you don't mind including references to WinForms in your project, I made a simple debug window to which I can send Trace events.
For more serious and step-by-step debugging, I'll open another Visual Studio instance, but it's not necessary for simple stuff.
This is the whole code:
SeedApplicationContext.cs
using System;
using System.Data.Entity;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;

namespace Data.Persistence.Migrations.SeedDebug
{
    public class SeedApplicationContext<T> : ApplicationContext
        where T : DbContext
    {
        private class SeedTraceListener : TraceListener
        {
            private readonly SeedApplicationContext<T> _appContext;

            public SeedTraceListener(SeedApplicationContext<T> appContext)
            {
                _appContext = appContext;
            }

            public override void Write(string message)
            {
                _appContext.WriteDebugText(message);
            }

            public override void WriteLine(string message)
            {
                _appContext.WriteDebugLine(message);
            }
        }

        private Form _debugForm;
        private TextBox _debugTextBox;
        private TraceListener _traceListener;
        private readonly Action<T> _seedAction;
        private readonly T _dbcontext;

        public Exception Exception { get; private set; }
        public bool WaitBeforeExit { get; private set; }

        public SeedApplicationContext(Action<T> seedAction, T dbcontext, bool waitBeforeExit = false)
        {
            _dbcontext = dbcontext;
            _seedAction = seedAction;
            WaitBeforeExit = waitBeforeExit;
            _traceListener = new SeedTraceListener(this);
            CreateDebugForm();
            MainForm = _debugForm;
            Trace.Listeners.Add(_traceListener);
        }

        private void CreateDebugForm()
        {
            var textbox = new TextBox { Multiline = true, Dock = DockStyle.Fill, ScrollBars = ScrollBars.Both, WordWrap = false };
            var form = new Form { Font = new Font(@"Lucida Console", 8), Text = "Seed Trace" };
            form.Controls.Add(textbox);
            form.Shown += OnFormShown;
            _debugForm = form;
            _debugTextBox = textbox;
        }

        private void OnFormShown(object sender, EventArgs eventArgs)
        {
            WriteDebugLine("Initializing seed...");
            try
            {
                _seedAction(_dbcontext);
                if (!WaitBeforeExit)
                    _debugForm.Close();
                else
                    WriteDebugLine("Finished seed. Close this window to continue");
            }
            catch (Exception e)
            {
                Exception = e;
                var einner = e;
                while (einner != null)
                {
                    WriteDebugLine(string.Format("[Exception {0}] {1}", einner.GetType(), einner.Message));
                    WriteDebugLine(einner.StackTrace);
                    einner = einner.InnerException;
                    if (einner != null)
                        WriteDebugLine("------- Inner Exception -------");
                }
            }
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing && _traceListener != null)
            {
                Trace.Listeners.Remove(_traceListener);
                _traceListener.Dispose();
                _traceListener = null;
            }
            base.Dispose(disposing);
        }

        private void WriteDebugText(string message)
        {
            _debugTextBox.Text += message;
            Application.DoEvents();
        }

        private void WriteDebugLine(string message)
        {
            WriteDebugText(message + Environment.NewLine);
        }
    }
}
And in your standard Configuration.cs:
// ...
using System.Windows.Forms;
using Data.Persistence.Migrations.SeedDebug;
// ...

namespace Data.Persistence.Migrations
{
    internal sealed class Configuration : DbMigrationsConfiguration<MyContext>
    {
        public Configuration()
        {
            // Migrations configuration here
        }

        protected override void Seed(MyContext context)
        {
            // Create our application context which will host our debug window and message loop
            var appContext = new SeedApplicationContext<MyContext>(SeedInternal, context, false);
            Application.Run(appContext);
            var e = appContext.Exception;
            Application.Exit();
            // Rethrow the exception to the package manager console
            if (e != null)
                throw e;
        }

        // Our original Seed method, now with Trace support!
        private void SeedInternal(MyContext context)
        {
            // ...
            Trace.WriteLine("I'm seeding!");
            // ...
        }
    }
}
Uh, debugging is one thing, but don't forget to call:
context.Update()
Also, don't wrap in a try/catch without a good inner-exception spill to the console:
https://coderwall.com/p/fbcyaw/debug-into-entity-framework-code-first
with catch (DbEntityValidationException ex)
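For illustration, a minimal sketch of that kind of catch block, assuming EF's DbEntityValidationException from System.Data.Entity.Validation (and System.Diagnostics for the Trace output); the surrounding SaveChanges call is just an example:

try
{
    context.SaveChanges();
}
catch (DbEntityValidationException ex)
{
    // Spill every validation failure so the real cause is visible,
    // instead of the generic "validation failed for one or more entities" message.
    foreach (var entityErrors in ex.EntityValidationErrors)
    {
        foreach (var error in entityErrors.ValidationErrors)
        {
            Trace.WriteLine(string.Format("{0}.{1}: {2}",
                entityErrors.Entry.Entity.GetType().Name,
                error.PropertyName,
                error.ErrorMessage));
        }
    }
    throw;
}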
I have 2 workarounds (without Debugger.Launch() since it doesn't work for me):
To print a message in the Package Manager Console, use an exception:
throw new Exception("Your message");
Another way is to print a message to a file by creating a cmd process:
// Logs to file {solution folder}\seed.log data from Seed method (for DEBUG only)
private void Log(string msg)
{
    string echoCmd = $"/C echo {DateTime.Now} - {msg} >> seed.log";
    System.Diagnostics.Process.Start("cmd.exe", echoCmd);
}

Loading an object from a db4o database

I am developing an e-commerce website that utilises db4o as the backend. All was well until last week when I came across a problem that I have been unable to solve. The code below is quite straightforward: I open a database file, save an object and then try to retrieve it. However, I get nothing back; the "users" variable has a count of zero.
public class Program
{
    private static string _connectionString = string.Format(@"c:\aaarrrr.db4o");

    static void Main(string[] args)
    {
        TestUser container = new TestUser() { id = 1, Name = "Mohammad", Surname = "Rafiq" };

        Db4oFactory.Configure().Diagnostic().AddListener(new DiagnosticToConsole());

        using (var dbc = Db4oFactory.OpenFile(_connectionString))
        {
            dbc.Store(container);
        }

        IList<TestUser> users = null;
        using (var dbc = Db4oFactory.OpenFile(_connectionString))
        {
            users = dbc.Query<TestUser>(x => x.id == 1).ToList();
        }

        if (users.Count > 0)
        {
            Console.WriteLine("{0} {1} with id of {2}", users.First().Name, users.First().Surname, users.First().id);
        }
        else
        {
            Console.WriteLine("\nNo data returned.");
        }

        Console.ReadLine();
    }
}
public class TestUser
{
    [Indexed]
    private int _id = 0;
    private string _name = string.Empty;
    private string _surname = string.Empty;

    public int id { get { return _id; } set { _id = value; } }
    public string Name { get { return _name; } set { _name = value; } }
    public string Surname { get { return _surname; } set { _surname = value; } }
}
I have attached the db4o diagnostic listener and I see nothing in the console output. Everything seems fine. I know I am writing to the file because I can see the file size increase and the timestamp is also updated. I have checked all the project settings and they are all set to their defaults. I am using .NET 4, Visual Studio 2010 beta and Windows 7. I have done some reading regarding reflection permission, but I can't see how it applies here. Any help or ideas would be kindly appreciated.
After calling Store(), you need to call Commit() before leaving the using block. You closed your database before committing your changes.
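A minimal sketch of that change against the question's first using block, assuming db4o's IObjectContainer.Commit():

using (var dbc = Db4oFactory.OpenFile(_connectionString))
{
    dbc.Store(container);
    // Flush the pending transaction before the container is disposed.
    dbc.Commit();
}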

Resources