Log4net EventLogAppender Log Event ID - log4net

Is there a way to log an event into the windows event log with a specified eventid per message? I am using log4net v 1.2.10.

Based on what I see in the EventLogAppender source code, the following should do the trick:
log4net.ThreadContext.Properties["EventID"] = 5;
Just call this before you write your log messages (if you do not want it applied to all messages, remove the "EventID" entry from the Properties again afterwards).
N.B. the property key is case-sensitive.
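For example, a minimal sketch (assuming "log" is an ILog obtained from LogManager.GetLogger) would be:
// Set the EventID for the next message only, then remove it again.
log4net.ThreadContext.Properties["EventID"] = 5;
log.Warn("Disk quota almost reached");
log4net.ThreadContext.Properties.Remove("EventID");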

When one uses the native .NET event log APIs in System.Diagnostics, the WriteEntry methods allow setting the eventID and category. In these APIs:
eventID is a 32-bit int, but its value must be between 0 and 65535.
category is a 16-bit int, but its value must be positive. If the event source includes a category resource file, the Event Viewer will use the integer category value to look up a localized “Task category” string; otherwise, the integer value is displayed. The categories must be numbered consecutively, beginning with the number 1.
Log4net supports writing an EventID and a Category, but it isn’t straightforward. When log4net’s EventLogAppender logs an event, it looks at a dictionary of properties. The named properties "EventID" and "Category" are automatically mapped by the EventLogAppender to the corresponding values in the event log. I’ve seen a few good suggested ways to use log4net’s EventLogAppender and set the EventID and Category in the Windows event log.
a. Using log4net’s appender filtering, a filter may be registered that adds the EventID and Category properties. This method has the nice benefit that the standard log4net wrappers are used, so it can be implemented without changing existing logging code. The difficulty with this method is that some mechanism has to be created to calculate the EventID and Category from the logged information. For instance, the filter could look at the exception source and map that source to a Category value.
b. Log4net may be extended so custom logging wrappers can be used that can include EventID and Category parameters. Adding EventID is demonstrated in the log4net sample “Extensibility – EventIDLogApp” which is included in the log4net source. In the extension sample a new interface (IEventIDLog) is used that extends the standard ILog interface used by applications to log. This provides new logging methods that include an eventId parameter. The new logging methods add the eventId to the Properties dictionary before logging the event.
public void Info(int eventId, object message, System.Exception t)
{
    if (this.IsInfoEnabled)
    {
        LoggingEvent loggingEvent = new LoggingEvent(ThisDeclaringType, Logger.Repository, Logger.Name, Level.Info, message, t);
        loggingEvent.Properties["EventID"] = eventId;
        Logger.Log(loggingEvent);
    }
}
c. Log4net supports a ThreadContext object that contains a Properties dictionary. An application could set the EventID and Category properties in this dictionary and then when the thread calls a logging method, the values will be used by the EventLogAppender.
log4net.ThreadContext.Properties["EventID"] = 5;
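For instance (a sketch; "log" and the values are placeholders, and the keys are removed again so they only apply to this message), approach c can set both mapped properties:
log4net.ThreadContext.Properties["EventID"] = 5;
log4net.ThreadContext.Properties["Category"] = 2;
log.Error("Payment service unreachable");
log4net.ThreadContext.Properties.Remove("EventID");
log4net.ThreadContext.Properties.Remove("Category");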
Some helpful references:
Log4net home page
Log4net SDK reference
Log4net samples
Log4net source
Enhancing log4net exception logging
Log4Net Tutorial pt 6: Log Event Context
Customizing Event Log Categories
EventSourceCreationData.CategoryResourceFile Property
Event Logging Elements
EventLog.WriteEntry Method

Well, the solution was to build the extension project "log4net.Ext.EventID" and to use its types: IEventIDLog, EventIDLogImpl and EventIDLogManager.
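Usage would then look roughly like this (a sketch; it assumes the sample's EventIDLogManager mirrors log4net's LogManager, and MyService is a placeholder class):
// Obtain an event-id-aware logger from the extension sample's manager
IEventIDLog log = EventIDLogManager.GetLogger(typeof(MyService));
try
{
    // ... application work ...
}
catch (Exception ex)
{
    // 42 is an arbitrary example event id
    log.Info(42, "Operation failed", ex);
}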

Another solution is to add a custom Filter as described here: Enhancing log4net exception logging (direct link to the Gist just in case).
As the author points out:
... EventLogAppender uses inline consts to check them. Once they are added they will be used by the mentioned EventLogAppender to mark the given entries with EventId and Category.
The filter implementation will look like the code below (a stripped-down version of the gist), with the added benefit that if you make the GetEventId method public, you can write tests against it:
using System;
using log4net.Core;
using log4net.Filter;

public class ExceptionBasedLogEnhancer : FilterSkeleton
{
    private const string EventLogKeyEventId = "EventID";

    public override FilterDecision Decide(LoggingEvent loggingEvent)
    {
        var ex = loggingEvent.ExceptionObject;
        if (ex != null)
        {
            loggingEvent.Properties[EventLogKeyEventId] = GetEventId(ex);
        }

        return FilterDecision.Neutral;
    }

    private static short GetEventId(Exception ex)
    {
        // A fancier implementation can be provided here, such as hashing
        // exception properties or mapping exception types to event ids.
        // Return no more than short.MaxValue, otherwise the EventLog will throw.
        return 0;
    }
}
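The filter is normally registered on the EventLogAppender in the log4net XML configuration; if you would rather wire it up in code, a rough sketch (assuming log4net has already been configured) could look like this:
// Attach the filter to every configured EventLogAppender at startup.
// Requires "using System.Linq;" for OfType.
foreach (var appender in log4net.LogManager.GetRepository()
                                .GetAppenders()
                                .OfType<log4net.Appender.EventLogAppender>())
{
    appender.AddFilter(new ExceptionBasedLogEnhancer());
    appender.ActivateOptions();
}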

Extend ILog.Info() to take an event ID:
using log4net;

public static class LogUtils
{
    public static void Info(this ILog logger, int eventId, object message)
    {
        log4net.ThreadContext.Properties["EventID"] = eventId;
        logger.Info(message);
        log4net.ThreadContext.Properties["EventID"] = 0; // back to default
    }
}
Then call it like this (after importing the namespace that contains LogUtils):
private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
_logger.Info(3, "First shalt thou take out the Holy Pin, then shalt thou count to three.");

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    //...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    //...
    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
The answer from @pgorecki has convinced me, so I will give an example to confirm whether this way is correct, in order not to change too much.
It would now look like this.
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnail->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate is still outside the DTO, because MediaDeleted extends an abstract class that needs the ID parameter. The only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I pass a copy of the value objects related to the change in the domain; this way I pass objects reliably and avoid strange behaviour from sharing the same instance.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO), with all the information related to what just happened in the domain, and no logic. So I'd say "Create a specific DTO/object for each event with the data" is the best choice. Why not start with a less-is-more approach: think about the consumers of the event and what data they might need.
Also, being able to serialize and deserialize the event objects is good practice, since you may want to send them via a message broker (although this relates more to integration events than to domain events).

Configuring notification tag for Azure Function

I'm using an Azure function to pick up messages from an event hub and send out notifications via the Azure notification hub. Works great! Now I wanted to see whether I could add tags to those messages in order to allow user targeting via those tags.
The output for the notification hub has a "tag expression" parameter which you can configure. But this seems to be a static text. I need to dynamically set these tags based on the message received from the event hub instead. I'm not sure whether you can somehow put dynamic content in there?
I also found that the constructor of the GcmNotification object that I'm using has an overload which allows a tag string to be used. But when I try that I get a warning on compile time stating this is deprecated and when the function fires it shows an error because the Tag property should be empty.
So I'm not clear on a) whether this is at all possible and b) how to do it when it is. Any ideas?
Update: as suggested I tried creating a POCO object to map to my input string. The string is as follows:
[{"deviceid":"repsaj-neptune-win10pi","readingtype":"temperature1","reading":22.031614503139451,"threshold":23.0,"time":"2016-06-22T09:38:54.1900000Z"}]
The POCO object:
public class RuleMessage
{
    public string deviceid;
    public string readingtype;
    public object reading;
    public double threshold;
    public DateTime time;
}
For the function I now tried both RuleMessage[] and List<RuleMessage> as parameter types, but the function complains it cannot convert the input:
2016-06-24T18:25:16.830 Exception while executing function: Functions.submerged-function-ruleout. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'inputMessage'. Microsoft.Azure.WebJobs.Host: Binding parameters to complex objects (such as 'RuleMessage') uses Json.NET serialization.
1. Bind the parameter type as 'string' instead of 'RuleMessage' to get the raw values and avoid JSON deserialization, or
2. Change the queue payload to be valid json. The JSON parser failed: Cannot deserialize the current JSON array (e.g. [1,2,3]) into type 'Submission#0+RuleMessage' because the type requires a JSON object (e.g. {"name":"value"}) to deserialize correctly.
To fix this error either change the JSON to a JSON object (e.g. {"name":"value"}) or change the deserialized type to an array or a type that implements a collection interface (e.g. ICollection, IList) like List that can be deserialized from a JSON array. JsonArrayAttribute can also be added to the type to force it to deserialize from a JSON array.
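A rough sketch of the error's first suggestion (binding the payload as a string and deserializing it manually with Json.NET) would look something like this:
using System.Collections.Generic;
using Newtonsoft.Json;

// Bind the raw Event Hub payload as a string and deserialize the JSON array manually.
public static void Run(string inputEventMessage, TraceWriter log)
{
    var messages = JsonConvert.DeserializeObject<List<RuleMessage>>(inputEventMessage);
    log.Info($"Deserialized {messages.Count} rule message(s).");
}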
Function code:
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.Azure.NotificationHubs;
public static void Run(List<RuleMessage> inputEventMessage, string inputBlob, out Notification notification, out string outputBlob, TraceWriter log)
{
    if (inputEventMessage == null || inputEventMessage.Count != 1)
    {
        log.Info($"The inputEventMessage array was null or didn't contain exactly one item.");
        notification = null;
        outputBlob = inputBlob;
        return;
    }

    log.Info($"C# Event Hub trigger function processed a message: {inputEventMessage[0]}");

    if (String.IsNullOrEmpty(inputBlob))
        inputBlob = DateTime.MinValue.ToString();

    DateTime lastEvent = DateTime.Parse(inputBlob);
    TimeSpan duration = DateTime.Now - lastEvent;

    if (duration.TotalMinutes >= 0) {
        notification = GetGcmMessage(inputEventMessage[0]);
        log.Info($"Sending notification message: {notification.Body}");
        outputBlob = DateTime.Now.ToString();
    }
    else {
        log.Info($"Not sending notification message because of timer ({(int)duration.TotalMinutes} minutes ago).");
        notification = null;
        outputBlob = inputBlob;
    }
}

private static Notification GetGcmMessage(RuleMessage input)
{
    string message;

    if (input.readingtype == "leakage")
        message = String.Format("[FUNCTION GCM] Leakage detected! Sensor {0} has detected a possible leak.", input.reading);
    else
        message = String.Format("[FUNCTION GCM] Sensor {0} is reading {1:0.0}, threshold is {2:0.0}.", input.readingtype, input.reading, input.threshold);

    message = "{\"data\":{\"message\":\""+message+"\"}}";

    return new GcmNotification(message);
}

public class RuleMessage
{
    public string deviceid;
    public string readingtype;
    public object reading;
    public double threshold;
    public DateTime time;
}
Update 28-6-2016: I've now managed to get it working by switching the ASA output to line-separated so that it doesn't generate a JSON array any more. This is a temporary fix, because the Function binding now fails as soon as there is more than one line in the output (which can happen).
Anyway, I then proceeded to set the tagExpression; as per the instructions I changed it to:
{
  "type": "notificationHub",
  "name": "notification",
  "hubName": "repsaj-neptune-notifications",
  "connection": "repsaj-neptune-notifications_NOTIFICATIONHUB",
  "direction": "out",
  "tagExpression": "deviceId:{deviceid}"
}
Where {deviceid} equals the deviceid property on my RuleMessage POCO. Unfortunately this doesn't work; when I call the function it outputs:
Exception while executing function: Functions.submerged-function-ruleout. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'notification'. Microsoft.Azure.WebJobs.Host: No value for named parameter 'deviceid'.
Which is not true, I know for sure the property has been set as I've logged it to the output window. I also tried something like {inputEventMessage.deviceid}, but that doesn't work either (I don't see how the runtime would map {deviceid} to the correct input object when there's more than one).
The tagExpression binding property supports binding parameters coming from trigger input properties. For example, assume your incoming Event Hub event has properties A and B. You can use these properties in your tagExpression using the curly-brace syntax, e.g.: My Tag {A}-{B}.
In general, most of the properties across all the binding types support binding parameters in this way.

Is there a way to ignore some entity properties when calling EdmxWriter.WriteEdmx

I am specifically using breezejs, and the server code for breeze js converts the dbcontext into a form which is usable on the client side using EdmxWriter.WriteEdmx. There are many properties to which I have added JsonIgnore attributes so that they don't get passed to the client side. However, the metadata that is generated (and passed to the client side) by EdmxWriter.WriteEdmx still has those properties. Is there any additional attribute that I can add to those properties so that they are ignored by EdmxWriter.WriteEdmx? Or would I need to make a separate method so as not to have any other unintended side effects?
You can sub-class your DbContext with a more restrictive variant that you use solely for metadata generation. You can continue to use your base context for persistence purposes.
The DocCode sample illustrates this technique with its NorthwindMetadataContext which hides the UserSessionId property from the metadata.
It's just a few extra lines of code that do the trick.
public class NorthwindMetadataContext : NorthwindContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        // Hide from clients
        modelBuilder.Entity<Customer>().Ignore(t => t.CustomerID_OLD);

        // Ignore UserSessionId in metadata (but keep it in base DbContext)
        modelBuilder.Entity<Customer>().Ignore(t => t.UserSessionId);
        modelBuilder.Entity<Employee>().Ignore(t => t.UserSessionId);
        modelBuilder.Entity<Order>().Ignore(t => t.UserSessionId);

        // ... more of the same ...
    }
}
The Web API controller delegates to the NorthwindRepository where you'll see that the Metadata property gets metadata from the NorthwindMetadataContext while the other repository members reference an EFContextProvider for the full NorthwindContext.
public class NorthwindRepository
{
    private readonly EFContextProvider<NorthwindContext> _contextProvider;

    public NorthwindRepository()
    {
        _contextProvider = new EFContextProvider<NorthwindContext>();
    }

    public string Metadata
    {
        get
        {
            // Returns metadata from a dedicated DbContext that is different from
            // the DbContext used for other operations.
            // See NorthwindMetadataContext for more about the scenario behind this.
            var metaContextProvider = new EFContextProvider<NorthwindMetadataContext>();
            return metaContextProvider.Metadata();
        }
    }

    public SaveResult SaveChanges(JObject saveBundle)
    {
        PrepareSaveGuard();
        return _contextProvider.SaveChanges(saveBundle);
    }

    public IQueryable<Category> Categories {
        get { return Context.Categories; }
    }

    private NorthwindContext Context { get { return _contextProvider.Context; } }

    // ... more members ...
}
Pretty clever, eh?
Just remember that the UserSessionId is still on the server-side class model and could be set by a rogue client's saveChanges requests. DocCode guards against that risk in its SaveChanges validation processing.
If you use the [NotMapped] attribute on a property, then it should be ignored by the EDMX process.
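For example (a sketch with a made-up Customer class; note that, unlike the metadata-only context above, [NotMapped] removes the property from the EF model entirely, so it is no longer persisted either):
using System.ComponentModel.DataAnnotations.Schema;

public class Customer
{
    public int CustomerID { get; set; }
    public string CompanyName { get; set; }

    // Excluded from the EF model, and therefore from the generated EDMX metadata,
    // but also no longer mapped to a database column.
    [NotMapped]
    public string UserSessionId { get; set; }
}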

CRM 2011 PLUGIN to update another entity

My plugin is firing on Entity A, and in my code I am invoking a web service that returns an XML file with some attributes (attr1, attr2, attr3, etc.) for Entity B, including its GUID.
I need to update Entity B using the attributes I received from the web service.
Can I use the service context class (SaveChanges), or what is the best way to accomplish this task?
I would appreciate it if you provide an example.
There is no reason you need to use a service context in this instance. Here is a basic example of how I would solve this requirement. You'll obviously need to update this code to use the appropriate entities, implement your external web service call, and handle the field updates. In addition, it does not include the error checking and handling that production code should have.
I made an assumption you were using the early-bound entity classes, if not you'll need to update the code to use the generic Entity().
class UpdateAnotherEntity : IPlugin
{
    private const string TARGET = "Target";

    public void Execute(IServiceProvider serviceProvider)
    {
        //PluginSetup is an abstraction from: http://nicknow.net/dynamics-crm-2011-abstracting-plugin-setup/
        var p = new PluginSetup(serviceProvider);

        var target = ((Entity) p.Context.InputParameters[TARGET]).ToEntity<Account>();
        var updateEntityAndXml = GetRelatedRecordAndXml(target);

        var relatedContactEntity =
            p.Service.Retrieve(Contact.EntityLogicalName, updateEntityAndXml.Item1, new ColumnSet(true)).ToEntity<Contact>();

        UpdateContactEntityWithXml(relatedContactEntity, updateEntityAndXml.Item2);

        p.Service.Update(relatedContactEntity);
    }

    private static void UpdateContactEntityWithXml(Contact relatedEntity, XmlDocument xmlDocument)
    {
        throw new NotImplementedException("UpdateContactEntityWithXml");
    }

    private static Tuple<Guid, XmlDocument> GetRelatedRecordAndXml(Account target)
    {
        throw new NotImplementedException("GetRelatedRecordAndXml");
    }
}
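If you are using the late-bound model instead, the update portion might look roughly like this ("contact", the attribute name, and the relatedId/valueFromWebService variables are placeholders):
// Build a fresh Entity containing only the attributes to change, then update it.
var related = new Entity("contact") { Id = relatedId };
related["new_attr1"] = valueFromWebService;
p.Service.Update(related);
Updating a fresh Entity that carries only the changed attributes avoids an extra Retrieve and keeps the update from touching other fields.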

DSL Add Root Element to Serialization

I am looking for help to achieve the following
The Diagram represents a car, users can add engine and colour
when I view the XML it looks like this:
<Car>
  <Engine>BigEngine</Engine>
  <Colour>Pink</Colour>
</Car>
What I would like to do is wrap the car inside 'Vehicle', i.e.:
<Vehicle>
  <Car>
    <Engine>BigEngine</Engine>
    <Colour>Pink</Colour>
  </Car>
</Vehicle>
I am not sure of the best way to achieve this. I want the model explorer and the generated XML to be wrapped in 'Vehicle', but for all other purposes the user should be working with a car only.
Info: Visual Studio 2010, C# and DSL SDK for 2010
I would try two different approaches:
1st: override the DSL package's DocData class.
In the DocData.cs file, override the method
protected override void OnDocumentSaved(System.EventArgs e)
and create the wrapper there.
Afterwards, also override in DocData.cs
protected override void OnDocumentLoading(System.EventArgs e)
and, before calling the base method base.OnDocumentLoading(e), remove the wrapper from the file.
2nd: under DSL Explorer, go to XML Serialization Behaviour and set the Car domain class to "Is Custom = true".
This solution is not straightforward, but it's not as complicated as it seems at first. You must define every single method, but for each custom method you can call a DSL-generated method called "DefaulMethod" which has the default DSL serializer behaviour.
I am currently using VS 2005, so some things might have changed...
I have fixed this as follows: I am double-deriving the Car class, and in the Car serializer I am doing this:
Writing the extra elements:
public partial class CarSerializer : CarSerializerBase
{
    public override void Write(SerializationContext serializationContext, ModelElement element, XmlWriter writer, RootElementSettings rootElementSettings)
    {
        // Wrap the Car element in the extra root elements before delegating to the generated serializer
        writer.WriteStartElement("Garage");
        writer.WriteStartElement("Cars");
        base.Write(serializationContext, element, writer, rootElementSettings);
        writer.WriteEndElement();
        writer.WriteEndElement();
    }
}
To be able to read this back in, I am overriding the Car LoadModel method in the SerializationHelper; where it gets the reader, I read elements until I reach Car.
....
XmlReader reader = XmlReader.Create(fileStream, settings);
reader.MoveToContent();
while (!reader.EOF && !reader.Name.Equals("Car"))
{
    reader.Read();
}
reader = reader.ReadSubtree();
// using (global::System.Xml.XmlReader reader = global::System.Xml.XmlReader.Create(fileStream, settings))
using (reader)
{
....
