Has anyone used log4net with BizTalk? We are currently looking into using it and are trying to assess the pros and cons, and whether or not it would meet our needs.
I have used log4net with BizTalk, but I will say that out of the box I ran into issues. Every call out of a BizTalk orchestration can result in the orchestration getting dehydrated (serialized), so any type you use in an orchestration has to be serializable, and the log4net logger is not.
If you absolutely have to use log4net, there is a wrapper that Scott Colestock wrote here.
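For illustration, here is a minimal sketch of the idea behind such a wrapper (this is not Colestock's actual code, and the class and member names are hypothetical): keep the wrapper itself serializable, mark the non-serializable ILog field as [NonSerialized], and re-resolve the logger lazily after rehydration.

using System;
using log4net;

// Hypothetical sketch only: a serializable logging facade for use inside orchestrations.
// The ILog field is [NonSerialized] so dehydration succeeds; it is re-created after rehydration.
[Serializable]
public class OrchestrationLogger
{
    private readonly string loggerName;

    [NonSerialized]
    private ILog log;

    public OrchestrationLogger(string loggerName)
    {
        this.loggerName = loggerName;
    }

    private ILog Log
    {
        get { return log ?? (log = LogManager.GetLogger(loggerName)); }
    }

    public void Info(string message)  { Log.Info(message); }
    public void Error(string message) { Log.Error(message); }
}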
Assuming you are not locked in, I would just use Enterprise Library Logging; it offers almost the same functionality as log4net and works out of the box with BizTalk. You can find it here.
As for pros and cons, I will say that the two offer almost exactly the same functionality; I actually ended up creating a wrapper utility that makes the Enterprise Library Logging Block look more like log4net.
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class Logging
{
    public static void LogMessage(TraceEventType eventType, string category, string message)
    {
        // Build an Enterprise Library log entry and hand it to the configured trace listeners.
        LogEntry logEntry = new LogEntry();
        logEntry.Severity = eventType;
        logEntry.Priority = 1;
        logEntry.Categories.Add(category);
        logEntry.Message = message;
        Logger.Write(logEntry);
    }

    public static void LogError(string category, string message)
    {
        LogMessage(TraceEventType.Error, category, message);
    }

    public static void LogInfo(string category, string message)
    {
        LogMessage(TraceEventType.Information, category, message);
    }

    public static void LogVerbose(string category, string message)
    {
        LogMessage(TraceEventType.Verbose, category, message);
    }
}
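Calling the wrapper from an orchestration expression shape (or any helper code) then looks like this; the category and message strings are just examples:

// Example call sites; "OrderProcessing" is an illustrative category name.
Logging.LogInfo("OrderProcessing", "Received order message");
Logging.LogError("OrderProcessing", "Order validation failed for the received message");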
And if you need more, look here.
Have you considered using ETW? In my opinion this is the way to go for instrumenting BizTalk: http://blogs.msdn.com/b/asgisv/archive/2010/05/11/best-practices-for-instrumenting-high-performance-biztalk-solutions.aspx
One of the drawbacks of both log4net and Enterprise Library Logging is that you need configuration to enable them, so you have to manage the btsntsvc.exe.config files on all servers in your BizTalk group, which can be an overhead.
ETW is zero config.
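As a rough illustration of what zero-config tracing from .NET code can look like (the linked article covers the BizTalk-specific approach in detail), here is a minimal EventSource-based sketch; it assumes .NET 4.5+, and the provider name and event ids are made up:

using System.Diagnostics.Tracing;

// Minimal sketch: an ETW provider with two events. Nothing is written unless a listener
// (e.g. PerfView or logman) enables the provider, so there is no config file to manage.
[EventSource(Name = "MyCompany-BizTalk-Tracing")]
public sealed class BizTalkEventSource : EventSource
{
    public static readonly BizTalkEventSource Log = new BizTalkEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void Info(string message) { if (IsEnabled()) WriteEvent(1, message); }

    [Event(2, Level = EventLevel.Error)]
    public void Error(string message) { if (IsEnabled()) WriteEvent(2, message); }
}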
I've got to say that after using both log4net and the MS Enterprise Library for application logging on different projects, I prefer log4net. I particularly like the way that with log4net you can centralise the configuration in a single place (e.g. a database), rather than having to rely on the local btsntsvc.exe.config on each server.
This is particularly useful if you need to spin out new server instances to add to your farm - you've got enough to do without worrying about logging config. I've used log4net with both BTS2004 and BTS2006R2 and been satisfied. One thing I would recommend, whichever logging framework you go with: don't fall into the trap of using the Event Log as a sink. When you scale out across 10 BTS app servers, tracking errors becomes time-consuming, particularly as orchestration instances have no affinity to an app server and tend to move across your estate. Keep the Event Log for crucial OS and BTS service issues rather than custom application errors - it makes SCOM monitoring a lot less painful.
FYI - I too use log4net with Colestock's serializable wrapper, albeit with a few tweaks.
This is not related to the Cloud SDK per se, but more about mocking the S/4 endpoints that we usually query using the Cloud SDK.
We want to do this for our load test, where we would not want the traffic to go all the way to the S/4 endpoint. We are considering using WireMock to mock the endpoints, but the question is whether the mocking logic in WireMock itself will contribute non-negligibly to the metrics we are taking. If it does, the metrics become somewhat unreliable, since we want the app's performance metrics, not the mock framework's.
Another approach would be to use a dedicated mock server application, so that we would not have to do any mocking inside the app itself - just route the calls to the mock server app (using a mock destination, perhaps).
My question is: have you ever encountered this use case, either yourselves or from one of your consumers? I would like to know how other teams at SAP have solved this problem.
Thanks,
Sachin
In cases like yours, where the entire system (including responses from external services) should be tested, we usually recommend using WireMock.
This is because WireMock is rather easy to set up and works well enough for regular testing scenarios.
However, as you also pointed out, WireMock introduces significant runtime overhead for the tested code, thus rendering performance measurements of any kind more or less useless.
Hence, you could try mocking the HttpClient using Mockito instead:
// Canned 200 OK response built from Apache HttpClient types; the mocked client will return it for any request.
BasicHttpResponse page = new BasicHttpResponse(new BasicStatusLine(HttpVersion.HTTP_1_1, 200, "OK"));
page.setEntity(new StringEntity("hello world!"));

HttpClient httpClient = mock(HttpClient.class);
doReturn(page).when(httpClient).execute(any(HttpUriRequest.class));
This gives fine-grained control over what your application retrieves from the mocked endpoint without introducing any actual network activity.
Using the code shown above obviously requires your application under test to actually use the mocked httpClient.
Assuming you are using the SAP Cloud SDK in your application, this can be achieved by overriding the HttpClientCache used in the HttpClientAccessor with a custom implementation that returns your mocked client, like so:
class MockedHttpClientCache implements HttpClientCache
{
    @Nonnull
    @Override
    public Try<HttpClient> tryGetHttpClient(@Nonnull final HttpDestinationProperties destination, @Nonnull final HttpClientFactory httpClientFactory) {
        return Try.success(yourMockedClient);
    }

    @Nonnull
    @Override
    public Try<HttpClient> tryGetHttpClient(@Nonnull final HttpClientFactory httpClientFactory) {
        return Try.success(yourMockedClient);
    }
}
// in your test code:
HttpClientAccessor.setHttpClientCache(new MockedHttpClientCache());
I have an application built from a series of web servers and microservices, perhaps 12 in all. I would like to monitor and, importantly, map this suite of services in Applications Insights. Some of the services are built with Dot Net framework 4.6 and deployed as Windows services using OWIN to receive and respond to requests.
In order to get the instrumentation working with OWIN I'm using the ApplicationInsights.OwinExtensions package. I'm using a single instrumentation key across all my services.
When I look at my Applications Insights Application Map, it appears that all the services that I've instrumented are grouped into a single "application", with a few "links" to outside dependencies. I do not seem to be able to produce the "Composite Application Map" the existence of which is suggested here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-app-map.
I'm assuming that this is because I have not set a different "RoleName" for each of my services. Unfortunately, I cannot find any documentation that describes how to do so. On my map, the big circle in the middle is actually several different microservices.
I do see that the OwinExtensions package offers the ability to customize some aspects of the telemetry reported but, without a deep knowledge of the internal structure of App Insights telemetry, I can't figure out whether it allows the RoleName to be set and, if so, how to accomplish this. Here's what I've tried so far:
appBuilder.UseApplicationInsights(
    new RequestTrackingConfiguration
    {
        GetAdditionalContextProperties = ctx =>
            Task.FromResult(
                new[] { new KeyValuePair<string, string>("cloud_RoleName", ServiceConfiguration.SERVICE_NAME) }.AsEnumerable())
    });
Can anyone tell me how, in this context, I can instruct App Insights to collect telemetry which will cause a Composite Application Map to be built?
The following is the overall documentation on TelemetryInitializer, which is exactly what you want in order to set additional properties on the collected telemetry - in this case, setting the Cloud RoleName so that the application map is built.
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling#add-properties-itelemetryinitializer
Your telemetry initializer code would be something along the following lines...
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// A complete initializer class around the Initialize method (the class name is illustrative).
public class CloudRoleNameTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            // set role name correctly here.
            telemetry.Context.Cloud.RoleName = "RoleName";
        }
    }
}
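For the initializer to take effect it has to be registered with the Application Insights configuration at startup, either in ApplicationInsights.config or in code. In code it could look like this (the class name matches the sketch above):

using Microsoft.ApplicationInsights.Extensibility;

// Register once at service startup so every telemetry item gets the role name stamped on it before it is sent.
TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloudRoleNameTelemetryInitializer());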
Please try this and see if this helps.
I'm developing a Parse App and currently checking the backend security. I'm a bit lost regarding the Installation Class permissions. It is (by default) readable and writable by everyone. Thus, any user could delete every object of the class.
My question is: is it protected by default like the User class? Or should I add ACL for every new registration to push notifications? Or change the class level permissions?
Many thanks for your help,
Parse defaults to public read/write access for everything outside of User to streamline development.
Security measures will vary from one app to another depending on use-case, but assuming that you have associated each Installation to a User, I would highly recommend applying an ACL which gives public read and limits writes to the specific user.
In case you are not already associating each Installation to a User, here's a nice piece of cloud code to take care of it for you.
Parse.Cloud.beforeSave(Parse.Installation, function(request, response) {
  Parse.Cloud.useMasterKey();

  // Associate the installation with the user who saved it (if any).
  if (request.user) {
    request.object.set('user', request.user);
  } else {
    request.object.unset('user');
  }

  response.success();
});
A good place to start is creating ACLs that provide public read and user-specific write access. That one step alone will drastically improve security.
I have written a custom Windows Service that writes data to a custom Event Log (in the Windows Event Viewer).
For developing the business logic that the service uses, I created a Windows Forms app which simulates the Start/Stop methods of the Windows Service.
When executing the business logic via the Windows Forms app, info is successfully written to my custom Event Log. However, when I run the same logic from the custom Windows Service, information fails to be written to the Event Log.
To be clear, I have written a library (.dll) that does all the work that I want my custom service to do - including the create/write to the custom Event Log. My Form application references this library as does my Windows Service.
Thinking the problem is a security issue, I manually set the custom Windows Service to "Log on" as "Administrator", but the service still did not write to the Event Log.
I'm stuck on how to even troubleshoot this problem since I can't debug and step into the code when I run the service (if there is a way to debug a service, please share).
Do you have any ideas as to what could be causing my service to fail to write to the event log?
I use it like this. There may be some typos - I wrote it in my phone's browser...
using System;
using System.Diagnostics;

public class MyClass
{
    private EventLog eventLog = new EventLog();

    public MyClass()
    {
        // Creating a new event source requires administrative privileges.
        if (!EventLog.SourceExists("MyLogSource"))
            EventLog.CreateEventSource("MyLogSource", "MyLogSource_Log");

        eventLog.Source = "MyLogSource";
        eventLog.Log = "MyLogSource_Log";
    }

    private void MyLogWrite(Exception ex)
    {
        eventLog.WriteEntry(ex.ToString(), EventLogEntryType.Error);
    }
}
To debug a running service you need to attach to the process. See here for the steps.
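Another option, if attaching is awkward, is to have the service request a debugger itself when it starts; a common trick is to guard a Debugger.Launch() call so it only runs in debug builds:

protected override void OnStart(string[] args)
{
#if DEBUG
    // Prompts to attach a debugger when the service starts (debug builds only).
    System.Diagnostics.Debugger.Launch();
#endif
    // ... normal start-up logic ...
}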
You could also add parameter checking to the Main entry point and have a combination service and console app which would start based on some flag. See this SO post for a good example but here's a snippet:
using System;
using System.ServiceProcess;

namespace WindowsService1
{
    static class Program
    {
        static void Main(string[] args)
        {
            // When started by the Service Control Manager, args is an empty array (not null),
            // so check the length rather than only for null.
            if (args == null || args.Length == 0)
            {
                Console.WriteLine("Starting service...");
                ServiceBase.Run(new ServiceBase[] { new Service1() });
            }
            else
            {
                Console.WriteLine("Hi, not from service: " + args[0]);
            }
        }
    }
}
The above starts the app in console mode if any parameters exist and in service mode if there are none. Of course it can be much fancier, but that's the gist of the switch.
I discovered why my service wasn't writing to the Event Log.
The problem had nothing to do with the code/security/etc. that was attempting to write to the Event Log. The problem was that my service wasn't successfully collecting the information that gets written to the Event Log - therefore, the service wasn't even attempting to write the log entry.
Now that I fixed the code that collects the data, data is successfully writing to the event log.
I'm open to having this question closed, since the question didn't reflect the real problem.
We're working on logging in our applications, using log4net. We'd like to capture certain information automatically with every call; the code calling log.Info or log.Warn should call them normally, without specifying this information.
I'm looking for a way to create something we can plug into log4net - something between the ILog the applications use to log and the appenders - so that we can put this information into the log message somehow, either into the ThreadContext or the LoggingEvent.
The information we're looking to capture is ASP.NET related: the request URL, user agent, etc. There's also some information from the app's .config file we want to include (an application id).
I want to get between the normal ILog.Info and the appenders so that this information is also automatically included for 3rd-party libraries which also use log4net (NHibernate, NServiceBus, etc.).
Any suggestions on where the extensibility point I want might be?
Thanks
What you are looking for is called log event context. This tutorial explains how it works:
http://www.beefycode.com/post/Log4Net-Tutorial-pt-6-Log-Event-Context.aspx
In particular, the chapter 'Calculated Context Values' will be interesting for you.
Update:
My idea was to use the global context. It is easy to see how this works for something like an application ID (in fact, there you do not even need a calculated context object). Dynamic information like the request URL could be done like this:
public class RequestUrlContext
{
    public override string ToString()
    {
        // Evaluated each time a log event is formatted; assumes an ASP.NET request context is available.
        var context = System.Web.HttpContext.Current;
        return context != null ? context.Request.Url.ToString() : string.Empty;
    }
}
The object is global, but the method is called on the thread that handles the request, so you get the correct information. I also recommend that you create one class per "information entity" so that you have a lot of flexibility with the output at the log destination.
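To wire this up, both the static application id and the calculated object would be registered once at startup and then referenced from the appender's pattern layout; the property names below are just examples:

using System.Configuration;

// Run once at application start-up (e.g. Global.asax Application_Start).
// Static value read straight from the .config file:
log4net.GlobalContext.Properties["appId"] = ConfigurationManager.AppSettings["ApplicationId"];
// Calculated value: ToString() is evaluated each time a log event is formatted:
log4net.GlobalContext.Properties["requestUrl"] = new RequestUrlContext();

In the appender configuration these then show up in the conversionPattern as %property{appId} and %property{requestUrl}.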