log4net does a great job of turning log levels on and off on a per-class basis. But I have a case where I am implementing an interface and I have a ton of logging in this implementation. It's a 3rd-party interface, so I'm stuck having all this code, and hence all this logging, in a single class.
I've looked at creating helper classes for each of the areas of this implementation, but that actually makes things messier. Partial classes work great for this, but again - still all one class.
Is there a good way to further specify which logging statements to log? The best I've come up with is an enum of the categories in the class, tagging each logging statement with a specific enum value, and then using that enum in the check: if (log.IsDebugEnabled && MyEnum == true).
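Roughly, this is the shape of what I mean - a minimal sketch, where names like LogCategory, DebugFor and MyAlgorithm are just placeholders for whatever the real class uses:

using log4net;

public class MyAlgorithm
{
    private static readonly ILog log = LogManager.GetLogger(typeof(MyAlgorithm));

    // One flag per "area" of logging inside this single class.
    [System.Flags]
    private enum LogCategory
    {
        None        = 0,
        Orders      = 1,
        Stock       = 2,
        Pricing     = 4,
        Predictions = 8
    }

    // Which categories are switched on right now (could just as easily come from config).
    private LogCategory enabledCategories = LogCategory.Orders | LogCategory.Predictions;

    private void DebugFor(LogCategory category, string message)
    {
        // Normal log4net level check plus the per-category check.
        if (log.IsDebugEnabled && (enabledCategories & category) != 0)
            log.Debug(message);
    }

    public void PlaceOrder()
    {
        DebugFor(LogCategory.Orders, "order placed");
    }
}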
Anything better?
Update: per the request below, actually showing all of this would be so long that no one would read it, so here's a summary:
log.Debug("order placed");
// ...
log.Debug("new stock added");
// ...
log.Debug("price change");
// ...
log.Debug("order changed");
// ...
log.Debug("predicted price");
// ...
log.Debug("profit taken");
// ... the types of actions go on and on
All of the above is implementing a QCAlgorithm in QuantConnect. Basically, there are times I want the order details logged. Other times I don't want those, but I do need the predicted-price and actual-price logging.
Background
A data structure in my database consists of "sections": lists of custom objects. The number of sections may expand in the future, so to keep my code as DRY as possible, I wanted the section from which to add/update/delete an item to be defined dynamically as a parameter.
I quickly realised that doing something like @Body() section: SectionA | SectionB | SectionC... disables validation, so I needed a single DTO, Section, that could encompass all sections. To do that I need to define dynamically which validators to apply, as I have several @IsNotEmpty constraints.
So I came across this post whose selected answer recommends the usage of groups.
This posed the following challenges:
I now had to write a custom validation pipe (I relied heavily on this).
I wanted to override the global validation pipe that I already had running and use my custom one for just that method. Outcome: it didn't work; I had to start defining the pipe on every controller method, a tradeoff I am willing to accept. It looks like there is no simple alternative.
However, I was then faced with the final problem: how to use the parameters in the request to define these groups in the validator. Another brick wall; no simple solution.
Solution
This question has been asked here but no satisfactory solution was actually given.
Option one recommended redefining the scope of the pipe to "request" level but didn't explain how, and solutions found online didn't work.
The second solution, using a custom decorator to perform the validation instead, did work, and very well in fact. Here is a simplified version of the code:
import { createParamDecorator, ExecutionContext, BadRequestException } from '@nestjs/common';
import { plainToInstance } from 'class-transformer';
import { validate } from 'class-validator';

export const ProfileSectionData = createParamDecorator(
  async (data: unknown, ctx: ExecutionContext) => {
    const request = ctx.switchToHttp().getRequest();
    // I don't need to access the metatype from the request because I know what type I need,
    // but I'm sure I could if need be.
    const object = plainToInstance(SectionDto, request.body);
    const groups = [request.params.profileSection];
    const validatorOptions = { groups, ...defaultOptions };
    const errors = await validate(object, validatorOptions);
    if (errors.length > 0) {
      throw new BadRequestException();
    }
    return request.body;
  },
);
Implications?
Here's my question. When Jay McDoniel recommended using a custom decorator, they warned: "Do note, that this could impact how the ValidationPipe is functioning if that is bound globally, at the class, or method level."
What does this mean?
Are there any vulnerabilities or performance drawbacks associated with this solution?
Obviously, one drawback is that you are performing validation outside a validation pipe, which is not ideal from the point of view of order and single responsibility, but I can't think of tangible inconveniences beyond aesthetics and maintainability.
Knowing the background, would you have approached the problem in a completely different way?
I am mocking an interface array which throws java.lang.IllegalArgumentException: Cannot subclass final class class.
Following are the changes I made.
Added the following annotations at the class level, in this exact order:
@RunWith(PowerMockRunner.class)
@PrepareForTest({ Array1[].class, Array2[].class })
Inside the class I am doing this:
Array1[] test1 = PowerMockito.mock(Array1[].class);
Array2[] test2 = PowerMockito.mock(Array2[].class);
and inside test method:
Mockito.when(staticclass.somemethod()).thenReturn(test1);
Mockito.when(staticclass.somediffmethod()).thenReturn(test2);
Basically I need to mock an array of interfaces.
Any help would be appreciated.
Opening up another perspective on your problem: I think you are getting unit tests wrong.
You only use mocking frameworks in order to control the behavior of individual objects that you provide to your code under test. But there is no sense in mocking an array of something.
When your "class under test" needs to deal with some array, list, map, whatever, then you provide an array, a list, or a map to it - you just make sure that the elements within that array/collection ... are as you need them. Maybe the array is empty for one test, maybe it contains a null for another test, and maybe it contains a mocked object for a third test.
Meaning - you don't do:
SomeInterface[] test1 = PowerMockito.mock() ...
Instead you do:
SomeInterface[] test1 = new SomeInterface[] { PowerMockito.mock(SomeInterface.class) };
And allow me some notes:
At least in your code, it looks like you called your interfaces "Array1" and "Array2". That is highly misleading. Give interfaces names that say what their behavior is about. The fact that you later create arrays containing objects of that interface ... doesn't matter at all!
Unless you have good reasons, consider not using PowerMock. PowerMock relies on byte-code manipulation and can simply cause a lot of problems. In most situations, people wrote untestable code and then turn to PowerMock to somehow test it. But the correct answer is to rework that broken design and to use a mocking framework that comes without "power" in its name. You can watch those videos giving lengthy explanations of how to write testable code!
I have a Selenium test, which is executed with the help of the Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {

    URL remoteAddress   // Address of SE grid
    Capabilities caps   // Desired capabilities
    WebDriver driver    // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases a web view/page). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'); but another thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an 'ideal' world, I'd like to have a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas to implement this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members, so each call to 'driver' is replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to instantiate a fresh web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to be used.
I'd appreciate it if anybody who has met this case could share ideas on how to properly organize the testing process. Other approaches, or the pros and cons of those above, are also welcome. Considering another test tool could be an option as well.
You could create an abstract BaseSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each browser you want to test, e.g.:
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        super.driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all of the browsers.
What I need: a class with two parents, which are ContextBoundObject and another class.
Why: I need to access the ContextBoundObject to log the method calls.
Does composition work? As of now, no (types are not recognized, among other things).
Are there other ways to do this? Yes, but none so automatable and without third-party components (maybe a T4 template could do it, but I'm no expert).
A more detailed explanation:
I need to extend System classes (some of which already have MarshalByRefObject, the parent of ContextBoundObject, as an ancestor, for example ServiceBase and FileSystemWatcher, and some of which do not, for example Exception and Timer) to access some inner workings of the framework, so I can log method calls (for now; in the future it may change).
This way I only have to add a class name to the object I want to log, instead of adding logging calls to every method, but obviously I can't do this:
public class MyService : ServiceBase, ContextBoundObject, IDisposable
{
    public MyService() { }
    public void Dispose() { }
}
so one could try the usual solution, interfaces, but then if I call Run as in:
ServiceBase.Run(new MyService());
using a hypothetical interface IServiceBase it wouldn't work, because the type ServiceBase is not castable to IServiceBase -- it doesn't inherit from any interface. The problem is even worse with exceptions: throw only accepts a type descending from Exception.
The reverse, producing an IContextBoundObject interface, doesn't seem to work either: the logging mechanism doesn't work by methods, so I don't need to implement any, just an attribute and some small internal classes (and inheriting from ContextBoundObject, not even from MarshalByRefObject, which the metadata presents as practically the same).
From what I see, extending ContextBoundObject puts the extended class behind a proxy (probably because that way the method calls go through SyncProcessMessage(IMessage) and so can be intercepted and logged). Maybe there's a way to do it without inheritance, or maybe there are pre- or post-compile techniques available for surrounding methods with logging calls (like T4 text templates); I don't know.
If someone wants to give this a look, I used a customized version of MSTestExtentions in my program to do the logging (of the method calls).
Any ideas are appreciated. There could be the need for more explanations, just ask.
Logging method calls is usually done using attributes to annotate classes or methods for which you want to have logging enabled. This is called Aspect Oriented Programming.
For this to work, you need a tool that understands those attributes and post-processes your assembly, adding the necessary code to the methods/classes that have been annotated.
For C# there exists PostSharp. See here for an introduction.
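For illustration, here is a rough sketch of what such an annotated logging aspect could look like with PostSharp's OnMethodBoundaryAspect (the attribute name, the OrderService class and the messages are placeholders, not your actual types):

using System;
using PostSharp.Aspects;

// PostSharp post-processes the assembly and weaves OnEntry/OnExit calls
// into every method covered by this attribute.
[Serializable]
public class LogCallsAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving " + args.Method.Name);
    }
}

// Applying the attribute at class level multicasts it to all methods of the class.
[LogCalls]
public class OrderService
{
    public void PlaceOrder() { /* ... */ }
}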
Experimenting with proxies I found a way that apparently logs explicit calls.
Essentially I create a RealProxy like in the example on MSDN, then obtain the TransparentProxy and use that as the normal object.
The logging is done in the Invoke method overridden in the customized RealProxy class.
static void Main()
{
    ...
    var ServiceClassProxy = new ServiceRealProxy(typeof(AServiceBaseClass), new object[] { /*args*/ });
    aServiceInstance = (AServiceBaseClass)ServiceClassProxy.GetTransparentProxy();
    ServiceBase.Run(aServiceInstance);
    ...
}
In the proxy class, Invoke is overridden like this:
class ServiceRealProxy : RealProxy
{
    ...
    [SecurityPermissionAttribute(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public override IMessage Invoke(IMessage myIMessage)
    {
        // remember to set the "__Uri" property you get in the constructor
        ...
        /* logging before */
        myReturnMessage = ChannelServices.SyncDispatchMessage(myIMessage);
        /* logging after */
        ...

        // it could be useful to make a switch over all the types derived from IMessage; I see 18 of them, from
        // System.Runtime.Remoting.Messaging.ConstructionCall
        // ... to
        // System.Runtime.Remoting.Messaging.TransitionCall
        return myReturnMessage;
    }
    ...
}
I still have to investigate extensively, but the logging happened. This isn't an answer to my original problem, because I still have to test this on classes that don't inherit from MarshalByRefObject.
When I enabled code contracts on my WPF control project I ran into a problem with an auto generated file which was created at compile time (XamlNamespace.GeneratedInternalTypeHelper). Note, the generated file is called GeneratedInternalTypeHelper.g.cs and is not the same as the GeneratedInternalTypeHelper.g.i.cs which there are several obsolete blog posts about.
I'm not exactly sure what its purpose is, but I am assuming it is important for some internal reflection to resolve XAML. The problem is that it does not have code contracts, nor is the code contract system smart enough to recognize it as an auto generated file. This leads to a bunch of errors from the static checker.
I tried searching for a solution to this problem, but it seems like nobody is developing WPF controls and using code contracts. I did come across an interesting attribute, ContractVerificationAttribute, which takes a boolean value to set whether the assembly or class is to be verified. This allows you to decorate a class as not verified. Sadly, the GeneratedInternalTypeHelper is regenerated with every compile, so it is not possible to exclude just this one class. The inverse scenario is possible though: decorate the assembly as not verified and then opt in for every class.
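In other words, something along these lines (a rough sketch of the opt-out/opt-in idea; MyControl is just a placeholder):

using System.Diagnostics.Contracts;

// e.g. in AssemblyInfo.cs: turn static verification off for the whole assembly,
// which also covers the regenerated GeneratedInternalTypeHelper...
[assembly: ContractVerification(false)]

// ...and then opt back in on every class you actually want the static checker to analyze.
[ContractVerification(true)]
public class MyControl
{
    // members with contracts ...
}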
To mitigate the obvious hack, I wanted to create a test that would at least verify that the exposed classes have code contract verification, something like the following, to ensure that my own classes were at least being verified:
[Fact]
public void AllAssemblyTypesAreDecoratedWithContractVerificationTrue()
{
    var assembly = typeof(someType).Assembly;
    var exposedTypes = assembly.GetTypes()
        .Where(t => !string.IsNullOrWhiteSpace(t.Namespace)
                    && t.Namespace.StartsWith("MyNamespace")
                    && !t.Name.StartsWith("<>"));

    var areAnyNotContractVerified = exposedTypes.Any(t =>
    {
        var verificationAttribute = t.GetCustomAttributes(typeof(ContractVerificationAttribute), true)
                                     .OfType<ContractVerificationAttribute>();
        // A type counts as "not contract verified" unless it carries [ContractVerification(true)].
        return !verificationAttribute.Any() || !verificationAttribute.First().Value;
    });

    Assert.False(areAnyNotContractVerified);
}
As you can see, it takes all classes in the controls assembly and finds the ones from the company namespace which are not auto-generated anonymous types (<>WeirdClassName).
(I also need to exclude Resources and settings, but I hope you get the idea).
I'm not loving the solution since there are ways of avoiding contract verification, but currently it's the best I can come up with. If anyone has a better solution, please let me know.
So you can treat this class exactly like you would treat any other "3rd party" class or library. I'm sure certain assumptions would hold in the interaction with this generated class, so at the interaction points, decorate your own code with Contract.Assume(result != null) or similar.
var result = new GennedClass().GetSomeValue();
Contract.Assume(result != null);
What this does is translate into an assertion that is checked at run time, but it allows the static analyzer to reason about the rest of the code that you do control.