How to retain some of the interface methods' default implementations in the implementing class in C# 8.0? - ReSharper

One would think that in C# 8.0 you should be able to do the following (according to this (1st snippet)):
public interface IRestApiClient : IRestClient
{
...
Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
{
return QueryPrivateAsync(Method.POST, action, parameters, deserializer);
}
...
}
public class SpecificClient : ExchangeClient, IRestApiClient, IRestHtmlClient, ISeleniumClient, IWebSocketClient
{
}
The example above won't compile because the interface members need to be explicitly and wholly implemented (including the methods supplying the default logic).
So one would think that the following should work:
public interface IRestApiClient : IRestClient
{
...
Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
{
return QueryPrivateAsync(Method.POST, action, parameters, deserializer);
}
...
}
public class SpecificClient : ExchangeClient, IRestApiClient, IRestHtmlClient, ISeleniumClient, IWebSocketClient
{
...
public async Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
=> await ((IRestApiClient) this).PostPrivateAsync(action, parameters, deserializer);
...
}
Nope, it looks like this method is recursive (despite the upcast) and will cause our favorite Stack Overflow exception.
So my question is (setting aside the fact that I could change the design in my example): is there a way of keeping the default implementation for a specific method, preferably without resorting to hacky workarounds or static helper extension methods? I could call a static extension method in both the interface and the class, but that kind of defeats the purpose of this feature.
// EDIT
I must admit this confuses me, and it appears I am missing something critical that is obvious to other people. I didn't provide additional info because I didn't consider my issue to be code-specific. Let's look at this simple example (taken from the website I linked at the beginning of my post):
According to @Panagiotis Kanavos's comment: No, default members don't need to be implemented (...) what I screenshotted should not be true. Can somebody please enlighten me?
// EDIT 2
As you can see I am properly targeting .NET Core 3.0 with C# 8.0.
ERRORS:
Interface method cannot declare a body
Interface member 'void CryptoBotCoreMVC.IDefaultInterfaceMethod.DefaultMethod()' is not implemented
To answer the question in the comments: I didn't specify LangVersion explicitly in the .csproj file.
// EDIT 3
The issue was ReSharper, see:
https://stackoverflow.com/a/58614702/3783852
My comment has been deleted, presumably by the owner of the answer, so I'll write it here: the clue was the fact that there were actually no error numbers, yet the compilation was blocked. It turned out that there is an option in ReSharper to block compilation when these errors occur.
It seems that in the end this is a possible duplicate, but getting to this conclusion was quite a journey :).

The issue is caused by ReSharper, reference:
https://youtrack.jetbrains.com/issue/RSRP-474628
It appears that the problem will be resolved in v2019.3, and we currently have v2019.2.3. You can set up ReSharper to block compilation depending on issue severity; the workaround is to disable this feature for the time being.
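For completeness, once compilation is no longer blocked, a default interface member compiles and runs on .NET Core 3.0 with C# 8.0 without the class re-implementing it. A minimal sketch (the IDefaultInterfaceMethod name mirrors the error message above; everything else is hypothetical):
using System;

interface IDefaultInterfaceMethod
{
    // Default implementation lives on the interface
    void DefaultMethod() => Console.WriteLine("I am a default method in the interface!");
}

class AnyClass : IDefaultInterfaceMethod
{
    // No DefaultMethod here: the interface's default implementation is used
}

class Program
{
    static void Main()
    {
        // Default members are only reachable through the interface type
        IDefaultInterfaceMethod instance = new AnyClass();
        instance.DefaultMethod();
    }
}
Note that the default member is invoked through the interface reference; a class-typed reference does not expose it unless the class implements it itself.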

Related

Mockito isNotNull passes null

Thanks in advance for the help -
I am new to Mockito but have spent the last day looking at examples and the documentation, and I haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE number of times, per path marked for delete. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test) so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seem pretty straightforward... here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically you tell it what type of arguments to look for, it captures them, and then after the call you can assert the values you were looking for. Give the below code a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());
for (Path path : pathCaptor.getAllValues()) {
assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
private List<Log> logs = new ArrayList<>();
public void deleteLogs() {
logs.clear();
}
public int getLogCount() {
return logs.size();
}
}
Then this would be a valid test that doesn't use Mockito:
@Test public void deleteLogsShouldReturnZeroLogCount() {
MonitoringController controllerUnderTest = new MonitoringController();
controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere
// that this works
controllerUnderTest.deleteLogs();
assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
private final LogRepository logRepository;
public MonitoringController(LogRepository logRepository) {
// By passing in your dependency, you have made the creator of your class
// responsible. This is called "Inversion-of-Control" (IoC), and is a key
// tenet of dependency injection.
this.logRepository = logRepository;
}
public void deleteLogs() {
logRepository.delete(RecordMatcher.ALL);
}
public int getLogCount() {
return logRepository.count(RecordMatcher.ALL);
}
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test public void deleteLogsShouldCallRepositoryDelete() {
LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
MonitoringController controllerUnderTest =
new MonitoringController(mockLogRepository);
controllerUnderTest.deleteLogs();
// Now you can check that your REAL MonitoringController calls
// the right method on your MOCK dependency.
Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.

ServiceStack: RESTful Resource Versioning

I've read the Advantages of message based web services article and am wondering if there is a recommended style/practice for versioning RESTful resources in ServiceStack? The different versions could render different responses or have different input parameters in the Request DTO.
I'm leaning toward URL-based versioning (e.g. /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (e.g. Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not so much a requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
[Route("/v1.1/Movies", "GET")]
public class Movies
{
...
}
}
namespace Samples.Movies.Operations.v1
{
[Route("/v1/Movies", "GET")]
public class Movies
{
...
}
}
and corresponding services...
public class MovieService: ServiceBase<Samples.Movies.Operations.v1.Movies>
{
protected override object Run(Samples.Movies.Operations.v1.Movies request)
{
...
}
}
public class MovieService: ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
{
...
}
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you start to support your first version the development effort to maintain multiple versions of the same service explodes, as you will need to maintain manual mapping between different types, which easily leaks into having to maintain multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to explicitly version but take advantage of the versioning capabilities inside the serialization formats.
E.g: you generally don't need to worry about versioning with JSON clients as the versioning capabilities of the JSON and JSV Serializers are much more resilient.
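To illustrate why that tends to hold, here is a rough sketch using ServiceStack.Text's JsonSerializer (the FooV2 DTO is hypothetical): extra properties sent by newer clients are simply ignored during deserialization, and properties missing from older payloads are left at their defaults.
using System;
using ServiceStack.Text;

public class FooV2
{
    public string Name { get; set; }
    public string DisplayName { get; set; } // added in a later version
}

class Demo
{
    static void Main()
    {
        // A v1 client never sends DisplayName: it just deserializes as null
        var fromOldClient = JsonSerializer.DeserializeFromString<FooV2>("{\"Name\":\"Foo Bar\"}");

        // A newer client may send fields this DTO doesn't know about: they are ignored
        var fromNewClient = JsonSerializer.DeserializeFromString<FooV2>(
            "{\"Name\":\"Bar\",\"DisplayName\":\"Foo Bar\",\"Age\":18}");

        Console.WriteLine(fromOldClient.DisplayName == null); // True
        Console.WriteLine(fromNewClient.DisplayName);         // Foo Bar
    }
}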
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you add IExtensibleDataObject to your response DTOs you also have the potential to access data that's not defined on the DTO (a small DTO sketch follows after the namespace tip below). My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - If you need it to be a different type add another property and use the old/existing one to determine the version
Program defensively: realize which properties don't exist for older clients, so don't make them mandatory.
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e:
namespace MyServiceModel.DtoTypes {
[DataContract(Namespace="http://schemas.servicestack.net/types")]
public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
WsdlServiceNamespace = "http://schemas.my.org/types"
});
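Putting the IExtensibleDataObject tip from above into code, a minimal response DTO might look like this (the MovieResponse name is just for illustration):
using System.Runtime.Serialization;

[DataContract(Namespace = "http://schemas.servicestack.net/types")]
public class MovieResponse : IExtensibleDataObject
{
    [DataMember]
    public string Title { get; set; }

    // Round-trips any fields sent by newer clients that this version
    // of the DTO doesn't declare, instead of silently dropping them
    public ExtensionDataObject ExtensionData { get; set; }
}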
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your service needs to tweak its behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs you publish, you can happily create them without any thought of versioning.
class Foo {
string Name;
}
But maybe for some reason the Form/UI was changed and you no longer wanted the Client to use the ambiguous Name variable and you also wanted to track the specific version the client was using:
class Foo {
Foo() {
Version = 1;
}
int Version;
string Name;
string DisplayName;
int Age;
}
Later it was decided in a team meeting that DisplayName wasn't good enough and that it should be split out into different fields:
class Foo {
Foo() {
Version = 2;
}
int Version;
string Name;
string DisplayName;
string FirstName;
string LastName;
DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });
v2 Release:
client.Post(new Foo { Name="Bar", DisplayName="Foo Bar", Age=18 });
v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
public object Post(Foo request) {
//v1:
request.Version == 0
request.Name == "Foo"
request.DisplayName == null
request.Age == 0
request.DateOfBirth == null
//v2:
request.Version == 2
request.Name == null
request.DisplayName == "Foo Bar"
request.Age == 18
request.DateOfBirth == null
//v3:
request.Version == 3
request.Name == null
request.DisplayName == null
request.FirstName == "Foo"
request.LastName == "Bar"
request.Age == 0
request.DateOfBirth == new DateTime(1994, 01, 01)
}
}
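If the service really does need to branch on the client version, one hedged way to keep everything in the single (v3) implementation is to normalize older payloads up to the latest shape at the top of the same Post method, for example (how DisplayName should be split back into first/last name is only a guess):
class FooService : Service {
    // Same service as above, with version normalization added at the top
    public object Post(Foo request) {
        // Upgrade a v2-shaped request (DisplayName/Age, Version == 2) to the v3 shape
        if (request.Version < 3 && request.DisplayName != null)
        {
            var parts = request.DisplayName.Split(' ');
            request.FirstName = parts[0];
            request.LastName = parts.Length > 1 ? parts[1] : null;
        }

        // Upgrade a v1-shaped request (Name only, Version defaults to 0)
        if (request.Version == 0 && request.Name != null)
        {
            request.FirstName = request.Name;
        }

        // ...from here on, only the latest (v3) fields need to be handled
        return null;
    }
}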
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a type of implicit versioning. It uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression or for changes to expressiveness. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version. However, the code is still mixed together in the same methods. It is very easy for code for the different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync to any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, the older versions are re-implemented on top of the new layer. This is generally easy because the changes are small.
It really shines in two ways: first, all the code to handle the mapping is in one spot, so it is easy to understand or remove later; and second, it doesn't require maintenance as new APIs are developed (the Russian doll model).
The problem is when the new API is less expressive than the old API. This is a problem that will need to be solved no matter which solution you choose for keeping the old version around; this approach just makes it clear that there is a problem and where the solution for that problem lives.
mythz's example rewritten in this style is:
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // Newest version: populated directly from the current data model
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
            return request;
        }
    }
}
namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // Older version: delegate to the v3 service, then map the result back onto the v2 shape
            var v3Response = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Response.FirstName + " " + v3Response.LastName;
            request.Age = (DateTime.Now - v3Response.DateOfBirth).Days / 365;
            return request;
        }
    }
}
namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // Oldest version: delegate to the v2 service and keep only the v1 fields
            var v2Response = APIv2.FooService.OnPost(request);
            request.Name = v2Response.DisplayName;
            return request;
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style, only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). The dependency of previous APIs is static when you add future APIs or change the dependency of the API layer. For example, if the data source changes then only the most recent API (version 3) needs to change in this style. In the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem because these types are exposed externally. Providing the types explicitly in the code base makes them easy to discover and isolate in testing. It is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic, but only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the below. (Based on a lot of Googling and StackOverflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or the Request Header. There are pros/cons for both approaches, so I think each of us needs to use what meets our requirements best.
This is about how to design/architect the Java message objects and the resource implementation classes.
So let’s get to it.
I would approach this in two parts: minor changes (e.g. 1.0 to 1.1) and major changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let's say we go by the same example classes used by @mythz.
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS,
#Path("/{versionid}/fooresource")
public class FooResource {
#GET
#Path( "/{id}" )
public Foo getFoo (#PathParam("versionid") String versionid, (#PathParam("id") String fooId)
{
Foo foo = new Foo();
//setters, load data from persistence, handle business logic etc
Return foo;
}
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
string Name;
string DisplayName;
int Age;
}
What I do at this point is annotate the properties with a @Version annotation
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that will, based on the requested version, return to the user only the properties that match that version. Note that, for convenience, if there are properties that should be returned for all versions, you just don't annotate them and the filter will return them irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version and it can get very complicated, but I hope you get the idea.
Approach for Major Version
Now this can get quite complicated when there are a lot of changes from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that codebase, and host both versions in different contexts. At this point we might have to refactor the codebase a bit to remove the version-mediation complexity introduced in the first approach (i.e. make the code cleaner). This might mainly be in the filters.
Note that this is just what I am thinking and haven't implemented yet; I wonder if this is a good idea.
Also I was wondering if there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that are as simple as using a filter. Maybe I haven't searched enough.
Interested in knowing thoughts of others and if this solution will address the original question.

Why can't I add Contract.Requires in an overridden method?

I'm using code contract (actually, learning using this).
I'm facing something weird... I override a method defined in a 3rd-party assembly, and I want to add a Contract.Requires statement like this:
public class MyClass: MyParentClass
{
protected override void DoIt(MyParameter param)
{
Contract.Requires<ArgumentNullException>(param != null);
this.ExecuteMyTask(param.Something);
}
protected void ExecuteMyTask(MyParameter param)
{
Contract.Requires<ArgumentNullException>(param != null);
/* body of the method */
}
}
However, I'm getting warnings like this:
Warning 1 CodeContracts:
Method 'MyClass.DoIt(MyParameter)' overrides 'MyParentClass.DoIt(MyParameter))', thus cannot add Requires.
[edit] changed the code a bit to show alternatives issues [/edit]
If I remove the Contract.Requires in the DoIt method, I get another warning telling me that the requirement param != null is unproven.
I don't understand this warning. What is the cause, and how can I solve it?
You can't add extra requirements which your callers may not know about. It violates Liskov's Substitution Principle. The point of polymorphism is that a caller should be able to treat a reference which actually refers to an instance of your derived class as if it refers to an instance of the base class.
Consider:
MyParentClass foo = GetParentClassFromSomewhere();
foo.DoIt(null);
If that's statically determined to be valid, it's wrong for your derived class to hold up its hands and say "No! You're not meant to call DoIt with a null argument!" The aim of static analysis of contracts is that you can determine validity of calls, logic etc at compile-time... so no extra restrictions can be added at execution time, which is what happens here due to polymorphism.
A derived class can add guarantees about what it will do - what it will ensure - but it can't make any more demands from its callers for overridden methods.
I'd like to note that you can do what Jon suggested (this answer builds upon his) but also have your contract without violating the LSP.
You can do so by replacing the override keyword with new.
The base remains the base; all you did is introduce another functionality (as the keywords literally suggest).
It's not ideal for static checking because the safety could easily be cast away (cast to the base class first, then call the method), but that's a must, because otherwise it would violate the LSP, and you obviously do not want to do that. Better than nothing, though, I'd say.
In an ideal world you could also override the method and call the new one, but C# wouldn't let you do so because the methods would have the same signatures (even though it would make perfect sense; that's the trade-off).
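A minimal sketch of that approach, reusing the hypothetical MyParentClass/MyParameter types from the question:
using System;
using System.Diagnostics.Contracts;

public class MyClass : MyParentClass
{
    // 'new' hides the base DoIt instead of overriding it, so the extra
    // precondition is only seen by callers using a MyClass-typed reference;
    // casting back to MyParentClass bypasses it (the LSP trade-off noted above).
    protected new void DoIt(MyParameter param)
    {
        Contract.Requires<ArgumentNullException>(param != null);
        base.DoIt(param);
    }
}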

Possible C# 4.0 compiler error, can others verify?

Since I don't know exactly which part of it triggers the error, I'm not entirely sure how to better label it.
This question is a by-product of the SO question c# code seems to get optimized in an invalid way such that an object value becomes null, which I attempted to help Gary with yesterday evening. He was the one that found out that there was a problem, I've just reduced the problem to a simpler project, and want verification before I go further with it, hence this question here.
I'll post a note on Microsoft Connect if others can verify that they too get this problem, and of course I hope that either Jon, Mads or Eric will take a look at it as well :)
It involves:
3 projects, 2 of which are class libraries, one of which is a console program (this last one isn't needed to reproduce the problem, but just executing this shows the problem, whereas you need to use reflector and look at the compiled code if you don't add it)
Incomplete references and type inference
Generics
The code is available here: code repository.
I'll post a description below of how to make the projects if you rather want to get your hands dirty.
The problem exhibits itself as an invalid cast in a method call: before returning a simple generic list, the compiled code casts it to something strange. The original code ended up with a cast to a boolean, yes, a boolean. The compiler added a cast from a List<SomeEntityObject> to a boolean before returning the result, even though the method signature said it would return a List<SomeEntityObject>. This in turn leads to odd problems at runtime, everything from the result of the method call being considered "optimized away" (the original question) to a crash with BadImageFormatException, InvalidProgramException, or one of the similar exceptions.
During my work to reproduce this, I've seen a cast to void[], and the current version of my code now casts to a TypedReference. In one case, Reflector crashes so most likely the code was beyond hope in that case. Your mileage might vary.
Here's what to do to reproduce it:
Note: There is likely that there are more minimal forms that will reproduce the problem, but moving all the code to just one project made it go away. Removing the generics from the classes also makes the problem go away. The code below reproduces the problem each time for me, so I'm leaving it as is.
I apologize for the escaped HTML characters in the code below; this is Markdown playing a trick on me. If anyone knows how I can rectify it, please let me know, or just edit the question.
Create a new Visual Studio 2010 solution containing a console application, for .NET 4.0
Add two new projects, both class libraries, also .NET 4.0 (I'm going to assume they're named ClassLibrary1 and ClassLibrary2)
Adjust all the projects to use the full .NET 4.0 runtime, not just the client profile
Add a reference in the console project to ClassLibrary2
Add a reference in ClassLibrary2 to ClassLibrary1
Remove the two Class1.cs files that were added by default to the class libraries
In ClassLibrary1, add a reference to System.Runtime.Caching
Add a new file to ClassLibrary1, call it DummyCache.cs, and paste in the following code:
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
namespace ClassLibrary1
{
public class DummyCache<TModel> where TModel : new()
{
public void TriggerMethod<T>()
{
}
// Try commenting this out, note that it is never called!
public void TriggerMethod<T>(T value, CacheItemPolicy policy)
{
}
public CacheItemPolicy GetDefaultCacheItemPolicy()
{
return null;
}
public CacheItemPolicy GetDefaultCacheItemPolicy(IEnumerable<string> dependentKeys, bool createInsertDependency = false)
{
return null;
}
}
}
Add a new file to ClassLibrary2, call it Dummy.cs and paste in the following code:
using System;
using System.Collections.Generic;
using ClassLibrary1;
namespace ClassLibrary2
{
public class Dummy
{
private DummyCache<Dummy> Cache { get; set; }
public void TryCommentingMeOut()
{
Cache.TriggerMethod<Dummy>();
}
public List<Dummy> GetDummies()
{
var policy = Cache.GetDefaultCacheItemPolicy();
return new List<Dummy>();
}
}
}
Paste in the following code in Program.cs in the console project:
using System;
using System.Collections.Generic;
using ClassLibrary2;
namespace ConsoleApplication23
{
class Program
{
static void Main(string[] args)
{
Dummy dummy = new Dummy();
// This will crash with InvalidProgramException
// or BadImageFormatException, or a similar exception
List<Dummy> dummies = dummy.GetDummies();
}
}
}
Build, and ensure there are no compiler errors
Now try running the program. This should crash with one of the more horrible exceptions. I've seen both InvalidProgramException and BadImageFormatException, depending on what the cast ended up as
Look at the generated code of Dummy.GetDummies in Reflector. The source code looks like this:
public List<Dummy> GetDummies()
{
var policy = Cache.GetDefaultCacheItemPolicy();
return new List<Dummy>();
}
however reflector says (for me, it might differ in which cast it chose for you, and in one case Reflector even crashed):
public List<Dummy> GetDummies()
{
List<Dummy> policy = (List<Dummy>)this.Cache.GetDefaultCacheItemPolicy();
TypedReference CS$1$0000 = (TypedReference) new List<Dummy>();
return (List<Dummy>) CS$1$0000;
}
Now, here's a couple of odd things, the above crash/invalid code aside:
ClassLibrary2, which has Dummy.GetDummies, performs a call to get the default cache policy on the class from ClassLibrary1. It uses type inference (var policy = ...), and the result is a CacheItemPolicy object (null in the code, but the type is what's important).
However, ClassLibrary2 does not have a reference to System.Runtime.Caching, so it should not compile.
And indeed, if you comment out the method in Dummy that is named TryCommentingMeOut, you get:
The type 'System.Runtime.Caching.CacheItemPolicy' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Runtime.Caching, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.
Why having this method present makes the compiler happy I don't know, and I don't even know if this is linked to the current problem or not. Perhaps it is a second bug.
There is a similar method in DummyCache: if you restore the method in Dummy so that the code compiles again, and then comment out the method in DummyCache that has the "Try commenting this out" comment above it, you get the same compiler error.
OK, I downloaded your code and can confirm the problem as described.
I have not done any extensive tinkering with this, but when I run & reflector a Release build all seems OK (= null ref exception and clean disassembly).
Reflector (6.10.11) crashed on the Debug builds.
One more experiment: I wondered about the use of CacheItemPolicies, so I replaced it with my own MyCacheItemPolicy (in a 3rd class library) and the same BadImageFormatException pops up.
The exception mentions : {"Bad binary signature. (Exception from HRESULT: 0x80131192)"}

NInject and thread-safety

I am having problems with the following class in a multi-threaded environment:
public class Foo
{
[Inject]
public IBar InjectedBar { get; set; }
public bool NonInjectedProp { get; set; }
public void DoSomething()
{
/* The following line is causing a null-reference exception */
InjectedBar.DoSomething();
}
public Foo(bool nonInjectedProp)
{
/* This line should inject the InjectedBar property */
KernelContainer.Inject(this);
NonInjectedProp = nonInjectedProp;
}
}
This is a legacy class which is why I am using property rather than constructor injection.
Sometimes when DoSomething() is called, the InjectedBar property is null. In a single-threaded application, everything runs fine.
How can this be occurring, and how can I prevent it?
I am using NInject 2.0 without any extensions, although I have copied the KernelContainer from the NInject.Web project.
I have noticed a similar problem occurring in my web services. This problem is extremely intermittent and difficult to replicate.
First of all, let me say that this is wrong on so many levels; the KernelContainer was an infrastructure class kept specifically to work around certain limitations in the ASP.NET WebForms page lifecycle. It was never meant to be used in application code. Using the Ninject kernel (or any DI container) as a service locator is an anti-pattern.
That being said, Ninject itself is definitely thread-safe because it's used to service parallel requests in ASP.NET all the time. Wherever this NullReferenceException is coming from, it's got little if anything to do with Ninject.
I can think of two possibilities:
You have to initialize KernelContainer.Kernel somewhere, and that code might have a race condition. If something tries to use the KernelContainer before the kernel is fully initialized (possible if you use the IKernel.Bind methods instead of loading modules as per the guidance), you'll get errors like this. Or:
It's your IBar implementation itself that has problems, and the NullReferenceException is happening somewhere inside the DoSomething method. You don't actually specify that InjectedBar is null when you get the exception, so that's a legitimate possibility here.
Just to narrow the field of possibilities, I'd eliminate the KernelContainer first. If you absolutely must use Ninject as a service locator due to a poorly-designed legacy architecture, then at least allow it to create the dependencies instead of relying on Inject(this). That is to say, whichever class or classes need to create your Foo, have that class call kernel.Get<Foo>(), and set up your kernel to Bind<Foo>().ToSelf().
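A rough sketch of that suggestion (Ninject 2.x-style API; the Bar binding and the removal of the KernelContainer.Inject(this) call from Foo's constructor are assumptions):
using Ninject;
using Ninject.Parameters;

class Program
{
    static void Main()
    {
        IKernel kernel = new StandardKernel();
        kernel.Bind<IBar>().To<Bar>();   // Bar is a hypothetical IBar implementation
        kernel.Bind<Foo>().ToSelf();

        // Resolving Foo through the kernel lets Ninject populate the [Inject]
        // property before anything calls DoSomething; the non-injected flag is
        // supplied as a constructor argument instead of via a service-locator call.
        var foo = kernel.Get<Foo>(new ConstructorArgument("nonInjectedProp", true));
        foo.DoSomething();
    }
}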
