Should I avoid exposing the Lazy<T> class in public API? - c#-4.0

In a design of the public interface of a library, is it legitimate to return an instance of Lazy<T> in a property, if I want lazy initialization? Or is it better to always hide the usage of Lazy<T> by means of encapsulation or other techniques?

For the following I'm assuming you mean a lazy property.
It depends on the purpose of your interface.
Is it an important detail that the consumer knows it is lazy? Or is it just a technical detail that should not change the behavior for the consumer?
If you only have a short delay that does not need to be handled by the consumer, then I would tend to hide the Lazy and expose only T directly.
If the consumer should be aware of the laziness and may need to adapt to it, then I would expose the Lazy.
But thinking about it, in most cases I would rather expose a method, which indicates that the code may have side effects or may take a while.
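A minimal sketch of both options (Report and the source class names are illustrative, not from the question):

using System;

public class Report { /* assume this is expensive to build */ }

// Option 1: encapsulate the laziness; consumers just see T.
public class HiddenLazySource
{
    private readonly Lazy<Report> _report = new Lazy<Report>(() => new Report());

    public Report Report
    {
        get { return _report.Value; } // first access triggers creation
    }
}

// Option 2: expose Lazy<T>; consumers can check IsValueCreated and
// decide for themselves when to pay the initialization cost.
public class ExposedLazySource
{
    private readonly Lazy<Report> _report = new Lazy<Report>(() => new Report());

    public Lazy<Report> Report
    {
        get { return _report; }
    }
}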

I don't see any reason to directly expose Lazy<T> in the signature. In my use cases, the laziness is an implementation detail.
Don't use this in a property if initialization is long-running. In such cases, provide a method instead, and consider returning Task<T>.
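As a sketch of that last point (ReportService is an assumed name, and the Task.Factory call stands in for the real, slow initialization):

using System.Threading.Tasks;

public class ReportService
{
    // A method, not a property, signals to callers that work happens here,
    // and Task<T> lets them await the long-running initialization.
    public Task<Report> LoadReportAsync()
    {
        return Task.Factory.StartNew(() => new Report());
    }
}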

Related

Are mocks and stubs implementation details?

I have read that with TDD we should approach the entity (function, class, etc.) under test from the perspective of the user/caller of the entity. The gist is to focus on the public "interface", which in turn drives the design and helps us reason about it earlier.
But when we need to introduce mocks and stubs into our tests, isn't that an implementation detail?
Why would/should the "user" care about the other entities that are supposed to be there?
E.g.
How do you start writing a test for a PlaceOrder service that should check with the credit card service whether the user has enough money? Putting in a mock for the credit card service while writing a test from the perspective of the PlaceOrder client looks out of place - because it is an implementation detail; our PlaceOrder may call the credit card service for each user, or it may simply have a cache of scores provided at creation time.
It's not clear-cut. As a catch-phrase says: Tests are specifications.
When you use Test Doubles you are, indeed, specifying how your System Under Test (SUT) ought to interact with its dependencies.
I agree that this is usually an implementation detail, but there will typically be a few dependencies of a more architectural character.
A common example is an email gateway. If your SUT should send email, that's an observable side effect that the user (or some other stakeholder) cares about.
While you can (and perhaps should?) also run full systems tests that verify that certain conditions produce real emails that land in certain real mailboxes, such test cases are difficult to automate.
Inserting a Test Double that can take the place of an email gateway and verify that the correct message was delivered to the gateway is not only an implementation detail, but an important part of the overall system. In such cases, using a Test Double makes sense.
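A hand-rolled Test Double for such a gateway might look like this (IEmailGateway, SentMail, and SpyEmailGateway are illustrative names, not from any particular library):

using System.Collections.Generic;

public interface IEmailGateway
{
    void Send(string to, string subject, string body);
}

public class SentMail
{
    public string To;
    public string Subject;
    public string Body;
}

// A spy: records what was "sent" so a test can assert on the messages
// instead of inspecting a real mailbox.
public class SpyEmailGateway : IEmailGateway
{
    public readonly List<SentMail> Sent = new List<SentMail>();

    public void Send(string to, string subject, string body)
    {
        Sent.Add(new SentMail { To = to, Subject = subject, Body = body });
    }
}

A test then passes the spy to the SUT, exercises it, and asserts that Sent contains exactly the message the stakeholder cares about.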
Yes, Test Doubles specify behaviour, but sometimes, that's exactly what you want.
How much you should rely on this kind of design is an architectural choice. In addition to sending emails, you might choose to explicitly specify that a certain SUT ought to place a message on a durable queue.
You can create entire systems based on asynchronous messaging, which could imply that it'd be architecturally sound to let tests rely on Test Doubles.
In short, I find it a useful heuristic to use Test Doubles for architectural components, and rely mostly on testing pure functions for everything else.
For something like an order service, I wouldn't let the order service contact the payment gateway. Rather, I'd implement the order service operations as pure functions, and either pass in a payment token as function arguments, or let the output of functions trigger a payment action.
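Sketched in C# (all names here are assumptions for illustration):

public class Order { public decimal Total; }
public class PaymentToken { public string Value; }

// The output of the pure function: a description of the payment action
// to perform, not the action itself.
public class ChargeCommand
{
    public PaymentToken Token;
    public decimal Amount;
}

public static class PlaceOrder
{
    // Pure: same inputs always yield the same output, and no gateway
    // is contacted, so this part needs no Test Doubles at all.
    public static ChargeCommand Decide(Order order, PaymentToken token)
    {
        return new ChargeCommand { Token = token, Amount = order.Total };
    }
}

A thin, impure shell at the system boundary then executes the ChargeCommand against the real payment gateway.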
The book Domain Modeling Made Functional contains lots of good information about this kind of architecture.
On the other hand, the book Growing Object-Oriented Software, Guided by Tests contains many good examples of how to use Test Doubles to specify desired behaviour.
Perhaps you'll also find my article From interaction-based to state-based testing useful.
In summary: Tests are specifications. Test Doubles are specifications. Use them to specify the observable behaviour of the system. Try to avoid using them to specify implementation details.
But when we need to introduce mocks and stubs into our tests, isn't that an implementation detail?
Yes, in effect. A bit more precisely, it is additional coupling between your test and the details of your test subject's implementation.
There are two ideas in tension here. On the one hand, we want our tests to be as representative as possible of how our system will actually work; on the other hand, we want each of our tests to be a controlled experiment on our implementation, without coupling to shared mutable state.
In some cases, we can disguise some of the coupling by using inert substitutes for our dependency as the default case, so that our implementation classes are isolated unless we specifically opt into a shared configuration.
So for PlaceOrder, it might look like using a default CreditCardService that always answers "yes, the customer has enough money". Of course, that design only allows you to test the "yes" branch in your code - to test a "no" branch, you are necessarily going to need to know how to configure PlaceOrder with a CreditCardService that declines credit.
For more on this idea, see the doctrine of useful objects.
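A sketch of such an inert default (ICreditCardService and the class name are assumed):

public interface ICreditCardService
{
    bool HasSufficientFunds(string customerId, decimal amount);
}

// The inert default: always approves, so PlaceOrder can be exercised
// in isolation without any shared configuration.
public class AlwaysApproveCreditCardService : ICreditCardService
{
    public bool HasSufficientFunds(string customerId, decimal amount)
    {
        return true;
    }
}

To exercise the "no" branch, a test configures PlaceOrder with a substitute that declines instead; that configuration point is exactly the coupling described above.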
More generally, in TDD we normally take complicated designs that are hard to test and refactor them into a design where something really simple but hard to test collaborates with something that is complicated but easy to test.
But for that to work at all, the components need to be able to talk to each other, and if you are going to simulate that communication in a test you are necessarily going to be coupled to the "implementation detail" that is the protocol between them.
For the case where that protocol is stable, having tests coupled to those details isn't, of itself, a problem in practice. There's coupling, sure, and cost of change, but if the probability of change is negligible then the expected cost of that coupling is effectively nothing.
So the trick is identifying when our tests would require coupling to an unstable implementation protocol, and figuring out how to best mitigate the risk of change.

Dependency Inversion Principle And "String"

I know you can't create a program that adheres 100% to the Dependency Inversion Principle. All of us violate it by instantiating strings in our programs without thinking about it. Since String is a class and not a primitive datatype, we always become dependent on a concrete class.
I was wondering if there are any solutions for this (purely theoretically speaking). Since String is pretty much a black box with very few 'leaks', and has a complex implementation behind it, I don't expect an actual implementation, of course :)
The intent of the principle is not to avoid creating instances within a class, or to avoid using the "new" keyword. Therefore instantiating objects (or strings) does not violate the principle.
The principle is also not about always creating a higher-level abstraction (e.g. an interface or base class) in order to inject it and promote looser coupling. If an abstraction is already reasonable, there is no reason to try to improve on it. What benefit would you ever gain by swapping out the implementation of string?
I actually posted this question a few years ago (semi-relevant): IOC/DI: Is Registering a Concrete Type a Code Smell?
So what is the principle about? It's about writing components that are highly focused on their own responsibilities, and injecting components that are highly focused on their own responsibilities. These components are usually services when using a dependency injection framework and constructor injection, but can also be datatypes when doing other types of injection (e.g. method injection).
Note that there is no requirement for these services or datatypes to be interfaces or base classes--they can absolutely be concrete types without violating the principle.
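For example, a sketch with constructor injection of a concrete, focused component (all names are illustrative):

using System;

// Concrete and focused on one responsibility; no interface required.
public class InvoiceFormatter
{
    public string Format(decimal amount)
    {
        return amount.ToString("C");
    }
}

public class InvoicePrinter
{
    private readonly InvoiceFormatter _formatter;

    // Injecting the concrete type keeps InvoicePrinter focused on its
    // own job without violating the principle.
    public InvoicePrinter(InvoiceFormatter formatter)
    {
        _formatter = formatter;
    }

    public void Print(decimal amount)
    {
        Console.WriteLine(_formatter.Format(amount));
    }
}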
Dependency inversion is not about object creation; it's about the dependency between high-level and low-level modules, and about who defines the domain (objects and interfaces).
You are talking about dependency injection, which is a sub-part of the Inversion of Control principle.

Core Data - Are primitive setters / getters faster? When not to use?

From Apple's Core Data Programming Guide:
Core Data dynamically generates efficient public and primitive get and set attribute accessor methods and relationship accessor methods for managed object classes.
...
Primitive accessor methods are similar to "normal" or public key-value coding compliant accessor methods, except that Core Data uses them as the most basic data methods to access data, consequently they do not issue key-value access or observing notifications. Put another way, they are to primitiveValueForKey: and setPrimitiveValue:forKey: what public accessor methods are to valueForKey: and setValue:forKey:.
I would then expect the primitive accessor methods to perform better than the public accessors because they do not trigger KVO notifications. Is there a way to effectively test this theory with Time Profiler? (Surely it can't be as easy as putting the two calls in their own for-loops that iterate a zillion times and comparing the results...)
Obviously the primitive accessors aren't to be called by objects or functions outside of the Managed Object subclass, but when shouldn't you use them from within the class?
edelaney05,
As you appear to know, Core Data depends upon the KVC/KVO features of Objective-C. Yes, you are correct that the path length is slightly longer through the accessors. What of it? Performance of Core Data is dominated by the performance of the I/O subsystem.
IOW, tuning your fetch request is much more important than avoiding the accessor overhead. Can you do what you're proposing? Yes. Should you? No. You should, IMO, focus upon how to get your data into a MOC efficiently and then refine it with predicates and other filter techniques. Learning how to use the various key path operators and predicate language after the fetch is very important to writing performant CD code. Only after Instruments can document that you are spending an appreciable amount of time in the accessors would I consider your strategy of avoiding them.
In answer to your specific question, you should generally restrict your use of the primitive accessors to within your reimplementation of the public accessors. Sticking with accessors for all of your code then becomes your standard pattern. This gives you the long term engineering benefit of having the ability to associate arbitrary behavior with any property. Finally, if you can use the various key path and set operators, then the CD team has already optimized those access patterns. They are quite performant.
Andrew

PostSharp OnMethodBoundaryAspect Not Thread Safe

I'm trying out PostSharp AOP and am surprised that OnMethodBoundaryAspect is not thread-safe.
The same instance of the aspect is shared between method calls.
This considerably limits the number of use cases where it can be applied.
Is there any way to address this?
The OnEntry, OnExit, and OnException methods all receive a parameter of type MethodExecutionArgs. This parameter has a property called MethodExecutionTag, which can be used to share information between these events.
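For example, a timing aspect can keep its per-call state in MethodExecutionTag instead of an instance field (a sketch against the PostSharp 2.x API):

using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable]
public class TimingAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // State goes into MethodExecutionTag, not into a field on the
        // aspect, because the aspect instance is shared between calls.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Stopwatch stopwatch = (Stopwatch)args.MethodExecutionTag;
        stopwatch.Stop();
        Console.WriteLine("{0} took {1} ms", args.Method.Name, stopwatch.ElapsedMilliseconds);
    }
}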
http://doc.sharpcrafters.com/postsharp-2.1/Default.aspx##PostSharp-2.1.chm/html/P_PostSharp_Aspects_MethodExecutionArgs_MethodExecutionTag.htm
The third question at http://www.sharpcrafters.com/blog/post/Stay-DRY-Webinar.aspx is similar to yours.

Should methods have the same preconditions as the methods they call?

I've recently had a few scenarios where small changes to code have resulted in changing preconditions across multiple classes and I was wondering if design by contract is supposed to be that way or not.
public Goal getNextGoal() {
    return goalStack.pop();
}
If goalStack.pop() has a precondition that the stack isn't empty, does getNextGoal() need to explicitly state the same precondition? It seems like inheriting the preconditions would make things brittle: changing to a queue or another structure would change the preconditions of getNextGoal(), its callers, and its callers' callers. But not inheriting the preconditions would hide the contracts, and neither the callers nor the callers' callers would know about them.
So: brittle code, where all callers know and inherit the preconditions and postconditions of the code they call, or mysterious code, where callers never know what the deeper preconditions and postconditions are?
It depends on what your calling method does exactly. The important thing with preconditions is that the caller is responsible for fulfilling the preconditions.
So if callers of your GetNextGoal method should be responsible for providing a non-empty stack, then you should indeed also set preconditions on your GetNextGoal method. Clarity of preconditions is one of the huge advantages of Code Contracts, so I'd suggest you put them in all places where callers have to fulfill the preconditions.
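With Code Contracts, that might look like the following sketch (the goalStack field and its Count property are assumed):

using System.Diagnostics.Contracts;

public Goal GetNextGoal()
{
    // Restate the callee's precondition so callers of GetNextGoal see
    // the contract directly instead of discovering it one level down.
    Contract.Requires(goalStack.Count > 0);
    return goalStack.Pop();
}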
If your code seems brittle however, it might be a sign that you need to refactor some code.
It seems like inheriting the preconditions would make things brittle: changing to a queue or another structure would change the preconditions of getNextGoal(), its callers, and its callers' callers.
If you expose the underlying collection to the callers and change it later (to another structure, like you said), those callers would also have to change. This is usually a sign of brittle code.
If you would expose an interface instead of a specific queue implementation, your preconditions could also use the interface and you wouldn't have to change the preconditions every time your implementation changes. Thus resulting in less brittle code.
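Sketched in code (IGoalSource is an assumed name):

public interface IGoalSource
{
    bool HasGoals { get; }

    // Precondition: HasGoals must be true.
    Goal GetNextGoal();
}

Swapping the backing structure (stack, queue, or anything else) then no longer changes the contract the callers were written against.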
Exceptions are one solution, but perhaps not feasible for your situation.
Documenting what happens if there are no goals is normal; e.g., this is what malloc() does in C.
I can't tell if you are using Java or C++ or something else, as each language might have slightly more natural ways of handling this.

Resources