Let's say we have a template method that looks like this:
abstract class Worker
{
    public void DoJob()
    {
        BeforeJob();
        DoRealJob();
        AfterJob();
    }

    protected abstract void BeforeJob();
    protected abstract void DoRealJob();
    protected abstract void AfterJob();
}
Subclasses that inherit from the Worker class should implement the DoRealJob() method.
When the implementation runs on the same thread everything is fine; the three parts of the DoJob() method execute in this order:
BeforeJob()
DoRealJob()
AfterJob()
But when DoRealJob() hands its work off to another thread, AfterJob() may execute before that work has completed.
My current solution is to let the subclasses call AfterJob() themselves, but this doesn't prevent a subclass from forgetting to call it, and we lose the benefit of a template method.
Are there other ways to guarantee a consistent call order, regardless of whether DoRealJob() blocks or not?
You can't get both simple inheritance (signature and hooking) and support for asynchronous operations in your code.
These two goals are mutually exclusive.
The inheritors must be aware of callback mechanisms, either direct (Tasks, async) or indirect (events, callback functions, AutoResetEvent/ManualResetEvent, or other synchronization constructs). Some of these are newer, some older, and it is difficult to say which will be best for a concrete use case.
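To make the indirect option concrete, here is a minimal sketch (the names are illustrative, not from the original code) in which the template method blocks on a synchronization construct until the real job signals completion. Note that it still relies on the subclass remembering to signal, which is exactly the discipline problem the question describes:
using System.Threading;

abstract class BlockingWorker
{
    // Subclasses must call jobDone.Set() when the real work has finished,
    // even if that work ran on another thread.
    protected readonly ManualResetEvent jobDone = new ManualResetEvent(false);

    public void DoJob()
    {
        BeforeJob();
        DoRealJob();       // may hand the work off to another thread
        jobDone.WaitOne(); // block until the subclass signals completion
        AfterJob();
    }

    protected abstract void BeforeJob();
    protected abstract void DoRealJob();
    protected abstract void AfterJob();
}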
Well, it may look as if there were a simple fix for the multithreaded case, but what if your DoRealJob actually runs in another process, or uses some remote job queue, so that the real job executes outside your app entirely?
So:
If you really expect your class to be used as the basis for some async worker, then you should design it accordingly.
If not - do not overengineer. You can't account for every possible scenario. Just document your class well enough, and I doubt anyone will try to implement DoRealJob asynchronously, especially if you name it DoRealJobSynchronously. If someone does it anyway, your conscience can be pristinely clean.
EDIT:
Do you think it would be correct if I provided both versions, sync and async, of DoRealJob, plus a flag IsAsynchronous so I can decide which one to call?
As I have already said, I don't know your actual usage scenarios, and it is unrealistic to expect a single design to handle all of them effectively.
Also there are two very important questions to consider that pertain to your overall Worker class and its DoJob method:
1) You have to determine whether you want the DoJob method to be synchronous or asynchronous, or whether you want both synchronous and asynchronous versions. This is not directly related to your question, but it is still a very important design decision, because it will have a great impact on your object model. The question could be rephrased as:
Do you want the DoJob method to block everything after it is called until it has done its job, or do you want it to behave like a StartJob method that merely launches the real processing, leaving it to other mechanisms to notify you when the job has ended (or to stop it manually)?
//----------------Sync worker--------------------------
SyncWorker syncWorker = CreateSyncStringWriter("The job is done");
Console.WriteLine("SyncWorker will be called now");
syncWorker.DoJob(); // "The job is done" is written here
Console.WriteLine("SyncWorker call ended");
//----------------Async worker--------------------------
Int32 delay = 1000;
AsyncWorker asyncWorker = CreateAsyncStringWriter("The job is done", delay);
Console.WriteLine("AsyncWorker will be called now");
asyncWorker.StartDoJob(); // "The job is done" probably won't be written here
Console.WriteLine("AsyncWorker call ended");
// "The job is done" could be written somewhere here.
2) If you want DoJob to be async (or to have an async version), you should consider whether you need some mechanism that notifies you when DoJob finishes processing - see Asynchronous Programming Patterns - or whether it is irrelevant to you when, or even whether, it ends.
So:
Do you have the answers to these two questions?
If yes - that is good.
If not - refine and consider your requirements.
If you are still unsure - stick with simple sync methods.
If you, however, think that you need some async-based infrastructure then, taking into account that this is C# 3.0, you should use the Asynchronous Programming Model (APM).
Why this one and not the event-based pattern? Because the IAsyncResult interface, despite its cumbersomeness, is quite generic and can easily be reused in a Task-based model, simplifying a future transition to higher .NET versions.
It will be something like:
/// <summary>
/// Interface for both the sync and async job.
/// </summary>
public interface IWorker
{
    void DoJob();
    IAsyncResult BeginDoJob(AsyncCallback callback);
    void EndDoJob(IAsyncResult asyncResult);
}
/// <summary>
/// Base class that provides the DoBefore and DoAfter methods.
/// </summary>
public abstract class Worker : IWorker
{
    protected abstract void DoBefore();
    protected abstract void DoAfter();

    // The job methods themselves are supplied by the subclasses below.
    public abstract void DoJob();
    public abstract void EndDoJob(IAsyncResult asyncResult);

    public IAsyncResult BeginDoJob(AsyncCallback callback)
    {
        return new Action(((IWorker)this).DoJob)
            .BeginInvoke(callback, null);
    }
    //...
}
public abstract class SyncWorker : Worker
{
    protected abstract void DoRealJobSync();

    public override void DoJob()
    {
        DoBefore();
        DoRealJobSync();
        DoAfter();
    }
}
public abstract class AsyncWorker : Worker
{
    protected abstract IAsyncResult BeginDoRealJob(AsyncCallback callback);
    protected abstract void EndDoRealJob(IAsyncResult asyncResult);

    public override void DoJob()
    {
        DoBefore();
        IAsyncResult asyncResult = this.BeginDoRealJob(null);
        this.EndDoRealJob(asyncResult); // blocks until the real job completes
        DoAfter();
    }
}
P.S.: This example is incomplete and not tested.
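On the point about easing a later transition: once you are on .NET 4.0 or higher, an APM Begin/End pair like the one above can be wrapped into a Task with Task.Factory.FromAsync. A hedged sketch, assuming an IWorker instance named worker:
// Bridges the APM pair to a Task (System.Threading.Tasks, .NET 4.0+).
Task doJobTask = Task.Factory.FromAsync(
    (callback, state) => worker.BeginDoJob(callback),
    worker.EndDoJob,
    null);
doJobTask.Wait(); // or compose it with ContinueWith, etc.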
P.P.S.: You may also consider using delegates in place of abstract (virtual) methods to express your jobs:
public class ActionWorker : Worker
{
    private Action doRealJob;
    //...

    public ActionWorker(Action doRealJob)
    {
        if (doRealJob == null)
            throw new ArgumentNullException("doRealJob");
        this.doRealJob = doRealJob;
    }

    public override void DoJob()
    {
        this.DoBefore();
        this.doRealJob();
        this.DoAfter();
    }
}
DoBefore and DoAfter can be expressed in a similar way.
P.P.P.S.: The parameterless Action delegate is a .NET 3.5 construct, so you will probably have to define your own delegate that accepts zero parameters and returns void:
public delegate void MyAction();
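Putting the last two postscripts together, a fully delegate-based worker might look like the following sketch (hypothetical and untested, in the same spirit as ActionWorker above):
public class DelegateWorker
{
    private readonly MyAction doBefore;
    private readonly MyAction doRealJob;
    private readonly MyAction doAfter;

    public DelegateWorker(MyAction doBefore, MyAction doRealJob, MyAction doAfter)
    {
        if (doBefore == null || doRealJob == null || doAfter == null)
            throw new ArgumentNullException();
        this.doBefore = doBefore;
        this.doRealJob = doRealJob;
        this.doAfter = doAfter;
    }

    public void DoJob()
    {
        this.doBefore();
        this.doRealJob();
        this.doAfter();
    }
}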
Consider changing DoRealJob to DoRealJobAsync and giving it a Task return value, so you can await the eventual asynchronous result.
Your code would then look like:
abstract class Worker
{
    public async Task DoJob()
    {
        BeforeJob();
        await DoRealJobAsync();
        AfterJob();
    }

    protected abstract void BeforeJob();
    protected abstract void AfterJob();
    protected abstract Task DoRealJobAsync();
}
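A caller can then await the whole template method, which guarantees that AfterJob() runs only after the real job has completed. A usage sketch, where ConcreteWorker is a hypothetical subclass:
// Inside an async method:
Worker worker = new ConcreteWorker();
await worker.DoJob(); // BeforeJob -> DoRealJobAsync -> AfterJob, in that order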
If you can't use async/await (it needs .NET 4.5, or the old Async CTP on .NET 4.0), you can use the plain Task continuation style that has been available since .NET 4.0:
abstract class Worker
{
    public Task DoJob()
    {
        BeforeJob();
        return DoRealJobAsync()
            .ContinueWith(prevTask => AfterJob());
    }

    protected abstract void BeforeJob();
    protected abstract void AfterJob();
    protected abstract Task DoRealJobAsync();
}
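With this version DoJob() returns the continuation task, so a caller that needs to block until AfterJob() has run can wait on it (ConcreteWorker is again a hypothetical subclass):
Worker worker = new ConcreteWorker();
worker.DoJob().Wait(); // blocks until DoRealJobAsync and AfterJob have completed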
@Builder
public class X {
    @Nonnull String a;
    @Nonnull String b;
}
public static void main(String[] args) {
    X.XBuilder builder = X.builder();

    // thread 1
    CompletableFuture.runAsync(() -> {
        builder.a("some");
    });

    // thread 2
    CompletableFuture.runAsync(() -> {
        builder.b("thing");
    });
}
Here the same builder object is being accessed and modified from two threads at the same time.
So, will this code be thread safe?
The use case is calling multiple APIs, where each API's result populates some of the fields of class X.
If you want to know how the stuff that Lombok generates works, you can always use the delombok tool.
With regard to the thread safety of @Builder, you will see that you can in fact access a builder instance from multiple threads, but only under these constraints:
Don't call the same field setter from different threads. Otherwise you'll never know which value makes it to the builder eventually.
Make sure that all value-setting threads have terminated before you call build(). (If you want to call build() in a thread, too, make sure you create this thread after all value-setting threads have terminated.)
This is necessary because @Builder wasn't designed for concurrency (as that's not something you typically do with a builder). In particular, @Builder does not use synchronization or volatile fields, so you have to establish a happens-before relation between all setter calls and the build() call yourself.
I am a .NET developer trying my hand at Node.js web API development.
I was wondering whether we can create models in Node.js the same way we create them in ASP.NET Web API.
For example:
public class BaseResponse
{
public bool Success { get; set; }
public string ErrorMessage { get; set; }
}
public class MovieResponse : BaseResponse
{
public int MovieId { get; set; }
public string MovieName { get; set; }
}
This is how we do it in C#.
How can I create such models in Node.js?
Is there any npm package available?
There's good news and there's bad news. The bad news is that the concept of classes and inheritance as you know it from other languages is not supported. The good news is that JavaScript attempted to do something along that idea (although it did a miserable job implementing it). Below is an example of the code you provided, written in JavaScript:
function BaseResponse(success, errorMessage) {
this.success = success;
this.errorMessage = errorMessage;
}
function MovieResponse(success, errorMessage, movieId, movieName) {
BaseResponse.call(this, success, errorMessage); // Call the base class's constructor (if necessary)
this.movieId = movieId;
this.movieName = movieName;
}
MovieResponse.prototype = Object.create(BaseResponse.prototype);
MovieResponse.prototype.constructor = MovieResponse;
/**
* This is an example of an instance method.
*/
MovieResponse.prototype.instanceMethod = function(data) { /*...*/ };
/**
* This is an example of a static method. Notice the lack of prototype.
*/
MovieResponse.staticMethod = function(data) {/* ... */ };
// Instantiate a MovieResponse
var movieResInstance = new MovieResponse();
Mozilla has really good documentation on JavaScript and its take on classes. In the code above, you are creating two functions, BaseResponse and MovieResponse. Both of these functions act as constructors for an object of the appropriate "class" when you use the new keyword. You specify that MovieResponse inherits from BaseResponse with MovieResponse.prototype = Object.create(BaseResponse.prototype). This makes MovieResponse's prototype chain continue into BaseResponse's. You'll notice that immediately after setting MovieResponse's prototype I have to set its constructor property back to MovieResponse; if I didn't, instances of MovieResponse would report BaseResponse as their constructor (I told you they did a horrible job).
The rest of the code should be relatively straightforward. You can create instance methods on your brand new, shiny class by defining them on the prototype chain. If you define a function on BaseResponse that is not defined on MovieResponse but call the function on an instance of MovieResponse, JavaScript will "crawl" the prototype chain until it finds the function. Static methods are defined directly on the constructor itself (another weird feature).
Notice there is no concept of types or access modifiers (public/private). There are runtime tricks that you can implement to enforce types, but it's usually unnecessary in JavaScript and more prone to errors and inflexibility than adding such checks may justify.
You can implement the concept of private and protected members of a class in a more straightforward way than type checks. Using Node's require(), and assuming you wanted a private function called privateMethod, you could implement it as:
function privateMethod() { /* privateMethod definition */ }
// Definition for MovieResponse's constructor
function MovieResponse() { /*...*/ }
module.exports = MovieResponse;
I will add an almost obligatory piece of commentary that I do not agree with: "it is unnecessary to use inheritance in JavaScript." JavaScript uses a notion coined "duck typing" (if it looks like a duck and sounds like a duck, it's a duck). Since JavaScript is weakly typed, it doesn't care whether the object is a BaseResponse or a MovieResponse; you can call any method or access any field you want on it. The result is usually an error, or erroneous/error-prone code. I mention this here because you may come across the notion and its supporters. Know that such programming is dangerous and results in just bad programming practices.
I found the following code on MSDN:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

public class DisposeExample
{
    public class MyResource : IDisposable
    {
        private IntPtr handle;
        private Component component = new Component();
        private bool disposed = false;

        // P/Invoke declaration for the Win32 API that releases the handle.
        [DllImport("kernel32.dll")]
        private static extern bool CloseHandle(IntPtr handle);

        public MyResource(IntPtr handle)
        {
            this.handle = handle;
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!this.disposed)
            {
                if (disposing)
                {
                    // Dispose managed resources.
                    component.Dispose();
                }
                CloseHandle(handle);
                handle = IntPtr.Zero;
                disposed = true;
            }
        }

        ~MyResource()
        {
            Dispose(false);
        }
    }

    public static void Main()
    {
        MyResource obj = new MyResource(IntPtr.Zero);
        //obj.Dispose();
    }
}
Now, my confusion here is this: if I call obj.Dispose(), it disposes the objects created inside MyResource (handle, component, etc.), but does obj itself also get removed from the heap? The same applies to the destructor: if I don't call Dispose, the destructor runs at some point and cleans up the contained objects, but what about obj itself?
Secondly, if I don't have a destructor defined in the class and I don't even call Dispose, does the GC never come into the picture here?
IDisposable exists to release unmanaged resources held by your managed objects; the destructor here has the sole purpose of releasing those unmanaged items if Dispose was never called. Once your object goes out of scope or is set to null and no more references to it exist, it will eventually be collected by the GC regardless of whether you called Dispose.
The fundamental rule I'd recommend with IDisposable is that at any given moment in time, for every object that implements IDisposable, there should be exactly one entity with the clearly-defined responsibility of ensuring it gets cleaned up (almost always by calling Dispose) before it is abandoned. That responsibility initially belongs to whatever entity calls the IDisposable object's constructor, but it may be handed off to some other entity (which may hand it off again, etc.).
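One common way to encode that hand-off of responsibility is an ownership flag on the consuming class, similar in spirit to the leaveOpen parameter on .NET's StreamReader. A minimal sketch with hypothetical names, not a prescribed pattern:
public sealed class Consumer : IDisposable
{
    private readonly IDisposable resource;
    private readonly bool ownsResource;

    // If ownsResource is true, this instance takes over the cleanup
    // responsibility; otherwise it stays with the caller.
    public Consumer(IDisposable resource, bool ownsResource)
    {
        this.resource = resource;
        this.ownsResource = ownsResource;
    }

    public void Dispose()
    {
        if (this.ownsResource)
            this.resource.Dispose();
    }
}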
In general, I'd guess that most programmers would be best served if they pretended finalizers and destructors did not exist. They are generally only needed as a consequence of poorly-written code, and in most cases the effort one would have to spend writing a 100%-correct finalizer/destructor and working through all the tricky issues related to threading context, accidental resurrection, etc., would be better spent ensuring that the primary rule given above is always followed. Because of some unfortunate design decisions in the Framework and its languages, there are a few situations which can't be handled well without finalizers/destructors, but for the most part they serve to turn code that would fail relatively quickly into code that mostly works but may sometimes fail in ways that are almost impossible to debug.
I have some nagging doubts about the correct way to implement IDisposable. Consider the following scenario...
public class Foo : IDisposable {...}
public class Bar : IDisposable {
private bool disposed = false;
private Foo MyFoo; // not readonly, because Dispose() sets it to null below
public Bar() {
this.MyFoo = new Foo();
}
public Bar(Foo foo) {
this.MyFoo = foo;
}
~Bar() {
Dispose(false);
}
protected virtual void Dispose(bool disposing) {
if (!this.disposed) {
if (disposing) {
if (MyFoo != null) {
this.MyFoo.Dispose();
this.MyFoo = null;
}
}
this.disposed = true;
}
}
public void Dispose() {
Dispose(true);
GC.SuppressFinalize(this);
}
}
My questions are:
1) If a class creates a disposable object, should it call the Dispose() method on that object in its own Dispose() method?
2) If a disposable object is passed to a class as a reference, should that class still call the Dispose() method on that reference object, or should it leave it to the class that created the object in the first place?
The above pattern seems to crop up quite a lot (particularly with DI), but I don't seem to be able to find a concrete example of the correct way to structure this.
Refer to this excellent MSDN article:
Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework
1) If a class creates a disposable object, should it call the Dispose() method on that object in its own Dispose() method?
Yes, it should. If it doesn't, the cleanup will still happen eventually through the finalizer, but relying on the finalizer extends the object's lifetime by at least one GC generation. This is due to the finalizer in the class definition; refer to the article linked above.
2) If a disposable object is passed to a class as a reference, should that class still call the Dispose() method on that reference object, or should it leave it to the class that created the object in the first place?
It is the responsibility of the caller (more specifically, the class that created the instance) to call the Dispose method.
~Bar() {
Dispose(false);
}
Whenever you find yourself writing code like this, take a deep breath first and ask "do I actually need a finalizer?" It is extremely rare that you need one; a finalizer is only required when you take ownership of an unmanaged resource yourself.
The first litmus test is "does the finalizer actually do anything?" That's clear if you follow the code: it calls Dispose(false), and that method only does something when its argument is true. It follows that you don't need a finalizer. This is entirely normal; finalizers are something that Microsoft worries about. They wrote the .NET Framework classes that wrap an unmanaged resource: FileStream, Socket, etcetera. And above all the SafeHandle classes, designed to wrap operating system handles. Those have their own finalizers; you don't rewrite one yourself.
So without a finalizer, the code entirely collapses to the simple and correct implementation: you only need to call the Dispose() method of any disposable objects you store yourself:
public class Bar : IDisposable {
private readonly Foo MyFoo;
public Bar() {
this.MyFoo = new Foo();
}
public void Dispose() {
MyFoo.Dispose();
}
}
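Callers then express the cleanup with a using statement, which guarantees Dispose() runs even when an exception is thrown:
using (var bar = new Bar())
{
    // work with bar; at the end of this block bar.Dispose() runs,
    // which in turn disposes the Foo that bar created
}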
It is common to have classes whose methods take string parameters that must be validated against null or empty, as in this example:
public class MyClass {
public void MyMethod(string param){
if(string.IsNullOrEmpty(param)){
throw new ArgumentNullException(...);
}
//...
}
}
It's clear that the behavior of the method is the same for both (invalid) values. This is a very common situation, and when it comes to testing these methods, I always have doubts about how to do it. I always create two separate tests for these cases:
[TestClass]
public class Tests {
[TestMethod]
public void MyMethod_should_fail_if_param_is_null(){
//...
myclass.MyMethod(null);
//...
}
[TestMethod]
public void MyMethod_should_fail_if_param_is_empty(){
//...
myclass.MyMethod("");
//...
}
}
But I see too much redundancy. Those tests are exactly the same; the only difference is the parameter passed to the method. That bothers me very much, since I have to create two tests for each string parameter. A method with 3 parameters would need 6 tests just to cover its parameter validation.
I think this is the right way to test those parameters, but if I know that 99% of string parameters will be validated in the same way, wouldn't it be better to just test for null (or empty) and assume that the behavior in the other case will be the same?
I would like to know what you think about this. I know what I'm asking is more a technical opinion than a technical question, but I think the testing community may have something interesting to say about this situation.
Thank you!
Personally I'd consider using a single test for all of the parameters. That doesn't follow the normal dogma of unit testing, but it increases the readability of the tests (by minimizing the amount of test code which is dedicated to a pretty repetitive case) and doesn't have much in the way of downsides. Yes, if the test fails you don't know whether all of the checks after the first failing one will also fail - but is that really a problem in practice?
The important point is to make sure that you've got a short cut for testing the case. For instance, you might write something like this (if your unit test framework doesn't have it already):
public static void ExpectException<T>(Action action) where T : Exception
{
try
{
action();
Assert.Fail("Expected exception " + typeof(T).Name);
}
catch (T)
{
// Expected - swallow the exception so the test passes
}
}
Then you can write:
[Test]
public void MyMethodFailsWithInvalidArguments()
{
ExpectException<ArgumentNullException>(() => myClass.MyMethod(null));
ExpectException<ArgumentException>(() => myClass.MyMethod(""));
}
Much more concise than doing each one with an individual try/catch block, or even using an ExpectedException attribute and multiple tests.
You might want overloads for cases where you also want to verify that in each case, no mocked objects have been touched (to check that side-effects are avoided) or possibly overloads for common exceptions like ArgumentNullException.
For single-parameter methods you could even write a method to encapsulate exactly what you need:
public void ExpectExceptionForNullAndEmptyStrings(Action<string> action)
{
ExpectException<ArgumentNullException>(() => action(null));
ExpectException<ArgumentException>(() => action(""));
}
then call it with:
[Test]
public void MyMethodFailsWithInvalidArguments()
{
// This *might* work without an explicit lambda, via a method group conversion
ExpectExceptionForNullAndEmptyStrings(myClass.MyMethod);
}
... and maybe another one for methods with a single parameter but a non-void return type.
That's possibly going a bit far though :)
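If your test framework supports parameterized tests, another option is to fold both invalid values into a single test definition. A sketch using NUnit's TestCase attribute, reusing the ExpectException helper from above (and relying on ArgumentNullException deriving from ArgumentException, so one expectation covers both values):
[TestCase(null)]
[TestCase("")]
public void MyMethodFailsWithInvalidArgument(string invalidValue)
{
    ExpectException<ArgumentException>(() => myClass.MyMethod(invalidValue));
}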
If you use Java and JUnit 4 you can use this syntax. One caveat: such a test passes as soon as the expected exception is thrown, so putting both calls in one method would leave the second call unexecuted. Give each invalid argument its own test:
@Test(expected = IllegalArgumentException.class)
public void emptyArgumentTest() {
    myClass.MyMethod("");
}

@Test(expected = IllegalArgumentException.class)
public void nullArgumentTest() {
    myClass.MyMethod(null);
}