Is passing the context object as a parameter to another C# method the best way to do it?
Can somebody advise whether passing it this way will lead to any issues?
Thanks in advance.
Passing a DbContext as a parameter isn't a problem at all -- there's nothing particularly special about it. It's just another class.
The only issue that jumps to mind is one that would be the same for any IDisposable -- the .NET developer guidelines recommend that only the class responsible for creating the IDisposable should dispose of it.
...that can be tricky to determine if you are using a dependency injection framework (e.g. Ninject), because where objects are created is largely a mystery to the code written by the application.
To that end, you should never bind an IDisposable object in TransientScope:
Guidelines For Dispose() and Ninject
Are there any creational design patterns that allow for completely new objects (as in newly written) to be instantiated without having to add a new statement somewhere in existing code?
The main problem to solve is how to identify the class or classes to instantiate. I know of and have used three general patterns for discovering classes, which I'll call registration, self-registration and discovery by type. I'm not aware of them having been written up in a formal pattern description.
Registration: Each class that wants to be discovered is registered somewhere where a framework can find it (a minimal sketch follows this list):
the class name is put in an environment variable or Java system property, in a file, etc.
some code adds the class or its name to a singleton list early in program execution
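For example, such a registry can be nothing more than a singleton list that framework code iterates later. A minimal sketch (all names here are invented for illustration):

import java.util.ArrayList;
import java.util.List;

// Hypothetical singleton registry: classes are added early in program
// execution, and framework code later iterates the list and instantiates them.
public final class PluginRegistry {

    private static final List<Class<?>> REGISTERED = new ArrayList<>();

    private PluginRegistry() {}

    public static void register(Class<?> pluginClass) {
        REGISTERED.add(pluginClass);
    }

    public static List<Class<?>> registeredClasses() {
        return new ArrayList<>(REGISTERED);
    }
}

// Early in startup:   PluginRegistry.register(MyNewPlugin.class);
// Framework code can then call pluginClass.getDeclaredConstructor().newInstance()
// without any existing code containing a new statement that names MyNewPlugin.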
Self-registration: Each class that wants to be discovered registers itself, probably in a singleton list. The trick is how the class knows when to do that.
The class might have to be explicitly referred to by some code early in the program (e.g. the old way of choosing a JDBC driver).
In Ruby, the code that defines a class can register the class somewhere (or do anything else) when the class is defined. It suffices to require the file that contains the class.
Discovery by type: Each class that wants to be discovered extends or implements a type defined by the framework, or is named in a particular way. Spring autowiring class annotations are another version of this pattern.
There are several ways to find classes that descend from a given type in Java (here's one SO question, here's another) and Ruby. As with self-registration, in languages like those where classes are dynamically loaded, something has to be done to be sure the class is loaded before asking the runtime what classes are available.
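To make discovery by type concrete on the JVM, here is a hedged sketch using Spring's classpath scanner; the Plugin interface and the package passed in are assumptions made for the example:

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AssignableTypeFilter;

// Hypothetical framework type that discoverable classes implement.
interface Plugin {}

public class PluginScanner {

    // Finds every concrete class under basePackage that is assignable to Plugin,
    // with no registration step and no new statement naming those classes.
    public static void scan(String basePackage) throws ClassNotFoundException {
        ClassPathScanningCandidateComponentProvider scanner =
                new ClassPathScanningCandidateComponentProvider(false);
        scanner.addIncludeFilter(new AssignableTypeFilter(Plugin.class));
        for (BeanDefinition bd : scanner.findCandidateComponents(basePackage)) {
            Class<?> found = Class.forName(bd.getBeanClassName());
            System.out.println("Discovered: " + found.getName());
        }
    }
}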
One thing to keep in mind is that someone still has to call new on your newly written class. If your code doesn't do it, then some framework has to do it for you.
I remember doing something similar in one of my side projects using Java Spring. I had an interface and multiple implementations of it, and the project needed to iterate over all the implementations and do some processing. Here too I was looking for a solution that would not require manually instantiating or explicitly wiring each particular implementation. Spring let me do that through the @Autowired annotation: it injected all the implementations of that interface on the fly. E.g.:
@Autowired
private List<IMyClass> myClassImplementations;
Now, in the above example, I can simply iterate over the injected list of implementations, and I never have to instantiate a new implementation myself each time I write one.
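For illustration, a self-contained sketch of that setup might look like the following (the implementation and service names are made up; only IMyClass comes from the text above):

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

public interface IMyClass {
    void process();
}

@Component
class FirstImpl implements IMyClass {
    public void process() { /* ... */ }
}

@Component
class SecondImpl implements IMyClass {
    public void process() { /* ... */ }
}

@Service
class Processor {

    // Spring injects every discovered IMyClass bean into this list; adding a new
    // @Component implementation requires no change to this class.
    @Autowired
    private List<IMyClass> myClassImplementations;

    public void processAll() {
        for (IMyClass impl : myClassImplementations) {
            impl.process();
        }
    }
}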
But I think in many cases this approach would be difficult to use (even though it fits my case). In the general case I would rather go with a Factory pattern and use that for creating new instances. I know that still requires new, but in my view engineering things so that every object is created and injected automatically is a bit of extra overhead.
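By way of contrast, a bare-bones factory (reusing the hypothetical IMyClass types from the sketch above) keeps the new statements in one place, but it still has to be edited whenever a new implementation is written:

// Hypothetical factory: centralizes instantiation, at the cost of editing this
// class for every new implementation.
public class MyClassFactory {

    public static IMyClass create(String kind) {
        switch (kind) {
            case "first":  return new FirstImpl();
            case "second": return new SecondImpl();
            default: throw new IllegalArgumentException("Unknown kind: " + kind);
        }
    }
}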
I want to understand some best practices regarding using MVVM and multithreading. Let us assume I have a ViewModel and it has an ObservableCollection. Also, let us assume I pass this collection to another service class which does some calculation and then updates my collection.
At some point I realize that I want to make this a multithreaded call. When I make the call to the service class using threads or tasks, what results is a cross-thread operation. The reason is quite obvious: the service class updates the collection, which in turn tries to update the UI from the background thread.
In such scenarios, what is the best practice? Should we always write our service class so that it first clones the input and then updates the cloned copy? Or should the view model always assume that the service calls might be multithreaded and send a cloned copy?
What would be the recommended way to solve this?
Thanks
Jithu
A solution that might resolve the cross-thread exception is to implement OnPropertyChanged in the base class of all ViewModels so that it switches to the correct thread/synchronization context; that way, the handlers of all properties in the View that are bound to the changing property are called on the correct thread. See: Avoid calling BeginInvoke() from ViewModel objects in multi-threaded c# MVVM application
If/when you create copies you are postponing the synchronization and, in many cases, making it harder than need be.
A web service will always return new objects; how you, or a framework, update the model using these objects is up to you. A lot depends on the volume of checks and updates coming in. There is no single recommended way -- use whatever fits the application's requirements.
Part of my problem here is using the proper vocabulary, so I apologize in advance for what might be a simple matter of terminology.
Suppose I have a Person interface, and a PersonBean class that implements that interface.
Suppose further I have a producer method somewhere (annotated @Produces) that returns a Person. Internally it returns a new PersonBean, but that's neither here nor there.
Finally, suppose I have another CDI bean somewhere with an injection point defined like this:
@Inject
private Person person;
Assuming I have all my beans.xml files in place etc. and have bootstrapped Weld or another CDI-1.0-compliant environment, as this all stands I will get an ambiguous definition error. This makes sense: Weld will find my PersonBean as a candidate for injection (it could just call the constructor) and will find the output of my producer method as a candidate for injection.
What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method.
I understand I could invent some qualifier somewhere and make the producer method produce Person instances that are qualified by that qualifier. If I do that, and change my injection point to include the qualifier, then obviously there's only one source of these qualified injectables (namely my producer method), so voila, problem solved.
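Roughly, the qualifier route described above would look like this; the qualifier name FromProducer and the class names other than Person/PersonBean are made up for the sketch:

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.inject.Qualifier;

// Hypothetical qualifier; any name would do.
@Qualifier
@Retention(RUNTIME)
@Target({ FIELD, METHOD, PARAMETER, TYPE })
public @interface FromProducer {}

class PersonProducer {

    @Produces
    @FromProducer
    public Person producePerson() {
        return new PersonBean();   // Person and PersonBean as in the question
    }
}

class SomeClient {

    @Inject
    @FromProducer
    private Person person;   // unambiguous: only the qualified producer matches
}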
But suppose I don't want to invent some bogus qualifier. (I'm not saying this is the case; just trying to more deeply understand the issues.) What are my options? Do I have any? I suppose I could put @Typed(Object.class) on the PersonBean to make it so that it was not seen as a Person by CDI....
Any ideas welcomed, including pointers to documentation, or better ways to understand this. Thanks.
Annotate your PersonBean as @Alternative; then it will use the producer method.
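Roughly, that looks like the snippet below. Note that an @Alternative bean is ignored for injection unless it is explicitly selected in beans.xml, which is why the producer method then becomes the only eligible source of Person; whether that trade-off is acceptable depends on your application.

import javax.enterprise.inject.Alternative;

@Alternative
public class PersonBean implements Person {
    // unchanged implementation
}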
From digesting several different answers here and elsewhere, the solution I've adopted is to use the @Typed annotation with a value of Object.class on my bean. This means that it will only be eligible to be injected into fields that are declared like this:
@Inject
private Object something;
...which thankfully prove to be pretty much nonexistent. :-)
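For completeness, the bean-side change is just the annotation shown below; with @Typed(Object.class) the bean's types are restricted to java.lang.Object, so it no longer matches Person injection points and the producer method is left as the only candidate.

import javax.enterprise.inject.Typed;

@Typed(Object.class)
public class PersonBean implements Person {
    // unchanged implementation
}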
What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method.
Seam solder has a solution for this.
I'm not 100% sure how this will develop with the merge of Seam 3 and Deltaspike (the page is so 90s, but the content rocks :-), but putting Solder in your classpath is certainly a safe bet.
Oh, and as far as I know a comparable mechanism made it into the CDI 1.1 spec.
I am having trouble with Spring AOP. I am trying to trigger a method using an aspect, but the method that should trigger the aspect is a method of the same class, and the aspect is not firing (no errors, by the way). Like this:
class A implements Runnable {

    public void write() {
        System.out.println("Hi");
    }

    public void run() {
        this.write();
    }
}
<aop:after-returning method="anyMethod" pointcut="execution(* A.write(..))"/>
Any ideas will be appreciated
Thanks
The fact that the advised method is called in a different thread doesn't make any difference. Just make sure the instance that you pass to the thread is created by the Spring application context and not by your application code.
Also, since you're advising a method declared on a class rather than an interface -- write() -- Spring needs CGLIB on the classpath so it can create a class-based proxy (or you can switch to full AspectJ load-time weaving).
This is because Spring AOP is proxy-based: a proxy delegates calls to the underlying object. However, when a method of the underlying object calls another method of the same class (your use case), the proxy never comes into the picture, and hence what you are trying to achieve is not possible. There are some workarounds, but they undermine the very purpose of AOP.
You can find more information here:
http://docs.spring.io/spring/docs/3.1.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
As Abhishek Chauhan said, Spring AOP is proxy-based and thus cannot intercept direct calls to this.someMethod(). But the good news is that you can also use full-blown AspectJ within Spring applications via load-time weaving as described in the Spring manual. This way you can get rid of the limitation and even of the whole proxy overhead because AspectJ does not need any proxies.
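To make the self-invocation limitation concrete, here is a minimal sketch of one workaround mentioned above: calling back through the proxy via AopContext. This assumes the A bean is created by the Spring context and that expose-proxy is enabled (e.g. <aop:aspectj-autoproxy expose-proxy="true"/>).

import org.springframework.aop.framework.AopContext;

public class A implements Runnable {

    public void write() {
        System.out.println("Hi");
    }

    public void run() {
        // Bypasses the Spring proxy, so the after-returning advice never fires.
        this.write();

        // Goes back through the proxy, so the advice does fire
        // (requires expose-proxy="true").
        ((A) AopContext.currentProxy()).write();
    }
}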
So, metaprogramming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object and, bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters as mentioned by @Sjoerd:
Both the getter and the setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code-generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky: ABSE is currently the only MDSD approach I know of that natively supports custom code directly as a template parameter.
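As a small illustration of "a program that writes programs", here is a toy getter/setter template expander; the class name, field list, and everything else in it are invented for the example:

import java.util.Map;

// Toy code generator: expands a getter/setter "template" once per field.
public class AccessorGenerator {

    public static String generate(String className, Map<String, String> fieldsToTypes) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        fieldsToTypes.forEach((name, type) -> {
            String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
            src.append("    private ").append(type).append(' ').append(name).append(";\n");
            src.append("    public ").append(type).append(" get").append(cap)
               .append("() { return ").append(name).append("; }\n");
            src.append("    public void set").append(cap).append('(').append(type)
               .append(" value) { this.").append(name).append(" = value; }\n");
        });
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        // Prints generated Java source for a hypothetical Person class.
        System.out.println(generate("Person", Map.of("name", "String", "age", "int")));
    }
}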
Meta programming is not only adding methods at runtime, it can also be automatically creating code at compile time. I.e. code generating code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.