Part of my problem here is using the proper vocabulary, so I apologize in advance for what might be a simple matter of terminology.
Suppose I have a Person interface, and a PersonBean class that implements that interface.
Suppose further I have a producer method somewhere (annotated @Produces) that returns a Person. Internally it returns a new PersonBean, but that's neither here nor there.
Finally, suppose I have another CDI bean somewhere with an injection point defined like this:
@Inject
private Person person;
Assuming I have all my beans.xml files in place etc. and have bootstrapped Weld or another CDI-1.0-compliant environment, as this all stands I will get an ambiguous definition error. This makes sense: Weld will find my PersonBean as a candidate for injection (it could just call the constructor) and will find the output of my producer method as a candidate for injection.
What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method.
I understand I could invent some qualifier somewhere and make the producer method produce Person instances that are qualified by that qualifier. If I do that, and change my injection point to include the qualifier, then obviously there's only one source of these qualified injectables (namely my producer method), so voila, problem solved.
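For concreteness, here is a minimal sketch of that qualifier approach, assuming the CDI 1.0 javax APIs (the qualifier name FromProducer is made up):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface FromProducer {}

// The producer method carries the qualifier...
@Produces
@FromProducer
public Person producePerson() {
    return new PersonBean();
}

// ...and so does the injection point, so only the producer matches:
@Inject
@FromProducer
private Person person;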
But suppose I don't want to invent some bogus qualifier. (I'm not saying this is the case; just trying to more deeply understand the issues.) What are my options? Do I have any? I suppose I could put @Typed(Object.class) on the PersonBean to make it so that it was not seen as a Person by CDI....
Any ideas welcomed, including pointers to documentation, or better ways to understand this. Thanks.
Annotate your PersonBean as @Alternative; then the producer method will be used.
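A minimal sketch, assuming the standard CDI annotations:
@Alternative
public class PersonBean implements Person {
    // Disabled unless explicitly selected in beans.xml,
    // so the producer method becomes the only candidate.
}
Note that an @Alternative bean stays disabled until it is listed in beans.xml; left unselected, it simply stops competing at the injection point.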
From digesting several different answers here and elsewhere, the solution I've adopted is to use the @Typed annotation with a value of Object.class on my bean. This means that it will only be eligible to be injected into fields that are declared like this:
@Inject
private Object something;
...which thankfully prove to be pretty much nonexistent. :-)
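For reference, a sketch of what that ends up looking like, assuming the javax.enterprise.inject.Typed annotation:
@Typed(Object.class)
public class PersonBean implements Person {
    // CDI now sees Object as this bean's only bean type, so it no
    // longer competes with the producer method for Person injection points.
}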
What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method.
Seam Solder has a solution for this.
I'm not 100% sure how this will develop with the merge of Seam 3 and Deltaspike (the page is so 90s, but the content rocks :-), but putting Solder in your classpath is certainly a safe bet.
Oh, and as far as I know a comparable mechanism (the @Vetoed annotation) made it into the CDI 1.1 spec.
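For the record, that CDI 1.1 mechanism is @Vetoed (javax.enterprise.inject.Vetoed), which tells the container to ignore the class entirely:
@Vetoed
public class PersonBean implements Person {
    // Not processed as a managed bean; the producer method
    // becomes the only source of Person instances.
}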
I have been trying the Abp framework recently and was happy to find that it is a wonderful implementation of DDD. But since it uses AutoMapper to translate DTOs into Entities/Aggregates, I have noticed that it is able to bypass the private setters of my Aggregates, which obviously violates a core rule of DDD. The goal of AutoMapper is to reduce manual mapping work, but DDD enforces invariants through private setters.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
I believe AutoMapper is not an anti-pattern of DDD, since it's very popular in the community. In other words, if AutoMapper can use reflection (as I understand it) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
Thanks to anyone who can help me or give me a hint.
AutoMapper is: A convention-based object-object mapper in .NET.
In itself, AutoMapper does not violate the principle of DDD. It is how you use it that possibly does.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
No, you don't have to give up AutoMapper.
You can specify .IgnoreAllPropertiesWithAnInaccessibleSetter for each map.
Related: How to configure AutoMapper to globally Ignore all Properties With Inaccessible Setter (private or protected)?
In other words, if AutoMapper can use reflection (as I know) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
No, that means that reflection is very powerful.
I don't know a lot about the Abp framework. Private setters are just good old traditional OOP encapsulation, which DDD relies on. You should expose public methods from your aggregate that change its state. AutoMapper can be used in your application layer, where you map the DTOs to domain building blocks (like value objects) and pass them as parameters to your aggregate's public methods, which change its state and enforce its invariants (see the sketch below). Having said that, not everyone loves AutoMapper :)
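The idea is language-agnostic; here is a minimal Java-flavored sketch (all names hypothetical) of an aggregate that keeps its fields private and routes every state change through a public method that enforces the invariant:
public class Company {
    private String name;

    public Company(String name) {
        rename(name); // even construction goes through the invariant check
    }

    // The only way to change state; the invariant lives here.
    public void rename(String newName) {
        if (newName == null || newName.trim().isEmpty()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        this.name = newName;
    }

    public String getName() {
        return name;
    }
}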
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly?
By configuring AutoMapper's profile to construct the aggregate root using a custom expression that uses the aggregate's factory methods or constructors. Here is an example from one of my projects:
public class BphNomenclatureManagerApplicationAutoMapperProfile : Profile
{
    public BphNomenclatureManagerApplicationAutoMapperProfile()
    {
        CreateMap<BphCompany, BphCompanyDto>(MemberList.Destination);
        CreateMap<CreateUpdateBphCompanyDto, BphCompany>(MemberList.Destination)
            // invariants preserved by use of AR constructor:
            .ConstructUsing(dto => new BphCompany(
                dto.Id,
                dto.BphId,
                dto.Name,
                dto.VatIdNumber,
                dto.TradeRegisterNumber,
                dto.IsProducer
            ));
    }
}
Are there any creational design patterns that allow for completely new objects (as in newly written) to be instantiated without having to add a new statement somewhere in existing code?
The main problem to solve is how to identify the class or classes to instantiate. I know of and have used three general patterns for discovering classes, which I'll call registration, self-registration and discovery by type. I'm not aware of them having been written up in a formal pattern description.
Registration: Each class that wants to be discovered is registered somewhere where a framework can find it (see the sketch after this list):
the class name is put in an environment variable or Java system property, in a file, etc.
some code adds the class or its name to a singleton list early in program execution
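A minimal sketch of the registration variant, assuming a framework-owned singleton list (all names hypothetical):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class PluginRegistry {
    private static final List<Class<?>> CLASSES = new ArrayList<>();

    private PluginRegistry() {}

    // Called early in program execution, e.g. from bootstrap code that
    // reads class names out of a system property or a config file.
    public static void register(Class<?> type) {
        CLASSES.add(type);
    }

    public static List<Class<?>> registered() {
        return Collections.unmodifiableList(CLASSES);
    }
}
The framework can later instantiate each registered class reflectively, e.g. with type.getDeclaredConstructor().newInstance(), so no new naming a concrete class appears in existing code.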
Self-registration: Each class that wants to be discovered registers itself, probably in a singleton list (see the sketch after this list). The trick is how the class knows when to do that.
The class might have to be explicitly referred to by some code early in the program (e.g. the old way of choosing a JDBC driver).
In Ruby, the code that defines a class can register the class somewhere (or do anything else) when the class is defined. It suffices to require the file that contains the class.
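A sketch of self-registration in Java, reusing the hypothetical PluginRegistry above; the static initializer runs when the class is loaded, which is exactly how old-style JDBC drivers registered themselves:
public class MyPlugin {
    static {
        // Triggered by e.g. Class.forName("MyPlugin") early in the program.
        PluginRegistry.register(MyPlugin.class);
    }
}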
Discovery by type: Each class that wants to be discovered extends or implements a type defined by the framework, or is named in a particular way. Spring autowiring class annotations are another version of this pattern.
There are several ways to find classes that descend from a given type in Java (here's one SO question, here's another) and Ruby. As with self-registration, in languages like these where classes are loaded dynamically, something has to be done to be sure a class is loaded before asking the runtime what classes are available.
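In Java, java.util.ServiceLoader is a standard, built-in form of discovery by type; a sketch (Handler is a hypothetical framework interface):
import java.util.ServiceLoader;

public class Discovery {
    public static void main(String[] args) {
        // Finds every implementation listed in a
        // META-INF/services/com.example.Handler file on the classpath;
        // no new statement here names a concrete class.
        for (Handler handler : ServiceLoader.load(Handler.class)) {
            handler.handle();
        }
    }
}
Each newly written implementation only has to ship a provider-configuration file naming its concrete class.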
One thing worth noting here is that someone needs to do a new on your newly written class. If you are not doing that yourself, then some framework needs to do it for you.
I can recall doing something similar in one of my side projects using Java Spring. I had an interface and multiple implementations of it. My project required iterating over all the implementations and doing some processing. I was looking for a solution that would not force me to manually instantiate or explicitly wire each particular class, and Spring made that possible through the @Autowired annotation, which injects all implementations of the interface on the fly. For example:
@Autowired
private List<IMyClass> myClassImplementations;
Now in the above example I can simply iterate over the injected list of implementations, and I do not have to instantiate each new implementation I write.
But I think in most cases this approach would be difficult to apply (even though it fits mine). In the general case I would rather go with a Factory pattern and use it for creating new instances (see the sketch below). I know that still requires a new somewhere, but in my view engineering things so that objects are created and injected automatically is a bit of extra overhead.
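A minimal sketch of that factory approach (FirstImpl and SecondImpl are hypothetical implementations of IMyClass): the new still happens, but in exactly one well-known place.
import java.util.Map;
import java.util.function.Supplier;

public final class MyClassFactory {
    // One line to touch per newly written implementation.
    private static final Map<String, Supplier<IMyClass>> CREATORS = Map.of(
            "first", FirstImpl::new,
            "second", SecondImpl::new);

    private MyClassFactory() {}

    public static IMyClass create(String key) {
        Supplier<IMyClass> creator = CREATORS.get(key);
        if (creator == null) {
            throw new IllegalArgumentException("Unknown implementation: " + key);
        }
        return creator.get();
    }
}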
I am new to the Groovy programming language and I am trying to fully understand its dynamic nature and capabilities. What I do know is that every class created in Groovy, in its most basic form, looks like this (extending java.lang.Object and implementing GroovyObject):
public class Foo extends java.lang.Object implements groovy.lang.GroovyObject { }
Every Groovy object also has a MetaClass, which extends MetaObjectProtocol. It is this class hierarchy that provides some of Groovy's dynamic capabilities, including the ability to introspect itself (getProperties, getMethods) and to intercept methods (invokeMethod, methodMissing).
I also understand the different types of meta programming available in Groovy. These give you the ability to add or override functionality at runtime or compile time.
Runtime
Categories
Expando / MetaClass / ExpandoMetaClass
Compile Time
AST Transformations
Extension Module
Now that we have some of that out of the way, we can get to the meat of this question. When someone or a book refers to the "Metaobject Protocol" in Groovy, are we talking about a specific class or a collection of things? I have a hard time grasping something that isn't defined or set in stone. One of my books defined it as:
A protocol is a collection of rules and formats. The Meta-Object-Protocol (MOP) is the collection of rules of how a request for a method call is handled by the Groovy runtime system and how to control the intermediate layer. The "format" of the protocol is defined by the respective APIs.
I also have Venkat's Programming Groovy 2 book, and in it there is a diagram that defines the method lookup process. So I am guessing these are the rules for how a method request is handled (at least for a POGO; I understand a POJO is different).
Anyways I think I am going down the right path but I feel like I am still missing that "ahhhaa" moment. Can anyone fill me in on what I am missing? Or at the very least tell me my ramblings here made some sort of sense :) Thank you!!
This is the answer: "The Meta-Object-Protocol (MOP) is the collection of rules of how a request for a method call is handled by the Groovy runtime system and how to control the intermediate layer." Once you understand the process a method call goes through and the API that comes with it, I think it all makes sense. I was just overthinking it all. Thanks!!
So metaprogramming: the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getters and setters follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code-generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach I know of that natively supports custom code directly as a template parameter.
Meta programming is not only adding methods at runtime, it can also be automatically creating code at compile time. I.e. code generating code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be generated automatically for most properties (see the sketch below).
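As a small illustration of metaprogramming against boilerplate, here is a sketch using Java's java.lang.reflect.Proxy to synthesize getter/setter behavior at runtime from an interface alone (the User interface is made up):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class BeanProxy {
    public interface User {
        String getName();
        void setName(String name);
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> type) {
        Map<String, Object> state = new HashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            if (name.startsWith("get")) {
                return state.get(name.substring(3)); // synthesized getter
            }
            if (name.startsWith("set")) {
                state.put(name.substring(3), args[0]); // synthesized setter
                return null;
            }
            throw new UnsupportedOperationException(name);
        };
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] {type}, handler);
    }

    public static void main(String[] args) {
        User user = create(User.class);
        user.setName("Ada");
        System.out.println(user.getName()); // prints Ada
    }
}
No getter or setter was ever written by hand; the pattern is applied at runtime, much like Grails does with its dynamic finders.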
JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373
You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection.
The general gist seems to be that somewhere (and this will be the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB or @PersistenceContext or @Resource annotations and with CDI's @Produces annotations. The net effect is that Java EE 6 injects a persistence context, say, where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject).
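For illustration, a sketch of such a bridge class along the lines of the Weld reference documentation (the class name and the JNDI name are placeholders):
public class Resources {
    // Java EE injects the persistence context; @Produces re-exposes it
    // so CDI can satisfy @Inject EntityManager elsewhere.
    @Produces
    @PersistenceContext
    private EntityManager entityManager;

    // Same trick for a plain JNDI resource.
    @Produces
    @Resource(name = "jdbc/ExampleDS")
    private DataSource dataSource;
}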
My question is: what is the community's consensus--or is there one--on:
what this bridge class should be named
where this bridge class should live
whether it's best to localize all this stuff into one class or make several of them
...?
Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it.
Thanks.
This seems highly subjective, but I prefer to make several classes, and I name a class producing a Foo FooProducer.