I process messages from a queue. I use data from the incoming message, for example its origin and type, to determine which class should process it. I use the combination of origin and type to look up an FQCN and use reflection to instantiate an object to process the message. At the moment these processing objects are all simple POJOs that implement a common interface, so I am effectively using a strategy pattern.
The problem I am having is that all my external resources (mostly databases accessed via JPA) are injected (@Inject), and when I create the processing object as described above all these injected objects are null. The only way I know to populate these injected resources is to make each implementation of the interface a managed bean by adding @Stateless. This alone does not solve the problem, because the injected members are only populated if the class implementing the interface is itself injected (i.e. container managed) rather than created by me.
Here is a made-up example (sensitive details changed):
public interface MessageProcessor
{
public void processMessage(String xml);
}
@Stateless
public class VisaCreateClient implements MessageProcessor
{
@Inject private DAL db;
…
}
public class MasterCardCreateClient implements MessageProcessor…
In the database there is an entry "visa.createclient" = "fqcn.VisaCreateClient", so if the message origin is "Visa" and the type is "Create Client" I can look up the appropriate processing class. If I use reflection to create VisaCreateClient, the db variable is always null. Even if I add @Stateless and use reflection, the db variable remains null. It's only when I inject VisaCreateClient that the db variable gets populated, like so:
@Stateless
public class QueueReader
{
@Inject VisaCreateClient visaCreateClient;
@Inject MasterCardCreateClient masterCardCreateClient;
@Inject … many more times
private Map<String, MessageProcessor> processors...
private void init()
{
processors.put("visa.createclient", visaCreateClient);
processors.put("mastercard.createclient", masterCardCreateClient);
… many more times
}
}
Now I have dozens of message processors, and if I have to inject each implementation and then register it in the map I'll end up with dozens of injections. Also, should I add more processors, I have to modify the QueueReader class to add the new injections and restart the server; with my old code I merely had to add an entry to the database and deploy the new processor on the classpath - I didn't even have to restart the server!
I have thought of two ways to resolve this:
Add an init(DAL db, OtherResource or, ...) method to the interface that gets called right after the message processor is created with reflection, and pass in the required resources, which were themselves injected into the QueueReader.
Add an argument to processMessage(String xml, Context context), where Context is just a map of resources that were injected into the QueueReader.
But does this approach mean that I will be using the same instance of the DAL object for every message processor? I believe it would, and as long as there is no state involved I believe that is OK - any and all transactions will be started outside of the DAL class.
So my question is: will my approach work? What are the risks of doing it that way? Is there a better way to use a strategy pattern to dynamically select an implementation where the implementation needs access to container-managed resources?
Thanks for your time.
For a similar problem I used an extension of the processor interface to decide which type of message a given handler can process. Then you can inject all variants of the handler via Instance and simply use a loop:
public interface MessageProcessor
{
public boolean canHandle(String xml);
public void processMessage(String xml);
}
And in your QueueReader:
@Inject
private Instance<MessageProcessor> allProcessors;
public void handleMessage(String xml) {
MessageProcessor processor = StreamSupport.stream(allProcessors.spliterator(), false)
.filter(proc -> proc.canHandle(xml))
.findFirst()
.orElseThrow(...);
processor.processMessage(xml);
}
This does not let you add processors to a running server, but to add a new processor you simply implement and deploy it.
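For illustration, one handler under this approach might look like the sketch below. The DAL type and the Visa/create-client combination are taken from the question; the canHandle check is deliberately simplistic and just stands in for whatever origin/type parsing the real messages need.

import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class VisaCreateClient implements MessageProcessor {

    // Injection works because the bean is created by the container, not by reflection
    @Inject
    private DAL db;

    @Override
    public boolean canHandle(String xml) {
        // Simplistic check for illustration; real code would parse origin and type from the message
        return xml.contains("<origin>Visa</origin>") && xml.contains("<type>CreateClient</type>");
    }

    @Override
    public void processMessage(String xml) {
        // process the message using the injected DAL
    }
}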
Related
My application is built on top of Guice and runs scheduled jobs (cron4j), which are showing some issues related to their inherently @Singleton instances.
The appropriate solution to my problem seems to be a scope that applies to each job run, instead of the singleton one. It would be similar to the request scope, but for this different scenario.
I've read the docs for custom scopes, but it isn't clear to me how a given dependency would know how to request a specific scoped instance from Guice.
Example:
public class MyJob {
/* Knows its "run id", which could be used for the scoping mechanism */
@Inject private Dependency dep;
public void run() { ... }
}
public class Dependency {
/* Technically does not know the "run id" from the job */
@Inject @Named("jobRunScope") private InnerDependency innerDep;
}
I appreciate any guidance.
If you look at the source for RequestScoped, you'll see that it uses a ThreadLocal to store a special Context map, holding all of the key-object pairs for the current request.
If your jobs run in a single thread, you could use a similar strategy to store scoped singletons.
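For illustration, a minimal sketch of such a ThreadLocal-backed scope, assuming each job runs on a single thread and that some wrapper around the job calls enter() before the run and exit() afterwards (the name JobRunScope is made up):

import com.google.inject.Key;
import com.google.inject.Provider;
import com.google.inject.Scope;
import java.util.HashMap;
import java.util.Map;

public class JobRunScope implements Scope {

    // One map of scoped instances per thread, i.e. per job run in this setup
    private final ThreadLocal<Map<Key<?>, Object>> values = new ThreadLocal<>();

    public void enter() {
        values.set(new HashMap<>());
    }

    public void exit() {
        values.remove();
    }

    @Override
    public <T> Provider<T> scope(Key<T> key, Provider<T> unscoped) {
        return () -> {
            Map<Key<?>, Object> scoped = values.get();
            if (scoped == null) {
                throw new IllegalStateException("Not inside a job run: " + key);
            }
            @SuppressWarnings("unchecked")
            T instance = (T) scoped.get(key);
            if (instance == null) {
                instance = unscoped.get();
                scoped.put(key, instance);
            }
            return instance;
        };
    }
}

You would still need to declare a scope annotation and register the scope with bindScope(...) in a module, as described in the custom scopes documentation you mentioned.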
Another option would be to create a new Injector instance for each job.
There is a service class FooService with a method named fetchFoos that calls a remote service, deserializes the JSON response, and returns a graph of value objects (starting with a root Foo object). For now there is no other behavior around this remote service, i.e. we are just fetching some 3rd-party data. Speaking in DDD terms, this is a closed bounded context whose sole purpose is providing data, using its own models.
We could leave this method on a service, but... it seems it would be better to rename it to something more 'linguistic'.
For example, we could migrate the singleton service to a simple bean named FooFetcher (any better name?) with a method fetchFooForBar() that does the same. Then, instead of injecting the service, we would simply create a new instance of this object and use it.
I even think that FooFetcher is the wrong domain name; it should be just Foos, and the method would be fetchForBar().
However, some other people think this should come from a repository - so basically we would just rename FooService to FooRepository.
Any collective wisdom on how to encapsulate remote services in DDD?
Assuming Foo is an entity in your bounded context, you can think of this service as an infrastructure service that will be invoked from a repository.
In the following example I named the fetcher "FooFetchService"; it has a method called "getFoo" that returns a JSON string with the "contents" of the Foo object:
public interface FooRepository {
public Foo getById(String fooId);
}
public class RemoteFooRepository implements FooRepository {
@Inject
FooFetchService fooFetchService;
public Foo getById(String fooId) {
String returnedFoo = fooFetchService.getFoo(fooId);
/* add code here to deserialize the JSON contents of the returnedFoo variable into an object Foo foo */
return foo;
}
}
RemoteFooRepository is just an implementation of FooRepository which happens to retrieve a Foo via some remote service. You can inject this in any of your other services classes that need it.
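As a small usage sketch (the consuming class name is made up), any other service can then depend on the FooRepository interface and remain unaware of the remote call:

import javax.inject.Inject;

public class SomeApplicationService {

    // Resolved to RemoteFooRepository, assuming it is the only enabled implementation
    @Inject
    private FooRepository fooRepository;

    public void doSomethingWith(String fooId) {
        Foo foo = fooRepository.getById(fooId);
        // ... domain logic using foo
    }
}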
Question description:
I have a JAX-RS resource POJO defined as below (on the outside is a CXF container integrated with Spring, running in Tomcat):
#Path("/test/{id}")
#Consumes(MediaType.APPLICATION_JSON)
public class TestService {
#PathParam("id") private String id;
#GET
public Response get() throws Exception{
return Response.ok().entity(id).build();
}
}
Then I use JMeter to send some load with auto-incrementing "id" parameters to the server, and I get this issue: the id in the response doesn't match the one that was sent.
E.g. a request to "localhost:8090/test/100" will get "87" in the response.
The frequency of errors increases with more client threads or when the handler method is made slower, like this:
@GET
public Response get() throws Exception{
Thread.sleep(500);
return Response.ok().entity(id).build();
}
My thinking and confusion: TestService is used as a singleton, and since "id" is a shared field, it may cause consistency issues when multiple threads run the get() method, because they all use the shared "id". When I then moved "id" into a method parameter, the issue was resolved:
#Path("/test/{id}")
#Consumes(MediaType.APPLICATION_JSON)
public class TestService {
#GET
public Response get(#PathParam("id") String id) throws Exception{
return Response.ok().entity(id).build();
}
}
My confusion is: if this is a known problem, why have I seen lots of places and articles use the first style of @PathParam, even in the JSR-339 (JAX-RS) final spec?
(image: code snippets from the JSR-339 JAX-RS final spec)
Or are both styles fine and I just made a mistake in my code?
Thanks!
A quick look at the docs seems to suggest that in CXF, with Spring, resources are treated as singletons by default:
"By default, the service beans which are referenced directly from the jaxrs:server endpoint declarations are treated by the runtime as singleton JAX-RS root resources"
Apache CXF Docs - Lifecycle Management Section
But in Jersey, the JAX-RS reference implementation, root resources are treated as per-request scoped (a new one is created on each request) unless otherwise specified.
By default the life-cycle of root resource classes is per-request, namely that a new instance of a root resource class is created every time the request URI path matches the root resource.
See section 3.4 in https://jersey.java.net/documentation/latest/jaxrs-resources.html
So, if you are using CXF with Spring, your resources are likely singletons unless you configure them as Spring prototypes. With per-request scoping, @PathParam as an instance field should be fine, but in singleton scope you would expect to see issues like the ones you describe.
I think I understand how CDI works, and in order to dive deeper into it I would like to try it on a real-world example. I am stuck on one thing where I need your help to understand. I would really appreciate your help in this regard.
I have my own workflow framework developed using the Java reflection API and XML configuration, where based on a specific "source" and "eventName" I load the appropriate Module class and invoke its "process" method. Everything is working fine in our project.
I got excited about this CDI feature and wanted to give it a try with the workflow framework, where I am planning to inject the Module class instead of loading it using reflection etc...
Just to give you an idea, I will try to keep things simple here.
"Message.java" is a kind of Transfer Object which carries "Source" and "eventName", so that we can load module appropriately.
public class Message{
private String source;
private String eventName;
}
Module configurations are as below
<modules>
<module>
<source>A</source>
<eventName>validate</eventName>
<moduleClass>ValidatorModule</moduleClass>
</module>
<module>
<source>B</source>
<eventName>generate</eventName>
<moduleClass>GeneratorModule</moduleClass>
</module>
</modules>
ModuleLoader.java
public class ModuleLoader {
public void loadAndProcess(Message message){
String source=message.getSource();
String eventName=message.getEventName();
//Load Module based on above values.
}
}
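For context, the lookup hidden behind that comment presumably looks roughly like the sketch below (the map of class names would be populated from the XML configuration above, and a common Module interface with a process(Message) method is assumed):

import java.util.HashMap;
import java.util.Map;

public class ModuleLoader {

    // "source.eventName" -> fully qualified module class name, read from the XML configuration
    private final Map<String, String> moduleClassNames = new HashMap<>();

    public void loadAndProcess(Message message) {
        String key = message.getSource() + "." + message.getEventName();
        String className = moduleClassNames.get(key);
        try {
            Module module = (Module) Class.forName(className).getDeclaredConstructor().newInstance();
            module.process(message);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not load module for " + key, e);
        }
    }
}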
Question
Now, if I want to implement the same thing via CDI and have it inject a Module (into the ModuleLoader class), I can write a factory class with a @Produces method which can do that. BUT my question is:
a) How can I pass the Message object to the @Produces method so it can do the lookup based on eventName and source?
Can you please provide me suggestions ?
Thanks in advance.
This one is a little tricky, because CDI doesn't work the same way as your custom solution (if I understand it correctly). CDI must have the full list of dependencies and the resolutions for those dependencies at boot time, whereas your solution sounds like it finds everything at runtime, where things may change. That being said, there are a couple of things you could try.
You could try injecting an InjectionPoint as a parameter to a producer method and returning the correct object, or creating the correct type.
There's also the option of creating your own extension, creating the dependencies and wiring them all up in the extension (take a look at the ProcessInjectionTarget, ProcessAnnotatedType, and AfterBeanDiscovery events). These two quickstarts may also help get some ideas going.
I think you may be going down the wrong path regarding a producer. Instead, it would more than likely be much better to use an observer, especially based on what you've described.
I'm making the assumption that the "Message" transfer object is used abstractly, like a system-wide event: you fire the event and you would like some handler, defined in the XML framework you've created, to determine the correct manager for the event, instantiate it (if need be), and then call that class, passing it the event.
@ApplicationScoped
public class MyMessageObserver {
public void handleMessageEvent(@Observes Message message) {
//Load Module based on above values and process the event
}
}
Now let's assume you want to utilize your original interface (I'll guess it looks like this):
public interface IMessageHandler {
public void handleMessage(final Message message);
}
@ApplicationScoped
public class EventMessageHandler implements IMessageHandler {
@Inject
private Event<Message> messageEvent;
public void handleMessage(Message message) {
messageEvent.fire(message);
}
}
Then, in any legacy class where you want to use it:
@Inject
IMessageHandler handler;
This will allow you to do everything you've described.
Maybe you need something like this:
First you need a qualifier: an annotation like @Module which takes two parameters, source and eventName; they should be nonbinding values. See the docs.
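A sketch of such a qualifier, with both members marked @Nonbinding so they do not take part in typesafe resolution (note that in real code the annotation should not clash with the Module bean type, so put it in a different package or give it another name):

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Retention(RUNTIME)
@Target({ FIELD, METHOD, PARAMETER, TYPE })
public @interface Module {
    @Nonbinding String source() default "";
    @Nonbinding String eventName() default "";
}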
Second you need a producer:
@Produces
@Module
public Module makeAmodule(InjectionPoint ip) {
// load the module, take source and eventName from ip
}
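For reference, the producer could read those values from the injection point roughly like this (a sketch; loadModule is a hypothetical helper standing in for whatever lookup the existing framework already does):

@Produces
@Module
public Module makeAmodule(InjectionPoint ip) {
    // The qualifier members are nonbinding, so read the requested values from the injection point.
    // If the qualifier and the bean type are both literally named Module, one of them has to be
    // referenced by its fully qualified name; they are kept short here for readability.
    String source = ip.getAnnotated().getAnnotation(Module.class).source();
    String eventName = ip.getAnnotated().getAnnotation(Module.class).eventName();
    return loadModule(source, eventName); // hypothetical helper that instantiates the right module
}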
Inject it at the proper place, like this:
@Inject
@Module(source="A", eventName="validate")
Module modulA;
There is only one issue with that solution: the modules must be dependent scoped, otherwise the system will inject the same module regardless of source and eventName.
If you want to use scopes, then you need to make source and eventName binding qualifier parameters and either:
write a CDI extension that registers the producers programmatically,
or write a producer method for each and every possible combination of source and eventName (which I do not think is nice).
I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:
public interface IUnitOfWork
{
void BeginTransaction();
void Commit();
}
and our repositories looked like this:
public interface IRepository<T>
{
T GetByID();
void Save(T entity);
void Delete(T entity);
}
In this setup our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.
Now we are trying to adapt it to Entity Framework and take advantage of POCOs. Ideally we would not need the Save() method on the repository, because the domain objects are tracked by the object context; we would just need a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:
public interface IUnitOfWork
{
void BeginTransaction();
void Save();
void Commit();
}
public interface IRepository<T>
{
T GetByID();
void Add(T entity);
void Delete(T entity);
}
This solves the data access problem for Entity Framework, but it does not solve the problem with our Active Directory integration. Before, that logic lived in the repository's Save() method, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store accessed through Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?
I wanted to come back and follow up with what I have learned since I posted this. It seems that if you are going to stay true to the repository pattern, the data stores it persists to do not matter. If there are two data stores, write to them both from the same repository. What is important is to keep up the facade that the repository pattern represents: an in-memory collection. I would not create separate repositories, because that doesn't feel like a true abstraction to me; you are letting the technology under the hood dictate the design at that point. To quote from dddstepbystep.com:
What Sits Behind A Repository? Pretty much anything you like. Yep, you heard it right. You could have a database, or you could have many different databases. You could use relational databases, or object databases. You could have an in memory database, or a singleton containing a list of in memory items. You could have a REST layer, or a set of SOA services, or a file system, or an in memory cache… You can have pretty much anything – your only limitation is that the Repository should be able to act like a Collection to your domain. This flexibility is a key difference between Repository and traditional data access techniques.
http://thinkddd.com/assets/2/Domain_Driven_Design_-_Step_by_Step.pdf
First, I assume you are using an IoC container. I advocate that you make true repositories for each entity type. This means you will wrap each object context EntitySet in a class that implements something like:
interface IRepository<TEntity> {
TEntity Get(int id);
void Add(TEntity entity);
void Save(TEntity entity);
void Remove(TEntity entity);
bool CanPersist<T>(T entity);
}
CanPersist merely returns whether that repository instance supports persisting the passed entity, and is used polymorphically by UnitOfWork.Save described below.
Each IRepository will also have a constructor that allows the IRepository to be constructed in "transactional" mode. So, for EF, we might have:
public partial class EFEntityARepository : IRepository<EntityA> {
public EFEntityARepository(EFContext context, bool transactional) {
_context = context;
_transactional = transactional;
}
public void Add(EntityA entity) {
_context.EntityAs.Add(entity);
if (!_transactional) _context.SaveChanges();
}
}
UnitOfWork should look like this:
interface UnitOfWork {
void Add<TEntity>(TEntity entity);
void Save<TEntity>(TEntity entity);
void Remove<TEntity>(TEntity entity);
void Complete();
}
The UnitOfWork implementation will use dependency injection to get instances of all IRepository implementations. In UnitOfWork.Save/Add/Remove, the UoW will pass the argument entity into CanPersist of each IRepository. For any true return values, the UnitOfWork will store that entity in a private collection specific to that IRepository and to the intended operation. In Complete, the UnitOfWork will go through all the private entity collections and call the appropriate operation on the appropriate IRepository for each entity.
If you have an entity that needs to be partially persisted by EF and partially persisted by AD, you would have two IRepository classes for that entity type (they would both return true from CanPersist when passed an instance of that entity type).
As for maintaining atomicity between EF and AD, that is a separate non-trivial problem.
IMO I would wrap the calls to both of these repos in a service type of class, and then use IoC/DI to inject the repo types into the service class. You would have two repos, one for Entity Framework and one that supports AD. This way each repo deals only with its underlying data store and doesn't have to cross over.
What I have done to support multiple unit of work types is to have IUnitOfWork be more of a factory. I create another type called IUnitOfWorkScope, which is the actual unit of work and has only a Commit method.
namespace Framework.Persistance.UnitOfWork
{
public interface IUnitOfWork
{
IUnitOfWorkScope Get();
IUnitOfWorkScope Get(bool shared);
}
public interface IUnitOfWorkScope : IDisposable
{
void Commit();
}
}
This allows me to inject different implementations of the unit of work into a service and be able to use them side by side.