Guice scope for scheduled job run

My application is built on top of Guice and runs scheduled jobs (cron4j), which are showing some issues caused by instances that are inherently @Singleton.
The appropriate solution to my problem seems to be a scope that applies to each job run instead of the Singleton one. It would be similar to the Request scope, but for this different scenario.
I've read the docs for Custom Scopes, but it isn't clear to me how a given dependency would know how to request a specific scoped instance from Guice.
Example:
public class MyJob {
    /* Knows its "run id", which could be used for the scoping mechanism */
    @Inject private Dependency dep;

    public void run() { ... }
}

public class Dependency {
    /* Technically does not know the "run id" from the job */
    @Inject @Named("jobRunScope") private InnerDependency innerDep;
}
I appreciate any guidance.

If you look at the source for RequestScoped, you'll see that it uses a ThreadLocal to store a special Context map, holding all of the key-object pairs for the current request.
If your jobs run in a single thread, you could use a similar strategy to store scoped singletons.
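A minimal sketch of that approach, assuming each job runs on a single thread (the class and annotation names here are illustrative, not part of Guice itself):

import com.google.inject.Key;
import com.google.inject.OutOfScopeException;
import com.google.inject.Provider;
import com.google.inject.Scope;
import java.util.HashMap;
import java.util.Map;

public class JobRunScope implements Scope {
    private final ThreadLocal<Map<Key<?>, Object>> values = new ThreadLocal<>();

    /** Call at the start of each job run. */
    public void enter() {
        values.set(new HashMap<>());
    }

    /** Call in a finally block at the end of each job run. */
    public void exit() {
        values.remove();
    }

    @Override
    public <T> Provider<T> scope(Key<T> key, Provider<T> unscoped) {
        return () -> {
            Map<Key<?>, Object> scoped = values.get();
            if (scoped == null) {
                throw new OutOfScopeException("Not inside a job run");
            }
            @SuppressWarnings("unchecked")
            T instance = (T) scoped.get(key);
            if (instance == null) {
                instance = unscoped.get();
                scoped.put(key, instance);
            }
            return instance;
        };
    }
}

You would then bind a custom scope annotation to it in a module (for example bindScope(JobRunScoped.class, jobRunScope), where @JobRunScoped is an annotation you define yourself) and wrap each job execution in enter()/exit(). Anything bound in that scope, including the run id, then stays constant for the duration of a single run.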
Another option would be to create a new Injector instance for each job.
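A sketch of the per-job Injector idea, assuming the run id is the only per-run value you need to seed (the class and binding names are illustrative):

import com.google.inject.AbstractModule;
import com.google.inject.Injector;
import com.google.inject.name.Names;

public class JobRunner {
    private final Injector rootInjector;

    public JobRunner(Injector rootInjector) {
        this.rootInjector = rootInjector;
    }

    public void runJob(String runId) {
        // The child injector inherits all parent bindings but adds fresh per-run ones.
        Injector jobInjector = rootInjector.createChildInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(String.class).annotatedWith(Names.named("runId")).toInstance(runId);
            }
        });
        jobInjector.getInstance(MyJob.class).run();
    }
}

This is simpler than a custom scope, at the cost of creating a child injector for every run.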

Related

How to use the strategy pattern with managed objects

I process messages from a queue. I use data from the incoming message, for example its origin and type, to determine which class should process it. I use the combination of origin and type to look up an FQCN and use reflection to instantiate an object to process the message. At the moment these processing objects are all simple POJOs that implement a common interface, hence I am using a strategy pattern.
The problem I am having is that all my external resources (mostly databases accessed via JPA) are injected (@Inject), and when I create the processing object as described above all these injected objects are null. The only way I know to populate these injected resources is to make each implementation of the interface a managed bean by adding @Stateless. This alone does not solve the problem, because the injected members are only populated if the class implementing the interface is itself injected (i.e. container managed), as opposed to being created by me.
Here is a made-up example (sensitive details changed):
public interface MessageProcessor
{
    public void processMessage(String xml);
}

@Stateless
public class VisaCreateClient implements MessageProcessor
{
    @Inject private DAL db;
    …
}
public class MasterCardCreateClient implements MessageProcessor…
In the database there is an entry "visa.createclient" = "fqcn.VisaCreateClient", so if the message origin is "Visa" and the type is "Create Client" I can look up the appropriate processing class. If I use reflection to create VisaCreateClient, the db variable is always null. Even if I add @Stateless and use reflection, the db variable remains null. It's only when I inject VisaCreateClient that the db variable gets populated. Like so:
@Stateless
public class QueueReader
{
    @Inject VisaCreateClient visaCreateClient;
    @Inject MasterCardCreateClient masterCardCreateClient;
    @Inject … many more times

    private Map<String, MessageProcessor> processors...

    private void init()
    {
        processors.put("visa.createclient", visaCreateClient);
        processors.put("mastercard.createclient", masterCardCreateClient);
        … many more times
    }
}
Now I have dozens of message processors, and if I have to inject each implementation and then register it in the map, I'll end up with dozens of injections. Also, should I add more processors, I have to modify the QueueReader class to add the new injections and restart the server; with my old code I merely had to add an entry to the database and deploy the new processor on the classpath - I didn't even have to restart the server!
I have thought of two ways to resolve this:
Add an init(DAL db, OtherResource or, ...) method to the interface (sketched below) that gets called right after the message processor is created with reflection, passing in the required resources. The resources themselves were injected into the QueueReader.
Add a context argument, processMessage(String xml, Context context), where Context is just a map of the resources that were injected into the QueueReader.
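A rough sketch of the first option, assuming the interface gains an init method (DAL is the type from the example above; the other names are illustrative):

public interface MessageProcessor
{
    public void init(DAL db);
    public void processMessage(String xml);
}

// Inside QueueReader, after looking up the FQCN for the message:
private void dispatch(String fqcn, String xml) throws Exception {
    MessageProcessor processor =
            (MessageProcessor) Class.forName(fqcn).getDeclaredConstructor().newInstance();
    processor.init(db);             // db was @Inject-ed into QueueReader
    processor.processMessage(xml);
}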
But does this approach mean that I will be using the same instance of the DAL object for every message processor? I believe it would and as long as there is no state involved I believe it is OK - any and all transactions will be started outside of the DAL class.
So my question is will my approach work? What are the risks of doing it that way? Is there a better way to use a strategy pattern to dynamically select an implementation where the implementation needs access to container managed resources?
Thanks for your time.
For a similar problem I used an extension of the processor interface to decide which type of data it can handle. You can then inject all variants of the handler via Instance and simply use a loop (or stream):
public interface MessageProcessor
{
    public boolean canHandle(String xml);
    public void processMessage(String xml);
}
And in your QueueReader:
@Inject
private Instance<MessageProcessor> allProcessors;

public void handleMessage(String xml) {
    MessageProcessor processor = StreamSupport.stream(allProcessors.spliterator(), false)
            .filter(proc -> proc.canHandle(xml))
            .findFirst()
            .orElseThrow(...);
    processor.processMessage(xml);
}
This doesn't pick up new processors on a running server, but to add one you simply implement the interface and redeploy.

Seed value in Weld CDI custom scope

Coming from a Guice background, I know that it is possible to seed an object value in a scope using:
scope.seed(Key.get(SomeObject.class), someObject);
I suppose one could do this by registering a Bean that gets a value from an AbstractBoundContext, but examples just seeding one value from a Custom Scope seem hard to find. How do I create a custom scope that seeds a value that can be injected elsewhere?
Edit:
I am currently using the following workaround: it can be injected into an interceptor to set the Configuration when entering the scope, and the Configuration can then be injected elsewhere through its thread-local producer. I am still looking for options that feel less hacky and are better integrated with the scope/scope-context system in Weld, though.
@Singleton
public class ConfigurationProducer {

    private final InheritableThreadLocal<Configuration> threadLocalConfiguration =
            new InheritableThreadLocal<>();

    @Produces
    @ActiveDataSet
    public Configuration getConfiguration() {
        return threadLocalConfiguration.get();
    }

    public void setConfiguration(Configuration configuration) {
        threadLocalConfiguration.set(configuration);
    }
}
The answer is to register a custom bean with the AfterBeanDiscovery event, like so:
event.addBean()
     .createWith(ctx -> commandContext.getCurrentCommandExecution())
     .addType(CommandExecution.class)
     .addQualifier(Default.Literal.INSTANCE)
     .scope(CommandScoped.class)
     .beanClass(CommandExtension.class);
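For context, a sketch of how that registration might sit inside a portable extension (this assumes CDI 2.0+; CommandContext, CommandExecution, and CommandScoped are stand-ins taken from the snippet and the linked example):

import javax.enterprise.event.Observes;
import javax.enterprise.inject.Default;
import javax.enterprise.inject.spi.AfterBeanDiscovery;
import javax.enterprise.inject.spi.Extension;

public class CommandExtension implements Extension {

    private final CommandContext commandContext = new CommandContext();

    void addCommandExecutionBean(@Observes AfterBeanDiscovery event) {
        event.addBean()
             .createWith(ctx -> commandContext.getCurrentCommandExecution())
             .addType(CommandExecution.class)
             .addQualifier(Default.Literal.INSTANCE)
             .scope(CommandScoped.class)
             .beanClass(CommandExtension.class);
    }
}

The extension also has to be listed in META-INF/services/javax.enterprise.inject.spi.Extension so the container picks it up.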
There is a quite sophisticated example available at https://github.com/weld/command-context-example

Can I use an EJB in an infinite thread

I would like to know whether it's prohibited to use an EJB in an infinite thread (since it can't be given back to the container).
Something like this:
@ManagedBean(eager = true)
@ApplicationScoped
public class TestListenner {

    private ResultThread result;

    @PostConstruct
    public void init() {
        result = new ResultThread();
        Thread myThread = new Thread(result);
        myThread.start();
    }

    public ResultThread getResult() {
        return result;
    }
}
And the thread:
public class ResultThread implements Runnable {

    @EJB
    private SomeService service;

    private boolean continueWork = true;

    public void run() {
        while (continueWork) {
            service.doSomething();
            // some processing
        }
    }
}
I've been working with EJBs since I started working with databases. I went over DAO factories and the like, but I've forgotten about them (it was a year ago). I use EJBs to act on my database when a user requests a web page in my web app. But now I need a thread that continuously computes things in my database to decrease the response time. If I cannot use an EJB because the container needs to keep a handle on it, then what should I use?
Hopefully I can use a class similar to what I'm used to:
@Stateless
public class SomeServiceImpl implements SomeService {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public void updateCategory(SomeClass theclass) {
        em.merge(theclass);
    }
}
Edit: The first answer by BalusC in this topic seems to imply that spawning threads in a managed bean wouldn't be dangerous in a case where no additional threads could be spawned. My bean is ApplicationScoped, so the web app uses one and only one instance of it to do background work on the database (I actually have something like a TOP 100 "posts" table that needs to be continually recalculated over time, so I can query it - with another bean - for a fast answer).
What you have now won't work for at least one reason:
You can't inject resources into non-managed components. For the @EJB annotation to work, ResultThread ought to be a managed bean, injected by the container. That means that you must at least use CDI to inject it, rather than the new ResultThread() you have now. What will work will look something like:
@Inject
private ResultThread result;
This way, the container gets in on the action.
The bottom line, however, is that there are better ways of doing what you appear to be trying to do:
An EJB Timer (sketched below)
The new ManagedExecutor
Async EJBs
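As a sketch of the first option, the infinite loop can become a method the container invokes on a schedule (the class name and schedule here are illustrative; SomeService is from the question):

import javax.ejb.EJB;
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class BackgroundRecalculator {

    @EJB
    private SomeService service;

    // The container calls this once a minute on a managed thread,
    // so no hand-spawned infinite loop is needed.
    @Schedule(hour = "*", minute = "*", persistent = false)
    public void recalculate() {
        service.doSomething();
    }
}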
It may also interest you to know that EJBs are not allowed to spawn their own threads; in fact, it's frowned upon to do any handmade threading in the container. Why? The container is a managed environment - one where memory and concurrency have already been well thought out and designed. Your hand-spun thread breaks that model and any guarantees that the container may have been able to give you on your beans and other app components.
Related:
Why is spawning threads in Java EE container discouraged?
You can't use your own threads in a Java EE container.
http://www.oracle.com/technetwork/java/restrictions-142267.html#threads
The Java EE spec provides a TimerService for this kind of work.
https://docs.oracle.com/javaee/6/tutorial/doc/bnboy.html

Can CDI @Producer method take custom parameters?

I think I understand how CDI works, and in order to dive deeper into it I would like to try using it with a real-world example. I am stuck on one thing where I need your help to understand. I would really appreciate your help in this regard.
I have my own workflow framework, developed using the Java reflection API and XML configuration, where based on a specific "source" and "eventName" I load the appropriate Module class and invoke its "process" method. Everything is working fine in our project.
I got excited about CDI and wanted to give it a try with the workflow framework, where I am planning to inject the Module class instead of loading it using reflection, etc.
Just to give you an idea, I will try to keep things simple here.
"Message.java" is a kind of Transfer Object which carries "Source" and "eventName", so that we can load module appropriately.
public class Message {
    private String source;
    private String eventName;
}
The module configuration is as below:
<modules>
    <module>
        <source>A</source>
        <eventName>validate</eventName>
        <moduleClass>ValidatorModule</moduleClass>
    </module>
    <module>
        <source>B</source>
        <eventName>generate</eventName>
        <moduleClass>GeneratorModule</moduleClass>
    </module>
</modules>
ModuleLoader.java
public class ModuleLoader {
    public void loadAndProcess(Message message) {
        String source = message.getSource();
        String eventName = message.getEventName();
        //Load Module based on above values.
    }
}
Question
Now, if I want to implement the same via CDI and have a Module injected (in the ModuleLoader class), I can write a factory class with a @Produces method, which can do that. BUT my question is:
a) How can I pass the Message object to the @Produces method to do the lookup based on eventName and source?
Can you please provide suggestions?
Thanks in advance.
This one is a little tricky because CDI doesn't work the same way as your custom solution (if I understand it correctly). CDI must have the full list of dependencies and their resolutions at boot time, whereas your solution sounds like it finds everything at runtime, where things may change. That being said, there are a couple of things you could try.
You could try injecting an InjectionPoint as a parameter to a producer method and returning the correct object, or creating the correct type.
There's also the option of writing your own extension to do this, creating the dependencies and wiring them all up in the extension (take a look at the ProcessInjectionTarget, ProcessAnnotatedType, and AfterBeanDiscovery events). These two quickstarts may also help get some ideas going.
I think you may be going down the wrong path regarding a producer. Instead, it would more than likely be much better to use an observer, especially based on what you've described.
I'm assuming that the "Message" transfer object is used abstractly, like a system-wide event: you fire the event and you would like some handler, defined in the XML framework you've created, to determine the correct manager for the event, instantiate it (if need be), and then call it, passing it the event.
@ApplicationScoped
public class MyMessageObserver {
    public void handleMessageEvent(@Observes Message message) {
        //Load Module based on above values and process the event
    }
}
Now let's assume you want to utilize your original interface (I'll guess it looks like):
public interface IMessageHandler {
    public void handleMessage(final Message message);
}

@ApplicationScoped
public class EventMessageHandler implements IMessageHandler {

    @Inject
    private Event<Message> messageEvent;

    public void handleMessage(Message message) {
        messageEvent.fire(message);
    }
}
Then in any legacy class you want to use it:
@Inject
IMessageHandler handler;
This will allow you to do everything you've described.
Maybe you need something like this:
First you need a qualifier: an annotation like @Module, which takes two parameters, source and eventName; they should be non-binding values (@Nonbinding), so they don't participate in qualifier matching. See the docs.
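A sketch of what that qualifier could look like; the members are @Nonbinding so that one producer method can serve every source/eventName combination (assuming a standard CDI environment):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER, ElementType.TYPE})
public @interface Module {
    @Nonbinding String source() default "";
    @Nonbinding String eventName() default "";
}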
Second you need a producer:
@Produces
@Module
public Module makeAmodule(InjectionPoint ip) {
    // load the module, take source and eventName from ip
}
Inject it at the proper place like this:
@Inject
@Module(source = "A", eventName = "validate")
Module modulA;
There is only one issue with that solution: the modules must be @Dependent scoped, otherwise the system will inject the same module regardless of source and eventName.
If you want to use scopes, then you need to make source and eventName binding qualifier parameters and either:
write a CDI extension and register the producers programmatically,
or write a producer method for each and every possible combination of source and eventName (I do not think that is nice).

NInject and thread-safety

I am having problems with the following class in a multi-threaded environment:
public class Foo
{
    [Inject]
    public IBar InjectedBar { get; set; }

    public bool NonInjectedProp { get; set; }

    public void DoSomething()
    {
        /* The following line is causing a null-reference exception */
        InjectedBar.DoSomething();
    }

    public Foo(bool nonInjectedProp)
    {
        /* This line should inject the InjectedBar property */
        KernelContainer.Inject(this);
        NonInjectedProp = nonInjectedProp;
    }
}
This is a legacy class which is why I am using property rather than constructor injection.
Sometimes when DoSomething() is called, the InjectedBar property is null. In a single-threaded application, everything runs fine.
How can this be occurring, and how can I prevent it?
I am using NInject 2.0 without any extensions, although I have copied the KernelContainer from the NInject.Web project.
I have noticed a similar problem occurring in my web services. This problem is extremely intermittent and difficult to replicate.
First of all, let me say that this is wrong on so many levels; the KernelContainer was an infrastructure class kept specifically to work around certain limitations in the ASP.NET WebForms page lifecycle. It was never meant to be used in application code. Using the Ninject kernel (or any DI container) as a service locator is an anti-pattern.
That being said, Ninject itself is definitely thread-safe because it's used to service parallel requests in ASP.NET all the time. Wherever this NullReferenceException is coming from, it's got little if anything to do with Ninject.
I can think of two possibilities:
You have to initialize KernelContainer.Kernel somewhere, and that code might have a race condition. If something tries to use the KernelContainer before the kernel is fully initialized (possible if you use the IKernel.Bind methods instead of loading modules as per the guidance), you'll get errors like this. Or:
It's your IBar implementation itself that has problems, and the NullReferenceException is happening somewhere inside the DoSomething method. You don't actually specify that InjectedBar is null when you get the exception, so that's a legitimate possibility here.
Just to narrow the field of possibilities, I'd eliminate the KernelContainer first. If you absolutely must use Ninject as a service locator due to a poorly-designed legacy architecture, then at least allow it to create the dependencies instead of relying on Inject(this). That is to say, whichever class or classes need to create your Foo, have that class call kernel.Get<Foo>(), and set up your kernel to Bind<Foo>().ToSelf().

Resources