I want to be able to support multiple versions of Java ME without maintaining multiple builds. I already know how to detect the profile/configuration/supported JSRs. My problem is that knowing at run time that a JSR is supported doesn't let me use all of its features, because Java ME does not provide full support for reflection.
If I call a method added in a later version anywhere in the code - even in a location that will never be executed - this can cause an error during class resolution on some JVMs. Is there any way around this?
If you only need to access class C through an interface that you know will be available, then it is simple enough:
MyInterface provider = null;
try {
    // Triggers class loading; throws if sysPackage.C is unavailable
    Class myClass = Class.forName("sysPackage.C");
    provider = (MyInterface) myClass.newInstance();
} catch (Exception ex) {
    // C is not available; leave provider null
}
if (provider != null) {
    // Use provider
}
If C does not have an interface that can be used, then we can instead create a wrapper class S that implements the interface and wraps C.
class S implements MyInterface {
    static {
        try {
            // Fails during resolution of S if C is unavailable
            Class.forName("sysPackage.C");
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    // Calling this forces the static block above to run
    public static void forceExceptionIfUnavailable() {}

    // TODO: Methods that use C. Class C will only be used within this class
}
S has a static block so that an exception is thrown during class resolution if C is unavailable. Immediately after loading the class, we call forceExceptionIfUnavailable() to make sure the static block runs immediately. If no exception is thrown, we can use the methods in S to indirectly use class C.
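For example, the guard might look like this (a sketch; the fallback handling and the Throwable catch are my assumptions):

MyInterface provider = null;
try {
    // Forces S's static initializer to run; throws if sysPackage.C is missing
    S.forceExceptionIfUnavailable();
    provider = new S();
} catch (Throwable t) {
    // C is unavailable on this device; leave provider null or use a fallback
}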
Alternatively, we can use the following approach:
Basically, you create a new package P with a public abstract class A and a concrete subclass S that is private to the package. A has a static method getS that returns an instance of S, or null if an exception is thrown during instantiation. Each instance of S holds an instance of C, so instantiation will fail when C is unavailable and succeed otherwise. This method seems a bit safer, as S (and hence all the C APIs) is package private.
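Here is what that might look like (the names P, A, and S come from the description above; the delegation details are assumptions):

package P;

public abstract class A {
    // Returns an S instance, or null if instantiation fails because C is unavailable
    public static A getS() {
        try {
            return new S();
        } catch (Throwable t) {
            return null;
        }
    }

    // TODO: abstract methods mirroring the functionality of C
}

class S extends A { // package private, so C never leaks out of P
    private final sysPackage.C c = new sysPackage.C(); // fails when C is unavailable
    // TODO: implement A's abstract methods by delegating to c
}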
To externalize UI strings we use the "Messages class" approach as supported, e.g., in Eclipse and other IDEs. This approach requires that each package needing UI strings contains a class "Messages" that offers a static method String getString(key), via which one obtains the actual string to display to the user. Internally, the strings are accessed/fetched using Java's resource bundle mechanism for i18n.
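Such a class typically looks roughly like this (a sketch; the package and bundle name x.y.z.messages are illustrative):

package x.y.z;

import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class Messages {
    private static final String BUNDLE_NAME = "x.y.z.messages"; // illustrative
    private static final ResourceBundle RESOURCE_BUNDLE = ResourceBundle.getBundle(BUNDLE_NAME);

    private Messages() {}

    public static String getString(String key) {
        try {
            return RESOURCE_BUNDLE.getString(key);
        } catch (MissingResourceException e) {
            return '!' + key + '!';
        }
    }
}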
Especially after refactoring, we again and again end up with accidental imports of a Messages class from a different package.
Thus I would like to create an ArchUnit rule that checks that we only access classes called "Messages" from the very same package, i.e., each import of a class x.y.z.Messages is an error if the package x.y.z is not the package of the class that contains the import.
I got as far as this:
@ArchTest
void preventReferencesToMessagesOutsideCurrentPackage(JavaClasses classes) {
    ArchRule rule = ArchRuleDefinition.noClasses()
            .should().accessClassesThat().haveNameMatching("Messages")
            .???
            ;
    rule.check(classes);
}
but now I got stuck at the ???.
How can one phrase a condition "and the referenced/imported class "Messages" is not in the same package as this class"?
I somehow got lost among all these ArchUnit methods, none of which seems to fit here or lend itself to composing said condition. Probably I just can't see the forest for the trees.
Any suggestion or guidance anyone?
You need to operate on instances of JavaAccess to validate the dependencies. A JavaAccess provides information about the caller and the target, so you can validate the access dynamically based on the package names of both classes.
DescribedPredicate<JavaAccess<?>> isForeignMessageClassPredicate =
        new DescribedPredicate<JavaAccess<?>>("target is a foreign Messages class") {
            @Override
            public boolean apply(JavaAccess<?> access) {
                JavaClass targetClass = access.getTarget().getOwner();
                if ("Messages".equals(targetClass.getSimpleName())) {
                    JavaClass callerClass = access.getOwner().getOwner();
                    // Report only accesses that cross package boundaries
                    return !targetClass.getPackageName().equals(callerClass.getPackageName());
                }
                return false;
            }
        };
ArchRule rule =
noClasses().should().accessTargetWhere(isForeignMessageClassPredicate);
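Wired into the test method from the question, this could look like the following (a sketch):

@ArchTest
void preventReferencesToMessagesOutsideCurrentPackage(JavaClasses classes) {
    ArchRuleDefinition.noClasses()
            .should().accessTargetWhere(isForeignMessageClassPredicate)
            .check(classes);
}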
One would think that in C# 8.0 you should be able to do the following (according to this (1st snippet)):
public interface IRestApiClient : IRestClient
{
    ...

    Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
    {
        return QueryPrivateAsync(Method.POST, action, parameters, deserializer);
    }

    ...
}

public class SpecificClient : ExchangeClient, IRestApiClient, IRestHtmlClient, ISeleniumClient, IWebSocketClient
{
}
The example above won't compile, because the interface members need to be explicitly and wholly implemented (including the methods supplying the default logic).
So one would think that the following should work:
public interface IRestApiClient : IRestClient
{
    ...

    Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
    {
        return QueryPrivateAsync(Method.POST, action, parameters, deserializer);
    }

    ...
}

public class SpecificClient : ExchangeClient, IRestApiClient, IRestHtmlClient, ISeleniumClient, IWebSocketClient
{
    ...

    public async Task<T> PostPrivateAsync<T>(string action, OrderedDictionary<string, object> parameters = null, DeserializeCustom<T> deserializer = null)
        => await ((IRestApiClient) this).PostPrivateAsync(action, parameters, deserializer);

    ...
}
Nope, it looks like this method is recursive (despite the upcast) and will cause our favorite StackOverflowException.
So my question is (setting aside the fact that I could change the design in my example): is there a way of keeping the implementation for a specific method default, preferably without resorting to hacky static helper extension methods? I could call a static extension method from both the interface and the class, but that kind of defeats the purpose of this feature.
// EDIT
I must admit it confuses me, and it appears I am missing something critical that is obvious to other people. I didn't provide additional info because I didn't consider my issue to be code specific. Let's look at this simple example (taken from the website I linked at the beginning of my post):
According to @Panagiotis Kanavos' comment: "No, default members don't need to be implemented (...)", what I screenshotted should not be true. Can somebody please enlighten me?
// EDIT 2
As you can see, I am properly targeting .NET Core 3.0 with C# 8.0.
ERRORS:
Interface method cannot declare a body
Interface member 'void CryptoBotCoreMVC.IDefaultInterfaceMethod.DefaultMethod()' is not implemented
To answer the question in the comments: I didn't specify LangVersion explicitly in the .csproj file.
// EDIT 3
The issue was ReSharper, see:
https://stackoverflow.com/a/58614702/3783852
My comment has been deleted, presumably by the owner of the answer, so I'll write it here: the clue was that there were actually no error numbers, yet the compilation was blocked. It turned out that ReSharper has an option to block compilation when these errors occur.
It seems that in the end this is a possible duplicate, but getting to this conclusion was quite a journey :).
The issue is caused by ReSharper, reference:
https://youtrack.jetbrains.com/issue/RSRP-474628
It appears that the problem will be resolved in v2019.3, while we currently have v2019.2.3. You can set up ReSharper to block compilation depending on issue severity; the workaround is to disable this feature for the time being.
I process messages from a queue. I use data from the incoming message, for example its origin and type, to determine which class should process it. I use the combination of origin and type to look up an FQCN and then use reflection to instantiate an object to process the message. At the moment these processing objects are all simple POJOs that implement a common interface, so I am using a strategy pattern.
The problem I am having is that all my external resources (mostly databases accessed via JPA) are injected (@Inject), and when I create the processing object as described above, all these injected objects are null. The only way I know to populate these injected resources is to make each implementation of the interface a managed bean by adding @Stateless. This alone does not solve the problem, because the injected members are only populated if the class implementing the interface is itself injected (i.e. container managed), as opposed to being created by me.
Here is a made-up example (sensitive details changed):
public interface MessageProcessor
{
    public void processMessage(String xml);
}

@Stateless
public class VisaCreateClient implements MessageProcessor
{
    @Inject private DAL db;
    …
}

public class MasterCardCreateClient implements MessageProcessor…
In the database there is an entry "visa.createclient" = "fqcn.VisaCreateClient", so if the message origin is "Visa" and the type is "Create Client" I can look up the appropriate processing class. If I use reflection to create VisaCreateClient, the db variable is always null. Even if I add @Stateless and use reflection, the db variable remains null. It's only when I inject VisaCreateClient that the db variable gets populated, like so:
@Stateless
public class QueueReader
{
    @Inject VisaCreateClient visaCreateClient;
    @Inject MasterCardCreateClient masterCardCreateClient;
    @Inject … // many more times

    private Map<String, MessageProcessor> processors...

    private void init()
    {
        processors.put("visa.createclient", visaCreateClient);
        processors.put("mastercard.createclient", masterCardCreateClient);
        // … many more times
    }
}
Now I have dozens of message processors, and if I have to inject each implementation and then register it in the map, I'll end up with dozens of injections. Also, should I add more processors, I have to modify the QueueReader class to add the new injections and restart the server; with my old code I merely had to add an entry to the database and deploy the new processor on the class path - I didn't even have to restart the server!
I have thought of two ways to resolve this:
Add an init(DAL db, OtherResource or, ...) method to the interface that is called right after the message processor is created with reflection, passing in the required resources (sketched below). The resources themselves were injected into the QueueReader.
Add an argument to processMessage(String xml, Context context), where Context is just a map of the resources that were injected into the QueueReader.
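A minimal sketch of the first option (the init signature and the way resources are handed over are my assumptions):

public interface MessageProcessor
{
    void init(DAL db); // hypothetical: hands over container-managed resources
    void processMessage(String xml);
}

// In QueueReader, after resolving the FQCN for the origin/type combination
// (exception handling omitted for brevity):
MessageProcessor processor = (MessageProcessor) Class.forName(fqcn).newInstance();
processor.init(db); // db was @Inject-ed into the QueueReader
processor.processMessage(xml);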
But does this approach mean that I will be using the same instance of the DAL object for every message processor? I believe it does, and as long as there is no state involved I believe that is OK - any and all transactions will be started outside of the DAL class.
So my question is will my approach work? What are the risks of doing it that way? Is there a better way to use a strategy pattern to dynamically select an implementation where the implementation needs access to container managed resources?
Thanks for your time.
In a similar problem statement I used an extension to the processor interface to decide which type of data object it can handle. Then you can inject all variants of the handler via Instance and simply pick the right one:
public interface MessageProcessor
{
    public boolean canHandle(String xml);
    public void processMessage(String xml);
}
And in your QueueReader:
@Inject
private Instance<MessageProcessor> allProcessors; // CDI provides all deployed MessageProcessor beans

public void handleMessage(String xml) {
    // Pick the first processor that claims it can handle this message
    MessageProcessor processor = StreamSupport.stream(allProcessors.spliterator(), false)
            .filter(proc -> proc.canHandle(xml))
            .findFirst()
            .orElseThrow(...);
    processor.processMessage(xml);
}
This does not let you add processors to a running server, but to add a new processor you simply implement the interface and deploy - QueueReader itself never changes.
A NinjectBootstrapper class has been created in a Service project (responsible for getting products).
This is my code:
public static class NinjectBootstrapper
{
    private static readonly object _thislock = new object();

    // flag to prevent bootstrap from executing multiple times
    private static bool _done;

    public static void Bootstrap()
    {
        lock (_thislock)
        {
            if (_done)
            {
                return;
            }
            _done = true;

            // services
            NinjectContainer.Kernel.Bind<IProductService>().To<ProductService>();

            // repositories
            NinjectContainer.Kernel.Bind<IProductRepository>().To<ProductRepository>();
        }
    }
}
Now, while this works, I would really like to know whether there are better ways to structure my code. For instance, I've read a lot about using a singleton instead of a static class.
I'd really like to set a good base here, to make it easy and flexible for future developers to extend the functionality. What are some good pointers and tips I can take into consideration?
I've read a lot of stuff about using a singleton instead of the static class.
When applying DI we typically prefer instance classes over static classes, because we can't practice Constructor Injection with static classes. This doesn't mean, however, that static classes are a bad practice. When classes have no dependencies and no state, static is fine.
This holds as well for code in the startup path, such as your NinjectBootstrapper. You could make it an instance class, but since this class runs directly at startup and no dependencies need to be injected into it (obviously, because it is the thing that wires up the DI container), it would typically be pointless to do so.
I am having problems with the following class in a multi-threaded environment:
public class Foo
{
    [Inject]
    public IBar InjectedBar { get; set; }

    public bool NonInjectedProp { get; set; }

    public void DoSomething()
    {
        /* The following line is causing a null-reference exception */
        InjectedBar.DoSomething();
    }

    public Foo(bool nonInjectedProp)
    {
        /* This line should inject the InjectedBar property */
        KernelContainer.Inject(this);
        NonInjectedProp = nonInjectedProp;
    }
}
This is a legacy class, which is why I am using property injection rather than constructor injection.
Sometimes when DoSomething() is called, the InjectedBar property is null. In a single-threaded application, everything runs fine.
How can this be occurring, and how can I prevent it?
I am using Ninject 2.0 without any extensions, although I have copied the KernelContainer from the Ninject.Web project.
I have noticed a similar problem occurring in my web services. This problem is extremely intermittent and difficult to replicate.
First of all, let me say that this is wrong on so many levels; the KernelContainer was an infrastructure class kept specifically to work around certain limitations in the ASP.NET WebForms page lifecycle. It was never meant to be used in application code. Using the Ninject kernel (or any DI container) as a service locator is an anti-pattern.
That being said, Ninject itself is definitely thread-safe, because it is used to service parallel requests in ASP.NET all the time. Wherever this NullReferenceException is coming from, it has little if anything to do with Ninject.
I can think of two possibilities:
You have to initialize KernelContainer.Kernel somewhere, and that code might have a race condition. If something tries to use the KernelContainer before the kernel is fully initialized (possible if you use the IKernel.Bind methods instead of loading modules as per the guidance), you'll get errors like this. Or:
It's your IBar implementation itself that has problems, and the NullReferenceException is happening somewhere inside the DoSomething method. You don't actually specify that InjectedBar is null when you get the exception, so that's a legitimate possibility here.
Just to narrow the field of possibilities, I'd eliminate the KernelContainer first. If you absolutely must use Ninject as a service locator due to a poorly designed legacy architecture, then at least allow it to create the dependencies instead of relying on Inject(this). That is to say, whichever class or classes need to create your Foo should call kernel.Get<Foo>(), and you should set up your kernel with Bind<Foo>().ToSelf().