What's the connection & difference between mockito & powermock & powermockito?

I studied the Mockito test framework and learned about PowerMock, but then I suddenly found a framework called PowerMockito, and now I'm lost.
Can anyone tell me the difference between these three test utilities?

Mockito is the de facto standard mocking framework for Java. It provides mocking, stubbing, verification of method calls, and more. Earlier versions offered no way to mock private or static methods (the reasoning is explained here: https://github.com/mockito/mockito/wiki/Mockito-And-Private-Methods). That gap is where PowerMock used to be really helpful.
PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, and private methods, as well as removal of static initializers, and more. That said, if you find yourself needing PowerMock, it is often a sign of a design problem in the application.
I've never used PowerMockito myself, but it is PowerMock's extension API for Mockito. It works with the Java reflection API under the hood to overcome Mockito's limitations, such as the inability to mock final, static, or private methods.
Nowadays, though (starting with Mockito 3.4), Mockito itself can mock static methods, so the need for PowerMock is much more limited. Here is a nice blog post about it: https://tech.cognifide.com/blog/2020/mocking-static-methods-made-possible-in-mockito-3.4.0/
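To illustrate the Mockito 3.4+ feature mentioned above, here is a minimal sketch of `Mockito.mockStatic`. It assumes mockito-core 3.4+ with the inline mock maker available on the test classpath (via the `mockito-inline` artifact before Mockito 5, where it became the default); `IdGenerator` is a made-up class for the example.

```java
import java.util.UUID;

import org.mockito.MockedStatic;
import org.mockito.Mockito;

// Hypothetical production class with a static method we want to control in a test.
class IdGenerator {
    static String nextId() {
        return UUID.randomUUID().toString();
    }
}

class IdGeneratorDemo {
    public static void main(String[] args) {
        // The static mock is only active inside this try-with-resources scope,
        // so other tests still see the real implementation.
        try (MockedStatic<IdGenerator> mocked = Mockito.mockStatic(IdGenerator.class)) {
            mocked.when(IdGenerator::nextId).thenReturn("fixed-id");
            System.out.println(IdGenerator.nextId()); // prints "fixed-id"
            mocked.verify(IdGenerator::nextId);
        }
        // Outside the scope the real random UUID is generated again.
        System.out.println(IdGenerator.nextId());
    }
}
```

No custom classloader or bytecode rewriting is involved, which is why this no longer requires PowerMock.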

Related

Kotlin- Coroutines and ThreadLocal in the same project: best practices question

I'm converting a large project (Spring boot, GraphQL server) to coroutines.
Since this is a large project, I'm not converting it all at once and need to work in parallel with ThreadLocal and CoroutineContext for interop.
We currently have a class that wraps ThreadLocal manipulation; my plan is to create an equivalent for CoroutineContext and to use ThreadContextElement, as well as asContextElement, to ensure the non-coroutine and coroutine parts of my code interoperate properly.
Questions:
Are there better alternatives to implement such interop?
Are there good examples of such a mix I can use as good reference?
Are there any pitfalls I should be aware of?
Thanks!

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the last version. You should define a clear versioning policy; for example N-2 versions. By doing so, you end up with 3 side-by-side implementations rather than an explosion that some people think you'll have. Refactoring business logic and other components that are not specific to a HTTP API version out of controllers can help reduce code duplication.
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but no API endpoints). HTTP is the API. HTTP has methods, not verbs. Think of it as Http.get(). A facade in another language such as Java or C# is an impedance mismatch to HTTP. HTTP does not support inheritance, so attempting to use inheritance in the implementation is only likely to exacerbate the mismatch. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing, because you have to find the correct implementation to set a breakpoint in. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
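A minimal sketch of the "distinct controller per version, no inheritance" idea, assuming Spring MVC (the names `User`, `UserService`, and the route paths are all illustrative, not from the original post):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Version-neutral domain and service, shared by every API version.
record User(long id, String firstName, String lastName) {}

interface UserService {
    User find(long id);
}

// One controller class per API version; neither inherits from the other.
@RestController
@RequestMapping("/api/v1/users")
class UserControllerV1 {
    private final UserService service;
    UserControllerV1(UserService service) { this.service = service; }

    @GetMapping("/{id}")
    String get(@PathVariable long id) {
        User u = service.find(id);
        return u.firstName() + " " + u.lastName(); // v1 exposed a single name field
    }
}

@RestController
@RequestMapping("/api/v2/users")
class UserControllerV2 {
    private final UserService service;
    UserControllerV2(UserService service) { this.service = service; }

    @GetMapping("/{id}")
    User get(@PathVariable long id) {
        return service.find(id); // v2 exposes the split name fields
    }
}
```

The duplication between the two controllers is deliberate and cheap; the logic worth keeping in one place lives behind `UserService`.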
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
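To make the unidirectional (v2 -> v1) Mapper idea concrete, here is a small sketch in plain Java; the model shapes and field names are invented for illustration:

```java
// Hypothetical v1 and v2 shapes of the same "user" model. v2 split the single
// "name" field into first/last, so the v1 view has to be derived on the way out.
record UserV2(String firstName, String lastName) {}
record UserV1(String name) {}

// Unidirectional mapper (v2 -> v1): the server stores the v2 shape and
// down-converts for clients still pinned to the v1 contract.
final class UserV1Mapper {
    static UserV1 fromV2(UserV2 u) {
        return new UserV1(u.firstName() + " " + u.lastName());
    }
}

class MapperDemo {
    public static void main(String[] args) {
        UserV2 stored = new UserV2("Ada", "Lovelace");
        UserV1 view = UserV1Mapper.fromV2(stored);
        System.out.println(view.name()); // prints "Ada Lovelace"
    }
}
```

A bidirectional (v1 <-> v2) variant would also need a rule for filling in attributes v1 never carried, which is the "default value for new attributes" situation mentioned above.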
It should be noted that backward compatibility is something of a misnomer in HTTP. There really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is for all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

difference between Powermock and Powermockito

Can anyone elaborate on PowerMock and PowerMockito?
I couldn't even find documentation for PowerMockito.
Both are used for mocking static and private methods, in different ways I guess.
What are the similarities and usages? Which one is better?
The main aim of PowerMock is to extend the existing mocking frameworks' APIs with some annotations and methods that provide extra features to make unit testing easier. The PowerMock framework uses a custom classloader and bytecode manipulation techniques to enable the mocking of static methods, final classes, final methods, private methods, and constructors, as well as the removal of static initializers.
PowerMockito is the API class provided by the PowerMock framework for use with Mockito; it is used to create mock objects and to initiate verification and expectations. PowerMockito also provides functionality for working with the Java reflection API.
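For contrast with plain Mockito, here is a hedged sketch of classic PowerMockito usage with JUnit 4. It assumes powermock-module-junit4 and powermock-api-mockito2 on the test classpath; `Settings` is an invented class for the example.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical class with a static method we want to stub.
class Settings {
    static String env() {
        return System.getenv().getOrDefault("APP_ENV", "prod");
    }
}

@RunWith(PowerMockRunner.class)   // PowerMock's custom runner/classloader
@PrepareForTest(Settings.class)   // classes whose bytecode PowerMock must rewrite
public class SettingsTest {
    @Test
    public void staticMethodCanBeStubbed() {
        PowerMockito.mockStatic(Settings.class);
        Mockito.when(Settings.env()).thenReturn("test");
        assertEquals("test", Settings.env());
    }
}
```

Note how the stubbing itself still goes through Mockito's `when(...).thenReturn(...)`; PowerMockito's job is the classloader and bytecode work that makes the static method mockable in the first place.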

Are all Java SE classes available in Java ME?

I'm a Java newbie. Wanted to know if all Java SE classes are available in Java ME. If not why is it so?
No, only a subset is available, see http://java.sun.com/javame/technology/index.jsp for an introduction.
A brief overview is given in this Wikipedia article:
Noteworthy limitations
Compared to the Java SE environment, several APIs are absent entirely, and some APIs are altered such that code requires explicit changes to support CLDC. In particular, certain changes aren't just the absence of classes or interfaces, but actually change the signatures of existing classes in the base class library. An example of this is the absence of the Serializable interface, which does not appear in the base class library due to restrictions on reflection usage. All java.lang.* classes which normally implement Serializable do not, therefore, implement this tagging interface.
Other examples of limitations depend on the version being used, as some features were re-introduced with version 1.1 of CLDC.
CLDC 1.0 and 1.1
The Serializable interface is not supported.
Parts of the reflection capabilities of the Java standard edition:
The java.lang.reflect package and all of its classes are not supported.
Methods on java.lang.Class which obtain Constructors or Methods or Fields.
No finalization. CLDC does not include the Object.finalize() method.
Limited error handling. Non-runtime errors are handled by terminating the application or resetting the device.
No Java Native Interface (JNI)
No user-defined class loaders
No thread groups or daemon threads.
It's worth noting that where J2ME versions of J2SE classes are apparently available, they often have a reduced API. So you can't always assume that code using 'available' classes will port straight over.
If memory serves, there are one or two methods with differing names too, though I can't recall a specific example right now.
No, Java ME is a significantly restricted subset of Java SE. Java SE is an enormous standard library, and most of the devices Java ME is intended to run on don't have the resources to support all that overhead.
Take a look at the javadocs for CLDC 1.1, the main, universally supported API accessible to Java ME.
No they are not. The reason for this is the standard library is quite large, making it difficult to use on embedded devices with small amounts of memory and slower processors.
See this page for more info about what's included and what's not.

Using java classes in Grails

I have a Java\Spring\Hibernate application - complete with domain classes which are basically Hibernate POJOs
There is a piece of functionality that I think can be written well in Grails.
I wish to reuse the domain classes that I have created in the main Java app
What is the best way to do so ?
Should I write new domain classes extending the Java classes ? this sounds tacky
Or Can I 'generate' controllers off the Java domain classes ?
What are the best practices around reusing Java domain objects in Grails\Groovy
I am sure there must be others writing some pieces in grails\groovy
If you know about a tutorial which talks about such an integration- that would be awesome !!!
PS: I am quite a newbie in grails-groovy so may be missing the obvious. Thanks !!!
Knowing just how well Groovy and Grails excel at integrating with existing Java code, I think I might be a bit more optimistic than Michael about your options.
First thing is that you're already using Spring and Hibernate, and since your domain classes are already POJOs they should be easy to integrate with. Any Spring beans you might have can be specified in an XML file as usual (in grails-app/conf/spring/resources.xml) or much more simply using the Spring bean builder feature of Grails. They can then be accessed by name in any controller, view, service, etc. and worked with as usual.
Here are the options, as I see them, for integrating your domain classes and database schema:
Bypass GORM and load/save your domain objects exactly as you're already doing.
Grails doesn't force you to use GORM, so this should be quite straightforward: create a .jar of your Java code (if you haven't already) and drop it into the Grails app's lib directory. If your Java project is Mavenized, it's even easier: Grails 1.1 works with Maven, so you can create a pom.xml for your Grails app and add your Java project as a dependency as you would in any other (Java) project.
Either way you'll be able to import your classes (and any supporting classes) and proceed as usual. Because of Groovy's tight integration with Java, you'll be able to create objects, load them from the database, modify them, save them, validate them etc. exactly as you would in your Java project. You won't get all the conveniences of GORM this way, but you would have the advantage of working with your objects in a way that already makes sense to you (except maybe with a bit less code thanks to Groovy). You could always try this option first to get something working, then consider one of the other options later if it seems to make sense at that time.
One tip if you do try this option: abstract the actual persistence code into a Grails service (StorageService perhaps) and have your controllers call methods on it rather than handling persistence directly. This way you could replace that service with something else down the road if needed, and as long as you maintain the same interface your controllers won't be affected.
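A rough sketch of that tip in plain Java (the `Customer` POJO, the service name, and the in-memory implementation are all invented for illustration; the real implementation would delegate to your existing Hibernate setup):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain POJO standing in for an existing Hibernate-mapped class.
class Customer {
    Long id;
    String name;
    Customer(Long id, String name) { this.id = id; this.name = name; }
}

// The stable interface controllers code against. As long as this contract
// holds, the backing store can later be swapped (e.g. for GORM) without
// touching any controller.
interface StorageService {
    void save(Customer c);
    Optional<Customer> find(Long id);
}

// In-memory stand-in; the production version would call the existing
// Hibernate SessionFactory/DAO code instead.
class InMemoryStorageService implements StorageService {
    private final Map<Long, Customer> store = new HashMap<>();
    public void save(Customer c) { store.put(c.id, c); }
    public Optional<Customer> find(Long id) { return Optional.ofNullable(store.get(id)); }
}
```

In a Grails app the concrete class would live in grails-app/services so it gets injected by name into controllers as usual.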
Create new Grails domain classes as subclasses of your existing Java classes.
This could be pretty straightforward if your classes are already written as proper beans, i.e. with getter/setter methods for all their properties. Grails will see these inherited properties as it would if they were written in the simpler Groovy style. You'll be able to specify how to validate each property, using either simple validation checks (not null, not blank, etc.) or with closures that do more complicated things, perhaps calling existing methods in their POJO superclasses.
You'll almost certainly need to tweak the mappings via the GORM mapping DSL to fit the realities of your existing database schema. Relationships are where it might get tricky. For example, your schema might represent a relationship differently from the way GORM expects (say, without the join table GORM would look for), though there may well be a way to work around differences like that. I'd suggest learning as much as you can about GORM and its mapping DSL and then experimenting with a few of your classes to see if this is a viable option.
Have Grails use your existing POJOs and Hibernate mappings directly.
I haven't tried this myself, but according to Grails's Hibernate Integration page this is supposed to be possible: "Grails also allows you to write your domain model in Java or re-use an existing domain model that has been mapped using Hibernate. All you have to do is place the necessary 'hibernate.cfg.xml' file and corresponding mappings files in the '%PROJECT_HOME%/grails-app/conf/hibernate' directory. You will still be able to call all of the dynamic persistent and query methods allowed in GORM!"
Googling "gorm legacy" turns up a number of helpful discussions and examples, for example this blog post by Glen Smith (co-author of the soon-to-be-released Grails in Action) where he shows a Hibernate mapping file used to integrate with "the legacy DB from Hell". Grails in Action has a chapter titled "Advanced GORM Kungfu" which promises a detailed discussion of this topic. I have a pre-release PDF of the book, and while I haven't gotten to that chapter yet, what I've read so far is very good, and the book covers many topics that aren't adequately discussed in other Grails books.
Sorry I can't offer any personal experience on this last option, but it does sound doable (and quite promising). Whichever option you choose, let us know how it turns out!
Do you really want/need to use Grails rather than just Groovy?
Grails really isn't something you can use to add a part to an existing web app. The whole "convention over configuration" approach means that you pretty much have to play by Grails' rules, otherwise there is no point in using it. And one of those rules is that domain objects are Groovy classes that are heavily "enhanced" by the Grails runtime.
It might be possible to have them extend existing Java classes, but I wouldn't bet on it - and all the Spring and Hibernate parts of your existing app would have to be discarded, or at least you'd have to spend a lot of effort to make them work in Grails. You'll be fighting the framework rather than profiting from it.
IMO you have two options:
Rewrite your app from scratch in Grails while reusing as much of the existing code as possible.
Keep your app as it is and add new stuff in Groovy, without using Grails.
The latter is probably better in your situation. Grails is meant to create new web apps very quickly, that's where it shines. Adding stuff to an existing app just isn't what it was made for.
EDIT:
Concerning the clarification in the comments: if you're planning to write basically a data entry/maintenance frontend for data used by another app and have the DB as the only communication channel between them, that might actually work quite well with Grails; it can certainly be configured to use an existing DB schema rather than creating its own from the domain classes (though the latter is less work).
This post provides some suggestions for using grails for wrapping existing Java classes in a web framework.
