Question about SpringJUnit4ClassRunner's default rollback behavior for integration tests - spring-transactions

I am running Spring integration tests and am therefore using @RunWith(SpringJUnit4ClassRunner.class) for those tests.
Some of those tests call business classes/methods (i.e. in src/main/java) annotated with @Transactional, and those classes/methods manipulate data in the database.
I understand that test classes/methods (i.e. in src/test/java, run with SpringJUnit4ClassRunner) annotated with @Transactional perform a rollback automatically if not specified otherwise.
Now I want to know whether that behavior (automatic rollback) is also to be expected for data manipulated from within business classes/methods (i.e. located within src/main/java) when I run integration tests.

You're in luck... I literally just updated the Javadoc for TransactionalTestExecutionListener with the following clarifications:
Test-managed Transactions
Test-managed transactions are transactions that are managed declaratively via this listener or programmatically via TestTransaction. Such transactions should not be confused with Spring-managed transactions (i.e., those managed directly by Spring within the ApplicationContext loaded for tests) or application-managed transactions (i.e., those managed programmatically within application code that is invoked via tests). Spring-managed and application-managed transactions will typically participate in test-managed transactions; however, caution should be taken if Spring-managed or application-managed transactions are configured with any propagation type other than REQUIRED or SUPPORTS.
Enabling and Disabling Transactions
Annotating a test method with @Transactional causes the test to be run within a transaction that will, by default, be automatically rolled back after completion of the test. If a test class is annotated with @Transactional, each test method within that class hierarchy will be run within a transaction. Test methods that are not annotated with @Transactional (at the class or method level) will not be run within a transaction. Furthermore, tests that are annotated with @Transactional but have the propagation type set to NOT_SUPPORTED will not be run within a transaction.
Declarative Rollback and Commit Behavior
By default, test transactions will be automatically rolled back after completion of the test; however, transactional commit and rollback behavior can be configured declaratively via the class-level @TransactionConfiguration and method-level @Rollback annotations.
Note that TestTransaction is a new feature in Spring Framework 4.1, but you can already try it out in 4.1 RC1, which was released yesterday. ;)
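For completeness, here is a minimal sketch of how the programmatic TestTransaction API can be used once you are on 4.1; the context file name and the test class are hypothetical:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TestTransaction;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml") // hypothetical context file
@Transactional
public class ProgrammaticTransactionTest {

    @Test
    public void commitReferenceDataThenContinueInNewTransaction() {
        // ... insert reference data that must survive this test ...
        TestTransaction.flagForCommit(); // override the default rollback
        TestTransaction.end();           // ends (and here commits) the current test-managed transaction

        TestTransaction.start();         // start a new test-managed transaction
        // ... changes made from this point on are rolled back by default again ...
    }
}
```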
In the context of your question, the code you're referring to in src/main/java falls under the Spring-managed or application-managed transactions mentioned in the Javadoc. In other words, as long as your application code does not require new transactions or suspend the current transaction (i.e., the test-managed transaction), changes made by application code will be rolled back along with the test-managed transaction.
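To make that concrete, here is a hedged sketch; UserService and its createUser method are hypothetical stand-ins for your src/main/java code. As long as the service uses the default REQUIRED propagation, it participates in the test-managed transaction and its inserts are rolled back when the test completes:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml") // hypothetical context file
@Transactional                             // test-managed transaction, rolled back by default
public class UserServiceIntegrationTest {

    @Autowired
    private UserService userService; // hypothetical @Transactional business service from src/main/java

    @Test
    public void createUserChangesAreRolledBackAfterTheTest() {
        // The service's default REQUIRED propagation makes it participate in the
        // test-managed transaction, so this insert is rolled back automatically
        // after the test method completes.
        Long id = userService.createUser("john.doe");
        assertNotNull(id);
    }
}

// Hypothetical business interface; its implementation in src/main/java is annotated
// with @Transactional and writes to the database.
interface UserService {

    Long createUser(String login);
}
```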
Regards,
Sam (author of the Spring TestContext Framework)

Related

Any way to get flow status (maybe like, running, stopped, stopped with exception) with spring integration?

Recently I was trying to build a REST service to create, start and stop an integration flow (which was reading an RSS feed and printing it to the console), and I was able to achieve that.
But my next requirement was to get the status (something like running, stopped, stopped with exception) from the already running flow, and I was unable to do that; I cannot see any method related to that in IntegrationFlowRegistration.
Also, is there a way to store an IntegrationFlow in an RDBMS like MySQL, etc.?
Well, the flow by itself is just a logical container for the components it connects. Even though we can somehow get access to all of those components through one IntegrationFlow, that doesn't mean your whole solution contains only one IntegrationFlow. It is perfectly normal to connect components from different IntegrationFlows, since at runtime they are not related to the IntegrationFlow that created them. Those runtime components are identical whichever configuration style the framework parsed them from: XML, Java DSL, Kotlin DSL or plain Java & annotation configuration. You can even create all the beans manually and at runtime it is still going to be an EIP solution.
What I'm trying to say is that it is wrong to look for a single status for the whole flow. Either you consult an individual component (e.g. the RSS one you mentioned), or you introduce a separate component to track such a custom state.
See the Lifecycle contract. Most Spring Integration components implement it, so you can check isRunning() whenever you need. In fact, even StandardIntegrationFlow implements it, but you should not rely on it fully because your final solution might consist of several flows or many independent components.
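As a rough sketch (assuming Spring Integration 5.1 or later and a flow registered at runtime via IntegrationFlowContext; the flow id and class name are invented), you can reach the Lifecycle contract of a registered flow through its registration:

```java
import org.springframework.context.Lifecycle;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.dsl.context.IntegrationFlowContext.IntegrationFlowRegistration;

public class FlowStatusChecker {

    private final IntegrationFlowContext flowContext;

    public FlowStatusChecker(IntegrationFlowContext flowContext) {
        this.flowContext = flowContext;
    }

    public boolean isFlowRunning(String flowId) {
        IntegrationFlowRegistration registration = this.flowContext.getRegistrationById(flowId);
        if (registration == null) {
            return false; // never registered or already removed
        }
        IntegrationFlow flow = registration.getIntegrationFlow();
        // StandardIntegrationFlow implements Lifecycle, so isRunning() is available,
        // but it only reflects the components of this particular flow.
        return flow instanceof Lifecycle && ((Lifecycle) flow).isRunning();
    }
}
```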
There is nothing like stopped with exception - we don't stop components because of an error. Instead, you can enable metrics and check the success and failure send counts of channels or handlers: https://docs.spring.io/spring-integration/docs/5.3.0.RELEASE/reference/html/system-management.html#metrics-management

CRM 2011 Plugin development best practice

I am inheriting a set of plugins that appear to have been developed by different people. Some of them follow the pattern of one master plugin with many different steps. In this plugin none of the steps are cohesive or related in functionality; the author simply put them all in the same plugin, with code internal to the plugin (if/else madness) that handles the various entities, CRM messages (update, create, delete, etc.) and stages (pre-validation, post-operation, etc.).
The other developer seems to make a plugin for every entity type and/or related feature grouping. This results in multiple smaller plugins with fewer steps.
My question is this: assuming I have architected a way out of the if/else hell that the previous developer created in the 'one-plugin-to-rule-them-all' design, which approach is preferable from a CRM performance and long-term maintenance (as in fewer side effects and difficulties with deployment, etc.) perspective?
I usually follow a model driven approach and design one plugin class per entity. On this class steps can be registered for the pre-validation, pre- and post-operation and asynchronous stages on the Create, Update, Delete and other messages, but always for only one entity at a time.
Doing so I can keep a clear oversight of the plugin logic that is triggered on an entity's events and also I do not need to bother about the order in which plugin steps are triggered.
Following this approach, of course, means I need a generic pattern for handling all supported events. For this purpose I designed a plugin base class responsible for the event routing. My deriving plugin classes only need to implement (override) the event handler methods (PreUpdate, PostCreate etc.).
In my opinion plugin classes should only be used to glue system events to the business logic. Therefore the code performing the desired actions should be placed in separate classes; plugin classes only route the events, prepare the data and call the business logic.
Some developers tend to design one plugin class per step or even per implemented requirement. Doing so keeps your plugin classes terse (which is positive), but when the logic gets complicated you can easily lose track of what is going on for a single entity. (Recently I worked with a CRM implementation that had 21 plugin classes registered for one entity. Understanding what was going on and adding new behaviour to this entity proved to be very tricky and time consuming.)

What is the difference between Custom plugin and custom event handlers in OIM 11g R2?

What is the difference between Custom plugin and custom event handlers in OIM 11g R2?
Thanks a ton in advance...
Sangita
A plugin is a module of code which can be run inside the OIM server. It contains Java classes which are executed along with metadata (plugin.xml) which identifies them. There are many types of plugins - the type is determined by the Java interface or abstract class the plugin implements/extends.
One of the core components of OIM is the orchestration engine. It processes create/update/delete transactions on core identity objects (e.g. User, Role, etc). Each orchestration process involves the execution of a sequence of event handlers, and each event handler is a plugin implementing oracle.iam.platform.kernel.spi.EventHandler. Many are shipped out-of-the-box, and you can write custom ones too. For example, you could install an event handler to run after (postprocess) the creation of any user.
However, there are also other types of plugins - for example, login name generation plugins (oracle.iam.identity.usermgmt.api.UserNamePolicy). Some of these plugins are actually called by the out-of-the-box event handlers. Event handlers are a very general API (they are similar in concept to database triggers) - they have a lot of power, but if you are not careful with that power you can destabilise your OIM environment. By contrast, other plugin interfaces perform one specific task only (such as generating a login name for a new user), and thus the risk from using them is much less. If you can solve your problem using some more specific type of plugin, do that in preference to using an event handler.
You will also find that, while some of these more specific plugin interfaces are called by out-of-the-box event handlers, others are not called by the orchestration engine at all, but instead by other components in OIM. For example, scheduled tasks are not run by the orchestration engine, but instead by the embedded Quartz scheduler. Custom scheduled tasks extend the oracle.iam.scheduler.vo.TaskSupport abstract class.
While every plugin needs the plugin framework metadata (plugin.xml), some specific types of plugins need additional metadata specific to that type. For example, event handlers need an EventHandlers.xml uploaded to MDS; similarly, scheduled tasks need to be defined in a task.xml file.
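For illustration only, a rough sketch of a custom postprocess event handler of the kind described above; the class name and behaviour are invented, and the SPI package and method signatures are quoted from memory of the 11g R2 samples, so verify them against chapter 18 of the Developer Guide before relying on them:

```java
import java.util.HashMap;

import oracle.iam.platform.kernel.spi.PostProcessHandler;
import oracle.iam.platform.kernel.vo.AbstractGenericOrchestration;
import oracle.iam.platform.kernel.vo.BulkEventResult;
import oracle.iam.platform.kernel.vo.BulkOrchestration;
import oracle.iam.platform.kernel.vo.EventResult;
import oracle.iam.platform.kernel.vo.Orchestration;

// Registered through plugin.xml (plugin point oracle.iam.platform.kernel.spi.EventHandler)
// plus an EventHandlers.xml entry uploaded to MDS that binds it to, e.g., the postprocess
// stage of the CREATE operation on the User entity.
public class SamplePostCreateUserHandler implements PostProcessHandler {

    public EventResult execute(long processId, long eventId, Orchestration orchestration) {
        // Single-entity orchestration: react to the user that has just been created.
        System.out.println("Post-create handler invoked for process " + processId);
        return new EventResult();
    }

    public BulkEventResult execute(long processId, long eventId, BulkOrchestration orchestration) {
        // Bulk variant, called when several entities are processed in one orchestration.
        return new BulkEventResult();
    }

    public boolean cancel(long processId, long eventId, AbstractGenericOrchestration orchestration) {
        return false;
    }

    public void compensate(long processId, long eventId, AbstractGenericOrchestration orchestration) {
        // no compensation logic in this sketch
    }

    public void initialize(HashMap<String, String> arg0) {
        // no initialization needed in this sketch
    }
}
```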
It is also worth noting that OIM 9.x also had a concept of an "event handler", but the technology was different from that in OIM 11g. OIM 9.x event handlers extend the class com.thortech.xl.client.events.tcBaseEvent. As a general rule, 9.x event handlers are no longer supported in 11g.
For more information, read these chapters in the OIM 11.1.2.3 Developer Guide: chapter 17 for the basics of plugin development, chapter 18 for developing custom event handlers, chapter 16 for developing custom scheduled tasks, and appendix B for developing custom username and common name generation/validation policies.
Also, if you want some samples, and have access to My Oracle Support, check out these documents:
OIM11g: Sample Code For A Custom Username Generation Policy Plugin Using JDeveloper (Doc ID 1228035.1)
OIM11g: Sample Code For A Custom Event Handler Implemented for Pre-Process Stage During Create User Management Operation (Doc ID 1262803.1)
How To Create A Request Validator To Validate Justification Attribute in OIM 11g (Doc ID 1317087.1)
How To Determine OIM User Attribute Changes In A Modify Orchestration (Doc ID 1535503.1)

Spring Hibernate SessionFactory.getCurrentSession in multithreaded environment

I have written a batch application which spawns multiple threads to read assigned files and save records to the database. The architecture uses the Spring context and Hibernate.
Transaction is managed by Spring and I am using SessionFactory.getCurrentSession to get a session to perform a save operation for each thread.
Given that I have a generic DAO that handles get, save and update operations, and a facade to hide the Hibernate implementation, how can I be sure that two threads invoking SessionFactory.getCurrentSession() each get their own dedicated Session object for performing DB operations?
I found a post on Stack Overflow where someone recommended not using current_session_context_class=thread when using Spring-managed transactions. What is the default implementation used by Spring for the current_session_context_class property?
Thanks in Advance!
As of Spring 2.0, Spring integrates with Hibernate through its own implementation(s) of the CurrentSessionContext interface provided by Hibernate.
By default Spring sets this to SpringSessionContext so that the two integrate properly. In general you don't want or need to mess with current_session_context_class unless you are using JTA (although when using Hibernate 4 with a recent Spring version, that should also just work).
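As a minimal sketch (assuming a LocalSessionFactoryBean/HibernateTransactionManager setup; the DAO is simplified and the class name is invented), this is the usual arrangement in which getCurrentSession() hands each transactional thread its own Session:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class GenericHibernateDao {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public void save(Object entity) {
        // With SpringSessionContext (the default wired up by LocalSessionFactoryBean),
        // getCurrentSession() returns the Session bound to the current transaction,
        // which in turn is bound to the current thread. Two threads running two
        // transactions therefore see two different Session instances.
        Session session = sessionFactory.getCurrentSession();
        session.save(entity);
    }
}
```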

Re-engineer POJOs to EJBs or client transaction

I have a couple of questions regarding EJB transactions. I have a situation where a process has become longer-running than originally intended and sometimes fails because server timeouts are exceeded. While I have increased the timeouts for now (both the total transaction timeout and the maximum transaction timeout), I know that for a long-running process it makes more sense to segment the work as much as possible into smaller units of work that don't fail based on a timeout. As a result, I'm looking for some thoughts or references regarding the next course of action, based on the background below and the questions that follow.
Environment:
EJB 3.1, JPA 2.0, WebSphere 8.5
Background:
I built a set of POJOs to do some batch-oriented work for an enterprise application. They are non-EJB POJOs that were intended to implement several business processes (5 related, sequential processes, each depending on its predecessor). The POJOs are in a plain Java project, not an EJB project.
However, these POJOs access an EJB facade for database access via JPA. The abstract core of the 5 business processes does the JNDI lookup for the EJB facade in order to return the domain objects for processing. Originally the design was to run entirely on the server; however, a need arose to initiate these processes externally. As a result, I created an EJB wrapper so that the processes could be called remotely (individually or as a single process based on a common strategy interface). Unfortunately, the size of the data, both row width and row count, has grown well beyond the original intent.
The processing time required to complete these batch processes has increased significantly (from around a couple of hours to around half a day, and it could increase beyond that). Only one of the 5 processes made sense to multi-thread (I did implement it multi-threaded). Since I have the wrapper EJB to initiate one or all of them, I have decided to create a new container transaction for each process, as opposed to the single default REQUIRED transaction I get when I run them all as a single process. Since the one process is multi-threaded, it would also make sense to attempt to create a new transaction per thread; however, being a group of POJOs, they have no transaction capability of their own.
Question:
So my question is, what makes more sense and why? Re-engineer the POJOs to be EJBs themselves and have the wrapper EJB invoke each process as a child process, where each can have its own transaction and, more importantly, the multi-threaded process can create a transaction per thread? Or does it make more sense to attempt to create a UserTransaction in the POJOs from a JNDI lookup in the container and try to manage it as if it were a bean-managed transaction (if that's even a viable solution)? I know this may be application dependent, but what is reasonable with regard to timeouts for a Java EE container? Obviously, I don't want runaway processes, but I want to make sure that I can complete these batch processes.
Unfortunately, this application has already been deployed as a production system. Re-engineering, though it may be little more than assembling the strategy logic in EJBs, is a large change to the functionality.
I did look around for some other threads here and via general internet searches, but thought I would see if anyone had compelling arguments for one over the other or another solution entirely. Additional links that talk about a topic such as this are appreciated. I wrestled with whether to post this since some may construe this as subjective, however, I felt the narrowed topic was worth the post and potentially relevant to others attempting processes like this.
This is not direct answer to your question, but something you could consider.
WebSphere 8.5, especially for these kinds of applications (batch), provides a batch container. The batch function accommodates applications that must perform batch work alongside transactional applications. Batch work might take hours or even days to finish and uses large amounts of memory or processing power while it runs. You can reuse your Java classes in batch applications, batch steps can be run in parallel in a cluster, and the container has transaction checkpoint management.
Take a look at following resources:
IBM Education Assistant - Batch applications
Getting started with the batch environment
Since I really didn't get a whole lot of response or thoughts for this question over the past couple of weeks, I figured I would answer this question to hopefully help others in making a decision if they run across this or a similar situation.
Ultimately, I re-engineered one of the POJOs into an EJB that acts as a wrapper to call the other POJOs. The wrapper EJB performs the same activity as when it was just a POJO, except that I added transaction semantics (REQUIRES_NEW) on the primary method. The primary method calls the other POJOs based on a strategy pattern, so each call (or POJO) gets its own transaction. Other methods in the EJB that call the primary method were defined with NOT_SUPPORTED so that I could separate the transactions for each call to the primary method and not join an existing transaction.
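For anyone following along, that arrangement might look roughly like the sketch below; the bean name, the strategy interface and the method names are invented for illustration. Note that the internal call goes through the EJB proxy (SessionContext.getBusinessObject) rather than "this", otherwise the container would not apply REQUIRES_NEW to the nested call:

```java
import java.util.List;

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BatchProcessWrapperBean {

    @Resource
    private SessionContext sessionContext;

    // Callers of this method run without a transaction, so it never drags an
    // existing transaction into the per-process work below.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void runAllProcesses(List<BatchProcessStrategy> processes) {
        // Call through the EJB proxy (not "this") so the container applies
        // REQUIRES_NEW to every invocation of runSingleProcess().
        BatchProcessWrapperBean self = sessionContext.getBusinessObject(BatchProcessWrapperBean.class);
        for (BatchProcessStrategy process : processes) {
            self.runSingleProcess(process);
        }
    }

    // Primary method: each process executes in its own container-managed transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void runSingleProcess(BatchProcessStrategy process) {
        process.execute();
    }
}

// Hypothetical strategy interface implemented by the existing POJO processes.
interface BatchProcessStrategy {

    void execute();
}
```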
Full disclosure: the initial addition of transaction semantics significantly increased the processing time (to the order of days), but the process did not fail by exceeding transaction timeouts. The slowdown turned out to be caused by some unexpected problems with JPA Many-To-One relationships that were bringing back too much data, retrieved purely as a result of those Many-To-One relationships. As I mentioned originally, some of my data row widths increased unexpectedly. The extra data was in the related table objects, and the queries did not need it at the time. I corrected those issues by changing my queries (creating objects for SELECT NEW queries, changing relationships to FetchType.LAZY, etc.).
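For reference, a small sketch of those two query-side fixes; the entity, field and package names are invented and only illustrate the pattern (lazy @ManyToOne plus a constructor-expression query):

```java
package com.example.batch; // hypothetical package

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class OrderLine {

    @Id
    @GeneratedValue
    private Long id;

    private String productCode;

    // LAZY instead of the default EAGER on @ManyToOne, so the wide parent row
    // is only fetched when the relationship is actually navigated.
    @ManyToOne(fetch = FetchType.LAZY)
    private OrderHeader header;
}

@Entity
class OrderHeader {

    @Id
    @GeneratedValue
    private Long id;
    // ... many wide columns in the real application ...
}

// Narrow read-only holder populated via a constructor expression (SELECT NEW),
// so a batch step reads only the columns it actually needs.
class OrderLineSummary {

    private final Long id;
    private final String productCode;

    public OrderLineSummary(Long id, String productCode) {
        this.id = id;
        this.productCode = productCode;
    }
}

class OrderLineQueries {

    static List<OrderLineSummary> loadSummaries(EntityManager em) {
        return em.createQuery(
                "select new com.example.batch.OrderLineSummary(l.id, l.productCode) from OrderLine l",
                OrderLineSummary.class)
                .getResultList();
    }
}
```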
Going forward, if I am able to dedicate enough time, I will transform the rest of those POJOs into EJBs. The POJO doing the most significant amount of threaded work has been implemented as a Callable that is run via an ExecutorService. If I can transform that one, the plan will be to give each thread its own transaction. However, while I'm not sure yet, it appears that my container may already be creating transactions for each thread group (of 10 threads), judging by the status updates I'm seeing. I will have to investigate further.
