What is the difference between Custom plugin and custom event handlers in OIM 11g R2?

What is the difference between Custom plugin and custom event handlers in OIM 11g R2?
Thanks a ton in advance...
Sangita

A plugin is a module of code that can be run inside the OIM server. It contains the Java classes to be executed, along with metadata (plugin.xml) that identifies them. There are many types of plugins; the type is determined by the Java interface or abstract class the plugin implements/extends.
One of the core components of OIM is the orchestration engine. It processes create/update/delete transactions on core identity objects (e.g. User, Role, etc). Each orchestration process involves the execution of a sequence of event handlers, and each event handler is a plugin implementing oracle.iam.platform.kernel.spi.EventHandler. Many are shipped out-of-the-box, and you can write custom ones too. For example, you could install an event handler to run after (postprocess) the creation of any user.
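For illustration, here is a minimal sketch of such a postprocess handler, based on the kernel SPI as described in the Developer Guide (the class name and the logic inside execute are hypothetical, and the handler still needs the plugin.xml and EventHandlers.xml metadata mentioned further down):

    import java.io.Serializable;
    import java.util.HashMap;

    import oracle.iam.platform.kernel.spi.PostProcessHandler;
    import oracle.iam.platform.kernel.vo.AbstractGenericOrchestration;
    import oracle.iam.platform.kernel.vo.BulkEventResult;
    import oracle.iam.platform.kernel.vo.BulkOrchestration;
    import oracle.iam.platform.kernel.vo.EventResult;
    import oracle.iam.platform.kernel.vo.Orchestration;

    // Hypothetical handler that runs after (postprocess) user creation.
    public class UserCreatePostProcessHandler implements PostProcessHandler {

        @Override
        public EventResult execute(long processId, long eventId, Orchestration orchestration) {
            // The orchestration carries the attributes of the entity being created.
            HashMap<String, Serializable> parameters = orchestration.getParameters();
            // ... custom post-create logic goes here (e.g. notify an external system) ...
            return new EventResult();
        }

        @Override
        public BulkEventResult execute(long processId, long eventId, BulkOrchestration orchestration) {
            // Bulk variant, invoked for bulk operations.
            return new BulkEventResult();
        }

        @Override
        public void compensate(long processId, long eventId, AbstractGenericOrchestration orchestration) {
        }

        @Override
        public boolean cancel(long processId, long eventId, AbstractGenericOrchestration orchestration) {
            return false;
        }

        @Override
        public void initialize(HashMap<String, String> parameters) {
        }
    }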
However, there are also other types of plugins - for example, login name generation plugins (oracle.iam.identity.usermgmt.api.UserNamePolicy). Some of these plugins are actually called by the out-of-the-box event handlers. Event handlers are a very general API (they are similar in concept to database triggers) - they have a lot of power, but if you are not careful with that power you can destabilise your OIM environment. By contrast, other plugin interfaces perform one specific task only (such as generating a login name for a new user), and thus the risk from using them is much less. If you can solve your problem using some more specific type of plugin, do that in preference to using an event handler.
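As a contrast in scope, a username policy plugin only implements a narrow interface. A rough sketch, assuming the UserNamePolicy contract from Appendix B of the Developer Guide (the attribute keys and the policy itself are made up):

    import java.util.Locale;
    import java.util.Map;

    import oracle.iam.identity.exception.UserNameGenerationException;
    import oracle.iam.identity.usermgmt.api.UserNamePolicy;

    // Hypothetical policy: firstname.lastname, lower-cased.
    public class DotSeparatedNamePolicy implements UserNamePolicy {

        @Override
        public String getUserNameFromPolicy(Map<String, Object> reqData)
                throws UserNameGenerationException {
            String first = (String) reqData.get("First Name");
            String last = (String) reqData.get("Last Name");
            return (first + "." + last).toLowerCase();
        }

        @Override
        public boolean isUserNameValid(String userName, Map<String, Object> reqData) {
            try {
                return userName.equalsIgnoreCase(getUserNameFromPolicy(reqData));
            } catch (UserNameGenerationException e) {
                return false;
            }
        }

        @Override
        public String getDescription(Locale locale) {
            return "firstname.lastname, lower-cased";
        }
    }

Whatever such a plugin gets wrong, the blast radius is one login name, not the whole orchestration.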
You will also find that while some of these more specific plugin interfaces are called by out-of-the-box event handlers, others are not called by the orchestration engine at all, but instead by other components in OIM. For example, scheduled tasks are not run by the orchestration engine, but by the embedded Quartz scheduler. Custom scheduled tasks extend the oracle.iam.scheduler.vo.TaskSupport abstract class.
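A skeletal custom task, assuming the TaskSupport contract described in the scheduler chapter (the class name and body are hypothetical):

    import java.util.HashMap;

    import oracle.iam.scheduler.vo.TaskSupport;

    // Hypothetical scheduled task; registered via a task.xml metadata file.
    public class OrphanAccountCleanupTask extends TaskSupport {

        @Override
        public void execute(HashMap arguments) {
            // Parameters defined in the task metadata arrive in this map.
            // ... the actual scheduled work goes here ...
        }

        @Override
        public HashMap getAttributes() {
            return null;
        }

        @Override
        public void setAttributes() {
        }
    }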
While every plugin needs the plugin framework metadata (plugin.xml), some specific types of plugins need additional metadata specific to that type. For example, event handlers need an EventHandlers.xml uploaded to MDS; similarly, scheduled tasks need to be defined in a task.xml file.
It is also worth noting that OIM 9.x also had a concept of "event handler", but the technology was different from that in OIM 11g. OIM 9.x event handlers extend the class com.thortech.xl.client.events.tcBaseEvent. As a general rule, 9.x event handlers are no longer supported in 11g.
For more information, read these chapters in the OIM 11.1.2.3 Developer Guide: chapter 17 for the basics of plugin development, chapter 18 for developing custom event handlers, chapter 16 for developing custom scheduled tasks, and appendix B for developing custom username and common name generation/validation policies.
Also, if you want some samples, and have access to My Oracle Support, check out these documents:
OIM11g: Sample Code For A Custom Username Generation Policy Plugin Using JDeveloper (Doc ID 1228035.1)
OIM11g: Sample Code For A Custom Event Handler Implemented for Pre-Process Stage During Create User Management Operation (Doc ID 1262803.1)
How To Create A Request Validator To Validate Justification Attribute in OIM 11g (Doc ID 1317087.1)
How To Determine OIM User Attribute Changes In A Modify Orchestration (Doc ID 1535503.1)

Related

EventSourcing gateways (synchronize with external systems)

Are there best practices for implementing event sourcing gateways? By "gateway" I mean infrastructure or a service that generates a set of events based on the state returned by some external service.
Even if an application is based on event sourcing, some external entities beyond its control can still be present. For example, you want to synchronize the user list from Azure AD, so you query the service, which returns the user list. Then you get the user list from a projection, diff it against the external state, and produce events to fill this difference.
Or your application is an online shop, and you need to import current USD/EUR/Bitcoin exchange rates for showing prices. A gateway can poll some currency provider and produce an event. In the simple case this is very easy, but if the projection state is a more complex structure, a trivial import is not obvious.
Is there a common approach for this case?
Building integration adapters that use poll-emit is normal and I personally prefer this way of doing integrations in general.
However, this has little to do with event sourcing. What you actually need to solve your integration problem is to simulate the desired situation in which the external system emits events on its own, so that you can build a reactive system that consumes these events.
When these events come to your system from the adapter, you can do whatever you want with them. Essentially, though, event sourcing assumes that you store your own objects' state in event streams; when an event comes from some external system, it is not your state. You can derive your system state from external events, but the events you store will be your own.
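For example, a poll-diff-emit adapter for the Azure AD scenario from the question could be shaped like this (a sketch in Java; every interface and event type here is a hypothetical placeholder for your own abstractions):

    import java.util.HashSet;
    import java.util.Set;

    public final class UserSyncGateway {

        interface ExternalDirectory { Set<String> fetchUserIds(); }   // e.g. Azure AD client
        interface UserProjection { Set<String> knownUserIds(); }      // your read model
        interface EventBus { void publish(Object event); }            // input side of your store

        // These are *your own* events describing the observed difference.
        record ExternalUserAppeared(String userId) {}
        record ExternalUserDisappeared(String userId) {}

        private final ExternalDirectory directory;
        private final UserProjection projection;
        private final EventBus bus;

        UserSyncGateway(ExternalDirectory directory, UserProjection projection, EventBus bus) {
            this.directory = directory;
            this.projection = projection;
            this.bus = bus;
        }

        // Called on a schedule: diff the external state against the projection
        // and emit events that close the gap.
        void poll() {
            Set<String> external = directory.fetchUserIds();
            Set<String> known = projection.knownUserIds();

            Set<String> added = new HashSet<>(external);
            added.removeAll(known);
            added.forEach(id -> bus.publish(new ExternalUserAppeared(id)));

            Set<String> removed = new HashSet<>(known);
            removed.removeAll(external);
            removed.forEach(id -> bus.publish(new ExternalUserDisappeared(id)));
        }
    }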

CRM 2011 Plugin development best practice

I am inheriting a set of plugins that appear to have been developed by different people. Some of them follow the pattern of one master plugin with many different steps. In this plugin none of the steps are cohesive or related in functionality; the author simply put them all in the same plugin, with code internal to the plugin (if/else madness) that handles the various entities, CRM messages (update, create, delete, etc.) and stages (pre-validation, post-operation, etc.).
The other developer seems to make a plugin for every entity type and/or related feature grouping. This results in multiple smaller plugins with fewer steps.
My question is this: assuming I have architected a way out of the if/else hell that the previous developer created in the 'one-plugin-to-rule-them-all' design, which approach is preferable from a CRM performance and long-term maintenance (as in fewer side effects, difficulties with deployment, etc.) perspective?
I usually follow a model driven approach and design one plugin class per entity. On this class steps can be registered for the pre-validation, pre- and post-operation and asynchronous stages on the Create, Update, Delete and other messages, but always for only one entity at a time.
Doing so I can keep a clear oversight of the plugin logic that is triggered on an entity's events and also I do not need to bother about the order in which plugin steps are triggered.
Following this approach, of course, means I need a generic pattern for handling all supported events. For this purpose I designed a plugin base class responsible for the event routing. My deriving plugin classes only need to implement (override) the event handler methods (PreUpdate, PostCreate etc.).
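To make the routing idea concrete: CRM 2011 plugins are .NET classes, so treat the following Java sketch purely as an illustration of the dispatch pattern; every type, method and message name in it is hypothetical rather than real CRM SDK API:

    // Hypothetical base class: routes (message, stage) pairs to overridable
    // handlers, so each deriving class implements only the events it needs.
    abstract class EntityPluginBase {

        final void execute(String message, String stage, Object context) {
            if ("Create".equals(message) && "PostOperation".equals(stage)) {
                postCreate(context);
            } else if ("Update".equals(message) && "PreOperation".equals(stage)) {
                preUpdate(context);
            } else if ("Update".equals(message) && "PostOperation".equals(stage)) {
                postUpdate(context);
            }
            // ... remaining message/stage combinations ...
        }

        protected void preUpdate(Object context) {}
        protected void postUpdate(Object context) {}
        protected void postCreate(Object context) {}
    }

    // One plugin class per entity; the real work is delegated to business logic.
    final class ContactPlugin extends EntityPluginBase {
        @Override
        protected void postCreate(Object context) {
            new ContactCreatedLogic().run(context);
        }
    }

    final class ContactCreatedLogic {
        void run(Object context) { /* ... the desired actions ... */ }
    }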
In my opinion, plugin classes should only be used to glue system events to the business logic. Therefore the code performing the desired actions should be placed in separate classes. Plugin classes only route the events, prepare the data and call the business logic.
Some developers tend to design one plugin class per step or even per implemented requirement. Doing so keeps your plugin classes terse (which is positive), but when logic gets complicated you can easily lose track of what is going on for a single entity. (Recently I worked with a CRM implementation that had an entity with 21 plugin classes registered for it. Understanding what was going on and adding new behaviour to this entity proved to be very tricky and time consuming.)

Making code in Liferay Model Listeners Asynchronous (using concurrency)

The Problem
Our Liferay system is the basis for synchronizing data with other web applications.
And we use Model Listeners for that purpose.
There are a lot of web-service calls and database updates in the listeners, and consequently the corresponding action in Liferay is too slow.
For example:
On adding a user in Liferay we need to fire a lot of web-service calls to add user details and update other systems with the user data, as well as some Liferay custom tables. So adding a user takes a lot of time, and in a few rare cases the request may time out!
Since the code in the UserListener depends only on the user details, and the user would still be added in Liferay even if the UserListener threw an exception, we have thought of the following solution.
We also have a scheduler in Liferay that fixes things if an exception occurred while executing the listener code.
Proposed Solution
We thought of making the code in UserListener asynchronous by using the Concurrency API.
So here are my questions:
Is it recommended to have concurrent code in Model Listeners?
If yes, then will it have any adverse effect if we also update Liferay custom tables through this code, like transactions or other stuff?
What can be other general Pros and Cons of this approach?
Is there any better way to get real-time updates to other systems without hampering the user experience?
Thank you for any help on this matter
It makes sense that you want to use Concurrency to solve this issue.
Doing intensive work, like invoking web services, in the thread that modifies the model is not really a good idea, quite apart from the impact it will have on the user experience.
Firing off threads within the models' listeners may be somewhat complex and hard to maintain.
You could explore using Liferay's Message Bus paradigm where you can send a message to a disconnected message receiver which will then do all the intensive work outside of the model listener's calling thread.
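A rough sketch of that split, assuming Liferay 6.x's com.liferay.portal.kernel.messaging API (the destination name and payload key are made up, and the destination must be registered separately in the messaging configuration):

    import com.liferay.portal.ModelListenerException;
    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;
    import com.liferay.portal.kernel.messaging.MessageListener;
    import com.liferay.portal.kernel.messaging.MessageListenerException;
    import com.liferay.portal.model.BaseModelListener;
    import com.liferay.portal.model.User;

    // The listener stays fast: it only posts a message to a destination
    // ("myapp/user_sync" is hypothetical).
    public class UserListener extends BaseModelListener<User> {

        @Override
        public void onAfterCreate(User user) throws ModelListenerException {
            Message message = new Message();
            message.put("userId", user.getUserId());
            MessageBusUtil.sendMessage("myapp/user_sync", message);
        }
    }

    // Disconnected receiver: does the slow web-service calls and custom
    // table updates outside the model listener's calling thread.
    class UserSyncListener implements MessageListener {

        @Override
        public void receive(Message message) throws MessageListenerException {
            long userId = message.getLong("userId");
            // ... call external web services and update Liferay custom tables ...
        }
    }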
Read more about the message bus here:
Message Bus Developer Guide
Message Bus Wiki

Intercepting events and controlling behaviour via events for BPEL runtime engine

I would like to:
1. intercept events, and
2. control behaviour via events, for a BPEL runtime engine. May I know which BPEL runtime engines support this?
For 1: for example, when there is an invocation of a service named "hello", I would like to receive the event "invoke_hello" from the server.
For 2: for example, when the server has three parallel service invocations, "invoke_hello1", "invoke_hello2" and "invoke_hello3", I could control the behaviour by saying that only "invoke_hello1" is allowed to run.
I am interested in any BPEL engine that supports 1, or 2, or both, with a documentation page that roughly covers this (so I can make use of the feature).
Disclaimer: I haven't personally used the eventing modules of these engines, so I cannot guarantee that they work as they promise.
Concerning question 1 (event notification):
Apache ODE has support for Execution Events. These events go into its database, and you have several ways of retrieving events. You can:
query the database to read them.
use the engine's Management API to do this via Web Services
add your own event listener implementation to the engine's classpath.
ODE's events concern the lifecycle of activities in BPEL. So your "invoke_hello" should map to one of the ActivityXXX events in ODE.
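If you take the listener route, a minimal sketch against ODE's org.apache.ode.bpel.iapi.BpelEventListener interface might look like this (how the listener is registered depends on your ODE distribution's configuration properties, so check the docs for your version):

    import java.util.Properties;

    import org.apache.ode.bpel.evt.BpelEvent;
    import org.apache.ode.bpel.iapi.BpelEventListener;

    // Receives every engine event; filter for the activity events that
    // correspond to the invoke you care about (e.g. an "invoke_hello").
    public class InvokeEventListener implements BpelEventListener {

        @Override
        public void onEvent(BpelEvent event) {
            System.out.println("ODE event: " + event.getClass().getSimpleName());
        }

        @Override
        public void startup(Properties configProperties) {
            // Called once when the engine loads the listener.
        }

        @Override
        public void shutdown() {
            // Called when the engine shuts down.
        }
    }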
The Sun BPEL Service Engine included in OpenESB has some support for alerting, but the documentation is not that verbose concerning how to use it. Apparently, you can annotate activities with an alert level and events are generated when an activity is executed.
Concerning question 2 (controlling behaviour):
This is hard and I am not sure if any engine really supports this in normal execution mode. One straightforward way of achieving this would be to execute the engine in debug mode and manually control each step. So you could skip the continuation of "invoke_hello2" and "invoke_hello3" and just continue with "invoke_hello1".
As far as I know, ODE does not have a debugger. The Sun BPEL Service Engine on the other hand, has quite a nice one. It is integrated in the BPEL editor in Netbeans which is a visual editor (that uses BPMN constructs to visualize BPEL activities) and lets you jump from and to every activity.
Another option would be to manually code your own web service that intercepts messages and forwards these to the engine depending on your choice. However, as I understand your question, you would rather like an engine that provides this out of the box.
Apparently Oracle BPEL also has support for eventing and according to this tutorial also comes with a debugger, but I haven't used this engine personally so far, so I won't include it in this answer.

Use cases of the Workflow Engine

I'd like to know about specific problems you - the SO reader - have solved using Workflow Engines and what libraries/frameworks you used if you didn't roll your own. I'd also like to know when a Workflow Engine wasn't the best choice and if/how you chose something simpler, like a TaskList/WorkList/Task-Management type application using state machines.
Questions:
What problems have you used workflow engines to solve?
What libraries/frameworks did you use?
When did a simpler State Machine/Task Management like system suffice?
Bonus: How did/do you make the distinction between Task Management and Workflow Engine?
I'm looking for first-hand experiences.
Some of the resources I've checked out:
Ruote and State Machine != Workflow Engine
StonePath and Docs
Creating and Managing Worklist Task Plans with Oracle
Design and Implementation of a Workflow Engine - Thesis
What to use Windows Workflow Foundation For
JBoss jBPM Docs
I'm biased as well, as I am the main author of StonePath.
I have developed workflow applications for the U.S. State Department, the Geneva Centre for Humanitarian Demining, several Fortune 500 clients, and most recently the Washington DC Public School System. Every time I have seen a 'workflow engine' that tried to be the one master reference for business processes, I have seen an organization fighting itself to work around the tool. This may be due to the fact that these solutions have always been vendor/product driven, and they end up with a tactical team of 'consultants' constantly feeding the app... but because of this, I tend to react negatively when I hear the benefits of process-based tools that promise to 'centralize the workflow definitions in one place and make them repeatable'.
I very much like Ruote - I have been following that project for some time, and should I need that kind of solution, it will be the next tool I'll be willing to try. StonePath has a very different purpose than Ruote:
Ruote is useful to Ruby in general,
StonePath is aimed at Rails, the web framework written in Ruby.
Ruote is about long-lived business processes and their associated definitions (Note - active development on ruote ceased).
StonePath is about managing State-based workflow and tasking.
Frankly, I think the distinction from the outside looking in might be subtle - many times the same kinds of business processes can be represented either way - the state-and-task-based model tends to map to my mental model though.
Let me describe the highlights of a state-based workflow.
States
Imagine a workflow revolving around the processing of something like a mortgage loan or a passport renewal. As the document moves 'around the office', it travels from state to state.
If you are responsible for the document and your boss asked you for a status update, you'd say things like:
"It is in data entry"...
"We are checking the applicant's credentials now"...
"we are awaiting quality review"...
"We are done"... and so on.
These are the states in a state-based workflow. We move from state to state via transitions - like "approve", "apply", "kickback", "deny", and so on. These tend to be action verbs. Things like this are modeled all the time in software as a state machine.
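In code, that model can be as small as an enum plus guarded transitions; a toy sketch in Java (states and transitions taken from the example above):

    // Minimal state machine for the passport-renewal example.
    enum State { DATA_ENTRY, CREDENTIAL_CHECK, QUALITY_REVIEW, DONE }

    final class PassportRenewal {

        private State state = State.DATA_ENTRY;

        void apply()    { transition(State.DATA_ENTRY, State.CREDENTIAL_CHECK); }
        void approve()  { transition(State.CREDENTIAL_CHECK, State.QUALITY_REVIEW); }
        void release()  { transition(State.QUALITY_REVIEW, State.DONE); }
        void kickback() { transition(State.QUALITY_REVIEW, State.DATA_ENTRY); }

        private void transition(State from, State to) {
            if (state != from) {
                throw new IllegalStateException(state + " -> " + to + " is not allowed");
            }
            state = to;
        }
    }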
Tasks
The next part of a state/task-based workflow is the creation of tasks.
A Task is a unit of work, typically with a due date and handling instructions, that connects a work item (the loan application or passport renewal, for instance) to a user's "in box".
Tasks can happen in parallel with each other or sequentially.
Tasks can be created automatically when we enter states.
Tasks can be created manually as people realize work needs to get done.
Tasks can be required to be completed before we can move on to a new state.
This kind of behavior is optional, and part of the workflow definition.
The rabbit hole can go a lot deeper than this, and I wrote an article about it for Issue #4 of PragPub, the Pragmatic Programmer's Magazine. Check out the repo link above for an updated PDF of that article.
In working with StonePath the last few months, I have found that the state-based model maps really well to restful web architectures - in particular, the tasks and state transitions map nicely as nested resources. Expect to see future writing from me on this subject.
I'm biased, I'm one of the authors of ruote.
variant 1) state machine attached to a resource (document, order, invoice, book, piece of furniture).
variant 2) state machine attached to a virtual resource named a task
variant 3) workflow engine interpreting workflow definitions
Now, your question is tagged "BPM", which can be expanded into "Business Process Management". How does that kind of management occur in each of the variants?
In variant 1, the business process (or workflow) is scattered in the application. The state machine attached to the resource enforces some of the aspects of the workflow, but only those related to the resource. There may be other resources with their own state machine following the same business process.
In variant 2, the workflow can be concentrated around the task resource and represented by the state machine around that resource.
In variant 3, the workflow is enacted by interpreting a resource called a workflow definition (or business process definition).
What happens when the business process changes ? Is it worth having a workflow engine where business processes are manageable resources ?
Most state machine libraries have one set of states + transitions. Workflow engines are, for the most part, workflow definition interpreters, and they allow multiple different workflows to run together.
What will be the cost of changing the workflow ?
The variants are not mutually exclusive. I have seen many examples where a workflow engine changes the state of multiple resources some of them guarded by state machines.
I also use variant 3 + 2 a lot for human tasks: the workflow engine, at some points when running a process instance, hands a task (workitem) to a human participant (a resource task is created and placed in the state 'ready').
You can go a long way with variant 2 alone (the task manager variant).
We could also mention variant 0), where there is no state machine, no workflow engine, and the business process(es) are scattered and/or hardcoded in the application.
You can ask many questions, but if you don't take the time to read the answers and don't take the time to try out and experiment, you won't go very far, and will never acquire any flair for when to use this or that tool.
On a previous project I worked on, I added some workflow-type rules to a set of government forms in the healthcare industry.
Forms needed to be filled out by the end user, and depending on some answers, other forms were scheduled to be filled out at a later date. There were also external events that would cancel scheduled forms or schedule new ones.
Sample Flow :
Patient Admitted -> Schedule Initial Assessment Form -> Schedule Quarterly Review Form -> Patient Died -> Cancel Review -> Schedule Discharge Assessment Form
Many other rules were based on things such as patient age, where they were being admitted, etc.
This was an ASP.NET app; the rules were basically a table in the database. I added scripting, so a script would run on form completion to determine what to do next. This was a horrid design, and it would have been perfect for a proper workflow engine.
I'm one of the authors of the open source Temporal Workflow Engine, which we initially developed at Uber as Cadence. The difference between Temporal and the majority of existing workflow engines is that it is developer-focused and extremely flexible and scalable (to tens of thousands of updates per second and up to billions of open workflows). Workflows are written as object-oriented programs, and the engine ensures that the state of the workflow objects, including thread stacks and local variables, is fully preserved in case of host failures.
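As a taste of the programming model, a minimal workflow in Temporal's Java SDK could look like this (the order-processing interfaces and the timeout value are invented for illustration):

    import java.time.Duration;

    import io.temporal.activity.ActivityInterface;
    import io.temporal.activity.ActivityOptions;
    import io.temporal.workflow.Workflow;
    import io.temporal.workflow.WorkflowInterface;
    import io.temporal.workflow.WorkflowMethod;

    @ActivityInterface
    interface OrderActivities {
        void chargeCard(String orderId);
        void ship(String orderId);
    }

    @WorkflowInterface
    interface OrderWorkflow {
        @WorkflowMethod
        void processOrder(String orderId);
    }

    class OrderWorkflowImpl implements OrderWorkflow {

        private final OrderActivities activities = Workflow.newActivityStub(
            OrderActivities.class,
            ActivityOptions.newBuilder()
                .setStartToCloseTimeout(Duration.ofMinutes(5))
                .build());

        @Override
        public void processOrder(String orderId) {
            // If the hosting process dies between these two calls, Temporal
            // replays the event history and resumes exactly where it left off.
            activities.chargeCard(orderId);
            activities.ship(orderId);
        }
    }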
What problems have you used workflow engines to solve?
Temporal is used for practically any backend application that lives beyond a single request reply. Examples of usage are:
Distributed CRON jobs
Managing ML/Data pipelines
Reacting to business events. For example trip events at Uber. The workflow can accumulate state based on events received and execute activities when necessary.
Services deployment to Mesos/Kubernetes
CI Pipeline implementation
Ensuring that multiple service calls complete when a request is received. Including SAGA pattern implementation
Managing human worker tasks (similar to Amazon MTurk)
Media processing
Customer Support Ticket Routing
Order processing
Testing service similar to ChaosMonkey
and many others
The other set of use cases is based on porting existing workflow engines to run on Temporal. Practically any existing engine workflow specification language can be ported to run on Temporal. This way a single backend service can power multiple domain specific workflow systems.
What libraries/frameworks did you use?
Temporal is a self-contained service written in Go, with Go, Java, PHP, and TypeScript client-side SDKs (.NET and Python are coming in 2022). The only external dependency is storage. Cassandra, MySQL, and PostgreSQL are supported. Elasticsearch can be used for advanced indexing.
Temporal also supports asynchronous cross-region (using AWS terminology) replication.
When did a simpler State Machine/Task Management like system suffice?
The open source Temporal service can be self-hosted, or the temporal.io cloud offering can be used. So the overhead of building any custom state machine/task management is always higher than using Temporal. For self-hosting, the service and its storage need to be set up. If you already have a SQL database, the service deployment is trivial through a Docker image. Docker is also used to run a local Temporal service for development on a personal computer or laptop.
I am one of the authors of Imixs-Workflow. Imixs-Workflow is an open source workflow engine based on BPMN 2.0 and fully integrated into the Java EE technology stack.
I have been developing workflow engines myself for more than 10 years. I will try to answer your question briefly:
> What problems have you used workflow engines to solve?
My personal goal when I started to think about workflow engines was to avoid hard-coding the business logic within my application. Many things in a business application can be reused, so it makes sense to keep them configurable. For example:
sending out a notification
viewing open tasks
assigning a task to a person
describing the current task
From this function list you can see I am talking about human-centric workflows. In short: A human-centric workflow engine answers the questions: Who is responsible for a task and who needs to be informed next? And these are the typical questions in business requirements.
> What libraries/frameworks did you use?
Five years ago we started reimplementing the Imixs-Workflow engine, focusing on BPMN 2.0. BPMN is the common standard for process modeling. The surprising thing for me was that we were suddenly able to describe even highly complex business processes that could be visualized and executed. I recommend that everyone use BPMN for modeling business processes.
> When did a simpler State Machine/Task Management like system suffice?
A simple state machine is sufficient if you just want to track the status of a business object. This is the case when you begin to introduce the 'status' attribute into your object model. But if you need business processes with responsibilities, logging and flow control, then a state machine is no longer sufficient.
> Bonus: How did/do you make the distinction between Task Management and Workflow Engine?
This is exactly the point where many of the workflow engines mentioned here differ. For a human-centric workflow you typically need task management to distribute tasks between human actors. For process automation, this point is not so relevant; it is sufficient if the engine performs certain tasks. Task management and workflow engines cannot be compared, because task management is always a function of a workflow engine.
Check out the rails_workflow gem - I think this is close to what you are searching for.
I have experience using the Activiti BPMN 2.0 engine to handle high-performance, high-throughput data transfer processes in an infrastructure of network nodes. The basic task was to allow configuration and monitoring of such transfer processes and to control each network node (i.e. request node1 to send a data file to node2 via a specific transport layer).
There could be thousands of processes running at a time, and overall tens or low hundreds of thousands of processes per day.
There were a bunch of different process definitions, but it was not necessarily required that an operator of the system could create custom workflows. So the primary use case for the BPM engine itself was to be robust and scalable and to allow monitoring of each process flow.
In the end it basically worked, but what we learned from that project was that a BPMN platform, or rather the Activiti engine specifically, was not the best bet for such a high-throughput system.
The main challenges were task execution prioritization, DB locking, and execution retries, to name a few concerning the BPM itself. So we had to develop custom handling of these, for example:
Handling of retries in the BPM for cases when a node had no free worker for given task, or when the node was not running at all.
Execution of parallel transfer tasks in a single process and synchronization of the results (success/failure).
I don't know if other BPMN engines would be more suitable for such a scenario, since BPMN is mostly intended for long-running business tasks involving user interaction, where performance is probably not the issue it was in our case.
I rolled my own workflow engine to support phased processing of documents - cataloging, sending for image processing (we work with redaction software), sending to validation if needed, then release, and finally shipping back to the client. In our case we have a truckload of documents to process, so sometimes we need to run each service separately to control delivery and resource usage. Simple in concept, but high performance and distributed processing were needed, and we couldn't find any off-the-shelf product that fit the bill for us.
