Determine if Commitment control is running - rpgle

I am attempting to conditionally use commitment control. RPG allows a conditional COMMIT keyword on the files it opens. Since one of my programs is called from within a trigger, I want the higher-level logic to control the commit scope. This means that, in the trigger, I need to determine whether commitment control is in effect and, if so, pass an optional parameter to the called programs.
Does anyone know of a way to tell if commitment control is currently running in RPG or DB2/400?

The QTNRCMTI (Retrieve Commitment Information) API will tell you the current commitment control status of the job.
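For illustration, here is a sketch of calling that API from Java with the IBM Toolbox (jt400) ProgramCall class. The CMTI0100 format name and the failure behavior when no commitment definition exists are assumptions to verify against the API documentation; also note that a Toolbox call runs in a host server job, so for the trigger scenario the RPG program would call the API directly within its own job.

import com.ibm.as400.access.*;

// Sketch: ask the system whether a commitment definition is active.
// The format name "CMTI0100" is an assumption; check the Retrieve
// Commitment Information (QTNRCMTI) API documentation.
public class CommitmentCheck {
    public static boolean isCommitmentControlActive(AS400 system) throws Exception {
        ProgramParameter[] parms = new ProgramParameter[] {
            new ProgramParameter(64),                                 // receiver variable
            new ProgramParameter(BinaryConverter.intToByteArray(64)), // receiver length
            new ProgramParameter(new AS400Text(8, system).toBytes("CMTI0100")), // format name (assumed)
            new ProgramParameter(new byte[8])                         // error code: errors returned as messages
        };
        ProgramCall call = new ProgramCall(system);
        call.setProgram("/QSYS.LIB/QTNRCMTI.PGM", parms);
        if (!call.run()) {
            // A failed call typically indicates that no commitment
            // definition exists for the job, i.e. commitment control
            // is not active (verify the message IDs for your release).
            for (AS400Message msg : call.getMessageList()) {
                System.out.println(msg.getID() + ": " + msg.getText());
            }
            return false;
        }
        // On success, decode parms[0].getOutputData() per the CMTI0100
        // layout for the details of the active commitment definition.
        return true;
    }
}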

Inferring the user intention from the event stream in an event store. Is this even a correct thing to do?

We are using an event store that stores a single aggregate: a user's order (imagine an Amazon order that can be updated at any moment by both the client and someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that form the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user who initially had one copy of book X in the order, then removed it and added 2 copies again, for a net change of one extra copy. Should we record this as a single event "User added 1 book", or as the events "User removed 1 book" + "User added 2 books" (we seem to have followed the latter approach)?
In some cases we have one initial event that is then followed by other events. As the developer, I know for sure that all these events were triggered by a single command, but it seems incredibly brittle to rely on that kind of assumption when generating this "order history" view on the fly for the user. Yet if I don't treat them as a single action, at least in the order history feature, it will look like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's ID to each event so I can then easily infer which events happened together and which didn't (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach to dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should make it possible to create event names that make the original intent clear. A good rule of thumb is that the events in the event stream should be understandable to the relevant members of the business; a stream full of generic names like 'cartUpdated' doesn't meet that bar.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances, i.e. it may be too detailed. In that case, maybe create a 'summariser' read model fed by the events.
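For instance, here is a minimal Java sketch of such a summariser read model (the event names are hypothetical): it replays the raw remove/add events and reports only the net change per book, so one removal plus one addition collapses into a single history line.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical micro-events, named for illustration only.
record BookAdded(String bookId, int qty) { }
record BookRemoved(String bookId, int qty) { }

class OrderHistorySummariser {
    private final Map<String, Integer> netChange = new HashMap<>();

    // Fold each raw event into a running net quantity per book.
    void on(Object event) {
        if (event instanceof BookAdded e) {
            netChange.merge(e.bookId(), e.qty(), Integer::sum);
        } else if (event instanceof BookRemoved e) {
            netChange.merge(e.bookId(), -e.qty(), Integer::sum);
        }
    }

    // Emit one human-readable line per book whose quantity actually changed.
    List<String> summary() {
        List<String> lines = new ArrayList<>();
        netChange.forEach((book, qty) -> {
            if (qty != 0) {
                lines.add((qty > 0 ? "Added " : "Removed ") + Math.abs(qty) + " x " + book);
            }
        });
        return lines;
    }
}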
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you only want one entry even for multiple commands, if they happened within a short window).
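As a sketch of that idea (the envelope shape and field names here are assumptions, not any particular library's API):

import java.time.Instant;
import java.util.Map;
import java.util.UUID;

// Every event produced while handling one command carries that command's
// ID, so a history projection can group the resulting events into a
// single "amendment" entry; the correlation ID links longer processes.
record EventEnvelope(Object payload, Map<String, String> metadata) {
    static EventEnvelope wrap(Object domainEvent, UUID commandId, UUID correlationId) {
        return new EventEnvelope(domainEvent, Map.of(
                "commandId", commandId.toString(),
                "correlationId", correlationId.toString(),
                "recordedAt", Instant.now().toString()));
    }
}

A projection can then group events by their commandId value instead of guessing from timestamps.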
Events (past tense) do not always capture human (or system) intent. Commands (imperative mood) do. Since not all command data can easily be reconstructed from the events it generated, keeping a structured log of commands looks like a good option here.

How to implement Commands and Events for complex form using Event Sourcing?

I would like to implement CQRS and ES using the Axon Framework.
I've got a pretty complex HTML form which represents a recruitment process with six steps.
ES would be helpful for generating historical statistics for selected dates and for tracking changes to the form.
The admin can always perform several operations:
assign person responsible for each step
provide notes for each step
accept or reject candidate on every step
turn on/off SMS or email notifications
assign tags
A form update (the difference only) is sent from the UI application to the backend.
Assuming I want to make changes only to the server-side application, the question is what should be a Command and what should be an Event. I'm considering three options:
1. The form patch is a Command which generates a single Form Update Event.
The drawback of this solution is that each event handler needs to check whether the change in the form concerns it, e.g. whether an email about rejection should be sent.
2. The form patch is a Command which generates several Events, e.g. Interviewer Assigned, Notifications Turned Off, Rejected On Technical Interview.
The drawback of this solution is that some events could be generated while others are not because of broken constraints, e.g. Notifications Turned Off will succeed but Interviewer Assigned will fail due to assigning an unauthorized user. Maybe I should check all constraints before generating the events?
3. The form patch is converted to several Commands, e.g. Assign Interviewer and Turn Off Notifications, and each command generates an event, e.g. Interviewer Assigned, Notifications Turned Off.
The drawback of this solution is that some commands can fail, e.g. Assign Interviewer can fail due to assigning an unauthorized user. This would end up in an inconsistent state, because some events would be stored in the repository and some would not. Maybe I should check all constraints before generating the commands?
The question I would call your attention to: are you creating an authority for the information you store, or are you just tracking information from the outside world?
Udi Dahan wrote Race Conditions Don't Exist, raising this interesting point:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If you have an unauthorized user in your system, is it really critical to the business that they be authorized before they are assigned responsibility for a particular step? Can the system really tell that the "fault" is that the responsibility was assigned to the wrong user, rather than that the user is wrongly not authorized?
Greg Young talks about exception reports in warehouse systems, noting that the responsibility of the model in that case is not to prevent data changes, but to report when a data change has produced an inconsistent state.
What's the cost to the business if you update the data anyway?
If the semantics of the message is that a Decision Has Been Made, or that Something In The Real World Has Changed, then your model shouldn't be trying to block that information from being recorded.
FormUpdated isn't a particularly satisfactory event, for the reason you mention: you have to do a bunch of extra work to cast it in domain-specific terms. Given a choice, you'd prefer to do that once. So it's reasonable to think in terms of translating events from domain-agnostic forms to domain-specific forms as you go along:
HttpRequestReceived ->
FormSubmitted ->
InterviewerAssigned
where the intermediate representations are short-lived.
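As a rough Java sketch of that chain (all type and field names are hypothetical), each stage narrows a generic message into a more domain-specific one, and only the final event survives:

import java.util.Map;
import java.util.Optional;

record HttpRequestReceived(String body) { }
record FormSubmitted(Map<String, String> changedFields) { }
record InterviewerAssigned(String stepId, String interviewerId) { }

class FormTranslator {
    // Stage 1: domain-agnostic; just field-level changes from the request.
    FormSubmitted parse(HttpRequestReceived request) {
        return new FormSubmitted(parseBody(request.body()));
    }

    // Stage 2: domain-specific; emitted only if the patch actually
    // assigns an interviewer. The intermediate FormSubmitted is discarded.
    Optional<InterviewerAssigned> toDomainEvent(FormSubmitted form) {
        String step = form.changedFields().get("step");
        String interviewer = form.changedFields().get("interviewer");
        return (step == null || interviewer == null)
                ? Optional.empty()
                : Optional.of(new InterviewerAssigned(step, interviewer));
    }

    private Map<String, String> parseBody(String body) {
        return Map.of(); // placeholder: real code would parse JSON or form data
    }
}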
I can see one big drawback of the first option. One of the biggest advantages of CQRS/ES with Axon is scalability. We can add new features without worrying about regression bugs, because adding a new feature comes down to defining new commands, events, and handlers for both of them. None of them should interfere with the ones already existing in our system.
FormUpdate as a command requires adding extra logic to one of the handlers. Adding a new attribute to the patch, and in consequence to the command, will cause changes to the current logic. Scalability is no longer an advantage in that case.
VoiceOfUnreason gives a very good explanation of what you should think about when starting with such a system, so definitely take a look at his answer.
The only thing I'd like to add is that I'd suggest you take the third option.
With the examples you gave, the more generic commands/events don't tell that much about what's happening in your domain. The more granular events explain far better what exactly has happened, as the name of the event message already points it out.
Pulling Axon Framework into the loop, I can also add a couple of pointers.
From a command message perspective, it's safe to just pick a route and not overthink it too much. The framework quite easily allows you to adjust the command structure later on. In Axon Framework trainings it is typically suggested to let a command message take the form of a specific action you're performing. So assigning a person to a step would typically be an AssignPersonToStepCommand, as that is the exact action you'd like the system to perform.
For events, it's typically a bit nastier to decide later on whether you want fine-grained or generic events. This follows from doing Event Sourcing: since the events are your source of truth, you'll be required to keep dealing with all forms of events you've got in your system.
Due to this, I'd argue that the weight of your decision should lie with how fine-grained your events become. To loop back to your question: in the example you give, I'd say option 3 would fit best.
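To make option 3 concrete, here is a minimal Axon-style sketch (the class and field names are hypothetical, not from the question): one granular command produces one granular event, and all validation happens in the command handler before the event is applied, so a rejected command stores nothing and the inconsistency worried about in the question never arises.

import java.util.HashMap;
import java.util.Map;
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import static org.axonframework.modelling.command.AggregateLifecycle.apply;

// Hypothetical fine-grained command: one specific action on the process.
class AssignPersonToStepCommand {
    @TargetAggregateIdentifier
    final String recruitmentId;
    final String stepId;
    final String personId;

    AssignPersonToStepCommand(String recruitmentId, String stepId, String personId) {
        this.recruitmentId = recruitmentId;
        this.stepId = stepId;
        this.personId = personId;
    }
}

// The matching fine-grained event; its name alone tells what happened.
class PersonAssignedToStepEvent {
    final String recruitmentId;
    final String stepId;
    final String personId;

    PersonAssignedToStepEvent(String recruitmentId, String stepId, String personId) {
        this.recruitmentId = recruitmentId;
        this.stepId = stepId;
        this.personId = personId;
    }
}

public class RecruitmentProcess {
    @AggregateIdentifier
    private String recruitmentId; // set by the creation event handler (not shown)

    private final Map<String, String> assigneeByStep = new HashMap<>();

    protected RecruitmentProcess() { } // required by Axon for event sourcing

    @CommandHandler
    public void handle(AssignPersonToStepCommand cmd) {
        // Validate here first: throwing (e.g. for an unauthorized person)
        // means no event is applied, so nothing inconsistent is stored.
        apply(new PersonAssignedToStepEvent(cmd.recruitmentId, cmd.stepId, cmd.personId));
    }

    @EventSourcingHandler
    public void on(PersonAssignedToStepEvent evt) {
        assigneeByStep.put(evt.stepId, evt.personId);
    }
}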

How to ease the updating inferno with web performance test scripts

Updating a performance test script, e.g. with LoadRunner, can take a lot of time and be quite frustrating. If there have been updates to the application, you usually have to run the script, find out what has to be changed, update it, run it again, and so on. Does anyone have some concrete best practices for easing this updating inferno? One obvious thing is good communication with developers.
It depends on the kind of update. If the update is dramatic, like adding new fields for the user to fill in, then someone has to manually touch up the test scripts.
If, however, the update is minor, for example, some changes to the hidden fields or changes to the internal names of user-facing fields, then it's possible to write a script that checks the change and automatically updates the test script.
One of the performance test platforms, NetGend, automatically takes care of the hidden fields and the internal names of user-facing fields, so it's very easy to create a script to performance-test an HTML form. The tester only needs to fill in the values that he/she would have to enter using a browser, so no correlation is necessary there. Please send me a message if you need to know more about it.
There are many things you can do to insulate your scripts from build-to-build variability. The higher up the OSI stack you go, the lower the maintenance charge, but the higher the resource cost for the virtual user type. Assuming changes are limited to page-level resources and a few hidden fields here and there for web sites or applications, you can record in HTML mode. You can blast (delete) the EXTRARES sections, as the page parser in HTML mode will automatically parse the page and load the page resources even without an explicit reference. It can be a real pain to keep these sections in sync if you have developers who are experimenting quite a bit.
Next up, for forms which have a very high velocity of change, consider the use of web_custom_request() for that one form. You can use correlation statements to pick up all of the name|value pairs as needed and build the form submit dynamically. There will be a little more up-front work for this, but you should see a payoff at around the fourth changed build, where you would normally have been rebuilding some scripts.
Take a look at all of the hosts referenced in your code and parameterize all of these items. I have a template that I use for web virtual users which pairs a default value with the ability to change any of the host names via the control panel's extra attributes section. Take a look at the example for lr_get_attrib_string() for how you might implement the pickup, and pair that with a check for NULL and a population with a default value in your code.
This is going to seem counterintuitive, but comment your script heavily for the changes that occur often, so you know where to spend the extra labor up front to handle a more dynamic data set.
Almost nothing you do with any tool can save you from structural changes in the design and flow of the app, such as the insertion of a new page in the workflow, but paying attention to the design of the high-change pages, of which there are typically a small number, can result in test code with a very long life.
Of course, if your application is web-services based, then there is a natural long life to the use of exposed public services. Code may change on the back end of the service, but typically the exposed public interface is very stable.

How to implement "continue=false" in XPages events

This is a rather generic question, but I'll try to be as precise as possible:
quite often I'm asked by customers for proper implementations of LotusScript's
continue = false
in Notes' Query* events. One quite common situation is a form's QueryOpen event, where we can actually stop the process of opening the document in question based on some condition, e.g. based on the response from a user dialog.
For some XPages events like querySaveDocument there are quite obvious solutions, whereas with others I can only recommend rethinking the entire logic, like preventing code execution at a much earlier stage. But of course most people in question would prefer a generic approach like "rewrite those codes using...". And, to be honest, I'd like to know myself ;)
I'm more or less familiar with the XPages / JSF lifecycle, but have to admit that I don't have a proper idea of how I could stop execution at any given phase.
As always, any hint is welcome.
EDIT (to clarify my question, but also in response to Tim's answer below):
It's not just QuerySave but also QueryModeChange and QueryRecalc that somehow need to be translated together with an existing application's logic, but that don't have an equivalent in the XPages world. Are the two concepts (form-based and XPages-based) just too different at this point?
As an example, think of a workflow application where we need to check certain conditions before we allow a potential author to open an existing doc in edit mode. In my Notes client application I add some code to two events: QueryOpen, where I check the "mode" argument, and QueryModeChange, where I check the current doc mode. In both cases I can prevent the doc from being edited by adding my continue = false, if necessary. Depending on the event, the doc will either not change its mode, or not open at all.
With an XPage I can use buttons for changing a doc's edit mode, and I can "hide" those buttons, or just add some checking code, or whatever.
But 17 years of Domino consulting have taught me at least one lesson: there will always be users who find the hidden ways to reach their goals. In our case they might find out that a simple modification of the page's URL will finally allow them to edit the doc. To prevent this I could maybe use the beforeRenderResponse event, I assume. But beforeRenderResponse is called in other situations as well, so we'd have to investigate the current situation first. Or I could make sure that users don't have author rights unless the situation allows it.
Again, not a huge problem, but when making the transition from a legacy Notes application this means rethinking its entire logic, which makes the job more tedious, and especially more expensive.
True? Or am I missing some crucial parts of the concept?
Structure your events as action groups and, when applicable, return false. This will cause all remaining actions in the group to be skipped.
For example, you could split a "Save" button into two separate actions:
1.
// by default, execute additional actions:
var result = true;
/* execute some logic here */
if (somethingFailed) {
    result = false;
}
return result;
Replace somethingFailed with an evaluation based on whatever logic you have in place of the block comment to determine whether it's appropriate to now save the document.
2.
return currentDocument.save();
Not only does the above pattern cause the call to save() to be skipped if the first action returns false, but because save(), in turn, returns a boolean, you could theoretically also add a third action as a kind of postSave event: if the save is successful, the third action will automatically run; if the save fails, the third action will be automatically skipped.
All queryModeChange logic should be moved to the readonly attribute of a panel (or the view root of an XPage or Custom Control) containing all otherwise editable content. You would basically just be flipping the boolean: traditionally, queryModeChange would treat false (for Continue) as an indication that the document should not be edited (this also forces you to check whether the user is trying to change from read to edit, because if you forgo this check, you're potentially also preventing a user from changing the mode back to read when it's already in edit), whereas readonly should of course return true if the content should not be editable.
Since the queryModeChange approach was nearly always an additional layer of "fig leaf" security, in XPages it's far better to handle this via actual security mechanisms; the readonly attribute is explicitly intended for enforcing security. Additionally, in lieu of using readonly, you could instead use the acl complex property that is also available for panels, XPages, and Custom Controls to provide different permissions to different subsets of users; anyone with a certain role, for instance, would automatically have edit access, whereas the level for the default entry can be computed based on item values indicating the current "status" and/or "assignee". With either (or both) of these mechanisms in place, it doesn't matter what the user does to the URL... the relevant components cannot be editable if the container is read-only. They could even try to hack in by running JavaScript in Chrome Developer Tools, attempting to emulate the POST requests that would be sent if they could edit the content... the data they send will still not get pushed back to the model, because the targeted components are read-only by virtue of the attributes of their container.
Attempting to apply all Notes client patterns directly to the XPages context is nearly always an exercise in frustration and, ultimately, futility. While I won't divulge specifics here, I (and some of the smartest people I know) learned this lesson at great cost. While users may say (and even believe) that they want exactly what they already have... if they did, they would be keeping what they already have, not paying you to turn it into something else. So any migration from a Notes client app to an XPage "equivalent" is your one opportunity to revisit why the code used to do what it did, and to determine whether that even makes sense to retain within the XPage, based not only on the differential between the Notes client and XPage paradigms, but also on any differential between what the users' business process was when the Notes client app was developed and what their process is now.
Omitting this evaluation guarantees that the resulting app will run code it doesn't need to and will fail to make the most of the target platform.
queryRecalc is a perfect example of this: typically, recalculation was blocked to optimize performance when the user's desktop and network resources were responsible for performing complex and/or network-intensive recalculations. In XPages this all happens on the server, so a network request from the browser that returns a page where everything has changed is typically no more expensive for the end user than a page where nothing has changed (unless there's an extreme differential in the amount of markup that is actually sent). Unless the constituent components are bound to data that is expensive for the server to recalculate, logically blocking recalculation offers little or no performance benefit for the user. Furthermore, if you're trying to block recalculation in an event, you're too late: XPages uses a lifecycle that consists of six phases, so by the time your event code runs, any recalculation you're trying to block has already occurred. So, if the reason for blocking recalculation was to optimize performance, implement a scope caching strategy that ensures you're only pulling fresh data when it makes sense to do so, and the end user experience will be sufficiently performant without trying to prevent the entire page from recalculating. If, on the other hand, queryRecalc was being used as another fig leaf (something has changed, but we don't want to show the user the updates yet), that logic should definitely be revisited to determine whether it's still applicable, whether it was ever a good idea, and which portions of the platform are now the best fit for meeting the business process objectives.
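For instance, here is a minimal sketch of such a scope cache, written as a Java helper against the standard JSF ExternalContext that XPages is built on (the class name, keys, and one-minute TTL are illustrative):

import java.util.Map;
import javax.faces.context.FacesContext;

public class OrderStatsCache {
    private static final String KEY = "orderStats";
    private static final String STAMP = "orderStatsLoadedAt";
    private static final long TTL_MS = 60000; // recompute at most once a minute

    public static Object getStats() {
        // sessionScope in XPages is the JSF session map underneath
        Map<String, Object> scope = FacesContext.getCurrentInstance()
                .getExternalContext().getSessionMap();
        Long loadedAt = (Long) scope.get(STAMP);
        if (loadedAt == null || System.currentTimeMillis() - loadedAt > TTL_MS) {
            scope.put(KEY, recomputeExpensiveStats());
            scope.put(STAMP, System.currentTimeMillis());
        }
        return scope.get(KEY);
    }

    private static Object recomputeExpensiveStats() {
        // placeholder for the costly lookup the paragraph above alludes to
        return new Object();
    }
}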
In summary, use the security mechanisms unique to XPages for locking down portions or all of a page, and use the memory scopes that we didn't have in the Notes client to ensure the application performs well. Porting an event that used to contain this logic to an XPage event that continues to contain this logic will likely fail to produce the desired result and squander some of the benefits of migrating to XPages.

Push vs pull in reactive-banana

I am building a mediaplayer-like application using reactive-banana.
Let's say I want a Behavior that represents the currently selected track in the track list.
I have two options: use fromPoll to get the current selection when it's needed, or use fromChanges and subscribe to the selection change event.
I use the selected track Behavior only when the user presses the "Play" button. This event is much rarer than a selection change.
Given that, I'd assume that fromPoll would be better/more efficient than fromChanges in this situation. But the docs say that "the recommended way to obtain Behaviors is by using fromChanges".
Does that still apply here? I.e., will the polling action be executed more often than it is actually used (sampled) by the network?
In the current version (0.7) of reactive-banana, the fromPoll function actually creates a behavior whose value is determined by executing the polling action whenever any event happens at all.
In contrast, fromChanges will only update the behavior when the particular event given as argument has an occurrence.
In other words, in the current implementation, fromPoll is always less efficient than fromChanges.
Then again, I wouldn't worry too much about efficiency right now, because I haven't spent much time implementing suitable optimizations yet. Just use whatever is simplest now and save any efficiency issues for future versions of reactive-banana.
