I have tried to load some code that uses
SystemChangeNotifier notify: self
ofSystemChangesOfItem: #class
using: #method.
I know this should be changed to SystemAnnouncer; however, that class seems to require you to register for each possible kind of change separately, e.g. classAdded:, methodAdded:, etc.
What is the equivalent of the above code that notifies on all changes?
AFAICT that's no longer possible. I think it was a conscious decision to drop unspecified, general notifications in favour of specific ones. In your case specifically this might be a bit of a bummer, but in general it means that subscribing to specific change events is much easier, and there are fewer announcements, because the notification object knows what change it represents. Previously the subscriber would be notified about all changes and would have to create a set of checks to filter out the unwanted ones.
Our plugin maintains some instance parameter values across many elements, including those in groups.
Occasionally the end users will introduce data that activates an unused Category,
so we have to update the document parameter bindings to include those categories. However, when we call
doc.ParameterBindings.ReInsert()
our existing parameter values inside groups are lost, because our VariesAcrossGroups flag is toggled back to false.
How did Revit intend this to work - are we supposed to use this in a different way, to not trigger this problem?
ReInsert() expects a base Definition argument, and would usually get an ExternalDefinition supplied.
To learn more, I instead tried to scan through the definition keys of the existing bindings and match those.
This way, I got the document's InternalDefinition, and tried calling ReInsert with that instead
(my hope was that, since the existing InternalDefinition DID include VariesAcrossGroups=true, this would help). Alas, ReInsert doesn't seem to care.
The problem, as you might guess, is that after VariesAcrossGroups=False, a lot of my instance parameters have collapsed into each other, so they all hold identical values. Given that they are IDs, this is less than ideal.
My current (intended) solution is to grab a backup of all existing parameter values BEFORE I update the bindings; then, after the binding update and after setting VariesAcrossGroups back to true, inspect all values and re-assign every parameter value that has been broken. But as you may surmise, this is less than ideal: it will be horribly slow for the users of our plugin, and frankly it seems like something the Revit API should take care of, not the plugin developer.
Are we using this the wrong way?
One approach I have considered is to bind every possible category I can think of, up front and once only. But I'm not sure that is possible. Categories in themselves are also difficult to work with, as you can only create them indirectly, by using your project Document as a factory (i.e. you cannot create a category yourself; you can only indirectly ask the Document to, maybe, create the category you request). Because of this, I don't think you can bind all categories up front: some categories only become available in the document AFTER you have included a given family/type in your project.
To sum it up: First, I
doc.ParameterBindings.ReInsert()
my binding, with the updated categories. Then, I call
InternalDefinition.SetAllowVaryBetweenGroups()
(after having determined that the InternalDefinition's VariesAcrossGroups has reverted back to false).
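In code, the sequence looks roughly like this (just a sketch, inside a Revit add-in command with an open Transaction; externalDef, binding and internalDef stand for the objects described above):

doc.ParameterBindings.ReInsert(externalDef, binding);

// ReInsert() has reverted VariesAcrossGroups to false, so restore it:
if (!internalDef.VariesAcrossGroups)
    internalDef.SetAllowVaryBetweenGroups(doc, true);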
I am interested to hear the best way to do this, without destroying the client's existing data.
Thank you very much in advance.
(I'm not sure I will accept my own answer).
My answer is just that you can work around this problem
by scanning the entire Revit database for your existing parameter values before you update the document bindings.
Afterwards, you reset VariesAcrossGroups back to its lost value.
Then, you iterate through your collected parameters, and verify which ones have lost their original value, and reset them back to their intended value.
One trick that speeds this up a bit is to check Element.GroupId <> -1 (i.e. not ElementId.InvalidElementId); that selects exactly those elements that are group members.
You only need to track elements which are group members, as it's precisely those that are affected by this Revit bug.
A further tip is that you should not only watch out for parameter values that have lost their original value. You must also watch out for parameter values that have accidentally GOTTEN a value, but which should be left un-set.
I just use FilteredElementCollector with WhereElementIsNotElementType().
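Put together, the whole pass might look roughly like this (a hedged C# sketch inside a Transaction, assuming the parameter stores string values; internalDef stands for the InternalDefinition whose binding gets re-inserted):

var backup = new Dictionary<ElementId, string>();
foreach (Element e in new FilteredElementCollector(doc).WhereElementIsNotElementType())
{
    // Only group members are affected by the bug, so skip everything else.
    if (e.GroupId == ElementId.InvalidElementId)
        continue;
    Parameter p = e.get_Parameter(internalDef);
    if (p != null)
        backup[e.Id] = p.HasValue ? p.AsString() : null; // null marks "was un-set"
}

// ... ReInsert() the binding and SetAllowVaryBetweenGroups(doc, true) here ...

foreach (KeyValuePair<ElementId, string> kv in backup)
{
    Parameter p = doc.GetElement(kv.Key).get_Parameter(internalDef);
    if (p == null)
        continue;
    string current = p.HasValue ? p.AsString() : null;
    if (current != kv.Value)             // lost, collapsed, or wrongly gained
        p.Set(kv.Value ?? string.Empty); // an empty string clears a stray value
}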
Performance-wise, it is of course horrible to do all this,
but given how Revit behaves, I see no other solution if you have to ship to your clients.
We are using an event store that stores a single aggregate: a user's order (imagine an Amazon order that can be updated at any moment by both the client and someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that form the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user that initially had one copy of book X and then removed it and added 2 again. Should we consider this as a single event "User added 1 book", or as the events "User removed 1 book" + "User added 2 books" (we seem to have followed this approach)?
In some cases we have one initial event that is then followed by other events. I, the developer, know for sure that all these events were triggered by a single command, but it seems incredibly brittle to rely on that kind of assumption when generating this "order history" view on the fly. But if I don't treat them as a single action, at least in the order history feature, it will seem like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's ID to each event, so I can then easily infer which events happened at the same time and which ones did not (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach for dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should mean it's possible to create event names that make the original intent clear. A good rule of thumb is that the events in the event stream should be understandable to the relevant members of the business; it should contain stuff like 'cartUpdated' etc.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances, i.e. it may be too detailed. In which case, maybe create a 'summariser' read model fed by the events.
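For illustration, such a summariser read model could be as simple as this sketch (C#; the event and class names are invented for the example):

using System.Collections.Generic;

public sealed record BookAdded(string Title, int Quantity);
public sealed record BookRemoved(string Title, int Quantity);

// Folds fine-grained events into coarser, human-readable history lines
// instead of exposing the raw stream.
public sealed class OrderHistorySummariser
{
    private readonly List<string> lines = new List<string>();
    public IReadOnlyList<string> Lines => lines;

    public void Apply(object @event)
    {
        switch (@event)
        {
            case BookAdded e:
                lines.Add($"Added {e.Quantity} x '{e.Title}'");
                break;
            case BookRemoved e:
                lines.Add($"Removed {e.Quantity} x '{e.Title}'");
                break;
            // ... collapse or skip events that are too detailed to show ...
        }
    }
}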
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one entry even for multiple commands, if they happened within a short window).
Events (past tense) do not always capture the intent of a human or system user. Commands (imperative mood) do. As not all command data can easily be retraced from the events it generated, keeping a structured log of commands looks like a good option here.
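For illustration, a minimal, framework-agnostic C# sketch of that metadata and of the grouping it enables (all type names are invented):

using System;
using System.Collections.Generic;
using System.Linq;

// Every event carries the ID of the command that produced it,
// plus an optional correlation ID for long-running processes.
public sealed record EventEnvelope(
    Guid EventId,
    Guid CommandId,
    Guid CorrelationId,
    DateTimeOffset RecordedAt,
    object Payload);

public static class OrderHistory
{
    // One group per command = one user-visible amendment in the history view.
    public static IEnumerable<IGrouping<Guid, EventEnvelope>> Amendments(
        IEnumerable<EventEnvelope> stream) =>
        stream.GroupBy(e => e.CommandId);
}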
I would like to implement CQRS and ES using the Axon framework.
I've got a pretty complex HTML form which represents a recruitment process with six steps.
ES would be helpful to generate historical statistics for selected dates and to track changes in the form.
Admin can always perform several operations:
assign person responsible for each step
provide notes for each step
accept or reject candidate on every step
turn on/off SMS or email notifications
assign tags
A form update (the difference only) is sent from the UI application to the backend.
Assuming I want to make changes only to the server-side application, the question is what should be a Command and what should be an Event. I am considering three options:
Form patch is a Command which generates a Form Update Event.
The drawback of this solution is that each event handler needs to check whether the changes in the form concern it, e.g. whether an email about a rejection should be sent.
Form patch is a Command which generates several Events, e.g. Interviewer Assigned, Notifications Turned Off, Rejected On Technical Interview.
The drawback of this solution is that some events could be generated while others will not be, because of constraint violations, e.g. Notifications Turned Off will succeed but Interviewer Assigned will fail due to assigning an unauthorized user. Maybe I should check all constraints before generating the commands?
Form patch is converted to several Commands, e.g. Assign Interviewer, Turn Off Notifications, and each command generates an event, e.g. Interviewer Assigned, Notifications Turned Off.
The drawback of this solution is that some commands can fail, e.g. Assign Interviewer can fail due to assigning an unauthorized user. This will end up with an inconsistent state, because some events would be stored in the repository and some will not. Maybe I should check all constraints before generating the commands?
The question I would call your attention to: are you creating an authority for the information you store, or are you just tracking information from the outside world?
Udi Dahan wrote Race Conditions Don't Exist, raising this interesting point:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If you have an unauthorized user in your system, is it really critical to the business that they be authorized before they are assigned responsibility for a particular step? Can the system really tell that the "fault" is that the responsibility was assigned to the wrong user, rather than that the user is wrongly not authorized?
Greg Young talks about exception reports in warehouse systems, noting that the responsibility of the model in that case is not to prevent data changes, but to report when a data change has produced an inconsistent state.
What's the cost to the business if you update the data anyway?
If the semantics of the message is that a Decision Has Been Made, or that Something In The Real World Has Changed, then your model shouldn't be trying to block that information from being recorded.
FormUpdated isn't a particularly satisfactory event, for the reason you mention: you have to do a bunch of extra work to cast it in domain-specific terms. Given a choice, you'd prefer to do that once. It's reasonable to think in terms of translating events from domain-agnostic forms to domain-specific forms as you go along.
HttpRequestReceived ->
FormSubmitted ->
InterviewerAssigned
where the intermediate representations are short lived.
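As a rough, framework-agnostic C# sketch of that translation step (all names are invented for the example):

using System;
using System.Collections.Generic;

// The short-lived, domain-agnostic FormSubmitted representation is
// translated into domain-specific messages before any aggregate sees it.
public sealed record FormSubmitted(Guid ProcessId, IReadOnlyDictionary<string, string> Fields);
public sealed record InterviewerAssigned(Guid ProcessId, string UserName);
public sealed record NotificationsTurnedOff(Guid ProcessId);

public static class FormTranslator
{
    public static IEnumerable<object> Translate(FormSubmitted form)
    {
        foreach (KeyValuePair<string, string> field in form.Fields)
        {
            switch (field.Key)
            {
                case "interviewer":
                    yield return new InterviewerAssigned(form.ProcessId, field.Value);
                    break;
                case "notifications" when field.Value == "off":
                    yield return new NotificationsTurnedOff(form.ProcessId);
                    break;
                // ... one case per field the domain actually cares about ...
            }
        }
    }
}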
I can see one big drawback of the first option. One of the biggest advantages of CQRS/ES with Axon is scalability. We can add new features without worrying about regression bugs. Adding a new feature is the result of defining new commands, events, and handlers for both of them; none of them should interfere with the ones already existing in our system.
FormUpdate as a command requires adding extra logic in one of the handlers. Adding a new attribute to the patch, and in consequence to the command, will cause changes in the current logic. Scalability is no longer an advantage in that case.
VoiceOfUnreason gives a very good explanation of what you should think about when starting with such a system, so definitely take a look at his answer.
The only thing I'd like to add, is that I'd suggest you take the third option.
With the examples you gave, the more generic commands/events don't tell that much about what's happening in your domain. The more granular events explain far better what exactly has happened, as the event message's name already points it out.
Pulling Axon Framework into the loop, I can also add a couple of pointers.
From a command message perspective, it's safe to just take a route and not overthink it too much. The framework quite easily allows you to adjust the command structure later on. In Axon Framework trainings it is typically suggested to let a command message take the form of the specific action you're performing. So 'assigning a person to a step' would typically be an AssignPersonToStepCommand, as that is the exact action you'd like the system to perform.
For events it's typically a bit nastier to decide later on whether you want fine-grained or generic events; this follows from doing Event Sourcing. Since the events are your source of truth, you'll be required to deal with all forms of events you've got in your system.
Due to this, I'd argue that the weight of your decision should lie with how fine-grained your events become. To loop back to your question: in the example you give, I'd say option 3 would fit best.
How do we usually deal with versioning of an aggregate root?
I was thinking along this line (I'm in a survey-design domain).
One way to have versioning is to have an explicit method to create a new version, based on the existing one. For example, Study (an aggregate root).
So initially we have an aggregate root, whose root-entity is Study with (business) key "ABC", version "1".
By invoking the method "newVersion()" on the Study, a copy of that Study and all the other entities that belong to the same aggregate root will be created.
So basically, versioning is done through creating a separate instance (of the aggregate root). The ID is composite (business key + version).
How do we know if it's a branch, or just one version up (1.1, or 2)? I guess this simple rule would work: if there's no further version associated, then it's "one version up" (2); if there's already another version, then it's a branch (1.1).
Another concern: noise.
But that means we cannot work on / modify an existing version. We'd have to create a newVersion every time we want to make modifications to our object. Every time??? Hmmm... Doesn't sound right.
Or... we can make a rule like this, based on a flag (active / not-active, or published / un-published): if the flag is "not-active", we can modify the AR directly, without creating a new version. If the flag is active, we have to either (a) set it to "not-active" first and then modify, or (b) create a newVersion and work on that version (initially set to "not-active").
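In code, the idea would look roughly like this (a simplified C# sketch; Study is reduced to a bare minimum):

using System;
using System.Collections.Generic;

public sealed record StudyId(string BusinessKey, string Version);
public sealed record Question(string Text);

public class Study
{
    private readonly List<Question> questions = new List<Question>();

    public StudyId Id { get; }                // composite: business key + version
    public bool Active { get; private set; }  // the published / un-published flag

    public Study(StudyId id) { Id = id; }

    public void AddQuestion(string text)
    {
        if (Active)
            throw new InvalidOperationException(
                "Active versions are immutable; create a new version first.");
        questions.Add(new Question(text));
    }

    // Versioning = copying the whole aggregate under a new composite ID.
    // The copy starts out un-published and is therefore freely modifiable.
    public Study NewVersion(string nextVersion)
    {
        var copy = new Study(new StudyId(Id.BusinessKey, nextVersion));
        copy.questions.AddRange(questions); // Question records are immutable
        return copy;
    }

    public void Activate() => Active = true;
}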
Any thoughts / experience you want to share on this matter?
I think you will find things a bit confusing in researching this question, because there are two very different concepts at play:
Versioning as a concurrency control mechanism to support optimistic concurrency
Versioning as an explicit domain concept
Versioning to support Optimistic Concurrency
Optimistic concurrency is when two simultaneous transactions are both allowed to start, but if they both try to modify the same data item, only the first write succeeds; the second is rejected as a conflict. See Concurrency Control for an overview of different locking strategies.
In summary, you leave versioning up to the persistence technology, because the purpose of the version is to detect simultaneous writes to the persistence layer.
When using this pattern, it's common not to even keep copies of old versions; however, it's certainly possible to do so as an audit trail / change log.
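A minimal C# illustration of this first sense of versioning (invented types, independent of any particular persistence technology):

using System;

public class ConcurrencyException : Exception
{
    public ConcurrencyException(string message) : base(message) { }
}

// Here the version is purely a concurrency token, not a domain concept,
// and old versions are not kept at all.
public class VersionedRecord
{
    public Guid Id { get; }
    public int Version { get; private set; }
    public string Data { get; private set; }

    public VersionedRecord(Guid id, string data) { Id = id; Data = data; }

    // A writer must present the version it originally read. A stale version
    // means another transaction committed first, so this write is rejected.
    public void Update(string newData, int expectedVersion)
    {
        if (expectedVersion != Version)
            throw new ConcurrencyException(
                $"Expected version {expectedVersion}, but the record is at {Version}.");
        Data = newData;
        Version++;
    }
}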
Versioning as an explicit domain concept
Based on your question, and the need to support potential branching strategies, it sounds like versioning is an explicit domain concept in your domain - i.e. the concept of a "Version" is something that your domain experts talk about, and working with versions is an important part of the ubiquitous language.
However, you raise a few different concepts which indicate that the domain needs further exploration:
Version branching
User-defined version naming/tagging (but still connected to a 'chain' of versions)
Explicit version changes (user requested) vs implicit version changes (automatic on every change)
If I understand your intent correctly, with explicit versioning the current 'active'/'live'/'tip' version is mutable and can be modified without tracking the change, until the user 'commits' it; at that point it becomes immutable, and a new mutable 'live' version is created.
Some other concepts that may come up if you explore this further:
Branch merging (once you have split two branches, what happens if you want to bring them back together?)
Rolling back - if you have an old version, do you support 'undoing' one or more changes?
Given the above, you may also find some insights in the way that version control systems work, both centralised (e.g. Subversion) and distributed (e.g. Git and Mercurial), as they present an active working model of version tracking with a mixture of mutable and immutable elements.
The open questions here suggest to me that you need to explore this in more detail with your domain experts. With DDD sometimes it's easy to get lost in what you can do, but I strongly encourage you to try and understand what you need to do.
How do your users/domain experts think about the world? What kind of operations do they want to be able to do? What is the purpose of these operations towards their initial goal? Your aim is to distill the answers to these questions into a model that effectively encapsulates the processes they work with.
Edit to Consider Modelling
Based on your comment - my first response would be to challenge the interpretation of the word 'version' when thinking about the modified questionnaire. In fact, I'd be tempted to challenge the modelling of the template/survey relationship. Consider a possible set of entities:
Template
Defines the set of questions in the questionnaire
Supports operations:
StartSurvey
Various operations to modify the questions and options in the template etc.
Survey
Rather than referencing a 'live' template, the survey would own its own questionnaire
When you call Template.StartSurvey it returns a Survey that is prefilled with the list of questions from the template
A survey also supports modifying the questions, but this doesn't change the template it was created from
Unlike a template, a survey also maintains a list of recorded answers, and offers operations to set the answers
It probably also includes a lifecycle state wherein answering questions is permitted in some states, but once 'submitted' you can't modify the answers (just guessing on this one).
In this world, the survey is 'stamped out' from the template, but then lives an independent life. You can modify the questionnaire in the survey all you like, and it won't affect the template.
The trade-off here is that if you do modify the template, none of the surveys that have already been created from it would get updated; but it sounds like that might be safer for you anyway?
You could also support operations to convert a survey back into a template so that if you like the look of a modified survey, you could 'templatize' it so it could be used for future surveys.
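A simplified C# sketch of that shape (names invented for illustration):

using System;
using System.Collections.Generic;

public sealed record Question(string Text);

public class Template
{
    private readonly List<Question> questions = new List<Question>();

    public void AddQuestion(string text) => questions.Add(new Question(text));

    // "Stamps out" an independent copy; later template edits don't touch it.
    public Survey StartSurvey() => new Survey(questions);
}

public class Survey
{
    private readonly List<Question> questions;
    private readonly Dictionary<Question, string> answers =
        new Dictionary<Question, string>();

    public bool Submitted { get; private set; }

    public Survey(IEnumerable<Question> fromTemplate)
    {
        questions = new List<Question>(fromTemplate); // copy, don't share
    }

    public void Answer(Question question, string value)
    {
        if (Submitted)
            throw new InvalidOperationException("Answers are frozen after submission.");
        answers[question] = value;
    }

    public void Submit() => Submitted = true;
}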
This is a rather generic question, but I'll try to be as precise as possible:
quite often I'm asked by customers for proper implementations of LotusScript's
continue = false
in Notes' Query* events. One quite common situation is a form's QueryOpen event where we actually can stop the process of opening the document in question based on some condition, e.g based on the response from a user dialog.
For some XPages events like querySaveDocument there are quite obvious solutions, whereas with others I can only recommend re-thinking the entire logic, like preventing code execution at a much earlier stage. But of course most people in question would prefer a generic approach like "re-write those codes using...". And, to be honest, I'd like to know myself ;)
I'm more or less familiar with the XPages / JSF lifecycle, but have to admit that I don't have a proper idea of how I could stop execution at any given phase.
As always, any hint is welcome.
EDIT (to clarify my question, but also in response to Tim's answer below):
It's not just QuerySave but also QueryModeChange and QueryRecalc that somehow need to be transformed together with an existing application's logic, but that don't have an equivalent in XPages. Are the two concepts (forms-based and XPages-based) just too different at this point?
As an example, think of a workflow application where we need to check certain conditions before we allow a potential author to open an existing doc in edit mode. In my Notes client application I add some code to two events: first QueryOpen, where I check the "mode" argument, and second QueryModeChange, where I check the current doc mode. In both cases I can prevent the doc from being edited by adding my continue = false, if necessary. Depending on the event, the doc will either not change its mode, or not open at all.
With an XPage I can use buttons for changing a doc's edit mode, and I can "hide" those buttons, or just add some checking code, or whatever.
But 17 years of Domino consulting have taught me at least one lesson: there'll always be users that'll find the hidden ways to reach their goals. In our case they might find out that a simple modification of the page's URL will finally allow them to edit the doc. To prevent this I could maybe use the beforeRenderResponse event, I assume. But then, beforeRenderResponse is also called in other situations, so we'd have to investigate the current situation first. Or I could make sure that users don't have author rights unless the situation allows it.
Again, not a huge problem, but when making the transition from a legacy Notes application this means re-thinking its entire logic, which makes the job more tedious, and especially more expensive.
True? Or am I missing some crucial parts of the concept?
Structure your events as action groups and, when applicable, return false. This will cause all remaining actions in the group to be skipped.
For example, you could split a "Save" button into two separate actions:
1.
// by default, execute additional actions:
var result = true;
/* execute some logic here */
if (somethingFailed) {
result = false;
}
return result;
Replace somethingFailed with an evaluation based on whatever logic you have in place of the block comment to determine whether it's appropriate to now save the document.
2.
return currentDocument.save();
Not only does the above pattern cause the call to save() to be skipped if the first action returns false, but because save(), in turn, returns a boolean, you could theoretically also add a third action as a kind of postSave event: if the save is successful, the third action will automatically run; if the save fails, the third action will be automatically skipped.
All queryModeChange logic should be moved to the readonly attribute of a panel (or the view root of an XPage or Custom Control) containing all otherwise editable content. You would basically just be flipping the boolean: traditionally, queryModeChange would treat false (for Continue) as an indication that the document should not be edited (although this also forces you to check whether the user is trying to change from read to edit; if you forgo this check, you're potentially also preventing a user from changing the mode back to read when it's already in edit), whereas readonly should of course return true if the content should not be editable.
Since the queryModeChange approach was nearly always an additional layer of "fig leaf" security, in XPages it's far better to handle this via actual security mechanisms; the readonly attribute is explicitly intended for enforcing security. Additionally, in lieu of using readonly, you could instead use the acl complex property that is also available for panels, XPages, and Custom Controls to provide different permissions to different subsets of users; anyone with a certain role, for instance, would automatically have edit, whereas the level for the default entry can be computed based on item values indicating the current "status" and/or "assignee". With either (or both) of these mechanisms in place, it doesn't matter what the user does to the URL... the relevant components cannot be editable if the container is read only. They could even try to hack in by running JavaScript in Chrome Developer Tools, attempting to emulate the POST requests that would be sent if they could edit the content... the data they send will still not get pushed back to the model, because the targeted components are read-only by virtue of the attributes of their container.
Attempting to apply all Notes client patterns directly to the XPages context is nearly always an exercise in frustration -- and, ultimately, futility. While I won't divulge specifics here, I (and some of the smartest people I know) learned this lesson at great cost. While users may say (and even believe) that they want exactly what they already have... if they did, they would be keeping what they already have, not paying you to turn it into something else. So any migration from a Notes client app to an XPage "equivalent" is your one opportunity to revisit the reason the code used to do what it did, and determine whether that even makes sense to retain within the XPage, based not only on the differential between Notes client and XPage paradigms, but also on any differential between what the users' business process was when the Notes client app was developed and what their process is now.
Omitting this evaluation guarantees that the resulting app will run code it doesn't need to, and will fail to make the most of the target platform.
queryRecalc is a perfect example of this: typically, recalculation was blocked to optimize performance when the user's desktop and network resources were responsible for performing complex and/or network-intensive recalculations. In XPages this all happens on the server, so a network request from the browser that returns a page where everything has changed is typically no more expensive for the end user than a page where nothing has changed (unless there's an extreme differential in the amount of markup that is actually sent). Unless the constituent components are bound to data that is expensive for the server to recalculate, logical blocking of recalculation offers little or no performance benefit for the user. Furthermore, if you're trying to block recalculation in an event, you're too late: XPages uses a "lifecycle" that consists of 6 phases, so by the time your event code runs, any recalculation you're trying to block has already occurred. So, if the reason for blocking recalculation was to optimize performance, implement a scope caching strategy that ensures you're only pulling fresh data when it makes sense to do so, and the end user experience will be sufficiently performant without trying to prevent the entire page from recalculating. If, on the other hand, queryRecalc was being used as another fig leaf (something has changed, but we don't want to show the user the updates yet), that logic should definitely be revisited to determine whether it's still applicable, still (if ever) a good idea, and which portions of the platform are now the best fit for meeting the business process objectives.
In summary, use the security mechanisms unique to XPages for locking down portions or all of a page, and use the memory scopes that we didn't have in the Notes client to ensure the application performs well. Porting an event that used to contain this logic to an XPage event that continues to contain this logic will likely fail to produce the desired result and squander some of the benefits of migrating to XPages.