Expand occurrences into single events - outlook-restapi

Are there any options to expand recurring events into single events and only return single one-off events and instances of recurring events, but not the underlying recurring events themselves?
Currently the Outlook Calendar REST API returns occurrences with their SeriesMaster, but we need to merge and expand them programmatically.

This is exactly what GET /me/calendarview does. That expands recurring meetings and returns any instances that fall within the start/end parameters of the view. If you just do GET /me/events then you only get the series master.
If you want to expand from a series master you can always do GET /me/events/{id-of-series-master}/instances?startDateTime=XXXX&endDateTime=XXXX.
Note that in both cases you have to specify a time window, so there is no way to say "just give me ALL the instances". This is because recurring events can have no end date, so there is no "all" :).
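For illustration, here is a minimal sketch of calling the calendar view from Python with the requests library (the access token and dates are placeholders; the same pattern applies to the /instances call):

    import requests

    ACCESS_TOKEN = "..."  # a valid OAuth bearer token

    # Ask the calendar view for everything in a (required) time window;
    # occurrences of recurring events come back as individual items.
    resp = requests.get(
        "https://outlook.office.com/api/v2.0/me/calendarview",
        params={
            "startDateTime": "2017-01-01T00:00:00Z",
            "endDateTime": "2017-01-31T23:59:59Z",
        },
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    resp.raise_for_status()
    for event in resp.json()["value"]:
        print(event["Subject"], event["Start"]["DateTime"])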

Inferring the user intention from the event stream in an event store. Is this even a correct thing to do?

We are using an event store that stores a single aggregate - a user's order (imagine an Amazon order that can be updated at any moment, by either the customer or someone at the e-commerce company, before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that make up the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user who initially had one copy of book X, then removed it and added 2 again. Should we record this as a single event "User added 1 book" (the net change) or as the events "User removed 1 book" + "User added 2 books" (the approach we seem to have followed)?
In some cases we have one initial event that is then followed by other events. As the developer, I know for sure that all these events were triggered by a single command, but it seems incredibly brittle to rely on that kind of assumption when generating this "order history" view on the fly. Yet if I don't treat them as a single action, at least in the order history feature, it will look like there were many order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's ID to each event so I can easily infer which events happened together and which didn't (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach to dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should make it possible to create event names that make the original intent clear. As a rule of thumb, the events in the event stream should be understandable to the relevant members of the business - it should contain stuff like 'cartUpdated' etc.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances - i.e. it may be too detailed. In that case, maybe create a 'summariser' read model fed by the events.
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one entry even for multiple commands, if they happened within a short window).
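As a rough sketch of that idea (the envelope shape and names here are made up, not from any particular framework), each event carries the ID of the command that produced it, and the history projection groups on that ID:

    import uuid
    from collections import defaultdict

    def make_event(event_type, data, command_id, correlation_id=None):
        # Each event records which command produced it, plus an optional
        # correlation ID for long-running processes.
        return {
            "event_id": str(uuid.uuid4()),
            "type": event_type,
            "data": data,
            "metadata": {"command_id": command_id, "correlation_id": correlation_id},
        }

    def history_entries(events):
        # One order-history entry per command, however many events
        # that command produced.
        by_command = defaultdict(list)
        for event in events:
            by_command[event["metadata"]["command_id"]].append(event)
        return list(by_command.values())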
Events (past tense) do not always capture human (or system) user intent. Commands (imperative mood) do. Since all the command data cannot always be easily retraced from the events it generated, keeping a structured log of commands looks like a good option here.

Delete series of recurring events on Outlook Calendar API

I can create recurring events through Outlook Calendar's API, but I haven't yet found how to delete such events all at once. The only "solution" I've come up with so far is to fetch all instances from an event within a given time range (using this) and make API calls to delete every one of them, one by one.
However, this is not only very time-consuming, it also makes no sense for a recurring event that was created with the RecurrenceRange type NoEnd (which means it repeats forever) - what time range would I pick?
I'm sorry if it's a silly question, but I've read all the questions under the outlook-restapi tag here that had any relation to calendars and/or recurrence, and a few other questions from that tag (along with the API's docs/reference), and really didn't find much about how to deal with recurring events once they're created.
Thanks in advance for any help!
You can delete the master event, which will internally delete all instances. https://msdn.microsoft.com/office/office365/APi/calendar-rest-operations#DeleteAnEvent
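For example, with Python's requests library (the token and event ID are placeholders):

    import requests

    ACCESS_TOKEN = "..."  # a valid OAuth bearer token
    master_id = "..."     # the ID of the series master event

    # Deleting the series master removes the entire series,
    # including every instance.
    resp = requests.delete(
        "https://outlook.office.com/api/v2.0/me/events/" + master_id,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    resp.raise_for_status()  # expect 204 No Content on success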

Context level DFD

So, I'm not really sure if this is the right place for this, but I have a current context-level data flow diagram for the specification extract below. I have never done one before, so I was wondering whether it is correct or needs fixing. Any help appreciated.
This is a link to a screenshot of my current one: http://i.imgur.com/S4xvutc.png
SPECIFICATION
Currently the office staff operate the following processes:
Add/Amend/Delete Membership
This is run on-demand when a new membership application is received or when a member indicates that he/she wishes to make amendments to their details. It is also run in those rare instances when a membership is terminated at the discretion of the manager. A new member has an ID number allocated (simply incremented from the previous membership accepted). A membership balance is also maintained for accounting purposes.
Another process operates in a similar fashion on data associated with transfer partners.
Monthly Maintenance
This is run on the last day of each month to issue requests and reminders for subscriptions due, and to remove memberships where fees remain outstanding. Standard letters are also generated. Membership balances are updated as appropriate.
Payment Updates
This is run prior to the Monthly Maintenance, with membership balances being updated accordingly.
Payments to partners are also disbursed at this time.
New Member Search
This is run whenever a new member has been added to the database. The partners are partitioned in terms of vehicle category and location. Normally, there is a limited choice of partner in a particular location (if, indeed, there is any choice) but for some popular destinations, several partners are involved in providing the airport transfer. Thus, a search is then made through the appropriate section for potential matches in the following manner:
A search is then made on the grounds of sex (many female passengers in particular prefer a driver of their own sex, especially if travelling alone or in couples).
Matches are then selected according to factors such as cost (if available), availability of extra requested facilities (such as child seats, air-conditioning etc.)
Existing Member - Additional Searches
These are run on-demand in the same fashion as for a new member's search. Members may of course request any number of such searches, but a separate payment is due for each.
All financial transactions (payments) are also posted to the separate Accounts file, which also stores other financial details relating to running costs for the consideration of the firm's accountants at the end of the financial year.
Thanks for any help regarding this level 0, context-only DFD.
It needs some fixing.
The most obvious flaw is that you use verbs in your dataflows. In some cases this can be fixed easily by just discarding the verb: Return balance and status is not a dataflow, but balance and status is.
In other cases it is not so easy. Check balance, is it outstanding? sounds more like a process than a dataflow. It looks like Accounting is responsible for doing that job. So will Accounting produce a list of outstanding balances? Or will it return a single balance and status, and if so, based on what input? Will your Airport Transport System send a list of balances to check to Accounting?
Take for example Monthly Maintenance. What matters is that you want
requests and reminders for subscriptions due
Standard letters
These need to be visible in your DFD.
The fact that you want to remove memberships where fees remain outstanding probably has no place in the top-level diagram, because that looks like an internal affair.
In general, focus on what the system produces. Maintaining internal state is secondary; it is a necessity for producing the desired output.

CQRS + EventSourcing. Change Aggregate Root history

I have the following problem.
Given: a CQRS + Event Sourcing application.
How is it possible to change the state of the aggregate root in the past?
For example, in an accounting application, an accountant wants to apply a transaction with a past date. The event that will be stored in the event store will have an older date than recent events, but its sequence number will be higher.
The repository restores the state of the aggregate root by ordering events by sequence number, so if we take a snapshot for this past date, we will get an aggregate root without this event.
We could surely change the repository logic to order events by date, but we use an external framework for CQRS, and this is not desirable.
Are there any elegant solutions for this case?
What you're looking for is a bi-temporal implementation.
e.g. on Dec 3rd we thought X == 12 (as-at), but on Dec 5th we corrected the mistake and now know X == 14 on Dec 3rd (as-of).
There are two ways to implement this:
1) The event store holds as-at data and a projection holds as-of data (a possible variation is both an as-of and as-at projection as well)
2) The aggregate has an overloaded method indicating the desire for as-of vs as-at values from the event store. This will most likely involve using a custom secondary snapshot stream for as-of data values.
Your solution could very likely use both implementations as one is command focused and the other is query focused.
The as-of snapshots for the aggregate root in the second option would need to be rebuilt as corrective events are received.
Martin Fowler talks about this in this article
Note: The event store is still append only.
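A toy sketch of the as-at/as-of distinction, using made-up field names:

    from datetime import date

    # 'recorded_on' is when we learned the fact (as-at);
    # 'effective_on' is when it is true in the domain (as-of).
    events = [
        {"seq": 1, "recorded_on": date(2016, 12, 3), "effective_on": date(2016, 12, 3), "x": 12},
        {"seq": 2, "recorded_on": date(2016, 12, 5), "effective_on": date(2016, 12, 3), "x": 14},
    ]

    def as_at(on):
        # What did we believe on that day? Filter by recording date.
        return [e for e in events if e["recorded_on"] <= on]

    def as_of(on):
        # What do we now know was true on that day? Filter by effective date.
        return [e for e in events if e["effective_on"] <= on]

    print(as_at(date(2016, 12, 3))[-1]["x"])  # 12 - what we thought on Dec 3rd
    print(as_of(date(2016, 12, 3))[-1]["x"])  # 14 - what we now know for Dec 3rd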
In accounting you'll probably end up in jail if you change past bookings. Don't change the past. Use compensating commands instead.
Sorry, but you brought up the accounting example, which is probably a domain that's very strict about fiddling with past data without making the changes explicit.
If the above doesn't apply to your domain you can easily apply new events on top of older ones that change the state (and possibly the history) of your domain objects.
Take a booking to an account for example. The event might have occurred today, but it can set the actual booking date to some time in the past.
You have stated that your business logic allows you to add a back-dated transaction; now I don't know why you'd want that, but there's nothing constraining your aggregate from accepting it. Of course the event will get a later event sequence number/version, but that's expected.
You don't need to fiddle with the infrastructure, repository or anything else to do this.
Accounting doesn't let you change history. It only lets you add entries. It's up to your business logic to interpret the dates on these events as you will. In this case, the sequence of events is not just a persistence trick as with event sourcing, but the actual content of the domain!
One solution to this is to think of the event as an explicit compensating action. For example, when your bank reverses a charge, they don't delete an existing transaction; they add a compensating transaction. This transaction may reference the transaction it wishes to compensate, with appropriate dating. In this way, the events are a proper representation of reality.
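Sketched as data (the field names are illustrative):

    from datetime import date, datetime

    # The store stays append-only: the reversal references the entry it
    # compensates, and the back-dated booking date is plain event data.
    charge = {
        "type": "AccountCharged",
        "transaction_id": "tx-1001",
        "amount": 50,
        "booking_date": date(2016, 11, 2),     # domain date, may lie in the past
        "recorded_at": datetime(2016, 12, 1, 9, 0),
    }
    reversal = {
        "type": "ChargeReversed",
        "transaction_id": "tx-1002",
        "compensates": "tx-1001",              # reference to the original entry
        "amount": -50,
        "booking_date": date(2016, 11, 2),
        "recorded_at": datetime(2016, 12, 6, 14, 30),
    }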

Should I use a workflow or event receiver?

I want to build a custom content type that will be the basis of a list item with multiple states. The item's state determines which list it lives in, and it will move between states, and therefore between lists, based on user actions.
I have a few choices to implement this:
Create workflows on each list that handle the specific functions related to that list. Move an item to another list when necessary (copy the item to the new list, delete the original), and let that list's workflow kick off.
Create a workflow on the custom content type we will be using, and let that move the item between the various lists. I'm not sure whether a workflow on a content type can move an item from list to list, let alone across site collections.
Use event receivers on the custom content type to manage state. A user acts on an item, changing its state, so the event receiver creates a copy of the item in the other list and then deletes it from the current list. I know this works across site collections.
Which way is best, and why? Anything that absolutely will not work? Any method I've overlooked?
In my opinion, use event receivers, as these follow the item rather than the list. You still need to enable the content type for the receiving list, but this approach is a lot easier than updating and deleting workflows in lists based on the presence or absence of certain content types.
However, why not combine the approaches? Have the content type event receiver deal with the content type specific activities and let the lists handle any list specific activities. Event receivers are cheap and flexible.
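To make the event receiver option concrete, here is a platform-agnostic sketch of the copy-then-delete transition (a real SharePoint receiver would be written in C# against the object model; the state map, list names, and helper here are made up):

    # Which list an item belongs in, keyed by its state field.
    STATE_TO_LIST = {
        "draft": "Drafts",
        "in_review": "Review Queue",
        "approved": "Published Items",
    }

    def on_item_updated(item, lists):
        # Fired whenever an item changes; moves it to the list for its state.
        target = STATE_TO_LIST[item["state"]]
        if item["list"] == target:
            return  # state changed, but the item is already in the right list
        lists[target].append({**item, "list": target})  # copy to the new list
        lists[item["list"]].remove(item)                # then delete the original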
Generally speaking:
In SharePoint, workflows and event receivers are related (if you look at the events on a list with an attached workflow, you will find an event receiver that starts the workflow).
The advantage of workflows is that the user can check the log (given that you use the log activity).
The advantage of event receivers is the greater number of events they can handle; they are more flexible than workflows.
From what you describe, I would probably choose workflows, so the users can check whether their item was processed correctly.
I use the workflow-associated-with-each-list approach because I need the workflow history as an audit trail of which user did what. I rather like the idea of a workflow on the content type, however, and in retrospect that would have been the cleaner solution to what I've done.
This is a perfect flowchart for deciding between workflows and event receivers:
http://msdn.microsoft.com/en-us/library/ff648492.aspx
