Delete a series of recurring events with the Outlook Calendar API - outlook-restapi

I can create recurring events through the Outlook Calendar API, but I haven't found a way to delete such events all at once. The only "solution" I've come up with so far is to fetch all instances of an event within a given time range (using this) and make an API call to delete each one of them, one by one.
However, this is not only very time-consuming, it also makes no sense for a recurring event created with the RecurrenceRange type NoEnd (meaning it repeats forever) - what time range would I pick?
I'm sorry if it's a silly question, but I've read all the questions under the outlook-restapi tag here that relate to calendars and/or recurrence, plus a few other questions from that tag (along with the API docs/reference), and I really didn't find much about how to deal with recurring events once they're created.
Thanks in advance for any help!

You can delete the master event (the series master), which will internally delete all of its instances: https://msdn.microsoft.com/office/office365/APi/calendar-rest-operations#DeleteAnEvent
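A minimal sketch of what that looks like, assuming the Outlook REST v2.0 endpoint and an already-acquired OAuth access token (the event ID and token here are placeholders):

```typescript
// Sketch: delete a recurring series by deleting its series master event.
// One DELETE on the master removes every occurrence, including NoEnd series.
const OUTLOOK_API = "https://outlook.office.com/api/v2.0";

function deleteEventUrl(eventId: string): string {
  return `${OUTLOOK_API}/me/events/${encodeURIComponent(eventId)}`;
}

async function deleteSeriesMaster(eventId: string, accessToken: string): Promise<boolean> {
  const res = await fetch(deleteEventUrl(eventId), {
    method: "DELETE",
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  // A successful delete returns 204 No Content.
  return res.status === 204;
}
```

The key point is that you DELETE the series master's own ID (the one returned when you created the event), not an occurrence ID, so no time range is ever needed.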


Can I set an order for the Event Handlers in Office.JS?

I am reacting to both onSelectionChanged and BindingSelectionChanged events in Excel, and I need onSelectionChanged to be resolved before BindingSelectionChanged. Is this possible, or what would be a workaround? I am using an offset now, so onSelectionChanged only does its job if it fires >200 ms after the timestamp set by BindingSelectionChanged. This works in practice, but it is not perfect (there is no guarantee that the time difference is always OK).
Unfortunately, the order of Excel JS events currently isn't guaranteed and can't be enforced. You may go to the Microsoft 365 Developer Platform Ideas Forum and see whether this feature has already been requested, or request it as a new feature.
Regarding your scenario, could you explain the purpose of handling the two selection-changed events together? One workaround might be to use just one of them and check whether additional criteria are satisfied.
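Another workaround, sketched below under the assumption that both handlers can route their work through your own scheduler: instead of relying on Excel's unspecified delivery order, queue the work from both events and always run the selection-changed work first within the same tick. The class and method names are illustrative, not Office.js APIs.

```typescript
// Illustrative scheduler: both Excel handlers push tasks here instead of
// doing their work directly; selection tasks always run before binding tasks.
type Task = () => void;

class OrderedDispatcher {
  private selectionTasks: Task[] = [];
  private bindingTasks: Task[] = [];
  private scheduled = false;

  onSelectionChanged(task: Task) { this.selectionTasks.push(task); this.schedule(); }
  onBindingSelectionChanged(task: Task) { this.bindingTasks.push(task); this.schedule(); }

  private schedule() {
    if (this.scheduled) return;
    this.scheduled = true;
    // Defer one microtask so both events fired by the same user action
    // have a chance to enqueue before anything runs.
    queueMicrotask(() => {
      this.scheduled = false;
      const run = [...this.selectionTasks, ...this.bindingTasks];
      this.selectionTasks = [];
      this.bindingTasks = [];
      run.forEach(t => t());
    });
  }
}
```

This removes the fragile 200 ms heuristic: ordering comes from the queue, not from timing, regardless of which event Excel happens to deliver first.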

Zapier trigger based on filter criteria in the past, not new entries

Many zaps in Gmail or Sheets are designed to trigger when a new entry or email matches filter criteria. But can a zap be triggered to act on entries or emails that already exist, simply because they fit those filter criteria? Is there some way to trigger on a whole batch of existing entries?
Thanks!
David here, from the Zapier Platform team.
This is a great question for our support team! They'll be able to explain it more specifically (and tailor it to your use case), but the short answer is "mostly yes". If you create a filter, turn the zap on, and then dump a bunch of rows/emails/items into that filter, Zapier will see them all as new and trigger appropriately. The exact possibilities depend on the app and what you want to filter by.
Again, I'd reach out to support (contact#zapier.com) since they're great at addressing stuff like this!

Track changes to InventoryCD for Stock Items

I'm creating a contract API solution to keep items in sync between multiple tenants. Is there any way to track the changes to InventoryCD? In this case one franchiser would like to update items in their 6 franchisees. It's easy for me to find the records that changed, but harder to know when the CD has changed (and, importantly, what it changed FROM). Certainly I could write a customization to do it, but I thought maybe Acumatica has some built-in option.
Ideally I'd like a log of the changes with the old and new CD. It's hosted, so I don't think I can do it with DB triggers (which is how pre-Acumatica me would have handled it).
Thanks in advance.
It depends on the Acumatica version, but have you tried looking at Business Events? I believe they give you access to both the old and new values.
Also look at Acumatica's audit history capabilities, but be careful to turn on only the fields you need to track, as the DB can grow very large if you turn on all fields for the Stock Item screen (or any screen).

getstream.io: how do you handle activity permissions?

If a user creates a new activity and wants all their followers to see it except 1, how can this be implemented? Do we simply push the activity, and then immediately delete it from the specific follower's timeline feed? This seems like a hack.
https://github.com/GetStream/stream-js/issues/210
This use case hasn't come up before. Why would someone want everyone except one person to see a post? Do they want that person to unfollow them? Are there "rings" or levels of people to choose from when posting? If that's the case, you can create separate feeds for those levels, with follows pointing to them (you will likely also need to use the TO field, since fanout only goes one level deep).
There's no built-in mechanism to specify which feeds to fan out to and which not to. The fanout is intended to happen as fast as possible (milliseconds), so doing those kinds of checks wouldn't be optimal. Your solution of quickly deleting from that feed will work.
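The "push then remove" workaround could be sketched roughly like this, assuming a stream-js-style client with `feed()`, `addActivity()`, and `removeActivity()`; the feed group names ("user", "timeline") and IDs are illustrative and depend on your app's configuration:

```typescript
// Sketch: publish an activity, then immediately remove the copy that fanout
// placed on one specific follower's timeline feed.
async function postExcludingFollower(
  client: any,                     // assumed stream-js-like client
  authorId: string,
  excludedFollowerId: string,
  activity: Record<string, unknown>,
): Promise<string> {
  // Publish to the author's user feed; fanout copies it to followers' timelines.
  const added = await client.feed("user", authorId).addActivity(activity);
  // Remove the copy from the one excluded follower's timeline feed.
  await client.feed("timeline", excludedFollowerId).removeActivity(added.id);
  return added.id;
}
```

Note the inherent race: between fanout and the delete, the excluded follower could briefly see the post, which is why this is a hack rather than a permission model.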

Inferring the user intention from the event stream in an event store. Is this even a correct thing to do?

We are using an event store that stores a single aggregate - a user's order (imagine an Amazon order that can be updated at any moment by either the client or someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that make up the aggregate root don't really show the intent, or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user who initially had one copy of book X, then removed it and added 2 again. Should we record this as a single event "User added 1 book", or as the events "User removed 1 book" + "User added 2 books" (we seem to have followed this approach)?
In some cases we have one initial event followed by other events. I, the developer, know for sure that all these events were triggered by a single command, but it seems incredibly brittle to me to make that kind of assumption when generating this "order history" on the fly for the user. On the other hand, if I don't treat them as a single action, at least in the order history feature, it will look like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's ID to each event, so I can easily infer which events happened together and which didn't (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach to dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should make it possible to create event names that make the original intent clear. As a rule of thumb, the events in the event stream should be understandable to the relevant members of the business; the stream should contain things like 'cartUpdated' etc.
Given the above, I would have expected that showing the event stream should be fine, but I totally get why it may not be ideal in some circumstances - i.e. it may be too detailed. In that case, maybe create a 'summariser' read model fed by the events.
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This makes it easier to build the order history projection. Alternatively, you could use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one history entry even for multiple commands, if they happened within a short window).
Events (past tense) do not always capture human (or system) user intent; commands (imperative mood) do. Since not all command data can easily be retraced from the events it generated, keeping a structured log of commands looks like a good option here.
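The command-ID approach from the answers above could be sketched like this; the event shapes and names are illustrative, not tied to any particular event store:

```typescript
// Sketch: stamp each event with the ID of the command that produced it,
// then group consecutive events by that ID when projecting "order history".
interface StoredEvent {
  type: string;
  commandId: string; // ID of the originating command
  data: Record<string, unknown>;
}

interface HistoryEntry {
  commandId: string;
  events: StoredEvent[];
}

// Collapse a stream into one history entry per originating command,
// so one big amendment shows up as one entry, not several.
function orderHistory(events: StoredEvent[]): HistoryEntry[] {
  const entries: HistoryEntry[] = [];
  for (const e of events) {
    const last = entries[entries.length - 1];
    if (last && last.commandId === e.commandId) last.events.push(e);
    else entries.push({ commandId: e.commandId, events: [e] });
  }
  return entries;
}
```

Because the grouping key is explicit metadata rather than a timestamp window, the projection can be rebuilt from the stream at any time without the brittle "events close in time belong together" assumption.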
