Azure Cosmos DB: fire a trigger on time-to-live expiration

I wonder if it is possible to fire a trigger after a document is deleted from a collection. To be more specific, I'd like to be informed when a document expires.
I've seen that it isn't possible to catch deletes from the change feed (https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed), but then I downloaded the Cosmos DB emulator, and there I see an option to create a trigger that can be fired on deletes. What is the difference between the user-created triggers seen in the emulator and triggers fired on the change feed? Is there any chance I could get a trigger for my needs?

Currently there is no way to achieve what you want unless you use a server-side post-trigger written in JavaScript.
However, in Cosmos DB a trigger has to be explicitly invoked along with the operation that performs the delete, which makes it less of a trigger and more of a stored procedure.
You can read more about that here: https://learn.microsoft.com/en-us/azure/cosmos-db/stored-procedures-triggers-udfs#post-triggers
Registered triggers don't run automatically when their corresponding operations (create / delete / replace / update) happen. They have to be explicitly called when executing these operations.
To learn more, see how to run triggers article.
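Since the change feed only surfaces inserts and updates, one common workaround is a "soft delete": instead of waiting for TTL expiry, update the document with a tombstone flag and a short per-item TTL, so the change feed observes that final update before Cosmos DB physically removes the item. A minimal sketch, assuming your container has TTL enabled; the helper and the `deleted` field name are illustrative, while `ttl` is the Cosmos DB per-item TTL property:

```python
def soft_delete(doc: dict, ttl_seconds: int = 10) -> dict:
    """Mark a document as deleted and let Cosmos DB's TTL remove it.

    The update itself appears on the change feed, so consumers can
    observe the 'delete' before the document physically disappears.
    """
    marked = dict(doc)             # copy; leave the caller's dict untouched
    marked["deleted"] = True       # application-level tombstone flag
    marked["ttl"] = ttl_seconds    # per-item TTL (requires TTL enabled on the container)
    return marked

doc = {"id": "order-42", "status": "shipped"}
tombstone = soft_delete(doc)
```

You would then replace the item with the tombstone version (e.g. via the SDK's replace operation), and have change-feed consumers treat documents with `deleted == True` as deletions.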

Related

Can Logic Apps Monitor a large number of calendars effectively?

PROBLEM
We want to track changes in user calendars, but are concerned with how often we'd need to check 2000+ user calendars (Outlook).
Would the process of monitoring over 2000 user calendars present a problem for our network?
WORKFLOW
Trigger (Check for calendar change) -> ACTION (Http: update a DB)
The trigger below checks a calendar every 2 seconds. Is there a trigger that behaves like a "subscription" object, where it simply handles a change notification?
How often to check the calendar events depends on your requirements. In my opinion, checking every 2 seconds is quite frequent; if you do that, you should verify whether your logic app trigger is set to run in parallel. Click the ... button of the trigger, click "Settings", and check the concurrency configuration there.
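As a back-of-the-envelope check on the polling load the question describes (2000+ calendars, one check every 2 seconds each), assuming each calendar is polled independently:

```python
calendars = 2000
interval_seconds = 2

requests_per_second = calendars / interval_seconds
requests_per_day = requests_per_second * 60 * 60 * 24

print(requests_per_second)   # 1000.0 requests per second
print(requests_per_day)      # 86400000.0 requests per day
```

At roughly 1000 requests per second, Microsoft Graph throttling limits would very likely kick in, which is another argument for a less frequent interval or a notification-based design.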
As for whether there is a trigger that behaves like a "subscription": I'm afraid no trigger in Logic Apps implements this requirement. We can also check whether any backend API could implement it; see the Graph API documentation.
The example in the Graph documentation is for mailFolders, but it is the same for events. Note that it is necessary to specify a user (like me) or a group before the /events segment, so I don't think we can monitor events across all users with a single subscription. You can raise a post on the Azure feedback page to suggest that the developers add this feature.
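To illustrate the point about the endpoint shape: Graph requires a specific user (or group) segment before `/events`, so polling 2000 calendars means building 2000 distinct URLs. A sketch, with hypothetical user addresses:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def events_endpoint(user: str) -> str:
    # Graph requires a specific user (or group) before /events;
    # there is no single endpoint returning events across all users.
    return f"{GRAPH_BASE}/users/{user}/events"

urls = [events_endpoint(u) for u in ("alice@contoso.com", "bob@contoso.com")]
```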

How can I know that a projection is completed and need to publish an event for realtime subscription in CQRS?

I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of documents (with joins) from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model, where the command handler carries out the action, updates the data store, and so on, after which the read model is updated by a projection. At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
The operation succeeds, data has changed and the UI should be updated to reflect these changes.
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Until the projection completes, I can't fetch the inserted/updated data and push it back to the UI over a real-time socket. One solution: I could send the state of the aggregate along with its aggregateId, but when the view involves joins, how could I send the full update (data with joins) back to the UI?
You have a few options for updating the UI.
If possible, it's easier if your API synchronously returns success/failure. If you get a failure, you can immediately report to the user, and you don't need to update the UI. If you get a success, you have some options.
Include in the success response some entity version information. Include in your read model some version information, and poll the query API until the version is new enough.
Include in the success response some entity version information. Have your query API allow you to specify that you want the data as-of at least that version (in a header, query parameter, or whatever). Either you can have the query API immediately return failure if it isn't yet up to date, or you can have it block until it is up to date (with a timeout) then return the up-to-date data.
Use some kind of client notification system such as web sockets to notify the client whenever the read model it is 'subscribed' to changes. Either the notification can include the information the client needs to update, or it can make another query.
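The polling variant of the version-based options above can be sketched as follows; `query_read_model` and the `version` field are hypothetical stand-ins for your own query API and read-model schema:

```python
import time

def wait_for_version(query_read_model, entity_id, min_version,
                     timeout=5.0, poll_interval=0.1):
    """Poll the read model until its version catches up with the
    version returned in the command's success response."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        view = query_read_model(entity_id)
        if view is not None and view["version"] >= min_version:
            return view          # read model has caught up
        time.sleep(poll_interval)
    raise TimeoutError(
        f"read model for {entity_id} did not reach version {min_version}")
```

Usage: if the command API responds with `{"id": "order-1", "version": 7}`, the client calls `wait_for_version(query, "order-1", 7)` before re-rendering the list.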
On top of those options, you can also employ the technique of optimistic updates. In this case, after making the appropriate client side checks before issuing a command, you assume instantly that the command succeeded (if it's the kind of command that usually does succeed), update the UI immediately using client-side code. Then:
If the command comes back with failure, undo the client-side update (you could do this by re-fetching the read model, or using client-side code).
If the command succeeds, consider re-fetching the read model, if there's any likelihood that the client-side optimistic update isn't quite right. This should use one of the strategies listed above to ensure the command's effect is present in the read model.
This also works quite well with offline support - you queue commands up persistently in the client, and optimistically assume they succeed (perhaps with a UI indication that you are offline and data is stale) using client-side code to update the read model, until you can actually send the commands through to the server and re-fetch the server-generated read model.
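The optimistic-update-with-rollback flow described above can be sketched like this; the `send_command` callback and the flat state dict are hypothetical simplifications of real client state:

```python
def apply_optimistically(state: dict, change: dict, send_command) -> dict:
    """Apply a change locally first, then undo it if the command fails."""
    snapshot = dict(state)          # keep the pre-change state for rollback
    state.update(change)            # optimistic client-side update
    if not send_command(change):    # synchronous success/failure from the API
        state.clear()
        state.update(snapshot)      # command failed: roll back the UI state
    return state
```

In a real client, the rollback could instead re-fetch the read model, as the answer notes.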

Cloudant/CouchDB triggers an event by deleting a document

I am trying "update handlers" to catch create/update/delete events in IBM Cloudant. It works when a document is created or updated, but not when it is deleted. Is there another way I can catch the event that a document was deleted, and then create a document in another database to record it? Thank you.
If you want to monitor a CouchDB/Cloudant database for changes, take a look at the /_changes feed: http://docs.couchdb.org/en/2.0.0/api/database/changes.html. You could implement an app that continuously monitors the feed and "logs" the desired information whenever a document is inserted, updated, or deleted. For some programming languages there are libraries (such as https://www.npmjs.com/package/follow for Node.js) that make it easy to consume and process the feed.
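Deletions do appear on the `_changes` feed as rows carrying `"deleted": true` (the document body is gone, but its id and a tombstone remain). A sketch of picking deletions out of a `_changes` response, using a sample payload in the documented shape:

```python
def deleted_ids(changes_response: dict) -> list:
    """Extract ids of deleted documents from a CouchDB/Cloudant
    _changes response body."""
    return [row["id"]
            for row in changes_response.get("results", [])
            if row.get("deleted")]

sample = {
    "results": [
        {"seq": "1-g1A", "id": "doc-a", "changes": [{"rev": "1-abc"}]},
        {"seq": "2-g1B", "id": "doc-b", "changes": [{"rev": "2-def"}],
         "deleted": True},
    ],
    "last_seq": "2-g1B",
}
removed = deleted_ids(sample)   # ["doc-b"]
```

For each id returned, your monitoring app could write an audit document into the other database, as the question asks.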

Realtime update for updated activity

I'm playing around with getstream.io; it's an amazing tool, but I have a question regarding realtime updates.
I am connected to realtime updates of a feed via Javascript (simply following examples from getstream.io).
feed.subscribe(callback)
Which works beautifully, adding and removing activities to the feed triggers the callback function.
However, if I update an activity of a feed (via Python):
e = feed.add_activity(editable)
e['content'] = 'Ooops'
client.update_activity(e)
I can see that the update was successful if I call feed.get(), but I don't get a realtime notification in JavaScript. Shouldn't I? Am I doing something wrong?
The activity update API does not trigger a real-time update notification.

Does SP in db2 waits for triggers to execute

We have a scenario where we want users to update certain tables in DB2, which we are doing via a stored procedure (SP) with transaction management handled there. However, we now need to introduce a new table for logging user actions, and we don't want to keep the user waiting for that. Can we write a trigger in this scenario?
If I call the SP from another language, like Java, a trigger will be fired on each row the SP updates.
In such a scenario, will the SP wait for the trigger to complete its execution, or will it return as soon as the row update completes, with the trigger running in a separate thread?
I tried to implement this, but I'm not sure how to confirm the behavior.
No, DB2 does not have asynchronous triggers. Triggers are compiled as part of the SQL statement whose execution necessitates them and run synchronously within that statement, so the SP waits for them. You can see this by explaining the query.