With Getstream, what's the lifespan of a feed cursor (activity ID)? I'm writing an iOS app where the activities are persisted to Core Data for offline viewing. I was thinking of using the persisted activity IDs in conjunction with the Stream id_lt pagination param to sync my Core Data DB with updates from my server (which forwards requests to Stream). For how long can an activity ID be used as a pagination cursor?
It also appears that an activity ID can be used for pagination even after the activity with that ID has been removed. Is this behavior guaranteed, and if so, for how long? I'm not sure whether it only works for me because the activity ID remains available until a nightly cleanup.
Activity IDs are never invalidated, so you can use them for as long as the activity exists. For pagination, an activity ID can be used as a cursor even after the activity has been deleted.
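A minimal sketch of how those persisted IDs could drive a sync loop, assuming a hypothetical fetchActivities helper on your own server that forwards Stream's id_lt and limit parameters; only the id_lt semantics come from the question above, everything else is an illustration.

import java.util.List;

// Hypothetical DTO and client wrapper; only the id_lt semantics are taken from the question.
record Activity(String id, String payload) {}

interface FeedApi {
    // Forwards Stream's id_lt / limit pagination params via your own server.
    List<Activity> fetchActivities(String feedId, String idLt, int limit);
}

class FeedSync {
    // Pages backwards from the oldest locally persisted activity ID to pull
    // older activities that are not yet in the local store.
    static void syncOlderThan(FeedApi api, String feedId, String oldestPersistedId) {
        String cursor = oldestPersistedId; // persisted activity IDs stay valid as cursors
        List<Activity> page;
        do {
            page = api.fetchActivities(feedId, cursor, 25);
            for (Activity a : page) {
                // persistToLocalStore(a);  // e.g. insert/update in your local DB / Core Data
                cursor = a.id();            // last ID of the page becomes the next cursor
            }
        } while (!page.isEmpty());
    }
}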
I have a Customer container in the Cosmos DB SQL API (DocumentDB) with items representing a single customer. I also have a Gremlin API (GraphDB) graph holding the customers' shopping cart data. Both sets of data are temporary/transient: the customer can choose to clear the shopping cart, which deletes both the temporary customer and the shopping cart data.
Currently I make two separate calls, one to the SQL API (DocumentDB) and one to the Gremlin API (GraphDB), which works, but I want to do both as a single transaction (ACID principle). To delete a customer, I call the Gremlin API and delete the shopping cart data, then call the SQL API to delete the customer. But if deleting the customer with the SQL API (the second step) fails, I want to roll back the first call, restoring the shopping cart data that was deleted. In the T-SQL world, this is done with commit and rollback.
How can I achieve distributed transaction coordination around the delete operations of the customer and shoppingcart data?
Since you don't have transactions in Cosmos DB across different collections (only within the partition of one container), this won't be directly possible.
The next best thing could be to use the Change Feed. It gets triggered whenever an item is changed or inserted. But: it does not get triggered on deletes. So you need another little workaround of "soft deletes". Basically, you add a flag to the document ("to-be-deleted" etc.) and set its TTL to something very soon. This does then trigger the Change Feed, and from there you can delete the item in the other collection.
Is all that better than what you currently have? Honestly, not really if you ask me.
Update: to add to the point regarding commit/rollback: this also does not exist in Cosmos DB. One possible workaround that comes to mind (sketched in code after the steps below):
Update the elements in the shopping cart collection: set a to-be-deleted flag to true and set the TTL for those elements to something like now() + 5 minutes.
Delete the element in the customer collection. If this works, all good.
If the deletion failed, update the shopping cart again: remove the to-be-deleted flag and remove the TTL so Cosmos DB won't automatically delete the elements.
Of course, you also need to update any queries you run against your shopping cart to exclude elements with the deletion flag in place.
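Here is a rough Java sketch of that flag-then-compensate flow using the azure-cosmos SDK. The container names, the customerId partition key, and the CartItem shape are assumptions, the per-item ttl only has an effect if TTL is enabled on the container, and for simplicity the sketch treats the shopping cart as a SQL API container; with the Gremlin API you would set the same flag/TTL properties on the cart vertices instead.

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.PartitionKey;

// Assumed item shape; "ttl" is the per-item TTL in seconds, which Cosmos DB
// only honors when TTL is enabled on the container.
class CartItem {
    public String id;
    public String customerId;   // assumed partition key
    public boolean toBeDeleted;
    public Integer ttl;
}

public class SoftDeleteFlow {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .key("<key>")                                           // placeholder
                .buildClient();
        CosmosContainer carts = client.getDatabase("shop").getContainer("shoppingcart");
        CosmosContainer customers = client.getDatabase("shop").getContainer("customer");

        String customerId = "customer-1"; // example ids
        String cartItemId = "cart-1";

        // 1. Flag the cart item and give it a short TTL (roughly now() + 5 minutes).
        CartItem item = carts.readItem(cartItemId, new PartitionKey(customerId), CartItem.class).getItem();
        item.toBeDeleted = true;
        item.ttl = 300;
        carts.upsertItem(item);

        try {
            // 2. Delete the customer document (assumes that container is partitioned by id).
            customers.deleteItem(customerId, new PartitionKey(customerId), new CosmosItemRequestOptions());
        } catch (Exception e) {
            // 3. Compensate: clear the flag and the TTL so Cosmos DB keeps the cart item.
            item.toBeDeleted = false;
            item.ttl = -1; // -1 = never expire for this item
            carts.upsertItem(item);
        }
    }
}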
I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of documents (with joins) from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc., and the change is eventually projected to the read model). At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
The operation succeeds, data has changed and the UI should be updated to reflect these changes.
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Until the projection has completed, I can't fetch the inserted/updated data and push it back to the UI over real-time sockets. One solution is to send the state of the aggregate along with its aggregateId, but when the view involves joins, how could I send the full update (the data with joins) back to the UI?
You have a few options for updating the UI.
If possible, it's easier if your API synchronously returns success/failure. If you get a failure, you can immediately report to the user, and you don't need to update the UI. If you get a success, you have some options.
Include in the success response some entity version information. Include in your read model some version information, and poll the query API until the version is new enough (a sketch of this polling approach follows these options).
Include in the success response some entity version information. Have your query API allow you to specify that you want the data as-of at least that version (in a header, query parameter, or whatever). Either you can have the query API immediately return failure if it isn't yet up to date, or you can have it block until it is up to date (with a timeout) then return the up-to-date data.
Use some kind of client notification system such as web sockets to notify the client whenever the read model it is 'subscribed' to changes. Either the notification can include the information the client needs to update, or it can make another query.
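Not from the answer itself, but here is a small Java sketch of the polling option above, where the command response carries a version and the client polls the query API until the read model has caught up. The CommandResult, CustomerView, and QueryApi shapes are invented for the illustration.

import java.util.Optional;

// Hypothetical shapes: the command response carries the entity version it produced,
// and the read model exposes the version it was projected from.
record CommandResult(String entityId, long version) {}
record CustomerView(String entityId, long projectedVersion, String data) {}

interface QueryApi {
    Optional<CustomerView> findCustomer(String entityId);
}

class VersionAwarePolling {
    // Polls until the read model reflects at least the version the command produced.
    static CustomerView awaitReadModel(QueryApi queries, CommandResult result, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Optional<CustomerView> view = queries.findCustomer(result.entityId());
            if (view.isPresent() && view.get().projectedVersion() >= result.version()) {
                return view.get(); // projection has caught up; safe to refresh the UI
            }
            Thread.sleep(100); // simple fixed backoff for the sketch
        }
        throw new IllegalStateException("Read model did not catch up in time");
    }
}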
On top of those options, you can also employ the technique of optimistic updates (sketched below). In this case, after making the appropriate client-side checks before issuing a command, you assume the command succeeded (if it's the kind of command that usually does succeed) and update the UI immediately using client-side code. Then:
If the command comes back with failure, undo the client-side update (you could do this by re-fetching the read model, or using client-side code).
If the command succeeds, consider re-fetching the read model, if there's any likelihood that the client-side optimistic update isn't quite right. This should use one of the strategies listed above to ensure the command's effect is present in the read model.
This also works quite well with offline support - you queue commands up persistently in the client, and optimistically assume they succeed (perhaps with a UI indication that you are offline and data is stale) using client-side code to update the read model, until you can actually send the commands through to the server and re-fetch the server-generated read model.
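As an illustration (again not from the answer), here is roughly how an optimistic update with an undo path might look; the ClientState, CommandGateway, and RenameItem types are invented for the sketch.

import java.util.concurrent.CompletableFuture;

// Invented types for the sketch: a local copy of the read model plus a command sender.
class ClientState {
    String itemName;
}

interface CommandGateway {
    CompletableFuture<Boolean> send(Object command); // true = success, false = failure
}

record RenameItem(String itemId, String newName) {}

class OptimisticUpdate {
    static void rename(ClientState state, CommandGateway gateway, RenameItem command) {
        String previousName = state.itemName; // remember enough to undo
        state.itemName = command.newName();   // optimistic: update the UI state immediately

        gateway.send(command).thenAccept(succeeded -> {
            if (!succeeded) {
                state.itemName = previousName; // undo the client-side update
                // ...or re-fetch the read model instead of undoing by hand
            } else {
                // optionally re-fetch the read model, using one of the strategies above,
                // in case the optimistic guess wasn't quite right
            }
        });
    }
}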
I'm using notification feeds where users get a notification when other users add replies to a forum thread they are part of.
I'd like to know how I can remove activities from all feeds when the reply is deleted?
I can't seem to find any information about that. The examples show how I can remove an activity from one user's feed, but I don't necessarily know all the users that might have the activity on their notification feeds.
Or is there a way to get a list of notification feeds that contain activities with a foreign id?
When you delete an activity from a feed, the delete is propagated to every feed that received that activity via a follow relationship or the 'to' field. In your example, if you delete the activity from the "origin" feed you should be OK. If that's not the case, you should probably expand your question with more detail.
Since you mentioned it: deletes by foreign_id allow you to delete all activities from one feed that share the same foreign_id value. For example: say that you have many activities in a feed with foreign_id "post:42" and you want to delete them all at once, you can perform a delete on foreign_id="post:42".
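As a small illustration: the exact call differs per SDK, so the StreamFeed wrapper below is hypothetical; only the foreign_id and delete-propagation behaviour comes from the answer above.

// Hypothetical wrapper around whichever Stream server SDK / REST call you use;
// only the foreign_id semantics come from the answer above.
interface StreamFeed {
    void removeActivityByForeignId(String foreignId);
}

class ReplyCleanup {
    // Deleting from the origin feed is enough: the delete propagates to the
    // notification feeds that received the activity via follow relationships or 'to'.
    static void onReplyDeleted(StreamFeed originFeed, String replyId) {
        originFeed.removeActivityByForeignId("reply:" + replyId); // e.g. foreign_id "reply:123"
    }
}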
I have a worker role which creates a PDF document. I pass the worker role the needed data through a queue; the worker role creates the PDF document and stores it in a BLOB. But how can I send the BLOB address back to the website to inform the user where to go to download the PDF?
That's a typical scenario for the Correlation Identifier pattern.
When the worker role is done, it should send back a message over a queue indicating that the document is ready. You can use a Correlation Identifier (such as a document id) to indicate on the DocumentReadyEvent message which original request this event relates to.
You could also go the route of full CQRS and simply update a view-specific table that includes the new document, and let the web site query from that.
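A tiny sketch of what those messages might look like with a correlation identifier; the message shapes are invented for the illustration, not taken from any particular SDK.

import java.util.UUID;

// Invented message shapes: the request carries a correlation id that the
// worker role copies onto the "document ready" event.
record GeneratePdfRequest(UUID correlationId, String customerData) {}
record DocumentReadyEvent(UUID correlationId, String blobUrl) {}

class PdfCorrelation {
    static GeneratePdfRequest newRequest(String customerData) {
        return new GeneratePdfRequest(UUID.randomUUID(), customerData);
    }

    // The web site keeps the correlation id (e.g. in the user's session) and,
    // when a DocumentReadyEvent with the same id comes back on the reply queue,
    // it knows which original request the blob URL belongs to.
    static boolean matches(GeneratePdfRequest request, DocumentReadyEvent event) {
        return request.correlationId().equals(event.correlationId());
    }
}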
You could do it the other way around using a common naming convention. Let the website/user application choose the name and location of the blob based on some standard convention. The site/app can then occasionally check for the blob via an HTTP request.
But, do you want to inform the web user in real time about the ready document?
You can do a lot of things. For example, you can create a table partitioned by "user id" and store the URLs of the finished documents there, then set up an AJAX call that regularly checks the content of that table for that user in the background; when it finds a new entry that has not been "viewed" yet, it shows a notification with a download link.
Just an idea.
I've studied some CQRS sample implementations (Java / .NET) which use event sourcing for the event store and simple (No)SQL stores as the 'report store'.
Looks all good, but I seem to be missing something in all sample implementations.
How do you handle the addition of new report stores / screens after an application has gone into production? And how do you import the existing (latest) data from the event store into the new report store?
For example:
Imagine a basic DDD/CQRS driven CRM application.
Every screen (view, really) has its own structured report store (a SQL table).
All these views get updated using handlers listening to the domain events (CustomerCreated / CustomerHasMoved, etc).
One feature of the CRM is that it can log phone calls (PhoneCallLogged event). Due to time constraints we only implemented the logging of phone calls in V1 of the CRM (viewing and reporting of who handled which phone call will be implemented in V2)
After some time running in production, we want to implement the 'reporting' of logged phone calls per customer and per sales representative.
So we need to add some screens (views) and the supporting report tables (in the report store) and fill it with the data already collected in the Event Store...
That is where I get stuck while looking at the samples I studied. They don't handle the import of existing (history) data from the event store to a (new) report store.
All samples of the EventRepository (DomainRepository) only have 'GetById' and 'Add' methods; they don't support getting ALL aggregate roots at once to fill a new report table.
Without this initial data import, the new screens are only updated for newly occurring events, not for the phone calls that were already logged (because there was no report listener for the PhoneCallLogged event at the time).
Any suggestions, recommendations ?
Thanks in advance,
Remco
You re-run the handler on the existing event log (i.e. you play the old events through the new event handler).
Consider your example... you have a ton of PhoneCallLoggedEvents in your event log. Take your new handler and play all the old events through it. It is then as if it has always been running, and it will just continue to process any new events that arrive.
Cheers,
Greg
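To make Greg's suggestion a bit more concrete, here is a framework-agnostic sketch of playing the stored history through the new handler; the EventStore, PhoneCallReportHandler, and event shape are assumptions for the illustration.

import java.util.List;

// Assumed shapes: a stored-event stream and the new read-model handler.
record PhoneCallLoggedEvent(String customerId, String salesRepId, long timestamp) {}

interface EventStore {
    List<Object> loadAllEvents(); // every event ever stored, in order
}

interface PhoneCallReportHandler {
    void handle(PhoneCallLoggedEvent event); // inserts/updates the new report table
}

class ReportRebuild {
    // Play the full history through the new handler once; afterwards the handler
    // just keeps processing new events as they arrive, as if it had always existed.
    static void rebuild(EventStore store, PhoneCallReportHandler handler) {
        for (Object event : store.loadAllEvents()) {
            if (event instanceof PhoneCallLoggedEvent logged) {
                handler.handle(logged);
            }
        }
    }
}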
For example in Axon Framework, this can be done via:
JdbcEventStore eventStore = ...;
ReplayingCluster replayingCluster = new ReplayingCluster(
        new SimpleCluster("replaying"),
        eventStore,
        new NoTransactionManager(),
        0,
        new BackloggingIncomingMessageHandler());
replayingCluster.startReplay();
Event replay is an area that is not completely documented and lacks mature tooling, but here are some starting points:
http://www.axonframework.org/docs/2.4/event-processing.html#d5e1852
https://groups.google.com/forum/#!searchin/axonframework/ReplayingCluster/axonframework/brCxc7Uha7I/Hr4LJpBJIWMJ
The 'EventRepository' only contains these methods because they are all you need in production.
When adding a new denormalization for reporting, you can send all events from the start to your handler.
You can do this on your development site this way (a catch-up sketch follows the steps below):
Load your event log to the dev site
Send all events to your denormalization handler
Move your new view + handler to your production site
Run the events that happened in between
Now you're ready
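One possible way to handle the "run the events that happened in between" step is to track a checkpoint during the dev-site rebuild and catch up from it in production; the sequence-numbered store below is an assumption, not something from the answer.

import java.util.List;

// Assumption: the event store can return events after a given global sequence number.
record StoredEvent(long sequenceNumber, Object payload) {}

interface SequencedEventStore {
    List<StoredEvent> loadEventsAfter(long sequenceNumber);
}

interface Denormalizer {
    void handle(Object payload); // updates the new report table
}

class CatchUp {
    // Remembers the last sequence number applied so the production deployment can
    // replay only the events that happened after the dev-site rebuild finished.
    static long replayFrom(SequencedEventStore store, Denormalizer denormalizer, long lastApplied) {
        for (StoredEvent event : store.loadEventsAfter(lastApplied)) {
            denormalizer.handle(event.payload());
            lastApplied = event.sequenceNumber();
        }
        return lastApplied; // persist this checkpoint for the next run
    }
}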