Determine the timestamp of replication of documents during live sync in Cloudant / CouchDB

Right now I have transaction syncs with clients that go offline and online frequently. This means that the creation of a transaction document (when it goes into PouchDB) doesn't align with the point at which it is entered into CouchDB.
Is there a way for me to tag these documents with a timestamp on confirmation of replication? I see there are advanced replication schedulers, but the completed flag does not apply to live replication, which is what we are using.
I have tried tagging the document before syncing it, but this doesn't account for network delay or backend replication delay. That timestamp is simply the time I started syncing the document; there is no guarantee that it had arrived in CouchDB at that point.

You would need to use an add-on like Spiegel (using on_change documents to call back to an update function) or another client (PouchDB?) to observe the changes feed and add a timestamp for when each document became available to that client from CouchDB (which might be slightly delayed).
Such a client would be in danger of creating an infinite loop, as @Flimzy indicated in the comments, unless it follows a rule not to re-update docs that already have a timestamp, so that it does not write when re-triggered by its own update and therefore stops re-triggering itself. Spiegel supports such a rule, and/or stopping the infinite loop could be handled inside an update function.
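A minimal sketch of such a watcher, assuming a small Node.js process using the pouchdb package pointed at the server-side database (the URL and the replicatedAt field name are illustrative, not part of any existing setup):

import PouchDB from 'pouchdb';

// Hypothetical server-side database; point this at your CouchDB/Cloudant db.
const serverDb = new PouchDB('https://example.cloudant.com/transactions');

serverDb.changes({ live: true, since: 'now', include_docs: true })
  .on('change', async (change) => {
    const doc = change.doc as any;
    // Rule to avoid re-triggering ourselves: skip deletions and docs
    // that already carry a replication timestamp.
    if (!doc || doc._deleted || doc.replicatedAt) return;
    doc.replicatedAt = new Date().toISOString();
    try {
      await serverDb.put(doc);
    } catch (err: any) {
      // A 409 means another writer updated the doc first; the newer revision
      // will arrive as its own change event, so it is safe to ignore here.
      if (err.status !== 409) console.error(err);
    }
  })
  .on('error', (err) => console.error('changes feed error', err));

The check on doc.replicatedAt is the rule mentioned above that stops the watcher from re-triggering itself on its own writes.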

Related

How to reliably store an event to Azure Cosmos DB and dispatch it to Event Grid exactly once

I'm experimenting with the event sourcing / CQRS pattern using a serverless architecture in Azure.
I've chosen the Cosmos DB document database for the Event Store and Azure Event Grid for dispatching events to denormalizers.
How do I ensure that events are reliably delivered to Event Grid exactly once when an event is stored in Cosmos DB? I mean, if delivery to Event Grid fails, the event shouldn't be stored in the Event Store, should it?
Look into the Cosmos DB change feed. It is a built-in event raiser/queue for each change in the database. You can register one or many listeners/handlers, e.g. Azure Functions.
This might be exactly what you are asking for.
Some suggest you can go directly to Cosmos DB and attach Event Grid to the back side of the change feed.
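For illustration, a change-feed listener registered as an Azure Function might look roughly like the sketch below; the cosmosDBTrigger binding itself (including its lease container) lives in function.json and is not shown, and the document shape is an assumption:

import { AzureFunction, Context } from '@azure/functions';

// Invoked by the Cosmos DB trigger binding with each batch of changed documents.
const onCosmosChange: AzureFunction = async (context: Context, documents: any[]): Promise<void> => {
  for (const doc of documents) {
    // Each inserted/updated document arrives here via the change feed;
    // forward it to Event Grid, a queue, or a projection as needed.
    context.log(`Change feed delivered document ${doc.id}`);
  }
};

export default onCosmosChange;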
You cannot, but you shouldn't do it anyway. Maybe there are some very complicated methods using distributed transactions, but they are not scalable. You cannot atomically store and publish events because you are writing to two different persistence stores with different transactional boundaries. You can have a synchronous CQRS monolith, but only if you use the same technology for the event persistence and the read-model persistence.
In CQRS the application is split into Write/Command and Read/Query sides (this long video may help). You are trying to unify the two parts into a single one, a downgrade if you will. Instead you should treat them separately, with different models (see Domain-Driven Design).
The Write side should not depend on the outcome of the Read side. This means that after the Event store persists the events, the Write side is done. Also, the Write side should contain all the data it needs to do its job: emitting events based on the business rules.
If you have different technologies on the Write and Read sides, then your Read side should be decoupled from the Write side; that is, it should run in a separate thread/process.
One way to do this is to have a thread/process that listens for appends to the Event store, fetches the new events, and publishes them to Event Grid. If this process fails or is restarted, it should resume from where it left off. I don't know whether Cosmos DB supports this, but MongoDB (also a document database) has the oplog, which you can tail to pick up new events within a few milliseconds.
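A rough sketch of such a background relay, assuming your event store can be read in sequence order and a checkpoint can be persisted somewhere; the EventStore and Publisher interfaces are stand-ins, not a real SDK:

interface StoredEvent { sequence: number; type: string; payload: unknown; }

interface EventStore {
  readSince(sequence: number): Promise<StoredEvent[]>;   // new events after a checkpoint
  loadCheckpoint(): Promise<number>;
  saveCheckpoint(sequence: number): Promise<void>;
}

interface Publisher {
  publish(events: StoredEvent[]): Promise<void>;          // e.g. send to Event Grid
}

// Tail the event store and relay new events. The relay resumes from the stored
// checkpoint, so a crash between publish and saveCheckpoint only causes
// re-publishing (at-least-once), never lost events.
async function relay(store: EventStore, publisher: Publisher): Promise<void> {
  let checkpoint = await store.loadCheckpoint();
  for (;;) {
    const events = await store.readSince(checkpoint);
    if (events.length === 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, 500));  // idle poll
      continue;
    }
    await publisher.publish(events);
    checkpoint = events[events.length - 1].sequence;
    await store.saveCheckpoint(checkpoint);               // persist only after a successful publish
  }
}

Because the checkpoint is saved only after a successful publish, a crash can cause an event to be published twice but never skipped, so downstream consumers should be prepared for duplicates.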

How to control idempotency of messages in an event-driven architecture?

I'm working on a project where DynamoDB is used as the database, and every use case of the application is triggered by a message published after an item has been created/updated in the DB. Currently the code follows this approach:
repository.save(entity);
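// If the process dies between these two calls, the entity is saved but the event is never published.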
messagePublisher.publish(event);
Udi Dahan has a video called Reliable Messaging Without Distributed Transactions where he talks about a solution for situations where a system can fail right after saving to the DB but before publishing the message, since messages are not part of a transaction. But in his solution I think he assumes a SQL database, as the process involves saving, as part of the transaction, the correlationId of the message being processed, the entity modification, and the messages to be published. With a NoSQL DB I cannot think of a clean way to store the information about the messages.
A solution would be to use DynamoDB Streams and subscribe to the published change events, using either a Lambda or another service to transform them into domain-specific events. My problem with this is that I wouldn't be able to send the messages from the domain logic; the logic would be spread across the service processing the message and the Lambda/service reacting to changes, and the solution would be platform-specific.
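For concreteness, that streams-based variant might look roughly like the sketch below; the item attribute names, the domain event shape, and the publishToBus helper are assumptions for illustration:

import { DynamoDBStreamHandler } from 'aws-lambda';

// Triggered by the DynamoDB stream of the entities table; turns low-level
// item changes into domain-specific events and hands them to a publisher.
export const handler: DynamoDBStreamHandler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT' && record.eventName !== 'MODIFY') continue;
    const image = record.dynamodb?.NewImage;
    if (!image) continue;
    // Translate the raw item image into a domain event (shape is illustrative).
    const domainEvent = {
      type: image.eventType?.S ?? 'EntityChanged',
      entityId: image.pk?.S,
      occurredAt: new Date().toISOString(),
    };
    await publishToBus(domainEvent);
  }
};

// Hypothetical publisher; replace with your actual messaging client.
async function publishToBus(domainEvent: unknown): Promise<void> {
  console.log('publishing', JSON.stringify(domainEvent));
}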
Is there any other way to handle this?
I can't suggest a specific solution based on DynamoDB, since I've never used that engine. But I've built an event-driven system on top of MongoDB, so I can share some learnings you might find useful for your case.
There are different approaches you can take:
1) With an event-sourcing approach you can just save the events/messages your use case produces within a transaction. In Mongo, when you are only inserting/appending new items to the same collection, you can ensure atomicity. And even if the engine does not provide that capability, the write operation is so centralized that you reduce the possibility of an error to a minimum.
Once all the events are stored, you can consume them, project them into a given state, and then persist the updated state in another transaction.
Here you have to deal with eventual consistency, as data will be stale in your read model until you have projected the events.
2) Another approach is to apply the Unit of Work pattern, where you cache all the write operations (insert/update/delete) needed to save both the events and the state. Once your use case finishes, you execute all the cached operations against the database (flush). This way, although the operations are not atomic, you are again centralizing them enough to minimize errors; a rough sketch follows below.
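A sketch of that Unit of Work idea, with a made-up Db interface standing in for whatever driver you actually use:

interface Db { insert(collection: string, doc: object): Promise<void>; }

class UnitOfWork {
  private pending: Array<{ collection: string; doc: object }> = [];

  register(collection: string, doc: object): void {
    this.pending.push({ collection, doc });   // cache the operation, don't hit the DB yet
  }

  async flush(db: Db): Promise<void> {
    // Not atomic, but all writes happen in one place at the end of the use case,
    // which keeps the window for partial failure as small as possible.
    for (const op of this.pending) {
      await db.insert(op.collection, op.doc);
    }
    this.pending = [];
  }
}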
Of course, the best option is to use an ACID database if you require that capability; any other approach will be a workaround that gets close to it.
As for publishing the events, I don't know whether you mean publishing to a messaging transport such as RabbitMQ, Kafka, etc. But that should be a background process that fetches the events from the DB and publishes them, so that you avoid a two-phase commit within the same transaction.

How to handle projection errors with event sourcing and CQRS?

I want to use event sourcing and CQRS, so I need projections (I hope I'm using the proper term) to update my query databases. How can I handle database errors?
For example, one of my query cache databases is not available, but I have already updated the others. So the unavailable database won't be in sync with the others when it comes back online. How will it know that it has to replay, for instance, the last 10 domain events from the event store? I guess I have to store information about the current state of the databases, but what if that state storage fails? Any ideas or best practices for solving this kind of problem?
In either case, you must tell your messaging bus that the processing failed and it should redeliver the event later, in the hope that the database will be back online then. This is essentially why we are using message bus systems with an "at least once"-delivery guarantee.
For transactional query databases, you should also rollback the transaction, of course. If your query database(s) do not support transactions, you must make sure on the application side that updates are idempotent - i.e., if your event arrives on the next delivery attempt, your projection code and/or database must be designed such that the repeated processing of the event does not harm the state of the database. This is sometimes trivial to achieve (e.g., when the event leads to a changed person's name in the projection), but often not-so-trivial (e.g., when the projection simply increments view counts). But this is what you pay for when you are using non-transactional databases.
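As a small illustration of the not-so-trivial case, a view-count projection can be made safe to reprocess by remembering which event ids it has already applied (the row shape and field names below are made up):

interface ViewCountRow {
  videoId: string;
  count: number;
  appliedEventIds: string[];   // ids of events already folded into this row
}

// Applying the same 'Viewed' event twice must not bump the count twice.
function applyViewedEvent(row: ViewCountRow, eventId: string): ViewCountRow {
  if (row.appliedEventIds.includes(eventId)) {
    return row;                               // redelivery: already applied, do nothing
  }
  return {
    ...row,
    count: row.count + 1,
    appliedEventIds: [...row.appliedEventIds, eventId],
  };
}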

Is it possible to specify the Update Sequence from which to start a CouchDB replication?

I'm using CouchDB in an application that has Account objects, Event objects and Ticket objects (one-to-many-to-many). The basic setup is that each Account has its own database, in which all the Event and Ticket objects are stored, and each Ticket object stores a reference to its parent Event object. The events and tickets are then consumed by a mobile client application following the _changes feed for the database.
I've come to a point where the primary database has become so big that the _changes feed being consumed has become unbearably slow. So my plan is to have another database for each Event object that only contains tickets for that event, but I still need to keep the primary database for other reasons. So I'm setting up a replication from the primary database to the event database using a filter that only allows tickets for the correct event to pass through.
Because creating an event is a discrete operation, I know that the replication for a new event can start from the primary database's update sequence just before the event is created, removing the need for it to run through all the earlier documents that are irrelevant. So my question is: when I start a replication, can I specify from where it should start? Basically like passing ?since=XXX to the _changes feed on the mobile client.
Yes, that's possible. Just add the since_seq attribute to the replication document.
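For example, the replication document (written to the _replicator database) might look like the following sketch, with since_seq set to the update sequence captured just before the event was created; the URLs, filter name, and parameters are placeholders:

// Illustrative replication document; all names and values are placeholders.
const replicationDoc = {
  source: 'https://example.cloudant.com/primary-db',
  target: 'https://example.cloudant.com/event-123-tickets',
  filter: 'tickets/by_event',                       // hypothetical filter function
  query_params: { event_id: 'event-123' },
  continuous: true,
  since_seq: '<update_seq captured before the event was created>',
};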

Commit protocol

I'm building a REST web service that receives a request and must return "Ok" if the operation was done correctly. How can I deal with the possibility of losing the connection while returning this "Ok" message?
For example, a system like Amazon SimpleDB.
1) It receives a request.
2) Processes the request (stores and replicates the content).
3) Returns a confirmation message.
If the connection is lost between steps 2 and 3, the client thinks the operation was not successful and submits it again.
Thanks!
A system I reviewed earlier this year had a process similar to this. The solution they implemented was to have the client reply to the commit message and clear a flag on the record at that point. A periodic process checked every N minutes, and if an entry existed that was completed but that the client hadn't acknowledged, that transaction was rolled back. This allowed a client to repost the transaction without ending up with two 'real' records committed on the server side.
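A rough sketch of that ack-and-reap scheme (identifiers and the time window are made up, and a real implementation would persist the records rather than keep them in memory):

interface TxRecord { id: string; committedAt: number; acknowledged: boolean; }

const records = new Map<string, TxRecord>();
const ACK_WINDOW_MS = 10 * 60 * 1000;               // the "every N minutes" window

function commit(id: string): void {
  records.set(id, { id, committedAt: Date.now(), acknowledged: false });
}

function acknowledge(id: string): void {
  const rec = records.get(id);
  if (rec) rec.acknowledged = true;                 // client replied to the commit message
}

// Periodic reaper: anything completed but never acknowledged gets rolled back,
// so a client that reposts does not leave two 'real' records behind.
setInterval(() => {
  const cutoff = Date.now() - ACK_WINDOW_MS;
  for (const [id, rec] of records) {
    if (!rec.acknowledged && rec.committedAt < cutoff) records.delete(id);
  }
}, ACK_WINDOW_MS);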
In the timeout scenario, you could do the following:
Send a client-generated unique id with the initial request in a header.
If the client doesn't get a response, it can resend the request with the same id.
The server can keep a list of successfully processed ids and, on seeing a repeated id, return an OK rather than repeating the action.
The only issue with this is that the server will need to eventually remove the client ids. So there would need to be a time window for the server to keep the ids before purging them.
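A minimal sketch of this idea using Express and an in-memory map (a real service would persist the ids with an expiry; the header name and route are illustrative):

import express from 'express';

const app = express();
app.use(express.json());

// Maps already-processed request ids to the response that was produced for them.
const processed = new Map<string, { status: number; body: unknown }>();

app.post('/items', (req, res) => {
  const requestId = req.header('X-Request-Id');
  if (!requestId) return res.status(400).json({ error: 'X-Request-Id header required' });

  const previous = processed.get(requestId);
  if (previous) {
    // Retry of a request we already handled: return the stored result, do not redo the work.
    return res.status(previous.status).json(previous.body);
  }

  // ... do the actual work here (store and replicate the content) ...
  const result = { status: 201, body: { ok: true, id: requestId } };
  processed.set(requestId, result);
  res.status(result.status).json(result.body);
});

app.listen(3000);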
It depends on the type of web service. The whole nature of HTTP and REST is that it's basically stateless.
For example, in the SimpleDB case, suppose you're simply requesting the value for a given key. If the client connection is dropped while the value is being returned, the client can simply re-request the data at a later time. That data is likely to have been cached by the DB engine or the operating system disk cache anyway.
If you're storing or updating a value and the data is identical, quite often the database engine knows the data hasn't changed, so the update won't take very long at all.
Even complex queries can run quicker the second time on some database engines.
In short, I wouldn't worry about it unless you can prove there is a performance problem, in which case start caching the results of some recent queries yourself. Some REST-based frameworks will do this for you. I suspect you won't even find it to be an issue in practice, though.
