Cloudant/CouchDB: trigger an event when a document is deleted - couchdb

I am trying "update handlers" to catch create/update/delete events in IBM cloudant. It works when a document is created or updated, but not deleted. Is there any other way I can catch an event that a document is deleted and then create a document in another database to record this event? Thank you.

If you want to monitor a CouchDB/Cloudant database for changes, take a look at the /_changes feed: http://docs.couchdb.org/en/2.0.0/api/database/changes.html. You could implement an app that continuously monitors the feed and "logs" the desired information whenever a document is inserted, updated, or deleted. For some programming languages there are libraries (such as https://www.npmjs.com/package/follow for Node.js) that make it easy to manage/process the feed.
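For example, a small Node.js sketch using the follow package (the database URLs below are placeholders):
var follow = require('follow');
// Watch the changes feed of the source database from the current sequence onwards.
follow({ db: 'https://account.cloudant.com/source-db', since: 'now' }, function(error, change) {
    if (error) return console.error(error);
    if (change.deleted) {
        // A document was deleted: record the event, e.g. by writing a small
        // audit document to another (placeholder) database such as audit-db.
        console.log('Document ' + change.id + ' was deleted at sequence ' + change.seq);
    }
});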

Related

Firestore - Cloud functions - Get uid of user who deleted a document

I am trying to get the UID, within an onWrite cloud function, of any authenticated user who deletes a document in Firestore (not the Realtime Database, which doesn't have this issue). The reason is that I am trying to create a log of all actions performed on documents in a collection. I have to use cloud functions because the client could hypothetically create/edit/delete a document and then prevent the corresponding log entry from being sent.
I have seen in other Stack Overflow questions like:
Firestore - Cloud Functions - Get uid
Getting the user id from a Firestore Trigger in Cloud Functions for Firebase?
that Firestore will not include any auth data in the onWrite function, and that the accepted workaround is to have fields like updated_by, created_by, created_at, updated_at in the document being created/updated, which are verified using Firebase security rules. This is great for documents being inserted or updated, but deleted documents in onWrite cloud functions only have change.before data and no change.after data, meaning you have no way to see who deleted the document, and at best who last updated the document before deletion.
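To illustrate the limitation, a trigger along these lines (the collection name and log shape are just placeholders) can still read change.before on a delete, but nothing about who performed it:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
exports.logWrites = functions.firestore
    .document('items/{itemId}')
    .onWrite((change, context) => {
        if (!change.after.exists) {
            // Deletion: only the previous contents are available, no auth data.
            return admin.firestore().collection('logs').add({
                itemId: context.params.itemId,
                action: 'deleted',
                lastKnownData: change.before.data()
            });
        }
        return null;
    });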
I am in the middle of trying out some workarounds as follows (but they have serious drawbacks):
Sending an update to a document right before it is to be deleted. Issues -> might have timing issues, debounce issues, and requires messy permissions to ensure that a document is only deleted if it has the preceding update.
Updating it with a field that tags it for deletion and watching for this tag in a cloud function that then does the deleting. Issues -> leads to a very noticeable lag before the item is deleted.
Does anyone have a better way of doing something like this? Thanks!
Don't delete the document at all. Just add a field called "deleted", set it to true, and add another field with the UID of the user that deleted it. Use these new fields in your queries to decide if you want to deal with deleted documents or not for any given query.
Use a different document in a separate collection that records deletions. Query that collection whenever you need to know if a document has been deleted. Or create a different record in a different database that marks the deletion.
There are really no other options. Either use the existing document or create a new document to record the change. Existing documents are the only things that can be queried - you can't query data that doesn't exist in Firestore.
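A minimal sketch of the first suggestion using the web SDK (collection and field names are placeholders):
// Instead of deleting, mark the document as deleted and record who did it.
var itemId = 'some-doc-id'; // placeholder
var db = firebase.firestore();
db.collection('items').doc(itemId).update({
    deleted: true,
    deleted_by: firebase.auth().currentUser.uid,
    deleted_at: firebase.firestore.FieldValue.serverTimestamp()
});
// Queries then filter soft-deleted documents out (or in) as needed:
db.collection('items').where('deleted', '==', false).get();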

Purging documents in Couchbase Lite

I have a mobile app using Couchbase Lite. When the user logs out, I want to remove some of the documents on the device; the user-specific documents. I do not want to remove all of the documents. Documents have a purgeDocument() method that I thought I could call on those user-specific documents.
The problem is that the purged documents are not re-synced down to the device if the user logs back in and a pull replication is run.
Based on the little I know of the CouchDB sync protocol, it makes sense that those are not re-synced down because there are no newer sequence updates on those user-specific documents to trigger a re-sync.
How should I approach this problem?
Possibilities
Delete the whole database (including common documents) and lose performance.
Somehow reset the last sequence for the replicator and hope the replicator does not transfer the already-downloaded docs over the wire. (This would probably screw up CBL.)
Have separate databases: one that stores the user-specific docs and one that contains the common docs. Databases can have filtered replicators (by channel), so it would be feasible to partition the incoming data into separate databases. The problem would be the seamless reference loading between documents of differing databases when using CBLModel object wrappers.
As I understand from the official documentation, in the subsection "Purging documents", you are not retrieving the document again because it has not been modified/updated (in short, its rev is the same) on the server side.
You can try to create a dummy document again with the same type and, for example, username (or whatever you are using to identify the user's configuration) when the user logs in again in your app, so that you trigger the pull replication from the server. You will probably get a conflict that can easily be solved by taking the revision from the server.
I hope this idea helps a little.
UPDATE AFTER COMMENT
The idea is to store somewhere the id and type of the user's documents you're going to purge. That way you can create a new dummy document with those two fields when the user logs in again. Perhaps this new dummy document triggers the pull replication.
NOTE: I haven't tried this method. I am just guessing that it might be a workaround to your problem.
I would suggest that your backend modifies the selected documents - this could be just a timestamp update - upon user login, which will post the new revisions to the device
You can keep purging the documents when the user logs in.
To solve the problem of re-syncing specific documents, I think the easiest way is to use filtered replication where the filter is the document ID.
These document IDs can be created in a manner that can be derived, for example an ID of the form UserDocument:: followed by something derivable from the user.
Now, when the user logs in, you can start a one-shot replication with the document IDs as the filter. This can only be done as a one-shot replication. When this one-shot replication finishes, you can start replication again by changing the replication's settings (changing the filter/channel).
The following Couchbase documentation explains filtering replication by document IDs:
https://developer.couchbase.com/documentation/mobile/1.4/guides/couchbase-lite/native-api/replication/index.html#filtering-by-document-ids
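As a rough sketch of the idea (the ID pattern, remote URL, and exact replication settings are assumptions; the native Couchbase Lite API for filtering by document IDs is described in the link above):
// Derive the user-specific document IDs from a known naming pattern.
function userDocumentIds(userId) {
    // Assumes IDs follow a derivable pattern such as 'UserDocument::<userId>::<name>'.
    return ['UserDocument::' + userId + '::profile',
            'UserDocument::' + userId + '::settings'];
}
// Conceptual one-shot pull replication restricted to those documents.
var pullByIds = {
    source: 'http://sync-gateway.example.com/mydb', // placeholder remote URL
    target: 'local-db',
    doc_ids: userDocumentIds('alice'),
    continuous: false // one shot, as described above
};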
Try a push after purging the document with Couchbase Lite, which allows you to pull the document from the server at a later point.

Using Knockout.js with CouchDB - updating when changed

Just wondering about the best way to subscribe to my CouchDB data store, so that if a document in couch is updated, the KO view will also update (automagically). Is this something that's even possible?
Below is what I have so far, which simply get the user name from the user_info document.
$.getJSON('http://localhost/couchdb/user_info', function(data) {
    var viewModel = ko.mapping.fromJS(data);
    ko.applyBindings(viewModel);
});
Any help would be greatly appreciated!
CouchDB supports notifications when documents change: the changes feed.
You can poll the changes feed, with a ?since=X parameter to receive only updates since X.
You can also "long poll" the feed by adding &feed=longpoll. If there are no changes yet, CouchDB will receive your query but not answer until finally a change comes on.
Or, you can have a full COMET-style feed by instead adding &feed=continuous. That is similar to longpoll, however CouchDB will never close the connection. Every time a change happens, it will send you the JSON and then continue waiting.
Finally, you can be notified when anything changes in the database, or you can specify a Javascript filter to run on the server (&filter=designdoc/filtername). You will only receive notifications if the filter approves.
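For example, a small sketch that long-polls the changes feed and refreshes the existing view model in place (the database URL and document name are placeholders based on the question):
var viewModel;
$.getJSON('http://localhost:5984/mydb/user_info', function(data) {
    viewModel = ko.mapping.fromJS(data);
    ko.applyBindings(viewModel);
    pollChanges(0);
});
function pollChanges(since) {
    $.getJSON('http://localhost:5984/mydb/_changes?feed=longpoll&since=' + since, function(changes) {
        // Re-fetch the document and update the existing observables; no re-binding needed.
        $.getJSON('http://localhost:5984/mydb/user_info', function(data) {
            ko.mapping.fromJS(data, {}, viewModel);
        });
        pollChanges(changes.last_seq);
    });
}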
Have you looked at http://hood.ie/? It works well. I'm also running Hoodie as an os_daemons service from within my CouchDB.
It's nice.

Callback from worker role when a task is completed

I have a worker role which creates a PDF document. I pass the worker role the needed data through a queue; the worker role creates the PDF document and stores it in a blob, but how can I send the blob address back to the website to inform the user where to go to download the PDF?
That's a typical scenario for the Correlation Identifier pattern.
When the worker role is done, it should send back a message over a queue indicating that the document is ready. You can use a Correlation Identifier (such as a document id) to indicate on the DocumentReadyEvent message which original request this event relates to.
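For illustration, the completion message might look something like this (queue and property names are placeholders):
// Message the worker role posts to a 'documents-ready' queue once the PDF is stored.
var documentReadyEvent = {
    correlationId: 'request-12345', // the id the website attached to the original request
    blobUrl: 'https://myaccount.blob.core.windows.net/pdfs/request-12345.pdf' // placeholder blob address
};
// The website matches correlationId back to the pending user request
// and can then show the download link for blobUrl.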
You could also go the route of full CQRS and simply update a view-specific table that includes the new document, and let the web site query from that.
You could do it the other way around using a common naming convention. Let the website/user application choose the name and location of the blob based on some standard convention. The site/app can then occasionally check for the blob via an HTTP request.
But, do you want to inform the web user in real time about the ready document?
You can do a lot of things. For example, you can create a table partitioned by "user id" and store the URLs of the finished documents there, and set up an AJAX call that regularly checks the content of that table for that user in the background; when it finds a new document that has not been "viewed" yet, show a notification with a download link.
Just an idea.
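A rough sketch of that background check (the endpoint and UI helper are placeholders):
function checkForFinishedDocuments(userId) {
    $.getJSON('/api/finished-documents?user=' + userId, function(docs) {
        docs.filter(function(d) { return !d.viewed; })
            .forEach(function(d) { showDownloadLink(d.url); }); // placeholder UI helper
        // Poll again in a few seconds.
        setTimeout(function() { checkForFinishedDocuments(userId); }, 5000);
    });
}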

CQRS - how to handle new report tables (or: how to import ALL history from the event store)

I've studied some CQRS sample implementations (Java / .NET) which use event sourcing for the event store and simple (No)SQL stores as the 'report store'.
Looks all good, but I seem to be missing something in all sample implementations.
How do you handle the addition of new report stores/screens after an application has gone into production? And how do you import the existing (latest) data from the event store into the new report store?
I.e.:
Imagine a basic DDD/CQRS driven CRM application.
Every screen (view, really) has its own structured report store (a SQL table).
All these views get updated using handlers listening to the domain events (CustomerCreated / CustomerHasMoved, etc).
One feature of the CRM is that it can log phone calls (PhoneCallLogged event). Due to time constraints, we only implemented the logging of phone calls in V1 of the CRM (viewing and reporting of who handled which phone call will be implemented in V2).
After a time running in production, we want to implement the 'reporting' of logged phone calls per customer and sales representative.
So we need to add some screens (views) and the supporting report tables (in the report store) and fill it with the data already collected in the Event Store...
That is where I get stuck while looking at the samples I studied. They don't handle the import of existing (historical) data from the event store into a (new) report store.
All samples of the EventRepository (DomainRepository) only have the methods 'GetById' and 'Add'; they don't support getting ALL aggregate roots at once to fill a new report table.
Without this initial data import, the new screens are only updated for newly occurring events, not for the phone calls already logged (because there was no report listener for the PhoneCallLogged event).
Any suggestions, recommendations ?
Thanks in advance,
Remco
You re-run the handler on the existing event log (e.g. you play the old events through the new event handler).
Consider your example: you have a ton of PhoneCallLogged events in your event log. Take your new handler and play all the old events through it. It is then as if it has always been running and will just continue to process any new events that arrive.
Cheers,
Greg
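A minimal sketch of that replay, assuming a hypothetical event store interface and the same handler that processes live events:
// Rebuild the new report table by replaying the full event log through the new handler.
// eventStore.readAllEvents() and phoneCallReportHandler are hypothetical names.
function rebuildPhoneCallReport(eventStore, phoneCallReportHandler) {
    eventStore.readAllEvents().forEach(function(event) { // oldest to newest
        if (event.type === 'PhoneCallLogged') {
            phoneCallReportHandler.handle(event); // same code path as for live events
        }
    });
}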
For example in Axon Framework, this can be done via:
JdbcEventStore eventStore = ...;
ReplayingCluster replayingCluster = new ReplayingCluster(
        new SimpleCluster("replaying"), // the cluster whose handlers receive the replayed events
        eventStore,
        new NoTransactionManager(),
        0,
        new BackloggingIncomingMessageHandler()); // buffers events that arrive while the replay runs
replayingCluster.startReplay(); // plays all past events from the event store through the cluster
Event replay is an area that is not completely documented and lacks mature tooling, but here are some starting points:
http://www.axonframework.org/docs/2.4/event-processing.html#d5e1852
https://groups.google.com/forum/#!searchin/axonframework/ReplayingCluster/axonframework/brCxc7Uha7I/Hr4LJpBJIWMJ
The 'EventRepository' only contains these methods because they are all you need in production.
When adding a new denormalization for reporting, you can send all events from the start to your handler.
You can do this on your development site this way:
Load your event log onto the dev site
Send all events to your denormalization handler
Move your new view + handler to your production site
Run the events that happened in between
Now you're ready
