NodeJS - MongoDB triggers

I'm trying to develop a Log Viewer using DerbyJS, Racer and MongoDB. The logs will be inserted into the MongoDB database by a different source continuously, and my Log Viewer should be able to update the Logs Table on the user interface automatically.
I was wondering if there is a native way of listening to MongoDB events, like:
- On update
- On delete
These would be similar to, for example, Oracle DB triggers.

You can listen to data events like inserts and updates in MongoDB using a special collection named the oplog. You just need to enable replication on your database instance, either with mongod --master or mongod --replSet.
The oplog is actually a capped collection that MongoDB uses internally to implement replication. If you are using master/slave replication you will find the collection under the name oplog.$main; if you are using replica sets it will be named oplog.rs.
You can use a tailable cursor on the oplog; that should work.
The oplog is, in effect, a log itself, so you might not need to store the entries separately for logging purposes. However, its size is fixed: when it is full, the oldest entries are overwritten.
Also make sure you are looking in the local database; that's where the oplog is maintained.
Here is a working example from the mongoskin wiki page (in CoffeeScript):
skin = require "mongoskin"
db = skin.db "localhost:27017/local"
# Cursor on the oplog (a capped collection), which maintains a history for replication.
# The oplog can be used only when replication is enabled.
# Use oplog.rs instead of oplog.$main if you are using a replica set.
oplog = db.collection "oplog.$main"
cursor = oplog.find {'ns': "icanvc.projects"}, {tailable: yes, awaitData: yes}
# Using cursor.nextObject would be slow, so iterate with cursor.each instead.
cursor.each (err, log) ->
  console.error err if err
  console.log log if not err

The typical approach to a log viewer application is to use a tailable cursor with a capped collection of log entries.
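A minimal sketch of that approach with the official mongodb Node.js driver; the database name, collection name, and capped-collection size here are assumptions for illustration:

const { MongoClient } = require('mongodb');

async function tailLogs() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('logviewer');

  // Create the capped collection once (ignore the error if it already exists).
  await db.createCollection('logs', { capped: true, size: 10 * 1024 * 1024 })
    .catch(() => {});

  // A tailable, awaitData cursor blocks waiting for new documents,
  // much like `tail -f` on a file. Tailable cursors require a capped collection.
  const cursor = db.collection('logs').find({}, { tailable: true, awaitData: true });

  for await (const entry of cursor) {
    console.log(entry); // push the entry to the UI here (e.g. over a websocket)
  }
}

tailLogs().catch(console.error);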

No (see https://jira.mongodb.org/browse/SERVER-124); it has to be done application-side.
I am unsure whether the Node.js MongoDB driver has built-in trigger support, but most likely it does not, so you will need to code this yourself.

Related

Mongo watch change stream suddenly stopped working

I'm using Mongo's watch() to subscribe to change stream events. I've noticed that the change stream events stop automatically, without any specific error being thrown, and the stream becomes idle; I then have to restart the server to listen to the change stream again.
I'm not able to find out the specific reason for this strange behavior.
We are using a Node.js server, with Mongoose for the DB connection and watch().
If any of you have faced the same issue, please guide me. We have a cluster with 1 primary node and 2 secondary nodes, hosted on MongoDB Atlas.
The collection.watch(...) method has to be called on the collection after every server restart. A common mistake is to call it once, upon the creation of the collection; however, the database does not maintain a reference to the result of this call the way it does for calls such as collection.createIndexes(...).
Change streams only notify on data changes that have persisted to a majority of data-bearing members in the replica set. This ensures that notifications are triggered only by majority-committed changes that are durable in failure scenarios.
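If the stream dies quietly, one common pattern is to re-establish it from the last seen resume token. A rough sketch with the Node.js driver (the connection string and database/collection names are placeholders, not from the question):

const { MongoClient } = require('mongodb');

// Keep a change stream alive by resuming from the last seen resume token.
async function watchForever(collection) {
  let resumeToken = null;
  for (;;) {
    const options = resumeToken ? { resumeAfter: resumeToken } : {};
    const stream = collection.watch([], options);
    try {
      for await (const change of stream) {
        resumeToken = change._id; // the resume token for this event
        console.log(change.operationType, change.documentKey);
      }
    } catch (err) {
      console.error('Change stream error, resuming:', err.message);
    }
  }
}

MongoClient.connect('mongodb+srv://your-cluster.example.net')
  .then(client => watchForever(client.db('app').collection('events')))
  .catch(console.error);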
Change stream events stop working when a node fails in a replica set

How to handle Transaction Rollback for multiple database calls (Calling firebase and mongoDB Atlas)?

So here is my scenario:
I have a Firebase database
I also have a MongoDB Atlas database
There is a scenario where I have to write to a collection in the MongoDB Atlas database, then perform another write to a collection in the Firebase database, and finally a completion write back to the MongoDB Atlas database.
This is how I handle this:
1. I start a MongoDB transaction
2. I perform a write to MongoDB (if this fails, I can just roll back, no issues)
3. I perform a write to Firebase (if this fails, I can still abort the MongoDB transaction and roll back)
4. I perform another, final write to MongoDB (ISSUE HERE)
5. I then commit the MongoDB transaction (ISSUE HERE)
As you can see, in points 4 and 5, if the operation fails, the writes to MongoDB can be rolled back but not the writes to Firebase, obviously because the two databases are not linked and are not under the same system. How does one approach this? I'm sure there are lots of systems out there with multiple databases.
I am using NodeJS and Express to handle this.
There are many strategies:
1. Accept the changes in the non-transactional database even if the transaction fails, and accept that the non-transactional database may have incorrect data. For example, depending on how you view notifications here on SO, the number of notifications in the top nav bar can be wrong.
2. Have a janitor process that periodically goes through the transactional database and updates the non-transactional database to match.
3. Same as 2, but trigger the janitor when a transaction is aborted, when you know some changes would need to be made in the non-transactional database.
4. Perform another write to the non-transactional database after the transaction completes (see the sketch after this list). This way you'll miss data from some completed transactions in the non-transactional database, but you won't have data from aborted transactions there.
5. When reading, read from the transactional database first, before reading from the non-transactional database. If the data isn't present in the transactional database, skip the non-transactional read.
6. Expire data from the non-transactional database to reduce the window during which the data there is incorrect.
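A rough sketch of strategy 4, using the MongoDB Node.js driver's withTransaction helper; writeToFirebase() and the database/collection names are placeholders, not a real API:

const { MongoClient } = require('mongodb');

// Strategy 4: commit the MongoDB transaction first, then mirror the change to
// the non-transactional database. If the mirror write fails, a janitor
// process (strategy 2) can reconcile later.
async function createOrder(client, order) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db('shop');
      await db.collection('orders').insertOne(order, { session });
      await db.collection('audit').insertOne(
        { orderId: order._id, status: 'created' },
        { session }
      );
    });
    // The transaction is committed at this point; only now write to Firebase.
    await writeToFirebase(order); // placeholder for your Firebase call
  } finally {
    await session.endSession();
  }
}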

mongodb Atlas server - slow return

So I understand how some queries can take a while, and how querying the same information many times can just eat up RAM.
I am wondering: is there a way to make the following query more friendly for real-time requests?
const LNowPlaying = require('mongoose').model('NowPlaying');
var query = LNowPlaying.findOne({"history":[y]}).sort({"_id":-1})
We have our iOS and Android apps requesting this information every second, which takes a toll on MongoDB Atlas.
We are wondering if there is a way in Node.js to cache the data that is returned for at least 30 seconds, and then fetch the new now-playing data once it has changed.
(NOTE: We have a listener script that listens for song metadata changes and updates NowPlaying for every listener.)
MongoDB will cache queried data in memory when it can, but the frequent queries described may still put too much load on the database.
You could use Redis, Memcached, or even an in-memory store on the Node.js side to cache the query results for a time. The listener script referenced could invalidate the cache each time a song's metadata is updated, to ensure clients get the most up-to-date data. One example of a backend-agnostic cache client for Node.js is catbox.
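For illustration, a minimal in-process cache with a 30-second TTL; the model and the invalidation hook come from the question, everything else is an assumption. In a multi-process deployment you would want Redis or Memcached instead:

const LNowPlaying = require('mongoose').model('NowPlaying');

let cached = null;
let cachedAt = 0;
const TTL_MS = 30 * 1000;

// Serve the cached result while it is fresh; otherwise hit MongoDB.
// (Kept to a single cache slot for brevity; use a keyed map if the
// query varies per station/listener.)
async function getNowPlaying(y) {
  if (cached && Date.now() - cachedAt < TTL_MS) return cached;
  cached = await LNowPlaying.findOne({ history: [y] }).sort({ _id: -1 }).exec();
  cachedAt = Date.now();
  return cached;
}

// The metadata-listener script can call this on every song change so that
// clients get the new data immediately instead of waiting out the TTL.
function invalidateNowPlayingCache() {
  cached = null;
}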

How to get mongodb sync or change log and apply it on another instance?

Is it possible to get a change log of changes in MongoDB, say from a given timestamp, and then apply it to another instance of MongoDB?
These two instances have the same collection, but changes to one are independent of the other.
Ideally the change log would be a transaction log of all the data changes that have happened from a given point in time.
Looks like the only way is to start the MongoDB server in replica-set mode and read the oplog (as discussed with MongoDB's solution architect, Vigyan):
mongod --replSet rs0
See the MongoDB documentation on converting your standalone server to a replica set.
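A rough sketch of reading the oplog from a given timestamp with the Node.js driver, so that the entries could be replayed against another instance; the connection string and the Timestamp construction are assumptions:

const { MongoClient, Timestamp } = require('mongodb');

// Read oplog entries newer than a given wall-clock time (in seconds).
async function readOplogSince(seconds) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const oplog = client.db('local').collection('oplog.rs');

  // Oplog entries are keyed by a BSON Timestamp (seconds + ordinal).
  const since = new Timestamp({ t: seconds, i: 0 });
  const cursor = oplog.find(
    { ts: { $gt: since } },
    { tailable: true, awaitData: true }
  );

  for await (const entry of cursor) {
    // entry.op is 'i' (insert), 'u' (update) or 'd' (delete);
    // entry.ns is the namespace; entry.o holds the document or update spec.
    console.log(entry.ts, entry.op, entry.ns);
  }
}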

NodeJS and MongoDB: Is there a way to listen to a collection and have a callback be called when the collection has a new document?

Is there a way to listen to a MongoDB collection and have a callback get triggered when the collection has a new document?
Looks like there isn't a way yet. There is a lot of discussion in the "triggers" JIRA about related topics:
https://jira.mongodb.org/browse/SERVER-124
You can work around this by polling with timestamps or counts, but an event callback would obviously be better.
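A sketch of that polling workaround, relying on the fact that ObjectIds embed a creation timestamp and therefore sort by insertion time; the collection and interval are assumptions:

const { ObjectId } = require('mongodb');

// Poll for documents newer than the last one we have seen.
async function pollForNew(collection, onNewDoc, intervalMs = 1000) {
  let lastId = ObjectId.createFromTime(Math.floor(Date.now() / 1000));
  setInterval(async () => {
    const docs = await collection
      .find({ _id: { $gt: lastId } })
      .sort({ _id: 1 })
      .toArray();
    for (const doc of docs) {
      lastId = doc._id;
      onNewDoc(doc);
    }
  }, intervalMs);
}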
There aren't any active pushes from the DB, but you could hook into replication.
Let's assume you have a replica set (you wouldn't run a single mongod, would you?).
Every change is written to the oplog on the primary and then is replicated to secondaries.
You can efficiently pull new changes (both inserts and updates) from the oplog, using tailable cursors. Note, this is still pull, not push.
