MongoDB change streams: getting previous values? - node.js

Recently I learned about change streams in MongoDB and how powerful they are, but I need to get a document's previous values before an update. From some research, it seems it's impossible to get the previous values out of a change stream, so I started thinking about what alternatives exist to retrieve them.
What I want to achieve is a logging system such as, "Record A field has changed from {old_value} to {new_value}." I'm using socket.io to push these updates to a React front-end client. The updates to records happen from a completely different system, not on the same backend server where the change streams are listening, so I won't be able to query the document before updating.
So I started to think of a different solution… maybe I could have two databases? One contains the old records and the other the updated records, but this sounds like a duplication of data, and I can't imagine doing that with thousands of records.
I need some guidance, as I really don't know what the best option is. Is there really no way to use change streams and get the previous values? Is it possible to somehow query the document before a change stream event? Thank you.

Not sure how I missed this, but the solution is versioning the data.
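For anyone wondering what that looks like in practice: keep the last known copy of each record in a separate "versions" collection, and diff against it whenever the change stream fires. Below is a minimal sketch, assuming the official mongodb Node.js driver and socket.io; the records / record_versions collection names and the record:change event name are placeholders, not anything from the original setup.

```js
// Hedged sketch of the versioning approach: diff the incoming change against
// the last stored copy, emit the old/new values, then store the new copy.
// Collection names and the socket.io event name are placeholders.
const { MongoClient } = require('mongodb');

async function watchWithHistory(io) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('app');
  const records = db.collection('records');
  const versions = db.collection('record_versions'); // last known copy per _id

  const stream = records.watch([], { fullDocument: 'updateLookup' });

  stream.on('change', async (change) => {
    if (change.operationType !== 'update') return;

    const id = change.documentKey._id;
    const oldDoc = await versions.findOne({ _id: id }); // previous version, if seeded
    const updatedFields = change.updateDescription.updatedFields || {};

    // "Record A field has changed from {old_value} to {new_value}."
    for (const [field, newValue] of Object.entries(updatedFields)) {
      io.emit('record:change', {
        id,
        field,
        from: oldDoc ? oldDoc[field] : undefined,
        to: newValue,
      });
    }

    // Store the post-update document as the latest version for the next diff.
    if (change.fullDocument) {
      await versions.replaceOne({ _id: id }, change.fullDocument, { upsert: true });
    }
  });
}
```

Note that change streams only work against a replica set or sharded cluster, and the versions collection has to be seeded with the current documents once before the diffs are meaningful.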

Related

How to track model changes in nodejs/postgresql

I have an app persisting data with PostgreSQL/Express/Knex/Objection. I am looking for a way to track changes in my models, so that I can manage and revert versions, similar to paper_trail in Rails or this port for Sequelize: https://github.com/nielsgl/sequelize-paper-trail
Is there something I could use for this in Knex/Objection, or at the DB level, to track changes?
Answer: There is no generic way to do it in Objection or Knex.
Random rambling:
You need to decide what kinds of changes you want to track and write some code, for example in Objection model hooks, to do the tracking.
One way to implement it would be to add a separate table where all the tracked changes are written, for example as a JSONB object storing the updated fields or old values, indexed as needed (see the sketch below). I'm pretty sure you don't want to track every piece of data in the database, since that will blow up the DB size very fast.
Anyway, the implementation depends on why you actually want or need to track the data and which use cases you need to support.
Also, this might work for you: https://wiki.postgresql.org/wiki/Audit_trigger
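To make the "separate table + model hooks" idea concrete, here is a hedged sketch using Objection's $beforeUpdate hook. The model_changes table and its columns are invented for the example, and opt.old is only available for instance updates, which is part of why the DB-level audit trigger linked above can be the more robust choice.

```js
// Hedged sketch: record old vs. new values of an updated row in a separate
// audit table from an Objection model hook. Table and column names are made up.
const { Model } = require('objection');

class Post extends Model {
  static get tableName() { return 'posts'; }

  async $beforeUpdate(opt, queryContext) {
    await super.$beforeUpdate(opt, queryContext);

    // opt.old is only populated for instance updates, e.g. post.$query().patch(...)
    if (!opt.old) return;

    const changes = {};
    for (const key of Object.keys(this)) {
      if (this[key] !== opt.old[key]) {
        changes[key] = { from: opt.old[key], to: this[key] };
      }
    }
    if (Object.keys(changes).length === 0) return;

    // JSONB column holding the tracked changes for this row.
    await Post.knex()('model_changes').insert({
      table_name: Post.tableName,
      row_id: opt.old.id,
      changes: JSON.stringify(changes),
      changed_at: new Date(),
    });
  }
}
```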

How to back up a SQLAlchemy database?

I am trying to back up a database through SQLAlchemy and save it as a file. I tried using the Flask-AlchemyDumps extension, but it appears to no longer be supported.
I must be missing something obvious, as this is surely something a lot of developers want to do. Does anyone know how I should be backing up the database?
Thanks in advance
J Kirkman
SQLAlchemy is an ORM that sits between your code and the database. It's useful if you want to interact with specific rows and relationships without having to keep track of lots of IDs and joins.
What you're looking for is a way to dump the entire contents of your DB to disk, presumably so you can restore it later or elsewhere. This is a bulk action, which is your first clue that an ORM may not be the right tool. (ORMs tend to be fast enough for small to medium operations, but slow and unsuitable for actions that affect tens of thousands of rows at once.) And indeed, this isn't usually something you'd use an ORM for; it's a feature of your database itself, presumably Postgres or MySQL (pg_dump or mysqldump, respectively). If you happen to be using Heroku, you can use their command line tool to do this.

How to make MongoDB's mongorestore update and replace documents with the same _id

So we've recently set up sharding, and we're migrating data from several clients across several smaller databases into a bigger, sharded one. The problem is that if I try to move data from production and do a mongorestore, the documents won't be updated if they have the same _id. This is a problem because several mongorestores might be necessary as we test the sharded database and as customer production data changes over the testing period.
I obviously don't want to use --drop, since that will drop the whole collection instead of replacing the old documents. Is there any way of doing this properly?
Cheers
I came up with a solution, although it's not ideal.
I'll use mongoimport with the --upsert option instead of mongodump/mongorestore. For a whole database, I might need to write a script to mimic mongodump, but oh well.
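If that script ends up being written in Node.js rather than shelling out to mongoimport, the same replace-by-_id behaviour is available through the driver's bulkWrite with upsert: true. A minimal sketch; the connection string, database/collection names, and the sourceDocs array (documents already read from the export) are placeholders:

```js
// Hedged sketch: replace documents by _id (insert if missing) with bulkWrite.
// Connection string, db/collection names, and sourceDocs are placeholders.
const { MongoClient } = require('mongodb');

async function upsertAll(sourceDocs) {
  const client = await MongoClient.connect('mongodb://sharded-cluster:27017');
  const coll = client.db('bigdb').collection('customers');

  const ops = sourceDocs.map((doc) => ({
    replaceOne: {
      filter: { _id: doc._id },
      replacement: doc,
      upsert: true, // insert when the _id doesn't exist yet, replace when it does
    },
  }));

  const result = await coll.bulkWrite(ops, { ordered: false });
  console.log(`upserted: ${result.upsertedCount}, replaced: ${result.modifiedCount}`);
  await client.close();
}
```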

Structuring Session Data in MongoDB

This might be a bad title, but I was having trouble thinking of a good way to phrase my problem. Basically, I have a NodeJS application that has session management. Each session interacts with a set of data independent from the other sessions. I am having trouble coming up with a way to structure this in MongoDB. Things I have thought of:
Currently I'm storing a list of JSON "pages" that each have an ID corresponding to the session using them. I'm almost positive this will not scale well, though, because these "pages" will be read and updated frequently, so if I'm connected to Session1000, I'm going to have to search through 1000 items looking for the correct ID every time I update something from that session. If 1000 people are doing that roughly once a second, well...
Ideally I would like to store each session in a different collection, but the sessions need to be created and referenced dynamically, and I can't find a way in MongoDB to access a collection without hard-coding the name.
Hopefully this accurately describes my problem. Does anyone have any ideas to help me structure the db so that accessing/updating will give fast performance/scalability?
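(For what it's worth, here is a hedged sketch of the single-collection layout described above, with hypothetical names; whether the per-session lookup stays fast comes down to having an index on the session ID field rather than on how many sessions exist.)

```js
// Hedged sketch of the layout described in the question: one "pages" collection
// where each page stores the ID of the session that owns it. Names are hypothetical.
const { MongoClient } = require('mongodb');

async function getSessionPages(sessionId) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const pages = client.db('app').collection('pages');

  // With this index, looking up one session's pages is an index seek, not a
  // scan over every other session's pages. (Create it once at startup in
  // practice; createIndex is a no-op if the index already exists.)
  await pages.createIndex({ sessionId: 1 });

  return pages.find({ sessionId }).toArray();
}
```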

Why isn't there a read analog of validate_doc_update in CouchDB?

I am posing this as a suggested feature of CouchDB because that is the best way to express what I would like to achieve, and as a rant because I have not found a good reason for its absence:
Why not have a validate_doc_read(doc, userCtx) function so that I can implement per-document read control? It would work exactly as validate_doc_update works, by throwing an error when you want to deny the read. What am I missing? Has someone found a workaround for per-document read control?
I'm not sure what the actual reason is, but having read validation would make reads very slow and view indexes very hard to update incrementally (or perhaps impossible, meaning you'd basically have to have a per-user index).
The way to implement what you want is via filtered replication, so you create a new DB with only the documents you want a given user to be able to read.
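For reference, filtered replication is driven by an ordinary filter function in a design document. A hedged sketch, where the owner field, the readacl design document, and the per-user userdb-alice database are all assumptions made for the example:

```js
// Hedged sketch: a design-document filter plus a filtered replication request.
// The doc.owner field, "readacl" design doc, and "userdb-alice" are assumptions.

// 1) Filter function, stored in the source database as _design/readacl:
const designDoc = {
  _id: '_design/readacl',
  filters: {
    // Only documents whose owner matches the ?owner= query parameter pass.
    by_owner: "function (doc, req) { return doc.owner === req.query.owner; }",
  },
};

// 2) Replication request (e.g. POST to /_replicate or a _replicator document)
// copying only alice's documents into her private database:
const replication = {
  source: 'maindb',
  target: 'userdb-alice',
  filter: 'readacl/by_owner',
  query_params: { owner: 'alice' },
};
```

The user then reads (and builds views) only against their private database, which is what restricting access at the database level means here.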
The main problem with creating a validate_doc_read is how reduce functions would work with that behaviour.
I can't believe that a validate_doc_read is the best solution, because we would be giving up one feature in favour of another.
So for now, you have to restrict view access using a proxy.
