Transaction mongodb - node.js

I need to write into two different MongoDB collections using an 'all or nothing' process. FYI, I use Node.js on my backend side.
As far as I know, MongoDB provides atomicity when it comes to a single collection, but it does not when we need to write into multiple collections.
So I'd like to know a way of emulating a transaction in Node.js/MongoDB in order to avoid writing into one collection if the other failed, and also to get the possibility of doing a 'roll back' if the second process fails.
Thank you guys!

Starting from version 4.0 MongoDB will add support for multi-document transactions. Transactions in MongoDB will be like transactions in relational databases.
For details visit this link:
https://www.mongodb.com/blog/post/multi-document-transactions-in-mongodb?jmp=community

I wrote a library that implements the two phase commit system mentioned above. It might help in this scenario. Fawn - Transactions for MongoDB
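A rough sketch of what using it could look like, based on Fawn's documented Task API (the connection string, collection names, and fields here are illustrative):

    // Two writes queued as a single Fawn task: if one step fails,
    // previously completed steps are rolled back.
    const Fawn = require('fawn');
    Fawn.init('mongodb://127.0.0.1:27017/testDB');

    const task = Fawn.Task();
    task
      .save('Accounts', { name: 'John', balance: 80 })
      .update('Accounts', { name: 'Jane' }, { $inc: { balance: -20 } })
      .run()
      .then(results => console.log('both writes committed', results))
      .catch(err => console.error('rolled back', err));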

Multi-document transactions were introduced in MongoDB 4.0!
https://docs.mongodb.com/manual/core/transactions

In MongoDB (prior to 4.0) there is no way you can fully implement transactions at the database level. However, there are some mechanisms which provide some transaction functionality. You can read about them in the documentation.
Since MongoDB 4.0, transactions are supported. Very little change is needed in your current code to support them. There's a new section in the documentation fully dedicated to the subject.
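For example, here is a minimal sketch with the official Node.js driver, assuming MongoDB 4.0+ running as a replica set (the database, collection, and field names are made up):

    // Both writes below either commit together or not at all.
    const { MongoClient } = require('mongodb');

    async function placeOrder(uri) {
      const client = new MongoClient(uri);
      await client.connect();
      const session = client.startSession();
      try {
        await session.withTransaction(async () => {
          const db = client.db('shop');
          await db.collection('orders').insertOne({ item: 'book', qty: 1 }, { session });
          await db.collection('inventory').updateOne(
            { item: 'book' },
            { $inc: { qty: -1 } },
            { session }
          );
        });
      } finally {
        await session.endSession();
        await client.close();
      }
    }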

Related

Use Mongoose-Transactions over multiple databases

I am creating a Node.js API consisting of multiple Microservices.
Each Microservice is responsible for one or more features of my application. However, my data is structured into multiple databases which each have multiple collections.
Now I need one service to perform atomic operations across multiple databases. If everything happened in the same database, I'd use a normal transaction. However, I don't know how to do this with multiple databases, or whether this is even possible.
Example:
One of the Microservices takes care of creating users. A user must be
created inside two databases. However, this should happen atomically,
i.e. if the user is created, it must be created in both databases.
UPDATE: MongoDB's official docs state the following:
With distributed transactions, transactions can be used across
multiple operations, collections, databases, documents, and shards.
I haven't found anything on how to perform distributed transactions with mongoose though.
I would be extremely glad if someone could give me some clarification on this topic.
You need to use the SAGA pattern of the microservice architecture.
The SAGA pattern is divided into two types:
Choreography-based saga
Orchestration-based saga
If you want to manage distributed transactions from a single service, then you can use Orchestration-based saga (2).
So with this pattern, you can implement a distributed transaction that either executes a chain of actions or rolls back along the chain, using compensating transactions.
I also recommend studying the microservice architecture patterns on this site, as well as the book.
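A minimal sketch of an orchestration-based saga in Node.js: each step has a compensating action that is run in reverse order if a later step fails. The service clients (authDb, profileDb) in the usage example are hypothetical.

    // Runs the steps in order; on failure, compensates the completed ones.
    async function runSaga(steps) {
      const completed = [];
      try {
        for (const step of steps) {
          await step.action();
          completed.push(step);
        }
      } catch (err) {
        // Roll back along the chain with compensating transactions.
        for (const step of completed.reverse()) {
          await step.compensate();
        }
        throw err;
      }
    }

    // Usage: create the user in both databases, undoing the first write
    // if the second one fails.
    // await runSaga([
    //   { action: () => authDb.createUser(user),    compensate: () => authDb.deleteUser(user.id) },
    //   { action: () => profileDb.createUser(user), compensate: () => profileDb.deleteUser(user.id) },
    // ]);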
EDIT: Mongoose supports distributed transactions, because it's a client of the MongoDB server. From Mongoose's point of view, a distributed transaction is just a transaction.
According to this video on Distributed Transactions in MongoDB, distributed transactions are handled at a level above Mongoose, which can therefore use them.
In the MongoDB documentation, they say:
Distributed Transactions and Multi-Document Transactions Starting in
MongoDB 4.2, the two terms are synonymous. Distributed transactions
refer to multi-document transactions on sharded clusters and replica
sets. Multi-document transactions (whether on sharded clusters or
replica sets) are also known as distributed transactions starting in
MongoDB 4.2.
Here is how I would try to solve this (divide-and-conquer):
Try a simple example of distributed transactions with MongoDB.
Then try plain Mongoose with transactions (it might be that there is no difference between distributed and non-distributed transactions as far as Mongoose is concerned, because the transaction handling happens at a higher level – see the video).
Then try to combine the two solutions and see if this works.
If it does not work with Mongoose, I would try to implement distributed transactions with MongoDB directly, as the video implies that they spent a lot of effort on this, and Mongoose just lets you do things that you can also do with MongoDB alone. Moving from Mongoose to MongoDB may not be so simple, but implementing distributed transactions yourself is very hard.
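For the second step, here is a rough sketch assuming Mongoose 6+ connected to a replica set (database, model, and field names are hypothetical; transactions spanning databases on a sharded cluster additionally require MongoDB 4.2+):

    // Creates the user in two databases of the same cluster inside one transaction.
    const mongoose = require('mongoose');

    async function createUserInBothDatabases(uri, userDoc) {
      const conn = await mongoose.createConnection(uri).asPromise();
      const authDb = conn.useDb('authDb');
      const profileDb = conn.useDb('profileDb');
      const AuthUser = authDb.model('User', new mongoose.Schema({ email: String }));
      const ProfileUser = profileDb.model('User', new mongoose.Schema({ email: String, name: String }));

      const session = await conn.startSession();
      try {
        await session.withTransaction(async () => {
          // Both inserts commit together or roll back together.
          await AuthUser.create([{ email: userDoc.email }], { session });
          await ProfileUser.create([userDoc], { session });
        });
      } finally {
        await session.endSession();
        await conn.close();
      }
    }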

How do you perform queries without specifying shard key in mongodb api and how do you query across partitions?

How do you perform queries without specifying shard key in mongodb api and how do you query across partitions?
In the SQL API the latter is enabled by setting EnableCrossPartitionQuery to true on the request, but I'm not able to find anything like that for the MongoDB API. And my queries that work on an unsharded collection now fail (queries that specify the shard key work as expected).
The queries fail indiscriminately of whether I use the AsQueryable extension syntax or the aggregation framework.
As far as I know, there is no property similar to EnableCrossPartitionQuery in the Cosmos DB Mongo API. In fact, Cosmos DB is an independent server implementation that does not directly align with MongoDB server versions and features.
Cosmos DB supports a subset of the MongoDB API and translates requests into the Cosmos DB SQL equivalent. Cosmos DB has some different behaviours and results, particularly with its implementation of partitioning as compared to MongoDB's sharding. But the onus is on Cosmos DB to improve its emulation of MongoDB.
Certainly, you could add feedback here to get official assistance, or consider using MongoDB Atlas on Azure if you'd like full MongoDB feature support.
Hope it helps you.
This was confirmed as a bug by the Product Group team! It will be fixed in the first two weeks of September, in case anyone runs into the same problem in the meantime.

Partial syncing in pouchdb / couchdb with a particular scenario

I have been reading docs and articles on pouchdb/couchdb/cloudant. I am not able to create this simple architecture in my head. I need help!
So there are many users on the app. Each user has a separate database (which I read is the usual approach in a pouch/couch/cloudant setup).
Now let's just focus on a single user. This user has some remote data already present on our server (CouchDB). He has 3 separate docs stored.
He accesses docs 1 and 2 from browser 1, and docs 2 and 3 from browser 2.
Content in both browsers must be in sync.
Should I be using the Sync API of PouchDB? But as I read, it syncs the whole database. How can I use this API to sync only a subset of the central database? Is filtered replication the answer here?
And also I don't want to push both the docs in a single call. He can access docs as he needs.
What is the correct approach to implement this logic with pouch/couch databases. If you can explain with a little code, that will be great. I just need basic ideas.
Is this kind of problem easily solvable in the upcoming releases of CouchDB 2.0 and pouchdb-find?
Thanks a lot!
If you take a look at the PouchDB documentation, you should see the options.doc_ids parameter. It lets you set up replication for specific document ids. In your scenario, this would solve your problem.
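A small sketch of what that could look like, with made-up database names, URLs, and document ids:

    // Browser 1 only replicates doc1 and doc2; browser 2 would pass
    // ['doc2', 'doc3'] instead. Both stay in sync through the central DB.
    const PouchDB = require('pouchdb');

    const localDB = new PouchDB('user_local');
    const remoteDB = new PouchDB('https://example.com/couchdb/user_db');

    localDB.sync(remoteDB, {
      live: true,
      retry: true,
      doc_ids: ['doc1', 'doc2'],
    }).on('change', info => {
      console.log('replicated change', info);
    }).on('error', err => {
      console.error('sync error', err);
    });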

Sync views between pouchdb and couchdb

I've been able to sync data from my Cloudant instance to my Node.js-based PouchDB, however I need to set up a secondary search index, and therefore I created a view on the CouchDB instance. However, I am unable to see it in my synced PouchDB instance.
I see it in Cloudant, in all documents; however, after syncing and calling allDocs on PouchDB, it's not there. Also, I'm using the pouchdb-find plugin and I can't reference the secondary index search fields. Of course, from PouchDB, if I set the secondary index, it works fine.
Am I missing something? Does sync not replicate design docs in PouchDB? If not, what's the best way to create a persistent secondary index?
Any good docs for this? (Nolan....?) Speaking of docs, or support, is there an IRC room or some other live support for couchdb from the user community?
Thanks for your attention,
Paul
pouchdb-find is a reimplementation of Cloudant Query Language, not their search index (which is what I think you're talking about). It's also not done; I've only written about half of the operators. :) You may also want to try the pouchdb-quick-search plugin, which is for full-text search.
In general, the advice I usually give people is to not sync design documents at all – just replicate using a filter to avoid syncing design docs. Then you can create design documents that are optimized for whatever platform you happen to be on (PouchDB, CouchDB, Cloudant, the various PouchDB plugins, etc.).
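A small sketch of that approach (database names, URLs, and index fields are made up): filter out design documents during sync, then build local indexes with whatever plugin fits the platform.

    const PouchDB = require('pouchdb');
    PouchDB.plugin(require('pouchdb-find'));

    const localDB = new PouchDB('local_db');
    const remoteDB = new PouchDB('https://example.com/couchdb/remote_db');

    // Skip design docs during replication so each side keeps its own indexes.
    localDB.sync(remoteDB, {
      live: true,
      filter: doc => !doc._id.startsWith('_design/'),
    });

    // Build a PouchDB-specific index locally, e.g. with pouchdb-find.
    localDB.createIndex({ index: { fields: ['type', 'createdAt'] } })
      .then(() => localDB.find({ selector: { type: 'post' } }))
      .then(result => console.log(result.docs));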
And yeah, we are usually pretty responsive inside of the IRC channel and on the mailing list, but it's a small operation because we aren't sponsored by Cloudant or Couchbase or anybody. The core PouchDB team are all hobbyists. :)
Maybe this is stupid, but does the user that accesses Couch have the admin role? Only admins can see and edit design documents.

CouchDB: bulk update best practices

My use case: I would like to set a flag ("read" or "unread") on a group of documents with only one request.
My first idea was to send a list of ids using an _update handler, but reading the docs it seems to work only on one document.
Am I wrong? How do I solve this case?
You are correct.
Currently (CouchDB 1.1.0 and to my knowledge the next release, 1.2 also), the only way to modify documents in bulk is to send the literal documents themselves to CouchDB using the CouchDB bulk document API.
In my experience, in practice, this is not a major problem because bulk operations tend to be done with offline tools or else with AJAX operations where there is no noticeable impact to the user experience.
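A sketch of that approach using the _bulk_docs endpoint (the database URL and flag field are made up; this uses the fetch built into Node 18+):

    const DB_URL = 'http://localhost:5984/mydb';

    async function markAsRead(ids) {
      // Fetch the current revisions of the documents to update.
      const res = await fetch(`${DB_URL}/_all_docs?include_docs=true`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ keys: ids }),
      });
      const { rows } = await res.json();

      // Set the flag on each document, keeping _id and _rev intact.
      const docs = rows.filter(row => row.doc).map(row => ({ ...row.doc, read: true }));

      // One request updates them all; inspect the response for conflicts.
      const bulk = await fetch(`${DB_URL}/_bulk_docs`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ docs }),
      });
      return bulk.json();
    }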

Resources