If a multi update fails partially in mongodb, how to roll back? - node.js

I understand that a multi update will not be atomic and there are chances that the update query may fail during the process. The error can be found via getLastError. But what if I want to roll back the updates that have already been done in that particular query? Any method simpler than the tedious two phase commits?
For instance, say I have a simple collection of users and their phone models. Now I do a multi update on all the users who have a Nexus 4, upgrading them to a Nexus 5 (dreamy, isn't it?). The only condition: all or none, so every N5 is taken back if even one N4 user doesn't get his. Now, somehow MongoDB fails partway through, and I am stuck with a few users on N4 and a few on N5. From what I have gathered on the net, I can't have Mongo roll back directly. If the operation failed, I will have to manually update the N5 users back to N4. And if that fails too somehow, keep retrying.
Or, when I have a more complicated collection, I will have to keep a new key, e.g. status, and update it with keywords like updating/updated.
This is what I understand. I wanted to know if there is any simpler way. I assume from the comments the answer is a big no.
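Absent server-side rollback, the manual compensation described above can at least be written down. Below is a minimal in-memory sketch, not real driver code: `upgradePhones` and the plain-array `users` are made-up stand-ins for `db.users` and its update calls, and `failAt` simulates MongoDB failing partway through.

```javascript
// In-memory sketch of an "all or none" multi update with manual rollback.
// A real implementation would issue db.users.update(...) calls and would
// have to retry the compensating updates until they succeed.
function upgradePhones(users, from, to, failAt = Infinity) {
  const updated = [];                       // ids already switched to `to`
  try {
    users.forEach((user, i) => {
      if (user.phone !== from) return;
      if (i >= failAt) throw new Error('simulated mid-update failure');
      user.phone = to;
      updated.push(user._id);
    });
    return { ok: true };
  } catch (err) {
    // Compensation: put every already-updated user back on the old model.
    for (const id of updated) {
      users.find(u => u._id === id).phone = from;
    }
    return { ok: false, rolledBack: updated.length };
  }
}
```

In real code the compensation itself can fail, which is why the status-key bookkeeping mentioned above (or a full two-phase commit) is still needed.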

Related

How can I skip the update of unchanged values and only perform the update operation on changed values in MongoDB and Node.js?

I have to send some object data from an Angular reactive form to MongoDB. In that object, some values will have changed and some will not. I use code like the following:
db.findByIdAndUpdate(id, { item1: value1, item2: value2, item3: value3 })
If any value (value1, value2, or value3) has changed, the update operation is fine, but when nothing has changed, how can I skip the update? I want to do this because I want to avoid unnecessary server interaction.
tl;dr: You can't skip the interaction with the DB.
Basically, you only want to update your record in the database when the user changes one of its values on the UI. This means you have two different versions of the same record: one in the UI, and one in the DB. If you're sending the one from the UI to the DB, you basically have two options:
save the version from the UI, no matter what is in the DB
retrieve the value from the database and compare; save the version from the UI if they are different
You might've noticed that the first option has fewer interactions with the DB on average. This is good if you have high latency between your server and your DB.
But, on the other hand, the second option has fewer writes than the first option. If concurrent writes to the same record are common, and they're causing timeouts, then this might be the best option for you.
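If the UI already holds the copy it loaded from the DB, a third variant is to diff client-side and skip the call entirely when nothing changed; it saves the write but, like option 1, trusts the UI's copy. A sketch, where the helper name `changedFields` is made up:

```javascript
// Diff the edited form value against the record as originally loaded and
// build a patch containing only the fields that actually changed.
function changedFields(original, edited) {
  const patch = {};
  for (const key of Object.keys(edited)) {
    if (edited[key] !== original[key]) patch[key] = edited[key];
  }
  return patch;
}

// Usage around a real Mongoose call might look like:
//   const patch = changedFields(loadedDoc, formValue);
//   if (Object.keys(patch).length > 0) {
//     await Model.findByIdAndUpdate(id, patch);
//   }
```

Note that this only avoids the write when nothing changed in this client; a concurrent edit from another client can still be overwritten.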

How to implement atomicity in node js transactions

I am working on an application in which the client (Android/React) clicks a button and five operations take place, say:
add a new field
update the old field
upload a photo
upload some text
delete some old fields.
Now, sometimes due to a network issue or some other problem, only some of the operations take place and the DB is left corrupted. So my question is: how can I make all these operations one transaction, i.e. atomic, so that either all complete or the completed ones are rolled back? And where should I do this: in the client (React/Android), or in the backend (Node.js) behind an API? I thought of making an API on the backend (since the chance of the backend going down is rare) and keeping track of the operations done (statelessly, e.g. in an array); if the transaction stops at any point, roll back the completed operations. But I found this expensive, and it doesn't cover the risk of a server error. Can you suggest how I can implement/design this?
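One common shape for the backend-side bookkeeping this question describes is a compensation (saga) runner: execute the operations in order, remember an undo for each, and replay the undos in reverse on failure. A minimal sketch with hypothetical names and no real I/O:

```javascript
// Run steps in order; each step provides a `do` and a compensating `undo`.
// On failure, replay the undos of the completed steps in reverse order.
async function runAtomically(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.do();
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of done.reverse()) {
      try { await step.undo(); } catch (_) { /* log and retry in real code */ }
    }
    return { ok: false, error: err.message };
  }
}
```

The caveat from the question still applies: if the server itself dies mid-run, the in-memory undo list is lost, which is why durable designs persist it (or use real DB transactions).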

Mongodb transactions scalability WriteConflict

I am using:
Mongodb (v4.2.10)
nodejs + Mongoose
I am still in the development phase of my application and I am facing a potential problem (WriteConflict) using transactions in mongodb.
My application gives the possibilities for users to add posts and to like posts.
When a user likes a post, here is what happens on the back-end side:
Start transaction
Find the post with the given ID in the Post collection
Add a document to the Like collection
Update the like count of the post in the Post collection
Update the likes given count of the user in the User collection
Update the likes received count of the user in the User collection
Add a notification document in the Notification collection
Commit the transaction
I'd say the average execution time is 1 second, so it means that for 1 second, I am locking :
1 Post document
2 User documents
I can see that becoming a huge scalability problem, especially if a user has many popular posts that are often liked by others.
What would you recommend I do?
I don't think I can stop using transactions, because if something goes wrong during the execution of the function, I want to revert the changes already made to the DB.
Thanks
Transactions are not required for this.
Once a like is in the likes collection, you can recalculate all counts.
Notifications cannot be calculated (since you don't know when one was sent), but generally they are ephemeral anyway and if you have an outage requiring database restore users will most likely forgive some lost notifications.
When updating counts, use $inc instead of read-modify-write (writing out the new value).
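To see why the last point matters, here is an in-memory illustration of the lost-update race; with the real driver this would be `posts.updateOne({ _id }, { $set: { likeCount: newValue } })` versus `posts.updateOne({ _id }, { $inc: { likeCount: 1 } })`:

```javascript
// Read-modify-write: two concurrent likers can both read the same stale
// value and both write back value + 1, losing one like.
function racyIncrementBoth(store) {
  const a = store.likeCount;   // client A reads
  const b = store.likeCount;   // client B reads before A writes
  store.likeCount = a + 1;     // A writes
  store.likeCount = b + 1;     // B overwrites with the same value
}

// $inc-style: the server applies a delta, so concurrent likes all survive
// and no transaction-wide lock is needed.
function atomicIncrementBoth(store) {
  store.likeCount += 1;
  store.likeCount += 1;
}
```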

Node.js - Scaling with Redis atomic updates

I have a Node.js app that performs the following:
get data from Redis
perform a calculation on the data
write the new result back to Redis
This process may take place several times per second. The issue I now face is that I wish to run multiple instances of this process, and I am obviously seeing out-of-date data being written back, because each node updates after another has already read the last value.
How would I make the above process atomic?
I cannot add the operation to a transaction within Redis as I need to get the data (which would force a commit) before I can process and update.
Can anyone advise?
Apologies for the lack of clarity with the question.
After further reading, I can indeed use transactions. The part I was struggling with was that I need to separate the read from the update, wrap just the update in the transaction, and use WATCH on the read. This causes the update transaction to fail if another update has taken place in the meantime.
So the workflow is:
WATCH key
GET key
MULTI
SET key
EXEC
Hopefully this is useful for anyone else looking for an atomic get-and-update.
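In other words, WATCH gives optimistic locking: the EXEC is discarded if the watched key changed after the GET, and the client retries. A minimal in-memory sketch of that retry loop follows; the `store` object and its `version` counter are stand-ins for Redis and its internal dirty-key tracking, and real code would call the client's watch/multi/exec instead.

```javascript
// Optimistic get-and-update with retry, mimicking WATCH/GET/MULTI/SET/EXEC.
function atomicUpdate(store, key, compute, maxRetries = 10) {
  for (let i = 0; i < maxRetries; i++) {
    const watchedVersion = store.version;    // WATCH key
    const value = store.data[key];           // GET key
    const next = compute(value);             // the calculation step
    if (store.version === watchedVersion) {  // EXEC succeeds only if untouched
      store.data[key] = next;                // SET key (queued in MULTI)
      store.version++;
      return next;
    }
    // Another writer got in first: EXEC would return null, so retry.
  }
  throw new Error('too much contention');
}
```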
Redis supports atomic transactions http://redis.io/topics/transactions

How to account for a failed write or add process in Mongodb

So I've been trying to wrap my head around this one for weeks, but I just can't seem to figure it out. MongoDB isn't equipped to deal with rollbacks as we typically understand them. For example, a client adds information to the database, like a username, but quits in the middle of the registration process; now the DB is left with some "hanging" information that isn't associated with anything. How can MongoDB handle that? Or if no one can answer that question, maybe they can point me to a source/example that can? Thanks.
MongoDB does not support transactions: you can't perform atomic multi-statement transactions to ensure consistency. You can only perform an atomic operation on a single document at a time. When dealing with NoSQL databases you need to validate your data as much as you can; they seldom complain about anything. There are some workarounds or patterns to achieve SQL-like transactions. For example, in your case, you can store the user's information in a temporary collection, check the data's validity, and move it to the users collection afterwards.
This should be straightforward, but things get more complicated when we deal with multiple documents. In that case, you need to create a designated collection for transactions. For instance:
transaction collection
{
    _id: ...,
    state: "new_transaction",
    value1: <values from document_1 before updating it>,
    value2: <values from document_2 before updating it>
}
// update document 1
// update document 2
Ooohh!! Something went wrong while updating document 1 or 2? No worries: we can still restore the old values from the transaction collection.
This pattern is known as compensation to mimic the transactional behavior of SQL.
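A runnable in-memory sketch of this compensation pattern follows; plain objects stand in for the two documents and the transaction record, and all names are made up:

```javascript
// Snapshot the old values into a transaction record before updating,
// and restore them if any of the updates fails.
function updateBoth(docs, changes, failSecond = false) {
  const txn = {
    state: 'new_transaction',
    value1: { ...docs.doc1 },  // old values of document_1
    value2: { ...docs.doc2 },  // old values of document_2
  };
  try {
    Object.assign(docs.doc1, changes.doc1);               // update document 1
    if (failSecond) throw new Error('simulated failure'); // something went wrong
    Object.assign(docs.doc2, changes.doc2);               // update document 2
    txn.state = 'committed';
  } catch (err) {
    Object.assign(docs.doc1, txn.value1);  // restore from the snapshot
    Object.assign(docs.doc2, txn.value2);
    txn.state = 'rolled_back';
  }
  return txn.state;
}
```

In MongoDB the transaction record would live in the designated collection above, so the restore can still run after a crash.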
