Can't delete an object in IndexedDB with autoIncrement key

I could really use some help with my code.
I'm using IndexedDB in my web app, and I've created two object stores:
- companyToCall
- companyCalled
Both contain Company objects (a custom JS class I've made).
Here is the schema:
databaseOpeningRequest.onupgradeneeded = function(event)
{
    db = event.target.result;
    db.createObjectStore('companyToCall', { autoIncrement: true }).createIndex("id", "id", { unique: true });
    db.createObjectStore('companyCalled', { autoIncrement: true }).createIndex("id", "id", { unique: true });
}
I've decided not to use Company.id as a key in the DB because I wanted to record the order of insertion into the database. For instance, if you insert companies with ids 25, 20, and 30 in that order, I want their keys to be: company 25 -> 1 / company 20 -> 2 / company 30 -> 3.
All companies are first inserted into the companyToCall store, and when I'm done working with one I want to put it into the companyCalled store and delete it from companyToCall.
Unfortunately, the deletion from the companyToCall store doesn't work and I can't figure out why.
Here is the deletion :
var removeCompanyFromToCallStorage = function(company)
{
    if (activateLocalStorage)
    {
        var requete = db.transaction(['companyToCall'], 'readwrite').objectStore('companyToCall').delete(company.getId());
        requete.onsuccess = function(e)
        {
            console.log('worked');
        };
    }
};
I get "worked" in my console, but when I check my DB I can still see this company in the wrong store (even after refreshing, etc.).
Does anyone have any idea?

First, you need to specify a keyPath when creating your object stores in order to reference objects by an id. A keyPath is like the primary key of a record in an ordinary relational table, and using one is optional. You currently have no primary key defined, so a delete-by-primary-key operation, without specifying which field in the object represents the primary key, does not make sense. You can define one by changing db.createObjectStore('companyToCall', { autoIncrement: true })... to db.createObjectStore('companyToCall', { keyPath: 'id', autoIncrement: true }).... See IDBDatabase.createObjectStore for additional information.
Second, IDBObjectStore.prototype.delete fires a success event regardless of whether an object within the object store was modified. Many operations in IndexedDB fire success events regardless of what actually happened: success just means the operation was properly requested and completed, not that it actually did anything. This is why 'worked' is always displayed in your console. Unfortunately, there is no simple way to detect whether the object was actually deleted. Instead of 'worked', you can only print something like 'successfully requested object to be deleted'; you will never know whether the request did anything unless you issue a later get request to check.
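To make both points concrete, here is a minimal sketch: stores keyed by the company's own id via keyPath, and a follow-up get to confirm that a delete actually removed the record. The store names come from the question; removeAndVerify is a hypothetical helper, not code from the original post.

```javascript
// Upgrade handler: keyPath 'id' makes company.getId() usable with store.delete().
function upgrade(event) {
  const db = event.target.result;
  db.createObjectStore('companyToCall', { keyPath: 'id', autoIncrement: true });
  db.createObjectStore('companyCalled', { keyPath: 'id', autoIncrement: true });
}

// Delete a company, then re-read the same key to confirm it is really gone;
// the delete request's success event alone only means the request completed.
function removeAndVerify(db, companyId) {
  const store = db.transaction(['companyToCall'], 'readwrite').objectStore('companyToCall');
  store.delete(companyId).onsuccess = function () {
    store.get(companyId).onsuccess = function (e) {
      console.log(e.target.result === undefined ? 'really deleted' : 'still there');
    };
  };
}
```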

Related

Proper Sequelize flow to avoid duplicate rows?

I am using Sequelize in my Node.js server. I am ending up with validation errors because my code tries to write the record twice instead of creating it once and then updating it, since it's already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;
var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } });
if (metrics) {
    metrics.latitude = latitude;
    // ...
} else {
    metrics = models.user_car_metrics.build({
        user_id: userId,
        car_id: carId,
        latitude: latitude,
        // ...
    });
}
var savedMetrics = await metrics.save();
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both FKs on the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues. After looking into it and adding logs, I realized the code above adds duplicates to the table (without the constraint). With the constraint in place, I get validation errors when the code tries to insert the duplicate record.
The question is: how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint, then you can use upsert to either insert or update the record, depending on whether a record already exists with the same primary key value or with the column values covered by the unique constraint.
await models.user_car_metrics.upsert({
    user_id: userId,
    car_id: carId,
    latitude: latitude,
    // ...
})
See upsert
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.

One to many relation in Dynamodb Node js (Dynamoose)

I am using DynamoDB with Node.js for my reservation system, and Dynamoose as the ORM. I have two tables, Table and Reservation. To create a relation between them, I have added a tableId attribute in Reservation which is of Model type (the Table model), as mentioned in the Dynamoose docs. Using document.populate I am able to get the Table data through the tableId attribute of a Reservation. But how can I retrieve all Reservations for a Table? (Reservation and Table have a one-to-many relation.)
These are my Models:
Table Model:
const tableSchema = new Schema({
    tableId: {
        type: String,
        required: true,
        unique: true,
        hashKey: true
    },
    name: {
        type: String,
        default: null
    },
});
Reservation Model:
const reservationSchema = new Schema({
    id: {
        type: Number,
        required: true,
        unique: true,
        hashKey: true
    },
    tableId: table, // as per the docs, an attribute of Table (Model) type
    date: {
        type: String
    }
});
This is how I retrieve table data from the reservation model:
reservationModel.scan().exec()
    .then(posts => {
        return posts.populate({
            path: 'tableId',
            model: 'Space'
        });
    })
    .then(populatedPosts => {
        console.log('pp', populatedPosts);
        return {
            allData: {
                message: "Executedddd succesfully",
                data: populatedPosts
            }
        }
    })
Can anyone please help me retrieve all Reservation data for a Table?
As of v2.8.2, Dynamoose does not support this. Dynamoose is focused on one directional simple relationships. This is partly due to the fact that we discourage use of model.populate. It is important to note that model.populate does another completely separate request to DynamoDB. This increases the latency and decreases the performance of your application.
DynamoDB truly requires a shift in how you think about modeling your data compared to SQL. I recommend watching AWS re:Invent 2019: Data modeling with Amazon DynamoDB (CMY304) for a great explanation of how you can model your data in DynamoDB in a highly efficient manner.
At some point Dynamoose might add support for this, but it's really hard to say if we will.
If you truly want to do this, I'd recommend adding a global index to your tableId property in your reservation schema. Then you can run something like the following:
async function code(id) {
    const table = await tableModel.get(id);
    const reservations = await reservationModel.query("tableId").eq(id).exec(); // This will be an array of `reservation` entries where `"tableId" = id`. Remember, it is required that you add an index for this to work.
}
Remember, this will cause multiple calls to DynamoDB and isn't as efficient. I'd highly recommend watching the video linked above to get more information about how to model your data in a more efficient manner.
Finally, I'd like to point out that your unique: true setting does nothing. As seen in the Dynamoose Attribute Settings documentation, unique is not a valid setting. In your case, since you don't have a rangeKey, it's not possible for two items to have the same hashKey, so it is technically already a unique property. However, it is important to note that you can overwrite existing items when creating an item. You can set overwrite to false for document.save or Model.create to prevent that behavior and throw an error instead of overwriting your document.

Dealing with race conditions and starvation when generating unique IDs using MongoDB + NodeJS

I am using MongoDB to generate unique IDs of this format:
{ID TYPE}{ZONE}{ALPHABET}{YY}{XXXXX}
Here ID TYPE will be a letter from {U, E, V} depending on the input, ZONE will be from the set {N, S, E, W}, YY will be the last two digits of the current year, and XXXXX will be a 5-digit number beginning from 0 (padded with 0s to make it 5 digits long). When XXXXX reaches 99999, the ALPHABET part will be incremented to the next letter (starting from A).
I will receive ID TYPE and ZONE as input and will have to give the generated unique ID as output. Every time I have to generate a new ID, I will read the last one generated for the given ID TYPE and ZONE, increment the number part by 1 (XXXXX + 1), save the newly generated ID in MongoDB, and return the output to the user.
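The increment rule described here can be sketched as a pure function, assuming the {ID TYPE}{ZONE}{ALPHABET}{YY}{XXXXX} layout (nextId is a hypothetical helper name; rolling over past the letter 'Z' is not handled):

```javascript
// Bump the 5-digit counter; when it wraps past 99999, advance the alphabet letter.
function nextId(lastId) {
  const prefix = lastId.slice(0, 2);        // ID TYPE + ZONE, e.g. "US"
  const alphabet = lastId[2];               // rolling letter, e.g. "A"
  const year = lastId.slice(3, 5);          // e.g. "21"
  const counter = Number(lastId.slice(5));  // "00000" -> 0
  if (counter < 99999) {
    return prefix + alphabet + year + String(counter + 1).padStart(5, "0");
  }
  // counter overflow: advance the letter and reset the counter
  const nextLetter = String.fromCharCode(alphabet.charCodeAt(0) + 1);
  return prefix + nextLetter + year + "00000";
}
```
For example, nextId("USA2100000") yields "USA2100001", and at the boundary nextId("USA2199999") yields "USB2100000".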
This code will be run on a single NodeJS server and there can be multiple clients calling this method
Is there a possibility of a race condition like the one described below if I am only running a single server instance?
First client reads last generated ID as USA2100000
Second client reads last generated ID as USA2100000
First client generates the new ID and saves it as USA2100001
Second client generates the new ID and saves it as USA2100001
Since 2 clients have generated IDs, the DB should finally have had USA2100002.
To overcome this, I am using MongoDB transactions. My code in Typescript using Mongoose as ODM is something like this:
session = await startSession();
session.startTransaction();
lastId = (await GeneratedId.findOne({ key: idKeyStr }, "value")).value;
lastId = createNextId(lastId);
const newIdObj: any = {
    key: `Type:${idPrefix}_Zone:${zone_letter}`,
    value: lastId,
};
await GeneratedId.findOneAndUpdate({ key: idKeyStr }, newIdObj, {
    upsert: true,
    new: true,
});
await session.commitTransaction();
session.endSession();
I want to know what exactly will happen when the situation I described above occurs with this code?
Will the second client's transaction throw an exception, so that I have to abort or retry the transaction in my code, or will the retry be handled automatically?
How do MongoDB and other DBs handle transactions? Does MongoDB lock the documents involved in the transaction? Are they exclusive locks (that won't even allow other clients to read)?
If the same client keeps failing to commit its transaction, that client will be starved. How do I deal with this starvation?
You are using MongoDB to store the ID. That's state. Generation of the ID is a function. You would be using MongoDB to generate the ID if the mongodb process took the function's arguments and returned the generated ID. That's not what you are doing: you are using nodejs to generate the ID.
The number of threads, or rather event loops, is critical as it defines the architecture, but either way you don't need transactions. Transactions in mongodb are called "multi-document transactions" exactly to highlight that they are intended for consistent updates of several documents at once. The very first paragraph of https://docs.mongodb.com/manual/core/transactions/ warns you that if you update a single document there is no room for transactions.
A single-threaded application does not require any synchronisation. You can reliably read the latest generated ID on start and guarantee the ID is unique within the nodejs process. If you exclude mongodb and other I/O from the generation function, you make it synchronous, so you can maintain the state of the ID within the nodejs process and guarantee its uniqueness. Once generated, you can persist it in the db asynchronously. In the worst-case scenario you may end up with a gap in the sequential numbers, but no duplicates.
If there is the slightest chance that you may need to scale up to more than one nodejs process to handle more simultaneous requests, or to add another host for redundancy in the future, you will need to synchronise generation of the ID, and you can employ MongoDB unique indexes for that. The function itself doesn't change much: you still generate the ID as in the single-threaded architecture, but add an extra step to save the ID to mongo. The document should have a unique index on the ID field, so in the case of concurrent updates one of the queries will successfully add the document and the other will fail with "E11000 duplicate key error". You catch such errors on the nodejs side and run the function again, picking the next number.
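A sketch of that catch-and-retry loop, with the database insert injected as a parameter so the control flow is visible without a database; generateUniqueId, nextId, and insertId are hypothetical names, and insertId is assumed to reject with code 11000 on a duplicate key, as described above:

```javascript
// Keep generating candidates until one wins the unique-index race.
async function generateUniqueId(lastKnownId, nextId, insertId) {
  let candidate = nextId(lastKnownId);
  for (;;) {
    try {
      await insertId(candidate); // rejects with code 11000 if another process took it
      return candidate;
    } catch (err) {
      if (err.code !== 11000) throw err; // only duplicate-key errors are retried
      candidate = nextId(candidate);     // lost the race; pick the next number
    }
  }
}
```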
This is what you can try. You need to store only one document in the GeneratedId collection. This document will have the last generated id's value. The document must have a known _id field; for example, let's say it will be an integer with the value 1. So the document can be like this:
{ _id: 1, lastGeneratedId: "<some value>" }
In your application, you can use the findOneAndUpdate() method with a filter { _id: 1 }, which means you are targeting a one-document update. This update will be an atomic operation; as per the MongoDB documentation, "All write operations in MongoDB are atomic on the level of a single document." Do you need a transaction in this case? No. The update operation is atomic and performs better than using a transaction. See Update Documents - Atomicity.
Then, how do I generate the new generated id and retrieve it?
I will receive ID TYPE and ZONE...
Using the above input values and the existing lastGeneratedId value you can arrive at the new value and update the document (with the new value). The new value can be calculated / formatted within the Aggregation Pipeline of the update operation - you can use the feature Updates with Aggregation Pipeline (this is available with MongoDB v4.2 or higher).
Note the findOneAndUpdate() method returns the updated (or modified) document when you use the update option new: true. This returned document will have the newly generated lastGeneratedId value.
The update method can look like this (using the NodeJS driver or even Mongoose):
const filter = { _id: 1 }
const update = [
    { $set: { lastGeneratedId: { /* your calculation of the new value goes here... */ } } }
]
const options = { new: true, projection: { _id: 0, lastGeneratedId: 1 } }
const newId = (await GeneratedId.findOneAndUpdate(filter, update, options))['lastGeneratedId']
Note about the JavaScript function:
With MongoDB v4.4 you can use JavaScript functions within an Aggregation Pipeline; and this is applicable for the Updates with Aggregation Pipeline. For details see $function aggregation pipeline operator.
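For illustration, such a pipeline update could look like the following sketch. The concrete pipeline is an assumption, not the answer's exact code: it keeps a numeric counter field seq alongside lastGeneratedId and rebuilds the id with $concat (zero-padding and the real prefix computation are omitted; GeneratedId and the "USA21" prefix are placeholder names):

```javascript
// Aggregation-pipeline update (MongoDB 4.2+): increment a counter and
// derive the formatted id from it in a single atomic findOneAndUpdate.
const filter = { _id: 1 };
const update = [
  {
    $set: {
      seq: { $add: ["$seq", 1] }, // hypothetical numeric counter field
      lastGeneratedId: {
        $concat: ["USA21", { $toString: { $add: ["$seq", 1] } }], // padding omitted
      },
    },
  },
];
const options = { new: true, projection: { _id: 0, lastGeneratedId: 1 } };
// const doc = await GeneratedId.findOneAndUpdate(filter, update, options);
// const newId = doc.lastGeneratedId;
```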

MongoError: E11000 duplicate key error collection cms_demo1.posts index: username_1 dup key: { : null } [duplicate]

Following is my user schema in user.js model -
var userSchema = new mongoose.Schema({
    local: {
        name: { type: String },
        email: { type: String, require: true, unique: true },
        password: { type: String, require: true },
    },
    facebook: {
        id: { type: String },
        token: { type: String },
        email: { type: String },
        name: { type: String }
    }
});
var User = mongoose.model('User', userSchema);
module.exports = User;
This is how I am using it in my controller -
var user = require('./../models/user.js');
This is how I am saving it in the db -
user({ 'local.email': req.body.email, 'local.password': req.body.password }).save(function(err, result) {
    if (err)
        res.send(err);
    else {
        console.log(result);
        req.session.user = result;
        res.send({ "code": 200, "message": "Record inserted successfully" });
    }
});
Error -
{"name":"MongoError","code":11000,"err":"insertDocument :: caused by :: 11000 E11000 duplicate key error index: mydb.users.$email_1 dup key: { : null }"}
I checked the db collection and no such duplicate entry exists; could someone let me know what I am doing wrong?
FYI - req.body.email and req.body.password are fetching values.
I also checked this post but it was no help: STACK LINK
If I remove it completely, then it inserts the document; otherwise it throws the "Duplicate" error even though I have an entry in local.email.
The error message is saying that there's already a record with null as the email. In other words, you already have a user without an email address.
The relevant documentation for this:
If a document does not have a value for the indexed field in a unique index, the index will store a null value for this document. Because of the unique constraint, MongoDB will only permit one document that lacks the indexed field. If there is more than one document without a value for the indexed field or is missing the indexed field, the index build will fail with a duplicate key error.
You can combine the unique constraint with the sparse index to filter these null values from the unique index and avoid the error.
unique indexes
Sparse indexes only contain entries for documents that have the indexed field, even if the index field contains a null value.
In other words, a sparse index is ok with multiple documents all having null values.
sparse indexes
From comments:
Your error says that the key is named mydb.users.$email_1 which makes me suspect that you have an index on both users.email and users.local.email (The former being old and unused at the moment). Removing a field from a Mongoose model doesn't affect the database. Check with mydb.users.getIndexes() if this is the case and manually remove the unwanted index with mydb.users.dropIndex(<name>).
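The check and cleanup from that comment could look like this in the mongo shell (the shell lines are shown as comments here, since db is the shell's database handle; the index name email_1 is taken from the error message, and the unique + sparse options follow the sparse-index suggestion above):

```javascript
// Index spec for a unique-but-sparse email index: uniqueness is enforced,
// but documents with no email are simply left out of the index.
const keys = { "local.email": 1 };
const options = { unique: true, sparse: true };
// In the mongo shell you would run (not executed here):
// db.users.getIndexes()          // look for a stale "email_1" from the old schema
// db.users.dropIndex("email_1")  // drop the unwanted index
// db.users.createIndex(keys, options)
```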
If you are still in your development environment, I would drop the entire db and start over with your new schema.
From the command line
➜ mongo
use dbName;
db.dropDatabase();
exit
I want to explain the answer/solution to this like I am explaining to a 5-year-old, so everyone can understand.
I have an app. I want people to register with their email, password, and phone number.
In my MongoDB database, I want to identify people uniquely based on both their phone numbers and email - so this means that both the phone number and the email must be unique for every person.
However, there is a problem: I have realized that everyone has a phone number but not everyone has an email address.
Those that don't have an email address have promised me that they will have one by next week. But I want them registered anyway - so I tell them to proceed with registering their phone numbers and leave the email input field empty.
They do so.
My database NEEDS a unique email address field - but I have a lot of people with 'null' as their email address. So I go to my code and tell my database schema to allow empty/null email address fields, which I will later fill in with unique email addresses when the people who promised to add their emails do so next week.
So it's now a win-win for everyone (but you ;-] ): the people register, I am happy to have their data ...and my database is happy because it is being used nicely ...but what about you? I am yet to give you the code that made the schema.
Here is the code:
NOTE: The sparse property on email is what tells my database to allow null values which will later be filled with unique values.
var userSchema = new mongoose.Schema({
    local: {
        name: { type: String },
        email: { type: String, require: true, index: true, unique: true, sparse: true },
        password: { type: String, require: true },
    },
    facebook: {
        id: { type: String },
        token: { type: String },
        email: { type: String },
        name: { type: String }
    }
});
var User = mongoose.model('User', userSchema);
module.exports = User;
I hope I have explained it nicely .
Happy NodeJS coding / hacking!
In this situation, log in to Mongo, find the index that you are no longer using (in the OP's case 'email'), then select Drop Index.
Check the collection indexes.
I had this issue due to outdated indexes in the collection for fields that were now stored under a different, new path.
Mongoose adds an index when you mark a field as unique.
Basically, this error is saying that you had a unique index on a particular field, for example "email_address", so mongodb expects a unique email address value for each document in the collection.
So let's say that earlier in your schema the unique index was not defined, and you signed up 2 users with the same email address or with no email address (a null value).
Later, you saw that this was a mistake, so you tried to correct it by adding a unique index to the schema. But your collection already has duplicates, so the error message says that you can't insert a duplicate value again.
You essentially have three options:
Drop the collection
db.users.drop();
Find the document which has that value and delete it. Let's say the value was null; you can delete it using:
db.users.remove({ email_address: null });
Drop the Unique index:
db.users.dropIndex(indexName)
I Hope this helped :)
Edit: This solution still works in 2023 and you don't need to drop your collection or lose any data.
Here's how I solved the same issue in September 2020. There is a super-fast and easy way from MongoDB Atlas (cloud and desktop). Probably it was not that easy before? That is why I feel like I should write this answer in 2020.
First of all, I read above some suggestions to change the "unique" field on the mongoose schema. If you came up with this error, I assume you already changed your schema, but despite that you got a 500 as your response, and notice this: it is specifying a duplicated KEY! If the problem were caused by schema configuration, and assuming you have configured a decent middleware to log mongo errors, the response would be a 400.
Why this happens (at least the main reason)
Why is that? In my case it was simple: that field on the schema used to accept only unique values, but I had just changed it to accept repeated values. MongoDB creates indexes for fields with unique values in order to retrieve the data faster, so in the past mongo created that index for that field, and even after setting the "unique" property to "false" on the schema, mongodb was still using that index, treating the field as if it had to be unique.
How to solve it
Drop that index. You can do it in 2 seconds from Mongo Atlas, or by executing it as a command in the mongo shell. For the sake of simplicity I will show the first way, for users who are not using the mongo shell.
Go to your collection. By default you are on the "Find" tab. Just select the next one to the right: "Indexes". You will see that there is still an index on the field that is causing you trouble. Just click the button "Drop Index". Done.
So don't drop your database every time this happens
I believe this is a better option than dropping your entire database or even collection. Basically, this is also why it works after dropping the entire collection: mongo is not going to set an index for that field if your first entry uses your new schema with "unique: false".
I faced a similar issue.
I just cleared the indexes of the particular fields, and then it worked for me.
https://docs.mongodb.com/v3.2/reference/method/db.collection.dropIndexes/
This is my relevant experience:
In the 'User' schema, I set 'name' as a unique key and then ran some code, which I think set up the database structure.
Then I changed the unique key to 'username' and no longer passed a 'name' value when I saved data to the database. So mongodb automatically set the 'name' value of new records to null, which is a duplicate key. I tried setting the 'name' key as not unique, {name: {unique: false, type: String}}, in the 'User' schema in order to override the original setting. However, it did not work.
At last, I made my own solution:
Just set a random value that is unlikely to be duplicated on the 'name' key when you save your data record. The simple expression '' + Math.random() + Math.random() makes a random string.
I had the same issue. I tried debugging in different ways but couldn't figure it out. I tried dropping the collection and it worked fine after that. Although this is not a good solution if your collection has many documents, if you are in the early stage of development, try dropping the collection.
db.users.drop();
I solved my problem this way:
Just go to your MongoDB account -> Atlas collection, then drop your database column. Or go to MongoDB Compass, then drop your database.
This sometimes happens when you have saved something null inside the database.
This happens because there is already a collection with the same name and configuration. Just remove the collection from your mongodb through the mongo shell and try again.
db.collectionName.drop()
Now run your application; it should work.
I had a similar problem and I realized that by default mongo only supports one schema per collection. Either store your new schema in a different collection, or delete the existing documents with the incompatible schema from your current collection. Or find a way to have more than one schema per collection.
I got this same issue when I had the following configuration in my config/models.js
module.exports.models = {
connection: 'mongodb',
migrate: 'alter'
}
Changing migrate from 'alter' to 'safe' fixed it for me.
module.exports.models = {
connection: 'mongodb',
migrate: 'safe'
}
I had the same issue after removing properties from a schema, having first built some indexes on save. Removing a property from the schema leads to a null value for the now non-existent property, which still had an index. Dropping the index, or starting with a new collection from scratch, helps here.
Note: the error message will guide you in that case. It contains a path that no longer exists. In my case the old path was ...$uuid_1 (this is an index!), but the new one is ....*priv.uuid_1.
I also faced this issue, and I solved it.
This error shows that the email is already present there. So you just need to remove this line from your model's email attribute:
unique: true
It is possible that this alone won't work, so you may also need to delete the collection from your MongoDB and restart your server.
It's not a big issue, but beginner-level developers like me wonder what kind of error this is and waste a lot of time solving it.
Actually, if you delete the db, create the db once again, and then try to create the collection, it will work properly.
➜ mongo
use dbName;
db.dropDatabase();
exit
Drop your database, then it will work.
You can perform the following steps to drop your database:
Step 1: Go to the mongodb installation directory; the default dir is "C:\Program Files\MongoDB\Server\4.2\bin".
Step 2: Start mongod.exe directly or using the command prompt, and minimize it.
Step 3: Start mongo.exe directly or using the command prompt, and run the following commands:
i) use yourDatabaseName (use show databases if you don't remember the database name)
ii) db.dropDatabase()
This will remove your database.
Now you can insert your data; it won't show the error, and it will automatically add the database and collection.
I had the same issue when I tried to modify a schema defined using mongoose. I think the issue is due to the fact that there are some underlying processes done when creating a collection, like defining the indexes, which are hidden from the user (at least in my case). So the best solution I found was to drop the entire collection and start again.
If you are in the early stages of development: eliminate the collection. Otherwise: add this to each attribute that gives you the error (Note: my English is not good, but I try to explain it):
index: true,
unique: true,
sparse: true
In my case, I had just forgotten to return res.status(400) after finding that the user with req.email already exists.
Go to your database, click on that particular collection, and delete all the indexes except _id.

Mongo duplicate key error on upsert or save

I'm running into a problem inserting data into mongo via nodejs. I'm loading json objects into documents through either upsert: true, or .save() called on a returned mongoose document.
EDIT: I forgot to point out one important point: this does work. I update 30-40,000 documents correctly. It will run for a while, then eventually throw this error. The "unique" key (xId) is a different string each time, so I don't think it's caused by the data actually being loaded...
Here's the schema:
var rosterSchema = new Schema({
    name     : String,
    xId      : { type: String, unique: true },
    event    : { type: ObjectId, ref: 'Event' },
    team     : { type: ObjectId, ref: 'Team' },
    division : { type: ObjectId, ref: 'Division' },
    place    : String,
    players  : [{ type: ObjectId, ref: "Player" }],
    staff    : [{ type: ObjectId, ref: "Player" }],
    matches  : [{ type: ObjectId, ref: "Match" }],
});
Error:
MongoError: E11000 duplicate key error collection: r_fix.rosters index: xId_1 dup key: { : "6RNoYBSsCAJRsjxs" }
at Function.MongoError.create
Each run of the parse/load function targets a single roster page, which references other rosters in their matches.
Most of the rosters already exist from loading other data.
I can't guarantee the order that the rosters will be parsed, so I may need to create a 'match' against a roster that doesn't exist yet, which requires the new roster to be created, hence why I use findOneAndUpdate as opposed to find
Any idea what might be causing this? I'm trying to avoid pasting the whole source so these are each of the individual calls, with what I believe to be relevant info:
var rosterObj = {
    xId  : id,
    name : rosterJson.team_name,
};
Roster.findOneAndUpdate({ xId: rosterObj.xId }, { $set: rosterObj }, { new: true, upsert: true, setDefaultsOnInsert: true })
    .exec((err, roster) => {
        if (err) throw(err);
    }).then((roster) => {
        ...
The above roster returns the document used in all subsequent save()'s
roster.event = event._id;
roster.save((err)=>{if(err)throw(err)})
...
roster.team = team._id;
roster.save((err)=>{if(err)throw(err)})
...
if (pObj.staff == "No")
    roster.players.addToSet(player._id);
else
    roster.staff.addToSet(player._id);
roster.save((err) => { if (err) throw(err) });
...
if (!roster.event)
    if (oppRoster.event) {
        roster.event = oppRoster.event;
        roster.save((err) => { if (err) throw(err) });
    }
...
var rosterObj = {
    xId   : mObj.vs.roster_id,
    event : roster.event,
}
Roster.findOneAndUpdate({ xId: rosterObj.xId }, { $set: rosterObj }, { new: true, upsert: true, setDefaultsOnInsert: true }).exec((err, oppRoster) => {
    if (err) throw(err);
    return oppRoster;
})
As far as I understand it, when I use a single key for the find, and it's the only unique: true value in the document, then doc.save() and Roster.findOneAndUpdate({ ... }, { ... }, { upsert: true, ... }) should never return a duplicate key error.
My catch() at the end of the promise chain doesn't catch these thrown errors either, but that is an entirely different problem.
But I don't know anything, so that's why I'm here!
EDIT: I should point out that I'm doing this over a large number of documents, but they're all promise-chained, so only one 'roster' should be getting updated at one time.
The unique index constraint does not itself protect you from duplicate key errors, only from duplicate records. You need to catch the exception and retry. The duplicate key error should not reoccur as the race condition danger has passed at that point. See: https://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/#behavior
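A minimal sketch of that catch-and-retry, assuming the duplicate-key error surfaces with code 11000 as in the question; retryOnDupKey is a hypothetical wrapper, and op would be the findOneAndUpdate upsert from the code above:

```javascript
// Retry an operation that may lose a duplicate-key race. After the race,
// the document exists, so the retried upsert matches and updates it
// instead of attempting a second insert.
async function retryOnDupKey(op, attempts = 2) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      if (err.code !== 11000) throw err; // only duplicate-key errors are retried
      lastErr = err;
    }
  }
  throw lastErr; // still failing after all attempts
}
```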
