node not recognizing duplicated entries - node.js

I'm trying to create a basic MEAN stack CRUD API to add shops into my database. I want every shop to have a unique name (to avoid adding duplicates). So far, everything gets saved into the database even if I post the same request 10 times. Went through the code a couple of times and can't figure out what's wrong; if anyone could point me in the right direction I'd be very grateful.
shop model:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var bcrypt = require('bcrypt-nodejs');

//shop schema
var ShopSchema = new Schema({
    name: { type: String, required: true, index: { unique: true }},
    address: { type: String, required: true, index: { unique: true }}
});

module.exports = mongoose.model('Shop', ShopSchema);
post function:
apiRouter.route('/shops')
    //create a shop
    .post(function(req, res) {
        //new instance of shop model
        var shop = new Shop();
        //set the shop information
        shop.name = req.body.name;
        shop.address = req.body.address;
        //save shop and check for errors
        shop.save(function(err) {
            if (err) {
                //duplicate entry
                if (err.code == 11000) {
                    return res.json({ success: false, message: 'A shop with that name already exists.' });
                } else {
                    return res.send(err);
                }
            } else {
                res.json({ message: 'Shop created!' });
            }
        });
    })
I do not receive errors of any kind; like I said, everything just gets written into the database.
Thanks for the help.

Basically, your unique index hasn't finished building before the new entries are saved, so nothing is there yet to reject the duplicates. You can read more about creating unique keys here, but the gist is below. The solution is to create the index over the unique fields ahead of time.
When we declare a property to be unique, we're actually declaring that we want a database-level index on that property. Some database abstraction layers will issue a query to see if there's another record with the same value for the unique property, and if that query comes back empty, it allows the save or update to proceed. If you trust this method, you either have incredibly low traffic or you're about to learn about race conditions, because 2 or more requests could have their checks to the database occur before any writes go out, and you end up with non-unique data in your DB.
In between the time that check query is issued, another insert could come along doing the exact same thing, and you still end up with duplication. Uniqueness can’t be correctly validated at the application level. So it’s good that Mongoose tries to create an index for us.
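As a side note (my own sketch, not from the answer): the duplicate-entry branch in the question's handler can be made more robust, since some driver versions surface E11000 only in the error message rather than in `err.code`. `isDuplicateKeyError` below is a hypothetical helper; in Mongoose 5+ you can additionally `await Shop.init()` at startup so traffic is only served once the unique index actually exists.

```javascript
// Recognize MongoDB's duplicate-key error (E11000), whether the driver
// exposes it as err.code === 11000 or only in the message text.
function isDuplicateKeyError(err) {
  return !!err && (err.code === 11000 || /E11000/.test(err.message || ''));
}

// Hypothetical startup wiring (Mongoose >= 5): wait for index builds so the
// race described above cannot slip duplicates in while the index is created.
// await Shop.init();
```

Usage: `isDuplicateKeyError({ code: 11000 })` returns `true`, and so does an error whose message starts with "E11000 duplicate key error".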

Related

Validate relationship existence in MongoDB with Mongoose

I'm using Mongoose, and I'm not at an advanced stage with it, so I need some help with some specific points. I will try to keep my examples clear and without much context.
First of all, I'm doing some relationships in my schemas. Before I create or edit any of them, I'm verifying if the provided ObjectId exists in database when necessary.
VehicleSchema = new Schema({
    name: String,
})

PersonSchema = new Schema({
    name: String,
    vehicle: ObjectId //relation with vehicle
})
PersonSchema.pre('save', async function (next) {
    // ... (regular function, not an arrow, so `this` is the document)
    const count = await Vehicles.countDocuments({ _id: this.vehicle })
    if (!count) throw new Error('blabla')
    // ...
})
Is there any better way to do this or is this the best way possible to make sure that my doc exists?
I was thinking about three possibilities to help this be faster, but I'm not sure if is secure and consistent:
Create a custom ObjectId that indicates the modelName of my Schema in it. Something like:
function createObjectIdByModelName(modelName) {
    return new ObjectId(`${modelName}-${uuid.v4()}`)
}
and then:
function validateObjectIdByModelName(_id, expectedModel) {
    const modelName = mongoose.model.get(_id).modelName
    return modelName === expectedModel
}
Use some cache package like recachegoose or speedgoose
Make my requests have an "origin" where I could create some rules like:
// This is a simple example of course, but the idea is that
// if the origin of my request is my frontend, I would trust in it, so my validation
// would be ignored. Otherwise I validate it normally.
if (origin !== 'frontend') {
    const count = await Vehicles.countDocuments({ _id: this.vehicle })
    if (!count) throw new Error('blabla')
}
What do you think? This has been blowing my mind for weeks now.

Hide and then show again mongoose model values

So I am learning nodejs and using mongodb/mongoose with it. I am making a todo list site, and as part of it I am trying to add the option of hiding and then showing again the tasks that are completed. I have done a lot of research but cannot find a way to hide documents in a mongoose model, so what I have been trying to do instead is delete the completed tasks but store their values first so that they can be restored later. However, I have run into a bit of trouble with this.
This is my mongoose schema
const todoTaskSchema = new mongoose.Schema({
    text: {
        type: String,
        required: true
    },
    isComplete: {
        type: Boolean,
        default: false
    }
});

module.exports = mongoose.model('TodoTask', todoTaskSchema);
In my main index file I have imported this using
const TodoTask = require("./models/TodoTask");
The way it works is that the user will click on the button and then this code will run:
//Toggle Completed
app.route("/toggleCompleted/").get((req, res) => {
    TodoTask.find({ isComplete: true }, function(error, completed) {
        if (completed.length !== 0) {
            return TodoTask.deleteMany({ isComplete: true }, function(err) {});
        }
        if (completed.length === 0)
            console.log(completed)
        TodoTask.insertMany([
            { completed }
        ]).then(function() {
            console.log("Data inserted") // Success
        }).catch(function(error) {
            console.log(error) // Failure
        });
    })
    res.redirect("/");
});
Obviously the issue with this is that once the completed tasks are deleted and the user clicks on the button again, it is going to overwrite the completed value and it will be blank. Not sure how to fix this though or if there is a method to just simply hide and then show the values again. Any help would be greatly appreciated
Logic like hiding and showing should generally be handled on the client side; that would be a lot faster, since you save yourself a trip to the server and the database, and it would also be easier to implement. If you still wanted to do it server-side, you could query for all tasks where isComplete is either true or false, instead of deleting and re-creating them.
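The server-side variant suggested above can be sketched like this (my own names, not from the thread): build the query filter from a flag instead of deleting documents.

```javascript
// Build the Mongo filter for the task list. "Hiding" completed tasks is
// just querying { isComplete: false }; "showing" them is an empty filter.
function taskFilter(showCompleted) {
  return showCompleted ? {} : { isComplete: false };
}

// Hypothetical Express wiring:
// app.get('/', (req, res) => {
//   TodoTask.find(taskFilter(req.query.showCompleted === 'true'))
//     .then(tasks => res.render('index', { tasks }));
// });
```

The documents stay in the collection the whole time, so nothing has to be restored.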

MongoDB best practice for sorting by non-indexed fields

I have an app that allows users to use their own custom data, so I can't know what the data is. However, I do want to allow them to sort the data.
This can be a significant amount of data, and mongodb ends up giving me memory errors (32MB limit)
What would be the best way to approach this? How can I allow the user to sort a large amount of data by an unknown field?
MongoDB allows you to design the schema in such a way that it can store objects and object relations, so you can allow the user to store any kind of information. As @kevinadi said, there is a limit of 32MB on in-memory sorts. As far as sorting is concerned, it can be done on your server side.
This is an example I tried when storing objects in MongoDB and Mongoose ORM
var mongoose = require("mongoose");
var userSchema = new mongoose.Schema({
    email: {
        type: String,
        unique: true,
        required: true,
        lowercase: true,
        trim: true,
        match: [/^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/, "Please fill a valid email address"]
    },
    custInfo: {
        type: Object,
        required: true
    },
    isConfirmed: {
        type: Boolean,
        required: true,
        default: false
    },
    confirmedOn: {
        type: Date,
        required: true,
        default: Date.now
    }
});
module.exports = mongoose.model("user", userSchema);
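On sorting server-side as mentioned above, here is a minimal fallback sketch (my own, with assumed names): it sorts already-fetched documents by an arbitrary, possibly missing custom field, for cases where Mongo's 32MB in-memory sort limit bites and no index covers the field.

```javascript
// Sort an array of documents by an arbitrary field name. Documents that
// lack the field always sort last, regardless of direction.
function sortByField(docs, field, descending = false) {
  const dir = descending ? -1 : 1;
  return [...docs].sort((a, b) => {
    const av = a[field], bv = b[field];
    if (av === undefined) return 1;
    if (bv === undefined) return -1;
    return av < bv ? -dir : av > bv ? dir : 0;
  });
}
```

This trades memory on the app server for memory in Mongo; it only makes sense for result sets your Node process can comfortably hold.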
Since you have tagged this question Meteor, I assume you have the default Meteor environment, where you can use the client-side lightweight Mongo collections.
This gives you the opportunity to publish (Publication) / return (Method) your data mostly unsorted and let the clients handle this task.
Consider this: just 100 clients asking for a publication that updates on every sort action (because the subscription parameters change, so the publication changes, too).
This alone causes your server to consume a high amount of RAM to keep the observers (OPLOG etc.) running for 100 publications, each querying huge amounts of documents.
Possible performant solutions are described below. Please keep in mind, that they are not bound to any front-end and rather a conceptual description. You will have to include reactivity etc., based on your frontend environment.
Option A - Publish unsorted, let clients sort
server
Meteor.publish('hugeData', function () {
return MyCollection.find({ ...})
})
client
const handle = Meteor.subscribe('hugeData')
if (handle.ready()) {
const sortedData = MyCollection.find({ ... }, {sort: { someField: -1 } })
}
A big plus here is that you can inform the clients about the completeness status, if using cursor.observeChanges.
Note that if you want to scan backwards (return the newest docs first) you can use the hint option on find:
Meteor.publish('hugeData', function () {
    return MyCollection.find({ ... }, { hint: { $natural: -1 } })
})
This is way more performant than { sort: { fieldName: -1} }.
Option B - return unsorted from Method, let clients sort
Now there may still be a problem with solution A, since it still consumes a lot of RAM if there are lots of subscribers. An alternative (especially if live-data changes are not so relevant) is to use Meteor Methods:
server
Meteor.methods({
    hugeData: function () {
        return MyCollection.find({ ... }).fetch()
    }
})
Note that this requires you to fetch the docs; otherwise an unhandled promise rejection is thrown.
client
This requires a local collection on the client that is not in sync with your server-side collection, or you will get problems with document syncing:
const HugeData = new Mongo.Collection(null) // note the null as the collection name!
const insertUpdate = document => {
    if (HugeData.findOne(document._id)) {
        const _id = document._id
        delete document._id
        return HugeData.update(_id, { $set: document })
    } else {
        return HugeData.insert(document)
    }
}
Meteor.call('hugeData', (err, data) => {
    data.forEach(insertUpdate)
})
Then you can use the LocalCollection on the client for any projection of the received data.
All in all it is a good tradeoff to move the load to the clients. As long as you keep them informed when projections take a while it should be okay.
My current thought is an additional indexed collection holding 1. the entity id, 2. the field name, 3. the field value.
Have that collection indexed, then pull ordered entity ids from there, and later load the full relevant documents by id.
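That side-collection idea could look like this (a sketch under my own naming assumptions): each entity is exploded into one row per field, and a compound index on { fieldName: 1, value: 1 } then makes any custom field sortable via an indexed scan.

```javascript
// Explode one entity into { entityId, fieldName, value } rows for the side
// collection. With an index on { fieldName: 1, value: 1 }, sorting by any
// custom field is an index walk that yields ordered entity ids, after which
// the full documents are loaded by id.
function toSortKeys(entityId, doc) {
  return Object.entries(doc).map(([fieldName, value]) => ({
    entityId,
    fieldName,
    value
  }));
}
```

For example, `toSortKeys('u1', { age: 3 })` yields `[{ entityId: 'u1', fieldName: 'age', value: 3 }]`. The cost is write amplification: every entity update must also rewrite its rows in the side collection.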

Preventing NoSQL injection: Isn't mongoose supposed to convert inputs based on given schema?

Looking to prevent NoSQL injection attacks for a node.js app using mongodb.
var mongoose = require('mongoose'); // "^5.5.9"
var Schema = mongoose.Schema;

var historySchema = new Schema({
    userId: {
        type: String,
        index: true,
    },
    message: {},
    date: {
        type: Date,
        default: Date.now,
    }
});

var history = mongoose.model('history', historySchema);
// the following is to illustrate the logic, not actual code
function getHistory(user) {
    history.find({ userId: user }, function(err, docs) {
        console.log(docs)
    })
}
Based on this answer to a similar question, my understanding is that using mongoose and defining the field as string should prevent query injection. However, by changing the user input to a query object, it is possible to return all users. For example:
getHistory({$ne: 1}) // returns the history for all users
I am aware of other ways to prevent this type of attack before it gets to the mongoose query, like using mongo-sanitize. But I'd like to know if there's something wrong with the way I defined the schema or if one can't expect mongoose to convert inputs according to the schema.
Thanks in advance!
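For reference, the key-stripping approach that packages like mongo-sanitize take can be sketched in a few lines (my own simplified version, not the package's exact code):

```javascript
// Recursively strip any key beginning with '$' so operator objects such as
// { $ne: 1 } are neutralized before they reach a Mongo query. Mutates and
// returns the value.
function sanitize(value) {
  if (value && typeof value === 'object') {
    for (const key of Object.keys(value)) {
      if (key.startsWith('$')) {
        delete value[key];
      } else {
        sanitize(value[key]);
      }
    }
  }
  return value;
}
```

With this, `sanitize({ $ne: 1 })` becomes `{}`, so `history.find({ userId: sanitize(user) })` can no longer match every document via an injected operator.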
That part is good enough on its own; you do not need anything else there, as long as the method only ever receives a string and uses the string.
The best approach is to validate the input that can be modified (usually the HTTP request) at the top level, before processing anything (I can recommend https://github.com/hapijs/joi; it's easy to use, and you can check whether all required fields are there and whether all fields are in the correct format).
So put the validation into middleware just before it hits your controller. Or at the beginning of your controller.
From that point you are in full control of all the code and you can trust what came through your validation, so it cannot happen that someone passes an object instead of a string and gets through.
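The "validate at the top level, before the controller" idea can be sketched as a tiny Express-style middleware (names and wiring are my own assumptions):

```javascript
// Middleware factory: reject the request with 400 unless the named route
// parameter is a plain string. Anything object-shaped (e.g. { $ne: 1 })
// never reaches the controller or the Mongo query.
function requireStringParam(name) {
  return (req, res, next) => {
    if (typeof req.params[name] !== 'string') {
      return res.status(400).send(`${name} must be a string`);
    }
    next();
  };
}

// Hypothetical wiring:
// app.get('/history/:userId', requireStringParam('userId'), getHistoryHandler);
```

Because query-string and body parsers can produce nested objects, the string check has to happen on every externally supplied value you forward into a query.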
Following the "skinny controllers, fat model" paradigm, it would be best to expose a custom validation schema from your model to be used in your controller for POST and PUT requests. This means that any data that attempts to enter your database will first be sanitized against a validation schema. Every Mongoose model should own its own validation schema.
My personal favorite for this is Joi. It's relatively simple and effective. Here is a link to the documentation: https://www.npmjs.com/package/@hapi/joi
A Joi schema permits type checking (i.e., Boolean vs. String vs. Number, etc), mandatory inputs if your document has the field required, and other type-specific enforcement such as "max" for numbers, enumerable values, etc.
Here is an example you'd include in your model:
const Joi = require('joi');
...
function validateHistory(history) {
    const historySchema = {
        userId: Joi.string(),
        message: Joi.object(),
        date: Joi.date()
    }
    return Joi.validate(history, historySchema);
}
...
module.exports.validate = validateHistory;
And then in your controller you can do:
const { validate } = require('../models/history');
...
router.post('/history', async (req, res) => {
    const { error } = validate(req.body.data);
    if (error) return res.status(400).send(error.details[0].message);

    let history = new History({
        userId: req.body.user,
        message: req.body.message,
        date: req.body.date
    });
    history = await history.save();
    res.send(history);
});
*Note that in a real app this route would also have an authentication callback before handling the request.

Mongoose key/val set on instance not show in JSON or Console.. why?

I have some information on my mongoose models which is transient. For performance reasons I don't wish to store it against the model, but I do want to be able to provide this information to clients that connect to my server and ask for it.
Here's a simple example:
var mongoose = require('mongoose'),
    db = require('./dbconn').dbconn;

var PersonSchema = new mongoose.Schema({
    name: String,
    age: Number,
});
var Person = db.model('Person', PersonSchema);

var fred = new Person({ name: 'fred', age: 100 });
The Person schema has two attributes that I want to store (name and age). This works, and we see in the console:
console.log(fred);
{ name: 'fred', age: 100, _id: 509edc9d8aafee8672000001 }
I do, however, have one attribute ("status") that changes rapidly, and I don't want to store it in the database, but I do want to track it dynamically and provide it to clients, so I add it onto the instance as a key/val pair.
fred.status = "alive";
If we look at fred in the console again after adding the "alive" key/val pair, we again see fred, but his status isn't shown:
{ name: 'fred', age: 100, _id: 509edc9d8aafee8672000001 }
Yet the key/val pair is definitely there; we see that:
console.log(fred.status);
renders:
alive
The same is true of the JSON representation of the object that I'm sending to clients: the "status" isn't included.
I don't understand why; can anyone help?
Or, alternatively, is there a better approach for adding attributes to mongoose schemas that aren't persisted to the database?
Adding the following to your schema should do what you want:
PersonSchema.virtual('status').get(function() {
    return this._status;
});
PersonSchema.virtual('status').set(function(status) {
    return this._status = status;
});

PersonSchema.set('toObject', {
    getters: true
});
This adds the virtual attribute status - it will not be persisted because it's a virtual. The last part is needed to make your console log output correctly. From the docs:
To have all virtuals show up in your console.log output, set the
toObject option to { getters: true }
Also note that you need to use an internal property name other than status (here I used _status). If you use the same name, you will enter an infinite recursive loop when executing a get.
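That recursion trap is plain JavaScript, not Mongoose-specific; a minimal illustration with a native getter/setter pair:

```javascript
// The getter/setter named `status` must read and write a differently named
// backing property (_status); `get status() { return this.status; }` would
// call itself forever and overflow the stack.
const person = {
  get status() { return this._status; },
  set status(s) { this._status = s; }
};
person.status = 'alive';
// person.status → 'alive' (stored on person._status)
```

Mongoose virtuals follow exactly this pattern, with the extra step that `toObject`/`toJSON` must be told to include getters in the serialized output.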
Simply call .toObject() on the data object.
For your code it will be like:
fred.toObject()
This has been very helpful; I had to struggle with this myself.
In my case, I was getting a document from mongoose. When I added a new key, the key was not visible when I logged the object. When I searched for the key directly (console.log(data.status)), I could see it in the log, but it was not visible if I logged the entire object.
After reading this response thread, it worked.
For example, I got an object like this one from my MongoDB call:
Model.find({}).then(result => {
    // console.log(result); // [{ name: 'John Doe', email: 'john@john.com' }];
    // To add another key to the result, I had to change that result like this:
    var d = result[0];
    var newData = d.toJSON();
    newData["status"] = "alive";
    console.log(newData); // { name: 'John Doe', email: 'john@john.com', status: 'alive' };
}).catch(err => console.log(err));
Hope this helps someone else.
HappyCoding
