Validate relationship existence in MongoDB with Mongoose - node.js

I'm using Mongoose, but I'm not at an advanced stage with it yet, so I need some help with a few specific points. I'll try to keep my examples clear and free of unnecessary context.
First of all, I have some relationships in my schemas. Before I create or edit any of these documents, I verify that the provided ObjectId exists in the database when necessary.
const VehicleSchema = new Schema({
  name: String,
})

const PersonSchema = new Schema({
  name: String,
  vehicle: ObjectId // relation with Vehicle
})

PersonSchema.pre('save', async function () {
  // ...
  const exists = await Vehicles.countDocuments({ _id: this.vehicle })
  if (!exists) throw new Error('blabla')
  // ...
})
Is there any better way to do this or is this the best way possible to make sure that my doc exists?
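For reference, here is the same check written as a schema-level validator (a minimal sketch, assuming Mongoose 5.x or later, where Model.exists() is available):

// Sketch only: reject the save when the referenced vehicle cannot be found.
PersonSchema.path('vehicle').validate({
  validator: async function (value) {
    return !!(await Vehicles.exists({ _id: value }))
  },
  message: 'Referenced vehicle does not exist'
})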
I was thinking about three possibilities to make this faster, but I'm not sure whether they are secure and consistent:
Create a custom ObjectId that indicates the modelName of my Schema in it. Something like:
function createObjectIdByModelName(modelName) {
  return new ObjectId(`${modelName}-${uuid.v4()}`)
}
and then:
function validateObjectIdByModelName(_id, expectedModel) {
  const modelName = mongoose.model.get(_id).modelName
  return modelName === expectedModel
}
Use some cache package like recachegoose or speedgoose
Make my requests have an "origin" where I could create some rules like:
// This is a simple example of course, but the idea is that
// if the origin of my request is my frontend, I would trust it, so my validation
// would be skipped. Otherwise I validate normally.
if (origin !== 'frontend') {
  const exists = await Vehicles.countDocuments({ _id: this.vehicle })
  if (!exists) throw new Error('blabla')
}
What do you think? This has been blowing my mind for weeks now.

Related

How can I dynamically generate Mongoose discriminators (at runtime?)

TL;DR: Is there a safe way to dynamically define a mongoose discriminator at runtime?
I have an app with a MongoDB collection where users have some control over the underlying schema.
I could add one or two fixed, required fields and just use mongoose.Mixed for the remainder that users can change, but I'd like to make use of Mongoose's validation and discriminators if I can.
So, what I've got is a second collection Grid where the users can define the shape they'd like their data to take, and in my main model Record, I've added a function to dynamically generate a discriminator from the definition in the second collection.
The code for my Record model looks like this:
const mongoose = require("mongoose")
const recordSchema = new mongoose.Schema({
fields: {
type: Array,
required: true
}
}, {
discriminatorKey: "grid"
})
const Record = mongoose.model("Record", recordSchema)
module.exports = grid => {
// Generate a mongoose-compatible schema from the grid's field definitions
const schema = grid.fields.map(field => {
if(field.type === "string") return { [field.name]: String }
if(field.type === "number") return { [field.name]: Number }
if(field.type === "checkbox") return { [field.name]: Boolean }
return { [field.name]: mongoose.Mixed }
})
return Record.discriminator(grid._id, new mongoose.Schema(schema))
}
This is inside an Express app, and I use the model in my middleware handlers something like this:
async (req, res) => {
  const grid = await Grid.findById(req.params.id)
  const Record = await GenerateRecordModel(grid)
  const records = await Record.find({})
  res.json({
    ...grid,
    records
  })
}
This works great on the first request, but after that I get an error Discriminator with name “ ” already exists.
I guess this is because only one discriminator with a given name can exist per model.
I could give every discriminator a unique name whenever the function is called:
return Record.discriminator(uuidv4(), new mongoose.Schema(schema), grid._id)
But I imagine that this isn't a good idea because discriminators seem to persist beyond the lifetime of the request, so am I laying the groundwork for a memory leak?
I can see two ways forward:
COMPLICATED? Define all discriminators when the app boots up, rather than just when an HTTP request comes in, and write piles of extra logic to handle the user creating, updating or deleting the definitions over in the Grid collection.
SIMPLER? Abandon using discriminators, just use mongoose.Mixed so anything goes as far as mongoose is concerned, and write any validation myself.
Any ideas?
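One other possibility I'm weighing, sketched below under the assumption that Mongoose keeps already-registered discriminators on Record.discriminators, is to reuse an existing discriminator instead of redefining it on every request:

// Sketch only: return the cached discriminator when it has already been registered.
module.exports = grid => {
  const name = String(grid._id)
  if (Record.discriminators && Record.discriminators[name]) {
    return Record.discriminators[name]
  }
  const schema = grid.fields.map(field => {
    if (field.type === "string") return { [field.name]: String }
    if (field.type === "number") return { [field.name]: Number }
    if (field.type === "checkbox") return { [field.name]: Boolean }
    return { [field.name]: mongoose.Mixed }
  })
  return Record.discriminator(name, new mongoose.Schema(schema))
}

That would avoid the duplicate-name error without generating unbounded discriminator names, though it would still need some invalidation if a grid definition changes.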

Preventing NoSQL injection: Isn't mongoose supposed to convert inputs based on given schema?

Looking to prevent NoSQL injection attacks for a node.js app using mongodb.
var mongoose = require('mongoose'); // "^5.5.9"
var Schema = mongoose.Schema;
var historySchema = new Schema({
  userId: {
    type: String,
    index: true,
  },
  message: {},
  date: {
    type: Date,
    default: Date.now,
  }
});
var history = mongoose.model('history', historySchema);
// the following is to illustrate the logic, not actual code
function getHistory(user) {
  history.find({ userId: user }, function(err, docs) {
    console.log(docs)
  })
}
Based on this answer to a similar question, my understanding is that using mongoose and defining the field as a string should prevent query injection. However, by changing the user input to a query object, it is possible to return all users. For example:
getHistory({$ne: 1}) // returns the history for all users
I am aware of other ways to prevent this type of attack before it gets to the mongoose query, like using mongo-sanitize. But I'd like to know if there's something wrong with the way I defined the schema or if one can't expect mongoose to convert inputs according to the schema.
Thanks in advance!
That part is good enough; you do not need anything else there. You have a method that receives a string and uses that string.
The best approach is to validate any input that can be modified (usually the HTTP request) at the top level, before processing anything. I can recommend https://github.com/hapijs/joi; it's easy to use and you can check whether all required fields are present and whether all fields are in the correct format.
So put the validation into middleware just before it hits your controller. Or at the beginning of your controller.
From that point on you are in full control of all the code and can trust what got through your validation, so it cannot happen that someone passes an object instead of a string and gets through.
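A minimal sketch of that middleware idea, assuming the older Joi API (Joi.validate, pre-v16) that the rest of this page uses:

const Joi = require('joi');

// Sketch only: validate the request body before it ever reaches the controller.
const historyRequestSchema = Joi.object({
  user: Joi.string().required(),
  message: Joi.object(),
  date: Joi.date()
});

function validateHistoryRequest(req, res, next) {
  const { error } = Joi.validate(req.body, historyRequestSchema);
  if (error) return res.status(400).send(error.details[0].message);
  next();
}

// usage: router.post('/history', validateHistoryRequest, historyController);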
Following the "skinny controllers, fat model" paradigm, it would be best to expose a custom validation schema from your model to be used in your controller for POST and PUT requests. This means that any data that attempts to enter your database will first be sanitized against a validation schema. Every Mongoose model should own its own validation schema.
My personal favorite for this is Joi. It's relatively simple and effective. Here is a link to the documentation: https://www.npmjs.com/package/@hapi/joi
A Joi schema permits type checking (i.e., Boolean vs. String vs. Number, etc), mandatory inputs if your document has the field required, and other type-specific enforcement such as "max" for numbers, enumerable values, etc.
Here is an example you'd include in your model:
const Joi = require('joi');
...
function validateHistory(history) {
  const historySchema = {
    userId: Joi.string(),
    message: Joi.object(),
    date: Joi.date()
  }
  return Joi.validate(history, historySchema);
}
...
module.exports.validate = validateHistory;
And then in your controller you can do:
const { validate } = require('../models/history');
...
router.post('/history', async (req, res) => {
  const { error } = validate(req.body.data);
  if (error) return res.status(400).send(error.details[0].message);

  let history = new History({
    userId: req.body.user,
    message: req.body.message,
    date: req.body.date
  })

  history = await history.save();
  res.send(history);
});
*Note that in a real app this route would also have an authentication callback before handling the request.

Query Parse.com migrated database pointer relationship with Mongoose

Context
So we have migrated from Parse.com to a hosted MongoDB database. Now I have to write a script that queries our database directly (not using Parse).
I'm using nodejs / mongoose and am able to retrieve these documents.
Problem
Here is my schema so far:
var StorySchema = new mongoose.Schema({
  _id: String,
  genre: String
});

var ActivitySchema = new mongoose.Schema({
  _id: String,
  action: String,
  _p_story: String /* Also tried: { type: mongoose.Schema.Types.ObjectId, ref: 'Story' } and { type: String, ref: 'Story' } */,
});
I would like to write a query that fetches these documents with the related Story (stored as a pointer).
Activity
  .find({
    action: 'read',
  })
  .exec(function(error, activities) {
    activities.forEach(function(activity) {
      // I would like to use activity._p_story or whatever the means to access the story here
    });
  });
Question
Is there a way to have the fetched activities populated with their story, given that the _p_story field contains Story$ before the object id?
Thanks!
One option I have been looking at is the ability to create a custom data type for each pointer. The unfortunate side is that Parse treats these as 'belongsTo' relationships, but does not store the 'hasMany' relationship that Mongoose wants for populate(). But once this is in place you can easily do loops to get the relational data. Not ideal, but it works, and it is what populate() is really doing under the hood anyway.
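Here is a rough sketch of that manual loop, assuming the stored pointer values look like "Story$<objectId>" and that a Story model is registered:

// Sketch only: strip the Parse class-name prefix and look each story up by hand.
Activity.find({ action: 'read' }).exec(function(error, activities) {
  Promise.all(activities.map(function(activity) {
    var storyId = activity._p_story.replace(/^Story\$/, '');
    return Story.findById(storyId).then(function(story) {
      return { activity: activity, story: story };
    });
  })).then(function(withStories) {
    // each entry now pairs the activity with its related story
    console.log(withStories);
  });
});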
PointerTypeClass.js -> This would work for populating the opposite direction.
var Pointer = function(mongoose) {
  function PointerId(key, options) {
    mongoose.SchemaType.call(this, key, options, 'PointerId');
  }

  PointerId.prototype = Object.create(mongoose.SchemaType.prototype);

  PointerId.prototype.cast = function(val) {
    return 'Pointer$' + val;
  }

  return PointerId;
}

module.exports = Pointer;
Also be sure mongoose knows about the new type by doing mongoose.Schema.Types.PointerId = require('./types/PointerTypeClass')(mongoose);
Lastly, if you are willing to write some Cloud Code, you could create the array of ids for populate to know about the related objects. Basically, in your object's beforeSave you would update the array of ids for the relationship. Hope this helps.

Unable to get the value of a particular key in mongoose if that key is not present in the Schema

UserEventsInfo = new mongoose.Schema({
  name: String,
  username: String,
  event_movie: [String],
  event_tour: [String],
  event_restaurant: [String],
  event_lifetimeevents: [String]
}, { strict: false });
I am able to insert a new key-value pair other than those defined in the schema, but when I try to read the value of that key, I can't. I am using the following code:
UserEventsDetails.find({ username: username }, function(err, docs) {
  if (!docs.length) {
    res.send('datanotavailable');
  }
  else {
    res.send(docs[0][eventname]);
  }
});
Here eventname is a variable. When I add that key to the schema, it returns the value, i.e. it works fine. Otherwise it does not return any value.
Looks like there was an issue submitted like this to mongoose. Here is their response:
The benefit we see in a schemaless database is the ability for our data model to evolve as fast as our features require it, without a linear impact on performance and slower deployment cycles with needless migrations.
If you don't want your data to be normalized and validated prior to saving, then you don't need a tool like Mongoose, you can use the driver directly.
After a little digging there is a way to do this, but you will need to have a field with type Schema.Types.Mixed. So it would look like this:
var schema = new Schema({
  mixed: Schema.Types.Mixed,
});

var Thing = mongoose.model('Thing', schema);

var m = new Thing;
m.mixed = { any: { thing: 'i want' } };
m.save(callback);
As for doing a find on a Mixed field, this SO question answers that.
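For example (assuming the Thing model above), a dot-notation query into the Mixed field generally works:

// Sketch: query into the Mixed field using MongoDB dot notation
Thing.find({ 'mixed.any.thing': 'i want' }, function(err, docs) {
  console.log(docs);
});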
EDIT: forgot to link the documentation of mixed types

node not recognizing duplicated entries

I'm trying to create a basic MEAN stack CRUD API to add shops to my database. I want every shop to have a unique name (to avoid adding duplicates). So far, everything gets saved into the database even if I post the same request 10 times. I went through the code a couple of times and can't figure out what's wrong; if anyone could point me in the right direction I'd be very grateful.
shop model:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var bcrypt = require('bcrypt-nodejs');

// shop schema
var ShopSchema = new Schema({
  name: { type: String, required: true, index: { unique: true }},
  address: { type: String, required: true, index: { unique: true }}
});

module.exports = mongoose.model('Shop', ShopSchema);
post function:
apiRouter.route('/shops')
  // create a shop
  .post(function(req, res) {
    // new instance of shop model
    var shop = new Shop();

    // set the shop information
    shop.name = req.body.name;
    shop.address = req.body.address;

    // save shop and check for errors
    shop.save(function(err) {
      if (err) {
        // duplicate entry
        if (err.code == 11000) {
          return res.json({ success: false, message: 'A shop with that name already exists.' });
        }
        else {
          return res.send(err);
        }
      }
      else {
        res.json({ message: 'Shop created!' });
      }
    });
  })
I do not receive errors of any kind; like I said, everything just gets written into the database.
Thanks for the help.
Basically, your unique index hasn't finished building before the new entries are saved. You can read more about creating unique keys here, but the gist is below. The solution is to create an index over the unique fields ahead of time.
When we declare a property to be unique, we’re actually declaring that we want a database-level index on that property. Some database abstraction layers will issue a query to see if there’s another record with the same value for the unique property, and if that query comes back empty, it allows the save or update to proceed. If you trust this method, you either have incredibly low traffic or you’re about to learn about race conditions, because 2 or more requests could have their checks to the database occur before any writes go out, and you end up with non-unique data in your DB.
In between the time that check query is issued, another insert could come along doing the exact same thing, and you still end up with duplication. Uniqueness can’t be correctly validated at the application level. So it’s good that Mongoose tries to create an index for us.
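A minimal sketch of doing that with Mongoose itself, assuming Mongoose 5+ where Model.init() returns a promise that resolves once index builds have finished (the require path is hypothetical):

var Shop = require('./app/models/shop'); // hypothetical path to the model above

// Sketch only: wait for the unique indexes to exist before serving requests,
// so duplicate names reliably fail with error code 11000.
Shop.init()
  .then(function() {
    app.listen(port);
  })
  .catch(function(err) {
    console.error('Index build failed:', err);
  });

Note that if the collection already contains duplicate names, the unique index build itself will fail until those duplicates are removed.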

Resources