How to use mongoose schema when you have dynamic values? - node.js

I am trying to create a schema where body can have different keys in it based on the incoming event. But when I try to render the data it only sends _id to the client; event is not part of the results. Did I implement the wrong schema for this approach?
event.model.js
var mongoose = require('bluebird').promisifyAll(require('mongoose'));

var bmpEventSchema = new mongoose.Schema({
    event: {
        type: String,
        body : {}
    }
});

export default mongoose.model('BmpEvent', bmpEventSchema);
JSON document
{
    "_id" : ObjectId("596f672f4c387baa25db5ec6"),
    "event" : {
        "type" : "json",
        "body" : {
            "evntType" : "Smtduki",
            "tkt" : "75522655",
            "cat" : "RNT",
            "esc_lvl" : "4",
            "asset" : "DNEC843027 ATI",
            "esc_tm" : "2017-05-26 09:18:00",
            "tos" : "T3APLS",
            "mcn" : "SL6516",
            "cusTkt" : "",
            "tktSrc" : "BMP",
            "tier1" : "STLMOETS"
        }
    }
}

This is a use case for discriminators. You can make body a Mixed type, but that defeats the purpose of Mongoose, which is to provide validation. Suppose you are modeling a books database. You make a key named professor for an academic book. But then you need a key named novelist for a novel. You need to store genre for a novel but not for educational books.
Now you could make a type key like you did in your use case and play with the results. But then you may have to apply default values for novelist in novels. Or you may need to set a field as required in one of the types and not the other. Another problem with that approach is applying middleware (hooks) to the different types: you may want to perform a different function on creation of a novel than on creation of an educational book. That is just one scenario, and you could have 10 or 15 types, which would be even more cumbersome to handle.
To avoid these issues you could make a different model for each type. But if you do that, when you want to query all books you will have to run a query against each of the models, which is inefficient. You need something at the ODM layer. This is where discriminators come into play.
You make a base model with all the keys you want in all types of books and add a discriminator key to it (refer to the docs). Then you create Novel from this model's discriminator function and add the additional keys that will only exist on a novel. You can create as many child models as you like this way and then use them in a simply polymorphic manner. Internally this will create a single collection named books, but novels will store only the novel keys. The validation, middleware, etc. of the different types of models are handled at the ODM layer. A minimal sketch follows the documentation link below.
http://mongoosejs.com/docs/discriminators.html
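To make the books example concrete, here is a minimal sketch of discriminators (model and field names such as Book, Novel, professor and novelist are illustrative, not from the question):
var mongoose = require('mongoose');

// Base model: keys shared by every kind of book, plus a discriminator key.
var options = { discriminatorKey: 'kind' };
var Book = mongoose.model('Book', new mongoose.Schema({
    title: { type: String, required: true }
}, options));

// Child models: each adds only the keys that make sense for its type.
var AcademicBook = Book.discriminator('AcademicBook', new mongoose.Schema({
    professor: { type: String, required: true }
}, options));

var Novel = Book.discriminator('Novel', new mongoose.Schema({
    novelist: { type: String, required: true },
    genre: String
}, options));

// Everything is stored in the single 'books' collection, but each document
// only holds its own keys, and validation/middleware apply per child model.
Novel.create({ title: 'A Novel', novelist: 'Jane Doe', genre: 'Mystery' });
Book.find({}); // one query returns academic books and novels alike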

The schema is wrong. It should be something like:
var bmpEventSchema = new mongoose.Schema({
    event: {
        type: { type: String }, // 'type' is a data field here, so it has to be declared this way
        body: mongoose.Schema.Types.Mixed
    }
});
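One caveat worth adding, assuming the corrected schema above: Mongoose cannot track in-place changes to Mixed paths, so you have to mark them modified before saving. A minimal sketch (the model variable and someId are assumptions):
var BmpEvent = mongoose.model('BmpEvent', bmpEventSchema);

BmpEvent.findById(someId).then(function (doc) {
    doc.event.body.esc_lvl = '5';   // mutate the Mixed subtree in place
    doc.markModified('event.body'); // tell Mongoose the Mixed path changed
    return doc.save();
});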

There are two approaches I can suggest:
you simply don't list your keys as in your example
you list all possible keys, and mark some of them as required (according to your logic)
Example:
"key": {
type: "string",
required: true
}
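Applied to the event document from the question, a sketch of the second approach could look like this (which keys to mark as required is an assumption, to be adjusted to your logic):
var bmpEventSchema = new mongoose.Schema({
    event: {
        type: { type: String }, // 'type' is a data field, hence the wrapping
        body: {
            evntType: { type: String, required: true },
            tkt:      { type: String, required: true },
            cat:      String,
            esc_lvl:  String,
            asset:    String,
            esc_tm:   String,
            tos:      String,
            mcn:      String,
            cusTkt:   String,
            tktSrc:   String,
            tier1:    String
        }
    }
});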

Related

Maintain a custom order/sort of documents in MongoDB

In my web app XY I'm showing a classic list/table of data (documents) to the user. While all the sort functions provided by MongoDB (and Mongoose, I'm using Mongoose) are pretty clear to me, I'm not interested in sorting by date or alphabetical order. In my case it would be important to let the user maintain a custom sort, as in manually dragging/dropping items around to set a specific order of the documents (e.g. putting favourites at the top of the list). The UI to do this is a no-brainer, but how to actually save that order in the database: brain-freeze.
Problem : How would I go about saving such a custom order of documents ?
What I use : NodeJS / Express / Mongoose (MongoDB)
Ideas
So far I could think of 2 ideas on how to do this.
A : Having an additional key (e.g. orderKey) in the mongoose Schema. Big con: I would need to constantly keep updating all documents' orderKeys. Also I would need some sort of auto-increment for new documents.
const mySch = new Schema({
    orderKey : { type : Number }
});
B : Creating one Schema/Model only for sorting, with an Array containing all documents' _ids for example. The order of the elements within the array would be used as the reference for the custom order of the documents. Whenever the order changes, this Array would be changed as well.
const orderSch = new Schema({
    orderArray : { type : Array }
});
mongoose.model('Order', orderSch);
/* const customOrder = new Order({
    orderArray : [ _id1, _id2, _id3, _id10, _id7, .. ]
}); */
Any more ideas or best practices are highly appreciated!
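For what it's worth, a minimal sketch of how idea B could be persisted after a drag/drop, assuming the Order document already exists and the client sends the reordered array of _ids:
const Order = mongoose.model('Order');

// newOrder: array of document _ids in the order the user arranged them
function saveCustomOrder(orderId, newOrder) {
    return Order.findByIdAndUpdate(
        orderId,
        { $set: { orderArray: newOrder } },
        { new: true }
    ).exec();
}

// When rendering, fetch the documents and sort them in application code
// by the position of their _id inside orderArray.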

Find query in mongodb using cases

I have a collection of products in which I have documents like this
{
    "_id" : ObjectId("5acb1dad698eaa7a254c9017"),
    "txtProductCode" : "1233A",
    "txtModelCode" : "00M",
    "txtPartNo" : "00P",
    "txtSerialNo" : "00S",
    "txtProductName" : "Watch",
    "traderId" : ObjectId("5ac5fb29b0f9b3444e6c1ef2")
}
I want to search a product based on its name and traderId for which I used
db.getCollection('product').find( {$and:[{'txtProductName':"Watch"},{"traderId" : ObjectId("5ac5fb29b0f9b3444e6c1ef2")}]})
It's working fine, but now if a user has entered a model number then the search should also use the model number; if the user has not entered a model number then it should search without it.
So my question is: do I have to use cases like this
if(req.body.modelNo)
db.getCollection('product').find( {$and:[{'txtProductName':"Watch"},{"traderId" : ObjectId("5ac5fb29b0f9b3444e6c1ef2")},{'txtModelCode':"00M"}]})
else
db.getCollection('product').find( {$and:[{'txtProductName':"Watch"},{"traderId" : ObjectId("5ac5fb29b0f9b3444e6c1ef2")}]})
or is there a way to do this without making cases? I have to do this for multiple conditions, so I am trying not to use cases.
Create the query object first then add the extra key with a conditional check. No need to explicitly use the $and operator when specifying a comma separated list of expressions as it's implicitly provided:
let query = {
    'txtProductName': 'Watch',
    'traderId': ObjectId('5ac5fb29b0f9b3444e6c1ef2')
};
if (req.body.modelNo) query['txtModelCode'] = req.body.modelNo;
db.getCollection('product').find(query);
If using the $and operator, you can push the additional query into an array then use the list for the $and operator:
let andOperator = [
    { 'txtProductName': 'Watch' },
    { 'traderId': ObjectId('5ac5fb29b0f9b3444e6c1ef2') }
];
if (req.body.modelNo) andOperator.push({ 'txtModelCode': req.body.modelNo });
// if (req.body.modelNo) andOperator = [...andOperator, { 'txtModelCode': req.body.modelNo }];
db.getCollection('product').find({ '$and': andOperator });
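If there are several optional inputs, the same pattern generalizes by looping over a map of request fields to document keys (a sketch; the partNo and serialNo request fields are assumptions):
let query = {
    'txtProductName': 'Watch',
    'traderId': ObjectId('5ac5fb29b0f9b3444e6c1ef2')
};

// request field -> document key; only the values the user actually sent are added
const optionalFields = {
    modelNo: 'txtModelCode',
    partNo: 'txtPartNo',
    serialNo: 'txtSerialNo'
};

Object.keys(optionalFields).forEach(field => {
    if (req.body[field]) query[optionalFields[field]] = req.body[field];
});

db.getCollection('product').find(query);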
Well, I would have done this in the following way.
First, you should send JSON in a specific format to the backend, for example:
[{'txtModelCode':"00M"},{'txtPartNo':"AC"},{'Yts':"xyz"}]
OR
[{'txtModelCode':"00M"},{'txtPartNo':"AC"}]
OR
[{'txtModelCode':"00M"}]
This is the payload that you should expect in req.body. Finally, you can use it in your find() criteria, something like:
db.getCollection('product').find( {$and:[{'txtProductName':"Watch"},
{"traderId" : ObjectId("5ac5fb29b0f9b3444e6c1ef2")}, ...req.body]})
... is called the spread operator. Spread syntax allows an iterable such as an array expression or string to be expanded. Read more about it here.
This makes it totally dynamic: any new field added to the collection can be used directly in the find criteria, and you never have to add an extra line of code.

Models and Schemas, why can't I set properties on my Model [duplicate]

The two types of objects seem to be so close to one another that having both feels redundant. What is the point of having both schemas and models?
EDIT: Although this has been useful for many people, as mentioned in the comments it answers the "how" rather than the why. Thankfully, the why of the question has been answered elsewhere also, with this answer to another question. This has been linked in the comments for some time but I realise that many may not get that far when reading.
Often the easiest way to answer this type of question is with an example. In this case, someone has already done it for me :)
Take a look here:
http://rawberg.com/blog/nodejs/mongoose-orm-nested-models/
EDIT: The original post (as mentioned in the comments) seems to no longer exist, so I am reproducing it below. Should it ever return, or if it has just moved, please let me know.
It gives a decent description of using schemas within models in mongoose and why you would want to do it, and also shows you how to push tasks via the model while the schema is all about the structure etc.
Original Post:
Let’s start with a simple example of embedding a schema inside a model.
var TaskSchema = new Schema({
    name: String,
    priority: Number
});

TaskSchema.virtual('nameandpriority')
    .get(function () {
        return this.name + ' (' + this.priority + ')';
    });

TaskSchema.method('isHighPriority', function () {
    if (this.priority === 1) {
        return true;
    } else {
        return false;
    }
});

var ListSchema = new Schema({
    name: String,
    tasks: [TaskSchema]
});

mongoose.model('List', ListSchema);
var List = mongoose.model('List');
var sampleList = new List({name:'Sample List'});
I created a new TaskSchema object with basic info a task might have. A Mongoose virtual attribute is set up to conveniently combine the name and priority of the Task. I only specified a getter here but virtual setters are supported as well.
I also defined a simple task method called isHighPriority to demonstrate how methods work with this setup.
In the ListSchema definition you'll notice how the tasks key is configured to hold an array of TaskSchema objects. The tasks key will become an instance of DocumentArray which provides special methods for dealing with embedded Mongo documents.
For now I only passed the ListSchema object into mongoose.model and left the TaskSchema out. Technically it's not necessary to turn the TaskSchema into a formal model since we won't be saving it in its own collection. Later on I'll show you how it doesn't harm anything if you do, and how it can help to organize all your models in the same way, especially when they start spanning multiple files.
With the List model setup let’s add a couple tasks to it and save them to Mongo.
var List = mongoose.model('List');
var sampleList = new List({name:'Sample List'});

sampleList.tasks.push(
    {name:'task one', priority:1},
    {name:'task two', priority:5}
);

sampleList.save(function(err) {
    if (err) {
        console.log('error adding new list');
        console.log(err);
    } else {
        console.log('new list successfully saved');
    }
});
The tasks attribute on the instance of our List model (sampleList) works like a regular JavaScript array and we can add new tasks to it using push. The important thing to notice is the tasks are added as regular JavaScript objects. It’s a subtle distinction that may not be immediately intuitive.
You can verify from the Mongo shell that the new list and tasks were saved to mongo.
db.lists.find()
{ "tasks" : [
{
"_id" : ObjectId("4dd1cbeed77909f507000002"),
"priority" : 1,
"name" : "task one"
},
{
"_id" : ObjectId("4dd1cbeed77909f507000003"),
"priority" : 5,
"name" : "task two"
}
], "_id" : ObjectId("4dd1cbeed77909f507000001"), "name" : "Sample List" }
Now we can use the ObjectId to pull up the Sample List and iterate through its tasks.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
    console.log(list.name + ' retrieved');
    list.tasks.forEach(function(task, index, array) {
        console.log(task.name);
        console.log(task.nameandpriority);
        console.log(task.isHighPriority());
    });
});
If you run that last bit of code you’ll get an error saying the embedded document doesn’t have a method isHighPriority. In the current version of Mongoose you can’t access methods on embedded schemas directly. There’s an open ticket to fix it and after posing the question to the Mongoose Google Group, manimal45 posted a helpful work-around to use for now.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
    console.log(list.name + ' retrieved');
    list.tasks.forEach(function(task, index, array) {
        console.log(task.name);
        console.log(task.nameandpriority);
        console.log(task._schema.methods.isHighPriority.apply(task));
    });
});
If you run that code you should see the following output on the command line.
Sample List retrieved
task one
task one (1)
true
task two
task two (5)
false
With that work-around in mind let’s turn the TaskSchema into a Mongoose model.
mongoose.model('Task', TaskSchema);
var Task = mongoose.model('Task');
var ListSchema = new Schema({
    name: String,
    tasks: [Task.schema]
});
mongoose.model('List', ListSchema);
var List = mongoose.model('List');
The TaskSchema definition is the same as before so I left it out. Once it's turned into a model we can still access its underlying Schema object using dot notation.
Let’s create a new list and embed two Task model instances within it.
var demoList = new List({name:'Demo List'});
var taskThree = new Task({name:'task three', priority:10});
var taskFour = new Task({name:'task four', priority:11});

demoList.tasks.push(taskThree.toObject(), taskFour.toObject());

demoList.save(function(err) {
    if (err) {
        console.log('error adding new list');
        console.log(err);
    } else {
        console.log('new list successfully saved');
    }
});
As we’re embedding the Task model instances into the List we’re calling toObject on them to convert their data into plain JavaScript objects that the List.tasks DocumentArray is expecting. When you save model instances this way your embedded documents will contain ObjectIds.
The complete code example is available as a gist. Hopefully these work-arounds help smooth things over as Mongoose continues to develop. I’m still pretty new to Mongoose and MongoDB so please feel free to share better solutions and tips in the comments. Happy data modeling!
Schema is an object that defines the structure of any documents that will be stored in your MongoDB collection; it enables you to define types and validators for all of your data items.
Model is an object that gives you easy access to a named collection, allowing you to query the collection and use the Schema to validate any documents you save to that collection. It is created by combining a Schema, a Connection, and a collection name.
Originally phrased by Valeri Karpov, MongoDB Blog
I don't think the accepted answer actually answers the question that was posed. The answer doesn't explain why Mongoose has decided to require a developer to provide both a Schema and a Model variable. An example of a framework where they have eliminated the need for the developer to define the data schema is django--a developer writes up their models in the models.py file, and leaves it to the framework to manage the schema. The first reason that comes to mind for why they do this, given my experience with django, is ease-of-use. Perhaps more importantly is the DRY (don't repeat yourself) principle--you don't have to remember to update the schema when you change the model--django will do it for you! Rails also manages the schema of the data for you--a developer doesn't edit the schema directly, but changes it by defining migrations that manipulate the schema.
One reason I could understand that Mongoose would separate the schema and the model is instances where you would want to build a model from two schemas. Such a scenario might introduce more complexity than is worth managing--if you have two schemas that are managed by one model, why aren't they one schema?
Perhaps the original question is more a relic of the traditional relational database world. In the NoSQL/Mongo world, the schema is a little more flexible than in MySQL/PostgreSQL, and thus changing the schema is more common practice.
To understand why, you have to understand what Mongoose actually is.
Well, Mongoose is an object data modeling library for MongoDB and Node.js, providing a higher level of abstraction. It's a bit like the relationship between Express and Node: Express is a layer of abstraction over regular Node, while Mongoose is a layer of abstraction over the regular MongoDB driver.
An object data modeling library is just a way for us to write JavaScript code that will then interact with a database. So we could just use the regular MongoDB driver to access our database, and it would work just fine.
But instead we use Mongoose because it gives us a lot more functionality out of the box, allowing for faster and simpler development of our applications.
So, some of the features Mongoose gives us are schemas to model our data and relationships, easy data validation, a simple query API, middleware, and much more.
In Mongoose, a schema is where we model our data: where we describe the structure of the data, default values, and validation. We then take that schema and create a model out of it. A model is basically a wrapper around the schema which allows us to actually interface with the database in order to create, delete, update, and read documents.
Let's create a model from a schema.
const tourSchema = new mongoose.Schema({
    name: {
        type: String,
        required: [true, 'A tour must have a name'],
        unique: true,
    },
    rating: {
        type: Number,
        default: 4.5,
    },
    price: {
        type: Number,
        required: [true, 'A tour must have a price'],
    },
});

// tour model
const Tour = mongoose.model('Tour', tourSchema);
By convention, the first letter of a model name must be capitalized.
Let's create an instance of the model we built from the schema, and use it to interact with our database.
const testTour = new Tour({ // instance of our model
    name: 'The Forest Hiker',
    rating: 4.7,
    price: 497,
});

// saving testTour document into database
testTour
    .save()
    .then((doc) => {
        console.log(doc);
    })
    .catch((err) => {
        console.log(err);
    });
So by having both a schema and a model, Mongoose makes our life easier.
Think of Model as a wrapper for schemas. Schemas define the structure of your document: what kind of properties you can expect and what their data types will be (String, Number, etc.). Models provide a kind of interface to perform CRUD on a schema. See this post on FCC.
Schema basically models your data (where you provide datatypes for your fields) and can do some validations on your data. It mainly deals with the structure of your collection.
Whereas the model is a wrapper around your schema to provide you with CRUD methods on collections. It mainly deals with adding/querying the database.
Having both schema and model could appear redundant when compared to other frameworks like Django (which provides only a Model) or SQL (where we create only Schemas and write SQL queries and there is no concept of model). But, this is just the way Mongoose implements it.

Modelling reference to embedding document using Mongoose

I am modelling two types of events (events and subevents) in a MongoDB like this:
// SubeventSchema must be defined before it is embedded below
var SubeventSchema = mongoose.Schema({
    'name' : String
});

var EventSchema = mongoose.Schema({
    'name' : String,
    'subEvent' : [ SubeventSchema ]
});
Now when I query a subevent I want to be able to also retrieve data about its corresponding superevent, so that some example data retrieved using Mongoose population feature could look like this:
EventModel.findOne({
    name : 'Festival'
})
.populate('subEvent')
.exec(function (err, evt) { return evt; });

{
    name : 'Festival',
    subEvent : [
        { name : 'First Concert' },
        { name : 'Second Concert' }
    ]
}

EventModel.findOne({
    'subEvent.name' : 'First Concert'
}, {
    'subEvent.$' : 1
})
.populate('superEvent') // This will not work, this is the actual problem of my question
.exec(function (err, subevt) { return subevt; });

{
    name : 'First Concert',
    superEvent : {
        name : 'Festival'
    }
}
A solution I can think of is not to embed but to reference like this:
var EventSchema = mongoose.Schema({
    'name' : String,
    'subEvent' : [ {
        'type' : mongoose.Schema.Types.ObjectId,
        'ref' : 'SubeventSchema'
    } ]
});

var SubeventSchema = mongoose.Schema({
    'name' : String,
    'superEvent' : {
        'type' : mongoose.Schema.Types.ObjectId,
        'ref' : 'EventSchema'
    }
});
I am looking for a solution based on the first example using embedded subevents, though. Can this be achieved and in case yes, how?
I think your mental model of document embedding isn't correct. The major misunderstanding (and this is very common) is that you "query a subevent" (query an embedded document). According to your current Event schema, a Subevent is just a document embedded in an Event document. The embedded SubEvent is not a top-level document; it's not a member of any collection in MongoDB. Therefore, you don't query for it. You query for Events (which are the actual collection-level documents in your schema) whose subEvents have certain properties. E.g. one way people translate the query
db.events.find({ "subEvent" : { "name" : "First Concert" } })
into plain English is "find all the subevents with the name 'First Concert'". This is wrong. The right translation is "find all events that have at least one subevent whose name is 'First Concert'" (the "at least one" part depends on knowing that subEvent is an array).
Coming back to the specific question, you can hopefully see now that trying to do a populate of a "superevent" on a subevent makes no sense. Your queries return events. The optimal schema, be it subevents embedded in events, one- or two-way references between events and subevents documents in separate collections, or events denormalized into the constituent subevent documents, cannot be determined from the information in the question because the use case is not specified.
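To make that concrete, with the embedded schema you query the events collection using dot notation and, if you only want the matching subevent, project it out; the event document you get back is itself the "superevent". A sketch using the models from the question:
EventModel.findOne(
    { 'subEvent.name': 'First Concert' },
    { name: 1, 'subEvent.$': 1 }           // parent name plus the matched subevent
).exec(function (err, evt) {
    // evt is an Event document; no populate is needed to reach the "superevent"
    console.log(evt.name);                 // 'Festival'
    console.log(evt.subEvent[0].name);     // 'First Concert'
});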
Perhaps this is a situation where you need to modify your thinking rather than the schema itself. Mongoose .populate() supports the basic ideas of MongoDB "projection", or more commonly referred to as "field selection". So rather than try to model around this, just select the fields you want to populate.
So your second schema form is perfectly valid, just change how you populate:
EventModel.find({}).populate("subEvent", "name").exec(function (err, docs) {
    // "subEvent" array items only contain "name" now
});
This is actually covered in the Mongoose documentation under the "populate" section.

What's the best way of saving a document with revisions in a key-value store?

I'm new to key-value stores and I need your recommendation. We're working on a system that manages documents and their revisions, a bit like a wiki does. We're thinking about saving this data in a key-value store.
Please don't just recommend the database you prefer, because we want to hack it so we can use many different key-value databases. We're using node.js so we can easily work with JSON.
My question is: what should the structure of the database look like? We have metadata for each document (timestamp, lasttext, id, latestrevision) and we have data for each revision (the change, the author, timestamp, etc.). So, which key/value structure do you recommend?
thx
Cribbed from the MongoDB groups. It is somewhat specific to MongoDB; however, the ideas are pretty generic.
Most of these history implementations break down to two common strategies.
Strategy 1: embed history
In theory, you can embed the history of a document inside of the document itself. This can even be done atomically.
> db.docs.save( { _id : 1, text : "Original Text" } )
> var doc = db.docs.findOne()
> db.docs.update( {_id: doc._id}, { $set : { text : 'New Text' }, $push : { hist : doc.text } } )
> db.docs.find()
{ "_id" : 1, "hist" : [ "Original Text" ], "text" : "New Text" }
Strategy 2: write history to separate collection
> db.docs.save( { _id : 1, text : "Original Text" } )
> var doc = db.docs.findOne()
> db.docs_hist.insert ( { orig_id : doc._id, ts : Math.round((new Date()).getTime() / 1000), data : doc } )
> db.docs.update( {_id:doc._id}, { $set : { text : 'New Text' } } )
Here you'll see that I do two writes. One to the master collection and
one to the history collection.
To get fast history lookup, just grab the original ID:
> db.docs_hist.ensureIndex( { orig_id : 1, ts : 1 })
> db.docs_hist.find( { orig_id : 1 } ).sort( { ts : -1 } )
Both strategies can be enhanced by only displaying diffs
You could hybridize by adding a link from history collection to original collection
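For instance, a sketch of that hybrid link in the same shell style (the hist_ids field name is an assumption):
> var histId = ObjectId()
> db.docs_hist.insert( { _id : histId, orig_id : doc._id, ts : Math.round((new Date()).getTime() / 1000), data : doc } )
> db.docs.update( { _id : doc._id }, { $set : { text : 'New Text' }, $push : { hist_ids : histId } } )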
What's the best way of saving a document with revisions in a key-value store?
It's hard to say there is a "best way". There are obviously some trade-offs being made here.
Embedding:
atomic changes on a single doc
can result in large documents, may break the reasonable size limits
probably have to enhance code to avoid returning full hist when not necessary
Separate collection:
easier to write queries
not atomic, needs two operations (do you have transactions?)
more storage space (extra indexes on original docs)
I'd keep a hierarchy of the real data under each document with the revision data attached, for instance:
{
    "revisions" : [
        {
            "timestamp" : "2011040711350621",
            "data" : { ... the real data here .... }
        },
        {
            "timestamp" : "2011040711350716",
            "data" : { ... the real data here .... }
        }
    ]
}
Then use the push operation to add new versions and periodically remove the old versions. You can use the last (or first) filter to only get the latest copy at any given time.
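As a concrete sketch of that push-and-trim idea in MongoDB syntax (the revisions field name and the cap of 10 are assumptions):
db.docs.update(
    { _id : 1 },
    { $push : {
        revisions : {
            $each  : [ { timestamp : "2011040711350800", data : { /* new version */ } } ],
            $slice : -10    // keep only the 10 most recent revisions
        }
    } }
)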
I think there are multiple approaches and this question is old but I'll give my two cents as I was working on this earlier this year. I have been using MongoDB.
In my case, I had a User account that then had Profiles on different social networks. We wanted to track changes to social network profiles and wanted revisions of them so we created two structures to test out. Both methods had a User object that pointed to foreign objects. We did not want to embed objects from the get-go.
A User looked something like:
User {
"tags" : [Tags]
"notes" : "Notes"
"facebook_profile" : <combo_foreign_key>
"linkedin_profile" : <same as above>
}
and then, for the combo_foreign_key we used this pattern (Using Ruby interpolation syntax for simplicity)
combo_foreign_key = "#{User.key}__#{new_profile.last_updated_at}"
facebook_profiles {
combo_foreign_key: facebook_profile
... and you keep adding your foreign objects in this pattern
}
This gave us O(1) lookup of the latest FacebookProfile of a User but required us to keep the latest FK stored in the User object. If we wanted all of the FacebookProfiles we would then ask for all keys in the facebook_profiles collection with the prefix of "#{User.key}__" and this was O(N)...
The second strategy we tried was storing an array of those FacebookProfile keys on the User object so the structure of the User object changed from
"facebook_profile" : <combo_foreign_key>
to
"facebook_profile" : [<combo_foreign_key>]
Here we'd just append the new combo_key when we added a new profile variation. Then we'd just do a quick sort of the "facebook_profile" attribute and index on the largest one to get our latest profile copy. This method had to sort M strings and then index the FacebookProfile based on the largest item in that sorted list. A little slower for grabbing the latest copy, but it gave us the advantage of knowing every version of a User's FacebookProfile in one swoop, and we did not have to worry about ensuring that the foreign_key was really the latest profile object.
At first our revision counts were pretty small and they both worked pretty well. I think I prefer the first one over the second now.
Would love input from others on ways they went about solving this issue. The GIT idea suggested in another answer actually sounds really neat to me and for our use case would work quite well... Cool.
