When looking at tutorials there is often a delineation between a schema and a model, particularly when dealing with mongoose/mongodb.
This makes porting over to PostgreSQL somewhat confusing, as 'models' don't seem to exist under that system. What is the difference between the two approaches?
For example, what would be a postgres/sql ORM equivalent of this line?
(mongoose and express.js):
var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
    username: String,
    password: String
});

module.exports = mongoose.model('User', userSchema);
In mongoose, a schema represents the structure of a particular document, either completely or just a portion of the document. It's a way to express expected properties and values as well as constraints and indexes. A model defines a programming interface for interacting with the database (read, insert, update, etc). So a schema answers "what will the data in this collection look like?" and a model provides functionality like "Are there any records matching this query?" or "Add a new document to the collection".
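A minimal sketch of that split, assuming callback-style mongoose (the field names are illustrative):

var mongoose = require('mongoose');

// Schema: "what will the data in this collection look like?"
var userSchema = new mongoose.Schema({
    username: { type: String, required: true, index: true }, // constraint + index
    password: String
});

// Model: the programming interface for reading, inserting, updating, etc.
var User = mongoose.model('User', userSchema);

// "Are there any records matching this query?"
User.findOne({ username: 'alice' }, function (err, doc) { /* ... */ });

// "Add a new document to the collection"
User.create({ username: 'alice', password: 'secret' }, function (err, doc) { /* ... */ });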
In straight RDBMS, the schema is implemented by DDL statements (create table, alter table, etc), whereas there's no direct concept of a model, just SQL statements that can do highly flexible queries (select statements) as well as basic insert, update, delete operations.
Another way to think of it is the nature of SQL allows you to define a "model" for each query by selecting only particular fields as well as joining records from related tables together.
In other ORM systems like Ruby on Rails, the schema is defined via ActiveRecord mechanisms and the model is the extra methods your Model subclass adds that define additional business logic.
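To answer the concrete question, a rough Sequelize (a Node.js SQL ORM) equivalent of the mongoose snippet above might look like this; the connection string is just a placeholder:

const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/mydb'); // placeholder connection

// Schema-like part: column names and types (Sequelize maps this to a "Users" table)
const User = sequelize.define('User', {
    username: DataTypes.STRING,
    password: DataTypes.STRING
});

// Model-like part: User now exposes query/insert/update/delete methods
module.exports = User;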
A schema fundamentally describes the data construct of a document (in a MongoDB collection). This schema defines the name of each item of data and the type of data, whether it is a string, number, date, Boolean, and so on.
A model is a compiled version of the schema. One instance of the model will map to one document in the database.
It is the model that handles the reading, creating, updating, and deleting of documents.
A document in a Mongoose collection is a single instance of a model. So it makes sense that if we're going to work with our data then it will be through the model.
A single instance of a model (e.g. an instance of the User model created by var User = mongoose.model('User', userSchema);) maps directly to a single document in the database.
With this 1:1 relationship, it is the model that handles all document interaction - creating, reading, saving, and deleting. This makes the model a very powerful tool.
Taken from "Mongoose for Application Development", by Simon Holmes, 2013
I imagine models as classes created from a schema (maybe I am mistaken).
MongoDB stores everything in BSON, which is a binary format. A simple Hello World BSON document might look like this internally: \x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00. A computer can deal with all that mumbo-jumbo, but that's hard to read for humans. We want something we can easily understand, which is why developers have created the concept of a database model. A model is a representation of a database record as a nice object in your programming language of choice. In this case, our models will be JavaScript objects. Models can serve as simple objects that store database values, but they often have things like data validation, extra methods, and more. As you'll see, Mongoose has a lot of those features.
Taken from "Express in Action", by Evan Hahn, 2016
In Short:
A Mongoose model is a wrapper on the Mongoose schema. A Mongoose schema defines the structure of the document, default values, validators, etc., whereas a Mongoose model provides an interface to the database for creating, querying, updating, deleting records, etc.
Reference: Introduction to Mongoose for MongoDB - FCC
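A minimal sketch of that relationship (the names are illustrative): the model behaves like a class compiled from the schema, and each document is an instance of it.

var userSchema = new mongoose.Schema({ username: String });

var User = mongoose.model('User', userSchema); // the "class", compiled from the schema

var user = new User({ username: 'alice' });    // an "instance" -> one document
user.save(function (err) { /* document written to the users collection */ });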
I'm using Sequelize as my ORM, and just wondering what the point of having a model is.
It looks like the main thing that matters is the table definitions in your migrations, and models are just a static snapshot of what your tables look like. When you perform a migration, nothing changes in your models: they don't get updated, created, or deleted based on the migration.
It looks like you have to keep your models up to date manually.
So is there any point in having models, or making the effort to keep them updated?
The models define your database schema so that it can be mapped into the ORM layer that Sequelize provides. For me this is the most important feature of Sequelize, not the migrations.
Migrations are used for changing the database schema.
Models are used to map the database schema to your code.
Using models gives you lots of built-in helper methods; associations let you build references between tables and generate complex JOINs, and so on.
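For example (a rough sketch; the model and field names are assumed, and sequelize/DataTypes come from a standard Sequelize setup):

const User = sequelize.define('User', { username: DataTypes.STRING });
const Order = sequelize.define('Order', { total: DataTypes.DECIMAL });

// Associations describe the references between tables...
User.hasMany(Order);
Order.belongsTo(User);

// ...and the model's helper methods generate the JOIN for you:
User.findAll({ include: Order }).then(function (usersWithOrders) {
    // each returned user carries its related Orders
});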
Is it possible to put all the data in one object (document) in the collection? For example:
{
    id: 134....,
    fullname: "jack...",
    email: "amsdk#gm....",
    messages: [
        // all messages here
        // perhaps up to fifty thousand items
    ],
    orders: [
        // all orders here
    ],
    .........
}
Does this put excessive load on the server?
The Mongo docs have helpful guidance for selecting a data model. The nested model you describe above is known as the Embedded Data Model:
In general, embedding provides better performance for read operations, as well as the ability to request and retrieve related data in a single database operation. Embedded data models make it possible to update related data in a single atomic write operation.
Embedding documents simplifies getting data out of the database. The flip-side is that it can be challenging to add and update documents with a complex document hierarchy.
Mongo calls a traditional PK/FK structure the Normalized Data Model:
In general, use normalized data models:
when embedding would result in duplication of data but would not provide sufficient read performance advantages to outweigh the implications of the duplication.
to represent more complex many-to-many relationships.
to model large hierarchical data sets.
The Aggregation Pipeline can be scary, which may make an Embedded Data Model tempting. If you feel that way, check out this free course on the MongoDB Aggregation Pipeline. A Normalized Data Model may be more appealing if you can confidently retrieve related data across collections.
Test read/write queries with a small data set for each model. You'll gain clarity about the benefits and challenges of each approach. For my current project, I found a Normalized Data Model simplified the many write operations required. Ultimately, separating concerns like orders and users into distinct collections proved simpler, especially once I understood how to use $lookup to merge data from related collections.
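For example, with hypothetical users and orders collections (each order storing a userId), a single $lookup stage merges the related orders onto each user at query time:

db.users.aggregate([
    {
        $lookup: {
            from: "orders",         // the other collection
            localField: "_id",      // field on users
            foreignField: "userId", // field on orders
            as: "orders"            // merged array on each result document
        }
    }
]);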
I am redesigning my Node.js application because I want to use the Rich Domain Model concept. Currently I am using an Anemic Domain Model, and this is not scaling well; I just see 'ifs' everywhere.
I have read a bunch of blog posts and DDD-related blogs, but there is something that I simply cannot understand: how do we handle persistence properly?
To start, I would like to describe the layers that I have defined and their purpose:
Persistence Model
Defines the Table Models: the Table name, Columns, Keys, and Relations
I am using Sequelize as ORM, so the Models defined with Sequelize are considered my Persistence Model
Domain Model
Entities and Behaviors. Objects that correspond to the abstractions created as part of the Business Domain
I have created several classes and the best thing here is that I can benefit from hierarchy to solve all problems (without loads of ifs yay).
Data Access Object (DAO)
Responsible for data management and for converting entries of the Persistence Model into entities of the Domain Model. All persistence-related activities belong to this layer
In my case DAOs work on top of the Sequelize models created in the Persistence Model; however, I serialize the records returned from database interactions into different objects based on their properties. E.g.: if I have a table with a column called 'UserType' that contains two values [ADMIN, USER], then when I select entries from this table I serialize the result according to the user type, so a user with type ADMIN would be an instance of the AdminUser class, whereas a user with type USER would simply be a DefaultUser...
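For example, the type-based mapping described above might look roughly like this (AdminUser and DefaultUser are the classes mentioned above):

function toDomainUser(row) {
    // Map a Persistence Model record to the matching Domain Model class
    return row.UserType === 'ADMIN'
        ? new AdminUser(row)
        : new DefaultUser(row);
}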
Service Layer
Responsible for all Generic Business Logic, such as Utilities and other Services that are not part of the behavior of any of the Domain Objects
Client Layer
Any consumer class that plays around with the Objects and is responsible for triggering the Persistence
Now the confusion starts when I implement the Client Layer...
Let's say I am implementing a new REST API:
POST: .../api/CreateOrderForUser/
{
    items: [{
        productId: 1,
        quantity: 4
    }, {
        productId: 3,
        quantity: 2
    }]
}
On my handler function I would have something like:
function (oReq) {
    var oRequestBody = oReq.body;
    var oCurrentUser = oReq.user; // This is already a Domain Object
    var aOrderItems = oRequestBody.items.map(function (mOrderData) {
        return new OrderItem(mOrderData); // Constructor sets the properties internally
    });
    var oOrder = new Order({
        items: aOrderItems
    });
    oCurrentUser.addOrder(oOrder);
    // So far so good... But how do I persist whatever
    // happened above? Should I call each DAO for each entity
    // created? Like, first create the Order, then create the
    // Items, then update the User?
}
One way I found to make it work is to merge the Persistence Model and the Domain Model, which means that oCurrentUser.addOrder(...) would execute the required business logic and then call the OrderDAO to persist the Order along with the Items. The bad thing about this is that addOrder now also has to handle transactions, because I don't want to add the order without the items, or update the User without the Order.
So, what am I missing here?
Aggregates.
This is the missing piece of the story.
In your example, there would likely not be a separate table for the order items (and no relations, no foreign keys...). Items here seem to be values (describing an entity, i.e. "45 USD"), not entities (things that change over time and that we track, i.e. a bank account). So you would not persist OrderItems directly; instead, persist only the Order (with the items in it).
The piece of code I would expect to find in place of your comment could look like orderRepository.save(oOrder);. Additionally, I would expect the user to be a weak reference (by id only) in the order, and not orders contained in a user as your oCurrentUser.addOrder(oOrder); code suggests.
Moreover, the layers you describe make sense, but in your example you mix delivery concerns (concepts like request, response...) with domain concepts (adding items to a new order). I would suggest that you take a look at established patterns that keep these concerns decoupled, such as Hexagonal Architecture. This is especially important for unit testing, as your "client code" will likely be the test instead of the handler function. The retrieve/create - do something - save code would normally be a function in an Application Service describing your use case.
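A rough sketch of such an Application Service function, using the Order, OrderItem, and orderRepository names from above (transaction handling omitted):

// Application Service: retrieve/create - do something - save
function placeOrder(userId, requestItems) {
    var items = requestItems.map(function (mOrderData) {
        return new OrderItem(mOrderData);
    });
    // The user is only a weak reference (by id) inside the order aggregate
    var order = new Order({ userId: userId, items: items });
    // The whole aggregate (order + items) is persisted as one unit
    return orderRepository.save(order);
}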
Vaughn Vernon's "Implementing Domain-Driven Design" is a good book on DDD that would definitely shed more light on the topic.
I have a server which stores records representing Objects, and which uses Mongoose to manage these records. I want to be able to query/update/etc. all objects with a simple API (i.e. a single endpoint). Different types of Objects have some identical attributes, and some different attributes, so a single, static Object schema won't do. Instead, I still want to have a single schema, but I want to be able to change it slightly by adding/deleting fields when I create each new Object, with the fields which are/aren't present depending on the type of the Object. I don't want a mixed schema, because I want error validation for each type of Object. I want a single schema (as opposed to a different schema for each type of Object) so that I can just do
var Object = mongoose.model('Object', ObjectSchema);
Object.findOne({ objectType: "type1" }, function (err, model) {
    ...
});
So basically, I want field validation, while still maintaining some flexibility for attributes, and a single point to query/update/etc. my Object records. If I change the schema with each new Object, recompile it into a model, and create a new instance of that model, will all the instances of the different models (compiled from different modified versions of the same schema) still be queryable as above?
Obviously, I'm new to Mongoose. I just talked a lot about the schema here, and I honestly don't know whether I should have used the word "model" in place of "schema" in some places. I just don't know how I can accomplish all of this. Let me know if I make no sense.
We are successfully using the mongoose model inheritance and discriminator functionality for a very similar scenario. See here for an example:
http://www.laplacesdemon.com/2014/02/19/model-inheritance-node-js-mongoose/
You might also be able to use this plugin:
https://www.npmjs.com/package/mongoose-schema-extend
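A minimal sketch of the discriminator approach (field and model names are illustrative):

var options = { discriminatorKey: 'objectType' };

// Base schema with the attributes shared by every type of Object
var BaseSchema = new mongoose.Schema({ name: String }, options);
var BaseModel = mongoose.model('Object', BaseSchema);

// Each type adds its own fields (and validation) on top of the base
var Type1Model = BaseModel.discriminator('type1',
    new mongoose.Schema({ extraField: String }, options));

// All documents share one collection, so a single query point still works:
BaseModel.findOne({ objectType: 'type1' }, function (err, doc) { /* ... */ });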
Looking at the Mongoose ODM docs, they don't really say much about what ObjectIds are and how they can be used. I think it's something like foreign keys in MongoDB?
If so, embedded documents seem to achieve the same purpose, so when do I use which?
It would be very worthwhile to read the MongoDB documentation or a quick MongoDB intro such as The Little MongoDB Book (it's free) for some background on MongoDB concepts.
To answer your question:
An ObjectID is a unique 12-byte identifier which can be generated by MongoDB as the primary key (_id) for a collection. There is a specification for the ObjectID.
A DBRef (database reference) is an ObjectID referencing an object in another collection. A DBRef does require another query to fetch the related object, and is a convention supported by the client drivers rather than by the MongoDB server. The Mongoid equivalent is called referenced relations.
Embedded documents are nested arrays or subdocuments within a document. In Mongoid these are embedded relations.
The approach to data modelling and schema design in MongoDB is very different from relational databases. There are (intentionally) no joins or foreign keys, but the document-oriented approach allows large amounts of related data to be stored and fetched in a single document. Depending on how you plan to query and update your data, embedding or linking may be a more suitable choice. The schema design page on the MongoDB wiki has some helpful tips to get you started.
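A minimal Mongoose sketch of the two approaches (the schema and field names are illustrative):

// Linking: store an ObjectId that references a document in another collection
var postSchema = new mongoose.Schema({
    title: String,
    author: { type: mongoose.Schema.Types.ObjectId, ref: 'User' } // fetched via populate() or a second query
});

// Embedding: nest the related data directly inside the parent document
var userSchema = new mongoose.Schema({
    name: String,
    addresses: [{ street: String, city: String }] // retrieved with the parent in one read
});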