I hope I'm having a big brainfart moment, but here's my situation in a scraping scenario:
I want to be able to scrape across multiple machines and cores. Per site, I have different front pages that I scrape (e.g. for the site stackoverflow I'd have the fronts stackoverflow.com/questions/tagged/javascript and stackoverflow.com/questions/tagged/nodejs).
An article could be on every front, and when I discover an article I want to create an Article if the url is unknown; if it's known, I want to add a Front entry to article.discover if the Front is unknown, and otherwise insert my FrontDiscovery into the appropriate Front.
Here are my schemas:
FrontDiscovery = new Schema({
  _id  : { type: ObjectId, auto: true },
  date : { type: Date, default: Date.now },
  dims : { type: Object, default: null },
  pos  : { type: Object, default: null }
});

Front = new Schema({
  _id   : { type: ObjectId, auto: true },
  url   : { type: String }, // front
  found : [ FrontDiscovery ]
});

Article = new Schema({
  _id      : { type: ObjectId, auto: true },
  url      : { type: String, index: { unique: true } },
  site     : { type: String },
  discover : [ Front ]
});
The problem I think I will eventually run into is a race condition: two job-runners (in parallel) find the same (previously unknown) article and both create a new one. Yes, I have a unique index on the url and could handle it that way, but quite inelegantly imho.
But let's go further: when, for whatever reason, my two job-runners scrape the same front at the same time and both notice that there is no entry for that Front yet, both create a new one and add their FrontDiscovery, and I'd end up with two entries for the same Front.
What are your strategies to circumvent such a situation? findByIdAndUpdate with upsert:true for each document separately? If so, how can I push something only to the embedded document collection without overwriting everything else, but still create the defaults if the document hasn't been created yet?
Thank you for any help in directing me in the right direction! I really hope I'm having a massive brainfart.
An update with upsert=true can be used to perform an atomic "insert or update" (http://docs.mongodb.org/manual/core/update/#update-operations-with-the-upsert-flag).
For instance, if we want to make sure a document in the Front collection with a specific url is inserted exactly once, we could run something like:
db.Front.update(
  { url: 'http://example.com' },
  { $set: { url: 'http://example.com' } },
  { upsert: true }
)
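For the original schemas, the same idea in Mongoose might look roughly like this (an untested sketch: Front is the compiled model, and frontUrl/discovery are placeholders for whatever the scraper produces):

Front.findOneAndUpdate(
  { url: frontUrl },               // match the Front by its natural key
  { $push: { found: discovery } }, // only append to the embedded array
  {
    upsert: true,             // create the Front if it doesn't exist yet
    new: true,                // return the updated document
    setDefaultsOnInsert: true // apply schema defaults on insert (newer Mongoose)
  },
  function (err, front) {
    // front is the created-or-updated Front document
  }
);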
Operations on a single document in MongoDB are always atomic. If you make updates that span multiple documents, no atomicity is guaranteed. In such cases, ask yourself: do I really need the operations to be atomic? If the answer is no, then you will probably find a way to work with potentially inconsistent data. If the answer is yes and you want to stick with MongoDB, check out the design pattern for Two Phase Commits.
Related
We have a User model and a News model; the News model has a viewsCount field. I want to increment this view count when a GET request is made by a user.
When a specific user makes a GET request, the view count should increment by one; each user counts for just one view.
const NEWSModel = new Schema({
  viewesCount: { type: Number },
  Publisher: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'User',
    required: true
  },
  LikesCount: { type: Number },
  DislikeCount: { type: Number },
  Comments: CommenTs
});
Every user can view the news as many times as they want, but should only count as one view. How can I do that?
You can change your model like this, and then whenever a news item is fetched, just push the user id to the viewedBy field:

viewedBy: [{
  type: mongoose.Schema.Types.ObjectId,
  ref: "User"
}]

news.viewedBy.push(userId)
If you don't have a lot of users, you can add an additional field to the news model, like users_viewed, which would be an array of unique user ids.
Then make an additional check before incrementing the views count.
If the user who requested the news is already in the users_viewed array, skip any additional actions.
If not, increment the views counter.
But if you do have a lot of users, it's better to store the views counter in Redis, to skip the request to the database and increment an in-memory counter.
The logic for storing and showing the data would be the same, but you'll reduce the load on your database and speed up the whole process.
[UPDATE] According to your comment about the number of users:
To make things work you can use this package.
First of all, after a client requests a news item, you can store all the news data in your cache (to reduce the number of requests to your database).
Now you have a few possible ways to handle the number of views.
I think the easiest to implement would be to add the user's unique identifier to a SET and return the number of users in the SET using SCARD.
With this solution you don't need to check whether the user has already watched the news, because a set holds only unique values (which is also why we need the user's unique identifier).
And you use just 2 Redis requests, which is pretty good for heavy-load services.
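A minimal sketch of that idea with the callback-based node-redis client (the key naming and the newsId/userId variables are assumptions for illustration):

const redis = require('redis');
const client = redis.createClient();

// SADD is a no-op if the user id is already in the set
client.sadd('news:' + newsId + ':viewers', String(userId), function (err) {
  if (err) throw err;
  // SCARD returns the number of unique members, i.e. the view count
  client.scard('news:' + newsId + ':viewers', function (err, count) {
    if (err) throw err;
    console.log('Total views:', count);
  });
});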
You can have another field called viewedBy, of type array, in which you store user ids. Then it will be easier to check whether a user has already viewed your post, or to count them.
File: news.model.js
const News = new Schema({
  viewedBy: [{
    type: mongoose.Schema.Types.ObjectId,
    ref: "User"
  }],
  // other properties...
});
File: news.controller.js
const user = await User.findOne({...}); // get the current user
const news = await News.findOne({...}); // get a news item

/*
  Update the views by adding the current user's id if it's not already there.
  Thanks to '$addToSet', the update does nothing if the user id is already present.
*/
await news.update({ $addToSet: { viewedBy: user._id } });

// Getting the views count (note: re-fetch the document if the length
// needs to reflect the update above)
console.log('Total views:', news.viewedBy.length);
More about $addToSet: https://docs.mongodb.com/manual/reference/operator/update/addToSet/
I have an article model like this:
var ArticleSchema = new Schema({
  type: String,
  title: String,
  content: String,
  hashtags: [String],
  comments: [{
    type: Schema.ObjectId,
    ref: 'Comment'
  }],
  replies: [{
    type: Schema.ObjectId,
    ref: 'Reply'
  }],
  status: String,
  statusMeta: {
    createdBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    createdDate: Date,
    updatedBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    updatedDate: Date,
    deletedBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    deletedDate: Date,
    undeletedBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    undeletedDate: Date,
    bannedBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    bannedDate: Date,
    unbannedBy: {
      type: Schema.ObjectId,
      ref: 'User'
    },
    unbannedDate: Date
  }
}, { minimize: false })
When a user creates or modifies an article, I create the hashtags:
ArticleSchema.pre('save', true, function (next, done) {
  var self = this
  // Re-extract hashtags whenever the content has changed
  if (self.isModified('content')) {
    self.hashtags = helper.listHashtagsInText(self.content)
  }
  done()
  return next()
})
For example, if a user writes "Hi, #greeting, i love #friday", I will store ['greeting', 'friday'] in the hashtags list.
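A hypothetical sketch of such a helper (the actual helper.listHashtagsInText is not shown; this regex version is an assumption):

// Hypothetical helper: extract '#word' tokens and strip the leading '#'
function listHashtagsInText(text) {
  var matches = text.match(/#(\w+)/g) || []
  return matches.map(function (tag) {
    return tag.slice(1)
  })
}

// listHashtagsInText('Hi, #greeting, i love #friday') -> ['greeting', 'friday']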
I am thinking about creating an index for hashtags to make queries on hashtags faster. But in the Mongoose manual, I found this:
When your application starts up, Mongoose automatically calls
ensureIndex for each defined index in your schema. Mongoose will call
ensureIndex for each index sequentially, and emit an 'index' event on
the model when all the ensureIndex calls succeeded or when there was
an error. While nice for development, it is recommended this behavior
be disabled in production since index creation can cause a significant
performance impact. Disable the behavior by setting the autoIndex
option of your schema to false.
http://mongoosejs.com/docs/guide.html
So is indexing faster or slower for mongoDB/Mongoose?
Also, even if I create an index like
hashtags: { type: [String], index: true }
How can I make use of the index in my query? Or will it just magically become faster for normal queries like:
Article.find({hashtags: 'friday'})
You are reading it wrong
You are misreading the intent of the quoted block as to what .ensureIndex() (now deprecated, but still called by mongoose code) actually does in this context.
In mongoose, you define an index either at the schema or the model level, as appropriate to your design. What mongoose "automatically" does for you is, on connection, inspect each registered model and then call the appropriate .ensureIndex() methods for the index definitions provided.
What does this actually do?
Well, in most cases, once your application has already been started before and the .ensureIndexes() calls have run: absolutely nothing. That is a bit of an overstatement, but it more or less rings true.
Because the index definition has already been created on the server collection, a subsequent call does not do anything; that is, it does not drop the index and "re-create" it. So the real cost is basically nothing once the index itself has been created.
Creating indexes
So since mongoose is just a layer on top of the standard API, the createIndex() method contains all the details of what is happening.
There are some details to consider here, such as the fact that an index build can happen in the "background"; while this is less intrusive to your application, it comes at its own cost. Notably, an index built in the "background" will be larger than one built in the foreground, which blocks other operations.
Also, all indexes come at a cost, notably in disk usage, as well as the additional cost of writing the index information outside of the collection data itself.
The advantage of an index is that it is much faster to "search" for values contained within the index than to seek through the whole collection and match the possible conditions.
These are the basic "trade-offs" associated with indexes.
Deployment Pattern
Back to the quoted block from the documentation, there is a real intent behind this advice.
It is typical in deployment patterns and particularly with data migrations to do things in this order:
Populate data to relevant collections/tables
Enable indexes on the collection/table data relevant to your needs
This is because there is a cost to index creation and, as mentioned earlier, it is desirable to get the most optimal size from the index build, as well as to avoid having every document insertion also carry the overhead of writing an index entry while you are doing this "load" in bulk.
So that is what indexes are for, those are their costs and benefits, and that is the message in the mongoose documentation explained.
In general though, I suggest reading up on Database Indexes for what they are and what they do. Think of walking into a library to find a book. There is a card index at the entrance. Do you walk around the library to find the book you want, or do you look it up in the card index to find where it is? That index took someone time to create and keep updated, but it saves "you" the time of walking around the whole library just so you can find your book.
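And to answer the practical part of the question: index: true on an array field creates a multikey index, and a plain equality query on that field uses it automatically, with no special query syntax needed. A minimal sketch reusing the question's own names:

// The index declared in the schema (from the question)
hashtags: { type: [String], index: true }

// This query will use the multikey index automatically:
Article.find({ hashtags: 'friday' })

// To confirm, inspect the query plan in the mongo shell:
// db.articles.find({ hashtags: 'friday' }).explain()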
Here is my Mongoose Schema:
var SchemaA = new Schema({
  field1: String,
  .......
  fieldB: { type: Schema.Types.ObjectId, ref: 'SchemaB' }
});

var SchemaB = new Schema({
  field1: String,
  .......
  fieldC: { type: Schema.Types.ObjectId, ref: 'SchemaC' }
});

var SchemaC = new Schema({
  field1: String,
  .......
  .......
  .......
});
When I access SchemaA using a find query, I want to have the fields/properties of SchemaA along with SchemaB and SchemaC, the same way a join works in a SQL database.
This is my approach:
SchemaA.find({})
  .populate('fieldB')
  .exec(function (err, result) {
    SchemaB.populate(result.fieldC, { path: 'fieldB' }, function (err, result) {
      .............................
    });
  });
The above code is working perfectly, but the problem is:
I want to have the information/properties/fields of SchemaC through SchemaA, and I don't want to populate the fields/properties of SchemaB.
The reason for not wanting the properties of SchemaB is that the extra population slows the query down unnecessarily.
Long story short:
I want to populate SchemaC through SchemaA without populating SchemaB.
Can you please suggest any way/approach?
As an avid mongodb fan, I suggest you use a relational database for highly relational data - that's what it's built for. You are losing all the benefits of mongodb when you have to perform 3+ queries to get a single object.
Buuuuuut, I know that comment will fall on deaf ears. Your best bet is to be as conscious as you can about performance. Your first step is to limit the fields to the minimum required. This is just good practice even with basic queries and any database engine - only get the fields you need (e.g. SELECT * FROM === bad... just stop doing it!). You can also try doing lean queries to help save a lot of the post-processing work mongoose does with the data. I didn't test this, but it should work...
SchemaA.find({}, 'field1 fieldB', { lean: true })
  .populate({
    path: 'fieldB',
    select: 'fieldC',
    options: { lean: true }
  }).exec(function (err, result) {
    // not sure how you are populating "result" in your example, as it should be an array,
    // but you said your code works... so I'll let you figure out what goes here.
  });
Also, a very "mongo" way of doing what you want is to save a reference in SchemaC back to SchemaA. When I say "mongo" way of doing it, you have to break away from your years of thinking about relational data queries. Do whatever it takes to perform fewer queries on the database, even if it requires two-way references and/or data duplication.
For example, if I had a Book schema and Author schema, I would likely save the authors first and last name in the Books collection, along with an _id reference to the full profile in the Authors collection. That way I can load my Books in a single query, still display the author's name, and then generate a hyperlink to the author's profile: /author/{_id}. This is known as "data denormalization", and it has been known to give people heartburn. I try and use it on data that doesn't change very often - like people's names. In the occasion that a name does change, it's trivial to write a function to update all the names in multiple places.
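A hedged sketch of that Book/Author denormalization (all schema and field names here are invented for the example):

// Each book stores a copy of the author's name plus a reference to the profile
var BookSchema = new Schema({
  title: String,
  author: {
    _id: { type: Schema.Types.ObjectId, ref: 'Author' }, // link to the full profile
    firstName: String, // denormalized copy
    lastName: String   // denormalized copy
  }
});
var Book = mongoose.model('Book', BookSchema);

// Books load in a single query, author names included;
// the profile hyperlink is built from /author/{author._id}
Book.find({}).exec(function (err, books) {
  books.forEach(function (b) {
    console.log(b.title, 'by', b.author.firstName, b.author.lastName);
  });
});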
SchemaA.find({})
  .populate({
    path: "fieldB",
    populate: { path: "fieldC" }
  }).exec(function (err, result) {
    // this is how you get all the key/value pairs of SchemaA, SchemaB and SchemaC
    // example: result.fieldB.fieldC._id (a key of SchemaC)
  });
Why not add a ref to SchemaC on SchemaA? There is no way to bridge to SchemaC from SchemaA without SchemaB the way you currently have it, unless you populate SchemaB with no other data than a ref to SchemaC.
As explained in the docs under Field Selection, you can restrict what fields are returned.
.populate('fieldB') becomes .populate('fieldB', 'fieldC -_id'). The -_id is required to omit the _id field, just like when using select().
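Put together, a minimal sketch of the selection described above:

SchemaA.find({})
  .populate('fieldB', 'fieldC -_id') // only pull fieldC from the SchemaB documents
  .exec(function (err, result) {
    // each result[i].fieldB now contains only its fieldC reference
  });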
I think this is not possible, because when a document in A refers to a document in B, and that document refers to another document in C, how can the document in A know which document in C to refer to without any help from B?
I am trying to create an array of nested objects. I am following an example from a book that does the following:
// Schema for the embedded Phone subdocuments
var Phone = new Schema({
  number: { type: Number, required: false },
  ...
  personId: { type: Schema.Types.ObjectId }
});

// Schema for a Person, embedding an array of Phone subdocuments
var PersonSchema = new Schema({
  name: { type: String },
  phones: [Phone]
});

var Person = mongoose.model('Person', PersonSchema);
This works just fine for storing multiple phone numbers for a person. However, I am not sure whether there is a good/fast way to get a Phone object by _id. Since Phone is not a mongoose model, you cannot go directly to Phone.findOne({...}). Right now I am stuck with getting a person by _id and then looping over that person's phones to see if the id matches.
Then I stumbled upon this link:
http://mongoosejs.com/docs/populate.html
Is one way more right than the other? Currently, when I delete a person, his/her phones go away as well. I'm not really sure that works with 'populate'; it seems like I would need to delete the Person and the Phones separately.
Anyone want to attempt to explain the differences?
Thanks in advance
The general rule is that if you need to independently query Phones, then you should keep them in a separate collection and use populate to look them up from People when needed. Otherwise, embedding them is typically a better choice as it simplifies updates and deletion.
When using an embedded approach like you are now, note that Mongoose arrays provide an id method you can use to more easily look up an element by its _id value.
var phone = person.phones.id(id);
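For example, a sketch of fetching a person and then looking up one of their phones by _id (untested; personId and phoneId are placeholders):

Person.findById(personId, function (err, person) {
  if (err) throw err
  // DocumentArray#id() searches the embedded array by _id
  var phone = person.phones.id(phoneId)
  console.log('Found phone:', phone && phone.number)
})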
I have two collections:
Users
Uploads
Each upload has a User associated with it, and I need to know the user's details when an Upload is viewed. Is it best practice to duplicate this data inside the Uploads record, or to use populate() to pull in these details from the Users collection referenced by _id?
OPTION 1
var UploadSchema = new Schema({
  _id: { type: Schema.ObjectId },
  _user: { type: Schema.ObjectId, ref: 'users' },
  title: { type: String },
});
OPTION 2
var UploadSchema = new Schema({
  _id: { type: Schema.ObjectId },
  user: {
    name: { type: String },
    email: { type: String },
    avatar: { type: String },
    //...etc
  },
  title: { type: String },
});
With 'Option 2', if any of the data in the Users collection changes, I will have to update it across all associated Upload records. With 'Option 1', on the other hand, I can just chill out and let populate() ensure the latest User data is always shown.
Is the overhead of using populate() significant? What is the best practice in this common scenario?
If you need to query on your users, keep users alone. If you need to query on your uploads, keep uploads alone.
Another question you should ask yourself is: every time I need this data, do I also need the embedded objects (and vice versa)? How many times will this data be updated? How many times will this data be read?
Think about a friendship request:
If each time you need the request you also need the user who made it, embed the request inside the user document.
You will be able to create an index on the embedded object too, and your search will be a single query: fast and consistent.
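For instance, a sketch of that friendship-request layout (all names here are invented for illustration):

var UserSchema = new Schema({
  name: String,
  // requests embedded in the user who received them
  friendRequests: [{
    from: { type: Schema.Types.ObjectId, ref: 'User' },
    date: { type: Date, default: Date.now }
  }]
});

// An index on a field of the embedded objects keeps the search a single query
UserSchema.index({ 'friendRequests.from': 1 });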
Just a link to my previous reply on a similar question:
Mongo DB relations between objects
I think this post will be right for you http://www.mongodb.org/display/DOCS/Schema+Design
Use Cases
Customer / Order / Order Line-Item
Orders should be a collection. Customers should be a collection. Line-items should be an array of line-items embedded in the order object.
Blogging system.
Posts should be a collection. The post author might be a separate collection, or simply a field within posts if it is only an email address. Comments should be embedded objects within a post for performance.
Schema Design Basics
Kyle Banker, 10gen
http://www.10gen.com/presentation/mongosf2011/schemabasics
Indexing & Query Optimization
Alvin Richards, Senior Director of Enterprise Engineering
http://www.10gen.com/presentation/mongosf-2011/mongodb-indexing-query-optimization
These two videos are the best on MongoDB I've ever seen, imho.
Populate() is just a query. So the overhead is whatever the query is, which is a find() on your model.
Also, best practice for MongoDB is to embed what you can; it will result in a faster query. It sounds like you'd be duplicating a ton of data though, which makes relations (linking) a good fit.
"Linking" is just putting an ObjectId in a field from another model.
Here are the Mongo best practices: http://www.mongodb.org/display/DOCS/Schema+Design#SchemaDesign-SummaryofBestPractices
Linking/DBRefs http://www.mongodb.org/display/DOCS/Database+References#DatabaseReferences-SimpleDirect%2FManualLinking