how to dynamically query in mongoose? - node.js

So I'm new to programming and Mongoose, just learning the basics, and now I have a task to build a dynamic query with criteria such as:
query => returns all the data whose value is in the list I pass as the parameter
example:
query: [ios, android] => I get all ios data and all android data
query_else => returns all the data whose value is NOT in the list I pass as the parameter
example:
query_else: [ios, android] => I get all the data OTHER than ios and android
If I use .find with one specific value it works, but if I pass two values into it the result is [].
Maybe this isn't exactly the answer I should be looking for, but how should I think about solving this case? My lack of knowledge about Mongoose and about coding in general has me deadlocked.
Thank you in advance
account log schema in activity-log collection:
const actLogSchema = new Schema(
  {
    account_id: {
      type: Schema.Types.ObjectId,
      ref: 'account'
    },
    // pass the function itself; Date.now() would be evaluated once at schema definition
    date_created: { type: String, default: Date.now },
    ip: String,
    location: String,
    device: String,
    type: String,
    label: String,
    action: String,
    description: String
  },
  { versionKey: false }
);
I assume query targets one of {date_created, ip, location, device, type, action, label, description},
and query_else works on the same fields but excludes the given values, like the example above.
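For what it's worth, this is usually handled with MongoDB's $in / $nin operators. A minimal sketch (the helper name buildFilter and the use of the device field are assumptions for illustration, not from the question):

```javascript
// Build a Mongoose filter from the two kinds of dynamic criteria:
//   query      => match any of the listed values   ($in)
//   query_else => match everything except them     ($nin)
function buildFilter(field, { query, query_else }) {
  const filter = {};
  if (query && query.length) filter[field] = { $in: query };
  if (query_else && query_else.length) filter[field] = { $nin: query_else };
  return filter;
}

// Example: ActLog.find(buildFilter('device', { query: ['ios', 'android'] }))
// finds logs whose device is ios OR android.
```

This also explains the empty result: .find({ device: ['ios', 'android'] }) without $in matches only documents whose device field is literally that array, which is why passing two values returned [].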

Related

how to sort, index and paginate posts mongodb-mongoose

I have the following postSchema and would like to fetch data depending on the updatedAt field. When people comment, I increase numberOfReply by one and the post's updatedAt is updated. How should I fetch data for infinite scroll, and should I use indexing for this operation?
const postScheme = mongoose.Schema(
  {
    post: {
      type: String,
      trim: true,
    },
    numberOfReply: {
      type: Number,
      default: 0
    },
    owner: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'User'
    },
    hasImage: {
      type: Boolean,
    },
    image: {
      type: String,
      trim: true
    },
  },
  { timestamps: true }
)
This is what I use to fetch the first page:
Post.Post.find({})
  .sort({ updatedAt: -1 })
  .limit(10)
  .populate('owner')
  .populate('coin')
  .exec(function (err, posts) {
    res.send(posts)
  })
this is for infinite scroll
Post.Post.find({ isCoin: true, updatedAt: { $lt: req.body.last } })
  .sort({ updatedAt: -1 })
  .populate('owner')
  .limit(10)
  .exec(function (err, posts) {
    res.send(posts)
  })
The limit plus range-condition syntax is Mongo's way of paginating through data, so you've got that worked out; from a code perspective there isn't really anything to change to make it work better.
should I use indexing for this operation
Most definitely yes. Indexes are the way to make this operation efficient; otherwise Mongo will do a collection scan for each page, which is very inefficient.
So what kind of index should you build? You want a compound index that lets the query satisfy both the filter and the sort conditions, and in your case that is on the isCoin and updatedAt fields, like so:
db.collection.createIndex( { isCoin: 1, updatedAt: -1 } )
A few improvements you can make to make the index a bit more efficient (for this specific query) are:
Consider creating the index as a sparse index; it will then only index documents that contain both fields. Obviously, if your data doesn't fit this option you can ignore it.
This one has a few caveats, but partial indexes are designed for exactly this case: improving query performance by indexing a smaller subset of the data. In your case you can add this option:
{ partialFilterExpression: { isCoin: true } }
With that said, this limits the index's usefulness for other queries, so it might not be the right choice for you.
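Put together, the partial-index variant would look something like this in the mongo shell (the collection name posts is an assumption):

```javascript
// Index only the isCoin: true documents; supports the filter and the sort above.
db.posts.createIndex(
  { isCoin: 1, updatedAt: -1 },
  { partialFilterExpression: { isCoin: true } }
)
```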

Mongodb aggregation - geoNear and text search in joined collection

I have a tricky query that pushes the limits of my MongoDB know-how. Here is the simplified scenario.
We have a collection Restaurant and a collection Subsidary.
They look roughly like this (simplified, using mongoose):
const restaurantSchema = new Schema(
  {
    name: { type: String, required: true },
    categories: { type: [String], required: true },
    ...
  })
const subsidarySchema = new Schema(
  {
    restaurant: { type: Schema.Types.ObjectId, ref: 'Restaurant' },
    location: {
      type: { type: String, enum: ['Point'], required: true },
      coordinates: { type: [Number], required: true },
    },
    ...
  })
What is required:
Always: find restaurants that have a subsidary within a 3.5 km radius, sorted by distance.
Sometimes: also filter those restaurants by a string that should fuzzy-match the restaurant name.
Apply further filters and pagination (e.g. filter by categories, ...)
I'm trying to tackle this with a MongoDB aggregation. The problem:
The aggregation pipeline stages $geoNear and $text each require to be first in the pipeline, which means they exclude each other.
Here are my thoughts so far:
Start the aggregation from subsidary, with $geoNear as the first stage. This already cuts away all restaurants outside the 3.5 km.
$group the subsidaries by restaurant and keep the minimal distance value per cluster.
$lookup to get the matching restaurant for each cluster. Maybe $unwind here.
??? Here the text search should go, fuzzy-matching the restaurants' names. ???
$match on other values (category, openingHours, ...)
$sort, $limit and $skip for sorting and pagination.
Question
Does this approach make sense? What would be a possible way to implement stage 4?
I searched a lot, but there seems to be no way to use something like { $match: { $text: { $search: req.query.name } } } as a 4th stage.
An alternative would be to run a second query beforehand that handles just the text search and then build an intersection, but that could mean a massive number of restaurant IDs being passed into that stage. Is that something MongoDB can handle?
I'm very thankful for your comments!
Some ways around the requirement that both text search and geo query must be the first stage:
Use text search as the first stage, then manually calculate the distance using $set/$expr in a subsequent stage.
Use geo query as the first stage, then perform text filtering in your application (allowing you also to use any text matching/similarity algorithm you like).
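A sketch of the second option as a pipeline builder (the restaurants collection name and field names are inferred from the schemas above; the case-insensitive regex $match is a stand-in for whatever text matching you prefer, since $text cannot appear mid-pipeline):

```javascript
// $geoNear must be the first stage; the optional name filter is applied
// later as a regex $match against the joined restaurant document.
function buildPipeline(coordinates, nameSearch) {
  const pipeline = [
    {
      $geoNear: {
        near: { type: 'Point', coordinates },
        distanceField: 'distance',
        maxDistance: 3500, // metres
        spherical: true
      }
    },
    // One entry per restaurant, keeping the nearest subsidary's distance.
    { $group: { _id: '$restaurant', distance: { $min: '$distance' } } },
    { $lookup: { from: 'restaurants', localField: '_id', foreignField: '_id', as: 'restaurant' } },
    { $unwind: '$restaurant' }
  ];
  if (nameSearch) {
    pipeline.push({ $match: { 'restaurant.name': { $regex: nameSearch, $options: 'i' } } });
  }
  pipeline.push({ $sort: { distance: 1 } });
  return pipeline;
}

// Usage: Subsidary.aggregate(buildPipeline([13.4, 52.5], 'pizza'))
// followed by $skip / $limit stages for pagination.
```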

Mongodb Relationship: Posts and Comments (ref v sub-documents)

I know there are a lot of similar questions, but they're old, and since MongoDB has evolved a lot over the last 5-6 years I am looking for a good schema design.
Goal: I want to have a post with comments by users.
What I have designed so far is:
Separate post model:
const projectSchema = new mongoose.Schema({
  user: { type: mongoose.Schema.Types.ObjectId, required: true, ref: 'User' },
  title: { type: String, required: true },
  image: { type: String, default: undefined },
  description: { type: String, required: true, minLength: 200, maxlength: 500 },
  comments: [{
    type: mongoose.Schema.Types.ObjectId, ref: 'Comment'
  }],
  state: { type: Boolean, default: true },
  collaborators: { type: Array, default: [] },
  likes: { type: Array, default: [] }
})
And a separate comments model:
const commentSchema = new mongoose.Schema({
  comment: { type: String, required: true },
  project: { type: String, required: true, ref: 'Project' },
  user: { type: String, required: true, ref: 'User' }
})
The reason I am going for the relational approach is that if the comments grow to, say, 10,000, they would greatly increase the size of the post document.
This way, no matter how many comments there are, we can populate them using their IDs, and the comments themselves live in a separate collection.
Reference : one-to-many
Is this a good approach for my project?
This is how I query the comments of a single post:
const project = await Project.findById(
  new mongoose.Types.ObjectId(req.params.projectId)
).populate({
  path: 'comments',
  populate: { path: 'user' }
}).lean()
Whether it's a good design depends on how many comments per post you expect and on what queries your app will run.
There's a good blog series from mongodb.com on how to design your database schema.
The common design is:
One to Few (Use embed)
One to Many (Use an array of references)
One to squillions (The usual relational database one-to-many approach)
Summary is:
So, even at this basic level, there is more to think about when designing a MongoDB schema than when designing a comparable relational schema. You need to consider two factors:
Will the entities on the “N” side of the One-to-N ever need to stand alone?
What is the cardinality of the relationship: is it one-to-few; one-to-many; or one-to-squillions?
Based on these factors, you can pick one of the three basic One-to-N schema designs:
Embed the N side if the cardinality is one-to-few and there is no need to access the embedded object outside the context of the parent object
Use an array of references to the N-side objects if the cardinality is one-to-many or if the N-side objects should stand alone for any reasons
Use a reference to the One-side in the N-side objects if the cardinality is one-to-squillions
There is also a blog about advanced schema design which is worth the read.
You seem to be using the two-way referencing approach.
The difference between yours and one-to-squillions is that you are not only storing the post id as a reference on each comment document, but also storing the comment ids as references in the post document, while one-to-squillions only stores the project id reference in the comment document.
Your approach is better if you need to get the comment ids of a post. The disadvantage is that you need two queries when creating or deleting a comment: one to add or remove the comment id on the post, and one to create or delete the comment document itself. It is also slower to find which post a given comment id belongs to.
One-to-squillions gives you worse performance when querying comments by post id, but you can mitigate this by properly indexing your comment collection.
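Such an index might look like this in the mongo shell (the collection name comments is an assumption; the project field is from the schema above):

```javascript
// Makes "find all comments of a given project" an index scan
// instead of a collection scan.
db.comments.createIndex({ project: 1 })
```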

use mongoose schema over multiple microservices

My application is split into multiple microservices running on Heroku dynos (they can't access each other's files). Sometimes multiple microservices work with one collection, so each of them needs the corresponding mongoose schema.
However, not every microservice needs the full schema. For example, microservice A needs the full schema whereas microservice B only needs a few fields of it.
Example schema inside microservice A:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  phone: { type: String, required: true, unique: true },
  forename: { type: String, required: true },
  surname: { type: String, required: true },
  middleInitals: { type: String, required: false },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
})
Example Schema inside microservice B:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
})
My approach
I would just go ahead and create a new schema in each microservice, containing only the needed fields. However, I am not sure whether there will be problems when multiple microservices register a schema against the same MongoDB collection. For example, both microservices would attempt to create an index for the unique field. Will there be performance issues?
Does anybody have a different approach I could use? Is this even a valid approach?
Thanks in advance :)
It's a valid approach: you can have two schemas pointing to the same collection. I have tested it and it works.
Mongoose is an Object Data Modeling (ODM) library, and you can have two objects looking at the same collection (a table or view in SQL terms); there is no problem with that.
There is no reason for performance problems as long as you have the right indexes; this has nothing to do with the object modeling.
You might want to add some type key, so a get request can fetch only type-A or type-B accounts. On find, you can restrict the returned fields with a projection.
I think you should keep only two keys indexed, email and password. If you keep the unique phone index and microservice B doesn't include a phone, you will get a violation of the unique index on phone.
If you really want a unique phone index, you can work around it: generate a temporary unique value for phone in microservice B (auto-generated, or a duplicate of the email value), ignore this value in microservice B, and only find, update and present phone in microservice A, where you have a valid phone. When a user changes from account type B to account type A, you must make sure to replace the placeholder with a valid phone number.
I see no problem with two schemas for the same collection; you just need to manage your indexes the right way, to avoid collisions and to ensure you can differentiate the different account types in the collection.
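As a sketch, the relevant mechanism in Mongoose is the optional third argument to mongoose.model(), which pins a model to an explicit collection name (field list taken from microservice B above; disabling autoIndex is an assumption about how you'd avoid duplicate index builds, not something from the question):

```javascript
const mongoose = require('mongoose');

// Microservice B: partial schema over the shared 'accounts' collection.
const accountSchemaB = new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
}, { autoIndex: false }); // let one service own index creation

// Third argument = explicit collection name both services share.
const Account = mongoose.model('Account', accountSchemaB, 'accounts');
```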
As far as I can see, there is no reason you can't simply use the same schema. Maybe use some sort of privilege system and have it return only the right information to each of these separate microservices: a fetch request could tell it which service it is coming from, and you return the appropriate fields using a simple conditional.

MongoDB - Limit number of records in related model

I have just started learning Node and MongoDB and I'm building an application for my wedding to try out these technologies.
Part of the application will focus on the assignment of guests to tables for the wedding breakfast. So far I have a Guest schema like so:
const guestSchema = new mongoose.Schema({
  firstname: {
    type: String,
    trim: true,
  },
  surname: {
    type: String,
    trim: true,
  },
  attending: String,
  allergies: String,
  menu: String
});
const tableSchema = new mongoose.Schema({
  name: String,
  capacity: Number,
  guests: [
    { type: mongoose.Schema.ObjectId, ref: 'Guest' }
  ]
});
I am aware that I can create a relationship between guests and tables via the guests array above, but I'd like to limit the number of related guests per table according to the capacity of the table, e.g. tableOne might seat 5 people, tableTwo might seat 7, etc. I know that I could do this in Node by checking the capacity field with a query before relating a guest to a table, but I was wondering if there is a way to do this on the database side, as I'd imagine that would be better practice?
All thoughts welcome,
Thanks
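One possible database-side approach (a sketch under assumptions, not tested against your app; the helper name is invented for illustration): make the $push conditional with $expr (MongoDB 3.6+), so the database itself refuses to seat a guest at a full table.

```javascript
// Build a guarded update: the filter only matches the table while
// guests.length < capacity, so a $push on a full table is a no-op.
function buildSeatGuest(tableId, guestId) {
  return {
    filter: {
      _id: tableId,
      $expr: { $lt: [{ $size: '$guests' }, '$capacity'] }
    },
    update: { $push: { guests: guestId } }
  };
}

// Usage with Mongoose:
//   const { filter, update } = buildSeatGuest(tableId, guestId);
//   const res = await Table.updateOne(filter, update);
//   if (res.modifiedCount === 0) { /* table was full (or not found) */ }
```

Because the capacity check and the push happen in a single atomic update, two concurrent requests cannot both squeeze past the limit, which is the main advantage over checking capacity in Node first.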
