Let's say we have this model:
const Model = mongoose.model('Model', new mongoose.Schema({
values: [{ key: String }]
}));
So we would have documents looking like this:
{
_id: someId,
values: [
{ key: 'v1' },
{ key: 'v2' },
]
}
Now if I perform this update:
Model.findByIdAndUpdate(someId, { values: { key: 'v3' } })
This does not throw, even though the value I'm passing for values is not an array. However, it does update the document, resulting in
{
_id: someId,
values: [
{ key: 'v3' },
]
}
I would like this kind of error to throw instead of silently deleting every item in my values property :D My Mongoose projects use the default configuration (strict modes are enabled).
I know I could update with $set, which would not lead to that behavior. However, if I have to use $set, I think disabling updates outside an operator would be a good option too.
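For comparison, this is the operator-based shape I mean - a minimal sketch against the Model defined above (the value is illustrative); $push appends to the existing array instead of replacing it wholesale:
// Appends a new element instead of replacing the array;
// the existing { key: 'v1' } and { key: 'v2' } entries are kept
Model.findByIdAndUpdate(someId, { $push: { values: { key: 'v3' } } })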
It's my first time trying Prisma and I am stuck. I have "products" and "filters" models.
I want the following query to work. The idea is to fetch products matching dynamic query params (name and value); the product query parameters come dynamically from the frontend.
const products = await prisma.product.findMany({
where: {
categoryName,
subCategoryName,
filters: {
some: {
AND: [
{
name: "RAM",
value: "32GB",
},
{
name: "Storage",
value: "1TB",
},
],
},
},
},
include: {
images: true,
},
});
If there's only one parameter, like
{
name:"RAM",
value:"32GB"
}
the query returns the appropriate products, but if there is more than one query param (like in the original code above), it returns an empty array.
My product schema looks like this, simplified:
name String
filters Filter[]
My filter schema looks like this, simplified:
name String
value String?
product Product? @relation(fields: [productId], references: [id])
productId Int?
Thank you very much
I've found the solution here
https://github.com/prisma/prisma/discussions/8216#discussioncomment-992302
Apparently it should be like this instead:
await prisma.product.findMany({
where: {
AND: [
{ price: 21.99 },
{ filters: { some: { name: 'ram', value: '8GB' } } },
{ filters: { some: { name: 'storage', value: '256GB' } } },
],
},
})
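Since the name/value pairs come dynamically from the frontend, the AND array can simply be built from them. A minimal sketch, assuming the frontend sends an array of { name, value } objects (the selectedFilters variable is illustrative):
// e.g. parsed from the request: [{ name: 'RAM', value: '32GB' }, { name: 'Storage', value: '1TB' }]
const products = await prisma.product.findMany({
  where: {
    categoryName,
    subCategoryName,
    // one `filters: { some: ... }` condition per selected filter,
    // so every selected filter must match at least one related row
    AND: selectedFilters.map(({ name, value }) => ({
      filters: { some: { name, value } },
    })),
  },
  include: { images: true },
});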
I searched many questions here and other articles on the web, but they all seem to describe cases somewhat different from the one I have at hand.
I have User schema:
{
username: { type: String },
lessons: [
{
lesson: { type: String },
result: { type: String }
}
]
}
I want to add a new element into lessons, or skip it if there is already one with the same values, so I use addToSet:
const dbUser = await User.findOne({ username })
dbUser.lessons.addToSet({ lesson, result: JSON.stringify(result) })
await dbUser.save()
However, it creates what look like duplicates:
// first run
[
{
_id: 60c80418f2bcfe5fb8f501c1,
lesson: '60c79d81cf1f57221c05fdac',
result: '{"correct":2,"total":2}'
}
]
// second run
[
{
_id: 60c80418f2bcfe5fb8f501c1,
lesson: '60c79d81cf1f57221c05fdac',
result: '{"correct":2,"total":2}'
},
{
_id: 60c80470f2bcfe5fb8f501c2,
lesson: '60c79d81cf1f57221c05fdac',
result: '{"correct":2,"total":2}'
}
]
At this point I see that it adds an _id and thus treats the entries as different (even though they are otherwise identical).
What is my mistake and what should I do to fix it? I can change the lessons structure or change the query - whichever is easier to implement.
You can create subdocuments without an _id: just add _id: false to your subdocument declaration.
const userSchema = new Schema({
username: { type: String },
lessons: [
{
_id: false,
lesson: { type: String },
result: { type: String }
}
]
});
This prevents the creation of an _id field in your subdocuments, and you can then add a new element to lessons or skip it with the addToSet operator, as you did.
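If you prefer to skip the findOne/save round trip, the same deduplication can also be done atomically with the $addToSet update operator. A sketch, assuming the same username, lesson and result variables as in the question:
// Only appends the subdocument if an identical one (all fields equal) is not already present
await User.updateOne(
  { username },
  { $addToSet: { lessons: { lesson, result: JSON.stringify(result) } } }
);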
I have a unique index like this
code: {
type: String,
index: {
unique: true,
partialFilterExpression: {
code: { $type: 'string' }
}
},
default: null
},
state: { type: Number, default: 0 },
but when the state is 2 (archived) I want to keep the code while still being able to reuse it, so it cannot be unique if the state is 2.
Is there any way I could accomplish this?
This is possible, though it's through a workaround documented here: https://jira.mongodb.org/browse/SERVER-25023.
In MongoDB 4.7 you will be able to apply different index options to the same field, but for now you can add a non-existent field to separate the two indexes.
Here's an example using the workaround.
const mongoose = require('mongoose');

(async () => {
const ItemSchema = mongoose.Schema({
code: {
type: String,
default: null
},
state: {
type: Number,
default: 0,
},
});
// Define a unique index for active items
ItemSchema.index({code: 1}, {
name: 'code_1_unique',
partialFilterExpression: {
$and: [
{code: {$type: 'string'}},
{state: {$eq: 0}}
]
},
unique: true
})
// Define a non-unique index for non-active (archived) items
ItemSchema.index({code: 1, nonExistantField: 1}, {
name: 'code_1_nonunique',
partialFilterExpression: {
$and: [
{code: {$type: 'string'}},
{state: {$eq: 2}}
]
},
})
const Item = mongoose.model('Item', ItemSchema)
await mongoose.connect('mongodb://localhost:27017/so-unique-compound-indexes')
// Clear the collection so the test runs cleanly
await Item.deleteMany({})
// Successfully create an item
console.log('\nCreating a unique item')
const itemA = await Item.create({code: 'abc'});
// Throws error when trying to create with the same code
await Item.create({code: 'abc'})
.catch(err => {console.log('\nThrowing a duplicate error when creating with the same code')})
// Change the active code
console.log('\nChanging item state to 2')
itemA.state = 2;
await itemA.save();
// Successfully creates a new doc with the same code
await Item.create({code: 'abc'})
.then(() => console.log('\nSuccessfully created a new doc with same code'))
.catch(() => console.log('\nThrowing a duplicate error'));
// Throws error when trying to create another doc with the same code (and default state 0)
await Item.create({code: 'abc'})
.catch(err => {console.log('\nThrowing a duplicate error when creating with the same code again')})
})();
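To double-check that both partial indexes were actually built, Mongoose's Model.listIndexes() can be used inside the async block above, after connecting (a quick sketch; the exact output shape may vary by Mongoose version):
// Fetch the index definitions from MongoDB and log their names
const indexes = await Item.listIndexes();
console.log(indexes.map(ix => ix.name)); // e.g. [ '_id_', 'code_1_unique', 'code_1_nonunique' ]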
This is not possible using indexes. Even if you use a compound index on code and state, there will still be a case like this:
new document
{
code: 'abc',
state: 0
}
archived document
{
code: 'abc',
state: 2
}
Now, although you have the same code, you will not be able to archive the new document or unarchive the archived one.
You can do something like this instead:
const checkCode = await this.Model.findOne({ code: 'abc', state: 0 })
if (checkCode) {
  throw new Error('Code has to be unique')
} else {
  // ...do something
}
I want to have two types of users on my site - normal users and web designers.
How do I specify in the Mongoose schema a type that can have one of two values - "normal" or "designer"?
The "choice" between two types depends entirely on what you mean.
In the simple case where you just want a property to accept one value from a list of choices, you can use enum to restrict the allowed values:
var userSchema = new Schema({
"type": { "type": String, "enum": ["normal", "developer"] }
});
That gives you basic validation that the property must contain one of those values and only one of those values. Trying to set it to any other value results in a validation error (combine it with required: true if omitting the value should also fail).
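A quick sketch of what that validation looks like in practice, using the same callback style as the rest of this answer and assuming the userSchema above is compiled into a model (the EnumUser name is just for illustration):
var EnumUser = mongoose.model('EnumUser', userSchema);

var bad = new EnumUser({ type: 'admin' });
bad.validate(function(err) {
  // err is a ValidationError; err.errors.type explains that
  // 'admin' is not a valid enum value for path 'type'
});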
If, however, you want something more advanced, such as essentially different objects with extended fields for each "type", then "discriminators" are likely what you want.
This allows you to have different properties between a normal "user" and a "developer", such as a list of the projects for the developer. They will all be saved in the same collection, but they are first-class objects with all their own schema definition and rules:
var util = require('util'),
async = require('async'),
mongoose = require('mongoose'),
Schema = mongoose.Schema;
mongoose.connect('mongodb://localhost/test');
function BaseSchema() {
Schema.apply(this,arguments);
this.add({
name: String,
createdAt: { type: Date, default: Date.now }
});
}
util.inherits(BaseSchema,Schema);
var userSchema = new BaseSchema();
var developerSchema = new BaseSchema({
projects: [String]
});
var User = mongoose.model('User',userSchema),
Developer = User.discriminator( 'Developer', developerSchema );
async.series(
[
function(callback) {
User.remove({},callback);
},
function(callback) {
async.eachSeries(
[
{ "role": "User", "name": "Bill" },
{ "role": "Developer", "name": "Ted", "projects": [
"apples", "oranges" ] }
],
function(item,callback) {
mongoose.model(item.role).create(item,callback);
},
callback
);
},
function(callback) {
User.find().exec(function(err,docs) {
console.log(docs)
callback(err);
});
}
],
function(err) {
if (err) throw err;
mongoose.disconnect();
}
);
For which the output shows the two objects in the same collection:
[ { _id: 55ff7b22f1ff34f915cc8312,
name: 'Bill',
__v: 0,
createdAt: Mon Sep 21 2015 13:36:02 GMT+1000 (AEST) },
{ _id: 55ff7b22f1ff34f915cc8313,
name: 'Ted',
__v: 0,
__t: 'Developer',
createdAt: Mon Sep 21 2015 13:36:02 GMT+1000 (AEST),
projects: [ 'apples', 'oranges' ] } ]
Note that "Ted" the "Developer" has another field stored as __t which holds the discriminator value for the model. This allows mongoose to apply magic when it is read that casts to the correct object.
It also means that each model can be used independently, with operations like:
Developer.find().exec(function(err,developers) { });
Developer.create(newDev,function(err,developer) { });
These respectively "magically insert" the "type", so that only matching __t: 'Developer' objects are found or created by each operation.
So enum gives you a restriction on the possible values, while .discriminator() allows you to set up completely independent definitions for the objects as a whole. It depends on which form you mean, but both are nice to use.
I have an article schema with a comments subdocument array, which contains all the comments for a particular article.
What I want to do is select an article by id, populate its author field and also the author field in comments, then sort the comments subdocuments by date.
The article schema:
var articleSchema = new Schema({
title: { type: String, default: '', trim: true },
body: { type: String, default: '', trim: true },
author: { type: Schema.ObjectId, ref: 'User' },
comments: [{
body: { type: String, default: '' },
author: { type: Schema.ObjectId, ref: 'User' },
created_at: { type : Date, default : Date.now, get: getCreatedAtDate }
}],
tags: { type: [], get: getTags, set: setTags },
image: {
cdnUri: String,
files: []
},
created_at: { type : Date, default : Date.now, get: getCreatedAtDate }
});
Static method on the article schema (I would love to sort the comments here, can I do that?):
load: function (id, cb) {
this.findOne({ _id: id })
.populate('author', 'email profile')
.populate('comments.author')
.exec(cb);
},
I have to sort it elsewhere:
exports.load = function (req, res, next, id) {
var User = require('../models/User');
Article.load(id, function (err, article) {
var sorted = article.toObject({ getters: true });
sorted.comments = _.sortBy(sorted.comments, 'created_at').reverse();
req.article = sorted;
next();
});
};
I call toObject to convert the document to a plain JavaScript object; I can keep my getters / virtuals, but what about methods?
Anyway, I do the sorting logic on the plain object and I'm done.
I am quite sure there is a much better way of doing this, please let me know.
I could have written this up as a few different things, but on consideration "getting the mongoose objects back" seems to be the main concern.
So there are various things you "could" do. But since you are "populating references" into an object and then want to alter the order of the objects in an array, there really is only one way to fix this once and for all:
Fix the data in order as you create it
If you want your "comments" array sorted by their "created_at" date, this even breaks down into multiple possibilities:
The array "should" have been appended to in "insertion" order, so the "latest" is last, as you note, but you can also "modify" this in recent (past couple of years now) versions of MongoDB with $position as a modifier to $push:
Article.update(
{ "_id": articleId },
{
"$push": { "comments": { "$each": [newComment], "$position": 0 } }
},
function(err,result) {
// other work in here
}
);
This "prepends" the array element to the existing array at the "first" (0) index so it is always at the front.
Failing using "positional" updates for logical reasons or just where you "want to be sure", then there has been around for an even "longer" time the $sort modifier to $push :
Article.update(
{ "_id": articleId },
{
"$push": {
"comments": {
"$each": [newComment],
"$sort": { "$created_at": -1 }
}
}
},
function(err,result) {
// other work in here
}
);
And that will "sort" on the property of the array elements documents that contains the specified value on each modification. You can even do:
Article.update(
{ },
{
"$push": {
"comments": {
"$each": [],
"$sort": { "$created_at": -1 }
}
}
},
{ "multi": true },
function(err,result) {
// other work in here
}
);
And that will sort every "comments" array in your entire collection by the specified field in one hit.
Other solutions are possible, using either .aggregate() to sort the array and/or "re-casting" to mongoose objects after that operation, or after doing your own .sort() on the plain object.
Both of those really involve creating a separate model object and "schema" with the embedded items, including the "referenced" information. So you could work along those lines, but it seems like unnecessary overhead when you could just store the data in your "most needed" order in the first place.
The alternative is to make sure that fields like "virtuals" always "serialize" into an object format with .toObject() on call, and just live with the fact that all the methods are gone and work with the properties as presented.
The last is a "sane" approach, but if what you typically use is "created_at" order, then it makes much more sense to "store" your data that way with every operation, so that when you "retrieve" it, it is already in the order you are going to use.
You could also use JavaScript's native Array sort method after you've retrieved and populated the results:
// Convert the mongoose doc into a plain JavaScript object:
const article = yourArticleDoc.toObject();
article.comments.sort((a, b) => {
  const aDate = new Date(a.created_at);
  const bDate = new Date(b.created_at);
  if (aDate < bDate) return -1;
  if (aDate > bDate) return 1;
  return 0;
});
As of the current release of MongoDB you must sort the array after database retrieval. But this is easy to do in one line using _.sortBy() from Lodash.
https://lodash.com/docs/4.17.15#sortBy
comments = _.sortBy(sorted.comments, 'created_at').reverse();
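If you would rather skip the extra .reverse(), lodash's _.orderBy() takes a sort direction directly (same idea, different helper):
// newest comments first
comments = _.orderBy(sorted.comments, ['created_at'], ['desc']);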