Mongoose lowercase option doesn't work - node.js

I am using the mongoose framework and trying to insert data. This is my schema:
var personSchema = new mongoose.Schema({
  "gender": {
    type: String,
    enum: ["male", "female", "other", "unknown"],
    lowercase: false
  },
  ...
});
I only want to allow values that exactly match the enum (case-sensitive), so the dataset {"gender":"Male"} should produce an error. But it doesn't matter whether I use lowercase: true or false: the object is created with the lowercased value from my enum. Only when I delete the lowercase attribute entirely is my inserted dataset rejected.
Is there a fix for this problem?

OK, this is apparently a bug in mongoose. I created a bug report on GitHub: https://github.com/Automattic/mongoose/issues/4622
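For context, the lowercase: true half of this is documented behavior: mongoose applies setters such as lowercase before validators run, so "Male" is folded to "male" before the enum check ever sees it. It is the lowercase: false case still folding the value that makes this a bug. A plain-JavaScript sketch of the expected setter-then-validator ordering (illustrative only, not mongoose's source):

```javascript
// Illustrative sketch: a lowercase setter running before an enum validator
const allowed = ["male", "female", "other", "unknown"];

function validateGender(input, { lowercase } = {}) {
  const value = lowercase ? input.toLowerCase() : input; // setter runs first
  return allowed.includes(value);                        // then the enum check
}

console.log(validateGender("Male", { lowercase: true }));  // true: "Male" is folded to "male" first
console.log(validateGender("Male", { lowercase: false })); // false: strict matching, as the asker wants
```

Until the bug is fixed, leaving the lowercase option off entirely (as noted in the question) restores strict enum matching.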


How to insert Array of objects in mongoDB?

I am very new to MongoDB, so please bear with me. I am having a problem: my array of objects is not working properly.
Here is my schema
const playerSchema = new mongoose.Schema({
  name: String,
  stats: {
    wins: Number,
    losses: Number,
    xp: Number
  },
  achievement: [
    {
      name: String,
      date: String
    }
  ]
});
Here is my document
const fluffy = new playerModel({
  "name": "nic raboy",
  "stats": {
    "wins": 5,
    "losses": 10,
    "xp": 300
  },
  "achievements": [
    { "name": "Massive XP", "date": "25-08-21" },
    { "name": "instant loss", "date": "24-08-21" }
  ]
});
However, in MongoDB Atlas it only shows an empty array, and I can't see the objects inside.
Your schema is correct; it seems your input is wrong. In the schema definition you named the field achievement, whereas in the input document it is achievements. Correct this and everything will work as you expect.
Explanation
The schema expects achievement but you inserted achievements, which is why it shows up as an empty array in the database. To avoid this kind of typo in the future, use the required flag.
const playerSchema = new mongoose.Schema({
  name: String,
  stats: {
    wins: Number,
    losses: Number,
    xp: Number
  },
  achievements: [
    {
      name: {
        type: String,
        required: true
      },
      date: {
        type: String,
        required: true // required reports missing fields
      }
    }
  ]
})
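The empty achievement array itself comes from mongoose's default strict mode: paths that are not declared in the schema, such as the misspelled achievements, are silently dropped before saving. A plain-JS sketch of that filtering (illustrative, not mongoose's actual implementation):

```javascript
// Illustrative: strict mode keeps only keys declared in the schema
const schemaPaths = ["name", "stats", "achievement"];

function applyStrictMode(input) {
  return Object.fromEntries(
    Object.entries(input).filter(([key]) => schemaPaths.includes(key))
  );
}

const doc = applyStrictMode({
  name: "nic raboy",
  stats: { wins: 5, losses: 10, xp: 300 },
  achievements: [{ name: "Massive XP", date: "25-08-21" }] // typo: not a schema path
});
console.log(Object.keys(doc)); // [ 'name', 'stats' ]: the misspelled key is gone
```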
Refer to this link for more on validation.
You can use insertMany; see the doc here.
Of course, a while loop calling insertOne multiple times would work fine, though I advise you to use the insertMany() method.
If you're new to MongoDB, I strongly encourage you to have a look at MongoDB University's MongoDB basics course as well as the MongoDB for JavaScript Developers course.

MongoDB check types of attributes before insert

I'm using mongoose to define the following schema:
var toySchema = new Schema({
  id: { type: String, required: true, unique: true },
  name: { type: String, required: true },
  price: Number
});
I believed that when I inserted into the collection using db.toys.insert(), mongo would check to see that the type of each attribute matched the types in the schema, but it seems that it does not. For example, id should be a string, but I was able to insert a document with an integer id. How can I enforce the schema onto these inserts?
Actually it does work, but not as you might expect. If you specify a field in your model that is of type Date and you try to save a boolean value (e.g. true) then you will get an error like
Error [ValidationError]: MySchema validation failed: expireAt: Cast to Date failed for value "true" at path ...
The reason for your particular problem can be found in the documentation:
If you pass an element that has a toString() function, Mongoose will call it, unless the element is an array or the toString() function is strictly equal to Object.prototype.toString().
You could add a custom validator.
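The quoted rule can be sketched in plain JavaScript to show why an integer id passes (illustrative only; mongoose's real logic lives in its String SchemaType):

```javascript
// Illustrative sketch of the quoted String-cast rule
function castToString(v) {
  if (Array.isArray(v) || v.toString === Object.prototype.toString) {
    throw new TypeError("CastError: cannot cast value to String");
  }
  return v.toString();
}

console.log(castToString(42));   // "42": this is why an integer id slips through
console.log(castToString(true)); // "true"
// castToString({})              // would throw: a plain object keeps Object.prototype.toString
```

Note that mongoose casts before it validates, so a custom validator will see the already-cast value and needs to be written with that in mind.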

use mongoose schema over multiple microservices

My application is split into multiple microservices running on Heroku dynos (they can't access each other's files). Sometimes multiple microservices work with one collection, so each of those microservices needs the corresponding mongoose schema.
However, not both microservices need the full schema. For example, microservice A needs the full schema whereas microservice B only needs a few fields of that schema.
Example schema inside microservice A:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  phone: { type: String, required: true, unique: true },
  forename: { type: String, required: true },
  surname: { type: String, required: true },
  middleInitals: { type: String, required: false },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
})
Example Schema inside microservice B:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
})
My approach
I would just go ahead and create a new schema in each microservice, containing only the needed fields. However, I am not sure whether there will be problems when multiple microservices register a schema against the same MongoDB collection. For example, both microservices would attempt to create an index for the unique field. Will there be any performance issues?
Does anybody have a different approach I could use? Is this even a valid approach to go with?
Thanks in advance :)
It's a valid approach. You can have two schemas pointing to the same collection; I have tested it and it works.
Mongoose is an Object Data Modeling (ODM) library, and you can have two objects looking at the same collection (a table or view in SQL terms) with no problem.
There is no reason for performance problems as long as you have the right indexes; this has nothing to do with the object modeling layer.
You might want to add some type key, so a get request can find only type-1 or type-2 accounts. On find, you can restrict the returned fields with a projection.
I think you should index only two keys: email and password. If you keep a unique phone index and microservice B doesn't include a phone, you will get violations on that unique index.
But if you really want a unique phone index, you can work around it: generate a temporary unique value for phone in microservice B (auto-generated, or duplicating the email value), ignore that value in microservice B, and only find/update/present phone in microservice A, where you have a valid phone. When a user changes from an account-B type to an account-A type, you must make sure to replace the placeholder with a valid phone number.
I see no problem with two schemas on the same collection; you just need to manage your indexes the right way to avoid collisions, and to ensure you can differentiate the account types in the collection.
As far as I can see, there is no reason you can't simply use the same schema. Maybe use some sort of privilege system and have it return only the right information to each of these separate microservices: a fetch request could state which service it's coming from, and you return the appropriate fields using a simple conditional.
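That last suggestion, one shared schema plus per-service filtering, can be sketched like this (service names and field lists are made up for illustration):

```javascript
// Illustrative: per-service field whitelists applied before returning account data
const VISIBLE_FIELDS = {
  serviceA: ["email", "phone", "forename", "surname", "createdAt"],
  serviceB: ["email", "failedLoginAttempts", "lockUntil", "createdAt"]
};

function projectFor(service, account) {
  const allowed = VISIBLE_FIELDS[service] || [];
  return Object.fromEntries(
    Object.entries(account).filter(([key]) => allowed.includes(key))
  );
}

const account = { email: "a@b.c", phone: "123", failedLoginAttempts: 2 };
console.log(projectFor("serviceB", account)); // { email: 'a@b.c', failedLoginAttempts: 2 }
```

The same effect can be had server-side with a MongoDB projection on the find itself; this sketch just shows the conditional idea in isolation.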

Sort Keys in Response Object from Mongoose in ExpressJS and NodeJS

I have been working on my own small API. So far everything works fine; I just have a small cosmetic issue I can't seem to find an answer to.
I defined a schema like so in mongoose:
const ArtistSchema = new Schema({
  stageName: {
    type: String,
    unique: true,
    required: true,
    minlength: 3,
    maxlength: 255
  },
  realName: {
    type: String,
    unique: true,
    required: true,
    minlength: 5,
    maxlength: 255
  },
  birthday: {
    type: Date,
    required: true
  },
  debutDate: {
    type: Date,
    required: true
  },
  company: {
    type: String,
    minlength: 5,
    maxlength: 255,
    required: function () {
      return this.active;
    }
  },
  active: {
    type: Boolean,
    default: true
  },
  music: [AlbumSchema],
  createdAt: {
    type: Date,
    default: Date.now
  }
});
I can create an entry in the database with no problem either. I use this function on app.post:
create(req, res, next) {
  const artistProps = req.body;
  Artist.create(artistProps)
    .then(artist => res.send(artist))
    .catch(next);
},
This works, but res.send(artist) actually returns the object with no key order, or at least in a pattern I cannot recognize. I want the response keys in the same order I defined them in the schema, because right now it returns:
active, stagename, realname, label, music, birthday
while it should be stagename, realname, birthday, debutDate, etc.
I hope someone can help me out here. I know I can sort the values of a specific key (like sorting stageName alphabetically), but I really can't find anything for the keys.
Express' res.send method recognizes that artist is an Object, and calls JSON.stringify on it to convert the Object to a JSON string before sending. Simplifying a bit, the JSON.stringify method iterates through your artist object keys in the order they were created. (Here's a link to the more complicated ordering explanation.) That explains the current behavior.
Others may chime in with their own proposals for how you might do what you're aiming for, but here's a simple one to try first:
First, do your own JSON.stringify, using a "replacer" to create the output order that you want:
const artistString = JSON.stringify(artist, ["realName", "stageName", ...])
// '{"realName": "Paul David Hewson", "stageName": "Bono", ...}'
Then, send the pre-built string with the correct Content-Type header, e.g. res.type('json').send(artistString). (Plain res.send would label a bare string as Content-Type: "text/html", and passing an already-stringified value to res.json would encode it a second time.)
There are definitely more complicated approaches, including creating a function that gets keys, sorts them, and returns a replacer; or writing your own .toJSON() substitute for JSON.stringify. You may need to implement one of these approaches, because you have nested objects; the behavior of the replacer can be a bit wonky in this case. You might be able to list the nested properties immediately after the parent, like:
["realName", "type", ...]
but since you have the same names for some nested properties, this may or may not work for you. You might have to stringify the insides before you stringify the outsides (gah!).
In any case, hopefully my suggestion can be a first step.
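To make the suggestion concrete, here is a runnable sketch of the replacer-array approach, including the nested-key caveat (the artist data is made up):

```javascript
// A replacer array both selects and orders keys, and it applies at every nesting level
const artist = {
  active: true,
  stageName: "Bono",
  realName: "Paul David Hewson",
  birthday: "1960-05-10"
};

const ordered = JSON.stringify(artist, ["stageName", "realName", "birthday", "active"]);
console.log(ordered);
// {"stageName":"Bono","realName":"Paul David Hewson","birthday":"1960-05-10","active":true}

// Caveat: nested keys must be listed too, or they silently vanish
const nested = JSON.stringify({ a: { b: 1 }, c: 2 }, ["a", "c"]);
console.log(nested); // {"a":{},"c":2}: "b" was not in the list
```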

Mongoose, unique username but not required (Mongo) [duplicate]

I was wondering if there is a way to force a unique collection entry, but only if the entry is not null.
Sample schema:
var UsersSchema = new Schema({
  name: { type: String, trim: true, index: true, required: true },
  email: { type: String, trim: true, index: true, unique: true }
});
'email' in this case is not required, but if 'email' is saved I want to make sure that the entry is unique (at the database level).
Empty entries seem to get the value null, so every entry with no email collides with the 'unique' option (if there is another user with no email).
Right now I'm solving it at the application level, but I would love to save that db query.
Thanks.
As of MongoDB v1.8+ you can get the desired behavior of ensuring unique values but allowing multiple docs without the field by setting the sparse option to true when defining the index. As in:
email : {type: String, trim: true, index: true, unique: true, sparse: true}
Or in the shell:
db.users.ensureIndex({email: 1}, {unique: true, sparse: true});
Note that a unique, sparse index still does not allow multiple docs with an email field with a value of null, only multiple docs without an email field.
See http://docs.mongodb.org/manual/core/index-sparse/
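The note above can be captured in a small sketch: a unique, sparse index skips documents where the field is absent, but an explicit null still counts as an indexed value (illustrative, not MongoDB's implementation):

```javascript
// Illustrative: what a unique + sparse index on "email" allows
function sparseUniqueAllows(indexedValues, doc) {
  if (!("email" in doc)) return true;             // field absent: document is not indexed at all
  if (indexedValues.has(doc.email)) return false; // indexed, and null counts as a real value
  indexedValues.add(doc.email);
  return true;
}

const seen = new Set();
console.log(sparseUniqueAllows(seen, { name: "a" }));   // true: no email field
console.log(sparseUniqueAllows(seen, { name: "b" }));   // true: still fine
console.log(sparseUniqueAllows(seen, { email: null })); // true: first explicit null
console.log(sparseUniqueAllows(seen, { email: null })); // false: duplicate null
```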
tl;dr
Yes, it is possible to have multiple documents with a field set to null or not defined, while enforcing unique "actual" values.
requirements:
MongoDB v3.2+.
Knowing your concrete value type(s) in advance (e.g, always a string or object when not null).
If you're not interested in the details, feel free to skip to the implementation section.
longer version
To supplement @Nolan's answer, starting with MongoDB v3.2 you can use a partial unique index with a filter expression.
The partial filter expression has limitations. It can only include the following:
equality expressions (i.e. field: value or using the $eq operator),
$exists: true expression,
$gt, $gte, $lt, $lte expressions,
$type expressions,
$and operator at the top-level only
This means that the trivial expression {"yourField": {$ne: null}} cannot be used.
However, assuming that your field always uses the same type, you can use a $type expression.
{ field: { $type: <BSON type number> | <String alias> } }
MongoDB v3.6 added support for specifying multiple possible types, which can be passed as an array:
{ field: { $type: [ <BSON type1> , <BSON type2>, ... ] } }
which allows the value to be any of several types when not null.
Therefore, if we want to allow the email field in the example below to accept either string or, say, binary data values, an appropriate $type expression would be:
{email: {$type: ["string", "binData"]}}
implementation
mongoose
You can specify it in a mongoose schema:
const UsersSchema = new Schema({
  name: { type: String, trim: true, index: true, required: true },
  email: {
    type: String,
    trim: true,
    index: {
      unique: true,
      partialFilterExpression: { email: { $type: "string" } }
    }
  }
});
or directly add it to the collection (which uses the native node.js driver):
User.collection.createIndex("email", {
  unique: true,
  partialFilterExpression: {
    "email": { $type: "string" }
  }
});
native mongodb driver
using collection.createIndex
db.collection('users').createIndex({
  "email": 1
}, {
  unique: true,
  partialFilterExpression: {
    "email": { $type: "string" }
  }
},
function (err, results) {
  // ...
});
mongodb shell
using db.collection.createIndex:
db.users.createIndex({
  "email": 1
}, {
  unique: true,
  partialFilterExpression: {
    "email": { $type: "string" }
  }
})
This will allow inserting multiple records with a null email, or without an email field at all, but not with the same email string.
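The difference from a sparse index is the filter: documents whose email fails the {$type: "string"} test never enter the index at all, so both missing and null emails stay out. A plain-JS sketch of what the index enforces (illustrative, not MongoDB's implementation):

```javascript
// Illustrative: what the partial unique index above allows
function partialUniqueAllows(indexedEmails, doc) {
  if (typeof doc.email !== "string") return true; // fails the {$type: "string"} filter: never indexed
  if (indexedEmails.has(doc.email)) return false; // indexed: must be unique
  indexedEmails.add(doc.email);
  return true;
}

const seen = new Set();
console.log(partialUniqueAllows(seen, {}));                 // true: no email at all
console.log(partialUniqueAllows(seen, { email: null }));    // true: null is not a string
console.log(partialUniqueAllows(seen, { email: null }));    // true: multiple nulls are fine
console.log(partialUniqueAllows(seen, { email: "a@b.c" })); // true: first use of this string
console.log(partialUniqueAllows(seen, { email: "a@b.c" })); // false: duplicate string
```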
Just a quick update to those researching this topic.
The selected answer will work, but you might want to consider using partial indexes instead.
Changed in version 3.2: Starting in MongoDB 3.2, MongoDB provides the option to create partial indexes. Partial indexes offer a superset of the functionality of sparse indexes. If you are using MongoDB 3.2 or later, partial indexes should be preferred over sparse indexes.
More doco on partial indexes: https://docs.mongodb.com/manual/core/index-partial/
Actually, only the first document where the "email" field does not exist will save successfully. Subsequent saves where "email" is not present will fail with an error (see the code snippet below). For the reason, look at the MongoDB official documentation on unique indexes and missing keys at http://www.mongodb.org/display/DOCS/Indexes#Indexes-UniqueIndexes.
// NOTE: Code to executed in mongo console.
db.things.ensureIndex({firstname: 1}, {unique: true});
db.things.save({lastname: "Smith"});
// Next operation will fail because of the unique index on firstname.
db.things.save({lastname: "Jones"});
By definition, a unique index allows each value to be stored only once. If you consider null to be one such value, it can only be inserted once! You are correct in your approach of ensuring and validating it at the application level; that is how it can be done.
You may also like to read this http://www.mongodb.org/display/DOCS/Querying+and+nulls
