I'm using Sails.js (0.9.8) and MongoDB (via the sails-mongo adapter) to create a collection of pages that can be positioned in a tree view. I would like to store the path of a page in an array of UUIDs.
My model:
module.exports = {
  schema: true,
  attributes: {
    uuid: {
      type: 'string',
      unique: true,
      required: true,
      uuidv4: true
    },
    name: {
      type: 'string',
      required: true,
      empty: false
    },
    path: {
      type: 'array',
      required: true,
      array: true
    }
  }
};
It works well when I save a 'root' page (the 'path' property has just one item because it's a root page). Here is what was saved in MongoDB:
{
  _id: ObjectId("52f853e9609fb6c0341bdfcc"),
  createdAt: ISODate("2014-02-10T04:22:01.828Z"),
  name: "Home Page",
  path: [
    "a2b23e1f-954b-49a3-91f1-4d62d209a093"
  ],
  updatedAt: ISODate("2014-02-10T04:22:01.833Z"),
  uuid: "a2b23e1f-954b-49a3-91f1-4d62d209a093"
}
But when I want to create a 'subpage' below my previously created page (Home Page/Products), I get this error:
MongoError: E11000 duplicate key error index: cms-project.item.$path_1
dup key: { : "a2b23e1f-954b-49a3-91f1-4d62d209a093" }
Here is the data I sent:
{
  name: 'Products',
  uuid: 'a004ee54-7e42-49bf-976c-9bb93c118038',
  path: [
    'a2b23e1f-954b-49a3-91f1-4d62d209a093',
    'a004ee54-7e42-49bf-976c-9bb93c118038'
  ]
}
I probably missed something but I don't know what.
If I store the path in a string instead of an array, it works well, but I find that much less elegant and handy.
I'm not sure about all the Sails/Waterline parts myself, as I've never used them, but the error shows what the problem is: there is a unique index on your array field.
When you insert your second document, one of the values in its path array (the parent's UUID) already exists in the path of another document, and the unique constraint will not allow that. For what you are modelling this is almost certainly not what you want, so the index cannot be unique.
I hope you set this up yourself, under the assumption that it meant unique within the array contained in a single document. If you did, you now know where to look and what to change. If the index is being deployed automatically somehow, then I'm not the one to help.
Change the index so it is not unique. You can confirm the current indexes through the mongo shell:
use cms-project
db.item.getIndices()
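If the unique index shows up there, you can drop it by the name reported in the error message and recreate it without the constraint. A minimal shell sketch, assuming you still want a (non-unique) index on path:
// The error names the offending index "path_1":
db.item.dropIndex("path_1")
// Recreate it without the unique constraint:
db.item.ensureIndex({path: 1})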
Good luck
Related
I have a DynamoDB schema that, let's say, goes like this:
const team = new dynamoose.Schema({
  bitbucketDetails: [
    {
      id: {
        type: String,
        required: true,
      },
      repository: {
        type: String,
      },
    },
  ],
});
I want to search for an item having, let's say, a particular repository "abc". How can I search for that?
So far I have tried something like:
team.scan({ bitbucketDetails: { contains: "abc" } }).all().exec()
but it keeps failing with ValidationException: One or more parameter values were invalid: ComparisonOperator CONTAINS is not valid for M AttributeValue type.
Please help!!
You are looking for the value in bitbucketDetails, but you want to look in bitbucketDetails.repository.
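A minimal sketch of the idea, hedged: `Team` is assumed to be the model compiled from the schema above, and since a DynamoDB filter expression cannot match "any element" of a list of maps, one workaround is to scan and then filter in application code:
// Assumption: Team is the dynamoose model built from the schema above.
// Run inside an async function.
const teams = await Team.scan().all().exec();
const matching = teams.filter((t) =>
  (t.bitbucketDetails || []).some((d) => d.repository === "abc")
);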
My application is split into multiple microservices that run on Heroku dynos (they can't access each other's files). Sometimes multiple microservices work with the same collection, so each of them needs the corresponding mongoose schema.
However, not every microservice needs the full schema. For example, microservice A needs the full schema, whereas microservice B only needs a few fields of it.
Example schema inside microservice A:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  phone: { type: String, required: true, unique: true },
  forename: { type: String, required: true },
  surname: { type: String, required: true },
  middleInitals: { type: String, required: false },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
});
Example Schema inside microservice B:
var AccountSchema = mongoose.Schema({
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true },
  failedLoginAttempts: { type: Number, required: true, default: 0 },
  lockUntil: { type: Number },
  createdAt: { type: Date, default: Date.now }
});
My approach
I would just go ahead and create a new schema in each microservice, containing only the needed fields. However, I am not sure whether there will be problems when multiple microservices register a schema against the same MongoDB collection. For example, both microservices would attempt to create an index for the unique field. Will there be any performance issues?
Does anybody have a different approach I could use? Is this even a valid approach to go with?
Thanks in advance :)
It's a valid approach. You can have two schemas pointing to the same collection; I have tested it and it works.
Mongoose is an Object Data Modeling (ODM) library, and you can have two objects looking at the same collection (a table or view in SQL) with no problem.
There is no reason for performance problems as long as you have the right indexes; this has nothing to do with the object modeling layer.
You might want to add some type key, so a get request can find only type-1/type-2 accounts. On find, you can restrict the returned fields with a projection (see the sketch below).
I think you should have only two keys in the index – email + password. If you keep a unique phone index and microservice B doesn't include a phone, you will get a violation on that unique phone index.
But if you really want a unique phone index, you can work around it: generate a temporary unique value for phone in microservice B (auto-generated, or a duplicate of the email value), ignore that value in microservice B, and only find/update/present phone in microservice A, where you have a valid phone. When a user changes from an account-B type to an account-A type, you must make sure to replace the placeholder phone number with a valid one.
I see no problem with two schemas for the same collection – you just need to manage your indexes the right way to avoid collisions, and to ensure you can differentiate the different account types in the collection.
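A minimal sketch of the two-schemas idea; the schema variables and the "accounts" collection name are assumptions, not from the original post:
// Microservice A compiles the full schema against the shared collection:
const AccountA = mongoose.model("Account", FullAccountSchema, "accounts");
// Microservice B (a separate process) compiles its reduced schema
// against the same collection:
const AccountB = mongoose.model("Account", ReducedAccountSchema, "accounts");

// In microservice B, restrict the returned fields with a projection
// (run inside an async function):
const account = await AccountB
  .findOne({ email: "user@example.com" })
  .select("email failedLoginAttempts lockUntil")
  .exec();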
As far as I can see, there is no reason you can't simply use the same schema; maybe use some sort of privilege system and have it return only the right information to each of these separate microservices. You could have a fetch request tell it which service it is coming from and return the right items using a simple conditional.
I'm trying to add a user to my users collection and keep getting a duplicate null key value error.
My Users model used to look like this:
const mongoose = require("mongoose");
const UserSchema = new mongoose.Schema({
  name: {
    type: String,
    required: true
  },
  email: {
    type: String,
    required: true,
    unique: true
  },
  profilePictures: [{
    link: {
      type: String
    },
    rank: {
      type: Number,
      unique: true
    }
  }]
});
module.exports = User = mongoose.model("users", UserSchema);
Before I changed the pictures field to
...
pictures = []
...
I believe that because I saved users under the former schema, the shape of the objects in the pictures array has been stored somewhere (they were given an ObjectId when I saved something to that array).
Even though I have changed the field to
pictures = []
I still get this error:
E11000 duplicate key error collection: testDB.users index: profilePictures.rank_1 dup key: { profilePictures.rank: null }
even though neither the profilePictures nor the rank field exists anymore.
I imagine I could just delete the users collection and start again, but I want to know if there is a better way to handle this error. Suppose I had 100 users in the collection – I wouldn't be able to just delete them all.
Thanks!
You added the unique property to profilePictures.rank in your model. On the first request it saves null, because you are presumably not providing a rank in your query.
The second time it again tries to save null, but the field is marked unique, so it throws the exception.
Solution: either
remove the unique flag from profilePictures.rank, or
provide a unique value for profilePictures.rank on every save.
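Also note that removing the flag from the schema does not drop the index MongoDB has already built, which is why the error persists even after the fields left the schema. A shell sketch to remove it, using the collection and index name from the error message:
use testDB
db.users.dropIndex("profilePictures.rank_1")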
I have a mongoose model (with a field that has a default):
var MySchema = new mongoose.Schema({
  name: {
    type: String,
    required: true
  },
  isClever: {
    type: Boolean,
    default: false
  }
});
I can save a model of this type by providing just a name, and in MongoDB only name can be seen in the document (not the isClever field). That's fine, because defaults happen at the mongoose level (?).
The problem I am having is that when I try to retrieve only people called john with isClever = false:
MySchema.find({
  'name': 'john',
  'isClever': false
}).exec(function(err, person) {
  // person is always null
});
It always returns null. Is this related to how defaults work with mongoose? Can we not match on a defaulted value?
According to the Mongoose docs, default values are applied when the document skeleton is constructed.
When you execute a find query, it is passed to Mongo before any document is constructed. Mongo is not aware of the defaults, and since no stored document has isClever explicitly set to false, the query matches nothing and the result is empty.
To get your example working, it should be:
MySchema.find({
  'name': 'john',
  'isClever': { $ne: true }
})
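Alternatively, you could backfill the missing field once so that direct equality queries match. A hedged shell sketch – the collection name myschemas is an assumption:
// Backfill documents saved before the default existed, so that
// find({isClever: false}) matches them directly:
db.myschemas.updateMany(
  { isClever: { $exists: false } },
  { $set: { isClever: false } }
)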
I was wondering if there is a way to force a unique collection entry, but only if the entry is not null.
Sample schema:
var UsersSchema = new Schema({
  name: {type: String, trim: true, index: true, required: true},
  email: {type: String, trim: true, index: true, unique: true}
});
'email' in this case is not required, but if an 'email' is saved I want to make sure that the entry is unique (at the database level).
Empty entries seem to get the value null, so every entry with no email crashes on the 'unique' option (if there is another user with no email).
Right now I'm solving it at the application level, but I would love to save that db query.
thx
As of MongoDB v1.8+, you can get the desired behavior of ensuring unique values while allowing multiple docs without the field, by setting the sparse option to true when defining the index. As in:
email : {type: String, trim: true, index: true, unique: true, sparse: true}
Or in the shell:
db.users.ensureIndex({email: 1}, {unique: true, sparse: true});
Note that a unique, sparse index still does not allow multiple docs with an email field with a value of null, only multiple docs without an email field.
See http://docs.mongodb.org/manual/core/index-sparse/
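A quick shell sketch of that distinction, using the users collection from the example above:
db.users.insertOne({name: "a"})               // ok – no email field, skipped by the sparse index
db.users.insertOne({name: "b"})               // ok – also skipped
db.users.insertOne({name: "c", email: null})  // ok – the explicit null is indexed once
db.users.insertOne({name: "d", email: null})  // E11000 duplicate key error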
tl;dr
Yes, it is possible to have multiple documents with a field set to null or not defined, while enforcing unique "actual" values.
requirements:
MongoDB v3.2+.
Knowing your concrete value type(s) in advance (e.g., always a string or object when not null).
If you're not interested in the details, feel free to skip to the implementation section.
longer version
To supplement #Nolan's answer, starting with MongoDB v3.2 you can use a partial unique index with a filter expression.
The partial filter expression has limitations. It can only include the following:
equality expressions (i.e. field: value or using the $eq operator),
$exists: true expression,
$gt, $gte, $lt, $lte expressions,
$type expressions,
$and operator at the top-level only
This means that the trivial expression {"yourField": {$ne: null}} cannot be used.
However, assuming that your field always uses the same type, you can use a $type expression.
{ field: { $type: <BSON type number> | <String alias> } }
MongoDB v3.6 added support for specifying multiple possible types, which can be passed as an array:
{ field: { $type: [ <BSON type1> , <BSON type2>, ... ] } }
which means it allows the value to be any of several types when not null.
Therefore, if we want to allow the email field in the example below to accept either string or, say, binary data values, an appropriate $type expression would be:
{email: {$type: ["string", "binData"]}}
implementation
mongoose
You can specify it in a mongoose schema:
const UsersSchema = new Schema({
  name: {type: String, trim: true, index: true, required: true},
  email: {
    type: String,
    trim: true,
    index: {
      unique: true,
      partialFilterExpression: {email: {$type: "string"}}
    }
  }
});
or directly add it to the collection (which uses the native node.js driver):
User.collection.createIndex("email", {
  unique: true,
  partialFilterExpression: {
    "email": {
      $type: "string"
    }
  }
});
native mongodb driver
using collection.createIndex:
db.collection('users').createIndex(
  { "email": 1 },
  {
    unique: true,
    partialFilterExpression: {
      "email": { $type: "string" }
    }
  },
  function (err, results) {
    // ...
  }
);
mongodb shell
using db.collection.createIndex:
db.users.createIndex(
  { "email": 1 },
  {
    unique: true,
    partialFilterExpression: {
      "email": { $type: "string" }
    }
  }
)
This will allow inserting multiple records with a null email, or without an email field at all, but not with the same email string.
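A hedged sketch of the resulting behavior, assuming `User` is the mongoose model compiled from the UsersSchema above (run inside an async function):
await User.create({ name: "a" });                   // ok – no email field
await User.create({ name: "b", email: null });      // ok – null is not a string, so not indexed
await User.create({ name: "c", email: null });      // ok – multiple nulls allowed
await User.create({ name: "d", email: "x@y.io" });  // ok – first use of this string
await User.create({ name: "e", email: "x@y.io" });  // E11000 duplicate key error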
Just a quick update for those researching this topic.
The selected answer will work, but you might want to consider using partial indexes instead.
Changed in version 3.2: Starting in MongoDB 3.2, MongoDB provides the option to create partial indexes. Partial indexes offer a superset of the functionality of sparse indexes. If you are using MongoDB 3.2 or later, partial indexes should be preferred over sparse indexes.
More documentation on partial indexes: https://docs.mongodb.com/manual/core/index-partial/
Actually, only the first document where the "email" field does not exist will be saved successfully. Subsequent saves with no "email" present will fail with an error (see the code snippet below). For the reason, see the official MongoDB documentation on unique indexes and missing keys at http://www.mongodb.org/display/DOCS/Indexes#Indexes-UniqueIndexes.
// NOTE: Code to executed in mongo console.
db.things.ensureIndex({firstname: 1}, {unique: true});
db.things.save({lastname: "Smith"});
// Next operation will fail because of the unique index on firstname.
db.things.save({lastname: "Jones"});
By definition, a unique index allows any given value to be stored only once. If you consider null to be one such value, it can only be inserted once! You are correct in your approach of ensuring and validating it at the application level; that is how it can be done.
You may also like to read this http://www.mongodb.org/display/DOCS/Querying+and+nulls