I am building a schema in mongoose (v4.13.8) with an Array of Mixed values. I have come up with the following Schema:
var deviceConfigSchema = new mongoose.Schema({
    capabilities: {
        type: [capabilitySchema],
        required: true,
        validator: [isValidCapabilities, "Not a valid capability array"]
    },
    services: {
        type: [{}],
        required: true,
        validator: [isValidServices, "Not a valid service array"]
    }
});
The problem is that when I try to submit data, I get a validation error: services: Path 'services' is required. What is strange is that the data I send for 'capabilities' works fine, and the only difference is that I specify a schema explicitly.
Removing the required: true from services instead leaves an empty array in the returned document.
I am submitting the data via an API POST request with the data in the body of the request, using Postman with x-www-form-urlencoded checked. This is copied from the body key-value input:
capabilities[0][field_map][field]:pressure
capabilities[0][field_map][type]:float
capabilities[0][field_map][format]:hPa
services[0][name]:rest
services[0][receive][0][capability_id]:0
services[0][receive][0][path]:/api/relay/0
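For what it's worth, bracketed keys like these only arrive as nested arrays/objects if the urlencoded body parser runs in extended mode (which delegates to the qs library); a minimal sketch, assuming Express with body-parser:
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
// extended: true enables qs-style parsing, so capabilities[0][field_map][field]
// becomes req.body.capabilities[0].field_map.field
app.use(bodyParser.urlencoded({ extended: true }));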
Update:
I'd like to apologise as this was a mistake on my part. I dynamically create a configuration based on the request and at one point the copied services were being made null, doh!
However, having got the required: true validation to pass, the custom validator is still not being executed. I also can't find any documentation about the order in which validators are executed, which would be very useful. Below is the validator snippet for reference:
function isValidServices(services) {
    for (const service of services) {
        if (typeof service.name !== 'string') return false;
    }
    return true;
}
Having experimented with various approaches and looked in more detail at the mongoose API docs, I found that there is a validate option for schema types too: http://mongoosejs.com/docs/api.html#schematype_SchemaType-validate
I changed my Schema from this:
var deviceConfigSchema = new mongoose.Schema({
    capabilities: {
        type: [capabilitySchema],
        required: true,
        validator: [isValidCapabilities, "Not a valid capability array"]
    },
    services: {
        type: [{}],
        required: true,
        validator: [isValidServices, "Not a valid service array"]
    }
});
To this [notice the validate instead of validator]...
var deviceConfigSchema = new mongoose.Schema({
    capabilities: {
        type: [capabilitySchema],
        required: true,
        validate: [isValidCapabilities, "Not a valid capability array"]
    },
    services: {
        type: [{}],
        required: true,
        validate: [isValidServices, "Not a valid service array"]
    }
});
After this my validator functions were being executed without any issues. Hopefully this helps someone.
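To see the validators fire, here is a minimal sketch (assuming the schema above; the sample data mirrors the Postman request, and the non-string service name is deliberately invalid):
var DeviceConfig = mongoose.model('DeviceConfig', deviceConfigSchema);

var doc = new DeviceConfig({
    capabilities: [{ field_map: { field: 'pressure', type: 'float', format: 'hPa' } }],
    services: [{ name: 42 }] // not a string, so isValidServices rejects it
});

doc.validate(function (err) {
    // Expect: err.errors.services.message === "Not a valid service array"
    console.log(err.errors.services.message);
});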
I have an API using Express and MongoDB, and I use AJV validation to validate the incoming requests.
// JSON Schema
var recordJsonSchema = {
    type: "object",
    properties: {
        name: { type: "string" },
        idNumber: { type: "number" },
        content: { type: "string" }
    },
    required: ['name', 'idNumber']
}
And I'd use this JSON schema to validate incoming requests like so.
app.post('/record', (req, res) => {
    // ajv.validate returns a boolean; validation failures land on ajv.errors
    let errors = ajv.validate(recordJsonSchema, req.body) ? null : ajv.errors;
    return errors ? res.send(errors) : res.send(this.handler(req));
})
This works fine and is very fast. I also like JsonSchema since it follows OpenAPI standards.
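As a side note, the usual AJV pattern is to compile the schema once at startup and reuse the compiled function per request, which skips re-processing the schema each time; a sketch using the same names:
var validateRecord = ajv.compile(recordJsonSchema);

app.post('/record', (req, res) => {
    // validateRecord(data) returns a boolean; failures land on validateRecord.errors
    return validateRecord(req.body) ? res.send(this.handler(req)) : res.send(validateRecord.errors);
})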
Unfortunately, in order to read/write to mongo via mongoose I also need to create a MongoSchema for Record. They are very similar but a bit different in how they handle required fields etc.
var recordSchema = new Schema({
    name: { type: "string", required: true },
    idNumber: { type: "number", required: true },
    content: { type: "string" }
})
So for my model of Record I have two schemas now. One for JSONschema and one for handling Mongo read/writes.
I'm looking for a way to cut MongoSchema, any suggestions?
Maybe try this; it seems to import your AJV schema and use it inside the Mongoose schema: https://www.npmjs.com/package/mongoose-ajv-plugin
I have faced the same problem. I think with the newer MongoDB 4.4 we can load a JSON schema directly into MongoDB: https://docs.mongodb.com/manual/reference/operator/query/jsonSchema/
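For reference, such server-side validation is attached when creating (or modifying) a collection; a minimal sketch in the mongo shell, hand-translated from the JSON Schema above (the collection name records is an assumption):
db.createCollection("records", {
    validator: {
        $jsonSchema: {
            bsonType: "object",
            required: ["name", "idNumber"],
            properties: {
                name: { bsonType: "string" },
                idNumber: { bsonType: "number" },
                content: { bsonType: "string" }
            }
        }
    }
});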
I am trying to create a simple server application in Node.js using the waterline-orientdb package, where several users can invoke several methods. Before a user can do anything, the user needs to authenticate with a username and password. Upon authentication, the user object is given a token that is piggybacked on future requests.
When a user is given a token, an update query is invoked. When invoking the update request I get the following error:
ERROR err: { [OrientDB.RequestError: expression item ']' cannot be resolved because current record is NULL]
name: 'OrientDB.RequestError',
message: 'expression item \']\' cannot be resolved because current record is NULL',
data: {},
previous: [],
id: 1,
type: 'com.orientechnologies.orient.core.exception.OCommandExecutionException',hasMore: 0 }
The strange thing is that the update is executed, so this error doesn't affect the update request itself. But because I want to catch all errors, I can't just ignore this.
My model looks like this:
module.exports = {
tableName: 'User',
identity: 'dbuser',
schema: true,
attributes: {
id: {
type: 'string',
primaryKey: true,
columnName: '#rid'
},
username: {
type: 'string',
required: true,
unique: true
},
password: {
type: 'string',
required: false
},
token: {
type: 'string'
},
follows: {
collection: 'dbuser',
via: 'followed',
dominant: true
},
followed: {
collection : 'dbuser',
via: 'follows'
}
};
As you can see, I'm associating two users with each other so that one user can follow the activities of the other. When I delete the association (follows and followed), the error also disappears.
The piece of code where the update happens looks like this:
user[0].token = generateToken(user[0]);
dbuser.update({
    id: user[0].id
}, user[0]).exec(function (error, data) {
    // return here so we don't send two responses
    if (error) return res.json(401, {
        code: 401,
        error: "Token could not be updated"
    });
    res.json(user);
});
Does anyone have an idea how to avoid this behavior, or what the error even means?
It seems to be a bug in the adapter.
You could try using:
npm install appscot/waterline-orientdb#refactor_collection
Apparently it will be resolved in v0.10.40.
More info about it: https://github.com/appscot/waterline-orientdb/issues/43#issuecomment-75890992
I've created an OData endpoint with Node by using the odata-server module by JayData, in this way:
require("odata-server");
$data.Entity.extend("Service", {
Id: {type: "id", key: true, computed: true, nullable: false},
Name: {type: "string", nullable: false, maxLength: 50}
});
$data.EntityContext.extend("marketplace", {
Services: {type: $data.EntitySet, elementType: Service}
});
$data.createODataServer(marketplace, "/marketplace", 8081, "localhost");
console.log("Marketplace OData Endpoint created... Listening at 8081.");
Then, still with Node, I've created an Express web application which receives some commands through GET requests, connects to the OData endpoint (still using JayData), receives some data from there, and sends the result back to the client (in the following code it just sends 200). It does this by defining a route:
require("jaydata");
...
app.get("/addCompare/:id", function(req, res) {
console.log("Comparison request for: " + req.params.id);
$data.Entity.extend("Service", {
Id: {type: "id", key: true, computed: true, nullable: false},
Name: {type: "string", nullable: false, maxLength: 50}
});
$data.EntityContext.extend("marketplace", {
Services: {type: $data.EntitySet, elementType: Service}
});
db = new marketplace("http://localhost:8081/marketplace");
db.onReady(function() {
var arr = db.Services.filter(function(s) {return s.Name.startsWith("Serv");}).toArray();
console.dir(arr);
});
res.send(200);
});
The problem is that when I try this code (by using this GET request, for example: http://www.localhost:8080/addCompare/NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5), I always get the following error on the server, after which it crashes:
TypeError: Value '$data.Object' not convertable to '$data.ObjectID'
{ name: 'TypeError',
message: 'Value \'$data.Object\' not convertable to \'$data.ObjectID\'',
data:
{ __metadata:
{ type: 'Service',
id: 'http://localhost:8081/marketplace/Services(\'NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5\')',
uri: 'http://localhost:8081/marketplace/Services(\'NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5\')' },
Id: 'NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5',
Name: 'Service51' } }
Where am I wrong? Thanks...
As the behavior was explained in OData - Strange index with MongoDB [Mongoose: Cast Error], the id - NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5 – should be base-64 decoded (for example 5343fd656b9c5c084b8f2a70 is a valid format).
Although the declaration of JayData model is correct, it will be re-defined every single time when a request arrives to your server. You can improve the current implementation by moving your $data.Entity.extend and $data.EntityContext.extend blocks outside the app.get – after require("jaydata");.
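Put together, a sketch of the restructured route might look like this (the base-64 decoding step follows the linked answer and is an assumption; the rest reuses the code from the question):
require("jaydata");

// Define the model once at startup instead of on every request.
$data.Entity.extend("Service", {
    Id: { type: "id", key: true, computed: true, nullable: false },
    Name: { type: "string", nullable: false, maxLength: 50 }
});
$data.EntityContext.extend("marketplace", {
    Services: { type: $data.EntitySet, elementType: Service }
});

app.get("/addCompare/:id", function(req, res) {
    // Decode the base-64 id into the plain hex form the backend expects,
    // e.g. "NTM0M2ZkNjU2YjljNWMwODRiOGYyYTU5" -> "5343fd656b9c5c084b8f2a59"
    var id = new Buffer(req.params.id, "base64").toString("utf8");
    console.log("Comparison request for: " + id);

    var db = new marketplace("http://localhost:8081/marketplace");
    db.onReady(function() {
        db.Services.filter(function(s) { return s.Name.startsWith("Serv"); })
            .toArray(function(services) {
                console.dir(services);
                res.send(200);
            });
    });
});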
The following code represents an Account model in Sails.js v0.9.4.
module.exports = {
    attributes: {
        email: {
            type: 'email',
            unique: true,
            required: true
        },
        password: {
            type: 'string',
            minLength: 6,
            maxLength: 15,
            required: true
        }
    }
};
When I send two POST requests and a PUT request via Postman to localhost:8080/account, the unique property of the email fails.
Specifically, I send the following HTTP requests from Postman:
POST http://localhost:8080/account?email=foo#gmail.com&password=123456
POST http://localhost:8080/account?email=bar#gmail.com&password=123456
PUT http://localhost:8080/account?id=1&email=bar#gmail.com
GET http://localhost:8080/account
The last GET request shows me:
[
    {
        "email": "bar#gmail.com",
        "password": "123456",
        "createdAt": "2013-09-30T18:33:00.415Z",
        "updatedAt": "2013-09-30T18:34:35.349Z",
        "id": 1
    },
    {
        "email": "bar#gmail.com",
        "password": "123456",
        "createdAt": "2013-09-30T18:33:44.402Z",
        "updatedAt": "2013-09-30T18:33:44.402Z",
        "id": 2
    }
]
Should this happen?
*For those who don't know, Waterline generates by default an id which automatically increments in every insertion.
This is because your schema is not updated in your disk database (".tmp/disk.db").
You need to shut down Sails, drop your DB, and restart Sails.
The DB will be reconstructed with the correct schema.
Attention: the data will be dropped too!
If you want to keep your data, you can just update the schema part of ".tmp/disk.db".
Here is what I did to keep the data and have Sails.js rebuild the schema:
copy ".tmp/disk.db"
clean ".tmp/disk.db"
shutdown sails.js
start sails.js
-> the database is empty and the schema is updated
copy old "counters" part
copy old "data" part
You must have this in your schema (file ".tmp/disk.db" -> "schema" part) for the unique field:
"xxx": {
    "type": "string",
    "unique": true
},
I hope this helps you.
I ran into this same issue. To solve it, you have to avoid using the 'disk' ORM adapter. For some reason it appears that it doesn't support uniqueness checks.
Other adapters such as mongo and mysql should support uniqueness checks, so this shouldn't be an issue outside of development.
For the course of development, change the default adapter in config/adapters.js from 'disk' to 'memory'. It should look like this:
module.exports.adapters = {
    // If you leave the adapter config unspecified
    // in a model definition, 'default' will be used.
    'default': 'memory',

    // In-memory adapter for DEVELOPMENT ONLY
    memory: {
        module: 'sails-memory'
    },
    ...
};
I'm not certain this is the issue, but have you added schema:true to your models and adapters?
My mongo adapter config looks like this:
module.exports.adapters = {
    'default': 'mongo',
    mongo: {
        module: 'sails-mongo',
        url: process.env.DB_URL,
        schema: true
    }
};
And my User model looks like this (trimmed a bit):
module.exports = {
    schema: true,
    attributes: {
        username: {
            type: 'string',
            required: true,
            unique: true
        }
        //...
    }
};
There is no need to delete the current database to solve this; instead, change the Waterline migrate option from safe to alter. This way the underlying database will be altered to match your schema.
I wouldn't recommend migrate: alter in a production environment, though. ;)
Here is my /config/local.js:
module.exports = {
    ...
    models: {
        migrate: 'alter'
    },
}
According to the official documentation of Sails, you should set the "migrate" option to "alter" to create the schemas with their indexes:
There's nothing wrong with adding or removing validations from your models as your app evolves. But once you go to production, there is one very important exception: unique. During development, when your app is configured to use migrate: 'alter', you can add or remove unique validations at will. However, if you are using migrate: safe (e.g. with your production database), you will want to update constraints/indices in your database, as well as migrate your data by hand.
http://sailsjs.com/documentation/concepts/models-and-orm/validations
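Updating the indices by hand typically comes down to something like this in the mongo shell (a sketch; the account collection name is assumed):
db.account.createIndex({ email: 1 }, { unique: true });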
var InvoiceSchema = new Schema({
    email: { type: String, required: true },
    name: { type: String }
});

// Create a unique index on email (the Mongoose way):
InvoiceSchema.index({ email: 1 }, { unique: true });
This sets a unique index in Node.js with Mongoose.
I'm building a node.js application with Mongoose and have a problem related to sorting embedded documents. Here's the schema I use:
var locationSchema = new Schema({
    lat: { type: String, required: true },
    lon: { type: String, required: true },
    time: { type: Date, required: true },
    acc: { type: String }
})

var locationsSchema = new Schema({
    userId: { type: ObjectId },
    source: { type: ObjectId, required: true },
    locations: [ locationSchema ]
});
I'd like to output the locations embedded in the userLocations document sorted by their time attribute. I currently do the sorting in JavaScript after I retrieve the data from MongoDB, like so:
function locationsDescendingTimeOrder(loc1, loc2) {
    return loc2.time.getTime() - loc1.time.getTime();
}

LocationsModel.findOne({ userId: theUserId }, function(err, userLocations) {
    userLocations.locations.sort(locationsDescendingTimeOrder).forEach(function(location) {
        console.log('location: ' + location.time);
    });
});
I did read about the sorting API provided by Mongoose, but I couldn't figure out whether it can be used for sorting arrays of embedded documents and, if so, whether it is a sensible approach and how to apply it to this problem. Can anyone help me out here, please?
Thanks in advance and cheers,
Georg
You're doing it the right way, Georg. Your other options are either to sort locations by time upon embedding in the first place, or going the more traditional non-embedded route (or minimally embedded route so that you may be embedding an array of ids or something but you're actually querying the locations separately).
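If you do go the sort-upon-embedding route, MongoDB's $push operator with the $each and $sort modifiers keeps the array ordered at write time (usable without $slice since MongoDB 2.6); a minimal sketch using the models from the question, where newLocation is a hypothetical location object:
LocationsModel.update(
    { userId: theUserId },
    // push the new location and re-sort the embedded array by time, newest first
    { $push: { locations: { $each: [newLocation], $sort: { time: -1 } } } },
    function(err) {
        if (err) console.error(err);
    }
);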
This can also be done using the Mongoose sort API.
LocationsModel.findOne({ userId: theUserId })
    // .sort({ "locations.time": "desc" }) // option 1
    .sort("-locations.time") // option 2
    .exec((err, result) => {
        // compute fetched data
    });
Sort by field in nested array with Mongoose.js
More methods are mentioned in this answer as well
Sorting options in Mongoose
Mongoose Sort API