Let's say I have this model:
const employeeSchema = new Schema({
  name: String,
  age: Number,
  employeeData: {
    department: String,
    position: String,
    lastTraining: Date
  }
});
const Employee = mongoose.model('employee', employeeSchema);
In the database, the only thing that is going to be saved is something that looks like this:
{
  _id: ...,
  name: 'John Smith',
  age: 40,
  employeeCode: '.... '
}
What's going on is that, per some business rules, the employeeData info, which comes from the request body, goes through a function that compiles an employeeCode out of it, and when saving to the database I only use the employeeCode.
Right now, the way I am implementing this is using statics. So, I have the following in the model:
employeeSchema.statics.compileEmployeeCode = (doc) => {
  if (doc.employeeData) {
    doc.employeeCode = compileCode(doc.employeeData);
    delete doc.employeeData;
  }
  return doc;
}
And then, I need to remember, for each call that receives info from the client, to call this function before creating the document (an instance of the model):
const compiledDoc = Employee.compileEmployeeCode(req.body);
const employee = new Employee(compiledDoc);
My question is: is there a way to automatically invoke some function that compiles the code out of the data any time I create a document like that, so I won't need to remember to always call on the static method beforehand?
Middleware is what you are looking for. You need to create a function that sets a pre-save hook on the schema (which will be triggered every time before a document is saved) and plug this function into the schema.
function compileEmployeeCode(schema) {
  // use a regular function (not an arrow function) so `this` is the document being saved
  schema.pre('save', function(next) {
    if (this.employeeData) {
      this.employeeCode = compileCode(this.employeeData);
      this.employeeData = undefined; // unset the path so it is not persisted
    }
    next();
  });
}
employeeSchema.plugin(compileEmployeeCode);
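With the plugin applied before the model is compiled (hooks added after mongoose.model() is called are not picked up), creating and saving a document is enough; a minimal sketch reusing the schema and compileCode from the question:
employeeSchema.plugin(compileEmployeeCode);
const Employee = mongoose.model('employee', employeeSchema);

// employeeData comes straight from the request body; the pre-save hook
// compiles employeeCode and clears employeeData before the write.
const employee = new Employee(req.body);
employee.save().then((saved) => {
  // saved.employeeCode is set; employeeData is not persisted
});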
OK. It was really hard but I finally managed to find a solution. The trick is to use a setter on a specific path. Each field in the schema is of type SchemaType, which can have a setter applied to it:
https://mongoosejs.com/docs/api.html#schematype_SchemaType-set
Anyway, if I want to make it possible for the request to pass in an object that will be converted to some other format, say a string, I would need to define the schema like this:
const employeeSchema = new Schema({
  name: String,
  age: Number,
  employeeCode: {
    type: String,
    set: setCodeFromObj,
    alias: 'employeeData'
  }
});
The setter function I'm using here looks something like this (I'm omitting all the error handling and the like to keep this short):
function setCodeFromObj(v) {
  const obj = {};
  obj.department = v.department;
  obj.position = v.position;
  obj.lastTraining = v.lastTraining;
  // breaking the object into properties just to show that v actually includes them
  return compileEmployeeCode(obj);
}
I used an alias so that the name visible to the user is different from what is actually saved in the database. I could have also done that using virtuals, or just designed the system a bit differently to use the same name.
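For illustration, a short usage sketch of the alias + setter combination (the department/position values are made up; it assumes the schema and setCodeFromObj above):
const Employee = mongoose.model('employee', employeeSchema);
const employee = new Employee({
  name: 'John Smith',
  age: 40,
  // the alias routes this object to employeeCode, whose setter compiles it into a string
  employeeData: { department: 'R&D', position: 'Developer', lastTraining: new Date() }
});
// employee.employeeCode now holds the compiled string; the object itself is never stored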
I'm using Mongoose and I'm not at an advanced stage with it, so I need some help with a few specific points. I will try to keep my examples clear and without much surrounding context.
First of all, I'm setting up some relationships in my schemas. Before I create or edit any of them, I verify that the provided ObjectId exists in the database when necessary.
const VehicleSchema = new Schema({
  name: String,
})
const PersonSchema = new Schema({
  name: String,
  vehicle: Schema.Types.ObjectId // relation with vehicle
})
PersonSchema.pre('save', async function () {
  // ...
  // regular function so `this` is the document; countDocuments returns a promise,
  // so it has to be awaited before the result can be checked
  if (!(await Vehicles.countDocuments({ _id: this.vehicle }))) throw new Error('blabla')
  // ...
})
Is there any better way to do this or is this the best way possible to make sure that my doc exists?
I was thinking about three possibilities to make this faster, but I'm not sure they are secure and consistent:
Create a custom ObjectId that indicates the modelName of my Schema in it. Something like:
function createObjectIdByModelName(modelName) {
  return new ObjectId(`${modelName}-${uuid.v4()}`)
}
and then:
function validateObjectIdByModelName(_id, expectedModel) {
  const modelName = mongoose.model.get(_id).modelName
  return modelName === expectedModel
}
Use some cache package like recachegoose or speedgoose
Make my requests have an "origin" where I could create some rules like:
// This is a simple example of course, but the idea is that if the origin of my
// request is my frontend, I would trust it, so my validation would be skipped.
// Otherwise I validate it normally.
if (origin !== 'frontend') {
  if (!(await Vehicles.countDocuments({ _id: this.vehicle }))) throw new Error('blabla')
}
What do you think? This has been blowing my mind for weeks now.
I have the following situation:
I am creating a general-purpose update function for my project, which takes a payload and goes through it, checking whether each property in the payload exists on the model schema; if it does, it assigns the new property value to the document (updating it). It does this in subdocuments as well (recursively).
I have a custom-defined type Language for multi-language string fields, which is an object that contains properties in the form of language codes ('en', 'de', etc.). Now, since it's a custom type, Mongoose doesn't know whether its contents were modified, so I have to use markModified on it.

And here comes the problem: actual subschemas behave differently here than objects. If I call markModified on a subschema, it expects a path within that subschema, not within the entire document. On the other hand, if I call markModified on an object, it expects the entire path from the parent. I don't know whether this is a bug or not, but if I want to support both, I need to differentiate between the two in my function. Is there a way to know whether it's a subschema made by the user or just an object (that was converted to a subschema by Mongoose)?
Example setup model:
const NestedTestSchema = new Schema(
  {
    name: {
      type: Language
    }
  }
)
const TestSchema = new Schema(
  {
    object: {
      name: {
        type: Language
      }
    },
    nestedSchema: {
      type: NestedTestSchema
    }
  }
)
Example code:
const testDocument = new TestModel({
  object: {
    name: {
      en: 'NameEN',
      de: 'NameDE'
    }
  },
  nestedSchema: {
    name: {
      en: 'NameEN',
      de: 'NameDE'
    }
  }
})
// We make a payload to change these values
const payload = {
  object: { // Update object
    name: {
      en: 'Name updated',
      fr: 'Something',
    }
  },
  nestedSchema: { // Update subschema
    name: {
      en: 'Name updated',
      fr: 'Something',
    }
  }
}
And now when I receive this and update the document with these values, for the subschema I have to
const { object, nestedSchema } = document // This, of course, is useless here; I would get nestedSchema and object as arguments in the recursive function. It's only for demonstration
nestedSchema.markModified('name.en') // Etc
and for the object I have to
object.markModified('object.name.en') // Etc
Together with co-workers, we found out that object is not an actual subschema; it's called a nestedPath. Only nestedPaths have the property $__isNested; subschemas don't. As a result, because of this different handling of the two cases, with nestedPaths we need to specify the full path when using markModified, while with subschemas we only specify the path within that subschema.
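Based on that finding, the distinction can be handled inside the recursive update helper itself. A minimal sketch (the helper name and arguments are illustrative, not taken from the original code):
function markLanguageModified(doc, container, containerPath, fieldPath) {
  if (container.$__isNested) {
    // nestedPath (plain object converted by Mongoose): markModified needs the
    // full path and is called on the parent document
    doc.markModified(`${containerPath}.${fieldPath}`)
  } else {
    // real subdocument (user-defined schema): markModified takes the path
    // relative to the subdocument itself
    container.markModified(fieldPath)
  }
}
// e.g. markLanguageModified(testDocument, testDocument.object, 'object', 'name.en')
//      markLanguageModified(testDocument, testDocument.nestedSchema, 'nestedSchema', 'name.en')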
How do I represent a field that could be either a simple ObjectId string or a populated Object Entity?
I have a Mongoose Schema that represents a 'Device type' as follows
// assetSchema.js
import * as mongoose from 'mongoose'
const Schema = mongoose.Schema;

const Asset = new Schema({
  name: String,
  linked_device: { type: Schema.Types.ObjectId, ref: 'Asset' }
});

export const AssetSchema = mongoose.model('Asset', Asset);
I am trying to model this as a GraphQLObjectType, but I am stumped on how to allow the linked_device field to take on two types of values, one being an ObjectId and the other being a full Asset object (when it is populated).
// graphql-asset-type.js
import { GraphQLObjectType, GraphQLString } from 'graphql'

export var GQAssetType = new GraphQLObjectType({
  name: 'Asset',
  fields: () => ({
    name: { type: GraphQLString },
    linked_device: ____________ // stumped by this
  })
});
I have looked into union types, but the issue is that a union type expects fields to be stipulated as part of its definition, whereas in the case above there are no fields beneath the linked_device field when linked_device corresponds to a simple ObjectId.
Any ideas?
As a matter of fact, you can use a union or interface type for the linked_device field.
Using union type, you can implement GQAssetType as follows:
// graphql-asset-type.js
import { GraphQLObjectType, GraphQLString, GraphQLUnionType } from 'graphql'
var LinkedDeviceType = new GraphQLUnionType({
  // GraphQL type names cannot contain spaces
  name: 'LinkedDevice',
  // ObjectIdType is assumed to be an object type defined elsewhere
  // (union member types must be object types)
  types: [ ObjectIdType, GQAssetType ],
  resolveType(value) {
    if (value instanceof ObjectId) {
      return ObjectIdType;
    }
    if (value instanceof Asset) {
      return GQAssetType;
    }
  }
});
export var GQAssetType = new GraphQLObjectType({
  name: 'Asset',
  fields: () => ({
    name: { type: GraphQLString },
    linked_device: { type: LinkedDeviceType },
  })
});
Check out this excellent article on GraphQL union and interface.
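For reference, clients then select fields per member type with inline fragments. A hypothetical query (it assumes a root field named asset and that ObjectIdType is an object type exposing an id field; neither is defined in the code above):
query {
  asset {
    name
    linked_device {
      ... on ObjectIdType {
        id
      }
      ... on Asset {
        name
      }
    }
  }
}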
I was trying to solve the general problem of pulling relational data when I came across this article. To be clear, the original question appears to be how to dynamically resolve data when the field may contain either the ObjectId or the object; however, I don't believe it's good design in the first place to have a field store either an object or an ObjectId. Accordingly, I was interested in solving the simplified scenario where I keep the fields separated: one for the id, and the other for the object. I also thought employing unions was overly complex unless you actually have another scenario like those described in the docs referenced above. I figured the solution below may interest others as well...
Note: I'm using graphql-tools, so my types are written in schema language syntax. So, if you have a User type with fields like this:
type User {
  _id: ID
  firstName: String
  lastName: String
  companyId: ID
  company: Company
}
Then in my user resolver functions code, I add this:
User: { // <-- this refers to the User type in GraphQL
  company(u) { // <-- this refers to the company field
    return Company.findOne({ _id: u.companyId }); // <-- mongoose Company model
  },
}
The above works alongside the User resolver functions already in place, and allows you to write GQL queries like this:
query getUserById($_id: ID!) {
  getUserById(_id: $_id) {
    _id
    firstName
    lastName
    company {
      name
    }
    companyId
  }
}
Context
So we have migrated from Parse.com to a hosted MongoDB database. Now I have to write a script that queries our database directly (not using Parse).
I'm using nodejs / mongoose and am able to retrieve these documents.
Problem
Here is my schema so far:
var StorySchema = new mongoose.Schema({
  _id: String,
  genre: String
});
var ActivitySchema = new mongoose.Schema({
  _id: String,
  action: String,
  _p_story: String /* Also tried: { type: mongoose.Schema.Types.ObjectId, ref: 'Story' } and { type: String, ref: 'Story' } */,
});
I would like to write a query that fetches these documents with the related Story (stored as a pointer).
Activity
  .find({
    action: 'read',
  })
  .exec(function(error, activities) {
    activities.forEach(function(activity) {
      // I would like to use activity._p_story or whatever the means to access the story here
    });
  });
Question
Is there a way to have the fetched activities populated with their story, given that the _p_story field contains Story$ before the object id?
Thanks!
One option I have been looking at is creating a custom data type for each pointer. The unfortunate side is that Parse treats these as 'belongsTo' relationships but does not store the 'hasMany' relationship that Mongoose wants for populate(). But once this is in place you can easily do loops to get the relational data. Not ideal, but it works, and it is what populate is really doing under the hood anyway.
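As a rough sketch of that manual lookup loop (assuming the stored values look like 'Story$<objectId>' as described in the question, and that a Story model was compiled from StorySchema):
Activity
  .find({ action: 'read' })
  .exec(function(error, activities) {
    // strip the Parse class prefix to get the raw story ids
    var storyIds = activities.map(function(activity) {
      return activity._p_story.replace('Story$', '');
    });
    Story.find({ _id: { $in: storyIds } }, function(error, stories) {
      activities.forEach(function(activity) {
        var storyId = activity._p_story.replace('Story$', '');
        // attach the matching story (use toObject()/lean() copies if the result
        // needs to be serialized back to a client)
        activity.story = stories.find(function(story) {
          return story._id === storyId;
        });
      });
    });
  });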
PointerTypeClass.js -> This would work for populating the opposite direction.
var Pointer = function(mongoose) {
  function PointerId(key, options) {
    mongoose.SchemaType.call(this, key, options, 'PointerId');
  }
  PointerId.prototype = Object.create(mongoose.SchemaType.prototype);
  // cast assigned values into Parse's pointer string format
  PointerId.prototype.cast = function(val) {
    return 'Pointer$' + val;
  };
  return PointerId;
};
module.exports = Pointer;
Also be sure mongoose knows about the new type by doing mongoose.Schema.Types.PointerId = require('./types/PointerTypeClass')(mongoose);
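Once registered, the type can be used in a schema like any other; a hypothetical usage reusing the ActivitySchema from the question:
var ActivitySchema = new mongoose.Schema({
  _id: String,
  action: String,
  // values assigned to this path are run through PointerId.prototype.cast
  _p_story: { type: mongoose.Schema.Types.PointerId, ref: 'Story' },
});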
Lastly, if you are willing to write some Cloud Code, you could create the array of ids for your populate to know about the objects. Basically, in your Object.beforeSave you would update the array with the id for the relationship. Hope this helps.
I have a Mongoose schema that looks like this:
ManifestSchema = new Schema({
  entries: [{
    order_id: String,
    line_item: {}, // <-- resolved at run time
    address: {}, // <-- resolved at run time
    added_at: Number,
    stop: Number,
  }]
}, { collection: 'manifests', strict: true });
and somewhere in the code I have this:
Q.ninvoke(Manifests.findById(req.params.id), 'exec')
  .then(function(manifest)
  {
    // ... so many things, like resolving the address and the item information
    // (entry here is one of manifest.entries; the iteration is omitted)
    entry.line_item = item;
    entry.address = order.delivery.address;
  })
The issue I faced is that without defining address and line_item in the schema, when I resolved them at run time they wouldn't be returned to the user because they weren't in the schema... so I added them... which caused another unwanted behavior: when I saved the object back, both address and line_item were saved with the manifest object, something I would like to avoid.
Is there any way to enable adding fields to the schema at run time, yet not save them on the way back?
I was trying to use 'virtuals' in Mongoose, but they don't really provide what I need because I don't create the model from a schema; it is rather returned from the database.
Call toObject() on your manifest Mongoose instance to create a plain JavaScript copy that you can add extra fields to for the user response without affecting the doc you need to save:
Q.ninvoke(Manifests.findById(req.params.id), 'exec')
  .then(function(manifest)
  {
    var manifestResponse = manifest.toObject();
    // ... so many things, like resolving the address and the item information
    // (entry here should be one of manifestResponse.entries, so the runtime-only
    // fields land on the plain copy instead of the document you save)
    entry.line_item = item;
    entry.address = order.delivery.address;
  })
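A hypothetical continuation inside the same .then callback (assuming res is the Express response object): the enriched plain copy goes back to the client, while the untouched document is the one that gets saved.
    // send the enriched plain copy to the client
    res.json(manifestResponse);
    // manifest itself never received line_item/address, so saving it stays clean
    return Q.ninvoke(manifest, 'save');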