I've been using Loopback to create an API. The documentation is generally really good but doesn't really answer my question about the following: how do I extend (not replace) a built in model?
The most promising piece of information came from this page - it specifies the way of basing a class from another class, via inheritance. This is useful but not ideal - I'd like to create relationships to custom models from the stock models, for example - "Role" should have many "Permission".
The page I mention also shows a JavaScript file, located at common/models/<modelName>.js, where it states you can "extend" a model based on the properties and options you give it. The server never seems to hit the file, though... For example, I put a file in common/models/role.js with the following content:
var loopback = require('loopback');

var properties = {
  exampleProperty: {type: String, required: true}
};

var user = loopback.Model.extend('Role', properties);
console.log('test');
First off, it doesn't seem to hit the file at all (no console.log output given). Second, obviously because of the first point, it doesn't extend the model with the properties I created.
Am I missing something obvious or is the documentation just plain wrong?
You should generate a new model via slc loopback:model and name it user. By default, the built-in user model is named User, which is why you can use lowercase user (or even UserModel if you prefer). Then, when you are prompted by the model generator for a base model, choose User. See https://github.com/strongloop/loopback-faq-user-management/blob/master/common/models/user.json#L3
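The relevant part of the linked user.json looks roughly like this; the surrounding keys are paraphrased from memory, so treat everything except "base": "User" as an assumption:

{
  "name": "user",
  "base": "User",
  "properties": {},
  "relations": {},
  "acls": []
}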
Related
When using the JHipster entity generator from the console, a question like the following lets you change the display field on the front-end:
When you display this relationship with AngularJS, which field from
'user' do you want to use? (id)
Is it possible to do the same with JDL?
Edited
OK, it appears that nobody else has this problem, or maybe nobody is home?
So, I'm going to explain it. Let's suppose this scenario:
Create two (or more) entities with a relationship using the console.
Choose a field other than id to display on the front-end.
Test the project (it works fine).
Export the model to a JDL file.
Import the JDL file, with no changes, into another new project.
Boom! The UI falls back to the id reference.
Is there any option to resolve this, or is it resolved in other tools like the UML import? Any future plans?
Resolved
Reading the documentation in detail, I've found a solution:
"Declare the field you want to use to display a relationship in AngularJS.
To do that you must add the field name between () after the injected field name."
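For example, assuming a Book entity with a many-to-one relationship to an Author entity that has a name field, the JDL would be (the field in parentheses after the injected field name is the one displayed in the front-end):

relationship ManyToOne {
  Book{author(name)} to Author
}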
I recently loaded several versions of MEAN.JS (meanjs.org) to understand the file structure better and view the changes.
In 4.0 (articles.client.controller.js, to be specific) they have the following, and I'm able to make changes to new Articles there since I appended new fields to the mongoose Schema:
$scope.create = function () {
  // Create new Article object
  var article = new Articles({
    title: this.title,
    content: this.content
  });
  // ...
};
In 4.1, it comes like this.
// Create new Article object
var article = new Articles({
  title: this.title,
  content: this.content
});
Now with 4.2, I don't see that in articles.client.controller.js, only:
vm.article = article;
I have my modified Schema version. How do I make changes to the creation of a new Articles object? This is a good question for anyone upgrading an app from 4.0 or 4.1 to 4.2.
It’s changed slightly.
Trying to use the resources directly as was done in version 4 may cause problems where the page is ready but not the resource (Article).
To get around this problem Angular uses resolves, which use promises to handle the timing issues.
The important thing to know is that the promise will give you some answer at some point in the future - Just, it may not be the answer you'd like!
Either way, it always tells you once it has the answer - or more correctly once it has resolved.
Angular uses the promises to help out with the timing issues mentioned above. The resolves are keyed to a promise and will only load the controller once the resolve(s)... erm... resolve!
This means we will always have articles when we expect.
Promises, resolves - Let me at'em - Let me at'em!!
The resolve option is used in the updated articles.client.routes.js. Here we see that articleResolve is keyed to getArticle, which isn't a promise itself but is instead a function which returns one (which is just as good!)
If we look at the lines below we can see how we create this promise-returning function. It's a function which uses Angular's $stateParams (to inspect the state) to fill in the articleId for the requested article. We get the article using the injected and familiar Articles service.
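A rough sketch of the relevant route definition and of getArticle in 4.2 (paraphrased from memory, so treat the exact module names, paths and field names as assumptions):

// modules/articles/client/config/articles.client.routes.js (roughly)
angular.module('articles').config(routeConfig);

routeConfig.$inject = ['$stateProvider'];

function routeConfig($stateProvider) {
  $stateProvider.state('articles.view', {
    url: '/:articleId',
    templateUrl: 'modules/articles/client/views/view-article.client.view.html',
    controller: 'ArticlesController',
    controllerAs: 'vm',
    resolve: {
      articleResolve: getArticle
    }
  });
}

getArticle.$inject = ['$stateParams', 'ArticlesService'];

function getArticle($stateParams, ArticlesService) {
  // Returns a promise for the article whose id is in the current state params
  return ArticlesService.get({
    articleId: $stateParams.articleId
  }).$promise;
}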
In your case you want to know how new articles are created so we must travel a little further into the articles service which has recently been updated.
This is almost the same as the Articles service which you are used to using; however, the additional lines add an extra method to this service which allows it to create the article, or to save the article details if it already exists.
These lines are how we extend a service in Angular, and the implementation below basically checks the article to see whether it has the ._id property, which every saved MongoDB document gets (Mongoose also exposes a string version of it as .id).
It uses this information to call the appropriate method.
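Sketched out, the updated service looks something like this (again paraphrased, so treat the exact module name and details as assumptions):

// modules/articles/client/services/articles.client.service.js (roughly)
angular.module('articles').factory('ArticlesService', ArticlesService);

ArticlesService.$inject = ['$resource'];

function ArticlesService($resource) {
  var Article = $resource('api/articles/:articleId', {
    articleId: '@_id'
  }, {
    update: {
      method: 'PUT'
    }
  });

  // The extra method: controllers can call article.createOrUpdate()
  // without caring whether the article already exists
  angular.extend(Article.prototype, {
    createOrUpdate: function () {
      return createOrUpdate(this);
    }
  });

  return Article;

  function createOrUpdate(article) {
    if (article._id) {
      return article.$update();  // existing document: PUT
    }
    return article.$save();      // new document: POST
  }
}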
Finally, back where we began
In the controller we see the earlier created promise key articleResolve used as the second injected argument; as if to say "when you have this article resolved, use it as the second parameter when injecting the arguments".
When we look at the controller definition, we notice that the corresponding second parameter is named article.
Background: within a controller that uses controllerAs, this points to the controller instance, which Angular exposes to the views under the chosen alias. By convention†, and to make things in Angular look like standard JavaScript where we often say var that = this, we create a variable (vm) to reference it.
Within the controller we attach this article to the scope so that it is accessible in the views via vm.article.
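Put together, the 4.2 controller looks roughly like this (a sketch; the real file injects a few more things, such as $state and Authentication):

// modules/articles/client/controllers/articles.client.controller.js (roughly)
angular.module('articles').controller('ArticlesController', ArticlesController);

ArticlesController.$inject = ['$scope', 'articleResolve'];

function ArticlesController($scope, article) {
  var vm = this;

  // The resolved article arrives as the second injected argument
  vm.article = article;

  vm.save = function () {
    // createOrUpdate picks $save or $update based on whether ._id exists
    vm.article.createOrUpdate().then(function () {
      // e.g. transition to the view state for the saved article
    });
  };
}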
Fin!
† Graze through John Papa's Angular style guide when you get the chance and slowly evolve your code style to match it. It will help you avoid traps and, as a side effect, makes a lot of the Angular code examples/tutorials more understandable, especially where the authors also follow it.
I've read a bunch about multitenancy setup options in Mongoose/MongoDB. The options are:
Separate databases. This is a no-no because of the overhead per database instance
Prefixed collections in the same database - I would have tenant1_Products and tenant2_Products which share the same schema.
Standard collections with a reference back to the tenant document.
It seems that #2 is the best option for scaling, since it allows for easy sharding by prefix. So, assuming that, I would implement something like this to dynamically retrieve the model for a certain tenant:
tenant.methods.getModel = function (name) {
  return mongoose.model(this.uniqid + '_' + name);
};
Assuming I register all models for all tenants ahead of time with the correct schema, then I could do var productModel = myTenant.getModel('Product');, and then create/update from there.
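The up-front registration I have in mind would look something like this; tenants and schemas are hypothetical names for my list of tenant documents and my map of base schemas:

// Hypothetical registration step: compile one model per tenant per schema
tenants.forEach(function (tenant) {
  Object.keys(schemas).forEach(function (name) {
    mongoose.model(tenant.uniqid + '_' + name, schemas[name]);
  });
});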
The problem with this is when you need to implement document references to make use of Mongoose's populate method. E.g.,
var productSchema = new mongoose.Schema({
  name: String,
  categories: [
    {
      type: mongoose.Schema.Types.ObjectId,
      // This will need to change to "tenant1_Category", etc.
      ref: 'Category'
    }
  ]
});

mongoose.model('tenant1_Product', productSchema);

tenant1.getModel('Product').find().populate('categories').exec(function (err, results) {
  // ...
});
Essentially, for this you'd either need to make the ref property dynamic (i.e., a function) that returns a different collection name depending upon some variable that defines the current tenant, or you could keep it static but find some way of telling the populate method to automatically add the appropriate prefix.
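One possibility for the latter, assuming a Mongoose version whose populate accepts a per-query model override, would be something like:

// Override the model used for population at query time, ignoring the schema's ref
tenant1.getModel('Product')
  .find()
  .populate({
    path: 'categories',
    model: tenant1.uniqid + '_Category'
  })
  .exec(function (err, results) {
    // ...
  });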
What is the best way of accomplishing this? Or, does it make more sense to go with option #3 above to avoid these issues? If so, does that have an effect on sharding/scaling methods?
For anyone interested in this, I went ahead and created a package that handles all of this, since I couldn't find an existing one.
https://www.npmjs.org/package/mongoose-multitenant
It uses separate collections per tenant and compiles the schema as needed, optionally modifying the ref attributes for related documents to keep it within the tenancy.
I have noticed in Geddy that when I create a model and a subsequent record for that model, I get a very ugly model ID associated with the record. Something like:
http://localhost:4000/posts/3FEEDE8D-2669-445B-AEA1-A31092A7FEDA
Is there a way to change this?
Ideally, I would always want this to be some sort of string, whether it be for a post or user:
http://localhost:4000/posts/this-is-a-post-title
http://localhost:4000/profile/meebix
If this is possible, how should I:
Configure routes
Change primary key for model
Other implementation steps I may need
Thanks!
Yes, you can change the id if you really want to, but you'll be going off the beaten path there, so it's quite a bad idea. Let Geddy handle IDs for you.
The way I would do this (and certainly how many others have too) is to have a "slugging" function create a slug from the post title, and save that in your database. Then, query on that instead in your show action. You won't have to change your routes.
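A minimal slugging function might look like this (just a sketch; the exact rules are up to you):

// Turns "This is a Post Title!" into "this-is-a-post-title"
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')  // collapse anything non-alphanumeric into "-"
    .replace(/^-+|-+$/g, '');     // strip leading/trailing dashes
}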
This is what your query will look like in the show action:
Post.first({slug: params.id}, function (err, post) { /* render the post here */ });
params.id is whatever string you use in the route /posts/<this string>
So once you change your show links to use the slug instead of the ID you will be all set!
I am currently playing around with node.js and MongoDB using the node-mongo-native driver.
I tested around a bit using the Mongo console, storing and retrieving JS objects. I figured out that if I store an object that contains functions/methods, the methods and functions will also be stored in the collection. This is interesting, since I thought that functions could not be stored in MongoDB (with the exception of the system.js collection, as suggested by the Mongo docs).
Also, it will not only store the methods but actually every method and member of the object's entire prototype chain. Besides the fact that I don't like this behaviour and think it's unintuitive, I must not have it.
I was going to manage users in a Mongo collection. To do this I have a User object containing all of the user methods, functioning as a prototype for each instance of a user. The user object itself would only contain the user's attributes.
If I store a user in the Mongo collection I only want to store the own properties of the user object. No prototype members and especially no prototype methods. Currently I do not see how to cleanly do this. The options that I figured might work are:
Creating a shallow copy using a loop with hasOwnProperty and storing this copy in the collection (see the sketch after this list).
Add a data attribute to each user that contains all the object's attributes and can be stored in the collection.
This just came to my mind while writing this: I could also set all of the prototype's properties to non-enumerable, which should prevent them from being stored in the collection.
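A sketch of the first option, assuming collection is a node-mongo-native collection object and user is one of my user instances:

// Copy only the object's own, enumerable properties before inserting
function ownProperties(obj) {
  var copy = {};
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      copy[key] = obj[key];
    }
  }
  return copy;
}

collection.insert(ownProperties(user), function (err, result) { /* ... */ });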
However, I do have the same issues the other way around: when loading a user from a collection. AFAIK there is no way to change an object's prototype in JavaScript after it was created. And there's also no way to specify a prototype to use when Mongo instantiates objects it retrieved from a collection. So basically I always get objects that inherit from Object using Mongo. As far as I can tell I have 2 options to restore a usable user object from this point on:
Create a fresh object inheriting from User and copy each attribute of the result object to the newly created object (see the sketch after this list). (Compatible with storing mechanisms 1 & 3)
Create a fresh object inheriting from User and storing the result object as a data attribute on the newly created object. (Compatible to storing mechanism 2)
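A sketch of the first loading option, given a plain doc returned by the driver:

// Rebuild a User-backed object from a plain query result
function resurrectUser(doc) {
  var user = Object.create(User.prototype);
  Object.keys(doc).forEach(function (key) {
    user[key] = doc[key];
  });
  return user;
}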
Are my assumptions, especially about the possibility to specify a prototype for query results, correct? What's the right way to do it, and why? I'm surely not the first person struggling to store and resurrect objects in/from MongoDB using node.js.
Currently I would go with the approach 2/2. I don't really like it, but it is the most efficient and the only one that works cleanly with the API. However, I'd much rather hear that actually the API does nothing wrong, but I do for not knowing how to use it correctly. So please, enlighten me :)
I just recently realized that it actually is possible to change an object's prototype in V8/node. While this is not in the standard, it is possible in various browsers and especially in V8/node!
function User(username, email) {
this.username = username;
this.email = email;
}
User.prototype.sendMail = function (subject, text) {
  // `mailer` stands in for whatever mail-sending module you use
  mailer.send(this.email, subject, text);
};

var o = {username: 'LoadedFromMongoDB', email: 'nomail@nomail.no'};
o.__proto__ = User.prototype;
o.sendMail('Hello, MongoDB User!', 'You were loaded from MongoDB, but inherit from User nevertheless! Congratulations!');
This is used all over in various modules and plugins - even core modules make use of this technique, although it is not part of the ECMAScript standard. So I guess it is safe to use within node.js.
I'm not sure I'm following your question exactly... but FWIW one thing came to mind: have you checked out the Mongoose ORM? (http://mongoosejs.com/)
It gives you a lot of options when it comes to defining models and methods. In particular "Virtuals" might be of interest (http://mongoosejs.com/docs/virtuals.html).
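For example, a virtual is computed on the fly and never persisted to MongoDB, which sounds close to what you want (a rough sketch):

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  firstName: String,
  lastName: String
});

// Virtuals are derived at read time and are not stored in the collection
userSchema.virtual('fullName').get(function () {
  return this.firstName + ' ' + this.lastName;
});

var User = mongoose.model('User', userSchema);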
Anyway, hope it helps some!