LoopBack 'before save' hook with updateAll

Hey all, quick question about 'before save' on updateAll.
Hypothetically, I have a Customer model that hasMany Orders. In an operation hook observing 'before save' on Customer, I'm trying to create an Order automatically. The trick is that I'm trying to do this during Customer.updateAll.
Based on the docs, I see there's no way to get the id of the current Customer instance.
But is there a way to do something like ctx.instance.orders.create(data)?

I believe in a case like this, using the persist hook makes more sense. You can access the id via ctx.currentInstance.id. This hook is triggered by operations that persist data to the datasource, including update.
MyModel.observe('persist', function(ctx, next) {
  if (ctx.currentInstance && ctx.currentInstance.id) {
    // create order here
  }
  next();
});
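For the "create order here" part, one option (a sketch, assuming an Order model is attached to the same app and has a customerId property) is to reach the related model through the app's model registry and only call next once the order is written:

MyModel.observe('persist', function(ctx, next) {
  if (ctx.currentInstance && ctx.currentInstance.id) {
    // hypothetical Order model and customerId property
    MyModel.app.models.Order.create({
      customerId: ctx.currentInstance.id
    }, function(err) {
      next(err); // defer next until the order is created
    });
  } else {
    next();
  }
});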

updateAll is an alias of update.
You can try something like:
Customer.observe('before save', (ctx, next) => {
  if (ctx.instance && ctx.instance.id) {
    // create order here
  }
  next();
});
ctx.instance.id is available only when you pass it.
i.e.
Customer.update({id: 1}, {name: 'New Name'}) - in this case, ctx.instance.id will be available.
Customer.update({name: 'New Name'}) - in this case, ctx.instance.id won't be available. Also, this will update the name of all the customers.
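When updateAll runs as a partial update, the hook may only receive ctx.where and ctx.data rather than an instance, so a hedged sketch covering that case (assuming an Order model with a customerId property, attached to the same app) could look like:

Customer.observe('before save', function(ctx, next) {
  if (ctx.instance) {
    // full instance available (create / prototype save)
    return next();
  }
  // updateAll / partial update: only ctx.where and ctx.data are set
  Customer.find({ where: ctx.where }, function(err, customers) {
    if (err) return next(err);
    var Order = Customer.app.models.Order; // hypothetical related model
    var orders = customers.map(function(c) {
      return { customerId: c.id };
    });
    Order.create(orders, function(err) {
      next(err);
    });
  });
});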

Related

Limit posted fields for insert

I'm trying to limit the fields a user can post when inserting an object into MongoDB. I know how I can enforce fields to be filled, but I can't seem to find how to keep people from inserting fields that I don't want.
This is the code I have now for inserting an item.
app.post("/obj", function (req, res) {
var newObj = req.body;
//TODO filter fields I don't want ?
if (!(newObj .id || newObj .type)) {
handleError(res, "Invalid input", "Must provide a id and type.", 400);
return;
}
db.collection(OBJ_COLLECTION).insertOne(newObj, function(err, doc) {
if (err) {
handleError(res, err.message, "Failed to create new object.");
} else {
res.status(201).json(doc.ops[0]);
}
});
});
There are likely native JS ways to do this, but I tend to use Lodash as my toolbox for most projects. In that case, what I normally do is set up a whitelist of allowed fields and then extract only those from the posted values, like so:
const _ = require('lodash');

app.post("/obj", function (req, res) {
  var newObj = _.pick(req.body, ['id', 'type', 'allowedField1', 'allowedField2']);
  // ...the rest of the handler stays the same
This is pretty straightforward, and I usually also define the whitelist somewhere else for reuse (e.g. on the model or the like).
As a side note, I avoid using 'id' as a field that someone can post to for new objects, unless I really need to, to avoid confusion with the autogenerated _id field.
Also, you should really look into Mongoose rather than using the raw MongoDB driver if you want more model-based control of your documents. Among other things, it will strip any fields off the object if they're not defined in the schema. I still use the _.pick() method for things that are defined in the schema but that I don't want people to change in a particular controller method.
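To illustrate the Mongoose point, here is a minimal sketch reusing the app and handleError from the question (the schema and field names are made up): with the default strict: true, any posted fields that aren't in the schema are silently dropped when the document is saved.

const mongoose = require('mongoose');

// only the fields declared here survive; anything else in req.body is stripped
const objSchema = new mongoose.Schema({
  type: { type: String, required: true },
  allowedField1: String,
  allowedField2: String
}, { strict: true }); // strict is the default; shown here for emphasis

const Obj = mongoose.model('Obj', objSchema);

app.post("/obj", function (req, res) {
  Obj.create(req.body, function (err, doc) {
    if (err) {
      return handleError(res, err.message, "Failed to create new object.");
    }
    res.status(201).json(doc);
  });
});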

MongooseJS: Hide Specific Documents From Query

I have a User schema with an activated field of Boolean type. I want queries to only return documents that have activated: true. And I hope there is a more efficient and DRY way of doing so than adding a condition to every find, findOne, or findById.
What would be the most effective approach?
While there may be some way to do this, it is generally a bad idea to always hide this information.
Speaking from experience trying to do this with other languages and database systems: you will, at some point, want or need to load items that are not activated. But if you only ever return activated items, you'll never be able to get the list you need.
For your purposes, I would recommend creating a findActive method on your schema:
someSchema.static("findActive", function(query, cb){
  // check if there is a query and callback
  if (!cb){
    cb = query;
    query = {};
  }
  // set up an empty query, if there isn't one provided
  if (!query) { query = {}; }
  // make sure you only load activated items
  query.activated = true;
  // run the query
  this.find(query, cb);
});
With this method, you will have a findActive method that works like find, but it will always filter for activated items.
MyModel.findActive(function(err, modelList){ ... });
And it optionally supports additional query filters:
MyModel.findActive({some: "stuff"}, function(err, modelList){ ... });
You might want to look at Mongoose Query middleware here
Query middleware is supported for the following Model and Query
functions.
count
find
findOne
...
For example:
userSchema.pre('find', function() {
  console.log(this instanceof mongoose.Query); // true
  // add the filter to the query itself
  this.where({ activated: true });
});
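If you want the same filter on every read path, the hooks can be registered in a loop on the schema before compiling the model (a sketch; userSchema is assumed to be your User schema):

// register the same filter for the read operations you care about
['count', 'find', 'findOne', 'findOneAndUpdate'].forEach(function (op) {
  userSchema.pre(op, function () {
    this.where({ activated: true });
  });
});

var User = mongoose.model('User', userSchema);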

Run custom validation in mongoose update query

I have been trying to run a custom validator to check whether the name entered by the user already exists in the database. Since MongoDB treats uppercase and lowercase names as different, I created my own validator for it.
function uniqueFieldInsensitive(modelName, field) {
  return function(val, cb) {
    if (val && val.length) { // if string not empty/null
      var query = mongoose.models[modelName]
        .where(field, new RegExp('^' + val + '$', 'i')); // look up the collection for something that looks like this field
      if (!this.isNew) { // if update, make sure we are not colliding with itself
        query = query.where('_id').ne(this._id);
      }
      query.count(function(err, n) {
        // false when validation fails
        cb(n < 1);
      });
    } else { // raise a unique error if empty // may be confusing, but is rightful
      cb(false);
    }
  };
}
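For context, here is roughly how this validator gets attached to a path (a sketch; the schema and model names are placeholders, and it relies on Mongoose 4.x treating a two-argument validator as asynchronous):

var libraryStepSchema = new mongoose.Schema({
  name: { type: String, required: true }
});

// attach the case-insensitive uniqueness check to the "name" path
libraryStepSchema.path('name').validate(
  uniqueFieldInsensitive('LibraryStep', 'name'),
  'name must be unique (case-insensitive)'
);

mongoose.model('LibraryStep', libraryStepSchema);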
Now, the problem is that the validator runs when saving the document but not on update.
Since I am using Mongoose 4.x, I also tried using { runValidators: true } in my update query. That doesn't work either, as the this keyword in my validator is null in the case of update, whereas it refers to the document being saved in the case of save.
Could you please let me know if there is something I missed, or if there is any other way to run custom validators in an update query?
Finally, I found a way to do this.
The Mongoose documentation on update validators says:
First, update validators only check $set and $unset operations. Update validators will not check $push or $inc operations.
The second and most important difference lies in the fact that, in document validators, this refers to the document being updated. In the case of update validators, there is no underlying document, so this will be null in your custom validators.
Refer to: Validators for update()
So we are left with calling save() instead of update() in our queries. Since save() runs all the custom and built-in validators, our validator will also be called. I achieved it like this:
function(req, res, next) {
  _.assign(req.libraryStep, req.body);
  req.libraryStep.save().then(function(data) {
    res.json(data);
  }).then(null, function (err) {
    console.info(err);
    var newErr = new errorHandler.error.ProcessingError(errorHandler.getErrorMessage(err));
    next(newErr);
  });
};
Notice that req.libraryStep is the document I queried from the database. I have used the Lodash assign method, which takes the updated JSON and assigns it to the existing database document.
https://lodash.com/docs#assign
I don't think this is the ideal way, but until Mongoose supports custom validators on update, we can use this to solve the problem.
This is a fairly old thread, but I wanted to update the answer for those who come across it like I did.
While you're correct about the context of this being empty in an update validator (per the docs), there is a context option you can use to set the context of this; see the docs.
However, a plugin also exists that will check the uniqueness of the field you are setting: mongoose-unique-validator. I use it to check for duplicate emails. It also has an option for case insensitivity, so I would check it out, and it does run correctly with the update command when the runValidators: true option is used.
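For reference, a minimal sketch of the plugin approach (the schema and field names are examples, not from the original question):

var uniqueValidator = require('mongoose-unique-validator');

var userSchema = new mongoose.Schema({
  // uniqueCaseInsensitive makes the plugin compare values case-insensitively
  email: { type: String, required: true, unique: true, uniqueCaseInsensitive: true }
});
userSchema.plugin(uniqueValidator, { message: '{PATH} already exists.' });

var User = mongoose.model('User', userSchema);

// the check also runs on updates when runValidators is set
// (context: 'query' gives the validator access to the query)
User.findOneAndUpdate(
  { _id: someId },          // someId and newEmail are placeholders
  { email: newEmail },
  { runValidators: true, context: 'query' },
  function (err, doc) { /* ... */ }
);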

Hoodie - Update a CouchDB document (Node.js)

I'm handling charges and customer subscriptions with Stripe, and I want to package this handling as a Hoodie plugin.
Payments, customer registrations, and subscriptions appear normally in the Stripe Dashboard, but what I want to do is update my _users database in CouchDB, to make sure customers' information is saved somewhere.
Specifically, I want to update the stripeCustomerId field in the org.couchdb.user:user/bill document in my _users database, which is created when logging in with Hoodie. And, if possible, to create this field if it does not exist.
In the hoodie-plugin documentation, the update function seems pretty ambiguous to me.
// update a document in db
db.update(type, id, changed_attrs, callback)
I assume that type is the one mentioned in the CouchDB document, or the one we specify when we add a document with db.add(type, attrs, callback), for example.
id seems to be the doc id in CouchDB; in my case it is org.couchdb.user:user/bill. But I'm not sure it is this id I'm supposed to pass to my update function.
I assume that changed_attrs is a JavaScript object with updated or new attributes in it, but here again I have my doubts.
So I tried this in my worker.js:
function handleCustomersCreate(originDb, task) {
  var customer = {
    card: task.card
  };
  if (task.plan) {
    customer.plan = task.plan;
  }
  stripe.customers.create(customer, function(error, response) {
    var db = hoodie.database(originDb);
    var o = {
      id: 'bill',
      stripeCustomerId: 'updatedId'
    };
    hoodie.database('_users').update('user', 'bill', o, function(error) {
      console.log('Error when updating');
      addPaymentCallback(error, originDb, task);
    });
    db.add('customers.create', {
      id: task.id,
      stripeType: 'customers.create',
      response: response
    }, function(error) {
      addPaymentCallback(error, originDb, task);
    });
  });
}
And among other messages, I got this error:
TypeError: Converting circular structure to JSON
And my document is not updated: the stripeCustomerId field stays null.
I tried to JSON.stringify my o object, but it doesn't change a thing.
I hope some of you are better informed than I am about this db.update function.
Finally, I decided to join the Hoodie official IRC channel, and they solved my problem quickly.
It turns out that user docs need an extra API, and to update them you have to use hoodie.account instead of hoodie.database(name).
The full syntax is:
hoodie.account.update('user', user.id, changedAttrs, callback)
where user.id is the account name set in the Hoodie sign-up form, and changedAttrs is a plain JS object, as I thought.
Kudos to gr2m for the fix ;)
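Applied to the worker above, the fix ends up looking roughly like this (a sketch; the account name bill is carried over from the question, and in practice you'd resolve it from the task or the originating user):

stripe.customers.create(customer, function(error, response) {
  if (error) {
    return addPaymentCallback(error, originDb, task);
  }
  // update the user's account doc via the account API instead of hoodie.database('_users')
  hoodie.account.update('user', 'bill', {
    stripeCustomerId: response.id
  }, function(error) {
    addPaymentCallback(error, originDb, task);
  });
});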

Modelling API: each row represents a table. Suggestions?

I have an app that stores user-uploaded spreadsheets as tables in PostgreSQL. Every time a user uploads a spreadsheet, I create a record in a Dataset table containing the physical table name, its alias, and the owner. I can retrieve a given Dataset's information with
GET domain.com/v1/Datasets/{id}
AFAIK, the relation between rows in Dataset and physical tables can't be enforced by an FK, or at least I haven't seen anyone creating FKs on PostgreSQL's information_schema, and FKs can't drop tables, or can they? So it's common to have orphan tables, or records in Dataset that point to tables that no longer exist. I have managed this with business logic and cleaning tasks.
Now, to access one of those physical tables, for example one called nba_teams, I would need to declare an NbaTeams model in LoopBack and restart the app, then query its records with
GET domain.com/v1/NbaTeams/{id}
But that can't scale, especially since I'm already getting around 100 uploads a day. So from where I'm standing, there are two ways to go:
1.- Create one model, then add 4 custom methods that accept a table name as a string and perform the corresponding CRUD operation on that table via raw queries (see the sketch after option 2 below). For example, to list the records:
GET domain.com/v1/Datasets/getTable/NbaTeams
or, to update one team
PUT domain.com/v1/Datasets/getTable/NbaTeams/{teamId}
This sounds inelegant, but it should work.
2.- Create a custom method that accepts a table name as a string, which in turn creates an ephemeral model and forwards the HTTP verb and the rest of the arguments to it:
dataSource.discoverAndBuildModels('nba_teams', {
  owner: 'uploader'
}, function (err, models) {
  console.log(models);
  models.NbaTeams.find(function (err, act) {
    if (err) {
      console.error(err);
    } else {
      console.log(act);
    }
    dataSource.disconnect();
  });
});
This second one I haven't gotten to work yet, and I don't know how much overhead it might have, but I'm sure it's doable.
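For reference, here's a rough sketch of what option 1 could look like as a remote method on Dataset (the datasource name db and the tableName property are assumptions; the table name must be checked against the Dataset records before it is interpolated into SQL, to avoid injection):

module.exports = function (Dataset) {
  Dataset.getTable = function (table, cb) {
    // look the table up in Dataset first: this doubles as an existence check
    // and keeps arbitrary strings out of the SQL below
    Dataset.findOne({ where: { tableName: table } }, function (err, ds) {
      if (err || !ds) return cb(err || new Error('Unknown table'));
      var connector = Dataset.app.dataSources.db.connector;
      connector.execute('SELECT * FROM "' + ds.tableName + '"', [], cb);
    });
  };

  Dataset.remoteMethod('getTable', {
    accepts: { arg: 'table', type: 'string', required: true },
    returns: { arg: 'rows', type: 'array', root: true },
    http: { path: '/getTable', verb: 'get' }
  });
};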
So before I dig in deeper, I came to ask: has anybody dealt with this row-to-table relation? What are the good practices here?
In the end, I did my own hacky workaround, and I thought it might help someone some day.
What I did was add a middleware (with regular Express syntax) to listen for /v1/dataset:id_dataset, create the model on the fly, and pass execution to the next middleware:
app.use('/v1/dataset:id_dataset', function(req, res, next) {
  var idDataset = req.params.id_dataset;
  app.getTheTable(idDataset, function(err, result) {
    if (err) {
      console.error(err);
      res.json({ "error": "couldn't retrieve related table" });
    } else {
      next();
    }
  });
});
Inside the app.getTheTable function, I'm creating a model dynamically and setting it up before the callback:
app.getTheTable = function (idDataset, callback) {
  var Table = app.models.Dataset,
      modelName = 'dataset' + idDataset,
      // assuming the dynamic tables live on the same datasource as Dataset
      dataSource = Table.getDataSource();
  Table.findById(idDataset, function (err, resultados) {
    if (err) {
      callback(new Error('Unauthorized'));
    } else {
      if (app.models[modelName]) {
        callback(null, modelName); // model already exists
      } else {
        // properties and options are assumed to be built elsewhere
        // (e.g. from discovery or a shared template for uploaded tables)
        var theDataset = dataSource.createModel(modelName, properties, options);
        theDataset.settings.plural = modelName;
        theDataset.setup();
        app.model(theDataset);
        var restApiRoot = app.get('restApiRoot');
        app.use(restApiRoot, app.loopback.rest());
        callback(null, modelName);
      }
    }
  });
};
It's hacky, I know, and I believe there must be some kind of performance penalty for overloading the restApiRoot middleware, but it's still better than creating 500 models on startup to cover all possible dataset requests.
