In our app, we have a large document that is the source of most of the data for our REST API.
In order to properly invalidate our client-side cache for the REST API, I want to keep track of any modifications made to the document. The best we came up with is to extend the Mongo save command for the document to send off the notification (and then save as usual).
The question is, how does one actually do this in practice? Is there a direct way to extend Mongo's "save" method, or should we create a custom method (e.g. "saveAndNotify") on the model that we use instead (which I would avoid if I can)?
[edit]
So in principle, I am looking to do something like the following, but am having an issue not clobbering the parent save() function:
mySchema.methods.save = function() {
  // notify some stuff
  ...
  // call mongo save function
  return this.super.save();
};
Monkey-patching the core Mongo object is a bad idea; however, it turns out Mongoose has a middleware concept that will handle this just fine:
var schema = new Schema(..);
schema.pre('save', function (next) {
  // do stuff
  next();
});
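For the cache-invalidation case in the question, the notification can live in that same hook. Here is a minimal sketch; notifyCacheInvalidation is a hypothetical helper standing in for whatever actually tells your REST API's cache to expire:

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var mySchema = new Schema({ title: String, body: String });

// Hypothetical helper: however you actually notify your cache layer.
function notifyCacheInvalidation(doc) {
  console.log('Document ' + doc._id + ' changed, invalidate cache');
}

mySchema.pre('save', function (next) {
  // `this` is the document about to be saved.
  notifyCacheInvalidation(this);
  next(); // continue with the normal save
});

module.exports = mongoose.model('MyModel', mySchema);

A post('save') hook works the same way if you only want to notify after the write has actually succeeded.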
Related
I am trying to build a logging mechanism to log changes made to a record. I am currently logging the previous and the new record. However, as the site is very busy, I expect the log file to grow seriously large. To avoid this, I plan to capture only the modified fields.
Is there a way to capture only the modifications made to a record (in React), so my {request.body} will have fewer fields?
My server side is built with Node.js and the client side is React.
One approach you might want to consider is to add an onChange (universal) or onTextChanged (native) listener to the text field and store the form update in local state/variables.
Finally, when a user makes an action (submit, etc.) you can send the updated data to the logging module.
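A rough sketch of that idea in React; the field names, the record prop, and the onSubmit handler are placeholders for whatever your form actually uses:

// Minimal sketch: track only the fields the user actually edits.
function EditForm({ record, onSubmit }) {
  const [changes, setChanges] = React.useState({});

  const handleChange = (e) =>
    setChanges({ ...changes, [e.target.name]: e.target.value });

  const handleSubmit = (e) => {
    e.preventDefault();
    onSubmit(changes); // only the modified fields go to the server
  };

  return (
    <form onSubmit={handleSubmit}>
      <input name="title" defaultValue={record.title} onChange={handleChange} />
      <button type="submit">Save</button>
    </form>
  );
}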
The best way I found, and the one that works for me, is this:
On the API server side, where I handle the update request, before hitting the database I compute a difference between the previous record and {request.body} using lodash, and send the result to my database update function.
var _ = require('lodash');

// Returns an object containing only the fields of `object`
// that differ from `base`, recursing into nested objects.
const difference = (object, base) => {
  function changes(object, base) {
    return _.transform(object, function (result, value, key) {
      if (!_.isEqual(value, base[key])) {
        // Recurse for nested objects, otherwise keep the changed value.
        result[key] = (_.isObject(value) && _.isObject(base[key]))
          ? changes(value, base[key])
          : value;
      }
    });
  }
  return changes(object, base);
};

module.exports = difference;
I saved the above code in a file named diff.js and included it in my server-side file.
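For context, here is a minimal sketch of how the helper might be wired into an update route; the Express route, the Post model, and the logger are placeholders for whatever your app actually uses:

const express = require('express');
const difference = require('./diff');
const Post = require('./models/post'); // hypothetical model
const logger = console;               // stand-in for your real logger

const router = express.Router();

// Hypothetical update route: log only the fields that actually changed.
router.put('/posts/:id', async (req, res) => {
  const previous = await Post.findById(req.params.id).lean();
  const changedFields = difference(req.body, previous);
  logger.info({ id: req.params.id, changes: changedFields });

  const updated = await Post.findByIdAndUpdate(req.params.id, req.body, { new: true });
  res.json(updated);
});

module.exports = router;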
It worked well.
Thanks for giving the idea...
Is it possible to get document back after adding it / updating it without additional network calls with Firestore, similar to MongoDB?
I find it stupid to first make a call to add / update a document and then make an additional call to get it.
As you have probably seen in the documentation of the Node.js (and JavaScript) SDKs, this is not possible, neither with the methods of a DocumentReference nor with those of a CollectionReference.
More precisely, the set() and update() methods of a DocumentReference both return a Promise containing void, while the CollectionReference's add() method returns a Promise containing a DocumentReference.
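In other words, the best you can do in the JavaScript SDK is chain an extra read onto the reference you get back. A minimal sketch, assuming the (v8-style) Web SDK is already initialized; note that this is exactly the additional round trip the question wants to avoid:

const db = firebase.firestore();

// add() resolves with a DocumentReference, not the document data itself...
db.collection('users')
  .add({ name: 'Ada' })
  .then((docRef) => docRef.get())      // ...so reading it back costs one more call
  .then((snapshot) => {
    console.log(snapshot.id, snapshot.data());
  });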
Side Note (in line with answer from darrinm below): It is interesting to note that with the Firestore REST API, when you create a document, you get back (i.e. through the API endpoint response) a Document object.
When you add a document to Cloud Firestore, the server can affect the data that is stored. A few ways this may happen:
If your data contains a marker for a server-side timestamp, the server will expand that marker into the actual timestamp.
If your data is not permitted according to your server-side security rules, the server will reject the write operation.
Since the server affects the contents of the Document, the client can't simply return the data that it already has as the new document. If you just want to show the data that you sent to the server in your client, you can of course do so by simply reusing the object you passed into setData(...)/addDocument(data: ...).
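To make the first point concrete, here is a rough sketch of a server-side timestamp marker in the JavaScript SDK; the final createdAt value only exists once the server has resolved it, so the client cannot know it without reading the document back:

const docRef = firebase.firestore().collection('orders').doc();

// The client sends a marker, not a concrete time; the server fills it in.
docRef.set({
  item: 'book',
  createdAt: firebase.firestore.FieldValue.serverTimestamp(),
});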
This appears to be an arbitrary limitation of the Firestore JavaScript API. The Firestore REST API returns the updated document on the same call.
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/patch
I did this to get the ID of a newly created document, and then use it somewhere else.
Future<DocumentReference<Object>> addNewData() async {
  final FirebaseFirestore _firestore = FirebaseFirestore.instance;
  final CollectionReference _userCollection = _firestore.collection('users');

  return await _userCollection
      .add({'data': 'value'})
      .whenComplete(() {
        // Show good notification
      })
      .catchError((e) {
        // Show bad notification
      });
}
And here I obtain the ID:
await addNewData().then((document) async {
  // Get ID
  print('ID Document Created ${document.id}');
});
I hope it helps.
I need to generate a slug every time a post gets saved into the database, specifically on Post.create and post.save. The only place where I have found to do this is in the PostSchema.pre('validate') middleware, like the following:
PostSchema.pre('validate', function (next) {
  this.slug = sluglify(this.title);
  return next();
});
All works fine except for the fact that it happens in the validate middleware, which should only check, not set.
So my question is: where should I put the code that slugifies my title when creating or updating a post?
This is not happening in the validation but before the validation. IMHO, it's as if you were preparing/cleaning your object before validating it, which is OK.
If you feel more comfortable with that, you could include it in a pre-save or pre-init hook instead of the pre-validate one.
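A minimal sketch of the pre-save variant; sluglify is the same helper the question already assumes:

PostSchema.pre('save', function (next) {
  // Only regenerate the slug when the title has actually changed.
  if (this.isModified('title')) {
    this.slug = sluglify(this.title);
  }
  next();
});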
I just started developing with Node.js, and I'm confused by the async model. I believe there is a way to turn most SYNC use cases into an ASYNC approach. For example, with SYNC we load some data and wait until it returns, then show it to the user; with ASYNC we load the data and return immediately, just telling the user that the data will be presented later. I can understand why ASYNC is used in this scenario.
But here I have a use case. I'm building a web app that allows the user to place an order (buying something). Before saving the order data into the DB, I want to put some user data together with the order data (I'm using a document NoSQL DB, by the way). So with SYNC, after I get the order data, I would make a SYNC call to the database and wait for the returned user data. After I get the returned data, I would integrate the two together and write them into the DB.
I think there might be an issue if I make an ASYNC call to the DB to query the user data, because the user data may come back only after I have already saved the order to the DB. And that's not what I want.
So in this case, how can I do this thing ASYNCHRONOUSLY?
A couple of things here. First, if your application already has the user data (the user is already logged in), then this information should be stored in the session so you don't have to access the DB. If you are allowing the user to register at the time of purchase, you would simply want to pass a callback function that handles saving the order into the call that saves the user data. Without knowing specifically what your code looks like, something like this is what you would be looking for.
function saveOrder(userData, orderData, callback) {
  // save the user data to the DB
  db.save(userData, function(rec) {
    // if you need to add the user ID or something to the order...
    orderData.userId = rec.id; // this would be dependent on your DB of choice
    // save the order data to the DB
    db.save(orderData, callback);
  });
}
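And a quick usage sketch, assuming an Express-style handler; the route, req.session holding the user, and req.body holding the order are placeholders for your actual setup:

app.post('/orders', function (req, res) {
  // userData from the session, orderData from the request body (assumed shapes).
  saveOrder(req.session.user, req.body, function (orderRec) {
    res.json({ orderId: orderRec.id });
  });
});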
Sync code goes something like this: step by step, one statement after another. There can be ifs and loops (for) etc.; we all get it.
fetchUserDataFromDB();
integrateOrderDataAndUserData();
updateOrderData();
Think of async programming with Node.js as event driven, like UI programming: code (a function) is executed when an event occurs, e.g. on a click event the framework calls back the registered clickHandler.
Node.js async programming can be thought of along the same lines. When the (async) DB query completes, your callback is called. When the order data is updated, your callback is called. The above code goes something like this:
function nodejsOrderHandler(req, res)
{
    var orderData;
    db.queryAsync(..., onqueryasync);

    function onqueryasync(userdata)
    {
        // integrate user data with order data
        db.update(updateParams, onorderupdate);
    }

    function onorderupdate(e, r)
    {
        // handle error
        // write the response
    }
}
JavaScript closures provide the way to keep state in variables across these functions.
There is certainly much more to async programming, and there are helper modules that help with basic constructs like chain, parallel, join, etc. as you write more involved async code, but this probably gives you a quick idea.
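For comparison, the same flow written with async/await reads almost like the sync version again. This is a rough sketch that assumes the DB client exposes promise-returning query and update methods (many modern drivers do):

async function nodejsOrderHandler(req, res) {
  try {
    // Each await suspends this handler without blocking the event loop.
    const userData = await db.query(/* user lookup params */);
    const orderData = integrateOrderDataAndUserData(userData, req.body);
    await db.update(orderData);
    res.end('order saved');
  } catch (err) {
    res.statusCode = 500;
    res.end('error saving order');
  }
}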
So, one of the more confusing aspects I've been observing with Meteor is that Session gets cleared on every refresh. Since it isn't a persistent store, where would I put things like the user ID, or where people are in my application's state machine?
What are the patterns for those scenarios?
Actually what you could do is create a "subclass" of Session that stores the value in Amplify's local storage when set() is called. You would automatically inherit all the reactive properties of Session. Here is the code, it worked for me:
SessionAmplify = _.extend({}, Session, {
  keys: _.object(_.map(amplify.store(), function (value, key) {
    return [key, JSON.stringify(value)];
  })),
  set: function (key, value) {
    Session.set.apply(this, arguments);
    amplify.store(key, value);
  },
});
Just replace all your Session.set/get calls with SessionAmplify.set/get calls. When set() is called, the parent Session method is called, as well as amplify.store(). When the "subclass" is first created, it loads everything that is in amplify's store inside its keys, so that they can be retrieved right away with get().
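A quick usage sketch (the key name is just an example):

// Survives a page refresh because it is mirrored into Amplify's local storage.
SessionAmplify.set('stateMachineStep', 'checkout');

// Reactive, exactly like Session.get().
console.log(SessionAmplify.get('stateMachineStep'));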
You can test a working variation of the Leaderboard example here: https://github.com/sebastienbarre/meteor-leaderboard
Well, for a start I would be using Meteor's built-in Auth to store the user ID. They are using local storage by default there, I think, but AFAIK there's no easy way to hook into that.
However, I would have thought that if you want stuff to survive across refreshes, you should either store it in Mongo or use the URL to indicate where they are in the 'state machine'. You can use the bootstrap router (for example) to use pushState to change the URL.