Backbone.relational, real-time and handling large data - node.js

I'm building a real-time feed application using Backbone.js, node.js and socket.io.
My Feed is a collection of Update models. Displaying these, with Backbone.sync overridden to integrate with socket.io, works fine.
The complication is that each Update has a set of comments associated with it. When I show each Update in the Feed view, I want to show a summary of the associated comments (the number of comments and a single 'most popular' comment), and also have the ability to click through to a different view that displays each Update on its own, with a paginated list of comments showing further data.
I'm using backbone-relational to model the relationship between the Update model and Comment model, as follows:
Feed (collection) -> Update (model) -(has many)-> Comment (model)
I've been following this backbone-relational tutorial, but it seems to assume that I'd want to have all related data in memory at once in my Feed view, which I don't, as there are potentially thousands of comments updating in real time:
http://antoviaque.org/docs/tutorials/backbone-relational-tutorial/
My questions are:
How can I bring in summary data for comments to each Update in my Feed view without loading all comment data, and also maintain the ability to show paginated full data in my Update view?
I'm using backbone.layoutmanager for rendering my views. How best should I break my views up to accomplish the above?

For Q1:
I'm assuming you're using something like ioSync to route Backbone.sync through socket.io instead of a REST API, or a similar solution.
Include metadata (such as the # of comments) as an attribute on Update. If your Update object is heavyweight in itself, you could update the count using ioBind and custom server-side socket.io events instead of sending the whole object every time.
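For example, the count could be kept fresh by having the server push just the changed counter and having the client patch the attribute (a sketch; the event name, payload shape and the `feed` collection variable are assumptions, and this uses plain socket.io rather than ioBind):
// client side: update only the commentCount attribute when the server announces a change
socket.on('update:commentCount', function(data) {
    // data is assumed to look like { updateId: ..., commentCount: ... }
    var update = feed.get(data.updateId);
    if (update) {
        update.set('commentCount', data.commentCount);
    }
});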
Include an attribute topComment as an additional one-to-one relation in Update. When initially loading Update from the server, include topComment in the response, but not the other comments.
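For reference, the relation setup on the Update model might look roughly like this with backbone-relational (a sketch; the Comment model, the 'messages' key and the commentCount attribute are assumptions chosen to match the snippets below):
var Update = Backbone.RelationalModel.extend({
    relations: [{
        // the single 'most popular' comment, delivered with the initial Update payload
        type: Backbone.HasOne,
        key: 'topComment',
        relatedModel: 'Comment'
    }, {
        // the full comment list, left empty until it is lazy-loaded (see below)
        type: Backbone.HasMany,
        key: 'messages',
        relatedModel: 'Comment'
    }],
    defaults: {
        commentCount: 0    // summary metadata kept directly on the Update
    }
});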
Lazy-load the rest of the comments using custom socket.io events. You will likely want a server-side handler that takes as parameters updateId, startIndex, maxComments, which returns a list of comments for the given Update starting at the given index. If the result is sent to the client as JSON, then it's easy to do something like this on the client:
// Assume `model` is an instance of `Update`.
socket.emit('get_comments_page', {
    updateId: model.get('id'),
    startIndex: 1,
    maxComments: 10
}, function(err, data) {
    if (err) {
        alert('Unable to fetch comments: ' + err);
    } else {
        model.get('messages').reset(data);
    }
});
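On the server side, the matching handler could be shaped roughly like this (a sketch; loadCommentsPage is a hypothetical data-access helper standing in for whatever persistence layer you use):
io.sockets.on('connection', function(socket) {
    // params is expected to be { updateId, startIndex, maxComments }
    socket.on('get_comments_page', function(params, respond) {
        // loadCommentsPage is hypothetical -- swap in your own query against the comments store
        loadCommentsPage(params.updateId, params.startIndex, params.maxComments, function(err, comments) {
            // the socket.io acknowledgement mirrors the (err, data) signature used on the client
            respond(err ? String(err) : null, comments);
        });
    });
});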
Avoid sending ID for all comments when fetching Update then trying to use fetchRelated to resolve them. I learned this one the hard way :O/
You could also store the comments collection directly on the view without associating it as a relation of Update.
For Q2:
I don't have any experience with layoutmanager, as I use Backbone.Marionette for managing my views. Marionette has an async extension (disclaimer: I'm a co-maintainer). I encourage you to look at how Marionette.async does the delayed rendering, waiting for the data to arrive from the server.
The main idea is to use jQuery's Deferred objects, which resolve when the data comes back from the server. Extending the above example with a deferred:
var MyView = Backbone.View.extend({
    // ... normal stuff that views need ...

    initialize: function() {
        var deferred = $.Deferred();
        // Assume `this.model` is an instance of `Update`.
        var that = this;
        socket.emit('get_comments_page', {
            updateId: that.model.get('id'),
            startIndex: that.options.pageNumber,
            maxComments: 10
        }, function(err, data) {
            if (err) {
                alert('Unable to fetch comments: ' + err);
            } else {
                that.model.get('messages').reset(data);
            }
            deferred.resolve();
        });
        this.promise = deferred.promise();
    },

    render: function() {
        var that = this;
        this.promise.done(function() {
            // Do your normal rendering code here, for instance:
            $(that.el).html(that.template(that.model.toJSON()));
        });
        return this;
    }
});
Note: the code snippets above are not tested as is.

Related

How to read/write a document in parallel execution with mongoDB/mongoose

I'm using MongoDB with NodeJS. Therefore I use mongoose.
I'm developing a multi player real time game. So I receive many requests from many players sometimes at the very same time.
I can simplify it by saying that I have a house collection that looks like this:
{
    "_id": 1,
    "items": [item1, item2, item3]
}
I have a static function, called after each request is received:
house.statics.addItem = function(id, item, callback) {
    var HouseModel = this;
    HouseModel.findById(id, function(err, house) {
        if (err) throw err;
        // make some calculations such as:
        if (house.items.length < 4) {
            HouseModel.findByIdAndUpdate(id, {$push: {items: item}}, callback);
        }
    });
};
In this example, I coded it so that the house document can never have more than 4 items. But when I receive several requests at the very same time, this function is executed by both requests, and since it is asynchronous they both push a new item to the items field, and my house ends up with 5 items.
Am I doing something wrong? How can I avoid this behavior in the future?
Yes, you need better locking on the HouseModel to indicate that an addItem is in progress.
The problem is that multiple requests can call findById and see the same house.items.length, then each decide based on that (outdated) snapshot that it is ok to add one more item. The Node.js boundary of atomicity is the callback; between an async call and its callback, other requests can run.
One easy fix is to track not just the number of items in the house but the number of intended addItems as well. On entry into addItem, bump the "want to add more" count, and test that.
One possible approach since the release of Mongoose 4.10.8 is writing a plugin which makes save() fail if the document has been modified since you loaded it. A partial example is referenced in #4004:
@vkarpov15 said:
8b4870c should give you the general direction of how one would write a plugin for this
Since Mongoose 4.10.8, plugins now have access to this.$where. For documents which have been loaded from the database (i.e., are not this.isNew), the plugin can add conditions which will be evaluated by MongoDB during the update which can prevent the update from actually happening. Also, if a schema’s saveErrorIfNotFound option is enabled, the save() will return an error instead of succeeding if the document failed to save.
By writing such a plugin and changing some property (such as a version number) on every update to the document, you can implement “optimistic concurrency” (as #4004 is titled). I.e., you can write code that roughly does findOne(), do some modification logic, save(), if (ex) retry(). If all you care about is a document remaining self-consistent and ensuring that Mongoose’s validators run and your document is not highly contentious, this lets you write code that is simple (no need to use something which bypasses Mongoose’s validators like .update()) without sacrificing safety (i.e., you can reject save()s if the document was modified in the meantime and avoid overwriting committed changes).
Sorry, I do not have a code example yet nor do I know if there is a package on npm which implements this pattern as a plugin yet.
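Roughly, though, the shape of such a plugin could be something like the following (an untested, illustrative sketch of the idea described above; the revision field name is an arbitrary choice):
var mongoose = require('mongoose');

function optimisticConcurrency(schema) {
    // give every document a revision counter to detect concurrent modifications
    schema.add({ revision: { type: Number, default: 0 } });
    schema.pre('save', function(next) {
        if (!this.isNew) {
            // this.$where is merged into the filter Mongoose uses for the update,
            // so the save only matches if the revision is still the one we loaded
            this.$where = { revision: this.revision };
            this.revision += 1;
        }
        next();
    });
}

// usage sketch: enable saveErrorIfNotFound so a save() that matched nothing errors out
var houseSchema = new mongoose.Schema({ items: [String] }, { saveErrorIfNotFound: true });
houseSchema.plugin(optimisticConcurrency);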
I am also building a multiplayer game and ran into the same issue. I believe I have solved it by implementing a queue-like structure:
class NpcSaveQueue {
    constructor() {
        this.queue = new Map();
        this.runQueue();
    }

    addToQueue(unitId, obj) {
        const key = String(unitId);
        if (!this.queue.has(key)) {
            this.queue.set(key, obj);
        } else {
            // merge with any updates already queued for this unit
            this.queue.set(key, {
                ...this.queue.get(key),
                ...obj,
            });
        }
    }

    emptyUnitQueue(unitId) {
        this.queue.delete(unitId);
    }

    async executeUnitQueue(unitId) {
        await NPC.findByIdAndUpdate(unitId, this.queue.get(unitId));
        this.emptyUnitQueue(unitId);
    }

    runQueue() {
        setInterval(() => {
            this.queue.forEach((value, key) => {
                this.executeUnitQueue(key);
            });
        }, 1000);
    }
}
Then when I want to update an NPC, instead of interacting with Mongoose directly, I run:
npcSaveQueue.addToQueue(unit._id, {
    "location.x": newLocation.x,
    "location.y": newLocation.y,
});
That way, every second, the SaveQueue just executes all code for every NPC that requires updating.
This function never executes twice, because the update operation is atomic at the level of a single document.
More info in the official manual: http://docs.mongodb.org/manual/core/write-operations-atomicity/#atomicity-and-transactions
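In that spirit, the length check from the question can be folded into a single atomic operation, so MongoDB itself refuses the push once the limit is reached (an untested sketch; 'items.3' only exists when the array already holds four elements):
house.statics.addItem = function(id, item, callback) {
    // the filter and the $push are applied together, atomically, on the matched document
    this.findOneAndUpdate(
        { _id: id, 'items.3': { $exists: false } },
        { $push: { items: item } },
        { new: true },
        function(err, house) {
            if (err) return callback(err);
            // house is null when the document was already full (or not found)
            callback(null, house);
        }
    );
};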

How to display arbitrary, schemaless data in HTML with node.js / mongodb

I'm using mongodb to store application error logs as json documents. I want to be able to format the error logs as HTML rather than returning the plain json to the browser. The logs are properly schemaless - they could change at any time, so it's no use trying to do this (in Jade):
- var items = jsonResults
- each item in items
  h3 Server alias: #{item.ServerAlias}
  p UUID: #{item.UUID}
  p Stack trace: #{item.StackTrace}
  h3 Session: #{item.Session}
  p URL token: #{item.Session.UrlToken}
  p Session messages: #{item.Session.SessionMessages}
as I don't know what's actually going to be in the JSON structure ahead of time. What I want is surely possible, though? Everything I'm reading says that the schema isn't enforced by the database, but that your view code will end up outlining your schema anyway; we've got hundreds of possible fields that could be removed or added at any time, so maintaining the views this way is unmanageable.
What am I missing? Am I making the wrong assumptions about the technology? Going at this the wrong way?
Edited with extra info following comments:
The json docs look something like this
{
    "ServerAlias": "GBIZ-WEB",
    "Session": {
        "urltoken": "CFID=10989&CFTOKEN=f07fe950-53926E3B-F33A-093D-3FCEFB&jsessionid=84303d29a229d1",
        "captcha": {
        },
        "sessionmessages": {
        },
        "sessionid": "84197a667053f63433672873j377e7d379101"
    },
    "UUID": "53934LBB-DB8F-79T6-C03937JD84HB864A338",
    "Template": "\/home\/vagrant\/dev\/websites\/g-bis\/code\/webroot\/page\/home\/home.cfm, line 3",
    "Error": {
        "GeneratedContent": "",
        "Mailto": "",
        "RootCause": {
            "Message": "Unknown tag: cfincflude.",
            "tagName": "cfincflude",
            "TagContext": [
                {
                    "RAW_TRACE": "\tat cfhome2ecfm1296628853.runPage(\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/home\/home.cfm:3)",
                    "ID": "CFINCLUDE",
                    "TEMPLATE": "\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/home\/home.cfm",
                    "LINE": 3,
                    "TYPE": "CFML",
                    "COLUMN": 0
                },
                {
                    "RAW_TRACE": "\tat cfdisplay2ecfm1093821753.runPage(\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/display.cfm:6)",
                    "ID": "CFINCLUDE",
                    "TEMPLATE": "\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/display.cfm",
                    "LINE": 6,
                    "TYPE": "CFML",
                    "COLUMN": 0
                }
            ]
        }
    }
... etc, but is likely to change depending on what the individual project that generates the log is configured to trigger.
What I want to end up with is a formatted HTML page with headers for each parent and the children listed below, iterating right through the data structure. The Jade sample above is effectively what we need to output, but without hard-coding that in the view.
Mike's analysis in the comments, that the problem is one of creating a table-like structure from a bunch of collections that haven't really got a lot in common, is bang-on. The data is relational, but only within individual documents, so hard-coding the schema into anything is virtually impossible: it requires you to know what the data structure looks like first.
The basic idea is what @Gates VP described. I use underscore.js to iterate through the arrays/objects.
function formatLog(obj) {
    var log = "";
    _.each(obj, function(val, key) {
        // objects and arrays both report typeof "object"; recurse into them
        if (val !== null && typeof val === "object") {
            // if we have a new list
            log += "<ul>";
            log += formatLog(val);
            log += "</ul>";
        } else {
            // if we are at an endpoint
            log += "<li>";
            log += (key + ": " + val);
            log += "</li>";
        }
    });
    return log;
}
If you call formatLog() on the example data you gave, it returns (shown here with one list item per line):
ServerAlias: GBIZ-WEB
urltoken: CFID=10989&CFTOKEN=f07fe950-53926E3B-F33A-093D-3FCEFB&jsessionid=84303d29a229d1
sessionid: 84197a667053f63433672873j377e7d379101
UUID: 53934LBB-DB8F-79T6-C03937JD84HB864A338
Template: /home/vagrant/dev/websites/g-bis/code/webroot/page/home/home.cfm, line 3
GeneratedContent:
Mailto:
Message: Unknown tag: cfincflude.
tagName: cfincflude
RAW_TRACE: at cfhome2ecfm1296628853.runPage(/home/vagrant/dev/websites/nig-bis/code/webroot/page/home/home.cfm:3)
ID: CFINCLUDE
TEMPLATE: /home/vagrant/dev/websites/nig-bis/code/webroot/page/home/home.cfm
LINE: 3
TYPE: CFML
COLUMN: 0
RAW_TRACE: at cfdisplay2ecfm1093821753.runPage(/home/vagrant/dev/websites/nig-bis/code/webroot/page/display.cfm:6)
ID: CFINCLUDE
TEMPLATE: /home/vagrant/dev/websites/nig-bis/code/webroot/page/display.cfm
LINE: 6
TYPE: CFML
COLUMN: 0
How to format it then is up to you.
This is basically a recursive for loop.
To do this with Jade you will need to use mixins so that you can print nested objects by calling the mixin with a deeper level of indentation.
Note that this whole thing is a little ugly as you won't get guaranteed ordering of fields and you may have to implement some logic to differentiate looping on arrays vs. looping on JSON objects.
You can try util.inspect (you will need to make util available to the template, e.g. by passing it in as a local). In your template:
pre
  = util.inspect(jsonResults)
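For util to be visible inside the template, hand it over as a local when rendering, e.g. in the Express route (a sketch; the route path, template name and fetchLogs helper are assumptions):
var express = require('express');
var util = require('util');
var app = express();

app.get('/logs', function(req, res) {
    // fetchLogs stands in for whatever query pulls the log documents out of MongoDB
    fetchLogs(function(err, jsonResults) {
        if (err) return res.send(500);
        // expose util alongside the documents so the template can call util.inspect
        res.render('logs', { jsonResults: jsonResults, util: util });
    });
});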

dijit.Tree search and refresh

I can't seem to figure out how to search in a dijit.Tree using an ItemFileWriteStore and a TreeStoreModel. Everything is declarative, and I am using Dojo 1.7.1. Here is what I have so far:
<input type="text" dojoType="dijit.form.TextBox" name="search_fruit" id="search_fruit" onclick="search_fruit();">
<!-- store -->
<div data-dojo-id="fruitsStore" data-dojo-type="dojo.data.ItemFileWriteStore" clearOnClose="true" urlPreventCache="true" data-dojo-props='url:"fruits_store.php"'></div>
<!-- model -->
<div data-dojo-id="fruitsModel" data-dojo-type="dijit.tree.TreeStoreModel" data-dojo-props="store:fruitsStore, query:{}"></div>
<!-- tree -->
<div id="fruitsTree" data-dojo-type="dijit.Tree"
     data-dojo-props='"class":"container",
                      model:fruitsModel,
                      dndController:"dijit.tree.dndSource",
                      betweenThreshold:5,
                      persist:true'>
</div>
The json returned by fruits_store.php is like this:
{
    "identifier": "id",
    "label": "name",
    "items": [
        {
            "id": "OYAHQIBVbeORMfBNZXFGOHPdaRMNUdWEDRPASHSVDBSKALKIcBZQ",
            "name": "Fruits",
            "children": [
                { "id": "bSKSVDdRMRfEFNccfTZbWHSACWbLJZMTNHDVVcYGcTBDcIdKIfYQ", "name": "Banana" },
                {
                    "id": "JYDeLNIGPDBRMcfSTMeERZZEUUIOMNEYYcNCaCQbCMIWOMQdMEZA",
                    "name": "Citrus",
                    "children": [
                        { "id": "KdDUfEDaKOQMFNJaYbSbAcAPFBBdLALFMIPTFaYSeCaDOFaEPbJQ", "name": "Orange" },
                        { "id": "SDWbXWbTWKNJDIfdAdJbbbRWcLZFJHdEWASYDCeFOZYdcZUXJEUQ", "name": "Lemon" }
                    ]
                },
                {
                    "id": "fUdQTEZaIeBIWCHMeBZbPdEWWIQBFbVDbNFfJXNILYeBLbWUFYeQ",
                    "name": "Common ",
                    "children": [
                        { "id": "MBeIUKReBHbFWPDFACFGWPePcNANPVdQLBBXYaTPRXXcTYRTJLDQ", "name": "Apple" }
                    ]
                }
            ]
        }
    ]
}
Using a grid instead of a tree, my search_fruit() function would look like this :
function search_fruit() {
    var grid = dijit.byId('grid_fruits');
    grid.query.search_txt = dijit.byId('search_fruit').get('value');
    grid.selection.clear();
    grid.store.close();
    grid._refresh();
}
How do I achieve the same using the tree? Thanks!
Refreshing a dijit.Tree is a little more complicated, since there is a model involved (in the grid this is built in, afaik; the grid component implements the query functionality itself).
Performing search via store
Searching, though, is incredibly easy with the ItemFileReadStore. The syntax is as follows:
myTree.model.store.fetch({
    query: {
        name: 'Orange'
    },
    onComplete: function(items) {
        dojo.forEach(items, function(item) {
            console.log(myTree.model.store.getValue(item, "id"));
        });
    }
});
Displaying search results only
As shown above, the store will fetch: the full payload is put into its _allItemsArray, and the store's query engine then filters out whatever the query argument to the fetch method tells it to. At any time we can call fetch on the store, even without sending an XHR for JSON contents; fetch with a query argument can be thought of as a simple filter.
It becomes slightly more interesting to let the model know about this query. If you do so, it will only create TreeNodes to fill the tree based on the results returned from store.fetch({query: model.query}).
So, instead of calling store.fetch with a callback, let's try to set the model query and update the tree.
// seeing as we are working with a multi-parent tree model (ForestTree), the query must match a top-level item or else nothing is shown
myTree.model.query = { name: 'Fruits' };
// the method below must be implemented to make this work at runtime
// and note that the DnD might become invalid
myTree.update();
Refreshing the tree with a new XHR request from the store
You need to do exactly what you do in the grid version as regards the store: close it, but then rebuild the model. The model contains all the TreeNodes (beneath its root node), and the Tree itself keeps an item-node map which needs to be cleared to avoid memory leakage.
So, performing the following steps will rebuild the tree. However, this sample does not take into account that if you have DnD activated, the dndSource/dndContainer will still reference the old DOM and thereby keep alive the previous DOMNode hierarchy (hidden, of course).
By telling the model that its rootNode is UNCHECKED, its children will be checked for changes. This in turn will produce the sub-hierarchy once the tree has done its _load().
Close the store (so that the store will do a new fetch()).
this.model.store.clearOnClose = true;
this.model.store.close();
Completely delete every node from the dijit.Tree
delete this._itemNodesMap;
this._itemNodesMap = {};
this.rootNode.state = "UNCHECKED";
delete this.model.root.children;
this.model.root.children = null;
Destroy the widget
this.rootNode.destroyRecursive();
Recreate the model (calling its constructor with the model itself again)
this.model.constructor(this.model)
Rebuild the tree
this.postMixInProperties();
this._load();
Creds; All together as such, scoped onto the dijit.Tree:
new dijit.Tree({
    // arguments
    ...
    // And additional functionality
    update: function() {
        this.model.store.clearOnClose = true;
        this.model.store.close();
        delete this._itemNodesMap;
        this._itemNodesMap = {};
        this.rootNode.state = "UNCHECKED";
        delete this.model.root.children;
        this.model.root.children = null;
        this.rootNode.destroyRecursive();
        this.model.constructor(this.model);
        this.postMixInProperties();
        this._load();
    }
});
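Putting those pieces together, the tree equivalent of the grid's search_fruit() could look roughly like this (a sketch assembled from the snippets above, untested):
function search_fruit() {
    var tree = dijit.byId('fruitsTree');
    // filter which items the model builds TreeNodes for; as noted above,
    // the query must match a top-level item or nothing will be shown
    tree.model.query = { name: dijit.byId('search_fruit').get('value') + '*' };
    // update() is the custom method added onto the tree above
    tree.update();
}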

Subclass QueryReadStore or ItemFileWriteStore to include a write API and server-side paging and sorting.

I am using Struts 2 and want to include an editable grid with server-side paging and sorting.
I need to subclass the QueryReadStore to implement the write and notification APIs. I do not want to include server-side REST services, so I do not want to use the JsonRest store. Any idea how this can be done? What methods do I have to override, and exactly how? I have gone through many examples, but I am not getting how this can be done exactly.
Also, is it possible to just extend the ItemFileWriteStore and override its methods to include server-side pagination? If so, which methods do I need to override? Can I get an example of how this can be done?
Answer is ofc yes :)
But do you really need to subclass ItemFileWriteStore; does it not fit your needs? A short explanation of .save() follows.
The client side does modify / new / delete operations in the store, and in turn those items are marked as dirty. While it has dirty items, the store keeps references to them in a hash, like so:
store._pending = { _deletedItems: [], _modifiedItems: [], _newItems: [] };
When save() is called, each of these should be looped over, sending requests to the server. BUT this does not happen if neither _saveEverything nor _saveCustom is defined: the WriteStore simply resets its client-side revert feature and saves in client memory.
See the source and search for "save: function".
Here is my implementation of a simple write API; it must be modified to be used without its inbuilt validation:
OoCmS._storeAPI
In short, follow this boilerplate, given that you have a CRUD pattern on the server (the capital letter in each URL marks which CRUD operation the endpoint handles):
new ItemFileWriteStore({
    url: 'path/to/cRud',
    _saveCustom: function() {
        // dxhr is presumably an alias for dojo's xhr module (e.g. dojo/_base/xhr)
        var item;
        for (var i in this._pending._newItems) if (this._pending._newItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/Crud', contents: { id: i }});
        }
        for (i in this._pending._modifiedItems) if (this._pending._modifiedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/crUd', contents: { id: i }});
        }
        for (i in this._pending._deletedItems) if (this._pending._deletedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/cruD', contents: { id: i }});
        }
    }
});
Now, as for paging: ItemFileWriteStore has the pagination in it from its superclass mixins. You just need to call it with one of two setups: either directly on the store, meaning the server should only return a subset, or on a model with query capabilities, where the server returns a full set.
var pageSize = 5,      // let's say 5 items per request
    currentPage = 2;   // note, starting on the second page (with one being the offset)

store.fetch({
    onComplete: function(itemsReceived) { },
    query: { foo: 'bar*' },                  // optional filtering, server gets it JSON url-encoded
    count: pageSize,                         // server gets &count=pageSize
    start: currentPage * pageSize - pageSize // server gets &start=offsetCalculation
});
quod erat demonstrandum

When no data is returned from database

I am initiating a loading panel in the init method and hiding it in the ReturnDataPayload event. This works perfectly when the DataTable has some values in it. But when there is no data returned from the database, control never reaches the ReturnDataPayload event. Please help me find an event which will be fired even when the response doesn't have any data, or tell me a way to hide the loading panel.
If you want custom behavior, use the sendRequest method of the dataTable's DataSource:
(function() {
    var YdataTable = YAHOO.widget.DataTable,
        YdataSource = YAHOO.util.DataSource;

    var settings = {
        container: "<DATATABLE_CONTAINER_GOES_HERE>",
        source: "<URL_TO_RETRIEVE_YOUR_DATA>",
        columnSettings: [
            {key: "id", label: "Id"}
        ],
        dataSourceSettings: {
            responseType: YdataSource.TYPE_JSON,
            responseSchema: {
                resultsList: "rs",
                fields: [
                    {key: "id"}
                ]
            }
        },
        dataTableSettings: {
            initialLoad: false
        }
    };

    var dataTable = new YdataTable(
        settings.container,
        settings.columnSettings,
        new YdataSource(
            settings.source,
            settings.dataSourceSettings),
        settings.dataTableSettings);
})();
Keep in mind: no matter which source your data comes from (XML, JSON, a JavaScript object, text), you always get your data in a unified way through the DataSource's sendRequest method. So when you want to retrieve your data and, at the same time, add custom behavior, use it:
dataTable.getDataSource().sendRequest(null, {
    success: function(request, response, payload) {
        if (response.results.length == 0) {
            // No data returned
            // Do what you want right here
            // You can, for instance, hide the dataTable by calling this.setStyle("display", "none");
        } else {
            // Some data returned
            // If you want to use the default DataTable behavior, just call
            this.onDataReturnInitializeTable(request, response, payload);
        }
    },
    scope: dataTable,
    argument: dataTable.getState()
});
The properties of the response are
results (Array): Your source of data in a unified way. For each object in the results array, there is a property according to the responseSchema's fields property. Notice I use response.results.length to verify whether some data has been returned.
error (Boolean): Indicates data error
cached (Boolean): Indicates cached response
meta (Object): Schema-parsed meta data
On the YUI dataTable page, look for "Loading data at runtime" to see some built-in functions provided by the YUI dataTable.
I hope this can be useful; feel free to ask for help with anything else you want about YUI. See the demo page of nice features of the YUI dataTable.
