MvcSiteMapProvider to lazy load dynamic nodes

I'd like to use MvcSiteMapProvider for building breadcrumbs for an MVC3 project.
My problem is that certain dynamic nodes could have hundreds of dynamic child nodes, each of which could have hundreds of subnodes itself - so reading the whole sitemap is not an option. Instead, I'd like to lazy-load subnodes for a given node when a user lands on the page.
As far as I can see, this is not possible with MvcSiteMapProvider, but maybe I'm missing something? Is there a recommended way to address that?

OK - I haven't received any answers and, unfortunately, it seems the correct answer is that lazy loading is not supported by MvcSiteMapProvider.
So I created a quick prototype of a very lightweight MVC breadcrumbs generator, which requests nodes only when you actually visit the corresponding page.


React JS Google Map usage limits

I have just started using React JS and I am currently working on getting the google maps "google-maps-react" package up and running.
From my basic understanding of React JS, any change causes a whole component hierarchy to re-render.
From my understanding of the Google usage information via this link, any re-render counts as a usage.
Question
So with that, how do React JS developers handle this problem? A quota of 25,000 free map renders is fairly substantial outside of React, but it seems like a fairly easy cap to burst with frameworks like React that trigger a re-render for any change in your hierarchy.
Option 1
Is the best way to ensure that the map component is not nested in a hierarchy that other components can update? I wrote a sample application and confirmed that only the components in the hierarchy that triggered the change are re-rendered.
If this is the best way, that is fine but I am hoping to hear from more experienced React developers.
Thanks,
I think the 25k limit refers to requests for the Google Maps JS SDK, not to how often you instantiate a google.maps.Map object.
And yes, it would be good practice to not re-render the component encapsulating the map all the time.
Check this simple map component:
https://github.com/Scarysize/react-maps-popup/blob/master/src/map.js
It initializes the map once and propagates the map instance up using a function-as-a-child approach (of course, you could simply pass a callback as a prop).
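The "initialize once, ignore re-renders" pattern can be sketched framework-agnostically. `MapHolder` and the fake `createMap` below are made-up illustrations, not part of any library; in React terms this corresponds to creating the map in `componentDidMount` and returning `false` from `shouldComponentUpdate`:

```javascript
// Stand-in for `new google.maps.Map(container, options)` — illustrative only.
function createMap(container) {
  return { container: container, markers: [] };
}

class MapHolder {
  constructor(container) {
    this.container = container;
    this.map = null;
    this.initCount = 0;
  }

  // Called on every render pass; only the first call builds the map.
  render() {
    if (!this.map) {
      this.map = createMap(this.container);
      this.initCount += 1;
    }
    return this.map;
  }
}

const holder = new MapHolder('#map');
holder.render();
holder.render();
holder.render();
console.log(holder.initCount); // the map was created exactly once
```

However many times the surrounding hierarchy re-renders, the expensive map object (and thus the billable map load) is created only on the first pass.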
After some digging I found that the "google-map-react" NPM package is doing something interesting behind the scenes. It is actually sending an update to your markers with the latest map state.
Once I found this, I tied into that behaviour and now my map only renders once and I dynamically handle marker changes based on this behaviour.
I was not able to find this behaviour documented anywhere but happened to stumble upon it via a console.log(JSON.stringify(this.props)) within my marker.

How to destroy a cytoscape.js instance

We are using cytoscape.js to render graphs in an Angular JS app and it seems to be leaking DOM nodes. A snapshot comparison in Chrome Dev Tools shows Detached DOM Trees being retained by the "instances" array in the global cytoscape object.
The cytoscape instance is created in the link function of the directive and I would like to clear these objects on the scope $destroy event. Even if I manually nullify the references to these instances, there are other global objects like the CanvasRenderer.data.container or CanvasRenderer.bindings[].target which still hold on to these elements which prevents them from being garbage collected.
My question is: does cytoscape have a destroy() method that would free up references to these DOM elements, which I could call on the Angular $destroy event? OR what is the right way to get rid of these global references?
Screenshots from the Chrome Dev Tools profiler are here: https://drive.google.com/folderview?id=0B6OGkJMuELQHeC01U1FBYkd4NVU&usp=drive_web
(Lack sufficient reputation for attaching images here)
Options:
(1) You can reuse your Cytoscape.js instances and associated DOM elements instead of destroying them. You'll have better performance that way (esp. w.r.t. the DOM), and this is probably the approach you should be using anyway. Though perhaps your choice of frameworks has limited your options in this regard.
(2) You can call the private cytoscape.removeRegistrationForInstance(cy).
This issue stems from some old code that uses registrations for some compatibility as a jQuery plugin. I'll remove it, but you can work around this for now.
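The leak and the workaround can be illustrated with a small mock. The `instances` array and `removeRegistrationForInstance` below only mimic the private cytoscape internals described above; they are not the real API:

```javascript
// Mock of cytoscape's internal instance registry, illustrating why
// registered instances (and their DOM containers) cannot be collected,
// and how removing the registration breaks that reference.
const cytoscape = {
  instances: [],
  init: function (container) {
    const cy = { container: container };
    this.instances.push(cy); // the global registry keeps the instance alive
    return cy;
  },
  removeRegistrationForInstance: function (cy) {
    const i = this.instances.indexOf(cy);
    if (i !== -1) this.instances.splice(i, 1);
  }
};

const cy = cytoscape.init({ id: 'graph-container' }); // stand-in for a DOM node

// In the directive, this would run on the scope's $destroy event:
cytoscape.removeRegistrationForInstance(cy);
console.log(cytoscape.instances.length); // 0 — nothing left pinning the container
```

Once the registration is gone (and your own references are nulled), the detached DOM tree becomes eligible for garbage collection.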
Finally, please create issues on GitHub for feature requests or bug reports. Stack Overflow should only be used for help questions (e.g. "I don't understand this function in the API" etc.).

Can I load an XPage from another Db using the Include Page control?

Looking at xp:include control; we have the following syntax:
<xp:include pageName="/main.xsp" id="include1">
Is there any way to compute the pageName property so it makes reference to another Db?
Designer Help
"Can you?" Yes, but it's quite complicated and I wouldn't recommend attempting it. You would have to write an extended version of the Include control that could point to a different NSF and could include that NSF as a ModuleClassLoader for your current application. While this is theoretically possible, it requires a great deal of knowledge of the internal Java workings of XPages and a willingness to test extensively.
There may be other ways to solve your problem, such as using Design inheritance to include your target page in the local application. Otherwise, you're asking a question about what's possible with a sequence of ones and zeros, so of course the answer is "yes, but..." And in this case, unless you have a desire to research the inner workings of the XSP server, or want to recruit someone who already has such knowledge, the answer to "Can I" is no.
As Nathan says, the reality is that no, you can't do that. But you should think about why you're trying to do this.
Keep in mind that, with your XPage source not in the same place, you won't have any kind of shared applicationScope or sessionScope available. Because of that, it really is just like including a foreign web page and has nothing to do with XPages.
You'll probably be better off just using an iframe if you want to run a foreign page inside another XPages application.

Drupal search engine does not index my custom nodes!

An hour or so ago, somebody posted a question about the Drupal search engine that went roughly like this:
I know drupal should index anything that is returned by node_view() but this is not happening for my custom content. Also: are there better alternatives to Drupal built-in functionality?
As the question was removed while I was answering, and I didn't want to throw away 20 minutes of my life for nothing ;) I thought I'd re-create the question. Hope this is fine by the rules of SO! :)
The Drupal search engine is probably not the most celebrated feature of Drupal, but it is fairly solid, sophisticated and reliable. There are plenty of modules that enhance or substitute it, but - at least in my experience - there is no commonly accepted "better way" to manage searching and indexing.
However, for very big and busy sites, people prefer to use external tools altogether, like a Google search box, or even dedicated software or hardware, like Solr/Lucene or the Google Search Appliance (GSA).
The link I provided above - however - sorts the search-related modules by descending usage statistics, so on the first page you will find the most commonly used ones. One that I personally like for English-language sites is Porter-Stemmer, which indexes words by their stem (e.g. highness, highest and higher will all be returned as matches for the word "high").
That was the general information on search and Drupal. As for your problem, there are a number of things you could check to track it down:
Has your cron.php been executed lately? Indexing is done as part of the cron run, so if you do not have a crontab set, or if you haven't executed it by hand, your node will likely not have been indexed yet.
Are the settings correct? Settings for the search module are located at http://example.com/admin/settings/search : is your minimum word length sufficient for your needs (the default is 3 letters)?
Has 100% of the site been indexed? (You can check that from the settings page.) If it has not, and running cron.php doesn't solve the matter, look further down.
Does a re-index solve the problem? Especially if you inserted data by means of SQL queries directly on the Drupal tables, chances are Drupal hasn't realised the content of the node has changed and therefore doesn't update the index.
Is the node you are trying to find visible? Search results about unpublished nodes, or nodes that require higher-than-yours permissions to be viewed, are not returned, AFAIK.
As for the "stuck indexing": that happened to me once as well. It turned out to be some PHP code within a node body that triggered a PHP exception when the node was being indexed; as a result, the indexing process would halt and all the following nodes would not be indexed either.
Hope this helps. Good luck!

standard way of encoding pagination info on a restful url get?

I think my question would be better explained with a couple of examples...
GET http://myservice/myresource/?name=xxx&country=xxxx&_page=3&_page_len=10&_order=name asc
that is, on the one hand I have conditions ( name=xxx&country=xxxx ) and on the other hand I have parameters affecting the query ( _page=3&_page_len=10&_order=name asc )
now, I thought about using some special prefix ("_" in this case) to avoid collisions between conditions and parameters (what if my resource has an "order" property?)
is there some standard way to handle these situations?
--
I found this example (just to pick one)
http://www.peej.co.uk/articles/restfully-delicious.html
GET http://del.icio.us/api/peej/bookmarks/?tag=mytag&dt=2009-05-30&start=1&end=2
but in this case condition fields are already defined (there is no start nor end property)
I'm looking for some general solution...
--
edit, a more detailed example to clarify
Each item is completely independent from the others... let's say that my resources are customers, and that (luckily) I have a couple of million of them in my db.
so the url could be something like
http://myservice/customers/?country=argentina,last_operation=2009-01-01..2010-01-01
It should give me all the customers from argentina that bought anything in the last year
Now I'd like to use this service to build a browse page, or to fill a combo with AJAX, for example, so the idea was to add some metadata to control what info I should get
to build the browse page I would add
http://...,_page=1,_page_len=10,_order=state,name
and to fill an autosuggest combo with ajax
http://...,_page=1,_page_len=100,_order=state,name,name=what_ever_type_the_user*
to fill the combo with the first 100 customers matching what the user typed...
my question was whether there is some standard (written or not) way of encoding this kind of stuff in a RESTful URL manner...
While there is no standard, Web API Design (by Apigee) is a great book of advice when creating Web APIs. I treat it as a sort of standard, and follow its recommendations whenever I can.
Under "Pagination and partial response" they suggest (page 17):
Use limit and offset
We recommend limit and offset. It is more common, well understood in leading databases, and easy for developers.
/dogs?limit=25&offset=50
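Assembling such a request is straightforward with the built-in URLSearchParams API. The parameter names follow the Apigee recommendation; the helper name and the filter field are made up for illustration:

```javascript
// Build a paginated query string using limit/offset, keeping the
// filter conditions and the paging parameters in one flat namespace.
function pageUrl(base, filters, limit, offset) {
  const params = new URLSearchParams(filters);
  params.set('limit', String(limit));
  params.set('offset', String(offset));
  return base + '?' + params.toString();
}

const url = pageUrl('/dogs', { color: 'brown' }, 25, 50);
console.log(url); // /dogs?color=brown&limit=25&offset=50
```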
There's no standard or convention which defines a way to do this, but using underscores (one or two) to denote meta-info isn't a bad idea. This is what's used to specify member variables by convention in some languages.
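The underscore convention also makes it trivial for the server to separate filter conditions from query meta-parameters. A hypothetical helper, sketched with Node's built-in URLSearchParams:

```javascript
// Split query parameters into filter conditions and meta-parameters
// using the "_" prefix convention: "_page=3" is meta, "name=xxx" is a condition.
function splitParams(query) {
  const conditions = {};
  const meta = {};
  for (const [key, value] of new URLSearchParams(query)) {
    if (key.startsWith('_')) {
      meta[key.slice(1)] = value;
    } else {
      conditions[key] = value;
    }
  }
  return { conditions: conditions, meta: meta };
}

const parsed = splitParams('name=xxx&country=yyy&_page=3&_page_len=10');
console.log(parsed.conditions); // { name: 'xxx', country: 'yyy' }
console.log(parsed.meta);       // { page: '3', page_len: '10' }
```

A resource with its own "page" or "order" property can never collide with the meta-parameters, because the prefix keeps the two namespaces apart.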
Note:
I started writing this as a comment to my previous answer, then was going to add it as an edit, but I think it belongs as a separate answer instead, since it is a completely different approach.
The more that I have been thinking about this, I think that you really have two different resources that you have to deal with:
A page of resources
Each resource that is collected into the page
I may have missed something (could be... I've been guilty of misinterpretation). Since a page is a resource in its own right, the paging meta-information is really an attribute of the resource, so placing it in the URL isn't necessarily the wrong approach. If you consider what can be cached downstream for a page and/or referred to as a resource in the future, the resource is defined by both the paging attributes and the query parameters, so they should both be in the URL. To continue with my entirely too lengthy response, the page resource would be something like:
http://.../myresource/page-10/3?name=xxx&country=yyy&order=name&orderby=asc
I think that this gets to the core of your original question. If the page itself is a resource, then the URI should describe the page so something like page-10 is my way of saying "a page of 10 items" and the next portion of the page is the page number. The query portion contains the filter.
The other resource names each item that the page contains. How the items are identified should be controlled by what the resources are. I think that a key question is whether the result resources stand on their own or not. How you represent the item resources differs based on this concept.
If the item representations are only appropriate when in the context of the page, then it might be appropriate to include the representation inline. If you do this, then identify them individually and make sure that you can retrieve them using either URI fragment syntax or an additional path element. It seems that the following URLs should result in the fifth item on the third page of ten items:
http://.../myresource/page-10/3?...#5
http://.../myresource/page-10/3/5?...
The largest factor in deciding between these two is how strongly coupled the individual item is with the page. The fragment syntax is considerably more binding than the path element IMHO.
Now, if the item resources are free-standing and the page is simply the result of a query (which I think is likely the case here), then the page resource should be an ordered list of URLs for each item resource. The item resource should be independent of the page resource in this case. You might want to use a URI that is based on the identifying attribute of the item itself. So you might end up with something like:
http://.../myresource/item/42
http://.../myresource/item/307E8599-AD9B-4B32-8612-F8EAF754DFDB
The key deciding factor is whether the items are freestanding resources or not. If they are not, then they are derived from the page URI. If they are freestanding, then they should be defined by their own URIs and should be included in the page resource as links instead.
I know that the RESTful folk tend to dislike the usage of HTTP headers, but has anyone actually looked into using HTTP ranges to solve pagination? I wrote an ISAPI extension a few years back that included pagination information along with other non-property information in the URI, and I never really liked the feel of it. I was thinking about doing something like:
GET http://...?name=xxx&country=xxxx&_orderby=name&_order=asc HTTP/1.1
Range: pageditems=20-29
...
This puts the result set parameters (e.g., _orderby and _order) in the URI and the selection as a Range header. I have a feeling that most HTTP implementations would screw this up though especially since support for non-byte ranges is a MAY in RFC2616. I started thinking more seriously about this after doing a bunch of work with RTSP. The Range header in RTSP is a nice example of extending ranges to handle time as well as bytes.
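Parsing such a custom range unit is simple enough. `pageditems` is the made-up unit from the example above (RFC 2616 only obliges servers to understand `bytes`), and the parser is a hypothetical sketch:

```javascript
// Parse a hypothetical "pageditems=20-29" Range header value into
// offset/limit terms. Returns null for anything it does not recognize,
// which lets the server fall back to a default page.
function parseItemRange(header) {
  const match = /^pageditems=(\d+)-(\d+)$/.exec(header);
  if (!match) return null;
  const first = Number(match[1]);
  const last = Number(match[2]);
  if (last < first) return null; // an inverted range is not satisfiable
  return { offset: first, limit: last - first + 1 };
}

console.log(parseItemRange('pageditems=20-29')); // { offset: 20, limit: 10 }
console.log(parseItemRange('bytes=0-499'));      // null
```

A server implementing this would also want to answer with a matching Content-Range-style header so clients know how many items exist in total.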
I guess another way of handling this is to make a separate request for each item on the page as an individual resource in its own right. If your representation allows for this, then you might want to consider it. It is more likely that intermediate caching would work very well with this approach. So your resources would be defined as:
myresource/name=xxx;country=xxx/orderby=name;order=asc/20/
myresource/name=xxx;country=xxx/orderby=name;order=asc/21/
myresource/name=xxx;country=xxx/orderby=name;order=asc/22/
myresource/name=xxx;country=xxx/orderby=name;order=asc/23/
myresource/name=xxx;country=xxx/orderby=name;order=asc/24/
I'm not sure if anyone has tried something like this or not. It would make URIs constructible, which is always a useful property IMHO. The bonus to this approach is that the individual responses could be cached, and the server is free to optimize the handling of collecting pages of items in the most efficient way. The basic idea is to have the client specify the query in the URI along with the index of the item that it wants to retrieve. There is no need to push the idea of a "page" into the resource or even to make it visible. The client can iteratively retrieve objects until its page is full or it receives a 404.
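That client loop can be sketched against a stubbed fetch. `fetchItem` below fakes the per-item resources; a real client would issue HTTP GETs against the constructible URIs and stop on an actual 404 response:

```javascript
// Stub for GET myresource/<query>/<index>/ — pretend the query matched three items.
function fetchItem(index) {
  const store = ['a', 'b', 'c'];
  return index < store.length
    ? { status: 200, body: store[index] }
    : { status: 404 };
}

// Collect one "page" by requesting consecutive indices until the page
// is full or the server reports 404 (i.e., we ran past the result set).
function fetchPage(start, pageLen) {
  const items = [];
  for (let i = start; i < start + pageLen; i++) {
    const res = fetchItem(i);
    if (res.status === 404) break;
    items.push(res.body);
  }
  return items;
}

console.log(fetchPage(0, 10)); // [ 'a', 'b', 'c' ] — stopped early at the 404
console.log(fetchPage(1, 2));  // [ 'b', 'c' ]
```

The page never exists server-side; it is purely a client-side accumulation over individually cacheable item resources.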
There is a downside of course... the HTTP server and infrastructure have to support pipelining, or the cost of connection creation/destruction might kill the idea outright.
