CouchDB and the dhtmlx library

I have created a small database (CouchDB) and web page (HTML5 Boilerplate). My end goal is to have the user click a button that retrieves a particular view, which is then rendered as a table using the dhtmlx library (http://dhtmlx.com/).
At this point I have the page initializing the table (grid) on page load. I am trying to load the data into the table using 'mygrid.load(url,"json")'. The documentation doesn't provide an example of the URL, but I'm assuming it would be the CouchDB URL of the view. In my case that is: 127.0.0.1:5984/mydata/_design/mydata/_view/details. If I open that URL in a browser, I see the data in JSON format.
{"total_rows":14,"offset":0,"rows":[
{"id":"90e77126ce592105891eba2bd4000143","key":"An","value":"addition to others"},
{"id":"90e77126ce592105891eba2bd4001106","key":"Changed","value":"Directories."},
. . .
{"id":"83001c900adeefe50928a24b98001733","key":"Yeah","value":"CSS kind of working. Guess I have express 3.0"}
]}
Needless to say:
mygrid.load("http://127.0.0.1:5984/mydata/_design/mydata/_view/details","json")
doesn't work. So:
a) Any ideas what I might be doing wrong?
b) Are there better libraries for what I'm trying to do with the grid? dhtmlx seems to be oriented toward XML files, but it's what I was given.

Also check whether your HTML is served from http://127.0.0.1:5984. If it is not served from that address and port, then your JavaScript will not be able to issue a request to http://127.0.0.1:5984 at all because of the same-origin policy.
So you either have to serve your HTML from CouchDB directly or use some proxy so that the page and the view appear to be served from the same host and port.

It looks like dhtmlx supports JSON initialization:
http://www.dhtmlx.com/docs/products/dhtmlxGrid/samples/12_initialization_loading/09_init_grid_json.html
You will probably need to write some custom JavaScript to massage the CouchDB view output into a format that the Grid initializer supports.
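For example, here is a minimal sketch of that massaging step, assuming the view URL from the question and dhtmlxGrid's JSON row format of {rows: [{id, data: [...]}]}; the loadView helper name and the key/value column mapping are mine, not part of either API:

    // Fetch the CouchDB view and convert its rows into the JSON layout
    // that dhtmlxGrid's parse() accepts, then feed it to the grid.
    function loadView(grid, viewUrl) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", viewUrl, true);
      xhr.onload = function () {
        var view = JSON.parse(xhr.responseText);
        var gridData = {
          rows: view.rows.map(function (row) {
            // adjust this mapping to whatever columns your grid defines
            return { id: row.id, data: [row.key, row.value] };
          })
        };
        grid.parse(gridData, "json");
      };
      xhr.send();
    }

    // Usage, after mygrid is initialized on page load (note the relative,
    // same-origin URL, i.e. the page itself is served from 127.0.0.1:5984):
    // loadView(mygrid, "/mydata/_design/mydata/_view/details");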

Related

Serve custom javascript to browser via Node.js app

I developed a small node.js app in which I can configure conditions for a custom javascript file, which can be embedded in a webpage, and which modifies the DOM of that page in the browser on load. The configuration values are stored in MongoDB. (For sake of argument: add class "A" to DOM element with ID "B" )
I am having difficulty figuring out the best way to serve these requests / the JavaScript file.
Option 1, my current implementation:
- I save a configuration in the node app, and a distinct JavaScript file is created for that configuration.
- The page references that file, which is hosted and served by the server.
Option 2, where I think I want and should go:
- I save a configuration (MongoDB); no JavaScript file is created.
- Pages reference a generic JavaScript link (for instance: api.service.com/javascript.js).
- The Node.js / Express app processes the request and returns custom JavaScript (a file?) with the correct values as saved in MongoDB for that configuration.
Now, while I believe this is the right way to go about it, I am unsure HOW to go about it. Any ideas and advice are very welcome!
P.S.: For instance, I wonder how best to authenticate or identify the origin, user and requested configuration. Should I do it like api.service.com/javascript.js&id="userID" - is that good practice?
Why not serve up a generic Javascript file which can take a customized json object (directly from mongodb) and apply the necessary actions? You can include the json data on the page if you really need to have everything embedded, but breaking up configuration and code is the most maintainable approach.
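A rough sketch of that split (the route path, collection name and config fields here are assumptions for illustration, not anything from the question):

    // Server side (Express + MongoDB driver): return the stored
    // configuration for a given id as plain JSON.
    app.get("/config/:id", function (req, res) {
      db.collection("configs").findOne({ _id: req.params.id }, function (err, config) {
        if (err || !config) { res.statusCode = 404; return res.end(); }
        res.json(config); // e.g. { selector: "#B", addClass: "A" }
      });
    });

    // Client side: the one generic, static javascript.js that every page
    // embeds. It fetches the config and applies it (assumes the API host
    // allows cross-origin requests, e.g. via CORS).
    (function () {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "https://api.service.com/config/" + window.MY_CONFIG_ID, true);
      xhr.onload = function () {
        var config = JSON.parse(xhr.responseText);
        var el = document.querySelector(config.selector);
        if (el) { el.className += " " + config.addClass; }
      };
      xhr.send();
    })();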

Can I capture JSON data already being sent with a userscript/Chrome extension?

I'm trying to write a userscript/Chrome extension to capture JSON data being sent while using a web service, so that I can reformat it and display a selected portion on the page. Currently the JSON is sent as the application loads (as I've observed by watching traffic with Fiddler 2). Is my only option to request the JSON again, or is capturing it possible? Since I'm not providing a code example, even some guidance on what method or topic to research, or whether I'm barking up the wrong tree, would be a welcome answer.
No easy way.
If it is for a specific site, you might look into intercepting and overwriting the part of the code that sends the request. For example, if it is sent on a button click, you can replace the existing click handler with your own implementation.
You can also try to make a proxy for XMLHttpRequest. Not sure if this is even possible, never seen a working example. You can look at some attempts here.
For all these tasks you would probably need to run your JavaScript code outside of the sandboxed content script so that it can access the parent page's variables, which means injecting a <script> tag with your code right into the page from the content script.
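A minimal sketch of that injection (the XMLHttpRequest wrapping inside injectedCode is illustrative; you would adapt the filtering to the site's actual endpoints):

    // content script: build the code that should run in the page context
    var injectedCode = function () {
      var origOpen = XMLHttpRequest.prototype.open;
      XMLHttpRequest.prototype.open = function (method, url) {
        this.addEventListener("load", function () {
          // crude filter; match the request you actually want to capture
          if (String(url).indexOf("/api/") !== -1) {
            console.log("captured JSON:", this.responseText);
          }
        });
        return origOpen.apply(this, arguments);
      };
    };

    // inject it via a <script> tag so it runs outside the sandbox,
    // with access to the page's own XMLHttpRequest
    var script = document.createElement("script");
    script.textContent = "(" + injectedCode.toString() + ")();";
    (document.head || document.documentElement).appendChild(script);
    script.parentNode.removeChild(script); // the code has already executed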

Enabling search for my site that uses rss

My site is here.
The main Flash app uses RSS (or XML) to display data. I'm wondering how I can add search functionality to it. One idea is to create multiple custom RSS feeds, one for each filter and search query, but I think that would be a nightmare once more data is added later on. So I'm wondering if there's another way to do it?
The RSS feed is located here. My site is hosted at edicy.com and I can't install any server-side extensions; I can only use XHTML, XML, HTML and JavaScript.
Index your data using a search engine like Solr or Sphinx, then have your Flash app talk to that server: post a query to it and retrieve the results in XML.
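For example, with a hosted Solr instance (the host name below is hypothetical; since edicy.com can't run it, Solr would have to live on a separate server), a search is just an HTTP GET against the select handler that returns XML, which the Flash app or a bit of JavaScript can consume:

    // Query a (hypothetical) Solr server and hand its XML response to a
    // callback. Assumes the Solr host permits cross-origin access
    // (crossdomain.xml for Flash, CORS for browser JavaScript).
    function searchSite(query, onResults) {
      var url = "http://search.example.com/solr/select" +
                "?q=" + encodeURIComponent(query) +
                "&wt=xml"; // ask Solr for XML explicitly
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true);
      xhr.onload = function () {
        onResults(xhr.responseXML); // a Document of <result><doc>... entries
      };
      xhr.send();
    }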

Why does new Facebook Javascript SDK not violate the "same origin policy"?

The new Facebook JavaScript SDK can let any website log in as a Facebook user and fetch that user's data...
So it would be www.example.com including some JavaScript from Facebook, but as I recall, that script is considered to be of the origin of www.example.com and cannot fetch data from facebook.com, because that would violate the "same origin policy". Isn't that correct? If so, how does the script fetch data?
From here: https://developer.mozilla.org/en/Same_origin_policy_for_JavaScript
The same origin policy prevents a document or script loaded from one origin from getting or setting properties of a document from another origin. This policy dates all the way back to Netscape Navigator 2.0.
and explained slightly differently here: http://docs.sun.com/source/816-6409-10/sec.htm
The same origin policy works as follows: when loading a document from one origin, a script loaded from a different origin cannot get or set specific properties of specific browser and HTML objects in a window or frame (see Table 14.2).
The Facebook script is not attempting to interact with script from your domain or read DOM objects. It's just going to do its own post to Facebook. It gets your site name not by interacting with your page or with script from your site, but because the script itself is generated when you fill out the form to get the "like" button. I registered a site named "http://www.bogussite.com" and got the code to put on my website. The first thing in this code was
iframe src="http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.bogussite.com&
so the script is clearly getting your site info from the hard-coded URL parameters in the link to the iframe.
Facebook is far from alone in having you use scripts hosted on their servers. There are plenty of other scripts that work this way. All of the Google APIs, for example, including Google Gears, Google Analytics, etc., require you to use a script hosted on their server. Just last week, while I was trying to figure out how to do geolocation for our store finder for a mobile-friendly web app, I found a whole slew of geolocation services that have you use scripts hosted on their servers rather than copying the script to your server.
I think, but am not sure, that they use the iframe method. At least the cross domain receiver and xfbml stuff for canvas apps uses that. Basically the javascript on your page creates an iframe within the facebook.com domain. That iframe then has permission to do whatever it needs with facebook. Communication back with the parent can be done with one of several methods, for example the url hash. But I'm not sure which if any method they use for that part.
If I recall, they use script tag insertion. So when a JS SDK call needs to call out to Facebook, it inserts a <script src="http://graph.facebook.com/whatever?params...&callback=some_function script tag into the current document. Then Facebook returns the data in JSON format as some_function({...}) where the actual data is inside the ... . This results in the function some_function being called in the origin of example.com using data from graph.facebook.com.
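That technique is commonly called JSONP; a sketch of it looks roughly like this (the helper and callback names are made up, not what the SDK actually generates):

    // JSONP: load the cross-origin response as a <script>, which the
    // same origin policy allows, and let it call a global callback.
    function jsonp(url, callbackName, onData) {
      window[callbackName] = function (data) {
        delete window[callbackName];
        script.parentNode.removeChild(script); // clean up the temporary tag
        onData(data);
      };
      var script = document.createElement("script");
      script.src = url + (url.indexOf("?") === -1 ? "?" : "&") +
                   "callback=" + callbackName;
      document.head.appendChild(script);
    }

    // Usage: the server must wrap its JSON as some_function({...})
    // jsonp("https://graph.facebook.com/whatever?params", "some_function",
    //       function (data) { console.log(data); });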

How do I get a web part to refresh in IE?

I have a SharePoint web part that gets XML data from an .ASHX page, parses it and displays it using JavaScript. Everything works fine, until the XML changes. When I view the web part in IE, the new data is not updated until I close the browser. Even doing a CTRL-F5 does not grab the new data.
Firefox displays the new data immediately, with just a simple page refresh.
I have added a timestamp to the query string of my .ASHX page so that the XML result is not cached, but that did not fix my IE woes. Any other ideas?
Edit
The .ASHX page is using the API to access a list and is building the XML string, then returning that as an application/XML content type. I have confirmed that the XML is updated to reflect the new data in the list. I am also able to see the data consumed in the web part when it is displayed in FireFox.
Solution
I was actually generating the timestamp to append to my query in the server code and then putting that string into the JavaScript. Once I moved the timestamp code into the JavaScript itself, things started working much better.
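In other words, something along these lines (a sketch; the handler path and the callback are placeholders), so the cache-busting value is computed in the browser on every request rather than once when the page is rendered:

    // Build the .ashx URL with a fresh timestamp each time so IE cannot
    // satisfy the request from its cache.
    function loadWebPartData(onXml) {
      var url = "/_layouts/MyData.ashx?ts=" + new Date().getTime();
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true);
      xhr.onload = function () {
        onXml(xhr.responseXML); // parse and display the XML in the web part
      };
      xhr.send();
    }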
It's most likely cached; there's no other logical explanation for why it would work in Firefox and not in IE. Try reloading IE several times in a row.
Check what headers that .ashx page sets.
It's not just your browser that could be caching the page; any middleware, including the web server, might have a flawed caching implementation. You can also try using HTTP POST instead of GET, because according to the HTTP specification, POST responses are not cached by default.
Is it a custom Web Part or one of the out-of-the-box Web Parts? It would make it easier to help you if you provided any more information on how you're retrieving the data from the HttpHandler (ashx).
