I am learning YUI and was wondering what is the best way to access my configurations (stored in a json) using YUI.
One way I came across was to maintain it in config/app.json and access it through a global variable:
Y.Object.getValue(App, ['Cache', 'globals', 'context'])
Is this the best way? Also if my configuration is spread out over multiple json files, what would be the best way to access them?
Thanks
There are basically two ways to do this:
Include the configuration in the HTML page
Load the configuration using Ajax
Both have some pros and cons.
Including the configuration in the HTML
This requires you to do some server side coding that reads the JSON file and prints it in the page as a global variable. This is what you seem to be doing. The upside of this is that you don't have to make an extra HTTP request. The downside is that you're relying on global variables which can be fragile.
If you're using Node.js, you can use express-state to expose that configuration to the client. Alternatively you can use express-yui, which relies on a similar mechanism to generate YUI configuration.
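To make the mechanism concrete, here is a minimal sketch of what this approach boils down to without any library: serialize the parsed JSON and print it into the page as a global. The global name (`App.config`) and the file path are assumptions for illustration, not YUI or express-state conventions.

```javascript
// Build a <script> tag that exposes the parsed config as a global.
// JSON.stringify output can be inlined as a JavaScript literal.
function configScriptTag(config) {
  return '<script>window.App = window.App || {};' +
         'App.config = ' + JSON.stringify(config) + ';</script>';
}

// Server side you'd do something like (paths are hypothetical):
//   var config = JSON.parse(require('fs').readFileSync('config/app.json', 'utf8'));
//   html = html.replace('</body>', configScriptTag(config) + '</body>');
```

express-state does essentially this for you, plus namespacing and caching.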
Using Ajax
The downside of using Ajax is that it's slower, but the upside is that you can trust the information to be fresh and not to have been modified by anything else on your page.
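Since you're already using YUI, the Ajax route is just the io and json-parse modules; a browser-side sketch, where the URL is an assumption:

```javascript
YUI().use('io-base', 'json-parse', function (Y) {
    Y.io('/config/app.json', {
        on: {
            success: function (id, res) {
                var config = Y.JSON.parse(res.responseText);
                // config came straight from the server, untouched by other scripts
            },
            failure: function () { /* handle the error */ }
        }
    });
});
```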
Dealing with multiple configuration files
My recommendation is that you merge the configuration into a single object. Just decide on some convention for which one wins and generate a single configuration object. This will simplify handling that information in the client. You can do this easily with express-state by just calling app.expose(config1); app.expose(config2) and so on.
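A shallow "last one wins" merge is enough for flat configuration; the sketch below is one minimal convention (with express-state you could instead expose the merged result once, e.g. `app.expose(merged, 'App.config')`; the namespace is an assumption):

```javascript
// Merge several config objects into one; later arguments win on conflict.
function mergeConfigs(/* ...configs */) {
  var merged = {};
  for (var i = 0; i < arguments.length; i++) {
    var cfg = arguments[i];
    for (var key in cfg) {
      if (cfg.hasOwnProperty(key)) merged[key] = cfg[key];
    }
  }
  return merged;
}

var merged = mergeConfigs(
  { db: 'localhost/dev', cache: true },
  { db: 'db.example.com/prod' }        // later file wins on the db key
);
// merged.db === 'db.example.com/prod', merged.cache === true
```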
I would like to create a system where an admin user can set up an HTML-based invoice using EJS, submit it, and then use that EJS to generate invoices.
To do this, I would need them to submit the EJS, store it, and then run it -- server side -- to generate the invoice.
I realise that this is generally a bad idea. At the moment, I am doing my best to enforce security guidelines around the fields that contain the code (only admins can submit them, etc.). However, I realise that anybody with admin permissions is potentially able to submit a template with malicious code. Questions:
Is EJS at least meant to be safe? (That is, no ability to require(), etc.)
What would you do if you absolutely had to run user- (or admin-) provided code?
Is EJS at least meant to be safe? (That is, no ability to require(), etc.)
If you're talking about the EJS template itself, then (generally speaking) the escaped output tags mostly affect security on the browser side.
So, to clarify: you can read an EJS template as a string and use the render method from the ejs library to create the rendered HTML string to be sent to the client. Be aware, though, that code inside <% %> blocks executes as JavaScript on the server during render, and EJS is not a sandbox: require() isn't directly in scope, but a malicious template can still reach server-side globals. So a template from an untrusted author can affect the server, not just the browser.
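To make the mechanics concrete, rendering is just string-in, string-out; a sketch assuming the ejs package is installed (npm install ejs), with a made-up template and data:

```javascript
var ejs = require('ejs');

// <%= %> escapes its output; <% %> would run arbitrary JS during render
var template = '<td><%= item.description %></td><td><%= item.price %></td>';

var html = ejs.render(template, { item: { description: 'Consulting', price: 100 } });
// html is just a string; send it to the client with res.send(html)
```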
What would you do if you absolutely had to run user- (or admin-) provided code?
If I had to go with this approach, I would probably add some validation on the template itself, maybe even parse the DOM and check that the elements are OK. It depends on the task and on how much the EJS varies.
If security is an important issue for you, you can also test the EJS when it is uploaded, using a headless browser, to check for anything that is not supposed to be there (unknown Ajax requests, <script> tags, etc.)
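As one cheap layer before storage, you can reject templates containing patterns you never expect in an invoice. The patterns below are examples only; blacklisting is inherently incomplete, so treat this as one layer on top of the checks above, not a substitute for them:

```javascript
// Rough pre-storage screen for an uploaded template string.
function looksSuspicious(template) {
  var patterns = [
    /<script\b/i,          // inline script tags
    /\bXMLHttpRequest\b/,  // unexpected ajax
    /\bonerror\s*=/i,      // inline event handlers
    /\beval\s*\(/
  ];
  return patterns.some(function (re) { return re.test(template); });
}

looksSuspicious('<p><%= invoice.total %></p>');             // false
looksSuspicious('<script>steal(document.cookie)</script>'); // true
```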
I am very new to XPages and have been reading about XAgents. I need to write one but am a bit puzzled about how to begin: things like how to call it once it's written, where to put the code, whether I can use library code, Java code...
Does anyone have a complete sample I could see so I can get started with this? I have most of my code written in an xPage but for security reasons need to put it into an xAgent with sessionAsSigner to access other data.
Thanks!
Your first stop would be the original article that coined the term XAgent (also check the links at the end of the article). Depending on your output, the XML Helper class might be useful too.
Update/Clarification: An XAgent is a front-end programming technique, not a back-end tool. XAgents get called from browsers (or other devices using HTTP(S)) and thus need to be accessible to end users (the ACL applies, of course). For functionality your program is calling, you use beans and/or SSJS libraries
But taking one step back:
An XAgent is first and foremost an XPage. So all rules for XPages apply:
You call it via a URL; there is no scheduling or event facility. An XAgent is a replacement for the ?OpenAgent URL command, not for the other agent use cases
The XAgent is always accessible from the outside; that is its sole purpose. It is not a device for back-end calls
Since your access to an XAgent is via URL, it isn't an approach for security; security is done using the ACL, Readers and Authors. Be careful with using sessionAsSigner: if that is your default, you need to revisit your access control ideas
Since you render all of the XAgent output yourself, a typical use case is to obtain the XPages output stream only and hand it into a function call of a Java (managed) bean
What you might want to look at (again: revisit your security model) is running an agent from an XPage (which comes with a performance penalty) or simply having a managed bean for your sensitive parts
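Since the question asked for a complete sample, the skeleton below shows the common pattern: an XPage with rendered="false" that writes the response itself from beforeRenderResponse. Treat it as a sketch; the JSON payload and header values are placeholders, and it only runs inside a Domino/XPages runtime:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- agent.xsp: an XPage whose only job is to write the HTTP response itself -->
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" rendered="false">
  <xp:this.beforeRenderResponse><![CDATA[#{javascript:
    var exCon = facesContext.getExternalContext();
    var writer = facesContext.getResponseWriter();
    var response = exCon.getResponse();

    response.setContentType("application/json");
    response.setHeader("Cache-Control", "no-cache");

    // build your output here; beans and SSJS libraries can be called as usual
    writer.write(toJson({ status: "ok" }));

    writer.endDocument();
    facesContext.responseComplete();
  }]]></xp:this.beforeRenderResponse>
</xp:view>
```

You call it like any other XPage, e.g. http://server/db.nsf/agent.xsp?param=value, and read the parameters from context.getUrlParameter or the query string.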
Using sessionAsSigner in an XAgent could cause a serious security issue. When an anonymous user knows the URL of your XAgent, he can use it to retrieve data he would otherwise not be allowed to see.
The XAgent retrieves the data, displays it in a JSON or XML structure of some sort (probably), and your calling website then parses this data. Because of this, a user who knows the URL of your XAgent can use it to retrieve data he is not allowed to see. (What if I wrote a PHP script which calls your agent a few hundred times with different parameters?)
I think the best approach would be to have a simple onclick method bound to a button or maybe an onchange which does a partial refresh on a panel where you display the result of the verification.
I'm using Express and I want to put some configuration in a file (like database configuration, API credentials and other basic stuff).
Right now I'm putting this configuration in a JSON file and reading it using readAsync.
Reading some code, I noticed a lot of people don't use JSON. Instead, they use a plain JS file and export the configuration as a module.
Is there any difference between these approaches, e.g. in performance?
The latter way probably simplifies version control, testing and builds, and makes it easier to have separate configurations for production and development. It also lets you do a little "preprocessing" like defining "constants" for common setups.
In a well-designed application, the performance of configuration-reading is going to be completely irrelevant.
If you go with the latter, you need to practice some discipline: a configuration module should consist almost entirely of literals, with only enough executable code to handle things like distinguishing between development and production. Beware of letting application logic creep into it.
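A sketch of what that discipline looks like in practice: almost all literals, with just enough code to pick the environment. The names and values below are examples, not a convention:

```javascript
// config.js: a JS config module that stays declarative
var env = process.env.NODE_ENV === 'production' ? 'production' : 'development';

var common = {
  appName: 'myapp'          // values shared by every environment
};

var perEnv = {
  development: { db: '127.0.0.1/database_dev',  logLevel: 'debug' },
  production:  { db: 'db.example.com/database', logLevel: 'warn' }
};

// shallow merge: environment-specific values win over common ones
var config = {};
[common, perEnv[env]].forEach(function (src) {
  for (var key in src) {
    if (src.hasOwnProperty(key)) config[key] = src[key];
  }
});

module.exports = config;
```

Consumers then just require('./config') and never care which environment they are in.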
In Node.js, require works synchronously, but that's not very important if you load the configuration once when the application starts. The asynchronous way is really only needed if you load the configuration many times (for each request, for example).
In node.js you can simply require your json files:
config.json:
{
"db": "127.0.0.1/database"
}
app.js:
var config = require('./config');
console.log(config);
If you need something more full featured I would use flatiron/nconf.
I'm building my first Backbone app, and though I'm doing my authentication server side, there are features that non-authenticated users are unable to use; but because those features are in my asset path and part of my Backbone files, everything gets loaded anyway.
Is there a way to load only the resources that a user is actually able to use?
I'm using Rails with cancan to manage this server-side.
You need to split the assets out in to separate groups: a group that can be used by anyone, and a group that can be used by authenticated users. Only send the code that the user is allowed to use, basically.
I wrote a post about doing this with ASP.NET MVC recently. The same idea applies to Rails, though the use of the asset pipeline makes the implementation a bit different:
http://lostechies.com/derickbailey/2012/01/26/modularity-and-security-in-composite-javascript-apps/
The best way is to create a base view with a property named requireLogin (true/false).
All other views should inherit from this view; for the views which need authentication, set requireLogin: true, and for all others leave the property false.
After this you handle the authentication based on this property.
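The guard itself is just a check at render time. The sketch below is framework-agnostic so it runs standalone; in a real app this logic would live in the base Backbone view's render(), and the session shape ({ loggedIn: ... }) is an assumption:

```javascript
// Decide whether a view may render, based on its requireLogin flag.
function guardedRender(view, session) {
  if (view.requireLogin && !session.loggedIn) {
    return { redirected: true, to: 'login' };  // send the user to log in
  }
  view.rendered = true;                        // normal render path
  return { redirected: false };
}

var publicView  = { requireLogin: false };
var privateView = { requireLogin: true };

guardedRender(publicView,  { loggedIn: false });  // renders
guardedRender(privateView, { loggedIn: false });  // redirects to 'login'
```

Note this only improves UX and keeps unauthorized views out of the way; the code for those views is still downloadable, so the real enforcement stays server side, as the other answer describes.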
Like a lot of developers, I want to make JavaScript served up by Server "A" talk to a web service on Server "B" but am stymied by the current incarnation of same origin policy. The most secure means of overcoming this (that I can find) is a server script that sits on Server "A" and acts as a proxy between it and "B". But if I want to deploy this JavaScript in a variety of customer environments (RoR, PHP, Python, .NET, etc. etc.) and can't write proxy scripts for all of them, what do I do?
Use JSONP, some people say. Well, Doug Crockford pointed out on his website and in interviews that the script tag hack (used by JSONP) is an unsafe way to get around the same origin policy. There's no way for the script being served by "A" to verify that "B" is who they say they are and that the data it returns isn't malicious or will capture sensitive user data on that page (e.g. credit card numbers) and transmit it to dastardly people. That seems like a reasonable concern, but what if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not? Would it be any more safe with HTTPS? Example scenarios would be appreciated.
Addendum: Support for IE6 is required. Third-party browser extensions are not an option. Let's stick with addressing the merits and risks of the script tag hack, please.
Currently browser vendors are split on how cross-domain JavaScript should work. A secure and easy-to-use option is Flash's crossdomain.xml file. Most languages have cross-domain proxies written for them, and they are open source.
A more nefarious solution would be to use XSS, the way the Samy worm spread. XSS can be used to "read" a remote domain using XMLHttpRequest. XSS isn't even required if the other domain has added a <script src="https://YOUR_DOMAIN"></script>. A script tag like this allows you to evaluate your own JavaScript in the context of another domain, which is identical to XSS.
It is also important to note that even with the restrictions of the same origin policy, you can get the browser to transmit requests to any domain; you just can't read the response. This is the basis of CSRF. You could write invisible image tags to the page dynamically to get the browser to fire off an unlimited number of GET requests. This use of image tags is also how an attacker obtains document.cookie using XSS on another domain. CSRF POST exploits work by building a form and then calling .submit() on the form object.
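A tiny browser-side sketch of that asymmetry (the URL is made up, and this is for understanding the threat model, not for use):

```javascript
// Same-origin policy does not stop the request from being *sent*.
var img = new Image();
img.src = 'http://other-domain.example/action?param=1'; // GET fires cross-domain

// You can never read the response; CSRF only needs the side effect on the server.
```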
To understand the Same Origin Policy, CSRF and XSS better, you should read the Google Browser Security Handbook.
Take a look at easyXDM; it's a clean JavaScript library that allows you to communicate across the domain boundary without any server-side interaction. It even supports RPC out of the box.
It supports all 'modern' browsers, as well as IE6, with transit times < 15ms.
A common use case is to use it to expose an Ajax endpoint, allowing you to do cross-domain Ajax with little effort (check out the small sample on the front page).
What if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not?
Let's say you have two servers: frontend.com and backend.com. frontend.com includes a <script> tag like this: <script src="http://backend.com/code.js"></script>.
When the browser evaluates code.js, it is considered a part of frontend.com and NOT a part of backend.com. So, if code.js contained XHR code to communicate with backend.com, it would fail.
Would it be any more safe with HTTPS? Example scenarios would be appreciated.
If you just converted your <script src="https://backend.com/code.js"> to https, it would NOT be any more secure. If the rest of your page is http, then an attacker could easily man-in-the-middle the page and change that https to http, or, worse, include his own JavaScript file.
If you convert the entire page and all its components to https, it would be more secure. But if you are paranoid enough to do that, you should also be paranoid enough NOT to depend on an external server for your data. If an attacker compromises backend.com, he has effectively got enough leverage over frontend.com, frontend2.com and all of your websites.
In short, https is helpful, but it won't help you one bit if your backend server gets compromised.
So, what are my options?
Add a proxy server to each of your client applications. You don't need to write any code; your web server can automatically do that for you. If you are using Apache, look up mod_rewrite and mod_proxy
If your users are using the latest browsers, you could consider using Cross Origin Resource Sharing.
As The Rook pointed out, you could also use Flash + crossdomain.xml. Or you could use Silverlight and its equivalent of crossdomain. Both technologies allow you to communicate with JavaScript, so you just need to write a utility function and then normal JS code would work. I believe YUI already provides a Flash wrapper for this; check YUI3 IO
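For the proxy option above, the Apache side can be as small as this sketch (the path and backend host are assumptions; mod_rewrite and mod_proxy must be enabled):

```apache
# Forward /api/* on server A transparently to server B
RewriteEngine On
RewriteRule ^/api/(.*)$ http://backend.example.com/$1 [P]
```

Your JavaScript then talks to /api/... on its own origin, and the same origin policy never comes into play.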
What do you recommend?
My recommendation is to create a proxy server, and use https throughout your website.
Apologies to all who attempted to answer my question. It proceeded under a false assumption about how the script tag hack works. The assumption was that one could simply append a script tag to the DOM and that the contents of that appended script tag would not be restricted by the same origin policy.
If I'd bothered to test my assumption before posting the question, I would've known that it's the source attribute of the appended tag that's unrestricted. JSONP takes this a step further by establishing a protocol that wraps traditional JSON web service responses in a callback function.
Regardless of how the script tag hack is used, however, there is no way to screen the response for malicious code, since browsers execute whatever JavaScript is returned. And neither IE, Firefox nor WebKit browsers check SSL certificates in this scenario. Doug Crockford is, so far as I can tell, correct. There is no safe way to do cross-domain scripting as of JavaScript 1.8.5.