Is it possible to store data across domains using a Greasemonkey script? I want to allow a Javascript object to be accessed from multiple websites that are using the same Greasemonkey script.
Yes, that is one of the purposes of GM_setValue(); it stores data per script and across domains.
Beware that the bog-standard GM_setValue() is somewhat problematic. It can use lots of global resources or cause a script instance to crash.
Here are some guidelines:
Do not use GM_setValue() to store anything but strings. For anything else, use a serializer such as GM_SuperValue. Even innocent looking integers can cause the default GM_setValue() to crash.
Rather than store lots of small variables, it may be better to wrap them in an object and store that with one of the serializers.
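For example, here is a minimal sketch using plain JSON.stringify as the serializer (GM_SuperValue packages the same idea with support for more types; the key name mySettings is illustrative):
// Wrap related values in one object and store a single serialized string
var settings = { volume: 11, theme: "dark" };
GM_setValue('mySettings', JSON.stringify(settings));
// Later, from the same script running on any domain:
var saved = JSON.parse(GM_getValue('mySettings', '{}'));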
Finally, note that localStorage has a specific meaning in JavaScript, and localStorage is domain-specific.
http://wiki.greasespot.net/GM_setValue
foo = "This is a string";
GM_setValue('myEntry', foo);
http://wiki.greasespot.net/GM_getValue
bar = GM_getValue('myEntry');
bar = GM_getValue('myOtherEntry', "default value if no value was found");
http://wiki.greasespot.net/GM_deleteValue
GM_deleteValue('myEntry');
GM_deleteValue('myOtherEntry');
https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Storage
foo = "this is a string";
localStorage.setItem('myEntry', foo);
bar = localStorage.getItem('myOtherEntry') || "default value";
localStorage.removeItem('myEntry');
or just...
localStorage.myEntry = "this is a string";
bar = localStorage.myEntry;
Our Firefox addon issues queries to Google in the backend (main.js), then extracts some content through XPath. For this purpose, we use innerHTML to create a document instance for XPath parsing. But when we submitted this addon to Mozilla, we got rejected because:
This add-on is creating DOM nodes from HTML strings containing potentially unsanitized data, by assigning to innerHTML, jQuery.html, or through similar means. Aside from being inefficient, this is a major security risk. For more information, see https://developer.mozilla.org/en/XUL_School/DOM_Building_and_HTML_Insertion
Following the link provided, we tried to replace innerHTML with nsIParserUtils.parseFragment(). However, the example code:
let { Cc, Ci } = require("chrome");
function parseHTML(doc, html, allowStyle, baseURI, isXML) {
let PARSER_UTILS = "@mozilla.org/parserutils;1";
...
The Cc and Ci utilities can only be used in main.js, while the function requires a document (doc) as an argument. But we could not find any examples of creating a document inside main.js, and we cannot use document.implementation.createHTMLDocument(""), because main.js is a background script, which has no reference to the global built-in document.
I googled a lot, but still could not find any solutions. Could anybody kindly help?
You probably want to use nsIDOMParser instead, which is the same as the standard DOMParser accessible in window globals, except that you can use it from privileged contexts without a window object.
That gives you a whole document, though, with synthesized <html> and <body> elements if you don't provide your own. If you absolutely need a fragment, you can use the HTML5 <template> element to extract one via the parser:
// Assumes: let parser = Cc["@mozilla.org/xmlextras/domparser;1"].createInstance(Ci.nsIDOMParser);
let partialHTML = "foo <b>baz</b> bar";
let frag = parser.parseFromString(`<template>${partialHTML}</template>`, "text/html")
                 .querySelector("template").content;
Can I set environment variables depending on which domain the request is going through?
What I am thinking about is that I've got my node.js application up and running, and I assign two domains to it: the same domain with different TLDs, like below
mydomain.fr
mydomain.de
and doing something like this pseudo code
switch (app.host) {
  case 'mydomain.fr':
    process.env.LANGUAGE = 'fr';
    break;
  case 'mydomain.de':
    process.env.LANGUAGE = 'de';
    break;
  default:
    process.env.LANGUAGE = 'en';
    break;
}
I am thinking about doing it this way because I'd really like to use a node module like i18n or similar, keeping the same code base and just adding the language variables in per-language JSON files.
This would make it a lot easier if I wanted to launch my website in a new country like Italy (.it) or anywhere else. If I push an update to the website, it's automatically pushed to all languages.
My main question now is, first: is this possible?
And second: what are the pros, and especially the cons, of this approach? I've already listed some pros, and right now the only con I can think of is that this one web server needs to be larger/stronger than the ones I would have needed had I set the site up on separate servers per language.
Worth mentioning is that the traffic on each language site is rather low (below 10k per month for both languages)
Also worth mentioning: I'm planning to deploy these websites to Heroku, if that matters.
It wouldn't work if two domains run simultaneously.
Global variables like process in node.js are just that: global. What would happen if you run two services (apps) is that the first will set the variable process.env.LANGUAGE to something and the second will overwrite it.
It wouldn't even work if you do it per-connection. The first customer from France will set process.env.LANGUAGE to fr then the second customer from Germany will set it to de then by the time you respond to the first customer (who is French) you will end up giving him the page in German.
Remember: while node.js is single-threaded, we still need to worry about multiple connections because they can be concurrent.
The correct place to attach something like this is the object you get uniquely for each connection: the request object or the response object (usually abbreviated req and res). If you want something standard-ish, add a pseudo-header to the request object using a middleware. I'd personally just do req.lang = 'de';
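A minimal sketch of that approach, assuming Express 4 (req.lang is just an illustrative name, not a standard property):
const express = require('express');
const app = express();

// Middleware: derive the language from the Host header, per request
app.use(function (req, res, next) {
  if (req.hostname.endsWith('.fr')) req.lang = 'fr';
  else if (req.hostname.endsWith('.de')) req.lang = 'de';
  else req.lang = 'en';
  next();
});

app.get('/', function (req, res) {
  res.send('Language for this request: ' + req.lang);
});

app.listen(3000);
Because the language hangs off req, two concurrent requests from different domains can never clobber each other the way a shared process.env.LANGUAGE would.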
I am a newbie, trying to write a basic extension. For my extension to work I need to initialize some data, so inside my background.js I declared something like this:
localStorage["frequency"] = 1; //I want one as Default value. This line is not inside any method, its just the first line of the file background.js
Users can go to the Options page and change this value using the GUI. As soon as the user changes it in the UI, I update the stored value.
Now the problem: to my understanding, background.js reloads every time the machine is restarted. So every time I restart my machine and open Chrome, the frequency value is reset to 1. Where should I initialize this value to avoid that?
You could just use a specific default key: if frequency is not set, you try default-frequency. The default keys are then still set or defined in background.js.
I like to do that in one step, with a function like this:
function storageGet(key, defaultValue) {
    // localStorage.getItem() returns null for keys that were never set
    var item = localStorage.getItem(key);
    if (item === null) return defaultValue;
    else return item;
}
(According to the specification, localStorage must return null if no value has been set.)
So for your case it would look something like
var f = storageGet("frequency",1);
Furthermore, you might be interested in checking out the chrome.storage API. It's used similarly to localStorage but provides additional functionality which might be useful for your extension. In particular, it supports synchronizing user data across the user's Chrome browsers.
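For example (a sketch; chrome.storage.sync.get accepts an object whose property values act as defaults when nothing is stored yet):
chrome.storage.sync.get({ frequency: 1 }, function (items) {
  // items.frequency is the stored value, or 1 if nothing was stored yet
  console.log(items.frequency);
});

// In the options page, after the user changes the value:
chrome.storage.sync.set({ frequency: 5 });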
Edit: I changed the if statement in regard to apsillers' objection. But since the specification says it ought to be null, I think it makes sense to check for that instead of undefined.
This is another solution:
// background.js
initializeDefaultValues();
function initializeDefaultValues() {
    if (localStorage.getItem('default_values_initialized')) {
        return;
    }
    // set default values for your variables here
    localStorage.setItem('frequency', 1);
    localStorage.setItem('default_values_initialized', true);
}
I think the problem lies with your syntax. To get and set your localStorage values try using this:
// to set
localStorage.setItem("frequency", 1);
// to get
localStorage.getItem("frequency");
Did anyone set up something like this for himself using the existing node.js REPL? I didn't think of a quick way to do it.
The way I do it today is using emacs and this:
https://github.com/ivan4th/swank-js
This module is composed of:
A SLIME-js addon to emacs which, in combination with js2-mode, lets you simply issue a C-M-x somewhere in the body of a function def - and off goes the function's string to the ..
Swank-js server (yes, you could eval from your local machine directly to a remote process) written in Node.js - it receives the string of the function you eval'ed and actually evals it
A whole part that lets you connect to another port on that server with your BROWSER and then lets you manipulate the DOM on that browser (which is pretty amazing but not relevant)
My solution uses SLIME-js on the emacs side AND I require('swank-js') in my app.js file
Now.. I have several issues and questions regarding my solution or other possible ones:
Q1: Is this overdoing it? Does someone have a secret way to eval stuff from nano into his live process?
Q2: I had to change the way swank-js is EVALing.. it used some kind of black magic like this:
var Script = process.binding('evals').Script;
var evalcx = Script.runInContext;
....
this.context = Script.createContext();
for (var i in global) this.context[i] = global[i];
this.context.module = module;
this.context.require = require;
...
r = evalcx("CODECODE", this.context, "repl");
which, as far as I understand, just copies the global variables to the new context, and upon eval, doesn't change the original function definitions - SOOO.. I am just using plain "eval" and IT WORKS.
Do you have any comments regarding this?
Q3: In order to re-eval a function, it needs to be a GLOBAL function - is it bad practice to have all function definitions as global (clojure-like)? Do you think there is another way to do this?
Actually, swank-js is getting much better, and it is now much easier to set up with your project using NPM. I'm in the process of writing the documentation right now, but the functionality is there!
Check this out http://nodejs.org/api/vm.html
var util = require('util'),
    vm = require('vm'),
    sandbox = {
        animal: 'cat',
        count: 2
    };

vm.runInNewContext('count += 1; name = "kitty"', sandbox, 'myfile.vm');
console.log(util.inspect(sandbox));
// { animal: 'cat', count: 3, name: 'kitty' }
This should help you a lot; all of the sandbox modules for Node use it, but you can use it directly too. :)
You might take a look at jsapp.us, which runs JS in a sandbox, and then exposes that to the world as a quick little test server. Here's the jsapp.us github repo.
Also, stop into #node.js and ask questions for a quicker response :)
We have a high security application and we want to allow users to enter URLs that other users will see.
This introduces a high risk of XSS hacks - a user could potentially enter javascript that another user ends up executing. Since we hold sensitive data it's essential that this never happens.
What are the best practices in dealing with this? Is any security whitelist or escape pattern alone good enough?
Any advice on dealing with redirections (for instance, a "this link goes outside our site" message on a warning page before following the link)?
Is there an argument for not supporting user entered links at all?
Clarification:
Basically our users want to input:
stackoverflow.com
And have it output to another user as a clickable link:
<a href="http://stackoverflow.com">stackoverflow.com</a>
What I really worry about is them using this in an XSS hack. I.e. they input:
stackoverflow.com"><script>alert('hacked!');</script>
So other users get this link:
<a href="stackoverflow.com"><script>alert('hacked!');</script>">stackoverflow.com</a>
My example is just to explain the risk - I'm well aware that JavaScript and URLs are different things, but by letting them input the latter they may be able to execute the former.
You'd be amazed how many sites you can break with this trick - HTML is even worse. If they know how to deal with links, do they also know how to sanitise <iframe>, <img> and clever CSS references?
I'm working in a high security environment - a single XSS hack could result in very high losses for us. I'm happy that I could produce a Regex (or use one of the excellent suggestions so far) that could exclude everything that I could think of, but would that be enough?
If you think URLs can't contain code, think again!
https://owasp.org/www-community/xss-filter-evasion-cheatsheet
Read that, and weep.
Here's how we do it on Stack Overflow:
/// <summary>
/// returns "safe" URL, stripping anything outside normal charsets for URL
/// </summary>
public static string SanitizeUrl(string url)
{
    return Regex.Replace(url, @"[^-A-Za-z0-9+&@#/%?=~_|!:,.;\(\)]", "");
}
The process of rendering a link "safe" should go through three or four steps:
Unescape/re-encode the string you've been given (RSnake has documented a number of tricks at http://ha.ckers.org/xss.html that use escaping and UTF encodings).
Clean the link up: regexes are a good start. Make sure to truncate the string or throw it away if it contains a " (or whatever you use to close the attributes in your output). If you're doing the links only as references to other information, you can also force the protocol at the end of this process: if the portion before the first colon is not 'http' or 'https', append 'http://' to the start. This allows you to create usable links from incomplete input (as a user would type into a browser) and gives you a last shot at tripping up whatever mischief someone has tried to sneak in.
Check that the result is a well formed URL (protocol://host.domain[:port][/path][/[file]][?queryField=queryValue][#anchor]).
Possibly check the result against a site blacklist or try to fetch it through some sort of malware checker.
If security is a priority I would hope that the users would forgive a bit of paranoia in this process, even if it does end up throwing away some safe links.
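If it helps, here is a hedged JavaScript sketch of those steps (the function name and the exact checks are illustrative, not a definitive implementation):
function makeSafeLink(input) {
  try {
    // 1. Unescape first, so encoded tricks are visible to the later checks
    var url = decodeURIComponent(input.trim());
    // 2. Throw it away if it could close an attribute in our output
    if (/["'<>]/.test(url)) return null;
    // 3. Force the protocol: a bare address gets http:// prepended;
    //    anything with another scheme (javascript:, data:, ...) is rejected
    if (!/^https?:\/\//i.test(url)) {
      if (url.indexOf(':') !== -1) return null;
      url = 'http://' + url;
    }
    // 4. Check the result is a well-formed http(s) URL
    var parsed = new URL(url);
    return (parsed.protocol === 'http:' || parsed.protocol === 'https:') ? parsed.href : null;
  } catch (e) {
    return null; // malformed %-escapes or an unparsable URL
  }
}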
Use a library, such as OWASP-ESAPI API:
PHP - http://code.google.com/p/owasp-esapi-php/
Java - http://code.google.com/p/owasp-esapi-java/
.NET - http://code.google.com/p/owasp-esapi-dotnet/
Python - http://code.google.com/p/owasp-esapi-python/
Read the following:
https://www.golemtechnologies.com/articles/prevent-xss#how-to-prevent-cross-site-scripting
https://www.owasp.org/
http://www.secbytes.com/blog/?p=253
For example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$esapi = new ESAPI( "/etc/php5/esapi/ESAPI.xml" ); // Modified copy of ESAPI.xml
$sanitizer = ESAPI::getSanitizer();
$sanitized_url = $sanitizer->getSanitizedURL( "user-homepage", $url );
Another example is to use a built-in function. PHP's filter_var function is an example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$sanitized_url = filter_var($url, FILTER_SANITIZE_URL);
Note that using filter_var still allows javascript: calls and does not filter out schemes other than http or https. Using the OWASP ESAPI Sanitizer is probably the best option.
Still another example is the code from WordPress:
http://core.trac.wordpress.org/browser/tags/3.5.1/wp-includes/formatting.php#L2561
Additionally, since there is no way of knowing where the URL links (i.e., it might be a valid URL, but the contents of the URL could be mischievous), Google has a safe browsing API you can call:
https://developers.google.com/safe-browsing/lookup_guide
Rolling your own regex for sanitization is problematic for several reasons:
Unless you are Jon Skeet, the code will have errors.
Existing APIs have many hours of review and testing behind them.
Existing URL-validation APIs consider internationalization.
Existing APIs will be kept up-to-date with emerging standards.
Other issues to consider:
What schemes do you permit (are file:/// and telnet:// acceptable)?
What restrictions do you want to place on the content of the URL (are malware URLs acceptable)?
Just HTMLEncode the links when you output them. Make sure you don't allow javascript: links. (It's best to have a whitelist of protocols that are accepted, e.g., http, https, and mailto.)
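A sketch of that advice in JavaScript (the escapeHtml helper and the whitelist are illustrative names, not a definitive implementation):
var ALLOWED_PROTOCOLS = ['http:', 'https:', 'mailto:'];

function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

function renderLink(url) {
  var protocol;
  try {
    protocol = new URL(url).protocol;
  } catch (e) {
    return escapeHtml(url); // not a parsable absolute URL: show plain text
  }
  if (ALLOWED_PROTOCOLS.indexOf(protocol) === -1) {
    return escapeHtml(url); // javascript:, data:, etc. never become links
  }
  return '<a href="' + escapeHtml(url) + '">' + escapeHtml(url) + '</a>';
}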
You don't specify the language of your application, so I will presume ASP.NET, for which you can use the Microsoft Anti-Cross Site Scripting Library.
It is very easy to use; all you need is an include and that is it :)
While you're on the topic, why not give Design Guidelines for Secure Web Applications a read?
If any other language: if there is a library for ASP.NET, one has to be available as well for other kinds of languages (PHP, Python, RoR, etc.)
For Pythonistas, try Scrapy's w3lib.
OWASP ESAPI pre-dates Python 2.7 and is archived on the now-defunct Google Code.
How about not displaying them as a link? Just use the text.
Combined with a warning to proceed at their own risk, that may be enough.
Addition: see also Should I sanitize HTML markup for a hosted CMS? for a discussion on sanitizing user input.
There is a library for JavaScript that solves this problem:
https://github.com/braintree/sanitize-url
Try it =)
In my project written in JavaScript, I use this regex as a white list:
url.match(/^((https?|ftp):\/\/|\.{0,2}\/)/)
The only limitation is that you need to put ./ in front of files in the same directory, but I can live with that.
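For example (a usage sketch; anything that fails the whitelist would be rendered as plain text instead of a link):
function isSafeHref(url) {
  return /^((https?|ftp):\/\/|\.{0,2}\/)/.test(url);
}

isSafeHref('https://example.com/page'); // true
isSafeHref('./file-in-same-dir.html');  // true
isSafeHref('javascript:alert(1)');      // false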
Using regular expressions to prevent XSS vulnerabilities is complicated, and thus hard to maintain over time, and it can leave some vulnerabilities behind. URL validation with regular expressions is helpful in some scenarios, but it's better not to mix it with vulnerability checks.
The solution is probably to use a combination of an encoder like AntiXssEncoder.UrlEncode for encoding the query portion of the URL and UriBuilder for the rest:
public sealed class AntiXssUrlEncoder
{
    public string EncodeUri(Uri uri, bool isEncoded = false)
    {
        // Encode the query portion of the URL to prevent XSS attacks,
        // unless it is already encoded. UriBuilder takes care of the rest.
        var encodedQuery = isEncoded
            ? uri.Query.TrimStart('?')
            : AntiXssEncoder.UrlEncode(uri.Query.TrimStart('?'));
        var encodedUri = new UriBuilder
        {
            Scheme = uri.Scheme,
            Host = uri.Host,
            Path = uri.AbsolutePath,
            Query = encodedQuery.Trim(),
            Fragment = uri.Fragment
        };
        if (uri.Port != 80 && uri.Port != 443)
        {
            encodedUri.Port = uri.Port;
        }
        return encodedUri.ToString();
    }

    public static string Encode(string uri)
    {
        var baseUri = new Uri(uri);
        var antiXssUrlEncoder = new AntiXssUrlEncoder();
        return antiXssUrlEncoder.EncodeUri(baseUri);
    }
}
You may need to include whitelisting to exclude some characters from encoding; that can be helpful for particular sites.
HTML-encoding the page that renders the URL is another thing you may need to consider.
BTW, please note that encoding the URL may break web parameter tampering, so the encoded link may appear not to work as expected. Also, you need to be careful about double encoding.
P.S. AntiXssEncoder.UrlEncode would have been better named AntiXssEncoder.EncodeForUrl, to be more descriptive. Basically, it encodes a string for use in a URL; it does not encode a given URL and return a usable URL.
You could hex-encode the entire URL and send it to your server. That way the client would not understand the content at first glance. After reading the content, you could decode the content URL = ? and send it to the browser.
Allowing a URL and allowing JavaScript are 2 different things.