Potential vulnerability using setInterval in Firefox addon?

I've written a Firefox add-on for the first time and it was reviewed and accepted a few months ago. This add-on frequently calls a third-party API. Meanwhile it has been reviewed again, and now the way it calls setInterval is criticized:
setInterval called in potentially dangerous manner. In order to prevent vulnerabilities, the setTimeout and setInterval functions should be called only with function expressions as their first argument. Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation.
Here's some background about the »architecture« of my addon. It uses a global Object which is not much more than a namespace:
if ( 'undefined' == typeof myPlugin ) {
    var myPlugin = {
        // settings
        settings : {},
        intervalID : null,
        // called once on window.addEventListener( 'load' )
        init : function() {
            // load settings
            // load remote data from cache (file)
        },
        // get the data from the API
        getRemoteData : function() {
            // XMLHttpRequest to the API
            // retrieve data (application/json)
            // write it to a cache file
        }
    };
    // start
    window.addEventListener(
        'load',
        function load( event ) {
            window.removeEventListener( 'load', load, false ); // needed only once
            myPlugin.init();
        },
        false
    );
}
This may not be best practice, but I keep on learning. The interval itself is set inside the init() method, like so:
myPlugin.intervalID = window.setInterval(
    myPlugin.getRemoteData,
    myPlugin.settings.updateMinInterval * 1000 // milliseconds!
);
There's another place where the interval is set: an observer on the settings (preferences) clears the current interval and sets it again in exactly the way shown above whenever the updateMinInterval setting changes.
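For illustration, here's a sketch of what that observer method might look like (an assumption on my part, since the question doesn't show it; 'nsPref:changed' is the topic old-style Firefox preference observers receive):

observe : function( subject, topic, data ) {
    if ( 'nsPref:changed' == topic && 'updateMinInterval' == data ) {
        window.clearInterval( myPlugin.intervalID );
        myPlugin.intervalID = window.setInterval(
            myPlugin.getRemoteData,
            myPlugin.settings.updateMinInterval * 1000 // milliseconds!
        );
    }
}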
If I understand this correctly, a solution using »function expressions« should look like:
myPlugin.intervalID = window.setInterval(
    function() {
        myPlugin.getRemoteData();
    },
    myPlugin.settings.updateMinInterval * 1000 // milliseconds!
);
Am I right?
What is a possible scenario for »attacking« this code that I've overlooked so far?
Should setInterval and setTimeout basically be used differently in Firefox add-ons than in »normal« frontend JavaScript? After all, the documentation of setInterval shows exactly this pattern of passing declared functions in some of its examples.

Am I right?
Yes, although I imagine by now you've tried it and found it works.
As for why you are asked to change the code, it's because of the part of the warning message saying "Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation".
This means that unless you follow the recommended pattern for the first parameter, it is impossible to automatically determine the outcome of executing the setInterval call.
Since setInterval is susceptible to the same kinds of security risks as eval(), it is important to check that the call is safe, even more so in privileged code such as an add-on. This warning therefore serves as a red flag to the add-on reviewer, prompting them to carefully evaluate the safety of this line of code.
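To make the eval() comparison concrete, here is a sketch of the three variants the validator distinguishes (the interval value is just an example):

// Dangerous: a string first argument is evaluated like eval(), with the
// add-on's privileges, so injected content could end up being executed.
setInterval( 'myPlugin.getRemoteData()', 60000 );

// Deprecated per the warning: a variable referencing a function name;
// safe in practice, but not amenable to static source validation.
setInterval( myPlugin.getRemoteData, 60000 );

// Recommended: a function expression, statically verifiable.
setInterval( function() { myPlugin.getRemoteData(); }, 60000 );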
Your initial code should be accepted and cause no security issues, but the add-on reviewer will appreciate having one less red flag to consider.
Given that the ability to automatically determine the outcome of executing JavaScript is useful for performance optimisation as well as automatic security checks, I would wager that a function expression is also going to execute more quickly.

Related

range.address throws context related errors

We've been developing with the Excel JavaScript API for quite a few months now. We have been coming across context-related issues which got resolved for unknown reasons. We weren't able to replicate these issues and wondered how they got resolved. Recently, similar issues have started popping up again.
Error we consistently get:
property 'address' is not available. Before reading the property's
value, call the load method on the containing object and call
"context.sync()" on the associated request context.
We thought that, since we have multiple functions defined to modularise the code in the project, maybe the context differs somewhere among these functions and this has gone unnoticed. So we came up with a single-context solution implemented via the JavaScript module pattern.
var ContextManager = (function () {
    var xlContext; // single context for entire project/application.

    function loadContext() {
        xlContext = new Excel.RequestContext();
    }

    function sync(object) {
        return (object === undefined) ? xlContext.sync() : xlContext.sync(object);
    }

    function getWorksheetByName(name) {
        return xlContext.workbook.worksheets.getItem(name.toString());
    }

    // public
    return {
        loadContext: loadContext,
        sync: sync,
        getWorksheetByName: getWorksheetByName
    };
})();
NOTE: the above code is shortened. There are other methods added to ensure that the single context gets used throughout the application.
While implementing the single context this time round, though, we have been able to replicate the issue.
Office.initialize = function (reason) {
    $(document).ready(function () {
        ContextManager.loadContext();

        function loadRangeAddress(rng, index) {
            rng.load("address");
            ContextManager.sync().then(function () {
                console.log("Address: " + rng.address);
            }).catch(function (e) {
                console.log("Failed address for index: " + index);
            });
        }

        for (var i = 1; i <= 1000; i++) {
            var sheet = ContextManager.getWorksheetByName("Sheet1");
            loadRangeAddress(sheet.getRange("A" + i), i); // I expect to see A1 to A1000 addresses in console. Order doesn't matter.
        }
    });
};
In the above case, only "A1" gets printed as a range address to the console. I can't see any of the other addresses (A2 to A1000) being printed; only the catch block executes. Can anyone explain why this happens?
Although I've written a for loop above, that isn't my real use case. In the real use case, situations occur where one range object in function a needs to load a range address while, meanwhile, another function b also wants to load a range address. Both function a and function b work asynchronously on separate tasks, such as one creating a table object (the table needs an address) and the other pasting data to a sheet (there's a debug statement to see where the data was pasted).
This is something our team hasn't been able to figure out or find a solution for.
There is a lot packed into this code, but the issue you have is that you're calling sync a whole bunch of times without awaiting the previous sync.
There are several problems with this:
If you were using different contexts, you would actually see that there is a limit of ~50 simultaneous requests, after which you'll get errors.
In your case, you're running into a different (and almost opposite) problem. Given the async nature of the APIs, and the fact that you're not awaiting on the sync-s, your first sync request (which you'd think is for just A1) will actually contain all the load requests from the execution of the entire for loop. Now, once this first sync is dispatched, the action queue will be cleared. Which means that your second, third, etc. sync will see that there is no pending work, and will no-op, executing before the first sync ever came back with the values!
[This might be considered a bug, and I'll discuss with the team about fixing it. But it's still a very dangerous thing to not await the completion of a sync before moving on to the next batch of instructions that use the same context.]
The fix is to await the sync. This is far and away simplest to do in TypeScript 2.1 with its async/await feature; otherwise you need the async version of the for loop, which you can look up, but it's rather unintuitive (it requires creating an uber-promise that keeps chaining a bunch of .then-s).
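For reference, a sketch of that promise-chain approach (assuming loadRangeAddress returns its sync promise, as in the async version below):

var chain = Promise.resolve();
for (var i = 1; i <= 1000; i++) {
    (function (index) {
        chain = chain.then(function () {
            var sheet = ContextManager.getWorksheetByName("Sheet1");
            return loadRangeAddress(sheet.getRange("A" + index), index);
        });
    })(i); // IIFE captures the loop index for the closure
}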
So, your modified TypeScript-ified code would be
ContextManager.loadContext();

async function loadRangeAddress(rng, index) {
    rng.load("address");
    await ContextManager.sync().then(function () {
        console.log("Address: " + rng.address);
    }).catch(function (e) {
        OfficeHelpers.Utilities.log(e);
    });
}

// (this loop must itself run inside an async function for the await below to be legal)
for (var i = 1; i <= 1000; i++) {
    var sheet = ContextManager.getWorksheetByName("Sheet1");
    await loadRangeAddress(sheet.getRange("A" + i), i); // expect A1 to A1000 in the console; order doesn't matter
}
Note the async in front of the loadRangeAddress function, and the two await-s in front of ContextManager.sync() and loadRangeAddress.
Note that this code will also run quite slowly, as you're making an async roundtrip for each cell. That means you're not using batching, which is at the very core of the object model for the new APIs.
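For comparison, a batched sketch using the same ContextManager from the question: queue all of the load calls first, then pay for a single roundtrip:

var sheet = ContextManager.getWorksheetByName("Sheet1");
var ranges = [];
for (var i = 1; i <= 1000; i++) {
    var rng = sheet.getRange("A" + i);
    rng.load("address"); // queued, not yet executed
    ranges.push(rng);
}
ContextManager.sync().then(function () {
    // one roundtrip resolves all thousand addresses
    ranges.forEach(function (rng) {
        console.log("Address: " + rng.address);
    });
});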
For completeness sake, I should also note that creating a "raw" RequestContext instead of using Excel.run has some disadvantages. Excel.run does a number of useful things, the most important of which is automatic object tracking and un-tracking (not relevant here, since you're only reading back data; but would be relevant if you were loading and then wanting to write back into the object).
Finally, if I may recommend (full disclosure: I am the author of the book), you will probably find a good bit of useful info about Office.js in the e-book "Building Office Add-ins using Office.js", available at https://leanpub.com/buildingofficeaddins. In particular, it has a very detailed (10-page) section on the internal workings of the object model ("Section 5.5: Implementation details, for those who want to know how it really works"). It also offers advice on using TypeScript, has a general Promise/async-await primer, describes what .run does, and has a bunch more info about the OM. Also, though not available yet, it will soon offer information on how to resume using the same context (using a newer technique than what was originally described in How can a range be used across different Word.run contexts?). The book is a lean-published "evergreen" book, so once I write the topic in the coming weeks, an update will be available to all existing readers.
Hope this helps!

attaching jQuery on partially downloaded DOM [duplicate]

Essentially I want to have a script execute when the contents of a DIV change. Since the scripts are separate (content script in the Chrome extension & webpage script), I need a way simply observe changes in DOM state. I could set up polling but that seems sloppy.
For a long time, DOM3 mutation events were the best available solution, but they have been deprecated for performance reasons. DOM4 Mutation Observers are the replacement for deprecated DOM3 mutation events. They are currently implemented in modern browsers as MutationObserver (or as the vendor-prefixed WebKitMutationObserver in old versions of Chrome):
var MutationObserver = window.MutationObserver || window.WebKitMutationObserver;

var observer = new MutationObserver(function(mutations, observer) {
    // fired when a mutation occurs
    console.log(mutations, observer);
    // ...
});

// define what element should be observed by the observer
// and what types of mutations trigger the callback
observer.observe(document, {
    subtree: true,
    attributes: true
    //...
});
This example listens for DOM changes on document and its entire subtree, and it will fire on changes to element attributes as well as structural changes. The draft spec has a full list of valid mutation listener properties:
childList
Set to true if mutations to target's children are to be observed.
attributes
Set to true if mutations to target's attributes are to be observed.
characterData
Set to true if mutations to target's data are to be observed.
subtree
Set to true if mutations to not just target, but also target's descendants are to be observed.
attributeOldValue
Set to true if attributes is set to true and target's attribute value before the mutation needs to be recorded.
characterDataOldValue
Set to true if characterData is set to true and target's data before the mutation needs to be recorded.
attributeFilter
Set to a list of attribute local names (without namespace) if not all attribute mutations need to be observed.
(This list is current as of April 2014; you may check the specification for any changes.)
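As a small illustrative sketch, watching only class and style changes on a single element would look like this:

observer.observe(document.getElementById('someDiv'), {
    attributes: true,
    attributeOldValue: true,
    attributeFilter: ['class', 'style']
});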
Edit
This answer is now deprecated. See the answer by apsillers.
Since this is for a Chrome extension, you might as well use the standard DOM event - DOMSubtreeModified. See the support for this event across browsers. It has been supported in Chrome since 1.0.
$("#someDiv").bind("DOMSubtreeModified", function() {
    alert("tree changed");
});
See a working example here.
Many sites use AJAX/XHR/fetch to add, show, or modify content dynamically, and the window.history API instead of in-site navigation, so the current URL is changed programmatically. Such sites are called SPAs, short for Single Page Applications.
Usual JS methods of detecting page changes
MutationObserver (docs) to literally detect DOM changes.
Info/examples:
How to change the HTML content as it's loading on the page
Performance of MutationObserver to detect nodes in entire DOM.
Lightweight observer to react to a change only if URL also changed:
let lastUrl = location.href;
new MutationObserver(() => {
    const url = location.href;
    if (url !== lastUrl) {
        lastUrl = url;
        onUrlChange();
    }
}).observe(document, {subtree: true, childList: true});

function onUrlChange() {
    console.log('URL changed!', location.href);
}
Event listener for sites that signal content change by sending a DOM event:
pjax:end on document used by many pjax-based sites e.g. GitHub,
see How to run jQuery before and after a pjax load?
message on window used by e.g. Google search in Chrome browser,
see Chrome extension detect Google search refresh
yt-navigate-finish used by Youtube,
see How to detect page navigation on YouTube and modify its appearance seamlessly?
Periodic checking of DOM via setInterval:
Obviously this will work only in cases where you wait for a specific element, identified by its id/selector, to appear; it won't let you universally detect new dynamically added content unless you invent some kind of fingerprinting of the existing contents.
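A minimal polling sketch (the selector and the onElementReady callback are placeholders):

const pollTimer = setInterval(() => {
    const el = document.querySelector('#someDiv .expected-child');
    if (el) {
        clearInterval(pollTimer);
        onElementReady(el);
    }
}, 500);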
Cloaking the History API:
let _pushState = History.prototype.pushState;
History.prototype.pushState = function (state, title, url) {
    _pushState.call(this, state, title, url);
    console.log('URL changed', url);
};
Listening to hashchange, popstate events:
window.addEventListener('hashchange', e => {
    console.log('URL hash changed', e);
    doSomething();
});

window.addEventListener('popstate', e => {
    console.log('State changed', e);
    doSomething();
});
P.S. All these methods can be used in a WebExtension's content script, because the case we're looking at is one where the URL was changed via history.pushState or replaceState, so the page itself remains the same, with the same content script environment.
Another approach, depending on how you are changing the div:
If you are using jQuery to change a div's contents with its html() method, you can extend that method and call a registration function each time you put HTML into a div.
(function( $, oldHtmlMethod ){
    // Override the core html method in the jQuery object.
    $.fn.html = function(){
        // Execute the original HTML method using the
        // augmented arguments collection.
        var results = oldHtmlMethod.apply( this, arguments );
        com.invisibility.elements.findAndRegisterElements( this );
        return results;
    };
})( jQuery, jQuery.fn.html );
We just intercept calls to html(), call a registration function with this (which in that context refers to the target element getting new content), then pass the call on to the original jQuery html() function. Remember to return the results of the original html() method, because jQuery expects it for method chaining.
For more info on method overriding and extension, check out http://www.bennadel.com/blog/2009-Using-Self-Executing-Function-Arguments-To-Override-Core-jQuery-Methods.htm, which is where I cribbed the closure function. Also check out the plugins tutorial on jQuery's site.
In addition to the "raw" tools provided by MutationObserver API, there exist "convenience" libraries to work with DOM mutations.
Consider: MutationObserver represents each DOM change in terms of subtrees. So if you're, for instance, waiting for a certain element to be inserted, it may be deep inside the children of mutations[i].addedNodes[j].
Another problem is when your own code, in reaction to mutations, changes the DOM: you often want to filter such changes out.
A good convenience library that solves such problems is mutation-summary (disclaimer: I'm not the author, just a satisfied user), which enables you to specify queries of what you're interested in, and get exactly that.
Basic usage example from the docs:
var observer = new MutationSummary({
    callback: updateWidgets,
    queries: [{
        element: '[data-widget]'
    }]
});

function updateWidgets(summaries) {
    var widgetSummary = summaries[0];
    widgetSummary.added.forEach(buildNewWidget);
    widgetSummary.removed.forEach(cleanupExistingWidget);
}

Webworker-threads: is it OK to use "require" inside worker?

(Using Sails.js)
I am testing webworker-threads ( https://www.npmjs.com/package/webworker-threads ) for long running processes on Node and the following example looks good:
var Worker = require('webworker-threads').Worker;

var fibo = new Worker(function() {
    function fibo(n) {
        return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
    }
    this.onmessage = function (event) {
        try {
            postMessage(fibo(event.data));
        } catch (e) {
            console.log(e);
        }
    };
});

fibo.onmessage = function (event) {
    // my return callback
};

fibo.postMessage(40);
But as soon as I add any code to query MongoDB, it throws an exception:
(I'm not using the Sails model in the query, just to make sure the code could run on its own; the db has no password.)
var Worker = require('webworker-threads').Worker;

var fibo = new Worker(function() {
    function fibo(n) {
        return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
    }

    // MY DB TEST -- THIS WORKS FINE OUTSIDE THE WORKER
    function callDb(event) {
        var db = require('monk')('localhost/mydb');
        var users = db.get('users');
        users.find({ "firstName": "John" }, function (err, docs) {
            console.log("serviceSuccess");
            return fibo(event.data);
        });
    }

    this.onmessage = function (event) {
        try {
            postMessage(callDb(event.data)); // calling db function now
        } catch (e) {
            console.log(e);
        }
    };
});

fibo.onmessage = function (event) {
    // my return callback
};

fibo.postMessage(40);
Since the DB code works perfectly fine outside the Worker, I think it has something to do with the require. I've tried something that also works outside the Worker, like
var moment = require("moment");
var deadline = moment().add(30, "s");
And the code also throws an exception. Unfortunately, console.log only shows this for all types of errors:
{Object}
{/Object}
So, the questions are: is there any restriction or guideline for using require inside a Worker? What could I be doing wrong here?
UPDATE
It seems Threads will not allow external modules:
https://github.com/xk/node-threads-a-gogo/issues/22
TL;DR: I think that if you need to require, you should use a node's cluster or child process. If you want to offload some cpu busy work, you should use tagg and the load function to grab any helpers you need.
Upon reading this thread, I see that this question is similar to this one:
Load Nodejs Module into A Web Worker
To which audreyt, the webworker-threads author, answered:
author of webworker-threads here. Thank you for using the module!
There is a default native_fs_ object with the readFileSync you can use to read files.
Beyond that, I've mostly relied on onejs to compile all required modules in package.json into a single JS file for importScripts to use, just like one would do when deploying to a client-side web worker environment. (There are also many alternatives to onejs -- browserify, etc.)
Hope this helps!
So it seems importScripts is the way to go. But at this point, it might be too hacky for what I want to do, so probably KUE is a more mature solution.
I'm a collaborator on the node-webworker-threads project.
You can't require in node-webworker-threads
You are correct in your update: node-webworker-threads does not (currently) support requiring external modules.
It has limited support for some of the built-ins, including file system calls and a version of console.log. As you've found, the version of console.log implemented in node-webworker-threads is not identical to the built-in console.log in Node.js; it does not, for example, automatically make nice string representations of the components of an Object.
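As a workaround sketch for that logging limitation, you could serialize the error yourself before logging it (JSON.stringify is part of the JavaScript core, so it should be available inside the worker):

this.onmessage = function (event) {
    try {
        postMessage(callDb(event.data));
    } catch (e) {
        // build the string representation manually
        console.log(JSON.stringify({ message: e.message, stack: e.stack }));
    }
};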
In some cases you can use external modules, as outlined by audreyt in her response. Clearly this is not ideal, and I view the incomplete require as the primary "dealbreaker" of node-webworker-threads. I'm hoping to work on it this summer.
When to use node-webworker-threads
node-webworker-threads allows you to code against the WebWorker API and run the same code in the client (browser) and the server (Node.js). This is why you would use node-webworker-threads over node-threads-a-gogo.
node-webworker-threads is great if you want the most lightweight possible JavaScript-based workers, to do something CPU-bound. Examples: prime numbers, Fibonacci, a Monte Carlo simulation, offloading built-in but potentially-expensive operations like regular expression matching.
When not to use node-webworker-threads
node-webworker-threads emphasizes portability over convenience. For a Node.js-only solution, this means that node-webworker-threads is not the way to go.
If you're willing to compromise on full-stack portability, there are two ways to go: speed and convenience.
For speed, try a C++ add-on. Use NaN. I recommend Scott Frees's C++ and Node.js Integration book to learn how to do this; it'll save you a lot of time. You'll pay for it by needing to brush up on your C++ skills, and if you want to work with MongoDB then this probably isn't a good idea.
For convenience, use a Child Process-based worker pool like fork-pool. In this case, each worker is a full-fledged Node.js instance. You can then require to your heart's content. You'll pay for it in a larger application footprint and in higher communication costs compared to node-webworker-threads or a C++ add-on.
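For illustration, a minimal sketch of the Child Process route using Node's built-in child_process.fork (worker.js is a hypothetical file; the monk call mirrors the question's own code):

// worker.js -- a full Node.js instance, so require works freely
var db = require('monk')('localhost/mydb');
process.on('message', function (msg) {
    db.get('users').find({ firstName: msg.firstName }, function (err, docs) {
        process.send({ error: err && err.message, count: docs ? docs.length : 0 });
    });
});

// main.js
var fork = require('child_process').fork;
var worker = fork('./worker.js');
worker.on('message', function (result) {
    console.log(result); // the return callback
});
worker.send({ firstName: 'John' });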

Intern.io Single Page Application Functional Testing

I have a single page application that uses Dojo to navigate between pages.
I am writing some functional tests using intern and there are some niggly issues I am trying to weed out.
Specifically I am having trouble getting intern to behave with timeouts. None of the timeouts seem to have any effect for me. I am trying to set the initial load timeout using "setPageLoadTimeout(30000)" but this seems to get ignored. I also call "setImplicitWaitTimeout(10000)" but again this seems to have no effect.
The main problem I have is that it may take a couple of seconds in my test environment for the request to be sent and the response parsed and injected into the DOM. The only way I have been able to get around this is by explicitly calling "sleep(3000)", for example, but this can be a bit hit & miss, and sometimes the DOM elements are not ready by the time I query them (as mentioned, setImplicitWaitTimeout(10000) doesn't seem to have an effect for me).
Within the application I fire an event when the DOM has been updated, and I use dojo.subscribe to hook into this in the application. Is it possible to use dojo.subscribe within Intern to control the execution of my tests?
Here's a sample of my code. I should also mention that I use Dijit, so there is a slight delay when the response comes back and the widgets are being created (via data-dojo-type declarations)...
define([
    'intern!object',
    'intern/chai!assert',
    'require',
    'intern/node_modules/dojo/topic'
], function (registerSuite, assert, require, topic) {
    registerSuite({
        name: 'Flow1',

        // login to the application
        'Login': function (remote) {
            return remote
                .setPageLoadTimeout(30000)
                .setImplicitWaitTimeout(10000)
                .get(require.toUrl('https://localhost:8080/'))
                .elementById('username').clickElement().type('user').end()
                .elementById('password').clickElement().type('password').end()
                .elementByCssSelector('submit_button').clickElement().end();
        },

        // check the first page
        'Page1': function () {
            return this.remote
                .setPageLoadTimeout(300000)    // I've tried these calls in various places...
                .setImplicitWaitTimeout(10000) // I've tried these calls in various places...
                .title()
                .then(function (text) {
                    assert.strictEqual(text, 'Page Title');
                })
                .end()
                .active().type('test').end()
                .elementByCssSelector("[title='Click Here for Help']").clickElement().end()
                .elementById('next_button').clickElement().end()
                .elementByCssSelector("[title='First Name']").clear().type('test').end()
                .elementByCssSelector("[title='Gender']").clear().type('Female').end()
                .elementByCssSelector("[title='Date Of Birth']").type('1/1/1980').end()
                .elementById('next_button').clickElement().end();
        },

        // check the second page
        'Page2': function () {
            return this.remote
                .setImplicitWaitTimeout(10000)
                .sleep(2000) // need to sleep here to wait for request & response injection and DOM parsing etc...
                .source().then(function (source) {
                    assert.isTrue(source.indexOf('test') > -1, 'Should contain First Name: "test"');
                }).end();
            // more tests etc...
        }
    });
});
I'm importing the relevant Dojo module from the intern dojo node module but I'm unsure of how to use it.
Thanks
Your test is timing out because Intern tests have an explicit timeout, set to 30s, that is not accessible through their API. It can be changed by adding 'intern/lib/Test' to your define array and then overwriting the timeout on the Test prototype, e.g. Test.prototype.timeout = 60000;.
For example:
define([
    'intern!object',
    'intern/chai!assert',
    'require',
    'intern/node_modules/dojo/topic',
    'intern/lib/Test'
], function (registerSuite, assert, require, topic, Test) {
    Test.prototype.timeout = 60000;
    // ...
});
This should change the timeout to one minute instead of 30s, to prevent your test timing out.

Remove and restore Scope from digest cycles

Is there a way to remove a scope from the digest cycles? In other words, to suspend/resume a scope digest cycle?
In my case, I have all pages already loaded, but not all of them visible. So I'd like to suspend the ones that aren't visible to avoid useless processing. I don't want to use ng-view + $route, I don't want/need deep-linking.
I saw this thread and arrived at this fiddle. It probably does the job, but it's pretty invasive and not very framework-update-friendly.
Is there any other solution, like a $scope.suspend() and $scope.resume()? Or a less invasive one (from the framework's perspective)? I'm currently thinking about $destroy and $compile cycles.
I've run into the same problem and found an interesting solution that doesn't interfere (too much) with AngularJS. Add this to the scopes you want to disable:
var watchers;

scope.$on('suspend', function () {
    watchers = scope.$$watchers;
    scope.$$watchers = [];
});

scope.$on('resume', function () {
    scope.$$watchers = watchers;
    watchers = null;
});
Then, you can disable a scope and its children with: scope.$broadcast('suspend') and bring it back with scope.$broadcast('resume').
As the framework stands today, there are no methods to suspend / resume the digest on a scope. Having said this, there are several techniques one can use to limit the number of watches that are executed as part of a digest cycle.
First of all, if parts of a screen are hidden anyway, you could use the ng-switch family of directives, thus removing invisible parts completely from the DOM.
Secondly, if a digest cycle is triggered from your directive via $apply and you want to limit watch re-evaluation to child scopes, you could call $digest instead of $apply.
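For illustration, a sketch of calling $digest from a directive's link function (the clickCounter directive and its counter are hypothetical):

app.directive('clickCounter', function () {
    return {
        link: function (scope, element) {
            scope.counter = 0;
            element.on('click', function () {
                scope.counter++;
                // re-evaluates watches on this scope and its children only,
                // rather than the whole tree from $rootScope as $apply would
                scope.$digest();
            });
        }
    };
});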
Then, yes, one could destroy and re-create scopes as described in the discussion you've linked to. But if you are already hiding parts of the DOM, it sounds like ng-switch might be a better option.
