Debugging injected content scripts - google-chrome-extension

I have a lot of code that I only want to run when the user clicks the extension icon, and I'd rather not have it run for every tab that is opened, so using the content_scripts entry in the manifest file isn't the best option. However, when I inject scripts programmatically, they don't show up in the list of sources in the developer tools. I'm fine developing with content scripts for now, but at some point I'd like to avoid them.
I log all over the place and do message passing as well, so I know these scripts are successfully getting injected and running; they simply fail to show up in the file list.
In code, the following works just dandy (in the manifest):
{
  // ...
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "css": ["style/content.css"],
    "js": [
      "closure/goog/base.js",
      "closure/goog/deps.js",
      "util.js",
      "AddressRE.js",
      // ...
      "makeRequests.js"
    ]
  }]
}
Performing the following after an onClick does not:
function executeNextScript(tabId, files, callback) {
    chrome.tabs.executeScript(tabId, {
        file: files.pop()
    }, function () {
        if (files.length)
            executeNextScript(tabId, files, callback);
        else
            callback();
    });
}

function executeScripts(tabId, callback) {
    var files = [
        "closure/goog/base.js",
        "closure/goog/deps.js",
        "util.js",
        // ...
        "makeRequests.js"
    ];
    executeNextScript(tabId, files.reverse(), callback);
}
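For completeness, this chain is kicked off from the click on the extension icon roughly like this (a sketch; the chrome.browserAction listener is the Manifest V2 API, and the message sent at the end is purely illustrative):

// Sketch: trigger the injection chain from the browser-action click.
chrome.browserAction.onClicked.addListener(function (tab) {
    executeScripts(tab.id, function () {
        // All files are injected at this point; illustrative follow-up message.
        chrome.tabs.sendMessage(tab.id, { action: "start" });
    });
});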

You can use the debugger JavaScript keyword to set breakpoints in your code.
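For example (purely illustrative; the function name is a placeholder):

function handleResponse(data) {  // placeholder function in an injected script
    debugger;                    // DevTools pauses here, even in injected code
    console.log(data);
}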

I add //# sourceURL=myscript.js to any script that is injected; that adds it to the list of sources once it has been injected.
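For example, at the bottom of any injected file (a sketch; the function body is a placeholder, and the name after sourceURL is just the label DevTools will show in the Sources panel):

// util.js, injected via chrome.tabs.executeScript({ file: "util.js" })
function formatAddress(addr) {  // placeholder content
    return addr.trim();
}
//# sourceURL=util.js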

Related

youtube video in chrome extension content script

I am trying to insert YouTube videos with the iframe API into an existing page with the help of a Chrome extension content script, but I cannot get onYouTubeIframeAPIReady to trigger.
manifest.json
"content_scripts": [
{
"matches": ["http://*/*", "https://*/*", "file://*/*", "*://*/*"],
"js": ["content-script.js"]
}
],
content-script.js
const appEl = document.createElement('div');
appEl.id = 'my-app';
appEl.innerHTML = `<div id="youtube-iframe"></div>`;
const bodyEl = document.querySelector('body');
bodyEl.insertBefore(appEl, bodyEl.firstChild);

var tag = document.createElement('script');
tag.src = "https://www.youtube.com/iframe_api";
document.querySelector('body').appendChild(tag);

window.onYouTubeIframeAPIReady = () => {
    this.player = new YT.Player('youtube-iframe', {
        height: '390',
        width: '640',
        videoId: 'M7lc1UVf-VE',
        events: {
            'onReady': onPlayerReady,
        }
    });
}

function onPlayerReady(event) {
    console.log('player ready');
    event.target.playVideo();
};
In a Chrome app I was able to make it work with a webview, but that does not seem to be available in extensions.
I solved the problem, here is the solution.
I tried all variants of the code injection method, but the problem was that the YouTube API script defines an anonymous function that expects the window as an input argument. So even after following the advice not to load external scripts (the Chrome Web Store might remove your extension) and including a local copy of the file by different means, I was not able to get onYouTubeIframeAPIReady triggered by the YouTube API script. Only after pasting the script into the same file where I defined onYouTubeIframeAPIReady was I able to see the video. However, to organize the code better so it works with ES6 imports (via Webpack), I did the following steps.
Download the YouTube API script (https://www.youtube.com/iframe_api; see https://developers.google.com/youtube/iframe_api_reference) to a local file.
Adapt the script to work as a module by changing it from
(function(){var g,k=this;function l(a){a=a.split(".");
...
Ub=l("onYouTubePlayerAPIReady");Ub&&Ub();})();
to
export default function(){var g,k=window;function l(a){a=a.split(".")
...
Ub=l("onYouTubePlayerAPIReady");Ub&&Ub();}
This changes the anonymous function call into a function that is exported in ES6 module style, and the this object in the anonymous function is replaced with window. I saved it as youtube-iframe-api.js.
Now I was able to use the YouTube API in another module with the following code
import youtubeApi from './youtube-iframe-api';

function onPlayerReady(event) {
    event.target.playVideo();
}

window.onYouTubeIframeAPIReady = () => {
    this.player = new YT.Player('youtube-iframe', {
        height: '100',
        width: '100',
        videoId: 'M7lc1UVf-VE',
        events: {
            'onReady': onPlayerReady,
        }
    });
}

youtubeApi();

intern custom reporter dependency is loaded as a different module instance

I thought I'd post this as I stumbled around for a while before noticing what's going on. I have a test suite that uses CouchDB as its logging / recording database. I discovered you can write custom reporters in intern, so I thought I could move a lot of my manual recordSuccess()/recordFailure() calls out of my test script and into a custom reporter responding to test pass and fail events.
My main test script still wants to do a little couchdb interaction, so I factored out the couchdb connection and reporting functions into a module, then tried to use that module from both the main test script, and the custom reporter module.
I find that the couchdb helper module is instantiated twice. This goes against the expectation that AMD/RequireJS require() will only execute a module once, and cache the result for use the next time the module is required. If I put a 'debugger' statement in its main body of code, it is clearly executed twice. The upshot, for me, is that the couchdb reference is undefined when called from the reporter.
Directory structure:
runTest.js # helper script to run intern test from this dir
src/MainTest.js
src/CouchHelper.js
src/CouchDBReporter.js
src/intern.js # intern config
runTest.js
node node_modules/.bin/intern-client config=src/intern suites=mypackage/WINTest --envConfig=src/test/dev.json
i.e. MainTest.js:
define([ 'CouchHelper' ], function (CouchHelper) {
    // .. test startup ..
    CouchHelper.connect(username, password, etc);
CouchDBReporter.js:
define([ 'CouchHelper' ], function (CouchHelper) {
    return {
        '/test/fail': function (test) {
            // Presume the couchdb is connected at this point
            CouchHelper.recordFailure(test);
        }
    };
});
intern.js:
... blah blah ..
loader: {
    // Packages that should be registered with the loader in each testing environment
    packages: [
        'node',
        'nedb',
        'nodemailer',
        { name: 'mypackage', location: 'src' }
    ]
},
reporters: [ 'console', 'src/CouchDBReporter' ]
CouchHelper.js:
define([
    'intern/dojo/node!node-couchdb'
], function (Couchdb) {
    debugger; // this is hit twice

    var instance = 0;

    function CouchHelper() {
        this.couchdb = undefined;
        this.instance = instance++;
        console.log('Created instance ' + this.instance);
    }

    CouchHelper.prototype = {
        connect: function () { this.couchdb = Couchdb.connect(blah); },
        recordFailure: function (test) { this.couchdb.insert(blah); }
    };

    return new CouchHelper();
});
On startup, the console logs:
Created instance 0
Created instance 0
When the reporter calls recordFailure, it calls into a different instance of CouchHelper than the one MainTest.js called connect() on, so this.couchdb is undefined and the script crashes. I can call recordSuccess/recordFailure from within MainTest.js just fine, and this.couchdb is valid in CouchHelper, but from the CouchDBReporter the CouchHelper instance is clearly different.
Is this behaviour expected, and if so, what's the recommended way to share data and resources between the main test code and code in a custom reporter? I see that in 3.0 the reporters config can take an object, which might help mitigate this problem, but it feels like one would have to instantiate the reporter programmatically rather than define it in config.
Nick
As suggested by Colin, the path to the answer lay in my loader map configuration. My intern.js file, referenced as the config on the command line, has a loader section where one can define the mappings of paths to AMD modules (see https://theintern.github.io/intern/#option-loader). Typically I just define a list of package names; for example, I know my test requires nedb, nodemailer, and my own src package:
loader: {
    packages: [ 'node', 'nedb', 'nodemailer', 'src' ]
}
For some reason, I had defined my src package as being available by the name mypackage:
loader: {
    packages: [ 'node', 'nedb', 'nodemailer',
        { name: 'mypackage', location: 'src' }
    ]
}
I had no good reason to do this. I then specified that my custom reporter be loaded by intern using the 'src' package name:
intern.js:
reporters: [ 'console', 'src/CouchDBReporter' ]
And, here's the tricky bit, I referenced my helper module, CouchHelper, in two different ways, but both times by using a relative module path ./CouchHelper:
MainTest.js:
require([
    './CouchHelper',
    ...
], ...
CouchDBReporter.js:
require([
    './CouchHelper',
    ...
], ...
And on the command line, you guessed it, I specified the test to be run as mypackage/MainTest.js. This conflicts with my use of src/CouchDBReporter in intern.js's reporters section.
The result was that mypackage/MainTest.js required ./CouchHelper, which resolved as mypackage/CouchHelper, while src/CouchDBReporter required ./CouchHelper, which resolved as src/CouchHelper. This loaded the CouchHelper module code twice, defeating the usual guarantee with an AMD-style loader that a module is only ever loaded once.
It has certainly been a good lesson in AMD module paths, and one implication of using relative paths.
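In concrete terms, one way to keep both references resolving to the same module id is to register the package under a single name and use that name everywhere; a sketch (not the exact files from my project):

// intern.js -- register the source tree under one package name only
loader: {
    packages: [ 'node', 'nedb', 'nodemailer', 'src' ]
},
reporters: [ 'console', 'src/CouchDBReporter' ]

// MainTest.js and CouchDBReporter.js -- './CouchHelper' now resolves to
// src/CouchHelper from both, because both modules live under the same package
define([ './CouchHelper' ], function (CouchHelper) { /* ... */ });

// command line -- run the suite under the same package name
// node node_modules/.bin/intern-client config=src/intern suites=src/MainTest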

Can Blanket.js work with Jasmine tests if the tests themselves are loaded with RequireJS?

We've been using Jasmine and RequireJS successfully together for unit testing, and are now looking to add code coverage, and I've been investigating Blanket.js for that purpose. I know that it nominally supports Jasmine and RequireJS, and I'm able to successfully use the "jasmine-requirejs" runner on GitHub, but this runner is using a slightly different approach than our model -- namely, it loads the test specs using a script tag in runner.html, whereas our approach has been to load the specs through RequireJS, like the following (which is the callback for a requirejs call in our runner):
var jasmineEnv = jasmine.getEnv();
jasmineEnv.updateInterval = 1000;

var htmlReporter = new jasmine.TrivialReporter();
var jUnitReporter = new jasmine.JUnitXmlReporter('../JasmineTests/');
jasmineEnv.addReporter(htmlReporter);
jasmineEnv.addReporter(jUnitReporter);

jasmineEnv.specFilter = function (spec) {
    return htmlReporter.specFilter(spec);
};

var specs = [];
specs.push('spec/models/MyModel');
specs.push('spec/views/MyModelView');

$(function () {
    require(specs, function () {
        jasmineEnv.execute();
    });
});
This approach works fine for plain unit testing, if I don't have blanket or jasmine-blanket as dependencies of the function above. If I add them (with require.config paths and shim), I can verify that they are successfully fetched, but all that appears to happen is that I get jasmine-blanket's override of jasmine.getEnv().execute, which simply prints "waiting for blanket..." to the console. Nothing triggers the tests themselves to run anymore.
I do know that in our approach there's no way to provide the usual data-cover attributes, since RequireJS is doing the script loading rather than script tags, but I would have expected in this case that Blanket would at least calculate coverage for everything, not nothing. Is there a non-attribute-based way to specify the coverage pattern, and is there something else I need to do to trigger the actual test execution once jasmine-blanket is in the mix? Can Blanket be made to work with RequireJS loading the test specs?
I have gotten this working by requiring blanket-jasmine and then setting the options:
require.config({
    paths: {
        'jasmine': '...',
        'jasmine-html': '...',
        'blanket-jasmine': '...',
    },
    shim: {
        'jasmine': {
            exports: 'jasmine'
        },
        'jasmine-html': {
            exports: 'jasmine',
            deps: ['jasmine']
        },
        'blanket-jasmine': {
            exports: 'blanket',
            deps: ['jasmine']
        }
    }
});

require([
    'blanket-jasmine',
    'jasmine-html',
], function (blanket, jasmine) {
    blanket.options('filter', '...');        // data-cover-only
    blanket.options('branchTracking', true); // one of the data-cover-flags

    require(['myspec'], function () {
        var jasmineEnv = jasmine.getEnv();
        jasmineEnv.updateInterval = 250;

        var htmlReporter = new jasmine.HtmlReporter();
        jasmineEnv.addReporter(htmlReporter);
        jasmineEnv.specFilter = function (spec) {
            return htmlReporter.specFilter(spec);
        };

        jasmineEnv.addReporter(new jasmine.BlanketReporter());
        jasmineEnv.currentRunner().execute();
    });
});
The key lines are the addition of the BlanketReporter and the currentRunner().execute() call. The Blanket jasmine adapter overrides jasmine.execute with a no-op that just logs a line, because it needs to halt execution until it is ready to begin, after it has instrumented the code.
Typically the BlanketReporter setup and the currentRunner execute call would be done by the Blanket jasmine adapter, but if you load blanket-jasmine itself via require, the event that starts the Blanket test runner never fires, because it subscribes to the window.load event (which has already fired by the time blanket-jasmine is loaded). Therefore we need to add the reporter and execute the currentRunner ourselves, as the adapter would normally do.
This should probably be raised as a bug, but for now this workaround works well.

RequireJS plugin, load files on demand

I have RequireJS implemented fine, and a Grunt-based build process that optimizes all the app's JS files into one file via r.js, which is also working fine. All my app files are concatenated into one big JS file for efficient production deployment.
Now I have the following requirement:
I need to write a plugin for RequireJS that will not include the file in the optimized bundle during the build process, but will load it on demand at runtime.
Meaning in my code I'll have:
var myObj = require("myplugIn!jsFile");
So in the end, when this line runs, there are two cases:
during the build process, the file is not included in the optimized file
when the application is running, the file is requested on demand
I wrote the following plugin, but it is not working:
define(function () {
    "use strict";
    return {
        load : function (name, req, onload, config) {
            // we get here when the application is running, not in the build process
            if (!config.isBuild) {
                req([name], function () {
                    onload(arguments[0]);
                });
            }
        }
    };
});
What am I missing here?
In your build configuration you can exclude files that you don't want to bundle. They will still be loaded on demand when needed. You may also do something like this:
define(function () {
    // module code...
    if (condition) {
        require(['mymodule'], function () {
            // execute when mymodule has loaded.
        });
    }
});
This way mymodule will be loaded only if the condition is met, and only once; if you use the same module dependency elsewhere, it will get the already-loaded module.
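If you prefer to keep the require call unconditional and handle the split purely in the build, a minimal r.js build-config sketch (file and module names are illustrative) would exclude the module so it is still fetched on demand at runtime:

// build.js -- r.js optimizer options (a sketch, not your actual Grunt config)
({
    baseUrl: "src",
    name: "main",
    out: "dist/main.js",
    // "mymodule" and its unique dependencies stay out of the bundle;
    // the nested require above fetches it over the network when needed
    exclude: ["mymodule"]
})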
It was simpler than I thought. In case it helps someone, here is the solution: I created a plugin that returns nothing during the build process and returns the required file at runtime.
define(function () {
    "use strict";
    return {
        load : function (name, req, onload, config) {
            if (config.isBuild) {
                onload(null);
            } else {
                req([name], function () {
                    onload(arguments[0]);
                });
            }
        }
    };
});
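Usage is the same as in the question: at runtime the plugin fetches and returns the module, and during the r.js build onload(null) keeps it out of the bundle. A hedged usage sketch (the module name and the init() call are illustrative):

require(["myplugIn!jsFile"], function (jsFile) {
    // In the browser, jsFile is the lazily loaded module;
    // during the r.js build this dependency resolves to null and is not inlined.
    jsFile.init(); // illustrative call
});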

Chrome plugin running a function but not found

I want my plugin to inject code so that when I click the button, it runs a function in the current tab. However, it's giving me a "function not found" error. Is there some way to do this?
This is my popup.html:
<script>
  function start() {
    chrome.tabs.executeScript(null, {
      code: "func_in_body()",
      allFrames: true
    });
  }
  start();
</script>
and even though the function is in the page, it gives me an error
The error is:
Uncaught ReferenceError: func_in_body is not defined
(anonymous function)
even though one of the buttons' onclick handlers uses it. I'm not sure whether there's a scope issue or not.
JavaScript you inject into tabs is executed in an isolated environment and does not have access to the page's JavaScript. You can read more about it in the documentation.
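If you really do need to call a function defined by the page itself, the usual workaround is to have the injected script append a <script> element to the page, so the call runs in the page's own context where func_in_body() exists. A hedged sketch (this assumes the page's content security policy allows inline scripts):

// content_script.js -- runs in the isolated world, but the <script> element
// it inserts executes in the page's context
var el = document.createElement('script');
el.textContent = 'func_in_body();';
(document.head || document.documentElement).appendChild(el);
el.parentNode.removeChild(el); // the code has already run; the tag can go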
You don't need a popup to execute a function or a script on the current tab. What I did was make a background page with:
<html>
  <head>
    <script>
      var iconName = "icon.png";
      chrome.browserAction.setIcon({path: iconName});

      function onClicked() {
        chrome.tabs.executeScript(null, {file: "content_script.js"});
      }
      chrome.browserAction.onClicked.addListener(onClicked);
    </script>
  </head>
</html>
The background page has to be defined in manifest.json:
...
"background_page": "background.html",
...
The last thing is to create content_script.js (or whatever you called it) and put your code there.
Edit:
Don't forget to add the permissions so that your script can be executed on every site:
...
"permissions": [
  "tabs",
  "http://*/*",
  "https://*/*"
],
...
