Calling a function created inside a bookmarklet - scope

I'm trying to write a bookmarklet which adds a JSONP call into a page like this:
javascript:(function(){
  var myFunction = function(data){ alert('my function is firing with arg ' + data); };
  var j = 'http://localhost/jsonscript.js';
  var s = document.createElement('script');
  s.src = j;
  document.getElementsByTagName('head')[0].appendChild(s);
})();
where the script src appended into the page contains
myFunction('foo');
But there's an error when I click the bookmarklet -- myFunction is not defined. How do I "export" that function outside the scope of my bookmarklet so that when called from the appended script tag it works?
Edit: I figured out that I can just pack the script element's innerHTML with raw JavaScript. This works but it's ugly. I would still like to figure out a better way.

Define the function on the window object:
window.myFunction = ...
With JSONP requests, you'll usually want some kind of counter to increment, e.g.:
var counter = 1;
var myFuncName = "myFunction" + counter;
var j = 'http://localhost/jsonscript.js?callback=' + myFuncName;
window[myFuncName] = function (data) {...};
// remove function after timeout expires
setTimeout(function () { delete window[myFuncName]; }, 5000);
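Putting it together, a minimal sketch of the whole bookmarklet (assuming the endpoint accepts a callback query parameter, as in the URL above):
javascript:(function(){
  var counter = 1;
  var myFuncName = 'myFunction' + counter;
  // attach the callback to window so the injected script can see it
  window[myFuncName] = function (data) {
    alert('my function is firing with arg ' + data);
  };
  var s = document.createElement('script');
  s.src = 'http://localhost/jsonscript.js?callback=' + myFuncName;
  document.getElementsByTagName('head')[0].appendChild(s);
  // remove the function after the timeout expires
  setTimeout(function () { delete window[myFuncName]; }, 5000);
})();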

Related

How to get element from multiple URLs appending each one?

I have a website that has a main URL containing several links. I want to get the first <p> element from each link on that main page.
I have the following code, which works fine to get the desired links from the main page and store them in the urls array. But my issue is that I don't know how to make a loop that loads each url from the urls array and prints the first <p> in each iteration, or appends them to a variable and prints them all at the end.
How can I do this? Thanks.
var request = require('request');
var cheerio = require('cheerio');
var main_url = 'http://www.someurl.com';
request(main_url, function(err, resp, body){
  var $ = cheerio.load(body);
  var links = $('a'); // get all hyperlinks from the main URL
  var urls = [];
  // With this part I get the links (URLs) that I want to scrape.
  $(links).each(function(i, link){
    var lnk = 'http://www.someurl.com/files/' + $(link).attr('href');
    urls.push(lnk);
  });
  // In this part I don't know how to make a loop to load each url within the urls array and get the first <p>
  for (var i = 0; i < urls.length; i++) {
    var p = $("p:first"); // first <p> element
    console.log(p.html());
  }
});
If you can successfully get the URLs from the main page, you already know almost everything needed to do that, so I suppose you have issues with the way request works, and in particular with its callback-based workflow.
My suggestion is to drop request, since it's deprecated. You can use something like got, which is Promise-based, so you can use the newer async/await features that come with it (which usually means an easier workflow). Note that you need at least Node.js 8 for that, though!
Your loop would look like this:
for (let i = 0; i < urls.length; i++) {
  const source = await got(urls[i]);
  // Do your cheerio determination
  const $ = cheerio.load(source.body);
  console.log($('p').first().html());
}
Mind you, your function signature needs to be adjusted: await is only valid inside an async function. In your case you didn't specify a function at all, so the code runs in the module's top-level scope, which means you can't use await there. So write a function for that:
async function pullAllUrls() {
  const mainSource = await got(main_url);
  ...
}
If you don't want to use async/await, you could work with promise chains, but that's rather cumbersome in my opinion. In that case, either stick with plain promises or use a workflow library like async to help you manage the URL fetching.
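For example, a minimal plain-promise sketch of the same loop (assuming the got and cheerio modules as above, and an already-populated urls array):
function printFirstParagraphs(urls) {
  // fetch all pages in parallel and extract the first <p> of each
  return Promise.all(urls.map(function (url) {
    return got(url).then(function (response) {
      const $ = cheerio.load(response.body);
      return $('p').first().html();
    });
  })).then(function (paragraphs) {
    paragraphs.forEach(function (p) { console.log(p); });
  });
}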
A real example with async/await:
In a real life example, I'd create a function to fetch the source of the page I'd like to fetch, like so (don't forget to add got to your script/package.json):
async function getSourceFromUrl(thatUrl) {
  const response = await got(thatUrl);
  return response.body;
}
Then you have a workflow logic to get all those links in the other page. I implemented it like this:
async function grabLinksFromUrl(thatUrl) {
  const mainSource = await getSourceFromUrl(thatUrl);
  const $ = cheerio.load(mainSource);
  const hrefs = [];
  $('ul.menu__main-list').each((i, content) => {
    $('li a', content).each((idx, inner) => {
      const wantedUrl = $(inner).attr('href');
      hrefs.push(wantedUrl);
    });
  });
  return hrefs;
}
I decided that I'd like to get the links in the <nav> element, which are usually wrapped inside a <ul> with <li> elements, so we just take those.
Then you need a workflow to work with those links. This is where the for loop is. I decided that I wanted the title of each page.
async function mainFlow() {
  const urls = await grabLinksFromUrl('https://netzpolitik.org/');
  for (const url of urls) {
    const source = await getSourceFromUrl(url);
    const $ = cheerio.load(source);
    // netzpolitik.org has two <title> elements in its <head>
    const title = $('head > title').first().text();
    console.log(`${title} (${url}) has source of ${source.length} size`);
    // TODO: More work in here
  }
}
And finally, you need to call that workflow function:
mainFlow();
The result you see on your screen should look like this:
Dossiers & Recherchen (https://netzpolitik.org/dossiers-recherchen/) has source of 413853 size
Der Netzpolitik-Podcast (https://netzpolitik.org/podcast/) has source of 333354 size
14 Tage (https://netzpolitik.org/14-tage/) has source of 402312 size
Official Netzpolitik Shop (https://netzpolitik.merchcowboy.com/) has source of 47825 size
Über uns (https://netzpolitik.org/ueber-uns/#transparenz) has source of 308068 size
Über uns (https://netzpolitik.org/ueber-uns) has source of 308068 size
netzpolitik.org-Newsletter (https://netzpolitik.org/newsletter) has source of 291133 size
netzwerk (https://netzpolitik.org/netzwerk/?via=nav) has source of 299694 size
Spenden für netzpolitik.org (https://netzpolitik.org/spenden/?via=nav) has source of 296190 size

Why doesn't Node.js provide a [mother] function to call any function asynchronously with a supplied callback?

Given that Node.js boasts an asynchronous, event-driven model, I was expecting that I should be able to write any Node.js function, e.g. something as simple as going through a loop, like IamLooper() below, which might or might not involve file I/O, and then pass that looping function to a mother Node.js function, e.g. Invoke(), to which I also pass another callback function, e.g. happyend() below.
My expectation was that after IamLooper finished, happyend() would be invoked by the Node.js-supplied function.
For example:
var gdata = [];

function IamLooper() {
  var pi = [];
  for (var ii = 0; ii < 4; ii++) {
    pi[ii] = 13 * ii;
    gdata.push(ii);
  }
  console.log("looper done - tell the callback");
}

function happyend() { console.log("looper says done"); }
I want to invoke IamLooper() and supply happyend at the time of invocation.
That is, I am looking for a ready-made Node function, e.g. Invoke, which can be called like this:
Invoke(IamLooper, happyend);
if (gdata.length > 0) { console.log("looping has started"); }
In essence, Invoke should do the same for any two functions I supply to it, so that we have a working template of a callback execution strategy.
Also, since Invoke executes asynchronously, my program should progress beyond Invoke before it finishes.
Is my expectation misguided? Can anyone give me some guidance here?
If you are looking for a preexisting way of easily doing callbacks in node, you should use event emitters (https://nodejs.org/api/events.html):
var EventEmitter = require('events').EventEmitter;
var eventExample = new EventEmitter();

// You can create event listeners:
eventExample.on('anEvent', function(someData){
  // Do something with someData
});

// To trigger an event listener you must emit:
eventExample.emit('anEvent', someData);
With your code, it'd look something like this:
var EventEmitter = require('events').EventEmitter;
var looper = new EventEmitter();

looper.on('invoke', function(data){
  var callFunction = data.callFunction;
  var finishFunction = data.finishFunction;
  var callParameters = data.callParameters;
  var finishParameters = data.finishParameters;
  if (callParameters == null) {
    callFunction({callbackParameters: finishParameters, callbackFunction: finishFunction});
  }
  else {
    callFunction(callParameters, {callbackParameters: finishParameters, callbackFunction: finishFunction});
  }
});

looper.on('finish', function(data){
  var finishFunction = data.callbackFunction;
  var parameters = data.callbackParameters;
  if (parameters == null) {
    finishFunction();
  }
  else {
    finishFunction(parameters);
  }
});

var gdata = [];

function IamLooper(g, callback){
  var pi = [];
  for (var ii = 0; ii < 4; ii++) {
    pi[ii] = 13 * ii;
    g.push(ii);
  }
  looper.emit('finish', callback);
}

function happyend() { console.log("looper says done"); }
And then call it like:
looper.emit('invoke', {callFunction: IamLooper, finishFunction: happyend, callParameters: gdata, finishParameters: null});
You can also always do normal callbacks:
var gdata = [];

function IamLooper(g, callback){
  var pi = [];
  for (var ii = 0; ii < 4; ii++) {
    pi[ii] = 13 * ii;
    g.push(ii);
  }
  callback();
}

IamLooper(gdata, function(){ console.log("looper says done"); });
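For what it's worth, if you really want a generic Invoke that runs a worker asynchronously and then fires a completion callback, here is a minimal sketch using Node's built-in setImmediate (Invoke is not a built-in, just the question's name; this assumes the question's original zero-argument IamLooper that pushes to the global gdata):
function Invoke(worker, done) {
  // schedule the worker on a later turn of the event loop,
  // then fire the completion callback
  setImmediate(function () {
    worker();
    done();
  });
}

Invoke(IamLooper, happyend);
// this line runs first; Invoke has only scheduled the work so far
console.log("gdata is still empty here: " + gdata.length);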

Zombie.js check dynamic updates

I am trying to scrape content from a web page that is continuously changing. I have been able to use PhantomJS to achieve this, but wanted a lighter-weight solution. The following code gets the correct value the first time it prints to the console, but on subsequent iterations the same value is printed. Any ideas?
var Browser = require("zombie");
var assert = require("assert");
// Load the page from localhost
browser = new Browser()
browser.visit("http://www.timeanddate.com/worldclock/usa/los-angeles", function () {
setInterval(function(){
console.log(browser.text('#ct'));
},10000);
});
Note the example above is purely an example. I know this would be the most inefficient way to get the time in Los Angeles.
Once you call browser.visit(), the browser stores the response, but unless you call it multiple times, the response won't change. See it for yourself:
browser.visit("http://www.timeanddate.com/worldclock/usa/los-angeles", function () {
console.log(browser.html()); // will print the HTML to stdout
});
So what you probably want is to call browser.visit() more than once, maybe inside setInterval() (although there may be more robust solutions out there).
I readapted your code:
var Browser = require("zombie");
var assert = require("assert");
var browser = new Browser();
setInterval(function () {
browser.visit("http://www.timeanddate.com/worldclock/usa/los-angeles", function () {
console.log(browser.text('#ct'));
});
}, 10000);

Closed over value not picked up in Mongoose Map function

I am trying to create a dynamic map function, i.e. one that uses an arbitrary field to aggregate on. I thought I would be able to use a closure for this, but it does not work: I get an error stating that blah is not defined.
My test code:
o.map = (function(){
  var blah = 'skill';
  var mapIt = function() {
    for (var idx = 0; idx < this[blah].length; idx++) {
      var key = this.skill[idx];
      var val = 1;
      emit(key, val);
    }
  };
  return mapIt;
})();
Regards,
Sean
So the map function is actually getting sent over the wire in source-code form (via toString()) to mongodb for execution inside mongodb itself (not node). Thus the closure doesn't work. This is what the scope option is for: any data you need to supply as context/arguments/scope to the map/reduce job needs to be set in the scope object.
Looks like you have to set scope manually:
o.scope = { blah: 'skill' };
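Putting it together, a minimal sketch of how the map function and scope might look (assuming a Mongoose Model.mapReduce call; the 'skill' field name comes from the question):
var o = {};
o.scope = { blah: 'skill' };
o.map = function () {
  // blah is injected by MongoDB from o.scope, not captured by a JS closure
  for (var idx = 0; idx < this[blah].length; idx++) {
    emit(this[blah][idx], 1);
  }
};
o.reduce = function (key, values) {
  // sum the emitted counts
  var total = 0;
  for (var i = 0; i < values.length; i++) total += values[i];
  return total;
};
// MyModel.mapReduce(o, function (err, results) { ... });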

Chrome extension only opens last assigned url

I have a chrome extension browser action that I want to have list a series of links, and open any selected link in the current tab. So far what I have is this, using jquery:
var url = urlForThisLink;
var li = $('<li/>');
var ahref = $('<a href="#">' + title + '</a>');
ahref.click(function(){
  chrome.tabs.getSelected(null, function (tab) {
    chrome.tabs.update(tab.id, {url: url});
  });
});
li.append(ahref);
It partially works. It does navigate the current tab, but will only navigate to whichever link was last created in this manner. How can I do this for an iterated series of links?
#jmort253's answer is actually a good illustration of what is probably your error. Despite being declared inside the for loop, url has function scope, since it is declared with var. So your click handler closure is binding to a variable scoped outside the for loop, and every instance of the closure uses the same value, i.e. the last one.
Once Chrome supports the let keyword you will be able to use it instead of var and it will work fine since url will be scoped to the body of the for loop. In the meantime you'll have to create a new scope by creating your closure in a function:
function makeClickHandler(url) {
  return function() { ... };
}
Inside the for loop say:
for (var i = 0; i < urls.length; i++) {
  var url = urls[i];
  ...
  ahref.click(makeClickHandler(url));
  ...
}
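For concreteness, here is a sketch of the factory with its body filled in (the chrome.tabs calls are taken from the question's own code):
function makeClickHandler(url) {
  return function () {
    chrome.tabs.getSelected(null, function (tab) {
      chrome.tabs.update(tab.id, {url: url});
    });
  };
}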
In your code example, it looks like you only have a single link. Instead, let's assume you have an actual collection of links. In that case, you can use a for loop to iterate through them:
// collection of urls
var urls = ["http://example.com", "http://domain.org"];

// loop through the collection; for each url, build a separate link.
for (var i = 0; i < urls.length; i++) {
  // this is the link for iteration i
  var url = urls[i];
  var li = $('<li/>');
  var ahref = $('<a href="#">' + title + '</a>');
  ahref.click((function(pUrl) {
    return function() {
      chrome.tabs.getSelected(null, function (tab) {
        chrome.tabs.update(tab.id, {url: pUrl});
      });
    };
  })(url));
  li.append(ahref);
}
I totally forgot about scope when writing the original answer, so I updated it to use a closure based on Matthew Gertner's answer. Basically, in the click event handler, I'm now passing the url variable into an anonymous one-argument function which returns another function. The returned function uses the argument passed into the anonymous function, so its state is unaffected by the fact that later iterations of the for loop will change the value of url.
