JavaScript: wait for a request to be processed in a for loop - google-chrome-extension

In my Chrome extension, I am looking for a way to stop my for loop from proceeding until it gets a response from the content script. Here is the sample code:
function abc() {
  chrome.tabs.query({'status': 'complete'}, function(tabArray) {
    for (var i = 0, tab; tab = tabArray[i]; i++) {
      var currentUrl = tab.url;
      var tabId = tab.id;
      if (currentUrl.match(otherthing)) {
        chrome.tabs.sendRequest(tabId, {'type': 'getrequiredthing'},
          function(response) {
            if (response.isrequiredthing) {
              callfunction(tabId);
            }
          }
        );
      }
    }
  });
}
When a tab's URL matches, I send a request to that page to get some info, and if the reply is positive I need to call callfunction. But the for loop keeps iterating, so even when the response is positive, callfunction is called with the next (or a later) tabId rather than the one the request was sent to.
Can you give your opinions on how to make the for loop wait until each response is received?
Thanks

The problem is that sendRequest does not block until the response is ready; its third argument is a callback that is invoked later. By design, JavaScript almost never blocks. This is a Good Thing. Instead, it uses an "event-driven" model.
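A two-line illustration of that model: the callback is merely queued, and the code after it keeps running.
setTimeout(function () { console.log('later'); }, 0); // queued, does not block
console.log('first'); // prints "first", then "later"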
Another problem is due to lexical scoping: the callback closes over the variable tabId, so when callfunction is called, tabId holds the most recent value from the loop, not the value it had when sendRequest was called. To get around this, you need to create a separate scope for each loop iteration, e.g.:
for (...) {
  var tabId = ...;
  if (...) {
    (function (localTabId) {
      chrome.tabs.sendRequest(..., function (response) {
        if (response.isrequiredthing) {
          callfunction(localTabId);
        }
      });
    })(tabId);
  }
}
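For what it's worth, chrome.tabs.sendRequest has since been deprecated in favour of chrome.tabs.sendMessage, and declaring the loop variable with let gives every iteration its own binding, so the wrapper function is no longer needed. A minimal sketch along those lines (otherthing, callfunction and the message names are taken from the question):
chrome.tabs.query({ status: 'complete' }, function (tabArray) {
  for (let i = 0; i < tabArray.length; i++) {
    const tabId = tabArray[i].id;      // block-scoped: each iteration keeps its own tabId
    if (tabArray[i].url.match(otherthing)) {
      chrome.tabs.sendMessage(tabId, { type: 'getrequiredthing' }, function (response) {
        if (response && response.isrequiredthing) {
          callfunction(tabId);         // this is the tabId the message was sent to
        }
      });
    }
  }
});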

Related

Chrome Extension API Calls order and DOM Information

I'm working on an extension that is supposed to extract information from the DOM based on specific classes/tags, etc., then allow the user to save the information as a CSV file.
I'm getting stuck on a couple of places and haven't been able to find answers to questions similar enough.
Where I'm getting tripped up:
1) Making sure that the page has completely loaded, so that chrome.tabs.query doesn't return null a couple of times before the promise actually succeeds and allows blocksF to successfully inject. I have tried placing it within a setTimeout function, but the Chrome API doesn't seem to work within such a function.
2) Saving the extracted information so that when the user moves on to a new page, the information is still there. I'm not sure if I should use the chrome.storage API or simply save the information as an array and keep passing it through. It's just text, so I don't believe it should take up too much space.
The main function of background.js is below.
let mainfunc = chrome.tabs.onUpdated.addListener(
  async (id, tab) => {
    if (buttonOn == true) {
      let actTab = await chrome.tabs.query({
        active: true,
        currentWindow: true,
        status: "complete"
      }).catch(console.log(console.error()));
      if (!actTab) {
        console.log("Could not get URL. Turn extension off and on again.");
      } else {
        console.log("Tab information received.");
      };
      console.log(actTab);
      let blocksF = chrome.scripting.executeScript({
        target: { tabId: actTab[0]['id'] },
        func: createBlocks
      })
        .catch(console.error);
      if (!blocksF) {
        console.log("Something went wrong.");
      } else {
        console.log("Buttons have been created.");
      };
      /*
        Adds listeners and should return value of the works array if the user chose to get the information
      */
      let listenersF = chrome.scripting.executeScript({
        target: { tabId: actTab[0]['id'] },
        func: loadListeners
      })
        .catch(console.error);
      if (!listenersF) {
        console.log("Listeners failed to load.");
      } else {
        console.log("Listeners loaded successfully.");
      };
      console.log(listenersF);
    };
  });
Information from the DOM is extracted through an event listener on a div/button that is added. The event listener is added within the loadListeners function.
let workArr = document.getElementById("getInfo").addEventListener("click", () => {
  let domAr = Array.from(
    document.querySelectorAll(<class 1>, <class 2>),
    el => {
      return el.textContent;
    }
  );
  let newAr = [];
  for (let i = 0; i < domAr.length; i++) {
    if (i % 2 == 0) {
      newAr.push([domAr[i], domAr[i + 1]]);
    }
  }
  newAr.forEach((work, i) => {
    let table = document.getElementById('extTable');
    let row = document.createElement("tr");
    row.appendChild(document.createElement("td")).textContent = work[0];
    row.appendChild(document.createElement("td")).textContent = work[1];
    table.appendChild(row);
  });
  return newAr;
});
I've been stuck on this for a couple of weeks now. Any help would be appreciated. Thank you!
There are several issues.
chrome.* methods return a Promise in MV3, so you need to await it or chain on it via then.
The tabs.onUpdated listener's parameters are different: the second one is a changeInfo object whose status you can check instead of polling the active tab; moreover, the update may happen while the tab is inactive.
catch(console.log(console.error())) doesn't do anything useful because it immediately calls those two functions, so it's equivalent to catch(undefined).
Using return newAr inside a DOM event listener doesn't do anything useful because the caller of this listener is the internal DOM event dispatcher, which ignores the returned value. Instead, your injected func should return a Promise and call resolve inside the listener when done. This requires Chrome 98, which added support for resolving a Promise returned by the injected function.
chrome.tabs.onUpdated.addListener(onTabUpdated);

async function onTabUpdated(tabId, info, tab) {
  if (info.status === 'complete' &&
      /^https?:\/\/(www\.)?example\.com\//.test(tab.url) &&
      await exec(tabId, createBlocks)) {
    const [{result}] = await exec(tabId, loadListeners);
    console.log(result);
    // here you can save it in chrome.storage if necessary
  }
}

function exec(tabId, func) {
  // No try/catch needed: on success executeScript resolves with an array of
  // objects; on failure, .catch(console.error) makes the result `undefined`,
  // which the caller can check.
  return chrome.scripting.executeScript({target: {tabId}, func})
    .catch(console.error);
}

function loadListeners() {
  return new Promise(resolve => {
    document.getElementById('getInfo').addEventListener('click', () => {
      const result = [];
      // ...add items to result
      resolve(result);
    });
  });
}
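If you do decide to persist the extracted rows (the chrome.storage comment in onTabUpdated above marks the spot), here is a minimal sketch under the assumption of a made-up key name extractedRows; in MV3 the chrome.storage methods return Promises, so they can be awaited directly:
// Inside onTabUpdated, right after `result` is received:
const { extractedRows = [] } = await chrome.storage.local.get('extractedRows');
await chrome.storage.local.set({ extractedRows: extractedRows.concat(result) });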

How can I get the original request URL in the response callback?

I have a loop like this:
var req;
for (var i = 0; i < sites.length; i++) {
  req = https.get(sites[i], handleRequest);
  req.on('error', handleError);
}
The callback (handleRequest) runs asynchronously for each website being requested.
However, the only parameter handleRequest receives seems to be a "response".
By the time the callback runs, the loop has already completed, so how can I keep track of which website this response is for, so I can handle it accordingly?
You can change handleRequest to take two parameters, with url as the first. You can then partially apply the function via Function#bind, fixing the url argument when the request is set up, while the response is still supplied later as the second argument.
let sites = [
  "https://example.com",
  "https://example.net",
  "https://example.org"
];

function handleRequest(url, res) {
  console.log("handling:", url);
  /* handling code */
}

// minimalistic dummy HTTP module that responds after 1 second
let https = {
  get: handler => setTimeout(handler, 1000)
};

for (var i = 0; i < sites.length; i++) {
  let url = sites[i];
  https.get(handleRequest.bind(this, url)); // partially apply handleRequest
}
You can get a similar result via currying - instead of having two parameters, first take one, then return a function that takes the other. It leads to (in my opinion) better syntax when calling:
let sites = [
  "https://example.com",
  "https://example.net",
  "https://example.org"
];

function handleRequest(url) {
  return function actualHandler(res) {
    console.log("handling:", url);
    /* handling code */
  };
}

// minimalistic dummy HTTP module that responds after 1 second
let https = {
  get: handler => setTimeout(handler, 1000)
};

for (var i = 0; i < sites.length; i++) {
  let url = sites[i];
  https.get(handleRequest(url));
}
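With the real Node https module, the same URL/response pairing can also be kept with Promises. This is only a sketch, assuming Node 12.9+ for Promise.allSettled and the sites array from the question:
const https = require('https');

function fetchSite(url) {
  return new Promise((resolve, reject) => {
    // Each promise closes over its own `url`, so the pairing is never lost.
    https.get(url, res => resolve({ url, res }))
         .on('error', err => reject({ url, err }));
  });
}

Promise.allSettled(sites.map(fetchSite)).then(results => {
  for (const r of results) {
    if (r.status === 'fulfilled') {
      console.log('handling:', r.value.url); // r.value.res is the response for that URL
    } else {
      console.error('failed:', r.reason.url, r.reason.err);
    }
  }
});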

Angular 5 will not run more than one method on ngOnInit?

I have attached a screenshot of what I am trying to do. This is so basic yet so frustrating. I have to run a data parse after retrieving the array of objects from the first method being called, but whether I add my method inside that first call in ngOnInit or directly after it in ngOnInit, the method simply doesn't run. Any ideas?
ngOnInit() {
  this.getSiteContent(this.route.snapshot.params['id']);
  // Doesn't work
  this.addUpdatedPages();
}

// in use
getSiteContent(id) {
  this.http.get('/site-content/'+id).subscribe(data => {
    this.siteContent = data;
  });
  // Doesn't show..
  console.log('End of getSiteContent');
}

addUpdatedPages() {
  // Doesn't show
  console.log('Adding pages...');
  for (var i = 0; i < this.siteContent.length; i++) {
    this.checkNull(this.siteContent[i].SiteID, this.siteContent[i].SitePageID);
    console.log(this.nullCheck[0].SiteID);
    if (this.nullCheck.length > 0) {
      this.siteContent[i].SitePageContent = this.nullCheck[0].SitePageContent;
    }
  }
}
Everything points to an unhandled exception when you call this.http.get. You should check your browser's console; that would show the exception if there is one. One likely reason is that http was not injected or is undefined.
ngOnInit() {
  this.getSiteContent(this.route.snapshot.params['id']);
  // if the above throws an exception anything below would not be called
  this.addUpdatedPages();
}

getSiteContent(id) {
  this.http.get('/site-content/'+id).subscribe(data => {
    this.siteContent = data;
  });
  // If the call above to this.http.get throws an exception the code below would not be called
  console.log('End of getSiteContent');
}
That being said, the method addUpdatedPages should be called in the subscribe of the http.get, because you want it to occur after the data has been retrieved. Modify getSiteContent so that the line is moved into the callback for the observable's subscribe call:
this.http.get('/site-content/'+id).subscribe(data => {
  this.siteContent = data;
  this.addUpdatedPages();
});
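If the request can fail, a variant of the same idea with an error callback may help; this is only a sketch, assuming this.http is Angular's HttpClient and the callback-style subscribe signature of that RxJS version:
getSiteContent(id) {
  this.http.get('/site-content/' + id).subscribe(
    data => {
      this.siteContent = data;
      this.addUpdatedPages(); // runs only once the data has actually arrived
    },
    err => console.error('Failed to load site content', err)
  );
}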

Getting original request object during multiple asynchronous calls in nodejs-request

I have multiple HTTP requests in a nodejs app that each returns a word of a sentence. The replies will come at different times, so I'm saving them in a dictionary, with the key being the original sentence's word index. Problem is, when I access the request object, I only get the last one.
var completed_requests = 0;
sentence = req.query.sentence;
sentence = "sentence to be translated";
responses = [];
words = sentence.split(" ");
for (j = 0; j < words.length; j++) {
  var word = words[j];
  var data = {
    word: word
  };
  var options = {
    url: 'example.com',
    form: data,
    index: j
  };
  request.post(options, function(err, httpResponse, body) {
    options = options;
    if (!err) {
      responses.push({ [options.index]: body });
      completed_requests += 1;
      if (completed_requests == words.length) {
        var a = "";
        for (var k = 0; k < words.length; k++) {
          a += responses[k] + " ";
        }
        res.render('pages/index', { something: a });
      }
    } else {
      // err
    }
  });
}
Basically, when I access options.index in the callback, the options object isn't the one used for the original request but the last one (for some reason). How should I resolve this?
When we look at how JavaScript evaluates this code, given its async nature in Node.js, the problem becomes obvious:
For the first word, the loop for (j = 0; j < words.length; j++) { runs.
The value of j is assigned to options.index. For this loop run, options.index now has the value 0.
request.post(options, function(err, httpResponse, body) { is executed, but the callback handler will be invoked later.
For the second word, the loop runs again.
The value of j is assigned to options.index. options.index now has the value 1.
request.post(options, function(err, httpResponse, body) { is executed, but the callback handler will be invoked later.
The problem is now obvious: no new options object is ever created; instead, the value of j is assigned to the same options.index on every loop run. By the time the first callback handler is invoked, options.index has the value words.length - 1.
To fix the problem, we wrap the creation of the options object in a function executeRequest:
var completed_requests = 0;
sentence = req.query.sentence;
sentence = "sentence to be translated";
responses = [];
words = sentence.split(" ");
for (j = 0; j < words.length; j++) {
  var word = words[j];
  var data = {
    word: word
  };
  function executeRequest(url, form, index) {
    var options = {
      url: url,
      form: form,
      index: index
    };
    request.post(options, function(err, httpResponse, body) {
      // options = options; Superfluous
      if (!err) {
        responses.push({ [index]: body });
        completed_requests += 1;
        if (completed_requests == words.length) {
          var a = "";
          for (var k = 0; k < words.length; k++) {
            a += responses[k] + " ";
          }
          res.render('pages/index', { something: a });
        }
      } else {
        // err
      }
    });
  }
  executeRequest('example.com', data, j);
}
A good read about scoping and hoisting in JavaScript can be found here http://www.adequatelygood.com/JavaScript-Scoping-and-Hoisting.html
You need to use an async routine such as forEach or map; I also suggest you read up on the async nature of Node to help understand how to handle callbacks for I/O.
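As a sketch of that suggestion (keeping the question's request module, its made-up example.com endpoint, and the words/res variables), mapping each word to a Promise means the result order matches the word order, so no index bookkeeping is needed:
function translateWord(word) {
  return new Promise((resolve, reject) => {
    request.post({ url: 'example.com', form: { word: word } }, (err, httpResponse, body) => {
      if (err) reject(err);
      else resolve(body);
    });
  });
}

Promise.all(words.map(translateWord))
  .then(bodies => {
    // bodies[k] corresponds to words[k]
    res.render('pages/index', { something: bodies.join(' ') });
  })
  .catch(err => console.error(err));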

What's going on with Meteor and Fibers/bindEnvironment()?

I am having difficulty using Fibers/Meteor.bindEnvironment(). I am trying to have code that updates and inserts into a collection if the collection starts empty. This is all supposed to run server-side on startup.
function insertRecords() {
  console.log("inserting...");
  var client = Knox.createClient({
    key: apikey,
    secret: secret,
    bucket: 'profile-testing'
  });
  console.log("created client");
  client.list({ prefix: 'projects' }, function(err, data) {
    if (err) {
      console.log("Error in insertRecords");
    }
    for (var i = 0; i < data.Contents.length; i++) {
      console.log(data.Contents[i].Key);
      if (data.Contents[i].Key.split('/').pop() == "") {
        Projects.insert({ name: data.Contents[i].Key, contents: [] });
      } else if (data.Contents[i].Key.split('.').pop() == "jpg") {
        Projects.update({ name: data.Contents[i].Key.substr(0, data.Contents[i].Key.lastIndexOf('.')) },
                        { $push: { contents: data.Contents[i].Key } });
      } else {
        console.log(data.Contents[i].Key.split('.').pop());
      }
    }
  });
}
if (Meteor.isServer) {
  Meteor.startup(function () {
    if (Projects.find().count() === 0) {
      boundInsert = Meteor.bindEnvironment(insertRecords, function(err) {
        if (err) {
          console.log("error binding?");
          console.log(err);
        }
      });
      boundInsert();
    }
  });
}
My first time writing this, I got errors saying that I needed to wrap my callbacks in a Fiber() block; then, in a discussion on IRC, someone recommended trying Meteor.bindEnvironment() instead, since that should put the callback in a fiber. That didn't work (the only output I saw was inserting..., meaning that bindEnvironment() didn't throw an error, but it also didn't run any of the code inside the block). Then I got to this. My error now is: Error: Meteor code must always run within a Fiber. Try wrapping callbacks that you pass to non-Meteor libraries with Meteor.bindEnvironment.
I am new to Node and don't completely understand the concept of fibers. My understanding is that they're analogous to threads in C/C++/every language with threading, but I don't understand what the implications are for my server-side code, or why my code throws an error when trying to insert into a collection. Can anyone explain this to me?
Thank you.
You're using bindEnvironment slightly incorrectly: the place where it's being used is already in a fiber, while the callback that comes off the Knox client isn't in a fiber anymore.
There are two use cases of bindEnvironment (that I can think of; there could be more!):
You have a global variable that has to be altered, but you don't want it to affect other users' sessions.
You are managing a callback using a third-party API/npm module (which looks to be the case here).
Meteor.bindEnvironment creates a new fiber and copies the current fiber's variables and environment to the new fiber. The point where you need this is in your npm module's method callback.
Luckily, there is an alternative that takes care of waiting for the callback and binds it in a fiber for you: Meteor.wrapAsync.
So you could do this:
Your startup function already runs in a fiber and has no callback, so you don't need bindEnvironment here:
Meteor.startup(function () {
  if (Projects.find().count() === 0) {
    insertRecords();
  }
});
And here is your insertRecords function using wrapAsync, so you don't need a callback:
function insertRecords() {
  console.log("inserting...");
  var client = Knox.createClient({
    key: apikey,
    secret: secret,
    bucket: 'profile-testing'
  });
  client.listSync = Meteor.wrapAsync(client.list.bind(client));
  console.log("created client");
  try {
    var data = client.listSync({ prefix: 'projects' });
  } catch (e) {
    console.log(e);
  }
  if (!data) return;
  for (var i = 0; i < data.Contents.length; i++) {
    console.log(data.Contents[i].Key);
    if (data.Contents[i].Key.split('/').pop() == "") {
      Projects.insert({ name: data.Contents[i].Key, contents: [] });
    } else if (data.Contents[i].Key.split('.').pop() == "jpg") {
      Projects.update({ name: data.Contents[i].Key.substr(0, data.Contents[i].Key.lastIndexOf('.')) },
                      { $push: { contents: data.Contents[i].Key } });
    } else {
      console.log(data.Contents[i].Key.split('.').pop());
    }
  }
}
A couple of things to keep in mind: fibers aren't like threads. There is only a single thread in Node.js.
Fibers are more like events that can run at the same time, yet without blocking each other if there is a waiting-type scenario (e.g. downloading a file from the internet).
So you can have synchronous code that doesn't block other users' events. They take turns to run, but still run in a single thread. This is how Meteor has synchronous code on the server side that can wait for things, yet other users aren't blocked by it and can get on with their work, because their code runs in a different fiber.
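To make that concrete, here is a minimal sketch of the synchronous-looking server code fibers allow, assuming Meteor's http package is installed (meteor add http) and a made-up method name; with no callback, HTTP.get yields the current fiber instead of blocking the Node process:
Meteor.methods({
  // Called from a client; runs in its own fiber on the server.
  fetchExampleStatus: function () {
    // No callback passed, so this "blocks" only this method's fiber;
    // other users' method calls keep running meanwhile.
    var result = HTTP.get('http://example.com');
    return result.statusCode;
  }
});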
Chris Mather has a couple of good articles on this on http://eventedmind.com
What does Meteor.wrapAsync do?
Meteor.wrapAsync takes in the method you give it as the first parameter and runs it in the current fiber.
It also attaches a callback to it (it assumes the method takes a callback as its last parameter, where the callback's first param is an error and the second the result, i.e. function(err, result)).
The callback is bound with Meteor.bindEnvironment and blocks the current fiber until the callback is fired. As soon as the callback fires, it returns the result or throws the error.
So it's very handy for converting asynchronous code into synchronous code since you can use the result of the method on the next line instead of using a callback and nesting deeper functions. It also takes care of the bindEnvironment for you so you don't have to worry about losing your fiber's scope.
Update: Meteor._wrapAsync is now Meteor.wrapAsync and is documented.
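As a generic illustration of that pattern (fs.readFile stands in for any error-first-callback API; the file path is made up):
var fs = Npm.require('fs');

// wrapAsync's optional second argument is the `this` context for the call.
var readFileFiber = Meteor.wrapAsync(fs.readFile, fs);

Meteor.startup(function () {
  try {
    // Waits in this fiber only; returns the result or throws the error.
    var contents = readFileFiber('/tmp/example.txt', 'utf8');
    console.log(contents.length);
  } catch (e) {
    console.log(e);
  }
});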
