Click event does nothing when triggered - node.js

When I trigger a .click() event in non-headless mode in puppeteer, nothing happens, not even an error. (I use non-headless mode so I can visually monitor what is being clicked.)
const scraper = {
  test: async () => {
    let browser, page;
    try {
      browser = await puppeteer.launch({
        headless: false,
        args: ["--no-sandbox", "--disable-setuid-sandbox"]
      });
      page = await browser.newPage();
    } catch (err) {
      console.log(err);
    }
    try {
      await page.goto("https://www.betking.com/sports/s/eventOdds/1-840-841-0-0,1-1107-1108-0-0,1-835-3775-0-0,", {
        waitUntil: "domcontentloaded"
      });
      console.log("scraping, wait...");
    } catch (err) {
      console.log(err);
    }
    console.log("waiting....");
    try {
      await page.waitFor('.eventsWrapper');
    } catch (err) {
      console.log(err, err.response);
    }
    try {
      let oddsListData = await page.evaluate(async () => {
        let regionAreaContainer = document.querySelectorAll('.areaContainer.region .regionGroup > .regionAreas > div:first-child > .area:nth-child(5)');
        regionAreaContainer = Array.prototype.slice.call(regionAreaContainer);
        let t = []; // Used to monitor the element being clicked
        regionAreaContainer.forEach(async (region) => {
          let dat = await region.querySelector('div');
          dat.innerHTML === "GG/NG" ? t.push(dat.innerHTML) : false; // Used to confirm that the right element is being clicked
          dat.innerHTML === "GG/NG" ? dat.click() : false;
        })
        return t;
      })
      console.log(oddsListData);
    } catch (err) {
      console.log(err);
    }
  }
}
I expect it to click the specified button and load in some dynamic data on the page.
In Chrome's console, I get the error
Transition Rejection($id: 1 type: 2, message: The transition has been superseded by a different transition, detail: Transition#3( 'sportsMultipleEvents'{"eventMarketIds":"1-840-841-0-0,1-1107-1108-0-0,1-835-3775-0-0,"} -> 'sportsMultipleEvents'{"eventMarketIds":"1-840-841-0-0,1-1107-1108-0-0,1-835-3775-535-14,"} ))

Problem
Behaving non-human-like by executing code like element.click() (inside the page context) or element.value = '..' (see this answer for a similar problem) seems to be problematic for Angular applications. You want to try to behave more human-like by using puppeteer functions like page.click() as they simulate a "real" mouse click instead of just triggering the element's click event.
In addition, the application seems to rebuild parts of the page whenever one of the items is clicked. Therefore, you need to run the selector again after each click.
Code sample
To behave more human-like and requery the elements after each click you can change the latter part of your code to something like this:
let list = await page.$x("//div[div/text() = 'GG/NG']");
for (let i = 0; i < list.length; i++) {
  await list[i].click();
  // give the page some time and then query the selectors again
  await page.waitFor(500);
  list = await page.$x("//div[div/text() = 'GG/NG']");
}
This code uses an XPath expression to query the div elements which contain another div element with the given text. After that, a click is simulated on the element and then the contents of the page are queried another time to respect the change of the DOM elements.

Here might be a less confusing way to click those:
for (var div of document.querySelectorAll('div')) {
  if (div.innerHTML === 'GG/NG') div.click()
}
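Note that this loop has to run in the page context. From Puppeteer, a minimal sketch of the same idea (wrapped in page.evaluate) could look like this:
await page.evaluate(() => {
  // Runs inside the browser: click every div whose innerHTML is exactly 'GG/NG'
  for (const div of document.querySelectorAll('div')) {
    if (div.innerHTML === 'GG/NG') div.click();
  }
});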

Related

Chrome Extension API Calls order and DOM Information

I'm working on an extension that is supposed to extract information from the DOM based on specific classes/tags, etc., then allow the user to save the information as a CSV file.
I'm getting stuck on a couple of places and haven't been able to find answers to questions similar enough.
Where I am getting tripped up:
1) Making sure that the page has completely loaded, so that chrome.tabs.query doesn't return null a couple of times before the promise actually succeeds and allows blocksF to inject successfully. I have tried placing it within a setTimeout function, but the Chrome API doesn't seem to work inside such a function.
2) Saving the extracted information so when the user moves onto a new page, the information is still there. I'm not sure if I should use the chrome.storage api call or simply save the information as an array and keep passing it through. It's just text, so I don't believe that it should take up too much space.
Then main function of the background.js is below.
let mainfunc = chrome.tabs.onUpdated.addListener(
  async (id, tab) => {
    if (buttonOn == true) {
      let actTab = await chrome.tabs.query({
        active: true,
        currentWindow: true,
        status: "complete"
      }).catch(console.log(console.error()));
      if (!actTab) {
        console.log("Could not get URL. Turn extension off and on again.");
      } else {
        console.log("Tab information received.")
      };
      console.log(actTab);
      let blocksF = chrome.scripting.executeScript({
        target: { tabId: actTab[0]['id'] },
        func: createBlocks
      })
        .catch(console.error)
      if (!blocksF) {
        console.log("Something went wrong.")
      } else {
        console.log("Buttons have been created.")
      };
      /*
      Adds listeners and should return value of the works array if the user chose to get the information
      */
      let listenersF = chrome.scripting.executeScript({
        target: { tabId: actTab[0]['id'] },
        func: loadListeners
      })
        .catch(console.error)
      if (!listenersF) {
        console.log("Listeners failed to load.")
      } else {
        console.log("Listeners loaded successfully.")
      };
      console.log(listenersF)
    };
  });
Information from the DOM is extracted through an event listener on a div/button that is added. The event listener is added within the loadListeners function.
let workArr = document.getElementById("getInfo").addEventListener("click", () => {
  let domAr = Array.from(
    document.querySelectorAll(<class 1>, <class 2>),
    el => {
      return el.textContent
    }
  );
  let newAr = []
  for (let i = 0; i < domAr.length; i++) {
    if (i % 2 == 0) {
      newAr.push([domAr[i], domAr[i + 1]])
    }
  }
  newAr.forEach((work, i) => {
    let table = document.getElementById('extTable');
    let row = document.createElement("tr");
    row.appendChild(document.createElement("td")).textContent = work[0];
    row.appendChild(document.createElement("td")).textContent = work[1];
    table.appendChild(row);
  });
  return newAr
});
I've been stuck on this for a couple of weeks now. Any help would be appreciated. Thank you!
There are several issues.
chrome methods return a Promise in MV3 so you need to await it or chain on it via then.
tabs.onUpdated listener's parameters are different. The second one is a change info which you can check for status instead of polling the active tab, moreover the update may happen while the tab is inactive.
catch(console.log(console.error())) doesn't do anything useful because it immediately calls these two functions so it's equivalent to catch(undefined)
Using return newArr inside a DOM event listener doesn't do anything useful because the caller of this listener is the internal DOM event dispatcher which doesn't use the returned value. Instead, your injected func should return a Promise and call resolve inside the listener when done. This requires Chrome 98 which added support for resolving Promise returned by the injected function.
chrome.tabs.onUpdated.addListener(onTabUpdated);

async function onTabUpdated(tabId, info, tab) {
  if (info.status === 'complete' &&
      /^https?:\/\/(www\.)?example\.com\//.test(tab.url) &&
      await exec(tabId, createBlocks)) {
    const [{result}] = await exec(tabId, loadListeners);
    console.log(result);
    // here you can save it in chrome.storage if necessary
  }
}

function exec(tabId, func) {
  // console.error returns `undefined` so we don't need try/catch,
  // because executeScript is always an array of objects on success
  return chrome.scripting.executeScript({target: {tabId}, func})
    .catch(console.error);
}

function loadListeners() {
  return new Promise(resolve => {
    document.getElementById('getInfo').addEventListener('click', () => {
      const result = [];
      // ...add items to result
      resolve(result);
    });
  });
}
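If you do want to persist the extracted rows across pages (as asked in the question), chrome.storage works fine for plain text. A minimal sketch, assuming the injected function resolves with an array of rows and the extension has the "storage" permission (saveRows is a hypothetical helper you could call right after console.log(result)):
async function saveRows(newRows) {
  // Append the newly extracted rows to whatever was stored on previous pages.
  const { rows = [] } = await chrome.storage.local.get('rows');
  await chrome.storage.local.set({ rows: rows.concat(newRows) });
}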

chrome.devtools.inspectedWindow.eval - frameURL

I am trying to get the selected element to the sidebar pane in my chrome extension.
It's working fine if the page has no frames; when the element is inside a frame, it's not working.
As per the document I have to pass the frameURL, but how do I get the frame or Iframe URL?
Thank you.
Note: This is a duplicate of an issue that was opened 3 years ago, but there is still no solution there, so I'm re-opening it.
In devtools.js
chrome.devtools.panels.elements.createSidebarPane(name, (panel) => {
  // listen for the elements changes
  function updatePanel() {
    chrome.devtools.inspectedWindow.eval("parseDOM($0)", {
      frameURL: // how to pass dynamic
      useContentScriptContext: true
    }, (result, exception) => {
      if (result) {
        console.log(result)
      }
      if (exception) {
        console.log(exception)
      }
    });
  }
  chrome.devtools.panels.elements.onSelectionChanged.addListener(updatePanel);
});
I ran into this as well. I ended up needing to add a content_script on each page/iframe and a background page to help pass messages between devtools and content scripts.
The key bit is that in the devtools page, we should ask the content_scripts to send back what their current url is. For every content script that was registered, we can then call chrome.devtools.inspectedWindow.eval("setSelectedElement($0)", { useContentScriptContext: true, frameURL: msg.iframe } );
Or in full:
chrome.devtools.panels.elements.createSidebarPane( "example", function( sidebar ) {
  const port = chrome.extension.connect({ name: "example-name" });
  // announce to content scripts that they should message back with their frame urls
  port.postMessage( 'SIDEBAR_INIT' );
  port.onMessage.addListener(function ( msg ) {
    if ( msg.iframe ) {
      // register with the correct frame url
      chrome.devtools.panels.elements.onSelectionChanged.addListener(
        () => {
          chrome.devtools.inspectedWindow.eval("setSelectedElement($0)", { useContentScriptContext: true, frameURL: msg.iframe } );
        }
      );
    } else {
      // otherwise assume other messages from content scripts should update the sidebar
      sidebar.setObject( msg );
    }
  } );
} );
Then in the content_script, we should only process the event if we notice that the last selected element ($0) is different, since each frame on the page will also handle this.
let lastElement;

function setSelectedElement( element ) {
  // if the selected element is the same, let handlers in other iframe contexts handle it instead.
  if ( element !== lastElement ) {
    lastElement = element;
    // Pass back the object we'd like to set on the sidebar
    chrome.extension.sendMessage( nextSidebarObject( element ) );
  }
}
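The counterpart, where each content script announces its own frame URL when it sees SIDEBAR_INIT, is not shown above. A rough sketch of what it could look like (the port handling here is an assumption; see the linked PR below for the real wiring):
chrome.extension.onConnect.addListener((port) => {
  port.onMessage.addListener((msg) => {
    if (msg === 'SIDEBAR_INIT') {
      // Reply with this frame's URL so devtools can target eval() at it.
      port.postMessage({ iframe: window.location.href });
    }
  });
});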
There's a bit of setup, including manifest changes, so see this PR for a full example:
https://github.com/gwwar/z-context/pull/21
You can find the URL of the frame this way:
document.querySelectorAll('iframe')[0].src
Assuming there is at least one iframe.
Please note, you cannot use useContentScriptContext: true, as it will make the script execute in the content script context (per the documentation), and it will be in a separate sandboxed environment.
I had a slightly different problem, but it might be helpful for your case too: I was dynamically inserting an iframe into a page, and then tried to eval a script in it. Here is the code that worked:
let win = chrome.devtools.inspectedWindow
let code = `
  (function () {
    let doc = window.document
    let insertFrm = doc.createElement('IFRAME')
    insertFrm.src = 'about:runner'
    doc.body.appendChild(insertFrm)
  })()`
win.eval(code, function (result, error) {
  if (error) {
    console.log('Error in insertFrame(), result:', result)
    console.error(error)
  } else {
    let code = `
      (function () {
        let doc = window.document
        let sc = doc.createElement('script')
        sc.src = '${chrome.runtime.getURL('views/index.js')}'
        doc.head.appendChild(sc)
      })()`
    win.eval(code, { frameURL: 'about:bela-runner' }, function (result, error) {
      if (error) {
        console.log('Error in insertFrame(), result:', result)
        console.error(error)
      }
    })
  }
})

Why can't Puppeteer find this link element on the page?

^^UPDATE^^
Willing to pay someone to walk me through this, issue posted on codeMentor.io: https://www.codementor.io/u/dashboard/my-requests/9j42b83f0p
I've been looking to click on the element:
<a id="isc_LinkItem_1$20j" href="javascript:void" target="javascript" tabindex="2"
onclick="if(window.isc_LinkItem_1) return isc_LinkItem_1.$30i(event);"
$9a="$9d">Reporting</a>
In: https://stackblitz.com/edit/js-nzhhbk
(I haven't included the actual page because it's behind a username & password)
seems easy enough
----------------------------------------------------------------------
solution1:
page.click('[id=isc_LinkItem_1$20j]') //not a valid selector
solution2:
const linkHandlers = await frame.$x("//a[contains(text(), 'Reporting')]");
if (linkHandlers.length > 0) {
  await linkHandlers[0].click();
} else {
  throw new Error('Link not found');
} //link not found
----------------------------------------------------------------------
I have looked at every way to select and click it, and it says it isn't in the document even though it clearly is (verified by inspecting the HTML in Chrome dev tools and calling page.evaluate(() => document.body.innerHTML)).
**tried to see if it was in an iframe
**tried to select by id
**tried to select by inner text
**tried to console log the body in the browser (console logging not working, verified on the inspected element) // nothing happens
**tried to create an alert with body text by using: page.evaluate(() => alert(document)) // nothing happens
**tried to create an alert to test whether JavaScript can be injected by: page.evaluate(() => alert('works')) // nothing happens
**also tried this: How to select elements within an iframe element in Puppeteer // doesn't work
Here is the code I have built so far
const page = await browser.newPage();
const login1url =
'https://np3.nextiva.com/NextOSPortal/ncp/landing/landing-platform';
await page.goto(login1url);
await page.waitFor(1000);
await page.type('[name=loginUserName]', 'itsaSecretLol');
await page.type('[name=loginPassword]', 'nopeHaha');
await page.click('[type=submit]');
await page.waitForNavigation();
const login3url = 'https://np3.nextiva.com/NextOSPortal/ncp/admin/dashboard';
await page.goto(login3url);
await page.click('[id=hdr_users]');
await page.goto('https://np3.nextiva.com/NextOSPortal/ncp/user/manageUsers');
await page.goto('https://np3.nextiva.com/NextOSPortal/ncp/user/garrettmrg');
await page.waitFor(2000);
await page.click('[id=loginAsUser]');
await page.waitFor(2000);
await page.click('[id=react-select-5--value]');
await page.waitFor(1000);
await page.click('[id=react-select-5--option-0]');
await page.waitFor(20000);
const elementHandle = await page.$('iframe[id=callcenter]');
const frame = await elementHandle.contentFrame();
const linkHandlers = await frame.$x("//a[contains(text(), 'Reporting')]");
if (linkHandlers.length > 0) {
  await linkHandlers[0].click();
} else {
  throw new Error('Link not found');
}
Since isc_LinkItem_1$20j is not a valid selector, maybe you can try finding elements starting with isc_LinkItem_1, like this:
await page.waitForSelector("[id^=isc_LinkItem_1]", {visible: true, timeout: 30000});
await page.click("[id^=isc_LinkItem_1]");
On your solution1:
await page.click('a[id=isc_LinkItem_1\\$20j]');
Or try to:
await page.click('#isc_LinkItem_1\\$20j');
I have the slight impression that you must specify what kind of element you're trying to select before the brackets, in this case an < a > element.
In the second solution, the # character means we're selecting an element by its id.
It turns out that the previous click triggered a new tab. Puppeteer doesn't move to the new tab, so all of the previous code was being executed on the old tab. To fix it, all we had to do was find the new tab, select it, and execute the code there. Here is the function we wrote to select the tab:
async function getTab(regex, browser, targets) {
  let pages = await browser.pages();
  if (targets) pages = await browser.targets();
  let newPage;
  for (let i = 0; i < pages.length; i++) {
    const url = await pages[i].url();
    console.log(url);
    if (url.search(regex) !== -1) {
      newPage = pages[i];
      console.log('***');
      console.log(url);
      console.log('***');
      break;
    }
  }
  console.log('finished');
  return newPage;
}
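Usage might look something like this (the regex and selector are placeholders, not the actual values from the site):
// After the click that opens the new tab, find that tab by URL and work on it.
const newTab = await getTab(/reporting/i, browser);
if (!newTab) throw new Error('Could not find the new tab');
const linkHandlers = await newTab.$x("//a[contains(text(), 'Reporting')]");
if (linkHandlers.length > 0) await linkHandlers[0].click();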

Puppeteer in NodeJS reports 'Error: Node is either not visible or not an HTMLElement'

I'm using 'puppeteer' for NodeJS to test a specific website. It seems to work fine in most case, but some places it reports:
Error: Node is either not visible or not an HTMLElement
The following code picks a link that in both cases is off the screen.
The first link works fine, while the second link fails.
What is the difference?
Both links are off the screen.
Any help appreciated,
Cheers, :)
Example code
const puppeteer = require('puppeteer');

const initialPage = 'https://website.com/path';
const selectors = [
  'div[id$="-bVMpYP"] article a',
  'div[id$="-KcazEUq"] article a'
];

(async () => {
  let selector, handles, handle;
  const width=1024, height=1600;
  const browser = await puppeteer.launch({
    headless: false,
    defaultViewport: { width, height }
  });
  const page = await browser.newPage();
  await page.setViewport({ width, height });
  page.setUserAgent('UA-TEST');

  // Load first page
  let stat = await page.goto(initialPage, { waitUntil: 'domcontentloaded' });

  // Click on selector 1 - works ok
  selector = selectors[0];
  await page.waitForSelector(selector);
  handles = await page.$$(selector);
  handle = handles[12]
  console.log('Clicking on: ', await page.evaluate(el => el.href, handle));
  await handle.click(); // OK

  // Click that selector 2 - fails
  selector = selectors[1];
  await page.waitForSelector(selector);
  handles = await page.$$(selector);
  handle = handles[12]
  console.log('Clicking on: ', await page.evaluate(el => el.href, handle));
  await handle.click(); // Error: Node is either not visible or not an HTMLElement
})();
I'm trying to emulate the behaviour of a real user clicking around the site, which is why I use .click(), and not .goto(), since the a tags have onclick events.
Instead of
await button.click();
do this:
await button.evaluate(b => b.click());
The difference is that button.evaluate(b => b.click()) runs the JavaScript HTMLElement.click() method on the given element in the browser context, which will fire a click event on that element even if it's hidden, off-screen or covered by a different element, whereas button.click() clicks using Puppeteer's ElementHandle.click(), which:
1) scrolls the page until the element is in view
2) gets the bounding box of the element (this step is where the error happens) and finds the screen x and y pixel coordinates of the middle of that box
3) moves the virtual mouse to those coordinates and sets the mouse to "down" then back to "up", which triggers a click event on the element under the mouse
First and foremost, your defaultViewport object that you pass to puppeteer.launch() has no keys, only values.
You need to change this to:
'defaultViewport' : { 'width' : width, 'height' : height }
The same goes for the object you pass to page.setViewport().
You need to change this line of code to:
await page.setViewport( { 'width' : width, 'height' : height } );
Third, the function page.setUserAgent() returns a promise, so you need to await this function:
await page.setUserAgent( 'UA-TEST' );
Furthermore, you forgot to add a semicolon after handle = handles[12].
You should change this to:
handle = handles[12];
Additionally, you are not waiting for the navigation to finish (page.waitForNavigation()) after clicking the first link.
After clicking the first link, you should add:
await page.waitForNavigation();
I've noticed that the second page sometimes hangs on navigation, so you might find it useful to increase the default navigation timeout (page.setDefaultNavigationTimeout()):
page.setDefaultNavigationTimeout( 90000 );
Once again, you forgot to add a semicolon after handle = handles[12], so this needs to be changed to:
handle = handles[12];
It's important to note that you are using the wrong selector for your second link that you are clicking.
Your original selector was attempting to select elements that were only visible to xs extra small screens (mobile phones).
You need to gather an array of links that are visible to your viewport that you specified.
Therefore, you need to change the second selector to:
div[id$="-KcazEUq"] article .dfo-widget-sm a
You should wait for the navigation to finish after clicking your second link as well:
await page.waitForNavigation();
Finally, you might also want to close the browser (browser.close()) after you are done with your program:
await browser.close();
Note: You might also want to look into handling unhandledRejection errors.
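For example, a minimal handler (the exact logging is up to you) might look like this:
process.on('unhandledRejection', (reason) => {
  // Surface silently rejected promises instead of letting them disappear.
  console.error('Unhandled promise rejection:', reason);
  process.exit(1);
});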
Here is the final solution:
'use strict';
const puppeteer = require( 'puppeteer' );
const initialPage = 'https://statsregnskapet.dfo.no/departementer';
const selectors = [
  'div[id$="-bVMpYP"] article a',
  'div[id$="-KcazEUq"] article .dfo-widget-sm a'
];

( async () =>
{
  let selector;
  let handles;
  let handle;
  const width = 1024;
  const height = 1600;

  const browser = await puppeteer.launch(
  {
    'defaultViewport' : { 'width' : width, 'height' : height }
  });

  const page = await browser.newPage();
  page.setDefaultNavigationTimeout( 90000 );
  await page.setViewport( { 'width' : width, 'height' : height } );
  await page.setUserAgent( 'UA-TEST' );

  // Load first page
  let stat = await page.goto( initialPage, { 'waitUntil' : 'domcontentloaded' } );

  // Click on selector 1 - works ok
  selector = selectors[0];
  await page.waitForSelector( selector );
  handles = await page.$$( selector );
  handle = handles[12];
  console.log( 'Clicking on: ', await page.evaluate( el => el.href, handle ) );
  await handle.click(); // OK
  await page.waitForNavigation();

  // Click that selector 2 - fails
  selector = selectors[1];
  await page.waitForSelector( selector );
  handles = await page.$$( selector );
  handle = handles[12];
  console.log( 'Clicking on: ', await page.evaluate( el => el.href, handle ) );
  await handle.click();
  await page.waitForNavigation();

  await browser.close();
})();
For anyone still having trouble this worked for me:
await page.evaluate(()=>document.querySelector('#sign-in-btn').click())
Basically just get the element in a different way, then click it.
The reason I had to do this was because I was trying to click a button in a notification window which sits outside the rest of the app (and Chrome seemed to think it was invisible even if it was not).
I know I'm late to the party, but I discovered an edge case that gave me a lot of grief (and led me to this thread), so I figured I'd post my findings.
The culprit:
CSS
scroll-behavior: smooth
If you have this you will have a bad time.
The solution:
await page.addStyleTag({ content: "* { scroll-behavior: auto !important; }" });
Hope this helps some of you.
My way
async function getVisibleHandle(selector, page) {
  const elements = await page.$$(selector);
  let hasVisibleElement = false,
    visibleElement = '';
  if (!elements.length) {
    return [hasVisibleElement, visibleElement];
  }
  let i = 0;
  for (let element of elements) {
    const isVisibleHandle = await page.evaluateHandle((e) => {
      const style = window.getComputedStyle(e);
      return (style && style.display !== 'none' &&
        style.visibility !== 'hidden' && style.opacity !== '0');
    }, element);
    var visible = await isVisibleHandle.jsonValue();
    const box = await element.boxModel();
    if (visible && box) {
      hasVisibleElement = true;
      visibleElement = elements[i];
      break;
    }
    i++;
  }
  return [hasVisibleElement, visibleElement];
}
Usage
let selector = "a[href='https://example.com/']";
let visibleHandle = await getVisibleHandle(selector, page);
if (visibleHandle[1]) {
  await Promise.all([
    visibleHandle[1].click(),
    page.waitForNavigation()
  ]);
}

page does not wait for another page to finish their tasks before continuing

So here's the code snippet:
for (let item of items)
{
  await page.waitFor(10000)
  await page.click("#item_"+item)
  await page.click("#i"+item)
  let pages = await browser.pages()
  let tempPage = pages[pages.length-1]
  await tempPage.waitFor("a.orange", {timeout: 60000, visible: true})
  await tempPage.click("a.orange")
  counter++
}
page and tempPage are two different pages.
What happens is that page waits for 10 seconds, then clicks some stuff, which opens a second page.
What's supposed to happen is that tempPage waits for an element, clicks it, then page should wait 10 seconds before doing it all over again.
However, what actually happens is that page waits for 10 seconds, clicks the stuff, then starts waiting for 10 seconds without waiting for tempPage to finish its tasks.
Is this a bug, or am I misunderstanding something? How should I fix this so that the for loop only runs again after tempPage has finished clicking?
Generally, you cannot rely on await tempPage.click("a.orange") to pause execution until tempPage has "finish[ed] its tasks". For super simple code that executes synchronously, it may work. But in general, you cannot rely on it.
If the click triggers an Ajax operation, or starts a CSS animation, or starts a computation that cannot be immediately computed, or opens a new page, etc., then the result you are waiting for is asynchronous, and the .click method will not wait for this asynchronous operation to complete.
What can you do? In some cases you may be able to hook into the code that is running on the page and wait for some event that matters to you. For instance, if you want to wait for an Ajax operation to be done and the code on the page uses jQuery, then you might use ajaxComplete to detect when the operation is complete. If you cannot hook into any event system to detect when the operation is done, then you may need to poll the page to wait for evidence that the operation is done.
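For example, a rough sketch of the ajaxComplete approach, assuming the page exposes jQuery globally and #someButton stands in for whatever element triggers the request, might look like this:
// Set a flag inside the page before clicking, then wait for it after the click.
await page.evaluate(() => {
  window.__ajaxDone = false;
  jQuery(document).ajaxComplete(() => { window.__ajaxDone = true; });
});
await page.click('#someButton');
await page.waitForFunction(() => window.__ajaxDone === true);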
Here is an example that shows the issue:
const puppeteer = require('puppeteer');

function getResults(page) {
  return page.evaluate(() => ({
    clicked: window.clicked,
    asynchronousResponse: window.asynchronousResponse,
  }));
}

puppeteer.launch().then(async browser => {
  const page = await browser.newPage();
  await page.goto("https://example.com");
  // We add a button to the page that will click later.
  await page.evaluate(() => {
    const button = document.createElement("button");
    button.id = "myButton";
    button.textContent = "My Button";
    document.body.appendChild(button);
    window.clicked = 0;
    window.asynchronousResponse = 0;
    button.addEventListener("click", () => {
      // Synchronous operation
      window.clicked++;
      // Asynchronous operation.
      setTimeout(() => {
        window.asynchronousResponse++;
      }, 1000);
    });
  });
  console.log("before clicks", await getResults(page));
  const button = await page.$("#myButton");
  await button.click();
  await button.click();
  console.log("after clicks", await getResults(page));
  await page.waitForFunction(() => window.asynchronousResponse === 2);
  console.log("after wait", await getResults(page));
  await browser.close();
});
The setTimeout code simulates any kind of asynchronous operation started by the click.
When you run this code, you'll see on the console:
before clicks { clicked: 0, asynchronousResponse: 0 }
after clicks { clicked: 2, asynchronousResponse: 0 }
after wait { clicked: 2, asynchronousResponse: 2 }
You see that clicked is immediately incremented twice by the two clicks. However, it takes a while before asynchronousResponse is incremented. The statement await page.waitForFunction(() => window.asynchronousResponse === 2) polls the page until the condition we are waiting for is realized.
You mentioned in a comment that the button is closing the tab. Opening and closing tabs are asynchronous operations. Here's an example:
puppeteer.launch().then(async browser => {
  let pages = await browser.pages();
  console.log("number of pages", pages.length);
  const page = pages[0];
  await page.goto("https://example.com");
  await page.evaluate(() => {
    window.open("https://example.com");
  });
  do {
    pages = await browser.pages();
    // For whatever reason, I need to have this here otherwise
    // browser.pages() always returns the same value. And the loop
    // never terminates.
    await page.evaluate(() => {});
    console.log("number of pages after evaluating open", pages.length);
  } while (pages.length === 1);
  let tempPage = pages[pages.length - 1];
  // Add a button that will close the page when we click it.
  tempPage.evaluate(() => {
    const button = document.createElement("button");
    button.id = "myButton";
    button.textContent = "My Button";
    document.body.appendChild(button);
    window.clicked = 0;
    window.asynchronousResponse = 0;
    button.addEventListener("click", () => {
      window.close();
    });
  });
  const button = await tempPage.$("#myButton");
  await button.click();
  do {
    pages = await browser.pages();
    // For whatever reason, I need to have this here otherwise
    // browser.pages() always returns the same value. And the loop
    // never terminates.
    await page.evaluate(() => {});
    console.log("number of pages after click", pages.length);
  } while (pages.length > 1);
  await browser.close();
});
When I run the above, I get:
number of pages 1
number of pages after evaluating open 1
number of pages after evaluating open 1
number of pages after evaluating open 2
number of pages after click 2
number of pages after click 1
You can see it takes a bit before window.open() and window.close() have detectable effects.
In your comment you also wrote:
I thought await was basically what turned an asynchronous function into a synchronous one
I would not say it turns asynchronous functions into synchronous ones. It makes the current code wait for an asynchronous operation's promise to be resolved or rejected. However, more importantly for the issue at hand here, the problem is that you have two virtual machines executing JavaScript code: there's Node which runs puppeteer and the script that controls the browser, and there's the browser itself which has its own JavaScript virtual machine. Any await that you use on the Node side affects only the Node code: it has no bearing on the code that runs in the browser.
It can get confusing when you see things like await page.evaluate(() => { some code; }). It looks like it is all of one piece, and all executing in the same virtual machine, but it is not. puppeteer takes the parameter passed to .evaluate, serializes it, and sends it over to the browser, where it executes. Try adding something like await page.evaluate(() => { button.click(); }); in the script above, after const button = .... Something like this:
const button = await tempPage.$("#myButton");
await button.click();
await page.evaluate(() => { button.click(); });
In the script, button is defined before page.evaluate, but you'll get a ReferenceError when page.evaluate runs because button is not defined on the browser side!
