I am trying to get Puppeteer to wait for a navigation to finish before moving on to the next statement. Based on the docs for waitForNavigation(), the code below should work, but it just skips to the next statement, and I have to use a workaround that waits for a specific URL in the response.
I have also tried all the waitUntil options (load, domcontentloaded, networkidle0 and networkidle2).
Any ideas on how to get this working properly are appreciated.
const browser = await puppeteer.launch({
headless: false,
})
const page = await browser.newPage()
const home = page.waitForNavigation()
await page.goto(loginUrl)
await home
const login = page.waitForNavigation()
await page.type('#email', config.get('login'))
await page.type('#password', config.get('password'))
await page.click('#submitButton')
await login // << skips over this
// The following line is my workaround and it works, but ideally I don't want
// to specify the expected "after" page each time I navigate.
await page.waitForResponse(response => response.url() === 'http://example.com/expectedurl')
page.waitForNavigation() waits for a navigation to start and finish. If the navigation is triggered by page.click() before the wait is set up, the two promises race against each other. Wrap both calls in Promise.all() so the navigation listener is in place when the click fires, avoiding the race condition:
const browser = await puppeteer.launch({
headless: false,
});
const page = await browser.newPage();
await page.goto(loginUrl);
await page.type('#email', config.get('login'));
await page.type('#password', config.get('password'));
await Promise.all([
page.click('#submitButton'),
page.waitForNavigation({
waitUntil: 'networkidle0',
}),
]);
await browser.close();
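The same pattern applies to any action that triggers a navigation. For example, a small sketch (not from the original answer, and assuming the login form submits on Enter) that presses Enter instead of clicking the button:

await Promise.all([
  // Register the navigation listener before the action that triggers it
  page.waitForNavigation({ waitUntil: 'networkidle0' }),
  // Pressing Enter in the focused password field submits the form (assumption)
  page.keyboard.press('Enter'),
]);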
I ran into the same problem. I used pending-xhr-request, which solved many cases where the requests were expected, but late requests still caused problems. It took me a while to work around that, so I built a package, puppeteer-response-waiter, to handle it:
const puppeteer = require('puppeteer');
const {ResponseWaiter} = require('puppeteer-response-waiter');
let browser = await puppeteer.launch({ headless: false });
let page = await browser.newPage();
let responseWaiter = new ResponseWaiter(page);
await page.goto('http://somesampleurl.com');
// start listening
responseWaiter.listen();
// do something here to trigger requests
await responseWaiter.wait();
// all requests are finished and responses are all returned back
// remove listeners
responseWaiter.stopListening();
await browser.close();
Hope this solves your problem.
I have a simple function that tries to accept the cookie consent popup.
Here's my code:
(async () => {
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
await page.goto('https://www.sport1.de/live/darts-sport');
await page.click('button[text=AKZEPTIEREN]');
// await page.screenshot({ path: 'example.png' });
// await browser.close();
})();
The cookie popup is rendered inside an iframe, so you have to switch to that frame via contentFrame() before you can click the accept button.
Also, if you want to match a button by its text content, you need XPath; CSS selectors cannot select elements by their text content.
// Grab the iframe element that hosts the consent dialog, then get its frame
const cookiePopUpIframeElement = await page.$("iframe[id='sp_message_iframe_373079']");
const cookiePopUpIframe = await cookiePopUpIframeElement.contentFrame();
// XPath can match the button by its visible text
const acceptElementToClick = await cookiePopUpIframe.$x("//button[text()='AKZEPTIEREN']");
await acceptElementToClick[0].click();
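The numeric suffix in the iframe id can change between sessions. A more robust sketch (the id prefix and the 'sp_message' URL fragment are assumptions about this consent provider):

// Wait for the consent iframe without hard-coding its full id
const frameHandle = await page.waitForSelector("iframe[id^='sp_message_iframe']");
const consentFrame = await frameHandle.contentFrame();
// Alternatively, look the frame up by URL:
// const consentFrame = page.frames().find(f => f.url().includes('sp_message'));
const [acceptButton] = await consentFrame.$x("//button[text()='AKZEPTIEREN']");
if (acceptButton) await acceptButton.click();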
The following code works for reading the clipboard in headless/headful mode:
var context = await client.defaultBrowserContext();
await context.overridePermissions('http://localhost', ['clipboard-read']);
page = await browser.newPage();
await page.goto( 'http://localhost/test/', {waitUntil: 'load', timeout: 35000});
// click button for clipboard..
let clipboard = await page.evaluate(`(async () => await navigator.clipboard.readText())()`);
But when you later create an incognito context, it no longer works:
const incognito = await client.createIncognitoBrowserContext();
page = await incognito.newPage();
and you get:
DOMException: Read permission denied.
I am currently trying to figure out how to combine both. Does anybody know how to set overridePermissions inside the new incognito window?
Please note that I do not want to launch Chrome with the incognito argument. I want to manually create new incognito pages inside my script with the correct overridePermissions.
I am having the very same issue. Here's a minimal reproducible example.
Node.js version: v16.13.1
Puppeteer version: 14.4.1
'use strict';
const puppeteer = require('puppeteer');
const URL = 'https://google.com';
(async () => {
const browser = await puppeteer.launch();
const context = browser.defaultBrowserContext();
await context.overridePermissions(URL, ['clipboard-read', 'clipboard-write']);
const page = await browser.newPage();
await page.goto(URL, {
waitUntil: 'networkidle2',
});
await page.evaluate(() => navigator.clipboard.writeText("Injected"));
const value = await page.evaluate(() => navigator.clipboard.readText());
console.log(value);
})();
I need to read data from https://www.cmegroup.com/tools-information/quikstrike/options-calendar.html
I tried to click the FX tab with page.click in Puppeteer, but the page stays on the default tab.
Any help is welcome.
const puppeteer = require('puppeteer');
let scrape = async () => {
const browser = await puppeteer.launch({headless: false});
const page = await browser.newPage();
await page.goto('https://www.cmegroup.com/tools-information/quikstrike/options-calendar.html');
await page.waitFor(1000);
//div select FX
await page.click('#ctl00_MainContent_ucViewControl_IntegratedCMEOptionExpirationCalendar_ucViewControl_ucProductSelector_lvGroups_ctrl3_lbProductGroup');
//browser.close();
return result;
};
scrape().then((value) => {
console.log(value); // Success!
});
I couldn't find the element you're looking for on that page. However, this might be helpful:
Wait for the selector to appear on the page before clicking on it:
await page.waitForSelector(selector);
If you are still facing the issue, try the JavaScript click method:
await page.$eval(selector, elem => elem.click());
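Putting the two together, a minimal sketch (the selector is copied from your script):

const selector = '#ctl00_MainContent_ucViewControl_IntegratedCMEOptionExpirationCalendar_ucViewControl_ucProductSelector_lvGroups_ctrl3_lbProductGroup';
// Wait until the FX tab link exists in the DOM, then click it via the DOM API,
// which also works when the element is overlapped by another element.
await page.waitForSelector(selector);
await page.$eval(selector, elem => elem.click());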
I have created a Puppeteer script to run in offline mode, and I am using the code below to take a screenshot. When running the offline-login-check.js script from the command prompt, could someone please advise where the screenshots are saved?
const puppeteer = require("puppeteer");
(async() => {
const browser = await puppeteer.launch({
headless: true,
chromeWebSecurity: false,
args: ['--no-sandbox']
});
try {
// Create a new page
const page = await browser.newPage()
// Connect to Chrome DevTools
const client = await page.target().createCDPSession()
// Navigate and take a screenshot
await page.waitFor(3000);
await page.goto('https://sometestsite.net/home',{waitUntil: 'networkidle0'})
//await page.goto(url, {waitUntil: 'networkidle0'});
await page.evaluate('navigator.serviceWorker.ready');
console.log('Going offline');
await page.setOfflineMode(true);
// Does === true for the main page but the fallback content isn't being served.
page.on('response', r => console.log(r.fromServiceWorker()));
await page.reload({waitUntil: 'networkidle0'});
await page.waitFor(5000);
await page.screenshot({path: 'screenshot.png',fullPage: true})
await page.waitForSelector('mat-card[id="route-tile-card]');
await page.click('mat-card[id="route-tile-card]');
await page.waitFor(3000);
} catch(e) {
// handle initialization error
console.log ("Timeout or other error: ", e)
}
await browser.close();
})();
const puppeteer = require('puppeteer');
(async() => {
const browser = await puppeteer.launch({
headless: false,
chromeWebSecurity: false,
args: ['--no-sandbox']
});
try {
// Create a new page
const page = await browser.newPage();
// Connect to Chrome DevTools
const client = await page.target().createCDPSession();
// Navigate and take a screenshot
await page.goto('https://example.com', {waitUntil: 'networkidle0'});
// await page.evaluate('navigator.serviceWorker.ready');
console.log('Going offline');
await page.setOfflineMode(true);
// Does === true for the main page but the fallback content isn't being served.
page.on('response', r => console.log(r.fromServiceWorker()));
await page.reload({waitUntil: 'networkidle0'});
await page.screenshot({path: 'screenshot2.png',fullPage: true})
// await page.waitForSelector('mat-card[id="route-tile-card]');
// await page.click('mat-card[id="route-tile-card]');
} catch(e) {
// handle initialization error
console.log ("Timeout or other error: ", e)
}
await browser.close();
})();
Then, in the command line, run ls | grep .png and you should see the screenshot in the current directory. Be aware that I removed await page.evaluate('navigator.serviceWorker.ready'); since it may be specific to your website.
Your script is perfect. There is no problem with it!
The screenshot.png should be in the directory from which you run the node offline-login-check.js command.
If it's not there, you are probably getting an error or timeout before the page.screenshot command runs. Since your script is fine, this can be caused by network issues or by the page itself. For example, if your page keeps a never-ending connection open (like a WebSocket), change "networkidle0" to "networkidle2" or "load"; otherwise the first page.goto will get stuck.
Again, your script is perfect. You don't have to change it.
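If you want to be certain where the file ends up regardless of the working directory, you can build an absolute path yourself (a small sketch; resolving relative to the script file is just one option):

const path = require('path');
// Resolve the output file relative to the script itself, so it no longer
// depends on which directory you run `node offline-login-check.js` from.
const screenshotPath = path.join(__dirname, 'screenshot.png');
await page.screenshot({ path: screenshotPath, fullPage: true });
console.log('Screenshot saved to', screenshotPath);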
I need an example of how to switch between tabs with Puppeteer.
This is what I currently have:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch({
headless: false, // launch headful mode
});
const page = await browser.newPage();
await page.setViewport({ width: 1920, height: 1080 });
await page.goto('https://URL1.com');
const pagee = await browser.newPage();
await pagee.setViewport({ width: 1920, height: 1080 });
await pagee.goto('https://URL2.com');
})();
So it opens two tabs, the first with URL1 and the second with URL2.
What I need:
Do some action on the first tab...
Go to the second tab and do some action...
Go back to the first tab and do some action...
Can you please provide an example?
Thank you
The bit of code you need is page.bringToFront(); see the docs.
A working script is below. Please note I have added a wait between tab switches, otherwise the script runs too fast :)
const puppeteer = require('puppeteer');
async function run() {
const browser = await puppeteer.launch( {
headless: false
});
const page1 = await browser.newPage();
await page1.goto('https://www.google.com');
const page2 = await browser.newPage();
await page2.goto('https://www.bing.com');
const pageList = await browser.pages();
console.log("NUMBER TABS:", pageList.length);
//switch tabs here
await page1.bringToFront();
blockingWait(1);
await page2.bringToFront();
blockingWait(1);
await page1.bringToFront();
blockingWait(4);
await browser.close();
};
function blockingWait(seconds) {
//simple blocking technique (wait...)
var waitTill = new Date(new Date().getTime() + seconds * 1000);
while(waitTill > new Date()){}
}
run();
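The blockingWait above busy-loops and blocks the Node.js event loop. A minimal non-blocking alternative (a sketch, not part of the original answer) is a Promise-based sleep:

// Resolves after the given number of seconds without blocking the event loop
function sleep(seconds) {
  return new Promise(resolve => setTimeout(resolve, seconds * 1000));
}

// Usage inside run():
// await page2.bringToFront();
// await sleep(1);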
In the case of clicking on a link/button to open a new tab with a new URL, the following worked for me.
await page.click('#your_Button_To_Open_New_Tab_With_Different_URL');
await page.waitForTimeout(3000);
const pageList = await browser.pages();
console.log("NUMBER TABS:", pageList.length);
// Note: _target._targetInfo is Puppeteer-internal and may break between versions
console.log("NUMBER TABS:", pageList[2]._target._targetInfo.url);
await page.waitForTimeout(3000);
const page2 = await browser.newPage();
const redirectedUrlforService = pageList[2]._target._targetInfo.url;
await page2.goto(redirectedUrlforService);
await page2.bringToFront();
await page2.waitForTimeout(3000);
await page2.waitForSelector('#A_Selector_On_New_Page_To_Verify_That_You_Can_Perform_Your_Actions_There');
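If you prefer not to rely on Puppeteer-internal fields like _target._targetInfo, a sketch using only the public API (the opener check assumes the new tab is opened by the current page) would be:

// Register the target listener and trigger the click at the same time
const [newTarget] = await Promise.all([
  browser.waitForTarget(target => target.opener() === page.target()),
  page.click('#your_Button_To_Open_New_Tab_With_Different_URL'),
]);
const newPage = await newTarget.page();
await newPage.bringToFront();
console.log('New tab URL:', newPage.url());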
Waiting for idle network requests might not always work: networkidle considers the page settled once the network has been quiet for 500 ms, so long-running DOM updates triggered by the responses may still not have rendered when the wait resolves.
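In that case it can be more reliable to wait for a concrete DOM condition after navigating (a sketch; the selector and predicate are placeholders):

// Wait for an element that only exists once the new page has actually rendered
await page.waitForSelector('#dashboard', { visible: true });
// Or wait for an arbitrary condition inside the page
await page.waitForFunction(() => document.querySelectorAll('.result-row').length > 0);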