How to get finished content of a webpage using NodeJS [closed] - node.js

I am trying to get the content of https://apps.shopify.com/ as an HTML response and save it to a file for further processing (I need the list of app names and URLs for a scraping task).
I tried http.get, axios, and request, but they all return an unrendered version (I think the page uses JavaScript to add the products later). I need the finished HTML. How can I get the fully rendered result in Node.js?
(Or, if anyone knows an API to search the Shopify app store.)
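Since the app list is added by client-side JavaScript, a headless browser is the usual route. As a minimal sketch (Puppeteer assumed installed; the output filename is just an example), you can let Chromium render the page and then dump the resulting DOM with page.content():
const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // wait until network activity settles so the JS-injected app list is present
  await page.goto('https://apps.shopify.com/', { waitUntil: 'networkidle2' });
  // page.content() returns the serialized DOM after client-side rendering
  const html = await page.content();
  fs.writeFileSync('shopify-apps.html', html); // example filename
  await browser.close();
})();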

To iterate over all the app links and retrieve each URL and its text:
const puppeteer = require('puppeteer');

(async () => {
  const url = 'https://apps.shopify.com/search?q=a';
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });
  await page.waitForXPath('//a[@data-app-link-details]');
  const links = await page.$x('//a[@data-app-link-details]');
  for (let i = 0; i < links.length; i++) {
    const hrefProperty = await links[i].getProperty('href');
    const href = await hrefProperty.jsonValue();
    const textProperty = await links[i].getProperty('textContent');
    const text = await textProperty.jsonValue();
    console.log(href + ' ' + text);
  }
  await browser.close();
})();
Output
https://apps.shopify.com/automizely-loyalty?locale=fr&search_id=22d826ac-82ca-42ef-ad32-d2736ba59bc8&surface_detail=a&surface_inter_position=1&surface_intra_position=22&surface_type=search
Automizely Referral&Affiliate
https://apps.shopify.com/klaviyo-email-marketing?locale=fr&search_id=22d826ac-82ca-42ef-ad32-d2736ba59bc8&surface_detail=a&surface_inter_position=1&surface_intra_position=23&surface_type=search
Klaviyo: Email Marketing & SMS
https://apps.shopify.com/govx-id?locale=fr&search_id=22d826ac-82ca-42ef-ad32-d2736ba59bc8&surface_detail=a&surface_inter_position=1&surface_intra_position=24&surface_type=search
GovX ID Exclusive Discounts
[...]

Related

How to get lastModified property of another website

When I use the inspect/developer tools in Chrome I can find the last-modified date in the browser, but I want to see the same date in my Node.js application.
I have already tried:
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://www.tbsnews.net/economy/bsec-chairman-stresses-restoring-investor-confidence-mutual-funds-500126');
const lastModified = await page.evaluate(() => document.lastModified);
console.log(lastModified);
Unfortunately this code shows the current time of the new DOM creation, as we are using newPage(). Can anyone help me?
I have also tried JSDOM.
Thanks in advance.
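A sketch worth trying (no guarantee: many servers simply don't send this header) is to read the Last-Modified HTTP response header, which page.goto() exposes through the navigation response instead of the DOM:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const response = await page.goto('https://www.tbsnews.net/economy/bsec-chairman-stresses-restoring-investor-confidence-mutual-funds-500126');
  // header names are lower-cased by Puppeteer; the value may be undefined
  console.log(response.headers()['last-modified']);
  await browser.close();
})();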

How can I download FULL QUALITY pictures from google images using puppeteer?

I need help creating a stable selector that retrieves the full-quality URL from Google Images.
I am trying to download 4-25 pictures from Google Images in full quality using Puppeteer.
It doesn't work.
The problem is creating a stable selector and getting the URLs of the pictures in full quality, not the URLs of Google's preview mode.
I had it running already, but it broke down due to what I understand to be a poorly chosen selector. Now I am trying to rebuild it.
Old selector that results in "elements" being undefined:
let previewimagexpath =
  "/html/body/div[2]/c-wiz/div[3]/div[2]/div[3]/div/div/div[3]/div[2]/c-wiz/div/div[1]/div[1]/div[2]/div/a/img";
// previewimagexpath = '//*[@id="Sva75c"]/div/div/div[3]/div[2]/c-wiz/div/div[1]/div[1]/div[2]/div/a/img'
for (let i = 1; i < numOfPics; i++) {
  let imagexpath =
    "/html/body/div[2]/c-wiz/div[3]/div[1]/div/div/div/div[1]/div[1]/span/div[1]/div[1]/div[" +
    i +
    "]/a[1]/div[1]/img";
  const elements = await page.$x(imagexpath);
  await elements[0].click();
  await page.waitForTimeout(3000);
  const image = await page.$x(previewimagexpath);
  let d = await image[0].getProperty("src");
  // console.log(d._remoteObject.value);
  imagelinkslist.push(d._remoteObject.value);
}
await browser.close();
};
New selector, which results in URLs of the preview mode instead of URLs of the full-quality images:
axios
  .get(
    "https://www.google.com/search?q=dogs&sxsrf=ALiCzsZW27NYppMFDO9xwabkhmXUQMku8g:1651495383126&source=lnms&tbm=isch&sa=X&ved=2ahUKEwj4-qLd68D3AhUR3KQKHdk3CFYQ_AUoAXoECAIQAw&biw=1680&bih=948&dpr=2"
  )
  .then(response => {
    const $ = cheerio.load(response.data);
    const image = $("img");
    $("img").each((i, elem) => {});
    console.log(image);
  });
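Google's markup changes often, so absolute XPaths will keep breaking. As a rough sketch only (the selectors img.hypothetical-thumb and img.hypothetical-preview are placeholders you would have to replace with whatever the current markup exposes), a more resilient pattern is to select thumbnails by attribute or class rather than by position, click each one, and keep only src values that are real http(s) URLs rather than base64 previews:
// sketch: the selectors below are placeholders, not real Google Images selectors
const thumbs = await page.$$('img.hypothetical-thumb');
const imagelinkslist = [];
for (const thumb of thumbs.slice(0, numOfPics)) {
  await thumb.click();
  await page.waitForTimeout(3000); // give the full-size preview time to load
  const srcs = await page.$$eval('img.hypothetical-preview', imgs =>
    imgs.map(img => img.src).filter(src => src.startsWith('http')) // skip base64 thumbnails
  );
  if (srcs.length) imagelinkslist.push(srcs[0]);
}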

How to kill old Puppeteer browser if still running?

I am trying to scrape data from different websites using only one Puppeteer instance. I don't want to launch a new browser for each website, so I need to check whether a browser has already been launched and, if so, just open a new tab. I did something like the below; these are the conditions I always check before launching any browser:
const browser = await puppeteer.launch();
browser?.isConnected()
browser.process() // null if browser is still running
But still, I found that sometimes my script re-launches the browser even though an old one has already been launched. So I am thinking of killing any old browser that is still running, or is there a better check? Any other good suggestions will be highly appreciated.
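One common pattern (a sketch, assuming the whole scrape runs in one long-lived script) is to keep a single module-level browser and only launch when there is no connected instance:
const puppeteer = require('puppeteer');

let browser = null;

// reuse the existing browser while it is still connected; launch otherwise
async function getBrowser() {
  if (!browser || !browser.isConnected()) {
    browser = await puppeteer.launch();
  }
  return browser;
}

async function scrape(url) {
  const page = await (await getBrowser()).newPage();
  await page.goto(url);
  // ... extract whatever you need here
  await page.close(); // close the tab, keep the browser for the next site
}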
I'm not sure whether that specific operation (closing existing browsers) can be done through Puppeteer's API, but what I can recommend is how people usually handle this situation, which is to make sure that the browser instance is closed if any issue is encountered:
let browser = null;
try {
  browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox'],
  });
  const page = await browser.newPage();
  url = req.query.url;
  await page.goto(url);
  const bodyHTML = await page.evaluate(() => document.body.innerHTML);
  res.send(bodyHTML);
} catch (e) {
  console.log(e);
} finally {
  if (browser)
    await browser.close();
}
Otherwise, you can use shell-based commands like kill or pkill if you have access to the process ID of the previous browser.
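If the browser was launched by your script (not attached via puppeteer.connect()), browser.process() returns the underlying Chromium child process, so a defensive cleanup sketch (the helper name is just an example) could be:
// try a graceful close first, then fall back to killing the Chromium process
async function forceCloseBrowser(browser) {
  if (!browser) return;
  try {
    await browser.close();
  } catch (err) {
    const proc = browser.process(); // null when connected to a remote browser
    if (proc) proc.kill('SIGKILL');
  }
}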
The most reliable means of closing a puppeteer instance that I've found is to close all of the pages within a BrowserContext, which automatically closes the Browser. I've seen instances of chromium linger in Task Manager after calling just await browser.close().
Here is how I do this:
const openAndCloseBrowser = async () => {
  const browser = await puppeteer.launch();
  try {
    // your logic
  } catch (error) {
    // error handling
  } finally {
    const pages = await browser.pages();
    for (const page of pages) await page.close();
  }
};
If you try running await browser.close() after running the loop and closing each page individually, you should see an error stating that the browser was already closed and your Task Manager should not have lingering chromium instances.

node js puppeteer metadata

I am new to Puppeteer, and I am trying to extract meta data from a Web site using Node.JS and Puppeteer. I just can't seem to get the syntax right. The code below works perfectly extracting the Title tag, using two different methods, as well as text from a paragraph tag. How would I extract the content text for the meta data with the name of "description" for example?
meta name="description" content="Stack Overflow is the largest, etc"
I would be seriously grateful for any suggestions! I can't seem to find any examples of this anywhere (5 hours of searching and code hacking later). My sample code:
const puppeteer = require('puppeteer');

async function main() {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://stackoverflow.com/', { waitUntil: 'networkidle2' });
  const pageTitle1 = await page.evaluate(() => document.querySelector('title').textContent);
  const pageTitle2 = await page.title();
  const innerText = await page.evaluate(() => document.querySelector('p').innerText);
  console.log(pageTitle1);
  console.log(pageTitle2);
  console.log(innerText);
}

main();
You need a thorough tutorial on CSS selectors; see MDN's CSS Selectors reference.
Something I highly recommend is testing your selectors in the console directly on the page you will automate; this will save you hours of run-and-stop cycles. Try this:
document.querySelectorAll("head > meta[name='description']")[0].content;
Now for Puppeteer, copy that selector and paste it into the Puppeteer call. I also prefer this notation:
await page.$eval("head > meta[name='description']", element => element.content);
If you have any other questions or problems, just comment.
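If you later need more than one tag, the same idea extends to page.$$eval (a sketch; the { name: content } shape is just an example):
// build a { name: content } map of every <meta name="..."> tag on the page
const metas = await page.$$eval('head > meta[name]', elements =>
  Object.fromEntries(elements.map(el => [el.getAttribute('name'), el.getAttribute('content')]))
);
console.log(metas.description);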
For anyone struggling to get the OG tags in Puppeteer, here is the solution.
let dom2 = await page.evaluate(() => {
  return document.head.querySelector('meta[property="og:description"]').getAttribute("content");
});
console.log(dom2);
If you prefer to avoid $eval, you can do:
const descriptionTag = await page.$('meta[name="description"]');
const description = await descriptionTag?.evaluate(el => el.getAttribute('content'));

Puppeteer: How to handle multiple tabs?

Scenario: Web form for developer app registration with two part workflow.
Page 1: Fill out developer app details and click on button to create Application ID, which opens, in a new tab...
Page 2: The App ID page. I need to copy the App ID from this page, then close the tab and go back to Page 1 and fill in the App ID (saved from Page 2), then submit the form.
I understand basic usage - how to open Page 1 and click the button which opens Page 2 - but how do I get a handle on Page 2 when it opens in a new tab?
Example:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false, executablePath: '/Applications/Google Chrome.app' });
  const page = await browser.newPage();

  // go to the new bot registration page
  await page.goto('https://register.example.com/new', { waitUntil: 'networkidle' });

  // fill in the form info
  const form = await page.$('new-app-form');
  await page.focus('#input-appName');
  await page.type('App name here');
  await page.focus('#input-appDescription');
  await page.type('short description of app here');
  await page.click('.get-appId'); // opens new tab with Page 2

  // handle Page 2
  // get appID from Page 2
  // close Page 2

  // go back to Page 1
  await page.focus('#input-appId');
  await page.type(appIdSavedFromPage2);

  // submit the form
  await form.evaluate(form => form.submit());
  browser.close();
})();
Update 2017-10-25
The work for Browser.pages has been completed and merged, fixing "Emit new Page objects when new tabs created" (#386) and "Request: browser.currentPage() or similar way to access Pages" (#443).
Still looking for a good usage example.
A new patch was committed two days ago, and you can now use browser.pages() to access all Pages in the current browser.
Works fine; I tried it myself yesterday :)
Edit:
An example of how to get a JSON value from a new page opened via a target="_blank" link:
const page = await browser.newPage();
await page.goto(url, { waitUntil: 'load' });
// click on a 'target="_blank"' link
await page.click(someATag);
// get all the currently open pages as an array
let pages = await browser.pages();
// get the last element of the array (third in my case) and do some
// hocus-pocus to get it as JSON...
const aHandle = await pages[3].evaluateHandle(() => document.body);
const resultHandle = await pages[3].evaluateHandle(body => body.innerHTML, aHandle);
// get the JSON value of the page
let jsonValue = await resultHandle.jsonValue();
// ...do something with JSON
This will work for you in the latest alpha branch:
const newPagePromise = new Promise(x => browser.once('targetcreated', target => x(target.page())));
await page.click('my-link');
// handle Page 2: you can access new page DOM through newPage object
const newPage = await newPagePromise;
await newPage.waitForSelector('#appid');
const appidHandle = await page.$('#appid');
const appID = await page.evaluate(element => element.innerHTML, appidHandle);
newPage.close();
[...]
// back to page 1 interactions
Be sure to use the latest Puppeteer version (from the GitHub master branch) by setting the package.json dependency to
"dependencies": {
"puppeteer": "git://github.com/GoogleChrome/puppeteer"
},
Source: JoelEinbinder @ https://github.com/GoogleChrome/puppeteer/issues/386#issuecomment-343059315
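On newer Puppeteer versions, browser.waitForTarget() can replace the hand-rolled Promise around 'targetcreated' (a sketch; 'my-link' and '#appid' are the same placeholders as above):
// click the link and wait for the tab it opens
const [target] = await Promise.all([
  browser.waitForTarget(t => t.opener() === page.target()),
  page.click('my-link'),
]);
const newPage = await target.page();
await newPage.waitForSelector('#appid');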
According to the Official Documentation:
browser.pages()
returns: <Promise<Array<Page>>> Promise which resolves to an array of all open pages. Non visible pages, such as "background_page", will not be listed here. You can find them using target.page().
An array of all pages inside the Browser. In case of multiple browser contexts, the method will return an array with all the pages in all browser contexts.
Example Usage:
let pages = await browser.pages();
await pages[0].evaluate(() => { /* ... */ });
await pages[1].evaluate(() => { /* ... */ });
await pages[2].evaluate(() => { /* ... */ });
In theory, you could override the window.open function to always open "new tabs" on your current page and navigate via history.
Your workflow would then be:
Override the window.open function:
await page.evaluateOnNewDocument(() => {
  window.open = (url) => {
    top.location = url
  }
})
Go to your first page and perform some actions:
await page.goto(PAGE1_URL)
// ... do stuff on page 1
Navigate to your second page by clicking the button and perform some actions there:
await page.click('#button_that_opens_page_2')
await page.waitForNavigation()
// ... do stuff on page 2, extract any info required on page 1
// e.g. const handle = await page.evaluate(() => { ... })
Return to your first page:
await page.goBack()
// or: await page.goto(PAGE1_URL)
// ... do stuff on page 1, injecting info saved from page 2
This approach obviously has its drawbacks, but I find it simplifies multi-tab navigation drastically, which is especially useful if you're already running parallel jobs on multiple tabs. Unfortunately, the current API doesn't make it an easy task.
You can remove the need to switch pages, if the new tab is caused by a target="_blank" attribute, by setting target="_self".
Example:
const element = await page.$(selector);
await page.evaluateHandle((el) => {
  el.target = '_self';
}, element);
await element.click();
If your click action is emitting a pageload, then any subsequent scripts being run are effectively lost. To get around this you need to trigger the action (a click in this case) but not await it. Instead, wait for the pageload:
page.click('.get-appId');
await page.waitForNavigation();
This will allow your script to effectively wait for the next pageload event before proceeding with further actions.
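To avoid a race where the navigation finishes before waitForNavigation is registered, a common variant (same selector, sketch only) starts both at once:
// start waiting for the navigation before triggering the click
await Promise.all([
  page.waitForNavigation(),
  page.click('.get-appId'),
]);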
You can't currently - Follow https://github.com/GoogleChrome/puppeteer/issues/386 to know when the ability is added to puppeteer (hopefully soon)
It looks like there's a simple 'popup' page event:
Page corresponding to "popup" window
Emitted when the page opens a new tab or window.
const [popup] = await Promise.all([
  new Promise(resolve => page.once('popup', resolve)),
  page.click('a[target=_blank]'),
]);

const [popup] = await Promise.all([
  new Promise(resolve => page.once('popup', resolve)),
  page.evaluate(() => window.open('https://example.com')),
]);
Credit to this GitHub issue for the easier alternative to 'targetcreated'.
You can call browser.newPage() multiple times to open multiple tabs.
Example
const page = await browser.newPage();
await page.goto("https://www.google.com/");
const page2 = await browser.newPage();
await page2.goto("https://www.youtube.com/");
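If the tabs are independent of each other, a small variation of this (sketch) drives them concurrently:
// open both tabs, then load them in parallel
const [pageA, pageB] = await Promise.all([browser.newPage(), browser.newPage()]);
await Promise.all([
  pageA.goto("https://www.google.com/"),
  pageB.goto("https://www.youtube.com/"),
]);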
