Puppeteer "Waiting for target frame" error on Ubuntu (DigitalOcean) - Node.js

I have been building a web scraper in Node.js and running it on a DigitalOcean Ubuntu server. Puppeteer only has issues with my program on Ubuntu.
I originally had an issue running Puppeteer as the root user, so I switched to a new account I created on the server, and now I have this new issue.
Version: HeadlessChrome/105.0.5173.0
Error: Waiting for target frame D0E4A57B880331E15F232D467A28499A failed
at Timeout._onTimeout (/home/pricepal/priceServer-deployment/price-server/node_modules/puppeteer/lib/cjs/puppeteer/common/util.js:447:18)
at listOnTimeout (node:internal/timers:564:17)
at process.processTimers (node:internal/timers:507:7)
Node.js v18.7.0
Here is the block of code that the program stops at and eventually errors out:
try {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto(link)
  const content = await page.content()
  await browser.close()
  return content
} catch (error) {
  console.log(error)
}
It takes a little longer than normal to generate the headless browser, but the error stems from a timeout happening at page.goto(link). All of the links fail to load, not just one in particular.
The links I am using work when run on my M1 Mac with the same Chromium and Node versions.
I have been doing research and trying new things all day, but I cannot get it fixed and have found few resources relating to this issue.

I had the exact same problem; I've been pulling my hair out looking for answers for the past few days. I know it's not exactly a proper answer (mods, sorry if you have to delete this), but I found that switching from Ubuntu to Debian 10 magically fixed everything. FWIW, the line causing the error is:
const page = await browser.newPage()
I suspect the issue lies somewhere in the version of Chromium that Puppeteer downloads and its interaction with the OS; what exactly, I couldn't say. My results are as follows:
Didn't work:
Ubuntu 22.04
Ubuntu 20.04
Debian 11
Worked:
Debian 10
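One workaround worth trying before switching distros (an assumption on my part, not something the answer above tested): skip the bundled Chromium entirely and point Puppeteer at a distro-packaged browser, with the flags commonly needed on headless Linux servers. The executablePath below is hypothetical; check where your distro actually installs Chromium.

```javascript
// Sketch only: these launchOptions are assumptions for a typical
// headless Ubuntu server; adjust executablePath to your system.
const launchOptions = {
  executablePath: '/usr/bin/chromium-browser', // hypothetical distro Chromium path
  args: ['--no-sandbox', '--disable-dev-shm-usage'], // common flags for servers/containers
};

async function fetchContent(link) {
  const puppeteer = require('puppeteer'); // lazy require; needs puppeteer installed
  const browser = await puppeteer.launch(launchOptions);
  try {
    const page = await browser.newPage();
    await page.goto(link);
    return await page.content();
  } finally {
    await browser.close(); // always close, even if goto throws
  }
}
```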

Related

Puppeteer is failing to launch the browser in local

I am getting this error again and again while launching the application. I have reinstalled Puppeteer eight or nine times and even downloaded all the dependencies listed in the troubleshooting link.
Error: Failed to launch the browser process! spawn /home/......./NodeJs/Scraping/code3/node_modules/puppeteer/.local-chromium/linux-756035/chrome-linux/chrome ENOENT
TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md
This code just takes a screenshot of google.com.
Node.js version: 14.0.0
Puppeteer version: 4.0.1
Ubuntu version: 20.04
I am using the Chromium that is bundled with Puppeteer.
const puppeteer = require("puppeteer");
const chalk = require("chalk");
// MY OCD of colorful console.logs for debugging... IT HELPS
const error = chalk.bold.red;
const success = chalk.keyword("green");

(async () => {
  try {
    // launch the browser
    var browser = await puppeteer.launch({ headless: false });
    // open a new page
    var page = await browser.newPage();
    // enter url in page
    await page.goto(`https://www.google.com/`);
    // Google Say Cheese!!
    await page.screenshot({ path: "example.png" });
    await browser.close();
    console.log(success("Browser Closed"));
  } catch (err) {
    // Catch and display errors
    console.log(error(err));
    if (browser) await browser.close();
    console.log(error("Browser Closed"));
  }
})();
As you said, Puppeteer 2.x.x works for you perfectly but 4.x.x doesn't: it seems to be a Linux dependency issue that occurs more often since Puppeteer 3.x.x (usually libgbm1 is the culprit).
If you are not sure where your Chrome executable is located, first run:
whereis chrome
(e.g. /usr/bin/chrome)
Then, to find your missing dependencies, run:
ldd /usr/bin/chrome | grep not
and sudo apt-get install the listed dependencies.
After this you should be able to do a clean npm install on your project with the latest Puppeteer as well (as of today that is 5.0.0).

Postgres database does not connect, .sync() does not resolve. Sequelize, PostgreSQL, Node

Recently, I got a new laptop and set up my development environment on it. I also copied a project I worked on on my old laptop and wanted to continue working on it on the new one. Nothing weird here, I would think. This concerns the server-side code.
So I started with installing all the apps and programs, cloned my GitHub repo, booted up a docker container with the following command:
docker run -d -p 5432:5432 --name ulti-mate -e POSTGRES_PASSWORD=secret postgres
and ran npm install. I connected to the database using Postico, just to have a little visual feedback, and it connected instantly. Then I started the server with nodemon index.js and it seemed to start. Only I was missing one thing: normally the server logs Database connected and runs a bunch of insert queries. Went to Postico: nothing.
I've been going over the code of my database:
const Sequelize = require('sequelize');

const databaseUrl =
  process.env.DATABASE_URL ||
  'postgres://postgres:secret@localhost:5432/postgres';
const db = new Sequelize(databaseUrl);

async function syncDB() {
  try {
    console.log('Before sync');
    await db.sync({ force: false });
    console.log('Database connected');
  } catch (error) {
    console.error('error syncing database', error);
  }
}

syncDB();
and I noticed that it runs until it hits db.sync(). Before sync logs consistently; that's where it stops. It doesn't resolve at all. I tried assigning the result to a variable, but nothing. For example, this does not log:
const a = await db.sync({force: false});
console.log("a:", a);
The weird thing is that it worked on my old machine, so the problem can't be in the code; it must have something to do with my new development environment. I tried installing different versions of sequelize and pg in the repo, which didn't help. I tried reinstalling postgresql with Homebrew, but it's up to date.
If anyone has an idea what might be going wrong or something I might try to fix this issue, it would be greatly appreciated, because I'm slowly going mad.
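While debugging a silent hang like this, one generic trick (nothing Sequelize-specific, just plain promises) is to race the call against a timeout so the failure becomes an explicit error instead of silence:

```javascript
// Race a promise against a timeout so a hang surfaces as an error.
// Purely illustrative; the ms value is arbitrary.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch: await withTimeout(db.sync({ force: false }), 10000);
```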
I figured out the problem. I was running Node v14.4.0 (latest version at the moment). Downgrading to the latest LTS version (v12.18.1) fixed the issue. I'm not sure if this is a known issue, but I opened a ticket on the sequelize Repo.

How do I get a response/information from a hanging pgPool.connect()?

I am using the pg package (Node.js), and for some reason the connect function gives me nothing. My code gets hung up on that line, and I'm unable to see any errors, what's wrong, or what's happening.
i.e.
console.log("HERE");
await pgPool.connect()
console.log("NOW HERE") //this line never prints
I've tried a bunch of variations too:
console.log("HERE");
const client = await pgPool.connect()
console.log(client) //this line never prints
Does anyone know how to get a verbose stream from pg? My pg version is 7.15.0 and my npm version is 6.14.4
I've tried waiting it out for over an hour. For friends running the same code from the same branch on their local machines it connects in under a second. I've confirmed they have the same version of pg as me.
I am able to connect directly to the database using psql in a separate terminal without issues (it immediately connects in < 1 second)
Updating pg to 8.2.1 solved the problem. It must be an incompatibility issue with the earlier version.
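For what it's worth, newer pg versions also let you bound the wait: Pool accepts a connectionTimeoutMillis option (default 0, meaning wait forever), so a hung connect() rejects instead of blocking indefinitely. A minimal sketch, assuming pg >= 8 and a placeholder connection string:

```javascript
// poolConfig values are placeholders; connectionTimeoutMillis is a real
// pg Pool option (0 = no timeout, which is the default).
const poolConfig = {
  connectionString: process.env.DATABASE_URL || 'postgres://localhost:5432/postgres',
  connectionTimeoutMillis: 5000, // reject connect() after 5s instead of hanging
};

async function getClient() {
  const { Pool } = require('pg'); // lazy require; needs the pg package installed
  const pool = new Pool(poolConfig);
  const client = await pool.connect(); // rejects after 5s if the server is unreachable
  return { pool, client };
}
```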

Unhandled promise rejection (rejection id: 1): Error: kill ESRCH

I've done some research on the web and SO, but found nothing really helpful about this error.
I installed Node and Puppeteer under Windows 10 Ubuntu Bash (WSL) but didn't manage to make it work, yet I managed to make it work on Windows without Bash on another machine.
My command is :
node index.js
My index.js tries to take a screenshot of a page :
const puppeteer = require('puppeteer');

async function run() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://github.com');
  await page.screenshot({ path: 'screenshots/github.png' });
  browser.close();
}

run();
Does anybody know the way I could fix this "Error: kill ESRCH" error?
I had the same issue; this worked for me. Try updating your script to the following:
const puppeteer = require('puppeteer');

async function run() {
  //const browser = await puppeteer.launch();
  // WSL's Chrome support is very new and requires the sandbox to be disabled in a lot of cases.
  const browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox'] });
  const page = await browser.newPage();
  await page.goto('https://github.com');
  await page.screenshot({ path: 'screenshots/github.png' });
  await browser.close(); // As @Md. Abu Taher suggested
}

run();
const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
If you want to read all the details on this, this ticket has them (or links to them).
https://github.com/Microsoft/WSL/issues/648
Other puppeteer users with similar issues:
https://github.com/GoogleChrome/puppeteer/issues/290#issuecomment-322851507
I just fixed this issue. What you need to do is the following:
1) Install Debian dependencies
You can find them in this doc:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
sudo apt-get install all of those bad boys.
2) Add '--no-sandbox' flag when launching puppeteer
3) Make sure your windows 10 is up to date. I was missing an important update that allowed you to launch Chrome.
Points to consider:
Windows Bash is not a complete drop-in replacement for Ubuntu Bash (yet). There are many cases where GUI-based apps do not work properly. Also, the script might be confused by Bash on Windows 10: it could think the OS is Linux instead of Windows.
Windows 10 Bash only supports 64-bit binaries, so make sure the Node and Chrome versions used inside are 64-bit. Puppeteer uses -child.pid to kill child processes instead of child.pid on the Windows version. Make sure Puppeteer is not getting confused by this whole Bash/Windows thing.
Back to your case.
You are using browser.close() in the function, but it should be await browser.close(); otherwise it does not execute in the proper order.
Also, you should try adding await page.close(); before browser.close();.
So the code should be:
await page.close();
await browser.close();
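Putting those suggestions together, a try/finally variant guarantees the close calls run (and are awaited) even when goto or screenshot throws, which avoids orphaned Chrome processes that the ESRCH kill can stumble over. A sketch, not the answerer's exact code:

```javascript
// Guarantee cleanup with try/finally; the --no-sandbox flag is the WSL
// workaround suggested above.
async function screenshot(url, path) {
  const puppeteer = require('puppeteer'); // lazy require; needs puppeteer installed
  const browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox'] });
  try {
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({ path });
    await page.close();
  } finally {
    await browser.close(); // always awaited, even if an earlier step throws
  }
}
```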
I worked around it by symlinking chrome.exe to the bundled chrome under node_modules/puppeteer, as below:
ln -s /mnt/c/Program\ Files\ \(x86\)/Google/Chrome/Application/chrome.exe node_modules/puppeteer/.local-chromium/linux-515411/chrome-linux/chrome

Strange ECONNRESET error I cannot figure out

I do not know if this is related to koa, a problem with some other npm module, or something else, so I am going to start from here.
So, to the problem. I have a REST API written in koa v1. We run the Node server in a Docker image. One of our endpoints starts an import and returns status 200 with the message "import started"; when the import finishes, we send a Slack message to notify us.
First I tested the server on my local machine and everything works (the endpoint does not throw any errors). Then I built the Docker image. I ran the container locally; everything works. I deployed the image to our Mesos environment, and everything works so far: the container runs and every endpoint works, except the import endpoint. When I call it, after a few seconds (5 to 10) I get an ECONNRESET error, the running container gets killed, and a new instance is started. So the import is terminated.
At the beginning we assigned 128 MB of RAM to the Docker container, and that seemed to be enough. After the import error occurred, we thought maybe the OOM killer had killed the process, so we checked dmesg but could not find any log entries related to OOM or the container's process. Then we checked the container's RAM usage locally (with htop) and found it uses approximately 250+ MB, so we added more RAM in the Marathon config (512 MB). That, however, did not help; the same error occurred.
Because the error was not explicit enough, we installed the longjohn module so we could get a more detailed error message. That got us a little more information, but not as much as we had hoped:
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
---------------------------------------------
at Application.app.callback (/src/node_modules/koa/lib/application.js:130:45)
at Application.app.listen (/src/node_modules/koa/lib/application.js:73:39)
at Promise.then.result (/src/server.js:97:13)
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
Line 97 of the server.js is:
96:if(!module.parent) {
97: app.listen(port, (err) => {
98: if (err) {
99: console.error('Server error', err);
100: }
101: console.log('Listening on the port', port);
102: });
103:}
So what exactly happens in the endpoint logic: we are using the postgres npm module pg. We pass a pg.Pool into the context so we can use it later in our models. We execute each insert query encapsulated in a promise and push the promises into an array; there are roughly 2700+ records. Then we call Promise.all on the array and, with .then, send the message to Slack.
As you can see, I do not know whether the error is related to koa, pg, or something else. What is more intriguing is that everything works locally (the Node server as well as the Docker container), but on Mesos it does not. How can I find out what is wrong?
version of koa npm module: 1.2.0
version of pg npm module: 6.1.0
version of Postgres 9.5
version of Mesos: 1.0.1
According to this GitHub issue, this error is caused by tiny-lr.
It seems that downgrading to version 0.2.1 stops it, but tiny-lr is usually a dependency of other packages that you have no control over. You might be able to filter out the error by displaying all errors except this one, as such:
if (error.code !== 'ECONNRESET') { console.log(error) }
The issue is still open and dates from Oct 27, 2016. I don't know whether it will get fixed. But as far as feedback goes, it doesn't seem like a dangerous error or to have any real impact. But hey, I'd rather fix mine too, if there were a way.
Thanks to another developer, we found the cause of the error: we used up all the connections in the pool while an import was running.
When Marathon requested the service's status during the import, the service tried to connect to the database to test the connection, and at that moment the connection to the database was terminated. The service became unhealthy and Marathon restarted it. We refactored the import code and now limit the number of pool connections in use.
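The refactor described above can be sketched roughly like this: instead of firing all ~2700 insert promises at once and exhausting the pool, process them in bounded batches. The batch size and the insertRecord worker are placeholders; separately, pg's Pool also accepts a max option to cap the total number of connections.

```javascript
// Run an async worker over items in fixed-size batches so only
// `batchSize` queries are in flight at once. Generic sketch, not the
// project's actual import code.
async function runInBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Wait for this batch to settle before starting the next one.
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

// Usage sketch: await runInBatches(records, 10, (r) => insertRecord(pool, r));
```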
