I have a Firebase function that creates a PDF file. Lately it times out with an error about a "Chrome revision"? I understand neither the error message nor what is wrong. The function works when I run it locally on macOS.
TimeoutError: Timed out after 30000 ms while trying to connect to the browser! Only Chrome at revision r818858 is guaranteed to work.
at Timeout.onTimeout (/workspace/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:204:20)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
The function:
const puppeteer = require('puppeteer');

const createPDF = async (html, outputPath) => {
  let pdf;
  try {
    const browser = await puppeteer.launch({
      args: ['--no-sandbox']
    });
    const page = await browser.newPage();
    await page.emulateMediaType('screen');
    await page.setContent(html, {
      waitUntil: 'networkidle0'
    });
    pdf = await page.pdf({
      // path: outputPath,
      format: 'A4',
      printBackground: true,
      margin: {
        top: "50px",
        bottom: "50px"
      }
    });
    await browser.close();
  } catch (e) {
    console.error(e);
  }
  return pdf;
};
TimeoutError: Timed out after 30000 ms while trying to connect to the browser!
The aforementioned error occurs because, as mentioned in the documentation:
When you install Puppeteer, it downloads a recent version of Chromium
Every time you execute Puppeteer, it runs a Chromium instance in the background that Puppeteer tries to connect to; when it can't connect to the browser, this error is raised.
After running multiple tests, I was able to execute the Cloud Function by adding the headless parameter to the launch options. Since the documentation says it should be true by default, I don't quite understand why setting it manually allows the Cloud Function to finish correctly.
At the beginning I also set timeout to 0 to disable the timeout error, but it turned out not to be required: adding headless alone was enough. If you still run into timeouts, you can add it as well.
In the end, my code looks like this:
const createPDF = async (html, outputPath) => {
  let pdf;
  try {
    const browser = await puppeteer.launch({
      args: ['--no-sandbox'],
      headless: true,
      timeout: 0
    });
    const page = await browser.newPage();
    await page.emulateMediaType('screen');
    await page.setContent(html, {
      waitUntil: 'networkidle0'
    });
    pdf = await page.pdf({
      // path: outputPath,
      format: 'A4',
      printBackground: true,
      margin: {
        top: "50px",
        bottom: "50px"
      }
    });
    await browser.close();
    console.log("Download finished"); // Added this to debug that it finishes correctly
  } catch (e) {
    console.error(e);
  }
  return pdf;
};
Related
After creating a directory with an index.js file with the following code:
const puppeteer = require('puppeteer');

async function main() {
  const browser = await puppeteer.launch({
    headless: false,
    args: ['--no-sandbox']
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({
    path: 'example.png'
  });
  await browser.close();
}

// Start the script
main();
and then running npm init and npm install puppeteer, the following error is returned:
node index.js
/mnt/c/Users/trgre/OneDrive/Desktop/puppeteer
test/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:214
reject(new Errors_js_1.TimeoutError(`Timed out after ${timeout} ms while trying to
connect to the browser! Only Chrome at revision r${preferredRevision} is guaranteed to
work.`));
^
TimeoutError: Timed out after 30000 ms while trying to connect to the browser! Only Chrome at
revision r901912 is guaranteed to work.
at Timeout.onTimeout (/mnt/c/Users/trgre/OneDrive/Desktop/puppeteer
test/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:214:20)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
Node.js v17.1.0
Any ideas on what to do in order to run a Puppeteer program? I am on Windows using Ubuntu 20 under WSL.
I am using an Ubuntu Server 18.04.5 LTS and Puppeteer 10.0.0. My problem is that the browser.newPage() function never resolves. Basically, the console always logs Start but never 1 nor 2. I have tried a different Puppeteer version and puppeteer-core with my own Chromium build. I even installed a VM on my PC and it works there, but not on my server.
var puppeteer = require('puppeteer')

var adresse = "https://www.google.de/"

async function test() {
  try {
    const browser = await puppeteer.launch({
      "headless": true,
      "args": [
        '--disable-setuid-sandbox',
        '--no-sandbox',
        '--disable-gpu',
      ]
    })
    console.log("Start")
    const page = await browser.newPage()
    console.log("1")
    await page.goto(adresse)
    console.log("2")
    console.log(page)
  } catch (error) {
    console.log(error)
  }
}

test()
Problem: Using Puppeteer to get a screen grab of a website. It works fine on my dev machine but throws the exception below when deployed to Azure Functions in the cloud.
Environment: Azure (Node 12, Linux, Consumption plan); the function is triggered by a Service Bus topic.
Error:
Result: Failure Exception: Error: Failed to launch the browser process! spawn
/home/site/wwwroot/node_modules/puppeteer/.local-chromium/linux-818858/chrome-linux/chrome
EACCES TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md
Stack: Error: Failed to launch the browser process!
spawn /home/site/wwwroot/node_modules/puppeteer/.local-chromium/linux-818858/chrome-linux/chrome
EACCES TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md
at onClose (/home/site/wwwroot/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:193:20)
at ChildProcess.<anonymous> (/home/site/wwwroot/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserRunner.js:185:85)
at ChildProcess.emit (events.js:314:20) at Process.ChildProcess._handle.onexit (internal/child_process.js:274:12)
at onErrorNT (internal/child_process.js:470:16) at processTicksAndRejections (internal/process/task_queues.js:84:21)
I followed the recommendations in the Puppeteer troubleshooting document but am still having the same issue.
Things I tried for the launch settings:
let browser = await puppeteer.launch({ ignoreDefaultArgs: ['--disable-extensions'] });
let browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox', '--disable-setuid-sandbox'] });
let browser = await puppeteer.launch({ headless: true });
let browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-setuid-sandbox'] });
None of the above worked; they all throw the same error.
I checked by FTPing into the function, and the Chrome binary Puppeteer is looking for exists.
Thanks in advance.
Azure has the necessary dependencies for running headless Chromium in the Linux Consumption plan, so we can use the puppeteer package in an Azure Function. But we need to deploy the app using a remote build. For more details, please refer to the Azure feedback and the blog.
For example
Create Azure function app
Create Azure function project
a. Install package
npm install puppeteer
b. function.json
{
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "bowman1012",
      "subscriptionName": "test",
      "connection": "bowman1012_SERVICEBUS"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "outputBlob",
      "path": "outcontainer/{rand-guid}.png",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
c. code
const puppeteer = require("puppeteer");

module.exports = async function (context, mySbMsg) {
  context.log(
    "JavaScript ServiceBus topic trigger function processed message",
    mySbMsg
  );
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://google.com/");
  const screenshotBuffer = await page.screenshot({ fullPage: true });
  await browser.close();
  context.bindings.outputBlob = screenshotBuffer;
};
Add .funcignore file in the root project folder
*.js.map
*.ts
.git*
.vscode
local.settings.json
test
tsconfig.json
node_modules
Deploy to Azure
func azure functionapp publish $appName --build remote
Test
Update
Since you use TypeScript to create the Azure Function, we need to update .funcignore as follows:
*.js.map
.git*
.vscode
local.settings.json
test
node_modules
For example
My function code index.ts
import { AzureFunction, Context } from "@azure/functions";
import { ServiceBusMessage } from "@azure/service-bus";
import puppeteer from "puppeteer";
import { BlobServiceClient } from "@azure/storage-blob";

const serviceBusTopicTrigger: AzureFunction = async function (
  context: Context,
  mySbMsg: ServiceBusMessage
): Promise<void> {
  try {
    const promotionId = context.bindingData.userProperties.promotionId;
    context.log(
      "Player Screen Grabber ServiceBus topic trigger function processing message started",
      promotionId
    );
    const playerURL = "https://www.google.com/";
    let browser = await puppeteer.launch({ headless: true });
    let page = await browser.newPage();
    await page.goto(playerURL, { waitUntil: "networkidle2" });
    await page.setViewport({ width: 1920, height: 1080 });
    const screenshotBuffer = await page.screenshot({
      encoding: "binary",
    });
    await page.close();
    await browser.close();
    // the storage account connection string
    const constr = process.env["AzureWebJobsStorage"] + "";
    const blobserviceClient = BlobServiceClient.fromConnectionString(constr);
    const containerClient = blobserviceClient.getContainerClient("outcontainer");
    const blob = containerClient.getBlockBlobClient(`${promotionId}.png`);
    await blob.uploadData(screenshotBuffer);
    context.log(
      "Player Screen Grabber ServiceBus topic trigger function processing message ended",
      promotionId
    );
  } catch (error) {
    throw error;
  }
};

export default serviceBusTopicTrigger;
export default serviceBusTopicTrigger;
Deploy to Azure
func azure functionapp publish $appName --build remote
Test
This is driving me insane... I have the following code:
// Load a PUG template
const template = await loadTemplateRoute(pdfProps.layout);
// Generate HTML
const html = template(pdfProps);

// requirement for puppeteer to work locally, if using locally
const executablePath = process.env.EXECUTABLE_PATH || await chromium.executablePath;
console.log('executable path', executablePath);

// These are needed to run on WSL
// (note: push the flags individually, not as an array)
chromium.args.push('--disable-gpu', '--single-process');

console.log('1');
const browser = await puppeteer.launch({
  args: chromium.args,
  defaultViewport: chromium.defaultViewport,
  executablePath,
  headless: true,
  ignoreHTTPSErrors: true
});
console.log('2');
const page = await browser.newPage();
console.log('3');
// eslint-disable-next-line quote-props
await page.setContent(html, { 'waitUntil': 'networkidle2' });
console.log('4');
// here we can insert customizable features in the future using JSONB stored formats
const pdf = await page.pdf({
  format: 'A4',
  printBackground: true,
  margin: {
    top: '1cm',
    right: '1cm',
    bottom: '1cm',
    left: '1cm'
  }
});
await page.close();
console.log('5');
await browser.close();
console.log('6');
return pdf;
Running this gives me the PDF I want, but only about once in ten times. The other times I get either this after console.log('4'):
Protocol error (IO.read): Target closed.
at .../node_modules/puppeteer-core/lib/Connection.js:183:56
at new Promise (<anonymous>)
at CDPSession.send (.../node_modules/puppeteer-core/lib/Connection.js:182:12)
at Function.readProtocolStream (.../node_modules/puppeteer-core/lib/helper.js:254:37)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
at Page.pdf (.../node_modules/puppeteer-core/lib/Page.js:1021:12)
Or, more rarely, this after console.log('3'):
Navigation failed because browser has disconnected!
at CDPSession.<anonymous> (.../node_modules/puppeteer-core/lib/LifecycleWatcher.js:46:107)
at CDPSession.emit (events.js:223:5)
at CDPSession.EventEmitter.emit (domain.js:475:20)
at CDPSession._onClosed (.../node_modules/puppeteer-core/lib/Connection.js:215:10)
at Connection._onClose (.../node_modules/puppeteer-core/lib/Connection.js:138:15)
at WebSocket.<anonymous> (.../node_modules/puppeteer-core/lib/WebSocketTransport.js:48:22)
at WebSocket.onClose (.../node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:124:16)
at WebSocket.emit (events.js:223:5)
at WebSocket.EventEmitter.emit (domain.js:475:20)
at WebSocket.emitClose (.../node_modules/puppeteer-core/node_modules/ws/lib/websocket.js:191:10)
at Socket.socketOnClose (.../node_modules/puppeteer-core/node_modules/ws/lib/websocket.js:850:15)
at Socket.emit (events.js:223:5)
at Socket.EventEmitter.emit (domain.js:475:20)
at TCP.<anonymous> (net.js:664:12)
I run this on WSL Ubuntu, but running it on Mac gives errors too (though less frequently).
It seems to work better if I wait about 5 minutes between tries, but listing processes (ps -ef) shows nothing running or hanging...
EDIT: Logging out what's happening in /node_modules/puppeteer-core/lib/Connection.js:182:56 gives:
send(); Page.printToPDF {
  transferMode: 'ReturnAsStream',
  landscape: false,
  displayHeaderFooter: false,
  headerTemplate: '',
  footerTemplate: '',
  printBackground: true,
  scale: 1,
  paperWidth: 8.27,
  paperHeight: 11.7,
  marginTop: 0.39375,
  marginBottom: 0.39375,
  marginLeft: 0.39375,
  marginRight: 0.39375,
  pageRanges: '',
  preferCSSPageSize: false
}
send(); IO.read { handle: '1' }
send(); IO.read { handle: '1' }
The Page.printToPDF call works fine and the first IO.read also works, while the second IO.read throws the error...
After trying a bunch of things, I started suspecting external resources, since it worked fine with simple templates.
After reworking the templates to not load any external CSS (placing all CSS in <style> tags instead) and "pre-parsing" all images to base64 (<img src="data:image/png;base64,MIIlkaa3498asm..." />), it is no longer happening...
Clearly some loading of resources was messing with me...
I had the same issue and resolved it by:
replacing page.pdf() with page.createPDFStream() (docs here)
adding a 60-second timeout to page.createPDFStream() and page.setContent() (defaults to 30 sec)
waiting for the ['load', 'domcontentloaded'] events to be fired
Example
await page.setContent(html, {
  timeout: 60000,
  waitUntil: ['load', 'domcontentloaded'],
});
await page.emulateMediaType('screen');

// const pdf = await page.pdf({
const pdfStream = await page.createPDFStream({
  timeout: 60000,
  format: 'A4',
  margin: { top: '0.5cm', right: '1cm', bottom: '0.8cm', left: '1cm' },
  printBackground: true,
});

// ...
// do something with the PDF stream, e.g. save to file
pdfStream
  .on('end', () => console.log('pdfStream done'))
  .on('error', (err) => console.log(err))
  .pipe(fs.createWriteStream('my-form.pdf'));
I am using Puppeteer on Google App Engine with Node.js.
Whenever I run Puppeteer on App Engine, I encounter an error saying:
Navigation failed because browser has disconnected!
This works fine in my local environment, so I am guessing it is a problem with App Engine.
const browser = await puppeteer.launch({
  ignoreHTTPSErrors: true,
  headless: true,
  args: ["--disable-setuid-sandbox", "--no-sandbox"],
});
This is my App Engine's app.yaml file:
runtime: nodejs12
env: standard
handlers:
  - url: /.*
    secure: always
    script: auto
-- EDIT --
It works when I add the --disable-dev-shm-usage argument, but then it always times out. Here is my code.
const browser = await puppeteer.launch({
  ignoreHTTPSErrors: true,
  headless: true,
  args: [
    "--disable-gpu",
    "--disable-dev-shm-usage",
    "--no-sandbox",
    "--disable-setuid-sandbox",
    "--no-first-run",
    "--no-zygote",
    "--single-process",
  ],
});

const page = await browser.newPage();

try {
  const url = "https://seekingalpha.com/market-news/1";
  const pageOption = {
    waitUntil: "networkidle2",
    timeout: 20000,
  };
  await page.goto(url, pageOption);
} catch (e) {
  console.log(e);
  await page.close();
  await browser.close();
  return resolve("error at 1");
}

try {
  const ulSelector = "#latest-news-list";
  await page.waitForSelector(ulSelector, { timeout: 30000 });
} catch (e) {
  // ALWAYS TIMES OUT HERE!
  console.log(e);
  await page.close();
  await browser.close();
  return resolve("error at 2");
}
...
It seems the problem was App Engine's memory capacity.
When there is not enough memory to handle the Puppeteer crawl, App Engine automatically spins up another instance.
However, the newly created instance has a different Puppeteer browser.
Therefore, it results in Navigation failed because browser has disconnected.
The solution is simply to upgrade the App Engine instance class so a single instance can handle the crawling job.
The default instance class is F1, which has 256 MB of memory, so I upgraded to F4, which has 1 GB of memory; the error message no longer appears.
runtime: nodejs12
instance_class: F4
handlers:
  - url: /.*
    secure: always
    script: auto
For me, the error was solved when I stopped using the --use-gl=swiftshader argument.
It is included by default if you use args: chromium.args from chrome-aws-lambda.
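A minimal sketch of dropping that flag before launch (the `defaultArgs` stand-in below is illustrative; in practice you would filter `chromium.args` from chrome-aws-lambda):

```javascript
// Illustrative stand-in for chromium.args (the real list is much longer):
const defaultArgs = ['--no-sandbox', '--use-gl=swiftshader', '--single-process'];

// Remove the problematic flag before handing the list to puppeteer.launch:
const args = defaultArgs.filter((flag) => flag !== '--use-gl=swiftshader');

console.log(args); // ['--no-sandbox', '--single-process']
```

You would then pass the filtered list as `args` in the launch options instead of `chromium.args` directly.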
I was getting this error in a deployment; the solution was to change some parameters in waitForNavigation:
{ waitUntil: "domcontentloaded", timeout: 60000 }
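For context, a sketch of where those options go (the surrounding click/navigation flow is assumed, not part of the original answer):

```javascript
// The suggested options: wait only for DOMContentLoaded instead of the
// default 'load' event, and raise the navigation timeout from 30 s to 60 s.
const navOptions = { waitUntil: 'domcontentloaded', timeout: 60000 };

// Assumed usage inside an async Puppeteer flow (not executed here):
// await Promise.all([
//   page.click('a.next-page'),          // the action that triggers navigation
//   page.waitForNavigation(navOptions), // resolves on DOMContentLoaded
// ]);

console.log(navOptions); // { waitUntil: 'domcontentloaded', timeout: 60000 }
```

Waiting for domcontentloaded rather than networkidle avoids hanging on pages that keep long-lived connections open.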