I'm setting up a test that involves starting a webcam video session.
So far all is working fine and doesn't require any user interaction except for granting access to the webcam.
When the third-party library I'm using calls navigator.mediaDevices.getUserMedia({audio: true, video: true}), the browser opens a prompt asking the user to allow access.
What I'm looking for is a way to grant access without user interaction.
I've tried Puppeteer's page.on('dialog', ...) handler, but it never fires for the webcam permission prompt.
Does anyone have any ideas?
Google Chrome has a launch option, --use-fake-ui-for-media-stream, that skips the getUserMedia permission prompt.
You can set it with Puppeteer like this:
const puppeteer = require('puppeteer')

;(async () => {
  const browser = await puppeteer.launch({
    args: ['--use-fake-ui-for-media-stream']
  })
  const page = await browser.newPage()
  await page.goto('http://localhost/start-video-test.html')
  const startVideoButton = await page.$('#startVideoButton')
  await startVideoButton.click()
  // video session starts without a permission prompt
  return browser.close()
})()
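If the test machine has no physical camera or microphone, Chromium also accepts --use-fake-device-for-media-stream, which makes getUserMedia return a synthetic video/audio stream instead of real hardware. A minimal sketch combining both flags (fakeMediaArgs is a hypothetical helper name; the launch call is shown commented since it requires puppeteer and a browser):

```javascript
// Flags that let getUserMedia succeed headlessly:
//   --use-fake-ui-for-media-stream     auto-accepts the permission prompt
//   --use-fake-device-for-media-stream substitutes a synthetic camera/mic
function fakeMediaArgs () {
  return [
    '--use-fake-ui-for-media-stream',
    '--use-fake-device-for-media-stream'
  ]
}

// Usage (requires puppeteer to be installed):
// const puppeteer = require('puppeteer')
// const browser = await puppeteer.launch({ args: fakeMediaArgs() })
```

With the fake device, the video element will show a rolling test pattern rather than a black frame, which is usually enough to exercise a video-session code path in CI.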
I'm trying to detect whether the user closed the browser window in the middle of the crawling automation. Is this possible?
As far as I can tell, the script only runs straight through until the process ends at await browser.close();.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  // do something ...
  // if browser ui is closed/exit browser ui in midst of automation
  await browser.close();
})();
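Puppeteer's Browser object emits a 'disconnected' event when the underlying Chromium instance goes away (closed by the user, crashed, or otherwise disconnected), which can be used to detect exactly this situation. A minimal sketch (watchBrowser is a hypothetical helper name; anything with Node's EventEmitter interface, including Puppeteer's Browser, works):

```javascript
// Attach a handler that fires when the browser goes away
// (closed by the user, crashed, or disconnected).
function watchBrowser (browser, onClosed) {
  browser.on('disconnected', onClosed)
}

// Usage with a real browser (requires puppeteer):
// const browser = await puppeteer.launch({ headless: false })
// watchBrowser(browser, () => console.log('browser was closed mid-run'))
```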
While running my Puppeteer app with PM2's cluster mode enabled, only one of the processes seems to be utilized during concurrent requests instead of all 4 (one for each of my cores). Here's the basic flow of my program:
helpers.startChrome()
  .then((resp) => {
    // The callback must be async so we can await inside it.
    http.createServer(async function (req, res) {
      const { webSocketUrl } = JSON.parse(resp.body);
      const browser = await puppeteer.connect({ browserWSEndpoint: webSocketUrl });
      const page = await browser.newPage();
      // ... do puppeteer stuff
      await page.close();
      await browser.disconnect();
    })
  })
and here is the startChrome() function:
startChrome: function () {
  return new Promise(async (resolve, reject) => {
    const opts = {
      //chromeFlags: ["--no-sandbox", "--headless", "--use-gl=egl"],
      userDataDir: "D:/pupeteercache",
      output: 'json'
    };
    // Launch chrome using chrome-launcher.
    const chrome = await chromeLauncher.launch(opts);
    opts.port = chrome.port;
    // Fetch the DevTools endpoint info so callers can puppeteer.connect() to it.
    const resp = await util.promisify(request)(`http://localhost:${opts.port}/json/version`);
    resolve(resp);
  })
}
First, I use a package called chrome-launcher to start up Chrome, then I set up a simple HTTP server that listens for incoming requests to my app. When a request is received, I connect to the Chrome endpoint I set up through chrome-launcher at the beginning.
When I run this app in PM2's cluster mode, 4 separate Chrome tabs are opened (not sure why it works this way, but alright) and everything seems to run fine. But when I send the server 10 concurrent requests to check whether all processes are being used, only the first one is. I know this because when I run pm2 monit, only the first process uses any memory.
Can someone explain why all the processes aren't utilized? Is it because of how I'm using chrome-launcher, with one browser and multiple tabs instead of multiple browsers?
You cannot use the same user data directory for multiple instances at the same time. If you pass a user data directory that is already in use, no matter which launcher you use, Chrome will attach to the running process and open a new tab there instead.
Puppeteer normally creates a temporary profile whenever it launches the browser. So if you want to utilize 4 instances, pass a different user data directory to each one.
The browser is always opened with a clean slate between tests. My application remembers the login because authentication persists, but since the browser always starts fresh, I have to perform the login in the before hook of every fixture.
Is there some way I can open the browser so that user settings, cache, and local and session storage are remembered?
TestCafe doesn't offer a way to store the page state between tests and encourages writing independent tests. However, Roles API may meet some of your needs (refer to this comment for more details).
This is how I resolved it using the Role API.
Login.js page object file
import { Selector, Role } from 'testcafe';

const loginBtn = Selector('[type="submit"]');
const password = Selector('input[placeholder="Password"]');
const userName = Selector('input[placeholder="Email"]');

export const login = Role(`http://example.com/login`, async t => {
  await t
    .typeText(userName, `abc`)
    .typeText(password, `password`)
    .click(loginBtn);
});
Then I used this exported login role in my fixture file as shown below:
fixture.js
import { login } from '../page-objects/login';
fixture('Example Fixture').beforeEach(async t => {
  await t.useRole(login).navigateTo('url of the page that you want to open');
});
While setting up my Node.js Puppeteer proxy server, I ran into some things I don't understand. I'm on Linux Mint 19, running Puppeteer on Node.js. Everything works well when I run this:
const puppeteer = require('puppeteer');
const pptrFirefox = require('puppeteer-firefox');

(async () => {
  const browser = await puppeteer.launch({
    headless: false,
    args: ['--proxy-server=socks5://127.0.0.1:9050']
  });
  const page = await browser.newPage();
  await page.goto('http://www.whatismyproxy.com/');
  await page.screenshot({ path: 'example.png' });
  console.log('I took screenshot');
  await browser.close();
})();
The proxy is Tor running on the system. While my IP is changed and privacy works, Google and other websites recognize me as a bot (even with the proxy off). When I switch to puppeteer-firefox, the proxy flags don't work, but I am not recognized as a bot.
My goal is to avoid being recognized as a bot and to run my Puppeteer session incognito (in the future from Tails Linux, through a proxy). I'm looking forward to your answers :). I assure you this is only for development purposes. Regards to all.
Although Puppeteer and Puppeteer-Firefox share the same API, the values you pass in args are browser-specific.
Firefox doesn't support setting a proxy from the command-line arguments. But you can create a profile with the proxy settings and launch Firefox using that profile. There are many posts explaining how to create a profile and launch Firefox with it. This is one of them.
I've been starting a small project with Node.js and Puppeteer that requires the use of a proxy, and I've had some problems connecting through VPNGate's proxy servers.
This is the code I've used so far:
async function getIpTest () {
  const ips = await new ipGeneration(40);
  console.log(ips['#HostName']);
  const proxConnect = '--proxy-server=' + ips['#HostName'] + '.opengw.net';
  const browser = await puppeteer.launch({
    headless: false,
    ignoreHTTPSErrors: true,
    args: [proxConnect]
  });
  const page = await browser.newPage();
  // Note the space after "Basic": without it the header value is malformed.
  await page.setExtraHTTPHeaders({ 'Proxy-Authorization': 'Basic ' + Buffer.from('vpn:vpn').toString('base64') });
  await page.goto('http://www.whatsmyip.org/');
}
where
IPGeneration()
is just a module I made to parse their CSV file, and
proxConnect = '--proxy-server=' + ips['#HostName'] + '.opengw.net';
is part of the parsing and yields the same result if I put it as a string directly in the puppeteer.launch args.
I tried changing the port, or not using any. I tried a dozen different proxy addresses, and tried to connect directly by IP or by hostname.
I've looked everywhere online but can't seem to find why it is not working (I should mention everything works when I launch Puppeteer without the proxy).
Is it just VPNGate that won't work with Puppeteer?
EDIT: I was messing around and saw that they have config data to connect through OpenVPN. Could a simple working solution be to use node > OpenVPN > VPNGate servers? I'll try this now.