I'm trying to test my Chrome extension using GitHub Actions.
This is how I set up the tests and the fixtures:
extensionContext: async ({ exe_path }, use) => {
  const pathToExtension = path.join(__dirname, '..', 'build');
  const extensionContext = await chromium.launch({
    headless: false,
    args: [
      `--disable-extensions-except=${pathToExtension}`,
      `--load-extension=${pathToExtension}`,
    ],
    executablePath: exe_path,
  });
  await use(await extensionContext.newContext());
  await extensionContext.close();
},
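For comparison, the Playwright documentation loads extensions through a persistent context rather than a plain launch(). Below is a hedged sketch of that variant of the fixture, reusing the exe_path fixture and the extension path from the snippet above; treat it as an assumption to try, not a confirmed fix:

```javascript
// Fixture sketch only (names follow the question's snippet): the
// Playwright docs load extensions via launchPersistentContext, not
// launch().
extensionContext: async ({ exe_path }, use) => {
  const pathToExtension = path.join(__dirname, '..', 'build');
  // First argument is the user data dir; '' creates a temporary profile.
  const context = await chromium.launchPersistentContext('', {
    headless: false, // extensions only work in headful Chromium
    executablePath: exe_path,
    args: [
      `--disable-extensions-except=${pathToExtension}`,
      `--load-extension=${pathToExtension}`,
    ],
  });
  await use(context); // already a BrowserContext, no newContext() step
  await context.close();
},
```

Note that launchPersistentContext returns a BrowserContext directly, so the newContext() call from the original fixture is dropped; under xvfb-run this still needs headless: false.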
I'm running the system test inside GitHub Actions using a Makefile; the target runs the following command:
xvfb-run --auto-servernum --server-args='-screen 0 1920x1080x24' yarn run test --project=${BROWSER}
The setup is as follows (Also from the Makefile):
yarn install --frozen-lockfile
yarn run playwright install --with-deps
When I run the above code, chromium.launch hangs and the test fails after the 30s timeout.
When I remove the --disable-extensions-except flag, launch succeeds, but the extension is not loaded (although the --load-extension flag remains).
Is there a working example of how to test extensions in headful mode inside any CI/CD framework (preferably GitHub Actions)?
Additional Info:
Playwright Version: 1.28.0
Operating System: Linux (ubuntu-latest runner in GitHub Actions)
Node.js version: 18.12.0
Browser: chromium
Thanks
Related
I'm building an API that does some work with Puppeteer. It works in my local environment because I had set the executable path there.
The docs also say there is no need to set the executable path, but my error is:
Could not find expected browser (chrome) locally. Run `npm install` to download the correct Chromium revision (970485).
My code for the launch() is:
const browser = await puppeteer.launch({
  args: [
    '--no-sandbox',
    '--disable-setuid-sandbox',
  ],
});
const page = await browser.newPage();
Here I use Railway (a Heroku clone).
If I'm right, I have made a mistake with the executable path.
Let me know where I made the error.
Thank you.
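One pattern that helps on platforms like Railway or Heroku is letting an environment variable override the executable path, falling back to the Chromium bundled with Puppeteer locally. A hedged launch-config sketch; the variable name PUPPETEER_EXECUTABLE_PATH is an assumption about your deploy setup, not something your error output confirms:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    // On Railway/Heroku, point this at a Chrome installed by a buildpack;
    // locally, leave the variable unset to use the bundled Chromium.
    executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || undefined,
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  const page = await browser.newPage();
  // ... your scraping work ...
  await browser.close();
})();
```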
We have a Windows Electron application that runs e2e tests via Spectron. The application is platform-dependent and won't run on Linux (containers). We want to run our Spectron e2e tests inside a preconfigured Docker container to have them isolated.
To get a grasp of it, I have built a minimal Node.js application that does basically nothing and has an e2e test (Jest) that opens a browser tab and checks the title; no real functionality, just a simple spike.
I created a Dockerfile to build a container to run the tests:
FROM mcr.microsoft.com/windows:20H2-amd64
RUN mkdir "C:/app"
WORKDIR "C:/app"
COPY app "C:/app"
RUN powershell -Command \
    Set-ExecutionPolicy unrestricted;
ENV chocolateyUseWindowsCompression false
RUN powershell -Command \
    iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'));
RUN choco install googlechrome -y --version=91.0.4472.101 --ignore-checksums
RUN choco install chromedriver -y --version=91.0.4472.1010 --ignore-checksums
RUN choco install nodejs-lts -y --version=14.17.1
RUN npm config set strict-ssl false
RUN npm install
ENTRYPOINT npm test
Note this is a Windows container, as our main app will also need a Windows container to run. The container builds and runs the test, but it crashes with the error SessionNotCreatedError: session not created, from tab crashed. On my Windows host, the test runs fine.
Is there anything wrong with my Dockerfile or is this simply not possible in a Windows container?
I don't think it's relevant to the problem, but here is also the test file that gets executed when the container runs npm test:
const {
  Builder,
  By,
  Key,
  until
} = require('selenium-webdriver');

describe('test google.com', () => {
  let driver;

  beforeEach(() => {
    driver = new Builder()
      .forBrowser('chrome')
      .build();
  });

  // return the promise so Jest waits for the browser to close
  afterEach(() => driver.quit());

  it('should open google search', async () => {
    await driver.get('http://www.google.com');
    // await instead of a floating .then(), so the assertion
    // actually runs before the test finishes
    const title = await driver.getTitle();
    expect(title).toEqual('Google');
  });
});
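For what it's worth, the "from tab crashed" error in containers is often related to Chrome's sandbox and /dev/shm limits; on Linux the usual mitigation is passing --no-sandbox and --disable-dev-shm-usage through ChromeOptions. Whether this carries over to a Windows container is an assumption, but the beforeEach could be adapted like this as an experiment:

```javascript
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

let driver;

beforeEach(() => {
  const options = new chrome.Options()
    // Flags commonly needed when Chrome runs inside a container;
    // untested in a Windows container, treat this as an experiment.
    .addArguments('--no-sandbox', '--disable-dev-shm-usage', '--headless');
  driver = new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
});
```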
We had a similar problem, but we are using .NET Core with Selenium. For some reason installing Chromedriver did not work inside the container, so we had to do two things:
Manually download the driver based on the Chrome version and extract the zip into the working directory. (It's been a while though, and we did not really update the image; installing via choco may be working now.)
Even stranger, we had to install some fonts for some reason.
Take a look at my repo: https://github.com/yamac-kurtulus/Windows-Docker-Images/tree/master/DotnetCore%20Selenium%20With%20Chrome
The relevant part is after line 23 in the Dockerfile.
Note: If you are not very deep into the project, I strongly suggest migrating to Linux. Working with Docker on Windows is like a nightmare you cannot wake up from.
I am trying to debug an issue which causes headless Chrome using Puppeteer to behave differently on my local environment and on a remote environment such as AWS or Heroku.
The application tries to search public available jobs on LinkedIn without authentication (no need to look at profiles), the url format is something like this: https://www.linkedin.com/jobs/search?keywords=Engineer&location=New+York&redirect=false&position=1&pageNum=0
When I open this url in my local environment I have no problems, but when I try to do the same thing on a remote machine such as an AWS EC2 instance or a Heroku dyno, I am redirected to a login form by LinkedIn. To debug this difference I've built a Docker image (based on this image) to have isolation from my local Chrome/profile:
Dockerfile
FROM buildkite/puppeteer
WORKDIR /app
COPY . .
RUN npm install
CMD node index.js
EXPOSE 9222
index.js
const puppeteer = require("puppeteer-extra");
puppeteer.use(require("puppeteer-extra-plugin-stealth")());

const testPuppeteer = async () => {
  console.log('Opening browser');
  const browser = await puppeteer.launch({
    headless: true,
    slowMo: 20,
    args: [
      '--remote-debugging-address=0.0.0.0',
      '--remote-debugging-port=9222',
      '--single-process',
      '--lang=en-GB',
      '--disable-dev-shm-usage',
      '--no-sandbox',
      '--disable-setuid-sandbox',
      "--proxy-server='direct://'",
      '--proxy-bypass-list=*',
      '--disable-gpu',
      '--allow-running-insecure-content',
      '--enable-automation',
    ],
  });

  console.log('Opening page...');
  const page = await browser.newPage();
  console.log('Page open');

  const url = "https://www.linkedin.com/jobs/search?keywords=Engineer&location=New+York&redirect=false&position=1&pageNum=0";
  console.log('Opening url', url);
  await page.goto(url, {
    waitUntil: 'networkidle0',
  });
  console.log('Url open');

  // page && await page.close();
  // browser && await browser.close();
  console.log("Done! Leaving page open for remote inspection...");
};

(async () => {
  await testPuppeteer();
})();
The docker image used for this test can be found here.
I've run the image on my local environment with the following command:
docker run -p 9222:9222 spinlud/puppeteer-linkedin-test
Then, from the local Chrome browser's chrome://inspect page, it should be possible to inspect the GUI of the application (I have deliberately left the page open in the headless browser):
As you can see, even in local Docker the page opens without authentication.
I've done the same test on an AWS EC2 (Amazon Linux 2) with Docker installed. It needs to be a public instance with SSH access and an inbound rule to allow traffic through port 9222 (for remote Chrome debugging).
I've run the same command:
docker run -p 9222:9222 spinlud/puppeteer-linkedin-test
Then, again from the local Chrome browser's chrome://inspect page, once I had added the remote public IP of the EC2 instance, I was able to inspect the GUI of the remote headless Chrome as well:
As you can see, this time LinkedIn requires authentication. We can also see a difference in the cookies:
I can't understand the reasons behind this different behaviour between my local and remote environments. In theory Docker should provide isolation, and in both environments the headless browser should start with no cookies and a fresh (empty) session. Still, there is a difference and I can't figure out why.
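One way to narrow this down is to log what each environment actually presents to LinkedIn. A hedged diagnostic fragment, meant to be pasted after the page.goto call in the index.js above; it only uses standard Puppeteer APIs (page.evaluate, page.cookies, page.url):

```javascript
// Diagnostic only: compare these values between local Docker and EC2.
const fingerprint = await page.evaluate(() => ({
  userAgent: navigator.userAgent,
  webdriver: navigator.webdriver,
  languages: navigator.languages,
}));
console.log('Fingerprint:', fingerprint);

// Cookies LinkedIn set for this session (a fresh container starts empty).
const cookies = await page.cookies();
console.log('Cookies after goto:', cookies.map(c => c.name));

// The final URL tells you whether you were redirected to the login form.
console.log('Landed on:', page.url());
```

If the fingerprints and cookies match but the remote still redirects, my guess (an assumption, not verified) is that the difference is on LinkedIn's side, e.g. treating datacenter IP ranges differently, rather than inside the container.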
Does anyone have any clue?
I am getting this error again and again while launching the application. I have reinstalled Puppeteer eight or nine times and even downloaded all the dependencies listed in the troubleshooting link.
Error: Failed to launch the browser process! spawn /home/......./NodeJs/Scraping/code3/node_modules/puppeteer/.local-chromium/linux-756035/chrome-linux/chrome ENOENT
TROUBLESHOOTING: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md
This code just takes a screenshot of google.com.
Node.js version: 14.0.0
Puppeteer version: 4.0.1
Ubuntu version: 20.04
I am using puppeteer, which comes bundled with Chromium.
const puppeteer = require("puppeteer"); // this require was missing
const chalk = require("chalk");

// MY OCD of colorful console.logs for debugging... IT HELPS
const error = chalk.bold.red;
const success = chalk.keyword("green");

(async () => {
  let browser;
  try {
    // open the headless browser
    browser = await puppeteer.launch({ headless: false });
    // open a new page
    const page = await browser.newPage();
    // enter url in page
    await page.goto(`https://www.google.com/`);
    // Google Say Cheese!!
    await page.screenshot({ path: "example.png" });
    await browser.close();
    console.log(success("Browser Closed"));
  } catch (err) {
    // Catch and display errors
    console.log(error(err));
    // browser may be undefined if launch itself failed
    if (browser) await browser.close();
    console.log(error("Browser Closed"));
  }
})();
As you said, Puppeteer 2.x.x works for you perfectly but 4.x.x doesn't: it seems to be a Linux dependency issue, which occurs more often since Puppeteer 3.x.x (usually libgbm1 is the culprit).
If you are not sure where your Chrome executable is located, first run:
whereis chrome
(e.g.: /usr/bin/chrome)
Then, to find your missing dependencies, run:
ldd /usr/bin/chrome | grep not
and sudo apt-get install the listed dependencies.
After this you should be able to do a clean npm install on your project with the latest Puppeteer as well (as of today that is 5.0.0).
I am trying to run my Protractor tests on live-server via one Grunt task.
I have installed live-server (https://www.npmjs.com/package/live-server) and grunt-execute. With grunt-execute I managed to start live-server with a Grunt command in two steps:
First, I created a node script (liveServer.js):
var liveServer = require("live-server");

var params = {
  port: 8080,
  host: "localhost",
  open: true,
  wait: 1000
};

liveServer.start(params);
Second, I created a task in my Gruntfile to start the script
(in grunt.initConfig):
execute: {
  liveserver: {
    src: ['liveServer.js']
  },
}
and registered a command to trigger it:
grunt.registerTask('live', [
  'execute:liveserver'
]);
Now if I run "grunt live" on my command line, live-server starts, opens a browser, and I can browse my application.
I also created a Protractor task in my Gruntfile, which works just fine as well.
(in grunt.initConfig)
protractor: {
  options: {
    keepAlive: false,
    noColor: false
  },
  e2e: {
    options: {
      configFile: 'protractor.conf.js',
    }
  }
},
If I trigger it with a registered task, the Protractor tests run just fine; I only have to make sure live-server is running first.
So of course I want to combine the two into one command that starts live-server and then runs my Protractor tests.
So I tried:
grunt.registerTask('runProtractor', [
  'execute:liveserver',
  'protractor'
]);
But unfortunately this does not work: live-server starts and then... nothing happens; the Protractor tests aren't run. I tried changing some of the live-server parameters such as open and wait, but without any luck. There are no error messages either.
As I said before, the two tasks both work fine separately (with two command windows: first start live-server in one, then run Protractor in the other).
Does anybody have a clue why my task sequence does not continue after live-server has started?
The execution of live-server blocks all subsequent tasks, since it doesn't "finish": to Grunt the task is still running, which is why it won't proceed to the next task. You can use grunt-concurrent to run the tasks in parallel.
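A hedged sketch of what that could look like in the Gruntfile, using grunt-concurrent's logConcurrentOutput option; task names follow the question's config, and this is a starting point rather than a tested setup:

```javascript
// Gruntfile sketch: grunt-concurrent runs both tasks side by side,
// so the never-finishing live-server task no longer blocks protractor.
module.exports = function (grunt) {
  grunt.initConfig({
    concurrent: {
      e2e: {
        tasks: ['execute:liveserver', 'protractor'],
        options: {
          logConcurrentOutput: true // interleave output from both tasks
        }
      }
    }
    // ...the existing execute and protractor config from the question...
  });

  grunt.loadNpmTasks('grunt-concurrent');
  grunt.registerTask('runProtractor', ['concurrent:e2e']);
};
```

Note that grunt-concurrent starts both tasks at once and does not wait for the server to be ready, so if Protractor occasionally races the server startup, a short wait or retry in the Protractor config may still be needed (an assumption, not something grunt-concurrent handles for you).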