How to communicate with a Chrome extension background page in Electron?

Since Electron doesn't implement chrome.runtime.onMessageExternal, how can we communicate with a Chrome extension's background page from the main or renderer process?

Actually, the extension background page is just one of the webContents. With webContents.getAllWebContents() you can find all webContents and filter for wc.getType() === 'backgroundPage'. Then load a preload file for it; you can do anything you need in that preload file.
const { webContents } = require('electron');
const path = require('path');

webContents.getAllWebContents().forEach((wc) => {
  if (wc.getType() === 'backgroundPage') {
    const preloadPath = path.join(__dirname, 'preloadForExtension.js');
    // Register the preload only once per session
    if (!wc.session.getPreloads().includes(preloadPath)) {
      wc.session.setPreloads(wc.session.getPreloads().concat(preloadPath));
    }
  }
});
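The answer doesn't show the preload itself; below is a minimal sketch of what preloadForExtension.js could contain, assuming you simply want to relay messages between the background page and the main process over IPC (the channel names from-extension and to-extension are invented for illustration):
// preloadForExtension.js — a sketch of a bridge between the extension's
// background page and Electron's main process (channel names are made up).
const { ipcRenderer } = require('electron');

// Relay anything the background page posts via window.postMessage to main.
window.addEventListener('message', (event) => {
  ipcRenderer.send('from-extension', event.data);
});

// Relay messages from the main process back into the background page.
ipcRenderer.on('to-extension', (_event, payload) => {
  window.postMessage(payload, '*');
});
In the main process you would then listen with ipcMain.on('from-extension', …) and push messages back with wc.send('to-extension', …).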

Related

React: add HTML from generic file path at server-side build time

The use case I'm trying to fulfill:
Admin adds SVG along with new content in CMS, specifying in the CMS which svg goes with which content
CMS commits change to git (Netlify CMS)
Static site builds again
SVG is added inline so that it can be styled and/or animated according to the component in which it occurs
Now I can't figure out a clean way to add the SVG inline. My logic tells me that everything is available at build time (the SVGs are in the repo), so I should be able to simply inline them. But I don't know how to generically tell React about an SVG based on variables coming from the CMS content. I can import the SVG directly using svgr/webpack, but then I need to know the file name while coding, which I don't, since it comes from the CMS. I can load the SVG using fs.readFileSync, but then the SVG gets lost when React executes client-side.
I added my current solution as an answer, but it's very hacky. Please tell me there's a better way to do this with React!
Here is my current solution, but it's randomly buggy in dev mode and doesn't seem to play well with Next.js <Link /> prefetching (I still need to debug this):
I. Server-Side Rendering
Read SVG file path from CMS data (Markdown files)
Load SVG using fs.readFileSync()
Sanitize and add the SVG in React
II. Client-Side Rendering
The initial GET response for the URL contains the SVGs (SSR worked as intended)
Read the SVGs out of the DOM using HTMLElement.outerHTML
When React wants to render the SVG which it doesn't have, pass it the SVG from the DOM
Here is the code.
import reactParse from "html-react-parser";
import DOMPurify from "isomorphic-dompurify";
import * as fs from "fs";

const svgs = {}; // { urlPath: svgCode }

const createServerSide = (urlPath) => {
  const path = "./public" + urlPath;
  let svgCode = DOMPurify.sanitize(fs.readFileSync(path, "utf8"));
  // add an id so the SVG can be found client-side;
  // the unique identifier is the SVG's file path in the git repository
  svgCode = svgCode.replace("<svg", `<svg id="${urlPath}"`);
  svgs[urlPath] = svgCode;
};

const readClientSide = (urlPath) => {
  const svgElement = document.getElementById(urlPath);
  svgs[urlPath] = svgElement.outerHTML;
};

const registerSVG = (urlPath) => {
  if (typeof window === "undefined") {
    createServerSide(urlPath);
  } else {
    readClientSide(urlPath);
  }
  return true;
};

const inlineSVGFromCMS = (urlPath) => {
  if (!svgs[urlPath]) {
    registerSVG(urlPath);
  }
  return reactParse(svgs[urlPath]);
};

export default inlineSVGFromCMS;
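For illustration, a hedged example of how a component might consume this helper; the Hero component and the svgPath prop are assumptions, not part of the original setup:
import React from "react";
import inlineSVGFromCMS from "./inlineSVGFromCMS";

// `svgPath` is assumed to be the SVG's URL path under ./public,
// e.g. "/images/hero.svg", as stored in the CMS markdown.
const Hero = ({ svgPath, title }) => (
  <div className="hero">
    <h1>{title}</h1>
    {inlineSVGFromCMS(svgPath)}
  </div>
);

export default Hero;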

Why doesn't Chromium open in headless mode?

I have the following Node.js code to open Chromium in headless mode and record a web page to a video:
const { launch, getStream } = require("puppeteer-stream");
const fs = require("fs");
const { exec } = require("child_process");

async function test() {
  const browser = await launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://www.someurl.com");
  const stream = await getStream(page, { audio: true, video: true });

  // record the web page to an mp4 video
  const ffmpeg = exec("ffmpeg -y -i - output.mp4");
  stream.pipe(ffmpeg.stdin);

  // stop recording after one minute
  setTimeout(async () => {
    await stream.destroy();
    stream.on("end", () => {});
  }, 1000 * 60);
}

test();
The code above works properly but doesn't open Chromium in headless mode. No matter what I do, the browser is still opened and visible when browsing the page. No error is thrown.
Does anyone know why it isn't opened in headless mode?
Thanks
It says in the documentation for puppeteer-stream:
Notice: This will only work in headful mode
This is due to a limitation of Chromium where the Tab Capture API for the extension doesn't work in headless mode. (There are a couple bug reports about this, but I can't find the links at the moment.)
I had the same issue where headless mode doesn't work with some websites and elements (blank page content, elements not found, etc.).
But there is another method to "simulate" headless mode: minimize the window and move it to a position the user cannot see.
This doesn't hide the Chrome task from the taskbar, but the Chrome window itself stays hidden from the user.
Just use the following arguments:
var chromeOptions = new ChromeOptions();
chromeOptions.AddArguments(new List<string>() { "--window-size=1,1", "--window-position=-2000,0" }); // This hides the Chrome window
var chromeDriverService = ChromeDriverService.CreateDefaultService();
chromeDriverService.HideCommandPromptWindow = true; // This hides the console window.
ChromeDriver driver = new ChromeDriver(chromeDriverService, chromeOptions);
driver.Navigate().GoToUrl("https://google.com");
In short, the important part:
chromeOptions.AddArguments(new List<string>() { "--window-size=1,1", "--window-position=-2000,0" });
chromeDriverService.HideCommandPromptWindow = true;
//driver.Manage().Window.Minimize(); // use this if the code above does not work
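Translated back to the asker's Puppeteer setup, the same off-screen trick might look like this (a sketch, assuming puppeteer-stream's launch() forwards standard Puppeteer launch options such as args):
const { launch } = require("puppeteer-stream");

(async () => {
  // Headful, as puppeteer-stream requires, but shrunk and pushed off-screen
  // so the window is effectively invisible to the user.
  const browser = await launch({
    headless: false,
    args: ["--window-size=1,1", "--window-position=-2000,0"],
  });
  const page = await browser.newPage();
  await page.goto("https://www.someurl.com");
})();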

Angular + Workbox build: ChunkLoadError: Loading chunk # failed, and "Refused to execute script because its MIME type..."

I have added Workbox to Angular. On the first production deploy everything works fine, but after updating a module, rebuilding Angular and injecting Workbox, then visiting the site, I see the service worker update to the new version and refresh the page. But now, trying to use the updated module, I get these errors:
Refused to execute script from 'https://example.com/8-es2015.203674bf0547eff7ff27.js'
because its MIME type ('text/html') is not executable,
and strict MIME type checking is enabled.
main-es2015.45ba4a512f87eefb1b3a.js:1 ERROR Error: Uncaught (in promise): ChunkLoadError:
Loading chunk 8 failed.(error: https://example.com/8-es2015.203674bf0547eff7ff27.js)
ChunkLoadError: Loading chunk 8 failed......
I looked at the network tab in Chrome and I see that the file 8-es2015.203674bf0547eff7ff27.js is being served from the disk cache, unlike the rest of the files, which are served by the ServiceWorker. Its content is the index.html file; I don't know where it came from, it's not even part of the new build. Chrome places it in the top frame section under scripts.
What's the reason for this error? In angular.json I have "outputHashing": "all", and I delete everything and rebuild but still get this error. Only after I clear the browser cache, remove the ServiceWorker and hard refresh does the error stop happening, until I reload the page and it returns. Do I need to delete all the caches after every update? I thought Workbox does this automatically. Should I add something like this to sw.js:
self.addEventListener('activate', event => event.waitUntil(
  caches.keys().then(cacheNames =>
    cacheNames.forEach(name => caches.delete(name))
  )
));
I'm using Express, so I have set the maxAge on sw.js to 0, and I even changed the public route for the static files to a deeper route, but nothing:
app.use('/sw.js', express.static(path.resolve('./public/dist/static/sw.js'), {maxAge: 0}));
app.use('/', express.static(path.resolve('./public/dist/static/'), {maxAge: 86400000}));
tools: angular 8.2.4 - workbox 4.3.1
Update
I removed Workbox and the app worked. I'm guessing it's because of their new package workbox-window, or the way I'm trying to use it. I have placed it in a service that is loaded from app.module, and the service is then called from AppComponent's ngOnInit. This could be the wrong way of initializing it.
code setup:
import {Workbox} from 'workbox-window';

@Injectable()
export class WorkerService {
  supportWorker: boolean;
  supportPush: boolean;

  constructor(@Inject(WINDOW) private window: any, private loggerService: LoggerService) {
    this.supportWorker = ('serviceWorker' in navigator);
    this.supportPush = (this.supportWorker && 'PushManager' in window);
  }

  initWorker() {
    if (this.supportWorker && environment.production) {
      const wb = new Workbox('sw.js');
      if (wb) {
        wb.addEventListener('installed', event => {
          if (event.isUpdate) {
            // output a toast translated message to users
            this.loggerService.info('App.webWorkerUpdate', 10000);
            setTimeout(() => this.window.location.reload(), 10000);
          }
        });
        wb.addEventListener('activated', event => {
          if (!event.isUpdate) {
            this.loggerService.success('App.webWorkerInit', 10000);
          }
        });
        wb.register();
      }
    }
  }
}
This is the app component. I thought it would be best to add it to main.ts after bootstrapModule().then(), but I didn't know how to inject a service in that method (see the sketch after the component below).
@Component({
  selector: 'app-root',
  template: '<route-handler></route-handler>'
})
export class AppComponent implements OnInit {
  constructor(private ws: WorkerService) {
  }

  ngOnInit(): void {
    this.ws.initWorker();
  }
}
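For reference, a sketch of the main.ts variant mentioned above: once bootstrapModule() resolves, the service can be pulled out of the root injector. The import path for WorkerService is an assumption here.
// main.ts — a sketch; assumes WorkerService is provided by AppModule.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';
import { WorkerService } from './app/worker.service';

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .then(moduleRef => {
    // Resolve the service from the root injector and start the worker.
    moduleRef.injector.get(WorkerService).initWorker();
  })
  .catch(err => console.error(err));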
After setting up Workbox in a different way it worked. The problem affected even Chrome itself, which failed to clear all caches after each build while testing; I had to use incognito to make sure everything worked.
Here is the solution, thanks to Ralph Schaer's article, a must-read. His method is to not cache-bust the chunks the Angular build generates. He also globs all of Workbox's production scripts used by the app into the build folder, and finally, in index.html, he calls on workbox-window to register the service worker.

Electron PDF viewer

I have an Electron app that loads a URL from a PHP server, and the page contains an iframe whose source is a PDF. The PDF page looks absolutely fine in a normal web browser but asks for a download in Electron. Any help?
My code for the HTML page is:
<h1>Hello World!</h1>
Some html content here...
<iframe src="http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf" width="1200" height="800"></iframe>
And my js code is something like
const { app, BrowserWindow } = require('electron')
const path = require('path')
const url = require('url')

let mainWindow
function createWindow () {
  mainWindow = new BrowserWindow({ width: 800, height: 600 })
  mainWindow.loadURL(url.format({
    pathname: path.join(__dirname, 'index.html'),
    protocol: 'file:',
    slashes: true
  }))
}
app.on('ready', createWindow)
Any help would be greatly appreciated...
Electron already ships with an integrated PDF viewer.
So you can load PDF files just like normal HTML files; the PDF viewer will automatically show up.
E.g. in a BrowserWindow with .loadURL(…), in <iframe>s, <object>, and also with the (currently discouraged) <webview>.
PS: Enabling the plugins property on the BrowserWindow or <webview> has not been necessary since Electron 9.
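For example, a minimal sketch that relies only on the built-in viewer, using the PDF URL from the question:
const { app, BrowserWindow } = require('electron')

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 })
  // The built-in PDF viewer renders this directly; no plugins flag is needed on Electron >= 9.
  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})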
You will need
https://github.com/gerhardberger/electron-pdf-window
Example:
const { app } = require('electron')
const PDFWindow = require('electron-pdf-window')

app.on('ready', () => {
  const win = new PDFWindow({
    width: 800,
    height: 600
  })
  win.loadURL('http://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
})
This answer will focus on an implementation with Angular.
After a year of waiting (for it to be solved by Electron), I finally decided to apply a workaround. For the people who need it done, here it goes. The workaround comes at the cost of about 500K of extra bundle size (for Angular).
The workaround is to use Mozilla's PDF.js library (available on NPM and GitHub).
Implementation 1 (setting nodeIntegration: true)
This implementation has no issues; you can follow the documentation of the library mentioned. But if you run into an additional problem, like a white window appearing when the route changes, it is due to setting the nodeIntegration property to true. If so, use the following implementation.
Implementation 2 (setting nodeIntegration: false)
This is Electron's default. With this configuration, viewing the PDF is a bit tricky. The solution is to use a Uint8Array instead of a blob or base64 string.
You can use the following function to convert base64 to Uint8Array.
base64ToArrayBuffer(data): Uint8Array {
  const input = data.substring(data.indexOf(',') + 1);
  const binaryString = window.atob(input ? input : data);
  const binaryLen = binaryString.length;
  const bytes = new Uint8Array(binaryLen);
  for (let i = 0; i < binaryLen; i++) {
    const ascii = binaryString.charCodeAt(i);
    bytes[i] = ascii;
  }
  return bytes;
}
Or convert a blob to a Uint8Array via an ArrayBuffer:
const blob = response;
const arrayBuffer = await new Response(blob).arrayBuffer();
const pdfSource = new Uint8Array(arrayBuffer);
Then pass the generated Uint8Array as the pdfSource to ng2-pdfjs-viewer.
HTML
<ng2-pdfjs-viewer zoom="100" [pdfSrc]="pdfSource"></ng2-pdfjs-viewer>
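For completeness, a hedged sketch of the component side, assuming the PDF arrives as a blob from Angular's HttpClient; the component, endpoint, and property names are illustrative only:
// Illustrative only: fetch the PDF as a blob and hand it to ng2-pdfjs-viewer
// as a Uint8Array. The '/api/report.pdf' endpoint is a made-up example.
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-pdf-view',
  template: '<ng2-pdfjs-viewer zoom="100" [pdfSrc]="pdfSource"></ng2-pdfjs-viewer>'
})
export class PdfViewComponent implements OnInit {
  pdfSource: Uint8Array;

  constructor(private http: HttpClient) {}

  async ngOnInit(): Promise<void> {
    const blob = await this.http
      .get('/api/report.pdf', { responseType: 'blob' })
      .toPromise();
    const arrayBuffer = await new Response(blob).arrayBuffer();
    this.pdfSource = new Uint8Array(arrayBuffer);
  }
}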
Electron 9.0.0 already has the PDF viewer enabled.
npm install electron@9.0.0

Web Scrape Meteor Pages

I'm trying to write an application that scrapes a Meteor web page. This is rather difficult, as Meteor pages are initially rendered entirely by JavaScript. Is there perhaps some way to render the page with some sort of scraper?
I'm probably going to do it with Node, if that helps.
Thanks
You could use PhantomJS to render the web page. This is an example, specifically designed for Meteor pages (taken from spiderable), to capture their HTML:
var fs = require('fs');
var child_process = require('child_process');

console.log('Loading a web page');
var page = require('webpage').create();

page.open("http://localhost:3000", function(status) {
});

var i = 0;
setInterval(function() {
  // Ask the page whether Meteor has connected and all subscriptions are ready
  var ready = page.evaluate(function () {
    if (typeof Meteor !== 'undefined'
        && typeof(Meteor.status) !== 'undefined'
        && Meteor.status().connected) {
      Deps.flush();
      return DDP._allSubscriptionsReady();
    }
    return false;
  });

  console.log("Ready", ready);
  if (ready) {
    // All data is loaded: dump the rendered HTML and quit
    var out = page.content;
    console.log(out);
    phantom.exit();
  }
}, 100);
It works this way, but you could also wrap it from Node and capture its output using require('child_process').exec.
You can run the code with phantomjs script.js and it will give you back the HTML of the Meteor page.
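A minimal sketch of the Node-side wrapper mentioned above, assuming the PhantomJS script is saved as script.js and phantomjs is on the PATH:
const { exec } = require('child_process');

// Run the PhantomJS script and capture its stdout; with the script above
// this includes the log lines followed by the rendered HTML.
exec('phantomjs script.js', (error, stdout, stderr) => {
  if (error) {
    console.error(stderr);
    return;
  }
  console.log(stdout);
});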
If the site has the spiderable package enabled, you can pretend to be a web crawler to get the server to render the page for you.
If you don't control the server or spiderable isn't enabled, you will probably have to use Selenium, but the crawling will be CPU-intensive and slow.
