Dispatch event from chrome extension clears event data transfer - google-chrome-extension

I want to simulate an event that uses the dataTransfer field on the event. Before dispatch I manually create dataTransfer on the event, but after dispatch it's not present there. When I launch the same code from the dev console (not the Chrome extension) everything works. How can I dispatch an event with a filled dataTransfer from a Chrome extension?
function createCustomEvent(type) {
    var event = new CustomEvent("CustomEvent")
    event.initCustomEvent(type, true, true, null)
    event.dataTransfer = {
        data: {},
        setData: function(type, val) {
            this.data[type] = val
        },
        getData: function(type) {
            return this.data[type]
        }
    }
    return event
}
var event = createCustomEvent('dragstart')
dispatchEvent(sourceNode, 'dragstart', event)
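For comparison, here is a minimal sketch (not from the original post) that builds the drag event with a native DataTransfer object instead of attaching a plain expando property; whether it behaves differently when dispatched from a content script's isolated world would still need to be verified:
// Sketch: use the built-in DataTransfer/DragEvent constructors so the data
// travels on a native event property rather than a custom one.
var dt = new DataTransfer()
dt.setData('text/plain', 'payload')
var dragEvent = new DragEvent('dragstart', {
    bubbles: true,
    cancelable: true,
    dataTransfer: dt
})
sourceNode.dispatchEvent(dragEvent)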

Related

How to get the path to the directory of the opened file?

I'm Kenyon Bowers.
I have some code that opens an open-file dialog. It opens .DSCProj files (which are specific to my project), and I am going to run some terminal commands in the directory that the opened file is in.
I have no idea how to do that.
preload.ts:
import { ipcRenderer, contextBridge } from "electron";
import { dialog } from '@electron/remote'

contextBridge.exposeInMainWorld("api", {
    showOpenFileDialog: () => dialog.showOpenDialogSync({
        properties: ["openFile"],
        filters: [
            {
                name: "DSC Projects",
                extensions: ["DSCProj"],
            },
        ],
    })
});
NewProject.ts:
declare var api: any;

function OpenProject(): void {
    const file = api.showOpenFileDialog();
    console.log("Done")
    if (file != null) {
        localStorage.setItem('DirPath', file);
        location.href = './views/projectOpen.html'
    }
}

(() => {
    document.querySelector('#btn-open-project')?.addEventListener('click', () => {
        OpenProject();
    }),
    document.querySelector('#btn-new-project')?.addEventListener('click', () => {
        location.href = './views/projectNew.html'
    })
})()
As you can see in OpenProject, I'm setting local storage to the file's path with localStorage.setItem('DirPath', file). But I need to set it to the path of the directory that the file is in.
Whilst use of @electron/remote is great and all, you may be better served by implementing certain Electron modules within the main thread instead of the render thread(s). This is primarily for security, but as a strong second reason, it keeps your code separated, i.e. separation of concerns.
Unlike vanilla JavaScript, Node.js has a simple function, path.parse(path).dir, to easily remove the file name (and extension) from the file path without needing to worry about which OS (i.e. which directory separator) you are using. This would also be implemented within your main thread. Implementing something like this within your render thread would take a lot more work with vanilla JavaScript to be OS-proof.
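For reference, a quick sketch of what path.parse() returns; the path below is purely illustrative:
// Illustrative only: split a project file path into its parts with Node's path module.
const nodePath = require('path');

const parsed = nodePath.parse('/home/user/projects/demo.DSCProj');
console.log(parsed.dir);  // '/home/user/projects'  <- the directory we want
console.log(parsed.name); // 'demo'
console.log(parsed.ext);  // '.DSCProj'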
Lastly, in the code below I will use a preload.js script that only deals with the movement of messages and their data between the main thread and render thread(s). I do not believe that concretely implementing functions in your preload.js script(s) is the right approach (though others may argue).
Note: I am not using TypeScript in the code below, but you should get the general idea.
Let's use the channel name getPath within the invoke method.
preload.js (main thread)
// Import the necessary Electron components.
const contextBridge = require('electron').contextBridge;
const ipcRenderer = require('electron').ipcRenderer;

// White-listed channels.
const ipc = {
    'render': {
        // From render to main.
        'send': [],
        // From main to render.
        'receive': [],
        // From render to main and back again.
        'sendReceive': [
            'getPath'
        ]
    }
};

// Exposed protected methods in the render process.
contextBridge.exposeInMainWorld(
    // Allowed 'ipcRenderer' methods.
    'ipcRender', {
        // From render to main.
        send: (channel, args) => {
            let validChannels = ipc.render.send;
            if (validChannels.includes(channel)) {
                ipcRenderer.send(channel, args);
            }
        },
        // From main to render.
        receive: (channel, listener) => {
            let validChannels = ipc.render.receive;
            if (validChannels.includes(channel)) {
                // Deliberately strip event as it includes `sender`.
                ipcRenderer.on(channel, (event, ...args) => listener(...args));
            }
        },
        // From render to main and back again.
        invoke: (channel, args) => {
            let validChannels = ipc.render.sendReceive;
            if (validChannels.includes(channel)) {
                return ipcRenderer.invoke(channel, args);
            }
        }
    }
);
Now, in the main thread, let's listen for a call on the getPath channel. When called, open up the dialog and upon the return of a path, process it with Node's path.parse(path).dir function to remove the file name (and extension). Lastly, return the modified path.
main.js (main thread)
const electronApp = require('electron').app;
const electronBrowserWindow = require('electron').BrowserWindow;
const electronDialog = require('electron').dialog;
const electronIpcMain = require('electron').ipcMain;

const nodePath = require("path");

let window;

function createWindow() {
    const window = new electronBrowserWindow({
        x: 0,
        y: 0,
        width: 800,
        height: 600,
        show: false,
        webPreferences: {
            nodeIntegration: false,
            contextIsolation: true,
            preload: nodePath.join(__dirname, 'preload.js')
        }
    });

    window.loadFile('index.html')
        .then(() => { window.show(); });

    return window;
}

electronApp.on('ready', () => {
    window = createWindow();
});

electronApp.on('window-all-closed', () => {
    if (process.platform !== 'darwin') {
        electronApp.quit();
    }
});

electronApp.on('activate', () => {
    if (electronBrowserWindow.getAllWindows().length === 0) {
        createWindow();
    }
});

// -----

// Let's listen for a call on the 'getPath' channel.
electronIpcMain.handle('getPath', async () => {
    // Dialog options.
    const options = {
        properties: ["openFile"],
        filters: [
            {
                name: "DSC Projects",
                extensions: ["DSCProj"],
            }
        ]
    }

    // When available, return the modified path back to the render thread via IPC.
    return await openDialog(window, options)
        .then((result) => {
            // User cancelled the dialog.
            if (result.canceled === true) { return; }

            // Modify and return the path.
            let path = result.filePaths[0];
            let modifiedPath = nodePath.parse(path).dir; // Here's the magic.
            console.log(modifiedPath); // Testing
            return modifiedPath;
        })
})

// Create an open dialog.
function openDialog(parentWindow, options) {
    return electronDialog.showOpenDialog(parentWindow, options)
        .then((result) => { if (result) { return result; } })
        .catch((error) => { console.error('Show open dialog error: ' + error); });
}
Here is the general idea of how to use the returned result.
index.html (render thread)
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Open Project Test</title>
    </head>

    <body>
        <input type="button" id="btn-open-project" value="Open Project">
    </body>

    <script>
        document.getElementById('btn-open-project').addEventListener('click', () => {
            openProject();
        });

        function openProject() {
            window.ipcRender.invoke('getPath')
                .then((path) => {
                    // As we are using "invoke" a response will be returned, even if undefined.
                    if (path === undefined) { return; } // When user cancels dialog.

                    console.log(path); // Testing.
                    // window.localStorage.setItem('DirPath', path);
                    // location.href = './views/projectOpen.html';
                });
        }
    </script>
</html>

Bull queue is getting added but never completed

I'm working on an Express app that uses several Bull queues in production. We created a wrapper around Bull's Queue (a stripped-down version of it is below).
import logger from '~/libs/logger'
import BullQueue from 'bull'
import Redis from '~/libs/redis'
import {report} from '~/libs/sentry'
import {ValidationError, RetryError} from '~/libs/errors'

export default class Queue {
    constructor(opts = {}) {
        if (!opts.label) {
            throw new ValidationError('Cannot create queue without label')
        }
        if (!this.handler) {
            throw new ValidationError(`Cannot create queue ${opts.label} without handler`)
        }
        this.label = opts.label
        this.jobOpts = Object.assign({
            attempts: 3,
            backoff: {
                type: 'exponential',
                delay: 10000,
            },
            concurrency: 1,
            // clean up jobs on completion - prevents redis slowly filling
            removeOnComplete: true
        }, opts.jobOpts)
        const queueOpts = Object.assign({
            // `client` and `subscriber` are presumably shared Redis connections
            // defined elsewhere (stripped out of this reduced version).
            createClient: function (type) {
                switch (type) {
                    case 'client':
                        return client
                    case 'subscriber':
                        return subscriber
                    default:
                        return new Redis().client
                }
            }
        }, opts.queueOpts)
        this.queue = new BullQueue(this.label, queueOpts)
        if (process.env.NODE_ENV === 'test') {
            return
        }
        logger.info(`Queue:${this.label} created`)
    }

    add(data, opts = {}) {
        const jobOpts = Object.assign(this.jobOpts, opts)
        return this.queue.add(data, jobOpts)
    }
}
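One thing this stripped-down wrapper doesn't show is where the handler is registered with Bull. Purely as an illustration of Bull's add/process pairing (this is not the poster's code), a queue only moves jobs out of the waiting state once a processor has been registered:
// Hypothetical, minimal illustration of Bull's add/process pairing.
import BullQueue from 'bull'

const demo = new BullQueue('demo-queue')

// Without a process() call like this, jobs are added but never picked up.
demo.process(1, async (job) => {
    console.log('processing', job.data)
})

demo.add({hello: 'world'})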
Then I created a queue that is supposed to send a GET request using node-fetch
import Queue from '~/libs/queue'
import Sentry from '~/libs/sentry'
import fetch from 'node-fetch'

class IndexNowQueue extends Queue {
    constructor(options = {}) {
        super({
            label: 'documents--index-now'
        })
    }

    async handler(job) {
        Sentry.addBreadcrumb({category: 'async'})
        const {permalink} = job.data
        const res = await fetch(`https://www.bing.com/indexnow?url=${permalink}&key=${process.env.INDEX_NOW_KEY}`)
        if (res.status === 200) {
            return
        }
        throw new Error(`Failed to submit url '${permalink}' to IndexNow. Status: ${res.status} ${await res.text()}`)
    }
}

export default new IndexNowQueue()
And then a job is added to this queue in the relevant endpoint:
indexNowQueue.add({permalink: document.permalink})
In the logs I can see that the job is added; however, unlike the other queues (for instance aggregate-feeds), it never moves forward.
No error is thrown, and any debugger breakpoint I added in there never gets reached. I also tried isolating the handler function outside of the queue, and it works as I would expect. What could be causing this issue? What other ways do I have to debug Bull?
It's worth mentioning that there are half a dozen queues in the project and they are all working as expected.
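As a debugging aid, one hedged sketch (not from the original post) is to log the lifecycle events Bull emits on the wrapped queue, which makes it visible whether jobs ever reach the active state:
// Hypothetical debugging helper, e.g. at the end of the wrapper's constructor.
// `this.queue` and `logger` refer to the wrapper shown above.
const events = ['error', 'waiting', 'active', 'stalled', 'failed', 'completed']
events.forEach((event) => {
    this.queue.on(event, (...args) => logger.info(`Queue:${this.label} ${event}`, ...args))
})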

Chrome extension: send message from onInstalled in background.js to own extension JavaScript code

I need to check when the extension is installed and change my React state accordingly.
I use chrome.runtime.onInstalled in my background.js, where I sendMessage to my React code, which is the content script of my extension.
background.js
async function getCurrentTab() {
    let queryOptions = { active: true, currentWindow: true };
    let [tab] = await chrome.tabs.query(queryOptions);
    return tab;
}

chrome.runtime.onInstalled.addListener(async (details) => {
    if (details?.reason === 'install') {
        console.log('installed backgroundjs')
        const tab = await getCurrentTab()
        chrome.tabs.sendMessage(tab.id, { target: 'onInstall' })
        openNewTab()
    }
})
In my React component, Dashboard.js:
useEffect(() => {
    if (extensionApiObject?.runtime) {
        chrome.runtime.sendMessage({ target: 'background', message: 'check_installation' })
        console.log('extension')
        chrome.runtime.onMessage.addListener(handleMessage)
    }
    return () => {
        if (extensionApiObject?.runtime) {
            chrome.runtime.onMessage.removeListener(handleMessage)
        }
    }
})

function handleMessage(msg) {
    console.log('handle message func', msg)
    if (msg.target === 'onInstall') {
        console.log('extension on Install')
        setShowWelcomeMessage(true)
    }
}
What confuses me is that I already have a similar implementation for a different message that works without problems, but there I listen to chrome.runtime.onMessage(), not chrome.runtime.onInstalled().
I wonder if I misunderstand how the onInstalled method works and whether I simply cannot sendMessage from it?
UPDATE:
I changed my code as suggested by @wOxxOm: I used chrome.tabs.sendMessage, but still no luck.
chrome.runtime.onInstalled doesn't take req, sender, and sendResponse as arguments like the other listeners do, and I wonder if that means it will not be able to send a message from there :/
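For reference, this is roughly what the onInstalled listener receives, per the Chrome extensions API (a sketch, not code from the original post): a single details object, with no message-passing context:
// The listener only gets a `details` object; there is no sender or sendResponse here.
chrome.runtime.onInstalled.addListener((details) => {
    // details.reason is 'install', 'update', 'chrome_update' or 'shared_module_update'
    console.log(details.reason, details.previousVersion)
})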
After @wOxxOm's suggestions, I ended up removing the sendMessage solution and simply adding an extra parameter to the URL whenever I open a new tab after installation:
background.js
function openNewTab(param) {
    chrome.tabs.create({
        url: param ? `chrome://newtab?${param}` : 'chrome://newtab',
    })
}

chrome.runtime.onInstalled.addListener((details) => {
    if (details?.reason === 'install') {
        openNewTab('installed')
    }
})
On my web app I just need to check the param and decide which UI to show; a minimal sketch of that check follows below.
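This sketch is assumed rather than taken from the original post; it reuses the setShowWelcomeMessage state setter from the Dashboard component above:
// Hypothetical: read the query parameter added by openNewTab('installed').
const params = new URLSearchParams(window.location.search)
if (params.has('installed')) {
    setShowWelcomeMessage(true)
}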

"Extension context invalidated" error when calling chrome.runtime.sendMessage()

I have a content script in a Chrome Extension that's passing messages. Every so often, when the content script calls
chrome.runtime.sendMessage({
    message: 'hello',
});
it throws an error:
Uncaught Error: Extension context invalidated.
What does this error mean? I couldn't find any documentation on it.
It doesn't happen consistently. In fact, it's hard to reproduce. Seems to happen if I just leave the page open for a while in the background.
Another clue: I've written many Chrome Extensions with content scripts that pass messages and I haven't seen this error before. The main difference is that this content script is injected by the background page using
chrome.tabs.executeScript({
    file: 'contentScript.js',
});
Does using executeScript instead of the manifest file somehow change the lifecycle of the content script?
This is certainly related to the message listener being lost in the middle of the connection between content and background scripts.
I've been using this approach in my extensions, so that I have a single module that I can use in both background and content scripts.
messenger.js
const context = (typeof chrome.runtime.getBackgroundPage !== 'function') ? 'content' : 'background'

chrome.runtime.onConnect.addListener(function (port) {
    port.onMessage.addListener(function (request) {
        try {
            const object = window.myGlobalModule[request.class]
            object[request.action].apply(object, request.data)
        } catch (error) {
            console.error(error)
        }
    })
})

export function postMessage (request) {
    if (context === 'content') {
        const port = chrome.runtime.connect()
        port.postMessage(request)
    }
    if (context === 'background') {
        if (request.allTabs) {
            chrome.tabs.query({}, (tabs) => {
                for (let i = 0; i < tabs.length; ++i) {
                    const port = chrome.tabs.connect(tabs[i].id)
                    port.postMessage(request)
                }
            })
        } else if (request.tabId) {
            const port = chrome.tabs.connect(request.tabId)
            port.postMessage(request)
        } else if (request.tabDomain) {
            const url = `*://*.${request.tabDomain}/*`
            chrome.tabs.query({ url }, (tabs) => {
                tabs.forEach((tab) => {
                    const port = chrome.tabs.connect(tab.id)
                    port.postMessage(request)
                })
            })
        } else {
            chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
                const port = chrome.tabs.connect(tabs[0].id)
                port.postMessage(request)
            })
        }
    }
}

export default { postMessage }
Now you'll just need to import this module in both content and background script. If you want to send a message, just do:
messenger.postMessage({
    class: 'someClassInMyGlobalModule',
    action: 'someMethodOfThatClass',
    data: [] // any data type you want to send
})
You can specify whether you want to send to all tabs (allTabs: true), to a specific domain (tabDomain: 'google.com'), or to a single tab (tabId: 12), as in the sketch below.
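For example (the class and action names here are made up for illustration):
// Hypothetical usage: ask every open google.com tab to show a notification.
messenger.postMessage({
    class: 'Notifications',
    action: 'show',
    data: ['Settings saved'],
    tabDomain: 'google.com'
})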

NodeJS streams not awaiting async

I have run into an issue when testing Node.js streams. I can't seem to get my project to wait for the output from the Duplex and Transform streams after running a stream.pipeline, even though it is returning a promise. Perhaps I'm missing something, but I believe the script should wait for the function to return before continuing. The most important part of the project I'm trying to get working is:
// Message system is a duplex (read/write) stream
export class MessageSystem extends Duplex {
    constructor() {
        super({highWaterMark: 100, readableObjectMode: true, writableObjectMode: true});
    }

    public _read(size: number): void {
        var chunk = this.read();
        console.log(`Received ${chunk}`);
        this.push(chunk);
    }

    public _write(chunk: Message, encoding: string,
        callback: (error?: Error | null | undefined, chunk?: Message) => any): void {
        if (chunk.data === null) {
            callback(new Error("Message.Data is null"));
        } else {
            callback();
        }
    }
}

export class SystemStream extends Transform {
    public type: MessageType = MessageType.Global;
    public data: Array<Message> = new Array<Message>();

    constructor() {
        super({highWaterMark: 100, readableObjectMode: true, writableObjectMode: true});
    }

    public _transform(chunk: Message, encoding: string,
        callback: TransformCallback): void {
        if (chunk.single && (chunk.type === this.type || chunk.type === MessageType.Global)) {
            console.log(`Adding ${chunk}`);
            this.data.push(chunk);
            chunk = new Message(chunk.data, MessageType.Removed, true);
            callback(undefined, chunk); // TODO: Is this correct?
        } else if (chunk.type === this.type || chunk.type === MessageType.Global) { // Ours and global
            this.data.push(chunk);
            callback(undefined, chunk);
        } else { // Not ours
            callback(undefined, chunk);
        }
    }
}

export class EngineStream extends SystemStream {
    public type: MessageType = MessageType.Engine;
}

export class IOStream extends SystemStream {
    public type: MessageType = MessageType.IO;
}

let ms = new MessageSystem();
let es = new EngineStream();
let io = new IOStream();

let pipeline = promisify(Stream.pipeline);

async function start() {
    console.log("Running Message System");
    console.log("Writing new messages");
    ms.write(new Message("Hello"));
    ms.write(new Message("world!"));
    ms.write(new Message("Engine data", MessageType.Engine));
    ms.write(new Message("IO data", MessageType.IO));
    ms.write(new Message("Order matters in the pipe, even if Global", MessageType.Global, true));
    ms.end(new Message("Final message in the stream"));

    console.log("Piping data");
    await pipeline(
        ms,
        es,
        io
    );
}

Promise.all([start()]).then(() => {
    console.log(`Engine Messages to parse: ${es.data.toString()}`);
    console.log(`IO Messages to parse: ${io.data.toString()}`);
});
Output should look something like:
Running message system
Writing new messages
Hello
world!
Engine Data
IO Data
Order Matters in the pipe, even if Global
Engine messages to parse: Engine Data
IO messages to parse: IO Data
Any help would be greatly appreciated. Thanks!
Note: I posted this with my other account, and not this one that is my actual account. Apologies for the duplicate.
Edit: I initially had the repo private, but have made it public to help clarify the answer. More usage can be found on the feature/inital_system branch. It can be run with npm start when checked out.
Edit: I've put my custom streams here for verbosity. I think I'm on a better track than before, but now I'm getting a "null" object received down the pipeline.
As the documentation states, stream.pipeline is callback-based and doesn't return a promise.
It has a custom promisified version that can be accessed with util.promisify:
const pipeline = util.promisify(stream.pipeline);
...
await pipeline(...);
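A self-contained sketch of that pattern (the streams below are made up for illustration, not the poster's classes):
// Minimal illustration: await util.promisify(stream.pipeline) so the script
// only continues once every chunk has flowed from source to sink.
const { Readable, Writable, pipeline } = require('stream');
const { promisify } = require('util');

const pipelineAsync = promisify(pipeline);

async function demo() {
    const source = Readable.from(['Hello', 'world!']);
    const sink = new Writable({
        objectMode: true,
        write(chunk, _encoding, callback) {
            console.log(`Received ${chunk}`);
            callback();
        }
    });

    await pipelineAsync(source, sink);
    console.log('Pipeline finished'); // Only logs after both chunks are written.
}

demo().catch(console.error);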
After some work over the past couple of days, I've found my answer. The issue was my implementation of the Duplex stream. I have since changed the MessageSystem to be a Transform stream, which is easier to manage and work with.
Here is the product:
export class MessageSystem extends Transform {
    constructor() {
        super({highWaterMark: 100, readableObjectMode: true, writableObjectMode: true});
    }

    public _transform(chunk: Message, encoding: string,
        callback: TransformCallback): void {
        try {
            let output: string = chunk.toString();
            callback(undefined, output);
        } catch (err) {
            callback(err);
        }
    }
}
Thank you to @estus for the quick reply and check. Again, I found my answer in the API docs all along!
An archived repository of my findings can be found in this repository.
