Problems using remote debugging protocol in Google Chrome extension - google-chrome-extension

I am developing my own implementation of DevMode for a GWT-like framework. This DevMode is to
work like GWT's old DevMode, which is going to be deprecated, since the GWT team doesn't want to
support browser plugins. Another reason is that NPAPI was removed from the latest versions of Google Chrome.
So my DevMode implementation has its own Google Chrome extension, which is written completely in
JavaScript and communicates with server through WebSocket.
The main goal of DevMode is to enable debugging of Java code in Java IDE. It runs Java code
in JVM, and when JVM attempts to execute JavaScript code, it transfers it remotely to browser
extension, which executes the JavaScript and sends the result back. However, things are slightly more
difficult in practice, as JVM can pass callback to JavaScript, so there is bi-directional
message flow between JVM and browser extension, in any depth of nesting. Consider the following
ping-pong:
public class PingPong {
public native void ping(int count) /*-{
console.log("Before ping %d", count);
this.pong(count - 1);
console.log("After ping %d", count);
}-*/;
public void pong(int count) {
System.out.println("Before pong " + count);
ping(count - 1);
System.out.println("After pong " + count);
}
}
When ping calls pong, it should wait until the JVM receives the message, handles it and sends a response
back. Furthermore, the JVM calls ping again, and so forth. However, the WebSocket API is asynchronous,
and we can't simply block until the JVM responds, as the JVM may call back into JavaScript again.
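The nesting can be modeled in plain JavaScript. In this toy sketch, `browser` and `jvm` are illustrative stand-ins for the two sides of the message flow; each side may call back into the other at arbitrary depth, which is why a "send once and block once" scheme is not enough:

```javascript
// Toy model of the ping-pong above. The trace records the interleaving
// of nested calls between the two sides.
const trace = [];
const browser = {
  ping(count) {
    trace.push('Before ping ' + count);
    if (count > 0) jvm.pong(count - 1); // nested call back into the "JVM"
    trace.push('After ping ' + count);
  }
};
const jvm = {
  pong(count) {
    trace.push('Before pong ' + count);
    if (count > 0) browser.ping(count - 1); // and back again
    trace.push('After pong ' + count);
  }
};
browser.ping(2);
console.log(trace.join('\n'));
```

Each "Before" line must wait for the whole nested exchange to finish before its matching "After" line can run, so the transport has to support re-entrancy, not just a single blocking round trip.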
So I use the remote debugging protocol in my extension.
When the extension receives a command to call a JavaScript method,
it sends the Runtime.callFunctionOn command to the debugger API. When Java passes a callback to the
extension, it creates the following proxy function:
function() {
var params = [];
for (var i = 0; i < arguments.length; ++i) {
params.push(arguments[i]);
}
return sendMessage({
type : "invoke-method",
methodName : methodName,
args : params
});
}
where sendMessage is:
function sendMessage(message) {
sentMessage = message;
while (sentMessage != null) {
debugger;
}
return receivedMessage;
}
Further, the extension listens for the Debugger.paused notification. When the notification arrives, the
extension sends the message to the JVM and waits for a response. After the response is received, the
extension sets receivedMessage, clears sentMessage and sends Debugger.resume to the debugger.
Unfortunately, I have two major problems. First, when I call
element.addEventListener("click", callback)
from JVM, and then click the element, Debugger.paused comes as expected.
But when I read sentMessage through the RDP, I get an objectId different from the one I passed to
addEventListener. Are object ids persistent in the Google Chrome debugger API?
The second problem is speed. An RDP roundtrip takes 10-20 milliseconds per call. Is there another
way to block script execution using Google Chrome extension APIs? Also, I want not only
to block, but to enter a nested event loop, since a JVM callback may call JavaScript again.

How to send message from native app written in Node.js to Chrome extension?

This question is a continuation of that one, because that question has had no answer with Node.js code for 6 years now.
I use Chrome 79 and Node.js 13 on Windows 10.
I have a Node.js script (see below) acting as the native messaging host, and a Chrome extension.
I want to send messages from the Node.js script to my extension.
I know how to receive messages in my Chrome extension; here is its background.js:
var port = chrome.runtime.connectNative('my_messaging_host');
port.onMessage.addListener((message) => {
console.log("Received: " + message);
});
The Node.js script I have (see below) is the native messaging host. As a reference, I used the only Node.js messaging host code example I could find, from MDN.
I've noticed that this Node.js code example was added to that MDN wiki page very recently, 10 days ago, and it has issues/needs improvement.
Anyway, I've modified it as follows, but I'm getting the following error from my extension when sending messages: Failed to connect: Error when communicating with the native messaging host. Based on the docs, this indicates an incorrect implementation of the communication protocol in the native messaging host.
So, why is my Node.js script not working?
Could you please give me a working Node.js host code example?
function sendMessage(msg) {
var buffer = Buffer.from(JSON.stringify(msg));
var header = Buffer.alloc(4);
header.writeUInt32LE(buffer.length, 0);
var data = Buffer.concat([header, buffer]);
process.stdout.write(data);
}
For reference, various native messaging host examples:
Google Native Messaging docs: in Python.
MDN Native Messaging docs: in Nodejs and Python 2 & 3,
as answers to the question mentioned above: in C, C++, C# and Python.
I've fixed the sendMessage function from the MDN wiki page as follows, i.e.:
there is no need to allocate a new Buffer by JSON-stringifying msg, which is already a string (it was wrongly assumed to be a JSON object);
the 4-byte header should hold the length of the given msg, written in little-endian;
the msg should be sent over stdout as-is, i.e. as a string.
function sendMessage(msg) {
var header = Buffer.alloc(4);
// write the byte length (not the character count), little-endian,
// so that multi-byte UTF-8 characters are framed correctly
header.writeUInt32LE(Buffer.byteLength(msg), 0);
process.stdout.write(header);
process.stdout.write(msg);
}
Also, here is an example message to send:
sendMessage('{"test": "content"}');
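The 4-byte little-endian length framing can be sanity-checked with a self-contained round trip (`frame` and `unframe` are illustrative helper names, not part of any API):

```javascript
// Build a native-messaging frame (4-byte LE length prefix + payload)
// and parse it back, to verify the framing is symmetric.
function frame(msg) {
  const header = Buffer.alloc(4);
  // Buffer.byteLength, not msg.length: they differ for multi-byte UTF-8
  header.writeUInt32LE(Buffer.byteLength(msg), 0);
  return Buffer.concat([header, Buffer.from(msg)]);
}

function unframe(buf) {
  const len = buf.readUInt32LE(0);
  return buf.slice(4, 4 + len).toString();
}

console.log(unframe(frame('{"test": "content"}'))); // → {"test": "content"}
```

For a pure-ASCII payload the two length measures coincide, but a message containing, say, "é" is one character yet two bytes, and Chrome reads the header as a byte count.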
PS: I also updated that MDN wiki page with it.

How to handle connection timeout in ZeroMQ.js properly?

Consider a Node.js application with few processes:
single main process sitting in the memory and working like a web server;
system user's commands that can be run through CLI and exit when they are done.
I want to implement something like IPC between the main and CLI processes, and the ZeroMQ bindings for Node.js seem to be quite a good candidate for doing that. I've chosen the 6.0.0-beta.4 version:
Version 6.0.0 (in beta) features a brand new API that solves many fundamental issues and is recommended for new projects.
Using Request/Reply I was able to achieve what I wanted: the CLI process notifies the main process about some event (and optionally receives some data as a response) and continues its execution. The problem I have right now is that my CLI process hangs if the main process is down (not available). The command should still execute and exit without notifying the main process if it's unable to establish a connection to the socket.
Here is a simplified snippet of my CLI code, running in an asynchronous function:
const { Request } = require('zeromq');
async function notify() {
let parsedResponse;
try {
const message = { event: 'hello world' };
const socket = new Request({ connectTimeout: 500 });
socket.connect('tcp://127.0.0.1:33332');
await socket.send(JSON.stringify(message));
const response = await socket.receive();
parsedResponse = JSON.parse(response.toString());
}
catch (e) {
console.error(e);
}
return parsedResponse;
}
(async() => {
const response = await notify();
if (response) {
console.log(response);
}
else {
console.log('Nothing is received.');
}
})();
I set connectTimeout option but wonder how to use it. The docs state:
Sets how long to wait before timing-out a connect() system call. The connect() system call normally takes a long time before it returns a time out error. Setting this option allows the library to time out the call at an earlier interval.
Looking at connect, one sees that it's not asynchronous:
Connects to the socket at the given remote address and returns immediately. The connection will be made asynchronously in the background.
Ok, probably the send method of the socket will wait for the connection to be established and reject its promise on connection timeout... but nothing happens there. The send method executes, and the code gets stuck resolving receive, waiting for a reply from the main process that will never come. So the main question is: "How to use the connectTimeout option to handle the socket's connection timeout?" I found an answer to a similar question related to C++, but it doesn't actually answer the question (or I can't understand it). I can't believe this option is useless and was added to the API just so that nobody can use it.
I also would be happy with some kind of a workaround, and found receiveTimeout option. Changing socket creation to
const socket = new Request({ receiveTimeout: 500 });
leads to a rejection in the receive method and the following output:
{ [Error: Socket temporarily unavailable] errno: 11, code: 'EAGAIN' }
Nothing is received.
The source code executes, but the process doesn't exit in this case. It seems that some resources are busy and not freed. When the main process is up, everything works fine: the process exits and I get the following reply in the output:
{ status: 'success' }
So another question is: "How to exit the process gracefully when the receive method rejects due to receiveTimeout?" Calling process.exit() is not an option here!
P.S. My environment is:
Kubuntu 18.04.1;
Node 10.15.0;
ZeroMQ bindings are installed this way:
$ yarn add zeromq#6.0.0-beta.4 --zmq-shared
ZeroMQ decouples the socket connection mechanics from message delivery. As the documentation states, connectTimeout only influences the timeout of the connect() system call and does not affect the timeouts of sending/receiving messages.
For example:
const zmq = require("zeromq")
async function run() {
const socket = new zmq.Dealer({connectTimeout: 2000})
socket.events.on("connect:retry", event => {
console.log(new Date(), event.type)
})
socket.connect("tcp://example.com:12345")
}
run()
The connect:retry event occurs every ~2 seconds:
> node test.js
2019-11-25T13:35:53.375Z connect:retry
2019-11-25T13:35:55.536Z connect:retry
2019-11-25T13:35:57.719Z connect:retry
If we change connectTimeout to 200, you can see that the event occurs much more frequently. The timeout is not the only thing influencing the delay between events, but it should be clear that retries happen much more quickly.
> node test.js
2019-11-25T13:36:05.271Z connect:retry
2019-11-25T13:36:05.531Z connect:retry
2019-11-25T13:36:05.810Z connect:retry
Hope this clarifies the effect of connectTimeout.
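For the hang described in the question, one language-level workaround is to race the pending receive against a timer. This sketch uses plain JavaScript with a never-settling promise standing in for a receive() whose peer is offline (`withTimeout` is a hypothetical helper; with the real socket you would still need to close it afterwards so the process can exit):

```javascript
// Race a pending promise against a timer, rejecting if the timer wins.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  // Clear the timer either way so it doesn't keep the event loop alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Stand-in for socket.receive() when the main process is down.
const neverReceive = new Promise(() => {});

withTimeout(neverReceive, 100)
  .then((r) => console.log('received', r))
  .catch((e) => console.log(e.message)); // → timeout
```

Unlike receiveTimeout, this bounds the wait without touching socket options, but it does not cancel the underlying receive, so resource cleanup on the socket is still the caller's responsibility.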

Synchronization in Firefox OS (B2G)

I would like to pause a function in order to wait for the end of another section of code.
Does Firefox OS have some synchronization method like wait() and notify() in Java?
Thanks
JavaScript doesn't have this concept; generally speaking, it uses callbacks (function pointers), a bit like anonymous classes in Java. For example, making a call to a web server:
function callToWebServer(url, doneCallback) {
// do all kinds of magic, waiting for the web server to reply etc.
// when done:
doneCallback();
}
Now using it via:
callToWebServer(function() {
// this is executed after the call to the web server succeeded
});
alert(1); // this is executed straight away
We can never wait, as JS is a single-threaded execution environment. All code is written asynchronously.
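For completeness, the same callback pattern is usually written with Promises in modern JavaScript; the caller still never blocks, but can "wait" with await. This is a sketch where the setTimeout stands in for the real network request:

```javascript
// Promise-based version of the callback example above.
function callToWebServer(url) {
  return new Promise((resolve) => {
    // stand-in for the real network request
    setTimeout(() => resolve('reply from ' + url), 10);
  });
}

async function main() {
  const reply = await callToWebServer('http://example.com');
  console.log(reply); // runs only after the "server" has replied
}
main();
```

await suspends only the async function itself, not the thread, so other code keeps running in the meantime, exactly as with the callback form.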

How to keep long-lived connections between a Chrome extension and a native messaging host?

I created a Chrome extension and use Native Messaging to connect to a C++ native application.
But for every message that the Chrome extension sends to the native host, a new instance of the host executable is created. This is inefficient, because I send many messages to the host.
Is there a long-lived connection method between a Chrome extension and a native messaging host?
If you are sending the messages with chrome.runtime.sendNativeMessage, or create a new Port object with chrome.runtime.connectNative for every message, then yes, it's inefficient.
The purpose of chrome.runtime.connectNative is to create a message Port and keep it open so that you can reuse it. As long as your native host behaves in the manner Chrome expects and doesn't close the connection itself, it will be a long-lived connection.
function connect(messageHandler, disconnectHandler){
var port = chrome.runtime.connectNative('com.my_company.my_application');
if(disconnectHandler) { port.onDisconnect.addListener(disconnectHandler); }
if(messageHandler) { port.onMessage.addListener(messageHandler); }
return port;
}
var hostPort = connect(/*...*/);
hostPort.postMessage({ text: "Hello, my_application" });
// Goes to the same instance
hostPort.postMessage({ text: "P.S. I also wanted to say this" });
// If you want to explicitly end the instance
hostPort.disconnect();

WCF Server (net-tcp) against back end running non-thread-safe unmanaged code

I've been tasked with writing a WCF service host (server) for an existing (session-full) service -- not hard so far. The service is an ADO Proxy server that proxies ADO connections to various back end databases. This works well in most cases, but one of the ADO .NET data providers I need to support is implemented as a driver connecting to an unmanaged code (C) API that is not thread-safe.
The preferred solutions, make the code thread-safe or implement a thread-safe, managed driver are off the table right now. It's been suggested that I could spin multiple processes as a sort of back end or second level proxy, but this struck me as a nightmare to implement when I first heard it, and even more so as I did a trial implementation.
My question is: is there another solution I am missing here? I've played around so far with ConcurrencyMode.Single and UseSynchronizationContext = true, but the real heart of the matter is being both session-full and having a non-thread-safe back end. So far no luck. Am I stuck with proxying the connection to multiple processes, or can someone suggest a more elegant solution?
Thanks!
What I would do (actually, I have been in this very situation myself) is spin up a dedicated thread that services all requests dispatched to the unmanaged API. The thread sits there waiting for a request message. The request message instructs the thread to do something with the API. Once the thread has finished processing the request, it constructs a response message containing the returned data. The pattern is super easy if you use a BlockingCollection to queue the request and response messages.
public class SingleThreadedApiAbstraction
{
private BlockingCollection<Request> requests = new BlockingCollection<Request>();
private BlockingCollection<Response> responses = new BlockingCollection<Response>();
public SingleThreadedApiAbstraction()
{
var thread = new Thread(Run);
thread.IsBackground = true;
thread.Start();
}
private readonly object sync = new object();
public /* Return Type */ SomeApiMethod(/* Parameters */)
{
// Serialize callers so a response cannot be paired with the wrong request.
lock (sync)
{
var request = new Request(/* Parameters */);
requests.Add(request); // Submit the request.
var response = responses.Take(); // Wait for the response.
return response.ReturnValue;
}
}
private void Run()
{
while (true)
{
var request = requests.Take(); // Wait for a request.
// Forward the request parameters to the API.
// Then construct a response object with the return value.
var response = new Response(/* Returned Data */);
responses.Add(response); // Publish the response.
}
}
}
The idea is that the API is only ever accessed via this dedicated thread. It does not matter how or who is calling SomeApiMethod. It is important to note that Take blocks if the queue is empty. The Take method is where the magic happens.
