Close Native Host when Chrome is closed by the user - google-chrome-extension

I am working on a Chrome extension that uses a C++ native host. In my background.js script (registered with "persistent": false), I create the connection to the C++ host from a chrome.runtime.onStartup listener.
I want my Host to be running for as long as the user is actively using Chrome.
If I close all my Chrome tabs, independent Chrome processes still appear in the "Background processes" section of the Task Manager, including my host process, which must be explicitly killed.
I understand that the user can configure Chrome not to run background processes, but can I design my extension to kill the host process (i.e., disconnect the port) when the user closes all Chrome tabs?
The problem gets worse if I disable the extension: the host process becomes a detached background process. If I then re-enable the extension, kill all Chrome processes, and restart Chrome (my extension connects to the host on Chrome startup), I end up with multiple host processes.

When Chrome terminates or your extension is unloaded, Chrome closes the pipe to your native messaging host, so reads on stdin return EOF (-1). You have to check for that value. Assuming your native messaging host is written in C++, this is what you should do:
#include <cstdio>

int read_char = std::getchar();
if (read_char == EOF) {
    // Chrome closed the pipe: do termination work here and exit.
}

Related

Browser only allowing 3 tabs to connect to local server via Socket.io

I have a React app that is currently in development. I use socket.io to connect the frontend to my server, which I'm running locally.
I opened multiple tabs (including incognito) so I can simulate multiple people using it at the same time and the browser hangs up on the 4th window. I can open up to 3 just fine. When I introduce the 4th one I can either not get the React app loaded or I load it and it hangs up when I try to do anything that emits a socket action.
I did notice that I can open a 4th window in Firefox no problem. So it seems like it's a Chrome / Browser thing limiting me to 3 socket connections from a single browser.
Any ideas on what's going on? I don't even have a ton of emits being sent out, and I really don't think it's my server or client code. I tried forcing a new connection with
const socket = io.connect('http://localhost:3000', { forceNew: true });
in my Client code (React) but it didn't fix the problem until I started using Chrome and Firefox together to keep Chrome under 4 tabs.
Unfortunately this is a hard-coded limit of open connections to a server in Chrome.
It's actually 6 open sockets per host (https://support.google.com/chrome/a/answer/3339263?hl=en). However, to confuse things, I suspect you're using something like hot reloading, which also uses a socket; hence each page takes up two sockets, not just one.
The only thing you could do, depending on your architecture, is spawn multiple servers on different ports (then you'd be able to have 6 per port).
Alternatively, as you've found, you can use another browser that does not enforce this limit.
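As a sketch of the multi-port workaround (the port numbers and the `pickPort` helper are illustrative, not part of socket.io):

```javascript
// Spread tabs across several server ports so no single host:port pair
// exceeds Chrome's per-host connection cap. Each port must be served by
// its own socket.io server instance.
function pickPort(tabIndex, ports = [3000, 3001, 3002]) {
  return ports[tabIndex % ports.length];
}

// Each tab (identified here by an index) would then connect with e.g.:
//   const socket = io.connect(`http://localhost:${pickPort(tabIndex)}`,
//                             { forceNew: true });
```

This only helps if the backend is stateless or shares state (e.g. via a socket.io adapter); otherwise clients on different ports won't see each other's events.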

(How) Can a Chrome Extension listen for messages from my server?

My Chrome Extension's background page is set up as an event page, i.e., most of the time it is asleep unless some registered event listener wakes it up.
I'd like to be able to occasionally send messages from my server to the event page of an individual user of my extension. They should not necessarily show up as a desktop notification; it would rather be up to the background script to decide what to do with any incoming message. It might very well store some information in localStorage, for example. If the user's client was offline at the moment the message was sent, it would ideally be delivered once it comes back online.
I'd like to avoid polling my server at regular intervals every time the background script is awake, though that would be an obvious solution.
My question is therefore if it is possible to register a special kind of event in my event page so that it wakes up and triggers some functionality once there's an incoming message from my server. Ideally, the server message would not be a general broadcast to all my users, but rather a targeted message to a specific user.
What options do I have?
I read about service workers and their Push API but it seems they are only slowly being rolled out to Chrome Extensions. I am not sure if they are ready for the browser's stable release yet and didn't find any documentation on how they work with extensions.
I also read a bit about Google Cloud Messaging but it is deprecated in favor of a new costly Firebase solution.
A service worker functions like a proxy server, allowing you to modify requests and responses, replace them with items from its own cache, and more. Chrome, however, has its own approach to caching and installing the resources needed to display an extension, so you will get an error if you attempt to register a service worker from a Chrome extension.
See for more information:
Introduction to service worker
Service Worker script errors if run in chrome extension
See related SO post:
Chrome Extensions with service worker receiving push notifications

Chrome Extension Event Page communicating with external native host

I'm building a Chrome extension that, through a native host, should simulate a key press. In the popup I've created I can connect to my native host to check that it's there, but should I just connect to the native host in my event page script as well? The Chrome extension developer page says:
Event pages are loaded only when they are needed. When the event page is not actively doing something, it is unloaded, freeing memory and other system resources.
So if I want it to run "forever", i.e. listen to the native host and simulate a key press whenever it gets a "ping", how should I do that? The page says I should create events for that, but do I just listen on the port then?
Thanks,
Johan
As long as there is an open port created with connect(), an event page will not shut down.
If you expect that to be the case most of the time, don't bother with an event page; set "persistent": true (or omit the key) to get a normal background page.
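A minimal sketch of the event-page wiring (the host name com.example.keysim and the handleHostMessage/simulateKey names are illustrative; chrome.runtime.connectNative is the real API):

```javascript
// Extracted handler: decide what to do with a message from the host.
// Returns true when the message triggered the key-press action.
function handleHostMessage(msg, simulateKey) {
  if (msg && msg.type === 'ping') {
    simulateKey();
    return true;
  }
  return false;
}

// In the event page, keep one native-messaging port open; the open port
// is what keeps the event page loaded:
//   const port = chrome.runtime.connectNative('com.example.keysim');
//   port.onMessage.addListener(m => handleHostMessage(m, pressKey));
//   port.onDisconnect.addListener(() => { /* host exited or was killed */ });
```

Keeping the handler separate from the chrome.* wiring also makes it easy to unit-test outside the browser.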

SignalR & Windows 7's IIS 7.5 hangs

I am now testing new Chat with SignalR and IIS/ASP.net 4 and friends (MySQL, etc...)
I can use SignalR with one or two clients (IE9, Chrome) and it works. But after some actions in code (refresh, change, refresh, etc.) requests to the server freeze for minutes, and I have to restart IIS to keep working.
I took a dump of the IIS process and saw ten open connections to poll.ashx/connect?transport=foreverFrame that had been open for minutes.
(I am using the ashx extension so that I don't need "runAllManagedModulesForAllRequests", which hurts performance and is unnecessary.)
I tried GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(30) and GlobalHost.Configuration.KeepAlive = null; same problem. I can see requests in the dump that stayed open for 2 minutes or more.
I know that Windows 7's IIS is limited to ten concurrent requests, so I opened Fiddler and aborted all sessions.
IIS still hangs after the termination.
I tried closing Fiddler (and with it the TCP/HTTP connections): still hangs.
What can I do to stop the "lost" connections? The client (Fiddler or the browser) has closed the TCP/HTTP connections, but in IIS they are still alive.
I do not use hubs.
Thanks !

Running a compiled C++ program as CGI

We're going to add Fingerprint authentication to an iPad app;
Since we couldn't find any fingerprint hardware that works with an iPad directly, we found DigitalPersona, a supplier of good fingerprint scanner hardware with an SDK for Linux C/C++. The idea is that the user taps "authenticate with fingerprint", which sends a RESTful request to the Linux box with the fingerprint scanner; the Linux box runs the compiled C++ program, which waits for a user to scan their finger, determines match or no match, and sends that response back to the iPad program's original request.
So... with very little C++ experience, and even less CGI (but lots of PHP and Objective-C), I was wondering if this is technically possible. Can a CGI binary accessed via HTTP actually wait for local user input (at the console) before sending the result to the HTTP request?
The linux box would run headless, and we'd control some LED's to help with letting the user know that the program is waiting for a fingerprint scan.
SOME Additional Details:
No PHP is planned on being used. Initially, we want three devices:
iPad
Linux
Server
iPad is running an app which would have a biometric authentication IBOutlet;
Linux has the fingerprint scanner on it and a basic Apache install, plus the C++ SDK for the fingerprint scanner, which we would use to build the CGI program that, when invoked (by the server), waits for a finger to be scanned and then sends "match" or "no match" back to its requestor.
The server would be the requestor. Once it receives a request from the iPad app, it invokes the CGI program on the Linux box and waits for the "match" or "no match" response.
Another member of my team offered this:
iPad is running the app; user clicks 'auth with fingerprint' and the iPad is in a 'waiting' state
Linux has the finger print scanner, user scans finger print, and the finger print gets sent via HTTPS to the server
The server receives a fingerprint and matches it to a user, then checks whether any iPad is in a 'waiting' state and which user initiated it. If it matches the fingerprint-authenticated user, it accepts the iPad's data as an authentic punch and releases the iPad from the waiting state.
http://sveinbjorn.org/files/ObjectiveCGI.zip
There are basically two ways of doing this: you could have a PHP script call an external application through exec, or you could write the CGI in C++ directly using RudeCGI, Cgicc or freeCGI. There are plenty of libraries to choose from.
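To the core question: yes, a CGI binary can block as long as it likes before writing its response; the HTTP request simply stays open until output appears. A minimal sketch (the cgiResponse helper and the stubbed scanner call are illustrative, not part of any SDK):

```cpp
#include <string>

// Build the CGI response: an HTTP header block, a blank line, then the body.
// The web server relays this verbatim to the HTTP client.
std::string cgiResponse(bool match) {
    return std::string("Content-Type: text/plain\r\n\r\n")
         + (match ? "match" : "no-match");
}

// Stand-in for the blocking DigitalPersona SDK call; the real function
// would not return until a finger has been scanned and compared.
bool waitForFingerprint() {
    return true;
}

// main() would simply be:
//   #include <iostream>
//   int main() { std::cout << cgiResponse(waitForFingerprint()); }
// The cout happens only after the (possibly long) wait, which is exactly
// the "wait for local input, then answer the HTTP request" behavior asked about.
```

Note that Apache may impose its own request timeout, so a very long fingerprint wait might need that limit raised.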
Why does the CGI need to wait for user input at the console? You could instead invoke the CGI after taking the input on the iPad and send it to the CGI; then you just have to read the CGI response from the iPad.