Chrome extension: devtools protocol timeout

My extension uses the DevTools protocol (the "debugger" permission); after it attaches the debugger, it needs to stay attached indefinitely to monitor changes in the tab it is attached to, etc.
After keeping it running overnight, I found that at some point the debugger had detached on its own.
Is there a built-in timeout on the debugger.attach command after which it automatically detaches?
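For reference, a minimal sketch of the setup in question, plus a listener that logs why Chrome detached; the tab ID and the Network.enable command are placeholders for whatever the extension actually monitors:

// background script; assumes the "debugger" permission
const target = { tabId: 123 }; // placeholder: the tab being monitored

chrome.debugger.attach(target, '1.3', () => {
  // enable whatever CDP domain is being used, e.g. network monitoring
  chrome.debugger.sendCommand(target, 'Network.enable');
});

// Fires when Chrome detaches on its own; `reason` (e.g. "target_closed",
// "canceled_by_user") may hint at why the debugger "flipped off" overnight.
chrome.debugger.onDetach.addListener((source, reason) => {
  console.log('Detached from tab', source.tabId, 'reason:', reason);
});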

Related

Trying to use ports to keep service worker persistent in Chrome extension doesn't work (Manifest v3)

I'm working on an extension that should have a persistent service worker (while using Manifest V3).
I've tried the solution proposed in this answer (https://stackoverflow.com/a/66618269/10364842), and it works when tested by itself (I used the files from here to verify: https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25).
However, when I put the code in the target extension, it doesn't prevent the worker from unloading.
I'm using worker_wrapper.js and injectScripts inside worker_wrapper.js. I added the code that keeps the service worker alive at the top of worker_wrapper.js (tried other locations as well).
I've verified that this code gets injected into one of the tabs:
chrome.runtime.connect({ name: 'keepAlive' });
console.log('keepAlive');
I can see 'keepAlive' printed in the console for that tab.
However, the service worker still gets unloaded.
Last time I tested, it unloaded ~1 minute after the last 'keepAlive' was printed to the console.
So it seems to work sometimes, but often the service worker still unloads about a minute after the keepAlive function is called.
Unfortunately I can't attach a minimal reproducible example, as I'm not sure what causes the problem. And the code from https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25 works when run by itself.
I've also tested with DevTools open, and it still unloads (with the message 'DevTools was disconnected from the page. Once page is reloaded, DevTools will automatically reconnect.')
Tested Chrome versions: 99.0.4844.82, 101.0.4947.0
Tested OS: Ubuntu 20.04
Could there be any other bugs that could cause this? Should I try injecting the chrome.runtime.connect call into each tab? Or try running the keepAlive function every 55 seconds instead of every 4 minutes 55 seconds?
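For reference, here is how I read the keep-alive pattern from the linked answer, written out as a sketch rather than a verified fix; it reuses the 'keepAlive' port name and the 4 min 55 s interval mentioned above:

// content script, injected into some tab
function keepAlive() {
  const port = chrome.runtime.connect({ name: 'keepAlive' });
  // Reconnect whenever the worker drops the port, so a fresh connection
  // always exists before the ~5-minute port lifetime runs out.
  port.onDisconnect.addListener(keepAlive);
}
keepAlive();

// background service worker (worker_wrapper.js in the question's setup)
chrome.runtime.onConnect.addListener(port => {
  if (port.name !== 'keepAlive') return;
  // Drop the port shortly before the 5-minute mark so the content script
  // reconnects and the worker's lifetime is extended again.
  setTimeout(() => port.disconnect(), 295e3); // 4 min 55 s
});

As I understand the linked answer, the forced disconnect/reconnect is the whole point: an open port only extends the worker's lifetime for about 5 minutes, so the connection has to be cycled before that limit is reached.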

Why is my console application buffering log and socket messages?

Background:
I have a Python (console) application which includes a socket server. This application receives messages from a 3rd-party client (start and stop messages from a certain Process A) to control a data-recording task (like start and stop recording). You can think of it as receiving messages via sockets to start and stop recording data from that same Process A for about 5 minutes at a time. The 3rd-party client sends messages for nearly 2 hours and then stops, and at the end the Python application produces a group of files per session.
This application runs 24/7 (unattended on a Windows 10 desktop machine) with a logging console open as well. I have noticed that sometimes (I haven't identified a pattern), after it has been running for 4 or 5 days, I access the system remotely using TeamViewer and the console window shows that the last message is from 1-2 days ago. But once I click on the console or press a key in it, I receive the full batch of messages from the sessions missed during those days; start and stop messages thus arrive "simultaneously", leading to rubbish data files.
The code:
This is the socket server part of the code. I know I'm passing a buffer size of 1024, but in normal operation that buffer should be nowhere near full when the data is read.
with conn:
    # display client information
    logger.info('Connected with ' + addr[0] + ':' + str(addr[1]))
    while self.enable:
        # now keep talking with the client
        data = conn.recv(1024)
        if data:
            self.data_cb(data)
        else:
            logger.debug("no data, closing connection.")
            break
Question:
What is leading to this buffering behaviour?
Could it be...
the 3rd party client?
my Python application?
something in the Windows networking stack?
Has anyone experienced something like this?
Any idea is really appreciated, as I have no clue why this is happening. Thanks.
Edit - Additional info:
The application is running on a real desktop machine (no virtual machine)
The application has been able to run continuously for almost a month (only stopped for valid external reasons: power outage, version update, etc.)
Last time I accessed it through TeamViewer, I noticed that the app hadn't been receiving messages for a day (it had been running for 4 days at that point), BUT I assumed it was for another reason and planned to go to the site and check (because something similar had happened before). I accessed it the next day and it was the same. But on the third day, I clicked on the console to review the messages and instantly the whole batch of messages from the previous 2 days appeared in the log.
The app has now been running for 2 weeks, and I did not access the PC through TeamViewer during the last 4 days, in case accessing it could prevent the issue from occurring.
TL;DR
The selection feature of the Command Prompt window somehow prevents the application from printing logging messages and/or reading data from the socket (both happen in the same thread).
Well, I found the cause of this buffering behaviour, but I am not sure whether it is a known thing or not (it was not known to me, so I will post a specific question about that selection feature later).
When I checked the system today, I found that the console messages were frozen at 3 days before. I clicked on the console window, hit a key, and all the messages from those 3 days were shown at once. That made me suspect the selection feature of the console output.
I started the application as usual and followed these steps:
I selected a part of the content in the application console.
Using another console, I connected from a dummy client using ncat (at this point the expected 'client connected' message didn't show up).
I sent dummy messages (they didn't show up either).
I ended the ncat connection (Ctrl-C).
I clicked on the application console and hit any key.
Voila! All the logging messages (regarding connection and data) appeared, and all the messages that I had sent using ncat were received as one big message.
EDIT: I didn't need to create a question; it's a known "feature". There are good questions here, here and here. The last one shows how to disable this "feature".
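For anyone hitting the same thing: the selection behaviour described above is the console's QuickEdit Mode. As a rough sketch of the kind of fix the linked answers describe (not a drop-in solution; the constants come from the Windows console API), a Python console app can clear that flag on its own input handle at startup:

import ctypes

# Windows console API constants (see the SetConsoleMode documentation)
STD_INPUT_HANDLE = -10
ENABLE_QUICK_EDIT_MODE = 0x0040
ENABLE_EXTENDED_FLAGS = 0x0080

def disable_quickedit():
    """Clear QuickEdit Mode so clicking the console no longer pauses the app."""
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(STD_INPUT_HANDLE)
    mode = ctypes.c_uint32()
    if not kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        return  # not attached to a real console
    new_mode = (mode.value | ENABLE_EXTENDED_FLAGS) & ~ENABLE_QUICK_EDIT_MODE
    kernel32.SetConsoleMode(handle, new_mode)

disable_quickedit()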

IBM Cognos Report Studio: "The connection closed before the request is processed."

We are consuming TM1 cubes with Report Studio through Framework Manager.
Quite often when I am trying to come up with new solutions to my challenges in Report Studio, I get an error when I run the report, and then the server goes down. Then I have to restart the dispatchers (Cognos Administration -> Status -> System -> Right Click on the server -> Test Dispatchers -> Right Click on the server -> Start Dispatchers).
The error message that I get is:
The connection closed before the request is processed. If you are
using WebSphere Application Server, to reduce the frequency of this
error, increase the Persistent Timeout parameter for the Web container
transport chains in the administrative console. Increase the time in
10-15 second intervals until the error no longer or rarely occurs.
We are not using WebSphere, but Tomcat (default with the installation).
-> Increasing the connection timeout interval on WebSphere is thus not applicable
-> The timeout interval in the Tomcat config seems to be 60 seconds (60000 ms)
More importantly: The error message shows immediately (after 1 second) when I run the report
-> This indicates to me that it happens regardless of any timeout interval setting
Additional info: The error message comes almost always when I manually and dynamically attempt to build MUNs. However, sometimes (I don't know when or why) it shows the MUN that I've created and tells me that it is invalid, which is WAY better for debugging.
Any suggestions on why this is happening and how to fix it would be greatly appreciated!
Edit 1: http://www.linkedin.com/groups/Product-Cognos-BI-1011-Cognos-3917273.S.143157206
This post states (almost at the bottom) that
When the Cognos BI report ask for a field that does not exist, the TM1
Application disconnects the connection. And the Cognos BI Report will
timeout.
Is this true? If so, why am I sometimes told that my MUN is invalid, whereas other times the connection is closed and the server shuts down? Is it because even Report Studio thinks that my MUN is valid and tries to get it from the TM1 server?
And additionally: Is it possible to change this behavior for the TM1 server?
Edit 2: Or change the BI server behavior so that it does not shut down when the TM1 connection is disconnected, but rather show an error of some kind?
Thanks again!
Edit 3: Okay, so I did some checking with the TM1 top utility (http://pic.dhe.ibm.com/infocenter/ctm1/v9r5m0/index.jsp?topic=%2Fcom.ibm.swg.im.cognos.tm1_op.9.5.1.doc%2Ftm1_op_id6961UsingtheTM1TopUtility_N160F47.html).
When a normal report is run, a new thread is shown in the monitoring list. This thread then disappears when I stop the BI server dispatchers, or automatically after approximately 5 minutes of idle time without any reports being run (according to the TM1 Top log dump).
Likewise, when the error occurs, a new thread is shown in the list. However, it disappears after barely a second (probably because the BI server dispatchers are shut down).
I have therefore concluded that it is safe to assume (?) that the request reaches the TM1 server and that TM1 returns something back (or simply closes the connection, as suggested in the LinkedIn post that I referenced in my first edit), and hence that this is something that has to be fixed on the BI server side (?).
The question is therefore more likely: Is it possible to change the BI server behavior so that it does not shut down when the TM1 server returns something invalid or closes the connection, and rather show some kind of error message instead?
Thanks for any input!

Wait for worker threads to complete before NPAPI plugin being destroyed

I've written a windowless NPAPI plugin, and I am going to perform some long-lasting operation (e.g. send an HTTP POST request with image data) in a plugin function called by the web browser's JavaScript.
To prevent the web browser from hanging, I create one worker thread for every lengthy operation.
My question is: if the browser is closed while there are still worker threads running,
how can I prevent my plugin instance from being destroyed (in NPP_Destroy?) before the worker threads complete?
For an ActiveX control, I simply add/release the plugin instance's reference count every time a worker thread is launched/completed. But for an NPAPI plugin, the reference count applies only to NPObjects (created via NPN_CreateObject), not to the plugin instance itself. Now I'm baffled.
Any help would be really appreciated.
You can't. I suppose you could launch another process and perform the operations in that; that way you could send it a signal when the plugin shuts down saying "you need to close, when you're ready", but not have it close until it finishes.
As for the plugin itself -- even in IE -- you can't control when it shuts down, because if the browser shuts down it will close all plugins at that point anyway.
Welcome to plugin land -- you don't get to control the lifecycle.

How to trace IIS worker process requests

I need to be able to monitor requests from IIS w3wp processes.
How can I see IIS worker process Requests?
To trace all requests currently executing in IIS worker processes:
Open a command window, type logman start <session name> -p "IIS: Request Monitor" -ets and press ENTER.
Event Tracing for Windows prints to the screen details about the trace session you just started, including the name of the session, the file name where the trace data will be collected (<session name>.etl by default), and whether or not the command was successful.
Allow the trace session to run until you have reproduced the problem or until your sites have processed enough requests to produce a manageable data set.
From the command prompt, type logman stop <session name> -ets and press ENTER.
I'm not as experienced on Windows as on Linux, so Ravindra's answer seems interesting (is this just scheduling a particular Event Viewer-style session, or does it actually log something deeper?).
As you particularly ask about 'IIS worker process Requests' you have two options.
GUI
Open inetmgr, go to the root server level, go to Worker Processes and double-click the worker process of your choice. A new screen will load and you will see anything that worker is currently processing.
Command-line
Rather than just give you a single command to copy and paste, this article is a great starter: http://www.iis.net/learn/get-started/getting-started-with-iis/getting-started-with-appcmdexe
The particular command you want is under the section 'INSPECTING CURRENTLY EXECUTING REQUESTS'
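For a quick illustration (switches quoted from memory, so double-check against the article above), listing in-flight requests with appcmd looks roughly like this:

%windir%\system32\inetsrv\appcmd list requests

Adding /elapsed:30000 should restrict the output to requests that have been executing for longer than 30 seconds, which is usually what you want when hunting for hung requests.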
