I have a script that uses Node.js and Puppeteer. The script runs fine on my Windows 10 machine for as long as I don't close it from the command line while I'm using it.
On my VPS, it works for exactly 30 minutes. I tried a few times and it is always exactly 30 minutes: the Node.js process is still running, but no data is received after the 30-minute mark. I'm scraping a WebSocket, for what it's worth.
I have tried every launch arg, but nothing keeps the connection alive.
Have you tried resetting the WebSocket connection yourself to bypass the issue? I'm not sure what the application is here, but a simple disconnect/reconnect every 29 minutes (or every minute, for that matter) might just do the trick.
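In a Puppeteer context that could look something like this (a minimal sketch, assuming page is your open Puppeteer page and that reloading the page re-opens its WebSocket):

setInterval(() => {
  // reload before the 30-minute cutoff so the site's WebSocket is re-opened
  page.reload({ waitUntil: 'networkidle2' }).catch(console.error);
}, 29 * 60 * 1000);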
Finally, I found a solution :)
I guess the site checks your activity, and if you are not active for 30 minutes it closes any open connection. With Puppeteer you can simulate mouse movement, and that is the solution: I put a mouse movement in an interval and everything is fine now. If anyone has this issue, just use this method and all will be good.
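In sketch form (page is the open Puppeteer page; the exact coordinates don't matter):

setInterval(() => {
  // move the mouse to a slightly random spot once a minute so the site sees activity
  page.mouse.move(100 + Math.random() * 100, 100 + Math.random() * 100)
    .catch(console.error);
}, 60 * 1000);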
I'm working on an extension that should have a persistent service worker (while using Manifest V3).
I've tried the solution proposed in this answer (https://stackoverflow.com/a/66618269/10364842), and it works when tested by itself (I used the files from here to verify: https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25).
However, when I put the code in the target extension, it doesn't prevent the worker from unloading.
I'm using worker_wrapper.js and injectScripts inside worker_wrapper.js. I added the code that keeps the service worker alive at the top of worker_wrapper.js (tried other locations as well).
I've verified that this code gets injected into one of the tabs:
chrome.runtime.connect({ name: 'keepAlive' });
console.log('keepAlive');
I can see 'keepAlive' printed in the console for that tab.
However, the service worker still gets unloaded.
Last time I tested, it unloaded ~1 minute after the last 'keepAlive' was printed to the console.
So it seems to work sometimes, but often the service worker still unloads ~1 minute after the keepAlive function is called.
Unfortunately I can't attach a minimal reproducible example, as I'm not sure what causes the problem. And the code from https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25 works when run by itself.
I've also tested with DevTools open, and it still unloads (with the message 'DevTools was disconnected from the page. Once page is reloaded, DevTools will automatically reconnect.')
Tested Chrome versions: 99.0.4844.82, 101.0.4947.0
Tested OS: Ubuntu 20.04
Could there be other bugs causing this? Should I try injecting the chrome.runtime.connect call into each tab? Or try running the keepAlive function every 55 seconds instead of every 4 minutes 55 seconds?
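For reference, the pattern I'm trying to reproduce looks roughly like this (my simplified sketch of the linked workaround; the 295-second timing comes from the 4 min 55 s figure above):

// service worker side (sketch): drop the port just before the ~5-minute port limit
chrome.runtime.onConnect.addListener((port) => {
  if (port.name !== 'keepAlive') return;
  setTimeout(() => port.disconnect(), 295e3); // 4 min 55 s
});

// content script side (sketch): reconnect whenever the port drops,
// which resets the service worker's idle timer
function keepAlive() {
  const port = chrome.runtime.connect({ name: 'keepAlive' });
  port.onDisconnect.addListener(keepAlive);
}
keepAlive();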
I have a Node.js app hosted on Heroku. I am paying $7 a month for the better plan, which gives me the next tier of dynos and SSL. My problem is that I have a cron job in my app that runs every minute. It is VERY important that it runs every minute and pretty much never misses. However, it sometimes fails to run, and after debugging a little I believe the cause is that the dyno restarts itself.
So I was wondering if there is a way to schedule the app to restart instead of having it restart whenever it likes, or if my cron job is actually the problem and I can't do what I'm looking for. Any ideas?
EDIT: here's the cronjob code:
const { CronJob } = require('cron');

const sendTexts = new CronJob('*/1 * * * *', function () {
  // code that sends texts if the event is true
}, null, true);
It should run every minute, and it does locally while my server is up; but again, the issue seems to be the restarting dynos.
Dynos are restarted (cycled) at least every 24 hours; a manual restart (with the Heroku CLI, for example) resets the 24-hour period.
You could consider restarting your app every X hours to try to manage that; however, you must consider:
Dynos can be restarted randomly by Heroku (after a platform error, for example)
Upon restart, your cron job starts immediately, so you will get executions before a whole minute has passed
You might want to consider an architectural change, using a DB or a queue, which lets you avoid relying on the application always running (see the sketch below).
In cloud-based architecture it is never a good idea to assume a single instance (container) is always available.
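For example, here is a minimal sketch of the queue idea, assuming a hypothetical Postgres table jobs(id, run_at, done). Because due rows are claimed atomically in the database, a dyno restart only delays work until the next tick instead of losing it:

const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

setInterval(async () => {
  // atomically claim every job that is due and not yet done
  const { rows } = await pool.query(
    `UPDATE jobs SET done = true
       WHERE done = false AND run_at <= now()
       RETURNING id`
  );
  for (const { id } of rows) {
    // send the text for job id here (application-specific)
  }
}, 60 * 1000);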
I have a web application running as a service on an Ubuntu EC2 instance. For the past 24 hours, the application has been crashing randomly 2-4 hours after starting, with the message attached in the image below. The error is:
[nodemon] app crashed - waiting for file changes before starting...
I have run into this error before, but usually it is a syntax error and the application won't start at all. In this case, the app functions normally for several hours before crashing. I have no idea where to start, as there's nothing above the message that looks like it could be causing the crash. The only thing I notice is that the website receives 3 GET / requests before the server can respond, and then it crashes. Most of the posts I've found online about this error describe the application being blocked from running entirely; they don't mention the app running normally and then crashing.
Any help would be greatly appreciated.
Thanks!
[Screenshot: error log from journalctl]
It looks like a silent error. I would log every input (e.g. HTTP requests and timeouts) with a timestamp, and also log the crash with its time. When a crash occurs, compare its time with the events happening right before it.
Also check /var/log/ to see whether the program was terminated by the system or by another program.
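Something along these lines, as a sketch (assuming an Express app; adjust for your framework):

const express = require('express');
const app = express();

// timestamp every incoming request
app.use((req, res, next) => {
  console.log(new Date().toISOString(), req.method, req.url);
  next();
});

// make otherwise-silent crashes visible, with a timestamp to compare against the request log
process.on('uncaughtException', (err) => {
  console.error(new Date().toISOString(), 'uncaughtException', err);
  process.exit(1);
});
process.on('unhandledRejection', (reason) => {
  console.error(new Date().toISOString(), 'unhandledRejection', reason);
});

app.listen(3000);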
Background:
I have a Python (console) application that includes a socket server. The application receives messages from a 3rd-party client (start and stop messages from a certain Process A) to control a data-recording task (like starting and stopping a recording). You can think of it as receiving messages via sockets to start and stop recording data from that same Process A for about 5 minutes at a time. The 3rd-party client sends messages for nearly 2 hours and then stops, and by the end the Python application has produced a group of files per session.
This application runs 24/7 (unattended on a Windows 10 desktop machine) with a logging console open. I have noticed that sometimes (I haven't identified a pattern), after it has been running for 4 or 5 days, I access the system remotely using TeamViewer and the console window shows that the last message is from 1-2 days ago. But once I click on the console or press a key in it, I receive the full batch of messages from the sessions missed during those days; start and stop messages arrive "simultaneously", leading to rubbish data files.
The code:
This is the socket-server part of the code. I know I'm setting a buffer of 1024 bytes, but in normal operation this buffer should not fill up before the data is read:
with conn:
    # display client information
    logger.info('Connected with ' + addr[0] + ':' + str(addr[1]))
    while self.enable:
        # now keep talking with the client
        data = conn.recv(1024)
        if data:
            self.data_cb(data)
        else:
            logger.debug("no data, closing connection.")
            break
Question:
What is leading to this buffering behaviour?
Could it be...
the 3rd party client?
my Python application?
something in the Windows network stack?
Has anyone experienced something like this?
Any idea is really appreciated, as I have no clue why this is happening. Thanks.
Edit - Additional info:
The application is running on a real desktop machine (no virtual machine)
The application has been able to run continuously for almost a month (it only stopped for valid external reasons: power outage, version update, etc.)
The last time I accessed it through TeamViewer, I noticed the app hadn't received messages for a day (it had been running for 4 days at that point), BUT I assumed there was another reason and planned to go to the site and check (because something similar had happened before). I accessed it the next day, and it was the same. But on the third day I clicked on the console to review the messages, and instantly the whole batch of messages from the previous 2 days appeared in the log.
The app has now been running for 2 weeks, and I did not access the PC through TeamViewer during the last 4 days, in case accessing it could prevent the issue from occurring.
TL;DR
The selection feature of the Command Prompt window somehow prevents the application from printing logging messages and/or reading data from the socket (both happen in the same thread).
Well, I found the cause of this buffering behaviour, but I am not sure whether it is a known issue (it was not to me, so I will post a specific question later about that selection feature).
When I checked the system today, I found that the console messages were frozen at 3 days earlier. I clicked on the console window, hit a key, and all the messages from those 3 days were shown at once. That made me suspect the selection feature of the console output.
I started the application as usual and followed these steps:
I selected a part of the content in the application console.
Using another console, I connected a dummy client using ncat (at this point, the expected "client connected" message didn't show up)
I sent dummy messages (they didn't show up either)
I ended the ncat connection (Ctrl-C)
I clicked on the application console and hit a key
Voila! All the logging messages (connection and data) appeared, and all the messages I had sent using ncat were received as one big message.
EDIT: I didn't need to create a question; it's a known "feature". There are good questions here, here and here. The last one shows how to disable this "feature".
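For reference, this is roughly how QuickEdit can be disabled from Python with ctypes (my own sketch of the approach from those links, using the documented Win32 console-mode flags):

import ctypes

ENABLE_QUICK_EDIT_MODE = 0x0040
ENABLE_EXTENDED_FLAGS = 0x0080
STD_INPUT_HANDLE = -10

kernel32 = ctypes.windll.kernel32
handle = kernel32.GetStdHandle(STD_INPUT_HANDLE)

# read the current console input mode
mode = ctypes.c_uint32()
kernel32.GetConsoleMode(handle, ctypes.byref(mode))

# clear QuickEdit; ENABLE_EXTENDED_FLAGS must be set for the change to stick
kernel32.SetConsoleMode(handle, (mode.value & ~ENABLE_QUICK_EDIT_MODE) | ENABLE_EXTENDED_FLAGS)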
I have an AJAX call that invokes a function.
This function takes 5 minutes to complete.
When I run it on my machine, everything is OK.
But when I run it on my website deployed to Azure, the request returns error 500 after 3.5 minutes. The work itself keeps running and completes; I can see it in the database.
The response is blank.
Any help?
Thanks!
You can change your approach and use WebSockets.
5 minutes is a long time to hold a connection; a lot can happen in 5 minutes.
A different approach would be to return a GUID before you start the process and have the client poll every 10 seconds or so until the process state changes to finished, at which point you can return the result (a rough sketch follows).
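A rough sketch of that pattern (the route names and runLongTask are hypothetical placeholders for your 5-minute function):

// server (Express): start the work, return an id immediately
const express = require('express');
const crypto = require('crypto');
const app = express();
const jobs = {}; // id -> { state, result }

app.post('/start', (req, res) => {
  const id = crypto.randomUUID();
  jobs[id] = { state: 'running' };
  runLongTask()
    .then((result) => { jobs[id] = { state: 'finished', result }; })
    .catch((err) => { jobs[id] = { state: 'failed', error: String(err) }; });
  res.json({ id }); // respond well within the idle timeout
});

app.get('/status/:id', (req, res) => {
  res.json(jobs[req.params.id] || { state: 'unknown' });
});

// client: poll every 10 seconds until the job leaves the 'running' state
async function waitFor(id) {
  for (;;) {
    const job = await (await fetch('/status/' + id)).json();
    if (job.state !== 'running') return job;
    await new Promise((r) => setTimeout(r, 10000));
  }
}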
Good luck.