ASP.NET Core IHostedService stops when hosted on Linux

I have an ASP.NET Core 3.1 application with an Angular 8 frontend. It runs fine when hosted on IIS, but since I moved it to a new Ubuntu 18 server with Nginx in front of Kestrel, the long-running background processes (IHostedService) sometimes stop working. The app itself keeps accepting new requests; only the background process stops.
These processes receive files from clients and immediately respond with a process ID. The clients can then query the process state by that ID. Everything has been running fine for months on IIS, but the new configuration must have some limit that kills these processes. I suppose there is some Kestrel or Nginx option I don't know about that affects processes started by HTTP requests.
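For context, the upload-then-poll pattern looks roughly like this (a heavily simplified sketch; the real type and member names differ):

```
// Heavily simplified sketch of the upload-then-poll pattern; real names differ.
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class FileProcessingService : BackgroundService
{
    // Controllers enqueue uploaded files here and return the process ID at once.
    private readonly Channel<(string Id, byte[] File)> _queue =
        Channel.CreateUnbounded<(string, byte[])>();

    // Clients poll the process state by ID; a status endpoint reads this map.
    public ConcurrentDictionary<string, string> StatusById { get; } =
        new ConcurrentDictionary<string, string>();

    public async Task EnqueueAsync(string id, byte[] file)
    {
        StatusById[id] = "queued";
        await _queue.Writer.WriteAsync((id, file));
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var (id, file) in _queue.Reader.ReadAllAsync(stoppingToken))
        {
            StatusById[id] = "processing";
            // ... long-running work, including EF Core / SqlClient calls ...
            StatusById[id] = "done";
        }
    }
}
```

The service is registered with services.AddHostedService<FileProcessingService>() in Startup.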
What options can I try and where can I get some logs?
I've tried to log everything from .NET Core, but even the most verbose logs are useless here. The Nginx logs don't contain any info about the stopped process either.
Although the application runs fine hosted on IIS, I tried to find catch blocks with no output and added logging to them, but still nothing. Is there anything I can add globally to my application to log every exception, handled or unhandled?
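Concretely, is something like the following pair of process-wide hooks in Program.cs the right idea? (A sketch only; CreateHostBuilder stands for the template-generated one, and the console writes stand in for a real logger.)

```
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Last-chance hook for exceptions that would otherwise tear down the process.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Console.Error.WriteLine($"Unhandled exception: {e.ExceptionObject}");

        // Faulted tasks whose exceptions were never awaited or observed.
        TaskScheduler.UnobservedTaskException += (sender, e) =>
        {
            Console.Error.WriteLine($"Unobserved task exception: {e.Exception}");
            e.SetObserved(); // log it without letting it escalate
        };

        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args); // plus the usual ConfigureWebHostDefaults(...)
}
```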
I forgot to say that I use a local Microsoft SQL Server Express instance on both Windows and Linux. The Linux SQL Server install was done per the official MS docs (as were the dotnet and Nginx configs). The database was restored from a Windows SQL Server backup. The connection string is the same, with multipleresultsets=true. Are there any differences I should be aware of?
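For reference, the connection string has this shape (server, database, and credentials are placeholders; note that the documented spelling of the MARS keyword is MultipleActiveResultSets):

```
Server=localhost;Database=MyAppDb;User Id=appuser;Password=<secret>;MultipleActiveResultSets=true
```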

For anyone getting here in the future: this was caused by a bug in Microsoft.Data.SqlClient, so I had to update it (independently of EF Core 3.1.2) from NuGet to the newer 1.1.2 version.
When it got stuck, I had two threads waiting for each other, both in SqlClient. With Just My Code enabled, the VS debugger stopped at one of my LINQ queries. The only interesting part was that it never threw any exception, and there was no deadlock event on the SQL Server either. It just waited there, so all the logs were empty.
https://github.com/dotnet/efcore/issues/18480
https://github.com/dotnet/SqlClient/issues/262
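In csproj terms, the fix amounts to adding a direct package reference that overrides the transitive one (a sketch using the versions mentioned above):

```
<ItemGroup>
  <!-- EF Core 3.1.2 pulls in an older Microsoft.Data.SqlClient transitively; -->
  <!-- a direct reference to 1.1.2 overrides it with the fixed build. -->
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.1.2" />
  <PackageReference Include="Microsoft.Data.SqlClient" Version="1.1.2" />
</ItemGroup>
```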

Related

Unexpected Disconnection with Code 1006 on Windows Server Hosted on Azure

My application does client authorization over a WebSocket connection using ws v7, but after several minutes it suddenly gets disconnected with error code 1006.
The interesting thing is that it works on AWS Windows Server instances, but not on Azure instances or VMware VMs. I assume there is some WebSocket-related configuration that has to be handled before installing a Node-based application, but the main question is what I have to configure in order to move forward.
A 1006 error usually happens when there is a timeout. In the library you are using, the ws timeout is 30 seconds: https://github.com/websockets/ws/blob/4f293a8726092c75539287dd07358afaf151a2e5/lib/websocket.js
Check whether there is a gateway or something else in between the client and the VM with a timeout less than or equal to the ping interval that ws automatically maintains from the client.
You can usually see these automatically generated ping messages in Firefox with the F12 tools in the Network tab; they do not show up in Chrome or Edge, but they happen there as well.
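The same idea expressed in .NET terms, purely for illustration (this is not the asker's Node stack; the endpoint and interval are made up): keep the client's ping interval below the idle timeout of whatever sits in between.

```
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

class KeepAliveSketch
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        // Send protocol-level pings every 20s, safely below a 30s gateway idle timeout.
        ws.Options.KeepAliveInterval = TimeSpan.FromSeconds(20);
        await ws.ConnectAsync(new Uri("wss://example.com/socket"), CancellationToken.None);
        // ... send/receive as usual; the periodic pings keep intermediaries
        // from dropping the connection as idle.
    }
}
```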
I had a similar problem with my Windows machine trying to connect to a server using Visual Studio Code. I reset the routes and rebooted the machine, which solved the issue.
To reset, use:
route -f

Parse Server is periodically getting slow

parse-server version 2.7.4 (Azure, on a Standard_B4ms)
MongoDB server version 3.4.14 (Azure, on a separate Standard_B4ms)
I have an iOS & Android app with LiveQuery (set up on the parse-server VM) that is used a lot for chatting; there are usually ±50 simultaneous users. The thing is, after a few hours of continuous usage, the server's cloud code responses get REALLY slow! And not just one specific function... all cloud functions!
I'm using screen to run the Parse Server, and I found that if I restart the Parse Server (not the VM), the app goes back to normal.
I also have logs enabled at all times (just mentioning it in case that could be the issue).
I can't understand why this is happening!
Any ideas?

Pomelo app kills and removes servers automatically

Under what circumstances does the Pomelo app kill itself?
The issue I am facing is that Pomelo kills its own application without throwing any exception. When I run the command 'pomelo list', it shows that no server is running. The app closes by itself after a few hours of running. I have gone through the logs generated by Pomelo on each server, but there was no exception or blockage for the app.
Below are the details that might help you guys:
We are using the distributed server structure of the Pomelo framework
MongoDB on different server
Redis instance on master-server itself
Pomelo version 1.2.1
We are also running the same architecture for our other multiplayer games but have never run into such issues. Can anybody explain why this might be happening? If you need any other info, please ask.

Keep a Node.js server on an Azure VM running

I have an Azure VM (Linux Server 14.04) running and am able to access it from the command line on my Mac/Windows machines. I'm running a Node.js server and a MongoDB instance on this Azure VM.
The problem is that this Node.js server on the VM gets disconnected after some time (a timeout sort of thing). Is it possible for the server on the VM to run indefinitely and keep serving requests?
PS: The VM itself runs indefinitely and properly, but the Node.js server on it times out after some time. Please help!
Thanks.
It is probably just crashing!
A bare-bones Node application does not get monitored by itself.
This might sound a little crazy if you come from other web frameworks/platforms like ASP.NET or PHP, where IIS or Apache monitored your application for you, which was kind of nice, to be honest. In Node.js you choose your own process manager / monitoring system. From my experience, the most popular and well-supported process managers are the ones listed in the Express.js documentation: http://expressjs.com/advanced/pm.html
Azure VMs will not sleep or shut down by themselves, and they will not stop any servers running on them.
And per your description:
the nodejs server on the VM itself times out after sometime
the issue seems to be the same as what @svenskunganka said.
You can check what caused the error at that “sometime” by leveraging PM2, as @Daniel and @svenskunganka suggested.
When you deploy your Node.js project with PM2, it will monitor the application and log errors automatically. You can also monitor your VM metrics (such as CPU usage and network in/out) from the Azure Portal's Monitor panel.
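A minimal PM2 setup along those lines (the entry file and app name are placeholders):

```
npm install -g pm2
pm2 start server.js --name my-app   # run and monitor the app
pm2 logs my-app                     # inspect crash output / stack traces
pm2 startup && pm2 save             # restart the app after a VM reboot
```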

What are possible solutions or scripts for server crash detection and triggering recovery procedures?

I'm developing on the same server where I host some webpages, in this case with Ajenti, nginx, and Node.js installed on Ubuntu Server, and I noticed that when I crash the server during a test, I need to log in to Ajenti or SSH and restart the webpages.
This made me wonder whether nginx or Ubuntu can detect such a crash, like a 502 Bad Gateway error, and whether there is a command or tool to restart the webpages.
With that I could probably script it all up and have the webpages restart automatically every time I do something that crashes the server.
One solution might be to use something like monit, which can (among many other things) check for (and optionally restart) crashed processes.
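A monit rule for that could look roughly like this (the process name, pidfile, and port are placeholders for whatever your Node app actually uses):

```
# /etc/monit/conf.d/node-app -- hypothetical names and paths
check process node-app with pidfile /var/run/node-app.pid
  start program = "/bin/systemctl start node-app"
  stop program  = "/bin/systemctl stop node-app"
  # restart when the app stops answering HTTP, e.g. after a crash behind nginx
  if failed port 3000 protocol http then restart
```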
