Unexpected Disconnection with Code 1006 on Windows Server Hosted on Azure - node.js

My application does client authorization over a WebSocket connection using ws 7, but after several minutes it suddenly gets disconnected with error code 1006.
The interesting thing is that it works on AWS Windows Server instances but not on Azure instances or VMware VMs. I assume there is some WebSocket-related configuration that has to be handled before installing a Node-based application, but the main question is what I have to configure in order to move forward.

A 1006 close code usually happens when there is a timeout. In the library you are using, the ws timeout is 30 seconds: https://github.com/websockets/ws/blob/4f293a8726092c75539287dd07358afaf151a2e5/lib/websocket.js
Check whether there is a gateway or some other intermediary between the client and the VM with a timeout less than or equal to the ping interval that ws automatically sends from the client.
You can usually see these automatically generated ping messages in Firefox with the F12 tools in the Network tab; they do not show up in Chrome or Edge, but they happen there as well.
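If an intermediary is cutting idle connections, a common fix is a heartbeat whose interval is shorter than that timeout, along the lines of the broken-connection detection example in the ws docs. A minimal server-side sketch (the port and the 10-second interval are assumptions; tune the interval to whatever sits in front of your VM):

    const WebSocket = require('ws');

    const wss = new WebSocket.Server({ port: 8080 }); // hypothetical port

    wss.on('connection', (ws) => {
      ws.isAlive = true;
      // A pong from the client marks the connection as alive again.
      ws.on('pong', () => { ws.isAlive = true; });
    });

    // Ping every client periodically; terminate any that missed the
    // previous pong. The traffic also keeps intermediaries from seeing
    // the connection as idle.
    const interval = setInterval(() => {
      wss.clients.forEach((ws) => {
        if (ws.isAlive === false) return ws.terminate();
        ws.isAlive = false;
        ws.ping();
      });
    }, 10000); // keep this below the intermediary's idle timeout

    wss.on('close', () => clearInterval(interval));

Clients using ws reply to these pings with pongs automatically, so no client-side change is needed.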

I had a similar problem with my Windows machine trying to connect to a server using Visual Studio Code. I reset the routes and rebooted the machine, and that solved the issue.
To reset use:
route -f

Related

ASP.NET Core IHostedService stops when hosted on Linux

I have an ASP.NET Core 3.1 application with an Angular 8 frontend. It runs fine when hosted on IIS, but since I moved it onto a new Ubuntu 18 server with Nginx in front of Kestrel, the long-running background processes (IHostedService) sometimes stop working. The app keeps accepting new requests, so only the background process is stopped.
These processes receive files from clients and respond immediately with a process id. The clients can then query the process state by that id. Everything had been running fine for months on IIS, but the new configuration must have some limit that kills these processes. I suppose there is some Kestrel or Nginx option I don't know about that affects processes started by HTTP requests.
What options can I try and where can I get some logs?
I've tried logging everything from .NET Core, but even the most verbose logs are useless here. The Nginx logs don't contain any info about the stopped process either.
Although the application runs fine hosted on IIS, I tried to find catch blocks without any output and added logging to them, but still nothing. Is there anything I can add to my application globals to log all exceptions, handled or unhandled?
I forgot to mention that I use a local Microsoft SQL Server Express instance on both Windows and Linux. The Linux SQL Server install was done per the official MS docs (as were the dotnet and nginx configs). The database is restored from a Windows SQL Server backup. The connection string is the same, with multipleresultsets=true. Are there any differences I should be aware of?
For anyone getting here in the future: this was caused by a bug in Microsoft.Data.SqlClient, so I had to update it (independently of EF Core 3.1.2) from NuGet to the newer 1.1.2 version.
When it got stuck I had two threads waiting for each other, both in SqlClient. With Just My Code enabled, the VS debugger stopped at one of my LINQ queries. The only interesting part was that it never threw any exceptions, and there was no deadlock event on the SQL Server either. It just waited there, so all logs were empty.
https://github.com/dotnet/efcore/issues/18480
https://github.com/dotnet/SqlClient/issues/262
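For reference, forcing the newer client over EF Core's transitive dependency is a single explicit package reference in the csproj (a sketch; pin whichever version contains the fix):

    <PackageReference Include="Microsoft.Data.SqlClient" Version="1.1.2" />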

Parse Server periodically is getting slow

parse-server version 2.7.4 (Azure on a Standard_B4ms)
mongoDB-server version 3.4.14 (Azure on a separate Standard_B4ms)
I have an iOS & Android app with LiveQuery (set up on the parse-server's VM) that is used heavily for chatting, usually with around 50 simultaneous users. The thing is, after a few hours of continuous usage, the server's cloud code responses get REALLY slow! And not just one specific function: all cloud functions!
I'm using screen to run the parse server, and I found that if I restart the parse server (not the VM), the app goes back to normal.
I also have logs enabled at all times (just mentioning it in case that could be the issue).
I can't understand why this is happening!
Any ideas?

Azure Virtual Machine Crashing every 2-3 hours

We've got a classic VM on Azure. All it does is run SQL Server with a lot of databases (we've got another VM, a web server, which is the web-facing side and accesses the SQL classic VM for data).
The problem is that since yesterday morning we have been experiencing outages every 2-3 hours. There doesn't seem to be any reason for it. We've been working with Azure support, but they are still struggling to work out what the issue is. There doesn't seem to be anything in the event logs that gives us any information.
All that happens is that we receive a Pingdom alert saying the box is down, we then can't remote into it as it times out, and all database calls to it fail. Five minutes later it comes back up. It doesn't seem to fully reboot or anything; it just halts.
Any ideas what this could be caused by? Any places we could look for better info? Or ways to prevent this from happening?
The only thing in the event logs that occurs around the same time is a DNS Client event: "Name resolution for the name [DNSName] timed out after none of the configured DNS servers responded."
Smartest or quick recovery:
Did you check SQL Server by connecting inside the VM (internally) using localhost or 127.0.0.1\InstanceName? If you can connect to SQL Server internally without any issue, then capture or snapshot the SQL Server VM and create a new VM from that capture (i.e. without losing any data).
This issue may be caused by one of the following:
Azure network firewall
Windows Server update
This ended up being a fault with the node/sector that our VM was on. I fixed it by enlarging our VM instance (4 cores to 8 cores), which forced Azure to move it to another node/sector, and that rectified the issue.

keep nodejs server on azure VM running

I have a Windows Azure VM (Linux server 14.04) running and am able to access it from the command line on my Mac/Windows machines. I'm running a Node.js server and a MongoDB instance on this Azure VM.
The problem is that the Node.js server on the VM gets disconnected after some time (a timeout sort of thing). Is it possible for the server on the VM to run indefinitely and keep serving requests?
PS: My VM is running indefinitely and properly, but the Node.js server on the VM itself times out after some time. Please help!
Thanks.
It is probably just crashing!
A bare-bones Node application does not get monitored by anything by itself.
This might sound a little crazy if you come from other web frameworks/platforms like ASP.NET or PHP, where you had IIS or Apache monitoring your application for you, which was kind of nice, to be honest. In Node.js you choose your own process manager / monitoring system. In my experience, the most popular and well-supported process managers are the ones listed in the Express documentation: http://expressjs.com/advanced/pm.html
Azure VMs will not sleep or shut down by themselves, and they will not stop any servers running on them.
And per your description:
the Node.js server on the VM itself times out after some time
the issue seems to be the same as what @svenskunganka said.
You can check what happened at that "some time" by leveraging PM2, as @Daniel and @svenskunganka suggested.
When you deploy your Node.js project with PM2, it will monitor the application and log errors automatically. You can also monitor your VM metrics (such as CPU usage and network in/out) from the Azure Portal's Monitor panel.
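For illustration, a minimal PM2 setup might look like this; the app name and entry-point path below are assumptions, so adjust them to your project:

    // ecosystem.config.js - a minimal PM2 config sketch
    module.exports = {
      apps: [{
        name: 'my-node-server',        // hypothetical app name
        script: './server.js',         // hypothetical entry point
        instances: 1,
        autorestart: true,             // restart automatically on crash
        max_memory_restart: '300M',    // restart if memory grows past this
        error_file: './logs/err.log',  // stderr, where crash stacks land
        out_file: './logs/out.log'     // stdout
      }]
    };

Start it with pm2 start ecosystem.config.js and inspect past crashes with pm2 logs; this way the server is restarted after a crash instead of staying down.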

TortoiseSVN Error: Could not send request body: an existing connection was forcibly closed by the remote host

Let me preface this by saying I have basically 0 knowledge of web development. That being said, I'll still try to provide you with as much information as I possibly can. Our client is using IIS7 on a Windows Server 2008 R2 machine. The TortoiseSVN error they're getting is this:
Error: Could not send request body: an existing connection was forcibly closed by the remote host.
Using the powers of Google, it seems there are two possible things that could be occurring here. As it is a 4GB file, I've seen people mention that it could be a configuration issue, in that the timeout could be a little short or that I might need to enable a setting somewhere to allow committing larger files, or that it could be a network issue. It might be useful to note that they can commit smaller files.
I've already tried disabling the firewall, as well as the antivirus, on the server and having them retry, but that didn't work. They are uploading from a desktop to the server, and both are on the same network through a gigabit switch. I'm sure I'm missing useful information, but I'm a total noob to web dev, their setup, and what they're actually trying to do. If you need any more information I'll be glad to provide it.
The problem could be overly strict timeout options configured in Apache2's reqtimeout module. I simply disabled it:
a2dismod reqtimeout
/etc/init.d/apache2 restart
Credit to: https://serverfault.com/questions/297562/svn-https-problem-could-not-read-status-line-connection-was-closed-by-ser
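If you'd rather keep the module than disable it outright, relaxing its limits should also work. A sketch of the reqtimeout config (the file path and values are assumptions; a timeout of 0 means no limit, so body=0 stops a long 4GB commit from being cut off):

    # /etc/apache2/mods-available/reqtimeout.conf
    # Keep the header limits but disable the body read timeout entirely.
    RequestReadTimeout header=20-40,MinRate=500 body=0

Then restart Apache as above for the change to take effect.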
