IIS Zero Downtime Update ARR / Reverse Proxy

I have a C# console application / Windows service that uses HttpListener to handle requests, and IIS is set up to reverse proxy to it via ARR.
My problem is that when I update this application there is a short downtime between the old instance being shut down and the new one being ready.
The approach I'm thinking about would be to add 2 servers to the server farm via local hostnames with 2 ports. On update I'd start the new instance, which would listen on the currently unused port, then stop listening for new requests on the old instance and shut it down gracefully (i.e. let it finish processing its current requests, as sketched below). Those last 2 steps would be triggered by the new instance to ensure that it is ready to handle requests.
Is IIS ARR load balancing smart enough to try the other instance and mark the now shut-down one as unavailable without losing any requests during the switch-over, or do I have to add health checks etc.? Would that again lead to a short downtime period?
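To make the drain step concrete, this is roughly what I have in mind for the old instance (a minimal sketch; the prefix, the handler and the way the new instance signals the shutdown are placeholders):

using System;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

class DrainingListener
{
    static int _inFlight;

    // Runs the listener until `shutdown` fires (e.g. triggered by the new instance),
    // then drains in-flight requests before closing.
    static async Task RunAsync(string prefix, CancellationToken shutdown)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add(prefix); // e.g. "http://localhost:5001/"
        listener.Start();

        var stopSignal = new TaskCompletionSource<bool>();
        using var reg = shutdown.Register(() => stopSignal.TrySetResult(true));

        Task<HttpListenerContext> accept = null;
        while (true)
        {
            accept = listener.GetContextAsync();
            var done = await Task.WhenAny(accept, stopSignal.Task);
            if (done != accept) break; // shutdown requested: stop taking new requests

            var ctx = await accept;
            Interlocked.Increment(ref _inFlight);
            _ = Task.Run(async () =>
            {
                try { await HandleAsync(ctx); }
                finally { Interlocked.Decrement(ref _inFlight); }
            });
        }

        // Drain: let requests that are already being processed finish.
        while (Volatile.Read(ref _inFlight) > 0)
            await Task.Delay(100);

        listener.Stop();
        listener.Close();
        // A request accepted after the break is dropped when the listener closes;
        // by then ARR should already be routing to the new instance.
        try { if (accept != null) await accept; } catch { /* listener closed */ }
    }

    static async Task HandleAsync(HttpListenerContext ctx)
    {
        // ... real request handling goes here ...
        ctx.Response.StatusCode = 200;
        ctx.Response.Close();
        await Task.CompletedTask;
    }
}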

One idea that I believe could work (especially if your IIS is only being used for this purpose) is to leverage the overlapped recycling capabilities that are built into IIS when you make a configuration change. In this case what you could do is:
start a new instance of your app listening on a different port,
edit the configuration in ARR to point to the new port.
IIS should allow any existing requests already running in the application pool to drain within the recycling timeout, while new requests are sent to the new application pool.
It would also help if you could share a bit more of the configuration you are using in ARR (like a snippet of %windir%\system32\inetsrv\config\applicationHost.config and its webFarms section).
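In the meantime, here is roughly what repointing the farm to the new port could look like if you script it with Microsoft.Web.Administration instead of editing applicationHost.config by hand. This is only a sketch: the farm name, server address and port are made-up placeholders, and the applicationRequestRouting attribute names should be double-checked against your own webFarms section.

using System;
using Microsoft.Web.Administration; // reference %windir%\system32\inetsrv\Microsoft.Web.Administration.dll

class RepointFarm
{
    static void Main()
    {
        // Must run elevated; this edits applicationHost.config directly.
        using (var serverManager = new ServerManager())
        {
            Configuration config = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection webFarms = config.GetSection("webFarms");

            foreach (ConfigurationElement farm in webFarms.GetCollection())
            {
                if ((string)farm["name"] != "MyAppFarm") continue;          // placeholder farm name

                foreach (ConfigurationElement server in farm.GetCollection())
                {
                    if ((string)server["address"] != "localhost") continue; // placeholder address

                    // The ARR-specific settings live in a child element of the server entry.
                    ConfigurationElement arr = server.GetChildElement("applicationRequestRouting");
                    arr["httpPort"] = 5002;                                  // port of the freshly started instance
                }
            }

            serverManager.CommitChanges(); // ARR picks the change up from applicationHost.config
        }
    }
}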

Related

(Rancher) Longhorn too many refreshes / connections on dashboard

I am using longhorn for my k3s cluster. My cluster is in a completely different environment than my local computer. The setup for accessing the dashboard looks like this.
Local PC (env1) <-> Reverse Proxy (env1, IIS6) <-> Reverse Proxy (env2, IIS8) <-> K3s Cluster (env2 nginx ingress) <-> Longhorn dashboard (env2)
This setup works fine for most of my apps, like the Kubernetes dashboard, Graylog, some self-developed stuff etc.
But Longhorn reloads things so many times that all ephemeral Windows ports on the first reverse proxy get occupied, and then no other connection to the server is possible anymore. Freeing the ports again requires an IIS restart, and even then it takes some time for all connections to close, so it puts high pressure on the server.
My question is: is there something wrong with my IIS configuration? I am using ARR on IIS 6. It is a default configuration, nothing special, done in the user interface with the URL Rewrite module.
Or is there a setting for Longhorn where I can disable the reload of the page?
Here is a screenshot of my browser. As you can see, many requests are sent to the server:

How to deploy Node.js app without causing downtime

My Node.js application is running on the production server via the forever daemon:
forever start -w --watchDirectory=/path/to/app \
--watchIgnore=/path/to/app/node_modules/** /path/to/app/server.js
When I change file contents in the /path/to/app/ directory, the process is restarted by forever. While the restart takes around 2-3 seconds, the app is unavailable, so downtime occurs every time I deploy a change. How can I prevent the downtime, assuming I have full access to the server?
You can do that manually by using an HTTP load balancer: you create two or more backends that are reachable only by the load balancer (the load balancer is the only thing reachable at a public address). The next step is to update one backend while the load balancer directs traffic only to the other, still-available backend. After a successful update you bring the updated backend back up, point the load balancer at it, and repeat the procedure for the remaining backend; both end up updated without service downtime.
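The piece that makes this work smoothly is the load balancer knowing which backend is currently "the available one". A common way to do that is a health endpoint that the balancer polls and that you flip to failing before updating a backend, so traffic drains away from it first. Below is a minimal sketch of that idea; the question is about Node, but the pattern is runtime-agnostic, and the /healthz path, port and drain-flag file here are made-up placeholders.

using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

// Tiny health endpoint the load balancer polls on each backend.
// Creating the drain-flag file makes it return 503, so the balancer
// stops sending new traffic to this backend before you update it.
class HealthEndpoint
{
    static async Task Main()
    {
        const string drainFlag = "/path/to/app/DRAIN";    // placeholder path
        var listener = new HttpListener();
        listener.Prefixes.Add("http://*:8081/healthz/");  // placeholder port and path
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = await listener.GetContextAsync();
            ctx.Response.StatusCode = File.Exists(drainFlag) ? 503 : 200;
            ctx.Response.Close();
        }
    }
}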

Node socket.io on load balanced Amazon EC2

I have a standard LAMP EC2 instance setup running on Amazon's AWS. Having also installed Node.js, socket.io and Express to meet the demands of live updating, I am now at the stage of load balancing the application. That's all working, but my sockets aren't. This is how my setup looks:
                 --- EC2 >> Node.js + socket.io
                /
Client >> ELB --
                \
                 --- EC2 >> Node.js + socket.io

[RDS MySQL - EC2 instances communicate to this]
As you can see, each instance has an installation of Node and socket.io. However, occasionally Chrome's debugger will show the socket request failing with a 400 and the reason {"code":1,"message":"Session ID unknown"}, and I guess this is because it's communicating with the other instance.
Additionally, let's say I am on page A and the socket needs to emit to page B - because of the load balancer these two pages might well be on a different instance (they will both be open at the same time). Using something like Sticky Sessions, to my knowledge, wouldn't work in that scenario because both pages would be restricted to their respective instances.
How can I get around this issue? Will I need a whole dedicated instance just for Node? That seems somewhat overkill...
The issues come up when you consider both websocket traffic (layer 4-ish) and HTTP traffic (layer 7) moving across a load balancer that can only inspect one layer at a time. For example, if you set the ELB to load balance on layer 7 (HTTP/HTTPS), then websockets will not work at all across the ELB. However, if you set the ELB to load balance on layer 4 (TCP), then any fallback HTTP polling requests could end up at any of the upstream servers.
You have two options here. You can figure out a way to effectively load balance both HTTP and websocket requests or find a way to deterministically map requests to upstream servers regardless of the protocol.
The first one is pretty involved and requires another load balancer. A good walkthrough can be found here. It's worth noting that when that post was written HAProxy didn't have native SSL support. Now that this is the case it might be possible to just remove the ELB entirely, if that's the route you want to go. If that's the case the second option might be better.
Otherwise you can use HAProxy on its own (or a paid version of Nginx) to implement a deterministic load balancing mechanism. In this case you would use IP hashing, since socket.io does not provide a route-based mechanism to identify a particular server the way sockjs does. This uses the first 3 octets of the IP address to determine which upstream server gets each request, so unless the user changes IP address between HTTP polls this should work.
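To make the "first 3 octets" part concrete, here is a rough sketch of the kind of deterministic mapping such a balancer performs. It only illustrates the idea; it is not HAProxy's or Nginx's actual hash function.

using System;

class IpHashBalancer
{
    // Picks an upstream index from the client IP's first three octets,
    // so every request from the same /24 lands on the same server.
    static int PickUpstream(string clientIp, int upstreamCount)
    {
        string[] octets = clientIp.Split('.');
        string network = octets[0] + "." + octets[1] + "." + octets[2]; // e.g. "203.0.113"

        int hash = 0;
        foreach (char c in network)
            hash = unchecked(hash * 31 + c);

        return (hash & 0x7FFFFFFF) % upstreamCount;
    }

    static void Main()
    {
        // Same /24 -> same upstream, whether the request is a websocket
        // upgrade or a fallback HTTP poll.
        Console.WriteLine(PickUpstream("203.0.113.7", 2));
        Console.WriteLine(PickUpstream("203.0.113.200", 2)); // same index as above
        Console.WriteLine(PickUpstream("198.51.100.23", 2)); // may differ
    }
}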
The solution would be for the two (or more) Node.js installs to use a common session source.
Here is a previous question on using Redis as a common session store for Node.js: How to share session between NodeJs and PHP using Redis?
and another:
Node.js Express sessions using connect-redis with Unix Domain Sockets

Taper off connections nicely in IIS 8.5

Is there any mechanism in IIS 8.5 where you can taper off the current connections?
The scenario is that you have two servers that are network load balanced by forward-facing hardware. You do not have access to, or direct control over, the NLB. You wish to take a server offline, but you do not wish to kill any connections, just prevent new connections from occurring and wait for the current connections to finish.

IIS network load balancing

I have a clustered server with 4 nodes running Win server 2008 r2 with IIS 7.
Failover kicks in when one of the nodes fails, but is there a way to have it round-robin incoming calls across the different servers?
Requests do get distributed when they come from different clients, but our investigation shows that if one client makes many requests, they all go to the same server.
I would like the cluster to round-robin requests so that node 1 receives the first request, node 2 receives the second request, and so on.
Each request can take a long time, and having all requests go to the same node while I have 3 others idling is causing us a perf issue. Thanks
NLB port rules have a couple of properties that control how requests are routed. The relevant properties seem to be:
Filtering mode - specifies whether a single host or multiple hosts in the cluster handle traffic for the given port
Affinity - controls how traffic is routed to hosts in the cluster
It is likely you need to set the Affinity value to none, which allows requests to be routed to multiple hosts within the cluster. The docs do not state whether round-robin or another algorithm is used for load balancing.
For more on Filtering Mode and Affinity: Network Load Balancing Manager Properties
How to: Edit a Network Load Balancing Port Rule
Round Robin Load Balancing will not distribute traffic coming from a single source. You will need to configure your load balancer for 'Least Connections'.
Basically the NLB passes a new connection to the pool member or node that has the fewest active connections.
