How to disable client timeout? - google-cloud-memorystore

Is it possible to configure Memorystore to disable the client idle timeout? It keeps disconnecting my long-lived connection pools at seemingly random times, which breaks my health-check endpoints.
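Whatever is closing the connection, a common client-side mitigation is to enable TCP keepalives and periodic health checks in the client rather than changing anything server-side. A minimal sketch with the Python redis-py client (the host and port are placeholders; Memorystore speaks the Redis protocol, and the TCP_KEEP* constants are Linux-specific):

import socket
import redis

# A connection pool that asks the OS to probe idle sockets (TCP keepalive)
# and makes the client PING before reusing a connection that has been idle
# for more than health_check_interval seconds.
pool = redis.ConnectionPool(
    host="10.0.0.3",          # placeholder: your Memorystore IP
    port=6379,
    socket_keepalive=True,
    socket_keepalive_options={
        socket.TCP_KEEPIDLE: 60,   # first probe after 60s idle (Linux constant)
        socket.TCP_KEEPINTVL: 30,  # probe every 30s after that
        socket.TCP_KEEPCNT: 3,     # drop the socket after 3 failed probes
    },
    health_check_interval=30,
)
client = redis.Redis(connection_pool=pool)
client.ping()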

Related

Frequent disconnection and re-connection of WebSocket requests after deployment in Kubernetes

I am working on a chat application using the socket.io library on the back end and ngx-socket-io on the front end. The chat functionality works fine in the local environment, and there is only one WebSocket connection in the network tab of the browser.
But when I deploy the code to the Kubernetes cluster, the WebSocket connection does not persist: the previous WebSocket request is closed and a new request is initiated, i.e. the connection keeps disconnecting and re-connecting.
It is not persistent even with a single active pod or service in the Kubernetes cluster.
I want a single WebSocket connection to persist for a longer duration; only then can I have live chat working, because live chat breaks whenever a new WebSocket connection is initiated.
You need to apply the following annotations to the Ingress so that nginx does not time out the WebSocket connection, for example:
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
In the end, this issue was solved by switching from the nginx ingress controller to the more advanced Traefik controller.

What can cause connection timeouts from Azure Cloud Service?

We have a Web API in Azure that sends requests to a VM cluster load balanced via an Azure Cloud Service. We see occasional timeouts where requests are working, then one times out for no apparent reason. Reissuing the request immediately succeeds.
In Fiddler I see:
[Fiddler] The connection to '[myApp].cloudapp.net' failed. Error: TimedOut (0x274c). System.Net.Sockets.SocketException A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 40.122.42.33:9200
I can't find any telemetry in the portal that shows any kind of error, and all is fine when the request is issued from my API. Also, I don't see anything in the Event Logs on my VMs.
I am thinking it might have something to do with TCP port closure, but I am unfamiliar with this. My requests specify 'Connection: keep-alive', so I assume that subsequent requests to the same protocol/domain will attempt to reuse the same connection. It usually works, however.
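As an aside, connection reuse is easy to observe from a small client. A quick Python sketch (the URL is a placeholder) that relies on urllib3 logging "Starting new HTTPS connection" whenever it opens a fresh socket; with working keep-alive, the second request logs no such line:

import logging
import requests

# DEBUG logging surfaces urllib3's connection-pool messages, so a new
# socket per request is immediately visible.
logging.basicConfig(level=logging.DEBUG)

session = requests.Session()  # pools and reuses connections per host
session.get("https://myapp.cloudapp.net/health")   # placeholder URL
session.get("https://myapp.cloudapp.net/health")   # should reuse the socket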
Is there any kind of throttling on the number of active connections that can come into my Cloud Service? It is possible that these timeouts happen during peak load (though we don't have enough consistent traffic to verify this).
thanks!

Does an HTTPS connection to Windows Azure web role stay connected?

It's my understanding that if I connect to a Windows Azure web role with HTTPS, there is an initial handshake to exchange certificates and then another connection is made to get data.
Can someone explain whether the connection is persisted, or whether, if the user requests another page a few minutes later, there would be another exchange of handshakes? And if the web role were serving data from Web API, would it be the same?
It depends on the client's capabilities, but with modern web browsers I wouldn't be so worried about a single connection (and handshake) per request:
HTTP 1.1 - Persistent connection
Modern browsers use HTTP 1.1 by default, which per RFC 2616 makes connections persistent by default. HTTP 1.1 also defines pipelining, where multiple requests are sent on the same connection without waiting for each response, although most browsers keep pipelining disabled in practice. Browsers also limit the number of connections per server (RFC 2616 suggested two; modern browsers such as Chrome typically allow around six per host) and reuse idle connections.
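To make the persistence concrete, a short Python sketch (using example.com as a stand-in host) that sends two requests over one HTTP/1.1 connection:

import http.client

# Two requests over a single HTTP/1.1 connection; the TCP socket is reused
# as long as neither side sends "Connection: close".
conn = http.client.HTTPSConnection("example.com")
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # the body must be fully read before the next request
    print(resp.status, resp.getheader("Connection"))
conn.close()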
Azure: It appears that Azure will drop a connection that has been idle for 4 minutes.
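If you control the client, setting OS-level TCP keepalives below that 4-minute window is the usual workaround for long-lived idle connections. A Python sketch (the hostname is a placeholder; the TCP_KEEP* constants are Linux-specific):

import socket

# Open a long-lived connection and turn on TCP keepalive so probes flow
# before Azure's ~4 minute idle timeout can tear the connection down.
sock = socket.create_connection(("myservice.cloudapp.net", 443))  # placeholder
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)   # first probe after 120s idle
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)   # then every 30s
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # give up after 3 failures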
Handshake
Every first connection requires a full handshake, but subsequent connections can reuse a session ticket (or session ID); this depends on the client. Microsoft introduced TLS session resumption some time ago - see What's New in TLS/SSL (Schannel SSP) in Windows Server and Windows. As long as you have only one host serving the HTTPS connection, it should resume sessions, according to this blog post:
There’s also a warning about session resumption. This is due to the Azure load balancer and non-sticky sessions. If you run a single instance in your cloud service, session resumption will turn green since all connections will hit the same instance.
It should not make any difference whether it's Web API or a website. You can always test it using SSLyze.
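If you prefer to check resumption from code rather than with SSLyze, Python's ssl module exposes the TLS session directly. A sketch (example.com is a stand-in for your cloud service hostname; it pins TLS 1.2 because the session object predates TLS 1.3 ticket handling):

import socket
import ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # the session API predates TLS 1.3

# First connection: full handshake; save the negotiated session.
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        session = tls.session

# Second connection: offer the saved session and see whether the server
# accepted an abbreviated (resumed) handshake.
with socket.create_connection((host, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host, session=session) as tls:
        print("session resumed:", tls.session_reused)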

What happens in IIS when it needs to create a new worker process?

I am trying to track down an issue where users randomly get prompted by IIS 7.5 for their security credentials. I have come across something in the Event Viewer that says:
A worker process with process id of 'xxxx' serving application pool 'MyAppPoolName' was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed.
So let's say that happens, and then a user comes in and hits the site. A new worker process gets started. Could this cause a prompt for credentials from IIS? I am using Windows Authentication with IIS.
The application pool will recycle after a configured amount of inactivity. If your session timeout is longer than the IIS recycle time, then you are at risk of losing sessions if you are using in-process session state. Normally, IIS will attempt to hang onto sessions created prior to a recycle and process new requests using the new worker process.
Configuring the application to use the ASP.NET State Service or SQL Server to hold session state will allow sessions to be maintained across a recycle. However, the first request after the recycle event you are seeing in your log will pay a startup penalty.
I would configure the session timeout to be less than the recycle period in IIS; however, in a properly configured application the user would be redirected to log in in either case. You might also want to consider using sticky sessions.

Azure VM session timeout in RDP

I am evaluating Azure VMs with an MSDN subscription. I created a few Server 2012 VMs. However, Azure apparently times out connections that have been idle for a few minutes.
How to extend the timeout period at the Azure side?
This might be a bit late, but I'll update the response anyway. It's possible to increase the timeout by updating the Keep Alive settings in the registry. These settings cause the server to send "heartbeat" packets to connected clients every so often, which keeps the connection from being dropped.
; KeepAliveInterval is measured in minutes, so this sends a probe every minute
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"KeepAliveEnable"=dword:00000001
"KeepAliveInterval"=dword:00000001
