I’m writing an HTTP/1.1 client that will be used against a variety of servers.
How can I decide on a reasonable default keep-alive timeout value, i.e. how long the client should keep an unused connection open before closing it? Any value I think of seems extremely arbitrary.
First note that with HTTP keep-alive both client and server can close an idle connection (i.e. one with no outstanding response and no unfinished request) at any time. In particular, this means the client cannot force the server to keep the connection open by setting some timeout; all a client-side timeout does is limit how long the client itself will try to keep the connection open. The server might close the connection well before this client-side timeout is reached.
Based on this, there is no generic good value for the timeout, but there does not actually need to be one. The timeout is essentially a way to limit resources, i.e. how many idle connections are open at the same time. If your specific use case never visits the same site again anyway, then using HTTP keep-alive would just be a waste of resources. If instead you don't know your usage pattern, you can simply place a limit on the number of open connections, i.e. close the longest-unused connection whenever the limit is reached and a new connection is needed. It might still make sense to have an upper-limit timeout of 10 to 15 minutes, since after roughly this time firewalls and NAT routers along the path will usually have dropped their connection state, so the idle connection would no longer work for new requests anyway.
In any case you also need to detect when the server closes a connection and then discard that connection from the list of reusable connections. And if you use HTTP keep-alive, be aware that the server might close the connection at the very moment you are trying to send a new request on it, i.e. you then need to retry that request on a newly created connection.
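As a concrete illustration of the pool-limit-plus-retry approach, here is a minimal sketch in Python using the requests library; the choice of library and all limits are assumptions for illustration, not part of the question:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Sketch: cap the number of pooled connections and allow one retry so that a
# request landing on a connection the server has just closed is repeated on a
# freshly created connection. All values here are illustrative.
session = requests.Session()
adapter = HTTPAdapter(
    pool_connections=10,         # distinct hosts to keep pools for
    pool_maxsize=10,             # idle connections kept per host
    max_retries=Retry(total=1),  # one retry on a new connection
)
session.mount("http://", adapter)
session.mount("https://", adapter)

response = session.get("https://example.com/", timeout=30)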
When I connect to the socket server from the client side (a React app), the socket client sends a repeated request every few seconds. These are generally GET requests, and most of the time they stay in pending state; sometimes the result of a request is just 2.
What do you think is causing these repeated requests after connecting or doing anything with the socket?
UPDATE
This problem occurs when I use a namespace. I tried all the suggested solutions, but the problem was not solved.
This is expected behavior when the transport in use is polling (long-polling).
What happens is that, by default, the transports parameter is ["polling", "websocket"] (on both client and server), and the order of the elements matters. So the first connection attempt is made via polling (which is faster to start than websocket), and then (or in parallel, I don't know the exact details) a websocket connection is attempted (it takes a little longer to establish but is faster for later communication).
If the websocket connection is established successfully, communication continues over it. But if an error occurs, or the connection takes too long to establish, or this transport is not present in the instance's options, then communication keeps being carried out through polling, which is what produces the various requests that remain pending. It is normal for them to remain pending: the server holds each request open until there is an update, so it can inform the requester immediately, without the client having to issue many quick requests to check the application's status.
Check the options you set for this connection to find out whether transport via websocket is enabled. Be careful when running the socket server behind a reverse proxy, as the reverse proxy needs to be properly configured to accept websocket connections, otherwise it won't work.
You can check the websocket requests in the browser developer tools, in the Network tab, by enabling the WS filter.
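As a small sketch of where this option lives (shown with the python-socketio client; the JavaScript client accepts an equivalent transports option, and the URL below is a placeholder):

import socketio

# Sketch: explicitly choose the allowed transports for the socket.io client.
# Pass only ["websocket"] to skip long-polling entirely.
sio = socketio.Client()
sio.connect("http://localhost:3000", transports=["websocket", "polling"])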
Here are some additional links if you want to read more:
https://socket.io/docs/v4/how-it-works/
https://socket.io/docs/v4/using-multiple-nodes/
https://socket.io/docs/v4/reverse-proxy/
https://ably.com/blog/websockets-vs-long-polling
I run a Sanic application and it raises an exception every few seconds, even without any request coming in.
sanic.exceptions.RequestTimeout: Request Timeout
How to fix the issue?
I would point you towards the documentation so that you understand what you are doing and why you are receiving that exception. Just blindly changing KEEP_ALIVE to False may not be what you want.
The KEEP_ALIVE config variable is set to True in Sanic by default. If you don’t need this feature in your application, set it to False to cause all client connections to close immediately after a response is sent, regardless of the Keep-Alive header on the request.
The amount of time the server holds the TCP connection open is decided by the server itself. In Sanic, that value is configured using the KEEP_ALIVE_TIMEOUT value. By default, it is set to 5 seconds; this is the same default as the Apache HTTP server and is a good balance between allowing enough time for the client to send a new request and not holding too many connections open at once. Do not exceed 75 seconds unless you know your clients are using a browser which supports TCP connections held open for that long.
The issue comes from the fact that the connection remains alive. Adding the following configuration seems to have fixed my issue:
from sanic.config import Config

# Disable HTTP keep-alive so every connection is closed after the response is sent
Config.KEEP_ALIVE = False
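If you still want the benefit of keep-alive and only need to reduce the noise, a gentler alternative is to tune the timeout instead of disabling the feature entirely. A sketch, assuming a recent Sanic version where configuration is set on the app instance (the app name and the 10-second value are illustrative):

from sanic import Sanic

# Sketch: keep keep-alive enabled but shorten how long idle connections are held.
app = Sanic("my_app")
app.config.KEEP_ALIVE = True
app.config.KEEP_ALIVE_TIMEOUT = 10  # seconds an idle connection stays open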
This applies to non-user-facing backend applications communicating with each other over HTTP. I'm wondering whether there is a guideline for a maximum timeout for a synchronous HTTP request. For example, let's say a request can take up to 10 minutes to complete. Can I simply create a worker thread on the client and, in that worker thread, invoke the request synchronously? Or should I implement the request asynchronously, i.e. return HTTP 202 Accepted, spin off a worker thread on the server side to complete the request, and figure out a way to send the results back, presumably through a messaging framework?
One of my concerns is whether it is safe to keep a socket open for such an extended period of time.
How long a socket connection can remain open (without activity) depends on the (quality of the) network infrastructure.
A client HTTP request waiting for an answer from a server results in an open socket connection without any data going through that connection for a while. A proxy server might decide to close such inactive connections after 5 minutes. Similarly, a firewall can decide to close connections that are open for more than 30 minutes, active or not.
But since you are in the backend, these cases can be tested (just let the server thread handling the request sleep for a certain time before giving an answer). Once you have verified that socket connections are not closed by any of the intermediate network components, it is safe to rely on them remaining open. Keep in mind, though, that network cables can be unplugged and servers can crash - you will always need a strategy to handle disruptions.
As for synchronous versus asynchronous: both are feasible and both have advantages and disadvantages. But what is right for you depends on a whole lot more than just the reliability of socket connections.
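For reference, the asynchronous variant usually follows the 202 Accepted pattern: the server acknowledges the job immediately and the client polls a status URL until the result is ready. A rough client-side sketch in Python (endpoint paths, JSON fields, and the polling interval are hypothetical):

import time
import requests

# Sketch of the 202 Accepted / status-polling pattern; all names are made up.
resp = requests.post("https://backend.example/jobs", json={"work": "..."}, timeout=30)
assert resp.status_code == 202
status_url = resp.headers["Location"]  # server tells us where to poll

while True:
    status = requests.get(status_url, timeout=30).json()
    if status["state"] == "done":
        print(status["result"])
        break
    time.sleep(5)  # poll every few seconds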
How do I find out from a socket client program that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. However, in my case I cannot use a reliable timeout value to get around this, because the recv would then also time out when the server is up but the response simply takes longer than the timeout value I have set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands it doesn't have, you know didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it can now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay". The other side responds, "yes". You can use long polling where the other side delays responding for a second or so while the command is in process. This allows it to return the results immediately rather than having to wait a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
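To make the heartbeat idea concrete, here is a bare-bones client-side sketch using pyzmq; the DEALER socket type, the endpoint, the assumption that the peer echoes a pong for every ping, and the three-missed-beats rule are all illustrative choices, not a prescribed design:

import time
import zmq

HEARTBEAT_INTERVAL = 1.0  # send a ping every second while the link is idle
MAX_MISSED = 3            # treat the peer as dead after three missed pongs

ctx = zmq.Context()
sock = ctx.socket(zmq.DEALER)  # DEALER can send again without waiting for a reply
sock.connect("tcp://server.example:5555")

missed = 0
while missed < MAX_MISSED:
    sock.send(b"ping")
    # Wait up to one interval for the peer's "pong".
    if sock.poll(timeout=int(HEARTBEAT_INTERVAL * 1000)):
        sock.recv()
        missed = 0
        time.sleep(HEARTBEAT_INTERVAL)  # stay quiet until the next heartbeat
    else:
        missed += 1

# No pong for MAX_MISSED intervals in a row: close, reconnect, and re-query
# any outstanding commands as described above.
sock.close(linger=0)
ctx.term()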
If the remote host goes down without sending you a TCP FIN packet, then you have no way to detect that directly. You can test that behaviour by firewalling a port after a connection has been established on that port: your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keepalives, which is meant to close a TCP connection after a given period of silence. If you can't specify such a timeout for your application, then there is no reliable way to use that. Your last resort might be to use features of the application protocol (can you name it?); if that protocol has no features for connection handling, you may have to invent something of your own on top of it.
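If you do have control over the socket, enabling TCP keepalives per connection looks roughly like this in Python on Linux (the host, port, and timing values are illustrative; TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific options):

import socket

# Sketch: enable per-connection TCP keepalives so a dead peer eventually
# causes the blocking recv() to fail instead of hanging forever.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # turn keepalives on
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before the first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the kernel gives up
s.connect(("server.example", 5555))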
I am using the node.js Request module to make multiple POST requests.
Does this module have a connection pool?
Can we manage this connection pool?
Can we close open connections?
How do we handle the socket hang up error?
Request does not have a connection pool. However, the http module (which request uses) does:
In node 0.5.3+ there is a new implementation of the HTTP Agent which is used for pooling sockets used in HTTP client requests.
By default, there is a limit of 5 concurrent connections per host. There is an issue with the current agent implementation that causes hang up errors to occur when you try to open too many connections.
You can either:
increase the maximum number of connections: http.globalAgent.maxSockets.
disable the agent entirely: pass {pool: false} to request.
There are several reasons for having an HTTP agent in the first place:
it prevents you from accidentally opening thousands of connections to a host (would be perceived as an attack).
connections in the pool will be kept opened for HTTP 1.1 keepalive.
most of the time, maxSockets really depends on the host you're targeting. node.js will be perfectly happy to open 1000 concurrent connections if the other host can handle it.
The behavior of the agent is explained in the node.js doc:
The current HTTP Agent also defaults client requests to using Connection:keep-alive. If no pending HTTP requests are waiting on a socket to become free the socket is closed. This means that node's pool has the benefit of keep-alive when under load but still does not require developers to manually close the HTTP clients using keep-alive.
The asynchronous architecture of node.js is what makes it very cheap to open new connections.