I run a Sanic application and it raises an exception every few seconds, even with no requests coming in.
sanic.exceptions.RequestTimeout: Request Timeout
How can I fix this?
I would point you towards the documentation so that you understand what you are doing and why you are receiving that exception. Just blindly changing KEEP_ALIVE to False may not be what you want.
The KEEP_ALIVE config variable is set to True in Sanic by default. If you don’t need this feature in your application, set it to False to cause all client connections to close immediately after a response is sent, regardless of the Keep-Alive header on the request.
The amount of time the server holds the TCP connection open is decided by the server itself. In Sanic, that value is configured using the KEEP_ALIVE_TIMEOUT value. By default, it is set to 5 seconds. This is the same default as the Apache HTTP server and is a good balance between allowing enough time for the client to send a new request and not holding too many connections open at once. Do not exceed 75 seconds unless you know your clients are using a browser which supports TCP connections held open for that long.
The issue comes from the fact that the connection remains alive. Adding the following configuration seems to have fixed my issue:
from sanic.config import Config
Config.KEEP_ALIVE = False
Related
When I connect to the socket server from the client side (a React app), the socket client sends a repeated request every few seconds. Generally the requests are GET requests, and most of the time they stay in a pending state. Sometimes the response to a request is just 2.
What do you think is causing these repeated requests after connecting or doing anything with the socket?
UPDATE
This problem occurs when I use a namespace. I tried all the suggested solutions, but the problem was not solved.
This is expected behavior when the option used for transport is polling (long-polling).
What happens is that, by default, the transports option is ["polling", "websocket"] (on both client and server), and the order of the elements matters. So the first connection attempt is made via polling (which is faster to start than websocket), and then (or in parallel, I don't know the working details) a connection attempt is made via websocket (this takes a little longer to establish but is faster for later communication).
If the websocket connection is established successfully, communication continues that way. But if an error occurs, or the connection takes too long to establish, or the websocket transport is not present in the instance's options, then communication keeps going over polling, which is what the various requests that remain pending are. It is normal for them to remain pending: the server holds them open so it can deliver an update and inform the requester immediately, without the client needing to issue many quick requests to check the application's status.
Check the instance parameters you set for this connection to find out whether the websocket transport is enabled. Be careful when running the socket server behind a reverse proxy: the proxy needs to be configured to accept websocket connections, otherwise the upgrade won't work.
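As a rough sketch of what those instance parameters can look like on the client side (assuming a socket.io v4 client; the URL and namespace are placeholders for your own setup):

// Sketch only: preferring the websocket transport on a socket.io client.
import { io } from "socket.io-client";

const socket = io("http://localhost:3000/my-namespace", {
  // The default is ["polling", "websocket"]: start with polling, then upgrade.
  // Listing only "websocket" skips long-polling entirely.
  transports: ["websocket"],
});

socket.on("connect", () => {
  // Shows which transport actually carried the connection.
  console.log("connected via", socket.io.engine.transport.name);
});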
You can check the websocket requests in the browser's dev tools, in the Network tab, by enabling the WS filter.
Here are some additional links for you to read more about:
https://socket.io/docs/v4/how-it-works/
https://socket.io/docs/v4/using-multiple-nodes/
https://socket.io/docs/v4/reverse-proxy/
https://ably.com/blog/websockets-vs-long-polling
I'm writing an HTTP/1.1 client that will be used against a variety of servers.
How can I decide on a reasonable default keep-alive timeout value, as in, how long the client should keep an unused connection open before closing it? Any value I think of seems extremely arbitrary.
First note that with HTTP keep-alive, both client and server can close an idle connection (i.e. no outstanding response, no unfinished request) at any time. This means in particular that the client cannot make the server keep the connection open by enforcing some timeout; all a client-side timeout does is limit how long the client will try to keep the connection open. The server might close the connection even before this client-side timeout is reached.
Based on this, there is no generic good value for the timeout, but there does not actually need to be one. The timeout is essentially used to limit resources, i.e. how many idle connections are kept open at the same time. If your specific use case never visits the same site again anyway, then using HTTP keep-alive would just be a waste of resources. If instead you don't know your specific usage pattern, you could simply place a limit on the number of open connections, i.e. close the longest-unused connection when the limit is reached and a new connection is needed. It might make sense to have an upper timeout of 10-15 minutes anyway, since after that time firewalls and NAT routers in between will usually have abandoned the connection state, so the idle connection will no longer work for new requests anyway.
But in any case, you also need to make sure you detect when the server closes a connection and then discard that connection from the list of reusable connections. And if you use HTTP keep-alive, you also need to be aware that the server might close the connection at the very moment you are trying to send a new request on an existing connection, in which case you need to retry that request on a newly created connection.
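For what it's worth, Node's built-in http.Agent already implements this kind of bounded keep-alive pool; a minimal sketch, with purely illustrative numbers rather than recommendations:

// Sketch: bounding idle keep-alive connections in a Node.js HTTP client.
const http = require('http');

const agent = new http.Agent({
  keepAlive: true,        // reuse TCP connections between requests
  maxSockets: 20,         // at most 20 concurrent connections per host
  maxFreeSockets: 5,      // keep at most 5 idle connections around
  timeout: 60 * 1000,     // client-side idle limit only; the server may close sooner
});

http.get({ host: 'example.com', path: '/', agent }, (res) => {
  res.resume(); // drain the response so the socket can go back to the pool
});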
In Chrome, Socket IO seems to stop transmitting data. Is there an internal reason for this?
I've tried a very simple client and a simple server side, but consistently the server stops receiving any emits after 5 minutes, then reconnects, and it's fine for another 5 minutes.
On top of the internal ping mechanism I have a polling mechanism which sends back session data every 20 seconds.
I don't use WebSocket with NodeJS or Socket.io, but I experienced the same behaviour with Jetty. It turns out that Jetty has an idle timeout defaulting to 5 minutes (300 seconds) for all WebSocket sessions. You could change the default idle timeout to an appropriate value, or ping/pong those connections before they time out.
In my situation, I decided to use ping/pong, as it also helps determine when the connection is no longer there. I observed that in some cases the connection was not closed even when the network was down.
According to the engine.io docs (engine.io is used by socket.io), the server has a default pingInterval of 25 seconds. So unless you inadvertently disabled or changed the default options, the ping/pong mechanism should already be in place.
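If you do need to tune the heartbeat, a minimal sketch for a socket.io server (the values shown match the documented defaults in recent socket.io versions; adjust them so the ping interval stays below any idle timeout in between):

// Sketch: adjusting the heartbeat so intermediaries don't drop idle connections.
const { Server } = require("socket.io");

const io = new Server(3000, {
  pingInterval: 25000,  // how often the server sends a ping
  pingTimeout: 20000,   // how long to wait for the pong before dropping the client
});

io.on("connection", (socket) => {
  console.log("client connected:", socket.id);
});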
I have a web application using warp, and while trying to query some large-ish data using curl I noticed the connection gets shut down after exactly 1 minute of transfer. I increased curl's own timeout, but this did not change anything, so I assume the limit is set on the server side.
Is it actually the case that there is a 60-second timeout on sending a response in warp, and if so, how can I control it?
We have a node.js server which implements a REST API as a proxy to a central server which has a slightly different, and unfortunately asymmetric REST API.
Our client, which runs in various browsers, asks the node server to get the tasks from the central server. The node server gets a list of all the task ids from the central one and returns them to the client. The client then makes two REST API calls per id through the proxy.
As far as I can tell, this stuff is all done asynchronously. In the console log, it looks like this when I start the client:
Requested GET URL under /api/v1/tasks/*: /api/v1/tasks/
This takes a couple seconds to get the list from the central server. As soon as it gets the response, the server barfs this out very quickly:
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/438
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/438
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/439
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/439
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/441
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/441
Then, each time a pair of these requests gets a result from the central server, another two lines are barfed out very quickly.
So it seems our node.js server is only willing to have six requests out at a time.
There are no TCP connection limits imposed by Node itself. (The whole point is that it's highly concurrent and can handle thousands of simultaneous connections.) Your OS may limit TCP connections.
It's more likely that you're either hitting some kind of limitation of your backend server, or you're hitting the built-in HTTP library's connection limit, but it's hard to say without more details about that server or your Node implementation.
Node's built-in HTTP library (and obviously any libraries built on top of it, which are most) maintains a connection pool (via the Agent class) so that it can utilize HTTP keep-alives. This helps increase performance when you're running many requests to the same server: rather than opening a TCP connection, making an HTTP request, getting a response, closing the TCP connection, and repeating, new requests can be issued on reused TCP connections.
In Node 0.10 and earlier, the HTTP Agent will only open 5 simultaneous connections to a single host by default. You can change this easily (assuming you've required the HTTP module as http):
http.globalAgent.maxSockets = 20; // or whatever
Node 0.12 sets the default maxSockets to Infinity.
You may want to keep some kind of connection limit in place. You don't want to completely overwhelm your backend server with hundreds of HTTP requests in under a second; performance will most likely be worse than if you just let the Agent's connection pool do its thing, throttling requests so as not to overload your server. Your best bet will be to run some experiments to see what the optimal number of concurrent requests is in your situation.
However, if you really don't want connection pooling, you can simply bypass the pool entirely by setting agent to false in the request options:
http.get({host:'localhost', port:80, path:'/', agent:false}, callback);
In this case, there will be absolutely no limit on concurrent HTTP requests.
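If you bypass the pool (or raise its limit) but still want an application-level cap, here is a rough sketch of throttling the per-id requests yourself; it is purely illustrative, and a library such as p-limit or async.queue does the same thing more robustly:

// Sketch: cap how many outgoing requests are in flight at once.
const http = require('http');

function runWithLimit(paths, limit, makeRequest) {
  let index = 0;
  function next() {
    if (index >= paths.length) return;
    const path = paths[index++];
    makeRequest(path, next); // when one finishes, start the next
  }
  // Start up to `limit` requests; completions keep the pipeline full.
  for (let i = 0; i < limit && i < paths.length; i++) next();
}

runWithLimit(['/api/v1/tasks/id/438', '/api/v1/workflow/id/438'], 10, (path, done) => {
  http.get({ host: 'localhost', port: 80, path, agent: false }, (res) => {
    res.resume();           // discard the body for this sketch
    res.on('end', done);
  }).on('error', done);     // count failures as finished so the queue keeps moving
});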
It's the limit on the number of concurrent connections in the browser:
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
I have upvoted the other answers, as they helped me diagnose the problem. The clue was that Node's socket limit was 5, and I was getting 6 at a time. 6 is the limit in Chrome, which is what I was using to test the server.
How are you getting data from the central server? "Node does not limit connections" is not entirely accurate when making HTTP requests with the http module. Client requests made in this way use the http.globalAgent instance of http.Agent, and each http.Agent has a setting called maxSockets which determines how many sockets the agent can have open to any given host; this defaults to 5.
So, if you're using http.request or http.get (or a library that relies on those methods) to get data from your central server, you might try changing the value of http.globalAgent.maxSockets (or modify that setting on whatever instance of http.Agent you're using); a sketch follows the links below.
See:
http.Agent documentation
agent.maxSockets documentation
http.globalAgent documentation
Options you can pass to http.request, including an agent parameter to specify your own agent
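A minimal sketch of that second option (creating your own http.Agent), so the higher limit applies only to requests to the central server; the host name and numbers are placeholders:

// Sketch: a dedicated agent scoped to the central server instead of mutating http.globalAgent.
const http = require('http');

const centralAgent = new http.Agent({
  keepAlive: true,   // reuse connections across the many per-id requests
  maxSockets: 20,    // allow up to 20 concurrent sockets to this host
});

http.get({ host: 'central.example.com', path: '/api/v1/tasks/', agent: centralAgent }, (res) => {
  res.resume();      // handle the response as usual
});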
Node js can handle thousands of incoming requests - yes!
But when it comes to outgoing requests, every request may involve a DNS lookup, and DNS lookups, disk reads, etc. are handled by libuv, which is written in C++. The default thread pool size for each Node process is 4 threads.
If all 4 threads are busy with HTTPS requests (DNS lookups), other requests are queued. That is why, no matter how brilliant your code might be, you sometimes see only 6 or fewer concurrent outgoing requests completed.
Learn about DNS caching to reduce the number of DNS lookups, and consider increasing the libuv thread pool size. If you use PM2 to manage your Node processes, they have good documentation on environment variables and how to inject them. What you are looking for is the environment variable UV_THREADPOOL_SIZE=4.
You can set the value anywhere between 1 and the maximum of 1024. But keep in mind that libuv's limit of 1024 applies across all event loops.
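A hedged sketch of raising the libuv thread pool size; UV_THREADPOOL_SIZE has to be set before the pool is first used, so either export it in the environment or assign it at the very top of the entry file:

// Sketch: raise the libuv thread pool size for this process.
// Must run before anything touches the pool (fs, dns.lookup, crypto, zlib, ...).
process.env.UV_THREADPOOL_SIZE = 16;   // illustrative value, between 1 and 1024

const dns = require('dns');

// dns.lookup() goes through the libuv thread pool, so a bigger pool lets more
// lookups be in flight at the same time.
dns.lookup('example.com', (err, address) => {
  if (err) throw err;
  console.log('resolved to', address);
});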
I have seen the same problem in my server. It was only processing 4 requests.
As explained already, from 0.12 maxSockets defaults to Infinity. That can easily overwhelm the server. Limiting the number of sockets, for example with
http.globalAgent.maxSockets = 20;
solved my problem.
Are you sure it just returns the results to the client? Node processes everything in one thread, so if you do some fancy response parsing or anything else that doesn't yield, it will block all your requests.
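A minimal sketch of what that looks like (a hypothetical handler; any synchronous, CPU-bound work has the same effect):

// Sketch: synchronous work in a handler stalls every other request,
// because Node runs all JavaScript on a single thread.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/slow') {
    const end = Date.now() + 3000;
    while (Date.now() < end) { /* busy-wait: nothing else is served meanwhile */ }
  }
  res.end('done\n');
}).listen(8080);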