I'm building out an API using Hapi.js. Some of my code is pushing small amounts of data to the API. The issue seems to be that the pusher code is swamping the API and I'm getting ECONNRESET errors -- which means messages are getting lost. I'm planning on installing a rate-limiter in the pusher code, probably node-rate-limiter (link).
The question is, what should I set that limit to? I want to max out performance for this app, so I could easily be attempting to send in thousands of messages per hour. The data just gets dumped into redis, so I doubt the code in the API will be an issue but I still need to get an idea of what kind of message rate Hapi is comfortable with. Do I need to just start with something reasonable and see how it goes? Maybe 1 message per 10 milliseconds?
var Hapi = require('hapi');

var server = new Hapi.Server();

server.connection({
  port: config.port,
  routes: {
    cors: {
      origin: ['*']
    }
  }
});

server.route({ method: 'POST', path: '/update/{id}', ... });
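For reference, this is roughly the throttle I have in mind on the pusher side (just a sketch; sendUpdate() stands in for my actual POST code and the interval is the value I'm unsure about):

// Sketch of the pusher-side throttle (not real code yet).
// sendUpdate() is a placeholder for the actual POST to /update/{id}.
var pending = [];
var intervalMs = 10; // i.e. at most 1 message per 10 ms -- the number in question

setInterval(function () {
  var msg = pending.shift();
  if (msg) {
    sendUpdate(msg); // placeholder: performs the POST against the API
  }
}, intervalMs);

function queueMessage(msg) {
  pending.push(msg);
}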
There is no generic answer to how many requests per second you can process. It depends upon many things in your configuration and code such as:
Type and performance of server hardware
The amount of CPU time an average request uses
Whether your requests are CPU or disk bound. If disk bound, then it depends a lot on your database and disk performance.
Whether you implement clustering to use multiple cores (if CPU bound)
Whether you're on shared infrastructure or not
The max number of incoming connections your server is configured for
So, there is no absolute answer here that works for everyone. If you don't have some sort of design problem that is artificially limiting your concurrency, then the best way to discover what your server can actually handle is to build a test engine and test it. Find where and how it fails and either fix those issues to extend the scalability further or implement protections to avoid hitting that limit.
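As a starting point, such a test engine can be as simple as firing batches of POSTs at the API and counting how many fail (a sketch only; the host, port, and payload are assumptions, not taken from the question):

// Fire `concurrency` POSTs at the API at once and report successes vs. failures
// (e.g. ECONNRESET). Adjust host, port, and payload to the real setup.
var http = require('http');

function runBatch(concurrency, callback) {
  var done = 0, ok = 0, failed = 0;

  for (var i = 0; i < concurrency; i++) {
    var req = http.request({
      method: 'POST',
      host: 'localhost',
      port: 3000, // assumption: wherever config.port points
      path: '/update/' + i
    }, function (res) {
      res.resume(); // drain the response
      ok++;
      if (++done === concurrency) callback(ok, failed);
    });

    req.on('error', function () {
      failed++;
      if (++done === concurrency) callback(ok, failed);
    });

    req.end(JSON.stringify({ value: i }));
  }
}

// Ramp the batch size up until errors start to appear.
runBatch(100, function (ok, failed) {
  console.log('ok:', ok, 'failed:', failed);
});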
Note: When a public API makes rate limiting choices, it is typically done on a per-client basis and the limit is set to a value that seems to be a little above what a reasonable client would be doing. This is more to allow fair use of the server by many clients so that one single client does not consume too much of the overall resource. If issuing thousands of small requests from a single client is not considered "good practice" in using your API, then you can just pick a number that is much smaller than that for a per-client limit.
Note: You may also want to make it easier for clients by having your API let them upload multiple messages in one API request rather than lots of API requests.
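For example, a batch route in Hapi could look roughly like this (a sketch only; the payload shape and the redis client are assumptions, not part of the question's code):

// Hypothetical batch route: accept an array of messages in one POST and push
// them all to redis. The { messages: [...] } payload shape is an assumption.
server.route({
  method: 'POST',
  path: '/updates',
  handler: function (request, reply) {
    var messages = request.payload.messages || [];

    messages.forEach(function (msg) {
      redis.lpush('updates', JSON.stringify(msg)); // assumes an existing redis client
    });

    reply({ accepted: messages.length });
  }
});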
Related
The example Scott Hanselman gives on his blog for using Parallel.ForEachAsync in .NET 6 specifies the value of MaxDegreeOfParallelism as 3.
However, if unspecified, the default MaxDegreeOfParallelism is ProcessorCount. This makes sense for CPU bound work, but for asynchronous I/O bound work, it seems like a poor choice for a default value.
If I'm doing something like in Scott's example below, but I want to do it as fast as possible, how should I determine the best value to use for MaxDegreeOfParallelism? Is it reasonable to specify this as int.MaxValue and just assume the TaskScheduler will do the most sensible thing when it comes to scheduling the work on the ThreadPool?
ParallelOptions parallelOptions = new()
{
    MaxDegreeOfParallelism = 3
};

await Parallel.ForEachAsync(userHandlers, parallelOptions, async (uri, token) =>
{
    var user = await client.GetFromJsonAsync<GitHubUser>(uri, token);
    Console.WriteLine($"Name: {user.Name}\nBio: {user.Bio}\n");
});
IMHO, the only way to get the number is... testing.
For HTTP work there are two parties involved:
your code
the remote side that does the work for you.
Your "fast" may be too fast for the remote side. This can be because of resources and/or throttling.
Note on the default
The default, which results in ProcessorCount, will depend on the machine the code runs on, and if you run your code in the cloud this number may be different from what's on your beefy laptop.
This can lead to unexpected differences between non-prod and prod environments.
GitHub specific
GitHub.com has a 5,000 requests per hour limit for non-enterprise users (from here), and there is also this:
In order to provide quality service on GitHub, additional rate limits may apply to some actions when using the API. For example, using the API to rapidly create content, poll aggressively instead of using webhooks, make multiple concurrent requests, or repeatedly request data that is computationally expensive may result in secondary rate limiting.
In Best practices for integrators we can read
Dealing with secondary rate limits
Secondary rate limits are another way we ensure the API's availability. To avoid hitting this limit, you should ensure your application follows the guidelines below.
...
Make requests for a single user or client ID serially. Do not make requests for a single user or client ID concurrently.
How many simultaneous requests can I make with the request package?
I am expecting data back from every request confirming the request was received and processed successfully. Is this hardware or OS dependent? Where do I start looking?
More recent versions of node.js do not enforce a limit on the number of outgoing requests (older versions did). If you were literally trying to make millions of outgoing connections at the same time, then you would probably hit a limit on your own node.js server that would be OS specific. But, the practical limit is more likely going to be determined by the target host.
Since all your requests are being sent to the same host, the more likely limit will be determined by the server you are making the requests to. It will have some sort of limit for how many simultaneous requests it can have "in-flight" at the same time before it starts refusing new connections. What that number is depends entirely upon how the server is configured and built. For http://www.google.com, the number is probably hundreds of thousands or millions of requests because they have a huge server farm and requests are balanced across all of them. For some simple single CPU server, the limit would obviously be much smaller than that.
In addition, there is little use in sending zillions of requests to a single CPU server because it won't be able to work on all of them at once anyway.
So, if you want to know what would work best for a given target host, you would have to set up an adjustable test harness so you could test scenarios where you send from 1, 2, 5, 10, 50, 100, 200, 500, 1000 at a time and see what the average response time is and where you start to get errors (if any).
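For illustration, such a harness could look roughly like this (a sketch; TARGET_URL is an assumption and should point at whatever host you're actually testing):

// For each concurrency level, send that many requests at once and report the
// average response time and how many requests errored out.
const http = require('http');

const TARGET_URL = 'http://localhost:3000/'; // assumption: the host under test
const LEVELS = [1, 2, 5, 10, 50, 100, 200, 500, 1000];

function timedRequest() {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    http.get(TARGET_URL, res => {
      res.resume(); // drain the body
      res.on('end', () => resolve(Date.now() - start));
    }).on('error', reject);
  });
}

(async () => {
  for (const level of LEVELS) {
    const results = await Promise.allSettled(
      Array.from({ length: level }, () => timedRequest())
    );
    const ok = results.filter(r => r.status === 'fulfilled');
    const avg = ok.reduce((sum, r) => sum + r.value, 0) / (ok.length || 1);
    console.log(level + ' in flight: avg ' + avg.toFixed(0) + ' ms, errors: ' + (level - ok.length));
  }
})();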
If you don't want to do any of that type of testing, then a reasonably safe choice that doesn't attempt to fully optimize things is to put no more than 5 requests in flight at the same time.
You can either build something yourself to manage N requests in flight at a time or you can use one of the existing libraries that will do that for you. The Bluebird promise library has a concurrency option on some of its functions, such as Promise.map(), which will automatically do that for you for whatever concurrency value you set. The async library also has something similar.
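For example, with Bluebird (assuming urls is your array of request targets and makeRequest() is whatever returns a promise per request):

var Promise = require('bluebird');

// At most 5 requests in flight at any one time; Bluebird handles the scheduling.
Promise.map(urls, function (url) {
  return makeRequest(url); // placeholder: your promise-returning request function
}, { concurrency: 5 }).then(function (results) {
  console.log('all done:', results.length);
});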
If you want more specific help crafting the code to manage how many requests are in flight at a time or to build a test harness for it, please show us some of your code for the source of all the requests so we have some idea how that works (if it's a giant array of requests or what the source of the URLs is).
I have a Node.js application which opens a file, scans each line and makes a REST call that involves Couchbase for each line. The average number of lines in a file is about 12 to 13 million. Currently without any special settings my app can completely process ~1 million records in ~24 minutes. I went through a lot of questions, articles, and Node docs but couldn't find out any information about following:
Where's the setting that says node can open X number of http connections/sockets concurrently? And can I change it?
I had to regulate the file processing because the file reading is much faster than the REST calls, so after a while there are too many open REST requests, which clogs the system and it goes out of memory... So now I read 1000 lines, wait for the REST calls for those to finish, and then resume (I am doing it using the pause and resume methods on the stream). Is there a better alternative to this?
What other possible optimizations can I perform so that it becomes faster than this? I know about the GC-related config that prevents frequent halts in the app.
Is using "cluster" module recommended? Does it work seamlessly?
Background: We have an existing Java application that does exactly the same thing by spawning 100 threads, and it is able to achieve slightly better throughput than the current node counterpart. But I want to try node since the two operations in question (reading a file and making a REST call for each line) seem like a perfect situation for a node app, since they both can be async in node, whereas the Java app makes blocking calls for these...
Any help would be greatly appreciated...
Generally you should break your questions on Stack Overflow into pieces. Since your questions are all getting at the same thing, I will answer them. First, let me start with the bottom:
We have an existing Java application that does exactly the same thing by spawning 100 threads ... But I want to try node since the two operations in question ... seem like a perfect situation for a node app, since they both can be async in node, whereas the Java app makes blocking calls for these.
Asynchronous calls and blocking calls are just tools to help you control flow and workload. Your Java app is using 100 threads, and therefore has the potential of doing 100 things at a time. Your Node.js app may have the potential of doing 1,000 things at a time, but some operations will be done in JavaScript on a single thread and other I/O work will pull from a thread pool. In any case, none of this matters if the backend system you're calling can only handle 20 things at a time. If your system is 100% utilized, changing the way you do your work certainly won't speed it up.
In short, making something asynchronous is not a tool for speed, it is a tool for managing the workload.
Where's the setting that says node can open X number of http connections/sockets concurrently? And can I change it?
Node.js' HTTP client automatically has an agent, allowing you to utilize keep-alive connections. It also means that you won't flood a single host unless you write code to do so. http.globalAgent.maxSockets = 1000 is what you want, as mentioned in the documentation: http://nodejs.org/api/http.html#http_agent_maxsockets
I had to regulate the file processing because the file reading is much faster than the REST calls, so after a while there are too many open REST requests, which clogs the system and it goes out of memory... So now I read 1000 lines, wait for the REST calls for those to finish, and then resume (I am doing it using the pause and resume methods on the stream). Is there a better alternative to this?
Don't use .on('data') for your stream, use .on('readable'). Only read from the stream when you're ready. I also suggest using a transform stream to read by lines.
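Something along these lines (a rough sketch; processLine() stands in for the REST call made per line):

var fs = require('fs');
var stream = require('stream');

// Transform stream that splits incoming chunks into lines (objectMode output).
var lineSplitter = new stream.Transform({
  objectMode: true,
  transform: function (chunk, encoding, done) {
    this._buffer = (this._buffer || '') + chunk.toString();
    var parts = this._buffer.split('\n');
    this._buffer = parts.pop(); // keep the trailing partial line
    for (var i = 0; i < parts.length; i++) {
      this.push(parts[i]);
    }
    done();
  },
  flush: function (done) {
    if (this._buffer) this.push(this._buffer);
    done();
  }
});

var lines = fs.createReadStream('data.txt').pipe(lineSplitter);

// Pull lines only when you're ready for more work instead of reacting to 'data'.
lines.on('readable', function () {
  var line;
  while ((line = lines.read()) !== null) {
    processLine(line); // placeholder: the REST call for this line
  }
});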
What other possible optimizations can I perform so that it becomes faster than this? I know about the GC-related config that prevents frequent halts in the app.
This is impossible to answer without detailed analysis of your code. Read more about Node.js and how its internals work. If you spend some time on this, the optimizations that are right for you will become clear.
Is using "cluster" module recommended? Does it work seamlessly?
This is only needed if you are unable to fully utilize your hardware. It isn't clear what you mean by "seamlessly", but each process is its own process as far as the OS is concerned, so it isn't something I would call "seamless".
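If you do find that one process can't keep your hardware busy, the basic shape of a cluster setup is small (a sketch; startApp() stands in for your existing entry point):

// One worker per CPU core, each running the same application code.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(function () {
    cluster.fork();
  });
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' exited, starting another');
    cluster.fork();
  });
} else {
  startApp(); // placeholder for the file-reading / REST-calling code
}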
By default, node uses a socket pool for all http requests and the default global limit is 5 concurrent connections per host (these are re-used for keepalive connections however). There are a few ways around this limit:
Create your own http.Agent and specify it in your http requests:
var http = require('http');

var agent = new http.Agent({ maxSockets: 1000 });

http.request({
  // ...
  agent: agent
}, function (res) { });
Change the global/default http.Agent limit:
http.globalAgent.maxSockets = 1000;
Disable pooling/connection re-use entirely for a request:
http.request({
  // ...
  agent: false
}, function (res) { });
I'm building an application using tag subscriptions in the real-time API and have a question related to capacity planning. We may have a large number of users posting to a subscribed hashtag at once, so the question is how often will the API actually POST to our subscription processing endpoint? E.g., if 100 users post to #testhashtag within a second or two, will I receive 100 POSTs or does the API batch those together as one update? A related question: is there a maximum rate at which POSTs can be sent (e.g., one per second or one per ten seconds, etc.)?
The Instagram API seems to lack detailed information about both how many updates are sent and what the rate limits are. From the API docs:
Limits
Be nice. If you're sending too many requests too quickly, we'll send back a 503 error code (server unavailable).
You are limited to 5000 requests per hour per access_token or client_id overall. Practically, this means you should (when possible) authenticate users so that limits are well outside the reach of a given user.
In other words, you'll need to check for a 503 and throttle your application accordingly. No information I've seen for how long they might block you, but it's best to avoid that completely. I would advise you to manage this by placing a rate limiting mechanism in your own code, such as pushing your API requests through a queue with rate control. That will also give you the benefit of a retry if you're throttled, so you won't lose any of the updates.
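A very small sketch of that idea (the rate and doApiCall() are placeholders, not values Instagram publishes):

// Rate-controlled queue for outgoing Instagram API calls. Failed calls are
// re-queued, so a 503 just delays the work instead of losing the update.
var queue = [];
var REQUESTS_PER_SECOND = 1; // assumption: tune to stay well under the quota

setInterval(function () {
  var job = queue.shift();
  if (!job) return;

  doApiCall(job, function (err, statusCode) {
    if (err || statusCode === 503) {
      queue.push(job); // retry later instead of dropping it
    }
  });
}, 1000 / REQUESTS_PER_SECOND);

function enqueue(job) {
  queue.push(job);
}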
Moreover, a mechanism such as a queue in the case of real-time updates is further relevant because of the following from the API docs:
You should build your system to accept multiple update objects per payload - though often there will be only one included. Also, you should acknowledge the POST within a 2 second timeout--if you need to do more processing of the received information, you can do so in an asynchronous task.
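In other words, acknowledge straight away and defer the real work; for example (an Express-style handler, shown purely for illustration):

// Acknowledge within the 2 second window, then hand the payload to the
// rate-controlled queue sketched above (or any other async worker).
app.post('/instagram/callback', function (req, res) {
  res.sendStatus(200); // acknowledge immediately

  (req.body || []).forEach(enqueue); // assumes a JSON body parser is in place
});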
Regarding the number of updates, the API can send you 1 update or many. The problem with this is that you can absolutely murder your API calls, because I don't think you can batch calls to specific media items, at least not using the official Python or Ruby clients or the API console, as far as I have seen.
This means that if you receive 500 updates, either as 1 request to your server or split into many, it won't matter, because either way you need to go and fetch these items. From what I observed in a real application, these seemed to count against our quota; however, the quota itself seemed to be consumed erratically. That is, sometimes we saw no calls at all consumed, other times the available calls dropped by far more than we actually made. My advice is to be conservative and take the 5000 as a best guess rather than an absolute. You can check the remaining calls by parsing one of the headers they send back.
Use common sense, don't be stupid, and a rate limiting mechanism should keep you safe, with the added benefit of dealing with failures due to outages (this happens more than you may think), network hiccups, and accidental rate limiting. You could try to be tricky and use different API keys in a pooling mechanism, but this is likely a violation of the TOS, and if they are doing anything via IP, you'd have to split this up across different machines with different IPs.
My final advice would be to restructure your application to not rely completely on the subscription mechanism. It's less than reliable and very expensive API-wise. It's only truly useful if you just need to do something in your app that doesn't require calling back to Instagram, your number of items is small, or you can filter out the majority of items to avoid calling back to Instagram except when a specific business rule is matched.
Instead, you can do things like query the tag or the user (e.g., recent media) and scale it out that way. Normally this allows you to grab 100 items with 1 request rather than 100 items with 100 requests. If you really want to be cute, you could at least merge the subscription notifications asynchronously and combine similar ones into a single batched request, bucketing them by duplicate characteristics such as the tag. Sort of like a map/reduce, but on a small data set. You could of course do an actual map/reduce from time to time on your own data as another way of keeping things async. Again, be careful not to thrash Instagram, but rather just use map/reduce to batch out your calls in a way that's useful to your app.
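A rough sketch of that merging idea (the field name and the 5 second window are assumptions; adjust to whatever your subscription payload actually contains):

// Buffer tag notifications for a short window, then make one "recent media"
// request per tag instead of one request per notification.
var pendingTags = {};

function onNotification(update) {
  pendingTags[update.object_id] = true; // assumption: tag subscriptions keyed by object_id
}

setInterval(function () {
  Object.keys(pendingTags).forEach(function (tag) {
    enqueue({ type: 'recent-media', tag: tag }); // one batched fetch per tag, via the queue above
  });
  pendingTags = {};
}, 5000);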
Hope that helps.
I've been trying to implement a simple long polling service for use in my own projects and maybe release it as a SaaS if I succeed. These are the two approaches I've tried so far, both using Node.js (polling PostgreSQL in the back end).
1. Periodically check all the clients in the same interval
Every new connection is pushed onto a queue of connections, which is walked through on an interval.
var queue = [];

function acceptConnection(req, res) {
  res.setTimeout(5000);
  queue.push({ req: req, res: res });
}

function checkAll() {
  queue.forEach(function (client) {
    // respond if there is something new for the client
  });
}

// this could be replaced with a timeout after all the clients are served
setInterval(checkAll, 500);
2. Check each client at a separate interval
Every client gets its own ticker, which checks for new data.
function acceptConnection(req, res) {
  // something which periodically checks data for the client
  // and responds if there is anything new
  new Ticker(req, res);
}
While this keeps the minimum latency for each client lower, it also introduces overhead by setting a lot of timeouts.
Conclusion
Both of these approaches solve the problem quite easily, but I don't feel that this will scale up easily to something like 10 million open connections, especially since I'm polling the database on every check for every client.
I thought about doing this without the database and just immediately broadcasting new messages to all open connections, but that will fail if a client's connection dies for a few seconds while the broadcast is happening, because it is not persistent. That means I basically need to be able to look up messages in history when the client polls for the first time.
I guess one step up here would be to have a data source where I can subscribe to new data coming in (CouchDB change notifications?), but maybe I'm missing something in the big picture here?
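For example, the kind of thing I'm imagining with PostgreSQL's LISTEN/NOTIFY via the pg client (names made up, just to show the shape):

const { Client } = require('pg');

const client = new Client(); // connection settings come from the environment

client.connect()
  .then(() => client.query('LISTEN new_messages')) // hypothetical channel name
  .then(() => {
    client.on('notification', msg => {
      // Respond to the waiting long-poll clients instead of polling the database.
      broadcast(JSON.parse(msg.payload)); // placeholder for answering queued responses
    });
  });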
What is the usual approach for doing highly scalable long polling? I'm not specifically bound to Node.js; I'd actually welcome any other suggestion with reasoning behind it.
Not sure if this answers your question, but I like the approach of PushPin (+ explanation of concepts).
I love the idea (using a reverse proxy and communicating with return codes + delayed REST return requests), but I do have reservations about the implementation. I might be underestimating the problem, but it seems to me that the technologies used are a bit of an overkill. I'm not sure whether I will use it yet, as I would prefer a more lightweight solution, but I find the concept phenomenal.
Would love to hear what you used eventually.
Since you mentioned scalability, I have to get a little bit theoretical, as the only practical measure is load testing. Therefore, all I can offer is advice.
Generally speaking, once-per anything is bad for scalability. Especially once-per-connection or once-per-request since that makes part of your app proportional to the amount of traffic. Node.js removed the thread-per-connection dependency with its single-threaded asynchronous I/O model. Of course, you can't completely eliminate having something per-connection, like a request and response object and a socket.
I suggest avoiding anything that opens a database connection for every HTTP connection. This is what connection pools are for.
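For example, with node-postgres a single shared pool serves every HTTP connection (a sketch; the pool size is an arbitrary choice):

const { Pool } = require('pg');

const pool = new Pool({ max: 20 }); // at most 20 database connections in total

function checkForUpdates(clientId) {
  // Every long-poll check shares the same pool; queries queue up when all
  // connections are busy instead of opening new ones per HTTP connection.
  return pool.query('SELECT * FROM messages WHERE client_id = $1', [clientId])
    .then(result => result.rows);
}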
As for choosing between your two options above, I would personally go for the second choice because it keeps each connection isolated. The first option uses a loop over connections, which means actual execution time per connection. It's probably not a big deal given that I/O is asynchronous, but given a choice between an iteration-per-connection and the mere existence of an object-per-connection, I would prefer to just have an object. Then I have less to worry about when suddenly there are 10,000 connections.
The C10K problem seems like a good reference for this, though this is really personal judgement to be honest.
http://www.kegel.com/c10k.html
http://en.wikipedia.org/wiki/C10k_problem