I need to make a POST HTTP request at an exact timestamp in the future, as accurately as possible, down to the millisecond. But there is network latency as well. How can I achieve such a goal?
setTimeout is not enough here, because it always takes some time to fire, and varying network latency can then make the request arrive late. Firing the request before the target timestamp may instead make it arrive too early.
My goal is for the request to be guaranteed to reach the server after the target timestamp, but as soon as possible after it. Could you suggest any solutions with Node.js?
The best you can do in nodejs (which is not a real-time system) is to do the following:
Premeasure the expected latency so you know roughly how far in advance to send the request (see the sketch after this list).
Use setTimeout() to schedule the send at precisely the one-way latency time before your target time. There is no other mechanism in nodejs that would be more precise.
If your request involves a DNS lookup, you can pre-resolve the IP address for your hostname and take the DNS lookup time out of your request cycle, or at least prime the local DNS cache.
Create a dedicated nodejs program that does nothing else - so its event loop will not be doing anything else at the time the setTimeout() needs to run. You could run this as a child_process from your larger program if desired.
Run a number of tests to see how the timing works and, if you are consistently off by some margin, then adjust your latency offset.
You can develop a regular latency test to determine if the latency changes with time.
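For illustration, here is a minimal sketch of that approach, assuming a placeholder URL, payload and target timestamp; the one-way latency is crudely estimated as half of a measured round trip and should be tuned from your own tests:

const https = require('https');
const dns = require('dns');

const TARGET_URL = new URL('https://example.com/endpoint');  // placeholder
const TARGET_TS = Date.now() + 60 * 1000;                    // placeholder target time

// Pre-resolve the hostname so the DNS lookup is out of the critical path.
dns.lookup(TARGET_URL.hostname, (err, address) => {
  if (!err) console.log('resolved to', address);
});

// Crudely estimate one-way latency as half of a measured round trip.
function measureRoundTrip() {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get(TARGET_URL, res => {
      res.resume();  // drain the response body
      res.on('end', () => resolve(Date.now() - start));
    }).on('error', reject);
  });
}

(async () => {
  const oneWay = (await measureRoundTrip()) / 2;   // adjust from your own tests
  const fireAt = TARGET_TS - oneWay;               // presend by the one-way latency
  setTimeout(() => {
    const req = https.request(TARGET_URL, { method: 'POST' }, res => res.resume());
    req.end(JSON.stringify({ hello: 'world' }));   // placeholder payload
  }, Math.max(0, fireAt - Date.now()));
})();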
As others have said, there is no way to predict what the natural response time will be of the target server (how long it takes to start processing your request from the moment your network packets arrive there). If lots of incoming requests are all racing for the same time slot, then your request will get interleaved in among all the others and served in some order that you do not control.
Other things you can consider. If the target server supports the latest http specifications, then you can have a pre-established http connection with the host (perhaps targeting some other endpoint) that will be kept alive for you to send your precise timing request on. This would take some experimentation to figure out what the target host supports and if this would work.
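A rough sketch of that last idea, assuming the host keeps idle connections open and using made-up endpoints: reuse a keep-alive agent so only the timed request itself travels over an already-established connection.

const https = require('https');

// Placeholders: when to fire and the estimated one-way latency.
const targetTs = Date.now() + 60 * 1000;
const estimatedOneWayMs = 15;

// One keep-alive agent keeps the TCP/TLS connection open between requests,
// so the timed request skips connection setup (if the host allows it).
const agent = new https.Agent({ keepAlive: true, maxSockets: 1 });

function post(path, body) {
  return new Promise((resolve, reject) => {
    const req = https.request(
      { host: 'example.com', path, method: 'POST', agent },  // placeholder host
      res => { res.resume(); res.on('end', resolve); }
    );
    req.on('error', reject);
    req.end(body);
  });
}

// Warm the connection up against some harmless endpoint first...
post('/health', '').then(() => {
  // ...then fire the real request over the already-open socket.
  const delay = Math.max(0, targetTs - estimatedOneWayMs - Date.now());
  setTimeout(() => post('/the-real-endpoint', '{"data":1}'), delay);
});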
Related
Let us say a gRPC client makes two requests, R1 and R2, to a gRPC server, one after the other (assume there is no significant time gap, i.e. R2 is made while R1 has still not been served). Also assume that R1 takes much more time than R2.
In this case, should I expect R2's response first, since it takes less time, or should I expect R1's response first, since that request was made before R2? What will happen, and why?
From what I have observed, I think requests are served in FCFS fashion, so R1's response will be received by the client first and then R2's, but I am not sure.
Theoretically, nothing prevents the server and client from processing gRPC requests in parallel. A gRPC connection is made over HTTP/2, which can handle multiple requests at once. So yes - if the server doesn't use some specific synchronization or limiting mechanism, the requests will be processed with overlap. If the server's resources or policy don't allow it, then they will be processed one by one. I can also add that a request can have a timeout (deadline) after which it is cancelled, so a long wait can lead to cancellation and no processing at all.
All requests should be processed in parallel. The gRPC architecture in the Java implementation, for example, is divided into 2 "parts":
The event loop runs in a worker thread group - it is similar to what we have in reactive implementations: one thread per core to handle the incoming requests.
The request processing is done in a dedicated thread which will be created using the CachedThreadPool system by default.
For single-threaded languages like JavaScript, I am not sure how they do it, but I would guess it is done on the same thread, and therefore it would end up queuing the requests.
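As a client-side illustration in Node.js (using @grpc/grpc-js and @grpc/proto-loader; the proto file, package, service and method names below are made up), the two calls are issued back to back over one HTTP/2 connection and their callbacks can fire in either order; the per-call deadline shows the cancellation behaviour mentioned above:

const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Hypothetical proto: package example; service Greeter { rpc DoWork(...) returns (...); }
const packageDef = protoLoader.loadSync('greeter.proto');
const proto = grpc.loadPackageDefinition(packageDef).example;

const client = new proto.Greeter('localhost:50051', grpc.credentials.createInsecure());

// Optional per-call deadline: the call is cancelled if no reply arrives in time.
const options = { deadline: new Date(Date.now() + 5000) };

// R1 (slow) and R2 (fast) are sent back to back over the same connection.
client.DoWork({ id: 'R1', heavy: true }, new grpc.Metadata(), options, (err, reply) => {
  console.log('R1 finished', err || reply);
});
client.DoWork({ id: 'R2', heavy: false }, new grpc.Metadata(), options, (err, reply) => {
  console.log('R2 finished', err || reply);  // may well log before R1
});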
We have our HTTP layer served by Play Framework in Scala. One of our APIs is something of the form:
POST /customer/:id
Requests are sent by our UI team which calls these APIs through a React Framework.
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests and so our persistent layer (MySQL) reaches an inconsistent state due to the difference in the timestamp of the handling of these requests.
Is it possible to configure some sort of thread affinity in Play Scala? What I mean by that is, can I configure Play to ensure that requests of a particular customer ID are handled by the same thread throughout the life-cycle of the application?
A "batch" means putting several API calls into a single HTTP request. A batch request is a set of commands in one HTTP request, like here: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
You describe it as
The issue is that, sometimes, the requests are issued in batches, successively one after the other for the same customer ID. When this happens, different threads process these requests and so our persistent layer (MySQL) reaches an inconsistent state due to the difference in the timestamp of the handling of these requests.
This is a set of concurrent requests. The Play framework usually works as a stateless server, and I assume you also organize it as stateless. There is nothing that binds one request to another, so you can't control the order. Well, you can, if you create a special protocol: an "opening batch request", request #1, #2, ..., then a "closing batch request". You would need to check that all of the requests arrived correctly, and you would also need to run some stateful threads and some queues... Akka could help with this, but I am pretty sure you won't want to do it.
This issue is not specific to the Play framework; you will reproduce it with any server. For the general case, see: Is it possible to receive out-of-order responses with HTTP?
You can go either way:
1. "Batch" the commands into one request
You need to change the client so it packs the "batch" of commands into one request. You also need to change the server so it processes all the commands from the batch one after another (a sketch follows below).
Example of the requests: https://developers.facebook.com/docs/graph-api/making-multiple-requests/
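A minimal sketch of that idea (the batch endpoint and payload shape are made up): the client collects all the commands for one customer into a single POST, and the server applies them in order inside one handler.

// Client side (browser): send all commands for a customer in one request,
// so the server can apply them in order within a single handler.
const commands = [
  { op: 'setName', value: 'Alice' },
  { op: 'setPlan', value: 'pro' }
];

fetch('/customer/42/batch', {   // hypothetical batch endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ commands })
});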
2. "Pipeline" requests
You need to change the client so it sends the next request only after receiving the response to the previous one (see the sketch below).
Example: Is it possible to receive out-of-order responses with HTTP?
"The solution to this is to pipeline Ajax requests, transmitting them serially. ... The next request sent only after the previous one has returned successfully."
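For example, on the client side (browser fetch, endpoint shape taken from the question), awaiting each response before issuing the next one serializes the updates:

// Send the updates strictly one after another: each request starts
// only after the previous response has come back successfully.
async function sendSequentially(customerId, updates) {
  for (const update of updates) {
    const res = await fetch(`/customer/${customerId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(update)
    });
    if (!res.ok) throw new Error(`Update failed: ${res.status}`);
  }
}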
The getstream.io documentation says that one should expect to retrieve a feed in approximately 60ms. When I retrieve my feeds, they contain a field named 'duration' which I take to be the calculated server-side processing time. This value is steadily around 10-40ms, with an average around 15ms.
The problem is, I seldom get my feeds in less than 150ms, and the average time is rather around 200-250ms, sometimes up to 300-400ms. This is the time for getting the feed alone, no enrichment etc., and I have verified with tcpdump that the network round trip is low (around 25ms) and that the time is actually spent waiting for the server to respond.
I've tried to move around my application (eu-west and eu-central) but that doesn't seem to affect things much (again, network roundtrip is steadily around 25ms).
My question is - should I really expect 60ms and continue investigating, or is 200-400ms normal? On the getstream.io site it is explained that developer accounts receive "Low Priority Processing" - what does this mean in practice? How much difference could I expect with another plan?
I'm using the Node.js low-level API.
Stream's APIs use SSL to encrypt traffic. Unfortunately, SSL introduces additional network I/O. Usually you need to pay for the increased latency only once, because Stream's HTTP APIs support HTTP persistent connections (aka keep-alive).
Here's a Wireshark screenshot of the TCP traffic of 2 sequential API requests with keep alive disabled client side:
The 4 lines in red highlight that the TCP connection is getting closed each time. Another interesting thing is that the handshaking takes almost 100ms and it's done twice (the first bunch of lines).
After some investigation, it turns out that the library used to make API requests to Stream's APIs (request) does not have keep-alive enabled by default. Such a change will be part of the library soon and is already available on a development branch.
Here's a screenshot of the same two requests with keep-alive enabled (using the code from that branch):
This time there is no connection reset anymore, and the second HTTP request does not perform an SSL handshake.
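If you want to check whether keep-alive makes the same difference for your own client, here is a rough sketch with Node's built-in https module (host and path are placeholders); the request library exposes similar agent/keep-alive options.

const https = require('https');

// One shared keep-alive agent: the first request pays for the TCP + TLS
// handshake, later requests reuse the same connection.
const agent = new https.Agent({ keepAlive: true });

function timedGet(path) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get({ host: 'api.example.com', path, agent }, res => {  // placeholder host
      res.resume();
      res.on('end', () => resolve(Date.now() - start));
    }).on('error', reject);
  });
}

// The second call should be noticeably faster once the connection is reused.
timedGet('/feed/user/1')
  .then(ms => { console.log('first:', ms, 'ms'); return timedGet('/feed/user/1'); })
  .then(ms => console.log('second:', ms, 'ms'));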
How many simultaneous requests can I make with the request package?
I am expecting data back from every request confirming the request was received and processed successfully. Is this hardware or OS dependent? Where do I start looking?
More recent versions of node.js do not enforce a limit on outgoing requests (older versions did). If you were literally trying to make millions of outgoing connections at the same time, then you would probably hit a limit on your own node.js server that would be OS-specific. But the practical limit is more likely going to be determined by the target host.
Since all your requests are being sent to the same host, the more likely limit will be determined by the server you are making the requests to. It will have some sort of limit for how many simultaneous requests it can have "in-flight" at the same time before it starts refusing new connections. What that number is depends entirely upon how the server is configured and built. For http://www.google.com, the number is probably hundreds of thousands or millions of requests because they have a huge server farm and requests are balanced across all of them. For some simple single CPU server, the limit would obviously be much smaller than that.
In addition, there is little use in sending zillions of requests to a single-CPU server anyway, because it won't be able to work on all of them at once.
So, if you want to know what would work best for a given target host, you would have to set up an adjustable test harness so you can test scenarios where you send 1, 2, 5, 10, 50, 100, 200, 500, or 1000 requests at a time and see what the average response time is and where you start to get errors (if any).
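A rough sketch of such a harness (the target URL is a placeholder): it fires batches of a given size, waits for all of them, and reports the average response time and error count for that concurrency level.

const https = require('https');

function timedGet(url) {
  return new Promise(resolve => {
    const start = Date.now();
    https.get(url, res => {
      res.resume();
      res.on('end', () => resolve({ ms: Date.now() - start, ok: res.statusCode < 400 }));
    }).on('error', () => resolve({ ms: Date.now() - start, ok: false }));
  });
}

// Fire `concurrency` requests at once and report timing and errors.
async function runLevel(url, concurrency) {
  const results = await Promise.all(
    Array.from({ length: concurrency }, () => timedGet(url))
  );
  const avg = results.reduce((sum, r) => sum + r.ms, 0) / results.length;
  const errors = results.filter(r => !r.ok).length;
  console.log(`${concurrency} in flight: avg ${avg.toFixed(0)} ms, ${errors} errors`);
}

(async () => {
  for (const n of [1, 2, 5, 10, 50, 100, 200, 500, 1000]) {
    await runLevel('https://example.com/api', n);   // placeholder target
  }
})();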
If you don't want to do any of that type of testing, then a reasonably safe choice that doesn't attempt to fully optimize things is to put no more than 5 requests in flight at the same time.
You can either build something yourself to manage N requests in flight at a time, or you can use one of the existing libraries that will do that for you. The Bluebird promise library has a concurrency option on some of its functions, such as Promise.map(), which will automatically do that for you for whatever concurrency value you set. The async library also has something similar.
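For instance, with Bluebird (the URL list and request wrapper below are placeholders), the concurrency option caps how many requests are in flight at once:

const Promise = require('bluebird');
const request = Promise.promisify(require('request'));

const urls = ['https://example.com/a', 'https://example.com/b'];  // placeholder list

// Bluebird maps over the array but never has more than 5 requests
// outstanding at the same time.
Promise.map(urls, url => request({ url, json: true }), { concurrency: 5 })
  .then(responses => console.log('done,', responses.length, 'responses'))
  .catch(err => console.error(err));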
If you want more specific help crafting the code to manage how many requests are in flight at a time or to build a test harness for it, please show us some of your code for the source of all the requests so we have some idea how that works (if it's a giant array of requests or what the source of the URLs is).
I'm building out an API using Hapi.js. Some of my code is pushing small amounts of data to the API. The issue seems to be that the pusher code is swamping the API and I'm getting ECONNRESET errors -- which means messages are getting lost. I'm planning on installing a rate-limiter in the pusher code, probably node-rate-limiter (link).
The question is, what should I set that limit to? I want to max out performance for this app, so I could easily be attempting to send in thousands of messages per hour. The data just gets dumped into redis, so I doubt the code in the API will be an issue but I still need to get an idea of what kind of message rate Hapi is comfortable with. Do I need to just start with something reasonable and see how it goes? Maybe 1 message per 10 milliseconds?
const Hapi = require('hapi');

const server = new Hapi.Server();

// CORS enabled for all origins on this connection
server.connection({
  port: config.port,
  routes: {
    cors: {
      origin: ['*']
    }
  }
});

server.route({ method: 'POST', path: '/update/{id}', ... });
There is no generic answer to how many requests per second you can process. It depends upon many things in your configuration and code such as:
Type and performance of server hardware
The amount of CPU time an average request uses
Whether your requests are CPU or disk bound. If disk bound, then it depends a lot on your database and disk performance.
Whether you implement clustering to use multiple cores (if CPU bound)
Whether you're on shared infrastructure or not
The max number of incoming connections your server is configured for
So, there is no absolute answer here that works for everyone. If you don't have some sort of design problem that is artificially limiting your concurrency, then the best way to discover what your server can actually handle is to build a test engine and test it. Find where and how it fails and either fix those issues to extend the scalability further or implement protections to avoid hitting that limit.
Note: When a public API makes rate-limiting choices, it is typically done on a per-client basis, and the limit is set to a value a little above what a reasonable client would be doing. This is more to allow fair use of the server by many clients, so that one single client does not consume too much of the overall resource. If issuing thousands of small requests from a single client is not considered "good practice" for your API, then you can just pick a per-client limit that is much smaller than that.
Note: You may also want to make it easier for clients by having your API let them upload multiple messages in one API request rather than lots of API requests.
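As a sketch of that last point (the route path and storage step are made up, in the same pre-v17 Hapi style and using the server object from the question): accept an array of messages in one request and process them all in a single handler.

// Hypothetical batch route: one HTTP request carries many messages,
// so clients don't need to issue thousands of tiny requests.
server.route({
  method: 'POST',
  path: '/update-batch/{id}',
  handler: function (request, reply) {
    const messages = request.payload.messages || [];
    messages.forEach(function (msg) {
      // push each message into redis here (placeholder)
    });
    reply({ accepted: messages.length });
  }
});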