Minimize latency and bandwidth usage between Node.js client and server

I have a node.js script which frequently sends data to a node.js server. Right now the client uses the node.js request module to send POST requests, and the server handles them with the built-in http server.
client:
const request = require('request');

// some iteration over time for sending events
request.post('http://localhost:8080/event',
  { json: true, body: body },
  (err, res, body) => {
    callback(err);
  });
Server:
const http = require('http');

const server = http.createServer(function (req, res) {
  let body = [];
  req.on('data', body.push.bind(body));
  req.on('end', () => {
    console.log(Buffer.concat(body).toString());
    res.end();
  });
});

server.listen(8080); // matches the URL the client posts to
I want to minimize the bandwidth usage and latency.
Here are my thoughts:
I was thinking of using HTTP/2, as it may have better performance (not sure, just from some research), but it's still experimental in node.js.
Right now it's plain HTTP. Would WebSocket make any difference here, since a socket can send data over a single connection without resending headers every time?
What would be the best approach to minimize latency and bandwidth usage?
Note: I am not expecting to use any third-party service provider for this (I want to improve things from a coding perspective).

If the question is WebSocket vs. HTTP/2, I'd go with WebSocket.
WebSocket frames are a bit smaller: 2-14 bytes of overhead per frame, compared to HTTP/2's fixed 9-byte frame header. For small messages, WebSocket therefore has less overhead, which should result in better performance.¹
As of right now, WebSocket is still better supported in servers, proxies and firewalls.
If the question is more general "how to reduce latency", then I'd abandon JavaScript and switch to C++ and zero-copy binary communication.
¹ Though, to be sure, benchmark both and compare.
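For reference, here is a minimal sketch of the question's client/server pair moved onto a single persistent WebSocket connection, using the ws package (the send loop and body are illustrative placeholders, as in the question):
Server:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (msg) => {
    // one persistent connection; no per-message HTTP headers
    console.log(msg.toString());
  });
});
Client:
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:8080');

ws.on('open', () => {
  // some iteration over time for sending events
  ws.send(JSON.stringify(body)); // 2-14 bytes of framing instead of full headers
});
Whether this beats HTTP/2 in practice depends on your message sizes and network, so benchmark both as the footnote suggests.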

Related

How to prevent socket io client (browser) from being overflowed with huge payload coming from server?

I have a React.js client app and a Node.js server app and the Node.js app receives json data in real time via socket.io from another microservice. The JSON data is sent very often and this breaks the client app. For example:
I stop the server but the client still receives data
If I try refreshing the browser, it takes a lot of time to refresh
It also used to disconnect and reconnect the sockets (I fixed this by increasing the pingTimeout but that did not solve the other problems)
I also increased maxHttpBufferSize and upgradeTimeout, but that does not really help. Decreasing maxHttpBufferSize stops the messages from being received, but I want them to be received, just in a manner which does not break my client application.
Any advice on what I can do to improve my situation?
EDIT:
It could also work if I did not send all messages but skipped every second one or so, but I am not sure how to achieve this.
Backpressure can be implemented with acknowledgements:
the client notifies the server when it has successfully handled the packet
socket.on("my-event", (data, cb) => {
  // do something with the data
  // then call the callback
  cb();
});
the server must wait for the acknowledgement before sending more packets
io.on("connection", (socket) => {
socket.emit("my-event", data, () => {
// the client has acknowledged the packet, we can continue
});
})
Reference: https://socket.io/docs/v4/emitting-events/#acknowledgements
Note: using volatile packets won't work here, because the amount of data buffered on the server is not taken into account
Reference: https://github.com/websockets/ws/blob/master/doc/ws.md#websocketbufferedamount
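Building on the acknowledgement pattern, here is a hedged sketch of the "skip messages" idea from the edit: the server only emits when the previous packet has been acknowledged and silently drops anything that arrives in between (upstream stands in for whatever source delivers the microservice data):
io.on("connection", (socket) => {
  let waitingForAck = false;
  upstream.on("message", (data) => {
    if (waitingForAck) return; // drop: the client is still handling the last packet
    waitingForAck = true;
    socket.emit("my-event", data, () => {
      waitingForAck = false; // acknowledged, we may send again
    });
  });
});
This bounds the send rate to the client's processing speed instead of a fixed interval, which is usually what "skip every second message" is really after.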

Streaming API - how to send data from node JS connection (long-running HTTP GET call) to frontend (React application)

We are attempting to ingest data from a streaming sports API in Node.js for our React app (a MERN app). According to their API docs: "The livestream read APIs are long running HTTP GET calls. They are designed to be used to receive a continuous stream of game actions." So we are attempting to ingest data from a long-running HTTP GET call in Node.js, to use in our React app.
Are long-running HTTP GET calls the same as websockets? We have not ingested streaming data previously, so we are not sure about either of these.
So far, we have added the following code into the index.js of our node app:
index.js
...
// long-running HTTP request (the URL is https, so the https module is used)
https.get(`https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`, res => {
  res.on('data', chunk => {
    console.log(`NEW CHUNK`);
    console.log(Buffer.from(chunk).toString('utf-8')); // simply console log for now
  });
  res.on('end', () => {
    console.log('ENDING');
  });
});
// Start Up The Server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => console.log(`Express server up and running on port: ${PORT}`));
This successfully connects and console-log's data in the terminal, and continues to console log new data as it becomes available in the long-running GET call.
Is it possible to send the data from this long-running GET request up to React? Can we create a route for a long-running request in the same way that other routes are made, and then call that route from React?
Server-sent events work like this, with a text/event-stream content-type header.
Server-sent events have a number of benefits over Websockets, so I wouldn't say it's an outdated technique, but it looks like they rolled their own SSE, which is definitely not great.
My recommendation is to actually just use SSE for your use-case. Built-in browser support and ideal for a stream of read-only data.
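For illustration, here is a minimal sketch of relaying the upstream feed as SSE from an Express route (the /stream route name is made up, app is the Express app from the question's index.js, and each upstream chunk is assumed to be a self-contained message):
const https = require('https');

app.get('/stream', (req, res) => {
  // SSE headers: keep the connection open and mark it as an event stream
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  const upstream = https.get(`https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`, feed => {
    feed.on('data', chunk => {
      // each SSE message is a "data: ..." line followed by a blank line
      res.write(`data: ${chunk.toString('utf-8')}\n\n`);
    });
    feed.on('end', () => res.end());
  });
  // stop relaying when the browser disconnects
  req.on('close', () => upstream.destroy());
});
On the React side, the browser's built-in EventSource API can consume this route directly: new EventSource('/stream') fires one onmessage event per data: line.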
To answer question #1:
Websockets are different from long-running HTTP GET calls. Websockets are full-duplex connections that let you send messages of any length in either direction. When a Websocket is created, it uses HTTP for the handshake, but then changes the protocol using the Upgrade: header. The Websockets protocol itself is just a thin layer over TCP, to enable discrete messages rather than a stream. It's pretty easy to design your own protocol on top of Websockets.
To answer question #2:
I find Websockets flexible and easy to use, though I haven't used the server-sent events that Evert describes in his answer. Websockets could do what you need (as could SSE according to Evert).
If you go with Websockets, note that it's possible to queue a message to be sent, and then (like any network connection) have the Websocket close before it's sent, losing any unsent messages. Thus, you probably want to build in acknowledgements for important messages. Also note that Websockets close after about a minute of being idle, at least on Android, so you need to either send heartbeat messages or be prepared to reconnect as needed.
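As a hedged illustration of the heartbeat idea (using the ws package; the URL and 30-second interval are placeholders):
const WebSocket = require('ws');

function connect() {
  const ws = new WebSocket('wss://example.com/feed'); // placeholder URL
  ws.on('error', () => {}); // 'close' will follow; the handler below reconnects
  // idle connections get closed (often after about a minute), so ping periodically
  const heartbeat = setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) ws.ping();
  }, 30000);
  ws.on('close', () => {
    clearInterval(heartbeat);
    setTimeout(connect, 1000); // be prepared to reconnect as needed
  });
}

connect();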
If you decide to go with websockets, I would recommend this approach, using socket.io for both client and server:
Server-Side code:
const http = require('http');
const https = require('https');
const PORT = process.env.PORT || 8080;

const server = http.createServer();
const io = require('socket.io')(server);

io.on("connection", (socket) => {
  // the feed URL is https, so the https module is used
  https.get(`https://live.test.wh.geniussports.com/v2/basketball/read/1234567?ak=OUR_KEY`, res => {
    res.on('data', chunk => {
      console.log(`NEW CHUNK`);
      let dataString = Buffer.from(chunk).toString('utf-8');
      socket.emit('socketData', { data: dataString });
    });
    res.on('end', () => {
      console.log('ENDING');
    });
  });
});

server.listen(PORT);
Client-Side code:
import { io } from 'socket.io-client';

const socket = io('<your-backend-url>');
socket.on('socketData', (data) => {
  // data is the same data object that we emitted from the server
  console.log(data);
  // use your websocket data
});

Websocket uses more network bandwidth than http

I am doing a project and I got stuck with websockets. I am using the express-ws library.
Lately, the app has worked without websockets at all: every request was sent from the client as a plain HTTP request. However, my app is built so that clients have to ask the server for data rather often, so WebSocket looks like the right solution to me. I have rewritten all my HTTP endpoints onto a single WebSocket connection that is established only once. However, it does not look like things have become more optimized: incoming network usage used to be about 35 Mb/s, but now it grows up to 90 Mb/s, which is weird, as websockets should not transfer a lot of "rubbish" data like headers with every message. Is this expected behavior, or am I misusing ws somewhere?
// If it may be important: I am using helmet, cors, hpp and express-device to build
// the middleware security. However, commenting those lines out does not change anything.
const httpsServer = https.createServer({...}, global.app);
express_ws(app, httpsServer);

app.ws('/extension', (ws, req) => {
  // setting some details from req.query on ws.data
  ws.on('message', async (msg) => {
    msg = JSON.parse(msg);
    if (msg.type == '...') {
      // handling the request and sending the response
    }
    // some more similar handlers
  });
});

Sending multiple responses with the same response object in nodejs

I have a long-running process which needs to send data back in multiple responses. Is there some way to send back multiple responses with express.js?
I want to send "test", and then 3 seconds later a second response, "bar".
app.get('/api', (req, res) => {
  res.write("test");
  setTimeout(function () {
    res.write("bar");
    res.end();
  }, 3000);
});
With res.write, I instead wait about 3050 ms and then receive everything in a single response.
Yes, you can do it. What you need to know about is chunked transfer encoding.
This is an old technique from a decade ago; since Websockets arrived, I haven't seen anyone use it.
You only need to send response chunks at different times, perhaps as later events fire. Chunked transfer is actually the default response type of express.js.
But there is a catch. When you test this with a modern browser or curl, which all buffer chunks, you won't see the expected result. The trick is filling up the chunk buffer before sending consecutive response chunks. See the example below:
const express = require('express');

const app = express();
const port = 3011;

app.get('/', (req, res) => {
  // res.write("Hello\n");
  res.write("Hello" + " ".repeat(1024) + "\n");
  setTimeout(() => {
    res.write("World");
    res.end();
  }, 2000);
});

app.listen(port, () => console.log(`Listening on port ${port}!`));
The first res.write, padded with 1024 extra spaces, forces the browser to render the chunk; the second res.write then behaves as you expect. You can see the difference by uncommenting the shorter first res.write.
At the network level there is no difference. Actually, even in the browser you can achieve this with the XHR object (the first AJAX implementation). Here is the related answer.
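To confirm what actually arrives on the wire, independent of browser rendering, you can also consume the chunks programmatically; here is a small sketch using fetch in Node 18+ (note that chunk boundaries are not guaranteed to map one-to-one onto res.write calls):
// read the chunked response incrementally instead of waiting for res.end()
async function readChunks() {
  const res = await fetch('http://localhost:3011/');
  for await (const chunk of res.body) {
    console.log(new TextDecoder().decode(chunk)); // logs each chunk as it arrives
  }
}

readChunks();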
An HTTP request and response have a one-to-one mapping. You can return an HTTP response only once, however long it may be. There are 2 common techniques:
Comet responses - Since the response is a writable stream, you can write data to it from time to time before ending it. Ensure the connection read timeout is properly configured on the client that will be receiving this data.
Simply use web sockets - These are persistent connections that allow 2-way messaging between server and client.

Throttling event-driven Nodejs HTTP requests

I have a Node net.Server that listens to a legacy system on a TCP socket. When a message is received, it sends an http request to another http server. Simplified, it looks like this:
var request = require('request-promise');
...
socket.on('readable', function () {
  var msg = parse(socket.read());
  var postOptions = {
    uri: 'http://example.com/go',
    method: 'POST',
    json: msg,
    headers: {
      'Content-Type': 'application/json'
    }
  };
  request(postOptions);
});
The problem is that the socket is readable about 1000 times per second. The requests then overload the http server. Almost immediately, we get multiple-second response times.
In running Apache benchmark, it's clear that the http server can handle well over 1000 requests per second in under 100ms response time - if we limit the number of concurrent requests to about 100.
So my question is, what is the best way to limit the concurrent requests outstanding using the request-promise (by extension, request, and core.http.request) library when each request is fired separately within an event callback?
Request's documentation says:
Note that if you are sending multiple requests in a loop and creating multiple new pool objects, maxSockets will not work as intended. To work around this, either use request.defaults with your pool options or create the pool object with the maxSockets property outside of the loop.
I'm pretty sure that this paragraph is telling me the answer to my problem, but I can't make sense of it. I've tried using defaults to limit the number of open sockets:
var rp = require('request-promise');
var request = rp.defaults({pool: {maxSockets: 50}});
Which doesn't help. My only thought at the moment is to manually manage a queue, but I expect that would be unnecessary if I only knew the conventional way to do it.
Well, you need to throttle your requests, right? I have worked around this in two ways, but let me show you one pattern I always use. I often use throttle-exec and a Promise to build a wrapper around request. You can install it with npm install throttle-exec and use native Promises or a third-party implementation. Here is my gist for this wrapper: https://gist.github.com/ans-4175/d7faec67dc6374803bbc
How do you use it? It's simple, just like an ordinary request.
var Request = require("./Request");

Request({
  url: url_endpoint,
  json: param,
  method: 'POST'
})
  .then(function (result) {
    console.log(result);
  })
  .catch(reject);
Tell me after you implement it. Either way I have another wrapper :)
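For comparison, a hand-rolled queue that caps in-flight requests is only a few lines; here is a sketch in the question's own style (the limit of 100 matches the benchmark figure above, and rp is request-promise as in the question):
var rp = require('request-promise');

var MAX_CONCURRENT = 100;
var inFlight = 0;
var queue = [];

function throttledRequest(options) {
  // resolves/rejects with the result of rp(options), but never lets
  // more than MAX_CONCURRENT requests be outstanding at once
  return new Promise(function (resolve, reject) {
    queue.push({ options: options, resolve: resolve, reject: reject });
    drain();
  });
}

function drain() {
  while (inFlight < MAX_CONCURRENT && queue.length > 0) {
    var job = queue.shift();
    inFlight++;
    rp(job.options)
      .then(job.resolve, job.reject)
      .then(function () {
        inFlight--;
        drain(); // a slot freed up; start the next queued request
      });
  }
}
Calling throttledRequest(postOptions) from the 'readable' handler then replaces the bare request(postOptions) call.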
