node.js response buffering

I am developing a node.js service that will handle a number of requests per second - say 1000. Let's imagine that the response data weighs a bit, the connection with our clients is extremely slow, and it takes ~1s for a response to be sent back to the client.
Question #1 - I imagine that if there were no proxy buffering, it would take node.js 1000 seconds to send back all the responses, as this is a blocking operation, isn't it?
Question #2 - How do nginx buffers (and buffers in general) work? Would I be able to receive all 1000 responses into a buffer (provided RAM is not a problem) and only then flush them to the clients? What are the limits of proxy_buffers? Can I set the number of buffers to 1000, 1K each?
The goal is to flush all the responses out of node.js as soon as possible so as not to block it, and have some other system deliver them.
Thanks!

Of course, sending the response is a non-blocking operation. Node simply hands a chunk to the network driver and leaves all the remaining work to your OS.
If sending a response were a blocking operation, a single PC with an artificially throttled network connection would be enough to DoS any node-based service.
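A minimal sketch of what that looks like in practice (the port and chunk sizes are made up): res.write() hands data to the OS and returns immediately; when the kernel's socket buffer for a slow client fills up, it returns false, and the 'drain' event signals when writing can resume. The event loop keeps serving other requests the whole time.

    const http = require('http');

    http.createServer((req, res) => {
      const chunk = Buffer.alloc(1024, 'x'); // hypothetical 1 KB payload chunk
      let remaining = 1000;                  // ~1 MB response in total

      function writeMore() {
        while (remaining > 0) {
          remaining--;
          if (!res.write(chunk)) {          // kernel buffer full for this client
            res.once('drain', writeMore);   // resume once the OS drains it
            return;                         // meanwhile, other requests run
          }
        }
        res.end();
      }
      writeMore();
    }).listen(8000);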

Related

Which request would NodeJS serve first if it receives n requests at the same time?

I am working on NodeJS. I understand that if NodeJS receives many requests, it processes them one after another in a queue. But if n requests - say 4 - reach NodeJS at the same time, without any gap in time, which one will NodeJS pick first to serve? What are the criteria, and the reason, for selecting one request out of many that arrive at the same time?
Since all four requests arrive over the same physical internet connection, one request's packets will get there before the others. As the packets converge on the last router before your server, one of them will be processed by the router slightly before the others, and that packet will arrive at your server first. That packet will then reach the TCP stack in the OS first, which will notify node.js about it first, and node.js will start processing that request. Since node.js runs your JavaScript on a single thread, if the request handler doesn't call anything asynchronous, it will send a response for the first request before it even starts processing the second.
If the first request's handler has non-blocking, asynchronous portions, then as soon as it makes an asynchronous call and returns control to the nodejs event loop, the 2nd request will get to start processing.
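A small sketch of that hand-off (the timings are illustrative): because the handler below yields to the event loop at setTimeout(), a second simultaneous request starts processing before the first one finishes.

    const http = require('http');

    http.createServer((req, res) => {
      console.log('start', req.url);
      setTimeout(() => {              // stands in for any async work (db, fs, ...)
        console.log('end', req.url);
        res.end('done\n');
      }, 100);
    }).listen(8000);

    // Two near-simultaneous requests to /a and /b log:
    //   start /a, start /b, end /a, end /b
    // i.e. /b begins before /a completes, because the handler yields.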
But if n requests - say 4 - reach NodeJS at the same time, without any gap in time, which one will NodeJS pick first to serve?
This is not possible. As the packets from each of the requests converge on the last router before your server, they will eventually be sequenced one after another on the ethernet connection to your server. The ethernet connection doesn't send 4 requests in parallel; it sends packets one after the other.
So, your server will see one of the incoming packets before the others. Also keep in mind that an incoming http request is not just a single packet. It involves establishing a TCP connection (with the back and forth that entails), after which the client sends the actual http request over the established TCP connection. If you're using https, there is even more involved in establishing the connection. So the whole notion of four incoming connections arriving at exactly the same moment is not realistic. Even if it were (imagine you had four network cards with four physical connections to the internet), the underlying operating system would end up servicing one of the incoming network cards before the others. Whether it's a hardware interrupt at the lowest level or a polling loop, one of the network cards will be found to have incoming data before the others.
What are the criteria, and the reason, for selecting one request out of many that arrive at the same time?
It doesn't work that way. The OS doesn't suddenly realize it has four requests that arrived at exactly the same moment and then implement some algorithm to choose which to serve first. Instead, some low-level hardware/software element (probably an upstream router) will have forced the incoming packets into an order (either based on minute timing differences or just on how its software works - it checks hardware portA, then portB, then portC, for example), and one will physically arrive before the others at your server. This is not something your server gets to decide.

How does one handle slow client connections in node.js?

Node.js is single threaded. If a slow client is making a request, I imagine it could block the thread until it completes. Does that make sense?
How does one handle slow connections? Does it make sense to just terminate a connection if it takes too long? How does one determine this? How does one measure how long a request is taking, and terminate it if it takes too long? I'm not referring to the duration it takes to send back a response, just the time it takes for node to receive all the data required to process a request. Is this a legitimate scenario?
I imagine there must be some way to handle this; otherwise it would be really easy to DoS a node.js server...
EDIT: In a POST request, the data comes in chunks. So what if it just comes in slowly? I'm not sure how to simulate this, but if it's a problem in node, it could equally be a problem in PHP etc., because you would just need to spawn many connections, all of them very slow, to attack a server.
It doesn't matter if client request data comes in slowly. I/O in node is asynchronous and non-blocking. So if a chunk of data isn't available on a socket for a long time, node can do other things in the meantime, such as receive chunks of data from other sockets.
You can set an inactivity timeout that fires when no data is seen on the socket for the desired length of time. For HTTP, you can set a global request timeout via server.setTimeout() that can automatically close the underlying socket or if you pass in a callback, you can handle the timeout however you want. For TCP, there is socket.setTimeout().
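A quick sketch of both (the 5-second value is arbitrary): with server.setTimeout(), passing a callback means closing the timed-out socket becomes your responsibility; with a plain TCP server, socket.setTimeout() only emits a 'timeout' event, and you close the socket yourself.

    const http = require('http');
    const net = require('net');

    // HTTP: no activity on a connection for 5s triggers the callback;
    // with a callback installed, dropping the socket is up to us.
    const httpServer = http.createServer((req, res) => { /* ... */ });
    httpServer.setTimeout(5000, (socket) => socket.destroy());
    httpServer.listen(8000);

    // TCP: setTimeout() does not close anything on its own.
    const tcpServer = net.createServer((socket) => {
      socket.setTimeout(5000);
      socket.on('timeout', () => socket.end()); // drop the idle client
    });
    tcpServer.listen(8001);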

How does Node.js behave with a slow, high-latency mobile network?

I want to develop a mobile app that reads and occasionally writes tiny chunks of text and images of no more than 1KB. I was thinking of using node.js for this (I think it fits perfectly), but I heard that node.js uses one single thread for all requests in an async model. That's OK, but what if a mobile client on a very slow, high-latency network is reading one of those chunks of text byte by byte? Does this mean that if the mobile client needs 10 seconds to complete the read, the rest of the connections have to wait 10 seconds before node.js replies to them? I really hope not.
No. Incoming streams are evented, and the events are handled by the main thread as they come in. Your JavaScript code executes only in this main thread, but I/O is handled outside it and raises events that trigger your callbacks in the main thread.
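A tiny sketch of that with the plain http module: each 'data' event is handled as the bytes trickle in, and between events the thread is free to serve every other connection, so a 10-second reader costs the others nothing.

    const http = require('http');

    http.createServer((req, res) => {
      let size = 0;
      req.on('data', (chunk) => { size += chunk.length; }); // fires per chunk,
                                                            // however slowly they arrive
      req.on('end', () => res.end(`received ${size} bytes\n`));
    }).listen(8000);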

A UDP-socket-based rateless file transmission

I'm new to socket programming and I need to implement a UDP-based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP) to that client. The client keeps collecting packets and trying to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking for the implementation. I don't have much experience with unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, per the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request; when it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets as long as the flag is 1; when it turns to 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I do use multiple processes, how can the flag be seen by the child process? Thanks for your comments.
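For contrast with the fork() design, here is a minimal single-process sketch of the same two-socket idea using Node's dgram module (ports and message formats are made up, and the rateless encoder is a placeholder); in an evented server the per-client flag is just an entry in a Map, so concern (2) disappears.

    const dgram = require('dgram');

    const control = dgram.createSocket('udp4'); // Request / Stop messages in
    const data = dgram.createSocket('udp4');    // encoded packets out
    const active = new Map();                   // "addr:port" -> interval timer

    control.on('message', (msg, rinfo) => {
      const key = `${rinfo.address}:${rinfo.port}`;
      if (msg.toString() === 'REQUEST' && !active.has(key)) {
        // keep sending encoded packets until a Stop arrives
        const timer = setInterval(() => {
          data.send(encodeNextPacket(key), rinfo.port, rinfo.address);
        }, 1);
        active.set(key, timer);
      } else if (msg.toString() === 'STOP' && active.has(key)) {
        clearInterval(active.get(key)); // the "flag = 0" step
        active.delete(key);
      }
    });

    function encodeNextPacket(key) {
      return Buffer.from('...');        // placeholder for the rateless encoder
    }

    control.bind(9000);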
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
The typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread handles communication with that particular client and then ends. You should keep a limit on the number of clients (threads) served simultaneously. Do not spawn a new process for each client - that is inefficient and unnecessary, as it gets you nothing you can't achieve with threads.
Thread programming requires care, so don't cut corners; otherwise you will have a hard time finding and diagnosing problems.
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
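As a rough illustration of that message layout, sketched with Node's dgram module to match the rest of this page (the field widths and checksum choice are assumptions): a 4-byte sequence number, a 4-byte checksum, then the payload. At the end of the transfer, the client can scan its sequence numbers for gaps and request just those blocks.

    const dgram = require('dgram');
    const crypto = require('crypto');

    function makePacket(seq, payload) {
      const header = Buffer.alloc(8);
      header.writeUInt32BE(seq, 0);               // 4-byte sequence number
      crypto.createHash('md5').update(payload)    // stand-in checksum:
        .digest().copy(header, 4, 0, 4);          // first 4 bytes of an MD5 digest
      return Buffer.concat([header, payload]);
    }

    const sock = dgram.createSocket('udp4');
    sock.send(makePacket(0, Buffer.from('block 0')), 9000, '127.0.0.1');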
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it is missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CANBUS network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.

Additional technologies to correctly use node.js and Socket.IO in a time-intensive app?

As a hypothetical example, let's say that I wanted to make an application that displays peoples twitter networks. I would provide an API that would allow a client to query on a single username. That user's top x tweets would be sent to the client. Then, each person that had been mentioned by the initial person would be scanned. Their top x tweets would be sent to the client. This process would recursively continue, breadth-first, until a pre-defined depth was reached. The client would be receiving the data in real time, displaying statistics such as number of users scanned, number of known users remaining to scan, and a growing list of the tweet data. None of the processing is complicated (regex of small amounts of text), but many, many network requests would be spawned from a single initial request.
I really want the fantastic realtime capabilities of node.js with socket.io, but I feel like this is an abuse of those technologies - they're not meant for heavy server-side lifting. Is there a more appropriate toolset for what I am trying to accomplish, or a particular way to use these tools to that end? Milewise is doing something similar-ish, but I think that my application would consume significantly more network resources than theirs.
Thanks.
The best network transport you can currently get on the web is WebSockets, which offer a persistent, bi-directional, real-time connection between server and client. Although not every browser supports them, socket.io gives you several fallbacks, which may however reduce network performance compared to WebSockets, as stated in this article:
During making connection with WebSocket, client and server exchange data per frame which is 2 bytes each, compared to 8 kilo bytes of http header when you do continuous polling.
...
Reducing kilobytes of data to 2 bytes…and reducing latency from 150ms to 50ms is far more than marginal. In fact, these two factors alone are enough to make WebSocket seriously interesting to Google.
Apart from the network transport, other things matter too, such as how you fetch, format, and process the data on the server side. In node.js, heavy CPU-bound computations can block the processing of other asynchronous operations, so that kind of work should be dispatched to separate threads or processes to prevent blocking.
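A minimal sketch of that dispatch using the worker_threads module (the offloaded work here is a stand-in for the regex/processing step described above):

    const { Worker, isMainThread, parentPort, workerData } =
      require('worker_threads');

    if (isMainThread) {
      // main thread: stays free to handle socket.io traffic
      const worker = new Worker(__filename, { workerData: 'some tweet text' });
      worker.on('message', (result) => console.log('processed:', result));
    } else {
      // worker thread: CPU-heavy work happens here without blocking the loop
      parentPort.postMessage(workerData.toUpperCase());
    }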