How does Node.js behave on a high-latency mobile network? - node.js

I want to develop a mobile app that reads and occasionally writes tiny chunks of text and images of no more than 1 KB. I was thinking of using node.js for this (I think it fits perfectly), but I heard that node.js uses a single thread for all requests in an async model. That's fine, but what if a mobile client on a very high-latency network is reading one of those chunks byte by byte (I mean very slowly)? Does this mean that if the mobile client needs 10 seconds to complete the read, the rest of the connections have to wait 10 seconds before node.js replies to them? I really hope not.

No: incoming streams are evented. The events are handled by the main thread as they come in. Your JavaScript code runs only in this main thread, but I/O is handled outside that thread and raises events that trigger your callbacks on the main thread.

NodeJS - Reliability

One thing I know for a fact is that Node.js shouldn't be used for CPU-intensive tasks. Now, imagine that I have a node.js server receiving audio streams from several clients (from a mic). This audio is buffered in a C/C++ addon via memcpy (which is very fast). But when the end event is triggered, the addon converts "audio-to-command" and sends the command to the client. This conversion takes 75 ms (max). Can Node.js be considered a reliable solution for this problem? Can 75 ms be considered an intensive task in node.js? What is the maximum recommended time for blocking operations?
Blocking is not the Node.js way.
You can perform this operation asynchronously (in a separate thread), without any blocking, and invoke a callback from your addon when the operation finishes, so the main node.js thread will not be blocked and will be able to handle other requests.
There are good helpers like AsyncWorker and AsyncQueueWorker in NAN.
https://github.com/nodejs/nan
Also, there are C++ libraries for working with WebSockets, so I would consider a direct connection between the clients and the addon.

node js file upload download and streaming data

I have been reading about Node and how it is single-threaded. If I have a large file (500 MB) to upload to a server, or to download from a server, I am guessing this cannot happen asynchronously on the server side. Is this a bad use case for nodejs, or is there a way to do this without blocking the event loop?
There's one user thread, but there are other threads in Node.
Most I/O operations are done for you behind the scenes, and you only act on events. Typically, you'll receive events with chunks of data, and if other requests happen at the same time, their events may be interleaved with yours. If you don't do a lot of work in the main thread (which is usually the case), there's no reason your program should block during an upload.

winsock application and multithreading - listening to socket events from another thread

Assume we have an application which uses winsock to implement TCP communication.
For each socket we create a thread and block on receiving.
When data arrives, we would like to notify other threads (listening threads).
I was wondering what the best way to implement this is:
1. Move away from this design and use a non-blocking socket; the listening thread will then have to iterate constantly and call a non-blocking receive, making it thread-safe (no extra threads for the sockets).
2. Use asynchronous procedure calls to notify the listening threads, which again will have to alertably wait for APCs to be queued to them.
3. Implement some thread-safe message queue, where each socket thread posts messages to it, and the listener, again, goes over it at every interval and pulls data from it.
Also, I read about WSAAsyncSelect, but I saw that it is used to send messages to a window. Isn't there something similar for other threads? (Well, I guess APCs are...)
Thanks!
Use I/O completion ports. See the CreateIoCompletionPort() and the GetQueuedCompletionStatus() functions of the Win32 API (under File Management functions). In this instance, the socket descriptors are used in place of file handles.
You'll always be better off abstracting the mechanics of the socket API (listening, accepting, reading and writing) into a layer separate from the application logic. Have an object that captures the state of a connection, created when an incoming connection arrives, and maintain buffers in this object for the incoming and outgoing traffic. This keeps your network interface layer independent of the application code, and makes the code cleaner by separating the application functionality from the underlying communication mechanism.
The blocking vs. non-blocking socket decision depends on the level of scalability your application needs to achieve. If your application needs to support hundreds of incoming connections, a thread-per-socket approach is not going to be very wise; you'll be better off with an I/O completion port based implementation, which will make your app immensely scalable at the cost of added code complexity. However, if you only foresee a few tens of connections at any point in time, you can go for an asynchronous sockets model using Win32 events or messages. The Win32 events based approach doesn't scale well beyond a certain limit, because you would have to manage multiple threads once the number of concurrent sockets exceeds 63 (WaitForMultipleObjects can only wait on a maximum of 64 handles). The Windows message based mechanism doesn't have this limitation. OTOH, the Win32 event based approach does not require a GUI window to work.
Check out WSAEventSelect along with WSAAsyncSelect API documentation in MSDN.
You might want to take a look at the boost::asio library as well. It provides a neat (though a little complex) C++ abstraction over the sockets API.

Response buffering on node.js - node.js

I am developing a node.js service that will handle a number of requests per second - say 1000. Let's imagine that the response data weighs a bit, the connection with our clients is extremely slow, and it takes ~1 s for a response to be sent back to the client.
Question #1 - I imagine that if there were no proxy buffering, it would take node.js 1000 seconds to send back all the responses, as this is a blocking operation, isn't it?
Question #2 - How do nginx buffers (and buffers in general) work? Would I be able to receive all 1000 responses into buffers (provided RAM is not a problem) and only then flush them to the clients? What are the limits of proxy_buffers? Can I set the number of buffers to 1000 of 1K each?
The goal is to flush all the responses out of node.js as soon as possible so as not to block it, and have some other system deliver them.
Thanks!
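On the proxy_buffers part of question #2: in nginx, proxy_buffers is configured per connection, not globally, so "1000 buffers of 1K" would mean up to ~1000 KB of buffering per request rather than one pool shared by all 1000 requests. A hedged sketch of such a configuration (the upstream address is assumed):

```nginx
# Assumed node.js upstream on port 3000; buffer directives apply per connection.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_buffering on;        # nginx absorbs the slow client, freeing node
    proxy_buffer_size 1k;      # buffer used for the response headers
    proxy_buffers 8 1k;        # number and size of buffers per connection
}
```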
Of course not: sending the response is a non-blocking operation. Node simply hands a chunk to the network driver, leaving the rest of the work to your OS.
If sending the response were a blocking operation, a single PC with its network artificially crippled would be enough to DoS any node-based service.
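This non-blocking behavior can be sketched with a Writable stream standing in for a slow client socket (the names and timings here are illustrative): write() queues the chunk and returns immediately, and 'drain' fires later when the destination catches up.

```javascript
const { Writable } = require('stream');

// A deliberately slow destination standing in for a slow client socket.
const slowClient = new Writable({
  highWaterMark: 4,                              // tiny buffer to force backpressure
  write(chunk, enc, cb) { setTimeout(cb, 20); }  // "network" takes 20 ms per chunk
});

// write() queues the chunk and returns at once; the caller never blocks.
const fitsInBuffer = slowClient.write(Buffer.alloc(64));
console.log('write returned immediately; buffer full:', fitsInBuffer === false);
slowClient.on('drain', () => console.log('slow client caught up'));
slowClient.end();
</n```

The `false` return value is how Node signals backpressure: the data is accepted and buffered, and a well-behaved producer pauses until 'drain' instead of blocking the thread.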

Creating a simple Linux API

I have a simple application on an OpenWRT-style router, currently written in C++. The router (embedded Linux) has very limited disk space and RAM; for example, there is not enough space to install Python.
So I want to control this daemon app over the network. I have read some tutorials on creating sockets and listening on a port for activity, but I haven't been able to integrate the flow into a C++ class, and I haven't been able to figure out how to decode the information received or how to send a response.
All the tutorials I've read are dead ends: they show you how to make a server that basically just blocks until it receives something, and then returns a message.
Is there something a little higher-level that can be used for this sort of thing?
Sounds like what you are asking is "how do I build a simple network service that will accept requests from clients and do something in response?" There are a bunch of parts to this -- how do you build a service framework, how do you encode and decode the requests, how do you process the requests and how do you tie it all together?
It sounds like you're having problems with the first and last parts. There are two basic ways of organizing a simple service like this -- the thread approach and the event approach.
In the thread approach, you create a thread for each incoming connection. That thread reads the messages (requests) from that connection (file descriptor), processes them, and writes back responses. When a connection goes away, the thread exits. You have a main 'listening' thread that accepts incoming connections and creates new threads to handle each one.
In the event approach, each incoming request becomes an event. You then have event handlers that process these events, sending back responses. It's important that the event handlers NOT block, and that they complete promptly; otherwise the service may appear to lock up. Your program has a main event loop that waits for incoming events (generally blocking on a single poll or select call) and reads and dispatches each event as appropriate.
I installed the python-mini package with opkg, which has socket and thread support.
Works like a charm on a WRT160NL with Backfire 10.03.1.