I would like to use libev for a streaming server I am writing.
This is how everything is supposed to work:
client opens a TCP socket connection to server
server receives connection
client sends a list of images they would like
server reads request
server loops through all of the images
server reads image from NAS
server processes image file meta data
server sends image data to client
I found sample code that lets me read from and write to the socket using libev I/O events (epoll under the hood). But I am not sure how to handle the read from the NAS and the processing.
This could take some time, and I don't want to block the server while it is happening.
Should this be done in another thread, with that thread sending the image data back to the client?
I was planning on using a thread pool. But perhaps libev can support a processing step without blocking?
Any ideas or help would be greatly appreciated!
You'll need a file I/O library (such as Boost::ASIO) that supports asynchronous reads. The underlying POSIX APIs are aio_read, aio_suspend, and lio_listio.
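For what it's worth, the shape of that non-blocking read-then-send step is easy to see in a different runtime. The sketch below is TypeScript on Node, not libev, and the request format, port, and paths are assumptions; in C you would get the same effect either from the POSIX AIO calls above or from a worker thread pool that does the read and metadata processing and wakes the libev loop with ev_async when each image is ready.

```typescript
// A sketch only: stream each requested image back without blocking the loop.
import * as net from "node:net";
import * as fs from "node:fs";
import { once } from "node:events";

const server = net.createServer(async (socket) => {
  // Assume the whole request (a newline-separated list of image paths)
  // arrives in the first chunk; a real server would frame this properly.
  const [request] = await once(socket, "data");
  const paths = request.toString().trim().split("\n");

  for (const path of paths) {
    // The file read happens off the event loop, and pipe() applies
    // backpressure so a slow client doesn't balloon memory on the server.
    const image = fs.createReadStream(path);
    image.pipe(socket, { end: false });   // keep the socket open for the next image
    await once(image, "end");
  }
  socket.end();
});

server.listen(9000);
```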
Related
I have a client and server application, where the client is receiving data from the server using WebSockets.
The client application is built using Hazelcast Jet to process the data provided by the server. The ClientApplication stores the data inside a Queue and polls it when it is ready to process further. The client is relatively slow compared to the server, so data is continuously inserted into the client's Queue until the heap is full. I wonder whether there are any Hazelcast Jet features I can use to resolve this issue.
Note: I don't want the producer to stop producing data or the client to stop receiving it. I want the client to continue receiving the data and to process it as fast as possible.
The solution I have in mind is to write the data received from the server into a file rather than storing it in the Queue, and then read the data back from that file; this would resolve the memory issue. Is this the proper way to handle this use case, or are there better solutions?
I'm writing a Node HTTP server that essentially only exists for NAT punchthrough. Its job is to facilitate a client sending a file, and another client receiving that file.
Edit: The clients are other Node processes, not browsers. We're using Websockets because some client locations won't allow non-HTTP/S port connections.
The overall process works like this:
All clients keep an open websocket connection.
The receiving client (Alice) tells the server via Websocket that it wants a file from another client (Bob).
The server generates a unique token for this transaction.
The server notifies Alice that it should download the file from /downloads?token=xxxx. Alice connects, and the connection is left open.
The server notifies Bob that it should upload the file to /uploads?token=xxxx. Bob connects and begins uploading the file, since Alice is already listening on the other side.
Once the transfer is complete, both connections are closed.
This is all accomplished by storing references to the HTTP req and res objects inside of a transfers object, indexed by the token. It all works great... as long as I'm not clustering the server.
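A minimal sketch of that mechanism, with hypothetical route names, no error handling, and no clustering:

```typescript
import * as http from "node:http";

// token -> whichever side of the transfer has connected so far
const transfers: Record<string, {
  uploadReq?: http.IncomingMessage;
  downloadRes?: http.ServerResponse;
}> = {};

function tryPipe(token: string): void {
  const t = transfers[token];
  if (t?.uploadReq && t?.downloadRes) {
    // Bob's upload body is streamed straight into Alice's pending response.
    t.uploadReq.pipe(t.downloadRes);
    t.uploadReq.on("end", () => delete transfers[token]);
  }
}

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const token = url.searchParams.get("token") ?? "";
  transfers[token] ??= {};

  if (url.pathname === "/downloads") {
    transfers[token].downloadRes = res;   // Alice connects first and waits
  } else if (url.pathname === "/uploads") {
    transfers[token].uploadReq = req;     // Bob's request body is the source
  }
  tryPipe(token);
}).listen(8080);
```

This is exactly the state that breaks under clustering, since the two halves of a transfer can land on different workers.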
In the past, when I've used clustering, I've converted the server to be stateless. However, that's not going to work here: I need the req and res objects to be held in state so that I can pipe the data.
I know that I could just buffer the transferring data to disk, but I would rather avoid that if at all possible: part of the requirements is that if I buffer anything to disk I must encrypt it, and I'd rather avoid putting the additional load of encrypting/decrypting transient data on the server.
Does anyone have any suggestion on how I could implement this in a way that supports clustering, or a pointer to further research sources I could look at?
Thanks so much!
I am trying to write an internal transport system.
Data should be transferred from the client to the server using net sockets.
It is working fine except for the handling of network issues.
If I place a firewall between the client and the server, I will not see any error on either side, so data will continue to fill the kernel buffer on the client side.
And if I restart the app at that moment, I will lose all the data in the buffer.
Questions:
Is there any way to detect such network issues?
Is there any way to get the data back from the kernel buffers?
Node.js exposes the low-level socket API to you very directly. I'm assuming that you are using a TCP socket to send and receive data.
One way to ensure that there is an active connection between the client and server is to send heartbeat signals back and forth. If you fail to receive a heartbeat from the server while sending data, you can assume that the connection failed.
As for the second part of your question: There is no easy way to get data back from kernel buffers. If losing the data will be a problem, I would make sure to write it to disk.
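A minimal sketch of the heartbeat idea on the client side, assuming a plain TCP socket from Node's net module and a made-up one-byte ping/pong protocol:

```typescript
import * as net from "node:net";

const HEARTBEAT_INTERVAL_MS = 5_000;   // assumed values; tune for your network
const HEARTBEAT_TIMEOUT_MS = 15_000;

const socket = net.connect(4000, "transport-server.internal");  // hypothetical peer
let lastPongAt = Date.now();

// Ping regularly; the server is expected to echo a pong byte back.
const pinger = setInterval(() => {
  socket.write("\x01");                                // hypothetical ping byte
  if (Date.now() - lastPongAt > HEARTBEAT_TIMEOUT_MS) {
    // No pong for too long: treat the connection as dead.
    socket.destroy(new Error("heartbeat timed out"));
  }
}, HEARTBEAT_INTERVAL_MS);

socket.on("data", (chunk) => {
  if (chunk.includes(0x02)) lastPongAt = Date.now();   // hypothetical pong byte
});

socket.on("close", () => clearInterval(pinger));
socket.on("error", (err) => console.error("connection failed:", err.message));
```

For the second part, the practical pattern is application-level acknowledgements: keep each record (in memory or on disk, as suggested above) until the server has explicitly confirmed it, and re-send unconfirmed records after reconnecting, since whatever was sitting in the kernel's send buffer at the time of the failure is not recoverable from user space.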
I have been reading about Node and how it is single-threaded. If I have a large file (500 MB) to upload to a server, or to download from a server, I am guessing this cannot happen asynchronously on the server side. Is this a bad use case for Node.js, or is there a solution where this can be done without blocking the event loop?
There's one user thread, but there are other threads in Node.
Most I/O operations are done for you behind the scenes and you only act on events. Typically, you'll receive events with chunks of data and, if other requests happen at the same time, they may be interlaced with other events. If you don't do a lot in the main thread (which is usually the case), there's no reason your program should block during an upload.
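For example, here is a minimal sketch (with assumed file paths) of serving and accepting a large file as a stream, so the 500 MB never sits in memory and the loop stays free between chunks:

```typescript
import * as http from "node:http";
import * as fs from "node:fs";
import { pipeline } from "node:stream";

http.createServer((req, res) => {
  if (req.method === "GET") {
    // Download: the file is read and written in small chunks; other requests
    // are served in between because nothing here blocks the event loop.
    pipeline(fs.createReadStream("/data/big-file.bin"), res, (err) => {
      if (err) res.destroy(err);
    });
  } else if (req.method === "POST") {
    // Upload: the request body is streamed to disk the same way.
    pipeline(req, fs.createWriteStream("/data/upload.bin"), (err) => {
      res.end(err ? "upload failed" : "ok");
    });
  } else {
    res.statusCode = 405;
    res.end();
  }
}).listen(3000);
```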
For example, we have a basic Node.js server <-> client communication.
A basic Node.js server sends a message every 500ms to the only connected client, or to every connected client, over their respective open sockets. The client responds correctly to the heartbeat and receives all the messages in time. But imagine the client hits a temporary connection lag (without the socket closing), a CPU overload, etc., and cannot process anything for 2 seconds or more.
In this situation, where do all the messages that have not yet been received by the client go?
Are they stored in Node, in some buffer or similar?
And vice versa? The client sends a message to the server every 500ms (the server only listens, without responding), but the server has a temporary connection issue or CPU overload for 2 or 3 seconds.
Thanks in advance! Any information or clarification will be welcome.
Javier
Yes, they are stored in buffers, primarily in buffers provided by the OS kernel. The same thing happens on the receiving end for connections incoming to a Node server.
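You can see this from Node itself: socket.write() returns false once data is being queued (in Node's writable buffer on top of the kernel's), and 'drain' fires when that queue has emptied. A sketch of the 500ms sender that respects that backpressure, with a made-up peer address:

```typescript
import * as net from "node:net";

const socket = net.connect(5000, "peer.example.internal");  // hypothetical peer
let timer: ReturnType<typeof setInterval> | undefined;

function startSending(): void {
  timer = setInterval(() => {
    const flushed = socket.write(JSON.stringify({ ts: Date.now() }) + "\n");
    if (!flushed) {
      // The message was buffered rather than sent: the peer is slow or stalled.
      // socket.writableLength shows how many bytes are queued on our side.
      clearInterval(timer);
      socket.once("drain", startSending);  // resume once the buffer empties
    }
  }, 500);
}

socket.on("connect", startSending);
socket.on("close", () => { if (timer) clearInterval(timer); });
```

If the peer never recovers and the connection eventually drops, whatever is still queued at that point is lost, so anything important should also be acknowledged at the application level.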