When to use sockets over FIFO for Client-Server IPC? - linux

Having looked at code for some time now, I see most coders have used sockets for IPC rather than pipes (or FIFOs, to be specific).
Considering that there's only one client and one server, isn't it better to use a FIFO instead of sockets?
Please educate me in this matter.

A FIFO has the following benefits:
writes are atomic if the data length is less than PIPE_BUF
splice() is almost guaranteed to work with SPLICE_F_MOVE (no user-space data copy; the kernel will move data between pipes)
they are easier to set up than sockets
But on the other hand a FIFO is unidirectional, i.e. you will most likely need two separate FIFOs:
one for writing data from client to server
one for writing data from server to client
Or use one FIFO but reopen it as needed, i.e. get the data on the server, reopen the FIFO O_WRONLY on the server and O_RDONLY on the client, get the data on the client, and do the reverse after the data from the server has been read.
http://man7.org/linux/man-pages/man7/pipe.7.html
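As a minimal sketch of the two-FIFO approach (the paths /tmp/c2s and /tmp/s2c and the buffer size are made up for illustration), the server side could look roughly like this:

/* server side of a two-FIFO setup: read requests on one FIFO, write replies on the other (sketch) */
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    mkfifo("/tmp/c2s", 0666);               /* client-to-server FIFO (EEXIST is fine) */
    mkfifo("/tmp/s2c", 0666);               /* server-to-client FIFO */

    int in  = open("/tmp/c2s", O_RDONLY);   /* blocks until the client opens it for writing */
    int out = open("/tmp/s2c", O_WRONLY);   /* blocks until the client opens it for reading */

    char buf[512];
    ssize_t n = read(in, buf, sizeof(buf)); /* atomic if the client wrote <= PIPE_BUF bytes */
    if (n > 0)
        write(out, buf, (size_t)n);         /* echo the request back as a trivial reply */

    close(in);
    close(out);
    return 0;
}

The client does the mirror image: it opens /tmp/c2s with O_WRONLY and /tmp/s2c with O_RDONLY, in that order, so neither side deadlocks on the blocking open() calls.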

Socket is bi-directional, pipe/fifo is uni-directional.
Socket can be stream or datagram, pipe/fifo are always streams.
Socket and fifo do not require related processes, whereas unnamed pipes do.
Socket can handle more than one peer.
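To make the bi-directional point concrete, here is a minimal sketch using socketpair() (AF_UNIX stream sockets): both ends read and write on the same descriptor, which a FIFO cannot do. Note that socketpair(), like an unnamed pipe, only works between related processes; unrelated processes would use a named AF_UNIX socket with bind()/listen()/connect() instead.

/* bi-directional IPC over a single pair of connected AF_UNIX sockets (sketch) */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 1;

    if (fork() == 0) {                              /* child plays the "client" */
        close(sv[0]);
        write(sv[1], "ping", 4);                    /* write on the descriptor ...      */
        char buf[16];
        ssize_t n = read(sv[1], buf, sizeof(buf));  /* ... and read the reply on it too */
        if (n > 0)
            printf("client got: %.*s\n", (int)n, buf);
        _exit(0);
    }

    close(sv[1]);                                   /* parent plays the "server" */
    char buf[16];
    ssize_t n = read(sv[0], buf, sizeof(buf));
    if (n > 0)
        write(sv[0], "pong", 4);                    /* reply on the very same descriptor */
    close(sv[0]);
    return 0;
}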

Related

What kind of server needs select

I know that a server normally opens one port and listens on it.
Today I learnt that there is a select function on Unix-like systems. With select we can listen on multiple sockets.
I just can't imagine a case where we would need to use select. If we have two sockets, it means that we are listening on two ports, right? So I have a question:
What kind of server would open more than one port but receive and process the same type of requests?
Using select helps with handling reads and writes on multiple sockets. It doesn't have to be multiple server sockets. The most typical use is for multiplexing a large number of client sockets.
You have a server with one listening socket. Each time you accept a connection, you add the new client socket to the multiplexing pool. select then returns any time any of those sockets has data available to read. The big win is that you're doing all this with one thread.
You also get a socket for each connection that you've accepted on the listening (server) socket.
Selecting among these (client) sockets and the server socket (readable => new connection) allows you to write apps such as chat servers efficiently.
Ummm... remember the difference between ports and sockets.
A "port" is like a telephone-number. But a single phone-number could be handling any number of "calls!"
A "socket," then, represents a single telephone-call: a currently active connection between this server and a particular client. Each connection, by definition, "takes place over a particular port," but any number of connections might exist at the same time.
(The "accept" operation corresponds to: picking up the phone.)
So, then, what select() buys you is the ability to monitor any number of sockets at one time. It examines all the sockets, waits (if necessary) for something to happen on any one of them, and returns one message to you. Now, the design of your server becomes "a simple loop." No matter how many sockets you're listening to, and no matter how many of them have messages waiting, select() will return messages to you one at a time.
It's basically the case that "every server out there will use a select() loop at its heart, unless there's an exceptionally wonderful reason not to."
Take a look here:
One traditional way to write network servers is to have the main
server block on accept(), waiting for a connection. Once a connection
comes in, the server fork()s, the child process handles the connection
and the main server is able to service new incoming requests.
With select(), instead of having a process for each request, there is
usually only one process that "multi-plexes" all requests, servicing
each request as much as it can.
So one main advantage of using select() is that your server will only
require a single process to handle all requests. Thus, your server
will not need shared memory or synchronization primitives for
different 'tasks' to communicate.
One major disadvantage of using select(), is that your server cannot
act like there's only one client, like with a fork()'ing solution. For
example, with a fork()'ing solution, after the server fork()s, the
child process works with the client as if there was only one client in
the universe -- the child does not have to worry about new incoming
connections or the existence of other sockets. With select(), the
programming isn't as transparent.
http://www.lowtek.com/sockets/select.html
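For comparison, the "traditional" fork()-per-connection model the quote starts with looks roughly like this (a sketch; port 9000 is arbitrary and error handling is mostly omitted):

/* classic fork()-per-connection TCP server (sketch) */
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9000);         /* example port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);
    signal(SIGCHLD, SIG_IGN);                   /* let the kernel reap exited children */

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);      /* main server blocks on accept() */
        if (cfd < 0)
            continue;

        if (fork() == 0) {                      /* child: handles exactly one client */
            close(lfd);
            char buf[1024];
            ssize_t n;
            while ((n = read(cfd, buf, sizeof(buf))) > 0)
                write(cfd, buf, (size_t)n);     /* trivial echo "service" */
            close(cfd);
            _exit(0);
        }
        close(cfd);                             /* parent: drop its copy, back to accept() */
    }
}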

What is the performance of piping a stream in node.js?

I have a TCP server written in Node.js. When a connection is accepted on its server socket, the process passes the new socket off to a process pool. It either forks or reuses a previously forked process. It then passes the received socket to that other process using ChildProcess.send(). This gives complete control of the socket to the child process.
I am considering taking a different approach, but I'm concerned about the potential performance trade-offs. I would like instead to pipe the socket to the child process either through stdin or a unix domain socket or maybe a pipe. There are a number of reasons why this approach would be preferable in my particular domain, but I won't belabor this question with those details.
So I am left to wonder about the performance characteristics of the pipe() method on a Node.js stream. Is the piping of the stream handled at the system level, or does Node.js have to read every byte from one stream and send it down the destination? There are a few system calls (e.g. splice()) that provide some level of zero-copy streaming between file descriptors. Does Node.js use some sort of mechanism like that, or is it manual?
I recommend having a read of this blog post, which highlights how, when it comes to streams, things are only as fast as the slowest stream in the workflow.
Also, have a read of this answer explaining how to benchmark streams in node.
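For reference, the splice() mechanism the question mentions looks roughly like this at the system-call level. Whether Node uses anything like it internally is exactly what is being asked, so treat this as an illustration of the syscall only, not of Node's implementation (the function name forward() is made up):

/* forward up to len bytes from in_fd to out_fd through a pipe, letting the
   kernel move the data instead of copying it through user space (Linux sketch) */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

ssize_t forward(int in_fd, int out_fd, size_t len)
{
    int p[2];
    if (pipe(p) == -1)
        return -1;

    /* source -> pipe: at least one side of each splice() must be a pipe */
    ssize_t n = splice(in_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
    if (n > 0)
        n = splice(p[0], NULL, out_fd, NULL, (size_t)n, SPLICE_F_MOVE);  /* pipe -> sink */

    close(p[0]);
    close(p[1]);
    return n;
}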

Node.JS Unbounded Concurrency / Stream backpressure over TCP

As I understand it, one of the consequences of Node's evented IO model is the inability to tell a Node process that is (for example) receiving data over a TCP socket to block, once you've hooked up your receiving event handlers (or otherwise started listening for data).
If the receiver can't process the incoming data fast enough, "unbounded concurrency" can result, whereby node under the hood continues to read data off the socket as fast as it can, scheduling new data events on the event loop instead of blocking on the socket, until the process eventually runs out of memory and dies.
The receiver can't tell node to slow its reading, which would otherwise allow TCP's inbuilt flow control mechanisms to kick in and indicate to the sender that it needs to slow down.
Firstly, is what I've described so far accurate? Is there something I've missed that allows node to avoid this situation?
One of the much touted features of Node Streams is the automatic handling of backpressure.
AFAIK, the only way a writable stream (of a TCP socket) can tell whether it needs to slow down is by looking at socket.bufferSize (indicating the amount of data written to the socket but not yet sent). Given that Node at the receiving end always reads as fast as it can, this can only indicate a slow network connection between sender and receiver, and NOT whether the receiver can't keep up.
So secondly, can Node Streams automatic backpressure somehow work in this situation to deal with a receiver that can't keep up?
It also seems that this problem affects browsers receiving data via websockets, for the similar reason that the websockets API doesn't provide a mechanism to tell the browser to slow its reading from the socket.
Is the only solution to this problem for Node (and browsers using websockets) to implement a manual flow control mechanism at the application level, to explicitly tell the sending process to slow down?
To answer your first question, I believe your understanding is not accurate -- at least not when piping data between streams. In fact, if you read the documentation for the pipe() function you'll see that it explicitly says that it automatically manages the flow so that "destination is not overwhelmed by a fast readable stream."
The underlying implementation of pipe() is taking care of all of the heavy lifting for you. The input stream (a Readable stream) will continue to emit data events until the output stream (a Writable stream) is full. As an aside, if I remember correctly, the stream will return false when you attempt to write data that it cannot currently process. At this point, the pipe will pause() the Readable stream, which will prevent it from emitting further data events. Thus, the event loop isn't going to fill up and exhaust your memory nor is it going to emit events that are simply lost. Instead, the Readable will stay paused until the Writable stream emits a drain event. At that point, the pipe will resume() the Readable stream.
The secret sauce is piping one stream into another, which is managing the back pressure for you automatically. This hopefully answers your second question, which is that Node can and does automatically manage this by simply piping streams.
And finally, there is really no need to implement this manually (unless you are writing a new stream from scratch) since it is already provided for you. :)
Handling all of this is not easy, as admitted in the Node blog post that announced the streams2 API. It's a great resource and certainly provides much more information than I can here. One little gotcha that isn't entirely obvious but that you should know about, from the docs and kept for backwards-compatibility reasons:
If you attach a data event listener, then it will switch the stream into flowing mode, and data will be passed to your handler as soon as it is available.
So just be aware that attaching the data event listener in an attempt to observe something in the stream will fundamentally alter the stream to the old way of doing things. Ask me how I know.

a UDP socket based rateless file transmission

I'm new to socket programming and I need to implement a UDP based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C.., etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets (how to encode them is part of my own design; the encoding itself has erasure-correction capability, which is why I can transmit ratelessly via UDP) to that client. The client keeps collecting packets and trying to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message to the server, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (they are all encoded from the same set of source packets, though).
Here is what I'm thinking about for the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, according to the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request. When it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1. When it turns to 0, the sending ends.
The client program is easy to do: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I have to use multiple processes, how can the flag bit be made known to the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know if any packet has been lost, so you would only find that out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost. In the end you would have code that does what TCP sockets already do, so I suggest starting with TCP.
Typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread would handle communication with that particular client and then end. You should keep a limit of clients (threads) that are served simultaneously. Do not spawn a new process for each client - that is inefficient and not needed as this will get you nothing that you can't achieve with threads.
Thread programming requires care, so do not cut corners; otherwise you will have a hard time finding and diagnosing problems.
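A minimal sketch of that listener-plus-worker-thread shape (TCP and pthreads; port 9000 and the echo handler are placeholders for the real file-transfer logic, and it needs to be compiled with -pthread):

/* listener loop spawning one worker thread per accepted client (sketch) */
#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static void *serve_client(void *arg)
{
    int cfd = *(int *)arg;
    free(arg);
    char buf[1024];
    ssize_t n;
    while ((n = read(cfd, buf, sizeof(buf))) > 0)   /* placeholder for the real transfer logic */
        write(cfd, buf, (size_t)n);
    close(cfd);
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9000);             /* example port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);

    for (;;) {
        int *cfd = malloc(sizeof(*cfd));
        *cfd = accept(lfd, NULL, NULL);
        if (*cfd < 0) { free(cfd); continue; }

        pthread_t tid;
        pthread_create(&tid, NULL, serve_client, cfd);  /* one worker per client ...        */
        pthread_detach(tid);                            /* ... detached; a real server would
                                                           also cap the number of threads   */
    }
}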
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it has missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CANBUS network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.
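A sketch of the kind of per-message header described above (field names and sizes are illustrative, not prescriptive; a real implementation would serialize the fields explicitly with a fixed byte order rather than send the struct raw):

/* one UDP datagram of the transfer: small header + encoded payload (sketch) */
#include <stdint.h>

#define BLOCK_SIZE 1024          /* example payload size; keep the whole datagram
                                    below the path MTU to avoid IP fragmentation */

struct transfer_packet {
    uint32_t seq;                /* sequence number, so the receiver can detect gaps  */
    uint32_t checksum;           /* e.g. a CRC32 of the payload, to detect corruption */
    uint16_t len;                /* number of valid bytes in payload                  */
    uint8_t  payload[BLOCK_SIZE];
};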

How does an asynchronous socket server work?

I should state that I'm not asking about specific implementation details (yet), but just a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:
A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket with which to communicate with the server. The server then waits for data from the client, and when data arrives an event is raised which spawns a thread to read the data from a stream into a buffer.
My questions are:
How off is my understanding?
Does each client socket require its own thread to listen for data on?
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
Any clarifications and additional explanation would be greatly appreciated.
EDIT:
Regarding the question about what data is typically shared and points of contention, I realize this is more of an implementation detail than a question about the general process of accepting connections and sending/receiving data. I had looked at a couple of implementations (SuperSocket and Kayak) and noticed some synchronization for things like session caches and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.
One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.
A socket server works more or less like this:
A listening socket is set up to accept connections, and added to a socket set
The socket set is checked for events
If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
If a connected socket has events, the relevant IO functions are called
The socket set is checked for events again
This happens in one thread; you can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.
while running
    select on socketset
    for each socket with events
        if socket is listener
            accept new connected socket
            add new socket to socketset
        else if socket is connection
            if event is readable
                read data
                process data
            else if event is writable
                write queued data
            else if event is closed connection
                remove socket from socketset
            end
        end
    done
done
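A rough C translation of that loop using select(), as a sketch only (port 9000 is arbitrary, error handling is trimmed, and "process data" is just an echo):

/* single-threaded select() loop mirroring the pseudocode above (sketch) */
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_port   = htons(9000);       /* example port; sin_addr left as INADDR_ANY (0) */
    bind(lfd, (struct sockaddr *)&a, sizeof(a));
    listen(lfd, 16);

    fd_set all;                       /* the "socket set" */
    FD_ZERO(&all);
    FD_SET(lfd, &all);
    int maxfd = lfd;

    for (;;) {
        fd_set rd = all;              /* select() modifies the set, so work on a copy */
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) <= 0)
            continue;

        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &rd))
                continue;
            if (fd == lfd) {                          /* listener readable: new connection */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd >= 0) {
                    FD_SET(cfd, &all);
                    if (cfd > maxfd) maxfd = cfd;
                }
            } else {                                  /* connected socket readable */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {                         /* closed connection (or error) */
                    close(fd);
                    FD_CLR(fd, &all);
                } else {
                    write(fd, buf, (size_t)n);        /* "process data": echo it back */
                }
            }
        }
    }
}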
The IP stack takes care of all the details of which packets go to which "socket" in which order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
EDIT: In response to updated question.
I don't know either of the libraries you mention, but on the concepts you mention:
A session cache typically keeps data associated with a client, and can reuse this data for multiple connections. This makes sense when your application logic requires state information, but it's a layer higher than the actual networking end. In the above sample, the session cache would be used by the "process data" part.
Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is very easy to implement: instead of allocating/deallocating space for storing the data you read/write, you fetch a preallocated buffer from a pool, use it, then return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation mechanisms. This is not directly related to networking; you can just as well use buffer pools for, e.g., something that reads chunks of files and processes them.
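A minimal sketch of such a pool (fixed-size buffers, single-threaded; a real server would add locking or per-thread pools):

/* trivial fixed-size buffer pool: a stack of free buffers (sketch) */
#include <stdlib.h>

#define POOL_SIZE 64
#define BUF_SIZE  4096

static char *free_bufs[POOL_SIZE];
static int   free_count = 0;

/* fetch a buffer: reuse one from the pool if possible, else fall back to malloc */
char *buf_get(void)
{
    if (free_count > 0)
        return free_bufs[--free_count];
    return malloc(BUF_SIZE);
}

/* return a buffer: keep it for reuse if there is room, else really free it */
void buf_put(char *b)
{
    if (free_count < POOL_SIZE)
        free_bufs[free_count++] = b;
    else
        free(b);
}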
How off is my understanding?
Pretty far.
Does each client socket require its own thread to listen for data on?
No.
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
TCP/IP is a number of layers of protocol. There's no "kernel" to it. It's pieces, each with a separate API to the other pieces.
The IP address is handled in one place.
The port # is handled in another place.
The IP addresses are matched up with MAC addresses to identify a particular host. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
What threaded environment?
Data sharing? What?
Contention? The physical channel is the number one point of contention. (Ethernet, for example, depends on collision detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications and is a point of contention.

Resources