How to get socket states from a socket in winsock2 - visual-c++

Hi, I'm new to C++ Winsock2 sockets and I've been stuck on this for a while.
How do I determine whether a socket is connected, bound, listening, or closed from the socket/file descriptor passed as an argument?
Note: I want to figure out the socket state without attempting to bind/connect/listen/closesocket, because those calls both perform the operation and return a result. Example: if I wanted to check whether a socket is connected to a server, I would have to attempt to connect and check the result, which I don't want.
I am trying to achieve something like this.
// Using the Winsock2 API.
bool isSocketBound(SOCKET s) {
    bool isBound = false;
    // Check if the socket is bound without attempting to bind.
    // The code which I'm not able to figure out goes here.
    return isBound;
}
// Same goes for the other functions.
// Same goes for the other functions.
Thanks in advance.

How do I determine whether a socket is connected, bound, listening, or closed from the socket/file descriptor passed as an argument?
There is no single socket API to query a SOCKET's current state. However:
getsockname() will tell you whether the socket is bound locally or not.
getpeername() will tell you whether the socket is connected to a remote peer or not.
getsockopt() with the SOL_SOCKET:SO_ACCEPTCONN option will tell you whether a connection-oriented (ie, TCP) SOCKET is listen()'ing or not.
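Putting those three calls together, a minimal sketch might look like this (it assumes WSAStartup() has already succeeded, and it only checks for success/failure rather than inspecting specific error codes):

#include <winsock2.h>
#include <ws2tcpip.h>

// Returns true if the socket has a local address, i.e. has been bound.
bool isSocketBound(SOCKET s) {
    sockaddr_storage addr;
    int len = sizeof(addr);
    return getsockname(s, (sockaddr*)&addr, &len) == 0;
}

// Returns true if the socket is connected to a remote peer.
bool isSocketConnected(SOCKET s) {
    sockaddr_storage addr;
    int len = sizeof(addr);
    return getpeername(s, (sockaddr*)&addr, &len) == 0;
}

// Returns true if the socket is in the listening state.
bool isSocketListening(SOCKET s) {
    BOOL listening = FALSE;
    int len = sizeof(listening);
    if (getsockopt(s, SOL_SOCKET, SO_ACCEPTCONN, (char*)&listening, &len) != 0)
        return false;
    return listening != FALSE;
}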
Alternatively, if you try to connect() on a listen()'ing socket, you will get a WSAEINVAL error. If you try to listen() on a connect()'ed socket, you will get a WSAEISCONN error.
Alternatively, if the socket is at least bound locally, then you can use getsockname() to get its local IP/port and then use GetTcpTable(), GetTcpTable2(), or GetExtendedTcpTable() to find out whether that IP/port is in a LISTENING state or not.
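A rough sketch of that table lookup for an IPv4 socket (the helper name is mine, error handling is minimal, and you need to link against iphlpapi.lib):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <vector>

// Hypothetical helper: true if the bound IPv4 socket appears as LISTENING
// in the system-wide TCP table.
bool isListeningViaTcpTable(SOCKET s) {
    sockaddr_in local = {};
    int len = sizeof(local);
    if (getsockname(s, (sockaddr*)&local, &len) != 0)
        return false; // not bound (or not an IPv4 socket)

    DWORD size = 0;
    if (GetTcpTable(nullptr, &size, FALSE) != ERROR_INSUFFICIENT_BUFFER)
        return false;
    std::vector<char> buf(size);
    PMIB_TCPTABLE table = (PMIB_TCPTABLE)buf.data();
    if (GetTcpTable(table, &size, FALSE) != NO_ERROR)
        return false;

    for (DWORD i = 0; i < table->dwNumEntries; ++i) {
        const MIB_TCPROW& row = table->table[i];
        // Addresses and ports in the table are in network byte order.
        if (row.dwLocalAddr == local.sin_addr.s_addr &&
            (u_short)row.dwLocalPort == local.sin_port &&
            row.dwState == MIB_TCP_STATE_LISTEN)
            return true;
    }
    return false;
}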
If a SOCKET has been closed locally, the SOCKET is not valid to begin with, so you can't query it for anything. The best thing you can do is just set the SOCKET to INVALID_SOCKET when you are done using it, and then check for that condition when needed. On the other hand, if the socket has been closed remotely but not locally yet, then the SOCKET is still valid, but there is no way to query its close state. The only way to know if the socket has been closed by the peer is if a recv() operation reports 0 bytes received, or if a send() operation fails with a connection error, or if you are using the SOCKET in asynchronous mode (WSAAsyncSelect() or WSAEventSelect()) and receive an FD_CLOSE notification for it.
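If you must probe for a remote close rather than wait for one of those signals, about the best you can do is a recv() with MSG_PEEK, which does not consume data. A sketch (be aware that on a blocking socket this call will block until data or a FIN arrives):

// Returns 1 if the peer closed gracefully, 0 if the connection looks
// alive, -1 on a hard error (e.g. WSAECONNRESET).
int checkRemoteClose(SOCKET s) {
    char buf[1];
    int n = recv(s, buf, 1, MSG_PEEK); // peek: leaves data in the buffer
    if (n == 0) return 1;              // peer sent FIN: graceful close
    if (n > 0)  return 0;              // data pending: connection alive
    if (WSAGetLastError() == WSAEWOULDBLOCK)
        return 0;                      // non-blocking socket, nothing to read yet
    return -1;
}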
Note: I want to figure out the socket state without attempting to bind/connect/listen/closesocket, because those calls both perform the operation and return a result. Example: if I wanted to check whether a socket is connected to a server, I would have to attempt to connect and check the result, which I don't want.
Why are you not simply keeping track of the state in your own code? If you bind() a SOCKET and it succeeds, then you know it is bound locally. If you connect() and it succeeds, then you know it is bound locally and connected to a remote server. You should maintain a state variable alongside your SOCKET variable and keep it updated with each state-changing operation you perform on the SOCKET.
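For example, something along these lines (a sketch; the names are illustrative):

// Track the state yourself, alongside the SOCKET.
enum class SocketState { Closed, Created, Bound, Listening, Connected };

struct TrackedSocket {
    SOCKET      sock  = INVALID_SOCKET;
    SocketState state = SocketState::Closed;
};

bool bindTracked(TrackedSocket& ts, const sockaddr* addr, int len) {
    if (bind(ts.sock, addr, len) != 0)
        return false;
    ts.state = SocketState::Bound; // update the state only on success
    return true;
}
// ...and similarly for connect(), listen(), and closesocket().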

Related

Create single TCP connection for every pair of nodes in peer-to-peer application

I am developing a peer-to-peer application in Rust, and my goal is to have only one TCP connection (one TcpStream) between each pair of nodes, to read from and write to.
It is important that every connection from a node to the others comes from the same IP address and port. I am trying to implement a preliminary step that creates all the connections and stores them, so that I can later both listen and write on them.
I have tried the following using std::net::TcpStream and std::net::TcpListener:
Every node starts and listens for connections on its address (1.1.) and then connects to the others (1.2.). Result: all the nodes block listening and none of them advances.
Every node listens for a connection from all the nodes with a lower index (2.1.) and creates a connection using TcpStream::connect for all the nodes with a greater index (2.2.), in separate threads. Result: the desired connection is established, but the connections created with TcpStream::connect come from a random port.
The same as 2., but create the connection (in 2.2.) using the socket2 library, which allows binding an address before connecting. Result: I obtain an error saying "Transport endpoint is already connected" after the second attempt to bind the socket.
Is this possible? In that case, how should I implement it?
I want to note that I plan on implementing a poll over all the sockets using mio. Maybe this is an impediment to creating the collection of sockets and reading and writing at the same time.
Please ask for any clarification if needed.
You can call socket.set_reuse_port(true) and socket.set_reuse_address(true) before calling socket.bind(address) to make your connection succeed.
See https://stackoverflow.com/a/14388707/50902 for the details of what each of these calls do.
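For reference, those socket2 setters correspond to the SO_REUSEADDR and SO_REUSEPORT options at the C level; a rough sketch of the same bind-then-connect sequence (SO_REUSEPORT requires Linux 3.9 or later, and error handling on the setsockopt calls is omitted):

#include <sys/socket.h>
#include <netinet/in.h>

// Enable address/port reuse *before* bind(), then connect() from the
// fixed local address. This is what socket2's set_reuse_address() and
// set_reuse_port() map to. Returns the fd, or -1 on failure.
int connectFromFixedPort(const sockaddr_in* local, const sockaddr_in* remote) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &yes, sizeof(yes)); // Linux 3.9+
    if (bind(fd, (const sockaddr*)local, sizeof(*local)) != 0 ||
        connect(fd, (const sockaddr*)remote, sizeof(*remote)) != 0)
        return -1;
    return fd;
}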

Epoll and remote 1-way shutdown

Assume a TCP socket on the local Linux host is in a connected state with a remote host. The local host is using epoll_wait to be notified of events on the socket with the remote host.
If the remote host were to call:
shutdown(s,SHUT_WR);
on its connected socket to indicate it is done transmitting, what event(s) will epoll_wait return on the local host for its socket?
I'm assuming EPOLLIN would always get returned, and a subsequent recv call would return 0 to indicate the remote side has finished transmitting.
What about EPOLLHUP or EPOLLRDHUP? (And what is the difference between these two events?)
Or even EPOLLERR?
If the remote host calls "close" instead of "shutdown", does the answer to any of the above change?
I'm answering this myself after doing the heavy lifting to find the answer.
A socket listening for epoll events will typically receive an EPOLLRDHUP (in addition to EPOLLIN) event flag upon the remote peer calling close or shutdown(SHUT_WR). This does not necessarily mean the socket is dead. Subsequent calls to recv() will return any unread data on the socket, and eventually "0" will be returned to indicate EOF. It may even be possible to send data back if the remote peer only did a half-close of its socket.
The one notable exception is if the remote peer has the SO_LINGER option enabled on its socket with a linger value of "0". Closing such a socket may result in a TCP RST being sent instead of a FIN. From what I've read, a connection reset event will generate either an EPOLLHUP or an EPOLLERR. (I haven't had time to confirm, but it makes sense.)
There is some documentation suggesting that older Linux implementations don't support EPOLLRDHUP, in which case EPOLLHUP gets generated instead.
And for what it is worth, in my particular case, I found that it is not too interesting to have code that special-cases EPOLLHUP or EPOLLRDHUP events. Instead, just treat these events the same as EPOLLIN/EPOLLOUT and call recv() (or send() as appropriate), but pay close attention to the return codes from recv() and send().
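For illustration, a sketch of that approach (it assumes the fd is non-blocking and was registered with EPOLLIN | EPOLLRDHUP):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>

// Treat EPOLLRDHUP/EPOLLHUP/EPOLLERR like EPOLLIN and let recv() decide.
void handleEvent(int epfd, const epoll_event& ev) {
    int fd = ev.data.fd;
    if (ev.events & (EPOLLIN | EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n > 0)
                continue;                        // consume the n bytes here
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;                           // drained for now, still open
            // n == 0 (peer closed or half-closed) or a hard error (ECONNRESET).
            epoll_ctl(epfd, EPOLL_CTL_DEL, fd, nullptr);
            close(fd);
            break;
        }
    }
}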

Avoiding connection refused error for multiple sockets in C

Just a quick background: I want to open two sockets per thread of the application. The main thread has the accept() call to accept a TCP connection. There are three other threads, and all of them also have an accept(). The problem is that sometimes, in a multithreaded environment, the client tries to connect before the accept call of the server in a child thread, which results in a "connection refused" error. The client doesn't know when the server is ready to connect.
I do not want the main thread socket to be sending any control information to the client like "You can now connect to the server". To avoid this, I have two approaches in mind:
1. Set a max retry counter (attempts) at the client side for connecting to the server before exiting with a connection refused error.
2. A separate thread whose only function is to accept connections at the server side, as a common accept function for all the thread connections except the main thread.
I would really appreciate knowing if there is any other approach. Thanks.
Connection refused is not because you're calling accept late, it's because you're calling listen late. Make sure you call listen before any connect calls (you can check with strace). This probably requires that you listen before you spawn any children.
After you call listen on a socket, incoming connections will queue until you call accept. At some point the not-yet-accepted connections can get dropped, but this shouldn't occur with only 2 or 3 sockets.
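A sketch of that ordering (the helper name, port handling, and backlog value are arbitrary, and error checks are omitted):

#include <sys/socket.h>
#include <netinet/in.h>
#include <cstdint>

// Create, bind, and listen in the main thread, *before* any client is
// told it may connect. accept() can then happen later, from any thread.
int make_listener(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    listen(fd, 16); // from this point on, client connect() calls succeed
    return fd;
}
// ...spawn the worker threads afterwards; each can accept() at its leisure.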
If this is Unix, you can just use pipe2 or socketpair to create a pair of connected pipes/Unix domain sockets with a lot less code. Of course, you need to do this before spawning the child thread and pass one end to the child.
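For example (a minimal sketch):

#include <sys/socket.h>

// A connected pair with no listen/accept/connect dance at all.
int sv[2];
if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0) {
    // keep sv[0] in this thread; hand sv[1] to the child thread
}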

Understanding BSD interface

I'm trying to understand how the events in a BSD socket interface translate to the state of a TCP connection. In particular, I'm trying to understand at what stage in the connection process accept() returns on the server side:
client sends SYN
server sends SYN+ACK
client sends ACK
In which one of these steps does accept() return?
accept returns when the connection is complete. The connection is complete after the client sends his ACK.
accept gives you a socket on which you can communicate. Of course you know, you can't communicate until the connection is established. And the connection can't be established before the handshake.
It wouldn't make sense to return before the client sends his ACK. It is entirely possible he won't say anything after the initial SYN.
The TCP/IP stack code in the kernel normally[1] completes the three-way handshake entirely without intervention from any user space code. The three steps you list all happen before accept() returns. Indeed, they may happen before accept() is even called!
When you tell the stack to listen() for connections on a particular TCP port, you pass a backlog parameter, which tells the kernel how many connections it can silently accept on behalf of your program at once. It is this queue that is being used when the kernel automatically accepts new connection requests, and there that they are held until your program gets around to accept()ing them. When there is one or more connections in the listen backlog queue when you call accept(), all that happens is that the oldest is removed from the queue and bound to a new socket.[2]
In other words, if your program calls listen(sd, 5), then goes into an infinite do-nothing loop so that it never calls accept(), five concurrent client connection requests will succeed, from the clients' point of view. A sixth connection request will get stalled on the first SYN packet until either the program owning the TCP port calls accept() or one of the other clients drops its connection.
[1] Firewall and other stack modifications can change this behavior, of course. I am speaking here only of default BSD sockets stack behavior.
[2] If there are no connections waiting in the backlog when you call accept(), it blocks by default, unless the listener socket was set to non-blocking, in which case it returns -1 and errno is EWOULDBLOCK.
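To make that concrete, here is a sketch of such a do-nothing listener (the port number is arbitrary, and error checks are omitted):

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(12345);
    bind(sd, (sockaddr*)&addr, sizeof(addr));
    listen(sd, 5);    // the kernel will complete up to 5 handshakes for us
    for (;;) pause(); // never call accept(); clients 1-5 still connect fine
}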

ZeroMQ: Check if someone is listening behind Unix domain socket

Context: Linux (Ubuntu), C, ZeroMQ
I have a server which listens on an ipc:// SUB ZeroMQ socket (which physically is a Unix domain socket).
I have a client which should connect to the socket, publish its message, and disconnect.
The problem: if the server is killed (or otherwise dies unnaturally), the socket file stays in place. If the client attempts to connect to this stale socket, it blocks in zmq_term().
I need to prevent client from blocking if server is not there, but guarantee delivery if server is alive but busy.
Assume that I can not track server lifetime by some external magic (e.g. by checking a PID file).
Any hints?
A non-portable solution seems to be to read /proc/net/unix and search there for the socket name.
Without seeing your code, all of this is guesswork... that said...
If you have a PUB/SUB pair, the PUB will hang around to make sure that its message gets through. Perhaps you're not using the right type of zmq pair. Sounds more like you have a REP/REQ pair instead.
This way, once you connect from the client (the REQ side), you can do a zmq_poll to determine if the socket is available for writing. If it is, go ahead with your write; otherwise shut down the client and handle the error condition (if it is an error in your system).
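A sketch of that check on the REQ side, using the libzmq C API (the endpoint handling and the 1-second timeout are placeholders; setting ZMQ_LINGER to 0 also avoids the zmq_term() hang described in the question):

#include <zmq.h>

// Returns the number of bytes sent, or -1 if no peer became available.
int try_publish(const char* endpoint, const char* msg, size_t len) {
    void* ctx = zmq_ctx_new();
    void* req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, endpoint);

    zmq_pollitem_t item = { req, 0, ZMQ_POLLOUT, 0 };
    int rc = -1;
    if (zmq_poll(&item, 1, 1000) > 0 && (item.revents & ZMQ_POLLOUT))
        rc = zmq_send(req, msg, len, 0); // a peer is there; send

    int linger = 0; // do not block zmq_ctx_term() on undelivered messages
    zmq_setsockopt(req, ZMQ_LINGER, &linger, sizeof(linger));
    zmq_close(req);
    zmq_ctx_term(ctx);
    return rc;
}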
Maybe you can try to connect to the socket first with your own native socket. If the connection succeeds, there's quite a high possibility that your publisher will work fine.
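A sketch of that probe with a raw AF_UNIX socket (the helper name is mine):

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>

// A stale socket file with no listener fails connect() with ECONNREFUSED.
bool unixSocketAlive(const char* path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr = {};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    bool alive = connect(fd, (sockaddr*)&addr, sizeof(addr)) == 0;
    close(fd);
    return alive;
}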
There is another solution. Don't use ipc:// sockets. Instead use something like tcp://127.0.0.101:10001. On most UNIXes that will be almost as fast as IPC because the OS recognizes that it is a local connection and shortcuts the full IP stack processing.
