Sending file from server to client threads - multithreading

I have two programs, one for the client and one for the server. They can talk to each other in a chat that uses threads.
Then I added two buttons, one on the client and one on the server, to send a file from the server to the client.
When I connect the client to the server, I start a thread that waits to receive data with the recv() function.
The problem is that when I want to send a file to the client, the transfer does not work because that receive thread is already reading from the socket.
When I do not use threads, the transfer works.
My question is: can I stop that thread from a button click?
PS: I am a beginner with threads.
This is my code with threads:
delegate System::Void PrintStringDel(String^ str);

void PrintString(String^ str) {
    txtReceive->Text += str;
    txtReceive->SelectionStart = txtReceive->Text->Length;
    txtReceive->ScrollToCaret();
}

void ReceiveThread() {
    int ByteReceived;
    char buff[1024];
    while (true) {
        // Leave room for a terminating NUL so the buffer can be converted to a String^.
        ByteReceived = recv(SendingSocket, buff, sizeof(buff) - 1, 0);
        if (ByteReceived > 0) {
            buff[ByteReceived] = '\0';
            PrintStringDel^ print = gcnew PrintStringDel(this, &Form1::PrintString);
            String^ text = gcnew String(buff);
            txtReceive->BeginInvoke(print, text);
        }
        else {
            break; // connection closed or recv() failed: leave the loop
        }
    }
}
In the client's Connect() handler:
...
Threading::ThreadStart^ ts = gcnew Threading::ThreadStart(this, &Form1::ReceiveThread);
Threading::Thread^ t = gcnew Threading::Thread(ts);
t->Start();
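One common approach (not part of the original post, just a minimal sketch assuming SendingSocket is the Winsock socket used by ReceiveThread) is to keep a flag and shut the socket down from the button handler; the blocking recv() then returns and the loop exits:
// Hypothetical members added to the form; the names are illustrative.
static bool keepReceiving = true;

void ReceiveThread() {
    char buff[1024];
    while (keepReceiving) {
        int ByteReceived = recv(SendingSocket, buff, sizeof(buff) - 1, 0);
        if (ByteReceived <= 0)
            break;                      // shutdown()/closesocket() or an error ends the loop
        // ... null-terminate and marshal the text to the UI as before ...
    }
}

System::Void btnStopReceive_Click(System::Object^ sender, System::EventArgs^ e) {
    keepReceiving = false;
    shutdown(SendingSocket, SD_BOTH);   // makes the pending recv() return
}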

Related

[C++, Windows Forms] How do I make the main thread wait for the called thread to finish?

I created two buttons, one to start a new thread and the other to end it. The actual calculation inside the new thread involves new[] and delete[], so I don't abort the thread directly but use a flag to end it. It may take some time to finish the delete[] and result-saving, so I want the main thread to wait for the new thread to end. But however I try it, I find the new thread doesn't run (though its ThreadState is Running) until the code in the stop button's handler has finished. System::Threading::Thread seems to work quite differently from std::thread to me. Is this how it should be?
#include "stdafx.h"
ref class Form1 : System::Windows::Forms::Form
{
public:
//define a thread name and a flag to terminate the thread
System::Threading::Thread^ th1;
static bool ITP1=0;
Form1(void)
{InitializeComponent();}
System::Windows::Forms::Button^ ButtonStart;
System::Windows::Forms::Button^ ButtonStop;
System::Windows::Forms::Label^ Label1;
void InitializeComponent(void)
{
this->SuspendLayout();
this->ButtonStart = gcnew System::Windows::Forms::Button();
this->ButtonStart->Location = System::Drawing::Point(20, 20);
this->ButtonStart->Click += gcnew System::EventHandler(this, &Form1::ButtonStart_Click);
this->Controls->Add(this->ButtonStart);
this->ButtonStop = gcnew System::Windows::Forms::Button();
this->ButtonStop->Location = System::Drawing::Point(120, 20);
this->ButtonStop->Click += gcnew System::EventHandler(this, &Form1::ButtonStop_Click);
this->Controls->Add(this->ButtonStop);
this->Label1 = gcnew System::Windows::Forms::Label();
this->Label1->Location = System::Drawing::Point(20, 80);
this->Controls->Add(this->Label1);
this->ResumeLayout(false);
}
void ThreadStart()
{
for (int idx=0;idx<999999999;++idx)
{
if (ITP1) break;
}
this->Label1->Text = "finished";
ITP1=0;
}
System::Void ButtonStart_Click(System::Object^ sender, System::EventArgs^ e)
{
th1 = gcnew System::Threading::Thread(gcnew System::Threading::ThreadStart(this,&Form1::ThreadStart));
th1->Start();
this->Label1->Text = "running";
}
System::Void ButtonStop_Click(System::Object^ sender, System::EventArgs^ e)
{
if (th1->ThreadState==System::Threading::ThreadState::Running)
{
//use the flag to stop the thread
ITP1=1;
//the wait method using while+sleep doesn't work
while (th1->ThreadState==System::Threading::ThreadState::Running) System::Threading::Thread::Sleep(1000);
//replacing the wait method above with "th1->Join()" doesn't work either
}
}
};
int main()
{
Form1^ A1 = gcnew Form1();
A1->ShowDialog();
return 0;
}
You have to Join() the called thread in the main thread. Then the main thread will wait until the called thread has finished.
See the documentation for Join to see how it should be called.
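For illustration, here is a minimal, self-contained sketch of what Join does (a plain console program compiled with /clr, not the poster's form code):
using namespace System;
using namespace System::Threading;

ref class Worker {
public:
    static void Run() {
        Thread::Sleep(2000);                // simulate some work
        Console::WriteLine("worker finished");
    }
};

int main() {
    Thread^ t = gcnew Thread(gcnew ThreadStart(&Worker::Run));
    t->Start();
    t->Join();                              // the calling thread blocks here until t ends
    Console::WriteLine("main continues after Join");
    return 0;
}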
Finally I found the cause: it's the "this->" pointer in the new thread. Removing it makes everything work.
I suppose it's because the form allows operations on only one element at a time. I ask the button click to wait for the new thread, while the new thread tries to edit another form element. They wait for each other to finish and cause a deadlock.
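An alternative that avoids the problem while keeping the "this->" access is to marshal the Label update back to the UI thread instead of setting it directly from the worker. A rough sketch against the Form1 class above (SetTextDel and SetLabelText are illustrative names, not part of the original code):
delegate void SetTextDel(System::String^ text);

void SetLabelText(System::String^ text) {
    Label1->Text = text;   // runs on the UI thread
}

void ThreadStart()
{
    for (int idx = 0; idx < 999999999; ++idx)
    {
        if (ITP1) break;
    }
    // Queue the update on the UI thread without blocking the worker,
    // so a Join() in the click handler cannot deadlock with it.
    this->BeginInvoke(gcnew SetTextDel(this, &Form1::SetLabelText),
                      gcnew array<System::Object^> { "finished" });
    ITP1 = 0;
}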

how to handle chrome.runtime.sendNativeMessage() in native app

I am working on a native messaging host. I am able to launch my custom application by using the API:
var port = chrome.runtime.connectNative('com.my_company.my_application');
I can post a message to my custom app by using:
port.postMessage({ text: "Hello, my_application" });
I know that messages are sent and received over the standard input/output streams.
How will my native application (a C or C++ exe) get notified when a message is received?
Which function/event should I handle to receive the message?
UPDATE:
Regarding how to listen for messages in the native app: they are sent over standard I/O (for the time being this is the only available communication channel between Chrome extensions and native apps).
Take a look at this sample app featuring a native messaging host implemented in Python.
You listen for messages by registering a listener on the port's onMessage event.
Use sendNativeMessage() only if you want a one-time communication (not a persistent port). In that case, do not use chrome.runtime.connectNative(...). Instead, do something like this:
var msg = {...};
chrome.runtime.sendNativeMessage("<your_host_id>", msg, function(response) {
if (chrome.runtime.lastError) {
console.log("ERROR: " + chrome.runtime.lastError.message);
} else {
console.log("Messaging host sais: ", response);
}
});
The docs' section about Native Messaging is pretty detailed and a great source of information.
I am posting C++ code which communicates with (i.e. receives messages from and sends messages to) the Chrome extension.
Hope this will help other developers.
#include <tchar.h>
#include <stdio.h>
#include <io.h>
#include <fcntl.h>
#include <iostream>
#include <string>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    cout.setf(std::ios_base::unitbuf); // instead of "<< eof" and "flushall"
    // Native messaging uses raw binary I/O on both stdin and stdout.
    _setmode(_fileno(stdin), _O_BINARY);
    _setmode(_fileno(stdout), _O_BINARY);
    unsigned int i, t = 0;
    int c;
    string inp;
    bool bCommunicationEnds = false;
    do {
        inp = "";
        t = 0;
        // Read the 4-byte message length; stop if stdin has been closed.
        if (!cin.read(reinterpret_cast<char*>(&t), 4)) {
            bCommunicationEnds = true;
        }
        // Loop getchar to pull in the message until we reach the total
        // length provided.
        for (i = 0; !bCommunicationEnds && i < t; i++) {
            c = getchar();
            if (c == EOF) {
                bCommunicationEnds = true;
            }
            else {
                inp += static_cast<char>(c);
            }
        }
        if (!bCommunicationEnds) {
            // Write the message length (4 bytes), then the original message.
            unsigned int len = static_cast<unsigned int>(inp.length());
            cout.write(reinterpret_cast<char*>(&len), 4);
            cout << inp;
        }
    } while (!bCommunicationEnds);
    return 0;
}
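For completeness, a small helper showing the same framing when sending a reply; Chrome expects each message to be UTF-8 JSON preceded by its 32-bit length in native byte order, and the function name here is just illustrative:
#include <iostream>
#include <string>

// Writes one native-messaging frame: 4-byte length, then the JSON payload.
// Assumes stdout has already been switched to binary mode, as above.
static void writeMessage(const std::string& json)
{
    unsigned int len = static_cast<unsigned int>(json.length());
    std::cout.write(reinterpret_cast<const char*>(&len), 4);
    std::cout << json << std::flush;
}

// Example: writeMessage("{\"text\":\"Hello, extension\"}");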

Update value after passing pointer

I am using a TCP server to send a char array. The function send() takes a char *, but, before that, it has to listen and accept a connection. Given that, I want to send the most recent data when an incoming connection is accepted. Previously, I used two threads. One updated the value in the buffer, the other simply waited for connections, then sent data.
I understand that there can be problems with not locking a mutex, but aside from that, would this same scheme work if I passed the char * to a send function, rather than updating it as a global variable?
Some code to demonstrate:
#include <pthread.h>
char buf[BUFLEN];
void *updateBuffer(void *arg) {
while(true) {
getNewData(buf);
}
}
void *sendData(void *arg) {
//Setup socket
while(true) {
newfd = accept(sockfd, (struct sockaddr *)&their_addr, &size);
send(newfd, buf, BUFLEN, 0);
close(newfd);
}
}
This would send the updated values whenever a new connection was established.
I want to try this:
#include <pthread.h>
char buf[BUFLEN];
void *updateBuffer(void *arg) {
while(true) {
getNewData(buf);
}
}
void *sendData(void *arg) {
TCPServer tcpServer;
while(true) {
tcpServer.send(buf);
}
}
Where the function tcpServer.send(char *) is basically the same as sendData() above.
The reason for doing this is so that I can make the TCP server into a class, since I'll need to use the same code elsewhere.
From my understanding, since I am passing the pointer, it's basically the same as when I just call send(), since I also pass a pointer there. The value will continue to update, but the address won't change, so it should work. Please let me know if that is correct. I'm also open to new ways of doing this (without mutex locks, preferably).
Yes, that is the way most of us do a send: pass a pointer to a buffer, either void * or char *.
I would code it like this:
int sendData(const char *buffer, const int length)
{
    // sockfd, their_addr and size are assumed to be set up elsewhere, as in the question.
    int newfd;
    int NumOfConnects = 0;
    while ((newfd = accept(sockfd, (struct sockaddr *)&their_addr, &size)) > 0)
    {
        // It would be necessary here to lock the buffer with a mutex
        send(newfd, buffer, length, 0);
        // Release the mutex
        close(newfd);
        NumOfConnects++;
    }
    // There is an error in the accept.
    // This could be OK if the main thread has closed the sockfd socket,
    // indicating that we should quit.
    // Return the number of transfers we have done.
    return NumOfConnects;
}
One thing to consider about using a pointer to a buffer which is modified in another thread: it could be that, in the middle of a send, the buffer changes and the data sent is not consistent.
But you have already noticed that situation as well; using a mutex is suggested, as you indicated.
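A minimal sketch of that mutex idea, with a stand-in getNewData() and the real send() reduced to a comment (buffer size and timings are arbitrary):
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BUFLEN 64

static char buf[BUFLEN];
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the real getNewData() from the question. */
static void getNewData(char *out) {
    static int counter = 0;
    snprintf(out, BUFLEN, "sample %d", counter++);
}

static void *updateBuffer(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&buf_lock);
        getNewData(buf);                 /* the writer holds the lock */
        pthread_mutex_unlock(&buf_lock);
        usleep(100000);
    }
    return NULL;
}

static void *sendData(void *arg) {
    (void)arg;
    for (;;) {
        char snapshot[BUFLEN];
        pthread_mutex_lock(&buf_lock);
        memcpy(snapshot, buf, BUFLEN);   /* copy under the lock ... */
        pthread_mutex_unlock(&buf_lock);
        /* ... then send the stable copy, e.g. send(newfd, snapshot, BUFLEN, 0); */
        printf("would send: %s\n", snapshot);
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, updateBuffer, NULL);
    pthread_create(&t2, NULL, sendData, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}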

OBSE and Boost.Asio: Threaded async UDP server with deadline_timer on the same io_service

Platform: Windows 7 Professional 64 bit
Compiler: VS2010 Express
Boost: Version 1.49
Plugin System: OBSE 20 (for the Oblivion game by Bethesda)
I have a class based on the async UDP examples. I run the io_service itself in its own thread. Here is the code for the class:
// udp buffer queues
extern concurrent_queue<udp_packet> udp_input_queue; // input from external processes
extern concurrent_queue<udp_packet> udp_output_queue; // output to external processes
using boost::asio::ip::udp;
class udp_server
{
public:
udp_server(boost::asio::io_service& io_service, short port)
: io_service_(io_service),
socket_(io_service_, udp::endpoint(boost::asio::ip::address_v4::from_string(current_address), port))//, // udp::v4()
{
// start udp receive
socket_.async_receive_from(
boost::asio::buffer(recv_buf), sender_endpoint_,
boost::bind(&udp_server::handle_receive_from, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
send_timer_ = NULL;
}
~udp_server(){
io_service_.stop();
if(send_timer_){
send_timer_->cancel();
delete send_timer_;
}
}
void start(){
// start send timer
send_timer_ = new boost::asio::deadline_timer(io_service_, boost::posix_time::milliseconds(500));
send_timer_restart();
}
void handle_send_to(const boost::system::error_code& error, size_t bytes_recvd);
void handle_receive_from(const boost::system::error_code& error, size_t bytes_recvd);
//void handle_send_timer(const boost::system::error_code& error);
void handle_send_timer();
void send_timer_restart();
void stop()
{
io_service_.stop();
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
std::vector<udp::endpoint> clientList;
//std::auto_ptr<boost::asio::io_service::work> busy_work;
udp_buffer recv_buf;
boost::asio::deadline_timer* send_timer_;
};
Now I instantiate the class and thread like this:
udp_server *udp_server_ptr=NULL;
boost::asio::deadline_timer* dlineTimer=NULL;
static void PluginInit_PostLoadCallback()
{
_MESSAGE("NetworkPipe: PluginInit_PostLoadCallback called");
if(!g_Interface->isEditor)
{
_MESSAGE("NetworkPipe: Starting UDP");
udp_server_ptr = new udp_server(io_service, current_port);
//dlineTimer = new boost::asio::deadline_timer(io_service);
udp_thread = new boost::thread(boost::bind(&boost::asio::io_service::run, &io_service));
//
_MESSAGE("NetworkPipe: UDP Started");
NetworkPipeEnable = true;
}
else
{
_MESSAGE("NetworkPipe: Running in editor, not starting UDP");
}
}
Now notice that dlineTimer is commented out above. If I enable that line, it ceases to function. The only way I can get dlineTimer to work with this io_service is to create it during the udp_server::handle_receive_from call. I think this is because that call runs inside the other thread. So, for some reason, the deadline_timer object does not like being created outside the thread it needs to run in.
Now, in order to communicate with the main thread I use concurrent_queue objects, which let me send messages in and out of the thread pretty simply. I could theoretically run the dlineTimer inside its own thread and use the output queue to manage its activity. However, I like the simplicity of having it in the same thread as the udp_server. For instance, the udp_server object keeps track of clients in a vector. When the deadline_timer expires I cycle through the known clients, send them messages, and then restart the timer. This makes my responses independent of the UDP packets that are sent to the server: when packets arrive they are put on a queue for another part of the process, and later data is placed on the output queue, which the deadline_timer handler processes and sends to the appropriate clients.
So my main question is:
How do I more cleanly create the deadline_timer object using the same thread and same io_service as the udp_server object?
Okay, I was thinking about this really stupidly.
First, the deadline_timer needs to live completely inside the thread I want it to run in. That means it needs to be created inside that thread.
Second, I need to define my own function as the thread's entry point rather than binding the thread directly to io_service::run. So I made the entry point the udp_server::start function, and inside start I create my deadline_timer.
So here is the class:
class udp_server
{
public:
udp_server(boost::asio::io_service& io_service, short port)
: io_service_(io_service),
socket_(io_service_, udp::endpoint(boost::asio::ip::address_v4::from_string(current_address), port))//, // udp::v4()
{
// start udp receive
socket_.async_receive_from(
boost::asio::buffer(recv_buf), sender_endpoint_,
boost::bind(&udp_server::handle_receive_from, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
send_timer_ = NULL;
}
~udp_server(){
io_service_.stop();
if(send_timer_){
send_timer_->cancel();
delete send_timer_;
}
}
void start();
void startSendTimer();
void handle_send_to(const boost::system::error_code& error, size_t bytes_recvd);
void handle_receive_from(const boost::system::error_code& error, size_t bytes_recvd);
void handle_send_timer();
void send_timer_restart();
void stop()
{
io_service_.stop();
}
private:
boost::asio::io_service& io_service_;
udp::socket socket_;
udp::endpoint sender_endpoint_;
std::vector<udp::endpoint> clientList;
udp_buffer recv_buf;
boost::asio::deadline_timer* send_timer_;
};
Here are the relevant functions:
void udp_server::start(){
// startup timer
startSendTimer();
// run ioservice
io_service_.run();
}
void udp_server::startSendTimer(){
// start send timer
if(!send_timer_)
send_timer_ = new boost::asio::deadline_timer(io_service_, boost::posix_time::milliseconds(500));
send_timer_restart();
}
void udp_server::send_timer_restart(){
if(send_timer_){
// restart send timer
send_timer_->expires_from_now(boost::posix_time::milliseconds(500));
send_timer_->async_wait(boost::bind(&udp_server::handle_send_timer, this));
}
}
void udp_server::handle_send_timer(){
for(std::vector<udp::endpoint>::iterator itr = clientList.begin(); itr != clientList.end(); ++itr){
socket_.async_send_to(
boost::asio::buffer("heart beat", strlen("heart beat")), *itr,
boost::bind(&udp_server::handle_send_to, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
send_timer_restart();
}
So I was thinking about this all wrong in the first place. I need to define the starting point where the thread begins execution. Then I can create the objects that need to reside in that thread from inside the thread itself.
The udp_server is now started like this:
static void PluginInit_PostLoadCallback()
{
_MESSAGE("NetworkPipe: PluginInit_PostLoadCallback called");
if(!g_Interface->isEditor)
{
_MESSAGE("NetworkPipe: Starting UDP");
udp_server_ptr = new udp_server(io_service, current_port);
udp_thread = new boost::thread(boost::bind(&udp_server::start, udp_server_ptr));
_MESSAGE("NetworkPipe: UDP Started");
NetworkPipeEnable = true;
}
else
{
_MESSAGE("NetworkPipe: Running in editor, not starting UDP");
}
}
The deadline_timer creation occurs within the udp_thread now. Creating the deadline_timer object in the main thread would cause the program to fail to load properly.

How do I wake select() on a socket close?

I am currently using select loop to manage sockets in a proxy. One of the requirements of this proxy is that if the proxy sends a message to the outside server and does not get a response in a certain time, the proxy should close that socket and try to connect to a secondary server. The closing happens in a separate thread, while the select thread blocks waiting for activity.
I am having trouble figuring out how to detect that this particular socket closed, so that I can handle the failure. If I call close() in the other thread, I get an EBADF, but I can't tell which socket closed. I tried to detect the socket through the exception fdset, thinking it would contain the closed socket, but I'm not getting anything returned there. I have also heard that calling shutdown() will send a FIN to the server and receive a FIN back, so that I can close it; but the whole point is that I'm trying to close this socket as a result of not getting a response within the timeout period, so I can't do that, either.
If my assumptions here are wrong, let me know. Any ideas would be appreciated.
EDIT:
In response to the suggestions about using a select timeout: I need to do the closing asynchronously, because the client connecting to the proxy will time out and I can't wait around for the select to return. This would only work if I made the select timeout very small, which would then be constantly polling and wasting resources, which I don't want.
Generally I just mark the socket for closing in the other thread, and then when select() returns from activity or a timeout, I run a cleanup pass, close out all dead connections, and update the fd_set. Doing it any other way opens you up to race conditions: you give up on the connection just as select() finally recognizes some data for it, you close it, and then the other thread tries to process the data that was detected and gets upset to find the connection closed.
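A rough sketch of that mark-and-sweep approach; the conn structure and its field names are illustrative, not from the original post:
#include <unistd.h>
#include <sys/select.h>

/* The timeout thread only sets marked_dead; the select() thread is the
   only one that ever calls close(). */
struct conn {
    int fd;
    int marked_dead;
};

/* Run after select() returns (activity or timeout): close marked sockets
   and rebuild the fd_set for the next select() call. */
static void sweep_dead_connections(struct conn *conns, int *count, fd_set *set)
{
    int kept = 0;
    FD_ZERO(set);
    for (int i = 0; i < *count; ++i) {
        if (conns[i].marked_dead) {
            close(conns[i].fd);
        } else {
            FD_SET(conns[i].fd, set);
            conns[kept++] = conns[i];
        }
    }
    *count = kept;
}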
Oh, and poll() is generally better than select() in terms of not having to copy as much data around.
You cannot free a resource in one thread while another thread is or might be using it. Calling close on a socket that might be in use in another thread will never work right. There will always be potentially disastrous race conditions.
There are two good solutions to your problem:
1. Have the thread that calls select always use a timeout no greater than the longest you're willing to wait to process a timeout. When a timeout occurs, record it somewhere the thread that calls select will notice when it returns. Have that thread do the actual close of the socket in between calls to select.
2. Have the thread that detects the timeout call shutdown on the socket. This will cause select to return, and then have that thread do the close.
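For the second option, a tiny sketch of what the timeout thread would do (fd is whatever descriptor the select() thread is watching):
#include <sys/socket.h>

/* shutdown() keeps the descriptor valid but makes it readable (end of
   stream), so select() wakes up; the select() thread then does the close(). */
static void request_close(int fd)
{
    shutdown(fd, SHUT_RDWR);
}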
How to cope with EBADF on select():
int fopts = 0;
for (int i = 0; i < num_clients; ++i) {
    // F_GETFL fails with EBADF if the descriptor is no longer valid.
    if ((fopts = fcntl(client[i].fd, F_GETFL)) < 0) {
        // call close(), FD_CLR(), and remove i'th element from client list
    }
}
This code assumes you have an array of client structures which have "fd" members for the socket descriptor. The fcntl() call checks whether the socket is still "alive", and if not, we do what is needed to remove the dead socket and its associated client info.
It's hard to comment when seeing only a small part of the elephant, but maybe you are overcomplicating things?
Presumably you have some structure to keep track of each socket and its info (like the time left to receive a reply). You can change the select() loop to use a timeout. Within it, check whether it is time to close the socket; do what you need to do for the close and don't add it to the fd sets the next time around.
If you use poll(2) as suggested in other answers, you can use the POLLNVAL status, which is essentially EBADF, but on a per-file-descriptor basis, not on the whole system call as it is for select(2).
Use a timeout for the select, and if the read-ready/write-ready/had-error sequences are all empty (w.r.t that socket), check if it was closed.
Just run a "test select" on every single socket that might have closed with a zero timeout and check the select result and errno until you found the one that has closed.
The following piece of demo code starts two server sockets on separate threads and creates two client sockets to connect to either server socket. Then it starts another thread, that will randomly kill one of the client sockets after 10 seconds (it will just close it). Closing either client socket causes select to fail with error in the main thread and the code below will now test which of the two sockets has actually closed.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdint.h>
#include <pthread.h>
#include <stdbool.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
static void * serverThread ( void * threadArg )
{
int res;
int connSo;
int servSo;
socklen_t addrLen;
struct sockaddr_in soAddr;
uint16_t * port = threadArg;
servSo = socket(PF_INET, SOCK_STREAM, 0);
assert(servSo >= 0);
memset(&soAddr, 0, sizeof(soAddr));
soAddr.sin_family = AF_INET;
soAddr.sin_port = htons(*port);
// Uncommend line below if your system offers this field in the struct
// and also needs this field to be initialized correctly.
// soAddr.sin_len = sizeof(soAddr);
res = bind(servSo, (struct sockaddr *)&soAddr, sizeof(soAddr));
assert(res == 0);
res = listen(servSo, 10);
assert(res == 0);
addrLen = 0;
connSo = accept(servSo, NULL, &addrLen);
assert(connSo >= 0);
for (;;) {
char buffer[2048];
ssize_t bytesRead;
bytesRead = recv(connSo, buffer, sizeof(buffer), 0);
if (bytesRead <= 0) break;
printf("Received %zu bytes on port %d.\n", bytesRead, (int)*port);
}
free(port);
close(connSo);
close(servSo);
return NULL;
}
static void * killSocketIn10Seconds ( void * threadArg )
{
int * so = threadArg;
sleep(10);
printf("Killing socket %d.\n", *so);
close(*so);
free(so);
return NULL;
}
int main ( int argc, const char * const * argv )
{
int res;
int clientSo1;
int clientSo2;
int * socketArg;
uint16_t * portArg;
pthread_t killThread;
pthread_t serverThread1;
pthread_t serverThread2;
struct sockaddr_in soAddr;
// Create a server socket at port 19500
portArg = malloc(sizeof(*portArg));
assert(portArg != NULL);
*portArg = 19500;
res = pthread_create(&serverThread1, NULL, &serverThread, portArg);
assert(res == 0);
// Create another server socket at port 19501
portArg = malloc(sizeof(*portArg));
assert(portArg != NULL);
*portArg = 19501;
res = pthread_create(&serverThread2, NULL, &serverThread, portArg);
assert(res == 0);
// Create two client sockets, one for 19500 and one for 19501
// and connect both to the server sockets we created above.
clientSo1 = socket(PF_INET, SOCK_STREAM, 0);
assert(clientSo1 >= 0);
clientSo2 = socket(PF_INET, SOCK_STREAM, 0);
assert(clientSo2 >= 0);
memset(&soAddr, 0, sizeof(soAddr));
soAddr.sin_family = AF_INET;
soAddr.sin_port = htons(19500);
res = inet_pton(AF_INET, "127.0.0.1", &soAddr.sin_addr);
assert(res == 1);
// Uncommend line below if your system offers this field in the struct
// and also needs this field to be initialized correctly.
// soAddr.sin_len = sizeof(soAddr);
res = connect(clientSo1, (struct sockaddr *)&soAddr, sizeof(soAddr));
assert(res == 0);
soAddr.sin_port = htons(19501);
res = connect(clientSo2, (struct sockaddr *)&soAddr, sizeof(soAddr));
assert(res == 0);
// We want either client socket to be closed locally after 10 seconds.
// Which one is random, so try running test app multiple times.
socketArg = malloc(sizeof(*socketArg));
// srandomdev() is BSD/macOS-specific; elsewhere seed with e.g. srandom(time(NULL)).
srandomdev();
*socketArg = (random() % 2 == 0 ? clientSo1 : clientSo2);
res = pthread_create(&killThread, NULL, &killSocketIn10Seconds, socketArg);
assert(res == 0);
for (;;) {
int ndfs;
int count;
fd_set readSet;
// ndfs must be the highest socket number + 1
ndfs = (clientSo2 > clientSo1 ? clientSo2 : clientSo1);
ndfs++;
FD_ZERO(&readSet);
FD_SET(clientSo1, &readSet);
FD_SET(clientSo2, &readSet);
// No timeout, that means select may block forever here.
count = select(ndfs, &readSet, NULL, NULL, NULL);
// Without a timeout count should never be zero.
// Zero is only returned if select ran into the timeout.
assert(count != 0);
if (count < 0) {
int error = errno;
printf("Select terminated with error: %s\n", strerror(error));
if (error == EBADF) {
fd_set closeSet;
struct timeval atonce;
FD_ZERO(&closeSet);
FD_SET(clientSo1, &closeSet);
memset(&atonce, 0, sizeof(atonce));
count = select(clientSo1 + 1, &closeSet, NULL, NULL, &atonce);
if (count == -1 && errno == EBADF) {
printf("Socket 1 (%d) closed.\n", clientSo1);
break; // Terminate test app
}
FD_ZERO(&closeSet);
FD_SET(clientSo2, &closeSet);
// Note: Standard requires you to re-init timeout for every
// select call, you must never rely that select has not changed
// its value in any way, not even if its all zero.
memset(&atonce, 0, sizeof(atonce));
count = select(clientSo2 + 1, &closeSet, NULL, NULL, &atonce);
if (count == -1 && errno == EBADF) {
printf("Socket 2 (%d) closed.\n", clientSo2);
break; // Terminate test app
}
}
}
}
// Be a good citizen, close all sockets, join all threads
close(clientSo1);
close(clientSo2);
pthread_join(killThread, NULL);
pthread_join(serverThread1, NULL);
pthread_join(serverThread2, NULL);
return EXIT_SUCCESS;
}
Sample output for running this test code twice:
$ ./sockclose
Killing socket 3.
Select terminated with error: Bad file descriptor
Socket 1 (3) closed.
$ ./sockclose
Killing socket 4.
Select terminated with error: Bad file descriptor
Socket 1 (4) closed.
However, if your system supports poll(), I would strongly advise you to consider using this API instead of select(). Select is a rather ugly legacy API from the past, only left there for backward compatibility with existing code. Poll has a much better interface for this task, and it has an extra flag to directly signal you that a socket has been closed locally: POLLNVAL will be set in revents if the socket has been closed, regardless of which flags you requested in events, since POLLNVAL is an output-only flag and is ignored if set in events. If the socket was not closed locally but the remote server has just closed the connection, the flag POLLHUP will be set in revents (also an output-only flag). Another advantage of poll is that the timeout is simply an int value (milliseconds, fine-grained enough for real network sockets) and that there are no limitations on the number of sockets that can be monitored or on their numeric value range.
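To make that concrete, here is a small sketch of the poll()-based check for the two client sockets from the example above (the names and the 1-second timeout are illustrative):
#include <poll.h>
#include <stdio.h>

/* Watches two descriptors and distinguishes "closed locally" (POLLNVAL)
   from "peer closed the connection" (POLLHUP). */
static void watch_two_sockets(int fd1, int fd2)
{
    struct pollfd fds[2];
    fds[0].fd = fd1; fds[0].events = POLLIN;
    fds[1].fd = fd2; fds[1].events = POLLIN;
    for (;;) {
        int n = poll(fds, 2, 1000);   /* timeout in milliseconds */
        if (n <= 0)
            continue;                 /* 0 = timeout, -1 = error */
        for (int i = 0; i < 2; ++i) {
            if (fds[i].revents & POLLNVAL)
                printf("fd %d was closed locally\n", fds[i].fd);
            else if (fds[i].revents & POLLHUP)
                printf("fd %d: peer closed the connection\n", fds[i].fd);
            else if (fds[i].revents & POLLIN)
                printf("fd %d is readable\n", fds[i].fd);
        }
    }
}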
