I want a TCP server that waits for clients to connect, and as soon as they do, sends them some data continuously. I also want the server to notice if a client disappears suddenly, without a trace, and to remove them from the list of open sockets.
My code looks like this:
#!/usr/bin/env python3
import select, socket
# Listen Port
LISTEN_PORT = 1234
# Create socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Setup the socket
server.setblocking(0)
server.bind(('0.0.0.0', LISTEN_PORT))
server.listen(5)
# Make socket reusable
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Setup TCP Keepalive
server.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 3)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
# Tell user we are listening
print("Listening on port %s" % LISTEN_PORT)
inputs = [server]
outputs = []
while True:
    # Detecting clients that disappeared does NOT work when we ARE
    # watching if any sockets are writable
    #readable, writable, exceptional = select.select(inputs, outputs, inputs)
    # Detecting clients that disappeared works when we aren't watching
    # if any sockets are writable
    readable, writable, exceptional = select.select(inputs, [], inputs)
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            print("New client connected: %s" % (client_address,))
            connection.setblocking(0)
            inputs.append(connection)
            outputs.append(connection)
        else:
            try:
                data = s.recv(1024)
            except TimeoutError:
                print("Client dropped out")
                inputs.remove(s)
                if s in outputs:
                    outputs.remove(s)
                continue
            if data:
                print("Data from %s: %s" % (s.getpeername(), data.decode('ascii').rstrip()))
            else:
                print("%s disconnected" % (s.getpeername(),))
    for s in writable:
        s.send(b".")
As you can see, I'm using TCP Keepalive to allow me to see if a client has disappeared. The problem I'm seeing is this:
When I'm NOT having select() watch for writable sockets: after the client disappears, select() stops blocking once the TCP keepalive timeout expires, and the socket shows up in the readable list, so I can remove the disappeared client from the input and output lists (which is good).
When I AM having select() watch for writable sockets: after the client disappears, select() does NOT stop blocking once the TCP keepalive timeout expires, and the client socket never ends up in the readable or writable list, so it never gets removed.
I'm using telnet from a different machine as a client. To replicate a client disappearing, I'm using iptables to block the client from talking to the server while the client is connected.
Anyone know what's going on?
As the comments to your question have mentioned, the TCP_KEEPALIVE stuff won't make any difference for your use-case. TCP_KEEPALIVE is a mechanism for notifying a program when the peer on the other side of its TCP connection has gone away on an otherwise idle TCP connection. Since you are regularly sending data on the TCP connection(s), the TCP_KEEPALIVE functionality is never invoked (or needed) because the act of sending data over the connection is already enough, by itself, to cause the TCP stack to recognize ASAP when the remote client has gone away.
That said, I modified/simplified your example server code to get it to work (as correctly as possible) on my machine (a Mac, FWIW). What I did was:
Moved the socket.setsockopt(SO_REUSEADDR) to before the bind() line, so that bind() won't fail after you kill and then restart the program.
Changed the select() call to watch for writable-sockets.
Added exception-handling around the send() calls.
Moved the remove-socket-from-lists code into a separate RemoveSocketFromLists() function, to avoid redundant code.
Note that the expected behavior for TCP is that if you quit a client gently (e.g. by control-C'ing it, or killing it via Task Manager, or otherwise causing it to exit in such a way that its host TCP stack is still able to communicate with the server to tell the server that the client is dead) then the server should recognize the dead client more or less immediately.
If, on the other hand, the client's network connectivity is disconnected suddenly (e.g. because someone yanked out the client computer's Ethernet or power cable) then it may take the server program several minutes to detect that the client has gone away, and that's expected behavior, since there's no way for the server to tell (in this situation) whether the client is dead or not. (i.e. it doesn't want to kill a viable TCP connection simply because a router dropped a few TCP packets, causing a temporary interruption in communications to a still-alive client)
If you want to try to drop the clients quickly in that scenario, you could try requiring the clients to send() a bit of dummy-data to the server every second or so. The server could keep track of the timestamp of when it last received any data from each client, and force-close any clients that it hasn't received any data from in "too long" (for whatever your idea of too long is). This would more or less work, although it risks false-positives (i.e. dropping clients that are still alive, just slow or suffering from packet-loss) if you set your timeout-threshold too low.
#!/usr/bin/env python3
import select, socket
# Listen Port
LISTEN_PORT = 1234
# Create socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Make socket reusable
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Setup the socket
server.setblocking(0)
server.bind(('0.0.0.0', LISTEN_PORT))
server.listen(5)
# Tell user we are listening
print("Listening on port %s" % LISTEN_PORT)
inputs = [server]
outputs = []
# Removes the specified socket from every list in the list-of-lists
def RemoveSocketFromLists(s, listOfLists):
    for nextList in listOfLists:
        if s in nextList:
            nextList.remove(s)
while True:
    # Watch for both readable and writable sockets; dead connections now
    # show up as exceptions raised by recv()/send() below
    readable, writable, exceptional = select.select(inputs, outputs, [])
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            print("New client connected: %s" % (client_address,))
            connection.setblocking(0)
            inputs.append(connection)
            outputs.append(connection)
        else:
            try:
                data = s.recv(1024)
                print("Data from %s: %s" % (s.getpeername(), data.decode('ascii').rstrip()))
            except:
                print("recv() reports that %s disconnected" % s)
                RemoveSocketFromLists(s, [inputs, outputs, writable])
    for s in writable:
        try:
            numBytesSent = s.send(b".")
        except:
            print("send() reports that %s disconnected" % s)
            RemoveSocketFromLists(s, [inputs, outputs])
I need to make an Arduino Uno R3 with a Tinysine GSM Shield (SIM900 module) talk to a NodeJS server socket using TCP/IP sockets, so the embedded system is a TCP client. I need the Arduino to send a message and receive the answer, using the received data to blink a different color LED. The TCP socket is working: I can send the message and process it in my server socket, but I can't receive the answer (socket.write) on the embedded side (actually, I receive some fuzzy, varying characters).
My server socket works fine; using Hercules as a TCP client, I could complete the whole process.
I am using the SIM900 and InetGSM libraries with AT commands for the TCP connection (initially I don't want to use the AT HTTP connection because it would change my system).
How can I receive a legible message from my server socket? Thanks.
I solved it!
When I send the AT+CIPSEND command, what I receive is a sequence of characters containing both the AT response to the command and the data sent from my server.
So what I need to do is read the incoming characters one by one and pick out the answer at the right position, like the example below.
For example:
// I'm waiting for the char '1'
// After sending the message with AT commands ...
char answer;
for (int i = 0; i < 15; i++) { // 15 is a random limit value that worked for me, I don't know why
    answer = (char)gsm.read();
    if (answer == '1') {
        Serial.println("I found the answer!");
    }
}
I have a node.js server application that listens to raw TCP connections using the net module.
In order to keep track of network latency, I want to access the TSval and TSecr TCP fields. How exactly should it be done and where? The server layout is nothing spectacular:
var tcps = net.createServer(function(c) {
    /* socket initialization */
    ...
    c.on('data', function(chunk) {
        /* message processing */
        ...
    });
});
tcps.listen(serverPort);
See #mscdex's comment:
That kind of low-level access to TCP packets is not available in node.js
Is there any way to check whether a TCP socket in NodeJS has been connected, as opposed to still being in a transient connecting state?
I'm looking for a property rather than an emitted event.
Is it even possible?
The manual shows that an event is emitted when the connection is established, and that is how you should handle it the Node.js way (using non-blocking, asynchronous I/O).
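For instance, a minimal sketch of that event-based approach (the host and port here are placeholders, not anything from your setup) might look like:
var net = require('net');
// Hypothetical host/port, just for illustration.
var socket = net.createConnection(80, 'localhost');
// The 'connect' event fires once the TCP handshake has completed.
socket.on('connect', function() {
    console.log('Connected to ' + socket.remoteAddress + ':' + socket.remotePort);
});
socket.on('error', function(err) {
    console.log('Connection failed: ' + err.message);
});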
There is, though, an undocumented way to check: the socket object has a _connecting boolean property which, if false, means the socket has either already connected or failed.
Example :
var net = require('net');
//Connect to localhost:80
var socket = net.createConnection(80);
if (socket._connecting === true) {
    console.log("The TCP connection is not established yet");
}
I'm testing communication between two NodeJS instances over TCP, using the net module.
Since TCP is a byte stream and doesn't preserve message boundaries (socket.write()), I'm wrapping each message in a string like msg "{ json: 'encoded' }"; in order to handle messages individually (otherwise, I'd receive packets with a random number of concatenated messages).
I'm running two NodeJS instances (server and client) on a CentOS 6.5 VirtualBox VM with bridged networking and a Core i3-based host machine. The test consists of the client emitting requests to the server and waiting for the responses:
Client connects to the server.
Client outputs current timestamp (Date.now()).
Client emits n requests.
Server replies to n requests.
Client increments a counter on every response.
When finished, client outputs the current timestamp.
The code is quite simple:
Server
var net = require('net');
var server = net.createServer(function(socket) {
    socket.setNoDelay(true);
    socket.on('data', function(packet) {
        // Split packet in messages.
        var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            // Emit response:
            socket.write('msg "PONG";');
        }
    });
});
server.listen(9999);
Client
var net = require('net');
var WSClient = new net.Socket();
WSClient.setNoDelay(true);
WSClient.connect(9999, 'localhost', function() {
    var req = 0;
    var res = 0;
    console.log('Start:', Date.now());
    WSClient.on('data', function(packet) {
        var messages = packet.toString("utf-8").match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            res++;
            if (res === 1000) console.log('End:', Date.now());
        }
    });
    // Emit requests:
    for (req = 0; req <= 1000; req++) WSClient.write('msg "PING";');
});
My results are:
With 1 request: 9 - 24 ms
With 1000 requests: 478 - 512 ms
With 10000 requests: 5021 - 5246 ms
My pings (ICMP) to localhost are between 0.1 and 0.6 ms. I don't have intense network traffic or CPU usage (the VM is running SSH, FTP, Apache, Memcached, and Redis).
Is this normal for NodeJS and TCP, or is it just my CentOS VM or my low-performance host? Should I move to another platform like Java or a native C/C++ server?
I think that a 15 ms delay (average) per request on localhost is not acceptable for my project.
Wrapping the messages in some text and searching for a Regex match isn't enough.
The net.Server and net.Socket interfaces have a raw TCP stream as an underlying data source. The data event will fire whenever the underlying TCP stream has data available.
The problem is, you don't control the TCP stack. The timing of it firing data events has nothing to do with the logic of your code. So you have no guarantee that the data event that drives your listeners has exactly one, less than one, more than one, or any number and some remainder, of messages being sent. In fact, you can pretty much guarantee that the underlying TCP stack WILL break up your data into chunks. And the listener only fires when a chunk is available. Your current code has no shared state between data events.
You only mention latency, but I expect if you check, you will also find that the count of messages received (on both ends) is not what you expect. That's because any partial messages that make it across will be lost completely. If the TCP stream sends half a message at the end of chunk 1, and the remainder in chunk 2, the split message will be totally dropped.
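To make that concrete, here is a rough sketch of how the server side could keep shared state across data events, so a partial frame is carried over to the next chunk instead of being dropped. It reuses the question's own msg "...": framing and assumes a frame never contains an embedded '";'; it's an illustration, not a drop-in fix:
var net = require('net');
var server = net.createServer(function(socket) {
    var pending = '';                 // carried-over bytes between 'data' events
    var frameRegex = /msg "[^"]*";/g; // the question's own framing
    socket.on('data', function(chunk) {
        pending += chunk.toString('utf-8');
        var consumed = 0;
        var match;
        frameRegex.lastIndex = 0;
        while ((match = frameRegex.exec(pending)) !== null) {
            // match[0] is one complete frame; reply to it.
            socket.write('msg "PONG";');
            consumed = frameRegex.lastIndex;
        }
        // Anything after the last complete frame is a partial message:
        // keep it for the next 'data' event instead of dropping it.
        pending = pending.slice(consumed);
    });
});
server.listen(9999);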
The easy and robust way is to use a messaging protocol like ØMQ. You will need to use it on both endpoints. It takes care of framing the TCP stream into atomic messages.
If for some reason you will be connecting to or receiving traffic from external sources, they will probably use something like a length header. Then what you want to do is create a Transform stream that buffers incoming traffic and only emits data when the amount identified in the header has arrived.
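As a sketch of that idea (the 4-byte big-endian length header here is an assumption; a real peer may use a different header layout), such a Transform stream could look roughly like this:
var stream = require('stream');
var util = require('util');
// Re-frames a raw byte stream into whole messages, assuming each message
// is preceded by a 4-byte big-endian length header.
function FrameDecoder() {
    stream.Transform.call(this, { readableObjectMode: true });
    this.buffer = Buffer.alloc(0);
}
util.inherits(FrameDecoder, stream.Transform);
FrameDecoder.prototype._transform = function(chunk, encoding, callback) {
    this.buffer = Buffer.concat([this.buffer, chunk]);
    // Push one complete message at a time; stop when the buffered data
    // is shorter than the next announced frame.
    while (this.buffer.length >= 4) {
        var length = this.buffer.readUInt32BE(0);
        if (this.buffer.length < 4 + length) break;
        this.push(this.buffer.slice(4, 4 + length));
        this.buffer = this.buffer.slice(4 + length);
    }
    callback();
};
// Usage sketch (handleMessage is a placeholder):
// socket.pipe(new FrameDecoder()).on('data', handleMessage);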
Have you done a network dump? You may be creating network congestion due to the overhead introduced by enabling the 'no delay' socket property. This property will send data down to the TCP stack as soon as possible, and if you have very small chunks of information it will lead to many TCP packets with small payloads, thus decreasing transmission efficiency and eventually having TCP pause the transmission due to congestion. If you want to use 'no delay' for your sockets, try increasing your receiving socket buffer so that data is pulled from the TCP stack faster. Let us know if that helped.
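For example, one way to avoid the many-tiny-payloads pattern described above is to batch the requests into a single write() instead of a thousand separate 11-byte writes (a sketch reusing the WSClient and framing from the question; batching is a different mitigation than enlarging the receive buffer):
// Build all the request frames up front and send them in one write(),
// so the TCP stack can coalesce them into far fewer packets.
var batch = '';
for (var req = 0; req < 1000; req++) {
    batch += 'msg "PING";';
}
WSClient.write(batch);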