I tried to make a script that connects to a server with Python's socket module.
# connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# in this case socket.socket() is clearer
connection = socket.socket()
connection.connect((ip, port))
But when the IP or the port is incorrect, it tries to connect forever.
So how can I set a time limit to wait for the server to respond/connect? For example, if the server isn't reachable after 5 seconds, the connection attempt should be aborted with an error saying the server is unreachable/offline.
To specify how long to wait for the server, you can simply do this:
connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection.settimeout(5) # wait at most 5 seconds for the connection attempt
connection.connect((ip, port))
connection.settimeout(None) # back to blocking mode once connected (important)
In this example, connect() waits up to 5 seconds and then raises an exception because the time is up. You can simply catch this exception in a try statement with "except socket.timeout:".
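A minimal sketch of that pattern (ip and port here are placeholder values):
import socket

ip, port = '203.0.113.10', 12345   # placeholder values

connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection.settimeout(5)           # give the server at most 5 seconds to answer
try:
    connection.connect((ip, port))
except socket.timeout:
    # the server did not respond within 5 seconds
    print("Server is unreachable/offline")
    connection.close()
else:
    connection.settimeout(None)    # back to blocking mode for normal use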
Honestly, this is lifted more or less from the Tutorials Point article on Python networking: https://www.tutorialspoint.com/python/python_networking.html
Server program:
import socket                      # Import socket module
dd = "You connected successfully to the server"
dd1 = bytes(dd, 'UTF-8')
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # Create a socket object
host = socket.gethostname()        # Get local machine name
port = 12345                       # Reserve a port for your service.
s.bind((host, port))               # Bind to the port
s.listen(5)                        # Now wait for client connection.
while True:
    c, addr = s.accept()           # Establish connection with client.
    print('Got connection from', addr)
    c.send(dd1)
    c.close()                      # Close the connection
Client side:
import socket                      # Import socket module
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # Create a socket object
host = '80.78.xxx.xxx'             # Public IP address of the server (masked)
port = 12345                       # Reserve a port for your service.
s.connect((host, port))
print(s.recv(102004))
s.close()                          # Close the socket when done
So, I'm able to run it on the same PC and get a result. If my understanding of the code is correct, host means localhost, but it should also work when I try to access the server using its IP address. It doesn't. Please help. It returns error 10060 (connection timed out):
https://help.globalscape.com/help/archive/cuteftp6/socket_error_=__10060.htm
I forwarded ports 12340 to 12350 to my machine's IP address on the router and disabled the firewall entirely. Yet this still happens.
A similar error happened when I tried to put a website up using Node.js: it works perfectly on localhost but not when I try to access it using the public IP address. I'm very confused and would be glad if you pointed me to any literature for a deeper understanding.
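For reference, the difference between binding to the hostname (as the tutorial server above does) and binding to every interface looks like this; this is only a sketch, not a verified fix for the 10060 error:
import socket

port = 12345
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tutorial style: bind to whatever the local hostname resolves to
# s.bind((socket.gethostname(), port))
# alternative: bind to every interface so connections from other machines can reach it
s.bind(('0.0.0.0', port))
s.listen(5)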
I want a TCP server that waits for clients to connect, and as soon as they do, sends them some data continuously. I also want the server to notice if a client disappears suddenly, without a trace, and to remove them from the list of open sockets.
My code looks like this:
#!/usr/bin/env python3
import select, socket
# Listen Port
LISTEN_PORT = 1234
# Create socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Setup the socket
server.setblocking(0)
server.bind(('0.0.0.0', LISTEN_PORT))
server.listen(5)
# Make socket reusable
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Setup TCP Keepalive
server.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 3)
server.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
# Tell user we are listening
print("Listening on port %s" % LISTEN_PORT)
inputs = [server]
outputs = []
while True:
    # Detecting clients that disappeared does NOT work when we ARE
    # watching if any sockets are writable
    #readable, writable, exceptional = select.select(inputs, outputs, inputs)
    # Detecting clients that disappeared works when we aren't watching
    # if any sockets are writable
    readable, writable, exceptional = select.select(inputs, [], inputs)
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            print("New client connected: %s" % (client_address,))
            connection.setblocking(0)
            inputs.append(connection)
            outputs.append(connection)
        else:
            try:
                data = s.recv(1024)
            except TimeoutError:
                print("Client dropped out")
                inputs.remove(s)
                if s in outputs:
                    outputs.remove(s)
                continue
            if data:
                print("Data from %s: %s" % (s.getpeername(), data.decode('ascii').rstrip()))
            else:
                print("%s disconnected" % (s.getpeername(),))
    for s in writable:
        s.send(b".")
As you can see, I'm using TCP Keepalive to allow me to see if a client has disappeared. The problem I'm seeing is this:
When I'm NOT having select() watch for writable sockets and the client disappears, select() stops blocking after the TCP keepalive timeout expires and the socket shows up in the readable list, so I can remove the client that disappeared from inputs and outputs (which is good).
When I AM having select() watch for writable sockets and the client disappears, select() does NOT stop blocking after the TCP keepalive timeout expires, and the client socket never ends up in the readable or writable list, so it never gets removed.
I'm using telnet from a different machine as a client. To replicate a client disappearing, I'm using iptables to block the client from talking to the server while the client is connected.
Anyone know what's going on?
As the comments to your question have mentioned, the TCP_KEEPALIVE stuff won't make any difference for your use-case. TCP_KEEPALIVE is a mechanism for notifying a program when the peer on the other side of its TCP connection has gone away on an otherwise idle TCP connection. Since you are regularly sending data on the TCP connection(s), the TCP_KEEPALIVE functionality is never invoked (or needed) because the act of sending data over the connection is already enough, by itself, to cause the TCP stack to recognize ASAP when the remote client has gone away.
That said, I modified/simplified your example server code to get it to work (as correctly as possible) on my machine (a Mac, FWIW). What I did was:
Moved the socket.setsockopt(SO_REUSEADDR) to before the bind() line, so that bind() won't fail after you kill and then restart the program.
Changed the select() call to watch for writable-sockets.
Added exception-handling around the send() calls.
Moved the remove-socket-from-lists code into a separate RemoveSocketFromLists() function, to avoid redundant code.
Note that the expected behavior for TCP is that if you quit a client gently (e.g. by control-C'ing it, or killing it via Task Manager, or otherwise causing it to exit in such a way that its host TCP stack is still able to communicate with the server to tell the server that the client is dead) then the server should recognize the dead client more or less immediately.
If, on the other hand, the client's network connectivity is disconnected suddenly (e.g. because someone yanked out the client computer's Ethernet or power cable) then it may take the server program several minutes to detect that the client has gone away, and that's expected behavior, since there's no way for the server to tell (in this situation) whether the client is dead or not. (i.e. it doesn't want to kill a viable TCP connection simply because a router dropped a few TCP packets, causing a temporary interruption in communications to a still-alive client)
If you want to try to drop the clients quickly in that scenario, you could try requiring the clients to send() a bit of dummy-data to the server every second or so. The server could keep track of the timestamp of when it last received any data from each client, and force-close any clients that it hasn't received any data from in "too long" (for whatever your idea of too long is). This would more or less work, although it risks false-positives (i.e. dropping clients that are still alive, just slow or suffering from packet-loss) if you set your timeout-threshold too low.
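A rough sketch of that timestamp-tracking idea, as helpers that could sit next to the server loop below (HEARTBEAT_TIMEOUT, note_activity and drop_silent_clients are names I made up here, and the 10-second threshold is arbitrary):
import time

HEARTBEAT_TIMEOUT = 10.0   # seconds of silence before we give up on a client (arbitrary)
last_heard = {}            # maps a client socket to the time of its last received data

def note_activity(s):
    # call this right after a successful recv() from client socket s
    last_heard[s] = time.time()

def drop_silent_clients(inputs, outputs):
    # call this once per select() loop; closes and forgets clients that went quiet
    now = time.time()
    for s in list(last_heard):
        if now - last_heard[s] > HEARTBEAT_TIMEOUT:
            print("No data from %s for %.0f seconds, dropping it" % (s, HEARTBEAT_TIMEOUT))
            s.close()
            if s in inputs:
                inputs.remove(s)
            if s in outputs:
                outputs.remove(s)
            del last_heard[s]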
#!/usr/bin/env python3
import select, socket
# Listen Port
LISTEN_PORT = 1234
# Create socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Make socket reusable
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Setup the socket
server.setblocking(0)
server.bind(('0.0.0.0', LISTEN_PORT))
server.listen(5)
# Tell user we are listening
print("Listening on port %s" % LISTEN_PORT)
inputs = [server]
outputs = []
# Removes the specified socket from every list in the list-of-lists
def RemoveSocketFromLists(s, listOfLists):
    for nextList in listOfLists:
        if s in nextList:
            nextList.remove(s)
while True:
    # Now also watching for writable sockets; send() errors are handled below
    readable, writable, exceptional = select.select(inputs, outputs, [])
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            print("New client connected: %s" % (client_address,))
            connection.setblocking(0)
            inputs.append(connection)
            outputs.append(connection)
        else:
            try:
                data = s.recv(1024)
                print("Data from %s: %s" % (s.getpeername(), data.decode('ascii').rstrip()))
            except:
                print("recv() reports that %s disconnected" % s)
                RemoveSocketFromLists(s, [inputs, outputs, writable])
    for s in writable:
        try:
            numBytesSent = s.send(b".")
        except:
            print("send() reports that %s disconnected" % s)
            RemoveSocketFromLists(s, [inputs, outputs])
Aiohttp has great websocket support:
from aiohttp import web

# dictionary where the keys are user ids and the values are websocket connections
users_websock = {}

class WebSocket(web.View):
    async def get(self):
        # creating websocket instance
        ws = web.WebSocketResponse()
        await ws.prepare(self.request)
        users_websock['some_user_id'] = ws
        async for msg in ws:
            # handle incoming messages here
            pass
        return ws
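For context, a class-based view like this is typically registered on the application roughly like so (the '/ws' path and the port are assumptions for illustration):
from aiohttp import web

app = web.Application()
# '*' lets the class-based view dispatch on the HTTP method (GET here)
app.router.add_route('*', '/ws', WebSocket)
web.run_app(app, port=8080)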
And now, when I need to send a message to a specific user:
# using ws.send_str() to send data via websocket connection
ws = users_websock['user_id']
ws.send_str('Some data')
That's good as long as I have only one server worker. But in production we always have multiple workers, and of course every worker has its own users_websock dictionary.
So the actual problem occurs when we need to send a message from worker 1 to a user connected to worker 2.
And the question is: how and where should I store the list of websocket connections so that each worker can get the connection it needs?
Maybe I could store some connection id in the DB and use it to recreate the websocket instance in any worker?
Or there is another way to approach this?
aiohttp doesn't provide interprocess/internode communication channels; that is another big and interesting area, not related to HTTP processing itself.
You need channels between your workers. That can be Redis pub/sub, RabbitMQ, or even websockets, depending on your library choice.
Or tools like https://crossbar.io/
There is no single solution that covers all possible cases.
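One common pattern, sketched below under the assumption of Redis pub/sub and the redis-py asyncio client (the channel name and message format are invented here): every worker subscribes to a shared channel, and whichever worker holds the target user's local websocket forwards the message.
import json
import redis.asyncio as redis

# each worker keeps only its *own* local connections here
users_websock = {}

async def publish_to_user(user_id, text):
    # any worker can call this; the message travels through Redis
    r = redis.Redis()
    await r.publish('ws_messages', json.dumps({'user_id': user_id, 'text': text}))

async def redis_listener():
    # every worker runs this task and forwards messages to its own local sockets
    r = redis.Redis()
    pubsub = r.pubsub()
    await pubsub.subscribe('ws_messages')
    async for message in pubsub.listen():
        if message['type'] != 'message':
            continue
        payload = json.loads(message['data'])
        ws = users_websock.get(payload['user_id'])
        if ws is not None:
            await ws.send_str(payload['text'])
The same idea works with RabbitMQ or any other broker; the key point is that messages pass through a shared channel rather than living inside one worker.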
I am able to send messages from the clients to the server and also send a reply from the server to the client.
I am interested in knowing how I can exchange messages explicitly between 2 clients. Unlike a chatroom where all messages are broadcast to all clients, I want to send a message to a single target client.
You can only send messages from the client to the server:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = '192.168.1.2'   # IP of the server
port = 4444            # example port
s.connect((host, port))
r = input('[+] Enter Message to Send : ')
s.send(r.encode())     # encode the string to bytes before sending
The server then needs to listen on that socket (for example with netcat, or with its own socket listener); that's all you can do directly. To reach another client, the server has to pass the message along.
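A minimal sketch of a server that relays a message to one specific target client (the name registration and the "target:message" line format below are invented for illustration):
import socket, threading

clients = {}   # client name -> connected socket

def handle(conn):
    # the first line a client sends is its name; after that, lines look like "target:message"
    name = conn.recv(1024).decode().strip()
    clients[name] = conn
    while True:
        data = conn.recv(1024)
        if not data:
            break
        target, _, message = data.decode().partition(':')
        if target in clients:
            clients[target].send(('%s says: %s' % (name, message)).encode())
    del clients[name]
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 4444))
server.listen(5)
while True:
    conn, addr = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()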