force socket to timeout in python - multithreading

I'm using a socket to listen on a port in a while loop, with a 5 second timeout set by socket.settimeout(). But I have another method, which sets the listening port, and when it is called with a new port, I want to force the socket to time out so that I can reinitialise the socket and set the appropriate port inside the while loop. Is there a way to do that?
The socket is inside a subclass of threading.Thread
PS. Since this is my second day with Python, any other suggestions regarding any part would be most welcome. Thank you
EDIT:
I almost forgot. I want to reinitialise the socket when the setoutboundport method is called.
EDIT2
Man, the whole code is messed up. I re-examined everything and it's really wrong for what I want to achieve. Just focus on the main question: timing out the socket.
import threading
import socket
import ResponseSender
import sys
import traceback


class Relay(threading.Thread):
    def __init__(self, inboundport, outboundip, outboundport, ttl=60):
        super(Relay, self).__init__()
        self.inboundport = inboundport
        self.outboundip = outboundip
        self.outboundport = outboundport
        self.ttl = ttl
        self.serverOn = True
        self.firstpacket = True
        self.sibling = None
        self.newoutboundport = 0
        self.listener = None
    # ENDOF: __init__

    def stop(self):
        self.serverOn = False
    # ENDOF: stop

    def setsiblingrelay(self, relay):
        self.sibling = relay
    # ENDOF: setsiblingrelay

    def setoutboundport(self, port):
        self.newoutboundport = port
    # ENDOF: setoutboundport

    def run(self):
        s = None
        try:
            while self.serverOn:
                if not s:
                    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                if self.outboundport != self.newoutboundport:
                    try:
                        s.close()
                    except:
                        pass
                s.settimeout(4)
                s.bind(('', self.inboundport))
                print("Relay started :{0} => {1}:{2}".format(self.inboundport, self.outboundip, self.outboundport))
                print("---------------------------------------- LISTENING FOR INCOMING PACKETS")
                data, address = s.recvfrom(32768)
                print("Received {0} from {1}:{2} => sending to {3}:{4}"
                      .format(data, address[0], address[1], self.outboundip, self.outboundport))
                ResponseSender.sendresponse(address[0], address[1], data)
        except socket.timeout:
            pass
        except:
            print("Error: {0}".format(traceback.format_exception(*sys.exc_info())))
    # ENDOF: run
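No answer was posted in this thread, but a minimal sketch of one common approach (hypothetical names, assuming a UDP listener like the code above) is to keep the timeout short and re-check the requested port on every timeout, recreating and rebinding the socket only when it has changed:

import socket
import threading


class PortSwitchingListener(threading.Thread):
    """Listens on a UDP port; the port can be changed from another thread."""

    def __init__(self, port):
        super(PortSwitchingListener, self).__init__()
        self.port = port
        self.new_port = port
        self.running = True

    def setport(self, port):
        self.new_port = port          # picked up on the next loop iteration

    def stop(self):
        self.running = False

    def run(self):
        sock = None
        while self.running:
            if sock is None or self.port != self.new_port:
                if sock is not None:
                    sock.close()
                self.port = self.new_port
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                sock.settimeout(1.0)  # short timeout: re-check flags often
                sock.bind(('', self.port))
            try:
                data, address = sock.recvfrom(32768)
            except socket.timeout:
                continue              # no packet: loop back and re-check port/flag
            print("got {0!r} from {1}".format(data, address))
        if sock is not None:
            sock.close()

With this structure, setport() just records the new value and the listening thread notices it within one timeout interval, so there is no need to force the blocked recvfrom() to end early.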

Related

Wait for message using python's async protocol

Intro:
I am working on a TCP server that receives events over TCP. For this task, I decided to use the asyncio Protocol libraries (yeah, maybe I should have used Streams); the reception of events works fine.
Problem:
I need to be able to connect to the clients, so I create another "server" used to look up all my connected clients, and after finding the correct one, I use the Protocol class transport object to send a message and try to grab the response by reading a buffer variable that always holds the last received message.
My problem is that after sending the message, I don't know how to wait for the response, so I always get the previous message from the buffer.
I will try to simplify the code to illustrate (please, keep in mind that this is an example, not my real code):
import asyncio
import time

CONN = set()


class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        CONN.add(self)

    def data_received(self, data):
        self.buffer = data
        # DO OTHER STUFF
        print(data)

    def connection_lost(self, exc=None):
        CONN.remove(self)


class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        # Get first value just to illustrate
        self.client = next(iter(CONN))

    def data_received(self, data):
        # Forward the message to the client
        self.client.transport.write(data)
        # wait a fraction of a second
        time.sleep(0.2)
        # forward the response of the client
        self.transport.write(self.client.buffer)


def main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(
        loop.create_server(protocol_factory=ServerProtocol,
                           host='0.0.0.0',
                           port=6789))
    loop.run_until_complete(
        loop.create_server(protocol_factory=ConsoleProtocol,
                           host='0.0.0.0',
                           port=9876))
    try:
        loop.run_forever()
    except Exception as e:
        print(e)
    finally:
        loop.close()


if __name__ == '__main__':
    main()
This is not only my first experience writing a TCP server, but also my first experience working with parallelism. So it took me days to realize that my sleep not only would not work, but that I was locking the server while it "sleeps".
Any help is welcome.
time.sleep(0.2) is blocking and should not be used in async programming: it blocks the whole execution, so if your program is running with 100 clients, the last client will be delayed by 0.2*99 seconds, which is not what you want.
The right way is to let the program wait 0.2 s without blocking, so the other concurrent clients are not delayed; we can use a thread.
import asyncio
import time
import threading

CONN = set()


class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        CONN.add(self)

    def data_received(self, data):
        self.buffer = data
        # DO OTHER STUFF
        print(data)

    def connection_lost(self, exc=None):
        CONN.remove(self)


class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.loop = asyncio.get_event_loop()
        # Get first value just to illustrate
        self.client = next(iter(CONN))

    def delay_thread(self):
        # sleep in a worker thread so the event loop keeps serving the
        # other clients, then hand the delayed write back to the loop thread
        time.sleep(0.2)
        self.loop.call_soon_threadsafe(
            self.transport.write, self.client.buffer)

    def data_received(self, data):
        # Forward the message to the client
        self.client.transport.write(data)
        # wait a fraction of a second without blocking the loop,
        # then forward the response of the client
        thread = threading.Thread(target=self.delay_thread, args=())
        thread.daemon = True
        thread.start()
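As a side note (not part of the original answer): since ConsoleProtocol already runs on the event loop, the same idea can be expressed without threads, for example by waiting for the client's next message through an asyncio.Future. This is only a sketch; the waiter attribute and the request()/forward() helpers are hypothetical additions, not part of the question's code:

import asyncio

CONN = set()


class ServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.waiter = None                     # future for a pending "wait for reply"
        CONN.add(self)

    def data_received(self, data):
        self.buffer = data
        if self.waiter is not None and not self.waiter.done():
            self.waiter.set_result(data)       # wake up whoever is waiting

    def connection_lost(self, exc=None):
        CONN.discard(self)

    async def request(self, data, timeout=5.0):
        # send data to the client and wait for its next message
        self.waiter = asyncio.get_event_loop().create_future()
        self.transport.write(data)
        return await asyncio.wait_for(self.waiter, timeout)


class ConsoleProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.client = next(iter(CONN))

    def data_received(self, data):
        # schedule the round trip as a task so data_received returns at once
        asyncio.ensure_future(self.forward(data))

    async def forward(self, data):
        reply = await self.client.request(data)
        self.transport.write(reply)

Here request() sends the data and suspends until data_received resolves the future, so the console always gets the reply to its own message rather than whatever happened to be left in the buffer.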

Keras predict in a process freezes

I am trying to make a server which predicts (regression) given a certain input. To skip loading the model on every request (which would save about 1.8 seconds), I preload a shared Keras (with TensorFlow backend) model in a separate file, but when I try to predict anything from a thread the program just freezes (even though only one thread is accessing it during my test). I know that TensorFlow is not really made for this, but as it is only predicting there should be a workaround. I have tried using _make_predict_function but that did not work.
This is the main function:
import time
import socket
from multiprocessing import Process

keras_model = keras_model_for_threads()

def thread_function(conn, addr, alive):
    print('Connected by', addr)
    start = time.time()
    sent = conn.recv(1024)
    x_pred = preproc(sent)
    conn.sendall(keras_model.predict_single(x_pred))
    conn.close()

HOST = ''
PORT = xxxxx
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1000)
print('Ready for listening')
while alive.get():
    conn, addr = s.accept()
    Process(target=thread_function, args=(conn, addr, alive)).start()
with the keras_model:
from keras.models import load_model

class keras_model_for_threads():
    def __init__(self):
        self.model = load_model(model_path)
        self.model._make_predict_function()

    def predict_single(self, x_pred):
        return self.model.predict(x_pred)
Now if I run this normally, it executes and returns a prediction but through the Process with the thread function it freezes on the self.model.predict.
After some more searching I found an answer which works, namely making a manager to handle the prediction (the manager keeps the model in a single dedicated process, and the spawned workers only talk to it through a proxy instead of sharing TensorFlow state across the fork). This changes the original keras code to:
from multiprocessing.managers import BaseManager
from multiprocessing import Lock


class KerasModelForThreads():
    def __init__(self):
        self.lock = Lock()
        self.model = None

    def load_model(self):
        from keras.models import load_model
        self.model = load_model(model_path)

    def predict_single(self, x_pred):
        with self.lock:
            return (self.model.predict(x_pred) + self.const.offset)[0][0]


class KerasManager(BaseManager):
    pass


KerasManager.register('KerasModelForThreads', KerasModelForThreads)
And the main code to
from keras_for_threads import KerasManager

keras_manager = KerasManager()
keras_manager.start()
keras_model = keras_manager.KerasModelForThreads()
keras_model.load_model()

def thread_function(conn, addr, alive):
    print('Connected by', addr)
    start = time.time()
    sent = conn.recv(1024)
    x_pred = preproc(sent)
    conn.sendall(keras_model.predict_single(x_pred))
    conn.close()

import socket
HOST = ''
PORT = xxxxx
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1000)
print('Ready for listening')
while alive.get():
    conn, addr = s.accept()
    Process(target=thread_function, args=(conn, addr, alive)).start()
This is a stripped-down version (without the Flask stuff, just the Keras part) of the GitHub project I found here: https://github.com/Dref360/tuto_keras_web

Handling a lot of concurrent connections in Python 3 asyncio

I am trying to improve the performance of my application. It is a Python 3.6 asyncio.Protocol based TCP server (SSL wrapped) handling a lot of requests.
It works fine and the performance is acceptable when only one connection is active, but as soon as another connection is opened, the client part of the application slows down. This is really noticeable once there are 10-15 client connections.
Is there a way to properly handle requests in parallel or should I resort to running multiple server instances?
/edit Added code
main.py
if __name__ == '__main__':
    import package.server

    server = package.server.TCPServer()
    server.join()
package.server
import multiprocessing, asyncio, uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

from package.connection import Connection


class TCPServer(multiprocessing.Process):
    name = 'tcpserver'

    def __init__(self, discord_queue=None):
        multiprocessing.Process.__init__(self)
        self.daemon = True
        # some setup in here
        self.start()

    def run(self):
        loop = uvloop.new_event_loop()
        self.loop = loop
        # db setup, etc
        server = loop.create_server(Connection, HOST, PORT, ssl=SSL_CONTEXT)
        loop.run_until_complete(server)
        loop.run_forever()
package.connection
import asyncio, hashlib, os
from time import sleep, time as timestamp


class Connection(asyncio.Protocol):
    connections = {}

    def setup(self, peer):
        self.peer = peer
        self.ip, self.port = self.peer[0], self.peer[1]
        self.buffer = []

    @property
    def connection_id(self):
        if not hasattr(self, '_connection_id'):
            self._connection_id = hashlib.md5('{}{}{}'.format(self.ip, self.port, timestamp()).encode('utf-8')).hexdigest()
        return self._connection_id

    def connection_lost(self, exception):
        del Connection.connections[self.connection_id]

    def connection_made(self, transport):
        self.transport = transport
        self.setup(transport.get_extra_info('peername'))
        Connection.connections[self.connection_id] = self

    def data_received(self, data):
        # processing, average server side execution time is around 30ms
        sleep(0.030)
        self.transport.write(os.urandom(64))
The application runs on Debian 9.9 and is started via systemd
To "benchmark" I use this script:
import os, socket
from multiprocessing import Pool
from time import time as timestamp


def foobar(i):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('127.0.0.1', 60000))
    while True:
        ms = timestamp() * 1000
        s.send(os.urandom(128))
        s.recv(1024 * 2)
        print(i, timestamp() * 1000 - ms)


if __name__ == '__main__':
    instances = 4
    with Pool(instances) as p:
        print(p.map(foobar, range(0, instances)))
To answer my own question here: I went with a solution that spawns multiple instances listening on base_port + x and put an nginx TCP load balancer in front of them.
The individual TCPServer instances are still spawned as their own processes and communicate among themselves via a separate UDP connection, and with the main process via multiprocessing.Queue.
While this does not "fix" the problem, it provides a somewhat scalable solution for my very specific problem.
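For reference, the per-connection slowdown in the original code comes from sleep(0.030) (standing in for the 30 ms of processing) running on the event loop, so requests are handled strictly one after another. A minimal sketch of an alternative, assuming the processing can run in a worker thread, moves it off the loop with run_in_executor (process_request here is a hypothetical stand-in for the real work):

import asyncio, os
from time import sleep


def process_request(data):
    # stand-in for the real ~30 ms of server-side processing
    sleep(0.030)
    return os.urandom(64)


class Connection(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # run the blocking work in the default executor (a thread pool)
        # so the event loop stays free to serve the other connections
        loop = asyncio.get_event_loop()
        future = loop.run_in_executor(None, process_request, data)
        future.add_done_callback(
            lambda fut: self.transport.write(fut.result()))

This keeps a single server process responsive under concurrent connections, though CPU-bound work would still need a process pool instead of the default thread pool.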

Stop server from client's thread / Modify server's variable from client's thread

I would like to write an application that could stop the server based on the client's input. The server is multi-threaded and I do not understand how I can do this.
Basically, I described my problem here: Modify server's variable from client's thread (threading, python).
However, this is the Python solution, not the general solution I could implement in Java, C, C++, etc.
I need to close other clients, when one of them guesses the number, but the server should be still alive, ready for the new game.
Can I ask for some advice and explanations?
I tried this (I still do not know how to port it to C or Java), but it lets the clients keep sending numbers even after one of them has guessed it. It seems to me that kill_em_all does not do its job: it does not close all the connections and does not disconnect the other clients as it should. How can I improve this?
#!/usr/bin/env python
from random import randint
import socket, select
from time import gmtime, strftime
import threading
import sys


class Handler(threading.Thread):
    def __init__(self, connection, randomnumber, server):
        threading.Thread.__init__(self)
        self.connection = connection
        self.randomnumber = randomnumber
        self.server = server

    def run(self):
        while True:
            try:
                data = self.connection.recv(1024)
                if data:
                    print(data)
                    try:
                        num = int(data)
                        if self.server.guess(num):
                            print 'someone guessed!'
                            self.server.kill_em_all()
                            break
                        else:
                            msg = "Try again!"
                            self.connection.sendall(msg.encode())
                    except ValueError as e:
                        msg = "%s" % e
                        self.connection.sendall(msg.encode())
                else:
                    msg = "error"
                    self.connection.send(msg.encode())
            except socket.error:
                break
        self.connection.close()

    def send(self, msg):
        self.connection.sendall(msg)

    def close(self):
        self.connection.close()


class Server:
    randnum = randint(1, 100)

    def __init__(self, ip, port):
        self.ip = ip
        self.port = port
        self.address = (self.ip, self.port)
        self.server_socket = None

    def guess(self, no):
        if self.randnum == no:
            self.randnum = randint(1, 100)
            print("New number is ", self.randnum)
            result = True
        else:
            result = False
        return result

    def kill_em_all(self):
        for c in self.clients:
            c.send("BYE!")
            c.close()

    def run(self):
        try:
            self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.server_socket.bind((self.ip, self.port))
            self.server_socket.listen(10)
            self.clients = []
            print('Num is %s' % self.randnum)
            while True:
                connection, (ip, port) = self.server_socket.accept()
                c = Handler(connection, self.randnum, self)
                c.start()
                self.clients.append(c)
        except socket.error as e:
            if self.server_socket:
                self.server_socket.close()
            sys.exit(1)


if __name__ == '__main__':
    s = Server('127.0.0.1', 7777)
    s.run()
Client code:
import socket
import sys

port = 7777
s = None
try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    host = socket.gethostname()
    s.connect(('127.0.0.1', port))
except socket.error, (value, message):
    if s:
        s.close()
    print "Could not open socket: " + message
    sys.exit(1)

while True:
    data = raw_input('> ')
    s.sendall(data)
    data = s.recv(1024)
    if data:
        if data == "BYE!":
            break
        else:
            print "Server sent: %s " % data
s.close()
Log in. Using whatever protocol you have, send the server a message telling it to shut down. In the server, terminate your app when you get the shutdown message. That's it. It's not a problem with any OS I have used - any thread of a process can terminate that process.
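A rough sketch of that idea in Python (the SHUTDOWN command, the event, and the listener timeout are illustrative additions, not taken from the question's code): a client thread that receives the shutdown message sets an event, and the accept loop checks it between timed-out accept calls.

import socket
import threading


class Server(object):
    def __init__(self, ip, port):
        self.address = (ip, port)
        self.shutdown_event = threading.Event()

    def handle(self, connection):
        data = connection.recv(1024)
        if data.strip() == b"SHUTDOWN":
            # any client thread may ask the whole server to stop;
            # the accept loop below checks this event regularly
            self.shutdown_event.set()
        connection.close()

    def run(self):
        server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server_socket.bind(self.address)
        server_socket.listen(10)
        server_socket.settimeout(1.0)   # wake up periodically to check the event
        while not self.shutdown_event.is_set():
            try:
                connection, _ = server_socket.accept()
            except socket.timeout:
                continue
            threading.Thread(target=self.handle, args=(connection,)).start()
        server_socket.close()

The same pattern ports directly to Java or C: a handler sets a shared flag, and the accept loop, which wakes up regularly because of the timeout, checks the flag and exits or resets for a new game.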

Delay opening an asyncio connection

Some of my Django REST services have to connect to an asyncio server to get some information, so I'm working in a threaded environment.
While connecting, open_connection() takes an unreasonable 2 seconds (almost exactly, always just a bit more).
Client code:
import asyncio
import datetime


def call():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    @asyncio.coroutine
    def msg_to_mars():
        print("connecting", datetime.datetime.now())
        reader, writer = yield from asyncio.open_connection('localhost', 8888, loop=loop)
        print("connected", datetime.datetime.now())  # time reported here will be +2 seconds
        return None

    res = loop.run_until_complete(msg_to_mars())
    loop.close()
    return res


call()
Server code:
import asyncio


@asyncio.coroutine
def handle_connection(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    pass


loop = asyncio.get_event_loop()
asyncio.set_event_loop(loop)

# Each client connection will create a new protocol instance
coro = asyncio.start_server(handle_connection, '0.0.0.0', 8888, loop=loop)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('MARS Device server serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Both are basically copied from the asyncio documentation samples for stream-based communication, except for the additional assignment of the event loop for threading.
How can I make this delay go away?
Turns out, the problem was in Windows DNS resolution.
Changing URLs from my computer name to 127.0.0.1 immediately killed the delays.
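In other words, the fix on the client side is just to connect by IP address instead of by host name (the host name below is a placeholder; the snippet in the question already used 'localhost', but the real code apparently used the machine name):

# connecting by machine name goes through (slow) Windows name resolution:
# reader, writer = yield from asyncio.open_connection('MY-PC-NAME', 8888, loop=loop)

# connecting by IP address skips the lookup and removes the ~2 second delay:
reader, writer = yield from asyncio.open_connection('127.0.0.1', 8888, loop=loop)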
