socket and multiprocessing blocking - python-3.x

I am trying to write a program that uses sockets to talk to other machines in the cluster. The problem I am having is that I cannot seem to implement a server without blocking on accept(), even though I have tried to use both threading and multiprocessing.
server.py
import socket
import traceback
import sys, base64
from threading import Thread
import time

def server(host="", port=65098):
    soc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    soc.bind((host, port))
    soc.listen(5)
    clients = []
    time.sleep(1)
    while 1:
        try:
            clientsocket, clientaddr = soc.accept()
            client = Thread(target=handler, args=(clientsocket, clientaddr))
            client.start()
            clients.append(client)
        except KeyboardInterrupt:
            soc.close()
            for cl in clients:
                if not cl.is_alive():
                    cl.join()
                    clients.remove(cl)

def handler(clientsocket, clientaddr):
    while True:
        data = clientsocket.recv(1024).decode('utf8')
        if data.startswith('---'):
            if data[0:10] == '---test---':
                ping(clientsocket, data[10:])
            break
    clientsocket.close()

def ping(clientsocket, data):
    name = base64.b64decode(data)
    clientsocket.send(base64.b64encode(b'hi' + name))
Then I move on to test.py, where I try to exercise the non-blocking server with a test.
test.py
import multiprocessing
from lib import server
import pdb

thread1 = multiprocessing.Process(target=server.server())
pdb.set_trace()
thread1.daemon = True
thread1.start()
# pdb.set_trace()

import socket
soc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
pdb.set_trace()
soc.connect(('localhost', 65098))
message = b'This is our message. It is very long but will only be transmitted in chunks of 16 at a time'
soc.sendall(message)
When I run test.py, the program blocks. When I Ctrl+C, it seems the program never advances past the multiprocessing.Process(...) line in test.py, even though I have yet to call thread1.start(); as you can see, there is a pdb trace right after it, and it is never reached. I didn't think this should matter with threading, but the stack shows server.py sitting in soc.accept().
This is strange behavior to me. Any ideas?

For some reason, having it inside a module was not letting a second thread run it without blocking the first. I got around this by creating a function in test.py called tcpserv; all it does is call server.server(). I use tcpserv as the multiprocessing target instead of calling server.server() directly there (note the parentheses: target=server.server() runs the server immediately in the main process, while target=tcpserv just passes a callable for the child process to run). test.py now looks like this and drops to the debugger as I expect.
import multiprocessing
from lib import server
import pdb

def tcpserv():
    server.server()

thread1 = multiprocessing.Process(target=tcpserv)
thread1.start()
pdb.set_trace()
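For what it's worth, here is a minimal sketch of the same fix without the wrapper function (my own illustration, not part of the original answer): passing the function object itself, with no parentheses, so multiprocessing calls it in the child process rather than in the parent at Process-construction time.

import multiprocessing
from lib import server

# target receives the callable itself; server.server() only runs
# inside the child process once start() is called
thread1 = multiprocessing.Process(target=server.server)
thread1.daemon = True
thread1.start()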

Related

Shutdown RPyC server from client

I've created a RPyC server. Connecting works, and all my exposed methods work. Now I am looking to shut down the server from the client. Is this even possible? Security is not a concern, as I am not worried about a rogue connection shutting down the server.
It is started with the following (which blocks):
from rpyc import ThreadPoolServer
from service import MyService
t = ThreadPoolServer(MyService(), port=56565)
t.start()
Now I just need to shut it down. I haven't found any documentation on stopping the server.
You can add to your Service class the method:
import ctypes, os, platform, signal

def exposed_stop(self):
    pid = os.getpid()
    if platform.system() == 'Windows':
        PROCESS_TERMINATE = 1
        handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, pid)
        ctypes.windll.kernel32.TerminateProcess(handle, -1)
        ctypes.windll.kernel32.CloseHandle(handle)
    else:
        os.kill(pid, signal.SIGTERM)
This makes the service look up its own PID and send SIGTERM to itself. There may be a better way of doing this hiding in some dark corner of the API, but I've found no better method.
If you want to do clean-up before your thread terminates, you can set up exit traps:
import shutil, signal
import rpyc.utils.server

t = rpyc.utils.server.ThreadedServer(service, port=port, auto_register=True)

# Set up exit traps for graceful exit.
signal.signal(signal.SIGINT, lambda signum, frame: t.close())
signal.signal(signal.SIGTERM, lambda signum, frame: t.close())

t.start()  # blocks thread
# SIGTERM or SIGINT was received and t.close() was called
print('Closing service.')
t = None
shutil.rmtree(tempdir)
# etc.
In case anybody is interested, I have found another way of doing it. I'm just creating the server object at global scope and adding an exposed method to close it.
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    def exposed_stop(self):
        server.close()

    def exposed_echo(self, text):
        print(text)

server = ThreadedServer(MyService, port=18812)

if __name__ == "__main__":
    print("server start")
    server.start()
    print("Server closed")
On the client side, you will get an EOFError because the connection was closed remotely, so it's better to catch it:
import rpyc

c = rpyc.connect("localhost", 18812)
c.root.echo("hello")
try:
    c.root.stop()
except EOFError:
    print("Server was closed")
EDIT: I needed to be able to dynamically specify the server, so I came up with this. (Is it better? I don't know, but it works well. Be careful, though: if you have multiple servers running this service, things could get weird.)
import rpyc
from rpyc.utils.server import ThreadedServer

class MyService(rpyc.Service):
    _server: ThreadedServer

    @staticmethod
    def set_server(inst: ThreadedServer):
        MyService._server = inst

    def exposed_stop(self):
        if self._server:
            self._server.close()

    def exposed_echo(self, text):
        print(text)

if __name__ == "__main__":
    server = ThreadedServer(MyService, port=18812)
    MyService.set_server(server)
    print("server start")
    server.start()
    print("Server closed")
PS: It is probably possible to avoid the EOFError on the client by using asynchronous operations.
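For instance, a minimal sketch using rpyc's async_ wrapper (assuming rpyc 4.x, where rpyc.async_ wraps a remote callable so it returns an AsyncResult instead of waiting for a reply); since the client no longer blocks on the stop call, the remote teardown does not surface as an EOFError on the call itself:

import rpyc

c = rpyc.connect("localhost", 18812)
c.root.echo("hello")

# async_ returns a proxy that dispatches the call and immediately
# hands back an AsyncResult instead of waiting for the reply
stop = rpyc.async_(c.root.stop)
stop()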

Why does calling datetime hang a thread?

I am attempting to make use of concurrent.futures.ThreadPoolExecutor for the first time. One of my threads (level_monitor) consistently hangs on a call to datetime.now().strftime(), and also on another hardware-specific function. For now I am assuming it is the same fundamental problem in both cases.
I've created a reproducible minimum example.
from concurrent.futures import ThreadPoolExecutor
import socket
from time import sleep

status = 'TRY AGAIN\n'

def get_level():
    print('starting get_level()')
    while True:
        sleep(2)
        now = datetime.now().strftime('%d-%b-%Y %H:%M:%S')
        print('get_level woken...')

# report status when requested
def serve_level():
    print('starting serve_level()')
    si = socket.socket()
    port = 12345
    si.bind(('127.0.0.1', port))
    si.listen()
    print('socket is listening')
    while True:
        ci, addr = si.accept()
        print('accepted client connection from ', addr)
        with ci:
            req = ci.recv(1024)
            print(req)
            str = status.encode('utf-8')
            ci.send(str)
        ci.close()

if __name__ == '__main__':
    nthreads = 5
    with ThreadPoolExecutor(nthreads) as executor:
        level_monitor = executor.submit(get_level)
        server = executor.submit(serve_level)
When I run it, I see the serve_level thread works fine; I can talk to it using telnet. I can see the level_monitor thread starts too, but then it hangs before print('get_level woken...'). If I comment out the call to datetime, the thread behaves as expected.
I am sure that when I find out why, I will have learned a lot.
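One general point worth knowing when debugging this (an illustrative sketch of ThreadPoolExecutor behavior, not taken from the original thread): an exception raised inside a submitted callable is not printed anywhere; it is stored on the returned Future and only re-raised when .result() is called. A task that dies immediately, for example with a NameError from a missing import, therefore looks exactly like a hang:

from concurrent.futures import ThreadPoolExecutor

def task():
    # any exception raised here is captured by the Future rather
    # than printed, so the worker just silently stops
    raise NameError("name 'datetime' is not defined")

with ThreadPoolExecutor(2) as executor:
    future = executor.submit(task)
    future.result()  # the stored NameError is re-raised here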

Handling a lot of concurrent connections in Python 3 asyncio

I am trying to improve the performance of my application. It is a Python 3.6 asyncio.Protocol-based TCP server (SSL-wrapped) handling a lot of requests.
It works fine and the performance is acceptable when only one connection is active, but as soon as another connection is opened, the client side of the application slows down. This is really noticeable once there are 10-15 client connections.
Is there a way to properly handle requests in parallel, or should I resort to running multiple server instances?
Edit: added code.
main.py
if __name__ == '__main__':
    import package.server

    server = package.server.TCPServer()
    server.join()
package.server
import multiprocessing, asyncio, uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

from package.connection import Connection

class TCPServer(multiprocessing.Process):
    name = 'tcpserver'

    def __init__(self, discord_queue=None):
        multiprocessing.Process.__init__(self)
        self.daemon = True
        # some setup in here
        self.start()

    def run(self):
        loop = uvloop.new_event_loop()
        self.loop = loop
        # db setup, etc
        server = loop.create_server(Connection, HOST, PORT, ssl=SSL_CONTEXT)
        loop.run_until_complete(server)
        loop.run_forever()
package.connection
import asyncio, hashlib, os
from time import sleep, time as timestamp

class Connection(asyncio.Protocol):
    connections = {}

    def setup(self, peer):
        self.peer = peer
        self.ip, self.port = self.peer[0], self.peer[1]
        self.buffer = []

    @property
    def connection_id(self):
        if not hasattr(self, '_connection_id'):
            self._connection_id = hashlib.md5('{}{}{}'.format(self.ip, self.port, timestamp()).encode('utf-8')).hexdigest()
        return self._connection_id

    def connection_lost(self, exception):
        del Connection.connections[self.connection_id]

    def connection_made(self, transport):
        self.transport = transport
        self.setup(transport.get_extra_info('peername'))
        Connection.connections[self.connection_id] = self

    def data_received(self, data):
        # processing; average server-side execution time is around 30 ms
        sleep(0.030)
        self.transport.write(os.urandom(64))
The application runs on Debian 9.9 and is started via systemd.
To "benchmark" it I use this script:
import os, socket
from multiprocessing import Pool
from time import time as timestamp

def foobar(i):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('127.0.0.1', 60000))
    while True:
        ms = timestamp() * 1000
        s.send(os.urandom(128))
        s.recv(1024 * 2)
        print(i, timestamp() * 1000 - ms)

if __name__ == '__main__':
    instances = 4
    with Pool(instances) as p:
        print(p.map(foobar, range(0, instances)))
To answer my own question here: I went with a solution that spawns multiple instances listening on base_port + x and puts an nginx TCP load balancer in front of them.
The individual TCPServer instances are still spawned as their own processes; they communicate among themselves via a separate UDP connection and with the main process via a multiprocessing.Queue.
While this does not "fix" the problem, it provides a somewhat scalable solution for my very specific problem.
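As an aside, a different technique worth sketching (my own illustration, assuming the ~30 ms of processing in data_received is what serializes the connections): because protocol callbacks run on the event loop, any blocking work inside them stalls every other connection. Offloading the blocking call to a thread pool with loop.run_in_executor keeps the loop free. blocking_work below is a hypothetical stand-in for the real processing:

import asyncio, os
from time import sleep

def blocking_work(data):
    # hypothetical stand-in for the real ~30 ms of processing
    sleep(0.030)
    return os.urandom(64)

class Connection(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # schedule the blocking work on the default thread pool so the
        # event loop can keep servicing other connections meanwhile
        asyncio.ensure_future(self.process(data))

    async def process(self, data):
        loop = asyncio.get_event_loop()
        result = await loop.run_in_executor(None, blocking_work, data)
        self.transport.write(result)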

Real Time internet connection checker in python With GUI

So basically what I am trying to do is check whether the computer has internet access for the whole lifetime of the program.
It is in a GUI made with tkinter. I tried to create a new thread and run the function in a while loop (while 1:), but it says:
Traceback (most recent call last):
.
.
.
RuntimeError: main thread is not in main loop
This is the program:
import threading
import socket
import time

def is_connected():
    try:
        socket.create_connection(("www.google.com", 80))
        print("Online", end="\n")
    except OSError:
        print("offline", end="\n")

tt3 = threading.Event()
while 1:
    t3 = threading.Thread(target=is_connected)
    t3.start()
    time.sleep(1)
This is the program with the GUI:
import threading
import socket
import time
import tkinter

top = tkinter.Tk()
top.title("")
l = tkinter.Label(top, text='')
l.pack()

def is_connected():
    try:
        socket.create_connection(("www.google.com", 80))
        print("Online", end="\n")
        l.config(text="Online")
    except OSError:
        l.config(text="offline")
        print("offline", end="\n")

tt3 = threading.Event()
while 1:
    t3 = threading.Thread(target=is_connected)
    t3.start()
    time.sleep(1)

top.configure(background="#006666")
top.update()
top.mainloop()
Any suggestion or help is most welcome!
(Someone on reddit suggested I use a queue, about which I have no idea.)
First, the while loop blocks the tkinter mainloop from processing events. Second, you are repeatedly creating a new thread on every iteration.
Better to use .after():
import socket
import tkinter

top = tkinter.Tk()
top.title("Network Checker")
top.configure(background="#006666")
l = tkinter.Label(top, text='Checking ...')
l.pack()

def is_connected():
    try:
        socket.create_connection(("www.google.com", 80))  # better to set a timeout as well
        state = "Online"
    except OSError:
        state = "Offline"
    l.config(text=state)
    print(state)
    top.after(1000, is_connected)  # do the check again one second later

is_connected()  # start the checking
top.mainloop()
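For completeness, here is a sketch of the queue-based approach mentioned in the question (my own illustration, not part of the original answer). socket.create_connection blocks, so running the check inside .after() can still freeze the GUI on a slow network; a worker thread can do the blocking check and hand results to the Tk thread through a queue.Queue, which the main loop polls with .after():

import queue
import socket
import threading
import time
import tkinter

top = tkinter.Tk()
top.title("Network Checker")
l = tkinter.Label(top, text='Checking ...')
l.pack()

results = queue.Queue()

def worker():
    # the blocking network check runs off the main thread
    while True:
        try:
            socket.create_connection(("www.google.com", 80), timeout=3).close()
            results.put("Online")
        except OSError:
            results.put("Offline")
        time.sleep(1)

def poll():
    # only the Tk thread touches widgets; it just drains the queue
    try:
        l.config(text=results.get_nowait())
    except queue.Empty:
        pass
    top.after(200, poll)

threading.Thread(target=worker, daemon=True).start()
poll()
top.mainloop()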

python program not exiting on exception using aiozmq

I am using aiozmq for a simple RPC program.
I have created a client and server.
When the server is running, the client runs just fine.
I have a timeout set in the client to raise an exception in the event of no server being reachable.
The client code is below. When I run it without the server running, I get the expected exception, but the script doesn't actually return to the terminal; it still seems to be executing.
Could someone firstly explain how this is happening, and secondly how to fix it?
import asyncio
from asyncio import TimeoutError
from aiozmq import rpc
import sys
import os
import signal
import threading
import traceback

# signal.signal(signal.SIGINT, signal.SIG_DFL)

async def client():
    print("waiting for connection..")
    client = await rpc.connect_rpc(
        connect='tcp://127.0.0.1:5555',
        timeout=1
    )
    print("got client")
    for i in range(100):
        print("{}: calling simple_add".format(i))
        ret = await client.call.simple_add(1, 2)
        assert 3 == ret
        print("calling slow_add")
        ret = await client.call.slow_add(3, 5)
        assert 8 == ret
    client.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    future = asyncio.ensure_future(client())
    try:
        loop.run_until_complete(future)
    except TimeoutError:
        print("Timeout occurred...")
        future.cancel()
        loop.stop()
        # loop.run_forever()
        main_thread = threading.currentThread()
        for t in threading.enumerate():
            if t is main_thread:
                print("skipping main_thread...")
                continue
            print("Thread is alive? {}".format({True: 'yes', False: 'no'}[t.is_alive()]))
            print("Waiting for thread...{}".format(t.getName()))
            t.join()
        print(sys._current_frames())
        traceback.print_stack()
        for thread_id, frame in sys._current_frames().items():
            name = thread_id
            for thread in threading.enumerate():
                if thread.ident == thread_id:
                    name = thread.name
            traceback.print_stack(frame)
        print("exiting..")
        sys.exit(1)
        # os._exit(1)
    print("eh?")
The result of running the above is below. Note again that the program was still running; I had to Ctrl+C to exit.
> python client.py
waiting for connection..
got client
0: calling simple_add
Timeout occurred...
skipping main_thread...
{24804: <frame object at 0x00000000027C3848>}
File "client.py", line 54, in <module>
traceback.print_stack()
File "client.py", line 60, in <module>
traceback.print_stack(frame)
exiting..
^C
I also tried sys.exit(), which also didn't work:
try:
    loop.run_until_complete(future)
except:
    print("exiting..")
    sys.exit(1)
I can get the program to die, but only if I use os._exit(1); sys.exit() doesn't seem to cut it. It doesn't appear that there are any other threads preventing the interpreter from dying (unless I'm mistaken?). What else could be stopping the program from exiting?
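For reference, a general illustration of the difference (my own sketch, not specific to aiozmq): sys.exit() only raises SystemExit in the calling thread, and the interpreter still waits for all non-daemon threads and runs cleanup handlers before the process ends, whereas os._exit() terminates the process immediately with no cleanup. A single lingering non-daemon thread, e.g. one started internally by a library, is enough to make sys.exit() appear to hang:

import sys, threading, time

def worker():
    time.sleep(60)  # stands in for a thread that never finishes

# non-daemon thread: after sys.exit() raises SystemExit in the main
# thread, the interpreter still waits for this thread to complete
threading.Thread(target=worker).start()

sys.exit(1)   # the process lingers for ~60 seconds
# os._exit(1) would kill the process immediately, skipping all cleanup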
